# Why: #3791 in Alexa global
'http://www.gsm.ir/',
# Why: #3792 in Alexa global
'http://dek-d.com/',
# Why: #3793 in Alexa global
'http://www.giantbomb.com/',
# Why: #3794 in Alexa global
'http://www.tala.ir/',
# Why: #3795 in Alexa global
'http://www.extremetracking.com/',
# Why: #3796 in Alexa global
'http://www.homevv.com/',
# Why: #3797 in Alexa global
'http://www.truthaboutabs.com/',
# Why: #3798 in Alexa global
'http://www.psychologytoday.com/',
# Why: #3800 in Alexa global
'http://www.vod.pl/',
# Why: #3801 in Alexa global
'http://www.macromill.com/',
# Why: #3802 in Alexa global
'http://www.pagseguro.uol.com.br/',
# Why: #3804 in Alexa global
'http://www.amd.com/',
# Why: #3805 in Alexa global
'http://www.livescience.com/',
# Why: #3806 in Alexa global
'http://dedecms.com/',
# Why: #3807 in Alexa global
'http://www.jin115.com/',
# Why: #3808 in Alexa global
'http://www.ampxchange.com/',
# Why: #3809 in Alexa global
'http://www.profitcentr.com/',
# Why: #3810 in Alexa global
'http://www.webmotors.com.br/',
# Why: #3811 in Alexa global
'http://www.lan.com/',
# Why: #3812 in Alexa global
'http://www.fileice.net/',
# Why: #3813 in Alexa global
'http://www.ingdirect.es/',
# Why: #3814 in Alexa global
'http://www.amtrak.com/',
# Why: #3815 in Alexa global
'http://www.emag.ro/',
# Why: #3816 in Alexa global
'http://www.progressive.com/',
# Why: #3817 in Alexa global
'http://www.balatarin.com/',
# Why: #3818 in Alexa global
'http://www.immonet.de/',
# Why: #3819 in Alexa global
'http://www.e-travel.com/',
# Why: #3820 in Alexa global
'http://www.studymode.com/',
# Why: #3821 in Alexa global
'http://www.go2000.com/',
# Why: #3822 in Alexa global
'http://www.shopbop.com/',
# Why: #3823 in Alexa global
'http://www.filesfetcher.com/',
# Why: #3824 in Alexa global
'http://www.euroresidentes.com/',
# Why: #3825 in Alexa global
'http://www.movistar.es/',
# Why: #3826 in Alexa global
'http://lefeng.com/',
# Why: #3827 in Alexa global
'http://www.google.hn/',
# Why: #3828 in Alexa global
'http://www.homestead.com/',
# Why: #3829 in Alexa global
'http://www.filesonar.com/',
# Why: #3830 in Alexa global
'http://www.hsbccreditcard.com/',
# Why: #3831 in Alexa global
'http://www.google.com.np/',
# Why: #3832 in Alexa global
'http://www.parperfeito.com.br/',
# Why: #3833 in Alexa global
'http://www.sciencedaily.com/',
# Why: #3834 in Alexa global
'http://www.realgfporn.com/',
# Why: #3835 in Alexa global
'http://www.wonderhowto.com/',
# Why: #3836 in Alexa global
'http://www.rakuten-card.co.jp/',
# Why: #3837 in Alexa global
'http://www.coolrom.com/',
# Why: #3838 in Alexa global
'http://www.wikibooks.org/',
# Why: #3839 in Alexa global
'http://www.archdaily.com/',
# Why: #3840 in Alexa global
'http://www.gigazine.net/',
# Why: #3841 in Alexa global
'http://www.totaljerkface.com/',
# Why: #3842 in Alexa global
'http://www.bezaat.com/',
# Why: #3843 in Alexa global
'http://www.eurosport.com/',
# Why: #3844 in Alexa global
'http://www.fontspace.com/',
# Why: #3845 in Alexa global
'http://www.tirage24.com/',
# Why: #3846 in Alexa global
'http://www.bancomer.com.mx/',
# Why: #3847 in Alexa global
'http://www.nasdaq.com/',
# Why: #3848 in Alexa global
'http://www.bravoteens.com/',
# Why: #3849 in Alexa global
'http://www.bdjobs.com/',
# Why: #3850 in Alexa global
'http://www.zimbra.free.fr/',
# Why: #3851 in Alexa global
'http://www.arsenal.com/',
# Why: #3852 in Alexa global
'http://www.rabota.ru/',
# Why: #3853 in Alexa global
'http://www.lovefilm.com/',
# Why: #3854 in Alexa global
'http://www.artemisweb.jp/',
# Why: #3855 in Alexa global
'http://www.tsetmc.com/',
# Why: #3856 in Alexa global
'http://www.movshare.net/',
# Why: #3857 in Alexa global
'http://www.debonairblog.com/',
# Why: #3858 in Alexa global
'http://www.zmovie.co/',
# Why: #3859 in Alexa global
'http://www.peoplefinders.com/',
# Why: #3860 in Alexa global
'http://www.mercadolibre.com/',
# Why: #3861 in Alexa global
'http://www.connectlondoner.com/',
# Why: #3862 in Alexa global
'http://www.forbes.ru/',
# Why: #3863 in Alexa global
'http://www.gagnezauxoptions.com/',
# Why: #3864 in Alexa global
'http://www.taikang.com/',
# Why: #3865 in Alexa global
'http://www.mywapblog.com/',
# Why: #3866 in Alexa global
'http://www.citysearch.com/',
# Why: #3867 in Alexa global
'http://www.novafinanza.com/',
# Why: #3868 in Alexa global
'http://www.gruposantander.es/',
# Why: #3869 in Alexa global
'http://www.relianceada.com/',
# Why: #3870 in Alexa global
'http://www.rankingsandreviews.com/',
# Why: #3871 in Alexa global
'http://www.p-world.co.jp/',
# Why: #3872 in Alexa global
'http://hjenglish.com/',
# Why: #3873 in Alexa global
'http://www.state.nj.us/',
# Why: #3874 in Alexa global
'http://www.comdirect.de/',
# Why: #3875 in Alexa global
'http://www.claro.com.br/',
# Why: #3876 in Alexa global
'http://www.alluc.to/',
# Why: #3877 in Alexa global
'http://www.godlikeproductions.com/',
# Why: #3878 in Alexa global
'http://www.lowyat.net/',
# Why: #3879 in Alexa global
'http://www.dawn.com/',
# Why: #3880 in Alexa global
'http://www.18xgirls.com/',
# Why: #3881 in Alexa global
'http://www.origo.hu/',
# Why: #3882 in Alexa global
'http://www.loopnet.com/',
# Why: #3883 in Alexa global
'http://www.payu.in/',
# Why: #3884 in Alexa global
'http://www.digitalmedia-comunicacion.com/',
# Why: #3885 in Alexa global
'http://www.newsvine.com/',
# Why: #3886 in Alexa global
'http://www.petfinder.com/',
# Why: #3887 in Alexa global
'http://www.kuaibo.com/',
# Why: #3888 in Alexa global
'http://www.soft32.com/',
# Why: #3889 in Alexa global
'http://www.yellowpages.ca/',
# Why: #3890 in Alexa global
'http://www.1fichier.com/',
# Why: #3891 in Alexa global
'http://www.egyup.com/',
# Why: #3892 in Alexa global
'http://www.iskullgames.com/',
# Why: #3893 in Alexa global
'http://www.androidforums.com/',
# Why: #3894 in Alexa global
'http://www.blogspot.cz/',
# Why: #3895 in Alexa global
'http://www.umich.edu/',
# Why: #3896 in Alexa global
'http://www.madsextube.com/',
# Why: #3897 in Alexa global
'http://www.bigcinema.tv/',
# Why: #3898 in Alexa global
'http://www.donedeal.ie/',
# Why: #3899 in Alexa global
'http://www.winporn.com/',
# Why: #3900 in Alexa global
'http://www.cosmopolitan.com/',
# Why: #3901 in Alexa global
'http://www.reg.ru/',
# Why: #3902 in Alexa global
'http://www.localmoxie.com/',
# Why: #3903 in Alexa global
'http://www.kootation.com/',
# Why: #3904 in Alexa global
'http://www.gidonline.ru/',
# Why: #3905 in Alexa global
'http://www.clipconverter.cc/',
# Why: #3906 in Alexa global
'http://www.gioco.it/',
# Why: #3907 in Alexa global
'http://www.ravelry.com/',
# Why: #3908 in Alexa global
'http://www.gettyimages.com/',
# Why: #3909 in Alexa global
'http://www.nanapi.jp/',
# Why: #3910 in Alexa global
'http://www.medicalnewsreporter.com/',
# Why: #3911 in Alexa global
'http://www.shop411.com/',
# Why: #3912 in Alexa global
'http://www.aif.ru/',
# Why: #3913 in Alexa global
'http://www.journaldesfemmes.com/',
# Why: #3914 in Alexa global
'http://www.blogcu.com/',
# Why: #3915 in Alexa global
'http://www.vanguard.com/',
# Why: #3916 in Alexa global
'http://www.freemp3go.com/',
# Why: #3917 in Alexa global
'http://www.google.ci/',
# Why: #3918 in Alexa global
'http://www.findicons.com/',
# Why: #3919 in Alexa global
'http://www.tineye.com/',
# Why: #3920 in Alexa global
'http://www.webdesignerdepot.com/',
# Why: #3921 in Alexa global
'http://www.nomorerack.com/',
# Why: #3922 in Alexa global
'http://www.iqoo.me/',
# Why: #3923 in Alexa global
'http://www.amarujala.com/',
# Why: #3924 in Alexa global
'http://pengfu.com/',
# Why: #3925 in Alexa global
'http://www.leadpages.net/',
# Why: #3926 in Alexa global
'http://www.zalukaj.tv/',
# Why: #3927 in Alexa global
'http://www.avon.com/',
# Why: #3928 in Alexa global
'http://www.casasbahia.com.br/',
# Why: #3929 in Alexa global
'http://www.juegosdechicas.com/',
# Why: #3930 in Alexa global
'http://www.tvrain.ru/',
# Why: #3931 in Alexa global
'http://www.askmefast.com/',
# Why: #3932 in Alexa global
'http://www.stockcharts.com/',
# Why: #3934 in Alexa global
'http://www.footlocker.com/',
# Why: #3935 in Alexa global
'http://www.allanalpass.com/',
# Why: #3936 in Alexa global
'http://www.theoatmeal.com/',
# Why: #3937 in Alexa global
'http://www.storify.com/',
# Why: #3938 in Alexa global
'http://www.santander.com.br/',
# Why: #3939 in Alexa global
'http://www.laughnfiddle.com/',
# Why: #3940 in Alexa global
'http://www.lomadee.com/',
# Why: #3941 in Alexa global
'http://aftenposten.no/',
# Why: #3942 in Alexa global
'http://www.lamoda.ru/',
# Why: #3943 in Alexa global
'http://www.tasteofhome.com/',
# Why: #3944 in Alexa global
'http://www.news247.gr/',
# Why: #3946 in Alexa global
'http://www.sherdog.com/',
# Why: #3947 in Alexa global
'http://www.milb.com/',
# Why: #3948 in Alexa global
'http://www.3djuegos.com/',
# Why: #3949 in Alexa global
'http://www.dreammovies.com/',
# Why: #3950 in Alexa global
'http://www.commonfloor.com/',
# Why: #3951 in Alexa global
'http://www.tharunee.lk/',
# Why: #3952 in Alexa global
'http://www.chatrandom.com/',
# Why: #3953 in Alexa global
'http://xs8.cn/',
# Why: #3955 in Alexa global
'http://www.rechargeitnow.com/',
# Why: #3956 in Alexa global
'http://am15.net/',
# Why: #3957 in Alexa global
'http://www.sexad.net/',
# Why: #3958 in Alexa global
'http://www.herokuapp.com/',
# Why: #3959 in Alexa global
'http://www.apontador.com.br/',
# Why: #3960 in Alexa global
'http://www.rfi.fr/',
# Why: #3961 in Alexa global
'http://www.woozworld.com/',
# Why: #3962 in Alexa global
'http://www.hitta.se/',
# Why: #3963 in Alexa global
'http://www.comedycentral.com/',
# Why: #3964 in Alexa global
'http://www.fbsbx.com/',
# Why: #3965 in Alexa global
'http://www.aftabnews.ir/',
# Why: #3966 in Alexa global
'http://www.stepstone.de/',
# Why: #3967 in Alexa global
'http://www.filmon.com/',
# Why: #3969 in Alexa global
'http://www.smbc.co.jp/',
# Why: #3970 in Alexa global
'http://www.ameritrade.com/',
# Why: #3971 in Alexa global
'http://www.ecitic.com/',
# Why: #3972 in Alexa global
'http://www.bola.net/',
# Why: #3973 in Alexa global
'http://www.nexon.co.jp/',
# Why: #3974 in Alexa global
'http://www.hellowork.go.jp/',
# Why: #3975 in Alexa global
'http://www.hq-sex-tube.com/',
# Why: #3976 in Alexa global
'http://www.gsp.ro/',
# Why: #3977 in Alexa global
'http://www.groupon.co.uk/',
# Why: #3978 in Alexa global
'http://www.20min.ch/',
# Why: #3979 in Alexa global
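The list above pairs each seed URL with its Alexa global rank via a `# Why:` comment on the preceding line. As an illustrative sketch (the helper name `parse_seed_list` is hypothetical, not part of the original code), such a list can be parsed back into `(rank, url)` pairs:

```python
import re

def parse_seed_list(lines):
    """Pair each '# Why: #NNNN in Alexa global' comment with the
    quoted URL on the following line; return (rank, url) tuples.
    Illustrative helper only, not part of the original list."""
    pairs = []
    rank = None
    for line in lines:
        line = line.strip()
        m = re.match(r"# Why: #(\d+) in Alexa global", line)
        if m:
            rank = int(m.group(1))
        elif line.startswith("'") and rank is not None:
            pairs.append((rank, line.strip("',")))
            rank = None
    return pairs
```

For example, `parse_seed_list(["# Why: #3791 in Alexa global", "'http://www.gsm.ir/',"])` yields `[(3791, 'http://www.gsm.ir/')]`.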
struct[0].Gy_ini[4,2] = V_B3*(b_B2_B3*sin(theta_B2 - theta_B3) - g_B2_B3*cos(theta_B2 - theta_B3))
struct[0].Gy_ini[4,3] = V_B2*V_B3*(b_B2_B3*cos(theta_B2 - theta_B3) + g_B2_B3*sin(theta_B2 - theta_B3))
struct[0].Gy_ini[4,4] = V_B2*(b_B2_B3*sin(theta_B2 - theta_B3) - g_B2_B3*cos(theta_B2 - theta_B3)) + 2*V_B3*g_B2_B3
struct[0].Gy_ini[4,5] = V_B2*V_B3*(-b_B2_B3*cos(theta_B2 - theta_B3) - g_B2_B3*sin(theta_B2 - theta_B3))
struct[0].Gy_ini[4,10] = -S_n_B3/S_base
struct[0].Gy_ini[5,2] = V_B3*(b_B2_B3*cos(theta_B2 - theta_B3) + g_B2_B3*sin(theta_B2 - theta_B3))
struct[0].Gy_ini[5,3] = V_B2*V_B3*(-b_B2_B3*sin(theta_B2 - theta_B3) + g_B2_B3*cos(theta_B2 - theta_B3))
struct[0].Gy_ini[5,4] = V_B2*(b_B2_B3*cos(theta_B2 - theta_B3) + g_B2_B3*sin(theta_B2 - theta_B3)) + 2*V_B3*(-b_B2_B3 - bs_B2_B3/2)
struct[0].Gy_ini[5,5] = V_B2*V_B3*(b_B2_B3*sin(theta_B2 - theta_B3) - g_B2_B3*cos(theta_B2 - theta_B3))
struct[0].Gy_ini[5,11] = -S_n_B3/S_base
struct[0].Gy_ini[6,6] = -1
struct[0].Gy_ini[6,10] = -K_p_B3
struct[0].Gy_ini[6,12] = K_p_B3
struct[0].Gy_ini[7,7] = -1
struct[0].Gy_ini[7,11] = -K_q_B3
struct[0].Gy_ini[8,4] = -sin(delta_B3 - theta_B3)
struct[0].Gy_ini[8,5] = V_B3*cos(delta_B3 - theta_B3)
struct[0].Gy_ini[8,8] = -R_v_B3
struct[0].Gy_ini[8,9] = X_v_B3
struct[0].Gy_ini[9,4] = -cos(delta_B3 - theta_B3)
struct[0].Gy_ini[9,5] = -V_B3*sin(delta_B3 - theta_B3)
struct[0].Gy_ini[9,7] = 1
struct[0].Gy_ini[9,8] = -X_v_B3
struct[0].Gy_ini[9,9] = -R_v_B3
struct[0].Gy_ini[10,4] = i_d_B3*sin(delta_B3 - theta_B3) + i_q_B3*cos(delta_B3 - theta_B3)
struct[0].Gy_ini[10,5] = -V_B3*i_d_B3*cos(delta_B3 - theta_B3) + V_B3*i_q_B3*sin(delta_B3 - theta_B3)
struct[0].Gy_ini[10,8] = V_B3*sin(delta_B3 - theta_B3)
struct[0].Gy_ini[10,9] = V_B3*cos(delta_B3 - theta_B3)
struct[0].Gy_ini[10,10] = -1
struct[0].Gy_ini[11,4] = i_d_B3*cos(delta_B3 - theta_B3) - i_q_B3*sin(delta_B3 - theta_B3)
struct[0].Gy_ini[11,5] = V_B3*i_d_B3*sin(delta_B3 - theta_B3) + V_B3*i_q_B3*cos(delta_B3 - theta_B3)
struct[0].Gy_ini[11,8] = V_B3*cos(delta_B3 - theta_B3)
struct[0].Gy_ini[11,9] = -V_B3*sin(delta_B3 - theta_B3)
struct[0].Gy_ini[11,11] = -1
struct[0].Gy_ini[12,10] = 1
struct[0].Gy_ini[12,12] = -1
struct[0].Gy_ini[12,13] = -1
struct[0].Gy_ini[12,14] = 1
struct[0].Gy_ini[13,4] = i_d_B3*sin(delta_B3 - theta_B3) + i_q_B3*cos(delta_B3 - theta_B3)
struct[0].Gy_ini[13,5] = -V_B3*i_d_B3*cos(delta_B3 - theta_B3) + V_B3*i_q_B3*sin(delta_B3 - theta_B3)
struct[0].Gy_ini[13,8] = 2*R_s_B3*i_d_B3 + V_B3*sin(delta_B3 - theta_B3)
struct[0].Gy_ini[13,9] = 2*R_s_B3*i_q_B3 + V_B3*cos(delta_B3 - theta_B3)
struct[0].Gy_ini[13,13] = -1
struct[0].Gy_ini[14,14] = -1
struct[0].Gy_ini[14,15] = 2*k_u_B3*v_u_B3/V_u_max_B3**2
struct[0].Gy_ini[14,16] = -(-v_u_B3**2 + v_u_ref_B3**2)/V_u_max_B3**2
struct[0].Gy_ini[15,13] = -R_uc_B3*S_n_B3/(v_u_B3 + 0.1)
struct[0].Gy_ini[15,15] = -R_uc_B3*S_n_B3*(p_gou_B3 - p_t_B3)/(v_u_B3 + 0.1)**2 - 1
struct[0].Gy_ini[15,18] = R_uc_B3*S_n_B3/(v_u_B3 + 0.1)
struct[0].Gy_ini[16,15] = Piecewise(np.array([(0, V_u_min_B3 > v_u_B3), ((-K_u_0_B3 + K_u_max_B3)/(-V_u_lt_B3 + V_u_min_B3), V_u_lt_B3 > v_u_B3), ((-K_u_0_B3 + K_u_max_B3)/(-V_u_ht_B3 + V_u_max_B3), V_u_ht_B3 < v_u_B3), (0, True)]))
struct[0].Gy_ini[16,16] = -1
struct[0].Gy_ini[17,17] = -1
struct[0].Gy_ini[18,17] = p_gin_B3
struct[0].Gy_ini[18,18] = -1
struct[0].Gy_ini[19,6] = -Piecewise(np.array([(1/Droop_B3, (omega_B3 > 0.5*DB_B3 + omega_ref_B3) | (omega_B3 < -0.5*DB_B3 + omega_ref_B3)), (0, True)]))
struct[0].Gy_ini[19,19] = -1
struct[0].Gy_ini[20,20] = -1
struct[0].Gy_ini[21,0] = -sin(delta_B1 - theta_B1)
struct[0].Gy_ini[21,1] = V_B1*cos(delta_B1 - theta_B1)
struct[0].Gy_ini[21,21] = -R_v_B1
struct[0].Gy_ini[21,22] = -X_v_B1
struct[0].Gy_ini[22,0] = -cos(delta_B1 - theta_B1)
struct[0].Gy_ini[22,1] = -V_B1*sin(delta_B1 - theta_B1)
struct[0].Gy_ini[22,21] = X_v_B1
struct[0].Gy_ini[22,22] = -R_v_B1
struct[0].Gy_ini[23,0] = i_d_B1*sin(delta_B1 - theta_B1) + i_q_B1*cos(delta_B1 - theta_B1)
struct[0].Gy_ini[23,1] = -V_B1*i_d_B1*cos(delta_B1 - theta_B1) + V_B1*i_q_B1*sin(delta_B1 - theta_B1)
struct[0].Gy_ini[23,21] = V_B1*sin(delta_B1 - theta_B1)
struct[0].Gy_ini[23,22] = V_B1*cos(delta_B1 - theta_B1)
struct[0].Gy_ini[23,23] = -1
struct[0].Gy_ini[24,0] = i_d_B1*cos(delta_B1 - theta_B1) - i_q_B1*sin(delta_B1 - theta_B1)
struct[0].Gy_ini[24,1] = V_B1*i_d_B1*sin(delta_B1 - theta_B1) + V_B1*i_q_B1*cos(delta_B1 - theta_B1)
struct[0].Gy_ini[24,21] = V_B1*cos(delta_B1 - theta_B1)
struct[0].Gy_ini[24,22] = -V_B1*sin(delta_B1 - theta_B1)
struct[0].Gy_ini[24,24] = -1
struct[0].Gy_ini[25,6] = S_n_B3*T_p_B3/(2*K_p_B3*(1000000.0*S_n_B1 + S_n_B3*T_p_B3/(2*K_p_B3)))
struct[0].Gy_ini[25,20] = 1000000.0*S_n_B1/(1000000.0*S_n_B1 + S_n_B3*T_p_B3/(2*K_p_B3))
struct[0].Gy_ini[25,25] = -1
struct[0].Gy_ini[26,25] = -K_p_agc
struct[0].Gy_ini[26,26] = -1
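The `Gy_ini` assignments above hard-code the symbolically derived Jacobian of the algebraic equations. Generated derivatives like these are commonly cross-checked against finite differences; a minimal generic sketch (the function name and tolerances are assumptions, not from the original code):

```python
import numpy as np

def fd_jacobian(g, y, eps=1e-7):
    """Central-difference Jacobian of g: R^n -> R^m, usable to
    sanity-check symbolically generated entries such as Gy_ini."""
    y = np.asarray(y, dtype=float)
    m = len(g(y))
    J = np.zeros((m, y.size))
    for j in range(y.size):
        dy = np.zeros_like(y)
        dy[j] = eps
        # Perturb one unknown at a time and difference the residuals.
        J[:, j] = (g(y + dy) - g(y - dy)) / (2.0 * eps)
    return J
```

Comparing `fd_jacobian(g, y)` entry-by-entry against the hand-coded `Gy_ini` at the same operating point catches sign and index errors in the generated code.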
def run_nn(t,struct,mode):
# Parameters:
S_base = struct[0].S_base
g_B1_B2 = struct[0].g_B1_B2
b_B1_B2 = struct[0].b_B1_B2
bs_B1_B2 = struct[0].bs_B1_B2
g_B2_B3 = struct[0].g_B2_B3
b_B2_B3 = struct[0].b_B2_B3
bs_B2_B3 = struct[0].bs_B2_B3
U_B1_n = struct[0].U_B1_n
U_B2_n = struct[0].U_B2_n
U_B3_n = struct[0].U_B3_n
S_n_B3 = struct[0].S_n_B3
Omega_b_B3 = struct[0].Omega_b_B3
K_p_B3 = struct[0].K_p_B3
T_p_B3 = struct[0].T_p_B3
K_q_B3 = struct[0].K_q_B3
T_q_B3 = struct[0].T_q_B3
X_v_B3 = struct[0].X_v_B3
R_v_B3 = struct[0].R_v_B3
R_s_B3 = struct[0].R_s_B3
C_u_B3 = struct[0].C_u_B3
K_u_0_B3 = struct[0].K_u_0_B3
K_u_max_B3 = struct[0].K_u_max_B3
V_u_min_B3 = struct[0].V_u_min_B3
V_u_max_B3 = struct[0].V_u_max_B3
R_uc_B3 = struct[0].R_uc_B3
K_h_B3 = struct[0].K_h_B3
R_lim_B3 = struct[0].R_lim_B3
V_u_lt_B3 = struct[0].V_u_lt_B3
V_u_ht_B3 = struct[0].V_u_ht_B3
Droop_B3 = struct[0].Droop_B3
DB_B3 = struct[0].DB_B3
T_cur_B3 = struct[0].T_cur_B3
S_n_B1 = struct[0].S_n_B1
Omega_b_B1 = struct[0].Omega_b_B1
X_v_B1 = struct[0].X_v_B1
R_v_B1 = struct[0].R_v_B1
K_delta_B1 = struct[0].K_delta_B1
K_p_agc = struct[0].K_p_agc
K_i_agc = struct[0].K_i_agc
# Inputs:
P_B1 = struct[0].P_B1
Q_B1 = struct[0].Q_B1
P_B2 = struct[0].P_B2
Q_B2 = struct[0].Q_B2
P_B3 = struct[0].P_B3
Q_B3 = struct[0].Q_B3
q_s_ref_B3 = struct[0].q_s_ref_B3
v_u_ref_B3 = struct[0].v_u_ref_B3
omega_ref_B3 = struct[0].omega_ref_B3
p_gin_B3 = struct[0].p_gin_B3
p_g_ref_B3 = struct[0].p_g_ref_B3
alpha_B1 = struct[0].alpha_B1
e_qv_B1 = struct[0].e_qv_B1
omega_ref_B1 = struct[0].omega_ref_B1
# Dynamical states:
delta_B3 = struct[0].x[0,0]
xi_p_B3 = struct[0].x[1,0]
xi_q_B3 = struct[0].x[2,0]
e_u_B3 = struct[0].x[3,0]
p_ghr_B3 = struct[0].x[4,0]
k_cur_B3 = struct[0].x[5,0]
delta_B1 = struct[0].x[6,0]
Domega_B1 = struct[0].x[7,0]
xi_freq = struct[0].x[8,0]
# Algebraic states:
V_B1 = struct[0].y_run[0,0]
theta_B1 = struct[0].y_run[1,0]
V_B2 = struct[0].y_run[2,0]
theta_B2 = struct[0].y_run[3,0]
V_B3 = struct[0].y_run[4,0]
theta_B3 = struct[0].y_run[5,0]
omega_B3 = struct[0].y_run[6,0]
e_qv_B3 = struct[0].y_run[7,0]
i_d_B3 = struct[0].y_run[8,0]
i_q_B3 = struct[0].y_run[9,0]
p_s_B3 = struct[0].y_run[10,0]
q_s_B3 = struct[0].y_run[11,0]
p_m_B3 = struct[0].y_run[12,0]
p_t_B3 = struct[0].y_run[13,0]
p_u_B3 = struct[0].y_run[14,0]
v_u_B3 = struct[0].y_run[15,0]
k_u_B3 = struct[0].y_run[16,0]
k_cur_sat_B3 = struct[0].y_run[17,0]
p_gou_B3 = struct[0].y_run[18,0]
p_f_B3 = struct[0].y_run[19,0]
omega_B1 = struct[0].y_run[20,0]
i_d_B1 = struct[0].y_run[21,0]
i_q_B1 = struct[0].y_run[22,0]
p_s_B1 = struct[0].y_run[23,0]
q_s_B1 = struct[0].y_run[24,0]
omega_coi = struct[0].y_run[25,0]
p_agc = struct[0].y_run[26,0]
# Differential equations:
if mode == 2:
struct[0].f[0,0] = Omega_b_B3*(omega_B3 - omega_coi)
struct[0].f[1,0] = p_m_B3 - p_s_B3
struct[0].f[2,0] = -q_s_B3 + q_s_ref_B3
struct[0].f[3,0] = S_n_B3*(p_gou_B3 - p_t_B3)/(C_u_B3*(v_u_B3 + 0.1))
struct[0].f[4,0] = Piecewise(np.array([(-R_lim_B3, R_lim_B3 < -K_h_B3*(-p_ghr_B3 + p_gou_B3)), (R_lim_B3, R_lim_B3 < K_h_B3*(-p_ghr_B3 + p_gou_B3)), (K_h_B3*(-p_ghr_B3 + p_gou_B3), True)]))
struct[0].f[5,0] = (-k_cur_B3 + p_f_B3/p_gin_B3 + p_g_ref_B3/p_gin_B3)/T_cur_B3
struct[0].f[6,0] = -K_delta_B1*delta_B1 + Omega_b_B1*(omega_B1 - omega_coi)
struct[0].f[7,0] = alpha_B1
struct[0].f[8,0] = 1 - omega_coi
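The right-hand sides above call `Piecewise(np.array([...]))`, i.e. SymPy's `Piecewise` lowered to a runtime helper that returns the value of the first pair whose condition holds. A minimal numeric stand-in, assuming scalar conditions (this is a sketch of the expected semantics, not the generator's actual implementation):

```python
import numpy as np

def Piecewise(pairs):
    """Return the value of the first (value, condition) pair whose
    condition is truthy; the generated code always ends the list
    with a (value, True) default branch."""
    for value, condition in pairs:
        if condition:
            return value
    return 0.0
```

With this helper, an expression such as the rate limiter in `f[4,0]` evaluates branch by branch exactly as written.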
# Algebraic equations:
if mode == 3:
struct[0].g[0,0] = -P_B1/S_base + V_B1**2*g_B1_B2 + V_B1*V_B2*(-b_B1_B2*sin(theta_B1 - theta_B2) - g_B1_B2*cos(theta_B1 - theta_B2)) - S_n_B1*p_s_B1/S_base
struct[0].g[1,0] = -Q_B1/S_base + V_B1**2*(-b_B1_B2 - bs_B1_B2/2) + V_B1*V_B2*(b_B1_B2*cos(theta_B1 - theta_B2) - g_B1_B2*sin(theta_B1 - theta_B2)) - S_n_B1*q_s_B1/S_base
struct[0].g[2,0] = -P_B2/S_base + V_B1*V_B2*(b_B1_B2*sin(theta_B1 - theta_B2) - g_B1_B2*cos(theta_B1 - theta_B2)) + V_B2**2*(g_B1_B2 + g_B2_B3) + V_B2*V_B3*(-b_B2_B3*sin(theta_B2 - theta_B3) - g_B2_B3*cos(theta_B2 - theta_B3))
struct[0].g[3,0] = -Q_B2/S_base + V_B1*V_B2*(b_B1_B2*cos(theta_B1 - theta_B2) + g_B1_B2*sin(theta_B1 - theta_B2)) + V_B2**2*(-b_B1_B2 - b_B2_B3 - bs_B1_B2/2 - bs_B2_B3/2) + V_B2*V_B3*(b_B2_B3*cos(theta_B2 - theta_B3) - g_B2_B3*sin(theta_B2 - theta_B3))
struct[0].g[4,0] = -P_B3/S_base + V_B2*V_B3*(b_B2_B3*sin(theta_B2 - theta_B3) - g_B2_B3*cos(theta_B2 - theta_B3)) + V_B3**2*g_B2_B3 - S_n_B3*p_s_B3/S_base
struct[0].g[5,0] = -Q_B3/S_base + V_B2*V_B3*(b_B2_B3*cos(theta_B2 - theta_B3) + g_B2_B3*sin(theta_B2 - theta_B3)) + V_B3**2*(-b_B2_B3 - bs_B2_B3/2) - S_n_B3*q_s_B3/S_base
struct[0].g[6,0] = K_p_B3*(p_m_B3 - p_s_B3 + xi_p_B3/T_p_B3) - omega_B3
struct[0].g[7,0] = K_q_B3*(-q_s_B3 + q_s_ref_B3 + xi_q_B3/T_q_B3) - e_qv_B3
struct[0].g[8,0] = -R_v_B3*i_d_B3 - V_B3*sin(delta_B3 - theta_B3) + X_v_B3*i_q_B3
struct[0].g[9,0] = -R_v_B3*i_q_B3 - V_B3*cos(delta_B3 - theta_B3) - X_v_B3*i_d_B3 + e_qv_B3
struct[0].g[10,0] = V_B3*i_d_B3*sin(delta_B3 - theta_B3) + V_B3*i_q_B3*cos(delta_B3 - theta_B3) - p_s_B3
struct[0].g[11,0] = V_B3*i_d_B3*cos(delta_B3 - theta_B3) - V_B3*i_q_B3*sin(delta_B3 - theta_B3) - q_s_B3
struct[0].g[12,0] = p_ghr_B3 - p_m_B3 + p_s_B3 - p_t_B3 + p_u_B3
struct[0].g[13,0] = i_d_B3*(R_s_B3*i_d_B3 + V_B3*sin(delta_B3 - theta_B3)) + i_q_B3*(R_s_B3*i_q_B3 + V_B3*cos(delta_B3 - theta_B3)) - p_t_B3
struct[0].g[14,0] = -p_u_B3 - k_u_B3*(-v_u_B3**2 + v_u_ref_B3**2)/V_u_max_B3**2
struct[0].g[15,0] = R_uc_B3*S_n_B3*(p_gou_B3 - p_t_B3)/(v_u_B3 + 0.1) + e_u_B3 - v_u_B3
struct[0].g[16,0] = -k_u_B3 + Piecewise(np.array([(K_u_max_B3, V_u_min_B3 > v_u_B3), (K_u_0_B3 + (-K_u_0_B3 + K_u_max_B3)*(-V_u_lt_B3 + v_u_B3)/(-V_u_lt_B3 + V_u_min_B3), V_u_lt_B3 > v_u_B3), (K_u_0_B3 + (-K_u_0_B3 + K_u_max_B3)*(-V_u_ht_B3 + v_u_B3)/(-V_u_ht_B3 + V_u_max_B3), V_u_ht_B3 < v_u_B3), (K_u_max_B3, V_u_max_B3 < v_u_B3), (K_u_0_B3, True)]))
struct[0].g[17,0] = -k_cur_sat_B3 + Piecewise(np.array([(0.0001, k_cur_B3 < 0.0001), (1, k_cur_B3 > 1), (k_cur_B3, True)]))
struct[0].g[18,0] = k_cur_sat_B3*p_gin_B3 - p_gou_B3
struct[0].g[19,0] = -p_f_B3 - Piecewise(np.array([((0.5*DB_B3 + omega_B3 - omega_ref_B3)/Droop_B3, omega_B3 < -0.5*DB_B3 + omega_ref_B3), ((-0.5*DB_B3 + omega_B3 - omega_ref_B3)/Droop_B3, omega_B3 > 0.5*DB_B3 + omega_ref_B3), (0.0, True)]))
struct[0].g[20,0] = Domega_B1 - omega_B1 + omega_ref_B1
struct[0].g[21,0] = -R_v_B1*i_d_B1 - V_B1*sin(delta_B1 - theta_B1) - X_v_B1*i_q_B1
struct[0].g[22,0] = -R_v_B1*i_q_B1 - V_B1*cos(delta_B1 - theta_B1) + X_v_B1*i_d_B1 + e_qv_B1
struct[0].g[23,0] = V_B1*i_d_B1*sin(delta_B1 - theta_B1) + V_B1*i_q_B1*cos(delta_B1 - theta_B1) - p_s_B1
struct[0].g[24,0] = V_B1*i_d_B1*cos(delta_B1 - theta_B1) - V_B1*i_q_B1*sin(delta_B1 - theta_B1) - q_s_B1
struct[0].g[25,0] = -omega_coi + (1000000.0*S_n_B1*omega_B1 + S_n_B3*T_p_B3*omega_B3/(2*K_p_B3))/(1000000.0*S_n_B1 + S_n_B3*T_p_B3/(2*K_p_B3))
struct[0].g[26,0] = K_i_agc*xi_freq + K_p_agc*(1 - omega_coi) - p_agc
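During initialization, algebraic residuals like `g[0..26]` are typically driven to zero by Newton iteration using the `Gy` Jacobian assembled above. A generic hedged sketch of that solve (the model would supply `g` and `jac`; names and tolerances here are assumptions):

```python
import numpy as np

def newton(g, jac, y0, tol=1e-10, max_iter=20):
    """Solve g(y) = 0 by Newton's method; jac(y) returns the
    Jacobian dg/dy (the role played by Gy in the generated code)."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        r = g(y)
        if np.linalg.norm(r, np.inf) < tol:
            break
        # Newton step: solve J * dy = -r and update.
        y = y - np.linalg.solve(jac(y), r)
    return y
```

For instance, solving the scalar residual `y**2 - 4 = 0` from `y0 = 1.0` converges to `y = 2` in a few iterations.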
# Outputs:
if mode == 3:
struct[0].h[0,0] = V_B1
struct[0].h[1,0] = V_B2
struct[0].h[2,0] = V_B3
struct[0].h[3,0] = p_gin_B3
struct[0].h[4,0] = p_g_ref_B3
struct[0].h[5,0] = -p_s_B3 + p_t_B3
struct[0].h[6,0] = (-V_u_min_B3**2 + e_u_B3**2)/(V_u_max_B3**2 - V_u_min_B3**2)
struct[0].h[7,0] = alpha_B1
if mode == 10:
struct[0].Fx[4,4] = Piecewise(np.array([(0, (R_lim_B3 < K_h_B3*(-p_ghr_B3 + p_gou_B3)) | (R_lim_B3 < -K_h_B3*(-p_ghr_B3 + p_gou_B3))), (-K_h_B3, True)]))
struct[0].Fx[5,5] = -1/T_cur_B3
struct[0].Fx[6,6] = -K_delta_B1
if mode == 11:
struct[0].Fy[0,6] = Omega_b_B3
struct[0].Fy[0,25] = -Omega_b_B3
struct[0].Fy[1,10] = -1
struct[0].Fy[1,12] = 1
struct[0].Fy[2,11] = -1
struct[0].Fy[3,13] = -S_n_B3/(C_u_B3*(v_u_B3 + 0.1))
struct[0].Fy[3,15] = -S_n_B3*(p_gou_B3 - p_t_B3)/(C_u_B3*(v_u_B3 + 0.1)**2)
struct[0].Fy[3,18] = S_n_B3/(C_u_B3*(v_u_B3 + 0.1))
struct[0].Fy[4,18] = Piecewise(np.array([(0, (R_lim_B3 < K_h_B3*(-p_ghr_B3 + p_gou_B3)) | (R_lim_B3 < -K_h_B3*(-p_ghr_B3 + p_gou_B3))), (K_h_B3, True)]))
handler = getattr(handler.query_tree(), item)
self.positionGetter = handler
else:
self.positionGetter = pygame_win
pos = desktops['linux2'][self.desk_env].get('position_setter', None)
if pos:
if DEBUG_WM:
print "pos_setter.split('.')", pos.split('.')
handler = pygame_win
for item in pos.split('.'):
handler = getattr(handler.query_tree(), item)
self.positionSetter = handler
else:
self.positionSetter = pygame_win
# Position gap. Used to correct wrong positions on some environments.
self.position_gap = desktops['linux2'][self.desk_env].get('position_gap', (0, 0, False, False))
self.starting = True
self.gx, self.gy = 0, 0
# State handler
state = desktops['linux2'][self.desk_env].get('state', None)
if state:
if DEBUG_WM:
print "state.split('.')", state.split('.')
handler = pygame_win
for item in state.split('.'):
handler = getattr(handler.query_tree(), item)
self.stateHandler = handler
else:
self.stateHandler = pygame_win
if DEBUG_WM:
print "self.positionGetter:", self.positionGetter, 'ID:', self.positionGetter.id
print "self.positionSetter:", self.positionSetter, 'ID:', self.positionSetter.id
print "self.sizeGetter:", self.sizeGetter, 'ID:', self.sizeGetter.id
print "self.sizeSetter:", self.sizeSetter, 'ID:', self.sizeSetter.id
print "self.stateHandler:", self.stateHandler, 'ID:', self.stateHandler.id
print self.stateHandler.get_wm_state()
def get_root_rect(self):
"""Return a four values tuple containing the position and size of the very first OS window object."""
geom = self.display.screen().root.get_geometry()
return geom.x, geom.y, geom.width, geom.height
def get_size(self):
"""Return the window actual size as a tuple (width, height)."""
geom = self.sizeGetter.get_geometry()
if DEBUG_WM:
print "Actual size is", geom.width, geom.height
return geom.width, geom.height
def set_size(self, size, update=True):
"""Set the window size.
:size: list or tuple: the new size.
Raises a TypeError if anything other than a list or a tuple is sent."""
if isinstance(size, (list, tuple)):
# Call the Xlib object handling the size to update it.
if DEBUG_WM:
print "Setting size to", size
print "actual size", self.get_size()
self.sizeSetter.configure(width=size[0], height=size[1])
if update:
self.sync()
else:
# Raise a Type error.
raise TypeError("%s is not a list or a tuple." % size)
def get_position(self):
"""Return the window actual position as a tuple."""
geom = self.positionGetter.get_geometry()
# if DEBUG_WM:
# print "Actual position is", geom.x, geom.y
return geom.x, geom.y
def set_position(self, pos, update=True):
"""Set the window position.
:pos: list or tuple: the new position (x, y).
:update: bool: whether to call the internal sync method."""
if DEBUG_WM:
print "Setting position to", pos
if isinstance(pos, (list, tuple)):
gx, gy = self.gx, self.gy
if self.starting:
gx, gy = self.position_gap[:2]
if self.position_gap[2]:
self.gx = gx
if self.position_gap[3]:
self.gy = gy
self.starting = False
# Call the Xlib object handling the position to update it.
self.positionSetter.configure(x=pos[0] + gx, y=pos[1] + gy)
if update:
self.sync()
else:
# Raise a Type error.
raise TypeError("%s is not a list or a tuple." % pos)
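The `position_gap` correction in `set_position` applies a one-time offset on the first call and optionally persists it per axis. The same logic can be isolated as a pure function for testing (this helper is hypothetical; it mirrors, rather than replaces, the method above):

```python
def apply_position_gap(pos, gap, starting, gx=0, gy=0):
    """Return the corrected (x, y) plus updated (gx, gy, starting) state.
    gap = (dx, dy, keep_x, keep_y), matching position_gap above."""
    if starting:
        dx, dy = gap[:2]
        # Persist each axis offset only if its keep flag is set.
        gx = dx if gap[2] else gx
        gy = dy if gap[3] else gy
        starting = False
    else:
        dx, dy = gx, gy
    return (pos[0] + dx, pos[1] + dy), (gx, gy, starting)
```

With `gap = (5, 10, True, False)`, the first call offsets by (5, 10) and keeps only the x correction for later calls.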
def get_state(self):
"""Return wheter the window is maximized or not, or minimized or full screen."""
state = self.stateHandler.get_full_property(self.display.intern_atom("_NET_WM_STATE"), 4)
# if DEBUG_WM:
# print "state_1.value", state.value
# print "max vert", self.display.intern_atom("_NET_WM_STATE_MAXIMIZED_VERT") ,self.display.intern_atom("_NET_WM_STATE_MAXIMIZED_VERT") in state.value
# print "max horz", self.display.intern_atom("_NET_WM_STATE_MAXIMIZED_HORZ"), self.display.intern_atom("_NET_WM_STATE_MAXIMIZED_HORZ") in state.value
if self.display.intern_atom("_NET_WM_STATE_MAXIMIZED_HORZ") in state.value and self.display.intern_atom("_NET_WM_STATE_MAXIMIZED_VERT") in state.value:
# if DEBUG_WM:
# print MAXIMIZED
return MAXIMIZED
elif self.display.intern_atom("_NET_WM_STATE_HIDEN") in state.value:
# if DEBUG_WM:
# print MINIMIZED
return MINIMIZED
elif self.display.intern_atom("_NET_WM_STATE_FULLSCREEN") in state.value:
# if DEBUG_WM:
# print FULLSCREEN
return FULLSCREEN
# if DEBUG_WM:
# print NORMAL
return NORMAL
def set_state(self, state=NORMAL, size=(-1, -1), pos=(-1, -1), update=True):
"""Set whether the window is maximized or not, or minimized or full screen.
If no argument is given, assume the state will be windowed and not maximized.
If arguments are given, only the first is relevant. The other ones are ignored.
** Only maximized and normal states are implemented for now. **
:state: valid arguments:
'minimized', MINIMIZED, 0.
'normal', NORMAL, 1: windowed, not maximized.
'maximized', MAXIMIZED, 2.
'fullscreen', FULLSCREEN, 3.
:size: list, tuple: the new size; if (-1, -1), self.get_size() is used.
If one element is -1 it is replaced by the corresponding value from self.get_size().
:pos: list, tuple: the new position; if (-1, -1), self.get_position() is used.
If one element is -1 it is replaced by the corresponding value from self.get_position().
:update: bool: whether to call the internal flush method."""
if state not in (0, MINIMIZED, 'minimized', 1, NORMAL, 'normal', 2, MAXIMIZED, 'maximized', 3, FULLSCREEN, 'fullscreen'):
# Raise a value error.
raise ValueError("Invalid state argument: %s is not a correct value" % state)
if not isinstance(size, (list, tuple)):
raise TypeError("Invalid size argument: %s is not a list or a tuple.")
if not isinstance(pos, (list, tuple)):
raise TypeError("Invalid pos argument: %s is not a list or a tuple.")
if state in (1, NORMAL, 'normal'):
size = list(size)
sz = self.get_size()
if size[0] == -1:
size[0] = sz[0]
if size[1] == -1:
size[1] = sz[1]
pos = list(pos)
ps = self.get_position()
if pos[0] == -1:
pos[0] = ps[0]
if pos[1] == -1:
pos[1] = ps[1]
self.set_mode(size, self.mode)
self.set_position(pos)
elif state in (0, MINIMIZED, 'minimized'):
pass
elif state in (2, MAXIMIZED, 'maximized'):
data = [1, self.display.intern_atom("_NET_WM_STATE_MAXIMIZED_VERT", False), self.display.intern_atom("_NET_WM_STATE_MAXIMIZED_HORZ", False)]
data = (data + ([0] * (5 - len(data))))[:5]
if DEBUG_WM:
print self.stateHandler.get_wm_state()
print "creating event", Xlib.protocol.event.ClientMessage
print dir(self.stateHandler)
x_event = Xlib.protocol.event.ClientMessage(window=self.stateHandler, client_type=self.display.intern_atom("_NET_WM_STATE", False), data=(32, data))
if DEBUG_WM:
print "sending event"
self.display.screen().root.send_event(x_event, event_mask=Xlib.X.SubstructureRedirectMask)
if DEBUG_WM:
print self.stateHandler.get_wm_state()
elif state in (3, FULLSCREEN, 'fullscreen'):
pass
if update:
self.flush()
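The maximize branch above pads the `ClientMessage` data list to exactly five items before building the EWMH event, since 32-bit-format client messages carry five longs. That padding idiom can be wrapped in a small helper (hypothetical name, same expression as in the code):

```python
def pad_client_message_data(data):
    """Pad or truncate a ClientMessage data list to exactly five
    longs, as required for 32-bit-format EWMH client messages."""
    return (list(data) + [0] * (5 - len(data)))[:5]
```

Shorter lists are zero-padded and longer ones are truncated, so the event data is always well-formed.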
def flush(self):
"""Wrapper around Xlib.Display.flush()"""
if DEBUG_WM:
print "* flushing display"
self.display.flush()
def sync(self):
"""Wrapper around Xlib.Display.sync()"""
if DEBUG_WM:
print "* syncing display"
self.display.sync()
#=======================================================================
# WARNING: This class has been built on Linux using wine.
# Please review this code and adjust it accordingly before using it without the '--debug-wm' switch!
class WWindowHandler(BaseWindowHandler):
"""Object to deal with Microsoft Window managers."""
desk_env = desktop_environment
def __init__(self, pos=(0, 0), size=(0, 0), mode=None):
"""Set up the internal handlers."""
BaseWindowHandler.__init__(self, pos=pos, size=size, mode=mode)
# Tests
if DEBUG_WM:
print "#" * 72
print "WWindowHandler.__init__"
print "Desktop environment:", desktop_environment
for item in dir(win32con):
if 'maxim' in item.lower() or 'minim' in item.lower() or 'full' in item.lower():
print item, getattr(win32con, item)
self.base_handler = display
self.base_handler_id = display.get_wm_info()['window']
if platform.dist() == ('', '', ''):
# We're running on a native Windows.
def set_mode(self, size, mode):
"""Wrapper for pygame.display.set_mode()."""
# The Windows pygame implementation seems to handle the display mode and size on its own...
return
else:
# We're running on wine.
def set_mode(self, size, mode):
"""Wrapper for pygame.display.set_mode()."""
if getattr(self, 'wine_state_fix', False):
self.set_size(size)
self.wine_state_fix = True
def get_root_rect(self):
"""Return a four values tuple containing the position and size of the very first OS window object."""
flags, showCmd, ptMin, ptMax, rect = win32gui.GetWindowPlacement(win32gui.GetDesktopWindow())
return rect
def get_size(self):
"""Return the window actual size as a tuple (width, height)."""
flags, showCmd, ptMin, ptMax, rect = win32gui.GetWindowPlacement(self.base_handler_id)
w = rect[2] - rect[0]
h = rect[3] - rect[1]
return w, h
def set_size(self, size, update=True):
"""Set the window size.
:size: list or tuple: the new size.
:update: bool: kept for API symmetry with the X11 handler.
Raises a TypeError if anything other than a list or a tuple is sent."""
if isinstance(size, (list, tuple)):
w, h = size
cx, cy = win32gui.GetCursorPos()
if DEBUG_WM:
print "Setting size to", size
print "actual size", self.get_size()
print "actual position", self.get_position()
print 'cursor pos', cx, cy
flags, showCmd, ptMin, ptMax, rect = win32gui.GetWindowPlacement(self.base_handler_id)
if DEBUG_WM:
print "set_size rect", rect, "ptMin", ptMin, "ptMax", ptMax, "flags", flags
x = rect[0]
y = rect[1]
rect = x, y, x + w, y + h
win32gui.SetWindowPlacement(self.base_handler_id, (0, showCmd, ptMin, ptMax, rect))
else:
# Raise a TypeError.
raise TypeError("%s is not a list or a tuple." % repr(size))
def get_position(self):
"""Return the window actual position as a tuple."""
flags, showCmd, ptMin, ptMax, rect = win32gui.GetWindowPlacement(self.base_handler_id)
x, y, r, b = rect
return x, y
def set_position(self, pos, update=True):
"""Set the window position.
:pos: list or tuple: the new position (x, y)."""
if DEBUG_WM:
print "Setting position to", pos
if isinstance(pos, (list, tuple)):
self.first_pos = False
x, y = pos
if update:
flags, showCmd, ptMin, ptMax, rect = win32gui.GetWindowPlacement(self.base_handler_id)
if DEBUG_WM:
print "set_position rect", rect, "ptMin", ptMin, "ptMax", ptMax
realW = rect[2] - rect[0]
realH = rect[3] - rect[1]
if DEBUG_WM:
print 'rect[0]', rect[0], 'rect[1]', rect[1]
print 'realW', realW, 'realH', realH
print 'cursor pos', win32gui.GetCursorPos()
rect = (x, y, x + realW, y + realH)
win32gui.SetWindowPlacement(self.base_handler_id, (0, showCmd, ptMin, ptMax, rect))
else:
# Raise a TypeError.
raise TypeError("%s is not a list or a tuple." % repr(pos))
import math
import copy
import types
import random
from direct.showbase.ShowBaseGlobal import *
from direct.gui.DirectGui import *
from panda3d.core import *
from direct.showbase.PythonUtil import *
from direct.directnotify import DirectNotifyGlobal
from direct.controls import ControlManager
from direct.interval.IntervalGlobal import *
from direct.controls import BattleWalker
from direct.actor import Actor
from direct.showbase.InputStateGlobal import inputState
from direct.distributed.ClockDelta import *
from direct.showbase.ShadowPlacer import ShadowPlacer
from direct.fsm.StatePush import StateVar
from otp.avatar.LocalAvatar import LocalAvatar
from otp.avatar import PositionExaminer
from otp.otpbase import OTPGlobals
from otp.speedchat import SCDecoders
from otp.otpgui import OTPDialog
from pirates.audio import SoundGlobals
from pirates.piratesgui import PDialog
from pirates.battle import WeaponGlobals
from pirates.battle import DistributedBattleAvatar
from pirates.chat.PiratesChatManager import PiratesChatManager
from pirates.chat.PTalkAssistant import PTalkAssistant
from pirates.ship import ShipGlobals
from pirates.piratesgui import GuiManager
from pirates.piratesgui import PiratesGuiGlobals
from pirates.tutorial import ChatTutorial
from pirates.tutorial import ChatTutorialAlt
from pirates.piratesbase import PLocalizer
from pirates.piratesbase import PiratesGlobals
from pirates.piratesbase import EmoteGlobals
from pirates.reputation import ReputationGlobals
from pirates.battle import RangeDetector
from pirates.battle import BattleSkillDiary
from pirates.movement.CameraFSM import CameraFSM
from pirates.economy.EconomyGlobals import *
from pirates.economy import EconomyGlobals
from pirates.piratesbase import TeamUtils
from pirates.piratesbase import UserFunnel
from pirates.ship import DistributedSimpleShip
from pirates.instance import DistributedMainWorld
from pirates.world import DistributedGameArea
from pirates.world import OceanZone
from pirates.interact import InteractiveBase
from pirates.effects.CloudScud import CloudScud
from pirates.effects.ProtectionSpiral import ProtectionSpiral
from pirates.battle.EnemySkills import EnemySkills
from pirates.inventory import InventoryGlobals
from pirates.inventory.InventoryGlobals import Locations
from direct.controls.GhostWalker import GhostWalker
from direct.controls.PhysicsWalker import PhysicsWalker
from direct.controls.ObserverWalker import ObserverWalker
from pirates.movement.PiratesGravityWalker import PiratesGravityWalker
from pirates.movement.PiratesSwimWalker import PiratesSwimWalker
from pirates.quest import QuestDB
from pirates.quest import QuestStatus
from pirates.world.LocationConstants import LocationIds, getParentIsland
from pirates.world import WorldGlobals
from pirates.map.MinimapObject import GridMinimapObject
from pirates.pirate import TitleGlobals
from pirates.uberdog.UberDogGlobals import InventoryCategory, InventoryType
from pirates.uberdog.DistributedInventoryBase import DistributedInventoryBase
import Pirate
import LocalPirateGameFSM
from DistributedPlayerPirate import DistributedPlayerPirate
from pirates.pirate import PlayerStateGlobals
from pirates.pirate import AvatarTypes
from pirates.makeapirate import ClothingGlobals
from pirates.audio import SoundGlobals
from pirates.audio.SoundGlobals import loadSfx
from pirates.inventory import ItemGlobals
from direct.task.Task import Task
from pirates.effects.PooledEffect import PooledEffect
from pirates.piratesgui.GameOptions import Options
from pirates.piratesgui import MessageGlobals
from pirates.piratesbase import TODGlobals
from direct.gui import OnscreenText
from pirates.util.BpDb import *
globalClock = ClockObject.getGlobalClock()
if base.config.GetBool('want-pstats', 0):
import profile
import pstats
class bp:
bpdb = BpDb()
loginCfg = bpdb.bpPreset(iff = False, cfg = 'loginCfg', static = 1)
from direct.controls.ControlManager import ControlManager
if base.config.GetBool('want-custom-keys', 0):
ControlManager.wantCustomKeys = 1
ControlManager.wantWASD = 0
else:
ControlManager.wantCustomKeys = 0
ControlManager.wantWASD = 1
class LocalPirate(DistributedPlayerPirate, LocalAvatar):
notify = DirectNotifyGlobal.directNotify.newCategory('LocalPirate')
neverDisable = 1
def __init__(self, cr):
try:
self.LocalPirate_initialized
return
except:
self.LocalPirate_initialized = 1
DistributedPlayerPirate.__init__(self, cr)
self.masterHuman = base.cr.humanHigh
chatMgr = PiratesChatManager()
talkAssistant = PTalkAssistant()
LocalAvatar.__init__(self, cr, chatMgr, talkAssistant = talkAssistant)
self.gameFSM = None
self.equippedWeapons = []
self.monstrousTarget = None
self.distanceToTarget = 0
self._LocalPirate__lootUIEnabled = True
self.missedLootInformation = []
self.setLocalAvatarUsingWeapon(1)
self.cameraFSM = CameraFSM(self)
self.guiMgr = GuiManager.GuiManager(self)
self.interestHandles = []
if base.config.GetBool('debug-local-animMixer', 0):
self.animMixer.setVerbose(True)
self.currentMouseOver = None
self.currentAimOver = None
self.currentSelection = None
self.tutObject = None
self.currentDialogMovie = None
self.ship = None
self.shipList = set()
self.cannon = None
self._LocalPirate__turboOn = 0
self._LocalPirate__marioOn = 0
self.speedIndex = 0
self.curMoveSound = None
self.setupMovementSounds()
self.rangeDetector = RangeDetector.RangeDetector()
self.rangeDetector.detachNode()
self.showQuest = True
self.currentOcean = 0
self.soundWhisper = loadSfx(SoundGlobals.SFX_GUI_WHISPER)
self.positionExaminer = PositionExaminer.PositionExaminer()
self.skillDiary = BattleSkillDiary.BattleSkillDiary(self.cr, self)
self.lookAtTarget = None
self.lookAtTimer = None
self.lookAtDummy = self.attachNewNode('lookAtDummy')
self.lookFromNode = self.attachNewNode('lookFromTargetHelper')
self.lookFromNode.setZ(self.getHeight())
self.lookToNode = NodePath('lookToTargetHelper')
if base.config.GetBool('want-dev', False):
self.accept('shift-f12', self.toggleAvVis)
self.money = 0
self.firstMoneyQuieted = 0
self.enableAutoRun = 0
self.kickEvents = None
self.battleTeleportFlagTask = None
self.openJailDoorTrack = None
self.currentStoryQuests = []
self.cloudScudEffect = None
self.soloInteraction = False
self.emoteAccess = []
self.AFKDelay = base.config.GetInt('afk-delay', 600)
self.playRewardAnimation = None
self.localProjectiles = []
self._cannonAmmoSkillId = InventoryType.CannonRoundShot
self._siegeTeamSV = StateVar(0)
self.guildPopupDialog = None
self.moralePopupDialog = None
self.gmNameTagEnabledLocal = 0
self.gmNameTagStringLocal = ''
self.gmNameTagColorLocal = ''
soundEffects = [
SoundGlobals.SFX_MONSTER_JR_LAUGH_01,
SoundGlobals.SFX_MONSTER_JR_LAUGH_02,
SoundGlobals.SFX_MONSTER_JR_ENJOY,
SoundGlobals.SFX_MONSTER_JR_SUBMIT,
SoundGlobals.SFX_MONSTER_JR_JOIN]
self.jollySfx = loadSfx(random.choice(soundEffects))
self.currCombatMusic = None
self.clothingUpdateTaskName = 'inventoryClothingUpdate'
self.clothingUpdatePending = 0
self.sailHit = 0
self.playersNearby = { }
self.trackedRotation = []
self.trackedTurning = 0
self.lastCannonShot = globalClock.getFrameTime()
self.pendingInitQuest = None
self.inInvasion = False
self.levelFootStep = None
self.wobbleList = []
self.fovIval = None
self.lockRegenFlag = 0
self.everBeenGhost = 0
self.mistimedAttack = 0
if base.config.GetBool('want-easy-combos', 1):
self.wantComboTiming = 0
else:
self.wantComboTiming = 1
self.zombieEffect = None
self.zombieIval = None
self.defenceEffects = { }
self.skillSfxIval = None
self.currentWeaponSlotId = 1
if base.config.GetBool('want-pstats', 0):
self.pstatsGen = PStatCollector('Battle Avatars:Avatar Generating')
self.pstatsLoad = PStatCollector('Battle Avatars:Loading Asset')
self.pstatsFPS = PStatCollector('Battle Avatars:fps')
self.lastTime = None
taskMgr.add(self.logPStats, 'avatarPstats')
self.fishingGameHook = None
self.accept('shipRemoved', self.checkHaveShip)
self.rocketOn = 0
if base.config.GetBool('want-rocketman', 0):
self.startRocketJumpMode()
self.dialogProp = None
self.duringDialog = False
self.efficiency = False
self.boardedShip = False
self.shipLookAhead = 1
def setShipLookAhead(self, value):
self.shipLookAhead = value
def startRocketJumpMode(self):
self.oldGravity = None
self.accept('space', self.moveUpStart)
self.accept('space-up', self.moveUpEnd)
self.rocketOn = 1
def endRocketJumpMode(self):
self.moveUpEnd()
self.ignore('space')
self.ignore('space-up')
self.rocketOn = 0
def moveUpEnd(self):
taskMgr.remove('rocketDelayTask')
if self.oldGravity != None:
if self.oldGravity:
self.controlManager.get('walk').lifter.setGravity(self.oldGravity)
else:
self.controlManager.get('walk').lifter.setGravity(32.174 * 2.0)
self.oldGravity = None
def moveUpStart(self):
self.lastJumpTime = None
self.jumpStartTime = globalClock.getFrameTime()
self.oldGravity = self.controlManager.get('walk').lifter.getGravity()
if self.controlManager.get('walk').lifter.isOnGround():
taskMgr.doMethodLater(0.5, self.rocketGrav, 'rocketDelayTask')
else:
self.rocketGrav()
def rocketGrav(self, task = None):
self.controlManager.get('walk').lifter.setGravity(-32.174)
if task:
return task.done
def sendUpdate(self, *args, **kw):
if self.isGenerated():
return DistributedPlayerPirate.sendUpdate(self, *args, **kw)
def logPStats(self, task):
self.pstatsGen.setLevel(taskMgr.mgr.findTaskChain('background').getNumTasks() + 0)
self.pstatsLoad.setLevel(taskMgr.mgr.findTaskChain('loader').getNumTasks() + 0)
if self.lastTime == None:
self.lastTime = globalClock.getRealTime()
timeDelta = globalClock.getRealTime() - self.lastTime
self.lastTime = globalClock.getRealTime()
if timeDelta <= 0.0:
fps = 0.0
else:
fps = 1.0 / timeDelta
self.pstatsFPS.setLevel(fps)
return task.cont
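The frame-rate estimate in logPStats above is simply the inverse of the wall-clock delta between consecutive samples; a standalone sketch (fps_from_delta is a hypothetical helper, not part of the class):

```python
# fps is the inverse of the time delta between samples,
# clamped to 0 for non-positive deltas (first sample, clock glitches).
# fps_from_delta is a hypothetical helper for illustration only.
def fps_from_delta(time_delta):
    if time_delta <= 0.0:
        return 0.0
    return 1.0 / time_delta

assert fps_from_delta(0.25) == 4.0   # 4 frames per second
assert fps_from_delta(0.0) == 0.0    # degenerate delta maps to 0
```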
def setupWalkControls(self, avatarRadius = 1.4, floorOffset = OTPGlobals.FloorOffset, reach = 4.0, wallBitmask = OTPGlobals.WallBitmask, floorBitmask = OTPGlobals.FloorBitmask, ghostBitmask = OTPGlobals.GhostBitmask):
walkControls = PiratesGravityWalker(gravity = 32.174 * 2.0)
walkControls.setWallBitMask(wallBitmask)
walkControls.setFloorBitMask(floorBitmask)
walkControls.initializeCollisions(self.cTrav, self, avatarRadius, floorOffset, reach)
walkControls.setAirborneHeightFunc(self.getAirborneHeight)
self.controlManager.add(walkControls, 'walk')
self.physControls = walkControls
swimControls = PiratesSwimWalker()
swimControls.setWallBitMask(wallBitmask)
swimControls.setFloorBitMask(floorBitmask)
swimControls.initializeCollisions(self.cTrav, self, avatarRadius, floorOffset, 4.0)
swimControls.setAirborneHeightFunc(self.getAirborneHeight)
self.controlManager.add(swimControls, 'swim')
ghostControls = GhostWalker()
ghostControls.setWallBitMask(ghostBitmask)
ghostControls.setFloorBitMask(floorBitmask)
ghostControls.initializeCollisions(self.cTrav, self, avatarRadius, floorOffset, reach)
ghostControls.setAirborneHeightFunc(self.getAirborneHeight)
self.controlManager.add(ghostControls, 'ghost')
observerControls = ObserverWalker()
observerControls.setWallBitMask(ghostBitmask)
observerControls.setFloorBitMask(floorBitmask)
observerControls.initializeCollisions(self.cTrav, self, avatarRadius, floorOffset, reach)
observerControls.setAirborneHeightFunc(self.getAirborneHeight)
self.controlManager.add(observerControls, 'observer')
self.controlManager.use('walk', self)
self.controlManager.disable()
def respondShipUpgrade(self, shipId, retCode):
if retCode:
messenger.send('ShipUpgraded', [
shipId,
retCode])
base.localAvatar.guiMgr.queueInstructionMessageFront(PLocalizer.ShipUpgradeComplete, [], None, 1.0, messageCategory = MessageGlobals.MSG_CAT_PURCHASE)
else:
messenger.send('ShipUpgraded', [
shipId,
retCode])
base.localAvatar.guiMgr.queueInstructionMessageFront(PLocalizer.ShipUpgradeFailed, [], None, 1.0, messageCategory = MessageGlobals.MSG_CAT_PURCHASE_FAILED)
def createGameFSM(self):
self.gameFSM = LocalPirateGameFSM.LocalPirateGameFSM(self)
def updateReputation(self, category, value):
DistributedPlayerPirate.updateReputation(self, category, value)
self.guiMgr.updateReputation(category, value)
def playSkillMovie(self, skillId, ammoSkillId, skillResult, charge = 0, targetId = 0, areaIdList = []):
if WeaponGlobals.getSkillTrack(skillId) == WeaponGlobals.BREAK_ATTACK_SKILL_INDEX:
self.skillDiary.clearHits(skillId)
self.guiMgr.combatTray.clearSkillCharge(skillId)
else:
self.skillDiary.startRecharging(skillId, ammoSkillId)
if WeaponGlobals.getSkillTrack(skillId) == WeaponGlobals.DEFENSE_SKILL_INDEX:
if skillId == EnemySkills.MISC_VOODOO_REFLECT:
self.showEffectString(PLocalizer.AttackReflected)
else:
self.showEffectString(PLocalizer.AttackBlocked)
self.guiMgr.combatTray.startSkillRecharge(skillId)
if skillId in (EnemySkills.STAFF_TOGGLE_AURA_WARDING, EnemySkills.STAFF_TOGGLE_AURA_NATURE, EnemySkills.STAFF_TOGGLE_AURA_DARK):
if self.getAuraActivated():
skillId = EnemySkills.STAFF_TOGGLE_AURA_OFF
DistributedPlayerPirate.playSkillMovie(self, skillId, ammoSkillId, skillResult, charge, targetId, areaIdList)
def sendRequestMAPClothes(self, clothes):
self.sendUpdate('requestMAPClothes', [
clothes])
def lockRegen(self):
self.lockRegenFlag = 1
def unlockAndRegen(self, force = True):
self.lockRegenFlag = 0
if getattr(self, 'needRegenFlag', 0) or force:
self.cueRegenerate(force = 1)
def cueRegenerate(self, force = 0):
if (self.clothingUpdatePending or self.lockRegenFlag) and not force:
self.needRegenFlag = 1
return None
DistributedPlayerPirate.cueRegenerate(self)
def doRegeneration(self):
if not self.lockRegenFlag:
DistributedPlayerPirate.doRegeneration(self)
messenger.send('localAv-regenerate')
def wearJewelry(self, itemToWear, location, remove = None):
if remove:
self.tryOnJewelry(None, location)
else:
self.tryOnJewelry(itemToWear, location)
taskMgr.remove(self.clothingUpdateTaskName)
task = taskMgr.doMethodLater(5.0, self.sendClothingUpdate, self.clothingUpdateTaskName)
self.clothingUpdatePending = 1
def wearItem(self, itemToWear, location, remove = None):
if remove:
self.removeClothes(location)
else:
self.tryOnClothes(location, itemToWear.itemTuple)
taskMgr.remove(self.clothingUpdateTaskName)
task = taskMgr.doMethodLater(5.0, self.sendClothingUpdate, self.clothingUpdateTaskName)
self.clothingUpdatePending = 1
def wearTattoo(self, itemToWear, location, remove = None):
if remove:
self.tryOnTattoo(None, location)
else:
self.tryOnTattoo(itemToWear, location)
taskMgr.remove(self.clothingUpdateTaskName)
task = taskMgr.doMethodLater(5.0, self.sendClothingUpdate, self.clothingUpdateTaskName)
self.clothingUpdatePending = 1
def sendClothingUpdate(self, args = None):
self.sendUpdate('requestChangeClothes', [])
self.clothingUpdatePending = 0
def checkForWeaponInSlot(self, weaponId, slot):
inventory = localAvatar.getInventory()
if slot == -1:
return 1
if inventory:
weaponInSlot = inventory.getLocatables().get(slot)
if weaponInSlot and weaponInSlot[1] == weaponId:
return weaponInSlot[1]
else:
return None
def getWeaponFromSlot(self, slot):
inventory = localAvatar.getInventory()
if inventory:
weaponInSlot = inventory.getLocatables().get(slot)
if weaponInSlot and weaponInSlot[1]:
return weaponInSlot[1]
else:
return None
def toggleWeapon(self, newWeaponId, slotId, fromWheel = 0):
switchWeaponStates = [
'LandRoam',
'Battle',
'WaterRoam',
'Dialog']
if self.getGameState() not in switchWeaponStates:
return None
if self.belongsInJail():
return None
if self.guiMgr.mainMenu and not self.guiMgr.mainMenu.isHidden():
return None
if not self.checkForWeaponInSlot(newWeaponId, slotId):
return None
newSlot = self.currentWeaponSlotId != slotId
self.currentWeaponSlotId = slotId
if (newWeaponId != self.currentWeaponId or newSlot) and self.isWeaponDrawn:
self.d_requestCurrentWeapon(newWeaponId, 1)
self.l_setCurrentWeapon(newWeaponId, 1, slotId)
self.b_setGameState('Battle')
elif not (self.isWeaponDrawn) and fromWheel:
self.d_requestCurrentWeapon(newWeaponId, 1)
self.l_setCurrentWeapon(newWeaponId, 1, slotId)
self.b_setGameState('Battle')
elif not self.isWeaponDrawn:
self.d_requestCurrentWeapon(newWeaponId, 1)
self.l_setCurrentWeapon(newWeaponId, 1, slotId)
self.b_setGameState('Battle')
messenger.send('weaponEquipped')
else:
self.d_requestCurrentWeapon(newWeaponId, 0)
self.l_setCurrentWeapon(newWeaponId, 0, slotId)
self.b_setGameState('LandRoam')
messenger.send('weaponSheathed')
def putWeaponAway(self):
if self.isWeaponDrawn:
self.d_requestCurrentWeapon(self.currentWeaponId, 0)
self.l_setCurrentWeapon(self.currentWeaponId, 0, self.currentWeaponSlotId)
self.b_setGameState('LandRoam')
messenger.send('weaponSheathed')
def setCurrentWeapon(self, currentWeaponId, isWeaponDrawn):
pass
def l_setCurrentWeapon(self, currentWeaponId, isWeaponDrawn, slotId):
if not self.gameFSM.isInTransition() and self.getGameState() in [
'WaterRoam',
'WaterTreasureRoam']:
return None
if self.currentWeaponId != currentWeaponId or self.isWeaponDrawn != isWeaponDrawn:
# AG.py
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
'''
The function crearPoblacionInicial(cantidad, numeroDeCiudades) is defined, where:
* cantidad is the number of individuals to create for the population
* numeroDeCiudades is the number of cities to use in the simulation
This function generates an initial population of individuals with random genomes
'''
def crearPoblacionInicial(cantidad, numeroDeCiudades):
#Build an array of city indices; these are the indices used to access the city coordinates in the array loaded from the given file
indiceCiudades = np.linspace(0, numeroDeCiudades-1,numeroDeCiudades).astype(int)
#Initialize the array that will hold the initial population
poblacionInicial = []
#Start a while loop that generates one individual with a random genome per iteration, until reaching the desired number of individuals
i = 0
while i < cantidad:
#Append a random permutation of the indiceCiudades array to the initial population
poblacionInicial.append(np.random.permutation(indiceCiudades).tolist())
#Increment the counter
i += 1
#Return the generated population as a Numpy array
return np.array(poblacionInicial)
'''
The function funcionDeOptimizacion(cromosoma, CoordenadasCiudades) is defined, where:
* cromosoma is the chromosome of a specific individual
* CoordenadasCiudades is the array holding the city coordinates
This function computes the fitness value of an individual
'''
def funcionDeOptimizacion(cromosoma, CoordenadasCiudades):
#Define the variable distancia, which accumulates the Euclidean distances between adjacent nodes
distancia = 0
#Iterate over the nodes to compute the distance between them
#The iteration starts at -1 because that index tells Numpy to take the last element. This makes the last-to-first-node computation trivial, since each step measures from the current node to the next one
for i in range(-1,len(cromosoma)-1):
#Compute the Euclidean distance and add it to the distancia variable
distancia += np.sqrt(np.sum(np.power(CoordenadasCiudades[cromosoma[i]]-CoordenadasCiudades[cromosoma[i+1]],2)))
#Compute the inverse of the distance and return the value
return 1/distancia
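A quick sanity check of the fitness function above (restated standalone so it can run on its own): four cities on the corners of the unit square give a closed tour of length 4, so the fitness, being the inverse of the total distance, should be exactly 0.25.

```python
# Standalone restatement of funcionDeOptimizacion for a sanity check.
import numpy as np

def funcionDeOptimizacion(cromosoma, CoordenadasCiudades):
    distancia = 0
    # Starting at -1 closes the tour: the first step measures last -> first.
    for i in range(-1, len(cromosoma) - 1):
        distancia += np.sqrt(np.sum(np.power(
            CoordenadasCiudades[cromosoma[i]] - CoordenadasCiudades[cromosoma[i + 1]], 2)))
    return 1 / distancia

# Unit square: the perimeter tour 0 -> 1 -> 2 -> 3 -> 0 has length 4.
ciudades = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
assert abs(funcionDeOptimizacion([0, 1, 2, 3], ciudades) - 0.25) < 1e-9
```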
'''
The function mutacion(cromosoma, probabilidad) is defined, where:
* cromosoma is the chromosome of a specific individual
* probabilidad is the probability that a mutation occurs
This function mutates the given individual (cromosoma) with the given probability
'''
def mutacion(cromosoma, probabilidad):
#First draw a number in the interval [0;1[; if it is smaller than the given probability, proceed with the mutation
if np.random.random() < probabilidad:
#Initialize two variables to hold the indices of the elements to swap
indice1 = 0
indice2 = 0
#If the indices are equal, keep drawing random indices until they differ
while indice1 == indice2:
#Draw an array of two random numbers between 0 and the chromosome length and assign each element to its variable
indice1, indice2 = np.random.randint(0, len(cromosoma),2)
#Swap the elements at the two randomly drawn indices
cromosoma[indice1], cromosoma[indice2] = cromosoma[indice2], cromosoma[indice1]
#Return the chromosome, mutated or not
return cromosoma
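The swap mutation above preserves an important invariant worth checking: it never adds or drops a city, so the result is always a permutation of the input tour. A standalone restatement with probability 1.0 (so the mutation always fires):

```python
# Standalone restatement of mutacion to exercise its invariant.
import numpy as np

def mutacion(cromosoma, probabilidad):
    if np.random.random() < probabilidad:
        indice1 = 0
        indice2 = 0
        # Redraw until the two swap indices differ.
        while indice1 == indice2:
            indice1, indice2 = np.random.randint(0, len(cromosoma), 2)
        cromosoma[indice1], cromosoma[indice2] = cromosoma[indice2], cromosoma[indice1]
    return cromosoma

original = [0, 1, 2, 3, 4]
mutado = mutacion(list(original), 1.0)
assert sorted(mutado) == original  # still a permutation of the same cities
assert mutado != original          # swapping two distinct genes changes the order
```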
'''
The function InicializacionModificada(cantidad, CoordenadasCiudades) is defined, where:
* cantidad is the number of individuals to create in the population
* CoordenadasCiudades are the city coordinates
This function generates a population that contains one advantaged individual
'''
def InicializacionModificada(cantidad, CoordenadasCiudades):
#Store the length of the city-coordinates array in a variable and initialize the list that will hold the initial population
numeroDeCiudades = len(CoordenadasCiudades)
poblacionInicial = []
#Start the loop that creates the individuals of the population
i = 0
while i < cantidad:
#Build an array with the city indices (a list from 0 up to the number of cities minus 1) and take one element as the initial node, removing it from the pool of candidate cities (since it has already been chosen)
indiceCiudades = np.linspace(0, numeroDeCiudades-1,numeroDeCiudades).astype(int).tolist()
nodoInicial = indiceCiudades.pop(np.random.randint(0,numeroDeCiudades))
#Initialize the array that will receive the city indices in order of proximity and append the first node
individuoActual = []
individuoActual.append(nodoInicial)
#Define the variable nodoPasado to hold the last appended index, so the next iteration can compute its nearest city
nodoPasado = nodoInicial
#Keep iterating until the indiceCiudades list is empty
while len(indiceCiudades) > 0:
#Sort the indices by the distance between cities, in ascending order
indiceCiudades.sort(key=lambda x: np.sqrt(np.sum(np.power(CoordenadasCiudades[x]-CoordenadasCiudades[nodoPasado],2))), reverse=False)
#Pop the first element of the list into the variable nodoPasado, then append it to the current individual's list
nodoPasado = indiceCiudades.pop(0)
individuoActual.append(nodoPasado)
#If this is not the first individual, apply between 3 and 10 mutations (chosen at random) to it
if i != 0:
for j in range(0,np.random.randint(3,11)):
#Same logic as in the mutation function
indice1 = 0
indice2 = 0
while indice1 == indice2:
indice1, indice2 = np.random.randint(0, len(individuoActual),2)
individuoActual[indice1], individuoActual[indice2] = individuoActual[indice2], individuoActual[indice1]
#Append the generated individual to the initial-population list
poblacionInicial.append(individuoActual)
#Increment the counter
i += 1
#Return the initial population as a Numpy array
return np.array(poblacionInicial)
#----------Program start-------------------
#Open the city-coordinates file and create a variable to hold the parsed data
archivo = open('CoordenadasCiudades.txt')
CoordenadasCiudades = []
#Define the simulation parameters
tamañoDePoblacion = 100
probabilidadDeMutacion = 0.2
nIter = 300
#Create one list to store the best individuals (those that beat the previously stored best individual) and another to store the best individuals and the average fitness of each iteration
mejorCamino = []
resultados = []
#Extract the data from the file and put it into a list
for linea in archivo:
coordenadas = linea.replace(',', '').replace('[', '').replace(']', '').split(' ')
CoordenadasCiudades.append([float(coordenadas[0]),float(coordenadas[1])])
#Convert the coordinate list into a Numpy array
CoordenadasCiudades = np.array(CoordenadasCiudades)
#Define the initial population
poblacion = crearPoblacionInicial(tamañoDePoblacion,len(CoordenadasCiudades)) #For the normal simulation
# poblacion = InicializacionModificada(tamañoDePoblacion,CoordenadasCiudades) #For the simulation with one advantaged individual
#Initialize the counter and start the iteration loop
n = nIter
while n >= 0:
#Copy the current population into a list and define the variable that accumulates the fitness values in order to compute their average
caminos = poblacion.tolist()
promedioGeneracional = 0
#Iterate over the population to compute each individual's fitness and the generational average
for i in range(0,len(caminos)):
#Store in the current index the path together with its fitness value, and add the fitness to the accumulator
caminos[i] = (caminos[i],funcionDeOptimizacion(caminos[i],CoordenadasCiudades))
promedioGeneracional += caminos[i][1]
#Divide the fitness total by the number of individuals to get the generational average
promedioGeneracional /= len(caminos)
#Sort the individuals by their fitness value
caminos.sort(key=lambda x: x[1], reverse=True)
#Store the best individual's fitness and the generational average in the results list
mejorValorAjuste = caminos[0][1]
resultados.append([promedioGeneracional,mejorValorAjuste])
#If there is no best path yet, just append the current best path to the list; otherwise append it only when its fitness beats that of the best individual so far
if mejorCamino == []:
mejorCamino.append((caminos[0][0], mejorValorAjuste))
elif mejorValorAjuste > mejorCamino[len(mejorCamino)-1][1]:
mejorCamino.append((caminos[0][0], mejorValorAjuste))
#Apply the mutation to each individual of the population, subject to the probability
for i in range(0,len(poblacion)):
poblacion[i] = mutacion(poblacion[i],probabilidadDeMutacion)
#Decrease the counter by 1
n -= 1
#Convert the results to a Numpy array
resultados = np.array(resultados)
#Map the indices in the chromosome of the best paths to the respective coordinates of each city
coordenadasMejorCamino = []
for indice in mejorCamino[len(mejorCamino)-1][0]:
coordenadasMejorCamino.append(CoordenadasCiudades[indice])
import functools, math, struct, sys
from struct import Struct
from io import BytesIO
from Catalog.Identifiers import PageId, FileId, TupleId
from Catalog.Schema import DBSchema
from Storage.Page import PageHeader, Page, PageTupleIterator
class SlottedPageHeader(PageHeader):
"""
A slotted page header implementation. This stores a slot array
implemented as a memoryview on the byte buffer backing the page
associated with this header. Additionally this header object stores
the number of slots in the array, as well as the index of the next
available slot.
The binary representation of this header object is: (numSlots, slotBuffer)
>>> import io
>>> buffer = io.BytesIO(bytes(4096))
>>> ph = SlottedPageHeader(buffer=buffer.getbuffer(), tupleSize=16)
>>> ph2 = SlottedPageHeader.unpack(buffer.getbuffer())
## Dirty bit tests
>>> ph.isDirty()
False
>>> ph.setDirty(True)
>>> ph.isDirty()
True
>>> ph.setDirty(False)
>>> ph.isDirty()
False
## Tuple count tests
>>> ph.hasFreeTuple()
True
# First tuple allocated should be at the first slot.
# Notice this is a slot index, not an offset as with contiguous pages.
>>> ph.nextFreeTuple() == 0
True
>>> ph.numTuples()
1
>>> tuplesToTest = 10
>>> [ph.nextFreeTuple() for i in range(0, tuplesToTest)]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> ph.numTuples() == tuplesToTest+1
True
>>> ph.hasFreeTuple()
True
# Check space utilization
>>> ph.usedSpace() == (tuplesToTest+1)*ph.tupleSize
True
>>> ph.freeSpace() == 4096 - (ph.headerSize() + ((tuplesToTest+1) * ph.tupleSize))
True
>>> remainingTuples = int(ph.freeSpace() / ph.tupleSize)
# Fill the page.
>>> [ph.nextFreeTuple() for i in range(0, remainingTuples)] # doctest:+ELLIPSIS
[11, 12, ...]
>>> ph.hasFreeTuple()
False
# No value is returned when trying to exceed the page capacity.
>>> ph.nextFreeTuple() == None
True
>>> ph.freeSpace() < ph.tupleSize
True
"""
# # Slots are two unsigned shorts: slot offset and slot data length
# slotRepr = Struct("HH")
# slotSize = slotRepr.size
prefixFmt = "H"
prefixRepr = struct.Struct(prefixFmt)
def __init__(self, **kwargs):
other = kwargs.get("other", None)
if other:
self.fromOther(other)
else:
buffer = kwargs.get("buffer", None)
parent = kwargs.get("parent", None)
if buffer:
if parent:
super().__init__(other=parent)
else:
super().__init__(**kwargs)
self.numSlots = kwargs.get("numSlots", self.maxTuples())
self.slots = self.initializeSlots(buffer)
self.binrepr = Struct(SlottedPageHeader.prefixFmt+str(self.slotBufferSize())+"s")
self.reprSize = PageHeader.size + self.binrepr.size
# Call postHeaderInitialize now that we've initialized our local attributes
self.postHeaderInitialize(**kwargs)
else:
raise ValueError("No backing buffer supplied for SlottedPageHeader")
def __eq__(self, other):
return super().__eq__(other) and (
self.numSlots == other.numSlots
and self.slots == other.slots )
def postHeaderInitialize(self, **kwargs):
# Check local attributes have been initialized
if hasattr(self, "reprSize"):
fresh = kwargs.get("unpacked", None) is None
buffer = kwargs.get("buffer", None)
parent = kwargs.get("parent", None)
if parent is None:
super().postHeaderInitialize(**kwargs)
# Push the subclass metadata into the buffer.
# In this case, we also push the parent page header due
# to the updated free space offset.
if fresh and buffer:
start = PageHeader.size
end = start + SlottedPageHeader.prefixRepr.size
buffer[start:end] = SlottedPageHeader.prefixRepr.pack(self.numSlots)
self.slots[:] = b'\x00' * self.slotBufferSize()
else:
self.slots[:] = kwargs.get("slots", b'\x00' * self.slotBufferSize())
def fromOther(self, other):
super().fromOther(other)
if isinstance(other, SlottedPageHeader):
self.numSlots = other.numSlots
self.slots = other.slots
self.binrepr = other.binrepr
self.reprSize = other.reprSize
# Parent method overrides
def headerSize(self):
return self.reprSize
def numTuples(self):
return len(self.usedSlots())
# Returns the maximum number of tuples that can be held in this page.
def maxTuples(self):
headerSize = PageHeader.size + SlottedPageHeader.prefixRepr.size
headerPerTuple = 0.125
return math.floor((self.pageCapacity - headerSize) / (self.tupleSize + headerPerTuple))
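The capacity formula in maxTuples above charges each tuple its own bytes plus one slot bit (0.125 bytes) in the bitmap. A standalone sketch; the 16-byte header size below is an assumed example value, not the real PageHeader.size:

```python
# Each tuple costs tuple_size bytes of data plus 1 bit (0.125 bytes)
# in the slot bitmap; the header size here is an assumed example value.
import math

def max_tuples(page_capacity, header_size, tuple_size):
    return math.floor((page_capacity - header_size) / (tuple_size + 0.125))

# (4096 - 16) / 16.125 = 253.02..., so 253 tuples fit.
assert max_tuples(4096, 16, 16) == 253
```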
# Returns the length of the bitvector for slots, in bytes.
def slotBufferSize(self):
if self.numSlots:
sz = self.numSlots >> 3
if self.numSlots % 8 == 0:
return sz
else:
return sz + 1
# Initializes the bitvector object for slots.
def initializeSlots(self, buffer):
if self.numSlots:
start = PageHeader.size + SlottedPageHeader.prefixRepr.size
end = start + self.slotBufferSize()
return memoryview(buffer[start:end])
else:
raise ValueError("Unable to initialize slots, do not know number of slots")
# Slotted page specific methods
# Returns the byte offset of the given slot in the bitvector.
def slotBufferByteOffset(self, slotIndex):
return slotIndex >> 3
# Returns the byte offset and bit offset of the given slot in the bitvector.
def slotBufferOffset(self, slotIndex):
return (slotIndex >> 3, 7 - (slotIndex % 8))
def hasSlot(self, slotIndex):
offset = self.slotBufferByteOffset(slotIndex)
return 0 <= offset < self.slots.nbytes
def getSlot(self, slotIndex):
if self.hasSlot(slotIndex):
(byteIdx, bitIdx) = self.slotBufferOffset(slotIndex)
return bool(self.slots[byteIdx] & (0b1 << bitIdx))
else:
raise ValueError("Invalid get slot index")
def setSlot(self, slotIndex, used):
if self.hasSlot(slotIndex):
(byteIdx, bitIdx) = self.slotBufferOffset(slotIndex)
if used:
self.slots[byteIdx] = self.slots[byteIdx] | (0b1 << bitIdx)
else:
self.slots[byteIdx] = self.slots[byteIdx] & ~(0b1 << bitIdx)
else:
raise ValueError("Invalid set slot index or slot value")
# Marks a slot as free.
def resetSlot(self, slotIndex):
self.setSlot(slotIndex, False)
# Returns the slot indexes for all of the unused slots.
def freeSlots(self):
freeIndexes = []
for i in range(self.slots.nbytes-1):
for j in range(8):
if not ( self.slots[i] & (0b1 << (7 - j)) ):
freeIndexes.append((i << 3) + j)
# Special handling of the final byte to only consider up to numSlots.
i = self.slots.nbytes-1
j = 0
while (i << 3) + j < self.numSlots:
if not ( self.slots[i] & (0b1 << (7 - j)) ):
freeIndexes.append((i << 3) + j)
j += 1
return freeIndexes
# Returns the slot indexes for all used slots.
def usedSlots(self):
usedIndexes = []
for i in range(self.slots.nbytes-1):
for j in range(8):
if self.slots[i] & (0b1 << (7 - j)):
usedIndexes.append((i << 3) + j)
# Special handling of the final byte to only consider up to numSlots.
i = self.slots.nbytes-1
j = 0
while (i << 3) + j < self.numSlots:
if self.slots[i] & (0b1 << (7 - j)):
usedIndexes.append((i << 3) + j)
j += 1
return usedIndexes
# Converts an absolute page offset into a slot index.
def tupleIndex(self, offset):
tupleIdx = None
for i in self.usedSlots():
start = self.slotTupleOffset(i)
end = start + self.tupleSize
if start <= offset and offset < end:
tupleIdx = i
break
return tupleIdx
# Returns the offset within the page for the tuple corresponding to a slot.
def slotOffset(self, slotIndex):
if slotIndex is not None and self.tupleSize:
return self.validatePageOffset(self.dataOffset() + (self.tupleSize * slotIndex))
# Slotted pages interpret tuple indexes as the slot index.
def tupleOffset(self, tupleId):
if tupleId:
return self.slotOffset(tupleId.tupleIndex)
def tupleRange(self, tupleId):
if tupleId and self.tupleSize and self.getSlot(tupleId.tupleIndex):
start = self.tupleOffset(tupleId)
end = start + self.tupleSize
return (self.validateDataOffset(start), self.validateDataOffset(end))
else:
return (None, None)
def pageRange(self, tupleId):
if tupleId and self.tupleSize and self.getSlot(tupleId.tupleIndex):
start = self.tupleOffset(tupleId)
end = start + self.tupleSize
return (self.validatePageOffset(start), self.validatePageOffset(end))
else:
return (None, None)
# Returns the space available in the page associated with this header.
def freeSpace(self):
return self.pageCapacity - (self.dataOffset() + self.usedSpace())
# Returns the space used in the page associated with this header.
def usedSpace(self):
return self.tupleSize * len(self.usedSlots())
# Returns whether the page has any free space for a tuple.
def hasFreeTuple(self):
fullSlots = bytearray(b'\xff' * self.slots.nbytes)
if self.numSlots % 8 != 0:
b = 0
for i in range(self.numSlots % 8):
b = b | (0b1 << (7 - i))
fullSlots[self.slots.nbytes-1] = b
return self.slots != fullSlots
# Returns the tupleIndex of the next free tuple.
# This should also "allocate" the tuple, such that any subsequent call
# does not yield the same tupleIndex.
def nextFreeTuple(self):
index = None
for i in range(self.slots.nbytes-1):
# Compute the first free slot by:
# i. xor'ing with all bits set to determine the free bit mask
# ii. differencing against the bit length (i.e., the # of bits to represent the free mask)
slotInByte = 8 - ( (self.slots[i] ^ 0xff).bit_length() )
if slotInByte < 8:
index = (i << 3) + slotInByte
self.useTupleIndex(index)
break
# Special handling of the final byte to only consider up to numSlots.
if index is None:
i = self.slots.nbytes-1
j = 0
while index is None and (i << 3) + j < self.numSlots:
if not ( self.slots[i] & (0b1 << (7 - j)) ):
index = (i << 3) + j
self.useTupleIndex(index)
j += 1
return index
def nextTupleRange(self):
tupleIndex = self.nextFreeTuple()
start = self.slotOffset(tupleIndex) if tupleIndex is not None else None
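The slot-management methods above pack one bit per tuple slot, MSB-first within each byte (`slotBufferOffset` returns `(slotIndex >> 3, 7 - (slotIndex % 8))`). A minimal standalone sketch of that arithmetic, with hypothetical helper names that are not part of the original class:

```python
def slot_offsets(slot_index):
    """Return (byte_index, bit_index) for a slot, MSB-first within a byte."""
    return (slot_index >> 3, 7 - (slot_index % 8))

def set_slot(bitmap, slot_index, used):
    """Mark a slot used/free in a bytearray bitmap."""
    byte_idx, bit_idx = slot_offsets(slot_index)
    if used:
        bitmap[byte_idx] |= (1 << bit_idx)
    else:
        bitmap[byte_idx] &= ~(1 << bit_idx)

def get_slot(bitmap, slot_index):
    """Return True iff the slot is marked used."""
    byte_idx, bit_idx = slot_offsets(slot_index)
    return bool(bitmap[byte_idx] & (1 << bit_idx))

bitmap = bytearray(2)       # room for 16 slots
set_slot(bitmap, 0, True)   # slot 0 -> MSB of byte 0 (0x80)
set_slot(bitmap, 9, True)   # slot 9 -> bit 6 of byte 1 (0x40)
```

The MSB-first convention is why `nextFreeTuple` above can find the first free slot in a byte via `8 - ((byte ^ 0xff).bit_length())`.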
# Weights are uniform after resampling
self.weights = 1./self.num_particles * ones(self.num_particles)
new_particles = empty(self.num_particles,dtype=object)
used = set()
for i in range(len(resampled_indices)):
j = resampled_indices[i]
if j in used:
new_particles[i] = self.particles[j].shallow_copy()
else:
new_particles[i] = self.particles[j]
used.add(j)
self.particles = new_particles
end_t = time.time()
elapsed = end_t - start_t
remaining = elapsed * (self.T-t)
finish_time = time.strftime("%a %H:%M:%S",
time.localtime(time.time()+remaining))
print("Status: %i/%i -- %.1f => %s" % (t, self.T, elapsed, finish_time))
sys.stdout.flush()
logging.info("One step required " + str(elapsed) + " seconds, " +
str(remaining) + " secs remaining.")
def get_labeling(self):
labeling = empty((self.num_particles,self.T),dtype=int32)
for p in range(self.num_particles):
labeling[p,:] = self.particles[p].c
return labeling
class GibbsSampler(Inference):
def __init__(self,data,data_time,params,model,state=None):
self.data = data
self.data_time = data_time
self.model = model
self.params = params
self.T = data.shape[1]
if state is not None:
self.state = state
if state.T != self.T:
logging.error("State length does not match data length!")
else:
self.state = self.__init_state()
self.old_lnp = self.p_log_joint()
self.lnps = []
self.num_accepted = 0
self.num_rejected = 0
self.__init_debugging()
def __init_debugging(self):
pass
def __init_state(self):
"""Initialize the state of the Gibbs sampler."""
self.num_clusters = 1 # TODO
def p_log_joint(self,inc_walk=True,inc_likelihood=True,inc_death=True):
"""Compute the log-joint probability of the current state."""
state = self.state
ms = zeros_like(self.state.mstore)
lnp = 0
active = set()
for t in range(self.T):
# construct m up to time t
if t > 0:
ms[:,t] = ms[:,t-1]
ms[state.c[t],t] += 1
dying = where(state.d == t)[0]
for tau in dying:
ms[state.c[tau],t] -= 1
if inc_walk:
for k in where(ms[:,t]>0)[0]:
theta = self.state.U[k,t]
if t > 0 and ms[k,t-1]>0:
# old cluster that is still alive
# aux | previous theta
old_theta = self.state.U[k,t-1]
aux_vars = self.state.aux_vars[t-1,k,:,:]
lnp += self.model.kernel.p_log_aux_vars(old_theta,aux_vars)
# theta | aux
lnp += self.model.kernel.p_log_posterior(theta,aux_vars)
else:
# new cluster
# G0(theta)
lnp += self.model.p_log_prior_params(theta)
# c | m
# TODO: speed up computation of alive clusters
lnp += log(self.p_crp(t,ms,active))
active.add(state.c[t])
# x | c, theta
if inc_likelihood:
c = self.state.c[t]
theta = self.state.U[c,t]
lnp += sum(logpnorm(self.data[:,t],theta.mu,theta.lam))
# d_t
if inc_death:
lnp += self.p_log_deathtime(t)
# update mstore
# self.mstore = ms
return lnp
def p_log_joint_cs(self):
return self.p_log_joint(False,False,False)
def mh_sweep(self):
"""Do one MH sweep through the data, i.e. propose for all parameters
once."""
for t in range(self.T):
# propose new c_t
self.sample_label(t)
self.propose_death_time(t)
self.sample_params(t)
self.propose_auxs(t)
#print self.num_accepted + self.num_rejected
#print ("Acceptance rate: %.2f" %
# (self.num_accepted/float(self.num_accepted + self.num_rejected)))
def propose_c(self,t):
# propose from mean occupancy count (not symmetric!)
active = where(sum(self.mstore,1)>0)[0]
K = active.shape[0]
forward_probs = zeros(K+1)
forward_probs[0:K] = mean(self.mstore[active,:],1)
forward_probs[K] = self.params.alpha
forward_probs /= sum(forward_probs) # normalize
new_c = rdiscrete(forward_probs)
forward_lnq = log(forward_probs[new_c])
old_c = self.state.c[t]
if new_c == K:
# new cluster: draw a fresh label for it
new_label = self.state.free_labels.pop()
active = hstack((active, new_label))
self.state.c[t] = new_label
else:
self.state.c[t] = active[new_c]
# TODO need to sample new d as well ...
new_ms = self.state.reconstruct_mstore(self.state.c,self.state.d)
backward_probs = zeros(K+1)
backward_probs[0:K] = mean(new_ms[active[0:K],:],1)
backward_probs[K] = self.params.alpha
backward_probs /= sum(backward_probs) # normalize
# log backward-proposal probability of moving back to the old label
old_idx = where(active[0:K] == old_c)[0]
old_pos = old_idx[0] if len(old_idx) > 0 else K
backward_lnq = log(backward_probs[old_pos])
if self.mh_accept(backward_lnq - forward_lnq):
return
else:
self.state.c[t] = old_c
def mh_accept(self,q_ratio=0.):
"""Return true if the current state is to be accepted by the MH
algorithm and update self.old_lnp.
Params:
q_ratio -- the log of the ratio of the proposal
= log q(z|z*)- log q(z*|z)
= 0 if the proposal is symmetric
"""
lnp = self.p_log_joint()
A = min(1,exp(lnp - self.old_lnp + q_ratio))
if random_sample() < A:
# accept!
self.old_lnp = lnp
self.num_accepted += 1
return True
else:
# reject
self.num_rejected += 1
return False
def p_log_deathtime(self,t):
"""Compute the log probability of the death time of the allocation
at time step t."""
alive = self.state.d[t] - t - 1
return alive*log(self.params.rho) + log(1-self.params.rho)
def sweep(self):
"""Do one Gibbs sweep though the data."""
for t in range(self.T):
logging.info("t=%i/%i" % (t,self.T))
self.sample_label(t)
self.sample_death_time(t)
self.sample_aux_vars(t)
self.sample_params(t)
self.state.check_consistency(self.data_time)
input()  # pause between sweeps for inspection
def p_crp(self,t,ms,active):
"""Compute the conditional probability of the allocation at time
t given the table sizes m (and the spike times tau).
"""
if t == 0:
return 1
state = self.state
active = array(list(active))
num_active = active.shape[0]
p_crp = zeros(num_active+1)
p_crp[-1] = self.params.alpha
for i in range(num_active):
c = active[i]
if (self.data_time[t] - self.get_last_spike_time(c,t-1)
< self.params.r_abs):
p_crp[i] = 0
else:
p_crp[i] = ms[c,t-1]
p_crp = normalize(p_crp)
idx = where(active==self.state.c[t])[0]
if len(idx) > 0:
pos = idx[0]
else:
pos = num_active
return p_crp[pos]
def get_last_spike_time(self,c,t):
"""Returns the occurence time of the last spike associated with cluster c
before time t."""
return self.state.lastspike[c,t]
def propose_death_time(self,t):
log_joint_before = self.p_log_joint(False,False)
old_d = self.state.d[t]
new_d = t + 1 + rgeometric(self.params.rho)
if new_d > self.T:
new_d = self.T
self.state.d[t] = new_d
log_joint_after = self.p_log_joint(False,False,False)
A = min(1,exp(log_joint_after - log_joint_before))
if random_sample() < A:
# accept
# if we extended the life of the cluster, sample new params
logging.debug("Accepted new death time %i" % new_d)
self.num_accepted += 1
if new_d > self.state.deathtime[self.state.c[t]]:
self.sample_walk(
self.state.c[t],
self.state.deathtime[self.state.c[t]],
new_d
)
self.state.deathtime[self.state.c[t]] = max(
self.state.d[self.state.c == self.state.c[t]])
self.state.mstore = self.state.reconstruct_mstore(
self.state.c,
self.state.d)
#print self.state.mstore
else:
# reject
self.num_rejected += 1
self.state.d[t] = old_d
def sample_death_time(self,t):
"""Sample a new death time for the allocation variable at time t.
The posterior p(d_t|...) is proportional to p(c_(t:last)|d_t)p(d_t),
where p(d_t) is prior death time distribution (geometric) and p(c|d_t)
is the probability of the assignments to cluster c_t from the current
time step until the last allocation in that cluster dies.
"""
state = self.state
c = state.c[t]
mc = state.mstore[c,:].copy()
d_old = state.d[t]
length = self.T - t
# relative indices of assignments to this cluster
assignments = where(state.c[t:] == c)[0]
if assignments[0] != 0:
raise RuntimeError("Something's wrong!")
assignments = assignments[1:]
# determine the last assignment made to this cluster (rel. to t)
last_assignment = assignments[-1]
dp = ones(length)
# find the last allocation that "depends" on this allocation being,
# i.e. without it mstore at that point would be 0.
# take out current allocation
mc[t:d_old] -= 1
dependencies = where(
logical_and(state.c[t:d_old] == c,
mc[t:d_old] == 1
))[0]
if len(dependencies)>0:
last_dep = dependencies[-1]
# the probability of deletion before last_dep is 0
dp[0:last_dep]=0
else:
last_dep = 0
possible_deaths = t+arange(last_dep+1,self.T-t+1)
p = self.p_labels_given_deathtime(t,possible_deaths)
dp[last_dep:self.T-t] = p
# The prior probability for d=t+1,...,T
prior = self.params.rho ** arange(0,length)*(1-self.params.rho)
prior[-1] = 1-sum(prior[0:-1])
q = dp * prior
q = q / sum(q)
dt = rdiscrete(q)
return dt + t + 1
def p_labels_given_deathtime(self,t,possible_deaths):
p1 = self.p_labels_given_deathtime_slow(t,possible_deaths)
p2 = self.p_labels_given_deathtime_fast(t,possible_deaths)
p1 = p1/sum(p1)
p2 = p2/sum(p2)
assert(all(p1==p2))
return p1
def p_labels_given_deathtime_slow(self,t,possible_deaths):
"""Compute the likelihood of the label at time t as a function of the
possible death times for that label.
"""
c = self.state.c[t]
d_old = self.state.d[t]
p = ones(possible_deaths.shape[0])
for i in range(possible_deaths.shape[0]):
d = possible_deaths[i]
# construct mstore for this situation
ms = self.state.mstore.copy()
ms[c,t:d_old] -= 1
ms[c,t+1:d] += 1
for tau in range(t+1,self.T):
p[i] *= self.p_crp(tau,ms[:,tau-1])
return p
def p_labels_given_deathtime_fast(self,t,possible_deaths):
"""Like the slow version, but compute the likelihood incrementally,
thus saving _a lot_ of computation time."""
c = self.state.c[t]
d_old = self.state.d[t]
last_dep = possible_deaths[0] - 1 # this should always be true
num_possible = self.T - last_dep
assert(num_possible==possible_deaths.shape[0])
# possible deaths always ranges from last_dep+1 to T (inclusive)
p = ones(possible_deaths.shape[0])
# first, compute the full solution for the first possible death time
ms = self.state.mstore.copy()
# ms[:,t-1] has to represent the state after allocation at time step
# t-1 and after deletion at time step t
# TODO: Do we have to compute this backwards?!
ms[c,last_dep:d_old] -= 1
for tau in range(last_dep+1,self.T):
p[0] *= self.p_crp(tau,ms[:,tau-1])
for i in range(1,num_possible-1):
d = i + last_dep + 1
#print(d)
ms[c,d-1] += 1
p[i] = p[i-1]
if self.state.c[d] == c:
# numerator changed
p[i] = p[i]/(ms[c,d-1] - 1)*ms[c,d-1]
# the normalization constant changed as well
old = sum(ms[:,d-1]) + self.params.alpha
new = old + 1
Z = old/new
p[i] = p[i]*Z
# dying after the last allocation has the same probability as the previous one
p[-1] = p[-2]
return p
def sample_label(self,t):
"""Sample a new label for the data point at time t.
The conditional probability of p(c_t|rest) is proportional to
p(c_t|seating) x p(x_t|c_t)
TODO: Handle the case of singletons separately -- there is no point
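The `mh_accept` method above implements the standard Metropolis-Hastings acceptance rule in log space: accept with probability `min(1, exp(lnp_new - lnp_old + q_ratio))`. A minimal sketch of that rule in isolation (using the stdlib `random.random` in place of the script's `random_sample`; the injectable `rng` parameter is mine, for testability):

```python
import math
import random

def mh_accept(lnp_new, lnp_old, q_ratio=0.0, rng=random.random):
    """Metropolis-Hastings acceptance test in log space.

    q_ratio = log q(z|z*) - log q(z*|z); 0 for a symmetric proposal.
    """
    accept_prob = min(1.0, math.exp(lnp_new - lnp_old + q_ratio))
    return rng() < accept_prob

# A strictly better state under a symmetric proposal is always accepted,
# because exp(lnp_new - lnp_old) > 1 clips the acceptance probability to 1.
accepted = mh_accept(-10.0, -12.0)
```

A worse state is still accepted with probability `exp(lnp_new - lnp_old)`, which is what lets the chain escape local modes.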
# Repo: twstewart42/ParlAI
#!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import unittest
from typing import Dict
import torch
import random
from parlai.core.metrics import (
AverageMetric,
SumMetric,
FixedMetric,
Metrics,
GlobalAverageMetric,
MacroAverageMetric,
TimerMetric,
aggregate_unnamed_reports,
aggregate_named_reports,
InterDistinctMetric,
IntraDistinctMetric,
FairseqBleuMetric,
)
from parlai.core.torch_classifier_agent import (
ConfusionMatrixMetric,
WeightedF1Metric,
AUCMetrics,
)
import parlai.utils.testing as testing_utils
class TestMetric(unittest.TestCase):
"""
Test individual Metric classes.
"""
def test_sum_metric_inputs(self):
passing_inputs_and_outputs = [
(2, 2.0),
(-5.0, -5.0),
(torch.LongTensor([[-1]]), -1.0),
(torch.DoubleTensor([34.68]), 34.68),
]
for input_, output in passing_inputs_and_outputs:
actual_output = SumMetric(input_).value()
self.assertEqual(actual_output, output)
failing_inputs = [
('4', AssertionError),
([6.8], AssertionError),
(torch.Tensor([1, 3.8]), ValueError), # Tensor has more than 1 element
]
for input_, error in failing_inputs:
with self.assertRaises(error):
SumMetric(input_)
def test_sum_metric_additions(self):
input_pairs_and_outputs = [
(1, 2, 3),
(3, -5.0, -2.0),
(torch.Tensor([[[4.2]]]), 3, 7.2),
(torch.DoubleTensor([1]), torch.LongTensor([[-1]]), 0),
]
for input1, input2, output in input_pairs_and_outputs:
actual_output = (SumMetric(input1) + SumMetric(input2)).value()
self.assertAlmostEqual(actual_output, output, places=6)
def test_average_metric_inputs(self):
passing_inputs_and_outputs = [
((2, 4), 0.5),
((17.0, 10.0), 1.7),
((torch.Tensor([2.3]), torch.LongTensor([2])), 1.15),
((torch.Tensor([2.3]), torch.Tensor([2.0])), 1.15),
]
for input_, output in passing_inputs_and_outputs:
actual_output = AverageMetric(input_[0], input_[1]).value()
self.assertAlmostEqual(actual_output, output, places=6)
self.assertIsInstance(actual_output, float)
failing_inputs = [
((2, '4'), AssertionError),
((torch.Tensor([1, 1]), torch.Tensor([2])), ValueError),
]
for input_, error in failing_inputs:
with self.assertRaises(error):
AverageMetric(input_[0], input_[1])
def test_average_metric_additions(self):
input_pairs_and_outputs = [
((2, 4), (1.5, 1), 0.7),
(
(torch.LongTensor([[[2]]]), torch.Tensor([4])),
(torch.FloatTensor([1.5]), 1),
0.7,
),
]
for input1, input2, output in input_pairs_and_outputs:
actual_output = (
AverageMetric(input1[0], input1[1])
+ AverageMetric(input2[0], input2[1])
).value()
self.assertAlmostEqual(actual_output, output, places=6)
self.assertIsInstance(actual_output, float)
def test_fixedmetric(self):
assert (FixedMetric(3) + FixedMetric(3)).value() == 3
with self.assertRaises(ValueError):
_ = FixedMetric(3) + FixedMetric(4)
def test_macroaverage_additions(self):
m1 = AverageMetric(1, 3)
m2 = AverageMetric(3, 4)
assert (m1 + m2) == AverageMetric(4, 7)
assert MacroAverageMetric({'a': m1, 'b': m2}) == 0.5 * (1.0 / 3 + 3.0 / 4)
class TestMetrics(unittest.TestCase):
"""
Test the Metrics aggregator.
"""
def test_simpleadd(self):
m = Metrics()
m.add('key', SumMetric(1))
m.add('key', SumMetric(2))
assert m.report()['key'] == 3
m.clear()
assert 'key' not in m.report()
m.add('key', SumMetric(1.5))
m.add('key', SumMetric(2.5))
assert m.report()['key'] == 4.0
def test_shared(self):
m = Metrics()
m2 = Metrics(shared=m.share())
m3 = Metrics(shared=m.share())
m2.add('key', SumMetric(1))
m3.add('key', SumMetric(2))
m.add('key', SumMetric(3))
assert m.report()['key'] == 6
def test_multithreaded(self):
# legacy test, but left because it's just another test
m = Metrics()
m2 = Metrics(shared=m.share())
m3 = Metrics(shared=m.share())
m2.add('key', SumMetric(1))
m3.add('key', SumMetric(2))
m.add('key', SumMetric(3))
assert m.report()['key'] == 6
def test_verymultithreaded(self):
# legacy test, but useful all the same, for ensuring
# metrics doesn't care about the order things are done
m = Metrics()
nt = 128
ms = [Metrics(shared=m.share()) for _ in range(nt)]
# intentionally just over the int overflow
for _ in range(32768 + 1):
ms[random.randint(0, nt - 1)].add('key', SumMetric(1))
thread_ids = list(range(nt))
random.shuffle(thread_ids)
assert m.report()['key'] == 32768 + 1
def test_largebuffer(self):
# legacy test. left as just another test
m = Metrics()
m2 = Metrics(shared=m.share())
# intentionally just over the int overflow
for _ in range(32768 + 1):
m2.add('key', SumMetric(1))
assert m.report()['key'] == 32768 + 1
def test_recent(self):
m = Metrics()
m2 = Metrics(shared=m.share())
m.add('test', SumMetric(1))
assert m.report() == {'test': 1}
assert m.report_recent() == {'test': 1}
m.clear_recent()
m.add('test', SumMetric(2))
assert m.report() == {'test': 3}
assert m.report_recent() == {'test': 2}
assert m2.report() == {'test': 3}
assert m2.report_recent() == {}
m2.add('test', SumMetric(3))
assert m2.report() == {'test': 6}
assert m.report() == {'test': 6}
assert m2.report_recent() == {'test': 3}
assert m.report_recent() == {'test': 2}
m2.clear_recent()
assert m2.report() == {'test': 6}
assert m.report() == {'test': 6}
assert m2.report_recent() == {}
assert m.report_recent() == {'test': 2}
m.clear_recent()
assert m2.report() == {'test': 6}
assert m.report() == {'test': 6}
assert m.report_recent() == {}
class TestAggregators(unittest.TestCase):
def test_unnamed_aggregation(self):
report1 = {
'avg': AverageMetric(3, 4),
'sum': SumMetric(3),
'fixed': FixedMetric(4),
'global_avg': GlobalAverageMetric(3, 4),
}
report2 = {
'avg': AverageMetric(1, 3),
'sum': SumMetric(4),
'fixed': FixedMetric(4),
'global_avg': GlobalAverageMetric(1, 3),
}
agg = aggregate_unnamed_reports([report1, report2])
assert agg['avg'] == 4.0 / 7
assert agg['sum'] == 7
assert agg['fixed'] == 4
assert agg['global_avg'] == 4.0 / 7
def test_macro_aggregation(self):
report1 = {
'avg': AverageMetric(3, 4),
'sum': SumMetric(3),
'fixed': FixedMetric(4),
'global_avg': GlobalAverageMetric(3, 4),
}
report2 = {
'avg': AverageMetric(1, 3),
'sum': SumMetric(4),
'fixed': FixedMetric(4),
'global_avg': GlobalAverageMetric(1, 3),
}
agg = aggregate_named_reports({'a': report1, 'b': report2}, micro_average=False)
assert agg['avg'] == 0.5 * (3.0 / 4 + 1.0 / 3)
assert agg['sum'] == 7
assert agg['fixed'] == 4
assert agg['global_avg'] in (report1['global_avg'], report2['global_avg'])
# task level metrics
assert agg['a/avg'] == 3.0 / 4
assert agg['a/sum'] == 3
assert agg['a/fixed'] == 4
assert 'a/global_avg' not in agg
assert agg['b/avg'] == 1.0 / 3
assert agg['b/sum'] == 4
assert agg['b/fixed'] == 4
assert 'b/global_avg' not in agg
def test_uneven_macro_aggregation(self):
report1 = {'avg': AverageMetric(1, 1)}
report2 = {'avg': AverageMetric(0, 1)}
report3 = {'avg': AverageMetric(0, 1)}
agg1 = aggregate_named_reports(
{'a': report1, 'b': report2}, micro_average=False
)
agg2 = aggregate_named_reports({'a': {}, 'c': report3}, micro_average=False)
agg = aggregate_unnamed_reports([agg1, agg2])
assert agg1['avg'] == 0.5
assert agg2['avg'] == 0.0
assert agg['a/avg'] == 1.0
assert agg['b/avg'] == 0.0
assert agg['c/avg'] == 0.0
assert agg['avg'] == 1.0 / 3
def test_time_metric(self):
metric = TimerMetric(10, 0, 1)
assert metric.value() == 10
metric = TimerMetric(10, 0, 2)
assert metric.value() == 5
metric2 = TimerMetric(10, 4, 5)
# final start time 0
# final end time 5
# total processed = 20
assert (metric + metric2).value() == 4
def test_micro_aggregation(self):
report1 = {
'avg': AverageMetric(3, 4),
'sum': SumMetric(3),
'fixed': FixedMetric(4),
'global_avg': GlobalAverageMetric(3, 4),
}
report2 = {
'avg': AverageMetric(1, 3),
'sum': SumMetric(4),
'fixed': FixedMetric(4),
'global_avg': GlobalAverageMetric(1, 3),
}
agg = aggregate_named_reports({'a': report1, 'b': report2}, micro_average=True)
assert agg['avg'] == 4.0 / 7
assert agg['sum'] == 7
assert agg['fixed'] == 4
assert agg['global_avg'] in (report1['global_avg'], report2['global_avg'])
# task level metrics
assert agg['a/avg'] == 3.0 / 4
assert agg['a/sum'] == 3
assert agg['a/fixed'] == 4
assert 'a/global_avg' not in agg
assert agg['b/avg'] == 1.0 / 3
assert agg['b/sum'] == 4
assert agg['b/fixed'] == 4
assert 'b/global_avg' not in agg
def test_auc_metrics(self):
class_name = 'class_notok'
class_to_int = {'class_notok': 1, 'class_ok': 0}
decimal_place = 3
# task 1; borrowing example from scikit learn
task1_probabilities = [0.1, 0.4, 0.35, 0.8]
task1_gold_labels = ['class_ok', 'class_ok', 'class_notok', 'class_notok']
task1_pos_buckets = {0.35: 1, 0.8: 1}
task1_neg_buckets = {0.1: 1, 0.4: 1}
task1_exp_fp_tp = {
# thres: (False positives, True positives)
0.1: (2, 2),
0.35: (1, 2),
0.4: (1, 1),
0.8: (0, 1),
'_': (0, 0),
}
# task 2; checking with an odd number
task2_probabilities = [0.05, 0.2, 0.6]
task2_gold_labels = ['class_ok', 'class_ok', 'class_notok']
task2_pos_buckets = {0.6: 1}
task2_neg_buckets = {0.05: 1, 0.2: 1}
task2_exp_fp_tp = {0.05: (2, 1), 0.2: (1, 1), 0.6: (0, 1), 1.5: (0, 0)}
# task 3: combining task 1 and task 2
task3_probabilities = task1_probabilities + task2_probabilities
task3_gold_labels = task1_gold_labels + task2_gold_labels
task3_pos_buckets = {0.35: 1, 0.8: 1, 0.6: 1}
task3_neg_buckets = {0.1: 1, 0.4: 1, 0.05: 1, 0.2: 1}
task3_exp_fp_tp = {
# threshold: FP, TP
0.05: (4, 3),
0.1: (3, 3),
0.2: (2, 3),
0.35: (1, 3),
0.4: (1, 2),
0.6: (0, 2),
0.8: (0, 1),
'_': (0, 0),
}
# task 4: testing when there's ones in the same bucket
task4_probabilities = [0.1, 0.400001, 0.4, 0.359, 0.35, 0.900001, 0.9]
task4_gold_labels = [
'class_ok',
'class_ok',
'class_ok',
'class_notok',
'class_notok',
'class_notok',
'class_notok',
]
task4_neg_buckets = {0.1: 1, 0.4: 2}
task4_pos_buckets = {0.35: 1, 0.359: 1, 0.9: 2}
task4_exp_fp_tp = {
# thres: (False positives, True positives)
0.1: (3, 4),
0.35: (2, 4),
0.359: (2, 3),
0.4: (2, 2),
0.9: (0, 2),
'_': (0, 0),
}
# task 5: testing when there's more difference in the bucket (similar to task 4),
# but testing to make sure the rounding/flooring is correct, and the edge cases 0.0, 1.0
task5_probabilities = [0, 0.8, 0.4009, 0.400, 0.359, 0.35, 0.9999, 0.999, 1]
# 4 okay, 5 not okay
task5_gold_labels = [
'class_ok',
'class_ok',
'class_ok',
'class_ok',
'class_notok',
'class_notok',
'class_notok',
'class_notok',
'class_notok',
]
task5_neg_buckets = {0: 1, 0.8: 1, 0.4: 2}
task5_pos_buckets = {0.35: 1, 0.359: 1, 0.999: 2, 1: 1}
task5_exp_fp_tp = {
# thres: (False positives, True
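The expected FP/TP dictionaries in the AUC tests above can be reproduced from the bucket counts by counting, for each observed score used as a threshold, how many negative and positive examples score at or above it. A hedged sketch (the helper name is mine, not part of the `AUCMetrics` API, and it only emits entries for observed scores, not the `(0, 0)` sentinel thresholds in the test data):

```python
def fp_tp_by_threshold(pos_buckets, neg_buckets):
    """Map each observed score t to (FP, TP) when scores >= t are classified positive.

    pos_buckets / neg_buckets: {score: count} for gold-positive / gold-negative examples.
    """
    thresholds = sorted(set(pos_buckets) | set(neg_buckets))
    out = {}
    for t in thresholds:
        fp = sum(n for score, n in neg_buckets.items() if score >= t)
        tp = sum(n for score, n in pos_buckets.items() if score >= t)
        out[t] = (fp, tp)
    return out

# Task 1 buckets from the test above:
task1 = fp_tp_by_threshold({0.35: 1, 0.8: 1}, {0.1: 1, 0.4: 1})
# task1 == {0.1: (2, 2), 0.35: (1, 2), 0.4: (1, 1), 0.8: (0, 1)}
```

Sweeping the threshold from high to low traces the ROC curve; the area under it is the AUC the metric class accumulates.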
in st.get_tg_links(data[device_type]):
if link <= max_mem:
if config_type == 'ipv4' or config_type == 'all':
ipaddr1 = "{}.{}.{}.1".format(ipv4_adr_leaf, j, link)
ipaddr2 = "{}.{}.{}.2".format(ipv4_adr_leaf, j, link)
tg_neigh_list.append(ipaddr2)
leaf_neigh_list.append(ipaddr1)
if class_reconfig == "No":
[tg, _, h1, _] = tg_info[device_type]['ipv4'][local]
rv = config_bgp_on_tg(tg, h1, dut_as, tg_as+j-1, ipaddr1, action='start', af='ipv4')
tg_info[device_type]['ipv4'][local].append(rv)
if config_type == 'ipv6' or config_type == 'all':
ip6addr_1 = "{}:{}:{}::1".format(ipv6_adr_leaf, j, link)
ip6addr_2 = "{}:{}:{}::2".format(ipv6_adr_leaf, j, link)
tg_neigh6_list.append(ip6addr_2)
leaf_neigh6_list.append(ip6addr_1)
if class_reconfig == "No":
[tg, _, h1, _] = tg_info[device_type]['ipv6'][local]
rv = config_bgp_on_tg(tg, h1, dut_as, tg_as+j-1, ip6addr_1, action='start', af='ipv6')
tg_info[device_type]['ipv6'][local].append(rv)
i += 1
link += 1
if config_type == 'ipv4' or config_type == 'all':
bgpapi.config_bgp_multi_neigh_use_peergroup(data[device_type], local_asn=dut_as,
peer_grp_name='leaf_tg', remote_asn=tg_as+j-1,
neigh_ip_list=tg_neigh_list, family='ipv4', activate=1)
if config_type == 'ipv6' or config_type == 'all':
bgpapi.config_bgp_multi_neigh_use_peergroup(data[device_type], local_asn=dut_as,
peer_grp_name='leaf_tg6', remote_asn=tg_as+j-1,
neigh_ip_list=tg_neigh6_list, family='ipv6', activate=1)
else:
bgpapi.cleanup_bgp_config([data[dut] for dut in data['leaf_routers']+data['spine_routers']])
return result, tg_info
def l3tc_vrfipv4v6_address_leafspine_rr_tg_bgp_config(config='yes', vrf_type='all', config_type='all', **kwargs):
"""
:param config:
:param vrf_type:
:param config_type:
:param kwargs:
:return:
"""
st.banner("{}Configuring BGP on TG connected interfaces and on TG.".format('Un' if config != 'yes' else ''))
config = 'add' if config == 'yes' else 'remove'
max_mem = int(data['l3_max_tg_links_each_leaf_spine'])
#mas_addr_spine = data['mac_addres_tg_src_spine']
#mas_addr_leaf = data['mac_addres_tg_src_leaf']
#ipv4_adr_spine = data['ipv4_addres_first_octet_tg_spine']
#ipv6_adr_spine = data['ipv6_addres_first_octet_tg_spine']
ipv4_adr_leaf = data['ipv4_addres_first_octet_tg_leaf']
ipv6_adr_leaf = data['ipv6_addres_first_octet_tg_leaf']
spine_as = int(data['spine_as'])
leaf_as = int(data['leaf_as'])
leaf_tg_as = int(data['leaf_tg_as'])
spine_tg_as = int(data['spine_tg_as'])
if 'rr_enable' in kwargs:
spine_tg_as = spine_as
leaf_as = spine_as
result = True
if config == "add":
i = 0
for j, device_type in enumerate(tg_connected_routers, start=1):
tg_neigh_list, leaf_neigh_list, tg_neigh6_list, leaf_neigh6_list = [], [], [], []
link = 1
tg_as = leaf_tg_as
dut_as = leaf_as
if 'spine' in device_type:
tg_as = spine_tg_as
dut_as = spine_as
for local, _, _ in st.get_tg_links(data[device_type]):
if link <= max_mem:
if config_type == 'ipv4' or config_type == 'all':
ipaddr1 = "{}.{}.{}.1".format(ipv4_adr_leaf, j, link)
ipaddr2 = "{}.{}.{}.2".format(ipv4_adr_leaf, j, link)
tg_neigh_list.append(ipaddr2)
leaf_neigh_list.append(ipaddr1)
[tg, _, h1, _] = tg_info[device_type]['ipv4'][local]
rv = config_bgp_on_tg(tg, h1, dut_as, tg_as, ipaddr1, action='start', af='ipv4')
tg_info[device_type]['ipv4'][local].append(rv)
if config_type == 'ipv6' or config_type == 'all':
ip6addr_1 = "{}:{}:{}::1".format(ipv6_adr_leaf, j, link)
ip6addr_2 = "{}:{}:{}::2".format(ipv6_adr_leaf, j, link)
tg_neigh6_list.append(ip6addr_2)
leaf_neigh6_list.append(ip6addr_1)
[tg, _, h1, _] = tg_info[device_type]['ipv6'][local]
rv = config_bgp_on_tg(tg, h1, dut_as, tg_as, ip6addr_1, action='start', af='ipv6')
tg_info[device_type]['ipv6'][local].append(rv)
i += 1
link += 1
if config_type == 'ipv4' or config_type == 'all':
bgpapi.config_bgp_multi_neigh_use_peergroup(data[device_type], local_asn=dut_as,
peer_grp_name='leaf_tg', remote_asn=tg_as,
neigh_ip_list=tg_neigh_list, family='ipv4', activate=1)
if config_type == 'ipv6' or config_type == 'all':
bgpapi.config_bgp_multi_neigh_use_peergroup(data[device_type], local_asn=dut_as,
peer_grp_name='leaf_tg6', remote_asn=tg_as,
neigh_ip_list=tg_neigh6_list, family='ipv6', activate=1)
else:
bgpapi.cleanup_bgp_config([data[dut] for dut in data['leaf_routers']+data['spine_routers']])
return result, tg_info
def create_routing_interface_on_tg(tg, tg_port, intf_ip_addr, netmask, gateway, config, handle='none', af='ipv4'):
"""
:param tg:
:param tg_port:
:param intf_ip_addr:
:param netmask:
:param gateway:
:param af:
:return:
"""
tg = tgen_obj_dict[tg]
tg_ph_x = tg.get_port_handle(tg_port)
config = 'config' if config == 'add' else 'destroy'
if af == 'ipv4':
if config == 'config':
h1 = tg.tg_interface_config(port_handle=tg_ph_x, mode=config, intf_ip_addr=intf_ip_addr, gateway=gateway,
netmask=netmask, arp_send_req='1')
else:
tg.tg_interface_config(port_handle=tg_ph_x, handle=handle['handle'], mode=config)
else:
if config == 'config':
h1 = tg.tg_interface_config(port_handle=tg_ph_x, mode=config, ipv6_intf_addr=intf_ip_addr,
ipv6_prefix_length=netmask, ipv6_gateway=gateway, arp_send_req='1')
else:
tg.tg_interface_config(port_handle=tg_ph_x, handle=handle['handle'], mode=config)
if config == 'config':
st.log("#"*30)
st.log("h1 = {}".format(h1))
st.log("#"*30)
return [tg, tg_ph_x, h1]
else:
return [tg, tg_ph_x]
def config_bgp_on_tg(tg, handle, local_asn, tg_asn, local_ipaddr, action='start', af='ipv4'):
"""
:param tg:
:param handle:
:param local_asn:
:param tg_asn:
:param local_ipaddr:
:param action:
:param af:
:return:
"""
# STC / IXIA
handle_key_v4 = 'handle'
handle_key_v6 = 'handle'
if af == 'ipv4':
bgp_rtr1 = tg.tg_emulation_bgp_config(handle=handle[handle_key_v4], mode='enable', active_connect_enable='1',
local_as=tg_asn, remote_as=local_asn, remote_ip_addr=local_ipaddr,
enable_4_byte_as='1')
st.wait(5)
tg.tg_emulation_bgp_control(handle=bgp_rtr1['handle'], mode='start')
else:
bgp_rtr1 = tg.tg_emulation_bgp_config(handle=handle[handle_key_v6], mode='enable', ip_version='6',
active_connect_enable='1', local_as=tg_asn, remote_as=local_asn,
remote_ipv6_addr=local_ipaddr, enable_4_byte_as='1')
st.wait(5)
tg.tg_emulation_bgp_control(handle=bgp_rtr1['handle'], mode='start')
st.log("#"*30)
st.log("bgp_rtr1 = {}".format(bgp_rtr1))
st.log("#"*30)
return bgp_rtr1
def get_tg_topology_leafspine_bgp(dut_type, max_tg_links, nodes, af='all'):
"""
:param dut_type:
:param max_tg_links:
:param nodes:
:param af:
:return:
"""
st.banner("Getting TG topology info")
st.log("dut_type {}, max_tg_links {}, nodes {}".format(dut_type, max_tg_links, nodes))
rv = []
if dut_type == '':
return rv
# For 1:1 mode the TG is handled here; dut_type is overwritten
if 'spine' in ','.join(tg_connected_routers) and int(nodes) >= 2:
dut_type = "leaf-spine"
final_rv = SpyTestDict()
if dut_type in ['leaf-spine', 'spine-leaf']:
temp = {}
for _, spine in enumerate(data['spine_routers'], start=1):
tg_list = st.get_tg_links(data[spine])
if int(max_tg_links) <= len(tg_list):
temp[spine] = tg_list
for _, leaf in enumerate(data['leaf_routers'], start=1):
tg_list = st.get_tg_links(data[leaf])
if int(max_tg_links) <= len(tg_list):
temp[leaf] = tg_list
if int(nodes) <= len(temp):
for each in temp:
final_rv[each] = random.sample(temp[each][:int(max_tg_links)], k=int(max_tg_links))
else:
st.log("Requested topology not found.")
return False
while True:
# TODO : Need to check this - 'leaf-spine', 'spine-leaf'
rv = {each: final_rv[each] for each in random.sample(list(final_rv.keys()), k=int(nodes))}
if 'spine' in str(rv.keys()) and 'leaf' in str(rv.keys()):
break
elif dut_type == 'spine':
temp = {}
for _, spine in enumerate(data['spine_routers'], start=1):
tg_list = st.get_tg_links(data[spine])
if int(max_tg_links) <= len(tg_list):
temp[spine] = tg_list
if int(nodes) <= len(temp):
for each in temp:
final_rv[each] = random.sample(temp[each][:int(max_tg_links)], k=int(max_tg_links))
else:
st.log("Requested topology not found.")
return False
rv = {each: final_rv[each] for each in random.sample(list(final_rv.keys()), k=int(nodes))}
elif dut_type == 'leaf':
temp = {}
for _, leaf in enumerate(data['leaf_routers'], start=1):
tg_list = st.get_tg_links(data[leaf])
if int(max_tg_links) <= len(tg_list):
temp[leaf] = tg_list
if int(nodes) <= len(temp):
for each in temp:
final_rv[each] = random.sample(temp[each][:int(max_tg_links)], k=int(max_tg_links))
else:
st.log("Requested topology not found.")
return False
rv = {each: final_rv[each] for each in random.sample(list(final_rv.keys()), k=int(nodes))}
rv = get_leaf_spine_topology_info(rv, af)
return rv
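The 'leaf-spine' branch above keeps re-sampling node names until the pick spans both roles. A minimal standalone sketch of that selection loop, using hypothetical candidate names and TG links in place of the framework data:

```python
import random

# Hypothetical candidate map: node name -> its TG links (stand-ins for
# the data gathered from st.get_tg_links above).
candidates = {
    "spine1": ["tg1", "tg2"],
    "spine2": ["tg3", "tg4"],
    "leaf1": ["tg5", "tg6"],
    "leaf2": ["tg7", "tg8"],
}

def pick_leaf_spine(candidates, nodes):
    # Re-sample `nodes` keys until the pick contains both a leaf and a spine.
    while True:
        chosen = random.sample(list(candidates), k=nodes)
        if any("leaf" in n for n in chosen) and any("spine" in n for n in chosen):
            return {n: candidates[n] for n in chosen}

picked = pick_leaf_spine(candidates, 2)
```

Like the original, this loops without bound if no valid combination exists; the production code relies on the earlier `int(nodes) <= len(temp)` check to avoid that.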
def get_leaf_spine_topology_info(input=None, af='all'):
"""
:param input: dict mapping node name to its selected TG links (defaults to all spine and leaf routers)
:param af: 'ipv4', 'ipv6', or 'all'
:return: SpyTestDict with DUT/TG/topology accessors ('D1', 'T1D1P1', ...)
"""
max_mem = int(data['l3_max_tg_links_each_leaf_spine'])
st.banner("Getting topology info")
if not input:
input = {each:[] for each in data['spine_routers']+data['leaf_routers']}
temp = {}
for i, dut_type in enumerate(input, start=1):
if dut_type in tg_connected_routers:
tg_list = st.get_tg_links(data[dut_type])
if int(max_mem) <= len(tg_list):
temp[dut_type] = tg_list
for each in temp:
input[each] = random.sample(temp[each][:int(max_mem)], k=int(max_mem))
st.log("input = {}".format(input))
debug_print()
if af == "ipv4":
afmly = ['ipv4']
elif af == "ipv6":
afmly = ['ipv6']
else:
afmly = ['ipv4', 'ipv6']
st.log("afmly = {}".format(afmly))
rv = SpyTestDict()
temp = SpyTestDict()
rv['dut_list'] = []
rv['tg_dut_list'] = []
rv['tg_dut_list_name'] = []
rv['leaf_list'] = []
rv['spine_list'] = []
for i, dut_type in enumerate(input, start=1):
temp[dut_type] = "D{}".format(i)
rv["D{}_name".format(i)] = dut_type
rv["D{}".format(i)] = data[dut_type]
# List items
rv['dut_list'].append(data[dut_type])
if 'leaf' in dut_type:
rv['leaf_list'].append(data[dut_type])
if 'spine' in dut_type:
rv['spine_list'].append(data[dut_type])
if dut_type in tg_connected_routers:
rv['tg_dut_list'].append(data[dut_type])
rv['tg_dut_list_name'].append("D{}".format(i))
if tg_info:
for i, dut_type in enumerate(input, start=1):
if dut_type in tg_connected_routers:
for j, each_tgport in enumerate(input[dut_type], start=1):
rv["{}T1P{}".format(temp[dut_type], j)] = each_tgport[0]
rv["T1{}P{}".format(temp[dut_type], j)] = each_tgport[2]
for each_af in afmly:
rv["T1{}P{}_tg_obj".format(temp[dut_type], j)] = tg_info[dut_type][each_af][each_tgport[0]][0]
rv["T1{}P{}_{}_tg_ph".format(temp[dut_type], j, each_af)] = \
tg_info[dut_type][each_af][each_tgport[0]][1]
rv["T1{}P{}_{}_tg_ih".format(temp[dut_type], j, each_af)] = \
tg_info[dut_type][each_af][each_tgport[0]][2]
rv["T1{}P{}_{}_neigh".format(temp[dut_type], j, each_af)] = \
tg_info[dut_type][each_af][each_tgport[0]][3][0]
rv["T1{}P{}_{}".format(temp[dut_type], j, each_af)] = \
tg_info[dut_type][each_af][each_tgport[0]][3][1]
rv["T1{}P{}_{}_tg_bh".format(temp[dut_type], j, each_af)] = \
tg_info[dut_type][each_af][each_tgport[0]][4]
if topo_info:
for i, dut_type in enumerate(input, start=1):
for each_af in afmly:
if each_af == "ipv4":
list_of_port = [each for each in topo_info[each_af][dut_type] if each[2] in temp]
for each_port in list_of_port:
k = each_port[5]
rv["{}{}P{}".format(temp[dut_type], temp[each_port[2]], k)] = each_port[0]
rv["{}{}P{}_ipv4".format(temp[dut_type], temp[each_port[2]], k)] = each_port[1]
rv["{}{}P{}_neigh".format(temp[dut_type], temp[each_port[2]], k)] = data[each_port[2]]
rv["{}{}P{}_neigh_ipv4".format(temp[dut_type], temp[each_port[2]], k)] = each_port[4]
rv["{}{}P{}_neigh_port".format(temp[dut_type], temp[each_port[2]], k)] = each_port[3]
rv["{}{}P{}".format(temp[each_port[2]], temp[dut_type], k)] = each_port[3]
rv["{}{}P{}_ipv4".format(temp[each_port[2]], temp[dut_type], k)] = each_port[4]
rv["{}{}P{}_neigh".format(temp[each_port[2]], temp[dut_type], k)] = data[dut_type]
rv["{}{}P{}_neigh_ipv4".format(temp[each_port[2]], temp[dut_type], k)] = each_port[1]
rv["{}{}P{}_neigh_port".format(temp[each_port[2]], temp[dut_type], k)] = each_port[0]
if each_af == "ipv6":
list_of_port = [each for each in topo_info[each_af][dut_type] if each[2] in temp]
for each_port in list_of_port:
k = each_port[5]
rv["{}{}P{}".format(temp[dut_type], temp[each_port[2]], k)] = each_port[0]
rv["{}{}P{}_ipv6".format(temp[dut_type], temp[each_port[2]], k)] = each_port[1]
rv["{}{}P{}_neigh".format(temp[dut_type], temp[each_port[2]], k)] = data[each_port[2]]
rv["{}{}P{}_neigh_ipv6".format(temp[dut_type], temp[each_port[2]], k)] = each_port[4]
rv["{}{}P{}_neigh_port".format(temp[dut_type], temp[each_port[2]], k)] = each_port[3]
rv["{}{}P{}".format(temp[each_port[2]], temp[dut_type], k)] = each_port[3]
rv["{}{}P{}_ipv6".format(temp[each_port[2]], temp[dut_type], k)] = each_port[4]
rv["{}{}P{}_neigh".format(temp[each_port[2]], temp[dut_type], k)] = data[dut_type]
rv["{}{}P{}_neigh_ipv6".format(temp[each_port[2]], temp[dut_type], k)] = each_port[1]
rv["{}{}P{}_neigh_port".format(temp[each_port[2]], temp[dut_type], k)] = each_port[0]
for i, dut_type in enumerate(input, start=1):
if as_info:
rv["{}_as".format(temp[dut_type])] = as_info[dut_type]
for each_af in afmly:
if loopback_info:
if each_af == "ipv4":
rv["{}_loopback_ipv4".format(temp[dut_type])] = loopback_info[each_af][dut_type]
if each_af == "ipv6":
rv["{}_loopback_ipv6".format(temp[dut_type])] = loopback_info[each_af][dut_type]
st.log(pprint.pformat(rv, width=2))
return rv
def get_topo_info():
"""
:return: global topo_info
"""
return topo_info
def get_tg_info():
"""
:return: global tg_info
"""
return tg_info
def get_loopback_info():
"""
:return: global loopback_info
"""
return loopback_info
def get_as_info():
"""
:return: global as_info
"""
return as_info
def get_static_rt_info():
"""
:return: global static_rt_info
"""
return static_rt_info
def get_fixed_nw_info():
"""
:return: global fixed_nw_info
"""
return fixed_nw_info
def get_underlay_info():
"""
:return: global underlay_info
"""
return underlay_info
def get_route_attribute(output, parameter, **kwargs):
"""
Filter `output` rows by the given kwargs and return `parameter` from the
first match; fails the test with 'entry_not_found' if no row matches.
"""
st.log("GET ROUTE ATTR -- {}".format(output))
st.log("PARAMS -- {}".format(parameter))
st.log("KWARGS -- {}".format(kwargs))
nw_route = filter_and_select(output, [parameter], kwargs)
st.log("NW_ROUTE -- {}".format(nw_route))
if not nw_route:
st.report_fail("entry_not_found")
return nw_route[0][parameter]
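`filter_and_select` comes from the framework's utilities; as a rough sketch of the matching-and-projection semantics relied on above (the helper name and sample route rows here are hypothetical):

```python
def filter_and_select_sketch(rows, select, match):
    # Keep rows whose fields equal every value in `match`, then project
    # only the keys listed in `select`.
    return [
        {key: row[key] for key in select}
        for row in rows
        if all(row.get(k) == v for k, v in match.items())
    ]

routes = [
    {"network": "10.0.0.0/24", "next_hop": "1.1.1.1"},
    {"network": "20.0.0.0/24", "next_hop": "2.2.2.2"},
]
nw_route = filter_and_select_sketch(routes, ["next_hop"], {"network": "20.0.0.0/24"})
```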
def debug_print():
"""
:return:
"""
st.log("get_tg_info(): \n{} ".format(get_tg_info()))
st.log("get_topo_info(): \n{} ".format(get_topo_info()))
st.log("get_as_info(): \n{} | |
"__call__") as call:
client.create_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == services.CreateJobRequest()
@pytest.mark.asyncio
async def test_create_job_async(
transport: str = "grpc_asyncio", request_type=services.CreateJobRequest
):
client = TranscoderServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_job), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.Job(
name="name_value",
input_uri="input_uri_value",
output_uri="output_uri_value",
priority=898,
state=resources.Job.ProcessingState.PENDING,
failure_reason="failure_reason_value",
ttl_after_completion_days=2670,
)
)
response = await client.create_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == services.CreateJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.Job)
assert response.name == "name_value"
assert response.input_uri == "input_uri_value"
assert response.output_uri == "output_uri_value"
assert response.priority == 898
assert response.state == resources.Job.ProcessingState.PENDING
assert response.failure_reason == "failure_reason_value"
assert response.ttl_after_completion_days == 2670
@pytest.mark.asyncio
async def test_create_job_async_from_dict():
await test_create_job_async(request_type=dict)
def test_create_job_field_headers():
client = TranscoderServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = services.CreateJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_job), "__call__") as call:
call.return_value = resources.Job()
client.create_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_job_field_headers_async():
client = TranscoderServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = services.CreateJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_job), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.Job())
await client.create_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_job_flattened():
client = TranscoderServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_job), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = resources.Job()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_job(
parent="parent_value", job=resources.Job(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].job == resources.Job(name="name_value")
def test_create_job_flattened_error():
client = TranscoderServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_job(
services.CreateJobRequest(),
parent="parent_value",
job=resources.Job(name="name_value"),
)
@pytest.mark.asyncio
async def test_create_job_flattened_async():
client = TranscoderServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_job), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.Job())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_job(
parent="parent_value", job=resources.Job(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].job == resources.Job(name="name_value")
@pytest.mark.asyncio
async def test_create_job_flattened_error_async():
client = TranscoderServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_job(
services.CreateJobRequest(),
parent="parent_value",
job=resources.Job(name="name_value"),
)
def test_list_jobs(transport: str = "grpc", request_type=services.ListJobsRequest):
client = TranscoderServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_jobs), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = services.ListJobsResponse(
next_page_token="<PASSWORD>token_<PASSWORD>",
)
response = client.list_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == services.ListJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListJobsPager)
assert response.next_page_token == "next_<PASSWORD>_token_value"
def test_list_jobs_from_dict():
test_list_jobs(request_type=dict)
def test_list_jobs_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = TranscoderServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_jobs), "__call__") as call:
client.list_jobs()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == services.ListJobsRequest()
@pytest.mark.asyncio
async def test_list_jobs_async(
transport: str = "grpc_asyncio", request_type=services.ListJobsRequest
):
client = TranscoderServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_jobs), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
services.ListJobsResponse(next_page_token="next_page_token_value",)
)
response = await client.list_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == services.ListJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListJobsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_jobs_async_from_dict():
await test_list_jobs_async(request_type=dict)
def test_list_jobs_field_headers():
client = TranscoderServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = services.ListJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_jobs), "__call__") as call:
call.return_value = services.ListJobsResponse()
client.list_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_jobs_field_headers_async():
client = TranscoderServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = services.ListJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_jobs), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
services.ListJobsResponse()
)
await client.list_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_jobs_flattened():
client = TranscoderServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_jobs), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = services.ListJobsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
def test_list_jobs_flattened_error():
client = TranscoderServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_jobs(
services.ListJobsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_jobs_flattened_async():
client = TranscoderServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_jobs), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
services.ListJobsResponse()
)
# Call the method with a truthy value for each flattened | |
import numpy as np
import pandas as pd
import pytest
from pandas.testing import assert_frame_equal
# from http://imachordata.com/2016/02/05/you-complete-me/
@pytest.fixture
def df1():
return pd.DataFrame(
{
"Year": [1999, 2000, 2004, 1999, 2004],
"Taxon": [
"Saccharina",
"Saccharina",
"Saccharina",
"Agarum",
"Agarum",
],
"Abundance": [4, 5, 2, 1, 8],
}
)
def test_empty_column(df1):
"""Return dataframe if `columns` is empty."""
assert_frame_equal(df1.complete(), df1)
def test_MultiIndex_column(df1):
"""Raise ValueError if column is a MultiIndex."""
df = df1
df.columns = [["A", "B", "C"], list(df.columns)]
with pytest.raises(ValueError):
df1.complete(["Year", "Taxon"])
def test_column_duplicated(df1):
"""Raise ValueError if column is duplicated in `columns`"""
with pytest.raises(ValueError):
df1.complete(
columns=[
"Year",
"Taxon",
{"Year": lambda x: range(x.Year.min().x.Year.max() + 1)},
]
)
def test_type_columns(df1):
"""Raise error if columns is not a list object."""
with pytest.raises(TypeError):
df1.complete(columns="Year")
def test_fill_value_is_a_dict(df1):
"""Raise error if fill_value is not a dictionary"""
with pytest.raises(TypeError):
df1.complete(columns=["Year", "Taxon"], fill_value=0)
def test_wrong_column_fill_value(df1):
"""Raise ValueError if column in `fill_value` does not exist."""
with pytest.raises(ValueError):
df1.complete(columns=["Taxon", "Year"], fill_value={"year": 0})
def test_wrong_data_type_dict(df1):
"""
Raise ValueError if value in dictionary
is not a 1-dimensional object.
"""
with pytest.raises(ValueError):
df1.complete(columns=[{"Year": pd.DataFrame([2005, 2006, 2007])}])
frame = pd.DataFrame(
{
"Year": [1999, 2000, 2004, 1999, 2004],
"Taxon": [
"Saccharina",
"Saccharina",
"Saccharina",
"Agarum",
"Agarum",
],
"Abundance": [4, 5, 2, 1, 8],
}
)
wrong_columns = (
(frame, ["b", "Year"]),
(frame, [{"Yayay": range(7)}]),
(frame, ["Year", ["Abundant", "Taxon"]]),
(frame, ["Year", ("Abundant", "Taxon")]),
)
empty_sub_columns = [
(frame, ["Year", []]),
(frame, ["Year", {}]),
(frame, ["Year", ()]),
]
@pytest.mark.parametrize("frame,wrong_columns", wrong_columns)
def test_wrong_columns(frame, wrong_columns):
"""Test that ValueError is raised if wrong column is supplied."""
with pytest.raises(ValueError):
frame.complete(columns=wrong_columns)
@pytest.mark.parametrize("frame,empty_sub_cols", empty_sub_columns)
def test_empty_subcols(frame, empty_sub_cols):
"""Raise ValueError for an empty group in columns"""
with pytest.raises(ValueError):
frame.complete(columns=empty_sub_cols)
def test_fill_value(df1):
"""Test fill_value argument."""
output1 = pd.DataFrame(
{
"Year": [1999, 1999, 2000, 2000, 2004, 2004],
"Taxon": [
"Agarum",
"Saccharina",
"Agarum",
"Saccharina",
"Agarum",
"Saccharina",
],
"Abundance": [1, 4.0, 0, 5, 8, 2],
}
)
result = df1.complete(
columns=["Year", "Taxon"], fill_value={"Abundance": 0}
)
assert_frame_equal(result, output1)
@pytest.fixture
def df1_output():
return pd.DataFrame(
{
"Year": [
1999,
1999,
2000,
2000,
2001,
2001,
2002,
2002,
2003,
2003,
2004,
2004,
],
"Taxon": [
"Agarum",
"Saccharina",
"Agarum",
"Saccharina",
"Agarum",
"Saccharina",
"Agarum",
"Saccharina",
"Agarum",
"Saccharina",
"Agarum",
"Saccharina",
],
"Abundance": [1.0, 4, 0, 5, 0, 0, 0, 0, 0, 0, 8, 2],
}
)
def test_fill_value_all_years(df1, df1_output):
"""
Test the complete function accurately replicates for
all the years from 1999 to 2004.
"""
result = df1.complete(
columns=[
{"Year": lambda x: range(x.Year.min(), x.Year.max() + 1)},
"Taxon",
],
fill_value={"Abundance": 0},
)
assert_frame_equal(result, df1_output)
def test_dict_series(df1, df1_output):
"""
Test the complete function if a dictionary containing a Series
is present in `columns`.
"""
result = df1.complete(
columns=[
{
"Year": lambda x: pd.Series(
range(x.Year.min(), x.Year.max() + 1)
)
},
"Taxon",
],
fill_value={"Abundance": 0},
)
assert_frame_equal(result, df1_output)
def test_dict_series_duplicates(df1, df1_output):
"""
Test the complete function if a dictionary containing a
Series (with duplicates) is present in `columns`.
"""
result = df1.complete(
columns=[
{
"Year": pd.Series(
[1999, 2000, 2000, 2001, 2002, 2002, 2002, 2003, 2004]
)
},
"Taxon",
],
fill_value={"Abundance": 0},
)
assert_frame_equal(result, df1_output)
def test_dict_values_outside_range(df1):
"""
Test the output if a dictionary is present,
and none of the values in the dataframe,
for the corresponding label, is not present
in the dictionary's values.
"""
result = df1.complete(
columns=[("Taxon", "Abundance"), {"Year": np.arange(2005, 2007)}]
)
expected = pd.DataFrame(
[
{"Taxon": "Agarum", "Abundance": 1, "Year": 1999},
{"Taxon": "Agarum", "Abundance": 1, "Year": 2005},
{"Taxon": "Agarum", "Abundance": 1, "Year": 2006},
{"Taxon": "Agarum", "Abundance": 8, "Year": 2004},
{"Taxon": "Agarum", "Abundance": 8, "Year": 2005},
{"Taxon": "Agarum", "Abundance": 8, "Year": 2006},
{"Taxon": "Saccharina", "Abundance": 2, "Year": 2004},
{"Taxon": "Saccharina", "Abundance": 2, "Year": 2005},
{"Taxon": "Saccharina", "Abundance": 2, "Year": 2006},
{"Taxon": "Saccharina", "Abundance": 4, "Year": 1999},
{"Taxon": "Saccharina", "Abundance": 4, "Year": 2005},
{"Taxon": "Saccharina", "Abundance": 4, "Year": 2006},
{"Taxon": "Saccharina", "Abundance": 5, "Year": 2000},
{"Taxon": "Saccharina", "Abundance": 5, "Year": 2005},
{"Taxon": "Saccharina", "Abundance": 5, "Year": 2006},
]
)
assert_frame_equal(result, expected)
# adapted from https://tidyr.tidyverse.org/reference/complete.html
complete_parameters = [
(
pd.DataFrame(
{
"group": [1, 2, 1],
"item_id": [1, 2, 2],
"item_name": ["a", "b", "b"],
"value1": [1, 2, 3],
"value2": [4, 5, 6],
}
),
["group", "item_id", "item_name"],
pd.DataFrame(
{
"group": [1, 1, 1, 1, 2, 2, 2, 2],
"item_id": [1, 1, 2, 2, 1, 1, 2, 2],
"item_name": ["a", "b", "a", "b", "a", "b", "a", "b"],
"value1": [
1.0,
np.nan,
np.nan,
3.0,
np.nan,
np.nan,
np.nan,
2.0,
],
"value2": [
4.0,
np.nan,
np.nan,
6.0,
np.nan,
np.nan,
np.nan,
5.0,
],
}
),
),
(
pd.DataFrame(
{
"group": [1, 2, 1],
"item_id": [1, 2, 2],
"item_name": ["a", "b", "b"],
"value1": [1, 2, 3],
"value2": [4, 5, 6],
}
),
["group", ("item_id", "item_name")],
pd.DataFrame(
{
"group": [1, 1, 2, 2],
"item_id": [1, 2, 1, 2],
"item_name": ["a", "b", "a", "b"],
"value1": [1.0, 3.0, np.nan, 2.0],
"value2": [4.0, 6.0, np.nan, 5.0],
}
),
),
]
@pytest.mark.parametrize("df,columns,output", complete_parameters)
def test_complete(df, columns, output):
"Test the complete function, with and without groupings."
assert_frame_equal(df.complete(columns), output)
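The parametrized cases above exercise `complete`'s core behavior: build the cartesian product of the chosen columns' unique values, then left-join the original rows onto it so missing cells become NaN. A pure-Python sketch of that semantics on plain dicts (no grouping or fill_value support; the row data is hypothetical):

```python
from itertools import product

def complete_sketch(rows, columns):
    # Cartesian product of each column's unique values, left-joined
    # against the original rows; unmatched combinations get None
    # (pandas would produce NaN).
    other = [k for k in rows[0] if k not in columns]
    uniques = [sorted({row[c] for row in rows}) for c in columns]
    index = {tuple(row[c] for c in columns): row for row in rows}
    result = []
    for combo in product(*uniques):
        out = dict(zip(columns, combo))
        match = index.get(combo)
        for k in other:
            out[k] = match[k] if match else None
        result.append(out)
    return result

rows = [
    {"group": 1, "item_id": 1, "value1": 1},
    {"group": 2, "item_id": 2, "value1": 2},
    {"group": 1, "item_id": 2, "value1": 3},
]
full = complete_sketch(rows, ["group", "item_id"])
```

With two unique groups and two unique item_ids this yields four rows, one of them a filler row for the unobserved (2, 1) combination.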
@pytest.fixture
def duplicates():
return pd.DataFrame(
{
"row": [
"21.08.2020",
"21.08.2020",
"21.08.2020",
"21.08.2020",
"22.08.2020",
"22.08.2020",
"22.08.2020",
"22.08.2020",
],
"column": ["A", "A", "B", "C", "A", "B", "B", "C"],
"value": [43.0, 36, 36, 28, 16, 40, 34, 0],
}
)
# https://stackoverflow.com/questions/63541729/
# pandas-how-to-include-all-columns-for-all-rows-although-value-is-missing-in-a-d
# /63543164#63543164
def test_duplicates(duplicates):
"""Test that the complete function works for duplicate values."""
df = pd.DataFrame(
{
"row": {
0: "21.08.2020",
1: "21.08.2020",
2: "21.08.2020",
3: "21.08.2020",
4: "22.08.2020",
5: "22.08.2020",
6: "22.08.2020",
},
"column": {0: "A", 1: "A", 2: "B", 3: "C", 4: "A", 5: "B", 6: "B"},
"value": {0: 43, 1: 36, 2: 36, 3: 28, 4: 16, 5: 40, 6: 34},
}
)
result = df.complete(columns=["row", "column"], fill_value={"value": 0})
assert_frame_equal(result, duplicates)
def test_unsorted_duplicates(duplicates):
"""Test output for unsorted duplicates."""
df = pd.DataFrame(
{
"row": {
0: "22.08.2020",
1: "22.08.2020",
2: "21.08.2020",
3: "21.08.2020",
4: "21.08.2020",
5: "21.08.2020",
6: "22.08.2020",
},
"column": {
0: "B",
1: "B",
2: "A",
3: "A",
4: "B",
5: "C",
6: "A",
},
"value": {0: 40, 1: 34, 2: 43, 3: 36, 4: 36, 5: 28, 6: 16},
}
)
result = df.complete(columns=["row", "column"], fill_value={"value": 0})
assert_frame_equal(result, duplicates)
# https://stackoverflow.com/questions/32874239/
# how-do-i-use-tidyr-to-fill-in-completed-rows-within-each-value-of-a-grouping-var
def test_grouping_first_columns():
"""
Test complete function when the first entry
in columns is a grouping.
"""
df2 = pd.DataFrame(
{
"id": [1, 2, 3],
"choice": [5, 6, 7],
"c": [9.0, np.nan, 11.0],
"d": [
pd.NaT,
pd.Timestamp("2015-09-30 00:00:00"),
pd.Timestamp("2015-09-29 00:00:00"),
],
}
)
output2 = pd.DataFrame(
{
"id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
"c": [9.0, 9.0, 9.0, np.nan, np.nan, np.nan, 11.0, 11.0, 11.0],
"d": [
pd.NaT,
pd.NaT,
pd.NaT,
pd.Timestamp("2015-09-30 00:00:00"),
pd.Timestamp("2015-09-30 00:00:00"),
pd.Timestamp("2015-09-30 00:00:00"),
pd.Timestamp("2015-09-29 00:00:00"),
pd.Timestamp("2015-09-29 00:00:00"),
pd.Timestamp("2015-09-29 00:00:00"),
],
"choice": [5, 6, 7, 5, 6, 7, 5, 6, 7],
}
)
result = df2.complete(columns=[("id", "c", "d"), "choice"])
assert_frame_equal(result, output2)
# https://stackoverflow.com/questions/48914323/tidyr-complete-cases-nesting-misunderstanding
def test_complete_multiple_groupings():
"""Test that `complete` gets the correct output for multiple groupings."""
df3 = pd.DataFrame(
{
"project_id": [1, 1, 1, 1, 2, 2, 2],
"meta": ["A", "A", "B", "B", "A", "B", "C"],
"domain1": ["d", "e", "h", "i", "d", "i", "k"],
"question_count": [3, 3, 3, 3, 2, 2, 2],
"tag_count": [2, 1, 3, 2, 1, 1, 2],
}
)
output3 = pd.DataFrame(
{
"meta": ["A", "A", "A", "A", "B", "B", "B", "B", "C", "C"],
"domain1": ["d", "d", "e", "e", "h", "h", "i", "i", "k", "k"],
"project_id": [1, 2, 1, 2, 1, 2, 1, 2, 1, 2],
"question_count": [3, 2, 3, 2, 3, 2, 3, 2, 3, 2],
"tag_count": [2.0, 1.0, 1.0, 0.0, 3.0, 0.0, 2.0, 1.0, 0.0, 2.0],
}
)
result = df3.complete(
columns=[("meta", "domain1"), ("project_id", "question_count")],
fill_value={"tag_count": 0},
)
assert_frame_equal(result, output3)
@pytest.fixture
def output_dict_tuple():
return pd.DataFrame(
[
{"Year": 1999, "Taxon": "Agarum", "Abundance": 1},
{"Year": 1999, "Taxon": "Agarum", "Abundance": 8},
{"Year": 1999, "Taxon": "Saccharina", "Abundance": 2},
{"Year": 1999, "Taxon": "Saccharina", "Abundance": 4},
{"Year": 1999, "Taxon": "Saccharina", "Abundance": 5},
{"Year": 2000, "Taxon": "Agarum", "Abundance": 1},
{"Year": 2000, "Taxon": "Agarum", "Abundance": 8},
{"Year": 2000, "Taxon": "Saccharina", "Abundance": 2},
{"Year": 2000, "Taxon": "Saccharina", "Abundance": 4},
{"Year": 2000, "Taxon": "Saccharina", "Abundance": 5},
{"Year": 2001, "Taxon": "Agarum", "Abundance": 1},
{"Year": 2001, "Taxon": "Agarum", "Abundance": 8},
{"Year": 2001, "Taxon": "Saccharina", "Abundance": 2},
{"Year": 2001, "Taxon": "Saccharina", "Abundance": 4},
{"Year": 2001, "Taxon": "Saccharina", "Abundance": 5},
{"Year": 2002, "Taxon": "Agarum", "Abundance": 1},
{"Year": 2002, "Taxon": "Agarum", "Abundance": 8},
{"Year": 2002, "Taxon": "Saccharina", "Abundance": 2},
{"Year": 2002, "Taxon": "Saccharina", "Abundance": 4},
{"Year": 2002, "Taxon": | |
from random import randrange
def choose_icebreaker():
icebreakers = ['What was your first job?',
'Have you ever met anyone famous?',
'What are you reading right now?',
'If you could pick up a new skill in an instant what would it be?',
'Who’s someone you really admire?',
'Seen any good movies lately you’d recommend?',
'Got any favorite quotes?',
'Been pleasantly surprised by anything lately?',
'What was your favorite band 10 years ago?',
'What’s your earliest memory?',
'Been anywhere recently for the first time?',
'What’s your favorite family tradition?',
'What was the first thing you bought with your own money?',
'What’s something you want to do in the next year that you’ve never done before?',
'Seen anything lately that made you smile?',
'What’s your favorite place you’ve ever visited?',
'Have you had your 15 minutes of fame yet?',
'What’s the best advice you’ve ever heard?',
'How do you like your eggs?',
'Do you have a favorite charity you wish more people knew about?',
'Got any phobias you’d like to break?',
'Have you returned anything you’ve purchased recently? Why?',
'Do you collect anything?',
'What’s your favorite breakfast cereal?',
'What is your most used emoji?',
'What was the worst haircut you ever had?',
'If you were a wrestler what would be your entrance theme song?',
'Have you ever been told you look like someone famous, who was it?',
'If you could bring back any fashion trend what would it be?',
'What did you name your first car?',
'You have your own late night talk show, who do you invite as your first guest?',
'What was your least favorite food as a child? Do you still hate it or do you love it now?',
'If you had to eat one meal everyday for the rest of your life what would it be?',
'If aliens landed on earth tomorrow and offered to take you home with them, would you go?',
'60s, 70s, 80s, 90s: Which decade do you love the most?',
'What’s your favorite sandwich and why?',
'What is your favorite item you’ve bought this year?',
'Say you’re independently wealthy and don’t have to work, what would you do with your time?',
'If you had to delete all but 3 apps from your smartphone, which ones would you keep?',
'What would your dream house be like?',
'You’re going to sail around the world, what’s the name of your boat?',
'Which band / artist – dead or alive would play at your funeral?',
'As a child, what did you want to be when you grew up?',
'What’s your favorite tradition or holiday?',
'What is your favorite breakfast food?',
'What is your favorite time of the day and why?',
'Coffee or tea?',
'Teleportation or flying?',
'What is your favorite TV show?',
'What book that you read recently would you recommend and why?',
'If you had a time machine, would you go back in time or into the future?',
'Do you think you could live without your smartphone (or other technology item) for 24 hours?',
'What is your favorite dessert?',
'What was your favorite game to play as a child?',
'Are you a traveler or a homebody?',
'What’s your favorite place of all the places you’ve travelled?',
'Have you ever completed anything on your “bucket list”?',
'What did you have for breakfast this morning?',
'What was the country you last visited?',
'What is one thing we don’t know about you?',
'What is your favorite meal to cook and why?',
'Are you a morning person or a night person?',
'What is your favorite musical instrument and why?',
'What languages do you know how to speak?',
'What’s the weirdest food you’ve ever eaten?',
'What is your cellphone wallpaper?',
'You can have an unlimited supply of one thing for the rest of your life, what is it? ',
'What season would you be?',
'Are you a good dancer?',
'If you could live anywhere in the world for a year, where would it be?',
'If you could see one movie again for the first time, what would it be and why?',
'If you could rename yourself, what name would you pick?',
'If you could have someone follow you around all the time, like a personal assistant, what would you have them do?',
'If you had to teach a class on one thing, what would you teach?',
'If you could magically become fluent in any language, what would it be?',
'If you could eliminate one thing from your daily routine, what would it be and why?',
'If you could go to Mars, would you? Why or why not?',
'Would you rather live in the ocean or on the moon?',
'Would you rather lose all of your money or all of your pictures?',
'Would you rather have invisibility or flight?',
'Would you rather live where it only snows or the temperature never falls below 40 degrees?',
'Would you rather always be slightly late or super early?',
'Would you rather give up your smartphone or your computer?',
'Would you rather live without AC or live without social media?',
'Would you rather be the funniest or smartest person in the room?',
'What are your favorite songs from your teenage years that you still rock out to when nobody else is listening?',
'What’s your most embarrassing moment from your teen years?',
'What’s the worst thing you ever did as a kid — and got away with?',
'What did you get into the most trouble for with your parents as a kid?',
'What was the first concert you ever went to?',
'Do you have any crazy housemate stories?',
'What was your first record, tape or CD that you ever owned?',
'What were words you couldn’t pronounce as a child, so you made up your own?',
'Have you ever gotten super lost?',
'What was your first job?',
'Who was the worst school teacher you ever had?',
'What’s the best prank you’ve ever played on someone?',
'What’s your strangest talent?',
'What show on Netflix did you binge watch embarrassingly fast?',
'What is your favorite smell and why?',
'What food could you not live without?',
'What commercial jingle gets stuck in your head all the time?',
'What sport did you try as a child and fail at?',
'What do you never leave the house without (can’t be your phone, keys or wallet)?',
'What’s a nickname people actually call you?',
'If you could only eat at one restaurant forever, what restaurant would it be?',
'If you could only wear one type of shoes for the rest of your life, what type of shoes would it be?',
'If all your clothes had to be one color forever, what color would you pick?',
'If you could eliminate one food so that no one would eat it ever again, what would you pick to destroy?',
'If you could turn the ocean into a liquid other than water, which one would you pick?',
'If you could only listen to one song for the rest of your life, which song would you pick?',
'If you had to endorse a brand, which brand would it be?',
'How much does a polar bear weigh?',
'Blow our minds with a random fact',
'In a zombie apocalypse, what would your survival strategy be?',
'Have you ever been skinnydipping?',
'What is your best quality?',
'You find a high-denomination note on a restaurant floor. Do you hand it in, or pocket it?',
'Have you ever had a recurring nightmare?',
'What is the kindest thing a stranger has done for you?',
'What is your first memory involving a computer?',
'What is the weirdest thing you have eaten?',
'You can go back in time. Which year do you choose?',
'What is one piece of advice you would give to a child?',
'Have you ever needed stitches?',
                   'What is your favorite website?']
    return icebreakers[randrange(len(icebreakers))]
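The function above follows a simple pattern: build a list of question strings and index it with `randrange` (its list is truncated in this copy). A minimal self-contained sketch of that pattern, with a short stand-in question list:

```python
import random

# Illustrative stand-in list; the real function carries many more questions.
questions = [
    "What was your first job?",
    "Coffee or tea?",
    "What is your favorite website?",
]

def choose(qs):
    # random.choice(qs) is equivalent to qs[randrange(len(qs))]
    return random.choice(qs)

picked = choose(questions)
assert picked in questions
```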
#!/usr/bin/python
# This script parses GNATprove output and gives per-file coverage statistics,
# as well as overall statistics
#
# (C) 2016-2017 <NAME>, RCS, <NAME> <<EMAIL>>
import sys, getopt, os, inspect, time, math, re, datetime, numpy;
import pprint;
# use this if you want to include modules from a subfolder
cmd_subfolder = os.path.realpath(os.path.abspath(os.path.join(os.path.split(inspect.getfile( inspect.currentframe() ))[0],"pytexttable")))
if cmd_subfolder not in sys.path:
sys.path.insert(0, cmd_subfolder)
import texttable
#######################################
# GLOBAL CONSTANTS
#######################################
KNOWN_SORT_CRITERIA = ('alpha', 'coverage', 'success', 'props', 'subs', 'skip');
#######################################
# FUNCTION DEFINITIONS
#######################################
def file2unit(filename):
"""
transform file name into GNAT unit name
"""
unitname = os.path.splitext(filename)[0]
return unitname.lower()
def get_stdout_stats(inputfile):
"""
parse output of stdout from gnatprove
XXX! output comes twice. First for flow analysis, then again for proof.
    this parser keeps the latest outputs, i.e., proof results overwrite the flow information.
"""
if os.stat(inputfile).st_size == 0: return None
unit = ""
unit_pre = ""
units = {}
if True: #try:
with open(inputfile, 'r+') as f:
l = 0
for line in f:
l = l + 1
# new unit?
match = re.search(r'^([^\s:]+):(\d+):(\d+).*', line, re.MULTILINE)
if match:
filename = match.group(1)
srcline = match.group(2)
srccol = match.group(3)
unit = file2unit(filename)
if unit != unit_pre:
unit_pre = unit
unitinfo = {"props":0, "proven":0}
if unit: units[unit]=unitinfo
match = re.search(r'^.*: (medium|high|low): (.*)$', line, re.MULTILINE)
if match:
unitinfo["props"] = unitinfo["props"] + 1
match = re.search(r'^.*: info: .*$', line, re.MULTILINE)
if match:
unitinfo["props"] = unitinfo["props"] + 1
unitinfo["proven"] = unitinfo["proven"] + 1
if unit: units[unit]=unitinfo
return units
else: #except:
print "ERROR reading file " + inputfile
return None
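The two regexes above classify gnatprove diagnostics: `medium|high|low` lines count as properties, and `info:` lines count as both a property and a proven one. A self-contained sketch of that counting logic on hypothetical gnatprove-style lines (illustrative, not real tool output):

```python
import re

# Hypothetical gnatprove-style message lines (illustrative only)
lines = [
    "estimator.adb:65:10: medium: overflow check might fail",
    "estimator.adb:70:4: info: overflow check proved",
]
props = proven = 0
for line in lines:
    if re.search(r'^.*: (medium|high|low): (.*)$', line):
        props += 1          # an unproven (or partially proven) check
    if re.search(r'^.*: info: .*$', line):
        props += 1          # a check that was discharged
        proven += 1
assert (props, proven) == (2, 1)
```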
def get_report_stats(inputfile):
"""
parse the report from gnatprove
"""
if os.stat(inputfile).st_size == 0: return None
unit_pre = ""
subname_pre =""
units = {}
if True: #try:
with open(inputfile, 'r+') as f:
l = 0
for line in f:
l = l + 1
# new unit?
match = re.search(r'^in unit ([^\s:,]+)', line, re.MULTILINE)
if match:
unit = match.group(1).lower()
if unit != unit_pre:
unit_pre = unit
unitinfo = {"subs":0, "props":0, "proven":0, "skip":0}
if unit: units[unit]=unitinfo
else:
proc = False
# location
match = re.search(r'^.*\s+([^\s:]+) at ([^\s:]+):(\d+)', line, re.MULTILINE)
if match:
proc = True
funcname = match.group(1)
filename = match.group(2)
srcline = match.group(3)
subname = filename + ":" + funcname + ":" + srcline
if subname != subname_pre:
subname_pre = subname
unitinfo["subs"] = unitinfo["subs"] + 1
# some are proven
match = re.search(r'^.* (\d+) checks out of (\d+) proved$', line, re.MULTILINE)
if match:
proc = True
pgood = match.group(1)
ptotal = match.group(2)
try:
succ = int(float(pgood) / float(ptotal) * 100)
except:
succ = 0.0
#print " " + funcname + ":" + srcline + " => " + pgood + " of " + ptotal + " proved (" + str(succ) + "%)"
unitinfo["props"] = unitinfo["props"] + int(ptotal)
unitinfo["proven"] = unitinfo["proven"] + int(pgood)
# all are proven
match = re.search(r'^.* and proved \((\d+) checks\)$', line, re.MULTILINE)
if match:
proc = True
pgood = match.group(1)
ptotal = pgood
succ = 100
unitinfo["props"] = unitinfo["props"] + int(ptotal)
unitinfo["proven"] = unitinfo["proven"] + int(pgood)
#print " " + funcname + ":" + srcline + " => " + ptotal + " of " + ptotal + " proved (" + str(succ) + "%)"
# sub skipped
match = re.search(r'^.* skipped', line, re.MULTILINE)
if match:
proc = True
#print " " + funcname + ":" + srcline + " => SKIPPED"
unitinfo["skip"] = unitinfo["skip"] + 1
if not proc:
#print "Unrecognized line " + str(l) + ": " + line
pass
                # NOTE: some properties are not listed in the log. Seems like the flow properties are missing
#Estimator.check_stable_Time at estimator.ads:65 flow analyzed (0 errors and 0 warnings) and not proved, 15 checks out of 16 proved
#Estimator.get_Baro_Height at estimator.ads:52 flow analyzed (0 errors and 0 warnings) and proved (0 checks)
if unit: units[unit]=unitinfo
return units
else: #except:
print "ERROR reading file " + inputfile
return None
def get_totals(reportunits, buildunits, sorting, exclude):
if not reportunits: return None
#################
# COMPLETE DATA
#################
## DATA FROM REPORT
for u,uinfo in reportunits.iteritems():
if uinfo["subs"] > 0:
unit_cov = 100 * (1.0 - (float(uinfo["skip"]) / uinfo["subs"]))
else:
unit_cov = 100
if uinfo["props"] > 0:
unit_success = 100*float(uinfo["proven"])/uinfo["props"]
else:
unit_success = 100
reportunits[u]["success"] = unit_success
reportunits[u]["coverage"] = unit_cov
reportunits[u]["datasrc"] = "report"
## DATA FROM BUILD LOG
if buildunits:
for u,uinfo in buildunits.iteritems():
buildunits[u]["datasrc"] = "log"
buildunits[u]["subs"]=0 # unknown
buildunits[u]["coverage"] = 100 # unknown
buildunits[u]["skip"] = 0 # unknown
if uinfo["props"] > 0:
unit_success = float(uinfo["proven"])/uinfo["props"]*100
else:
unit_success = 100.0
buildunits[u]["success"] = unit_success
#################
# MERGE DATA
#################
    def do_merge(uinfo1, uinfo2):
        uinfo = {}  # build the merged record in a fresh dict (previously this leaked the enclosing loop variable)
        uinfo["datasrc"] = "merged"
for cat in ("subs", "props", "proven", "skip"):
uinfo[cat] = max(uinfo1[cat],uinfo2[cat])
if uinfo["props"] > 0:
uinfo["success"] = 100*float(uinfo["proven"]) / uinfo["props"]
else:
uinfo["success"] = 100.0
if uinfo["subs"] > 0:
uinfo["coverage"] = 100 * (1.0 - (float(uinfo["skip"]) / uinfo["subs"]))
else:
uinfo["coverage"] = 100.0
return uinfo
# ----------
mergedunits=reportunits
if buildunits:
for u,uinfo in buildunits.iteritems():
if u in mergedunits:
mergedunits[u] = do_merge(mergedunits[u], uinfo)
else:
mergedunits[u] = uinfo
################
# FILTER UNITS
################
if exclude:
tmp = mergedunits
mergedunits = {u: uinfo for u,uinfo in tmp.iteritems() if not any(substring in u for substring in exclude) }
## TOTALS
n = len(mergedunits)
total_subs = sum([v["subs"] for k,v in mergedunits.iteritems()])
total_props = sum([v["props"] for k,v in mergedunits.iteritems()])
total_proven = sum([v["proven"] for k,v in mergedunits.iteritems()])
total_skip = sum([v["skip"] for k,v in mergedunits.iteritems()])
total_cov = (sum([v["coverage"] for k,v in mergedunits.iteritems()]) / n) if n > 0 else 0
total_sub_cov = (100*(float(total_subs - total_skip)) / total_subs) if total_subs > 0 else 0
total_success = (100*(float(total_proven) / total_props)) if total_props > 0 else 0
totals = {"units" : n, "unit_cov" : total_cov, "sub_cov" : total_sub_cov, "props" : total_props, "proven":total_proven, "success" : total_success, "subs": total_subs}
#################
# SORT
#################
tmp = [{k : v} for k,v in mergedunits.iteritems()]
def keyfunc(tup):
key, d = tup.iteritems().next()
tmp = [s for s in sorting if s != "alpha"]
order = [d[t] for t in tmp]
return order
if "alpha" in sorting:
        sorted_mergedunits = sorted(tmp, key=lambda x: x.keys()[0])
else:
sorted_mergedunits = sorted(tmp, key=keyfunc, reverse=True)
return totals, sorted_mergedunits
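The sort above orders single-entry `{unit: info}` dicts by a list of numeric criteria. The same idea in a version-agnostic sketch (Python 3 syntax; the script itself is Python 2, and the unit names here are illustrative):

```python
units = [
    {"uart":   {"coverage": 50.0, "success": 90.0}},
    {"sensor": {"coverage": 80.0, "success": 70.0}},
]

def keyfunc(entry):
    # unpack the single {name: info} pair and sort by the chosen criteria
    (name, info), = entry.items()
    return [info[c] for c in ("coverage", "success")]

ordered = sorted(units, key=keyfunc, reverse=True)
assert list(ordered[0]) == ["sensor"]  # highest coverage first
```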
def print_table(units):
if len (units) == 0: return
tab = texttable.Texttable()
tab.set_deco(texttable.Texttable.HEADER)
tab.set_precision(1)
# first row is header
firstrowvalue = units[0].itervalues().next()
header = ["unit"] + [k for k in firstrowvalue.iterkeys()]
num_datacols = (len(header)-1)
alignment = ["l"] + ["r"] * num_datacols
tab.set_cols_align(alignment);
data = [header]
maxlen = 0
for u in units:
u,uinfo = u.iteritems().next()
if len(u) > maxlen: maxlen = len(u)
fields = [v for k,v in uinfo.iteritems()]
data.append([u] + fields)
tab.add_rows(data)
tab.set_cols_width([maxlen] + [8]*num_datacols)
print tab.draw()
def print_usage():
print __file__ + " [OPTION] <gnatprove.out> [<build.log>]"
print ''
print 'OPTIONS:'
print ' --sort=s[,s]*'
print ' sort statistics by criteria (s=' + ",".join(KNOWN_SORT_CRITERIA) + ')'
print ' e.g., "--sort=coverage,success" to sort by coverage, then by success'
print ' --table, t'
print ' print as human-readable table instead of JSON/dict'
print ' --exclude=s[,s]*'
print ' exclude units which contain any of the given strings'
def main(argv):
inputfile = None
buildlogfile = None
sorting = []
exclude = []
table = False
try:
opts, args = getopt.getopt(argv, "hs:te:", ["help","sort=","table","exclude="])
except getopt.GetoptError:
print_usage();
sys.exit(2)
if len(sys.argv) < 2:
print_usage();
sys.exit(0);
for opt, arg in opts:
if opt in ('-h', "--help"):
print_usage()
sys.exit()
elif opt in ('-s', "--sort"):
cands = arg.split(",")
for c in cands:
s = c.strip()
if s in KNOWN_SORT_CRITERIA:
sorting.append(s)
else:
print "Sort criteria '" + s + "' unknown"
elif opt in ('-e', "--exclude"):
cands = arg.split(",")
for c in cands:
s = c.strip()
exclude.append(s)
elif opt in ('-t', '--table'):
table = True
if not sorting:
sorting = KNOWN_SORT_CRITERIA
print "sorting: " + ",".join(sorting)
print "exclude: " + ",".join(exclude)
print "WARNING: this script is deprecated. Use gnatprove_unitstats.py"
print "Known issues: 'props' includes some flow proofs"
inputfile = args[0]
if len(args) > 1: buildlogfile = args[1]
print "report file: " + inputfile
reportunits = get_report_stats(inputfile=inputfile)
if not reportunits: return 1
#pprint.pprint | |
# train_progressive_all_pggan_improvements.py
"""
This file is based on WaveGAN v1: https://github.com/chrisdonahue/wavegan/tree/v1
and the Tensorflow Models implementation of PGGAN: https://github.com/tensorflow/models/tree/master/research/gan/progressive_gan
Both of these are modified and interwoven with new code so it is not practical to point out which section of code is inspired by which, but in this file
the train(), infer() and preview() functions are mostly adapted from WaveGAN, while the checkpointing functionality is adapted from
PGGAN. The "main" part also bears some minor similarities to WaveGAN.
Mixed precision training was inspired by and adapted from these slides: http://on-demand.gputechconf.com/gtc-taiwan/2018/pdf/5-1_Internal%20Speaker_Michael%20Carilli_PDF%20For%20Sharing.pdf
If code is adapted from other sources it is explicitly pointed out.
"""
import os
import time
import numpy as np
import tensorflow as tf
import pickle
from model_progressive_all_pggan_improvements import GANGenerator, GANDiscriminator, block_name
import dataloader_progressive as dataloader
import utils
import argparse
SAMPLING_RATE = 16000
# 100 random inputs for the generator
G_INIT_INPUT_SIZE = 100
D_UPDATES_PER_G_UPDATE = 5
# Set later in main properly
window_size = 16384
batch_size = 64
# G = generator
# D = discriminator
def train(training_data_dir, train_dir, stage_id, num_channels, freeze_early_layers=False, use_mixed_precision_training = False, augmentation_level=0):
print("Training called")
loader = dataloader.Dataloader(window_size, batch_size, training_data_dir, num_channels, augmentation_level=augmentation_level)
iterator = loader.get_next()
x = iterator.get_next()
print(x.get_shape())
G_input = tf.random_uniform([batch_size, G_INIT_INPUT_SIZE], -1., 1., dtype=tf.float32)
if use_mixed_precision_training:
# Generator network
G_input = tf.cast(G_input, tf.float16)
with tf.variable_scope("G", custom_getter=float32_variable_storage_getter):
G_output = GANGenerator(G_input, train=True, num_blocks=stage_id, freeze_early_layers=freeze_early_layers, channels=num_channels)
G_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="G")
# Discriminator with real input data
x = tf.cast(x, tf.float16)
with tf.name_scope("D_real"), tf.variable_scope("D", custom_getter=float32_variable_storage_getter):
D_real_output = GANDiscriminator(x, num_blocks=stage_id, freeze_early_layers=freeze_early_layers)
D_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="D")
# Discriminator with fake input data
# G_output is already in fp16
with tf.name_scope("D_fake"), tf.variable_scope("D", reuse=True, custom_getter=float32_variable_storage_getter):
D_fake_output = GANDiscriminator(G_output, num_blocks=stage_id, freeze_early_layers=freeze_early_layers)
x = tf.cast(x, tf.float32)
G_output = tf.cast(G_output, tf.float32)
D_real_output = tf.cast(D_real_output, tf.float32)
D_fake_output = tf.cast(D_fake_output, tf.float32)
else:
# Generator network
with tf.variable_scope("G"):
G_output = GANGenerator(G_input, train=True, num_blocks=stage_id, freeze_early_layers=freeze_early_layers, channels=num_channels)
G_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="G")
# Discriminator with real input data
with tf.name_scope("D_real"), tf.variable_scope("D"):
D_real_output = GANDiscriminator(x, num_blocks=stage_id, freeze_early_layers=freeze_early_layers)
D_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="D")
# Discriminator with fake input data
with tf.name_scope("D_fake"), tf.variable_scope("D", reuse=True):
D_fake_output = GANDiscriminator(G_output, num_blocks=stage_id, freeze_early_layers=freeze_early_layers)
# Write generator summary
tf.summary.audio("real_input", x, SAMPLING_RATE)
tf.summary.audio("generator_output", G_output, SAMPLING_RATE)
    # RMS = root mean square
G_output_rms = tf.sqrt(tf.reduce_mean(tf.square(G_output[:, :, 0]), axis=1))
real_input_rms = tf.sqrt(tf.reduce_mean(tf.square(x[:, :, 0]), axis=1))
tf.summary.histogram("real_input_rms_batch", real_input_rms)
tf.summary.histogram("G_output_rms_batch", G_output_rms)
# Reduce the rms of batches into a single scalar
tf.summary.scalar("real_input_rms", tf.reduce_mean(real_input_rms))
tf.summary.scalar("G_output_rms", tf.reduce_mean(G_output_rms))
# Only use the WGAN-GP loss for now
G_loss = -tf.reduce_mean(D_fake_output)
D_loss = tf.reduce_mean(D_fake_output) - tf.reduce_mean(D_real_output)
alpha = tf.random_uniform(shape=[batch_size, 1, 1], minval=0., maxval=1.)
# Difference between real input and generator (fake) output
differences = G_output - x
interpolates = x + (alpha * differences)
LAMBDA = 10
if use_mixed_precision_training:
interpolates = tf.cast(interpolates, tf.float16)
with tf.name_scope("D_interpolates"), tf.variable_scope("D", reuse=True, custom_getter=float32_variable_storage_getter):
D_interpolates_output = GANDiscriminator(interpolates, num_blocks=stage_id, freeze_early_layers=freeze_early_layers)
# interpolates = tf.cast(interpolates, tf.float32)
# D_interpolates_output = tf.cast(D_interpolates_output, tf.float32)
gradients = tf.gradients(D_interpolates_output, [interpolates])[0]
gradients = tf.cast(gradients, tf.float32)
slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1, 2]))
gradient_penalty = tf.reduce_mean((slopes - 1.) ** 2.)
D_loss += LAMBDA * gradient_penalty
else:
with tf.name_scope("D_interpolates"), tf.variable_scope("D", reuse=True):
D_interpolates_output = GANDiscriminator(interpolates, num_blocks=stage_id, freeze_early_layers=freeze_early_layers)
gradients = tf.gradients(D_interpolates_output, [interpolates])[0]
slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1, 2]))
gradient_penalty = tf.reduce_mean((slopes - 1.) ** 2.)
D_loss += LAMBDA * gradient_penalty
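The gradient penalty evaluates the discriminator at random points on the segment between each real and fake sample, `x + alpha * (G_output - x)`. A NumPy sketch of that interpolation step (shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(size=(4, 8))
fake = rng.normal(size=(4, 8))
alpha = rng.uniform(size=(4, 1))          # one mixing factor per batch element

interpolates = real + alpha * (fake - real)

# every interpolated value lies between the corresponding real and fake value
low = np.minimum(real, fake)
high = np.maximum(real, fake)
assert np.all((interpolates >= low - 1e-12) & (interpolates <= high + 1e-12))
```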
tf.summary.scalar("Output/real_output", tf.reduce_mean(D_real_output))
tf.summary.scalar("Output/fake_output", tf.reduce_mean(D_fake_output))
tf.summary.scalar("Output/mixed_output", tf.reduce_mean(D_interpolates_output))
tf.summary.scalar("Loss/Generator_loss", G_loss)
tf.summary.scalar("Loss/Discriminator_loss", D_loss)
with tf.variable_scope("optimiser_vars") as var_scope:
# Optimisers - pretty sure for progressive growing changes might be needed, look at the PGGAN paper
G_opt = tf.train.AdamOptimizer(
learning_rate=1e-4,
beta1=0.5,
beta2=0.9
)
D_opt = tf.train.AdamOptimizer(
learning_rate=1e-4,
beta1=0.5,
beta2=0.9
)
if use_mixed_precision_training:
# Adapted from http://on-demand.gputechconf.com/gtc-taiwan/2018/pdf/5-1_Internal%20Speaker_Michael%20Carilli_PDF%20For%20Sharing.pdf
loss_scale = 32.0
G_gradients, G_variables = zip(*G_opt.compute_gradients(G_loss * loss_scale, var_list=G_vars))
G_gradients = [gradient / loss_scale for gradient in G_gradients]
G_train_op = G_opt.apply_gradients(zip(G_gradients, G_variables), global_step=tf.train.get_or_create_global_step())
loss_scale_discriminator = 32.0
D_gradients, D_variables = zip(*D_opt.compute_gradients(D_loss * loss_scale_discriminator, var_list=D_vars))
D_gradients = [gradient / loss_scale_discriminator for gradient in D_gradients]
D_train_op = D_opt.apply_gradients(zip(D_gradients, D_variables))
else:
# Training ops - need to specify the var_list so it does not default to all vars within TRAINABLE_VARIABLES
# See: https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer#minimize
G_train_op = G_opt.minimize(G_loss, var_list=G_vars, global_step=tf.train.get_or_create_global_step())
D_train_op = D_opt.minimize(D_loss, var_list=D_vars)
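Loss scaling, used in the mixed-precision branch above, multiplies the loss before differentiation and divides the gradients afterwards, so small fp16 gradients do not underflow to zero while the recovered gradients are mathematically unchanged. A tiny numeric check of that identity with an analytic gradient:

```python
def grad(w):
    # analytic gradient of loss(w) = w**2
    return 2.0 * w

loss_scale = 32.0
w = 0.5
scaled_grad = loss_scale * grad(w)   # gradient of (loss * loss_scale)
recovered = scaled_grad / loss_scale
assert recovered == grad(w)
```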
optimiser_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=var_scope.name)
scaffold = make_custom_scaffold(stage_id, optimiser_vars, train_dir, freeze_early_layers)
if freeze_early_layers:
print("Early layers frozen")
else:
print("Training all layers")
iterator_init_hook = IteratorInitiasliserHook(iterator, loader.all_sliced_samples)
# Training
with tf.train.MonitoredTrainingSession(
checkpoint_dir=get_train_subdirectory(stage_id, train_dir, freeze_early_layers),
save_checkpoint_secs=300,
save_summaries_secs=120,
hooks=[iterator_init_hook],
scaffold=scaffold) as sess:
print("Training start")
while True:
# Train discriminator
for i in range(D_UPDATES_PER_G_UPDATE):
sess.run(D_train_op)
# print("Discriminator trained")
sess.run(G_train_op)
# print("Generator trained")
# print("Both networks trained")
def infer(train_dir, stage_id, num_channels, use_mixed_precision_training=False):
infer_dir = os.path.join(train_dir, "infer")
if not os.path.isdir(infer_dir):
os.makedirs(infer_dir)
# Subgraph that generates latent vectors
# Number of samples to generate
sample_amount = tf.placeholder(tf.int32, [], name="samp_z_n")
# Input samples
input_samples = tf.random_uniform([sample_amount, G_INIT_INPUT_SIZE], -1.0, 1.0, dtype=tf.float32, name="samp_z")
input_placeholder = tf.placeholder(tf.float32, [None, G_INIT_INPUT_SIZE], name="z")
flat_pad = tf.placeholder(tf.int32, [], name="flat_pad")
# Run the generator
if use_mixed_precision_training:
with tf.variable_scope("G", custom_getter=float32_variable_storage_getter):
generator_output = GANGenerator(input_placeholder, train=False, num_blocks=stage_id, channels=num_channels)
else:
with tf.variable_scope("G"):
generator_output = GANGenerator(input_placeholder, train=False, num_blocks=stage_id, channels=num_channels)
generator_output = tf.identity(generator_output, name="G_z")
# Flatten batch and pad it so there is a pause between generated samples
# Only generate one file
num_channels = int(generator_output.get_shape()[-1])
output_padded = tf.pad(generator_output, [[0, 0], [0, flat_pad], [0, 0]])
output_flattened = tf.reshape(output_padded, [-1, num_channels], name="G_z_flat")
    # Encode to int16 - assumes floats in [-1, 1], i.e., int16 samples previously divided by 32767
def float_to_int16(input_values, name=None):
input_int16 = input_values * 32767
input_int16 = tf.clip_by_value(input_int16, -32767., 32767)
input_int16 = tf.cast(input_int16, tf.int16, name=name)
return input_int16
generator_output_int16 = float_to_int16(generator_output, name="G_z_int16")
generator_output_flat_int16 = float_to_int16(output_flattened, name="G_z_flat_int16")
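The same conversion can be checked outside the graph with NumPy. This is a mirror of `float_to_int16` above for illustration, not the graph op itself:

```python
import numpy as np

def float_to_int16_np(x):
    # scale [-1, 1] floats to the int16 range and clip out-of-range values
    y = np.clip(x * 32767.0, -32767.0, 32767.0)
    return y.astype(np.int16)

samples = np.array([-1.5, -1.0, 0.0, 0.5, 1.0])
out = float_to_int16_np(samples)
assert out.tolist() == [-32767, -32767, 0, 16383, 32767]
```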
# Create saver
G_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="G")
# Need to scope the global_step to the same scope that it has in the train function
with tf.variable_scope("optimiser_vars") as var_scope:
global_step = tf.train.get_or_create_global_step()
saver = tf.train.Saver(G_vars + [global_step])
# Export graph
tf.train.write_graph(tf.get_default_graph(), infer_dir, "infer.pbtxt")
# Export metagraph
infer_metagraph_filepath = os.path.join(infer_dir, "infer.meta")
tf.train.export_meta_graph(
filename=infer_metagraph_filepath,
clear_devices=True,
saver_def=saver.as_saver_def()
)
tf.reset_default_graph()
print("Metagraph construction done")
# Generate preview audio files - should be run in a separate process, parallel to or after training
# Might be problems with it when saving random input vectors with a given amount_to_preview
# and using another value when running it again
def preview(train_dir, amount_to_preview):
preview_dir = os.path.join(train_dir, "preview")
if not os.path.isdir(preview_dir):
os.makedirs(preview_dir)
# Graph loading - might or might not have to change this, we'll see
infer_metagraph_filepath = os.path.join(train_dir, "infer", "infer.meta")
graph = tf.get_default_graph()
saver = tf.train.import_meta_graph(infer_metagraph_filepath)
# Generate or restore the input random latent vector for the generator
# For now restoration is commented out - generate a new latent vector for every run
# It makes sense to use a fixed latent vector while actively training so the improvement
# can be heard
# input_filepath = os.path.join(preview_dir, "z.pkl")
# if os.path.exists(input_filepath):
# with open(input_filepath, "rb") as f:
# input_values = pickle.load(f)
# else:
# Generate random input values for the generator
sample_feeds = {}
sample_feeds[graph.get_tensor_by_name("samp_z_n:0")] = amount_to_preview
sample_fetches = {}
# "zs" are the random input values
sample_fetches["zs"] = graph.get_tensor_by_name("samp_z:0")
with tf.Session() as sess:
fetched_values = sess.run(sample_fetches, sample_feeds)
input_values = fetched_values["zs"]
# Save random input
# with open(input_filepath, "wb") as f:
# pickle.dump(input_values, f)
# Set up the graph for the generator
feeds = {}
feeds[graph.get_tensor_by_name("z:0")] = input_values
# Leave half of win_size length of no audio between samples
feeds[graph.get_tensor_by_name("flat_pad:0")] = SAMPLING_RATE
fetches = {}
fetches["step"] = tf.train.get_or_create_global_step()
# Output of the generator
fetches["G_z"] = graph.get_tensor_by_name("G_z:0")
# Output of the generator, flattened and transformed to int16
fetches["G_z_flat_int16"] = graph.get_tensor_by_name("G_z_flat_int16:0")
# Write summary
output = graph.get_tensor_by_name("G_z_flat:0")
summaries = [tf.summary.audio("preview", tf.expand_dims(output, axis=0), SAMPLING_RATE, max_outputs=1)]
fetches["summaries"] = tf.summary.merge(summaries)
summary_writer = tf.summary.FileWriter(preview_dir)
# Loop, wait until a new checkpoint is found - if found, execute
checkpoint_filepath = None
while True:
latest_checkpoint_filepath = tf.train.latest_checkpoint(train_dir)
if latest_checkpoint_filepath != checkpoint_filepath:
print("Preview: {}".format(latest_checkpoint_filepath))
with tf.Session() as sess:
saver.restore(sess, latest_checkpoint_filepath)
fetches_results = sess.run(fetches, feeds)
training_step = fetches_results["step"]
preview_filepath = os.path.join(preview_dir, "{}.wav".format(str(training_step).zfill(8)))
utils.write_wav_file(preview_filepath, SAMPLING_RATE, fetches_results["G_z_flat_int16"])
summary_writer.add_summary(fetches_results["summaries"], training_step)
print("Wav written")
checkpoint_filepath = latest_checkpoint_filepath
time.sleep(1)
# Stage_id is equivalent to num_blocks for now
def make_custom_scaffold(stage_id, optimiser_var_list, training_root_directory, early_layers_frozen):
restore_var_list = []
previous_checkpoint = None
current_checkpoint = tf.train.latest_checkpoint(get_train_subdirectory(stage_id, training_root_directory, early_layers_frozen))
# Skip var restoration if training only has 1 block and no saved checkpoint
if stage_id > 1 and current_checkpoint is None:
# Check if a frozen checkpoint exists from the current level
previous_checkpoint = tf.train.latest_checkpoint(get_train_subdirectory(stage_id, training_root_directory, True))
# If yes, restore every non-optimiser variable
if previous_checkpoint is not | |
qp\n'][i] = ''.join([' refvolume=',str(Vol),'\n'])
elif item.split('=')[0] == ' isoconstraint':
self.element['# qp\n'][i] = ''.join([' isoconstraint=',str(Dose),'\n'])
elif item.split('=')[0] == ' weight':
self.element['# qp\n'][i] = ''.join([' weight=',str(Weight),'\n'])
elif item.split('=')[0] == ' totalvolume':
self.element['# qp\n'][i] = ''.join([' totalvolume=',str(Opti_all),'\n'])
elif item.split('=')[0] == ' sanesurfacedose':
self.element['# qp\n'][i] = ''.join([' sanesurfacedose=',str(Surf_margin),'\n'])
return self.element['# qp\n']
def modify_po(self,po,Dose,alpha):
'''
This function is target EUD
'''
self.po = po
self.po[18] = ''.join([' isoconstraint=',str(Dose),'\n'])
self.po[10] = ''.join([' alpha=',str(alpha),'\n'])
self.po[-2] = ''.join([' !END\n'])
return self.po[:-1]
def modify_se(self,Dose,Weight,Shrink_margin,Opti_all,Powe_Law):
'''
Serial function
'''
for i,item in enumerate(self.element['# se\n']):
if item.split('=')[0] == ' isoconstraint':
self.element['# se\n'][i] = ''.join([' isoconstraint=',str(Dose),'\n'])
elif item.split('=')[0] == ' weight':
self.element['# se\n'][i] = ''.join([' weight=',str(Weight),'\n'])
elif item.split('=')[0] == ' totalvolume':
self.element['# se\n'][i] = ''.join([' totalvolume=',str(Opti_all),'\n'])
elif item.split('=')[0] == ' exponent':
self.element['# se\n'][i] = ''.join([' exponent=',str(Powe_Law),'\n'])
elif item.split('=')[0] == ' shrinkmargin':
self.element['# se\n'][i] = ''.join([' shrinkmargin=',str(Shrink_margin),'\n'])
return self.element['# se\n']
    def modify_pa(self,pa,Ref_dose,Volume,Weight,Powe_Law,Opti_all,Shrink_margin):
        '''
        Parallel cost function; <pa> is the template block to edit.
        '''
        for i,item in enumerate(pa):
            key = item.split('=')[0]
            if key == ' refdose':
                pa[i] = ''.join([' refdose=',str(Ref_dose),'\n'])
            elif key == ' isoconstraint':
                pa[i] = ''.join([' isoconstraint=',str(Volume),'\n'])
            elif key == ' weight':
                pa[i] = ''.join([' weight=',str(Weight),'\n'])
            elif key == ' exponent':
                pa[i] = ''.join([' exponent=',str(Powe_Law),'\n'])
            elif key == ' shrinkmargin':
                pa[i] = ''.join([' shrinkmargin=',str(Shrink_margin),'\n'])
            elif key == ' totalvolume':
                pa[i] = ''.join([' totalvolume=',str(Opti_all),'\n'])
        return pa
    def modify_mxd(self,mxd,Dose,Weight,Opti_all,Shrink_margin):
        '''
        Maximum dose cost function; <mxd> is the template block to edit.
        '''
        for i,item in enumerate(mxd):
            key = item.split('=')[0]
            if key == ' isoconstraint':
                mxd[i] = ''.join([' isoconstraint=',str(Dose),'\n'])
            elif key == ' weight':
                mxd[i] = ''.join([' weight=',str(Weight),'\n'])
            elif key == ' totalvolume':
                mxd[i] = ''.join([' totalvolume=',str(Opti_all),'\n'])
            elif key == ' shrinkmargin':
                mxd[i] = ''.join([' shrinkmargin=',str(Shrink_margin),'\n'])
        return mxd
    def modify_qod(self,oq,Dose,RMS,Shrink_margin):
        '''
        Quadratic overdose cost function; <oq> is the template block to edit.
        '''
        for i,item in enumerate(oq):
            key = item.split('=')[0]
            if key == ' thresholddose':
                oq[i] = ''.join([' thresholddose=',str(Dose),'\n'])
            elif key == ' isoconstraint':
                oq[i] = ''.join([' isoconstraint=',str(RMS),'\n'])
            elif key == ' shrinkmargin':
                oq[i] = ''.join([' shrinkmargin=',str(Shrink_margin),'\n'])
        return oq
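Each modify_* helper above applies the same edit pattern: scan a list of ` key=value\n` template lines and substitute new values for selected keys. A minimal standalone sketch of that pattern (the helper name `set_keys` and the sample lines are illustrative, not part of the planning class):

```python
def set_keys(lines, updates):
    """Replace ' key=value\n' entries whose stripped key appears in updates,
    preserving each line's leading indentation."""
    for i, line in enumerate(lines):
        key = line.split('=')[0].strip()
        if key in updates:
            indent = line[:len(line) - len(line.lstrip())]
            lines[i] = '%s%s=%s\n' % (indent, key, updates[key])
    return lines

template = [' isoconstraint=50\n', ' weight=1\n', ' shrinkmargin=0\n']
set_keys(template, {'isoconstraint': 54, 'weight': 0.01})
```

Because the list is edited in place and returned, repeated calls on the same block accumulate changes, which is also how the modify_* methods behave.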
def read_csv(self):
self.pres_strt,self.dose_frac = [],[]
self.pres_strt_ind = {} # initialization
import csv
with open(self.csv) as csvfile:
readCSV = csv.reader(csvfile, delimiter=',')
prep = [row for row in readCSV]
prep = [item for item in prep if item != []]
for i,item in enumerate(prep):
prep[i][-1] = prep[i][-1].replace(' ','')
self.pres_strt = list(set([l[0] for l in prep if l[0] != 'prep' and l[0] != 'frac' ]))
self.dose_frac = [l for l in prep if l[0] == 'prep' or l[0] == 'frac' ]
for item in self.pres_strt: self.pres_strt_ind[item] = [] # initialization
for item in prep:
if item[0] != 'prep' and item[0] != 'frac':
if item[2][-1] != '%':
self.pres_strt_ind[item[0]].append((item[1],float(item[2])/100))
else:
self.pres_strt_ind[item[0]].append((item[1],float(item[2][:-1])))
return self.pres_strt,self.dose_frac,self.pres_strt_ind
    def write_colone(self):
        self.template_line.append('')
        s = ''.join(self.template_line)
        with open(self.updated_hyp_path,'w+') as f:
            f.seek(0)
            f.write(s)
    def cf_OAR(self,path_new,OBJ):
        '''
        Convert the objectives of one OAR into the corresponding
        cost-function template lines.
        '''
import re
weight_OARs = 0.01
k_se = 12
k_pa = 3
self.cost_fun = []
for i,j in enumerate(OBJ[1]):
if j[0][0] == 'D':
if j[0] == 'Dmean':
se = self.modify_se(path_new['se'], j[1], weight_OARs, 0, 0, 1)
self.cost_fun.extend(se)
elif j[0] == 'Dmax':
mxd =self.modify_mxd(path_new['mxd'], j[1], weight_OARs, 0, 0)
self.cost_fun.extend(mxd)
else:
se = self.modify_se(path_new['se'], j[1]*0.75, weight_OARs, 0, 0, 16)
self.cost_fun.extend(se)
elif j[0][0] == 'V' :
                ss = re.findall(r"\d+", j[0])
s = float(ss[0])
flag = j[1]
if flag <= 15.0:
se = self.modify_se(path_new['se'], s*0.75, weight_OARs, 3, 0, k_se)
self.cost_fun.extend(se)
else:
pa = self.modify_pa(path_new['pa'], s, flag, weight_OARs, k_pa, 0, 0)
self.cost_fun.extend(pa)
return self.cost_fun
def ge_tem_pros(self,strt_ind_list,path_beam,dose_frac):
import math
self.template_line = []
grid = 3
tar = [(item[0],item[1],float(item[1][0][0][1:])) for item in strt_ind_list if 'PTV' in item[0] or 'PGTV' in item[0] or 'GTV' in item[0]]
tar.sort(key=lambda x:x[2],reverse = True)
tar_nam = [item[0] for item in tar]
OARs_nam = [item[0] for item in strt_ind_list if item[0] not in tar_nam]
prep_name = []
prep_name = prep_name + tar_nam + OARs_nam
##tar_res_nam = [item[0] for item in OARs_nam_level if 'PTV' in item[0] or 'PGTV' in item[0]]
ind = self.path[0].rindex('\\')
path_new = {}
for item in self.path:
path_new[item[int(ind+1):-4]] = self.exist_read_mod(item)
pres = float(dose_frac[1][1])/100
weight_target = 1
weight_OARs = 0.01
k_se = 12
k_pa = 3
RMS = 1
max_dose = 47.5
## ================== part1 ================= ##
part1 = ['000610b6\n','!LAYERING\n']
for item in prep_name:
if item == 'patient' or item == 'BODY':
part1.append(str(' ' + item + '\n'))
else:
part1.append(str(' ' + item + ':T\n'))
part1.append('!END\n')
## ================== part2 ================ ##
part2 = path_new['part2'] ## read template
part2[-2] = ' conformalavoidance=0\n'
part2 = part2[:-1]
target = []
OARs = []
for i,item in enumerate(tar):
if i != len(tar)-1: ## inner target 1
part2[1] = ' name=' + item[0] +'\n'
# prep_v = float((item[1][0][1]+3)/100)
prep_d = float(item[1][0][0][1:])
tar_pen = self.modify_po(path_new['po'],prep_d,0.6)
qod = self.modify_qod(path_new['qod'],prep_d+2,RMS,0)
target = target + part2 + tar_pen + qod
target.append('!END\n')
else: ## external target
part2[1] = ' name=' + item[0] +'\n'
# prep_v = float((item[1][0][1]+3)/100)
prep_d = float(item[1][0][0][1:])
po = self.modify_po(path_new['po'],prep_d,0.6)
## set two quadratic overdose to external targets
qod1 = self.modify_qod(path_new['qod'],pres,RMS,0)
# QOD2 = temp_tool.modify_QOD(QOD_path,prep_d-4,RMS1,grid)
qod2 = self.modify_qod(path_new['qod'],prep_d*1.1,RMS,grid*math.floor(abs(prep_d*1.1-pres)/grid))
target = target + part2 + po +qod1 + qod2
target.append('!END\n')
for item in strt_ind_list:
if item[0] not in tar_nam:
if item[-1] == 5: ## stem and cord
part2[1] = ' name=' + item[0] +'\n'
cf1 = self.cf_OAR(path_new,item)
OARs = OARs + part2 + cf1
OARs.append('!END\n')
elif item[-1] == 6: ## normal tissues
part2[1] = ' name=' + item[0] +'\n'
cf2 = self.cf_OAR(path_new,item)
OARs = OARs + part2 + cf2
OARs.append('!END\n')
elif item[-1] == 7: ## normal tissues
part2[1] = ' name=' + item[0] +'\n'
cf3 = self.cf_OAR(path_new,item)
OARs = OARs + part2 + cf3
OARs.append('!END\n')
elif item[-1] == 8: ## normal tissues
part2[1] = ' name=' + item[0] +'\n'
cf4 = self.cf_OAR(path_new,item)
OARs = OARs + part2 + cf4
OARs.append('!END\n')
elif item[-1] == 9: ## normal tissues
part2[1] = ' name=' + item[0] +'\n'
cf5 = self.cf_OAR(path_new,item)
OARs = OARs + part2 + cf5
OARs.append('!END\n')
elif item[-1] == 100: ## patient
part2[1] = ' name=' + item[0] +'\n'
## global maximum dose
mxd1 = self.modify_mxd(path_new['mxd'], round(pres*1.06,2), weight_OARs, 1, 0)
## the outer target dose
QOD1 = self.modify_qod(path_new['qod'],max_dose,RMS,grid*0)
QOD2 = self.modify_qod(path_new['qod'],max_dose*0.8,RMS/2,grid*math.floor((max_dose*0.2)/grid))
QOD3 = self.modify_qod(path_new['qod'],max_dose*0.6,RMS/2,grid*math.floor((max_dose*0.4)/grid))
OARs = OARs + part2 + mxd1 + QOD1 + QOD2 + QOD3
OARs.append('!END\n')
## ================== part3 ================ ##
part3 = path_new['part3']
part3[-2] = '!END\n'
## ================== part4 ================ ##
part4 = self.exist_read_mod(path_beam)
part4[-2] = '!END\n'
## ================== part5 ================ ##
part5 = path_new['part5']
for i,item in enumerate(part5):
if 'FRACTIONS' in item:
part5[i] = ''.join(['!FRACTIONS ',dose_frac[0][1],'\n'])
elif 'PRESCRIPTION' in item:
part5[i] = ''.join(['!PRESCRIPTION ',str(float(dose_frac[1][1])/100),'\n'])
## ================== template ==================== ##
self.template_line = self.template_line + part1 + target + OARs + part3[:-1] + part4[:-1] + part5[:-1]
print('###############################')
print('template has been generated !')
print('###############################')
return self.template_line
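The part1 loop above emits the !LAYERING header: every structure except the patient/BODY contour is tagged `:T`. A standalone sketch of that step (`layering_block` is an illustrative name; the `'000610b6'` magic header and single-space indent are copied from the loop above):

```python
def layering_block(structures, magic='000610b6'):
    """Build a !LAYERING header block: every structure except the
    patient/BODY contour gets the ':T' tag."""
    lines = [magic + '\n', '!LAYERING\n']
    for name in structures:
        suffix = '' if name in ('patient', 'BODY') else ':T'
        lines.append(' ' + name + suffix + '\n')
    lines.append('!END\n')
    return lines

layering_block(['PTV60', 'BODY'])
```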
def hyp_solution_XHTOMO_HEADNECK(self,grid,fractions,prescription_dose,delivery_type):
'''
tar = [('PGTVrpn', 0.95, 61.6),
('PGTVnx', 0.95, 61.6),
('PGTVnd', 0.95, 59.36),
('PCTV', 0.95, 50.4)]
OARs_level1 = ['Brain Stem','Spinal Cord']
OARs_level2 = ['Optical Chiasm','Optical Nerve R','Optical Nerve L','Lens R','Lens L']
        OARs_level3 = ['Eye R','Eye L','Parotid R','Parotid L','Pituitary','Brain']
OARs_level4 = ['T.Joint R','T.Joint L','T.Lobe R','T.Lobe L','Larynx','A.D L','A.D R','Mandible','Oral Cavity','Lung']
'''
OARs_level1 = ['Brain Stem','Spinal Cord']
OARs_level2 = ['Optical Chiasm','Optical Nerve R','Optical Nerve L','Lens R','Lens L','Eye R','Eye L']
OARs_level3 = ['Parotid R','Parotid L','Pituitary','Brain','Larynx','Oral Cavity']
OARs_level4 = ['T.Joint R','T.Joint L','T.Lobe R','T.Lobe L','A.D L','A.D R','Mandible','Lung']
| |
1, (10, 10), 'uint32')),
(IMPL_32_1_15F, (IMPL, 32, 1, 0, 15, (15, 10, 10), 'uint32')),
(EXPL_32_3_1F, (EXPL, 32, 3, 0, 1, (100, 100, 3), 'uint32')),
(EXPL_32_3_2F, (EXPL, 32, 3, 0, 2, (2, 100, 100, 3), 'uint32')),
]
@pytest.mark.skipif(not HAVE_NP, reason='Numpy is not available')
class TestNumpy_NumpyHandler:
"""Tests for handling Pixel Data with the handler."""
def setup(self):
"""Setup the test datasets and the environment."""
self.original_handlers = config.pixel_data_handlers
config.pixel_data_handlers = [NP_HANDLER]
def teardown(self):
"""Restore the environment."""
config.pixel_data_handlers = self.original_handlers
def test_environment(self):
"""Check that the testing environment is as expected."""
assert HAVE_NP
assert NP_HANDLER is not None
def test_unsupported_syntax_raises(self):
"""Test pixel_array raises exception for unsupported syntaxes."""
ds = dcmread(EXPL_16_1_1F)
for uid in UNSUPPORTED_SYNTAXES:
ds.file_meta.TransferSyntaxUID = uid
with pytest.raises((NotImplementedError, RuntimeError)):
ds.pixel_array
def test_dataset_pixel_array_handler_needs_convert(self):
"""Test Dataset.pixel_array when converting to RGB."""
ds = dcmread(EXPL_8_3_1F)
# Convert to YBR first
arr = ds.pixel_array
assert (255, 0, 0) == tuple(arr[5, 50, :])
arr = convert_color_space(arr, 'RGB', 'YBR_FULL')
ds.PixelData = arr.tobytes()
del ds._pixel_array # Weird PyPy2 issue without this
# Test normal functioning (False)
assert (76, 85, 255) == tuple(ds.pixel_array[5, 50, :])
def needs_convert(ds):
"""Change the default return to True"""
return True
# Test modified
orig_fn = NP_HANDLER.needs_to_convert_to_RGB
NP_HANDLER.needs_to_convert_to_RGB = needs_convert
# Ensure the pixel array gets updated
ds._pixel_id = None
assert (254, 0, 0) == tuple(ds.pixel_array[5, 50, :])
# Reset
NP_HANDLER.needs_to_convert_to_RGB = orig_fn
def test_dataset_pixel_array_no_pixels(self):
"""Test good exception message if no pixel data in dataset."""
ds = dcmread(NO_PIXEL)
msg = (
r"Unable to convert the pixel data: one of Pixel Data, Float "
r"Pixel Data or Double Float Pixel Data must be present in the "
r"dataset"
)
with pytest.raises(AttributeError, match=msg):
ds.pixel_array
@pytest.mark.parametrize("fpath, data", REFERENCE_DATA_UNSUPPORTED)
def test_can_access_unsupported_dataset(self, fpath, data):
"""Test can read and access elements in unsupported datasets."""
ds = dcmread(fpath)
assert data[0] == ds.file_meta.TransferSyntaxUID
assert data[1] == ds.PatientName
def test_pixel_array_8bit_un_signed(self):
"""Test pixel_array for 8-bit unsigned -> signed data."""
ds = dcmread(EXPL_8_1_1F)
# 0 is unsigned int, 1 is 2's complement
assert ds.PixelRepresentation == 0
ds.PixelRepresentation = 1
arr = ds.pixel_array
ref = dcmread(EXPL_8_1_1F)
assert not np.array_equal(arr, ref.pixel_array)
assert (600, 800) == arr.shape
assert -12 == arr[0].min() == arr[0].max()
assert (1, -10, 1) == tuple(arr[300, 491:494])
assert 0 == arr[-1].min() == arr[-1].max()
@pytest.mark.parametrize("handler_name", SUPPORTED_HANDLER_NAMES)
def test_decompress_using_handler(self, handler_name):
"""Test different possibilities for the numpy handler name."""
ds = dcmread(EXPL_8_1_1F)
ds.decompress(handler_name)
assert (600, 800) == ds.pixel_array.shape
assert 244 == ds.pixel_array[0].min() == ds.pixel_array[0].max()
assert (1, 246, 1) == tuple(ds.pixel_array[300, 491:494])
assert 0 == ds.pixel_array[-1].min() == ds.pixel_array[-1].max()
def test_pixel_array_16bit_un_signed(self):
"""Test pixel_array for 16-bit unsigned -> signed."""
ds = dcmread(EXPL_16_3_1F)
# 0 is unsigned int, 1 is 2's complement
assert ds.PixelRepresentation == 0
ds.PixelRepresentation = 1
arr = ds.pixel_array
ref = dcmread(EXPL_16_3_1F)
assert not np.array_equal(arr, ref.pixel_array)
assert (100, 100, 3) == arr.shape
assert -1 == arr[0, :, 0].min() == arr[0, :, 0].max()
assert -32640 == arr[50, :, 0].min() == arr[50, :, 0].max()
def test_pixel_array_32bit_un_signed(self):
"""Test pixel_array for 32-bit unsigned -> signed."""
ds = dcmread(EXPL_32_3_1F)
# 0 is unsigned int, 1 is 2's complement
assert ds.PixelRepresentation == 0
ds.PixelRepresentation = 1
arr = ds.pixel_array
ref = dcmread(EXPL_32_3_1F)
assert not np.array_equal(arr, ref.pixel_array)
assert (100, 100, 3) == arr.shape
assert -1 == arr[0, :, 0].min() == arr[0, :, 0].max()
assert -2139062144 == arr[50, :, 0].min() == arr[50, :, 0].max()
# Endian independent datasets
def test_8bit_1sample_1frame(self):
"""Test pixel_array for 8-bit, 1 sample/pixel, 1 frame."""
# Check supported syntaxes
ds = dcmread(EXPL_8_1_1F)
for uid in SUPPORTED_SYNTAXES:
ds.file_meta.TransferSyntaxUID = uid
arr = ds.pixel_array
assert arr.flags.writeable
assert (600, 800) == arr.shape
assert 244 == arr[0].min() == arr[0].max()
assert (1, 246, 1) == tuple(arr[300, 491:494])
assert 0 == arr[-1].min() == arr[-1].max()
def test_8bit_1sample_2frame(self):
"""Test pixel_array for 8-bit, 1 sample/pixel, 2 frame."""
# Check supported syntaxes
ds = dcmread(EXPL_8_1_2F)
for uid in SUPPORTED_SYNTAXES[:3]:
ds.file_meta.TransferSyntaxUID = uid
arr = ds.pixel_array
assert arr.flags.writeable
assert (2, 600, 800) == arr.shape
# Frame 1
assert 244 == arr[0, 0].min() == arr[0, 0].max()
assert (1, 246, 1) == tuple(arr[0, 300, 491:494])
assert 0 == arr[0, -1].min() == arr[0, -1].max()
# Frame 2 is frame 1 inverted
assert np.array_equal((2 ** ds.BitsAllocated - 1) - arr[1], arr[0])
def test_8bit_3sample_1frame_odd_size(self):
"""Test pixel_array for odd sized (3x3) pixel data."""
# Check supported syntaxes
ds = dcmread(EXPL_8_3_1F_ODD)
for uid in SUPPORTED_SYNTAXES:
ds.file_meta.TransferSyntaxUID = uid
arr = ds.pixel_array
assert ds.pixel_array[0].tolist() == [
[166, 141, 52], [166, 141, 52], [166, 141, 52]
]
assert ds.pixel_array[1].tolist() == [
[63, 87, 176], [63, 87, 176], [63, 87, 176]
]
assert ds.pixel_array[2].tolist() == [
[158, 158, 158], [158, 158, 158], [158, 158, 158]
]
def test_8bit_3sample_1frame_ybr422(self):
"""Test pixel_array for YBR_FULL_422 pixel data."""
ds = dcmread(EXPL_8_3_1F_YBR422)
assert ds.PhotometricInterpretation == 'YBR_FULL_422'
arr = ds.pixel_array
# Check resampling
assert [
[76, 85, 255],
[76, 85, 255],
[76, 85, 255],
[76, 85, 255]
] == arr[0:4, 0, :].tolist()
# Check values
assert (76, 85, 255) == tuple(arr[5, 50, :])
assert (166, 106, 193) == tuple(arr[15, 50, :])
assert (150, 46, 20) == tuple(arr[25, 50, :])
assert (203, 86, 75) == tuple(arr[35, 50, :])
assert (29, 255, 107) == tuple(arr[45, 50, :])
assert (142, 193, 118) == tuple(arr[55, 50, :])
assert (0, 128, 128) == tuple(arr[65, 50, :])
assert (64, 128, 128) == tuple(arr[75, 50, :])
assert (192, 128, 128) == tuple(arr[85, 50, :])
assert (255, 128, 128) == tuple(arr[95, 50, :])
def test_8bit_3sample_1frame(self):
"""Test pixel_array for 8-bit, 3 sample/pixel, 1 frame."""
# Check supported syntaxes
ds = dcmread(EXPL_8_3_1F)
for uid in SUPPORTED_SYNTAXES:
ds.file_meta.TransferSyntaxUID = uid
arr = ds.pixel_array
assert arr.flags.writeable
assert (255, 0, 0) == tuple(arr[5, 50, :])
assert (255, 128, 128) == tuple(arr[15, 50, :])
assert (0, 255, 0) == tuple(arr[25, 50, :])
assert (128, 255, 128) == tuple(arr[35, 50, :])
assert (0, 0, 255) == tuple(arr[45, 50, :])
assert (128, 128, 255) == tuple(arr[55, 50, :])
assert (0, 0, 0) == tuple(arr[65, 50, :])
assert (64, 64, 64) == tuple(arr[75, 50, :])
assert (192, 192, 192) == tuple(arr[85, 50, :])
assert (255, 255, 255) == tuple(arr[95, 50, :])
def test_8bit_3sample_2frame(self):
"""Test pixel_array for 8-bit, 3 sample/pixel, 2 frame."""
# Check supported syntaxes
ds = dcmread(EXPL_8_3_2F)
for uid in SUPPORTED_SYNTAXES:
ds.file_meta.TransferSyntaxUID = uid
arr = ds.pixel_array
assert arr.flags.writeable
# Frame 1
frame = arr[0]
assert (255, 0, 0) == tuple(frame[5, 50, :])
assert (255, 128, 128) == tuple(frame[15, 50, :])
assert (0, 255, 0) == tuple(frame[25, 50, :])
assert (128, 255, 128) == tuple(frame[35, 50, :])
assert (0, 0, 255) == tuple(frame[45, 50, :])
assert (128, 128, 255) == tuple(frame[55, 50, :])
assert (0, 0, 0) == tuple(frame[65, 50, :])
assert (64, 64, 64) == tuple(frame[75, 50, :])
assert (192, 192, 192) == tuple(frame[85, 50, :])
assert (255, 255, 255) == tuple(frame[95, 50, :])
# Frame 2 is frame 1 inverted
assert np.array_equal((2 ** ds.BitsAllocated - 1) - arr[1], arr[0])
# Little endian datasets
@pytest.mark.parametrize('fpath, data', REFERENCE_DATA_LITTLE)
def test_properties(self, fpath, data):
"""Test dataset and pixel array properties are as expected."""
ds = dcmread(fpath)
assert ds.file_meta.TransferSyntaxUID == data[0]
assert ds.BitsAllocated == data[1]
assert ds.SamplesPerPixel == data[2]
assert ds.PixelRepresentation == data[3]
assert getattr(ds, 'NumberOfFrames', 1) == data[4]
# Check all little endian syntaxes
for uid in SUPPORTED_SYNTAXES[:3]:
ds.file_meta.TransferSyntaxUID = uid
arr = ds.pixel_array
assert data[5] == arr.shape
assert arr.dtype == data[6]
# Default to 1 if element not present
nr_frames = getattr(ds, 'NumberOfFrames', 1)
# Odd sized data is padded by a final 0x00 byte
size = ds.Rows * ds.Columns * nr_frames * data[1] / 8 * data[2]
# YBR_FULL_422 data is 2/3rds usual size
if ds.PhotometricInterpretation == 'YBR_FULL_422':
size = size // 3 * 2
assert len(ds.PixelData) == size + size % 2
if size % 2:
assert ds.PixelData[-1] == b'\x00'[0]
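The expected-size arithmetic in test_properties above (bits allocated, samples per pixel, frames, odd-length padding, and the two-thirds size of YBR_FULL_422 data) can be sketched on its own. The helper name is illustrative, not a pydicom API:

```python
def expected_pixel_data_length(rows, cols, frames, bits_allocated,
                               samples_per_pixel, photometric='MONOCHROME2'):
    """Expected length in bytes of an uncompressed Pixel Data element,
    including the trailing pad byte required for odd-length values."""
    size = rows * cols * frames * samples_per_pixel * bits_allocated // 8
    if photometric == 'YBR_FULL_422':
        # chroma channels are subsampled 2:1 horizontally -> 2/3 of full size
        size = size // 3 * 2
    return size + size % 2  # pad to an even number of bytes

# 3x3 RGB, 8-bit: 27 raw bytes, padded to 28
expected_pixel_data_length(3, 3, 1, 8, 3)
```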
def test_little_1bit_1sample_1frame(self):
"""Test pixel_array for little 1-bit, 1 sample/pixel, 1 frame."""
# Check all little endian syntaxes
ds = dcmread(EXPL_1_1_1F)
for uid in SUPPORTED_SYNTAXES[:3]:
ds.file_meta.TransferSyntaxUID = uid
arr = ds.pixel_array
assert arr.flags.writeable
assert arr.max() == 1
assert arr.min() == 0
assert (0, 1, 1) == tuple(arr[155, 180:183])
assert (1, 0, 1, 0) == tuple(arr[155, 310:314])
| |
# -*- coding: utf-8 -*-
"""Test for tmuxp configuration import, inlining, expanding and export.
tmuxp.tests.config
~~~~~~~~~~~~~~~~~~
"""
from __future__ import absolute_import, division, print_function, \
with_statement, unicode_literals
import logging
import os
import shutil
import tempfile
import unittest
import kaptan
from .helpers import TestCase
from .util import EnvironmentVarGuard
from .. import config, exc
logger = logging.getLogger(__name__)
TMUXP_DIR = os.path.join(os.path.dirname(__file__), '.tmuxp')
current_dir = os.path.abspath(os.path.dirname(__file__))
example_dir = os.path.abspath(os.path.join(
current_dir, '..', '..', 'examples'))
sampleconfigdict = {
'session_name': 'sampleconfig',
'start_directory': '~',
'windows': [
{
'window_name': 'editor',
'panes': [
{
'start_directory': '~',
'shell_command': ['vim'],
}, {
'shell_command': ['cowsay "hey"']
},
],
'layout': 'main-verticle'
},
{
'window_name': 'logging',
'panes': [{
'shell_command': ['tail -F /var/log/syslog'],
'start_directory':'/var/log'
}]
},
{
'options': {
'automatic_rename': True,
},
'panes': [
{'shell_command': ['htop']}
]
}
]
}
class ImportExportTest(TestCase):
def setUp(self):
self.tmp_dir = tempfile.mkdtemp(suffix='tmuxp')
def tearDown(self):
if os.path.isdir(self.tmp_dir):
shutil.rmtree(self.tmp_dir)
            logging.debug('wiped %s' % self.tmp_dir)
def test_export_json(self):
json_config_file = os.path.join(self.tmp_dir, 'config.json')
configparser = kaptan.Kaptan()
configparser.import_config(sampleconfigdict)
json_config_data = configparser.export('json', indent=2)
with open(json_config_file, 'w') as buf:
buf.write(json_config_data)
new_config = kaptan.Kaptan()
new_config_data = new_config.import_config(json_config_file).get()
self.assertDictEqual(sampleconfigdict, new_config_data)
def test_export_yaml(self):
yaml_config_file = os.path.join(self.tmp_dir, 'config.yaml')
configparser = kaptan.Kaptan()
sampleconfig = config.inline(sampleconfigdict)
configparser.import_config(sampleconfig)
yaml_config_data = configparser.export(
'yaml', indent=2, default_flow_style=False)
with open(yaml_config_file, 'w') as buf:
buf.write(yaml_config_data)
new_config = kaptan.Kaptan()
new_config_data = new_config.import_config(yaml_config_file).get()
self.assertDictEqual(sampleconfigdict, new_config_data)
def test_scan_config(self):
configs = []
garbage_file = os.path.join(self.tmp_dir, 'config.psd')
with open(garbage_file, 'w') as buf:
buf.write('wat')
if os.path.exists(self.tmp_dir):
for r, d, f in os.walk(self.tmp_dir):
for filela in (
                    x for x in f if x.endswith(('.json', '.ini', '.yaml'))
):
configs.append(os.path.join(
self.tmp_dir, filela))
files = 0
if os.path.exists(os.path.join(self.tmp_dir, 'config.json')):
files += 1
self.assertIn(os.path.join(
self.tmp_dir, 'config.json'), configs)
if os.path.exists(os.path.join(self.tmp_dir, 'config.yaml')):
files += 1
self.assertIn(os.path.join(
self.tmp_dir, 'config.yaml'), configs)
if os.path.exists(os.path.join(self.tmp_dir, 'config.ini')):
files += 1
self.assertIn(os.path.join(self.tmp_dir, 'config.ini'), configs)
self.assertEqual(len(configs), files)
class ExpandTest(TestCase):
"""Assume configuration has been imported into a python dict correctly."""
before_config = {
'session_name': 'sampleconfig',
'start_directory': '~',
'windows': [
{
'window_name': 'editor',
'panes': [
{
'shell_command': [
'vim',
'top'
]
},
{
'shell_command': ['vim'],
},
{
'shell_command': 'cowsay "hey"'
}
],
'layout': 'main-verticle'
},
{
'window_name': 'logging',
'panes': [
{
'shell_command': ['tail -F /var/log/syslog'],
}
]
},
{
'start_directory': '/var/log',
'options': {'automatic_rename': True, },
'panes': [
{
'shell_command': 'htop'
},
'vim',
]
},
{
'start_directory': './',
'panes': [
'pwd'
]
},
{
'start_directory': './asdf/',
'panes': [
'pwd'
]
},
{
'start_directory': '../',
'panes': [
'pwd'
]
},
{
'panes': [
'top'
]
}
]
}
after_config = {
'session_name': 'sampleconfig',
'start_directory': os.path.expanduser('~'),
'windows': [
{
'window_name': 'editor',
'panes': [
{
'shell_command': ['vim', 'top'],
},
{
'shell_command': ['vim'],
},
{
'shell_command': ['cowsay "hey"']
},
],
'layout': 'main-verticle'
},
{
'window_name': 'logging',
'panes': [
{
'shell_command': ['tail -F /var/log/syslog'],
}
]
},
{
'start_directory': '/var/log',
'options': {'automatic_rename': True},
'panes': [
{'shell_command': ['htop']},
{'shell_command': ['vim']}
]
},
{
'start_directory': os.path.normpath(os.path.join(os.path.join(os.path.expanduser('~'), './'))),
'panes': [
{'shell_command': ['pwd']}
]
},
{
'start_directory': os.path.normpath(os.path.join(os.path.join(os.path.expanduser('~'), './asdf'))),
'panes': [
{'shell_command': ['pwd']}
]
},
{
'start_directory': os.path.normpath(os.path.join(os.path.expanduser('~'), '../')),
'panes': [
{'shell_command': ['pwd']}
]
},
{
'panes': [
{'shell_command': ['top']}
]
}
]
}
def test_config(self):
"""Expand shell commands from string to list."""
self.maxDiff = None
test_config = config.expand(self.before_config)
self.assertDictEqual(test_config, self.after_config)
def test_no_window_name(self):
"""Expand shell commands from string to list."""
unexpanded_yaml = """
session_name: sampleconfig
start_directory: '~'
windows:
- window_name: focused window
layout: main-horizontal
focus: true
panes:
- shell_command:
- cd ~
- shell_command:
- cd /usr
focus: true
- shell_command:
- cd ~
- echo "moo"
- top
- window_name: window 2
panes:
- shell_command:
- top
focus: true
- shell_command:
- echo "hey"
- shell_command:
- echo "moo"
- window_name: window 3
panes:
- shell_command: cd /
focus: true
- pane
- pane
"""
expanded_yaml = """
session_name: sampleconfig
start_directory: {HOME}
windows:
- window_name: focused window
layout: main-horizontal
focus: true
panes:
- shell_command:
- cd ~
- shell_command:
- cd /usr
focus: true
- shell_command:
- cd ~
- echo "moo"
- top
- window_name: window 2
panes:
- shell_command:
- top
focus: true
- shell_command:
- echo "hey"
- shell_command:
- echo "moo"
- window_name: window 3
panes:
- shell_command:
- cd /
focus: true
- shell_command: []
- shell_command: []
""".format(
HOME=os.path.expanduser('~')
)
self.maxDiff = None
unexpanded_dict = kaptan.Kaptan(handler='yaml'). \
import_config(unexpanded_yaml).get()
expanded_dict = kaptan.Kaptan(handler='yaml'). \
import_config(expanded_yaml).get()
self.assertDictEqual(
config.expand(unexpanded_dict),
expanded_dict
)
class InlineTest(TestCase):
"""Tests for :meth:`config.inline()`."""
before_config = {
'session_name': 'sampleconfig',
'start_directory': '~',
'windows': [
{
'shell_command': ['top'],
'window_name': 'editor',
'panes': [
{
'shell_command': ['vim'],
}, {
'shell_command': ['cowsay "hey"']
},
],
'layout': 'main-verticle'
},
{
'window_name': 'logging',
'panes': [
{
'shell_command': ['tail -F /var/log/syslog'],
}
]
},
{
'options': {'automatic_rename': True, },
'panes': [
{'shell_command': ['htop']}
]
}
]
}
after_config = {
'session_name': 'sampleconfig',
'start_directory': '~',
'windows': [
{
'shell_command': 'top',
'window_name': 'editor',
'panes': [
'vim',
'cowsay "hey"'
],
'layout': 'main-verticle'
},
{
'window_name': 'logging',
'panes': [
'tail -F /var/log/syslog',
]
},
{
'options': {
'automatic_rename': True,
},
'panes': [
'htop'
]
},
]
}
def test_config(self):
""":meth:`config.inline()` shell commands list to string."""
self.maxDiff = None
test_config = config.inline(self.before_config)
self.assertDictEqual(test_config, self.after_config)
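The expand/inline pair exercised above are near-inverse transforms over the pane `shell_command` field: expand normalises bare strings (and bare-string panes) to lists, while inline collapses single-command panes back to strings. A pared-down sketch of that normalisation, using hypothetical helpers rather than the real `config.expand`/`config.inline`:

```python
def expand_pane(pane):
    """Normalise a pane entry to {'shell_command': [list of commands]}."""
    if isinstance(pane, str):                    # bare string pane
        pane = {'shell_command': [pane]}
    elif isinstance(pane.get('shell_command'), str):
        pane['shell_command'] = [pane['shell_command']]
    elif 'shell_command' not in pane:            # empty pane -> no commands
        pane['shell_command'] = []
    return pane

def inline_pane(pane):
    """Collapse a single-command pane back to its string form."""
    cmds = pane.get('shell_command', [])
    return cmds[0] if len(cmds) == 1 and len(pane) == 1 else pane

expand_pane('htop')
inline_pane({'shell_command': ['vim']})
```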
class InheritanceTest(TestCase):
"""Test config inheritance for the nested 'start_command'."""
config_before = {
'session_name': 'sampleconfig',
'start_directory': '/',
'windows': [
{
'window_name': 'editor',
'start_directory': '~',
'panes': [
{
'shell_command': ['vim'],
},
{
'shell_command': ['cowsay "hey"']
},
],
'layout': 'main-verticle'
},
{
'window_name': 'logging',
'panes': [
{
'shell_command': ['tail -F /var/log/syslog'],
}
]
},
{
'window_name': 'shufu',
'panes': [
{
'shell_command': ['htop'],
}
]
},
{
'options': {
'automatic_rename': True,
},
'panes': [
{
'shell_command': ['htop']
}
]
}
]
}
config_after = {
'session_name': 'sampleconfig',
'start_directory': '/',
'windows': [
{
'window_name': 'editor',
'start_directory': '~',
'panes': [
{
'shell_command': ['vim'],
}, {
'shell_command': ['cowsay "hey"'],
},
],
'layout': 'main-verticle'
},
{
'window_name': 'logging',
'panes': [
{
'shell_command': ['tail -F /var/log/syslog'],
}
]
},
{
'window_name': 'shufu',
'panes': [
{
'shell_command': ['htop'],
}
]
},
{
'options': {'automatic_rename': True, },
'panes': [
{
'shell_command': ['htop'],
}
]
}
]
}
def test_start_directory(self):
config = self.config_before
if 'start_directory' in config:
session_start_directory = config['start_directory']
else:
session_start_directory = None
for windowconfitem in config['windows']:
window_start_directory = None
# TODO: Look at verifying window_start_directory
if 'start_directory' in windowconfitem:
window_start_directory = windowconfitem['start_directory']
elif session_start_directory:
window_start_directory = session_start_directory
for paneconfitem in windowconfitem['panes']:
# if 'start_directory' in paneconfitem:
# pane_start_directory = paneconfitem['start_directory']
# elif window_start_directory:
# paneconfitem['start_directory'] = window_start_directory
# elif session_start_directory:
# paneconfitem['start_directory'] = session_start_directory
pass
self.maxDiff = None
self.assertDictEqual(config, self.config_after)
class ShellCommandBeforeTest(TestCase):
"""Config inheritance for the nested 'start_command'."""
config_unexpanded = { # shell_command_before is string in some areas
'session_name': 'sampleconfig',
'start_directory': '/',
'windows': [
{
'window_name': 'editor',
'start_directory': '~',
'shell_command_before': 'source .venv/bin/activate',
'panes': [
{
'shell_command': ['vim'],
},
{
'shell_command_before': ['rbenv local 2.0.0-p0'],
'shell_command': ['cowsay "hey"']
},
],
'layout': 'main-verticle'
},
{
'shell_command_before': 'rbenv local 2.0.0-p0',
'window_name': 'logging',
'panes': [
{
'shell_command': ['tail -F /var/log/syslog'],
},
{
}
]
},
{
'window_name': 'shufu',
'panes': [
{
'shell_command_before': ['rbenv local 2.0.0-p0'],
'shell_command': ['htop'],
}
]
},
{
'options': {'automatic_rename': True, },
'panes': [
{'shell_command': ['htop']}
]
},
{
'panes': ['top']
}
]
}
config_expanded = { # shell_command_before is string in some areas
'session_name': 'sampleconfig',
'start_directory': '/',
'windows': [
{
'window_name': 'editor',
'start_directory': os.path.expanduser('~'),
'shell_command_before': ['source .venv/bin/activate'],
'panes': [
{
'shell_command': ['vim'],
},
{
'shell_command_before': ['rbenv local 2.0.0-p0'],
'shell_command': ['cowsay "hey"']
},
],
'layout': 'main-verticle'
},
{
'shell_command_before': ['rbenv local 2.0.0-p0'],
'window_name': 'logging',
'panes': [
{
'shell_command': ['tail -F /var/log/syslog'],
},
{
'shell_command': []
}
]
},
{
'window_name': 'shufu',
'panes': [
{
'shell_command_before': ['rbenv local 2.0.0-p0'],
'shell_command': ['htop'],
}
]
},
{
'options': {'automatic_rename': True, },
'panes': [
{'shell_command': ['htop']}
]
},
{
'panes': [{
'shell_command': ['top']
}]
},
]
}
config_after = { # shell_command_before is string in some areas
'session_name': 'sampleconfig',
'start_directory': '/',
'windows': [
{
'window_name': 'editor',
'start_directory': os.path.expanduser('~'),
'shell_command_before': ['source .venv/bin/activate'],
'panes': [
{
'shell_command': ['source .venv/bin/activate', 'vim'],
},
{
'shell_command_before': ['rbenv local 2.0.0-p0'],
'shell_command': [
'source .venv/bin/activate',
'rbenv local 2.0.0-p0', 'cowsay "hey"'
]
},
],
'layout': 'main-verticle'
},
{
'shell_command_before': ['rbenv local 2.0.0-p0'],
'start_directory': '/',
'window_name': 'logging',
'panes': [
{
'shell_command': [
'rbenv local 2.0.0-p0',
'tail -F /var/log/syslog'
],
},
{
'shell_command': ['rbenv local 2.0.0-p0']
}
]
},
{
| |
# federatedcloud/FRB_pipeline: Pipeline/Modules/friends.py
''' This module contains tools to run a friends-of-friends search algorithm.
    The main function is fof(), which is found at the bottom of this file.
    Most of the other functions defined in this module are called by fof(),
    and some are useful on their own as well.
'''
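For reference, the clustering step works like this: friends-of-friends groups every above-threshold pixel with its above-threshold neighbours, growing each cluster until no neighbour qualifies. A dependency-free sketch over a 2-D array of significances (the real fof() below operates on dynamic spectra and adds fitting and statistics on top of this):

```python
def fof_clusters(image, threshold, linking=1):
    """Group pixels with image[v][t] > threshold whose coordinates differ
    by at most `linking` in each axis; returns a list of coordinate lists."""
    nv, nt = len(image), len(image[0])
    seen, clusters = set(), []
    for seed in ((v, t) for v in range(nv) for t in range(nt)
                 if image[v][t] > threshold):
        if seed in seen:
            continue
        stack, members = [seed], []
        seen.add(seed)
        while stack:               # flood-fill from the seed pixel
            v, t = stack.pop()
            members.append((v, t))
            for dv_ in range(-linking, linking + 1):
                for dt_ in range(-linking, linking + 1):
                    nb = (v + dv_, t + dt_)
                    if (0 <= nb[0] < nv and 0 <= nb[1] < nt
                            and nb not in seen
                            and image[nb[0]][nb[1]] > threshold):
                        seen.add(nb)
                        stack.append(nb)
        clusters.append(members)
    return clusters

grid = [[0, 5, 5, 0],
        [0, 5, 0, 0],
        [0, 0, 0, 7]]
fof_clusters(grid, 1)  # two clusters: the blob of 5s and the lone 7
```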
# General Imports
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np
import math
import scipy.odr as odr
from scipy.stats import linregress
import scipy.ndimage as ni
from subprocess import call
# kDM -- Interstellar dispersion constant
kDM = 4148.808 # MHz^2 / (pc cm^-3)
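The constant above sets the scale of the cold-plasma dispersion delay, t(f) = kDM * DM / f^2, so the arrival-time difference between two frequencies is kDM * DM * (f_lo^-2 - f_hi^-2). A quick numerical check (`dispersion_delay` is an illustrative helper, not part of this module):

```python
kDM = 4148.808  # MHz^2 / (pc cm^-3); delays come out in seconds

def dispersion_delay(dm, f_lo, f_hi):
    """Arrival-time delay in seconds of the f_lo (MHz) channel relative
    to the f_hi (MHz) channel for dispersion measure dm (pc cm^-3)."""
    return kDM * dm * (f_lo ** -2 - f_hi ** -2)

# DM = 560 across 1200-1500 MHz: roughly half a second of sweep
dispersion_delay(560.0, 1200.0, 1500.0)
```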
class Cluster:
''' A class to store information about a cluster of pixels identified
using the friends-of-friends algorithm.'''
def __init__(self, coords, sigs, std):
        ''' <coords> is a tuple containing two arrays (should be the
        output of np.where()), which hold respectively the frequency and
        time coordinates of the samples in the cluster. <sigs> contains
        the values of the dynamic spectrum at these coordinates. '''
self.t_co = coords[1] # time coordinates array
self.v_co = coords[0] # frequency coordinates array
self.sigs = sigs # signal values array
self.global_std = std
self.N = np.size(sigs) # number of samples in blob
self.t_mean = np.mean(self.t_co) # mean of time coords of samples in blob
self.t_range = (np.min(self.t_co),np.max(self.t_co)) # time range of blob in bins (tuple)
self.v_mean = np.mean(self.v_co) # mean of freq. coords of samples in blob
self.v_range = (np.min(self.v_co),np.max(self.v_co)) # frequency range of blob in bins (tuple)
self.sig_mean = np.mean(sigs) # signal mean
self.sig_max = np.max(sigs) # maximum signal value
self.SNR_mean = self.sig_mean / self.global_std
self.SNR_max = self.sig_max / self.global_std
argmax = np.argmax(sigs)
self.coord_max = (self.v_co[argmax],self.t_co[argmax]) # coordinates of sample with max signal value
# Cluster SNR statistics. See <NAME>'s paper for math formula
self.clust_SNR = ((self.sig_mean * math.sqrt(self.N)) / self.global_std)
def lin_fit(self,C):
''' Perform orthogonal linear regression on this Cluster.
The slope and intercept are stored as a tuple in the field <linear>'''
linear_fit = odr.Model(lin_func)
data = odr.Data(C * self.t_co,self.v_co) # y-axis is frequency
odr_inst = odr.ODR(data, linear_fit, beta0=[0.0,self.v_mean])
output = odr_inst.run()
(slope,intercept) = output.beta[0],output.beta[1]
self.linear = (C * slope,intercept)
def DM_fit(self, tstart, v_max_index):
''' Perform orthogonal DM (inverse square) regression on this Cluster.
The fitted DM value is stored in the field <DM>.'''
# Note: frequency axis is flipped, because high freqs correspond to low array indices
t_co = (self.t_co * tsamp * dt) + tstart
v_co = ((v_max_index-self.v_co) * vsamp * dv) + vlow
#print(DM_func([560.0*kDM], v_co, t_co[0], v_co[0]))
data = odr.Data(v_co,t_co) # y-axis is time
DM_mod = odr.Model(DM_func, extra_args=(t_co[0],v_co[0]))
# Note: beta0 is initial estimate of DM*kDM
odr_inst = odr.ODR(data, DM_mod, beta0=[560.0*kDM])
output = odr_inst.run()
self.DM = output.beta[0] / kDM
def DM_extrapolate(self, vchan, tchan):
# find the max time span for a single frequency band
v_prev = 0
t_seed = 0
dT_max = 0
for j in range(self.N):
if self.v_co[j] == v_prev:
dT_max = max((self.t_co[j] - t_seed), dT_max)
else:
t_seed = self.t_co[j]
v_prev = self.v_co[j]
#dT_width = int(dT_max / 2)
mask = np.zeros((vchan,tchan))
v_co = np.flip((np.arange(vchan) * vsamp * dv) + vlow,0)
# dispersed times, computed assuming t0 = self.t_mean, ignore tstart
t_mean = (self.t_mean * dt * tsamp)
v_mean = vhigh - (self.v_mean * dv * vsamp)
delayed_Ts = DM_func([self.DM * kDM], v_co, t_mean, v_mean)
delayed_Tbins = (delayed_Ts / (dt * tsamp)).astype(int)
t0 = int(self.t_mean)
for v in range(vchan):
T = delayed_Tbins[v]
mask[v, max(T - dT_max, 0):(T + dT_max)] = 1  # clamp to avoid negative-index wraparound
self.DM_mask = mask
def lin_extrapolate(self, vchan, tchan):
# find the max time span for a single frequency band
v_prev = 0
t_seed = 0
dT_max = 0
for j in range(self.N):
if self.v_co[j] == v_prev:
dT_max = max((self.t_co[j] - t_seed), dT_max)
else:
t_seed = self.t_co[j]
v_prev = self.v_co[j]
dT_width = int(dT_max / 2)
mask = np.zeros((vchan,tchan))
#print(mask.shape)
#print(mask)
# disperse each frequency band, and mask
t_co = np.arange(tchan)
lin_v_co = lin_func(self.linear, t_co).astype(int)
valid_v = np.where((lin_v_co >= 0) & (lin_v_co < vchan))
# np.where() returns indices in ascending order regardless of the line's
# slope, so the valid time span runs from the first to the last valid index.
if valid_v[0].size:
t_min = valid_v[0][0]
t_max = valid_v[0][-1]
else:
t_min = t_max = 0
for t in range(t_min,t_max):
mask[lin_v_co[t], max(t - dT_max, 0):(t + dT_max)] = 1  # clamp to avoid negative-index wraparound
#print(mask.shape)
#print(mask)
self.lin_mask = mask
def fit_extrapolate(self, vchan, tchan, tstart, C):
''' Runs DM_fit(), lin_fit(), DM_extrapolate(), and lin_extrapolate(). '''
self.DM_fit(tstart, vchan-1)
self.lin_fit(C)
self.DM_extrapolate(vchan, tchan)
self.lin_extrapolate(vchan, tchan)
def statline(self):
'''Return a string of the class attributes separated by tabs.'''
return "%d\t %f\t %f\t %f\t %f\t %f\t %f\t %f\t %f\t %f\t %f\t %f\n" \
%(self.N, self.clust_SNR, self.sig_mean, self.sig_max,
self.SNR_mean, self.SNR_max, self.t_range[0],
self.t_range[1], self.v_range[0], self.v_range[1],
self.linear[0], self.DM)
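The cluster statistic stored in clust_SNR is the mean significance boosted by sqrt(N) for a coherent sum of N samples. A standalone numeric sketch with a hypothetical 3-pixel blob:

```python
import math

# Hypothetical 3-pixel blob: signal values and the global background std.
sigs = [2.0, 3.0, 4.0]
global_std = 1.0

N = len(sigs)
sig_mean = sum(sigs) / N
# Same statistic as Cluster.clust_SNR: the mean significance of a blob of N
# samples grows as sqrt(N) when the samples are summed coherently.
clust_SNR = sig_mean * math.sqrt(N) / global_std  # 3 * sqrt(3) ~ 5.196
```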
# Note: Not used.
def manhattan(x1,x2):
'''Return the Manhattan distance between two coordinates
given by tuples x1 and x2'''
dist = abs(x1[0]-x2[0]) + abs(x1[1]-x2[1])
return dist
def avg_time(data, dt):
''' Decimate a dynamic spectrum in time (x axis) by averaging together
sets of <dt> time samples.'''
(vchan,tchan) = data.shape
t_num = int(tchan / dt)
avg_data = np.zeros([vchan,t_num])
for t in range(t_num):
t_index = t * dt
for v in range(vchan):
avg_data[v,t] = np.mean(data[v,t_index:(t_index+dt)])
return avg_data
def avg_freq(data, dv):
''' Decimate a dynamic spectrum in frequency (y axis) by averaging
together sets of <dv> frequency samples.'''
(vchan,tchan) = data.shape
v_num = int(vchan / dv)
avg_data = np.zeros([v_num,tchan])
for v in range(v_num):
v_index = v * dv
for t in range(tchan):
avg_data[v,t] = np.mean(data[v_index:(v_index+dv),t])
return avg_data
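Both decimation loops above can be collapsed into a single reshape-and-mean that produces the same result (trailing samples that do not fill a complete block are dropped, exactly as in the loop versions). A sketch for the time axis with hypothetical data:

```python
import numpy as np

def avg_time_vec(data, dt):
    """Vectorized equivalent of avg_time(): average consecutive groups of
    <dt> time samples, dropping any trailing partial block."""
    vchan, tchan = data.shape
    t_num = tchan // dt
    return data[:, :t_num * dt].reshape(vchan, t_num, dt).mean(axis=2)

# 2 channels x 6 time samples, decimated by 2 -> shape (2, 3).
data = np.arange(12.0).reshape(2, 6)
out = avg_time_vec(data, 2)
```

The frequency-axis version is identical with the reshape applied to axis 0.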
# Note: Not used.
def mean_rms(data):
''' Return the mean and RMS (in a tuple) of a data array.
Note that the variance is computed as:
var = (RMS)^2 - (mean)^2
'''
(vchan,tchan) = data.shape
ssum = 0.0
dsum = 0.0
for t in range(tchan):
for v in range(vchan):
ssum += (data[v,t])**2
dsum += data[v,t]
num_pixels = vchan * tchan
mean_ssum = ssum / num_pixels
rms = math.sqrt(mean_ssum)
mean = dsum / num_pixels
return (mean,rms)
def iterative_stats(data, out, thresh):
''' Return the mean and standard deviation (in a tuple) of a data array.
Iteratively remove outlying data points from the calculation. If a data point has
a value greater than (out * stddev), it is masked and removed from the next calculation.
The convergence criteria is:
(mean - mean_prev) / mean < thresh
(std - std_prev) / std < thresh
i.e. once successive iterates of both the mean and the standard deviation are
within <thresh> of each other (relatively), the method terminates, and the
latest evaluations of mean and stddev are returned.
Arguments: (1) data -- a dynamic spectrum numpy array
(2) out -- data points that vary by more than (out * std_prev) are removed from the calculation
(3) thresh -- successive iterations for mean and std must be within <thresh> for the method to terminate
'''
(vchan,tchan) = data.shape
size = data.size
mask = np.zeros((vchan,tchan))
# Set initial iterates.
mean_prev = 0.0
std_prev = 0.0
mean = np.mean(data)
std = np.std(data)
# Iteratively compute the background mean and std. dev.
# Loop until BOTH the mean and the std have converged (see criteria above).
while (abs(mean_prev-mean) / mean) > thresh or (abs(std-std_prev) / std) > thresh:
mean_prev = mean
std_prev = std
ssum = 0.0 # squared sum
rsum = 0.0 # regular sum
for v in range(vchan):
for t in range(tchan):
if mask[v,t] == 0: # if not masked
if abs(data[v,t]-mean_prev) > (out * std_prev):
mask[v,t] = 1
size -= 1
else:
ssum += (data[v,t])**2
rsum += data[v,t]
mean = rsum / size
rms_squared = ssum / size
std = math.sqrt(rms_squared - mean**2)
return (mean,std)
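The clipping loop can be exercised on synthetic data. The compact version below reimplements the same idea; note that the convergence test is simplified to an absolute comparison against thresh*std and the iteration count is capped, both assumptions of this sketch rather than the module's exact criteria:

```python
import numpy as np

def clipped_stats(data, out=5.0, thresh=0.01, max_iter=20):
    """Iterative sigma clipping in the spirit of iterative_stats(): mask
    pixels further than out*std from the current mean, recompute the stats
    over the surviving pixels, and repeat until both estimates settle."""
    mask = np.zeros(data.shape, dtype=bool)
    mean, std = float(data.mean()), float(data.std())
    for _ in range(max_iter):
        mean_prev, std_prev = mean, std
        mask |= np.abs(data - mean_prev) > out * std_prev
        kept = data[~mask]
        mean, std = float(kept.mean()), float(kept.std())
        if (abs(mean - mean_prev) <= thresh * std
                and abs(std - std_prev) <= thresh * std):
            break
    return mean, std

# Gaussian background plus two injected bright pixels: the clipped std stays
# near 1.0, while the naive np.std(data) is inflated by the outliers.
rng = np.random.RandomState(0)
data = rng.normal(0.0, 1.0, size=(64, 64))
data[10, 10] = data[20, 30] = 50.0
mean, std = clipped_stats(data)
```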
def mask1(data, mean, std, thresh):
''' Create a binary mask for some dynamic spectrum.
Ones flag high signal pixels; all other pixels are
represented as zeros.
A high signal pixel is determined as follows:
(pixel_value - mean) > (thresh * std)
'''
(vchan,tchan) = data.shape
mask = np.zeros([vchan,tchan])
thresh_val = std * thresh
for v in range(vchan):
for t in range(tchan):
val = data[v,t]
if val - mean > thresh_val:
mask[v,t] = 1
return mask
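The double loop in mask1() reduces to a single boolean comparison in NumPy; a sketch with a hypothetical 2x2 spectrum:

```python
import numpy as np

def mask1_vec(data, mean, std, thresh):
    """Vectorized equivalent of mask1(): flag pixels whose excess over the
    mean exceeds thresh standard deviations."""
    return ((data - mean) > thresh * std).astype(float)

# Only the 9.0 pixel exceeds mean + thresh*std = 1.0 + 6.0 = 7.0.
data = np.array([[0.0, 5.0], [1.0, 9.0]])
m = mask1_vec(data, mean=1.0, std=2.0, thresh=3.0)
```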
# sparkprime/helm
#!/usr/bin/env python
#
# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Template expansion utilities."""
import os.path
import sys
import traceback
import jinja2
import yaml
from sandbox_loader import FileAccessRedirector
import schema_validation
def Expand(config, imports=None, env=None, validate_schema=False):
"""Expand the configuration with imports.
Args:
config: string, the raw config to be expanded.
imports: map from import file name, e.g. "helpers/constants.py" to
its contents.
env: map from string to string, the map of environment variable names
to their values
validate_schema: True to run schema validation; False otherwise
Returns:
YAML containing the expanded configuration and its layout,
in the following format:
config:
...
layout:
...
Raises:
ExpansionError: if any error occurs during expansion
"""
try:
return _Expand(config, imports=imports, env=env,
validate_schema=validate_schema)
except Exception as e:
# print traceback.format_exc()
raise ExpansionError('config', str(e))
def _Expand(config, imports=None, env=None, validate_schema=False):
"""Expand the configuration with imports."""
FileAccessRedirector.redirect(imports)
yaml_config = None
try:
yaml_config = yaml.safe_load(config)
except yaml.scanner.ScannerError as e:
# Here we know that YAML parser could not parse the template
# we've given it. YAML raises a ScannerError that specifies which file
# had the problem, as well as line and column, but since we're giving
# it the template from string, error message contains <string>, which
# is not very helpful on the user end, so replace it with word
# "template" and make it obvious that YAML contains a syntactic error.
msg = str(e).replace('"<string>"', 'template')
raise Exception('Error parsing YAML: %s' % msg)
# Handle empty file case
if not yaml_config:
return ''
# If the configuration does not have ':' in it, the yaml_config will be a
# string. If this is the case just return the str. The code below it
# assumes yaml_config is a map for common cases.
if type(yaml_config) is str:
return yaml_config
if 'resources' not in yaml_config or yaml_config['resources'] is None:
yaml_config['resources'] = []
config = {'resources': []}
layout = {'resources': []}
_ValidateUniqueNames(yaml_config['resources'])
# Iterate over all the resources to process.
for resource in yaml_config['resources']:
processed_resource = _ProcessResource(resource, imports, env,
validate_schema)
config['resources'].extend(processed_resource['config']['resources'])
layout['resources'].append(processed_resource['layout'])
result = {'config': config, 'layout': layout}
return yaml.safe_dump(result, default_flow_style=False)
def _ProcessResource(resource, imports, env, validate_schema=False):
"""Processes a resource and expands if template.
Args:
resource: the resource to be processed, as a map.
imports: map from string to string, the map of imported file names
and contents
env: map from string to string, the map of environment variable names
to their values
validate_schema: True to run schema validation; False otherwise
Returns:
A map containing the layout and configuration of the expanded
resource and any sub-resources, in the format:
{'config': ..., 'layout': ...}
Raises:
ExpansionError: if any error occurs during expansion
"""
# A resource has to have a name.
if 'name' not in resource:
raise ExpansionError(resource, 'Resource does not have a name.')
# A resource has to have a type.
if 'type' not in resource:
raise ExpansionError(resource, 'Resource does not have type defined.')
config = {'resources': []}
# Initialize layout with basic resource information.
layout = {'name': resource['name'],
'type': resource['type']}
if resource['type'] in imports:
# A template resource, which contains sub-resources.
expanded_template = ExpandTemplate(resource, imports,
env, validate_schema)
if expanded_template['resources']:
_ValidateUniqueNames(expanded_template['resources'],
resource['type'])
# Process all sub-resources of this template.
for resource_to_process in expanded_template['resources']:
processed_resource = _ProcessResource(resource_to_process,
imports, env,
validate_schema)
# Append all sub-resources to the config resources,
# and the resulting layout of sub-resources.
config['resources'].extend(processed_resource['config']
['resources'])
# Lazy-initialize resources key here because it is not set for
# non-template layouts.
if 'resources' not in layout:
layout['resources'] = []
layout['resources'].append(processed_resource['layout'])
if 'properties' in resource:
layout['properties'] = resource['properties']
else:
# A normal resource has only itself for config.
config['resources'] = [resource]
return {'config': config,
'layout': layout}
def _ValidateUniqueNames(template_resources, template_name='config'):
"""Make sure that every resource name in the given template is unique."""
names = set()
# Validate that every resource name is unique
for resource in template_resources:
if 'name' in resource:
if resource['name'] in names:
raise ExpansionError(
resource,
'Resource name \'%s\' is not unique in %s.'
% (resource['name'], template_name))
names.add(resource['name'])
# If this resource doesn't have a name, we will report that error later
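The uniqueness check can be exercised standalone; ExpansionError here is a stand-in for the class defined elsewhere in this module, and the resource maps are hypothetical:

```python
# Stand-in for the module's ExpansionError, just for this demonstration.
class ExpansionError(Exception):
    def __init__(self, resource, message):
        super(ExpansionError, self).__init__(message)
        self.resource = resource

def validate_unique_names(template_resources, template_name='config'):
    """Same logic as _ValidateUniqueNames: every named resource is unique;
    unnamed resources are skipped and reported by a later check."""
    names = set()
    for resource in template_resources:
        name = resource.get('name')
        if name is not None:
            if name in names:
                raise ExpansionError(
                    resource,
                    "Resource name '%s' is not unique in %s." % (name, template_name))
            names.add(name)

validate_unique_names([{'name': 'a'}, {'name': 'b'}])  # distinct names: passes
try:
    validate_unique_names([{'name': 'a'}, {'name': 'a'}])
    duplicate_caught = False
except ExpansionError:
    duplicate_caught = True
```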
def ExpandTemplate(resource, imports, env, validate_schema=False):
"""Expands a template, calling expansion mechanism based on type.
Args:
resource: resource object, the resource that contains parameters to the
jinja file
imports: map from string to string, the map of imported file names
and contents
env: map from string to string, the map of environment variable names
to their values
validate_schema: True to run schema validation; False otherwise
Returns:
The final expanded template
Raises:
ExpansionError: if any error occurs during expansion
"""
source_file = resource['type']
path = resource['type']
# Look for Template in imports.
if source_file not in imports:
raise ExpansionError(
source_file,
'Unable to find source file %s in imports.' % (source_file))
# source_file could be a short version of the template name
# (say, a GitHub short name), so we may need to map it to
# the fully resolvable name.
if 'path' in imports[source_file] and imports[source_file]['path']:
path = imports[source_file]['path']
resource['imports'] = imports
# Populate the additional environment variables.
if env is None:
env = {}
env['name'] = resource['name']
env['type'] = resource['type']
resource['env'] = env
schema = source_file + '.schema'
if validate_schema and schema in imports:
properties = resource['properties'] if 'properties' in resource else {}
try:
resource['properties'] = schema_validation.Validate(
properties, schema, source_file, imports)
except schema_validation.ValidationErrors as e:
raise ExpansionError(resource['name'], e.message)
if path.endswith('jinja') or path.endswith('yaml'):
expanded_template = ExpandJinja(
source_file, imports[source_file]['content'], resource, imports)
elif path.endswith('py'):
# This is a Python template.
expanded_template = ExpandPython(
imports[source_file]['content'], source_file, resource)
else:
# The source file is not a jinja file or a python file.
# This in fact should never happen due to the IsTemplate check above.
raise ExpansionError(
resource['source'],
'Unsupported source file: %s.' % (source_file))
parsed_template = yaml.safe_load(expanded_template)
if parsed_template is None or 'resources' not in parsed_template:
raise ExpansionError(resource['type'],
'Template did not return a \'resources:\' field.')
return parsed_template
def ExpandJinja(file_name, source_template, resource, imports):
"""Render the jinja template using jinja libraries.
Args:
file_name:
string, the file name.
source_template:
string, the content of the jinja file to be rendered
resource:
resource object, the resource that contains parameters to the
jinja file
imports:
map from string to map {name, path}, the map of imported
file names to their fully resolved paths and contents
Returns:
The final expanded template
Raises:
ExpansionError in case we fail to expand the Jinja2 template.
"""
try:
jinja_imports = (
{i['path']: i['content'] for _, i in imports.iteritems()})
env = jinja2.Environment(loader=jinja2.DictLoader(jinja_imports))
template = env.from_string(source_template)
if ('properties' in resource or 'env' in resource or
'imports' in resource):
return template.render(resource)
else:
return template.render()
except Exception:
st = 'Exception in %s\n%s' % (file_name, traceback.format_exc())
raise ExpansionError(file_name, st)
def ExpandPython(python_source, file_name, params):
"""Run python script to get the expanded template.
Args:
python_source: string, the python source file to run
file_name: string, the name of the python source file
params: object that contains 'imports' and 'params', the parameters to
the python script
Returns:
The final expanded template.
"""
try:
# Compile the python code to be run.
constructor = {}
compiled_code = compile(python_source, '<string>', 'exec')
exec compiled_code in constructor # pylint: disable=exec-used
# Construct the parameters to the python script.
evaluation_context = PythonEvaluationContext(params)
return constructor['GenerateConfig'](evaluation_context)
except Exception:
st = 'Exception in %s\n%s' % (file_name, traceback.format_exc())
raise ExpansionError(file_name, st)
class PythonEvaluationContext(object):
"""The python evaluation context.
Attributes:
params -- the parameters to be used in the expansion
"""
def __init__(self, params):
if 'properties' in params:
self.properties = params['properties']
else:
self.properties = None
if 'imports' in params:
self.imports = params['imports']
else:
self.imports = None
Period(Semiannual), bondCalendar,
Unadjusted, Unadjusted,
DateGeneration.Backward, False)
floatingBond1 = FloatingRateBond(settlementDays, self.faceAmount,
floatingBondSchedule1,
self.iborIndex, Actual360(),
Following, fixingDays,
[1], [0.0056],
[], [],
inArrears,
100.0, Date(29,September,2003))
floatingBond1.setPricingEngine(bondEngine)
setCouponPricer(floatingBond1.cashflows(), self.pricer)
self.iborIndex.addFixing(Date(27,March,2007), 0.0402)
floatingBondImpliedValue1 = floatingBond1.cleanPrice()
floatingBondSettlementDate1= floatingBond1.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
floatingBondCleanPrice1 = cleanPriceFromZSpread(
floatingBond1, self.yieldCurve, self.spread,
Actual365Fixed(), self.compounding, Semiannual,
floatingBondSettlementDate1)
error5 = abs(floatingBondImpliedValue1-floatingBondCleanPrice1)
self.assertFalse(error5>tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: " + str(floatingBondImpliedValue1)
+ "\n par asset swap spread: " + str(floatingBondCleanPrice1)
+ "\n error: " + str(error5)
+ "\n tolerance: " + str(tolerance))
## FRN bond (Isin: XS0090566539 COE 0 09/24/18)
## maturity occurs on a business day
floatingBondSchedule2 = Schedule(Date(24,September,2004),
Date(24,September,2018),
Period(Semiannual), bondCalendar,
ModifiedFollowing, ModifiedFollowing,
DateGeneration.Backward, False)
floatingBond2 = FloatingRateBond(settlementDays, self.faceAmount,
floatingBondSchedule2,
self.iborIndex, Actual360(),
ModifiedFollowing, fixingDays,
[1], [0.0025],
[], [],
inArrears,
100.0, Date(24,September,2004))
floatingBond2.setPricingEngine(bondEngine)
setCouponPricer(floatingBond2.cashflows(), self.pricer)
self.iborIndex.addFixing(Date(22,March,2007), 0.04013)
floatingBondImpliedValue2 = floatingBond2.cleanPrice()
floatingBondSettlementDate2= floatingBond2.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
floatingBondCleanPrice2 = cleanPriceFromZSpread(
floatingBond2, self.yieldCurve,
self.spread, Actual365Fixed(), self.compounding, Semiannual,
floatingBondSettlementDate2)
error7 = abs(floatingBondImpliedValue2-floatingBondCleanPrice2)
self.assertFalse(error7>tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: " + str(floatingBondImpliedValue2)
+ "\n par asset swap spread: " + str(floatingBondCleanPrice2)
+ "\n error: " + str(error7)
+ "\n tolerance: " + str(tolerance))
#### CMS bond (Isin: XS0228052402 CRDIT 0 8/22/20)
#### maturity doesn't occur on a business day
cmsBondSchedule1 = Schedule(Date(22,August,2005),
Date(22,August,2020),
Period(Annual), bondCalendar,
Unadjusted, Unadjusted,
DateGeneration.Backward, False)
cmsBond1 = CmsRateBond(settlementDays, self.faceAmount, cmsBondSchedule1,
self.swapIndex, Thirty360(),
Following, fixingDays,
[1.0], [0.0],
[0.055], [0.025],
inArrears,
100.0, Date(22,August,2005))
cmsBond1.setPricingEngine(bondEngine)
setCouponPricer(cmsBond1.cashflows(), self.cmspricer)
self.swapIndex.addFixing(Date(18,August,2006), 0.04158)
cmsBondImpliedValue1 = cmsBond1.cleanPrice()
cmsBondSettlementDate1= cmsBond1.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
cmsBondCleanPrice1 = cleanPriceFromZSpread(
cmsBond1, self.yieldCurve, self.spread,
Actual365Fixed(), self.compounding, Annual,
cmsBondSettlementDate1)
error9 = abs(cmsBondImpliedValue1-cmsBondCleanPrice1)
self.assertFalse(error9>tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: " + str(cmsBondImpliedValue1)
+ "\n par asset swap spread: " + str(cmsBondCleanPrice1)
+ "\n error: " + str(error9)
+ "\n tolerance: " + str(tolerance))
## CMS bond (Isin: XS0218766664 ISPIM 0 5/6/15)
## maturity occurs on a business day
cmsBondSchedule2 = Schedule(Date(6,May,2005),
Date(6,May,2015),
Period(Annual), bondCalendar,
Unadjusted, Unadjusted,
DateGeneration.Backward, False)
cmsBond2 = CmsRateBond(settlementDays, self.faceAmount, cmsBondSchedule2,
self.swapIndex, Thirty360(),
Following, fixingDays,
[0.84], [0.0],
[], [],
inArrears,
100.0, Date(6,May,2005))
cmsBond2.setPricingEngine(bondEngine)
setCouponPricer(cmsBond2.cashflows(), self.cmspricer)
self.swapIndex.addFixing(Date(4,May,2006), 0.04217)
cmsBondImpliedValue2 = cmsBond2.cleanPrice()
cmsBondSettlementDate2= cmsBond2.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
cmsBondCleanPrice2 = cleanPriceFromZSpread(
cmsBond2, self.yieldCurve, self.spread,
Actual365Fixed(), self.compounding, Annual,
cmsBondSettlementDate2)
error11 = abs(cmsBondImpliedValue2-cmsBondCleanPrice2)
self.assertFalse(error11>tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: " + str(cmsBondImpliedValue2)
+ "\n par asset swap spread: " + str(cmsBondCleanPrice2)
+ "\n error: " + str(error11)
+ "\n tolerance: " + str(tolerance))
## Zero-Coupon bond (Isin: DE0004771662 IBRD 0 12/20/15)
## maturity doesn't occur on a business day
zeroCpnBond1 = ZeroCouponBond(settlementDays, bondCalendar, self.faceAmount,
Date(20,December,2015),
Following,
100.0, Date(19,December,1985))
zeroCpnBond1.setPricingEngine(bondEngine)
zeroCpnBondImpliedValue1 = zeroCpnBond1.cleanPrice()
zeroCpnBondSettlementDate1= zeroCpnBond1.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
zeroCpnBondCleanPrice1 = cleanPriceFromZSpread(zeroCpnBond1,
self.yieldCurve,
self.spread,
Actual365Fixed(),
self.compounding, Annual,
zeroCpnBondSettlementDate1)
error13 = abs(zeroCpnBondImpliedValue1-zeroCpnBondCleanPrice1)
self.assertFalse(error13>tolerance,
"wrong clean price for zero coupon bond:"
+ "\n zero cpn implied value: " + str(zeroCpnBondImpliedValue1)
+ "\n zero cpn price: " + str(zeroCpnBondCleanPrice1)
+ "\n error: " + str(error13)
+ "\n tolerance: " + str(tolerance))
## Zero Coupon bond (Isin: IT0001200390 ISPIM 0 02/17/28)
## maturity doesn't occur on a business day
zeroCpnBond2 = ZeroCouponBond(settlementDays, bondCalendar, self.faceAmount,
Date(17,February,2028),
Following,
100.0, Date(17,February,1998))
zeroCpnBond2.setPricingEngine(bondEngine)
zeroCpnBondImpliedValue2 = zeroCpnBond2.cleanPrice()
zeroCpnBondSettlementDate2= zeroCpnBond2.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
zeroCpnBondCleanPrice2 = cleanPriceFromZSpread(zeroCpnBond2,
self.yieldCurve,
self.spread,
Actual365Fixed(),
self.compounding, Annual,
zeroCpnBondSettlementDate2)
error15 = abs(zeroCpnBondImpliedValue2-zeroCpnBondCleanPrice2)
self.assertFalse(error15>tolerance,
"wrong clean price for zero coupon bond:"
+ "\n zero cpn implied value: " + str(zeroCpnBondImpliedValue2)
+ "\n zero cpn price: " + str(zeroCpnBondCleanPrice2)
+ "\n error: " + str(error15)
+ "\n tolerance: " + str(tolerance))
def testGenericBondImplied(self):
"""Testing implied generic-bond value against asset-swap fair price with null spread..."""
bondCalendar = TARGET()
settlementDays = 3
fixingDays = 2
payFixedRate = True
parAssetSwap = True
inArrears = False
## Fixed Underlying bond (Isin: DE0001135275 DBR 4 01/04/37)
## maturity doesn't occur on a business day
fixedBondStartDate1 = Date(4,January,2005)
fixedBondMaturityDate1 = Date(4,January,2037)
fixedBondSchedule1 = Schedule(fixedBondStartDate1,
fixedBondMaturityDate1,
Period(Annual), bondCalendar,
Unadjusted, Unadjusted,
DateGeneration.Backward, False)
fixedBondLeg1 = list(FixedRateLeg(fixedBondSchedule1,
ActualActual(ActualActual.ISDA),
[self.faceAmount],
[0.04]))
fixedbondRedemption1 = bondCalendar.adjust(fixedBondMaturityDate1,
Following)
fixedBondLeg1.append(SimpleCashFlow(100.0, fixedbondRedemption1))
fixedBond1 = Bond(settlementDays, bondCalendar, self.faceAmount,
fixedBondMaturityDate1, fixedBondStartDate1,
tuple(fixedBondLeg1))
bondEngine = DiscountingBondEngine(self.termStructure)
swapEngine = DiscountingSwapEngine(self.termStructure,True)
fixedBond1.setPricingEngine(bondEngine)
fixedBondPrice1 = fixedBond1.cleanPrice()
fixedBondAssetSwap1 = AssetSwap(payFixedRate,
fixedBond1, fixedBondPrice1,
self.iborIndex, self.spread,
Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap)
fixedBondAssetSwap1.setPricingEngine(swapEngine)
fixedBondAssetSwapPrice1 = fixedBondAssetSwap1.fairCleanPrice()
tolerance = 1.0e-13
error1 = abs(fixedBondAssetSwapPrice1-fixedBondPrice1)
self.assertFalse(error1>tolerance,
"wrong zero spread asset swap price for fixed bond:"
+ "\n bond's clean price: " + str(fixedBondPrice1)
+ "\n asset swap fair price: " + str(fixedBondAssetSwapPrice1)
+ "\n error: " + str(error1)
+ "\n tolerance: " + str(tolerance))
## Fixed Underlying bond (Isin: IT0006527060 IBRD 5 02/05/19)
## maturity occurs on a business day
fixedBondStartDate2 = Date(5,February,2005)
fixedBondMaturityDate2 = Date(5,February,2019)
fixedBondSchedule2 = Schedule(fixedBondStartDate2,
fixedBondMaturityDate2,
Period(Annual), bondCalendar,
Unadjusted, Unadjusted,
DateGeneration.Backward, False)
fixedBondLeg2 = list(FixedRateLeg(fixedBondSchedule2,Thirty360(Thirty360.BondBasis),
[self.faceAmount],[0.05]))
fixedbondRedemption2 = bondCalendar.adjust(fixedBondMaturityDate2,Following)
fixedBondLeg2.append(SimpleCashFlow(100.0, fixedbondRedemption2))
fixedBond2 = Bond(settlementDays, bondCalendar, self.faceAmount,
fixedBondMaturityDate2, fixedBondStartDate2, tuple(fixedBondLeg2))
fixedBond2.setPricingEngine(bondEngine)
fixedBondPrice2 = fixedBond2.cleanPrice()
fixedBondAssetSwap2 = AssetSwap(payFixedRate,
fixedBond2, fixedBondPrice2,
self.iborIndex, self.spread,
Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap)
fixedBondAssetSwap2.setPricingEngine(swapEngine)
fixedBondAssetSwapPrice2 = fixedBondAssetSwap2.fairCleanPrice()
error2 = abs(fixedBondAssetSwapPrice2-fixedBondPrice2)
self.assertFalse(error2>tolerance,
"wrong zero spread asset swap price for fixed bond:"
+ "\n bond's clean price: " + str(fixedBondPrice2)
+ "\n asset swap fair price: " + str(fixedBondAssetSwapPrice2)
+ "\n error: " + str(error2)
+ "\n tolerance: " + str(tolerance))
## FRN Underlying bond (Isin: IT0003543847 ISPIM 0 09/29/13)
## maturity doesn't occur on a business day
floatingBondStartDate1 = Date(29,September,2003)
floatingBondMaturityDate1 = Date(29,September,2013)
floatingBondSchedule1 = Schedule(floatingBondStartDate1,
floatingBondMaturityDate1,
Period(Semiannual), bondCalendar,
Unadjusted, Unadjusted,
DateGeneration.Backward, False)
floatingBondLeg1 = list(IborLeg([self.faceAmount],floatingBondSchedule1, self.iborIndex,
Actual360(),ModifiedFollowing, [fixingDays],[],[0.0056],[],[],inArrears))
floatingbondRedemption1 = bondCalendar.adjust(floatingBondMaturityDate1, Following)
floatingBondLeg1.append(SimpleCashFlow(100.0, floatingbondRedemption1))
floatingBond1 = Bond(settlementDays, bondCalendar, self.faceAmount,
floatingBondMaturityDate1, floatingBondStartDate1,
tuple(floatingBondLeg1))
floatingBond1.setPricingEngine(bondEngine)
setCouponPricer(floatingBond1.cashflows(), self.pricer)
self.iborIndex.addFixing(Date(27,March,2007), 0.0402)
floatingBondPrice1 = floatingBond1.cleanPrice()
floatingBondAssetSwap1 = AssetSwap (payFixedRate,
floatingBond1, floatingBondPrice1,
self.iborIndex, self.spread,
Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap)
floatingBondAssetSwap1.setPricingEngine(swapEngine)
floatingBondAssetSwapPrice1 = floatingBondAssetSwap1.fairCleanPrice()
error3 = abs(floatingBondAssetSwapPrice1-floatingBondPrice1)
self.assertFalse(error3>tolerance,
"wrong zero spread asset swap price for floater:"
+ "\n bond's clean price: " + str(floatingBondPrice1)
+ "\n asset swap fair price: " + str(floatingBondAssetSwapPrice1)
+ "\n error: " + str(error3)
+ "\n tolerance: " + str(tolerance))
## FRN Underlying bond (Isin: XS0090566539 COE 0 09/24/18)
## maturity occurs on a business day
floatingBondStartDate2 = Date(24,September,2004)
floatingBondMaturityDate2 = Date(24,September,2018)
floatingBondSchedule2 =Schedule(floatingBondStartDate2,
floatingBondMaturityDate2,
Period(Semiannual), bondCalendar,
ModifiedFollowing, ModifiedFollowing,
DateGeneration.Backward, False)
floatingBondLeg2 = list(IborLeg([self.faceAmount],floatingBondSchedule2, self.iborIndex,
Actual360(),ModifiedFollowing,[fixingDays],[],[0.0025],[],[],inArrears))
floatingbondRedemption2 = bondCalendar.adjust(floatingBondMaturityDate2, ModifiedFollowing)
floatingBondLeg2.append(SimpleCashFlow(100.0, floatingbondRedemption2))
floatingBond2 = Bond(settlementDays, bondCalendar, self.faceAmount,
floatingBondMaturityDate2, floatingBondStartDate2,
tuple(floatingBondLeg2))
floatingBond2.setPricingEngine(bondEngine)
setCouponPricer(floatingBond2.cashflows(), self.pricer)
self.iborIndex.addFixing(Date(22,March,2007), 0.04013)
currentCoupon=0.04013+0.0025
floatingCurrentCoupon= floatingBond2.nextCouponRate()
error4= abs(floatingCurrentCoupon-currentCoupon)
self.assertFalse(error4>tolerance,
"wrong current coupon is returned for floater bond:"
+ "\n bond's calculated current coupon: " + str(currentCoupon)
+ "\n current coupon asked to the bond: " + str(floatingCurrentCoupon)
+ "\n error: " + str(error4)
+ "\n tolerance: " + str(tolerance))
floatingBondPrice2 = floatingBond2.cleanPrice()
floatingBondAssetSwap2 = AssetSwap(payFixedRate,
floatingBond2, floatingBondPrice2,
self.iborIndex, self.spread,
Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap)
floatingBondAssetSwap2.setPricingEngine(swapEngine)
floatingBondAssetSwapPrice2 = floatingBondAssetSwap2.fairCleanPrice()
error5 = abs(floatingBondAssetSwapPrice2-floatingBondPrice2)
self.assertFalse(error5>tolerance,
"wrong zero spread asset swap price for floater:"
+ "\n bond's clean price: " + str(floatingBondPrice2)
+ "\n asset swap fair price: " + str(floatingBondAssetSwapPrice2)
+ "\n error: " + str(error5)
+ "\n tolerance: " + str(tolerance))
## CMS Underlying bond (Isin: XS0228052402 CRDIT 0 8/22/20)
## maturity doesn't occur on a business day
cmsBondStartDate1 = Date(22,August,2005)
cmsBondMaturityDate1 = Date(22,August,2020)
cmsBondSchedule1 = Schedule(cmsBondStartDate1,
cmsBondMaturityDate1,
Period(Annual), bondCalendar,
Unadjusted, Unadjusted,
DateGeneration.Backward, False)
cmsBondLeg1 = list(CmsLeg([self.faceAmount],cmsBondSchedule1, self.swapIndex,
Thirty360(),Following,[fixingDays],[],[0.055],[0.025],[],inArrears))
cmsbondRedemption1 = bondCalendar.adjust(cmsBondMaturityDate1, Following)
cmsBondLeg1.append(SimpleCashFlow(100.0, cmsbondRedemption1))
cmsBond1 = Bond(settlementDays, bondCalendar, self.faceAmount,
cmsBondMaturityDate1, cmsBondStartDate1, tuple(cmsBondLeg1))
cmsBond1.setPricingEngine(bondEngine)
setCouponPricer(cmsBond1.cashflows(), self.cmspricer)
self.swapIndex.addFixing(Date(18,August,2006), 0.04158)
cmsBondPrice1 = cmsBond1.cleanPrice()
cmsBondAssetSwap1 = AssetSwap(payFixedRate,
cmsBond1, cmsBondPrice1,
self.iborIndex, self.spread,
Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap)
cmsBondAssetSwap1.setPricingEngine(swapEngine)
cmsBondAssetSwapPrice1 = cmsBondAssetSwap1.fairCleanPrice()
error6 = abs(cmsBondAssetSwapPrice1-cmsBondPrice1)
self.assertFalse(error6>tolerance,
"wrong zero spread asset swap price for cms bond:"
+ "\n bond's clean price: | |
import requests
import time
from JumpScale import j
import JumpScale.portal
import JumpScale.lib.cloudrobots
import JumpScale.baselib.remote
import JumpScale.baselib.redis
import JumpScale.portal
import ujson as json
import sys
class Output(object):
def __init__(self):
self.out=""
self.lastlines=""
self.counter=0
def _write(self):
    # collapse any run of blank lines before flushing the buffer
    while self.lastlines.find("\n\n") != -1:
        self.lastlines = self.lastlines.replace("\n\n", "\n")
    # route through MS1.sendUserMessage, which falls back to print
    # when no action is attached (self.ms1.action may be None)
    self.ms1.sendUserMessage(self.lastlines)
    self.lastlines = ""
    self.counter = 0
def write(self, buf,**args):
if self.lastlines.find(buf)==-1:
self.lastlines+="%s\n"%buf
if self.counter>20:
self._write()
if len(self.lastlines.split("\n"))>20:
self._write()
self.counter+=1
# self.stdout.prevout.write(buf)
# for line in buf.rstrip().splitlines():
# #print "###%s"%line
# self.out+="%s\n"%line
def isatty(self):
return False
def flush(self):
return None
class MS1(object):
def __init__(self):
self.secret = ''
self.IMAGE_NAME = 'Ubuntu 14.04 (JumpScale)'
self.redis_cl = j.clients.redis.getGeventRedisClient('localhost', 9999)
self.stdout=Output()
self.stdout.ms1=self
self.stdout.prevout=sys.stdout
self.action=None
self.vars={}
def getCloudspaceObj(self, space_secret,**args):
if not self.redis_cl.hexists('cloudrobot:cloudspaces:secrets', space_secret):
raise RuntimeError("E:Space secret does not exist, cannot continue (END)")
space=json.loads(self.redis_cl.hget('cloudrobot:cloudspaces:secrets', space_secret))
return space
def getCloudspaceId(self, space_secret):
space=self.getCloudspaceObj(space_secret)
return space["id"]
def getClouspaceSecret(self, login, password, cloudspace_name, location, spacesecret=None,**args):
"""
@param location ca1 (canada),us2 (us)
"""
params = {'username': login, 'password': password, 'authkey': ''}
response = requests.post('https://www.mothership1.com/restmachine/cloudapi/users/authenticate', params)
if response.status_code != 200:
raise RuntimeError("E:Could not authenticate user %s" % login)
auth_key = response.json()
params = {'authkey': auth_key}
response = requests.post('https://www.mothership1.com/restmachine/cloudapi/cloudspaces/list', params)
cloudspaces = response.json()
cloudspace = [cs for cs in cloudspaces if cs['name'] == cloudspace_name and cs['location'] == location]
if cloudspace:
cloudspace = cloudspace[0]
else:
raise RuntimeError("E:Could not find a matching cloud space with name %s and location %s" % (cloudspace_name, location))
self.redis_cl.hset('cloudrobot:cloudspaces:secrets', auth_key, json.dumps(cloudspace))
return auth_key
def sendUserMessage(self,msg,level=2,html=False,args={}):
if self.action<>None:
self.action.sendUserMessage(msg,html=html)
else:
print msg
def getApiConnection(self, space_secret,**args):
cs=self.getCloudspaceObj(space_secret)
host = 'www.mothership1.com' if cs["location"] == 'ca1' else '%s.mothership1.com' % cs["location"]
try:
api=j.core.portal.getClient(host, 443, space_secret)
except Exception,e:
raise RuntimeError("E:Could not login to MS1 API.")
# system = api.getActor("system", "contentmanager")
return api
# def deployAppDeck(self, spacesecret, name, memsize=1024, ssdsize=40, vsansize=0, jpdomain='solutions', jpname=None, config=None, description=None,**args):
# machine_id = self.deployMachineDeck(spacesecret, name, memsize, ssdsize, vsansize, description)
# api = self.getApiConnection(location)
# portforwarding_actor = api.getActor('cloudapi', 'portforwarding')
# cloudspaces_actor = api.getActor('cloudapi', 'cloudspaces')
# machines_actor = api.getActor('cloudapi', 'machines')
# # create ssh port-forward rule
# for _ in range(30):
# machine = machines_actor.get(machine_id)
# if j.basetype.ipaddress.check(machine['interfaces'][0]['ipAddress']):
# break
# else:
# time.sleep(2)
# if not j.basetype.ipaddress.check(machine['interfaces'][0]['ipAddress']):
# raise RuntimeError('Machine was created, but never got an IP address')
# cloudspace_forward_rules = portforwarding_actor.list(machine['cloudspaceid'])
# public_ports = [rule['publicPort'] for rule in cloudspace_forward_rules]
# ssh_port = '2222'
# cloudspace = cloudspaces_actor.get(machine['cloudspaceid'])
# while True:
# if ssh_port not in public_ports:
# portforwarding_actor.create(machine['cloudspaceid'], cloudspace['publicipaddress'], ssh_port, machine['id'], '22')
# break
# else:
# ssh_port = str(int(ssh_port) + 1)
# # do an ssh connection to the machine
# if not j.system.net.waitConnectionTest(cloudspace['publicipaddress'], int(ssh_port), 60):
# raise RuntimeError("Failed to connect to %s %s" % (cloudspace['publicipaddress'], ssh_port))
# ssh_connection = j.remote.cuisine.api
# username, password = machine['accounts'][0]['login'], machine['accounts'][0]['password']
# ssh_connection.fabric.api.env['password'] = password
# ssh_connection.fabric.api.env['connection_attempts'] = 5
# ssh_connection.connect('%s:%s' % (cloudspace['publicipaddress'], ssh_port), username)
# # install jpackages there
# ssh_connection.sudo('jpackage mdupdate')
# if config:
# jpackage_hrd_file = j.system.fs.joinPaths(j.dirs.hrdDir, '%s_%s' % (jpdomain, jpname))
# ssh_connection.file_write(jpackage_hrd_file, config, sudo=True)
# if jpdomain and jpname:
# ssh_connection.sudo('jpackage install -n %s -d %s' % (jpname, jpdomain))
# #cleanup
# cloudspace_forward_rules = portforwarding_actor.list(machine['cloudspaceid'])
# ssh_rule_id = [rule['id'] for rule in cloudspace_forward_rules if rule['publicPort'] == ssh_port][0]
# portforwarding_actor.delete(machine['cloudspaceid'], ssh_rule_id)
# if config:
# hrd = j.core.hrd.getHRD(content=config)
# if hrd.exists('services_ports'):
# ports = hrd.getList('services_ports')
# for port in ports:
# portforwarding_actor.create(machine['cloudspaceid'], cloudspace['publicipaddress'], str(port), machine['id'], str(port))
# return {'publicip': cloudspace['publicipaddress']}
def getMachineSizes(self,spacesecret):
if self.redis_cl.exists("ms1:cache:%s:sizes"%spacesecret):
return json.loads(self.redis_cl.get("ms1:cache:%s:sizes"%spacesecret))
api = self.getApiConnection(spacesecret)
sizes_actor = api.getActor('cloudapi', 'sizes')
sizes=sizes_actor.list()
self.redis_cl.setex("ms1:cache:%s:sizes"%spacesecret,json.dumps(sizes),3600)
return sizes
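`getMachineSizes` follows the cache-aside pattern: look in Redis first, and on a miss fetch from the API and store the JSON-encoded result with a TTL. A minimal sketch of the same pattern in modern Python, with a plain dict standing in for Redis (all names here are hypothetical):

```python
import json
import time

cache = {}  # key -> (expires_at, json_payload); stand-in for Redis

def cached(key, ttl, fetch):
    """Return the cached value for key, calling fetch() on a miss or expiry."""
    now = time.time()
    hit = cache.get(key)
    if hit is not None and hit[0] > now:
        return json.loads(hit[1])
    value = fetch()
    cache[key] = (now + ttl, json.dumps(value))  # mirrors the redis SETEX call
    return value

sizes = cached("ms1:sizes", 3600, lambda: [{"id": 1, "memory": 1024}])
print(sizes)  # [{'id': 1, 'memory': 1024}]
```

A second lookup within the TTL never touches the backend, which is exactly what the `exists`/`get`/`setex` triple above buys.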
def createMachine(self, spacesecret, name, memsize=1, ssdsize=40, vsansize=0, description='',imagename="ubuntu.14.04",delete=False,**args):
"""
memsize #size is 0.5,1,2,4,8,16 in GB
ssdsize #10,20,30,40,100 in GB
imagename= fedora,windows,ubuntu.13.10,ubuntu.12.04,windows.essentials,ubuntu.14.04
zentyal,debian.7,arch,fedora,centos,opensuse,gitlab,ubuntu.jumpscale
"""
if delete:
self.deleteMachine(spacesecret, name)
self.vars={}
# self.session.vars["name"]=name
# self.session.save()
ssdsize = int(ssdsize)
# memsize must stay a float: int() would turn the documented 0.5 GB option into 0
memsize = float(memsize)
ssdsizes = {10: 10, 20: 20, 30: 30, 40: 40, 100: 100}
memsizes = {0.5: 512, 1: 1024, 2: 2048, 4: 4096, 8: 8192, 16: 16384}
if not memsizes.has_key(memsize):
raise RuntimeError("E: supported memory sizes are 0.5,1,2,4,8,16 (is in GB), you specified:%s"%memsize)
if not ssdsizes.has_key(ssdsize):
raise RuntimeError("E: supported ssd sizes are 10,20,30,40,100 (is in GB), you specified:%s"%memsize)
# get actors
api = self.getApiConnection(spacesecret)
cloudspaces_actor = api.getActor('cloudapi', 'cloudspaces')
machines_actor = api.getActor('cloudapi', 'machines')
cloudspace_id = self.getCloudspaceId(spacesecret)
self.vars["cloudspace.id"]=cloudspace_id
self.vars["machine.name"]=name
memsize2=memsizes[memsize]
size_ids = [size['id'] for size in self.getMachineSizes(spacesecret) if size['memory'] == int(memsize2)]
if len(size_ids)==0:
raise RuntimeError('E:Could not find a matching memory size %s'%memsize2)
ssdsize2=ssdsizes[ssdsize]
images=self.listImages(spacesecret)
if not imagename in images.keys():
j.events.inputerror_critical("Imagename '%s' not in available images: '%s'"%(imagename,images))
templateid=images[imagename][0]
self.sendUserMessage("create machine: %s"%(name))
try:
machine_id = machines_actor.create(cloudspaceId=cloudspace_id, name=name, description=description, \
sizeId=size_ids[0], imageId=templateid, disksize=int(ssdsize2))
except Exception,e:
if str(e).find("Selected name already exists")<>-1:
j.events.inputerror_critical("Could not create machine it does already exist.","ms1.createmachine.exists")
raise RuntimeError("E:Could not create machine, unknown error.")
self.vars["machine.id"]=machine_id
self.sendUserMessage("machine created")
self.sendUserMessage("find free ipaddr & tcp port")
for _ in range(60):
machine = machines_actor.get(machine_id)
if j.basetype.ipaddress.check(machine['interfaces'][0]['ipAddress']):
break
else:
time.sleep(1)
if not j.basetype.ipaddress.check(machine['interfaces'][0]['ipAddress']):
raise RuntimeError('E:Machine was created, but never got an IP address')
self.vars["machine.ip.addr"]=machine['interfaces'][0]['ipAddress']
#push initial key
self.sendUserMessage("push initial ssh key")
ssh=self._getSSHConnection(spacesecret,name,**args)
self.sendUserMessage("machine active & reachable")
self.sendUserMessage("ssh %s -p %s"%(self.vars["space.ip.pub"],self.vars["machine.last.tcp.port"]))
return machine_id,self.vars["space.ip.pub"],self.vars["machine.last.tcp.port"]
def getMachineObject(self,spacesecret, name,**args):
api,machines_actor,machine_id,cloudspace_id=self._getMachineApiActorId(spacesecret,name)
machine = machines_actor.get(machine_id)
return machine
def listImages(self,spacesecret,**args):
if self.redis_cl.exists("ms1:cache:%s:images"%spacesecret):
return json.loads(self.redis_cl.get("ms1:cache:%s:images"%spacesecret))
api = self.getApiConnection(spacesecret)
cloudspaces_actor = api.getActor('cloudapi', 'cloudspaces')
images_actor = api.getActor('cloudapi', 'images')
result={}
alias={}
imagetypes=["ubuntu.jumpscale","fedora","windows","ubuntu.13.10","ubuntu.12.04","windows.essentials","ubuntu.14.04",\
"zentyal","debian.7","arch","fedora","centos","opensuse","gitlab"]
# imagetypes=["ubuntu.jumpscale"]
for image in images_actor.list():
name=image["name"]
# print "name:%s"%name
namelower=name.lower()
for imagetype in imagetypes:
found=True
# print "imagetype:%s"%imagetype
for check in [item.strip().lower() for item in imagetype.split(".") if item.strip()<>""]:
if namelower.find(check)==-1:
found=False
# print "check:%s %s %s"%(check,namelower,found)
if found:
result[imagetype]=[image["id"],image["name"]]
self.redis_cl.setex("ms1:cache:%s:images"%spacesecret,json.dumps(result),600)
return result
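`listImages` maps each alias such as `ubuntu.14.04` to any image whose lowercased name contains every dot-separated token of the alias. That matching rule in isolation (the image names here are made up):

```python
def match(alias, name):
    """True if every dot-separated token of alias occurs in the lowercased name."""
    name = name.lower()
    return all(tok in name for tok in alias.lower().split(".") if tok.strip())

images = ["Ubuntu 14.04 (JumpScale)", "CentOS 7", "Windows 2012 Essentials"]
print([n for n in images if match("ubuntu.14.04", n)])  # ['Ubuntu 14.04 (JumpScale)']
print([n for n in images if match("centos", n)])        # ['CentOS 7']
```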
def listMachinesInSpace(self, spacesecret,**args):
# get actors
api = self.getApiConnection(spacesecret)
machines_actor = api.getActor('cloudapi', 'machines')
cloudspace_id = self.getCloudspaceId(spacesecret)
# list machines
machines = machines_actor.list(cloudspaceId=cloudspace_id)
return machines
def _getMachineApiActorId(self, spacesecret, name,**args):
api=self.getApiConnection(spacesecret)
cloudspace_id = self.getCloudspaceId(spacesecret)
machines_actor = api.getActor('cloudapi', 'machines')
machine_id = [machine['id'] for machine in machines_actor.list(cloudspace_id) if machine['name'] == name]
if len(machine_id)==0:
raise RuntimeError("E:Could not find machine with name:%s, cannot continue action."%name)
machine_id = machine_id[0]
actor=api.getActor('cloudapi', 'machines')
return (api,actor,machine_id,cloudspace_id)
def deleteMachine(self, spacesecret, name,**args):
self.sendUserMessage("delete machine: %s"%(name))
try:
api,machines_actor,machine_id,cloudspace_id=self._getMachineApiActorId(spacesecret,name)
except Exception,e:
if str(e).find("Could not find machine")<>-1:
return "NOTEXIST"
if str(e).find("Space secret does not exist")<>-1:
return "E:SPACE SECRET IS NOT CORRECT"
raise RuntimeError(e)
try:
machines_actor.delete(machine_id)
except Exception,e:
print e
raise RuntimeError("E:could not delete machine %s"%name)
return "OK"
def startMachine(self, spacesecret, name,**args):
api,machines_actor,machine_id,cloudspace_id=self._getMachineApiActorId(spacesecret,name)
try:
machines_actor.start(machine_id)
except Exception,e:
raise RuntimeError("E:could not start machine.")
return "OK"
def stopMachine(self, spacesecret, name,**args):
api,machines_actor,machine_id,cloudspace_id=self._getMachineApiActorId(spacesecret,name)
try:
machines_actor.stop(machine_id)
except Exception,e:
raise RuntimeError("E:could not stop machine.")
return "OK"
def snapshotMachine(self, spacesecret, name, snapshotname,**args):
api,machines_actor,machine_id,cloudspace_id=self._getMachineApiActorId(spacesecret,name)
try:
machines_actor.snapshot(machine_id,snapshotname)
except Exception,e:
raise RuntimeError("E:could not stop machine.")
return "OK"
def createTcpPortForwardRule(self, spacesecret, name, machinetcpport, pubip="", pubipport=22,**args):
self.vars["machine.last.tcp.port"]=pubipport
return self._createPortForwardRule(spacesecret, name, machinetcpport, pubip, pubipport, 'tcp')
def createUdpPortForwardRule(self, spacesecret, name, machineudpport, pubip="", pubipport=22,**args):
return self._createPortForwardRule(spacesecret, name, machineudpport, pubip, pubipport, 'udp')
def deleteTcpPortForwardRule(self, spacesecret, name, machinetcpport, pubip, pubipport,**args):
return self._deletePortForwardRule(spacesecret, name, pubip, pubipport, 'tcp')
def _createPortForwardRule(self, spacesecret, name, machineport, pubip, pubipport, protocol):
# self.sendUserMessage("Create PFW rule:%s %s %s"%(pubip,pubipport,protocol),args=args)
api,machines_actor,machine_id,cloudspace_id=self._getMachineApiActorId(spacesecret,name)
portforwarding_actor = api.getActor('cloudapi', 'portforwarding')
if pubip=="":
cloudspaces_actor = api.getActor('cloudapi', 'cloudspaces')
cloudspace = cloudspaces_actor.get(cloudspace_id)
pubip=cloudspace['publicipaddress']
self.vars["space.ip.pub"]=pubip
self._deletePortForwardRule(spacesecret, name, pubip, pubipport, 'tcp')
portforwarding_actor.create(cloudspace_id, pubip, str(pubipport), machine_id, str(machineport), protocol)
return "OK"
def _deletePortForwardRule(self, spacesecret, name,pubip,pubipport, protocol):
# self.sendUserMessage("Delete PFW rule:%s %s %s"%(pubip,pubipport,protocol),args=args)
api,machines_actor,machine_id,cloudspace_id=self._getMachineApiActorId(spacesecret,name)
portforwarding_actor = api.getActor('cloudapi', 'portforwarding')
if pubip=="":
cloudspaces_actor = api.getActor('cloudapi', 'cloudspaces')
cloudspace = cloudspaces_actor.get(cloudspace_id)
pubip=cloudspace['publicipaddress']
for item in portforwarding_actor.list(cloudspace_id):
if int(item["publicPort"])==int(pubipport) and item['publicIp']==pubip:
print "delete portforward: %s "%item["id"]
portforwarding_actor.delete(cloudspace_id,item["id"])
return "OK"
def getFreeIpPort(self,spacesecret,mmin=90,mmax=1000,**args):
api=self.getApiConnection(spacesecret)
cloudspace_id = self.getCloudspaceId(spacesecret)
cloudspaces_actor = api.getActor('cloudapi', 'cloudspaces')
space=cloudspaces_actor.get(cloudspace_id)
self.vars["space.free.tcp.addr"]=space["publicipaddress"]
self.vars["space.ip.pub"]=space["publicipaddress"]
pubip=space["publicipaddress"]
portforwarding_actor = api.getActor('cloudapi', 'portforwarding')
tcpports={}
udpports={}
for item in portforwarding_actor.list(cloudspace_id):
if item['publicIp']==pubip:
if item['protocol']=="tcp":
tcpports[int(item['publicPort'])]=True
elif item['protocol']=="udp":
udpports[int(item['publicPort'])]=True
for i in range(mmin, mmax):
    if not tcpports.has_key(i) and not udpports.has_key(i):
        break
else:
    # the loop ran to completion without break: every port in [mmin, mmax) is taken
    # (the old `if i>mmax-1` test could never fire and silently returned a used port)
    raise RuntimeError("E:cannot find free tcp or udp port.")
self.vars["space.free.tcp.port"]=str(i)
self.vars["space.free.udp.port"]=str(i)
return self.vars
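`getFreeIpPort` scans for the first public port that carries neither a TCP nor a UDP forwarding rule. The scan-or-fail logic can be sketched standalone with Python's `for`/`else`, whose `else` arm runs only when the loop finishes without `break` (port numbers here are made up):

```python
def first_free(used_ports, lo=90, hi=1000):
    """Return the first port in [lo, hi) absent from used_ports, else fail."""
    for port in range(lo, hi):
        if port not in used_ports:
            break  # found a free port
    else:
        # loop exhausted without break: everything in the range is taken
        raise RuntimeError("cannot find free tcp or udp port")
    return port

print(first_free({90, 91, 92}))  # 93
```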
def listPortforwarding(self,spacesecret, name,**args):
api,machines_actor,machine_id,cloudspace_id=self._getMachineApiActorId(spacesecret,name)
cloudspaces_actor = api.getActor('cloudapi', 'cloudspaces')
machine = machines_actor.get(machine_id)
if machine['cloudspaceid'] != cloudspace_id:
return 'Machine %s does not belong to cloudspace whose secret is given' % name
portforwarding_actor = api.getActor('cloudapi', 'portforwarding')
items=portforwarding_actor.list(cloudspace_id)
if len(machine["interfaces"])>0:
local_ipaddr=machine["interfaces"][0]['ipAddress'].strip()
else:
raise RuntimeError("cannot find local ip addr")
items=[]
for item in portforwarding_actor.list(cloudspace_id):
if item['localIp']==local_ipaddr:
items.append(item)
return items
def _getSSHConnection(self, spacesecret, name, **args):
api,machines_actor,machine_id,cloudspace_id=self._getMachineApiActorId(spacesecret,name)
mkey="%s_%s"%(cloudspace_id,machine_id)
print "check ssh connection:%s"%mkey
if self.redis_cl.hexists("ms1_iaas:machine:sshpub",mkey):
print "in cache"
pub_ipaddr,pub_port=self.redis_cl.hget("ms1_iaas:machine:sshpub",mkey).split(",")
ssh_connection = j.remote.cuisine.api
ssh_connection.fabric.api.env['connection_attempts'] = 5
ssh_connection.mode_user()
if j.system.net.tcpPortConnectionTest(pub_ipaddr, int(pub_port)):
try:
ssh_connection.connect('%s:%s' % (pub_ipaddr, pub_port), | |
`user_link`: str, the link to the user entity.
- `query`: str or dict.
- `options`: dict, the request options for the request.
:Returns:
query_iterable.QueryIterable
"""
path = base.GetPathFromLink(user_link, 'permissions')
user_id = base.GetResourceIdOrFullNameFromLink(user_link)
def fetch_fn(options):
return self.__QueryFeed(path,
'permissions',
user_id,
lambda r: r['Permissions'],
lambda _, b: b,
query,
options), self.last_response_headers
return query_iterable.QueryIterable(options, self.retry_policy, fetch_fn)
def ReplaceUser(self, user_link, user, options={}):
"""Replaces a user and return it.
:Parameters:
- `user_link`: str, the link to the user entity.
- `user`: dict
- `options`: dict, the request options for the request.
:Returns:
dict
"""
DocumentClient.__ValidateResource(user)
path = base.GetPathFromLink(user_link)
user_id = base.GetResourceIdOrFullNameFromLink(user_link)
return self.Replace(user,
path,
'users',
user_id,
None,
options)
def DeleteUser(self, user_link, options={}):
"""Deletes a user.
:Parameters:
- `user_link`: str, the link to the user entity.
- `options`: dict, the request options for the request.
:Returns:
dict
"""
path = base.GetPathFromLink(user_link)
user_id = base.GetResourceIdOrFullNameFromLink(user_link)
return self.DeleteResource(path,
'users',
user_id,
None,
options)
def ReplacePermission(self, permission_link, permission, options={}):
"""Replaces a permission and return it.
:Parameters:
- `permission_link`: str, the link to the permission.
- `permission`: dict
- `options`: dict, the request options for the request.
:Returns:
dict
"""
DocumentClient.__ValidateResource(permission)
path = base.GetPathFromLink(permission_link)
permission_id = base.GetResourceIdOrFullNameFromLink(permission_link)
return self.Replace(permission,
path,
'permissions',
permission_id,
None,
options)
def DeletePermission(self, permission_link, options={}):
"""Deletes a permission.
:Parameters:
- `permission_link`: str, the link to the permission.
- `options`: dict, the request options for the request.
:Returns:
dict
"""
path = base.GetPathFromLink(permission_link)
permission_id = base.GetResourceIdOrFullNameFromLink(permission_link)
return self.DeleteResource(path,
'permissions',
permission_id,
None,
options)
def ReadDocuments(self, collection_link, feed_options={}):
"""Reads all documents in a collection.
:Parameters:
- `collection_link`: str, the link to the document collection.
- `feed_options`: dict
:Returns:
query_iterable.QueryIterable
"""
return self.QueryDocuments(collection_link, None, feed_options)
def QueryDocuments(self, database_or_collection_link, query, options={}, partition_key=None):
"""Queries documents in a collection.
:Parameters:
- `database_or_collection_link`: str, the link to the database when using partitioning, otherwise link to the document collection.
- `query`: str or dict.
- `options`: dict, the request options for the request.
- `partition_key`: str, partition key for the query(default value None)
:Returns:
query_iterable.QueryIterable
"""
if(base.IsDatabaseLink(database_or_collection_link)):
# Python has no constructor overloading; the usual idiom is a @classmethod
# factory that builds and returns the instance, which
# PartitioningQueryIterable uses for the partitioned case.
return query_iterable.QueryIterable.PartitioningQueryIterable(self, options, self.retry_policy, database_or_collection_link, query, partition_key)
else:
path = base.GetPathFromLink(database_or_collection_link, 'docs')
collection_id = base.GetResourceIdOrFullNameFromLink(database_or_collection_link)
def fetch_fn(options):
return self.__QueryFeed(path,
'docs',
collection_id,
lambda r: r['Documents'],
lambda _, b: b,
query,
options), self.last_response_headers
return query_iterable.QueryIterable(options, self.retry_policy, fetch_fn)
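Each `Query*` method above builds a `fetch_fn` closure and hands it to `QueryIterable`, which pulls result pages lazily. The continuation-token flavor of that pattern can be sketched library-free (the generator name and the fake backend are illustrative, not the SDK's):

```python
def paged(fetch_page):
    """Follow continuation tokens until the backend reports no more pages."""
    token = None
    while True:
        items, token = fetch_page(token)
        for item in items:
            yield item
        if not token:
            break

# Fake two-page backend keyed by continuation token (illustrative only).
store = {None: ([1, 2], "t1"), "t1": ([3], None)}
print(list(paged(store.get)))  # [1, 2, 3]
```

The caller iterates items one by one; the second HTTP-equivalent call happens only when the first page is exhausted.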
def CreateDocument(self, database_or_collection_link, document, options={}):
"""Creates a document in a collection.
:Parameters:
- `database_or_collection_link`: str, the link to the database when using partitioning, otherwise link to the document collection.
- `document`: dict, the Azure DocumentDB document to create.
- `document['id']`: str, id of the document, MUST be unique for each
document.
- `options`: dict, the request options for the request.
- `options['disableAutomaticIdGeneration']`: bool, disables the
automatic id generation. If id is missing in the body and this
option is true, an error will be returned.
:Returns:
dict
"""
collection_id, document, path = self._GetCollectionIdWithPathForDocument(database_or_collection_link, document, options)
return self.Create(document,
path,
'docs',
collection_id,
None,
options)
def UpsertDocument(self, database_or_collection_link, document, options={}):
"""Upserts a document in a collection.
:Parameters:
- `database_or_collection_link`: str, the link to the database when using partitioning, otherwise link to the document collection.
- `document`: dict, the Azure DocumentDB document to upsert.
- `document['id']`: str, id of the document, MUST be unique for each
document.
- `options`: dict, the request options for the request.
- `options['disableAutomaticIdGeneration']`: bool, disables the
automatic id generation. If id is missing in the body and this
option is true, an error will be returned.
:Returns:
dict
"""
collection_id, document, path = self._GetCollectionIdWithPathForDocument(database_or_collection_link, document, options)
return self.Upsert(document,
path,
'docs',
collection_id,
None,
options)
PartitionResolverErrorMessage = "Couldn't find any partition resolvers for the database link provided. Ensure that the link you used when registering the partition resolvers matches the link provided or you need to register both types of database link(self link as well as ID based link)."
# Gets the collection id and path for the document
def _GetCollectionIdWithPathForDocument(self, database_or_collection_link, document, options):
if not database_or_collection_link:
raise ValueError("database_or_collection_link is None or empty.")
if document is None:
raise ValueError("document is None.")
DocumentClient.__ValidateResource(document)
document = document.copy()
if (not document.get('id') and
not options.get('disableAutomaticIdGeneration')):
document['id'] = base.GenerateGuidId()
collection_link = database_or_collection_link
if(base.IsDatabaseLink(database_or_collection_link)):
partition_resolver = self.GetPartitionResolver(database_or_collection_link)
if(partition_resolver != None):
collection_link = partition_resolver.ResolveForCreate(document)
else:
raise ValueError(DocumentClient.PartitionResolverErrorMessage)
path = base.GetPathFromLink(collection_link, 'docs')
collection_id = base.GetResourceIdOrFullNameFromLink(collection_link)
return collection_id, document, path
def ReadDocument(self, document_link, options={}):
"""Reads a document.
:Parameters:
- `document_link`: str, the link to the document.
- `options`: dict, the request options for the request.
:Returns:
dict
"""
path = base.GetPathFromLink(document_link)
document_id = base.GetResourceIdOrFullNameFromLink(document_link)
return self.Read(path,
'docs',
document_id,
None,
options)
def ReadTriggers(self, collection_link, options={}):
"""Reads all triggers in a collection.
:Parameters:
- `collection_link`: str, the link to the document collection.
- `options`: dict, the request options for the request.
:Returns:
query_iterable.QueryIterable
"""
return self.QueryTriggers(collection_link, None, options)
def QueryTriggers(self, collection_link, query, options={}):
"""Queries triggers in a collection.
:Parameters:
- `collection_link`: str, the link to the document collection.
- `query`: str or dict.
- `options`: dict, the request options for the request.
:Returns:
query_iterable.QueryIterable
"""
path = base.GetPathFromLink(collection_link, 'triggers')
collection_id = base.GetResourceIdOrFullNameFromLink(collection_link)
def fetch_fn(options):
return self.__QueryFeed(path,
'triggers',
collection_id,
lambda r: r['Triggers'],
lambda _, b: b,
query,
options), self.last_response_headers
return query_iterable.QueryIterable(options, self.retry_policy, fetch_fn)
def CreateTrigger(self, collection_link, trigger, options={}):
"""Creates a trigger in a collection.
:Parameters:
- `collection_link`: str, the link to the document collection.
- `trigger`: dict
- `options`: dict, the request options for the request.
:Returns:
dict
"""
collection_id, path, trigger = self._GetCollectionIdWithPathForTrigger(collection_link, trigger)
return self.Create(trigger,
path,
'triggers',
collection_id,
None,
options)
def UpsertTrigger(self, collection_link, trigger, options={}):
"""Upserts a trigger in a collection.
:Parameters:
- `collection_link`: str, the link to the document collection.
- `trigger`: dict
- `options`: dict, the request options for the request.
:Returns:
dict
"""
collection_id, path, trigger = self._GetCollectionIdWithPathForTrigger(collection_link, trigger)
return self.Upsert(trigger,
path,
'triggers',
collection_id,
None,
options)
def _GetCollectionIdWithPathForTrigger(self, collection_link, trigger):
DocumentClient.__ValidateResource(trigger)
trigger = trigger.copy()
if trigger.get('serverScript'):
trigger['body'] = str(trigger.pop('serverScript', ''))
elif trigger.get('body'):
trigger['body'] = str(trigger['body'])
path = base.GetPathFromLink(collection_link, 'triggers')
collection_id = base.GetResourceIdOrFullNameFromLink(collection_link)
return collection_id, path, trigger
def ReadTrigger(self, trigger_link, options={}):
"""Reads a trigger.
:Parameters:
- `trigger_link`: str, the link to the trigger.
- `options`: dict, the request options for the request.
:Returns:
dict
"""
path = base.GetPathFromLink(trigger_link)
trigger_id = base.GetResourceIdOrFullNameFromLink(trigger_link)
return self.Read(path, 'triggers', trigger_id, None, options)
def ReadUserDefinedFunctions(self, collection_link, options={}):
"""Reads all user defined functions in a collection.
:Parameters:
- `collection_link`: str, the link to the document collection.
- `options`: dict, the request options for the request.
:Returns:
query_iterable.QueryIterable
"""
return self.QueryUserDefinedFunctions(collection_link, None, options)
def QueryUserDefinedFunctions(self, collection_link, query, options={}):
"""Queries user defined functions in a collection.
:Parameters:
- `collection_link`: str, the link to the collection.
- `query`: str or dict.
- `options`: dict, the request options for the request.
:Returns:
query_iterable.QueryIterable
"""
path = base.GetPathFromLink(collection_link, 'udfs')
collection_id = base.GetResourceIdOrFullNameFromLink(collection_link)
def fetch_fn(options):
return self.__QueryFeed(path,
'udfs',
collection_id,
lambda r: r['UserDefinedFunctions'],
lambda _, b: b,
query,
options), self.last_response_headers
return query_iterable.QueryIterable(options, self.retry_policy, fetch_fn)
def CreateUserDefinedFunction(self, collection_link, udf, options={}):
"""Creates a user defined function in a collection.
:Parameters:
- `collection_link`: str, the link to the collection.
- `udf`: str
- `options`: dict, the request options for the request.
:Returns:
dict
"""
collection_id, path, udf = self._GetCollectionIdWithPathForUDF(collection_link, udf)
return self.Create(udf,
path,
'udfs',
collection_id,
None,
options)
def UpsertUserDefinedFunction(self, collection_link, udf, options={}):
"""Upserts a user defined function in a collection.
:Parameters:
- `collection_link`: str, the link to the collection.
- `udf`: str
- `options`: dict, the request options for the request.
:Returns:
dict
"""
collection_id, path, udf = self._GetCollectionIdWithPathForUDF(collection_link, udf)
return self.Upsert(udf,
path,
'udfs',
collection_id,
None,
options)
def _GetCollectionIdWithPathForUDF(self, collection_link, udf):
DocumentClient.__ValidateResource(udf)
udf = udf.copy()
if udf.get('serverScript'):
udf['body'] = str(udf.pop('serverScript', ''))
elif udf.get('body'):
udf['body'] = str(udf['body'])
path = base.GetPathFromLink(collection_link, 'udfs')
collection_id = base.GetResourceIdOrFullNameFromLink(collection_link)
return collection_id, path, udf
def ReadUserDefinedFunction(self, udf_link, options={}):
"""Reads a user defined function.
:Parameters:
- `udf_link`: str, the link to the user defined function.
- `options`: dict, the request options for the request.
#!/usr/bin/env python
"""cflib.main
===============
This library contains functions that are used by PoMo.
"""
import argparse
import random
from scipy.misc import comb as choose  # moved to scipy.special.comb in newer SciPy
import cflib as lp
import os
import pdb
import time
# define PoMo10 states
codons = ["aaa", "aac", "aag", "aat", "aca", "acc", "acg", "act",
"aga", "agc", "agg", "agt", "ata", "atc", "atg", "att",
"caa", "cac", "cag", "cat", "cca", "ccc", "ccg", "cct",
"cga", "cgc", "cgg", "cgt", "cta", "ctc", "ctg", "ctt",
"gaa", "gac", "gag", "gat", "gca", "gcc", "gcg", "gct",
"gga", "ggc", "ggg", "ggt", "gta", "gtc", "gtg", "gtt",
"taa", "tac", "tag", "tat", "tca", "tcc", "tcg", "tct",
"tga", "tgc"]
nucs = ["A", "C", "G", "T"]
# Define mutation models.
mutmod = {}
mutmod["F81"] = ["global mu=0.01;\n", "mac:=mu;\n", "mag:=mu;\n",
"mat:=mu;\n", "mca:=mu;\n", "mct:=mu;\n",
"mcg:=mu;\n", "mgc:=mu;\n", "mga:=mu;\n",
"mgt:=mu;\n", "mta:=mu;\n", "mtc:=mu;\n",
"mtg:=mu;\n"]
mutmod["HKY"] = ["global kappa=0.01;\n", "global mu=0.01;\n",
"mac:=mu;\n", "mag:=kappa;\n", "mat:=mu;\n",
"mca:=mu;\n", "mct:=kappa;\n", "mcg:=mu;\n",
"mgc:=mu;\n", "mga:=kappa;\n", "mgt:=mu;\n",
"mta:=mu;\n", "mtc:=kappa;\n", "mtg:=mu;\n"]
mutmod["GTR"] = ["global muac=0.01;\n", "global muag=0.01;\n",
"global muat=0.01;\n", "global mucg=0.01;\n",
"global muct=0.01;\n", "global mugt=0.01;\n",
"mac:=muac;\n", "mag:=muag;\n", "mat:=muat;\n",
"mca:=muac;\n", "mct:=muct;\n", "mcg:=mucg;\n",
"mgc:=mucg;\n", "mga:=muag;\n", "mgt:=mugt;\n",
"mta:=muat;\n", "mtc:=muct;\n", "mtg:=mugt;\n"]
mutmod["NONREV"] = ["global mac=0.01;\n", "global mag=0.01;\n",
"global mat=0.01;\n", "global mcg=0.01;\n",
"global mct=0.01;\n", "global mgt=0.01;\n",
"global mca=0.01;\n", "global mga=0.01;\n",
"global mta=0.01;\n", "global mgc=0.01;\n",
"global mtc=0.01;\n", "global mtg=0.01;\n"]
# Define selection models.
selmod = {}
selmod["NoSel"] = ["sc := 0.0;\n", "sa := 0.0;\n", "st := 0.0;\n",
"sg := 0.0;\n"]
selmod["GCvsAT"] = ["global Sgc=0.0001;\n", "sc := Sgc;\n", "sa := 0.0;\n",
"st := 0.0;\n", "sg := Sgc;\n"]
selmod["AllNuc"] = ["global sc=0.0003;\n", "global sg=0.0003;\n",
"sa := 0.0;\n", "global st=0.0001;\n"]
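Each model above is a list of HyPhy statements; concatenating a mutation model with a selection model yields the fragment that goes into the batch file. A minimal sketch with trimmed model dicts (the surrounding batch-file scaffolding is omitted):

```python
# Trimmed copies of the module-level model dicts, for illustration only.
mutmod = {"F81": ["global mu=0.01;\n", "mac:=mu;\n", "mag:=mu;\n"]}
selmod = {"NoSel": ["sc := 0.0;\n", "sa := 0.0;\n",
                    "st := 0.0;\n", "sg := 0.0;\n"]}

def model_fragment(mm, sm):
    """Join mutation- and selection-model statements into one string."""
    return "".join(mutmod[mm] + selmod[sm])

print(model_fragment("F81", "NoSel"))
```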
def mutModel(mm):
"""Mutation model **type** for argparse."""
value = str(mm)
if mm not in mutmod:
msg = "%r is not a valid mutation model" % mm
raise argparse.ArgumentTypeError(msg)
return value
def selModel(sm):
"""Selection model **type** for argparse."""
value = str(sm)
if sm not in selmod:
msg = "%r is not a valid selection model" % sm
raise argparse.ArgumentTypeError(msg)
return value
def dsRatio(dsR):
"""Downsampling ratio **type** for argparse."""
value = float(dsR)
if not (0 < value <= 1):
msg = "%r is not a valid downsampling ratio" % dsR
raise argparse.ArgumentTypeError(msg)
return value
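These validators are argparse *type* callables; a minimal usage sketch (the `--ds` flag name is illustrative, not PoMo's actual command-line option):

```python
import argparse

def dsRatio(dsR):
    """Downsampling ratio type for argparse: must satisfy 0 < r <= 1."""
    value = float(dsR)
    if not (0 < value <= 1):
        raise argparse.ArgumentTypeError(
            "%r is not a valid downsampling ratio" % dsR)
    return value

parser = argparse.ArgumentParser()
parser.add_argument("--ds", type=dsRatio, default=1.0)
args = parser.parse_args(["--ds", "0.5"])
print(args.ds)  # 0.5
```

Invalid values (e.g. `--ds 1.5`) are rejected by argparse with the error message above.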
def setGM(gm):
"""Set variable mutation rate, if `gm` is given."""
if gm > 0:
mutgamma = ["global shape;\n",
"category rateCatMut =(" + str(gm) +
", EQUAL, MEAN, GammaDist(_x_,shape,shape), "
"CGammaDist(_x_,shape,shape),0,1e25);\n"]
else:
mutgamma = ["rateCatMut := 1.0;\n"]
return mutgamma
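For example, `setGM(4)` emits a HyPhy `category` statement with four equiprobable gamma rate classes, while `setGM(0)` fixes the rate. A self-contained restatement of the same logic:

```python
def setGM(gm):
    """Return HyPhy statements for `gm` gamma mutation-rate categories,
    or a constant rate when gm == 0 (mirrors the function above)."""
    if gm > 0:
        return ["global shape;\n",
                "category rateCatMut =(" + str(gm) +
                ", EQUAL, MEAN, GammaDist(_x_,shape,shape), "
                "CGammaDist(_x_,shape,shape),0,1e25);\n"]
    return ["rateCatMut := 1.0;\n"]

print("".join(setGM(0)))  # rateCatMut := 1.0;
print("".join(setGM(4)))  # category statement with 4 rate classes
```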
def setGS(gs):
"""Set fixation bias, if `gs` is given."""
if gs > 0:
selgamma = ["global shape2;\n",
"category rateCatSel =(" + str(gs) +
", EQUAL, MEAN, GammaDist(_x_,shape2,shape2), "
"CGammaDist(_x_,shape2,shape2),0,1e25);\n"]
else:
selgamma = ["rateCatSel := 1.0;\n"]
return selgamma
def a(n):
"""Calculate the Watterson's Theta coefficient."""
ret = 0.0
for i in range(n-1):
ret += 1.0 / (i+1)
return ret
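For n = 4 samples the coefficient is 1 + 1/2 + 1/3 ≈ 1.833; a one-line equivalent of `a()` above (renamed `harmonic_a` here to avoid shadowing):

```python
def harmonic_a(n):
    """Watterson coefficient a_n = sum_{i=1}^{n-1} 1/i (as in a() above)."""
    return sum(1.0 / i for i in range(1, n))

print(harmonic_a(4))  # 1 + 1/2 + 1/3
```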
def is_number(s):
"""Determine if value is an integer."""
try:
int(s)
return True
except ValueError:
return False
def binom(s, p, n):
"""Binomial Distribution
Calculate the binomial sampling probability (not very efficient,
but not much effieciency is needed with small samples).
"""
prob = (choose(n, s) * p**s * (1-p)**(n-s))
return prob
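As a quick sanity check, a standard-library restatement of `binom()` (using `math.comb` in place of SciPy's `comb`): 2 successes in 4 trials at p = 0.5 gives C(4,2) * 0.5^4 = 6/16.

```python
from math import comb

def binom_prob(s, p, n):
    """Binomial sampling probability P(S = s | n, p), as binom() above
    but using the standard library instead of SciPy."""
    return comb(n, s) * p**s * (1 - p)**(n - s)

print(binom_prob(2, 0.5, 4))  # 0.375
```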
def probability_matrix(n):
"""Create probability matrices for the HyPhy batch file."""
o = n-1
# Ignore values below this threshold (keeps the matrix sparse,
# avoiding an increase in computational demands).
lim = 0.0001
s = ""
polys = [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]
# Write the matrix.
s += "matrixto"+str(o+1)+" ={\n"
for nucs in range(4):
s += "{"
for l in range(58-1):
if l == nucs:
s += "1.0,"
else:
s += "0.0,"
s += "0.0}\n"
for pol in range(6):
for fre in range(9):
s += "{"
for nucs in range(4):
if nucs == polys[pol][0]:
val = binom(o+1, float(9-fre)/10, o+1)
if val > lim:
s += str(val)+","
else:
s += "0.0,"
elif nucs == polys[pol][1]:
val = binom(o+1, float(fre+1)/10, o+1)
if val > lim:
s += str(val)+","
else:
s += "0.0,"
else:
s += "0.0,"
for pol2 in range(6):
for fre2 in range(9):
if pol == pol2:
if fre2 < o:
val = binom(fre2+1, float(fre+1)/10, o+1)
if val > lim:
s += str(val)
else:
s += "0.0"
else:
s += "0.0"
if pol2*fre2 != 40:
s += ","
else:
s += "}\n"
else:
if pol2*fre2 != 40:
s += "0.0,"
else:
s += "0.0}\n"
s += "};\n\n\n\n"
s += "Model Mto" + str(o+1) + " = (\"matrixto" + \
str(o+1) + "\", Freqs, EXPLICIT_FORM_MATRIX_EXPONENTIAL);\n\n"
return s
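The loop bounds above encode the PoMo10 state space: 4 monomorphic states plus 6 unordered nucleotide pairs times 9 polymorphic frequency classes, i.e. 58 rows per matrix. A sketch enumerating those states (the labels are illustrative, not PoMo's internal names):

```python
nucs = ["A", "C", "G", "T"]
polys = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

states = list(nucs)  # 4 monomorphic states
for i, j in polys:
    for k in range(1, 10):  # 9 frequency classes per nucleotide pair
        states.append("%d%s%d%s" % (10 - k, nucs[i], k, nucs[j]))

print(len(states))  # 4 + 6 * 9 = 58
```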
def get_species_from_cf_headerline(line):
"""Get the number of species and the names fom a counts format header line.
:param str line: The header line.
:rtype: (int n_species, [str] sp_names)
"""
sp_names = line.split()[2:]
n_species = len(sp_names)
if n_species < 2:
print("Error: Not sufficiently many species (<2).\n")
raise ValueError()
return (n_species, sp_names)
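A counts-format header line lists chromosome, position, then species names, so the helper simply slices off the first two columns. A toy example (species names are made up):

```python
def species_from_header(line):
    """Return (n_species, names) from a counts-format header line."""
    names = line.split()[2:]
    if len(names) < 2:
        raise ValueError("Not sufficiently many species (<2).")
    return len(names), names

print(species_from_header("CHROM POS Sheep Wolf Dog"))
# (3, ['Sheep', 'Wolf', 'Dog'])
```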
def get_data_from_cf_line(cfStr):
"""Read in the data of a single counts format line.
The return type is a list with the number of samples and a two
dimensional array of the form data[species][nucleotide], where
species is the index of the species and nucleotide is the index of
the nucleotide (0,1,2 or 3 for a,c,g and t, respectively).
:param CFStream cfStr: The CFStream pointing to the line to be
read in.
:rtype: ([int] n_samples, [[int]] data)
"""
n_samples = []
data = []
for i in range(cfStr.nIndiv):
q = []
summ = 0
for j in range(4):
q.append(int(cfStr.countsL[i][j]))
summ += q[j]
n_samples.append(summ)
data.append(q)
return (n_samples, data)
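The same per-species accumulation can be sketched without a CFStream, using plain "a,c,g,t" count columns as a stand-in for `cfStr.countsL` (assuming comma-separated columns as in counts files):

```python
def parse_counts(counts_l):
    """Split per-species 'a,c,g,t' columns into sample sizes and counts."""
    n_samples, data = [], []
    for col in counts_l:
        q = [int(x) for x in col.split(",")]
        n_samples.append(sum(q))
        data.append(q)
    return n_samples, data

print(parse_counts(["3,0,1,0", "0,2,0,2"]))
# ([4, 4], [[3, 0, 1, 0], [0, 2, 0, 2]])
```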
def read_data_write_HyPhy_input(fn, N, thresh, path_bf,
muts, mutgamma,
sels, selgamma,
PoModatafile, PoModatafile_cons,
theta=None, vb=None):
"""Read the count data and write the HyPhy input file.
The provided filename has to point to a data file in counts format
(cf. :doc:`cf <cf>`). The data will be downsampled if necessary
and the HyPhy batch and input files will be written. The number
of species, the species names, the number of species samples and
the theta value (usr_def) will be returned in a tuple.
:param str fn: Counts format file name.
:param int N: Virtual population size.
:param float thresh: Threshold of data discarded for downsampling.
:param str path_bf: Path to the HyPhy batch files.
:param str muts: Mutation model (:func:`mutModel`).
:param str mutgamma: Gamma of the mutation model (:func:`setGM`).
:param str sels: Selection model (:func:`selModel`).
:param str selgamma: Gamma of selection model (:func:`setGS`).
:param str PoModatafile: Path to HyPhy input file.
:param str PoModatafile_cons: Path to HyPhy input file.
:param Boolean vb: Verbosity.
:rtype: (int n_species, [str] sp_names, [str] sp_samples, Boolean all_one,
float usr_def)
"""
# define variables
# number of species
n_species = 0
# species names
sp_names = []
# sample size of each species
sp_samples = []
# actual data; it is a 3-dimensional array sp_data[species][pos][base]
sp_data = []
# Check the input file format. If it is not a counts file, convert
# it to counts format. This is done because, for large files, a lot
# of memory is needed to traverse fasta files, and the counts format
# seems to handle them better.
# Verbose HYPHY output only with -vv or more.
if (vb is None) or (vb == 1):
vbHyphy = None
if vb is not None:
print("Starting to read input file.")
try:
cfStr = lp.cf.CFStream(fn)
except lp.cf.NotACountsFormatFileError:
print(fn + " is not in counts format.")
print("Assuming fasta file format.")
print("Convert fasta to counts.")
outFN = os.path.basename(fn).split(".")[0] + ".cf"
# absOutFN = os.path.abspath(fn).split(".")[0] + ".cf"
# pdb.set_trace()
lp.cf.fasta_to_cf(fn, outFN)
print("Created counts file:", outFN)
print("""This file will not be deleted after the run. If you want to avoid
repeated file conversions, please run PoMo with counts
files. File conversion scripts are provided with PoMo in the
scripts folder.""")
print("")
fn = outFN
cfStr = lp.cf.CFStream(fn)
# Assign species names (first two columns are Chrom and Pos).
# (n_species, sp_names) = get_species_from_cf_headerline(line)
n_species = cfStr.nIndiv
sp_names = cfStr.indivL
# Initialize the number of species samples to 0.
for i in range(n_species):
sp_data.append([])
sp_samples.append(0)
# Read in the data.
leng = 0
while True:
leng += 1
(n_samples, data) = get_data_from_cf_line(cfStr)
# Update sp_data and the number of samples.
for i in range(n_species):
sp_data[i].append(data[i])
if n_samples[i] > sp_samples[i]:
sp_samples[i] = n_samples[i]
try:
cfStr.read_next_pos()
except ValueError:
break
if vb is not None:
print("Count file has been read.")
# Sites where some species have coverage
specifies the configurations for an `Estimator` run."""
def __init__(self,
model_dir=None,
tf_random_seed=None,
save_summary_steps=100,
save_checkpoints_steps=_USE_DEFAULT,
save_checkpoints_secs=_USE_DEFAULT,
session_config=None,
keep_checkpoint_max=5,
keep_checkpoint_every_n_hours=10000,
log_step_count_steps=100,
train_distribute=None,
device_fn=None,
protocol=None,
eval_distribute=None,
experimental_distribute=None,
experimental_max_worker_delay_secs=None,
session_creation_timeout_secs=7200,
checkpoint_save_graph_def=True):
"""Constructs a RunConfig.
All distributed training related properties `cluster_spec`, `is_chief`,
`master` , `num_worker_replicas`, `num_ps_replicas`, `task_id`, and
`task_type` are set based on the `TF_CONFIG` environment variable, if the
pertinent information is present. The `TF_CONFIG` environment variable is a
JSON object with attributes: `cluster` and `task`.
`cluster` is a JSON serialized version of `ClusterSpec`'s Python dict from
`server_lib.py`, mapping task types (usually one of the `TaskType` enums) to
a list of task addresses.
`task` has two attributes: `type` and `index`, where `type` can be any of
the task types in `cluster`. When `TF_CONFIG` contains said information,
the following properties are set on this class:
* `cluster_spec` is parsed from `TF_CONFIG['cluster']`. Defaults to {}. If
present, must have one and only one node in the `chief` attribute of
`cluster_spec`.
* `task_type` is set to `TF_CONFIG['task']['type']`. Must be set if
`cluster_spec` is present; must be `worker` (the default value) if
`cluster_spec` is not set.
* `task_id` is set to `TF_CONFIG['task']['index']`. Must be set if
`cluster_spec` is present; must be 0 (the default value) if
`cluster_spec` is not set.
* `master` is determined by looking up `task_type` and `task_id` in the
`cluster_spec`. Defaults to ''.
* `num_ps_replicas` is set by counting the number of nodes listed
in the `ps` attribute of `cluster_spec`. Defaults to 0.
* `num_worker_replicas` is set by counting the number of nodes listed
in the `worker` and `chief` attributes of `cluster_spec`. Defaults to 1.
* `is_chief` is determined based on `task_type` and `cluster`.
There is a special node with `task_type` as `evaluator`, which is not part
of the (training) `cluster_spec`. It handles the distributed evaluation job.
Example of non-chief node:
```
cluster = {'chief': ['host0:2222'],
'ps': ['host1:2222', 'host2:2222'],
'worker': ['host3:2222', 'host4:2222', 'host5:2222']}
os.environ['TF_CONFIG'] = json.dumps(
{'cluster': cluster,
'task': {'type': 'worker', 'index': 1}})
config = RunConfig()
assert config.master == 'host4:2222'
assert config.task_id == 1
assert config.num_ps_replicas == 2
assert config.num_worker_replicas == 4
assert config.cluster_spec == server_lib.ClusterSpec(cluster)
assert config.task_type == 'worker'
assert not config.is_chief
```
Example of chief node:
```
cluster = {'chief': ['host0:2222'],
'ps': ['host1:2222', 'host2:2222'],
'worker': ['host3:2222', 'host4:2222', 'host5:2222']}
os.environ['TF_CONFIG'] = json.dumps(
{'cluster': cluster,
'task': {'type': 'chief', 'index': 0}})
config = RunConfig()
assert config.master == 'host0:2222'
assert config.task_id == 0
assert config.num_ps_replicas == 2
assert config.num_worker_replicas == 4
assert config.cluster_spec == server_lib.ClusterSpec(cluster)
assert config.task_type == 'chief'
assert config.is_chief
```
Example of evaluator node (evaluator is not part of training cluster):
```
cluster = {'chief': ['host0:2222'],
'ps': ['host1:2222', 'host2:2222'],
'worker': ['host3:2222', 'host4:2222', 'host5:2222']}
os.environ['TF_CONFIG'] = json.dumps(
{'cluster': cluster,
'task': {'type': 'evaluator', 'index': 0}})
config = RunConfig()
assert config.master == ''
assert config.evaluator_master == ''
assert config.task_id == 0
assert config.num_ps_replicas == 0
assert config.num_worker_replicas == 0
assert config.cluster_spec == {}
assert config.task_type == 'evaluator'
assert not config.is_chief
```
N.B.: If `save_checkpoints_steps` or `save_checkpoints_secs` is set,
`keep_checkpoint_max` might need to be adjusted accordingly, especially in
distributed training. For example, setting `save_checkpoints_secs` as 60
without adjusting `keep_checkpoint_max` (defaults to 5) leads to a
situation where checkpoints are garbage collected after 5 minutes. In
distributed training, the evaluation job starts asynchronously and might
fail to load or find the checkpoint due to a race condition.
Args:
model_dir: directory where model parameters, graph, etc are saved. If
`PathLike` object, the path will be resolved. If `None`, will use a
default value set by the Estimator.
tf_random_seed: Random seed for TensorFlow initializers. Setting this
value allows consistency between reruns.
save_summary_steps: Save summaries every this many steps.
save_checkpoints_steps: Save checkpoints every this many steps. Can not be
specified with `save_checkpoints_secs`.
save_checkpoints_secs: Save checkpoints every this many seconds. Can not
be specified with `save_checkpoints_steps`. Defaults to 600 seconds if
both `save_checkpoints_steps` and `save_checkpoints_secs` are not set in
constructor. If both `save_checkpoints_steps` and
`save_checkpoints_secs` are `None`, then checkpoints are disabled.
session_config: a ConfigProto used to set session parameters, or `None`.
keep_checkpoint_max: The maximum number of recent checkpoint files to
keep. As new files are created, older files are deleted. If `None` or 0,
all checkpoint files are kept. Defaults to 5 (that is, the 5 most recent
checkpoint files are kept). If a saver is passed to the estimator, this
argument will be ignored.
keep_checkpoint_every_n_hours: Number of hours between each checkpoint to
be saved. The default value of 10,000 hours effectively disables the
feature.
log_step_count_steps: The frequency, in number of global steps, that the
global step and the loss will be logged during training. Also controls
the frequency that the global steps / s will be logged (and written to
summary) during training.
train_distribute: An optional instance of `tf.distribute.Strategy`. If
specified, then Estimator will distribute the user's model during
training, according to the policy specified by that strategy. Setting
`experimental_distribute.train_distribute` is preferred.
device_fn: A callable invoked for every `Operation` that takes the
`Operation` and returns the device string. If `None`, defaults to the
device function returned by `tf.train.replica_device_setter` with
round-robin strategy.
protocol: An optional argument which specifies the protocol used when
starting server. `None` means default to grpc.
eval_distribute: An optional instance of `tf.distribute.Strategy`. If
specified, then Estimator will distribute the user's model during
evaluation, according to the policy specified by that strategy. Setting
`experimental_distribute.eval_distribute` is preferred.
experimental_distribute: An optional
`tf.contrib.distribute.DistributeConfig` object specifying
DistributionStrategy-related configuration. The `train_distribute` and
`eval_distribute` can be passed as parameters to `RunConfig` or set in
`experimental_distribute` but not both.
experimental_max_worker_delay_secs: An optional integer specifying the
maximum time a worker should wait before starting. By default, workers
are started at staggered times, with each worker being delayed by up to
60 seconds. This is intended to reduce the risk of divergence, which can
occur when many workers simultaneously update the weights of a randomly
initialized model. Users who warm-start their models and train them for
short durations (a few minutes or less) should consider reducing this
default to improve training times.
session_creation_timeout_secs: Max time workers should wait for a session
to become available (on initialization or when recovering a session)
with MonitoredTrainingSession. Defaults to 7200 seconds, but users may
want to set a lower value to detect problems with variable / session
(re)-initialization more quickly.
checkpoint_save_graph_def: Whether to save the GraphDef and MetaGraphDef
to `checkpoint_dir`. The GraphDef is saved after the session is created
as `graph.pbtxt`. MetaGraphDefs are saved out for every checkpoint as
`model.ckpt-*.meta`.
Raises:
ValueError: If both `save_checkpoints_steps` and `save_checkpoints_secs`
are set.
"""
if (save_checkpoints_steps == _USE_DEFAULT and
save_checkpoints_secs == _USE_DEFAULT):
save_checkpoints_steps = None
save_checkpoints_secs = 600
elif save_checkpoints_secs == _USE_DEFAULT:
save_checkpoints_secs = None
elif save_checkpoints_steps == _USE_DEFAULT:
save_checkpoints_steps = None
elif (save_checkpoints_steps is not None and
save_checkpoints_secs is not None):
raise ValueError(_SAVE_CKPT_ERR)
self._verify_strategy_compatibility(train_distribute, eval_distribute)
tf_config = json.loads(os.environ.get(_TF_CONFIG_ENV, '{}'))
if tf_config:
tf.compat.v1.logging.info('TF_CONFIG environment variable: %s', tf_config)
model_dir = _get_model_dir(tf_config,
compat_internal.path_to_str(model_dir))
RunConfig._replace(
self,
allowed_properties_list=_DEFAULT_REPLACEABLE_LIST,
model_dir=model_dir,
tf_random_seed=tf_random_seed,
save_summary_steps=save_summary_steps,
save_checkpoints_steps=save_checkpoints_steps,
save_checkpoints_secs=save_checkpoints_secs,
session_config=session_config,
keep_checkpoint_max=keep_checkpoint_max,
keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours,
log_step_count_steps=log_step_count_steps,
train_distribute=train_distribute,
device_fn=device_fn,
protocol=protocol,
eval_distribute=eval_distribute,
experimental_distribute=experimental_distribute,
experimental_max_worker_delay_secs=experimental_max_worker_delay_secs,
session_creation_timeout_secs=session_creation_timeout_secs,
checkpoint_save_graph_def=checkpoint_save_graph_def)
# TODO(frankchn,priyag): Eventually use distributed coordinator for TPUs.
if ((train_distribute and
not train_distribute.__class__.__name__.startswith('TPUStrategy')) or
(eval_distribute and
not eval_distribute.__class__.__name__.startswith('TPUStrategy')) or
experimental_distribute):
tf.compat.v1.logging.info(
'Initializing RunConfig with distribution strategies.')
distribute_coordinator_training.init_run_config(self, tf_config)
else:
self._init_distributed_setting_from_environment_var(tf_config)
self._maybe_overwrite_session_config_for_distributed_training()
def _verify_strategy_compatibility(self, train_distribute, eval_distribute):
if ((train_distribute is not None and train_distribute.__class__ ==
parameter_server_strategy_v2.ParameterServerStrategyV2) or
(eval_distribute is not None and eval_distribute.__class__ ==
parameter_server_strategy_v2.ParameterServerStrategyV2)):
raise ValueError('Please use `tf.compat.v1.distribute.experimental.Param'
'eterServerStrategy` for parameter server strategy with '
'estimator.')
def _maybe_overwrite_session_config_for_distributed_training(self):
"""Overwrites the session_config for distributed training.
The default overwrite is optimized for between-graph training. Subclass
should override this method if necessary.
"""
# Get session_config only for between-graph distributed mode (cluster_spec
# is present).
if not self._session_config and self._cluster_spec:
RunConfig._replace(
self,
allowed_properties_list=_DEFAULT_REPLACEABLE_LIST,
session_config=self._get_default_session_config_distributed())
def _get_default_session_config_distributed(self):
"""Returns None or tf.ConfigProto instance with default device_filters set.
Device filters are set such that chief/master and worker communicates with
only ps. session_config=None for evaluators or any other TaskType.
"""
rewrite_opts = rewriter_config_pb2.RewriterConfig(
meta_optimizer_iterations=rewriter_config_pb2.RewriterConfig.ONE)
graph_opts = config_pb2.GraphOptions(rewrite_options=rewrite_opts)
device_filters = None
if
3.867840E+04, 4.123867E+04, 4.393547E+04, 4.677456E+04,
4.976184E+04, 5.290340E+04, 5.620549E+04, 5.967453E+04, 6.331712E+04, 6.714003E+04,
7.115023E+04, 7.535485E+04, 7.976121E+04, 8.437683E+04, 8.920942E+04, 9.426687E+04,
9.955728E+04, 1.050889E+05, 1.108703E+05, 1.169102E+05, 1.232174E+05, 1.298010E+05,
1.366705E+05, 1.438352E+05, 1.513051E+05, 1.590899E+05, 1.672000E+05, 1.756458E+05,
1.844378E+05, 1.935870E+05, 2.031044E+05, 2.130014E+05, 2.232894E+05, 2.339804E+05,
2.450863E+05, 2.566195E+05, 2.685923E+05, 2.810176E+05, 2.939083E+05, 3.072777E+05,
3.211394E+05, 3.355069E+05, 3.503944E+05, 3.658161E+05, 3.817865E+05, 3.983203E+05,
4.154326E+05, 4.331386E+05, 4.514539E+05, 4.703944E+05, 4.899760E+05, 5.102151E+05,
5.311284E+05, 5.527328E+05, 5.750453E+05, 5.980834E+05, 6.218648E+05, 6.464075E+05,
6.717298E+05, 6.978501E+05, 7.247874E+05, 7.525606E+05, 7.811893E+05, 8.106929E+05,
8.410916E+05, 8.724056E+05, 9.046553E+05, 9.378615E+05, 9.720455E+05, 1.007229E+06,
1.043432E+06, 1.080679E+06, 1.118991E+06, 1.158390E+06, 1.198899E+06, 1.240543E+06,
1.283343E+06, 1.327324E+06, 1.372511E+06, 1.418926E+06, 1.466595E+06, 1.515544E+06,
1.565796E+06, 1.617378E+06, 1.670316E+06, 1.724635E+06, 1.780363E+06, 1.837526E+06,
1.896150E+06, 1.956265E+06, 2.017896E+06, 2.081072E+06, 2.145822E+06, 2.212173E+06,
2.280156E+06, 2.349798E+06, 2.421130E+06, 2.494182E+06, 2.568982E+06, 2.645562E+06,
2.723953E+06, 2.804185E+06, 2.886290E+06, 2.970298E+06, 3.056243E+06, 3.144156E+06,
3.234069E+06, 3.326015E+06, 3.420028E+06, 3.516140E+06, 3.614384E+06, 3.714796E+06,
3.817408E+06, 3.922256E+06, 4.029373E+06, 4.138795E+06, 4.250556E+06, 4.364693E+06,
4.481240E+06, 4.600234E+06, 4.721710E+06, 4.845706E+06, 4.972258E+06, 5.101402E+06,
5.233175E+06, 5.367616E+06, 5.504761E+06, 5.644648E+06, 5.787315E+06, 5.932800E+06,
6.081142E+06, 6.232379E+06, 6.386550E+06, 6.543694E+06, 6.703849E+06, 6.867056E+06,
7.033353E+06, 7.202781E+06, 7.375378E+06, 7.551186E+06, 7.730243E+06, 7.912590E+06,
8.098268E+06, 8.287317E+06, 8.479777E+06, 8.675690E+06, 8.875095E+06, 9.078035E+06,
9.284549E+06, 9.494679E+06, 9.708467E+06, 9.925953E+06, 1.014718E+07, 1.037219E+07,
1.060102E+07, 1.083371E+07, 1.107031E+07, 1.131086E+07, 1.155539E+07, 1.180396E+07,
1.205660E+07, 1.231335E+07, 1.257426E+07, 1.283936E+07, 1.310870E+07, 1.338233E+07,
1.366027E+07, 1.394258E+07, 1.422929E+07, 1.452045E+07, 1.481610E+07, 1.511628E+07,
1.542103E+07, 1.573039E+07, 1.604440E+07, 1.636310E+07, 1.668655E+07, 1.701476E+07,
1.734779E+07, 1.768569E+07, 1.802847E+07, 1.837620E+07, 1.872890E+07, 1.908663E+07,
1.944941E+07, 1.981729E+07, 2.019030E+07, 2.056850E+07, 2.095191E+07, 2.134058E+07,
2.173454E+07, 2.213383E+07, 2.253850E+07, 2.294858E+07, 2.336411E+07, 2.378512E+07,
2.421166E+07, 2.464377E+07, 2.508147E+07, 2.552481E+07, 2.597382E+07, 2.642854E+07,
2.688901E+07, 2.735527E+07, 2.782734E+07, 2.830527E+07, 2.878909E+07, 2.927883E+07,
2.977453E+07, 3.027623E+07, 3.078396E+07, 3.129774E+07, 3.181763E+07, 3.234365E+07,
3.287583E+07, 3.341421E+07, 3.395881E+07, 3.450968E+07, 3.506684E+07,
])
# ---------------------- M = 2, I = 10 ---------------------------
M = 2
I = 10
TIPS_2017_ISOT_HASH[(M,I)] = TIPS_2017_ISOT[0]
TIPS_2017_ISOQ_HASH[(M,I)] = float64([
2.501700E+00, 4.041606E+01, 8.049824E+01, 1.205830E+02, 1.606730E+02, 2.007995E+02,
2.410771E+02, 2.817371E+02, 3.231082E+02, 3.655703E+02, 4.095143E+02, 4.553170E+02,
5.033312E+02, 5.538845E+02, 6.072820E+02, 6.638110E+02, 7.237459E+02, 7.873522E+02,
8.548902E+02, 9.266178E+02, 1.002793E+03, 1.083677E+03, 1.169533E+03, 1.260630E+03,
1.357243E+03, 1.459653E+03, 1.568150E+03, 1.683030E+03, 1.804598E+03, 1.933171E+03,
2.069070E+03, 2.212630E+03, 2.364195E+03, 2.524117E+03, 2.692761E+03, 2.870501E+03,
3.057723E+03, 3.254823E+03, 3.462209E+03, 3.680301E+03, 3.909530E+03, 4.150338E+03,
4.403179E+03, 4.668522E+03, 4.946845E+03, 5.238639E+03, 5.544410E+03, 5.864675E+03,
6.199964E+03, 6.550821E+03, 6.917802E+03, 7.301478E+03, 7.702433E+03, 8.121265E+03,
8.558585E+03, 9.015020E+03, 9.491209E+03, 9.987807E+03, 1.050548E+04, 1.104492E+04,
1.160682E+04, 1.219189E+04, 1.280087E+04, 1.343449E+04, 1.409352E+04, 1.477873E+04,
1.549091E+04, 1.623087E+04, 1.699943E+04, 1.779742E+04, 1.862571E+04, 1.948515E+04,
2.037665E+04, 2.130110E+04, 2.225942E+04, 2.325254E+04, 2.428143E+04, 2.534705E+04,
2.645038E+04, 2.759244E+04, 2.877424E+04, 2.999682E+04, 3.126123E+04, 3.256856E+04,
3.391989E+04, 3.531632E+04, 3.675900E+04, 3.824905E+04, 3.978765E+04, 4.137597E+04,
4.301520E+04, 4.470658E+04, 4.645132E+04, 4.825069E+04, 5.010596E+04, 5.201841E+04,
5.398935E+04, 5.602011E+04, 5.811204E+04, 6.026650E+04, 6.248487E+04, 6.476855E+04,
6.711897E+04, 6.953757E+04, 7.202580E+04, 7.458515E+04, 7.721711E+04, 7.992319E+04,
8.270494E+04, 8.556390E+04, 8.850166E+04, 9.151980E+04, 9.461994E+04, 9.780372E+04,
1.010728E+05, 1.044288E+05, 1.078734E+05, 1.114085E+05, 1.150356E+05, 1.187565E+05,
1.225730E+05, 1.264870E+05, 1.305001E+05, 1.346143E+05, 1.388313E+05, 1.431531E+05,
1.475815E+05, 1.521185E+05, 1.567659E+05, 1.615257E+05, 1.663998E+05, 1.713903E+05,
1.764992E+05, 1.817284E+05, 1.870800E+05, 1.925560E+05, 1.981586E+05, 2.038898E+05,
2.097518E+05, 2.157466E+05, 2.218764E+05, 2.281434E+05, 2.345498E+05, 2.410978E+05,
2.477895E+05, 2.546272E+05, 2.616132E+05, 2.687498E+05, 2.760392E+05, 2.834837E+05,
2.910857E+05, 2.988474E+05, 3.067713E+05, 3.148597E+05, 3.231150E+05, 3.315396E+05,
3.401358E+05, 3.489062E+05, 3.578531E+05, 3.669790E+05, 3.762864E+05, 3.857777E+05,
3.954555E+05, 4.053222E+05, 4.153804E+05, 4.256326E+05, 4.360813E+05, 4.467291E+05,
4.575786E+05, 4.686322E+05, 4.798927E+05, 4.913626E+05, 5.030444E+05, 5.149409E+05,
5.270546E+05, 5.393881E+05, 5.519441E+05, 5.647252E+05, 5.777342E+05, 5.909735E+05,
6.044459E+05, 6.181541E+05, 6.321007E+05, 6.462885E+05, 6.607200E+05, 6.753979E+05,
6.903251E+05, 7.055041E+05, 7.209376E+05, 7.366284E+05, 7.525791E+05, 7.687924E+05,
7.852711E+05, 8.020178E+05, 8.190353E+05, 8.363262E+05, 8.538932E+05, 8.717391E+05,
8.898665E+05, 9.082781E+05, 9.269767E+05, 9.459648E+05, 9.652453E+05, 9.848207E+05,
1.004694E+06, 1.024867E+06, 1.045343E+06, 1.066125E+06, 1.087216E+06, 1.108617E+06,
1.130332E+06, 1.152363E+06, 1.174712E+06, 1.197384E+06, 1.220379E+06, 1.243701E+06,
1.267352E+06, 1.291335E+06, 1.315653E+06, 1.340307E+06, 1.365301E+06, 1.390637E+06,
1.416318E+06, 1.442345E+06, 1.468723E+06, 1.495452E+06, 1.522536E+06, 1.549977E+06,
1.577777E+06, 1.605939E+06, 1.634466E+06, 1.663359E+06, 1.692621E+06, 1.722255E+06,
1.752263E+06, 1.782647E+06, 1.813409E+06, 1.844552E+06, 1.876079E+06, 1.907990E+06,
1.940290E+06, 1.972979E+06, 2.006060E+06, 2.039536E+06, 2.073408E+06, 2.107678E+06,
2.142350E+06, 2.177424E+06, 2.212904E+06, 2.248791E+06, 2.285086E+06,
])
# --------------- CO2 838: M = 2, I = 0 ALIAS-----------------
TIPS_2017_ISOT_HASH[(M,0)] = TIPS_2017_ISOT[0]
TIPS_2017_ISOQ_HASH[(M,0)] = TIPS_2017_ISOQ_HASH[(M,I)]
# ---------------------- M = 2, I = 11 ---------------------------
M = 2
I = 11
TIPS_2017_ISOT_HASH[(M,I)] = TIPS_2017_ISOT[2]
TIPS_2017_ISOQ_HASH[(M,I)] = float64([
2.782335E+01, 4.713557E+02, 9.387029E+02, 1.406080E+03, 1.873518E+03, 2.341368E+03,
2.810932E+03, 3.284859E+03, 3.766925E+03, 4.261513E+03, 4.773136E+03, 5.306147E+03,
5.864623E+03, 6.452344E+03, 7.072828E+03, 7.729387E+03, 8.425175E+03, 9.163245E+03,
9.946589E+03, 1.077817E+04, 1.166095E+04, 1.259791E+04, 1.359209E+04, 1.464656E+04,
1.576448E+04, 1.694906E+04, 1.820363E+04, 1.953159E+04, 2.093643E+04, 2.242176E+04,
2.399128E+04, 2.564882E+04, 2.739831E+04, 2.924378E+04, 3.118942E+04, 3.323950E+04,
3.539843E+04, 3.767076E+04, 4.006115E+04, 4.257440E+04, 4.521543E+04, 4.798931E+04,
5.090124E+04, 5.395655E+04, 5.716074E+04, 6.051942E+04, 6.403836E+04, 6.772347E+04,
7.158083E+04, 7.561663E+04, 7.983725E+04, 8.424921E+04, 8.885918E+04, 9.367400E+04,
9.870066E+04, 1.039463E+05, 1.094183E+05, 1.151240E+05, 1.210712E+05, 1.272677E+05,
1.337214E+05, 1.404405E+05, 1.474333E+05, 1.547084E+05, 1.622744E+05, 1.701402E+05,
1.783147E+05, 1.868073E+05, 1.956274E+05, 2.047844E+05, 2.142882E+05, 2.241486E+05,
2.343760E+05, 2.449804E+05, 2.559725E+05, 2.673630E+05, 2.791627E+05, 2.913827E+05,
3.040342E+05, 3.171288E+05, 3.306779E+05, 3.446935E+05, 3.591876E+05, 3.741723E+05,
3.896601E+05, 4.056634E+05, 4.221951E+05, 4.392682E+05, 4.568957E+05, 4.750909E+05,
4.938673E+05, 5.132387E+05, 5.332187E+05, 5.538215E+05, 5.750612E+05, 5.969522E+05,
6.195090E+05, 6.427462E+05, 6.666788E+05, 6.913216E+05, 7.166898E+05, 7.427988E+05,
7.696639E+05, 7.973007E+05, 8.257250E+05, 8.549525E+05, 8.849992E+05, 9.158811E+05,
9.476145E+05, 9.802157E+05, 1.013701E+06, 1.048087E+06, 1.083390E+06, 1.119627E+06,
1.156815E+06, 1.194970E+06, 1.234110E+06, 1.274250E+06, 1.315409E+06, 1.357604E+06,
1.400851E+06, 1.445167E+06, 1.490570E+06, 1.537077E+06, 1.584704E+06, 1.633470E+06,
1.683392E+06, 1.734485E+06, 1.786769E+06, 1.840259E+06, 1.894973E+06, 1.950928E+06,
2.008141E+06, 2.066630E+06, 2.126410E+06, 2.187498E+06, 2.249913E+06, 2.313669E+06,
2.378785E+06, 2.445276E+06, 2.513159E+06, 2.582450E+06, 2.653165E+06, 2.725322E+06,
2.798935E+06, 2.874020E+06, 2.950594E+06, 3.028672E+06, 3.108269E+06, 3.189401E+06,
3.272083E+06, 3.356330E+06, 3.442157E+06, 3.529579E+06, 3.618610E+06, 3.709264E+06,
3.801557E+06, 3.895501E+06, 3.991111E+06, 4.088400E+06, 4.187382E+06, 4.288070E+06,
4.390478E+06, 4.494617E+06, 4.600501E+06, 4.708142E+06, 4.817553E+06, 4.928745E+06,
5.041730E+06, 5.156520E+06, 5.273126E+06, 5.391560E+06, 5.511832E+06, 5.633953E+06,
5.757934E+06, 5.883784E+06,
])
# ---------------------- M = 2, I = 12 ---------------------------
M = 2
I = 12
TIPS_2017_ISOT_HASH[(M,I)] = TIPS_2017_ISOT[0]
TIPS_2017_ISOQ_HASH[(M,I)] = float64([
8.248682E+01, 1.374885E+03, 2.737741E+03, 4.100686E+03, 5.463802E+03, 6.828082E+03,
8.197234E+03, 9.578835E+03, 1.098375E+04, 1.242462E+04, 1.391448E+04, 1.546591E+04,
1.709070E+04, 1.879977E+04, 2.060326E+04, 2.251071E+04, 2.453122E+04, 2.667355E+04,
2.894631E+04, 3.135799E+04, 3.391712E+04, 3.663224E+04, 3.951206E+04, 4.256541E+04,
4.580131E+04, 4.922903E+04, 5.285804E+04, 5.669811E+04, 6.075926E+04, 6.505180E+04,
6.958637E+04, 7.437390E+04, 7.942564E+04, 8.475318E+04, 9.036846E+04, 9.628373E+04,
1.025116E+05, 1.090652E+05, 1.159576E+05, 1.232028E+05, 1.308148E+05, 1.388081E+05,
1.471975E+05, 1.559983E+05, 1.652263E+05, 1.748975E+05, 1.850284E+05, 1.956358E+05,
2.067372E+05, 2.183504E+05, 2.304934E+05, 2.431851E+05, 2.564443E+05, 2.702908E+05,
2.847445E+05, 2.998258E+05, 3.155556E+05, 3.319555E+05, 3.490472E+05, 3.668531E+05,
3.853960E+05, 4.046992E+05, 4.247867E+05, 4.456827E+05, 4.674120E+05, 4.900000E+05,
5.134725E+05, 5.378560E+05, 5.631772E+05, 5.894637E+05, 6.167433E+05, 6.450447E+05,
6.743967E+05, 7.048290E+05, 7.363717E+05, 7.690554E+05, 8.029115E+05, 8.379716E+05,
8.742681E+05, 9.118340E+05, 9.507027E+05, 9.909083E+05, 1.032485E+06, 1.075469E+06,
1.119896E+06, 1.165801E+06, 1.213222E+06, 1.262197E+06, 1.312763E+06, 1.364960E+06,
1.418826E+06, 1.474402E+06, 1.531729E+06, 1.590847E+06, 1.651799E+06, 1.714626E+06,
1.779372E+06, 1.846081E+06, 1.914797E+06, 1.985564E+06, 2.058429E+06, 2.133437E+06,
2.210636E+06, 2.290073E+06, 2.371795E+06, 2.455852E+06, 2.542294E+06, 2.631169E+06,
2.722529E+06, 2.816425E+06, 2.912910E+06, 3.012035E+06, 3.113855E+06, 3.218422E+06,
3.325792E+06, 3.436020E+06, 3.549161E+06, 3.665273E+06, 3.784412E+06, 3.906637E+06,
4.032006E+06, 4.160578E+06, 4.292413E+06, 4.427572E+06, 4.566115E+06, 4.708105E+06,
4.853604E+06, 5.002674E+06, 5.155381E+06, 5.311788E+06, 5.471961E+06, 5.635964E+06,
5.803865E+06, 5.975731E+06, 6.151628E+06, 6.331626E+06, 6.515793E+06, 6.704199E+06,
6.896914E+06, 7.094009E+06, 7.295554E+06, 7.501623E+06, 7.712287E+06, 7.927620E+06,
8.147695E+06, 8.372588E+06, 8.602372E+06, 8.837123E+06, 9.076918E+06, 9.321833E+06,
9.571945E+06, 9.827332E+06, 1.008807E+07, 1.035424E+07, 1.062593E+07, 1.090320E+07,
1.118615E+07, 1.147484E+07, 1.176937E+07, 1.206982E+07, 1.237626E+07, 1.268879E+07,
1.300747E+07, 1.333241E+07, 1.366367E+07, 1.400135E+07, 1.434553E+07, 1.469629E+07,
1.505372E+07, 1.541791E+07, 1.578894E+07, 1.616690E+07, 1.655188E+07, 1.694395E+07,
1.734322E+07, 1.774977E+07, 1.816368E+07, 1.858505E+07, 1.901396E+07, 1.945050E+07,
1.989476E+07, 2.034683E+07, 2.080679E+07, 2.127475E+07, 2.175079E+07, 2.223499E+07,
2.272745E+07, 2.322825E+07, 2.373750E+07, 2.425527E+07, 2.478166E+07, 2.531676E+07,
2.586066E+07, 2.641344E+07, 2.697521E+07, 2.754605E+07, 2.812604E+07, 2.871529E+07,
2.931387E+07, 2.992189E+07, 3.053943E+07, 3.116658E+07, 3.180343E+07, 3.245007E+07,
3.310659E+07, 3.377308E+07, 3.444963E+07, 3.513633E+07, 3.583326E+07, 3.654053E+07,
3.725820E+07, 3.798638E+07, 3.872515E+07, 3.947459E+07, 4.023480E+07, 4.100587E+07,
4.178787E+07, 4.258090E+07, 4.338505E+07, 4.420039E+07, 4.502701E+07, 4.586501E+07,
4.671445E+07, 4.757544E+07, 4.844805E+07, 4.933236E+07, 5.022846E+07, 5.113643E+07,
5.205636E+07, 5.298832E+07, 5.393239E+07, 5.488867E+07, 5.585722E+07, 5.683812E+07,
5.783147E+07, 5.883732E+07, 5.985577E+07, 6.088689E+07, 6.193076E+07, 6.298745E+07,
6.405704E+07, 6.513961E+07, 6.623523E+07, 6.734397E+07, 6.846590E+07, 6.960111E+07,
7.074966E+07, 7.191162E+07, 7.308707E+07, 7.427607E+07, 7.547870E+07,
])
# ---------------------- M = 2, I = 13 ---------------------------
M = 2
I = 13
TIPS_2017_ISOT_HASH[(M,I)] = TIPS_2017_ISOT[0]
TIPS_2017_ISOQ_HASH[(M,I)] = float64([
1.172250E+00, 1.797782E+01, 3.578844E+01, 5.360026E+01, 7.141461E+01, 8.924655E+01,
1.071502E+02, 1.252330E+02, 1.436455E+02, 1.625593E+02, 1.821490E+02, 2.025809E+02,
2.240094E+02, 2.465762E+02, 2.704128E+02, 2.956425E+02, 3.223827E+02, 3.507470E+02,
3.808470E+02, 4.127935E+02, 4.466974E+02, 4.826712E+02, 5.208288E+02, 5.612866E+02,
6.041637E+02, 6.495824E+02, 6.976684E+02, 7.485507E+02, 8.023624E+02, 8.592405E+02,
9.193259E+02, 9.827640E+02, 1.049704E+03, 1.120301E+03, 1.194713E+03, 1.273103E+03,
1.355638E+03, 1.442493E+03, 1.533844E+03, 1.629875E+03, 1.730772E+03, 1.836729E+03,
1.947943E+03, 2.064618E+03, 2.186963E+03, 2.315191E+03, 2.449522E+03, 2.590182E+03,
2.737401E+03, 2.891416E+03, 3.052469E+03, 3.220808E+03, 3.396688E+03, 3.580369E+03,
3.772118E+03, 3.972206E+03, 4.180913E+03, 4.398524E+03, 4.625331E+03, 4.861631E+03,
5.107729E+03, 5.363936E+03, 5.630570E+03, 5.907955E+03, 6.196423E+03, 6.496312E+03,
6.807968E+03, 7.131742E+03, 7.467993E+03, 7.817088E+03, 8.179401E+03, 8.555312E+03,
8.945209E+03, 9.349488E+03, 9.768552E+03, 1.020281E+04, 1.065268E+04, 1.111859E+04,
1.160098E+04, 1.210027E+04, 1.261693E+04, 1.315140E+04, 1.370416E+04, 1.427567E+04,
1.486641E+04, 1.547688E+04, 1.610757E+04, 1.675898E+04, 1.743163E+04, 1.812603E+04,
1.884273E+04, 1.958224E+04, 2.034513E+04, 2.113194E+04, 2.194324E+04, 2.277960E+04,
2.364159E+04, 2.452981E+04, 2.544485E+04, 2.638733E+04, 2.735784E+04, 2.835702E+04,
2.938550E+04, 3.044392E+04, 3.153292E+04, 3.265317E+04, 3.380534E+04, 3.499009E+04,
3.620811E+04, 3.746011E+04, 3.874677E+04, 4.006882E+04, 4.142697E+04, 4.282196E+04,
4.425451E+04, 4.572539E+04, 4.723535E+04, 4.878515E+04, 5.037557E+04, 5.200740E+04,
5.368142E+04, 5.539845E+04, 5.715929E+04, 5.896477E+04, 6.081571E+04, 6.271296E+04,
6.465735E+04, 6.664976E+04, 6.869105E+04, 7.078209E+04, 7.292377E+04, 7.511697E+04,
7.736262E+04, 7.966160E+04, 8.201486E+04, 8.442330E+04, 8.688789E+04, 8.940955E+04,
9.198924E+04, 9.462794E+04, 9.732660E+04, 1.000862E+05, 1.029078E+05, 1.057923E+05,
1.087407E+05, 1.117541E+05, 1.148335E+05, 1.179799E+05, 1.211944E+05, 1.244779E+05,
1.278316E+05, 1.312566E+05, 1.347538E+05, 1.383243E+05, 1.419693E+05, 1.456898E+05,
1.494870E+05, 1.533619E+05, 1.573156E+05, 1.613494E+05, 1.654642E+05, 1.696612E+05,
1.739416E+05, 1.783064E+05, 1.827570E+05, 1.872943E+05, 1.919196E+05, 1.966340E+05,
2.014388E+05, 2.063350E+05, 2.113239E+05, 2.164066E+05, 2.215843E+05, 2.268583E+05,
2.322298E+05, 2.376998E+05, 2.432697E+05, 2.489407E+05, 2.547139E+05, 2.605906E+05,
2.665720E+05, 2.726593E+05, 2.788538E+05, 2.851567E+05, 2.915692E+05, 2.980925E+05,
3.047279E+05, 3.114766E+05, 3.183399E+05, 3.253190E+05, 3.324151E+05, 3.396295E+05,
3.469634E+05, 3.544181E+05, 3.619948E+05, 3.696947E+05, 3.775191E+05, 3.854693E+05,
3.935464E+05, 4.017517E+05, 4.100865E+05, 4.185520E+05, 4.271494E+05, 4.358800E+05,
4.447450E+05, 4.537456E+05, 4.628831E+05, 4.721587E+05, 4.815736E+05, 4.911291E+05,
5.008263E+05, 5.106664E+05, 5.206508E+05, 5.307806E+05, 5.410569E+05, 5.514811E+05,
5.620543E+05, 5.727776E+05, 5.836524E+05, 5.946797E+05, 6.058608E+05, 6.171968E+05,
6.286889E+05, 6.403382E+05, 6.521460E+05, 6.641133E+05, 6.762413E+05, 6.885312E+05,
7.009841E+05, 7.136011E+05, 7.263834E+05, 7.393320E+05, 7.524481E+05, 7.657327E+05,
7.791870E+05, 7.928120E+05, 8.066089E+05, 8.205787E+05, 8.347224E+05, 8.490412E+05,
8.635361E+05, 8.782080E+05, 8.930581E+05, 9.080874E+05, 9.232969E+05, 9.386875E+05,
9.542604E+05, 9.700165E+05, 9.859567E+05, 1.002082E+06, 1.018394E+06,
])
# ---------------------- M = 3, I = 1 ---------------------------
M = 3
I = 1
TIPS_2017_ISOT_HASH[(M,I)] = TIPS_2017_ISOT[2]
TIPS_2017_ISOQ_HASH[(M,I)] = float64([
7.847400E-01, 5.870075E+01, 1.653093E+02, 3.033348E+02, 4.668337E+02, 6.523999E+02,
8.578395E+02, 1.081788E+03, 1.323572E+03, 1.583129E+03, 1.860885E+03, 2.157688E+03,
2.474797E+03, 2.813601E+03, 3.175846E+03, 3.563351E+03, 3.978042E+03, 4.422045E+03,
4.897528E+03, 5.406676E+03, 5.951807E+03, 6.535268E+03, 7.159589E+03, 7.827197E+03,
8.540615E+03, 9.302558E+03, 1.011573E+04, 1.098290E+04, 1.190686E+04, 1.289056E+04,
1.393708E+04, 1.504939E+04, 1.623081E+04, 1.748441E+04, 1.881364E+04, 2.022192E+04,
2.171273E+04, 2.328963E+04,
x3, x4, x5, x6, x7, l, s):
check(len(s) == 1*5 + 2*45 + 3*450 + 4*500)
return None, f, after
def test_compile_framework_5(self):
self.run('compile_framework_5')
def define_compile_framework_7(cls):
# Array of pointers (test the write barrier for setarrayitem_gc)
def before(n, x):
return n, x, None, None, None, None, None, None, None, None, [X(123)], None
def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s):
if n < 1900:
check(l[0].x == 123)
l = [None] * 16
l[0] = X(123)
l[1] = X(n)
l[2] = X(n+10)
l[3] = X(n+20)
l[4] = X(n+30)
l[5] = X(n+40)
l[6] = X(n+50)
l[7] = X(n+60)
l[8] = X(n+70)
l[9] = X(n+80)
l[10] = X(n+90)
l[11] = X(n+100)
l[12] = X(n+110)
l[13] = X(n+120)
l[14] = X(n+130)
l[15] = X(n+140)
if n < 1800:
check(len(l) == 16)
check(l[0].x == 123)
check(l[1].x == n)
check(l[2].x == n+10)
check(l[3].x == n+20)
check(l[4].x == n+30)
check(l[5].x == n+40)
check(l[6].x == n+50)
check(l[7].x == n+60)
check(l[8].x == n+70)
check(l[9].x == n+80)
check(l[10].x == n+90)
check(l[11].x == n+100)
check(l[12].x == n+110)
check(l[13].x == n+120)
check(l[14].x == n+130)
check(l[15].x == n+140)
n -= x.foo
return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s
def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s):
check(len(l) == 16)
check(l[0].x == 123)
check(l[1].x == 2)
check(l[2].x == 12)
check(l[3].x == 22)
check(l[4].x == 32)
check(l[5].x == 42)
check(l[6].x == 52)
check(l[7].x == 62)
check(l[8].x == 72)
check(l[9].x == 82)
check(l[10].x == 92)
check(l[11].x == 102)
check(l[12].x == 112)
check(l[13].x == 122)
check(l[14].x == 132)
check(l[15].x == 142)
return before, f, after
def test_compile_framework_7(self):
self.run('compile_framework_7')
def define_compile_framework_7_interior(cls):
# Array of structs containing pointers (test the write barrier
# for setinteriorfield_gc)
S = lltype.GcStruct('S', ('i', lltype.Signed))
A = lltype.GcArray(lltype.Struct('entry', ('x', lltype.Ptr(S)),
('y', lltype.Ptr(S)),
('z', lltype.Ptr(S))))
class Glob:
a = lltype.nullptr(A)
glob = Glob()
#
def make_s(i):
s = lltype.malloc(S)
s.i = i
return s
#
@unroll_safe
def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s):
a = glob.a
if not a:
a = glob.a = lltype.malloc(A, 10)
i = 0
while i < 10:
a[i].x = make_s(n + i * 100 + 1)
a[i].y = make_s(n + i * 100 + 2)
a[i].z = make_s(n + i * 100 + 3)
i += 1
i = 0
while i < 10:
check(a[i].x.i == n + i * 100 + 1)
check(a[i].y.i == n + i * 100 + 2)
check(a[i].z.i == n + i * 100 + 3)
i += 1
n -= x.foo
return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s
return None, f, None
def test_compile_framework_7_interior(self):
self.run('compile_framework_7_interior')
def define_compile_framework_8(cls):
# Array of pointers, of unknown length (test write_barrier_from_array)
def before(n, x):
return n, x, None, None, None, None, None, None, None, None, [X(123)], None
def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s):
if n < 1900:
check(l[0].x == 123)
l = [None] * (16 + (n & 7))
l[0] = X(123)
l[1] = X(n)
l[2] = X(n+10)
l[3] = X(n+20)
l[4] = X(n+30)
l[5] = X(n+40)
l[6] = X(n+50)
l[7] = X(n+60)
l[8] = X(n+70)
l[9] = X(n+80)
l[10] = X(n+90)
l[11] = X(n+100)
l[12] = X(n+110)
l[13] = X(n+120)
l[14] = X(n+130)
l[15] = X(n+140)
if n < 1800:
check(len(l) == 16 + (n & 7))
check(l[0].x == 123)
check(l[1].x == n)
check(l[2].x == n+10)
check(l[3].x == n+20)
check(l[4].x == n+30)
check(l[5].x == n+40)
check(l[6].x == n+50)
check(l[7].x == n+60)
check(l[8].x == n+70)
check(l[9].x == n+80)
check(l[10].x == n+90)
check(l[11].x == n+100)
check(l[12].x == n+110)
check(l[13].x == n+120)
check(l[14].x == n+130)
check(l[15].x == n+140)
n -= x.foo
return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s
def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s):
check(len(l) >= 16)
check(l[0].x == 123)
check(l[1].x == 2)
check(l[2].x == 12)
check(l[3].x == 22)
check(l[4].x == 32)
check(l[5].x == 42)
check(l[6].x == 52)
check(l[7].x == 62)
check(l[8].x == 72)
check(l[9].x == 82)
check(l[10].x == 92)
check(l[11].x == 102)
check(l[12].x == 112)
check(l[13].x == 122)
check(l[14].x == 132)
check(l[15].x == 142)
return before, f, after
def test_compile_framework_8(self):
self.run('compile_framework_8')
def define_compile_framework_9(cls):
# Like compile_framework_8, but with variable indexes and large
# arrays, testing the card_marking case
def before(n, x):
return n, x, None, None, None, None, None, None, None, None, [X(123)], None
def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s):
if n < 1900:
check(l[0].x == 123)
num = 512 + (n & 7)
l = [None] * num
l[0] = X(123)
l[1] = X(n)
l[2] = X(n+10)
l[3] = X(n+20)
l[4] = X(n+30)
l[5] = X(n+40)
l[6] = X(n+50)
l[7] = X(n+60)
l[num-8] = X(n+70)
l[num-9] = X(n+80)
l[num-10] = X(n+90)
l[num-11] = X(n+100)
l[-12] = X(n+110)
l[-13] = X(n+120)
l[-14] = X(n+130)
l[-15] = X(n+140)
if n < 1800:
num = 512 + (n & 7)
check(len(l) == num)
check(l[0].x == 123)
check(l[1].x == n)
check(l[2].x == n+10)
check(l[3].x == n+20)
check(l[4].x == n+30)
check(l[5].x == n+40)
check(l[6].x == n+50)
check(l[7].x == n+60)
check(l[num-8].x == n+70)
check(l[num-9].x == n+80)
check(l[num-10].x == n+90)
check(l[num-11].x == n+100)
check(l[-12].x == n+110)
check(l[-13].x == n+120)
check(l[-14].x == n+130)
check(l[-15].x == n+140)
n -= x.foo
return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s
def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s):
check(len(l) >= 512)
check(l[0].x == 123)
check(l[1].x == 2)
check(l[2].x == 12)
check(l[3].x == 22)
check(l[4].x == 32)
check(l[5].x == 42)
check(l[6].x == 52)
check(l[7].x == 62)
check(l[-8].x == 72)
check(l[-9].x == 82)
check(l[-10].x == 92)
check(l[-11].x == 102)
check(l[-12].x == 112)
check(l[-13].x == 122)
check(l[-14].x == 132)
check(l[-15].x == 142)
return before, f, after
def test_compile_framework_9(self):
self.run('compile_framework_9')
def define_compile_framework_external_exception_handling(cls):
def before(n, x):
x = X(0)
return n, x, None, None, None, None, None, None, None, None, None, None
@dont_look_inside
def g(x):
if x > 200:
return 2
raise ValueError
@dont_look_inside
def h(x):
if x > 150:
raise ValueError
return 2
def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s):
try:
x.x += g(n)
except ValueError:
x.x += 1
try:
x.x += h(n)
except ValueError:
x.x -= 1
n -= 1
return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s
def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s):
check(x.x == 1800 * 2 + 150 * 2 + 200 - 1850)
return before, f, after
def test_compile_framework_external_exception_handling(self):
self.run('compile_framework_external_exception_handling')
def define_compile_framework_bug1(self):
@elidable
def nonmoving():
x = X(1)
for i in range(7):
rgc.collect()
return x
@dont_look_inside
def do_more_stuff():
x = X(5)
for i in range(7):
rgc.collect()
return x
def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s):
x0 = do_more_stuff()
check(nonmoving().x == 1)
n -= 1
return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s
return None, f, None
def test_compile_framework_bug1(self):
self.run('compile_framework_bug1', 200)
def define_compile_framework_vref(self):
from rpython.rlib.jit import virtual_ref, virtual_ref_finish
class A:
pass
glob = A()
def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s):
a = A()
glob.v = vref = virtual_ref(a)
virtual_ref_finish(vref, a)
n -= 1
return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s
return None, f, None
def test_compile_framework_vref(self):
self.run('compile_framework_vref', 200)
def define_compile_framework_float(self):
# test for a bug: the fastpath_malloc does not save and restore
# xmm registers around the actual call to the slow path
class A:
x0 = x1 = x2 = x3 = x4 = x5 = x6 = x7 = 0
@dont_look_inside
def escape1(a):
a.x0 += 0
a.x1 += 6
a.x2 += 12
a.x3 += 18
a.x4 += 24
a.x5 += 30
a.x6 += 36
a.x7 += 42
@dont_look_inside
def escape2(n, f0, f1,
from bearlibterminal import terminal as blt
from e_tile import TileType
from e_culture import Culture
from e_person import Gender
from e_state import State
from e_omode import OMode
import random
import camera
import gamemap
import cursor
import console
import actor
import villager
import calendar
import settlement
import laminate
import resource
class historia():
"""
CLASS: main historia class
"""
def __init__(self):
self.gmap = gamemap.Gamemap(60, 0.1)
self.camera = camera.Camera(40, 40)
self.laminate = laminate.Laminate(40, 40)
self.cursor = cursor.Cursor()
self.console = console.Console()
self.actor_list = []
self.mouse_x = 0
self.mouse_y = 0
self.time = calendar.Calendar()
self.overlaylistsize = 20
self.page_number = 0
self.game_loop = 1
self.console_active = False
self.overlay_update = False
self.current_command = ""
self.select = 0
self.active_actor = None
self.tilelist = []
self.textlist = []
self.state = State.VIEW
self.overlay_mode = OMode.BASIC
def setup(self):
print("setup...")
blt.open()
# set title
blt.set("""
window: size=160x40,
cellsize=auto,
font:default""")
blt.set("window.title='historia'")
blt.set("input.filter={keyboard, mouse+}")
blt.clear()
blt.color("white")
# load tilesets
blt.set("U+E000: toen.png, size=16x16, align=top-left")
# set up map
self.gmap.add_random_forest(8, 6)
self.gmap.add_random_river(30)
self.gmap.add_random_river(30)
self.gmap.add_random_river(30)
def hello(self):
blt.printf(1, 1, 'Hello World')
blt.refresh()
def tile_test(self):
blt.put(1, 3, 0xE000+0)
blt.put(3, 3, 0xE000+2)
blt.refresh()
def basic_start(self):
# actor setup
# create a basic villager token
vil1 = villager.Villager(10, 10, Culture.GREEK, 1, self.time)
vil1.populate_new_random(12, 20.0, 3.0)
self.gmap.actorgrid[10][10].append(0)
# create another villager token on top of the first
vil2 = villager.Villager(10, 10, Culture.GREEK, 1, self.time)
vil2.populate_new_random(5, 20.0, 3.0)
self.gmap.actorgrid[10][10].append(1)
ham1 = settlement.Settlement(12, 5, Culture.GREEK, 1, self.time)
ham1.add_housing(5)
chicken = resource.Resource('chicken', 1)
ham1.add_resource(chicken)
self.gmap.actorgrid[12][5].append(2)
vil1.setparent(ham1)
vil2.setparent(ham1)
self.actor_list.append(vil1)
self.actor_list.append(vil2)
self.actor_list.append(ham1)
def print_grid(self):
blt.layer(1)
blt.clear()
for r, row in enumerate(range(self.camera.height)):
for c, element in enumerate(range(self.camera.width)):
gridx = c + self.camera.posx
gridy = r + self.camera.posy
# print(type(hex(self.gmap.grid[gridx][gridy].value)))
# print(type(0xE000))
hexcode = 0xE000 + self.gmap.grid[gridx][gridy].value
blt.put(c * 2, r, hexcode)
blt.layer(2)
blt.clear_area(0, 0, self.camera.width, self.camera.height)
if self.state == State.MOVE:
# print movement
for r, row in enumerate(self.laminate.lgrid):
for c, element in enumerate(row):
if element > -1:
hexcode = 0xE000 + 204
blt.put(r * 2, c, hexcode)
def print_vegetation(self):
blt.layer(3)
blt.clear_area(0, 0, self.camera.width, self.camera.height)
for r, row in enumerate(range(self.camera.height)):
for c, element in enumerate(range(self.camera.width)):
gridx = c + self.camera.posx
gridy = r + self.camera.posy
do_nothing = False
q = int(self.gmap.watergrid[gridx, gridy])
if q > 0:
hexcode = 0xE000 + q
blt.put(c * 2, r, hexcode)
continue
w = self.gmap.woodgrid[gridx, gridy]
if w >= 5:
hexcode = 0xE000 + 6
elif w >= 3:
hexcode = 0xE000 + 5
elif w >= 1:
hexcode = 0xE000 + 4
else:
do_nothing = True
if not do_nothing:
blt.put(c * 2, r, hexcode)
def print_actors(self):
blt.layer(4)
blt.clear_area(0, 0, self.camera.width, self.camera.height)
for actor in reversed(self.actor_list):
if (actor.posx >= self.camera.posx and actor.posx <
self.camera.posx + self.camera.width and
actor.posy >= self.camera.posy and actor.posy <
self.camera.posy + self.camera.height):
cx = actor.posx - self.camera.posx
cy = actor.posy - self.camera.posy
blt.put(cx * 2, cy, 0xE000 + actor.id.value)
# print selected actor
x = self.cursor.x + self.camera.posx
y = self.cursor.y + self.camera.posy
a = self.gmap.actorgrid[x][y]
if a:
ai = self.actor_list[a[self.select]]
blt.put(self.cursor.x * 2, self.cursor.y, 0xE000 + ai.id.value)
def update_overlay(self):
listmax = self.overlaylistsize
self.tilelist = []
self.textlist = []
# selection
selectcode = 0xE000 + TileType.SELECTY.value
self.tilelist.append((self.cursor.x * 2, self.cursor.y, selectcode))
# info part
x = self.cursor.x + self.camera.posx
y = self.cursor.y + self.camera.posy
self.textlist.append((82, 1, "(%d, %d)" % (x, y)))
self.textlist.append((82, 3, self.console.get_info(self.gmap, x, y)))
# print(self.gmap.actorgrid[x][y])
a = self.gmap.actorgrid[x][y]
if (not a) or (self.select >= len(a)):
self.overlay_update = False
return
actor = self.actor_list[a[self.select]]
#self.textlist.append((82, 5, actor.id.name))
if actor.type == 'Villager':
self.textlist.append((82, 5,
"Villager - Count=%i" % (len(actor.poplist))))
actor.setstats(self.time)
off = listmax * self.page_number
#print(off)
for i, person in enumerate(actor.poplist[off:]):
self.textlist.append((82, 6+i, "%s %s" % (
person.name, person.surname)))
if person.gender == Gender.MALE:
g = 'M'
else:
g = 'F'
self.textlist.append((99, 6+i, "%s" % (g)))
self.textlist.append((102, 6+i, "Age:%d" % (
person.birth.getAge(self.time))))
if i >= listmax - 1:
break
self.textlist.append((82, 26, "<%i/%i> [[cycle with c]]" % (
self.page_number+1, (len(actor.poplist) // listmax) + 1)))
self.textlist.append((82, 28, "Productivity: %g" % (
actor.productivity)))
if actor.type == 'Hamlet':
if self.overlay_mode == OMode.BASIC:
self.textlist.append((82, 7, "%s" % (actor.name)))
self.textlist.append((82, 8, "Founded in %g" % (
actor.founding.year)))
self.textlist.append((82, 9, "Population: %d/%d" % (
actor.population, actor.capacity)))
elif self.overlay_mode == OMode.ADDIT:
# print inventory
for key, resource in actor.resource_dict.items():
print(key)
self.overlay_update = False
def print_static_overlay(self):
blt.layer(5)
blt.clear_area(0, 0, self.camera.width, self.camera.height)
for tile in self.tilelist:
blt.put(tile[0], tile[1], tile[2])
for text in self.textlist:
blt.puts(text[0], text[1], text[2])
def print_dynamic_overlay(self):
blt.layer(6)
blt.clear_area(0, 0, self.camera.width, self.camera.height)
def mouse_interaction(self):
if self.mouse_x > 0 and self.mouse_x < 200:
x = self.mouse_x // 2
y = self.mouse_y
self.cursor.x = x
self.cursor.y = y
def print_console(self):
blt.puts(2, 39, ":%s" % (self.current_command))
if self.state == State.MOVE:
for ix, row in enumerate(self.laminate.lgrid):
for iy, col in enumerate(row):
blt.puts(ix * 2, iy, "%i" % (self.laminate.lgrid[ix, iy]))
def process_command(self):
if self.current_command == "quit":
self.game_loop = 0
# reset
self.current_command = ""
def reset_selection(self):
self.select = 0
self.overlay_update = True
def select_page(self, page=1):
x = self.cursor.x + self.camera.posx
y = self.cursor.y + self.camera.posy
a = self.gmap.actorgrid[x][y]
if a:
ai = self.actor_list[a[self.select]]
if ai.type == 'Villager':
max_pages = len(ai.poplist) // self.overlaylistsize + 1
new_page = self.page_number + page
while new_page < 0:
new_page += max_pages
while new_page >= max_pages:
new_page -= max_pages
if not new_page == self.page_number:
self.overlay_update = True
self.page_number = new_page
def select_state(self, state=State.VIEW):
if self.state == state:
self.state = State.VIEW
if state == State.MOVE:
self.laminate.reset_move()
# move if cursor change
x = self.cursor.x + self.camera.posx
y = self.cursor.y + self.camera.posy
else:
self.state = state
# try move
x = self.cursor.x + self.camera.posx
y = self.cursor.y + self.camera.posy
if self.state == State.MOVE:
a = self.gmap.actorgrid[x][y]
# is this a movable actor?
if (not a):
self.state = State.VIEW
#print('no actor')
return
actor = self.actor_list[a[self.select]]
if not actor.movable:
#print('cannot move')
return
# we can move, calculate the movement!
#print('can move!')
self.laminate.add_move(self.cursor.x,
self.cursor.y)
self.laminate.fill_move(actor)
self.active_actor = actor
def select_cycle(self, cycle=1):
x = self.cursor.x + self.camera.posx
y = self.cursor.y + self.camera.posy
a = self.gmap.actorgrid[x][y]
if a:
new_pos = self.select + cycle
while new_pos < 0:
new_pos += len(a)
while new_pos >= len(a):
new_pos -= len(a)
if not new_pos == self.select:
self.overlay_update = True
self.page_number = 0
self.select = new_pos
def mainloop(self):
# print map
self.print_grid()
# print vegetation
self.print_vegetation()
# print actors
self.print_actors()
# print overlay
self.update_overlay()
self.print_static_overlay()
self.print_dynamic_overlay()
# print command if relevant
if self.console_active:
self.print_console()
# after printing, refresh
blt.refresh()
key = blt.read()
if key in (blt.TK_CLOSE, blt.TK_ESCAPE):
self.game_loop = 0
elif self.console_active:
if key == blt.TK_RETURN:
self.game_loop = 1
self.console_active = False
self.process_command()
elif blt.check(blt.TK_BACKSPACE):
self.current_command = self.current_command[:-1]
elif blt.check(blt.TK_CHAR):
self.current_command += (chr(blt.state(blt.TK_CHAR)))
#print(self.current_command)
self.game_loop = 1
else:
self.game_loop = 1
elif key == blt.TK_RETURN:
if self.state == State.MOVE:
# move to cursor
x = self.cursor.x + self.camera.posx
y = self.cursor.y + self.camera.posy
print(self.laminate.lgrid[x, y])
if self.laminate.lgrid[x, y] > -1:
px = self.active_actor.posx
py = self.active_actor.posy
self.active_actor.mv_points = self.laminate.lgrid[x, y]
self.active_actor.posx = x
self.active_actor.posy = y
n = self.gmap.actorgrid[px][py][self.select]
del self.gmap.actorgrid[px][py][self.select]
self.gmap.actorgrid[x][y].append(n)
self.state = State.VIEW
self.laminate.reset_move()
if self.state == State.VIEW:
# additional info
self.overlay_mode = OMode.ADDIT
else:
if (key == blt.TK_RIGHT and
self.camera.posx <
self.gmap.width - self.camera.width):
self.select_state(State.VIEW)
self.camera.posx += 1
self.reset_selection()
# print(self.camera.posx, self.camera.posy)
elif (key == blt.TK_LEFT and
self.camera.posx > 0):
self.select_state(State.VIEW)
self.camera.posx -= 1
self.reset_selection()
# print(self.camera.posx, self.camera.posy)
elif (key == blt.TK_DOWN and
self.camera.posy <
self.gmap.height - self.camera.height):
self.select_state(State.VIEW)
self.camera.posy += 1
self.reset_selection()
# print(self.camera.posx, self.camera.posy)
elif (key == blt.TK_UP and
self.camera.posy > 0):
self.select_state(State.VIEW)
self.camera.posy -= 1
self.reset_selection()
# print(self.camera.posx, self.camera.posy)
elif key == blt.TK_A:
if self.cursor.x > 0:
self.cursor.x -= 1
self.reset_selection()
elif self.camera.posx > 0:
self.camera.posx -= 1
self.reset_selection()
# print(self.camera.posx, self.camera.posy)
elif key == blt.TK_D:
if self.cursor.x < self.camera.width - 1:
self.cursor.x += 1
self.reset_selection()
elif self.camera.posx < self.gmap.width - self.camera.width:
self.camera.posx += 1
self.reset_selection()
# print(self.camera.posx, self.camera.posy)
elif key == blt.TK_W:
if self.cursor.y > 0:
self.cursor.y -= 1
self.reset_selection()
elif self.camera.posy > 0:
self.camera.posy -= 1
self.reset_selection()
# print(self.camera.posx, self.camera.posy)
elif key == blt.TK_S:
if self.cursor.y < self.camera.height - 1:
self.cursor.y += 1
self.reset_selection()
elif self.camera.posy < self.gmap.height - self.camera.height:
self.camera.posy += 1
self.reset_selection()
# print(self.camera.posx, self.camera.posy)
elif key ==
# -*- coding: utf-8 -*-
#
# Dynamic Data Exchange Management Library (DDEML) client module
#
# Notes:
# This code has been adapted from <NAME>'s dde-client code from
# ActiveState's Python recipes (Revision 1) modified by <NAME>
# Copyright:
# (c) <NAME>
# Licence:
# New BSD license
# Website:
# http://code.activestate.com/recipes/577654-dde-client/
#
# DDEML will attempt to convert the command string based on the target build of the DDE server,
# which it knows based on whether you called the ANSI or Unicode version of DdeInitialize().
# The one scenario where the conversion does fail is sending a CF_TEXT-format
# command string from a Unicode-build DDE client to an ANSI-build DDE server,
# which is why this module MUST default to CF_UNICODETEXT.
# http://chrisoldwood.blogspot.com/2013/11/dde-xtypexecute-command-corruption.html
#
from ctypes import c_int, c_double, c_char_p, c_wchar_p, c_void_p, c_ulong, c_char, c_byte
from ctypes import windll, byref, create_string_buffer, create_unicode_buffer, Structure, sizeof
from ctypes import POINTER, WINFUNCTYPE
from ctypes.wintypes import BOOL, HWND, MSG, DWORD, BYTE, INT, LPCWSTR, UINT, ULONG, LPCSTR, LPSTR, LPWSTR, WPARAM
import UserDict
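# The CF_TEXT vs. CF_UNICODETEXT mismatch described in the header comment can
# be sketched as follows. This is a hedged illustration only, not part of the
# original recipe: the helper name _encode_dde_command is an assumption, and a
# real Windows client would use the 'mbcs' codec rather than 'latin-1' for the
# ANSI case.
def _encode_dde_command(cmd, unicode_fmt=True):
    # CF_UNICODETEXT payloads are UTF-16-LE with a two-byte NUL terminator;
    # CF_TEXT payloads are single-byte ANSI text with a one-byte terminator.
    # Sending one encoding to a server built for the other is what corrupts
    # the command string.
    if unicode_fmt:
        return cmd.encode('utf-16-le') + b'\x00\x00'
    return cmd.encode('latin-1', 'replace') + b'\x00'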
# DECLARE_HANDLE(name) typedef void *name;
HCONV = c_void_p # = DECLARE_HANDLE(HCONV)
HDDEDATA = c_void_p # = DECLARE_HANDLE(HDDEDATA)
HSZ = c_void_p # = DECLARE_HANDLE(HSZ)
LPBYTE = POINTER(BYTE) #LPSTR
LPDWORD = POINTER(DWORD)
ULONG_PTR = WPARAM #c_ulong
DWORD_PTR = ULONG_PTR
HCONVLIST = c_void_p # = DECLARE_HANDLE(HCONVLIST)
PCONVINFO = c_void_p
PCONVCONTEXT = c_void_p
# DDEML errors
DMLERR_NO_ERROR = 0x0000 # No error
DMLERR_ADVACKTIMEOUT = 0x4000 # request for synchronous advise transaction timed out
DMLERR_BUSY = 0x4001 # response to the transaction caused the DDE_FBUSY flag to be set
DMLERR_DATAACKTIMEOUT = 0x4002 # request for synchronous data transaction timed out
DMLERR_DLL_NOT_INITIALIZED = 0x4003 # DDEML functions called without initializing
DMLERR_DLL_USAGE = 0x4004 # application initialized as APPCLASS_MONITOR attempted a DDE transaction
DMLERR_EXECACKTIMEOUT = 0x4005 # request for synchronous execute transaction timed out
DMLERR_INVALIDPARAMETER = 0x4006 # a parameter failed to be validated by the DDEML
DMLERR_LOW_MEMORY = 0x4007 # DDEML application has created a prolonged race condition; memory is depleted
DMLERR_MEMORY_ERROR = 0x4008 # a memory allocation has failed
DMLERR_NOTPROCESSED = 0x4009 # a transaction has failed
DMLERR_NO_CONV_ESTABLISHED = 0x400a # client's attempt to establish a conversation has failed (can happen during DdeConnect)
DMLERR_POKEACKTIMEOUT = 0x400b # A request for a synchronous poke transaction has timed out.
DMLERR_POSTMSG_FAILED = 0x400c # An internal call to the PostMessage function has failed.
DMLERR_REENTRANCY = 0x400d # synchronous transaction attempted while one was already in progress
DMLERR_SERVER_DIED = 0x400e # server terminated before completing a transaction
DMLERR_SYS_ERROR = 0x400f # an internal error has occurred in the DDEML
DMLERR_UNADVACKTIMEOUT = 0x4010 # request to end an advise transaction has timed out
DMLERR_UNFOUND_QUEUE_ID = 0x4011 # invalid transaction identifier passed to a DDEML function
# Predefined Clipboard Formats
CF_TEXT = 1
CF_BITMAP = 2
CF_METAFILEPICT = 3
CF_SYLK = 4
CF_DIF = 5
CF_TIFF = 6
CF_OEMTEXT = 7
CF_DIB = 8
CF_PALETTE = 9
CF_PENDATA = 10
CF_RIFF = 11
CF_WAVE = 12
CF_UNICODETEXT = 13
CF_ENHMETAFILE = 14
CF_HDROP = 15
CF_LOCALE = 16
CF_DIBV5 = 17
CF_MAX = 18
# DDE conversation states
XST_NULL = 0 # quiescent states
XST_INCOMPLETE = 1
XST_CONNECTED = 2
XST_INIT1 = 3 # mid-initiation states
XST_INIT2 = 4
XST_REQSENT = 5 # active conversation states
XST_DATARCVD = 6
XST_POKESENT = 7
XST_POKEACKRCVD = 8
XST_EXECSENT = 9
XST_EXECACKRCVD = 10
XST_ADVSENT = 11
XST_UNADVSENT = 12
XST_ADVACKRCVD = 13
XST_UNADVACKRCVD = 14
XST_ADVDATASENT = 15
XST_ADVDATAACKRCVD = 16
# DDE conversation status bits
ST_CONNECTED = 0x0001
ST_ADVISE = 0x0002
ST_ISLOCAL = 0x0004
ST_BLOCKED = 0x0008
ST_CLIENT = 0x0010
ST_TERMINATED = 0x0020
ST_INLIST = 0x0040
ST_BLOCKNEXT = 0x0080
ST_ISSELF = 0x0100
# DDE constants for wStatus field
DDE_FACK = 0x8000
DDE_FBUSY = 0x4000
DDE_FDEFERUPD = 0x4000
DDE_FACKREQ = 0x8000
DDE_FRELEASE = 0x2000
DDE_FREQUESTED = 0x1000
DDE_FAPPSTATUS = 0x00FF
DDE_FNOTPROCESSED = 0x0000
DDE_FACKRESERVED = (~(DDE_FACK | DDE_FBUSY | DDE_FAPPSTATUS))
DDE_FADVRESERVED = (~(DDE_FACKREQ | DDE_FDEFERUPD))
DDE_FDATRESERVED = (~(DDE_FACKREQ | DDE_FRELEASE | DDE_FREQUESTED))
DDE_FPOKRESERVED = (~(DDE_FRELEASE))
# DDEML Transaction class flags
XTYPF_NOBLOCK = 0x0002
XTYPF_NODATA = 0x0004
XTYPF_ACKREQ = 0x0008
XCLASS_MASK = 0xFC00
XCLASS_BOOL = 0x1000
XCLASS_DATA = 0x2000
XCLASS_FLAGS = 0x4000
XCLASS_NOTIFICATION = 0x8000
XTYP_ERROR = (0x0000 | XCLASS_NOTIFICATION | XTYPF_NOBLOCK)
XTYP_ADVDATA = (0x0010 | XCLASS_FLAGS)
XTYP_ADVREQ = (0x0020 | XCLASS_DATA | XTYPF_NOBLOCK)
XTYP_ADVSTART = (0x0030 | XCLASS_BOOL)
XTYP_ADVSTOP = (0x0040 | XCLASS_NOTIFICATION)
XTYP_EXECUTE = (0x0050 | XCLASS_FLAGS)
XTYP_CONNECT = (0x0060 | XCLASS_BOOL | XTYPF_NOBLOCK)
XTYP_CONNECT_CONFIRM = (0x0070 | XCLASS_NOTIFICATION | XTYPF_NOBLOCK)
XTYP_XACT_COMPLETE = (0x0080 | XCLASS_NOTIFICATION )
XTYP_POKE = (0x0090 | XCLASS_FLAGS)
XTYP_REGISTER = (0x00A0 | XCLASS_NOTIFICATION | XTYPF_NOBLOCK )
XTYP_REQUEST = (0x00B0 | XCLASS_DATA )
XTYP_DISCONNECT = (0x00C0 | XCLASS_NOTIFICATION | XTYPF_NOBLOCK )
XTYP_UNREGISTER = (0x00D0 | XCLASS_NOTIFICATION | XTYPF_NOBLOCK )
XTYP_WILDCONNECT = (0x00E0 | XCLASS_DATA | XTYPF_NOBLOCK)
XTYP_MONITOR = (0x00F0 | XCLASS_NOTIFICATION | XTYPF_NOBLOCK)
XTYP_MASK = 0x00F0
XTYP_SHIFT = 4
# DDE Timeout constants
TIMEOUT_ASYNC = 0xFFFFFFFF
# DDE Transaction ID constants
QID_SYNC = 0xFFFFFFFF
# DDE Initialization flags (afCmd)
# Callback filter flags for use with standard apps.
CBF_FAIL_SELFCONNECTIONS = 0x00001000
CBF_FAIL_CONNECTIONS = 0x00002000
CBF_FAIL_ADVISES = 0x00004000
CBF_FAIL_EXECUTES = 0x00008000
CBF_FAIL_POKES = 0x00010000
CBF_FAIL_REQUESTS = 0x00020000
CBF_FAIL_ALLSVRXACTIONS = 0x0003f000
CBF_SKIP_CONNECT_CONFIRMS = 0x00040000
CBF_SKIP_REGISTRATIONS = 0x00080000
CBF_SKIP_UNREGISTRATIONS = 0x00100000
CBF_SKIP_DISCONNECTS = 0x00200000
CBF_SKIP_ALLNOTIFICATIONS = 0x003c0000
# Application command flags
APPCMD_CLIENTONLY = 0x00000010
APPCMD_FILTERINITS = 0x00000020
APPCMD_MASK = 0x00000FF0
# Application classification flags
APPCLASS_STANDARD = 0x00000000
APPCLASS_MASK = 0x0000000F
APPCLASS_MONITOR = 0x00000001
# Callback filter flags for use with MONITOR apps - 0 implies no monitor callbacks.
MF_HSZ_INFO = 0x01000000
MF_SENDMSGS = 0x02000000
MF_POSTMSGS = 0x04000000
MF_CALLBACKS = 0x08000000
MF_ERRORS = 0x10000000
MF_LINKS = 0x20000000
MF_CONV = 0x40000000
MF_MASK = 0xFF000000
# Code page for rendering string.
CP_WINANSI = 1004 # default codepage for windows & old DDE convs.
CP_WINUNICODE = 1200
# Declaration
DDECALLBACK = WINFUNCTYPE(HDDEDATA, UINT, UINT, HCONV, HSZ, HSZ, HDDEDATA, ULONG_PTR, ULONG_PTR)
class SECURITY_QUALITY_OF_SERVICE(Structure):
_fields_ = [
("Length", c_ulong),
("ImpersonationLevel", c_ulong),
("ContextTrackingMode", c_byte ),
("EffectiveOnly", c_byte )
]
class CONVCONTEXT(Structure):
_fields_ = [
("cb", UINT ),
("wFlags", UINT ),
("wCountryID", UINT ),
("iCodePage", c_int ),
("dwLangID", DWORD ),
("dwSecurity", DWORD ),
("qos", SECURITY_QUALITY_OF_SERVICE)
]
class CONVINFO(Structure):
_fields_ = [
("cb", DWORD ),
("hUser", DWORD_PTR ),
("hConvPartner", HCONV ),
("hszSvcPartner", HSZ ),
("hszServiceReq", HSZ ),
("hszTopic", HSZ ),
("hszItem", HSZ ),
("wFmt", UINT ),
("wType", UINT ),
("wStatus", UINT ),
("wConvst", UINT ),
("wLastError", UINT ),
("hConvList", HCONVLIST ),
("ConvCtxt", CONVCONTEXT),
("hwnd", HWND ),
("hwndPartner", HWND )
]
def get_winfunc(funcname, restype=None, argtypes=()):
"""Retrieve a function from a library/DLL, and set the data types."""
func = getattr(windll.user32, funcname)
func.argtypes = argtypes
func.restype = restype
return func
class DDE(object):
"""Object containing all the DDEML functions"""
#AccessData = get_winfunc("DdeAccessData", LPBYTE, (HDDEDATA, LPDWORD))
ClientTransaction = get_winfunc("DdeClientTransaction", HDDEDATA, (LPBYTE, DWORD, HCONV, HSZ, UINT, UINT, DWORD, LPDWORD))
Connect = get_winfunc("DdeConnect", HCONV, (DWORD, HSZ, HSZ, PCONVCONTEXT))
#CreateDataHandle = get_winfunc("DdeCreateDataHandle", HDDEDATA, (DWORD, LPBYTE, DWORD, DWORD, HSZ, UINT, UINT))
CreateStringHandle = get_winfunc("DdeCreateStringHandleW", HSZ, (DWORD, LPCWSTR, UINT)) # Unicode version
#CreateStringHandle = get_winfunc("DdeCreateStringHandleA", HSZ, (DWORD, LPCSTR, UINT)) # ANSI version
Disconnect = get_winfunc("DdeDisconnect", BOOL, (HCONV,))
GetLastError = get_winfunc("DdeGetLastError", UINT, (DWORD,))
Initialize = get_winfunc("DdeInitializeW", UINT, (LPDWORD, DDECALLBACK, DWORD, DWORD)) # Unicode version of DDE initialize
#Initialize = get_winfunc("DdeInitializeA", UINT, (LPDWORD, DDECALLBACK, DWORD, DWORD)) # ANSI version of DDE initialize
FreeDataHandle = get_winfunc("DdeFreeDataHandle", BOOL, (HDDEDATA,))
FreeStringHandle = get_winfunc("DdeFreeStringHandle", BOOL, (DWORD, HSZ))
QueryString = get_winfunc("DdeQueryStringW", DWORD, (DWORD, HSZ, LPWSTR, DWORD, c_int)) # Unicode version of QueryString
#QueryString = get_winfunc("DdeQueryStringA", DWORD, (DWORD, HSZ, LPSTR, DWORD, c_int)) # ANSI version of QueryString
#UnaccessData = get_winfunc("DdeUnaccessData", BOOL, (HDDEDATA,))
Uninitialize = get_winfunc("DdeUninitialize", BOOL, (DWORD,))
AbandonTransaction = get_winfunc("DdeAbandonTransaction", BOOL, (DWORD, HCONV, DWORD))
GetData = get_winfunc("DdeGetData", DWORD, (HDDEDATA, LPBYTE, DWORD, DWORD))
QueryConvInfo = get_winfunc("DdeQueryConvInfo", UINT, (HCONV, DWORD, PCONVINFO))
@classmethod
def get_data(cls, hDdeData):
"""
Gets raw DDE data.
If text is expected, it may need to be decoded with decode().
"""
if not hDdeData:
return None
i = cls.GetData(hDdeData, None, 0, 0)
if i:
array = bytearray(i)
pDst = (c_byte * i).from_buffer(array)
cls.GetData(hDdeData, pDst, i, 0)
return array
# pDst = (c_byte * i)()
# cls.GetData(hDdeData, pDst, i, 0)
# return cast(pDst, ctypes.c_char_p).value
# return bytearray(pDst)
else:
return None
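The `(c_byte * i).from_buffer(array)` call in `get_data` builds a ctypes array that shares memory with a `bytearray`, so the second `DdeGetData` call writes straight into the Python buffer with no extra copy. A minimal sketch of that zero-copy pattern, using `ctypes.memmove` as a stand-in for the C-level write:

```python
import ctypes

# a ctypes view that shares memory with the bytearray (no copy)
buf = bytearray(4)
view = (ctypes.c_byte * len(buf)).from_buffer(buf)

# a C-level write through the view lands directly in the bytearray
ctypes.memmove(view, b"\x01\x02\x03\x04", len(buf))
```

After the `memmove`, `buf` already holds the written bytes; this is why `get_data` can return `array` directly after the second `GetData` call.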
@classmethod
def query_string(cls, idInst, hsz):
if not hsz:
return ""
i = cls.QueryString(idInst, hsz, None, 0, CP_WINUNICODE)
if i:
i += 1
buffer = create_unicode_buffer(i)
cls.QueryString(idInst, hsz, buffer, i, CP_WINUNICODE)
return buffer.value
else:
return ""
NOTHING = object()
class DDEError(OSError):
"""Exception raise when a DDE error occures."""
def __init__(self, msg, conv=None, app=None, num=NOTHING):
self.winerror = 0
if num == NOTHING and app is not None:
num = DDE.GetLastError(app)
elif num == NOTHING and conv is not None:
num = DDE.GetLastError(conv._idInst)
if num != NOTHING:
self.winerror = num
msg = hex(num) + " " + msg
# + " " + FormatError(num) windows have no strings about DDE
if conv is not None:
msg += " (service=%s, topic=%s)" % (conv.service, conv.topic)
super(DDEError, self).__init__(msg.encode("utf-8"))
class DDEClient(UserDict.DictMixin):
"""The DDEClient class.
Use this class to create and manage DDE conversations with service/topic.
DDEClient caches DDE conversations, each identified by a case-insensitive service/topic pair.
Therefore DDEClient has features of a dictionary:
dde_client = DDEClient()
# is empty dict
bool(dde_client)
> False
dde_client[("Service","Topic")].execute("This")
bool(dde_client)
> True
print "There are %i ready conversations." % len(dde_client)
# this always will disconnect irrespective of any other references:
del dde_client[("service","topic")]
#or dde_client.clear()
#or instead - finalize:
dde_client.shutdown()
There is no need to shutdown() DDEClient on DDE connection errors!
callback -- any function taking **kwargs that returns None if not DDE specific.
Generally you want to check which type==XTYP_* you were waiting for and what it brings: str1, str2, data...
For
from rpython.rtyper.lltypesystem import lltype, llmemory
from rpython.flowspace.model import mkentrymap, checkgraph, Block, Link
from rpython.flowspace.model import Variable, Constant, SpaceOperation
from rpython.tool.algo.regalloc import perform_register_allocation
from rpython.tool.algo.unionfind import UnionFind
from rpython.translator.unsimplify import varoftype, insert_empty_block
from rpython.translator.unsimplify import insert_empty_startblock, split_block
from rpython.translator.simplify import join_blocks
from rpython.rlib.rarithmetic import intmask
from collections import defaultdict
def is_trivial_rewrite(op):
return (op.opname in ('same_as', 'cast_pointer', 'cast_opaque_ptr')
and isinstance(op.args[0], Variable))
def find_predecessors(graph, pending_pred):
"""Return the set of variables whose content can end up inside one
of the 'pending_pred', which is a list of (block, var) tuples.
"""
entrymap = mkentrymap(graph)
if len(entrymap[graph.startblock]) != 1:
insert_empty_startblock(graph)
entrymap = mkentrymap(graph)
pred = set([v for block, v in pending_pred])
def add(block, v):
if isinstance(v, Variable):
if v not in pred:
pending_pred.append((block, v))
pred.add(v)
while pending_pred:
block, v = pending_pred.pop()
if v in block.inputargs:
var_index = block.inputargs.index(v)
for link in entrymap[block]:
prevblock = link.prevblock
if prevblock is not None:
add(prevblock, link.args[var_index])
else:
for op in block.operations:
if op.result is v:
if is_trivial_rewrite(op):
add(block, op.args[0])
break
return pred
def find_successors(graph, pending_succ):
"""Return the set of variables where one of the 'pending_succ' can
end up. 'block_succ' is a list of (block, var) tuples.
"""
succ = set([v for block, v in pending_succ])
def add(block, v):
if isinstance(v, Variable):
if v not in succ:
pending_succ.append((block, v))
succ.add(v)
while pending_succ:
block, v = pending_succ.pop()
for op in block.operations:
if op.args and v is op.args[0] and is_trivial_rewrite(op):
add(block, op.result)
for link in block.exits:
for i, v1 in enumerate(link.args):
if v1 is v:
add(link.target, link.target.inputargs[i])
return succ
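Both `find_predecessors` and `find_successors` are worklist fixpoint computations: pop a pending item, add any unseen neighbours back onto the worklist, and repeat until the set stops growing. A stripped-down sketch of the same pattern over a plain dict-of-lists graph (hypothetical helper, not part of the module):

```python
def reachable(graph, starts):
    # worklist fixpoint: the result set only grows, so this terminates
    seen = set(starts)
    pending = list(starts)
    while pending:
        node = pending.pop()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                pending.append(nxt)
    return seen
```

In the real functions the "neighbours" are link arguments between blocks and the results of trivial rewrites (`same_as`, `cast_pointer`, `cast_opaque_ptr`), walked backwards for predecessors and forwards for successors.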
def find_interesting_variables(graph):
# Decide which variables are "interesting" or not. Interesting
# variables contain at least the ones that appear in gc_push_roots
# and gc_pop_roots.
pending_pred = []
pending_succ = []
interesting_vars = set()
for block in graph.iterblocks():
for op in block.operations:
if op.opname == 'gc_push_roots':
for v in op.args:
if not isinstance(v, Variable):
continue
interesting_vars.add(v)
pending_pred.append((block, v))
elif op.opname == 'gc_pop_roots':
for v in op.args:
if not isinstance(v, Variable):
continue
assert v in interesting_vars # must be pushed just above
pending_succ.append((block, v))
if not interesting_vars:
return None
# If there is a path from a gc_pop_roots(v) to a subsequent
# gc_push_roots(w) where w contains the same value as v along that
# path, then we consider all intermediate blocks along that path
# which contain a copy of the same value, and add these variables
# as "interesting", too. Formally, a variable in a block is
# "interesting" if it is both a "predecessor" and a "successor",
# where predecessors are variables which (sometimes) end in a
# gc_push_roots, and successors are variables which (sometimes)
# come from a gc_pop_roots.
pred = find_predecessors(graph, pending_pred)
succ = find_successors(graph, pending_succ)
interesting_vars |= (pred & succ)
return interesting_vars
def allocate_registers(graph):
interesting_vars = find_interesting_variables(graph)
if not interesting_vars:
return None
regalloc = perform_register_allocation(graph, interesting_vars.__contains__)
assert regalloc.graph is graph
regalloc.find_num_colors()
return regalloc
def _gc_save_root(index, var):
c_index = Constant(index, lltype.Signed)
return SpaceOperation('gc_save_root', [c_index, var],
varoftype(lltype.Void))
def _gc_restore_root(index, var):
c_index = Constant(index, lltype.Signed)
return SpaceOperation('gc_restore_root', [c_index, var],
varoftype(lltype.Void))
def make_bitmask(filled, graph='?'):
n = filled.count(False)
if n == 0:
return (None, None)
bitmask = 0
last_index = 0
for i in range(len(filled)):
if not filled[i]:
bitmask <<= (i - last_index)
last_index = i
bitmask |= 1
assert bitmask & 1
if bitmask != intmask(bitmask):
raise GCBitmaskTooLong("the graph %r is too complex: cannot create "
"a bitmask telling that more than 31/63 "
"shadowstack entries are unused" % (graph,))
# the mask is always a positive value, but it is replaced by a
# negative value during a minor collection root walking. Then,
# if the next minor collection finds an already-negative value,
# we know we can stop. So that's why we don't include here an
# optimization to not re-write a same-valued mask: it is important
# to re-write the value, to turn it from potentially negative back
# to positive, in order to mark this shadow frame as modified.
assert bitmask > 0
return (last_index, bitmask)
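For illustration, the bitmask loop above can be restated as a standalone sketch (mirroring `make_bitmask` without the `intmask` overflow check): each unused slot shifts the mask by the distance to the previous unused slot and sets the low bit, so the low bit always corresponds to `last_index`.

```python
def make_bitmask_demo(filled):
    # filled[i] is True if shadowstack entry i is in use
    if filled.count(False) == 0:
        return (None, None)
    bitmask = 0
    last_index = 0
    for i, used in enumerate(filled):
        if not used:
            bitmask <<= (i - last_index)  # shift by gap to previous unused slot
            last_index = i
            bitmask |= 1                  # low bit marks the latest unused slot
    return (last_index, bitmask)
```

For example, with entries 0 and 2 unused the mask is 0b101 written at index 2.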
def expand_one_push_roots(regalloc, args):
if regalloc is None:
assert len(args) == 0
else:
filled = [False] * regalloc.numcolors
for v in args:
index = regalloc.getcolor(v)
assert not filled[index]
filled[index] = True
yield _gc_save_root(index, v)
bitmask_index, bitmask = make_bitmask(filled, regalloc.graph)
if bitmask_index is not None:
# xxx we might in some cases avoid this gc_save_root
# entirely, if we know we're after another gc_push/gc_pop
# that wrote exactly the same mask at the same index
bitmask_c = Constant(bitmask, lltype.Signed)
yield _gc_save_root(bitmask_index, bitmask_c)
def expand_one_pop_roots(regalloc, args):
if regalloc is None:
assert len(args) == 0
else:
for v in args:
index = regalloc.getcolor(v)
yield _gc_restore_root(index, v)
def expand_push_roots(graph, regalloc):
"""Expand gc_push_roots into a series of gc_save_root, including
writing a bitmask tag to mark some entries as not-in-use.
(If regalloc is None, it will still remove empty gc_push_roots.)
"""
for block in graph.iterblocks():
any_change = False
newops = []
for op in block.operations:
if op.opname == 'gc_push_roots':
args = [v for v in op.args if isinstance(v, Variable)]
newops += expand_one_push_roots(regalloc, args)
any_change = True
else:
newops.append(op)
if any_change:
block.operations = newops
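`expand_push_roots` is a classic rewrite pass: walk each operation list, expand the pseudo-op into its concrete form, and copy everything else through unchanged. A minimal sketch of the same shape over tuples (hypothetical op encoding, not the real SpaceOperation objects):

```python
def expand_one_push_roots_demo(args):
    # one 'save_root' per root variable, like the gc_save_root expansion
    for index, v in enumerate(args):
        yield ("save_root", index, v)

def expand_push_roots_demo(ops):
    newops = []
    for op in ops:
        if op[0] == "push_roots":
            newops.extend(expand_one_push_roots_demo(op[1]))
        else:
            newops.append(op)
    return newops
```

The real pass additionally assigns indices via the register allocator's coloring and appends the bitmask `gc_save_root` for unused slots.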
def move_pushes_earlier(graph, regalloc):
"""gc_push_roots and gc_pop_roots are pushes/pops to the shadowstack,
immediately enclosing the operation that needs them (typically a call).
Here, we try to move individual pushes earlier.
Should run after expand_push_roots(), but before expand_pop_roots(),
so that it sees individual 'gc_save_root' operations but bulk
'gc_pop_roots' operations.
"""
# Concrete example (assembler tested on x86-64 gcc 5.3 and clang 3.7):
#
# ----original---- ----move_pushes_earlier----
#
# while (a > 10) { *foo = b;
# *foo = b; while (a > 10) {
# a = g(a); a = g(a);
# b = *foo; b = *foo;
# // *foo = b;
# } }
# return b; return b;
#
# => the store and the => the store is before, and gcc/clang
# load are in the loop, moves the load after the loop
# even in the assembler (the commented-out '*foo=b' is removed
# here, but gcc/clang would also remove it)
# Draft of the algorithm: see shadowcolor.txt
if not regalloc:
return
entrymap = mkentrymap(graph)
assert len(entrymap[graph.startblock]) == 1
inputvars = {} # {inputvar: (its block, its index in inputargs)}
for block in graph.iterblocks():
for i, v in enumerate(block.inputargs):
inputvars[v] = (block, i)
Plist = []
for index in range(regalloc.numcolors):
U = UnionFind()
S = set()
for block in graph.iterblocks():
for op in reversed(block.operations):
if op.opname == 'gc_pop_roots':
break
else:
continue # no gc_pop_roots in this block
for v in op.args:
if isinstance(v, Variable) and regalloc.checkcolor(v, index):
break
else:
continue # no variable goes into index i
succ = set()
pending_succ = [(block, v)]
while pending_succ:
block1, v1 = pending_succ.pop()
assert regalloc.checkcolor(v1, index)
for op1 in block1.operations:
if is_trivial_rewrite(op1) and op1.args[0] is v1:
if regalloc.checkcolor(op1.result, index):
pending_succ.append((block1, op1.result))
for link1 in block1.exits:
for i2, v2 in enumerate(link1.args):
if v2 is not v1:
continue
block2 = link1.target
w2 = block2.inputargs[i2]
if w2 in succ or not regalloc.checkcolor(w2, index):
continue
succ.add(w2)
for op2 in block2.operations:
if op2.opname in ('gc_save_root', 'gc_pop_roots'):
break
else:
pending_succ.append((block2, w2))
U.union_list(list(succ))
S.update(succ)
G = defaultdict(set)
for block in graph.iterblocks():
found = False
for opindex, op in enumerate(block.operations):
if op.opname == 'gc_save_root':
if (isinstance(op.args[1], Constant) and
op.args[1].concretetype == lltype.Signed):
break
elif op.args[0].value == index:
found = True
break
if not found or not isinstance(op.args[1], Variable):
continue # no matching gc_save_root in this block
key = (block, op)
pred = set()
pending_pred = [(block, op.args[1], opindex)]
while pending_pred:
block1, v1, opindex1 = pending_pred.pop()
assert regalloc.getcolor(v1) == index
for i in range(opindex1-1, -1, -1):
op1 = block1.operations[i]
if op1.opname == 'gc_pop_roots':
break # stop
if op1.result is v1:
if not is_trivial_rewrite(op1):
break # stop
if not regalloc.checkcolor(op1.args[0], index):
break # stop
v1 = op1.args[0]
else:
varindex = block1.inputargs.index(v1)
if v1 in pred:
continue # already done
pred.add(v1)
for link1 in entrymap[block1]:
prevblock1 = link1.prevblock
if prevblock1 is not None:
w1 = link1.args[varindex]
if isinstance(w1, Variable) and w1 not in pred:
if regalloc.checkcolor(w1, index):
pending_pred.append((prevblock1, w1,
len(prevblock1.operations)))
U.union_list(list(pred))
| |
import random
from django.contrib.auth import get_user_model
from django.contrib.auth.models import AbstractUser
from django.contrib.gis.db import models
from django.contrib.postgres.fields import ArrayField
from django.core.exceptions import ValidationError
from django.core.validators import MaxValueValidator, MinValueValidator
from django.db.models.signals import m2m_changed
from django.forms import MultipleChoiceField
from django.forms.widgets import CheckboxSelectMultiple
from django.utils.translation import ugettext_lazy as _
from django_countries.fields import CountryField
from . import data
from .utils.common import join_true_values
class MultiSelectField(ArrayField):
class MyMultipleChoiceField(MultipleChoiceField):
widget = CheckboxSelectMultiple
def formfield(self, **kwargs):
defaults = {
"form_class": self.MyMultipleChoiceField,
"choices": self.base_field.choices,
}
defaults.update(kwargs)
return super(ArrayField, self).formfield(**defaults)
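The `super(ArrayField, self).formfield(**defaults)` call above deliberately starts the MRO lookup *after* `ArrayField`, skipping `ArrayField.formfield` (which would otherwise override the choices-based form field) and going straight to the grandparent's implementation. A minimal sketch of that two-argument `super()` behaviour with hypothetical classes:

```python
class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B"

class C(B):
    def who(self):
        # super(B, self) starts the MRO search after B,
        # so B.who is skipped and A.who is found instead
        return super(B, self).who()
```

`C().who()` resolves through `C -> B -> A` but the explicit `super(B, self)` jumps past `B`.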
class User(AbstractUser):
date_added = models.DateField(auto_now_add=True)
date_edited = models.DateField(auto_now=True)
last_name = models.CharField(max_length=128, null=False, blank=False)
first_name = models.CharField(max_length=128, null=False, blank=False)
email = models.EmailField(null=False, blank=False, unique=True)
title = models.CharField(max_length=16, null=True, blank=True, choices=data.Title.choices)
gender = models.CharField(max_length=16, null=True, blank=True, choices=data.Gender.choices)
position = models.CharField(max_length=256, null=False, blank=False)
affiliations = models.ManyToManyField(
"Affiliation", related_name="user", blank=True, help_text="Upto 3 affiliations can be added."
)
career_stage = models.CharField(
max_length=16, null=True, blank=True, choices=data.CareerStage.choices, verbose_name="I am "
)
career_stage_note = models.CharField(_("Other"), null=True, blank=True, max_length=256)
year_of_last_degree_graduation = models.PositiveIntegerField(
_("Year of last degree graduation"),
null=True,
blank=True,
validators=[MinValueValidator(1900), MaxValueValidator(2100)],
)
preferences = MultiSelectField(
models.CharField(choices=data.EXPERTS_PREFERENCES, max_length=128), default=list, null=True, blank=True
)
official_functions = models.TextField(
null=True,
blank=True,
help_text="Official functions that I hold in national and international programs, commissions, etc.",
)
photo = models.ImageField(
upload_to="experts", null=True, blank=True,
help_text="Format: .jpg, .png, Size: 300x350 pixels"
)
url_personal = models.URLField(
_("Personal website"),
null=True,
blank=True,
max_length=1024,
help_text="Link to personal or professional homepage",
)
url_cv = models.URLField(
_("Curriculum Vitae"), null=True, blank=True, max_length=1024, help_text="Link to CV, e.g. on LinkedIn"
)
url_researchgate = models.URLField(
_("ResearchGate link"), null=True, blank=True, max_length=1024, help_text="Link to your profile"
)
orcid = models.CharField(
_("ORCID"),
max_length=128,
null=True,
blank=True,
unique=True,
help_text="ORCID is a persistent unique digital identifier that you own and control",
)
url_publications = models.URLField(_("Link to publications"), null=True, blank=True, max_length=1024)
list_publications = models.TextField(_("Free text list of publications"), null=True, blank=True)
is_public = models.BooleanField(
default=False, verbose_name="I allow for my profile to be publicly visible in the MRI Expert Database"
)
is_photo_public = models.BooleanField(
default=False, verbose_name="I allow for my photo to be publicly visible in the MRI Expert Database"
)
is_subscribed_to_newsletter = models.BooleanField(
default=False, blank=False, null=False, verbose_name="I would like to subscribe to the MRI Global Newsletter"
)
# 2 EXPERTISE
@property
def fullname(self):
namearray = []
if self.title:
namearray.append(self.title)
if self.first_name:
namearray.append(self.first_name)
if self.last_name:
namearray.append(self.last_name)
return " ".join(namearray)
@property
def publications(self):
pub = self.list_publications
return pub.replace("\n", "<br>").replace("¶", "<br>")
@property
def career(self):
if self.career_stage == "OTHER":
return self.career_stage_note
return self.get_career_stage_display()
def __str__(self):
return f"{self.first_name} {self.last_name} ({self.username})"
class Meta:
verbose_name = _("Expert")
verbose_name_plural = _("Experts")
class Affiliation(models.Model):
name = models.CharField(max_length=512, null=False, blank=False)
street = models.CharField(max_length=1024, null=True, blank=True)
post_code = models.CharField(max_length=256, null=True, blank=True)
city = models.CharField(max_length=256, null=True, blank=True)
country = CountryField(null=False, blank=False)
creator = models.ForeignKey(get_user_model(), on_delete=models.SET_NULL, null=True, blank=True)
@property
def location(self):
if not self.city:
return self.country.name or ""
if not self.country:
return self.city or ""
return ", ".join([self.city.strip(), self.country.name])
def __str__(self):
return self.name
class Project(models.Model):
name = models.CharField(max_length=256)
acronym = models.CharField(max_length=16, null=True, blank=True)
date_start = models.DateField(null=True, blank=True)
date_ending = models.DateField(null=True, blank=True)
funding_source = models.CharField(max_length=256, null=True, blank=True)
role = models.CharField(max_length=256, null=True, blank=True)
homepage = models.URLField(null=True, blank=True, max_length=1024)
location = models.CharField(
max_length=256,
null=True,
blank=True,
help_text="This is the location where the research is conducted or the fieldwork, not the home of research group/affiliation",
)
# Although we are not using it at the moment, it has been in use for some time
# and the database may still contain entries for it, so we let it stay.
coordinates = models.PointField(null=True, blank=True)
country = CountryField(
null=True,
blank=True,
help_text="This is the country where the research is conducted or the fieldwork, not the home of research group/affiliation",
)
user = models.ForeignKey(get_user_model(), on_delete=models.CASCADE, related_name="projects")
def __str__(self):
return self.name
class MountainManager(models.Manager):
def get_queryset(self):
return super().get_queryset().all().order_by("name")
def random(self, n):
ids = random.sample([obj.id for obj in self.get_queryset().all()], k=self.get_queryset().count() - n)
return self.get_queryset().exclude(id__in=ids)
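`MountainManager.random` keeps `n` random rows by excluding a random sample of all the *other* ids, rather than selecting `n` directly. A pure-Python sketch of the same idea (hypothetical helper, no Django required):

```python
import random

def random_subset_by_exclusion(ids, n):
    # sample (len - n) ids to throw away; the n survivors are random
    excluded = set(random.sample(ids, k=len(ids) - n))
    return [i for i in ids if i not in excluded]
```

Selecting via exclusion keeps the query a single `exclude(id__in=...)` filter, at the cost of sampling `count - n` ids instead of `n`.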
# GMBA Mountains
class Mountain(models.Model):
name = models.CharField(max_length=256, unique=True, blank=False, null=False)
objects = MountainManager()
def __str__(self):
return self.name
class Expertise(models.Model):
research_expertise = models.ManyToManyField("ResearchExpertise", blank=True)
atmospheric_sciences = models.ManyToManyField("AtmosphericSciences", blank=True)
other_atmospheric_sciences = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
biological_sciences = models.ManyToManyField("BiologicalSciences", blank=True)
other_biological_sciences = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
cryospheric_sciences = models.ManyToManyField("CryosphericSciences", blank=True)
other_cryospheric_sciences = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
earth_sciences = models.ManyToManyField("EarthSciences", blank=True)
other_earth_sciences = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
hydrospheric_sciences = models.ManyToManyField("HydrosphericSciences", blank=True)
other_hydrospheric_sciences = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
integrated_systems = models.ManyToManyField("IntegratedSystems", blank=True)
other_integrated_systems = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
social_sciences_and_humanities = models.ManyToManyField("SocialSciencesAndHumanities", blank=True)
other_social_sciences_and_humanities = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
spatial_scale_of_expertise = models.ManyToManyField("SpatialScaleOfExpertise", blank=True)
other_spatial_scale_of_expertise = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
statistical_focus = models.ManyToManyField("StatisticalFocus", blank=True)
other_statistical_focus = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
time_scales = models.ManyToManyField("TimeScales", blank=True)
other_time_scales = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
methods = models.ManyToManyField("Methods", blank=True)
other_methods = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
mountain_ranges_of_research_expertise = models.ManyToManyField(Mountain, blank=True, related_name="+")
other_mountain_ranges_of_research_expertise = models.TextField(null=True, blank=True)
mountain_ranges_of_research_interest = models.ManyToManyField(Mountain, blank=True, related_name="+")
other_mountain_ranges_of_research_interest = models.TextField(null=True, blank=True)
participation_in_assessments = models.ManyToManyField("ParticipationInAssessments", blank=True)
other_participation_in_assessments = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list"
)
more_detail_about_participation_in_assessments = models.TextField(null=True, blank=True)
inputs_or_participation_to_un_conventions = models.ManyToManyField(
"InputsOrParticipationToUNConventions", blank=True,
verbose_name="Inputs / Participation to United Nations Conventions"
)
other_inputs_or_participation_to_un_conventions = models.CharField(
max_length=1024, null=True, blank=True, help_text="This should be a comma separated list",
verbose_name="Other Inputs / Participation to United Nations Conventions"
)
user = models.OneToOneField(
User, help_text="Research expertise", on_delete=models.CASCADE, related_name="expertise"
)
@property
def research_expertise_display(self):
return ", ".join([expertise.title for expertise in self.research_expertise.all()])
@property
def atmospheric_sciences_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.atmospheric_sciences.all()]),
self.other_atmospheric_sciences
])
@property
def hydrospheric_sciences_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.hydrospheric_sciences.all()]),
self.other_hydrospheric_sciences
])
@property
def cryospheric_sciences_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.cryospheric_sciences.all()]),
self.other_cryospheric_sciences
])
@property
def earth_sciences_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.earth_sciences.all()]),
self.other_earth_sciences
])
@property
def biological_sciences_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.biological_sciences.all()]),
self.other_biological_sciences
])
@property
def social_sciences_and_humanities_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.social_sciences_and_humanities.all()]),
self.other_social_sciences_and_humanities
])
@property
def integrated_systems_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.integrated_systems.all()]),
self.other_integrated_systems
])
@property
def spatial_scale_of_expertise_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.spatial_scale_of_expertise.all()]),
self.other_spatial_scale_of_expertise
])
@property
def statistical_focus_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.statistical_focus.all()]),
self.other_statistical_focus
])
@property
def time_scales_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.time_scales.all()]),
self.other_time_scales
])
@property
def methods_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.methods.all()]),
self.other_methods
])
@property
def participation_in_assessments_display(self):
return join_true_values([
", ".join([expertise.title for expertise in self.participation_in_assessments.all()]),
self.other_participation_in_assessments
])
@property
def inputs_or_participation_to_un_conventions_display(self):
return join_true_values(
[
", ".join([expertise.title for expertise in self.inputs_or_participation_to_un_conventions.all()]),
self.other_inputs_or_participation_to_un_conventions,
]
)
@property
def mountain_ranges_of_research_interest_display(self):
return join_true_values(
[
", ".join([mountain.name for mountain in self.mountain_ranges_of_research_interest.all()]),
self.other_mountain_ranges_of_research_interest,
]
)
@property
def mountain_ranges_of_research_expertise_display(self):
return join_true_values(
[
", ".join([mountain.name for mountain in self.mountain_ranges_of_research_expertise.all()]),
self.other_mountain_ranges_of_research_expertise,
]
)
def __str__(self):
return f"{self.user}'s expertise"
class Meta:
verbose_name = _("Expertise")
verbose_name_plural = _("Expertise")
class SortedExpertiseModelManager(models.Manager):
def get_queryset(self):
return super().get_queryset().all().order_by("title")
class ResearchExpertise(models.Model):
title = models.CharField(max_length=1024, null=False, blank=False, unique=True)
objects = SortedExpertiseModelManager()
def __str__(self):
return self.title
class Meta:
verbose_name_plural = "Research Expertise"
class AtmosphericSciences(models.Model):
title = models.CharField(max_length=1024, null=False, blank=False, unique=True)
objects = SortedExpertiseModelManager()
def __str__(self):
return self.title
class Meta:
verbose_name_plural = "Atmospheric Sciences"
class HydrosphericSciences(models.Model):
title = models.CharField(max_length=1024, null=False, blank=False, unique=True)
objects = SortedExpertiseModelManager()
def __str__(self):
return self.title
class Meta:
verbose_name_plural = "Hydrospheric Sciences"
class CryosphericSciences(models.Model):
title = models.CharField(max_length=1024, null=False, blank=False, unique=True)
objects = SortedExpertiseModelManager()
def __str__(self):
return self.title
class Meta:
verbose_name_plural = "Cryospheric Sciences"
class EarthSciences(models.Model):
title = models.CharField(max_length=1024, null=False, blank=False, unique=True)
objects = SortedExpertiseModelManager()
def __str__(self):
return self.title
class Meta:
verbose_name_plural = "Earth Sciences"
class BiologicalSciences(models.Model):
title = models.CharField(max_length=1024, null=False, blank=False, unique=True)
objects = SortedExpertiseModelManager()
def __str__(self):
return self.title
class Meta:
verbose_name_plural = "Biological Sciences"
class SocialSciencesAndHumanities(models.Model):
title = models.CharField(max_length=1024, null=False, blank=False, unique=True)
objects = SortedExpertiseModelManager()
def __str__(self):
return self.title
class Meta:
verbose_name_plural = "Social Sciences And Humanities"
class IntegratedSystems(models.Model):
title = models.CharField(max_length=1024, null=False, blank=False, unique=True)
objects = SortedExpertiseModelManager()
def __str__(self):
return self.title
class Meta:
verbose_name_plural = "Integrated Systems"
class | |
<filename>flopz/target/bolero/bolero_target.py
from typing import Tuple
from flopz.arch.instruction import ArbitraryBytesInstruction
from flopz.core.addressable_object import AddressableObject
from flopz.core.function import Function
from flopz.core.module import Module
from flopz.core.shellcode import Shellcode
from flopz.core.target import Target
from flopz.core.label import Label, LabelRef, AbsoluteLabelRef
from flopz.arch.ppc.vle.instructions import *
from flopz.arch.ppc.vle.auto_instructions import *
from flopz.arch.ppc.vle.e200z0 import E200Z0
from flopz.listener.protocol import Protocol
from flopz.target.bolero.bolero_protocol import BoleroProtocol
class BoleroTarget(Target):
def get_init_gadgets(self, init_slice_addr: int, original_bytes: bytes,
init_gadget_addr: int) -> Tuple[Shellcode, Shellcode]:
core = E200Z0()
slice_patch = Shellcode(address=init_slice_addr, instructions=[
EB(init_gadget_addr - init_slice_addr)
])
def get_jmp_back(x):
return EB((init_slice_addr + 4) - x)
instr_init = Shellcode(address=init_gadget_addr, instructions=[
# first, save r26 so we can use it as a pointer (as in our slice gadget)
Mtspr(core.SPRG0, core.r26),
AutoLoadI(core.r26, (self.target_ram + 8)),
# save r27, r28, r29, r30, r31 to ram
EStmw(core.r27, core.r26, 0),
# initialize our BUF_IDX variable
AutoLoadI(core.r28, 0),
SeSubi(core.r26, 8),
SeStw(core.r28, core.r26, 0),
SeStw(core.r28, core.r26, 4),
SeAddi(core.r26, 8),
# restore registers
ELmw(core.r27, core.r26, 0),
Mfspr(core.r26, core.SPRG0),
# run the original instruction
ArbitraryBytesInstruction(original_bytes),
# return
Label("here"),
AbsoluteLabelRef("here", get_jmp_back)
])
return slice_patch, instr_init
def get_slice_gadgets(self, slice_addr: int, original_bytes: bytes, id: int,
slice_gadget_addr: int, instrumentation_gadget: AddressableObject) -> Tuple[Shellcode, Shellcode]:
core = E200Z0()
slice_patch = Shellcode(address=slice_addr, instructions=[
EB(slice_gadget_addr - slice_addr)
])
slice_gadget = Shellcode(address=slice_gadget_addr, instructions=[
# disable interrupts
Wrteei(0),
# first, save r24 so we can use it as a target_ram pointer
Mtspr(core.SPRG0, core.r24),
# set up our instrumentation stack pointer in r24 + 0x8
# -> we leave 0x8 bytes reserved for variables
AutoLoadI(core.r24, (self.target_ram + 8)),
# save r25, r26, r27, r28, r29, r30, r31 to ram
# ..for this we use e_stmw which will do that in one instruction
EStmw(core.r25, core.r24, 0),
# increment the stack pointer by 28 (7 registers x 4 bytes)
SeAddi(core.r24, 28),
# store the old CR register
Mfcr(core.r27),
SeStw(core.r27, core.r24,0),
SeAddi(core.r24, 4),
# store the old link register, we'll use it
SeMflr(core.r27),
SeStw(core.r27, core.r24, 0),
SeAddi(core.r24, 4), # increment stack pointer
# store the id in target_ram + 4
AutoLoadI(core.r26, (self.target_ram + 4)),
AutoLoadI(core.r27, id),
SeSth(core.r27, core.r26, 0), # [r26] = r27 = id
# add the jmp to instr. gadget
Label("sgEnd"),
AbsoluteLabelRef("sgEnd", lambda curr_addr: EBl(instrumentation_gadget.get_absolute_address() - curr_addr))
])
# restore all the registers
slice_gadget.instructions.extend([
# read the stored link register back into LR
SeSubi(core.r24, 4), # decrement stack pointer by 4
SeLwz(core.r27, core.r24, 0), # read the old LR into r27
SeMtlr(core.r27), # put it back into the LR
# read the stored CR and restore it
SeSubi(core.r24, 4),
SeLwz(core.r27, core.r24, 0),
Mtcrf(0xff, core.r27),
# decrement the stack pointer by 28 (7 registers x 4 bytes)
SeSubi(core.r24, 28),
# load r25, r26, r27, r28, r29, r30, r31 at once
ELmw(core.r25, core.r24, 0),
# finally, restore the register containing our stack pointer
Mfspr(core.r24, core.SPRG0),
# ..restore interrupts
Wrteei(1),
])
# add the original bytes/code
slice_gadget.instructions.append(
ArbitraryBytesInstruction(bytes=original_bytes)
)
# jump back to the patch addr
slice_gadget.instructions.extend([
Label("currentPC"),
AbsoluteLabelRef("currentPC", lambda curr_addr: EB(- (curr_addr - (slice_addr + 4))))
])
return slice_patch, slice_gadget
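The gadget's tail branch (the `AbsoluteLabelRef("currentPC", ...)` above) encodes a PC-relative jump back to the instruction following the patched `EB`, i.e. `slice_addr + 4`. A minimal plain-Python sketch of that displacement arithmetic (the helper name and addresses are hypothetical, not part of the patcher's DSL):

```python
def branch_back_displacement(curr_addr: int, slice_addr: int) -> int:
    # mirrors the lambda passed to AbsoluteLabelRef: negative distance from the
    # gadget's current instruction address back to the slot after the patch
    return -(curr_addr - (slice_addr + 4))

# hypothetical layout: gadget tail at 0x40001000, patched EB at 0x80000
offset = branch_back_displacement(0x40001000, 0x80000)
```

Adding the returned displacement to the branch's own address lands execution exactly on `slice_addr + 4`.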
def get_instrumentation_gadget(self, config: dict, target_addr: int, dataout_func: Function) -> Shellcode:
core = E200Z0()
return Shellcode(address=target_addr, instructions=[
# ARGS:
# r27: function ID
# - prepare arguments for dataout function
# store old LR, dataout func will restore r31 for us
SeMflr(core.r31),
# prepare dataout arguments
AutoLoadI(core.r28, self.target_ram + 4), # points to function id halfword
AutoLoadI(core.r29, 2), # send as 2 bytes
# - call dataout function
*dataout_func.make_call.get_instructions(),
# return to slice gadget
SeMtlr(core.r31),
SeBlr()
#SeBl
])
def get_data_out_function(self, target_addr: int) -> Function:
core = E200Z0()
def _save_regs_inst_func(args):
# assume r26 points to our instrumentation stack
return [
# store the registers used by DataOutFunc [r25 - r31]
EStmw(core.r25, core.r24, 0),
# increment the stack pointer by 28 (7 registers x 4 bytes)
SeAddi(core.r24, 28),
]
save_regs_mod = Module(instructions_func=_save_regs_inst_func, address=0)
def _restore_regs_ins_func(args):
# assume r26 points to our instrumentation stack
return [
SeSubi(core.r24, 28), # restore 7 * 4 bytes = 28
ELmw(core.r25, core.r24, 0),
# we're done! branch to link register
SeBlr(),
]
restore_regs_mod = Module(instructions_func=_restore_regs_ins_func, address=0)
make_call = Module(instructions=[
# branch and link to logic_mod (which is at target_addr)
# NOTE: this will overwrite LR - you need to save it yourself if you need it afterwards
Label("call_source"),
AbsoluteLabelRef("call_source", lambda cia: EBl(target_addr - cia))
], address=0)
logic_mod = Module(instructions=[
# ARGS:
# r28: data ptr
# r29: data length
# -> on entry, move arguments to r30, r31 for later
SeMr(core.r30, core.r28), # r30 = r28
SeMr(core.r31, core.r29), # r31 = r29
# always check if the CGM output is enabled
# if it isn't, bail out
AutoLoadI(core.r27, 0xc3fe0370),
SeLwz(core.r28, core.r27, 0),
SeCmpi(core.r28, 0),
LabelRef("bail_out", lambda x: EBe(x)),
# check if CAN is in FREEZE mode / not ready
AutoLoadI(core.r27, 0xfffc0000),
SeLbz(core.r28, core.r27, 0),
EAnd2i(core.r28, 0x8),
SeCmpi(core.r28, 0),
LabelRef("bail_out", lambda x: EBne(x)),
# always clear the FLEXCAN0 IMASK bits for MBs 30-31!
AutoLoadI(core.r27, 0xfffc0000 + 0x28),
SeLbz(core.r28, core.r27, 0),
# zero out the top two bits (the IMASK bits for MBs 31 and 30)
EAnd2i(core.r28, 0x3f),
SeStb(core.r28, core.r27, 0),
Label("top_outer_loop"),
# select a free Message Buffer by reading the 'next buffer index'
# for this, we load the static location in ram and read from it
AutoLoadI(core.r27, (self.target_ram + 0)),
SeLbz(core.r28, core.r27, 0),
# if this is out of range (>= 3), we need to reset it and wait for the first MB to become free
SeCmpi(core.r28, 3),
LabelRef('dstbuf_known', lambda addr: EBlt(addr)),
# ..reset it to 0:
SeLi(core.r28, 0),
SeStb(core.r28, core.r27, 0), # [r27] = r28
Label("dstbuf_known"),
# put the msgbuf base in r27
# -> r27 = (msgbuf_base) + (msgbuf_idx * msgbuf_struct_size)
AutoLoadI(core.r27, 0xfffc0260), # flexcan0->BUF[30]
EMull2i(core.r28, 0x10), # r28 = r28 * 16 -> mbuf offset
SeAdd(core.r27, core.r28), # r27 = r27 + r28
# now we have our message buffer ptr in r27!
# -> check CODE register. if it is not 1000 (INACTIVE), then we need to wait
# -> CODE is at offset 0 of the MB, bits [4:7] (including bit 7)
Label("check_mb_code"),
SeLbz(core.r28, core.r27, 0), # r28 = 0x0(r27)
SeCmpi(core.r28, 8),
# while r28 > 8: busy wait
LabelRef("check_mb_code", lambda x: SeBgt(x)),
# write the ID word, adding the buf idx to it
AutoLoadI(core.r29, (self.target_ram + 0)),
SeLbz(core.r29, core.r29, 0), # r29 = [r29]
AutoLoadI(core.r28, 0x7f0),
SeAdd(core.r28, core.r29), # r28 += r29
SeSlwi(core.r28, 2), # r28 = r28 << 2 (shift left by two so it gets written to std id field)
SeSth(core.r28, core.r27, 0x4), # MB+4 = PRIO | ID_STD | ID_EXT ..
# loop head:
# i = 0
AutoLoadI(core.r28, 0), # i = r28 = 0
# mbuf_offset = target_ram+1 = r25
AutoLoadI(core.r25, (self.target_ram + 1)),
SeLbz(core.r25, core.r25, 0),
# r27 += mbuf_offset
SeAdd(core.r27, core.r25),
Label("inner_fill_frame"),
SeLbz(core.r29, core.r30, 0), # data = core.r29 = [r30]
SeStb(core.r29, core.r27, 8), # [r27]+8 = r29
SeAddi(core.r30, 1), # ptr += 1
SeAddi(core.r27, 1), # r27 += 1 (mbuf++)
SeAddi(core.r28, 1), # i += 1
# check len
SeSubi(core.r31, 1), # len -= 1
SeMr(core.r26, core.r28), # r26 = i (* precalculate r26 = i + mbuf_offset in case we exit here)
SeAdd(core.r26, core.r25), # r26 += r25 (mbuf_offset)
SeCmpi(core.r31, 1), # if len < 1: break
LabelRef("exit_inner_fill_frame", lambda x: SeBlt(x)),
# check if mbuf_offset + i < 8 (* use precalculated r26)
SeCmpi(core.r26, 8), # loop, if i+mbuf_offset < 8, else: break
LabelRef("inner_fill_frame", lambda x: SeBlt(x)),
Label("exit_inner_fill_frame"),
# restore r27 by subtracting i
SeSub(core.r27, core.r28),
# ..and subtracting mbuf_offset
SeSub(core.r27, core.r25),
# adjust the mbuf idx, if necessary
# at this point, r26 still contains old_mbuf_offset+i
SeCmpi(core.r26, 8),
LabelRef("skip_mbuf_increment", lambda x: SeBlt(x)),
# increment mbuf idx:
AutoLoadI(core.r29, (self.target_ram + 0)), # r29 = [target_ram]
SeLbz(core.r28, core.r29, 0), # r29 = [r29]
SeAddi(core.r28, 1), # r28++
SeStb(core.r28, core.r29, 0), # [r29] = r28
# zeroize mbuf_offset:
SeLi(core.r28, 0),
SeStb(core.r28, core.r29, 1), # [r29+1] = r28 = 0
# if we are here, this means that the mbuf is full and idx has been incremented
# ..so finally, we can send the frame!
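The buffer-selection code above computes the message-buffer address as `base + idx * 16` (the `AutoLoadI` of `0xfffc0260` followed by `EMull2i` by `0x10` and `SeAdd`). A small sketch of that addressing, using the constants from the gadget:

```python
# FlexCAN message-buffer addressing as used in the gadget above
MSGBUF_BASE = 0xFFFC0260  # flexcan0->BUF[30], from the AutoLoadI above
MSGBUF_SIZE = 0x10        # 16 bytes per message buffer (the EMull2i by 0x10)

def mb_addr(idx: int) -> int:
    # address of the message buffer at index idx within this group
    return MSGBUF_BASE + idx * MSGBUF_SIZE
```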
#!/usr/bin/env python
import json
import logging
import os
import sys
from datetime import datetime
import click
import requests
from click import Context
from furl import furl
from requests import HTTPError
pkg_root = os.path.abspath(os.path.join(os.path.dirname(__file__), "..")) # noqa
sys.path.insert(0, pkg_root) # noqa
from backend.corpora.common.corpora_config import CorporaDbConfig
from backend.corpora.common.utils.json import CustomJSONEncoder
from backend.corpora.common.utils.db_session import db_session_manager, DBSessionMaker
from backend.corpora.common.corpora_orm import (
CollectionLinkType,
CollectionVisibility,
DbCollection,
DbCollectionLink,
DbDataset,
DatasetArtifactFileType,
DbDatasetArtifact,
DbProjectLink,
ProcessingStatus,
)
from backend.corpora.common.entities import DatasetAsset
from backend.corpora.common.entities.dataset import Dataset
from backend.corpora.common.entities.collection import Collection
from backend.corpora.common.utils.s3_buckets import buckets
from urllib.parse import urlparse
import requests as rq
logging.basicConfig()
logger = logging.getLogger(__name__)
os.environ["CORPORA_LOCAL_DEV"] = "1"
@click.group()
@click.option("--deployment", default="test", show_default=True, help="The name of the deployment to target.")
@click.pass_context
def cli(ctx, deployment):
os.environ["DEPLOYMENT_STAGE"] = deployment
ctx.obj["deployment"] = deployment
DBSessionMaker(get_database_uri())
@cli.command()
@click.argument("uuid")
@click.pass_context
def delete_dataset(ctx, uuid):
"""Delete a dataset from Cellxgene. You must first SSH into the target deployment using `make db/tunnel` before
running."""
with db_session_manager() as session:
dataset = Dataset.get(session, uuid, include_tombstones=True)
if dataset is not None:
click.echo(
json.dumps(dataset.to_dict(remove_attr=["collection"]), sort_keys=True, indent=2, cls=CustomJSONEncoder)
)
click.confirm(
f"Are you sure you want to delete the dataset:{uuid} from cellxgene:{ctx.obj['deployment']}?",
abort=True,
)
dataset.asset_deletion()
dataset.deployment_directory_deletion()
dataset.delete()
dataset = Dataset.get(session, uuid, include_tombstones=True)
if dataset is None:
click.echo(f"Deleted dataset:{uuid}")
exit(0)
else:
click.echo(f"Failed to delete dataset:{uuid}")
exit(1)
else:
click.echo(f"Dataset:{uuid} not found!")
exit(0)
@cli.command()
@click.argument("collection_name")
@click.pass_context
def delete_collections(ctx, collection_name):
"""
Delete collections from data portal staging or dev by collection name.
You must first SSH into the target deployment using `make db/tunnel` before running.
You must first set DEPLOYMENT_STAGE as an env var before running
To run
./scripts/cxg_admin.py --deployment dev delete-collections <collection_name>
Examples of valid collection_name:
- String with no spaces: ThisCollection
- String with spaces: "This Collection"
"""
if ctx.obj["deployment"] == "prod":
logger.info(f"Cannot run this script for prod. Aborting.")
exit(0)
click.confirm(
f"Are you sure you want to run this script? It will delete all of the "
f"collections with the name '{collection_name}' from the {ctx.obj['deployment']} environment.",
abort=True,
)
with db_session_manager() as session:
collections = session.query(DbCollection).filter_by(name=collection_name).all()
if not collections:
logger.info(f"There are no collections with the name '{collection_name}'. Aborting.")
exit(0)
logger.info(f"There are {len(collections)} collections with the name '{collection_name}'")
for c in collections:
collection = Collection.get_collection(session, c.id, CollectionVisibility.PUBLIC, include_tombstones=True)
if not collection:
collection = Collection.get_collection(
session, c.id, CollectionVisibility.PRIVATE, include_tombstones=True
)
# Delete collection
logger.info(f"Starting deletion of collection | name: {collection_name} | id: {c.id}")
collection.delete()
logger.info(f"Deletions complete!")
@cli.command()
@click.argument("uuid")
@click.pass_context
def tombstone_collection(ctx: Context, uuid: str):
"""
Tombstones the collection specified by UUID.
Before running, create a tunnel to the database, e.g.:
AWS_PROFILE=single-cell-prod DEPLOYMENT_STAGE=prod make db/tunnel
Then run as:
./scripts/cxg_admin.py --deployment prod tombstone-collection 7edef704-f63a-462c-8636-4bc86a9472bd
:param ctx: command context
:param uuid: UUID that identifies the collection to tombstone
"""
with db_session_manager() as session:
collection = Collection.get_collection(session, uuid, include_tombstones=True)
if not collection:
click.echo(f"Collection:{uuid} not found!")
exit(0)
click.echo(
json.dumps(
collection.to_dict(remove_attr=["datasets", "links", "genesets"]),
sort_keys=True,
indent=2,
cls=CustomJSONEncoder,
)
)
click.confirm(
f"Are you sure you want to tombstone the collection:{uuid} from cellxgene:{ctx.obj['deployment']}?",
abort=True,
)
collection.tombstone_collection()
tombstoned = Collection.get_collection(session, uuid, include_tombstones=True)
if tombstoned.tombstone:
click.echo(f"Tombstoned collection:{uuid}")
exit(0)
else:
click.echo(f"Failed to tombstone collection:{uuid}")
exit(1)
@cli.command()
@click.argument("uuid")
@click.pass_context
def tombstone_dataset(ctx, uuid):
"""
Remove a dataset from Cellxgene. This will delete its artifacts/genesets and mark the dataset as tombstoned so
it no longer shows up in the data portal.
You must first SSH into the target deployment using `make db/tunnel` before running.
./scripts/cxg_admin.py --deployment staging tombstone-dataset "57cf1b53-af10-49e5-9a86-4bc70d0c92b6"
"""
with db_session_manager() as session:
dataset = Dataset.get(session, uuid, include_tombstones=True)
if dataset is not None:
click.echo(
json.dumps(dataset.to_dict(remove_attr=["collection"]), sort_keys=True, indent=2, cls=CustomJSONEncoder)
)
click.confirm(
f"Are you sure you want to delete the dataset:{uuid} from cellxgene:{ctx.obj['deployment']}?",
abort=True,
)
dataset.asset_deletion()
dataset.tombstone_dataset_and_delete_child_objects()
dataset = Dataset.get(session, uuid)
tombstoned = Dataset.get(session, uuid, include_tombstones=True)
if tombstoned and dataset is None:
click.echo(f"Tombstoned dataset:{uuid}")
exit(0)
else:
click.echo(f"Failed to tombstone dataset:{uuid}")
exit(1)
else:
click.echo(f"Dataset:{uuid} not found!")
exit(0)
@cli.command()
@click.argument("collection_uuid")
@click.argument("new_owner")
@click.pass_context
def update_collection_owner(ctx, collection_uuid, new_owner):
"""Update the owner of a cellxgene collection. You must first SSH into the target deployment using
`make db/tunnel` before running.
To run (from repo root)
./scripts/cxg_admin.py --deployment prod update-collection-owner $COLLECTION_ID $NEW_OWNER_ID
"""
with db_session_manager() as session:
key = (collection_uuid, CollectionVisibility.PUBLIC.name)
collection = Collection.get(session, key)
if collection is not None:
collection_name = collection.to_dict()["name"]
click.confirm(
f"Are you sure you want to update the owner of the collection:{collection_name}?",
abort=True,
)
collection.update(owner=new_owner)
collection = Collection.get(session, key)
if collection and (collection.owner == new_owner):
click.echo(f"Updated owner of collection:{collection_uuid}. Owner is now {collection.owner}")
exit(0)
else:
click.echo(f"Failed to update owner for collection_uuid:{collection_uuid}")
exit(1)
@cli.command()
@click.argument("curr_owner")
@click.argument("new_owner")
@click.pass_context
def transfer_collections(ctx, curr_owner, new_owner):
"""Transfer all collections owned by the curr_owner to the new_owner. You must first SSH into the target
deployment using `make db/tunnel` before running.
Retrieve user ids from auth0 before running or ping an engineer on the team to check the id of the owner in the database
To run (from repo root)
./scripts/cxg_admin.py --deployment prod transfer-collections $CURR_OWNER_ID $NEW_OWNER_ID
"""
with db_session_manager() as session:
collections = session.query(DbCollection).filter(DbCollection.owner == curr_owner).all()
new_owner_collections_count = len(session.query(DbCollection).filter(DbCollection.owner == new_owner).all())
if len(collections):
click.confirm(
f"Are you sure you want to update the owner of {len(collections)} collection{'s' if len(collections) > 1 else ''} from {curr_owner} to "
f"{new_owner}?",
abort=True,
)
updated = (
session.query(DbCollection)
.filter(DbCollection.owner == curr_owner)
.update({DbCollection.owner: new_owner})
)
session.commit()
if updated > 0:
collections = session.query(DbCollection).filter(DbCollection.owner == new_owner).all()
click.echo(
f"{new_owner} previously owned {new_owner_collections_count}, they now own {len(collections)}"
)
click.echo(
f"Updated owner of collection for {updated} collections. {new_owner} is now the owner of {[[x.name, x.id] for x in collections]}"
)
exit(0)
else:
click.echo(
f"Failed to update owner for collections. {curr_owner} is still the owner of {len(collections)} "
f"collections"
)
exit(1)
@cli.command()
@click.pass_context
def create_cxg_artifacts(ctx):
"""
Create cxg artifacts for all datasets in the database based on their explorer_url
DO NOT run/use once dataset updates have shipped -- the s3 location will no longer be
based on the explorer_url in all cases.
You must first SSH into the target deployment using `make db/tunnel` before running.
You must first set DEPLOYMENT_STAGE as an env var before running
To run
./scripts/cxg_admin.py --deployment prod create-cxg-artifacts
"""
with db_session_manager() as session:
click.confirm(
"Are you sure you want to run this script? It will delete all of the current cxg artifacts and create new "
"ones based on the explorer_url.",
abort=True,
)
session.query(DbDatasetArtifact).filter(DbDatasetArtifact.filetype == DatasetArtifactFileType.CXG).delete()
session.commit()
datasets = session.query(DbDataset.id, DbDataset.explorer_url).all()
for dataset in datasets:
if dataset.explorer_url:
object_key = dataset.explorer_url.split("/")[-2]
s3_uri = f"s3://{buckets.explorer_bucket.name}/{object_key}/"
click.echo(f"{dataset.explorer_url} -> {s3_uri}")
DatasetAsset.create(
session,
dataset_id=dataset.id,
filename="explorer_cxg",
filetype=DatasetArtifactFileType.CXG,
user_submitted=True,
s3_uri=s3_uri,
)
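The object key above is recovered from the explorer URL by taking the second-to-last path segment, because the trailing slash makes the last `split` element empty. A quick illustration with a hypothetical URL:

```python
# hypothetical explorer URL; the trailing slash means split("/")[-1] is ""
explorer_url = "https://cellxgene.example.org/e/1234-abcd.cxg/"
object_key = explorer_url.split("/")[-2]  # second-to-last segment is the key
```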
@cli.command()
@click.pass_context
def migrate_schema_version(ctx):
"""
Populates `schema_version` for each existing dataset. Since the schema version only exists
in the cxg file and we don't want to open them, we will call the cellxgene explorer endpoint
which contains the version. This is a one-off procedure since new datasets will have
the version already set.
"""
with db_session_manager() as session:
click.confirm(
f"Are you sure you want to run this script? It will assign schema_version to all the datasets",
abort=True,
)
for record in session.query(DbDataset):
dataset_id = record.id
explorer_url = urlparse(record.explorer_url)
url = f"https://api.{explorer_url.netloc}/cellxgene{explorer_url.path}api/v0.2/config"
res = rq.get(url).json()
version = res["config"]["corpora_props"]["version"]["corpora_schema_version"]
logger.info(f"Setting version for dataset {dataset_id} to {version}")
record.schema_version = version
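The config endpoint above is assembled by splicing the explorer URL's host and path around the `api.` subdomain. A sketch with a hypothetical explorer URL:

```python
from urllib.parse import urlparse

# hypothetical explorer_url, showing how the config endpoint is assembled above
explorer_url = urlparse("https://cellxgene.example.org/e/abcd.cxg/")
url = f"https://api.{explorer_url.netloc}/cellxgene{explorer_url.path}api/v0.2/config"
# -> https://api.cellxgene.example.org/cellxgene/e/abcd.cxg/api/v0.2/config
```

Note the scheme/host prefix is replaced wholesale; only `netloc` and `path` survive from the stored URL.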
@cli.command()
@click.pass_context
def migrate_published_at(ctx):
"""
Populates `published_at` for each existing collection and dataset. This is a
one-off procedure since published_at will be set for collections and new
datasets when they are first published.
"""
with db_session_manager() as session:
click.confirm(
f"Are you sure you want to run this script? It will assign published_at to "
f"all of the existing collections and datasets",
abort=True,
)
# Collections
for record in session.query(DbCollection):
collection_id = record.id
# Skip private collection, since published_at will be populated when published.
if record.visibility == CollectionVisibility.PRIVATE:
logger.info(f"SKIPPING - Collection is PRIVATE | collection.id: {collection_id}")
continue
# Skip if published_at already populated.
if record.published_at is not None:
logger.info(f"SKIPPING - Collection already has published_at | collection.id: {collection_id}")
continue
logger.info(f"Setting published_at for collection {collection_id}")
collection_created_at = record.created_at
record.published_at = collection_created_at
logger.info(f"----- Finished migrating published_at for collections! -----")
# Datasets
for record in session.query(DbDataset):
dataset_id = record.id
# Skip private dataset, since published_at will be populated when published.
if record.collection_visibility == CollectionVisibility.PRIVATE:
logger.info(f"SKIPPING - Dataset's parent collection is PRIVATE | dataset.id: {dataset_id}")
continue
# Skip if published_at already populated.
if record.published_at is not None:
logger.info(f"SKIPPING - Dataset already has published_at | dataset.id: {dataset_id}")
continue
logger.info(f"Setting published_at for dataset {dataset_id}")
dataset_created_at = record.created_at
record.published_at = dataset_created_at
logger.info(f"----- Finished migrating published_at for datasets! -----")
@cli.command()
@click.pass_context
def populate_revised_at(ctx):
"""
Populates `revised_at` for each existing collection and dataset with
'blocks_q2' : 2D array of local medians
'blocks_iqr' : 2D array of local IQRs
'blocks_var' : IQR of blocks_q2
'iqr_ratio' : blocks_var / iqr
'blocks_iqrvar' : IQR of blocks_iqr
'mean' : mean of full chip array
'std' : std of full chip array
'blocks_mean' : 2D array of local means
'blocks_std' : 2D array of local stds
'blocks_mvar' : std of blocks_mean
'std_ratio' : blocks_mvar / std
'blocks_stdmvar' : std of blocks_std
if stdonly is selected, then q2 and iqr properties are not calculated. This may be faster
'''
def block_reshape( data, blocksize ):
rows, cols = data.shape
numR = rows // blocksize[0]
numC = cols // blocksize[1]
return data.reshape(rows, numC, -1).transpose((1,0,2)).reshape(numC, numR, -1).transpose((1,0,2))
output = {}
if exclude is not None:
data = data.astype( float )
data[ exclude.astype( bool ) ] = np.nan
data[ np.isinf(data) ] = np.nan
# Divide into miniblocks
blocks = block_reshape( data, blocksize )
newsize = ( data.shape[0] // blocksize[0] ,
data.shape[1] // blocksize[1] )
if iqr:
percs = [ 25, 50, 75 ]
# Calculate the full data properties
flat = data[ ~np.isnan( data ) ].flatten()
flat.sort()
numwells = len( flat )
levels = {}
for perc in percs:
r = perc / 100. * numwells - 0.5
if r%1 == 0:
p = flat[int(r)]
else:
try:
p = ( flat[int(np.ceil(r))] - flat[int(np.floor(r))] ) * (r - np.floor(r)) + flat[int(np.floor(r))]
except:
p = np.nan
levels[perc] = p
output[ 'q2' ] = levels[50]
output[ 'iqr' ] = levels[75] - levels[25]
# Calculate the local properties
blocks.sort( axis=2 )
numwells = (~np.isnan( blocks )).sum( axis=2 )
regions = {}
def get_inds( blocks, inds2d, inds ):
selected_blocks = blocks[inds2d]
selected_wells = inds.astype( int )[inds2d]
rows = range( selected_wells.size )
output = selected_blocks[ rows, selected_wells ]
return output
for perc in percs:
r = perc / 100. * numwells - 0.5
r[ r < 0 ] = 0
r[ r > numwells ] = numwells[ r > numwells ]
inds = r%1 == 0
p = np.empty( r.shape )
p[:] = np.nan
p[ inds ] = get_inds( blocks, inds, r )
ceil_vals = get_inds( blocks, ~inds, np.ceil(r) )
floor_vals = get_inds( blocks, ~inds, np.floor(r) )
deltas = r[~inds] - np.floor(r[~inds] )
p[ ~inds ] = ( ceil_vals - floor_vals ) * deltas + floor_vals
p = p.reshape( newsize )
regions[perc] = p
blocks_iqr = regions[75] - regions[25]
if not only_values:
output[ 'blocks_q2' ] = regions[50]
output[ 'blocks_iqr' ] = blocks_iqr
# Calculate the properties of the regional averages
flat = regions[50][ ~np.isnan( regions[50] ) ].flatten()
flat.sort()
numwells = len( flat )
levels = {}
for perc in percs:
r = perc / 100. * numwells - 0.5
if r%1 == 0:
p = flat[int(r)]
else:
try:
p = ( flat[int(np.ceil(r))] - flat[int(np.floor(r))] ) * (r - np.floor(r)) + flat[int(np.floor(r))]
except:
p = np.nan
levels[perc] = p
output[ 'blocks_var' ] = levels[75] - levels[25]
output['iqr_ratio'] = output[ 'blocks_var' ] / output[ 'iqr' ]
# Calculate the variability of the regional variability
flat = blocks_iqr[ ~np.isnan( regions[50] ) ].flatten()
flat.sort()
numwells = len( flat )
levels = {}
for perc in percs:
r = perc / 100. * numwells - 0.5
if r%1 == 0:
p = flat[int(r)]
else:
try:
p = ( flat[int(np.ceil(r))] - flat[int(np.floor(r))] ) * (r - np.floor(r)) + flat[int(np.floor(r))]
except:
p = np.nan
levels[perc] = p
output[ 'blocks_iqrvar' ] = levels[75] - levels[25]
if std:
output[ 'mean' ] = np.nanmean( data )
output[ 'std' ] = np.nanstd( data )
block_mean = np.nanmean( blocks, axis=2 ).reshape( newsize )
block_std = np.nanstd( blocks, axis=2 ).reshape( newsize )
if not only_values:
output[ 'blocks_mean' ] = block_mean
output[ 'blocks_std' ] = block_std
output[ 'blocks_mvar' ] = np.nanstd( block_mean )
output[ 'std_ratio' ] = output[ 'blocks_mvar' ] / output[ 'std' ]
output[ 'blocks_stdmvar' ] = np.nanstd( block_std )
return output
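The regional statistics above all hinge on the `block_reshape` helper, which turns a 2D chip array into a `(numR, numC, blockR*blockC)` stack where each `[i, j]` slice is one flattened miniblock. A self-contained sketch of that behavior (same reshape/transpose sequence, assuming Python 3 integer division):

```python
import numpy as np

def block_reshape(data, blocksize):
    # split into (numR x numC) miniblocks; each [i, j] is one flattened block
    rows, cols = data.shape
    numR = rows // blocksize[0]
    numC = cols // blocksize[1]
    return (data.reshape(rows, numC, -1).transpose((1, 0, 2))
                .reshape(numC, numR, -1).transpose((1, 0, 2)))

data = np.arange(16).reshape(4, 4)
blocks = block_reshape(data, (2, 2))
# blocks[0, 0] is the flattened top-left 2x2 block: [0, 1, 4, 5]
```

Sorting along `axis=2` (as `uniformity` does) then gives per-block order statistics in one vectorized pass.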
def chip_uniformity( data, chiptype, block='mini', exclude=None, only_values=False, iqr=True, std=False ):
'''
A wrapper on uniformity, designed to take a chiptype as an input instead of a block size
'''
blocksize = ( getattr( chiptype, '%sR' % block ),
getattr( chiptype, '%sC' % block ) )
output = uniformity( data, blocksize, exclude=exclude, only_values=only_values, iqr=iqr, std=std )
return output
################################################################################
# Averaging functions, include those brought over from average.py
################################################################################
def block_avg( data , blockR, blockC, goodpix=None, finite=False, superR=None, superC=None ):
"""
Averages the specified data, averaging regions of blockR x blockC
This returns a data of size
( data.shape[0]/blockR, data.shape[1]/blockC )
*****Note that blockR and blockC are not the same as blockR and blockC defaults by chiptype (fpga block rc shape)*****
goodpix is a boolean array of same size as data where pixels to be averaged are True. By default all pixels are averaged
raising finite sets all nans and infs to 0
For large data sets on computers with limited memory, it may help to set superR and superC to break the data into chunks
"""
bm = BlockMath( data , blockR, blockC, goodpix=goodpix, finite=finite, superR=superR, superC=superC )
return bm.get_mean()
def block_std( data, blockR, blockC, goodpix=None, finite=False, superR=None, superC=None ):
"""
Analog of block_avg that spits back the std.
*****Note that blockR and blockC are not the same as blockR and blockC defaults by chiptype (fpga block rc shape)*****
goodpix is a boolean array of same size as data where pixels to be averaged are True. By default all pixels are averaged
"""
bm = BlockMath( data , blockR, blockC, goodpix=goodpix, finite=finite, superR=superR, superC=superC )
return bm.get_std()
# This function works but needs some help
def BlockAvg3D( data , blocksize , mask ):
"""
3-D version of block averaging. Mainly applicable to making superpixel averages of datfile traces.
Not sure non-averaging calcs makes sense?
mask is currently built as a 2D boolean array of the same size as (data.shape[0], data.shape[1]), where pixels to be averaged are True.
"""
rows = data.shape[0]
cols = data.shape[1]
frames = data.shape[2]
if np.mod(rows,blocksize[0]) == 0 and np.mod(cols,blocksize[1]) == 0:
blockR = rows // blocksize[0]
blockC = cols // blocksize[1]
else:
print( 'Error, blocksize not evenly divisible into data size.')
return None
output = np.zeros((blockR,blockC,frames))
# Previous algorithm was slow and used annoying looping
# Improved algorithm that doesn't need any looping; takes about 1.4 seconds instead of 60.
msk = np.array( mask , float )
msk.resize(rows, cols , 1 )
masked = np.array( data , float ) * np.tile( msk , ( 1 , 1 , frames ) )
step1 = masked.reshape(rows , blockC , -1 , frames).sum(2)
step2 = np.transpose(step1 , (1,0,2)).reshape(blockC , blockR , -1 , frames).sum(2)
step3 = np.transpose(step2 , (1,0,2))
mask1 = mask.reshape(rows , blockC , -1 ).sum(2)
count = mask1.transpose().reshape(blockC , blockR , -1).sum(2).transpose()
#mask1 = mask.reshape(rows , blockC , -1 , frames).sum(2)
#count = mask1.transpose().reshape(blockC , blockR , -1 , frames).sum(2).transpose()
output = step3 / count[:,:,np.newaxis]
output[ np.isnan(output) ] = 0
output[ np.isinf(output) ] = 0
return output
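The reshape-and-sum pipeline in `BlockAvg3D` boils down to: multiply by the mask, sum each superpixel, and divide by the per-block count of unmasked pixels. A tiny single-block sketch of that idea (toy data, not the function's reshaping):

```python
import numpy as np

# one 2x2 superpixel, 1 frame; the masked-out pixel is excluded from the mean
data = np.arange(4, dtype=float).reshape(2, 2, 1)
mask = np.array([[True, True], [True, False]])
masked_sum = (data * mask[:, :, None]).sum(axis=(0, 1))  # sum of kept pixels
count = mask.sum()                                       # how many were kept
avg = masked_sum / count                                 # masked block average
```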
# I think these should just go away into their only calling modules. Perhaps. - PW 9/18/2019
# Pulled in from the average.py module. Only called by datfile.py
def masked_avg( image, pinned ):
'''
Calculates average trace while excluding pinned pixels.
If pinned = True, that pixel is excluded from the average.
Note: This is opposite compared to matlab functionality.
'''
avgtrace = np.mean ( image[ ~pinned ] , axis=0 )
return avgtrace
# Pulled in from average.py. Only called by chip.py.
def stripe_avg( data, ax ):
"""
Custom averaging algorithm to deal with pinned pixels
"""
# EDIT 10-11-13
# Utilize numpy.masked_array to do this much more quickly
# Not so custom anymore!
return ma.masked_array( data , (data == 0) ).mean( ax ).data
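The `masked_array` trick above excludes zero-valued (pinned) pixels from the average along the chosen axis. A quick toy example:

```python
import numpy as np
import numpy.ma as ma

# zeros are treated as pinned pixels and excluded from the per-column average
data = np.array([[1.0, 0.0], [3.0, 4.0]])
col_avg = ma.masked_array(data, (data == 0)).mean(0).data
# column 0 averages 1 and 3; column 1 ignores the masked 0 and keeps 4
```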
def top_bottom_diff( data, mask=None, force_sum=False ):
"""
Function to calculate the difference of a given chip data array between top and bottom.
Despite our conventions of plotting origin='lower' suggesting rows in
m.b1046 <= 0)
m.e999 = Constraint(expr= m.x519 - 41 * m.b1047 <= 0)
m.e1000 = Constraint(expr= m.x520 - 39 * m.b1048 <= 0)
m.e1001 = Constraint(expr= m.x489 - 40 * m.b1049 <= 0)
m.e1002 = Constraint(expr= m.x490 - 45 * m.b1050 <= 0)
m.e1003 = Constraint(expr= m.x491 - 41 * m.b1051 <= 0)
m.e1004 = Constraint(expr= m.x492 - 39 * m.b1052 <= 0)
m.e1005 = Constraint(expr= m.x493 - 40 * m.b1053 <= 0)
m.e1006 = Constraint(expr= m.x494 - 45 * m.b1054 <= 0)
m.e1007 = Constraint(expr= m.x495 - 41 * m.b1055 <= 0)
m.e1008 = Constraint(expr= m.x496 - 39 * m.b1056 <= 0)
m.e1009 = Constraint(expr= m.x497 - 40 * m.b1057 <= 0)
m.e1010 = Constraint(expr= m.x498 - 45 * m.b1058 <= 0)
m.e1011 = Constraint(expr= m.x499 - 41 * m.b1059 <= 0)
m.e1012 = Constraint(expr= m.x500 - 39 * m.b1060 <= 0)
m.e1013 = Constraint(expr= m.x501 - 40 * m.b1061 <= 0)
m.e1014 = Constraint(expr= m.x502 - 45 * m.b1062 <= 0)
m.e1015 = Constraint(expr= m.x503 - 41 * m.b1063 <= 0)
m.e1016 = Constraint(expr= m.x504 - 39 * m.b1064 <= 0)
m.e1017 = Constraint(expr= m.x345 - 10 * m.b937 <= 0)
m.e1018 = Constraint(expr= m.x346 - 10 * m.b938 <= 0)
m.e1019 = Constraint(expr= m.x347 - 10 * m.b939 <= 0)
m.e1020 = Constraint(expr= m.x348 - 10 * m.b940 <= 0)
m.e1021 = Constraint(expr= m.x349 - 10 * m.b941 <= 0)
m.e1022 = Constraint(expr= m.x350 - 10 * m.b942 <= 0)
m.e1023 = Constraint(expr= m.x351 - 10 * m.b943 <= 0)
m.e1024 = Constraint(expr= m.x352 - 10 * m.b944 <= 0)
m.e1025 = Constraint(expr= m.x353 - 50 * m.b945 <= 0)
m.e1026 = Constraint(expr= m.x354 - 50 * m.b946 <= 0)
m.e1027 = Constraint(expr= m.x355 - 50 * m.b947 <= 0)
m.e1028 = Constraint(expr= m.x356 - 50 * m.b948 <= 0)
m.e1029 = Constraint(expr= m.x357 - 50 * m.b949 <= 0)
m.e1030 = Constraint(expr= m.x358 - 50 * m.b950 <= 0)
m.e1031 = Constraint(expr= m.x359 - 50 * m.b951 <= 0)
m.e1032 = Constraint(expr= m.x360 - 50 * m.b952 <= 0)
m.e1033 = Constraint(expr= m.x361 + m.x377 - 40 * m.b953 <= 0)
m.e1034 = Constraint(expr= m.x362 + m.x378 - 40 * m.b954 <= 0)
m.e1035 = Constraint(expr= m.x363 + m.x379 - 40 * m.b955 <= 0)
m.e1036 = Constraint(expr= m.x364 + m.x380 - 40 * m.b956 <= 0)
m.e1037 = Constraint(expr= m.x365 + m.x381 - 40 * m.b957 <= 0)
m.e1038 = Constraint(expr= m.x366 + m.x382 - 40 * m.b958 <= 0)
m.e1039 = Constraint(expr= m.x367 + m.x383 - 40 * m.b959 <= 0)
m.e1040 = Constraint(expr= m.x368 + m.x384 - 40 * m.b960 <= 0)
m.e1041 = Constraint(expr= m.x369 + m.x385 - 60 * m.b961 <= 0)
m.e1042 = Constraint(expr= m.x370 + m.x386 - 60 * m.b962 <= 0)
m.e1043 = Constraint(expr= m.x371 + m.x387 - 60 * m.b963 <= 0)
m.e1044 = Constraint(expr= m.x372 + m.x388 - 60 * m.b964 <= 0)
m.e1045 = Constraint(expr= m.x373 + m.x389 - 60 * m.b965 <= 0)
m.e1046 = Constraint(expr= m.x374 + m.x390 - 60 * m.b966 <= 0)
m.e1047 = Constraint(expr= m.x375 + m.x391 - 60 * m.b967 <= 0)
m.e1048 = Constraint(expr= m.x376 + m.x392 - 60 * m.b968 <= 0)
m.e1049 = Constraint(expr= m.x393 - 15 * m.b969 <= 0)
m.e1050 = Constraint(expr= m.x394 - 15 * m.b970 <= 0)
m.e1051 = Constraint(expr= m.x395 - 15 * m.b971 <= 0)
m.e1052 = Constraint(expr= m.x396 - 15 * m.b972 <= 0)
m.e1053 = Constraint(expr= m.x397 - 15 * m.b973 <= 0)
m.e1054 = Constraint(expr= m.x398 - 15 * m.b974 <= 0)
m.e1055 = Constraint(expr= m.x399 - 15 * m.b975 <= 0)
m.e1056 = Constraint(expr= m.x400 - 15 * m.b976 <= 0)
m.e1057 = Constraint(expr= m.x401 - 25 * m.b977 <= 0)
m.e1058 = Constraint(expr= m.x402 - 25 * m.b978 <= 0)
m.e1059 = Constraint(expr= m.x403 - 25 * m.b979 <= 0)
m.e1060 = Constraint(expr= m.x404 - 25 * m.b980 <= 0)
m.e1061 = Constraint(expr= m.x405 - 25 * m.b981 <= 0)
m.e1062 = Constraint(expr= m.x406 - 25 * m.b982 <= 0)
m.e1063 = Constraint(expr= m.x407 - 25 * m.b983 <= 0)
m.e1064 = Constraint(expr= m.x408 - 25 * m.b984 <= 0)
m.e1065 = Constraint(expr= m.x409 - 15 * m.b985 <= 0)
m.e1066 = Constraint(expr= m.x410 - 15 * m.b986 <= 0)
m.e1067 = Constraint(expr= m.x411 - 15 * m.b987 <= 0)
m.e1068 = Constraint(expr= m.x412 - 15 * m.b988 <= 0)
m.e1069 = Constraint(expr= m.x413 - 15 * m.b989 <= 0)
m.e1070 = Constraint(expr= m.x414 - 15 * m.b990 <= 0)
m.e1071 = Constraint(expr= m.x415 - 15 * m.b991 <= 0)
m.e1072 = Constraint(expr= m.x416 - 15 * m.b992 <= 0)
m.e1073 = Constraint(expr= m.x417 - 20 * m.b993 <= 0)
m.e1074 = Constraint(expr= m.x418 - 20 * m.b994 <= 0)
m.e1075 = Constraint(expr= m.x419 - 20 * m.b995 <= 0)
m.e1076 = Constraint(expr= m.x420 - 20 * m.b996 <= 0)
m.e1077 = Constraint(expr= m.x421 - 20 * m.b997 <= 0)
m.e1078 = Constraint(expr= m.x422 - 20 * m.b998 <= 0)
m.e1079 = Constraint(expr= m.x423 - 20 * m.b999 <= 0)
m.e1080 = Constraint(expr= m.x424 - 20 * m.b1000 <= 0)
m.e1081 = Constraint(expr= m.x441 - 10 * m.b1001 <= 0)
m.e1082 = Constraint(expr= m.x442 - 10 * m.b1002 <= 0)
m.e1083 = Constraint(expr= m.x443 - 10 * m.b1003 <= 0)
m.e1084 = Constraint(expr= m.x444 - 10 * m.b1004 <= 0)
m.e1085 = Constraint(expr= m.x445 - 10 * m.b1005 <= 0)
m.e1086 = Constraint(expr= m.x446 - 10 * m.b1006 <= 0)
m.e1087 = Constraint(expr= m.x447 - 10 * m.b1007 <= 0)
m.e1088 = Constraint(expr= m.x448 - 10 * m.b1008 <= 0)
m.e1089 = Constraint(expr= m.x449 - 20 * m.b1009 <= 0)
m.e1090 = Constraint(expr= m.x450 - 20 * m.b1010 <= 0)
m.e1091 = Constraint(expr= m.x451 - 20 * m.b1011 <= 0)
m.e1092 = Constraint(expr= m.x452 - 20 * m.b1012 <= 0)
m.e1093 = Constraint(expr= m.x453 - 20 * m.b1013 <= 0)
m.e1094 = Constraint(expr= m.x454 - 20 * m.b1014 <= 0)
m.e1095 = Constraint(expr= m.x455 - 20 * m.b1015 <= 0)
m.e1096 = Constraint(expr= m.x456 - 20 * m.b1016 <= 0)
m.e1097 = Constraint(expr= m.x425 + m.x457 - 20 * m.b1017 <= 0)
m.e1098 = Constraint(expr= m.x426 + m.x458 - 20 * m.b1018 <= 0)
m.e1099 = Constraint(expr= m.x427 + m.x459 - 20 * m.b1019 <= 0)
m.e1100 = Constraint(expr= m.x428 + m.x460 - 20 * m.b1020 <= 0)
m.e1101 = Constraint(expr= m.x429 + m.x461 - 20 * m.b1021 <= 0)
m.e1102 = Constraint(expr= m.x430 + m.x462 - 20 * m.b1022 <= 0)
m.e1103 = Constraint(expr= m.x431 + m.x463 - 20 * m.b1023 <= 0)
m.e1104 = Constraint(expr= m.x432 + m.x464 - 20 * m.b1024 <= 0)
m.e1105 = Constraint(expr= m.x433 + m.x465 - 55 * m.b1025 <= 0)
m.e1106 = Constraint(expr= m.x434 + m.x466 - 55 * m.b1026 <= 0)
m.e1107 = Constraint(expr= m.x435 + m.x467 - 55 * m.b1027 <= 0)
m.e1108 = Constraint(expr= m.x436 + m.x468 - 55 * m.b1028 <= 0)
m.e1109 = Constraint(expr= m.x437 + m.x469 - 55 * m.b1029 <= 0)
m.e1110 = Constraint(expr= m.x438 + m.x470 - 55 * m.b1030 <= 0)
m.e1111 = Constraint(expr= m.x439 + m.x471 - 55 * m.b1031 <= 0)
m.e1112 = Constraint(expr= m.x440 + m.x472 - 55 * m.b1032 <= 0)
m.e1113 = Constraint(expr= m.x473 + m.x505 - 25 * m.b1033 <= 0)
m.e1114 = Constraint(expr= m.x474 + m.x506 - 25 * m.b1034 <= 0)
m.e1115 = Constraint(expr= m.x475 + m.x507 - 25 * m.b1035 <= 0)
m.e1116 = Constraint(expr= m.x476 + m.x508 - 25 * m.b1036 <= 0)
m.e1117 = Constraint(expr= m.x477 + m.x509 - 25 * m.b1037 <= 0)
m.e1118 = Constraint(expr= m.x478 + m.x510 - 25 * m.b1038 <= 0)
m.e1119 = Constraint(expr= m.x479 + m.x511 - 25 * m.b1039 <= 0)
m.e1120 = Constraint(expr= m.x480 + m.x512 - 25 * m.b1040 <= 0)
m.e1121 = Constraint(expr= m.x481 + m.x513 - 50 * m.b1041 <= 0)
m.e1122 = Constraint(expr= m.x482 + m.x514 - 50 * m.b1042 <= 0)
m.e1123 = Constraint(expr= m.x483 + m.x515 - 50 * m.b1043 <= 0)
m.e1124 = Constraint(expr= m.x484 + m.x516 - 50 * m.b1044 <= 0)
m.e1125 = Constraint(expr= m.x485 + m.x517 - 50 * m.b1045 <= 0)
m.e1126 = Constraint(expr= m.x486 + m.x518 - 50 * m.b1046 <= 0)
m.e1127 = Constraint(expr= m.x487 + m.x519 - 50 * m.b1047 <= 0)
m.e1128 = Constraint(expr= m.x488 + m.x520 - 50 * m.b1048 <= 0)
m.e1129 = Constraint(expr= m.x489 - 15 * m.b1049 <= 0)
m.e1130 = Constraint(expr=
interp_func_temp = interpolate.interp1d(time_,temp[:,p_id],kind='linear',axis=0,fill_value=temp[istop,p_id])
temp_output = interp_func_temp(time_output)
interp_func_rho = interpolate.interp1d(time_,density[:,p_id],kind='linear',axis=0,fill_value=density[istop,p_id])
rho_output = interp_func_rho(time_output)
interp_func_ye = interpolate.interp1d(time_,ye[:,p_id],kind='linear',axis=0,fill_value=ye[istop,p_id])
ye_output = interp_func_ye(time_output)
#If extrap_factor is a float, then extrap_factor times the temp at istop
#for the particle is appended. If it is an array, it is concatenated.
#The arrays are reshaped to have a single column.
if type(extrap_factor) == float:
temp_out = np.append(temp_output,(temp[istop,p_id]*extrap_factor))
temp_out = temp_out.reshape((len(temp_out), 1))
rho_out = np.append(rho_output,(density[istop,p_id]*extrap_factor**3))
rho_out = rho_out.reshape((len(rho_out), 1))
else:
temp_out = np.concatenate((temp_output,(temp[istop,p_id]*extrap_factor)))
temp_out = temp_out.reshape((len(temp_out), 1))
rho_out = np.concatenate((rho_output,(density[istop,p_id]*extrap_factor**3)))
rho_out = rho_out.reshape((len(rho_out), 1))
#Similar for time_extrap: type affects whether it is appended or
#concatenated onto time_out, as well as the shape of the tiled
#matrix concatenated to ye
if type(time_extrap) == numpy.float64:
time_out = np.append(time_output,time_extrap)
time_out = time_out.reshape((len(time_out), 1))
ye_out = np.concatenate((ye_output, np.tile(ye[istop,p_id],(1,1))))
ye_out = ye_out.reshape((len(ye_out), 1))
else:
time_out = np.concatenate((time_output,time_extrap))
time_out = time_out.reshape((len(time_out), 1))
ye_out = np.concatenate((ye_output, np.tile(ye[istop,p_id],(len(time_extrap),1))))
ye_out = ye_out.reshape((len(ye_out), 1))
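The branching above hinges on a NumPy detail: `np.append` flattens its inputs and accepts a bare scalar, while `np.concatenate` requires array operands. A minimal sketch with made-up values (not the simulation data):

```python
import numpy as np

base = np.array([1.0, 2.0, 3.0])

# Scalar extension: np.append flattens and accepts a bare scalar.
scalar_ext = np.append(base, 4.0)

# Array extension: np.concatenate needs an array operand instead.
array_ext = np.concatenate((base, np.array([4.0, 5.0])))

# Reshape to a single column, as done for temp_out / rho_out above.
col = scalar_ext.reshape((len(scalar_ext), 1))
```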
#Flux outputs are linearly interpolated
interp_flx_nu1 = interpolate.interp1d(time_,flxtot[:,0,p_id],kind='linear',axis=0,fill_value=flxtot[istop,0,p_id])
nu1_flx_output = interp_flx_nu1(time_output)
interp_flx_nu2 = interpolate.interp1d(time_,flxtot[:,1,p_id],kind='linear',axis=0,fill_value=flxtot[istop,1,p_id])
nu2_flx_output = interp_flx_nu2(time_output)
interp_flx_nu3 = interpolate.interp1d(time_,flxtot[:,2,p_id],kind='linear',axis=0,fill_value=flxtot[istop,2,p_id])
nu3_flx_output = interp_flx_nu3(time_output)
interp_flx_nu4 = interpolate.interp1d(time_,flxtot[:,3,p_id],kind='linear',axis=0,fill_value=flxtot[istop,3,p_id])
nu4_flx_output = interp_flx_nu4(time_output)
#Append vs. concatenate again. The r2_factor*flxtot[istop] is added to the end of the array.
if type(r2_factor) == numpy.float64:
nu1_flx_out = np.append(nu1_flx_output,(flxtot[istop,0,p_id]*r2_factor))
nu1_flx_out = nu1_flx_out.reshape((len(nu1_flx_out), 1))
nu2_flx_out = np.append(nu2_flx_output,(flxtot[istop,1,p_id]*r2_factor))
nu2_flx_out = nu2_flx_out.reshape((len(nu2_flx_out), 1))
nu3_flx_out = np.append(nu3_flx_output,(flxtot[istop,2,p_id]*r2_factor))
nu3_flx_out = nu3_flx_out.reshape((len(nu3_flx_out), 1))
nu4_flx_out = np.append(nu4_flx_output,(flxtot[istop,3,p_id]*r2_factor))
nu4_flx_out = nu4_flx_out.reshape((len(nu4_flx_out), 1))
else:
nu1_flx_out = np.concatenate((nu1_flx_output,(flxtot[istop,0,p_id]*r2_factor)))
nu1_flx_out = nu1_flx_out.reshape((len(nu1_flx_out), 1))
nu2_flx_out = np.concatenate((nu2_flx_output,(flxtot[istop,1,p_id]*r2_factor)))
nu2_flx_out = nu2_flx_out.reshape((len(nu2_flx_out), 1))
nu3_flx_out = np.concatenate((nu3_flx_output,(flxtot[istop,2,p_id]*r2_factor)))
nu3_flx_out = nu3_flx_out.reshape((len(nu3_flx_out), 1))
nu4_flx_out = np.concatenate((nu4_flx_output,(flxtot[istop,3,p_id]*r2_factor)))
nu4_flx_out = nu4_flx_out.reshape((len(nu4_flx_out), 1))
# Set fluxes to zero after some time corresponding to the end of neutrino emission
nu_mask = time_out[:, 0] > nu_time_stop
nu1_flx_out[nu_mask] = 0.0
nu2_flx_out[nu_mask] = 0.0
nu3_flx_out[nu_mask] = 0.0
nu4_flx_out[nu_mask] = 0.0
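The zeroing above is plain boolean row masking: compare the time column against the cutoff and assign through the mask. A self-contained sketch with hypothetical times and fluxes:

```python
import numpy as np

time = np.array([[0.1], [0.5], [1.2], [2.0]])   # column vector, like time_out
flux = np.ones((4, 1))                          # stand-in flux column
t_stop = 1.0                                    # stand-in for nu_time_stop

mask = time[:, 0] > t_stop   # True where emission has ended
flux[mask] = 0.0             # zero every row past the cutoff
```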
interp_nu1_temp = interpolate.interp1d(time_,nu_temp[:,0,p_id],kind='linear',axis=0,fill_value=nu_temp[istop,0,p_id])
nu1_temp_output = interp_nu1_temp(time_output)
interp_nu2_temp = interpolate.interp1d(time_,nu_temp[:,1,p_id],kind='linear',axis=0,fill_value=nu_temp[istop,1,p_id])
nu2_temp_output = interp_nu2_temp(time_output)
interp_nu3_temp = interpolate.interp1d(time_,nu_temp[:,2,p_id],kind='linear',axis=0,fill_value=nu_temp[istop,2,p_id])
nu3_temp_output = interp_nu3_temp(time_output)
interp_nu4_temp = interpolate.interp1d(time_,nu_temp[:,3,p_id],kind='linear',axis=0,fill_value=nu_temp[istop,3,p_id])
nu4_temp_output = interp_nu4_temp(time_output)
if type(time_extrap) == numpy.float64:
nu1_temp_out = np.concatenate((nu1_temp_output, np.tile(nu_temp[istop,0,p_id],(1,1))))
nu1_temp_out = nu1_temp_out.reshape((len(nu1_temp_out), 1))
nu2_temp_out = np.concatenate((nu2_temp_output, np.tile(nu_temp[istop,1,p_id],(1,1))))
nu2_temp_out = nu2_temp_out.reshape((len(nu2_temp_out), 1))
nu3_temp_out = np.concatenate((nu3_temp_output, np.tile(nu_temp[istop,2,p_id],(1,1))))
nu3_temp_out = nu3_temp_out.reshape((len(nu3_temp_out), 1))
nu4_temp_out = np.concatenate((nu4_temp_output, np.tile(nu_temp[istop,3,p_id],(1,1))))
nu4_temp_out = nu4_temp_out.reshape((len(nu4_temp_out), 1))
else:
nu1_temp_out = np.concatenate((nu1_temp_output, np.tile(nu_temp[istop,0,p_id],(len(time_extrap),1))))
nu1_temp_out = nu1_temp_out.reshape((len(nu1_temp_out), 1))
nu2_temp_out = np.concatenate((nu2_temp_output, np.tile(nu_temp[istop,1,p_id],(len(time_extrap),1))))
nu2_temp_out = nu2_temp_out.reshape((len(nu2_temp_out), 1))
nu3_temp_out = np.concatenate((nu3_temp_output, np.tile(nu_temp[istop,2,p_id],(len(time_extrap),1))))
nu3_temp_out = nu3_temp_out.reshape((len(nu3_temp_out), 1))
nu4_temp_out = np.concatenate((nu4_temp_output, np.tile(nu_temp[istop,3,p_id],(len(time_extrap),1))))
nu4_temp_out = nu4_temp_out.reshape((len(nu4_temp_out), 1))
#Almost identical to the code above, with a few exceptions
else:
for i in range(0,istop+1):
change_t = abs( 1.0 - temp[i,p_id] * rtemp_old )
change_d = abs( 1.0 - density[i,p_id] * rdensity_old ) * 0.1
change_f = abs( 1.0 - flxtot[i,0,p_id] * rnu_flx_old ) * 0.1
change = [ change_t, change_d, change_f ]
maxchange[i,0] = np.amax(change)
if maxchange[i,0] >= change_min or i == 0 or i == istop:
rtemp_old = 1/temp[i,p_id]
rdensity_old = 1/density[i,p_id]
rnu_flx_old = 1/flxtot[i,0,p_id]
out_mask_index_true = np.where( maxchange[:,0] >= change_min )[0]
out_mask[out_mask_index_true,0] = True
out_mask_index_false = np.where( maxchange[:,0] < change_min )[0]
out_mask[out_mask_index_false,0] = False
out_mask[0,0] = True
out_mask[peak,0] = True
out_mask[istop,0] = True
out_mask = out_mask.astype(bool)
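The loop above keeps a sample whenever any tracked quantity has drifted by more than `change_min` relative to the last kept sample, always retaining the endpoints. A sketch of that thinning rule on a single made-up series:

```python
import numpy as np

series = np.array([1.0, 1.01, 1.5, 1.51, 3.0])  # hypothetical data
change_min = 0.1

keep = np.zeros(len(series), dtype=bool)
keep[0] = keep[-1] = True          # endpoints are always retained
r_old = 1.0 / series[0]            # reciprocal of the last kept value
for i in range(1, len(series)):
    if abs(1.0 - series[i] * r_old) >= change_min:
        keep[i] = True
        r_old = 1.0 / series[i]    # reset the reference point

thinned = series[keep]
```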
if type(extrap_factor) == float:
temp_out = np.append(temp[out_mask[:,0],p_id],(temp[istop,p_id]*extrap_factor))
temp_out = temp_out.reshape((len(temp_out), 1))
rho_out = np.append(density[out_mask[:,0],p_id],(density[istop,p_id]*extrap_factor**3))
rho_out = rho_out.reshape((len(rho_out), 1))
else:
temp_out = np.concatenate((temp[out_mask[:,0],p_id],(temp[istop,p_id]*extrap_factor)))
temp_out = temp_out.reshape((len(temp_out), 1))
rho_out = np.concatenate((density[out_mask[:,0],p_id],(density[istop,p_id]*extrap_factor**3)))
rho_out = rho_out.reshape((len(rho_out), 1))
if type(time_extrap) == numpy.float64:
time_out = np.append(time_[out_mask[:,0]],time_extrap)
time_out = time_out.reshape((len(time_out), 1))
ye_out = np.concatenate((ye[out_mask[:,0],p_id], np.tile(ye[istop,p_id],(1,))))
ye_out = ye_out.reshape((len(ye_out), 1))
else:
time_out = np.concatenate((time_[out_mask[:,0]],time_extrap))
time_out = time_out.reshape((len(time_out), 1))
ye_out = np.concatenate((ye[out_mask[:,0],p_id], np.tile(ye[istop,p_id],(len(time_extrap),))))
ye_out = ye_out.reshape((len(ye_out), 1))
if type(r2_factor) == numpy.float64:
nu1_flx_out = np.append(flxtot[out_mask[:,0],0,p_id],(flxtot[istop,0,p_id]*r2_factor))
nu1_flx_out = nu1_flx_out.reshape((len(nu1_flx_out), 1))
nu2_flx_out = np.append(flxtot[out_mask[:,0],1,p_id],(flxtot[istop,1,p_id]*r2_factor))
nu2_flx_out = nu2_flx_out.reshape((len(nu2_flx_out), 1))
nu3_flx_out = np.append(flxtot[out_mask[:,0],2,p_id],(flxtot[istop,2,p_id]*r2_factor))
nu3_flx_out = nu3_flx_out.reshape((len(nu3_flx_out), 1))
nu4_flx_out = np.append(flxtot[out_mask[:,0],3,p_id],(flxtot[istop,3,p_id]*r2_factor))
nu4_flx_out = nu4_flx_out.reshape((len(nu4_flx_out), 1))
else:
nu1_flx_out = np.concatenate((flxtot[out_mask[:,0],0,p_id],(flxtot[istop,0,p_id]*r2_factor)))
nu1_flx_out = nu1_flx_out.reshape((len(nu1_flx_out), 1))
nu2_flx_out = np.concatenate((flxtot[out_mask[:,0],1,p_id],(flxtot[istop,1,p_id]*r2_factor)))
nu2_flx_out = nu2_flx_out.reshape((len(nu2_flx_out), 1))
nu3_flx_out = np.concatenate((flxtot[out_mask[:,0],2,p_id],(flxtot[istop,2,p_id]*r2_factor)))
nu3_flx_out = nu3_flx_out.reshape((len(nu3_flx_out), 1))
nu4_flx_out = np.concatenate((flxtot[out_mask[:,0],3,p_id],(flxtot[istop,3,p_id]*r2_factor)))
nu4_flx_out = nu4_flx_out.reshape((len(nu4_flx_out), 1))
# Set fluxes to zero after some time corresponding to the end of neutrino emission
nu_mask = time_out[:, 0] > nu_time_stop
nu1_flx_out[nu_mask] = 0.0
nu2_flx_out[nu_mask] = 0.0
nu3_flx_out[nu_mask] = 0.0
nu4_flx_out[nu_mask] = 0.0
if type(time_extrap) == numpy.float64:
nu1_temp_out = np.concatenate((nu_temp[out_mask[:,0],0,p_id], np.tile(nu_temp[istop,0,p_id],(1,))))
nu1_temp_out = nu1_temp_out.reshape((len(nu1_temp_out), 1))
nu2_temp_out = np.concatenate((nu_temp[out_mask[:,0],1,p_id], np.tile(nu_temp[istop,1,p_id],(1,))))
nu2_temp_out = nu2_temp_out.reshape((len(nu2_temp_out), 1))
nu3_temp_out = np.concatenate((nu_temp[out_mask[:,0],2,p_id], np.tile(nu_temp[istop,2,p_id],(1,))))
nu3_temp_out = nu3_temp_out.reshape((len(nu3_temp_out), 1))
nu4_temp_out = np.concatenate((nu_temp[out_mask[:,0],3,p_id], np.tile(nu_temp[istop,3,p_id],(1,))))
nu4_temp_out = nu4_temp_out.reshape((len(nu4_temp_out), 1))
else:
nu1_temp_out = np.concatenate((nu_temp[out_mask[:,0],0,p_id], np.tile(nu_temp[istop,0,p_id],(len(time_extrap),))))
nu1_temp_out = nu1_temp_out.reshape((len(nu1_temp_out), 1))
nu2_temp_out = np.concatenate((nu_temp[out_mask[:,0],1,p_id], np.tile(nu_temp[istop,1,p_id],(len(time_extrap),))))
nu2_temp_out = nu2_temp_out.reshape((len(nu2_temp_out), 1))
nu3_temp_out = np.concatenate((nu_temp[out_mask[:,0],2,p_id], np.tile(nu_temp[istop,2,p_id],(len(time_extrap),))))
nu3_temp_out = nu3_temp_out.reshape((len(nu3_temp_out), 1))
nu4_temp_out = np.concatenate((nu_temp[out_mask[:,0],3,p_id], np.tile(nu_temp[istop,3,p_id],(len(time_extrap),))))
nu4_temp_out = nu4_temp_out.reshape((len(nu4_temp_out), 1))
time_out = np.transpose(time_out)
temp_out = np.transpose(temp_out)
rho_out = np.transpose(rho_out)
ye_out = np.transpose(ye_out)
nu1_flx_out = np.transpose(nu1_flx_out)
nu2_flx_out = np.transpose(nu2_flx_out)
nu3_flx_out = np.transpose(nu3_flx_out)
nu4_flx_out = np.transpose(nu4_flx_out)
nu1_temp_out = np.transpose(nu1_temp_out)
nu2_temp_out = np.transpose(nu2_temp_out)
nu3_temp_out = np.transpose(nu3_temp_out)
nu4_temp_out = np.transpose(nu4_temp_out)
th_out = np.concatenate((time_out,temp_out,rho_out,ye_out,nu1_flx_out,\
nu2_flx_out,nu3_flx_out,nu4_flx_out,nu1_temp_out,\
nu2_temp_out, nu3_temp_out, nu4_temp_out),axis=0)
(_,ia) = np.unique(th_out[0,:],return_index=True)
th_out = th_out[:,ia]
th_out = np.transpose(th_out)
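`np.unique` with `return_index=True` is what removes duplicate time stamps above: it returns the index of the first occurrence of each unique value, which is then used to slice every row of the stacked array. A sketch with toy data:

```python
import numpy as np

times = np.array([0.0, 0.5, 0.5, 1.0])                # duplicate at 0.5
data = np.vstack((times, [10.0, 20.0, 30.0, 40.0]))   # row 0 = time, row 1 = payload

# Indices of the first occurrence of each unique time, in sorted order.
_, first_idx = np.unique(data[0, :], return_index=True)
deduped = data[:, first_idx]
```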
if write_flag:
#Last time in th_out
t_write = th_out[-1,0]
th_fname = '%s%05.0f%s%04.0f%s' % (th_fname_base,plist[_id],'_',(t_write*1000),'ms')
#Open file and write formatted data in below
f_id = open( th_fname, 'w+' )
f_id.write('Tracer Particle %d\n' %_id)
f_id.write('{:15.7e}'.format(t_start)+' Start Time'+'\n')
f_id.write('{:15.7e}'.format(t_write)+' Stop Time'+'\n')
f_id.write('{:15.7e}'.format((1e-6)*(t_write-t_start))+' Initial Timestep'+'\n')
i_outstart = np.where( th_out[:,0] >= t_start )[0][0]
i_outstop = np.where( th_out[:,0] >= t_write )[0][0]
for i in range(i_outstart,i_outstop+1):
for j in range(0,len(th_out[0])):
if j==0 or j==1 or j==2 or j==3:
f_id.write('{:15.7e}'.format(th_out[i,j]))
else:
f_id.write('{:12.3e}'.format(th_out[i,j]))
f_id.write('\n')
f_id.close()
#Runs when extrap_window is an ndarray; the plot is drawn only if plot_flag is set and the window is non-empty
if type(extrap_window) == numpy.ndarray:
if plot_flag and extrap_window.any():
# Highlight the time range used for extrapolation on plot
# plot( axis_h(j,4), time(extrap_window) - time_bounce, zeros(size(extrap_window)), ...
# 'Color', color_array(n,:), ...
# 'Marker', 'o', ...
# 'MarkerSize', 4, ...
# 'MarkerEdgeColor', 'none', ...
# 'MarkerFaceColor', color_array(n,:), ...
# 'LineStyle', 'none', ...
# 'DisplayName', 'Range for extrap' );
#Plot the tau_rho_avg values in the extrap window, assign to handle4
handle4, = axis_h2.plot((time_[extrap_window]-time_bounce),tau_rho_avg[extrap_window], \
linestyle='None', marker='.', markersize=2.0, color=color_array[n],\
label=r'${{\tau}^{*}}_{\mathrm{exp}}(t_{f} =$ '+str(t_stop - time_bounce)+ ' s)')
'''###################
taurho_extrap_dict = {}
data_path = 'C:\\Users\\taylor\\Documents\\MATLAB\\B12\\B12-WH07\\tau_rho_extrap_values'+str(p_id+1)
taurho_extrap_dict = scio.loadmat(data_path, appendmat=True, variable_names=('tau_rho_extrap_vals','time_extrap_vals'))
tau_rho_extrap = taurho_extrap_dict['tau_rho_extrap_vals']
time_extrap_vals = taurho_extrap_dict['time_extrap_vals']
axis_h2.plot(time_extrap_vals,tau_rho_extrap,linestyle='None', marker='.', markersize=1.0,color='c')
'''#####################
#Get indices where time_out exceeds the time value at istop
ei = np.where(time_out[0,:]>time_[istop])[0]
extrap_time_array = time_out[0,ei] - time_bounce
extrap_temp_array = temp_out[0,ei]
handle5, = axis_h1.plot(extrap_time_array, extrap_temp_array, linewidth=1.5, color=color_array[n],\
label=r'$T(t; <{{\tau}^{*}}_{\mathrm{exp}}> =$ '+str(tau[n,p_id])+' s)')
# ^Plot the extrapolated temperature profile
'''############
tempout_extrap_dict = {}
data_path = 'C:\\Users\\taylor\\Documents\\MATLAB\\B12\\B12-WH07\\temp_out_extrap_values'+str(p_id+1)
tempout_extrap_dict = scio.loadmat(data_path, appendmat=True, variable_names=('tempout_extrap_vals','timeout_extrap_vals'))
temp_out_extrap = tempout_extrap_dict['tempout_extrap_vals']
timeout_extrap_vals = tempout_extrap_dict['timeout_extrap_vals']
axis_h1.plot(timeout_extrap_vals,temp_out_extrap,'c--',linewidth=1.5)
'''#################
plt.show()
#If the time at the minimum extrap_window value is less than t_stop,
#put both values in an array and subtract time_bounce, then plot the
#adiabatic[istop] values onto the plot at these times.
if t_stop > time_[min(extrap_window)]:
time_min_tstop = [time_[min(extrap_window)],t_stop]
time_min_tstop = np.asarray(time_min_tstop)
time_min_tstop = time_min_tstop.reshape((1,2))
tm_ts_tb = (time_min_tstop - time_bounce)
adi_istop_array = (np.tile(adiabatic[istop],(1,2)))
handle6, = axis_h3.plot(tm_ts_tb[0,:], adi_istop_array[0,:], linewidth = 0.5, linestyle='--',\
color=color_array[n], marker='o', markersize=6.0, \
markeredgecolor=color_array[n], markerfacecolor='None',\
label=adiabatic_string)
#Otherwise, plot the adiabatic[istop] value against the transposed array.
else:
time_min_tstop = [time_[min(extrap_window)],t_stop]
time_min_tstop = np.asarray(time_min_tstop)
time_min_tstop = time_min_tstop.reshape((2,1))
tm_ts_tb = time_min_tstop - time_bounce
handle6, = axis_h3.plot(tm_ts_tb[:,0], adiabatic[istop], linewidth = 0.5, \
linestyle='--',color=color_array[n], marker='o', markersize=6.0, \
markeredgecolor=color_array[n], markerfacecolor='None', \
label=adiabatic_string)
handle_flx1_ex,= axis_h2_1.plot(extrap_time_array,nu1_flx_out[0,ei],'k--', linewidth = 1.5, label='Extrapolated Electron Neutrino Flux')
handle_flx2_ex,= axis_h2_1.plot(extrap_time_array,nu2_flx_out[0,ei],'r--', linewidth = 1.5, label='Extrapolated Electron Anti-Neutrino Flux')
#handle_nutemp1_ex,= axis_h2_1.plot(extrap_time_array,nu1_temp_out[0,ei],'b--', linewidth = 1.5, label='Col 1 Temp Extrapolated')
#handle_nutemp2_ex,= axis_h2_1.plot(extrap_time_array,nu2_temp_out[0,ei],'g--', linewidth = 1.5, label='Col 2 Temp Extrapolated')
'''
##############
th_dict = {}
data_path = 'C:\\Users\\taylor\\Documents\\MATLAB\\B12\\B12-WH07\\th_out'+str(p_id+1)
th_dict = scio.loadmat(data_path, appendmat=True, variable_names=('time_out','nu1_flx_out','nu2_flx_out','nu1_temp_out','nu2_temp_out'))
timeout_MAT = th_dict['time_out']
nu1_flxout_MAT = th_dict['nu1_flx_out']
nu2_flxout_MAT = th_dict['nu2_flx_out']
nu1_tempout_MAT = th_dict['nu1_temp_out']
#!/usr/bin/env python3
# coding=utf-8
import csv
import os
from collections import defaultdict
from enum import IntEnum
cool_x = "❌"
cool_check = "✔️"
ignore_targets = [
'no_undefined_behavior',
'reading_uninitialized_value_lib']
class DetectionType(IntEnum):
COMPILER_WARNING = 1
STANDALONE_STATIC_ANALYSIS = 2
RUNTIME_CRASH = 3
EXTRA_DYNAMIC_ANALYSIS = 4
def print_debug_release_legend():
print("Debug (unoptimized) / RelWithDebInfo (optimized)\n")
def read_static_analysis():
results = {}
for root, subdirs, files in os.walk('.'):
for f in files:
if f.find("warnings_table.txt") != -1:
filename = root + "/" + f
results[filename] = csv.reader(
open(root + "/" + f), delimiter=' ')
return results
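`read_static_analysis` walks the tree and keeps every file whose name contains `warnings_table.txt`. The same discovery pattern in isolation, exercised against a temporary directory (the file names here are made up for the demo):

```python
import os
import tempfile

def find_files(root, needle):
    # Collect every file under `root` whose name contains `needle`,
    # mirroring the os.walk discovery loop above.
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if needle in name:
                hits.append(os.path.join(dirpath, name))
    return hits

with tempfile.TemporaryDirectory() as root:
    open(os.path.join(root, "a_warnings_table.txt"), "w").close()
    open(os.path.join(root, "results.txt"), "w").close()
    found = find_files(root, "warnings_table.txt")
```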
def process_static_analysis_results(results):
output_table = defaultdict(lambda: defaultdict(lambda: defaultdict(str)))
for filename, csv in results.items():
tool = ""
if "clang" in filename:
if "tidy" in filename:
tool = "clang-tidy"
elif "/clang/" in filename:
tool = "clang"
else:
# skip dynamic analysis warnings
continue
elif "gcc" in filename:
tool = "gcc"
elif "cppcheck" in filename:
tool = "cppcheck"
elif "msvc" in filename:
tool = "msvc"
else:
print("couldn't find tool for", filename)
exit(1)
debug_mode = -1
if "debug" in filename:
debug_mode = 1
if "rel_with_deb_info" in filename:
debug_mode = 0
for row in csv:
if row[0] in ignore_targets:
if len(row) > 1 and row[1] != "":
print("static analysis is broken in", filename, row)
exit(1)
continue
if len(row) == 2:
result = row[1]
elif len(row) > 2:
print("unknown data:", filename, row)
exit(1)
else:
result = ""
test_name = row[0].replace("_", " ")
# Clang warnings are different with dynamic analysis
if len(output_table[tool][test_name][debug_mode]) == 0 or (
output_table[tool][test_name][debug_mode] == "1" and len(result) > 0):
output_table[tool][test_name][debug_mode] = result
return output_table
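The triple-nested `defaultdict` used above auto-creates intermediate dicts on first access, so results can be assigned as `table[tool][test][mode]` without existence checks, and missing entries read back as the innermost default. A sketch (the keys here are illustrative, not from the real data):

```python
from collections import defaultdict

# Innermost default is str, so absent cells read back as "".
table = defaultdict(lambda: defaultdict(lambda: defaultdict(str)))

table["gcc"]["signed overflow"][1] = "-Wstrict-overflow"

# Missing keys materialize silently instead of raising KeyError.
missing = table["clang"]["signed overflow"][0]
```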
def print_compiler_warnings(output_table):
print("### 1.1.Compiler Warnings")
print("Compiler | Undefined Behavior Type | Warning | Warning Opt | Name")
print("--- | --- | --- | --- | ---")
for compiler, rest_0 in sorted(output_table.items()):
for test, rest_1 in sorted(rest_0.items()):
rest_1_dict = dict(rest_1)
if -1 in rest_1_dict:
continue
if 0 not in rest_1_dict:
print('error 0 not in rest_1_dict', rest_1_dict)
warning_opt = cool_check if len(rest_1_dict[0]) > 0 else cool_x
if 1 not in rest_1_dict:
print('error 1 not in rest_1_dict', rest_1_dict)
warning_unopt = cool_check if len(rest_1_dict[1]) > 0 else cool_x
warning_name = ""
if len(rest_1_dict[0]) > 0:
warning_name = rest_1_dict[0]
elif len(rest_1_dict[1]) > 0:
warning_name = rest_1_dict[1]
else:
warning_name = "n/a"
print(compiler, "|", test, "|", warning_unopt,
"|", warning_opt, "|", warning_name)
def print_tool_static_analysis(output_table):
print("### 1.2.Static Analyzers")
print("Tool | Undefined Behavior Type | Warning | Name")
print("--- | --- | --- | ---")
for tool, rest_0 in sorted(output_table.items()):
for test, rest_1 in sorted(rest_0.items()):
if -1 not in dict(rest_1):
continue
warning = rest_1[-1]
result = cool_check if len(warning) > 0 else cool_x
warning_name = ""
if len(warning) > 0 and warning != "1":
warning_name = warning
else:
warning_name = "n/a"
print(tool, "|", test, "|", result, "|", warning_name)
def print_static_analysis_summary(output_table):
print("### 1.3.Static Analysis Summary")
print_debug_release_legend()
summary_table = defaultdict(
lambda: defaultdict(
lambda: defaultdict(
lambda: defaultdict(bool))))
tool_line = "Undefined Behavior Type"
tool_table_line = "---"
tools = []
for tool, rest_0 in sorted(output_table.items()):
tools.append(tool)
tool_line += " | " + tool
tool_table_line += " | ---"
for test, rest_1 in sorted(rest_0.items()):
for optimization, warning in sorted(rest_1.items()):
summary_table[test][tool][optimization] = len(warning) > 0
print(tool_line)
print(tool_table_line)
for test, rest_0 in sorted(summary_table.items()):
output_line = test
tool_index = 0
for tool, rest_1 in sorted(rest_0.items()):
while tool != tools[tool_index]:
tool_index += 1
output_line += " | n/a"
tool_index += 1
output_line += " | "
if -1 in rest_1:
output_line += detection_to_str(rest_1[-1])
else:
assert(0 in rest_1)
assert(1 in rest_1)
output_line += detections_to_str(rest_1[1], rest_1[0])
while tool_index < len(tools):
tool_index += 1
output_line += " | n/a"
print(output_line)
def read_runtime_results():
results = {}
for root, subdirs, files in os.walk('.'):
for f in files:
if f.find("results.txt") != -1:
filename = root + "/" + f
results[filename] = csv.reader(
open(root + "/" + f), delimiter=' ')
return results
def process_runtime_results(runtime_results):
output_table = defaultdict(lambda: defaultdict(
lambda: defaultdict(lambda: defaultdict(str))))
for filename, csv in runtime_results.items():
if "valgrind" in filename:
analyzer = "valgrind"
elif "sanitize" in filename:
analyzer = ""
known_analyzers = ['address', 'leak', 'memory', 'undefined']
for known_analyzer in known_analyzers:
if known_analyzer in filename:
if len(analyzer) > 0:
analyzer += ','
analyzer += known_analyzer
else:
analyzer = ""
if "gcc" in filename:
compiler = "gcc"
elif "msvc" in filename:
compiler = "msvc"
else:
compiler = "clang"
if filename.find("debug") != -1:
debug_mode = 1
else:
debug_mode = 0
for row in csv:
if row[0] in ignore_targets:
if row[1] != "0":
print(filename, "broken")
exit(1)
continue
output_table[compiler][analyzer][row[0].replace(
"_", " ")][str(debug_mode)] = int(row[1])
return output_table
def detection_to_str(detection):
if detection:
return cool_check
else:
return cool_x
def detections_to_str(d, r):
if d and r:
return cool_check
elif not d and not r:
return cool_x
else:
left = detection_to_str(d)
right = detection_to_str(r)
return left + '/' + right
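The two helpers collapse a (Debug, RelWithDebInfo) detection pair into one table cell: both detected gives a single check, neither gives a single cross, and a mixed result is rendered as `left/right`. Restating that logic as a standalone sketch (`pair_to_str` is a hypothetical name for the demo):

```python
CHECK = "✔️"
CROSS = "❌"

def pair_to_str(debug_hit, release_hit):
    # Mirrors detections_to_str above: collapse agreement, split disagreement.
    if debug_hit and release_hit:
        return CHECK
    if not debug_hit and not release_hit:
        return CROSS
    return (CHECK if debug_hit else CROSS) + '/' + (CHECK if release_hit else CROSS)
```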
def return_codes_to_str(return_code_d, return_code_r):
return detections_to_str(return_code_d != 0, return_code_r != 0)
def print_runtime_crashes(output_table):
print("## 2.Runtime Crashes")
print("### 2.1.Runtime Crashes Return Codes")
print("Compiler | Undefined Behavior Type | Debug | RelWithDebInfo")
print("--- | --- | --- | ---")
no_tool_test_table = defaultdict(
lambda: defaultdict(lambda: defaultdict(str)))
compilers = []
for compiler, rest_0 in sorted(output_table.items()):
for tool, rest_1 in sorted(rest_0.items()):
if tool != "":
continue
compilers.append(compiler)
for test, rest_2 in sorted(rest_1.items()):
no_tool_test_table[test][compiler]["1"] = rest_2["1"]
no_tool_test_table[test][compiler]["0"] = rest_2["0"]
print(compiler + " | " + test + " | " +
str(rest_2["1"]) + " | " + str(rest_2["0"]))
print("")
print("### 2.2.Runtime Crashes Summary")
print_debug_release_legend()
tool_print_line = "Undefined Behavior"
tool_delim_print_line = "---"
for compiler in compilers:
tool_print_line += " | " + compiler
tool_delim_print_line += " | ---"
print(tool_print_line)
print(tool_delim_print_line)
for test, rest_0 in sorted(no_tool_test_table.items()):
line = test
line_r = ""
compiler_index = 0
for compiler, rest_1 in sorted(rest_0.items()):
while compiler != compilers[compiler_index]:
compiler_index += 1
line += " | n/a"
line_r += " | n/a"
compiler_index += 1
line += ' | '
line += return_codes_to_str(rest_1["1"], rest_1["0"])
while compiler_index < len(compilers):
compiler_index += 1
line += " | n/a"
print(line)
def print_dynamic_analysis(output_table):
print("## 3.Extra Dynamic Analysis")
print("### 3.1.Extra Dynamic Analysis Return Codes")
print("Compiler | Tool | Undefined Behavior Type | Debug | RelWithDebInfo")
print("--- | --- | --- | --- | ---")
tool_test_table = defaultdict(lambda: defaultdict(
lambda: defaultdict(lambda: defaultdict(lambda: defaultdict(str)))))
clang_tools = []
gcc_tools = []
for compiler, rest_0 in sorted(output_table.items()):
for tool, rest_1 in sorted(rest_0.items()):
if tool != "":
if compiler == "clang":
clang_tools.append(tool)
else:
gcc_tools.append(tool)
for test, rest_2 in sorted(rest_1.items()):
tool_test_table[test][compiler][tool]["1"] = rest_2["1"]
tool_test_table[test][compiler][tool]["0"] = rest_2["0"]
print(compiler + " | " + tool + " | " + test + " | " +
str(rest_2["1"]) + " | " + str(rest_2["0"]))
def print_extra_dynamic_analysis_by_compiler(
target_compiler, header_str, tools):
print("")
print(
"### " +
header_str +
"Extra Dynamic Analysis Summary " +
target_compiler.capitalize())
print_debug_release_legend()
dynamic_analysis_summary_row = "--- |" * len(tools) + " ---"
print("Undefined Behavior Type" +
''.join([' | ' + str(x) for x in tools]))
print(dynamic_analysis_summary_row)
for test, rest_0 in sorted(tool_test_table.items()):
line = test
for tool in tools:
line += " | "
if target_compiler in rest_0 and tool in rest_0[target_compiler]:
line += return_codes_to_str(
rest_0[target_compiler][tool]["1"],
rest_0[target_compiler][tool]["0"])
else:
line += "n/a"
print(line)
print_extra_dynamic_analysis_by_compiler("clang", "3.2.1.", clang_tools)
print_extra_dynamic_analysis_by_compiler("gcc", "3.2.2.", gcc_tools)
def print_runtime_results():
runtime_results = read_runtime_results()
output_table = process_runtime_results(runtime_results)
print_runtime_crashes(output_table)
print("")
print("")
print_dynamic_analysis(output_table)
return output_table
def print_static_analysis_results():
static_analysis_results = read_static_analysis()
output_table = process_static_analysis_results(static_analysis_results)
print("## 1.Static Analysis")
print_compiler_warnings(output_table)
print("")
print_tool_static_analysis(output_table)
print("")
print_static_analysis_summary(output_table)
return output_table
def print_overall_results(static_analysis, runtime_analysis):
print("### 4.Overall Summary")
print("Undefined Behavior Type | Compiler Warning | Standalone Static Analysis | Runtime Crash | Extra Dynamic Analysis")
print("--- | --- | --- | --- | ---")
table = defaultdict(lambda: defaultdict(bool))
for tool, rest_0 in static_analysis.items():
for test, rest_1 in rest_0.items():
for optimization, result in rest_1.items():
if tool == "clang" or tool == "gcc" or tool == "msvc":
detection_type = DetectionType.COMPILER_WARNING
else:
detection_type = DetectionType.STANDALONE_STATIC_ANALYSIS
table[test][detection_type] = table[test][detection_type] or len(
result) > 0
for compiler, rest_0 in runtime_analysis.items():
for tool, rest_1 in rest_0.items():
for test, rest_2 in rest_1.items():
for optimization, result in rest_2.items():
if len(tool) == 0:
detection_type = DetectionType.RUNTIME_CRASH
else:
detection_type = DetectionType.EXTRA_DYNAMIC_ANALYSIS
table[test][detection_type] = 0 != result or table[test][detection_type]
for test, rest_0 in sorted(table.items()):
line = test
for detection_type in DetectionType:
line += ' | '
if detection_type in rest_0:
line += detection_to_str(rest_0[detection_type])
else:
line += 'n/a'
print(line)
if __name__ == "__main__":
    static_analysis = print_static_analysis_results()
    print("")
    runtime_analysis = print_runtime_results()
    print("")
    print_overall_results(static_analysis, runtime_analysis)
base class version always returns an empty Python list.
"""
if self.GetImageList(wx.IMAGE_LIST_SMALL):
raise Exception("List control has an image list, OnGetItemImage should be overridden.")
return []
def OnGetItemColumnImage(self, item, column=0):
"""
This function **must** be overloaded in the derived class for a control with
``ULC_VIRTUAL`` and ``ULC_REPORT`` styles. It should return a Python list of
indexes representing the images associated with the input item, or an empty list
for no images.
:param `item`: an integer specifying the item index.
:note: The base class version always returns an empty Python list.
"""
if column == 0:
return self.OnGetItemImage(item)
return []
def OnGetItemAttr(self, item):
"""
This function may be overloaded in the derived class for a control with
``ULC_VIRTUAL`` style. It should return the attribute for the specified
item or ``None`` to use the default appearance parameters.
:param `item`: an integer specifying the item index.
:note:
:class:`UltimateListCtrl` will not delete the pointer or keep a reference of it.
You can return the same :class:`UltimateListItemAttr` pointer for every
:meth:`~UltimateListCtrl.OnGetItemAttr` call.
:note: The base class version always returns ``None``.
"""
if item < 0 or item >= self.GetItemCount():
raise Exception("Invalid item index in OnGetItemAttr()")
# no attributes by default
return None
def OnGetItemCheck(self, item):
"""
This function may be overloaded in the derived class for a control with
``ULC_VIRTUAL`` style. It should return whether a checkbox-like item or
a radiobutton-like item is checked or unchecked.
:param `item`: an integer specifying the item index.
:note: The base class version always returns an empty list.
"""
return []
def OnGetItemColumnCheck(self, item, column=0):
"""
This function **must** be overloaded in the derived class for a control with
``ULC_VIRTUAL`` and ``ULC_REPORT`` style. It should return whether a
checkbox-like item or a radiobutton-like item in the column header is checked
or unchecked.
:param `item`: an integer specifying the item index.
:note: The base class version always returns an empty Python list.
"""
if column == 0:
return self.OnGetItemCheck(item)
return []
def OnGetItemKind(self, item):
"""
This function **must** be overloaded in the derived class for a control with
``ULC_VIRTUAL`` style. It should return the item kind for the input item.
:param `item`: an integer specifying the item index.
:note: The base class version always returns 0 (a standard item).
:see: :meth:`~UltimateListCtrl.SetItemKind` for a list of valid item kinds.
"""
return 0
def OnGetItemColumnKind(self, item, column=0):
"""
This function **must** be overloaded in the derived class for a control with
``ULC_VIRTUAL`` style. It should return the item kind for the input item in
the header window.
:param `item`: an integer specifying the item index;
:param `column`: the column index.
:note: The base class version always returns 0 (a standard item).
:see: :meth:`~UltimateListCtrl.SetItemKind` for a list of valid item kinds.
"""
if column == 0:
return self.OnGetItemKind(item)
return 0
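The `OnGetItemColumn*` overloads above all share one dispatch rule: column 0 delegates to the per-item callback, other columns return a neutral default. A wx-free sketch of that pattern (class and method names are illustrative, not the real `UltimateListCtrl` API):

```python
# Minimal sketch of the column-0 delegation pattern used by
# OnGetItemColumnImage/Check/Kind: column 0 falls back to the per-item
# callback, every other column gets a neutral default.
class VirtualRows:
    def on_get_item_kind(self, item):
        # per-item override point; 0 means "standard item"
        return 1 if item % 2 else 0

    def on_get_item_column_kind(self, item, column=0):
        if column == 0:
            return self.on_get_item_kind(item)
        return 0

rows = VirtualRows()
first_col = rows.on_get_item_column_kind(3, column=0)  # delegates to on_get_item_kind
other_col = rows.on_get_item_column_kind(3, column=2)  # neutral default
```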
def SetItemCount(self, count):
"""
Sets the total number of items we handle.
:param `count`: the total number of items we handle.
"""
if not self._mainWin.IsVirtual():
raise Exception("This is for virtual controls only")
self._mainWin.SetItemCount(count)
def RefreshItem(self, item):
"""
Redraws the given item.
:param `item`: an integer specifying the item index;
:note: This is only useful for the virtual list controls as without calling
this function the displayed value of the item doesn't change even when the
underlying data does change.
"""
self._mainWin.RefreshLine(item)
def RefreshItems(self, itemFrom, itemTo):
"""
Redraws the items between `itemFrom` and `itemTo`.
The starting item must be less than or equal to the ending one.
Just as :meth:`~UltimateListCtrl.RefreshItem` this is only useful for virtual list controls
:param `itemFrom`: the first index of the refresh range;
:param `itemTo`: the last index of the refresh range.
"""
self._mainWin.RefreshLines(itemFrom, itemTo)
#
# Generic UltimateListCtrl is more or less a container for two other
# windows which drawings are done upon. These are namely
# 'self._headerWin' and 'self._mainWin'.
# Here we override 'virtual wxWindow::Refresh()' to mimic the
# behaviour UltimateListCtrl has under wxMSW.
#
def Refresh(self, eraseBackground=True, rect=None):
"""
Causes this window, and all of its children recursively (except under wxGTK1
where this is not implemented), to be repainted.
:param `eraseBackground`: If ``True``, the background will be erased;
:param `rect`: If not ``None``, only the given rectangle will be treated as damaged.
:note: Note that repainting doesn't happen immediately but only during the next
event loop iteration, if you need to update the window immediately you should
use :meth:`~UltimateListCtrl.Update` instead.
:note: Overridden from :class:`PyControl`.
"""
if not rect:
# The easy case, no rectangle specified.
if self._headerWin:
self._headerWin.Refresh(eraseBackground)
if self._mainWin:
self._mainWin.Refresh(eraseBackground)
else:
# Refresh the header window
if self._headerWin:
rectHeader = self._headerWin.GetRect()
rectHeader.Intersect(rect)
if rectHeader.GetWidth() and rectHeader.GetHeight():
x, y = self._headerWin.GetPosition()
rectHeader.OffsetXY(-x, -y)
self._headerWin.Refresh(eraseBackground, rectHeader)
# Refresh the main window
if self._mainWin:
rectMain = self._mainWin.GetRect()
rectMain.Intersect(rect)
if rectMain.GetWidth() and rectMain.GetHeight():
x, y = self._mainWin.GetPosition()
rectMain.OffsetXY(-x, -y)
self._mainWin.Refresh(eraseBackground, rectMain)
def Update(self):
"""
Calling this method immediately repaints the invalidated area of the window
and all of its children recursively while this would usually only happen when
the flow of control returns to the event loop.
:note: This function doesn't invalidate any area of the window so nothing
happens if nothing has been invalidated (i.e. marked as requiring a redraw).
Use :meth:`~UltimateListCtrl.Refresh` first if you want to immediately redraw the window unconditionally.
:note: Overridden from :class:`PyControl`.
"""
self._mainWin.ResetVisibleLinesRange(True)
wx.PyControl.Update(self)
def GetEditControl(self):
"""
Returns a pointer to the edit :class:`UltimateListTextCtrl` if the item is being edited or
``None`` otherwise (it is assumed that no more than one item may be edited
simultaneously).
"""
retval = None
if self._mainWin:
retval = self._mainWin.GetEditControl()
return retval
def Select(self, idx, on=True):
"""
Selects/deselects an item.
:param `idx`: the index of the item to select;
:param `on`: ``True`` to select the item, ``False`` to deselect it.
"""
item = CreateListItem(idx, 0)
item = self._mainWin.GetItem(item, 0)
if not item.IsEnabled():
return
if on:
state = ULC_STATE_SELECTED
else:
state = 0
self.SetItemState(idx, state, ULC_STATE_SELECTED)
def Focus(self, idx):
"""
Focus and show the given item.
:param `idx`: the index of the item to be focused.
"""
self.SetItemState(idx, ULC_STATE_FOCUSED, ULC_STATE_FOCUSED)
self.EnsureVisible(idx)
def GetFocusedItem(self):
"""Return the currently focused item or -1 if none is focused."""
return self.GetNextItem(-1, ULC_NEXT_ALL, ULC_STATE_FOCUSED)
def GetFirstSelected(self):
"""Return first selected item, or -1 when none is selected."""
return self.GetNextSelected(-1)
def GetNextSelected(self, item):
"""
Returns subsequent selected items, or -1 when no more are selected.
:param `item`: the index of the item.
"""
return self.GetNextItem(item, ULC_NEXT_ALL, ULC_STATE_SELECTED)
def IsSelected(self, idx):
"""
Returns ``True`` if the item is selected.
:param `idx`: the index of the item to check for selection.
"""
return (self.GetItemState(idx, ULC_STATE_SELECTED) & ULC_STATE_SELECTED) != 0
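`IsSelected` tests one flag inside a state bitmask. The same test in isolation, with illustrative flag values (not the real `ULC_STATE_*` constants):

```python
# Sketch of the flag test IsSelected() performs: the state word is a
# bitmask, so a flag is "on" only if the masked bits are non-zero.
STATE_FOCUSED = 0x0002
STATE_SELECTED = 0x0004  # illustrative value, not the real constant

state = STATE_SELECTED | STATE_FOCUSED          # two flags set at once
is_selected = (state & STATE_SELECTED) != 0     # True: bit is set
is_dropped = (state & 0x0001) != 0              # False: bit is clear
```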
def IsItemChecked(self, itemOrId, col=0):
"""
Returns whether an item is checked or not.
:param `itemOrId`: an instance of :class:`UltimateListItem` or the item index;
:param `col`: the column index to which the input item belongs to.
"""
item = CreateListItem(itemOrId, col)
return self._mainWin.IsItemChecked(item)
def IsItemEnabled(self, itemOrId, col=0):
"""
Returns whether an item is enabled or not.
:param `itemOrId`: an instance of :class:`UltimateListItem` or the item index;
:param `col`: the column index to which the input item belongs to.
"""
item = CreateListItem(itemOrId, col)
return self._mainWin.IsItemEnabled(item)
def GetItemKind(self, itemOrId, col=0):
"""
Returns the item kind.
:param `itemOrId`: an instance of :class:`UltimateListItem` or the item index;
:param `col`: the column index to which the input item belongs to.
:see: :meth:`~UltimateListCtrl.SetItemKind` for a list of valid item kinds.
"""
item = CreateListItem(itemOrId, col)
return self._mainWin.GetItemKind(item)
def SetItemKind(self, itemOrId, col=0, kind=0):
"""
Sets the item kind.
:param `itemOrId`: an instance of :class:`UltimateListItem` or the item index;
:param `col`: the column index to which the input item belongs to;
:param `kind`: may be one of the following integers:
=============== ==========================
Item Kind Description
=============== ==========================
0 A normal item
1 A checkbox-like item
2 A radiobutton-type item
=============== ==========================
"""
item = CreateListItem(itemOrId, col)
return self._mainWin.SetItemKind(item, kind)
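The kind codes from the `SetItemKind` table can be kept readable with a small lookup; the mapping below just restates the docstring table (the control itself only stores the integer):

```python
# The item-kind codes documented in SetItemKind, as a name lookup.
ITEM_KINDS = {
    0: "normal item",
    1: "checkbox-like item",
    2: "radiobutton-type item",
}

kind_name = ITEM_KINDS[1]
```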
def EnableItem(self,
# Repo: Avdbergnmf/DataProcessing_DrossRemovalThesis
#!/usr/bin/env python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
import pingouin as pg
from matplotlib.ticker import MaxNLocator
from statsmodels.graphics.factorplots import interaction_plot
import logging
log = logging.getLogger()
'''
This class outputs figures based on the data provided to it.
It serves as a tool for making figures containing plots, based on the
dataframes (df) that are given as input.
'''
class PlotData(object):
def __init__(self, outputPath, doSwarm):
# Settings:
self.titlefontsize = 15
self.save = True
self.trimTime = True # Set this to False to keep the leading part of the data (overflow from the previous repetition)
# Class Params
self.doSwarm = doSwarm
self.outputPath = outputPath # Path where the figures will be saved to
self.labels = ["MC","HC","MN","HN"] # labels used for plotting, M=Monitor,H=HMD;C=Cues,N=No cues
self.figCount = 1 # To keep figures from overlapping
# Prettier Colors : https://matplotlib.org/3.1.1/gallery/style_sheets/style_sheets_reference.html
plt.style.use('ggplot')
self.colors = plt.rcParams['axes.prop_cycle'].by_key()['color'] # get the colors of this theme as I think they look nicer
plt.style.use('default')
########### Time Plots ###########
def timePlot(self, ax, df, dfXColumn, dfYColumns, legend = []):
# dfXColumn is the header of the column we want to plot on the X axis
# dfYColumns is a list of headers of all the columns we want to plot on the y axis
if isinstance(dfYColumns,str): # if it's not a list, wrap it in a list so the loop below doesn't fail
dfYColumns = [dfYColumns]
xValues = df[dfXColumn]
# Trim off the leading data
if self.trimTime: startElement = self.findStartTime(xValues)+1 # cut off the leading part of the data (part of the previous dataset flows over into the next one)
else: startElement = 0
plt.sca(ax) # set the current axes
for dfYColumn in dfYColumns:
yValues = df[dfYColumn]
plt.plot(xValues[startElement:], yValues[startElement:], label = dfYColumn)
return plt
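What `timePlot` does can be shown with a self-contained sketch: trim a leading slice, then plot each requested y-column against the shared x-column. The DataFrame and `startElement` below are made up; `startElement` stands in for `findStartTime()+1`:

```python
# Self-contained sketch of the timePlot() logic, headless for testing.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({"t": [0, 1, 2, 3, 4],
                   "a": [5, 4, 3, 2, 1],
                   "b": [1, 2, 3, 4, 5]})
startElement = 2  # pretend the first two samples belong to the previous repetition

fig, ax = plt.subplots()
for col in ["a", "b"]:
    # one line per y-column, all sharing the trimmed x values
    ax.plot(df["t"][startElement:], df[col][startElement:], label=col)
ax.legend()
n_lines = len(ax.get_lines())
```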
def timeFigure(self,df,figureInfo, fig=None,ax=None, supTitle = ""):
if fig is None and ax is None:
fig,ax = plt.subplots()
# get the info
title = figureInfo.get('title')
legend = figureInfo.get('legend')
X = figureInfo.get('xAxis')
Ys = figureInfo.get('yAxis')
xlabel = figureInfo.get('xLabel')
ylabel = figureInfo.get('yLabel')
# set in figure
ax.set_title(title)
self.timePlot(ax, df, X, Ys)
if len(supTitle) > 0:
fig.suptitle(supTitle)
currhandles, currlabels = ax.get_legend_handles_labels()
if isinstance(legend, list):
ax.legend(loc='best', handles = currhandles, labels = legend)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.grid()
#ax.autoscale(enable=True)
return fig,ax
def addTimeSubPlot(self,df,figureInfo,fig,ax = None):
if ax is None:
ax = fig.axes[0]
geometry = ax.get_geometry()
tupleAdd = lambda t1,t2: tuple(np.array(t1)+np.array(t2))
geometry1 = tupleAdd((1,0,0),geometry)
nrows = geometry1[0]
ncols = geometry1[1]
geometry2 = tupleAdd((0,0,1),geometry1)
ax.change_geometry(*geometry1)
ax2 = fig.add_subplot(*geometry2)
else:
ax2 = ax
# set on gridspec
fig,ax2 = self.timeFigure(df,figureInfo,fig,ax2)
#allaxes = fig.get_axes()
#print(len(allaxes))
plt.subplots_adjust(top = 2, right = 2, hspace = 0.5)
return fig,ax2
########### Bar Plots/Boxplots ###########
def makeBarPlot(self, df, title = '', ylabel = '', save = True):
# Set class variables
self.title = title
### Initialize
self.newfig() # Start a new figure
ax = sns.barplot(x="Cues",y='value',hue="Display",data=df, palette="Set1")
fig = ax.get_figure()
# Figure Formatting
ax.set_title(title, fontsize = self.titlefontsize)
#ax.set_xlabel('Condition')
ax.set_ylabel(ylabel)
#ax.yaxis.set_major_locator(MaxNLocator(integer=True)) # only show ints on y-axis
if self.save:
self.saveFig(title, fig)
return fig , ax
def makeBoxPlot(self, rDataList, anova_table, title = '', ylabel = ''):
# Settings
a = 0.05 # criteria p < a for significance.
self.maxvalue = rDataList['value'].max()
# Set class variables
self.title = title
fig = self.newfig() # Start a new figure
if self.doSwarm: # Make the scatter plots
swarmplt = sns.swarmplot(x="Cues", y="value",hue="Display", size=6, marker="D",linewidth=1,edgecolors="black", dodge=True, data=rDataList, palette="Set1")
ax = sns.boxplot(x="Cues", y="value", hue="Display",dodge=True, data = rDataList, palette="Set1")
ax,legendBbox = self.formatBoxPlot(ax,fig)
self.annotateboxplot(fig, anova_table, legendBbox, a) # Annotate the significant p values into the plot
# Figure Formatting
ax.set_title(title, fontsize = self.titlefontsize)
#fig.subplots_adjust(top=0.80)
ax.set_ylabel(ylabel)
if self.save:
self.saveFig(title, fig)
return fig , ax
def formatBoxPlot(self, ax, fig = None):
if fig is None: fig = self.fig
### Format Plot
#ax.set_aspect(self.get_aspect(ax)*1.5) # Set aspect ratio so it looks cleaner
# Set Correct Legend
handles, labels = ax.get_legend_handles_labels()
leg = ax.legend(handles[0:2],["Monitor","HMD"], loc="upper left", bbox_to_anchor=(1.1, 0.6), borderaxespad=0.) # Set custom legend
legendBbox = leg.get_tightbbox(fig.canvas.get_renderer())
return ax, legendBbox
def makeVdLaanPlot(self, usefullness, satisfying, title = ''):
useBoxPlotPercentiles = True # Use box-plot percentiles for the whiskers instead of mean +- std
# Settings
capsize = 0.02 # of the whiskers
plotlim = 2.1
markersize = 4
linewidth = 1.5
dashspacing = 8
#axLimits = [-.1,2,-.1,2]
axLimits = [-2,2,-2,2]
ticks = [-2,-1,0,1,2]
#ticks = [-0.5,0,0.5,1,1.5,2]
# Set class variables
self.title = title
### Plot Data
# Throw some boxplots in a new plot to get the values of the whiskers
tempfig = self.newfig()
# Start a new figure
vdlfig = self.newfig(None)
#colors = [self.colors[0],self.colors[1],self.colors[0],self.colors[1]]
colors = ['royalblue','darkred','deepskyblue','indianred']
alphas = [1,1,1,1]
markers = ["^","^","s","s"]
plt.figure(vdlfig.number)
# x,y=0 middle lines
plt.plot([-plotlim,plotlim],[0,0],"k--", linewidth=linewidth/2,dashes=(dashspacing, dashspacing))
plt.plot([0,0],[-plotlim,plotlim],"k--", linewidth=linewidth/2,dashes=(dashspacing, dashspacing))
if self.doSwarm: # Make the scatter plots (plotted in a separate loop so the other information is drawn on top and isn't obstructed)
for i in range(4):
sati = [satisfying[x][i] for x in range(len(satisfying))]
usei = [usefullness[x][i] for x in range(len(usefullness))]
swarmplt, = plt.plot(sati,usei,linewidth=0,marker=markers[i],mfc = "w", color = colors[i], alpha = alphas[i], markersize = markersize)
for i in range(4):
sati = [satisfying[x][i] for x in range(len(satisfying))]
usei = [usefullness[x][i] for x in range(len(usefullness))]
plt.figure(vdlfig.number)
if useBoxPlotPercentiles:
midLabel = 'Median'
# Get the boxplot data
plotmetric = 'quartile' # 'quartile' or 'whisker'
labels = ["sat","use"]
plt.figure(tempfig.number)
bp = plt.boxplot([sati,usei],labels=labels)
bpdata = self.get_box_plot_data(bp,labels)
plt.figure(vdlfig.number)
# Plot Medians
plt.plot(bpdata['median'][0],bpdata['median'][1],marker=markers[i], color = colors[i], alpha = alphas[i])
# Plot middle line
mid, = plt.plot([bpdata['lower_{}'.format(plotmetric)][0],bpdata['upper_{}'.format(plotmetric)][0]],
[bpdata['median'][1],bpdata['median'][1]], color = colors[i], linewidth=linewidth)
plt.plot([bpdata['median'][0],bpdata['median'][0]],
[bpdata['lower_{}'.format(plotmetric)][1],bpdata['upper_{}'.format(plotmetric)][1]], color = colors[i],linewidth=linewidth, alpha = alphas[i])
# Plot Whiskers
plt.plot([bpdata['median'][0]-capsize,bpdata['median'][0]+capsize],
[bpdata['lower_{}'.format(plotmetric)][1],bpdata['lower_{}'.format(plotmetric)][1]], color = colors[i],linewidth=linewidth, alpha = alphas[i])
plt.plot([bpdata['median'][0]-capsize,bpdata['median'][0]+capsize],
[bpdata['upper_{}'.format(plotmetric)][1],bpdata['upper_{}'.format(plotmetric)][1]], color = colors[i],linewidth=linewidth, alpha = alphas[i])
plt.plot([bpdata['lower_{}'.format(plotmetric)][0],bpdata['lower_{}'.format(plotmetric)][0]],
[bpdata['median'][1]-capsize,bpdata['median'][1]+capsize], color = colors[i],linewidth=linewidth, alpha = alphas[i])
plt.plot([bpdata['upper_{}'.format(plotmetric)][0],bpdata['upper_{}'.format(plotmetric)][0]],
[bpdata['median'][1]-capsize,bpdata['median'][1]+capsize], color = colors[i],linewidth=linewidth, alpha = alphas[i])
else:
midLabel = 'Mean' # used for the legend
# Plot Means
means = [np.mean(sati),np.mean(usei)]
variances = [np.std(sati),np.std(usei)]
plt.plot(means[0],means[1], marker=markers[i],markersize = markersize * 2, color = colors[i])
# Plot Variances
mid, = plt.plot([means[0]+variances[0],means[0]-variances[0]],
[means[1],means[1]], color = colors[i], linewidth=linewidth, alpha = alphas[i])
plt.plot([means[0],means[0]],
[means[1]+variances[1],means[1]-variances[1]], color = colors[i],linewidth=linewidth, alpha = alphas[i])
# Plot Whiskers
plt.plot([means[0]-capsize,means[0]+capsize],
[means[1]-variances[1],means[1]-variances[1]], color = colors[i],linewidth=linewidth, alpha = alphas[i])
plt.plot([means[0]-capsize,means[0]+capsize],
[means[1]+variances[1],means[1]+variances[1]], color = colors[i],linewidth=linewidth, alpha = alphas[i])
plt.plot([means[0]-variances[0],means[0]-variances[0]],
[means[1]-capsize,means[1]+capsize], color = colors[i],linewidth=linewidth, alpha = alphas[i])
plt.plot([means[0]+variances[0],means[0]+variances[0]],
[means[1]-capsize,means[1]+capsize], color = colors[i],linewidth=linewidth, alpha = alphas[i])
# Get the axes object
ax = vdlfig.axes[0]
### Format Plot
ax.set_xlim(axLimits[0],axLimits[1])
ax.set_ylim(axLimits[2],axLimits[3])
plt.xticks(ticks)
plt.yticks(ticks)
ax.set_aspect(1)
ax.set_title(title, fontsize = self.titlefontsize)
#vdlfig.suptitle(title)
ax.set_xlabel('Satisfying')
ax.set_ylabel('Usefulness')
# Legend
# Old version, kept here for reference
# =============================================================================
# legend_elements = [plt.Line2D([0], [0], color=colors[0], lw=3, label='Monitor'),
# plt.Line2D([0], [0], color=colors[1], lw=3, label='HMD'),
# plt.Line2D([0], [0], lw = 0, mfc = "w", marker=markers[0], color='k', label='With Cues', markersize=6),
# plt.Line2D([0], [0], lw = 0, mfc = "w", marker=markers[2], color='k', label='Without Cues', markersize=6),
# plt.Line2D([0], [0], lw = 0, marker="o", color='k', label=midLabel, markersize=6)]
# =============================================================================
legendlabels = ["Mon+Cues","HMD+Cues","Mon, No cues","HMD, No cues"]
legend_elements = [plt.Line2D([0], [0], color=colors[i], marker=markers[i], lw=2, label=legendlabels[i]) for i in range(4)]
legend_elements = self.swapPositions(legend_elements,1,2)
leg = plt.legend(handles=legend_elements)
legendBbox = leg.get_tightbbox(vdlfig.canvas.get_renderer())
#self.annotateboxplot(bpfig, anova_table_u, legendBbox_u, a) # Annotate the significant p values into the plot
if self.save:
self.saveFig(title, vdlfig)
return vdlfig , ax
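The quartile whiskers drawn in `makeVdLaanPlot` are read back from a throwaway boxplot figure; the same quartiles can be obtained directly with `numpy.percentile`. A sketch with illustrative data:

```python
# Quartiles without a throwaway boxplot: numpy's percentile gives the
# same lower quartile, median, and upper quartile directly.
import numpy as np

values = [1.0, 2.0, 2.5, 3.0, 4.0, 5.0, 9.0]
q1, median, q3 = np.percentile(values, [25, 50, 75])
```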
def swapPositions(self, list, pos1, pos2):
list[pos1], list[pos2] = list[pos2], list[pos1]
return list
def interactionPlot(self,df, title, valuecol = 'value'):
fig = self.newfig()
ax = plt.gca()
fig = interaction_plot(df['Cues'],df['Display'], df[valuecol], ax = ax, markers=['^','s'], ms=10)
ax.set_title(title)
fig.savefig(self.outputPath + "Figures/Interactions/" + title + "_interactionplot" + ".png", bbox_inches='tight',dpi=300)
return ax
def qqPlot(self, idata, title, label, pvalue):
fig = self.newfig()
ax = plt.gca()
pg.qqplot(idata, dist='norm', ax = ax)
ititle = title + ", qq-plot: " + label
ax.set_title(ititle)
plt.text(0.8, 0.2,"p = {}".format(np.round(pvalue,5)), ha='center', va='center', transform = ax.transAxes)
fig.savefig(self.outputPath + "Figures/" + str(title) + ".png", bbox_inches='tight',dpi=300)
return ax
########### ANNOTATIONS ###########
def annotateboxplot(self, fig, anova_table, legendBbox = [], a = 0.05): # Annotate p-values into the boxplots
# Get the correct figure
plt.figure(fig.number)
ax = plt.gca()
pkey = "Pr > F"
p1 = anova_table[pkey]["Cues"]
p2 = anova_table[pkey]["Display"]
p3 = anova_table[pkey]["Cues:Display"]
if p1 < a:
self.topannotation(0, 1, p1, ax)
if p2 < a:
import filecmp
import os
import sys
import unittest as unittest
import numpy as np
from qd.cae.dyna import *
class TestDynaModule(unittest.TestCase):
"""Tests the dyna module."""
def is_almost_equal(self, val1, val2, tolerance):
return np.abs(val1 - val2) < tolerance
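numpy already ships this tolerance check: `np.isclose` compares with an absolute (and relative) tolerance, matching the helper above when the relative tolerance is zeroed out:

```python
# np.isclose with rtol=0 reduces to the |a - b| < atol check above.
import numpy as np

same = bool(np.isclose(1.0000001, 1.0, rtol=0.0, atol=1e-6))
different = bool(np.isclose(1.1, 1.0, rtol=0.0, atol=1e-6))
```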
def assertCountEqual(self, *args, **kwargs):
'''Redefined because assertCountEqual does not exist in Python 2's unittest.'''
if (sys.version_info[0] >= 3):
super(TestDynaModule, self).assertCountEqual(*args, **kwargs)
else:
super(TestDynaModule, self).assertItemsEqual(*args, **kwargs)
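The shim above dispatches between the Python 2 and Python 3 spellings of the same unittest assertion. The version check can also be expressed as a name lookup (the mapping is illustrative):

```python
# Pick the unittest method name by Python major version, as the shim does.
import sys

method_name = ("assertCountEqual" if sys.version_info[0] >= 3
               else "assertItemsEqual")
```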
def test_dyna_d3plot(self):
d3plot_filepath = "test/d3plot"
d3plot_modes = ["inner", "mid", "outer", "max", "min", "mean"]
d3plot_vars = ["disp", "vel", "accel",
"stress", "strain", "plastic_strain", "stress", "stress_mises",
"history 1 shell", "history 1 solid"]
element_result_list = [None,
"plastic_strain",
lambda elem: elem.get_plastic_strain()[-1]]
part_ids = [1]
# D3plot loading/unloading
d3plot = D3plot(d3plot_filepath)
for mode in d3plot_modes: # load every var with every mode
d3plot = D3plot(d3plot_filepath)
vars2 = ["%s %s" % (var, mode) for var in d3plot_vars]
for var in vars2:
d3plot.read_states(var)
d3plot = D3plot(d3plot_filepath, read_states=vars2) # all at once
d3plot = D3plot(d3plot_filepath)
d3plot.read_states("disp")
d3plot.clear("disp")
self.assertEqual(len(d3plot.get_nodeByIndex(1).get_disp()), 0)
d3plot.read_states("vel")
d3plot.clear("vel")
self.assertEqual(len(d3plot.get_nodeByIndex(1).get_vel()), 0)
d3plot.read_states("accel")
d3plot.clear("accel")
self.assertEqual(len(d3plot.get_nodeByIndex(1).get_accel()), 0)
d3plot.read_states("energy")
d3plot.clear("energy")
self.assertEqual(len(d3plot.get_elementByID(
Element.shell, 1).get_energy()), 0)
d3plot.read_states("plastic_strain")
d3plot.clear("plastic_strain")
self.assertEqual(len(d3plot.get_elementByID(
Element.shell, 1).get_plastic_strain()), 0)
d3plot.read_states("stress")
d3plot.clear("stress")
self.assertEqual(
len(d3plot.get_elementByID(Element.shell, 1).get_stress()), 0)
d3plot.read_states("strain")
d3plot.clear("strain")
self.assertEqual(
len(d3plot.get_elementByID(Element.shell, 1).get_strain()), 0)
d3plot.read_states("history shell 1")
d3plot.clear("history")
self.assertEqual(
len(d3plot.get_elementByID(Element.shell, 1).get_history_variables()), 0)
d3plot.read_states("stress_mises")
d3plot.clear("stress_mises")
self.assertEqual(len(d3plot.get_elementByID(
Element.shell, 1).get_stress_mises()), 0)
# default mode (mode=mean)
d3plot = D3plot(d3plot_filepath, read_states=d3plot_vars)
d3plot.clear()
self.assertEqual(len(d3plot.get_nodeByIndex(1).get_disp()), 0)
self.assertEqual(len(d3plot.get_nodeByIndex(1).get_vel()), 0)
self.assertEqual(len(d3plot.get_nodeByIndex(1).get_accel()), 0)
self.assertEqual(
len(d3plot.get_elementByID(Element.shell, 1).get_energy()), 0)
self.assertEqual(len(d3plot.get_elementByID(
Element.shell, 1).get_plastic_strain()), 0)
self.assertEqual(
len(d3plot.get_elementByID(Element.shell, 1).get_stress()), 0)
self.assertEqual(
len(d3plot.get_elementByID(Element.shell, 1).get_strain()), 0)
self.assertEqual(
len(d3plot.get_elementByID(Element.shell, 1).get_history_variables()), 0)
# D3plot functions
# default mode (mode=mean)
d3plot = D3plot(d3plot_filepath, read_states=d3plot_vars)
self.assertEqual(d3plot.get_nNodes(), 4915)
self.assertEqual(len(d3plot.get_nodes()), 4915)
self.assertEqual(d3plot.get_nElements(), 4696)
self.assertEqual(d3plot.get_nElements(Element.beam), 0)
self.assertEqual(d3plot.get_nElements(Element.shell), 4696)
self.assertEqual(d3plot.get_nElements(Element.solid), 0)
self.assertEqual(len(d3plot.get_elements()), 4696)
self.assertEqual(np.sum([len(part.get_elements())
for part in d3plot.get_parts()]), 4696)
self.assertEqual(np.sum([len(part.get_elements(Element.beam))
for part in d3plot.get_parts()]), 0)
self.assertEqual(np.sum([len(part.get_elements(Element.shell))
for part in d3plot.get_parts()]), 4696)
self.assertEqual(np.sum([len(part.get_elements(Element.solid))
for part in d3plot.get_parts()]), 0)
self.assertEqual(d3plot.get_timesteps()[0], 0.)
self.assertEqual(len(d3plot.get_timesteps()), 1)
self.assertEqual(len(d3plot.get_parts()), 1)
part = d3plot.get_parts()[0]
self.assertEqual(part.get_nNodes(), 4915)
self.assertEqual(part.get_nElements(), 4696)
self.assertEqual(
part.get_element_node_ids(Element.shell, 4).shape, (4696, 4))
self.assertEqual(part.get_element_node_indexes(
Element.shell, 4).shape, (4696, 4))
self.assertEqual(part.get_node_ids().shape, (4915,))
self.assertEqual(part.get_node_indexes().shape, (4915,))
self.assertEqual(part.get_element_ids().shape, (4696,))
self.assertEqual(part.get_element_ids(Element.beam).shape, (0,))
self.assertEqual(part.get_element_ids(Element.shell).shape, (4696,))
self.assertEqual(part.get_element_ids(Element.tshell).shape, (0,))
self.assertEqual(part.get_element_ids(Element.solid).shape, (0,))
self.assertEqual(d3plot.get_element_energy().shape, (4696, 1))
self.assertEqual(d3plot.get_element_energy(Element.beam).shape, (0, 1))
self.assertEqual(d3plot.get_element_energy(
Element.shell).shape, (4696, 1))
self.assertEqual(d3plot.get_element_energy(
Element.solid).shape, (0, 1))
self.assertEqual(d3plot.get_element_energy(
Element.tshell).shape, (0, 1))
self.assertEqual(d3plot.get_element_plastic_strain().shape, (4696, 1))
self.assertEqual(d3plot.get_element_plastic_strain(
Element.beam).shape, (0, 1))
self.assertEqual(d3plot.get_element_plastic_strain(
Element.shell).shape, (4696, 1))
self.assertEqual(d3plot.get_element_plastic_strain(
Element.solid).shape, (0, 1))
self.assertEqual(d3plot.get_element_plastic_strain(
Element.tshell).shape, (0, 1))
self.assertEqual(d3plot.get_element_stress_mises().shape, (4696, 1))
self.assertEqual(d3plot.get_element_stress_mises(
Element.beam).shape, (0, 1))
self.assertEqual(d3plot.get_element_stress_mises(
Element.shell).shape, (4696, 1))
self.assertEqual(d3plot.get_element_stress_mises(
Element.solid).shape, (0, 1))
self.assertEqual(d3plot.get_element_stress_mises(
Element.tshell).shape, (0, 1))
self.assertEqual(d3plot.get_element_strain().shape, (4696, 1, 6))
self.assertEqual(d3plot.get_element_strain(
Element.beam).shape, (0, 1, 6))
self.assertEqual(d3plot.get_element_strain(
Element.shell).shape, (4696, 1, 6))
self.assertEqual(d3plot.get_element_strain(
Element.solid).shape, (0, 1, 6))
self.assertEqual(d3plot.get_element_strain(
Element.tshell).shape, (0, 1, 6))
self.assertEqual(d3plot.get_element_stress().shape, (4696, 1, 6))
self.assertEqual(d3plot.get_element_stress(
Element.beam).shape, (0, 1, 6))
self.assertEqual(d3plot.get_element_stress(
Element.shell).shape, (4696, 1, 6))
self.assertEqual(d3plot.get_element_stress(
Element.solid).shape, (0, 1, 6))
self.assertEqual(d3plot.get_element_stress(
Element.tshell).shape, (0, 1, 6))
self.assertEqual(d3plot.get_element_coords().shape, (4696, 1, 3))
self.assertEqual(d3plot.get_element_coords(
Element.beam).shape, (0, 1, 3))
self.assertEqual(d3plot.get_element_coords(
Element.shell).shape, (4696, 1, 3))
self.assertEqual(d3plot.get_element_coords(
Element.solid).shape, (0, 1, 3))
self.assertEqual(d3plot.get_element_coords(
Element.tshell).shape, (0, 1, 3))
with self.assertRaises(ValueError):
d3plot.get_element_history_vars(Element.none)
with self.assertRaises(ValueError):
d3plot.get_element_history_vars()
self.assertEqual(d3plot.get_element_history_vars(
Element.beam).shape, (0, 1, 0))
self.assertEqual(d3plot.get_element_history_vars(
Element.shell).shape, (4696, 1, 1))
self.assertEqual(d3plot.get_element_history_vars(
Element.solid).shape, (0, 1, 0))
self.assertEqual(d3plot.get_element_history_vars(
Element.tshell).shape, (0, 1, 0))
# D3plot error handling
# ... TODO
# Part
part1 = d3plot.get_parts()[0]
self.assertTrue(part1.get_name() == "Zugprobe")
self.assertTrue(part1.get_id() == 1)
self.assertTrue(len(part1.get_elements()) == 4696)
self.assertTrue(len(part1.get_nodes()) == 4915)
# Node
node_ids = [1, 2]
nodes_ids_v1 = [d3plot.get_nodeByID(node_id) for node_id in node_ids]
nodes_ids_v2 = d3plot.get_nodeByID(node_ids)
self.assertCountEqual([node.get_id()
for node in nodes_ids_v1], node_ids)
self.assertCountEqual([node.get_id()
for node in nodes_ids_v2], node_ids)
for node in nodes_ids_v1:
self.assertEqual(node.get_coords().shape, (1, 3))
self.assertEqual(len(node.get_disp()), 1)
self.assertEqual(len(node.get_vel()), 1)
self.assertEqual(len(node.get_accel()), 1)
self.assertGreater(len(node.get_elements()), 0)
node_indexes = [0, 1]
node_matching_ids = [1, 2] # looked it up manually
self.assertEqual(len(node_indexes), len(node_matching_ids))
nodes_indexes_v1 = [d3plot.get_nodeByIndex(
node_index) for node_index in node_indexes]
nodes_indexes_v2 = d3plot.get_nodeByIndex(node_indexes)
self.assertCountEqual([node.get_id()
for node in nodes_indexes_v1], node_matching_ids)
self.assertCountEqual([node.get_id()
for node in nodes_indexes_v2], node_matching_ids)
# .. TODO error handling
# Node Velocity and Acceleration Testing
self.assertCountEqual(d3plot.get_node_velocity().shape, (4915, 1, 3))
self.assertCountEqual(
d3plot.get_node_acceleration().shape, (4915, 1, 3))
self.assertCountEqual(d3plot.get_node_ids().shape, [4915])
# Shell Element
element_ids = [1, 2]
element_ids_shell_v1 = [d3plot.get_elementByID(
Element.shell, element_id) for element_id in element_ids]
element_ids_shell_v2 = d3plot.get_elementByID(
Element.shell, element_ids)
self.assertCountEqual([element.get_id()
for element in element_ids_shell_v1], element_ids)
self.assertCountEqual([element.get_id()
for element in element_ids_shell_v2], element_ids)
elem1 = element_ids_shell_v1[0]
elem2 = element_ids_shell_v2[0]
for element in element_ids_shell_v1:
pass
self.assertEqual(elem1.get_coords().shape, (1, 3))
self.assertEqual(elem1.get_plastic_strain().shape, (1,))
self.assertEqual(elem1.get_stress().shape, (1, 6))
self.assertEqual(elem1.get_stress_mises().shape, (1,))
self.assertEqual(elem1.get_strain().shape, (1, 6))
self.assertEqual(elem1.get_history_variables().shape, (1, 1))
self.assertCountEqual(d3plot.get_element_ids().shape, (4696,))
self.assertCountEqual(d3plot.get_element_node_ids(
Element.shell, 4).shape, (4696, 4))
# .. TODO error handling
# plotting (disabled)
'''
export_path = os.path.join(
os.path.dirname(__file__), "test_export.html")
for element_result in element_result_list:
# test d3plot.plot directly
d3plot.plot(iTimestep=-1,
element_result=element_result,
export_filepath=export_path)
self.assertTrue(os.path.isfile(export_path))
os.remove(export_path)
# test plotting by parts
D3plot.plot_parts(d3plot.get_parts(),
iTimestep=-1,
element_result=element_result,
export_filepath=export_path)
self.assertTrue(os.path.isfile(export_path))
os.remove(export_path)
for part in d3plot.get_parts():
part.plot(iTimestep=-1,
element_result=element_result,
export_filepath=export_path)
self.assertTrue(os.path.isfile(export_path))
os.remove(export_path)
for part_id in part_ids:
d3plot.get_partByID(part_id).plot(
iTimestep=-1, element_result=None, export_filepath=export_path)
self.assertTrue(os.path.isfile(export_path))
os.remove(export_path)
'''
def test_binout(self):
binout_filepath = "test/binout"
nTimesteps = 321
content = ["swforc"] # TODO: special case rwforc
content_subdirs = [['title', 'failure', 'ids', 'failure_time',
'typenames', 'axial', 'version', 'shear', 'time', 'date',
'length', 'resultant_moment', 'types', 'revision']]
# open file
binout = Binout(binout_filepath)
# check directory stuff
self.assertEqual(len(binout.read()), len(content))
self.assertCountEqual(content, binout.read())
# check variables reading
for content_dir, content_subdirs in zip(content, content_subdirs):
self.assertCountEqual(content_subdirs, binout.read(content_dir))
self.assertEqual(nTimesteps, len(binout.read(content_dir, "time")))
for content_subdir in content_subdirs:
# check if data containers not empty ...
self.assertGreater(
len(binout.read(content_dir, content_subdir)), 0)
# check string conversion
self.assertEqual(binout.read("swforc", "typenames"),
'constraint,weld,beam,solid,non nodal, ,solid assembly')
# test saving
binout.save_hdf5("./binout.h5")
self.assertTrue(os.path.isfile("./binout.h5"))
os.remove("./binout.h5")
def test_keyfile(self):
# test encryption detection
np.testing.assert_almost_equal(get_file_entropy(
"test/keyfile_include2.key"), 7.433761, decimal=6)
# file loading (arguments)
kf = KeyFile("test/keyfile.key")
self.assertEqual(len(kf.keys()), 8)
self.assertEqual(len(kf.get_includes()), 0)
self.assertEqual(kf.get_nNodes(), 0)
self.assertTrue(isinstance(kf["*INCLUDE"][0], Keyword))
self.assertTrue(isinstance(kf["*NODE"][0], Keyword))
self.assertTrue(isinstance(kf["*PART"][0], Keyword))
self.assertTrue(isinstance(kf["*ELEMENT_SHELL"][0], Keyword))
kf = KeyFile("test/keyfile.key", load_includes=True)
self.assertEqual(len(kf.keys()), 8)
self.assertEqual(len(kf.get_includes()), 2)
self.assertEqual(kf.get_nNodes(), 0)
self.assertTrue(isinstance(kf["*INCLUDE"][0], IncludeKeyword))
self.assertTrue(isinstance(kf["*NODE"][0], Keyword))
self.assertTrue(isinstance(kf["*PART"][0], Keyword))
self.assertTrue(isinstance(kf["*ELEMENT_SHELL"][0], Keyword))
kf = KeyFile("test/keyfile.key", load_includes=True, parse_mesh=True)
self.assertEqual(len(kf.keys()), 8)
self.assertEqual(len(kf.get_includes()), 2)
self.assertEqual(kf.get_nNodes(), 6)
self.assertCountEqual(kf.get_nodeByID(
11).get_coords()[0], (7., 7., 7.))
self.assertTrue(isinstance(kf["*INCLUDE"][0], Keyword))
self.assertTrue(isinstance(kf["*NODE"][0], NodeKeyword))
self.assertTrue(isinstance(kf["*PART"][0], PartKeyword))
self.assertTrue(isinstance(kf["*ELEMENT_SHELL"][0], ElementKeyword))
kf = KeyFile("test/keyfile.key", read_keywords=False)
self.assertEqual(len(kf.keys()), 0)
self.assertEqual(len(kf.get_includes()), 0)
self.assertEqual(kf.get_nNodes(), 0)
with self.assertRaises(ValueError):
KeyFile("test/invalid.key")
with self.assertRaises(ValueError):
KeyFile("test/invalid.key",
parse_mesh=True)
# saving
kf = KeyFile("test/keyfile.key")
self.assertEqual(kf.get_filepath(), "test/keyfile.key")
kf.save("test/tmp.key")
self.assertEqual(kf.get_filepath(), "test/tmp.key")
self.assertTrue(filecmp.cmp("test/keyfile.key", "test/tmp.key"))
os.remove("test/tmp.key")
kf = KeyFile("test/keyfile.key", read_keywords=True,
parse_mesh=False, load_includes=True)
_, kf2 = kf.get_includes()
kf2.save("test/tmp.key")
self.assertTrue(filecmp.cmp(
"test/keyfile_include2.key", "test/tmp.key"))
os.remove("test/tmp.key")
# Generic Keywords
kf = KeyFile("test/keyfile.key")
kwrds = kf["*PART"]
self.assertEqual(len(kwrds), 1)
kw = kwrds[0]
# getter
self.assertEqual(kw["pid"], 1)
self.assertEqual(kw.get_card_valueByName("pid"), 1)
self.assertEqual(kw["secid"], 1)
self.assertEqual(kw.get_card_valueByName("secid"), 1)
self.assertEqual(kw["mid"], 1)
self.assertEqual(kw.get_card_valueByName("mid"), 1)
self.assertEqual(kw[1, 0], 1)
self.assertEqual(kw.get_card_valueByIndex(1, 0), 1)
self.assertEqual(kw[1, 1], 1)
self.assertEqual(kw.get_card_valueByIndex(1, 1), 1)
self.assertEqual(kw[1, 2], 1)
self.assertEqual(kw.get_card_valueByIndex(1, 2), 1)
# self.assertEqual(kw[1, 0, 50], "1 1") # TODO fix this
self.assertEqual(kw[0, 0], "Iam beauti")
self.assertEqual(kw.get_card_valueByIndex(0, 0), "Iam beauti")
self.assertEqual(kw[0, 0, 80], "Iam beautiful")
self.assertEqual(kw.get_card_valueByIndex(0, 0, 80), "Iam beautiful")
with self.assertRaises(ValueError):
kw["error"]
with self.assertRaises(ValueError):
kw.get_card_valueByName("error")
with self.assertRaises(ValueError):
kw.get_card_valueByIndex(100, 0)
# setter
kw["pid"] = 100
self.assertEqual(kw["pid"], 100)
kw.set_card_valueByName("pid", 200)
self.assertEqual(kw["pid"], 200)
kw["pid"] = 12345678912
self.assertEqual(kw["pid"], 1234567891)
kw.set_card_valueByName("pid", 12345678912)
self.assertEqual(kw["pid"], 1234567891)
self.assertEqual(kw["secid"], 1)
kw[1, 0] = 300
self.assertEqual(kw[1, 0], 300)
kw.set_card_valueByIndex(1, 0, 400)
self.assertEqual(kw[1, 0], 400)
kw[1, 0] = 12345678912
self.assertEqual(kw[1, 0], 1234567891)
self.assertEqual(kw[1, 1], 1)
kw[0, 0, 30] = "Hihihi "
self.assertEqual(kw[0, 0, 30], "Hihihi")
kw.set_card_valueByDict({(1, 0): "yoy", "mid": "ok"})
self.assertEqual(kw[1, 0], "yoy")
self.assertEqual(kw["mid"], "ok")
# line manipulation
line_data = ["*PART",
"$ heading",
"Iam beautiful",
"$ pid secid mid",
" 1 1 1",
" "]
kf = KeyFile("test/keyfile.key")
kw = kf["*PART"][0]
self.assertEqual(len(kw), len(line_data))
self.assertCountEqual(kw.get_lines(), line_data)
for iLine, line in enumerate(line_data):
self.assertEqual(kw.get_line(iLine), line)
kw.insert_line(1, "$ another comment")
self.assertEqual(kw.get_line(1), "$ another comment")
kw.append_line("$ changed comment")
self.assertEqual(kw.get_line(7), "$ changed comment")
kw.remove_line(7)
kw.remove_line(1)
self.assertEqual(len(kw), len(line_data))
kw.append_line("blubber")
kw.set_lines(line_data)
self.assertCountEqual(kw.get_lines(), line_data)
# str
kf = KeyFile("test/keyfile.key")
kw = kf["*PART"][0]
kw_data = '*PART\n$ heading\nIam beautiful\n$ pid secid mid\n 1 1 1\n \n'
self.assertEqual(str(kw), kw_data)
# reformatting
Keyword.name_delimiter_used = True
Keyword.name_delimiter = '|'
Keyword.name_spacer = '-'
Keyword.name_alignment = Keyword.align.right
Keyword.field_alignment = Keyword.align.right
kw.reformat_all([0])
kw_data = '*PART\n$ heading\nIam beautiful\n$------pid|----secid|------mid\n 1 1 1\n \n'
self.assertEqual(str(kw), kw_data)
kw.reformat_field(0, 0, 80)
kw_data = '*PART\n$------------------------------------------------------------------------heading\n Iam beautiful\n$------pid|----secid|------mid\n 1 1 1\n \n'
self.assertEqual(str(kw), kw_data)
# NodeKeyword
kf = KeyFile("test/keyfile.key", load_includes=True, parse_mesh=True)
kw = kf["*NODE"][0]
self.assertEqual(kw.get_nNodes(), 4)
self.assertEqual(len(kw.get_nodes()), 4)
self.assertCountEqual(kw.get_node_ids(), [1, 3, 4, 5])
node = kw.add_node(6, 4., 4., 4., " 0 0")
self.assertEqual(node.get_id(), 6)
np.testing.assert_array_almost_equal(node.get_coords()[0],
[4., 4., 4.],
decimal=3)
node_kw_data = "*NODE\n$ some comment line\n$ id x y z\n 1 0 0 0 0 0\n 3 0 1 0 0 0\n 4 0 1 1 0 0\n 5 1 0 0 0 0\n 6 4 4 4 0 0\n\n"
propvalue = self.repos.get_rev_prop(self.propname)
self.output.write('New Property Value:\n')
self.output.write(propvalue)
self.output.finish()
def generate_content(output, cfg, repos, changelist, group, params, pool):
svndate = repos.get_rev_prop(svn.core.SVN_PROP_REVISION_DATE)
### pick a different date format?
date = time.ctime(svn.core.secs_from_timestr(svndate, pool))
output.write('Author: %s\nDate: %s\nNew Revision: %s\n\n'
% (repos.author, date, repos.rev))
viewcvs_base_url = cfg.get('viewcvs_base_url', group, params)
if viewcvs_base_url and len(viewcvs_base_url):
output.write('URL: %s?view=rev&rev=%d\n'
% (viewcvs_base_url, repos.rev))
output.write('Log:\n%s\n'
% (repos.get_rev_prop(svn.core.SVN_PROP_REVISION_LOG) or ''))
# print summary sections
generate_list(output, 'Added', changelist, _select_adds)
generate_list(output, 'Removed', changelist, _select_deletes)
generate_list(output, 'Modified', changelist, _select_modifies)
# these are sorted by path already
for path, change in changelist:
generate_diff(output, cfg, repos, date, change, group, params, pool)
def _select_adds(change):
return change.added
def _select_deletes(change):
return change.path is None
def _select_modifies(change):
return not change.added and change.path is not None
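The three selectors above classify every change into exactly one of the summary sections. A minimal sketch with a hypothetical stand-in for the svn change record (only the two attributes the selectors inspect are modeled):

```python
class FakeChange:
    """Hypothetical stand-in for the svn change record; only the two
    attributes the selectors inspect are modeled."""
    def __init__(self, added, path):
        self.added = added
        self.path = path

def classify(change):
    # Mirrors _select_adds / _select_deletes / _select_modifies:
    # the three predicates are mutually exclusive and exhaustive.
    if change.added:
        return 'Added'
    if change.path is None:
        return 'Removed'
    return 'Modified'

print(classify(FakeChange(True, 'trunk/new.c')))    # Added
print(classify(FakeChange(False, None)))            # Removed
print(classify(FakeChange(False, 'trunk/old.c')))   # Modified
```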
def generate_list(output, header, changelist, selection):
items = [ ]
for path, change in changelist:
if selection(change):
items.append((path, change))
if items:
output.write('%s:\n' % header)
for fname, change in items:
if change.item_kind == svn.core.svn_node_dir:
is_dir = '/'
else:
is_dir = ''
if change.prop_changes:
if change.text_changed:
props = ' (contents, props changed)'
else:
props = ' (props changed)'
else:
props = ''
output.write(' %s%s%s\n' % (fname, is_dir, props))
if change.added and change.base_path:
if is_dir:
text = ''
elif change.text_changed:
text = ', changed'
else:
text = ' unchanged'
output.write(' - copied%s from r%d, %s%s\n'
% (text, change.base_rev, change.base_path[1:], is_dir))
def generate_diff(output, cfg, repos, date, change, group, params, pool):
if change.item_kind == svn.core.svn_node_dir:
# all changes were printed in the summary. nothing to do.
return
gen_diffs = cfg.get('generate_diffs', group, params)
viewcvs_base_url = cfg.get('viewcvs_base_url', group, params)
### Do a little dance for deprecated options. Note that even if you
### don't have an option anywhere in your configuration file, it
### still gets returned as non-None.
if len(gen_diffs):
diff_add = False
diff_copy = False
diff_delete = False
diff_modify = False
    items = string.split(gen_diffs, " ")
    for item in items:
if item == 'add':
diff_add = True
if item == 'copy':
diff_copy = True
if item == 'delete':
diff_delete = True
if item == 'modify':
diff_modify = True
else:
diff_add = True
diff_copy = True
diff_delete = True
diff_modify = True
### These options are deprecated
suppress = cfg.get('suppress_deletes', group, params)
if suppress == 'yes':
diff_delete = False
suppress = cfg.get('suppress_adds', group, params)
if suppress == 'yes':
diff_add = False
# Figure out if we're supposed to show ViewCVS URLs
if len(viewcvs_base_url):
show_urls = True
else:
show_urls = False
diff = None
diff_url = None
header = None
if not change.path:
### params is a bit silly here
header = 'Deleted: %s' % change.base_path
if show_urls:
diff_url = '%s/%s?view=auto&rev=%d' \
% (viewcvs_base_url,
urllib.quote(change.base_path[1:]), change.base_rev)
if diff_delete:
diff = svn.fs.FileDiff(repos.get_root(change.base_rev),
change.base_path, None, None, pool)
label1 = '%s\t%s' % (change.base_path, date)
label2 = '(empty file)'
singular = True
elif change.added:
if change.base_path and (change.base_rev != -1): # this file was copied.
# note that we strip the leading slash from the base (copyfrom) path
header = 'Copied: %s (from r%d, %s)' \
% (change.path, change.base_rev, change.base_path[1:])
if show_urls:
diff_url = '%s/%s?view=diff&rev=%d&p1=%s&r1=%d&p2=%s&r2=%d' \
% (viewcvs_base_url,
urllib.quote(change.path), repos.rev,
urllib.quote(change.base_path[1:]), change.base_rev,
urllib.quote(change.path), repos.rev)
if change.text_changed and diff_copy:
diff = svn.fs.FileDiff(repos.get_root(change.base_rev),
change.base_path[1:],
repos.root_this, change.path,
pool)
label1 = change.base_path[1:] + '\t(original)'
label2 = '%s\t%s' % (change.path, date)
singular = False
else:
header = 'Added: %s' % change.path
if show_urls:
diff_url = '%s/%s?view=auto&rev=%d' \
% (viewcvs_base_url,
urllib.quote(change.path), repos.rev)
if diff_add:
diff = svn.fs.FileDiff(None, None, repos.root_this, change.path, pool)
label1 = '(empty file)'
label2 = '%s\t%s' % (change.path, date)
singular = True
elif not change.text_changed:
# don't bother to show an empty diff. prolly just a prop change.
return
else:
header = 'Modified: %s' % change.path
if show_urls:
diff_url = '%s/%s?view=diff&rev=%d&p1=%s&r1=%d&p2=%s&r2=%d' \
% (viewcvs_base_url,
urllib.quote(change.path), repos.rev,
urllib.quote(change.base_path[1:]), change.base_rev,
urllib.quote(change.path), repos.rev)
if diff_modify:
diff = svn.fs.FileDiff(repos.get_root(change.base_rev),
change.base_path[1:],
repos.root_this, change.path,
pool)
label1 = change.base_path[1:] + '\t(original)'
label2 = '%s\t%s' % (change.path, date)
singular = False
if not (show_urls or diff):
return
output.write('\n' + header + '\n')
if diff_url:
output.write('Url: ' + diff_url + '\n')
output.write(SEPARATOR + '\n')
if not diff:
return
if diff.either_binary():
if singular:
output.write('Binary file. No diff available.\n')
else:
output.write('Binary files. No diff available.\n')
return
### do something with change.prop_changes
src_fname, dst_fname = diff.get_files()
output.run(cfg.get_diff_cmd({
'label_from' : label1,
'label_to' : label2,
'from' : src_fname,
'to' : dst_fname,
}))
class Repository:
"Hold roots and other information about the repository."
def __init__(self, repos_dir, rev, pool):
self.repos_dir = repos_dir
self.rev = rev
self.pool = pool
self.repos_ptr = svn.repos.svn_repos_open(repos_dir, pool)
self.fs_ptr = svn.repos.svn_repos_fs(self.repos_ptr)
self.roots = { }
self.root_this = self.get_root(rev)
self.author = self.get_rev_prop(svn.core.SVN_PROP_REVISION_AUTHOR)
def get_rev_prop(self, propname):
return svn.fs.revision_prop(self.fs_ptr, self.rev, propname, self.pool)
def get_root(self, rev):
try:
return self.roots[rev]
except KeyError:
pass
root = self.roots[rev] = svn.fs.revision_root(self.fs_ptr, rev, self.pool)
return root
class Config:
# The predefined configuration sections. These are omitted from the
# set of groups.
_predefined = ('general', 'defaults')
def __init__(self, fname, repos, global_params):
cp = ConfigParser.ConfigParser()
cp.read(fname)
# record the (non-default) groups that we find
self._groups = [ ]
for section in cp.sections():
if not hasattr(self, section):
section_ob = _sub_section()
setattr(self, section, section_ob)
if section not in self._predefined:
self._groups.append((section, section_ob))
else:
section_ob = getattr(self, section)
for option in cp.options(section):
# get the raw value -- we use the same format for *our* interpolation
value = cp.get(section, option, raw=1)
setattr(section_ob, option, value)
### do some better splitting to enable quoting of spaces
self._diff_cmd = string.split(self.general.diff)
# these params are always available, although they may be overridden
self._global_params = global_params.copy()
self._prep_groups(repos)
def get_diff_cmd(self, args):
cmd = [ ]
for part in self._diff_cmd:
cmd.append(part % args)
return cmd
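Sketch of the `get_diff_cmd` interpolation: the configured diff command (the template string below is hypothetical) is split on whitespace, then each token is `%`-interpolated with the label/file mapping.

```python
# Hypothetical diff command template, in the same form the config's
# general.diff option would hold.
diff_template = ('/usr/bin/diff -u -L %(label_from)s -L %(label_to)s '
                 '%(from)s %(to)s')
args = {
    'label_from': 'old',
    'label_to': 'new',
    'from': '/tmp/a',
    'to': '/tmp/b',
}
# Split first, interpolate each token afterwards (tokens without a
# %-spec pass through unchanged).
cmd = [part % args for part in diff_template.split()]
print(cmd)
# ['/usr/bin/diff', '-u', '-L', 'old', '-L', 'new', '/tmp/a', '/tmp/b']
```

Because splitting happens before interpolation, spaces inside a label survive as a single argv entry, but a diff binary whose path contains spaces cannot be expressed; that is the limitation the `### do some better splitting` note above refers to.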
def is_set(self, option):
"""Return None if the option is not set; otherwise, its value is returned.
The option is specified as a dotted symbol, such as 'general.mail_command'
"""
    parts = string.split(option, '.')
    ob = self
    for part in parts:
if not hasattr(ob, part):
return None
ob = getattr(ob, part)
return ob
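Sketch of the dotted-symbol lookup in `Config.is_set`: attributes are walked one dot at a time and `None` is returned as soon as one is missing. The config object here is a hypothetical stand-in.

```python
class Section:
    pass

# Hypothetical config tree mimicking the _sub_section attributes.
cfg = Section()
cfg.general = Section()
cfg.general.mail_command = '/usr/sbin/sendmail'

def lookup(ob, option):
    # Walk attribute by attribute along the dotted path.
    for part in option.split('.'):
        if not hasattr(ob, part):
            return None
        ob = getattr(ob, part)
    return ob

print(lookup(cfg, 'general.mail_command'))  # /usr/sbin/sendmail
print(lookup(cfg, 'general.missing'))       # None
```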
def get(self, option, group, params):
if group:
sub = getattr(self, group)
if hasattr(sub, option):
return getattr(sub, option) % params
return getattr(self.defaults, option, '') % params
def _prep_groups(self, repos):
self._group_re = [ ]
self._last_group_re = [ ]
repos_dir = os.path.abspath(repos.repos_dir)
# compute the default repository-based parameters. start with some
# basic parameters, then bring in the regex-based params.
default_params = self._global_params.copy()
try:
match = re.match(self.defaults.for_repos, repos_dir)
if match:
default_params.update(match.groupdict())
except AttributeError:
# there is no self.defaults.for_repos
pass
# select the groups that apply to this repository
for group, sub in self._groups:
params = default_params
if hasattr(sub, 'for_repos'):
match = re.match(sub.for_repos, repos_dir)
if not match:
continue
params = self._global_params.copy()
params.update(match.groupdict())
# if a matching rule hasn't been given, then use the empty string
# as it will match all paths
for_paths = getattr(sub, 'for_paths', '')
exclude_paths = getattr(sub, 'exclude_paths', None)
suppress_if_match = getattr(sub, 'suppress_if_match', None)
if exclude_paths:
exclude_paths_re = re.compile(exclude_paths)
else:
exclude_paths_re = None
if suppress_if_match == 'yes':
self._last_group_re.append((group, re.compile(for_paths),
exclude_paths_re, True, params))
else:
self._group_re.append((group, re.compile(for_paths),
exclude_paths_re, False, params))
# after all the groups are done, add in the default group
try:
self._last_group_re.append((None,
re.compile(self.defaults.for_paths),
None, False,
default_params))
except AttributeError:
# there is no self.defaults.for_paths
pass
self._group_re = self._group_re + self._last_group_re
def which_groups(self, path):
"Return the path's associated groups."
groups = []
seen = False
for group, pattern, exclude_pattern, ignore, repos_params in self._group_re:
if ignore and seen:
continue
match = pattern.match(path)
if match:
if exclude_pattern and exclude_pattern.match(path):
continue
seen = True
params = repos_params.copy()
params.update(match.groupdict())
groups.append((group, params))
if not groups:
groups.append((None, self._global_params))
return groups
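A reduced sketch of `which_groups`: each configured group carries a `for_paths` pattern and an optional exclude pattern; the group table below is hypothetical. The real method also returns per-group params and honors the `suppress_if_match` "seen" flag, which this sketch omits.

```python
import re

# Hypothetical group table in the spirit of the tuples built by
# _prep_groups: (name, for_paths pattern, exclude pattern).
group_re = [
    ('docs', re.compile('trunk/doc/'), None),
    ('code', re.compile('trunk/'), re.compile('trunk/doc/')),
]

def which_groups(path):
    groups = []
    for group, pattern, exclude in group_re:
        if pattern.match(path):
            if exclude and exclude.match(path):
                continue
            groups.append(group)
    return groups or [None]  # fall back to the default group

print(which_groups('trunk/doc/readme.txt'))  # ['docs']
print(which_groups('trunk/main.c'))          # ['code']
print(which_groups('branches/x'))            # [None]
```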
class _sub_section:
pass
class MissingConfig(Exception):
pass
class UnknownSubcommand(Exception):
pass
# enable True/False in older vsns of Python
try:
_unused = True
except NameError:
True = 1
False = 0
if __name__ == '__main__':
def usage():
sys.stderr.write(
'''USAGE: %s commit REPOS-DIR REVISION [CONFIG-FILE]
%s propchange REPOS-DIR REVISION AUTHOR PROPNAME [CONFIG-FILE]
'''
% (sys.argv[0], sys.argv[0]))
sys.exit(1)
if len(sys.argv) < 4:
usage()
cmd = sys.argv[1]
repos_dir = sys.argv[2]
revision = int(sys.argv[3])
config_fname = None
author = None
propname = None
if cmd == 'commit':
if len(sys.argv) > 5:
usage()
if len(sys.argv) > 4:
config_fname = sys.argv[4]
elif cmd == 'propchange':
if len(sys.argv) < 6 | |
audio_filters.append(arg)
all_args = []
all_args.append(self.ffmpeg_path)
all_args.extend(input_args)
if video_filters:
all_args.append("-filter:v {}".format(",".join(video_filters)))
if audio_filters:
all_args.append("-filter:a {}".format(",".join(audio_filters)))
all_args.extend(output_args)
return all_args
def input_output_paths(self, new_repre, output_def, temp_data):
        """Deduce input and output file paths based on entered data.

        Input may be a sequence of images, a video file or a single image
        file, and the same can be said about the output; this method helps
        to find out what their paths are.

        It is validated that the output directory exists and it is created
        if not.

        During the process the "files", "stagingDir", "ext" and
        "sequence_file" (if output is a sequence) keys are set on the new
        representation.
        """
staging_dir = new_repre["stagingDir"]
repre = temp_data["origin_repre"]
if temp_data["input_is_sequence"]:
collections = clique.assemble(repre["files"])[0]
full_input_path = os.path.join(
staging_dir,
collections[0].format("{head}{padding}{tail}")
)
filename = collections[0].format("{head}")
if filename.endswith("."):
filename = filename[:-1]
# Make sure to have full path to one input file
full_input_path_single_file = os.path.join(
staging_dir, repre["files"][0]
)
else:
full_input_path = os.path.join(
staging_dir, repre["files"]
)
filename = os.path.splitext(repre["files"])[0]
# Make sure to have full path to one input file
full_input_path_single_file = full_input_path
filename_suffix = output_def["filename_suffix"]
output_ext = output_def.get("ext")
        # Use input extension if output definition does not specify it
if output_ext is None:
output_ext = os.path.splitext(full_input_path)[1]
# TODO Define if extension should have dot or not
if output_ext.startswith("."):
output_ext = output_ext[1:]
# Store extension to representation
new_repre["ext"] = output_ext
self.log.debug("New representation ext: `{}`".format(output_ext))
        # Output is an image file sequence with frames
output_ext_is_image = bool(output_ext in self.image_exts)
output_is_sequence = bool(
output_ext_is_image
and "sequence" in output_def["tags"]
)
if output_is_sequence:
new_repre_files = []
frame_start = temp_data["output_frame_start"]
frame_end = temp_data["output_frame_end"]
filename_base = "{}_{}".format(filename, filename_suffix)
        # Temporary template for frame filling. Example output:
# "basename.%04d.exr" when `frame_end` == 1001
repr_file = "{}.%{:0>2}d.{}".format(
filename_base, len(str(frame_end)), output_ext
)
for frame in range(frame_start, frame_end + 1):
new_repre_files.append(repr_file % frame)
new_repre["sequence_file"] = repr_file
full_output_path = os.path.join(
staging_dir, filename_base, repr_file
)
else:
repr_file = "{}_{}.{}".format(
filename, filename_suffix, output_ext
)
full_output_path = os.path.join(staging_dir, repr_file)
new_repre_files = repr_file
# Store files to representation
new_repre["files"] = new_repre_files
        # Make sure stagingDir exists
staging_dir = os.path.normpath(os.path.dirname(full_output_path))
if not os.path.exists(staging_dir):
self.log.debug("Creating dir: {}".format(staging_dir))
os.makedirs(staging_dir)
        # Store stagingDir to representation
new_repre["stagingDir"] = staging_dir
# Store paths to temp data
temp_data["full_input_path"] = full_input_path
temp_data["full_input_path_single_file"] = full_input_path_single_file
temp_data["full_output_path"] = full_output_path
# Store information about output
temp_data["output_ext_is_image"] = output_ext_is_image
temp_data["output_is_sequence"] = output_is_sequence
self.log.debug("Input path {}".format(full_input_path))
self.log.debug("Output path {}".format(full_output_path))
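The sequence-name template built above can be traced in isolation; the base name and frame range below are hypothetical. Padding width is taken from the digit count of the last frame, then old-style `%` formatting fills in each frame.

```python
# Hypothetical values standing in for filename_base / output_ext and
# the temp_data frame range.
filename_base = 'render_h264'
output_ext = 'exr'
frame_start, frame_end = 997, 1001

# Same template expression as in input_output_paths.
repr_file = '{}.%{:0>2}d.{}'.format(
    filename_base, len(str(frame_end)), output_ext
)
print(repr_file)  # render_h264.%04d.exr

files = [repr_file % frame for frame in range(frame_start, frame_end + 1)]
print(files[0], files[-1])  # render_h264.0997.exr render_h264.1001.exr
```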
def audio_args(self, instance, temp_data):
"""Prepares FFMpeg arguments for audio inputs."""
audio_in_args = []
audio_filters = []
audio_out_args = []
audio_inputs = instance.data.get("audio")
if not audio_inputs:
return audio_in_args, audio_filters, audio_out_args
for audio in audio_inputs:
            # NOTE modified: "frameStartFtrack" was always expected here,
            # which seems wrong; there should probably be a different key.
            # TODO use a different frame start!
offset_seconds = 0
frame_start_ftrack = instance.data.get("frameStartFtrack")
if frame_start_ftrack is not None:
offset_frames = frame_start_ftrack - audio["offset"]
offset_seconds = offset_frames / temp_data["fps"]
if offset_seconds > 0:
audio_in_args.append(
"-ss {}".format(offset_seconds)
)
elif offset_seconds < 0:
audio_in_args.append(
"-itsoffset {}".format(abs(offset_seconds))
)
audio_in_args.append("-i \"{}\"".format(audio["filename"]))
# NOTE: These were changed from input to output arguments.
# NOTE: value in "-ac" was hardcoded to 2, changed to audio inputs len.
# Need to merge audio if there are more than 1 input.
if len(audio_inputs) > 1:
audio_out_args.append("-filter_complex amerge")
audio_out_args.append("-ac {}".format(len(audio_inputs)))
return audio_in_args, audio_filters, audio_out_args
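The audio-offset arithmetic above can be shown with hypothetical frame numbers: a positive offset becomes a seek into the audio file (`-ss`), a negative one delays the input (`-itsoffset`).

```python
# Hypothetical values for fps, the instance frame start, and the
# audio input's stored offset frame.
fps = 25.0
frame_start_ftrack = 1001
audio_offset = 995

offset_frames = frame_start_ftrack - audio_offset
offset_seconds = offset_frames / fps
print(offset_seconds)  # 0.24

if offset_seconds > 0:
    arg = '-ss {}'.format(offset_seconds)
elif offset_seconds < 0:
    arg = '-itsoffset {}'.format(abs(offset_seconds))
else:
    arg = ''
print(arg)  # -ss 0.24
```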
def rescaling_filters(self, temp_data, output_def, new_repre):
        """Prepare video filters based on tags in new representation.

        It is possible to add letterboxes to the output video or rescale
        to a different resolution.

        During this preparation "resolutionWidth" and "resolutionHeight"
        are set on the new representation.
        """
"""
filters = []
letter_box = output_def.get("letter_box")
# Get instance data
pixel_aspect = temp_data["pixel_aspect"]
# NOTE Skipped using instance's resolution
full_input_path_single_file = temp_data["full_input_path_single_file"]
input_data = pype.lib.ffprobe_streams(full_input_path_single_file)[0]
input_width = input_data["width"]
input_height = input_data["height"]
self.log.debug("pixel_aspect: `{}`".format(pixel_aspect))
self.log.debug("input_width: `{}`".format(input_width))
self.log.debug("input_height: `{}`".format(input_height))
        # NOTE Setting only one of `width` or `height` is not allowed
output_width = output_def.get("width")
output_height = output_def.get("height")
# Use instance resolution if output definition has not set it.
if output_width is None or output_height is None:
output_width = temp_data["resolution_width"]
output_height = temp_data["resolution_height"]
        # Use source's input resolution if the instance does not have it set.
if output_width is None or output_height is None:
self.log.debug("Using resolution from input.")
output_width = input_width
output_height = input_height
self.log.debug(
"Output resolution is {}x{}".format(output_width, output_height)
)
# Skip processing if resolution is same as input's and letterbox is
# not set
if (
output_width == input_width
and output_height == input_height
and not letter_box
and pixel_aspect == 1
):
self.log.debug(
"Output resolution is same as input's"
" and \"letter_box\" key is not set. Skipping reformat part."
)
new_repre["resolutionWidth"] = input_width
new_repre["resolutionHeight"] = input_height
return filters
# defining image ratios
input_res_ratio = (
(float(input_width) * pixel_aspect) / input_height
)
output_res_ratio = float(output_width) / float(output_height)
self.log.debug("input_res_ratio: `{}`".format(input_res_ratio))
self.log.debug("output_res_ratio: `{}`".format(output_res_ratio))
# Round ratios to 2 decimal places for comparing
input_res_ratio = round(input_res_ratio, 2)
output_res_ratio = round(output_res_ratio, 2)
# get scale factor
scale_factor_by_width = (
float(output_width) / (input_width * pixel_aspect)
)
scale_factor_by_height = (
float(output_height) / input_height
)
self.log.debug(
            "scale_factor_by_width: `{}`".format(scale_factor_by_width)
)
self.log.debug(
"scale_factor_by_height: `{}`".format(scale_factor_by_height)
)
# letter_box
if letter_box:
if input_res_ratio == output_res_ratio:
letter_box /= pixel_aspect
elif input_res_ratio < output_res_ratio:
letter_box /= scale_factor_by_width
else:
letter_box /= scale_factor_by_height
scale_filter = "scale={}x{}:flags=lanczos".format(
output_width, output_height
)
top_box = (
"drawbox=0:0:iw:round((ih-(iw*(1/{})))/2):t=fill:c=black"
).format(letter_box)
bottom_box = (
"drawbox=0:ih-round((ih-(iw*(1/{0})))/2)"
":iw:round((ih-(iw*(1/{0})))/2):t=fill:c=black"
).format(letter_box)
# Add letter box filters
filters.extend([scale_filter, "setsar=1", top_box, bottom_box])
        # scaling non-square pixels and resolution differences
if (
input_height != output_height
or input_width != output_width
or pixel_aspect != 1
):
if input_res_ratio < output_res_ratio:
self.log.debug(
                    "Input's resolution ratio is lower than output's"
)
width_scale = int(input_width * scale_factor_by_height)
width_half_pad = int((output_width - width_scale) / 2)
height_scale = output_height
height_half_pad = 0
else:
                self.log.debug("Input is higher than output")
width_scale = output_width
width_half_pad = 0
height_scale = int(input_height * scale_factor_by_width)
height_half_pad = int((output_height - height_scale) / 2)
self.log.debug("width_scale: `{}`".format(width_scale))
self.log.debug("width_half_pad: `{}`".format(width_half_pad))
self.log.debug("height_scale: `{}`".format(height_scale))
self.log.debug("height_half_pad: `{}`".format(height_half_pad))
filters.extend([
"scale={}x{}:flags=lanczos".format(
width_scale, height_scale
),
"pad={}:{}:{}:{}:black".format(
output_width, output_height,
width_half_pad, height_half_pad
),
"setsar=1"
])
new_repre["resolutionWidth"] = output_width
new_repre["resolutionHeight"] = output_height
return filters
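The scale/pad arithmetic above can be traced with hypothetical numbers: a 2048x858 (~2.39:1) input fitted into 1920x1080 is scaled by width and letterboxed with equal black pads top and bottom.

```python
# Hypothetical input/output dimensions standing in for the temp_data
# and output_def values.
input_width, input_height, pixel_aspect = 2048, 858, 1.0
output_width, output_height = 1920, 1080

input_res_ratio = (input_width * pixel_aspect) / input_height
output_res_ratio = output_width / output_height

scale_factor_by_width = output_width / (input_width * pixel_aspect)
scale_factor_by_height = output_height / input_height

if input_res_ratio < output_res_ratio:
    # narrower than target: pillarbox left/right
    width_scale = int(input_width * scale_factor_by_height)
    width_half_pad = int((output_width - width_scale) / 2)
    height_scale, height_half_pad = output_height, 0
else:
    # wider than target: letterbox top/bottom
    width_scale, width_half_pad = output_width, 0
    height_scale = int(input_height * scale_factor_by_width)
    height_half_pad = int((output_height - height_scale) / 2)

filters = [
    'scale={}x{}:flags=lanczos'.format(width_scale, height_scale),
    'pad={}:{}:{}:{}:black'.format(
        output_width, output_height, width_half_pad, height_half_pad
    ),
    'setsar=1',
]
print(filters)
# ['scale=1920x804:flags=lanczos', 'pad=1920:1080:0:138:black', 'setsar=1']
```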
def lut_filters(self, new_repre, instance, input_args):
"""Add lut file to output ffmpeg filters."""
filters = []
# baking lut file application
lut_path = instance.data.get("lutPath")
if not lut_path or "bake-lut" not in new_repre["tags"]:
return filters
# Prepare path for ffmpeg argument
lut_path = lut_path.replace("\\", "/").replace(":", "\\:")
# Remove gamma from input arguments
        if "-gamma" in input_args:
            input_args.remove("-gamma")
# Prepare filters
filters.append("lut3d=file='{}'".format(lut_path))
# QUESTION hardcoded colormatrix?
filters.append("colormatrix=bt601:bt709")
self.log.info("Added Lut to ffmpeg command.")
return filters
def main_family_from_instance(self, instance):
"""Returns main family of entered instance."""
family = instance.data.get("family")
if not family:
family = instance.data["families"][0]
return family
def families_from_instance(self, instance):
"""Returns all families of entered instance."""
families = []
family = instance.data.get("family")
if family:
families.append(family)
for family in (instance.data.get("families") or tuple()):
if family not in families:
families.append(family)
return families
def compile_list_of_regexes(self, in_list):
"""Convert strings in entered list to compiled regex objects."""
regexes = []
if not in_list:
return regexes
for item in in_list:
if not item:
continue
try:
regexes.append(re.compile(item))
except TypeError:
self.log.warning((
"Invalid type \"{}\" value \"{}\"."
" Expected string based object. Skipping."
).format(str(type(item)), str(item)))
return regexes
def validate_value_by_regexes(self, value, in_list):
        """Validate if any regex from the list matches the entered value.

        Args:
            in_list (list): List with regexes.
            value (str): String that is checked against the regexes.

        Returns:
            int: `0` when the list is not set or is empty, `1` when any
                regex matches the value, `-1` when none of the regexes
                match the entered value.
        """
if not in_list:
return 0
output = -1
regexes = self.compile_list_of_regexes(in_list)
for regex in regexes:
if re.match(regex, value):
output = 1
break
return output
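The tri-state filter above is easy to demonstrate standalone: `0` means "no filter configured", `1` means "some regex matched", `-1` means "filters exist but none matched". A sketch using plain pattern strings instead of pre-compiled regexes:

```python
import re

def validate_value_by_regexes(value, in_list):
    # Mirrors the tri-state return values of the method above.
    if not in_list:
        return 0
    for item in in_list:
        if re.match(item, value):
            return 1
    return -1

print(validate_value_by_regexes('maya', None))               # 0
print(validate_value_by_regexes('maya', ['may.*', 'nuke']))  # 1
print(validate_value_by_regexes('houdini', ['maya']))        # -1
```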
def profile_exclusion(self, matching_profiles):
        """Find the most matching profile by host, task and family match.

        Profiles are selectively filtered. Each profile should have a
        "__value__" key with a list of booleans. Each boolean represents
        the existence of a filter for a specific key (host, task, family).

        Profiles are looped over in sequence. In each pass they are split
        into a true_list and a false_list. The next pass uses the profiles
        in true_list if there are any, otherwise false_list is used.

        Filtering ends when only one profile is left in true_list, or when
        all existence-boolean passes have completed,
import os, sys
import adsk.core, adsk.fusion, adsk.cam, traceback, math
from math import sqrt
from ..ParameterValues import ParameterValues
class DimensionsBuilder:
def __init__(self, app):
self.app = app
self.design = app.activeProduct
def buildDimensions(self, parameters: ParameterValues):
rootComp = self.design.rootComponent
allOccs2 = rootComp.occurrences
newOcc2 = allOccs2.addNewComponent(adsk.core.Matrix3D.create())
dimsComp = adsk.fusion.Component.cast(newOcc2.component)
fretNumber = parameters.fretNumber
scaleLength = parameters.scaleLength
nutLength = parameters.nutLength
endLength = parameters.endLength
nutRadius = parameters.nutRadius
endRadius = parameters.endRadius
fretboardHeight = parameters.fretboardHeight
filletRadius = parameters.filletRadius
endCurve = parameters.endCurve
tangWidth = parameters.tangWidth
bridgeStringSpacing = parameters.bridgeStringSpacing
tangDepth = parameters.tangDepth
nutSlotWidth = parameters.nutSlotWidth
nutSlotDepth = parameters.nutSlotDepth
markerDiameter = parameters.markerDiameter
markerDepth = parameters.markerDepth
markerSpacing = parameters.markerSpacing
guitarLength = parameters.guitarLength
bodyWidth = parameters.bodyWidth
headstockLength = parameters.headstockLength
bodyLength = parameters.bodyLength
stringCount = parameters.stringCount
nutToPost = parameters.nutToPost
machinePostHoleSpacing = parameters.machinePostHoleSpacing
machinePostHoleDiameter = parameters.machinePostHoleDiameter
machinePostDiameter = parameters.machinePostDiameter
nutStringSpacing = parameters.nutStringSpacing
fretboardLength = parameters.fretboardLength + parameters.fretboardLengthOffset
headstockStyle = parameters.headstockStyle
neckSpacing = parameters.neckSpacing
bridgeSpacing = parameters.bridgeSpacing
singleCoilLength = parameters.singleCoilLength
singleCoilWidth = parameters.singleCoilWidth
singleCoilDepth = parameters.singleCoilDepth
humbuckerLength = parameters.humbuckerLength
humbuckerWidth = parameters.humbuckerWidth
humbuckerDepth = parameters.humbuckerDepth
humbuckerFillet = parameters.humbuckerFillet
pickupNeck = parameters.pickupNeck
pickupMiddle = parameters.pickupMiddle
pickupBridge = parameters.pickupBridge
pickupCavityMountLength = parameters.pickupCavityMountLength
pickupCavityMountTabWidth = parameters.pickupCavityMountTabWidth
bridgePickupAngle = self.design.unitsManager.convert(parameters.bridgePickupAngle, 'deg', 'rad')
#Equal-temperament fret spacing: fret n sits scaleLength - scaleLength/2**(n/12) from the nut, so fret 12 falls at exactly half the scale length
for fretNum in range(1,int((fretNumber))+1):
fretDistance = (scaleLength)-((scaleLength)/(2**(fretNum/12.0)))
#L is the fretboard length; the board width at distance L from the nut follows from the string taper
L = fretboardLength
stringInset = (nutLength - nutStringSpacing)/2
nutDistance = guitarLength - headstockLength
#The taper is linear, so the trig round trip (cos of acos of the taper angle) cancels and the width is a straight proportion of L along the scale
width = (nutLength-(stringInset*2)) + L*(bridgeStringSpacing-(nutLength-(stringInset*2)))/scaleLength
# Create a new sketch.
sketches = dimsComp.sketches
yzPlane = dimsComp.yZConstructionPlane
xzPlane = dimsComp.xYConstructionPlane #note: despite the name, all sketches below are placed on the XY plane (top view)
xyPlane = dimsComp.xYConstructionPlane
#Create construction lines
sketch1 = sketches.add(xzPlane)
sketch1.isComputeDeferred = True
lines = sketch1.sketchCurves.sketchLines
sketch1.name = 'Boundary Dimensions'
sketch1.isVisible = True
sketch1.areDimensionsShown = True
sketch1.areConstraintsShown = False
sketch1.arePointsShown = False
sketch1.areProfilesShown = False
centerLine = lines.addByTwoPoints(adsk.core.Point3D.create(0, 0, 0), adsk.core.Point3D.create(guitarLength, 0, 0))
centerLine.isConstruction = True
centerLine.isFixed = True
originBoundary = lines.addByTwoPoints(adsk.core.Point3D.create(0, bodyWidth/2, 0), adsk.core.Point3D.create(0, -bodyWidth/2, 0))
originBoundary.isConstruction = True
endBoundary = lines.addByTwoPoints(adsk.core.Point3D.create(guitarLength, bodyWidth/2, 0), adsk.core.Point3D.create(guitarLength, -bodyWidth/2, 0))
endBoundary.isConstruction = True
topBoundary = lines.addByTwoPoints(adsk.core.Point3D.create(0, bodyWidth/2, 0), adsk.core.Point3D.create(guitarLength, bodyWidth/2, 0))
topBoundary.isConstruction = True
bottomBoundary = lines.addByTwoPoints(adsk.core.Point3D.create(0, -bodyWidth/2, 0), adsk.core.Point3D.create(guitarLength, -bodyWidth/2, 0))
bottomBoundary.isConstruction = True
bodyBoundary = lines.addByTwoPoints(adsk.core.Point3D.create(bodyLength, bodyWidth/2, 0), adsk.core.Point3D.create(bodyLength, -bodyWidth/2, 0))
bodyBoundary.isConstruction = True
nutBoundary = lines.addByTwoPoints(adsk.core.Point3D.create(nutDistance, bodyWidth/4, 0), adsk.core.Point3D.create(nutDistance, -bodyWidth/4, 0))
nutBoundary.isConstruction = True
bridgeBoundary = lines.addByTwoPoints(adsk.core.Point3D.create(nutDistance-scaleLength, bodyWidth/4, 0), adsk.core.Point3D.create(nutDistance-scaleLength, -bodyWidth/4, 0))
bridgeBoundary.isConstruction = True
fret12Boundary = lines.addByTwoPoints(adsk.core.Point3D.create(nutDistance-scaleLength/2, bodyWidth/4, 0), adsk.core.Point3D.create(nutDistance-scaleLength/2, -bodyWidth/4, 0))
fret12Boundary.isConstruction = True
#Create dimension lines
sketch1.areDimensionsShown = True
sketch1.sketchDimensions.addDistanceDimension(topBoundary.startSketchPoint, topBoundary.endSketchPoint,
adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation,
adsk.core.Point3D.create((topBoundary.length/2), bodyWidth/2+4, 0), False)
sketch1.sketchDimensions.addDistanceDimension(originBoundary.startSketchPoint, originBoundary.endSketchPoint,
adsk.fusion.DimensionOrientations.VerticalDimensionOrientation,
adsk.core.Point3D.create(-4, 0, 0), False)
sketch1.sketchDimensions.addDistanceDimension(bridgeBoundary.startSketchPoint, fret12Boundary.startSketchPoint,
adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation,
adsk.core.Point3D.create((nutDistance-scaleLength+scaleLength/4), bodyWidth/4+2, 0), False)
sketch1.sketchDimensions.addDistanceDimension(fret12Boundary.startSketchPoint, nutBoundary.startSketchPoint,
adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation,
adsk.core.Point3D.create((nutDistance-scaleLength/4), bodyWidth/4+2, 0), False)
sketch1.sketchDimensions.addDistanceDimension(topBoundary.startSketchPoint, bodyBoundary.startSketchPoint,
adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation,
adsk.core.Point3D.create((bodyLength/2), bodyWidth/2+2, 0), False)
sketch1.sketchDimensions.addDistanceDimension(bodyBoundary.startSketchPoint, topBoundary.endSketchPoint,
adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation,
adsk.core.Point3D.create(((guitarLength+bodyLength)/2), bodyWidth/2+2, 0), False)
sketch1.sketchDimensions.addDistanceDimension(topBoundary.startSketchPoint, centerLine.startSketchPoint,
adsk.fusion.DimensionOrientations.VerticalDimensionOrientation,
adsk.core.Point3D.create(-2, bodyWidth/4, 0), False)
sketch1.sketchDimensions.addDistanceDimension(centerLine.startSketchPoint, bottomBoundary.startSketchPoint,
adsk.fusion.DimensionOrientations.VerticalDimensionOrientation,
adsk.core.Point3D.create(-2, -bodyWidth/4, 0), False)
sketch1.sketchDimensions.addDistanceDimension(centerLine.startSketchPoint, bridgeBoundary.startSketchPoint,
adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation,
adsk.core.Point3D.create(((guitarLength-headstockLength-scaleLength)/2), bodyWidth/4+2, 0), False)
sketch1.sketchDimensions.addDistanceDimension(nutBoundary.startSketchPoint, centerLine.endSketchPoint,
adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation,
adsk.core.Point3D.create((guitarLength-headstockLength/2), bodyWidth/4+2, 0), False)
sketch2 = sketches.add(xzPlane)
sketch2.isComputeDeferred = True
dimensionFrets = sketch2.sketchCurves.sketchLines
dimensionFrets2 = sketch2.sketchCurves.sketchLines
dimensionCircles = sketch2.sketchCurves.sketchCircles
dimensionCircles2 = sketch2.sketchCurves.sketchCircles
dimensionLines = sketch2.sketchCurves.sketchLines
humbuckerCavitySketch = sketch2.sketchCurves.sketchLines
sketch2.name = 'Core Dimensions'
sketch2.isVisible = True
sketch2.areDimensionsShown = True
sketch2.areConstraintsShown = False
sketch2.arePointsShown = False
sketch2.areProfilesShown = False
for fret in range(int(fretNumber)+1):
fretDistance = scaleLength-(scaleLength/(2**(fret/12.0)))
#Fret length grows linearly with distance from the nut (simplified from the equivalent cos(acos(...)) form)
fretLength = nutLength + fretDistance*(endLength-nutLength)/L
dimensioning = dimensionFrets.addByTwoPoints(adsk.core.Point3D.create((nutDistance-fretDistance), (fretLength/2), 0),
adsk.core.Point3D.create((nutDistance-fretDistance), (-fretLength/2), 0))
dimensioning.isConstruction = True
dimensioning2 = dimensionFrets2.addByTwoPoints(adsk.core.Point3D.create(nutDistance-L, (endLength/2), 0), adsk.core.Point3D.create(nutDistance-L, (-endLength/2), 0))
dimensioning2.isConstruction = True
bridgeLine2 = dimensionFrets2.addByTwoPoints(adsk.core.Point3D.create(nutDistance-scaleLength, (bridgeStringSpacing/2), 0), adsk.core.Point3D.create(nutDistance-scaleLength, (-bridgeStringSpacing/2), 0))
bridgeLine2.isConstruction = True
for dime in range(int(fretNumber)):
sketchDime1 = sketch2.sketchDimensions.addDistanceDimension(dimensionFrets[0+dime].startSketchPoint, dimensionFrets[1+dime].startSketchPoint, adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation, adsk.core.Point3D.create((nutDistance-scaleLength-(scaleLength/(-2**((dime+0.5)/12.0)))), 5, 0), False)
for dime in range(1, int(fretNumber)+1):
sketchDime1 = sketch2.sketchDimensions.addDistanceDimension(dimensionFrets[0].endSketchPoint, dimensionFrets[0+dime].endSketchPoint, adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation, adsk.core.Point3D.create((nutDistance-scaleLength-(scaleLength/(-2**((dime-0.5)/12.0)))), -5-(dime/4), 0), False)
sketchDime2 = sketch2.sketchDimensions.addDistanceDimension(dimensionFrets[0].startSketchPoint,
dimensionFrets[0].endSketchPoint, adsk.fusion.DimensionOrientations.VerticalDimensionOrientation, adsk.core.Point3D.create(nutDistance+2, 0, 0), False)
sketchDime3 = sketch2.sketchDimensions.addDistanceDimension(dimensioning2.startSketchPoint,
dimensioning2.endSketchPoint, adsk.fusion.DimensionOrientations.VerticalDimensionOrientation, adsk.core.Point3D.create(nutDistance-L-1, 0, 0), False)
sketchDime4 = sketch2.sketchDimensions.addDistanceDimension(dimensioning2.endSketchPoint,
dimensionFrets[0].endSketchPoint, adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation, adsk.core.Point3D.create(nutDistance-L/2, -12, 0), False)
sketchDime5 = sketch2.sketchDimensions.addDistanceDimension(bridgeLine2.startSketchPoint,
dimensionFrets[0].startSketchPoint, adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation, adsk.core.Point3D.create(nutDistance-scaleLength/2, 8, 0), False)
sketchDime6 = sketch2.sketchDimensions.addDistanceDimension(bridgeLine2.startSketchPoint,
bridgeLine2.endSketchPoint, adsk.fusion.DimensionOrientations.VerticalDimensionOrientation, adsk.core.Point3D.create(nutDistance-scaleLength-1, 0, 0), False)
if headstockStyle == 'Straight In-line':
#Create sketch for bridge spacing
for spacing in range(int(stringCount)):
holeSpacingVert = (nutLength-stringInset*2)/2 - ((nutLength-stringInset*2)/(int(stringCount)-1))*spacing
holeSpacingHor = spacing*machinePostHoleSpacing
machinePosts = dimensionCircles.addByTwoPoints(adsk.core.Point3D.create(nutDistance+nutToPost+holeSpacingHor, holeSpacingVert, 0), adsk.core.Point3D.create(nutDistance+nutToPost+holeSpacingHor, holeSpacingVert+(machinePostDiameter), 0))
machinePosts.isConstruction = True
for dime in range(int(stringCount)-1):
sketchDime7 = sketch2.sketchDimensions.addDistanceDimension(dimensionCircles[0+dime].centerSketchPoint, dimensionCircles[1+dime].centerSketchPoint, adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation,
adsk.core.Point3D.create((nutDistance+nutToPost+machinePostHoleSpacing*dime)+machinePostHoleSpacing/2, 4, 0), False)
sketchDime8 = sketch2.sketchDimensions.addDistanceDimension(dimensionCircles[0+dime].centerSketchPoint, dimensionCircles[1+dime].centerSketchPoint, adsk.fusion.DimensionOrientations.VerticalDimensionOrientation,
adsk.core.Point3D.create(guitarLength+2, (nutLength-stringInset*2)/2 - ((nutLength-stringInset*2)/(int(stringCount)-1))*dime, 0), False)
sketchDime9 = sketch2.sketchDimensions.addDistanceDimension(dimensionFrets.item(0).startSketchPoint, dimensionCircles.item(0).centerSketchPoint, adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation, adsk.core.Point3D.create(nutDistance+nutToPost/2, 4, 0), False)
else:
pass
machinePosts = sketch2.sketchCurves.sketchCircles
machinePostHole1 = machinePosts.addByCenterRadius(adsk.core.Point3D.create(nutDistance+nutToPost, (nutStringSpacing/2)+(machinePostDiameter/2), 0), machinePostDiameter/2)
machinePostHole1.isConstruction = True
machinePostHole2 = machinePosts.addByCenterRadius(adsk.core.Point3D.create(nutDistance+nutToPost+machinePostHoleSpacing, (nutStringSpacing/2)+(machinePostDiameter/2), 0), machinePostDiameter/2)
machinePostHole2.isConstruction = True
machinePostHole3 = machinePosts.addByCenterRadius(adsk.core.Point3D.create(nutDistance+nutToPost+machinePostHoleSpacing*2, (nutStringSpacing/2)+(machinePostDiameter/2), 0), machinePostDiameter/2)
machinePostHole3.isConstruction = True
machinePostHole4 = machinePosts.addByCenterRadius(adsk.core.Point3D.create(nutDistance+nutToPost+machinePostHoleSpacing*2, (-nutStringSpacing/2)-(machinePostDiameter/2), 0), machinePostDiameter/2)
machinePostHole4.isConstruction = True
machinePostHole5 = machinePosts.addByCenterRadius(adsk.core.Point3D.create(nutDistance+nutToPost+machinePostHoleSpacing, (-nutStringSpacing/2)-(machinePostDiameter/2), 0), machinePostDiameter/2)
machinePostHole5.isConstruction = True
machinePostHole6 = machinePosts.addByCenterRadius(adsk.core.Point3D.create(nutDistance+nutToPost, (-nutStringSpacing/2)-(machinePostDiameter/2), 0), machinePostDiameter/2)
machinePostHole6.isConstruction = True
sketchDime10 = sketch2.sketchDimensions.addDistanceDimension(dimensionFrets.item(0).startSketchPoint, machinePostHole1.centerSketchPoint, adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation, adsk.core.Point3D.create(nutDistance+nutToPost/2, 4, 0), False)
sketchDime11 = sketch2.sketchDimensions.addDistanceDimension(machinePostHole1.centerSketchPoint, machinePostHole2.centerSketchPoint, adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation, adsk.core.Point3D.create(nutDistance+nutToPost+machinePostHoleSpacing/2, 4, 0), False)
sketchDime12 = sketch2.sketchDimensions.addDistanceDimension(machinePostHole2.centerSketchPoint, machinePostHole3.centerSketchPoint, adsk.fusion.DimensionOrientations.HorizontalDimensionOrientation, adsk.core.Point3D.create(nutDistance+nutToPost+machinePostHoleSpacing*1.5, 4, 0), False)
sketchDime13 = sketch2.sketchDimensions.addDistanceDimension(machinePostHole3.centerSketchPoint, machinePostHole4.centerSketchPoint, adsk.fusion.DimensionOrientations.VerticalDimensionOrientation, adsk.core.Point3D.create(nutDistance+machinePostHoleSpacing*3+2, 0, 0), False)
### PICKUPS
sketch3 = sketches.add(xzPlane)
sketch3.isComputeDeferred = True
sketch3.name = 'Pickup Dimensions'
sketch3.isVisible = True
sketch3.areDimensionsShown = True
sketch3.areConstraintsShown = False
sketch3.arePointsShown = False
sketch3.areProfilesShown = False
neckDistance = guitarLength - headstockLength - fretboardLength - neckSpacing
bridgeDistance = guitarLength - headstockLength - scaleLength + bridgeSpacing
middleDistance = bridgeDistance + (neckDistance - bridgeDistance)/2
neckLines = sketch3.sketchCurves.sketchLines
cavityNeckLines = sketch3.sketchCurves.sketchLines
if pickupNeck == "Single-Coil":
# Create sketch lines
pickupNeck1 = neckLines.addCenterPointRectangle(adsk.core.Point3D.create((neckDistance-singleCoilWidth/2), 0, 0), adsk.core.Point3D.create((neckDistance-singleCoilWidth/2)+singleCoilWidth/2, (singleCoilLength/2), 0))
pickupNeck1[0].isConstruction = True
pickupNeck1[1].isConstruction = True
pickupNeck1[2].isConstruction = True
pickupNeck1[3].isConstruction = True
pickupNeck2 = neckLines.addByTwoPoints(adsk.core.Point3D.create((neckDistance-singleCoilWidth/2), (-pickupCavityMountLength/2), 0), adsk.core.Point3D.create((neckDistance-singleCoilWidth/2), (pickupCavityMountLength/2), 0))
pickupNeck2.isConstruction = True
pickupNeckFillet1 = sketch3.sketchCurves.sketchArcs.addFillet(pickupNeck1[0], pickupNeck1[0].endSketchPoint.geometry, pickupNeck1[1], pickupNeck1[1].startSketchPoint.geometry, singleCoilWidth/2)
pickupNeckFillet1.isConstruction = True
pickupNeckFillet2 = sketch3.sketchCurves.sketchArcs.addFillet(pickupNeck1[1], pickupNeck1[1].endSketchPoint.geometry, pickupNeck1[2], pickupNeck1[2].startSketchPoint.geometry, singleCoilWidth/2)
pickupNeckFillet2.isConstruction = True
pickupNeckFillet3 = sketch3.sketchCurves.sketchArcs.addFillet(pickupNeck1[2], pickupNeck1[2].endSketchPoint.geometry, pickupNeck1[3], pickupNeck1[3].startSketchPoint.geometry, singleCoilWidth/2)
pickupNeckFillet3.isConstruction = True
pickupNeckFillet4 = sketch3.sketchCurves.sketchArcs.addFillet(pickupNeck1[3], pickupNeck1[3].endSketchPoint.geometry, pickupNeck1[0], pickupNeck1[0].startSketchPoint.geometry, singleCoilWidth/2)
pickupNeckFillet4.isConstruction = True
cavityNeck1 = cavityNeckLines.addCenterPointRectangle(adsk.core.Point3D.create((neckDistance-singleCoilWidth/2), 0, 0), adsk.core.Point3D.create((neckDistance-singleCoilWidth/2)+singleCoilWidth/2, (pickupCavityMountLength/2), 0))
cavityNeck1[0].isConstruction = True
cavityNeck1[1].isConstruction = True
cavityNeck1[2].isConstruction = True
cavityNeck1[3].isConstruction = True
cavityNeckFillet1 = sketch3.sketchCurves.sketchArcs.addFillet(cavityNeck1[0], cavityNeck1[0].endSketchPoint.geometry, cavityNeck1[1], cavityNeck1[1].startSketchPoint.geometry, singleCoilWidth/2)
cavityNeckFillet1.isConstruction = True
cavityNeckFillet2 = sketch3.sketchCurves.sketchArcs.addFillet(cavityNeck1[1], cavityNeck1[1].endSketchPoint.geometry, cavityNeck1[2], cavityNeck1[2].startSketchPoint.geometry, singleCoilWidth/2)
cavityNeckFillet2.isConstruction = True
cavityNeckFillet3 = sketch3.sketchCurves.sketchArcs.addFillet(cavityNeck1[2], cavityNeck1[2].endSketchPoint.geometry, cavityNeck1[3], cavityNeck1[3].startSketchPoint.geometry, singleCoilWidth/2)
cavityNeckFillet3.isConstruction = True
cavityNeckFillet4 = sketch3.sketchCurves.sketchArcs.addFillet(cavityNeck1[3], cavityNeck1[3].endSketchPoint.geometry, cavityNeck1[0], cavityNeck1[0].startSketchPoint.geometry, singleCoilWidth/2)
cavityNeckFillet4.isConstruction = True
elif pickupNeck == "Humbucker":
pickupNeck1 = neckLines.addCenterPointRectangle(adsk.core.Point3D.create((neckDistance-humbuckerWidth/2), 0, 0), adsk.core.Point3D.create((neckDistance-humbuckerWidth/2)+humbuckerWidth/2, (humbuckerLength/2), 0))
pickupNeck1[0].isConstruction = True
pickupNeck1[1].isConstruction = True
pickupNeck1[2].isConstruction = True
pickupNeck1[3].isConstruction = True
pickupNeck2 = neckLines.addByTwoPoints(adsk.core.Point3D.create((neckDistance-humbuckerWidth/2), (-pickupCavityMountLength/2), 0), adsk.core.Point3D.create((neckDistance-humbuckerWidth/2), (pickupCavityMountLength/2), 0))
pickupNeck2.isConstruction = True
pickupNeckFillet1 = sketch3.sketchCurves.sketchArcs.addFillet(pickupNeck1[0], pickupNeck1[0].endSketchPoint.geometry, pickupNeck1[1], pickupNeck1[1].startSketchPoint.geometry, humbuckerFillet)
pickupNeckFillet1.isConstruction = True
pickupNeckFillet2 = sketch3.sketchCurves.sketchArcs.addFillet(pickupNeck1[1], pickupNeck1[1].endSketchPoint.geometry, pickupNeck1[2], pickupNeck1[2].startSketchPoint.geometry, humbuckerFillet)
pickupNeckFillet2.isConstruction = True
pickupNeckFillet3 = sketch3.sketchCurves.sketchArcs.addFillet(pickupNeck1[2], pickupNeck1[2].endSketchPoint.geometry, pickupNeck1[3], pickupNeck1[3].startSketchPoint.geometry, humbuckerFillet)
pickupNeckFillet3.isConstruction = True
pickupNeckFillet4 = sketch3.sketchCurves.sketchArcs.addFillet(pickupNeck1[3], pickupNeck1[3].endSketchPoint.geometry, pickupNeck1[0], pickupNeck1[0].startSketchPoint.geometry, humbuckerFillet)
pickupNeckFillet4.isConstruction = True
cavityNeck1 = cavityNeckLines.addCenterPointRectangle(adsk.core.Point3D.create((neckDistance-humbuckerWidth/2), 0, 0), adsk.core.Point3D.create((neckDistance-humbuckerWidth/2)-pickupCavityMountTabWidth/2, (pickupCavityMountLength/2), 0))
cavityNeck1[0].isConstruction = True
cavityNeck1[1].isConstruction = True
cavityNeck1[2].isConstruction = True
cavityNeck1[3].isConstruction = True
cavityNeckFillet1 = sketch3.sketchCurves.sketchArcs.addFillet(cavityNeck1[0], cavityNeck1[0].endSketchPoint.geometry, cavityNeck1[1], cavityNeck1[1].startSketchPoint.geometry, humbuckerFillet/2)
cavityNeckFillet1.isConstruction = True
cavityNeckFillet2 = sketch3.sketchCurves.sketchArcs.addFillet(cavityNeck1[1], cavityNeck1[1].endSketchPoint.geometry, cavityNeck1[2], cavityNeck1[2].startSketchPoint.geometry, humbuckerFillet/2)
cavityNeckFillet2.isConstruction = True
cavityNeckFillet3 = sketch3.sketchCurves.sketchArcs.addFillet(cavityNeck1[2], cavityNeck1[2].endSketchPoint.geometry, cavityNeck1[3], cavityNeck1[3].startSketchPoint.geometry, humbuckerFillet/2)
cavityNeckFillet3.isConstruction = True
cavityNeckFillet4 = sketch3.sketchCurves.sketchArcs.addFillet(cavityNeck1[3], cavityNeck1[3].endSketchPoint.geometry, cavityNeck1[0], cavityNeck1[0].startSketchPoint.geometry, humbuckerFillet/2)
cavityNeckFillet4.isConstruction = True
else:
pass
middleLines = sketch3.sketchCurves.sketchLines
cavitymiddleLines = sketch3.sketchCurves.sketchLines
if pickupMiddle == "Single-Coil":
# Create sketch lines
pickupMiddle1 = middleLines.addCenterPointRectangle(adsk.core.Point3D.create((middleDistance), 0, 0), adsk.core.Point3D.create((middleDistance+singleCoilWidth/2), (singleCoilLength/2), 0))
pickupMiddle1[0].isConstruction = True
pickupMiddle1[1].isConstruction = True
pickupMiddle1[2].isConstruction = True
pickupMiddle1[3].isConstruction = True
pickupMiddle2 = middleLines.addByTwoPoints(adsk.core.Point3D.create((middleDistance), (-pickupCavityMountLength/2), 0), adsk.core.Point3D.create((middleDistance), (pickupCavityMountLength/2), 0))
pickupMiddle2.isConstruction = True
pickupMiddleFillet1 = sketch3.sketchCurves.sketchArcs.addFillet(pickupMiddle1[0], pickupMiddle1[0].endSketchPoint.geometry, pickupMiddle1[1], pickupMiddle1[1].startSketchPoint.geometry, singleCoilWidth/2)
pickupMiddleFillet1.isConstruction = True
pickupMiddleFillet2 = sketch3.sketchCurves.sketchArcs.addFillet(pickupMiddle1[1], pickupMiddle1[1].endSketchPoint.geometry, pickupMiddle1[2], pickupMiddle1[2].startSketchPoint.geometry, singleCoilWidth/2)
pickupMiddleFillet2.isConstruction = True
pickupMiddleFillet3 = sketch3.sketchCurves.sketchArcs.addFillet(pickupMiddle1[2], pickupMiddle1[2].endSketchPoint.geometry, pickupMiddle1[3], pickupMiddle1[3].startSketchPoint.geometry, singleCoilWidth/2)
pickupMiddleFillet3.isConstruction = True
pickupMiddleFillet4 = sketch3.sketchCurves.sketchArcs.addFillet(pickupMiddle1[3], pickupMiddle1[3].endSketchPoint.geometry, pickupMiddle1[0], pickupMiddle1[0].startSketchPoint.geometry, singleCoilWidth/2)
pickupMiddleFillet4.isConstruction = True
cavityMiddle1 = cavitymiddleLines.addCenterPointRectangle(adsk.core.Point3D.create((middleDistance), 0, 0), adsk.core.Point3D.create((middleDistance+singleCoilWidth/2), (pickupCavityMountLength/2), 0))
cavityMiddle1[0].isConstruction = True
cavityMiddle1[1].isConstruction = True
cavityMiddle1[2].isConstruction = True
cavityMiddle1[3].isConstruction = True
cavityMiddleFillet1 = sketch3.sketchCurves.sketchArcs.addFillet(cavityMiddle1[0], cavityMiddle1[0].endSketchPoint.geometry, cavityMiddle1[1], cavityMiddle1[1].startSketchPoint.geometry, singleCoilWidth/2)
cavityMiddleFillet1.isConstruction = True
cavityMiddleFillet2 = sketch3.sketchCurves.sketchArcs.addFillet(cavityMiddle1[1], cavityMiddle1[1].endSketchPoint.geometry, cavityMiddle1[2], cavityMiddle1[2].startSketchPoint.geometry, singleCoilWidth/2)
cavityMiddleFillet2.isConstruction = True
cavityMiddleFillet3 = sketch3.sketchCurves.sketchArcs.addFillet(cavityMiddle1[2], cavityMiddle1[2].endSketchPoint.geometry, cavityMiddle1[3], cavityMiddle1[3].startSketchPoint.geometry, singleCoilWidth/2)
cavityMiddleFillet3.isConstruction = True
cavityMiddleFillet4 = sketch3.sketchCurves.sketchArcs.addFillet(cavityMiddle1[3], cavityMiddle1[3].endSketchPoint.geometry, cavityMiddle1[0], cavityMiddle1[0].startSketchPoint.geometry, singleCoilWidth/2)
cavityMiddleFillet4.isConstruction = True
elif pickupMiddle == "Humbucker":
# Create sketch lines
pickupMiddle1 = middleLines.addCenterPointRectangle(adsk.core.Point3D.create((middleDistance), 0, 0), adsk.core.Point3D.create((middleDistance+humbuckerWidth/2), (humbuckerLength/2), 0))
pickupMiddle1[0].isConstruction = True
110.0 106.53 SOURCE3 1
p2-n -p2 103.6 119.62 SOURCE3 1
p3-n -p3 106.4 108.73 SOURCE3 3 0.2591
p4-n -p4 108.7 108.55 SOURCE3 1
p5-n -p5 114.3 99.99 SOURCE3 1
pc-n -pc 103.2 119.62 SOURCE3 1
pd-n -pd 103.2 119.62 SOURCE3 1
s4-n -s4 63.2 113.75 SOURCE3 1
s6-n -s6 63.4 119.68 SOURCE3 1
sh-n -sh 63.2 119.03 SOURCE3 1
s -n -s 60.1 126.00 SOURCE3 1
ss-n -ss 63.5 118.49 SOURCE3 1
br-oh-ho 43.2 101.60 SOURCE3 1
c1-oh-ho 52.0 108.76 SOURCE3 1
c2-oh-ho 51.8 107.63 SOURCE3_SOURCE5 86 1.5038
c3-oh-ho 49.0 107.26 SOURCE3_SOURCE5 7781 0.7665
ca-oh-ho 50.7 108.58 SOURCE3_SOURCE5 3580 0.7052
cc-oh-ho 51.6 107.12 CORR_SOURCE5 226 1.6427
cd-oh-ho 51.6 107.12 CORR_SOURCE5 226 1.6427
ce-oh-ho 51.6 106.83 CORR_SOURCE5 48 1.2629
cf-oh-ho 51.6 106.83 CORR_SOURCE5 48 1.2629
c -oh-ho 51.6 106.55 SOURCE3_SOURCE5 2765 1.0627
cl-oh-ho 50.6 102.40 SOURCE2 1
cx-oh-ho 50.0 106.76 5/2017 3 0.8687
cy-oh-ho 49.3 107.80 5/2017 16 0.6264
f -oh-ho 64.7 96.80 SOURCE2 1
ho-oh-ho 42.2 106.49 SOURCE2_SOURCE5 23 1.3050
ho-oh-i 38.0 107.98 SOURCE3 2
ho-oh-n1 66.5 107.81 HF/6-31G* 1
ho-oh-n2 64.0 103.09 SOURCE3_SOURCE5 185 1.2900
ho-oh-n3 63.2 102.26 SOURCE3_SOURCE5 28 0.5790
ho-oh-n4 62.5 106.63 SOURCE3 3 0.2770
ho-oh-n 63.9 101.29 SOURCE3_SOURCE5 114 1.0315
ho-oh-na 63.5 104.37 SOURCE3_SOURCE5 16 0.9188
ho-oh-nh 63.0 102.77 SOURCE4_SOURCE5 57 0.7554
ho-oh-no 63.6 102.17 SOURCE3 1
ho-oh-o 59.4 100.87 SOURCE3 1
ho-oh-oh 62.1 98.72 SOURCE3 2
ho-oh-os 62.3 99.68 SOURCE4_SOURCE5 45 0.3142
ho-oh-p2 58.6 109.45 SOURCE3 8 3.3491
ho-oh-p3 56.4 110.64 SOURCE3 3 0.5191
ho-oh-p4 57.9 110.19 SOURCE3 4 0.2372
ho-oh-p5 59.0 110.08 SOURCE3_SOURCE5 1074 1.1258
ho-oh-py 58.8 110.49 SOURCE3_SOURCE5 115 1.4927
ho-oh-s4 44.2 106.85 SOURCE4_SOURCE5 28 0.5669
ho-oh-s 42.2 100.15 SOURCE3 2
ho-oh-s6 46.0 107.26 SOURCE3_SOURCE5 180 0.7965
ho-oh-sh 44.4 106.24 SOURCE3 2 0.0661
ho-oh-ss 44.4 107.11 SOURCE3_SOURCE5 12 1.0472
ho-oh-sy 45.7 106.42 SOURCE4_SOURCE5 121 0.3216
br-os-br 67.4 110.63 SOURCE3 1
c1-os-c1 71.2 115.02 SOURCE3 1
c1-os-c3 68.5 113.39 SOURCE3 1
c2-os-c2 69.6 113.14 SOURCE3 6 2.1932
c2-os-c3 67.0 115.59 SOURCE3_SOURCE5 149 2.3501
c2-os-ca 67.8 118.20 SOURCE3_SOURCE5 13 0.6779
c2-os-n2 84.0 118.13 SOURCE3 1
c2-os-na 88.1 103.85 SOURCE3 4 0.6297
c2-os-os 87.8 102.77 SOURCE3 1
c2-os-p5 82.3 126.37 SOURCE4 7 1.7939
c2-os-ss 66.6 108.13 SOURCE3 1
c3-os-c3 66.3 112.48 SOURCE4_SOURCE5 4012 1.7399
c3-os-ca 66.1 117.96 SOURCE4_SOURCE5 7354 1.4497
c3-os-cc 66.4 117.37 CORR_SOURCE5 411 1.1548
c3-os-cd 66.4 117.37 CORR_SOURCE5 411 1.1548
c3-os-ce 66.6 116.09 CORR_SOURCE5 59 1.9942
c3-os-cf 66.6 116.09 CORR_SOURCE5 59 1.9942
c3-os-cl 71.8 110.50 SOURCE2 1
c3-os-cy 67.1 110.36 5/2017 14 0.9990
c3-os-i 59.7 113.70 SOURCE3 1
c3-os-n1 86.0 113.50 HF/6-31G* 1
c3-os-n2 85.1 109.23 SOURCE3_SOURCE5 93 0.8090
c3-os-n3 83.7 109.83 SOURCE4_SOURCE5 46 1.7350
c3-os-n4 84.1 110.50 SOURCE3 3 0.5426
c3-os-n 84.7 109.68 SOURCE4_SOURCE5 42 0.9897
c3-os-na 83.2 110.98 SOURCE3_SOURCE5 17 1.2781
c3-os-nc 83.8 112.73 SOURCE3 2 1.0358
c3-os-nd 83.8 112.73 SOURCE3 2
c3-os-nh 84.5 109.79 SOURCE4_SOURCE5 22 0.2157
c3-os-no 82.8 113.89 SOURCE4_SOURCE5 112 0.3140
c3-os-o 84.5 103.00 SOURCE3 1
c3-os-oh 84.0 108.11 SOURCE4_SOURCE5 34 0.5701
c3-os-os 84.0 107.37 SOURCE3_SOURCE5 55 0.9835
c3-os-p2 86.1 115.47 SOURCE3 8 2.6374
c3-os-p3 81.9 117.51 SOURCE3_SOURCE5 11 0.9552
c3-os-p4 83.3 117.48 SOURCE3 4 0.3850
c3-os-p5 83.3 119.54 SOURCE3_SOURCE5 665 1.1338
c3-os-py 83.1 119.57 SOURCE3_SOURCE5 59 1.1952
c3-os-s4 64.6 113.21 SOURCE3_SOURCE5 18 1.1865
c3-os-s6 65.7 115.87 SOURCE4_SOURCE5 144 1.2750
c3-os-s 62.7 109.55 SOURCE3 1
c3-os-sh 65.3 112.82 SOURCE3 1
c3-os-ss 64.0 114.01 SOURCE3_SOURCE5 8 0.2853
ca-os-ca 67.1 119.89 SOURCE4_SOURCE5 312 1.5712
ca-os-cc 69.3 113.08 CORR_SOURCE5 343 1.5098
ca-os-cd 69.3 113.08 CORR_SOURCE5 343 1.5098
ca-os-n3 84.6 112.19 SOURCE3 1
ca-os-na 86.0 108.24 SOURCE3 1
ca-os-nc 87.0 109.32 SOURCE3_SOURCE5 7 0.0434
ca-os-nd 87.0 109.32 SOURCE3_SOURCE5 7 0.0434
ca-os-p5 83.2 123.18 SOURCE4_SOURCE5 136 1.2191
ca-os-s6 66.2 117.18 SOURCE4_SOURCE5 46 1.0420
c -os-c2 68.1 118.22 SOURCE4_SOURCE5 22 0.6933
c -os-c3 66.9 115.98 SOURCE3_SOURCE5 2731 1.0103
c -os-c 67.5 120.64 SOURCE4 7 1.5114
c -os-ca 67.0 121.15 SOURCE4_SOURCE5 731 1.7389
c -os-cc 67.7 119.62 SOURCE3 5 6.0675
cc-os-cc 71.5 106.72 CORR_SOURCE5 406 0.7345
cc-os-cd 67.8 118.68 SOURCE4_SOURCE5 49 2.2289
c -os-cd 67.7 119.62 SOURCE3 5 6.0675
cc-os-na 84.9 111.66 SOURCE3 28 4.1343
cc-os-nc 87.6 108.37 SOURCE3_SOURCE5 148 0.8594
cc-os-os 85.4 108.47 SOURCE3 2
cc-os-ss 63.3 119.59 SOURCE3 1
c -os-cy 68.2 112.64 5/2017 3 1.5599
cd-os-cd 71.5 106.72 CORR_SOURCE5 406 0.7345
cd-os-na 84.9 111.66 SOURCE3 28 4.1343
cd-os-nd 87.6 108.37 SOURCE3_SOURCE5 148 0.8594
cd-os-os 85.4 108.47 SOURCE3 2
cd-os-ss 63.3 119.59 SOURCE3 1
cl-os-cl 80.6 110.76 SOURCE3 2
c -os-n2 86.2 112.12 SOURCE4_SOURCE5 16 0.1285
c -os-n 85.9 112.24 SOURCE4_SOURCE5 17 0.6206
c -os-oh 85.0 110.50 SOURCE3 1
c -os-os 84.8 110.20 SOURCE4_SOURCE5 22 1.3187
c -os-p5 83.7 122.13 SOURCE4_SOURCE5 11 0.5685
c -os-sy 65.2 113.49 SOURCE3 1
cx-os-cx 89.1 61.78 SOURCE4_SOURCE5 379 0.2104
cx-os-n 114.4 59.99 SOURCE3 1
cx-os-os 84.7 107.41 5/2017 2 0.8185
cy-os-cy 73.0 91.86 SOURCE2_SOURCE5 16 1.0042
f -os-f 112.3 103.30 SOURCE2 1
f -os-os 105.9 109.50 SOURCE2 1
i -os-i 65.0 115.67 SOURCE3 1
n1-os-n1 111.0 117.79 HF/6-31G* 1
n2-os-n2 109.0 106.83 SOURCE3 1
n2-os-s6 84.5 111.30 SOURCE4_SOURCE5 14 0.5651
n3-os-n3 107.0 104.88 SOURCE3 1
n4-os-n4 103.7 114.68 SOURCE3 1
na-os-na 104.4 109.59 SOURCE3 1
na-os-ss 83.6 104.34 SOURCE3 1
nc-os-nc 106.1 112.75 SOURCE2_SOURCE5 12 0.7540
nc-os-ss 81.7 110.97 SOURCE3 1
nd-os-nd 106.1 112.75 SOURCE2_SOURCE5 12 0.7540
nd-os-ss 81.7 110.97 SOURCE3 1
nh-os-nh 107.2 108.29 SOURCE3 1
n -os-n 107.6 108.31 SOURCE3 1
no-os-no 105.0 111.86 SOURCE3 1
n -os-s6 83.5 113.63 SOURCE4_SOURCE5 13 0.1799
o -os-o 98.0 114.68 SOURCE3 1
p2-os-p2 112.4 120.02 SOURCE3 1
p2-os-p5 117.0 107.86 SOURCE3 1
p3-os-p3 105.1 121.22 SOURCE3 1
p3-os-py 114.5 105.58 SOURCE3 1
p5-os-p5 106.8 126.25 SOURCE3 1
s4-os-s4 65.8 111.63 SOURCE3 1
s6-os-s6 66.4 119.07 SOURCE3 2 0.4318
sh-os-sh 64.5 118.95 SOURCE3 1
s -os-s 60.1 118.08 SOURCE3 1
ss-os-ss 64.2 115.64 SOURCE3 1
br-p2-br 50.4 108.60 SOURCE3 1
br-p2-c2 49.3 102.32 SOURCE3 2 0.0146
br-p2-n2 61.8 103.33 SOURCE3 1
br-p2-o 59.9 110.87 SOURCE3 1
br-p2-p2 63.6 115.46 SOURCE3 4 7.8622
br-p2-s 50.7 110.52 SOURCE3 1
c1-p2-c1 49.5 99.04 SOURCE3 1
c1-p2-c2 50.3 101.29 SOURCE3 1
c1-p2-n2 63.9 101.79 SOURCE3 1
c1-p2-o 63.4 107.62 SOURCE3 1
c1-p2-p2 68.2 99.54 SOURCE3 1
c1-p2-s 51.7 105.90 SOURCE3 1
c2-p2-c2 51.1 104.50 SOURCE3 1
c2-p2-c3 48.8 101.90 SOURCE3 4 0.1132
c2-p2-ca 49.0 101.95 SOURCE3 1
c2-p2-cl 54.3 102.72 SOURCE3 2
c2-p2-f 67.8 103.47 SOURCE3 2 0.0136
c2-p2-hp 37.3 97.19 SOURCE3 3 0.0216
c2-p2-i 44.0 101.94 SOURCE3 2 0.0368
c2-p2-n2 66.7 99.88 SOURCE3 1
c2-p2-n3 64.9 101.80 SOURCE3 1
c2-p2-n4 60.3 98.26 SOURCE3 6 0.1522
c2-p2-n 63.2 103.28 SOURCE3 4 3.3113
c2-p2-na 62.6 103.99 SOURCE3 8 1.6834
c2-p2-nh 63.6 105.17 SOURCE3 8 0.8263
c2-p2-no 64.7 97.97 SOURCE3 3 0.4175
c2-p2-o 63.8 115.16 SOURCE3 1
c2-p2-oh 65.3 102.89 SOURCE3 3 0.8191
c2-p2-os 66.6 102.12 SOURCE3 4 0.8783
c2-p2-p2 70.1 99.56 SOURCE3 1
c2-p2-p3 61.6 99.27 SOURCE3 4 1.1590
c2-p2-p4 61.7 96.94 SOURCE3 1
c2-p2-p5 61.5 97.61 SOURCE3 1
c2-p2-s4 48.3 95.15 SOURCE3 1
c2-p2-s6 48.4 95.51 SOURCE3 1
c2-p2-s 53.3 105.53 SOURCE3 1
c2-p2-sh 50.7 101.49 SOURCE3 3 0.0057
c2-p2-ss 50.7 101.81 SOURCE3 4 0.5883
c3-p2-c3 47.2 99.30 SOURCE3 1
c3-p2-n2 62.2 100.82 SOURCE3 1
c3-p2-o 61.6 106.72 SOURCE3 1
c3-p2-os 62.5 101.34 SOURCE3 1
c3-p2-p2 66.3 100.48 SOURCE3 1
c3-p2-s 50.5 105.68 SOURCE3 1
ca-p2-ca 47.5 99.70 SOURCE3 1
ca-p2-n2 62.6 100.82 SOURCE3 1
ca-p2-n 64.3 89.97 SOURCE3 1
ca-p2-na 64.4 89.21 SOURCE3 1
ca-p2-o 61.9 106.88 SOURCE3 1
ca-p2-s 50.2 107.93 SOURCE3 1
c -p2-c2 49.1 97.30 SOURCE3 1
c -p2-c 48.4 90.10 SOURCE3 1
ce-p2-o 62.4 107.44 SOURCE3 1
ce-p2-s 51.2 105.54 SOURCE3 1
cf-p2-o 62.4 107.44 SOURCE3 1
cf-p2-s 51.2 105.54 SOURCE3 1
cl-p2-cl 58.9 108.70 SOURCE3 1
cl-p2-n2 68.4 103.38 SOURCE3 1
cl-p2-o 66.7 110.57 SOURCE3 1
cl-p2-p2 73.8 103.11 SOURCE3 1
cl-p2-s 55.8 110.11 SOURCE3 1
f -p2-f 88.5 107.10 SOURCE3 1
f -p2-n2 86.7 103.57 SOURCE3 1
f -p2-o 86.7 110.61 SOURCE3 1
f -p2-p2 90.0 103.48 SOURCE3 1
f -p2-s 66.9 114.71 SOURCE3 2 5.2794
hp-p2-hp 27.6 98.76 SOURCE3 1
hp-p2-n1 47.0 95.18 SOURCE3 2 1.5708
hp-p2-n2 48.5 95.54 SOURCE3 19 4.7352
hp-p2-ne 48.3 100.10 SOURCE3 14 6.1290
hp-p2-nf 48.3 100.10 SOURCE3 14
hp-p2-o 48.1 105.58 SOURCE3 1
hp-p2-p2 47.8 101.88 SOURCE3 27 12.9535
hp-p2-p4 41.0 94.51 SOURCE3 1
hp-p2-p5 42.2 89.07 SOURCE3 1
hp-p2-pe 47.0 97.25 SOURCE3 16 8.8916
hp-p2-pf 47.0 97.25 SOURCE3 16
hp-p2-s4 32.5 89.99 SOURCE3 1
hp-p2-s 37.4 102.52 SOURCE3 1
hp-p2-s6 33.0 88.13 SOURCE3 1
i -p2-i 47.8 104.16 SOURCE3 1
i -p2-n2 55.0 101.77 SOURCE3 1
i -p2-o 52.7 109.51 SOURCE3 1
i -p2-p2 60.9 102.63 SOURCE3 1
i -p2-s 45.7 110.60 SOURCE3 1
n1-p2-n1 87.8 86.22 HF/6-31G* 1
n2-p2-n2 86.1 98.00 SOURCE3 1
n2-p2-n3 83.3 100.42 SOURCE3 1
n2-p2-n4 78.4 93.42 SOURCE3 1
n2-p2-na 80.5 102.03 SOURCE3 1
n2-p2-nh 82.5 101.87 SOURCE3 2 0.8491
n2-p2-no 82.4 98.12 SOURCE3 1
n2-p2-o 81.7 115.34 SOURCE3 1
n2-p2-oh 80.7 109.72 SOURCE3 1
n2-p2-os 85.1 102.29 SOURCE3 1
n2-p2-p3 77.5 99.51 SOURCE3 1
n2-p2-p4 75.8 101.73 SOURCE3 1
n2-p2-p5 79.0 93.68 SOURCE3 1
n2-p2-s4 60.0 97.83 SOURCE3 1
n2-p2-s6 60.1 98.14 SOURCE3 1
n2-p2-s 65.5 112.94 SOURCE3 1
n2-p2-sh 64.4 100.82 SOURCE3 1
n2-p2-ss 64.2 101.76 SOURCE3 1
n3-p2-n3 79.4 106.30 SOURCE3 1
n3-p2-o 82.9 106.83 SOURCE3 1
n3-p2-p2 87.3 100.58 SOURCE3 1
n3-p2-s 66.6 105.75 SOURCE3 1
n4-p2-n4 74.8 88.80 SOURCE3 1
n4-p2-o 76.3 101.36 SOURCE3 1
n4-p2-p2 82.5 96.53 SOURCE3 1
n4-p2-s 61.8 104.98 SOURCE3 1
na-p2-na 75.9 106.10 SOURCE3 1
na-p2-o 80.1 107.46 SOURCE3 1
na-p2-s 64.5 108.15 SOURCE3 1
ne-p2-o 85.8 107.71 SOURCE3 1
ne-p2-s 68.4 105.50 SOURCE3 1
nf-p2-o 85.8 107.71 SOURCE3 1
nf-p2-s 68.4 105.50 SOURCE3 1
nh-p2-nh 79.9 104.00 SOURCE3 1
nh-p2-o 82.1 108.11 SOURCE3
in range(len(files)):
# read the tif file
f = gdal.Open(path+"/"+files[i])
arr_3d[:,:,i] = f.ReadAsArray()
return arr_3d
def ExtractValues(Path, ExcludeValue, Compressed = True, OccupiedCellsOnly=True):
"""
=================================================================
ExtractValues(Path, ExcludeValue, Compressed = True, OccupiedCellsOnly = True)
=================================================================
this function extracts and returns a list of all the values
in a map
#TODO (ASCII only for now, to be extended later to read rasters as well)
Inputs:
1-Path:
[String] a path including the name of the ASCII file and its extension, like
path="data/cropped.asc"
2-ExcludeValue:
[Numeric] value you want to exclude from the extracted values
3-Compressed:
[Bool] True if the map you provided is compressed
4-OccupiedCellsOnly:
[Bool] if True, return only the number of cells that are not zero
"""
# input data validation
# data type
assert type(Path) == str, "Path input should be string type: " + str(Path)
assert type(Compressed) == bool, "Compressed input should be Boolean type"
# input values
# check whether the path exists or not
assert os.path.exists(Path), "the path you have provided does not exist"
# check whether the path has the right extension or not
if Compressed == True:
assert Path.endswith(".zip"), "file " + Path + " should have .zip extension"
else:
assert Path.endswith(".asc"), "file " + Path + " should have .asc extension"
ExtractedValues = list()
try:
# open the zip file
if Compressed :
Compressedfile = zipfile.ZipFile(Path)
# get the file name
fname = Compressedfile.infolist()[0]
# ASCIIF = Compressedfile.open(fname)
# SpatialRef = ASCIIF.readlines()[:6]
ASCIIF = Compressedfile.open(fname)
ASCIIRaw = ASCIIF.readlines()[6:]
rows = len(ASCIIRaw)
cols = len(ASCIIRaw[0].split())
MapValues = np.zeros((rows, cols), dtype=np.float32)
# read the ascii file
for i in range(rows):
x = ASCIIRaw[i].split()
MapValues[i,:] = list(map(float, x ))
else:
MapValues, SpatialRef= Raster.ReadASCII(Path)
# count nonzero cells
NonZeroCells = np.count_nonzero(MapValues)
if OccupiedCellsOnly == True:
ExtractedValues = 0
return ExtractedValues, NonZeroCells
# get the positions of cells whose value is not the ExcludeValue
rows = np.where(MapValues[:,:] != ExcludeValue)[0]
cols = np.where(MapValues[:,:] != ExcludeValue)[1]
except Exception:
print("Error opening the compressed file")
NonZeroCells = -1
ExtractedValues = -1
return ExtractedValues, NonZeroCells
# get the values of the filtered cells
for i in range(len(rows)):
ExtractedValues.append(MapValues[rows[i],cols[i]])
return ExtractedValues, NonZeroCells
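The header-skipping and row parsing done above can be sketched in isolation; a minimal example with a hypothetical six-line ESRI ASCII header and two data rows:

```python
import numpy as np

# hypothetical ESRI ASCII grid: 6 header lines followed by data rows
ascii_text = "\n".join(["header"] * 6 + ["1 2 3", "4 5 6"])
raw = ascii_text.splitlines()[6:]          # skip the 6-line header
rows, cols = len(raw), len(raw[0].split())
map_values = np.zeros((rows, cols), dtype=np.float32)
for i in range(rows):
    map_values[i, :] = list(map(float, raw[i].split()))
# map_values is now a (2, 3) float32 array
```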
@staticmethod
def OverlayMap(Path, BaseMap, ExcludeValue, Compressed = False, OccupiedCellsOnly=True):
"""
=================================================================
OverlayMap(Path, BaseMap, ExcludeValue, Compressed = False, OccupiedCellsOnly = True)
=================================================================
this function extracts and returns a list of all the values
in an ASCII file
Inputs:
1-Path:
[String] a path to the ascii file (including the extension).
2-BaseMap:
[String/array] a path including the name of the ASCII file and its
extension, like path="data/cropped.asc", or a numpy array.
3-ExcludeValue:
[Numeric] value you want to exclude from the extracted values.
4-Compressed:
[Bool] True if the map you provided is compressed.
5-OccupiedCellsOnly:
[Bool] if True, count only cells that are not zero.
Outputs:
1- ExtractedValues:
[Dict] dictionary with the values in the basemap as keys
and for each key a list of all the intersected values in the
maps from the path.
2- NonZeroCells:
[Integer] number of cells that are not zero in the map.
"""
# input data validation
# data type
assert type(Path) == str, "Path input should be string type"
assert type(Compressed) == bool, "Compressed input should be Boolean type"
# assert type(BaseMapF) == str, "BaseMapF input should be string type"
# input values
# check whether the path exists or not
assert os.path.exists(Path), "the path you have provided does not exist"
# read the base map
if type(BaseMap) == str:
if BaseMap.endswith('.asc'):
BaseMapV, _ = Raster.ReadASCII(BaseMap)
else:
BaseMap = gdal.Open(BaseMap)
BaseMapV = BaseMap.ReadAsArray()
else:
BaseMapV = BaseMap
ExtractedValues = dict()
try:
# open the zip file
if Compressed :
Compressedfile = zipfile.ZipFile(Path)
# get the file name
fname = Compressedfile.infolist()[0]
ASCIIF = Compressedfile.open(fname)
# SpatialRef = ASCIIF.readlines()[:6]
ASCIIF = Compressedfile.open(fname)
ASCIIRaw = ASCIIF.readlines()[6:]
rows = len(ASCIIRaw)
cols = len(ASCIIRaw[0].split())
MapValues = np.zeros((rows, cols), dtype=np.float32)
# read the ascii file
for row in range(rows):
x = ASCIIRaw[row].split()
MapValues[row,:] = list(map(float, x ))
else:
MapValues, SpatialRef= Raster.ReadASCII(Path)
# count number of nonzero cells
NonZeroCells = np.count_nonzero(MapValues)
if OccupiedCellsOnly == True:
ExtractedValues = 0
return ExtractedValues, NonZeroCells
# get the positions of cells whose value is not the ExcludeValue
rows = np.where(MapValues[:,:] != ExcludeValue)[0]
cols = np.where(MapValues[:,:] != ExcludeValue)[1]
except Exception:
print("Error opening the compressed file")
NonZeroCells = -1
ExtractedValues = -1
return ExtractedValues, NonZeroCells
# extract values
for i in range(len(rows)):
# first check if the sub-basin has a list in the dict if not create a list
if BaseMapV[rows[i],cols[i]] not in list(ExtractedValues.keys()):
ExtractedValues[BaseMapV[rows[i],cols[i]]] = list()
# if not np.isnan(MapValues[rows[i],cols[i]]):
ExtractedValues[BaseMapV[rows[i],cols[i]]].append(MapValues[rows[i],cols[i]])
# else:
# if the value is nan
# NanList.append(FilteredList[i])
return ExtractedValues, NonZeroCells
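The grouping step above (one list of overlapping map values per basemap value) can be sketched with toy arrays:

```python
import numpy as np

base = np.array([[1, 1], [2, 2]])          # zones from the base map
mapv = np.array([[10., 0.], [30., 40.]])   # values to extract
exclude = 0
rows, cols = np.where(mapv != exclude)
extracted = {}
for r, c in zip(rows, cols):
    # create the list for a new zone key, then append the overlapping value
    extracted.setdefault(base[r, c], []).append(mapv[r, c])
# extracted: {1: [10.0], 2: [30.0, 40.0]}
```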
@staticmethod
def OverlayMaps(Path, BaseMapF, FilePrefix, ExcludeValue, Compressed = False,
OccupiedCellsOnly=True):
"""
=================================================================
OverlayMaps(Path, BaseMapF, FilePrefix, ExcludeValue, Compressed = False, OccupiedCellsOnly = True)
=================================================================
this function extracts and returns a list of all the values
in the ASCII files that match a given prefix
Inputs:
1-Path:
[String] a path to the folder including the maps.
2-BaseMapF:
[String] a path including the name of the ASCII file and its extension, like
path="data/cropped.asc"
3-FilePrefix:
[String] a string that makes the files you want to filter in the folder
unique
4-ExcludeValue:
[Numeric] values you want to exclude from the extracted values
5-Compressed:
[Bool] True if the maps you provided are compressed
6-OccupiedCellsOnly:
[Bool] if True, count only cells that are not zero
Outputs:
1- ExtractedValues:
[Dict] dictionary with the values in the basemap as keys
and for each key a list of all the intersected values in the
maps from the path
2- NonZeroCells:
[dataframe] dataframe with the first column as the file name
and the second column the number of cells in each map
"""
# input data validation
# data type
assert type(Path) == str, "Path input should be string type"
assert type(FilePrefix) == str, "FilePrefix input should be string type"
assert type(Compressed) == bool, "Compressed input should be Boolean type"
assert type(BaseMapF) == str, "BaseMapF input should be string type"
# input values
# check whether the path exists or not
assert os.path.exists(Path), "the path you have provided does not exist"
# check whether there are files or not inside the folder
assert os.listdir(Path) != [], "the path you have provided is empty"
# get list of all files
Files=os.listdir(Path)
FilteredList = list()
# filter file list with the File prefix input
for i in range(len(Files)):
if Files[i].startswith(FilePrefix):
FilteredList.append(Files[i])
NonZeroCells = pd.DataFrame()
NonZeroCells['files'] = FilteredList
NonZeroCells['cells'] = 0
# read the base map
if BaseMapF.endswith('.asc'):
BaseMapV, _ = Raster.ReadASCII(BaseMapF)
else:
BaseMap = gdal.Open(BaseMapF)
BaseMapV = BaseMap.ReadAsArray()
ExtractedValues = dict()
FilesNotOpened = list()
for i in range(len(FilteredList)):
print("File " + FilteredList[i])
ExtractedValuesi, NonZeroCells.loc[i,'cells'] = Raster.OverlayMap(Path + "/" + FilteredList[i],
BaseMapV, ExcludeValue, Compressed,
OccupiedCellsOnly)
# skip files that could not be opened before touching the result dict
if ExtractedValuesi == -1 or NonZeroCells.loc[i,'cells'] == -1:
FilesNotOpened.append(FilteredList[i])
continue
# these are the distinct values from the BaseMap which are keys in the
# ExtractedValuesi dict, with each one having a list of values
BaseMapValues = list(ExtractedValuesi.keys())
for j in range(len(BaseMapValues)):
if BaseMapValues[j] not in ExtractedValues:
ExtractedValues[BaseMapValues[j]] = list()
ExtractedValues[BaseMapValues[j]] = ExtractedValues[BaseMapValues[j]] + ExtractedValuesi[BaseMapValues[j]]
return ExtractedValues, NonZeroCells
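The merge of per-file results into one accumulated dictionary, as the loop above does, can be sketched with hypothetical per-file dicts:

```python
# merging per-file results into one dict keyed by basemap value
totals = {}
per_file = [{1: [10.0], 2: [20.0]}, {2: [30.0], 3: [40.0]}]
for result in per_file:
    for key, values in result.items():
        # extend the running list for each basemap key
        totals.setdefault(key, []).extend(values)
# totals: {1: [10.0], 2: [20.0, 30.0], 3: [40.0]}
```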
@staticmethod
def Normalize(array):
"""
Normalizes numpy arrays into scale 0.0 - 1.0
Parameters
----------
array : [numpy.ndarray]
the array to normalize.
Returns
-------
[numpy.ndarray]
the input array linearly scaled to the range 0.0 - 1.0.
"""
array_min, array_max = array.min(), array.max()
return ((array - array_min)/(array_max - array_min))
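The min-max scaling applied above, on a small example array:

```python
import numpy as np

arr = np.array([2.0, 4.0, 6.0])
# min-max normalization: min maps to 0.0, max maps to 1.0
normalized = (arr - arr.min()) / (arr.max() - arr.min())
# normalized: [0.0, 0.5, 1.0]
```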
@staticmethod
def GetEpsg(proj, extension = 'tiff'):
"""
=====================================================
GetEpsg(proj, extension = 'tiff')
=====================================================
This function reads the projection of a GEOGCS file or tiff file
Parameters
----------
proj : TYPE
projection read from the netcdf file.
extension : [string], optional
tiff or GEOGCS . The default is 'tiff'.
Returns
-------
epsg : [integer]
epsg number
"""
try:
if extension == 'tiff':
# Get info of the dataset that is used for transforming
g_proj = proj.GetProjection()
Projection = g_proj.split('EPSG","')
if extension == 'GEOGCS':
Projection = proj
epsg = int((str(Projection[-1]).split(']')[0])[0:-1])
except Exception:
# fall back to WGS84 if the projection cannot be parsed
epsg = 4326
return epsg
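The string surgery above pulls the EPSG code out of the last AUTHORITY entry of a WKT projection string; a sketch on a hypothetical (abbreviated) WKT string:

```python
# hypothetical WKT string; the last AUTHORITY entry carries the EPSG code
wkt = 'PROJCS["WGS 84 / UTM 36N",AUTHORITY["EPSG","32636"]]'
parts = wkt.split('EPSG","')
# parts[-1] is '32636"]]'; take the text before the first ']' and drop the quote
epsg = int(str(parts[-1]).split(']')[0][0:-1])
# epsg: 32636
```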
@staticmethod
def NCdetails(nc, Var = None):
"""
==========================================================
NCGetGeotransform(nc, Var = None)
==========================================================
NCGetGeotransform takes a netcdf object and return
"""The watcher file is a script meant to be run as an ongoing process that watches a given directory and analyzes files
that may be moved there for certain keywords. Automatic processing is started if a file contains the keywords."""
import collections
import concurrent.futures
import datetime
import os
from functools import lru_cache
import gzip
import importlib
import logging
import os.path as osp
import pickle
import shutil
import time
import zipfile
import yagmail
import yaml
from pylinac.core.decorators import value_accept
from pylinac.core.io import retrieve_demo_file, is_dicom_image
from pylinac.core.image import prepare_for_classification, DicomImage
from pylinac.core import schedule
from pylinac import VMAT, Starshot, PicketFence, WinstonLutz, LeedsTOR, StandardImagingQC3, load_log, LasVegas
from pylinac.log_analyzer import IMAGING
logger = logging.getLogger("pylinac")
logging.basicConfig(level=logging.INFO,
format='%(asctime)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S')
class AnalyzeMixin:
"""Mixin for processing files caught by the pylinac watcher.
Attributes
----------
obj : class
The class that analyzes the file; e.g. Starshot, PicketFence, etc.
config_name : str
The string that references the class in the YAML config file.
"""
obj = object
config_name = ''
has_classification = False
def __init__(self, path, config):
"""
Parameters
----------
path : str
The path to the file to be analyzed.
config :
The configuration settings of analysis. See `~pylinac.watcher.load_config`.
"""
self.full_path = path
self.local_path = osp.basename(path)
self.base_name = osp.splitext(self.full_path)[0]
self.config = config
@classmethod
def run(cls, files, config, skip_list):
files = drop_skips(files, skip_list)
for file in files:
cond1 = contains_keywords(file, config, cls.config_name)
if config[cls.config_name]['use-classifier']:
cond2 = matches_classifier(file, cls)
else:
cond2 = False
if cond1 or cond2:
obj = cls(file, config)
obj.process()
skip_list.append(osp.basename(file))
def process(self):
"""Process the file; includes analysis, saving results to file, and sending emails."""
logger.info(self.full_path + " will be analyzed...")
self.instance = self.obj(self.full_path, **self.constructor_kwargs)
self.analyze()
if self.config['email']['enable-all']:
self.send_email()
elif self.config['email']['enable-failure'] and self.should_send_failure_email():
self.send_email()
self.publish_pdf()
# self.save_zip()
logger.info("Finished analysis on " + self.local_path)
def save_zip(self):
# save results and original file to a compressed ZIP archive
with zipfile.ZipFile(self.zip_filename, 'w', compression=zipfile.ZIP_DEFLATED) as zfile:
zfile.write(self.full_path, arcname=osp.basename(self.full_path))
# remove the original files
os.remove(self.full_path)
@property
def constructor_kwargs(self):
"""Any keyword arguments meant to be given to the constructor call."""
return {}
@property
def zip_filename(self):
"""The name of the file for the ZIP archive."""
return self.base_name + self.config['general']['file-suffix'] + '.zip'
@property
def pdf_filename(self):
"""The name of the file for the PDF results."""
return self.base_name + '.pdf'
@property
def keywords(self):
"""The keywords that signal a file is of a certain analysis type."""
return self.config[self.config_name]['keywords']
def keyword_in_here(self):
"""Determine whether a keyword exists in the filename."""
return any(keyword in self.local_path.lower() for keyword in self.keywords)
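The keyword test above is a case-insensitive substring match against the filename; a small sketch with hypothetical keywords:

```python
# hypothetical keywords from the YAML config and an incoming filename
keywords = ['pf', 'picket']
local_path = 'Clinac_PicketFence_Jan.dcm'
# case-insensitive substring match, as keyword_in_here does
matched = any(keyword in local_path.lower() for keyword in keywords)
# matched: True
```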
@property
def failure_settings(self):
"""The YAML failure settings."""
return self.config[self.config_name]['failure']
@property
def analysis_settings(self):
"""The YAML analysis settings."""
return self.config[self.config_name]['analysis']
def send_email(self, name=None, attachments=None):
"""Send an email with the analysis results."""
if name is None:
name = self.local_path
if attachments is None:
attachments = [self.pdf_filename]
elif attachments == '':
attachments = []
# compose message
current_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
if self.config['email']['enable-all']:
statement = 'The pylinac watcher analyzed the file named or containing "{}" at {}. '
statement += 'The analysis results are in the folder "{}".'
elif self.config['email']['enable-failure']:
statement = 'The pylinac watcher analyzed the file named or containing "{}" at {} and '
statement += 'found something that failed your configuration settings. '
statement += 'The analysis results are in the folder "{}".'
statement = statement.format(name, current_time, osp.dirname(self.full_path))
# send the email
contents = [statement] + attachments
recipients = [recipient for recipient in self.config['email']['recipients']]
yagserver = yagmail.SMTP(self.config['email']['sender'], self.config['email']['sender-password'])
yagserver.send(to=recipients,
subject=self.config['email']['subject'],
contents=contents)
logger.info("An email was sent to the recipients with the results")
def publish_pdf(self):
self.instance.publish_pdf(self.pdf_filename, unit=self.config['general']['unit'])
def should_send_failure_email(self):
"""Check whether analysis results were poor and an email should be triggered."""
return not self.instance.passed
def analyze(self):
"""Analyze the file."""
self.instance.analyze(**self.analysis_settings)
class AnalyzeLeeds(AnalyzeMixin):
"""Analysis runner for Leeds TOR phantom."""
obj = LeedsTOR
config_name = 'leeds'
class AnalyzeQC3(AnalyzeMixin):
"""Analysis runner for Standard Imaging QC-3."""
obj = StandardImagingQC3
config_name = 'qc3'
class AnalyzeLasVegas(AnalyzeMixin):
"""Analysis runner for Las Vegas phantom."""
obj = LasVegas
config_name = 'las-vegas'
class AnalyzeStar(AnalyzeMixin):
"""Analysis runner for starshots."""
obj = Starshot
config_name = 'starshot'
@property
def constructor_kwargs(self):
"""Give the SID to the algorithm"""
return {'sid': self.config[self.config_name]['analysis']['sid']}
@property
def analysis_settings(self):
"""Starshot analysis settings"""
settings = super().analysis_settings
settings.pop('sid')
return settings
class AnalyzePF(AnalyzeMixin):
"""Analysis runner for picket fences."""
obj = PicketFence
config_name = 'picketfence'
class AnalyzeLog(AnalyzeMixin):
"""Analysis runner for dynalogs or trajectory logs."""
obj = load_log
config_name = 'logs'
@property
def log_time(self):
rev_path = self.local_path[::-1]
u_idx = rev_path.find('_')
log_time = osp.splitext(rev_path[:u_idx][::-1])[0]
return log_time
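The reversed-string parsing above isolates whatever follows the last underscore and strips the file extension; a sketch with a hypothetical log filename:

```python
import os.path as osp

local_path = 'patient_A1_2019-01-15.bin'
rev_path = local_path[::-1]
u_idx = rev_path.find('_')              # first '_' in the reversed name == last '_' in the original
log_time = osp.splitext(rev_path[:u_idx][::-1])[0]
# log_time: '2019-01-15'
```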
def send_email(self):
"""Send an email with the analysis results."""
super().send_email(name=self.log_time, attachments='')
def analyze(self):
"""Log analysis is done via calculating gamma."""
self.instance.fluence.gamma.calc_map(**self.analysis_settings)
def should_send_failure_email(self):
"""Failure is based on several varying criteria."""
send = False
for key, val in self.failure_settings.items():
if key == 'gamma':
gamma_below_threshold = self.instance.fluence.gamma.pass_prcnt < val
if gamma_below_threshold:
send = True
elif key == 'avg-rms':
rms_above_threshold = self.instance.axis_data.mlc.get_RMS_avg() > val
if rms_above_threshold:
send = True
elif key == 'max-rms':
rms_above_threshold = self.instance.axis_data.mlc.get_RMS_max() > val
if rms_above_threshold:
send = True
return send
def process(self):
"""Process the file; includes analysis, saving results to file, and sending emails."""
logger.info(self.local_path + " will be analyzed...")
self.instance = load_log(self.full_path, **self.constructor_kwargs)
if self.instance.treatment_type == IMAGING:
logger.info(self.local_path + " is an imaging log...")
else:
self.analyze()
self.publish_pdf()
self.save_zip()
if self.config['email']['enable-all']:
self.send_email()
elif self.config['email']['enable-failure'] and self.should_send_failure_email():
self.send_email()
logger.info("Finished analysis on " + self.local_path)
return True
@classmethod
def run(cls, files, config, skip_list):
files = drop_skips(files, skip_list)
for file in files:
if contains_keywords(file, config, cls.config_name):
obj = cls(file, config)
obj.process()
skip_list.append(osp.basename(file))
class AnalyzeCatPhan(AnalyzeMixin):
"""Analysis runner for CBCTs."""
config_name = 'catphan'
def __init__(self, path, config, zip_format=True):
self.zip_format = zip_format
super().__init__(path=path, config=config)
p = importlib.import_module('pylinac')
if zip_format:
self.obj = getattr(p, config[self.config_name]['model']).from_zip
else:
self.obj = getattr(p, config[self.config_name]['model'])
@property
def pdf_filename(self):
"""The name of the file for the PDF results."""
if self.zip_format:
return self.full_path.replace('.zip', '.pdf')
else:
return osp.join(osp.dirname(self.instance.dicom_stack[0].path), "CBCT - {}.pdf".format(self.instance.dicom_stack[0].date_created(format="%A, %I-%M-%S, %B %d, %Y")))
def analyze(self):
self.instance.analyze(**self.analysis_settings)
def should_send_failure_email(self):
"""Failure of CBCT depends on individual module performance."""
send = False
for key, val in self.failure_settings.items():
if key == 'hu-passed' and not self.instance.ctp404.passed_hu:
send = True
if key == 'uniformity-passed' and not self.instance.ctp486.overall_passed:
send = True
if key == 'geometry-passed' and not self.instance.ctp404.passed_geometry:
send = True
if key == 'thickness-passed' and not self.instance.ctp404.passed_thickness:
send = True
return send
@classmethod
def run(cls, files, config, skip_list):
files = drop_skips(files, skip_list)
# analyze ZIP archives
for file in files:
cond1 = contains_keywords(file, config, cls.config_name)
cond2 = file.endswith('.zip')
if cond1 and cond2:
obj = cls(file, config)
obj.process()
skip_list.append(osp.basename(file))
# analyze directory groups
done = False
while not done:
files = drop_skips(files, skip_list)
if len(files) > 50:
try:
obj = cls(osp.dirname(files[0]), config, zip_format=False)
obj.process()
if config[cls.config_name]['analysis']['zip_after']:
skip_list.append(osp.basename(obj.pdf_filename).replace('.pdf', '.zip'))
else:
for file in obj.instance.dicom_stack:
skip_list.append(osp.basename(file.path))
except Exception as e:
print(e)
done = True
else:
done = True
class AnalyzeWL(AnalyzeMixin):
"""Analysis runner for Winston-Lutz images."""
obj = WinstonLutz.from_zip
config_name = 'winston-lutz'
def __init__(self, path, config, zip_format=True):
self.zip_format = zip_format
self.config = config
if zip_format:
super().__init__(path, config)
self.obj = WinstonLutz.from_zip
else:
self.full_path = path
self.obj = WinstonLutz
def analyze(self):
"""Winston-Lutz doesn't explicitly analyze files."""
pass
def should_send_failure_email(self):
"""Failure of WL is based on 3 criteria."""
send = False
for key, val in self.failure_settings.items():
if key == 'gantry-iso-size' and (self.instance.gantry_iso_size > val):
send = True
if key == 'mean-cax-bb-distance' and (self.instance.cax2bb_distance() > val):
send = True
if key == 'max-cax-bb-distance' and (self.instance.cax2bb_distance('max') > val):
send = True
return send
def process(self):
"""Process the file; includes analysis, saving results to file, and sending emails."""
if self.zip_format:
logger.info(self.full_path + " will be analyzed...")
else:
logger.info("Winston Lutz batch will be analyzed...")
self.instance = self.obj(self.full_path, **self.constructor_kwargs)
self.analyze()
if self.config['email']['enable-all']:
self.send_email()
elif self.config['email']['enable-failure'] and self.should_send_failure_email():
self.send_email()
self.publish_pdf()
if not self.zip_format:
self.save_zip()
logger.info("Finished Winston-Lutz analysis")
@property
def pdf_filename(self):
"""The name of the file for the PDF results."""
if self.zip_format:
return self.base_name + '.pdf'
else:
dirname = osp.dirname(self.full_path[0])
dcm = DicomImage(self.full_path[0])
name = 'Winston-Lutz - {}.pdf'.format(dcm.date_created())
return osp.join(dirname, name)
@classmethod
def run(cls, files, config, skip_list):
files = drop_skips(files, skip_list)
# analyze ZIP archives
for file in files:
cond1 = contains_keywords(file, config, cls.config_name)
cond2 = file.endswith('.zip')
if cond1 and cond2:
obj = cls(file, config)
obj.process()
skip_list.append(osp.basename(file))
# analyze directory groups
if config[cls.config_name]['use-classifier']:
wlfiles = [f for f in files if matches_classifier(f, cls)]
if len(wlfiles) > 3:
obj =
# coding: utf-8
"""
Application Manager API
Application Manager APIs to control Apache Flink jobs # noqa: E501
OpenAPI spec version: 2.0.1
Contact: <EMAIL>
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from ververica_api_sdk.api_client import ApiClient
class SecretValueResourceApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_secret_value_using_post(self, namespace, secret_value, **kwargs): # noqa: E501
"""Create a secret value # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_secret_value_using_post(namespace, secret_value, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str namespace: namespace (required)
:param SecretValue secret_value: secretValue (required)
:return: SecretValue
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_secret_value_using_post_with_http_info(namespace, secret_value, **kwargs) # noqa: E501
else:
(data) = self.create_secret_value_using_post_with_http_info(namespace, secret_value, **kwargs) # noqa: E501
return data
def create_secret_value_using_post_with_http_info(self, namespace, secret_value, **kwargs): # noqa: E501
"""Create a secret value # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_secret_value_using_post_with_http_info(namespace, secret_value, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str namespace: namespace (required)
:param SecretValue secret_value: secretValue (required)
:return: SecretValue
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', 'secret_value'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_secret_value_using_post" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `create_secret_value_using_post`") # noqa: E501
# verify the required parameter 'secret_value' is set
if ('secret_value' not in params or
params['secret_value'] is None):
raise ValueError("Missing the required parameter `secret_value` when calling `create_secret_value_using_post`") # noqa: E501
collection_formats = {}
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'secret_value' in params:
body_params = params['secret_value']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json', 'application/yaml']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/v1/namespaces/{namespace}/secret-values', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SecretValue', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
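The `path_params` dict assembled above is substituted into the URL template by `api_client.call_api`; the substitution itself amounts to a simple format call, sketched here with a hypothetical namespace:

```python
# URL template and path parameters as passed to call_api above
path = '/api/v1/namespaces/{namespace}/secret-values'
path_params = {'namespace': 'default'}   # hypothetical namespace
resolved = path.format(**path_params)
# resolved: '/api/v1/namespaces/default/secret-values'
```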
def delete_secret_value_using_delete(self, name, namespace, **kwargs): # noqa: E501
"""Delete a secret value # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_secret_value_using_delete(name, namespace, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str name: name (required)
:param str namespace: namespace (required)
:return: SecretValue
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_secret_value_using_delete_with_http_info(name, namespace, **kwargs) # noqa: E501
else:
(data) = self.delete_secret_value_using_delete_with_http_info(name, namespace, **kwargs) # noqa: E501
return data
def delete_secret_value_using_delete_with_http_info(self, name, namespace, **kwargs): # noqa: E501
"""Delete a secret value # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_secret_value_using_delete_with_http_info(name, namespace, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str name: name (required)
:param str namespace: namespace (required)
:return: SecretValue
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['name', 'namespace'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_secret_value_using_delete" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'name' is set
if ('name' not in params or
params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `delete_secret_value_using_delete`") # noqa: E501
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `delete_secret_value_using_delete`") # noqa: E501
collection_formats = {}
path_params = {}
if 'name' in params:
path_params['name'] = params['name'] # noqa: E501
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json', 'application/yaml']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/v1/namespaces/{namespace}/secret-values/{name}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SecretValue', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_secret_value_using_get(self, name, namespace, **kwargs): # noqa: E501
"""Get a secret value by name # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_secret_value_using_get(name, namespace, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str name: name (required)
:param str namespace: namespace (required)
:return: SecretValue
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_secret_value_using_get_with_http_info(name, namespace, **kwargs) # noqa: E501
else:
(data) = self.get_secret_value_using_get_with_http_info(name, namespace, **kwargs) # noqa: E501
return data
def get_secret_value_using_get_with_http_info(self, name, namespace, **kwargs): # noqa: E501
"""Get a secret value by name # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_secret_value_using_get_with_http_info(name, namespace, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str name: name (required)
:param str namespace: namespace (required)
:return: SecretValue
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['name', 'namespace'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_secret_value_using_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'name' is set
if ('name' not in params or
params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `get_secret_value_using_get`") # noqa: E501
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `get_secret_value_using_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'name' in params:
path_params['name'] = params['name'] # noqa: E501
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json', 'application/yaml']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/v1/namespaces/{namespace}/secret-values/{name}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SecretValue', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_secret_values_using_get(self, namespace, **kwargs): # noqa: E501
"""List all secrets values # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_secret_values_using_get(namespace, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str namespace: namespace (required)
:return: ResourceListSecretValue
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_secret_values_using_get_with_http_info(namespace, **kwargs) # noqa: E501
else:
(data) = self.get_secret_values_using_get_with_http_info(namespace, **kwargs) # noqa: E501
return data
def get_secret_values_using_get_with_http_info(self, namespace, **kwargs): # noqa: E501
"""List all secrets values # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread | |
""" Firefox Desktop Churn and Retention Cohorts
Tracked in Bug 1226379 [1]. The underlying dataset is generated via
the telemetry-batch-view [2] code, and is generated once a day. The
aggregated churn data is updated weekly.
Due to client reporting latency, we need to wait 10 days for the
data to stabilize. If the date is passed into the report through the
environment, it is assumed to be at least a week later than the
report start date. For example, if today is `20170323`, Airflow will
set the environment date to `20170316`. The date is then set back 10
days to `20170306` and pinned to the nearest preceding Sunday.
`20170306` happens to be a Monday, so the update will be set to
`20170305`.
Code is based on the previous FHR analysis code [3]. Details and
definitions are in Bug 1198537 [4].
The production location of this dataset can be found in the following
location: `s3://telemetry-parquet/churn/v2`.
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1226379
[2] https://git.io/vSBAt
[3] https://github.com/mozilla/churn-analysis
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1198537
"""
import logging
import operator
import arrow
import click
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window
from . import release, utils
from .schema import churn_schema
from .utils import DS, DS_NODASH
from functools import reduce
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
SOURCE_COLUMNS = [
"app_version",
"attribution",
"client_id",
"country",
"default_search_engine",
"distribution_id",
"locale",
"normalized_channel",
"profile_creation_date",
"submission_date_s3",
"subsession_length",
"subsession_start_date",
"sync_configured",
"sync_count_desktop",
"sync_count_mobile",
"timestamp",
"scalar_parent_browser_engagement_total_uri_count",
"scalar_parent_browser_engagement_unique_domains_count",
]
TOP_COUNTRIES = {
"US", "DE", "FR", "RU", "BR", "IN", "PL", "ID", "GB", "CN",
"IT", "JP", "CA", "ES", "UA", "MX", "AU", "VN", "EG", "AR",
"PH", "NL", "IR", "CZ", "HU", "TR", "RO", "GR", "AT", "CH",
"HK", "TW", "BE", "FI", "VE", "SE", "DZ", "MY"
}
# The number of seconds in a single hour, cast to float so that we
# keep the fractional part when converting.
SECONDS_PER_HOUR = float(60 * 60)
SECONDS_PER_DAY = 24 * 60 * 60
MAX_SUBSESSION_LENGTH = 60 * 60 * 48 # 48 hours in seconds.
DEFAULT_DATE = '2000-01-01' # The default date used for cleaning columns
def clean_new_profile(new_profile):
"""Create a `main_summary` compatible dataset from the new profile ping.
:param new_profile: dataframe conforming to the new profile ping
:returns: subset of `main_summary` used in churn
"""
iso8601_tz_format = "yyyy-MM-dd'T'HH:mm:ss.S'+00:00'"
select_expr = {
"app_version": "environment.build.version",
"attribution": "environment.settings.attribution",
"client_id": None,
"country": "metadata.geo_country",
"default_search_engine": "environment.settings.default_search_engine",
"distribution_id": "environment.partner.distribution_id",
"locale": "environment.settings.locale",
"normalized_channel": "metadata.normalized_channel",
"profile_creation_date": "environment.profile.creation_date",
"subsession_length": F.lit(None).cast('long'),
# The subsession_start_date is the profile_creation_date
"subsession_start_date": F.from_unixtime(
F.col("metadata.creation_timestamp") / 10 ** 9, iso8601_tz_format),
# no access to `WEAVE_CONFIGURED`, `WEAVE_DEVICE_COUNT_*` histograms in the new_profile
"sync_configured": F.lit(None).cast("boolean"),
"sync_count_desktop": F.lit(None).cast("int"),
"sync_count_mobile": F.lit(None).cast("int"),
"timestamp": F.col("metadata.timestamp"),
"scalar_parent_browser_engagement_total_uri_count": F.lit(None).cast("int"),
"scalar_parent_browser_engagement_unique_domains_count": F.lit(None).cast("int"),
"sample_id": F.expr("crc32(encode(client_id, 'UTF-8')) % 100").cast("string"),
"submission_date_s3": "submission",
}
return new_profile.select(utils.build_col_expr(select_expr))
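The `sample_id` expression above can be reproduced outside of Spark; `zlib.crc32` uses the same CRC-32 polynomial as Spark's `crc32`, so a plain-Python equivalent (a sketch, not part of the pipeline) looks like:

```python
import zlib

def sample_id(client_id):
    # Bucket a client into one of 100 sample partitions, mirroring
    # the Spark expression crc32(encode(client_id, 'UTF-8')) % 100.
    return str(zlib.crc32(client_id.encode('utf-8')) % 100)

# Deterministic per client; always a bucket in "0".."99".
print(sample_id("some-client-id"))
```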
def coalesce_new_profile_attribution(main_summary, new_profile):
"""Bug 1416364 - Fill in missing attribution in main summary using
new-profile pings."""
np_attr_col = "_np_attribution"
nested_attr_cols = [
"attribution.source",
"attribution.medium",
"attribution.campaign",
"attribution.content",
]
# all pings in new-profile that contain attribution
np_attribution = (
new_profile
# filter null and empty attribution
.where(F.col("attribution").isNotNull() &
F.coalesce(*nested_attr_cols).isNotNull())
.groupBy("client_id")
# some clients contain more than one attribution code
.agg(F.first("attribution").alias("attribution"))
.select("client_id", F.col("attribution").alias(np_attr_col))
)
coalesced_ms = (
main_summary
.join(np_attribution, "client_id", "left")
.withColumn("attribution", F.coalesce(np_attr_col, "attribution"))
.select(main_summary.columns)
)
return coalesced_ms
def extract(main_summary, new_profile, start_ds, period, slack, is_sampled):
"""
Extract data from the main summary table taking into account the
retention period and submission latency.
:param main_summary: dataframe pointing to main_summary.v4
:param new_profile: dataframe pointing to new_profile_ping_parquet
:param start_ds: start date of the retention period
:param period: length of the retention period
    :param slack: slack added to account for submission latency
    :param is_sampled: whether to restrict the data to a single sample_id bucket
:return: a dataframe containing the raw subset of data
"""
start = arrow.get(start_ds, DS_NODASH)
predicates = [
(F.col("subsession_start_date") >= utils.format_date(start, DS)),
(F.col("subsession_start_date") < utils.format_date(start, DS, period)),
(F.col("submission_date_s3") >= utils.format_date(start, DS_NODASH)),
(F.col("submission_date_s3") < utils.format_date(start, DS_NODASH,
period + slack)),
]
if is_sampled:
predicates.append((F.col("sample_id") == "57"))
extract_ms = (
main_summary
.where(reduce(operator.__and__, predicates))
.select(SOURCE_COLUMNS)
)
np = clean_new_profile(new_profile)
extract_np = (
np
.where(reduce(operator.__and__, predicates))
.select(SOURCE_COLUMNS)
)
coalesced_ms = coalesce_new_profile_attribution(extract_ms, np)
return coalesced_ms.union(extract_np)
def prepare_client_rows(main_summary):
"""Coalesce client pings into a DataFrame that contains one row for
each client."""
in_columns = {
"client_id",
"timestamp",
"scalar_parent_browser_engagement_total_uri_count",
"scalar_parent_browser_engagement_unique_domains_count",
"subsession_length"
}
out_columns = (
set(main_summary.columns) |
{
"usage_seconds",
"total_uri_count",
"unique_domains_count_per_profile"
}
)
    assert in_columns <= set(main_summary.columns)
# Get the newest ping per client and append to original dataframe
window_spec = (
Window
.partitionBy(F.col('client_id'))
.orderBy(F.col('timestamp').desc())
)
newest_per_client = (
main_summary
.withColumn('client_rank', F.row_number().over(window_spec))
.where(F.col("client_rank") == 1)
)
# Compute per client aggregates lost during newest client computation
select_expr = utils.build_col_expr({
"client_id": None,
"total_uri_count": (
F.coalesce(
"scalar_parent_browser_engagement_total_uri_count",
F.lit(0)
)),
"unique_domains_count": (
F.coalesce(
"scalar_parent_browser_engagement_unique_domains_count",
F.lit(0)
)),
# Clamp broken subsession values to [0, MAX_SUBSESSION_LENGTH].
"subsession_length": (
F.when(F.col('subsession_length') > MAX_SUBSESSION_LENGTH,
MAX_SUBSESSION_LENGTH)
.otherwise(
F.when(F.col('subsession_length') < 0, 0)
.otherwise(F.col('subsession_length'))))
})
per_client_aggregates = (
main_summary
.select(*select_expr)
.groupby('client_id')
.agg(
F.sum('subsession_length').alias('usage_seconds'),
F.sum('total_uri_count').alias('total_uri_count'),
F.avg('unique_domains_count')
.alias('unique_domains_count_per_profile')
)
)
# Join the two intermediate datasets
return (
newest_per_client
.join(per_client_aggregates, 'client_id', 'inner')
.select(*out_columns)
)
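The window-function step above (newest ping per client via `row_number` over a timestamp-descending window) is the standard "latest row per key" pattern; over a small plain-Python list (with illustrative field names) it amounts to:

```python
# Toy rows standing in for main_summary pings (field names are illustrative).
rows = [
    {'client_id': 'a', 'timestamp': 2, 'payload': 'new'},
    {'client_id': 'a', 'timestamp': 1, 'payload': 'old'},
    {'client_id': 'b', 'timestamp': 5, 'payload': 'only'},
]

# Sort newest-first, then keep the first row seen per client_id --
# equivalent to keeping row_number() == 1 over
# partitionBy(client_id).orderBy(timestamp desc).
newest = {}
for row in sorted(rows, key=lambda r: r['timestamp'], reverse=True):
    newest.setdefault(row['client_id'], row)

print({cid: r['payload'] for cid, r in newest.items()})
```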
def sync_usage(desktop_count, mobile_count, sync_configured):
"""Determine sync usage of a client.
:param desktop_count: Column name for "sync_count_desktop"
:param mobile_count: Column name for "sync_count_mobile"
:param sync_configured: Column name for "sync_configured"
:return: One of `multiple`, `single`, `no`, `null`
"""
device_count = (
F.coalesce(F.col(desktop_count), F.lit(0)) +
F.coalesce(F.col(mobile_count), F.lit(0))
)
return (
F.when(device_count > 1, F.lit("multiple"))
.otherwise(
F.when((device_count == 1) | F.col(sync_configured),
F.lit("single"))
.otherwise(
F.when(F.col(sync_configured).isNotNull(), F.lit("no"))
.otherwise(F.lit(None))))
)
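The same classification, written as a plain-Python decision table (a sketch mirroring the Spark expression above, including how missing counts coalesce to zero):

```python
def sync_usage_py(desktop_count, mobile_count, sync_configured):
    """Sketch of the sync_usage decision table; the real function
    builds the equivalent Spark Column expression."""
    # None counts coalesce to 0, as in the Spark version.
    devices = (desktop_count or 0) + (mobile_count or 0)
    if devices > 1:
        return "multiple"
    if devices == 1 or sync_configured:
        return "single"
    if sync_configured is not None:
        return "no"
    return None  # no sync information reported at all
```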
def in_top_countries(country):
"""Normalize country to be within a top country list.
:param country: Column name for "country"
    :return: the country code if it is in TOP_COUNTRIES, otherwise "ROW"
"""
return (
F.when(F.col(country).isin(TOP_COUNTRIES), F.col(country))
.otherwise(F.lit("ROW"))
)
def clean_columns(prepared_clients, effective_version, start_ds):
"""Clean columns in preparation for aggregation.
This removes invalid values, tidies up dates, and limits the scope of
several dimensions.
:param prepared_clients: `main_summary` rows that conform to the
schema of `prepare_client_rows(...)`
:param effective_version: DataFrame mapping dates to the active Firefox
version that was distributed at that time
:param start_ds: DateString to determine whether a row that is
being processed is active during the current
week
:returns: DataFrame with cleaned columns and rows
"""
# Temporary column used for determining the validity of a row
is_valid = "_is_valid"
pcd = F.from_unixtime(F.col("profile_creation_date") * SECONDS_PER_DAY)
client_date = utils.to_datetime('subsession_start_date', "yyyy-MM-dd")
days_since_creation = F.datediff(client_date, pcd)
is_funnelcake = F.col('distribution_id').rlike("^mozilla[0-9]+.*$")
attr_mapping = {
'distribution_id': None,
'default_search_engine': None,
'locale': None,
'subsession_start': client_date,
'channel': (
F.when(is_funnelcake, F.concat(
F.col("normalized_channel"), F.lit("-cck-"), F.col("distribution_id")
))
            .otherwise(F.col("normalized_channel"))),
'geo': in_top_countries("country"),
# Bug 1289573: Support values like "mozilla86" and "mozilla86-utility-existing"
'is_funnelcake': (
F.when(is_funnelcake, F.lit("yes"))
.otherwise(F.lit("no"))),
'acquisition_period': F.date_format(
F.date_sub(F.next_day(pcd, 'Sun'), 7),
"yyyy-MM-dd"),
'sync_usage': sync_usage(
"sync_count_desktop",
"sync_count_mobile",
"sync_configured"
),
'current_version': F.col("app_version"),
'current_week': (
# -1 is a placeholder for bad data
F.when(days_since_creation < 0, F.lit(-1))
.otherwise(F.floor(days_since_creation / 7))
.cast("long")),
'source': F.col('attribution.source'),
'medium': F.col('attribution.medium'),
'campaign': F.col('attribution.campaign'),
'content': F.col('attribution.content'),
'is_active': (
F.when(client_date < utils.to_datetime(F.lit(start_ds)), F.lit("no"))
.otherwise(F.lit("yes"))),
}
usage_hours = F.col('usage_seconds') / SECONDS_PER_HOUR
metric_mapping = {
'n_profiles': F.lit(1),
'total_uri_count': None,
'unique_domains_count_per_profile': None,
'usage_hours': usage_hours,
'sum_squared_usage_hours': F.pow(usage_hours, 2),
}
# Set the attributes to null if it's invalid
select_attr = utils.build_col_expr({
attr: F.when(F.col(is_valid), expr).otherwise(F.lit(None))
for attr, expr in utils.preprocess_col_expr(attr_mapping).items()
})
select_metrics = utils.build_col_expr(metric_mapping)
select_expr = select_attr + select_metrics
cleaned_data = (
# Compile per-client rows for the current retention period
prepared_clients
# Filter out seemingly impossible rows. One very obvious notion
# is to make sure that a profile is always created before a sub-session.
# Unlike `sane_date` in previous versions, this is idempotent and only
# depends on the data.
.withColumn("profile_creation", F.date_format(pcd, 'yyyy-MM-dd'))
.withColumn(is_valid, (
F.col("profile_creation").isNotNull() &
(F.col("profile_creation") > DEFAULT_DATE) &
(pcd <= client_date)
))
.select(
# avoid acquisition dates in the future
(
F.when(F.col(is_valid), F.col("profile_creation"))
.otherwise(F.lit(None))
.alias("profile_creation")
),
*select_expr
)
# Set default values for the rows
.fillna({
'acquisition_period': DEFAULT_DATE,
'is_funnelcake': "no",
"current_week": -1,
})
.fillna(0)
.fillna(0.0)
.fillna('unknown')
)
result = release.with_effective_version(
cleaned_data,
effective_version,
"profile_creation"
)
return result
def transform(main_summary, effective_version, start_ds):
"""Compute the churn data for this week. Note that it takes 10 days
from the end of this period for all the activity to arrive. This data
should be from Sunday through Saturday.
    :param main_summary: DataFrame of the dataset relevant to computing churn
    :param effective_version: DataFrame mapping dates to the active Firefox version
    :param start_ds: datestring of the start of this time period
"""
columns = [field.name for field in churn_schema.fields]
metrics = [
"n_profiles",
"usage_hours",
"sum_squared_usage_hours",
"total_uri_count",
"unique_domains_count_per_profile",
]
attributes = set(columns) - set(metrics)
# One row per client, normalized subsessions
prepared_clients = prepare_client_rows(main_summary)
cleaned_data = clean_columns(prepared_clients, effective_version, start_ds)
# Most aggregates are sums
agg_expr = {col: F.sum(col).alias(col) for col in metrics}
# unique_domains_count_per_profile is an odd one, since it is | |
<filename>results/generate_result.py
#!/usr/bin/python3
#
# Copyright 2018, The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MLTS benchmark result generator.
Reads a CSV produced by MLTS benchmark and generates
an HTML page with results summary.
Usage:
generate_result [csv input file] [html output file]
"""
import argparse
import collections
import csv
import os
import re
import math
class ScoreException(Exception):
"""Generator base exception type. """
pass
LatencyResult = collections.namedtuple(
'LatencyResult',
['iterations', 'total_time_sec', 'time_freq_start_sec', 'time_freq_step_sec', 'time_freq_sec'])
COMPILATION_TYPES = ['compile_without_cache', 'save_to_cache', 'prepare_from_cache']
BASELINE_COMPILATION_TYPE = COMPILATION_TYPES[0]
CompilationResult = collections.namedtuple(
'CompilationResult',
['cache_size_bytes'] + COMPILATION_TYPES)
BenchmarkResult = collections.namedtuple(
'BenchmarkResult',
['name', 'backend_type', 'inference_latency', 'max_single_error',
'testset_size', 'evaluator_keys', 'evaluator_values', 'validation_errors',
'compilation_results'])
ResultsWithBaseline = collections.namedtuple(
'ResultsWithBaseline',
['baseline', 'other'])
BASELINE_BACKEND = 'TFLite_CPU'
KNOWN_GROUPS = [
(re.compile('mobilenet_v1.*quant.*'), 'MobileNet v1 Quantized'),
(re.compile('mobilenet_v1.*'), 'MobileNet v1 Float'),
(re.compile('mobilenet_v2.*quant.*'), 'MobileNet v2 Quantized'),
(re.compile('mobilenet_v2.*'), 'MobileNet v2 Float'),
(re.compile('mobilenet_v3.*uint8.*'), 'MobileNet v3 Quantized'),
(re.compile('mobilenet_v3.*'), 'MobileNet v3 Float'),
(re.compile('tts.*'), 'LSTM Text-to-speech'),
(re.compile('asr.*'), 'LSTM Automatic Speech Recognition'),
]
class BenchmarkResultParser:
"""A helper class to parse the input CSV file."""
def __init__(self, csvfile):
self.csv_reader = csv.reader(filter(lambda row: row[0] != '#', csvfile))
self.row = None
self.index = 0
def next(self):
"""Advance to the next row, returns the current row or None if reaches the end."""
try:
self.row = next(self.csv_reader)
except StopIteration:
self.row = None
finally:
self.index = 0
return self.row
def read_boolean(self):
"""Read the next CSV cell as a boolean."""
s = self.read_typed(str).lower()
if s == 'true':
return True
elif s == 'false':
return False
else:
raise ValueError('Cannot convert \'%s\' to a boolean' % s)
def read_typed(self, Type):
"""Read the next CSV cell as the given type."""
if Type is bool:
return self.read_boolean()
entry = self.row[self.index]
self.index += 1
return Type(entry)
def read_typed_array(self, Type, length):
"""Read the next CSV cells as a typed array."""
return [self.read_typed(Type) for _ in range(length)]
def read_latency_result(self):
"""Read the next CSV cells as a LatencyResult."""
result = {}
result['iterations'] = self.read_typed(int)
result['total_time_sec'] = self.read_typed(float)
result['time_freq_start_sec'] = self.read_typed(float)
result['time_freq_step_sec'] = self.read_typed(float)
time_freq_sec_count = self.read_typed(int)
result['time_freq_sec'] = self.read_typed_array(float, time_freq_sec_count)
return LatencyResult(**result)
def read_compilation_result(self):
"""Read the next CSV cells as a CompilationResult."""
result = {}
for compilation_type in COMPILATION_TYPES:
has_results = self.read_typed(bool)
result[compilation_type] = self.read_latency_result() if has_results else None
result['cache_size_bytes'] = self.read_typed(int)
return CompilationResult(**result)
def read_benchmark_result(self):
"""Read the next CSV cells as a BenchmarkResult."""
result = {}
result['name'] = self.read_typed(str)
result['backend_type'] = self.read_typed(str)
result['inference_latency'] = self.read_latency_result()
result['max_single_error'] = self.read_typed(float)
result['testset_size'] = self.read_typed(int)
evaluator_keys_count = self.read_typed(int)
validation_error_count = self.read_typed(int)
result['evaluator_keys'] = self.read_typed_array(str, evaluator_keys_count)
result['evaluator_values'] = self.read_typed_array(float, evaluator_keys_count)
result['validation_errors'] = self.read_typed_array(str, validation_error_count)
result['compilation_results'] = self.read_compilation_result()
return BenchmarkResult(**result)
def parse_csv_input(input_filename):
"""Parse input CSV file, returns: (benchmarkInfo, list of BenchmarkResult)."""
with open(input_filename, 'r') as csvfile:
parser = BenchmarkResultParser(csvfile)
        # The first line contains the device info
benchmark_info = parser.next()
results = []
while parser.next():
results.append(parser.read_benchmark_result())
return (benchmark_info, results)
def group_results(results):
"""Group list of results by their name/backend, returns list of lists."""
# Group by name
groupings = collections.defaultdict(list)
for result in results:
groupings[result.name].append(result)
# Find baseline for each group, make ResultsWithBaseline for each name
groupings_baseline = {}
for name, results in groupings.items():
baseline = next(filter(lambda x: x.backend_type == BASELINE_BACKEND,
results))
other = sorted(filter(lambda x: x is not baseline, results),
key=lambda x: x.backend_type)
groupings_baseline[name] = ResultsWithBaseline(
baseline=baseline,
other=other)
# Merge ResultsWithBaseline for known groups
known_groupings_baseline = collections.defaultdict(list)
for name, results_with_bl in sorted(groupings_baseline.items()):
group_name = name
for known_group in KNOWN_GROUPS:
if known_group[0].match(results_with_bl.baseline.name):
group_name = known_group[1]
break
known_groupings_baseline[group_name].append(results_with_bl)
# Turn into a list sorted by name
groupings_list = []
for name, results_wbl in sorted(known_groupings_baseline.items()):
groupings_list.append((name, results_wbl))
return groupings_list
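The two-phase grouping above (bucket results by model name, split out the baseline backend, sort the rest) can be illustrated on toy data:

```python
import collections

# Minimal stand-in for BenchmarkResult with just the fields the
# grouping logic touches.
Result = collections.namedtuple('Result', ['name', 'backend_type'])
rows = [Result('m', 'NNAPI'), Result('m', 'TFLite_CPU'), Result('m', 'GPU')]

# Phase 1: bucket by name.
by_name = collections.defaultdict(list)
for r in rows:
    by_name[r.name].append(r)

# Phase 2: pull out the baseline backend, sort the others by backend name.
baseline = next(r for r in by_name['m'] if r.backend_type == 'TFLite_CPU')
other = sorted((r for r in by_name['m'] if r is not baseline),
               key=lambda r: r.backend_type)
print(baseline.backend_type, [r.backend_type for r in other])
```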
def get_frequency_graph_min_max(latencies):
"""Get min and max times of latencies frequency."""
mins = []
maxs = []
for latency in latencies:
mins.append(latency.time_freq_start_sec)
to_add = len(latency.time_freq_sec) * latency.time_freq_step_sec
maxs.append(latency.time_freq_start_sec + to_add)
return min(mins), max(maxs)
def get_frequency_graph(time_freq_start_sec, time_freq_step_sec, time_freq_sec,
start_sec, end_sec):
"""Generate input x/y data for latency frequency graph."""
left_to_pad = (int((time_freq_start_sec - start_sec) / time_freq_step_sec)
if time_freq_step_sec != 0
else math.inf)
end_time = time_freq_start_sec + len(time_freq_sec) * time_freq_step_sec
right_to_pad = (int((end_sec - end_time) / time_freq_step_sec)
if time_freq_step_sec != 0
else math.inf)
    # After padding more than 64 values, graphs start to look messy,
    # so bail out in that case.
if (left_to_pad + right_to_pad) < 64:
left_pad = (['{:.2f}ms'.format(
(start_sec + x * time_freq_step_sec) * 1000.0)
for x in range(left_to_pad)], [0] * left_to_pad)
right_pad = (['{:.2f}ms'.format(
(end_time + x * time_freq_step_sec) * 1000.0)
for x in range(right_to_pad)], [0] * right_to_pad)
else:
left_pad = [[], []]
right_pad = [[], []]
data = (['{:.2f}ms'.format(
(time_freq_start_sec + x * time_freq_step_sec) * 1000.0)
for x in range(len(time_freq_sec))], time_freq_sec)
return (left_pad[0] + data[0] + right_pad[0],
left_pad[1] + data[1] + right_pad[1])
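The zero-padding idea above, isolated into a standalone sketch: histograms covering different time windows are extended with empty buckets so all graphs in a group share one x-range (the function name and shape are illustrative):

```python
def pad_histogram(start_sec, step_sec, freq, range_start, range_end):
    """Extend a latency histogram with zero buckets on both sides so
    it spans [range_start, range_end]."""
    left = int(round((start_sec - range_start) / step_sec)) if step_sec else 0
    data_end = start_sec + len(freq) * step_sec
    right = int(round((range_end - data_end) / step_sec)) if step_sec else 0
    labels = ['{:.2f}ms'.format((range_start + i * step_sec) * 1000.0)
              for i in range(left + len(freq) + right)]
    return labels, [0] * left + list(freq) + [0] * right

# A 2-bucket histogram starting at 2.0s, padded out to cover [0.0s, 6.0s].
labels, values = pad_histogram(2.0, 1.0, [5, 7], 0.0, 6.0)
print(values)
```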
def is_topk_evaluator(evaluator_keys):
"""Are these evaluator keys from TopK evaluator?"""
return (len(evaluator_keys) == 5 and
evaluator_keys[0] == 'top_1' and
evaluator_keys[1] == 'top_2' and
evaluator_keys[2] == 'top_3' and
evaluator_keys[3] == 'top_4' and
evaluator_keys[4] == 'top_5')
def is_melceplogf0_evaluator(evaluator_keys):
"""Are these evaluator keys from MelCepLogF0 evaluator?"""
return (len(evaluator_keys) == 2 and
evaluator_keys[0] == 'max_mel_cep_distortion' and
evaluator_keys[1] == 'max_log_f0_error')
def is_phone_error_rate_evaluator(evaluator_keys):
"""Are these evaluator keys from PhoneErrorRate evaluator?"""
return (len(evaluator_keys) == 1 and
evaluator_keys[0] == 'max_phone_error_rate')
def generate_accuracy_headers(result):
"""Accuracy-related headers for result table."""
if is_topk_evaluator(result.evaluator_keys):
return ACCURACY_HEADERS_TOPK_TEMPLATE
elif is_melceplogf0_evaluator(result.evaluator_keys):
return ACCURACY_HEADERS_MELCEPLOGF0_TEMPLATE
elif is_phone_error_rate_evaluator(result.evaluator_keys):
return ACCURACY_HEADERS_PHONE_ERROR_RATE_TEMPLATE
else:
return ACCURACY_HEADERS_BASIC_TEMPLATE
def get_diff_span(value, same_delta, positive_is_better):
    """Classify a diff value as 'same', 'better', or 'worse'."""
if abs(value) < same_delta:
return 'same'
if positive_is_better and value > 0 or not positive_is_better and value < 0:
return 'better'
return 'worse'
def generate_accuracy_values(baseline, result):
"""Accuracy-related data for result table."""
if is_topk_evaluator(result.evaluator_keys):
val = [float(x) * 100.0 for x in result.evaluator_values]
if result is baseline:
topk = [TOPK_BASELINE_TEMPLATE.format(val=x) for x in val]
return ACCURACY_VALUES_TOPK_TEMPLATE.format(
top1=topk[0], top2=topk[1], top3=topk[2], top4=topk[3],
top5=topk[4]
)
else:
base = [float(x) * 100.0 for x in baseline.evaluator_values]
diff = [a - b for a, b in zip(val, base)]
topk = [TOPK_DIFF_TEMPLATE.format(
val=v, diff=d, span=get_diff_span(d, 1.0, positive_is_better=True))
for v, d in zip(val, diff)]
return ACCURACY_VALUES_TOPK_TEMPLATE.format(
top1=topk[0], top2=topk[1], top3=topk[2], top4=topk[3],
top5=topk[4]
)
elif is_melceplogf0_evaluator(result.evaluator_keys):
val = [float(x) for x in
result.evaluator_values + [result.max_single_error]]
if result is baseline:
return ACCURACY_VALUES_MELCEPLOGF0_TEMPLATE.format(
max_log_f0=MELCEPLOGF0_BASELINE_TEMPLATE.format(
val=val[0]),
max_mel_cep_distortion=MELCEPLOGF0_BASELINE_TEMPLATE.format(
val=val[1]),
max_single_error=MELCEPLOGF0_BASELINE_TEMPLATE.format(
val=val[2]),
)
else:
base = [float(x) for x in
baseline.evaluator_values + [baseline.max_single_error]]
diff = [a - b for a, b in zip(val, base)]
v = [MELCEPLOGF0_DIFF_TEMPLATE.format(
val=v, diff=d, span=get_diff_span(d, 1.0, positive_is_better=False))
for v, d in zip(val, diff)]
return ACCURACY_VALUES_MELCEPLOGF0_TEMPLATE.format(
max_log_f0=v[0],
max_mel_cep_distortion=v[1],
max_single_error=v[2],
)
elif is_phone_error_rate_evaluator(result.evaluator_keys):
val = [float(x) for x in
result.evaluator_values + [result.max_single_error]]
if result is baseline:
return ACCURACY_VALUES_PHONE_ERROR_RATE_TEMPLATE.format(
max_phone_error_rate=PHONE_ERROR_RATE_BASELINE_TEMPLATE.format(
val=val[0]),
max_single_error=PHONE_ERROR_RATE_BASELINE_TEMPLATE.format(
val=val[1]),
)
else:
base = [float(x) for x in
baseline.evaluator_values + [baseline.max_single_error]]
diff = [a - b for a, b in zip(val, base)]
v = [PHONE_ERROR_RATE_DIFF_TEMPLATE.format(
val=v, diff=d, span=get_diff_span(d, 1.0, positive_is_better=False))
for v, d in zip(val, diff)]
return ACCURACY_VALUES_PHONE_ERROR_RATE_TEMPLATE.format(
max_phone_error_rate=v[0],
max_single_error=v[1],
)
else:
return ACCURACY_VALUES_BASIC_TEMPLATE.format(
max_single_error=result.max_single_error,
)
def getchartjs_source():
    with open(os.path.join(os.path.dirname(os.path.abspath(__file__)),
                           CHART_JS_FILE)) as f:
        return f.read()
def generate_avg_ms(baseline, latency):
"""Generate average latency value."""
if latency is None:
latency = baseline
result_avg_ms = (latency.total_time_sec / latency.iterations)*1000.0
if latency is baseline:
return LATENCY_BASELINE_TEMPLATE.format(val=result_avg_ms)
baseline_avg_ms = (baseline.total_time_sec / baseline.iterations)*1000.0
diff = (result_avg_ms/baseline_avg_ms - 1.0) * 100.0
diff_val = result_avg_ms - baseline_avg_ms
return LATENCY_DIFF_TEMPLATE.format(
val=result_avg_ms,
diff=diff,
diff_val=diff_val,
span=get_diff_span(diff, same_delta=1.0, positive_is_better=False))
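The relative-latency arithmetic above, worked through in isolation: a backend averaging 12 ms against a 10 ms baseline reports a +20% relative diff and a +2.0 ms absolute diff (the numbers here are made up for illustration).

```python
result_avg_ms = 12.0
baseline_avg_ms = 10.0

diff_pct = (result_avg_ms / baseline_avg_ms - 1.0) * 100.0   # relative, in %
diff_val = result_avg_ms - baseline_avg_ms                   # absolute, in ms
print(diff_pct, diff_val)
```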
def generate_result_entry(baseline, result):
if result is None:
result = baseline
return RESULT_ENTRY_TEMPLATE.format(
row_class='failed' if result.validation_errors else 'normal',
name=result.name,
backend=result.backend_type,
iterations=result.inference_latency.iterations,
testset_size=result.testset_size,
accuracy_values=generate_accuracy_values(baseline, result),
avg_ms=generate_avg_ms(baseline.inference_latency, result.inference_latency))
def generate_latency_graph_entry(tag, latency, tmin, tmax):
"""Generate a single latency graph."""
return LATENCY_GRAPH_ENTRY_TEMPLATE.format(
tag=tag,
i=id(latency),
freq_data=get_frequency_graph(latency.time_freq_start_sec,
latency.time_freq_step_sec,
latency.time_freq_sec,
tmin, tmax))
def generate_latency_graphs_group(tags, latencies):
"""Generate a group of latency graphs with the same tmin and tmax."""
tmin, tmax = get_frequency_graph_min_max(latencies)
return ''.join(
generate_latency_graph_entry(tag, latency, tmin, tmax)
for tag, latency in zip(tags, latencies))
def snake_case_to_title(string):
return string.replace('_', ' ').title()
def generate_inference_latency_graph_entry(results_with_bl):
"""Generate a group of latency graphs for inference latencies."""
results = [results_with_bl.baseline] + results_with_bl.other
tags = [result.backend_type for result in results]
latencies = [result.inference_latency for result in results]
return generate_latency_graphs_group(tags, | |
'v2',
}),
})
content_type, content_text = self.handler.store_and_build_forward_message(
form, '================1234==', bucket_name='my-test-bucket')
self.mox.VerifyAll()
self.assertEqual(EXPECTED_GENERATED_CONTENT_TYPE_WITH_BUCKET, content_type)
self.assertMessageEqual(EXPECTED_GENERATED_MIME_MESSAGE_WITH_BUCKET,
content_text)
blobkey = ('encoded_gs_file:bXktdGVzdC1idWNrZXQvZmFrZS1leHBlY3RlZGtleQ==')
blobkey = blobstore_stub.BlobstoreServiceStub.ToDatastoreBlobKey(blobkey)
blob1 = datastore.Get(blobkey)
self.assertTrue('my-test-bucket' in blob1['filename'])
def test_store_and_build_forward_message_utf8_values(self):
"""Test store and build message method with UTF-8 values."""
self.generate_blob_key().AndReturn(blobstore.BlobKey('item1'))
self.now().AndReturn(datetime.datetime(2008, 11, 12, 10, 40))
self.mox.ReplayAll()
form = FakeForm({
'field1': FakeForm(name='field1',
file=io.StringIO('file1'),
type='text/plain',
type_options={'a': 'b', 'x': 'y'},
filename='chinese_char_name_\xe6\xb1\x89.txt',
headers={'h1': 'v1',
'h2': 'v2',
}),
})
content_type, content_text = self.handler.store_and_build_forward_message(
form, '================1234==')
self.mox.VerifyAll()
self.assertEqual(EXPECTED_GENERATED_UTF8_CONTENT_TYPE, content_type)
self.assertMessageEqual(EXPECTED_GENERATED_UTF8_MIME_MESSAGE,
content_text)
blob1 = blobstore.get('item1')
self.assertEqual('chinese_char_name_\u6c49.txt', blob1.filename)
def test_store_and_build_forward_message_latin1_values(self):
"""Test store and build message method with Latin-1 values."""
# There is a special exception class for this case. This is designed to
# emulate production, which currently fails silently. See b/6722082.
self.now().AndReturn(datetime.datetime(2008, 11, 12, 10, 40))
self.mox.ReplayAll()
form = FakeForm({
'field1': FakeForm(name='field1',
file=io.StringIO('file1'),
type='text/plain',
type_options={'a': 'b', 'x': 'y'},
filename='german_char_name_f\xfc\xdfe.txt',
headers={'h1': 'v1',
'h2': 'v2',
}),
})
self.assertRaises(blob_upload._InvalidMetadataError,
self.handler.store_and_build_forward_message, form,
'================1234==')
self.mox.VerifyAll()
blob1 = blobstore.get('item1')
self.assertIsNone(blob1)
def test_store_and_build_forward_message_no_headers(self):
"""Test default header generation when no headers are provided."""
self.generate_blob_key().AndReturn(blobstore.BlobKey('item1'))
self.now().AndReturn(datetime.datetime(2008, 11, 12, 10, 40, 0, 100))
self.mox.ReplayAll()
form = FakeForm({'field1': FakeForm(name='field1',
file=io.StringIO('file1'),
type=None,
type_options={},
filename='file1',
headers={}),
'field2': FakeForm(name='field2',
value='variable1',
type=None,
type_options={},
filename=None,
headers={}),
})
content_type, content_text = self.handler.store_and_build_forward_message(
form, '================1234==')
self.mox.VerifyAll()
self.assertEqual(EXPECTED_GENERATED_CONTENT_TYPE_NO_HEADERS, content_type)
self.assertMessageEqual(EXPECTED_GENERATED_MIME_MESSAGE_NO_HEADERS,
content_text)
def test_store_and_build_forward_message_zero_length_blob(self):
"""Test upload with a zero length blob."""
self.generate_blob_key().AndReturn(blobstore.BlobKey('item1'))
self.generate_blob_key().AndReturn(blobstore.BlobKey('item2'))
self.now().AndReturn(datetime.datetime(2008, 11, 12, 10, 40))
self.mox.ReplayAll()
form = FakeForm({
'field1': FakeForm(name='field1',
file=io.StringIO('file1'),
type='image/png',
type_options={'a': 'b', 'x': 'y'},
filename='stuff.png',
headers={'h1': 'v1',
'h2': 'v2',
}),
'field2': FakeForm(name='field2',
file=io.StringIO(''),
type='application/pdf',
type_options={},
filename='stuff.pdf',
headers={}),
})
content_type, content_text = self.handler.store_and_build_forward_message(
form, '================1234==')
self.mox.VerifyAll()
self.assertEqual(EXPECTED_GENERATED_CONTENT_TYPE_ZERO_LENGTH_BLOB,
content_type)
self.assertMessageEqual(EXPECTED_GENERATED_MIME_MESSAGE_ZERO_LENGTH_BLOB,
content_text)
blob1 = blobstore.get('item1')
self.assertEqual('stuff.png', blob1.filename)
blob2 = blobstore.get('item2')
self.assertEqual('stuff.pdf', blob2.filename)
def test_store_and_build_forward_message_no_filename(self):
"""Test upload with no filename in content disposition."""
self.generate_blob_key().AndReturn(blobstore.BlobKey('item1'))
self.now().AndReturn(datetime.datetime(2008, 11, 12, 10, 40))
self.mox.ReplayAll()
form = FakeForm({
'field1': FakeForm(name='field1',
file=io.StringIO('file1'),
type='image/png',
type_options={'a': 'b', 'x': 'y'},
filename='stuff.png',
headers={'h1': 'v1',
'h2': 'v2',
}),
'field2': FakeForm(name='field2',
file=io.StringIO(''),
type='application/pdf',
type_options={},
filename='',
headers={}),
})
content_type, content_text = self.handler.store_and_build_forward_message(
form, '================1234==')
self.mox.VerifyAll()
self.assertEqual(EXPECTED_GENERATED_CONTENT_TYPE_NO_FILENAME, content_type)
self.assertMessageEqual(EXPECTED_GENERATED_MIME_MESSAGE_NO_FILENAME,
content_text)
blob1 = blobstore.get('item1')
self.assertEqual('stuff.png', blob1.filename)
self.assertIsNone(blobstore.get('item2'))
def test_store_and_build_forward_message_bad_mimes(self):
"""Test upload with no headers provided."""
for unused_mime in range(len(BAD_MIMES)):
# Should not require actual time value upon failure.
self.now()
self.mox.ReplayAll()
for mime_type in BAD_MIMES:
form = FakeForm({'field1': FakeForm(name='field1',
file=io.StringIO('file1'),
type=mime_type,
type_options={},
filename='file',
headers={}),
})
self.assertRaisesRegex(
webob.exc.HTTPClientError,
'Incorrectly formatted MIME type: %s' % mime_type,
self.handler.store_and_build_forward_message,
form,
'================1234==')
self.mox.VerifyAll()
def test_store_and_build_forward_message_max_blob_size_exceeded(self):
"""Test upload with a blob larger than the maximum blob size."""
self.generate_blob_key().AndReturn(blobstore.BlobKey('item1'))
self.now().AndReturn(datetime.datetime(2008, 11, 12, 10, 40))
self.mox.ReplayAll()
form = FakeForm({
'field1': FakeForm(name='field1',
file=io.StringIO('a'),
type='image/png',
type_options={'a': 'b', 'x': 'y'},
filename='stuff.png',
headers={'h1': 'v1',
'h2': 'v2',
}),
'field2': FakeForm(name='field2',
file=io.StringIO('longerfile'),
type='application/pdf',
type_options={},
filename='stuff.pdf',
headers={}),
})
self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
self.handler.store_and_build_forward_message,
form, '================1234==', max_bytes_per_blob=2)
self.mox.VerifyAll()
blob1 = blobstore.get('item1')
self.assertIsNone(blob1)
def test_store_and_build_forward_message_total_size_exceeded(self):
"""Test upload with all blobs larger than the total allowed size."""
self.generate_blob_key().AndReturn(blobstore.BlobKey('item1'))
self.now().AndReturn(datetime.datetime(2008, 11, 12, 10, 40))
self.mox.ReplayAll()
form = FakeForm({
'field1': FakeForm(name='field1',
file=io.StringIO('a'),
type='image/png',
type_options={'a': 'b', 'x': 'y'},
filename='stuff.png',
headers={'h1': 'v1',
'h2': 'v2',
}),
'field2': FakeForm(name='field2',
file=io.StringIO('longerfile'),
type='application/pdf',
type_options={},
filename='stuff.pdf',
headers={}),
})
self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
self.handler.store_and_build_forward_message,
form, '================1234==', max_bytes_total=3)
self.mox.VerifyAll()
blob1 = blobstore.get('item1')
self.assertIsNone(blob1)
def test_store_blob_base64(self):
"""Test blob creation with a base-64-encoded body."""
expected_result = 'This is the blob content.'
self.execute_blob_test(
base64.urlsafe_b64encode(expected_result.encode()).decode(),
expected_result,
base64_encoding=True)
def test_filename_too_large(self):
"""Test that exception is raised if the filename is too large."""
self.now().AndReturn(datetime.datetime(2008, 11, 12, 10, 40))
self.mox.ReplayAll()
filename = 'a' * blob_upload._MAX_STRING_NAME_LENGTH + '.txt'
form = FakeForm({
'field1': FakeForm(name='field1',
file=io.StringIO('a'),
type='image/png',
type_options={'a': 'b', 'x': 'y'},
filename=filename,
headers={}),
})
self.assertRaisesRegex(
webob.exc.HTTPClientError,
'The filename exceeds the maximum allowed length of 500.',
self.handler.store_and_build_forward_message,
form, '================1234==')
self.mox.VerifyAll()
def test_content_type_too_large(self):
"""Test that exception is raised if the content-type is too large."""
self.now().AndReturn(datetime.datetime(2008, 11, 12, 10, 40))
self.mox.ReplayAll()
content_type = 'text/' + 'a' * blob_upload._MAX_STRING_NAME_LENGTH
form = FakeForm({
'field1': FakeForm(name='field1',
file=io.StringIO('a'),
type=content_type,
type_options={'a': 'b', 'x': 'y'},
filename='foobar.txt',
headers={}),
})
self.assertRaisesRegex(
webob.exc.HTTPClientError,
'The Content-Type exceeds the maximum allowed length of 500.',
self.handler.store_and_build_forward_message,
form, '================1234==')
self.mox.VerifyAll()
class UploadHandlerUnitTestNamespace(UploadHandlerUnitTest):
"""Executes all of the superclass tests but with a namespace set."""
def setUp(self):
"""Setup for namespaces test."""
super(UploadHandlerUnitTestNamespace, self).setUp()
# Set the namespace. Blobstore should ignore this.
namespace_manager.set_namespace('abc')
class UploadHandlerWSGITest(UploadTestBase):
"""Test the upload handler as a whole, by making WSGI requests."""
def setUp(self):
"""Set up test framework."""
# Set up environment for Blobstore.
self.original_environ = dict(os.environ)
os.environ.update({
'APPLICATION_ID': 'app',
'USER_EMAIL': '<EMAIL>',
'SERVER_NAME': 'localhost',
'SERVER_PORT': '8080',
})
self.environ = {}
wsgiref.util.setup_testing_defaults(self.environ)
self.environ['REQUEST_METHOD'] = 'POST'
# Set up user stub.
self.user_stub = user_service_stub.UserServiceStub()
self.tmpdir = tempfile.mkdtemp()
# Set up testing blobstore files.
storage_directory = os.path.join(self.tmpdir, 'blobstore')
self.blob_storage = file_blob_storage.FileBlobStorage(storage_directory,
'appid1')
self.blobstore_stub = blobstore_stub.BlobstoreServiceStub(self.blob_storage)
# Use a fresh file datastore stub.
self.datastore_file = os.path.join(self.tmpdir, 'datastore_v3')
self.history_file = os.path.join(self.tmpdir, 'history')
for filename in [self.datastore_file, self.history_file]:
if os.access(filename, os.F_OK):
os.remove(filename)
self.datastore_stub = datastore_file_stub.DatastoreFileStub(
'app', self.datastore_file, self.history_file, use_atexit=False)
self.apiproxy = apiproxy_stub_map.APIProxyStubMap()
apiproxy_stub_map.apiproxy = self.apiproxy
apiproxy_stub_map.apiproxy.RegisterStub('datastore_v3', self.datastore_stub)
apiproxy_stub_map.apiproxy.RegisterStub('blobstore', self.blobstore_stub)
apiproxy_stub_map.apiproxy.RegisterStub('user', self.user_stub)
# Keep values given to forward_app.
self.forward_request_dict = {}
def forward_app(environ, start_response):
self.forward_request_dict['environ'] = environ
self.forward_request_dict['body'] = environ['wsgi.input'].read()
# Return a dummy body
start_response('200 OK', [('Content-Type', 'text/plain')])
return ['Forwarded successfully.']
self.dispatcher = blob_upload.Application(forward_app)
def tearDown(self):
os.environ = self.original_environ
shutil.rmtree(self.tmpdir)
def run_dispatcher(self, request_body=''):
"""Runs self.dispatcher and returns the response.
self.environ should already be initialised with the WSGI environment,
including the HTTP_* headers.
Args:
request_body: String containing the body of the request.
Returns:
(status, headers, response_body, forward_environ, forward_body), where:
status is the response status string,
headers is a dict containing the response headers (with lowercase
names),
response_body is a string containing the response body,
forward_environ is the WSGI environ passed to the forwarded request, or
None if the forward application was not called,
forward_body is the request body passed to the forwarded request, or
None if the forward application was not called.
Raises:
AssertionError: start_response was not called.
Exception: The WSGI application returned an exception.
"""
response_dict = {}
state_dict = {
'start_response_already_called': False,
'headers_already_sent': False,
}
self.environ['wsgi.input'] = io.StringIO(request_body)
body = io.StringIO()
def write_body(text):
if not text:
return
assert state_dict['start_response_already_called']
body.write(text)
state_dict['headers_already_sent'] = True
def start_response(status, response_headers, exc_info=None):
if exc_info is None:
assert not state_dict['start_response_already_called']
elif state_dict['headers_already_sent']:
six.reraise(exc_info[0], exc_info[1], exc_info[2])
state_dict['start_response_already_called'] = True
response_dict['status'] = status
response_dict['headers'] = dict((k.lower(), v) for (k, v) in
response_headers)
return write_body
self.forward_request_dict['environ'] = None
self.forward_request_dict['body'] = None
for s in self.dispatcher(self.environ, start_response):
write_body(s)
if 'status' not in response_dict:
self.fail('start_response was not called')
return (response_dict['status'], response_dict['headers'], body.getvalue(),
self.forward_request_dict['environ'],
self.forward_request_dict['body'])
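The `run_dispatcher` docstring above describes the harness contract: call the WSGI app, record what `start_response` receives, and collect the response iterable. A minimal standalone sketch of the same pattern (names here are illustrative, not the test class's own):

```python
# Sketch: minimal WSGI invocation harness in the spirit of
# run_dispatcher above -- call the app, capture status and headers
# via start_response, and join the iterable into a response body.
import io

def invoke_wsgi(app, environ):
    """Return (status, headers_dict, body_bytes) for one request."""
    captured = {}
    buf = io.BytesIO()

    def start_response(status, response_headers, exc_info=None):
        captured['status'] = status
        captured['headers'] = {k.lower(): v for k, v in response_headers}
        return buf.write  # WSGI's legacy write() callable

    for chunk in app(environ, start_response):
        buf.write(chunk)
    assert 'status' in captured, 'start_response was not called'
    return captured['status'], captured['headers'], buf.getvalue()

# Example app for exercising the harness:
def hello_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Forwarded successfully.']
```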
def _run_test_success(self, upload_data, upload_url):
"""Basic dispatcher request flow."""
request_path = six.moves.urllib.parse.urlparse(upload_url)[2]
# Get session key from upload url.
session_key = upload_url.split('/')[-1]
self.environ['PATH_INFO'] = request_path
self.environ['CONTENT_TYPE'] = (
'multipart/form-data; boundary="================1234=="')
status, _, response_body, forward_environ, forward_body = (
self.run_dispatcher(upload_data))
self.assertEqual('200 OK', status)
self.assertEqual('Forwarded successfully.', response_body)
self.assertNotEqual(None, forward_environ)
# These must be native strings, not bytes.
self.assertIsInstance(forward_environ['PATH_INFO'], str)
if 'QUERY_STRING' in forward_environ:
self.assertIsInstance(forward_environ['QUERY_STRING'], str)
self.assertRegex(forward_environ['CONTENT_TYPE'],
r'multipart/form-data; boundary="[^"]+"')
self.assertEqual(len(forward_body), int(forward_environ['CONTENT_LENGTH']))
self.assertIn(constants.FAKE_IS_ADMIN_HEADER, forward_environ)
self.assertEqual('1', forward_environ[constants.FAKE_IS_ADMIN_HEADER])
new_request = email.message_from_string(
'Content-Type: %s\n\n%s' % (forward_environ['CONTENT_TYPE'],
forward_body))
(upload,) = new_request.get_payload()
self.assertEqual('message/external-body', upload.get_content_type())
message = email.message.Message()
message.add_header('Content-Type', upload['Content-Type'])
blob_key = message.get_param('blob-key')
blob_contents = blobstore.BlobReader(blob_key).read()
self.assertEqual('value', blob_contents)
self.assertRaises(datastore_errors.EntityNotFoundError,
datastore.Get,
session_key)
return upload, forward_environ, forward_body
def test_success(self):
"""Basic dispatcher request flow."""
# Create upload.
upload_data = (
"""--================1234==
Content-Type: text/plain
MIME-Version: 1.0
Content-Disposition: form-data; name="field1"; filename="stuff.txt"
value
--================1234==--""")
upload_url = blobstore.create_upload_url('/success?foo=bar')
upload, forward_environ, _ = self._run_test_success(
upload_data, upload_url)
self.assertEqual('/success', forward_environ['PATH_INFO'])
self.assertEqual('foo=bar', forward_environ['QUERY_STRING'])
self.assertEqual(
('form-data', {'filename': 'stuff.txt', 'name': 'field1'}),
cgi.parse_header(upload['content-disposition']))
def test_success_with_bucket(self):
"""Basic dispatcher request flow."""
# Create upload.
upload_data = (
"""--================1234==
Content-Type: text/plain
MIME-Version: 1.0
Content-Disposition: form-data; name="field1"; filename="stuff.txt"
value
--================1234==--""")
upload_url = blobstore.create_upload_url('/success?foo=bar',
gs_bucket_name='my_test_bucket')
upload, forward_environ, forward_body = self._run_test_success(
upload_data, upload_url)
self.assertEqual('/success', forward_environ['PATH_INFO'])
self.assertEqual('foo=bar', forward_environ['QUERY_STRING'])
self.assertEqual(
('form-data', {'filename': 'stuff.txt', 'name': 'field1'}),
cgi.parse_header(upload['content-disposition']))
self.assertIn('X-AppEngine-Cloud-Storage-Object: /gs/%s' % 'my_test_bucket',
forward_body)
def test_success_full_success_url(self):
"""Request flow with a success url containing protocol, host and port."""
# Create upload.
upload_data = (
"""--================1234==
Content-Type: text/plain
MIME-Version: 1.0
Content-Disposition: form-data; name="field1"; filename="stuff.txt"
value
--================1234==--""")
# The scheme, host and port should all be ignored.
upload_url = blobstore.create_upload_url(
'https://example.com:1234/success?foo=bar')
upload, forward_environ, _ = self._run_test_success(
upload_data, upload_url)
self.assertEqual('/success', forward_environ['PATH_INFO'])
self.assertEqual('foo=bar', forward_environ['QUERY_STRING'])
self.assertEqual(
('form-data', {'filename': 'stuff.txt', 'name': 'field1'}),
cgi.parse_header(upload['content-disposition']))
def test_base64(self):
"""Test automatic decoding of a base-64-encoded message."""
# Create upload.
upload_data = (
"""--================1234==
Content-Type: text/plain
MIME-Version: 1.0
Content-Disposition: form-data; name="field1"; filename="stuff.txt"
Content-Transfer-Encoding: base64
%s
--================1234==--""" % base64.urlsafe_b64encode('value'))
upload_url = blobstore.create_upload_url('/success')
upload, forward_environ, _ = self._run_test_success(
upload_data, upload_url)
self.assertEqual('/success', forward_environ['PATH_INFO'])
self.assertEqual(
('form-data', {'filename': 'stuff.txt', 'name': 'field1'}),
cgi.parse_header(upload['content-disposition']))
def
LOG.info('Device current rebuilding status is %s',
dev.rebuild_status)
if dev.rebuild_status in ['rebuilding', 'queued']:
msg = 'Device {0} rebuilding is in progress in '\
'{1}. Please try again later.'.format(
dev.name, self.cluster_name)
LOG.error(msg)
self.module.fail_json(msg=msg)
def dev_checks(device_name, chk_vol=None, chk_top_level=None,
chk_rebuild=None):
"""Validate device for different tasks"""
dev = self.get_device(device_name)
if isinstance(dev, str):
self.module.fail_json(msg=dev)
if chk_vol and dev.virtual_volume is not None:
msg = 'Device {0} is already used in {1} virtual '\
'volume in {2}'.format(
device_name, dev.virtual_volume, self.cluster_name)
LOG.info(msg)
self.module.fail_json(msg=msg)
if chk_top_level and not dev.top_level:
msg = 'Device {0} is already in use in {1}'.format(
device_name, self.cluster_name)
LOG.info(msg)
self.module.fail_json(msg=msg)
if chk_rebuild:
is_device_rebuilding(dev)
return dev
def verify_new_volume_name(name, field='new_virtual_volume_name'):
def exit_fail(msg):
LOG.error(msg)
self.module.fail_json(msg=msg)
# if name is valid
LOG.info('Validating %s', field)
status, msg = utils.validate_name(
name, 63, field)
if not status:
exit_fail(msg)
msg = "Virtual volume {0} with same name already exists" \
" in ".format(name)
# if name is already assigned to a distributed virtual volume
vol, dummy = self.get_distributed_virtual_volume(name)
if vol and vol.locality == 'distributed':
msg += 'distributed virtual volume'
exit_fail(msg)
# if name is already assigned to virtual volume in same cluster
vol, dummy = self.get_volume_by_name(name)
if vol:
msg += self.cluster_name
exit_fail(msg)
def get_volume_type(children):
"""Get volume type, if its mirrored or expanded"""
def get_dates():
dates = ["2000-01-01", "9999-01-01"]
start, end = [datetime.strptime(
dummy, "%Y-%m-%d") for dummy in dates]
dates = OrderedDict(((start + timedelta(
dummy)).strftime(r"%Y%b"), None)
for dummy in range((end - start).days)).keys()
return dates
# verify if volume is mirrored or expanded
dates = get_dates()
if len(children) == 0:
return None
expanded = False
for child in list(children.keys()):
if child.split(vol_dev_name)[-1][:7] in dates:
children.pop(child)
expanded = True
if not expanded:
return 'mirrored'
return 'expanded'
def rename(new_vol_name):
payload = [{
"op": "replace",
"path": "/name",
"value": new_vol_name}]
self.vol_obj = self.update_volume(payload)
state = self.module.params['state']
vol_name = self.module.params['virtual_volume_name']
vol_id = self.module.params['virtual_volume_id']
new_vol_name = self.module.params['new_virtual_volume_name']
support_dev_name = self.module.params['supporting_device_name']
thin_enabled = self.module.params['thin_enable']
chk_rebuild = self.module.params['wait_for_rebuild']
expand = self.module.params['expand']
remote_access = self.module.params['remote_access']
additional_devs = self.module.params['additional_devices']
cache_invalidate = self.module.params['cache_invalidate']
changed = False
vol_type = None
if vol_name:
self.vol_obj, err_msg = self.get_volume_by_name(vol_name)
if not self.vol_obj and vol_id:
self.vol_obj, err_msg = self.get_volume_by_id(vol_id)
if not any([vol_name, vol_id]):
err_msg = "Both volume name and volume id can not be None"
if err_msg:
LOG.error(err_msg)
# delete virtual volume
if state == 'absent':
if self.vol_obj:
LOG.info('Trying to delete virtual volume %s',
self.vol_obj.name)
msg = 'Could not delete the virtual volume {0} in {1}, ' \
'since '.format(self.vol_obj.name, self.cluster_name)
if self.vol_obj.consistency_group:
msg += 'virtual volume is a part of Consistency Group'
LOG.error(msg)
self.module.fail_json(msg=msg)
if self.vol_obj.service_status != 'unexported':
msg += 'virtual volume is not unexported'
LOG.error(msg)
self.module.fail_json(msg=msg)
changed = self.delete_volume()
else:
msg = 'Volume is not present to delete'
LOG.info(msg)
exit_module({}, changed)
# create virtual volume
if state == 'present' and support_dev_name and not self.vol_obj:
if new_vol_name:
msg = "Could not perform create and rename in a single " \
"task. Please specify each operation in individual task."
LOG.error(msg)
self.module.fail_json(msg=msg)
if vol_name:
LOG.info('Trying to create virtual volume from %s',
support_dev_name)
verify_new_volume_name(vol_name, 'virtual_volume_name')
dev = dev_checks(support_dev_name, chk_top_level=True,
chk_rebuild=chk_rebuild)
if dev.virtual_volume is None:
uri = '/vplex/v2/clusters/{0}/devices/{1}'.format(
self.cluster_name, support_dev_name)
payload = {
"thin": thin_enabled,
"device": uri
}
self.vol_obj = self.create_volume(payload)
changed = True
if vol_name != self.vol_obj.name:
rename(vol_name)
else:
vol_name = dev.virtual_volume.split('/')[-1]
msg = 'Device {0} is already attached to volume {1} ' \
'in {2}'.format(dev.name, vol_name,
self.cluster_name)
LOG.error(msg)
self.module.fail_json(msg=msg)
else:
msg = 'Supporting device and volume name must be given to ' \
'create virtual volume'
LOG.error(msg)
self.module.fail_json(msg=msg)
# Perform cache invalidate
version = utils.get_vplex_setup(self.client)
if '6.2' in version:
vplex_version = 6
else:
vplex_version = 7
if state == 'present' and self.vol_obj and cache_invalidate and \
self.vol_obj.service_status != 'unexported':
dir_status = self.director_status()
if dir_status is not None:
msg = ("For cache invalidate operation, directors "
"communication status must be 'ok'")
self.module.fail_json(msg=msg)
if vplex_version > 6:
msg = ("To perform cache invalidate the VPLEX version "
"should be 6.2 or lesser")
self.module.fail_json(msg=msg)
else:
self.vol_obj, msg = self.cache_invalidate(vol_name)
if self.vol_obj is None:
self.module.fail_json(msg=msg)
changed = True
# All remaining operations require state and vol_obj to be present;
# exit if vol_obj is not available at this stage
if not self.vol_obj:
volume = vol_name if vol_name else vol_id
msg = 'Could not get \'{0}\' volume details in {1}.'.format(
volume, self.cluster_name)
logmsg = msg + '\nAll the operations below require correct volume' \
' details:\n\tRename virtual volume' \
'\n\tEnable/Disable remote access' \
'\n\tExpand virtual volume'
LOG.error(logmsg)
self.module.fail_json(msg=msg)
# rename virtual volume
if new_vol_name:
LOG.info('Trying to rename volume from %s to %s',
self.vol_obj.name, new_vol_name)
if new_vol_name == self.vol_obj.name:
msg = 'New name is the same as the old name; '\
'no need to rename the volume.'
LOG.info(msg)
else:
verify_new_volume_name(new_vol_name)
rename(new_vol_name)
changed = True
LOG.info('Volume name updated to %s',
self.vol_obj.name)
vol_dev_name = self.vol_obj.supporting_device.split('/')[-1]
# enable/disable remote access
if remote_access:
LOG.info('Trying to %sing remote access ', remote_access[:-1])
# enable/disable remote access can be updated
# in rebuilding state as well
payload = None
if self.vol_obj.visibility == 'local' and \
remote_access == 'enable':
payload = [{
"op": "replace",
"path": "/visibility",
"value": "global"
}]
elif self.vol_obj.visibility == 'global' and \
remote_access == 'disable':
payload = [{
"op": "replace",
"path": "/visibility",
"value": "local"
}]
if payload:
if self.volume_exists_in_other_clusters(self.vol_obj.name):
msg = "Could not update remote access of virtual volume "\
"{0} in {1}, since virtual volume with same name "\
"exists in another clusters".format(
self.vol_obj.name, self.cluster_name)
LOG.error(msg)
self.module.fail_json(msg=msg)
if self.device_exists_in_other_clusters(vol_dev_name):
msg = "Could not update remote access of virtual volume "\
"{0} in {1}, since device with same name exists "\
"in another clusters".format(
vol_dev_name, self.cluster_name)
LOG.error(msg)
self.module.fail_json(msg=msg)
dev, changed = self.update_device(vol_dev_name, payload)
if not changed:
self.module.fail_json(msg=dev)
self.vol_obj, dummy = self.get_volume_by_name(
self.vol_obj.name)
# If a virtual_volume has a mirror device,
# we should not allow additional devices to be added to it.
dev_uri = '/vplex/v2/clusters/{0}/devices/{1}'.format(
self.cluster_name, vol_dev_name)
children = self.get_map(dev_uri).children
# create dict of dev_name and uri
children = {child.split(
'/')[-1]: child for child in children if '/extents/' not in child}
vol_type = get_volume_type(children)
# expand volume
if vplex_version > 6 and len(additional_devs) > 0:
msg = ("To perform expand with additional device(s) the VPLEX "
"version should be 6.2 or lesser")
self.module.fail_json(msg=msg)
if additional_devs and not expand:
msg = 'Could not expand virtual volume {0} in ' \
'{1}; the expand parameter must be set to true to ' \
'expand.'.format(self.vol_obj.name, self.cluster_name)
LOG.error(msg)
self.module.fail_json(msg=msg)
if len(additional_devs) > 0 and expand:
LOG.info('Trying to expand volume using additional devices')
if vol_type == 'mirrored':
LOG.info('Children: %s', children)
msg = 'Could not expand virtual volume {0} in ' \
'{1}, volume is mirrored already, can not be ' \
'expanded.'.format(self.vol_obj.name, self.cluster_name)
LOG.error(msg)
self.module.fail_json(msg=msg)
err_msg = 'Could not expand virtual volume {0} in ' \
'{1}, additional_devices must contain all the devices in an ' \
'ordered list.'.format(self.vol_obj.name, self.cluster_name)
err_msg += ' Current list: %s' % list(children.keys())
if len(children) <= len(additional_devs):
for child, new_child in zip(children, additional_devs):
if child != new_child:
LOG.error(err_msg)
self.module.fail_json(msg=err_msg)
if len(children) == len(additional_devs):
msg = 'All devices are already added'
LOG.info(msg)
LOG.debug(additional_devs)
exit_module(self.vol_obj, changed)
else:
additional_devs = additional_devs[len(children):]
# check if devices is used by another volume
for dev in additional_devs:
dev_checks(dev, chk_vol=True, chk_top_level=True)
capacity = self.vol_obj.capacity
LOG.info('Capacity: %s', capacity)
for dev in additional_devs:
dev_uri = '/vplex/v2/clusters/{0}/devices/{1}'.format(
self.cluster_name, dev)
payload = {
"skip_init": "False",
"spare_storage": dev_uri
}
LOG.debug('Expand Payload: %s', payload)
self.vol_obj = self.expand_volume(payload)
if capacity < self.vol_obj.capacity:
msg = 'Capacity increased from {0} to {1}.'.format(
capacity, self.vol_obj.capacity)
LOG.info(msg)
changed = True
else:
LOG.error(err_msg)
self.module.fail_json(msg=err_msg)
elif self.vol_obj.expandable_capacity > 0 and expand:
LOG.info('Trying to expand volume from backend array')
capacity = self.vol_obj.capacity
LOG.info('Capacity: %s', capacity)
payload = {"skip_init": "False"}
LOG.debug('Expand Payload: %s', payload)
self.vol_obj = self.expand_volume(payload)
if capacity < self.vol_obj.capacity:
msg = 'Capacity increased from {0} to {1}.'.format(
capacity, self.vol_obj.capacity)
LOG.info(msg)
changed = True
exit_module(self.vol_obj, changed)
def get_user_parameters():
"""This method provide the parameters required for the ansible
virtual volume module on VPLEX"""
return dict(
state=dict(type='str', required=True,
choices=['present', 'absent']),
cluster_name=dict(type='str', required=True),
virtual_volume_name=dict(type='str', required=False),
virtual_volume_id=dict(type='str', required=False),
new_virtual_volume_name=dict(type='str', required=False),
supporting_device_name=dict(type='str', required=False),
thin_enable=dict(type='bool', required=False, default=True,
choices=[True, False]),
wait_for_rebuild=dict(type='bool', required=False,
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
import boto3
import botocore
from botocore.exceptions import WaiterError
from botocore.waiter import WaiterModel, create_waiter_with_client
import logging
import os
import secrets
import time
import json
from cryptography.hazmat.primitives.asymmetric.ec import EllipticCurvePrivateKey
import pem
from cryptography import x509
from cryptography.x509.oid import AttributeOID, NameOID
from cryptography.hazmat import backends
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa, dsa, ec, ed448, ed25519
logger = logging.getLogger()
logger.setLevel(logging.INFO)
cryptography_backend = backends.default_backend()
# Waiter polling constants
DELAY = 1
MAX_ATTEMPTS = 6
ISSUE_NAME = "CertificateIssued"
RENEW_NAME = "CertificateRenewed"
waiter_config = {
"version": 2,
"waiters": {
"CertificateIssued": {
"operation": "DescribeCertificate",
"delay": DELAY,
"maxAttempts": MAX_ATTEMPTS,
"acceptors": [
{
"matcher": "path",
"expected": "ISSUED",
"argument": "Certificate.Status",
"state": "success"
},
{
"matcher": "path",
"expected": "PENDING_VALIDATION",
"argument": "Certificate.Status",
"state": "retry"
},
{
"matcher": "path",
"expected": "FAILED",
"argument": "Certificate.Status",
"state": "failure"
}
]
},
"CertificateRenewed": {
"operation": "DescribeCertificate",
"delay": DELAY,
"maxAttempts": MAX_ATTEMPTS,
"acceptors": [
{
"matcher": "path",
"expected": "INELIGIBLE",
"argument": "Certificate.RenewalEligibility",
"state": "success"
},
{
"matcher": "path",
"expected": "PENDING_AUTO_RENEWAL",
"argument": "Certificate.RenewalSummary.RenewalStatus",
"state": "retry"
},
{
"matcher": "path",
"expected": "ELIGIBLE",
"argument": "Certificate.RenewalEligibility",
"state": "retry"
},
{
"matcher": "path",
"expected": "PENDING_VALIDATION",
"argument": "Certificate.RenewalSummary.RenewalStatus",
"state": "retry"
},
{
"matcher": "path",
"expected": "FAILED",
"argument": "Certificate.RenewalSummary.RenewalStatus",
"state": "failure"
}
]
}
}
}
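The waiter configuration above drives botocore's generic waiter machinery: each acceptor is a JMESPath "path" matcher checked against the `DescribeCertificate` response until one yields a terminal state. A minimal sketch of that matching logic (plain dict traversal stands in for JMESPath; the acceptor dicts have the same shape as in `waiter_config`):

```python
# Sketch: how a botocore-style waiter evaluates "path" acceptors
# against a DescribeCertificate response. Plain attribute traversal
# stands in for JMESPath; real waiters also handle error/status
# matchers, which this illustration omits.
def evaluate_acceptors(acceptors, response):
    """Return 'success', 'failure', or 'retry' for one polling round."""
    for acceptor in acceptors:
        value = response
        for key in acceptor['argument'].split('.'):
            value = value.get(key, {}) if isinstance(value, dict) else None
        if value == acceptor['expected']:
            return acceptor['state']
    return 'retry'  # no acceptor matched; poll again after the delay
```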
# Main Function
def lambda_handler(event, context):
"""Secrets Manager Rotation Template
This is a template for creating an AWS Secrets Manager rotation lambda
Args:
event (dict): Lambda dictionary of event parameters. These keys must include the following:
- SecretId: The secret ARN or identifier
- ClientRequestToken: The ClientRequestToken of the secret version
- Step: The rotation step (one of createSecret, setSecret, testSecret, or finishSecret)
context (LambdaContext): The Lambda runtime information
Raises:
ResourceNotFoundException: If the secret with the specified arn and stage does not exist
ValueError: If the secret is not properly configured for rotation
KeyError: If the event parameters do not contain the expected keys
"""
arn = event['SecretId']
token = event['ClientRequestToken']
step = event['Step']
# Setup the client
service_client = boto3.client('secretsmanager')
# Make sure the version is staged correctly
metadata = service_client.describe_secret(SecretId=arn)
if not metadata['RotationEnabled']:
logger.error("Secret %s is not enabled for rotation" % arn)
raise ValueError("Secret %s is not enabled for rotation" % arn)
versions = metadata['VersionIdsToStages']
if token not in versions:
logger.error("Secret version %s has no stage for rotation of secret %s." % (token, arn))
raise ValueError("Secret version %s has no stage for rotation of secret %s." % (token, arn))
if "AWSCURRENT" in versions[token]:
logger.info("Secret version %s already set as AWSCURRENT for secret %s." % (token, arn))
return
elif "AWSPENDING" not in versions[token]:
logger.error("Secret version %s not set as AWSPENDING for rotation of secret %s." % (token, arn))
raise ValueError("Secret version %s not set as AWSPENDING for rotation of secret %s." % (token, arn))
if step == "createSecret":
create_secret(service_client, arn, token)
    elif step == "setSecret":  # don't need this
set_secret(service_client, arn, token)
    elif step == "testSecret":  # don't need this
test_secret(service_client, arn, token)
elif step == "finishSecret":
finish_secret(service_client, arn, token)
else:
raise ValueError("Invalid step parameter")
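The if/elif dispatch above can equivalently be written as a lookup table; a minimal, AWS-free sketch (the four helpers are stubbed here so the routing can be exercised without Secrets Manager):

```python
# Minimal sketch of the step dispatch in lambda_handler, with the four
# rotation helpers stubbed out (the real ones take a secretsmanager client).
def create_secret(*a): return "create"
def set_secret(*a): return "set"
def test_secret(*a): return "test"
def finish_secret(*a): return "finish"

STEP_HANDLERS = {
    "createSecret": create_secret,
    "setSecret": set_secret,
    "testSecret": test_secret,
    "finishSecret": finish_secret,
}

def dispatch(step, service_client=None, arn=None, token=None):
    try:
        handler = STEP_HANDLERS[step]
    except KeyError:
        raise ValueError("Invalid step parameter")
    return handler(service_client, arn, token)
```

A table keeps the valid step names in one place, which also makes them easy to validate up front.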
############################################################################################################
####################################### HELPER FUNCTIONS ###################################################
############################################################################################################
def create_secret(service_client, arn, token):
"""Create the secret
This method first checks for the existence of a secret for the passed in token. If one does not exist, it will generate a
new secret and put it with the passed in token.
Args:
service_client (client): The secrets manager service client
arn (string): The secret ARN or other identifier
token (string): The ClientRequestToken associated with the secret version
Raises:
ResourceNotFoundException: If the secret with the specified arn and stage does not exist
"""
# Make sure the current secret exists
current_dict = get_secret_dict(service_client, arn, 'AWSCURRENT')
    # Clients (the region is parsed from the CA ARN: arn:aws:acm-pca:<region>:...)
    ca_region = current_dict["CA_ARN"].split(":")[3]
    acm_client = boto3.client('acm', region_name=ca_region)
    acm_pca_client = boto3.client('acm-pca', region_name=ca_region)
waiter_model = WaiterModel(waiter_config)
issue_waiter = create_waiter_with_client(ISSUE_NAME, waiter_model, acm_client)
renew_waiter = create_waiter_with_client(RENEW_NAME, waiter_model, acm_client)
# Now try to get the secret version, if that fails, put a new secret
try:
get_secret_dict(service_client, arn, 'AWSPENDING', token)
logger.info("createSecret: Successfully retrieved secret for %s." % arn)
except service_client.exceptions.ResourceNotFoundException:
if current_dict['CERTIFICATE_TYPE'] == 'ACM_ISSUED':
current_dict = generate_acm_managed(current_dict, acm_client, renew_waiter, issue_waiter)
else:
key = ""
if 'CERTIFICATE_ARN' in current_dict: # renew certificate
key = serialization.load_pem_private_key(current_dict['PRIVATE_KEY_PEM'].encode(), password=<PASSWORD>, backend=cryptography_backend)
else: # need to create new certificate
# keypair object
key = generate_private_key(
current_dict["KEY_ALGORITHM"],
"" if "KEY_SIZE" not in current_dict else current_dict["KEY_SIZE"],
"" if "EC_CURVE" not in current_dict else current_dict["EC_CURVE"])
try:
## issue PCA certificate
current_dict = generate_customer_managed(current_dict, acm_pca_client, key)
                except Exception as e:
                    logger.error("createSecret: Unable to create secret with error: %s" % e)
                    raise
# Put the secret
service_client.put_secret_value(SecretId=arn, ClientRequestToken=token, SecretString=json.dumps(current_dict), VersionStages=['AWSPENDING'])
logger.info("createSecret: Successfully put secret for ARN %s and version %s." % (arn, token))
def set_secret(service_client, arn, token):
"""Set the secret
This method should set the AWSPENDING secret in the service that the secret belongs to. For example, if the secret is a database
credential, this method should take the value of the AWSPENDING secret and set the user's password to this value in the database.
Args:
service_client (client): The secrets manager service client
arn (string): The secret ARN or other identifier
token (string): The ClientRequestToken associated with the secret version
"""
# This is where the secret should be set in the service
# raise NotImplementedError
# can implement if not concerned about application interruption
return
def test_secret(service_client, arn, token):
"""Test the secret
This method should validate that the AWSPENDING secret works in the service that the secret belongs to. For example, if the secret
is a database credential, this method should validate that the user can login with the password in AWSPENDING and that the user has
all of the expected permissions against the database.
Args:
service_client (client): The secrets manager service client
arn (string): The secret ARN or other identifier
token (string): The ClientRequestToken associated with the secret version
"""
# This is where the secret should be tested against the service
# raise NotImplementedError
# can implement if not concerned about application interruption
return
def finish_secret(service_client, arn, token):
"""Finish the secret
This method finalizes the rotation process by marking the secret version passed in as the AWSCURRENT secret.
Args:
service_client (client): The secrets manager service client
arn (string): The secret ARN or other identifier
token (string): The ClientRequestToken associated with the secret version
Raises:
ResourceNotFoundException: If the secret with the specified arn does not exist
"""
# First describe the secret to get the current version
metadata = service_client.describe_secret(SecretId=arn)
current_version = None
for version in metadata["VersionIdsToStages"]:
if "AWSCURRENT" in metadata["VersionIdsToStages"][version]:
if version == token:
# The correct version is already marked as current, return
logger.info("finishSecret: Version %s already marked as AWSCURRENT for %s" % (version, arn))
return
current_version = version
break
# Finalize by staging the secret version current
service_client.update_secret_version_stage(SecretId=arn, VersionStage="AWSCURRENT", MoveToVersionId=token, RemoveFromVersionId=current_version)
logger.info("finishSecret: Successfully set AWSCURRENT stage to version %s for secret %s." % (token, arn))
def get_secret_dict(service_client, arn, stage, token=None):
"""Gets the secret dictionary corresponding for the secret arn, stage, and token
This helper function gets credentials for the arn and stage passed in and returns the dictionary by parsing the JSON string
Args:
service_client (client): The secrets manager service client
arn (string): The secret ARN or other identifier
token (string): The ClientRequestToken associated with the secret version, or None if no validation is desired
stage (string): The stage identifying the secret version
Returns:
SecretDictionary: Secret dictionary
Raises:
ResourceNotFoundException: If the secret with the specified arn and stage does not exist
ValueError: If the secret is not valid JSON
"""
required_fields = []
# Only do VersionId validation against the stage if a token is passed in
if token:
secret = service_client.get_secret_value(SecretId=arn, VersionId=token, VersionStage=stage)
else:
secret = service_client.get_secret_value(SecretId=arn, VersionStage=stage)
plaintext = secret['SecretString']
secret_dict = json.loads(plaintext)
if 'CERTIFICATE_TYPE' not in secret_dict: # check that we got a certificate type
raise KeyError("Certificate Type (CERTIFICATE_TYPE) must be set to generate the proper certificate")
if secret_dict['CERTIFICATE_TYPE'] == 'ACM_ISSUED':
required_fields = ["CA_ARN", "COMMON_NAME", "ENVIRONMENT"]
else:
required_fields = ["CA_ARN", "COMMON_NAME", "TEMPLATE_ARN", "KEY_ALGORITHM", "KEY_SIZE", "SIGNING_ALGORITHM", "SIGNING_HASH"]
for field in required_fields:
if field not in secret_dict:
raise KeyError("%s key is missing from secret JSON" % field)
    # Parse and return the secret JSON
    return secret_dict
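The field checks in `get_secret_dict` can be exercised standalone; a hedged sketch with an example secret payload (all values hypothetical, field lists mirroring `required_fields` above):

```python
import json

# Sketch of the CERTIFICATE_TYPE-dependent validation in get_secret_dict.
def validate_secret(plaintext):
    secret_dict = json.loads(plaintext)
    if 'CERTIFICATE_TYPE' not in secret_dict:
        raise KeyError("Certificate Type (CERTIFICATE_TYPE) must be set")
    if secret_dict['CERTIFICATE_TYPE'] == 'ACM_ISSUED':
        required = ["CA_ARN", "COMMON_NAME", "ENVIRONMENT"]
    else:
        required = ["CA_ARN", "COMMON_NAME", "TEMPLATE_ARN", "KEY_ALGORITHM",
                    "KEY_SIZE", "SIGNING_ALGORITHM", "SIGNING_HASH"]
    for field in required:
        if field not in secret_dict:
            raise KeyError("%s key is missing from secret JSON" % field)
    return secret_dict

# Hypothetical ACM_ISSUED payload
payload = json.dumps({
    "CERTIFICATE_TYPE": "ACM_ISSUED",
    "CA_ARN": "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/example",
    "COMMON_NAME": "example.internal",
    "ENVIRONMENT": "dev",
})
```

Validating the payload shape before rotation starts surfaces misconfigured secrets early, instead of mid-rotation.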
etc.
5 4.02e+03 22.57 |
5 4.02e+03 22.57 | 22.57 0.5 28 0 | 0.05 0.81 1.37 0.33
5 1.93e+05 264.67 |
5 1.93e+05 264.67 | 264.67 2.3 111 1 | 0.15 0.10 12.14 0.09
5 3.04e+05 419.28 |
5 3.04e+05 419.28 | 419.28 1.3 142 0 | 0.16 0.12 19.81 0.11
5 3.91e+05 679.15 |
5 3.91e+05 679.15 | 679.15 3.2 246 2 | 0.18 0.16 28.57 0.14
5 4.68e+05 2296.08 |
5 4.68e+05 2296.08 | 2296.08 23.0 1000 0 | 0.14 0.18 33.53 0.15
5 5.38e+05 2296.08 | 1975.77 0.0 1000 0 | 0.13 0.19 35.84 0.14
5 6.00e+05 2513.52 |
5 6.00e+05 2513.52 | 2513.52 2.8 1000 0 | 0.17 0.18 39.30 0.13
5 6.51e+05 2627.30 |
5 6.51e+05 2627.30 | 2627.30 0.6 1000 0 | 0.16 0.17 40.20 0.12
5 7.00e+05 2627.30 | 2565.19 0.0 1000 0 | 0.16 0.15 40.18 0.11
5 7.44e+05 2820.52 |
5 7.44e+05 2820.52 | 2820.52 1.8 1000 0 | 0.17 0.15 40.12 0.09
5 7.88e+05 2820.52 | 2597.41 0.0 1000 0 | 0.17 0.13 39.35 0.08
5 8.28e+05 2820.52 | 2795.26 0.0 1000 0 | 0.18 0.11 38.10 0.07
5 8.69e+05 2936.57 |
5 8.69e+05 2936.57 | 2936.57 2.1 1000 0 | 0.18 0.11 37.18 0.07
5 9.05e+05 3017.49 |
5 9.05e+05 3017.49 | 3017.49 1.2 1000 0 | 0.19 0.10 36.89 0.06
5 9.41e+05 3017.49 | 2869.90 0.0 1000 0 | 0.19 0.09 35.58 0.06
5 9.80e+05 3079.65 |
5 9.80e+05 3079.65 | 3079.65 2.1 1000 0 | 0.19 0.09 35.00 0.06
5 1.01e+06 3111.62 |
5 1.01e+06 3111.62 | 3111.62 1.2 1000 0 | 0.20 0.09 35.51 0.06
5 1.04e+06 3111.62 | 3102.43 0.0 1000 0 | 0.19 0.09 35.28 0.06
5 1.08e+06 3195.77 |
5 1.08e+06 3195.77 | 3195.77 0.8 1000 0 | 0.20 0.08 35.27 0.06
5 1.10e+06 3204.82 |
5 1.10e+06 3204.82 | 3204.82 1.3 1000 0 | 0.20 0.08 35.29 0.05
5 1.13e+06 3227.59 |
5 1.13e+06 3227.59 | 3227.59 0.3 1000 0 | 0.20 0.08 34.81 0.05
5 1.17e+06 3236.78 |
5 1.17e+06 3236.78 | 3236.78 1.1 1000 0 | 0.20 0.07 34.90 0.05
5 1.19e+06 3251.62 |
5 1.19e+06 3251.62 | 3251.62 0.6 1000 0 | 0.20 0.07 34.88 0.05
5 1.22e+06 3310.67 |
5 1.22e+06 3310.67 | 3310.67 1.2 1000 0 | 0.21 0.07 34.24 0.05
5 1.25e+06 3310.67 | 3303.20 0.0 1000 0 | 0.21 0.07 34.15 0.05
5 1.28e+06 3312.09 |
5 1.28e+06 3312.09 | 3312.09 1.3 1000 0 | 0.20 0.07 33.99 0.05
5 1.31e+06 3361.26 |
5 1.31e+06 3361.26 | 3361.26 1.3 1000 0 | 0.20 0.07 34.49 0.05
5 1.34e+06 3361.26 | 3264.82 0.0 1000 0 | 0.21 0.07 34.20 0.05
5 1.36e+06 3361.26 | 3218.51 0.0 1000 0 | 0.22 0.07 34.42 0.05
5 1.39e+06 3361.26 | 3308.47 0.0 1000 0 | 0.22 0.06 33.95 0.05
5 1.41e+06 3361.26 | 3347.62 0.0 1000 0 | 0.22 0.06 34.16 0.05
5 1.44e+06 3361.26 | 3285.74 0.0 1000 0 | 0.21 0.07 34.00 0.05
5 1.46e+06 3361.26 | 3280.85 0.0 1000 0 | 0.22 0.06 33.90 0.04
5 1.49e+06 3361.26 | 3311.28 0.0 1000 0 | 0.21 0.06 34.01 0.04
5 1.51e+06 3407.79 |
5 1.51e+06 3407.79 | 3407.79 1.7 1000 0 | 0.22 0.06 33.94 0.05
5 1.54e+06 3407.79 | 3381.80 0.0 1000 0 | 0.20 0.06 34.44 0.05
5 1.56e+06 3407.79 | 3356.46 0.0 1000 0 | 0.22 0.06 34.72 0.05
5 1.59e+06 3407.79 | 3395.07 0.0 1000 0 | 0.20 0.06 34.86 0.05
5 1.61e+06 3407.79 | 3313.27 0.0 1000 0 | 0.22 0.06 34.64 0.05
5 1.63e+06 3407.79 | 3401.76 0.0 1000 0 | 0.22 0.06 34.94 0.05
5 1.66e+06 3489.30 |
5 1.66e+06 3489.30 | 3489.30 4.1 1000 0 | 0.23 0.06 34.42 0.05
5 1.68e+06 3489.30 | 3393.20 0.0 1000 0 | 0.23 0.06 34.14 0.05
5 1.70e+06 3489.30 | 3458.37 0.0 1000 0 | 0.22 0.05 34.41 0.05
5 1.73e+06 3489.30 | 3352.92 0.0 1000 0 | 0.22 0.06 34.62 0.05
5 1.75e+06 3489.30 | 3333.69 0.0 1000 0 | 0.22 0.06 35.26 0.05
5 1.77e+06 3489.30 | 3456.53 0.0 1000 0 | 0.21 0.05 35.47 0.05
| UsedTime: 5997 | SavedDir: ./Hopper-v2_ReliableSAC_5
| Learner: Save in ./Hopper-v2_ReliableSAC_5
| Arguments Remove cwd: ./Hopper-v2_ReliableSACHterm_6
################################################################################
ID Step maxR | avgR stdR avgS stdS | expR objC etc.
6 4.04e+03 78.95 |
6 4.04e+03 78.95 | 78.95 3.6 108 4 | 0.09 0.76 0.85 0.33
6 1.15e+05 198.10 |
6 1.15e+05 198.10 | 198.10 1.9 90 1 | 0.28 0.63 29.79 0.23
6 1.72e+05 889.93 |
6 1.72e+05 889.93 | 889.93 4.0 314 1 | 0.28 0.82 45.62 0.17
6 2.17e+05 889.93 | 274.09 0.0 110 0 | 0.31 0.94 58.49 0.21
6 2.55e+05 889.93 | 491.19 0.0 167 0 | 0.34 1.29 58.00 0.28
6 2.91e+05 889.93 | 261.90 0.0 119 0 | 0.30 2.26 58.54 0.49
6 3.25e+05 1031.75 |
6 3.25e+05 1031.75 | 1031.75 0.5 1000 0 | 0.15 1.82 85.02 0.71
6 3.54e+05 1031.75 | 1014.32 0.0 1000 0 | 0.13 1.47 101.81 0.50
6 3.81e+05 1034.31 |
6 3.81e+05 1034.31 | 1034.31 0.6 1000 0 | 0.14 1.17 95.27 0.42
6 4.06e+05 1034.31 | 122.12 0.0 151 0 | 0.14 0.91 81.74 0.36
6 4.29e+05 1034.31 | 277.10 0.0 221 0 | 0.14 0.78 73.37 0.34
6 4.51e+05 1034.31 | 565.33 0.0 384 0 | 0.16 0.75 68.32 0.31
6 4.71e+05 1378.52 |
6 4.71e+05 1378.52 | 1378.52 0.1 1000 0 | 0.22 0.60 66.21 0.26
6 4.89e+05 1378.52 | 448.34 0.0 315 0 | 0.21 0.49 60.09 0.20
6 5.06e+05 1378.52 | 419.95 0.0 168 0 | 0.29 0.44 53.30 0.17
6 5.24e+05 1378.52 | 1104.66 0.0 420 0 | 0.32 0.35 47.74 0.17
6 5.41e+05 1649.15 |
6 5.41e+05 1649.15 | 1649.15 879.9 704 300 | 0.29 0.31 45.29 0.15
6 5.60e+05 2190.47 |
6 5.60e+05 2190.47 | 2190.47 4.3 1000 0 | 0.25 0.33 44.60 0.14
6 5.77e+05 2190.47 | 2120.56 0.0 1000 0 | 0.33 0.32 42.70 0.12
6 5.95e+05 2686.57 |
6 5.95e+05 2686.57 | 2686.57 5.8 1000 0 | 0.38 0.27 41.22 0.11
6 6.08e+05 2686.57 | 956.68 0.0 292 0 | 0.39 0.25 40.32 0.11
6 6.21e+05 2686.57 | 273.05 0.0 132 0 | 0.34 0.23 39.83 0.11
6 6.34e+05 2686.57 | 308.15 0.0 137 0 | 0.30 0.24 40.72 0.11
6 6.47e+05 2686.57 | 458.55 0.0 200 0 | 0.29 0.27 40.67 0.11
6 6.60e+05 2686.57 | 760.84 0.0 286 0 | 0.34 0.27 41.64 0.11
6 6.75e+05 2686.57 | 1253.21 0.0 412 0 | 0.35 0.23 41.98 0.11
6 6.88e+05 2686.57 | 544.23 0.0 205 0 | 0.33 0.24 41.93 0.11
6 7.01e+05 2686.57 | 2637.08 0.0 1000 0 | 0.37 0.23 42.04 0.11
6 7.16e+05 2686.57 | 1063.86 0.0 369 0 | 0.35 0.23 42.23 0.12
6 7.27e+05 3082.43 |
6 7.27e+05 3082.43 | 3082.43 94.6 967 30 | 0.38 0.23 44.18 0.12
6 7.38e+05 3082.43 | 2714.58 0.0 1000 0 | 0.36 0.23 43.15 0.11
6 7.50e+05 3082.43 | 1576.83 0.0 530 0 | 0.36 0.22 42.79 0.11
6 7.62e+05 3082.43 | 2904.31 0.0 1000 0 | 0.37 0.22 42.24 0.12
6 7.74e+05 3082.43 | 2907.68 0.0 1000 0 | 0.39 0.22 43.42 0.12
6 7.87e+05 3082.43 | 3047.61 0.0 1000 0 | 0.38 0.23 43.15 0.12
6 8.01e+05 3082.43 | 2884.98 0.0 1000 0 | 0.39 0.23 44.60 0.12
6 8.15e+05 3082.43 | 3010.91 0.0 1000 0 | 0.40 0.23 45.58 0.12
6 8.27e+05 3082.43 | 2904.95 0.0 1000 0 | 0.36 0.23 43.33 0.13
6 8.39e+05 3082.43 | 2962.38 0.0 1000 0 | 0.38 0.20 45.61 0.13
6 8.53e+05 3082.43 | 2986.82 0.0 1000 0 | 0.37 0.22 46.55 0.13
6 8.65e+05 3082.43 | 3012.73 0.0 1000 0 | 0.39 0.22 46.56 0.12
6 8.77e+05 3082.43 | 3006.50 0.0 1000 0 | 0.38 0.20 45.55 0.12
6 8.89e+05 3082.43 | 2963.94 0.0 1000 0 | 0.39 0.20 46.11 0.12
6 9.00e+05 3082.43 | 2965.68 0.0 1000 0 | 0.38 0.20 45.67 0.12
6 9.10e+05 3082.43 | 2978.31 0.0 1000 0 | 0.39 0.19 47.14 0.12
6 9.20e+05 3082.43 | 3036.60 0.0 1000 0 | 0.39 0.18 46.73 0.12
6 9.32e+05 3082.43 | 3030.23 0.0 1000 0 | 0.37 0.20 46.14 0.12
6 9.41e+05 3082.43 | 2977.58 0.0 1000 0 | 0.38 0.19 46.32 0.12
6 9.53e+05 3082.43 | 2947.05 0.0 1000 0 | 0.39 0.19 45.68 0.12
6 9.62e+05 3082.43 | 3048.31 0.0 1000 0 | 0.38 0.18 46.31 0.12
6 9.71e+05 3082.43 | 3021.77 0.0 1000 0 | |
0)
TESTS:
Tests for :trac:`10888`::
sage: K.<th> = NumberField(x^2+3)
sage: E = EllipticCurve(K,[7,0])
sage: phi = E.isogeny(E(0,0))
sage: P = E(-3,4*th)
sage: phi(P)
(-16/3 : 8/9*th : 1)
sage: Q = phi(P)
sage: phihat = phi.dual()
sage: phihat(Q)
(-1/48 : 127/576*th : 1)
Call a composed isogeny (added for :trac:`16238`)::
sage: E = EllipticCurve(j=GF(7)(0))
sage: phi = E.isogeny([E(0), E((0,1)), E((0,-1))])
sage: phi(E.points()[0])
(0 : 1 : 0)
sage: phi2 = phi * phi
sage: phi2(E.points()[0])
(0 : 1 : 0)
Coercion works fine with :meth:`_call_` (added for :trac:`16238`)::
sage: K.<th> = NumberField(x^2+3)
sage: E = EllipticCurve(K,[7,0])
sage: E2=EllipticCurve(K,[5,0])
sage: phi=E.isogeny(E(0))
sage: phi(E2(0))
(0 : 1 : 0)
sage: E2(20,90)
(20 : 90 : 1)
sage: phi(E2(20,90))
Traceback (most recent call last):
...
TypeError: (20 : 90 : 1) fails to convert into the map's domain Elliptic Curve defined by y^2 = x^3 + 7*x over Number Field in th with defining polynomial x^2 + 3, but a `pushforward` method is not properly implemented
"""
if P.is_zero():
return self.__E2(0)
(xP, yP) = P.xy()
# if there is a pre isomorphism, apply it
if self.__pre_isomorphism is not None:
temp_xP = self.__prei_x_coord_ratl_map(xP)
temp_yP = self.__prei_y_coord_ratl_map(xP, yP)
(xP, yP) = (temp_xP, temp_yP)
if "velu" == self.__algorithm:
outP = self.__compute_via_velu_numeric(xP, yP)
elif "kohel" == self.__algorithm:
outP = self.__compute_via_kohel_numeric(xP,yP)
# the intermediate functions return the point at infinity
# if the input point is in the kernel
if (outP == self.__intermediate_codomain(0)):
return self.__E2(0)
# if there is a post isomorphism, apply it
if (self.__post_isomorphism is not None):
tempX = self.__posti_x_coord_ratl_map(outP[0])
tempY = self.__posti_y_coord_ratl_map(outP[0], outP[1])
outP = (tempX, tempY)
return self.__E2(outP)
def __getitem__(self, i):
r"""
Return one of the rational map components.
.. NOTE::
Both components are returned as elements of the function
field `F(x,y)` in two variables over the base field `F`,
though the first only involves `x`. To obtain the
`x`-coordinate function as a rational function in `F(x)`,
use :meth:`x_rational_map`.
EXAMPLES::
sage: E = EllipticCurve(QQ, [0,2,0,1,-1])
sage: phi = EllipticCurveIsogeny(E, [1])
sage: phi[0]
x
sage: phi[1]
y
sage: E = EllipticCurve(GF(17), [0,0,0,3,0])
sage: phi = EllipticCurveIsogeny(E, E((0,0)))
sage: phi[0]
(x^2 + 3)/x
sage: phi[1]
(x^2*y - 3*y)/x^2
"""
return self.rational_maps()[i]
def __iter__(self):
r"""
Return an iterator through the rational map components.
EXAMPLES::
sage: E = EllipticCurve(QQ, [0,2,0,1,-1])
sage: phi = EllipticCurveIsogeny(E, [1])
sage: for c in phi: print(c)
x
y
sage: E = EllipticCurve(GF(17), [0,0,0,3,0])
sage: phi = EllipticCurveIsogeny(E, E((0,0)))
sage: for c in phi: print(c)
(x^2 + 3)/x
(x^2*y - 3*y)/x^2
"""
return iter(self.rational_maps())
def __hash__(self):
r"""
        Function that implements hashing for isogeny objects.
This hashes the underlying kernel polynomial so that equal
isogeny objects have the same hash value. Also, this hashes
the base field, and domain and codomain curves as well, so
that isogenies with the same kernel polynomial (over different
base fields / curves) hash to different values.
EXAMPLES::
sage: E = EllipticCurve(QQ, [0,0,0,1,0])
sage: phi_v = EllipticCurveIsogeny(E, E((0,0)))
sage: phi_k = EllipticCurveIsogeny(E, [0,1])
sage: phi_k.__hash__() == phi_v.__hash__()
True
sage: E_F17 = EllipticCurve(GF(17), [0,0,0,1,1])
sage: phi_p = EllipticCurveIsogeny(E_F17, E_F17([0,1]))
sage: phi_p.__hash__() == phi_v.__hash__()
False
sage: E = EllipticCurve('49a3')
sage: R.<X> = QQ[]
sage: EllipticCurveIsogeny(E,X^3-13*X^2-58*X+503,check=False)
Isogeny of degree 7 from Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 107*x + 552 over Rational Field to Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 5252*x - 178837 over Rational Field
"""
if self.__this_hash is not None:
return self.__this_hash
ker_poly_list = self.__kernel_polynomial_list
if ker_poly_list is None:
ker_poly_list = self.__init_kernel_polynomial()
this_hash = 0
for a in ker_poly_list:
this_hash ^= hash(a)
this_hash ^= hash(self.__E1)
this_hash ^= hash(self.__E2)
this_hash ^= hash(self.__base_field)
self.__this_hash = this_hash
return self.__this_hash
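The combination scheme described in the docstring above — XOR the hashes of the kernel-polynomial coefficients with those of the domain, codomain, and base field — can be sketched in plain Python (integers stand in for the Sage objects):

```python
# Sketch of the XOR hash combination used by __hash__ above. Because XOR
# is commutative and associative, the order of the coefficients does not
# matter, while changing any curve/field hash changes the result.
def combine_hashes(coefficients, *parents):
    h = 0
    for a in coefficients:
        h ^= hash(a)
    for p in parents:
        h ^= hash(p)
    return h
```

This is why equal isogenies over the same curves hash equally, while the same kernel polynomial over different fields hashes differently.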
def _richcmp_(self, other, op):
r"""
Compare :class:`EllipticCurveIsogeny` objects.
ALGORITHM:
This method compares domains, codomains, and :meth:`rational_maps`.
EXAMPLES::
sage: E = EllipticCurve(QQ, [0,0,0,1,0])
sage: phi_v = EllipticCurveIsogeny(E, E((0,0)))
sage: phi_k = EllipticCurveIsogeny(E, [0,1])
sage: phi_k == phi_v
True
sage: E_F17 = EllipticCurve(GF(17), [0,0,0,1,0])
sage: phi_p = EllipticCurveIsogeny(E_F17, [0,1])
sage: phi_p == phi_v
False
sage: E = EllipticCurve('11a1')
sage: phi = E.isogeny(E(5,5))
sage: phi == phi
True
sage: phi == -phi
False
sage: psi = E.isogeny(phi.kernel_polynomial())
sage: phi == psi
True
sage: phi.dual() == psi.dual()
True
"""
# We cannot just compare kernel polynomials, as was done until
# Trac #11327, as then phi and -phi compare equal, and
# similarly with phi and any composition of phi with an
# automorphism of its codomain, or any post-isomorphism.
# Comparing domains, codomains and rational maps seems much
# safer.
lx = self.domain()
rx = other.domain()
if lx != rx:
return richcmp_not_equal(lx, rx, op)
lx = self.codomain()
rx = other.codomain()
if lx != rx:
return richcmp_not_equal(lx, rx, op)
return richcmp(self.rational_maps(), other.rational_maps(), op)
def __neg__(self):
r"""
Function to implement unary negation (-) operator on
isogenies. Return a copy of this isogeny that has been
negated.
EXAMPLES:
The following examples inherently exercise this function::
sage: E = EllipticCurve(j=GF(17)(0))
sage: phi = EllipticCurveIsogeny(E, E((-1,0)) )
sage: negphi = -phi
sage: phi(E((0,1))) + negphi(E((0,1))) == 0
True
sage: E = EllipticCurve(j=GF(19)(1728))
sage: R.<x> = GF(19)[]
sage: phi = EllipticCurveIsogeny(E, x)
sage: negphi = -phi
sage: phi(E((3,7))) + negphi(E((3,12))) == phi(2*E((3,7)))
True
sage: negphi(E((18,6)))
(17 : 0 : 1)
sage: R.<x> = QQ[]
sage: E = EllipticCurve('17a1')
sage: R.<x> = QQ[]
sage: f = x - 11/4
sage: phi = EllipticCurveIsogeny(E, f)
sage: negphi = -phi
sage: phi.rational_maps()[0] == negphi.rational_maps()[0]
True
sage: P = E((7,13))
sage: phi(P) + negphi(P) == 0
True
"""
output = copy(self)
E2 = output.__E2
iso = WeierstrassIsomorphism(E2, (-1,0,-E2.a1(),-E2.a3()))
output._set_post_isomorphism(iso)
return output
#
# Sage Special Functions
#
def _repr_(self):
r"""
        Special Sage-specific function that implements the functionality
        to display the isogeny ``self`` as a string.
EXAMPLES::
sage: E = EllipticCurve(GF(31), [1,0,1,1,0])
sage: phi = EllipticCurveIsogeny(E, E((0,0)) )
sage: phi._repr_()
'Isogeny of degree 17 from Elliptic Curve defined by y^2 + x*y + y = x^3 + x over Finite Field of size 31 to Elliptic Curve defined by y^2 + x*y + y = x^3 + 14*x + 9 over Finite Field of size 31'
sage: E = EllipticCurve(QQ, [1,0,0,1,9])
sage: phi = EllipticCurveIsogeny(E, [2,1])
sage: phi._repr_()
'Isogeny of degree 2 from Elliptic Curve defined by y^2 + x*y = x^3 + x + 9 over Rational Field to Elliptic Curve defined by y^2 + x*y = x^3 - 59*x + 165 over Rational Field'
"""
return 'Isogeny of degree %r from %r to %r' % (
self.__degree, self.__E1, self.__E2)
def _latex_(self):
r"""
        Special Sage-specific function that implements the functionality
        to display an isogeny object as a LaTeX string.
This function returns a latex string representing the isogeny
self as the `x` and `y` coordinate rational functions.
EXAMPLES::
sage: E = EllipticCurve(QQ, [0,0,0,1,-1])
sage: phi = EllipticCurveIsogeny(E, E(0))
sage: phi._latex_()
'\\left( x , y \\right)'
sage: E = EllipticCurve(GF(17), [0,0,0,1,-1])
sage: R.<X> = GF(17)[]
sage: phi = EllipticCurveIsogeny(E, X+11)
sage: phi._latex_()
'\\left( \\frac{x^{2} + 11 x + 7}{x + 11} , \\frac{x^{2} y + 5 x y + 12 y}{x^{2} + 5 x + 2} \\right)'
"""
ratl_maps = self.rational_maps()
return '\\left( %s , %s \\right)' % (ratl_maps[0]._latex_(), ratl_maps[1]._latex_())
###########################
# Private Common Functions
###########################
# delete the hash value
def __clear_cached_values(self):
r"""
A private function to clear the hash if the codomain has been
modified by a pre or post isomorphism.
EXAMPLES::
sage: F = GF(7)
sage: E = EllipticCurve(j=F(0))
sage: phi = EllipticCurveIsogeny(E, [E((0,-1)), E((0,1))])
sage: old_hash = hash(phi)
sage: from sage.schemes.elliptic_curves.weierstrass_morphism import WeierstrassIsomorphism
sage: phi.set_post_isomorphism(WeierstrassIsomorphism(phi.codomain(), (-1,2,-3,4)))
...
sage: hash(phi) == old_hash
False
sage: R.<x> = QQ[]
sage: E = EllipticCurve(QQ, [0,0,0,1,0])
sage: phi = EllipticCurveIsogeny(E, x)
sage: old_ratl_maps = phi.rational_maps()
sage: from sage.schemes.elliptic_curves.weierstrass_morphism import WeierstrassIsomorphism
sage: phi.set_post_isomorphism(WeierstrassIsomorphism(phi.codomain(), (-1,0,0,0)))
...
sage: old_ratl_maps == phi.rational_maps()
False
sage: old_ratl_maps[1] == -phi.rational_maps()[1]
True
sage: F = GF(127); R.<x> = F[]
sage: E = EllipticCurve(j=F(1728))
| |
= linear_regression(tvec.flatten()[indx], stbarTime.flatten()[indx], w.flatten()[indx], intercept_origin=True)
else:
print('not enough points to estimate dv/v for dtw')
m0=0;em0=0
return m0*100,em0*100,dist
def mwcs_dvv(ref, cur, moving_window_length, slide_step, para, smoothing_half_win=5):
"""
Moving Window Cross Spectrum method to measure dv/v (relying on phi=2*pi*f*t in freq domain)
PARAMETERS:
----------------
ref: Reference waveform (np.ndarray, size N)
cur: Current waveform (np.ndarray, size N)
moving_window_length: moving window length to calculate cross-spectrum (np.float, in sec)
slide_step: steps in time to shift the moving window (np.float, in seconds)
para: a dict containing parameters about input data window and frequency info, including
delta->The sampling rate of the input timeseries (in Hz)
window-> The target window for measuring dt/t
freq-> The frequency bound to compute the dephasing (in Hz)
tmin: The leftmost time lag (used to compute the "time lags array")
smoothing_half_win: If different from 0, defines the half length of the smoothing hanning window.
RETURNS:
------------------
time_axis: the central times of the windows.
delta_t: dt
delta_err:error
delta_mcoh: mean coherence
Copied from MSNoise (https://github.com/ROBelgium/MSNoise/tree/master/msnoise)
Modified by <NAME>
"""
# common variables
t = para['t']
twin = para['twin']
freq = para['freq']
dt = para['dt']
tmin = np.min(twin)
tmax = np.max(twin)
fmin = np.min(freq)
fmax = np.max(freq)
tvect = np.arange(tmin,tmax,dt)
# parameter initialize
delta_t = []
delta_err = []
delta_mcoh = []
time_axis = []
# info on the moving window
    window_length_samples = int(moving_window_length / dt)
padd = int(2 ** (nextpow2(window_length_samples) + 2))
count = 0
tp = cosine_taper(window_length_samples, 0.15)
minind = 0
maxind = window_length_samples
# loop through all sub-windows
while maxind <= len(ref):
cci = cur[minind:maxind]
cci = scipy.signal.detrend(cci, type='linear')
cci *= tp
cri = ref[minind:maxind]
cri = scipy.signal.detrend(cri, type='linear')
cri *= tp
minind += int(slide_step/dt)
maxind += int(slide_step/dt)
# do fft
fcur = scipy.fftpack.fft(cci, n=padd)[:padd // 2]
fref = scipy.fftpack.fft(cri, n=padd)[:padd // 2]
fcur2 = np.real(fcur) ** 2 + np.imag(fcur) ** 2
fref2 = np.real(fref) ** 2 + np.imag(fref) ** 2
# get cross-spectrum & do filtering
X = fref * (fcur.conj())
if smoothing_half_win != 0:
dcur = np.sqrt(smooth(fcur2, window='hanning',half_win=smoothing_half_win))
dref = np.sqrt(smooth(fref2, window='hanning',half_win=smoothing_half_win))
X = smooth(X, window='hanning',half_win=smoothing_half_win)
else:
dcur = np.sqrt(fcur2)
dref = np.sqrt(fref2)
dcs = np.abs(X)
# Find the values the frequency range of interest
freq_vec = scipy.fftpack.fftfreq(len(X) * 2, dt)[:padd // 2]
index_range = np.argwhere(np.logical_and(freq_vec >= fmin,freq_vec <= fmax))
# Get Coherence and its mean value
coh = getCoherence(dcs, dref, dcur)
mcoh = np.mean(coh[index_range])
# Get Weights
w = 1.0 / (1.0 / (coh[index_range] ** 2) - 1.0)
w[coh[index_range] >= 0.99] = 1.0 / (1.0 / 0.9801 - 1.0)
w = np.sqrt(w * np.sqrt(dcs[index_range]))
w = np.real(w)
# Frequency array:
v = np.real(freq_vec[index_range]) * 2 * np.pi
# Phase:
phi = np.angle(X)
phi[0] = 0.
phi = np.unwrap(phi)
phi = phi[index_range]
# Calculate the slope with a weighted least square linear regression
# forced through the origin; weights for the WLS must be the variance !
m, em = linear_regression(v.flatten(), phi.flatten(), w.flatten())
delta_t.append(m)
# print phi.shape, v.shape, w.shape
e = np.sum((phi - m * v) ** 2) / (np.size(v) - 1)
s2x2 = np.sum(v ** 2 * w ** 2)
sx2 = np.sum(w * v ** 2)
e = np.sqrt(e * s2x2 / sx2 ** 2)
delta_err.append(e)
delta_mcoh.append(np.real(mcoh))
time_axis.append(tmin+moving_window_length/2.+count*slide_step)
count += 1
del fcur, fref
del X
del freq_vec
del index_range
del w, v, e, s2x2, sx2, m, em
if maxind > len(cur) + int(slide_step/dt):
print("The last window was too small, but was computed")
# ensure all matrix are np array
delta_t = np.array(delta_t)
delta_err = np.array(delta_err)
delta_mcoh = np.array(delta_mcoh)
time_axis = np.array(time_axis)
# ready for linear regression
    delta_mincoh = 0.65
    delta_maxerr = 0.1
    delta_maxdt = 0.1
    indx1 = np.where(delta_mcoh > delta_mincoh)
indx2 = np.where(delta_err<delta_maxerr)
indx3 = np.where(delta_t<delta_maxdt)
#-----find good dt measurements-----
indx = np.intersect1d(indx1,indx2)
indx = np.intersect1d(indx,indx3)
if len(indx) >2:
#----estimate weight for regression----
w = 1/delta_err[indx]
w[~np.isfinite(w)] = 1.0
#---------do linear regression-----------
#m, a, em, ea = linear_regression(time_axis[indx], delta_t[indx], w, intercept_origin=False)
m0, em0 = linear_regression(time_axis[indx], delta_t[indx], w, intercept_origin=True)
else:
print('not enough points to estimate dv/v for mwcs')
m0=0;em0=0
return -m0*100,em0*100
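The final dt/t regression above is a weighted least-squares fit forced through the origin, with dv/v = -slope. A numpy sketch on synthetic delays (values hypothetical; this mirrors the slope that `linear_regression(..., intercept_origin=True)` returns, not the library implementation itself):

```python
import numpy as np

# Weighted least squares through the origin: slope = sum(w*t*dt) / sum(w*t^2).
def wls_through_origin(t, dt, w):
    return np.sum(w * t * dt) / np.sum(w * t * t)

t = np.array([1.0, 2.0, 3.0, 4.0])   # window centres (s)
dt = -0.002 * t                       # synthetic: a 0.2% velocity increase
w = np.ones_like(t)
dvv_percent = -wls_through_origin(t, dt, w) * 100  # -> 0.2
```

Forcing the intercept through the origin encodes the physical constraint that zero lag implies zero delay.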
def wxs_dvv(ref,cur,allfreq,para,dj=1/12, s0=-1, J=-1, sig=False, wvn='morlet',unwrapflag=False):
"""
    Compute dt or dv/v in the time and frequency domain from the wavelet cross-spectrum (WXS)
    for all frequencies in a range of interest.
Parameters
--------------
ref: The "Reference" timeseries (numpy.ndarray)
cur: The "Current" timeseries (numpy.ndarray)
    allfreq: a boolean variable controlling whether measurements are made over the whole frequency range or not
para: a dict containing freq/time info of the data matrix
dj, s0, J, sig, wvn: common parameters used in 'wavelet.wct'
unwrapflag: True - unwrap phase delays. Default is False
RETURNS:
------------------
dvv*100 : estimated dv/v in %
err*100 : error of dv/v estimation in %
Originally written by <NAME> (1 March, 2019)
Modified by <NAME> (30 June, 2019) based on (Mao et al. 2019).
    Updated by <NAME> (10 Oct, 2019) to merge the functionality for measurements across all frequencies and a single frequency range
"""
# common variables
t = para['t']
twin = para['twin']
freq = para['freq']
dt = para['dt']
tmin = np.min(twin)
tmax = np.max(twin)
fmin = np.min(freq)
fmax = np.max(freq)
    itvec = np.arange(int((tmin - t.min()) / dt) + 1, int((tmax - t.min()) / dt) + 1)
tvec = t[itvec]
npts = len(tvec)
# perform cross coherent analysis, modified from function 'wavelet.cwt'
WCT, aWCT, coi, freq, sig = wct_modified(ref, cur, dt, dj=dj, s0=s0, J=J, sig=sig, wavelet=wvn, normalize=True)
if unwrapflag:
phase = np.unwrap(aWCT,axis=-1) # axis=0, upwrap along time; axis=-1, unwrap along frequency
else:
phase = aWCT
# zero out data outside frequency band
if (fmax> np.max(freq)) | (fmax <= fmin):
raise ValueError('Abort: input frequency out of limits!')
else:
freq_indin = np.where((freq >= fmin) & (freq <= fmax))[0]
# follow MWCS to do two steps of linear regression
if not allfreq:
delta_t_m, delta_t_unc = np.zeros(npts,dtype=np.float32),np.zeros(npts,dtype=np.float32)
# assume the tvec is the time window to measure dt
for it in range(npts):
w = 1/WCT[freq_indin,itvec[it]]
w[~np.isfinite(w)] = 1.
delta_t_m[it],delta_t_unc[it] = linear_regression(freq[freq_indin]*2*np.pi, phase[freq_indin,itvec[it]], w)
# new weights for regression
wWCT = WCT[:,itvec]
w2 = 1/np.mean(wWCT[freq_indin,],axis=0)
w2[~np.isfinite(w2)] = 1.
# now use dt and t to get dv/v
if len(w2)>2:
            if not np.any(delta_t_m):
                dvv, err = np.nan, np.nan
            else:
                m, em = linear_regression(tvec, delta_t_m, w2, intercept_origin=True)
                dvv, err = -m, em
else:
print('not enough points to estimate dv/v for wts')
dvv, err=np.nan, np.nan
return dvv*100,err*100
# convert phase directly to delta_t for all frequencies
else:
# convert phase delay to time delay
delta_t = phase / (2*np.pi*freq[:,None]) # normalize phase by (2*pi*frequency)
dvv, err = np.zeros(freq_indin.shape), np.zeros(freq_indin.shape)
# loop through freq for linear regression
for ii, ifreq in enumerate(freq_indin):
if len(tvec)>2:
if not np.any(delta_t[ifreq]):
continue
# how to better approach the uncertainty of delta_t
w = 1/WCT[ifreq, itvec]
w[~np.isfinite(w)] = 1.0
#m, a, em, ea = linear_regression(time_axis[indx], delta_t[indx], w, intercept_origin=False)
m, em = linear_regression(tvec, delta_t[ifreq, itvec], w, intercept_origin=True)
dvv[ii], err[ii] = -m, em
else:
print('not enough points to estimate dv/v for wts')
dvv[ii], err[ii]=np.nan, np.nan
return freq[freq_indin], dvv*100, err*100
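The final step above fits delta_t against lag time through the origin and reports dv/v = -slope. A minimal standalone sketch of that weighted fit, assuming a hypothetical helper `wls_through_origin` (this is not the `linear_regression` routine the module imports):

```python
import numpy as np

def wls_through_origin(t, dt, w):
    """Weighted least squares for dt = m * t (no intercept).

    Returns the slope m and a formal standard error; dv/v is then -m,
    since a homogeneous velocity change delays a lag-t arrival by
    dt = -(dv/v) * t.
    """
    t, dt, w = (np.asarray(a, dtype=float) for a in (t, dt, w))
    m = np.sum(w * t * dt) / np.sum(w * t * t)
    resid = dt - m * t
    dof = max(len(t) - 1, 1)
    var_m = np.sum(w * resid ** 2) / (dof * np.sum(w * t * t))
    return m, np.sqrt(var_m)

# synthetic check: a +1% velocity change gives dt = -0.01 * t
t = np.linspace(1.0, 10.0, 20)
m, em = wls_through_origin(t, -0.01 * t, np.ones_like(t))
dvv_percent = -m * 100
```

With noise-free input the recovered dv/v is 1% and the error term vanishes.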
def wts_dvv(ref,cur,allfreq,para,dv_range,nbtrial,dj=1/12,s0=-1,J=-1,wvn='morlet',normalize=True):
"""
Apply stretching method to continuous wavelet transformation (CWT) of signals
for all frequencies in a range of interest
Parameters
--------------
ref: The complete "Reference" time series (numpy.ndarray)
cur: The complete "Current" time series (numpy.ndarray)
allfreq: a boolean variable controlling whether to make measurements over the whole frequency range or not
para: a dict containing freq/time info of the data matrix
dv_range: absolute bound for the velocity variation; example: dv=0.03 for [-3,3]% of relative velocity change (float)
nbtrial: number of stretching coefficients between dvmin and dvmax, no need to be higher than 100 (int)
dj, s0, J, sig, wvn: common parameters used in 'wavelet.wct'
normalize: normalize the wavelet spectrum or not. Default is True
RETURNS:
------------------
dvv: estimated dv/v
err: error of dv/v estimation
Written by <NAME> (30 Jun, 2019)
"""
# common variables
t = para['t']
twin = para['twin']
freq = para['freq']
dt = para['dt']
tmin = np.min(twin)
tmax = np.max(twin)
fmin = np.min(freq)
fmax = np.max(freq)
from abc import abstractmethod
import time, os, math, random
import knowledgebases, benchutils
import pandas as pd
import numpy as np
import matplotlib as matplots
import matplotlib.pyplot as plt
from matplotlib_venn import venn2
from matplotlib_venn import venn3
from matplotlib import colors as mcolors
class AttributeRemover():
"""Prepares the input data set for subsequent classification by removing lowly-ranked features and only keeping the top k features.
Creates one "reduced" file for every ranking and from one to k (so if k is 50, we will end up with 50 files having one and up to 50 features.
:param dataDir: absolute path to the directory that contains the input data set whose features to reduce.
:type dataDir: str
:param rankingsDir: absolute path to the directory that contains the rankings.
:type rankingsDir: str
:param topK: maximum numbers of features to select.
:type topK: str
:param outputDir: absolute path to the directory where the reduced files will be stored.
:type outputDir: str
"""
def __init__(self, dataDir, rankingsDir, topK, outputDir):
self.dataDir = dataDir
self.rankingsDir = rankingsDir
self.topK = int(topK)
self.outputDir = outputDir
super().__init__()
def loadTopKRankings(self):
"""Loads all available rankings from files.
:return: Dictionary with selection methods as keys and a ranked list of the (column) names of the top k features.
:rtype: dict
"""
#dictionary: topK genes per method
rankings = {}
for file in os.listdir(self.rankingsDir):
if os.path.isfile(os.path.join(self.rankingsDir, file)):
selectionMethod = file.split(".")[0] #get method name from filename without ending
ranking = benchutils.loadRanking(self.rankingsDir + file)
# geneRankings has the format attributeName, score
# we need the feature name column
featureNameCol = ranking.columns[0]
# feature names
featureNames = ranking[featureNameCol]
# take feature names of top k features
#if topK is larger than the actual size (=number of features), the whole list is returned without
#throwing an error
rankings[selectionMethod] = featureNames[:self.topK]
return rankings
def removeAttributesFromDataset(self, method, ranking, dataset):
"""Creates reduced data sets from dataset for the given method's ranking that only contain the top x features.
Creates multiple reduced data sets from topKmin to topKmax specified in the config.
:param method: selection method applied for the ranking.
:type method: str
:param ranking: (ranked) list of feature names from the top k features.
:type ranking: :class:`List` of str
:param dataset: original input data set
:type dataset: :class:`pandas.DataFrame`
"""
# create new subdirectory
methodDir = self.outputDir + method + "/"
benchutils.createDirectory(methodDir)
topKmin = int(benchutils.getConfigValue("Evaluation", "topKmin"))
for i in range(topKmin, len(ranking)+1):
# write reduced data set to files
featuresNames = ranking[:i]
featuresToUse = list(set(featuresNames) & set(dataset.columns))
colnames = list(dataset.columns[:2])
colnames.extend(featuresToUse)
reducedSet = dataset[colnames]
# reduce i by one because ranking also contains the id column, which is not a gene
outputFilename = methodDir + "top" + str(i) + "features_" + method + ".csv"
reducedSet.to_csv(outputFilename, index=False, sep="\t")
def removeUnusedAttributes(self):
"""For every method and its corresponding ranking, create reduced files with only the top x features.
"""
#read in gene rankings and original data set
geneRankings = self.loadTopKRankings()
methods = list(geneRankings.keys())
datafiles = os.listdir(self.dataDir)
#sort input files by name length: longer files (which had a feature mapping applied) come first, the original file with only the prefix comes last
datafiles.sort(key = len, reverse = True)
for dataset in datafiles:
data = pd.read_csv(self.dataDir + "/" + dataset)
#match the method in the dataset name to the corresponding ranking
#check if any of the available methods is a substring of the filename
method_matches = [method for method in methods if method in dataset]
if len(method_matches) > 0:
#if we have a match, this file must contain a new feature space
#we assume there is only one match possible, otherwise there would be two input data sets for the same method
method = method_matches[0]
#remove method from list of methods as we cannot apply this ranking to the original data set
methods.remove(method)
#do the attribute removal
ranking = geneRankings[method]
self.removeAttributesFromDataset(method, ranking, data)
else:
#if not matchable, we have the original dataset
#use the remaining rankings on the dataset
#ASSUMPTION: we only have one file with the original prefix
for method in methods:
ranking = geneRankings[method]
self.removeAttributesFromDataset(method, ranking, data)
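The column subsetting done by `removeAttributesFromDataset` can be shown on a toy frame (all names below are invented for illustration). Note that the `list(set(...) & set(...))` intersection used above does not preserve rank order; a comprehension over the ranking does:

```python
import pandas as pd

# toy input: two leading metadata columns, then feature columns
data = pd.DataFrame({
    "sample": ["s1", "s2"],
    "label": ["tumor", "normal"],
    "geneA": [1.0, 2.0],
    "geneB": [3.0, 4.0],
    "geneC": [5.0, 6.0],
})
ranking = ["geneC", "geneA", "geneB"]  # ranked best-first
top_k = 2

# keep the metadata columns plus the top-k ranked features, in rank order
features = [f for f in ranking[:top_k] if f in data.columns]
reduced = data[list(data.columns[:2]) + features]
```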
class Evaluator:
"""Abstract super class.
Every evaluation class has to inherit from this class and implement its :meth:`evaluate()` method.
:param input: absolute path to the directory where the input data is located.
:type input: str
:param output: absolute path to the directory to which to save results.
:type output: str
:param methodColors: dictionary containing a color string for every selection method.
:type methodColors: dict of str
:param javaConfig: configuration parameters for java code (as specified in the config file).
:type javaConfig: str
:param rConfig: configuration parameters for R code (as specified in the config file).
:type rConfig: str
:param evalConfig: configuration parameters for evaluation, e.g. how many features to select (as specified in the config file).
:type evalConfig: str
:param classificationConfig: configuration parameters for classification, e.g. which classifiers to use (as specified in the config file).
:type classificationConfig: str
"""
def __init__(self, input, output, methodColors):
self.javaConfig = benchutils.getConfig("Java")
self.rConfig = benchutils.getConfig("R")
self.evalConfig = benchutils.getConfig("Evaluation")
self.classificationConfig = benchutils.getConfig("Classification")
self.input = input
self.output = output
self.methodColors = methodColors
super().__init__()
@abstractmethod
def evaluate(self):
"""Abstract.
Must be implemented by inheriting class as this method is invoked by :class:`framework.Framework` to run the evaluation.
"""
pass
def loadRankings(self, inputDir, maxRank, keepOrder):
"""Loads all rankings from a specified input directory.
If only the top k features shall be in the ranking, set maxRank accordingly, set it to 0 if otherwise (so to load all features).
If feature order is important in the returned rankings, set keepOrder to true; if you are only interested in what features are among the top maxRank, set it to false.
:param inputDir: absolute path to directory where all rankings are located.
:type inputDir: str
:param maxRank: maximum number of features to have in ranking.
:type maxRank: int
:param keepOrder: whether the order of the features in the ranking is important or not.
:type keepOrder: bool
:return: Dictionary of rankings per method, either as ordered list or set (depending on keepOrder attribute)
:rtype: dict
"""
rankings = {}
for file in os.listdir(inputDir):
if os.path.isfile(os.path.join(inputDir, file)):
selectionMethod = file.split(".")[0] # get method name from filename without ending
ranking = benchutils.loadRanking(inputDir + file)
genesColumn = ranking.columns[0]
genes = ranking[genesColumn]
# 0 is code number for using all items
if maxRank == 0:
maxRank = len(ranking) - 1
# add 1 for header column
if keepOrder:
rankings[selectionMethod] = genes[:maxRank + 1]
else:
rankings[selectionMethod] = set(genes[:maxRank + 1])
return rankings
def computeKendallsW(self, rankings):
"""Computes Kendall's W from two rankings.
Note: the measure does not make much sense if the two rankings are largely disjoint, which can happen especially for traditional approaches.
:param rankings: matrix containing two rankings for which to compute Kendall's W.
:type rankings: matrix
:return: Kendall's W score.
:rtype: float
"""
if rankings.ndim != 2:
raise ValueError('rankings matrix must be 2-dimensional')
m = rankings.shape[0] # raters
n = rankings.shape[1] # items rated
denom = m ** 2 * (n ** 3 - n)
rating_sums = np.sum(rankings, axis=0)
S = n * np.var(rating_sums)  # n * var = sum of squared deviations of the item rank totals
return 12 * S / denom
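Kendall's W has two easy fixed points that make a good sanity check: exactly 1.0 when every rater submits the same ranking, and 0.0 when two raters rank in exact reverse. A standalone sketch (here S is built as the sum of squared deviations of the item rank totals, i.e. n times their variance):

```python
import numpy as np

def kendalls_w(rankings):
    """Kendall's coefficient of concordance for an (m raters x n items) matrix."""
    m, n = rankings.shape
    rating_sums = np.sum(rankings, axis=0)
    s = n * np.var(rating_sums)  # sum of squared deviations of the item totals
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

w_full = kendalls_w(np.array([[1, 2, 3, 4]] * 3))      # three identical raters
w_zero = kendalls_w(np.array([[1, 2, 3], [3, 2, 1]]))  # perfect disagreement
```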
class ClassificationEvaluator(Evaluator):
"""Evaluates selection methods via classification by using only the selected features and computing multiple standard metrics.
Uses :class:`AttributeRemover` to create reduced datasets containing only the top k features, which are then used for subsequent classification.
Currently, classification and subsequent evaluation is wrapped here and is actually carried out by java jars using WEKA.
:param input: absolute path to the directory where the input data for classification is located.
:type input: str
:param rankingsDir: absolute path to the directory where the rankings are located.
:type rankingsDir: str
:param intermediateDir: absolute path to the directory where the reduced datasets (containing only the top k features) are written to.
:type intermediateDir: str
:param output: absolute path to the directory to which to save results.
:type output: str
:param methodColors: dictionary containing a color string for every selection method.
:type methodColors: dict of str
#!/usr/bin/env python2.7
# __BEGIN_LICENSE__
#
# Copyright (C) 2010-2013 Stanford University.
# All rights reserved.
#
# __END_LICENSE__
# lflib imports
import lflib
from lflib.imageio import load_image, save_image
from lflib.lightfield import LightField
from lflib.calibration import LightFieldCalibration
from lflib.util import ensure_path
# major libraries
import numpy as np
import h5py
import math, sys
import cv
import os
import subprocess
import time
import tempfile
import socket
import argparse
#----------------------------------------------------------------------------------
class SCPFailure(Exception):
def __init__(self, filepath, error_output):
Exception.__init__(self, filepath, error_output)
self.filepath = filepath
self.errout = error_output
def __str__(self):
return """
#####################
SCP failed for %s.
Process output was:
%s
#####################
""" % (self.filepath, self.errout)
def copy_file(source_path, dest_path):
"Copy a file via SCP, retrying a few times if necessary."
retries = 5
retry_interval_secs = 1
while retries > 0:
tic = time.time()
copyproc = subprocess.Popen(['scp', '-o StrictHostKeyChecking=no', '-i', args.private_fn, source_path, dest_path], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = copyproc.communicate()
if copyproc.returncode == 0:
print '\t\t--> file transfer completed in %f secs' % (time.time() - tic)
break
time.sleep(retry_interval_secs)
retry_interval_secs *= 2 # backoff retry time
print '\t\t--> file transfer failed. Retrying.'
else:
# retries exhausted. give up.
raise SCPFailure(source_path, err)
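`copy_file` retries with a sleep that doubles after each failure. The same exponential-backoff pattern, extracted into a generic Python 3 sketch (the surrounding script is Python 2; all names here are hypothetical):

```python
import time

def retry_with_backoff(fn, retries=5, initial_delay=0.01):
    """Call fn() until it succeeds, doubling the sleep between attempts."""
    delay = initial_delay
    last_exc = None
    for _ in range(retries):
        try:
            return fn()
        except Exception as exc:  # real code should catch a narrower type
            last_exc = exc
            time.sleep(delay)
            delay *= 2  # backoff: 0.01, 0.02, 0.04, ...
    raise last_exc

# a call that fails twice, then succeeds on the third attempt
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry_with_backoff(flaky)
```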
def retrieve_calibration_file(calibration_filename, id=''):
if ':' in calibration_filename and '@' in calibration_filename:
print '\t--> fetching calibration file on remote host...'
cache_filename = tempfile.gettempdir() + os.sep + 'tmp_lfc_cache_' + str(hash(calibration_filename)) + '_' + id + '.lfc'
# If the file is already present on the filesystem, then we can use it immediately.
if os.path.isfile(cache_filename):
print '\t\t--> using locally cached copy %s.'%cache_filename
calibration_filename = cache_filename
# If the file is not present, then it needs to be downloaded.
# Only one worker can do this at a time. The rest must wait
# for the file to be downloaded. We use a dummy file on the
# filesystem as a concurrency "lock" to ensure that only one
# worker downloads the file.
else:
try:
# Try to open a file to use as a concurrency lock. Only the first process to try will succeed. The rest will fail.
fd_concurrency_lock = os.open(cache_filename + ".lock", os.O_CREAT|os.O_EXCL|os.O_RDWR)
# If the worker has gotten this far, it has the concurrency lock.
print '\t\t--> using scp. temporary transfer location: %s.'%(cache_filename+'.tmp')
copy_file(calibration_filename, cache_filename+'.tmp')
# Rename the file
os.rename(cache_filename+'.tmp', cache_filename)
# Remove the concurrency lock. This should signal to
# other processes that they can safely use the
# downloaded lfc file.
os.close(fd_concurrency_lock)
os.remove(cache_filename + ".lock")
print '\t\t--> transfer complete. Cached at: %s.' % cache_filename
except OSError:
print '\t\t--> another process appears to be retrieving the file. Waiting for transfer to complete.'
numAttempts = 0
while os.path.isfile(cache_filename + ".lock") and numAttempts < 1000:
time.sleep(5)
numAttempts += 1
if numAttempts >= 1000:
raise Exception('completion of the transfer of the calibration file by another process seems to have stalled.')
calibration_filename = cache_filename
return calibration_filename
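The lock-file scheme above works because `os.open` with `O_CREAT | O_EXCL` is atomic: exactly one process can create the file, and everyone else gets an `OSError` until it is removed. A self-contained sketch:

```python
import os
import tempfile

lock_path = os.path.join(tempfile.mkdtemp(), "download.lock")

# the first process creates the lock file and wins
fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_RDWR)

# any other process attempting the same open fails while the lock exists
try:
    os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_RDWR)
    lock_contended = False
except OSError:
    lock_contended = True

# the holder releases the lock; waiters can now proceed
os.close(fd)
os.remove(lock_path)
lock_free = not os.path.exists(lock_path)
```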
def do_deconvolve(args):
filename = args.input_file
# Default calibration filename has a *.lfc suffix, and the same prefix
if not args.calibration_file:
fileName, fileExtension = os.path.splitext(filename)
calibration_file = fileName + '.lfc'
else:
calibration_file = args.calibration_file
# Default output filename has a -RECTIFIED suffix
if not args.output_filename:
fileName, fileExtension = os.path.splitext(filename)
output_filename = fileName + '-STACK.tif'
else:
output_filename = args.output_filename
print '\t--> hostname:{host}'.format(host=socket.gethostname())
print '\t--> specified gpu-id:{gpuid}'.format(gpuid=args.gpu_id)
#check if output filename appears to be formatted as user@host:path.
remote_output_fn = None
if ':' in output_filename and '@' in output_filename:
remote_output_fn = output_filename
output_filename = tempfile.gettempdir() + os.sep + 'deconv_tmpout_' + str(hash(filename)) + os.path.splitext(output_filename)[1]
print '\t--> output file location is on remote host. Output will temporarily appear at %s.'%output_filename
# Load the calibration data
calibration_file = retrieve_calibration_file(calibration_file, id=str(args.gpu_id))
lfcal = LightFieldCalibration.load(calibration_file)
print '\t--> loaded calibration file.'
# Load the raw input data
if ':' in filename and '@' in filename:
tmpname = tempfile.gettempdir() + os.sep + 'deconv_tmpin_' + str(hash(filename)) + os.path.splitext(filename)[1]
print '\t--> input file is on remote host. Transferring temporarily to %s.'%tmpname
copy_file(filename, tmpname)
im = load_image(tmpname, dtype=np.float32, normalize=False)
os.remove(tmpname)
else:
im = load_image(filename, dtype=np.float32, normalize = False)
print '\t--> %s opened. Pixel values range: [%d, %d]' % (filename, int(im.min()), int(im.max()))
# Perform dark frame subtraction
from lflib.lfexceptions import ZeroImageException
try:
im = lfcal.subtract_dark_frame(im)
print '\t Dark frame subtracted. Pixel values range: [%f, %f]' % (im.min(), im.max())
lf_zeros = False
except ZeroImageException:
print "\t A frame with no light pixels was found, but it's no big deal"
lf_zeros = True
# Rectify the image
lf = lfcal.rectify_lf(im)
# save_image('/home/broxton/debug_retified_lenslet.tif', lf.asimage(LightField.TILED_LENSLET), dtype=np.float32)
# save_image('/home/broxton/debug_retified_subap.tif', lf.asimage(LightField.TILED_SUBAPERTURE), dtype=np.float32)
# save_image('/home/broxton/debug_retified_radiometry.tif', lfcal.rectified_radiometry, dtype=np.float32)
# Save a little bit of verboseness in the code below by extracting the appropriate 'db' object.
if lfcal.psf_db != None:
print '\t Using wave optic psf db.'
db = lfcal.psf_db
else:
print '\t Using rayspread db.'
db = lfcal.rayspread_db
# Initialize the volume as a plain focal stack. We normalize by the weights as well.
from lflib.volume import LightFieldProjection
lfproj = LightFieldProjection(lfcal.rayspread_db, lfcal.psf_db,
disable_gpu = args.disable_gpu, gpu_id = args.gpu_id)
# Enable radiometry correction
lfproj.set_premultiplier(lfcal.radiometric_correction)
# DEBUG - MANUAL RADIOMETRY FOR TRYING VARIOUS STUFF OUT
# lf_ones = lfproj.project(db.ones_volume())
# ideal_lf = lf_ones.asimage(representation = LightField.TILED_LENSLET)
# print ideal_lf.min(), ideal_lf.max()
# rectified_radiometry = lfcal.rectified_radiometry
# print rectified_radiometry.min(), rectified_radiometry.max()
# radiometric_correction = rectified_radiometry / (ideal_lf + 1e-16)
# print radiometric_correction.min(), radiometric_correction.max()
# save_image("raddebug_ideal.tif", ideal_lf, dtype=np.float32)
# save_image("raddebug_actual.tif", rectified_radiometry, dtype=np.float32)
# save_image("raddebug_correction.tif", radiometric_correction, dtype=np.float32)
# lightfield_im = lf.asimage(representation = LightField.TILED_LENSLET)
if args.test is not None:
print '===================================='
if args.test == 1:
lfproj.test_forward_backward()
if args.test == 2:
lfproj.test_forward_project()
if args.test == 3:
lfproj.test_back_project()
print 'finished testing SIRT, quitting...'
exit()
print '-------------------------------------------------------------------'
print 'Computing light field tomographic reconstruction of:', filename
print 'conv_thresh:', args.conv_thresh
print ' max_iter:', args.max_iter
print ' lambda: ', args.regularization_lambda
# All of the solvers below take as arguments a system matrix
# operator 'A' (implemented as a numpy LinearOperator) and a 'b'
# matrix, which is the light field sub-aperture image transformed
# into a vector. Some routines also take a starting volume vector
# 'x'.
#
# We take the resulting vector 'x' and transform it back into a
# volume below.
#
from lflib.linear_operators import LightFieldOperator
nrays = db.ns*db.nu*db.nt*db.nv
nvoxels = db.nx*db.ny*db.nz
A_lfop = LightFieldOperator(lfproj, db)
A_operator = A_lfop.as_linear_operator(nrays, nvoxels)
# Trim out entries in the light field that we are ignoring because
# they are too close to the edge of the NA of the lenslet. This
# make our residuals more accurate below.
lf = lfcal.mask_trimmed_angles(lf)
# Generate the b vector, which contains the observed lightfield;
# and the initial volume x containing all zeros.
im_subaperture = lf.asimage(representation = LightField.TILED_SUBAPERTURE)
b_vec = np.reshape(im_subaperture, np.prod(im_subaperture.shape))
# DEBUG - TEST OUT INTERCEPT IDEA
#
# This code seems to work well for removing some of the edge
# artifacts in our volumes, but I would still classify it as
# "beta" code for now. I'm leaving it here but we should consider
# whether this is exactly what we want to be doing.
#
TEST_INTERCEPT_CODE = False
if TEST_INTERCEPT_CODE:
lf_ones = lfproj.project(db.ones_volume()).asimage(LightField.TILED_SUBAPERTURE)
lf_ones /= lf_ones.max()
from lflib.linear_operators import LightFieldOperatorWithFullIntercept
A_lfop = LightFieldOperatorWithFullIntercept(lfproj, db)
A_operator = A_lfop.as_linear_operator()
# from lflib.linear_operators import LightFieldOperatorWithIntercept
# A_lfop = LightFieldOperatorWithIntercept(lfproj, db, np.reshape(lf_ones, np.prod(lf_ones.shape)))
# A_operator = A_lfop.as_linear_operator()
# /DEBUG
if (args.benchmark):
lfproj.compare_cpu_gpu_performance()
raise SystemExit
if lf_zeros:
vol = lfcal.psf_db.empty_volume()
x_vec = np.reshape(vol, np.prod(vol.shape)).astype(np.float32)
# If the user has requested a focal stack, we stop here and return
# the current volume.
elif args.focalstack:
print 'Computing focal stack reconstruction of:', filename
# Weight come from one forward projection followed by one back projection.
vol_weights = lfproj.backproject(lfproj.project(db.ones_volume()))
# This multiplier removes the grid artifact entirely from the
# volume. This only seems to work for the focal stack,
# though.
pm = 1.0 / vol_weights
pm[np.nonzero(vol_weights == 0)] = 0.0 # Prevent NaNs!
lfproj.set_postmultiplier(pm);
vol = lfproj.backproject(lf)
TEST_INTERCEPT_CODE = False # Need to disable this for focal stack.
x_vec = np.reshape(vol, np.prod(vol.shape)).astype(np.float32)
elif args.solver == 'amp':
from lflib.solvers.amp import amp_reconstruction
if args.conv_thresh == 0.0: args.conv_thresh = 1e-10
args.alpha = 1.0
vol, multiscale_coefs = amp_reconstruction(lfcal, lf, args.alpha,
args.conv_thresh, args.max_iter,
args.regularization_lambda,
delta = 1.0,
debug = args.debug,
disable_gpu = args.disable_gpu,
gpu_id = args.gpu_id,
debug_path =
#!/usr/bin/python3
# -*- coding: utf-8 -*-
"""
.. module:: high_tech_aim
:platform: Unix
:synopsis: the top-level submodule of T_System that contains the classes related to high tech targeting mark of T_System's vision ability.
.. moduleauthor:: <NAME> <<EMAIL>>
"""
import numpy
import cv2
from math import sqrt, radians, cos, sin
from t_system import T_SYSTEM_PATH
class Aimer:
"""Class to draw a high tech aim mark.
This class provides the necessary initializations and a function named :func:`t_system.high_tech_aim.Aimer.mark`
for drawing the targeting mark.
"""
def __init__(self):
"""Initialization method of :class:`t_system.high_tech_aim.Aimer` class.
"""
self.image = None
self.image_width = 0
self.image_height = 0
self.object_distance = ""
self.text_font = cv2.FONT_HERSHEY_SIMPLEX
self.arc_direction = True
self.thin_arc_start_angle = 170
self.thin_arc_end_angle = 310
self.thick_arc_start_angle = 180
self.thick_arc_end_angle = 300
self.rect_diagonal_rate = 0.9
self.vendor_animation_capture = None
def mark_rotating_arcs(self, image, center, radius, physically_distance, color='red'):
"""The top-level method to draw target mark with rotating arcs like high tech.
Args:
image: Image matrix.
center: Center point of the aimed object.
radius: Radius of the aim.
physically_distance: Physical distance of the targeted object, as a pixel count.
color: The dominant color of the targeting mark.
"""
self.image = image
self.object_distance = str(round(physically_distance * 0.164 / 1000, 2)) + " m" # 0.164 mm is the length of one pixel.
radius *= 0.618
thickness = radius * 0.23
self.image_height, self.image_width = self.image.shape[:2]
center_x = center[0]
center_y = center[1]
self.__draw_arc(center_x, center_y, radius, thickness, self.thick_arc_start_angle, self.thick_arc_end_angle)
self.__draw_arc(center_x, center_y, radius * 0.95, thickness * 0.15, self.thin_arc_start_angle, self.thin_arc_end_angle)
text_point = (int(center_x - radius * 0.15), int(center_y + radius * 0.95 - radius * 0.1))
# parameters: image, put text, text's coordinates,font, scale, color, thickness, line type(this type is best for texts.)
cv2.putText(self.image, self.object_distance, text_point, self.text_font, radius * 0.004, (0, 0, 200), int(radius * 0.004), cv2.LINE_AA)
self.__draw_phys_dist_container(center_x, center_y, radius)
self.__check_angle_of_arcs()
self.__rotate_arcs()
return self.image
def mark_partial_rect(self, image, center, radius, physically_distance, color='red'):
"""The top-level method to draw target mark with partial rectangle like high tech.
Args:
image: Image matrix.
center: Center point of the aimed object.
radius: Radius of the aim.
physically_distance: Physical distance of the targeted object, as a pixel count.
color: The dominant color of the targeting mark.
"""
self.image = image
self.object_distance = str(round(physically_distance * 0.164 / 1000, 2)) + " m" # 0.164 mm is the length of one pixel.
radius *= 0.5
thickness = radius * 0.23
self.image_height, self.image_width = self.image.shape[:2]
center_x = center[0]
center_y = center[1]
self.__draw_rect(center_x, center_y, radius * 1.2, thickness * 0.4)
# self.draw_rect_triangler(center_x, center_y, radius * self.rect_diagonal_rate, thickness * 0.2)
text_point = (int(center_x - radius * 0.15), int(center_y + radius * 0.95 - radius * 0.1))
cv2.putText(self.image, self.object_distance, text_point, self.text_font, radius * 0.004, (0, 200, 0), int(radius * 0.004), cv2.LINE_AA)
# parameters: image, put text, text's coordinates,font, scale, color, thickness, line type(this type is best for texts.)
# self.rect_diagonal_rate -= 0.05
# if self.rect_diagonal_rate <= 0.2:
# self.rect_diagonal_rate = 0.9
return self.image
def mark_vendor_animation(self, image, center, radius, physically_distance, color='red'):
"""The top-level method to draw target mark with previously prepared HUD animation.
Args:
image: Image matrix.
center: Center point of the aimed object.
radius: Radius of the aim.
physically_distance: Physical distance of the targeted object, as a pixel count.
color: The dominant color of the targeting mark.
"""
ret, frame = self.vendor_animation_capture.read()
frame_count = self.vendor_animation_capture.get(cv2.CAP_PROP_FRAME_COUNT)
if not ret or frame is None:
# end of stream (or bad frame): rewind into the clip and read again
self.vendor_animation_capture.set(cv2.CAP_PROP_POS_FRAMES, int(frame_count / 2.5))
ret, frame = self.vendor_animation_capture.read()
if ret:
frame = cv2.resize(frame, (int(radius), int(radius)), interpolation=cv2.INTER_AREA)
self.__overlay_image_alpha(image, frame[:, :, 0:3], (center[0], center[1]), frame[:, :, 2] / 255.0)
def set_vendor_animation(self, animation_name):
"""The top-level method to set previously prepared HUD animation.
"""
animation_file = f'{T_SYSTEM_PATH}/high_tech_aim/mark_animations/{animation_name}.mov'
self.vendor_animation_capture = cv2.VideoCapture(animation_file, 0)
def __overlay_image_alpha(self, img, img_overlay, pos, alpha_mask):
"""Method to overlay img_overlay on top of img at the position specified by pos and blend using alpha_mask.
Alpha mask must contain values within the range [0, 1] and be the same size as img_overlay.
Args:
img: Bottom layer image matrix.
img_overlay: Top layer image matrix.
pos: Position of the overlay image matrix.
alpha_mask: The alpha mask of the overlay image matrix for eliminating the background of this image.
"""
x, y = pos
# Image ranges
y1, y2 = max(0, y), min(img.shape[0], y + img_overlay.shape[0])
x1, x2 = max(0, x), min(img.shape[1], x + img_overlay.shape[1])
# Overlay ranges
y1o, y2o = max(0, -y), min(img_overlay.shape[0], img.shape[0] - y)
x1o, x2o = max(0, -x), min(img_overlay.shape[1], img.shape[1] - x)
# Exit if nothing to do
if y1 >= y2 or x1 >= x2 or y1o >= y2o or x1o >= x2o:
return
channels = img.shape[2]
alpha = alpha_mask[y1o:y2o, x1o:x2o]
alpha_inv = 1.0 - alpha
for c in range(channels):
img[y1:y2, x1:x2, c] = (alpha * img_overlay[y1o:y2o, x1o:x2o, c] +
alpha_inv * img[y1:y2, x1:x2, c])
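The per-channel loop in `__overlay_image_alpha` computes the standard blend `alpha * top + (1 - alpha) * bottom`. Broadcasting the mask over the channel axis gives the same result without the loop; a sketch on tiny arrays:

```python
import numpy as np

# 1x1 three-channel "images" and a half-transparent mask
bottom = np.full((1, 1, 3), 100.0)
top = np.full((1, 1, 3), 200.0)
alpha = np.full((1, 1), 0.5)

# alpha[..., None] adds a trailing axis so the mask broadcasts per channel
blended = alpha[..., None] * top + (1.0 - alpha[..., None]) * bottom
```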
def __draw_arc(self, center_x, center_y, radius, thickness, start_angle, end_angle, edge_shine=False):
"""Method to draw arcs of the mark.
Args:
center_x: The object's x center(by column count).
center_y: The object's y center(by row count).
radius: Radius of the aim.
thickness: Thickness of arc.
start_angle: Start angle of the arc.
end_angle: End angle of the arc.
edge_shine: Flag to activate the shining of the arc's edge.
"""
if end_angle >= start_angle:
pass
else:
start_angle, end_angle = end_angle, start_angle
rad = radius
while rad <= radius + thickness:
angle = start_angle
while angle <= end_angle:
x = center_x + rad * cos(radians(angle))
y = center_y - rad * sin(radians(angle))
if self.image_width > x >= 0 and self.image_height > y >= 0: # for the frames' limit protection.
distance = int(sqrt((center_x - x) ** 2 + (center_y - y) ** 2))
x = int(x)
y = int(y)
if radius <= distance <= radius + thickness:
[b, g, r] = self.image[y, x] = numpy.array(self.image[y, x]) * numpy.array([0, 0, 1.1])
# The following lines increase visibility when the mark falls on dark areas.
if r <= 100:
if r == 0:
r = 1
self.image[y, x] = [0, 0, 1]
redness_rate = (255 / r) / 0.12
self.image[y, x] = numpy.array(self.image[y, x]) * numpy.array([0, 0, redness_rate])
if edge_shine:
for thick in range(60, 100, 4):
if radius + thickness * thick / 100 <= distance <= radius + thickness:
# [b, g, r] = self.image[y, x]
self.image[y, x] = numpy.array(self.image[y, x]) + numpy.array([thick * 0.06, thick * 0.06, 255])
angle += 0.25
rad += 1
def __draw_rect(self, center_x, center_y, radius, thickness):
"""Method to draw partial rectangles those have missing parts of their edges.
Args:
center_x: The object's x center(by column count).
center_y: The object's y center(by row count).
radius: Radius of the aim.
thickness: Thickness of arc.
"""
center_x = int(center_x)
center_y = int(center_y)
radius = int(radius)
thickness = int(thickness)
edge_length = int(radius * 0.3)
x_ranges = list(range(center_x - radius - thickness, center_x - edge_length)) + list(range(center_x + edge_length, center_x + radius + thickness))
y_ranges = list(range(center_y - radius - thickness, center_y - radius)) + list(range(center_y + radius, center_y + radius + thickness))
for x in x_ranges:
for y in y_ranges:
if self.image_width > x >= 0 and self.image_height > y >= 0: # for the frames' limit protection.
[b, g, r] = self.image[y, x] = numpy.array(self.image[y, x]) * numpy.array([0, 1, 0])
if g <= 100:
if g == 0:
g = 1
self.image[y, x] = [0, 0, 1]
greenness_rate = (255 / g) / 0.12
self.image[y, x] = numpy.array(self.image[y, x]) * numpy.array([0, greenness_rate, 0])
y_ranges = list(range(center_y - radius - thickness, center_y - edge_length)) + list(range(center_y + edge_length, center_y + radius + thickness))
x_ranges = list(range(center_x - radius - thickness, center_x - radius)) + list(range(center_x + radius, center_x + radius + thickness))
for y in y_ranges:
for x in x_ranges:
if self.image_width > x >= 0 and self.image_height > y >= 0: # for the frames' limit protection.
[b, g, r] = self.image[y, x] = numpy.array(self.image[y, x]) * numpy.array([0, 1, 0])
if g <= 100:
if g == 0:
g = 1
self.image[y, x] = [0, 0, 1]
greenness_rate = (255 / g) / 0.12
# Copyright (c) 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""The hypervisors admin extension."""
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import strutils
from oslo_utils import uuidutils
import webob.exc
from nova.api.openstack import api_version_request
from nova.api.openstack import common
from nova.api.openstack.compute.schemas import hypervisors as hyper_schema
from nova.api.openstack.compute.views import hypervisors as hyper_view
from nova.api.openstack import extensions
from nova.api.openstack import wsgi
from nova.api import validation
from nova.cells import utils as cells_utils
from nova import compute
from nova import exception
from nova.i18n import _
from nova.policies import hypervisors as hv_policies
from nova import servicegroup
from nova import utils
LOG = logging.getLogger(__name__)
UUID_FOR_ID_MIN_VERSION = '2.53'
class HypervisorsController(wsgi.Controller):
"""The Hypervisors API controller for the OpenStack API."""
_view_builder_class = hyper_view.ViewBuilder
def __init__(self):
self.host_api = compute.HostAPI()
self.servicegroup_api = servicegroup.API()
super(HypervisorsController, self).__init__()
def _view_hypervisor(self, hypervisor, service, detail, req, servers=None,
**kwargs):
alive = self.servicegroup_api.service_is_up(service)
# The 2.53 microversion returns the compute node uuid rather than id.
uuid_for_id = api_version_request.is_supported(
req, min_version=UUID_FOR_ID_MIN_VERSION)
hyp_dict = {
'id': hypervisor.uuid if uuid_for_id else hypervisor.id,
'hypervisor_hostname': hypervisor.hypervisor_hostname,
'state': 'up' if alive else 'down',
'status': ('disabled' if service.disabled
else 'enabled'),
}
if detail:
for field in ('vcpus', 'memory_mb', 'local_gb', 'vcpus_used',
'memory_mb_used', 'local_gb_used',
'hypervisor_type', 'hypervisor_version',
'free_ram_mb', 'free_disk_gb', 'current_workload',
'running_vms', 'disk_available_least', 'host_ip'):
hyp_dict[field] = getattr(hypervisor, field)
service_id = service.uuid if uuid_for_id else service.id
hyp_dict['service'] = {
'id': service_id,
'host': hypervisor.host,
'disabled_reason': service.disabled_reason,
}
if api_version_request.is_supported(req, min_version='2.28'):
if hypervisor.cpu_info:
hyp_dict['cpu_info'] = jsonutils.loads(hypervisor.cpu_info)
else:
hyp_dict['cpu_info'] = {}
else:
hyp_dict['cpu_info'] = hypervisor.cpu_info
if servers:
hyp_dict['servers'] = [dict(name=serv['name'], uuid=serv['uuid'])
for serv in servers]
# Add any additional info
if kwargs:
hyp_dict.update(kwargs)
return hyp_dict
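The id/state/status mapping assembled above can be sketched with plain dicts; the dict-based inputs below are hypothetical stand-ins for the real hypervisor and service objects, not nova's API.

```python
def view_hypervisor_summary(hyp, service_disabled, alive, use_uuid):
    # Mirrors the summary fields built in _view_hypervisor: the id switches
    # between integer id and uuid depending on the requested microversion,
    # 'state' reflects service liveness and 'status' the admin-disable flag.
    return {
        'id': hyp['uuid'] if use_uuid else hyp['id'],
        'hypervisor_hostname': hyp['hostname'],
        'state': 'up' if alive else 'down',
        'status': 'disabled' if service_disabled else 'enabled',
    }
```
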
def _get_compute_nodes_by_name_pattern(self, context, hostname_match):
compute_nodes = self.host_api.compute_node_search_by_hypervisor(
context, hostname_match)
if not compute_nodes:
msg = (_("No hypervisor matching '%s' could be found.") %
hostname_match)
raise webob.exc.HTTPNotFound(explanation=msg)
return compute_nodes
def _get_hypervisors(self, req, detail=False, limit=None, marker=None,
links=False):
"""Get hypervisors for the given request.
:param req: nova.api.openstack.wsgi.Request for the GET request
:param detail: If True, return a detailed response.
:param limit: An optional user-supplied page limit.
:param marker: An optional user-supplied marker for paging.
:param links: If True, return links in the response for paging.
"""
context = req.environ['nova.context']
context.can(hv_policies.BASE_POLICY_NAME)
# The 2.53 microversion moves the search and servers routes into
# GET /os-hypervisors and GET /os-hypervisors/detail with query
# parameters.
if api_version_request.is_supported(
req, min_version=UUID_FOR_ID_MIN_VERSION):
hypervisor_match = req.GET.get('hypervisor_hostname_pattern')
with_servers = strutils.bool_from_string(
req.GET.get('with_servers', False), strict=True)
else:
hypervisor_match = None
with_servers = False
if hypervisor_match is not None:
# We have to check for 'limit' in the request itself because
# the limit passed in is CONF.api.max_limit by default.
if 'limit' in req.GET or marker:
# Paging with hostname pattern isn't supported.
raise webob.exc.HTTPBadRequest(
_('Paging over hypervisors with the '
'hypervisor_hostname_pattern query parameter is not '
'supported.'))
# Explicitly do not try to generate links when querying with the
# hostname pattern since the request in the link would fail the
# check above.
links = False
# Get all compute nodes with a hypervisor_hostname that matches
# the given pattern. If none are found then it's a 404 error.
compute_nodes = self._get_compute_nodes_by_name_pattern(
context, hypervisor_match)
else:
# Get all compute nodes.
try:
compute_nodes = self.host_api.compute_node_get_all(
context, limit=limit, marker=marker)
except exception.MarkerNotFound:
msg = _('marker [%s] not found') % marker
raise webob.exc.HTTPBadRequest(explanation=msg)
hypervisors_list = []
for hyp in compute_nodes:
try:
instances = None
if with_servers:
instances = self.host_api.instance_get_all_by_host(
context, hyp.host)
service = self.host_api.service_get_by_compute_host(
context, hyp.host)
hypervisors_list.append(
self._view_hypervisor(
hyp, service, detail, req, servers=instances))
except (exception.ComputeHostNotFound,
exception.HostMappingNotFound):
# Deleting the compute service does not delete the compute node
# record; that record has to be removed from the database manually,
# so we simply skip the node when listing.
LOG.debug('Unable to find service for compute node %s. The '
'service may be deleted and compute nodes need to '
'be manually cleaned up.', hyp.host)
hypervisors_dict = dict(hypervisors=hypervisors_list)
if links:
hypervisors_links = self._view_builder.get_links(
req, hypervisors_list, detail)
if hypervisors_links:
hypervisors_dict['hypervisors_links'] = hypervisors_links
return hypervisors_dict
@wsgi.Controller.api_version(UUID_FOR_ID_MIN_VERSION)
@validation.query_schema(hyper_schema.list_query_schema_v253,
UUID_FOR_ID_MIN_VERSION)
@extensions.expected_errors((400, 404))
def index(self, req):
"""Starting with the 2.53 microversion, the id field in the response
is the compute_nodes.uuid value. Also, the search and servers routes
are superseded and replaced with query parameters for listing
hypervisors by a hostname pattern and whether or not to include
hosted servers in the response.
"""
limit, marker = common.get_limit_and_marker(req)
return self._index(req, limit=limit, marker=marker, links=True)
@wsgi.Controller.api_version("2.33", "2.52") # noqa
@validation.query_schema(hyper_schema.list_query_schema_v233)
@extensions.expected_errors((400))
def index(self, req):
limit, marker = common.get_limit_and_marker(req)
return self._index(req, limit=limit, marker=marker, links=True)
@wsgi.Controller.api_version("2.1", "2.32") # noqa
@extensions.expected_errors(())
def index(self, req):
return self._index(req)
def _index(self, req, limit=None, marker=None, links=False):
return self._get_hypervisors(req, detail=False, limit=limit,
marker=marker, links=links)
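The stacked `api_version` decorators above route each request to the matching `index` variant by comparing microversions numerically rather than lexically. A minimal sketch of that comparison (the helper names are illustrative, not nova's API):

```python
def parse_version(version):
    # '2.53' -> (2, 53); tuples then compare element-wise
    major, minor = version.split('.')
    return int(major), int(minor)

def is_supported(requested, min_version):
    # numeric comparison: 2.9 < 2.53 even though '2.9' > '2.53' as strings
    return parse_version(requested) >= parse_version(min_version)
```

A plain string comparison would get this wrong: `'2.9' >= '2.53'` is lexically true, yet microversion 2.9 predates 2.53, which is why the tuple form matters.
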
@wsgi.Controller.api_version(UUID_FOR_ID_MIN_VERSION)
@validation.query_schema(hyper_schema.list_query_schema_v253,
UUID_FOR_ID_MIN_VERSION)
@extensions.expected_errors((400, 404))
def detail(self, req):
"""Starting with the 2.53 microversion, the id field in the response
is the compute_nodes.uuid value. Also, the search and servers routes
are superseded and replaced with query parameters for listing
hypervisors by a hostname pattern and whether or not to include
hosted servers in the response.
"""
limit, marker = common.get_limit_and_marker(req)
return self._detail(req, limit=limit, marker=marker, links=True)
@wsgi.Controller.api_version("2.33", "2.52") # noqa
@validation.query_schema(hyper_schema.list_query_schema_v233)
@extensions.expected_errors((400))
def detail(self, req):
limit, marker = common.get_limit_and_marker(req)
return self._detail(req, limit=limit, marker=marker, links=True)
@wsgi.Controller.api_version("2.1", "2.32") # noqa
@extensions.expected_errors(())
def detail(self, req):
return self._detail(req)
def _detail(self, req, limit=None, marker=None, links=False):
return self._get_hypervisors(req, detail=True, limit=limit,
marker=marker, links=links)
@staticmethod
def _validate_id(req, hypervisor_id):
"""Validates that the id is a uuid for microversions that require it.
:param req: The HTTP request object which contains the requested
microversion information.
:param hypervisor_id: The provided hypervisor id.
:raises: webob.exc.HTTPBadRequest if the requested microversion is
greater than or equal to 2.53 and the id is not a uuid.
:raises: webob.exc.HTTPNotFound if the requested microversion is
less than 2.53 and the id is not an integer.
"""
expect_uuid = api_version_request.is_supported(
req, min_version=UUID_FOR_ID_MIN_VERSION)
if expect_uuid:
if not uuidutils.is_uuid_like(hypervisor_id):
msg = _('Invalid uuid %s') % hypervisor_id
raise webob.exc.HTTPBadRequest(explanation=msg)
else:
# This API is supported for cells v1 and as such the id can be
# a cell v1 delimited string, so we have to parse it first.
if cells_utils.CELL_ITEM_SEP in str(hypervisor_id):
hypervisor_id = cells_utils.split_cell_and_item(
hypervisor_id)[1]
try:
utils.validate_integer(hypervisor_id, 'id')
except exception.InvalidInput:
msg = (_("Hypervisor with ID '%s' could not be found.") %
hypervisor_id)
raise webob.exc.HTTPNotFound(explanation=msg)
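`uuidutils.is_uuid_like` accepts any string that parses as a UUID regardless of hyphenation or case. Roughly, it behaves like the following approximation (a sketch of the oslo.utils behaviour, not its exact code):

```python
import uuid

def is_uuid_like(value):
    # Parse and compare in canonical hex form, ignoring hyphens and case.
    try:
        return str(uuid.UUID(value)).replace('-', '') == value.replace('-', '').lower()
    except (TypeError, ValueError, AttributeError):
        return False
```
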
@wsgi.Controller.api_version(UUID_FOR_ID_MIN_VERSION)
@validation.query_schema(hyper_schema.show_query_schema_v253,
UUID_FOR_ID_MIN_VERSION)
@extensions.expected_errors((400, 404))
def show(self, req, id):
"""The 2.53 microversion requires that the id is a uuid and as a result
it can also return a 400 response if an invalid uuid is passed.
The 2.53 microversion also supports the with_servers query parameter
to include a list of servers on the given hypervisor if requested.
"""
with_servers = strutils.bool_from_string(
req.GET.get('with_servers', False), strict=True)
return self._show(req, id, with_servers)
@wsgi.Controller.api_version("2.1", "2.52") # noqa F811
@extensions.expected_errors(404)
def show(self, req, id):
return self._show(req, id)
def _show(self, req, id, with_servers=False):
context = req.environ['nova.context']
context.can(hv_policies.BASE_POLICY_NAME)
self._validate_id(req, id)
try:
hyp = self.host_api.compute_node_get(context, id)
instances = None
if with_servers:
instances = self.host_api.instance_get_all_by_host(
context, hyp.host)
service = self.host_api.service_get_by_compute_host(
context, hyp.host)
except (ValueError, exception.ComputeHostNotFound,
exception.HostMappingNotFound):
msg = _("Hypervisor with ID '%s' could not be found.") % id
raise webob.exc.HTTPNotFound(explanation=msg)
return dict(hypervisor=self._view_hypervisor(
hyp, service, True, req, instances))
@extensions.expected_errors((400, 404, 501))
def uptime(self, req, id):
context = req.environ['nova.context']
context.can(hv_policies.BASE_POLICY_NAME)
self._validate_id(req, id)
try:
hyp = self.host_api.compute_node_get(context, id)
except (ValueError, exception.ComputeHostNotFound):
msg = _("Hypervisor with ID '%s' could not be found.") % id
raise webob.exc.HTTPNotFound(explanation=msg)
# Get the uptime
try:
host = hyp.host
uptime = self.host_api.get_host_uptime(context, host)
service = self.host_api.service_get_by_compute_host(context, host)
except NotImplementedError:
common.raise_feature_not_supported()
except exception.ComputeServiceUnavailable as e:
raise webob.exc.HTTPBadRequest(explanation=e.format_message())
except exception.HostMappingNotFound:
# NOTE(danms): This mirrors the compute_node_get() behavior
# where the node is missing, resulting in NotFound instead of
# BadRequest if we fail on the map lookup.
msg = _("Hypervisor with ID '%s' could not be found.") % id
raise webob.exc.HTTPNotFound(explanation=msg)
return dict(hypervisor=self._view_hypervisor(hyp, service, False, req, uptime=uptime))
# -*- coding: utf-8 -*-
"""
This module contains the class Population.
Created on Tue Sep 17 14:29:43 2019
@author: <NAME> (jcrvz.github.io), e-mail: <EMAIL>
"""
import numpy as np
from itertools import product as cartesian_product
from itertools import islice
__all__ = ['Population']
__selectors__ = ['all', 'greedy', 'metropolis', 'probabilistic']
class PopulationError(Exception):
"""Raised for invalid Population arguments (assumed definition: referenced below but absent from this excerpt)."""
pass
class Population:
"""
This is the Population class; each object corresponds to a population of agents within a search space.
"""
# Internal variables
iteration = 1
rotation_matrix = []
# Parameters per selection method
metropolis_temperature = 1000.0
metropolis_rate = 0.01
metropolis_boltzmann = 1.0
probability_selection = 0.5
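The Metropolis parameters above drive a simulated-annealing style acceptance rule (used later in `_selection`): a worse candidate is accepted with probability exp(-Δf / (k_B·T)). A minimal sketch, assuming a fixed temperature:

```python
import math
import random

def metropolis_accept(new_fitness, old_fitness, temperature, boltzmann=1.0,
                      rng=random.random):
    # Improvements are always kept; a worse candidate survives with
    # probability exp(-delta / (k_B * T)).
    if new_fitness <= old_fitness:
        return True
    delta = new_fitness - old_fitness
    return math.exp(-delta / (boltzmann * temperature + 1e-23)) > rng()
```

At high temperature almost everything is accepted; as the temperature falls the rule degenerates to greedy selection, which is what the cooling schedule (`metropolis_rate`) exploits over iterations.
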
# Class initialisation
# ------------------------------------------------------------------------
def __init__(self, boundaries, num_agents=30, is_constrained=True):
"""
Return a population of size ``num_agents`` within a problem domain defined by ``boundaries``.
:param tuple boundaries:
A tuple with two lists of size D corresponding to the lower and upper limits of search space, such as:
boundaries = (lower_boundaries, upper_boundaries)
Note: Dimensions of search domain are read from these boundaries.
:param int num_agents: optional.
Number of search agents or population size. The default is 30.
:param bool is_constrained: optional.
Prevent agents from abandoning the search space. The default is True.
:returns: population object.
"""
# Read number of variables or dimension
if len(boundaries[0]) == len(boundaries[1]):
self.num_dimensions = len(boundaries[0])
else:
raise PopulationError('Lower and upper boundaries must have the same length')
# Read the upper and lower boundaries of search space
self.lower_boundaries = np.array(boundaries[0]) if isinstance(boundaries[0], list) else boundaries[0]
self.upper_boundaries = np.array(boundaries[1]) if isinstance(boundaries[1], list) else boundaries[1]
self.span_boundaries = self.upper_boundaries - self.lower_boundaries
self.centre_boundaries = (self.upper_boundaries + self.lower_boundaries) / 2.
# Read number of agents in population
assert isinstance(num_agents, int)
self.num_agents = num_agents
# Initialise positions and fitness values
self.positions = np.full((self.num_agents, self.num_dimensions), np.nan)
self.velocities = np.full((self.num_agents, self.num_dimensions), 0.)  # float fill so velocities are not truncated to int
self.fitness = np.full(self.num_agents, np.nan)
# General fitness measurements
self.global_best_position = np.full(self.num_dimensions, np.nan)
self.global_best_fitness = float('inf')
self.current_best_position = np.full(self.num_dimensions, np.nan)
self.current_best_fitness = float('inf')
self.current_worst_position = np.full(self.num_dimensions, np.nan)
self.current_worst_fitness = -float('inf')
self.particular_best_positions = np.full((self.num_agents, self.num_dimensions), np.nan)
self.particular_best_fitness = np.full(self.num_agents, np.nan)
self.previous_positions = np.full((self.num_agents, self.num_dimensions), np.nan)
self.previous_velocities = np.full((self.num_agents, self.num_dimensions), np.nan)
self.previous_fitness = np.full(self.num_agents, np.nan)
self.is_constrained = is_constrained
# TODO Add capability for dealing with topologies (neighbourhoods)
# self.local_best_fitness = self.fitness
# self.local_best_positions = self.positions
# ===========
# BASIC TOOLS
# ===========
def get_state(self):
"""
Return a string containing the current state of the population, i.e.,
str = 'x_best = ARRAY, f_best = VALUE'
:returns: str
"""
return ('x_best = ' + str(self._rescale_back(self.global_best_position)) +
', f_best = ' + str(self.global_best_fitness))
def get_positions(self):
"""
Return the current population positions. Positions are represented in a matrix of size:
``positions.shape() = (num_agents, num_dimensions)``
**NOTE:** The position is rescaled from the normalised search space, i.e., [-1, 1]^num_dimensions.
:returns: numpy.ndarray
"""
return np.tile(self.centre_boundaries, (self.num_agents, 1)) + \
self.positions * np.tile(self.span_boundaries / 2., (self.num_agents, 1))
def set_positions(self, positions):
"""
Modify the current population positions. Positions are represented in a matrix of size:
``positions.shape() = (num_agents, num_dimensions)``
Note: The positions are normalised back to the [-1, 1] search space before being stored.
:param numpy.ndarray positions:
Population positions given as a num_agents-by-num_dimensions array in the original search space.
:returns: None.
"""
self.positions = 2. * (positions - np.tile(self.centre_boundaries, (self.num_agents, 1))) / np.tile(
self.span_boundaries, (self.num_agents, 1))
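The two methods above map between the normalised space [-1, 1]^D and the original boundaries via the per-dimension centre and span. The same arithmetic in plain Python (list-based, for illustration only):

```python
def rescale_back(position, lower, upper):
    # [-1, 1] -> [lower, upper]: centre + p * span / 2, per dimension
    return [(u + l) / 2 + p * (u - l) / 2
            for p, l, u in zip(position, lower, upper)]

def normalise(position, lower, upper):
    # [lower, upper] -> [-1, 1]: 2 * (x - centre) / span, per dimension
    return [2 * (x - (u + l) / 2) / (u - l)
            for x, l, u in zip(position, lower, upper)]
```
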
def update_positions(self, level='population', selector='all'):
"""
Update the population positions according to the level and selection scheme.
**NOTE:** When an operator (from the operators' module) is applied, it overwrites the positions in place, so
the selector logic is inverted with respect to its usual role: it decides whether to keep the new position or
revert to the previous one.
:param str level: optional
Update level: 'population' for the entire population, 'particular' for each agent and its historical
performance, and 'global' for the overall best solution. The default is 'population'.
:param str selector: optional
Selection method. The selectors available are: 'greedy', 'probabilistic', 'metropolis', 'all', and 'none'.
The default is 'all'.
:returns: None.
"""
# Update population positions, velocities and fitness
if level == 'population':
for agent in range(self.num_agents):
if self._selection(self.fitness[agent], self.previous_fitness[agent], selector):
# if new positions are improved, then update past register
self.previous_fitness[agent] = np.copy(self.fitness[agent])
self.previous_positions[agent, :] = np.copy(self.positions[agent, :])
self.previous_velocities[agent, :] = np.copy(self.velocities[agent, :])
else:
# ... otherwise, revert to the previous values
self.fitness[agent] = np.copy(self.previous_fitness[agent])
self.positions[agent, :] = np.copy(self.previous_positions[agent, :])
self.velocities[agent, :] = np.copy(self.previous_velocities[agent, :])
# Update the current best and worst positions (forced to greedy)
self.current_best_position = np.copy(self.positions[self.fitness.argmin(), :])
self.current_best_fitness = np.min(self.fitness)
self.current_worst_position = np.copy(self.positions[self.fitness.argmax(), :])
self.current_worst_fitness = np.max(self.fitness)
# Update particular positions, velocities and fitness
elif level == 'particular':
for agent in range(self.num_agents):
if self._selection(self.fitness[agent], self.particular_best_fitness[agent], selector):
self.particular_best_fitness[agent] = np.copy(self.fitness[agent])
self.particular_best_positions[agent, :] = np.copy(self.positions[agent, :])
# Update global positions, velocities and fitness
elif level == 'global':
# Perform particular updating (recursive)
self.update_positions('particular', selector)
# Read current global best agent
candidate_position = np.copy(self.particular_best_positions[self.particular_best_fitness.argmin(), :])
candidate_fitness = np.min(self.particular_best_fitness)
if self._selection(candidate_fitness, self.global_best_fitness, selector) or np.isinf(self.global_best_fitness):
self.global_best_position = np.copy(candidate_position)
self.global_best_fitness = np.copy(candidate_fitness)
# Raise an error
else:
raise PopulationError('Invalid update level')
def evaluate_fitness(self, problem_function):
"""
Evaluate the population positions in the problem function.
:param function problem_function:
A function that maps a 1-by-D array of real values to a real value.
:returns: None.
"""
# Read problem, it must be a callable function
assert callable(problem_function)
# Check simple constraints before evaluating
if self.is_constrained:
self._check_simple_constraints()
# Evaluate each agent in this function
for agent in range(self.num_agents):
self.fitness[agent] = problem_function(self._rescale_back(self.positions[agent, :]))
# ==============
# INITIALISATORS
# ==============
# TODO Add more initialisation operators like grid, boundary, etc.
def initialise_positions(self, scheme='random'):
"""
Initialise population by an initialisation scheme.
:param str scheme: optional
Initialisation scheme. Only 'random' and 'vertex' are available in the current version; further
initialisation methods are planned. 'random' draws agents from a uniform distribution in [-1, 1], while
'vertex' allocates them on the vertices of nested hyper-cubes. The default is 'random'.
:returns: None.
"""
if scheme == 'vertex':
self.positions = self._grid_matrix(self.num_dimensions, self.num_agents)
else:
self.positions = np.random.uniform(-1, 1, (self.num_agents, self.num_dimensions))
# ================
# INTERNAL METHODS
# ================
# Avoid using them outside
@staticmethod
def _grid_matrix(num_dimensions, num_agents):
total_vertices = 2 ** num_dimensions
basic_matrix = 2 * np.array(
[[int(x) for x in list(format(k, '0{}b'.format(num_dimensions)))] for k in range(total_vertices)]) - 1
output_matrix = np.copy(basic_matrix)
if num_agents > total_vertices:
num_matrices = int(np.ceil((num_agents - total_vertices) / total_vertices)) + 1
for k in range(1, num_matrices):
k_matrix = (1 - k / num_matrices) * basic_matrix
output_matrix = np.concatenate((output_matrix, k_matrix), axis=0)
output_matrix = output_matrix[:num_agents, :]
return output_matrix
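The binary-expansion trick in `_grid_matrix` enumerates the 2^D vertices of the hyper-cube [-1, 1]^D; the core of it in plain Python:

```python
def hypercube_vertices(num_dimensions):
    # k = 0 .. 2^D - 1, written in binary, with each bit mapped 0 -> -1, 1 -> +1
    return [[2 * int(bit) - 1 for bit in format(k, '0{}b'.format(num_dimensions))]
            for k in range(2 ** num_dimensions)]
```

`_grid_matrix` then stacks shrunken copies of this vertex set (nested cubes) until it has at least `num_agents` rows, and truncates.
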
def _check_simple_constraints(self):
"""
Check simple constraints for all the dimensions like:
-1 <= position <= 1, for all i in 1, 2, ..., num_dimensions
When an agent position is outside the search space, it is reallocated to the closest boundary and its velocity
component is set to zero.
**NOTE:** This check is performed only if Population.is_constrained = True.
:returns: None.
"""
# Check if there are nans values
if np.any(np.isnan(self.positions)):
np.nan_to_num(self.positions, copy=False, nan=1.0, posinf=1.0, neginf=-1.0)
# Check if agents are beyond lower boundaries
low_check = np.less(self.positions, -1.0)
if np.any(low_check):
# Fix them
self.positions[low_check] = -1.0
self.velocities[low_check] = 0.0
# Check if agents are beyond upper boundaries
upp_check = np.greater(self.positions, 1.0)
if np.any(upp_check):
# Fix them
self.positions[upp_check] = 1.0
self.velocities[upp_check] = 0.0
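The boundary handling above — clamp to the nearest edge of [-1, 1] and zero the matching velocity component — can be sketched without NumPy:

```python
def clamp_agent(position, velocity):
    # Reallocate out-of-range coordinates to the nearest boundary of
    # [-1, 1] and zero the corresponding velocity component.
    new_p, new_v = [], []
    for p, v in zip(position, velocity):
        if p < -1.0:
            new_p.append(-1.0)
            new_v.append(0.0)
        elif p > 1.0:
            new_p.append(1.0)
            new_v.append(0.0)
        else:
            new_p.append(p)
            new_v.append(v)
    return new_p, new_v
```
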
def _rescale_back(self, position):
"""
Rescale an agent position from [-1.0, 1.0] to the original search space boundaries per dimension.
:param numpy.ndarray position:
A position given by an array of 1-by-D with elements between [-1, 1].
:returns: ndarray
"""
return self.centre_boundaries + position * (self.span_boundaries / 2)
def _selection(self, new, old, selector='greedy'):
"""
Answer the question: 'should this new position be accepted?' To do so, a selection procedure is applied.
:param float new:
The fitness value associated with the candidate (new) position.
:param float old:
The fitness value associated with the incumbent (old) position.
:param str selector: optional
A selection scheme used for deciding if the new position is kept. The default is 'greedy'.
:returns: bool
"""
# Greedy selection
if selector == 'greedy':
selection_condition = new <= old
# Metropolis selection
elif selector == 'metropolis':
if new <= old:
selection_condition = True
else:
# Accept a worse solution with probability exp(-delta_f / (k_B * T)), where
# the temperature cools geometrically with the iteration counter.
selection_condition = bool(np.exp(-(new - old) / (
self.metropolis_boltzmann * self.metropolis_temperature *
(1 - self.metropolis_rate) ** self.iteration + 1e-23)) > np.random.rand())
(compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
'Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)',
'YahooSeeker/1.2 (compatible; Mozilla 4.0; MSIE 5.5; yahooseeker at yahoo-inc dot com ; http://help.yahoo.com/help/us/shop/merchant/)',
'Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/20090913 Firefox/3.5.3',
'Mozilla/5.0 (Windows; U; Windows NT 6.1; en; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)',
'Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)',
'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1',
'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.1 (KHTML, like Gecko) Chrome/4.0.219.6 Safari/532.1',
'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)',
'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)',
'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Win64; x64; Trident/4.0)',
'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; SV1; .NET CLR 2.0.50727; InfoPath.2)',
'Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)',
'Mozilla/4.0 (compatible; MSIE 6.1; Windows XP)',
'Opera/9.80 (Windows NT 5.2; U; ru) Presto/2.5.22 Version/10.51',
'AppEngine-Google; (+http://code.google.com/appengine; appid: webetrex)',
'Mozilla/5.0 (compatible; MSIE 9.0; AOL 9.7; AOLBuild 4343.19; Windows NT 6.1; WOW64; Trident/5.0; FunWebProducts)',
'Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.27; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)',
'Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.21; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)',
'Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; GTB7.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)',
'Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)',
'Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)',
'Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/20090913 Firefox/3.5.3',
'Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 2.0.50727)',
'Mozilla/5.0 (Windows; U; Windows NT 5.2; de-de; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)',
'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1 (.NET CLR 3.0.04506.648)',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; .NET4.0C; .NET4.0E',
'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.1 (KHTML, like Gecko) Chrome/4.0.219.6 Safari/532.1',
'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)',
'Opera/9.60 (J2ME/MIDP; Opera Mini/4.2.14912/812; U; ru) Presto/2.4.15',
'Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en-US) AppleWebKit/125.4 (KHTML, like Gecko, Safari) OmniWeb/v563.57',
'Mozilla/5.0 (SymbianOS/9.2; U; Series60/3.1 NokiaN95_8GB/31.0.015; Profile/MIDP-2.0 Configuration/CLDC-1.1 ) AppleWebKit/413 (KHTML, like Gecko) Safari/413',
'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)',
'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Win64; x64; Trident/4.0)',
'Mozilla/5.0 (Windows; U; WinNT4.0; en-US; rv:1.8.0.5) Gecko/20060706 K-Meleon/1.0',
'Lynx/2.8.6rel.4 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8g',
'Mozilla/4.76 [en] (PalmOS; U; WebPro/3.0.1a; Palm-Arz1)',
'Mozilla/5.0 (Macintosh; U; PPC Mac OS X; de-de) AppleWebKit/418 (KHTML, like Gecko) Shiira/1.2.2 Safari/125',
'Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.8.1.6) Gecko/2007072300 Iceweasel/2.0.0.6 (Debian-2.0.0.6-0etch1+lenny1)',
'Mozilla/5.0 (SymbianOS/9.1; U; en-us) AppleWebKit/413 (KHTML, like Gecko) Safari/413',
'Mozilla/4.0 (compatible; MSIE 6.1; Windows NT 5.1; Trident/4.0; SV1; .NET CLR 3.5.30729; InfoPath.2)',
'Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)',
'Links (2.2; GNU/kFreeBSD 6.3-1-486 i686; 80x25)',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; WOW64; Trident/4.0; SLCC1)',
'Mozilla/1.22 (compatible; Konqueror/4.3; Linux) KHTML/4.3.5 (like Gecko)',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows CE; IEMobile 6.5)',
'Opera/9.80 (Macintosh; U; de-de) Presto/2.8.131 Version/11.10',
'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100318 Mandriva/2.0.4-69.1mib2010.0 SeaMonkey/2.0.4',
'Mozilla/4.0 (compatible; MSIE 6.1; Windows XP) Gecko/20060706 IEMobile/7.0',
'Mozilla/5.0 (iPad; U; CPU OS 3_2 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Version/4.0.4 Mobile/7B334b Safari/531.21.10',
'Mozilla/5.0 (Macintosh; I; Intel Mac OS X 10_6_7; ru-ru)',
'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
'Mozilla/1.22 (compatible; MSIE 6.0; Windows NT 6.1; Trident/4.0; GTB6; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; OfficeLiveConnector.1.4; OfficeLivePatch.1.3)',
'Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)',
'Mozilla/4.0 (Macintosh; U; Intel Mac OS X 10_6_7; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.205 Safari/534.16',
'Mozilla/1.22 (X11; U; Linux x86_64; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1',
'Mozilla/5.0 (compatible; MSIE 2.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.0.30729; InfoPath.2)',
'Opera/9.80 (Windows NT 5.2; U; ru) Presto/2.5.22 Version/10.51',
'Mozilla/5.0 (compatible; MSIE 2.0; Windows CE; IEMobile 7.0)',
'Mozilla/4.0 (Macintosh; U; PPC Mac OS X; en-US)',
'Mozilla/5.0 (Windows; U; Windows NT 6.0; en; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7',
'BlackBerry8300/4.2.2 Profile/MIDP-2.0 Configuration/CLDC-1.1 VendorID/107 UP.Link/6.2.3.15.0',
'Mozilla/1.22 (compatible; MSIE 2.0; Windows 3.1)',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; Avant Browser [avantbrowser.com]; iOpus-I-M; QXW03416; .NET CLR 1.1.4322)',
'Mozilla/3.0 (Windows NT 6.1; ru-ru; rv:1.9.1.3.) Win32; x86 Firefox/3.5.3 (.NET CLR 2.0.50727)',
'Opera/7.0 (compatible; MSIE 2.0; Windows 3.1)',
'Opera/9.80 (Windows NT 5.1; U; en-US) Presto/2.8.131 Version/11.10',
'Mozilla/4.0 (compatible; MSIE 6.0; America Online Browser 1.1; rev1.5; Windows NT 5.1;)',
'Mozilla/5.0 (Windows; U; Windows CE 4.21; rv:1.8b4) Gecko/20050720 Minimo/0.007',
'BlackBerry9000/5.0.0.93 Profile/MIDP-2.0 Configuration/CLDC-1.1 VendorID/179',
'Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/20090913 Firefox/3.5.3',
'Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 2.0.50727)',
'Mozilla/5.0 (Windows; U; Windows NT 5.2; de-de; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)',
'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1 (.NET CLR 3.0.04506.648)',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; .NET4.0C; .NET4.0E',
'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.1 (KHTML, like Gecko) Chrome/4.0.219.6 Safari/532.1',
'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)',
'Opera/9.60 (J2ME/MIDP; Opera Mini/4.2.14912/812; U; ru) Presto/2.4.15',
'Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en-US) AppleWebKit/125.4 (KHTML, like Gecko, Safari) OmniWeb/v563.57',
'Mozilla/5.0 (SymbianOS/9.2; U; Series60/3.1 NokiaN95_8GB/31.0.015; Profile/MIDP-2.0 Configuration/CLDC-1.1 ) AppleWebKit/413 (KHTML, like Gecko) Safari/413',
'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)',
'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Win64; x64; Trident/4.0)',
'Mozilla/5.0 (Windows; U; WinNT4.0; en-US; rv:1.8.0.5) Gecko/20060706 K-Meleon/1.0',
'Lynx/2.8.6rel.4 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8g',
'Mozilla/4.76 [en] (PalmOS; U; WebPro/3.0.1a; Palm-Arz1)',
'Mozilla/5.0 (Macintosh; U; PPC Mac OS X; de-de) AppleWebKit/418 (KHTML, like Gecko) Shiira/1.2.2 Safari/125',
'Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.8.1.6) Gecko/2007072300 Iceweasel/2.0.0.6 (Debian-2.0.0.6-0etch1+lenny1)',
'Mozilla/5.0 (SymbianOS/9.1; U; en-us) AppleWebKit/413 (KHTML, like Gecko) Safari/413',
'Mozilla/4.0 (compatible; MSIE 6.1; Windows NT 5.1; Trident/4.0; SV1; .NET CLR 3.5.30729; InfoPath.2)',
'Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)',
'Links (2.2; GNU/kFreeBSD 6.3-1-486 i686; 80x25)',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; WOW64; Trident/4.0; SLCC1)',
'Mozilla/1.22 (compatible; Konqueror/4.3; Linux) KHTML/4.3.5 (like Gecko)',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows CE; IEMobile 6.5)',
'Opera/9.80 (Macintosh; U; de-de) Presto/2.8.131 Version/11.10',
'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100318 Mandriva/2.0.4-69.1mib2010.0 SeaMonkey/2.0.4',
'Mozilla/4.0 (compatible; MSIE 6.1; Windows XP) Gecko/20060706 IEMobile/7.0',
'Mozilla/5.0 (iPad; U; CPU OS 3_2 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Version/4.0.4 Mobile/7B334b Safari/531.21.10',
'Mozilla/5.0 (Macintosh; I; Intel Mac OS X 10_6_7; ru-ru)',
'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
'Mozilla/1.22 (compatible; MSIE 6.0; Windows NT 6.1; Trident/4.0; GTB6; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; OfficeLiveConnector.1.4; OfficeLivePatch.1.3)',
'Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)',
'Mozilla/4.0 (Macintosh; U; Intel Mac OS X 10_6_7; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.205 Safari/534.16',
'Mozilla/1.22 (X11; U; Linux x86_64; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1',
'Mozilla/5.0 (compatible; MSIE 2.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.0.30729; InfoPath.2)',
'Opera/9.80 (Windows NT 5.2; U; ru) Presto/2.5.22 Version/10.51',
'Mozilla/5.0 (compatible; MSIE 2.0; Windows CE; IEMobile 7.0)',
'Mozilla/4.0 (Macintosh; U; PPC Mac OS X; en-US)',
'Mozilla/5.0 (Windows; U; Windows NT 6.0; en; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7',
'BlackBerry8300/4.2.2 Profile/MIDP-2.0 Configuration/CLDC-1.1 VendorID/107 UP.Link/6.2.3.15.0',
'Mozilla/1.22 (compatible; MSIE 2.0; Windows 3.1)',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; Avant Browser [avantbrowser.com]; iOpus-I-M; QXW03416; .NET CLR 1.1.4322)',
'Mozilla/3.0 (Windows NT 6.1; ru-ru; rv:1.9.1.3.) Win32; x86 Firefox/3.5.3 (.NET CLR 2.0.50727)',
'Opera/7.0 (compatible; MSIE 2.0; Windows 3.1)',
'Opera/9.80 (Windows NT
@property
def lending_partner_fee(self) -> float:
"""Fee earned by the Lending Partner in a securities lending agreement."""
return self.__lending_partner_fee
@lending_partner_fee.setter
def lending_partner_fee(self, value: float):
self._property_changed('lending_partner_fee')
self.__lending_partner_fee = value
@property
def region(self) -> str:
"""Regional classification for the asset"""
return self.__region
@region.setter
def region(self, value: str):
self._property_changed('region')
self.__region = value
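The properties above all follow the same observable-attribute pattern: each setter records which field changed before storing the new value. A minimal standalone sketch of that pattern (the `Observable` class name and `_changed` list are illustrative, not part of the original):

```python
class Observable:
    """Minimal sketch of the property pattern used above: each setter
    records which field changed before storing the new value."""

    def __init__(self):
        self._changed = []   # names of fields that have been modified
        self.__region = None

    def _property_changed(self, name):
        self._changed.append(name)

    @property
    def region(self) -> str:
        """Regional classification for the asset"""
        return self.__region

    @region.setter
    def region(self, value: str):
        self._property_changed('region')
        self.__region = value


obs = Observable()
obs.region = 'EMEA'
```

This lets a base class track dirty fields (for serialization or change notification) without each property having to do anything beyond the one `_property_changed` call.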
@property
def option_premium(self) -> float:
"""An indication of the market value of the option at the time of execution."""
return self.__option_premium
@option_premium.setter
def option_premium(self, value: float):
self._property_changed('option_premium')
self.__option_premium = value
@property
def owner_name(self) -> str:
"""Name of person submitting request."""
return self.__owner_name
@owner_name.setter
def owner_name(self, value: str):
self._property_changed('owner_name')
self.__owner_name = value
@property
def last_updated_by_id(self) -> str:
"""Unique identifier of user who last updated the object"""
return self.__last_updated_by_id
@last_updated_by_id.setter
def last_updated_by_id(self, value: str):
self._property_changed('last_updated_by_id')
self.__last_updated_by_id = value
@property
def z_score(self) -> float:
"""Z Score."""
return self.__z_score
@z_score.setter
def z_score(self, value: float):
self._property_changed('z_score')
self.__z_score = value
@property
def legal_entity_acct(self) -> str:
"""Account assoicated with the entity that has legal rights to the fund."""
return self.__legal_entity_acct
@legal_entity_acct.setter
def legal_entity_acct(self, value: str):
self._property_changed('legal_entity_acct')
self.__legal_entity_acct = value
@property
def target_shareholder_meeting_date(self) -> str:
"""Target acquisition entity shareholder meeting date."""
return self.__target_shareholder_meeting_date
@target_shareholder_meeting_date.setter
def target_shareholder_meeting_date(self, value: str):
self._property_changed('target_shareholder_meeting_date')
self.__target_shareholder_meeting_date = value
@property
def event_start_time(self) -> str:
"""The start time of the event if the event occurs during a time window and the
event has a specific start time. It is represented in HH:MM 24 hour
format in the time zone of the exchange where the company is listed."""
return self.__event_start_time
@event_start_time.setter
def event_start_time(self, value: str):
self._property_changed('event_start_time')
self.__event_start_time = value
@property
def turnover(self) -> float:
"""Turnover."""
return self.__turnover
@turnover.setter
def turnover(self, value: float):
self._property_changed('turnover')
self.__turnover = value
@property
def price_spot_target_unit(self) -> str:
"""Unit in which the target price is reported."""
return self.__price_spot_target_unit
@price_spot_target_unit.setter
def price_spot_target_unit(self, value: str):
self._property_changed('price_spot_target_unit')
self.__price_spot_target_unit = value
@property
def compliance_effective_time(self) -> datetime.datetime:
"""Time that the compliance status became effective."""
return self.__compliance_effective_time
@compliance_effective_time.setter
def compliance_effective_time(self, value: datetime.datetime):
self._property_changed('compliance_effective_time')
self.__compliance_effective_time = value
@property
def expiration_date(self) -> datetime.date:
"""The expiration date of the associated contract and the last date it trades."""
return self.__expiration_date
@expiration_date.setter
def expiration_date(self, value: datetime.date):
self._property_changed('expiration_date')
self.__expiration_date = value
@property
def leg_one_type(self) -> str:
"""Indication if leg 1 is fixed or floating or Physical."""
return self.__leg_one_type
@leg_one_type.setter
def leg_one_type(self, value: str):
self._property_changed('leg_one_type')
self.__leg_one_type = value
@property
def leg_two_spread(self) -> float:
"""Spread of leg."""
return self.__leg_two_spread
@leg_two_spread.setter
def leg_two_spread(self, value: float):
self._property_changed('leg_two_spread')
self.__leg_two_spread = value
@property
def coverage(self) -> str:
"""Coverage of dataset."""
return self.__coverage
@coverage.setter
def coverage(self, value: str):
self._property_changed('coverage')
self.__coverage = value
@property
def g_percentile(self) -> float:
"""Percentile based on G score."""
return self.__g_percentile
@g_percentile.setter
def g_percentile(self, value: float):
self._property_changed('g_percentile')
self.__g_percentile = value
@property
def lending_fund_nav(self) -> float:
"""Net Asset Value of a securities lending fund."""
return self.__lending_fund_nav
@lending_fund_nav.setter
def lending_fund_nav(self, value: float):
self._property_changed('lending_fund_nav')
self.__lending_fund_nav = value
@property
def source_original_category(self) -> str:
"""Source category's original name."""
return self.__source_original_category
@source_original_category.setter
def source_original_category(self, value: str):
self._property_changed('source_original_category')
self.__source_original_category = value
@property
def composite5_day_adv(self) -> float:
"""Composite 5 day ADV."""
return self.__composite5_day_adv
@composite5_day_adv.setter
def composite5_day_adv(self, value: float):
self._property_changed('composite5_day_adv')
self.__composite5_day_adv = value
@property
def latest_execution_time(self) -> datetime.datetime:
"""ISO 8601-formatted timestamp"""
return self.__latest_execution_time
@latest_execution_time.setter
def latest_execution_time(self, value: datetime.datetime):
self._property_changed('latest_execution_time')
self.__latest_execution_time = value
@property
def close_date(self) -> datetime.date:
"""Date the trade idea was closed."""
return self.__close_date
@close_date.setter
def close_date(self, value: datetime.date):
self._property_changed('close_date')
self.__close_date = value
@property
def new_ideas_wtd(self) -> float:
"""Ideas received by clients Week to date."""
return self.__new_ideas_wtd
@new_ideas_wtd.setter
def new_ideas_wtd(self, value: float):
self._property_changed('new_ideas_wtd')
self.__new_ideas_wtd = value
@property
def asset_class_sdr(self) -> str:
"""An indication of one of the broad categories. For our use case will typically be
CO."""
return self.__asset_class_sdr
@asset_class_sdr.setter
def asset_class_sdr(self, value: str):
self._property_changed('asset_class_sdr')
self.__asset_class_sdr = value
@property
def comment(self) -> str:
"""The comment associated with the trade idea in URL encoded format."""
return self.__comment
@comment.setter
def comment(self, value: str):
self._property_changed('comment')
self.__comment = value
@property
def source_symbol(self) -> str:
"""Source symbol."""
return self.__source_symbol
@source_symbol.setter
def source_symbol(self, value: str):
self._property_changed('source_symbol')
self.__source_symbol = value
@property
def scenario_id(self) -> str:
"""Marquee unique scenario identifier"""
return self.__scenario_id
@scenario_id.setter
def scenario_id(self, value: str):
self._property_changed('scenario_id')
self.__scenario_id = value
@property
def ask_unadjusted(self) -> float:
"""Unadjusted ask level of an asset based on official exchange fixing or
calculation agent marked level."""
return self.__ask_unadjusted
@ask_unadjusted.setter
def ask_unadjusted(self, value: float):
self._property_changed('ask_unadjusted')
self.__ask_unadjusted = value
@property
def queue_clock_time(self) -> float:
"""The Queue Clock Time of the stock on the particular date."""
return self.__queue_clock_time
@queue_clock_time.setter
def queue_clock_time(self, value: float):
self._property_changed('queue_clock_time')
self.__queue_clock_time = value
@property
def restrict_external_derived_data(self) -> bool:
"""Restricts Ability to Use Externally as Part of Derived Data."""
return self.__restrict_external_derived_data
@restrict_external_derived_data.setter
def restrict_external_derived_data(self, value: bool):
self._property_changed('restrict_external_derived_data')
self.__restrict_external_derived_data = value
@property
def ask_change(self) -> float:
"""Change in the ask price."""
return self.__ask_change
@ask_change.setter
def ask_change(self, value: float):
self._property_changed('ask_change')
self.__ask_change = value
@property
def tcm_cost_participation_rate50_pct(self) -> float:
"""TCM cost with a 50 percent participation rate."""
return self.__tcm_cost_participation_rate50_pct
@tcm_cost_participation_rate50_pct.setter
def tcm_cost_participation_rate50_pct(self, value: float):
self._property_changed('tcm_cost_participation_rate50_pct')
self.__tcm_cost_participation_rate50_pct = value
@property
def end_date(self) -> datetime.date:
"""The maturity, termination, or end date of the reportable SB swap transaction."""
return self.__end_date
@end_date.setter
def end_date(self, value: datetime.date):
self._property_changed('end_date')
self.__end_date = value
@property
def contract_type(self) -> str:
"""Contract type."""
return self.__contract_type
@contract_type.setter
def contract_type(self, value: str):
self._property_changed('contract_type')
self.__contract_type = value
@property
def type(self) -> str:
"""Asset type differentiates the product categorization or contract type"""
return self.__type
@type.setter
def type(self, value: str):
self._property_changed('type')
self.__type = value
@property
def strike_ref(self) -> str:
"""Reference for strike level (enum: spot, forward,delta_call, delta_put,
delta_neutral)."""
return self.__strike_ref
@strike_ref.setter
def strike_ref(self, value: str):
self._property_changed('strike_ref')
self.__strike_ref = value
@property
def cumulative_pnl(self) -> float:
"""Cumulative PnL from the start date to the current date."""
return self.__cumulative_pnl
@cumulative_pnl.setter
def cumulative_pnl(self, value: float):
self._property_changed('cumulative_pnl')
self.__cumulative_pnl = value
@property
def loss(self) -> float:
"""Loss price component."""
return self.__loss
@loss.setter
def loss(self, value: float):
self._property_changed('loss')
self.__loss = value
@property
def unadjusted_volume(self) -> float:
"""Unadjusted volume traded."""
return self.__unadjusted_volume
@unadjusted_volume.setter
def unadjusted_volume(self, value: float):
self._property_changed('unadjusted_volume')
self.__unadjusted_volume = value
@property
def midcurve_vol(self) -> float:
"""Historical implied normal volatility for a liquid point on midcurve vol surface."""
return self.__midcurve_vol
@midcurve_vol.setter
def midcurve_vol(self, value: float):
self._property_changed('midcurve_vol')
self.__midcurve_vol = value
@property
def trading_cost_pnl(self) -> float:
"""Trading cost profit and loss (PNL)."""
return self.__trading_cost_pnl
@trading_cost_pnl.setter
def trading_cost_pnl(self, value: float):
self._property_changed('trading_cost_pnl')
self.__trading_cost_pnl = value
@property
def price_notation_type(self) -> str:
"""Basis points, Price, Yield, Spread, Coupon, etc., depending on the type of SB
swap, which is calculated at affirmation."""
return self.__price_notation_type
@price_notation_type.setter
def price_notation_type(self, value: str):
self._property_changed('price_notation_type')
self.__price_notation_type = value
@property
def payment_quantity(self) -> float:
"""Quantity in the payment currency."""
return self.__payment_quantity
@payment_quantity.setter
def payment_quantity(self, value: float):
self._property_changed('payment_quantity')
self.__payment_quantity = value
@property
def position_idx(self) -> int:
"""The index of the corresponding position in the risk request."""
return self.__position_idx
@position_idx.setter
def position_idx(self, value: int):
self._property_changed('position_idx')
self.__position_idx = value
@property
def implied_volatility_by_relative_strike(self) -> float:
"""Volatility of an asset implied by observations of market prices."""
return self.__implied_volatility_by_relative_strike
@implied_volatility_by_relative_strike.setter
def implied_volatility_by_relative_strike(self, value: float):
self._property_changed('implied_volatility_by_relative_strike')
self.__implied_volatility_by_relative_strike = value
@property
def percent_adv(self) -> float:
"""Size of trade as percentage of average daily volume (e.g. .05, 1, 2, ..., 20)."""
return self.__percent_adv
@percent_adv.setter
def percent_adv(self, value: float):
self._property_changed('percent_adv')
self.__percent_adv = value
@property
def sub_region(self) -> str:
"""Filter by any identifier of an asset like ticker, bloomberg id etc."""
return self.__sub_region
@sub_region.setter
def sub_region(self, value: str):
self._property_changed('sub_region')
self.__sub_region = value
@property
def contract(self) -> str:
"""Contract month code and year (e.g. F18)."""
return self.__contract
@contract.setter
def contract(self, value: str):
self._property_changed('contract')
self.__contract = value
@property
def payment_frequency1(self) -> str:
"""An integer multiplier of a time period describing how often the parties to the
SB swap transaction exchange payments associated with each party's
obligation. Such payment frequency may be described as one letter
preceded by an integer."""
return self.__payment_frequency1
@payment_frequency1.setter
def payment_frequency1(self, value: str):
self._property_changed('payment_frequency1')
self.__payment_frequency1 = value
from __future__ import with_statement
from binascii import unhexlify
import contextlib
from functools import wraps, partial
import hashlib, logging
log = logging.getLogger(__name__)
import random, re, os, sys, tempfile, threading, time
from otp.ai.passlib.exc import PasslibHashWarning, PasslibConfigWarning
from otp.ai.passlib.utils.compat import PY3, JYTHON
import warnings
from warnings import warn
from otp.ai.passlib import exc
from otp.ai.passlib.exc import MissingBackendError
import otp.ai.passlib.registry as registry
from otp.ai.passlib.tests.backports import TestCase as _TestCase, skip, skipIf, skipUnless, SkipTest
from otp.ai.passlib.utils import has_rounds_info, has_salt_info, rounds_cost_values, rng as sys_rng, getrandstr, is_ascii_safe, to_native_str, repeat_string, tick, batch
from otp.ai.passlib.utils.compat import iteritems, irange, u, unicode, PY2
from otp.ai.passlib.utils.decor import classproperty
import otp.ai.passlib.utils.handlers as uh
__all__ = [
'TEST_MODE',
'set_file', 'get_file',
'TestCase',
'HandlerCase']
try:
import google.appengine
except ImportError:
GAE = False
else:
GAE = True
def ensure_mtime_changed(path):
last = os.path.getmtime(path)
while os.path.getmtime(path) == last:
time.sleep(0.1)
os.utime(path, None)
return
def _get_timer_resolution(timer):
def sample():
start = cur = timer()
while start == cur:
cur = timer()
return cur - start
return min(sample() for _ in range(3))
TICK_RESOLUTION = _get_timer_resolution(tick)
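`_get_timer_resolution` estimates the smallest observable step of a clock by busy-waiting until its value changes and taking the minimum over a few samples. The same measurement against `time.perf_counter` (the function name mirrors the original; the clock choice here is illustrative):

```python
import time

def get_timer_resolution(timer):
    # Busy-wait until the timer advances, then report the smallest
    # observed step over a few samples.
    def sample():
        start = cur = timer()
        while start == cur:
            cur = timer()
        return cur - start
    return min(sample() for _ in range(3))

resolution = get_timer_resolution(time.perf_counter)
```

The test suite uses this resolution to decide how long it must sleep before a timing-sensitive assertion can be trusted.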
_TEST_MODES = [
'quick', 'default', 'full']
_test_mode = _TEST_MODES.index(os.environ.get('PASSLIB_TEST_MODE', 'default').strip().lower())
def TEST_MODE(min=None, max=None):
if min and _test_mode < _TEST_MODES.index(min):
return False
if max and _test_mode > _TEST_MODES.index(max):
return False
return True
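`TEST_MODE` reads `PASSLIB_TEST_MODE` once at import time and then answers range queries by comparing positions in the `quick`/`default`/`full` ordering. A standalone sketch of the same gating logic (the `_mode` override parameter is added here purely so the behavior can be demonstrated without touching the environment):

```python
import os

_MODES = ['quick', 'default', 'full']

def active_mode():
    # Mirrors the original: mode comes from PASSLIB_TEST_MODE,
    # defaulting to 'default'.
    return _MODES.index(os.environ.get('PASSLIB_TEST_MODE', 'default').strip().lower())

def test_mode(min=None, max=None, _mode=None):
    # True iff the active mode falls inside the [min, max] window.
    if _mode is None:
        _mode = active_mode()
    if min and _mode < _MODES.index(min):
        return False
    if max and _mode > _MODES.index(max):
        return False
    return True
```

So `TEST_MODE(min='full')` is the usual guard for expensive tests: they run only when the suite is invoked with `PASSLIB_TEST_MODE=full`.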
def has_relaxed_setting(handler):
if hasattr(handler, 'orig_prefix'):
return False
return 'relaxed' in handler.setting_kwds or issubclass(handler, uh.GenericHandler)
def get_effective_rounds(handler, rounds=None):
handler = unwrap_handler(handler)
return handler(rounds=rounds, use_defaults=True).rounds
def is_default_backend(handler, backend):
try:
orig = handler.get_backend()
except MissingBackendError:
return False
try:
handler.set_backend('default')
return handler.get_backend() == backend
finally:
handler.set_backend(orig)
def iter_alt_backends(handler, current=None, fallback=False):
if current is None:
current = handler.get_backend()
backends = handler.backends
idx = backends.index(current) + 1 if fallback else 0
for backend in backends[idx:]:
if backend != current and handler.has_backend(backend):
yield backend
return
def get_alt_backend(*args, **kwds):
for backend in iter_alt_backends(*args, **kwds):
return backend
return
def unwrap_handler(handler):
while hasattr(handler, 'wrapped'):
handler = handler.wrapped
return handler
def handler_derived_from(handler, base):
if handler == base:
return True
if isinstance(handler, uh.PrefixWrapper):
while handler:
if handler == base:
return True
handler = handler._derived_from
return False
if isinstance(handler, type) and issubclass(handler, uh.MinimalHandler):
return issubclass(handler, base)
raise NotImplementedError("don't know how to inspect handler: %r" % (handler,))
@contextlib.contextmanager
def patch_calc_min_rounds(handler):
if isinstance(handler, type) and issubclass(handler, uh.HasRounds):
wrapped = handler._calc_checksum
def wrapper(self, *args, **kwds):
rounds = self.rounds
try:
self.rounds = self.min_rounds
return wrapped(self, *args, **kwds)
finally:
self.rounds = rounds
handler._calc_checksum = wrapper
try:
yield
finally:
handler._calc_checksum = wrapped
else:
if isinstance(handler, uh.PrefixWrapper):
with patch_calc_min_rounds(handler.wrapped):
yield
else:
yield
return
def set_file(path, content):
if isinstance(content, unicode):
content = content.encode('utf-8')
with open(path, 'wb') as (fh):
fh.write(content)
def get_file(path):
with open(path, 'rb') as (fh):
return fh.read()
def tonn(source):
if not isinstance(source, str):
return source
if PY3:
return source.encode('utf-8')
try:
return source.decode('utf-8')
except UnicodeDecodeError:
return source.decode('latin-1')
def hb(source):
return unhexlify(re.sub('\\s', '', source))
def limit(value, lower, upper):
if value < lower:
return lower
if value > upper:
return upper
return value
def quicksleep(delay):
start = tick()
while tick() - start < delay:
pass
def time_call(func, setup=None, maxtime=1, bestof=10):
from timeit import Timer
from math import log
timer = Timer(func, setup=setup or '')
number = 1
end = tick() + maxtime
while True:
delta = min(timer.repeat(bestof, number))
if tick() >= end:
return (delta / number, int(log(number, 10)))
number *= 10
def run_with_fixed_seeds(count=128, master_seed=2611923443488327891L):
def builder(func):
@wraps(func)
def wrapper(*args, **kwds):
rng = random.Random(master_seed)
for _ in irange(count):
kwds['seed'] = rng.getrandbits(32)
func(*args, **kwds)
return wrapper
return builder
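`run_with_fixed_seeds` derives every per-iteration seed deterministically from one fixed master seed, so a failing random test reproduces identically on every run. The same idea in a standalone Python 3 sketch (the `record` function and small `count` are illustrative):

```python
import random
from functools import wraps

def run_with_fixed_seeds(count=4, master_seed=2611923443488327891):
    """Call the wrapped function `count` times, each with a seed derived
    deterministically from `master_seed`, so failures are reproducible."""
    def builder(func):
        @wraps(func)
        def wrapper(*args, **kwds):
            rng = random.Random(master_seed)
            for _ in range(count):
                kwds['seed'] = rng.getrandbits(32)
                func(*args, **kwds)
        return wrapper
    return builder

seen = []

@run_with_fixed_seeds(count=3)
def record(seed=None):
    seen.append(seed)

record()
first_run = list(seen)
seen.clear()
record()
assert seen == first_run  # identical seeds on every run
```

Because the `random.Random` instance is rebuilt from the same master seed on every call, the sequence of derived seeds never varies between runs or machines.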
class TestCase(_TestCase):
descriptionPrefix = None
def shortDescription(self):
desc = super(TestCase, self).shortDescription()
prefix = self.descriptionPrefix
if prefix:
desc = '%s: %s' % (prefix, desc or str(self))
return desc
@classproperty
def __unittest_skip__(cls):
name = cls.__name__
return name.startswith('_') or getattr(cls, '_%s__unittest_skip' % name, False)
@classproperty
def __test__(cls):
return not cls.__unittest_skip__
__unittest_skip = True
resetWarningState = True
def setUp(self):
super(TestCase, self).setUp()
self.setUpWarnings()
def setUpWarnings(self):
if self.resetWarningState:
ctx = reset_warnings()
ctx.__enter__()
self.addCleanup(ctx.__exit__)
warnings.filterwarnings('ignore', 'the method .*\\.(encrypt|genconfig|genhash)\\(\\) is deprecated')
warnings.filterwarnings('ignore', "the 'vary_rounds' option is deprecated")
longMessage = True
def _formatMessage(self, msg, std):
if self.longMessage and msg and msg.rstrip().endswith(':'):
return '%s %s' % (msg.rstrip(), std)
return msg or std
def assertRaises(self, _exc_type, _callable=None, *args, **kwds):
msg = kwds.pop('__msg__', None)
if _callable is None:
return super(TestCase, self).assertRaises(_exc_type, None, *args, **kwds)
try:
result = _callable(*args, **kwds)
except _exc_type as err:
return err
std = 'function returned %r, expected it to raise %r' % (result,
_exc_type)
raise self.failureException(self._formatMessage(msg, std))
return
def assertEquals(self, *a, **k):
raise AssertionError('this alias is deprecated by unittest2')
assertNotEquals = assertRegexMatches = assertEquals
def assertWarning(self, warning, message_re=None, message=None, category=None, filename_re=None, filename=None, lineno=None, msg=None):
if hasattr(warning, 'category'):
wmsg = warning
warning = warning.message
else:
wmsg = None
if message:
self.assertEqual(str(warning), message, msg)
if message_re:
self.assertRegex(str(warning), message_re, msg)
if category:
self.assertIsInstance(warning, category, msg)
if filename or filename_re:
if not wmsg:
raise TypeError('matching on filename requires a WarningMessage instance')
real = wmsg.filename
if real.endswith('.pyc') or real.endswith('.pyo'):
real = real[:-1]
if filename:
self.assertEqual(real, filename, msg)
if filename_re:
self.assertRegex(real, filename_re, msg)
if lineno:
if not wmsg:
raise TypeError('matching on lineno requires a WarningMessage instance')
self.assertEqual(wmsg.lineno, lineno, msg)
return
class _AssertWarningList(warnings.catch_warnings):
def __init__(self, case, **kwds):
self.case = case
self.kwds = kwds
self.__super = super(TestCase._AssertWarningList, self)
self.__super.__init__(record=True)
def __enter__(self):
self.log = self.__super.__enter__()
def __exit__(self, *exc_info):
self.__super.__exit__(*exc_info)
if exc_info[0] is None:
self.case.assertWarningList(self.log, **self.kwds)
return
def assertWarningList(self, wlist=None, desc=None, msg=None):
if desc is None:
return self._AssertWarningList(self, desc=wlist, msg=msg)
if not isinstance(desc, (list, tuple)):
desc = [
desc]
for idx, entry in enumerate(desc):
if isinstance(entry, str):
entry = dict(message_re=entry)
else:
if isinstance(entry, type) and issubclass(entry, Warning):
entry = dict(category=entry)
else:
if not isinstance(entry, dict):
raise TypeError('entry must be str, warning, or dict')
try:
data = wlist[idx]
except IndexError:
break
else:
self.assertWarning(data, msg=msg, **entry)
else:
if len(wlist) == len(desc):
return
std = 'expected %d warnings, found %d: wlist=%s desc=%r' % (
len(desc), len(wlist), self._formatWarningList(wlist), desc)
raise self.failureException(self._formatMessage(msg, std))
return
def consumeWarningList(self, wlist, desc=None, *args, **kwds):
if desc is None:
desc = []
self.assertWarningList(wlist, desc, *args, **kwds)
del wlist[:]
return
def _formatWarning(self, entry):
tail = ''
if hasattr(entry, 'message'):
tail = ' filename=%r lineno=%r' % (entry.filename, entry.lineno)
if entry.line:
tail += ' line=%r' % (entry.line,)
entry = entry.message
cls = type(entry)
return '<%s.%s message=%r%s>' % (cls.__module__, cls.__name__,
str(entry), tail)
def _formatWarningList(self, wlist):
return '[%s]' % (', ').join(self._formatWarning(entry) for entry in wlist)
def require_stringprep(self):
from otp.ai.passlib.utils import stringprep
if not stringprep:
from otp.ai.passlib.utils import _stringprep_missing_reason
raise self.skipTest('not available - stringprep module is ' + _stringprep_missing_reason)
def require_TEST_MODE(self, level):
if not TEST_MODE(level):
raise self.skipTest('requires >= %r test mode' % level)
def require_writeable_filesystem(self):
if GAE:
return self.skipTest("GAE doesn't offer read/write filesystem access")
_random_global_lock = threading.Lock()
_random_global_seed = None
_random_cache = None
def getRandom(self, name='default', seed=None):
cache = self._random_cache
if cache and name in cache:
return cache[name]
with self._random_global_lock:
cache = self._random_cache
if cache and name in cache:
return cache[name]
if not cache:
cache = self._random_cache = {}
global_seed = seed or TestCase._random_global_seed
if global_seed is None:
global_seed = TestCase._random_global_seed = int(os.environ.get('RANDOM_TEST_SEED') or os.environ.get('PYTHONHASHSEED') or sys_rng.getrandbits(32))
log.info('using RANDOM_TEST_SEED=%d', global_seed)
cls = type(self)
source = ('\n').join([str(global_seed), cls.__module__, cls.__name__,
self._testMethodName, name])
digest = hashlib.sha256(source.encode('utf-8')).hexdigest()
seed = int(digest[:16], 16)
value = cache[name] = random.Random(seed)
return value
return
_mktemp_queue = None
def mktemp(self, *args, **kwds):
self.require_writeable_filesystem()
fd, path = tempfile.mkstemp(*args, **kwds)
os.close(fd)
queue = self._mktemp_queue
if queue is None:
queue = self._mktemp_queue = []
def cleaner():
for path in queue:
if os.path.exists(path):
os.remove(path)
del queue[:]
self.addCleanup(cleaner)
queue.append(path)
return path
def patchAttr(self, obj, attr, value, require_existing=True, wrap=False):
try:
orig = getattr(obj, attr)
except AttributeError:
if require_existing:
raise
def cleanup():
try:
delattr(obj, attr)
except AttributeError:
pass
self.addCleanup(cleanup)
else:
self.addCleanup(setattr, obj, attr, orig)
if wrap:
value = partial(value, orig)
wraps(orig)(value)
setattr(obj, attr, value)
RESERVED_BACKEND_NAMES = [
'any', 'default']
class HandlerCase(TestCase):
handler = None
backend = None
known_correct_hashes = []
known_correct_configs = []
known_alternate_hashes = []
known_unidentified_hashes = []
known_malformed_hashes = []
known_other_hashes = [
('des_crypt', '6f8c114b58f2c'),
('md5_crypt', '$1$dOHYPKoP$tnxS1T8Q6VVn3kpV8cN6o.'),
('sha512_crypt', '$6$rounds=123456$asaltof16chars..$BtCwjqMJGx5hrJhZywWvt0RLE8uZ4oPwcelCjmw2kSYu.Ec6ycULevoBK25fs2xXgMNrCzIMVcgEJAstJeonj1')]
stock_passwords = [
u('test'),
u('\\u20AC\\u00A5$'),
'\xe2\x82\xac\xc2\xa5$']
secret_case_insensitive = False
accepts_all_hashes = False
disabled_contains_salt = False
filter_config_warnings = False
@classproperty
def forbidden_characters(cls):
if 'os_crypt' in getattr(cls.handler, 'backends', ()):
return '\x00'
return
__unittest_skip = True
@property
def descriptionPrefix(self):
handler = self.handler
name = handler.name
if hasattr(handler, 'get_backend'):
name += ' (%s backend)' % (handler.get_backend(),)
return name
@classmethod
def iter_known_hashes(cls):
for secret, hash in cls.known_correct_hashes:
yield (secret, hash)
for config, secret, hash in cls.known_correct_configs:
yield (
secret, hash)
for alt, secret, hash in cls.known_alternate_hashes:
yield (
secret, hash)
workflow with imported tools""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.inline_javascript
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_wf_wc_expressiontool(self):
"""Test two step workflow with inline tools
Generated from::
id: 25
job: tests/wc-job.json
label: wf_wc_expressiontool
output:
count_output: 16
tags:
- inline_javascript
- workflow
tool: tests/count-lines2-wf.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test two step workflow with inline tools""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.inline_javascript
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_wf_wc_scatter(self):
"""Test single step workflow with Scatter step
Generated from::
id: 26
job: tests/count-lines3-job.json
label: wf_wc_scatter
output:
count_output:
- 16
- 1
tags:
- scatter
- inline_javascript
- workflow
tool: tests/count-lines3-wf.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test single step workflow with Scatter step""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.multiple_input
@pytest.mark.inline_javascript
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_wf_wc_scatter_multiple_merge(self):
"""Test single step workflow with Scatter step and two data links connected to same input, default merge behavior
Generated from::
id: 27
job: tests/count-lines4-job.json
label: wf_wc_scatter_multiple_merge
output:
count_output:
- 16
- 1
tags:
- scatter
- multiple_input
- inline_javascript
- workflow
tool: tests/count-lines4-wf.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test single step workflow with Scatter step and two data links connected to same input, default merge behavior""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.multiple_input
@pytest.mark.inline_javascript
@pytest.mark.workflow
@pytest.mark.red
def test_conformance_v1_1_wf_wc_scatter_multiple_nested(self):
"""Test single step workflow with Scatter step and two data links connected to same input, nested merge behavior
Generated from::
id: 28
job: tests/count-lines6-job.json
label: wf_wc_scatter_multiple_nested
output:
count_output:
- 32
- 2
tags:
- scatter
- multiple_input
- inline_javascript
- workflow
tool: tests/count-lines6-wf.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test single step workflow with Scatter step and two data links connected to same input, nested merge behavior""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.multiple_input
@pytest.mark.inline_javascript
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_wf_wc_scatter_multiple_flattened(self):
"""Test single step workflow with Scatter step and two data links connected to same input, flattened merge behavior
Generated from::
id: 29
job: tests/count-lines6-job.json
label: wf_wc_scatter_multiple_flattened
output:
count_output: 34
tags:
- multiple_input
- inline_javascript
- workflow
tool: tests/count-lines7-wf.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test single step workflow with Scatter step and two data links connected to same input, flattened merge behavior""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.inline_javascript
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_wf_wc_nomultiple(self):
"""Test that no MultipleInputFeatureRequirement is necessary when workflow step source is a single-item list
Generated from::
id: 30
job: tests/count-lines6-job.json
label: wf_wc_nomultiple
output:
count_output: 32
tags:
- inline_javascript
- workflow
tool: tests/count-lines13-wf.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test that no MultipleInputFeatureRequirement is necessary when workflow step source is a single-item list""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.inline_javascript
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_wf_input_default_missing(self):
"""Test workflow with default value for input parameter (missing)
Generated from::
id: 31
job: tests/empty.json
label: wf_input_default_missing
output:
count_output: 1
tags:
- inline_javascript
- workflow
tool: tests/count-lines5-wf.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test workflow with default value for input parameter (missing)""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.inline_javascript
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_wf_input_default_provided(self):
"""Test workflow with default value for input parameter (provided)
Generated from::
id: 32
job: tests/wc-job.json
label: wf_input_default_provided
output:
count_output: 16
tags:
- inline_javascript
- workflow
tool: tests/count-lines5-wf.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test workflow with default value for input parameter (provided)""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.required
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_wf_default_tool_default(self):
"""Test that workflow defaults override tool defaults
Generated from::
id: 33
job: tests/empty.json
label: wf_default_tool_default
output:
default_output: workflow_default
tags:
- required
- workflow
tool: tests/echo-wf-default.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test that workflow defaults override tool defaults""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.env_var
@pytest.mark.command_line_tool
@pytest.mark.green
def test_conformance_v1_1_envvar_req(self):
"""Test EnvVarRequirement
Generated from::
id: 34
job: tests/env-job.json
label: envvar_req
output:
out:
checksum: sha1$b3ec4ed1749c207e52b3a6d08c59f31d83bff519
class: File
location: out
size: 15
tags:
- env_var
- command_line_tool
tool: tests/env-tool1.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test EnvVarRequirement""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.workflow
@pytest.mark.red
def test_conformance_v1_1_wf_scatter_single_param(self):
"""Test workflow scatter with single scatter parameter
Generated from::
id: 35
job: tests/scatter-job1.json
label: wf_scatter_single_param
output:
out:
- foo one
- foo two
- foo three
- foo four
tags:
- scatter
- workflow
tool: tests/scatter-wf1.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test workflow scatter with single scatter parameter""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.workflow
@pytest.mark.red
def test_conformance_v1_1_wf_scatter_two_nested_crossproduct(self):
"""Test workflow scatter with two scatter parameters and nested_crossproduct join method
Generated from::
id: 36
job: tests/scatter-job2.json
label: wf_scatter_two_nested_crossproduct
output:
out:
- - foo one three
- foo one four
- - foo two three
- foo two four
tags:
- scatter
- workflow
tool: tests/scatter-wf2.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test workflow scatter with two scatter parameters and nested_crossproduct join method""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.workflow
@pytest.mark.red
def test_conformance_v1_1_wf_scatter_two_flat_crossproduct(self):
"""Test workflow scatter with two scatter parameters and flat_crossproduct join method
Generated from::
id: 37
job: tests/scatter-job2.json
label: wf_scatter_two_flat_crossproduct
output:
out:
- foo one three
- foo one four
- foo two three
- foo two four
tags:
- scatter
- workflow
tool: tests/scatter-wf3.cwl#main
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test workflow scatter with two scatter parameters and flat_crossproduct join method""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.workflow
@pytest.mark.red
def test_conformance_v1_1_wf_scatter_two_dotproduct(self):
"""Test workflow scatter with two scatter parameters and dotproduct join method
Generated from::
id: 38
job: tests/scatter-job2.json
label: wf_scatter_two_dotproduct
output:
out:
- foo one three
- foo two four
tags:
- scatter
- workflow
tool: tests/scatter-wf4.cwl#main
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test workflow scatter with two scatter parameters and dotproduct join method""")
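The three scatter join methods exercised by tests 36-38 (`nested_crossproduct`, `flat_crossproduct`, `dotproduct`) can be sketched in plain Python; the echo step is modeled here as a simple formatting function, which is an illustrative assumption, not the CWL runner's implementation:

```python
from itertools import product

def nested_crossproduct(xs, ys, step):
    # One inner list per value of the first scatter parameter.
    return [[step(x, y) for y in ys] for x in xs]

def flat_crossproduct(xs, ys, step):
    # Same pairs as nested_crossproduct, flattened into a single list.
    return [step(x, y) for x, y in product(xs, ys)]

def dotproduct(xs, ys, step):
    # Pairs values position-wise; both lists must have equal length.
    return [step(x, y) for x, y in zip(xs, ys)]
```

With `xs = ['one', 'two']`, `ys = ['three', 'four']` and `step = lambda a, b: f'foo {a} {b}'`, these reproduce the expected outputs listed above for `wf_scatter_two_nested_crossproduct`, `wf_scatter_two_flat_crossproduct` and `wf_scatter_two_dotproduct`, and `nested_crossproduct` with an empty second list yields `[[], []]` as in test 40.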
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_wf_scatter_emptylist(self):
"""Test workflow scatter with single empty list parameter
Generated from::
id: 39
job: tests/scatter-empty-job1.json
label: wf_scatter_emptylist
output:
out: []
tags:
- scatter
- workflow
tool: tests/scatter-wf1.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test workflow scatter with single empty list parameter""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.workflow
@pytest.mark.red
def test_conformance_v1_1_wf_scatter_nested_crossproduct_secondempty(self):
"""Test workflow scatter with two scatter parameters and nested_crossproduct join method with second list empty
Generated from::
id: 40
job: tests/scatter-empty-job2.json
label: wf_scatter_nested_crossproduct_secondempty
output:
out:
- []
- []
tags:
- scatter
- workflow
tool: tests/scatter-wf2.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test workflow scatter with two scatter parameters and nested_crossproduct join method with second list empty""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.workflow
@pytest.mark.red
def test_conformance_v1_1_wf_scatter_nested_crossproduct_firstempty(self):
"""Test workflow scatter with two scatter parameters and nested_crossproduct join method with first list empty
Generated from::
id: 41
job: tests/scatter-empty-job3.json
label: wf_scatter_nested_crossproduct_firstempty
output:
out: []
tags:
- scatter
- workflow
tool: tests/scatter-wf3.cwl#main
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test workflow scatter with two scatter parameters and nested_crossproduct join method with first list empty""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.workflow
@pytest.mark.red
def test_conformance_v1_1_wf_scatter_flat_crossproduct_oneempty(self):
"""Test workflow scatter with two scatter parameters, one of which is empty and flat_crossproduct join method
Generated from::
id: 42
job: tests/scatter-empty-job2.json
label: wf_scatter_flat_crossproduct_oneempty
output:
out: []
tags:
- scatter
- workflow
tool: tests/scatter-wf3.cwl#main
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test workflow scatter with two scatter parameters, one of which is empty and flat_crossproduct join method""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.scatter
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_wf_scatter_dotproduct_twoempty(self):
"""Test workflow scatter with two empty scatter parameters and dotproduct join method
Generated from::
id: 43
job: tests/scatter-empty-job4.json
label: wf_scatter_dotproduct_twoempty
output:
out: []
tags:
- scatter
- workflow
tool: tests/scatter-wf4.cwl#main
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test workflow scatter with two empty scatter parameters and dotproduct join method""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.required
@pytest.mark.command_line_tool
@pytest.mark.green
def test_conformance_v1_1_any_input_param(self):
"""Test Any type input parameter
Generated from::
id: 44
job: tests/env-job.json
label: any_input_param
output:
out: 'hello test env
'
tags:
- required
- command_line_tool
tool: tests/echo-tool.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test Any type input parameter""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.subworkflow
@pytest.mark.workflow
@pytest.mark.inline_javascript
@pytest.mark.green
def test_conformance_v1_1_nested_workflow(self):
"""Test nested workflow
Generated from::
id: 45
job: tests/wc-job.json
label: nested_workflow
output:
count_output: 16
tags:
- subworkflow
- workflow
- inline_javascript
tool: tests/count-lines8-wf.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test nested workflow""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.env_var
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_requirement_priority(self):
"""Test requirement priority
Generated from::
id: 46
job: tests/env-job.json
label: requirement_priority
output:
out:
checksum: sha1$b3ec4ed1749c207e52b3a6d08c59f31d83bff519
class: File
location: out
size: 15
tags:
- env_var
- workflow
tool: tests/env-wf1.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test requirement priority""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.env_var
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_requirement_override_hints(self):
"""Test requirements override hints
Generated from::
id: 47
job: tests/env-job.json
label: requirement_override_hints
output:
out:
checksum: sha1$cdc1e84968261d6a7575b5305945471f8be199b6
class: File
location: out
size: 9
tags:
- env_var
- workflow
tool: tests/env-wf2.cwl
""" # noqa: W293
self.cwl_populator.run_conformance_test("""v1.1""", """Test requirements override hints""")
@pytest.mark.cwl_conformance
@pytest.mark.cwl_conformance_v1_1
@pytest.mark.env_var
@pytest.mark.workflow
@pytest.mark.green
def test_conformance_v1_1_requirement_workflow_steps(self):
"""Test requirements on workflow steps
Generated from::
id: 48
job: tests/env-job.json
| |
with a `DimensionNotAvailable` error if the specified dimension does not exist.
:param context: Additional data to be passed to the reducer.
:return: A data cube with the same dimensions. The dimension properties (name, type, labels, reference
system and resolution) remain unchanged, except for the resolution and dimension labels of the given
temporal dimension. The specified temporal dimension has the following dimension labels (`YYYY` = four-
digit year, `MM` = two-digit month, `DD` two-digit day of month): * `hour`: `YYYY-MM-DD-00` - `YYYY-
MM-DD-23` * `day`: `YYYY-001` - `YYYY-365` * `week`: `YYYY-01` - `YYYY-52` * `dekad`: `YYYY-00` -
`YYYY-36` * `month`: `YYYY-01` - `YYYY-12` * `season`: `YYYY-djf` (December - February), `YYYY-mam`
(March - May), `YYYY-jja` (June - August), `YYYY-son` (September - November). * `tropical-season`:
`YYYY-ndjfma` (November - April), `YYYY-mjjaso` (May - October). * `year`: `YYYY` * `decade`: `YYY0` *
`decade-ad`: `YYY1`
"""
return aggregate_temporal_period(data=self, period=period, reducer=reducer, dimension=dimension, context=context)
def all(self, ignore_nodata=UNSET) -> 'ProcessBuilder':
"""
Are all of the values true?
:param self: A set of boolean values.
:param ignore_nodata: Indicates whether no-data values are ignored; they are ignored by default.
:return: Boolean result of the logical operation.
"""
return all(data=self, ignore_nodata=ignore_nodata)
def and_(self, y) -> 'ProcessBuilder':
"""
Logical AND
:param self: A boolean value.
:param y: A boolean value.
:return: Boolean result of the logical AND.
"""
return and_(x=self, y=y)
def anomaly(self, normals, period) -> 'ProcessBuilder':
"""
Computes anomalies
:param self: A data cube with exactly one temporal dimension and the following dimension labels for the
given period (`YYYY` = four-digit year, `MM` = two-digit month, `DD` two-digit day of month): *
`hour`: `YYYY-MM-DD-00` - `YYYY-MM-DD-23` * `day`: `YYYY-001` - `YYYY-365` * `week`: `YYYY-01` -
`YYYY-52` * `dekad`: `YYYY-00` - `YYYY-36` * `month`: `YYYY-01` - `YYYY-12` * `season`: `YYYY-djf`
(December - February), `YYYY-mam` (March - May), `YYYY-jja` (June - August), `YYYY-son` (September -
November). * `tropical-season`: `YYYY-ndjfma` (November - April), `YYYY-mjjaso` (May - October). *
`year`: `YYYY` * `decade`: `YYY0` * `decade-ad`: `YYY1` * `single-period` / `climatology-period`: Any
``aggregate_temporal_period()`` can compute such a data cube.
:param normals: A data cube with normals, e.g. daily, monthly or yearly values computed from a process
such as ``climatological_normal()``. Must contain exactly one temporal dimension with the following
dimension labels for the given period: * `hour`: `00` - `23` * `day`: `001` - `365` * `week`: `01` -
`52` * `dekad`: `00` - `36` * `month`: `01` - `12` * `season`: `djf` (December - February), `mam`
(March - May), `jja` (June - August), `son` (September - November) * `tropical-season`: `ndjfma`
(November - April), `mjjaso` (May - October) * `year`: Four-digit year numbers * `decade`: Four-digit
year numbers, the last digit being a `0` * `decade-ad`: Four-digit year numbers, the last digit being a
`1` * `single-period` / `climatology-period`: A single dimension label with any name is expected.
:param period: Specifies the time intervals available in the normals data cube. The following options
are available: * `hour`: Hour of the day * `day`: Day of the year * `week`: Week of the year *
`dekad`: Ten day periods, counted per year with three periods per month (day 1 - 10, 11 - 20 and 21 -
end of month). The third dekad of the month can range from 8 to 11 days. For example, the fourth dekad
is Feb, 1 - Feb, 10 each year. * `month`: Month of the year * `season`: Three month periods of the
calendar seasons (December - February, March - May, June - August, September - November). * `tropical-
season`: Six month periods of the tropical seasons (November - April, May - October). * `year`:
Proleptic years * `decade`: Ten year periods ([0-to-9
decade](https://en.wikipedia.org/wiki/Decade#0-to-9_decade)), from a year ending in a 0 to the next
year ending in a 9. * `decade-ad`: Ten year periods ([1-to-0
decade](https://en.wikipedia.org/wiki/Decade#1-to-0_decade)) better aligned with the Anno Domini (AD)
calendar era, from a year ending in a 1 to the next year ending in a 0. * `single-period` /
`climatology-period`: A single period of arbitrary length
:return: A data cube with the same dimensions. The dimension properties (name, type, labels, reference
system and resolution) remain unchanged.
"""
return anomaly(data=self, normals=normals, period=period)
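As a small illustration of the `dekad` labeling described above (three periods per month: days 1-10, 11-20, 21-end of month), a date can be mapped to a zero-based dekad-of-year label. This helper is a sketch and not part of the generated client:

```python
from datetime import date

def dekad_label(d: date) -> str:
    """Zero-based dekad of the year as 'YYYY-NN' (three dekads per month)."""
    dekad_of_year = (d.month - 1) * 3 + min((d.day - 1) // 10, 2)
    return f"{d.year}-{dekad_of_year:02d}"
```

With zero-based counting, Feb 1 - Feb 10 carries label `-03`, i.e. it is the fourth dekad of the year, matching the "fourth dekad is Feb, 1 - Feb, 10" example in the description.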
def any(self, ignore_nodata=UNSET) -> 'ProcessBuilder':
"""
Is at least one value true?
:param self: A set of boolean values.
:param ignore_nodata: Indicates whether no-data values are ignored; they are ignored by default.
:return: Boolean result of the logical operation.
"""
return any(data=self, ignore_nodata=ignore_nodata)
def apply(self, process, context=UNSET) -> 'ProcessBuilder':
"""
Apply a process to each pixel
:param self: A data cube.
:param process: A unary process to be applied on each value, may consist of multiple sub-processes.
:param context: Additional data to be passed to the process.
:return: A data cube with the newly computed values and the same dimensions. The dimension properties
(name, type, labels, reference system and resolution) remain unchanged.
"""
return apply(data=self, process=process, context=context)
def apply_dimension(self, process, dimension, target_dimension=UNSET, context=UNSET) -> 'ProcessBuilder':
"""
Apply a process to pixels along a dimension
:param self: A data cube.
:param process: Process to be applied on all pixel values. The specified process needs to accept an
array as a parameter and must return an array with at least one element. A process may consist of multiple
sub-processes.
:param dimension: The name of the source dimension to apply the process on. Fails with a
`DimensionNotAvailable` error if the specified dimension does not exist.
:param target_dimension: The name of the target dimension or `null` (the default) to use the source
dimension specified in the parameter `dimension`. By specifying a target dimension, the source
dimension is removed. The target dimension with the specified name and the type `other` (see
``add_dimension()``) is created if it doesn't exist yet.
:param context: Additional data to be passed to the process.
:return: A data cube with the newly computed values. All dimensions stay the same, except for the
dimensions specified in corresponding parameters. There are three cases how the data cube changes: 1.
The source dimension **is** the target dimension: * The (number of) dimensions remain unchanged.
* The source dimension properties name, type and reference system remain unchanged. * The dimension
labels and the resolution are preserved when the number of pixel values in the source dimension is
equal to the number of values computed by the process. The other case is described below. 2. The source
dimension **is not** the target dimension and the latter **exists**: * The number of dimensions
decreases by one as the source dimension is dropped. * The target dimension properties name, type
and reference system remain unchanged. * The resolution changes, the number of dimension labels is
equal to the number of values computed by the process and the dimension labels are incrementing
integers starting from zero 3. The source dimension **is not** the target dimension and the latter
**does not exist**: * The number of dimensions remain unchanged, but the source dimension is
replaced with the target dimension. * The target dimension has the specified name and the type
other. The reference system is not changed. * The resolution changes, the number of dimension labels
is equal to the number of values computed by the process and the dimension labels are incrementing
integers starting from zero For all three cases except for the exception in the first case, the
resolution changes, the number of dimension labels is equal to the number of values computed by the
process and the dimension labels are incrementing integers starting from zero.
"""
return apply_dimension(data=self, process=process, dimension=dimension, target_dimension=target_dimension, context=context)
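The common case 1 above (source dimension equals target dimension, number of values preserved) corresponds to applying a function along one axis of an array; a numpy sketch, with illustrative names not taken from the client:

```python
import numpy as np

def apply_along_dimension(cube: np.ndarray, func, axis: int) -> np.ndarray:
    """Apply `func` (array -> array) along one axis of a data cube."""
    return np.apply_along_axis(func, axis, cube)
```

For example, mean-centering along the second axis leaves the cube's shape unchanged, as the case 1 description requires.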
def apply_kernel(self, kernel, factor=UNSET, border=UNSET, replace_invalid=UNSET) -> 'ProcessBuilder':
"""
Apply a spatial convolution with a kernel
:param self: A data cube.
:param kernel: Kernel as a two-dimensional array of weights. The inner level of the nested array aligns
with the | |
# Standard library imports.
import ast
import getopt
import logging
import pprint
import re
import sys
# External imports.
import docx
import pympi
import pyramid.paster as paster
# Project imports.
from lingvodoc.models import (
DBSession,
Dictionary,
)
# Setting up logging, if we are not being run as a script.
if __name__ != '__main__':
log = logging.getLogger(__name__)
log.debug('module init')
def levenshtein(
snippet_str,
snippet_index,
word_str,
__debug_levenshtein_flag__ = False):
"""
Matches the word string to the snippet string via adjusted Levenshtein matching, with no penalty
for skipping snippet characters before and after the match.
"""
d = {(0, j): (j, 1e256)
for j in range(len(word_str) + 1)}
for i in range(len(snippet_str) - snippet_index):
d[(i + 1, 0)] = (0, 1e256)
minimum_distance = len(word_str)
minimum_begin_index = 0
minimum_end_index = 0
for i in range(1, len(snippet_str) - snippet_index + 1):
if __debug_levenshtein_flag__:
log.debug(
'd[{0}, 0]: {1}'.format(i, d[(i, 0)]))
for j in range(1, len(word_str) + 1):
# Matching current characters of the word and snippet strings.
s_distance, s_begin_index = d[i - 1, j - 1]
substitution_value = s_distance + (
0 if snippet_str[snippet_index + i - 1] == word_str[j - 1] else 1)
substitution_index = min(s_begin_index, i - 1)
# Skipping current character from the snippet string.
d_distance, d_begin_index = d[i - 1, j]
deletion_value = d_distance + (
1 if j < len(word_str) else 0)
deletion_index = d_begin_index
# Skipping current character from the word string.
i_distance, i_begin_index = d[i, j - 1]
insertion_value = i_distance + 1
insertion_index = i_begin_index
# Getting minimum.
minimum_value = min(
substitution_value,
deletion_value,
insertion_value)
if minimum_value == deletion_value:
operation_index = 1
minimum_index = deletion_index
elif minimum_value == insertion_value:
operation_index = 2
minimum_index = insertion_index
else:
operation_index = 0
minimum_index = substitution_index
d[(i, j)] = (minimum_value, minimum_index)
# Showing edit distance computation details.
if __debug_levenshtein_flag__:
log.debug(
'\nd[{0}, {1}] (\'{18}\' & \'{14}\'): {4}'
'\n d[{5}, {6}] (\'{2}\' & \'{3}\'): {9} + {10}{11} (\'{12}\', \'{13}\')'
'\n d[{5}, {1}] (\'{2}\' & \'{14}\'): {15} + {16}{17}'
'\n d[{0}, {6}] (\'{18}\' & \'{3}\'): {19} + 1{20}'.format(
i, j,
snippet_str[snippet_index : snippet_index + i - 1] + '|' +
snippet_str[snippet_index + i - 1],
word_str[: j - 1] + '|' + word_str[j - 1],
d[(i, j)][0],
i - 1, j - 1,
snippet_str[snippet_index : snippet_index + i - 1] + '|',
word_str[: j - 1] + '|',
d[(i - 1, j - 1)][0],
0 if snippet_str[snippet_index + i - 1] == word_str[j - 1] else 1,
'*' if operation_index == 0 else '',
snippet_str[snippet_index + i - 1],
word_str[j - 1],
word_str[:j] + '|',
d[(i - 1, j)][0],
1 if j < len(word_str) else 0,
'*' if operation_index == 1 else '',
snippet_str[snippet_index : snippet_index + i] + '|',
d[(i, j - 1)][0],
'*' if operation_index == 2 else ''))
# Checking if we have a new best matching.
if d[i, len(word_str)][0] < minimum_distance:
minimum_distance, minimum_begin_index = d[i, len(word_str)]
minimum_end_index = i
if minimum_distance == 0:
break
return (
minimum_distance,
minimum_begin_index,
minimum_end_index)
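The core idea of the function above, Levenshtein distance where skipping snippet characters before and after the match is free, can be shown in a compact self-contained form. This sketch returns only the distance, not the begin/end indices tracked above, and uses plain unit costs:

```python
def substring_edit_distance(snippet: str, word: str) -> int:
    """Edit distance from `word` to its best-matching substring of `snippet`."""
    # prev[j] = cost of matching word[:j] ending at the snippet position read so far
    prev = list(range(len(word) + 1))
    best = prev[-1]
    for ch in snippet:
        cur = [0]  # starting a match here is free: skipping the snippet prefix costs nothing
        for j in range(1, len(word) + 1):
            substitution = prev[j - 1] + (ch != word[j - 1])
            insertion = cur[j - 1] + 1   # skip a word character
            deletion = prev[j] + 1       # skip a snippet character inside the match
            cur.append(min(substitution, insertion, deletion))
        best = min(best, cur[-1])  # ending the match here is free: the suffix is skipped
        prev = cur
    return best
```

An exact occurrence of the word inside the snippet costs 0; one substituted character costs 1.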
def prepare_match_string(cell_str):
"""
Processes a string for matching, finding and marking parenthesized portions to be treated as
optional during matching.
"""
chr_list = []
chr_index = 0
for match in re.finditer(r'\([^()]*?\)', cell_str):
for chr in re.sub(
r'\W+', '', cell_str[chr_index : match.start()]):
chr_list.append((chr, False))
for chr in re.sub(
r'\W+', '', match.group(0)):
chr_list.append((chr, True))
chr_index = match.end()
for chr in re.sub(
r'\W+', '', cell_str[chr_index:]):
chr_list.append((chr, False))
return chr_list
def format_match_string(marked_chr_list):
"""
Formats list of marked characters as a string.
"""
chr_list = []
mark_prev = False
for chr, mark in marked_chr_list:
if mark != mark_prev:
chr_list.append('(' if mark else ')')
chr_list.append(chr)
mark_prev = mark
if mark_prev:
chr_list.append(')')
return ''.join(chr_list)
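The two helpers above form a round trip: parenthesized (optional) runs are marked per character and can be formatted back into a `(...)`-annotated string. A condensed self-contained sketch of the same scheme, with illustrative names:

```python
import re

def mark_optional(cell_str):
    """Return [(char, is_optional)]; characters inside (...) are optional."""
    marked, pos = [], 0
    for m in re.finditer(r'\([^()]*?\)', cell_str):
        marked += [(c, False) for c in re.sub(r'\W+', '', cell_str[pos:m.start()])]
        marked += [(c, True) for c in re.sub(r'\W+', '', m.group(0))]
        pos = m.end()
    marked += [(c, False) for c in re.sub(r'\W+', '', cell_str[pos:])]
    return marked

def unmark(marked):
    """Format the marked characters back into a '(...)'-annotated string."""
    parts, prev = [], False
    for c, optional in marked:
        if optional != prev:
            parts.append('(' if optional else ')')
        parts.append(c)
        prev = optional
    return ''.join(parts) + (')' if prev else '')
```

Note that non-word characters (including the parentheses themselves) are stripped, so `"foo (bar) baz"` round-trips to `"foo(bar)baz"`.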
class State(object):
"""
State of snippet table parsing.
"""
def __init__(self, snippet_str, cell_list, row_index):
"""
Initialization with the contents of the first snippet string.
"""
self.snippet_count = 0
self.snippet_chain = None
self.snippet_str = snippet_str
self.row_index = row_index
self.row_list = [cell_list]
self.d0 = []
self.d1 = [0.999 * i for i in range(len(self.snippet_str) + 1)]
self.word_list = []
self.word_str = []
self.total_value = 0
self.snippet_value = 0
def process_row(
self,
row_str,
cell_list,
row_index,
__debug_flag__ = False):
"""
Processes another data string, splitting into one state where it is treated as a word string and
another state where it is treated as a new snippet string.
"""
# First, assuming that this data string is the next snippet string.
if row_str:
copy = State(row_str, cell_list, row_index)
copy.snippet_chain = (
(tuple(self.row_list), self.row_index),
self.snippet_chain)
copy.snippet_count = self.snippet_count + 1
copy.total_value = self.total_value + self.d1[-1]
yield copy
# Second, assuming that this data string is a word string.
len_prev = len(self.word_str)
self.word_list.append(row_str)
self.word_str += row_str
self.row_list.append(cell_list)
# Updating Levenshtein alignment of snippet words to the snippet string.
for i in range(len(row_str)):
self.d0 = self.d1
self.d1 = [len_prev + i + 1]
for j in range(len(self.snippet_str)):
# Matching current characters of the snippet string and the word string.
s_cost = 0 if self.snippet_str[j][0] == row_str[i][0] else 1
if s_cost and (self.snippet_str[j][1] or row_str[i][1]):
s_cost = 0.001
s_value = self.d0[j] + s_cost
# Skipping current character either from the snippet string or from the word string.
d_value = self.d1[j] + (0.000999 if self.snippet_str[j][1] else 0.999)
i_value = self.d0[j + 1] + (0.001 if row_str[i][1] else 1)
self.d1.append(min(s_value, d_value, i_value))
# Showing debug info, if required.
if __debug_flag__:
log.debug((
format_match_string(self.snippet_str[:j]),
format_match_string(self.word_str[:len_prev + i]),
self.d0[j],
self.snippet_str[j][0],
row_str[i][0],
round(s_value, 6)))
log.debug((
format_match_string(self.snippet_str[:j]),
format_match_string(self.word_str[:len_prev + i + 1]),
self.d1[j],
self.snippet_str[j][0],
round(d_value, 6)))
log.debug((
format_match_string(self.snippet_str[:j + 1]),
format_match_string(self.word_str[:len_prev + i]),
self.d0[j + 1],
row_str[i][0],
round(i_value, 6)))
log.debug((
format_match_string(self.snippet_str[:j + 1]),
format_match_string(self.word_str[:len_prev + i + 1]),
round(min(s_value, d_value, i_value), 6)))
log.debug(self.d1)
# Updating alignment value.
if len(self.word_str) <= 0:
self.snippet_value = 0
elif len(self.word_str) > len(self.snippet_str):
self.snippet_value = self.d1[-1]
else:
self.snippet_value = min(
self.d1[len(self.word_str) : 2 * len(self.word_str)])
yield self
def beam_search_step(
state_list,
cell_str,
cell_list,
row_index,
beam_width,
__debug_beam_flag__ = False):
"""
Another step of alignment beam search.
"""
if not state_list:
return [State(
cell_str, cell_list, row_index)]
# Sorting parsing states by the snippet they are parsing.
state_dict = {}
for state in state_list:
for state_after in state.process_row(
cell_str, cell_list, row_index):
index = state_after.row_index
# Leaving only states with the best snippet histories.
if (index not in state_dict or
state_after.total_value < state_dict[index][0]):
state_dict[index] = (state_after.total_value, [state_after])
elif state_after.total_value == state_dict[index][0]:
state_dict[index][1].append(state_after)
state_list = []
for value, state_after_list in state_dict.values():
state_list.extend(state_after_list)
# Showing snippet alignment beam search state, if required.
if __debug_beam_flag__:
log.debug('\n' +
pprint.pformat([(
round(state.total_value + state.snippet_value, 6),
state.snippet_count,
format_match_string(state.snippet_str),
'|'.join(
format_match_string(word_str)
for word_str in state.word_list))
for state in state_list],
width = 384))
# Leaving only a number of best states.
state_list.sort(key = lambda state:
(state.total_value + state.snippet_value, state.snippet_count))
return state_list[:beam_width]
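Stripped of the snippet-specific bookkeeping, `beam_search_step` follows the standard expand-then-prune pattern; a generic sketch with illustrative names:

```python
def beam_step(hypotheses, expand, score, width):
    """One beam-search step: expand every hypothesis, keep the `width` best.
    Lower scores are better, matching the sort key used above."""
    candidates = [h2 for h in hypotheses for h2 in expand(h)]
    return sorted(candidates, key=score)[:width]
```

Repeatedly calling `beam_step` with the previous result, as the row loop in `parse_table` does, keeps the search frontier bounded at `width` states.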
def parse_table(
row_list,
limit = None,
__debug_beam_flag__ = False):
"""
Tries to parse snippet data represented as a table.
"""
# Removing any snippet alignment marks, if we have any.
for cell_list in row_list:
for i in range(len(cell_list)):
match = re.match(r'\(__\d+__\)\s*', cell_list[i])
if match:
cell_list[i] = cell_list[i][match.end():]
state_list = []
beam_width = 32
# Going through snippet data.
for row_index, cell_list in enumerate(row_list[1:], 1):
if limit and row_index > limit:
break
if not any(cell_list[:3]):
continue
cell_str = (
prepare_match_string(
cell_list[0].lower()))
# Updating alignment search on another row.
state_list = (
beam_search_step(
state_list,
cell_str,
cell_list,
row_index,
beam_width,
__debug_beam_flag__))
# Returning final parsing search state.
return state_list
def parse_by_paragraphs(
row_list,
limit = None,
__debug_flag__ = False,
__debug_beam_flag__ = False):
"""
Tries to parse snippet data with paragraph separation inside table cells.
"""
# Splitting row texts by paragraphs.
line_row_list = []
line_row_count = 0
for cell_list in row_list[1:]:
if limit and line_row_count >= limit:
break
paragraph_list_list = [
re.split(r'[^\S\n]*\n\s*', text)
for text in cell_list]
how_many = max(
len(paragraph_list)
for paragraph_list in paragraph_list_list[:3])
# Iterating over aligned paragraphs in adjacent cells.
line_rank_list = []
for i in range(how_many):
line_cell_list = []
for paragraph_list in paragraph_list_list:
if i < len(paragraph_list):
# Removing snippet alignment mark, if there is one present.
cell_str = paragraph_list[i]
match = re.match(r'\(__\d+__\)\s*', cell_str)
line_cell_list.append(
cell_str[match.end():] if match else
cell_str)
else:
line_cell_list.append('')
# Another line | |
this turn
combinedVariance = np.var(combinedDensityMap)
# if the rounded variance of this turn is greater than or equal to last turn's variance
if round(combinedVariance,precision) >= round(lastVariance,precision):
placingcondition = True
# and update the last variance with the new value for the next turn
lastVariance = combinedVariance
DensityMap = combinedDensityMap
else:
# don't add this element to the training dataset; add it to the test dataset instead
placingcondition = False
if opt.verbosity: print("index:", index," formerVariance,combinedVariance",round(lastVariance,precision),round(combinedVariance,precision),"so placing is",placingcondition)
#input()
return placingcondition, DensityMap, lastVariance
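The placing rule implemented above, accepting a sample into the training split only if it does not lower the rounded variance of the accumulated feature map, can be sketched independently (function and argument names here are illustrative):

```python
import numpy as np

def place_in_train(density_map, new_point, last_variance, precision=3):
    """Return (goes_to_train, updated_map, updated_variance)."""
    combined = np.vstack([density_map, new_point])
    combined_variance = np.var(combined)
    if round(combined_variance, precision) >= round(last_variance, precision):
        # variance did not drop: the point keeps the dataset balanced -> train split
        return True, combined, combined_variance
    # variance dropped: route the point to the test split, keep the old map
    return False, density_map, last_variance
```

A point far from the existing cluster raises the variance and is accepted; a point that makes the map more uniform lowers it and is rejected.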
def make_train_test_data(self, shape):
Num_test = 0
Num_train = 0
firsttime = True
DensityMap= np.array([[0.0,0.0],])
lastVariance = 0.0
original_shape = shape
counter = 0
train_counter = 0
test_counter =0
from os import listdir
DataFolder = FileParentPath + "/0Data/Images/"
filelist = [f for f in listdir(DataFolder)]
#print(filelist)
from tqdm import tqdm
pbar = tqdm(total=len(filelist))
n = 4 # will probably be limited to at most 16; it is 4 because the correct slice would be boxsize[:4] or [:-7]
for index, filename in enumerate(filelist):
pbar.update(1) #updates progressbar
counter +=1
try:
filepath = DataFolder+ filename #Load Image with Pillow
# Open the image
image = Image.open(filepath)
# If the picture is in RGB or other multi-channel mode
#just separate the channels and concatenate them from left to right
ChannleDimension = len(str(image.mode)) # grey -> 1 chan , rgb 3 channle
if opt.verbosity == True:
# summarize some details about the image
print(image.format,image.size)
print(image.mode)
#show the image
image.show()
print("ChannleDimension",ChannleDimension)
channelcodierung = []
for channel in image.mode:
#FLATTEN EACH CHANNEL TO ONE BY TILING, because the CNN needs a consistent number of channels
#(if one image is grayscale while the others are RGB, it would otherwise raise an error)
if opt.verbosity: print(channel)
channelcodierung.append(channel)
C1, C2, C3, C4, C5, C6 = None, None, None, None, None, None
channellist = [C1,C2,C3,C4,C5,C6]
croppedChannelList = channellist[0:ChannleDimension-1]
croppedChannelList = image.split()
initEntry = None
stackedchannels = np.array(initEntry)
for idx, channel in enumerate(croppedChannelList):
PicNumpy = np.array(channel)
if opt.verbosity == True:
print(PicNumpy)
print(PicNumpy.shape)
if idx == 0:
stackedchannels = PicNumpy
else:
stackedchannels = np.concatenate((stackedchannels,PicNumpy),axis=1)
if opt.verbosity: print(stackedchannels.shape)
npOutputFile = stackedchannels
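The channel handling above (split a multi-channel image, then concatenate the channels left to right so the network always sees a single-channel array) reduces to one numpy operation; a sketch working on a raw array rather than a PIL image, with illustrative names:

```python
import numpy as np

def stack_channels_side_by_side(pixels: np.ndarray) -> np.ndarray:
    """Turn an HxWxC array into an Hx(W*C) array; 2-D (grayscale) input passes through."""
    if pixels.ndim == 2:
        return pixels
    return np.concatenate([pixels[:, :, c] for c in range(pixels.shape[2])], axis=1)
```

This mirrors the loop above: an RGB image of shape (H, W, 3) becomes a single (H, 3W) plane.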
#########################################################################
# MULTITHREADED BOXCOUNT LABEL EXTRACTION
BoxCountR_SpacialLac_map_Dict = BoxcountFeatureExtr.MultithreadBoxcount(npOutputFile)
##### CALC PLACING CONDITION FOR EACH DATAPOINT(PICTURE)
maxiteration = 3 # because boxsizes 2,4,8,16 correspond to iterations 0,1,2,3; determines the maximum boxsize
#breaking it down to 2 values
sumBCR, sumLAK = BoxCountR_SpacialLac_map_Dict[1][0], BoxCountR_SpacialLac_map_Dict[1][1] #BoxCountR_SpacialLac_map_Dict[4] #[0], BoxCountR_SpacialLac_map_Dict[4][1]
if opt.verbosity: print("BCR,sumLAK MAP before sum", sumBCR,sumLAK)
sumBCR, sumLAK = np.sum(sumBCR), np.sum(sumLAK)
if opt.verbosity: print("sumBCR,sumLAK after sum", sumBCR,sumLAK)
#Check whether adding this data point will make the dataset more balanced
placingcondition, DensityMap, lastVariance = self.CalcPlacingcondition(DensityMap, sumBCR,sumLAK,precision,lastVariance, index)
maxChunkCount = (npOutputFile.shape[1]/ChunkLenght) * (npOutputFile.shape[0]/ChunkLenght) # number of chunks the picture is broken into
Chunks = [None] * int(np.ceil(maxChunkCount)) # round up: partial boxes will later be padded with zeros
start = time.time()
BoxBoundriesY = [0,ChunkLenght]
BoxBoundriesX = [0,ChunkLenght]
iteration = 0 # index into Boxsize; larger box sizes are faster than smaller ones
Boxsize = [2,4,8,16,32,64,128,256,512,1024]
scalingFaktor = 1.0 / float(Boxsize[iteration])
#When slicing the BCRmap/LAKmap, the box boundaries must be scaled to keep the
#spatial dimensions consistent across iterations and box sizes
Scaled_BoxBoundriesY = [0,int(ChunkLenght*scalingFaktor)]
Scaled_BoxBoundriesX = [0,int(ChunkLenght*scalingFaktor)]
for i in range(len(Chunks)):
Chunks[i] = npOutputFile[BoxBoundriesY[0]:BoxBoundriesY[1],BoxBoundriesX[0]:BoxBoundriesX[1]]
CHUNKED_BoxCountR_SpacialLac_map_Dict = {}
CuttedBoxsizeList = Boxsize[:maxiteration+1]
if opt.verbosity:
print("Current box",i,"of",len(Chunks),"boxes")
print("Box boundaries X, Y:",BoxBoundriesX,BoxBoundriesY)
print("CuttedBoxsizeList", CuttedBoxsizeList)
print("Converting BCRmap,LAKmap into chunked form")
for it , currentboxsize in enumerate(CuttedBoxsizeList):
if opt.verbosity: print("Iteration", it, " and currentBoxsize", currentboxsize)
scalingFaktor = 1.0 / float(currentboxsize)
try:
#calc Scaled BoxBoundriesY
Scaled_BoxBoundriesY = [int(BoxBoundriesY[0]*scalingFaktor),int(BoxBoundriesY[1]*scalingFaktor)]
except:
PrintException()
#Assuming division by zero
Scaled_BoxBoundriesY = [0,int(BoxBoundriesY[1]*scalingFaktor)]
try:
#calc Scaled BoxBoundriesX
Scaled_BoxBoundriesX = [int(BoxBoundriesX[0]*scalingFaktor),int(BoxBoundriesX[1]*scalingFaktor)]
except:
PrintException()
#Assuming division by zero
Scaled_BoxBoundriesX = [0,int(BoxBoundriesX[1]*scalingFaktor)]
BCRmap, LAKmap = BoxCountR_SpacialLac_map_Dict[it]
chunked_BCRmap = BCRmap[Scaled_BoxBoundriesY[0]:Scaled_BoxBoundriesY[1],Scaled_BoxBoundriesX[0]:Scaled_BoxBoundriesX[1]]
chunked_LAKmap = LAKmap[Scaled_BoxBoundriesY[0]:Scaled_BoxBoundriesY[1],Scaled_BoxBoundriesX[0]:Scaled_BoxBoundriesX[1]]
#SCALING----------------------------
#Scale the array values to a Gaussian distribution with mean 0 and standard deviation 1
#chunked_BCRmap, chunked_LAKmap = preprocessing.scale(chunked_BCRmap) , preprocessing.scale(chunked_LAKmap)
#NOTE: scaling is deactivated
#Normalize the values (L1 norm per row)----------------------------
chunked_BCRmap, chunked_LAKmap = preprocessing.normalize(chunked_BCRmap, norm='l1') , preprocessing.normalize(chunked_LAKmap, norm='l1')
if opt.verbosity:
showNPArrayAsImage(Chunks[i], "Current Chunk", "gray")
print("chunked_BCRmap",chunked_BCRmap)
print("chunked_LAKmap",chunked_LAKmap)
showNPArrayAsImage(chunked_BCRmap, "chunked_BCRmap", "gray")
showNPArrayAsImage(chunked_LAKmap, "chunked_LAKmap", "gray")
#store the chunk-sized BCR/LAK maps for this iteration
CHUNKED_BoxCountR_SpacialLac_map_Dict[it] = [chunked_BCRmap, chunked_LAKmap]
#Scale dataset: scaled data has zero mean and unit variance
#Chunks[i]= preprocessing.scale(Chunks[i])
#NOTE: scaling is deactivated
#Normalize the array (L1 norm per row)
Chunks[i] = preprocessing.normalize(Chunks[i], norm='l1')
Chunkshape = Chunks[i].shape
if BoxBoundriesX[1] > npOutputFile.shape[1] or BoxBoundriesY[1] > npOutputFile.shape[0]:
if opt.verbosity: print("Chunk shape and target shape differ... reshaping")
Chunks[i] = reshape_Data(Chunks[i], Chunkshape, shape)
continue
assert Chunks[i].shape == original_shape
newshape = ( 1,int(original_shape[0]),int(original_shape[1]) )
Chunks[i] = np.reshape(Chunks[i], newshape)
#save the image and label to the test or train set
feature = Chunks[i]
label = [np.array(CHUNKED_BoxCountR_SpacialLac_map_Dict[0]) , np.array(CHUNKED_BoxCountR_SpacialLac_map_Dict[1]) , np.array(CHUNKED_BoxCountR_SpacialLac_map_Dict[2]) , np.array(CHUNKED_BoxCountR_SpacialLac_map_Dict[3]) ]
if opt.verbosity:
print("feature",feature)
print("label",label)
if placingcondition:
if opt.verbosity: print("Placingcondition is true, Append Chunk to dataset")
#saveplace
trainsaveplace = FileParentPath+"/Datasets/train/"
#saving image
imagesaveplace = trainsaveplace+ "/features/"+"Feature"+ str(Num_train)
np.save(imagesaveplace, feature)
#saving label
labelsaveplace = trainsaveplace+ "/labels/"+"label"+ str(Num_train)
#np.save cannot store a list of differently sized numpy arrays -> use pickle as a workaround
pickle.dump(label,open(labelsaveplace,"wb"))
Num_train +=1
else:
testsaveplace = FileParentPath+"/Datasets/test/"
#saving image
imagesaveplace = testsaveplace+ "/features/"+"Feature"+ str(Num_test)
np.save(imagesaveplace, feature)
#saving label
labelsaveplace = testsaveplace+ "/labels/"+"label"+ str(Num_test)
#np.save(labelsaveplace, label)
#np.save cannot store a list of differently sized numpy arrays -> use pickle as a workaround
pickle.dump(label,open(labelsaveplace,"wb"))
Num_test +=1
#After this chunk, set the boundaries of the next chunk for the next pass
if BoxBoundriesX[1] < npOutputFile.shape[1]:
BoxBoundriesX[0] = BoxBoundriesX[0] + maxIndexX
BoxBoundriesX[1] = BoxBoundriesX[1] + maxIndexX
if opt.verbosity:
print("Move box in x direction")
print("BoxBoundriesX", BoxBoundriesX)
else:
if opt.verbosity:
print("BoxBoundriesY", BoxBoundriesY)
print("Move box back to the starting position in x direction")
print("Move box in y direction")
BoxBoundriesX[0]=0
BoxBoundriesX[1]=maxIndexX
BoxBoundriesY[0]+=maxIndexY
BoxBoundriesY[1]+=maxIndexY
if opt.verbosity: input("Press Enter for the next file")
end = time.time()
print(round(end - start, 1), "seconds passed for chunking and making train data for 1 file")
except Exception:
PrintException()
input()
pbar.close() #close the progress bar; all pictures have been processed
if opt.verbosity:
#To evaluate whether balancing happened, plot all BCR/LAK values of the training dataset
x,y = np.array([]) , np.array([])
for Koordinate in DensityMap:
x,y = np.append(x,Koordinate[0]) , np.append(y,Koordinate[1])
# Plot
plt.scatter(x, y,s=1, alpha=0.33)
plt.title('Lacunarity - Boxcount Ratio Diagram')
plt.xlabel('sumBCR')
plt.ylabel('MeanLAK')
plt.show()
print("Num_train",Num_train ,"Num_test",Num_test)
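The loop above slides a ChunkLenght-sized box across the image and zero-pads the partial boxes at the edges. A minimal, self-contained sketch of that chunking idea (hypothetical helper, not this project's `reshape_Data`):

```python
import numpy as np

def chunk_image(arr, chunk_len):
    """Split a 2-D array into chunk_len x chunk_len tiles.

    Edge tiles that do not fill a whole box are zero-padded, mirroring the
    np.ceil(maxChunkCount) bookkeeping above.
    """
    tiles = []
    for y0 in range(0, arr.shape[0], chunk_len):
        for x0 in range(0, arr.shape[1], chunk_len):
            tile = arr[y0:y0 + chunk_len, x0:x0 + chunk_len]
            padded = np.zeros((chunk_len, chunk_len), dtype=arr.dtype)
            padded[:tile.shape[0], :tile.shape[1]] = tile
            tiles.append(padded)
    return tiles
```

For a 4x5 image and chunk_len=2 this yields 2*3 = 6 tiles, with the right-hand column of tiles padded by a column of zeros.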
def create_dataset(self):
delete_dataset_from_last_time(FileParentPath)
print("Begin preprocessing train/test datasets")
self.make_train_test_data(shape)
print("Datasets created")
#REBUILDING/Balancing DATA DONE ---------------------------------------------------------------------------------------------
# Create CUSTOM Pytorch DATASET with features and labels----------------------------------------------------------------------------
# Source: [21] https://stackoverflow.com/questions/56774582/adding-custom-labels-to-pytorch-dataloader-dataset-does-not-work-for-custom-data
class Dataset:
def __init__(self, root):
"""The init function should not do any heavy lifting, but
must determine how many items are available in this data set.
"""
from os import listdir
from os.path import isfile, join
self.featurepath = root + "/features"
self.labelpath = root + "/labels"
self.ROOT = root
self.featurelist = [f for f in listdir(self.featurepath) if isfile(join(self.featurepath, f))]
self.labellist = [f for f in listdir(self.labelpath) if isfile(join(self.labelpath, f))]
def __len__(self):
"""return number of points in our dataset"""
return len(self.featurelist)
def __getitem__(self, idx):
""" Return the item requested by `idx`.
The PyTorch DataLoader class will use this method to make an iterable for
our training or validation loop.
"""
imagepath = self.featurepath+"/"+ "Feature" +str(idx)+".npy"
img = np.load(imagepath)
labelpath = self.labelpath+"/"+ "label"+str(idx)
#read the pickled label back; 'rb' = read binary
with open(labelpath, "rb") as f:
label = pickle.load(f)
labels_2 = np.array(label[0])
labels_4 = np.array(label[1])
labels_8 = np.array(label[2])
labels_16 = np.array(label[3])
return img, labels_2 , labels_4 , labels_8, labels_16
def define_dataset(self,train_test_switch):
dataset = None
try:
if train_test_switch == "train":
trainDatasetSaveplace = FileParentPath + "/Datasets/train"
trainDataset = self.Dataset(trainDatasetSaveplace)
#Now, you can instantiate the DataLoader:
trainDataloader = DataLoader(trainDataset, batch_size=opt.batch_size, shuffle=True, num_workers=opt.n_cpu, drop_last=True)
dataiter = iter(trainDataloader)
trainDataset = next(dataiter)
trainDataset = transforms.ToTensor()
dataset = trainDataset
dataloader = trainDataloader
elif
header_roots = {header: (header.transaction_root, header.uncles_hash) for header in headers}
completed_header_roots = valfilter(lambda root: root in bodies_by_root, header_roots)
completed_headers = tuple(completed_header_roots.keys())
# store bodies for later usage, during block import
pending_bodies = {
header: bodies_by_root[root]
for header, root in completed_header_roots.items()
}
self._pending_bodies = merge(self._pending_bodies, pending_bodies)
self.logger.debug(
"Got block bodies for %d/%d headers from %s, from %r..%r",
len(completed_header_roots),
len(headers),
peer,
headers[0],
headers[-1],
)
return block_body_bundles, completed_headers
async def _request_block_bodies(
self,
peer: ETHPeer,
batch: Sequence[BlockHeaderAPI]) -> Tuple[BlockBodyBundle, ...]:
"""
Requests the batch of block bodies from the given peer, returning the
returned block bodies data, or an empty tuple on an error.
"""
self.logger.debug("Requesting block bodies for %d headers from %s", len(batch), peer)
try:
block_body_bundles = await peer.eth_api.get_block_bodies(tuple(batch))
except asyncio.TimeoutError:
self.logger.debug(
"Timed out requesting block bodies for %d headers from %s", len(batch), peer,
)
return tuple()
except CancelledError:
self.logger.debug("Pending block bodies call to %r future cancelled", peer)
return tuple()
except OperationCancelled:
self.logger.debug2("Pending block bodies call to %r operation cancelled", peer)
return tuple()
except PeerConnectionLost:
self.logger.debug("Peer went away, cancelling the block body request and moving on...")
return tuple()
except Exception:
self.logger.exception("Unknown error when getting block bodies from %s", peer)
raise
return block_body_bundles
async def _log_missing_parent(
self,
first_header: BlockHeaderAPI,
highest_block_num: int,
missing_exc: Exception) -> None:
self.logger.warning("Parent missing for header %r, restarting header sync", first_header)
block_num = first_header.block_number
try:
local_header = await self.db.coro_get_canonical_block_header_by_number(block_num)
except HeaderNotFound as exc:
self.logger.debug("Could not find canonical header at #%d: %s", block_num, exc)
local_header = None
try:
local_parent = await self.db.coro_get_canonical_block_header_by_number(
BlockNumber(block_num - 1)
)
except HeaderNotFound as exc:
self.logger.debug("Could not find canonical header parent at #%d: %s", block_num, exc)
local_parent = None
try:
canonical_tip = await self.db.coro_get_canonical_head()
except HeaderNotFound as exc:
self.logger.debug("Could not find canonical tip: %s", exc)
canonical_tip = None
self.logger.debug(
(
"Header syncer returned header %s, which has no parent in our DB. "
"Instead at #%d, our header is %s, whose parent is %s, with canonical tip %s. "
"The highest received header is %d. Triggered by missing dependency: %s"
),
first_header,
block_num,
local_header,
local_parent,
canonical_tip,
highest_block_num,
missing_exc,
)
class FastChainSyncer(BaseService):
def __init__(self,
chain: AsyncChainAPI,
db: BaseAsyncChainDB,
peer_pool: ETHPeerPool,
token: CancelToken = None) -> None:
super().__init__(token=token)
self._header_syncer = ETHHeaderChainSyncer(chain, db, peer_pool, token=self.cancel_token)
self._body_syncer = FastChainBodySyncer(
chain,
db,
peer_pool,
self._header_syncer,
self.cancel_token,
)
@property
def is_complete(self) -> bool:
return self._body_syncer.is_complete
async def _run(self) -> None:
self.run_daemon(self._header_syncer)
await self._body_syncer.run()
# The body syncer will exit when the body for the target header hash has been persisted
self._header_syncer.cancel_nowait()
@enum.unique
class BlockPersistPrereqs(enum.Enum):
STORE_BLOCK_BODIES = enum.auto()
STORE_RECEIPTS = enum.auto()
class ChainSyncStats(NamedTuple):
prev_head: BlockHeaderAPI
latest_head: BlockHeaderAPI
elapsed: float
num_blocks: int
blocks_per_second: float
num_transactions: int
transactions_per_second: float
class ChainSyncPerformanceTracker:
def __init__(self, head: BlockHeaderAPI) -> None:
# The `head` from the previous time we reported stats
self.prev_head = head
# The latest `head` we have synced
self.latest_head = head
# A `Timer` object to report elapsed time between reports
self.timer = Timer()
# EMA of the blocks per second
self.blocks_per_second_ema = EMA(initial_value=0, smoothing_factor=0.05)
# EMA of the transactions per second
self.transactions_per_second_ema = EMA(initial_value=0, smoothing_factor=0.05)
# Number of transactions processed
self.num_transactions = 0
def record_transactions(self, count: int) -> None:
self.num_transactions += count
def set_latest_head(self, head: BlockHeaderAPI) -> None:
self.latest_head = head
def report(self) -> ChainSyncStats:
elapsed = self.timer.pop_elapsed()
num_blocks = self.latest_head.block_number - self.prev_head.block_number
blocks_per_second = num_blocks / elapsed
transactions_per_second = self.num_transactions / elapsed
self.blocks_per_second_ema.update(blocks_per_second)
self.transactions_per_second_ema.update(transactions_per_second)
stats = ChainSyncStats(
prev_head=self.prev_head,
latest_head=self.latest_head,
elapsed=elapsed,
num_blocks=num_blocks,
blocks_per_second=self.blocks_per_second_ema.value,
num_transactions=self.num_transactions,
transactions_per_second=self.transactions_per_second_ema.value,
)
# reset the counters
self.num_transactions = 0
self.prev_head = self.latest_head
return stats
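`ChainSyncPerformanceTracker` smooths the block and transaction rates with an `EMA` helper imported elsewhere in the project. A minimal sketch of such an exponential moving average, assuming the same constructor signature used above (the real helper may differ):

```python
class EMA:
    """Exponential moving average: value += smoothing_factor * (sample - value)."""

    def __init__(self, initial_value: float, smoothing_factor: float) -> None:
        if not 0 < smoothing_factor <= 1:
            raise ValueError("smoothing_factor must be in (0, 1]")
        self._value = initial_value
        self._alpha = smoothing_factor

    def update(self, sample: float) -> None:
        # Move a fraction `alpha` of the way toward the new sample.
        self._value += self._alpha * (sample - self._value)

    @property
    def value(self) -> float:
        return self._value
```

With `smoothing_factor=0.05` as above, a single noisy sample moves the reported rate only 5% of the way toward itself, which keeps the bps/tps log lines stable.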
class FastChainBodySyncer(BaseBodyChainSyncer):
"""
Sync with the Ethereum network by fetching block headers/bodies and storing them in our DB.
Here, the run() method returns as soon as we complete a sync with the peer that announced the
highest TD, at which point we must run the StateDownloader to fetch the state for our chain
head.
"""
def __init__(self,
chain: AsyncChainAPI,
db: BaseAsyncChainDB,
peer_pool: ETHPeerPool,
header_syncer: HeaderSyncerAPI,
token: CancelToken = None) -> None:
super().__init__(chain, db, peer_pool, header_syncer, token)
# queue up any idle peers, in order of how fast they return receipts
self._receipt_peers: WaitingPeers[ETHPeer] = WaitingPeers(commands.Receipts)
# Track receipt download tasks
# - arbitrarily allow several requests-worth of headers queued up
# - try to get receipts from lower block numbers first
buffer_size = MAX_RECEIPTS_FETCH * REQUEST_BUFFER_MULTIPLIER
self._receipt_tasks = TaskQueue(buffer_size, attrgetter('block_number'))
# track when both bodies and receipts are collected, so that blocks can be persisted
self._block_persist_tracker = OrderedTaskPreparation(
BlockPersistPrereqs,
id_extractor=attrgetter('hash'),
# make sure that a block is not persisted until the parent block is persisted
dependency_extractor=attrgetter('parent_hash'),
)
# Track whether the fast chain syncer completed its goal
self.is_complete = False
async def _sync_from(self) -> BlockHeaderAPI:
"""
Select which header should be the last known header.
Start by importing headers that are children of this tip.
"""
return await self.db.coro_get_canonical_head()
async def _run(self) -> None:
head = await self.wait(self._sync_from())
self.tracker = ChainSyncPerformanceTracker(head)
self._block_persist_tracker.set_finished_dependency(head)
self.run_daemon_task(self._launch_prerequisite_tasks())
self.run_daemon_task(self._assign_receipt_download_to_peers())
self.run_daemon_task(self._assign_body_download_to_peers())
self.run_daemon_task(self._persist_ready_blocks())
self.run_daemon_task(self._display_stats())
await super()._run()
def register_peer(self, peer: BasePeer) -> None:
# when a new peer is added to the pool, add it to the idle peer lists
super().register_peer(peer)
peer = cast(ETHPeer, peer)
self._body_peers.put_nowait(peer)
self._receipt_peers.put_nowait(peer)
async def _should_skip_header(self, header: BlockHeaderAPI) -> bool:
"""
Should we skip trying to import this header?
Return True if the syncing of header appears to be complete.
This is fairly relaxed about the definition, preferring speed over slow precision.
"""
return await self.db.coro_header_exists(header.hash)
async def _launch_prerequisite_tasks(self) -> None:
"""
Watch for new headers to be added to the queue, and add the prerequisite
tasks as they become available.
"""
async for headers in self._sync_from_headers(
self._block_persist_tracker,
self._should_skip_header):
# Sometimes duplicates are added to the queue, when switching from one sync to another.
# We can simply ignore them.
new_body_tasks = tuple(h for h in headers if h not in self._block_body_tasks)
new_receipt_tasks = tuple(h for h in headers if h not in self._receipt_tasks)
# if any one of the output queues gets full, hang until there is room
await self.wait(asyncio.gather(
self._block_body_tasks.add(new_body_tasks),
self._receipt_tasks.add(new_receipt_tasks),
))
async def _display_stats(self) -> None:
while self.is_operational:
await self.sleep(5)
self.logger.debug(
"(in progress, queued, max size) of bodies, receipts: %r. Write capacity? %s",
[(q.num_in_progress(), len(q), q._maxsize) for q in (
self._block_body_tasks,
self._receipt_tasks,
)],
"yes" if self._db_buffer_capacity.is_set() else "no",
)
stats = self.tracker.report()
utcnow = int(datetime.datetime.utcnow().timestamp())
head_age = utcnow - stats.latest_head.timestamp
self.logger.info(
(
"blks=%-4d "
"txs=%-5d "
"bps=%-3d "
"tps=%-4d "
"elapsed=%0.1f "
"head=#%d %s "
"age=%s"
),
stats.num_blocks,
stats.num_transactions,
stats.blocks_per_second,
stats.transactions_per_second,
stats.elapsed,
stats.latest_head.block_number,
humanize_hash(stats.latest_head.hash),
humanize_seconds(head_age),
)
async def _persist_ready_blocks(self) -> None:
"""
Persist blocks as soon as all their prerequisites are done: body and receipt downloads.
Persisting must happen in order, so that the block's parent has already been persisted.
Also, determine if fast sync with this peer should end, having reached (or surpassed)
its target hash. If so, shut down this service.
"""
while self.is_operational:
# This tracker waits for all prerequisites to be complete, and returns headers in
# order, so that each header's parent is already persisted.
get_completed_coro = self._block_persist_tracker.ready_tasks(BLOCK_QUEUE_SIZE_TARGET)
completed_headers = await self.wait(get_completed_coro)
if self._block_persist_tracker.has_ready_tasks():
# Even after clearing out a big batch, there is no available capacity, so
# pause any coroutines that might wait for capacity
self._db_buffer_capacity.clear()
else:
# There is available capacity, let any waiting coroutines continue
self._db_buffer_capacity.set()
await self.wait(self._persist_blocks(completed_headers))
target_hash = self._header_syncer.get_target_header_hash()
if target_hash in [header.hash for header in completed_headers]:
# exit the service when reaching the target hash
self._mark_complete()
break
def _mark_complete(self) -> None:
self.is_complete = True
self.cancel_nowait()
async def _persist_blocks(self, headers: Sequence[BlockHeaderAPI]) -> None:
"""
Persist blocks for the given headers, directly to the database
:param headers: headers for which block bodies and receipts have been downloaded
"""
for header in headers:
vm_class = self.chain.get_vm_class(header)
block_class = vm_class.get_block_class()
if _is_body_empty(header):
transactions: List[SignedTransactionAPI] = []
uncles: List[BlockHeaderAPI] = []
else:
body = self._pending_bodies.pop(header)
uncles = body.uncles
# transaction data was already persisted in _block_body_bundle_processing, but
# we need to include the transactions for them to be added to the hash->txn lookup
tx_class = block_class.get_transaction_class()
transactions = [tx_class.from_base_transaction(tx) for tx
BCC_B2 PHASE
$ metastable
$
$ Present work: July 1999, study of Al-Cr-Ni, revision of NDTH.
PARAMETER G(BCC_B2,CR:NI:VA;0) 298.15 4000;,,N 01DUP !
PARAMETER G(BCC_B2,NI:CR:VA;0) 298.15 4000;,,N 01DUP !
$
$ L12_FCC PHASE
$ metastable
$ Present work: July 1999, study of Al-Cr-Ni, revision of NDTH.
$ The L12 phase is metastable in the binary Cr-Ni while it was stable in NDTH.
FUN U1CRNI 298.15 -1980;,,,N 01DUP !
$ FUN U1CRNI 298.15 -7060+3.63*T;,,,N 01DUP !
FUN U3CRNI 298.15 0;,,,N 01DUP !
FUN U4CRNI 298.15 0;,,,N 01DUP !
FUNCTION L04CRNI 298.15 U3CRNI;,,N 01DUP !
FUNCTION L14CRNI 298.15 U4CRNI;,,N 01DUP !
FUNCTION CRNI3 298.15 +3*U1CRNI;,,,N 01DUP !
FUNCTION CR2NI2 298.15 +4*U1CRNI;,,,N 01DUP !
FUNCTION CR3NI 298.15 +3*U1CRNI;,,,N 01DUP !
PARAMETER G(L12_FCC,NI:CR:VA;0) 298.15 +CRNI3;,, N 01DUP !
PARAMETER G(L12_FCC,CR:NI:VA;0) 298.15 +CR3NI;,, N 01DUP !
PARAMETER L(L12_FCC,CR,NI:CR:VA;0) 298.15
-1.5*CRNI3+1.5*CR2NI2+1.5*CR3NI;,,N 01DUP !
PARAMETER L(L12_FCC,CR,NI:NI:VA;0) 298.15
+1.5*CRNI3+1.5*CR2NI2-1.5*CR3NI;,,N 01DUP !
PARAMETER L(L12_FCC,CR,NI:CR:VA;1) 298.15
+0.5*CRNI3-1.5*CR2NI2+1.5*CR3NI;,,N 01DUP !
PARAMETER L(L12_FCC,CR,NI:NI:VA;1) 298.15
-1.5*CRNI3+1.5*CR2NI2-0.5*CR3NI;,,N 01DUP !
PARAMETER L(L12_FCC,*:CR,NI:VA;0) 298.15 +L04CRNI;,,N 01DUP !
PARAMETER L(L12_FCC,*:CR,NI:VA;1) 298.15 +L14CRNI;,,N 01DUP !
PARAMETER L(L12_FCC,CR,NI:*:VA;0) 298.15 +3*L04CRNI;,,N 01DUP !
PARAMETER L(L12_FCC,CR,NI:*:VA;1) 298.15 +3*L14CRNI;,,N 01DUP !
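All of the Cr-Ni L12 parameters above are generated from the single bond energy U1CRNI through Dupin's interaction-constraint scheme. A small sketch of that arithmetic (hypothetical Python helper; the coefficients are copied from the PARAMETER lines above):

```python
def l12_crni_interactions(u1crni):
    """Derive the L12_FCC Cr-Ni interaction parameters from U1CRNI."""
    # FUNCTIONs CRNI3, CR2NI2, CR3NI above
    crni3, cr2ni2, cr3ni = 3 * u1crni, 4 * u1crni, 3 * u1crni
    l0_cr = -1.5 * crni3 + 1.5 * cr2ni2 + 1.5 * cr3ni  # L(L12_FCC,CR,NI:CR:VA;0)
    l0_ni = +1.5 * crni3 + 1.5 * cr2ni2 - 1.5 * cr3ni  # L(L12_FCC,CR,NI:NI:VA;0)
    l1_cr = +0.5 * crni3 - 1.5 * cr2ni2 + 1.5 * cr3ni  # L(L12_FCC,CR,NI:CR:VA;1)
    l1_ni = -1.5 * crni3 + 1.5 * cr2ni2 - 0.5 * cr3ni  # L(L12_FCC,CR,NI:NI:VA;1)
    return l0_cr, l0_ni, l1_cr, l1_ni
```

Note that with CRNI3 = CR3NI = 3*U1CRNI and CR2NI2 = 4*U1CRNI, both first-order (;1) terms cancel to zero, so only the symmetric zeroth-order contribution survives.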
$
$ SIGMA_SGTE PHASE
$ metastable
$ Modified by <NAME>, NIST, Gaithersburg, MD, USA
PARAMETER G(SIGMA_SGTE,NI:CR:CR;0) 298.15
+221157-227*T+8*GHSERNI+18*GHSERCR+4*GHSERCR;,,N 88GUS1 !
PARAMETER G(SIGMA_SGTE,NI:NI:CR;0) 298.15
+175400+8*GHSERNI+18*GNIBCC+4*GHSERCR;,,N 88GUS1 !
$
$****************************************************************************
$
$ TERNARY PARAMETERS
$
$----------------------------------------------------------------------------
$
$ Al-Co-Cr
$ Mainly from the current work (2015)
$ The Co-Cr description from Kusoffsky is used instead of Oikawa, as its
$ energies are shown to be better for sigma; it also produces a better
$ Co-Cr-Ni extrapolation
$
$ LIQUID PHASE
$
$ Simple parameter to suppress liquid formation at 1573 K
PARAMETER G(LIQUID,AL,CO,CR;0) 298.15 +30000;,,N 15LIU !
$
$ FCC_A1 PHASE
$
PARAMETER G(FCC_A1,AL,CO,CR:VA;0) 298.15 +989.5+8.277709*T;,,N 15LIU !
$
$ BCC_A2 PHASE
$
$ The BCC_A2 L0, L1 and L2 parameters are all used;
$ using L0 alone cannot reproduce accurate phase boundaries for the A2/B2.
$ The Al-Co and Al-Cr binaries may need substantial rework to reduce these.
PARAMETER G(BCC_A2,AL,CO,CR:VA;0) 298.15 -28128;,,N 15LIU !
PARAMETER G(BCC_A2,AL,CO,CR:VA;1) 298.15 123468-61.878631*T;,,N 15LIU !
PARAMETER G(BCC_A2,AL,CO,CR:VA;2) 298.15 -12107-32.267228*T;,,N 15LIU !
$
$ BCC_B2 PHASE
$
FUNCTION BCOCRMAL 298.15 -22138;,,N 15LIU !
FUNCTION BALCOMCR 298.15 -61790;,,N 15LIU !
PARAMETER G(BCC_B2,AL:CO,CR:VA;0) 298.15 +.5*BCOCRMAL;,,N 15LIU !
PARAMETER G(BCC_B2,CO,CR:AL:VA;0) 298.15 +.5*BCOCRMAL;,,N 15LIU !
PARAMETER G(BCC_B2,AL,CO:CR:VA;0) 298.15 +.5*BALCOMCR;,,N 15LIU !
PARAMETER G(BCC_B2,CR:AL,CO:VA;0) 298.15 +.5*BALCOMCR;,,N 15LIU !
$
$ L12_FCC PHASE
$ metastable
$ Present work: 2015 CRALDAD
$ Using the same interaction constraints shown by Dupin for Al-Cr-Ni
FUN U1ALCOCR 298.15 0;,,N 15LIU !
FUN U2ALCOCR 298.15 0;,,N 15LIU !
FUN U3ALCOCR 298.15 0;,,N 15LIU !
FUN ALCOCR2 298.15 U1ALCO+2*U1ALCR+2*U1COCR+U1ALCOCR;,,N 15LIU !
FUN ALCO2CR 298.15 2*U1ALCO+U1ALCR+2*U1COCR+U2ALCOCR;,,N 15LIU !
FUN AL2COCR 298.15 2*U1ALCO+2*U1ALCR+U1COCR+U3ALCOCR;,,N 15LIU !
PARA L(L12_FCC,AL,CO,CR:AL:VA;0) 298.15
-1.5*ALCOCR2-1.5*ALCO2CR+ALCO3+ALCR3+6*AL2COCR
-1.5*AL2CO2-1.5*AL2CR2-1.5*AL3CO-1.5*AL3CR;,,N 15LIU !
PARA L(L12_FCC,AL,CO,CR:CO:VA;0) 298.15
-1.5*ALCOCR2+6*ALCO2CR-1.5*ALCO3-1.5*AL2COCR
-1.5*AL2CO2+AL3CO+COCR3-1.5*CO2CR2-1.5*CO3CR;,,N 15LIU !
PARA L(L12_FCC,AL,CO,CR:CR:VA;0) 298.15
+6*ALCOCR2-1.5*ALCO2CR-1.5*ALCR3-1.5*AL2COCR
-1.5*AL2CR2+AL3CR-1.5*COCR3-1.5*CO2CR2+CO3CR;,,N 15LIU !
PARA L(L12_FCC,AL,CO:CR:VA;0) 298.15
+1.5*ALCO2CR+1.5*AL2COCR-1.5*AL3CR-1.5*CO3CR;,,N 15LIU !
PARA L(L12_FCC,AL,CR:CO:VA;0) 298.15
+1.5*ALCOCR2+1.5*AL2COCR-1.5*AL3CO-1.5*COCR3;,,N 15LIU !
PARA L(L12_FCC,CO,CR:AL:VA;0) 298.15
+1.5*ALCOCR2+1.5*ALCO2CR-1.5*ALCO3-1.5*ALCR3;,,N 15LIU !
PARA L(L12_FCC,AL,CO:CR:VA;1) 298.15
-1.5*ALCO2CR+1.5*AL2COCR-0.5*AL3CR+0.5*CO3CR;,,N 15LIU !
PARA L(L12_FCC,AL,CR:CO:VA;1) 298.15
-1.5*ALCOCR2+1.5*AL2COCR-0.5*AL3CO+0.5*COCR3;,,N 15LIU !
PARA L(L12_FCC,CO,CR:AL:VA;1) 298.15
-1.5*ALCOCR2+1.5*ALCO2CR-0.5*ALCO3+0.5*ALCR3;,,N 15LIU !
$
$ SIGMA_SGTE PHASE
$
PARAMETER G(SIGMA_SGTE,CO:AL:CR;0) 298.15
-931862+8*GCOFCC+18*GALBCC+4*GHSERCR;,,N 15LIU !
PARAMETER G(SIGMA_SGTE,AL:CO:CR;0) 298.15
-617537+8*GHSERAL+18*GCOBCC+4*GHSERCR;,,N 15LIU !
PARAMETER G(SIGMA_SGTE,CO:AL,CR:CR;0) 298.15 -200000;,,N 15LIU !
$
$----------------------------------------------------------------------------
$
$ Al-Co-Ni
$ Mainly from the current work (2015)
$ Update of Al-Co-Ni from NDTH by Dupin
$
$
$
$ LIQUID PHASE
$
PARAMETER G(LIQUID,AL,CO,NI;0) 298.15 20000;,,N 15LIU !
$
$ FCC_A1 PHASE
$
$$$$$ NONE
$
$ BCC_A2 PHASE
$ metastable
$
PARAMETER G(BCC_A2,AL,CO,NI:VA;0) 298.15 -15483;,,N 15LIU !
$
$ BCC_B2 PHASE
$
FUNCTION BALCOMNI 298.15 -43538;,,N 15LIU !
PARAMETER G(BCC_B2,NI:AL,CO:VA;0) 298.15 +.5*BALCOMNI;,,N 15LIU !
PARAMETER G(BCC_B2,AL,CO:NI:VA;0) 298.15 +.5*BALCOMNI;,,N 15LIU !
$
$ L12_FCC PHASE
$ Present work: 2015 CRALDAD
$ Using the same interaction constraints shown by Dupin for Al-Cr-Ni
FUN U1ALCONI 298.15 0;,,N 15LIU !
FUN U2ALCONI 298.15 0;,,N 15LIU !
FUN U3ALCONI 298.15 0;,,N 15LIU !
FUN ALCONI2 298.15 U1ALCO+2*U1ALNI+2*U1CONI+U1ALCONI;,,N 15LIU !
FUN ALCO2NI 298.15 2*U1ALCO+U1ALNI+2*U1CONI+U2ALCONI;,,N 15LIU !
FUN AL2CONI 298.15 2*U1ALCO+2*U1ALNI+U1CONI+U3ALCONI;,,N 15LIU !
PARA L(L12_FCC,AL,CO,NI:AL:VA;0) 298.15
-1.5*ALCONI2-1.5*ALCO2NI+ALCO3+ALNI3+6*AL2CONI
-1.5*AL2CO2-1.5*AL2NI2-1.5*AL3CO-1.5*AL3NI;,,N 15LIU !
PARA L(L12_FCC,AL,CO,NI:CO:VA;0) 298.15
-1.5*ALCONI2+6*ALCO2NI-1.5*ALCO3-1.5*AL2CONI
-1.5*AL2CO2+AL3CO+CONI3-1.5*CO2NI2-1.5*CO3NI;,,N 15LIU !
PARA L(L12_FCC,AL,CO,NI:NI:VA;0) 298.15
+6*ALCONI2-1.5*ALCO2NI-1.5*ALNI3-1.5*AL2CONI
-1.5*AL2NI2+AL3NI-1.5*CONI3-1.5*CO2NI2+CO3NI;,,N 15LIU !
PARA L(L12_FCC,AL,CO:NI:VA;0) 298.15
+1.5*ALCO2NI+1.5*AL2CONI-1.5*AL3NI-1.5*CO3NI;,,N 15LIU !
PARA L(L12_FCC,AL,NI:CO:VA;0) 298.15
+1.5*ALCONI2+1.5*AL2CONI-1.5*AL3CO-1.5*CONI3;,,N 15LIU !
PARA L(L12_FCC,CO,NI:AL:VA;0) 298.15
+1.5*ALCONI2+1.5*ALCO2NI-1.5*ALCO3-1.5*ALNI3;,,N 15LIU !
PARA L(L12_FCC,AL,CO:NI:VA;1) 298.15
-1.5*ALCO2NI+1.5*AL2CONI-0.5*AL3NI+0.5*CO3NI;,,N 15LIU !
PARA L(L12_FCC,AL,NI:CO:VA;1) 298.15
-1.5*ALCONI2+1.5*AL2CONI-0.5*AL3CO+0.5*CONI3;,,N 15LIU !
PARA L(L12_FCC,CO,NI:AL:VA;1) 298.15
-1.5*ALCONI2+1.5*ALCO2NI-0.5*ALCO3+0.5*ALNI3;,,N 15LIU !
$
$----------------------------------------------------------------------------
$
$ Al-Cr-Ni
$ July 1999, ND
$ Revision. Main changes:
$ - description of the A2/B2
$ - new liquidus data taken into account
$ - simpler ternary interaction parameters
$
$ LIQUID PHASE
$
PARAMETER L(LIQUID,AL,CR,NI;0) 298.15 16000;,,N 01DUP !
$
$ FCC_A1 PHASE
$
PARAMETER G(FCC_A1,AL,CR,NI:VA;0) 298.15 30300;,,N 01DUP !
$
$ BCC_A2 PHASE
$
PARAMETER G(BCC_A2,AL,CR,NI:VA;0) 298.15 42500;,,N 01DUP !
$
$ L12_FCC PHASE
$
FUN U1ALCRNI 298.15 6650;,,N 01DUP !
FUN U2ALCRNI 298.15 0;,,N 01DUP !
FUN U3ALCRNI 298.15 0;,,N 01DUP !
FUN ALCRNI2 298.15 U1ALCR+2*U1ALNI+2*U1CRNI+U1ALCRNI;,,N 01DUP !
FUN ALCR2NI 298.15 2*U1ALCR+U1ALNI+2*U1CRNI+U2ALCRNI;,,N 01DUP !
FUN AL2CRNI 298.15 2*U1ALCR+2*U1ALNI+U1CRNI+U3ALCRNI;,,N 01DUP !
PARA L(L12_FCC,AL,CR,NI:AL:VA;0) 298.15
-1.5*ALCRNI2-1.5*ALCR2NI+ALCR3+ALNI3+6*AL2CRNI
-1.5*AL2CR2-1.5*AL2NI2-1.5*AL3CR-1.5*AL3NI;,,N 01DUP !
PARA L(L12_FCC,AL,CR,NI:CR:VA;0) 298.15
-1.5*ALCRNI2+6*ALCR2NI-1.5*ALCR3-1.5*AL2CRNI
-1.5*AL2CR2+AL3CR+CRNI3-1.5*CR2NI2-1.5*CR3NI;,,N 01DUP !
PARA L(L12_FCC,AL,CR,NI:NI:VA;0) 298.15
+6*ALCRNI2-1.5*ALCR2NI-1.5*ALNI3-1.5*AL2CRNI
-1.5*AL2NI2+AL3NI-1.5*CRNI3-1.5*CR2NI2+CR3NI;,,N 01DUP !
PARA L(L12_FCC,AL,CR:NI:VA;0) 298.15
+1.5*ALCR2NI+1.5*AL2CRNI-1.5*AL3NI-1.5*CR3NI;,,N 01DUP !
PARA L(L12_FCC,AL,NI:CR:VA;0) 298.15
+1.5*ALCRNI2+1.5*AL2CRNI-1.5*AL3CR-1.5*CRNI3;,,N 01DUP !
PARA L(L12_FCC,CR,NI:AL:VA;0) 298.15
+1.5*ALCRNI2+1.5*ALCR2NI-1.5*ALCR3-1.5*ALNI3;,,N 01DUP !
PARA L(L12_FCC,AL,CR:NI:VA;1) 298.15
-1.5*ALCR2NI+1.5*AL2CRNI-0.5*AL3NI+0.5*CR3NI;,,N 01DUP !
PARA L(L12_FCC,AL,NI:CR:VA;1) 298.15
-1.5*ALCRNI2+1.5*AL2CRNI-0.5*AL3CR+0.5*CRNI3;,,N 01DUP !
PARA L(L12_FCC,CR,NI:AL:VA;1) 298.15
-1.5*ALCRNI2+1.5*ALCR2NI-0.5*ALCR3+0.5*ALNI3;,,N 01DUP !
$
$ SIGMA_SGTE PHASE
$ metastable
PARAMETER G(SIGMA_SGTE,AL:NI:CR;0) 298.15
-1045169+8*GHSERAL+18*GNIBCC+4*GHSERCR;,,N 15LIU !
PARAMETER G(SIGMA_SGTE,NI:AL:CR;0) 298.15
-1169367+8*GHSERNI+18*GALBCC+4*GHSERCR;,,N 15LIU !
$
$----------------------------------------------------------------------------
$
$ Co-Cr-Ni
$ Mainly from the current work (2015)
$ Extrapolations from binaries are slightly improved
$
$
$
$
$ LIQUID PHASE
$
PARAMETER G(LIQUID,CO,CR,NI;0) 298.15 -16000;,,N 15LIU !
$
$ FCC_A1 PHASE
$
PARAMETER G(FCC_A1,CO,CR,NI:VA;0) 298.15 -40710+13.5334*T;,,N 15LIU !
$
$ BCC_A2 PHASE
$
PARAMETER G(BCC_A2,CO,CR,NI:VA;0) 298.15 -60134+17.699513*T;,,N 15LIU !
$
$
$ L12_FCC PHASE
$ metastable
$ Present work: 2015 CRALDAD
$ Using the same interaction constraints shown by Dupin for Al-Cr-Ni
FUN U1COCRNI 298.15 0;,,N 15LIU !
FUN U2COCRNI 298.15 0;,,N 15LIU !
FUN U3COCRNI 298.15 0;,,N 15LIU !
FUN COCRNI2 298.15 U1COCR+2*U1CONI+2*U1CRNI+U1COCRNI;,,N 15LIU !
FUN COCR2NI 298.15 2*U1COCR+U1CONI+2*U1CRNI+U2COCRNI;,,N 15LIU !
FUN CO2CRNI 298.15 2*U1COCR+2*U1CONI+U1CRNI+U3COCRNI;,,N 15LIU !
PARA L(L12_FCC,CO,CR,NI:CO:VA;0) 298.15
-1.5*COCRNI2-1.5*COCR2NI+COCR3+CONI3+6*CO2CRNI
-1.5*CO2CR2-1.5*CO2NI2-1.5*CO3CR-1.5*CO3NI;,,N 15LIU !
PARA L(L12_FCC,CO,CR,NI:CR:VA;0) 298.15
-1.5*COCRNI2+6*COCR2NI-1.5*COCR3-1.5*CO2CRNI
-1.5*CO2CR2+CO3CR+CRNI3-1.5*CR2NI2-1.5*CR3NI;,,N 15LIU !
PARA L(L12_FCC,CO,CR,NI:NI:VA;0) 298.15
+6*COCRNI2-1.5*COCR2NI-1.5*CONI3-1.5*CO2CRNI
-1.5*CO2NI2+CO3NI-1.5*CRNI3-1.5*CR2NI2+CR3NI;,,N 15LIU !
PARA L(L12_FCC,CO,CR:NI:VA;0) 298.15
+1.5*COCR2NI+1.5*CO2CRNI-1.5*CO3NI-1.5*CR3NI;,,N 15LIU !
PARA L(L12_FCC,CO,NI:CR:VA;0) 298.15
+1.5*COCRNI2+1.5*CO2CRNI-1.5*CO3CR-1.5*CRNI3;,,N 15LIU !
PARA L(L12_FCC,CR,NI:CO:VA;0) 298.15
+1.5*COCRNI2+1.5*COCR2NI-1.5*COCR3-1.5*CONI3;,,N 15LIU !
PARA L(L12_FCC,CO,CR:NI:VA;1) 298.15
-1.5*COCR2NI+1.5*CO2CRNI-0.5*CO3NI+0.5*CR3NI;,,N 15LIU !
PARA L(L12_FCC,CO,NI:CR:VA;1) 298.15
-1.5*COCRNI2+1.5*CO2CRNI-0.5*CO3CR+0.5*CRNI3;,,N 15LIU !
PARA L(L12_FCC,CR,NI:CO:VA;1) 298.15
-1.5*COCRNI2+1.5*COCR2NI-0.5*COCR3+0.5*CONI3;,,N 15LIU !
$
$ SIGMA_SGTE PHASE
$ Phase boundaries are well reproduced without the need for ternary end-member
$ excess energies
PARAMETER G(SIGMA_SGTE,NI:CO:CR;0) 298.15
+8*GHSERNI+18*GCOBCC+4*GHSERCR;,,N 15LIU !
PARAMETER G(SIGMA_SGTE,CO:NI:CR;0) 298.15
+8*GCOFCC+18*GNIBCC+4*GHSERCR;,,N 15LIU !
$
$----------------------------------------------------------------------------
$----------------------------------------------------------------------------
$----------------------------------------------------------------------------
$----------------------------------------------------------------------------
$
$ Al-Co-Cr-Ni
$ Mainly from the current work (2015)
$ Extrapolations from ternaries only
$
$
$ LIQUID PHASE
$
$$$$$ NONE
$
$ FCC_A1 PHASE
$
$$$$$ NONE
$
$ BCC_A2 PHASE
$
$$$$$ NONE
$
$ BCC_B2 PHASE
$
$$$$$ NONE
$
$ L12_FCC PHASE
$
$ Using the same interaction constraints shown by Dupin in NDTH
FUNCTION ALCOCRNI 298.15 0.0;,,N 15LIU !
PARAMETER G(L12_FCC,AL,CO,CR:NI:VA;0) 298.15 +AL3NI+CO3NI+CR3NI
-1.5*ALCO2NI-1.5*ALCR2NI-1.5*AL2CONI-1.5*AL2CRNI-1.5*COCR2NI
-1.5*CO2CRNI+6*ALCOCRNI;,,N 15LIU !
PARAMETER G(L12_FCC,AL,CO,NI:CR:VA;0) 298.15 +AL3CR+CO3CR+CRNI3
-1.5*ALCO2CR-1.5*ALCRNI2-1.5*AL2COCR-1.5*AL2CRNI-1.5*COCRNI2
-1.5*CO2CRNI+6*ALCOCRNI;,,N 15LIU !
PARAMETER G(L12_FCC,AL,CR,NI:CO:VA;0) 298.15 +AL3CO+COCR3+CONI3
-1.5*ALCOCR2-1.5*ALCONI2-1.5*AL2COCR-1.5*AL2CONI-1.5*COCRNI2
    -1.5*COCR2NI+6*ALCOCRNI;,,N 15LIU !
PARAMETER G(L12_FCC,CO,CR,NI:AL:VA;0) 298.15 +ALCO3+ALCR3+ALNI3
-1.5*ALCOCR2-1.5*ALCONI2-1.5*ALCO2CR-1.5*ALCO2NI-1.5*ALCRNI2
-1.5*ALCR2NI+6*ALCOCRNI;,,N 15LIU !
$
$****************************************************************************
LIST_OF_REFERENCES
NUMBER SOURCE
86DIN '<NAME>, <NAME>, MTDS NPL, Unpublished work (1986); CR-NI'
89DIN '<NAME>, SGTE Data for Pure Elements,
NPL Report DMA(A)195 September 1989'
91DIN '<NAME>, SGTE Data for Pure Elements, NPL Report
DMA(A)195 Rev. August 1990'
91LEE '<NAME>, unpublished revision (1991); C-Cr-Fe-Ni'
91SAU1 '<NAME>, 1991, based on
<NAME>, <NAME>
Z. metallkde, 78 (11), 795-801 (1987); Al-Cr'
91DIN '<NAME>, SGTE Data for Pure Elements,
Calphad Vol 15(1991) p 317-425,
also in NPL Report DMA(A)195 Rev. August 1990'
95DUP3 '<NAME>, Thesis, LTPCM, France, 1995;
Al-Ni,
also in <NAME>, <NAME>, <NAME>, <NAME>
J. Alloys Compds, 247 (1-2), 20-30 (1997)'
99DUP '<NAME>, <NAME>,
Z. metallkd., Vol 90 (1999) p 76-85;
Al-Ni'
01DUP '<NAME>, <NAME>
Thermodynamic Re-Assessment of the Ternary System Al-Cr-Ni,
Calphad, 25 (2), 279-298 (2001); Al-Cr-Ni'
REF184 'AL1<G> CODATA KEY VALUES SGTE **
ALUMINIUM <GAS>
Cp values similar in Codata Key Values and IVTAN Vol. 3'
REF448 'AL2<G> CHATILLON(1992)
Enthalpy of formation for Al1<g> taken from Codata Key Values.
Enthalpy of form. from TPIS dissociation energy mean Value
corrected with new fef from <NAME>. and <NAME>.
(J.Phys. Chem. 92(1988)2774) ab initio calculations.'
REF4469 'CO1<G> T.C.R.A.S Class: 1
Data provided by TCRAS. October 1996. Error in version 1997.
S298 corrected to 1bar.'
REF4561 'CO2<G> T.C.R.A.S Class: 6
Data provided by T.C.R.A.S. October 1996.'
REF4465 'CR1<G> T.C.R.A.S. Class: 1
CHROMIUM <GAS>'
REF4591 'CR2<G> T.C.R.A.S. Class: 6'
REF7504 'NI1<G> T.C.R.A.S Class: 1
Data provided by T.C.R.A.S. October 1996'
REF7553 'NI2<G> T.C.R.A.S Class: 5
Data provided by T.C.R.A.S. October 1996'
 90DIN '<NAME>,
#!/usr/bin/python3
"""
This module provides the base class for driving unipolar and bipolar stepper motors via a number of different chips.
"""
import pootlestuff.watchables as wv
from pootlestuff.watchables import loglvls
import threading, time
class basestepper(wv.watchablepigpio):
"""
A base class for stepper motors that takes care of the higher level logic.
It supports:
direct stepping from software (for slower speeds and with the potential for responsive dynamic feedback control)
        fast stepping using DMA control blocks to provide very accurate timing, with ramp up and down
        capability, using the motorset class.
The motor accepts a small number of commands:
---------------------------------------------
close: close the motor down - it cannot be used again once closed
        stop: stop the motor - if it is running it will stop in a controlled manner and the op mode will revert to stopped
goto: the motor will travel to the target position and then monitor target position for change until stop is requested.
        onegoto: the motor will travel to the target position and then revert to stopped.
run: the motor will run at the target speed and in the target direction until stop is requested
    The motor can be in one of a number of (operation) modes. These are set by the motor and show the current state.
-------------------------------------------------------
closed : Motor has shut down and cannot be used - the motor can only be run again by re-creating the object
    stopped : The motor is not currently active. There may be drive current to hold position.
running : The motor is actioning a command - usually moving but it may be stationary waiting (for example,
paused when about to change direction, or waiting for the target location to change in goto mode)
    stepmodes provide detailed control of step timings by defining the class used to generate tick intervals,
    as well as whether the stepping will be driven directly from software or using DMA controlled stepping.
    The class runs some aspects of the motor control in its own thread; commands are passed to that thread to process.
rawposn is an integer representation of the position in microsteps at the highest microstep level.
"""
opmodes= ('closed', 'stopped', 'softrun', 'dmarun')
commands=('none', 'close', 'stop', 'goto', 'run', 'onegoto', 'setpos')
def __init__(self, name, app, value, wabledefs=[], **kwargs):
"""
sets up a generic stepper motor driver.
name : used in log messages, and to identify motor in wave processing
app : multi motor controller
value : dict with saved values for watchables
wabledefs: extra watchables to be included in the object
settings must be included in kwargs to define pins and fixed values for the motor. These vary
with the inheriting class.
see the example file 'motorset.json'
kwargs : other args passed to watchable.
"""
self.name=name
self.mthread=None
self.tlogs={}
modewables=[]
rmodes=[]
        for smode, svals in value['stepmodes'].items(): # find the stepmodes declared in values (originally from the json file) and make field defs
modewables.append((smode, app.classdefs[svals['stepclass']], None, False, {'name': smode, 'motor': self}))
rmodes.append(smode)
self.stepmodenames=rmodes
wables=wabledefs+[
('userstepm', wv.enumWatch, rmodes[0], False, {'vlist': rmodes}), # available stepping modes - used by the gui, but needs rmodes...
('targetrawpos',wv.intWatch, 0, False), # target position for goto
('target_dir', wv.intWatch, 1, False), # target direction for run - +ve for fwd, -ve for reverse
('opmode', wv.enumWatch, self.opmodes[1], False, {'vlist': self.opmodes}), # what mode is current - set by the motor
('holdstopped', wv.floatWatch, .5, False), # when motor stops, drive_enable goes false after this time (in seconds), 0 means drive always enabled
('rawposn', wv.intWatch, 0, False), # current position in microsteps (not totally up to date while fast stepping)
('ticktime', wv.floatWatch, 0, False), # minimum step interval (clamped to minslow for slow stepping)
('stepmodes', wv.watchablesmart,None, False, {'wabledefs': modewables}), # the available stepper control modes
('activestepm', wv.textWatch, '-', False), # when running shows the active step mode
]
super().__init__(wabledefs=wables, app=app, value=value, **kwargs)
self.log(loglvls.INFO,'starting motor %s thread stepper using class %s' % (self.name, type(self).__name__))
self.maxstepfactor = self.getmaxusteplevel()
def waitstop(self):
        if self.mthread is not None:
            self.mthread.join()
self.drive_enable.setValue('disable', wv.myagents.app)
self.opmode.setValue('closed', wv.myagents.app)
def dothis(self, command, targetpos=None, targetdir=None, stepmode=None):
"""
request motor to apply a command.
command:
        'none': no-op command
'close': shuts down motor - no further actions can be taken on this class instance
        'stop': requests the motor to stop in a controlled way. For example, if the motor is running with ramp up / down, the motor will ramp down and stop.
        'goto': requests the motor to move to 'targetpos' using 'stepmode'. 'targetpos' (and various other params) are monitored
                and changes are actioned on the fly: the target position continues to be monitored and the motor responds if it
                changes. This continues until a 'stop' is received.
        'run': requests the motor to run in 'targetdir' using 'stepmode' until a 'stop' is issued. 'targetdir' (and various
                other params) are monitored and changes are actioned on the fly.
'onegoto': requests motor moves to 'targetpos' using 'stepmode'. Once the target is reached the motor stops.
'setpos' : if the motor is currently stopped the current position is changed to 'targetpos' without moving the motor.
"""
        if command=='none':
return
curmode=self.opmode.getValue()
if curmode=='closed':
raise ValueError('motor %s is closed - cannot respond.' % self.name)
assert command in self.commands
if command in ('goto', 'onegoto', 'run'):
if curmode == 'stopped':
                if stepmode not in self.userstepm.vlist:
raise ValueError('stepmode error: stepmode %s not found for motor %s' % (stepmode, self.name))
if command == 'run':
assert targetdir in ('fwd','rev')
if command == 'goto' or command == 'onegoto':
assert isinstance(targetpos, (int,float))
stepdef=getattr(self.stepmodes, stepmode)
if stepdef.mode=='software':
self.stepactive=True
self.stepinf=stepdef
self.opmode.setValue('softrun', wv.myagents.app) # opmode returns to stopped when the thread is about to exit
self.mthread= threading.Thread(name=self.name+'_softrun', target=self._softrun, kwargs={
'stepinf': stepdef,
'command': command,
'targetpos': targetpos,
'targetdir': targetdir})
self.mthread.start()
elif stepdef.mode=='wave':
return 'wave'
else:
raise NotImplementedError('oopsy')
else:
                if targetpos is not None:
self.targetrawpos.setValue(targetpos, wv.myagents.app) # these 2 are monitored by the stepgenerator
                if targetdir is not None:                  # other changed values in the step generator settings
self.target_dir.setValue(1 if targetdir=='fwd' else -1,wv.myagents.app) # are picked up directly in the generator
self.updatetickerparams=True
elif command=='close' or command=='stop':
curmode=self.opmode.getValue()
if curmode=='stopped' or curmode=='closed':
self.drive_enable.setValue('disable', wv.myagents.app)
elif curmode=='softrun' or curmode=='dmarun':
self.stepactive=False
elif command=='setpos':
if curmode=='stopped':
self.rawposn.setValue(int(targetpos), wv.myagents.app)
else:
raise ValueError('cannot set position in mode %s' % curmode)
return None
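# Hypothetical usage of the dothis() command protocol above. StubMotor mimics
# only the mode bookkeeping (the opmodes/commands tuples as in basestepper);
# the step mode name 'slow' is made up and no GPIO, threads or watchables are
# involved, so this is a sketch of the state transitions, not the real driver.

```python
# Minimal stand-in for basestepper's command/state handling, for illustration.
class StubMotor(object):
    opmodes = ('closed', 'stopped', 'softrun', 'dmarun')
    commands = ('none', 'close', 'stop', 'goto', 'run', 'onegoto', 'setpos')

    def __init__(self):
        self.opmode = 'stopped'
        self.rawposn = 0

    def dothis(self, command, targetpos=None, targetdir=None, stepmode=None):
        assert command in self.commands
        if self.opmode == 'closed':
            raise ValueError('motor is closed - cannot respond.')
        if command in ('goto', 'onegoto', 'run'):
            self.opmode = 'softrun'          # a move command starts soft stepping
        elif command == 'stop':
            self.opmode = 'stopped'          # controlled stop reverts the mode
        elif command == 'setpos' and self.opmode == 'stopped':
            self.rawposn = int(targetpos)    # reposition without moving
        elif command == 'close':
            self.opmode = 'closed'           # no further commands accepted

m = StubMotor()
m.dothis('onegoto', targetpos=4000, stepmode='slow')
print(m.opmode)   # softrun
m.dothis('stop')
m.dothis('setpos', targetpos=4000)
print(m.opmode, m.rawposn)
```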
def dmarun(self, stepmode, **kwargs):
"""
prepare to move or goto position in dma mode
Check if this motor mode uses waves and if so return a generator
The main work is done in the controller.
"""
stepgen=getattr(self.stepmodes,stepmode)
mtype=stepgen.mode
if mtype=='wave':
self.stepactive=True
self.opmode.setValue('dmarun', wv.myagents.app)
return self.pulsegen(stepgen, **kwargs)
return None
def starttimelog(self, logname):
withlog={}
self.tlogs[logname]=withlog
withlog['startclock'] = time.perf_counter_ns()
withlog['startthread']= time.thread_time_ns()
def reporttimelog(self, logname):
if logname in self.tlogs:
started=self.tlogs[logname]
return 'elapsed: {clkt:7.3f}, thread: {thrt:7.3f}'.format(
clkt=(time.perf_counter_ns()-started['startclock'])/1000000000,
thrt=(time.thread_time_ns()-started['startthread'])/1000000000)
else:
return None
def _softrun(self, stepinf, command, targetpos, targetdir):
"""
drives the motor stepping from software.
It gets the step interval from the tick generator, which monitors both the target mode of the motor and
the target position to manage ramping.
"""
self.targetrawpos.setValue(targetpos, wv.myagents.app) # these 2 are monitored by the stepgenerator
self.target_dir.setValue(1 if targetdir=='fwd' else -1,wv.myagents.app) # ditto
self.activestepm.setValue(stepinf.usteplevel.getValue(), wv.myagents.app)
self.drive_enable.setValue('enable', wv.myagents.app)
self.log(loglvls.INFO, '%s _softrun starts' % self.name)
self.overrunctr=0
self.overruntime=0.0
tickmaker=stepinf.tickgen(command, self.rawposn.getValue())
steptrig=self.getstepfunc(stepinf)
directionset=self.direction.setValue
posnset = self.rawposn.setValue
self.starttimelog('softrun')
nextsteptime=time.time()
posupdatetime=nextsteptime+.8
        tickctr=0
        dirsign=1   # default direction until the tick generator yields a direction tuple
newpos=self.rawposn.getValue()
stoppedtimer=None
uslevelv, poschange = stepinf.getmicrosteps()
# tlf=open('tlog.txt','w')
tstart=time.time()
while True:
try:
ntick=next(tickmaker)
except StopIteration:
self.log(loglvls.INFO,'StopIteration!!!!!!!')
break
if isinstance(ntick, float):
if stoppedtimer is None:
steptrig()
newpos += poschange*dirsign
else:
self.drive_enable.setValue('enable', wv.myagents.app)
stoppedtimer=None
tickctr+=1
nextsteptime += ntick
if time.time() > posupdatetime:
posnset(newpos, wv.myagents.app)
posupdatetime += .8
elif ntick is None:
nextsteptime += .05 # If nothing to do just wait for a bit and go round again
if time.time() > posupdatetime:
posnset(newpos, wv.myagents.app)
posupdatetime += .8
else:
dirsign=1 if ntick[0]=='F' else -1
directionset(ntick[0], wv.myagents.app)
nextsteptime += ntick[1]
delay=nextsteptime - time.time()
if delay > 0:
time.sleep(delay)
else:
self.overrunctr+=1
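# The loop above (truncated in this chunk) schedules steps on an absolute
# timeline: nextsteptime is advanced by each tick, so timing errors do not
# accumulate, and late steps are counted as overruns instead of silently
# stretching the schedule. A standalone sketch of that pattern (the tick
# values and run_ticks helper are made up for illustration):

```python
import time

def run_ticks(ticks, step):
    """Call step() once per tick interval; return the overrun count."""
    overruns = 0
    nextsteptime = time.time()
    for tick in ticks:
        step()
        nextsteptime += tick                  # absolute deadline for the next step
        delay = nextsteptime - time.time()
        if delay > 0:
            time.sleep(delay)                 # on time: sleep out the remainder
        else:
            overruns += 1                     # late: count it and carry on

    return overruns

steps = []
overruns = run_ticks([0.002] * 50, lambda: steps.append(time.time()))
print(len(steps), overruns)
```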
13.313, 51.508,
VERTEX, 25.356, 13.509, 51.534,
VERTEX, 25.335, 13.705, 51.557,
VERTEX, 25.331, 13.709, 51.533,
VERTEX, 25.307, 13.724, 51.335,
VERTEX, 25.285, 13.720, 51.136,
VERTEX, 25.270, 13.701, 50.973,
END,
BEGIN, LINE_LOOP,
VERTEX, 25.443, 12.790, 50.880,
VERTEX, 25.522, 12.625, 50.782,
VERTEX, 25.602, 12.461, 50.684,
VERTEX, 25.682, 12.297, 50.585,
VERTEX, 25.764, 12.133, 50.487,
VERTEX, 25.846, 11.971, 50.388,
VERTEX, 25.929, 11.808, 50.289,
VERTEX, 26.013, 11.647, 50.190,
VERTEX, 26.098, 11.485, 50.091,
VERTEX, 26.184, 11.325, 49.991,
VERTEX, 26.271, 11.165, 49.892,
VERTEX, 26.358, 11.006, 49.792,
VERTEX, 26.222, 11.040, 49.861,
VERTEX, 26.056, 11.082, 49.945,
VERTEX, 25.891, 11.125, 50.029,
VERTEX, 25.726, 11.168, 50.114,
VERTEX, 25.561, 11.212, 50.198,
VERTEX, 25.508, 11.316, 50.261,
VERTEX, 25.432, 11.476, 50.357,
VERTEX, 25.359, 11.638, 50.451,
VERTEX, 25.289, 11.803, 50.546,
VERTEX, 25.223, 11.969, 50.639,
VERTEX, 25.160, 12.137, 50.732,
VERTEX, 25.099, 12.307, 50.825,
VERTEX, 25.097, 12.315, 50.829,
VERTEX, 25.176, 12.506, 50.874,
VERTEX, 25.256, 12.697, 50.919,
VERTEX, 25.337, 12.889, 50.963,
VERTEX, 25.365, 12.955, 50.978,
END,
BEGIN, LINE_LOOP,
VERTEX, 25.360, 11.225, 50.203,
VERTEX, 25.159, 11.239, 50.206,
VERTEX, 24.958, 11.252, 50.207,
VERTEX, 24.757, 11.265, 50.207,
VERTEX, 24.556, 11.279, 50.203,
VERTEX, 24.355, 11.292, 50.198,
VERTEX, 24.154, 11.306, 50.189,
VERTEX, 23.954, 11.319, 50.177,
VERTEX, 23.754, 11.333, 50.162,
VERTEX, 23.905, 11.441, 50.244,
VERTEX, 24.059, 11.552, 50.326,
VERTEX, 24.214, 11.664, 50.405,
VERTEX, 24.369, 11.777, 50.483,
VERTEX, 24.524, 11.891, 50.559,
VERTEX, 24.680, 12.006, 50.634,
VERTEX, 24.835, 12.121, 50.707,
VERTEX, 24.991, 12.236, 50.780,
VERTEX, 25.097, 12.315, 50.829,
VERTEX, 25.158, 12.143, 50.736,
VERTEX, 25.222, 11.973, 50.642,
VERTEX, 25.288, 11.805, 50.547,
VERTEX, 25.358, 11.638, 50.452,
VERTEX, 25.432, 11.474, 50.356,
VERTEX, 25.510, 11.313, 50.259,
VERTEX, 25.561, 11.212, 50.198,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.311, 11.048, 48.726,
VERTEX, 21.402, 11.073, 48.820,
VERTEX, 21.539, 11.110, 48.960,
VERTEX, 21.636, 11.120, 49.016,
VERTEX, 21.811, 11.137, 49.115,
VERTEX, 21.986, 11.154, 49.214,
VERTEX, 22.161, 11.171, 49.313,
VERTEX, 22.336, 11.189, 49.411,
VERTEX, 22.511, 11.206, 49.508,
VERTEX, 22.687, 11.224, 49.605,
VERTEX, 22.863, 11.241, 49.701,
VERTEX, 23.040, 11.259, 49.797,
VERTEX, 23.217, 11.277, 49.890,
VERTEX, 23.395, 11.295, 49.983,
VERTEX, 23.573, 11.314, 50.073,
VERTEX, 23.753, 11.333, 50.162,
VERTEX, 23.794, 11.330, 50.165,
VERTEX, 23.994, 11.316, 50.179,
VERTEX, 24.194, 11.303, 50.191,
VERTEX, 24.395, 11.289, 50.199,
VERTEX, 24.595, 11.276, 50.204,
VERTEX, 24.796, 11.263, 50.207,
VERTEX, 24.997, 11.249, 50.207,
VERTEX, 25.198, 11.236, 50.205,
VERTEX, 25.399, 11.223, 50.202,
VERTEX, 25.561, 11.212, 50.198,
VERTEX, 25.737, 11.165, 50.108,
VERTEX, 25.914, 11.119, 50.018,
VERTEX, 26.090, 11.074, 49.928,
VERTEX, 26.267, 11.029, 49.838,
VERTEX, 26.358, 11.006, 49.792,
VERTEX, 26.429, 10.961, 49.689,
VERTEX, 26.345, 10.921, 49.580,
VERTEX, 26.225, 10.864, 49.422,
VERTEX, 26.104, 10.807, 49.264,
VERTEX, 25.984, 10.751, 49.105,
VERTEX, 25.863, 10.696, 48.947,
VERTEX, 25.743, 10.641, 48.788,
VERTEX, 25.622, 10.587, 48.629,
VERTEX, 25.502, 10.534, 48.470,
VERTEX, 25.381, 10.482, 48.310,
VERTEX, 25.340, 10.477, 48.290,
VERTEX, 25.229, 10.481, 48.283,
VERTEX, 25.030, 10.490, 48.272,
VERTEX, 24.831, 10.500, 48.265,
VERTEX, 24.633, 10.512, 48.260,
VERTEX, 24.435, 10.527, 48.260,
VERTEX, 24.238, 10.544, 48.264,
VERTEX, 24.040, 10.565, 48.273,
VERTEX, 23.844, 10.588, 48.286,
VERTEX, 23.648, 10.613, 48.302,
VERTEX, 23.452, 10.641, 48.323,
VERTEX, 23.256, 10.671, 48.347,
VERTEX, 23.061, 10.703, 48.373,
VERTEX, 22.866, 10.736, 48.401,
VERTEX, 22.671, 10.770, 48.432,
VERTEX, 22.476, 10.805, 48.463,
VERTEX, 22.281, 10.841, 48.496,
VERTEX, 22.087, 10.877, 48.530,
VERTEX, 21.892, 10.914, 48.564,
VERTEX, 21.698, 10.951, 48.599,
VERTEX, 21.503, 10.989, 48.635,
VERTEX, 21.404, 11.008, 48.653,
END,
BEGIN, LINE_LOOP,
VERTEX, 22.102, 10.303, 48.074,
VERTEX, 21.986, 10.422, 48.169,
VERTEX, 21.870, 10.540, 48.265,
VERTEX, 21.754, 10.658, 48.361,
VERTEX, 21.638, 10.775, 48.458,
VERTEX, 21.522, 10.892, 48.555,
VERTEX, 21.405, 11.008, 48.652,
VERTEX, 21.412, 11.007, 48.651,
VERTEX, 21.612, 10.968, 48.614,
VERTEX, 21.811, 10.929, 48.577,
VERTEX, 22.011, 10.891, 48.541,
VERTEX, 22.210, 10.854, 48.506,
VERTEX, 22.410, 10.817, 48.472,
VERTEX, 22.610, 10.781, 48.439,
VERTEX, 22.809, 10.745, 48.407,
VERTEX, 23.010, 10.711, 48.378,
VERTEX, 23.210, 10.678, 48.350,
VERTEX, 23.410, 10.647, 48.325,
VERTEX, 23.611, 10.618, 48.303,
VERTEX, 23.812, 10.591, 48.285,
VERTEX, 24.014, 10.567, 48.271,
VERTEX, 24.216, 10.546, 48.262,
VERTEX, 24.419, 10.528, 48.257,
VERTEX, 24.622, 10.512, 48.257,
VERTEX, 24.825, 10.499, 48.262,
VERTEX, 25.029, 10.489, 48.269,
VERTEX, 25.233, 10.480, 48.280,
VERTEX, 25.333, 10.477, 48.287,
VERTEX, 25.193, 10.358, 48.183,
VERTEX, 25.056, 10.237, 48.079,
VERTEX, 24.921, 10.112, 47.975,
VERTEX, 24.780, 10.138, 47.984,
VERTEX, 24.581, 10.169, 47.995,
VERTEX, 24.381, 10.196, 48.004,
VERTEX, 24.181, 10.216, 48.011,
VERTEX, 23.980, 10.231, 48.015,
VERTEX, 23.779, 10.240, 48.017,
VERTEX, 23.577, 10.244, 48.017,
VERTEX, 23.374, 10.243, 48.015,
VERTEX, 23.171, 10.239, 48.011,
VERTEX, 22.968, 10.231, 48.006,
VERTEX, 22.765, 10.221, 48.000,
VERTEX, 22.561, 10.208, 47.993,
VERTEX, 22.358, 10.194, 47.985,
VERTEX, 22.218, 10.184, 47.980,
END,
BEGIN, LINE_LOOP,
VERTEX, 25.056, 10.236, 48.079,
VERTEX, 25.193, 10.358, 48.182,
VERTEX, 25.332, 10.477, 48.286,
VERTEX, 25.381, 10.482, 48.310,
VERTEX, 25.395, 10.289, 48.249,
VERTEX, 25.409, 10.096, 48.188,
VERTEX, 25.424, 9.903, 48.128,
VERTEX, 25.293, 9.962, 48.089,
VERTEX, 25.101, 10.042, 48.031,
VERTEX, 24.921, 10.112, 47.975,
END,
BEGIN, LINE_LOOP,
VERTEX, 23.476, 9.468, 46.628,
VERTEX, 23.350, 9.549, 46.758,
VERTEX, 23.224, 9.628, 46.889,
VERTEX, 23.097, 9.705, 47.021,
VERTEX, 22.972, 9.780, 47.154,
VERTEX, 22.846, 9.853, 47.288,
VERTEX, 22.720, 9.923, 47.424,
VERTEX, 22.594, 9.991, 47.561,
VERTEX, 22.469, 10.058, 47.700,
VERTEX, 22.344, 10.122, 47.839,
VERTEX, 22.218, 10.184, 47.980,
VERTEX, 22.401, 10.197, 47.987,
VERTEX, 22.602, 10.211, 47.994,
VERTEX, 22.803, 10.223, 48.001,
VERTEX, 23.003, 10.233, 48.007,
VERTEX, 23.203, 10.240, 48.012,
VERTEX, 23.403, 10.244, 48.015,
VERTEX, 23.603, 10.244, 48.017,
VERTEX, 23.802, 10.239, 48.017,
VERTEX, 24.000, 10.230, 48.015,
VERTEX, 24.198, 10.215, 48.011,
VERTEX, 24.396, 10.194, 48.004,
VERTEX, 24.593, 10.167, 47.995,
VERTEX, 24.789, 10.136, 47.984,
VERTEX, 24.921, 10.112, 47.975,
VERTEX, 24.792, 10.060, 47.836,
VERTEX, 24.664, 10.004, 47.697,
VERTEX, 24.538, 9.943, 47.558,
VERTEX, 24.413, 9.879, 47.419,
VERTEX, 24.289, 9.812, 47.281,
VERTEX, 24.167, 9.741, 47.143,
VERTEX, 24.045, 9.668, 47.005,
VERTEX, 23.923, 9.593, 46.867,
VERTEX, 23.803, 9.517, 46.730,
VERTEX, 23.683, 9.439, 46.592,
VERTEX, 23.602, 9.385, 46.499,
END,
BEGIN, LINE_LOOP,
VERTEX, 23.824, 9.188, 46.454,
VERTEX, 23.647, 9.268, 46.423,
VERTEX, 23.602, 9.385, 46.499,
VERTEX, 23.695, 9.447, 46.607,
VERTEX, 23.814, 9.524, 46.743,
VERTEX, 23.934, 9.600, 46.879,
VERTEX, 24.054, 9.674, 47.015,
VERTEX, 24.174, 9.746, 47.152,
VERTEX, 24.296, 9.815, 47.288,
VERTEX, 24.418, 9.882, 47.425,
VERTEX, 24.542, 9.945, 47.562,
VERTEX, 24.667, 10.005, 47.699,
VERTEX, 24.793, 10.061, 47.837,
VERTEX, 24.921, 10.112, 47.975,
VERTEX, 25.082, 10.050, 48.025,
VERTEX, 25.264, 9.974, 48.081,
VERTEX, 25.424, 9.903, 48.128,
VERTEX, 25.468, 9.759, 48.019,
VERTEX, 25.511, 9.614, 47.912,
VERTEX, 25.555, 9.468, 47.806,
VERTEX, 25.598, 9.321, 47.701,
VERTEX, 25.637, 9.189, 47.608,
VERTEX, 25.556, 9.167, 47.531,
VERTEX, 25.409, 9.158, 47.420,
VERTEX, 25.250, 9.149, 47.303,
VERTEX, 25.089, 9.141, 47.188,
VERTEX, 24.926, 9.133, 47.075,
VERTEX, 24.762, 9.127, 46.965,
VERTEX, 24.596, 9.121, 46.856,
VERTEX, 24.429, 9.116, 46.749,
VERTEX, 24.261, 9.112, 46.644,
VERTEX, 24.092, 9.108, 46.540,
VERTEX, 24.001, 9.107, 46.486,
END,
BEGIN, LINE_LOOP,
VERTEX, 23.826, 14.253, 50.342,
VERTEX, 24.053, 14.231, 50.448,
VERTEX, 24.278, 14.188, 50.550,
VERTEX, 24.499, 14.123, 50.649,
VERTEX, 24.716, 14.038, 50.743,
VERTEX, 24.925, 13.932, 50.832,
VERTEX, 25.127, 13.807, 50.915,
VERTEX, 25.270, 13.699, 50.973,
VERTEX, 25.293, 13.513, 50.975,
VERTEX, 25.317, 13.327, 50.976,
VERTEX, 25.340, 13.141, 50.978,
VERTEX, 25.365, 12.955, 50.978,
VERTEX, 25.330, 12.872, 50.959,
VERTEX, 25.251, 12.687, 50.916,
VERTEX, 25.174, 12.501, 50.873,
VERTEX, 25.097, 12.315, 50.829,
VERTEX, 25.051, 12.280, 50.808,
VERTEX, 24.902, 12.170, 50.739,
VERTEX, 24.753, 12.060, 50.668,
VERTEX, 24.604, 11.950, 50.598,
VERTEX, 24.456, 11.841, 50.526,
VERTEX, 24.308, 11.733, 50.452,
VERTEX, 24.160, 11.625, 50.378,
VERTEX, 24.012, 11.518, 50.301,
VERTEX, 23.865, 11.412, 50.223,
VERTEX, 23.754, 11.333, 50.163,
VERTEX, 23.580, 11.315, 50.077,
VERTEX, 23.407, 11.297, 49.989,
VERTEX, 23.235, 11.279, 49.900,
VERTEX, 23.064, 11.261, 49.810,
VERTEX, 22.893, 11.244, 49.718,
VERTEX, 22.723, 11.227, 49.625,
VERTEX, 22.553, 11.210, 49.531,
VERTEX, 22.383, 11.193, 49.437,
VERTEX, 22.214, 11.177, 49.342,
VERTEX, 22.045, 11.160, 49.247,
VERTEX, 21.876, 11.143, 49.152,
VERTEX, 21.707, 11.127, 49.056,
VERTEX, 21.538, 11.110, 48.960,
VERTEX, 21.478, 11.248, 48.941,
VERTEX, 21.398, 11.430, 48.916,
VERTEX, 21.370, 11.492, 48.907,
VERTEX, 21.420, 11.677, 48.953,
VERTEX, 21.470, 11.863, 48.998,
VERTEX, 21.519, 12.049, 49.042,
VERTEX, 21.568, 12.236, 49.085,
VERTEX, 21.618, 12.423, 49.127,
VERTEX, 21.667, 12.610, 49.167,
VERTEX, 21.716, 12.797, 49.207,
VERTEX, 21.765, 12.984, 49.245,
VERTEX, 21.814, 13.172, 49.282,
VERTEX, 21.863, 13.360, 49.318,
VERTEX, 21.986, 13.502, 49.391,
VERTEX, 22.153, 13.663, 49.487,
VERTEX, 22.333, 13.807, 49.587,
VERTEX, 22.525, 13.932, 49.691,
VERTEX, 22.726, 14.037, 49.798,
VERTEX, 22.936, 14.123, 49.906,
VERTEX, 23.152, 14.188, 50.015,
VERTEX, 23.374, 14.231, 50.125,
VERTEX, 23.599, 14.253, 50.234,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.247, 11.267, 48.727,
VERTEX, 21.240, 11.367, 48.749,
VERTEX, 21.253, 11.380, 48.765,
VERTEX, 21.371, 11.493, 48.907,
VERTEX, 21.446, 11.321, 48.932,
VERTEX, 21.536, 11.118, 48.959,
VERTEX, 21.539, 11.110, 48.960,
VERTEX, 21.391, 11.070, 48.809,
VERTEX, 21.311, 11.048, 48.726,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.240, 11.367, 48.748,
VERTEX, 21.247, 11.267, 48.727,
VERTEX, 21.235, 11.297, 48.728,
VERTEX, 21.232, 11.359, 48.742,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.814, 13.173, 49.282,
VERTEX, 21.765, 12.985, 49.245,
VERTEX, 21.716, 12.798, 49.207,
VERTEX, 21.667, 12.611, 49.167,
VERTEX, 21.618, 12.424, 49.127,
VERTEX, 21.569, 12.237, 49.085,
VERTEX, 21.519, 12.050, 49.042,
VERTEX, 21.470, 11.864, 48.998,
VERTEX, 21.420, 11.678, 48.953,
VERTEX, 21.371, 11.493, 48.907,
VERTEX, 21.341, 11.464, 48.871,
VERTEX, 21.240, 11.367, 48.748,
VERTEX, 21.282, 11.565, 48.784,
VERTEX, 21.325, 11.762, 48.821,
VERTEX, 21.370, 11.958, 48.859,
VERTEX, 21.417, 12.153, 48.899,
VERTEX, 21.466, 12.348, 48.940,
VERTEX, 21.517, 12.542, 48.982,
VERTEX, 21.570, 12.735, 49.027,
VERTEX, 21.625, 12.927, 49.072,
VERTEX, 21.681, 13.119, 49.119,
VERTEX, 21.734, 13.200, 49.177,
VERTEX, 21.836, 13.332, 49.288,
VERTEX, 21.863, 13.361, 49.319,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.625, 12.927, 49.072,
VERTEX, 21.570, 12.735, 49.027,
VERTEX, 21.517, 12.542, 48.982,
VERTEX, 21.466, 12.348, 48.940,
VERTEX, 21.417, 12.153, 48.899,
VERTEX, 21.370, 11.958, 48.859,
VERTEX, 21.325, 11.762, 48.821,
VERTEX, 21.281, 11.565, 48.784,
VERTEX, 21.240, 11.367, 48.748,
VERTEX, 21.231, 11.359, 48.741,
VERTEX, 21.223, 11.391, 48.735,
VERTEX, 21.196, 11.588, 48.713,
VERTEX, 21.186, 11.787, 48.706,
VERTEX, 21.194, 11.987, 48.713,
VERTEX, 21.221, 12.184, 48.736,
VERTEX, 21.265, 12.375, 48.773,
VERTEX, 21.326, 12.558, 48.824,
VERTEX, 21.404, 12.731, 48.889,
VERTEX, 21.496, 12.891, 48.966,
VERTEX, 21.603, 13.035, 49.054,
VERTEX, 21.681, 13.119, 49.119,
END,
BEGIN, LINE_LOOP,
VERTEX, 25.409, 10.096, 48.188,
VERTEX, 25.395, 10.289, 48.249,
VERTEX, 25.381, 10.483, 48.310,
VERTEX, 25.411, 10.495, 48.349,
VERTEX, 25.531, 10.547, 48.508,
VERTEX, 25.650, 10.600, 48.666,
VERTEX, 25.770, 10.654, 48.824,
VERTEX, 25.890, 10.708, 48.982,
VERTEX, 26.010, 10.763, 49.139,
VERTEX, 26.129, 10.819, 49.297,
VERTEX, 26.249, 10.876, 49.454,
VERTEX, 26.369, 10.932, 49.611,
VERTEX, 26.429, 10.961, 49.689,
VERTEX, 26.446, 10.923, 49.694,
VERTEX, 26.452, 10.788, 49.648,
VERTEX, 26.460, 10.607, 49.587,
VERTEX, 26.468, 10.426, 49.526,
VERTEX, 26.476, 10.246, 49.464,
VERTEX, 26.485, 10.065, 49.403,
VERTEX, 26.489, 9.981, 49.374,
VERTEX, 26.446, 9.820, 49.261,
VERTEX, 26.404, 9.658, 49.150,
VERTEX, 26.364, 9.495, 49.039,
VERTEX, 26.325, 9.332, 48.928,
VERTEX, 26.287, 9.167, 48.818,
VERTEX, 26.252, 9.200, 48.792,
VERTEX, 26.127, 9.313, 48.696,
VERTEX, 26.003, 9.425, 48.599,
VERTEX, 25.877, 9.535, 48.500,
VERTEX, 25.752, 9.641, 48.399,
VERTEX, 25.626, 9.745, 48.297,
VERTEX, 25.500, 9.845, 48.192,
VERTEX, 25.424, 9.903, 48.128,
END,
BEGIN, LINE_LOOP,
VERTEX, 25.637, 9.189, 47.608,
VERTEX, 25.599, 9.321, 47.701,
VERTEX, 25.555, 9.468, 47.806,
VERTEX, 25.511, 9.614, 47.912,
VERTEX, 25.468, 9.759, 48.019,
VERTEX, 25.424, 9.903, 48.128,
VERTEX, 25.538, 9.815, 48.224,
VERTEX, 25.670, 9.709, 48.333,
VERTEX, 25.801, 9.600, 48.439,
VERTEX, 25.932, 9.487, 48.543,
VERTEX, 26.063, 9.371, 48.646,
VERTEX, 26.193, 9.254, 48.747,
VERTEX, 26.287, 9.167, 48.818,
VERTEX, 26.364, 8.935, 48.663,
VERTEX, 26.289, 8.958, 48.546,
VERTEX, 26.187, 8.991, 48.388,
VERTEX, 26.083, 9.024, 48.230,
VERTEX, 25.978, 9.059, 48.074,
VERTEX, 25.871, 9.094, 47.919,
VERTEX, 25.763, 9.131, 47.765,
VERTEX, 25.715, 9.148, 47.699,
END,
BEGIN, LINE_LOOP,
VERTEX, 25.638, 9.188, 47.609,
# -*- coding: utf-8 -*-
""" Models for displaying visual shapes whose attributes can be associated
with data columns from data sources.
"""
from __future__ import absolute_import
from ..plot_object import PlotObject
from ..mixins import FillProps, LineProps, TextProps
from ..enums import Units, AngleUnits, Direction, Anchor
from ..properties import Align, Bool, DataSpec, Enum, HasProps, Include, Instance, Size
from .mappers import LinearColorMapper
class Glyph(PlotObject):
""" Base class for all glyphs/marks/geoms/whatever-you-call-'em in Bokeh.
"""
visible = Bool(help="""
Whether the glyph should render or not.
""")
class AnnularWedge(Glyph):
""" Render annular wedges.
Example
-------
.. bokeh-plot:: ../tests/glyphs/AnnularWedge.py
:source-position: none
*source:* `tests/glyphs/AnnularWedge.py <https://github.com/bokeh/bokeh/tree/master/tests/glyphs/AnnularWedge.py>`_
"""
x = DataSpec("x", help="""
The x-coordinates of the center of the annular wedges.
""")
y = DataSpec("y", help="""
The y-coordinates of the center of the annular wedges.
""")
# TODO: (bev) should default to "inner_radius" field?
inner_radius = DataSpec(min_value=0, help="""
The inner radii of the annular wedges.
""")
# TODO: (bev) should default to "outer_radius" field?
outer_radius = DataSpec(min_value=0, help="""
The outer radii of the annular wedges.
""")
start_angle = DataSpec("start_angle", help="""
The angles to start the annular wedges, in radians, as measured from
the horizontal.
""")
end_angle = DataSpec("end_angle", help="""
The angles to end the annular wedges, in radians, as measured from
the horizontal.
""")
direction = Enum(Direction, help="""
Which direction to stroke between the start and end angles.
""")
line_props = Include(LineProps, use_prefix=False, help="""
The %s values for the annular wedges.
""")
fill_props = Include(FillProps, use_prefix=False, help="""
The %s values for the annular wedges.
""")
class Annulus(Glyph):
""" Render annuli.
Example
-------
.. bokeh-plot:: ../tests/glyphs/Annulus.py
:source-position: none
*source:* `tests/glyphs/Annulus.py <https://github.com/bokeh/bokeh/tree/master/tests/glyphs/Annulus.py>`_
"""
x = DataSpec("x", help="""
The x-coordinates of the center of the annuli.
""")
y = DataSpec("y", help="""
The y-coordinates of the center of the annuli.
""")
# TODO: (bev) should default to "inner_radius" field?
inner_radius = DataSpec(min_value=0, help="""
The inner radii of the annuli.
""")
# TODO: (bev) should default to "outer_radius" field?
outer_radius = DataSpec(min_value=0, help="""
The outer radii of the annuli.
""")
line_props = Include(LineProps, use_prefix=False, help="""
The %s values for the annuli.
""")
fill_props = Include(FillProps, use_prefix=False, help="""
The %s values for the annuli.
""")
class Arc(Glyph):
""" Render arcs.
Example
-------
.. bokeh-plot:: ../tests/glyphs/Arc.py
:source-position: none
*source:* `tests/glyphs/Arc.py <https://github.com/bokeh/bokeh/tree/master/tests/glyphs/Arc.py>`_
"""
x = DataSpec("x", help="""
The x-coordinates of the center of the arcs.
""")
y = DataSpec("y", help="""
The y-coordinates of the center of the arcs.
""")
# TODO: (bev) should default to "radius" field?
radius = DataSpec(min_value=0, help="""
Radius of the arc.
""")
start_angle = DataSpec("start_angle", help="""
The angles to start the arcs, in radians, as measured from the horizontal.
""")
end_angle = DataSpec("end_angle", help="""
The angles to end the arcs, in radians, as measured from the horizontal.
""")
direction = Enum(Direction, help="""
Which direction to stroke between the start and end angles.
""")
line_props = Include(LineProps, use_prefix=False, help="""
The %s values for the arcs.
""")
class Bezier(Glyph):
u""" Render Bézier curves.
For more information consult the `Wikipedia article for Bézier curve`_.
.. _Wikipedia article for Bézier curve: http://en.wikipedia.org/wiki/Bézier_curve
Example
-------
.. bokeh-plot:: ../tests/glyphs/Bezier.py
:source-position: none
*source:* `tests/glyphs/Bezier.py <https://github.com/bokeh/bokeh/tree/master/tests/glyphs/Bezier.py>`_
"""
x0 = DataSpec("x0", help="""
The x-coordinates of the starting points.
""")
y0 = DataSpec("y0", help="""
The y-coordinates of the starting points.
""")
x1 = DataSpec("x1", help="""
The x-coordinates of the ending points.
""")
y1 = DataSpec("y1", help="""
The y-coordinates of the ending points.
""")
cx0 = DataSpec("cx0", help="""
The x-coordinates of first control points.
""")
cy0 = DataSpec("cy0", help="""
The y-coordinates of first control points.
""")
cx1 = DataSpec("cx1", help="""
The x-coordinates of second control points.
""")
cy1 = DataSpec("cy1", help="""
The y-coordinates of second control points.
""")
line_props = Include(LineProps, use_prefix=False, help=u"""
The %s values for the Bézier curves.
""")
class Gear(Glyph):
""" Render gears.
The details and nomenclature concerning gear construction can
be quite involved. For more information, consult the `Wikipedia
article for Gear`_.
.. _Wikipedia article for Gear: http://en.wikipedia.org/wiki/Gear
Example
-------
.. bokeh-plot:: ../tests/glyphs/Gear.py
:source-position: none
*source:* `tests/glyphs/Gear.py <https://github.com/bokeh/bokeh/tree/master/tests/glyphs/Gear.py>`_
"""
x = DataSpec("x", help="""
The x-coordinates of the center of the gears.
""")
y = DataSpec("y", help="""
The y-coordinates of the center of the gears.
""")
angle = DataSpec(default=0, help="""
The angle the gears are rotated from horizontal. [rad]
""")
module = DataSpec("module", help="""
A scaling factor, given by::
m = p / pi
where *p* is the circular pitch, defined as the distance from one
face of a tooth to the corresponding face of an adjacent tooth on
the same gear, measured along the pitch circle. [float]
""")
teeth = DataSpec("teeth", help="""
How many teeth the gears have. [int]
""")
pressure_angle = DataSpec(default=20, help= """
The complement of the angle between the direction that the teeth
exert force on each other, and the line joining the centers of the
two gears. [deg]
""")
# TODO: (bev) evidently missing a test for default value
shaft_size = DataSpec(default=0.3, help="""
The central gear shaft size as a fraction of the overall gear
size. [float]
""")
# TODO: (bev) evidently missing a test for default value
internal = DataSpec(default=False, help="""
Whether the gear teeth are internal. [bool]
""")
line_props = Include(LineProps, use_prefix=False, help="""
The %s values for the gears.
""")
fill_props = Include(FillProps, use_prefix=False, help="""
The %s values for the gears.
""")
class Image(Glyph):
""" Render images given as scalar data together with a color
mapper.
"""
def __init__(self, **kwargs):
if 'palette' in kwargs and 'color_mapper' in kwargs:
raise ValueError("only one of 'palette' and 'color_mapper' may be specified")
elif 'color_mapper' not in kwargs:
# Use a palette (given or default)
palette = kwargs.pop('palette', 'Greys9')
mapper = LinearColorMapper(palette)
reserve_val = kwargs.pop('reserve_val', None)
if reserve_val is not None:
mapper.reserve_val = reserve_val
reserve_color = kwargs.pop('reserve_color', None)
if reserve_color is not None:
mapper.reserve_color = reserve_color
kwargs['color_mapper'] = mapper
super(Image, self).__init__(**kwargs)
image = DataSpec("image", help="""
The arrays of scalar data for the images to be colormapped.
""")
x = DataSpec("x", help="""
The x-coordinates to locate the image anchors.
""")
y = DataSpec("y", help="""
The y-coordinates to locate the image anchors.
""")
dw = DataSpec("dw", help="""
The widths of the plot regions that the images will occupy.
.. note::
This is not the number of pixels that an image is wide.
That number is fixed by the image itself.
""")
dh = DataSpec("dh", help="""
The height of the plot region that the image will occupy.
.. note::
This is not the number of pixels that an image is tall.
That number is fixed by the image itself.
""")
dilate = Bool(False, help="""
Whether to always round fractional pixel locations in such a way
as to make the images bigger.
This setting may be useful if pixel rounding errors are causing
images to have a gap between them, when they should appear flush.
""")
color_mapper = Instance(LinearColorMapper, help="""
A ``ColorMapper`` to use to map the scalar data from ``image``
into RGBA values for display.
.. note::
The color mapping step happens on the client.
""")
# TODO: (bev) support anchor property for Image
# ref: https://github.com/bokeh/bokeh/issues/1763
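``Image.__init__`` above enforces that ``palette`` and ``color_mapper`` are mutually exclusive and falls back to a default palette when neither mapper is given. The keyword-handling pattern in isolation (a framework-free sketch; a plain dict stands in for ``LinearColorMapper``):

```python
def resolve_color_mapper(kwargs, default_palette="Greys9"):
    """Return kwargs with 'palette' replaced by a 'color_mapper' entry.

    Mirrors the validation in Image.__init__: specifying both keywords
    is an error; specifying neither selects the default palette.
    """
    if "palette" in kwargs and "color_mapper" in kwargs:
        raise ValueError("only one of 'palette' and 'color_mapper' may be specified")
    if "color_mapper" not in kwargs:
        # A plain dict stands in for the LinearColorMapper instance.
        kwargs["color_mapper"] = {"palette": kwargs.pop("palette", default_palette)}
    return kwargs
```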
class ImageRGBA(Glyph):
""" Render images given as RGBA data.
"""
image = DataSpec("image", help="""
The arrays of RGBA data for the images.
""")
x = DataSpec("x", help="""
The x-coordinates to locate the image anchors.
""")
y = DataSpec("y", help="""
The y-coordinates to locate the image anchors.
""")
rows = DataSpec("rows", help="""
The numbers of rows in the images.
""")
cols = DataSpec("cols", help="""
The numbers of columns in the images.
""")
dw = DataSpec("dw", help="""
The widths of the plot regions that the images will occupy.
.. note::
This is not the number of pixels that an image is wide.
That number is fixed by the image itself.
""")
dh = DataSpec("dh", help="""
The height of the plot region that the image will occupy.
.. note::
This is not the number of pixels that an image is tall.
That number is fixed by the image itself.
""")
dilate = Bool(False, help="""
Whether to always round fractional pixel locations in such a way
as to make the images bigger.
This setting may be useful if pixel rounding errors are causing
images to have a gap between them, when they should appear flush.
""")
id="div-pqhsp",
),
html.Div(
[
html.H6("Cross-Contamination"),
dbc.Tooltip(
"Cross-contamination rate in quarantine facilities.",
target="div-pcross",
placement="right",
),
dcc.Slider(
id="slider-pcross",
min=0,
max=1,
value=0.01,
step=0.01,
tooltip={"always_visible": True},
),
],
id="div-pcross",
),
],
id="collapse-p",
style=tab,
),
# Timing inputs
dbc.Button(
html.H2("Time Inputs"),
id="collapse-button-t",
className="mb-3",
color="danger",
style={"width": "100%"},
),
dbc.Collapse(
[
html.Div(
[
html.H6("Incubation"),
dbc.Tooltip(
"Incubation period.",
target="div-tinc",
placement="right",
),
dcc.Slider(
id="slider-tinc",
min=2.5,
max=7,
value=4.5,
step=0.1,
tooltip={"always_visible": True},
),
],
id="div-tinc",
),
html.Div(
[
html.H6("Infectious"),
dbc.Tooltip(
"Infectious period.",
target="div-tinf",
placement="right",
),
dcc.Slider(
id="slider-tinf",
min=1.0,
max=7,
value=2.9,
step=0.1,
tooltip={"always_visible": True},
),
],
id="div-tinf",
),
html.Div(
[
html.H6("Intensive Care"),
dbc.Tooltip(
"Time spent within the ICU.",
target="div-ticu",
placement="right",
),
dcc.Slider(
id="slider-ticu",
min=10.0,
max=14,
value=11,
step=0.1,
tooltip={"always_visible": True},
),
],
id="div-ticu",
),
html.Div(
[
html.H6("Hospitalised"),
dbc.Tooltip(
"Time spent hospitalised for non-critical patients.",
target="div-thsp",
placement="right",
),
dcc.Slider(
id="slider-thsp",
min=7,
max=21,
value=21,
step=0.1,
tooltip={"always_visible": True},
),
],
id="div-thsp",
),
html.Div(
[
html.H6("Critical"),
dbc.Tooltip(
"Time spent hospitalized before turning critical.",
target="div-tcrt",
placement="right",
),
dcc.Slider(
id="slider-tcrt",
min=1,
max=14,
value=7,
step=0.1,
tooltip={"always_visible": True},
),
],
id="div-tcrt",
),
html.Div(
[
html.H6("Self-Recovery"),
dbc.Tooltip(
"Self-Recovery time for non-disclosed cases.",
target="div-trec",
placement="right",
),
dcc.Slider(
id="slider-trec",
min=7,
max=21,
value=21,
step=0.1,
tooltip={"always_visible": True},
),
],
id="div-trec",
),
html.Div(
[
html.H6("Quarantine"),
dbc.Tooltip(
"Quarantine time under regulation.",
target="div-tqar",
placement="right",
),
dcc.Slider(
id="slider-tqar",
min=4,
max=21,
value=21,
step=0.1,
tooltip={"always_visible": True},
),
],
id="div-tqar",
),
html.Div(
[
html.H6("Quarantined, then Hospitalised"),
dbc.Tooltip(
"Time interval between the last COVID-19 positive result until getting hospitalized.",
target="div-tqah",
placement="right",
),
dcc.Slider(
id="slider-tqah",
min=0,
max=5,
value=2,
step=0.1,
tooltip={"always_visible": True},
),
],
id="div-tqah",
),
],
id="collapse-t",
style=tab,
),
# Input uploader
html.Div(
[
dcc.Upload(
html.Button(
["\f Upload custom .json input file"],
style={
"color": "white",
"margin": "2% 0",
"width": "100%",
},
),
id="up",
style={"padding": "2% 0", "font-style": "bold"},
),
dbc.Tooltip(
"You can import your json file that you have exported previously, rather than having to readjust inputs all over again",
target="div-up",
placement="right",
),
],
id="div-up",
),
html.Div(
[
dcc.Upload(
html.Button(
["\f Upload .csv stats for comparing"],
style={
"color": "white",
"margin": "2% 0",
"width": "100%",
},
),
id="up_stat",
style={"padding": "0% 0", "font-style": "bold"},
),
dbc.Tooltip(
html.Ul(
[
html.H6(
"You can upload a csv file of real statistics to compare. The device will find any matching columns and add them to the plot for comparison (assuming 1st row is 1st day of outbreak). The following column names can be selected:"
),
html.Li("infected"),
html.Li("daily_infected"),
html.Li("active_critical"),
html.Li("active_quarantined"),
html.Li("deaths"),
],
style={"text-align": "left"},
),
target="div-up-stat",
placement="right",
),
html.P(id="err", style={"color": "red"}),
],
id="div-up-stat",
),
],
style={
"width": "33%",
"display": "inline-block",
"vertical-align": "top",
"padding": "2%",
},
),
# Output
html.Div(
[
# Modes and sample picker
html.Div(
[
dcc.Checklist(
options=[
{"label": "Show Hospital Capacity", "value": 1},
{
"label": "Show Quarantine Capacity",
"value": 3,
},
{"label": "Show by Date", "value": 2},
],
value=[],
labelStyle={"display": "block"},
id="mods",
)
],
style={
"padding": "0% 3%",
"display": "inline-block",
"width": "35%",
},
),
html.Div(
[
dcc.Dropdown(
options=[
{"label": sample.name[n], "value": n}
for n in sample.name.keys()
],
placeholder="Select an example region",
id="init",
)
],
style={
"padding": "0% 3%",
"display": "inline-block",
"width": "55%",
},
),
# Plots
html.Div(
[
dcc.Graph(id="overall-plot"),
],
style={
"vertical-align": "top",
"border-style": "outset",
"margin": "1% 0%",
},
),
html.Div(
[
dcc.Graph(id="fatal-plot"),
],
style={
"vertical-align": "top",
"border-style": "outset",
"margin": "1% 0%",
},
),
html.Div(
[
dcc.Graph(id="r0-plot"),
],
style={
"vertical-align": "top",
"border-style": "outset",
"margin": "1% 0%",
},
),
# File downloader
html.Div(
[
html.Div(
[
html.H2("Download Statistics"),
dcc.Input(
id="file",
value="",
type="text",
placeholder="Specify exported file name (default: "
"exported_stats"
")",
style={"width": "100%"},
),
html.Button(
"Statistics Data (.csv)",
id="btn_csv",
style={"color": "white", "margin": "2%"},
),
dcc.Download(id="download-dataframe-csv"),
html.Button(
"Information Summary (.txt)",
id="btn_sum",
style={"color": "white", "margin": "2%"},
),
dcc.Download(id="download-sum"),
html.Button(
"Export Inputs (.json)",
id="btn_ipt",
style={"color": "white", "margin": "2%"},
),
dcc.Download(id="download-ipt"),
],
style={
"padding": "2% 3%",
"display": "inline-block",
"vertical-align": "bottom",
"text-align": "center",
"width": "100%",
},
),
],
style={
"vertical-align": "top",
"border-style": "outset",
"margin": "1% 0%",
},
),
],
style={
"width": "66%",
"display": "inline-block",
"vertical-align": "top",
"margin": "1% 0%",
},
),
]
)
]
)
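Each parameter group in the layout above repeats the same Div/H6/Tooltip/Slider structure with a shared `div-*`/`slider-*` id convention. A framework-free sketch of a factory for that pattern (plain dicts stand in for the dash and dash-bootstrap-components objects; the helper name is illustrative):

```python
def slider_group(name, label, tip, lo, hi, value, step=0.1):
    """Build one labelled slider group following the layout's id scheme."""
    return {
        "id": f"div-{name}",
        "children": [
            {"type": "H6", "children": label},
            # The tooltip targets the enclosing div, as in the layout above.
            {"type": "Tooltip", "children": tip,
             "target": f"div-{name}", "placement": "right"},
            {"type": "Slider", "id": f"slider-{name}",
             "min": lo, "max": hi, "value": value, "step": step,
             "tooltip": {"always_visible": True}},
        ],
    }
```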
# Dynamically creating stage inputs
@app.callback(
Output("in-r0", "children"),
Input("init", "value"),
[Input("num", "value")],
[Input("up", "contents")],
State("up", "filename"),
)
def ins_generate(init, n, content, file):
r"""
Generate dynamic stage inputs based on the number of stages, taking values either from an uploaded file or from built-in defaults.
Triggered when at least one input changes.
Parameters
----------
init : `str`
Chosen sample region.
n : `int`
Number of stages.
content : `base64`
File content, encoded to base64.
file : `str`
File name.
Returns
-------
stages : :class:`list`
A list of HTML Elements with pre-defined values for the stage inputs.
"""
# Check which input is changing
ctx = dash.callback_context.triggered
current_call = [] if not ctx else ctx[0]["prop_id"].split(".")[0]
# If only irrelevant variables changed, fall back to the defaults
if (not file or "up" not in current_call) and "init" not in current_call:
# Default values for up to 30 stages
d = [
6,
20,
30,
49,
55,
60,
69,
70,
80,
85,
90,
95,
100,
105,
110,
115,
120,
125,
130,
135,
140,
145,
145,
145,
145,
145,
145,
145,
145,
145,
]
dr = [
1.5,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
]
pco = [
0.1,
0.4,
0.6,
0.8,
0.8,
0.85,
0.9,
0.9,
0.95,
1,
0.9,
0.9,
0.95,
1,
0.9,
0.9,
0.95,
1,
0.9,
0.9,
0.95,
1,
0.9,
0.9,
0.95,
1,
0.9,
0.9,
0.95,
1,
]
return [
html.Div(
[
html.H5(f"Stage {i+1}:"),
html.Div(
[
html.H6("Starting Date"),
dcc.Input(
id={"role": "day", "index": i},
min=1,
max=1000,
value=d[i],
step=1,
type="number",
style={"width": "80%"},
),
],
style={"width": "33%", "display": "inline-block"},
),
html.Div(
[
html.H6("R0 Reduction"),
dcc.Input(
id={"role": "r0", "index": i},
value=dr[i],
step=0.1,
type="number",
style={"width": "100%"},
),
],
style={
"width": "28%",
"display": "inline-block",
"margin": "0 5% 0 0",
},
),
html.Div(
[
html.H6("Contained Proportion"),
dcc.Slider(
id={"role": "pcont", "index": i},
min=0,
max=1,
value=pco[i],
step=0.01,
tooltip={"always_visible": False},
marks={0: "0", 1: "1"},
),
],
style={"width": "33%", "display": "inline-block"},
),
],
style={"border-style": "outset", "margin": "1%", "padding": "1%"},
)
for i in range(n)
]
# Choose the source of the stage inputs based on which component triggered the callback
json_stage = ["delta_r0", "pcont", "day", "n_r0"]
# If a sample region was selected, use the inputs from the sample file
if current_call == "init":
jf = json.loads(sample.loc[init])
# Otherwise, use the inputs from the uploaded file
else:
_, content_string = content.split(",")
decoded = base64.b64decode(content_string)
jf = json.loads(decoded)
# Handle malformed input files
# Missing required attributes
for i in json_stage:
if i not in jf:
return dash.no_update
# Inconsistent number of stage variables
if len(jf["day"]) != len(jf["delta_r0"]) or len(jf["day"]) != len(jf["pcont"]):
return dash.no_update
return [
html.Div(
[
html.H5(f"Stage {i+1}:"),
html.Div(
[
html.H6("Starting Date"),
dcc.Input(
id={"role": "day", "index": i},
min=1,
max=1000,
value=jf["day"][i],
step=1,
type="number",
style={"width": "80%"},
),
],
style={"width": "33%", "display": "inline-block"},
),
html.Div(
[
html.H6("R0 Reduction"),
dcc.Input(
id={"role": "r0", "index": i},
value=jf["delta_r0"][i],
step=0.1,
type="number",
style={"width": "100%"},
),
],
style={
"width": "28%",
"display": "inline-block",
"margin": "0 5% 0 0",
},
),
html.Div(
[
html.H6("Contained Proportion"),
dcc.Slider(
id={"role": "pcont", "index": i},
min=0,
max=1,
value=jf["pcont"][i],
step=0.01,
tooltip={"always_visible": False},
marks={0: "0", 1: "1"},
),
],
style={"width": "33%", "display": "inline-block"},
),
],
style={"border-style": "outset", "margin": "1%", "padding": "1%"},
)
for i in range(jf["n_r0"])
]
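The trigger-detection idiom used in ``ins_generate`` relies on Dash reporting the firing input as ``"component_id.property"`` in ``callback_context.triggered``; splitting on ``"."`` recovers the component id. Isolated as a small helper (name illustrative, not from the original source):

```python
def triggered_component(triggered):
    """Return the id of the component that fired the callback, or [].

    `triggered` is the dash.callback_context.triggered list, whose
    entries carry a "prop_id" of the form "component_id.property".
    """
    if not triggered:
        return []
    return triggered[0]["prop_id"].split(".")[0]
```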
@app.callback(
[Output(f"collapse{i}", "is_open") for i in ["", "-p", "-t"]],
[Input(f"collapse-button{i}", "n_clicks") for i in ["", "-p", "-t"]],
[State(f"collapse{i}", "is_open") for i in ["", "-p", "-t"]],
)
def toggle_accordion(n1, n2, n3, is_open1, is_open2, is_open3):
r"""
Toggle Open/Close of input groups.
Parameters
----------
n1, n2, n3 : `int`
Number of clicks made on a specified button.
is_open1, is_open2, is_open3 : `bool`
Boolean indicating if the collapsible of each group is open.
Returns
-------
stage1, stage2, stage3 : :class:`bool`
Corresponding responsive open state for each collapsible group.
"""
ctx = dash.callback_context
# Check if any group is being clicked on first, by detecting change in number of clicks
if not ctx.triggered:
return False, False, False
else:
button_id = ctx.triggered[0]["prop_id"].split(".")[0]
# When a group is clicked, toggle its state and close the others
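The accordion behaviour the docstring describes — toggle the clicked group, close the rest — reduces to a small pure function. A sketch with illustrative names, keyed on the `""`/`"-p"`/`"-t"` suffixes used above:

```python
def accordion_state(clicked_suffix, suffixes, open_states):
    """Toggle the clicked group's open state and close all others.

    clicked_suffix identifies the collapse button that fired; suffixes
    lists all group suffixes; open_states holds their current states.
    """
    return tuple(
        (not open_states[i]) if s == clicked_suffix else False
        for i, s in enumerate(suffixes)
    )
```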
'z,-x+3/4,-y+3/4',
'-z+3/4,-x+3/4,y', '-z+3/4,x,-y+3/4', 'y,z,x', '-y+3/4,z,-x+3/4', 'y,-z+3/4,-x+3/4', '-y+3/4,-z+3/4,x',
'-x,-y,-z', 'x+1/4,y+1/4,-z', 'x+1/4,-y,z+1/4', '-x,y+1/4,z+1/4', '-z,-x,-y', '-z,x+1/4,y+1/4',
'z+1/4,x+1/4,-y', 'z+1/4,-x,y+1/4', '-y,-z,-x', 'y+1/4,-z,x+1/4', '-y,z+1/4,x+1/4', 'y+1/4,z+1/4,-x',
'x,y+1/2,z+1/2', '-x+3/4,-y+1/4,z+1/2', '-x+3/4,y+1/2,-z+1/4', 'x,-y+1/4,-z+1/4', 'z,x+1/2,y+1/2',
'z,-x+1/4,-y+1/4', '-z+3/4,-x+1/4,y+1/2', '-z+3/4,x+1/2,-y+1/4', 'y,z+1/2,x+1/2', '-y+3/4,z+1/2,-x+1/4',
'y,-z+1/4,-x+1/4', '-y+3/4,-z+1/4,x+1/2', '-x,-y+1/2,-z+1/2', 'x+1/4,y+3/4,-z+1/2', 'x+1/4,-y+1/2,z+3/4',
'-x,y+3/4,z+3/4', '-z,-x+1/2,-y+1/2', '-z,x+3/4,y+3/4', 'z+1/4,x+3/4,-y+1/2', 'z+1/4,-x+1/2,y+3/4',
'-y,-z+1/2,-x+1/2', 'y+1/4,-z+1/2,x+3/4', '-y,z+3/4,x+3/4', 'y+1/4,z+3/4,-x+1/2', 'x+1/2,y,z+1/2',
'-x+1/4,-y+3/4,z+1/2', '-x+1/4,y,-z+1/4', 'x+1/2,-y+3/4,-z+1/4', 'z+1/2,x,y+1/2', 'z+1/2,-x+3/4,-y+1/4',
'-z+1/4,-x+3/4,y+1/2', '-z+1/4,x,-y+1/4', 'y+1/2,z,x+1/2', '-y+1/4,z,-x+1/4', 'y+1/2,-z+3/4,-x+1/4',
'-y+1/4,-z+3/4,x+1/2', '-x+1/2,-y,-z+1/2', 'x+3/4,y+1/4,-z+1/2', 'x+3/4,-y,z+3/4', '-x+1/2,y+1/4,z+3/4',
'-z+1/2,-x,-y+1/2', '-z+1/2,x+1/4,y+3/4', 'z+3/4,x+1/4,-y+1/2', 'z+3/4,-x,y+3/4', '-y+1/2,-z,-x+1/2',
'y+3/4,-z,x+3/4', '-y+1/2,z+1/4,x+3/4', 'y+3/4,z+1/4,-x+1/2', 'x+1/2,y+1/2,z', '-x+1/4,-y+1/4,z',
'-x+1/4,y+1/2,-z+3/4', 'x+1/2,-y+1/4,-z+3/4', 'z+1/2,x+1/2,y', 'z+1/2,-x+1/4,-y+3/4', '-z+1/4,-x+1/4,y',
'-z+1/4,x+1/2,-y+3/4', 'y+1/2,z+1/2,x', '-y+1/4,z+1/2,-x+3/4', 'y+1/2,-z+1/4,-x+3/4', '-y+1/4,-z+1/4,x',
'-x+1/2,-y+1/2,-z', 'x+3/4,y+3/4,-z', 'x+3/4,-y+1/2,z+1/4', '-x+1/2,y+3/4,z+1/4', '-z+1/2,-x+1/2,-y',
'-z+1/2,x+3/4,y+1/4', 'z+3/4,x+3/4,-y', 'z+3/4,-x+1/2,y+1/4', '-y+1/2,-z+1/2,-x', 'y+3/4,-z+1/2,x+1/4',
'-y+1/2,z+3/4,x+1/4', 'y+3/4,z+3/4,-x'],
204: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', '-x,-y,-z', 'x,y,-z', 'x,-y,z', '-x,y,z', '-z,-x,-y', '-z,x,y', 'z,x,-y', 'z,-x,y',
'-y,-z,-x', 'y,-z,x', '-y,z,x', 'y,z,-x', 'x+1/2,y+1/2,z+1/2', '-x+1/2,-y+1/2,z+1/2',
'-x+1/2,y+1/2,-z+1/2', 'x+1/2,-y+1/2,-z+1/2', 'z+1/2,x+1/2,y+1/2', 'z+1/2,-x+1/2,-y+1/2',
'-z+1/2,-x+1/2,y+1/2', '-z+1/2,x+1/2,-y+1/2', 'y+1/2,z+1/2,x+1/2', '-y+1/2,z+1/2,-x+1/2',
'y+1/2,-z+1/2,-x+1/2', '-y+1/2,-z+1/2,x+1/2', '-x+1/2,-y+1/2,-z+1/2', 'x+1/2,y+1/2,-z+1/2',
'x+1/2,-y+1/2,z+1/2', '-x+1/2,y+1/2,z+1/2', '-z+1/2,-x+1/2,-y+1/2', '-z+1/2,x+1/2,y+1/2',
'z+1/2,x+1/2,-y+1/2', 'z+1/2,-x+1/2,y+1/2', '-y+1/2,-z+1/2,-x+1/2', 'y+1/2,-z+1/2,x+1/2',
'-y+1/2,z+1/2,x+1/2', 'y+1/2,z+1/2,-x+1/2'],
205: ['x,y,z', '-x+1/2,-y,z+1/2', '-x,y+1/2,-z+1/2', 'x+1/2,-y+1/2,-z', 'z,x,y', 'z+1/2,-x+1/2,-y',
'-z+1/2,-x,y+1/2', '-z,x+1/2,-y+1/2', 'y,z,x', '-y,z+1/2,-x+1/2', 'y+1/2,-z+1/2,-x', '-y+1/2,-z,x+1/2',
'-x,-y,-z', 'x+1/2,y,-z+1/2', 'x,-y+1/2,z+1/2', '-x+1/2,y+1/2,z', '-z,-x,-y', '-z+1/2,x+1/2,y',
'z+1/2,x,-y+1/2', 'z,-x+1/2,y+1/2', '-y,-z,-x', 'y,-z+1/2,x+1/2', '-y+1/2,z+1/2,x', 'y+1/2,z,-x+1/2'],
206: ['x,y,z', '-x+1/2,-y,z+1/2', '-x,y+1/2,-z+1/2', 'x+1/2,-y+1/2,-z', 'z,x,y', 'z+1/2,-x+1/2,-y',
'-z+1/2,-x,y+1/2', '-z,x+1/2,-y+1/2', 'y,z,x', '-y,z+1/2,-x+1/2', 'y+1/2,-z+1/2,-x', '-y+1/2,-z,x+1/2',
'-x,-y,-z', 'x+1/2,y,-z+1/2', 'x,-y+1/2,z+1/2', '-x+1/2,y+1/2,z', '-z,-x,-y', '-z+1/2,x+1/2,y',
'z+1/2,x,-y+1/2', 'z,-x+1/2,y+1/2', '-y,-z,-x', 'y,-z+1/2,x+1/2', '-y+1/2,z+1/2,x', 'y+1/2,z,-x+1/2',
'x+1/2,y+1/2,z+1/2', '-x,-y+1/2,z', '-x+1/2,y,-z', 'x,-y,-z+1/2', 'z+1/2,x+1/2,y+1/2', 'z,-x,-y+1/2',
'-z,-x+1/2,y', '-z+1/2,x,-y', 'y+1/2,z+1/2,x+1/2', '-y+1/2,z,-x', 'y,-z,-x+1/2', '-y,-z+1/2,x',
'-x+1/2,-y+1/2,-z+1/2', 'x,y+1/2,-z', 'x+1/2,-y,z', '-x,y,z+1/2', '-z+1/2,-x+1/2,-y+1/2', '-z,x,y+1/2',
'z,x+1/2,-y', 'z+1/2,-x,y', '-y+1/2,-z+1/2,-x+1/2', 'y+1/2,-z,x', '-y,z,x+1/2', 'y,z+1/2,-x'],
207: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y,x,-z', '-y,-x,-z', 'y,-x,z', '-y,x,z', 'x,z,-y', '-x,z,y', '-x,-z,-y', 'x,-z,y',
'z,y,-x', 'z,-y,x', '-z,y,x', '-z,-y,-x'],
208: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y+1/2,x+1/2,-z+1/2', '-y+1/2,-x+1/2,-z+1/2', 'y+1/2,-x+1/2,z+1/2',
'-y+1/2,x+1/2,z+1/2', 'x+1/2,z+1/2,-y+1/2', '-x+1/2,z+1/2,y+1/2', '-x+1/2,-z+1/2,-y+1/2',
'x+1/2,-z+1/2,y+1/2', 'z+1/2,y+1/2,-x+1/2', 'z+1/2,-y+1/2,x+1/2', '-z+1/2,y+1/2,x+1/2',
'-z+1/2,-y+1/2,-x+1/2'],
209: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y,x,-z', '-y,-x,-z', 'y,-x,z', '-y,x,z', 'x,z,-y', '-x,z,y', '-x,-z,-y', 'x,-z,y',
'z,y,-x', 'z,-y,x', '-z,y,x', '-z,-y,-x', 'x,y+1/2,z+1/2', '-x,-y+1/2,z+1/2', '-x,y+1/2,-z+1/2',
'x,-y+1/2,-z+1/2', 'z,x+1/2,y+1/2', 'z,-x+1/2,-y+1/2', '-z,-x+1/2,y+1/2', '-z,x+1/2,-y+1/2',
'y,z+1/2,x+1/2', '-y,z+1/2,-x+1/2', 'y,-z+1/2,-x+1/2', '-y,-z+1/2,x+1/2', 'y,x+1/2,-z+1/2',
'-y,-x+1/2,-z+1/2', 'y,-x+1/2,z+1/2', '-y,x+1/2,z+1/2', 'x,z+1/2,-y+1/2', '-x,z+1/2,y+1/2',
'-x,-z+1/2,-y+1/2', 'x,-z+1/2,y+1/2', 'z,y+1/2,-x+1/2', 'z,-y+1/2,x+1/2', '-z,y+1/2,x+1/2',
'-z,-y+1/2,-x+1/2', 'x+1/2,y,z+1/2', '-x+1/2,-y,z+1/2', '-x+1/2,y,-z+1/2', 'x+1/2,-y,-z+1/2',
'z+1/2,x,y+1/2', 'z+1/2,-x,-y+1/2', '-z+1/2,-x,y+1/2', '-z+1/2,x,-y+1/2', 'y+1/2,z,x+1/2',
'-y+1/2,z,-x+1/2', 'y+1/2,-z,-x+1/2', '-y+1/2,-z,x+1/2', 'y+1/2,x,-z+1/2', '-y+1/2,-x,-z+1/2',
'y+1/2,-x,z+1/2', '-y+1/2,x,z+1/2', 'x+1/2,z,-y+1/2', '-x+1/2,z,y+1/2', '-x+1/2,-z,-y+1/2',
'x+1/2,-z,y+1/2', 'z+1/2,y,-x+1/2', 'z+1/2,-y,x+1/2', '-z+1/2,y,x+1/2', '-z+1/2,-y,-x+1/2',
'x+1/2,y+1/2,z', '-x+1/2,-y+1/2,z', '-x+1/2,y+1/2,-z', 'x+1/2,-y+1/2,-z', 'z+1/2,x+1/2,y',
'z+1/2,-x+1/2,-y', '-z+1/2,-x+1/2,y', '-z+1/2,x+1/2,-y', 'y+1/2,z+1/2,x', '-y+1/2,z+1/2,-x',
'y+1/2,-z+1/2,-x', '-y+1/2,-z+1/2,x', 'y+1/2,x+1/2,-z', '-y+1/2,-x+1/2,-z', 'y+1/2,-x+1/2,z',
'-y+1/2,x+1/2,z', 'x+1/2,z+1/2,-y', '-x+1/2,z+1/2,y', '-x+1/2,-z+1/2,-y', 'x+1/2,-z+1/2,y',
'z+1/2,y+1/2,-x', 'z+1/2,-y+1/2,x', '-z+1/2,y+1/2,x', '-z+1/2,-y+1/2,-x'],
210: ['x,y,z', '-x,-y+1/2,z+1/2', '-x+1/2,y+1/2,-z', 'x+1/2,-y,-z+1/2', 'z,x,y', 'z+1/2,-x,-y+1/2',
'-z,-x+1/2,y+1/2', '-z+1/2,x+1/2,-y', 'y,z,x', '-y+1/2,z+1/2,-x', 'y+1/2,-z,-x+1/2', '-y,-z+1/2,x+1/2',
'y+3/4,x+1/4,-z+3/4', '-y+1/4,-x+1/4,-z+1/4', 'y+1/4,-x+3/4,z+3/4', '-y+3/4,x+3/4,z+1/4',
'x+3/4,z+1/4,-y+3/4', '-x+3/4,z+3/4,y+1/4', '-x+1/4,-z+1/4,-y+1/4', 'x+1/4,-z+3/4,y+3/4',
'z+3/4,y+1/4,-x+3/4', 'z+1/4,-y+3/4,x+3/4', '-z+3/4,y+3/4,x+1/4', '-z+1/4,-y+1/4,-x+1/4',
'x,y+1/2,z+1/2', '-x,-y,z', '-x+1/2,y,-z+1/2', 'x+1/2,-y+1/2,-z', 'z,x+1/2,y+1/2', 'z+1/2,-x+1/2,-y',
'-z,-x,y', '-z+1/2,x,-y+1/2', 'y,z+1/2,x+1/2', '-y+1/2,z,-x+1/2', 'y+1/2,-z+1/2,-x', '-y,-z,x',
'y+3/4,x+3/4,-z+1/4', '-y+1/4,-x+3/4,-z+3/4', 'y+1/4,-x+1/4,z+1/4', '-y+3/4,x+1/4,z+3/4',
'x+3/4,z+3/4,-y+1/4', '-x+3/4,z+1/4,y+3/4', '-x+1/4,-z+3/4,-y+3/4', 'x+1/4,-z+1/4,y+1/4',
'z+3/4,y+3/4,-x+1/4', 'z+1/4,-y+1/4,x+1/4', '-z+3/4,y+1/4,x+3/4', '-z+1/4,-y+3/4,-x+3/4',
'x+1/2,y,z+1/2', '-x+1/2,-y+1/2,z', '-x,y+1/2,-z+1/2', 'x,-y,-z', 'z+1/2,x,y+1/2', 'z,-x,-y',
'-z+1/2,-x+1/2,y', '-z,x+1/2,-y+1/2', 'y+1/2,z,x+1/2', '-y,z+1/2,-x+1/2', 'y,-z,-x', '-y+1/2,-z+1/2,x',
'y+1/4,x+1/4,-z+1/4', '-y+3/4,-x+1/4,-z+3/4', 'y+3/4,-x+3/4,z+1/4', '-y+1/4,x+3/4,z+3/4',
'x+1/4,z+1/4,-y+1/4', '-x+1/4,z+3/4,y+3/4', '-x+3/4,-z+1/4,-y+3/4', 'x+3/4,-z+3/4,y+1/4',
'z+1/4,y+1/4,-x+1/4', 'z+3/4,-y+3/4,x+1/4', '-z+1/4,y+3/4,x+3/4', '-z+3/4,-y+1/4,-x+3/4',
'x+1/2,y+1/2,z', '-x+1/2,-y,z+1/2', '-x,y,-z', 'x,-y+1/2,-z+1/2', 'z+1/2,x+1/2,y', 'z,-x+1/2,-y+1/2',
'-z+1/2,-x,y+1/2', '-z,x,-y', 'y+1/2,z+1/2,x', '-y,z,-x', 'y,-z+1/2,-x+1/2', '-y+1/2,-z,x+1/2',
'y+1/4,x+3/4,-z+3/4', '-y+3/4,-x+3/4,-z+1/4', 'y+3/4,-x+1/4,z+3/4', '-y+1/4,x+1/4,z+1/4',
'x+1/4,z+3/4,-y+3/4', '-x+1/4,z+1/4,y+1/4', '-x+3/4,-z+3/4,-y+1/4', 'x+3/4,-z+1/4,y+3/4',
'z+1/4,y+3/4,-x+3/4', 'z+3/4,-y+1/4,x+3/4', '-z+1/4,y+1/4,x+1/4', '-z+3/4,-y+3/4,-x+1/4'],
211: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y,x,-z', '-y,-x,-z', 'y,-x,z', '-y,x,z', 'x,z,-y', '-x,z,y', '-x,-z,-y', 'x,-z,y',
'z,y,-x', 'z,-y,x', '-z,y,x', '-z,-y,-x', 'x+1/2,y+1/2,z+1/2', '-x+1/2,-y+1/2,z+1/2',
'-x+1/2,y+1/2,-z+1/2', 'x+1/2,-y+1/2,-z+1/2', 'z+1/2,x+1/2,y+1/2', 'z+1/2,-x+1/2,-y+1/2',
'-z+1/2,-x+1/2,y+1/2', '-z+1/2,x+1/2,-y+1/2', 'y+1/2,z+1/2,x+1/2', '-y+1/2,z+1/2,-x+1/2',
'y+1/2,-z+1/2,-x+1/2', '-y+1/2,-z+1/2,x+1/2', 'y+1/2,x+1/2,-z+1/2', '-y+1/2,-x+1/2,-z+1/2',
'y+1/2,-x+1/2,z+1/2', '-y+1/2,x+1/2,z+1/2', 'x+1/2,z+1/2,-y+1/2', '-x+1/2,z+1/2,y+1/2',
'-x+1/2,-z+1/2,-y+1/2', 'x+1/2,-z+1/2,y+1/2', 'z+1/2,y+1/2,-x+1/2', 'z+1/2,-y+1/2,x+1/2',
'-z+1/2,y+1/2,x+1/2', '-z+1/2,-y+1/2,-x+1/2'],
212: ['x,y,z', '-x+1/2,-y,z+1/2', '-x,y+1/2,-z+1/2', 'x+1/2,-y+1/2,-z', 'z,x,y', 'z+1/2,-x+1/2,-y',
'-z+1/2,-x,y+1/2', '-z,x+1/2,-y+1/2', 'y,z,x', '-y,z+1/2,-x+1/2', 'y+1/2,-z+1/2,-x', '-y+1/2,-z,x+1/2',
'y+1/4,x+3/4,-z+3/4', '-y+1/4,-x+1/4,-z+1/4', 'y+3/4,-x+3/4,z+1/4', '-y+3/4,x+1/4,z+3/4',
'x+1/4,z+3/4,-y+3/4', '-x+3/4,z+1/4,y+3/4', '-x+1/4,-z+1/4,-y+1/4', 'x+3/4,-z+3/4,y+1/4',
'z+1/4,y+3/4,-x+3/4', 'z+3/4,-y+3/4,x+1/4', '-z+3/4,y+1/4,x+3/4', '-z+1/4,-y+1/4,-x+1/4'],
213: ['x,y,z', '-x+1/2,-y,z+1/2', '-x,y+1/2,-z+1/2', 'x+1/2,-y+1/2,-z', 'z,x,y', 'z+1/2,-x+1/2,-y',
'-z+1/2,-x,y+1/2', '-z,x+1/2,-y+1/2', 'y,z,x', '-y,z+1/2,-x+1/2', 'y+1/2,-z+1/2,-x', '-y+1/2,-z,x+1/2',
'y+3/4,x+1/4,-z+1/4', '-y+3/4,-x+3/4,-z+3/4', 'y+1/4,-x+1/4,z+3/4', '-y+1/4,x+3/4,z+1/4',
'x+3/4,z+1/4,-y+1/4', '-x+1/4,z+3/4,y+1/4', '-x+3/4,-z+3/4,-y+3/4', 'x+1/4,-z+1/4,y+3/4',
'z+3/4,y+1/4,-x+1/4', 'z+1/4,-y+1/4,x+3/4', '-z+1/4,y+3/4,x+1/4', '-z+3/4,-y+3/4,-x+3/4'],
214: ['x,y,z', '-x+1/2,-y,z+1/2', '-x,y+1/2,-z+1/2', 'x+1/2,-y+1/2,-z', 'z,x,y', 'z+1/2,-x+1/2,-y',
'-z+1/2,-x,y+1/2', '-z,x+1/2,-y+1/2', 'y,z,x', '-y,z+1/2,-x+1/2', 'y+1/2,-z+1/2,-x', '-y+1/2,-z,x+1/2',
'y+3/4,x+1/4,-z+1/4', '-y+3/4,-x+3/4,-z+3/4', 'y+1/4,-x+1/4,z+3/4', '-y+1/4,x+3/4,z+1/4',
'x+3/4,z+1/4,-y+1/4', '-x+1/4,z+3/4,y+1/4', '-x+3/4,-z+3/4,-y+3/4', 'x+1/4,-z+1/4,y+3/4',
'z+3/4,y+1/4,-x+1/4', 'z+1/4,-y+1/4,x+3/4', '-z+1/4,y+3/4,x+1/4', '-z+3/4,-y+3/4,-x+3/4',
'x+1/2,y+1/2,z+1/2', '-x,-y+1/2,z', '-x+1/2,y,-z', 'x,-y,-z+1/2', 'z+1/2,x+1/2,y+1/2', 'z,-x,-y+1/2',
'-z,-x+1/2,y', '-z+1/2,x,-y', 'y+1/2,z+1/2,x+1/2', '-y+1/2,z,-x', 'y,-z,-x+1/2', '-y,-z+1/2,x',
'y+1/4,x+3/4,-z+3/4', '-y+1/4,-x+1/4,-z+1/4', 'y+3/4,-x+3/4,z+1/4', '-y+3/4,x+1/4,z+3/4',
'x+1/4,z+3/4,-y+3/4', '-x+3/4,z+1/4,y+3/4', '-x+1/4,-z+1/4,-y+1/4', 'x+3/4,-z+3/4,y+1/4',
'z+1/4,y+3/4,-x+3/4', 'z+3/4,-y+3/4,x+1/4', '-z+3/4,y+1/4,x+3/4', '-z+1/4,-y+1/4,-x+1/4'],
215: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y,x,z', '-y,-x,z', 'y,-x,-z', '-y,x,-z', 'x,z,y', '-x,z,-y', '-x,-z,y', 'x,-z,-y',
'z,y,x', 'z,-y,-x', '-z,y,-x', '-z,-y,x'],
216: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y,x,z', '-y,-x,z', 'y,-x,-z', '-y,x,-z', 'x,z,y', '-x,z,-y', '-x,-z,y', 'x,-z,-y',
'z,y,x', 'z,-y,-x', '-z,y,-x', '-z,-y,x', 'x,y+1/2,z+1/2', '-x,-y+1/2,z+1/2', '-x,y+1/2,-z+1/2',
'x,-y+1/2,-z+1/2', 'z,x+1/2,y+1/2', 'z,-x+1/2,-y+1/2', '-z,-x+1/2,y+1/2', '-z,x+1/2,-y+1/2',
'y,z+1/2,x+1/2', '-y,z+1/2,-x+1/2', 'y,-z+1/2,-x+1/2', '-y,-z+1/2,x+1/2', 'y,x+1/2,z+1/2',
'-y,-x+1/2,z+1/2', 'y,-x+1/2,-z+1/2', '-y,x+1/2,-z+1/2', 'x,z+1/2,y+1/2', '-x,z+1/2,-y+1/2',
'-x,-z+1/2,y+1/2', 'x,-z+1/2,-y+1/2', 'z,y+1/2,x+1/2', 'z,-y+1/2,-x+1/2', '-z,y+1/2,-x+1/2',
'-z,-y+1/2,x+1/2', 'x+1/2,y,z+1/2', '-x+1/2,-y,z+1/2', '-x+1/2,y,-z+1/2', 'x+1/2,-y,-z+1/2',
'z+1/2,x,y+1/2', 'z+1/2,-x,-y+1/2', '-z+1/2,-x,y+1/2', '-z+1/2,x,-y+1/2', 'y+1/2,z,x+1/2',
'-y+1/2,z,-x+1/2', 'y+1/2,-z,-x+1/2', '-y+1/2,-z,x+1/2', 'y+1/2,x,z+1/2', '-y+1/2,-x,z+1/2',
'y+1/2,-x,-z+1/2', '-y+1/2,x,-z+1/2', 'x+1/2,z,y+1/2', '-x+1/2,z,-y+1/2', '-x+1/2,-z,y+1/2',
'x+1/2,-z,-y+1/2', 'z+1/2,y,x+1/2', 'z+1/2,-y,-x+1/2', '-z+1/2,y,-x+1/2', '-z+1/2,-y,x+1/2',
'x+1/2,y+1/2,z', '-x+1/2,-y+1/2,z', '-x+1/2,y+1/2,-z', 'x+1/2,-y+1/2,-z', 'z+1/2,x+1/2,y',
'z+1/2,-x+1/2,-y', '-z+1/2,-x+1/2,y', '-z+1/2,x+1/2,-y', 'y+1/2,z+1/2,x', '-y+1/2,z+1/2,-x',
'y+1/2,-z+1/2,-x', '-y+1/2,-z+1/2,x', 'y+1/2,x+1/2,z', '-y+1/2,-x+1/2,z', 'y+1/2,-x+1/2,-z',
'-y+1/2,x+1/2,-z', 'x+1/2,z+1/2,y', '-x+1/2,z+1/2,-y', '-x+1/2,-z+1/2,y', 'x+1/2,-z+1/2,-y',
'z+1/2,y+1/2,x', 'z+1/2,-y+1/2,-x', '-z+1/2,y+1/2,-x', '-z+1/2,-y+1/2,x'],
217: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y,x,z', '-y,-x,z', 'y,-x,-z', '-y,x,-z', 'x,z,y', '-x,z,-y', '-x,-z,y', 'x,-z,-y',
'z,y,x', 'z,-y,-x', '-z,y,-x', '-z,-y,x', 'x+1/2,y+1/2,z+1/2', '-x+1/2,-y+1/2,z+1/2',
'-x+1/2,y+1/2,-z+1/2', 'x+1/2,-y+1/2,-z+1/2', 'z+1/2,x+1/2,y+1/2', 'z+1/2,-x+1/2,-y+1/2',
'-z+1/2,-x+1/2,y+1/2', '-z+1/2,x+1/2,-y+1/2', 'y+1/2,z+1/2,x+1/2', '-y+1/2,z+1/2,-x+1/2',
'y+1/2,-z+1/2,-x+1/2', '-y+1/2,-z+1/2,x+1/2', 'y+1/2,x+1/2,z+1/2', '-y+1/2,-x+1/2,z+1/2',
'y+1/2,-x+1/2,-z+1/2', '-y+1/2,x+1/2,-z+1/2', 'x+1/2,z+1/2,y+1/2', '-x+1/2,z+1/2,-y+1/2',
'-x+1/2,-z+1/2,y+1/2', 'x+1/2,-z+1/2,-y+1/2', 'z+1/2,y+1/2,x+1/2', 'z+1/2,-y+1/2,-x+1/2',
'-z+1/2,y+1/2,-x+1/2', '-z+1/2,-y+1/2,x+1/2'],
218: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y+1/2,x+1/2,z+1/2', '-y+1/2,-x+1/2,z+1/2', 'y+1/2,-x+1/2,-z+1/2',
'-y+1/2,x+1/2,-z+1/2', 'x+1/2,z+1/2,y+1/2', '-x+1/2,z+1/2,-y+1/2', '-x+1/2,-z+1/2,y+1/2',
'x+1/2,-z+1/2,-y+1/2', 'z+1/2,y+1/2,x+1/2', 'z+1/2,-y+1/2,-x+1/2', '-z+1/2,y+1/2,-x+1/2',
'-z+1/2,-y+1/2,x+1/2'],
219: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y+1/2,x+1/2,z+1/2', '-y+1/2,-x+1/2,z+1/2', 'y+1/2,-x+1/2,-z+1/2',
'-y+1/2,x+1/2,-z+1/2', 'x+1/2,z+1/2,y+1/2', '-x+1/2,z+1/2,-y+1/2', '-x+1/2,-z+1/2,y+1/2',
'x+1/2,-z+1/2,-y+1/2', 'z+1/2,y+1/2,x+1/2', 'z+1/2,-y+1/2,-x+1/2', '-z+1/2,y+1/2,-x+1/2',
'-z+1/2,-y+1/2,x+1/2', 'x,y+1/2,z+1/2', '-x,-y+1/2,z+1/2', '-x,y+1/2,-z+1/2', 'x,-y+1/2,-z+1/2',
'z,x+1/2,y+1/2', 'z,-x+1/2,-y+1/2', '-z,-x+1/2,y+1/2', '-z,x+1/2,-y+1/2', 'y,z+1/2,x+1/2',
'-y,z+1/2,-x+1/2', 'y,-z+1/2,-x+1/2', '-y,-z+1/2,x+1/2', 'y+1/2,x,z', '-y+1/2,-x,z', 'y+1/2,-x,-z',
'-y+1/2,x,-z', 'x+1/2,z,y', '-x+1/2,z,-y', '-x+1/2,-z,y', 'x+1/2,-z,-y', 'z+1/2,y,x', 'z+1/2,-y,-x',
'-z+1/2,y,-x', '-z+1/2,-y,x', 'x+1/2,y,z+1/2', '-x+1/2,-y,z+1/2', '-x+1/2,y,-z+1/2', 'x+1/2,-y,-z+1/2',
'z+1/2,x,y+1/2', 'z+1/2,-x,-y+1/2', '-z+1/2,-x,y+1/2', '-z+1/2,x,-y+1/2', 'y+1/2,z,x+1/2',
'-y+1/2,z,-x+1/2', 'y+1/2,-z,-x+1/2', '-y+1/2,-z,x+1/2', 'y,x+1/2,z', '-y,-x+1/2,z', 'y,-x+1/2,-z',
'-y,x+1/2,-z', 'x,z+1/2,y', '-x,z+1/2,-y', '-x,-z+1/2,y', 'x,-z+1/2,-y', 'z,y+1/2,x', 'z,-y+1/2,-x',
'-z,y+1/2,-x', '-z,-y+1/2,x', 'x+1/2,y+1/2,z', '-x+1/2,-y+1/2,z', '-x+1/2,y+1/2,-z', 'x+1/2,-y+1/2,-z',
'z+1/2,x+1/2,y', 'z+1/2,-x+1/2,-y', '-z+1/2,-x+1/2,y', '-z+1/2,x+1/2,-y', 'y+1/2,z+1/2,x',
'-y+1/2,z+1/2,-x', 'y+1/2,-z+1/2,-x', '-y+1/2,-z+1/2,x', 'y,x,z+1/2', '-y,-x,z+1/2', 'y,-x,-z+1/2',
'-y,x,-z+1/2', 'x,z,y+1/2', '-x,z,-y+1/2', '-x,-z,y+1/2', 'x,-z,-y+1/2', 'z,y,x+1/2', 'z,-y,-x+1/2',
'-z,y,-x+1/2', '-z,-y,x+1/2'],
220: ['x,y,z', '-x+1/2,-y,z+1/2', '-x,y+1/2,-z+1/2', 'x+1/2,-y+1/2,-z', 'z,x,y', 'z+1/2,-x+1/2,-y',
'-z+1/2,-x,y+1/2', '-z,x+1/2,-y+1/2', 'y,z,x', '-y,z+1/2,-x+1/2', 'y+1/2,-z+1/2,-x', '-y+1/2,-z,x+1/2',
'y+1/4,x+1/4,z+1/4', '-y+1/4,-x+3/4,z+3/4', 'y+3/4,-x+1/4,-z+3/4', '-y+3/4,x+3/4,-z+1/4',
'x+1/4,z+1/4,y+1/4', '-x+3/4,z+3/4,-y+1/4', '-x+1/4,-z+3/4,y+3/4', 'x+3/4,-z+1/4,-y+3/4',
'z+1/4,y+1/4,x+1/4', 'z+3/4,-y+1/4,-x+3/4', '-z+3/4,y+3/4,-x+1/4', '-z+1/4,-y+3/4,x+3/4',
'x+1/2,y+1/2,z+1/2', '-x,-y+1/2,z', '-x+1/2,y,-z', 'x,-y,-z+1/2', 'z+1/2,x+1/2,y+1/2', 'z,-x,-y+1/2',
'-z,-x+1/2,y', '-z+1/2,x,-y', 'y+1/2,z+1/2,x+1/2', '-y+1/2,z,-x', 'y,-z,-x+1/2', '-y,-z+1/2,x',
'y+3/4,x+3/4,z+3/4', '-y+3/4,-x+1/4,z+1/4', 'y+1/4,-x+3/4,-z+1/4', '-y+1/4,x+1/4,-z+3/4',
'x+3/4,z+3/4,y+3/4', '-x+1/4,z+1/4,-y+3/4', '-x+3/4,-z+1/4,y+1/4', 'x+1/4,-z+3/4,-y+1/4',
'z+3/4,y+3/4,x+3/4', 'z+1/4,-y+3/4,-x+1/4', '-z+1/4,y+1/4,-x+3/4', '-z+3/4,-y+1/4,x+1/4'],
221: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y,x,-z', '-y,-x,-z', 'y,-x,z', '-y,x,z', 'x,z,-y', '-x,z,y', '-x,-z,-y', 'x,-z,y',
'z,y,-x', 'z,-y,x', '-z,y,x', '-z,-y,-x', '-x,-y,-z', 'x,y,-z', 'x,-y,z', '-x,y,z', '-z,-x,-y', '-z,x,y',
'z,x,-y', 'z,-x,y', '-y,-z,-x', 'y,-z,x', '-y,z,x', 'y,z,-x', '-y,-x,z', 'y,x,z', '-y,x,-z', 'y,-x,-z',
'-x,-z,y', 'x,-z,-y', 'x,z,y', '-x,z,-y', '-z,-y,x', '-z,y,-x', 'z,-y,-x', 'z,y,x'],
222: ['x,y,z', '-x+1/2,-y+1/2,z', '-x+1/2,y,-z+1/2', 'x,-y+1/2,-z+1/2', 'z,x,y', 'z,-x+1/2,-y+1/2',
'-z+1/2,-x+1/2,y', '-z+1/2,x,-y+1/2', 'y,z,x', '-y+1/2,z,-x+1/2', 'y,-z+1/2,-x+1/2', '-y+1/2,-z+1/2,x',
'y,x,-z+1/2', '-y+1/2,-x+1/2,-z+1/2', 'y,-x+1/2,z', '-y+1/2,x,z', 'x,z,-y+1/2', '-x+1/2,z,y',
'-x+1/2,-z+1/2,-y+1/2', 'x,-z+1/2,y', 'z,y,-x+1/2', 'z,-y+1/2,x', '-z+1/2,y,x', '-z+1/2,-y+1/2,-x+1/2',
'-x,-y,-z', 'x+1/2,y+1/2,-z', 'x+1/2,-y,z+1/2', '-x,y+1/2,z+1/2', '-z,-x,-y', '-z,x+1/2,y+1/2',
'z+1/2,x+1/2,-y', 'z+1/2,-x,y+1/2', '-y,-z,-x', 'y+1/2,-z,x+1/2', '-y,z+1/2,x+1/2', 'y+1/2,z+1/2,-x',
'-y,-x,z+1/2', 'y+1/2,x+1/2,z+1/2', '-y,x+1/2,-z', 'y+1/2,-x,-z', '-x,-z,y+1/2', 'x+1/2,-z,-y',
'x+1/2,z+1/2,y+1/2', '-x,z+1/2,-y', '-z,-y,x+1/2', '-z,y+1/2,-x', 'z+1/2,-y,-x', 'z+1/2,y+1/2,x+1/2'],
223: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y+1/2,x+1/2,-z+1/2', '-y+1/2,-x+1/2,-z+1/2', 'y+1/2,-x+1/2,z+1/2',
'-y+1/2,x+1/2,z+1/2', 'x+1/2,z+1/2,-y+1/2', '-x+1/2,z+1/2,y+1/2', '-x+1/2,-z+1/2,-y+1/2',
'x+1/2,-z+1/2,y+1/2', 'z+1/2,y+1/2,-x+1/2', 'z+1/2,-y+1/2,x+1/2', '-z+1/2,y+1/2,x+1/2',
'-z+1/2,-y+1/2,-x+1/2', '-x,-y,-z', 'x,y,-z', 'x,-y,z', '-x,y,z', '-z,-x,-y', '-z,x,y', 'z,x,-y',
'z,-x,y', '-y,-z,-x', 'y,-z,x', '-y,z,x', 'y,z,-x', '-y+1/2,-x+1/2,z+1/2', 'y+1/2,x+1/2,z+1/2',
'-y+1/2,x+1/2,-z+1/2', 'y+1/2,-x+1/2,-z+1/2', '-x+1/2,-z+1/2,y+1/2', 'x+1/2,-z+1/2,-y+1/2',
'x+1/2,z+1/2,y+1/2', '-x+1/2,z+1/2,-y+1/2', '-z+1/2,-y+1/2,x+1/2', '-z+1/2,y+1/2,-x+1/2',
'z+1/2,-y+1/2,-x+1/2', 'z+1/2,y+1/2,x+1/2'],
224: ['x,y,z', '-x+1/2,-y+1/2,z', '-x+1/2,y,-z+1/2', 'x,-y+1/2,-z+1/2', 'z,x,y', 'z,-x+1/2,-y+1/2',
'-z+1/2,-x+1/2,y', '-z+1/2,x,-y+1/2', 'y,z,x', '-y+1/2,z,-x+1/2', 'y,-z+1/2,-x+1/2', '-y+1/2,-z+1/2,x',
'y+1/2,x+1/2,-z', '-y,-x,-z', 'y+1/2,-x,z+1/2', '-y,x+1/2,z+1/2', 'x+1/2,z+1/2,-y', '-x,z+1/2,y+1/2',
'-x,-z,-y', 'x+1/2,-z,y+1/2', 'z+1/2,y+1/2,-x', 'z+1/2,-y,x+1/2', '-z,y+1/2,x+1/2', '-z,-y,-x',
'-x,-y,-z', 'x+1/2,y+1/2,-z', 'x+1/2,-y,z+1/2', '-x,y+1/2,z+1/2', '-z,-x,-y', '-z,x+1/2,y+1/2',
'z+1/2,x+1/2,-y', 'z+1/2,-x,y+1/2', '-y,-z,-x', 'y+1/2,-z,x+1/2', '-y,z+1/2,x+1/2', 'y+1/2,z+1/2,-x',
'-y+1/2,-x+1/2,z', 'y,x,z', '-y+1/2,x,-z+1/2', 'y,-x+1/2,-z+1/2', '-x+1/2,-z+1/2,y', 'x,-z+1/2,-y+1/2',
'x,z,y', '-x+1/2,z,-y+1/2', '-z+1/2,-y+1/2,x', '-z+1/2,y,-x+1/2', 'z,-y+1/2,-x+1/2', 'z,y,x'],
225: ['x,y,z', '-x,-y,z', '-x,y,-z', 'x,-y,-z', 'z,x,y', 'z,-x,-y', '-z,-x,y', '-z,x,-y', 'y,z,x', '-y,z,-x',
'y,-z,-x', '-y,-z,x', 'y,x,-z', '-y,-x,-z', 'y,-x,z', '-y,x,z', 'x,z,-y', '-x,z,y', '-x,-z,-y', 'x,-z,y',
'z,y,-x', 'z,-y,x', '-z,y,x', '-z,-y,-x', '-x,-y,-z', 'x,y,-z', 'x,-y,z', '-x,y,z', '-z,-x,-y', '-z,x,y',
'z,x,-y', 'z,-x,y', '-y,-z,-x', 'y,-z,x', '-y,z,x', 'y,z,-x', '-y,-x,z', 'y,x,z', '-y,x,-z', 'y,-x,-z',
'-x,-z,y', 'x,-z,-y', 'x,z,y', '-x,z,-y', '-z,-y,x', '-z,y,-x', 'z,-y,-x', 'z,y,x', 'x,y+1/2,z+1/2',
'-x,-y+1/2,z+1/2', '-x,y+1/2,-z+1/2', 'x,-y+1/2,-z+1/2', 'z,x+1/2,y+1/2', 'z,-x+1/2,-y+1/2',
'-z,-x+1/2,y+1/2', '-z,x+1/2,-y+1/2', 'y,z+1/2,x+1/2', '-y,z+1/2,-x+1/2', 'y,-z+1/2,-x+1/2',
'-y,-z+1/2,x+1/2', 'y,x+1/2,-z+1/2', '-y,-x+1/2,-z+1/2', 'y,-x+1/2,z+1/2', '-y,x+1/2,z+1/2',
'x,z+1/2,-y+1/2', '-x,z+1/2,y+1/2', '-x,-z+1/2,-y+1/2', 'x,-z+1/2,y+1/2', 'z,y+1/2,-x+1/2',
'z,-y+1/2,x+1/2', '-z,y+1/2,x+1/2', '-z,-y+1/2,-x+1/2', '-x,-y+1/2,-z+1/2', 'x,y+1/2,-z+1/2',
'x,-y+1/2,z+1/2', '-x,y+1/2,z+1/2', '-z,-x+1/2,-y+1/2', '-z,x+1/2,y+1/2', 'z,x+1/2,-y+1/2',
'z,-x+1/2,y+1/2', '-y,-z+1/2,-x+1/2', 'y,-z+1/2,x+1/2', '-y,z+1/2,x+1/2', 'y,z+1/2,-x+1/2',
'-y,-x+1/2,z+1/2', 'y,x+1/2,z+1/2', '-y,x+1/2,-z+1/2', 'y,-x+1/2,-z+1/2', '-x,-z+1/2,y+1/2',
'x,-z+1/2,-y+1/2', 'x,z+1/2,y+1/2', '-x,z+1/2,-y+1/2', '-z,-y+1/2,x+1/2', '-z,y+1/2,-x+1/2',
'z,-y+1/2,-x+1/2', 'z,y+1/2,x+1/2', 'x+1/2,y,z+1/2', '-x+1/2,-y,z+1/2', '-x+1/2,y,-z+1/2',
'x+1/2,-y,-z+1/2', 'z+1/2,x,y+1/2', 'z+1/2,-x,-y+1/2', '-z+1/2,-x,y+1/2', '-z+1/2,x,-y+1/2',
'y+1/2,z,x+1/2', '-y+1/2,z,-x+1/2', 'y+1/2,-z,-x+1/2', '-y+1/2,-z,x+1/2', 'y+1/2,x,-z+1/2',
'-y+1/2,-x,-z+1/2', 'y+1/2,-x,z+1/2', '-y+1/2,x,z+1/2', 'x+1/2,z,-y+1/2', '-x+1/2,z,y+1/2',
'-x+1/2,-z,-y+1/2', 'x+1/2,-z,y+1/2', 'z+1/2,y,-x+1/2', 'z+1/2,-y,x+1/2', '-z+1/2,y,x+1/2',
'-z+1/2,-y,-x+1/2', '-x+1/2,-y,-z+1/2', 'x+1/2,y,-z+1/2', 'x+1/2,-y,z+1/2', '-x+1/2,y,z+1/2',
'-z+1/2,-x,-y+1/2', '-z+1/2,x,y+1/2', 'z+1/2,x,-y+1/2', 'z+1/2,-x,y+1/2', '-y+1/2,-z,-x+1/2',
'y+1/2,-z,x+1/2', '-y+1/2,z,x+1/2', 'y+1/2,z,-x+1/2', '-y+1/2,-x,z+1/2', 'y+1/2,x,z+1/2',
'-y+1/2,x,-z+1/2',
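The entries above are plain `'x,y,z'`-style operator strings. A minimal sketch of how such a string can be applied to fractional coordinates (the parsing helper below is illustrative and not part of the original table):

```python
import re
from fractions import Fraction

def apply_symop(op, coords):
    """Apply one 'x,y,z'-style symmetry operator string to fractional coordinates."""
    x, y, z = coords
    result = []
    for component in op.split(','):
        value = Fraction(0)
        # Split the component into signed terms, e.g. '-y+1/2' -> ['-y', '+1/2'].
        for term in re.findall(r'[+-]?[^+-]+', component):
            sign = -1 if term.startswith('-') else 1
            term = term.lstrip('+-')
            if term in ('x', 'y', 'z'):
                value += sign * {'x': x, 'y': y, 'z': z}[term]
            else:
                value += sign * Fraction(term)
        result.append(value % 1)  # wrap back into the unit cell
    return tuple(result)
```

For example, `apply_symop('-y+1/2,x,-z+1/2', (Fraction(1, 4), Fraction(1, 4), Fraction(0)))` yields `(1/4, 1/4, 1/2)`.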
# Repository: limberc/hypercl
#!/usr/bin/env python3
# Copyright 2019 <NAME>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @title :cifar/sa_hyper_model.py
# @author :ch
# @contact :<EMAIL>
# @created :02/21/2019
# @version :1.0
# @python_version :3.6.6
"""
Convolutional hypernetwork with self-attention layers
-----------------------------------------------------
The module :mod:`cifar.sa_hyper_model` implements a hypernetwork that uses
transpose convolutions (like the generator of a GAN) to generate weights.
However, since convolutions typically capture only local correlations,
we additionally incorporate the self-attention mechanism developed by
Zhang et al., "Self-Attention Generative Adversarial Networks", 2018,
https://arxiv.org/abs/1805.08318
See :class:`utils.self_attention_layer.SelfAttnLayerV2` for details on this
layer type.
.. note::
This module has been temporarily moved to this location from the deprecated
package ``classifier``. Once a new hypernetwork interface has been designed,
all hypernets (including this one) will be moved to the subpackage
``hnets``.
"""
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from utils.self_attention_layer import SelfAttnLayerV2
from mnets.mnet_interface import MainNetInterface
from toy_example.hyper_model import HyperNetwork
from utils.module_wrappers import CLHyperNetInterface
from utils.misc import init_params
class SAHnetPart(nn.Module, CLHyperNetInterface):
"""The goal of the network is to produce a chunk of the weights that are
used in a main network. Therefore, the network expects an embedding as
input (additional to the actual hypernet input), which will encode the
chunk of weights of the main network that will be generated by this
network.
This is a convolutional network, employing transpose convolutions. The
network structure is inspired by the DCGAN generator structure, though,
we are additionally using self-attention layers to model global
dependencies.
In general, each transpose convolutional layer will roughly double its
input size. Though, we set the hard constraint that if the input size of
a transpose convolutional layer is smaller than 4, it doesn't change
the size.
Args:
out_size: A tuple of (width, height), denoting the output shape of
the weights generated by this hypernet. The number of output
channels is assumed to be 1, except if specified otherwise via
(width, height, channels).
num_layers: The number of transpose convolutional layers including
the initial fully-connected layer.
num_filters (optional): List of integers of length num_layers-1.
The number of output channels in each hidden transpose conv
layer. By default, the number of filters in the last hidden
layer will be 128 and doubled in every prior layer. Note, the
output of the first layer (which is fully-connected) is here
considered to be in the shape of an image tensor.
kernel_size (optional): A single number, a tuple ``(k_x, k_y)`` or
a list of scalars/tuples of length ``num_layers-1``. Specifying the
kernel size in each convolutional layer.
sa_units: List of integers, each representing the index of a layer
in this network after which a self-attention unit should be
inserted. For instance, index 0 represents the
fully-connected layer. The last layer may not be chosen.
input_dim: The dimensionality of the input vectors (comprising
both: chunk embedding and actual hypernet input).
use_batch_norm: If ``True``, batchnorm will be applied to all hidden
layers.
use_spectral_norm: Enable spectral normalization for all layers.
no_theta: If set to ``True``, no trainable parameters ``theta`` will be
constructed, i.e., weights are assumed to be produced ad-hoc
by a hypernetwork and passed to the forward function.
Does not affect task embeddings.
init_theta (optional): This option is for convenience reasons.
The option expects a list of parameter values that are used to
initialize the network weights. As such, it provides a
convenient way of initializing a network with, for instance, a
weight draw produced by the hypernetwork.
The given data has to be in the same shape as the attribute
``theta`` if the network would be constructed with ``theta``.
Does not affect task embeddings.
"""
def __init__(self, out_size, num_layers, num_filters, kernel_size, sa_units,
input_dim, use_batch_norm, use_spectral_norm, no_theta,
init_theta):
# FIXME find a way using super to handle multiple inheritance.
#super(SAHnetPart, self).__init__()
nn.Module.__init__(self)
CLHyperNetInterface.__init__(self)
assert(init_theta is None or not no_theta)
if use_spectral_norm:
raise NotImplementedError('Spectral normalization not yet ' +
'implemented for this hypernetwork type.')
if use_batch_norm:
raise NotImplementedError('Batch normalization not yet ' +
'implemented for this hypernetwork type.')
# FIXME task embeddings are currently maintained outside of this class.
self._target_shapes = out_size
self._task_embs = None
self._size_ext_input = input_dim
self._num_outputs = np.prod(out_size)
if sa_units is None:
sa_units = []
self._sa_units_inds = sa_units
self._use_batch_norm = use_batch_norm
assert(num_layers > 0) # Initial fully-connected layer must exist.
assert(num_filters is None or len(num_filters) == num_layers-1)
assert(len(out_size) == 2 or len(out_size) == 3)
#assert(num_layers-1 not in sa_units)
assert(len(sa_units) == 0 or np.max(sa_units) < num_layers-1)
out_channels = 1 if len(out_size) == 2 else out_size[2]
if num_filters is None:
num_filters = [128] * (num_layers-1)
multipliers = np.power(2, range(num_layers-2, -1, -1)).tolist()
num_filters = [e1 * e2 for e1, e2 in zip(num_filters, multipliers)]
num_filters.append(out_channels)
if kernel_size is None:
kernel_size = 5
if not isinstance(kernel_size, list):
kernel_size = [kernel_size, kernel_size]
if len(kernel_size) == 2:
kernel_size = [kernel_size] * (num_layers-1)
else:
for i, tup in enumerate(kernel_size):
if not isinstance(tup, list):
kernel_size[i] = [tup, tup]
print('Building a self-attention generator with %d layers and an ' % \
(num_layers) + 'output shape of %s.' % str(out_size))
### Compute strides and pads of all transpose conv layers.
# Keep in mind the formula:
# W_o = S * (W_i - 1) - 2 * P + K + P_o
# S - Strides
# P - Padding
# P_o - Output padding
# K - Kernel size
strides = [[2, 2] for _ in range(num_layers-1)]
pads = [[0, 0] for _ in range(num_layers-1)]
out_pads = [[0, 0] for _ in range(num_layers-1)]
# Layer sizes.
sizes = [[out_size[0], out_size[1]]] * (num_layers-1)
w = out_size[0]
h = out_size[1]
def compute_pads(w, k, s):
"""Compute paddings. Given the equation
W_o = S * (W_i - 1) - 2 * P + K + P_o
Paddings and output paddings are chosen such that it holds:
W_o = S * W_i
Args:
w: Size of output dimension.
k: Kernel size.
s: Stride.
Returns:
Padding, output padding.
"""
offset = s
if s == 2 and (w % 2) == 1:
offset = 3
if ((k-offset) % 2) == 0:
p = (k-offset) // 2
p_out = 0
else:
p = int(np.ceil((k-offset) / 2))
p_out = -(k - offset - 2*p)
return p, p_out
for i in range(num_layers-2, -1, -1):
sizes[i] = [w, h]
# This is a condition we set.
# If one of the sizes is too small, we just keep the layer size.
if w <= 4:
strides[i][0] = 1
if h <= 4:
strides[i][1] = 1
pads[i][0], out_pads[i][0] = compute_pads(w, kernel_size[i][0],
strides[i][0])
pads[i][1], out_pads[i][1] = compute_pads(h, kernel_size[i][1],
strides[i][1])
w = w if strides[i][0] == 1 else w // 2
h = h if strides[i][1] == 1 else h // 2
self._fc_out_shape = [num_filters[0], w, h]
if num_layers > 1:
num_filters = num_filters[1:]
# Just a sanity check.
for i, s in enumerate(strides):
w = s[0] * (w-1) + kernel_size[i][0] - 2*pads[i][0] + out_pads[i][0]
h = s[1] * (h-1) + kernel_size[i][1] - 2*pads[i][1] + out_pads[i][1]
assert(w == out_size[0] and h == out_size[1])
# For shapes of self-maintained parameters (underlying modules, like
# self-attention layers, maintain their own weights).
theta_shapes_internal = []
if no_theta:
self._theta = None
else:
self._theta = nn.ParameterList()
if init_theta is not None and len(sa_units) > 0:
num_p = 7 # Number of param tensors per self-attention layer.
num_sa_p = len(sa_units) * num_p
sind = len(init_theta)-num_sa_p
sa_init_weights = []
for i in range(len(sa_units)):
sa_init_weights.append( \
init_theta[sind+i*num_p:sind+(i+1)*num_p])
init_theta = init_theta[:sind]
### Initial fully-connected layer.
num_units = np.prod(self._fc_out_shape)
theta_shapes_internal.extend([[num_units, input_dim], [num_units]])
print('The output shape of the fully-connected layer will be %s' %
(str(self._fc_out_shape)))
### Transpose Convolutional Layers.
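The layer sizing above relies on the transpose-convolution relation W_o = S * (W_i - 1) - 2*P + K + P_o. A self-contained sketch of the padding rule from `compute_pads` (rewritten here with `math` instead of `numpy`), together with the sanity check the constructor performs:

```python
import math

def compute_pads(w, k, s):
    # Choose padding p and output padding p_out so that a transpose conv
    # with stride s and kernel k produces output size w from input size
    # w // 2 (or w itself when s == 1), per W_o = S*(W_i - 1) - 2*P + K + P_o.
    offset = s
    if s == 2 and (w % 2) == 1:
        offset = 3
    if (k - offset) % 2 == 0:
        p = (k - offset) // 2
        p_out = 0
    else:
        p = math.ceil((k - offset) / 2)
        p_out = -(k - offset - 2 * p)
    return p, p_out

# Sanity check, mirroring the loop in the constructor above.
for w, k, s in [(8, 5, 2), (7, 5, 2), (4, 3, 1), (9, 4, 2)]:
    p, p_out = compute_pads(w, k, s)
    w_in = w if s == 1 else w // 2
    assert s * (w_in - 1) - 2 * p + k + p_out == w
```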
| |
# Repository: davidharvey1986/rrg
"""
Table element classes for input
@author: <NAME>, <NAME>
@organization: Space Telescope - European Coordinating Facility (ST-ECF)
@license: Gnu Public Licence
@contact: <EMAIL>
@since: 2005/09/13
$LastChangedBy: mkuemmel $
$LastChangedDate: 2008-07-03 10:27:47 +0200 (Thu, 03 Jul 2008) $
$HeadURL: http://astropy.scipy.org/svn/astrolib/trunk/asciidata/Lib/asciielement.py $
"""
__version__ = "Version 1.0 $LastChangedRevision: 503 $"
import string, sys, os, types
from .asciierror import *
class Element(object):
"""
Class to analyze a data element. The data element is
given as a string, and the class methods find the type
of the element. The element type differs from
string if the string can be transformed to another type
(e.g. "1.0" --> 1.0).
"""
def __init__(self, item):
"""
Constructor for the Element class.
@param item: the element to be analyzed
@type item: string/integer/float
"""
self._item = item
self._type = self._find_type(item)
def get_value(self):
"""
Returns the elements value as its type.
@return: the element as string
@rtype: string/integer/float
"""
return self._item
def get_type(self):
"""
Returns the element type.
@return: the element type
@rtype: <types> name
"""
return self._type
def _find_type(self, item):
"""
Finds the proper type of an element.
@param item: the item to analyze
@type item: string/integer/float
@return: the element type
@rtype: <types> name
"""
# check for int
if self._isint(item):
return int
# check for float
elif self._isfloat(item):
return float
# the default is string
else:
return bytes
def _isint(self, item):
"""
Checks whether an element is of type integer.
@param item: the element to check
@type item: any type
@return: 1/0
@rtype: integer
"""
try:
int(item)
except:
return 0
return 1
def _isfloat(self, item):
"""
Checks whether an element is of type float.
@param item: the element to check
@type item: any type
@return: 1/0
@rtype: float
"""
try:
float(item)
except:
return 0
return 1
class ValElement(Element):
"""
Derived class from the Element class. In addition
this class fills attributes with the element value
in its proper type.
"""
def __init__(self, item):
"""
Constructor for the ValElement class.
@param item: the element to be analyzed
@type item: string/integer/float
"""
# check whether it is a string
if isinstance(item, type("a")):
# if yes, initialize it in the super class
super(ValElement, self).__init__(item)
# get the typed value
self._tvalue = self._get_tvalue(item, self.get_type())
else:
# if no string, determine the type.
# store the typed value and the
# value as string
self._item = str(item)
self._type = type(item)
self._tvalue = item
def get_tvalue(self):
"""
Returns the elements value as its type.
@return: the element value
@rtype: string/integer/float
"""
return self._tvalue
def set_tvalue(self, tvalue):
"""
Sets the typed value
@param tvalue: the element to transform
@type tvalue: string
"""
self._tvalue = tvalue
def _get_tvalue(self, item, type):
"""
Transforms and returns the typed value.
For a string element with a type different from
string, the string is transformed into this type
(e.g. " 1", int ----> 1).
@param item: the element to transform
@type item: string
@param type: the type to transform into
@type type: <types> name
@return: the typed element value
@rtype: string/integer/float
"""
if type == int:
return int(item)
elif type == float:
return float(item)
else:
return self._item
class ForElement(ValElement):
"""
Derived class from the ValElement class. In addition
this class fills attributes with the proper format of
the element.
"""
def __init__(self, item):
"""
Constructor for the ForElement class.
@param item: the element to be analyzed
@type item: string/integer/float
"""
# if yes, initialize it in the super class
super(ForElement, self).__init__(item)
# check whether it is a string
if isinstance(item, type("a")):
# get the format
self._fvalue = self._get_fvalue(self.get_type())
else:
# get the default format
self._fvalue = self._get_fdefaults(self.get_type())
def get_fvalue(self):
"""
Returns the element format.
@return: the element value
@rtype: string/integer/float
"""
return self._fvalue
def _get_fvalue(self, type):
"""
Determines and returns the element format.
The proper format for the element is derived from
it string representation. This string representation
originates directly from the input data.
@param type: the type to transform into
@type type: <types> name
@return: the format string
@rtype: [string]
"""
# check for the data type
if type == int:
# get the length of the stripped string version
svalue = self._item.strip()
flength = len(svalue)
# correct for a sign in the stripped string
if self._tvalue < 0 or svalue[0]=='+':
flength -= 1
# there should always be five digits
if flength < 5:
# return the minimum format
return ['%5i','%5s']
else:
# return the format
return ['% '+str(flength)+'i','%'+str(flength+1)+'s']
elif type == float:
# store the stripped string
svalue = self._item.strip()
# check for an exponent
epos = svalue.find( 'E')
if epos < 0:
epos = svalue.find( 'e')
# get the floating point format
if epos > -1:
# compute the accuracy, that is, the number
# of digits after '.', taking into account
# a possible sign
if self._tvalue < 0.0 or svalue[0] == '+':
accuracy = epos-3
else:
accuracy = epos-2
# check whether there is a '.'
if svalue.find( '.') < 0:
# correct for missing dot
accuracy += 1
# just for security:
if accuracy < 0:
accuracy = 0
# compute the total length
tlength = accuracy+6
# return the format
return ['% '+str(tlength)+'.'+str(accuracy)+'e', \
'%'+str(tlength+1)+'s']
# get the fixed point format
else:
# find the position of the '.' and the total length
dpos = svalue.find( '.')
tlength = len(svalue)
# compute the accuracy, that is, the number
# of digits after '.'
accuracy = tlength-dpos-1
# correct the length for possible signs
if self._tvalue < 0.0 or svalue[0] == '+':
tlength -=1
# return the format
return ['% '+str(tlength)+'.'+str(accuracy)+'f', \
'%'+str(tlength+1)+'s']
else:
# default format for strings
flength = str(len(self._item))
return ['% '+flength+'s', '%'+flength+'s']
def _get_fdefaults(self, type):
"""
Determines and returns the default format
@param type: the type to find the format for
@type type: <types> name
@return: the list of format strings
@rtype: [string]
"""
if type == int:
# default format for integers
return ['%5i','%5s']
elif type == float:
# default format for floats
return ['% 12.6e', '%13s']
else:
# default format for strings
flength = str(len(self._item))
return ['% '+flength+'s', '%'+flength+'s']
class TypeTransformator(object):
"""
The class contains all rules about the transformation
of the different possible column types. It determines
whether a transformation is possible or not. It also
performs the transformation on elements.
"""
def __init__(self, orig_type, new_type):
"""
Constructor for the TypeTransformator class.
@param orig_type: the element to be analyzed
@type orig_type: <types>-name
@param new_type: the element to be analyzed
@type new_type: <types>-name
"""
self.istransf = self._analyze_types(orig_type, new_type)
if self.istransf:
self.higher_type = orig_type
else:
self._check_type(new_type)
self.higher_type = new_type
def _analyze_types(self, orig_type, new_type):
"""
Analyzes two types for transformability
The method analyzes whether a new type can be
transformed into the original type. An integer
is returned which gives the result.
@param orig_type: the element to be analyzed
@type orig_type: <types>-name
@param new_type: the element to be analyzed
@type new_type: <types>-name
@return: booleans to show transformability
@rtype: integer
"""
# initialize the return value
istransf = 0
# check whether the original type is string
if orig_type == bytes:
# everything can be transformed to a string
istransf = 1
# check whether the original type is float
elif orig_type == float:
# integers can be transformed to float
if new_type == int:
istransf = 1
# check whether the original type is integer
elif orig_type == int:
# NOTHING can be transformed to an integer
istransf = 0
else:
raise ColTypeError('Column type: "'+str(orig_type)+'" is not a valid column type!')
return istransf
def _check_type(self, in_type):
"""
Checks the validity of a type
Only a limited number of types are admitted. The method
checks whether a certain type is valid or not.
An exception is thrown for invalid types.
@param in_type: the element to be checked
@type in_type: <types>-name
"""
# check whether the type is | |
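The int → float → string cascade in `_find_type` above can be condensed as follows (note the original returns `bytes` for the string case, an apparent Python 2 leftover; this sketch uses `str`):

```python
def find_type(item):
    # Try the most specific type first, falling back to string.
    for caster, result_type in ((int, int), (float, float)):
        try:
            caster(item)
            return result_type
        except (TypeError, ValueError):
            continue
    return str
```

For example, `find_type("42")` is `int`, `find_type("1.0e3")` is `float`, and `find_type("hello")` is `str`.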
offset
# Execute requests
response_json, status_code = utils.request(
session=self,
request_type=const.REQUEST_GET,
endpoint=Endpoints.SEARCH,
uri_params=uri_params
)
if status_code != 200:
raise utils.SpotifyError(status_code, response_json)
# Extract data per search type
# TODO: test what happens if unnecessary types are specified for
# the given offsets against live api
for curr_type in remaining_types:
api_result_type = map_args_to_api_result[curr_type]
items = response_json[api_result_type]['items']
# Add items to accumulator
for item in items:
if curr_type is map_args_to_api_call[const.ALBUMS]:
result.get(curr_type).append(Album(self, item))
elif curr_type is map_args_to_api_call[const.ARTISTS]:
result.get(curr_type).append(Artist(self, item))
elif curr_type is map_args_to_api_call[const.PLAYLISTS]:
result.get(curr_type).append(Playlist(self, item))
elif curr_type is map_args_to_api_call[const.TRACKS]:
result.get(curr_type).append(Track(self, item))
else:
# Should never reach here, but here for safety!
raise ValueError('Invalid type when building search')
# Only make necessary search queries
new_remaining_types = list()
for curr_type in remaining_types:
api_result_type = map_args_to_api_result[curr_type]
if response_json[api_result_type]['next'] is not None:
new_remaining_types.append(curr_type)
remaining_types = new_remaining_types
return self.SearchResult(result)
def get_albums(self,
album_ids,
market=const.TOKEN_REGION):
""" Gets the albums with the given Spotify ids.
Args:
album_ids (str, List[str]): The Spotify album id(s) to get.
market (str): a :term:`market code <Market>` or sp.TOKEN_REGION,
used for :term:`track relinking <Track Relinking>`. If None, no
market is passed to Spotify's Web API, and its default behavior
is invoked.
Returns:
Union[Album, List[Album]]: The requested album(s).
Raises:
TypeError: for invalid types in any argument.
ValueError: if market type is invalid. TODO
HTTPError: if failure or partial failure.
Calls endpoints:
- GET /v1/albums
Note: the following endpoint is not used.
- GET /v1/albums/{id}
"""
# Type/Argument validation
if not isinstance(album_ids, str) and\
not all(isinstance(x, str) for x in album_ids):
raise TypeError('album_ids should be str or list of str')
if market is None:
raise ValueError('market is a required argument')
if not isinstance(market, str):
raise TypeError('market should be str')
if isinstance(album_ids, str):
album_ids = [album_ids]
# Construct params for API call
endpoint = Endpoints.SEARCH_ALBUMS
uri_params = dict()
if market is not None:
uri_params['market'] = market
# A maximum 20 albums can be returned per API call
batches = utils.create_batches(album_ids, 20)
result = list()
for batch in batches:
uri_params['ids'] = ','.join(batch)
# Execute requests
response_json, status_code = utils.request(
session=self,
request_type=const.REQUEST_GET,
endpoint=endpoint,
uri_params=uri_params
)
if status_code != 200:
raise utils.SpotifyError(status_code, response_json)
items = response_json['albums']
for item in items:
result.append(Album(self, item))
return result if len(result) != 1 else result[0]
def get_artists(self, artist_ids):
""" Gets the artists with the given Spotify ids.
Args:
artist_ids (str, List[str): The Spotify artist id(s) to get.
Returns:
Union[Album, List[Album]]: The requested artist(s).
Raises:
TypeError: for invalid types in any argument.
HTTPError: if failure or partial failure.
Calls endpoints:
- GET /v1/artists
Note: the following endpoint is not used.
- GET /v1/artists/{id}
"""
# Type validation
if not isinstance(artist_ids, str) and\
not all(isinstance(x, str) for x in artist_ids):
raise TypeError('artist_ids should be str or list of str')
if isinstance(artist_ids, str):
artist_ids = [artist_ids]
# Construct params for API call
endpoint = Endpoints.SEARCH_ARTISTS
uri_params = dict()
# A maximum of 50 artists can be returned per API call
batches = utils.create_batches(artist_ids, 50)
result = list()
for batch in batches:
uri_params['ids'] = ','.join(batch)
# Execute requests
response_json, status_code = utils.request(
session=self,
request_type=const.REQUEST_GET,
endpoint=endpoint,
uri_params=uri_params
)
if status_code != 200:
raise utils.SpotifyError(status_code, response_json)
items = response_json['artists']
for item in items:
result.append(Artist(self, item))
return result if len(result) != 1 else result[0]
def get_tracks(self,
track_ids,
market=const.TOKEN_REGION):
""" Gets the tracks with the given Spotify ids.
Args:
track_ids (str, List[str]): The Spotify track id(s) to get.
market (str): a :term:`market code <Market>` or sp.TOKEN_REGION,
used for :term:`track relinking <Track Relinking>`. If None, no
market is passed to Spotify's Web API, and its default behavior
is invoked.
Returns:
Union[Track, List[Track]]: The requested track(s).
Raises:
TypeError: for invalid types in any argument.
ValueError: if market type is invalid. TODO
HTTPError: if failure or partial failure.
Calls endpoints:
- GET /v1/tracks
Note: the following endpoint is not used.
- GET /v1/tracks/{id}
"""
# Type validation
if not isinstance(track_ids, str) and\
not all(isinstance(x, str) for x in track_ids):
raise TypeError('track_ids should be str or list of str')
if market is not None and not isinstance(market, str):
raise TypeError('market should be None or str')
# Argument validation
if isinstance(track_ids, str):
track_ids = [track_ids]
# Construct params for API call
endpoint = Endpoints.SEARCH_TRACKS
uri_params = dict()
if market is not None:
uri_params['market'] = market
# A maximum of 50 tracks can be returned per API call
batches = utils.create_batches(track_ids, 50)
result = list()
for batch in batches:
uri_params['ids'] = ','.join(batch)
# Execute requests
response_json, status_code = utils.request(
session=self,
request_type=const.REQUEST_GET,
endpoint=endpoint,
uri_params=uri_params
)
if status_code != 200:
raise utils.SpotifyError(status_code, response_json)
items = response_json['tracks']
for item in items:
result.append(Track(self, item))
return result if len(result) != 1 else result[0]
# TODO: what the heck are fields?
def get_playlists(self,
playlist_ids,
fields=None,
market=const.TOKEN_REGION):
""" Gets the playlist(s) with the given Spotify ids.
Args:
playlist_ids (str, List[str]): The Spotify playlist ids to get.
fields (str): filters for the query: a comma-separated list of the
fields to return. If omitted, all fields are returned. A dot
separator can be used to specify non-reoccurring fields, while
parentheses can be used to specify reoccurring fields within
objects. Use multiple parentheses to drill down into nested
objects. Fields can be excluded by prefixing them with an
exclamation mark.
market (str): a :term:`market code <Market>` or sp.TOKEN_REGION,
used for :term:`track relinking <Track Relinking>`. If None, no
market is passed to Spotify's Web API, and its default behavior
is invoked.
Returns:
Union[Playlist, List[Playlist]]: The requested playlist(s)
Raises:
TypeError: for invalid types in any argument.
ValueError: if market type is invalid. TODO
HTTPError: if failure or partial failure.
Calls endpoints:
- GET /v1/playlists/{playlist_id}
"""
# Note: additional_types is also a valid request param - it
# has been deprecated and therefore is removed from the API wrapper.
# Type/Argument validation
if not isinstance(playlist_ids, str) and\
not all(isinstance(x, str) for x in playlist_ids):
raise TypeError('playlist_ids should be str or list of str')
if fields is not None and not isinstance(fields, str):
raise TypeError('fields should be None or str')
if not isinstance(market, str):
raise TypeError('market should be str')
if isinstance(playlist_ids, str):
playlist_ids = [playlist_ids]
# Construct params for API call
uri_params = dict()
uri_params['market'] = market
if fields is not None:
uri_params['fields'] = fields
# Each API call can return at most 1 playlist. Therefore there is no
# need to batch this query.
result = list()
for playlist_id in playlist_ids:
endpoint = Endpoints.SEARCH_PLAYLIST % playlist_id
# Execute requests
response_json, status_code = utils.request(
session=self,
request_type=const.REQUEST_GET,
endpoint=endpoint,
uri_params=uri_params
)
if status_code != 200:
raise utils.SpotifyError(status_code, response_json)
result.append(Playlist(self, response_json))
return result if len(result) != 1 else result[0]
def get_users(self, user_ids):
""" Gets the users with the given Spotify ids.
Args:
user_ids (str, List[str]): The Spotify user id(s) to get.
Returns:
Union[User, List[User]]: The requested user(s).
Raises:
TypeError: for invalid types in any argument.
HTTPError: if failure or partial failure.
Calls endpoints:
- GET /v1/users/{user_id}
"""
# Type validation
if not isinstance(user_ids, str) and\
not all(isinstance(x, str) for x in user_ids):
raise TypeError('user_ids should be str or list of str')
if isinstance(user_ids, str):
user_ids = [user_ids]
# Construct params for API call
uri_params = dict()
# Each API call can return at most 1 user. Therefore there is no need
# to batch this query.
result = list()
for user_id in user_ids:
# Execute requests
# TODO: Partial failure - if user with user_id does not exist,
# status_code is 404
response_json, status_code = utils.request(
session=self,
request_type=const.REQUEST_GET,
endpoint=Endpoints.SEARCH_USER % user_id,
uri_params=uri_params
)
if status_code != 200:
raise utils.SpotifyError(status_code, response_json)
result.append(User(self, response_json))
return result if len(result) != 1 else result[0]
def current_user(self):
"""
Returns:
User: The user associated with the current Spotify API token.
Raises:
ValueError: if the Spotify API key is not valid.
ValueError: if the response is empty.
HTTPError: if failure or partial failure.
Calls endpoints:
- GET /v1/me
"""
# Construct params for API call
endpoint = Endpoints.SEARCH_CURRENT_USER
# Execute requests
response_json, status_code = utils.request(
session=self,
| |
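The endpoints above share one batching pattern: split the ids to fit the per-call limit, then comma-join each batch. A plausible stand-in for `utils.create_batches` (the real helper's implementation is assumed, not shown in this excerpt):

```python
def create_batches(ids, batch_size):
    # Split a list of ids into consecutive chunks of at most batch_size.
    return [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]

ids = [str(i) for i in range(45)]
batches = create_batches(ids, 20)  # e.g. 20 albums per API call
assert [len(b) for b in batches] == [20, 20, 5]
assert ','.join(batches[2]) == '40,41,42,43,44'
```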
# File: eosfactory/core/cleos.py
import subprocess
import json
import pathlib
import random
import os
import re
import eosfactory.core.errors as errors
import eosfactory.core.logger as logger
import eosfactory.core.config as config
import eosfactory.core.setup as setup
import eosfactory.core.interface as interface
def set_local_nodeos_address_if_none():
if not setup.nodeos_address():
setup.set_nodeos_address(
"http://" + config.http_server_address())
setup.is_local_address = True
return setup.is_local_address
# http://www.sphinx-doc.org/domains.html#info-field-lists
class Cleos():
'''A prototype for *EOSIO cleos* commands.
Calls *EOSIO cleos* and processes the response.
Args:
args (list): List of *EOSIO cleos* positionals and options.
command_group (str): Command group name.
command (str): Command name.
Attributes:
out_msg (str): Response received via the stdout stream.
out_msg_details (str): Response received via the stderr stream.
err_msg (str): Error message received via the stderr stream.
json (dict): Response parsed as JSON, if any.
is_verbose (bool): If set, a message is printed.
args (list): Value of the *args* argument.
Raises:
.core.errors.Error: If err_msg.
'''
def __init__(self, args, command_group, command, is_verbose=True):
self.out_msg = None
self.out_msg_details = None
self.err_msg = None
self.json = {}
self.is_verbose = is_verbose
self.args = args
cl = [config.cli_exe()]
set_local_nodeos_address_if_none()
cl.extend(["--url", setup.nodeos_address()])
if setup.is_print_request:
cl.append("--print-request")
if setup.is_print_response:
cl.append("--print-response")
cl.append(command_group)
cl.extend(re.sub(re.compile(r'\s+'), ' ', command.strip()).split(" "))
cl.extend(args)
if setup.is_save_command_lines:
setup.add_to__command_line_file(" ".join(cl))
if setup.is_print_command_lines:
print("######## command line sent to cleos:")
print(" ".join(cl))
print("")
while True:
process = subprocess.run(
cl,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
cwd=str(pathlib.Path(config.cli_exe()).parent))
self.out_msg = process.stdout.decode("ISO-8859-1")
self.out_msg_details = process.stderr.decode("ISO-8859-1")
self.err_msg = None
error_key_words = ["ERROR", "Error", "error", "Failed"]
for word in error_key_words:
if word in self.out_msg_details:
self.err_msg = self.out_msg_details
self.out_msg_details = None
break
# Retry only when the node reports "Transaction took too long".
if not (self.err_msg and "Transaction took too long" in self.err_msg):
break
errors.validate(self)
if not self.err_msg \
and (setup.is_print_request or setup.is_print_response):
print("######## cleos request and response:")
print(self.out_msg_details)
try:
self.json = json.loads(self.out_msg)
except (ValueError, TypeError):
pass
try:
self.json = json.loads(self.out_msg_details)
except (ValueError, TypeError):
pass
def printself(self, is_verbose=False):
'''Print a message.
Args:
is_verbose (bool): If set, a message is printed.
'''
if not hasattr(self, "is_verbose"):
self.is_verbose = is_verbose
if self.is_verbose:
logger.OUT(self.__str__())
def __str__(self):
if self.err_msg:
return self.err_msg
else:
out = self.out_msg
if self.out_msg_details:
out = out + self.out_msg_details
return out
def __repr__(self):
return ""
def common_parameters(
permission=None,
expiration_sec=None,
skip_sign=False, dont_broadcast=False, force_unique=False,
max_cpu_usage=0, max_net_usage=0,
ref_block=None,
delay_sec=0
):
'''Common parameters.
Args:
permission (.interface.Account or str or (str, str) or \
(.interface.Account, str) or any list of the previous items.):
An account and permission level to authorize.
Exemplary values of the argument *permission*::
eosio # eosio is interface.Account object
"eosio@owner"
("eosio", "owner")
(eosio, interface.Permission.ACTIVE)
["eosio@owner", (eosio, .Permission.ACTIVE)]
Args:
expiration_sec (int): The time in seconds before a transaction expires,
defaults to 30s.
skip_sign (bool): Specify if unlocked wallet keys should be used to
sign transaction.
dont_broadcast (bool): Don't broadcast transaction to the network
(just print).
force_unique (bool): Force the transaction to be unique. This will
consume extra bandwidth and remove any protections against
accidentally issuing the same transaction multiple times.
max_cpu_usage (int): Upper limit on the milliseconds of cpu usage budget,
for the execution of the transaction (defaults to 0 which means no limit).
max_net_usage (int): Upper limit on the net usage budget, in bytes, for the
transaction (defaults to 0 which means no limit).
ref_block (int): The reference block num or block id used for TAPOS
(Transaction as Proof-of-Stake).
delay_sec (int): The delay in seconds, defaults to 0s.
'''
pass
class GetAccount(interface.Account, Cleos):
'''Retrieve an account from the blockchain.
Args:
account (str or .interface.Account): The account to retrieve.
is_verbose (bool): If *False* do not print. Default is *True*.
Attributes:
name (str): The EOSIO name of the account.
owner_key (str): The *owner* public key.
active_key (str): The *active* public key.
'''
def __init__(self, account, is_info=True, is_verbose=True):
interface.Account.__init__(self, interface.account_arg(account))
Cleos.__init__(
self,
[self.name] if is_info else [self.name, "--json"],
"get", "account", is_verbose)
self.owner_key = None
self.active_key = None
try:
if not is_info:
permissions = self.json["permissions"]
for permission in permissions:
if permission["required_auth"]["keys"]:
key = permission["required_auth"]["keys"][0]["key"]
if permission["perm_name"] == "owner":
self.owner_key = key
if permission["perm_name"] == "active":
self.active_key = key
else:
owner = re.search(r'owner\s+1\:\s+1\s(.*)\n', self.out_msg)
active = re.search(r'active\s+1\:\s+1\s(.*)\n', self.out_msg)
if owner and active:
self.owner_key = owner.group(1)
self.active_key = active.group(1)
except (KeyError, IndexError, TypeError):
pass
self.printself()
def __str__(self):
return "name: {}\n".format(self.name) + str(Cleos.__str__(self))
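As a hedged illustration of the two regular expressions used above, here is how they behave on an invented sample of `cleos get account` text output (the layout and the keys shown are hypothetical; real cleos formatting may differ between versions):

```python
import re

# Invented sample of `cleos get account` plain-text output.
sample = (
    "permissions:\n"
    "     owner     1:    1 EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV\n"
    "        active     1:    1 EOS8Znrtgwt8TfpmbVpTKvA2oB8Nqey625CLN8bCN3TEbgx86Dsvr\n"
)
# Same patterns as GetAccount.__init__ above.
owner = re.search(r'owner\s+1\:\s+1\s(.*)\n', sample)
active = re.search(r'active\s+1\:\s+1\s(.*)\n', sample)
assert owner and owner.group(1).startswith("EOS6")
assert active and active.group(1).startswith("EOS8")
```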
class GetTransaction(Cleos):
'''Retrieve a transaction from the blockchain.
Args:
transaction_id (str): ID of the transaction to retrieve.
block_hint (int): If set, the block number this transaction may be in.
is_verbose (bool): If *False* do not print. Default is *True*.
Attributes:
transaction_id: The ID of the transaction retrieved.
json: The transaction retrieved.
'''
def __init__(self, transaction_id, block_hint=None, is_verbose=True):
self.transaction_id = transaction_id
args = [transaction_id]
if block_hint:
args.extend(["--block-hint", str(block_hint)])
Cleos.__init__(
self, args, "get", "transaction", is_verbose)
self.printself()
class WalletCreate(interface.Wallet, Cleos):
'''Create a new wallet locally.
If the *password* argument is set, try to open a wallet. Otherwise, create
a new wallet.
Args:
name (str): The name of the wallet, defaults to *default*.
password (str): The password to the wallet, if the wallet exists.
is_verbose (bool): If *False* do not print. Default is *True*.
Attributes:
name (str): The name of the wallet.
password (str): The password returned by the *wallet create*
EOSIO cleos command.
is_created (bool): True if the wallet was created.
'''
def __init__(self, name="default", password="", is_verbose=True):
interface.Wallet.__init__(self, name)
self.password = None
if not password: # try to create a wallet
Cleos.__init__(
self, ["--name", self.name, "--to-console"],
"wallet", "create", is_verbose)
self.json["name"] = name
msg = self.out_msg
self.password = msg[msg.find("\"")+1:msg.rfind("\"")]
self.json["password"] = self.password
self.is_created = True
else: # try to open an existing wallet
WalletOpen(name, is_verbose=False)
wallet_unlock = WalletUnlock(name, password, is_verbose=False)
self.json = {}
self.name = name
self.password = password
self.is_created = False
self.json["name"] = name
self.json["password"] = password
self.out_msg = "Restored wallet: {}".format(self.name)
self.printself(is_verbose)
class WalletStop(Cleos):
'''Stop keosd, the EOSIO wallet manager.
Args:
is_verbose (bool): If *False* do not print. Default is *True*.
'''
def __init__(self, is_verbose=True):
Cleos.__init__(self, [], "wallet", "stop", is_verbose)
self.printself()
class WalletList(Cleos):
'''List opened wallets, * marks unlocked.
Args:
is_verbose (bool): If *False* do not print. Default is *True*.
Attributes:
json: The list of the open wallets.
'''
def __init__(self, is_verbose=True):
Cleos.__init__(
self, [], "wallet", "list", is_verbose)
self.json = json.loads("{" + self.out_msg.replace("Wallets", \
'"Wallets"', 1) + "}")
self.printself()
class WalletImport(Cleos):
'''Import a private key into wallet.
Args:
wallet (str or .interface.Wallet): A wallet to import key into.
key (str or .interface.Key): A key to import.
is_verbose (bool): If *False* do not print. Default is *True*.
Attributes:
key_private (str): The private key imported.
'''
def __init__(self, key, wallet="default", is_verbose=True):
key_private = interface.key_arg(
key, is_owner_key=True, is_private_key=True)
Cleos.__init__(
self,
["--private-key", key_private, "--name",
interface.wallet_arg(wallet)],
"wallet", "import", is_verbose)
self.json["key_private"] = key_private
self.key_private = key_private
self.printself()
class WalletRemove_key(Cleos):
'''Remove key from wallet
Args:
wallet (str or .interface.Wallet): The wallet to remove key from.
password (str): The password returned by wallet create.
key (str or .interface.Key): A public key to remove.
is_verbose (bool): If *False* do not print. Default is *True*.
Attributes:
key_public (str): The public key removed.
'''
def __init__(self, key, wallet, password, is_verbose=True):
key_public = interface.key_arg(
key, is_owner_key=True, is_private_key=False)
Cleos.__init__(
self,
[key_public, "--name", interface.wallet_arg(wallet),
"--password", password],
"wallet", "remove_key", is_verbose)
self.json["key_public"] = key_public
self.key_public = key_public
self.printself()
class WalletKeys(Cleos):
'''List of public keys from all unlocked wallets.
Args:
is_verbose (bool): If *False* do not print. Default is *True*.
'''
def __init__(self, is_verbose=True):
Cleos.__init__(
self, [], "wallet", "keys", is_verbose)
self.printself()
def __str__(self):
out = "Keys in all opened wallets:\n"
out = out + str(Cleos.__str__(self))
return out
class WalletOpen(Cleos):
'''Open an existing wallet.
Args:
wallet (str or .interface.Wallet): The wallet to open.
is_verbose (bool): If *False* do not print. Default is *True*.
'''
def __init__(self, wallet="default", is_verbose=True):
Cleos.__init__(
self, ["--name", interface.wallet_arg(wallet)],
"wallet", "open", is_verbose)
self.printself()
class WalletLockAll(Cleos):
'''Lock all unlocked wallets.
Args:
is_verbose (bool): If *False* do not print. Default is *True*.
'''
def __init__(self, is_verbose=True):
Cleos.__init__(
self, [], "wallet", "lock_all", is_verbose)
self.printself()
class WalletLock(Cleos):
'''Lock wallet.
Args:
wallet (str or .interface.Wallet): The wallet to open.
is_verbose (bool): If *False* do not print. Default is *True*.
'''
def __init__(self, wallet="default", is_verbose=True):
Cleos.__init__(
self, ["--name", interface.wallet_arg(wallet)],
"wallet", "lock", | |
# Copyright 2018 Xanadu Quantum Technologies Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Integration tests for templates, including integration with initialization functions
in :mod:`pennylane.init`, running templates in larger circuits,
combining templates, using positional and keyword arguments, and using different interfaces.
New tests are added as follows:
* When adding a new interface, try to import it and extend the fixture ``interfaces``. Also add the interface
gradient computation to the TestGradientIntegration tests.
* When adding a new template, extend the fixtures ``QUBIT_CONSTANT_INPUT`` or ``CV_CONSTANT_INPUT`` by a *list* of arguments to the
template. Note: Even if the template takes only one argument, it has to be wrapped in a list (i.e. [weights]).
* When adding a new parameter initialization function, extend the fixtures ``qubit_func`` or
``cv_func``.
"""
# pylint: disable=protected-access,cell-var-from-loop
import pytest
import numpy as np
import pennylane as qml
from pennylane.templates import (Interferometer,
CVNeuralNetLayers,
StronglyEntanglingLayers,
RandomLayers,
AmplitudeEmbedding,
BasisEmbedding,
AngleEmbedding,
SqueezingEmbedding,
DisplacementEmbedding,
BasisStatePreparation,
MottonenStatePreparation)
from pennylane.init import (strong_ent_layers_uniform,
strong_ent_layers_normal,
random_layers_uniform,
random_layers_normal,
cvqnn_layers_all,
interferometer_all)
#######################################
# Interfaces
INTERFACES = [('numpy', np.array)]
try:
import torch
from torch.autograd import Variable as TorchVariable
INTERFACES.append(('torch', torch.tensor))
except ImportError:
pass
try:
import tensorflow as tf
if tf.__version__[0] == "1":
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
TFVariable = tfe.Variable
else:
from tensorflow import Variable as TFVariable
INTERFACES.append(('tf', TFVariable))
except ImportError:
pass
#########################################
# Parameters shared between test classes
# qubit templates, constant inputs and kwargs
QUBIT_CONSTANT_INPUT = [(StronglyEntanglingLayers, [[[[4.54, 4.79, 2.98], [4.93, 4.11, 5.58]],
[[6.08, 5.94, 0.05], [2.44, 5.07, 0.95]]]], {}),
(RandomLayers, [[[0.56, 5.14], [2.21, 4.27]]], {}),
(AngleEmbedding, [[1., 2.]], {})
]
# cv templates, constant inputs and keyword argument
CV_CONSTANT_INPUT = [(DisplacementEmbedding, [[1., 2.]], {}),
(SqueezingEmbedding, [[1., 2.]], {}),
(CVNeuralNetLayers, [[[2.31], [1.22]],
[[3.47], [2.01]],
[[0.93, 1.58], [5.07, 4.82]],
[[0.21, 0.12], [-0.09, 0.04]],
[[4.76, 6.08], [6.09, 6.22]],
[[4.83], [1.70]],
[[4.74], [5.39]],
[[0.88, 0.62], [1.09, 3.02]],
[[-0.01, -0.05], [0.08, -0.19]],
[[1.89, 3.59], [1.49, 3.71]],
[[0.09, 0.03], [-0.14, 0.04]]
], {}),
(Interferometer, [[2.31], [3.49], [0.98, 1.54]], {})
]
#########################################
# Circuits shared by test classes
def qnode_qubit_args(dev, intrfc, templ1, templ2, n, hyperp1, hyperp2):
"""QNode for qubit integration circuit using positional arguments"""
hyperp1['wires'] = range(2)
hyperp2['wires'] = range(2)
@qml.qnode(dev, interface=intrfc)
def circuit(*inp):
# Split inputs again
inp1 = inp[:n]
inp2 = inp[n:]
# Circuit
qml.PauliX(wires=0)
templ1(*inp1, **hyperp1)
templ2(*inp2, **hyperp2)
qml.PauliX(wires=1)
return [qml.expval(qml.Identity(0)), qml.expval(qml.PauliX(1))]
return circuit
def qnode_qubit_kwargs(dev, intrfc, templ1, templ2, n, hyperp1, hyperp2):
"""QNode for qubit integration circuit using keyword arguments"""
hyperp1['wires'] = range(2)
hyperp2['wires'] = range(2)
@qml.qnode(dev, interface=intrfc)
def circuit(**inp):
# Split inputs again
ks = [int(k) for k in inp.keys()]
vs = inp.values()
inp = [x for _, x in sorted(zip(ks, vs))]
inp1 = inp[:n]
inp2 = inp[n:]
# Circuit
qml.PauliX(wires=0)
templ1(*inp1, **hyperp1)
templ2(*inp2, **hyperp2)
qml.PauliX(wires=1)
return [qml.expval(qml.Identity(0)), qml.expval(qml.PauliX(1))]
return circuit
def qnode_cv_args(dev, intrfc, templ1, templ2, n, hyperp1, hyperp2):
"""QNode for CV integration circuit using positional arguments"""
hyperp1['wires'] = range(2)
hyperp2['wires'] = range(2)
@qml.qnode(dev, interface=intrfc)
def circuit(*inp):
# Split inputs again
inp1 = inp[:n]
inp2 = inp[n:]
# Circuit
qml.Displacement(1., 1., wires=0)
templ1(*inp1, **hyperp1)
templ2(*inp2, **hyperp2)
qml.Displacement(1., 1., wires=1)
return [qml.expval(qml.Identity(0)), qml.expval(qml.X(1))]
return circuit
def qnode_cv_kwargs(dev, intrfc, templ1, templ2, n, hyperp1, hyperp2):
"""QNode for CV integration circuit using keyword arguments"""
hyperp1['wires'] = range(2)
hyperp2['wires'] = range(2)
@qml.qnode(dev, interface=intrfc)
def circuit(**inp_):
# Split inputs again
ks = [int(k) for k in inp_.keys()]
vs = inp_.values()
inp = [x for _, x in sorted(zip(ks, vs))]
inp1 = inp[:n]
inp2 = inp[n:]
# Circuit
qml.Displacement(1., 1., wires=0)
templ1(*inp1, **hyperp1)
templ2(*inp2, **hyperp2)
qml.Displacement(1., 1., wires=1)
return [qml.expval(qml.Identity(0)), qml.expval(qml.X(1))]
return circuit
######################
class TestIntegrationCircuit:
"""Tests the integration of templates into circuits using different interfaces. """
@pytest.mark.parametrize("template1, inpts1, hyperp1", QUBIT_CONSTANT_INPUT)
@pytest.mark.parametrize("template2, inpts2, hyperp2", QUBIT_CONSTANT_INPUT)
@pytest.mark.parametrize("intrfc, to_var", INTERFACES)
def test_integration_qubit_args(self, template1, inpts1, template2, inpts2,
intrfc, to_var, hyperp1, hyperp2):
"""Checks integration of qubit templates using positional arguments."""
inpts = inpts1 + inpts2 # Combine inputs to allow passing with *
inpts = [to_var(i) for i in inpts]
dev = qml.device('default.qubit', wires=2)
circuit = qnode_qubit_args(dev, intrfc, template1, template2, len(inpts1), hyperp1, hyperp2)
# Check that execution does not throw error
circuit(*inpts)
@pytest.mark.parametrize("template1, inpts1, hyperp1", QUBIT_CONSTANT_INPUT)
@pytest.mark.parametrize("template2, inpts2, hyperp2", QUBIT_CONSTANT_INPUT)
@pytest.mark.parametrize("intrfc, to_var", INTERFACES)
def test_integration_qubit_kwargs(self, template1, inpts1, template2, inpts2,
intrfc, to_var, hyperp1, hyperp2):
"""Checks integration of qubit templates using keyword arguments."""
inpts = inpts1 + inpts2 # Combine inputs to allow passing with **
inpts = {str(i): to_var(inp) for i, inp in enumerate(inpts)}
dev = qml.device('default.qubit', wires=2)
circuit = qnode_qubit_kwargs(dev, intrfc, template1, template2, len(inpts1), hyperp1, hyperp2)
# Check that execution does not throw error
circuit(**inpts)
@pytest.mark.parametrize("template1, inpts1, hyperp1", CV_CONSTANT_INPUT)
@pytest.mark.parametrize("template2, inpts2, hyperp2", CV_CONSTANT_INPUT)
@pytest.mark.parametrize("intrfc, to_var", INTERFACES)
def test_integration_cv_args(self, gaussian_device_2_wires, template1, inpts1, template2, inpts2,
intrfc, to_var, hyperp1, hyperp2):
"""Checks integration of continuous-variable templates using positional arguments."""
inpts = inpts1 + inpts2 # Combine inputs to allow passing with *
inpts = [to_var(i) for i in inpts]
dev = gaussian_device_2_wires
circuit = qnode_cv_args(dev, intrfc, template1, template2, len(inpts1), hyperp1, hyperp2)
# Check that execution does not throw error
circuit(*inpts)
@pytest.mark.parametrize("template1, inpts1, hyperp1", CV_CONSTANT_INPUT)
@pytest.mark.parametrize("template2, inpts2, hyperp2", CV_CONSTANT_INPUT)
@pytest.mark.parametrize("intrfc, to_var", INTERFACES)
def test_integration_cv_kwargs(self, gaussian_device_2_wires, template1, inpts1, template2, inpts2,
intrfc, to_var, hyperp1, hyperp2):
"""Checks integration of continuous-variable templates using keyword arguments."""
inpts = inpts1 + inpts2 # Combine inputs to allow passing with **
inpts = {str(i): to_var(inp) for i, inp in enumerate(inpts)}
dev = gaussian_device_2_wires
circuit = qnode_cv_kwargs(dev, intrfc, template1, template2, len(inpts1), hyperp1, hyperp2)
# Check that execution does not throw error
circuit(**inpts)
class TestIntegrationCircuitSpecialCases:
"""Tests the integration of templates with special requirements into circuits. """
REQUIRE_FIRST_USING_ARGS = [(AmplitudeEmbedding, [[1 / 2, 1 / 2, 1 / 2, 1 / 2]], {'normalize': False}),
(AmplitudeEmbedding, [[1 / 2, 1 / 2, 1 / 2, 1 / 2]], {'normalize': True}),
(MottonenStatePreparation, [np.array([1 / 2, 1 / 2, 1 / 2, 1 / 2])], {})]
REQUIRE_FIRST_USING_KWARGS = [(AmplitudeEmbedding, [[1 / 2, 1 / 2, 1 / 2, 1 / 2]], {'normalize': False}),
(AmplitudeEmbedding, [[1 / 2, 1 / 2, 1 / 2, 1 / 2]], {'normalize': True}),
(BasisEmbedding, [[1, 0]], {}),
(MottonenStatePreparation, [np.array([1 / 2, 1 / 2, 1 / 2, 1 / 2])], {}),
(BasisStatePreparation, [np.array([1, 0])], {})]
def qnode_first_op_args(self, dev, intrfc, templ1, templ2, hyperparameters1, hyperparameters2, n):
"""QNode for qubit integration circuit without gates before the first template,
and using positional arguments"""
hyperparameters1['wires'] = range(2)
hyperparameters2['wires'] = range(2)
@qml.qnode(dev, interface=intrfc)
def circuit(*inp):
# Split inputs again
inp1 = inp[:n]
inp2 = inp[n:]
# Circuit
templ1(*inp1, **hyperparameters1)
templ2(*inp2, **hyperparameters2)
qml.PauliX(wires=1)
return [qml.expval(qml.Identity(0)), qml.expval(qml.PauliX(1))]
return circuit
def qnode_first_op_kwargs(self, dev, intrfc, templ1, templ2, hyperparameters1, hyperparameters2, n):
"""QNode for qubit integration circuit without gates before the first template,
and using keyword arguments"""
hyperparameters1['wires'] = range(2)
hyperparameters2['wires'] = range(2)
@qml.qnode(dev, interface=intrfc)
def circuit(**inp):
# Split inputs again
ks = [int(k) for k in inp.keys()]
vs = inp.values()
inp = [x for _, x in sorted(zip(ks, vs))]
inp1 = inp[:n]
inp2 = inp[n:]
# Circuit
templ1(*inp1, **hyperparameters1)
templ2(*inp2, **hyperparameters2)
qml.PauliX(wires=1)
return [qml.expval(qml.Identity(0)), qml.expval(qml.PauliX(1))]
return circuit
@pytest.mark.parametrize("first_tmpl, first_inpts, first_hyperparams", REQUIRE_FIRST_USING_ARGS)
@pytest.mark.parametrize("template, inpts, hyperparams", QUBIT_CONSTANT_INPUT)
@pytest.mark.parametrize("intrfc, to_var", INTERFACES)
def test_integration_first_template_args(self, first_tmpl, first_inpts, first_hyperparams,
template, inpts, hyperparams, intrfc, to_var):
"""Checks integration of templates that must be the first operation in the circuit while
using positional arguments."""
inpts = first_inpts + inpts # Combine inputs to allow passing with *
inpts = [to_var(inp) for inp in inpts]
dev = qml.device('default.qubit', wires=2)
circuit = self.qnode_first_op_args(dev, intrfc, first_tmpl, template, first_hyperparams, hyperparams,
len(first_inpts))
# Check that execution does not throw error
circuit(*inpts)
@pytest.mark.parametrize("first_tmpl, first_inpts, first_hyperparams", REQUIRE_FIRST_USING_KWARGS)
@pytest.mark.parametrize("template, inpts, hyperparams", QUBIT_CONSTANT_INPUT)
@pytest.mark.parametrize("intrfc, to_var", INTERFACES)
def test_integration_first_template_kwargs(self, first_tmpl, first_inpts, first_hyperparams,
template, inpts, hyperparams, intrfc, to_var):
"""Checks integration of templates that must be the first operation in the circuit while
using keyword arguments."""
inpts = first_inpts + inpts # Combine inputs to allow passing with **
import pytest
import numpy as np
import tensorflow as tf
from gym import spaces as gym_spaces
from tf_agents.environments.gym_wrapper import GymWrapper
from tf_agents.environments.tf_py_environment import TFPyEnvironment
import snc.utils.snc_tools as snc
from snc.environments.rl_environment_wrapper \
import RLControlledRandomWalk, rl_env_from_snc_env
from snc.environments.job_generators. \
scaled_bernoulli_services_poisson_arrivals_generator \
import ScaledBernoulliServicesPoissonArrivalsGenerator
from snc.environments.job_generators.discrete_review_job_generator \
import DeterministicDiscreteReviewJobGenerator
import snc.environments.state_initialiser as stinit
from snc.environments.scenarios import load_scenario
def test_find_coupled_resource_sets_a():
"""
Test the _find_coupled_resource_sets method of the wrapped RL environment directly.
The method takes in a binary matrix denoting which resources are affected by which constraints
and returns a list of sets of resources the actions of which are made dependent by the
constraints. This is essentially noticing and interpreting the chaining effect of multiple
constraints on activities.
"""
# Each resource has one activity and all are independent.
independent_resource_constraints_matrix = np.eye(4) # => resource sets are [{0}, {1}, {2}, {3}]
# Run the method for the test case defined above
independent_resource_sets = RLControlledRandomWalk._find_coupled_resource_sets(
independent_resource_constraints_matrix)
# Assert that the responses are as expected.
assert independent_resource_sets == [{0}, {1}, {2}, {3}]
def test_find_coupled_resource_sets_b():
"""
Test the _find_coupled_resource_sets method of the wrapped RL environment directly.
The method takes in a binary matrix denoting which resources are affected by which constraints
and returns a list of sets of resources the actions of which are made dependent by the
constraints. This is essentially noticing and interpreting the chaining effect of multiple
constraints on activities.
"""
# The constraints lead to resources 0 and 2, and 1 and 3 being coupled into two dependent sets.
simple_resource_constraints_matrix = np.array([
[1, 0],
[0, 1],
[1, 0],
[0, 1]
]) # => Resource sets are [{0, 2}, {1, 3}]
# Run the method for the test case defined above
simple_resource_sets = RLControlledRandomWalk._find_coupled_resource_sets(
simple_resource_constraints_matrix
)
# Assert that the responses are as expected.
assert simple_resource_sets == [{0, 2}, {1, 3}]
def test_find_coupled_resource_sets_c():
"""
Test the _find_coupled_resource_sets method of the wrapped RL environment directly.
The method takes in a binary matrix denoting which resources are affected by which constraints
and returns a list of sets of resources the actions of which are made dependent by the
constraints. This is essentially noticing and interpreting the chaining effect of multiple
constraints on activities.
"""
# This test case tests the chaining effect. The first constraint links resources 0 and 1 then
# the second links resources 2 and 3 and the third links 1 and 2 which creates a chain such that
# resources 0, 1, 2 and 3 form one large dependent resource set
overlapping_resource_constraints_matrix = np.array([
[1, 0, 0],
[1, 0, 1],
[0, 1, 1],
[0, 1, 0]
]) # => Resource set is {0, 1, 2, 3}
# Run the method for the test case defined above
overlapping_resource_sets = RLControlledRandomWalk._find_coupled_resource_sets(
overlapping_resource_constraints_matrix
)
# Assert that the responses are as expected.
assert overlapping_resource_sets == [{0, 1, 2, 3}]
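The three cases above can be reproduced with a small reference implementation. This is a hedged sketch of one way to compute coupled resource sets (a simple union-find over the constraint columns), not the actual `_find_coupled_resource_sets` code:

```python
import numpy as np

def coupled_resource_sets(matrix):
    """Merge resources that share a constraint column (union-find sketch)."""
    n = matrix.shape[0]
    parent = list(range(n))

    def find(i):
        # Follow parent pointers with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Every constraint (column) couples all resources it touches.
    for col in matrix.T:
        rows = np.nonzero(col)[0]
        for r in rows[1:]:
            union(rows[0], r)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), set()).add(i)
    return sorted(groups.values(), key=min)

assert coupled_resource_sets(np.eye(4)) == [{0}, {1}, {2}, {3}]
```

The chained case behaves the same way: constraints (0,1), (2,3) and (1,2) merge everything into one set.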
def test_single_server_queue_spaces():
"""
Tests the RL environment wrapper for the Single Server Queue. The set up is copied from the
examples file: snc/stochastic_network_control/environments/examples.py
"""
# Set up the environment parameters.
cost_per_buffer = np.ones((1, 1))
initial_state = (0,)
capacity = np.ones((1, 1)) * np.inf
demand_rate_val = 0.7
job_conservation_flag = True
seed = 72
demand_rate = np.array([demand_rate_val])[:, None]
buffer_processing_matrix = - np.ones((1, 1))
constituency_matrix = np.ones((1, 1))
list_boundary_constraint_matrices = [constituency_matrix]
# Construct environment.
job_generator = ScaledBernoulliServicesPoissonArrivalsGenerator(
demand_rate, buffer_processing_matrix, job_gen_seed=seed)
assert job_generator.routes == {}
state_initialiser = stinit.DeterministicCRWStateInitialiser(initial_state)
env = RLControlledRandomWalk(cost_per_buffer, capacity, constituency_matrix, job_generator,
state_initialiser, job_conservation_flag,
list_boundary_constraint_matrices)
# Test that the wrapper sets the spaces up as we would expect.
assert isinstance(env.action_space, gym_spaces.Tuple)
assert len(env._rl_action_space.spaces) == 1
assert isinstance(env._rl_action_space.spaces[0], gym_spaces.Box)
assert env.action_space.spaces[0].shape == (1, 2)
assert isinstance(env.activities_action_space, gym_spaces.Box)
assert env.activities_action_space.shape == (1, 1)
assert isinstance(env.observation_space, gym_spaces.Box)
assert env.observation_space.shape == (1,)
def test_independent_resource_actions():
"""
Tests the RL environment wrapper for the ksrs_network_model example as per the examples file
from which the initial set up code is copied.
see snc/stochastic_network_control/environments/examples.py for the original code.
"""
# Set up the environment parameters.
alpha1, alpha3 = 2, 2
mu1, mu2, mu3, mu4 = 10, 3, 10, 3
cost_per_buffer = np.ones((4, 1))
initial_state = (0, 0, 0, 0)
capacity = np.ones((4, 1)) * np.inf
job_conservation_flag = True
seed = 72
demand_rate = np.array([alpha1, 0, alpha3, 0])[:, None]
buffer_processing_matrix = np.array([[-mu1, 0, 0, 0],
[mu1, -mu2, 0, 0],
[0, 0, -mu3, 0],
[0, 0, mu3, -mu4]])
constituency_matrix = np.array([[1, 0, 0, 1],
[0, 1, 1, 0]])
# Construct environment.
job_generator = ScaledBernoulliServicesPoissonArrivalsGenerator(
demand_rate, buffer_processing_matrix, job_gen_seed=seed)
state_initialiser = stinit.DeterministicCRWStateInitialiser(initial_state)
env = RLControlledRandomWalk(cost_per_buffer, capacity, constituency_matrix, job_generator,
state_initialiser, job_conservation_flag)
# Test that the wrapper sets the spaces up as we would expect.
assert len(env._rl_action_space.spaces) == 2
assert np.all(np.array([s.shape[1] for s in env._rl_action_space.spaces]) == 3)
assert len(env._action_vectors) == 3 + 3
assert env.observation_space.shape == (4,)
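The action-space sizes asserted above follow from a simple count: each resource activates at most one of its activities per step, or stays idle, so a resource controlling k activities has k + 1 actions. A quick sketch of that arithmetic for the constituency matrix used in this test:

```python
import numpy as np

constituency_matrix = np.array([[1, 0, 0, 1],
                                [0, 1, 1, 0]])
# k activities + 1 idle action per resource.
actions_per_resource = constituency_matrix.sum(axis=1).astype(int) + 1
assert list(actions_per_resource) == [3, 3]
assert actions_per_resource.sum() == 3 + 3  # matches len(env._action_vectors)
```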
def test_input_queued_switch_spaces():
"""
Test the RL environment wrapper for the input_queued_switch_3x3_model example.
This is a good test for resources with activity constraints.
See snc/stochastic_network_control/environments/examples.py for the original set up code.
"""
# Set up the environment parameters.
mu11, mu12, mu13, mu21, mu22, mu23, mu31, mu32, mu33 = (1,) * 9
cost_per_buffer = np.ones((9, 1))
initial_state = np.zeros((9, 1))
capacity = np.ones((9, 1)) * np.inf
demand_rate = 0.3 * np.ones((9, 1))
job_conservation_flag = True
seed = 72
buffer_processing_matrix = - np.diag([mu11, mu12, mu13, mu21, mu22, mu23, mu31, mu32, mu33])
constraints = np.array([[1, 0, 0, 1, 0, 0, 1, 0, 0],
[0, 1, 0, 0, 1, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 1, 0, 0, 1]])
constituency_matrix = np.vstack([
np.array([[1, 1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1]]),
constraints]
)
index_phys_resources = (0, 1, 2)
# Construct environment.
job_generator = ScaledBernoulliServicesPoissonArrivalsGenerator(
demand_rate, buffer_processing_matrix, job_gen_seed=seed)
state_initialiser = stinit.DeterministicCRWStateInitialiser(initial_state)
env = RLControlledRandomWalk(cost_per_buffer, capacity, constituency_matrix, job_generator,
state_initialiser, job_conservation_flag,
index_phys_resources=index_phys_resources)
# Test that the environment wrapper works as expected.
assert isinstance(env._rl_action_space, gym_spaces.Tuple)
assert len(env._rl_action_space.spaces) == 1
# There are overall 34 combinations of activities (or inactivity) that are feasible.
assert len(env._action_vectors) == 34
assert env._action_vectors.shape == (34, 9)
assert snc.is_binary(env._action_vectors)
assert snc.is_binary(np.matmul(env._action_vectors, constraints.T))
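The 34 feasible action vectors asserted above correspond to the partial permutation matrices of a 3x3 switch: binary 3x3 matrices with at most one 1 per row and per column. A self-contained enumeration confirming the count:

```python
from itertools import product

import numpy as np

count = 0
for bits in product([0, 1], repeat=9):
    m = np.array(bits).reshape(3, 3)
    # Feasible iff no row or column schedules more than one activity.
    if m.sum(axis=0).max() <= 1 and m.sum(axis=1).max() <= 1:
        count += 1
assert count == 34  # sum over k of C(3,k)^2 * k! = 1 + 9 + 18 + 6
```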
def test_rl_env_with_simple_activity_conditions():
"""
Sets up a simple case of a tandem model with 4 independent resources each with one action.
Activity constraints are then added as a way of testing the wrapper's handling of such activity
constraints.
"""
# Set up the environment parameters.
alpha1 = 4
cost_per_buffer = np.ones((4, 1))
initial_state = np.zeros((4, 1))
capacity = np.ones((4, 1)) * np.inf
job_conservation_flag = False
seed = 72
demand_rate = np.array([alpha1, 0, 0, 0])[:, None]
buffer_processing_matrix = np.array([
[-1, 0, 0, 0],
[1, -1, 0, 0],
[0, 1, -1, 0],
[0, 0, 1, -1]
])
constituency_matrix = np.vstack([np.eye(4), np.array([[1, 1, 0, 0]])])
# Construct environment.
job_generator = ScaledBernoulliServicesPoissonArrivalsGenerator(
demand_rate, buffer_processing_matrix, job_gen_seed=seed)
state_initialiser = stinit.DeterministicCRWStateInitialiser(initial_state)
env = RLControlledRandomWalk(cost_per_buffer, capacity, constituency_matrix, job_generator,
state_initialiser, job_conservation_flag,
index_phys_resources=(0, 1, 2, 3))
# Ensure that the action spaces are as expected.
assert len(env._rl_action_space.spaces) == 3
assert env._rl_action_space.spaces[0].shape == (1, 3)
assert env._rl_action_space.spaces[1].shape == (1, 2)
def test_rl_env_with_complex_activity_conditions():
"""
A more complicated version of the previous test (test_rl_env_with_simple_activity_conditions).
Tests the handling of more complex activity constraints in an 8 resource and 8 activity tandem
model.
"""
# Set up environment parameters.
alpha1 = 4
cost_per_buffer = np.ones((8, 1))
initial_state = np.zeros((8, 1))
capacity = np.ones((8, 1)) * np.inf
job_conservation_flag = False
seed = 72
demand_rate = np.array([alpha1, 0, 0, 0, 0, 0, 0, 0])[:, None]
buffer_processing_matrix = np.array([
[-1, 0, 0, 0, 0, 0, 0, 0],
[1, -1, 0, 0, 0, 0, 0, 0],
[0, 1, -1, 0, 0, 0, 0, 0],
[0, 0, 1, -1, 0, 0, 0, 0],
[0, 0, 0, 1, -1, 0, 0, 0],
[0, 0, 0, 0, 1, -1, 0, 0],
[0, 0, 0, 0, 0, 1, -1, 0],
[0, 0, 0, 0, 0, 0, 1, -1]
])
constraints_matrix = np.array([
[1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0, 0, 0],
[0, 1, 1, 0, 0, 0, 0, 0]
])
constituency_matrix = np.vstack([np.eye(8),
# This tricky setup (immediately above) allows us to just write
# (e.g.) "split('exec')" in the Python code and the split #ifdef's
# will automatically be added to the exec_output variable. The inner
# Python execution environment doesn't know about the split points,
# so we carefully inject and wrap a closure that can retrieve the
# next split's #define from the parser and add it to the current
# emission-in-progress.
try:
exec split_setup+fixPythonIndentation(t[2]) in self.exportContext
except Exception, exc:
if debug:
raise
error(t, 'error: %s in global let block "%s".' % (exc, t[2]))
GenCode(self,
header_output=self.exportContext["header_output"],
decoder_output=self.exportContext["decoder_output"],
exec_output=self.exportContext["exec_output"],
decode_block=self.exportContext["decode_block"]).emit()
# Define the mapping from operand type extensions to C++ types and
# bit widths (stored in operandTypeMap).
def p_def_operand_types(self, t):
'def_operand_types : DEF OPERAND_TYPES CODELIT SEMI'
try:
self.operandTypeMap = eval('{' + t[3] + '}')
except Exception, exc:
if debug:
raise
error(t,
'error: %s in def operand_types block "%s".' % (exc, t[3]))
# Define the mapping from operand names to operand classes and
# other traits. Stored in operandNameMap.
def p_def_operands(self, t):
'def_operands : DEF OPERANDS CODELIT SEMI'
if not hasattr(self, 'operandTypeMap'):
error(t, 'error: operand types must be defined before operands')
try:
user_dict = eval('{' + t[3] + '}', self.exportContext)
except Exception, exc:
if debug:
raise
error(t, 'error: %s in def operands block "%s".' % (exc, t[3]))
self.buildOperandNameMap(user_dict, t.lexer.lineno)
# A bitfield definition looks like:
# 'def [signed] bitfield <ID> [<first>:<last>]'
# This generates a preprocessor macro in the output file.
def p_def_bitfield_0(self, t):
'def_bitfield : DEF opt_signed BITFIELD ID LESS INTLIT COLON INTLIT GREATER SEMI'
expr = 'bits(machInst, %2d, %2d)' % (t[6], t[8])
if (t[2] == 'signed'):
expr = 'sext<%d>(%s)' % (t[6] - t[8] + 1, expr)
hash_define = '#undef %s\n#define %s\t%s\n' % (t[4], t[4], expr)
GenCode(self, header_output=hash_define).emit()
# alternate form for single bit: 'def [signed] bitfield <ID> [<bit>]'
def p_def_bitfield_1(self, t):
'def_bitfield : DEF opt_signed BITFIELD ID LESS INTLIT GREATER SEMI'
expr = 'bits(machInst, %2d, %2d)' % (t[6], t[6])
if (t[2] == 'signed'):
expr = 'sext<%d>(%s)' % (1, expr)
hash_define = '#undef %s\n#define %s\t%s\n' % (t[4], t[4], expr)
GenCode(self, header_output=hash_define).emit()
# alternate form for structure member: 'def bitfield <ID> <ID>'
def p_def_bitfield_struct(self, t):
'def_bitfield_struct : DEF opt_signed BITFIELD ID id_with_dot SEMI'
if (t[2] != ''):
error(t, 'error: structure bitfields are always unsigned.')
expr = 'machInst.%s' % t[5]
hash_define = '#undef %s\n#define %s\t%s\n' % (t[4], t[4], expr)
GenCode(self, header_output=hash_define).emit()
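As a standalone sketch (with a hypothetical field name), the macro text the two ranged-bitfield rules above generate can be reproduced as follows — `%2d` pads the bit indices and signed fields get wrapped in a `sext<>` whose width is derived from the bit range:

```python
# Sketch of the #define emitted for 'def [signed] bitfield <ID> <first:last>'.
def bitfield_macro(name, first, last, signed=False):
    expr = 'bits(machInst, %2d, %2d)' % (first, last)
    if signed:
        # Field width (first - last + 1) is the sign-extension template arg.
        expr = 'sext<%d>(%s)' % (first - last + 1, expr)
    return '#undef %s\n#define %s\t%s\n' % (name, name, expr)

print(bitfield_macro('IMM', 15, 0, signed=True))
```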
def p_id_with_dot_0(self, t):
'id_with_dot : ID'
t[0] = t[1]
def p_id_with_dot_1(self, t):
'id_with_dot : ID DOT id_with_dot'
t[0] = t[1] + t[2] + t[3]
def p_opt_signed_0(self, t):
'opt_signed : SIGNED'
t[0] = t[1]
def p_opt_signed_1(self, t):
'opt_signed : empty'
t[0] = ''
def p_def_template(self, t):
'def_template : DEF TEMPLATE ID CODELIT SEMI'
if t[3] in self.templateMap:
print "warning: template %s already defined" % t[3]
self.templateMap[t[3]] = Template(self, t[4])
# An instruction format definition looks like
# "def format <fmt>(<params>) {{...}};"
def p_def_format(self, t):
'def_format : DEF FORMAT ID LPAREN param_list RPAREN CODELIT SEMI'
(id, params, code) = (t[3], t[5], t[7])
self.defFormat(id, params, code, t.lexer.lineno)
# The formal parameter list for an instruction format is a
# possibly empty list of comma-separated parameters. Positional
# (standard, non-keyword) parameters must come first, followed by
# keyword parameters, followed by a '*foo' parameter that gets
# excess positional arguments (as in Python). Each of these three
# parameter categories is optional.
#
# Note that we do not support the '**foo' parameter for collecting
# otherwise undefined keyword args. Otherwise the parameter list
# is (I believe) identical to what is supported in Python.
#
# The param list generates a tuple, where the first element is a
# list of the positional params and the second element is a dict
# containing the keyword params.
def p_param_list_0(self, t):
'param_list : positional_param_list COMMA nonpositional_param_list'
t[0] = t[1] + t[3]
def p_param_list_1(self, t):
'''param_list : positional_param_list
| nonpositional_param_list'''
t[0] = t[1]
def p_positional_param_list_0(self, t):
'positional_param_list : empty'
t[0] = []
def p_positional_param_list_1(self, t):
'positional_param_list : ID'
t[0] = [t[1]]
def p_positional_param_list_2(self, t):
'positional_param_list : positional_param_list COMMA ID'
t[0] = t[1] + [t[3]]
def p_nonpositional_param_list_0(self, t):
'nonpositional_param_list : keyword_param_list COMMA excess_args_param'
t[0] = t[1] + t[3]
def p_nonpositional_param_list_1(self, t):
'''nonpositional_param_list : keyword_param_list
| excess_args_param'''
t[0] = t[1]
def p_keyword_param_list_0(self, t):
'keyword_param_list : keyword_param'
t[0] = [t[1]]
def p_keyword_param_list_1(self, t):
'keyword_param_list : keyword_param_list COMMA keyword_param'
t[0] = t[1] + [t[3]]
def p_keyword_param(self, t):
'keyword_param : ID EQUALS expr'
t[0] = t[1] + ' = ' + t[3].__repr__()
def p_excess_args_param(self, t):
'excess_args_param : ASTERISK ID'
# Just concatenate them: '*ID'. Wrap in list to be consistent
# with positional_param_list and keyword_param_list.
t[0] = [t[1] + t[2]]
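The comment block above says the param list ultimately yields positional params plus a keyword dict. The grammar rules themselves only build a flat list of strings like `['a', 'b = 1', '*rest']`; a hypothetical standalone sketch of the later split into those three categories is:

```python
# Split a flat list of parameter strings into positional names, a keyword
# dict, and an optional '*rest' excess-args name, mirroring Python's rules.
def split_params(params):
    positional, keyword, excess = [], {}, None
    for p in params:
        if p.startswith('*'):
            excess = p[1:]
        elif '=' in p:
            name, _, default = p.partition('=')
            keyword[name.strip()] = default.strip()
        else:
            positional.append(p)
    return positional, keyword, excess

print(split_params(['a', 'b = 1', '*rest']))  # → (['a'], {'b': '1'}, 'rest')
```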
# End of format definition-related rules.
##############
#
# A decode block looks like:
# decode <field1> [, <field2>]* [default <inst>] { ... }
#
def p_top_level_decode_block(self, t):
'top_level_decode_block : decode_block'
codeObj = t[1]
codeObj.wrap_decode_block('''
StaticInstPtr
%(isa_name)s::Decoder::decodeInst(%(isa_name)s::ExtMachInst machInst)
{
using namespace %(namespace)s;
''' % self, '}')
codeObj.emit()
def p_decode_block(self, t):
'decode_block : DECODE ID opt_default LBRACE decode_stmt_list RBRACE'
default_defaults = self.defaultStack.pop()
codeObj = t[5]
# use the "default defaults" only if there was no explicit
# default statement in decode_stmt_list
if not codeObj.has_decode_default:
codeObj += default_defaults
codeObj.wrap_decode_block('switch (%s) {\n' % t[2], '}\n')
t[0] = codeObj
# The opt_default statement serves only to push the "default
# defaults" onto defaultStack. This value will be used by nested
# decode blocks, and used and popped off when the current
# decode_block is processed (in p_decode_block() above).
def p_opt_default_0(self, t):
'opt_default : empty'
# no default specified: reuse the one currently at the top of
# the stack
self.defaultStack.push(self.defaultStack.top())
# no meaningful value returned
t[0] = None
def p_opt_default_1(self, t):
'opt_default : DEFAULT inst'
# push the new default
codeObj = t[2]
codeObj.wrap_decode_block('\ndefault:\n', 'break;\n')
self.defaultStack.push(codeObj)
# no meaningful value returned
t[0] = None
def p_decode_stmt_list_0(self, t):
'decode_stmt_list : decode_stmt'
t[0] = t[1]
def p_decode_stmt_list_1(self, t):
'decode_stmt_list : decode_stmt decode_stmt_list'
if (t[1].has_decode_default and t[2].has_decode_default):
error(t, 'Two default cases in decode block')
t[0] = t[1] + t[2]
#
# Decode statement rules
#
# There are four types of statements allowed in a decode block:
# 1. Format blocks 'format <foo> { ... }'
# 2. Nested decode blocks
# 3. Instruction definitions.
# 4. C preprocessor directives.
# Preprocessor directives found in a decode statement list are
# passed through to the output, replicated to all of the output
# code streams. This works well for ifdefs, so we can ifdef out
# both the declarations and the decode cases generated by an
# instruction definition. Handling them as part of the grammar
# makes it easy to keep them in the right place with respect to
# the code generated by the other statements.
def p_decode_stmt_cpp(self, t):
'decode_stmt : CPPDIRECTIVE'
t[0] = GenCode(self, t[1], t[1], t[1], t[1])
# A format block 'format <foo> { ... }' sets the default
# instruction format used to handle instruction definitions inside
# the block. This format can be overridden by using an explicit
# format on the instruction definition or with a nested format
# block.
def p_decode_stmt_format(self, t):
'decode_stmt : FORMAT push_format_id LBRACE decode_stmt_list RBRACE'
# The format will be pushed on the stack when 'push_format_id'
# is processed (see below). Once the parser has recognized
# the full production (through the right brace), we're done
# with the format, so now we can pop it.
self.formatStack.pop()
t[0] = t[4]
# This rule exists so we can set the current format (& push the
# stack) when we recognize the format name part of the format
# block.
def p_push_format_id(self, t):
'push_format_id : ID'
try:
self.formatStack.push(self.formatMap[t[1]])
t[0] = ('', '// format %s' % t[1])
except KeyError:
error(t, 'instruction format "%s" not defined.' % t[1])
# Nested decode block: if the value of the current field matches
# the specified constant, do a nested decode.
# src/app.py
import dash_bootstrap_components as dbc
from dash import Dash, html, dcc, Output, Input, State
from vega_datasets import data
import altair as alt
import pandas as pd
from altair import datum
from wordcloud import WordCloud
import matplotlib.pyplot as plt
import base64
import io
import matplotlib
from PIL import Image
import numpy as np
alt.data_transformers.disable_max_rows()
app = Dash(__name__, external_stylesheets=[dbc.themes.DARKLY])
df = pd.read_csv("data/processed/processed.csv")
raw_data = pd.read_csv("data/raw/netflix_titles.csv")
geocodes = pd.read_csv("data/raw/world_country_latitude_and_longitude_values.csv")
server = app.server
app.title = 'Netflix Dashboard'
def world_map(cat, rate, year):
"""
Merges the processed data with country geocodes and
plots a world map containing the number of movies and TV shows produced in a given year, genre and rating.
Parameters
----------
cat: list
List of genres we want to filter out from the dataframe.
rate: list
List of ratings we want to filter out from the dataframe.
year: int, float
Filter the data based on year that the movie/TV show is released.
Returns
-------
altair.vegalite.v4.api.LayerChart
A layered Altair chart containing the base world map and points sized by the number of movies and TV shows produced over the years.
"""
# Explode "country" since some shows have multiple countries of production
movie_exploded = (df.set_index(df.columns.drop("country")
.tolist()).country.str.split(',', expand = True)
.stack()
.reset_index()
.rename(columns = {0:'country'})
.loc[:, df.columns]
)
# Remove white space
movie_exploded.country = movie_exploded.country.str.strip()
# Get count per country and release year
count = (pd.DataFrame(movie_exploded.groupby(["country",
"release_year", "genres", "rating"]).size())
.reset_index()
.rename(columns = {0 : "count"})
)
# Merge with geocodes
count_geocoded = count.merge(geocodes, on = "country")
count_geocoded = count_geocoded.rename(columns = {"latitude": "lat",
"longitude": "lon"})
# Drop unused columns
count_geocoded = count_geocoded.drop(["usa_state_code",
"usa_state_latitude",
"usa_state_longitude",
"usa_state"], axis = 1)
plot_df = count_geocoded[count_geocoded["rating"].isin(rate)]
plot_df = (plot_df[plot_df["genres"].isin(cat)]
.query(f"release_year == @year")
.groupby(["country", "lat", "lon", "release_year"])
.sum()
.reset_index())
# Base map layer
source = alt.topo_feature(data.world_110m.url, 'countries')
base_map = alt.layer(
alt.Chart(source).mark_geoshape(
fill = 'black',
stroke = 'grey')
).project(
'equirectangular'
).properties(width = 900,
height = 400).configure_view(
stroke = None)
# Shows count size layer
points = alt.Chart(plot_df).mark_point().encode(
latitude = "lat",
longitude = "lon",
fill = alt.value("#E50914"),
size = alt.Size("count:Q",
scale = alt.Scale(domain = [0, 70]),
legend = None),
stroke = alt.value(None),
tooltip = [alt.Tooltip("country", title = "Country"),
alt.Tooltip("release_year:Q", title = "Release Year"),
alt.Tooltip("count:Q", title = "Count")]
)
chart = (base_map + points).configure_view(
strokeWidth = 0
).configure_mark(
opacity = 0.8
).configure(background = transparent, style=dict(cell=dict(strokeOpacity=0)))
return chart.to_html()
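The country-explode step at the top of `world_map` can be written more directly with `Series.str.split` plus `DataFrame.explode`; a minimal sketch on a toy frame (not the real dataset) showing the same one-row-per-country result:

```python
import pandas as pd

# One row per country: split the comma-separated field, then explode.
toy = pd.DataFrame({"title": ["A", "B"], "country": ["US, UK", "India"]})
exploded = toy.assign(country=toy["country"].str.split(",")).explode("country")
exploded["country"] = exploded["country"].str.strip()
print(exploded["country"].tolist())  # → ['US', 'UK', 'India']
```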
def plot_hist_duration(type_name, year, cat, rate, title):
"""
Plots the distribution of movies or TV series.
Parameters
----------
type_name: string
The name of the required plot category (TV or Movies).
year: int, float
Filter the data based on year that the movie/TV show is released.
cat: list
List of genres we want to filter out from the dataframe.
rate: list
List of ratings we want to filter out from the dataframe.
title: string
The x label of the barplot.
Returns
-------
altair.vegalite.v4.api.LayerChart
A barplot conditioned on the type of content (TV/Movie)
"""
plot_df = df[df["rating"].isin(rate)]
plot_df = (plot_df[plot_df["genres"].isin(cat)]
.query(f"release_year <= @year"))
plot_df = plot_df[['genres', 'duration', 'show_id', "type"]].copy().drop_duplicates()
chart = alt.Chart(plot_df).mark_boxplot(extent=2.5, color="#752516").encode(
alt.X("duration", title = title),
alt.Y('genres', title=""),
color = alt.value(color1),
tooltip = 'genres'
).transform_filter(datum.type == type_name).properties(
width=260,
height=200
).configure(background=transparent
).configure_axis(
labelColor=plot_text_color,
titleColor=plot_text_color
).interactive()
return chart.to_html()
def plot_directors(cat, rate, year):
"""
Plots the count of movies or TV series by individual directors.
Parameters
----------
cat: list
List of genres we want to filter out from the dataframe.
rate: list
List of ratings we want to filter out from the dataframe.
year: int, float
Filter the data based on year that the movie/TV show is released.
Returns
-------
altair.vegalite.v4.api.LayerChart
A barplot showing the number of titles directed by each individual
"""
click = alt.selection_multi()
plot_df = df[df["rating"].isin(rate)]
plot_df = (
plot_df[plot_df["genres"].isin(cat)]
.query("director != 'Missing'")
.query(f"release_year <= @year")
.groupby(["director", "country"])
.show_id.nunique()
.reset_index()
.sort_values(ascending=False, by="show_id")
)
chart = (
alt.Chart(plot_df[0:10])
.mark_bar(color=color1)
.encode(
y=alt.Y("director", sort="-x", title=""),
x=alt.X("sum(show_id)", title="Number of Movies + TV shows"),
opacity=alt.condition(click, alt.value(1), alt.value(0.2)),
tooltip=[
alt.Tooltip("director", title="Director"),
alt.Tooltip("sum(show_id)", title="Count"),
],
)
.add_selection(click)
).configure(background=transparent
).configure_axis(
labelColor=plot_text_color,
titleColor=plot_text_color
)
return chart.to_html()
def title_cloud(cat, rate, year):
"""
Makes a word cloud of movie and TV show titles.
Parameters
----------
cat: list
List of genres we want to filter out from the dataframe.
rate: list
List of ratings we want to filter out from the dataframe.
year: int, float
Filter the data based on year that the movie/TV show is released.
Returns
-------
matplotlib.image.AxesImage
Image of word cloud containing the movie and TV show titles.
"""
plot_df = df
# prevent error when no genre and rating is selected
if len(cat) > 0 and len(rate) == 0:
plot_df = df[df["genres"].isin(cat)].query(f'release_year <= @year')
elif len(cat) == 0 and len(rate) > 0:
plot_df = df[df["rating"].isin(rate)].query(f'release_year <= @year')
elif len(cat) > 0 and len(rate) > 0:
plot_df = df[df["genres"].isin(cat)]
plot_df = plot_df[plot_df["rating"].isin(rate)].query(f'release_year <= @year')
else:
plot_df = df.query(f'release_year <= @year')
words = " ".join(plot_df["title"].tolist())
mask = np.array(Image.open("src/assets/netflixN.png"))
colormap = matplotlib.colors.LinearSegmentedColormap.from_list("", ['#824d4d', '#b20710', "#ffeded", "#E50914"])
word_cloud = WordCloud(collocations = False,
background_color = "#222222", colormap = colormap, mask = mask).generate(words)
buf = io.BytesIO()
plt.figure()
plt.imshow(word_cloud, interpolation = "bilinear");
plt.axis("off")
plt.savefig(buf, format = "png", dpi = 150, bbox_inches = "tight", pad_inches = 0)
data = base64.b64encode(buf.getbuffer()).decode("utf8")
plt.close()
return "data:image/png;base64,{}".format(data)
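The return value of `title_cloud` is a base64 data URI that Dash can feed straight into an `html.Img` `src`. The encoding step in isolation (fake PNG bytes stand in for the saved matplotlib figure):

```python
import base64
import io

# Any PNG bytes in a buffer become an inline <img> source.
buf = io.BytesIO(b"\x89PNG fake bytes")
data = base64.b64encode(buf.getbuffer()).decode("utf8")
uri = "data:image/png;base64,{}".format(data)
print(uri[:22])  # → data:image/png;base64,
```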
transparent = "#00000000" # for transparent backgrounds
color1 = "#9E0600" # red
color2 = "#993535" # border colors
plot_text_color = "#ebe8e8" # plot axis and label color
title_color = "#ebe8e8" # general title and text color
border_radius = "5px" # rounded corner radius
border_width = "3px" # border width
app.layout = dbc.Container([
dbc.Row([
dbc.Col([
dbc.Col([
html.Img(
id = "image_wc",
className = "img-responsive",
style = {'width':'100%'}
)
]),
html.P("Select Year",
style={"background": color1, "color": title_color,
'textAlign': 'center', 'border-radius': border_radius, "margin-top": "15px"}),
html.Div([
html.Div(style={'padding': 3}),
dcc.Slider(id = 'year_slider',
min = 1942,
max = 2021,
value = 2021,
step = 1,
marks = {
1942: "1942",
2021: "2021"},
tooltip = {"placement": "bottom", "always_visible": False},
dots = True
)], style={"border": f"{border_width} solid {color2}", 'border-radius': border_radius}),
html.P("Select Genres",
style={"background": color1, "color": title_color,
'textAlign': 'center', 'border-radius': border_radius,
"margin-top": "15px"}),
html.Div([
dcc.Dropdown(
id="dropdown",
options=df.genres.unique().tolist(),
value=["International", "Dramas", "Thrillers", "Comedies", "Action"],
multi=True,
style={"background-color": transparent, "border": "0", "color": "black"}
)], style={"border": f"{border_width} solid {color2}",
'border-radius': border_radius}
),
html.P("Select Ratings",
style={"background": color1, "color": title_color,
'textAlign': 'center', 'border-radius': border_radius,
"margin-top": "15px"}),
html.Div([
dcc.Dropdown(
id="dropdown_ratings",
options=[i for i in df.rating.unique().tolist() if str(i) not in ['66 min', '84 min', 'nan', '74 min', 'null']],
value=['PG-13','TV-MA','PG','TV-14','TV-PG','TV-Y','R','TV-G','G','NC-17','NR'],
multi=True,
style={"background-color": transparent, "border": "0", "color": "black"}
)], style={"border": f"{border_width} solid {color2}", 'border-radius': border_radius}
)
],
md=4, style={'width': '17%'}),
dbc.Col([
dbc.Row([
dbc.Col([
html.H1("etflix Explorer", style={"font-weight": "bold", "fontSize":70}),
], md=4, style={"color": "#E50914", "width": "88.8%"}),
dbc.Col([
dbc.Button(
"ⓘ",
id="popover-target",
className="sm",
style={"border": color2, "background": f"{color1}95", 'margin-top': "30px"},
),
dbc.Popover(
[
dbc.PopoverHeader("Welcome to Netflix Explorer!"),
dbc.PopoverBody([
html.P("This dashboard contains:"),
html.P("🎥 The map - Number of movies and TV shows produced worldwide"),
html.P("🎥 The directors plot - Top number of movies and TV shows produced by directors"),
html.P("🎥 The durations plots - Durations of movies and TV shows per selected genre")
]),
dbc.PopoverBody([
html.P("To filter the data displayed:"),
html.P("🎥 Select the desired Year, Genre, and Rating from the side bar")
])
],
target="popover-target",
trigger="legacy",
placement="bottom"
)
]),
]),
html.H3("Movies and TV shows produced worldwide",
style={"background": color1, "color": title_color,
'textAlign': 'center', 'border-radius': border_radius, "width": "94.5%"}),
html.Div([
html.Iframe(
id = "world_map",
srcDoc = world_map(["International", "Dramas", "Thrillers", "Comedies", "Action"],
['PG-13','TV-MA','PG','TV-14','TV-PG','TV-Y','R','TV-G','G','NC-17','NR'],
2021),
style={'border': '0', 'width': '100%', 'height': '500px', "margin-left": "30px", "margin-top": "20px"})
], style={"border": f"{border_width} solid {color2}", 'border-radius': border_radius,
"width": "94.5%", "height": "470px"}),
dbc.Row([
dbc.Col([
html.H3("Top 10 directors",
style={"background": color1, "color": title_color,
'textAlign': 'center', 'border-radius': border_radius}),
html.Div([
html.Iframe(
id="plot_directors",
srcDoc = plot_directors(["International", "Dramas", "Thrillers", "Comedies", "Action"],
['PG-13','TV-MA','PG','TV-14','TV-PG','TV-Y','R','TV-G','G','NC-17','NR'],
2021),
style={
"border-width": "1",
"width": "100%",
"height": "300px",
"top": "20%",
"left": "70%",
"margin-top": "25px"
},
),
], style={"border": f"{border_width} solid {color2}", 'border-radius': border_radius, "height": "300px"})
], md=4, style={"width": "55%"}),
dbc.Col([
html.H3("Durations",
style={"background": color1, "color": title_color,
"textAlign":
cameraproperties.brightnesscmd = str("-br " + str(brightness1) + " ")
print("new brightness set to = " + str(brightness1))
else:
pass
def setcontrast():
contrast1 = cameraproperties.contrast
print("current contrast = " + str(contrast1) )
print("enter new contrast value between -100 and 100 ")
contrastvar = input()
if checkinputisnumber(contrastvar):
contrastvar = int(contrastvar)
else:
print("Oh come on... not a valid option")
if checknumber(contrastvar,cameraproperties.contrastmax,cameraproperties.contrastmin,cameraproperties.contrastunit):
cameraproperties.contrast = contrastvar
contrast1 = cameraproperties.contrast
cameraproperties.contrastcmd = str("-co " + str(contrast1) + " ")
print("new contrast set to = " + str(contrast1))
else:
pass
def setsharpness():
sharpness1 = cameraproperties.sharpness
print("current sharpness = " + str(sharpness1))
print("enter new sharpness value between -100 and 100 ")
sharpnessvar = input()
if checkinputisnumber(sharpnessvar):
sharpnessvar = int(sharpnessvar)
else:
print("Really!! Not a valid option")
if checknumber(sharpnessvar,cameraproperties.sharpnessmax,cameraproperties.sharpnessmin,cameraproperties.sharpnessunit):
cameraproperties.sharpness = sharpnessvar
sharpness1 = cameraproperties.sharpness
cameraproperties.sharpnesscmd = str("-sh " + str(sharpness1) + " ")
print("new sharpness set to = " + str(sharpness1))
else:
pass
def setantiflicker():
antiflicker1 = cameraproperties.antiflicker
antiflicker1 = str(antiflicker1)
print("Current Antiflicker mode setting = " + antiflicker1)
print("Select new antiflicker mode value:")
print(" 1 = off ")
print(" 2 = auto ")
print(" 3 = 50 Hz ")
print(" 4 = 60 Hz ")
print("Enter value:")
antiflickervar = input()
if checkinputisnumber(antiflickervar):
antiflickervar = int(antiflickervar)
if antiflickervar == int(1):
cameraproperties.antiflicker = str("off")
elif antiflickervar == int(2):
cameraproperties.antiflicker = str("auto")
elif antiflickervar == int(3):
cameraproperties.antiflicker = str("50hz")
elif antiflickervar == int(4):
cameraproperties.antiflicker = str("60hz")
else:
print("are these my feet... invalid option")
antiflicker1 = cameraproperties.antiflicker
antiflicker1 = str(antiflicker1)
cameraproperties.antiflickercmd = ("-fli " + antiflicker1 + " ")
print("Camera antiflicker mode set to = " + antiflicker1)
else:
print("FFS! not a valid option")
def setburstmode():
if cameraproperties.burstmode:
print("burst mode is active: Enter 1 to keep active or enter 0 to turn off")
else:
print("burst mode is inactive: Enter 1 to Turn on or enter 0 to keep inactive")
burstmodevar = input()
if checkinputisnumber(burstmodevar):
burstmode1 = burstmodevar
burstmode1 = int(burstmode1)
if burstmode1==int(0):
cameraproperties.burstmode = False
cameraproperties.burstmodecmd = str(" ")
print("Burstmode inactive")
elif burstmode1==int(1):
cameraproperties.burstmode = True
cameraproperties.burstmodecmd = str("-bm ")
print("Burstmode active")
else:
print("Invalid option ..how did you even get here ")
else:
print(" so here we are again... Invalid option")
def setpreview():
print("The preview image is dumped on the HDMI output of the pi.")
print("Unless hooked up to a screen I'd leave it off")
print("Since this is a cmdline program to use over ssh")
print("If you are using a screen, why not use the way better PiCameraApp")
print("And help them implement the timelapse feature")
print(" Just sayin' like")
print(" It would be real cool if you did")
if cameraproperties.preview:
print("preview mode is active: Enter 1 to keep active or enter 0 to turn off")
else:
print("preview mode is inactive: Enter 1 to Turn on or enter 0 to keep inactive")
previewvar = input()
if checkinputisnumber(previewvar):
preview1 = previewvar
preview1 = int(preview1)
if preview1==int(0):
cameraproperties.preview = False
cameraproperties.previewcmd = str("-n ")
print("preview inactive")
elif preview1==int(1):
cameraproperties.preview = True
cameraproperties.previewcmd = str("")
print("preview active")
else:
print("Invalid option ")
else:
print(" <NAME>... Invalid option")
def setdimensions():
imgwidth1 = cameraproperties.imgwidth
imgheight1 = cameraproperties.imgheight
imgwidth1 = str(imgwidth1)
imgheight1 = str(imgheight1)
print("Current image dimensions are set as: " + "W " + imgwidth1 + " H " + imgheight1)
print("Set new dimensions:" )
print(" 1: W 4056 x H 3040")
print(" 2: W 2028 x H 1520")
print(" 3: W 1014 x H 760")
print(" 4: W 507 x H 380")
print(" 5: Custom")
print("")
print("Select Image dimensions")
dimensionsvar = input ()
if checkinputisnumber(dimensionsvar):
dimensionsvar = int(dimensionsvar)
if dimensionsvar>int(5):
print("not a valid option")
elif dimensionsvar<int(1):
print("not a valid option")
elif dimensionsvar == int(1):
cameraproperties.imgwidth = int(4056)
cameraproperties.imgheight = int(3040)
cameraproperties.imgwidthcmd = str("-w 4056 ")
cameraproperties.imgheightcmd = str("-h 3040 ")
elif dimensionsvar == int(2):
cameraproperties.imgwidth = int(2028)
cameraproperties.imgheight = int(1520)
cameraproperties.imgwidthcmd = str("-w 2028 ")
cameraproperties.imgheightcmd = str("-h 1520 ")
elif dimensionsvar == int(3):
cameraproperties.imgwidth = int(1014)
cameraproperties.imgheight = int(760)
cameraproperties.imgwidthcmd = str("-w 1014 ")
cameraproperties.imgheightcmd = str("-h 760 ")
elif dimensionsvar == int(4):
cameraproperties.imgwidth = int(507)
cameraproperties.imgheight = int(380)
cameraproperties.imgwidthcmd = str("-w 507 ")
cameraproperties.imgheightcmd = str("-h 380 ")
elif dimensionsvar == int(5):
print("Please enter new value for width (max value 4056): ")
customwidthvar = input()
if checkinputisnumber(customwidthvar):
customwidthvar = int(customwidthvar)
if customwidthvar>cameraproperties.imgwidthmax:
customwidthvar = cameraproperties.imgwidthmax
else:
pass
if customwidthvar<cameraproperties.imgwidthmin:
customwidthvar = cameraproperties.imgwidthmin
else:
pass
cameraproperties.imgwidth = customwidthvar
cameraproperties.imgwidthcmd = ("-w " + str(customwidthvar) + " ")
print("Please enter new value for height (max value 3040): ")
customheightvar = input()
if checkinputisnumber(customheightvar):
customheightvar = int(customheightvar)
if customheightvar>cameraproperties.imgheightmax:
customheightvar = cameraproperties.imgheightmax
else:
pass
if customheightvar<cameraproperties.imgheightmin:
customheightvar = cameraproperties.imgheightmin
else:
pass
cameraproperties.imgheight = customheightvar
cameraproperties.imgheightcmd = ("-h " + str(customheightvar) + " ")
else:
print("how did you get this error.. Good god man!")
imgwidth1 = cameraproperties.imgwidth
imgheight1 = cameraproperties.imgheight
imgwidth1 = str(imgwidth1)
imgheight1 = str(imgheight1)
print("Your dimensions are set to: " + "W " + imgwidth1 + " H " + imgheight1)
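The custom-dimension branch above pins out-of-range input to the configured limits. That clamp pattern in isolation (the 4056 maximum matches the menu above; the minimum of 64 is an assumption, standing in for `imgwidthmin`):

```python
# Clamp a requested dimension into [lo, hi], as setdimensions() does with
# cameraproperties.imgwidthmin/max and imgheightmin/max.
def clamp(value, lo, hi):
    return max(lo, min(value, hi))

print(clamp(5000, 64, 4056))  # → 4056
print(clamp(10, 64, 4056))    # → 64
print(clamp(1014, 64, 4056))  # → 1014
```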
def setdynamicrange():
dynamicrange1 = cameraproperties.dynamicrange
dynamicrange1 = str(dynamicrange1)
print("Current dynamic range mode setting = " + dynamicrange1)
print("Select new dynamic range mode value:")
print(" 1 = off ")
print(" 2 = low ")
print(" 3 = medium ")
print(" 4 = high ")
print("Enter value:")
dynamicrangevar = input()
if checkinputisnumber(dynamicrangevar):
dynamicrangevar = int(dynamicrangevar)
if dynamicrangevar == int(1):
cameraproperties.dynamicrange = str("off")
elif dynamicrangevar == int(2):
cameraproperties.dynamicrange = str("low")
elif dynamicrangevar == int(3):
cameraproperties.dynamicrange = str("med")
elif dynamicrangevar == int(4):
cameraproperties.dynamicrange = str("high")
else:
print("So this is kinda strange but... invalid option")
dynamicrange1 = cameraproperties.dynamicrange
dynamicrange1 = str(dynamicrange1)
cameraproperties.dynamicrangecmd = ("-drc " + dynamicrange1 + " ")
print("Camera dynamic range mode set to = " + dynamicrange1)
else:
print("not a valid option")
print(" Quick primer: Try using the number keys")
def setAandDgains():
Again1 = cameraproperties.Again
Again1 = str(Again1)
Dgain1 = cameraproperties.Dgain
Dgain1 = str(Dgain1)
if cameraproperties.AandDgains:
print("Manual gains are currently: On")
else:
print("Manual gains are currently: off")
print("Change manual gain status Y/N")
turngain = input()
inputresponse1 = str("Y")
inputresponse2 = str("y")
if turngain==inputresponse1 or turngain==inputresponse2:
if cameraproperties.AandDgains:
cameraproperties.AandDgains = False
cameraproperties.iso = str("100")
cameraproperties.isocmd = str("-ISO 100 ")
print("manual gain settings are turned off")
print("Camera Exposure mode set to Auto")
print("Camera ISO set to 100")
else:
cameraproperties.AandDgains = True
cameraproperties.exposuremode = str("")
cameraproperties.exposuremodecmd = str("")
cameraproperties.iso = str("")
cameraproperties.isocmd = str("")
print("manual gain settings are turned on")
if cameraproperties.AandDgains:
print("manual gain setting set to apply")
print("current gains are: Analogue " + Again1 + " Digital " + Dgain1)
print("Enter new values: y/n")
gainsconfirm = input()
if gainsconfirm==inputresponse1 or gainsconfirm==inputresponse2 :
print("enter analogue gain: (float value between 1 and 12)")
newAgain = input()
if checkinputisnumber(newAgain):
newAgain = float(newAgain)
if newAgain>float(12):
print("analogue gain greater than range and will set to max 12")
newAgain = 12
cameraproperties.Again = newAgain
elif newAgain<float(1):
print("analogue gain less than range and will set to min 1")
newAgain = float(1)
cameraproperties.Again = newAgain
else:
cameraproperties.Again = newAgain
print("enter Digital gain: (float value between 1 and 64)")
newDgain = input()
if checkinputisnumber(newDgain):
newDgain = float(newDgain)
if newDgain>float(64):
print("Digital gain greater than range and will set to max 64")
newDgain = 64
cameraproperties.Dgain = newDgain
elif newDgain<float(1):
print("Digital gain less than range and will set to min 1")
newDgain = float(1)
cameraproperties.Dgain = newDgain
else:
cameraproperties.Dgain = newDgain
else:
pass
if cameraproperties.AandDgains:
cameraproperties.AandDgainscmd = str("-ag " + str(cameraproperties.Again) + " " + "-dg " + str(cameraproperties.Dgain) + " ")
print("Current gain settings are: Analogue = " + str(cameraproperties.Again) + (" Digital = ") + str(cameraproperties.Dgain))
else:
cameraproperties.AandDgainscmd = str("")
print("Manual gain settings are off")
def setcameramode():
cameramode1 = cameraproperties.cameramode
cameramode1 = str(cameramode1)
print("Current camera mode setting = " + cameramode1)
print("Select new camera mode value:")
print(" 0 = auto ")
print(" 1 = 2028 x 1080 2x2 Binning ")
print(" 2 = 2028 x 1520 2x2 Binning ")
print("
import flask
import time
from contextlib import closing
from flask_login import current_user
from werkzeug.exceptions import HTTPException
import gal_utils
from gal_utils import text
from PIL import Image
import os
import os.path
import traceback
import sys
S3_dir = u'https://glyphic.s3.amazonaws.com/cfa/gallery/'
This_dir = os.path.dirname(__file__)
CC_names = {
u'by': u'Creative Commons Attribution 4.0 International',
u'by-sa': u'Creative Commons Attribution-ShareAlike 4.0 International',
u'by-nd': u'Creative Commons Attribution-NoDerivatives 4.0 International',
u'by-nc': u'Creative Commons Attribution-NonCommercial 4.0 International',
u'by-nc-sa': u'Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International',
u'by-nc-nd': u'Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International'
}
class Design:
Query_base = (u'SELECT designid, owner, title, variation, tiled, ccURI, ccName, '
u'ccImage, filelocation, S3, imageversion, imagelocation, '
u'thumblocation, sm_thumblocation, numvotes, '
u'UNIX_TIMESTAMP(whenuploaded) AS uploaddate, '
u'CONVERT(notes USING utf8) as notes FROM gal_designs ')
Query_base_d = (u'SELECT d.designid, d.owner, d.title, d.variation, d.tiled, d.ccURI, '
u'd.ccName, d.ccImage, d.filelocation, d.S3, d.imageversion, '
u'd.imagelocation, d.thumblocation, d.sm_thumblocation, d.numvotes, '
u'UNIX_TIMESTAMP(d.whenuploaded) AS uploaddate, '
u'CONVERT(d.notes USING utf8) AS notes FROM gal_designs AS d ')
Query_CC = u'ccURI LIKE "%creativecommons.org%"'
def init(self, **data):
try:
if 'designid' in data:
id = int(data['designid'])
if id < 0:
flask.abort(400,u'Illegal Design ID.')
if hasattr(self, 'designid') and self.designid != id:
flask.abort(400,u'Design ID cannot be changed.')
self.designid = id
if 'owner' in data:
if not gal_utils.legalOwner(data['owner']):
flask.abort(400,u'Bad owner.')
if hasattr(self, 'owner') and self.owner != data['owner']:
flask.abort(400,u'Design owner cannot be changed.')
self.owner = data['owner']
if 'title' in data and len(data['title']) > 0:
title = data['title'].strip()
if (len(title) < 3 or len(title) > 100):
flask.abort(400,u'The title must be between 3 and 100 characters.')
self.title = title
if 'variation' in data:
var = data['variation'].strip()
if not gal_utils.legalVariation(var):
flask.abort(400,u'Illegal variation.')
self.variation = var
if 'tiled' in data:
tiled = int(data['tiled'])
if tiled < 0 or tiled > 3:
flask.abort(400,u'Illegal tile type.')
self.tiled = tiled
if 'ccURI' in data and 'ccName' in data and 'ccImage' in data:
if gal_utils.validateLicense(data):
self.ccURI = data['ccURI']
self.ccName = data['ccName']
self.ccImage = data['ccImage']
else:
self.ccURI = u''
self.ccName = u''
self.ccImage = u'No license chosen'
if 'cclicense' in data and isinstance(data['cclicense'], text):
if data['cclicense'] == u'zero':
self.ccURI = u'https://creativecommons.org/publicdomain/zero/1.0/'
self.ccName = u'CC0 1.0 Universal (CC0 1.0) Public Domain Dedication'
self.ccImage = u'http://i.creativecommons.org/p/zero/1.0/88x31.png'
elif data['cclicense'] in CC_names:
self.ccURI = u'https://creativecommons.org/licenses/' + data['cclicense'] + u'/4.0/'
self.ccName = CC_names[data['cclicense']]
self.ccImage = u'http://i.creativecommons.org/l/' + data['cclicense'] + u'/4.0/88x31.png'
elif data['cclicense'] != u'-':
self.ccURI = u''
self.ccName = u''
self.ccImage = u'No license chosen'
if 'filelocation' in data:
if not gal_utils.legalFilePath(data['filelocation'], True):
flask.abort(400,u'Illegal cfdg file specification.')
self.filelocation = data['filelocation']
if 'S3' in data:
if isinstance(data['S3'], bool):
self.S3 = data['S3']
elif isinstance(data['S3'], text):
if data['S3'] == u'Y':
self.S3 = True
elif data['S3'] == u'N':
self.S3 = False
else:
flask.abort(400,u'Illegal enum value for S3 flag.')
else:
flask.abort(400,u'S3 flag must be bool or enum Y/N.')
if 'imageversion' in data:
v = int(data['imageversion'])
if v < 0: flask.abort(400,u'Illegal image version.')
self.imageversion = v
if 'imagelocation' in data:
if not gal_utils.legalFilePath(data['imagelocation'], False):
flask.abort(400,u'Illegal image file specification.')
self.imagelocation = data['imagelocation']
if 'thumblocation' in data:
if not gal_utils.legalFilePath(data['thumblocation'], False):
flask.abort(400,u'Illegal image file specification.')
self.thumblocation = data['thumblocation']
if 'sm_thumblocation' in data:
if not gal_utils.legalFilePath(data['sm_thumblocation'], False):
flask.abort(400,u'Illegal image file specification.')
self.sm_thumblocation = data['sm_thumblocation']
if 'numvotes' in data:
num = int(data['numvotes'])
if num < 0: flask.abort(400,u'Illegal vote count.')
self.numvotes = num
if 'uploaddate' in data:
if isinstance(data['uploaddate'], int):
if data['uploaddate'] < 1104566400:
flask.abort(400,u'Upload date before 2005')
else:
self.uploaddate = data['uploaddate']
else:
flask.abort(400,u'Upload date must be a POSIX timestamp int.')
if 'notes' in data:
if len(data['notes']) > 1000:
flask.abort(400,u'Notes cannot be longer than 1000 bytes.')
self.notes = data['notes']
except HTTPException:
raise
except:
if 'wsgi.errors' in os.environ:
traceback.print_exc(None, os.environ['wsgi.errors'])
else:
traceback.print_exc()
flask.abort(400,u'Cannot instantiate a design.')
def __init__(self, **data):
self.init(**data)
def serialize(self):
return dict(self)
def imageHelper(self, url):
vurl = url + u"?" + text(self.imageversion)
if self.S3:
return S3_dir + vurl
else:
return vurl
def __iter__(self):
yield 'designid', self.designid
yield 'owner', self.owner
yield 'title', self.title
yield 'variation', self.variation
yield 'tiled', self.tiled
yield 'filelocation', self.filelocation
yield 'imagelocation', self.imageHelper(self.imagelocation)
yield 'thumblocation', self.imageHelper(self.thumblocation)
yield 'smthumblocation', self.imageHelper(self.sm_thumblocation)
yield 'numvotes', self.numvotes
yield 'notes', self.notes
yield 'notesmd', gal_utils.translate2Markdown(self.notes)
yield 'ccURI', self.ccURI
yield 'ccName', self.ccName
yield 'ccImage', self.ccImage
yield 'uploaddate', self.uploaddate
if hasattr(self, 'tags'):
yield 'tags', self.tags
if hasattr(self, 'fans'):
yield 'fans', self.fans
if hasattr(self, 'imagesize'):
yield 'imagesize', self.imagesize
if hasattr(self, 'thumbsize'):
yield 'thumbsize', self.thumbsize
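`serialize()` above returns `dict(self)`, which works because `dict()` consumes the (key, value) tuples that `__iter__` yields. A stripped-down illustration of the same pattern (the `Pair` class is invented for the example):

```python
class Pair:
    # dict() iterates the object, collecting each yielded (key, value)
    # tuple into a dictionary entry -- the same trick Design uses.
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __iter__(self):
        yield 'x', self.x
        yield 'y', self.y

    def serialize(self):
        return dict(self)
```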
def normalize(self):
try:
if not hasattr(self, 'designid'):
self.designid = 0 # INSERT new design
elif self.designid < 0:
flask.abort(400,u'Bad design id.')
if not hasattr(self, 'owner'):
u = current_user
if not u.is_authenticated or self.designid > 0:
flask.abort(400,u'A design must have an owner.')
self.owner = u.id
if not gal_utils.legalOwner(self.owner):
flask.abort(400,'Bad owner.')
if not hasattr(self, 'title'):
flask.abort(400,u'A design must have a title.')
elif self.title != self.title.strip() or len(self.title) < 3 or len(self.title) > 100:
flask.abort(400,'Bad title.')
if not hasattr(self, 'variation'):
self.variation = u''
elif not gal_utils.legalVariation(self.variation):
flask.abort(400,u'Bad variation.')
if not hasattr(self, 'tiled'):
self.tiled = 0
elif self.tiled < 0 or self.tiled > 3:
flask.abort(400,u'Bad tiling state.')
if not hasattr(self, 'S3'):
self.S3 = False
elif not isinstance(self.S3, bool):
flask.abort(400,u'Bad S3 state.')
if not hasattr(self, 'filelocation'):
self.filelocation = u''
elif not gal_utils.legalFilePath(self.filelocation, True):
flask.abort(400,u'Bad cfdg file path.')
if not hasattr(self, 'imagelocation'):
self.imagelocation = u''
elif not gal_utils.legalFilePath(self.imagelocation, False):
flask.abort(400,u'Bad image file path.')
if not hasattr(self, 'thumblocation'):
self.thumblocation = u''
elif not gal_utils.legalFilePath(self.thumblocation, False):
flask.abort(400,u'Bad image file path.')
if not hasattr(self, 'sm_thumblocation'):
self.sm_thumblocation = u''
elif not gal_utils.legalFilePath(self.sm_thumblocation, False):
flask.abort(400,u'Bad image file path.')
if not hasattr(self, 'imageversion'):
self.imageversion = 0
elif self.imageversion < 0:
flask.abort(400,u'Bad image version.')
if not hasattr(self, 'numvotes'):
self.numvotes = 0
elif self.numvotes < 0:
flask.abort(400,u'Bad vote count.')
if not hasattr(self, 'notes'):
self.notes = u''
elif len(self.notes) > 1000:
flask.abort(400,u'Notes cannot be longer than 1000 bytes.')
if hasattr(self, 'ccURI') and hasattr(self, 'ccName') and hasattr(self, 'ccImage'):
if not gal_utils.validateLicense(self.__dict__):
self.ccURI = u''
self.ccName = u''
self.ccImage = u'No license chosen'
else:
self.ccURI = u''
self.ccName = u''
self.ccImage = u'No license chosen'
if hasattr(self, 'uploaddate'):
if self.uploaddate < 1104566400:
flask.abort(400,u'Upload date before 2005')
else:
self.uploaddate = int(time.time())
except HTTPException:
raise
except:
if 'wsgi.errors' in os.environ:
traceback.print_exc(None, os.environ['wsgi.errors'])
else:
traceback.print_exc()
flask.abort(400,u'Cannot normalize a design.')
def ready4display(self):
return (gal_utils.legalFilePath(self.filelocation, True) and
gal_utils.legalFilePath(self.imagelocation, False) and
gal_utils.legalFilePath(self.thumblocation, False) and
gal_utils.legalFilePath(self.sm_thumblocation, False))
def archive(self):
db = gal_utils.get_db()
with closing(db.cursor(buffered=True)) as cursor:
cursor.execute(u'SELECT S3 FROM gal_designs WHERE designid=%s', (self.designid,))
if cursor.rowcount != 1: return False
data = cursor.fetchone()
if not isinstance(data[0], text): return False
if data[0] == u'Y': return True
cursor.execute(u'UPDATE gal_designs SET S3 = "Y" WHERE designid=%s', (self.designid,))
return cursor.rowcount == 1
def save(self):
db = gal_utils.get_db()
owner = current_user
with closing(db.cursor(buffered=True)) as cursor:
if self.designid == 0:
cursor.execute(u'INSERT INTO gal_designs (owner, title, '
u'variation, tiled, ccURI, ccName, ccImage, S3, '
u'imageversion, numvotes, whenuploaded, notes) '
u'VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, NOW(), %s)',
(self.owner,self.title,self.variation,self.tiled,
self.ccURI,self.ccName,self.ccImage,u'Y' if self.S3 else u'N',
self.imageversion,self.numvotes,self.notes))
self.designid = cursor.lastrowid
if cursor.rowcount == 1:
owner.numposts += 1
else:
cursor.execute(u'UPDATE gal_designs SET title=%s, variation=%s, '
u'tiled=%s, ccURI=%s, ccName=%s, ccImage=%s, S3=%s, '
u'notes=%s WHERE designid=%s',
(self.title,self.variation,self.tiled,self.ccURI,
self.ccName,self.ccImage,u'Y' if self.S3 else u'N',
self.notes,self.designid))
if cursor.rowcount == 1:
owner.ccURI = self.ccURI
owner.save()
return self.designid
return None
def DeleteDesign(design_id):
db = gal_utils.get_db()
with closing(db.cursor(dictionary=True, buffered=True)) as cursor:
query = Design.Query_base + u'WHERE designid=%s'
cursor.execute(query, (design_id,))
if cursor.rowcount != 1:
return flask.abort(404, u'Cannot find design to delete.')
design = Design(**cursor.fetchone())
if not gal_utils.validateOwner(design.owner):
flask.abort(401, u'Unauthorized to delete this design.')
# TODO update tag counts
cursor.execute(u'DELETE FROM gal_designs WHERE designid=%s', (design_id,))
if cursor.rowcount != 1:
flask.abort(500, u'Design failed to be deleted.')
cursor.execute(u'DELETE FROM gal_comments WHERE designid=%s', (design_id,))
cursor.execute(u'DELETE FROM gal_favorites WHERE designid=%s', (design_id,))
cursor.execute(u'UPDATE gal_users SET numposts = numposts - 1 WHERE screenname=%s',
(design.owner,))
files = [design.filelocation, design.imagelocation,
design.thumblocation, design.sm_thumblocation]
for file in files:
if os.path.isfile(file):
try:
os.unlink(file)
except OSError:
pass
def UnaddDesign(design_id):
db = gal_utils.get_db()
with closing(db.cursor(dictionary=True, buffered=True)) as cursor:
query = Design.Query_base + u'WHERE designid=%s'
cursor.execute(query, (design_id,))
if cursor.rowcount != 1:
return
designdata = cursor.fetchone()
cursor.execute(u'DELETE FROM gal_designs WHERE designid=%s', (design_id,))
if cursor.rowcount != 1:
return
cursor.execute(u'DELETE FROM gal_comments WHERE designid=%s', (design_id,))
cursor.execute(u'DELETE FROM gal_favorites WHERE designid=%s', (design_id,))
cursor.execute(u'UPDATE gal_users SET numposts = numposts - 1 WHERE screenname=%s',
(designdata['owner'],))
files = []
if 'filelocation'
pt0, pt1 = t==0 and (self.point(t), self.point(t+0.001)) or (self.point(t-0.001), self.point(t))
return geometry.angle(pt0.x, pt0.y, pt1.x, pt1.y)
def point(self, t):
""" Returns the PathElement at time t (0.0-1.0) on the path.
See the linear interpolation math in bezier.py.
"""
if self._segments is None:
self._segments = bezier.length(self, segmented=True, n=10)
return bezier.point(self, t, segments=self._segments)
def points(self, amount=2, start=0.0, end=1.0):
""" Returns a list of PathElements along the path.
To omit the last point on closed paths: end=1-1.0/amount
"""
if self._segments is None:
self._segments = bezier.length(self, segmented=True, n=10)
return bezier.points(self, amount, start, end, segments=self._segments)
def addpoint(self, t):
""" Inserts a new PathElement at time t (0.0-1.0) on the path.
"""
self._segments = None
self._index = {}
return bezier.insert_point(self, t)
split = addpoint
@property
def length(self, precision=10):
""" Returns an approximation of the total length of the path.
"""
return bezier.length(self, segmented=False, n=precision)
@property
def contours(self):
""" Returns a list of contours (i.e. segments separated by a MOVETO) in the path.
Each contour is a BezierPath object.
"""
return bezier.contours(self)
@property
def bounds(self, precision=100):
""" Returns a (x, y, width, height)-tuple of the approximate path dimensions.
"""
# In _update(), traverse all the points and check if they have changed.
# If so, the bounds must be recalculated.
self._update()
if self._bounds is None:
l = t = float("inf")
r = b = float("-inf")
for pt in self.points(precision):
if pt.x < l: l = pt.x
if pt.y < t: t = pt.y
if pt.x > r: r = pt.x
if pt.y > b: b = pt.y
self._bounds = (l, t, r-l, b-t)
return self._bounds
def contains(self, x, y, precision=100):
""" Returns True when point (x,y) falls within the contours of the path.
"""
bx, by, bw, bh = self.bounds
if bx <= x <= bx+bw and \
by <= y <= by+bh:
if self._polygon is None \
or self._polygon[1] != precision:
self._polygon = [(pt.x,pt.y) for pt in self.points(precision)], precision
# Ray casting algorithm:
return geometry.point_in_polygon(self._polygon[0], x, y)
return False
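`contains()` above delegates the ray-casting step to `geometry.point_in_polygon` after a cheap bounding-box rejection. A minimal sketch of that classic test (an assumption about the helper's behaviour, not the library's actual code):

```python
def point_in_polygon(points, x, y):
    # Ray casting: cast a horizontal ray from (x, y) and count how many
    # polygon edges it crosses; an odd count means the point is inside.
    inside = False
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        if (y0 < y <= y1) or (y1 < y <= y0):
            # x-coordinate where this edge crosses the ray's height.
            if x0 + (y - y0) / (y1 - y0) * (x1 - x0) < x:
                inside = not inside
    return inside
```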
def hash(self, state=None, decimal=1):
""" Returns the path id, based on the position and handles of its PathElements.
Two distinct BezierPath objects that draw the same path therefore have the same id.
"""
f = lambda x: int(x*10**decimal) # Quantize floats to ints at the given decimal precision.
id = [state]
for pt in self: id.extend((
pt.cmd, f(pt.x), f(pt.y), f(pt.ctrl1.x), f(pt.ctrl1.y), f(pt.ctrl2.x), f(pt.ctrl2.y)))
id = str(id)
id = md5(id).hexdigest()
return id
def __repr__(self):
return "BezierPath(%s)" % repr(list(self))
def __del__(self):
# Note: it is important that __del__() is called since it unloads the cache from GPU.
# BezierPath and PathElement should contain no circular references, e.g. no PathElement.parent.
if hasattr(self, "_cache") and self._cache is not None and flush:
if self._cache[0]: flush(self._cache[0])
if self._cache[1]: flush(self._cache[1])
def drawpath(path, **kwargs):
""" Draws the given BezierPath (or list of PathElements).
The current stroke, strokewidth and fill color are applied.
"""
if not isinstance(path, BezierPath):
path = BezierPath(path)
path.draw(**kwargs)
_autoclosepath = True
def autoclosepath(close=False):
""" Paths constructed with beginpath() and endpath() are automatically closed.
"""
global _autoclosepath
_autoclosepath = close
_path = None
def beginpath(x, y):
""" Starts a new path at (x,y).
The commands moveto(), lineto(), curveto() and closepath()
can then be used between beginpath() and endpath() calls.
"""
global _path
_path = BezierPath()
_path.moveto(x, y)
def moveto(x, y):
""" Moves the current point in the current path to (x,y).
"""
if _path is None:
raise NoCurrentPath
_path.moveto(x, y)
def lineto(x, y):
""" Draws a line from the current point in the current path to (x,y).
"""
if _path is None:
raise NoCurrentPath
_path.lineto(x, y)
def curveto(x1, y1, x2, y2, x3, y3):
""" Draws a curve from the current point in the current path to (x3,y3).
The curvature is determined by control handles x1, y1 and x2, y2.
"""
if _path is None:
raise NoCurrentPath
_path.curveto(x1, y1, x2, y2, x3, y3)
def closepath():
""" Closes the current path with a straight line to the last MOVETO.
"""
if _path is None:
raise NoCurrentPath
_path.closepath()
def endpath(draw=True, **kwargs):
""" Draws and returns the current path.
With draw=False, only returns the path so it can be manipulated and drawn with drawpath().
"""
global _path, _autoclosepath
if _path is None:
raise NoCurrentPath
if _autoclosepath is True:
_path.closepath()
if draw:
_path.draw(**kwargs)
p, _path = _path, None
return p
def findpath(points, curvature=1.0):
""" Returns a smooth BezierPath from the given list of (x,y)-tuples.
"""
return bezier.findpath(list(points), curvature)
Path = BezierPath
#--- BEZIER EDITOR -----------------------------------------------------------------------------------
EQUIDISTANT = "equidistant"
IN, OUT, BOTH = "in", "out", "both" # Drag pt1.ctrl2, pt2.ctrl1 or both simultaneously?
class BezierEditor:
def __init__(self, path):
self.path = path
def _nextpoint(self, pt):
i = self.path.index(pt) # BezierPath caches this operation.
return i < len(self.path)-1 and self.path[i+1] or None
def translate(self, pt, x=0, y=0, h1=(0,0), h2=(0,0)):
""" Translates the point and its control handles by (x,y).
Translates the incoming handle by h1 and the outgoing handle by h2.
"""
pt1, pt2 = pt, self._nextpoint(pt)
pt1.x += x
pt1.y += y
pt1.ctrl2.x += x + h1[0]
pt1.ctrl2.y += y + h1[1]
if pt2 is not None:
pt2.ctrl1.x += x + (pt2.cmd == CURVETO and h2[0] or 0)
pt2.ctrl1.y += y + (pt2.cmd == CURVETO and h2[1] or 0)
def rotate(self, pt, angle, handle=BOTH):
""" Rotates the point control handles by the given angle.
"""
pt1, pt2 = pt, self._nextpoint(pt)
if handle == BOTH or handle == IN:
pt1.ctrl2.x, pt1.ctrl2.y = geometry.rotate(pt1.ctrl2.x, pt1.ctrl2.y, pt1.x, pt1.y, angle)
if (handle == BOTH or handle == OUT) and pt2 is not None and pt2.cmd == CURVETO:
pt2.ctrl1.x, pt2.ctrl1.y = geometry.rotate(pt2.ctrl1.x, pt2.ctrl1.y, pt1.x, pt1.y, angle)
def scale(self, pt, v, handle=BOTH):
""" Scales the point control handles by the given factor.
"""
pt1, pt2 = pt, self._nextpoint(pt)
if handle == BOTH or handle == IN:
pt1.ctrl2.x, pt1.ctrl2.y = bezier.linepoint(v, pt1.x, pt1.y, pt1.ctrl2.x, pt1.ctrl2.y)
if (handle == BOTH or handle == OUT) and pt2 is not None and pt2.cmd == CURVETO:
pt2.ctrl1.x, pt2.ctrl1.y = bezier.linepoint(v, pt1.x, pt1.y, pt2.ctrl1.x, pt2.ctrl1.y)
def smooth(self, pt, mode=None, handle=BOTH):
pt1, pt2, i = pt, self._nextpoint(pt), self.path.index(pt)
if pt2 is None:
return
if pt1.cmd == pt2.cmd == CURVETO:
if mode == EQUIDISTANT:
d1 = d2 = 0.5 * (
geometry.distance(pt1.x, pt1.y, pt1.ctrl2.x, pt1.ctrl2.y) + \
geometry.distance(pt1.x, pt1.y, pt2.ctrl1.x, pt2.ctrl1.y))
else:
d1 = geometry.distance(pt1.x, pt1.y, pt1.ctrl2.x, pt1.ctrl2.y)
d2 = geometry.distance(pt1.x, pt1.y, pt2.ctrl1.x, pt2.ctrl1.y)
if handle == IN:
a = geometry.angle(pt1.x, pt1.y, pt1.ctrl2.x, pt1.ctrl2.y)
if handle == OUT:
a = geometry.angle(pt2.ctrl1.x, pt2.ctrl1.y, pt1.x, pt1.y)
if handle == BOTH:
a = geometry.angle(pt2.ctrl1.x, pt2.ctrl1.y, pt1.ctrl2.x, pt1.ctrl2.y)
pt1.ctrl2.x, pt1.ctrl2.y = geometry.coordinates(pt1.x, pt1.y, d1, a)
pt2.ctrl1.x, pt2.ctrl1.y = geometry.coordinates(pt1.x, pt1.y, d2, a-180)
elif pt1.cmd == CURVETO and pt2.cmd == LINETO:
d = mode == EQUIDISTANT and \
geometry.distance(pt1.x, pt1.y, pt2.x, pt2.y) or \
geometry.distance(pt1.x, pt1.y, pt1.ctrl2.x, pt1.ctrl2.y)
a = geometry.angle(pt1.x, pt1.y, pt2.x, pt2.y)
pt1.ctrl2.x, pt1.ctrl2.y = geometry.coordinates(pt1.x, pt1.y, d, a-180)
elif pt1.cmd == LINETO and pt2.cmd == CURVETO and i > 0:
d = mode == EQUIDISTANT and \
geometry.distance(pt1.x, pt1.y, self.path[i-1].x, self.path[i-1].y) or \
geometry.distance(pt1.x, pt1.y, pt2.ctrl1.x, pt2.ctrl1.y)
a = geometry.angle(self.path[i-1].x, self.path[i-1].y, pt1.x, pt1.y)
pt2.ctrl1.x, pt2.ctrl1.y = geometry.coordinates(pt1.x, pt1.y, d, a)
#--- POINT ANGLES ------------------------------------------------------------------------------------
def directed(points):
""" Returns an iterator that yields (angle, point)-tuples for the given list of points.
The angle represents the direction of the point on the path.
This works with BezierPath, BezierPath.points, [pt1, pt2, pt3, ...]
For example:
for a, pt in directed(path.points(30)):
push()
translate(pt.x, pt.y)
rotate(a)
arrow(0, 0, 10)
pop()
This is useful if you want to have shapes following a path.
To put text on a path, rotate the angle by +-90 to get the normal (i.e. perpendicular).
"""
p = list(points)
n = len(p)
for i, pt in enumerate(p):
if 0 < i < n-1 and pt.__dict__.get("_cmd") == CURVETO:
# For a point on a curve, the control handle gives the best direction.
# For
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import with_statement
import abc
import sys
import pprint
import datetime
from python_utils import converters
import six
from . import base
from . import utils
MAX_DATE = datetime.date.max
MAX_TIME = datetime.time.max
MAX_DATETIME = datetime.datetime.max
def string_or_lambda(input_):
if isinstance(input_, six.string_types):
def render_input(progress, data, width):
return input_ % data
return render_input
else:
return input_
def create_marker(marker):
def _marker(progress, data, width):
if progress.max_value is not base.UnknownLength \
and progress.max_value > 0:
length = int(progress.value / progress.max_value * width)
return (marker * length)
else:
return marker
if isinstance(marker, six.string_types):
marker = converters.to_unicode(marker)
assert utils.len_color(marker) == 1, \
'Markers are required to be 1 char'
return _marker
else:
return marker
class FormatWidgetMixin(object):
'''Mixin to format widgets using a formatstring
Variables available:
- max_value: The maximum value (can be None with iterators)
- value: The current value
- total_seconds_elapsed: The seconds since the bar started
- seconds_elapsed: The seconds since the bar started modulo 60
- minutes_elapsed: The minutes since the bar started modulo 60
- hours_elapsed: The hours since the bar started modulo 24
- days_elapsed: The days since the bar started
- time_elapsed: Shortcut for HH:MM:SS time since the bar started including
days
- percentage: Percentage as a float
'''
required_values = []
def __init__(self, format, new_style=False, **kwargs):
self.new_style = new_style
self.format = format
def __call__(self, progress, data, format=None):
'''Formats the widget into a string'''
try:
if self.new_style:
return (format or self.format).format(**data)
else:
return (format or self.format) % data
except (TypeError, KeyError):
print('Error while formatting %r' % self.format, file=sys.stderr)
pprint.pprint(data, stream=sys.stderr)
raise
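The `new_style` flag above switches between printf-style and `str.format`-style templates. A condensed illustration of the two modes (the `render` helper and the data values are invented for the example):

```python
def render(fmt, data, new_style=False):
    # Mirrors FormatWidgetMixin.__call__: str.format with new_style=True,
    # classic %-interpolation otherwise.
    return fmt.format(**data) if new_style else fmt % data

data = {'value': 42, 'max_value': 100}
old = render('Progress: %(value)d/%(max_value)d', data)
new = render('Progress: {value}/{max_value}', data, new_style=True)
```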
class WidthWidgetMixin(object):
'''Mixin to make sure widgets are only visible if the screen is within a
specified size range so the progressbar fits on both large and small
screens.
Variables available:
- min_width: Only display the widget if at least `min_width` is left
- max_width: Only display the widget if at most `max_width` is left
>>> class Progress(object):
... term_width = 0
>>> WidthWidgetMixin(5, 10).check_size(Progress)
False
>>> Progress.term_width = 5
>>> WidthWidgetMixin(5, 10).check_size(Progress)
True
>>> Progress.term_width = 10
>>> WidthWidgetMixin(5, 10).check_size(Progress)
True
>>> Progress.term_width = 11
>>> WidthWidgetMixin(5, 10).check_size(Progress)
False
'''
def __init__(self, min_width=None, max_width=None, **kwargs):
self.min_width = min_width
self.max_width = max_width
def check_size(self, progress):
if self.min_width and self.min_width > progress.term_width:
return False
elif self.max_width and self.max_width < progress.term_width:
return False
else:
return True
class WidgetBase(WidthWidgetMixin):
__metaclass__ = abc.ABCMeta
'''The base class for all widgets
The ProgressBar will call the widget's update value when the widget should
be updated. The widget's size may change between calls, but the widget may
display incorrectly if the size changes drastically and repeatedly.
The INTERVAL timedelta informs the ProgressBar that the widget is
time sensitive and should be updated more often.
The widgets are only visible if the screen is within a
specified size range so the progressbar fits on both large and small
screens.
WARNING: Widgets can be shared between multiple progressbars so any state
information specific to a progressbar should be stored within the
progressbar instead of the widget.
Variables available:
- min_width: Only display the widget if at least `min_width` is left
- max_width: Only display the widget if at most `max_width` is left
- weight: Widgets with a higher `weight` will be calculated before widgets
with a lower one
'''
@abc.abstractmethod
def __call__(self, progress, data):
'''Updates the widget.
progress - a reference to the calling ProgressBar
'''
class AutoWidthWidgetBase(WidgetBase):
'''The base class for all variable width widgets.
This widget is much like the \\hfill command in TeX, it will expand to
fill the line. You can use more than one in the same line, and they will
all have the same width, and together will fill the line.
'''
@abc.abstractmethod
def __call__(self, progress, data, width):
'''Updates the widget providing the total width the widget must fill.
progress - a reference to the calling ProgressBar
width - The total width the widget must fill
'''
class TimeSensitiveWidgetBase(WidgetBase):
'''The base class for all time sensitive widgets.
Some widgets like timers would become out of date unless updated at least
every `INTERVAL`
'''
INTERVAL = datetime.timedelta(milliseconds=100)
class FormatLabel(FormatWidgetMixin, WidgetBase):
'''Displays a formatted label
>>> label = FormatLabel('%(value)s', min_width=5, max_width=10)
>>> class Progress(object):
... pass
>>> label = FormatLabel('{value} :: {value:^6}', new_style=True)
>>> str(label(Progress, dict(value='test')))
'test ::  test '
'''
mapping = {
'finished': ('end_time', None),
'last_update': ('last_update_time', None),
'max': ('max_value', None),
'seconds': ('seconds_elapsed', None),
'start': ('start_time', None),
'elapsed': ('total_seconds_elapsed', utils.format_time),
'value': ('value', None),
}
def __init__(self, format, **kwargs):
FormatWidgetMixin.__init__(self, format=format, **kwargs)
WidgetBase.__init__(self, **kwargs)
def __call__(self, progress, data, **kwargs):
for name, (key, transform) in self.mapping.items():
try:
if transform is None:
data[name] = data[key]
else:
data[name] = transform(data[key])
except (KeyError, ValueError, IndexError): # pragma: no cover
pass
return FormatWidgetMixin.__call__(self, progress, data, **kwargs)
class Timer(FormatLabel, TimeSensitiveWidgetBase):
'''WidgetBase which displays the elapsed seconds.'''
def __init__(self, format='Elapsed Time: %(elapsed)s', **kwargs):
FormatLabel.__init__(self, format=format, **kwargs)
TimeSensitiveWidgetBase.__init__(self, **kwargs)
# This is exposed as a static method for backwards compatibility
format_time = staticmethod(utils.format_time)
class SamplesMixin(TimeSensitiveWidgetBase):
'''
Mixin for widgets that average multiple measurements
Note that samples can be either an integer or a timedelta to indicate a
certain amount of time
>>> class progress:
... last_update_time = datetime.datetime.now()
... value = 1
... extra = dict()
>>> samples = SamplesMixin(samples=2)
>>> samples(progress, None, True)
(None, None)
>>> progress.last_update_time += datetime.timedelta(seconds=1)
>>> samples(progress, None, True) == (datetime.timedelta(seconds=1), 0)
True
>>> progress.last_update_time += datetime.timedelta(seconds=1)
>>> samples(progress, None, True) == (datetime.timedelta(seconds=1), 0)
True
>>> samples = SamplesMixin(samples=datetime.timedelta(seconds=1))
>>> _, value = samples(progress, None)
>>> value
[1, 1]
>>> samples(progress, None, True) == (datetime.timedelta(seconds=1), 0)
True
'''
def __init__(self, samples=datetime.timedelta(seconds=2), key_prefix=None,
**kwargs):
self.samples = samples
self.key_prefix = (self.__class__.__name__ or key_prefix) + '_'
TimeSensitiveWidgetBase.__init__(self, **kwargs)
def get_sample_times(self, progress, data):
return progress.extra.setdefault(self.key_prefix + 'sample_times', [])
def get_sample_values(self, progress, data):
return progress.extra.setdefault(self.key_prefix + 'sample_values', [])
def __call__(self, progress, data, delta=False):
sample_times = self.get_sample_times(progress, data)
sample_values = self.get_sample_values(progress, data)
if sample_times:
sample_time = sample_times[-1]
else:
sample_time = datetime.datetime.min
if progress.last_update_time - sample_time > self.INTERVAL:
# Add a sample but limit the size to `num_samples`
sample_times.append(progress.last_update_time)
sample_values.append(progress.value)
if isinstance(self.samples, datetime.timedelta):
minimum_time = progress.last_update_time - self.samples
minimum_value = sample_values[-1]
while (sample_times[2:] and
minimum_time > sample_times[1] and
minimum_value > sample_values[1]):
sample_times.pop(0)
sample_values.pop(0)
else:
if len(sample_times) > self.samples:
sample_times.pop(0)
sample_values.pop(0)
if delta:
delta_time = sample_times[-1] - sample_times[0]
delta_value = sample_values[-1] - sample_values[0]
if delta_time:
return delta_time, delta_value
else:
return None, None
else:
return sample_times, sample_values
class ETA(Timer):
'''WidgetBase which attempts to estimate the time of arrival.'''
def __init__(
self,
format_not_started='ETA: --:--:--',
format_finished='Time: %(elapsed)8s',
format='ETA: %(eta)8s',
format_zero='ETA: 00:00:00',
format_NA='ETA: N/A',
**kwargs):
Timer.__init__(self, **kwargs)
self.format_not_started = format_not_started
self.format_finished = format_finished
self.format = format
self.format_zero = format_zero
self.format_NA = format_NA
def _calculate_eta(self, progress, data, value, elapsed):
'''Updates the widget to show the ETA or total time when finished.'''
if elapsed:
# The max() prevents zero division errors
per_item = elapsed.total_seconds() / max(value, 1e-6)
remaining = progress.max_value - data['value']
eta_seconds = remaining * per_item
else:
eta_seconds = 0
return eta_seconds
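The arithmetic in `_calculate_eta` is simply seconds-per-item times items remaining (the `max(value, 1e-6)` guards against division by zero). Worked through with assumed numbers:

```python
import datetime

# 25 of 100 items finished after 30 s: 1.2 s per item, 75 items left.
elapsed = datetime.timedelta(seconds=30)
value, max_value = 25, 100

per_item = elapsed.total_seconds() / max(value, 1e-6)
eta_seconds = (max_value - value) * per_item  # about 90 seconds remaining
```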
def __call__(self, progress, data, value=None, elapsed=None):
'''Updates the widget to show the ETA or total time when finished.'''
if value is None:
value = data['value']
if elapsed is None:
elapsed = data['time_elapsed']
ETA_NA = False
try:
data['eta_seconds'] = self._calculate_eta(
progress, data, value=value, elapsed=elapsed)
except TypeError:
data['eta_seconds'] = None
ETA_NA = True
data['eta'] = None
if data['eta_seconds']:
try:
data['eta'] = utils.format_time(data['eta_seconds'])
except (ValueError, OverflowError): # pragma: no cover
pass
if data['value'] == progress.min_value:
format = self.format_not_started
elif progress.end_time:
format = self.format_finished
elif data['eta']:
format = self.format
elif ETA_NA:
format = self.format_NA
else:
format = self.format_zero
return Timer.__call__(self, progress, data, format=format)
class AbsoluteETA(ETA):
'''Widget which attempts to estimate the absolute time of arrival.'''
def _calculate_eta(self, progress, data, value, elapsed):
eta_seconds = ETA._calculate_eta(self, progress, data, value, elapsed)
now = datetime.datetime.now()
try:
return now + datetime.timedelta(seconds=eta_seconds)
except OverflowError: # pragma: no cover
return datetime.datetime.max
def __init__(
self,
format_not_started='Estimated finish time: ----/--/-- --:--:--',
format_finished='Finished at: %(elapsed)s',
format='Estimated finish time: %(eta)s',
**kwargs):
ETA.__init__(self, format_not_started=format_not_started,
format_finished=format_finished, format=format, **kwargs)
class AdaptiveETA(ETA, SamplesMixin):
'''WidgetBase which attempts to estimate the time of arrival from a sample of recent progress.'''
# File: sshcustodian/sshcustodian.py
# -*- coding: utf-8 -*-
# Python 2/3 Compatibility
from __future__ import (unicode_literals, division, absolute_import,
print_function)
from six.moves import filterfalse
"""
This module creates a subclass of the main Custodian class in the Custodian
project (github.com/materialsproject/custodian), which is a wrapper that
manages jobs running on computing clusters. The Custodian module is part of The
Materials Project (materialsproject.org/).
This subclass adds the functionality to copy the temporary directory created
via monty to the scratch partitions on slave compute nodes, provided that the
cluster's filesystem is configured in this way. The implementation invokes a
subprocess to utilize the ssh executable installed on the cluster, so it is not
particularly elegant or platform independent, nor is this solution likely to be
general to all clusters. This is why this modification has not been submitted
as a pull request to the main Custodian project.
"""
# Import modules
import logging
import subprocess
import sys
import datetime
import time
import os
import re
from itertools import islice, groupby
from socket import gethostname
from monty.tempfile import ScratchDir
from monty.shutil import gzip_dir
from monty.json import MontyEncoder
from monty.serialization import dumpfn
from custodian.custodian import Custodian
from custodian.custodian import CustodianError
# Module-level logger
logger = logging.getLogger(__name__)
class SSHCustodian(Custodian):
"""
The SSHCustodian class modifies the Custodian class from the custodian
module to be able to handle clusters that have separate scratch partitions
for each node. When scratch_dir_node_only is enabled, the temp_dir that
monty creates will be copied to all other compute nodes used in the
calculation and subsequently removed when the job is finished.
"""
__doc__ += Custodian.__doc__
def __init__(self, handlers, jobs, validators=None, max_errors=1,
polling_time_step=10, monitor_freq=30,
skip_over_errors=False, scratch_dir=None,
gzipped_output=False, checkpoint=False,
scratch_dir_node_only=False, pbs_nodefile=None):
""" scratch_dir_node_only (bool): If set to True, custodian will grab
the list of nodes in the file path provided to pbs_nodefile and
copy the temp_dir to the scratch_dir on each node over
ssh. This is necessary on cluster setups where each node has
its own independent scratch partition.
pbs_nodefile (str): The filepath to the list of nodes to be used in
a calculation. If this path does not point to a valid file,
then scratch_dir_node_only will be automatically set to False.
"""
super(SSHCustodian, self).__init__(handlers, jobs, validators,
max_errors, polling_time_step,
monitor_freq, skip_over_errors,
scratch_dir, gzipped_output,
checkpoint)
self.hostname = gethostname()
if pbs_nodefile is None:
self.scratch_dir_node_only = False
self.slave_compute_node_list = None
elif os.path.exists(pbs_nodefile):
self.scratch_dir_node_only = scratch_dir_node_only
self.pbs_nodefile = pbs_nodefile
self.slave_compute_node_list = (
self._process_pbs_nodefile(self.pbs_nodefile, self.hostname))
else:
self.scratch_dir_node_only = False
self.pbs_nodefile = None
self.slave_compute_node_list = None
@staticmethod
def _process_pbs_nodefile(pbs_nodefile, hostname):
with open(pbs_nodefile) as in_file:
nodelist = in_file.read().splitlines()
slave_compute_node_list = [
node for node, _ in groupby(filterfalse(lambda x: x == hostname,
nodelist))
]
return slave_compute_node_list
def _copy_to_slave_node_dirs(self, temp_dir_path):
"""
Copy temporary scratch directory from master node to other nodes.
Args:
temp_dir_path (str): The path to the temporary scratch directory.
It is assumed here that the root path of the scratch directory
is the same on all nodes.
"""
process_list = []
for node in self.slave_compute_node_list:
command = ['rsync', '-azhq', temp_dir_path,
'{0}:{1}'.format(node,
os.path.abspath(self.scratch_dir))]
p = subprocess.Popen(command, shell=False)
process_list.append(p)
# Wait for syncing to finish before moving on
for process in process_list:
process.wait()
def _update_slave_node_vasp_input_files(self, temp_dir_path):
"""
Update VASP input files in the scratch partition on the slave compute
nodes.
Args:
temp_dir_path (str): The path to the temporary scratch directory.
It is assumed here that the root path of the scratch directory
is the same on all nodes.
"""
VASP_INPUT_FILES = [x for x in ["{0}/CHGCAR".format(temp_dir_path),
"{0}/WAVECAR".format(temp_dir_path),
"{0}/INCAR".format(temp_dir_path),
"{0}/POSCAR".format(temp_dir_path),
"{0}/POTCAR".format(temp_dir_path),
"{0}/KPOINTS".format(temp_dir_path)] if
os.path.exists(x)]
process_list = []
for node in self.slave_compute_node_list:
for filepath in VASP_INPUT_FILES:
command = 'scp {0} {1}:{2}/'.format(filepath, node,
temp_dir_path)
p = subprocess.Popen(command, shell=True)
process_list.append(p)
# Wait for syncing to finish before moving on
for process in process_list:
process.wait()
def _delete_slave_node_dirs(self, temp_dir_path):
"""
Delete the temporary scratch directory on the slave nodes.
Args:
temp_dir_path (str): The path to the temporary scratch directory.
It is assumed here that the root path of the scratch directory
is the same on all nodes.
"""
process_list = []
for node in self.slave_compute_node_list:
command = 'ssh {0} "rm -rf {1}"'.format(node, temp_dir_path)
p = subprocess.Popen(command, shell=True)
process_list.append(p)
# Wait for deletion to finish before moving on
for process in process_list:
process.wait()
def _manage_node_scratch(self, temp_dir_path, job_start):
"""
Checks whether the user wants to make use of scratch partitions on each
compute node, and if True, either copies the temporary directory to or
deletes the temporary directory from each slave compute node. If the
user does not specify to use node-specific scratch partitions, then the
function does nothing.
Args:
temp_dir_path (str): The path to the temporary scratch directory.
job_start (bool): If True, then the job has started and the
temporary directory will be copied to the slave compute
nodes. If False, then the temporary directories will be deleted
from the slave compute nodes.
"""
if self.scratch_dir_node_only:
if job_start:
self._copy_to_slave_node_dirs(temp_dir_path)
else:
self._delete_slave_node_dirs(temp_dir_path)
else:
pass
def _update_node_scratch(self, temp_dir_path, job):
"""
Method to update the scratch partitions on the slave compute nodes
if they exist and are running a VASP job.
Args:
temp_dir_path (str): The path to the temporary scratch directory.
job (object): The job object you intend to run. Currently supports
VASP jobs.
"""
vasp_re = re.compile(r'vasp')
if self.scratch_dir is not None:
try:
jobtype = job.get_jobtype()
if self.scratch_dir_node_only:
if vasp_re.match(jobtype):
self._update_slave_node_vasp_input_files(temp_dir_path)
else:
pass
else:
pass
except AttributeError:  # job does not implement get_jobtype()
pass
else:
pass
def run(self):
"""
Override of Custodian.run() to include instructions to copy the
temp_dir to the scratch partition on slave compute nodes if requested.
"""
cwd = os.getcwd()
with ScratchDir(self.scratch_dir, create_symbolic_link=True,
copy_to_current_on_exit=True,
copy_from_current_on_enter=True) as temp_dir:
self._manage_node_scratch(temp_dir_path=temp_dir,
job_start=True)
self.total_errors = 0
start = datetime.datetime.now()
logger.info("Run started at {} in {}.".format(
start, temp_dir))
v = sys.version.replace("\n", " ")
logger.info("Custodian running on Python version {}".format(v))
try:
# skip jobs until the restart
for job_n, job in islice(enumerate(self.jobs, 1),
self.restart, None):
self._run_job(job_n, job, temp_dir)
# Checkpoint after each job so that we can recover from
# last point and remove old checkpoints
if self.checkpoint:
super(SSHCustodian, self)._save_checkpoint(cwd, job_n)
except CustodianError as ex:
logger.error(ex.message)
if ex.raises:
raise RuntimeError("{} errors reached: {}. Exited..."
.format(self.total_errors, ex))
finally:
# Log the corrections to a json file.
logger.info("Logging to {}...".format(super(SSHCustodian,
self).LOG_FILE))
dumpfn(self.run_log, super(SSHCustodian, self).LOG_FILE,
cls=MontyEncoder, indent=4)
end = datetime.datetime.now()
logger.info("Run ended at {}.".format(end))
run_time = end - start
logger.info("Run completed. Total time taken = {}."
.format(run_time))
# Remove duplicate copy of log file, provided it ends with
# ".log"
for x in ([x for x in os.listdir(temp_dir)
if re.match(r'\w*\.log', x)]):
os.remove(os.path.join(temp_dir, x))
self._manage_node_scratch(temp_dir_path=temp_dir,
job_start=False)
if self.gzipped_output:
gzip_dir(".")
# Cleanup checkpoint files (if any) if run is successful.
super(SSHCustodian, self)._delete_checkpoints(cwd)
return self.run_log
def _run_job(self, job_n, job, temp_dir):
"""
Overrides custodian.custodian._run_job() to propagate changes to input
files on different scratch partitions on compute nodes, if needed.
"""
self.run_log.append({"job": job.as_dict(), "corrections": []})
job.setup()
for attempt in range(1, self.max_errors - self.total_errors + 1):
# Propagate updated input files, if needed
self._update_node_scratch(temp_dir, job)
logger.info(
"Starting job no. {} ({}) attempt no. {}. Errors "
"thus far = {}.".format(
job_n, job.name, attempt, self.total_errors))
p = job.run()
# Check for errors using the error handlers and perform
# corrections.
has_error = False
# While the job is running, we use the handlers that are
# monitors to monitor the job.
if isinstance(p, subprocess.Popen):
if self.monitors:
n = 0
while True:
n += 1
time.sleep(self.polling_time_step)
if p.poll() is not None:
break
if n % self.monitor_freq == 0:
has_error = self._do_check(self.monitors,
p.terminate)
else:
p.wait()
logger.info("{}.run has completed. "
"Checking remaining handlers".format(job.name))
# Check for errors again, since in some cases non-monitor
# handlers fix the problems detected by monitors
# if an error has been found, not all handlers need to run
if has_error:
self._do_check([h for h in self.handlers
if not h.is_monitor])
else:
has_error = self._do_check(self.handlers)
# If there are no errors detected, perform
# postprocessing and exit.
if not has_error:
for v in self.validators:
if v.check():
s = "Validation failed: {}".format(v)
raise CustodianError(s, True, v)
job.postprocess()
return
# check that all errors could be handled
for x in self.run_log[-1]["corrections"]:
if not x["actions"] and x["handler"].raises_runtime_error:
s = "Unrecoverable error for handler: {}. " \
"Raising RuntimeError".format(x["handler"])
raise CustodianError(s, True, x["handler"])
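The nodefile parsing performed by `_process_pbs_nodefile` above can be sketched in isolation (the function and host names here are illustrative, not part of the module): a PBS nodefile lists one line per allocated core, so each host appears in consecutive runs; `filterfalse` drops the master host and `groupby` collapses the consecutive repeats.

```python
from itertools import filterfalse, groupby


def slave_nodes(nodelist, hostname):
    """Drop the master host, then collapse consecutive duplicate entries."""
    return [node for node, _ in groupby(filterfalse(lambda x: x == hostname,
                                                    nodelist))]


# Two cores each on node1 (the master) and node2, one core on node3:
print(slave_nodes(['node1', 'node1', 'node2', 'node2', 'node3'], 'node1'))
# -> ['node2', 'node3']
```

Note that `groupby` only merges adjacent entries, so a host that reappears non-adjacently after filtering would be listed more than once.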
= value
@property
def operating_system_name(self):
"""
Returns the value of the `operating_system_name` property.
"""
return self._operating_system_name
@operating_system_name.setter
def operating_system_name(self, value):
"""
Sets the value of the `operating_system_name` property.
"""
self._operating_system_name = value
@property
def architecture_name(self):
"""
Returns the value of the `architecture_name` property.
"""
return self._architecture_name
@architecture_name.setter
def architecture_name(self, value):
"""
Sets the value of the `architecture_name` property.
"""
self._architecture_name = value
class ExternalNetworkProviderConfiguration(Identified):
def __init__(
self,
comment=None,
description=None,
external_network_provider=None,
host=None,
id=None,
name=None,
):
super(ExternalNetworkProviderConfiguration, self).__init__(
comment=comment,
description=description,
id=id,
name=name,
)
self.external_network_provider = external_network_provider
self.host = host
@property
def external_network_provider(self):
"""
Returns the value of the `external_network_provider` property.
"""
return self._external_network_provider
@external_network_provider.setter
def external_network_provider(self, value):
"""
Sets the value of the `external_network_provider` property.
"""
Struct._check_type('external_network_provider', value, ExternalProvider)
self._external_network_provider = value
@property
def host(self):
"""
Returns the value of the `host` property.
"""
return self._host
@host.setter
def host(self, value):
"""
Sets the value of the `host` property.
"""
Struct._check_type('host', value, Host)
self._host = value
class ExternalProvider(Identified):
def __init__(
self,
authentication_url=None,
comment=None,
description=None,
id=None,
name=None,
password=None,
properties=None,
requires_authentication=None,
url=None,
username=None,
):
super(ExternalProvider, self).__init__(
comment=comment,
description=description,
id=id,
name=name,
)
self.authentication_url = authentication_url
self.password = password
self.properties = properties
self.requires_authentication = requires_authentication
self.url = url
self.username = username
@property
def password(self):
"""
Returns the value of the `password` property.
"""
return self._password
@password.setter
def password(self, value):
"""
Sets the value of the `password` property.
"""
self._password = value
@property
def requires_authentication(self):
"""
Returns the value of the `requires_authentication` property.
"""
return self._requires_authentication
@requires_authentication.setter
def requires_authentication(self, value):
"""
Sets the value of the `requires_authentication` property.
"""
self._requires_authentication = value
@property
def username(self):
"""
Returns the value of the `username` property.
"""
return self._username
@username.setter
def username(self, value):
"""
Sets the value of the `username` property.
"""
self._username = value
@property
def authentication_url(self):
"""
Returns the value of the `authentication_url` property.
"""
return self._authentication_url
@authentication_url.setter
def authentication_url(self, value):
"""
Sets the value of the `authentication_url` property.
"""
self._authentication_url = value
@property
def url(self):
"""
Returns the value of the `url` property.
"""
return self._url
@url.setter
def url(self, value):
"""
Sets the value of the `url` property.
"""
self._url = value
@property
def properties(self):
"""
Returns the value of the `properties` property.
"""
return self._properties
@properties.setter
def properties(self, value):
"""
Sets the value of the `properties` property.
"""
self._properties = value
class File(Identified):
def __init__(
self,
comment=None,
content=None,
description=None,
id=None,
name=None,
storage_domain=None,
type=None,
):
super(File, self).__init__(
comment=comment,
description=description,
id=id,
name=name,
)
self.content = content
self.storage_domain = storage_domain
self.type = type
@property
def storage_domain(self):
"""
Returns the value of the `storage_domain` property.
"""
return self._storage_domain
@storage_domain.setter
def storage_domain(self, value):
"""
Sets the value of the `storage_domain` property.
"""
Struct._check_type('storage_domain', value, StorageDomain)
self._storage_domain = value
@property
def content(self):
"""
Returns the value of the `content` property.
"""
return self._content
@content.setter
def content(self, value):
"""
Sets the value of the `content` property.
"""
self._content = value
@property
def type(self):
"""
Returns the value of the `type` property.
"""
return self._type
@type.setter
def type(self, value):
"""
Sets the value of the `type` property.
"""
self._type = value
class Filter(Identified):
def __init__(
self,
comment=None,
description=None,
id=None,
name=None,
position=None,
scheduling_policy_unit=None,
):
super(Filter, self).__init__(
comment=comment,
description=description,
id=id,
name=name,
)
self.position = position
self.scheduling_policy_unit = scheduling_policy_unit
@property
def scheduling_policy_unit(self):
"""
Returns the value of the `scheduling_policy_unit` property.
"""
return self._scheduling_policy_unit
@scheduling_policy_unit.setter
def scheduling_policy_unit(self, value):
"""
Sets the value of the `scheduling_policy_unit` property.
"""
Struct._check_type('scheduling_policy_unit', value, SchedulingPolicyUnit)
self._scheduling_policy_unit = value
@property
def position(self):
"""
Returns the value of the `position` property.
"""
return self._position
@position.setter
def position(self, value):
"""
Sets the value of the `position` property.
"""
self._position = value
class Floppy(Device):
def __init__(
self,
comment=None,
description=None,
file=None,
id=None,
instance_type=None,
name=None,
template=None,
vm=None,
vms=None,
):
super(Floppy, self).__init__(
comment=comment,
description=description,
id=id,
instance_type=instance_type,
name=name,
template=template,
vm=vm,
vms=vms,
)
self.file = file
@property
def file(self):
"""
Returns the value of the `file` property.
"""
return self._file
@file.setter
def file(self, value):
"""
Sets the value of the `file` property.
"""
Struct._check_type('file', value, File)
self._file = value
class GlusterBrickAdvancedDetails(Device):
def __init__(
self,
comment=None,
description=None,
device=None,
fs_name=None,
gluster_clients=None,
id=None,
instance_type=None,
memory_pools=None,
mnt_options=None,
name=None,
pid=None,
port=None,
template=None,
vm=None,
vms=None,
):
super(GlusterBrickAdvancedDetails, self).__init__(
comment=comment,
description=description,
id=id,
instance_type=instance_type,
name=name,
template=template,
vm=vm,
vms=vms,
)
self.device = device
self.fs_name = fs_name
self.gluster_clients = gluster_clients
self.memory_pools = memory_pools
self.mnt_options = mnt_options
self.pid = pid
self.port = port
@property
def port(self):
"""
Returns the value of the `port` property.
"""
return self._port
@port.setter
def port(self, value):
"""
Sets the value of the `port` property.
"""
self._port = value
@property
def memory_pools(self):
"""
Returns the value of the `memory_pools` property.
"""
return self._memory_pools
@memory_pools.setter
def memory_pools(self, value):
"""
Sets the value of the `memory_pools` property.
"""
self._memory_pools = value
@property
def mnt_options(self):
"""
Returns the value of the `mnt_options` property.
"""
return self._mnt_options
@mnt_options.setter
def mnt_options(self, value):
"""
Sets the value of the `mnt_options` property.
"""
self._mnt_options = value
@property
def fs_name(self):
"""
Returns the value of the `fs_name` property.
"""
return self._fs_name
@fs_name.setter
def fs_name(self, value):
"""
Sets the value of the `fs_name` property.
"""
self._fs_name = value
@property
def pid(self):
"""
Returns the value of the `pid` property.
"""
return self._pid
@pid.setter
def pid(self, value):
"""
Sets the value of the `pid` property.
"""
self._pid = value
@property
def gluster_clients(self):
"""
Returns the value of the `gluster_clients` property.
"""
return self._gluster_clients
@gluster_clients.setter
def gluster_clients(self, value):
"""
Sets the value of the `gluster_clients` property.
"""
self._gluster_clients = value
@property
def device(self):
"""
Returns the value of the `device` property.
"""
return self._device
@device.setter
def device(self, value):
"""
Sets the value of the `device` property.
"""
self._device = value
class GlusterHook(Identified):
def __init__(
self,
checksum=None,
cluster=None,
comment=None,
conflict_status=None,
conflicts=None,
content=None,
content_type=None,
description=None,
gluster_command=None,
id=None,
name=None,
server_hooks=None,
stage=None,
status=None,
):
super(GlusterHook, self).__init__(
comment=comment,
description=description,
id=id,
name=name,
)
self.checksum = checksum
self.cluster = cluster
self.conflict_status = conflict_status
self.conflicts = conflicts
self.content = content
self.content_type = content_type
self.gluster_command = gluster_command
self.server_hooks = server_hooks
self.stage = stage
self.status = status
@property
def cluster(self):
"""
Returns the value of the `cluster` property.
"""
return self._cluster
@cluster.setter
def cluster(self, value):
"""
Sets the value of the `cluster` property.
"""
Struct._check_type('cluster', value, Cluster)
self._cluster = value
@property
def stage(self):
"""
Returns the value of the `stage` property.
"""
return self._stage
@stage.setter
def stage(self, value):
"""
Sets the value of the `stage` property.
"""
Struct._check_type('stage', value, HookStage)
self._stage = value
@property
def content_type(self):
"""
Returns the value of the `content_type` property.
"""
return self._content_type
@content_type.setter
def content_type(self, value):
"""
Sets the value of the `content_type` property.
"""
Struct._check_type('content_type', value, HookContentType)
self._content_type = value
@property
def conflict_status(self):
"""
Returns the value of the `conflict_status` property.
"""
return self._conflict_status
@conflict_status.setter
def conflict_status(self, value):
"""
Sets the value of the `conflict_status` property.
"""
self._conflict_status = value
@property
def conflicts(self):
"""
Returns the value of the `conflicts` property.
"""
return self._conflicts
@conflicts.setter
def conflicts(self, value):
"""
Sets the value of the `conflicts` property.
"""
self._conflicts = value
@property
def checksum(self):
"""
Returns the value of the `checksum` property.
"""
return self._checksum
@checksum.setter
def checksum(self, value):
"""
Sets the value of the `checksum` property.
"""
self._checksum = value
@property
def status(self):
"""
Returns the value of the `status` property.
"""
return self._status
@status.setter
def status(self, value):
"""
Sets the value of the `status` property.
"""
Struct._check_type('status', value, GlusterHookStatus)
self._status = value
@property
def gluster_command(self):
"""
Returns the value of the `gluster_command` property.
"""
return self._gluster_command
@gluster_command.setter
def gluster_command(self, value):
"""
Sets the value of the `gluster_command` property.
"""
self._gluster_command = value
@property
def content(self):
"""
Returns the value of the `content` property.
"""
return self._content
@content.setter
def content(self, value):
"""
Sets the value of the `content` property.
"""
self._content = value
@property
def server_hooks(self):
"""
Returns the value of the `server_hooks` property.
"""
return self._server_hooks
@server_hooks.setter
def server_hooks(self, value):
"""
Sets the value of the `server_hooks` property.
"""
self._server_hooks = value
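The classes in this fragment all follow the same generated pattern: plain attributes for untyped fields, plus setters that route struct-typed values through `Struct._check_type`. A minimal self-contained sketch of that pattern (the `Struct` base and class names here are illustrative stand-ins; the SDK's real implementation may differ):

```python
class Struct:
    """Illustrative stand-in for the SDK base class used by these types."""

    @staticmethod
    def _check_type(attribute, value, expected_type):
        # None is always allowed; anything else must match the declared type.
        if value is not None and not isinstance(value, expected_type):
            raise TypeError(
                "The '{0}' attribute must be of type '{1}'".format(
                    attribute, expected_type.__name__))


class Cluster(Struct):
    pass


class Hook(Struct):
    @property
    def cluster(self):
        return self._cluster

    @cluster.setter
    def cluster(self, value):
        Struct._check_type('cluster', value, Cluster)
        self._cluster = value


h = Hook()
h.cluster = Cluster()  # accepted: correct type
h.cluster = None       # accepted: None always passes the check
```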
class GlusterMemoryPool(Identified):
def __init__(
self,
alloc_count=None,
cold_count=None,
comment=None,
description=None,
hot_count=None,
"""The main module handling the simulation"""
import copy
import datetime
import logging
import os
import pickle
import queue
import random
import sys
import threading
import warnings
from functools import lru_cache
from pprint import pformat # TODO set some defaults for width/etc with partial?
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pap
import tqdm
from ..numerical_libs import enable_cupy, reimport_numerical_libs, xp, xp_ivp
from ..util.distributions import approx_mPERT_sample, truncnorm
from ..util.util import TqdmLoggingHandler, _banner
from .arg_parser_model import parser
from .estimation import estimate_Rt
from .exceptions import SimulationException
from .graph import buckyGraphData
from .mc_instance import buckyMCInstance
from .npi import get_npi_params
from .parameters import buckyParams
from .rhs import RHS_func
from .state import buckyState
# suppress pandas warning caused by pyarrow
warnings.simplefilter(action="ignore", category=FutureWarning)
# TODO we allow a lot of division by zero and then check for NaNs later; we should probably refactor that
warnings.simplefilter(action="ignore", category=RuntimeWarning)
@lru_cache(maxsize=None)
def get_runid(): # TODO move to util and rename to timeid or something
"""Gets a UUID based of the current datatime and caches it"""
dt_now = datetime.datetime.now()
return str(dt_now).replace(" ", "__").replace(":", "_").split(".")[0]
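The run id produced by `get_runid` is a filesystem-safe timestamp (spaces and colons replaced, microseconds stripped). Its formatting can be checked in isolation (the `timeid` name is hypothetical, echoing the TODO above):

```python
import datetime


def timeid(dt):
    """Reproduce the run-id formatting used by get_runid above."""
    return str(dt).replace(" ", "__").replace(":", "_").split(".")[0]


print(timeid(datetime.datetime(2021, 3, 4, 15, 30, 45, 123456)))
# -> 2021-03-04__15_30_45
```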
def frac_last_n_vals(arr, n, axis=0, offset=0):  # TODO assumes values come from the end of the array; move to util
"""Return the last n values along an axis of an array; if n is a float, include the fractional amount of the int(n)-1 element"""
int_slice_ind = (
[slice(None)] * (axis)
+ [slice(-int(n + offset), -int(xp.ceil(offset)) or None)]
+ [slice(None)] * (arr.ndim - axis - 1)
)
ret = arr[int_slice_ind]
# handle fractional element before the standard slice
if (n + offset) % 1:
frac_slice_ind = (
[slice(None)] * (axis)
+ [slice(-int(n + offset + 1), -int(n + offset))]
+ [slice(None)] * (arr.ndim - axis - 1)
)
ret = xp.concatenate((((n + offset) % 1) * arr[frac_slice_ind], ret), axis=axis)
# handle fractional element after the standard slice
if offset % 1:
frac_slice_ind = (
[slice(None)] * (axis)
+ [slice(-int(offset + 1), -int(offset) or None)]
+ [slice(None)] * (arr.ndim - axis - 1)
)
ret = xp.concatenate((ret, (1.0 - (offset % 1)) * arr[frac_slice_ind]), axis=axis)
return ret
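A trimmed restatement of `frac_last_n_vals` can make the fractional-window idea concrete. This sketch assumes `xp` aliases NumPy, fixes `offset` at 0 for brevity, and uses tuple indexing (NumPy requires a tuple, not a list, of slices):

```python
import numpy as np


def frac_last_n(arr, n, axis=0):
    """Last n values along an axis; a fractional n prepends a scaled element."""
    idx = [slice(None)] * arr.ndim
    idx[axis] = slice(-int(n), None)
    ret = arr[tuple(idx)]
    if n % 1:
        # Prepend the fractional share of the element just before the window.
        idx[axis] = slice(-int(n + 1), -int(n))
        ret = np.concatenate(((n % 1) * arr[tuple(idx)], ret), axis=axis)
    return ret


print(frac_last_n(np.arange(10.0), 2))    # the last two values: 8.0 and 9.0
print(frac_last_n(np.arange(10.0), 2.5))  # prepends 0.5 * 7.0 = 3.5
```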
class buckyModelCovid:
"""Class that handles one full simulation (both time integration and managing MC states)"""
def __init__(
self,
debug=False,
sparse_aij=False,
t_max=None,
graph_file=None,
par_file=None,
npi_file=None,
disable_npi=False,
reject_runs=False,
):
"""Initialize the class, do some bookkeeping and read in the input graph"""
self.debug = debug
self.sparse = sparse_aij # we can default to none and autodetect
# w/ override (maybe when #adm2 > 5k and some sparsity criteria?)
# Integrator params
self.t_max = t_max
self.run_id = get_runid()
logging.info(f"Run ID: {self.run_id}")
self.npi_file = npi_file
self.disable_npi = disable_npi
self.reject_runs = reject_runs
self.output_dates = None
# COVID/model params from par file
self.bucky_params = buckyParams(par_file)
self.consts = self.bucky_params.consts
self.dists = self.bucky_params.dists
self.g_data = self.load_graph(graph_file)
def update_params(self, update_dict):
self.bucky_params.update_params(update_dict)
self.consts = self.bucky_params.consts
self.dists = self.bucky_params.dists
def load_graph(self, graph_file):
"""Load the graph data and calculate all the variables that are static across MC runs"""
# TODO refactor to just have this return g_data
logging.info("loading graph")
with open(graph_file, "rb") as f:
G = pickle.load(f) # nosec
# Load data from input graph
# TODO we should go through and replace lots of math using self.g_data.* with functions in buckyGraphData
g_data = buckyGraphData(G, self.sparse)
# Make contact mats sym and normalized
self.contact_mats = G.graph["contact_mats"]
if self.debug:
logging.debug(f"graph contact mats: {G.graph['contact_mats'].keys()}")
for mat in self.contact_mats:
c_mat = xp.array(self.contact_mats[mat])
c_mat = (c_mat + c_mat.T) / 2.0
self.contact_mats[mat] = c_mat
# remove all_locations so we can sum over them ourselves
if "all_locations" in self.contact_mats:
del self.contact_mats["all_locations"]
# Remove unknown contact mats
valid_contact_mats = ["home", "work", "other_locations", "school"]
self.contact_mats = {k: v for k, v in self.contact_mats.items() if k in valid_contact_mats}
self.Cij = xp.vstack([self.contact_mats[k][None, ...] for k in sorted(self.contact_mats)])
# Get stratified population (and total)
self.Nij = g_data.Nij
self.Nj = g_data.Nj
self.n_age_grps = self.Nij.shape[0] # TODO factor out
self.init_date = datetime.date.fromisoformat(G.graph["start_date"])
self.base_mc_instance = buckyMCInstance(self.init_date, self.t_max, self.Nij, self.Cij)
# fill in npi_params either from file or as ones
self.npi_params = get_npi_params(g_data, self.init_date, self.t_max, self.npi_file, self.disable_npi)
if self.npi_params["npi_active"]:
self.base_mc_instance.add_npi(self.npi_params)
self.adm0_cfr_reported = None
self.adm1_cfr_reported = None
self.adm2_cfr_reported = None
# If HHS hospitalization data is on the graph, use it to rescale initial H counts and CHR
# self.rescale_chr = "hhs_data" in G.graph
if self.consts.rescale_chr:
self.adm1_current_hosp = xp.zeros((g_data.max_adm1 + 1,), dtype=float)
# TODO move hosp data to the graph nodes and handle it with graph.py the way cases/deaths are
hhs_data = G.graph["hhs_data"].reset_index()
hhs_data["date"] = pd.to_datetime(hhs_data["date"])
hhs_data = (
hhs_data.set_index("date")
.sort_index()
.groupby("adm1")
.rolling(7)
.mean()
.drop(columns="adm1")
.reset_index()
)
hhs_curr_data = hhs_data.loc[hhs_data.date == pd.Timestamp(self.init_date)]
hhs_curr_data = hhs_curr_data.set_index("adm1").sort_index()
tot_hosps = (
hhs_curr_data.total_adult_patients_hospitalized_confirmed_covid
+ hhs_curr_data.total_pediatric_patients_hospitalized_confirmed_covid
)
self.adm1_current_hosp[tot_hosps.index.to_numpy()] = tot_hosps.to_numpy()
if self.debug:
logging.debug("Current hospitalizations: " + pformat(self.adm1_current_hosp))
# Estimate the recent CFR during the period covered by the historical data
cfr_delay = 25 # 14 # TODO This should come from CDC and Nij
n_cfr = 14
last_cases = (
g_data.rolling_cum_cases[-cfr_delay - n_cfr : -cfr_delay] - g_data.rolling_cum_cases[-cfr_delay - n_cfr - 1]
)
last_deaths = g_data.rolling_cum_deaths[-n_cfr:] - g_data.rolling_cum_deaths[-n_cfr - 1]
adm1_cases = g_data.sum_adm1(last_cases.T)
adm1_deaths = g_data.sum_adm1(last_deaths.T)
negative_mask = (adm1_deaths < 0.0) | (adm1_cases < 0.0)
adm1_cfr = adm1_deaths / adm1_cases
adm1_cfr[negative_mask] = xp.nan
# take mean over n days
self.adm1_current_cfr = xp.nanmedian(adm1_cfr, axis=1)
# Estimate recent CHR
if self.consts.rescale_chr:
chr_delay = 20 # TODO This should come from I_TO_H_TIME and Nij as a float (it's ~5.8)
n_chr = 7
tmp = hhs_data.loc[hhs_data.date > pd.Timestamp(self.init_date - datetime.timedelta(days=n_chr))]
tmp = tmp.loc[tmp.date <= pd.Timestamp(self.init_date)]
tmp = tmp.set_index(["adm1", "date"]).sort_index()
tmp = (
tmp.previous_day_admission_adult_covid_confirmed + tmp.previous_day_admission_pediatric_covid_confirmed
)
cum_hosps = xp.zeros((adm1_cfr.shape[0], n_chr))
tmp = tmp.unstack()
tmp_data = tmp.T.cumsum().to_numpy()
tmp_ind = tmp.index.to_numpy()
cum_hosps[tmp_ind] = tmp_data.T
last_cases = (
g_data.rolling_cum_cases[-chr_delay - n_chr : -chr_delay]
- g_data.rolling_cum_cases[-chr_delay - n_chr - 1]
)
adm1_cases = g_data.sum_adm1(last_cases.T)
adm1_hosps = cum_hosps # g_data.sum_adm1(last_hosps.T)
adm1_chr = adm1_hosps / adm1_cases
# take mean over n days
self.adm1_current_chr = xp.mean(adm1_chr, axis=1)
# self.adm1_current_chr = self.calc_lagged_rate(g_data.adm1_cum_case_hist, cum_hosps.T, chr_delay, n_chr)
if self.debug:
logging.debug("Current CFR: " + pformat(self.adm1_current_cfr))
return g_data
def reset(self, seed=None, params=None):
"""Reset the state of the model and generate new inital data from a new random seed"""
# TODO we should refactor reset of the compartments to be real pop numbers then /Nij at the end
if seed is not None:
random.seed(int(seed))
np.random.seed(seed)
xp.random.seed(seed)
# reroll model params if we're doing that kind of thing
self.g_data.Aij.perturb(self.consts.reroll_variance)
self.params = self.bucky_params.generate_params()
if params is not None:
self.params = copy.deepcopy(params)
if self.debug:
logging.debug("params: " + pformat(self.params, width=120))
for k in self.params:
if type(self.params[k]).__module__ == np.__name__:
self.params[k] = xp.asarray(self.params[k])
# TODO consolidate all the broadcast_to calls
self.params.H = xp.broadcast_to(self.params.H[:, None], self.Nij.shape)
self.params.F = xp.broadcast_to(self.params.F[:, None], self.Nij.shape)
if self.consts.rescale_chr:
# TODO this needs to be cleaned up BAD
adm1_Ni = self.g_data.adm1_Nij
adm1_N = self.g_data.adm1_Nj
# estimate adm1 expected CFR weighted by local age demo
tmp = self.params.F[:, 0][..., None] * self.g_data.adm1_Nij / self.g_data.adm1_Nj
adm1_F = xp.sum(tmp, axis=0)
# get ratio of actual CFR to expected CFR
adm1_F_fac = self.adm1_current_cfr / adm1_F
adm0_F_fac = xp.nanmean(adm1_N * adm1_F_fac) / xp.sum(adm1_N)
adm1_F_fac[xp.isnan(adm1_F_fac)] = adm0_F_fac
F_RR_fac = truncnorm(1.0, self.dists.F_RR_var, size=adm1_F_fac.size, a_min=1e-6)
if self.debug:
logging.debug("adm1 cfr rescaling factor: " + pformat(adm1_F_fac))
self.params.F = self.params.F * F_RR_fac[self.g_data.adm1_id] * adm1_F_fac[self.g_data.adm1_id]
self.params.F = xp.clip(self.params.F, a_min=1.0e-10, a_max=1.0)
adm1_Hi = self.g_data.sum_adm1((self.params.H * self.Nij).T).T
adm1_Hi = adm1_Hi / adm1_Ni
adm1_H = xp.nanmean(adm1_Hi, axis=0)
adm1_H_fac = self.adm1_current_chr / adm1_H
adm0_H_fac = xp.nanmean(adm1_N * adm1_H_fac) / xp.sum(adm1_N)
adm1_H_fac[xp.isnan(adm1_H_fac)] = adm0_H_fac
H_RR_fac = truncnorm(1.0, self.dists.H_RR_var, size=adm1_H_fac.size, a_min=1e-6)
adm1_H_fac = adm1_H_fac * H_RR_fac
# adm1_H_fac = xp.clip(adm1_H_fac, a_min=0.1, a_max=10.0) # prevent extreme values
if self.debug:
logging.debug("adm1 chr rescaling factor: " + pformat(adm1_H_fac))
self.params.H = self.params.H * adm1_H_fac[self.g_data.adm1_id]
self.params.H = xp.clip(self.params.H, a_min=self.params.F, a_max=1.0)
# crr_days_needed = max( #TODO this depends on all the Td params, and D_REPORT_TIME...
case_reporting = self.estimate_reporting(
self.g_data,
self.params,
cfr=self.params.F,
# case_lag=14,
days_back=25,
min_deaths=self.consts.case_reporting_min_deaths,
)
self.case_reporting = approx_mPERT_sample( # TODO these facs should go in param file
mu=xp.clip(case_reporting, a_min=0.05, a_max=0.95),
a=xp.clip(0.7 * case_reporting, a_min=0.01, a_max=0.9),
b=xp.clip(1.3 * case_reporting, a_min=0.1, a_max=1.0),
gamma=50.0,
)
mean_case_reporting = xp.nanmean(self.case_reporting[-self.consts.case_reporting_N_historical_days :], axis=0)
self.params["CASE_REPORT"] = mean_case_reporting
self.params["THETA"] = xp.broadcast_to(
self.params["THETA"][:, None], self.Nij.shape
) # TODO move all the broadcast_to's to one place, they're all over reset()
self.params["GAMMA_H"] = xp.broadcast_to(self.params["GAMMA_H"][:, None], self.Nij.shape)
self.params["F_eff"] = xp.clip(self.params["F"] / self.params["H"], 0.0, 1.0)
def func_1f633d0c3f81475c89eae40d71ee6d74(cases, infile):
for cc in xrange(cases):
budget, bets = map(int, infile.readline().split())
placed = sorted(map(int, infile.readline().split()))
ret = 0.0
queue = [1] + placed + [(p - 1) for p in placed] + [(p + 1) for p in placed]
queue = sorted(set(queue))
seen = set(queue)
while queue:
lowest = queue.pop()
if lowest == 0:
continue
needed_budget = (37 - len(placed)) * lowest
for p in placed:
needed_budget += max(0, lowest - p)
if budget < needed_budget:
continue
remaining_budget = budget - needed_budget
partial = len([p for p in placed if p <= lowest])
lowest_cnt = 37 - len(placed) + partial
if lowest_cnt == 0:
continue
larger = [p for p in placed if p > lowest]
if larger:
next_larger = min(larger)
can_replicate = min(next_larger - lowest - 1,
remaining_budget / lowest_cnt)
else:
can_replicate = remaining_budget / lowest_cnt
if can_replicate > 0:
if lowest + can_replicate not in seen:
seen.add(lowest + can_replicate)
queue.append(lowest + can_replicate)
if lowest + can_replicate - 1 not in seen:
seen.add(lowest + can_replicate - 1)
queue.append(lowest + can_replicate - 1)
for exclude in xrange(0, min(remaining_budget, partial) + 1):
cand = get_expected(placed, lowest, exclude) - exclude - needed_budget
ret = max(ret, cand)
print 'Case #%d: %.10lf' % (cc + 1, ret)
if lowest + can_replicate not in seen:
seen.add(lowest + can_replicate)
queue.append(lowest + can_replicate)
if lowest + can_replicate - 1 not in seen:
seen.add(lowest + can_replicate - 1)
queue.append(lowest + can_replicate - 1)
infile.close()
return next_larger
def func_29d35e5fc2d2418cb6b2e8d18f752eb6(cases, infile):
for cc in xrange(cases):
budget, bets = map(int, infile.readline().split())
placed = sorted(map(int, infile.readline().split()))
ret = 0.0
queue = [1] + placed + [(p - 1) for p in placed] + [(p + 1) for p in placed]
queue = sorted(set(queue))
seen = set(queue)
while queue:
lowest = queue.pop()
if lowest == 0:
continue
needed_budget = (37 - len(placed)) * lowest
for p in placed:
needed_budget += max(0, lowest - p)
if budget < needed_budget:
continue
remaining_budget = budget - needed_budget
partial = len([p for p in placed if p <= lowest])
lowest_cnt = 37 - len(placed) + partial
if lowest_cnt == 0:
continue
larger = [p for p in placed if p > lowest]
if larger:
next_larger = min(larger)
can_replicate = min(next_larger - lowest - 1,
remaining_budget / lowest_cnt)
else:
can_replicate = remaining_budget / lowest_cnt
if can_replicate > 0:
if lowest + can_replicate not in seen:
seen.add(lowest + can_replicate)
queue.append(lowest + can_replicate)
if lowest + can_replicate - 1 not in seen:
seen.add(lowest + can_replicate - 1)
queue.append(lowest + can_replicate - 1)
for exclude in xrange(0, min(remaining_budget, partial) + 1):
cand = get_expected(placed, lowest, exclude) - exclude - needed_budget
ret = max(ret, cand)
print 'Case #%d: %.10lf' % (cc + 1, ret)
if lowest + can_replicate not in seen:
seen.add(lowest + can_replicate)
queue.append(lowest + can_replicate)
if lowest + can_replicate - 1 not in seen:
seen.add(lowest + can_replicate - 1)
queue.append(lowest + can_replicate - 1)
infile.close()
return exclude
def func_c26ddaf7400741f3a903f8dcf4a1c423(cases, infile):
for cc in xrange(cases):
budget, bets = map(int, infile.readline().split())
placed = sorted(map(int, infile.readline().split()))
ret = 0.0
queue = [1] + placed + [(p - 1) for p in placed] + [(p + 1) for p in placed]
queue = sorted(set(queue))
seen = set(queue)
while queue:
lowest = queue.pop()
if lowest == 0:
continue
needed_budget = (37 - len(placed)) * lowest
for p in placed:
needed_budget += max(0, lowest - p)
if budget < needed_budget:
continue
remaining_budget = budget - needed_budget
partial = len([p for p in placed if p <= lowest])
lowest_cnt = 37 - len(placed) + partial
if lowest_cnt == 0:
continue
larger = [p for p in placed if p > lowest]
if larger:
next_larger = min(larger)
can_replicate = min(next_larger - lowest - 1,
remaining_budget / lowest_cnt)
else:
can_replicate = remaining_budget / lowest_cnt
if can_replicate > 0:
if lowest + can_replicate not in seen:
seen.add(lowest + can_replicate)
queue.append(lowest + can_replicate)
if lowest + can_replicate - 1 not in seen:
seen.add(lowest + can_replicate - 1)
queue.append(lowest + can_replicate - 1)
for exclude in xrange(0, min(remaining_budget, partial) + 1):
cand = get_expected(placed, lowest, exclude) - exclude - needed_budget
ret = max(ret, cand)
print 'Case #%d: %.10lf' % (cc + 1, ret)
if lowest + can_replicate not in seen:
seen.add(lowest + can_replicate)
queue.append(lowest + can_replicate)
if lowest + can_replicate - 1 not in seen:
seen.add(lowest + can_replicate - 1)
queue.append(lowest + can_replicate - 1)
infile.close()
return queue
def func_67efc5e3797f4aef847d54d8ae4ed3be(cases, infile):
for cc in xrange(cases):
budget, bets = map(int, infile.readline().split())
placed = sorted(map(int, infile.readline().split()))
ret = 0.0
queue = [1] + placed + [(p - 1) for p in placed] + [(p + 1) for p in placed]
queue = sorted(set(queue))
seen = set(queue)
while queue:
lowest = queue.pop()
if lowest == 0:
continue
needed_budget = (37 - len(placed)) * lowest
for p in placed:
needed_budget += max(0, lowest - p)
if budget < needed_budget:
continue
remaining_budget = budget - needed_budget
partial = len([p for p in placed if p <= lowest])
lowest_cnt = 37 - len(placed) + partial
if lowest_cnt == 0:
continue
larger = [p for p in placed if p > lowest]
if larger:
next_larger = min(larger)
can_replicate = min(next_larger - lowest - 1,
remaining_budget / lowest_cnt)
else:
can_replicate = remaining_budget / lowest_cnt
if can_replicate > 0:
if lowest + can_replicate not in seen:
seen.add(lowest + can_replicate)
queue.append(lowest + can_replicate)
if lowest + can_replicate - 1 not in seen:
seen.add(lowest + can_replicate - 1)
queue.append(lowest + can_replicate - 1)
for exclude in xrange(0, min(remaining_budget, partial) + 1):
cand = get_expected(placed, lowest, exclude) - exclude - needed_budget
ret = max(ret, cand)
print 'Case #%d: %.10lf' % (cc + 1, ret)
if lowest + can_replicate not in seen:
seen.add(lowest + can_replicate)
queue.append(lowest + can_replicate)
if lowest + can_replicate - 1 not in seen:
seen.add(lowest + can_replicate - 1)
queue.append(lowest + can_replicate - 1)
infile.close()
return lowest
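The near-identical solver variants above all recompute, inline, the budget needed to raise every roulette number to a candidate bet level. That step can be isolated as a small pure function; `needed_budget` is a hypothetical helper for illustration, not defined in the original code:

```python
def needed_budget(placed, level):
    # Cost to bring every one of the 37 roulette numbers up to at least `level`:
    # numbers with no existing bet need a full `level` stake, while numbers that
    # already carry a bet only need topping up. This mirrors the inline loop
    # `needed_budget = (37 - len(placed)) * lowest; needed_budget += max(0, lowest - p)`.
    return (37 - len(placed)) * level + sum(max(0, level - p) for p in placed)
```

The generator expression works identically under Python 2 and 3, so the sketch stays compatible with the surrounding Python 2 code.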
def func_a9f63b1e792f47aa80f45a3d84e53e59(cases, infile):
for cc in xrange(cases):
budget, bets = map(int, infile.readline().split())
placed = sorted(map(int, infile.readline().split()))
ret = 0.0
queue = [1] + placed + [(p - 1) for p in placed] + [(p + 1) for p in placed]
queue = sorted(set(queue))
seen = set(queue)
while queue:
lowest = queue.pop()
if lowest == 0:
continue
needed_budget = (37 - len(placed)) * lowest
for p in placed:
needed_budget += max(0, lowest - p)
if budget < needed_budget:
continue
remaining_budget = budget - needed_budget
partial = len([p for p in placed if p <= lowest])
lowest_cnt = 37 - len(placed) + partial
if lowest_cnt == 0:
continue
larger = [p for p in placed if p > lowest]
if larger:
next_larger = min(larger)
can_replicate = min(next_larger - lowest - 1,
remaining_budget / lowest_cnt)
else:
can_replicate = remaining_budget / lowest_cnt
if can_replicate > 0:
if lowest + can_replicate not in seen:
seen.add(lowest + can_replicate)
queue.append(lowest + can_replicate)
if lowest + can_replicate - 1 not in seen:
seen.add(lowest + can_replicate - 1)
queue.append(lowest + can_replicate - 1)
for exclude in xrange(0, min(remaining_budget, partial) + 1):
cand = get_expected(placed, lowest, exclude) - exclude - needed_budget
ret = max(ret, cand)
print 'Case #%d: %.10lf' % (cc + 1, ret)
if lowest + can_replicate not in seen:
seen.add(lowest + can_replicate)
queue.append(lowest + can_replicate)
if lowest + can_replicate - 1 not in seen:
seen.add(lowest + can_replicate - 1)
queue.append(lowest + can_replicate - 1)
infile.close()
return larger
sub_policy_name
super().__init__(xml_tags.Elements.BINDING)
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
acl_node = get_xml_node(xml_node, xml_tags.Elements.ACL, True)
policy_node = get_xml_node(xml_node, xml_tags.Elements.POLICY, True)
if acl_node is not None:
acl = Access_List.from_xml_node(acl_node)
else:
acl = None
if policy_node is not None:
policy = Policy.from_xml_node(policy_node)
else:
policy = None
default = get_xml_text_value(xml_node, xml_tags.Elements.DEFAULT)
from_zone_node = get_xml_node(xml_node, xml_tags.Elements.FROM_ZONE, True)
if from_zone_node:
from_zone = Rule_Binding_Zone.from_xml_node(from_zone_node)
else:
from_zone = None
to_zone_node = get_xml_node(xml_node, xml_tags.Elements.TO_ZONE, True)
if to_zone_node:
to_zone = Rule_Binding_Zone.from_xml_node(to_zone_node)
else:
to_zone = None
uid = get_xml_text_value(xml_node, xml_tags.Elements.UID)
rule_count = get_xml_text_value(xml_node, xml_tags.Elements.RULE_COUNT)
security_rule_count = get_xml_text_value(xml_node, xml_tags.Elements.SECURITY_RULE_COUNT)
direction = get_xml_text_value(xml_node, xml_tags.Elements.DIRECTION)
display_name = get_xml_text_value(xml_node, xml_tags.Elements.DISPLAY_NAME)
sub_policy_name = get_xml_text_value(xml_node, xml_tags.Elements.SUB_POLICY_NAME)
return cls(acl, policy, default, rule_count, from_zone, to_zone, security_rule_count, uid, direction,
display_name, sub_policy_name)
class Destination_Network(Base_Object):
def __init__(self, name, display_name, object_id=None, uid=None, implicit=None):
super().__init__(xml_tags.Elements.DST_NETWORK, name, display_name, object_id, uid, implicit)
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
name = get_xml_text_value(xml_node, xml_tags.Elements.NAME)
object_id = get_xml_int_value(xml_node, xml_tags.Elements.ID)
display_name = get_xml_text_value(xml_node, xml_tags.Elements.DISPLAY_NAME)
uid = get_xml_text_value(xml_node, xml_tags.Elements.UID)
implicit = get_xml_text_value(xml_node, xml_tags.Elements.IMPLICIT)
return cls(name, display_name, object_id, uid, implicit)
class Source_Network(Base_Object):
def __init__(self, name, display_name, object_id=None, uid=None, implicit=None):
super().__init__(xml_tags.Elements.SRC_NETWORK, name, display_name, object_id, uid, implicit)
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
name = get_xml_text_value(xml_node, xml_tags.Elements.NAME)
object_id = get_xml_int_value(xml_node, xml_tags.Elements.ID)
display_name = get_xml_text_value(xml_node, xml_tags.Elements.DISPLAY_NAME)
uid = get_xml_text_value(xml_node, xml_tags.Elements.UID)
implicit = get_xml_text_value(xml_node, xml_tags.Elements.IMPLICIT)
return cls(name, display_name, object_id, uid, implicit)
class Destination_Service(Base_Object):
def __init__(self, name, display_name, object_id=None, uid=None, implicit=None):
super().__init__(xml_tags.Elements.DST_SERVICE, name, display_name, object_id, uid, implicit)
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
name = get_xml_text_value(xml_node, xml_tags.Elements.NAME)
object_id = get_xml_int_value(xml_node, xml_tags.Elements.ID)
display_name = get_xml_text_value(xml_node, xml_tags.Elements.DISPLAY_NAME)
uid = get_xml_text_value(xml_node, xml_tags.Elements.UID)
implicit = get_xml_text_value(xml_node, xml_tags.Elements.IMPLICIT)
return cls(name, display_name, object_id, uid, implicit)
class RuleVPNOption(Base_Object):
def __init__(self, name, display_name, object_id=None):
super().__init__(xml_tags.Elements.VPN, name, display_name, object_id)
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
name = get_xml_text_value(xml_node, xml_tags.Elements.NAME)
object_id = get_xml_int_value(xml_node, xml_tags.Elements.ID)
display_name = get_xml_text_value(xml_node, xml_tags.Elements.DISPLAY_NAME)
return cls(name, display_name, object_id)
class Rule_Track(XML_Object_Base):
NONE = "NONE"
LOG = "LOG"
def __init__(self, interval, level=None):
self.level = level
self.interval = interval
super().__init__(xml_tags.Elements.TRACK)
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
level = get_xml_text_value(xml_node, xml_tags.Elements.LEVEL)
interval = get_xml_text_value(xml_node, xml_tags.Elements.INTERVAL)
return cls(interval, level)
def is_enabled(self):
return self.level != Rule_Track.NONE
def is_logged(self):
return self.level == Rule_Track.LOG
class Access_List(XML_Object_Base):
def __init__(self, is_global, interfaces=None, name=None):
self.global_ = is_global
self.interfaces = interfaces
self.name = name
super().__init__(xml_tags.Elements.ACL)
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
global_ = get_xml_text_value(xml_node, xml_tags.Elements.GLOBAL)
interfaces = Interface.from_xml_node(get_xml_node(xml_node, xml_tags.Elements.INTERFACES))
name = get_xml_text_value(xml_node, xml_tags.Elements.NAME)
return cls(global_, interfaces, name)
class Policy(XML_Object_Base):
def __init__(self, num_id, itg_id, itg, name, unique_active_in_itg=None):
self.id = num_id
self.itg = itg
self.itg_id = itg_id
self.name = name
self.unique_active_in_itg = unique_active_in_itg
super().__init__(xml_tags.Elements.POLICY)
@classmethod
def from_xml_node(cls, xml_node=None):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
num_id = get_xml_int_value(xml_node, xml_tags.Elements.ID)
itg = get_xml_text_value(xml_node, xml_tags.Elements.ITG)
itg_id = get_xml_int_value(xml_node, xml_tags.Elements.ITG_ID)
name = get_xml_text_value(xml_node, xml_tags.Elements.NAME)
unique_active_in_itg = get_xml_text_value(xml_node, xml_tags.Elements.UNIQUE_ACTIVE_IN_ITG)
return cls(num_id, itg_id, itg, name, unique_active_in_itg)
def __repr__(self):
return "Policy({id},{itg},{itg_id},'{name}',{unique_active_in_itg})".format(**self.__dict__)
class Interface_IP(XML_Object_Base, IPNetworkMixin):
def __init__(self, ip, netmask=None, precedence=None, visibility=None):
self.ip = ip
self.netmask = netmask
self.precedence = precedence
self.visibility = visibility
super().__init__(xml_tags.Elements.INTERFACE_IP)
def _get_ip_network(self):
"""
:rtype: netaddr.IPNetwork
"""
return netaddr.IPNetwork(self.ip + "/" + self.netmask)
def __str__(self):
return "{}/{}".format(self.ip, netmask_to_cidr(self.netmask))
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
ip = get_xml_text_value(xml_node, xml_tags.Elements.IP)
netmask = get_xml_text_value(xml_node, xml_tags.Elements.NETMASK)
precedence = get_xml_text_value(xml_node, xml_tags.Elements.PRECEDENCE)
visibility = get_xml_text_value(xml_node, xml_tags.Elements.VISIBILITY)
return cls(ip, netmask, precedence, visibility)
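Interface_IP.__str__ relies on netaddr and a netmask_to_cidr helper to render "ip/prefix". A minimal sketch of the same dotted-netmask-to-prefix conversion using only the standard library (using ipaddress is an assumption for illustration; the original uses netaddr):

```python
import ipaddress

def to_cidr(ip, netmask):
    # Convert an address plus dotted netmask to "ip/prefixlen",
    # e.g. 255.255.255.0 -> /24, as Interface_IP.__str__ does via netaddr.
    net = ipaddress.ip_network('{}/{}'.format(ip, netmask), strict=False)
    return '{}/{}'.format(ip, net.prefixlen)
```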
class Interface(XSI_Object):
def __init__(self, name, num_id, direction, device_id, acl_name, is_global, interface_ips=None):
self.name = name
self.id = num_id
self.direction = direction
self.device_id = device_id
self.acl_name = acl_name
self.interface_ips = interface_ips
self.global_ = is_global
super().__init__(xml_tags.Elements.INTERFACE, Attributes.INTERFACE_TYPE)
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
name = get_xml_text_value(xml_node, xml_tags.Elements.NAME)
num_id = get_xml_int_value(xml_node, xml_tags.Elements.ID)
direction = get_xml_text_value(xml_node, xml_tags.Elements.DIRECTION)
device_id = get_xml_int_value(xml_node, xml_tags.Elements.DEVICE_ID)
acl_name = get_xml_text_value(xml_node, xml_tags.Elements.ACL_NAME)
global_ = get_xml_text_value(xml_node, xml_tags.Elements.GLOBAL)
interface_ips = XML_List(xml_tags.Elements.INTERFACE_IPS)
for interface_ip_node in xml_node.iter(tag=xml_tags.Elements.INTERFACE_IP):
interface_ips.append(Interface_IP.from_xml_node(interface_ip_node))
return cls(name, num_id, direction, device_id, acl_name, global_, interface_ips)
class Topology_Interface(XML_Object_Base):
def __init__(self, device_id, ip, mask, name, virtual_router, zone):
self.device_id = device_id
self.ip = ip
self.mask = mask
self.name = name
self.virtual_router = virtual_router
self.zone = zone
super().__init__(xml_tags.Elements.INTERFACE)
@classmethod
def from_xml_node(cls, xml_node):
device_id = get_xml_int_value(xml_node, xml_tags.Elements.DEVICE_ID)
ip = get_xml_text_value(xml_node, xml_tags.Elements.IP)
mask = get_xml_text_value(xml_node, xml_tags.Elements.MASK)
name = get_xml_text_value(xml_node, xml_tags.Elements.NAME)
virtual_router = get_xml_text_value(xml_node, xml_tags.Elements.VIRTUAL_ROUTER)
zone = get_xml_text_value(xml_node, xml_tags.Elements.ZONE)
return cls(device_id, ip, mask, name, virtual_router, zone)
class Services_List(XML_List):
def __init__(self, services):
"""
:type services: list[Single_Service]
"""
self.services = services
super().__init__(xml_tags.Elements.SERVICES, services)
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
services = []
for service_node in xml_node.iter(tag=xml_tags.Elements.SERVICE):
if service_node.attrib[xml_tags.Attributes.XSI_NAMESPACE_TYPE] == xml_tags.Attributes.SERVICE_TYPE_SINGLE:
services.append(Single_Service.from_xml_node(service_node))
elif service_node.attrib[xml_tags.Attributes.XSI_NAMESPACE_TYPE] == xml_tags.Attributes.SERVICE_TYPE_GROUP:
services.append(Group_Service.from_xml_node(service_node))
else:
raise ValueError("Unknown service type '{0}'.".format(
service_node.attrib[xml_tags.Attributes.XSI_NAMESPACE_TYPE]))
return cls(services)
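Services_List.from_xml_node dispatches on each node's xsi:type attribute to pick the concrete class. A self-contained sketch of that dispatch pattern with xml.etree (the tag name and type values here are illustrative, not the real xml_tags constants):

```python
import xml.etree.ElementTree as ET

# ElementTree expands namespaced attribute names to Clark notation: {uri}local.
XSI_TYPE = '{http://www.w3.org/2001/XMLSchema-instance}type'

def service_types(xml_text):
    # Collect the xsi:type of every <service> node, mirroring the loop that
    # routes nodes to Single_Service or Group_Service.
    root = ET.fromstring(xml_text)
    return [node.attrib[XSI_TYPE] for node in root.iter('service')]
```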
class Single_Service(Service):
def __init__(self, service_id, display_name, is_global, name, service_type, protocol, port_min, port_max, negate,
comment, uid=None, class_name=None, implicit=None, timeout=None):
self.protocol = protocol
self.min = port_min
self.max = port_max
self.negate = negate
self.comment = comment
self.class_name = class_name
self.timeout = timeout
super().__init__(xml_tags.Elements.SERVICE, service_id, display_name, is_global, name, service_type,
xml_tags.Attributes.SERVICE_TYPE_SINGLE, uid, implicit)
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
protocol = get_xml_int_value(xml_node, xml_tags.Elements.PROTOCOL)
negate = get_xml_text_value(xml_node, xml_tags.Elements.NEGATE)
port_min = get_xml_int_value(xml_node, xml_tags.Elements.MIN)
port_max = get_xml_int_value(xml_node, xml_tags.Elements.MAX)
comment = get_xml_text_value(xml_node, xml_tags.Elements.COMMENT)
display_name = get_xml_text_value(xml_node, xml_tags.Elements.DISPLAY_NAME)
is_global = get_xml_text_value(xml_node, xml_tags.Elements.GLOBAL)
name = get_xml_text_value(xml_node, xml_tags.Elements.NAME)
service_type = get_xml_text_value(xml_node, xml_tags.Elements.TYPE)
service_id = get_xml_int_value(xml_node, xml_tags.Elements.ID)
uid = get_xml_text_value(xml_node, xml_tags.Elements.UID)
implicit = get_xml_text_value(xml_node, xml_tags.Elements.IMPLICIT)
class_name = get_xml_text_value(xml_node, xml_tags.Elements.CLASS_NAME)
timeout = get_xml_text_value(xml_node, xml_tags.Elements.TIMEOUT)
return cls(service_id, display_name, is_global, name, service_type, protocol, port_min, port_max, negate,
comment, uid, class_name, implicit, timeout)
def __str__(self):
iana_protocols = get_iana_protocols()
if self.min == self.max:
return "{} {}".format(iana_protocols[int(self.protocol)], self.min)
else:
return "{} {}-{}".format(iana_protocols[int(self.protocol)], self.min, self.max)
def as_service_type(self):
if self.protocol is not None:
if self.min == self.max:
return Single_Service_Type(self.protocol, self.min)
else:
return Range_Service_Type(self.protocol, self.min, self.max)
else:
return Any_Service_Type()
class Group_Service(Service):
def __init__(self, service_id, display_name, is_global, name, service_type, members, uid=None, implicit=None):
self.members = members
super().__init__(xml_tags.Elements.SERVICE, service_id, display_name, is_global, name, service_type,
xml_tags.Attributes.SERVICE_TYPE_GROUP, uid, implicit)
@classmethod
def from_xml_node(cls, xml_node):
"""
Initialize the object from an XML node.
:param xml_node: The XML node from which all necessary parameters will be parsed.
:type xml_node: xml.etree.Element
"""
display_name = get_xml_text_value(xml_node, xml_tags.Elements.DISPLAY_NAME)
is_global = get_xml_text_value(xml_node, xml_tags.Elements.GLOBAL)
service_id = get_xml_int_value(xml_node, xml_tags.Elements.ID)
name = get_xml_text_value(xml_node, xml_tags.Elements.NAME)
service_type = get_xml_text_value(xml_node, xml_tags.Elements.TYPE)
members = XML_List(xml_tags.Elements.MEMBERS, [])
for member_node in xml_node.iter(tag=xml_tags.Elements.MEMBER):
member_id = get_xml_int_value(member_node, xml_tags.Elements.ID)
member_display_name = get_xml_text_value(member_node, xml_tags.Elements.DISPLAY_NAME)
member_name = get_xml_text_value(member_node, xml_tags.Elements.NAME)
members.append(Base_Object(xml_tags.Elements.MEMBER, member_name, member_display_name, member_id))
uid = get_xml_text_value(xml_node, xml_tags.Elements.UID)
implicit = get_xml_text_value(xml_node, xml_tags.Elements.IMPLICIT)
return cls(service_id, display_name, is_global, name, service_type, members, uid, implicit)
def __str__(self):
spacer = 4 * " "
if self.members:
return "{}Members:\n{}{}".format(spacer, 2 * spacer, "\n{}".format(2 * spacer).join(
[member.display_name for member in self.members]))
else:
return "{}No members".format(spacer)
def as_service_type(self):
return Group_Service_Type(self.members)
class Network_Objects_List(XML_List):
def __init__(self, network_objects):
"""
:type network_objects: list[T <= | |
from __future__ import print_function
from __future__ import absolute_import
from __future__ import division
from itertools import product
from compas.plugins import pluggable
from compas.geometry import Point
from compas.utilities import linspace
from compas.utilities import meshgrid
from .surface import Surface
@pluggable(category='factories')
def new_nurbssurface(*args, **kwargs):
raise NotImplementedError
@pluggable(category='factories')
def new_nurbssurface_from_parameters(*args, **kwargs):
raise NotImplementedError
@pluggable(category='factories')
def new_nurbssurface_from_points(*args, **kwargs):
raise NotImplementedError
@pluggable(category='factories')
def new_nurbssurface_from_fill(*args, **kwargs):
raise NotImplementedError
@pluggable(category='factories')
def new_nurbssurface_from_step(*args, **kwargs):
raise NotImplementedError
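The @pluggable stubs above delegate construction to whichever backend registers an implementation; a minimal sketch of that registry pattern (the names below are hypothetical; compas.plugins has its own machinery):

```python
_REGISTRY = {}

def register(name, func):
    # A backend plugin (e.g. an OCC-based one) registers its factory under a name.
    _REGISTRY[name] = func

def call_pluggable(name, *args, **kwargs):
    # Dispatch to the registered implementation, or fail like the stubs above.
    impl = _REGISTRY.get(name)
    if impl is None:
        raise NotImplementedError(name)
    return impl(*args, **kwargs)
```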
class NurbsSurface(Surface):
"""Class representing a NURBS surface.
Attributes
----------
points: List[List[Point]]
The control points of the surface.
weights: List[List[float]]
The weights of the control points.
u_knots: List[float]
The knot vector, in the U direction, without duplicates.
v_knots: List[float]
The knot vector, in the V direction, without duplicates.
u_mults: List[int]
The multiplicities of the knots in the knot vector of the U direction.
v_mults: List[int]
The multiplicities of the knots in the knot vector of the V direction.
u_degree: int
The degree of the polynomials in the U direction.
v_degree: int
The degree of the polynomials in the V direction.
u_domain: Tuple[float, float]
The parameter domain in the U direction.
v_domain: Tuple[float, float]
The parameter domain in the V direction.
is_u_periodic: bool
True if the surface is periodic in the U direction.
is_v_periodic: bool
True if the surface is periodic in the V direction.
"""
@property
def DATASCHEMA(self):
from schema import Schema
from compas.data import is_float3
from compas.data import is_sequence_of_int
from compas.data import is_sequence_of_float
return Schema({
'points': lambda points: all(is_float3(point) for point in points),
'weights': is_sequence_of_float,
'u_knots': is_sequence_of_float,
'v_knots': is_sequence_of_float,
'u_mults': is_sequence_of_int,
'v_mults': is_sequence_of_int,
'u_degree': int,
'v_degree': int,
'is_u_periodic': bool,
'is_v_periodic': bool
})
@property
def JSONSCHEMANAME(self):
raise NotImplementedError
def __new__(cls, *args, **kwargs):
return new_nurbssurface(*args, **kwargs)
def __init__(self, name=None):
super(NurbsSurface, self).__init__(name=name)
def __eq__(self, other):
raise NotImplementedError
def __str__(self):
lines = [
'NurbsSurface',
'------------',
'Points: {}'.format(self.points),
'Weights: {}'.format(self.weights),
'U Knots: {}'.format(self.u_knots),
'V Knots: {}'.format(self.v_knots),
'U Mults: {}'.format(self.u_mults),
'V Mults: {}'.format(self.v_mults),
'U Degree: {}'.format(self.u_degree),
'V Degree: {}'.format(self.v_degree),
'U Domain: {}'.format(self.u_domain),
'V Domain: {}'.format(self.v_domain),
'U Periodic: {}'.format(self.is_u_periodic),
'V Periodic: {}'.format(self.is_v_periodic),
]
return "\n".join(lines)
# ==============================================================================
# Data
# ==============================================================================
@property
def dtype(self):
"""str : The type of the object in the form of a '2-level' import and a class name."""
return 'compas.geometry/NurbsSurface'
@property
def data(self):
return {
'points': [[point.data for point in row] for row in self.points],
'weights': self.weights,
'u_knots': self.u_knots,
'v_knots': self.v_knots,
'u_mults': self.u_mults,
'v_mults': self.v_mults,
'u_degree': self.u_degree,
'v_degree': self.v_degree,
'is_u_periodic': self.is_u_periodic,
'is_v_periodic': self.is_v_periodic
}
@data.setter
def data(self, data):
raise NotImplementedError
@classmethod
def from_data(cls, data):
"""Construct a NURBS surface from its data representation.
Parameters
----------
data : dict
The data dictionary.
Returns
-------
:class:`compas.geometry.NurbsSurface`
The constructed surface.
"""
points = [[Point.from_data(point) for point in row] for row in data['points']]
weights = data['weights']
u_knots = data['u_knots']
v_knots = data['v_knots']
u_mults = data['u_mults']
v_mults = data['v_mults']
u_degree = data['u_degree']
v_degree = data['v_degree']
is_u_periodic = data['is_u_periodic']
is_v_periodic = data['is_v_periodic']
return cls.from_parameters(
points,
weights,
u_knots, v_knots,
u_mults, v_mults,
u_degree, v_degree,
is_u_periodic, is_v_periodic
)
# ==============================================================================
# Constructors
# ==============================================================================
@classmethod
def from_parameters(cls, points, weights, u_knots, v_knots, u_mults, v_mults, u_degree, v_degree, is_u_periodic=False, is_v_periodic=False):
"""Construct a NURBS surface from explicit parameters.
Parameters
----------
points : List[List[:class:`compas.geometry.Point`]]
The control points.
weights : List[List[float]]
The weights of the control points.
u_knots : List[float]
The knots in the U direction, without multiplicity.
v_knots : List[float]
The knots in the V direction, without multiplicity.
u_mults : List[int]
Multiplicity of the knots in the U direction.
v_mults : List[int]
Multiplicity of the knots in the V direction.
u_degree : int
Degree in the U direction.
v_degree : int
Degree in the V direction.
is_u_periodic : bool, optional
True if the surface should be periodic in the U direction.
is_v_periodic : bool, optional
True if the surface should be periodic in the V direction.
Returns
-------
:class:`compas.geometry.NurbsSurface`
"""
return new_nurbssurface_from_parameters(
points,
weights,
u_knots,
v_knots,
u_mults,
v_mults,
u_degree,
v_degree,
is_u_periodic=is_u_periodic,
is_v_periodic=is_v_periodic
)
@classmethod
def from_points(cls, points, u_degree=3, v_degree=3):
"""Construct a NURBS surface from control points.
Parameters
----------
points : List[List[:class:`compas.geometry.Point`]]
The control points.
u_degree : int
Degree in the U direction.
v_degree : int
Degree in the V direction.
Returns
-------
:class:`compas.geometry.NurbsSurface`
"""
return new_nurbssurface_from_points(points, u_degree=u_degree, v_degree=v_degree)
@classmethod
def from_meshgrid(cls, nu=10, nv=10):
"""Construct a NURBS surface from a mesh grid.
Parameters
----------
nu : int, optional
Number of control points in the U direction.
nv : int, optional
Number of control points in the V direction.
Returns
-------
:class:`compas.geometry.NurbsSurface`
"""
UU, VV = meshgrid(linspace(0, nu, nu + 1), linspace(0, nv, nv + 1))
points = []
for U, V in zip(UU, VV):
row = []
for u, v in zip(U, V):
row.append(Point(u, v, 0.0))
points.append(row)
return cls.from_points(points=points)
@classmethod
def from_step(cls, filepath):
"""Load a NURBS surface from a STP file.
Parameters
----------
filepath : str
Returns
-------
:class:`compas.geometry.NurbsSurface`
"""
return new_nurbssurface_from_step(filepath)
@classmethod
def from_fill(cls, curve1, curve2):
"""Construct a NURBS surface from the infill between two NURBS curves.
Parameters
----------
curve1 : :class:`compas.geometry.NurbsCurve`
The first boundary curve.
curve2 : :class:`compas.geometry.NurbsCurve`
The second boundary curve.
Returns
-------
:class:`compas.geometry.NurbsSurface`
"""
return new_nurbssurface_from_fill(curve1, curve2)
# ==============================================================================
# Conversions
# ==============================================================================
def to_step(self, filepath, schema="AP203"):
"""Write the surface geometry to a STP file.
Parameters
----------
filepath : str
schema : str, optional
Returns
-------
None
"""
raise NotImplementedError
def to_mesh(self, nu=100, nv=None):
"""Convert the surface to a quad mesh.
Parameters
----------
nu : int, optional
Number of faces in the U direction.
nv : int, optional
Number of faces in the V direction.
Returns
-------
:class:`compas.datastructures.Mesh`
"""
from compas.datastructures import Mesh
nv = nv or nu
vertices = [self.point_at(i, j) for i, j in product(self.u_space(nu + 1), self.v_space(nv + 1))]
faces = [[
i * (nv + 1) + j,
(i + 1) * (nv + 1) + j,
(i + 1) * (nv + 1) + j + 1,
i * (nv + 1) + j + 1
] for i, j in product(range(nu), range(nv))]
return Mesh.from_vertices_and_faces(vertices, faces)
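The face construction above maps each cell (i, j) of the (nu x nv) parameter grid onto four row-major vertex indices over the (nu + 1) x (nv + 1) grid of sampled points. The same indexing as a standalone sketch:

```python
from itertools import product

def grid_quads(nu, nv):
    # vertex indices of every quad of an (nu x nv) cell grid whose
    # vertices are stored row-major over a (nu + 1) x (nv + 1) grid
    return [[
        i * (nv + 1) + j,
        (i + 1) * (nv + 1) + j,
        (i + 1) * (nv + 1) + j + 1,
        i * (nv + 1) + j + 1,
    ] for i, j in product(range(nu), range(nv))]

faces = grid_quads(2, 2)
# 2 x 2 cells -> 4 quads; the first quad uses vertices 0, 3, 4, 1
```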
def to_triangles(self, nu=100, nv=None):
"""Convert the surface to a list of triangles.
Parameters
----------
nu : int, optional
Number of quads in the U direction.
Every quad has two triangles.
nv : int, optional
Number of quads in the V direction.
Every quad has two triangles.
Returns
-------
List[:class:`compas.geometry.Point`]
A flat list of points, three per triangle.
"""
import numpy as np
from functools import lru_cache
@lru_cache(maxsize=None)
def point_at(i, j):
return self.point_at(i, j)
nv = nv or nu
V, U = np.meshgrid(self.v_space(nv + 1), self.u_space(nu + 1), indexing='ij')
tris = [None] * (6 * nu * nv)
index = 0
for i, j in product(range(nv), range(nu)):
tris[index + 0] = point_at(U[i + 0][j + 0], V[i + 0][j + 0])
tris[index + 1] = point_at(U[i + 0][j + 1], V[i + 0][j + 1])
tris[index + 2] = point_at(U[i + 1][j + 1], V[i + 1][j + 1])
tris[index + 3] = point_at(U[i + 0][j + 0], V[i + 0][j + 0])
tris[index + 4] = point_at(U[i + 1][j + 1], V[i + 1][j + 1])
tris[index + 5] = point_at(U[i + 1][j + 0], V[i + 1][j + 0])
index += 6
return tris
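Each quad of the parameter grid is emitted as two triangles sharing the diagonal between the cell's first and third corners, which is why the output holds six points per cell. In index form:

```python
def quad_triangles(a, b, c, d):
    # split the quad (a, b, c, d) along the a-c diagonal,
    # matching the point order used in to_triangles
    return [a, b, c, a, c, d]

tris = quad_triangles(0, 1, 4, 3)
# -> [0, 1, 4, 0, 4, 3]
```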
# ==============================================================================
# Properties
# ==============================================================================
@property
def points(self):
raise NotImplementedError
@property
def weights(self):
raise NotImplementedError
@property
def u_knots(self):
raise NotImplementedError
@property
def v_knots(self):
raise NotImplementedError
@property
def u_mults(self):
raise NotImplementedError
@property
def v_mults(self):
raise NotImplementedError
@property
def u_degree(self):
raise NotImplementedError
@property
def v_degree(self):
raise NotImplementedError
@property
def u_domain(self):
raise NotImplementedError
@property
def v_domain(self):
raise NotImplementedError
@property
def is_u_periodic(self):
raise NotImplementedError
@property
def is_v_periodic(self):
raise NotImplementedError
# ==============================================================================
# Methods
# ==============================================================================
def copy(self):
"""Make an independent copy of the surface."""
return NurbsSurface.from_parameters(
self.points,
self.weights,
self.u_knots,
self.v_knots,
self.u_mults,
self.v_mults,
self.u_degree,
self.v_degree,
self.is_u_periodic,
self.is_v_periodic
)
def transform(self, T):
"""Transform this surface.
Parameters
----------
T : :class:`compas.geometry.Transformation`
Returns
-------
None
"""
raise NotImplementedError
def transformed(self, T):
"""Transform an independent copy of this surface.
Parameters
----------
T : :class:`compas.geometry.Transformation`
Returns
-------
:class:`compas.geometry.NurbsSurface`
"""
copy = self.copy()
copy.transform(T)
return copy
def u_space(self, n=10):
"""Compute evenly spaced parameters over the surface domain in the U direction.
Parameters
----------
n : int, optional
The number of parameters.
Returns
-------
list of float
"""
umin, umax = self.u_domain
return linspace(umin, umax, n)
def v_space(self, n=10):
"""Compute evenly spaced parameters over the surface domain in the V direction.
Parameters
----------
n : int, optional
The number of parameters.
Returns
-------
list of float
"""
vmin, vmax = self.v_domain
return linspace(vmin, vmax, n)
A class, named "ForcedEven<name of iterator_cls>", based
on ForcedEvenIterator, that wraps iterator_cls.
"""
assert issubclass(iterator_cls, SubsetIterator)
dct = ForcedEvenIterator.__dict__.copy()
dct["_base_iterator_cls"] = iterator_cls
dct["fancy"] = iterator_cls.fancy
dct["stochastic"] = iterator_cls.stochastic
NewForcedEvenClass = type("ForcedEven%s" % iterator_cls.__name__,
ForcedEvenIterator.__bases__, dct)
return NewForcedEvenClass
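The factory above builds the wrapper class at runtime with the three-argument form of `type`, copying ForcedEvenIterator's namespace and pinning the wrapped class plus its `fancy`/`stochastic` flags. A self-contained sketch of the same pattern (class names here are hypothetical):

```python
class BaseIterator(object):
    fancy = False
    stochastic = True

def make_forced_even(iterator_cls):
    # assemble the class dict, then create the class dynamically
    dct = {"_base_iterator_cls": iterator_cls,
           "fancy": iterator_cls.fancy,
           "stochastic": iterator_cls.stochastic}
    return type("ForcedEven%s" % iterator_cls.__name__, (object,), dct)

Wrapped = make_forced_even(BaseIterator)
# Wrapped.__name__ == 'ForcedEvenBaseIterator'
```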
class SequentialSubsetIterator(SubsetIterator):
"""
Returns mini-batches proceeding sequentially through the dataset.
Notes
-----
Returns slice objects to represent ranges of indices (`fancy = False`).
See :py:class:`SubsetIterator` for detailed constructor parameter
and attribute documentation.
"""
def __init__(self, dataset_size, batch_size, num_batches, rng=None):
if rng is not None:
raise ValueError("non-None rng argument not supported for "
"sequential batch iteration")
assert num_batches is None or num_batches >= 0
self._dataset_size = dataset_size
if batch_size is None:
if num_batches is not None:
batch_size = int(np.ceil(self._dataset_size / num_batches))
else:
raise ValueError("need one of batch_size, num_batches "
"for sequential batch iteration")
elif batch_size is not None:
if num_batches is not None:
max_num_batches = np.ceil(self._dataset_size / batch_size)
if num_batches > max_num_batches:
raise ValueError("dataset of %d examples can only provide "
"%d batches with batch_size %d, but %d "
"batches were requested" %
(self._dataset_size, max_num_batches,
batch_size, num_batches))
else:
num_batches = int(np.ceil(self._dataset_size / batch_size))
self._batch_size = batch_size
self._num_batches = num_batches
self._next_batch_no = 0
self._idx = 0
self._batch = 0
@wraps(SubsetIterator.next, assigned=(), updated=())
def next(self):
if self._batch >= self.num_batches or self._idx >= self._dataset_size:
raise StopIteration()
# this fixes the case where dataset_size % batch_size != 0
elif (self._idx + self._batch_size) > self._dataset_size:
self._last = slice(self._idx, self._dataset_size)
self._idx = self._dataset_size
return self._last
else:
self._last = slice(self._idx, self._idx + self._batch_size)
self._idx += self._batch_size
self._batch += 1
return self._last
def __next__(self):
return self.next()
fancy = False
stochastic = False
uniform_batch_size = False
@property
@wraps(SubsetIterator.num_examples, assigned=(), updated=())
def num_examples(self):
product = self.batch_size * self.num_batches
return min(product, self._dataset_size)
@property
@wraps(SubsetIterator.uneven, assigned=(), updated=())
def uneven(self):
return self.batch_size * self.num_batches > self._dataset_size
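The iterator above boils down to advancing a cursor by batch_size and shortening the final slice when dataset_size is not a multiple of batch_size. A minimal sketch:

```python
def sequential_slices(dataset_size, batch_size):
    # yield contiguous index slices covering the dataset in order;
    # the last slice is shorter when the sizes don't divide evenly
    idx = 0
    while idx < dataset_size:
        stop = min(idx + batch_size, dataset_size)
        yield slice(idx, stop)
        idx = stop

batches = list(sequential_slices(10, 4))
# -> [slice(0, 4), slice(4, 8), slice(8, 10)]
```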
class ShuffledSequentialSubsetIterator(SequentialSubsetIterator):
"""
Randomly shuffles the example indices and then proceeds sequentially
through the permutation.
Notes
-----
Returns lists of indices (`fancy = True`).
See :py:class:`SubsetIterator` for detailed constructor parameter
and attribute documentation.
"""
stochastic = True
fancy = True
uniform_batch_size = False
def __init__(self, dataset_size, batch_size, num_batches, rng=None):
super(ShuffledSequentialSubsetIterator, self).__init__(
dataset_size,
batch_size,
num_batches,
None
)
self._rng = make_np_rng(rng, which_method=["random_integers",
"shuffle"])
self._shuffled = np.arange(self._dataset_size)
self._rng.shuffle(self._shuffled)
@wraps(SubsetIterator.next)
def next(self):
if self._batch >= self.num_batches or self._idx >= self._dataset_size:
raise StopIteration()
# this fixes the case where dataset_size % batch_size != 0
elif (self._idx + self._batch_size) > self._dataset_size:
rval = self._shuffled[self._idx: self._dataset_size]
self._idx = self._dataset_size
return rval
else:
rval = self._shuffled[self._idx: self._idx + self._batch_size]
self._idx += self._batch_size
self._batch += 1
return rval
def __next__(self):
return self.next()
class RandomUniformSubsetIterator(SubsetIterator):
"""
Selects minibatches of examples by drawing indices uniformly
at random, with replacement.
Notes
-----
Returns lists of indices (`fancy = True`).
See :py:class:`SubsetIterator` for detailed constructor parameter
and attribute documentation.
"""
def __init__(self, dataset_size, batch_size, num_batches, rng=None):
self._rng = make_np_rng(rng, which_method=["random_integers",
"shuffle"])
if batch_size is None:
raise ValueError("batch_size cannot be None for random uniform "
"iteration")
elif num_batches is None:
raise ValueError("num_batches cannot be None for random uniform "
"iteration")
self._dataset_size = dataset_size
self._batch_size = batch_size
self._num_batches = num_batches
self._next_batch_no = 0
@wraps(SubsetIterator.next)
def next(self):
if self._next_batch_no >= self._num_batches:
raise StopIteration()
else:
self._last = self._rng.random_integers(low=0,
high=self._dataset_size - 1,
size=(self._batch_size,))
self._next_batch_no += 1
return self._last
def __next__(self):
return self.next()
fancy = True
stochastic = True
uniform_batch_size = True
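Because the class above samples with replacement, every batch is an independent draw: examples can repeat within and across batches, and some may never be visited in an "epoch". A stdlib sketch of one such draw:

```python
import random

def random_uniform_batch(dataset_size, batch_size, rng):
    # draw indices uniformly at random, with replacement
    return [rng.randrange(dataset_size) for _ in range(batch_size)]

rng = random.Random(42)
batch = random_uniform_batch(10, 5, rng)
# always batch_size indices, each in [0, dataset_size)
```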
class RandomSliceSubsetIterator(RandomUniformSubsetIterator):
"""
Returns minibatches that are randomly selected contiguous slices in
index space.
Notes
-----
Returns slice objects to represent ranges of indices (`fancy = False`).
See :py:class:`SubsetIterator` for detailed constructor parameter
and attribute documentation.
"""
def __init__(self, dataset_size, batch_size, num_batches, rng=None):
if batch_size is None:
raise ValueError("batch_size cannot be None for random slice "
"iteration")
elif num_batches is None:
raise ValueError("num_batches cannot be None for random slice "
"iteration")
super(RandomSliceSubsetIterator, self).__init__(dataset_size,
batch_size,
num_batches, rng)
self._last_start = self._dataset_size - self._batch_size
if self._last_start < 0:
raise ValueError("batch_size > dataset_size not supported for "
"random slice iteration")
@wraps(SubsetIterator.next)
def next(self):
if self._next_batch_no >= self._num_batches:
raise StopIteration()
else:
start = self._rng.random_integers(low=0, high=self._last_start)
self._last = slice(start, start + self._batch_size)
self._next_batch_no += 1
return self._last
def __next__(self):
return self.next()
fancy = False
stochastic = True
uniform_batch_size = True
class BatchwiseShuffledSequentialIterator(SequentialSubsetIterator):
"""
Returns minibatches randomly, but sequential inside each minibatch.
Notes
-----
Returns slice objects to represent ranges of indices (`fancy = False`).
See :py:class:`SubsetIterator` for detailed constructor parameter
and attribute documentation.
"""
def __init__(self, dataset_size, batch_size, num_batches=None, rng=None):
self._rng = make_np_rng(rng, which_method=["random_integers",
"shuffle"])
assert num_batches is None or num_batches >= 0
self._dataset_size = dataset_size
if batch_size is None:
if num_batches is not None:
batch_size = int(np.ceil(self._dataset_size / num_batches))
else:
raise ValueError("need one of batch_size, num_batches "
"for sequential batch iteration")
elif batch_size is not None:
if num_batches is not None:
max_num_batches = np.ceil(self._dataset_size / batch_size)
if num_batches > max_num_batches:
raise ValueError("dataset of %d examples can only provide "
"%d batches with batch_size %d, but %d "
"batches were requested" %
(self._dataset_size, max_num_batches,
batch_size, num_batches))
else:
num_batches = np.ceil(self._dataset_size / batch_size)
self._batch_size = batch_size
self._num_batches = int(num_batches)
self._next_batch_no = 0
self._idx = 0
self._batch_order = list(range(self._num_batches))
self._rng.shuffle(self._batch_order)
@wraps(SubsetIterator.next)
def next(self):
if self._next_batch_no >= self._num_batches:
raise StopIteration()
else:
start = self._batch_order[self._next_batch_no] * self._batch_size
if start + self._batch_size > self._dataset_size:
self._last = slice(start, self._dataset_size)
else:
self._last = slice(start, start + self._batch_size)
self._next_batch_no += 1
return self._last
def __next__(self):
return self.next()
fancy = False
stochastic = True
uniform_batch_size = False
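The class above fixes the batch boundaries first and shuffles only the order in which batches are visited, so indices inside each slice stay sequential and every example appears exactly once. A compact stdlib sketch:

```python
import random

def batchwise_shuffled_slices(dataset_size, batch_size, seed=0):
    num_batches = -(-dataset_size // batch_size)  # ceil division
    order = list(range(num_batches))
    random.Random(seed).shuffle(order)
    # each slice keeps its sequential interior; only the visit order is random
    return [slice(b * batch_size, min((b + 1) * batch_size, dataset_size))
            for b in order]

slices = batchwise_shuffled_slices(10, 4)
covered = sorted(i for s in slices for i in range(s.start, s.stop))
# every example is covered exactly once
```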
class EvenSequencesSubsetIterator(SubsetIterator):
"""
An iterator for datasets with sequential data (e.g. list of words)
which returns a list of indices of sequences in the dataset which have
the same length.
Within one minibatch all sequences will have the same length, so it
might return minibatches with different sizes depending on the
distribution of the lengths of sequences in the data.
Notes
-----
Returns lists of indices (`fancy = True`).
Parameters
----------
sequence_data : list of lists or ndarray of objects (ndarrays)
The sequential data used to determine indices within the dataset such
that within a minibatch all sequences will have same lengths.
See :py:class:`SubsetIterator` for detailed constructor parameter
and attribute documentation.
"""
def __init__(self, sequence_data, batch_size, num_batches=None, rng=None):
self._rng = make_np_rng(rng, which_method=["random_integers",
"shuffle"])
if batch_size is None:
raise ValueError("batch_size cannot be None for random uniform "
"iteration")
if num_batches is not None:
raise ValueError("EvenSequencesSubsetIterator doesn't support"
" fixed number of batches")
if isinstance(sequence_data, list):
self._dataset_size = len(sequence_data)
elif isinstance(sequence_data, np.ndarray):
self._dataset_size = sequence_data.shape[0]
else:
raise ValueError("sequence_data must be of type list or"
" ndarray")
self._sequence_data = sequence_data
self._batch_size = batch_size
self.prepare()
self.reset()
def prepare(self):
# find unique lengths in sequences
self.lengths = [len(s) for s in self._sequence_data]
self.len_unique = np.unique(self.lengths)
# store the indices of sequences for each unique length,
# and their counts
self.len_indices = dict()
self.len_counts = dict()
for ll in self.len_unique:
self.len_indices[ll] = np.where(self.lengths == ll)[0]
self.len_counts[ll] = len(self.len_indices[ll])
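`prepare` above is essentially a bucket sort of sequence indices keyed by length. The same bookkeeping in plain Python (the helper name is made up):

```python
from collections import defaultdict

def bucket_by_length(sequences):
    # map each distinct sequence length to the indices that have it
    buckets = defaultdict(list)
    for i, seq in enumerate(sequences):
        buckets[len(seq)].append(i)
    return dict(buckets)

buckets = bucket_by_length([[7], [1, 2], [9], [4, 5, 6]])
# -> {1: [0, 2], 2: [1], 3: [3]}
```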
def reset(self):
# make a copy of the number of sequences that share a specific length
self.len_curr_counts = copy.copy(self.len_counts)
# permute the array of unique lengths every epoch
self.len_unique = self._rng.permutation(self.len_unique)
self.len_indices_pos = dict()
# save current total counts to decide when to stop iteration
self.total_curr_counts = 0
for ll in self.len_unique:
# keep a pointer to where we should start picking our minibatch of
# same length sequences
self.len_indices_pos[ll] = 0
# permute the array of indices of sequences with specific lengths
# every epoch
self.len_indices[ll] = self._rng.permutation(self.len_indices[ll])
self.total_curr_counts += len(self.len_indices[ll])
self.len_idx = -1
@wraps(SubsetIterator.next)
def next(self):
# stop when there are no more sequences left
if self.total_curr_counts == 0:
self.reset()
raise StopIteration()
# pick a length from the permuted array of lengths
while True:
self.len_idx = np.mod(self.len_idx+1, len(self.len_unique))
curr_len = self.len_unique[self.len_idx]
if self.len_curr_counts[curr_len] > 0:
break
# find the position and the size of the minibatch of sequences
# to be returned
curr_batch_size = np.minimum(self._batch_size,
self.len_curr_counts[curr_len])
curr_pos = self.len_indices_pos[curr_len]
# get the actual indices for the sequences
curr_indices = self.len_indices[curr_len][curr_pos:curr_pos +
curr_batch_size]
# update the pointer and counts of sequences in the chosen length
self.len_indices_pos[curr_len] += curr_batch_size
self.len_curr_counts[curr_len] -= curr_batch_size
self.total_curr_counts -= curr_batch_size
return curr_indices
# sputils.interp requires the x-coordinates to be increasing, so reverse the gcm arrays
# outside the range of Zf, interp returns the first or the last point of the range
h = les.get_zf()
thl = sputils.interp(h, Zf[::-1], thl_[::-1])
qt = sputils.interp(h, Zf[::-1], qt_[::-1])
ql = sputils.interp(h, Zf[::-1], QL[::-1])
u = sputils.interp(h, Zf[::-1], U[::-1])
v = sputils.interp(h, Zf[::-1], V[::-1])
if write:
spio.write_les_data(les, U=U.value_in(units.m / units.s),
V=V.value_in(units.m / units.s),
T=T.value_in(units.K),
SH=SH.value_in(units.shu),
QL=QL.value_in(units.mfu),
QI=QI.value_in(units.mfu), # A.value_in(units.ccu)
Pf=Pf.value_in(units.Pa),
Ph=Ph[1:].value_in(units.Pa),
Zf=Zf.value_in(units.m),
Zh=Zh[1:].value_in(units.m),
Psurf=Ph[-1].value_in(units.Pa),
Tv=Tv.value_in(units.K),
THL=thl_.value_in(units.K),
QT=qt_.value_in(units.mfu))
return u, v, thl, qt, Ph[-1], ql
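`sputils.interp` behaves like `numpy.interp`: it needs increasing x-coordinates (hence the `[::-1]` reversals of the top-down GCM profiles) and clamps to the end points outside the range. A pure-Python sketch of that behavior (the heights and temperatures below are made up for illustration):

```python
def interp(x, xp, fp):
    # piecewise-linear interpolation with end-point clamping (np.interp-like);
    # xp must be increasing
    out = []
    for xi in x:
        if xi <= xp[0]:
            out.append(fp[0])
        elif xi >= xp[-1]:
            out.append(fp[-1])
        else:
            k = next(i for i in range(1, len(xp)) if xp[i] >= xi)
            t = (xi - xp[k - 1]) / (xp[k] - xp[k - 1])
            out.append(fp[k - 1] + t * (fp[k] - fp[k - 1]))
    return out

Zf = [3000.0, 1500.0, 500.0]   # decreasing heights, as in the GCM profiles
T = [260.0, 275.0, 285.0]
t = interp([250.0, 1000.0], Zf[::-1], T[::-1])
# 250 m clamps to the lowest level; 1000 m interpolates -> [285.0, 280.0]
```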
# calculates QT and THL etc for GCM profiles for extra output columns
# like convert_profiles() for the les columns
def output_column_conversion(profile):
c = sputils.rv / sputils.rd - 1 # epsilon^(-1) -1 = 0.61
profile['Tv'] = profile['T'] * (1 + c * profile['SH'] - (profile['QL'] + profile['QI']))
Zh = (Zghalf-Zghalf[-1])/sputils.grav
Zf = (Zgfull-Zghalf[-1])/sputils.grav
profile['Zh'] = Zh[1:]
profile['Zf'] = Zf[:]
profile['Psurf'] = profile['Ph'][-1]
profile['Ph'] = profile['Ph'][1:]
profile['THL'] = (profile['T'] - (sputils.rlv * (profile['QL'] + profile['QI'])) / sputils.cp) * sputils.iexner(
profile['Pf'])
profile['QT'] = profile['SH'] + profile['QL'] + profile['QI']
# set the dales state
# if u, v, thl, qt are vertical profiles, numpy broadcasting stretches them to 3D fields
# The values are randomly perturbed in the interval [-w,w]
# TODO check w with DALES input files
def set_les_state(les, u, v, thl, qt, ps=None):
itot, jtot, ktot = les.get_itot(), les.get_jtot(), les.get_ktot()
# tiny noise used until feb 2018
# vabsmax,qabsmax = 0.001,0.00001
# les.set_field('U', numpy.random.uniform(-vabsmax, vabsmax, (itot, jtot, ktot)) + u)
# les.set_field('V', numpy.random.uniform(-vabsmax, vabsmax, (itot, jtot, ktot)) + v)
# les.set_field('THL', numpy.random.uniform(-vabsmax, vabsmax, (itot, jtot, ktot)) + thl)
# les.set_field('QT', numpy.random.uniform(-qabsmax, qabsmax, (itot, jtot, ktot)) + qt)
# more noise, according to Dales defaults. qabsmax defaults to 1e-5, 2.5e-5 is from a namoptions file
vabsmax = 0.5 | units.m / units.s
thlabsmax = 0.1 | units.K
qabsmax = 2.5e-5 | units.mfu
les.set_field('U', vabsmax * numpy.random.uniform(-1., 1., (itot, jtot, ktot)) + u)
les.set_field('V', vabsmax * numpy.random.uniform(-1., 1., (itot, jtot, ktot)) + v)
les.set_field('THL', thlabsmax * numpy.random.uniform(-1., 1., (itot, jtot, ktot)) + thl)
les.set_field('QT', qabsmax * numpy.random.uniform(-1., 1., (itot, jtot, ktot)) + qt)
if ps:
les.set_surface_pressure(ps)
# Computes and applies the forcings to the les model before time stepping,
# relaxing it toward the gcm mean state.
def set_les_forcings(les, gcm, dt_gcm, factor, couple_surface, qt_forcing='sp', write=True):
u, v, thl, qt, ps, ql = convert_profiles(les)
# get dales slab averages
u_d = les.get_profile_U()
v_d = les.get_profile_V()
thl_d = les.get_profile_THL()
qt_d = les.get_profile_QT()
ql_d = les.get_profile_QL()
ps_d = les.get_surface_pressure()
try:
rain_last = les.rain
except AttributeError:
rain_last = 0 | units.kg / units.m ** 2
rain = les.get_rain()
les.rain = rain
rainrate = (rain - rain_last) / dt_gcm
# ft = dt # forcing time constant
# forcing
f_u = factor * (u - u_d) / dt_gcm
f_v = factor * (v - v_d) / dt_gcm
f_thl = factor * (thl - thl_d) / dt_gcm
f_qt = factor * (qt - qt_d) / dt_gcm
f_ps = factor * (ps - ps_d) / dt_gcm
f_ql = factor * (ql - ql_d) / dt_gcm
# log.info("RMS forcings at %d during time step" % les.grid_index)
# dt_gcm = gcm.get_timestep().value_in(units.s)
# log.info(" u : %f" % (sputils.rms(f_u)*dt_gcm))
# log.info(" v : %f" % (sputils.rms(f_v)*dt_gcm))
# log.info(" thl: %f" % (sputils.rms(f_thl)*dt_gcm))
# log.info(" qt : %f" % (sputils.rms(f_qt)*dt_gcm))
# store forcings on dales in the statistics
if write:
spio.write_les_data(les, f_u=f_u.value_in(units.m / units.s ** 2),
f_v=f_v.value_in(units.m / units.s ** 2),
f_thl=f_thl.value_in(units.K / units.s),
f_qt=f_qt.value_in(units.mfu / units.s),
rain=rain.value_in(units.kg / units.m ** 2),
rainrate=rainrate.value_in(units.kg / units.m ** 2 / units.s) * 3600)
# set tendencies for Dales
les.set_tendency_U(f_u)
les.set_tendency_V(f_v)
les.set_tendency_THL(f_thl)
les.set_tendency_QT(f_qt)
les.set_tendency_surface_pressure(f_ps)
les.set_tendency_QL(f_ql) # used in experimental local qt nudging
les.set_ref_profile_QL(ql) # used in experimental variability nudging
les.ql_ref = ql # store ql profile from GCM, interpolated to the LES levels
# for another variant of variability nudging
# transfer surface quantities
if couple_surface:
z0m, z0h, wt, wq = convert_surface_fluxes(les)
les.set_z0m_surf(z0m)
les.set_z0h_surf(z0h)
les.set_wt_surf(wt)
les.set_wq_surf(wq)
if write:
spio.write_les_data(les,
z0m=z0m.value_in(units.m),
z0h=z0h.value_in(units.m),
wthl=wt.value_in(units.m * units.s ** -1 * units.K),
wqt=wq.value_in(units.m / units.s))
spio.write_les_data(les,
TLflux=les.TLflux.value_in(units.W / units.m ** 2),
TSflux=les.TSflux.value_in(units.W / units.m ** 2),
SHflux=les.SHflux.value_in(units.kg / units.m ** 2 / units.s),
QLflux=les.QLflux.value_in(units.kg / units.m ** 2 / units.s),
QIflux=les.QIflux.value_in(units.kg / units.m ** 2 / units.s))
if qt_forcing == 'variance':
if les.get_model_time() > 0 | units.s:
starttime = time.time()
variability_nudge(les, gcm)
walltime = time.time() - starttime
log.info("variability nudge took %6.2f s" % walltime)
# Computes the LES tendencies upon the GCM:
def set_gcm_tendencies(gcm, les, factor=1, write=True):
U, V, T, SH, QL, QI, Pf, Ph, A, Zgfull, Zghalf = (getattr(les, varname, None) for varname in gcm_vars)
Zf = les.gcm_Zf # note: gcm Zf varies in time and space - must get it again after every step, for every column
h = les.get_zf()
u_d = les.get_profile_U()
v_d = les.get_profile_V()
sp_d = les.get_presf()
rhof_d = les.get_rhof()
rhobf_d = les.get_rhobf()
thl_d = les.get_profile_THL()
qt_d = les.get_profile_QT()
ql_d = les.get_profile_QL()
ql_ice_d = les.get_profile_QL_ice() # ql_ice is the ice part of QL
ql_water_d = ql_d - ql_ice_d # ql_water is the water part of ql
qr_d = les.get_profile_QR()
A_d = get_cloud_fraction(les)
# dales state
# dales.cdf.variables['presh'][gcm.step] = dales.get_presh().value_in(units.Pa) # todo associate with zh in netcdf
# calculate real temperature from Dales' thl, qt, using the pressures from openIFS
pf = sputils.interp(h, Zf[::-1], Pf[::-1])
t = thl_d * sputils.exner(pf) + sputils.rlv * ql_d / sputils.cp
# get real temperature from Dales - note it is calculated internally from thl and ql
t_d = les.get_profile_T()
if write:
spio.write_les_data(les, u=u_d.value_in(units.m / units.s),
v=v_d.value_in(units.m / units.s),
presf=sp_d.value_in(units.Pa),
rhof=rhof_d.value_in(units.kg / units.m**3),
rhobf=rhobf_d.value_in(units.kg / units.m**3),
qt=qt_d.value_in(units.mfu),
ql=ql_d.value_in(units.mfu),
ql_ice=ql_ice_d.value_in(units.mfu),
ql_water=ql_water_d.value_in(units.mfu),
thl=thl_d.value_in(units.K),
t=t.value_in(units.K),
t_=t_d.value_in(units.K),
qr=qr_d.value_in(units.mfu))
# forcing
ft = gcm.get_timestep() # should be the length of the NEXT time step
# interpolate to GCM heights
t_d = sputils.interp(Zf, h, t_d)
qt_d = sputils.interp(Zf, h, qt_d)
ql_d = sputils.interp(Zf, h, ql_d)
ql_water_d = sputils.interp(Zf, h, ql_water_d)
ql_ice_d = sputils.interp(Zf, h, ql_ice_d)
u_d = sputils.interp(Zf, h, u_d)
v_d = sputils.interp(Zf, h, v_d)
# log.info("Height of LES system: %f" % h[-1])
# first index in the openIFS colum which is inside the Dales system
start_index = sputils.searchsorted(-Zf, -h[-1])
# log.info("start_index: %d" % start_index)
f_T = factor * (t_d - T) / ft
f_SH = factor * ((qt_d - ql_d) - SH) / ft # !!!!! -ql_d here - SH is vapour only.
f_QL = factor * (ql_water_d - QL) / ft # condensed liquid water
f_QI = factor * (ql_ice_d - QI) / ft # condensed water as ice
# f_QL = factor * (ql_d - (QL+QI)) / ft dales QL is both liquid and ice - f_QL is liquid only. this conserves
# water mass but makes an error in latent heat.
f_U = factor * (u_d - U) / ft
f_V = factor * (v_d - V) / ft
f_A = factor * (A_d - A) / ft
f_T[0:start_index] *= 0 # zero out the forcings above the Dales system
f_SH[0:start_index] *= 0 # TODO : taper off smoothly instead
f_QL[0:start_index] *= 0
f_QI[0:start_index] *= 0
f_U[0:start_index] *= 0
f_V[0:start_index] *= 0
f_A[0:start_index] *= 0
# careful with double coriolis
gcm.set_profile_tendency("U", les.grid_index, f_U)
gcm.set_profile_tendency("V", les.grid_index, f_V)
gcm.set_profile_tendency("T", les.grid_index, f_T)
gcm.set_profile_tendency("SH", les.grid_index, f_SH)
gcm.set_profile_tendency("QL", les.grid_index, f_QL)
gcm.set_profile_tendency("QI", les.grid_index, f_QI)
gcm.set_profile_tendency("A", les.grid_index, f_A)
# store forcings on GCM in the statistics in the corresponding LES group
if write:
spio.write_les_data(les,f_U = f_U.value_in(units.m/units.s**2),
f_V = f_V.value_in(units.m/units.s**2),
f_T = f_T.value_in(units.K/units.s),
f_SH = f_SH.value_in(units.shu/units.s),
A = A.value_in(units.ccu),
A_d = A_d.value_in(units.ccu),
f_QL=f_QL.value_in(units.mfu/units.s),
f_QI=f_QI.value_in(units.mfu/units.s),
f_A=f_A.value_in(units.ccu/units.s)
)
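`start_index` above comes from a searchsorted over the negated (descending) GCM heights: every GCM level lying above the top of the LES domain gets a zero tendency. A stdlib sketch of that masking with `bisect`:

```python
import bisect

def zero_above_les_top(forcing, Zf, les_top):
    # Zf is ordered top-down (decreasing); negating gives an increasing
    # sequence for bisect, mirroring searchsorted(-Zf, -les_top)
    start_index = bisect.bisect_left([-z for z in Zf], -les_top)
    return [0.0] * start_index + list(forcing[start_index:])

f = zero_above_les_top([1.0, 2.0, 3.0], [1000.0, 500.0, 100.0], 400.0)
# the 1000 m and 500 m levels lie above the 400 m LES top -> [0.0, 0.0, 3.0]
```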
# sets GCM forcings using values from the spifs.nc file
# not used - was thought to be necessary for restarts, but it isn't
def set_gcm_tendencies_from_file(gcm, les):
t = gcm.get_model_time()
ti = abs((spio.cdf_root.variables['Time'] | units.s) - t).argmin()
print('set_gcm_tendencies_from_file()', t, ti, spio.cdf_root.variables['Time'][ti])
gcm.set_profile_tendency("U", les.grid_index, les.cdf.variables['f_U'][ti] | units.m / units.s ** 2)
gcm.set_profile_tendency("V", les.grid_index, les.cdf.variables['f_V'][ti] | units.m / units.s ** 2)
gcm.set_profile_tendency("T", les.grid_index, les.cdf.variables['f_T'][ti] | units.K / units.s)
gcm.set_profile_tendency("SH", les.grid_index, les.cdf.variables['f_SH'][ti] | units.shu / units.s)
gcm.set_profile_tendency("QL", les.grid_index, les.cdf.variables['f_QL'][ti] | units.shu / units.s)
gcm.set_profile_tendency("QI", | |
regions_transformer(self.view, f)
# TODO: Revise this text object... Can't we have a simpler approach without
# this intermediary step?
class ViI(IrreversibleTextCommand):
def __init__(self, view):
IrreversibleTextCommand.__init__(self, view)
def run(self, inclusive=False):
state = VintageState(self.view)
if inclusive:
state.motion = 'vi_inclusive_text_object'
else:
state.motion = 'vi_exclusive_text_object'
class CollectUserInput(IrreversibleTextCommand):
def __init__(self, view):
IrreversibleTextCommand.__init__(self, view)
def run(self, character=None):
state = VintageState(self.view)
state.user_input += character
# The .user_input setter handles resetting the following property.
if not state.expecting_user_input:
state.eval()
class _vi_z_enter(IrreversibleTextCommand):
def __init__(self, view):
IrreversibleTextCommand.__init__(self, view)
def run(self):
first_sel = self.view.sel()[0]
current_row = self.view.rowcol(first_sel.b)[0] - 1
topmost_visible_row, _ = self.view.rowcol(self.view.visible_region().a)
self.view.run_command('scroll_lines', {'amount': (topmost_visible_row - current_row)})
class _vi_z_minus(IrreversibleTextCommand):
def __init__(self, view):
IrreversibleTextCommand.__init__(self, view)
def run(self):
first_sel = self.view.sel()[0]
current_row = self.view.rowcol(first_sel.b)[0]
bottommost_visible_row, _ = self.view.rowcol(self.view.visible_region().b)
number_of_lines = (bottommost_visible_row - current_row) - 1
if number_of_lines > 1:
self.view.run_command('scroll_lines', {'amount': number_of_lines})
class _vi_zz(IrreversibleTextCommand):
def __init__(self, view):
IrreversibleTextCommand.__init__(self, view)
def run(self):
first_sel = self.view.sel()[0]
current_row = self.view.rowcol(first_sel.b)[0]
topmost_visible_row, _ = self.view.rowcol(self.view.visible_region().a)
bottommost_visible_row, _ = self.view.rowcol(self.view.visible_region().b)
middle_row = (topmost_visible_row + bottommost_visible_row) // 2
self.view.run_command('scroll_lines', {'amount': (middle_row - current_row)})
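The three commands above differ only in the target row passed to `scroll_lines`: z&lt;CR&gt; brings the current line toward the top, z- toward the bottom, zz to the middle of the visible region. As plain row arithmetic (row numbers are hypothetical):

```python
def scroll_amounts(current_row, top_row, bottom_row):
    # scroll amounts mirroring the three z-commands above
    return {
        'z_enter': top_row - (current_row - 1),
        'z_minus': (bottom_row - current_row) - 1,
        'zz': (top_row + bottom_row) // 2 - current_row,
    }

amounts = scroll_amounts(10, 0, 20)
# a line already centered needs no scrolling for zz
```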
class _vi_r(sublime_plugin.TextCommand):
def run(self, edit, character=None, mode=None):
def f(view, s):
next_row = view.rowcol(s.b - 1)[0] + 1
pt = view.text_point(next_row, 0)
return sublime.Region(pt, pt)
if mode == _MODE_INTERNAL_NORMAL:
for s in self.view.sel():
self.view.replace(edit, s, character * s.size())
if character == '\n':
regions_transformer(self.view, f)
class _vi_undo(IrreversibleTextCommand):
"""Once the latest vi command has been undone, we might be left with non-empty selections.
This is due to the fact that Vintageous defines selections in a separate step to the actual
command running. For example, v,e,d,u would undo the deletion operation and restore the
selection that v,e had created.
Assuming that after an undo we're back in normal mode, we can take for granted that any leftover
selections must be destroyed. I cannot think of any situation where Vim would have to restore
selections after *u*, but it may well happen under certain circumstances I'm not aware of.
Note 1: We are also relying on Sublime Text to restore the v or V selections existing at the
time the edit command was run. This seems to be safe, but we're blindly relying on it.
Note 2: Vim knows the position the caret was in before creating the visual selection. In
Sublime Text we lose that information (at least it doesn't seem to be straightforward to
obtain).
"""
# !!! This is a special command that does not go through the usual processing. !!!
# !!! It must skip the undo stack. !!!
# TODO: It must be possible store or retrieve the actual position of the caret before the
# visual selection performed by the user.
def run(self):
# We define our own transformer here because we want to handle undo as a special case.
# TODO: I don't know if it needs to be a special case in reality.
def f(view, s):
# Compensates the move issued below.
if s.a < s.b :
return sublime.Region(s.a + 1, s.a + 1)
else:
return sublime.Region(s.a, s.a)
state = VintageState(self.view)
for i in range(state.count):
self.view.run_command('undo')
if self.view.has_non_empty_selection_region():
regions_transformer(self.view, f)
# !! HACK !! /////////////////////////////////////////////////////////
# This is a hack to work around an issue in Sublime Text:
# When undoing in normal mode, Sublime Text seems to prime a move by chars
# forward that has never been requested by the user or Vintageous.
# As far as I can tell, Vintageous isn't at fault here, but it seems weird
# to think that Sublime Text is wrong.
self.view.run_command('move', {'by': 'characters', 'forward': False})
# ////////////////////////////////////////////////////////////////////
state.update_xpos()
# Ensure that we wipe the count, if any.
state.reset()
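The collapse rule in `f` above can be sketched standalone. `Region` here is a plain stand-in for `sublime.Region` (the real class only exists inside Sublime Text), so this is an illustration of the transform, not plugin code:

```python
# Stand-in for sublime.Region, for illustration only; the real class is
# available only inside Sublime Text.
from collections import namedtuple

Region = namedtuple('Region', ['a', 'b'])

def collapse_after_undo(s):
    # Forward selections collapse one character past the anchor, which
    # compensates for the extra backward 'move' issued afterwards.
    if s.a < s.b:
        return Region(s.a + 1, s.a + 1)
    return Region(s.a, s.a)

print(collapse_after_undo(Region(3, 7)))  # forward selection -> (4, 4)
print(collapse_after_undo(Region(7, 3)))  # reverse selection -> (7, 7)
```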
class _vi_repeat(IrreversibleTextCommand):
"""Vintageous manages the repeat operation on its own to ensure that we always use the latest
modifying command, instead of being tied to the undo stack (as Sublime Text is by default).
"""
# !!! This is a special command that does not go through the usual processing. !!!
# !!! It must skip the undo stack. !!!
def run(self):
state = VintageState(self.view)
try:
cmd, args, _ = state.repeat_command
except TypeError:
# No repeat command has been recorded yet.
return
if not cmd:
return
elif cmd == 'vi_run':
args['next_mode'] = MODE_NORMAL
args['follow_up_mode'] = 'vi_enter_normal_mode'
args['count'] = state.count * args['count']
self.view.run_command(cmd, args)
elif cmd == 'sequence':
for i, _ in enumerate(args['commands']):
# Access this shape: {"commands":[['vi_run', {"foo": 100}],...]}
args['commands'][i][1]['next_mode'] = MODE_NORMAL
args['commands'][i][1]['follow_up_mode'] = 'vi_enter_normal_mode'
# TODO: Implement counts properly for 'sequence' command.
for i in range(state.count):
self.view.run_command(cmd, args)
# Ensure we wipe count data if any.
state.reset()
# XXX: Needed here? Maybe enter_... type commands should be IrreversibleTextCommands so we
# must/can call them whenever we need them without affecting the undo stack.
self.view.run_command('vi_enter_normal_mode')
class _vi_redo(IrreversibleTextCommand):
# !!! This is a special command that does not go through the usual processing. !!!
# !!! It must skip the undo stack. !!!
# TODO: It must be possible to store or retrieve the actual position of the caret before the
# visual selection performed by the user.
def run(self):
state = VintageState(self.view)
for i in range(state.count):
self.view.run_command('redo')
state.update_xpos()
# Ensure that we wipe the count, if any.
state.reset()
self.view.run_command('vi_enter_normal_mode')
class _vi_ctrl_w_v_action(sublime_plugin.TextCommand):
def run(self, edit):
self.view.window().run_command('new_pane', {})
class Sequence(sublime_plugin.TextCommand):
"""Required so that mark_undo_groups_for_gluing and friends work.
"""
def run(self, edit, commands):
for cmd, args in commands:
self.view.run_command(cmd, args)
# XXX: Sequence is a special case in that it doesn't run through vi_run, so we need to
# ensure the next mode is correct. Maybe we can improve this by making it more similar to
# regular commands?
state = VintageState(self.view)
state.enter_normal_mode()
class _vi_big_j(sublime_plugin.TextCommand):
def run(self, edit, mode=None):
def f(view, s):
if mode == _MODE_INTERNAL_NORMAL:
full_current_line = view.full_line(s.b)
target = full_current_line.b - 1
full_next_line = view.full_line(full_current_line.b)
two_lines = sublime.Region(full_current_line.a, full_next_line.b)
# Text without \n.
first_line_text = view.substr(view.line(full_current_line.a))
next_line_text = view.substr(full_next_line)
if len(next_line_text) > 1:
next_line_text = next_line_text.lstrip()
sep = ''
if first_line_text and not first_line_text.endswith(' '):
sep = ' '
view.replace(edit, two_lines, first_line_text + sep + next_line_text)
if first_line_text:
return sublime.Region(target, target)
return s
else:
return s
regions_transformer(self.view, f)
class _vi_ctrl_a(sublime_plugin.TextCommand):
def run(self, edit, count=1, mode=None):
def f(view, s):
if mode == _MODE_INTERNAL_NORMAL:
word = view.word(s.a)
new_digit = int(view.substr(word)) + count
view.replace(edit, word, str(new_digit))
return s
if mode != _MODE_INTERNAL_NORMAL:
return
# TODO: Deal with octal, hex notations.
# TODO: Improve detection of numbers.
# TODO: Find the next numeric word in the line if none is found under the caret.
words = [self.view.substr(self.view.word(s)) for s in self.view.sel()]
if not all([w.isdigit() for w in words]):
utils.blink()
return
regions_transformer(self.view, f)
class _vi_ctrl_x(sublime_plugin.TextCommand):
DIGIT_PAT = re.compile(r'(-)?(\d+)(\D+)?')
NUM_PAT = re.compile(r'\d')
def get_word(self, s):
word = self.view.word(s.b)
if word.a > 0:
if self.view.substr(word.a - 1) == '-':
return sublime.Region(word.a - 1, word.b)
return word
def check_words(self, regions):
nums = [self.view.substr(s.b).isdigit() for s in regions]
if not all(nums):
return []
words = [self.get_word(s) for s in regions]
matches = [_vi_ctrl_x.DIGIT_PAT.match(self.view.substr(w)) for w in words]
if all(matches):
return zip(words, matches)
return []
def find_next_num(self, regions):
lines = [self.view.substr(sublime.Region(r.b, self.view.line(r.b).b)) for r in regions]
positions = [_vi_ctrl_x.NUM_PAT.search(text) for text in lines]
if all(positions):
pairs = zip(regions, positions)
rv = [sublime.Region(r.b + p.start()) for (r, p) in pairs]
return rv
return []
def run(self, edit, count=1, mode=None):
def f(view, s):
if mode == _MODE_INTERNAL_NORMAL:
word, match = next(pairs)
sign, amount, suffix = match.groups()
sign = -1 if sign else 1
suffix = suffix or ''
new_digit = (sign * int(amount)) - count
view.replace(edit, word, str(new_digit) + suffix)
offset = len(str(new_digit))
# FIXME: Deal with multiple sels as we should.
if len(view.sel()) == 1:
return sublime.Region(word.a + offset - 1)
# return sublime.Region(word.b - len(suffix) - 1)
return s
if mode != _MODE_INTERNAL_NORMAL:
return
# TODO: Deal with octal, hex notations.
# TODO: Improve detection of numbers.
pairs = list(self.check_words(list(self.view.sel())))
if not pairs:
next_nums = self.find_next_num(list(self.view.sel()))
if not next_nums:
utils.blink()
return
pairs = iter(reversed(list(self.check_words(next_nums))))
else:
pairs = iter(reversed(list(self.check_words(self.view.sel()))))
try:
xpos = []
if len(self.view.sel()) > 1:
rowcols = [self.view.rowcol(s.b - 1) for s in self.view.sel()]
regions_transformer_reversed(self.view, f)
if len(self.view.sel()) >
"""
Copyright (c) 2018 The University of Notre Dame
Copyright (c) 2018 The Regents of the University of California
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or other
materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may
be used to endorse or promote products derived from this software without specific
prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
Contributors:
Written by <NAME>, for the Natural Hazard Modeling Laboratory, director: <NAME>, at Notre Dame
A set of objects useful for parsing geometry files"""
from __future__ import print_function
import sys
import math
MARGIN = 0.0000001
MODE_NIL = 0
MODE_VERTS = 1
MODE_FACES = 2
MODE_VOLS = 3
class SGFvert(object):
"""Class for a vertex in sgf data"""
def __init__(self, itemList):
self.xval = float(itemList[0])
self.yval = float(itemList[1])
self.zval = 0.0
self.is_spline = False
self.in_use = False
self.dim = 2
if len(itemList) >= 3:
self.zval = float(itemList[2])
self.dim = 3
self.idstr = "ERR"
self.ref = []
def set_spline(self):
"""Sets this point as a spline point"""
self.is_spline = True
def is_point(self, other_point):
"""Compares this point to another point"""
if self.dim != other_point.dim:
return False
if self.param_same(self.xval, other_point.xval) is False:
return False
if self.param_same(self.yval, other_point.yval) is False:
return False
if self.param_same(self.zval, other_point.zval) is False:
return False
return True
def param_same(self, val1, val2):
"""Compares two floating point coords to determine if they are the same"""
if val1 > val2 + MARGIN:
return False
if val1 < val2 - MARGIN:
return False
return True
def in_line_with(self, point1, point2):
"""Return true if self is on a line segment between point1 and point2 in 2D"""
if self.param_same(self.yval, point1.yval):
if self.param_same(self.yval, point2.yval):
return True
else:
return False
if self.param_same(self.xval, point1.xval):
if self.param_same(self.xval, point2.xval):
return True
else:
return False
rise = point2.yval - point1.yval
run = point2.xval - point1.xval
if self.param_same(run, 0.0):
# Vertical segment whose x did not match above: not on the line.
return False
factor = (self.xval - point1.xval) / run
checkval = point1.yval + factor * rise
if checkval > self.yval + MARGIN:
return False
if checkval < self.yval - MARGIN:
return False
return True
def point_dist(self, other):
"""Returns distance to other point"""
xdiff = other.xval - self.xval
ydiff = other.yval - self.yval
zdiff = other.zval - self.zval
ret = xdiff * xdiff + ydiff * ydiff + zdiff * zdiff
return math.sqrt(ret)
def __str__(self):
if self.dim == 2:
return "(%f, %f)" % (self.xval, self.yval)
return "(%f, %f, %f)" % (self.xval, self.yval, self.zval)
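The MARGIN-based tolerance rule used by `param_same` and the 3-D distance in `point_dist` can be sketched as free functions (illustrative values only):

```python
import math

MARGIN = 0.0000001  # same tolerance as the module constant above

def param_same(val1, val2):
    # Two coords are "the same" when they differ by at most MARGIN.
    return (val2 - MARGIN) <= val1 <= (val2 + MARGIN)

def point_dist(a, b):
    # Euclidean distance between two (x, y, z) tuples.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

assert param_same(0.0, 0.00000005)      # inside MARGIN
assert not param_same(0.0, 0.000001)    # outside MARGIN
assert point_dist((0, 0, 0), (3, 4, 0)) == 5.0  # 3-4-5 triangle
```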
class SGFface(object):
"""Class for a face in sgf data"""
def __init__(self, item_list, all_points):
self.point_list = []
self.idstr = "ERR"
for item in item_list:
if isinstance(item, (str, unicode)):
try:
self.point_list.append(all_points[item])
except (KeyError, IndexError):
print("ERROR: Vert Name not defined in face list")
sys.exit(2)
else:
self.point_list.append(item)
class SGFvol(object):
"""Class for a volume in sgf data"""
def __init__(self, item_list, all_faces):
self.face_list = []
self.idstr = "ERR"
for item in item_list:
if isinstance(item, (str, unicode)):
try:
self.face_list.append(all_faces[item])
except (KeyError, IndexError):
print("ERROR: Face Name not defined in vols list")
sys.exit(2)
else:
self.face_list.append(item)
class SGFsegment(object):
"""Data of a 2-point segment"""
def __init__(self, point_list):
self.points = point_list
self.used = False
def __str__(self):
if len(self.points) == 2:
return "(%s <=> %s)" % (self.points[0], self.points[1])
return "(X <=> X)"
class Triangle(object):
"""Class storing a triangle of points"""
def __init__(self, a, b, c, num):
self.num = num
self.pta = a
self.ptb = b
self.ptc = c
self.marked = -1
def doesnotcontain(pointlist, needle):
"""Returns true if no point in point list has same coords as needle"""
for point in pointlist:
if point.is_point(needle):
return False
return True
def containspoint(pointlist, needle):
"""Returns a point from the point list if it has the same coords as needle"""
for point in pointlist:
if point.is_point(needle):
return point
return None
def recenter_object(vert_list):
"""Recenters the verts in vert_list"""
minz = float(vert_list[0].zval)
minx = float(vert_list[0].xval)
maxx = float(vert_list[0].xval)
miny = float(vert_list[0].yval)
maxy = float(vert_list[0].yval)
for vert in vert_list:
if float(vert.xval) < minx:
minx = float(vert.xval)
if float(vert.xval) > maxx:
maxx = float(vert.xval)
if float(vert.zval) < minz:
minz = float(vert.zval)
if float(vert.yval) < miny:
miny = float(vert.yval)
if float(vert.yval) > maxy:
maxy = float(vert.yval)
midx = ((maxx - minx) / 2) + minx
midy = ((maxy - miny) / 2) + miny
for vert in vert_list:
vert.xval = vert.xval - midx
vert.yval = vert.yval - midy
vert.zval = vert.zval - minz
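The recentering rule above (x/y shifted so the bounding-box midpoint lands at the origin, z shifted so the lowest point lands at z == 0) can be sketched on plain tuples:

```python
def recenter(points):
    # points: iterable of (x, y, z) tuples. Mirrors recenter_object():
    # x/y are centered on the bounding-box midpoint, z is floored at 0.
    xs, ys, zs = zip(*points)
    midx = (max(xs) - min(xs)) / 2.0 + min(xs)
    midy = (max(ys) - min(ys)) / 2.0 + min(ys)
    minz = min(zs)
    return [(x - midx, y - midy, z - minz) for x, y, z in points]

print(recenter([(0, 0, 5), (4, 2, 7)]))  # [(-2.0, -1.0, 0), (2.0, 1.0, 2)]
```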
def get_file_lines(filename):
"""Return a list of lines from a given file name"""
try:
file_handle = open(filename, "r")
file_lines = file_handle.readlines()
file_handle.close()
except IOError:
print("ERROR: Unable to open input file")
sys.exit(1)
return file_lines
def parse_obj(file_lines, geo_object):
"""Parses some OBJ files into SGF data, so long as the faces are triangles.
Assumes mm dimensions (as exported from FreeCAD) and converts to meters."""
triangle_list = []
vert_list = {}
point_itr = 1
for line in file_lines:
if line[0] == '#':
continue
line_parts = line.split()
if len(line_parts) < 4:
continue
if line_parts[0] == "v":
cord_list = []
for entry in range(1, 4):
data = float(line_parts[entry])
data /= 1000
cord_list.append(data)
temp_point = SGFvert(cord_list)
apoint = containspoint(vert_list.values(), temp_point)
if apoint is None:
vert_list[str(point_itr)] = temp_point
else:
vert_list[str(point_itr)] = apoint
point_itr += 1
continue
if line_parts[0] == "f":
temp_face = Triangle(vert_list[extract_prefix_int_from_str(line_parts[1])],
vert_list[extract_prefix_int_from_str(line_parts[2])],
vert_list[extract_prefix_int_from_str(line_parts[3])], 0)
triangle_list.append(temp_face)
continue
geo_object.idstr = "objTranslate"
geo_object.dimension = 3
itr = 0
for point in vert_list.values():
geo_object.vert_list[str(itr)] = point
itr += 1
for itr in range(0, len(triangle_list)):
new_face = SGFface([triangle_list[itr].pta,
triangle_list[itr].ptb,
triangle_list[itr].ptc], geo_object.vert_list)
geo_object.face_list[str(itr)] = new_face
newvol = SGFvol(geo_object.face_list, geo_object.face_list)
geo_object.vols_list["0"] = newvol
for vertkey, vert in list(geo_object.vert_list.items()):
vert.idstr = vertkey
for facekey, face in list(geo_object.face_list.items()):
face.idstr = facekey
for volkey, vol in list(geo_object.vols_list.items()):
vol.idstr = volkey
#re-center the object
recenter_object(geo_object.vert_list.values())
def extract_prefix_int_from_str(a_str):
"""Gets an int from the string, taking only the characters before the first non-digit character"""
ret_str = ""
for a_char in a_str:
if a_char.isdigit():
ret_str += a_char
else:
return ret_str
return ret_str
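`extract_prefix_int_from_str` exists because OBJ face tokens may carry texture/normal indices in the form `v/vt/vn` (e.g. `12/5/3`), and only the leading vertex index is wanted. The same behavior, reproduced standalone for illustration:

```python
def extract_prefix_int(a_str):
    # Collect digits until the first non-digit character.
    ret_str = ""
    for a_char in a_str:
        if not a_char.isdigit():
            break
        ret_str += a_char
    return ret_str

assert extract_prefix_int("12/5/3") == "12"  # only the vertex index
assert extract_prefix_int("7") == "7"
assert extract_prefix_int("/2/3") == ""
```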
def parse_old_sgf(file_lines, geo_object):
"""Parses Data from old version SGF"""
have_first = False
in_mode = MODE_NIL
num_verts = 0
num_faces = 0
num_vols = 0
for line in file_lines:
tokens = line.split()
if len(tokens) == 0:
continue
if tokens[0] == "#Vertices":
if len(tokens) != 2:
print("ERROR: Vertices syntax error")
sys.exit(2)
try:
num_verts = int(tokens[1])
except ValueError:
print("ERROR: Vertices syntax error not an int")
sys.exit(2)
in_mode = MODE_VERTS
elif tokens[0] == "#Faces":
if len(tokens) != 2:
print("ERROR: Faces syntax error")
sys.exit(2)
try:
num_faces = int(tokens[1])
except ValueError:
print("ERROR: Faces syntax error not an int")
sys.exit(2)
in_mode = MODE_FACES
elif tokens[0] == "#Volumes":
if len(tokens) != 2:
print("ERROR: Vols syntax error")
sys.exit(2)
try:
num_vols = int(tokens[1])
except ValueError:
print("ERROR: Volumes syntax error not an int")
sys.exit(2)
if geo_object.dimension != 3:
print("ERROR: Volumes defined in 2D object")
sys.exit(2)
in_mode = MODE_VOLS
elif (in_mode == MODE_VERTS) and (len(tokens) > 0):
try:
dim_val = int(tokens[1])
except ValueError:
print("ERROR: Dimension Read Error")
sys.exit(2)
if dim_val != geo_object.dimension:
print("ERROR: Dimension Mismatch")
sys.exit(2)
if dim_val > len(tokens) - 2:
print("ERROR: Dimension Mismatch in Vert")
sys.exit(2)
if tokens[0] in geo_object.vert_list:
print("ERROR: Vert Name used twice")
sys.exit(2)
new_vert = SGFvert(tokens[2:(dim_val + 2)])
geo_object.vert_list[tokens[0]] = new_vert
new_vert.idstr = tokens[0]
if dim_val < len(tokens) - 2:
if tokens[dim_val + 2][0] == 'S':
geo_object.vert_list[tokens[0]].set_spline()
elif (in_mode == MODE_FACES) and (len(tokens) > 0):
try:
dim_val = int(tokens[1])
except ValueError:
print("ERROR: Dimension Read Error")
sys.exit(2)
if
# -*- coding: utf-8 -*-
from collections import OrderedDict
from gluon import current
from gluon.storage import Storage
def config(settings):
"""
Cumbria County Council extensions to the Volunteer Management template
- branding
- support Donations
- support Assessments
"""
T = current.T
settings.base.system_name = T("Support Cumbria")
settings.base.system_name_short = T("Support Cumbria")
# Theme
settings.base.theme = "CCC"
settings.base.theme_layouts = "CCC"
settings.base.theme_config = "CCC"
# PrePopulate data
settings.base.prepopulate += ("CCC",)
settings.base.prepopulate_demo = ("CCC/Demo",)
# Authentication settings
# Do new users need to verify their email address?
settings.auth.registration_requires_verification = True
# Do new users need to be approved by an administrator prior to being able to login?
# - varies by path (see register() in controllers.py)
#settings.auth.registration_requires_approval = True
settings.auth.registration_requests_organisation = True
# Required for access to default realm permissions
settings.auth.registration_link_user_to = ["staff"]
settings.auth.registration_link_user_to_default = ["staff"]
# -------------------------------------------------------------------------
# L10n (Localization) settings
settings.L10n.languages = OrderedDict([
("en-gb", "English"),
])
# Default Language
settings.L10n.default_language = "en-gb"
# Uncomment to Hide the language toolbar
settings.L10n.display_toolbar = False
# Security Policy
# http://eden.sahanafoundation.org/wiki/S3AAA#System-widePolicy
# 1: Simple (default): Global as Reader, Authenticated as Editor
# 2: Editor role required for Update/Delete, unless record owned by session
# 3: Apply Controller ACLs
# 4: Apply both Controller & Function ACLs
# 5: Apply Controller, Function & Table ACLs
# 6: Apply Controller, Function, Table ACLs and Entity Realm
# 7: Apply Controller, Function, Table ACLs and Entity Realm + Hierarchy
# 8: Apply Controller, Function, Table ACLs, Entity Realm + Hierarchy and Delegations
settings.security.policy = 7 # Organisation-ACLs
# Consent Tracking
settings.auth.consent_tracking = True
# Record Approval
settings.auth.record_approval = True
settings.auth.record_approval_required_for = ("org_organisation",
)
# -------------------------------------------------------------------------
# Comment/uncomment modules here to disable/enable them
# Modules menu is defined in modules/eden/menu.py
#settings.modules.update([
settings.modules = OrderedDict([
# Core modules which shouldn't be disabled
("default", Storage(
name_nice = T("Home"),
restricted = False, # Use ACLs to control access to this module
access = None, # All Users (inc Anonymous) can see this module in the default menu & access the controller
module_type = None # This item is not shown in the menu
)),
("admin", Storage(
name_nice = T("Administration"),
#description = "Site Administration",
restricted = True,
access = "|1|", # Only Administrators can see this module in the default menu & access the controller
module_type = None # This item is handled separately for the menu
)),
("appadmin", Storage(
name_nice = T("Administration"),
#description = "Site Administration",
restricted = True,
module_type = None # No Menu
)),
("errors", Storage(
name_nice = T("Ticket Viewer"),
#description = "Needed for Breadcrumbs",
restricted = False,
module_type = None # No Menu
)),
#("sync", Storage(
# name_nice = T("Synchronization"),
# #description = "Synchronization",
# restricted = True,
# access = "|1|", # Only Administrators can see this module in the default menu & access the controller
# module_type = None # This item is handled separately for the menu
#)),
#("tour", Storage(
# name_nice = T("Guided Tour Functionality"),
# module_type = None,
#)),
#("translate", Storage(
# name_nice = T("Translation Functionality"),
# #description = "Selective translation of strings based on module.",
# module_type = None,
#)),
("gis", Storage(
name_nice = T("Map"),
#description = "Situation Awareness & Geospatial Analysis",
restricted = True,
module_type = None,
)),
("pr", Storage(
name_nice = T("Person Registry"),
#description = "Central point to record details on People",
restricted = True,
access = "|1|", # Only Administrators can see this module in the default menu (access to controller is possible to all still)
module_type = None,
)),
("org", Storage(
name_nice = T("Organizations"),
#description = 'Lists "who is doing what & where". Allows relief agencies to coordinate their activities',
restricted = True,
module_type = None,
)),
("hrm", Storage(
name_nice = T("Personnel"),
#description = "Human Resources Management",
restricted = True,
module_type = None,
)),
#("vol", Storage(
# name_nice = T("Volunteers"),
# #description = "Human Resources Management",
# restricted = True,
# module_type = 2,
#)),
("cms", Storage(
name_nice = T("Content Management"),
#description = "Content Management System",
restricted = True,
module_type = None,
)),
("doc", Storage(
name_nice = T("Documents"),
#description = "A library of digital resources, such as photos, documents and reports",
restricted = True,
module_type = None,
)),
("msg", Storage(
name_nice = T("Messaging"),
#description = "Sends & Receives Alerts via Email & SMS",
restricted = True,
# The user-visible functionality of this module isn't normally required. Rather, its main purpose is to be accessed from other modules.
module_type = None,
)),
#("cr", Storage(
# name_nice = T("Shelters"),
# #description = "Tracks the location, capacity and breakdown of victims in Shelters",
# restricted = True,
# module_type = 10
#)),
("dc", Storage(
name_nice = T("Assessments"),
#description = "Data collection tool",
restricted = True,
module_type = None,
)),
("project", Storage(
name_nice = T("Projects"),
#description = "Tasks for Contacts",
restricted = True,
module_type = None,
)),
("supply", Storage(
name_nice = T("Supply Chain Management"),
#description = "Used within Inventory Management, Request Management and Asset Management",
restricted = True,
module_type = None, # Not displayed
)),
#("inv", Storage(
# name_nice = T("Warehouses"),
# #description = "Receiving and Sending Items",
# restricted = True,
# module_type = None,
#)),
("req", Storage(
name_nice = T("Requests"),
#description = "Manage requests for supplies, assets, staff or other resources. Matches against Inventories where supplies are requested.",
restricted = True,
module_type = None,
)),
])
settings.search.filter_manager = False
settings.ui.filter_clear = False
settings.cms.richtext = True
settings.hrm.event_course_mandatory = False
settings.pr.hide_third_gender = False
#settings.project.task_priority_opts = {1: T("Low"),
# 2: T("Medium"),
# 3: T("High"),
# }
#settings.project.task_status_opts = {1: T("New"),
# 2: T("In-Progress"),
# 3: T("Closed"),
# }
# Now using req_need, so unused:
#settings.req.req_type = ("People",)
# -------------------------------------------------------------------------
def ccc_realm_entity(table, row):
"""
Assign a Realm Entity to records
"""
if current.auth.s3_has_role("ADMIN"):
# Use default rules
return 0
tablename = table._tablename
if tablename in (#"hrm_training_event",
"project_task",
#"req_need",
):
# Use the Org of the Creator
db = current.db
new_row = db(table.id == row.id).select(table.created_by,
limitby = (0, 1),
).first()
user_id = new_row.created_by
utable = db.auth_user
otable = current.s3db.org_organisation
query = (utable.id == user_id) & \
(utable.organisation_id == otable.id)
org = db(query).select(otable.pe_id,
limitby = (0, 1),
).first()
if org:
return org.pe_id
# Use default rules
return 0
settings.auth.realm_entity = ccc_realm_entity
# -------------------------------------------------------------------------
def ccc_rheader(r):
"""
Custom rheaders
"""
if r.representation != "html":
# RHeaders only used in interactive views
return None
# Need to use this format as otherwise req_match?viewing=org_office.x
# doesn't have an rheader
from s3 import s3_rheader_resource, s3_rheader_tabs
tablename, record = s3_rheader_resource(r)
if record is None:
# List or Create form: rheader makes no sense here
return None
from gluon import DIV, TABLE, TR, TH
T = current.T
if tablename == "hrm_training_event":
T = current.T
tabs = [(T("Basic Details"), None),
(T("Participants"), "participant"),
]
rheader_tabs = s3_rheader_tabs(r, tabs)
table = r.table
location_id = table.location_id
date_field = table.start_date
rheader = DIV(TABLE(TR(TH("%s: " % T("Date")),
date_field.represent(record.start_date),
),
TR(TH("%s: " % location_id.label),
location_id.represent(record.location_id),
)),
rheader_tabs)
elif tablename == "org_organisation":
T = current.T
tabs = [(T("Basic Details"), None),
#(T("Offices"), "office"),
(T("Key Locations"), "facility"),
#(T("Locations Served"), "location"),
(T("Volunteers"), "human_resource"),
]
rheader_tabs = s3_rheader_tabs(r, tabs)
from s3 import s3_fullname
table = r.table
rheader = DIV(TABLE(TR(TH("%s: " % T("Name")),
record.name,
)),
rheader_tabs)
elif tablename == "pr_group":
T = current.T
tabs = [(T("Basic Details"), None),
# 'Person' allows native tab breakout
#(T("Members"), "group_membership"),
(T("Members"), "person"),
#(T("Locations"), "group_location"),
#(T("Skills"), "competency"),
]
rheader_tabs = s3_rheader_tabs(r, tabs)
from s3 import s3_fullname
table = r.table
rheader = DIV(TABLE(TR(TH("%s: " % T("Name")),
record.name,
)),
rheader_tabs)
elif tablename == "pr_person":
T = current.T
tabs = [(T("Basic Details"), None),
(T("Address"), "address"),
(T("Contacts"), "contacts"),
# Included in Contacts tab:
#(T("Emergency Contacts"), "contact_emergency"),
]
get_vars_get = r.get_vars.get
has_role = current.auth.s3_has_role
if get_vars_get("donors") or \
has_role("DONOR", include_admin=False):
# Better on main form using S3SQLInlineLink
#tabs.append((T("Goods / Services"), "item"))
pass
elif get_vars_get("groups") or \
has_role("GROUP_ADMIN", include_admin=False):
# Better as menu item, to be able to access tab(s)
#tabs.append((T("Group"), "group"))
pass
else:
tabs.append((T("Additional Information"), "additional"))
# Better on main form using | |
= pk_max
idx_ecg = np.where(pk_max >= thresh)[0]
else:
# use correlation
idx_ecg = [meg_raw.ch_names.index(name_ecg)]
ecg_filtered = mne.filter.filter_data(meg_raw[idx_ecg, :][0],
meg_raw.info['sfreq'],
l_freq=flow, h_freq=fhigh)
ecg_scores = ica.score_sources(meg_raw, target=ecg_filtered,
score_func=score_func)
idx_ecg = np.where(np.abs(ecg_scores) >= thresh)[0]
else:
print(">>>> NOTE: No ECG channel found!")
idx_ecg = np.array([0])
return idx_ecg, ecg_scores
#######################################################
#
# calculate the performance of artifact rejection
#
#######################################################
def calc_performance(evoked_raw, evoked_clean):
''' Gives a measure of the performance of the artifact reduction.
Percentage value returned as output.
'''
from jumeg import jumeg_math as jmath
diff = evoked_raw.data - evoked_clean.data
rms_diff = jmath.calc_rms(diff, average=1)
rms_meg = jmath.calc_rms(evoked_raw.data, average=1)
arp = (rms_diff / rms_meg) * 100.0
return np.round(arp)
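A standalone sketch of the measure above (the RMS of the removed residual as a percentage of the raw RMS), simplified to a 1-D signal rather than the channel-averaged form used by jmath.calc_rms:

```python
import numpy as np

def arp(raw, clean):
    # Artifact-reduction performance: residual RMS over raw RMS, in %.
    def rms(x):
        return np.sqrt(np.mean(x ** 2))
    diff = raw - clean
    return np.round(rms(diff) / rms(raw) * 100.0)

raw = np.array([1.0, -1.0, 1.0, -1.0])
assert arp(raw, raw) == 0.0            # nothing removed
assert arp(raw, np.zeros(4)) == 100.0  # everything removed
```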
#######################################################
#
# calculate the frequency-correlation value
#
#######################################################
def calc_frequency_correlation(evoked_raw, evoked_clean):
"""
Function to estimate the frequency-correlation value
as introduced by Krishnaveni et al. (2006),
Journal of Neural Engineering.
"""
# transform signal to frequency range
fft_raw = np.fft.fft(evoked_raw.data)
fft_cleaned = np.fft.fft(evoked_clean.data)
# get numerator
numerator = np.sum(np.abs(np.real(fft_raw) * np.real(fft_cleaned)) +
np.abs(np.imag(fft_raw) * np.imag(fft_cleaned)))
# get denominator
denominator = np.sqrt(np.sum(np.abs(fft_raw) ** 2) *
np.sum(np.abs(fft_cleaned) ** 2))
return np.round(numerator / denominator * 100.)
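The same formula as a standalone function; for identical raw and cleaned signals the numerator reduces to the sum of |F|^2, so the value is exactly 100:

```python
import numpy as np

def freq_corr(raw, clean):
    # Frequency-correlation value as in Krishnaveni et al. (2006),
    # mirroring calc_frequency_correlation above on plain arrays.
    fft_raw = np.fft.fft(raw)
    fft_clean = np.fft.fft(clean)
    numerator = np.sum(np.abs(np.real(fft_raw) * np.real(fft_clean)) +
                       np.abs(np.imag(fft_raw) * np.imag(fft_clean)))
    denominator = np.sqrt(np.sum(np.abs(fft_raw) ** 2) *
                          np.sum(np.abs(fft_clean) ** 2))
    return np.round(numerator / denominator * 100.0)

x = np.sin(np.linspace(0.0, 6.0, 64))
assert freq_corr(x, x) == 100.0  # identical signals correlate fully
```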
#######################################################
#
# apply CTPS (for brain responses)
#
#######################################################
def apply_ctps(fname_ica, freqs=[(1, 4), (4, 8), (8, 12), (12, 16), (16, 20)],
tmin=-0.2, tmax=0.4, name_stim='STI 014', event_id=None,
baseline=(None, 0), proj=False):
''' Applies CTPS to a list of ICA files. '''
from jumeg.filter import jumeg_filter
fiws = jumeg_filter(filter_method="bw")
fiws.filter_type = 'bp' # bp, lp, hp
fiws.dcoffset = True
fiws.filter_attenuation_factor = 1
nfreq = len(freqs)
print('>>> CTPS calculation on: ', freqs)
# Trigger or Response ?
if name_stim == 'STI 014':  # trigger
trig_name = 'trigger'
elif name_stim == 'STI 013':  # response
trig_name = 'response'
else:
trig_name = 'auxiliary'
fnlist = get_files_from_list(fname_ica)
# loop across all filenames
for fnica in fnlist:
name = os.path.split(fnica)[1]
#fname = fnica[0:len(fnica)-4]
basename = fnica[:fnica.rfind(ext_ica)]
fnraw = basename + ext_raw
#basename = os.path.splitext(os.path.basename(fnica))[0]
# load cleaned data
raw = mne.io.Raw(fnraw, preload=True)
picks = mne.pick_types(raw.info, meg=True, ref_meg=False, exclude='bads')
# read (second) ICA
print(">>>> working on: " + basename)
ica = mne.preprocessing.read_ica(fnica)
ica_picks = np.arange(ica.n_components_)
ncomp = len(ica_picks)
# stim events
stim_events = mne.find_events(raw, stim_channel=name_stim, consecutive=True)
nevents = len(stim_events)
if (nevents > 0):
# for a specific event ID
if event_id:
ix = np.where(stim_events[:, 2] == event_id)[0]
stim_events = stim_events[ix, :]
else:
event_id = stim_events[0, 2]
# create ctps dictionary
dctps = {'fnica': fnica,
'basename': basename,
'stim_channel': name_stim,
'trig_name': trig_name,
'ncomp': ncomp,
'nevent': nevents,
'event_id': event_id,
'nfreq': nfreq,
'freqs': freqs,
}
# loop across all filenames
pkarr = []
ptarr = []
pkmax_arr = []
for ifreq in range(nfreq):
ica_raw = ica.get_sources(raw)
flow, fhigh = freqs[ifreq][0], freqs[ifreq][1]
bp = str(flow) + '_' + str(fhigh)
# filter ICA data and create epochs
#tw=0.1
# ica_raw.filter(l_freq=flow, h_freq=fhigh, picks=ica_picks,
# method='fft',l_trans_bandwidth=tw, h_trans_bandwidth=tw)
# ica_raw.filter(l_freq=flow, h_freq=fhigh, picks=ica_picks,
# method='fft')
# filter ws settings
# later we will make this as a one line call
data_length = raw._data[0, :].size
fiws.sampling_frequency = raw.info['sfreq']
fiws.fcut1 = flow
fiws.fcut2 = fhigh
#fiws.init_filter_kernel(data_length)
#fiws.init_filter(data_length)
for ichan in ica_picks:
fiws.apply_filter(ica_raw._data[ichan, :])
ica_epochs = mne.Epochs(ica_raw, events=stim_events,
event_id=event_id, tmin=tmin,
tmax=tmax, verbose=False,
picks=ica_picks, baseline=baseline,
proj=proj)
# compute CTPS
_, pk, pt = ctps.ctps(ica_epochs.get_data())
pkmax = pk.max(1)
times = ica_epochs.times * 1e3
pkarr.append(pk)
ptarr.append(pt)
pkmax_arr.append(pkmax)
pkarr = np.array(pkarr)
ptarr = np.array(ptarr)
pkmax_arr = np.array(pkmax_arr)
dctps['pk'] = np.float32(pkarr)
dctps['pt'] = np.float32(ptarr)
dctps['pkmax'] = np.float32(pkmax_arr)
dctps['nsamp'] = len(times)
dctps['times'] = np.float32(times)
dctps['tmin'] = np.float32(ica_epochs.tmin)
dctps['tmax'] = np.float32(ica_epochs.tmax)
fnctps = basename + prefix_ctps + trig_name
np.save(fnctps, dctps)
# Note: loading example: dctps = np.load(fnctps, allow_pickle=True).item()
else:
event_id = None
#######################################################
#
# Perform CTPS surrogates tests
#
#######################################################
def apply_ctps_surrogates(fname_ctps, fnout, nrepeat=1000,
mode='shuffle', save=True, n_jobs=4):
'''
Perform CTPS surrogate tests to estimate the significance level
for CTPS analysis (a proper pK value is estimated).
It is most likely that the statistical reliability of this test
is best improved by increasing the number of repetitions, while the
number of different experiments/subjects has only minor effects.
Parameters
----------
fname_ctps: CTPS filename (or list of filenames)
fnout: Output (text) filename to store surrogate stats across all files
Options:
nrepeat: number of repetitions used to estimate the pk threshold
default is 1000
mode: two different modes are allowed:
'shuffle' will randomly shuffle the phase values (default)
'shift' will randomly shift the phase values
Return
------
info string array containing the statistical values about the surrogate analysis
'''
import os, time
from .jumeg_utils import make_surrogates_ctps, get_stats_surrogates_ctps
fnlist = get_files_from_list(fname_ctps)
# loop across all filenames
ifile = 1
sep = '=========================================================================='
info = [sep,'#','# Statistical analysis on CTPS surrogates','#',sep]
for fnctps in fnlist:
path = os.path.dirname(fnctps)
basename = os.path.basename(fnctps)
name = os.path.splitext(basename)[0]
print('>>> calc. surrogates based on: ' + basename)
# load CTPS data
dctps = np.load(fnctps).item()
phase_trials = dctps['pt'] # [nfreq, ntrials, nsources, nsamples]
# create surrogate tests
t_start = time.time()
pks = make_surrogates_ctps(phase_trials,nrepeat=nrepeat,
mode=mode,verbose=None,n_jobs=n_jobs)
# perform stats on surrogates
stats = get_stats_surrogates_ctps(pks, verbose=False)
info.append(sep)
info.append(path)
info.append(basename)
info.append('nfreq: '+ str(stats['nfreq']))
info.append('nrepeat: '+ str(stats['nrepeat']))
info.append('nsamples: '+ str(stats['nsamples']))
info.append('nsources: '+ str(stats['nsources']))
info.append('permutation mode: '+ mode)
# info for each freq. band of the current data set
info.append('# stats for each frequency band:')
line_f = 'freqs (Hz):'
line_max = 'pk max: '
line_mean = 'pk mean: '
line_min = 'pk min: '
for i in range(stats['nfreq']):
flow, fhigh = dctps['freqs'][i]
line_f += str('%5d-%d ' % (flow,fhigh))
line_max += str('%8.3f' % stats['pks_max'][i])
line_mean += str('%8.3f' % stats['pks_mean'][i])
line_min += str('%8.3f' % stats['pks_min'][i])
info.append(line_f)
info.append(line_min)
info.append(line_mean)
info.append(line_max)
# across freq. bands
pks_min = stats['pks_min_global']
pks_mean = stats['pks_mean_global']
pks_std = stats['pks_std_global']
pks_pct = stats['pks_pct99_global']
pks_max = stats['pks_max_global']
info.append('# stats across all frequency bands:')
info.append('pk min: '+ str('%8.3f' % pks_min))
info.append('pk mean: '+ str('%8.3f' % pks_mean))
info.append('pk std: '+ str('%8.3f' % pks_std))
info.append('pk pct99:'+ str('%8.3f' % pks_pct))
info.append('pk pct99.90:'+ str('%8.3f' % stats['pks_pct999_global']))
info.append('pk pct99.99:'+ str('%8.3f' % stats['pks_pct9999_global']))
info.append('pk max: '+ str('%8.3f' % pks_max))
# combine global stats values of different ctps files into one global analysis
if (ifile > 1):
pks_all = np.concatenate((pks_all, pks.flatten()))
else:
pks_all = pks.flatten()
ifile += 1
if (ifile > 1):
info.append(sep)
info.append('#')
info.append('# stats across all files:')
info.append('#')
info.append('pk min: '+ str('%8.3f' % pks_all.min()))
info.append('pk mean: '+ str('%8.3f' % pks_all.mean()))
info.append('pk std: '+ str('%8.3f' % pks_all.std()))
info.append('pk pct99:'+ str('%8.3f' % np.percentile(pks_all,99)))
info.append('pk pct99.90:'+ str('%8.3f' % np.percentile(pks_all,99.9)))
info.append('pk pct99.99:'+ str('%8.3f' % np.percentile(pks_all,99.99)))
info.append('pk max: '+ str('%8.3f' % pks_all.max()))
info.append('#')
duration = (time.time() - t_start) / 60.0 # in minutes
info.append('duration [min]: %0.2f' % duration)
info.append('#')
info.append(sep)
# save surrogate stats
if (save):
np.savetxt(fnout, info, fmt='%s')
return info
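The shuffle-mode surrogate test above can be illustrated with a small stdlib-only sketch (all names here are hypothetical stand-ins; the real work is done by `make_surrogates_ctps` and `get_stats_surrogates_ctps`): shuffling each trial's samples destroys time-locking, and a high percentile of the surrogate maxima serves as the significance threshold.

```python
import math
import random

def resultant_length(phases):
    """Mean resultant vector length: ~1 for phase-locked values,
    ~0 for uniformly scattered phases."""
    n = len(phases)
    c = sum(math.cos(p) for p in phases) / n
    s = sum(math.sin(p) for p in phases) / n
    return math.hypot(c, s)

def locking_curve(trials):
    """Phase locking across trials at each time sample;
    trials[i][t] is the phase of trial i at sample t."""
    nsamp = len(trials[0])
    return [resultant_length([tr[t] for tr in trials]) for t in range(nsamp)]

def surrogate_threshold(trials, nrepeat=200, pct=99.0, seed=0):
    """Estimate a pk threshold: shuffle each trial's samples to destroy
    time-locking, then take a percentile of the surrogate maxima."""
    rng = random.Random(seed)
    work = [list(tr) for tr in trials]
    maxima = []
    for _ in range(nrepeat):
        for tr in work:
            rng.shuffle(tr)
        maxima.append(max(locking_curve(work)))
    maxima.sort()
    return maxima[int(round(pct / 100.0 * (len(maxima) - 1)))]
```

A component whose observed maximum locking exceeds this threshold is unlikely to be explained by chance alone.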
#######################################################
#
# Select ICs from CTPS analysis (for brain responses)
#
#######################################################
def apply_ctps_select_ic(fname_ctps, threshold=0.1):
''' Select ICs based on CTPS analysis. '''
fnlist = get_files_from_list(fname_ctps)
import matplotlib.pyplot as pl
# loop across all filenames
pl.ioff() # switch off (interactive) plot visualisation
ifile = 0
for fnctps in fnlist:
name = os.path.splitext(fnctps)[0]
basename = os.path.splitext(os.path.basename(fnctps))[0]
print('>>> working on: ' + basename)
# load CTPS data
dctps = np.load(fnctps).item()
freqs = dctps['freqs']
nfreq = len(freqs)
ncomp = dctps['ncomp']
trig_name = dctps['trig_name']
times = dctps['times']
ic_sel = []
# loop across all freq. bands
fig = pl.figure(ifile + 1, figsize=(16, 9), dpi=100)
pl.clf()
fig.subplots_adjust(left=0.08, right=0.95, bottom=0.05,
top=0.93, wspace=0.2, hspace=0.2)
fig.suptitle(basename, fontweight='bold')
nrow = int(np.ceil(float(nfreq) / 2))
for ifreq in range(nfreq):
pk = dctps['pk'][ifreq]
pt = dctps['pt'][ifreq]
pkmax = pk.max(1)
ixmax = np.where(pkmax == pkmax.max())[0]
ix = (np.where(pkmax >= threshold))[0]
if ix.size:
if (ifreq > 0):
ic_sel = np.append(ic_sel, ix + 1)
else:
ic_sel = ix + 1
# construct names for title, fnout_fig, fnout_ctps
frange = ' @' + str(freqs[ifreq][0]) + '-' + str(freqs[ifreq][1])
x = np.arange(ncomp) + 1
# make bar plots of CTPS threshold levels
ax = fig.add_subplot(nrow, 2, ifreq + 1)
pl.bar(x, pkmax, color='steelblue')
pl.bar(x[ix], pkmax[ix], color='red')
pl.plot(x,np.repeat(threshold,ncomp),color='black')
pl.title(trig_name + frange, fontsize='small')
pl.xlim([1, ncomp + 1])
pl.ylim([0, 0.5])
pl.text(2, 0.45, 'ICs: ' + str(ix + 1))
ic_sel = np.unique(ic_sel)
nic = np.size(ic_sel)
fig.text(0.02, 0.98, 'pK threshold: ' + str(threshold),
transform=ax.transAxes)
info = 'ICs (all): ' + str(ic_sel).strip('[]')
fig.text(0.02, 0.01, info, transform=ax.transAxes)
# save CTPS components found
fntxt = name + '-ic_selection.txt'
ic_sel
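The per-band selection logic above (`ix = np.where(pkmax >= threshold)[0]`, reported 1-based as `ix + 1`) boils down to a one-liner; a hypothetical numpy-free sketch:

```python
def select_ics(pkmax, threshold=0.1):
    """Return 1-based component numbers whose maximum pK value
    reaches the threshold (mirrors np.where(pkmax >= threshold)[0] + 1)."""
    return [i + 1 for i, pk in enumerate(pkmax) if pk >= threshold]

print(select_ics([0.05, 0.20, 0.10, 0.09]))  # components 2 and 3
```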
<reponame>jdthorpe/MCPH
"""
Because timelines keep track of nested timelines, we need to prevent
timeline instances from duplicating this calculation. Hence, we need to
freeze the timeline when it has been accessed as a child timeline.
==============================================
Example: potential double counting situation
==============================================
tumor = Event(type='tumor',id='ovarianCancer',time = 55) # the origin of the tumor relative to its reference
deathFromCancer = Event(time = 12)
tumor._addEvent(deathFromCancer)
someone._addEvent(tumor)
# then later:
thisTumor = someone.tumor
...
...
someone.tumor = thisTumor
==============================================
End Example
==============================================
in the above example, if we didn't keep track
of the reference timeline, we would end up shifting
the origin of thisTumor twice and our ages at
events would get all screwed up
==============================================
time v. reftime
==============================================
Because all timelines are considered to be
within the context of a person's life, the
'global' coordinates of a timeline refer to
that person's time at the event.
The local coordinates, however, can be measured
with respect to some event that defines a reftime.
The Event class has an implicit reftime,
which is the time at the 'event'.
To get the global coordinates, use the 'getAge'
methods, and to get local coordinates, use the
'getTime' methods.
"""
from math import isinf
from operator import attrgetter
from types import NoneType
import pdb
inf = float('inf')
class Event(object):
"""
Most of what follows is no longer accurate. Here is what's current:
==================================================
* an event is an object with 3 special (public) properties:
'reference': another event that serves as the
reference time for this event. References
can be followed from child to parent
until a global (person) object is reached.
The global object canonically does not have
a reference event.
'reftime': the time between this event and
the reference event (negative if this event occurred
first).
'time': a calculated property := time between
this event and the global event (birth),
*iff* a global event is at the top of the
reference chain.
and a couple of special methods:
'getEvent()': returns a named event.
'getEvents()': returns a (possibly empty) list
of events with a set of common characteristics
Events *may* have a reference event, in which case the
event is a child of its reference.
there are three ways to set a reference event on an event:
(1) in the init method [ child = Event(reference=parent) ]
(2) via the child's reference attribute [ child.reference = parent ]
(3) via attribute assignment [ parent.foo = child ]
Note that the first two methods do not assign a name
to the event.
The link between parent and child can always be removed
via 'del child.reference', and in the case that the third
assignment option was used, 'del parent.foo' will also
remove the link between child and parent.
the final aspect of an event is that attribute assignment
can be used to set the reference (parent / child)
relationship. (e.g. parent.foo = child) sets the
parent / child relation and names the event 'foo'.
==================================================
"""
# default values for time and reftime
def __init__(self,
# the time between the reference event and this event
reftime=None,
# the reference object
reference=None,
# a string, tuple of strings, or list of strings to aid
# in searching for events.
type=()):
# store the reference event
if reference is not None:
self.reference = reference # implicitly uses the reference setter property
# the time of the event relative to the reference frame
if reftime is not None:
self.__dict__['reftime'] = reftime
# store the 'type' tuple
if isinstance(type,str):
type = (type,)
elif isinstance(type,list):
type = tuple(type)
self.type = type # a tuple that names the event type
# initialize the childen and prevented by lists
self._children = []
self._preventedBy = []
# --------------------------------------------------
# prevented properties
# --------------------------------------------------
def unpreventALL(self):
self._preventedBy = []
def unprevent(self,by):
for i in range(len(self._preventedBy)-1,-1,-1):
if self._preventedBy[i] is by:
del self._preventedBy[i]
def prevent(self,by):
if isinstance(by,origin):
raise RuntimeError('An event cannot be prevented by an origin')
if self is by:
raise RuntimeError('An event cannot be prevented by itself')
if by not in self._preventedBy :
self._preventedBy.append(by)
def _getTimePrevented(self):
if len(self._preventedBy):
return min(x.time for x in self._preventedBy)
else:
return float('inf')
TimePrevented = property(_getTimePrevented)
def _prevented (self):
""" An event is prevented if any of the prevention events
occure prior to the event in the absence of prevention
events.
"""
return float(self) > min(x.time for x in self._preventedBy])
prevented = property(_prevented)
# --------------------------------------------------
# time property
# --------------------------------------------------
def _getTime(self):
if 'reference' not in self.__dict__:
raise RuntimeError("Attempt to GET the time of an event before setting the event's reference attribute, OR no global reference found.")
refTime = self.reference.time
if self.reftime is None or refTime is None:
return None
else:
return float(self.reftime) + refTime
time = property(_getTime)
# --------------------------------------------------
# redraw method
# --------------------------------------------------
def redraw(self):
"""call the redraw method on self.reference.time or self.reference.reftime"""
try:
self.reference.time.redraw()
except AttributeError:
pass
try:
self.reference.reftime.redraw()
except AttributeError:
pass
# --------------------------------------------------
# Attribute Setter
# --------------------------------------------------
def __setattr__(self, name, value):
""" The Set Attr method, which is reponsible for setting
the double link between events for statements like: `e1.e2 = e2`
"""
if name in ('reference',):
# python calls setter methods in this order, so we have to bypass __setattr__
# in order to get the property getter and setter methods defined below to handle
# the assignment. See this page for details:
#
# http://stackoverflow.com/questions/15750522/class-properties-and-setattr
object.__setattr__(self, name, value)
return
if isinstance(value,Event):
if ('reference' in value.__dict__
and value.reference is not self):
raise AttributeError('Attempt to add a second reference to a single event')
# PREVENT CIRCULAR PARENT/CHILD REFERENCES
tmp = value
while 'reference' in tmp.__dict__:
if isinstance(tmp,origin):
break
if tmp is self:
raise ValueError("Circular Reference Error: attempt to add a Event as a child of an ancestor.")
tmp = tmp.reference
# ADD SELF AS THE EVENT'S NEW 'REFERENCE' ATTRIBUTE
value.reference = self
self.__dict__[name] = value
# --------------------------------------------------
# Attribute Deleter
# --------------------------------------------------
def __delattr__(self, name):
if name not in self.__dict__:
raise AttributeError(name)
if name == 'reference':
# python calls setter methods in this order, so we have to bypass __setattr__
# in order to get the property getter and setter methods defined below to handle
# the assignment. See this page for details:
#
# http://stackoverflow.com/questions/15750522/class-properties-and-setattr
object.__delattr__(self, name)
# this propagates the delete on to the '__delReferenceEvent()' method below
return
if isinstance(self.__dict__[name],Event):
# NO CIRCULAR REFERENCES PLEASE, hence the following line
# is NOT !!: self.__dict__[name].reference
del self.__dict__[name].__dict__['reference']
del self.__dict__[name]
# --------------------------------------------------
#
# --------------------------------------------------
def _origin(self):
""" Retruns the origin event which is an ancestor to self. """
this = self
while True:
if 'reference' not in this.__dict__:
return None
reference = this.__dict__['reference']
if isinstance(reference,origin):
return reference
else: this = reference
origin = property(_origin)
# --------------------------------------------------
# stubs for pre and post processing
# --------------------------------------------------
def preprocess(self):
""" preprocess() is called when the event is initialized. it is
responsible for initializing any values required for processing
and/or eligibility testing. before the person event is tested for
enrollment eligibility, and before the personEventProcessors in the
decisionRule are called.
"""
pass # to be over-written by sub-classes
def process(self):
""" process() is called in order to handle the conditional events.
For example It's not possible for a tubal ligation (TL) to occure
after the tubes are removed (BS), so the TL should set it's time to
None in it's "process()" method when a BSO occures before the TL.
The Process() method should be used to modifiy the event that it
is called on, and not other events. The cascasde | |
<reponame>bmmalone/pyllars
"""
This module contains helpers for using dask: https://dask.pydata.org/en/latest/
"""
import logging
logger = logging.getLogger(__name__)
from typing import Callable, Iterable, List
import argparse
import collections
import dask.distributed
import pandas as pd
import shlex
import tqdm
import typing
def connect(args:argparse.Namespace) -> typing.Tuple[dask.distributed.Client, typing.Optional[dask.distributed.LocalCluster]]:
""" Connect to the dask cluster specifed by the arguments in `args`
Specifically, this function uses args.cluster_location to determine whether
to start a dask.distributed.LocalCluster (in case args.cluster_location is
"LOCAL") or to (attempt to) connect to an existing cluster (any other
value).
If a local cluster is started, it will use a number of worker processes
equal to args.num_procs. Each process will use args.num_threads_per_proc
threads. The scheduler for the local cluster will listen to a random port.
Parameters
----------
args: argparse.Namespace
A namespace containing the following fields:
* cluster_location
* client_restart
* num_procs
* num_threads_per_proc
Returns
-------
client: dask.distributed.Client
The client for the dask connection
cluster: dask.distributed.LocalCluster or None
If a local cluster is started, the reference to the local cluster
object is returned. Otherwise, None is returned.
"""
from dask.distributed import Client as DaskClient
from dask.distributed import LocalCluster as DaskCluster
client = None
cluster = None
if args.cluster_location == "LOCAL":
msg = "[dask_utils]: starting local dask cluster"
logger.info(msg)
cluster = DaskCluster(
n_workers=args.num_procs,
processes=True,
threads_per_worker=args.num_threads_per_proc
)
client = DaskClient(cluster)
else:
msg = "[dask_utils]: attempting to connect to dask cluster: {}"
msg = msg.format(args.cluster_location)
logger.info(msg)
client = DaskClient(address=args.cluster_location)
if args.client_restart:
msg = "[dask_utils]: restarting client"
logger.info(msg)
client.restart()
return client, cluster
def add_dask_options(
parser:argparse.ArgumentParser,
num_procs:int=1,
num_threads_per_proc:int=1,
cluster_location:str="LOCAL") -> None:
""" Add options for connecting to and/or controlling a local dask cluster
Parameters
----------
parser : argparse.ArgumentParser
The parser to which the options will be added
num_procs : int
The default number of processes for a local cluster
num_threads_per_proc : int
The default number of threads for each process for a local cluster
cluster_location : str
The default location of the cluster
Returns
-------
None : None
A "dask cluster options" group is added to the parser
"""
dask_options = parser.add_argument_group("dask cluster options")
dask_options.add_argument('--cluster-location', help="The address for the "
"cluster scheduler. This should either be \"LOCAL\" or the address "
"and port of the scheduler. If \"LOCAL\" is given, then a "
"dask.distributed.LocalCluster will be started.",
default=cluster_location)
dask_options.add_argument('--num-procs', help="The number of processes to use "
"for a local cluster", type=int, default=num_procs)
dask_options.add_argument('--num-threads-per-proc', help="The number of "
"threads to allocate for each process. So the total number of threads "
"for a local cluster will be (args.num_procs * "
"args.num_threads_per_cpu).", type=int, default=num_threads_per_proc)
dask_options.add_argument('--client-restart', help="If this flag is "
"given, then the \"restart\" function will be called on the client "
"after establishing the connection to the cluster",
action='store_true')
def add_dask_values_to_args(
args:argparse.Namespace,
num_procs:int=1,
num_threads_per_proc:int=1,
cluster_location:str="LOCAL",
client_restart:bool=False) -> None:
""" Add the options for a dask cluster to the given argparse namespace
This function is mostly intended as a helper for use in ipython notebooks.
Parameters
----------
args : argparse.Namespace
The namespace on which the arguments will be set
num_procs : int
The number of processes for a local cluster
num_threads_per_proc : int
The number of threads for each process for a local cluster
cluster_location : str
The location of the cluster
client_restart : bool
Whether to restart the client after connection
Returns
-------
None : None
The respective options will be set on the namespace
"""
args.num_procs = num_procs
args.num_threads_per_proc = num_threads_per_proc
args.cluster_location = cluster_location
args.client_restart = client_restart
def get_dask_cmd_options(args:argparse.Namespace) -> List[str]:
""" Extract the flags and options specified for dask from
the parsed arguments.
Presumably, these were added with `add_dask_options`. This function
returns the arguments as an array. Thus, they are suitable for use
with `subprocess.run` and similar functions.
Parameters
-----------
args : argparse.Namespace
The parsed arguments
Returns
-------
dask_options : typing.List[str]
The list of dask options and their values.
"""
args_dict = vars(args)
# first, pull out the text arguments
dask_options = [
'num_procs',
'num_threads_per_proc',
'cluster_location'
]
# create a list of command line arguments
ret = []
for o in dask_options:
arg = str(args_dict[o])
if len(arg) > 0:
ret.append('--{}'.format(o.replace('_', '-')))
ret.append(arg)
if args.client_restart:
ret.append("--client-restart")
ret = [shlex.quote(c) for c in ret]
return ret
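The round trip between parsed arguments and rebuilt flags can be sketched without dask. This inlines the underscore-to-dash rebuilding that `get_dask_cmd_options` performs, against a hypothetical mini-parser covering a subset of the options (not the full `add_dask_options` group):

```python
import argparse
import shlex

# Hypothetical mini-parser with a subset of the dask options.
parser = argparse.ArgumentParser()
parser.add_argument('--num-procs', type=int, default=1)
parser.add_argument('--cluster-location', default='LOCAL')
parser.add_argument('--client-restart', action='store_true')

args = parser.parse_args(['--num-procs', '4', '--client-restart'])

# Rebuild the command line the way get_dask_cmd_options does:
# underscores become dashes, and the store_true flag is appended bare.
opts = []
for name in ('num_procs', 'cluster_location'):
    value = str(getattr(args, name))
    if value:
        opts.append('--{}'.format(name.replace('_', '-')))
        opts.append(value)
if args.client_restart:
    opts.append('--client-restart')
opts = [shlex.quote(c) for c in opts]
```

The resulting list is suitable for `subprocess.run`, as the docstring above notes.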
###
# Helpers to submit arbitrary jobs to a dask cluster
###
def apply_iter(
it:Iterable,
client:dask.distributed.Client,
func:Callable,
*args,
return_futures:bool=False,
progress_bar:bool=True,
priority:int=0,
**kwargs) -> List:
""" Distribute calls to `func` on each item in `it` across `client`.
Parameters
----------
it : typing.Iterable
The inputs for `func`
client : dask.distributed.Client
A dask client
func : typing.Callable
The function to apply to each item in `it`
args
Positional arguments to pass to `func`
kwargs
Keyword arguments to pass to `func`
return_futures : bool
Whether to wait for the results (`False`, the default) or return a
list of dask futures (when `True`). If a list of futures is returned,
the `result` method should be called on each of them at some point
before attempting to use the results.
progress_bar : bool
Whether to show a progress bar when waiting for results. The parameter
is only relevant when `return_futures` is `False`.
priority : int
The priority of the submitted tasks. Please see the dask documentation
for more details: http://distributed.readthedocs.io/en/latest/priority.html
Returns
-------
results: typing.List
Either the result of each function call or a future which will give
the result, depending on the value of `return_futures`
"""
msg = ("[dask_utils.apply_iter] submitting jobs to cluster")
logger.debug(msg)
if progress_bar:
it = tqdm.tqdm(it)
ret_list = [
client.submit(func, *(i, *args), **kwargs, priority=priority) for i in it
]
if return_futures:
return ret_list
msg = ("[dask_utils.apply_iter] collecting results from cluster")
logger.debug(msg)
# add a progress bar if we asked for one
if progress_bar:
ret_list = tqdm.tqdm(ret_list)
ret_list = [r.result() for r in ret_list]
return ret_list
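The submit-then-collect pattern of `apply_iter` maps directly onto `concurrent.futures`; a stdlib stand-in where `ThreadPoolExecutor` plays the role of the dask client (the function and values here are illustrative only):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x, offset=0):
    return x * x + offset

# Submit every item first (like return_futures=True), then block on
# each future in submission order to collect the results.
with ThreadPoolExecutor(max_workers=2) as client:
    futures = [client.submit(work, i, offset=1) for i in range(5)]
    results = [f.result() for f in futures]
```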
def apply_df(
data_frame:pd.DataFrame,
client:dask.distributed.Client,
func:typing.Callable,
*args,
return_futures:bool=False,
progress_bar:bool=True,
priority:int=0,
**kwargs) -> List:
""" Distribute calls to `func` on each row in `data_frame` across `client`.
Additionally, `args` and `kwargs` are passed to the function call.
Parameters
----------
data_frame: pandas.DataFrame
A data frame
client: dask.distributed.Client
A dask client
func: typing.Callable
The function to apply to each row in `data_frame`
args
Positional arguments to pass to `func`
kwargs
Keyword arguments to pass to `func`
return_futures: bool
Whether to wait for the results (`False`, the default) or return a
list of dask futures (when `True`). If a list of futures is returned,
the `result` method should be called on each of them at some point
before attempting to use the results.
progress_bar: bool
Whether to show a progress bar when waiting for results. The parameter
is only relevant when `return_futures` is `False`.
priority : int
The priority of the submitted tasks. Please see the dask documentation
for more details: http://distributed.readthedocs.io/en/latest/priority.html
Returns
-------
results: typing.List
Either the result of each function call or a future which will give
the result, depending on the value of `return_futures`
"""
if len(data_frame) == 0:
return []
it = data_frame.iterrows()
if progress_bar:
it = tqdm.tqdm(it, total=len(data_frame))
ret_list = [
client.submit(func, *(row[1], *args), **kwargs, priority=priority)
for row in it
]
if return_futures:
return ret_list
# add a progress bar if we asked for one
if progress_bar:
ret_list = tqdm.tqdm(ret_list, total=len(data_frame))
ret_list = [r.result() for r in ret_list]
return ret_list
def apply_groups(
groups:pd.core.groupby.DataFrameGroupBy,
client:dask.distributed.client.Client,
func:typing.Callable,
*args,
return_futures:bool=False,
progress_bar:bool=True,
priority:int=0,
**kwargs) -> typing.List:
""" Distribute calls to `func` on each group in `groups` across `client`.
Additionally, `args` and `kwargs` are passed to the function call.
Parameters
----------
groups: pandas.DataFrameGroupBy
The result of a call to `groupby` on a data frame
client: distributed.Client
A dask client
func: typing.Callable
The function to apply to each group in `groups`
args
Positional arguments to pass to `func`
kwargs
Keyword arguments to pass to `func`
return_futures: bool
Whether to wait for the results (`False`, the default) or return a
list of dask futures (when `True`). If a list of futures is returned,
the `result` method should be called on
13244: "<square> 03BC 0057",
13245: "<square> 006D 0057",
13246: "<square> 006B 0057",
13247: "<square> 004D 0057",
13248: "<square> 006B 03A9",
13249: "<square> 004D 03A9",
13250: "<square> 0061 002E 006D 002E",
13251: "<square> 0042 0071",
13252: "<square> 0063 0063",
13253: "<square> 0063 0064",
13254: "<square> 0043 2215 006B 0067",
13255: "<square> 0043 006F 002E",
13256: "<square> 0064 0042",
13257: "<square> 0047 0079",
13258: "<square> 0068 0061",
13259: "<square> 0048 0050",
13260: "<square> 0069 006E",
13261: "<square> 004B 004B",
13262: "<square> 004B 004D",
13263: "<square> 006B 0074",
13264: "<square> 006C 006D",
13265: "<square> 006C 006E",
13266: "<square> 006C 006F 0067",
13267: "<square> 006C 0078",
13268: "<square> 006D 0062",
13269: "<square> 006D 0069 006C",
13270: "<square> 006D 006F 006C",
13271: "<square> 0050 0048",
13272: "<square> 0070 002E 006D 002E",
13273: "<square> 0050 0050 004D",
13274: "<square> 0050 0052",
13275: "<square> 0073 0072",
13276: "<square> 0053 0076",
13277: "<square> 0057 0062",
13278: "<square> 0056 2215 006D",
13279: "<square> 0041 2215 006D",
13280: "<compat> 0031 65E5",
13281: "<compat> 0032 65E5",
13282: "<compat> 0033 65E5",
13283: "<compat> 0034 65E5",
13284: "<compat> 0035 65E5",
13285: "<compat> 0036 65E5",
13286: "<compat> 0037 65E5",
13287: "<compat> 0038 65E5",
13288: "<compat> 0039 65E5",
13289: "<compat> 0031 0030 65E5",
13290: "<compat> 0031 0031 65E5",
13291: "<compat> 0031 0032 65E5",
13292: "<compat> 0031 0033 65E5",
13293: "<compat> 0031 0034 65E5",
13294: "<compat> 0031 0035 65E5",
13295: "<compat> 0031 0036 65E5",
13296: "<compat> 0031 0037 65E5",
13297: "<compat> 0031 0038 65E5",
13298: "<compat> 0031 0039 65E5",
13299: "<compat> 0032 0030 65E5",
13300: "<compat> 0032 0031 65E5",
13301: "<compat> 0032 0032 65E5",
13302: "<compat> 0032 0033 65E5",
13303: "<compat> 0032 0034 65E5",
13304: "<compat> 0032 0035 65E5",
13305: "<compat> 0032 0036 65E5",
13306: "<compat> 0032 0037 65E5",
13307: "<compat> 0032 0038 65E5",
13308: "<compat> 0032 0039 65E5",
13309: "<compat> 0033 0030 65E5",
13310: "<compat> 0033 0031 65E5",
13311: "<square> 0067 0061 006C",
42652: "<super> 044A",
42653: "<super> 044C",
42864: "<super> A76F",
43000: "<super> 0126",
43001: "<super> 0153",
43868: "<super> A727",
43869: "<super> AB37",
43870: "<super> 026B",
43871: "<super> AB52",
43881: "<super> 028D",
44032: "4352 4449",
44033: "4352 4449 4520",
44034: "4352 4449 4521",
44035: "4352 4449 4522",
44036: "4352 4449 4523",
44037: "4352 4449 4524",
44038: "4352 4449 4525",
44039: "4352 4449 4526",
44040: "4352 4449 4527",
44041: "4352 4449 4528",
44042: "4352 4449 4529",
44043: "4352 4449 4530",
44044: "4352 4449 4531",
44045: "4352 4449 4532",
44046: "4352 4449 4533",
44047: "4352 4449 4534",
44048: "4352 4449 4535",
44049: "4352 4449 4536",
44050: "4352 4449 4537",
44051: "4352 4449 4538",
44052: "4352 4449 4539",
44053: "4352 4449 4540",
44054: "4352 4449 4541",
44055: "4352 4449 4542",
44056: "4352 4449 4543",
44057: "4352 4449 4544",
44058: "4352 4449 4545",
44059: "4352 4449 4546",
44060: "4352 4450",
44061: "4352 4450 4520",
44062: "4352 4450 4521",
44063: "4352 4450 4522",
44064: "4352 4450 4523",
44065: "4352 4450 4524",
44066: "4352 4450 4525",
44067: "4352 4450 4526",
44068: "4352 4450 4527",
44069: "4352 4450 4528",
44070: "4352 4450 4529",
44071: "4352 4450 4530",
44072: "4352 4450 4531",
44073: "4352 4450 4532",
44074: "4352 4450 4533",
44075: "4352 4450 4534",
44076: "4352 4450 4535",
44077: "4352 4450 4536",
44078: "4352 4450 4537",
44079: "4352 4450 4538",
44080: "4352 4450 4539",
44081: "4352 4450 4540",
44082: "4352 4450 4541",
44083: "4352 4450 4542",
44084: "4352 4450 4543",
44085: "4352 4450 4544",
44086: "4352 4450 4545",
44087: "4352 4450 4546",
44088: "4352 4451",
44089: "4352 4451 4520",
44090: "4352 4451 4521",
44091: "4352 4451 4522",
44092: "4352 4451 4523",
44093: "4352 4451 4524",
44094: "4352 4451 4525",
44095: "4352 4451 4526",
44096: "4352 4451 4527",
44097: "4352 4451 4528",
44098: "4352 4451 4529",
44099: "4352 4451 4530",
44100: "4352 4451 4531",
44101: "4352 4451 4532",
44102: "4352 4451 4533",
44103: "4352 4451 4534",
44104: "4352 4451 4535",
44105: "4352 4451 4536",
44106: "4352 4451 4537",
44107: "4352 4451 4538",
44108: "4352 4451 4539",
44109: "4352 4451 4540",
44110: "4352 4451 4541",
44111: "4352 4451 4542",
44112: "4352 4451 4543",
44113: "4352 4451 4544",
44114: "4352 4451 4545",
44115: "4352 4451 4546",
44116: "4352 4452",
44117: "4352 4452 4520",
44118: "4352 4452 4521",
44119: "4352 4452 4522",
44120: "4352 4452 4523",
44121: "4352 4452 4524",
44122: "4352 4452 4525",
44123: "4352 4452 4526",
44124: "4352 4452 4527",
44125: "4352 4452 4528",
44126: "4352 4452 4529",
44127: "4352 4452 4530",
44128: "4352 4452 4531",
44129: "4352 4452 4532",
44130: "4352 4452 4533",
44131: "4352 4452 4534",
44132: "4352 4452 4535",
44133: "4352 4452 4536",
44134: "4352 4452 4537",
44135: "4352 4452 4538",
44136: "4352 4452 4539",
44137: "4352 4452 4540",
44138: "4352 4452 4541",
44139: "4352 4452 4542",
44140: "4352 4452 4543",
44141: "4352 4452 4544",
44142: "4352 4452 4545",
44143: "4352 4452 4546",
44144: "4352 4453",
44145: "4352 4453 4520",
44146: "4352 4453 4521",
44147: "4352 4453 4522",
44148: "4352 4453 4523",
44149: "4352 4453 4524",
44150: "4352 4453 4525",
44151: "4352 4453 4526",
44152: "4352 4453 4527",
44153: "4352 4453 4528",
44154: "4352 4453 4529",
44155: "4352 4453 4530",
44156: "4352 4453 4531",
44157: "4352 4453 4532",
44158: "4352 4453 4533",
44159: "4352 4453 4534",
44160: "4352 4453 4535",
44161: "4352 4453 4536",
44162: "4352 4453 4537",
44163: "4352 4453 4538",
44164: "4352 4453 4539",
44165: "4352 4453 4540",
44166: "4352 4453 4541",
44167: "4352 4453 4542",
44168: "4352 4453 4543",
44169: "4352 4453 4544",
44170: "4352 4453 4545",
44171: "4352 4453 4546",
44172: "4352 4454",
44173: "4352 4454 4520",
44174: "4352 4454 4521",
44175: "4352 4454 4522",
44176: "4352 4454 4523",
44177: "4352 4454 4524",
44178: "4352 4454 4525",
44179: "4352 4454 4526",
44180: "4352 4454 4527",
44181: "4352 4454 4528",
44182: "4352 4454 4529",
44183: "4352 4454 4530",
44184: "4352 4454 4531",
44185: "4352 4454 4532",
44186: "4352 4454 4533",
44187: "4352 4454 4534",
44188: "4352 4454 4535",
44189: "4352 4454 4536",
44190: "4352 4454 4537",
44191: "4352 4454 4538",
44192: "4352 4454 4539",
44193: "4352 4454 4540",
44194: "4352 4454 4541",
44195: "4352 4454 4542",
44196: "4352 4454 4543",
44197: "4352 4454 4544",
44198: "4352 4454 4545",
44199: "4352 4454 4546",
44200: "4352 4455",
44201: "4352 4455 4520",
44202: "4352 4455 4521",
44203: "4352 4455 4522",
44204: "4352 4455 4523",
44205: "4352 4455 4524",
44206: "4352 4455 4525",
44207: "4352 4455 4526",
44208: "4352 4455 4527",
44209: "4352 4455 4528",
44210: "4352 4455 4529",
44211: "4352 4455 4530",
44212: "4352 4455 4531",
44213: "4352 4455 4532",
44214: "4352 4455 4533",
44215: "4352 4455 4534",
44216: "4352 4455 4535",
44217: "4352 4455 4536",
44218: "4352 4455 4537",
44219: "4352 4455 4538",
44220: "4352 4455 4539",
44221: "4352 4455 4540",
44222: "4352 4455 4541",
44223: "4352 4455 4542",
44224: "4352 4455 4543",
44225: "4352 4455 4544",
44226: "4352 4455 4545",
44227: "4352 4455 4546",
44228: "4352 4456",
44229: "4352 4456 4520",
44230: "4352 4456 4521",
44231: "4352 4456 4522",
44232: "4352 4456 4523",
44233: "4352 4456 4524",
44234: "4352 4456 4525",
44235: "4352 4456 4526",
44236: "4352 4456 4527",
44237: "4352 4456 4528",
44238: "4352 4456 4529",
44239: "4352 4456 4530",
44240: "4352 4456 4531",
44241: "4352 4456 4532",
44242: "4352 4456 4533",
44243: "4352 4456 4534",
44244: "4352 4456 4535",
44245: "4352 4456 4536",
44246: "4352 4456 4537",
44247: "4352 4456 4538",
44248: "4352 4456 4539",
44249: "4352 4456 4540",
44250: "4352 4456 4541",
44251: "4352 4456 4542",
44252: "4352 4456 4543",
44253: "4352 4456 4544",
44254: "4352 4456 4545",
44255: "4352 4456 4546",
44256: "4352 4457",
44257: "4352 4457 4520",
44258: "4352 4457 4521",
44259: "4352 4457 4522",
44260: "4352 4457 4523",
44261: "4352 4457 4524",
44262: "4352 4457 4525",
44263: "4352 4457 4526",
44264: "4352 4457 4527",
44265: "4352 4457 4528",
44266: "4352 4457 4529",
44267: "4352 4457 4530",
44268: "4352 4457 4531",
44269: "4352 4457 |