# --- File: Host_DM/plot_results.py (repo: obscode/CSPMCMC, license: MIT) ---
#!/usr/bin/env python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from numpy import *
import pickle
import sys,os,string
from astropy.io import ascii
from myplotlib import PanelPlot
import config
import STANstats
import get_data
import sigfig
try:
import corner
except ImportError:
corner = None
def MAD(a):
    '''Return the median absolute deviation times 1.48 (a robust estimate
    of the standard deviation for Gaussian data).'''
return 1.48*median(absolute(a-median(a)))
def RMS(a):
    '''Return the root-mean-square deviation about the median'''
return sqrt(mean(power(a - median(a),2)))
def tomag(flux,eflux,zp):
m = -2.5*log10(flux) + zp
dm = eflux/flux*1.087
return m,dm
def toflux(mag,emag,zp):
flux = power(10, -0.4*(mag-zp))
eflux = emag*flux/1.087
return flux,eflux
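# Sanity check (illustrative values, not from the pipeline): tomag and toflux
# are inverses, e.g. toflux(*tomag(100.0, 1.0, 25.0), zp=25.0) gives back
# (100.0, 1.0) up to rounding.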
cfg = config.config(sys.argv[1])
with open(cfg.sampler.output) as f:
d = pickle.load(f)
c = STANstats.STANchains(chains=d['samples'], flatnames=d['flatnames'])
if not cfg.model.NGauss:
cfg.model.NGauss = 1
# MCMC parameters
for var in c.params:
locals()[var] = c.median(var)
locals()['e_'+var] = c.std(var)
# Data from pickle file
for var in d['data']:
locals()[var] = d['data'][var]
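# If the data were not loaded in magnitudes, keep copies of the raw fluxes and
# convert the working arrays to magnitudes (note the per-sample zero points).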
if not cfg.model.in_mag:
flux,e_flux = mag,e_mag
mag,e_mag = tomag(mag, e_mag, 25.0)
flux4258,e_flux4258 = mag4258, e_mag4258
mag4258,e_mag4258 = tomag(mag4258, e_mag4258, 23.0)
fluxLMC,e_fluxLMC = magLMC,e_magLMC
magLMC,e_magLMC = tomag(magLMC, e_magLMC, 12.0)
fluxMW,e_fluxMW = magMW, e_magMW
magMW,e_magMW = tomag(magMW, e_magMW, 3.0)
w_P = []
w_VI = []
w_OH = []
w_res = []
s_res = []
w_labs = []
Pmin,Pmax = inf,-inf
VImin,VImax = inf,-inf
OHmin,OHmax = inf,-inf
fig1 = PanelPlot(1,2, pwidths=[1], pheights=[0.2, 0.8], figsize=(10,6))
fig2 = PanelPlot(1,2, pwidths=[1], pheights=[0.2, 0.8], figsize=(10,6))
fig3 = PanelPlot(1,2, pwidths=[1], pheights=[0.2, 0.8], figsize=(10,6))
aresids = []
alabels = []
if cfg.model.use_MW:
if 'betaVI_MW' not in c.params:
betaVI_MW = betaVI
e_betaVI_MW = e_betaVI
dist = -5*log10(pi_true) - 5
model = dist + betaVI*VI_MW + betaP*P_MW + betaOH*(OH_MW-9.5) + M
resids = magMW - model
if cfg.model.in_mag:
aresids.append(resids)
else:
r = fluxMW - toflux(model, 0, 3.0)[0]
aresids.append(r/RMS(r))
alabels.append('MW')
w_res.append(median(resids))
s_res.append(std(resids))
OH_MW = array([OH_MW]*len(VI_MW))
w_P.append(median(P_MW))
w_VI.append(median(VI_MW))
w_OH.append(median(OH_MW))
w_labs.append('MW')
fig1.axes[0].plot(P_MW, resids, 'o', color='blue')
fig2.axes[0].plot(VI_MW,resids, 'o', color='blue')
    fig3.axes[0].plot(OH_MW, resids, 'o', color='blue')
fig1.axes[1].plot(P_MW, magMW - dist - betaVI_MW*VI_MW - betaOH*(OH_MW-9.5),
'o', color='blue', label='MW')
fig2.axes[1].plot(VI_MW, magMW - dist - betaP*P_MW - betaOH*(OH_MW-9.5),
'o', color='blue', label='MW')
fig3.axes[1].plot(OH_MW, magMW - dist - betaP*P_MW - betaVI_MW*VI_MW,
'o', color='blue', label='MW')
fig1.axes[0].axhline(eps_MW, linestyle='--', color='blue')
fig1.axes[0].axhline(-eps_MW, linestyle='--', color='blue')
Pmin = min(Pmin, P_MW.min())
Pmax = max(Pmax, P_MW.max())
VImin = min(VImin, VI_MW.min())
VImax = max(VImax, VI_MW.max())
OHmin = min(OHmin, OH_MW.min())
OHmax = max(OHmax, OH_MW.max())
if cfg.model.use_LMC:
if 'betaVI_LMC' not in c.params:
betaVI_LMC = betaVI
e_betaVI_LMC = e_betaVI
model = DM_LMC + betaVI_LMC*VI_LMC + betaP*P_LMC + \
betaOH*(OH_LMC-9.5) + M
resids = magLMC - model
w_res.append(median(resids))
s_res.append(std((resids)))
if cfg.model.in_mag:
aresids.append(resids)
else:
r = fluxLMC - toflux(model,0.0, 12.0)[0]
aresids.append(r/RMS(r))
alabels.append('LMC')
OH_LMC = array([OH_LMC]*len(P_LMC))
w_P.append(median(P_LMC))
w_VI.append(median(VI_LMC))
w_OH.append(median(OH_LMC))
w_labs.append('LMC')
fig1.axes[0].plot(P_LMC, resids, 's', color='k')
fig2.axes[0].plot(VI_LMC, resids, 's', color='k')
fig3.axes[0].plot(OH_LMC, resids, 's', color='k')
fig1.axes[1].plot(P_LMC, resids + betaP*P_LMC + M, 's', color='k',
label='LMC')
fig2.axes[1].plot(VI_LMC, resids + betaVI_LMC*VI_LMC + M, 's', color='k',
label='LMC')
fig3.axes[1].plot(OH_LMC, resids + betaOH*(OH_LMC-9.5) + M, 's', color='k',
label='LMC')
fig1.axes[0].axhline(eps_LMC, linestyle='--', color='k')
fig1.axes[0].axhline(-eps_LMC, linestyle='--', color='k')
Pmin = min(Pmin, P_LMC.min())
Pmax = max(Pmax, P_LMC.max())
VImin = min(VImin, VI_LMC.min())
VImax = max(VImax, VI_LMC.max())
OHmin = min(OHmin, OH_LMC.min())
OHmax = max(OHmax, OH_LMC.max())
if cfg.model.use_4258:
if 'betaVI_4258' not in c.params:
betaVI_4258 = betaVI
e_betaVI_4258 = e_betaVI
model = DM_4258 + betaVI_4258*VI_4258 + betaP*P_4258 + \
betaOH*(OH_4258-9.5) + M
resids = mag4258 - model
if cfg.model.in_mag:
aresids.append(resids)
else:
r = flux4258 - toflux(model, 0.0, 23.0)[0]
aresids.append(r/RMS(r))
alabels.append('N4258')
w_res.append(median(resids))
s_res.append(std((resids)))
w_P.append(median(P_4258))
w_VI.append(median(VI_4258))
w_OH.append(median(OH_4258))
w_labs.append('4258')
fig1.axes[0].plot(P_4258, resids, '^', color='red')
fig2.axes[0].plot(VI_4258, resids, '^', color='red')
fig3.axes[0].plot(OH_4258, resids, '^', color='red')
fig1.axes[1].plot(P_4258, resids + betaP*P_4258 + M, '^', color='red',
label='4258')
fig2.axes[1].plot(VI_4258, resids + betaVI_4258*VI_4258+M, '^', color='red',
label='4258')
fig3.axes[1].plot(OH_4258, resids + betaOH*(OH_4258-9.5)+M, '^', color='red',
label='4258')
if cfg.model.NGauss > 1:
#mid = argmax(c.median('theta_4258'))
mid = argmax(c.median('theta'))
#fig1.axes[0].axhline(eps_4258[mid], linestyle='--', color='red')
#fig1.axes[0].axhline(-eps_4258[mid], linestyle='--', color='red')
fig1.axes[0].axhline(eps[mid], linestyle='--', color='red')
fig1.axes[0].axhline(-eps[mid], linestyle='--', color='red')
else:
#fig1.axes[0].axhline(eps_4258, linestyle='--', color='red')
#fig1.axes[0].axhline(-eps_4258, linestyle='--', color='red')
fig1.axes[0].axhline(eps, linestyle='--', color='red')
fig1.axes[0].axhline(-eps, linestyle='--', color='red')
    Pmin = min(Pmin, P_4258.min())
    Pmax = max(Pmax, P_4258.max())
VImin = min(VImin, VI_4258.min())
VImax = max(VImax, VI_4258.max())
OHmin = min(OHmin, OH_4258.min())
OHmax = max(OHmax, OH_4258.max())
#xx1 = array([Pmin,Pmax])
xx1 = linspace(Pmin,Pmax, 100)
xx2 = array([VImin,VImax])
xx3 = array([OHmin,OHmax])
fig1.axes[1].plot(xx1, M + betaP*xx1, '-', color='k')
fig3.axes[1].plot(xx3, M + betaOH*(xx3-9.5), '-', color='k')
if cfg.model.use_MW:
fig2.axes[1].plot(xx2, M + betaVI_MW*xx2, '-', color='blue')
if cfg.model.use_LMC:
fig2.axes[1].plot(xx2, M + betaVI_LMC*xx2, '-', color='k')
if cfg.model.use_4258:
fig2.axes[1].plot(xx2, M + betaVI_4258*xx2, '-', color='red')
fig1.axes[0].axhline(0, linestyle='-', color='k')
fig2.axes[0].axhline(0, linestyle='-', color='k')
fig3.axes[0].axhline(0, linestyle='-', color='k')
fig1.axes[0].set_xlabel(r'$\log_{10}\left(P\right)$')
fig2.axes[0].set_xlabel('$V-I$')
fig3.axes[0].set_xlabel('$[O/H]$')
fig1.axes[0].set_ylabel('resids')
fig1.axes[1].set_ylabel('corrected mag')
fig2.axes[0].set_ylabel('resids')
fig2.axes[1].set_ylabel('corrected mag')
fig3.axes[0].set_ylabel('resids')
fig3.axes[1].set_ylabel('corrected mag')
fig1.axes[1].legend()
fig2.axes[1].legend()
fig3.axes[1].legend()
plt.draw()
fig1.set_limits()
fig1.draw()
fig2.set_limits()
fig2.draw()
fig3.set_limits()
fig3.draw()
fig1.fig.savefig('anchors_P.pdf')
fig2.fig.savefig('anchors_VI.pdf')
fig3.fig.savefig('anchors_OH.pdf')
plt.close(fig1.fig)
plt.close(fig2.fig)
plt.close(fig3.fig)
symbs = ['o','s','^','d','p','v']*5
cols = ['k']*6+['red']*6+['blue']*6+['green']*6+['orange']*6
Pmin,Pmax = inf,-inf
VImin,VImax = inf,-inf
OHmin,OHmax = inf,-inf
fig1 = PanelPlot(1,2, pwidths=[1], pheights=[0.2, 0.8], figsize=(10,6))
fig2 = PanelPlot(1,2, pwidths=[1], pheights=[0.2, 0.8], figsize=(10,6))
fig3 = PanelPlot(1,2, pwidths=[1], pheights=[0.2, 0.8], figsize=(10,6))
if len(shape(betaVI)) == 0:
betaVI = ones((S,))*betaVI
e_betaVI = ones((S,))*e_betaVI
sresids = []
for i in range(S):
figi = PanelPlot(1,2, pwidths=[1], pheights=[0.2,0.8], figsize=(10,6))
gids = equal(ID, i+1)
model = DM[i] + betaVI[i]*VI[gids] + betaP*P[gids] + \
betaOH*(OH[gids]-9.5) + M
resids = mag[gids] - model
if cfg.model.in_mag:
sresids.append(resids)
else:
r = flux[gids] - toflux(model, 0.0, 25.0)[0]
sresids.append(r/RMS(r))
w_res.append(median(resids))
s_res.append(std((resids)))
w_P.append(median(P[gids]))
w_VI.append(median(VI[gids]))
w_OH.append(median(OH[gids]))
w_labs.append(cephlist[i])
fig1.axes[0].plot(P[gids],resids, symbs[i], color=cols[i])
figi.axes[0].plot(P[gids],resids, 'o', color='k')
fig2.axes[0].plot(VI[gids],resids, symbs[i], color=cols[i])
fig3.axes[0].plot(OH[gids],resids, symbs[i], color=cols[i])
fig1.axes[1].plot(P[gids], resids + betaP*P[gids] + M, symbs[i],
color=cols[i], label=cephlist[i])
figi.axes[1].plot(P[gids], resids + betaP*P[gids] + M, 'o',
color='k', label=cephlist[i])
reals = c.get_trace('M', merge=True)[newaxis,:,0] + \
(c.get_trace('DM', merge=True)[newaxis,:,i]-DM[i]) + \
c.get_trace('betaP', merge=True)[newaxis,:,0]*xx1[:,newaxis]
mreals = median(reals, axis=1)
sreals = std(reals, axis=1)
fig2.axes[1].plot(VI[gids], resids + betaVI[i]*VI[gids] + M, symbs[i],
color=cols[i], label=cephlist[i])
fig3.axes[1].plot(OH[gids], resids + betaOH*(OH[gids]-9.5) + M, symbs[i],
color=cols[i], label=cephlist[i])
figi.axes[0].axhline(0, linestyle='-', color='red')
figi.axes[0].plot(xx1, sreals, '--', color='red')
figi.axes[0].plot(xx1, -sreals, '--', color='red')
#figi.axes[0].axhline(eps, linestyle='--', color='red')
#figi.axes[1].plot(xx1, M + betaP*xx1, '-', color='red')
figi.axes[1].plot(xx1, mreals, '-', color='red')
figi.axes[1].plot(xx1, mreals+sreals, '--', color='red')
figi.axes[1].plot(xx1, mreals-sreals, '--', color='red')
figi.axes[0].set_xlabel(r'$\log_{10}\left(P\right)$')
figi.axes[0].set_ylabel('resids')
figi.axes[1].set_ylabel('corrected mag')
figi.axes[1].legend(fontsize=8)
figi.set_limits()
figi.draw()
figi.fig.savefig('SN_hosts_P_%s.pdf' % cephlist[i])
plt.close(figi.fig)
Pmin = min(Pmin, P.min())
Pmax = max(Pmax, P.max())
VImin = min(VImin, VI.min())
VImax = max(VImax, VI.max())
OHmin = min(OHmin, OH.min())
OHmax = max(OHmax, OH.max())
xx1 = array([Pmin,Pmax])
xx2 = array([VImin,VImax])
xx3 = array([OHmin,OHmax])
if 'm_betaVI' not in d:
m_betaVI = mean(betaVI)
fig1.axes[1].plot(xx1, M + betaP*xx1, '-', color='k')
fig2.axes[1].plot(xx2, M + m_betaVI*xx2, '-', color='k')
fig3.axes[1].plot(xx3, M + betaOH*(xx3-9.5), '-', color='k')
fig1.axes[0].axhline(0, linestyle='-', color='k')
fig2.axes[0].axhline(0, linestyle='-', color='k')
fig3.axes[0].axhline(0, linestyle='-', color='k')
if cfg.model.NGauss > 1:
aeps = eps[argmax(c.median('theta'))]
else:
aeps = eps
fig1.axes[0].axhline(aeps, linestyle='--', color='k')
fig1.axes[0].axhline(-aeps, linestyle='--', color='k')
fig2.axes[0].axhline(aeps, linestyle='--', color='k')
fig2.axes[0].axhline(-aeps, linestyle='--', color='k')
fig3.axes[0].axhline(aeps, linestyle='--', color='k')
fig3.axes[0].axhline(-aeps, linestyle='--', color='k')
fig1.axes[0].set_xlabel(r'$\log_{10}\left(P\right)$')
fig2.axes[0].set_xlabel('$V-I$')
fig3.axes[0].set_xlabel('$[O/H]$')
fig1.axes[0].set_ylabel('resids')
fig1.axes[1].set_ylabel('corrected mag')
fig2.axes[0].set_ylabel('resids')
fig2.axes[1].set_ylabel('corrected mag')
fig3.axes[0].set_ylabel('resids')
fig3.axes[1].set_ylabel('corrected mag')
fig1.axes[1].legend(fontsize=8)
fig2.axes[1].legend(fontsize=8)
fig3.axes[1].legend(fontsize=8)
plt.draw()
fig1.set_limits()
fig1.draw()
fig2.set_limits()
fig2.draw()
fig3.set_limits()
fig3.draw()
fig1.fig.savefig('SN_hosts_P.pdf')
fig2.fig.savefig('SN_hosts_VI.pdf')
fig3.fig.savefig('SN_hosts_OH.pdf')
plt.close(fig1.fig)
plt.close(fig2.fig)
plt.close(fig3.fig)
# Residual histogram
sresids = concatenate(sresids)
aresids.append(sresids)
alabels.append('SN Hosts')
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(aresids, label=alabels, histtype='step', stacked=True, normed=True,
bins=100, linewidth=2)
ax.set_xlabel('Model residuals', fontsize=16)
ax.set_xlim(-5,5)
ax.legend()
plt.tight_layout()
fig.savefig('resids_hist.pdf')
fig3 = plt.figure() # to hold the weighted average of residuals
fig4 = plt.figure() # to hold the weighted average of residuals
fig5 = plt.figure() # to hold the weighted average of residuals
ax3 = fig3.add_subplot(111)
ax4 = fig4.add_subplot(111)
ax5 = fig5.add_subplot(111)
for i in range(len(w_res)):
ax3.errorbar([w_P[i]], [w_res[i]], fmt=symbs[i], color=cols[i],
label=w_labs[i], yerr=s_res[i], capsize=0, ms=10)
ax4.errorbar([w_VI[i]], [w_res[i]], fmt=symbs[i], color=cols[i],
label=w_labs[i], yerr=s_res[i], capsize=0, ms=10)
ax5.errorbar([w_OH[i]], [w_res[i]], fmt=symbs[i], color=cols[i],
label=w_labs[i], yerr=s_res[i], capsize=0, ms=10)
lgd1 = ax3.legend(fontsize=8, loc=3, ncol=4, bbox_to_anchor=(0.,1.02,1.,0.102),
mode='expand')
lgd2 = ax4.legend(fontsize=8, loc=3, ncol=4, bbox_to_anchor=(0.,1.02,1.,0.102),
mode='expand')
lgd3 = ax5.legend(fontsize=8, loc=3, ncol=4, bbox_to_anchor=(0.,1.02,1.,0.102),
mode='expand')
ax3.set_xlabel(r'$\log_{10}\left(P\right)$')
ax4.set_xlabel(r'$V-I$')
ax5.set_xlabel(r'$[O/H]$')
ax3.set_ylabel(r'median residuals')
ax4.set_ylabel(r'median residuals')
ax5.set_ylabel(r'median residuals')
fig3.savefig('ceph_res_comb_P.pdf', bbox_extra_artists=(lgd1,), bbox_inches='tight')
fig4.savefig('ceph_res_comb_VI.pdf', bbox_extra_artists=(lgd2,), bbox_inches='tight')
fig5.savefig('ceph_res_comb_OH.pdf', bbox_extra_artists=(lgd3,), bbox_inches='tight')
plt.close(fig3)
plt.close(fig4)
plt.close(fig5)
# Now, let's do some triangle plots and output the parameters of interest.
# Cepheid parameters:
if corner is not None:
tp1 = c.triangle_plot(['M','betaP','betaVI','betaOH'])
tp1.savefig('Ceph_triangle.pdf')
else:
print "Warning: corner is not installed, so no triangle plots. To install:"
print "pip install corner"
# Now we output some tables.
fout = open('results_table.txt','w')
fout.write("Cepheids\n")
fout.write("--------\n")
fout.write('M: %s +/- %s\n' % sigfig.round_sig_error(M,e_M,2))
fout.write('betaP: %s +/- %s\n' % sigfig.round_sig_error(betaP,e_betaP,2))
fout.write('betaOH: %s +/- %s\n' % sigfig.round_sig_error(betaOH,e_betaOH,2))
hosts = []
headers = ['Host','DM','betaVI']
headers += ['eps%d' % i for i in range(cfg.model.NGauss)]
cols = [[],[]]
ecols = [[],[]]
for i in range(cfg.model.NGauss):
cols.append([])
ecols.append([])
if cfg.model.use_MW:
hosts.append('MW')
cols[0].append(-1); ecols[0].append(-1)
cols[1].append(betaVI_MW); ecols[1].append(e_betaVI_MW)
cols[2].append(eps_MW); ecols[2].append(e_eps_MW)
for i in range(1,cfg.model.NGauss):
cols[i+2].append(-1)
ecols[i+2].append(-1)
if cfg.model.use_LMC:
hosts.append('LMC')
cols[0].append(DM_LMC); ecols[0].append(e_DM_LMC)
cols[1].append(betaVI_LMC); ecols[1].append(e_betaVI_LMC)
cols[2].append(eps_LMC); ecols[2].append(e_eps_LMC)
for i in range(1,cfg.model.NGauss):
cols[i+2].append(-1)
ecols[i+2].append(-1)
if cfg.model.use_4258:
hosts.append('4258')
cols[0].append(DM_4258); ecols[0].append(e_DM_4258)
cols[1].append(betaVI_4258); ecols[1].append(e_betaVI_4258)
if cfg.model.NGauss > 1:
for i in range(cfg.model.NGauss):
#cols[2+i].append(eps_4258[i]);
#ecols[2+i].append(e_eps_4258[i])
cols[2+i].append(eps[i]);
ecols[2+i].append(e_eps[i])
else:
#cols[2].append(eps_4258);
#ecols[2].append(e_eps_4258)
cols[2].append(eps);
ecols[2].append(e_eps)
hosts += cephlist
cols[0] = concatenate([cols[0], DM]); ecols[0] = concatenate([ecols[0], e_DM])
cols[1] = concatenate([cols[1], betaVI]); ecols[1] = concatenate([ecols[1],
e_betaVI])
if cfg.model.NGauss > 1:
for i in range(cfg.model.NGauss):
cols[2+i] = concatenate([cols[2+i], [eps[i]]*S])
ecols[2+i] = concatenate([ecols[2+i], [e_eps[i]]*S])
else:
cols[2] = concatenate([cols[2], [eps]*S])
ecols[2] = concatenate([ecols[2], [e_eps]*S])
lines = sigfig.format_table(cols=cols, errors=ecols, n=2, headers=headers,
labels=hosts)
[fout.write(line+"\n") for line in lines]
fout.close()
# Final covariance matrix. We need to be a bit more robust here
DMs = c.get_trace('DM', merge=True)
devs = absolute(DMs - c.median('DM', merge=True)[newaxis,:])
# Do a 5-sigma clip to get rid of really deviant points for NGC 4424
gids = less(devs, 5*1.4826*c.mad('DM', merge=True)[newaxis,:])
gids = product(gids, axis=1).astype(bool)
C = cov(DMs[gids,:].T)
DMs = c.median('DM')
eDMs = c.std('DM')
f = open('DM_cov.dat','w')
[f.write(ceph+" ") for ceph in cephlist]
f.write('\n')
[f.write("%f " % DM) for DM in DMs]
f.write('\n')
for i in range(C.shape[0]):
for j in range(C.shape[1]):
f.write("%f " % C[i,j])
f.write('\n')
# Now make a nice figure
fig = plt.figure()
ax = fig.add_subplot(111)
sids = argsort(diag(C))
names = [cephlist[i] for i in sids]
CC = zeros(C.shape)
for i in range(C.shape[0]):
for j in range(C.shape[1]):
CC[i,j] = C[sids[i],sids[j]]
img = ax.imshow(CC, interpolation='nearest', origin='lower', vmin=0, vmax=0.005)
plt.colorbar(img)
plt.xticks(arange(CC.shape[0]), names, rotation='vertical', fontsize=14)
plt.yticks(arange(CC.shape[1]), names, rotation='horizontal', fontsize=14)
plt.tight_layout()
fig.savefig('Hosts_covar.pdf')
plt.close(fig)
Rdata = ascii.read('Riess+2016tab5.dat')
sids = array([list(Rdata['Host']).index(name) for name in cephlist])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.errorbar(Rdata['mu_ceph'][sids], DMs-Rdata['mu_ceph'][sids],
fmt='o', capsize=0, xerr=Rdata['sigma2'][sids], yerr=eDMs)
ax.set_xlabel('DM(Riess)')
ax.set_ylabel('DM(MCMC)-DM(Riess)')
ax.axhline(0)
ax.set_ylim(-0.5,0.5)
fig.savefig('Delta-DMs.pdf')
plt.close(fig)
Adata = ascii.read('Riess+2016tab4.mrt.dat')
Adata_g = Adata.group_by('Field')
metals = Adata_g['[O/H]'].groups.aggregate(mean)
sids2 = array([list(Adata_g.groups.keys['Field']).index(name) for name in cephlist])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.errorbar(metals[sids2], DMs-Rdata['mu_ceph'][sids],
fmt='o', capsize=0, yerr=eDMs)
ax.set_xlabel('12+log(O/H)')
ax.set_ylabel('DM(MCMC)-DM(Riess)')
ax.axhline(0)
fig.savefig('Delta-DMs_metal.pdf')
plt.close(fig)
VIs = Adata_g['F555W-F814W'].groups.aggregate(mean)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.errorbar(VIs[sids2], DMs-Rdata['mu_ceph'][sids],
fmt='o', capsize=0, yerr=eDMs)
ax.set_xlabel('V-I')
ax.set_ylabel('DM(MCMC)-DM(Riess)')
ax.axhline(0)
fig.savefig('Delta-DMs_VI.pdf')
plt.close(fig)
Ps = Adata_g['Per'].groups.aggregate(mean)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.errorbar(Ps[sids2], DMs-Rdata['mu_ceph'][sids],
fmt='o', capsize=0, yerr=eDMs)
ax.set_xlabel('Period (days)')
ax.set_ylabel('DM(MCMC)-DM(Riess)')
ax.axhline(0)
fig.savefig('Delta-DMs_P.pdf')
plt.close(fig)

# --- File: tests/test_hdf5.py (repo: MSLNZ/MSL-IO, license: MIT) ---
import os
import tempfile
import pytest
import numpy as np
try:
import h5py
except ImportError:
h5py = None
from msl.io import read, HDF5Writer, JSONWriter
from msl.io.readers import HDF5Reader
from helper import read_sample, roots_equal
@pytest.mark.skipif(h5py is None, reason='h5py not installed')
def test_read_write_convert():
root1 = read_sample('hdf5_sample.h5')
# write as HDF5 then read
writer = HDF5Writer(tempfile.gettempdir() + '/msl-hdf5-writer-temp.h5')
writer.write(root=root1, mode='w')
root2 = read(writer.file)
assert root2.file == writer.file
assert roots_equal(root1, root2)
os.remove(writer.file)
# convert to JSON then back to HDF5
json_writer = JSONWriter(tempfile.gettempdir() + '/msl-json-writer-temp.json')
json_writer.write(root=root1, mode='w')
root_json = read(json_writer.file)
assert root_json.file == json_writer.file
assert roots_equal(root1, root_json)
os.remove(json_writer.file)
writer2 = HDF5Writer(tempfile.gettempdir() + '/msl-hdf5-writer-temp2.h5')
writer2.write(root=root_json, mode='w')
root3 = read(writer2.file)
assert root3.file == writer2.file
assert roots_equal(root1, root3)
os.remove(writer2.file)
for root in [root1, root2, root3]:
assert isinstance(root, HDF5Reader)
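        # Smoke-test that every node supports str() and repr() without raising.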
for key, value in root.items():
k, v = str(key), str(value)
k, v = repr(key), repr(value)
order = ['D0', 'G0', 'G1A', 'D1', 'G1B', 'D2', 'D3', 'G2']
for i, key in enumerate(root.keys()):
assert os.path.basename(key) == order[i]
assert len(root.metadata) == 3
assert root.metadata['version_h5py'] == '2.8.0'
assert root.metadata.version_hdf5 == '1.10.2'
assert root.metadata['date_created'] == '2018-08-28 15:16:43.904990'
assert 'D0' in root
assert 'G0' in root
d0 = root['D0']
assert root.is_dataset(d0)
assert d0.shape == (10, 4)
assert d0.dtype.str == '<f4'
assert len(d0.metadata) == 2
assert d0.metadata['temperature'] == 21.2
assert d0.metadata.temperature_units == 'deg C'
g0 = root.G0
assert root.is_group(g0)
assert len(g0.metadata) == 1
assert all(g0.metadata['count'] == [1, 2, 3, 4, 5])
assert 'G1A' in g0
assert 'G1B' in g0
g1a = g0['G1A']
assert root.is_group(g1a)
assert len(g1a.metadata) == 2
assert g1a.metadata['one'] == 1
assert g1a.metadata['a'] == 'A'
g1b = g0['G1B']
assert root.is_group(g1b)
assert len(g1b.metadata) == 2
assert g1b.metadata['one'] == 1
assert g1b.metadata['b'] == 'B'
assert 'D1' in g0['G1A']
d1 = root.G0.G1A.D1
assert root.is_dataset(d1)
assert len(d1.metadata) == 0
assert d1.shape == (3, 3)
assert d1.dtype.str == '<f8'
assert 'D2' in g1b
assert 'D3' in g0.G1B
assert 'G2' in root.G0.G1B
d2 = g1b['D2']
assert root.is_dataset(d2)
assert len(d2.metadata) == 2
assert d2.metadata['voltage'] == 132.4
assert d2.metadata['voltage_units'] == 'uV'
assert d2.shape == (10,)
assert d2.dtype.str == '<i4'
assert d2[3] == 90
d3 = g1b.D3
assert root.is_dataset(d3)
assert len(d3.metadata) == 0
assert d3.shape == (10,)
assert d3.dtype.str == '<i4'
assert d3[7] == 51
g2 = root.G0.G1B.G2
assert root.is_group(g2)
assert len(g2.metadata) == 1
assert g2.metadata['hello'] == 'world'
@pytest.mark.skipif(h5py is None, reason='h5py not installed')
def test_raises():
root = read_sample('hdf5_sample.h5')
writer = HDF5Writer()
assert writer.file is None
# no file was specified
with pytest.raises(ValueError, match=r'must specify a file'):
writer.write(root=root)
# root must be a Root object
with pytest.raises(TypeError, match=r'Root'):
writer.write(file='whatever', root=list(root.datasets())[0])
with pytest.raises(TypeError, match=r'Root'):
writer.write(file='whatever', root=list(root.groups())[0])
with pytest.raises(TypeError, match=r'Root'):
writer.write(file='whatever', root='Root')
# cannot overwrite a file by default
file = tempfile.gettempdir() + '/msl-hdf5-writer-temp.h5'
with open(file, mode='wt') as fp:
fp.write('Hi')
with pytest.raises(OSError, match=r'File exists'):
writer.write(file=file, root=root)
with pytest.raises(OSError, match=r'File exists'):
writer.write(file=file, root=root, mode='x')
with pytest.raises(OSError, match=r'File exists'):
writer.write(file=file, root=root, mode='w-')
# invalid mode
for m in ['r', 'b', 'w+b']:
with pytest.raises(ValueError, match=r'Invalid mode'):
writer.write(file=file, root=root, mode=m)
# r+ is a valid mode, but the file must already exist
with pytest.raises(OSError, match=r'File does not exist'):
writer.write(file='does_not.exist', root=root, mode='r+')
# by specifying the proper mode one can overwrite a file
writer.write(file=file, root=root, mode='w')
assert roots_equal(root, read(file))
writer.write(file=file, root=root, mode='a')
assert roots_equal(root, read(file))
writer.write(file=file, root=root, mode='r+')
assert roots_equal(root, read(file))
os.remove(file)
@pytest.mark.skipif(h5py is None, reason='h5py not installed')
def test_numpy_unicode_dtype():
writer = HDF5Writer()
writer.add_metadata(wide_chars=np.array(['1', '-4e+99', 'True'], dtype='<U6'))
writer.create_dataset('wide_chars', data=np.random.random(100).reshape(4, 25).astype('<U32'))
file = tempfile.gettempdir() + '/msl-hdf5-writer-temp.h5'
writer.save(file, mode='w')
root = read(file)
assert np.array_equal(root.metadata.wide_chars, writer.metadata.wide_chars)
# the following array_equal assertion fails so we iterate over all elements instead
# assert np.array_equal(root.wide_chars.astype('<U32'), writer.wide_chars)
for a, b in zip(root.wide_chars.astype('<U32').flatten(), writer.wide_chars.flatten()):
assert a == b
os.remove(file)

# --- File: exp_final_whitebox_attacker.py (repo: alanefl/graph-based-recommender-attacks, license: MIT) ---
import traceback
import sys
import numpy as np
from gbra.data.network_loader import Movielens100kLoader
from gbra.attackers.attacker import *
from gbra.recommender.recommenders import PixieRandomWalkRecommender
ITERATIONS = 5
PIXIE_PARAMS = {
'n_p': 30,
'n_v': 4,
'max_steps_in_walk': 1000,
'alpha': 0.01,
'beta': 20
}
RECOMMENDATIONS = 10
""""
python exp_final_whitebox_attacker.py 0.10 10 HighDegreeAttacker
"""
PERCENT_FAKE_ENTITIES, NUM_FAKE_REVIEWS, ATTACKER_NAME = sys.argv[1:]
PERCENT_FAKE_ENTITIES = float(PERCENT_FAKE_ENTITIES)
NUM_FAKE_REVIEWS = int(NUM_FAKE_REVIEWS)
attackers = {
'HighDegreeAttacker': HighDegreeAttacker,
'LowDegreeAttacker': LowDegreeAttacker,
'HillClimbingAttacker': HillClimbingAttacker,
'RandomAttacker': RandomAttacker,
'AverageAttacker': AverageAttacker,
'NeighborAttacker': NeighborAttacker,
'BlackBoxRWRAttacker': BlackBoxRWRAttacker,
'BlackBoxDeepRWRAttacker': BlackBoxDeepRWRAttacker
}
BLACK_BOX_RWR_NUM_SCOUT_ITEMS = 100
def get_attacker(network, recommender, target_item):
attacker_klass = attackers[ATTACKER_NAME]
num_fake_entities = int(PERCENT_FAKE_ENTITIES * network.num_entities)
kwargs = dict(
_recommender=recommender,
_target_item=target_item,
_num_fake_entities=num_fake_entities,
_num_fake_ratings=NUM_FAKE_REVIEWS
)
if ATTACKER_NAME == 'BlackBoxRWRAttacker':
kwargs.update(dict(
_num_items_to_scout=BLACK_BOX_RWR_NUM_SCOUT_ITEMS,
_num_recs=RECOMMENDATIONS
))
elif ATTACKER_NAME == 'BlackBoxDeepRWRAttacker':
kwargs.update(dict(
_num_items_to_scout=BLACK_BOX_RWR_NUM_SCOUT_ITEMS,
))
return attacker_klass(**kwargs)
def evaluate_attacker(target_item):
network = Movielens100kLoader().load()
recommender = PixieRandomWalkRecommender(G=network, **PIXIE_PARAMS)
attacker = get_attacker(network, recommender, target_item)
# before = recommender.calculate_hit_ratio(target_item, RECOMMENDATIONS, verbose=False)
before = 0 # this is basically always true
try:
attacker.attack()
except:
traceback.print_exc()
after = recommender.calculate_hit_ratio(target_item, RECOMMENDATIONS, verbose=False)
return (before, after)
results = []
network = Movielens100kLoader().load()
# target_items = network.get_random_items(ITERATIONS)
# print target_items
target_items = [2352, 380, 1722, 2514, 2384]
for i in range(ITERATIONS):
print i
(before, after) = evaluate_attacker(target_items[i])
results.append((before, after))
print results
arr = np.array([a[1] for a in results])
print "mean: {}, median {}, std dev {}".format(np.mean(arr), np.median(arr), np.std(arr))

# --- File: Examples/connection_test.py (repo: Wenlin88/MonoDAQ-U-X, license: MIT) ---
from isotel.idm import gateway, monodaq
import pickle
from pathlib import Path
# For quick access, the remote IP address, username and password are stored in the little.secrets pickle. Note: never use a downloaded pickle, since unpickling untrusted data can execute arbitrary code!
little_secrets = pickle.load(open(str(Path.home()) + "/little.secrets", "rb" ) )
remote_ip = little_secrets["remote_ip"]
user = little_secrets["user"]
passwd = little_secrets["password"]
# connect to some remote host
mdu = monodaq.MonoDAQ_U( gateway.Group('http://' + remote_ip + ':33000',
username=user,
password=passwd) )
# print channel setup of a MonoDAQ device
mdu.print_setup()

# --- File: Ex031 Custo da Viagem.py (repo: JeanPauloGarcia/Python-Exercicios, license: MIT) ---
n = float(input('Trip distance: '))
'''if n > 200:
    n1 = n*0.45
else:
    n1 = n*0.5'''
# mode 2: the same logic as a single conditional expression
n1 = n*0.5 if n <= 200 else n*0.45
print('Your {}km trip will cost {} reais'.format(n, n1))

# --- File: BinlogCapturer/src/binlogCapturer.py (repo: lvxinup/BaikalDB-Migrate, license: Apache-2.0) ---
# Copyright (c) 2020-present ly.com, Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/usr/bin/python
#-*- coding:utf8 -*-
import threading
import Queue
import ConfigParser
import time
import os
import sys
import signal
import multiprocessing
from binlogReader import BinlogReader
from binlogParser import BinlogParser
from binlogProcessor import BinlogProcessor
from binlogFilter import BinlogFilter
#from binlogSender import BinlogSender
#from fileSender import BinlogSender
from schemaManager import SchemaManager
from senderManager import SenderManager
# from server import ServerHandler
from BaseHTTPServer import HTTPServer,BaseHTTPRequestHandler
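# Data flow, as wired in initQueues()/initWorks() below:
#   BinlogReader -> binlogReaderQueue -> BinlogParser -> binlogParserQueue
#   -> process/filter -> binlogSenderQueue -> SenderManager,
# with sender progress fed back through binlogCheckPointQueue and flushed to
# the on-disk checkpoint file.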
class BinlogCapturer:
def __init__(self, configPath = './conf/config.cfg'):
self.configPath = configPath
self.config = ConfigParser.ConfigParser()
self.config.read(self.configPath)
self.loadBinlogReaderTimestamps()
self.initSchemaManager()
self.initQueues()
self.initWorks()
self._stop = False
def initSchemaManager(self):
self.metaServerList = self.config.get('global','metaServer')
self.schemaManager = SchemaManager(self.config.get('global','binlogTableID'), self.metaServerList)
self.schemaManager.init()
self.schemaManager.setSchemaUpdateFunc(self.tableSchemaUpdateFunc)
self.schemaManager.setRegionUpdateFunc(self.regionInfoUpdateFunc)
def loadBinlogReaderTimestamps(self):
self.binlogCheckPointPath = self.config.get('global','binlogCheckPointPath')
lastReadTs = 0
for line in open(self.binlogCheckPointPath):
ts = int(line)
if ts > lastReadTs:
lastReadTs = ts
self.binlogReadCheckPoint = lastReadTs
self.lastUpdateCheckPoint = time.time()
def initQueues(self):
self.binlogReaderQueue = multiprocessing.Queue(2000)
self.binlogParserQueue = multiprocessing.Queue(2000)
self.binlogSenderQueue = multiprocessing.Queue(2000)
self.binlogCheckPointQueue = multiprocessing.Queue(2000)
def initWorks(self):
self.binlogReaderDict = {}
self.regionID = self.config.getint('global','binlogRegionID')
self.regionInfo = self.schemaManager.getRegionInfoByRegionID(self.regionID)
self.binlogReader = BinlogReader(self.regionInfo, self.binlogReaderQueue, self.binlogReadCheckPoint, self.schemaManager)
self.binlogParser = BinlogParser(self.configPath, self.binlogReaderQueue, self.binlogParserQueue)
self.binlogProcessor = BinlogProcessor(self.schemaManager)
self.binlogFilter = BinlogFilter(self.config.get('global','filterRuleDict'))
self.binlogSender = SenderManager(self.config.get('global','senderConfig'), self.binlogSenderQueue, self.binlogCheckPointQueue)
def senderCallBack(self, ts):
self.binlogReadCheckPoint = ts
tmpFile = self.binlogCheckPointPath + '.tmp'
with open(tmpFile, 'w') as f:
f.write(str(ts) + '\n')
os.rename(tmpFile, self.binlogCheckPointPath)
readerSize = self.binlogReaderQueue.qsize()
senderSize = self.binlogSenderQueue.qsize()
f = open('queueSize.txt','w')
f.write(str(readerSize) + '\t' + str(senderSize))
f.close()
def processBinlogsThreadFunc(self):
while not self._stop:
self.binlogProcess()
def binlogProcess(self):
while not self._stop:
item = self.binlogParserQueue.get()
self.binlogProcessor.process(item)
if self.binlogFilter.filter(item):
continue
self.binlogSenderQueue.put(item)
def startProcessBinlogsThread(self):
self.processBinlogThread = threading.Thread(target = self.processBinlogsThreadFunc)
self.processBinlogThread.setDaemon(True)
self.processBinlogThread.start()
    def tableSchemaUpdateFunc(self, insertDict, updateDict, deleteDict):
        # Called when a table schema is updated.
        # NOTE: the early return below disables the update logic, so the
        # loops that follow are currently dead code.
        return
        for tableID, schema in insertDict.items():
            self.binlogParser.updateTableSchema(schema)
        for tableID, schema in updateDict.items():
            self.binlogParser.updateTableSchema(schema)
def regionInfoUpdateFunc(self, insertDict, updateDict, deleteDict):
        # Called when a binlog region changes.
        # If the binlog store leader switches, the binlog store's storeClient
        # must be updated.
for rid, regionInfo in updateDict.items():
if rid != self.regionID:
continue
self.binlogReader.updateRegionInfo(regionInfo)
def updateCheckpointThreadFunc(self):
while True:
checkpoint = self.binlogCheckPointQueue.get()
if checkpoint == 0:
continue
self.binlogReadCheckPoint = checkpoint
tmpFile = self.binlogCheckPointPath + '.tmp'
with open(tmpFile, 'w') as f:
f.write(str(self.binlogReadCheckPoint) + '\n')
os.rename(tmpFile, self.binlogCheckPointPath)
# readerSize = self.binlogReaderQueue.qsize()
# parserSize = self.binlogParserQueue.qsize()
# senderSize = self.binlogSenderQueue.qsize()
# print "%d\t%d\t%d\t%d" % (readerSize, parserSize, senderSize, self.binlogReadCheckPoint)
# f = open('queueSize.txt','w')
# f.write(str(readerSize) + '\t' + str(senderSize))
# f.close()
def startUpdateCheckpointThread(self):
self.updateCheckpointThread = threading.Thread(target = self.updateCheckpointThreadFunc)
self.updateCheckpointThread.start()
def start(self):
self.schemaManager.start()
self.binlogReader.start()
self.binlogParser.start()
self.binlogSender.start()
self.httpThread = threading.Thread(target=self.httpFunc)
self.httpThread.setDaemon(True)
self.httpThread.start()
self.startProcessBinlogsThread()
self.startUpdateCheckpointThread()
self.threadMonitor()
def stop(self):
self.binlogReader.stop()
self.binlogParser.terminate()
self.binlogSender.terminate()
pid = os.getpid()
os.kill(pid, signal.SIGKILL)
def threadMonitor(self):
self.threadList = []
self.threadList.append(self.processBinlogThread)
self.threadList.append(self.binlogReader)
while True:
if not self.binlogReader.isAlive():
self.stop()
if not self.binlogParser.isAlive():
self.stop()
if not self.binlogSender.isAlive():
self.stop()
time.sleep(1)
def httpFunc(self):
self.port = int(self.config.get('global','httpPort'))
print self.port
self.http_server = HTTPServer(('0.0.0.0',self.port),ServerHandler)
self.http_server.serve_forever()
capturer = BinlogCapturer()
class ServerHandler(BaseHTTPRequestHandler):
def do_GET(self):
print "get"
self.send_response(200)
self.send_header('Content-type', 'application/json')
self.end_headers()
wor = '[status]\n\n'
wor += 'binlogReaderQueue.size:' + str(capturer.binlogReaderQueue.qsize()) + '\n'
wor += 'binlogParserQueue.size:' + str(capturer.binlogParserQueue.qsize()) + '\n'
wor += 'binlogSenderQueue.size:' + str(capturer.binlogSenderQueue.qsize()) + '\n'
wor += 'binlogReadCheckPoint:' + str(capturer.binlogReadCheckPoint) + '\n'
self.wfile.write(wor)
def signal_kill_handler(signum, handler):
print "kill:",signum
capturer.stop()
def register_signal_handler():
signal.signal(signal.SIGINT, signal_kill_handler)
signal.signal(signal.SIGCHLD, signal_kill_handler)
#signal.signal(signal.SIGKILL, signal_kill_handler)
if __name__ == '__main__':
register_signal_handler()
print os.getpid()
capturer.start()

# --- File: settings/local-dist.py (repo: brianjgeiger/osf-pigeon, license: MIT) ---
# New tokens can be found at https://archive.org/account/s3.php
IA_ACCESS_KEY = 'change to valid token'
IA_SECRET_KEY = 'change to valid token'
DOI_FORMAT = '10.70102/fk2osf.io/{guid}'
OSF_BEARER_TOKEN = ''
DATACITE_USERNAME = None
DATACITE_PASSWORD = None
DATACITE_URL = None
DATACITE_PREFIX = '10.70102' # Datacite's test DOI prefix -- update in production

# --- File: basic_ops.py (repo: dingmingxin/crossword_generator, license: BSD-3-Clause) ---
import random
import time
def generate_random_possibility(words, dim):
""" This function returns a randomly-generated possibility, instead of generating all
possible ones.
"""
# Generate possibility
possibility = {"word": words[random.randint(0, len(words)-1)],
"location": [random.randint(0, dim[0]-1), random.randint(0, dim[1]-1)],
"D": "S" if random.random() > 0.5 else "E"}
# Return it
return possibility
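# A returned possibility is a dict shaped like (illustrative values):
#   {"word": "hello", "location": [3, 7], "D": "E"}
# where "E" means the word runs east (left-to-right) and "S" south (downward).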
def is_within_bounds(word_len, line, column, direction, grid_width, grid_height):
""" Returns whether the given word is withing the bounds of the grid.
"""
return (direction == "E" and column + word_len <= grid_width) or (direction == "S" and line + word_len <= grid_height)
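# Example (illustrative): is_within_bounds(3, 0, 7, "E", 10, 10) is True
# (the word occupies columns 7-9), while is_within_bounds(4, 0, 7, "E", 10, 10)
# is False (it would need column 10).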
def collides_with_existing_words(word, line, column, direction, grid):
""" Returns whether the given word collides with an existing one.
"""
for k, letter in enumerate(list(word)):
if direction == "E":
# Collisions
if grid[line][column+k] != 0 and grid[line][column+k] != letter:
return True
if direction == "S":
# Collisions
if grid[line+k][column] != 0 and grid[line+k][column] != letter:
return True
return False
def ends_are_isolated(word, line, column, direction, grid):
""" Returns whether the given word is isolated (blank before start and after end).
"""
if direction == "E":
# If the preceding space isn't empty
if not is_cell_free(line, column-1, grid):
return False
        # If the succeeding space isn't empty
if not is_cell_free(line, column+len(word), grid):
return False
if direction == "S":
# If the preceding space isn't empty
if not is_cell_free(line-1, column, grid):
return False
        # If the succeeding space isn't empty
if not is_cell_free(line+len(word), column, grid):
return False
return True
def find_new_words(word, line, column, direction, grid, words):
""" Given a new potential word, looks for new words that might have been created by adding it to the grid.
Returns None if new words are (geometrically) created but are not valid.
"""
new_words = []
for k, letter in enumerate(list(word)):
if direction == "E":
# If the space was originally blank and there are adjacent letters
if grid[line][column+k] == 0 and (line > 0 and grid[line-1][column+k] != 0 or line < len(grid)-1 and grid[line+1][column+k]):
# Then we have to extract this new word
poss_word = [letter]
l = 1
                while line+l < len(grid) and grid[line+l][column+k] != 0:  # bound by the number of lines
poss_word.append(grid[line+l][column+k])
l+=1
l = 1
                while line-l >= 0 and grid[line-l][column+k] != 0:  # include line 0
poss_word.insert(0, grid[line-l][column+k])
l+=1
poss_word = ''.join(poss_word)
# And check if it exists in the list
if poss_word not in words:
return None
new_words.append({"D": "S", "word":poss_word, "location": [line-l+1, column+k]})
if direction == "S":
# If the space was originally blank and there are adjacent letter
if grid[line+k][column] == 0 and (column > 0 and grid[line+k][column-1] != 0 or column < len(grid[0])-1 and grid[line+k][column+1]):
# Then we have to extract this new word
poss_word = [letter]
l = 1
                while column+l < len(grid[0]) and grid[line+k][column+l] != 0:  # bound by the number of columns
poss_word.append(grid[line+k][column+l])
l+=1
l = 1
                while column-l >= 0 and grid[line+k][column-l] != 0:  # include column 0
poss_word.insert(0, grid[line+k][column-l])
l+=1
poss_word = ''.join(poss_word)
# And check if it exists in the list
if poss_word not in words:
return None
new_words.append({"D": "E", "word":poss_word, "location": [line+k,column-l+1]})
return new_words
def is_valid(possibility, grid, words):
""" This function determines whether a possibility is still valid in the
given grid. (see generate_grid)
A possibility is deemed invalid if:
-> it extends out of bounds
-> it collides with any word that already exists, i.e. if any of its
elements does not match the words already in the grid;
-> if the cell that precedes and follows it in its direction is not empty.
The function also analyses how the word interacts with previous adjacent
    words, and either invalidates the possibility or returns a list with the new
    words, if applicable.
"""
# Import possibility to local vars, for clarity
i = possibility["location"][0]
j = possibility["location"][1]
word = possibility["word"]
D = possibility["D"]
# Boundaries
if not is_within_bounds(len(word), i, j, D, len(grid[0]), len(grid)):
return False
# Collisions
if collides_with_existing_words(word, i, j, D, grid):
return False
# Start and End
if not ends_are_isolated(word, i, j, D, grid):
return False
# If we can't find any issues, it must be okay!
return True
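# Illustrative check on an empty grid (a sketch, not exhaustive; create_empty_grid
# is defined further down in this file):
#   g = create_empty_grid([5, 5])
#   is_valid({"word": "quiz", "location": [0, 0], "D": "E"}, g, ["quiz"])      # True
#   is_valid({"word": "python", "location": [0, 0], "D": "E"}, g, ["python"])  # False (out of bounds)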
def score_candidate(candidate_word, new_words):
return len(candidate_word) + 10*len(new_words)
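# Scoring favors longer words and strongly rewards intersections that create
# new words, e.g. score_candidate("hello", [{"word": "hi"}]) == 5 + 10*1 == 15.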
def add_word_to_grid(possibility, grid):
""" Adds a possibility to the given grid, which is modified in-place.
(see generate_grid)
"""
# Import possibility to local vars, for clarity
i = possibility["location"][0]
j = possibility["location"][1]
word = possibility["word"]
# Word is left-to-right
if possibility["D"] == "E":
grid[i][j:len(list(word))+j] = list(word)
# Word is top-to-bottom
# (I can't seem to be able to use the slicing as above)
if possibility["D"] == "S":
for index, a in enumerate(list(word)):
grid[i+index][j] = a
def select_candidate(candidates, scores):
""" Select the candidate with the maximum score
"""
max_score = max(scores)
idx = scores.index(max_score)
return candidates[idx], scores[idx]
def compute_occupancy(grid):
return 1 - (sum(x.count(0) for x in grid) / (len(grid[0])*len(grid)))
def create_empty_grid(dimensions):
""" Creates an empty grid with the given dimensions.
dimensions[0] -> lines
dimensions[1] -> columns
"""
return [x[:] for x in [[0]*dimensions[1]]*dimensions[0]]
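# e.g. create_empty_grid([2, 3]) -> [[0, 0, 0], [0, 0, 0]] (2 lines, 3 columns).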
def generate_valid_candidates(grid, words, dim, timeout):
# Generate new candidates
candidates = []
scores = []
new_words = []
tries = 0
start_time = time.time()
# Generate a new candidate
while not candidates and time.time() < start_time + timeout:
# Increment search "time"
tries += 1
# Get new possibility
new = generate_random_possibility(words, dim)
# Evaluate validity
if not is_valid(new, grid, words):
continue
# Find new words that this possibility generates
new_words = find_new_words(new["word"], new["location"][0], new["location"][1], new["D"], grid, words)
# If new_words is None, then the possibility is invalid
if new_words == None:
new_words = []
continue
# Calculate this possibility's score
score = score_candidate(new["word"], new_words)
# Add to list of candidates
candidates.append(new)
scores.append(score)
return candidates, scores, new_words
def is_cell_free(line, col, grid):
""" Checks whether a cell is free.
Does not throw if the indices are out of bounds. These cases return as free.
"""
# Negative indices are "legal", but we treat them as out of bounds.
if line < 0 or col < 0:
return True
try:
return grid[line][col] == 0
except IndexError:
return True
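# Note: both negative and too-large indices count as "free" here, e.g.
# is_cell_free(-1, 0, grid) and is_cell_free(len(grid), 0, grid) are both True.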
def is_isolated(possibility, grid):
""" Determines whether a given possibility is completely isolated in the given grid.
It is assumed that the possibility is valid, of course.
"""
# Import possibility to local vars, for clarity
line = possibility["location"][0]
column = possibility["location"][1]
word = possibility["word"]
direction = possibility["D"]
# The word cannot be isolated if there is something at its ends
if not ends_are_isolated(word, line, column, direction, grid):
return False
# Look at the cells that surround the word
for i in range(len(word)):
if direction == "E":
if not is_cell_free(line-1, column+i, grid) or not is_cell_free(line+1, column+i, grid):
return False
if direction == "S":
if not is_cell_free(line+i, column-1, grid) or not is_cell_free(line+i, column+1, grid):
return False
# If nothing was found, then the word is isolated
return True
def basic_grid_fill(grid, occ_goal, timeout, dim, words):
""" Actually finds valid possibilities, scores them and adds them to the grid.
Algorithm:
    This function operates by taking the words it receives and randomly generating possibilities
until a valid one is found. It is then added to the grid.
This is done until the grid is above a given completion level.
"""
start_time = time.time()
occupancy = 0
added_words = []
while occupancy < occ_goal and time.time() - start_time < timeout:
# Generate some candidates
# This is limited to 1/10 of the total time we can use.
candidates, scores, new_words = generate_valid_candidates(grid, words, dim, timeout/10)
# If there are no candidates, we move to the next iteration. This ensures that we can actually respect timeouts.
if not candidates:
continue
# Select best candidate
new, new_score = select_candidate(candidates, scores)
# Add word to grid and to the list of added words
add_word_to_grid(new, grid)
added_words.append(new)
# Add new words to the words list
for word in new_words:
added_words.append(word)
# Remove words from list so we don't repeat ourselves
words.remove(new["word"])
for word in new_words:
words.remove(word["word"])
# Update occupancy
occupancy = compute_occupancy(grid)
print("Word \"{}\" added. Occupancy: {:2.3f}. Score: {}.".format(new["word"], occupancy, new_score))
if new_words:
print("This also created the words:", new_words)
    return added_words

# --- File: utils/tail_server_events.py (repo: dshean/sliderule-python, license: BSD-3-Clause) ---
#
# Connects to the SlideRule server at the provided URL and prints log messages
# generated on the server to the local terminal
#
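#
# Example invocation (argument values are illustrative, not defaults of any
# particular deployment):
#
# python tail_server_events.py 127.0.0.1 60 LOG INFO
#
# which tails INFO-level LOG events from 127.0.0.1 for 60 seconds.
#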
import sys
import logging
from sliderule import sliderule
from sliderule import icesat2
###############################################################################
# GLOBAL CODE
###############################################################################
# configure logging
logging.basicConfig(level=logging.INFO)
###############################################################################
# MAIN
###############################################################################
if __name__ == '__main__':
# Override server URL from command line
url = ["127.0.0.1"]
if len(sys.argv) > 1:
url = sys.argv[1]
# Override duration to maintain connection
duration = 30 # seconds
if len(sys.argv) > 2:
duration = int(sys.argv[2])
# Override event type
event_type = "LOG"
if len(sys.argv) > 3:
event_type = sys.argv[3]
# Override event level
event_level = "INFO"
if len(sys.argv) > 4:
event_level = sys.argv[4]
# Bypass service discovery if url supplied
if len(sys.argv) > 5:
if sys.argv[5] == "bypass":
url = [url]
# Initialize ICESat2/SlideRule Package
icesat2.init(url, True)
# Build Logging Request
rqst = {
"type": event_type,
"level" : event_level,
"duration": duration
}
# Retrieve logs
rsps = sliderule.source("event", rqst, stream=True)
| 25.213115 | 79 | 0.496749 | 159 | 1,538 | 4.716981 | 0.408805 | 0.093333 | 0.053333 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017456 | 0.217815 | 1,538 | 60 | 80 | 25.633333 | 0.605985 | 0.249675 | 0 | 0 | 1 | 0 | 0.063337 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.035714 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da1a6848821060521f2d756bf5d84bdf6c4c26e4 | 1,160 | py | Python | hw/hw02/tests/q5.py | surajrampure/data-94-sp21 | 074543103579c28d796c681f78f3c38449825328 | [
"BSD-3-Clause"
] | 1 | 2020-11-21T09:42:52.000Z | 2020-11-21T09:42:52.000Z | hw/hw02/tests/q5.py | surajrampure/data-94-sp21 | 074543103579c28d796c681f78f3c38449825328 | [
"BSD-3-Clause"
] | null | null | null | hw/hw02/tests/q5.py | surajrampure/data-94-sp21 | 074543103579c28d796c681f78f3c38449825328 | [
"BSD-3-Clause"
] | null | null | null | test = { 'name': 'q5',
'points': 3,
'suites': [ { 'cases': [ {'code': ">>> big_tippers(['suraj', 15, 'isaac', 9, 'angela', 19]) == ['suraj', 'angela']\nTrue", 'hidden': False, 'locked': False},
{ 'code': ">>> big_tippers(['suraj', 15, 'isaac', 25, 'angela', 19, 'anna', 21, 'aayush', 14, 'sukrit', 8]) == ['isaac', 'angela', 'anna']\nTrue",
'hidden': False,
'locked': False},
{ 'code': '>>> # If you fail this, note that we want the names of those who tipped MORE than the average,;\n'
'>>> # not equal to or more than;\n'
">>> big_tippers(['a', 2, 'b', 2, 'c', 2]) == []\n"
'True',
'hidden': False,
'locked': False}],
'scored': True,
'setup': '',
'teardown': '',
'type': 'doctest'}]}
| 68.235294 | 183 | 0.323276 | 93 | 1,160 | 4 | 0.634409 | 0.080645 | 0.137097 | 0.177419 | 0.295699 | 0.295699 | 0 | 0 | 0 | 0 | 0 | 0.036021 | 0.497414 | 1,160 | 16 | 184 | 72.5 | 0.602058 | 0 | 0 | 0.125 | 0 | 0.1875 | 0.433621 | 0.036207 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da221c0a85253196d067a1ec0dca319c11cf5a14 | 609 | py | Python | tgbot/models.py | psevdognom/gostbot | f5a142c0657285077cee58151590163a9e7f2527 | [
"Apache-2.0"
] | 1 | 2020-11-10T10:30:33.000Z | 2020-11-10T10:30:33.000Z | tgbot/models.py | psevdognom/gostbot | f5a142c0657285077cee58151590163a9e7f2527 | [
"Apache-2.0"
] | 1 | 2020-07-30T17:38:30.000Z | 2020-07-30T19:36:42.000Z | tgbot/models.py | psevdognom/gostbot | f5a142c0657285077cee58151590163a9e7f2527 | [
"Apache-2.0"
] | null | null | null | from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
engine = create_engine('sqlite:///gosts.db')
Session = sessionmaker(bind=engine)
session = Session()
Base = declarative_base()
# In theory, all of this should be moved into the __init__ file
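# A minimal bootstrap sketch (not part of the original module; the sample
# row is made up):
#
# Base.metadata.create_all(engine) # create the 'gosts' table if missing
# session.add(Gost(name='GOST 1', description='example'))
# session.commit()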
class Gost(Base):
__tablename__ = 'gosts'
id = Column(Integer, primary_key=True)
name = Column(String)
description = Column(String)
def __str__(self):
return self.name | 29 | 55 | 0.766831 | 77 | 609 | 5.831169 | 0.519481 | 0.155902 | 0.075724 | 0.124722 | 0.280624 | 0.280624 | 0.280624 | 0.280624 | 0.280624 | 0 | 0 | 0 | 0.152709 | 609 | 21 | 56 | 29 | 0.870155 | 0.073892 | 0 | 0.125 | 0 | 0 | 0.04078 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.3125 | 0.0625 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
da2549884ef76e4e06e5ec00ceef487768236a9e | 625 | py | Python | TBert.py | ggnicolau/Projeto-12--TBert-SP-City-Hall | 467e55b75e3e14eb6f8a2feadffc50a98f9e7a50 | [
"MIT"
] | null | null | null | TBert.py | ggnicolau/Projeto-12--TBert-SP-City-Hall | 467e55b75e3e14eb6f8a2feadffc50a98f9e7a50 | [
"MIT"
] | null | null | null | TBert.py | ggnicolau/Projeto-12--TBert-SP-City-Hall | 467e55b75e3e14eb6f8a2feadffc50a98f9e7a50 | [
"MIT"
] | null | null | null | #%%Links
# BERT with Topic Model
# https://www.aclweb.org/anthology/2020.acl-main.630.pdf
# https://towardsdatascience.com/topic-modeling-with-bert-779f7db187e6
# https://www.kaggle.com/dskswu/topic-modeling-bert-lda
# https://blog.insightdatascience.com/contextual-topic-identification-4291d256a032
# https://datascience.stackexchange.com/questions/53270/bert-it-is-possible-to-use-it-for-topic-modeling
# https://medium.com/atoti/topic-modeling-on-twitter-using-sentence-bert-8acdad958eb1
# https://medium.com/analytics-vidhya/bert-for-topic-modeling-bert-vs-lda-8076e72c602b
# https://github.com/MilaNLProc/contextualized-topic-models
| 62.5 | 102 | 0.8208 | 88 | 625 | 5.829545 | 0.590909 | 0.126706 | 0.066277 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072368 | 0.0272 | 625 | 10 | 103 | 62.5 | 0.771382 | 0.0464 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da28c193ef77163ddf391ccdf35c16001c14a2a9 | 4,577 | py | Python | Lib/test/test_compiler/test_static/readonly.py | mananpal1997/cinder | a8804cc6e3a5861463ff959abcd09ad60a0763e5 | [
"CNRI-Python-GPL-Compatible"
] | null | null | null | Lib/test/test_compiler/test_static/readonly.py | mananpal1997/cinder | a8804cc6e3a5861463ff959abcd09ad60a0763e5 | [
"CNRI-Python-GPL-Compatible"
] | null | null | null | Lib/test/test_compiler/test_static/readonly.py | mananpal1997/cinder | a8804cc6e3a5861463ff959abcd09ad60a0763e5 | [
"CNRI-Python-GPL-Compatible"
] | null | null | null | from compiler.errors import TypedSyntaxError
from unittest import skip
from .common import StaticTestBase
class ReadonlyTests(StaticTestBase):
def test_readonly_assign_0(self):
codestr = """
from typing import List
def foo():
x: List[int] = readonly([])
"""
self.type_error(
codestr,
"type mismatch: Readonly\\[list\\] cannot be assigned to list",
"readonly([])",
)
def test_readonly_assign_1(self):
codestr = """
from typing import List
def foo():
x: Readonly[List[int]] = []
y = x
z: List[int] = x
"""
self.type_error(
codestr, "type mismatch: Readonly\\[list\\] cannot be assigned to list", "x"
)
def test_readonly_parameter_0(self):
codestr = """
from typing import List
def f(l: List[int]) -> None:
pass
def g():
l = readonly([])
f(l)
"""
self.type_error(
codestr,
r"type mismatch: Readonly\[list\] received for positional arg 'l'",
"l",
)
def test_readonly_parameter_1(self):
codestr = """
from typing import List
def f(l: List[int], x: Readonly[List[int]]) -> None:
pass
def g():
l = readonly([])
x = []
f(l, x)
"""
self.type_error(
codestr,
r"type mismatch: Readonly\[list\] received for positional arg 'l'",
"l",
)
def test_readonly_parameter_2(self):
codestr = """
from __future__ import annotations
from typing import List
def f(l: List[int], x: Readonly[List[int]]) -> None:
pass
def g():
l = readonly([])
x = []
f(x, l)
"""
with self.in_module(codestr) as mod:
mod.g()
def test_readonly_return_1(self):
codestr = """
from typing import List
def f() -> int:
return 1 + 1
def g():
x: Readonly[int] = f()
"""
with self.in_module(codestr) as mod:
mod.g()
def test_readonly_return_2(self):
codestr = """
from typing import List
def f() -> Readonly[int]:
return readonly(1)
def g():
x: int = f()
"""
self.type_error(
codestr, r"type mismatch: Readonly\[int\] cannot be assigned to int", "f()"
)
def test_readonly_nonexact_int_assign(self):
codestr = """
class C(int):
pass
def foo():
x: int = C(1)
y: int = readonly(x)
"""
self.type_error(
codestr,
r"type mismatch: Readonly\[<module>.C\] cannot be assigned to int",
"readonly(x)",
)
def test_readonly_return_3(self):
codestr = """
def g() -> int:
return 1
def f() -> int:
return readonly(g())
"""
self.type_error(
codestr,
r"return type must be int, not Readonly\[int\]",
"return readonly(g())",
)
def test_readonly_override_1(self):
codestr = """
from __future__ import annotations
class C:
def f(self, x: int) -> None:
pass
class D(C):
def f(self, x: Readonly[int]) -> None:
pass
"""
with self.in_module(codestr) as mod:
mod.D().f(1)
def test_readonly_override_2(self):
codestr = """
class C:
def f(self, x: Readonly[int]) -> None:
pass
class D(C):
def f(self, x: int) -> None:
pass
"""
self.type_error(
codestr,
"Parameter x of type `int` is not a supertype "
r"of the overridden parameter `Readonly\[int\]`",
"def f(self, x: int)",
)
def test_readonly_override_3(self):
codestr = """
class C:
def f(self, x: int) -> int:
return 1
class D(C):
def f(self, x: int) -> Readonly[int]:
return 1
"""
self.type_error(
codestr,
r"Returned type `Readonly\[int\]` is not a "
"subtype of the overridden return `int`",
"def f(self, x: int) -> Readonly[int]",
)
| 26.005682 | 88 | 0.464059 | 502 | 4,577 | 4.11753 | 0.135458 | 0.027092 | 0.087083 | 0.087083 | 0.63135 | 0.590227 | 0.544751 | 0.524432 | 0.421384 | 0.328012 | 0 | 0.007113 | 0.41643 | 4,577 | 175 | 89 | 26.154286 | 0.766754 | 0 | 0 | 0.615385 | 0 | 0 | 0.580948 | 0.004807 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.051282 | 0.076923 | 0 | 0.198718 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
da2ea93da9afa176a477a8495db2615a90754e3c | 301 | py | Python | Extras/002ex.py | lucasbraga10/ListaDeExerciciosPython | 496a6c90e33e60fdf13e841c78a22707aa7139a8 | [
"MIT"
] | null | null | null | Extras/002ex.py | lucasbraga10/ListaDeExerciciosPython | 496a6c90e33e60fdf13e841c78a22707aa7139a8 | [
"MIT"
] | null | null | null | Extras/002ex.py | lucasbraga10/ListaDeExerciciosPython | 496a6c90e33e60fdf13e841c78a22707aa7139a8 | [
"MIT"
] | null | null | null | hora = input('Enter the current hour: ')
try:
hora = int(hora)
if 0 <= hora <= 11:
print('Good Morning')
elif 12 <= hora <= 17:
print('Good Afternoon')
elif 18 <= hora <= 23:
print('Good Evening')
else:
print('Invalid Value')
except ValueError:
print('Invalid Value')
| 18.8125 | 37 | 0.51495 | 39 | 301 | 3.974359 | 0.641026 | 0.103226 | 0.232258 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054455 | 0.328904 | 301 | 15 | 38 | 20.066667 | 0.712871 | 0 | 0 | 0.153846 | 0 | 0 | 0.245847 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.384615 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da2fb0e4b9c5cd78249a882ade9203be37d076f3 | 4,344 | py | Python | run_2c/figures/plot_ivs.py | braghiere/3D_FSPM | 5ba06b6e65c3299776bb32e552d2788564a7306d | [
"MIT"
] | null | null | null | run_2c/figures/plot_ivs.py | braghiere/3D_FSPM | 5ba06b6e65c3299776bb32e552d2788564a7306d | [
"MIT"
] | null | null | null | run_2c/figures/plot_ivs.py | braghiere/3D_FSPM | 5ba06b6e65c3299776bb32e552d2788564a7306d | [
"MIT"
] | 1 | 2022-02-10T14:34:33.000Z | 2022-02-10T14:34:33.000Z |
import numpy as np
import matplotlib.pyplot as plt
import os, sys
from scipy.interpolate import interp2d
from pylab import *
filelist=[]
#error = []
biomrsd1= []
biomrsd2= []
dirname1 = "/home/renato/groimp_efficient/run_1/"
dirname2 = "/home/renato/groimp_efficient/run_1c/jules/"
day_list = [94]
for i in range(1,2):
#for i in day_list:
filelist.append("feddes.ivs")
#print filelist
for fname in filelist:
counter = filelist.index(fname)
#counter = 93
f1 = open(os.path.join(dirname1,fname),"r")
f2 = open(os.path.join(dirname2,fname),"r")
#print f
data1 = f1.readlines()
data2 = f2.readlines()
x1 = []
y1 = []
z1 = []
h_w1 = []
p_w1 = []
s_w1 = []
theta_w1 = []
transp1 = []
evap1 = []
x2 = []
y2 = []
z2 = []
h_w2 = []
p_w2 = []
s_w2 = []
theta_w2 = []
transp2 = []
evap2 = []
for i in range(3,len(data1)):
#print data[i]
line = data1[i].strip()
columns = data1[i].split()
x1.append(str(columns[0]))
y1.append(str(columns[1]))
z1.append(str(columns[2]))
h_w1.append(str(columns[3]))
p_w1.append(str(columns[4]))
s_w1.append(str(columns[5]))
theta_w1.append(str(columns[6]))
transp1.append(str(columns[7]))
evap1.append(str(columns[8]))
for i in range(3,len(data2)):
#print data[i]
line = data2[i].strip()
columns = data2[i].split()
x2.append(str(columns[0]))
y2.append(str(columns[1]))
z2.append(str(columns[2]))
h_w2.append(str(columns[3]))
p_w2.append(str(columns[4]))
s_w2.append(str(columns[5]))
theta_w2.append(str(columns[6]))
transp2.append(str(columns[7]))
evap2.append(str(columns[8]))
#print x,z,RSD
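# Build two mirrored horizontal axes so both model runs share one panel:
# the first dataset is drawn on x in [0, 2], the second on x in [-2, 0].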
ndata = 50
x1 = np.linspace(0., 2., ndata)
x2 = np.linspace(-2., 0., ndata)
z = np.linspace(0., 2., ndata)
transp1 = np.reshape(theta_w1, (-1, ndata))
transp2 = np.reshape(theta_w2, (-1, ndata))
X1, Y = meshgrid(x1, z)
X2, Y = meshgrid(x2, z)
# simple fast plot
#plt.pcolor(X, Y, RSD, vmin=0, vmax=20)
#plt.colorbar()
#plt.savefig('images/RSD_%s.png' %counter)
#plt.close("all")
output_array = [1,2,3,4,5,6,7,8,9,10,15,20,30,40,50,60]
# scipy interp. cubic
f1 = interp2d(x1, z, transp1, kind='cubic')
f2 = interp2d(x2, z, transp2, kind='cubic')
xnew1 = np.arange(0, 2., .01)
xnew2 = np.arange(-2., 0., .01)
ynew = np.arange(0, 2., .01)
data1 = f1(xnew1,ynew)
data2 = f2(xnew2,ynew)
Xn1, Yn = np.meshgrid(xnew1, ynew)
Xn2, Yn = np.meshgrid(xnew2, ynew)
#cs = plt.pcolormesh(Xn1, Yn, data1, cmap='jet', vmin=min(data1.min(),data2.min()), vmax=max(data1.max(),data2.max()))
#cs = plt.pcolormesh(Xn2, Yn, data2, cmap='jet', vmin=min(data1.min(),data2.min()), vmax=max(data1.max(),data2.max()))
cs = plt.pcolormesh(Xn1, Yn, data1, cmap='jet', vmin=0.15, vmax=0.25)
cs = plt.pcolormesh(Xn2, Yn, data2, cmap='jet', vmin=0.15, vmax=0.25)
print(min(data1.min(), data2.min()), max(data1.max(), data2.max()))
cbar = plt.colorbar()
cbar.ax.set_ylabel('Soil moisture availability', rotation=270, labelpad=20)
#cbar.ax.set_ylabel('Root water uptake (m$^{-2}$m$^{-3}$)', rotation=270, labelpad=20)
#plt.xlabel("x (m)", labelpad=20)
plt.ylabel("z (m)", labelpad=20)
xname = [-1.0,1.0]
labels = ['hydraulic head = -100','hydraulic head = -5']
plt.xticks(xname,labels)
plt.ylim(0.,2.)
plt.xlim(-2.,2.)
#plt.title('DAY = %d'%output_array[counter])
plt.title('DAY = %d' %(counter + 1))
plt.tight_layout()
#plt.savefig('/home/renato/groimp_efficient/run_1c/paper_fig/transp_%02d.png' %(counter + 1),dpi = 300)
plt.savefig('/home/renato/groimp_efficient/run_1c/paper_fig/feddes_ivs_minus_100.png')
#plt.show()
print('Figure transp_%02d.png saved successfully!' % (counter + 1))
plt.close("all")
#error.append(np.sum(RSD))
biomrsd1.append(np.sum(np.array(transp1).astype(np.float)))
biomrsd2.append(np.sum(np.array(transp2).astype(np.float)))
sys.exit()
biomrsd2 = np.array(biomrsd2)/(100*100)
biomrsd1 = np.array(biomrsd1)/(100*100)
plt.plot(biomrsd2,label='Beta factor')
plt.plot(biomrsd1,label='No limitation')
plt.xlabel("Time (days)", labelpad=20)
plt.ylabel("Integrated soil moisture availability", labelpad=20)
#plt.ylabel("Integrated root water uptake (m$^{-2}$m$^{-3}$)", labelpad=20)
# plt.title('DAY = 94')
plt.legend()
plt.title('Cereal')
plt.tight_layout()
plt.savefig('/home/renato/groimp_efficient/run_1c/paper_fig/theta_total.png')
plt.show()
sys.exit()
| 24.681818 | 119 | 0.652394 | 708 | 4,344 | 3.940678 | 0.261299 | 0.058065 | 0.103226 | 0.044803 | 0.31828 | 0.185663 | 0.164158 | 0.150538 | 0.143369 | 0.110394 | 0 | 0.071887 | 0.138582 | 4,344 | 175 | 120 | 24.822857 | 0.673704 | 0.200046 | 0 | 0.036364 | 0 | 0 | 0.128272 | 0.061664 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.045455 | null | null | 0.018182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da37e2b8ed8d04e1694b1c5e3fc2968aed7f5493 | 938 | py | Python | app/main/forms.py | tonyishangu/Pitch | 6af51349086d615bb3a14a100df5f3f2f4441214 | [
"MIT"
] | null | null | null | app/main/forms.py | tonyishangu/Pitch | 6af51349086d615bb3a14a100df5f3f2f4441214 | [
"MIT"
] | null | null | null | app/main/forms.py | tonyishangu/Pitch | 6af51349086d615bb3a14a100df5f3f2f4441214 | [
"MIT"
] | null | null | null | from flask_wtf import FlaskForm
from wtforms import StringField, TextAreaField, SubmitField, SelectField
from wtforms.validators import Required
class PitchForm(FlaskForm):
title = StringField('Pitch title', validators=[Required()])
category = SelectField('Pitch category', choices=[('Motivational', 'Motivational'), ('Famous', 'Famous'), ('Despair', 'Despair')], validators=[Required()])
description = TextAreaField('Pitch description', validators=[Required()])
submit = SubmitField('Submit')
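# Typical use in a view (a sketch; the surrounding route and save_pitch
# helper are assumptions, not part of this module):
#
# form = PitchForm()
# if form.validate_on_submit():
# save_pitch(form.title.data, form.category.data, form.description.data)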
class UpdateProfile(FlaskForm):
bio = TextAreaField('Tell us about you.', validators=[Required()])
submit = SubmitField('Submit')
class Commentform(FlaskForm):
description = TextAreaField('Comment description', validators=[Required()])
submit = SubmitField('Submit')
class Upvoteform(FlaskForm):
submit1 = SubmitField('Upvote (+)')
class Downvoteform(FlaskForm):
submit2 = SubmitField('Downvote (-)') | 39.083333 | 159 | 0.732409 | 86 | 938 | 7.976744 | 0.430233 | 0.131195 | 0.104956 | 0.153061 | 0.233236 | 0.233236 | 0.166181 | 0 | 0 | 0 | 0 | 0.002439 | 0.1258 | 938 | 24 | 160 | 39.083333 | 0.834146 | 0 | 0 | 0.166667 | 0 | 0 | 0.179979 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
da3c4ac30ec8cbb4f50d0e6f54d08e89941129bf | 357 | py | Python | theano/compile/tests/test_function_name.py | mdda/Theano | 6ca7b2b65000e371f009b617d41bc5a90f022d38 | [
"BSD-3-Clause"
] | 295 | 2015-09-25T21:15:04.000Z | 2022-01-13T01:16:18.000Z | theano/compile/tests/test_function_name.py | AtousaTorabi/Theano_old | ba2d2f74406243112e813df31429721c791a889a | [
"BSD-3-Clause"
] | 21 | 2015-10-28T19:06:32.000Z | 2022-03-11T23:13:05.000Z | theano/compile/tests/test_function_name.py | AtousaTorabi/Theano_old | ba2d2f74406243112e813df31429721c791a889a | [
"BSD-3-Clause"
] | 114 | 2015-09-26T21:23:02.000Z | 2021-11-19T02:36:41.000Z | import unittest
import os
import re
import theano
from theano import tensor
class FunctionName(unittest.TestCase):
def test_function_name(self):
x = tensor.vector('x')
func = theano.function([x], x + 1.)
regex = re.compile(os.path.basename('.*test_function_name.pyc?:13'))
assert(regex.match(func.name) is not None)
| 21 | 76 | 0.67507 | 50 | 357 | 4.74 | 0.6 | 0.101266 | 0.135021 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010563 | 0.204482 | 357 | 16 | 77 | 22.3125 | 0.823944 | 0 | 0 | 0 | 0 | 0 | 0.081232 | 0.078431 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.090909 | false | 0 | 0.454545 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
da486790a4010f1ed533b021cccf13c3e964e378 | 5,004 | py | Python | sgh_stepperArm.py | davidramirezm30/scratch-orangepi | b4aa70f1cdcc030186532ad61cebf786be415b4b | [
"MIT"
] | null | null | null | sgh_stepperArm.py | davidramirezm30/scratch-orangepi | b4aa70f1cdcc030186532ad61cebf786be415b4b | [
"MIT"
] | null | null | null | sgh_stepperArm.py | davidramirezm30/scratch-orangepi | b4aa70f1cdcc030186532ad61cebf786be415b4b | [
"MIT"
] | null | null | null | # meArm.py - York Hack Space May 2014
# A motion control library for Phenoptix meArm using Adafruit 16-channel PWM servo driver
from sgh_Adafruit_PWM_Servo_Driver import PWM
import kinematics
import time
from math import pi
class meArm():
def __init__(self, sweepMinBase = -206, sweepMaxBase = 206, angleMinBase = -pi / 2.0 , angleMaxBase = pi / 2.0,
sweepMinShoulder = 103, sweepMaxShoulder = -103, angleMinShoulder = pi / 4.0, angleMaxShoulder = 3 * pi / 4.0,
sweepMinElbow = 0, sweepMaxElbow = 103, angleMinElbow = 0, angleMaxElbow = -2 * pi / 4.0,
sweepMinGripper = 75, sweepMaxGripper = 115, angleMinGripper = pi / 2.0, angleMaxGripper = 0):
"""Constructor for meArm - can use as default arm=meArm(), or supply calibration data for servos."""
self.servoInfo = {}
self.servoInfo["base"] = self.setupServo(sweepMinBase, sweepMaxBase, angleMinBase, angleMaxBase)
self.servoInfo["shoulder"] = self.setupServo(sweepMinShoulder, sweepMaxShoulder, angleMinShoulder, angleMaxShoulder)
self.servoInfo["elbow"] = self.setupServo(sweepMinElbow, sweepMaxElbow, angleMinElbow, angleMaxElbow)
self.servoInfo["gripper"] = self.setupServo(sweepMinGripper, sweepMaxGripper, angleMinGripper, angleMaxGripper)
print "servoinfo" , self.servoInfo
self.radBase = 0.0
self.radShoulder = pi / 2.0
self.radElbow = 0.0
self.BasePos = 0
self.ShoulderPos = 0
self.ElbowPos = 0
# Adafruit servo driver has four 'blocks' of four servo connectors, 0, 1, 2 or 3.
def begin(self, block = 0, address = 0x40):
"""Call begin() before any other meArm calls. Optional parameters to select a different block of servo connectors or different I2C address."""
self.pwm = 0#PWM(address) # Address of Adafruit PWM servo driver
self.base = block * 4
self.shoulder = block * 4 + 1
self.elbow = block * 4 + 2
self.gripper = block * 4 + 3
#self.pwm.setPWMFreq(60)
self.openGripper()
self.goDirectlyTo(0, 148, 80)
def setupServo(self, n_min, n_max, a_min, a_max):
"""Calculate servo calibration record to place in self.servoInfo"""
rec = {}
n_range = (n_max - n_min)
a_range = (a_max - a_min)
if a_range == 0: return
gain = n_range / a_range
zero = n_min - gain * a_min
rec["gain"] = gain
rec["zero"] = zero
rec["min"] = n_min
rec["max"] = n_max
return rec
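# Worked example with the base-servo defaults above:
# n_range = 206 - (-206) = 412, a_range = pi/2 - (-pi/2) = pi,
# so gain = 412/pi ~ 131.1 pulse units per radian and
# zero = -206 - gain * (-pi/2) = 0.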
def angle2pwm(self, servo, angle):
"""Work out pulse length to use to achieve a given requested angle taking into account stored calibration data"""
ret = int((self.servoInfo[servo]["zero"] + self.servoInfo[servo]["gain"] * angle) / 1.0 )
print "servo",servo,angle
return ret
def goDirectlyTo(self, x, y, z):
"""Set servo angles so as to place the gripper at a given Cartesian point as quickly as possible, without caring what path it takes to get there"""
angles = [0,0,0]
if kinematics.solve(x, y, z, angles):
print "angles", angles
print "degs", [int(x * 180.0) / pi for x in angles]
self.radBase = angles[0]
self.radShoulder = angles[1]
self.radElbow = angles[2]
self.BasePos = self.angle2pwm("base", self.radBase)
self.ShoulderPos = self.angle2pwm("shoulder", self.radShoulder)
self.ElbowPos = self.angle2pwm("elbow", self.radElbow)
self.x = x
self.y = y
self.z = z
print "goto %s" % ([x,y,z])
def gotoPoint(self, x, y, z):
"""Travel in a straight line from current position to a requested position"""
x0 = self.x
y0 = self.y
z0 = self.z
dist = kinematics.distance(x0, y0, z0, x, y, z)
step = 10
i = 0
while i < dist:
self.goDirectlyTo(x0 + (x - x0) * i / dist, y0 + (y - y0) * i / dist, z0 + (z - z0) * i / dist)
time.sleep(0.05)
i += step
self.goDirectlyTo(x, y, z)
time.sleep(0.05)
def openGripper(self):
"""Open the gripper, dropping whatever is being carried"""
#self.pwm.setPWM(self.gripper, 0, self.angle2pwm("gripper", pi/4.0))
time.sleep(0.3)
def closeGripper(self):
"""Close the gripper, grabbing onto anything that might be there"""
#self.pwm.setPWM(self.gripper, 0, self.angle2pwm("gripper", -pi/4.0))
time.sleep(0.3)
def isReachable(self, x, y, z):
"""Returns True if the point is (theoretically) reachable by the gripper"""
radBase = 0
radShoulder = 0
radElbow = 0
return kinematics.solve(x, y, z, [radBase, radShoulder, radElbow])
def getPos(self):
"""Returns the current position of the gripper"""
return [self.x, self.y, self.z]
| 43.894737 | 155 | 0.601519 | 644 | 5,004 | 4.635093 | 0.307453 | 0.039196 | 0.00804 | 0.007035 | 0.054271 | 0.042211 | 0.042211 | 0.042211 | 0.042211 | 0.042211 | 0 | 0.036415 | 0.286571 | 5,004 | 113 | 156 | 44.283186 | 0.79972 | 0.082134 | 0 | 0.047619 | 0 | 0 | 0.025488 | 0 | 0 | 0 | 0.001085 | 0 | 0 | 0 | null | null | 0 | 0.047619 | null | null | 0.059524 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da5097b56f955d58d7c07e2cd9380b37c6714381 | 206 | py | Python | goodread/config.py | frictionlessdata/goodread | 094ca6c713f23e357d24795b207cb0fdee0012ca | [
"MIT"
] | 2 | 2021-03-10T07:38:21.000Z | 2021-04-03T09:49:20.000Z | goodread/config.py | frictionlessdata/goodread | 094ca6c713f23e357d24795b207cb0fdee0012ca | [
"MIT"
] | 3 | 2021-03-13T08:09:56.000Z | 2021-03-13T14:24:53.000Z | goodread/config.py | roll/goodread-py | 094ca6c713f23e357d24795b207cb0fdee0012ca | [
"MIT"
] | null | null | null | import os
# Helpers
def read_asset(*paths):
dirname = os.path.dirname(__file__)
return open(os.path.join(dirname, "assets", *paths)).read().strip()
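# e.g. read_asset("VERSION") returns the stripped contents of
# <package dir>/assets/VERSION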
# General
VERSION = read_asset("VERSION")
| 12.875 | 71 | 0.674757 | 27 | 206 | 4.925926 | 0.62963 | 0.135338 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165049 | 206 | 15 | 72 | 13.733333 | 0.773256 | 0.072816 | 0 | 0 | 0 | 0 | 0.069149 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da53752dcf6b6cccf46e141a566ea1d87d74739d | 2,575 | py | Python | gcb_web_auth/backends/oauth.py | Duke-GCB/gcb-web-auth | 51b74f278a3234e1036cc111407ff2b951354873 | [
"MIT"
] | 1 | 2017-04-26T10:26:01.000Z | 2017-04-26T10:26:01.000Z | gcb_web_auth/backends/oauth.py | Duke-GCB/gcb-web-auth | 51b74f278a3234e1036cc111407ff2b951354873 | [
"MIT"
] | 25 | 2017-04-26T20:08:55.000Z | 2021-06-07T19:14:43.000Z | gcb_web_auth/backends/oauth.py | Duke-GCB/gcb-web-auth | 51b74f278a3234e1036cc111407ff2b951354873 | [
"MIT"
] | null | null | null | from ..utils import user_details_from_token, OAuthException
from ..groupmanager import user_belongs_to_group
from ..models import GroupManagerConnection
from .base import BaseBackend
from django.core.exceptions import PermissionDenied
import logging
# Maps django User attributes to OIDC userinfo keys
logging.basicConfig()
logger = logging.getLogger(__name__)
MISSING_GROUP_MANAGER_SETUP = 'Group Manager not setup.'
USER_NOT_IN_GROUP_FMT = 'User not in required group {}.'
class OAuth2Backend(BaseBackend):
def get_user_details_map(self):
"""
Map of django user model keys to the OIDC OAuth keys
:return:
"""
return {
'username': 'sub',
'first_name': 'given_name',
'last_name': 'family_name',
'email': 'email',
}
def authenticate(self, service=None, token_dict=None):
try:
details = user_details_from_token(service, token_dict)
except OAuthException as e:
logger.exception('Exception getting user details')
return None
self.check_user_details(details)
user = self.save_user(details)
self.handle_new_user(user, details)
return user
def handle_new_user(self, user, details):
"""
Stub method to allow custom behavior for new OAuth users
:param user: A django user, created after receiving OAuth details
:param details: A dictionary of OAuth user info
:return: None
"""
pass
def check_user_details(self, details):
"""
Stub method to allow checking OAuth user details and raising PermissionDenied if not valid
:param details: A dictionary of OAuth user info
"""
pass
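# Example override (a sketch; the userinfo key and group name are
# assumptions, not part of this module):
#
# class GroupCheckedBackend(OAuth2Backend):
# def check_user_details(self, details):
# self.verify_user_belongs_to_group(details['sub'], 'duke:some-group')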
def verify_user_belongs_to_group(self, duke_unique_id, group_name):
"""
Using the singleton GroupManagerConnection object, check whether a user belongs to a group.
Raises PermissionDenied if Group Manager is not set up or the user is not a member of the group.
:param duke_unique_id: str: unique duke id for a user
:param group_name: str: name of the group to check
"""
group_manager_connection = GroupManagerConnection.objects.first()
if not group_manager_connection:
logger.error(MISSING_GROUP_MANAGER_SETUP)
raise PermissionDenied(MISSING_GROUP_MANAGER_SETUP)
if not user_belongs_to_group(group_manager_connection, duke_unique_id, group_name):
raise PermissionDenied(USER_NOT_IN_GROUP_FMT.format(group_name))
| 36.267606 | 110 | 0.680388 | 322 | 2,575 | 5.220497 | 0.329193 | 0.065437 | 0.030934 | 0.032124 | 0.118977 | 0.045211 | 0.045211 | 0.045211 | 0 | 0 | 0 | 0.000521 | 0.254757 | 2,575 | 70 | 111 | 36.785714 | 0.875456 | 0.278058 | 0 | 0.051282 | 0 | 0 | 0.085194 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.128205 | false | 0.051282 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
da5817c0ccd94f09614efc895d74996c5c2086e4 | 7,315 | py | Python | backend/_tests/test_lambda.py | codemonkey800/napari-hub | 34a40b68d67002de2514d55b575b71159c7456cb | [
"MIT"
] | null | null | null | backend/_tests/test_lambda.py | codemonkey800/napari-hub | 34a40b68d67002de2514d55b575b71159c7456cb | [
"MIT"
] | null | null | null | backend/_tests/test_lambda.py | codemonkey800/napari-hub | 34a40b68d67002de2514d55b575b71159c7456cb | [
"MIT"
] | null | null | null | from unittest import mock
import requests
from requests.exceptions import HTTPError
from backend.napari import get_plugin
from backend.napari import get_plugins
from backend.napari import get_download_url
from backend.napari import get_license
class FakeResponse:
def __init__(self, *, data: str):
self.text = data
self.status_code = requests.codes.ok
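# The property below deliberately "burns" the OK status: the first read
# returns 200 (requests.codes.ok) and every later read returns 300, so
# test code that re-checks the status sees a failure.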
@property
def status_code(self):
status_code = self._status_code
self.status_code = requests.codes.ok + 100
return status_code
@status_code.setter
def status_code(self, status_code):
self._status_code = status_code
def raise_for_status(self):
raise HTTPError
plugin_list = """
<li>
<a class="package-snippet" href="/project/brainreg-segment/">
<h3 class="package-snippet__title">
<span class="package-snippet__name">package1</span>
<span class="package-snippet__version">0.2.7</span>
<span class="package-snippet__released"><time datetime="2021-04-26T13:17:17+0000" data-controller="localized-time" data-localized-time-relative="true" data-localized-time-show-time="false">
Apr 26, 2021
</time></span>
</h3>
<p class="package-snippet__description">test package 1</p>
</a>
</li>
<li>
<a class="package-snippet" href="/project/napari-mri/">
<h3 class="package-snippet__title">
<span class="package-snippet__name">package2</span>
<span class="package-snippet__version">0.1.0</span>
<span class="package-snippet__released"><time datetime="2021-03-21T06:12:30+0000" data-controller="localized-time" data-localized-time-relative="true" data-localized-time-show-time="false">
Mar 21, 2021
</time></span>
</h3>
<p class="package-snippet__description">test package 2</p>
</a>
</li>
"""
plugin = """
{"info":{"author":"Test Author","author_email":"test@test.com",
"bugtrack_url":null,"classifiers":["Development Status :: 4 - Beta",
"Intended Audience :: Developers","License :: OSI Approved :: BSD License",
"Operating System :: OS Independent","Programming Language :: Python",
"Programming Language :: Python :: 3","Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7","Programming Language :: Python :: 3.8"
,"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Software Development :: Testing"],"description":"description",
"description_content_type":"","docs_url":null,"download_url":"",
"downloads":{"last_day":-1,"last_month":-1,"last_week":-1},
"home_page":"https://github.com/test/test","keywords":"",
"license":"BSD-3","maintainer":"Test Author",
"maintainer_email":"test@test.com","name":"test",
"package_url":"https://pypi.org/project/test/","platform":"",
"project_url":"https://pypi.org/project/test/","project_urls":{
"Homepage":"https://github.com/test/test"},
"release_url":"https://pypi.org/project/test/0.0.1/",
"requires_dist":null,"requires_python":">=3.6",
"summary":"A test plugin",
"version":"0.0.1","yanked":false,"yanked_reason":null},
"last_serial":10229034,"releases":{"0.0.1":[{"comment_text":"",
"downloads":-1,"filename":"test.tar.gz","has_sig":false,
"md5_digest":"","packagetype":"sdist",
"python_version":"source","requires_python":">=3.6","size":3338,
"upload_time":"2020-04-13T03:37:20","upload_time_iso_8601":
"2020-04-13T03:37:20.169990Z","url":"","yanked":false,"yanked_reason":null}],
"0.0.2":[{"comment_text":"",
"downloads":-1,"filename":"","has_sig":false,
"packagetype":"sdist",
"python_version":"source","requires_python":">=3.6","size":3343,
"upload_time":"2020-04-13T14:58:21","upload_time_iso_8601":
"2020-04-13T14:58:21.644816Z","yanked":false,"yanked_reason":null}],"0.0.3":
[{"comment_text":"",
"downloads":-1,"filename":"test","has_sig":false,"packagetype":"sdist",
"python_version":"source","requires_python":">=3.6","size":3423,
"upload_time":"2020-04-20T15:28:53",
"upload_time_iso_8601":"2020-04-20T15:28:53.386281Z",
"url":"","yanked":false,"yanked_reason":null}]}}"""
@mock.patch(
'requests.get', return_value=FakeResponse(data=plugin_list)
)
def test_get_plugins(mock_get):
result = get_plugins()
assert len(result) == 2
assert result['package1'] == "0.2.7"
assert result['package2'] == "0.1.0"
@mock.patch(
'requests.get', return_value=FakeResponse(data=plugin)
)
@mock.patch(
'backend.napari.get_plugins', return_value={'test': '0.0.1'}
)
def test_get_plugin(mock_get, mock_plugins):
result = get_plugin("test")
assert(result["name"] == "test")
assert(result["summary"] == "A test plugin")
assert(result["description"] == "description")
assert(result["description_content_type"] == "")
assert(result["authors"] == [{'email': 'test@test.com', 'name': 'Test Author'}])
assert(result["license"] == "BSD-3")
assert(result["python_version"] == ">=3.6")
assert(result["operating_system"] == ['Operating System :: OS Independent'])
assert(result["release_date"] == '2020-04-13T03:37:20.169990Z')
assert(result["version"] == "0.0.1")
assert(result["first_released"] == "2020-04-13T03:37:20.169990Z")
assert(result["development_status"] == ['Development Status :: 4 - Beta'])
assert(result["requirements"] is None)
assert(result["project_site"] == "https://github.com/test/test")
assert(result["documentation"] == "")
assert(result["support"] == "")
assert(result["report_issues"] == "")
assert(result["twitter"] == "")
assert(result["code_repository"] == "https://github.com/test/test")
@mock.patch(
'requests.get', return_value=FakeResponse(data=plugin)
)
@mock.patch(
'backend.napari.get_plugins', return_value={'not_test': '0.0.1'}
)
def test_get_invalid_plugin(mock_get, mock_plugins):
assert({} == get_plugin("test"))
def test_github_get_url():
plugins = {"info": {"project_urls": {"Source Code": "test1"}}}
assert("test1" == get_download_url(plugins))
plugins = {"info": {"project_urls": {"Random": "https://random.com"}}}
assert(get_download_url(plugins) is None)
plugins = {"info": {"project_urls": {"Random": "https://github.com/org"}}}
assert(get_download_url(plugins) is None)
plugins = {"info": {"project_urls": {"Random": "https://github.com/org/repo/random"}}}
assert("https://github.com/org/repo" == get_download_url(plugins))
license_response = """
{
"name": "LICENSE",
"path": "LICENSE",
"license": {
"key": "bsd-3-clause",
"name": "BSD 3-Clause \\"New\\" or \\"Revised\\" License",
"spdx_id": "BSD-3-Clause",
"url": "https://api.github.com/licenses/bsd-3-clause"
}
}
"""
@mock.patch(
'requests.get', return_value=FakeResponse(data=license_response)
)
def test_github_license(mock_get):
result = get_license("test_website")
assert result == "BSD-3-Clause"
no_license_response = """
{
"name": "LICENSE",
"path": "LICENSE",
"license": {
"key": "other",
"name": "Other",
"spdx_id": "NOASSERTION",
"url": null
}
}
"""
@mock.patch(
'requests.get', return_value=FakeResponse(data=no_license_response)
)
def test_github_no_assertion_license(mock_get):
result = get_license("test_website")
assert result is None
| 36.034483 | 195 | 0.666029 | 944 | 7,315 | 4.988347 | 0.220339 | 0.058611 | 0.048418 | 0.029306 | 0.553196 | 0.459758 | 0.378424 | 0.330006 | 0.28265 | 0.220641 | 0 | 0.048255 | 0.13028 | 7,315 | 202 | 196 | 36.212871 | 0.691921 | 0 | 0 | 0.194286 | 0 | 0.022857 | 0.637731 | 0.31784 | 0 | 0 | 0 | 0 | 0.177143 | 1 | 0.057143 | false | 0 | 0.04 | 0 | 0.108571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da597c1b33b1775b34cb070ed8fecbd878d88b90 | 1,981 | py | Python | src/webpage.py | Lanfei/hae | a8325083fff1792be73f16656000a048472dc296 | [
"MIT"
] | 39 | 2015-01-15T13:07:50.000Z | 2021-11-07T20:21:37.000Z | src/webpage.py | jjzhang166/hae | a8325083fff1792be73f16656000a048472dc296 | [
"MIT"
] | null | null | null | src/webpage.py | jjzhang166/hae | a8325083fff1792be73f16656000a048472dc296 | [
"MIT"
] | 15 | 2015-09-10T08:31:36.000Z | 2021-08-09T16:58:42.000Z | import assets
import webbrowser
from PyQt5.Qt import QMessageBox
from PyQt5.QtNetwork import QNetworkDiskCache
from PyQt5.QtWebKitWidgets import QWebPage, QWebInspector
class WebPage(QWebPage):
def __init__(self):
super(WebPage, self).__init__()
self.inspector = QWebInspector()
self.inspector.setPage(self)
self.inspector.resize(1024, 400)
diskCache = QNetworkDiskCache(self)
diskCache.setCacheDirectory(assets.fs.dataPath() + '/Cache')
self.networkAccessManager().setCache(diskCache)
self.networkAccessManager().setCookieJar(assets.dataJar)
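# Navigation policy: link clicks aimed at the main frame are loaded
# in-place via the view; clicks that target no frame (new-window requests)
# are handed off to the system browser instead.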
def acceptNavigationRequest(self, frame, request, type):
if(type == QWebPage.NavigationTypeLinkClicked):
url = request.url().toString()
if(frame == self.mainFrame()):
self.view().load(url)
return False
elif frame == None:
# self.createWindow(QWebPage.WebBrowserWindow, url)
webbrowser.open(request.url().toString())
return False
return QWebPage.acceptNavigationRequest(self, frame, request, type)
# def downloadRequested(self, request):
# print(request)
def findText(self, text):
return super(WebPage, self).findText(text, QWebPage.FindBackward)
def showInspector(self):
self.inspector.show()
self.inspector.activateWindow()
def hideInspector(self):
self.inspector.close()
def createWindow(self, type, url = None):
from window import Window
window = Window(self.view().parentWidget(), url, isDialog = (type == QWebPage.WebModalDialog))
return window.webView.page()
def javaScriptAlert(self, frame, msg):
QMessageBox.information(self.view().parentWidget(), None, msg)
def javaScriptConfirm(self, frame, msg):
return QMessageBox.question(self.view().parentWidget(), None, msg) == QMessageBox.Yes
# There is a bug in PyQt
# def javaScriptPrompt(self, frame, msg, defaultValue):
# result = QInputDialog.getText(self.view().parentWidget(), None, msg)
# return (result[1], result[0])
def close(self):
self.hideInspector()
assets.dataJar.save()
| 31.951613 | 96 | 0.744573 | 223 | 1,981 | 6.578475 | 0.38565 | 0.05317 | 0.054533 | 0.04908 | 0.113838 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006948 | 0.128218 | 1,981 | 61 | 97 | 32.47541 | 0.842501 | 0.141848 | 0 | 0.046512 | 0 | 0 | 0.003546 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.209302 | false | 0 | 0.139535 | 0.046512 | 0.511628 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da6d5e10eb908686caf520c0282d057272e35a99 | 417 | py | Python | python/ack.py | catseye/Dipple | 7c098e6b4cd9bffd8ff65465cfa4620ce09678a2 | [
"Unlicense"
] | 5 | 2016-05-18T01:51:37.000Z | 2022-01-13T23:34:32.000Z | python/ack.py | catseye/Dipple | 7c098e6b4cd9bffd8ff65465cfa4620ce09678a2 | [
"Unlicense"
] | 2 | 2015-08-06T20:21:11.000Z | 2017-08-02T14:30:31.000Z | python/ack.py | catseye/Dipple | 7c098e6b4cd9bffd8ff65465cfa4620ce09678a2 | [
"Unlicense"
] | 1 | 2015-01-25T02:49:49.000Z | 2015-01-25T02:49:49.000Z | #!/usr/bin/env python
import sys
def ack(m, n):
if m == 0:
return n + 1
elif n == 0:
return ack(m-1, 1)
else:
return ack(m-1, ack(m, n-1))
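# Known closed forms, useful for checking the output: ack(1, n) = n + 2,
# ack(2, n) = 2*n + 3, ack(3, n) = 2**(n + 3) - 3,
# hence ack(4, 0) = ack(3, 1) = 13 and ack(4, 1) = 65533.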
sys.setrecursionlimit(12000)
for m in range(0, 4):
for n in range(0, 10):
print "ack(%s,%s)=%s" % (m, n, ack(m, n))
m = 4
n = 0
print "ack(%s,%s)=%s" % (m, n, ack(m, n))
# m = 4
# n = 1
# print "ack(%s,%s)=%s" % (m, n, ack(m, n))
| 16.038462 | 45 | 0.472422 | 86 | 417 | 2.290698 | 0.290698 | 0.081218 | 0.126904 | 0.152284 | 0.304569 | 0.304569 | 0.304569 | 0.304569 | 0.304569 | 0.304569 | 0 | 0.071186 | 0.292566 | 417 | 25 | 46 | 16.68 | 0.59661 | 0.177458 | 0 | 0.133333 | 0 | 0 | 0.076696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.066667 | null | null | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da72b5d6db173d0936d21e44910a561197ddd81b | 1,384 | py | Python | src/config/settings.py | crayzee/useful | d760a11202093681ff69e402ba816cfc7e5ff739 | [
"BSD-3-Clause"
] | 2 | 2021-02-14T17:57:18.000Z | 2021-02-14T18:11:17.000Z | src/config/settings.py | crayzee/useful | d760a11202093681ff69e402ba816cfc7e5ff739 | [
"BSD-3-Clause"
] | null | null | null | src/config/settings.py | crayzee/useful | d760a11202093681ff69e402ba816cfc7e5ff739 | [
"BSD-3-Clause"
] | null | null | null | import os
from .local_config import *
PROJECT_NAME = "Useful"
SERVER_HOST = 'http://127.0.0.1:8000'
# Secret key
SECRET_KEY = b"laksd^8223kad_)8dkfjslkjUJKSN83_*Kk3ja@8ksdfj"
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
API_V1_STR = "/api/v1"
# Token 60 minutes * 24 hours * 7 days = 7 days
ACCESS_TOKEN_EXPIRE_MINUTES = 60 * 24 * 7
# CORS
BACKEND_CORS_ORIGINS = [
"http://localhost",
"http://localhost:4200",
"http://localhost:3000",
"http://localhost:8080",
]
# Database via Docker
DATABASE_URI = f"""postgres://{os.environ.get("POSTGRES_USER")}:\
{os.environ.get("POSTGRES_PASSWORD")}@\
{os.environ.get("POSTGRES_HOST")}/\
{os.environ.get("POSTGRES_DB")}"""
USERS_OPEN_REGISTRATION = True
EMAILS_FROM_NAME = PROJECT_NAME
EMAIL_RESET_TOKEN_EXPIRE_HOURS = 48
EMAIL_TEMPLATES_DIR = "src/email-templates/build"
# Email
SMTP_TLS = os.environ.get("SMTP_TLS")
SMTP_PORT = os.environ.get("SMTP_PORT")
SMTP_HOST = os.environ.get("SMTP_HOST")
SMTP_USER = os.environ.get("SMTP_USER")
SMTP_PASSWORD = os.environ.get("SMTP_PASSWORD")
EMAILS_FROM_EMAIL = os.environ.get("EMAILS_FROM_EMAIL")
EMAILS_ENABLED = SMTP_HOST and SMTP_PORT and EMAILS_FROM_EMAIL
EMAIL_TEST_USER = "twopik@gmail"
APPS_MODELS = [
"src.app.user.models",
"src.app.auth.models",
"src.app.board.models",
"src.app.blog.models",
"aerich.models",
]
| 23.862069 | 70 | 0.721821 | 206 | 1,384 | 4.57767 | 0.417476 | 0.09544 | 0.127253 | 0.084836 | 0.033934 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03786 | 0.12211 | 1,384 | 57 | 71 | 24.280702 | 0.738272 | 0.064306 | 0 | 0 | 0 | 0 | 0.391001 | 0.171451 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.054054 | 0.054054 | 0 | 0.054054 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
da73bebdb1a8c110e03d6c6e8884f8ede489e4ad | 3,612 | py | Python | bluebottle/cms/migrations/0021_auto_20171017_2015.py | terrameijar/bluebottle | b4f5ba9c4f03e678fdd36091b29240307ea69ffd | [
"BSD-3-Clause"
] | 10 | 2015-05-28T18:26:40.000Z | 2021-09-06T10:07:03.000Z | bluebottle/cms/migrations/0021_auto_20171017_2015.py | terrameijar/bluebottle | b4f5ba9c4f03e678fdd36091b29240307ea69ffd | [
"BSD-3-Clause"
] | 762 | 2015-01-15T10:00:59.000Z | 2022-03-31T15:35:14.000Z | bluebottle/cms/migrations/0021_auto_20171017_2015.py | terrameijar/bluebottle | b4f5ba9c4f03e678fdd36091b29240307ea69ffd | [
"BSD-3-Clause"
] | 9 | 2015-02-20T13:19:30.000Z | 2022-03-08T14:09:17.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2017-10-17 18:15
from __future__ import unicode_literals
import adminsortable.fields
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('utils', '0002_maillog'),
('cms', '0020_add_group_permissions'),
]
operations = [
migrations.CreateModel(
name='Link',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('highlight', models.BooleanField(default=False)),
('title', models.CharField(max_length=100, verbose_name='Title')),
('component', models.CharField(blank=True, choices=[(b'page', 'Page'), (b'project', 'Project'), (b'task', 'Task'), (b'fundraiser', 'Fundraiser'), (b'results', 'Results'), (b'news', 'News')], max_length=50, verbose_name='Component')),
('component_id', models.CharField(blank=True, max_length=100, verbose_name='Component ID')),
('external_link', models.CharField(blank=True, max_length=2000, verbose_name='External Link')),
('link_order', models.PositiveIntegerField(db_index=True, default=0, editable=False)),
],
options={
'ordering': ['link_order'],
},
),
migrations.CreateModel(
name='LinkGroup',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(choices=[(b'main', 'Main'), (b'about', 'About'), (b'info', 'Info'), (b'discover', 'Discover'), (b'social', 'Social')], default=b'main', max_length=25, unique=True)),
('title', models.CharField(blank=True, max_length=50, verbose_name='Title')),
],
),
migrations.CreateModel(
name='LinkPermission',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('permission', models.CharField(help_text='A dot separated app name and permission codename.', max_length=255)),
('present', models.BooleanField(default=True, help_text='Should the permission be present or not to access the link?')),
],
),
migrations.CreateModel(
name='SiteLinks',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('has_copyright', models.BooleanField(default=True)),
('language', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to='utils.Language')),
],
options={
'verbose_name_plural': 'Site links',
},
),
migrations.AddField(
model_name='linkgroup',
name='site_links',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='link_groups', to='cms.SiteLinks'),
),
migrations.AddField(
model_name='link',
name='link_group',
field=adminsortable.fields.SortableForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='links', to='cms.LinkGroup'),
),
migrations.AddField(
model_name='link',
name='link_permissions',
field=models.ManyToManyField(blank=True, to='cms.LinkPermission'),
),
]
| 47.526316 | 249 | 0.592193 | 372 | 3,612 | 5.602151 | 0.327957 | 0.052783 | 0.026871 | 0.042226 | 0.334933 | 0.300384 | 0.252879 | 0.197697 | 0.197697 | 0.151631 | 0 | 0.01676 | 0.256645 | 3,612 | 75 | 250 | 48.16 | 0.759404 | 0.018826 | 0 | 0.441176 | 1 | 0 | 0.186106 | 0.007343 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.102941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da7a14fecc0293a448e5b7f69f5fae7ef9b4d5fb | 1,615 | py | Python | usernado/torntriplets/api.py | reganto/usernado | cb6c2b7b855d2e9abcc5b9c9849ff2b7d46883db | [
"Apache-2.0"
] | 3 | 2022-01-19T18:18:49.000Z | 2022-03-31T06:47:00.000Z | usernado/torntriplets/api.py | reganto/Usernado | 2f3e28322f1f8af13a158e6e566a2f5d78039cac | [
"Apache-2.0"
] | 10 | 2022-03-23T05:42:47.000Z | 2022-03-31T11:32:38.000Z | usernado/torntriplets/api.py | reganto/usernado | cb6c2b7b855d2e9abcc5b9c9849ff2b7d46883db | [
"Apache-2.0"
] | 1 | 2022-03-31T06:47:12.000Z | 2022-03-31T06:47:12.000Z | from typing import Any, Dict, Optional
import tornado.escape
import tornado.web
from usernado.torntriplets.base import BaseHandler
class BaseValidationError(ValueError):
pass
class DataMalformedOrNotProvidedError(BaseValidationError):
pass
class APIHandler(BaseHandler):
def get_json_argument(
self,
name: str,
default: Optional[str] = None,
) -> str:
"""Get json argument from current request.
:param name: Name of the argument
:type name: str
:param default: Default value for argument if not presented,
defaults to None
:type default: str, optional
:raises DataMalformedOrNotProvidedError:
:return: Particular JSON argument that comes with current request
:rtype: str
"""
try:
raw_data = self.request.body.decode().replace("'", '"')
except Exception:
raise DataMalformedOrNotProvidedError
else:
json_data = tornado.escape.json_decode(raw_data)
return json_data.get(name, default)
def get_json_arguments(self) -> Dict[Any, Any]:
"""Get all json arguments from current request.
:raises DataMalformedOrNotProvidedError:
:return: All JSON argument that comes with current request
:rtype: Dict[Any, Any]
"""
try:
raw_data = self.request.body.decode().replace("'", '"')
except Exception:
raise DataMalformedOrNotProvidedError
else:
json_data = tornado.escape.json_decode(raw_data)
return json_data
| 28.839286 | 73 | 0.63839 | 166 | 1,615 | 6.126506 | 0.361446 | 0.047198 | 0.019666 | 0.041298 | 0.371681 | 0.371681 | 0.371681 | 0.371681 | 0.285152 | 0.285152 | 0 | 0 | 0.282972 | 1,615 | 55 | 74 | 29.363636 | 0.878238 | 0.300929 | 0 | 0.482759 | 0 | 0 | 0.003953 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0.068966 | 0.137931 | 0 | 0.37931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
da7a5ddce9f4b0e607e1f5ea8b534056c6de7812 | 412 | py | Python | mayan/apps/acls/permissions.py | eshbeata/open-paperless | 6b9ed1f21908116ad2795b3785b2dbd66713d66e | [
"Apache-2.0"
] | 2,743 | 2017-12-18T07:12:30.000Z | 2022-03-27T17:21:25.000Z | mayan/apps/acls/permissions.py | kyper999/mayan-edms | ca7b8301a1f68548e8e718d42a728a500d67286e | [
"Apache-2.0"
] | 15 | 2020-06-06T00:00:48.000Z | 2022-03-12T00:03:54.000Z | mayan/apps/acls/permissions.py | kyper999/mayan-edms | ca7b8301a1f68548e8e718d42a728a500d67286e | [
"Apache-2.0"
] | 257 | 2017-12-18T03:12:58.000Z | 2022-03-25T08:59:10.000Z | from __future__ import absolute_import, unicode_literals
from django.utils.translation import ugettext_lazy as _
from permissions import PermissionNamespace
namespace = PermissionNamespace('acls', _('Access control lists'))
permission_acl_edit = namespace.add_permission(
name='acl_edit', label=_('Edit ACLs')
)
permission_acl_view = namespace.add_permission(
name='acl_view', label=_('View ACLs')
)
| 27.466667 | 66 | 0.791262 | 49 | 412 | 6.265306 | 0.530612 | 0.084691 | 0.143322 | 0.169381 | 0.188925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114078 | 412 | 14 | 67 | 29.428571 | 0.841096 | 0 | 0 | 0 | 0 | 0 | 0.140777 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da83cac0de8a0644873fc98b7129a03e1e3e96ac | 962 | py | Python | src/genie/libs/parser/ironware/tests/ShowMPLSLSP/cli/equal/golden_output1_expected.py | jamesditrapani/genieparser | d2c2f7e863889f323604c35ded767ca1d902055b | [
"Apache-2.0"
] | null | null | null | src/genie/libs/parser/ironware/tests/ShowMPLSLSP/cli/equal/golden_output1_expected.py | jamesditrapani/genieparser | d2c2f7e863889f323604c35ded767ca1d902055b | [
"Apache-2.0"
] | null | null | null | src/genie/libs/parser/ironware/tests/ShowMPLSLSP/cli/equal/golden_output1_expected.py | jamesditrapani/genieparser | d2c2f7e863889f323604c35ded767ca1d902055b | [
"Apache-2.0"
] | null | null | null | expected_output = {
'lsps': {
'mlx8.1_to_ces.2': {
'destination': '1.1.1.1',
'admin': 'UP',
'operational': 'UP',
'flap_count': 1,
'retry_count': 0,
'tunnel_interface': 'tunnel0'
},
'mlx8.1_to_ces.1': {
'destination': '2.2.2.2',
'admin': 'UP',
'operational': 'UP',
'flap_count': 1,
'retry_count': 0,
'tunnel_interface': 'tunnel56'
},
'mlx8.1_to_mlx8.2': {
'destination': '3.3.3.3',
'admin': 'UP',
'operational': 'UP',
'flap_count': 1,
'retry_count': 0,
'tunnel_interface': 'tunnel63'
},
'mlx8.1_to_mlx8.3': {
'destination': '4.4.4.4',
'admin': 'DOWN',
'operational': 'DOWN',
'flap_count': 0,
'retry_count': 0
}
}
}
| 26.722222 | 42 | 0.39501 | 92 | 962 | 3.913043 | 0.26087 | 0.083333 | 0.077778 | 0.166667 | 0.466667 | 0.466667 | 0.466667 | 0.466667 | 0.466667 | 0.466667 | 0 | 0.078324 | 0.429314 | 962 | 35 | 43 | 27.485714 | 0.577413 | 0 | 0 | 0.342857 | 0 | 0 | 0.391892 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da851717fcc7c1db00f5aa310d945f9a35207e50 | 10,049 | py | Python | src/mbed_cloud/_backends/iam/models/user_invitation_resp.py | GQMai/mbed-cloud-sdk-python | 76ef009903415f37f69dcc5778be8f5fb14c08fe | [
"Apache-2.0"
] | 12 | 2017-12-28T11:18:43.000Z | 2020-10-04T12:11:15.000Z | src/mbed_cloud/_backends/iam/models/user_invitation_resp.py | GQMai/mbed-cloud-sdk-python | 76ef009903415f37f69dcc5778be8f5fb14c08fe | [
"Apache-2.0"
] | 50 | 2017-12-21T12:50:41.000Z | 2020-01-13T16:07:08.000Z | src/mbed_cloud/_backends/iam/models/user_invitation_resp.py | GQMai/mbed-cloud-sdk-python | 76ef009903415f37f69dcc5778be8f5fb14c08fe | [
"Apache-2.0"
] | 8 | 2018-04-25T17:47:29.000Z | 2019-08-29T06:38:27.000Z | # coding: utf-8
"""
Account Management API
API for managing accounts, users, creating API keys, uploading trusted certificates
OpenAPI spec version: v3
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from pprint import pformat
from six import iteritems
import re
class UserInvitationResp(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'account_id': 'str',
'created_at': 'datetime',
'email': 'str',
'etag': 'str',
'expiration': 'datetime',
'groups': 'list[str]',
'id': 'str',
'object': 'str',
'updated_at': 'datetime',
'user_id': 'str'
}
attribute_map = {
'account_id': 'account_id',
'created_at': 'created_at',
'email': 'email',
'etag': 'etag',
'expiration': 'expiration',
'groups': 'groups',
'id': 'id',
'object': 'object',
'updated_at': 'updated_at',
'user_id': 'user_id'
}
def __init__(self, account_id=None, created_at=None, email=None, etag=None, expiration=None, groups=None, id=None, object=None, updated_at=None, user_id=None):
"""
UserInvitationResp - a model defined in Swagger
"""
self._account_id = account_id
self._created_at = created_at
self._email = email
self._etag = etag
self._expiration = expiration
self._groups = groups
self._id = id
self._object = object
self._updated_at = updated_at
self._user_id = user_id
self.discriminator = None
@property
def account_id(self):
"""
Gets the account_id of this UserInvitationResp.
The UUID of the account the user is invited to.
:return: The account_id of this UserInvitationResp.
:rtype: str
"""
return self._account_id
@account_id.setter
def account_id(self, account_id):
"""
Sets the account_id of this UserInvitationResp.
The UUID of the account the user is invited to.
:param account_id: The account_id of this UserInvitationResp.
:type: str
"""
if account_id is None:
raise ValueError("Invalid value for `account_id`, must not be `None`")
self._account_id = account_id
@property
def created_at(self):
"""
Gets the created_at of this UserInvitationResp.
Creation UTC time RFC3339.
:return: The created_at of this UserInvitationResp.
:rtype: datetime
"""
return self._created_at
@created_at.setter
def created_at(self, created_at):
"""
Sets the created_at of this UserInvitationResp.
Creation UTC time RFC3339.
:param created_at: The created_at of this UserInvitationResp.
:type: datetime
"""
if created_at is None:
raise ValueError("Invalid value for `created_at`, must not be `None`")
self._created_at = created_at
@property
def email(self):
"""
Gets the email of this UserInvitationResp.
Email address of the invited user.
:return: The email of this UserInvitationResp.
:rtype: str
"""
return self._email
@email.setter
def email(self, email):
"""
Sets the email of this UserInvitationResp.
Email address of the invited user.
:param email: The email of this UserInvitationResp.
:type: str
"""
if email is None:
raise ValueError("Invalid value for `email`, must not be `None`")
self._email = email
@property
def etag(self):
"""
Gets the etag of this UserInvitationResp.
API resource entity version.
:return: The etag of this UserInvitationResp.
:rtype: str
"""
return self._etag
@etag.setter
def etag(self, etag):
"""
Sets the etag of this UserInvitationResp.
API resource entity version.
:param etag: The etag of this UserInvitationResp.
:type: str
"""
if etag is None:
raise ValueError("Invalid value for `etag`, must not be `None`")
self._etag = etag
@property
def expiration(self):
"""
Gets the expiration of this UserInvitationResp.
Invitation expiration as UTC time RFC3339.
:return: The expiration of this UserInvitationResp.
:rtype: datetime
"""
return self._expiration
@expiration.setter
def expiration(self, expiration):
"""
Sets the expiration of this UserInvitationResp.
Invitation expiration as UTC time RFC3339.
:param expiration: The expiration of this UserInvitationResp.
:type: datetime
"""
self._expiration = expiration
@property
def groups(self):
"""
Gets the groups of this UserInvitationResp.
A list of IDs of the groups the user is invited to.
:return: The groups of this UserInvitationResp.
:rtype: list[str]
"""
return self._groups
@groups.setter
def groups(self, groups):
"""
Sets the groups of this UserInvitationResp.
A list of IDs of the groups the user is invited to.
:param groups: The groups of this UserInvitationResp.
:type: list[str]
"""
self._groups = groups
@property
def id(self):
"""
Gets the id of this UserInvitationResp.
The UUID of the invitation.
:return: The id of this UserInvitationResp.
:rtype: str
"""
return self._id
@id.setter
def id(self, id):
"""
Sets the id of this UserInvitationResp.
The UUID of the invitation.
:param id: The id of this UserInvitationResp.
:type: str
"""
if id is None:
raise ValueError("Invalid value for `id`, must not be `None`")
self._id = id
@property
def object(self):
"""
Gets the object of this UserInvitationResp.
Entity name: always 'user-invitation'
:return: The object of this UserInvitationResp.
:rtype: str
"""
return self._object
@object.setter
def object(self, object):
"""
Sets the object of this UserInvitationResp.
Entity name: always 'user-invitation'
:param object: The object of this UserInvitationResp.
:type: str
"""
if object is None:
raise ValueError("Invalid value for `object`, must not be `None`")
allowed_values = ["user-invitation"]
if object not in allowed_values:
raise ValueError(
"Invalid value for `object` ({0}), must be one of {1}"
.format(object, allowed_values)
)
self._object = object
@property
def updated_at(self):
"""
Gets the updated_at of this UserInvitationResp.
Last update UTC time RFC3339.
:return: The updated_at of this UserInvitationResp.
:rtype: datetime
"""
return self._updated_at
@updated_at.setter
def updated_at(self, updated_at):
"""
Sets the updated_at of this UserInvitationResp.
Last update UTC time RFC3339.
:param updated_at: The updated_at of this UserInvitationResp.
:type: datetime
"""
self._updated_at = updated_at
@property
def user_id(self):
"""
Gets the user_id of this UserInvitationResp.
The UUID of the invited user.
:return: The user_id of this UserInvitationResp.
:rtype: str
"""
return self._user_id
@user_id.setter
def user_id(self, user_id):
"""
Sets the user_id of this UserInvitationResp.
The UUID of the invited user.
:param user_id: The user_id of this UserInvitationResp.
:type: str
"""
if user_id is None:
raise ValueError("Invalid value for `user_id`, must not be `None`")
self._user_id = user_id
def to_dict(self):
"""
Returns the model properties as a dict
"""
result = {}
for attr, _ in iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""
Returns the string representation of the model
"""
return pformat(self.to_dict())
def __repr__(self):
"""
For `print` and `pprint`
"""
return self.to_str()
def __eq__(self, other):
"""
Returns true if both objects are equal
"""
if not isinstance(other, UserInvitationResp):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""
Returns true if both objects are not equal
"""
return not self == other
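# ----------------------------------------------------------------------
# A minimal usage sketch (not part of the generated file; the sample
# values are illustrative assumptions): build the model and serialize it.
if __name__ == '__main__':
    _invitation = UserInvitationResp(account_id='01619571e01c',
                                     email='user@example.com')
    print(_invitation.to_str())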
| 26.584656 | 163 | 0.569509 | 1,132 | 10,049 | 4.922261 | 0.127208 | 0.043073 | 0.17229 | 0.055994 | 0.545406 | 0.464465 | 0.390165 | 0.311199 | 0.23654 | 0.216798 | 0 | 0.0047 | 0.343616 | 10,049 | 377 | 164 | 26.655172 | 0.840055 | 0.350582 | 0 | 0.213333 | 1 | 0 | 0.131036 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173333 | false | 0 | 0.02 | 0 | 0.32 | 0.006667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da851a1afbbfafbfd8181febba1e1dc63ff24b87 | 475 | py | Python | src/orders/tests/order_content_type_replacement/tests_order_item.py | iNerV/education-backend | 787c0d090eb6e4a9338812941b0246a6e1b8e7ad | [
"MIT"
] | 151 | 2020-04-21T09:58:57.000Z | 2021-09-12T09:01:21.000Z | src/orders/tests/order_content_type_replacement/tests_order_item.py | iNerV/education-backend | 787c0d090eb6e4a9338812941b0246a6e1b8e7ad | [
"MIT"
] | 163 | 2020-05-29T20:52:00.000Z | 2021-09-11T12:44:56.000Z | src/orders/tests/order_content_type_replacement/tests_order_item.py | iNerV/education-backend | 787c0d090eb6e4a9338812941b0246a6e1b8e7ad | [
"MIT"
] | 39 | 2020-04-21T12:28:16.000Z | 2021-09-12T15:33:47.000Z | import pytest
pytestmark = [pytest.mark.django_db]
def test_order_without_items(order):
order = order()
assert order.item is None
def test_order_with_record(order, record):
order = order(record=record)
assert order.item == record
def test_order_with_course(order, course):
order = order(course=course)
assert order.item == course
def test_order_with_bundle(order, bundle):
order = order(bundle=bundle)
assert order.item == bundle
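# A hedged sketch of the fixture factory these tests assume; the real
# `order` fixture lives in the project's conftest.py, and this shape is
# only inferred from the calls above, not copied from the source:
#
# @pytest.fixture
# def order(mixer, user):
#     def _order(**kwargs):
#         return mixer.blend('orders.Order', user=user, **kwargs)
#     return _order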
| 16.964286 | 42 | 0.717895 | 65 | 475 | 5.046154 | 0.292308 | 0.152439 | 0.146341 | 0.146341 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.187368 | 475 | 27 | 43 | 17.592593 | 0.849741 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.285714 | false | 0 | 0.071429 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da8ed4c0472579bd4d5d2628a89966ad10d3fad5 | 2,887 | py | Python | vyperlogix/daemon/utils.py | raychorn/chrome_gui | f1fade70b61af12ee43c55c075aa9cfd32caa962 | [
"CC0-1.0"
] | 1 | 2020-09-29T01:36:33.000Z | 2020-09-29T01:36:33.000Z | vyperlogix/daemon/utils.py | raychorn/chrome_gui | f1fade70b61af12ee43c55c075aa9cfd32caa962 | [
"CC0-1.0"
] | null | null | null | vyperlogix/daemon/utils.py | raychorn/chrome_gui | f1fade70b61af12ee43c55c075aa9cfd32caa962 | [
"CC0-1.0"
] | null | null | null | import os, sys
import traceback
from vyperlogix.misc import _utils
from vyperlogix.hash import lists
_metadata = lists.HashedLists2()
def getDaemons(prefix, fpath):
import re
from vyperlogix import misc
_name = misc.funcName()
s_regex = r".+%s\.((py)|(pyc)|(pyo))" % ('_tasklet')
s_svn_regex = '[._]svn'
_regex = re.compile(s_regex)
svn_regex = re.compile(s_svn_regex)
files = [f for f in os.listdir(os.path.abspath(fpath)) if _regex.search(f)]
rejects = [f for f in os.listdir(os.path.abspath(fpath)) if (not _regex.search(f)) and (not svn_regex.search(f)) and (f.find('__init__.') == -1) and (f.find('dlib') == -1)]
if (len(rejects) > 0):
print >>sys.stderr, '(%s) :: Rejected daemon files are "%s" using (not "%s") and (not "%s"). PLS check the file names to ensure your daemons will be executed as planned.' % (_name,rejects,s_regex,s_svn_regex)
return files
def getNormalizedDaemons(prefix, fpath):
h = lists.HashedLists()
fs = []
dms = getDaemons(prefix, fpath)
for f in dms:
h[f.split('.')[0]] = f.split('.')[-1]
for k,v in h.iteritems():
x = [n for n in v if n == 'py']
if (len(x) == 0):
x = [n for n in v if n == 'pyc']
if (len(x) == 0):
x = [n for n in v if n == 'pyo']
if (len(x) > 0):
fs.append('.'.join([k,x[0]]))
return fs
def getNormalizedDaemonNamespaces(prefix, fpath):
return [f.split('.')[0] for f in getNormalizedDaemons(prefix, fpath)]
def execDaemon(f, dpath=None, _logging=None):
_import_error = False
try:
exec "import " + f
except ImportError:
_import_error = True
exc_info = sys.exc_info()
info_string = '\n'.join(traceback.format_exception(*exc_info))
if (_logging):
_logging.error(info_string)
else:
print >>sys.stderr, info_string
info_string = '_import_error=%s' % _import_error
if (_logging):
_logging.warning(info_string)
else:
print >>sys.stderr, info_string
if (not _import_error):
_metadata[f] = lists.HashedLists2()
try:
v = '%s._metadata' % (f)
vv = eval(v)
print '%s=[%s]' % (v,vv)
_metadata[f] = lists.HashedLists2(vv)
v = '%s.data_hook("%s")' % (f,dpath if (sys.platform[:3] != 'win') else dpath.replace(os.sep,'/'))
vv = eval(v)
print '%s=[%s]' % (v,vv)
except AttributeError:
pass
except ImportError:
exc_info = sys.exc_info()
info_string = '\n'.join(traceback.format_exception(*exc_info))
if (_logging):
_logging.error(info_string)
else:
print >>sys.stderr, info_string
def execDaemons(prefix, fpath, dpath=None, _logging=None):
for f in getNormalizedDaemonNamespaces(prefix, fpath):
execDaemon("%s.%s" % (prefix,f.split('.')[0]), dpath=dpath, _logging=_logging)
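# A minimal usage sketch (assumed package name and paths; not part of the
# module): discover and execute every "*_tasklet" module under ./daemons.
if __name__ == '__main__':
    print getNormalizedDaemonNamespaces('daemons', './daemons')
    execDaemons('daemons', './daemons', dpath='./daemons/data')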
| 33.569767 | 215 | 0.602009 | 405 | 2,887 | 4.130864 | 0.274074 | 0.053796 | 0.017932 | 0.010759 | 0.289301 | 0.289301 | 0.261805 | 0.261805 | 0.211596 | 0.211596 | 0 | 0.006821 | 0.23831 | 2,887 | 85 | 216 | 33.964706 | 0.753979 | 0 | 0 | 0.337838 | 0 | 0.013514 | 0.101871 | 0.008316 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.013514 | 0.175676 | null | null | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
da92973f8664607a74f4efaec125fafddf4dd12d | 2,288 | py | Python | arike/visits/models.py | iamsdas/arike | ab76f48f49cd794dd4b77172b347e260a03413b2 | [
"MIT"
] | null | null | null | arike/visits/models.py | iamsdas/arike | ab76f48f49cd794dd4b77172b347e260a03413b2 | [
"MIT"
] | null | null | null | arike/visits/models.py | iamsdas/arike | ab76f48f49cd794dd4b77172b347e260a03413b2 | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
from django.db import models
from django.utils import timezone
from arike.patients.models import Patient, Treatment
User = get_user_model()
class Hygiene(models.TextChoices):
GOOD = "good"
POOR = "poor"
OK = "ok"
class Symptoms(models.TextChoices):
FEVER = "fever"
CAUGHING = "caughing"
class VisitSchedule(models.Model):
date = models.DateField(null=True)
time = models.TimeField(null=True)
duration = models.IntegerField()
patient = models.ForeignKey(Patient, on_delete=models.CASCADE)
nurse = models.ForeignKey(User, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
    created_at = models.DateTimeField(default=timezone.now)  # callable, evaluated per row
updated_at = models.DateTimeField(null=True)
    def save(self, *args, **kwargs):
        self.updated_at = timezone.now()
        return super().save(*args, **kwargs)
def __str__(self):
return f"{self.date} at {self.time}"
class VisitDetails(models.Model):
blood_pressure = models.IntegerField()
pulse = models.IntegerField()
sugar = models.FloatField()
mouth_hygiene = models.CharField(max_length=4, choices=Hygiene.choices)
public_hygiene = models.CharField(max_length=4, choices=Hygiene.choices)
systemic_examination = models.TextField()
patient_at_peace = models.BooleanField()
pain = models.BooleanField()
symptoms = models.CharField(max_length=20, choices=Symptoms.choices)
note = models.TextField()
schedule = models.OneToOneField(VisitSchedule, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
    created_at = models.DateTimeField(default=timezone.now)
updated_at = models.DateTimeField(null=True)
    def save(self, *args, **kwargs):
        self.updated_at = timezone.now()
        return super().save(*args, **kwargs)
class TreatmentNote(models.Model):
note = models.TextField()
treatment = models.ForeignKey(Treatment, on_delete=models.CASCADE)
visit = models.ForeignKey(VisitSchedule, on_delete=models.CASCADE, null=True)
deleted = models.BooleanField(default=False)
    created_at = models.DateTimeField(default=timezone.now)
updated_at = models.DateTimeField(null=True)
    def save(self, *args, **kwargs):
        self.updated_at = timezone.now()
        return super().save(*args, **kwargs)
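# A hedged usage sketch (patient, nurse and treatment are assumed to exist
# already; not part of the app code):
#
# schedule = VisitSchedule.objects.create(
#     date=date(2022, 3, 1), time=time(10, 30), duration=45,
#     patient=patient, nurse=nurse)
# TreatmentNote.objects.create(
#     note='Vitals stable.', treatment=treatment, visit=schedule)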
| 32.225352 | 81 | 0.719843 | 268 | 2,288 | 6.029851 | 0.294776 | 0.029703 | 0.07797 | 0.064975 | 0.434406 | 0.405322 | 0.405322 | 0.405322 | 0.405322 | 0.339728 | 0 | 0.002102 | 0.168269 | 2,288 | 70 | 82 | 32.685714 | 0.847084 | 0 | 0 | 0.37037 | 0 | 0 | 0.021416 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.074074 | 0.018519 | 0.925926 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
16f71339643c7fc6d95188482e9e85260eb6d407 | 519 | py | Python | main.py | frederikkoenigwork/my-python-sample-app | abeed8e73877172099aa7a0c53c7ee05674c1b02 | [
"Unlicense"
] | null | null | null | main.py | frederikkoenigwork/my-python-sample-app | abeed8e73877172099aa7a0c53c7ee05674c1b02 | [
"Unlicense"
] | null | null | null | main.py | frederikkoenigwork/my-python-sample-app | abeed8e73877172099aa7a0c53c7ee05674c1b02 | [
"Unlicense"
] | null | null | null | import django
print(django.get_version())
print(f"boa {111 * 6}")
print(f"{6*6}")
user_input = input("Hey abuser, enter some stuff!\n")  # avoid shadowing the builtin input()
cmp = 1 > 0
print(type(cmp))
print(user_input)
try:
    int(asdf)  # 'asdf' is undefined, so this raises NameError and is caught below
except Exception:
print("Oh no, it failed!")
while False:
print("False!")
listeL = [1,2,3,45,6,4,3,6,8,6,3,3]
setS = set(listeL)
listeL.append(33333)
print(listeL)
print("Set S:")
print(setS)
# This is a comment?
"""asdkjf ölsaföl
sadkf jsaöld jfösa
askdf jöldsakf j
Multi Line Comment!
"""
| 9.611111 | 48 | 0.635838 | 86 | 519 | 3.825581 | 0.662791 | 0.036474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.062053 | 0.192678 | 519 | 53 | 49 | 9.792453 | 0.72315 | 0.034682 | 0 | 0 | 0 | 0 | 0.186603 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.05 | 0 | 0.05 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
16f9b725e9aef0aa98b319d6a522f9a1a0c54c28 | 6,826 | py | Python | ooobuild/lo/i18n/transliteration_modules.py | Amourspirit/ooo_uno_tmpl | 64e0c86fd68f24794acc22d63d8d32ae05dd12b8 | [
"Apache-2.0"
] | null | null | null | ooobuild/lo/i18n/transliteration_modules.py | Amourspirit/ooo_uno_tmpl | 64e0c86fd68f24794acc22d63d8d32ae05dd12b8 | [
"Apache-2.0"
] | null | null | null | ooobuild/lo/i18n/transliteration_modules.py | Amourspirit/ooo_uno_tmpl | 64e0c86fd68f24794acc22d63d8d32ae05dd12b8 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
#
# Copyright 2022 :Barry-Thomas-Paul: Moss
#
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http: // www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Enum Class
# this is a auto generated file generated by Cheetah
# Namespace: com.sun.star.i18n
# Libre Office Version: 7.3
from enum import Enum
class TransliterationModules(Enum):
"""
Enum Class
See Also:
`API TransliterationModules <https://api.libreoffice.org/docs/idl/ref/namespacecom_1_1sun_1_1star_1_1i18n.html#a9c57a33dd757352c82923f4c7f6cf93c>`_
"""
__ooo_ns__: str = 'com.sun.star.i18n'
__ooo_full_ns__: str = 'com.sun.star.i18n.TransliterationModules'
__ooo_type_name__: str = 'enum'
END_OF_MODULE = 'END_OF_MODULE'
"""
"""
FULLWIDTH_HALFWIDTH = 'FULLWIDTH_HALFWIDTH'
"""
Transliterate a string from full width character to half width character.
"""
HALFWIDTH_FULLWIDTH = 'HALFWIDTH_FULLWIDTH'
"""
Transliterate a string from half width character to full width character.
"""
HIRAGANA_KATAKANA = 'HIRAGANA_KATAKANA'
"""
Transliterate a Japanese string from Hiragana to Katakana.
"""
IGNORE_CASE = 'IGNORE_CASE'
"""
Ignore case when comparing strings by transliteration service.
"""
IGNORE_KANA = 'IGNORE_KANA'
"""
Ignore Hiragana and Katakana when comparing strings by transliteration service.
"""
IGNORE_MASK = 'IGNORE_MASK'
"""
"""
IGNORE_WIDTH = 'IGNORE_WIDTH'
"""
    Ignore full width and half width characters when comparing strings by transliteration service.
"""
IgnoreBaFa_ja_JP = 'IgnoreBaFa_ja_JP'
"""
Ignore Katakana and Hiragana Ba/Gua and Ha/Fa in Japanese fuzzy search.
"""
IgnoreHyuByu_ja_JP = 'IgnoreHyuByu_ja_JP'
"""
Ignore Katakana and Hiragana Hyu/Fyu and Byu/Gyu in Japanese fuzzy search.
"""
IgnoreIandEfollowedByYa_ja_JP = 'IgnoreIandEfollowedByYa_ja_JP'
"""
    Ignore Katakana YA/A following the character in either I or E row in Japanese fuzzy search.
"""
IgnoreIterationMark_ja_JP = 'IgnoreIterationMark_ja_JP'
"""
Ignore Hiragana and Katakana iteration mark in Japanese fuzzy search.
"""
IgnoreKiKuFollowedBySa_ja_JP = 'IgnoreKiKuFollowedBySa_ja_JP'
"""
    Ignore Katakana KI/KU following the character in SA column in Japanese fuzzy search.
"""
IgnoreMiddleDot_ja_JP = 'IgnoreMiddleDot_ja_JP'
"""
Ignore middle dot in Japanese fuzzy search.
"""
IgnoreMinusSign_ja_JP = 'IgnoreMinusSign_ja_JP'
"""
Ignore dash or minus sign in Japanese fuzzy search.
"""
IgnoreProlongedSoundMark_ja_JP = 'IgnoreProlongedSoundMark_ja_JP'
"""
Ignore Japanese prolonged sound mark in Japanese fuzzy search.
"""
IgnoreSeZe_ja_JP = 'IgnoreSeZe_ja_JP'
"""
Ignore Katakana and Hiragana Se/Sye and Ze/Je in Japanese fuzzy search.
"""
IgnoreSeparator_ja_JP = 'IgnoreSeparator_ja_JP'
"""
Ignore separator punctuations in Japanese fuzzy search.
"""
IgnoreSize_ja_JP = 'IgnoreSize_ja_JP'
"""
Ignore Japanese normal and small sized character in Japanese fuzzy search.
"""
IgnoreSpace_ja_JP = 'IgnoreSpace_ja_JP'
"""
Ignore white space characters, include space, TAB, return, etc. in Japanese fuzzy search.
"""
IgnoreTiJi_ja_JP = 'IgnoreTiJi_ja_JP'
"""
Ignore Katakana and Hiragana Tsui/Tea/Ti and Dyi/Ji in Japanese fuzzy search.
"""
IgnoreTraditionalKana_ja_JP = 'IgnoreTraditionalKana_ja_JP'
"""
    Ignore Japanese traditional Katakana and Hiragana characters in Japanese fuzzy search.
"""
IgnoreTraditionalKanji_ja_JP = 'IgnoreTraditionalKanji_ja_JP'
"""
    Ignore Japanese traditional Kanji characters in Japanese fuzzy search.
"""
IgnoreZiZu_ja_JP = 'IgnoreZiZu_ja_JP'
"""
Ignore Katakana and Hiragana Zi/Zi and Zu/Zu in Japanese fuzzy search.
"""
KATAKANA_HIRAGANA = 'KATAKANA_HIRAGANA'
"""
Transliterate a Japanese string from Katakana to Hiragana.
"""
LOWERCASE_UPPERCASE = 'LOWERCASE_UPPERCASE'
"""
Transliterate a string from lower case to upper case.
"""
LargeToSmall_ja_JP = 'LargeToSmall_ja_JP'
"""
transliterate Japanese normal sized character to small sized character
"""
NON_IGNORE_MASK = 'NON_IGNORE_MASK'
"""
"""
NumToTextFormalHangul_ko = 'NumToTextFormalHangul_ko'
"""
Transliterate an ASCII number string to formal Korean Hangul number string in spellout format.
"""
NumToTextFormalLower_ko = 'NumToTextFormalLower_ko'
"""
Transliterate an ASCII number string to formal Korean Hanja lower case number string in spellout format.
"""
NumToTextFormalUpper_ko = 'NumToTextFormalUpper_ko'
"""
Transliterate an ASCII number string to formal Korean Hanja upper case number string in spellout format.
"""
NumToTextLower_zh_CN = 'NumToTextLower_zh_CN'
"""
Transliterate an ASCII number string to Simplified Chinese lower case number string in spellout format.
"""
NumToTextLower_zh_TW = 'NumToTextLower_zh_TW'
"""
Transliterate an ASCII number string to Traditional Chinese lower case number string in spellout format.
"""
NumToTextUpper_zh_CN = 'NumToTextUpper_zh_CN'
"""
Transliterate an ASCII number string to Simplified Chinese upper case number string in spellout format.
"""
NumToTextUpper_zh_TW = 'NumToTextUpper_zh_TW'
"""
Transliterate an ASCII number string to Traditional Chinese upper case number string in spellout format.
"""
SmallToLarge_ja_JP = 'SmallToLarge_ja_JP'
"""
transliterate Japanese small sized character to normal sized character
"""
UPPERCASE_LOWERCASE = 'UPPERCASE_LOWERCASE'
"""
Transliterate a string from upper case to lower case.
"""
__all__ = ['TransliterationModules']
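# A minimal usage sketch (not part of the generated file): members are
# string-valued, so a module name can be handed on to the UNO
# transliteration service by name; only the enum access is shown here.
if __name__ == '__main__':
    mode = TransliterationModules.IGNORE_CASE
    print(mode.value)  # -> 'IGNORE_CASE'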
| 35.005128 | 155 | 0.713742 | 824 | 6,826 | 5.730583 | 0.274272 | 0.030496 | 0.063532 | 0.088945 | 0.372935 | 0.311309 | 0.233799 | 0.193139 | 0.11817 | 0.089792 | 0 | 0.008572 | 0.213888 | 6,826 | 194 | 156 | 35.185567 | 0.871413 | 0.127454 | 0 | 0 | 0 | 0 | 0.353437 | 0.160532 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.023256 | 0 | 0.976744 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
16fe9a9c206b9e8c7de3aca2c90042c644707331 | 342 | py | Python | main.py | freakyLuffy/Teleuserbot | d5871e919b37d6b63de7e3115fd9d1d3bb6ce33b | [
"MIT"
] | null | null | null | main.py | freakyLuffy/Teleuserbot | d5871e919b37d6b63de7e3115fd9d1d3bb6ce33b | [
"MIT"
] | null | null | null | main.py | freakyLuffy/Teleuserbot | d5871e919b37d6b63de7e3115fd9d1d3bb6ce33b | [
"MIT"
] | 1 | 2021-09-06T08:57:43.000Z | 2021-09-06T08:57:43.000Z | from start import client
from modules import codeforces,delete,notes,hastebin,pin,pm,user,spam,rextester
white=[]
from telethon import TelegramClient,events
import logging
logging.basicConfig(format='[%(levelname) 5s/%(asctime)s] %(name)s: %(message)s',
level=logging.WARNING)
client.start().run_until_disconnected()
| 24.428571 | 81 | 0.739766 | 43 | 342 | 5.837209 | 0.744186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003378 | 0.134503 | 342 | 13 | 82 | 26.307692 | 0.844595 | 0 | 0 | 0 | 0 | 0 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
e508750914ec307fdf73cf0c5c0d5889efc9baed | 349 | py | Python | perm-comb-finder/count-uniques.py | catseye/NaNoGenLab | 3e4a7314e6023557856e1cc910e9d0edc4daf43c | [
"Unlicense"
] | 20 | 2015-06-05T14:02:12.000Z | 2021-11-02T22:19:18.000Z | perm-comb-finder/count-uniques.py | catseye/NaNoGenLab | 3e4a7314e6023557856e1cc910e9d0edc4daf43c | [
"Unlicense"
] | 1 | 2015-10-15T12:58:35.000Z | 2015-10-15T12:58:35.000Z | perm-comb-finder/count-uniques.py | catseye/NaNoGenLab | 3e4a7314e6023557856e1cc910e9d0edc4daf43c | [
"Unlicense"
] | 1 | 2021-04-08T23:50:06.000Z | 2021-04-08T23:50:06.000Z | #!/usr/bin/env python
import sys
import re
words = []
for line in sys.stdin:
words.extend(line.strip().split())
def clean(w):
w = w.replace("'", "")
return re.match('^.*?([a-zA-Z0-9]+).*?$', w).group(1).upper()
words = [clean(w) for w in words]
print len(words), len(set(words))
assert len(words) == len(set(words))
print sorted(words)
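# Usage sketch (file name is an assumption): feed whitespace-separated
# text on stdin, e.g.  python count-uniques.py < words.txt
# The assert above aborts whenever the input contains duplicate words, so
# the sorted list only prints for all-unique input.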
| 23.266667 | 65 | 0.616046 | 58 | 349 | 3.706897 | 0.568966 | 0.055814 | 0.102326 | 0.130233 | 0.176744 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010101 | 0.148997 | 349 | 14 | 66 | 24.928571 | 0.713805 | 0.057307 | 0 | 0 | 0 | 0 | 0.070122 | 0.067073 | 0 | 0 | 0 | 0 | 0.083333 | 0 | null | null | 0 | 0.166667 | null | null | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e50fb4a27317e4ddcda68870a1cc74e298cdcda3 | 3,705 | py | Python | find_entity/probable_acmation.py | Mleader2/bert_music_correct | 8218abe595e602436f2ace8b7e6abc743b4a49d1 | [
"Apache-2.0"
] | 6 | 2020-08-05T07:57:46.000Z | 2022-03-01T08:26:43.000Z | find_entity/probable_acmation.py | Mleader2/bert_music_correct | 8218abe595e602436f2ace8b7e6abc743b4a49d1 | [
"Apache-2.0"
] | 3 | 2020-11-13T19:03:58.000Z | 2021-08-25T16:12:34.000Z | find_entity/probable_acmation.py | Mleader2/bert_music_correct | 8218abe595e602436f2ace8b7e6abc743b4a49d1 | [
"Apache-2.0"
] | 4 | 2020-08-05T07:57:47.000Z | 2022-01-29T09:20:17.000Z | # Discover probable entities, to assist training
# Build the probable-entity discovery tool with an Aho-Corasick (AC) automaton
import os
from collections import defaultdict
import json
import re
from .acmation import KeywordTree, add_to_ac, entity_files_folder, entity_folder
from curLine_file import curLine, normal_transformer
domain2entity_map = {}
domain2entity_map["music"] = ["age", "singer", "song", "toplist", "theme", "style", "scene", "language", "emotion", "instrument"]
domain2entity_map["navigation"] = ["custom_destination", "city"] # place city
domain2entity_map["phone_call"] = ["phone_num", "contact_name"]
re_phoneNum = re.compile("[0-9一二三四五六七八九十拾]+")  # precompiled; matches Arabic digits and Chinese numerals
# Reading the downloaded xls file directly might be more convenient, but that would require installing the xlrd module
self_entity_trie_tree = {}  # overall entity dictionary: self-built tries for certain entity types
for domain, entity_type_list in domain2entity_map.items():
print(curLine(), domain, entity_type_list)
for entity_type in entity_type_list:
if entity_type not in self_entity_trie_tree:
ac = KeywordTree(case_insensitive=True)
else:
ac = self_entity_trie_tree[entity_type]
# TODO
if entity_type == "city":
# for current_entity_type in ["city", "province"]:
# entity_file = waibu_folder + "%s.json" % current_entity_type
# with open(entity_file, "r") as f:
# current_entity_dict = json.load(f)
# print(curLine(), "get %d %s from %s" %
# (len(current_entity_dict), current_entity_type, entity_file))
# for entity_before, entity_times in current_entity_dict.items():
# entity_after = entity_before
# add_to_ac(ac, entity_type, entity_before, entity_after, pri=1)
            ## place names mined from the annotated corpus
for current_entity_type in ["destination", "origin"]:
entity_file = os.path.join(entity_files_folder, "%s.json" % current_entity_type)
with open(entity_file, "r") as f:
current_entity_dict = json.load(f)
print(curLine(), "get %d %s from %s" %
(len(current_entity_dict), current_entity_type, entity_file))
for entity_before, entity_after_times in current_entity_dict.items():
entity_after = entity_after_times[0]
add_to_ac(ac, entity_type, entity_before, entity_after, pri=2)
            input(curLine())  # debug pause: blocks here for manual inspection
        # the supplied entity lexicon has the highest priority
entity_file = os.path.join(entity_folder, "%s.txt" % entity_type)
if os.path.exists(entity_file):
with open(entity_file, "r") as fr:
lines = fr.readlines()
print(curLine(), "get %d %s from %s" % (len(lines), entity_type, entity_file))
for line in lines:
entity_after = line.strip()
entity_before = entity_after # TODO
pri = 3
if entity_type in ["song"]:
pri -= 0.5
add_to_ac(ac, entity_type, entity_before, entity_after, pri=pri)
ac.finalize()
self_entity_trie_tree[entity_type] = ac
def get_all_entity(corpus, useEntityTypeList):
self_entityTypeMap = defaultdict(list)
for entity_type in useEntityTypeList:
result = self_entity_trie_tree[entity_type].search(corpus)
for res in result:
after, priority = res.meta_data
self_entityTypeMap[entity_type].append({'before': res.keyword, 'after': after, "priority":priority})
if "phone_num" in useEntityTypeList:
token_numbers = re_phoneNum.findall(corpus)
for number in token_numbers:
self_entityTypeMap["phone_num"].append({'before':number, 'after':number, 'priority': 2})
return self_entityTypeMap | 46.3125 | 129 | 0.624022 | 440 | 3,705 | 4.963636 | 0.286364 | 0.105311 | 0.046703 | 0.041209 | 0.387363 | 0.339286 | 0.267399 | 0.267399 | 0.255952 | 0.213828 | 0 | 0.005189 | 0.271795 | 3,705 | 80 | 130 | 46.3125 | 0.804299 | 0.169771 | 0 | 0 | 0 | 0 | 0.090016 | 0 | 0 | 0 | 0 | 0.0125 | 0 | 1 | 0.017857 | false | 0 | 0.107143 | 0 | 0.142857 | 0.053571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
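# A hedged usage sketch (assumed input; not part of the module):
# get_all_entity('call 13912345678 now', useEntityTypeList=['phone_num'])
# -> {'phone_num': [{'before': '13912345678', 'after': '13912345678',
#                    'priority': 2}]}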
e511045dc9d1f3b8b876038d32067f643c1a8989 | 46,886 | py | Python | scripts/merge.py | CHREC/drseus | 085a4f413088606455e85f8fd83b96bf09c2f260 | [
"Unlicense"
] | 1 | 2020-06-17T02:29:22.000Z | 2020-06-17T02:29:22.000Z | scripts/merge.py | CHREC/drseus | 085a4f413088606455e85f8fd83b96bf09c2f260 | [
"Unlicense"
] | 1 | 2019-09-17T22:38:39.000Z | 2021-03-23T14:52:51.000Z | scripts/merge.py | CHREC/drseus | 085a4f413088606455e85f8fd83b96bf09c2f260 | [
"Unlicense"
] | 4 | 2019-09-17T22:42:31.000Z | 2021-07-23T14:39:37.000Z | #!python/bin/python3
"""
Copyright (c) 2018 NSF Center for Space, High-performance, and Resilient Computing (SHREC)
University of Pittsburgh. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided
that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS AS IS AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
OF SUCH DAMAGE.
"""
# TODO: add key verification
from collections import defaultdict
from copy import deepcopy
from importlib import import_module
from os.path import abspath, dirname
from sys import path
path.append(dirname(dirname(abspath(__file__))))
targets = import_module('src.targets')
load_targets = targets.load_targets
save_targets = targets.save_targets
def tree():
return defaultdict(tree)
missing_total = 0
for device in ['a9', 'p2020']:
jtag_targets = load_targets(device, 'jtag')
simics_targets = load_targets(device, 'simics')
merged_targets = tree()
for old_type in ['simics', 'jtag']: # do simics first because of preprocess
print('\nerrors for', device, old_type)
if old_type == 'jtag':
old_targets = jtag_targets
other_type = 'simics'
other_targets = simics_targets
elif old_type == 'simics':
old_targets = simics_targets
other_type = 'jtag'
other_targets = jtag_targets
else:
raise Exception('unrecognized old_type')
for target in sorted(old_targets.keys()):
if 'unused_targets' not in merged_targets[other_type]:
merged_targets[other_type]['unused_targets'] = []
old_target = old_targets[target]
merged_target = merged_targets['targets'][target]
if target == 'TLB':
merged_target['type'] = 'tlb'
if 'is_gcache' in old_target and old_target['is_gcache']:
merged_target['type'] = 'gcache'
if target not in other_targets:
other_target = None
merged_targets[other_type]['unused_targets'].append(target)
# print('target in '+old_type+' but not '+other_type+':',
# target)
else:
other_target = other_targets[target]
for key in [key for key in old_target.keys()
if key not in ['registers', 'unused_registers']]:
if key == 'base':
merged_target[key] = old_target[key]
elif key == 'count':
if old_target[key] != 1:
merged_target[key] = old_target[key]
elif key in ['CP', 'memory_mapped']:
if not old_target[key]:
print('unexpected value for :', target+'['+key+']:',
old_target[key])
merged_target['type'] = key
elif key == 'OBJECT':
merged_targets[old_type]['targets'][target]['object'] = \
old_target['OBJECT']
elif key == 'limitations':
merged_targets[old_type]['targets'][target][key] = \
old_target[key]
else:
print('* key:', key, 'in target:', target)
if old_type == 'simics':
registers = list(old_target['registers'].keys())
if 'unused_registers' in old_target:
unused_registers = list(
old_target['unused_registers'].keys())
else:
unused_registers = []
for register in registers + unused_registers:
unused_register = register in unused_registers
if unused_register:
old_register = old_target['unused_registers'][register]
else:
old_register = old_target['registers'][register]
if 'count' in old_register:
count = old_register['count']
else:
count = []
if register == 'fpgprs':
for i in range(old_register['count'][0]):
old_target['registers']['fpr'+str(i)] = {'alias': {
'register': register, 'register_index': [i]}}
del old_target['registers'][register]
continue
if other_target is not None and \
register not in other_target['registers']:
matches = []
for other_register in other_target['registers']:
if count:
if other_register.startswith(register):
try:
try:
index = int(
other_register.replace(
register+'_', ''))
except:
index = int(
other_register.replace(
register, ''))
if register == 'SRDSCR' and \
index > 3:
index -= 1
elif register == 'DDR_SDRAM_RCW':
index -= 1
match = (other_register, [index])
matches.append(match)
# print(register, count, match)
continue
except:
pass
if register == 'MSI_MSISR' and \
other_register == 'MSISR':
match = ('MSISR', [0])
matches.append(match)
# print(register, count, match)
if register == 'usb_regs_prtsc' and \
other_register == 'PORTSC':
match = ('PORTSC', [0])
matches.append(match)
# print(register, count, match)
if register == 'PM_MR' and \
'MR' in other_register:
try:
index1 = int(other_register[2])
index2 = int(other_register[5])
match = (other_register,
[index1, index2])
matches.append(match)
# print(register, count, match)
continue
except:
pass
if register == 'IADDR' and \
other_register.startswith('IGADDR'):
index = int(other_register.replace(
'IGADDR', ''))
match = (other_register, [index])
matches.append(match)
# print(register, count, match)
continue
if register.startswith('MAC_ADD') and \
'MAC' in other_register and \
'ADDR' in other_register and \
register[-1] == other_register[-1]:
try:
index = int(other_register[3:5])-1
match = (other_register, [index])
matches.append(match)
# print(register, count, match)
continue
except:
pass
if register == 'PEX_outbound_OTWBAR' and \
other_register.startswith('PEXOWBAR'):
try:
index = int(
other_register.replace(
'PEXOWBAR', ''))
match = (other_register, [index])
matches.append(match)
# print(register, count, match)
continue
except:
pass
if len(register.split('_')) > 1:
reg = register.split('_')
if reg[-1][-1] == 'n':
reg[-1] = reg[-1][:-1]
if reg[0] == 'CS' and '_'.join(reg[1:]) == \
'_'.join(
other_register.split('_')[1:]):
try:
index = int(
other_register.split(
'_')[0].replace('CS', ''))
match = (other_register, [index])
matches.append(match)
# print(register, count, match)
continue
except:
pass
if reg[0] == 'PEX' and \
other_register.startswith(
'PEX'+reg[-1]):
try:
index = int(
other_register.replace(
'PEX'+reg[-1], ''))
if reg[-1] in ['IWAR', 'IWBAR',
'ITAR', 'IWBEAR']:
index -= 1
match = (other_register, [index])
matches.append(match)
# print(register, count, match)
continue
except:
pass
if reg[0] == 'MSG' and \
reg[1] in ['MER', 'MSR']:
if other_register.startswith(reg[1]):
if other_register[-1] == 'a':
index = 1
else:
index = 0
match = (other_register, [index])
matches.append(match)
# print(register, count, match)
continue
if reg[0] == 'GT' and \
reg[1] in ['TFRR', 'TCR']:
if other_register.startswith(reg[1]):
if other_register[-1] == 'A':
index = 0
elif other_register[-1] == 'B':
index = 1
match = (other_register, [index])
matches.append(match)
# print(register, count, match)
continue
if register == 'regs_port_' \
'port_error_and_status':
reg[2] = 'ESCSR'
if register == 'regs_port_port_control':
reg[2] = 'CCSR'
if register == 'regs_port_port_error_rate':
reg[2] = 'ERCSR'
if register == 'regs_port_' \
'capture_attributes':
reg[2] = 'ECACSR'
if register == 'regs_port_' \
'port_error_detect':
reg[2] = 'EDCSR'
if register == 'regs_port_' \
'port_error_rate_enable':
reg[2] = 'ERECSR'
if register == 'regs_port_' \
'port_error_rate_threshold':
reg[2] = 'ERTCSR'
if register == 'regs_port_capture_symbol':
reg[2] = 'PCSECCSR0'
if register == 'regs_port_' \
'port_local_ackid_status':
reg[2] = 'LASCSR'
if register == 'regs_port_' \
'capture_packet_1':
reg[2] = 'PECCSR1'
if register == 'regs_port_' \
'capture_packet_2':
reg[2] = 'PECCSR2'
if register == 'regs_port_' \
'capture_packet_3':
reg[2] = 'PECCSR3'
if register == 'regs_port_' \
'link_maintenance_request':
reg[2] = 'LMREQCSR'
if register == 'regs_port_' \
'link_maintenance_response':
reg[2] = 'LMRESPCSR'
if reg[1] == 'port' and \
reg[2] in other_register and \
other_register[0] == 'P':
if len(count) == 1:
try:
index = int(other_register[1])
if other_register == \
'P'+str(index)+reg[2]:
index -= 1
match = (other_register,
[index])
matches.append(match)
# print(register, count,
# match)
continue
except:
pass
elif len(count) == 2:
try:
index1 = int(other_register[1])
index1 -= 1
index2 = int(other_register[-1])
if reg[2] in ['RIWTAR',
'ROWTAR',
'ROWTEAR',
'RIWAR',
'ROWAR',
'ROWBAR',
'RIWBAR',
'ROWS1R',
'ROWS2R',
'ROWS3R']:
if index2 == 0:
continue
print('unexpected 0 '
'index')
else:
index2 -= 1
match = (other_register,
[index1, index2])
matches.append(match)
# print(register, count, match)
continue
except:
pass
if reg[1] == 'M':
r = None
if reg[2].startswith('EI'):
s = 'EIM'
r = reg[2][2:]
i = 3
elif reg[2].startswith('EO'):
s = 'EOM'
r = reg[2][2:]
i = 3
elif reg[2].startswith('I'):
s = 'IM'
r = reg[2][1:]
i = 2
elif reg[2].startswith('O'):
s = 'OM'
r = reg[2][1:]
i = 2
if r is not None and \
other_register.endswith(r) and \
other_register.startswith(s):
try:
index = int(other_register[i])
match = (other_register,
[index])
matches.append(match)
# print(register, count, match)
continue
except:
pass
if other_register.startswith(reg[-1]):
index = None
try:
index = int(other_register[-2:])
except:
try:
index = int(other_register[-1])
except:
pass
if index is not None:
if len(count) == 1:
if register == 'regs_SNOOP':
index -= 1
match = (other_register,
[index])
elif len(count) == 2:
if count[0] == 1:
match = (other_register,
[0, index])
elif other_register[-2] == 'a':
match = (other_register,
[1,
index % count[1]])
elif other_register[-2] == 'A':
match = (other_register,
[0, index])
elif other_register[-2] == 'B':
match = (other_register,
[1, index])
elif register == \
'P_IPIDR' and \
other_register.endswith(
'CPU' +
str(index).zfill(2)):
if index >= 10:
index1 = 2
else:
index1 = 1
match = (
other_register,
[index1,
index % 10])
else:
match = (other_register,
[int(index /
count[1]),
index % count[1]])
matches.append(match)
# print(register, count, match)
continue
else:
if other_register == register.upper():
match = (register.upper(), None)
matches.append(match)
# print(register, count, match)
break
if len(register.split('_')) > 1:
reg = register.split('_')
if register == 'regs_layer_error_detect':
reg[-1] = 'LTLEDCSR'
if register == 'regs_layer_error_enable':
reg[-1] = 'LTLEECSR'
if register == 'regs_src_operations':
reg[-1] = 'SOCAR'
if register == 'regs_pe_ll_status':
reg[-1] = 'PELLCCSR'
if register == 'regs_pe_features':
reg[-1] = 'PEFCAR'
if register == 'regs_DMIRIR':
reg[-1] = 'IDMIRIR'
if register == 'regs_error_block_header':
reg[-1] = 'ERBH'
if register == 'regs_dst_operations':
reg[-1] = 'DOCAR'
if register == 'regs_assembly_id':
reg[-1] = 'AIDCAR'
if register == 'regs_assembly_info':
reg[-1] = 'AICAR'
if register == 'regs_port_link_timeout':
reg[-1] = 'PLTOCCSR'
if register == 'regs_base_device_id':
reg[-1] = 'BDIDCSR'
if register == 'regs_component_tag':
reg[-1] = 'CTCSR'
if register == 'regs_device_info':
reg[-1] = 'DICAR'
if register == 'regs_device_id':
reg[-1] = 'DIDCAR'
if register == 'regs_port_block_header':
reg[-1] = 'PMBH0'
if register == 'regs_host_base_device_id':
reg[-1] = 'HBDIDLCSR'
if register == 'regs_port_general_control':
reg[-1] = 'PGCCSR'
if register == 'regs_ODRS':
reg[-1] = 'ODSR'
if register == 'regs_base1_status':
reg[-1] = 'LCSBA1CSR'
if register == 'regs_write_port_status':
reg[-1] = 'PWDCSR'
if register == 'regs_layer_capture_address':
reg[-1] = 'LTLACCSR'
if register == 'regs_layer_capture_control':
reg[-1] = 'LTLCCCSR'
if register == \
'regs_layer_capture_device_id':
reg[-1] = 'LTLDIDCCSR'
if register == '':
reg[-1] = ''
if register == '':
reg[-1] = ''
if other_register == reg[-1]:
match = (reg[-1], None)
matches.append(match)
# print(register, count, match)
break
if reg[0] == 'regs' and other_register == \
'_'.join(reg[1:]):
match = (other_register, None)
matches.append(match)
# print(register, count, match)
break
if other_register == reg[-1].upper():
match = (reg[-1].upper(), None)
matches.append(match)
# print(register, count, match)
break
if other_register == 'I'+reg[-1]:
match = ('I'+reg[-1], None)
matches.append(match)
# print(register, count, match)
break
if reg[-1].startswith('E') and \
other_register.startswith('E') and \
other_register == 'EI'+reg[-1][1:]:
match = ('EI'+reg[-1][1:], None)
matches.append(match)
# print(register, count, match)
break
if reg[-1] == 'ADDRESS' and \
other_register == \
register.replace('ADDRESS',
'ADDR'):
match = (register.replace('ADDRESS',
'ADDR'),
None)
matches.append(match)
# print(register, count, match)
break
if register == 'DDR_SDRAM_INIT':
match = ('DDR_DATA_INIT', None)
matches.append(match)
# print(register, count, match)
break
if register == 'ECMIP_REV1':
match = ('EIPBRR1', None)
matches.append(match)
# print(register, count, match)
break
if register == 'ECMIP_REV2':
match = ('EIPBRR2', None)
matches.append(match)
# print(register, count, match)
break
if register == 'IOVSELCR':
match = ('IOVSELSR', None)
matches.append(match)
# print(register, count, match)
break
if matches:
matches.sort(key=lambda x: x[0])
correct = True
if len(count) > 0:
counts0 = [match[1][0] for match in matches
if len(match[1]) == 1 or
(len(match[1]) == 2 and
match[1][1] == 0)]
missing = []
extra = []
for i in range(count[0]):
if i not in counts0:
correct = False
missing.append([i])
else:
counts0.remove(i)
if len(count) == 2:
counts1 = [match[1][1]
for match in matches
if match[1][0] == i]
for j in range(count[1]):
if j not in counts1:
correct = False
missing.append([i, j])
else:
counts1.remove(j)
if counts1:
extra.append([i, counts1])
correct = False
if counts0:
extra.extend(counts0)
correct = False
elif len(matches) > 1:
correct = False
if correct or register in [
'PEX_inbound_IWBEAR',
'PEX_outbound_OTWBAR',
'P_CTPR',
'regs_ENDPTCTRL',
'P_IPIDR']:
# print('matches:', register, count,
# matches)
if 'count' in old_register:
del old_register['count']
if len(list(old_register.keys())) > 0:
new_register = deepcopy(old_register)
else:
new_register = tree()
if unused_register:
del (old_target['unused_registers']
[register])
else:
del old_target['registers'][register]
for match in matches:
temp_register = deepcopy(new_register)
temp_register['alias'] = {
'register': register}
if match[1] is not None:
(temp_register['alias']
['register_index']) = \
match[1]
if unused_register:
(old_target['unused_registers']
[match[0]]) = \
temp_register
else:
(old_target['registers']
[match[0]]) = \
temp_register
else:
print('\n\nincorrect match for',
register, count)
print(matches)
if missing:
print('missing', missing)
if extra:
print('extra', extra)
print('\n\n')
else:
if target in ['CPU', 'TLB', 'GPR'] or \
('CP' in old_target and old_target['CP']):
merged_targets['targets'][target]['core'] = True
missing = []
registers = list(old_target['registers'].keys())
if 'unused_registers' in old_target:
unused_registers = list(old_target['unused_registers'].keys())
else:
unused_registers = []
for register in registers + unused_registers:
unused_register = register in unused_registers
if unused_register:
old_register = old_target['unused_registers'][register]
else:
old_register = old_target['registers'][register]
other_register = None
if other_target is not None:
if register in other_target['registers']:
other_register = other_target['registers'][register]
elif 'unused_registers' in other_target and \
register in other_target['unused_registers']:
other_register = \
other_target['unused_registers'][register]
merged_register = merged_target['registers'][register]
if unused_register:
(merged_targets[old_type]
['targets'][target]
['unused_registers'][register])
if other_target is not None and other_register is None:
(merged_targets[other_type]
['targets'][target]
['unused_registers'][register])
missing.append(register)
missing_total += 1
elif other_target is not None and other_register is None:
(merged_targets[other_type]
['targets'][target]
['unused_registers'][register])
missing.append(register)
missing_total += 1
for key in [key for key in old_register.keys()
if key not in ['fields', 'unused_fields']]:
if key in ['access', 'CP', 'CRm', 'CRn', 'Op1', 'Op2',
'PMR', 'SPR', 'offset', 'limitations']:
merged_register[key] = old_register[key]
elif key == 'alias':
if unused_register:
(merged_targets[old_type]
['targets'][target]
['unused_registers'][register]
[key]) = old_register[key]
else:
(merged_targets[old_type]
['targets'][target]
['registers'][register]
[key]) = old_register[key]
elif key == 'actual_bits':
if old_register[key] != 32:
merged_register['bits'] = old_register[key]
elif key == 'bits':
if 'actual_bits' not in old_register:
merged_register[key] = old_register[key]
elif key == 'partial':
if 'unused_fields' not in old_register:
print('* partial register:', register,
'in target:', target)
elif key == 'count' and old_type == 'simics' and \
other_register is None:
merged_register[key] = old_register[key]
elif key in ['is_tlb', 'is_gcache']:
pass
else:
print('* key:', key, 'value:', old_register[key],
'in register:', register, 'in target:', target)
if 'fields' in old_register:
if old_type == 'jtag' or target == 'TLB' or \
other_register is None:
merged_register['fields'] = old_register['fields']
if 'unused_fields' in old_register:
merged_register['fields'].extend(
old_register['unused_fields'])
elif other_register is not None and \
'fields' in other_register:
other_fields = other_register['fields']
other_fields.sort(key=lambda x: x[1][1],
reverse=True)
fields = old_register['fields']
if 'unused_fields' in old_register:
fields.extend(old_register['unused_fields'])
fields.sort(key=lambda x: x[1][1], reverse=True)
for i in range(len(fields)):
if fields[i][0] != other_fields[i][0]:
print('field name mismatch',
target, register,
fields[i][0], other_fields[i][0])
elif fields[i][1][0] != \
other_fields[i][1][0] or \
fields[i][1][1] != \
other_fields[i][1][1]:
print('field range mismatch',
target, register, fields[i][0])
if 'unused_fields' in old_register:
if 'actual_bits' not in old_register:
print('* unused_fields found, but missing actual_bits',
register)
if 'partial' not in old_register:
print('* unused_fields found, but missing partial',
register)
if 'unused_fields' not in (
merged_targets[old_type]
['targets'][target]
['registers'][register]):
(merged_targets[old_type]
['targets'][target]
['registers'][register]
['unused_fields']) = []
unused_fields = (merged_targets[old_type]
['targets'][target]
['registers'][register]
['unused_fields'])
for field in old_register['unused_fields']:
if field[0] in unused_fields:
print('* duplicate field', register, field[0])
unused_fields.append(field[0])
unused_fields.sort()
if 'fields' in merged_register and \
target not in ['TLB', 'L1DCACHE0', 'L1ICACHE0',
'L1DCACHE1', 'L1ICACHE1', 'L2CACHE']:
merged_register['fields'].sort(key=lambda x: x[1][0],
reverse=True)
if 'bits' in merged_register:
bits = merged_register['bits']
else:
bits = 32
for field in merged_register['fields']:
bits -= (field[1][1] - field[1][0]) + 1
if bits != 0:
print('* bit count error:', register,
merged_register['fields'])
if missing:
print('*', target, 'registers in', old_type,
'but not', other_type+':', ', '.join(sorted(missing)))
if 'unused_targets' in merged_targets[other_type]:
if merged_targets[other_type]['unused_targets']:
merged_targets[other_type]['unused_targets'].sort()
else:
del merged_targets[other_type]['unused_targets']
save_targets('', device, merged_targets)
print('\ntotal missing registers:', missing_total)
for device in ['a9', 'p2020']:
merged_targets = load_targets('', device)
for type_ in 'jtag', 'simics':
for target in list(merged_targets[type_]['targets'].keys()):
if not merged_targets[type_]['targets'][target]:
del merged_targets[type_]['targets'][target]
else:
if 'unused_registers' in \
merged_targets[type_]['targets'][target] and \
not (merged_targets[type_]
['targets'][target]
['unused_registers']):
del (merged_targets[type_]
['targets'][target]
['unused_registers'])
if device == 'p2020':
merged_targets['jtag']['unused_targets'].append('RAPIDIO')
merged_targets['jtag']['unused_targets'].sort()
reg_count = 0
for target in list(merged_targets['targets'].keys()):
reg_count += len(list(
merged_targets['targets'][target]['registers'].keys()))
print(device, 'register count:', reg_count)
save_targets('', device, merged_targets)
| 57.812577 | 131 | 0.316214 | 3,016 | 46,886 | 4.743369 | 0.134615 | 0.072697 | 0.038166 | 0.041801 | 0.505452 | 0.405145 | 0.319516 | 0.287082 | 0.258213 | 0.247449 | 0 | 0.015723 | 0.610673 | 46,886 | 810 | 132 | 57.883951 | 0.767996 | 0.050569 | 0 | 0.378788 | 0 | 0 | 0.07836 | 0.01068 | 0 | 0 | 0 | 0.001235 | 0 | 1 | 0.001377 | false | 0.015152 | 0.008264 | 0.001377 | 0.011019 | 0.027548 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e51aaa4a989eb3218049b6d066a0b24def1e1dae | 1,236 | py | Python | global_finprint/annotation/migrations/0018_migrate_obs_to_event.py | GlobalFinPrint/global_finprint | 8a91ceaaed42aaa716d8c9f27518ba673ebf351c | [
"Apache-2.0"
] | null | null | null | global_finprint/annotation/migrations/0018_migrate_obs_to_event.py | GlobalFinPrint/global_finprint | 8a91ceaaed42aaa716d8c9f27518ba673ebf351c | [
"Apache-2.0"
] | 6 | 2020-06-05T18:42:32.000Z | 2022-01-13T00:48:57.000Z | global_finprint/annotation/migrations/0018_migrate_obs_to_event.py | GlobalFinPrint/global_finprint | 8a91ceaaed42aaa716d8c9f27518ba673ebf351c | [
"Apache-2.0"
] | null | null | null | from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('annotation', '0017_auto_20160516_0030'),
]
operations = [
migrations.RunSQL('''
-- migrate initial observations to an event
insert into annotation_event
(
create_datetime,
last_modified_datetime,
observation_id,
event_time,
extent,
note,
user_id
)
select distinct
create_datetime,
last_modified_datetime,
id,
initial_observation_time,
extent,
comment,
user_id
from annotation_observation;
--skipping these for now, as there should be no data there:
-- migrate behaviors to event attributes
--INSERT INTO annotation_eventattribute () select ...;
--migrate gear to event attributes
--INSERT INTO annotation_eventattribute () select ...;
-- migrate features to event attributes
--INSERT INTO annotation_eventattribute () select ...;
'''),
]
| 25.22449 | 67 | 0.559061 | 105 | 1,236 | 6.342857 | 0.514286 | 0.06006 | 0.12012 | 0.103604 | 0.37988 | 0.277778 | 0.277778 | 0.277778 | 0.192192 | 0 | 0 | 0.020752 | 0.376214 | 1,236 | 48 | 68 | 25.75 | 0.843061 | 0 | 0 | 0.297297 | 0 | 0 | 0.820388 | 0.15534 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.054054 | 0 | 0.135135 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e522eaf34259c88d5f3b0fcb2083eb3344cecf3f | 3,925 | py | Python | src/Stele/analysis/analysis_ipg/fixes/LegendSettings_ui.py | SherwinGroup/Stele | 9bb7da0b406a801975e21c9f7ce05d369ae661e5 | [
"MIT"
] | 1 | 2021-06-22T19:38:32.000Z | 2021-06-22T19:38:32.000Z | src/Stele/analysis/analysis_ipg/fixes/LegendSettings_ui.py | SherwinGroup/Stele | 9bb7da0b406a801975e21c9f7ce05d369ae661e5 | [
"MIT"
] | 1 | 2021-08-23T20:54:25.000Z | 2021-08-23T20:54:25.000Z | src/Stele/analysis/analysis_ipg/fixes/LegendSettings_ui.py | SherwinGroup/Stele | 9bb7da0b406a801975e21c9f7ce05d369ae661e5 | [
"MIT"
] | 1 | 2017-08-16T21:05:46.000Z | 2017-08-16T21:05:46.000Z | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'C:\Users\FELLab\Documents\GitHub\Interactivepg-waffle\interactivePG\fixes\LegendSettings.ui'
#
# Created by: PyQt5 UI code generator 5.6
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_LegendSettingsDialog(object):
def setupUi(self, LegendSettingsDialog):
LegendSettingsDialog.setObjectName("LegendSettingsDialog")
LegendSettingsDialog.resize(207, 319)
self.verticalLayout = QtWidgets.QVBoxLayout(LegendSettingsDialog)
self.verticalLayout.setObjectName("verticalLayout")
self.formLayout = QtWidgets.QFormLayout()
self.formLayout.setObjectName("formLayout")
self.bBGColor = ColorButton(LegendSettingsDialog)
self.bBGColor.setText("")
self.bBGColor.setObjectName("bBGColor")
self.formLayout.setWidget(0, QtWidgets.QFormLayout.FieldRole, self.bBGColor)
self.label = QtWidgets.QLabel(LegendSettingsDialog)
self.label.setObjectName("label")
self.formLayout.setWidget(0, QtWidgets.QFormLayout.LabelRole, self.label)
self.label_2 = QtWidgets.QLabel(LegendSettingsDialog)
self.label_2.setObjectName("label_2")
self.formLayout.setWidget(1, QtWidgets.QFormLayout.LabelRole, self.label_2)
self.bBorderColor = ColorButton(LegendSettingsDialog)
self.bBorderColor.setText("")
self.bBorderColor.setObjectName("bBorderColor")
self.formLayout.setWidget(1, QtWidgets.QFormLayout.FieldRole, self.bBorderColor)
self.label_3 = QtWidgets.QLabel(LegendSettingsDialog)
self.label_3.setObjectName("label_3")
self.formLayout.setWidget(2, QtWidgets.QFormLayout.LabelRole, self.label_3)
self.sbFontSize = SpinBox(LegendSettingsDialog)
self.sbFontSize.setObjectName("sbFontSize")
self.formLayout.setWidget(2, QtWidgets.QFormLayout.FieldRole, self.sbFontSize)
self.verticalLayout.addLayout(self.formLayout)
self.groupBox = QtWidgets.QGroupBox(LegendSettingsDialog)
self.groupBox.setFlat(True)
self.groupBox.setObjectName("groupBox")
self.verticalLayout_2 = QtWidgets.QVBoxLayout(self.groupBox)
self.verticalLayout_2.setContentsMargins(0, 10, 0, 0)
self.verticalLayout_2.setObjectName("verticalLayout_2")
self.teDesc = QtWidgets.QTextEdit(self.groupBox)
self.teDesc.setObjectName("teDesc")
self.verticalLayout_2.addWidget(self.teDesc)
self.widget = QtWidgets.QWidget(self.groupBox)
self.widget.setObjectName("widget")
self.verticalLayout_2.addWidget(self.widget)
self.verticalLayout.addWidget(self.groupBox)
self.buttonBox = QtWidgets.QDialogButtonBox(LegendSettingsDialog)
self.buttonBox.setOrientation(QtCore.Qt.Horizontal)
self.buttonBox.setStandardButtons(QtWidgets.QDialogButtonBox.Cancel|QtWidgets.QDialogButtonBox.Ok)
self.buttonBox.setObjectName("buttonBox")
self.verticalLayout.addWidget(self.buttonBox)
self.retranslateUi(LegendSettingsDialog)
self.buttonBox.accepted.connect(LegendSettingsDialog.accept)
self.buttonBox.rejected.connect(LegendSettingsDialog.reject)
QtCore.QMetaObject.connectSlotsByName(LegendSettingsDialog)
def retranslateUi(self, LegendSettingsDialog):
_translate = QtCore.QCoreApplication.translate
LegendSettingsDialog.setWindowTitle(_translate("LegendSettingsDialog", "Legend Settings"))
self.label.setText(_translate("LegendSettingsDialog", "Background Color:"))
self.label_2.setText(_translate("LegendSettingsDialog", "Border Color:"))
self.label_3.setText(_translate("LegendSettingsDialog", "Font Size"))
self.groupBox.setTitle(_translate("LegendSettingsDialog", "Item Description"))
from pyqtgraph import ColorButton, SpinBox
| 53.767123 | 146 | 0.74293 | 369 | 3,925 | 7.840108 | 0.298103 | 0.037331 | 0.047701 | 0.034221 | 0.17767 | 0.091255 | 0 | 0 | 0 | 0 | 0 | 0.011487 | 0.157197 | 3,925 | 72 | 147 | 54.513889 | 0.863059 | 0.065987 | 0 | 0 | 1 | 0 | 0.084176 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.033333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5258940b9d55361af93bbe4d01f5b1dd591bfcd | 1,557 | py | Python | h/schemas.py | ssin122/test-h | c10062ae23b690afaac0ab4af7b9a5a5e4b686a9 | [
"MIT"
] | 2 | 2021-11-07T23:14:54.000Z | 2021-11-17T10:11:55.000Z | h/schemas.py | ssin122/test-h | c10062ae23b690afaac0ab4af7b9a5a5e4b686a9 | [
"MIT"
] | null | null | null | h/schemas.py | ssin122/test-h | c10062ae23b690afaac0ab4af7b9a5a5e4b686a9 | [
"MIT"
] | 1 | 2017-03-12T00:18:33.000Z | 2017-03-12T00:18:33.000Z | # -*- coding: utf-8 -*-
"""Classes for validating data passed to views."""
from __future__ import unicode_literals
import copy
import jsonschema
from jsonschema.exceptions import best_match
class ValidationError(Exception):
pass
class JSONSchema(object):
"""
Validate data according to a Draft 4 JSON Schema.
Inherit from this class and override the `schema` class property with a
valid JSON schema.
"""
schema = {}
def __init__(self):
format_checker = jsonschema.FormatChecker()
self.validator = jsonschema.Draft4Validator(self.schema,
format_checker=format_checker)
def validate(self, data):
"""
Validate `data` according to the current schema.
:param data: The data to be validated
:return: valid data
:raises ValidationError: if the data is invalid
"""
# Take a copy to ensure we don't modify what we were passed.
appstruct = copy.deepcopy(data)
error = best_match(self.validator.iter_errors(appstruct))
if error is not None:
raise ValidationError(_format_jsonschema_error(error))
return appstruct
def _format_jsonschema_error(error):
"""Format a :py:class:`jsonschema.ValidationError` as a string."""
if error.path:
dotted_path = '.'.join([str(c) for c in error.path])
return '{path}: {message}'.format(path=dotted_path,
message=error.message)
return error.message
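# --- Usage sketch, not part of the original module ---
# The JSONSchema docstring says to subclass and override `schema`; a minimal
# example with a hypothetical payload shape:
class ExampleSchema(JSONSchema):
    schema = {
        'type': 'object',
        'properties': {'text': {'type': 'string'}},
        'required': ['text'],
    }
# ExampleSchema().validate({'text': 'hi'}) returns a validated deep copy;
# ExampleSchema().validate({}) raises ValidationError.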
| 29.377358 | 82 | 0.635196 | 180 | 1,557 | 5.366667 | 0.461111 | 0.040373 | 0.043478 | 0.047619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002671 | 0.278741 | 1,557 | 52 | 83 | 29.942308 | 0.857524 | 0.311496 | 0 | 0 | 0 | 0 | 0.018182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.041667 | 0.166667 | 0 | 0.541667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e52646bd330bfa11295e34fc3b824b03dc40fe63 | 990 | py | Python | after/app.py | littlepea/python-admin-business-logic-talk | 8d020ceb175492e25cd02e5960d6e0b966017f2b | [
"MIT"
] | null | null | null | after/app.py | littlepea/python-admin-business-logic-talk | 8d020ceb175492e25cd02e5960d6e0b966017f2b | [
"MIT"
] | null | null | null | after/app.py | littlepea/python-admin-business-logic-talk | 8d020ceb175492e25cd02e5960d6e0b966017f2b | [
"MIT"
] | null | null | null | import urllib2
import json
from aqi import Station
from cache import cache
API_BASE = 'https://api.openaq.org/v1/latest'
def _get_city_url(city):
return '{}?city={}'.format(API_BASE, city)
def _load_results(url):
try:
response = urllib2.urlopen(url)
results = json.load(response)
return results['results']
    except urllib2.HTTPError:
return []
def _get_station_pm25(station):
for measurement in station['measurements']:
if measurement['parameter'] == 'pm25':
return measurement['value']
@cache
def get_stations(city):
return [
Station(
name=result['location'],
pm25=_get_station_pm25(result))
for result in _load_results(_get_city_url(city))
]
def get_recommendation(level):
if level.indoors:
return 'Please, stay indoors with purified air.'
if level.mask:
return 'Please, wear a mask if going out.'
return 'Feel free to go out!'
| 20.625 | 56 | 0.643434 | 124 | 990 | 4.975806 | 0.459677 | 0.038898 | 0.032415 | 0.045381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016151 | 0.249495 | 990 | 47 | 57 | 21.06383 | 0.814266 | 0 | 0 | 0 | 0 | 0 | 0.180808 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e52c5ee52170299fb274be7b8f2275c6c731e728 | 377 | py | Python | 24.py | arvinddoraiswamy/LearnPython | f731a7390a0d335640c23a1e76fa1eb6887b6894 | [
"MIT"
] | 10 | 2015-02-05T04:46:25.000Z | 2020-12-17T21:11:36.000Z | 24.py | arvinddoraiswamy/LearnPython | f731a7390a0d335640c23a1e76fa1eb6887b6894 | [
"MIT"
] | null | null | null | 24.py | arvinddoraiswamy/LearnPython | f731a7390a0d335640c23a1e76fa1eb6887b6894 | [
"MIT"
] | 12 | 2015-05-23T08:55:54.000Z | 2020-09-04T15:47:59.000Z | import configparser
config = configparser.ConfigParser()
print("Read the file")
config.read("config_24.txt")
print("\n")
print("Once you read you get the data in and can play with it")
l1 = config.sections()
print(config[l1[0]]['User'])
print("\n")
print("To access the DEFAULT section, do it directly - it won't show up in sections()")
print(config['DEFAULT']['ForwardX11'])
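# config_24.txt is expected to look something like this (hypothetical values):
# [DEFAULT]
# ForwardX11 = yes
#
# [bitbucket.org]
# User = hg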
| 29 | 87 | 0.718833 | 60 | 377 | 4.5 | 0.583333 | 0.044444 | 0.081481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021084 | 0.119363 | 377 | 12 | 88 | 31.416667 | 0.792169 | 0 | 0 | 0.181818 | 0 | 0 | 0.485411 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.636364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
e52d6019fd994380e0d9a2736246d284c148e9ba | 443 | py | Python | manufacturer/migrations/0002_manufacturer_website.py | skaaldig/borrowing | 70a19a1f3db3719247e42814176ed6b69afe081c | [
"MIT"
] | null | null | null | manufacturer/migrations/0002_manufacturer_website.py | skaaldig/borrowing | 70a19a1f3db3719247e42814176ed6b69afe081c | [
"MIT"
] | null | null | null | manufacturer/migrations/0002_manufacturer_website.py | skaaldig/borrowing | 70a19a1f3db3719247e42814176ed6b69afe081c | [
"MIT"
] | null | null | null | # Generated by Django 2.0.13 on 2019-05-06 17:02
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('manufacturer', '0001_move_manufacturer_and_rename'),
]
operations = [
migrations.AddField(
model_name='manufacturer',
name='website',
field=models.URLField(default='none'),
preserve_default=False,
),
]
| 22.15 | 62 | 0.611738 | 45 | 443 | 5.888889 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.062696 | 0.27991 | 443 | 19 | 63 | 23.315789 | 0.768025 | 0.103837 | 0 | 0 | 1 | 0 | 0.172152 | 0.083544 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e52e7114c4c8002c2b57e4ffbd0cc8038d0f43cb | 3,656 | py | Python | download/ABCD_download.py | ThomasYeoLab/ABCD_scripts | 980097b3d89e63dd4efd037841f135ade7d3750e | [
"MIT"
] | 2 | 2021-08-08T06:49:04.000Z | 2021-11-05T02:36:08.000Z | download/ABCD_download.py | ThomasYeoLab/ABCD_scripts | 980097b3d89e63dd4efd037841f135ade7d3750e | [
"MIT"
] | null | null | null | download/ABCD_download.py | ThomasYeoLab/ABCD_scripts | 980097b3d89e63dd4efd037841f135ade7d3750e | [
"MIT"
] | null | null | null |
# coding: utf-8
# In[36]:
import pandas
import os
import sys
import datetime
from nda_aws_token_generator import *
from pathlib import Path
# In[37]:
def update_aws_config(username,password,web_service_url='https://nda.nih.gov/DataManager/dataManager'):
generator = NDATokenGenerator(web_service_url)
token = generator.generate_token(username, password)
config_file=os.path.join(str(Path.home()),'.aws','credentials')
f=open(config_file, 'w')
f.write('[default]\n'
'aws_access_key_id = %s\n'
'aws_secret_access_key = %s\n'
'aws_session_token = %s\n'
%(token.access_key,
token.secret_key,
token.session)
)
f.close()
# In[38]:
def ABCD_download(id_list,table_file,username,password,save_dir,mod):
'''
Download image from ABCD dataset
Input:
        id_list: list of the subjects whose images you want to download
table_file: full path of the fmriresults01.txt
username: username of your NDAR account
password: password of your NDAR account
save_dir: the directory you want to save the downloaded files
        mod: the image modality to download; choose from [t1,t2,dwi,rs,mid,nback,sst]
    Output: NA
'''
# update config file
update_aws_config(username,password)
expire_time = datetime.datetime.now() + datetime.timedelta(hours=10)
# read image table
with open(id_list) as f:
id_list = f.read().splitlines()
image_table=pandas.read_table(table_file,sep='\t',usecols=['subjectkey','derived_files'])
all_image = image_table['derived_files']
# download image from aws server
task_codes = ABCD_task_coding()
keyword = task_codes[mod]
for id in id_list:
print(id)
ind = image_table['subjectkey'].str.contains(id)
subject_images = all_image[ind]
for curr_image in subject_images:
if (keyword in curr_image):
print(curr_image)
image_name = curr_image.rpartition('/')[-1]
filepath = os.path.join(save_dir, image_name)
if not os.path.exists(filepath):
os.system('aws s3 cp '+curr_image+' '+filepath)
# update config file when token expire
if datetime.datetime.now() > expire_time:
update_aws_config(username,password)
expire_time = datetime.datetime.now() + datetime.timedelta(hours=10)
# In[39]:
def ABCD_task_coding():
'''
    Define the keywords in the image filename that specify the image modality.
Returns:
task_coding: a dictionary for the matching keyword of each image modality.
'''
task_coding = {
"t1":"MPROC-T1",
"t2":"MPROC-T2",
"dwi":"MPROC-DTI",
"rs":"rsfMRI",
"mid":"MID-fMRI",
"nback":"nBack-fMRI",
"sst":"SST-fMRI"}
return task_coding
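# Example call (hypothetical paths and credentials), downloading T1 images:
# ABCD_download('subject_ids.txt', 'fmriresults01.txt', 'user', 'pass',
#               '/data/abcd', 't1')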
# In[41]:
if __name__ == "__main__":
if (len(sys.argv) >= 5):
username = input("type your NDAR account name: ")
        password = input("type your NDAR account password: ")
# download the images for each modality sequentially
for i in range(len(sys.argv)-4):
print('---------------downloading '+sys.argv[4+i]+' images---------------')
ABCD_download(sys.argv[1],sys.argv[2],username,password,sys.argv[3],sys.argv[4+i])
print('---------------------------------download finished-----------------------------------------')
else:
print("ERROR: not enough inputs")
| 30.214876 | 108 | 0.596827 | 459 | 3,656 | 4.588235 | 0.352941 | 0.023267 | 0.02849 | 0.032764 | 0.117284 | 0.079772 | 0.079772 | 0.079772 | 0.079772 | 0.079772 | 0 | 0.011905 | 0.26477 | 3,656 | 120 | 109 | 30.466667 | 0.771577 | 0.208698 | 0 | 0.063492 | 0 | 0 | 0.185106 | 0.05657 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0.111111 | 0.095238 | 0 | 0.15873 | 0.079365 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e54460f82677a82a4c20f2a3e31d8914c449db71 | 1,463 | py | Python | tests/test_subprocess_extension.py | npcole/latexbuild | 596a2a0a4c42eaa5eb9503d64f9073ad5d0640d5 | [
"MIT"
] | 27 | 2016-04-15T22:38:04.000Z | 2022-02-06T12:13:37.000Z | tests/test_subprocess_extension.py | npcole/latexbuild | 596a2a0a4c42eaa5eb9503d64f9073ad5d0640d5 | [
"MIT"
] | 6 | 2017-04-06T00:32:56.000Z | 2021-07-09T04:16:37.000Z | tests/test_subprocess_extension.py | pappasam/latexbuild | 596a2a0a4c42eaa5eb9503d64f9073ad5d0640d5 | [
"MIT"
] | 6 | 2016-09-30T17:20:47.000Z | 2021-01-28T20:52:19.000Z | import os
import unittest
from subprocess import CalledProcessError
from latexbuild.subprocess_extension import check_output_cwd
#######################################################################
# Define constants
#######################################################################
PATH_FILE = os.path.abspath(__file__)
PATH_TEST = os.path.dirname(PATH_FILE)
PATH_MAIN = os.path.dirname(PATH_TEST)
NAME_FILE = os.path.basename(PATH_FILE)
#######################################################################
# Define helper functions
#######################################################################
def ls_and_split(directory):
stdout = check_output_cwd(['ls'], directory)
return stdout
#######################################################################
# Main class
#######################################################################
class TestCheckOutputCwd(unittest.TestCase):
def test_raises_bad_binary(self):
self.assertRaises(ValueError,
check_output_cwd, ['fjadklsjfkldsjf', '--ddfddf'], PATH_TEST)
def test_raises_bad_call(self):
self.assertRaises(CalledProcessError,
check_output_cwd, ['python', '--ddfddf'], PATH_TEST)
def test_ls_current_dir(self):
self.assertIn(NAME_FILE, ls_and_split(PATH_TEST))
def test_ls_above_dir(self):
self.assertNotIn(NAME_FILE, ls_and_split(PATH_MAIN))
if __name__ == '__main__':
unittest.main()
| 34.833333 | 77 | 0.518797 | 134 | 1,463 | 5.276119 | 0.365672 | 0.056577 | 0.079208 | 0.063649 | 0.142857 | 0.062235 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120984 | 1,463 | 41 | 78 | 35.682927 | 0.549767 | 0.03486 | 0 | 0 | 0 | 0 | 0.047862 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.208333 | false | 0 | 0.166667 | 0 | 0.458333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e545c987961ef987cbf7bd7af98052eacf914e76 | 2,131 | py | Python | FastAutoAugment/nas/arch_trainer.py | sytelus/fast-autoaugment | a53708699dce1233ce2a0bf0416ae2278007d506 | [
"MIT"
] | null | null | null | FastAutoAugment/nas/arch_trainer.py | sytelus/fast-autoaugment | a53708699dce1233ce2a0bf0416ae2278007d506 | [
"MIT"
] | null | null | null | FastAutoAugment/nas/arch_trainer.py | sytelus/fast-autoaugment | a53708699dce1233ce2a0bf0416ae2278007d506 | [
"MIT"
] | null | null | null | from typing import Optional, Callable
import os
import torch
from torch.utils.data import DataLoader
from torch import Tensor
from torch.optim.optimizer import Optimizer
from torch.optim.lr_scheduler import _LRScheduler
from overrides import overrides, EnforceOverrides
from ..common.config import Config
from ..common import common
from ..nas.model import Model
from ..nas.model_desc import ModelDesc
from ..common.trainer import Trainer
from ..nas.vis_model_desc import draw_model_desc
from ..common.check_point import CheckPoint
class ArchTrainer(Trainer, EnforceOverrides):
def __init__(self, conf_train: Config, model: Model, device,
check_point:Optional[CheckPoint]) -> None:
super().__init__(conf_train, model, device, check_point, aux_tower=True)
self._l1_alphas = conf_train['l1_alphas']
self._plotsdir = common.expdir_abspath(conf_train['plotsdir'], True)
@overrides
def compute_loss(self, lossfn: Callable,
x: Tensor, y: Tensor, logits: Tensor,
aux_weight: float, aux_logits: Optional[Tensor]) -> Tensor:
loss = super().compute_loss(lossfn, x, y, logits,
aux_weight, aux_logits)
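        # Optional L1 penalty on the architecture alphas nudges the mixing
        # weights towards sparsity during the architecture search.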
if self._l1_alphas > 0.0:
l_extra = sum(torch.sum(a.abs()) for a in self.model.alphas())
loss += self._l1_alphas * l_extra
return loss
@overrides
def post_epoch(self, train_dl: DataLoader, val_dl: Optional[DataLoader])->None:
super().post_epoch(train_dl, val_dl)
self._draw_model()
def _draw_model(self) -> None:
if not self._plotsdir:
return
train_metrics, val_metrics = self.get_metrics()
if (val_metrics and val_metrics.is_best()) or \
(train_metrics and train_metrics.is_best()):
# log model_desc as a image
plot_filepath = os.path.join(
self._plotsdir, "EP{train_metrics.epoch:03d}")
draw_model_desc(self.model.finalize(), plot_filepath+"-normal",
caption=f"Epoch {train_metrics.epoch}")
| 38.053571 | 83 | 0.661661 | 271 | 2,131 | 4.95203 | 0.332103 | 0.033532 | 0.026826 | 0.031297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004966 | 0.244017 | 2,131 | 55 | 84 | 38.745455 | 0.828057 | 0.011732 | 0 | 0.044444 | 0 | 0 | 0.037072 | 0.022814 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088889 | false | 0 | 0.333333 | 0 | 0.488889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
e547e877ea0b546cad0a6abe04c73ce96240387b | 896 | py | Python | pytan3/tests/test_utils/test_crypt.py | lifehackjim/pytan3 | ca8223facb3797261655645fb4ecba6e13856b5e | [
"MIT"
] | 3 | 2020-06-15T15:34:07.000Z | 2021-09-21T15:22:00.000Z | pytan3/tests/test_utils/test_crypt.py | lifehackjim/pytan3 | ca8223facb3797261655645fb4ecba6e13856b5e | [
"MIT"
] | null | null | null | pytan3/tests/test_utils/test_crypt.py | lifehackjim/pytan3 | ca8223facb3797261655645fb4ecba6e13856b5e | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Test suite for pytan3."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import pytest
import pytan3
import re
def test_encrypt_decrypt():
"""Test encrypt / decrypt with valid key."""
data = "{}#!:What a lame test"
key = "An even lamer key"
crypt = pytan3.utils.crypt.encrypt(data=data, key=key)
assert re.match(r"\d+\$\d+\$", crypt)
back = pytan3.utils.crypt.decrypt(data=crypt, key=key)
assert back == data
def test_decrypt_bad_key():
"""Test exc thrown with bad key."""
data = "{}#!:What a lame test"
key = "An even lamer key"
crypt = pytan3.utils.crypt.encrypt(data=data, key=key)
with pytest.raises(pytan3.utils.exceptions.ModuleError):
pytan3.utils.crypt.decrypt(data=crypt, key="an even worse key")
| 29.866667 | 71 | 0.689732 | 128 | 896 | 4.640625 | 0.351563 | 0.092593 | 0.107744 | 0.040404 | 0.383838 | 0.383838 | 0.383838 | 0.265993 | 0.265993 | 0.265993 | 0 | 0.010855 | 0.177455 | 896 | 29 | 72 | 30.896552 | 0.795115 | 0.127232 | 0 | 0.3 | 0 | 0 | 0.134465 | 0 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.1 | false | 0 | 0.35 | 0 | 0.45 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
e54d26678218e56586a416feb9b45a4d21e7530a | 914 | py | Python | apps/areas/migrations/0003_auto_20210215_1959.py | windVane369/meiduo | 6165e2b1fe42aa529b6eb6ca832e9fa67b7477d3 | [
"Apache-2.0"
] | null | null | null | apps/areas/migrations/0003_auto_20210215_1959.py | windVane369/meiduo | 6165e2b1fe42aa529b6eb6ca832e9fa67b7477d3 | [
"Apache-2.0"
] | null | null | null | apps/areas/migrations/0003_auto_20210215_1959.py | windVane369/meiduo | 6165e2b1fe42aa529b6eb6ca832e9fa67b7477d3 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.1.6 on 2021-02-15 11:59
from django.db import migrations, models
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
('areas', '0002_auto_20210215_1956'),
]
operations = [
migrations.AddField(
model_name='area',
name='create_time',
field=models.DateTimeField(auto_now_add=True, db_index=True, default=django.utils.timezone.now, verbose_name='创建时间'),
preserve_default=False,
),
migrations.AddField(
model_name='area',
name='update_time',
field=models.DateTimeField(auto_now=True, db_index=True, verbose_name='修改时间'),
),
migrations.AlterField(
model_name='area',
name='id',
field=models.BigAutoField(primary_key=True, serialize=False, verbose_name='主键ID'),
),
]
| 29.483871 | 129 | 0.612691 | 100 | 914 | 5.42 | 0.54 | 0.049816 | 0.071956 | 0.094096 | 0.258303 | 0.258303 | 0 | 0 | 0 | 0 | 0 | 0.046477 | 0.270241 | 914 | 30 | 130 | 30.466667 | 0.766117 | 0.049234 | 0 | 0.333333 | 1 | 0 | 0.087659 | 0.026528 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e54d30c9387c89a4896a9b3c359d877b95d8740d | 432 | py | Python | Python/Algorithms/1805.py | DimitrisJim/leetcode_solutions | 765ea578748f8c9b21243dec9dc8a16163e85c0c | [
"Unlicense"
] | 2 | 2021-01-15T17:22:54.000Z | 2021-05-16T19:58:02.000Z | Python/Algorithms/1805.py | DimitrisJim/leetcode_solutions | 765ea578748f8c9b21243dec9dc8a16163e85c0c | [
"Unlicense"
] | null | null | null | Python/Algorithms/1805.py | DimitrisJim/leetcode_solutions | 765ea578748f8c9b21243dec9dc8a16163e85c0c | [
"Unlicense"
] | null | null | null | from string import ascii_lowercase
class Solution:
trans_table = {ord(i): ' ' for i in ascii_lowercase}
def numDifferentIntegers(self, word: str) -> int:
seen = set()
# Translation map translates "[a-z] => ' '"
for i in word.translate(self.trans_table).split():
# strip any leading zeroes.
seen.add(i.lstrip('0'))
# return num of members.
return len(seen)
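# Example: Solution().numDifferentIntegers("a1b01c001") == 1, since '1', '01'
# and '001' all strip to the same '1'.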
| 27 | 58 | 0.592593 | 54 | 432 | 4.666667 | 0.740741 | 0.111111 | 0.047619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003257 | 0.289352 | 432 | 15 | 59 | 28.8 | 0.81759 | 0.208333 | 0 | 0 | 0 | 0 | 0.005917 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e55ed07309d1bc4c4714e7e47ef852ae00e35aeb | 2,424 | py | Python | awkward_pandas/accessor.py | martindurant/awkward_extras | 5cad442d52e960768f16cdf64aae0f0c10cd33f4 | [
"BSD-3-Clause"
] | null | null | null | awkward_pandas/accessor.py | martindurant/awkward_extras | 5cad442d52e960768f16cdf64aae0f0c10cd33f4 | [
"BSD-3-Clause"
] | 1 | 2020-11-17T15:54:26.000Z | 2020-11-17T19:27:22.000Z | awkward_pandas/accessor.py | martindurant/awkward_extras | 5cad442d52e960768f16cdf64aae0f0c10cd33f4 | [
"BSD-3-Clause"
] | 1 | 2020-11-23T08:52:09.000Z | 2020-11-23T08:52:09.000Z | import functools
import inspect
import pandas as pd
import awkward1 as ak
from .series import AwkwardSeries
from .dtype import AwkardType
funcs = [n for n in dir(ak) if inspect.isfunction(getattr(ak, n))]
@pd.api.extensions.register_series_accessor("ak")
class AwkwardAccessor:
def __init__(self, pandas_obj):
if not self._validate(pandas_obj):
raise AttributeError("ak accessor called on incompatible data")
self._obj = pandas_obj
self._arr = None
@property
def arr(self):
if self._arr is None:
if isinstance(self._obj, AwkwardSeries):
self._arr = self._obj
elif isinstance(self._obj.dtype, AwkardType) and isinstance(self._obj, pd.Series):
                # this is a pandas Series that contains an Awkward array
self._arr = self._obj.values
elif isinstance(self._obj.dtype, AwkardType):
# a dask series - figure out what to do here
raise NotImplementedError
else:
# this recreates series, possibly by iteration
self._arr = AwkwardSeries(self._obj)
return self._arr
@staticmethod
def _validate(*_):
return True
def to_arrow(self):
return self.arr.data.to_arrow()
def cartesian(self, other, **kwargs):
if isinstance(other, AwkwardSeries):
other = other.data
return AwkwardSeries(ak.cartesian([self.arr.data, other], **kwargs))
def __getattr__(self, item):
# replace with concrete implementations of all top-level ak functions
if item not in funcs:
raise AttributeError
func = getattr(ak, item)
@functools.wraps(func)
def f(*others, **kwargs):
others = [other.data if isinstance(getattr(other, "data", None), ak.Array) else other
for other in others]
ak_arr = func(self.arr.data, *others, **kwargs)
# TODO: special case to carry over index and name information where output
# is similar to input, e.g., has same length
if isinstance(ak_arr, ak.Array):
# TODO: perhaps special case here if the output can be represented
# as a regular num/cupy array
return AwkwardSeries(ak_arr)
return ak_arr
return f
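# Usage sketch (hypothetical data): any top-level awkward1 function is reachable
# through the accessor, e.g.
#   s = AwkwardSeries(ak.Array([[1, 2], [3]]))
#   s.ak.num()       # -> AwkwardSeries of per-row list lengths
#   s.ak.to_arrow()  # -> pyarrow form of the underlying array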
| 35.130435 | 97 | 0.613861 | 294 | 2,424 | 4.938776 | 0.384354 | 0.043388 | 0.046832 | 0.039945 | 0.049587 | 0.049587 | 0 | 0 | 0 | 0 | 0 | 0.000596 | 0.308168 | 2,424 | 68 | 98 | 35.647059 | 0.865236 | 0.171205 | 0 | 0.040816 | 0 | 0 | 0.0225 | 0 | 0 | 0 | 0 | 0.014706 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0.040816 | 0.44898 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e56f95313666f0c69512af094eaacef1cd96f697 | 1,735 | py | Python | cassandra-helper/copy_keyspace.py | HolmesProcessing/toolbox | 7de75cf477e71fb433bf0c2e2a55c2a7c38100a2 | [
"Apache-2.0"
] | null | null | null | cassandra-helper/copy_keyspace.py | HolmesProcessing/toolbox | 7de75cf477e71fb433bf0c2e2a55c2a7c38100a2 | [
"Apache-2.0"
] | null | null | null | cassandra-helper/copy_keyspace.py | HolmesProcessing/toolbox | 7de75cf477e71fb433bf0c2e2a55c2a7c38100a2 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python2.7
import ast
from sys import argv
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
from cassandra import query
def print_usage():
print("USAGE: %s KEYSPACE_FROM KEYSPACE_TO TABLE SELECTOR CLUSTER_IPS USERNAME PASSWORD" % argv[0])
print("e.g.:\n%s holmes_totem holmes results \"service_name = 'yara'\" \"['10.0.4.80','10.0.4.81','10.0.4.82']\" cassandra password" % argv[0])
exit(-1)
if len(argv) != 8:
print_usage()
keyspace_from = argv[1]
keyspace_to = argv[2]
table = argv[3]
selection = argv[4]
cluster_ips = ast.literal_eval(argv[5])
username = argv[6]
password = argv[7]
if type(cluster_ips) != list:
print("ERROR: CLUSTER_IPS must be a list!")
print_usage()
print("Copying from keyspace '%s' to '%s' on cluster %s: Table '%s' where \"%s\".\n\nContinue? [yn]" % (keyspace_from, keyspace_to, cluster_ips, table, selection))
c = ""
while c != "y":
c = raw_input()
if c == 'n':
print("Aborted")
exit(-1)
ap = PlainTextAuthProvider(username=username, password=password)
cluster = Cluster(cluster_ips, auth_provider=ap)
sess_get = cluster.connect(keyspace_from)
sess_insert = cluster.connect(keyspace_to)
sess_get.row_factory = query.dict_factory
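# dict_factory returns each row as a dict, so the generated INSERT below can
# bind pyformat placeholders (%(column)s) directly from the row.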
rows = sess_get.execute("SELECT * FROM %s WHERE %s;" % (table, selection))
i = 0
for r in rows:
i += 1
keys = []
vals = []
for k in r:
keys.append("%s" % str(k))
vals.append("%%(%s)s" % str(k))
insert_stmt = "INSERT INTO %s (%s) VALUES (%s)" % (table, ",".join(keys), ",".join(vals))
sess_insert.execute(insert_stmt, r)
print("Copied %d" % (i))
print("=======")
print("Copied %d entries" % i)
| 27.539683 | 163 | 0.676657 | 270 | 1,735 | 4.233333 | 0.388889 | 0.052493 | 0.010499 | 0.038495 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023208 | 0.15562 | 1,735 | 62 | 164 | 27.983871 | 0.756997 | 0.013256 | 0 | 0.076923 | 0 | 0.019231 | 0.23495 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019231 | false | 0.076923 | 0.192308 | 0 | 0.211538 | 0.211538 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e572da2f2a7ca87ea284feddda92b470709ee671 | 25,068 | py | Python | powerpod/types.py | bucko909/powerpod | 829ad79a32f0041876c57f642d3d239947c3594a | [
"BSD-2-Clause"
] | 3 | 2016-12-18T20:51:02.000Z | 2019-12-29T12:47:41.000Z | powerpod/types.py | bucko909/powerpod | 829ad79a32f0041876c57f642d3d239947c3594a | [
"BSD-2-Clause"
] | null | null | null | powerpod/types.py | bucko909/powerpod | 829ad79a32f0041876c57f642d3d239947c3594a | [
"BSD-2-Clause"
] | null | null | null | from collections import namedtuple
import datetime
import calendar
import struct
import sys
class StructType(object):
"""
Automatically uses SHAPE to pack/unpack simple structs.
"""
@classmethod
def from_binary(cls, data):
try:
return cls(*cls._decode(*struct.unpack(cls.SHAPE, data)))
except:
sys.stderr.write("Error parsing {!r}\n".format(data))
raise
@staticmethod
def _decode(*args):
""" data from unpack -> data for __init__ """
return args
def to_binary(self):
return struct.pack(self.SHAPE, *self._encode())
def _encode(self):
""" data from self -> data for pack """
return self
@classmethod
def byte_size(cls):
return struct.Struct(cls.SHAPE).size
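# Illustrative sketch (not part of the original module): a hypothetical
# two-field struct built on StructType; SHAPE drives both pack and unpack.
class _ExamplePoint(StructType, namedtuple('_ExamplePoint', 'x y')):
	SHAPE = '<hh'  # two little-endian signed shorts
# _ExamplePoint.from_binary('\x01\x00\x02\x00') == _ExamplePoint(x=1, y=2),
# and .to_binary() packs it back into the same four bytes.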
class StructListType(object):
"""
Automatically uses SHAPE to pack/unpack simple structs which are followed by lists of RECORD_TYPE records.
You must have 'size' in _fields, which must be the record count, and a 'records' field to hold the decoded records.
RECORD_TYPE must have a 'size', and a 'from_binary' function.
"""
@classmethod
def from_binary(cls, data):
encode = struct.Struct(cls.SHAPE)
header_size = cls.byte_size()
header = encode.unpack(data[:header_size])
record_size = cls.RECORD_TYPE.byte_size()
try:
# Specifies number of records
size_offset = cls._fields.index('size')
record_count = header[size_offset]
assert header_size + record_count * record_size == len(data), (header_size, record_count, record_size, len(data))
except ValueError:
# Specifies length of data
size_offset = cls._fields.index('data_size')
total_size = header[size_offset]
assert len(data) == header_size + total_size, (header_size, total_size, len(data))
assert total_size % record_size == 0, (total_size, record_size)
record_count = header[size_offset] / record_size
raw_records = [data[header_size + record_size * x:header_size + record_size * (x + 1)] for x in range(record_count)]
return cls(*(cls._decode(*header) + (map(cls.RECORD_TYPE.from_binary, raw_records),)))
@staticmethod
def _decode(*args):
""" data from unpack -> data for __init__ """
return args
def to_binary(self):
data_binary = ''.join(record.to_binary() for record in self.records)
if hasattr(self, 'size'):
assert self.size == len(self.records), (self.size, len(self.records))
else:
assert self.data_size == len(data_binary), (self.data_size, data_binary)
return struct.pack(self.SHAPE, *self._encode()) + data_binary
def _encode(self):
""" data from self -> data for pack """
record_offset = self._fields.index('records')
return self[:record_offset] + self[record_offset+1:]
@classmethod
def byte_size(cls):
return struct.Struct(cls.SHAPE).size
TIME_FIELDS = [
('secs', 'b'),
('mins', 'b'),
('hours', 'b'),
('day', 'b'),
('month', 'b'),
('month_length', 'b'),
('year', 'h'),
]
class NewtonTime(StructType, namedtuple('NewtonTime', zip(*TIME_FIELDS)[0])):
SHAPE = '<' + ''.join(zip(*TIME_FIELDS)[1])
def as_datetime(self):
return datetime.datetime(self.year, self.month, self.day, self.hours, self.mins, self.secs)
@classmethod
def from_datetime(cls, datetime):
days_in_month = calendar.monthrange(datetime.year, datetime.month)[1]
return cls(datetime.second, datetime.minute, datetime.hour, datetime.day, datetime.month, days_in_month, datetime.year)
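# e.g. NewtonTime.from_datetime(datetime.datetime(2000, 2, 1)) yields
# month_length=29, since 2000 was a leap year.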
PROFILE_FIELDS = [
('unknown_0', 'h'),
# Facts about sample_smoothing flags:
# If I send (in GetProfileData) 0x0000, I get (in SetProfileData) 0x0800.
# If I send 0xffff, I get 0xffdf.
# If I send 0x0539, I get 0x0d19.
# If I send 0x2ef0, I get 0x2ed0.
# Both of these are preserved.
# Conclusion: 0x0800 must be set, 0x0020 must be unset.
# Switching from 5s sample smoothing to 1s sets 0x0008. Setting back unsets it.
# Annoyingly, Isaac only resets to '1 sec' when you 'Get from iBike' -- it'll never reset to '5 sec', so I guess it just checks the flag.
# Conclusion: 0x0008 is the "don't smooth for 5s" flag.
# A reset profile gets 10251 (=0x280b)
('sample_smoothing', 'H', {14554: 1, 14546: 5}),
('unknown_1', 'h'),
('null_1', 'i'),
('null_2', 'h'),
# If I send 0x0000, I get 0x8009.
# If I send 0x8009, I get 0x8009.
# If I send 0xffff, I get 0x8009.
# If I then set the 'user-edited' flag by messing with stuff, I get 0x8005.
# Reset to factory default -> 0x800c
# Save/load profile 0x800c -> 0x8009
# Mess with settings 0x8009 -> 0x8005
# Save/load profile 0x8005 -> 0x8009
# Factory default is actually recognised by model (aero/fric etc) values.
# On a pristine profile, I see 0x800e or 0x800d and it's reset to 0x8009 with just a get/set. On an old recording, I saw it reset to 0x8005 on a user-edit.
# Resetting the profile gets 0x800c. Setting it once (or running through setup) gets 0x800d.
# bit 0 1 2 3
# reset 0 0 1 1
# user-edited 1 0 1 0
# save/load 1 0 0 1
# TODO TODO TODO
('user_edited', 'H', {0x8009: False, 0x8005: True}),
('total_mass_lb', 'h'),
('wheel_circumference_mm', 'h'),
('null_3', 'h'),
('unknown_3', 'h'),
('unknown_2', 'h'),
('unknown_4', 'H'),
('unknown_5', 'h'),
('aero', 'f'),
('fric', 'f'),
('unknown_6', 'f'),
('unknown_7', 'f'),
('unknown_8', 'i'),
('wind_scaling_sqrt', 'f'),
('tilt_cal', 'h'),
('cal_mass_lb', 'h'),
('rider_mass_lb', 'h'),
('unknown_9', 'h'),
# ftp_per_kilo_ish:
# Unaffected by bike weight/total weight. Just rider weight.
# rider(lb) FTP 20min value
# 100 38 40 1 # Min valid
# 100 85 91 1
# 100 86 92 2
# 100 105 111 2
# 100 106 112 3
# 100 120 126 3
# 100 121 127 4
# 100 149 157 4
# 100 150 158 5
# 100 163 172 5
# 100 164 173 6
# 100 183 193 6
# 100 184 194 7
# 100 207 218 7
# 100 208 219 8
# 100 227 239 8
# 100 228 240 9
# 100 247 260 9
# 100 248 261 10 # Stops increasing
# 80 200 211 10 # Stops increasing
# 81 200 211 9
# 88 200 211 9
# 89 200 211 8
# 96 200 211 8
# 97 200 211 7
# 109 200 211 7
# 110 200 211 6
# 122 200 211 6
# 123 200 211 5
# 134 200 211 5
# 135 200 211 4
# 165 200 211 4
# 166 200 211 3
# 189 200 211 3
# 190 200 211 2
# 232 200 211 2
# 233 200 211 1
# Roughly, this is (ftp_per_kilo-1.2)/0.454
# The values around 3 seem underestimated (formula underestimates).
# I think this is something related to the Coggan scale,
# which goes from 1.26 FTPW/kg to 6.6 FTPW/kg
('ftp_per_kilo_ish', 'h'),
('watts_20_min', 'h'), # = FTP / 0.95
('unknown_a', 'h'), # 0x0301 -> 0x0b01 (+0x0800) when sample rate changed to 1s. Never restored, though!
('speed_id', 'H'),
('cadence_id', 'H'),
('hr_id', 'H'),
('power_id', 'H'),
('speed_type', 'B'),
('cadence_type', 'B'),
('hr_type', 'B'),
('power_type', 'B'),
('power_smoothing_seconds', 'H'),
('unknown_c', 'h'), # 0x0032
]
class NewtonProfile(StructType, namedtuple('NewtonProfile', zip(*PROFILE_FIELDS)[0])):
SHAPE = '<' + ''.join(zip(*PROFILE_FIELDS)[1])
@classmethod
def _decode(cls, *args):
# Alert when any of these are interesting.
assert args[cls._fields.index('unknown_0')] == 0x5c16, args[cls._fields.index('unknown_0')]
assert args[cls._fields.index('sample_smoothing')] in (0x38d2, 0x38da, 0x380b, 0x38fb, 0x382b, 0x38db, 0x280b), args[cls._fields.index('sample_smoothing')]
assert args[cls._fields.index('unknown_1')] == 0x382b, args[cls._fields.index('unknown_1')]
assert args[cls._fields.index('null_1')] == 0, args[cls._fields.index('null_1')]
assert args[cls._fields.index('null_2')] == 0, args[cls._fields.index('null_2')]
assert args[cls._fields.index('user_edited')] in (0x8009, 0x8005, 0x800d, 0x800c, 0x19, 0x8008), args[cls._fields.index('user_edited')]
assert args[cls._fields.index('null_3')] == 0, args[cls._fields.index('null_3')]
assert args[cls._fields.index('unknown_2')] in (0, 2), args[cls._fields.index('unknown_2')]
assert args[cls._fields.index('unknown_3')] in (0, 0x1988, 0x5f5c), args[cls._fields.index('unknown_3')]
assert args[cls._fields.index('unknown_4')] in (0xbc00, 0xe766, 0, 0x20ff), args[cls._fields.index('unknown_4')]
assert args[cls._fields.index('unknown_5')] in (0, 1), args[cls._fields.index('unknown_5')]
assert args[cls._fields.index('unknown_6')] in (-38.0, -10.0, 0.0), args[cls._fields.index('unknown_6')]
assert args[cls._fields.index('unknown_7')] in (1.0, 0.0), args[cls._fields.index('unknown_7')]
assert args[cls._fields.index('unknown_8')] == 1670644000, args[cls._fields.index('unknown_8')]
assert args[cls._fields.index('unknown_9')] in (1850, 1803), args[cls._fields.index('unknown_9')]
assert args[cls._fields.index('unknown_a')] in (0x0301, 0x0b01, 0x351), args[cls._fields.index('unknown_a')]
assert args[cls._fields.index('unknown_c')] == 50, args[cls._fields.index('unknown_c')]
args = list(args)
args[cls._fields.index('tilt_cal')] = args[cls._fields.index('tilt_cal')] * 0.1
return args
def _encode(self):
return self._replace(tilt_cal=int(round(self.tilt_cal * 10)))
@classmethod
def default(cls):
return cls(
total_mass_lb=205,
user_edited=0x8008,
wheel_circumference_mm=2096,
sample_smoothing=10251,
aero=0.4899250099658966,
fric=11.310999870300293,
unknown_6=0.0,
unknown_7=0.0,
wind_scaling_sqrt=1.1510859727859497,
speed_id=0,
cadence_id=0,
hr_id=0,
power_id=0,
speed_type=0,
cadence_type=0,
hr_type=0,
power_type=0,
tilt_cal=-0.7,
cal_mass_lb=205,
rider_mass_lb=180,
unknown_9=1803,
ftp_per_kilo_ish=1,
watts_20_min=85,
unknown_a=769,
# ^^ SetProfileData
power_smoothing_seconds=1,
unknown_c=50,
# ^^ SetProfileData2
unknown_0=0x5c16,
unknown_1=0x382b,
null_1=0,
null_2=0,
null_3=0,
unknown_3=0,
unknown_2=0,
unknown_4=0,
unknown_5=0,
unknown_8=1670644000,
# ^^^ Complete unknowns
)
def swap_endian(x):
return (x >> 8) + ((x & ((1 << 8) - 1)) << 8)
def to_signed(x, bits):
if x & 1 << (bits - 1):
return x - (1 << bits)
else:
return x
def to_unsigned(x, bits):
if x < 0:
return x + (1 << bits)
else:
return x
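# e.g. swap_endian(0x1234) == 0x3412; to_signed(0x3ff, 10) == -1;
# to_unsigned(-1, 10) == 0x3ff.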
IDENTITY = lambda x: x
TO_TIMES_TEN_SIGNED = lambda base: lambda x: to_unsigned(int(x * 10), base)
FROM_TIMES_TEN_SIGNED = lambda base: lambda x: to_signed(x, base) * 0.1
FROM_TIMES_TEN = lambda x: x * 0.1
TO_TIMES_TEN = lambda x: int(x * 10)
RIDE_DATA_FIELDS = [
('elevation_feet', 16, lambda x: to_signed(swap_endian(x), 16), lambda x: swap_endian(to_unsigned(x, 16))),
('cadence', 8, IDENTITY, IDENTITY),
('heart_rate', 8, IDENTITY, IDENTITY),
('temperature_farenheit', 8, lambda x: x - 100, lambda x: x + 100),
('unknown_0', 9, lambda x: to_signed(x, 9), lambda x: to_unsigned(x, 9)),
('tilt', 10, FROM_TIMES_TEN_SIGNED(10), TO_TIMES_TEN_SIGNED(10)),
('speed_mph', 10, FROM_TIMES_TEN, TO_TIMES_TEN),
('wind_tube_pressure_difference', 10, IDENTITY, IDENTITY),
('power_watts', 11, IDENTITY, IDENTITY),
('dfpm_power_watts', 11, IDENTITY, IDENTITY),
('acceleration_maybe', 10, lambda x: to_signed(x, 10), lambda x: to_unsigned(x, 10)),
('stopped_flag_maybe', 1, IDENTITY, IDENTITY),
('unknown_3', 8, IDENTITY, IDENTITY), # if this is large, "drafting" becomes true
]
# unknown_0 seems to be highly correlated to altitude. It might be average or integrated tilt. It seems to affect the /first record/ of the ride in Isaac but not much else (small = high power, big = low power -- which supports it being some sort of tilt offset).
# acceleration_maybe seems negative when stopping, positive in general. My feeling is that it's forward acceleration. I can't get this to affect anything.
# Using 'set profile after the ride' seems to ignore both unknown_0 and acceleration_maybe. I guess they are internal values, but I can only guess what they might do.
assert sum(x[1] for x in RIDE_DATA_FIELDS) == 15 * 8
DECODE_FIFTEEN_BYTES = '{:08b}' * 15
ENCODE_FIFTEEN_BYTES = ''.join('{:0%sb}' % (fielddef[1],) for fielddef in RIDE_DATA_FIELDS)
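# Each 15-byte record is expanded into a 120-character bit string and sliced
# into the (name, bit-width) fields above on decode; encode formats each field
# back at its declared width and repacks the bits into 15 bytes.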
class NewtonRideData(object):
SHAPE = '15s'
__slots__ = zip(*RIDE_DATA_FIELDS)[0]
def __init__(self, *args):
for name, value in zip(self.__slots__, args):
setattr(self, name, value)
@staticmethod
def byte_size():
# We are not a struct type, but we want to look like one.
return 15
@classmethod
def from_binary(cls, data):
if data.startswith('\xff\xff\xff\xff\xff\xff'):
return NewtonRideDataPaused.from_binary(data)
binary = DECODE_FIFTEEN_BYTES.format(*struct.unpack('15B', data))
vals = []
start = 0
for _name, size, decode, _encode in RIDE_DATA_FIELDS:
value = int(binary[start:start+size], 2)
start += size
vals.append(decode(value))
return cls(*vals)
def to_binary(self):
vals = []
for name, size, _decode, encode in RIDE_DATA_FIELDS:
value = getattr(self, name)
vals.append(encode(value))
binary = ENCODE_FIFTEEN_BYTES.format(*vals)
assert len(binary) == 15 * 8
chopped = [int(binary[x:x+8], 2) for x in range(0, 15*8, 8)]
return struct.pack('15B', *chopped)
@property
def elevation_metres(self):
return self.elevation_feet * 0.3048
def pressure_Pa(self, reference_pressure_Pa=101325, reference_temperature_kelvin=288.15):
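		# Standard barometric formula P = P0 * (1 - L*h/T0)^(g*M/(R*L)), with
		# lapse rate L = 0.0065 K/m, g = 9.80665 m/s^2, M = 0.0289644 kg/mol,
		# R = 8.31447 J/(mol*K).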
return reference_pressure_Pa * (1 - (0.0065 * self.elevation_metres) / reference_temperature_kelvin) ** (9.80665 * 0.0289644 / 8.31447 / 0.0065)
@property
def temperature_kelvin(self):
return (self.temperature_farenheit + 459.67) * 5 / 9
def density(self, reference_pressure_Pa=101325, reference_temperature_kelvin=288.15):
# I say 0.8773 at 22.7778C/2516.7336m; they say 0.8768. Good enough...
# Constants from Wikipedia.
return self.pressure_Pa(reference_pressure_Pa, reference_temperature_kelvin) * 0.0289644 / 8.31447 / self.temperature_kelvin
def wind_speed_kph(self, offset=621, multiplier=13.6355, reference_pressure_Pa=101325, reference_temperature_kelvin=288.15, wind_scaling_sqrt=1.0):
# multiplier based on solving from CSV file
if self.wind_tube_pressure_difference < offset:
return 0.0
return ((self.wind_tube_pressure_difference - offset) / self.density(reference_pressure_Pa, reference_temperature_kelvin) * multiplier) ** 0.5 * wind_scaling_sqrt
def __repr__(self):
return '{}({})'.format(self.__class__.__name__, ', '.join(repr(getattr(self, name)) for name in self.__slots__))
class NewtonRideDataPaused(StructType, namedtuple('NewtonRideDataPaused', 'tag newton_time unknown_3')):
SHAPE = '<6s8sb'
@staticmethod
def _decode(tag, newton_time_raw, unknown_3):
return (tag, NewtonTime.from_binary(newton_time_raw), unknown_3)
def _encode(self):
return (self.tag, self.newton_time.to_binary(), self.unknown_3)
RIDE_FIELDS = [
('unknown_0', 'h', IDENTITY, IDENTITY, 17), # byte 0 -- 0x1100 observed
('size', 'i', IDENTITY, IDENTITY, 0), # byte 2
('total_mass_lb', 'f', IDENTITY, IDENTITY, 235), # byte 6, always integer?!, could be total mass
('energy_kJ', 'f', IDENTITY, IDENTITY, 0), # byte 10
('aero', 'f', IDENTITY, IDENTITY, 0.384), # byte 14
('fric', 'f', IDENTITY, IDENTITY, 12.0), # byte 18
('initial_elevation_feet', 'f', IDENTITY, IDENTITY, 0), # byte 22, always integer?!
('elevation_gain_feet', 'f', IDENTITY, IDENTITY, 0), # byte 26, always integer?!
('wheel_circumference_mm', 'f', IDENTITY, IDENTITY, 2136.0), # byte 30, always integer?!
('unknown_1', 'h', IDENTITY, IDENTITY, 15), # byte 34, 0x0f00 and 0x0e00 and 0x0e00 observed; multiplying by 10 does nothing observable. TODO is this ftp per kilo ish?
('unknown_2', 'h', IDENTITY, IDENTITY, 1), # byte 36, =1?
('start_time', '8s', NewtonTime.from_binary, NewtonTime.to_binary, NewtonTime(0, 0, 0, 1, 1, 31, 2000)), # byte 38
('pressure_Pa', 'i', IDENTITY, IDENTITY, 101325), # byte 46, appears to be pressure in Pa (observed range 100121-103175) # (setting, reported) = [(113175, 1113), (103175, 1014), (93175, 915), (203175, 1996), (1e9, 9825490), (2e9, 19650979), (-2e9, -19650979)]. Reported value in Isaac (hPa) is this divided by ~101.7761 or multiplied by 0.00982549. This isn't affected by truncating the ride at all. It /is/ affected by unknown_3; if I make unknown_3 -73 from 73, I get (-2e9, -19521083).
('Cm', 'f', IDENTITY, IDENTITY, 1.0204), # byte 50
# average_temperature_farenheit = Average of temperature records. Does not affect displayed temperature in Isaac. It affects displayed pressure in Isaac (bigger temp = closer to pressure_Pa).
# pressure_Pa = 103175
# average_temperature_farenheit = 1, pressure = 1011mbar
# average_temperature_farenheit = 100, pressure = 1015mbar
# average_temperature_farenheit = 10000, pressure = 1031mbar
# pressure_Pa = 1e9
# average_temperature_farenheit = 1, pressure = 9798543mbar
# average_temperature_farenheit = 100, pressure = 9833825mbar
# average_temperature_farenheit = 10000, pressure = 9991024mbar
('average_temperature_farenheit', 'h', IDENTITY, IDENTITY, 73), # byte 54.
('wind_scaling_sqrt', 'f', IDENTITY, IDENTITY, 1.0), # byte 56
('riding_tilt_times_10', 'h', IDENTITY, IDENTITY, 0.0), # byte 60
('cal_mass_lb', 'h', IDENTITY, IDENTITY, 235), # byte 62
('unknown_5', 'h', IDENTITY, IDENTITY, 88), # byte 64, 0x5800 and 0x6000 and 0x5c00 observed; multiplying by 10 doesn't affect: wind speed, pressure, temperature.
('wind_tube_pressure_offset', 'h', lambda x: x - 1024, lambda x: x + 1024, 620), # byte 66, this is a 10-bit signed negative number cast to unsigned and stored in a 16 bit int...
('unknown_7', 'i', IDENTITY, IDENTITY, 0), # byte 68, 0x00000000 observed
('reference_temperature_kelvin', 'h', IDENTITY, IDENTITY, 288), # byte 72, normally 288 (14.85C)
('reference_pressure_Pa', 'i', IDENTITY, IDENTITY, 101325), # byte 74
('unknown_9', 'h', IDENTITY, IDENTITY, 1), # byte 78 -- 0x0100 observed
('unknown_a', 'h', IDENTITY, IDENTITY, 50), # byte 80 -- 0x3200 observed
# byte 82
]
RIDE_DECODE = zip(*RIDE_FIELDS)[2]
RIDE_ENCODE = zip(*RIDE_FIELDS)[3]
RIDE_DEFAULTS = {key: value for key, _, _, _, value in RIDE_FIELDS}
class NewtonRide(StructListType, namedtuple('NewtonRide', zip(*RIDE_FIELDS)[0] + ('records',))):
SHAPE = '<' + ''.join(zip(*RIDE_FIELDS)[1])
RECORD_TYPE = NewtonRideData
@classmethod
def make(cls, data, **kwargs):
kwargs = {}
assert 'size' not in kwargs
assert 'records' not in kwargs
for name in cls._fields[:-1]:
kwargs[name] = RIDE_DEFAULTS[name]
kwargs['records'] = data
kwargs['size'] = len(data)
if data:
# TODO start_time, elevation gain
kwargs['average_temperature_farenheit'] = int(round(sum(x.temperature_farenheit for x in data if hasattr(x, 'temperature_farenheit')) / len(data)))
kwargs['initial_elevation_feet'] = [x.elevation_feet for x in data if hasattr(x, 'elevation_feet')][0]
kwargs['data_records'] = len(data)
kwargs['energy_kJ'] = int(round(sum(x.power_watts for x in data if hasattr(x, 'power_watts')) / 1000))
args = []
for name in cls._fields:
args.append(kwargs[name])
return cls(*args)
def _encode(self):
return tuple(encode(val) for val, encode in zip(self[:-1], RIDE_ENCODE))
@staticmethod
def _decode(*args):
return tuple(decode(val) for val, decode in zip(args, RIDE_DECODE))
def get_header(self):
return NewtonRideHeader(self.unknown_0, self.start_time, sum(x.speed_mph * 1602 / 3600. for x in self.records if isinstance(x, NewtonRideData)))
def fit_to(self, csv):
pure_records = [x for x in self.records if not hasattr(x, 'newton_time')]
csv_data = [float(x['Wind Speed (km/hr)']) for x in csv.data]
compare = [(x, y) for x, y in zip(pure_records, csv_data) if y > 0]
reference_pressure_kPa = self.reference_pressure_Pa / 1000.0
get_errors = lambda offset, multiplier: [pure_record.wind_speed_kph(offset, multiplier, reference_pressure_kPa, self.reference_temperature_kelvin, self.wind_scaling_sqrt) - csv_datum for pure_record, csv_datum in compare]
dirs = [(x, y) for x in range(-1, 2) for y in range(-1, 2) if x != 0 or y != 0]
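		# Crude pattern search: probe the eight neighbouring (offset, multiplier)
		# steps, keep the best, and halve the step size when nothing improves.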
		print(dirs)
skip = 500
best = current = (500, 10)
best_error = float('inf')
while skip > 0.000001:
new_best = False
for x, y in dirs:
test = (current[0] + x * skip, current[1] + y * skip * 0.02)
if test[1] < 0:
continue
error = sum(map(abs, get_errors(*test)))
#print test, error
if error < best_error:
best = test
best_error = error
new_best = True
if new_best:
current = best
else:
skip *= 0.5
#print best, skip, best_error
errors = get_errors(*best)
return best, best_error, max(map(abs, errors)), ["%0.4f" % (x,) for x in errors]
def fit_elevation(self, csv):
pure_records = [x for x in self.records if not hasattr(x, 'newton_time')]
csv_data = [float(x['Elevation (meters)']) / 0.3048 for x in csv.data]
compare = [(x, y) for x, y in zip(pure_records, csv_data)]
get_errors = lambda mul: [(pure_record.density(), pure_record.elevation_feet, csv_datum, pure_record.elevation_feet - csv_datum, (pure_record.wind_tube_pressure_difference - self.wind_tube_pressure_offset), pure_record.tilt, pure_record.unknown_0, pure_record) for pure_record, csv_datum in compare]
return get_errors(0.1)
class NewtonRideHeader(StructType, namedtuple('NewtonRideHeader', 'unknown_0 start_time distance_metres')):
# \x11\x00
# newton time
# float encoding of ride length in metres.
SHAPE = '<h8sf'
def _encode(self):
return (self.unknown_0, self.start_time.to_binary(), self.distance_metres)
@classmethod
def _decode(cls, unknown_0, start_time_raw, distance_metres):
return (unknown_0, NewtonTime.from_binary(start_time_raw), distance_metres)
def to_filename(self):
return "powerpod.%s-%0.1fkm.raw" % (self.start_time.as_datetime().strftime("%Y-%m-%dT%H-%M-%S"), self.distance_metres / 1000)
class NewtonProfileScreens(StructType):
# Data is laid out as [LEFT, RIGHT]
# Sides are [AGG1, AGG2, AGG3]
# Aggregates are [TOP, MIDDLE, BOTTOM]
# Meaning of indices in metrics
# (unverified, but 'average' is (1, 2, 1) and plain is (0, 2, 1))
AGG_NOW = 0
#AGG_TRIP = 1
AGG_AVG = 2
# Metrics (PowerPod 6.12)
METRIC_SPEED = (0, 2, 1)
METRIC_DISTANCE_POWER = (3, 5, 4)
METRIC_TIME = (6, 6, 6) # I guess no point in anything but 'trip'
METRIC_POWER = (7, 9, 8)
METRIC_OTHER = (10, 12, 11)
METRIC_SLOPE = (13, 15, 14)
METRIC_WIND = (16, 18, 17)
METRIC_BLANK = (19, 22, 20)
METRIC_NORMALISED_POWER = (21, 21, 21) # I guess no point in anything but 'trip'
# Which metrics are valid on which screens?
VALID_TOP = set([METRIC_SPEED, METRIC_WIND, METRIC_SLOPE, METRIC_POWER])
# Add averages.
VALID_TOP.update((z, y, z) for _x, y, z in list(VALID_TOP))
VALID_TOP.add(METRIC_BLANK)
VALID_MIDDLE = set([METRIC_POWER, METRIC_DISTANCE_POWER, METRIC_NORMALISED_POWER, METRIC_WIND, METRIC_BLANK])
VALID_BOTTOM = set([METRIC_TIME, METRIC_OTHER])
VALID = (VALID_BOTTOM, VALID_MIDDLE, VALID_TOP)
# Screens
TOP = 0
MIDDLE = 1
BOTTOM = 2
ROWS = 3
# Sides
LEFT = 0
RIGHT = 1
SIDES = 2
# Any triple is (Now, Trip, Average)
IDENTIFIER = 0x29
SHAPE = 'b' * 18
RESPONSE = None
def __init__(self, data):
self._data = list(data)
@classmethod
def _decode(cls, *args):
return args,
def _encode(self):
return self._data
def set_screen(self, side, row, metric, aggregate):
assert 0 <= side < self.SIDES, side
assert 0 <= row < self.ROWS, row
assert metric in self.VALID[row], (metric, row)
assert aggregate in (self.AGG_AVG, self.AGG_NOW), aggregate
metric = [metric[x] for x in (aggregate, 1, 2)]
for metric_idx in (0, 1, 2):
self._data[self._index(side, row, metric_idx)] = metric[metric_idx]
def to_dict(self):
sides = {}
for side_i, side_n in enumerate(['left', 'right']):
side = sides[side_n] = {}
for row_i, row_n in enumerate(['top', 'middle', 'bottom']):
row = side[row_n] = []
for metric_idx in (0, 1, 2):
row.append(self._data[self._index(side_i, row_i, metric_idx)])
return sides
def __repr__(self):
return "{}.from_dict({})".format(self.__class__.__name__, self.to_dict())
@classmethod
def from_dict(cls, sides):
data = [0] * 18
for side_i, side_n in enumerate(['left', 'right']):
side = sides[side_n]
for row_i, row_n in enumerate(['top', 'middle', 'bottom']):
row = side[row_n]
for metric_idx, value in enumerate(row):
data[cls._index(side_i, row_i, metric_idx)] = value
return cls(data)
@classmethod
def _index(cls, side, row, metric_idx):
return (side * 3 + metric_idx) * cls.ROWS + row
@classmethod
def default(cls):
return cls.from_dict({
'left': {
'top': cls.METRIC_SPEED,
'middle': cls.METRIC_DISTANCE_POWER,
'bottom': cls.METRIC_TIME,
},
'right': {
'top': cls.METRIC_SPEED,
'middle': cls.METRIC_POWER,
'bottom': cls.METRIC_OTHER,
}
})
| 39.16875 | 489 | 0.678873 | 3,805 | 25,068 | 4.287516 | 0.183706 | 0.022067 | 0.03261 | 0.03972 | 0.28546 | 0.217359 | 0.12995 | 0.091333 | 0.071105 | 0.056761 | 0 | 0.081807 | 0.181267 | 25,068 | 639 | 490 | 39.230047 | 0.713068 | 0.241703 | 0 | 0.155902 | 0 | 0 | 0.103764 | 0.019694 | 0 | 0 | 0.00922 | 0.001565 | 0.066815 | 0 | null | null | 0 | 0.011136 | null | null | 0.002227 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5765b63498aedff82c2802f809cf6d2e6ee5e33 | 662 | py | Python | plugins/plugin_base.py | Robinson04/inoft_vocal_framework | 9659e0852604bc628b01e0440535add0ae5fc5d1 | [
"MIT"
] | 11 | 2020-04-15T07:47:34.000Z | 2022-03-30T21:47:36.000Z | plugins/plugin_base.py | Robinson04/inoft_vocal_framework | 9659e0852604bc628b01e0440535add0ae5fc5d1 | [
"MIT"
] | 20 | 2020-08-09T00:11:49.000Z | 2021-09-11T11:34:02.000Z | plugins/plugin_base.py | Robinson04/inoft_vocal_framework | 9659e0852604bc628b01e0440535add0ae5fc5d1 | [
"MIT"
] | 6 | 2020-02-21T04:45:19.000Z | 2021-07-18T22:13:55.000Z | from abc import abstractmethod
from typing import Any
class PluginBase:
    # Exists to make it easier to detect whether a class is a plugin of any type.
pass
class PluginCodeGenerationBase(PluginBase):
# from inoft_vocal_framework.botpress_integration.generator import GeneratorCore
@abstractmethod
def execute(self, generator_core: Any): # GeneratorCore):
raise Exception("Execute method of the plugin must be implemented. "
"It will be the method that will be run when your plugin will be execute. "
"Also, all the classical class methods (like __init__) will be run like usual.")
| 36.777778 | 104 | 0.703927 | 86 | 662 | 5.325581 | 0.593023 | 0.052402 | 0.039301 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.247734 | 662 | 17 | 105 | 38.941176 | 0.919679 | 0.256798 | 0 | 0 | 0 | 0 | 0.409836 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0.1 | 0.2 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e57953f7d080f6dd0d028b8c3c8f68f6eb411e7d | 4,366 | py | Python | one/image.py | OpenNebula/addon-linstor | 71cc6d5d625929f0350cec866ff07e953fcebe12 | [
"Apache-2.0"
] | 11 | 2018-10-18T19:53:52.000Z | 2021-11-08T11:42:56.000Z | one/image.py | OpenNebula/addon-linstor | 71cc6d5d625929f0350cec866ff07e953fcebe12 | [
"Apache-2.0"
] | 13 | 2018-11-26T16:15:35.000Z | 2021-08-02T18:24:14.000Z | one/image.py | OpenNebula/addon-linstor | 71cc6d5d625929f0350cec866ff07e953fcebe12 | [
"Apache-2.0"
] | 7 | 2018-11-08T03:44:59.000Z | 2021-05-16T20:47:19.000Z | # -*- coding: utf-8 -*-
"""
OpenNebula Driver for Linstor
Copyright 2018 LINBIT USA LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import xml.etree.ElementTree as ET
class Image(object):
"""Docstring for vm. """
def __init__(self, xml):
self.xmlstr = xml # type: str
self._root = ET.fromstring(xml)
def __str__(self):
return self.xmlstr
@property
def id(self):
"""Returns name"""
try:
return self._root.find("ID").text or ""
except AttributeError:
return ""
@property
def size(self):
"""Returns name"""
try:
return self._root.find("SIZE").text or ""
except AttributeError:
return ""
@property
def source(self):
"""Returns source"""
try:
return self._root.find("SOURCE").text or ""
except AttributeError:
return '""'
@property
def target_snap(self):
"""Returns target_snap"""
try:
return self._root.find("TARGET_SNAPSHOT").text or ""
except AttributeError:
return '""'
@property
def datastore_id(self):
"""Returns name"""
try:
return self._root.find("DATASTORE_ID").text or ""
except AttributeError:
return ""
@property
def fs_type(self):
"""Returns FS_type"""
try:
return self._root.find("FSTYPE").text or ""
except AttributeError:
return ""
@property
def fs(self):
"""Returns filesystem"""
try:
return self._root.find("FS").text or ""
except AttributeError:
return ""
@property
def path(self):
"""Returns path"""
try:
return self._root.find("PATH").text or ""
except AttributeError:
return ""
@property
def cloning_id(self):
"""Returns cloning_ID"""
try:
return self._root.find("CLONING_ID").text or ""
except AttributeError:
return ""
@property
def md5(self):
"""Returns md5"""
try:
return self._root.find("TEMPLATE").find("MD5").text or ""
except AttributeError:
return '""'
@property
def sha1(self):
"""Returns sha1"""
try:
return self._root.find("TEMPLATE").find("SHA1").text or ""
except AttributeError:
return '""'
@property
def no_decompress(self):
"""Returns no_decompress"""
try:
return self._root.find("TEMPLATE").find("NO_DECOMPRESS").text or ""
except AttributeError:
return '""'
@property
def limit_transfer_bw(self):
"""Returns limit_transfer_bw"""
try:
return self._root.find("TEMPLATE").find("LIMIT_TRANSFER_BW").text or ""
except AttributeError:
return '""'
@property
def format(self):
"""
Format of the image
:return: Image format info string, e.g. raw, qcow2
:rtype: str
"""
try:
return self._root.find("FORMAT").text or ""
except AttributeError:
return ''
@property
def template_format(self):
"""
Format of the image
:return: Image format info string, e.g. raw, qcow2
:rtype: str
"""
try:
return self._root.find("TEMPLATE").find("FORMAT").text or ""
except AttributeError:
return ''
@property
def template_driver(self):
"""
Driver of the image
:return: Image driver info string, e.g. raw, qcow2
:rtype: str
"""
try:
return self._root.find("TEMPLATE").find("DRIVER").text or ""
except AttributeError:
return ''
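# A minimal usage sketch (the XML below is a hypothetical fragment; real
# OpenNebula image documents carry many more elements):
if __name__ == "__main__":
    sample = "<IMAGE><ID>42</ID><SIZE>2048</SIZE><FORMAT>qcow2</FORMAT></IMAGE>"
    image = Image(sample)
    print("id=%s size=%s format=%s" % (image.id, image.size, image.format))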
| 25.383721 | 83 | 0.554741 | 474 | 4,366 | 5.012658 | 0.261603 | 0.057239 | 0.087542 | 0.114478 | 0.573653 | 0.498317 | 0.498317 | 0.279882 | 0.188131 | 0.156145 | 0 | 0.006156 | 0.330279 | 4,366 | 171 | 84 | 25.532164 | 0.80643 | 0.25126 | 0 | 0.621359 | 0 | 0 | 0.057273 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.174757 | false | 0 | 0.009709 | 0.009709 | 0.514563 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e5896561c75ca54acb654060cb261650c4d273df | 1,247 | py | Python | cru_robot/des_pub.py | chula-eic/NukeBot | 59ae793829d34dfa3abf8a95ac1054d645c134d8 | [
"Apache-2.0"
] | null | null | null | cru_robot/des_pub.py | chula-eic/NukeBot | 59ae793829d34dfa3abf8a95ac1054d645c134d8 | [
"Apache-2.0"
] | null | null | null | cru_robot/des_pub.py | chula-eic/NukeBot | 59ae793829d34dfa3abf8a95ac1054d645c134d8 | [
"Apache-2.0"
] | 1 | 2019-10-21T13:19:34.000Z | 2019-10-21T13:19:34.000Z | #!/usr/bin/env python3
import rospy
from std_msgs.msg import String
from cru_robot.msg import FloatList
pub = rospy.Publisher('destination', FloatList, queue_size=10)
servo_pub = rospy.Publisher('servo', String, queue_size=10)
queue = []  # DECLARE QUEUE/FLOATLIST QUEUE HERE
state = 0
def callback(data):
global pub
global queue
global state
    if state == 0:
if data.data == "red":
queue = [[0,0],[1,1],[2,2]] #INSERT RED HERE
elif data.data == "blue":
queue = [[3,3]] #INSERT POS HERE
elif data.data == "green":
queue = [[4,4]] #INSERT POS HERE
elif data.data == "yellow":
queue = [[5,5]] #INSERT POS HERE
state = 1
if state == 1:
pub.publish(queue[-1])
def feedback(data):
global pub
global queue
if data.data == "1":
        if len(queue) > 0:
pub.publish(queue.pop())
else:
servo_pub.publish("1")
print("Destinatoin Reached")
else:
print("FAILED")
def des_pub():
rospy.init_node('des_pub', anonymous=True)
rospy.Subscriber("color", String, callback)
rospy.Subscriber("Feedback", String, feedback)
rospy.spin()
if __name__ == '__main__':
try:
des_pub()
except rospy.ROSInterruptException:
pass
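# A hypothetical manual test (assumes a running roscore and this node; the
# subscribed topic names resolve to /color and /Feedback in the default namespace):
#   rostopic pub -1 /color std_msgs/String "data: 'red'"
#   rostopic pub -1 /Feedback std_msgs/String "data: '1'"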
| 23.528302 | 62 | 0.621492 | 170 | 1,247 | 4.452941 | 0.382353 | 0.05284 | 0.047556 | 0.063408 | 0.129458 | 0.06605 | 0 | 0 | 0 | 0 | 0 | 0.026205 | 0.234964 | 1,247 | 52 | 63 | 23.980769 | 0.767296 | 0.092221 | 0 | 0.136364 | 0 | 0 | 0.079041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.022727 | 0.068182 | null | null | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e58b603f3270c8cd8bb40f3d52ba76dda553c385 | 1,357 | py | Python | example_runner.py | AmirPupko/pandas-to-sql | 12cb699d70acd368a284304d78c086fa0b1bc204 | [
"MIT"
] | 18 | 2021-01-21T17:10:29.000Z | 2022-03-30T19:48:00.000Z | example_runner.py | AmirPupko/pandas-to-sql | 12cb699d70acd368a284304d78c086fa0b1bc204 | [
"MIT"
] | null | null | null | example_runner.py | AmirPupko/pandas-to-sql | 12cb699d70acd368a284304d78c086fa0b1bc204 | [
"MIT"
] | null | null | null | from copy import copy
import sqlite3
import pandas as pd
import pandas_to_sql
from pandas_to_sql.testing.utils.fake_data_creation import create_fake_dataset
from pandas_to_sql.conventions import flatten_grouped_dataframe
# table_name = 'random_data'
# df, _ = create_fake_dataset()
# df_ = pandas_to_sql.wrap_df(df, table_name)
# df2 = df_.groupby('random_int').agg({'random_float':['mean','sum','count'], 'random_str':', '.join})
# df2 = flatten_grouped_dataframe(df2)
# print(df2.get_sql_string())
iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
table_name = 'iris'
sql_connection = sqlite3.connect('./iris.db') #create db
iris.to_sql(table_name, sql_connection, if_exists='replace', index=False)
df = pandas_to_sql.wrap_df(iris, table_name)
pd_wrapped = pandas_to_sql.wrap_pd(pd)
df_ = copy(df)
df_['sepal_width_rounded'] = df_.sepal_width.round()
df_1 = df_[df_.species=='setosa'].reset_index(drop=True)
df_2 = df_[df_.species=='versicolor'].reset_index(drop=True)
some_df = pd_wrapped.concat([df_1, df_2]).reset_index(drop=True)
sql_string = some_df.get_sql_string()
df_from_sql_database = pd.read_sql_query(sql_string, sql_connection)
df_pandas = some_df.df_pandas
from pandas_to_sql.testing.utils.asserters import assert_dataframes_equals
assert_dataframes_equals(df_pandas, df_from_sql_database)
| 36.675676 | 102 | 0.791452 | 220 | 1,357 | 4.481818 | 0.35 | 0.040568 | 0.078093 | 0.045639 | 0.093306 | 0.093306 | 0 | 0 | 0 | 0 | 0 | 0.008006 | 0.079587 | 1,357 | 36 | 103 | 37.694444 | 0.781425 | 0.202653 | 0 | 0 | 0 | 0 | 0.116387 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0 | false | 0 | 0.318182 | 0 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
e5900ab58ab8e7a67396a9fe2fe64e00c91a05bf | 448 | py | Python | min-blockchain/app/views/__init__.py | JoMingyu/Blockchain-py | 607e54c06eb9e4eb3667d18e32db736b2759ac46 | [
"MIT"
] | 12 | 2017-12-08T10:32:07.000Z | 2021-12-18T08:01:09.000Z | min-blockchain/app/views/__init__.py | JoMingyu/min-blockchain | 607e54c06eb9e4eb3667d18e32db736b2759ac46 | [
"MIT"
] | null | null | null | min-blockchain/app/views/__init__.py | JoMingyu/min-blockchain | 607e54c06eb9e4eb3667d18e32db736b2759ac46 | [
"MIT"
] | 5 | 2018-01-09T12:12:03.000Z | 2021-03-21T05:56:42.000Z | from flask_restful import Api
class ViewInjector:
def __init__(self, app=None):
if app is not None:
self.init_app(app)
def init_app(self, app):
from app.views.blockchain import Node, Chain, Mine, Transaction
api = Api(app)
api.add_resource(Node, '/node')
api.add_resource(Chain, '/chain')
api.add_resource(Mine, '/mine')
api.add_resource(Transaction, '/transaction')
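# A minimal usage sketch (hypothetical app factory, following the usual Flask
# extension pattern):
# from flask import Flask
# from app.views import ViewInjector
# app = Flask(__name__)
# ViewInjector(app)  # registers /node, /chain, /mine and /transaction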
| 24.888889 | 71 | 0.627232 | 58 | 448 | 4.655172 | 0.396552 | 0.088889 | 0.207407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.261161 | 448 | 17 | 72 | 26.352941 | 0.81571 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e594f58c382594d969ae896f651140e88f621f92 | 23,856 | py | Python | omd/versions/1.2.8p15.cre/share/check_mk/modules/packaging.py | NCAR/spol-nagios | 4f88bef953983050bc6568d3f1027615fbe223fb | [
"BSD-3-Clause"
] | null | null | null | omd/versions/1.2.8p15.cre/share/check_mk/modules/packaging.py | NCAR/spol-nagios | 4f88bef953983050bc6568d3f1027615fbe223fb | [
"BSD-3-Clause"
] | null | null | null | omd/versions/1.2.8p15.cre/share/check_mk/modules/packaging.py | NCAR/spol-nagios | 4f88bef953983050bc6568d3f1027615fbe223fb | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/python
# -*- encoding: utf-8; py-indent-offset: 4 -*-
# +------------------------------------------------------------------+
# | ____ _ _ __ __ _ __ |
# | / ___| |__ ___ ___| | __ | \/ | |/ / |
# | | | | '_ \ / _ \/ __| |/ / | |\/| | ' / |
# | | |___| | | | __/ (__| < | | | | . \ |
# | \____|_| |_|\___|\___|_|\_\___|_| |_|_|\_\ |
# | |
# | Copyright Mathias Kettner 2014 mk@mathias-kettner.de |
# +------------------------------------------------------------------+
#
# This file is part of Check_MK.
# The official homepage is at http://mathias-kettner.de/check_mk.
#
# check_mk is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation in version 2. check_mk is distributed
# in the hope that it will be useful, but WITHOUT ANY WARRANTY; with-
# out even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE. See the GNU General Public License for more de-
# tails. You should have received a copy of the GNU General Public
# License along with GNU Make; see the file COPYING. If not, write
# to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
# Boston, MA 02110-1301 USA.
import pprint, tarfile
try:
import simplejson as json
except ImportError:
import json
pac_ext = ".mkp"
class PackageException(Exception):
def __init__(self, reason):
self.reason = reason
def __str__(self):
return self.reason
if omd_root:
pac_dir = omd_root + "/var/check_mk/packages/"
else:
pac_dir = var_dir + "/packages/"
try:
os.makedirs(pac_dir)
except:
pass
# in case of local directories (OMD) use those instead
package_parts = [ (part, title, perm, ldir and ldir or dir) for part, title, perm, dir, ldir in [
( "checks", "Checks", 0644, checks_dir, local_checks_dir ),
( "notifications", "Notification scripts", 0755, notifications_dir, local_notifications_dir ),
( "inventory", "Inventory plugins", 0644, inventory_dir, local_inventory_dir ),
( "checkman", "Checks' man pages", 0644, check_manpages_dir, local_check_manpages_dir ),
( "agents", "Agents", 0755, agents_dir, local_agents_dir ),
( "web", "Multisite extensions", 0644, web_dir, local_web_dir ),
( "pnp-templates", "PNP4Nagios templates", 0644, pnp_templates_dir, local_pnp_templates_dir ),
( "doc", "Documentation files", 0644, doc_dir, local_doc_dir ),
( "bin", "Binaries", 0755, None, local_bin_dir ),
( "lib", "Libraries", 0644, None, local_lib_dir),
]]
package_ignored_files = {
"lib": [
"nagios/plugins/README.txt",
# it's a symlink to the nagios directory. All files would be doubled.
        # So better ignore this directory to prevent confusion.
"icinga/plugins",
],
}
def get_package_parts():
return [ p for p in package_parts if p[3] != None ]
def packaging_usage():
sys.stdout.write("""Usage: check_mk [-v] -P|--package COMMAND [ARGS]
Available commands are:
create NAME ... Collect unpackaged files into new package NAME
pack NAME ... Create package file from installed package
release NAME ... Drop installed package NAME, release packaged files
find ... Find and display unpackaged files
list ... List all installed packages
list NAME ... List files of installed package
list PACK.mkp ... List files of uninstalled package file
show NAME ... Show information about installed package
show PACK.mkp ... Show information about uninstalled package file
install PACK.mkp ... Install or update package from file PACK.mkp
remove NAME ... Uninstall package NAME
-v enables verbose output
Package files are located in %s.
""" % pac_dir)
def do_packaging(args):
if len(args) == 0:
packaging_usage()
sys.exit(1)
command = args[0]
args = args[1:]
commands = {
"create" : package_create,
"release" : package_release,
"list" : package_list,
"find" : package_find,
"show" : package_info,
"pack" : package_pack,
"remove" : package_remove,
"install" : package_install,
}
f = commands.get(command)
if f:
try:
f(args)
except PackageException, e:
sys.stderr.write("%s\n" % e)
sys.exit(1)
else:
allc = commands.keys()
allc.sort()
allc = [ tty_bold + c + tty_normal for c in allc ]
sys.stderr.write("Invalid packaging command. Allowed are: %s and %s.\n" %
(", ".join(allc[:-1]), allc[-1]))
sys.exit(1)
def package_list(args):
if len(args) > 0:
for name in args:
show_package_contents(name)
else:
if opt_verbose:
table = []
for pacname in all_package_names():
package = read_package_info(pacname)
table.append((pacname, package["title"], package["num_files"]))
print_table(["Name", "Title", "Files"], [ tty_bold, "", "" ], table)
else:
for pacname in all_package_names():
sys.stdout.write("%s\n" % pacname)
def package_info(args):
if len(args) == 0:
raise PackageException("Usage: check_mk -P show NAME|PACKAGE.mkp")
for name in args:
show_package_info(name)
def show_package_contents(name):
show_package(name, False)
def show_package_info(name):
show_package(name, True)
def show_package(name, show_info = False):
try:
if name.endswith(pac_ext):
tar = tarfile.open(name, "r:gz")
info = tar.extractfile("info")
package = parse_package_info(info.read())
else:
package = read_package_info(name)
if not package:
raise PackageException("No such package %s." % name)
if show_info:
sys.stdout.write("Package file: %s%s\n" % (pac_dir, name))
except PackageException:
raise
except Exception, e:
raise PackageException("Cannot open package %s: %s" % (name, e))
if show_info:
sys.stdout.write("Name: %s\n" % package["name"])
sys.stdout.write("Version: %s\n" % package["version"])
sys.stdout.write("Packaged on Check_MK Version: %s\n" % package["version.packaged"])
sys.stdout.write("Required Check_MK Version: %s\n" % package["version.min_required"])
sys.stdout.write("Title: %s\n" % package["title"])
sys.stdout.write("Author: %s\n" % package["author"])
sys.stdout.write("Download-URL: %s\n" % package["download_url"])
sys.stdout.write("Files: %s\n" % \
" ".join([ "%s(%d)" % (part, len(fs)) for part, fs in package["files"].items() ]))
sys.stdout.write("Description:\n %s\n" % package["description"])
else:
if opt_verbose:
sys.stdout.write("Files in package %s:\n" % name)
for part, title, perm, dir in get_package_parts():
files = package["files"].get(part, [])
if len(files) > 0:
sys.stdout.write(" %s%s%s:\n" % (tty_bold, title, tty_normal))
for f in files:
sys.stdout.write(" %s\n" % f)
else:
for part, title, perm, dir in get_package_parts():
for fn in package["files"].get(part, []):
sys.stdout.write(dir + "/" + fn + "\n")
def package_create(args):
if len(args) != 1:
raise PackageException("Usage: check_mk -P create NAME")
pacname = args[0]
if read_package_info(pacname):
raise PackageException("Package %s already existing." % pacname)
verbose("Creating new package %s...\n" % pacname)
filelists = {}
package = {
"title" : "Title of %s" % pacname,
"name" : pacname,
"description" : "Please add a description here",
"version" : "1.0",
"version.packaged" : check_mk_version,
"version.min_required" : check_mk_version,
"author" : "Add your name here",
"download_url" : "http://example.com/%s/" % pacname,
"files" : filelists
}
num_files = 0
for part, title, perm, dir in get_package_parts():
files = unpackaged_files_in_dir(part, dir)
filelists[part] = files
num_files += len(files)
if len(files) > 0:
verbose(" %s%s%s:\n" % (tty_bold, title, tty_normal))
for f in files:
verbose(" %s\n" % f)
write_package_info(package)
verbose("New package %s created with %d files.\n" % (pacname, num_files))
verbose("Please edit package details in %s%s%s\n" % (tty_bold, pac_dir + pacname, tty_normal))
def package_find(_no_args):
first = True
for part, title, perm, dir in get_package_parts():
files = unpackaged_files_in_dir(part, dir)
if len(files) > 0:
if first:
verbose("Unpackaged files:\n")
first = False
verbose(" %s%s%s:\n" % (tty_bold, title, tty_normal))
for f in files:
if opt_verbose:
sys.stdout.write(" %s\n" % f)
else:
sys.stdout.write("%s/%s\n" % (dir, f))
if first:
verbose("No unpackaged files found.\n")
def package_release(args):
if len(args) != 1:
raise PackageException("Usage: check_mk -P release NAME")
pacname = args[0]
pacpath = pac_dir + pacname
if not package_exists(pacname):
raise PackageException("No such package %s." % pacname)
package = read_package_info(pacname)
verbose("Releasing files of package %s into freedom...\n" % pacname)
if opt_verbose:
for part, title, perm, dir in get_package_parts():
filenames = package["files"].get(part, [])
if len(filenames) > 0:
verbose(" %s%s%s:\n" % (tty_bold, title, tty_normal))
for f in filenames:
verbose(" %s\n" % f)
remove_package_info(pacname)
def package_exists(pacname):
pacpath = pac_dir + pacname
return os.path.exists(pacpath)
def package_pack(args):
if len(args) != 1:
raise PackageException("Usage: check_mk -P pack NAME")
    # Make sure the user is not inside one of the data directories of Check_MK
p = os.path.abspath(os.curdir)
for dir in [var_dir] + [ dir for x,y,perm,dir in get_package_parts() ]:
if p == dir or p.startswith(dir + "/"):
raise PackageException("You are in %s!\n"
"Please leave the directories of Check_MK before creating\n"
"a packet file. Foreign files lying around here will mix up things." % p)
pacname = args[0]
package = read_package_info(pacname)
if not package:
raise PackageException("Package %s not existing or corrupt." % pacname)
tarfilename = "%s-%s%s" % (pacname, package["version"], pac_ext)
verbose("Packing %s into %s...\n" % (pacname, tarfilename))
create_mkp_file(package, file_name=tarfilename)
verbose("Successfully created %s\n" % tarfilename)
def create_mkp_file(package, file_name=None, file_object=None):
package["version.packaged"] = check_mk_version
def create_info(filename, size):
info = tarfile.TarInfo()
info.mtime = time.time()
info.uid = 0
info.gid = 0
info.size = size
info.mode = 0644
info.type = tarfile.REGTYPE
info.name = filename
return info
tar = tarfile.open(name=file_name, fileobj=file_object, mode="w:gz")
info_file = fake_file(pprint.pformat(package))
info = create_info("info", info_file.size())
tar.addfile(info, info_file)
info_file = fake_file(json.dumps(package))
info = create_info("info.json", info_file.size())
tar.addfile(info, info_file)
# Now pack the actual files into sub tars
for part, title, perm, dir in get_package_parts():
filenames = package["files"].get(part, [])
if len(filenames) > 0:
verbose(" %s%s%s:\n" % (tty_bold, title, tty_normal))
for f in filenames:
verbose(" %s\n" % f)
subtarname = part + ".tar"
subdata = os.popen("tar cf - --dereference --force-local -C '%s' %s" % (dir, " ".join(filenames))).read()
info = create_info(subtarname, len(subdata))
tar.addfile(info, fake_file(subdata))
tar.close()
def package_remove(args):
if len(args) != 1:
raise PackageException("Usage: check_mk -P remove NAME")
pacname = args[0]
package = read_package_info(pacname)
if not package:
raise PackageException("No such package %s." % pacname)
verbose("Removing package %s...\n" % pacname)
remove_package(package)
verbose("Successfully removed package %s.\n" % pacname)
def remove_package(package):
for part, title, perm, dir in get_package_parts():
filenames = package["files"].get(part, [])
if len(filenames) > 0:
verbose(" %s%s%s\n" % (tty_bold, title, tty_normal))
for fn in filenames:
verbose(" %s" % fn)
try:
path = dir + "/" + fn
os.remove(path)
verbose("\n")
except Exception, e:
if opt_debug:
raise
raise Exception("Cannot remove %s: %s\n" % (path, e))
os.remove(pac_dir + package["name"])
def create_package(package_info):
pacname = package_info["name"]
if package_exists(pacname):
raise PackageException("Packet already exists.")
validate_package_files(pacname, package_info["files"])
write_package_info(package_info)
def edit_package(pacname, new_package_info):
if not package_exists(pacname):
raise PackageException("No such package")
# Renaming: check for collision
if pacname != new_package_info["name"]:
if package_exists(new_package_info["name"]):
raise PackageException("Cannot rename package: a package with that name already exists.")
validate_package_files(pacname, new_package_info["files"])
remove_package_info(pacname)
write_package_info(new_package_info)
# Packaged files must either be unpackaged or already
# belong to that package
def validate_package_files(pacname, files):
packages = {}
for package_name in all_package_names():
packages[package_name] = read_package_info(package_name)
for part, title, perm, dir in get_package_parts():
validate_package_files_part(packages, pacname, part, dir, files.get(part, []))
def validate_package_files_part(packages, pacname, part, dir, rel_paths):
for rel_path in rel_paths:
path = dir + "/" + rel_path
if not os.path.exists(path):
raise PackageException("File %s does not exist." % path)
for other_pacname, other_package_info in packages.items():
for other_rel_path in other_package_info["files"].get(part, []):
if other_rel_path == rel_path and other_pacname != pacname:
raise PackageException("File %s does already belong to package %s" % (path, other_pacname))
def package_install(args):
if len(args) != 1:
raise PackageException("Usage: check_mk -P remove NAME")
path = args[0]
if not os.path.exists(path):
raise PackageException("No such file %s." % path)
return install_package(file_name = path)
def install_package(file_name=None, file_object=None):
tar = tarfile.open(name=file_name, fileobj=file_object, mode="r:gz")
package = parse_package_info(tar.extractfile("info").read())
verify_check_mk_version(package)
pacname = package["name"]
old_package = read_package_info(pacname)
if old_package:
verbose("Updating %s from version %s to %s.\n" % (pacname, old_package["version"], package["version"]))
update = True
else:
verbose("Installing %s version %s.\n" % (pacname, package["version"]))
update = False
# Before installing check for conflicts
keep_files = {}
for part, title, perm, dir in get_package_parts():
packaged = packaged_files_in_dir(part)
keep = []
keep_files[part] = keep
if update:
old_files = old_package["files"].get(part, [])
for fn in package["files"].get(part, []):
path = dir + "/" + fn
if update and fn in old_files:
keep.append(fn)
elif fn in packaged:
raise PackageException("File conflict: %s is part of another package." % path)
elif os.path.exists(path):
raise PackageException("File conflict: %s already existing." % path)
    # Now install files, but only unpack files explicitly listed
for part, title, perm, dir in get_package_parts():
filenames = package["files"].get(part, [])
if len(filenames) > 0:
verbose(" %s%s%s:\n" % (tty_bold, title, tty_normal))
for fn in filenames:
verbose(" %s\n" % fn)
# make sure target directory exists
if not os.path.exists(dir):
verbose(" Creating directory %s\n" % dir)
os.makedirs(dir)
tarsource = tar.extractfile(part + ".tar")
subtar = "tar xf - -C %s %s" % (dir, " ".join(filenames))
tardest = os.popen(subtar, "w")
while True:
data = tarsource.read(4096)
if not data:
break
tardest.write(data)
tardest.close()
# Fix permissions of extracted files
for filename in filenames:
path = dir + "/" + filename
has_perm = os.stat(path).st_mode & 07777
if has_perm != perm:
verbose(" Fixing permissions of %s: %04o -> %04o\n" % (path, has_perm, perm))
os.chmod(path, perm)
# In case of an update remove files from old_package not present in new one
if update:
for part, title, perm, dir in get_package_parts():
filenames = old_package["files"].get(part, [])
keep = keep_files.get(part, [])
for fn in filenames:
if fn not in keep:
path = dir + "/" + fn
verbose("Removing outdated file %s.\n" % path)
try:
os.remove(path)
except Exception, e:
sys.stderr.write("Error removing %s: %s\n" % (path, e))
# Last but not least install package file
write_package_info(package)
return package
# Checks that the minimum required Check_MK version is not newer than the
# current Check_MK version and raises an exception otherwise. When a Check_MK
# version cannot be parsed or is a daily build, the check simply passes.
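# For example (illustrative values): a package with version.min_required
# "1.2.6" passes on Check_MK "1.2.8p15", while "1.4.0" would raise a
# PackageException below.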
def verify_check_mk_version(package):
min_version = package["version.min_required"]
cmk_version = check_mk_version
if is_daily_build_version(min_version):
min_branch = branch_of_daily_build(min_version)
if min_branch == "master":
            return  # cannot check exact version
else:
# use the branch name (e.g. 1.2.8 as min version)
min_version = min_branch
if is_daily_build_version(cmk_version):
branch = branch_of_daily_build(cmk_version)
if branch == "master":
            return  # cannot check exact version
else:
# use the branch name (e.g. 1.2.8 as min version)
cmk_version = branch
compatible = True
try:
compatible = parse_check_mk_version(min_version) <= parse_check_mk_version(cmk_version)
except:
        # Be compatible: when a version cannot be parsed, skip this check
if opt_debug:
raise
return
if not compatible:
raise PackageException("The package requires Check_MK version %s, "
"but you have %s installed." % (min_version, cmk_version))
def files_in_dir(part, dir, prefix = ""):
if dir == None or not os.path.exists(dir):
return []
# Handle case where one part-dir lies below another
taboo_dirs = [ d for p, t, perm, d in get_package_parts() if p != part ]
if dir in taboo_dirs:
return []
result = []
files = os.listdir(dir)
for f in files:
if f in [ '.', '..' ] or f.startswith('.') or f.endswith('~'):
continue
ignored = package_ignored_files.get(part, [])
if prefix + f in ignored:
continue
path = dir + "/" + f
if os.path.isdir(path):
result += files_in_dir(part, path, prefix + f + "/")
else:
result.append(prefix + f)
result.sort()
return result
def unpackaged_files():
unpackaged = {}
for part, title, perm, dir in get_package_parts():
unpackaged[part] = unpackaged_files_in_dir(part, dir)
return unpackaged
def package_part_info():
part_info = {}
for part, title, perm, dir in get_package_parts():
part_info[part] = {
"title" : title,
"permission" : perm,
"path" : dir,
"files" : os.listdir(dir),
}
return part_info
def unpackaged_files_in_dir(part, dir):
all = files_in_dir(part, dir)
packed = packaged_files_in_dir(part)
return [ f for f in all if f not in packed ]
def packaged_files_in_dir(part):
result = []
for pacname in all_package_names():
package = read_package_info(pacname)
if package:
result += package["files"].get(part, [])
return result
def read_package_info(pacname):
try:
package = parse_package_info(file(pac_dir + pacname).read())
package["name"] = pacname # do not trust package content
num_files = sum([len(fl) for fl in package["files"].values() ])
package["num_files"] = num_files
return package
except IOError:
return None
except Exception:
verbose("Ignoring invalid package file '%s%s'. Please remove it from %s!\n" % (pac_dir, pacname, pac_dir))
return None
def write_package_info(package):
file(pac_dir + package["name"], "w").write(pprint.pformat(package) + "\n")
def remove_package_info(pacname):
os.remove(pac_dir + pacname)
def all_package_names():
all = [ p for p in os.listdir(pac_dir) if p not in [ '.', '..' ] ]
all.sort()
return all
def parse_package_info(python_string):
try:
# ast.literal_eval does not execute any code, just reads in passive
# data structures, so it is safe. But: not available on all supported
# Python versions
import ast
except:
return eval(python_string)
return ast.literal_eval(python_string)
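# For reference, a stored package info file (pac_dir/<name>) is a plain Python
# literal as written by write_package_info, e.g. (illustrative values,
# mirroring the dict built in package_create):
# {'author': 'Add your name here',
#  'description': 'Please add a description here',
#  'download_url': 'http://example.com/mypack/',
#  'files': {'checks': ['mycheck']},
#  'name': 'mypack',
#  'title': 'Title of mypack',
#  'version': '1.0',
#  'version.min_required': '1.2.8p15',
#  'version.packaged': '1.2.8p15'}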
| 36.255319 | 117 | 0.577088 | 2,928 | 23,856 | 4.534495 | 0.152322 | 0.006327 | 0.01898 | 0.019206 | 0.303909 | 0.242374 | 0.191911 | 0.170822 | 0.152444 | 0.137004 | 0 | 0.006929 | 0.304242 | 23,856 | 657 | 118 | 36.310502 | 0.792987 | 0.116239 | 0 | 0.310838 | 0 | 0 | 0.178356 | 0.002282 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.002045 | 0.010225 | null | null | 0.00818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e598f30af78d83c0850f21960df36978aef36582 | 335 | py | Python | ci_test_settings.py | cumanachao/utopia-crm | 6d648971c427ca9f380b15ed0ceaf5767b88e8b9 | [
"BSD-3-Clause"
] | 13 | 2020-12-14T19:56:04.000Z | 2021-11-06T13:24:48.000Z | ci_test_settings.py | cumanachao/utopia-crm | 6d648971c427ca9f380b15ed0ceaf5767b88e8b9 | [
"BSD-3-Clause"
] | 5 | 2020-12-14T19:56:30.000Z | 2021-09-22T22:09:39.000Z | ci_test_settings.py | cumanachao/utopia-crm | 6d648971c427ca9f380b15ed0ceaf5767b88e8b9 | [
"BSD-3-Clause"
] | 3 | 2021-03-24T03:55:08.000Z | 2022-01-13T15:22:34.000Z | # coding=utf-8
from datetime import date
from settings import *
DEBUG = True
ALLOWED_HOSTS = ['testserver', ]
DATABASES = {
'default': {
'HOST': '127.0.0.1',
'NAME': 'utopia',
'PASSWORD': 'citest',
'USER': 'utopiatest_django',
'ENGINE': 'django.contrib.gis.db.backends.postgis',
}
}
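# Typically selected on CI with something like
# "python manage.py test --settings=ci_test_settings" (the exact invocation is
# an assumption; it depends on the project's CI configuration).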
| 17.631579 | 59 | 0.573134 | 36 | 335 | 5.277778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028112 | 0.256716 | 335 | 18 | 60 | 18.611111 | 0.73494 | 0.035821 | 0 | 0 | 0 | 0 | 0.370717 | 0.11838 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.076923 | 0.153846 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e599c7f4a812fe28fcb19fc70378acc1a1ead3f3 | 1,396 | py | Python | trialswithotherCV/RDFRandomCV.py | devs4v/DecisionTreeAndRDF | 9e630df0354712e6496a0b679b4b417f14acc689 | [
"MIT"
] | 1 | 2016-05-28T17:34:09.000Z | 2016-05-28T17:34:09.000Z | trialswithotherCV/RDFRandomCV.py | devs4v/DecisionTreeAndRDF | 9e630df0354712e6496a0b679b4b417f14acc689 | [
"MIT"
] | null | null | null | trialswithotherCV/RDFRandomCV.py | devs4v/DecisionTreeAndRDF | 9e630df0354712e6496a0b679b4b417f14acc689 | [
"MIT"
] | null | null | null | ''' crossvalidation.py
# Author : Shivam Chaturvedi
# Last Modified : 06:25 PM, 10th September 2013
# Purpose : Perform random cross validation [Machine Learning](Checking accuracy of classified decisions using k-fold validation)
# Copyright : (C) 2013
'''
from random import random
import sys
from rdforest import *  # imports the RDForest classifier file (which includes the ID3 classifier it builds on)
numTrees = 10
fractionOfInstances = 0.6
a = raw_input("Building a Random Decision Forest with " + str(numTrees) + " Trees.\nHit Enter to Continue, or enter the number of Trees required and press enter\nAnswer:")
if a != "":
numTrees = int(a)
a = raw_input("Taking " + str(numTrees) + " fraction of Instances each time.\nHit Enter to Continue, or enter another fraction and press enter\nAnswer:")
if a != "":
fractionOfInstances = float(a)
attributes = readAttributes(sys.argv[1])
targetAttributes = readTargetAttributes(sys.argv[1])
instances = readInstances(sys.argv[2])
dtrees = makeRDForest(instances, attributes, targetAttributes, numTrees, fractionOfInstances)
#Making a test instance
testInstance = instances[int(random()*len(instances))]
actualAnswer = testInstance[-1]
testInstance = makeinstance(disassembleInstance(testInstance))
d = {1: "Correct", 0: "Incorrect"}
print "The classification is:",d[decideOnInstance(copy.deepcopy(dtrees), testInstance) == actualAnswer]
| 37.72973 | 171 | 0.758596 | 172 | 1,396 | 6.145349 | 0.593023 | 0.019868 | 0.017029 | 0.035951 | 0.092715 | 0.092715 | 0 | 0 | 0 | 0 | 0 | 0.020713 | 0.135387 | 1,396 | 36 | 172 | 38.777778 | 0.855012 | 0.073066 | 0 | 0.1 | 0 | 0.05 | 0.277401 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.15 | null | null | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5a15de438d9e84ae243630576c60f87a6909270 | 764 | py | Python | models.py | plumdog/mainstay_kanban | 1930573c9ca832c1e7f5883226b110929d331a85 | [
"MIT"
] | null | null | null | models.py | plumdog/mainstay_kanban | 1930573c9ca832c1e7f5883226b110929d331a85 | [
"MIT"
] | null | null | null | models.py | plumdog/mainstay_kanban | 1930573c9ca832c1e7f5883226b110929d331a85 | [
"MIT"
] | null | null | null | from django.db import models
from mainstay.models import UpdatedAndCreated
class TaskUsersManager(models.Manager):
def for_user(self, user):
return self.get_queryset().filter(models.Q(user=user) | models.Q(user=None))
class Task(UpdatedAndCreated, models.Model):
name = models.CharField(max_length=200)
content = models.TextField()
user = models.ForeignKey('auth.User', blank=True, null=True)
delegated_by = models.ForeignKey('auth.User', blank=True, null=True, related_name='delegated_tasks')
started_at = models.DateTimeField(blank=True, null=True)
completed_at = models.DateTimeField(blank=True, null=True)
created_at = models.DateTimeField()
updated_at = models.DateTimeField()
objects = TaskUsersManager()
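# A minimal usage sketch (hypothetical user; assumes migrations are applied):
# from django.contrib.auth.models import User
# user = User.objects.get(username='alice')
# Task.objects.for_user(user)  # tasks assigned to `user`, plus unassigned ones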
| 36.380952 | 104 | 0.742147 | 95 | 764 | 5.863158 | 0.473684 | 0.064632 | 0.093357 | 0.122083 | 0.283662 | 0.283662 | 0.283662 | 0.147217 | 0 | 0 | 0 | 0.004566 | 0.140052 | 764 | 20 | 105 | 38.2 | 0.843227 | 0 | 0 | 0 | 0 | 0 | 0.043194 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.133333 | 0.066667 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e5a6e1742578b7dddfcb7f9f3b116b9682fe0ba7 | 10,252 | py | Python | module_api/rogertests/test_accounts.py | rogertalk/roger-api | 7228d96910ab0a96af53ca2830115d5fc5e1de79 | [
"MIT"
] | 3 | 2019-05-19T22:21:44.000Z | 2020-04-26T01:58:55.000Z | module_api/rogertests/test_accounts.py | rogertalk/roger-api | 7228d96910ab0a96af53ca2830115d5fc5e1de79 | [
"MIT"
] | null | null | null | module_api/rogertests/test_accounts.py | rogertalk/roger-api | 7228d96910ab0a96af53ca2830115d5fc5e1de79 | [
"MIT"
] | null | null | null | import mock
from roger import accounts, streams
from roger_common import errors
import rogertests
class BaseTestCase(rogertests.RogerTestCase):
def setUp(self):
super(BaseTestCase, self).setUp()
# Make sure the bots are initialized during this test.
import roger.bots
reload(roger.bots)
class Creation(BaseTestCase):
def setUp(self):
super(Creation, self).setUp()
self.ricardo = accounts.create('ricardovice', status='active')
self.blixt = accounts.create('blixt', status='active')
def test_add_identifier(self):
        # Don't allow adding an identifier that is already taken.
with self.assertRaises(errors.AlreadyExists):
self.blixt.add_identifier('ricardovice')
self.assertItemsEqual(self.blixt.identifiers, ['blixt'])
self.blixt.add_identifier('ab')
# Validate that the account model was updated.
self.assertItemsEqual(self.blixt.identifiers, ['ab', 'blixt'])
# Also validate the data in the datastore.
self.assertItemsEqual(accounts.get_handler('blixt').identifiers, ['ab', 'blixt'])
# Ensure that "ab" is now unavailable.
with self.assertRaises(errors.AlreadyExists):
accounts.create('ab')
def test_creation(self):
bob = accounts.create('bob', status='active')
self.assertEqual(bob.identifiers, ['bob'])
# Don't allow creation of an existing account.
with self.assertRaises(errors.AlreadyExists):
accounts.create('bob')
def test_change_identifier(self):
# Don't allow changing identifier to something that's taken.
with self.assertRaises(errors.AlreadyExists):
self.blixt.change_identifier('blixt', 'ricardovice')
# Don't allow changing identifier that you don't own.
with self.assertRaises(errors.ForbiddenAction):
self.blixt.change_identifier('ricardovice', 'mfdoom')
# This shouldn't do anything, but make sure it doesn't cause errors.
self.blixt.change_identifier('blixt', 'blixt')
self.assertEqual(self.blixt.identifiers, ['blixt'])
self.blixt.change_identifier('blixt', 'ab')
self.assertEqual(self.blixt.identifiers, ['ab'])
# Ensure that "ab" is now unavailable.
with self.assertRaises(errors.AlreadyExists):
accounts.create('ab')
# Ensure that it's possible to change back.
self.blixt.change_identifier('ab', 'blixt')
self.assertEqual(self.blixt.identifiers, ['blixt'])
# Ensure that it's possible to use the now freed up identifier.
ab = accounts.create('ab')
self.assertEqual(ab.identifiers, ['ab'])
def test_remove_identifier(self):
zandra = accounts.create('zandra', status='active')
# Don't allow removing someone else's identifier.
with self.assertRaises(errors.ForbiddenAction):
self.blixt.remove_identifier('zandra')
# Don't allow removing the last identifier.
with self.assertRaises(errors.ForbiddenAction):
zandra.remove_identifier('zandra')
# Add and remove an identifier.
zandra.add_identifier('alexandra')
self.assertItemsEqual(zandra.identifiers, ['alexandra', 'zandra'])
zandra.remove_identifier('zandra')
self.assertItemsEqual(zandra.identifiers, ['alexandra'])
# Ensure that the identifier is available again.
zandra_2 = accounts.create('zandra', status='active')
self.assertItemsEqual(zandra_2.identifiers, ['zandra'])
# Ensure that both identifiers are now unavailable.
with self.assertRaises(errors.AlreadyExists):
zandra.change_identifier('alexandra', 'zandra')
with self.assertRaises(errors.AlreadyExists):
zandra_2.change_identifier('zandra', 'alexandra')
class Identifiers(BaseTestCase):
def test_brazil_number(self):
# Brazil has a special rule where a phone number can have two variants.
# Using a legacy number:
bruna = accounts.create('bruna', status='active')
bruna.add_identifier('+554467891234')
self.assertItemsEqual(bruna.identifiers, ['bruna', '+554467891234', '+5544967891234'])
# Using a converted number:
karyna = accounts.create('karyna', status='active')
karyna.add_identifier('+5522967891234')
self.assertItemsEqual(karyna.identifiers, ['karyna', '+552267891234', '+5522967891234'])
# Using a landline number (no additional number):
jully = accounts.create('jully', status='active')
jully.add_identifier('+552233891234')
self.assertItemsEqual(jully.identifiers, ['jully', '+552233891234'])
# Using a new mobile number (no legacy equivalent):
molinna = accounts.create('molinna', status='active')
molinna.add_identifier('+5522947891234')
self.assertItemsEqual(molinna.identifiers, ['molinna', '+5522947891234'])
# Takeover when temporary account has the legacy number:
anonymous = accounts.create('+554477881234', status='temporary')
pedro = accounts.create('pedro', status='active')
pedro.add_identifier('+5544977881234')
self.assertItemsEqual(pedro.identifiers, ['pedro', '+5544977881234', '+554477881234'])
class PasswordValidation(BaseTestCase):
def test_password_validation(self):
# Verify that accounts can set passwords.
blixt = accounts.create('blixt')
blixt.set_password('pa$$word')
# Verify that the password can be used to log in.
self.assertTrue(blixt.validate_password('pa$$word'))
# Ensure that an incorrect password does not work.
self.assertFalse(blixt.validate_password('invalid!'))
# Change the password.
blixt.set_password('new passw0rd')
# Make sure the old one doesn't work.
self.assertFalse(blixt.validate_password('pa$$word'))
# But the new one should work.
self.assertTrue(blixt.validate_password('new passw0rd'))
class StaticHandlers(BaseTestCase):
def setUp(self):
super(StaticHandlers, self).setUp()
@accounts.static_handler('hal9000')
class HAL9000(accounts.AccountHandler):
pass
self.hal9000_class = HAL9000
# Reserve an account for the "skynet" handler.
self.skynet = accounts.create('skynet')
self.skynet.add_identifier('+12345678')
@accounts.static_handler('skynet')
class Skynet(accounts.AccountHandler):
pass
self.skynet_class = Skynet
def tearDown(self):
super(StaticHandlers, self).tearDown()
accounts.unregister_static_handler('hal9000')
accounts.unregister_static_handler('skynet')
def test_static_handler(self):
handler = accounts.get_handler('hal9000')
# Ensure that we got the correct type of class.
self.assertIsInstance(handler, self.hal9000_class)
# Ensure that there is a "handler" property on the class pointing to the handler.
self.assertEqual(self.hal9000_class.handler, handler)
def test_static_handler_with_account(self):
# Ensure that the handler is found explicitly.
handler = accounts.get_handler('skynet')
self.assertIsInstance(handler, self.skynet_class)
# Ensure that the handler is found via an identifier.
handler = accounts.get_handler('+12345678')
self.assertIsInstance(handler, self.skynet_class)
# Ensure that the handler can be found via the account's key.
handler = accounts.get_handler(self.skynet.account.key)
self.assertIsInstance(handler, self.skynet_class)
# Ensure that the handler can also be found via the account id.
handler = accounts.get_handler(self.skynet.account_id)
self.assertIsInstance(handler, self.skynet_class)
class Status(BaseTestCase):
def setUp(self):
super(Status, self).setUp()
self.activations = 0
def incr(account):
self.activations += 1
accounts.activation_hooks['test_activation'] = incr
def tearDown(self):
super(Status, self).tearDown()
del accounts.activation_hooks['test_activation']
def test_activation_trigger_runs_once(self):
bob = accounts.create('bob')
self.assertEqual(self.activations, 0)
bob.change_status('active')
self.assertEqual(self.activations, 1)
bob.change_status('inactive')
self.assertEqual(bob.status, 'inactive')
bob.change_status('active')
self.assertEqual(self.activations, 1)
def test_change_status(self):
ricardo = accounts.create('ricardovice', status='requested')
self.assertEqual(ricardo.account.status, 'requested')
ricardo.change_status('inactive')
self.assertEqual(ricardo.account.status, 'inactive')
# Status must be valid.
with self.assertRaises(errors.InvalidArgument):
ricardo.change_status('yoloing')
with self.assertRaises(errors.InvalidArgument):
ricardo.change_status(None)
# Status may not be changed back to temporary etc.
with self.assertRaises(errors.ForbiddenAction):
ricardo.change_status('temporary')
self.assertEqual(ricardo.account.status, 'inactive')
def test_default(self):
bob = accounts.create('bob')
self.assertEqual(bob.account.status, 'temporary')
self.assertEqual(bob.identifiers, ['bob'])
def test_logging_in_activates(self):
arthur = accounts.create('arthur', status='invited')
arthur.create_session()
self.assertEqual(arthur.status, 'active')
self.assertEqual(self.activations, 1)
def test_logging_in_becomes_active(self):
don = accounts.create('don', status='inactive')
don.create_session()
self.assertEqual(don.status, 'active')
self.assertEqual(self.activations, 0)
def test_specific(self):
# Try specifying a status.
adam = accounts.create('adam', status='invited')
self.assertEqual(adam.account.status, 'invited')
self.assertEqual(adam.identifiers, ['adam'])
| 40.68254 | 96 | 0.668455 | 1,124 | 10,252 | 6.012456 | 0.191281 | 0.047647 | 0.038473 | 0.050015 | 0.417135 | 0.279077 | 0.19488 | 0.127109 | 0.109352 | 0.085528 | 0 | 0.029456 | 0.22181 | 10,252 | 251 | 97 | 40.844622 | 0.817623 | 0.175185 | 0 | 0.266667 | 0 | 0 | 0.104444 | 0 | 0 | 0 | 0 | 0 | 0.327273 | 1 | 0.127273 | false | 0.060606 | 0.030303 | 0 | 0.206061 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e5b1c845186aa3bee0cde053a74b6e227193f41a | 515 | py | Python | test_check_DB_ORM.py | leScandal/Training | b76b9c767be43928c707cc484a3181661249ad50 | [
"Apache-2.0"
] | null | null | null | test_check_DB_ORM.py | leScandal/Training | b76b9c767be43928c707cc484a3181661249ad50 | [
"Apache-2.0"
] | null | null | null | test_check_DB_ORM.py | leScandal/Training | b76b9c767be43928c707cc484a3181661249ad50 | [
"Apache-2.0"
] | null | null | null | import mysql.connector
from fixture.orm import ORMFixture
from model.group import Group
db = ORMFixture(host="127.0.0.1", database="addressbook", user="root", password="")
#connection = mysql.connector.connect(host="127.0.0.1", database = "addressbook", user = "root", password = "")
try:
l = db.get_group_list()
for item in l:
print(item)
print(len(l))
l1 = db.get_cont_in_gr(Group(id="114"))
for item in l1:
print(item)
print(len(l1))
finally:
pass #db.stop() | 25.75 | 111 | 0.642718 | 76 | 515 | 4.289474 | 0.5 | 0.08589 | 0.04908 | 0.055215 | 0.276074 | 0.276074 | 0.276074 | 0.276074 | 0.276074 | 0.276074 | 0 | 0.043689 | 0.2 | 515 | 20 | 112 | 25.75 | 0.747573 | 0.231068 | 0 | 0.133333 | 0 | 0 | 0.068354 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.133333 | 0.2 | 0 | 0.2 | 0.266667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e5b4546e3025f4f59055fb05e61367a75decdfc1 | 43,202 | py | Python | .kodi/addons/plugin.video.projectfreetv/default.py | C6SUMMER/allinclusive-kodi-pi | 8baf247c79526849c640c6e56ca57a708a65bd11 | [
"Apache-2.0"
] | null | null | null | .kodi/addons/plugin.video.projectfreetv/default.py | C6SUMMER/allinclusive-kodi-pi | 8baf247c79526849c640c6e56ca57a708a65bd11 | [
"Apache-2.0"
] | null | null | null | .kodi/addons/plugin.video.projectfreetv/default.py | C6SUMMER/allinclusive-kodi-pi | 8baf247c79526849c640c6e56ca57a708a65bd11 | [
"Apache-2.0"
] | 2 | 2018-04-17T17:34:39.000Z | 2020-07-26T03:43:33.000Z | import xbmc, xbmcgui, xbmcplugin
import urllib, urllib2
import re, string
try:
from addon.common.addon import Addon
from addon.common.net import Net
except:
print 'Failed to import script.module.addon.common'
xbmcgui.Dialog().ok("PFTV Import Failure", "Failed to import addon.common", "A component needed by PFTV is missing on your system", "Please visit www.xbmchub.com for support")
addon = Addon('plugin.video.projectfreetv', sys.argv)
net = Net()
try:
from metahandler import metahandlers
except:
print 'Failed to import script.module.metahandler'
xbmcgui.Dialog().ok("PFTV Import Failure", "Failed to import Metahandlers", "A component needed by PFTV is missing on your system", "Please visit www.xbmchub.com for support")
import xbmcvfs
dbg = False  # Set to True to enable debugging
#Common Cache
try:
import StorageServer
except:
import storageserverdummy as StorageServer
cache = StorageServer.StorageServer('plugin.video.projectfreetv')
##### Queries ##########
play = addon.queries.get('play', '')
mode = addon.queries['mode']
video_type = addon.queries.get('video_type', '')
section = addon.queries.get('section', '')
url = addon.queries.get('url', '')
title = addon.queries.get('title', '')
name = addon.queries.get('name', '')
imdb_id = addon.queries.get('imdb_id', '')
season = addon.queries.get('season', '')
episode = addon.queries.get('episode', '')
print '-----------------Project Free TV Addon Params------------------'
print '--- Version: ' + str(addon.get_version())
print '--- Mode: ' + str(mode)
print '--- Play: ' + str(play)
print '--- URL: ' + str(url)
print '--- Video Type: ' + str(video_type)
print '--- Section: ' + str(section)
print '--- Title: ' + str(title)
print '--- Name: ' + str(name)
print '--- IMDB: ' + str(imdb_id)
print '--- Season: ' + str(season)
print '--- Episode: ' + str(episode)
print '---------------------------------------------------------------'
################### Global Constants #################################
#URLS
website_url = addon.get_setting('website_url')
if website_url == "Custom URL":
custom_url = addon.get_setting('custom_url')
# if custom_url.endswith("/"):
# MainUrl = custom_url
# else:
# MainUrl = custom_url + "/"
MainUrl = custom_url
else:
MainUrl = website_url
SearchUrl = MainUrl + '/search/?q=%s&md=%s'
MoviePath = "/movies/"
MovieUrl = MainUrl + MoviePath
TVPath = "/internet/"
TVUrl = MainUrl + TVPath
#PATHS
AddonPath = addon.get_path()
IconPath = AddonPath + "/icons/"
#VARIABLES
SearchMovies = 'movies'
SearchTV = 'shows'
SearchAll = 'all'
VideoType_Movies = 'movie'
VideoType_TV = 'tvshow'
VideoType_Season = 'season'
VideoType_Episode = 'episode'
#################### Addon Settings ##################################
#Helper function to convert strings to boolean values
def str2bool(v):
return v.lower() in ("yes", "true", "t", "1")
meta_setting = str2bool(addon.get_setting('use-meta'))
######################################################################
def icon_path(filename):
return IconPath + filename
def get_html(page_url):
if addon.get_setting('proxy_enable') == 'true':
proxy = 'http://' + addon.get_setting('proxy') + ':' + addon.get_setting('proxy_port')
proxy_handler = urllib2.ProxyHandler({'http': proxy})
username = addon.get_setting('proxy_user')
password = addon.get_setting('proxy_pass')
        if username != '' and password != '':
print 'Using authenticated proxy: %s' % proxy
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, proxy, username, password)
proxy_auth_handler = urllib2.ProxyBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(proxy_handler, proxy_auth_handler)
else:
print 'Using proxy: %s' % proxy
opener = urllib2.build_opener(proxy_handler)
urllib2.install_opener(opener)
addon.log("Requesting URL: %s" % page_url)
html = net.http_GET(page_url).content
import HTMLParser
h = HTMLParser.HTMLParser()
html = h.unescape(html)
return html.encode('utf-8')
def Notify(typeq, box_title, message, times='', line2='', line3=''):
if box_title == '':
box_title='PTV Notification'
if typeq == 'small':
if times == '':
times='5000'
smallicon= icon_path('icon.png')
addon.show_small_popup(title=box_title, msg=message, delay=int(times), image=smallicon)
elif typeq == 'big':
addon.show_ok_dialog(message, title=box_title)
else:
addon.show_ok_dialog(message, title=box_title)
def setView(content, viewType):
# set content type so library shows more views and info
if content:
xbmcplugin.setContent(int(sys.argv[1]), content)
if addon.get_setting('auto-view') == 'true':
xbmc.executebuiltin("Container.SetViewMode(%s)" % addon.get_setting(viewType) )
# set sort methods - probably we don't need all of them
xbmcplugin.addSortMethod( handle=int( sys.argv[ 1 ] ), sortMethod=xbmcplugin.SORT_METHOD_UNSORTED )
xbmcplugin.addSortMethod( handle=int( sys.argv[ 1 ] ), sortMethod=xbmcplugin.SORT_METHOD_LABEL )
xbmcplugin.addSortMethod( handle=int( sys.argv[ 1 ] ), sortMethod=xbmcplugin.SORT_METHOD_VIDEO_RATING )
xbmcplugin.addSortMethod( handle=int( sys.argv[ 1 ] ), sortMethod=xbmcplugin.SORT_METHOD_DATE )
xbmcplugin.addSortMethod( handle=int( sys.argv[ 1 ] ), sortMethod=xbmcplugin.SORT_METHOD_PROGRAM_COUNT )
xbmcplugin.addSortMethod( handle=int( sys.argv[ 1 ] ), sortMethod=xbmcplugin.SORT_METHOD_VIDEO_RUNTIME )
xbmcplugin.addSortMethod( handle=int( sys.argv[ 1 ] ), sortMethod=xbmcplugin.SORT_METHOD_GENRE )
xbmcplugin.addSortMethod( handle=int( sys.argv[ 1 ] ), sortMethod=xbmcplugin.SORT_METHOD_MPAA_RATING )
def add_favourite():
saved_favs = cache.get('favourites_' + video_type)
favs = []
#Check for and remove COLOR tags which were added to titles
r = re.search('(.+?) \[COLOR red\]', name)
if r:
vidname = r.group(1)
else:
vidname = name
if saved_favs:
favs = eval(saved_favs)
if favs:
if (title, vidname, imdb_id, season, episode, url) in favs:
Notify('small', 'Favourite Already Exists', vidname.title() + ' already exists in your PFTV favourites','')
return
import urlparse
split_url = urlparse.urlsplit(url)
new_url = MainUrl + split_url.path
favs.append((title, vidname, imdb_id, season, episode, new_url))
cache.set('favourites_' + video_type, str(favs))
Notify('small', 'Added to favourites', vidname.title() + ' added to your PFTV favourites','')
def remove_favourite():
saved_favs = cache.get('favourites_' + video_type)
if saved_favs:
favs = eval(saved_favs)
import urlparse
split_url = urlparse.urlsplit(url)
new_url = MainUrl + split_url.path
favs.remove((title, name, imdb_id, season, episode, new_url))
cache.set('favourites_' + video_type, str(favs))
xbmc.executebuiltin("XBMC.Container.Refresh")
def refresh_movie(vidtitle, year=''):
metaget=metahandlers.MetaData()
search_meta = metaget.search_movies(vidtitle)
if search_meta:
movie_list = []
for movie in search_meta:
movie_list.append(movie['title'] + ' (' + str(movie['year']) + ')')
dialog = xbmcgui.Dialog()
index = dialog.select('Choose', movie_list)
if index > -1:
new_imdb_id = search_meta[index]['imdb_id']
new_tmdb_id = search_meta[index]['tmdb_id']
meta = metaget.update_meta('movie', vidtitle, imdb_id=imdb_id, new_imdb_id=new_imdb_id, new_tmdb_id=new_tmdb_id, year=year)
xbmc.executebuiltin("Container.Refresh")
else:
msg = ['No matches found']
addon.show_ok_dialog(msg, 'Refresh Results')
def refresh_tv(vidtitle, imdb_id):
metaget=metahandlers.MetaData()
show_list = metaget.get_tvdb_list(vidtitle)
name_list = []
filtered_show_list = []
for show in show_list:
(seriesid, SeriesName, IMDB_ID) = show
if IMDB_ID != None:
filtered_show_list.append([seriesid, SeriesName, IMDB_ID])
name_list.append(SeriesName)
dialog = xbmcgui.Dialog()
index = dialog.select('Choose', name_list)
if index > -1:
metaget.update_meta('tvshow', vidtitle, imdb_id, new_tmdb_id=filtered_show_list[index][0], new_imdb_id=filtered_show_list[index][2])
xbmc.executebuiltin("Container.Refresh")
Notify('small', 'Updated Metadata', filtered_show_list[index][1],'')
def episode_refresh(vidname, imdb, season_num, episode_num):
#refresh info for an episode
metaget=metahandlers.MetaData()
metaget.update_episode_meta(vidname, imdb, season_num, episode_num)
xbmc.executebuiltin("XBMC.Container.Refresh")
def season_refresh(vidname, imdb, season_num):
metaget=metahandlers.MetaData()
metaget.update_season(vidname, imdb, season_num)
xbmc.executebuiltin("XBMC.Container.Refresh")
def get_metadata(video_type, vidtitle, metaget=None, vidname='', year='', imdb='', season_list=None, season_num=0, episode_num=0):
if meta_setting:
#Get Meta settings
movie_covers = addon.get_setting('movie-covers')
tv_banners = addon.get_setting('tv-banners')
tv_posters = addon.get_setting('tv-posters')
movie_fanart = addon.get_setting('movie-fanart')
tv_fanart = addon.get_setting('tv-fanart')
if video_type in (VideoType_Movies, VideoType_TV):
meta = metaget.get_meta(video_type, vidtitle, year=year)
if video_type == VideoType_Season:
returnlist = True
if not season_list:
season_list = []
season_list.append(season_num)
returnlist = False
meta = metaget.get_seasons(vidtitle, imdb, season_list)
if not returnlist:
meta = meta[0]
if video_type == VideoType_Episode:
meta=metaget.get_episode_meta(vidtitle, imdb, season_num, episode_num)
#Check for and blank out covers if option disabled
if video_type==VideoType_Movies and movie_covers == 'false':
meta['cover_url'] = ''
elif video_type==VideoType_TV and tv_banners == 'false':
meta['cover_url'] = ''
#Check for banners vs posters setting
if video_type == VideoType_TV and tv_banners == 'true' and tv_posters == 'false':
meta['cover_url'] = meta['banner_url']
#Check for and blank out fanart if option disabled
if video_type==VideoType_Movies and movie_fanart == 'false':
meta['backdrop_url'] = ''
elif video_type in (VideoType_TV, VideoType_Episode) and tv_fanart == 'false':
meta['backdrop_url'] = ''
if not video_type == VideoType_Season:
            # Let's keep the name PFTV gives us instead of TVDB
meta['title'] = vidname
else:
meta = {}
meta['title'] = vidname
meta['cover_url'] = ''
meta['imdb_id'] = imdb
meta['backdrop_url'] = ''
meta['year'] = year
meta['overlay'] = 0
if video_type in (VideoType_TV, VideoType_Episode):
meta['TVShowTitle'] = vidtitle
return meta
def add_contextmenu(use_meta, video_type, link, vidtitle, vidname, favourite, watched='', imdb='', year='', season_num=0, episode_num=0):
contextMenuItems = []
contextMenuItems.append(('Info', 'XBMC.Action(Info)'))
#Check if we are listing items in the Favourites list
if favourite:
contextMenuItems.append(('Delete from PFTV Favourites', 'XBMC.RunPlugin(%s)' % addon.build_plugin_url({'mode': 'del_fav', 'video_type': video_type, 'title': vidtitle, 'name':vidname, 'url':link, 'imdb_id':imdb, 'season': season_num, 'episode': episode_num})))
else:
contextMenuItems.append(('Add to PFTV Favourites', 'XBMC.RunPlugin(%s)' % addon.build_plugin_url({'mode': 'add_fav', 'video_type': video_type, 'title': vidtitle, 'name':vidname, 'url':link, 'imdb_id':imdb, 'season': season_num, 'episode': episode_num})))
#Meta is turned on so enable extra context menu options
if use_meta:
if watched == 6:
watched_mark = 'Mark as Watched'
else:
watched_mark = 'Mark as Unwatched'
contextMenuItems.append((watched_mark, 'XBMC.RunPlugin(%s?mode=watch_mark&video_type=%s&title=%s&imdb_id=%s&season=%s&episode=%s)' % (sys.argv[0], video_type, vidtitle.decode('utf-8'), imdb, season_num, episode_num)))
contextMenuItems.append(('Refresh Metadata', 'XBMC.RunPlugin(%s?mode=refresh_meta&video_type=%s&title=%s&year=%s&season=%s&episode=%s)' % (sys.argv[0], video_type, vidtitle.decode('utf-8'), year, season_num, episode_num)))
#if video_type == VideoType_Movies:
#contextMenuItems.append(('Search for trailer', 'XBMC.RunPlugin(%s?mode=trailer_search&vidname=%s&url=%s)' % (sys.argv[0], title, link)))
return contextMenuItems
def add_video_directory(mode, video_type, link, vidtitle, vidname, metaget=None, imdb='', year='', season_num=0, totalitems=0, favourite=False):
meta = get_metadata(video_type, vidtitle, metaget=metaget, year=year, imdb=imdb, season_num=season_num)
contextMenuItems = add_contextmenu(meta_setting, video_type, link, vidtitle, vidname, favourite, watched=meta['overlay'], imdb=meta['imdb_id'], year=year, season_num=season_num)
meta['title'] = vidname
#With meta data on, set watched/unwatched values for a tv show
if meta_setting and video_type == VideoType_TV:
properties = {}
episodes_unwatched = str(int(meta['episode']) - meta['playcount'])
properties['UnWatchedEpisodes'] = episodes_unwatched
properties['WatchedEpisodes'] = str(meta['playcount'])
else:
properties = None
addon.add_directory({'mode': mode, 'url': link, 'video_type': VideoType_Season, 'imdb_id': meta['imdb_id'], 'title': vidtitle, 'name': vidname, 'season': season_num}, meta, properties=properties, contextmenu_items=contextMenuItems, context_replace=True, img=meta['cover_url'], fanart=meta['backdrop_url'], total_items=totalitems)
def add_video_item(video_type, section, link, vidtitle, vidname, metaget=None, year='', imdb='', season_num=0, episode_num=0, totalitems=0, favourite=False):
meta = get_metadata(video_type, vidtitle, metaget=metaget, vidname=vidname, year=year, imdb=imdb, season_num=season_num, episode_num=episode_num)
if video_type == VideoType_Movies:
contextMenuItems = add_contextmenu(meta_setting, video_type, link, vidtitle, meta['title'], favourite, watched=meta['overlay'], imdb=meta['imdb_id'], year=meta['year'])
else:
contextMenuItems = add_contextmenu(meta_setting, video_type, link, vidtitle, meta['title'], favourite, watched=meta['overlay'], imdb=meta['imdb_id'], season_num=season_num, episode_num=episode_num)
if video_type == VideoType_Movies:
addon.add_video_item({'url': link, 'video_type': video_type, 'section': section, 'title': vidtitle, 'name': vidname}, meta, contextmenu_items=contextMenuItems, context_replace=True, img=meta['cover_url'], fanart=meta['backdrop_url'], total_items=totalitems)
elif video_type == VideoType_Episode:
addon.add_video_item({'url': link, 'video_type': video_type, 'section': section, 'title': vidtitle, 'name': vidname}, meta, contextmenu_items=contextMenuItems, context_replace=True, img=meta['cover_url'], fanart=meta['backdrop_url'], total_items=totalitems)
# Create A-Z Menu
def AZ_Menu(type, url):
addon.add_directory({'mode': type,
'url': url + 'numeric.html', 'letter': '#'},{'title': '#'},
img=icon_path("0.png"))
for l in string.uppercase:
addon.add_directory({'mode': type,
'url': url + str(l.lower()) + '.html', 'letter': l}, {'title': l},
img=icon_path(l + ".png"))
# Get List of Movies from given URL
def GetMovieList(url):
if meta_setting:
metaget=metahandlers.MetaData()
else:
metaget=None
html = get_html(url)
match = re.compile('<td width="97%" class="mnlcategorylist"><a href="(.+?)"><b>(.+?)[ (]*([0-9]{0,4})[)]*</b></a>(.+?)<').findall(html)
for link, vidname, year, numlinks in match:
if re.search("../",link) is not None:
link = link.strip('\n').replace("../","")
newUrl = MovieUrl + link
else:
newUrl = url + "/" + link
add_video_item(VideoType_Movies, VideoType_Movies, newUrl, vidname, vidname, metaget=metaget, totalitems=len(match))
setView('movies', 'movie-view')
if play:
try:
import urlresolver
except:
addon.log_error("Failed to import script.module.urlresolver")
xbmcgui.Dialog().ok("PFTV Import Failure", "Failed to import URLResolver", "A component needed by PFTV is missing on your system", "Please visit www.xbmchub.com for support")
sources = []
html = get_html(url)
if section == 'movies':
#Check for trailers
match = re.compile('<a target="_blank" style="font-size:9pt" class="mnlcategorylist" href=".+?id=(.+?)">(.+?)</a> ').findall(html)
for linkid, vidname in match:
media = urlresolver.HostedMediaFile(host='youtube', media_id=linkid, title=vidname)
sources.append(media)
elif section == 'latestmovies':
#Search within HTML to only get portion of links specific to movie name
# TO DO - currently does not return enough of the header for the first link
r = re.search('<div>%s</div>(.+?)(<div>(?!%s)|<p align="center">)' % (re.escape(title), re.escape(title)), html, re.DOTALL)
if r:
html = r.group(0)
else:
html = ''
elif section in ('tvshows', 'episode'):
#Search within HTML to only get portion of links specific to episode requested
r = re.search('<td class="episode"><b>%s</b></td>(.+?)(<tr bgcolor="#E3E3E3">|<p align="center">)' % re.escape(name), html, re.DOTALL)
#r = re.search('<td class="episode"><a name=".+?"></a><b>%s</b>(.+?)(<a name=|<p align="center">)' % re.escape(name), html, re.DOTALL)
if r:
html = r.group(1)
else:
html = ''
#Now Add video source links
match = re.compile('<a class="mnllinklist" target="_blank" href="(.+?)">.+?Loading Time: <span class=".+?">(.+?)</span>[\r\n ]*<br />[\r\n ]*Host: (.+?)[\r\n ]*<br/>.+?class="report">.+?([0-9]*[0-9]%) Said Work', re.DOTALL).findall(html)
#match = re.compile('<a onclick=\'.+?\' href=".+?id%3D(.+?)&.+?" target=".+?<div>.+?(|part [0-9]* of [0-9]*)</div>.+?<span class=\'.*?\'>(.*?)</span>.+?Host: (.+?)<br/>.+?class="report">.+?([0-9]*[0-9]%) Said Work', re.DOTALL).findall(html)
links = []
for link, load, host, working in match:
#for linkid, vidname, load, host, working in match:
# if vidname:
# vidname = vidname.title()
# else:
# vidname = 'Full'
vidname = 'Full'
#media = urlresolver.HostedMediaFile(host=host, media_id=linkid, title=vidname + ' - ' + host + ' - ' + load + ' - ' + working)
#sources.append(media)
sources.append(vidname + ' - ' + host + ' - ' + load + ' - ' + working)
links.append(link)
dialog = xbmcgui.Dialog()
index = dialog.select('Choose your stream:', sources)
source = None
if index > -1:
html = get_html(MainUrl + links[index])
link = re.search('src="(.+?)".+?></iframe>', html, re.IGNORECASE)
if link:
source=link.group(1)
#source = urlresolver.choose_source(sources)
if source:
stream_url = urlresolver.resolve(source)
else:
stream_url = False
#Play the stream
if stream_url:
addon.resolve_url(stream_url)
if mode == 'main':
addon.add_directory({'mode': 'movies', 'section': 'movies'}, {'title': 'Movies'}, img=icon_path('Movies.png'))
addon.add_directory({'mode': 'tv', 'section': 'tv'}, {'title': 'TV Shows'}, img=icon_path('TV_Shows.png'))
addon.add_directory({'mode': 'search', 'section': SearchAll}, {'title': 'Search All'}, img=icon_path('Search.png'))
addon.add_directory({'mode': 'resolver_settings'}, {'title': 'Resolver Settings'}, is_folder=False, img=icon_path('Settings.png'))
setView(None, 'default-view')
elif mode == 'movies':
addon.add_directory({'mode': 'favourites', 'video_type': VideoType_Movies}, {'title': 'Favourites'}, img=icon_path("Favourites.png"))
addon.add_directory({'mode': 'movieslatest', 'section': 'movieslatest'}, {'title': 'Latest Added Links'}, img=icon_path("Latest_Added.png"))
addon.add_directory({'mode': 'moviesaz', 'section': 'moviesaz'}, {'title': 'A-Z'}, img=icon_path("AZ.png"))
addon.add_directory({'mode': 'moviesgenre', 'section': 'moviesgenre'}, {'title': 'Genre'}, img=icon_path('Genre.png'))
addon.add_directory({'mode': 'moviesyear', 'section': 'moviesyear'}, {'title': 'Year'}, img=icon_path('Year.png'))
addon.add_directory({'mode': 'search', 'section': SearchMovies}, {'title': 'Search'}, img=icon_path('Search.png'))
setView(None, 'default-view')
elif mode == 'moviesaz':
AZ_Menu('movieslist', MovieUrl + 'browse/')
setView(None, 'default-view')
elif mode == 'moviesgenre':
url = MovieUrl
html = get_html(url)
match = re.compile('<a class ="genre" href="/(.+?)"><b>(.+?)</b></a><b>').findall(html)
# Add each link found as a directory item
for link, genre in match:
addon.add_directory({'mode': 'movieslist', 'url': MainUrl + link, 'section': 'movies'}, {'title': genre})
setView(None, 'default-view')
elif mode == 'movieslatest':
if meta_setting:
metaget=metahandlers.MetaData()
else:
metaget=None
latestlist = []
url = MovieUrl
html = get_html(url)
match = re.compile('''<a onclick='visited.+?' href=".+?" target=.+?<div>(.+?)</div>''',re.DOTALL).findall(html)
for vidname in match:
latestlist.append(vidname)
#convert list to a set which removes duplicates, then back to a list
latestlist = list(set(latestlist))
for movie in latestlist:
add_video_item(VideoType_Movies, 'latestmovies', url, movie, movie, metaget=metaget, totalitems=len(match))
setView('movies', 'movie-view')
elif mode == 'moviesyear':
url = MovieUrl
html = get_html(url)
match = re.compile('''<td width="97%" nowrap="true" class="mnlcategorylist"><a href="(.+?)"><b>(.+?)</b></a></td>''').findall(html)
# Add each link found as a directory item
for link, year in match:
addon.add_directory({'mode': 'movieslist', 'url': url + urllib.quote(link), 'section': 'movies'}, {'title': year})
setView(None, 'default-view')
elif mode == 'movieslist':
GetMovieList(url)
elif mode == 'tv':
addon.add_directory({'mode': 'favourites', 'video_type': VideoType_TV}, {'title': 'Favourites'}, img=icon_path("Favourites.png"))
addon.add_directory({'mode': 'tvseries_upc', 'section': 'tvseries_upc'}, {'title': 'Upcoming Episodes'}, img=icon_path('Upcoming.png'))
addon.add_directory({'mode': 'tvlastadded', 'section': 'tv24hours', 'url': TVUrl + 'index_last.html'}, {'title': 'Last 24 Hours'}, img=icon_path('Last_24_Hours.png'))
addon.add_directory({'mode': 'tvlastadded', 'section': 'tv3days', 'url': TVUrl + 'index_last_3_days.html'}, {'title': 'Last 3 Days'}, img=icon_path('Last_3_Days.png'))
addon.add_directory({'mode': 'tvlastadded', 'section': 'tv7days', 'url': TVUrl + 'index_last_7_days.html'}, {'title': 'Last 7 Days'}, img=icon_path('Last_7_Days.png'))
addon.add_directory({'mode': 'tvlastadded', 'section': 'tvmonth', 'url': TVUrl + 'index_last_30_days.html'}, {'title': 'This Month'}, img=icon_path('This_Month.png'))
addon.add_directory({'mode': 'tvlastadded', 'section': 'tv90days', 'url': TVUrl + 'index_last_365_days.html'}, {'title': 'Last 90 Days'}, img=icon_path('Last_90_Days.png'))
addon.add_directory({'mode': 'tvpopular', 'section': 'tvpopular'}, {'title': 'Popular'}, img=icon_path('Popular.png'))
addon.add_directory({'mode': 'tvseries_all', 'section': 'tvseries_all'}, {'title': 'All'}, img=icon_path('All_Shows.png'))
addon.add_directory({'mode': 'tvaz', 'section': 'tvaz'}, {'title': 'A-Z'}, img=icon_path("AZ.png"))
addon.add_directory({'mode': 'search', 'section': SearchTV}, {'title': 'Search'}, img=icon_path('Search.png'))
setView(None, 'default-view')
elif mode == 'tvaz':
AZ_Menu('tvseries-az',TVUrl)
setView(None, 'default-view')
elif mode == 'tvseries-az':
url = TVUrl
letter = addon.queries['letter']
if meta_setting:
metaget=metahandlers.MetaData()
else:
metaget=None
html = get_html(url)
r = re.search('<a name="%s">(.+?)(<a name=|</table>)' % letter, html, re.DOTALL)
if r:
match = re.compile('class="mnlcategorylist"><a href="(.+?)"><b>(.+?)</b></a> (<sub>New Episode!</sub>|)</td>').findall(r.group(1))
for link, vidtitle, newep in match:
vidname = vidtitle
if newep:
vidname = vidtitle + ' [COLOR red]New Episode![/COLOR]'
add_video_directory('tvseasons', VideoType_TV, TVUrl + link, vidtitle, vidname, metaget=metaget, totalitems=len(match))
setView('tvshows', 'tvshow-view')
elif mode == 'tvseries_all':
if meta_setting:
metaget=metahandlers.MetaData()
else:
metaget=None
url = TVUrl
html = get_html(url)
match = re.compile('class="mnlcategorylist"><a href="(.+?)"><b>(.+?)</b></a> (<sub>New Episode!</sub>|)</td>').findall(html)
for link, vidtitle, newep in match:
vidname = vidtitle
if newep:
vidname = vidtitle + ' [COLOR red]New Episode![/COLOR]'
add_video_directory('tvseasons', VideoType_TV, TVUrl + link, vidtitle, vidname, metaget=metaget, totalitems=len(match))
setView('tvshows', 'tvshow-view')
elif mode == 'tvseries_upc':
if meta_setting:
metaget=metahandlers.MetaData()
else:
metaget=None
url = TVUrl
html = get_html(url)
html = re.search('<link rel="stylesheet" href="/css/schedule1.css" type="text/css">(.+?)</table>', html, re.DOTALL)
if html:
today = False
sched_match = re.compile('<td width="14%" class="(schedule[ today]*)">[\r\n\t ]*<span class="sheader">(.+?)</span>[\r\n\t ]*<span class="sdate">(.+?)</span>(.+?)</td>', re.DOTALL).findall(html.group(1))
for schedule, day, date, episodes in sched_match:
if schedule == 'schedule today':
today = True
addon.add_directory({'mode': 'none'}, {'title': '[COLOR red] %s %s[/COLOR]' % (day.strip(), date.strip())}, is_folder=False, img='')
elif not schedule == 'schedule today' and today:
addon.add_directory({'mode': 'none'}, {'title': '[COLOR blue] %s %s[/COLOR]' % (day.strip(), date.strip())}, is_folder=False, img='')
if today:
#ep_match = re.compile('<a href=\'/(.+?)\' title="(.+? S([0-9]+)E([0-9]+) .+?)" class=\'epp\'>(.+?)</a>').findall(episodes)
ep_match = re.compile('<a class="epp" href="(.+?)">(.+?)</a>').findall(episodes)
#for link, vidname, season_num, episode_num, vidtitle in ep_match:
for link, vidtitle in ep_match:
#Since we are getting season level items, try to grab the imdb_id of the TV Show first to make meta get easier
if meta_setting:
meta = get_metadata(VideoType_TV, vidtitle, metaget=metaget)
imdb = meta['imdb_id']
else:
imdb = ''
#They give a link to the show, but not the correct season, let's fix that
#new_url = MainUrl + link + 'season_' + str(int(season_num)) + '.html'
add_video_directory('tvseasons', VideoType_Season, MainUrl + link, vidtitle, vidtitle, metaget=metaget, imdb=imdb, totalitems=(len(sched_match) * len(ep_match)))
setView('seasons', 'season-view')
elif mode == 'tvlastadded':
if meta_setting:
metaget=metahandlers.MetaData()
else:
metaget=None
html = get_html(url)
#full_match = re.compile('class="mnlcategorylist"><a href="(.+?)#.+?"><b>((.+?) - Season ([0-9]+) Episode ([0-9]+)) <').findall(html)
full_match = re.compile('class="mnlcategorylist">[\r\n ]*<a href="(.+?)[#]*.*?">[\r\n ]*<b>((.+?) - Season ([0-9]+) Episode ([0-9]+))<', re.DOTALL).findall(html)
match = re.compile('<a name="*(.+?)"></a>(.+?)(?:<td colspan="2">|</table>)', re.DOTALL).findall(html)
for added_date, inside_html in match:
addon.add_directory({'mode': 'none'}, {'title': '[COLOR blue]' + added_date + '[/COLOR]'}, is_folder=False, img='')
inside_match = re.compile('class="mnlcategorylist">[\r\n ]*<a href="(.+?)">[\r\n ]*<b>((.+?) [\(]*([0-9]{0,4})[\) ]*- Season ([0-9]+) Episode ([0-9]+))<').findall(inside_html)
for link, vidname, vidtitle, year, season_num, episode_num in inside_match:
#Since we are getting season level items, try to grab the imdb_id of the TV Show first to make meta get easier
if meta_setting:
meta = get_metadata(VideoType_TV, vidtitle, metaget=metaget, year=year)
imdb = meta['imdb_id']
else:
imdb = ''
#They give a link to the show, but not always to the correct season, let's fix that
import urlparse
split_url = urlparse.urlsplit(link)
if not split_url.fragment:
link = link + '/season_' + str(int(season_num)) + '.html'
if link.startswith(TVPath):
newLink = MainUrl + link
else:
newLink = TVUrl + link
add_video_directory('tvepisodes', VideoType_Season, newLink, vidtitle, vidname, metaget=metaget, imdb=imdb, season_num=season_num, totalitems=len(full_match))
setView('seasons', 'season-view')
elif mode == 'tvpopular':
if meta_setting:
metaget=metahandlers.MetaData()
else:
metaget=None
url = MainUrl
html = get_html(url)
match = re.compile('<td class="tleft".*?><a href="(.+?)/">(.+?)</a></td>').findall(html)
for link, vidname in match:
is_tv = re.search('/internet/', link)
if vidname != "...more" and is_tv:
add_video_directory('tvseasons', VideoType_TV, link, vidname, vidname, metaget=metaget, totalitems=len(match))
setView('tvshows', 'tvshow-view')
elif mode == 'tvseasons':
if meta_setting:
metaget=metahandlers.MetaData()
else:
metaget=None
if not url.startswith('http'):
url = MainUrl + url
html = get_html(url)
match = re.compile('class="mnlcategorylist">.*?<a href="(.+?)"><b>(.+?)</b></a>(.+?)<', re.DOTALL).findall(html)
seasons = re.compile('class="mnlcategorylist">.*?<a href=".+?"><b>Season ([0-9]+)</b></a>.+?<', re.DOTALL).findall(html)
#seasons = list(xrange(len(match)))
#If we have more matches than seasons found then we might have an extra 'special' season, add it as Season '0'
if len(match) > len(seasons):
seasons.insert(0,'0')
season_meta = {}
if meta_setting:
season_meta = get_metadata(video_type, title, metaget=metaget, imdb=imdb_id, season_list=seasons)
else:
meta = {}
meta['TVShowTitle'] = title
meta['cover_url'] = ''
meta['imdb_id'] = ''
meta['backdrop_url'] = ''
meta['overlay'] = 0
num = 0
for link, season_num, episodes in match:
is_season = re.search('Season ([0-9]+)', season_num)
if season_meta and is_season:
meta = season_meta[num]
else:
num = num - 1
meta = {}
meta['TVShowTitle'] = title
meta['cover_url'] = ''
meta['imdb_id'] = ''
meta['backdrop_url'] = ''
meta['overlay'] = 0
meta['title'] = season_num + episodes
link = MainUrl + link
contextMenuItems = add_contextmenu(meta_setting, video_type, link, title, meta['title'], favourite=False, watched=meta['overlay'], imdb=meta['imdb_id'], season_num=seasons[num])
addon.add_directory({'mode': 'tvepisodes', 'url': link, 'video_type': VideoType_Season, 'imdb_id': meta['imdb_id'], 'title': title, 'name': meta['title'], 'season': seasons[num]}, meta, contextmenu_items=contextMenuItems, context_replace=True, img=meta['cover_url'], fanart=meta['backdrop_url'], total_items=len(match))
num = num + 1
setView('seasons', 'season-view')
elif mode == 'tvepisodes':
if meta_setting:
metaget=metahandlers.MetaData()
else:
metaget=None
html = get_html(url)
match = re.compile('<td class="episode">(.*?)<b>(.+?)</b></td>[\r\n ]*<td align="right" class="mnllinklist">[\r\n ]*<div class="right">.+Air Date: (.+?)</div>').findall(html)
#match = re.compile('<td class="episode">.*?<a name=".+?"></a>(.*?)<b>(.+?)</b></td>[\r\n\t]*(<td align="right".+?Air Date: (.*?)</div>)*', re.DOTALL).findall(html)
for next_episode, vidname, next_air in match:
print vidname
episode_num = re.search('([0-9]{0,2})\.', vidname)
if episode_num:
episode_num = episode_num.group(1)
else:
episode_num = 0
if not next_episode:
add_video_item(VideoType_Episode, VideoType_Episode, url, title, vidname, metaget=metaget, imdb=imdb_id, season_num=season, episode_num=episode_num, totalitems=len(match))
else:
meta = get_metadata(VideoType_Episode, title, metaget=metaget, vidname=vidname, imdb=imdb_id, season_num=season, episode_num=episode_num)
if next_air:
meta['title'] = '[COLOR blue]Next Episode: %s - %s[/COLOR]' % (next_air, vidname)
addon.add_directory({'mode': 'none'}, meta, is_folder=False, img=meta['cover_url'], fanart=meta['backdrop_url'])
setView('episodes', 'episode-view')
elif mode == 'search':
if meta_setting:
metaget=metahandlers.MetaData()
else:
metaget=None
index = 0
search_text = ""
search_list = []
new_search = False
#Check first if a 'name' has been passed in - signals an adhoc search request
if name:
new_search = True
search_text = name
else:
search_hist = cache.get('search_' + section)
#Convert returned string back into a list
if search_hist:
try:
search_list = eval(search_hist)
except:
search_list.insert(0, search_hist)
#If we have historical search items, prompt the user with list
if search_list:
dialog = xbmcgui.Dialog()
#Add a place holder at list index 0 for allowing user to do new searches
tmp_search_list = list(search_list)
tmp_search_list.insert(0, 'New Search')
index = dialog.select('Select Search', tmp_search_list)
#If index is 0 user selected New Search, if greater than user selected existing item
if index > 0:
search_text = tmp_search_list[index]
elif index == 0:
new_search = True
#If a new search is required, bring up the keyboard
if (not search_text and not index == -1) or new_search:
kb = xbmc.Keyboard(search_text, 'Search Project Free TV - %s' % section.capitalize(), False)
kb.doModal()
if (kb.isConfirmed()):
search_text = kb.getText()
#If we have some text to search by, lets do it
if search_text:
#Add to our search history only if it doesn't already exist
if search_text not in search_list:
search_list.insert(0, search_text)
#Lets keep just 10 search history items at a time
if len(search_list) > 10:
del search_list[10]
#Write the list back to cache
cache.set('search_' + section, str(search_list))
search_quoted = urllib.quote(search_text)
url = SearchUrl % (search_quoted, section)
html = get_html(url)
#match = re.compile('<td width="97%" class="mnlcategorylist">[\r\n\t]*<a href="(.+?)">[\r\n\t]*<b>(.+?)[ (]*([0-9]{0,4})[)]*</b>').findall(html)
match = re.compile('<td width="99%" class="mnlcategorylist"><a href="(.+?)"><b>(.+?)</b></a>').findall(html)
if match:
#for link, vidname, year in match:
for link, vidname in match:
link = MainUrl + link
if re.search('/movies/', link):
#add_video_item(VideoType_Movies, VideoType_Movies, link, vidname, vidname, metaget=metaget, year=year, totalitems=len(match))
add_video_item(VideoType_Movies, VideoType_Movies, link, vidname, vidname, metaget=metaget, totalitems=len(match))
else:
#add_video_directory('tvseasons', VideoType_TV, link, vidname, vidname, metaget=metaget, year=year, totalitems=len(match))
add_video_directory('tvseasons', VideoType_TV, link, vidname, vidname, metaget=metaget, totalitems=len(match))
else:
Notify('small', 'No Results', 'No search results found','')
setView(None, 'default-view')
elif mode == 'favourites':
if meta_setting:
metaget=metahandlers.MetaData()
else:
metaget=None
#Add Season/Episode sub folders
if video_type == VideoType_TV:
addon.add_directory({'mode': 'favourites', 'video_type': VideoType_Season}, {'title': '[COLOR blue]Seasons[/COLOR]'})
addon.add_directory({'mode': 'favourites', 'video_type': VideoType_Episode}, {'title': '[COLOR blue]Episodes[/COLOR]'})
#Grab saved favourites from DB and populate list
saved_favs = cache.get('favourites_' + video_type)
if saved_favs:
favs = sorted(eval(saved_favs), key=lambda fav: fav[1])
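#each fav tuple is (title, name, imdb_id, season, episode, url), which is how the fav[n] indices below are used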
for fav in favs:
import urlparse
split_url = urlparse.urlsplit(fav[5])
new_url = MainUrl + split_url.path
if video_type in (VideoType_Movies, VideoType_Episode):
add_video_item(video_type, video_type, new_url, fav[0], fav[1], metaget=metaget, imdb=fav[2], season_num=fav[3], episode_num=fav[4], totalitems=len(favs), favourite=True)
elif video_type == VideoType_TV:
add_video_directory('tvseasons', video_type, new_url, fav[0], fav[1], metaget=metaget, imdb=fav[2], season_num=fav[3], totalitems=len(favs), favourite=True)
elif video_type == VideoType_Season:
add_video_directory('tvepisodes', video_type, new_url, fav[0], fav[1], metaget=metaget, imdb=fav[2], season_num=fav[3], totalitems=len(favs), favourite=True)
setView(video_type +'s', video_type + '-view')
elif mode == 'add_fav':
add_favourite()
elif mode == 'del_fav':
remove_favourite()
elif mode == 'refresh_meta':
if video_type == VideoType_Movies:
refresh_movie(title)
elif video_type == VideoType_TV:
refresh_tv(title, imdb_id)
elif video_type == VideoType_Season:
season_refresh(title, imdb_id, season)
elif video_type == VideoType_Episode:
episode_refresh(title, imdb_id, season, episode)
elif mode == 'watch_mark':
metaget=metahandlers.MetaData()
metaget.change_watched(video_type, title, imdb_id, season=season, episode=episode)
xbmc.executebuiltin("Container.Refresh")
elif mode == 'resolver_settings':
import urlresolver
urlresolver.display_settings()
elif mode=='meta_settings':
print "Metahandler Settings"
import metahandler
metahandler.display_settings()
elif mode=='delete_favs':
dialog = xbmcgui.Dialog()
ret = dialog.yesno('Delete Favourites', 'Do you wish to delete %s PFTV Favourites?' % video_type.upper(), '','This cannot be undone!')
if ret == True:
addon.log("Deleting favourites: %s" % video_type)
if video_type == 'all':
cache.delete('favourites_%s' % VideoType_Movies)
cache.delete('favourites_%s' % VideoType_TV)
cache.delete('favourites_%s' % VideoType_Season)
cache.delete('favourites_%s' % VideoType_Episode)
else:
cache.delete('favourites_%s' % video_type)
Notify('small', 'PFTV Favourites', 'PFTV %s Favourites Deleted' % video_type.title())
elif mode=='delete_search_history':
dialog = xbmcgui.Dialog()
ret = dialog.yesno('Delete Search History', 'Do you wish to delete PFTV Search History', '','This cannot be undone!')
if ret == True:
addon.log("Deleting search history")
try:
cache.delete('search_all')
cache.delete('search_movies')
cache.delete('search_shows')
Notify('small', 'PFTV History', 'PFTV Search History Deleted')
except Exception, e:
addon.log("Failed to delete search history: %s" % e)
Notify('big', 'PFTV History', 'Error deleting PFTV search history')
if not play:
addon.end_of_directory() | 43.949135 | 334 | 0.601083 | 5,182 | 43,202 | 4.862601 | 0.109224 | 0.026431 | 0.022264 | 0.027502 | 0.494127 | 0.423605 | 0.385943 | 0.32832 | 0.297603 | 0.257997 | 0 | 0.005679 | 0.241841 | 43,202 | 983 | 335 | 43.949135 | 0.763632 | 0.09944 | 0 | 0.330914 | 0 | 0.020319 | 0.208577 | 0.048274 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.007257 | 0.033382 | null | null | 0.027576 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5bd6851f61673a18a9f645f842878ce00a27be8 | 2,549 | py | Python | www/apis.py | yumaojun03/blog-python-app | 92ecad4c693090d67351d022e47b2d5be901d25d | [
"FTL"
] | 200 | 2015-09-16T15:47:00.000Z | 2021-01-14T07:45:04.000Z | www/apis.py | yumaojun03/blog-python-app | 92ecad4c693090d67351d022e47b2d5be901d25d | [
"FTL"
] | 7 | 2015-12-06T16:41:34.000Z | 2018-04-10T02:43:55.000Z | www/apis.py | yumaojun03/blog-python-app | 92ecad4c693090d67351d022e47b2d5be901d25d | [
"FTL"
] | 274 | 2015-09-10T07:23:22.000Z | 2020-10-17T06:35:18.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Implements a RESTful API that exchanges data in JSON format.
Design rationale:
    Since the API encapsulates all of the web app's functionality, operating
    on the data through the API largely decouples the front-end from the
    back-end code, which makes the back-end code easier to test and the
    front-end code simpler to write.
Implementation:
    An API is also just a URL handler function. We want to turn a function
    into a JSON-formatted REST API simply by applying an @api decorator,
    so we implement a decorator that converts the data returned by the
    function into JSON format.
"""
import json
import logging
import functools
from transwarp.web import ctx
def dumps(obj):
    """
    Serialize ``obj`` to a JSON formatted ``str``.
    """
    return json.dumps(obj)
class APIError(StandardError):
    """
    The base APIError, which contains error (required), data (optional) and message (optional).
    It holds the data carried by every API exception object.
    """
    def __init__(self, error, data='', message=''):
        super(APIError, self).__init__(message)
        self.error = error
        self.data = data
        self.message = message
class APIValueError(APIError):
    """
    Indicates that the input value is erroneous or invalid. The data specifies the error field of the input form.
    Exception object for invalid input.
    """
    def __init__(self, field, message=''):
        super(APIValueError, self).__init__('value:invalid', field, message)
class APIResourceNotFoundError(APIError):
    """
    Indicates that the resource was not found. The data specifies the resource name.
    Exception object for a missing resource.
    """
    def __init__(self, field, message=''):
        super(APIResourceNotFoundError, self).__init__('value:notfound', field, message)
class APIPermissionError(APIError):
    """
    Indicates that the api has no permission.
    Exception object for permission errors.
    """
    def __init__(self, message=''):
        super(APIPermissionError, self).__init__('permission:forbidden', 'permission', message)
def api(func):
    """
    A decorator that turns a function into a JSON API, serializing its return value as JSON.
    @api also needs to handle errors. We define APIError for the logical errors
    that occur during an API call (e.g. the user does not exist); any other
    Error is treated as a bug and is reported with the error code 'internalerror'.
    @app.route('/api/test')
    @api
    def api_test():
        return dict(result='123', items=[])
    """
    @functools.wraps(func)
    def _wrapper(*args, **kw):
        try:
            r = dumps(func(*args, **kw))
        except APIError, e:
            r = json.dumps(dict(error=e.error, data=e.data, message=e.message))
        except Exception, e:
            logging.exception(e)
            r = json.dumps(dict(error='internalerror', data=e.__class__.__name__, message=e.message))
        ctx.response.content_type = 'application/json'
        return r
    return _wrapper
if __name__ == '__main__':
    import doctest
    doctest.testmod()
| 25.237624 | 101 | 0.655551 | 285 | 2,549 | 5.680702 | 0.452632 | 0.017295 | 0.027177 | 0.027795 | 0.064237 | 0.064237 | 0.039531 | 0 | 0 | 0 | 0 | 0.002036 | 0.229109 | 2,549 | 100 | 102 | 25.49 | 0.821883 | 0.016477 | 0 | 0.054054 | 0 | 0 | 0.065597 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.135135 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5c1929a317092258c55c48775a60ed4c418c8ad | 1,131 | py | Python | Pyspark/Ak1Twitter.py | akshaymantriwar/Data-Analysis | 4c1c8b33b07df348ffdb5d374c1cfd71267e6511 | [
"MIT"
] | 1 | 2018-02-28T11:29:12.000Z | 2018-02-28T11:29:12.000Z | Pyspark/Ak1Twitter.py | akshaymantriwar/Data-Analysis | 4c1c8b33b07df348ffdb5d374c1cfd71267e6511 | [
"MIT"
] | null | null | null | Pyspark/Ak1Twitter.py | akshaymantriwar/Data-Analysis | 4c1c8b33b07df348ffdb5d374c1cfd71267e6511 | [
"MIT"
] | null | null | null | import socket
import sys
import requests
import requests_oauthlib
import json
ACCESS_TOKEN = 'your access token'
ACCESS_SECRET = 'yours'
CONSUMER_KEY = 'yours'
CONSUMER_SECRET = 'yours'
my_auth = requests_oauthlib.OAuth1(CONSUMER_KEY, CONSUMER_SECRET,ACCESS_TOKEN, ACCESS_SECRET)
TCP_IP = "localhost"
TCP_PORT = 9993
conn = None
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)
print("Waiting for TCP connection...")
conn, addr = s.accept()
print(conn)
print("Connected... Starting getting tweets.")
url = 'https://stream.twitter.com/1.1/statuses/filter.json'
query_data = [('locations', '-130,-20,100,50')]
query_url = url + '?' + '&'.join([str(t[0]) + '=' + str(t[1]) for t in query_data])
response = requests.get(query_url, auth=my_auth, stream=True)
print(query_url, response)
for line in response.iter_lines():
    if not line:
        continue  # skip the keep-alive newlines sent by the streaming API
    full_tweet = json.loads(line.decode('utf-8'))
    if 'text' not in full_tweet:
        continue  # skip delete/limit notices that carry no tweet text
    tweet_text = full_tweet['text'].encode('utf-8')
    print("Tweet Text: " + tweet_text)
    print("------------------------------------------")
    conn.send(tweet_text + '\n')
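# The consuming side is not part of this script; as a sketch (an assumption based
# on the Pyspark folder this file lives in), a Spark Streaming job could attach to
# the same host/port like so:
#
#   from pyspark import SparkContext
#   from pyspark.streaming import StreamingContext
#   sc = SparkContext(appName="TwitterStream")
#   ssc = StreamingContext(sc, 2)
#   tweets = ssc.socketTextStream(TCP_IP, TCP_PORT)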
| 25.133333 | 93 | 0.664898 | 159 | 1,131 | 4.54717 | 0.471698 | 0.062241 | 0.047026 | 0.063624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022822 | 0.147657 | 1,131 | 44 | 94 | 25.704545 | 0.727178 | 0 | 0 | 0 | 0 | 0 | 0.226868 | 0.037367 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.16129 | null | null | 0.193548 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5c660d1e9db34e1f450fb210a16a64dc1938736 | 3,537 | py | Python | examples/ncbi_gene_mapping.py | JTaeger/graphio | e856d4266842540cfe56ba7367d8f97183ae2954 | [
"Apache-2.0"
] | 12 | 2020-01-16T22:05:43.000Z | 2021-05-27T11:36:17.000Z | examples/ncbi_gene_mapping.py | JTaeger/graphio | e856d4266842540cfe56ba7367d8f97183ae2954 | [
"Apache-2.0"
] | 2 | 2020-04-16T17:23:14.000Z | 2021-05-17T13:40:24.000Z | examples/ncbi_gene_mapping.py | JTaeger/graphio | e856d4266842540cfe56ba7367d8f97183ae2954 | [
"Apache-2.0"
] | 4 | 2020-01-16T23:38:52.000Z | 2021-05-27T11:36:19.000Z | # This example script shows how to download a data file,
# parse nodes and relationships from the file and load them to Neo4j
#
# The maintainer of this package has a background in computational biology.
# This example loads data on gene IDs from a public genome database.
# We create (:Gene) nodes and (:Gene)-[:MAPS]->(:Gene) relationships.
#
# Example line of data file:
# 9606 11 NATP - AACP|NATP1 HGNC:HGNC:15 8 8p22 N-acetyltransferase pseudogene pseudo NATP N-acetyltransferase pseudogene O arylamide acetylase pseudogene 20191221 -
import gzip
import os
import shutil
from urllib.request import urlopen
from graphio import NodeSet, RelationshipSet
import py2neo
# setup file paths, Neo4j config and Graph instance
DOWNLOAD_DIR = "/set/your/path/here"
DOWNLOAD_FILE_PATH = os.path.join(DOWNLOAD_DIR, 'Homo_sapiens.gene_info.gz')
NEO4J_HOST = 'localhost'
NEO4J_PORT = 7687
NEO4J_USER = 'neo4j'
NEO4J_PASSWORD = 'test'
graph = py2neo.Graph(host=NEO4J_HOST, user=NEO4J_USER, password=NEO4J_PASSWORD)
graph.run("MATCH (a) RETURN a LIMIT 1")
# Download file from NCBI FTP Server
print('Download file from NCBI FTP server')
with urlopen('ftp://ftp.ncbi.nih.gov/gene/DATA/GENE_INFO/Mammalia/Homo_sapiens.gene_info.gz') as r:
    with open(DOWNLOAD_FILE_PATH, 'wb') as f:
        shutil.copyfileobj(r, f)
# define NodeSet and RelationshipSet
ncbi_gene_nodes = NodeSet(['Gene'], ['gene_id'])
ensembl_gene_nodes = NodeSet(['Gene'], ['gene_id'])
gene_mapping_rels = RelationshipSet('MAPS', ['Gene'], ['Gene'], ['gene_id'], ['gene_id'])
# iterate the data file and extract nodes/relationships
print('Iterate file and create nodes/relationships')
# collect mapped ENSEMBL gene IDs to avoid duplicate genes
ensembl_gene_ids_added = set()
with gzip.open(DOWNLOAD_FILE_PATH, 'rt') as file:
    # skip header line
    next(file)
    # iterate file
    for line in file:
        fields = line.strip().split('\t')
        ncbi_gene_id = fields[1]
        # get mapping to ENSEMBL Gene IDs
        mapped_ensembl_gene_ids = []
        # get dbXrefs
        db_xrefs = fields[5]
        for mapped_element in db_xrefs.split('|'):
            if 'Ensembl' in mapped_element:
                ensembl_gene_id = mapped_element.split(':')[1]
                mapped_ensembl_gene_ids.append(ensembl_gene_id)
        # create nodes and relationships
        # add NCBI gene node
        ncbi_gene_nodes.add_node({'gene_id': ncbi_gene_id, 'db': 'ncbi'})
        # add ENSEMBL gene nodes if they do not exist already
        for ensembl_gene_id in mapped_ensembl_gene_ids:
            if ensembl_gene_id not in ensembl_gene_ids_added:
                ensembl_gene_nodes.add_node({'gene_id': ensembl_gene_id, 'db': 'ensembl'})
                ensembl_gene_ids_added.add(ensembl_gene_id)
        # add (:Gene)-[:MAPS]->(:Gene) relationship
        for ensembl_gene_id in mapped_ensembl_gene_ids:
            gene_mapping_rels.add_relationship(
                {'gene_id': ncbi_gene_id}, {'gene_id': ensembl_gene_id}, {'db': 'ncbi'}
            )
# load data to Neo4j
print(len(ncbi_gene_nodes.nodes))
print(len(ensembl_gene_nodes.nodes))
print(len(gene_mapping_rels.relationships))
# create index for property 'gene_id' on (Gene) nodes first
print('Create index on Gene nodes')
try:
    graph.schema.create_index('Gene', 'gene_id')
except py2neo.database.ClientError:
    pass
# load data, first nodes then relationships
print('Load data to Neo4j')
ncbi_gene_nodes.create(graph)
ensembl_gene_nodes.create(graph)
gene_mapping_rels.create(graph)
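# Optional sanity check (illustrative, not part of the original example): count the
# mapping relationships that were just loaded
# print(graph.run("MATCH (:Gene {db: 'ncbi'})-[:MAPS]->(e:Gene {db: 'ensembl'}) RETURN count(e)").evaluate())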
| 34.676471 | 165 | 0.713316 | 510 | 3,537 | 4.739216 | 0.303922 | 0.100124 | 0.052131 | 0.041374 | 0.156392 | 0.110054 | 0.031444 | 0.031444 | 0.031444 | 0 | 0 | 0.015278 | 0.185751 | 3,537 | 101 | 166 | 35.019802 | 0.823958 | 0.307323 | 0 | 0.036364 | 0 | 0.018182 | 0.168729 | 0.042079 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.054545 | 0.109091 | 0 | 0.109091 | 0.127273 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e5c7dc69d51e3024de52f806ec754b770919bf02 | 1,344 | py | Python | data_loader/data_sets.py | Shawn-Guo-CN/EmergentLanguage | 607240a7455776755b09af3399c7e87f85531b7e | [
"MIT"
] | 1 | 2020-09-24T15:56:40.000Z | 2020-09-24T15:56:40.000Z | data_loader/data_sets.py | Shawn-Guo-CN/EmergentLanguage | 607240a7455776755b09af3399c7e87f85531b7e | [
"MIT"
] | null | null | null | data_loader/data_sets.py | Shawn-Guo-CN/EmergentLanguage | 607240a7455776755b09af3399c7e87f85531b7e | [
"MIT"
] | null | null | null | import numpy as np
import torch
from torch.utils.data import Dataset
class DSpritesDataset(Dataset):
    """dSprites dataset."""

    def __init__(self, npz_file:str, transform=None):
        """
        Args:
            npz_file: Path to the npz file.
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        dataset_zip = np.load(npz_file, allow_pickle=True, encoding='latin1')
        self.dataset = self.preprocess_zip(dataset_zip)
        del dataset_zip
        self.transform = transform

    def __len__(self):
        return self.dataset['images'].shape[0]

    def __getitem__(self, idx):
        image = self.dataset['images'][idx]
        latents_class = self.dataset['latents_classes'][idx]
        latents_value = self.dataset['latents_values'][idx]
        sample = (image, latents_class, latents_value)
        if self.transform:
            sample = self.transform(sample)
        return sample

    @staticmethod
    def preprocess_zip(data_zip):
        # TODO: filter out the data we do not need in the future
        return {
            'images': data_zip['imgs'],
            'latents_classes': data_zip['latents_classes'],
            'latents_values': data_zip['latents_values']
        }
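# A minimal usage sketch (illustrative; the npz file name is an assumption, point
# it at wherever the dSprites archive lives on disk):
#
#   from torch.utils.data import DataLoader
#   dataset = DSpritesDataset('dsprites_ndarray.npz')
#   loader = DataLoader(dataset, batch_size=64, shuffle=True)
#   images, latents_classes, latents_values = next(iter(loader))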
| 28 | 77 | 0.616815 | 157 | 1,344 | 5.050955 | 0.43949 | 0.069357 | 0.042875 | 0.050441 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002079 | 0.284226 | 1,344 | 47 | 78 | 28.595745 | 0.822245 | 0.186012 | 0 | 0 | 0 | 0 | 0.110465 | 0 | 0 | 0 | 0 | 0.021277 | 0 | 1 | 0.153846 | false | 0 | 0.115385 | 0.076923 | 0.423077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5ced64f600ac3c74487191e1195ab9459b82463 | 5,223 | py | Python | checklists_scrapers/tests/validation/test_species.py | StuartMacKay/checklists_scrapers | a6151481554e761ca0cd9a190f6a27334b130cdf | [
"BSD-3-Clause"
] | 4 | 2015-10-25T15:43:06.000Z | 2021-05-06T03:18:23.000Z | checklists_scrapers/tests/validation/test_species.py | StuartMacKay/checklists_scrapers | a6151481554e761ca0cd9a190f6a27334b130cdf | [
"BSD-3-Clause"
] | null | null | null | checklists_scrapers/tests/validation/test_species.py | StuartMacKay/checklists_scrapers | a6151481554e761ca0cd9a190f6a27334b130cdf | [
"BSD-3-Clause"
] | null | null | null | """Validate the species in each entry of the downloaded checklists.
Validation Tests:
Species:
1. the species is a dict.
2. either the common name or scientific name is given.
SpeciesName:
1. common name is a string.
2. common name is set.
3. common name does not have leading/trailing whitespace.
SpeciesScientificName:
1. scientific name is a string.
2. scientific name is set.
3. scientific name does not have leading/trailing whitespace.
4. scientific name has two or three words.
5. the genus (first word) of the scientific name is capitalized.
6. the species (second word) of the scientific name is lower case.
7. the subspecies (third word) of the scientific name is lower case.
"""
from checklists_scrapers.tests.validation import checklists, ValidationTestCase
class Species(ValidationTestCase):
    """Validate the species in the downloaded checklists."""
    def setUp(self):
        """Initialize the test."""
        self.species = []
        for checklist in checklists:
            for entry in checklist['entries']:
                self.species.append((entry['species'], checklist['source']))
    def test_species_type(self):
        """Verify the species field contains a dict."""
        for species, source in self.species:
            self.assertIsInstance(species, dict, msg=source)
    def test_name_or_scientific_name(self):
        """Verify that either the name or scientific name is set."""
        for species, source in self.species:
            self.assertTrue('name' in species or 'scientific_name' in species,
                            msg=source)
class SpeciesName(ValidationTestCase):
    """Validate the species name in the downloaded checklists."""
    def setUp(self):
        """Initialize the test."""
        self.species = []
        for checklist in checklists:
            for entry in checklist['entries']:
                self.species.append((entry['species'], checklist['source']))
    def test_name_type(self):
        """Verify the species name is a unicode string."""
        for species, source in self.species:
            if 'name' in species:
                self.assertIsInstance(species['name'], unicode, msg=source)
    def test_name_set(self):
        """Verify the species name is set"""
        for species, source in self.species:
            if 'name' in species:
                self.assertTrue(species['name'], msg=source)
    def test_name_stripped(self):
        """Verify the species name has no extra whitespace."""
        for species, source in self.species:
            if 'name' in species:
                self.assertStripped(species['name'], msg=source)
class SpeciesScientificName(ValidationTestCase):
    """Validate the species scientific name in the downloaded checklists."""
    def setUp(self):
        """Initialize the test."""
        self.species = []
        for checklist in checklists:
            for entry in checklist['entries']:
                self.species.append((entry['species'], checklist['source']))
    def test_scientific_name_type(self):
        """Verify the scientific name is a unicode string."""
        for species, source in self.species:
            if 'scientific_name' in species:
                self.assertIsInstance(species['scientific_name'], unicode, msg=source)
    def test_scientific_name_set(self):
        """Verify the scientific name is set"""
        for species, source in self.species:
            if 'scientific_name' in species:
                self.assertTrue(species['scientific_name'], msg=source)
    def test_scientific_name_stripped(self):
        """Verify the scientific name has no extra whitespace."""
        for species, source in self.species:
            if 'scientific_name' in species:
                self.assertStripped(species['scientific_name'], msg=source)
    def test_scientific_name_items(self):
        """Verify the scientific name has two or three components."""
        for species, source in self.species:
            if 'scientific_name' in species:
                items = len(species['scientific_name'].split())
                self.assertTrue(items > 1, msg=source)
    def test_scientific_name_genus(self):
        """Verify the genus of the scientific name."""
        for species, source in self.species:
            if 'scientific_name' in species:
                genus = species['scientific_name'].split()[0]
                self.assertRegexpMatches(genus, r'[A-Z][a-z]+', msg=source)
    def test_scientific_name_species(self):
        """Verify the species of the scientific name."""
        for species, source in self.species:
            if 'scientific_name' in species:
                genus = species['scientific_name'].split()[1]
                self.assertRegexpMatches(genus, r'[a-z]+', msg=source)
    def test_scientific_name_subspecies(self):
        """Verify the subspecies of the scientific name."""
        for species, source in self.species:
            if 'scientific_name' in species and \
                    len(species['scientific_name'].split()) == 3:
                genus = species['scientific_name'].split()[2]
                self.assertRegexpMatches(genus, r'[a-z]+', msg=source)
| 38.977612 | 79 | 0.624928 | 621 | 5,223 | 5.185185 | 0.140097 | 0.152174 | 0.048447 | 0.067081 | 0.73882 | 0.640994 | 0.586335 | 0.55 | 0.476708 | 0.476708 | 0 | 0.004477 | 0.273023 | 5,223 | 133 | 80 | 39.270677 | 0.843561 | 0.297339 | 0 | 0.619718 | 0 | 0 | 0.079787 | 0 | 0 | 0 | 0 | 0 | 0.169014 | 1 | 0.211268 | false | 0 | 0.014085 | 0 | 0.267606 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5d6dad7953ab7d02a774b3d3ef187f861ee936c | 7,515 | py | Python | sdk/python/pulumi_aws_native/apigateway/resource.py | AaronFriel/pulumi-aws-native | 5621690373ac44accdbd20b11bae3be1baf022d1 | [
"Apache-2.0"
] | 29 | 2021-09-30T19:32:07.000Z | 2022-03-22T21:06:08.000Z | sdk/python/pulumi_aws_native/apigateway/resource.py | AaronFriel/pulumi-aws-native | 5621690373ac44accdbd20b11bae3be1baf022d1 | [
"Apache-2.0"
] | 232 | 2021-09-30T19:26:26.000Z | 2022-03-31T23:22:06.000Z | sdk/python/pulumi_aws_native/apigateway/resource.py | AaronFriel/pulumi-aws-native | 5621690373ac44accdbd20b11bae3be1baf022d1 | [
"Apache-2.0"
] | 4 | 2021-11-10T19:42:01.000Z | 2022-02-05T10:15:49.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['ResourceArgs', 'Resource']
@pulumi.input_type
class ResourceArgs:
def __init__(__self__, *,
parent_id: pulumi.Input[str],
path_part: pulumi.Input[str],
rest_api_id: pulumi.Input[str]):
"""
The set of arguments for constructing a Resource resource.
:param pulumi.Input[str] parent_id: The parent resource's identifier.
:param pulumi.Input[str] path_part: The last path segment for this resource.
:param pulumi.Input[str] rest_api_id: The ID of the RestApi resource in which you want to create this resource..
"""
pulumi.set(__self__, "parent_id", parent_id)
pulumi.set(__self__, "path_part", path_part)
pulumi.set(__self__, "rest_api_id", rest_api_id)
@property
@pulumi.getter(name="parentId")
def parent_id(self) -> pulumi.Input[str]:
"""
The parent resource's identifier.
"""
return pulumi.get(self, "parent_id")
@parent_id.setter
def parent_id(self, value: pulumi.Input[str]):
pulumi.set(self, "parent_id", value)
@property
@pulumi.getter(name="pathPart")
def path_part(self) -> pulumi.Input[str]:
"""
The last path segment for this resource.
"""
return pulumi.get(self, "path_part")
@path_part.setter
def path_part(self, value: pulumi.Input[str]):
pulumi.set(self, "path_part", value)
@property
@pulumi.getter(name="restApiId")
def rest_api_id(self) -> pulumi.Input[str]:
"""
The ID of the RestApi resource in which you want to create this resource..
"""
return pulumi.get(self, "rest_api_id")
@rest_api_id.setter
def rest_api_id(self, value: pulumi.Input[str]):
pulumi.set(self, "rest_api_id", value)
class Resource(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
parent_id: Optional[pulumi.Input[str]] = None,
path_part: Optional[pulumi.Input[str]] = None,
rest_api_id: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Resource Type definition for AWS::ApiGateway::Resource
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] parent_id: The parent resource's identifier.
:param pulumi.Input[str] path_part: The last path segment for this resource.
:param pulumi.Input[str] rest_api_id: The ID of the RestApi resource in which you want to create this resource..
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ResourceArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Resource Type definition for AWS::ApiGateway::Resource
:param str resource_name: The name of the resource.
:param ResourceArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ResourceArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
parent_id: Optional[pulumi.Input[str]] = None,
path_part: Optional[pulumi.Input[str]] = None,
rest_api_id: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ResourceArgs.__new__(ResourceArgs)
if parent_id is None and not opts.urn:
raise TypeError("Missing required property 'parent_id'")
__props__.__dict__["parent_id"] = parent_id
if path_part is None and not opts.urn:
raise TypeError("Missing required property 'path_part'")
__props__.__dict__["path_part"] = path_part
if rest_api_id is None and not opts.urn:
raise TypeError("Missing required property 'rest_api_id'")
__props__.__dict__["rest_api_id"] = rest_api_id
__props__.__dict__["resource_id"] = None
super(Resource, __self__).__init__(
'aws-native:apigateway:Resource',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None) -> 'Resource':
"""
Get an existing Resource resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = ResourceArgs.__new__(ResourceArgs)
__props__.__dict__["parent_id"] = None
__props__.__dict__["path_part"] = None
__props__.__dict__["resource_id"] = None
__props__.__dict__["rest_api_id"] = None
return Resource(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="parentId")
def parent_id(self) -> pulumi.Output[str]:
"""
The parent resource's identifier.
"""
return pulumi.get(self, "parent_id")
@property
@pulumi.getter(name="pathPart")
def path_part(self) -> pulumi.Output[str]:
"""
The last path segment for this resource.
"""
return pulumi.get(self, "path_part")
@property
@pulumi.getter(name="resourceId")
def resource_id(self) -> pulumi.Output[str]:
"""
A unique primary identifier for a Resource
"""
return pulumi.get(self, "resource_id")
@property
@pulumi.getter(name="restApiId")
def rest_api_id(self) -> pulumi.Output[str]:
"""
The ID of the RestApi resource in which you want to create this resource..
"""
return pulumi.get(self, "rest_api_id")
| 38.937824 | 134 | 0.628077 | 900 | 7,515 | 4.923333 | 0.15 | 0.05958 | 0.07267 | 0.030016 | 0.625367 | 0.539382 | 0.502821 | 0.477996 | 0.438276 | 0.421124 | 0 | 0.000183 | 0.272921 | 7,515 | 192 | 135 | 39.140625 | 0.810761 | 0.245775 | 0 | 0.358974 | 1 | 0 | 0.112455 | 0.00567 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136752 | false | 0.008547 | 0.042735 | 0 | 0.264957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5da80306fa2ac2bc6ac0b8cb8ff988dfdd4d6e0 | 4,098 | py | Python | physics/atoms.py | wirawan0/pyqmc | 8d641ba2b91d1d7a05a90574d0787fb991ee15e2 | [
"Apache-2.0"
] | null | null | null | physics/atoms.py | wirawan0/pyqmc | 8d641ba2b91d1d7a05a90574d0787fb991ee15e2 | [
"Apache-2.0"
] | null | null | null | physics/atoms.py | wirawan0/pyqmc | 8d641ba2b91d1d7a05a90574d0787fb991ee15e2 | [
"Apache-2.0"
] | null | null | null | # $Id: atoms.py,v 1.2 2010-09-07 15:10:56 wirawan Exp $
#
# pyqmc.physics.atoms module
# Created: 20100903
# Wirawan Purwanto
#
# This module is part of PyQMC project.
#
# Information about atoms
#
# Rigged with the help of Wikipedia,
# http://en.wikipedia.org/wiki/List_of_elements_by_symbol
# - taken from the source code (see pyqmc/_scratch subdir)
# - rigged with _rig_atoms.py
ATOM_LIST = """\
1 H hydrogen
2 He helium
3 Li lithium
4 Be beryllium
5 B boron
6 C carbon
7 N nitrogen
8 O oxygen
9 F fluorine
10 Ne neon
11 Na sodium natrium
12 Mg magnesium
13 Al aluminium aluminum
14 Si silicon
15 P phosphorus phosphorous
16 S sulfur sulphur
17 Cl chlorine
18 Ar argon
19 K potassium kalium
20 Ca calcium
21 Sc scandium
22 Ti titanium
23 V vanadium
24 Cr chromium
25 Mn manganese
26 Fe iron
27 Co cobalt
28 Ni nickel
29 Cu copper
30 Zn zinc
31 Ga gallium
32 Ge germanium
33 As arsenic
34 Se selenium
35 Br bromine
36 Kr krypton
37 Rb rubidium
38 Sr strontium
39 Y yttrium
40 Zr zirconium
41 Nb niobium
42 Mo molybdenum
43 Tc technetium
44 Ru ruthenium
45 Rh rhodium
46 Pd palladium
47 Ag silver
48 Cd cadmium
49 In indium
50 Sn tin
51 Sb antimony
52 Te tellurium
53 I iodine
54 Xe xenon
55 Cs caesium cesium
56 Ba barium
57 La lanthanum
58 Ce cerium
59 Pr praseodymium
60 Nd neodymium
61 Pm promethium
62 Sm samarium
63 Eu europium
64 Gd gadolinium
65 Tb terbium
66 Dy dysprosium
67 Ho holmium
68 Er erbium
69 Tm thulium
70 Yb ytterbium
71 Lu lutetium
72 Hf hafnium
73 Ta tantalum
74 W tungsten
75 Re rhenium
76 Os osmium
77 Ir iridium
78 Pt platinum
79 Au gold
80 Hg mercury
81 Tl thallium
82 Pb lead
83 Bi bismuth
84 Po polonium
85 At astatine
86 Rn radon
87 Fr francium
88 Ra radium
89 Ac actinium
90 Th thorium
91 Pa protactinium
92 U uranium
93 Np neptunium
94 Pu plutonium
95 Am americium
96 Cm curium
97 Bk berkelium
98 Cf californium
99 Es einsteinium
100 Fm fermium
101 Md mendelevium
102 No nobelium
103 Lr lawrencium
104 Rf rutherfordium
105 Db dubnium
106 Sg seaborgium
107 Bh bohrium
108 Hs hassium
109 Mt meitnerium
110 Ds darmstadtium
111 Rg roentgenium
112 Cn copernicium
113 Uut ununtrium
114 Uuq ununquadium
115 Uup ununpentium
116 Uuh ununhexium
117 Uus ununseptium
118 Uuo ununoctium
"""
class atom:
    """Representation of an atomic species.
    Useful fields are:
    . no
    . symb
    . name()
    . names
    """
    def __init__(self, no, symb, names):
        self.no = no
        self.symb = symb
        self.names = names
    def name(self):
        return self.names[0]
    def __repr__(self):
        return "atom(%d,'%s',%s)" % (self.no, self.symb, self.names)
def parse_atom_list(ATOM_LIST=ATOM_LIST):
    """Internal routine to parse the atom list for the first time."""
    atom_list_by_no = {}
    atom_list_by_symbol = {}
    atom_list_by_name = {}
    atom_list_by_whatever = {}
    for L in ATOM_LIST.split("\n"):
        #print L
        fld = L.split()
        if len(fld) < 3: continue
        no = int(fld[0])
        symb = fld[1]
        names = fld[2:]
        at = atom(no, symb, names)
        atom_list_by_no[no] = at
        atom_list_by_symbol[symb] = at
        atom_list_by_whatever[no] = at
        atom_list_by_whatever[symb] = at
        for n in names:
            atom_list_by_name[n] = at
            atom_list_by_whatever[n] = at
    return (atom_list_by_no, atom_list_by_symbol, atom_list_by_name, atom_list_by_whatever)
(atom_list_by_no,
atom_list_by_symbol,
atom_list_by_name,
atom_list_by_whatever) = parse_atom_list()
def get(symb=None):
    try:
        return atom_list_by_whatever[symb]
    except:
        pass
    # Try three more things
    try:
        no = int(symb)
        return atom_list_by_no[no]
    except:
        pass
    try:
        return atom_list_by_name[symb.lower()]
    except:
        pass
    try:
        if len(symb) > 1:
            return atom_list_by_symbol[symb[0].upper()+symb[1:].lower()]
        else:
            return atom_list_by_symbol[symb.upper()]
    except:
        raise KeyError, "Unknown atomic symbol: %s" % symb
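# Illustrative lookups (a hypothetical session; an atomic number, a symbol, or any
# of the listed names all resolve to the same atom object):
#
#   get('Fe')   -> atom(26,'Fe',['iron'])
#   get(26)     -> atom(26,'Fe',['iron'])
#   get('IRON') -> atom(26,'Fe',['iron'])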
| 19.421801 | 89 | 0.688141 | 688 | 4,098 | 3.965116 | 0.655523 | 0.087977 | 0.084311 | 0.046188 | 0.150293 | 0.085044 | 0.065982 | 0.065982 | 0.065982 | 0.065982 | 0 | 0.090613 | 0.251342 | 4,098 | 210 | 90 | 19.514286 | 0.798566 | 0.093216 | 0 | 0.063218 | 0 | 0 | 0.591386 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.017241 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5e421d037f71a654f61f43d828fef468d8bcce1 | 8,590 | py | Python | 6_nac_workflow/step2b/DO_NACs.py | compchem-cybertraining/Tutorials_QE_and_eQE | e7315621d4a670b1abd72da8f4c6622aa7986a5a | [
"CC0-1.0"
] | 1 | 2021-09-24T01:44:34.000Z | 2021-09-24T01:44:34.000Z | 6_nac_workflow/step2b/DO_NACs.py | compchem-cybertraining/Tutorials_QE_and_eQE | e7315621d4a670b1abd72da8f4c6622aa7986a5a | [
"CC0-1.0"
] | null | null | null | 6_nac_workflow/step2b/DO_NACs.py | compchem-cybertraining/Tutorials_QE_and_eQE | e7315621d4a670b1abd72da8f4c6622aa7986a5a | [
"CC0-1.0"
] | 1 | 2021-09-24T01:44:37.000Z | 2021-09-24T01:44:37.000Z | #!/usr/bin/env python
# coding: utf-8
# # DO NACs (HPC version)
#
# This file demonstrates how to run the calculations of the NACs in the KS space, using QE.
#
# In particular, this example is designed to run calculations on the UB HPC cluster, CCR (Center for Computational Research), more specifically on the nodes of the Akimov group (the *valhalla* cluster).
#
# Unlike in the on-laptop version, we may need to wait until the submitted jobs are done, so we will not be able to run the plotting and other data analysis calculations right away. Those parts are therefore removed.
#
# So, let's start by loading the required modules.
# In[1]:
import os
import sys
# First, we add the location of the library to test to the PYTHON path
if sys.platform=="cygwin":
from cyglibra_core import *
elif sys.platform=="linux" or sys.platform=="linux2":
from liblibra_core import *
from libra_py import hpc_utils
from libra_py import data_read
from libra_py import data_outs
from libra_py import units
from libra_py import QE_methods
from libra_py.workflows.nbra import step2
# For convenience, let's print out the location of the current working directory.
#
# This directory should contain a folder called **PP**, in which we should have placed the atomic pseudopotentials suitable for our system.
# In[2]:
print( os.getcwd())
# Assume we have already produced a QE MD trajectory and it is stored in the file **x0.md.out** (which we copied in the present directory).
#
# We need to also create:
#
# * a **x0.scf.in** file that contains the parameters for QE calculations (the type of calculation should be *scf*). The file should not contain the atomic coordinates section, but should contain the cell parameters section or occupations if they are used.
#
# * a **x0.exp.in** file (also to be placed in the present directory). It describes the procedure for the wavefunction "export" operation - mainly the location and names of atomic pseudopotentials and the correct prefix for the files.
#
# In the section below, the user can define (e.g. via copy/paste) the content of the corresponding files and the files will be automatically generated by Python
# In[3]:
PP_dir = os.getcwd()+"/PP/"
scf_in = """&CONTROL
calculation = 'scf',
dt = 20.67055,
nstep = 10,
pseudo_dir = '%s',
outdir = './',
prefix = 'x0',
disk_io = 'low',
wf_collect = .true.
/
&SYSTEM
ibrav = 0,
celldm(1) = 1.89,
nat = 4,
ntyp = 2,
nspin = 2,
starting_magnetization(1) = 0.1,
nbnd = 40,
ecutwfc = 40,
tot_charge = 0.0,
occupations = 'smearing',
smearing = 'gaussian',
degauss = 0.005,
nosym = .true.,
/
&ELECTRONS
electron_maxstep = 300,
conv_thr = 1.D-5,
mixing_beta = 0.45,
/
&IONS
ion_dynamics = 'verlet',
ion_temperature = 'andersen',
tempw = 300.00 ,
nraise = 1,
/
ATOMIC_SPECIES
Cd 121.411 Cd.pbe-n-rrkjus_psl.1.0.0.UPF
Se 78.96 Se.pbe-dn-rrkjus_psl.1.0.0.UPF
K_POINTS automatic
1 1 1 0 0 0
CELL_PARAMETERS (alat= 1.89000000)
4.716986504 -0.015512615 -0.002400656
-2.371926710 4.062829845 -0.000273730
-0.002552594 -0.001387965 8.436361230
""" % (PP_dir)
exp_in = """&inputpp
prefix = 'x0',
outdir = './',
pseudo_dir = '%s',
psfile(1) = 'Cd.pbe-n-rrkjus_psl.1.0.0.UPF',
psfile(2) = 'Se.pbe-dn-rrkjus_psl.1.0.0.UPF',
single_file = .FALSE.,
ascii = .TRUE.,
uspp_spsi = .FALSE.,
/
""" % (PP_dir)
f = open("x0.scf.in", "w")
f.write(scf_in)
f.close()
f = open("x0.exp.in", "w")
f.write(exp_in)
f.close()
#print scf_in
#print exp_in
# The following section will clean up the previous results and the temporary directory (BEWARE!!! you may not always want to do this, as it will delete expensive results)
# In[4]:
# Remove the previous results and temporary working directory from the previous runs
os.system("rm -r res")
os.system("rm -r wd")
# Create the new results directory
os.system("mkdir res")
rd = os.getcwd()+"/res" # where all the results stuff will go
# Now we need to set up the submit script template - it will be used to create the actual submit scripts (by substituting the param1 and param2 variables) in the working directory. Those files will then be distributed among the job directories and used to submit the actual jobs.
#
# This section also defines the parameters to be used by **step2.run()** (and other functions called inside it, so look there for the description of all suitable parameters). The meaning of most parameters is quite intuitive. Let me just clarify a couple of less-obvious points.
#
# That section of the code looks weird because:
# * it is a Python string that defines ...
# * a SLURM script that uses bash commands and calls ...
# * the Python script to be executed, which eventually calls ...
# * the Libra modules to do the step2 calculations
#
# In this example, I plan to submit the calculation to the HPC cluster to "parallelize" the calculations via the SLURM batch system, so we specify "BATCH_SYSTEM":"srun"
#
# The system in this example has 56 electrons, so the HOMO would correspond to orbital number 28 and the LUMO to number 29. In this example, we use more orbitals - those below the HOMO and above the LUMO: minband = 20 and maxband = 39, so we can study the HOMO-LUMO transitions as well as all other types of relaxation. In all, there are 20 orbitals included in our present active space. The resulting files will contain 40 by 40 matrices - because we include both the alpha and beta spin channels. Just a reminder: the orbital indexing starts from 1.
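# As a quick sanity check of these numbers, here is a minimal sketch (our own addition,
# assuming the 56-electron count stated above):
nelec = 56
homo, lumo = nelec // 2, nelec // 2 + 1    # orbital indexing starts from 1 -> 28 and 29
n_active = 39 - 20 + 1                     # minband = 20, maxband = 39 -> 20 orbitals
matrix_dim = 2 * n_active                  # 40, since alpha and beta channels are included
print("HOMO = %s, LUMO = %s, active orbitals = %s, matrix size = %s x %s"
      % (homo, lumo, n_active, matrix_dim, matrix_dim))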
# In[5]:
submit_str = """#!/bin/sh
#SBATCH --partition=valhalla --qos=valhalla
#SBATCH --clusters=faculty
#SBATCH --time=02:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --mem=5000
###SBATCH --mail-user=alexeyak@buffalo.edu
echo "SLURM_JOBID="$SLURM_JOBID
echo "SLURM_JOB_NODELIST="$SLURM_JOB_NODELIST
echo "SLURM_NNODES="$SLURM_NNODES
echo "SLURMTMPDIR="$SLURMTMPDIR
echo "working directory="$SLURM_SUBMIT_DIR
NPROCS=`srun --nodes=${SLURM_NNODES} bash -c 'hostname' |wc -l`
echo NPROCS=$NPROCS
module load jupyter
eval "$(/projects/academic/cyberwksp21/Software/Conda/Miniconda3/bin/conda shell.bash hook)"
conda activate libra
module load espresso/6.2.1
export PWSCF=/util/academic/espresso/6.2.1/bin
#The PMI library is necessary for srun
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
env
which python
which pw.x
which pw_export.x
# These will be assigned automatically, leave them as they are
param1=
param2=
# This is the invocation of the scripts which will further handle the NA-MD calculations at the NAC calculation step
# NOTE: minband - starting from 1
# maxband - is included
python -c \"from libra_py.workflows.nbra import step2
params = {}
params[\\"EXE\\"] = \\"pw.x\\"
params[\\"EXE_EXPORT\\"] = \\"pw_export.x\\"
params[\\"BATCH_SYSTEM\\"] = \\"srun\\"
params[\\"NP\\"] = 4
params[\\"start_indx\\"] = $param1
params[\\"stop_indx\\"] = $param2
params[\\"dt\\"] = %8.5f
params[\\"prefix0\\"] = \\"x0.scf\\"
params[\\"nac_method\\"] = 1
params[\\"minband\\"] = 20
params[\\"maxband\\"] = 39
params[\\"minband_soc\\"] = 20
params[\\"maxband_soc\\"] = 39
params[\\"compute_Hprime\\"] = True
params[\\"wd\\"] = \\"wd\\"
params[\\"rd\\"] = \\"%s\\"
params[\\"verbosity\\"] = 0
step2.run(params)
\"
""" % ( 1.0*units.fs2au, os.getcwd()+"/res" )
f = open("submit_templ.slm", "w")
f.write(submit_str)
f.close()
#print submit_str
# We'll use the **QE_methods.out2inp()** function to convert the MD trajectory into a bunch of input files for SCF calculations - something we'll need for the NAC calculations.
#
# In this case, you need to set the *iinit* and *ifinal* variables, which determine which steps of the original MD trajectory will be used to produce the input files and subsequently used in the NAC calculations
#
# All these files will be generated in the temporarily-created **wd** directory. The script then changes ("cd") into that directory to run the subsequent operations there.
# In[6]:
iinit = 0
ifinal = 30
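# (Our reading of the positional arguments in the call below, inferred from the call
# itself rather than the official docs: MD output file, SCF input template, target
# directory, prefix for the generated inputs, initial step, final step, and stride.)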
QE_methods.out2inp("x0.md.out","x0.scf.in","wd","x0.scf", iinit, ifinal,1)
os.system("cp submit_templ.slm wd")
os.system("cp x0.exp.in wd")
# Now, let's change into our working (temporary) directory, copy all the template files, and submit the calculations as multiple jobs. We come back to the original working directory once we are done.
# In[7]:
help(hpc_utils.distribute)
# In[8]:
os.chdir("wd")
tot_nsteps = 30
nsteps_per_job = 10
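# (With tot_nsteps = 30 and nsteps_per_job = 10 this should produce 3 jobs; the meaning
# of the remaining arguments is printed by the help() call above - the trailing 2, we
# assume, selects the submission mode.)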
hpc_utils.distribute(0,tot_nsteps,nsteps_per_job,"submit_templ.slm",["x0.exp.in"],["x0.scf"],2)
os.chdir("../")
# In[9]:
print(os.getcwd())
# In[ ]:
| 28.922559 | 516 | 0.711874 | 1,374 | 8,590 | 4.390102 | 0.361718 | 0.008289 | 0.012765 | 0.014092 | 0.042772 | 0.025862 | 0.025862 | 0.014257 | 0.014257 | 0 | 0 | 0.039098 | 0.169267 | 8,590 | 296 | 517 | 29.02027 | 0.806194 | 0.511176 | 0 | 0.086667 | 0 | 0.02 | 0.733672 | 0.194969 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.073333 | 0 | 0.073333 | 0.013333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5e55875316373b225d26baf74a1dab1a548ce5c | 215 | py | Python | src/tmtccmd/__init__.py | robamu-org/tmtccmd | 7b8b936f0d18fdbd375da92d43ecdd37d71ded57 | [
"Apache-2.0"
] | 1 | 2021-08-30T10:20:45.000Z | 2021-08-30T10:20:45.000Z | src/tmtccmd/__init__.py | robamu-org/tmtccmd | 7b8b936f0d18fdbd375da92d43ecdd37d71ded57 | [
"Apache-2.0"
] | 8 | 2021-09-06T17:23:32.000Z | 2022-03-04T13:41:52.000Z | src/tmtccmd/__init__.py | robamu-org/tmtccmd | 7b8b936f0d18fdbd375da92d43ecdd37d71ded57 | [
"Apache-2.0"
] | null | null | null | VERSION_NAME = "tmtccmd"
VERSION_MAJOR = 1
VERSION_MINOR = 10
VERSION_REVISION = 2
# I think this needs to be in string representation to be parsed so we can't
# use a formatted string here.
__version__ = "1.10.2"
| 23.888889 | 76 | 0.753488 | 37 | 215 | 4.162162 | 0.72973 | 0.051948 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045198 | 0.176744 | 215 | 8 | 77 | 26.875 | 0.824859 | 0.47907 | 0 | 0 | 0 | 0 | 0.119266 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5e5dceba9ff0dbb036d5ff97359215a91dde058 | 3,114 | py | Python | knocker/request.py | klen/knocker | 69a69a862c52e272db58f3e3f1030174c6dbbebe | [
"MIT"
] | 4 | 2020-06-04T06:39:08.000Z | 2022-03-16T07:49:04.000Z | knocker/request.py | klen/knocker | 69a69a862c52e272db58f3e3f1030174c6dbbebe | [
"MIT"
] | null | null | null | knocker/request.py | klen/knocker | 69a69a862c52e272db58f3e3f1030174c6dbbebe | [
"MIT"
] | null | null | null | """Do requests."""
import asyncio
import http
from random import random
import sentry_sdk
from asgi_tools._compat import aio_sleep
from httpx import (
HTTPError, ConnectError, TimeoutException, NetworkError,
AsyncClient, Response, HTTPStatusError)
from . import config as global_config, logger
async def process(client: AsyncClient, config: dict, method: str, url: str, **kwargs):
"""Send requests."""
attempts = 0
error = None
kwargs['timeout'] = config['timeout']
while True:
try:
attempts += 1
res: Response = await request(client, method, url, **kwargs)
res.raise_for_status()
logger.info(
'Request #%s done (%d): "%s %s" %d %s',
config['id'], attempts, method, url, res.status_code,
http.HTTPStatus(res.status_code).phrase)
return
except HTTPError as exc:
error = exc_to_code(exc)
if config['retries'] > (attempts - 1):
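                # Exponential backoff with jitter (a descriptive note added here):
                # delay = backoff_factor * 2**(attempts - 1) + uniform(0, 1),
                # capped at global_config.RETRIES_BACKOFF_FACTOR_MAX.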
retry = min(global_config.RETRIES_BACKOFF_FACTOR_MAX, (
config['backoff_factor'] * (2 ** (attempts - 1)) + random()
))
logger.warning(
'Request #%s fail (%d), retry in %ss: "%s %s" %d',
config['id'], attempts, retry, method, url, error)
await aio_sleep(retry)
continue
logger.warning(
'Request #%s failed (%d): "%s %s" %d', config['id'], attempts, method, url, error)
            if global_config.SENTRY_DSN and global_config.SENTRY_FAILED_REQUESTS:
                sentry_sdk.capture_exception(exc)
            break  # retries exhausted: exit the loop and fall through to the callback
# An unhandled exception
except Exception as exc:
logger.error(
'Request #%s raises an exception (%d): "%s %s"',
config['id'], attempts, method, url)
logger.exception(exc)
if global_config.SENTRY_DSN:
sentry_sdk.capture_exception(exc)
break
if config.get('callback'):
# TODO: Remove dependency from asyncio (spawn nursery in worker)
asyncio.create_task(process(
client, config, 'POST', config.pop('callback'), json={
'config': config,
'method': method,
'url': url,
'status_code': error or 999,
}, headers=[('x-knocker-origin', 'knocker'), *kwargs['headers']]
))
async def request(client: AsyncClient, method: str, url: str, **kwargs) -> Response:
"""Make a request."""
# We don't need to read response body here
async with client.stream(method, url, **kwargs) as response:
return response
def exc_to_code(exc: HTTPError) -> int:
"""Convert an exception into a response code."""
if isinstance(exc, HTTPStatusError):
return exc.response and exc.response.status_code or 418
if isinstance(exc, ConnectError):
return 502
if isinstance(exc, NetworkError):
return 503
if isinstance(exc, TimeoutException):
return 504
return 418
| 31.14 | 98 | 0.572897 | 345 | 3,114 | 5.075362 | 0.356522 | 0.035979 | 0.036551 | 0.037693 | 0.138778 | 0.051399 | 0 | 0 | 0 | 0 | 0 | 0.010763 | 0.313744 | 3,114 | 99 | 99 | 31.454545 | 0.80861 | 0.058767 | 0 | 0.088235 | 0 | 0 | 0.098019 | 0 | 0 | 0 | 0 | 0.010101 | 0 | 1 | 0.014706 | false | 0 | 0.102941 | 0 | 0.220588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5ee61fe5a868caef0b43048a674bc0b30ecaf71 | 719 | py | Python | wait_until/test_main.py | gabrieldemarmiesse/wait-until | 6244ff16da2c6e85b13523892c77ec248e3467b3 | [
"MIT"
] | 2 | 2020-06-28T15:43:22.000Z | 2021-06-18T12:25:47.000Z | wait_until/test_main.py | gabrieldemarmiesse/wait-until | 6244ff16da2c6e85b13523892c77ec248e3467b3 | [
"MIT"
] | null | null | null | wait_until/test_main.py | gabrieldemarmiesse/wait-until | 6244ff16da2c6e85b13523892c77ec248e3467b3 | [
"MIT"
] | null | null | null | import time
from wait_until import wait_until
import pytest
def some_function_that_cannot_work():
raise ValueError("I cannot work!")
def test_wait_until_exception_raised():
with pytest.raises(TimeoutError) as err:
wait_until(some_function_that_cannot_work, timeout=1)
assert "Timeout is 1s" in str(err.value)
class Dummy:
def __init__(self):
self.start_time = time.time()
def is_loaded(self):
if time.time() - self.start_time <= 1.5:
raise ValueError("Not loaded yet!")
return "Yay!"
def test_wait_until_is_loaded():
dum = Dummy()
result = wait_until(dum.is_loaded, timeout=2)
assert result == "Yay!"
| 22.46875 | 62 | 0.652295 | 98 | 719 | 4.5 | 0.469388 | 0.122449 | 0.068027 | 0.099773 | 0.117914 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009276 | 0.250348 | 719 | 31 | 63 | 23.193548 | 0.808905 | 0 | 0 | 0 | 0 | 0 | 0.072674 | 0 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.25 | false | 0 | 0.15 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e5f025e14360e6ae8dc29fa8231cf53c05e198cb | 2,263 | py | Python | sql_handler.py | zepc007/CityData | d7e2f7c8378bd46385e3dc2ce638eac97d66b00c | [
"MIT"
] | 1 | 2021-07-11T13:10:30.000Z | 2021-07-11T13:10:30.000Z | sql_handler.py | zepc007/CityData | d7e2f7c8378bd46385e3dc2ce638eac97d66b00c | [
"MIT"
] | null | null | null | sql_handler.py | zepc007/CityData | d7e2f7c8378bd46385e3dc2ce638eac97d66b00c | [
"MIT"
] | 1 | 2021-07-11T13:09:56.000Z | 2021-07-11T13:09:56.000Z | import json
import pandas as pd
from sqlalchemy import create_engine
class SqlClient:
def __init__(self, host, port, username, password, db):
self.host = host
self.port = port
self.username = username
self.password = password
self.db = db
self._conn = None
self.init_conn()
def init_conn(self):
_conn_string = f'mysql+pymysql://{self.username}:{self.password}@{self.host}:{self.port}/{self.db}?charset=utf8'
self._conn = create_engine(_conn_string)
def query(self, query):
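        # Returns the result set as {first_column_value: {column: value, ...}},
        # or an empty dict when the query yields no rows.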
df = pd.read_sql(query, self._conn)
if not df.empty:
data_dict = df.set_index(df.columns[0]).T.to_dict()
return data_dict
return {}
if __name__ == '__main__':
sql_client = SqlClient('ip', 'port', 'username', 'password', 'data_region')
country_query = 'SELECT * From data_region where level=2'
country_map = {}
country_data = sql_client.query(country_query)
for country_id, country_detail in country_data.items():
states = []
country_data = {'label': country_detail['name'], 'value': country_detail['name'],
'label_en': country_detail['name_en'], 'children': states}
country_map[country_detail['name_en'].replace(u'\xa0', u' ')] = country_data
state_or_province_query = 'SELECT * From data_region where level=3 and pid=%s' % country_id
state_or_province_data = sql_client.query(state_or_province_query)
for state_or_province_id, state_or_province_detail in state_or_province_data.items():
city_query = 'SELECT * From data_region where level=4 and pid=%s' % state_or_province_id
city_data = sql_client.query(city_query)
states.append({'label': state_or_province_detail['name'], 'value': state_or_province_detail['name'],
'label_en': state_or_province_detail['name_en'],
'children': [i['name'] for i in city_data.values()],
'children_en': [i['name_en'] for i in city_data.values()],
})
with open('data_region.json', mode='w', encoding='utf8') as f:
f.write(json.dumps(country_map, ensure_ascii=False))
| 41.145455 | 120 | 0.627928 | 296 | 2,263 | 4.476351 | 0.300676 | 0.05283 | 0.113208 | 0.063396 | 0.166038 | 0.109434 | 0.079245 | 0 | 0 | 0 | 0 | 0.00412 | 0.249227 | 2,263 | 54 | 121 | 41.907407 | 0.77575 | 0 | 0 | 0 | 0 | 0.023256 | 0.181617 | 0.041538 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0.093023 | 0.069767 | 0 | 0.209302 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e5f4db7859b756897941cc13fc6474573e60afe6 | 9,787 | py | Python | PythonExtensions/debug/console.py | Jakar510/PythonExtensions | f29600f73454d21345f6da893a1df1b71ddacd0b | [
"MIT"
] | null | null | null | PythonExtensions/debug/console.py | Jakar510/PythonExtensions | f29600f73454d21345f6da893a1df1b71ddacd0b | [
"MIT"
] | null | null | null | PythonExtensions/debug/console.py | Jakar510/PythonExtensions | f29600f73454d21345f6da893a1df1b71ddacd0b | [
"MIT"
] | null | null | null | import sys
import traceback
from pprint import PrettyPrinter
from threading import Lock
from types import TracebackType
from typing import *
__all__ = [
# 'getPPrintStr', 'check', 'get_func_details', 'print_signature'
'PRINT', 'Print', 'print_exception', 'print_stack_trace', 'PrettyPrint',
# 'TITLE_TAG', 'DEFAULT_TAG', 'END_TAG',
'Printer', 'pp', 'CallStack',
'GetFunctionName', 'GetFuncModule',
]
class NoStringWrappingPrettyPrinter(PrettyPrinter):
"""
https://stackoverflow.com/questions/31485402/can-i-make-pprint-in-python3-not-split-strings-like-in-python2
https://stackoverflow.com/a/31485450/9530917
"""
@classmethod
def Create(cls): return cls(indent=4, sort_dicts=False)
# noinspection PyProtectedMember, PyUnresolvedReferences
def _format(self, o, *args):
if isinstance(o, (str, bytes, bytearray)):
width = self._width
self._width = sys.maxsize
try:
return super()._format(o, *args)
finally:
self._width = width
elif isinstance(o, CallStack):
print('__CallStack__', o, *args)
# super()._format(o.Lines, *args)
return super()._format(str(o), *args)
else:
return super()._format(o, *args)
class CallStack(object):
_lines: Iterable[str] = None
def __init__(self, indent: str = 4 * ' ', *, once: bool = True):
self._indent = indent
self._once = once
self.Update()
def Update(self):
if self._once and self._lines is not None: raise RuntimeError()
self._lines = [self._indent + line.strip() for line in traceback.format_stack()][:-1]
@property
def Lines(self) -> Iterable[str]:
if self._lines is None: self.Update()
return self._lines
def __str__(self) -> str: return '\n'.join(self.Lines)
# def __repr__(self) -> str: return '\n'.join(self.Lines)
class Printer(object):
DEFAULT_TAG = '\n______________________________________________________________\n"{0}"'
TITLE_TAG = "\n ---------------- {0} ---------------- \n"
END_TAG = '\n ============================================================= \n'
_lock = Lock()
_active: bool = False
def __init__(self, _pp: PrettyPrinter = None, *, use_double_quotes: bool, end: str, file=None):
"""
:param end: string to append to end of passed args
:type end: str
:param file: file to write to
:type file: file
        :param _pp: any PrettyPrinter implementation. Provide your own to customize the output.
:type _pp: PrettyPrinter
:param use_double_quotes: use double quotes (") instead of the default single quotes (')
:type use_double_quotes: bool
"""
self._file = file
self._end = end
self._use_double_quotes = use_double_quotes
self._pp = _pp or NoStringWrappingPrettyPrinter.Create()
@property
def can_print(self) -> bool: return __debug__
def __enter__(self):
self._active = True
self._lock.__enter__()
return self
def __exit__(self, exc_type: Optional[Type[BaseException]], exc_val: Optional[BaseException], exc_tb: Optional[TracebackType]) -> Optional[bool]:
self._active = False
return self._lock.__exit__(exc_type, exc_val, exc_tb)
def Print(self, *args):
if self.can_print:
if self._active:
return self.print(*args)
with self as p:
return p.print(*args)
def print(self, *args):
if self.can_print:
if self._active:
return print(*args, sep='\n', end=self._end, file=self._file)
with self:
return print(*args, sep='\n', end=self._end, file=self._file)
@overload
def PrettyPrint(self, *args): ...
@overload
def PrettyPrint(self, title: str, *args): ...
@overload
def PrettyPrint(self, **kwargs): ...
@overload
def PrettyPrint(self, title: str, **kwargs): ...
@staticmethod
def _PrettyPrint(obj, *args, **kwargs):
assert (isinstance(obj, Printer))
if kwargs:
if args and isinstance(args[0], str):
title = args[0]
obj.Print(title)
obj._pp.pprint(kwargs)
else: obj._pp.pprint(kwargs)
elif args:
if isinstance(args[0], str):
title = args[0]
args = args[1:]
obj.Print(title)
obj._pp.pprint(args)
else: obj._pp.pprint(args)
def PrettyPrint(self, *args, **kwargs):
if self.can_print:
if self._active:
self._PrettyPrint(self, *args, **kwargs)
else:
with self as p:
self._PrettyPrint(p, *args, **kwargs)
def getPPrintStr(self, o: any) -> str:
"""
:param o: object to be serialized
:type o: any
:return: formatted string of the passed object
:rtype: str
"""
s = self._pp.pformat(o)
if self._use_double_quotes: s = s.replace("'", '"')
return s
def print_exception(self, e: Exception, limit=None, file=None, chain=True):
"""Print exception up to 'limit' stack trace entries from 'tb' to 'file'.
This differs from print_tb() in the following ways: (1) if
traceback is not None, it prints a header "Traceback (most recent
call last):"; (2) it prints the exception type and value after the
stack trace; (3) if type is SyntaxError and value has the
appropriate format, it prints the line where the syntax error
occurred with a caret on the next line indicating the approximate
position of the error.
"""
if self.can_print:
if self._active:
return traceback.print_exception(type(e), e, e.__traceback__, limit, file, chain)
with self._lock:
return traceback.print_exception(type(e), e, e.__traceback__, limit, file, chain)
def print_stack_trace(self, f=None, limit=None, file=None):
"""Print a stack trace from its invocation point.
The optional 'f' argument can be used to specify an alternate
stack frame at which to start. The optional 'limit' and 'file'
arguments have the same meaning as for print_exception().
"""
if self.can_print:
if self._active:
traceback.print_stack(f, limit, file)
return print()
with self._lock:
traceback.print_stack(f, limit, file)
return print()
def get_func_details(self, func: callable, tag: str, result: Any, args, kwargs) -> Tuple[Any, str, str, str, str]:
"""
:param result: result of the passed function or method
:type result: Any
:param func: function or method being called
:type func: callable
:param tag: line to print before function/method details
:type tag: str
:param args: args passed to function/method
:param kwargs: keyword args passed to function/method
:return: result, tag, name, signature, pp_result
:rtype: Tuple[Any, str, str, str, str]
"""
assert ('{0}' in tag)
name = GetFunctionName(func)
tag = tag.format(name)
signature = self.getPPrintStr({ 'args': args, 'kwargs': kwargs })
pp_result = self.getPPrintStr(result)
return result, tag, name, signature, pp_result
def print_signature(self, func: callable, tag: str, *args, **kwargs):
if self.can_print:
assert ('{0}' in tag)
result = func(*args, **kwargs)
result, _tag, name, signature, pp_result = self.get_func_details(func, tag, result, args, kwargs)
self.Print(tag, f'{name}(\n {signature}\n )', name, f'returned: \n{self.getPPrintStr(result)}')
return result
@classmethod
def Default(cls): return cls(use_double_quotes=True, end='\n\n')
@staticmethod
def Set(_pp):
"""
:param _pp: Printer class instance to be used for all printing.
:type _pp: Printer
"""
if not isinstance(_pp, Printer): raise TypeError(type(_pp), (Printer,))
global pp
pp = _pp
return pp
pp: Printer = Printer.Default()
def GetFuncModule(func: callable) -> str: return func.__module__
def GetFunctionName(func: callable) -> str:
if hasattr(func, '__qualname__') and hasattr(func, '__module__'): return f"{func.__module__}.{func.__qualname__}"
elif hasattr(func, '__qualname__'): return func.__qualname__
else: return func.__name__
def PRINT(title: str, *args, tag: str = pp.TITLE_TAG, **kwargs):
"""
    :param tag: identifier to separate calls
:type tag: str
:param title: message to start the Print, to make it easier to find it.
:type title: str
"""
with pp as p:
p.Print(tag.format(title))
return p.PrettyPrint(dict(args=args, kwargs=kwargs))
def Print(*args):
with pp as p:
return p.Print(*args)
def print_exception(e: Exception, limit=None, file=None, chain=True):
with pp as p:
return p.print_exception(e, limit, file, chain)
def print_stack_trace(f=None, limit=None, file=None):
with pp as p:
return p.print_stack_trace(f, limit, file)
@overload
def PrettyPrint(*args): ...
@overload
def PrettyPrint(title: str, *args): ...
@overload
def PrettyPrint(**kwargs): ...
@overload
def PrettyPrint(title: str, **kwargs): ...
def PrettyPrint(*args, **kwargs):
with pp as p:
return p.PrettyPrint(*args, **kwargs)
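# A minimal usage sketch (illustration only, not part of the original module):
#
#   Print("step 1", "step 2")
#   PrettyPrint("current state", counter=1, items=[1, 2, 3])
#   try:
#       1 / 0
#   except ZeroDivisionError as e:
#       print_exception(e)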
| 32.30033 | 149 | 0.598345 | 1,193 | 9,787 | 4.682314 | 0.190277 | 0.015038 | 0.021482 | 0.015038 | 0.268887 | 0.223058 | 0.15145 | 0.092016 | 0.051916 | 0.051916 | 0 | 0.005663 | 0.278328 | 9,787 | 302 | 150 | 32.407285 | 0.785219 | 0.222234 | 0 | 0.298851 | 0 | 0 | 0.064507 | 0.027527 | 0 | 0 | 0 | 0 | 0.017241 | 1 | 0.206897 | false | 0 | 0.034483 | 0.028736 | 0.425287 | 0.183908 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f904dbf4d95a4bd9b1226658b8ecfb3a7a36830a | 579 | py | Python | pyNetSocket/docs/__init__.py | DrSparky2k7/pyNetSocket | 43cac3b0bf0179a3ca2146f07b2e17eb5e766a3e | [
"MIT"
] | 1 | 2021-01-09T09:40:54.000Z | 2021-01-09T09:40:54.000Z | pyNetSocket/docs/__init__.py | DrSparky2k7/pyNetSocket | 43cac3b0bf0179a3ca2146f07b2e17eb5e766a3e | [
"MIT"
] | null | null | null | pyNetSocket/docs/__init__.py | DrSparky2k7/pyNetSocket | 43cac3b0bf0179a3ca2146f07b2e17eb5e766a3e | [
"MIT"
] | 1 | 2021-01-13T04:47:07.000Z | 2021-01-13T04:47:07.000Z | print('The documentation for the pyNetSocket library')
print('This covers all the information you need to start')
print('')
print('Topics:',
'server',
'client',
'callbacks',
sep='\n\t')
print('To view information:',
'import pyNetSocket.docs.TOPIC',
sep='\n')
print('')
print('You can see the full guides here:')
print('https://github.com/DrSparky-2007/PyNetSocket/wiki/')
openLink = input('Open link in browser? (y/n) ')[0].lower()
if openLink == 'y':
import webbrowser as wb
wb.open('https://github.com/DrSparky-2007/PyNetSocket/wiki/')
| 28.95 | 63 | 0.661485 | 79 | 579 | 4.848101 | 0.620253 | 0.052219 | 0.073107 | 0.114883 | 0.214099 | 0.214099 | 0.214099 | 0 | 0 | 0 | 0 | 0.018557 | 0.162349 | 579 | 19 | 64 | 30.473684 | 0.771134 | 0 | 0 | 0.111111 | 0 | 0 | 0.585492 | 0.037997 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.444444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
f906cfdd689803bd9e44a27fad5503c8c3323058 | 6,182 | py | Python | pypy/lang/prolog/interpreter/arithmetic.py | camillobruni/pygirl | ddbd442d53061d6ff4af831c1eab153bcc771b5a | [
"MIT"
] | 12 | 2016-01-06T07:10:28.000Z | 2021-05-13T23:02:02.000Z | pypy/lang/prolog/interpreter/arithmetic.py | woodrow/pyoac | b5dc59e6a38e7912db47f26fb23ffa4764a3c0e7 | [
"MIT"
] | null | null | null | pypy/lang/prolog/interpreter/arithmetic.py | woodrow/pyoac | b5dc59e6a38e7912db47f26fb23ffa4764a3c0e7 | [
"MIT"
] | 2 | 2016-07-29T07:09:50.000Z | 2016-10-16T08:50:26.000Z | import py
import math
from pypy.lang.prolog.interpreter.parsing import parse_file, TermBuilder
from pypy.lang.prolog.interpreter import engine, helper, term, error
from pypy.lang.prolog.interpreter.error import UnificationFailed, FunctionNotFound
from pypy.rlib.rarithmetic import intmask
from pypy.rlib.jit import we_are_jitted, hint
from pypy.rlib.unroll import unrolling_iterable
arithmetic_functions = {}
arithmetic_functions_list = []
class CodeCollector(object):
def __init__(self):
self.code = []
self.blocks = []
def emit(self, line):
for line in line.split("\n"):
self.code.append(" " * (4 * len(self.blocks)) + line)
def start_block(self, blockstarter):
assert blockstarter.endswith(":")
self.emit(blockstarter)
self.blocks.append(blockstarter)
def end_block(self, starterpart=""):
block = self.blocks.pop()
assert starterpart in block, "ended wrong block %s with %s" % (
block, starterpart)
def tostring(self):
assert not self.blocks
return "\n".join(self.code)
def wrap_builtin_operation(name, pattern, unwrap_spec, can_overflow, intversion):
code = CodeCollector()
code.start_block("def prolog_%s(engine, query):" % name)
for i, spec in enumerate(unwrap_spec):
varname = "var%s" % (i, )
code.emit("%s = eval_arithmetic(engine, query.args[%s])" %
(varname, i))
for i, spec in enumerate(unwrap_spec):
varname = "var%s" % (i, )
if spec == "int":
code.start_block(
"if not isinstance(%s, term.Number):" % (varname, ))
code.emit("error.throw_type_error('int', %s)" % (varname, ))
code.end_block("if")
if "expr" in unwrap_spec and intversion:
# check whether all arguments are ints
for i, spec in enumerate(unwrap_spec):
varname = "var%s" % (i, )
if spec == "int":
continue
code.start_block(
"if isinstance(%s, term.Number):" % (varname, ))
code.emit("v%s = var%s.num" % (i, i))
code.emit("return term.Number(int(%s))" % (pattern, ))
for i, spec in enumerate(unwrap_spec):
if spec == "int":
continue
code.end_block("if")
#general case in an extra function
args = ", ".join(["var%s" % i for i in range(len(unwrap_spec))])
code.emit("return general_%s(%s)" % (name, args))
code.end_block("def")
code.start_block("def general_%s(%s):" % (name, args))
for i, spec in enumerate(unwrap_spec):
varname = "var%s" % (i, )
code.emit("v%s = 0" % (i, ))
code.start_block("if isinstance(%s, term.Number):" % (varname, ))
code.emit("v%s = %s.num" % (i, varname))
code.end_block("if")
expected = 'int'
if spec == "expr":
code.start_block("elif isinstance(%s, term.Float):" % (varname, ))
code.emit("v%s = %s.floatval" % (i, varname))
code.end_block("elif")
expected = 'float'
code.start_block("else:")
code.emit("error.throw_type_error('%s', %s)" % (expected, varname, ))
code.end_block("else")
code.emit("return norm_float(term.Float(%s))" % pattern)
code.end_block("def")
miniglobals = globals().copy()
exec py.code.Source(code.tostring()).compile() in miniglobals
result = miniglobals["prolog_" + name]
result._look_inside_me_ = True
return result
wrap_builtin_operation._annspecialcase_ = 'specialize:memo'
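# For illustration, here is our own reconstruction of roughly what the collector emits
# for ("+", ["expr", "expr"], "v0 + v1", True, True); this text is not generated verbatim
# anywhere:
#
#     def prolog_NAME(engine, query):
#         var0 = eval_arithmetic(engine, query.args[0])
#         var1 = eval_arithmetic(engine, query.args[1])
#         if isinstance(var0, term.Number):
#             v0 = var0.num
#             if isinstance(var1, term.Number):
#                 v1 = var1.num
#                 return term.Number(int(v0 + v1))
#         return general_NAME(var0, var1)
#
# with general_NAME unwrapping Number/Float operands (raising a type error otherwise)
# and returning norm_float(term.Float(v0 + v1)).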
def eval_arithmetic(engine, query):
return query.eval_arithmetic(engine)
eval_arithmetic._look_inside_me_ = True
def norm_float(obj):
v = obj.floatval
if v == int(v):
return term.Number(int(v))
else:
return obj
simple_functions = [
("+", ["expr", "expr"], "v0 + v1", True, True),
("-", ["expr", "expr"], "v0 - v1", True, True),
("*", ["expr", "expr"], "v0 * v1", True, True),
("//", ["int", "int"], "v0 / v1", True, False),
("**", ["expr", "expr"], "math.pow(float(v0), float(v1))", True, False),
(">>", ["int", "int"], "v0 >> v1", False, False),
("<<", ["int", "int"], "intmask(v0 << v1)", False,
False),
("\\/", ["int", "int"], "v0 | v1", False, False),
("/\\", ["int", "int"], "v0 & v1", False, False),
("xor", ["int", "int"], "v0 ^ v1", False, False),
("mod", ["int", "int"], "v0 % v1", False, False),
("\\", ["int"], "~v0", False, False),
("abs", ["expr"], "abs(v0)", True, True),
("max", ["expr", "expr"], "max(v0, v1)", False, True),
("min", ["expr", "expr"], "min(v0, v1)", False, True),
("round", ["expr"], "int(v0 + 0.5)", False, False),
("floor", ["expr"], "math.floor(v0)", False, False), #XXX
("ceiling", ["expr"], "math.ceil(v0)", False, False), #XXX
("float_fractional_part", ["expr"], "v0 - int(v0)", False, False), #XXX
("float_integer_part", ["expr"], "int(v0)", False, True),
]
for prolog_name, unwrap_spec, pattern, overflow, intversion in simple_functions:
# the name is purely for flowgraph viewing reasons
if prolog_name.replace("_", "").isalnum():
name = prolog_name
else:
import unicodedata
name = "".join([unicodedata.name(unicode(c)).replace(" ", "_").replace("-", "").lower() for c in prolog_name])
f = wrap_builtin_operation(name, pattern, unwrap_spec, overflow,
intversion)
signature = "%s/%s" % (prolog_name, len(unwrap_spec))
arithmetic_functions[signature] = f
arithmetic_functions_list.append((signature, f))
arithmetic_functions_list = unrolling_iterable(arithmetic_functions_list)
| 42.634483 | 118 | 0.537043 | 709 | 6,182 | 4.555712 | 0.22426 | 0.014861 | 0.022291 | 0.018576 | 0.317337 | 0.218576 | 0.19226 | 0.139628 | 0.139628 | 0.130031 | 0 | 0.00871 | 0.294241 | 6,182 | 144 | 119 | 42.930556 | 0.731607 | 0.020544 | 0 | 0.18254 | 0 | 0 | 0.160218 | 0.020999 | 0 | 0 | 0 | 0 | 0.02381 | 0 | null | null | 0 | 0.071429 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f90a63e090dfc5e07483171a1b2114c6fd453602 | 379 | py | Python | froide/campaign/listeners.py | xenein/froide | 59bd3eeded3c3ed00fbc858fe20bfea99c8dbefa | [
"MIT"
] | 198 | 2016-12-03T22:42:55.000Z | 2022-03-25T15:08:36.000Z | froide/campaign/listeners.py | xenein/froide | 59bd3eeded3c3ed00fbc858fe20bfea99c8dbefa | [
"MIT"
] | 264 | 2016-11-30T18:53:17.000Z | 2022-03-17T11:34:18.000Z | froide/campaign/listeners.py | xenein/froide | 59bd3eeded3c3ed00fbc858fe20bfea99c8dbefa | [
"MIT"
] | 42 | 2016-12-22T04:08:27.000Z | 2022-02-26T08:30:38.000Z | from .utils import connect_foirequest
def connect_campaign(sender, **kwargs):
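    # The reference may use either "<namespace>@<rest>" or "<namespace>:<rest>";
    # only the namespace part is used to connect the foirequest.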
reference = kwargs.get("reference")
if not reference:
return
if "@" in reference:
parts = reference.split("@", 1)
else:
parts = reference.split(":", 1)
if len(parts) != 2:
return
namespace = parts[0]
connect_foirequest(sender, namespace)
| 22.294118 | 41 | 0.614776 | 42 | 379 | 5.47619 | 0.547619 | 0.147826 | 0.165217 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014337 | 0.263852 | 379 | 16 | 42 | 23.6875 | 0.810036 | 0 | 0 | 0.153846 | 0 | 0 | 0.031662 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f90beb2c215d893adc92dbd991ff199c881a7e70 | 8,857 | py | Python | escsim/simulator.py | Edlward/foc_esc | 58425c0ce5865c077ce313bd672b971380f2edad | [
"MIT"
] | 67 | 2015-07-12T18:08:43.000Z | 2022-03-05T07:05:01.000Z | escsim/simulator.py | Edlward/foc_esc | 58425c0ce5865c077ce313bd672b971380f2edad | [
"MIT"
] | 1 | 2015-09-08T14:01:46.000Z | 2015-09-09T01:36:22.000Z | escsim/simulator.py | gtoonstra/foc_esc | 58425c0ce5865c077ce313bd672b971380f2edad | [
"MIT"
] | 44 | 2015-07-17T14:59:07.000Z | 2021-02-20T13:55:14.000Z | import math
import constants
class Simulator(object):
def __init__ (self):
self.bemfa = 0.0
self.bemfb = 0.0
self.va = 0.0
self.vb = 0.0
self.kp = constants.KP_EST_RPM
self.ki = constants.KI_EST_RPM
self.ls = 0.035
self.rs = 1.05
self.esta = 0.0
self.estb = 0.0
self.vq = 0.0
self.vd = 0.0
self.v_factor = 0.0
self.inta = 0.0
self.intb = 0.0
self.secretR = 1.05
self.secretL = 0.035
self.T = ( 1.0 / constants.FREQ )
self.F = ( 1.0 - ( self.T * ( self.rs / self.ls )) )
self.G = ( self.T / self.ls )
self.K1 = (500*500)
self.K2 = (2*0.84)/500
self.factorFfw = constants.AMAX / constants.RPM_MAX
self.costheta = 0.0
self.sintheta = 0.0
self.thetak = 0.0
self.theta = 0.0
self.wk = 0.0
self.a2k = 0.0
self.rpm = 0.0
self.real_rpm = 0.0
self.pwm_in = 1099
self.rpm_command_lim = 0.0
self.rpm_i_out = 0.0
self.rpm_error = 0.0
self.Iqr = 0.0
self.Idr = 0.0
self.Iqm = 0.0
self.Idm = 0.0
self.vdc_int = 0.0
def step_sim( self, t, n ):
# self.vq = 50
torque = 0.00537 * self.Iqm
Pleft = constants.V * self.Iqm * 0.85
if ( torque > 0 ):
self.real_rpm = self.real_rpm * 0.99 + 0.01 * (Pleft / torque)
else:
self.real_rpm = self.real_rpm * 0.80
# update real model first.
self.thetak = (self.real_rpm * t) % (math.pi * 2.0)
realbemfa = constants.BEMFK * self.real_rpm * math.sin( self.thetak )
realbemfb = constants.BEMFK * self.real_rpm * math.sin( self.thetak - 2.0943951 )
self.vdc_int = constants.V * 1000
# va = 2/3 * valpha
# vb = -1/3 * valpha + 1/sqrt(3) * vbeta
# vc = -1/3 * valpha - 1/sqrt(3) * vbeta
        # These are representations of the voltages when voltage is applied to the coils of the real motor,
# which need to be calculated separately or the motor won't start turning, since the
# follower needs something to follow.
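        # (Descriptive note: this is the alpha/beta -> three-phase mapping, an
        #  inverse-Clarke-style transform in the scaling this model uses.)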
realva = (self.vd * math.cos(self.thetak) - self.vq*math.sin(self.thetak)) * self.v_factor
realvb = (self.vd * math.sin(self.thetak) + self.vq*math.cos(self.thetak)) * self.v_factor
va_int = (0.66666 * realva)
vb_int = (-0.33333 * realva + constants.ONEDIVSQRT3 * realvb)
vc_int = (-0.33333 * realva - constants.ONEDIVSQRT3 * realvb)
ia_raw = (va_int / self.secretR)
# V * 120 degrees
ib_raw = (vb_int / self.secretR)
ic_raw = -ia_raw - ib_raw
# Done with reading sensors. Let's update the follower model.
if n % constants.LOOP_INTERVAL == 0:
self.calc_input( t )
self.calc_output( t )
# Ialpha = 1.5 * ia_int; // [+-18] [mA]
# Ibeta = SQRT3DIV2 * (ib_int - ic_int); // [+-18] [mA]
# Do all transformations
self.Ialpha = ia_raw
self.Ibeta = constants.SQRT3DIV2 * ( ib_raw - ic_raw )
# self.Ibeta = constants.ONEDIVSQRT3 * ia_raw + constants.TWODIVSQRT3 * ib_raw
# self.Ibeta = self.Ibeta
#Idm = Ialpha*cos(theta) + Ibeta * sin(theta)
#Iqm = -Ialpha*sin(theta) + Ibeta * cos(theta)
# GT: CONST_FACTOR removed?
self.Idm = self.Ialpha * math.cos(self.theta) + self.Ibeta * math.sin(self.theta)
#Idm = AIDQP * Idm + AIDQN * temp; // filter
self.Iqm = -self.Ialpha * math.sin( self.theta ) + self.Ibeta * math.cos(self.theta)
#Iqm = AIDQP * Iqm + AIDQN * temp; // filter
# Valpha = vd*cos(theta) - vq*sin(theta)
# Vbeta = vd*sin(theta) + vq*cos(theta)
self.va = (self.vd * math.cos(self.theta) - self.vq*math.sin(self.theta)) * self.v_factor
self.vb = (self.vd * math.sin(self.theta) + self.vq*math.cos(self.theta)) * self.v_factor
# then update observers...
self.esta = (self.F * self.esta) + (self.G * (self.va - self.bemfa) )
self.estb = (self.F * self.estb) + (self.G * (self.vb - self.bemfb) )
erra = self.Ialpha - self.esta
errb = self.Ibeta - self.estb
self.inta = self.inta + (self.ki*erra)
self.intb = self.intb + (self.ki*errb)
if self.inta > constants.LIMIT:
self.inta = constants.LIMIT
if self.intb > constants.LIMIT:
self.intb = constants.LIMIT
if self.inta < -constants.LIMIT:
self.inta = -constants.LIMIT
if self.intb < -constants.LIMIT:
self.intb = -constants.LIMIT
self.bemfa = -(erra * self.kp) - self.inta
self.bemfb = -(errb * self.kp) - self.intb
# i s (n + 1) = F*is(n) + G*(v s (n) - e s (n))
# F = 0.9375
# G = 0.025
# update the angle tracking observer...
ek = ( (self.bemfa * constants.BEMFK * self.costheta) + (self.bemfb * constants.BEMFK * self.sintheta) )
# One measurement per 20 us
self.wk = self.wk + (self.K1 * self.T * ek )
self.a2k = self.a2k + ( self.T * self.wk )
self.rpm = 0.5 * self.rpm + 0.5 * self.wk
self.theta = ((self.K2 * self.wk) + self.a2k) % (math.pi * 2.0)
self.rpm = self.real_rpm
self.theta = self.thetak
self.costheta = math.cos( self.theta )
self.sintheta = math.sin( self.theta )
print self.Iqm, self.real_rpm, self.rpm
# return 0, ia_raw * 25 , 0, ib_raw * 25 , 0, va_int , 0, self.Iqm * 25
# return 0, self.Ialpha * 50, 0, self.Iqm * 50, 0, self.theta, 0, self.va
# return 5 * self.esta, 5 * self.ia, 5 * self.estb, 5 * self.ib
return self.Ialpha * 25, self.esta * 25, 0, self.rpm - self.real_rpm, self.thetak * 3, self.theta * 3, realbemfa * 25, self.bemfa * 25
# return self.Ialpha * 50, self.Ibeta * 50, self.Idm * 50, self.Iqm * 50, self.va, self.vb, erra, errb
def getPwm( self ):
return self.pwm_in
def movepwm( self, inc ):
self.pwm_in = self.pwm_in + inc
def calc_input( self, t ):
temp = (self.pwm_in - constants.PWM_IN_MIN);
temp *= (constants.RPM_MAX - constants.RPM_MIN);
temp /= (float)(constants.PWM_IN_MAX - constants.PWM_IN_MIN);
temp += constants.RPM_MIN;
rpm_command_raw = temp;
if rpm_command_raw < constants.RPM_MIN:
rpm_command_raw = 0.0
if rpm_command_raw > constants.RPM_MAX:
rpm_command_raw = constants.RPM_MAX
if ( rpm_command_raw >= (self.rpm_command_lim + constants.RPM_SLEW * constants.DT_LOOP) ):
self.rpm_command_lim += constants.RPM_SLEW * constants.DT_LOOP
elif ( rpm_command_raw < (self.rpm_command_lim - constants.RPM_SLEW * constants.DT_LOOP) ):
self.rpm_command_lim -= constants.RPM_SLEW * constants.DT_LOOP
else:
self.rpm_command_lim = rpm_command_raw
if self.rpm_command_lim < 0.0:
self.rpm_command_lim = 0.0
if ( self.rpm_command_lim > constants.RPM_MAX ):
self.rpm_command_lim = constants.RPM_MAX
# RPM Control Outer Loop
self.rpm_error = (self.rpm_command_lim - self.rpm)
rpm_p_out = constants.KP_RPM_UP * self.rpm_error
if ( (self.rpm_error > 0.0) and (self.rpm_i_out < constants.I_SAT_RPM) ):
self.rpm_i_out += constants.KI_RPM * self.rpm_error * constants.DT_LOOP
if ( (self.rpm_error < 0.0) and (self.rpm_i_out > -constants.I_SAT_RPM) ):
self.rpm_i_out += constants.KI_RPM * self.rpm_error * constants.DT_LOOP
rpm_ff_out = constants.KFF_I * self.rpm_command_lim * self.rpm_command_lim;
# Feed forward rpm setting
self.Iqr = rpm_p_out + self.rpm_i_out + rpm_ff_out;
if ( self.Iqr > constants.AMAX):
self.Iqr = constants.AMAX
if ( self.Iqr < -constants.BMAX):
self.Iqr = -constants.BMAX
self.Idr = 0.0
# Request 2A constant
self.Iqr = 2.0
def calc_output( self, t ):
self.vq = self.vq + constants.KPQ * (self.Iqr-self.Iqm);
self.vq += constants.KFF_V * self.rpm_error;
# Restrict to 0-75 to maintain 4% minimum off-time.
# 982
#if ( self.vq > 40 ):
# self.vq = 40
#if ( self.vq < 0 ):
# self.vq = 0
self.v_factor = 0.264
if (self.vq > 0):
self.vd = self.vd + constants.KPD * (self.Idr - self.Idm)
#if ( self.vd > constants.CHAR_90_DEG):
# self.vd = constants.CHAR_90_DEG
#if ( self.vd < 0):
# self.vd = 0
else:
vd = 0
| 36.004065 | 142 | 0.554815 | 1,279 | 8,857 | 3.70993 | 0.172009 | 0.043203 | 0.034141 | 0.046575 | 0.367334 | 0.274183 | 0.226765 | 0.154689 | 0.14373 | 0.126027 | 0 | 0.045884 | 0.318392 | 8,857 | 245 | 143 | 36.15102 | 0.740103 | 0.194535 | 0 | 0.062069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.013793 | null | null | 0.006897 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f90cbcecedc6eda63c34624621d2b2d5d9a2e112 | 643 | py | Python | 50/26.py | ElyKar/Euler | 38744b553b22565ac30ece06e2e3fbf3408068e2 | [
"MIT"
] | null | null | null | 50/26.py | ElyKar/Euler | 38744b553b22565ac30ece06e2e3fbf3408068e2 | [
"MIT"
] | null | null | null | 50/26.py | ElyKar/Euler | 38744b553b22565ac30ece06e2e3fbf3408068e2 | [
"MIT"
] | null | null | null | #!/bin/python
from decimal import *
import re
maxi = 0
maxidx = 0
def longest(s):
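    # Returns the length of the first repeating block found in s: for each start
    # position, grow a candidate block until the same block immediately follows it.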
cur = 0
sub = ''
for start in range(len(s)):
end = start+1
test = s[start]
while end < len(s) and len(sub) == 0:
if test == s[end:end+end-start]:
sub = test
test += s[end]
end += 1
return len(sub)
getcontext().prec = 10000
#frac = str(Decimal(1)/Decimal(2)).replace('0.','').rstrip('0').lstrip('0')
#print longest(frac)
for d in range(2,1000):
frac = str(Decimal(1)/Decimal(d)).replace('0.','').rstrip('0').lstrip('0')
count = longest(frac)
if count > maxi:
maxi = count
maxidx = d
print '%d : %d' % (d, maxi)
print maxidx | 20.09375 | 75 | 0.595645 | 107 | 643 | 3.579439 | 0.364486 | 0.031332 | 0.041775 | 0.057441 | 0.229765 | 0.114883 | 0 | 0 | 0 | 0 | 0 | 0.048638 | 0.200622 | 643 | 32 | 76 | 20.09375 | 0.696498 | 0.163297 | 0 | 0 | 0 | 0 | 0.020522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.08 | null | null | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f90cc0604730602a4aaae583a2f8ead38f06e506 | 6,108 | py | Python | ghiblister.py | mcscope/NoisebrigePythonDebuggingTalk | af2156376671694afd94bd6158ffa27a00022924 | [
"Unlicense"
] | 46 | 2017-09-27T20:19:36.000Z | 2020-12-08T10:07:19.000Z | ghiblister.py | mcscope/NoisebridgePythonDebuggingTalk | af2156376671694afd94bd6158ffa27a00022924 | [
"Unlicense"
] | 6 | 2018-01-09T08:07:37.000Z | 2020-09-07T12:25:13.000Z | ghiblister.py | mcscope/NoisebridgePythonDebuggingTalk | af2156376671694afd94bd6158ffa27a00022924 | [
"Unlicense"
] | 18 | 2017-10-10T02:06:51.000Z | 2019-12-01T10:18:13.000Z | # Christopher Beacham May 5, 2017
# Created for Noisebridge Python class, distribute and use freely
# This is an example program that is a client for Studio Ghibli's API.
# (https://ghibliapi.herokuapp.com/#section/Studio-Ghibli-API)
# It downloads all the resources available in the API, and cross links some of them,
# to make an in-memory copy.
# Unfortunately, it is written in a way that makes way too many requests,
# And takes way too long!
# Use these debugging tools to understand its behavior and improve its performance:
# Charles https://www.charlesproxy.com
# PUDB https://documen.tician.de/pudb/
# cProfile https://docs.python.org/2/library/profile.html
# SnakeViz (for reading cProfile output) https://jiffyclub.github.io/snakeviz/
# Your challenges:
# 1 - Make this script faster. It's really slow on my machine. WHY?
# 2 - This script uses too many network calls. Can you use less?
# 3 - The network calls are blocking, they could be parallel. You can see this in charles. Can you fix that?
# 4 - The story it makes at the end sucks, and doesn't use all of the resources
# available to you from the API. Make it cooler.
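# For reference, one common way to capture and inspect a profile of this script
# (assuming it is saved as ghiblister.py):
#
#   python -m cProfile -o ghiblister.prof ghiblister.py
#   snakeviz ghiblister.prof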
import requests
import random
# This is like namedtuple, but mutable. (https://pypi.python.org/pypi/recordtype)
# `pip install recordtype` to get it
from recordtype import recordtype
GHIBLI_URL = "https://ghibliapi.herokuapp.com/"
session = requests.session()
# We don't want to panic when we see charles custom SSL cert.
# I've turned ssl verification off, but you can also pass a file path to charles's cert
# There are several ways to handle this.
session.verify = False
# Here's what passing the cert file would look like.
# You can ask charles to save the file somewhere on your machine
# session.verify = "/Users/Christopher/charles_sessions/charles-ssl-proxying-certificate.pem"
Person = recordtype("Person", ["id",
"name",
"gender",
"age",
"eye_color",
"hair_color",
"films",
"species",
"url"])
Species = recordtype("Species", ["id",
"name",
"classification",
"eye_colors",
"hair_colors",
"url",
"people",
"films", ])
Film = recordtype("Film", ["id",
"title",
"description",
"director",
"producer",
"release_date",
"rt_score",
"people",
"species",
"locations",
"vehicles",
"url"])
Vehicle = recordtype("Vehicle", ["id",
"name",
"description",
"vehicle_class",
"length",
"pilot",
"films",
"url", ])
Location = recordtype("Location", ["id",
"name",
"climate",
"terrain",
"surface_water",
"residents",
"films",
"url", ])
def get_record(recordtype, record_url):
"""
    Given a record url and a recordtype, fetch the record from the API and build the record.
Assume there's just one record at the location and everything goes well
"""
resp = session.get(record_url)
new_record = recordtype(**resp.json())
return new_record
def is_specific_url(url):
"""
    This API has a tendency to give you the url for the general list of things
    if there are no entries.
    This separates "people/" from "people/123456"
"""
return url[-1] != '/'
def get_all_people():
"""
You can hit the resource name without an ID to get a listing of all of that resource
"""
resp = session.get(GHIBLI_URL + "people")
people = [Person(**json_person)
for json_person in resp.json()]
return people
def make_crossover(all_people):
"""
Make a new story about the studio ghibli characters
"""
story = """
{protagonist.name} is a {protagonist.gender} {protagonist.species.name},
and {friend.name} is a {friend.gender} {friend.species.name}, and they are going on an adventure.
They have to challenge many obstacles, and fight the evil {antagonist.name}, a {antagonist.age} year old
{antagonist.species.name}
"""
# import pudb; pudb.set_trace() # breakpoint 7ee757b5 //
story_choices = {
"protagonist": random.choice(all_people),
"friend": random.choice(all_people),
"antagonist": random.choice(all_people),
}
print story.format(**story_choices)
from guppy import hpy
heap = hpy()
print heap.heap()
def main():
people = get_all_people()
for person in people:
person.species = get_record(Species, person.species)
film_objs = []
# I wish this graph was fully connected, it would be cooler for the story, maybe...
for film_url in person.films:
film = get_record(Film, film_url)
film.locations = [get_record(Location, url)
for url in film.locations if is_specific_url(url)]
film.vehicles = [get_record(Vehicle, url)
for url in film.vehicles if is_specific_url(url)]
film_objs.append(film)
person.films = film_objs
make_crossover(people)
if __name__ == '__main__':
main()
| 35.719298 | 108 | 0.540439 | 685 | 6,108 | 4.740146 | 0.388321 | 0.016631 | 0.012011 | 0.014783 | 0.02279 | 0.013551 | 0 | 0 | 0 | 0 | 0 | 0.005707 | 0.368861 | 6,108 | 170 | 109 | 35.929412 | 0.836576 | 0.286182 | 0 | 0.180851 | 0 | 0.021277 | 0.188941 | 0.019392 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.042553 | null | null | 0.021277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f912619400af35ddcfb350826e39f99938321fc6 | 1,206 | py | Python | CellCycle/ChainModule/ProdCons.py | AQuadroTeam/server_cellsCycle | 4a07ef62928e87256350dd0c501ddeb19bc89462 | [
"MIT"
] | 3 | 2016-08-11T15:46:16.000Z | 2016-11-18T09:19:03.000Z | CellCycle/ChainModule/ProdCons.py | AQuadroTeam/server_cellsCycle | 4a07ef62928e87256350dd0c501ddeb19bc89462 | [
"MIT"
] | 3 | 2016-08-15T12:32:28.000Z | 2016-12-09T11:11:51.000Z | CellCycle/ChainModule/ProdCons.py | AQuadroTeam/CellsCycle | 4a07ef62928e87256350dd0c501ddeb19bc89462 | [
"MIT"
] | null | null | null | #! /usr/bin/env python
import Queue
from Queue import Empty
from ListThread import ListThread
BUF_SIZE = 10
q = Queue.Queue(BUF_SIZE)
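# Single module-level queue shared by every ProducerThread and ConsumerThread below.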
class ProducerThread(ListThread):
def __init__(self, myself, master, slave, slave_of_slave, master_of_master, logger, settings, name):
ListThread.__init__(self, myself, master, slave, slave_of_slave, master_of_master, logger, settings, name)
def run(self):
print_string = "I\'m {} and i\'m running".format(self.myself.id)
self.logger.debug(print_string)
@staticmethod
def produce(item):
if not q.full():
q.put_nowait(item)
return True
else:
return False
class ConsumerThread(ListThread):
def __init__(self, myself, master, slave, slave_of_slave, master_of_master, logger, settings, name):
ListThread.__init__(self, myself, master, slave, slave_of_slave, master_of_master, logger, settings, name)
def run(self):
print_string = "I\'m {} and i\'m running".format(self.myself.id)
self.logger.debug(print_string)
@staticmethod
def consume():
try:
return q.get_nowait()
except Empty:
return None
| 28.714286 | 114 | 0.661692 | 157 | 1,206 | 4.828025 | 0.33758 | 0.079156 | 0.073879 | 0.105541 | 0.672823 | 0.672823 | 0.672823 | 0.672823 | 0.672823 | 0.672823 | 0 | 0.002172 | 0.236318 | 1,206 | 41 | 115 | 29.414634 | 0.820847 | 0.017413 | 0 | 0.4 | 0 | 0 | 0.040541 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.1 | 0 | 0.5 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f91bedac14fe13a22bfb82edb11d867f9395e611 | 17,601 | py | Python | Classes.py | SamVarney/Robinhood | 8dc517f86957138a543cef186997a18371ca1e9b | [
"MIT"
] | null | null | null | Classes.py | SamVarney/Robinhood | 8dc517f86957138a543cef186997a18371ca1e9b | [
"MIT"
] | null | null | null | Classes.py | SamVarney/Robinhood | 8dc517f86957138a543cef186997a18371ca1e9b | [
"MIT"
] | null | null | null |
from Robinhood import Robinhood
import config
import pandas as pd
from datetime import time, datetime
from bokeh.plotting import figure, output_file, show
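# A minimal usage sketch (our own illustration; it assumes valid credentials and that the
# config module provides USERNAME and PASSWORD - adjust to your setup):
#
# trader = Robinhood()
# trader.login(username=config.USERNAME, password=config.PASSWORD)
# portfolio = Portfolio(trader)
# portfolio.general_info()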
#my_trader = Robinhood()
def instrument_info(instrument):
return instrument['symbol']
class Portfolio:
def __init__(self, my_trader):
self.trader = my_trader
securities = my_trader.securities_owned()['results']
self.stocks = []
for item in securities:
instrument = dict(self.trader.get_url(item['instrument']))
self.stocks.append(Stock(trader= self.trader,
instrument=instrument,
portfolio=self))
def stock_handles(self):
stock_dict = dict()
for stock in self.stocks:
stock_dict[stock.symbol] = stock
return stock_dict
def all_past_orders(self):
"""
Gets past order history for STOCKS, won't return crypto trades
:return: df with all past orders
"""
print "~~~~ Getting Trade History ~~~~"
#Get raw past order output from Robinhood
past_orders_raw = self.trader.order_history()
# Fetch past orders
results = past_orders_raw['results']
# reformat into a df
order_history_pd = pd.DataFrame(data=results)
order_history_pd.columns = order_history_pd.columns.astype(str)
# insert column that will hold symbol names for each trade
order_history_pd.insert(0, 'symbol', None)
# Use instrument url to get the stock symbol for each trade and insert it into the df
for row in order_history_pd.iterrows():
instrument = self.trader.get_url(row[1]['instrument'])
order_history_pd.at[row[0], 'symbol'] = instrument['symbol']
return order_history_pd
def general_info(self):
for stock in self.stocks:
print stock.general_info()
return True
def news(self, article_summaries=False):
#Get handles for each stock owned
stock_handles_dict = self.stock_handles()
#Print info and news for each stock
for key in stock_handles_dict:
stock_handle = stock_handles_dict[key]
            news = self.trader.get_news(stock_handle.symbol)['results']
#Get updated quote info
stock_handle.get_quote()
# Print out the stock info and news articles
print stock_handle.symbol + ' - Current Price: $' + str(stock_handle.bid_price) + '. (Bought: ' + str(stock_handle.avg_buy_cost) + ')'
for article in news:
article = dict(article)
print '--------------------------------------------------'
print '(Pub: ' + article['published_at'] + ') ' + article['title']
print article['url']
if article_summaries:
print article['summary']
print '--------------------------------------------------'
print '--------------------------------------------------\n\n\n'
"""
print "\n\n"
for result in securities['results']:
instrument = my_trader.get_url(result['instrument']) # get symbol of stock
name = instrument['symbol']
news = my_trader.get_news(name)
#Fetch Recent prices prices
quote = my_trader.quote_data(name)
last_price = float(quote['last_trade_price'])
price_bought = float(result['pending_average_buy_price'])
# Print out the info
print name + ' - Current Price: $' + str(last_price) + '. (Bought: ' + str(price_bought) + ')'
for article in news['results']:
article = dict(article)
print '--------------------------------------------------'
print '(Pub: ' + article['published_at'] + ') ' + article['title']
print article['url']
print article['summary']
print '--------------------------------------------------'
print '--------------------------------------------------\n\n'
"""
class Stock:
def __init__(self, trader, instrument, portfolio=None,):
if portfolio is not None:
self.portfolio = portfolio
else:
self.portfolio = Portfolio(trader)
self.symbol = instrument['symbol']
self.id = instrument['id']
        self.fundamentals = instrument['fundamentals']
self.quote = instrument['quote']
self.url = instrument['url']
self.type = instrument['type']
#Store Trader instance (not sure this is Kosher but will do for now)
#TODO: revist storing a Trader instance in each stock instance
self.trader = trader
        #Now get info on performance since bought
#Get portfolio
securities = self.trader.securities_owned()['results']
pd_securities = pd.DataFrame.from_dict(securities)
#The keys from Robinhood come as unicode so converting them to strings
pd_securities.columns = pd_securities.columns.astype(str)
#get_instrument URL for this stock
stock = pd_securities[pd_securities['instrument'] == instrument['url']]
#Use instrument to
self.avg_buy_cost = float(stock['average_buy_price'].values[0])
self.quantity_owned = float(stock['quantity'].values[0])
def get_quote(self):
#use instrument url to get current quote prices
quote = dict(self.trader.quote_data(self.symbol))
#Selling Info
self.bid_price = float(quote['bid_price'])
self.bid_size = float(quote['bid_size'])
#Buying Info
self.ask_price = float(quote['ask_price'])
self.ask_size = float(quote['ask_size'])
#Other Info
        self.last_extended_hours_trade_price = quote['last_extended_hours_trade_price'] # for estimating return when markets are closed
#Quote Update Time (don't plan to use for a bit)
self.quote_time = datetime.strptime(str(quote['updated_at']),'%Y-%m-%dT%H:%M:%SZ') #Assuming it will always come as UTC (Z)
return True
def plot_historical_quotes(self, interval = '5minute', span = 'week', fig_title='', show_plot = True):
'''
Creates a bokeh plot of the historical quotes for this stock. Useful for then overlaying other info (like
buy/sell points, etc.)
        :param interval: quote granularity (e.g. '5minute', 'day')
        :param span: time span of quotes to fetch (e.g. 'week', '3month')
:return: Bokeh plot of historical quotes
'''
#create generic figure name if one wasn't passed
if fig_title == '':
fig_title = "{} - Historical Quotes".format(self.symbol)
#Get historical quotes
self.historical_quotes_df = self.historical_quotes(interval=interval, span=span)
print "********* Plotting Historical Quotes for {} ***********".format(self.symbol)
#Do the plotting
p = figure(title=fig_title, plot_width=1000, plot_height=500, x_axis_type = 'datetime')
p.line(x= self.historical_quotes_df['begins_at'].values, y=self.historical_quotes_df['high_price'].values, color='blue', legend='High Price')
p.line(x=self.historical_quotes_df['begins_at'].values, y=self.historical_quotes_df['low_price'].values, color='red', legend='Low Price')
p.yaxis.axis_label = "Price (Dollar)"
p.xaxis.axis_label = "Date"
output_file('{}-Historical Quotes Plot.html'.format(self.symbol))
if show_plot:
show(p)
return p
def update_past_orders(self):
"""
Fetches past orders of this stock and does some parsing to make accessing filled orders easier elsewhere
:return:
"""
#TODO: Implement a method to grab the order history for just one stock
all_past_orders = self.portfolio.all_past_orders() #This is REALLY inefficient (takes forever)
#Now pre-parse into commonly used categories
self.past_orders = all_past_orders[all_past_orders['symbol']==self.symbol] #Past orders for only this stock
self.filled_orders = self.past_orders[self.past_orders['state']=='filled'] #Only orders that were filled (not canceled)
return True
def plot_purchase_vs_price(self):
"""
Plots the buy & sell order history of this stock overlaid on the historical quote data to get a quick gauge of
buy vs sell timing.
:output: Bokeh Plot. Shows it in a browser and saves it.
:return: True
"""
#Fetch most up to date past orders
self.update_past_orders()
#Now parse out sell and buy orders into their own dataframes
buy_orders = pd.DataFrame(columns=['datetime','price'])
sell_orders = pd.DataFrame(columns=['datetime', 'price'])
for order in self.filled_orders.iterrows():
order = order[1]
if order['side'] == 'buy': #Buy Orders
executions = order['executions'][0]
price = float(executions['price'])
timestamp = executions['timestamp']
#append to buy orders df
buy_orders = buy_orders.append({'datetime': timestamp,
'price': price}, ignore_index=True)
elif order['side'] == 'sell': #Sell Orders
executions = order['executions'][0]
price = float(executions['price'])
timestamp = executions['timestamp']
#append to sell orders df
sell_orders = sell_orders.append({'datetime': timestamp,
'price': price}, ignore_index=True)
#convert timestamps to datetime for plotting
buy_orders['datetime'] = pd.to_datetime(buy_orders['datetime'])
sell_orders['datetime'] = pd.to_datetime(sell_orders['datetime'])
#PLOTTING
#Start by creating plot of historical Quotes to build off of
p = self.plot_historical_quotes(interval='day',
span='3month',
fig_title='{} - Buy vs Sell Plot'.format(self.symbol),
show_plot=False)
#now plot buy and sell orders over the historical quotes data
p.scatter(x=buy_orders['datetime'].values, y=buy_orders['price'].values, color='black', legend='Buy')
p.scatter(x=sell_orders['datetime'].values, y=sell_orders['price'].values, color='green', legend='Sell')
p.legend.location = 'top_left' #position the legend
#save and show the plot
output_file('{}-buy sell plot.html'.format(self.symbol))
show(p)
return True
def historical_quotes(self, interval = '5minute', span = 'week'):
'''
:param interval: granularity of each quote (e.g. '5minute', 'day')
:param span: total time span to fetch (e.g. 'week', '3month')
:return: pandas DataFrame of historical quotes
'''
#Grab historical data
history = self.trader.get_historical_quotes(self.symbol, interval=interval, span = span, bounds='regular')
#sort and reformat historicals
historicals = history['historicals']
hist_pd = pd.DataFrame.from_dict(historicals)
hist_pd.columns = hist_pd.columns.astype(str)
hist_pd['begins_at'] = pd.to_datetime(hist_pd['begins_at'])
#save log
#TODO: add time span to naming
hist_pd.to_csv('/Users/samvarney/PycharmProjects/robinhood_trading/data/quote_historicals/{}_hist_quotes.csv'.format(self.symbol))
return hist_pd
# Methods for summarizing information on stock
def all_info(self):
self.general_info()
self.quote_info()
return dict({'symbol': self.symbol,
'quantity': self.quantity_owned,
'Cost': self.avg_buy_cost})
def general_info(self):
self.get_quote() #update for most current price
print '\n__________General Info for {}___________'.format(self.symbol)
print 'Symbol: {}'.format(self.symbol)
print '# Owned: {}'.format(self.quantity_owned)
print 'Avg Buy Cost: ${}'.format(self.avg_buy_cost)
if self.bid_price > 0: #TODO: make this actually work (instead of repeating the same code)
print 'Return: ${}'.format(self.bid_price*self.quantity_owned - self.avg_buy_cost*self.quantity_owned) #TODO: should use market price not bid price
else:
print 'Return: ${}'.format(self.bid_price * self.quantity_owned - self.avg_buy_cost * self.quantity_owned) # TODO: should use market price not bid price
print '----------------------------------\n'
#Print quote info nicely
def quote_info(self):
#update quote
self.get_quote()
#print it nicely
print '\n__________Quote for {}___________'.format(self.symbol)
print 'Updated: {}'.format(self.quote_time)
print 'Bid: ${} (Vol: {})'.format(self.bid_price, self.bid_size)
print 'Ask: ${} (Vol: {})'.format(self.ask_price, self.ask_size)
print '----------------------------------\n'
return
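# Minimal usage sketch for the Stock class above. The symbol is an assumption
# for illustration and, as elsewhere in this file, a logged-in Robinhood
# trader instance is required:
#
# instrument = dict(my_trader.instruments('AAPL'))
# stock = Stock(my_trader, instrument)
# stock.general_info()
# stock.quote_info()
# p = stock.plot_historical_quotes(interval='day', span='3month')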
class crypto_portfolio:
def __init__(self, my_trader):
#Get holdings from robinhood
holdings = my_trader.crypto_holdings()
#Create a holding instance for each crypto currency, ignoring the USD currency element that comes at the end
self.holdings = []
for holding in holdings:
cost_bases = holding['cost_bases'] # TODO: will this ever return a list with more than 1 element?
if len(cost_bases) > 0:
self.holdings.append(crypto_holding(my_trader, holding))
else: #not a crypto holding
print "Cost_bases has length of {}".format(len(cost_bases))
print "Not a holding. Probably the USD item that comes at end of holdings request."
def general_info(self):
for holding in self.holdings:
holding.general_info()
class crypto_holding:
def __init__(self, my_trader, holding):
self.trader = my_trader
#account info
self.account_id = holding['account_id']
self.created_at = holding['created_at']
self.updated_at = holding['updated_at']
#currency info
currency = dict(holding['currency'])
self.code = currency['code']
self.name = currency['name']
self.currency_id = currency['id']
#quantities
self.quantity_available = float(holding['quantity_available'])
self.quantity_held_for_sell = float(holding['quantity_held_for_sell'])
self.quantity_total = float(holding['quantity'])
self.id = holding['id'] #TODO: Figure out what this id refers to
#Cost info
cost_bases = holding['cost_bases'] #TODO: will this ever return a list with more than 1 element?
if len(cost_bases) > 0:
cost_bases = dict(cost_bases[0])
self.direct_cost_basis = float(cost_bases['direct_cost_basis'])
self.direct_quantity = float(cost_bases['direct_quantity']) # TODO: what is Direct Quantity vs Quantity held?
self.intraday_cost_basis = float(cost_bases['intraday_cost_basis'])
if self.quantity_total > 0:
self.avg_cost_per_coin = self.direct_cost_basis/self.quantity_total #TODO: this lines up with Value in the app, but validate the calculation
else:
self.avg_cost_per_coin = None
else:
print "cost bases has length of {}".format(len(cost_bases))
print "Not a holding. Probably the USD item that comes at end of holdings request."
#quote methods
def get_quote(self):
quote = self.trader.crypto_quote_data("{}USD".format(self.code)) #assuming I'll always buy in USD
#Selling Info
self.bid_price = float(quote['bid_price'])
#Buying Info
self.ask_price = float(quote['ask_price'])
#TODO: Figure out what each of these mean
self.high_price = float(quote['high_price'])
self.low_price = float(quote['low_price'])
self.volume = float(quote['volume'])
self.mark_price = float(quote['mark_price'])
self.open_price = float(quote['open_price'])
self.quote_id = quote['id'] #TODO: Not sure if this is actually the quote id
#Quote Update Time (don't plan to use for a bit)
self.quote_time = datetime.now() #crypto quotes don't seem to come with a timestamp (TODO:revisit to check if it changes)
return
#Info methods
def general_info(self):
self.get_quote() # update for most current price
print '\n__________General Info for {}___________'.format(self.name)
print 'Code: {}'.format(self.code)
print '# Owned: {}'.format(self.quantity_total)
print 'Avg Buy Cost: ${}'.format(self.avg_cost_per_coin)
if self.quantity_total >0:
print 'Return: ${}'.format(self.mark_price*self.quantity_total - self.avg_cost_per_coin*self.quantity_total)
print '----------------------------------\n'
if __name__ == "__main__":
#Setup
my_trader = Robinhood()
#login
my_trader.login(username=config.USERNAME, password=config.PASSWORD)
"""
instrument = dict(my_trader.instruments('BTH'))
print instrument
agen = Stock(my_trader, instrument)
agen.general_info()
print '------------------------'
"""
#agen.all_info()
#agen.quote_info()
port = Portfolio(my_trader)
port.news() | 36.66875 | 171 | 0.602807 | 2,123 | 17,601 | 4.801696 | 0.173811 | 0.021581 | 0.013243 | 0.006867 | 0.25878 | 0.202276 | 0.1907 | 0.170689 | 0.162252 | 0.146361 | 0 | 0.002028 | 0.271462 | 17,601 | 480 | 172 | 36.66875 | 0.79295 | 0.156696 | 0 | 0.23913 | 0 | 0 | 0.156001 | 0.032553 | 0 | 0 | 0 | 0.00625 | 0 | 0 | null | null | 0.004348 | 0.021739 | null | null | 0.13913 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f91d774d8418b73c0df6843208a96ce220b0f168 | 3,753 | py | Python | vendor/github.com/elastic/beats/libbeat/tests/system/test_template.py | N0mansky/countbeat | fa80242cf1ea46f036a3a4920eea3e00554f777e | [
"Apache-2.0"
] | 16 | 2018-08-22T03:29:31.000Z | 2021-09-05T14:01:10.000Z | vendor/github.com/elastic/beats/libbeat/tests/system/test_template.py | railroadmanuk/rubrikbeat | af012076d68f64e12092d885257aa5a706453695 | [
"MIT"
] | 3 | 2020-05-29T13:53:51.000Z | 2021-06-01T22:19:56.000Z | libbeat/tests/system/test_template.py | sure0000/beats | 1690690b3fcbe4a46aedc1121f9aa128497ed22d | [
"ECL-2.0",
"Apache-2.0"
] | 6 | 2018-10-31T06:55:01.000Z | 2021-02-06T18:50:04.000Z | from base import BaseTest
import os
from elasticsearch import Elasticsearch, TransportError
from nose.plugins.attrib import attr
import unittest
INTEGRATION_TESTS = os.environ.get('INTEGRATION_TESTS', False)
class Test(BaseTest):
def test_index_modified(self):
"""
Test that beat stops when the elasticsearch index is modified but no template name or pattern is set
"""
self.render_config_template(
elasticsearch={"index": "test"},
)
exit_code = self.run_beat()
assert exit_code == 1
assert self.log_contains(
"setup.template.name and setup.template.pattern have to be set if index name is modified.") is True
def test_index_not_modified(self):
"""
Test that beat starts running if elasticsearch output is set
"""
self.render_config_template(
elasticsearch={"hosts": "localhost:9200"},
)
proc = self.start_beat()
self.wait_until(lambda: self.log_contains("mockbeat start running."))
proc.check_kill_and_wait()
def test_index_modified_no_pattern(self):
"""
Test that beat stops when the elasticsearch index is modified and a template name is set but no pattern
"""
self.render_config_template(
elasticsearch={"index": "test"},
es_template_name="test",
)
exit_code = self.run_beat()
assert exit_code == 1
assert self.log_contains(
"setup.template.name and setup.template.pattern have to be set if index name is modified.") is True
def test_index_modified_no_name(self):
"""
Test that beat stops when the elasticsearch index is modified and a template pattern is set but no name
"""
self.render_config_template(
elasticsearch={"index": "test"},
es_template_pattern="test",
)
exit_code = self.run_beat()
assert exit_code == 1
assert self.log_contains(
"setup.template.name and setup.template.pattern have to be set if index name is modified.") is True
def test_index_with_pattern_name(self):
"""
Test that beat starts running when the elasticsearch index is modified and both template name and pattern are set
"""
self.render_config_template(
elasticsearch={"hosts": "localhost:9200"},
es_template_name="test",
es_template_pattern="test-*",
)
proc = self.start_beat()
self.wait_until(lambda: self.log_contains("mockbeat start running."))
proc.check_kill_and_wait()
@unittest.skipUnless(INTEGRATION_TESTS, "integration test")
@attr('integration')
def test_json_template(self):
"""
Test loading of json based template
"""
self.copy_files(["template.json"])
path = os.path.join(self.working_dir, "template.json")
print path
self.render_config_template(
elasticsearch={"hosts": self.get_host()},
template_overwrite="true",
template_json_enabled="true",
template_json_path=path,
template_json_name="bla",
)
proc = self.start_beat()
self.wait_until(lambda: self.log_contains("mockbeat start running."))
self.wait_until(lambda: self.log_contains("Loading json template from file"))
self.wait_until(lambda: self.log_contains("Elasticsearch template with name 'bla' loaded"))
proc.check_kill_and_wait()
es = Elasticsearch([self.get_elasticsearch_url()])
result = es.transport.perform_request('GET', '/_template/bla')
assert len(result) == 1
def get_host(self):
return os.getenv('ES_HOST', 'localhost') + ':' + os.getenv('ES_PORT', '9200')
| 32.921053 | 111 | 0.632294 | 448 | 3,753 | 5.089286 | 0.212054 | 0.024561 | 0.052632 | 0.063158 | 0.651316 | 0.603947 | 0.585526 | 0.555702 | 0.555702 | 0.460965 | 0 | 0.005829 | 0.268585 | 3,753 | 113 | 112 | 33.212389 | 0.824772 | 0 | 0 | 0.472222 | 0 | 0.041667 | 0.195482 | 0.020709 | 0 | 0 | 0 | 0 | 0.097222 | 0 | null | null | 0 | 0.069444 | null | null | 0.013889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f91e42103d2c1859585030f9f680e4ff2f4e698a | 690 | py | Python | corehq/apps/accounting/management/commands/generate_invoices.py | dslowikowski/commcare-hq | ad8885cf8dab69dc85cb64f37aeaf06106124797 | [
"BSD-3-Clause"
] | 1 | 2015-02-10T23:26:39.000Z | 2015-02-10T23:26:39.000Z | corehq/apps/accounting/management/commands/generate_invoices.py | SEL-Columbia/commcare-hq | 992ee34a679c37f063f86200e6df5a197d5e3ff6 | [
"BSD-3-Clause"
] | 1 | 2022-03-12T01:03:25.000Z | 2022-03-12T01:03:25.000Z | corehq/apps/accounting/management/commands/generate_invoices.py | johan--/commcare-hq | 86ee99c54f55ee94e4c8f2f6f30fc44e10e69ebd | [
"BSD-3-Clause"
] | null | null | null | from optparse import make_option
import datetime
from django.core.management import BaseCommand
from corehq.apps.accounting.tasks import generate_invoices
class Command(BaseCommand):
help = ("Generate missing invoices based on the given date in YYYY-MM-DD "
"format")
option_list = BaseCommand.option_list + (
make_option('--create', action='store_true', default=False,
help='Generate invoices'),
)
def handle(self, *args, **options):
generate_invoices(
based_on_date=datetime.date(*[int(_) for _ in args[0:3]]),
check_existing=True,
is_test=not options.get('create', False),
)
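# Example invocation (the date is a hypothetical placeholder; the three
# positional arguments are consumed as year, month and day by
# datetime.date(*...) above):
#
#   python manage.py generate_invoices 2014 4 1 --create
#
# Without --create the command runs with is_test=True.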
| 31.363636 | 78 | 0.652174 | 82 | 690 | 5.329268 | 0.634146 | 0.10984 | 0.06865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003831 | 0.243478 | 690 | 21 | 79 | 32.857143 | 0.833333 | 0 | 0 | 0 | 1 | 0 | 0.16087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.235294 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f91e77f81799f057320d41ed5c0898b9104a8573 | 668 | py | Python | src/solutions/solution_7.py | mannickutd/project_euler | 3e042773cc8628b19fa4d341e56ab8cecd2ea642 | [
"Apache-2.0"
] | null | null | null | src/solutions/solution_7.py | mannickutd/project_euler | 3e042773cc8628b19fa4d341e56ab8cecd2ea642 | [
"Apache-2.0"
] | null | null | null | src/solutions/solution_7.py | mannickutd/project_euler | 3e042773cc8628b19fa4d341e56ab8cecd2ea642 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Solution to question 7
Nicholas Staples
2014-04-15
"""
from utils.include_decorator import include_decorator
# Generator for primes.
# Not a particularly quick one, but it is fine for small primes.
# If you are repeatedly looking for prime numbers you would probably
# generate them once and then refer to that list, or use one of the
# better algorithms for repeated prime look-ups (e.g. a sieve).
def gen_prime():
yield 1 # note: 1 is not prime; including it means the (n+1)th value yielded is the nth prime (see the print below)
yield 2
x = 3
while True:
for y in range(2, x):
if x % y == 0:
break
else:
yield x
x += 1
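# The comment above mentions better algorithms for repeated prime look-ups.
# A minimal sketch of one such approach, a Sieve of Eratosthenes, is given
# here for illustration; `sieve_primes` and its `limit` bound are assumptions,
# not part of the original solution.
def sieve_primes(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # mark every multiple of n starting at n*n as composite
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]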
@include_decorator(7)
def problem_7_solution():
gen = gen_prime()
print [next(gen) for __ in range(10002)][-1] | 20.875 | 70 | 0.703593 | 111 | 668 | 4.153153 | 0.666667 | 0.104121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045198 | 0.20509 | 668 | 32 | 71 | 20.875 | 0.822976 | 0.434132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.0625 | null | null | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f9215c75703c23dde5e4fa67bed97e08eca42978 | 921 | py | Python | scraper/storage_spiders/diegoshoecom.py | chongiadung/choinho | d2a216fe7a5064d73cdee3e928a7beef7f511fd1 | [
"MIT"
] | null | null | null | scraper/storage_spiders/diegoshoecom.py | chongiadung/choinho | d2a216fe7a5064d73cdee3e928a7beef7f511fd1 | [
"MIT"
] | 10 | 2020-02-11T23:34:28.000Z | 2022-03-11T23:16:12.000Z | scraper/storage_spiders/diegoshoecom.py | chongiadung/choinho | d2a216fe7a5064d73cdee3e928a7beef7f511fd1 | [
"MIT"
] | 3 | 2018-08-05T14:54:25.000Z | 2021-06-07T01:49:59.000Z | # Auto generated by generator.py. Delete this line if you make modification.
from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor
XPATH = {
'name' : "//div[@class='product-info']/form/h1",
'price' : "//div[@class='price-wrap clearfix']/p[@class='actual-price']/span/span/span",
'category' : "//div/div[@class='breadcrumbs clearfix']/a",
'description' : "//div[@class='product-main-info']/div[@id='content_description']/p",
'images' : "//div[@class='cm-image-wrap']/a/img/@src",
'canonical' : "",
'base_url' : "",
'brand' : ""
}
name = 'diegoshoe.com'
allowed_domains = ['diegoshoe.com']
start_urls = ['http://diegoshoe.com']
tracking_url = ''
sitemap_urls = ['']
sitemap_rules = [('', 'parse_item')]
sitemap_follow = []
rules = [
Rule(LinkExtractor(), 'parse_item'),
Rule(LinkExtractor(), 'parse'),
#Rule(LinkExtractor(), 'parse_item_and_links'),
]
| 34.111111 | 92 | 0.648208 | 110 | 921 | 5.309091 | 0.581818 | 0.068493 | 0.113014 | 0.089041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001259 | 0.137894 | 921 | 26 | 93 | 35.423077 | 0.734257 | 0.130293 | 0 | 0 | 1 | 0.043478 | 0.483709 | 0.307018 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.086957 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f92418c9a9ee88d443081957c827b2cf66e6d52b | 435 | py | Python | Part-03-Understanding-Software-Crafting-Your-Own-Tools/models/edx-platform/lms/djangoapps/courseware/migrations/0015_add_courseware_stats_index.py | osoco/better-ways-of-thinking-about-software | 83e70d23c873509e22362a09a10d3510e10f6992 | [
"MIT"
] | 3 | 2021-12-15T04:58:18.000Z | 2022-02-06T12:15:37.000Z | Part-03-Understanding-Software-Crafting-Your-Own-Tools/models/edx-platform/lms/djangoapps/courseware/migrations/0015_add_courseware_stats_index.py | osoco/better-ways-of-thinking-about-software | 83e70d23c873509e22362a09a10d3510e10f6992 | [
"MIT"
] | null | null | null | Part-03-Understanding-Software-Crafting-Your-Own-Tools/models/edx-platform/lms/djangoapps/courseware/migrations/0015_add_courseware_stats_index.py | osoco/better-ways-of-thinking-about-software | 83e70d23c873509e22362a09a10d3510e10f6992 | [
"MIT"
] | 1 | 2019-01-02T14:38:50.000Z | 2019-01-02T14:38:50.000Z | # Generated by Django 2.2.18 on 2021-02-18 17:35
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('courseware', '0014_fix_nan_value_for_global_speed'),
]
operations = [
migrations.AddIndex(
model_name='studentmodule',
index=models.Index(fields=['module_state_key', 'grade', 'student'], name='courseware_stats'),
),
]
| 24.166667 | 105 | 0.643678 | 49 | 435 | 5.510204 | 0.795918 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05988 | 0.232184 | 435 | 17 | 106 | 25.588235 | 0.748503 | 0.105747 | 0 | 0 | 1 | 0 | 0.263566 | 0.090439 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f925678191b707a90f520b823a88f06012e7d065 | 148 | py | Python | problem/10000~19999/11948/11948.py3.py | njw1204/BOJ-AC | 1de41685725ae4657a7ff94e413febd97a888567 | [
"MIT"
] | 1 | 2019-04-19T16:37:44.000Z | 2019-04-19T16:37:44.000Z | problem/10000~19999/11948/11948.py3.py | njw1204/BOJ-AC | 1de41685725ae4657a7ff94e413febd97a888567 | [
"MIT"
] | 1 | 2019-04-20T11:42:44.000Z | 2019-04-20T11:42:44.000Z | problem/10000~19999/11948/11948.py3.py | njw1204/BOJ-AC | 1de41685725ae4657a7ff94e413febd97a888567 | [
"MIT"
] | 3 | 2019-04-19T16:37:47.000Z | 2021-10-25T00:45:00.000Z | x=[]
for i in range(4):
x.append(int(input()))
x.sort()
ans=sum(x[1:])
x=[]
for i in range(2):
x.append(int(input()))
ans+=max(x)
print(ans) | 14.8 | 26 | 0.567568 | 31 | 148 | 2.709677 | 0.516129 | 0.095238 | 0.119048 | 0.166667 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024 | 0.155405 | 148 | 10 | 27 | 14.8 | 0.648 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f926e4b4d495797807abf91c8bfa7da02a68b80a | 733 | py | Python | mytutor/urls.py | adityapandadev/FindMyTutor | 7c153824355cd53c82d20fd4208b7091b84967d7 | [
"MIT"
] | 1 | 2021-07-26T16:06:53.000Z | 2021-07-26T16:06:53.000Z | mytutor/urls.py | adityapandadev/Findmytutor | 7c153824355cd53c82d20fd4208b7091b84967d7 | [
"MIT"
] | null | null | null | mytutor/urls.py | adityapandadev/Findmytutor | 7c153824355cd53c82d20fd4208b7091b84967d7 | [
"MIT"
] | null | null | null | from django.contrib import admin
from django.urls import path
from django.urls.conf import include
from mytutor import views
from django.views.generic.base import RedirectView
urlpatterns = [
path('home/', views.HomeView.as_view()),
path('tutor/', views.TutorListView.as_view()),
path('contact/', views.ContactView.as_view()),
path('tutor/<int:pk>', views.TutorDetailView.as_view()),
path('question/', views.QuestionListView.as_view()),
path('question/<int:pk>', views.QuestionDetailView.as_view()),
path('question/create/', views.QuestionCreate.as_view(success_url="/mytutor/question")),
path('contact/submit', views.contact),
path('', RedirectView.as_view(url="home/")),
]
| 38.578947 | 92 | 0.699864 | 90 | 733 | 5.6 | 0.377778 | 0.095238 | 0.119048 | 0.107143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136426 | 733 | 18 | 93 | 40.722222 | 0.796209 | 0 | 0 | 0 | 0 | 0 | 0.151432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3125 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f9284ae0ed060eed8944ff4b814eee1e48057630 | 717 | py | Python | django/publicmapping/publicmapping/celery.py | PublicMapping/districtbuilder-classic | 6e4b9d644043082eb0499f5aa77e777fff73a67c | [
"Apache-2.0"
] | 2 | 2020-06-15T00:37:15.000Z | 2021-09-23T00:05:25.000Z | django/publicmapping/publicmapping/celery.py | PublicMapping/districtbuilder-classic | 6e4b9d644043082eb0499f5aa77e777fff73a67c | [
"Apache-2.0"
] | 2 | 2020-05-11T20:54:54.000Z | 2020-06-05T17:16:13.000Z | django/publicmapping/publicmapping/celery.py | PublicMapping/districtbuilder-classic | 6e4b9d644043082eb0499f5aa77e777fff73a67c | [
"Apache-2.0"
] | 1 | 2021-09-23T00:06:08.000Z | 2021-09-23T00:06:08.000Z | from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from . import REDIS_URL
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "publicmapping.settings")
# Configure Celery app to use Redis as both the results backend and the message broker.
app = Celery('publicmapping', backend=REDIS_URL, broker=REDIS_URL)
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
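# A minimal sketch of a task this app could then discover and run; the task
# name and body are assumptions for illustration, not part of the project.
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))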
| 37.736842 | 87 | 0.800558 | 101 | 717 | 5.534653 | 0.564356 | 0.042934 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125523 | 717 | 18 | 88 | 39.833333 | 0.891547 | 0.488145 | 0 | 0 | 0 | 0 | 0.230556 | 0.122222 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
0054ba2d43533ad49392c07910fef919c68d4430 | 18,625 | py | Python | babilim/training/losses.py | penguinmenac3/babilim | d3b1dd7c38a9de8f1e553cc5c0b2dfa62fe25c27 | [
"MIT"
] | 1 | 2020-05-04T15:20:55.000Z | 2020-05-04T15:20:55.000Z | babilim/training/losses.py | penguinmenac3/babilim | d3b1dd7c38a9de8f1e553cc5c0b2dfa62fe25c27 | [
"MIT"
] | 1 | 2019-11-28T09:03:20.000Z | 2019-11-28T09:03:20.000Z | babilim/training/losses.py | penguinmenac3/babilim | d3b1dd7c38a9de8f1e553cc5c0b2dfa62fe25c27 | [
"MIT"
] | 1 | 2019-11-28T08:30:13.000Z | 2019-11-28T08:30:13.000Z | # AUTOGENERATED FROM: babilim/training/losses.ipynb
# Cell: 0
"""doc
# babilim.training.losses
> A package containing all losses.
"""
# Cell: 1
from collections import defaultdict
from typing import Any
import json
import numpy as np
import babilim
from babilim.core.itensor import ITensor
from babilim.core.logging import info
from babilim.core.tensor import Tensor
from babilim.core.module import Module
# Cell: 2
class Loss(Module):
def __init__(self, reduction: str = "mean"):
"""
A loss is a stateful object which computes the difference between the prediction and the target.
:param reduction: Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Default: 'mean'.
"""
super().__init__()
self._accumulators = defaultdict(list)
self.reduction = reduction
if reduction not in ["none", "mean", "sum"]:
raise NotImplementedError()
def call(self, y_pred: Any, y_true: Any) -> ITensor:
"""
Implement a loss function between preds and true outputs.
**DO NOT**:
* Overwrite this function (overwrite `self.loss(...)` instead)
* Call this function (call the module instead `self(y_pred, y_true)`)
Arguments:
:param y_pred: The predictions of the network. Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
:param y_true: The desired outputs of the network (labels). Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
"""
loss = self.loss(y_pred, y_true)
if loss.is_nan().any():
raise ValueError("Loss is nan. Loss value: {}".format(loss))
if self.reduction == "mean":
loss = loss.mean()
elif self.reduction == "sum":
loss = loss.sum()
return loss
def loss(self, y_pred: Any, y_true: Any) -> ITensor:
"""
Implement a loss function between preds and true outputs.
**`loss` must be overwritten by subclasses.**
**DO NOT**:
* Call this function (call the module instead `self(y_pred, y_true)`)
Arguments:
:param y_pred: The predictions of the network. Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
:param y_true: The desired outputs of the network (labels). Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
"""
raise NotImplementedError("Every loss must implement the loss method.")
def log(self, name: str, value: ITensor) -> None:
"""
Log a tensor under a name.
These logged values then can be used for example by tensorboard loggers.
:param name: The name under which to log the tensor.
:param value: The tensor that should be logged.
"""
if isinstance(value, ITensor):
val = value.numpy()
if len(val.shape) > 0:
self._accumulators[name].append(val)
else:
self._accumulators[name].append(np.array([val]))
else:
self._accumulators[name].append(np.array([value]))
def reset_avg(self) -> None:
"""
Reset the accumulation of tensors in the logging.
Should only be called by a tensorboard logger.
"""
self._accumulators = defaultdict(list)
def summary(self, samples_seen, summary_writer=None, summary_txt=None, log_std=False, log_min=False, log_max=False) -> None:
"""
Write a summary of the accumulated logs into tensorboard.
:param samples_seen: The number of samples the training algorithm has seen so far (not iterations!).
This is used for the x axis in the plot. If you use the samples seen it is independent of the batch size.
Whether the network was trained for 4 batches with a batch size of 32 or for 32 batches with a batch size of 4 does not matter.
:param summary_writer: The summary writer to use for writing the summary. If none is provided it will use the tensorflow default.
:param summary_txt: The file where to write the summary in csv format.
"""
results = {}
if summary_writer is not None:
for k in self._accumulators:
if not self._accumulators[k]:
continue
combined = np.concatenate(self._accumulators[k], axis=0)
summary_writer.add_scalar("{}".format(k), combined.mean(), global_step=samples_seen)
results[f"{k}"] = combined.mean()
if log_std:
results[f"{k}_std"] = combined.std()
summary_writer.add_scalar("{}_std".format(k), results[f"{k}_std"], global_step=samples_seen)
if log_min:
results[f"{k}_min"] = combined.min()
summary_writer.add_scalar("{}_min".format(k), results[f"{k}_min"], global_step=samples_seen)
if log_max:
results[f"{k}_max"] = combined.max()
summary_writer.add_scalar("{}_max".format(k), results[f"{k}_max"], global_step=samples_seen)
else:
import tensorflow as tf
for k in self._accumulators:
if not self._accumulators[k]:
continue
combined = np.concatenate(self._accumulators[k], axis=0)
tf.summary.scalar("{}".format(k), combined.mean(), step=samples_seen)
results[f"{k}"] = combined.mean()
if log_std:
results[f"{k}_std"] = combined.std()
tf.summary.scalar("{}_std".format(k), results[f"{k}_std"], step=samples_seen)
if log_min:
results[f"{k}_min"] = combined.min()
tf.summary.scalar("{}_min".format(k), results[f"{k}_min"], step=samples_seen)
if log_max:
results[f"{k}_max"] = combined.max()
tf.summary.scalar("{}_max".format(k), results[f"{k}_max"], step=samples_seen)
if summary_txt is not None:
results["samples_seen"] = samples_seen
for k in results:
results[k] = f"{results[k]:.5f}"
with open(summary_txt, "a") as f:
f.write(json.dumps(results)+"\n")
@property
def avg(self):
"""
Get the average of the logged values.
This is helpful to print values that are more stable than values from a single iteration.
"""
avgs = {}
for k in self._accumulators:
if not self._accumulators[k]:
continue
combined = np.concatenate(self._accumulators[k], axis=0)
avgs[k] = combined.mean()
return avgs
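# A minimal sketch of how a concrete loss subclasses `Loss`: only `loss` is
# overridden, and instances are invoked as `criterion(y_pred, y_true)` so the
# NaN check and reduction in `call` above still apply. The class name is an
# assumption for illustration; it deliberately reuses only tensor operations
# seen elsewhere in this file.
class ExampleSquaredErrorLoss(Loss):
    def loss(self, y_pred: ITensor, y_true: ITensor) -> ITensor:
        # elementwise squared difference, averaged over the last axis
        return ((y_pred - y_true) ** 2).mean(axis=-1)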
# Cell: 3
class NativeLossWrapper(Loss):
def __init__(self, loss, reduction: str = "mean"):
"""
Wrap a native loss as a babilim loss.
The wrapped object must have the following signature:
```python
Callable(y_pred, y_true, log_val) -> Tensor
```
where log_val will be a function which can be used for logging scalar tensors/values.
:param loss: The loss that should be wrapped.
:param reduction: Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Default: 'mean'.
"""
super().__init__(reduction=reduction)
self.native_loss = loss
self._auto_device()
def _auto_device(self):
if babilim.is_backend(babilim.PYTORCH_BACKEND):
import torch
self.native_loss = self.native_loss.to(torch.device(self.device))
return self
def loss(self, y_pred: Any, y_true: Any) -> ITensor:
"""
Compute the loss using the native loss function provided in the constructor.
:param y_pred: The predictions of the network. Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
:param y_true: The desired outputs of the network (labels). Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
"""
# Unwrap arguments
tmp = y_true._asdict()
y_true_tmp = {k: tmp[k].native for k in tmp}
y_true = type(y_true)(**y_true_tmp)
tmp = y_pred._asdict()
y_pred_tmp = {k: tmp[k].native for k in tmp}
y_pred = type(y_pred)(**y_pred_tmp)
# call function
result = self.native_loss(y_pred=y_pred, y_true=y_true,
log_val=lambda name, tensor: self.log(name, Tensor(data=tensor, trainable=True)))
return Tensor(data=result, trainable=True)
# Cell: 4
class SparseCrossEntropyLossFromLogits(Loss):
def __init__(self, reduction: str = "mean"):
"""
Compute a sparse cross entropy.
This means that the preds are logits and the targets are not one hot encoded.
:param reduction: Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Default: 'mean'.
"""
super().__init__(reduction=reduction)
if babilim.is_backend(babilim.PYTORCH_BACKEND):
from torch.nn import CrossEntropyLoss
self.loss_fun = CrossEntropyLoss(reduction="none")
else:
from tensorflow.nn import sparse_softmax_cross_entropy_with_logits
self.loss_fun = sparse_softmax_cross_entropy_with_logits
def loss(self, y_pred: ITensor, y_true: ITensor) -> ITensor:
"""
Compute the sparse cross entropy assuming y_pred to be logits.
:param y_pred: The predictions of the network. Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
:param y_true: The desired outputs of the network (labels). Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
"""
y_true = y_true.cast("int64")
if babilim.is_backend(babilim.PYTORCH_BACKEND):
return Tensor(data=self.loss_fun(y_pred.native, y_true.native[:, 0]), trainable=True)
else:
return Tensor(data=self.loss_fun(labels=y_true.native, logits=y_pred.native), trainable=True)
# Cell: 5
class BinaryCrossEntropyLossFromLogits(Loss):
def __init__(self, reduction: str = "mean"):
"""
Compute a binary cross entropy.
This means that the preds are logits and the targets are a binary (1 or 0) tensor of same shape as logits.
:param reduction: Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Default: 'mean'.
"""
super().__init__(reduction=reduction)
if babilim.is_backend(babilim.PYTORCH_BACKEND):
from torch.nn import BCEWithLogitsLoss
self.loss_fun = BCEWithLogitsLoss(reduction="none")
else:
from tensorflow.nn import sigmoid_cross_entropy_with_logits
self.loss_fun = sigmoid_cross_entropy_with_logits
def loss(self, y_pred: ITensor, y_true: ITensor) -> ITensor:
"""
Compute the binary cross entropy assuming y_pred to be logits.
:param y_pred: The predictions of the network. Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
:param y_true: The desired outputs of the network (labels). Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
"""
if babilim.is_backend(babilim.PYTORCH_BACKEND):
return Tensor(data=self.loss_fun(y_pred.native, y_true.native), trainable=True)
else:
return Tensor(data=self.loss_fun(labels=y_true.native, logits=y_pred.native), trainable=True)
# Cell: 6
class SmoothL1Loss(Loss):
def __init__(self, reduction: str = "mean"):
"""
Compute a smooth L1 (Huber) loss.
The loss is quadratic for small differences and linear for large ones, applied elementwise to preds and targets of the same shape.
:param reduction: Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Default: 'mean'.
"""
super().__init__(reduction=reduction)
if babilim.is_backend(babilim.PYTORCH_BACKEND):
from torch.nn import SmoothL1Loss
self.loss_fun = SmoothL1Loss(reduction="none")
else:
from tensorflow.keras.losses import huber
self.loss_fun = huber
self.delta = 1.0
def loss(self, y_pred: ITensor, y_true: ITensor) -> ITensor:
"""
Compute the smooth L1 loss between predictions and targets.
:param y_pred: The predictions of the network. Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
:param y_true: The desired outputs of the network (labels). Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
"""
if babilim.is_backend(babilim.PYTORCH_BACKEND):
return Tensor(data=self.loss_fun(y_pred.native, y_true.native), trainable=True)
else:
return Tensor(data=self.loss_fun(y_true.native, y_pred.native, delta=self.delta), trainable=True)
# Cell: 7
class MeanSquaredError(Loss):
def __init__(self, reduction: str = "mean"):
"""
Compute the mean squared error.
:param reduction: Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Default: 'mean'.
"""
super().__init__(reduction=reduction)
def loss(self, y_pred: ITensor, y_true: ITensor, axis: int=-1) -> ITensor:
"""
Compute the mean squared error.
:param y_pred: The predictions of the network. Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
:param y_true: The desired outputs of the network (labels). Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
:param axis: (Optional) The axis along which to compute the mean squared error.
"""
return ((y_pred - y_true) ** 2).mean(axis=axis)
# Cell: 8
class SparseCategoricalAccuracy(Loss):
def __init__(self, reduction: str = "mean"):
"""
Compute the sparse categorical accuracy.
Sparse means that the targets are not one hot encoded.
:param reduction: Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Default: 'mean'.
"""
super().__init__(reduction=reduction)
def loss(self, y_pred: ITensor, y_true: ITensor, axis: int=-1) -> ITensor:
"""
Compute the sparse categorical accuracy.
:param y_pred: The predictions of the network. Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
:param y_true: The desired outputs of the network (labels). Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
:param axis: (Optional) The axis along which to compute the sparse categorical accuracy.
"""
pred_class = y_pred.argmax(axis=axis)
true_class = y_true.cast("int64")
correct_predictions = pred_class == true_class
return correct_predictions.cast("float32").mean(axis=axis)
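# Usage sketch: despite subclassing `Loss`, this accuracy is typically logged
# as a metric rather than optimized (tensor names are illustrative):
#
# acc = SparseCategoricalAccuracy()
# value = acc(y_pred, y_true) # fraction of correct argmax predictions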
# Cell: 9
class NaNMaskedLoss(Loss):
def __init__(self, loss, masked_dim=-1):
"""
Wrap a loss so that it is computed only on values where the target is not NaN.
:param loss: The loss that should be wrapped and only applied on non nan values.
"""
super().__init__(reduction="none")
self.wrapped_loss = loss
self.zero = Tensor(data=np.array(0), trainable=False)
self.masked_dim = masked_dim
def loss(self, y_pred: ITensor, y_true: ITensor) -> ITensor:
"""
Compute the loss given in the constructor only on values where the GT is not NaN.
:param y_pred: The predictions of the network. Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
:param y_true: The desired outputs of the network (labels). Either a NamedTuple pointing at ITensors or a Dict or Tuple of ITensors.
"""
binary_mask = (~y_true.is_nan())
mask = binary_mask.cast("float32")
masked_y_true = (y_true * mask)[binary_mask]
shape = list(y_true.shape)
shape[self.masked_dim] = -1
masked_y_true = masked_y_true.reshape(shape)
for dim in range(len(binary_mask.shape)):
if y_pred.shape[dim] != binary_mask.shape[dim] and y_pred.shape[dim] % binary_mask.shape[dim] == 0:
repeat = y_pred.shape[dim] / binary_mask.shape[dim]
binary_mask = binary_mask.repeat(int(repeat), axis=dim)
masked_y_pred = (y_pred * mask)[binary_mask]
shape = list(y_pred.shape)
shape[self.masked_dim] = -1
masked_y_pred = masked_y_pred.reshape(shape)
if masked_y_pred.shape[self.masked_dim] > 0:
loss = self.wrapped_loss(masked_y_pred, masked_y_true)
else:
loss = self.zero
return loss
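# Usage sketch: wrap an elementwise loss so entries with NaN targets are
# ignored (names and the reduction choice are illustrative assumptions):
#
# masked_mse = NaNMaskedLoss(MeanSquaredError(reduction="none"))
# loss = masked_mse(y_pred, y_true)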
| 45.874384 | 286 | 0.628617 | 2,503 | 18,625 | 4.544946 | 0.118658 | 0.021976 | 0.018987 | 0.039557 | 0.638186 | 0.618143 | 0.605397 | 0.582718 | 0.541579 | 0.531646 | 0 | 0.003556 | 0.275275 | 18,625 | 405 | 287 | 45.987654 | 0.839235 | 0.420671 | 0 | 0.382199 | 1 | 0 | 0.032889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115183 | false | 0 | 0.089005 | 0 | 0.314136 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
00573c549df2f7162056e919b9a823effa6b19f7 | 1,535 | py | Python | Examples/digits/NN/runNN.py | longtengz/pyml | fa65d4274afe6ef60b31cd2a006a142c350c577d | [
"MIT"
] | 4 | 2017-04-09T16:25:14.000Z | 2018-03-28T15:41:55.000Z | Examples/digits/NN/runNN.py | longtengz/pyml | fa65d4274afe6ef60b31cd2a006a142c350c577d | [
"MIT"
] | null | null | null | Examples/digits/NN/runNN.py | longtengz/pyml | fa65d4274afe6ef60b31cd2a006a142c350c577d | [
"MIT"
] | null | null | null | import sys
# TODO
# need to normalize this path for Windows users
sys.path.insert(0, '../../../NeuralNet')
from NeuralNet import NN
from ActivationFunction.AF import *
trainingPairs = list()
testPairs = list()
with open('../data/trainingDigits.data', 'r') as trainingDigitsFile:
for line in list(trainingDigitsFile):
# one-hot encode the output: for digit n, set the (n+1)th element of the vector to 1 and the rest to 0
outputValue = [0] * 9
outputValue.insert(int(line[0]), 1)
inputValue = [int(x) for x in list(line[2:-1])]
trainingPairs.append([inputValue, outputValue])
with open('../data/testDigits.data', 'r') as testDigitsFile:
for line in list(testDigitsFile):
outputValue = [0] * 9
outputValue.insert(int(line[0]), 1)
inputValue = [int(x) for x in list(line[2:-1])]
testPairs.append([inputValue, outputValue])
print('start')
#digitsClassifier = NN([1024, 30, 10], sigmoid, sigmoidDiff)
#digitsClassifier = NN([1024, 30, 10], sigmoid, sigmoidDiff, '../data/digitsWeights-1024-500-10.data')
# stochastic gradient descent
digitsClassifier = NN([1024, 30, 10], sigmoid, sigmoidDiff, '../data/digitsWeights-sgd-1024-500-10.data')
#digitsClassifier.train(trainingPairs, 10, 0.05, isSGD=True)
digitsClassifier.test(testPairs)
#digitsClassifier.saveWeightsToFile('../data/digitsWeights-1024-500-10.data')
# SGD
#digitsClassifier.saveWeightsToFile('../data/digitsWeights-sgd-1024-500-10.data')
| 30.7 | 152 | 0.695765 | 199 | 1,535 | 5.366834 | 0.396985 | 0.022472 | 0.033708 | 0.048689 | 0.370787 | 0.370787 | 0.330524 | 0.243446 | 0.243446 | 0.129213 | 0 | 0.062791 | 0.159609 | 1,535 | 49 | 153 | 31.326531 | 0.765116 | 0.391531 | 0 | 0.285714 | 0 | 0 | 0.126761 | 0.099675 | 0 | 0 | 0 | 0.020408 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
005ac9b14b94b52e2743e4397f8aed64e361559e | 1,507 | py | Python | observations/r/swahili.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 199 | 2017-07-24T01:34:27.000Z | 2022-01-29T00:50:55.000Z | observations/r/swahili.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 46 | 2017-09-05T19:27:20.000Z | 2019-01-07T09:47:26.000Z | observations/r/swahili.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 45 | 2017-07-26T00:10:44.000Z | 2022-03-16T20:44:59.000Z | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import csv
import numpy as np
import os
import sys
from observations.util import maybe_download_and_extract
def swahili(path):
"""Swahili
Attitudes towards the Swahili language among Kenyan school children
A dataset with 480 observations on the following 4 variables.
`Province`
`NAIROBI` or `PWANI`
`Sex`
`female` or `male`
`Attitude.Score`
Score (out a possible 200 points) on a survey of attitude towards the
Swahili language
`School`
Code for the school: `A` through `L`
Args:
path: str.
Path to directory which either stores file or otherwise file will
be downloaded and extracted there.
Filename is `swahili.csv`.
Returns:
Tuple of np.ndarray `x_train` with 480 rows and 4 columns and
dictionary `metadata` of column headers (feature names).
"""
import pandas as pd
path = os.path.expanduser(path)
filename = 'swahili.csv'
if not os.path.exists(os.path.join(path, filename)):
url = 'http://dustintran.com/data/r/Stat2Data/Swahili.csv'
maybe_download_and_extract(path, url,
save_file_name='swahili.csv',
resume=False)
data = pd.read_csv(os.path.join(path, filename), index_col=0,
parse_dates=True)
x_train = data.values
metadata = {'columns': data.columns}
return x_train, metadata
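# Usage sketch (the path is a hypothetical placeholder):
#
# x_train, metadata = swahili("~/data")
# print(x_train.shape) # (480, 4) per the docstring above
# print(metadata["columns"])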
| 23.184615 | 71 | 0.684141 | 206 | 1,507 | 4.868932 | 0.563107 | 0.03988 | 0.047856 | 0.045862 | 0.043868 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012111 | 0.232913 | 1,507 | 64 | 72 | 23.546875 | 0.855536 | 0.446583 | 0 | 0 | 0 | 0 | 0.100381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.409091 | 0 | 0.5 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
005bacdde9dfe7d2f249be1a724d289ebd153f3b | 19,504 | py | Python | qubits/cl_utils.py | thespacedoctor/qubits | 3c02ace7226389841c6bb838d045c11bed61a3c2 | [
"MIT"
] | 3 | 2018-09-25T09:32:55.000Z | 2021-11-17T11:35:17.000Z | qubits/cl_utils.py | thespacedoctor/qubits | 3c02ace7226389841c6bb838d045c11bed61a3c2 | [
"MIT"
] | 1 | 2018-03-16T16:04:52.000Z | 2018-03-16T16:04:52.000Z | qubits/cl_utils.py | thespacedoctor/qubits | 3c02ace7226389841c6bb838d045c11bed61a3c2 | [
"MIT"
] | 2 | 2017-07-16T08:23:41.000Z | 2021-02-23T12:49:17.000Z | #!/usr/local/bin/python
# encoding: utf-8
"""
*Documentation for qubits can be found here: https://github.com/thespacedoctor/qubits*
Usage:
qubits init <pathToWorkspace>
qubits run -s <pathToSettingsFile> -o <pathToOutputDirectory> -d <pathToSpectralDatabase>
COMMANDS
--------
init setup a qubits settings file and a test spectral database
run run the qubits simulation according to the setup given in the settings file
ARGUMENTS
---------
pathToSettingsFile path to the yaml settings file
pathToWorkspace path to a directory within which to setup an example qubit workspace
FLAGS
-----
-h, --help show this help message
-s, --settings provide a path to the settings file
-d, --database provide the path to the root directory containing your nested-folders and files spectral database
-o, --output provide a path to an output directory for the results of the simulations
"""
################# GLOBAL IMPORTS ####################
import sys
import os
os.environ['TERM'] = 'vt100'
import readline
import glob
import pickle
import yaml
from docopt import docopt
from fundamentals import tools, times
from subprocess import Popen, PIPE, STDOUT
from datetime import datetime, date, time
from . import commonutils as cu
from . import surveysim as ss
from . import datagenerator as dg
from . import results as r
import dryxPython.commonutils as dcu
from . import universe as u
import dryxPython.mmd.mmd as dmd
# from ..__init__ import *
def main(arguments=None):
"""
*The main function used when ``cl_utils.py`` is run as a single script from the cl, or when installed as a cl command*
"""
# setup the command-line util settings
su = tools(
arguments=arguments,
docString=__doc__,
logLevel="WARNING",
options_first=False,
projectName="qubits"
)
arguments, settings, log, dbConn = su.setup()
# unpack remaining cl arguments using `exec` to setup the variable names
# automatically
for arg, val in arguments.iteritems():
if arg[0] == "-":
varname = arg.replace("-", "") + "Flag"
else:
varname = arg.replace("<", "").replace(">", "")
if varname == "import":
varname = "iimport"
if isinstance(val, str) or isinstance(val, unicode):
exec(varname + " = '%s'" % (val,))
else:
exec(varname + " = %s" % (val,))
if arg == "--dbConn":
dbConn = val
log.debug('%s = %s' % (varname, val,))
## START LOGGING ##
startTime = times.get_now_sql_datetime()
log.info(
'--- STARTING TO RUN THE cl_utils.py AT %s' %
(startTime,))
if init:
from . import workspace
ws = workspace(
log=log,
pathToWorkspace=pathToWorkspace
)
ws.setup()
return
# IMPORT THE SIMULATION SETTINGS
(allSettings,
programSettings,
limitingMags,
sampleNumber,
peakMagnitudeDistributions,
explosionDaysFromSettings,
extendLightCurveTail,
relativeSNRates,
lowerRedshiftLimit,
upperRedshiftLimit,
redshiftResolution,
restFrameFilter,
kCorrectionTemporalResolution,
kCorPolyOrder,
kCorMinimumDataPoints,
extinctionType,
extinctionConstant,
hostExtinctionDistributions,
galacticExtinctionDistribution,
surveyCadenceSettings,
snLightCurves,
surveyArea,
CCSNRateFraction,
transientToCCSNRateFraction,
extraSurveyConstraints,
lightCurvePolyOrder,
logLevel) = cu.read_in_survey_parameters(
log,
pathToSettingsFile=pathToSettingsFile
)
logFilePath = pathToOutputDirectory + "/qubits.log"
del log
log = _set_up_command_line_tool(
level=str(logLevel),
logFilePath=logFilePath
)
# dbConn, log = cu.settings(
# pathToSettingsFile=pathToSettingsFile,
# dbConn=False,
# log=True
# )
## START LOGGING ##
startTime = dcu.get_now_sql_datetime()
log.info('--- STARTING TO RUN THE qubits AT %s' % (startTime,))
resultsDict = {}
pathToOutputPlotDirectory = pathToOutputDirectory + "/plots/"
dcu.dryx_mkdir(
log,
directoryPath=pathToOutputPlotDirectory
)
pathToResultsFolder = pathToOutputDirectory + "/results/"
dcu.dryx_mkdir(
log,
directoryPath=pathToResultsFolder
)
if not programSettings['Extract Lightcurves from Spectra'] and not programSettings['Generate KCorrection Database'] and not programSettings['Run the Simulation'] and not programSettings['Compile and Plot Results']:
print "All stages of the simulatation have been switched off. Please switch on at least one stage of the simulation under the 'Programming Settings' in the settings file `%(pathToSettingsFile)s`" % locals()
# GENERATE THE DATA FOR SIMULATIONS
if programSettings['Extract Lightcurves from Spectra']:
log.info('generating the Lightcurves')
dg.generate_model_lightcurves(
log=log,
pathToSpectralDatabase=pathToSpectralDatabase,
pathToOutputDirectory=pathToOutputDirectory,
pathToOutputPlotDirectory=pathToOutputPlotDirectory,
explosionDaysFromSettings=explosionDaysFromSettings,
extendLightCurveTail=extendLightCurveTail,
polyOrder=lightCurvePolyOrder
)
print "The lightcurve file can be found here: %(pathToOutputDirectory)stransient_light_curves.yaml" % locals()
print "The lightcurve plots can be found in %(pathToOutputPlotDirectory)s" % locals()
if programSettings['Generate KCorrection Database']:
log.info('generating the kcorrection data')
dg.generate_kcorrection_listing_database(
log,
pathToOutputDirectory=pathToOutputDirectory,
pathToSpectralDatabase=pathToSpectralDatabase,
restFrameFilter=restFrameFilter,
temporalResolution=kCorrectionTemporalResolution,
redshiftResolution=redshiftResolution,
redshiftLower=lowerRedshiftLimit,
redshiftUpper=upperRedshiftLimit + redshiftResolution)
log.info('generating the kcorrection polynomials')
dg.generate_kcorrection_polynomial_database(
log,
pathToOutputDirectory=pathToOutputDirectory,
restFrameFilter=restFrameFilter,
kCorPolyOrder=kCorPolyOrder, # ORDER OF THE POLYNOMIAL TO FIT
kCorMinimumDataPoints=kCorMinimumDataPoints,
redshiftResolution=redshiftResolution,
redshiftLower=lowerRedshiftLimit,
redshiftUpper=upperRedshiftLimit + redshiftResolution,
plot=programSettings['Generate KCorrection Plots'])
print "The k-correction database has been generated here: %(pathToOutputDirectory)sk_corrections" % locals()
if programSettings['Generate KCorrection Plots']:
print "The k-correction polynomial plots can also be found in %(pathToOutputDirectory)sk_corrections" % locals()
if programSettings['Run the Simulation']:
# CREATE THE OBSERVABLE UNIVERSE!
log.info('generating the redshift array')
redshiftArray = u.random_redshift_array(
log,
sampleNumber,
lowerRedshiftLimit,
upperRedshiftLimit,
redshiftResolution=redshiftResolution,
pathToOutputPlotDirectory=pathToOutputPlotDirectory,
plot=programSettings['Plot Simulation Helper Plots'])
resultsDict['Redshifts'] = redshiftArray.tolist()
log.info('generating the SN type array')
snTypesArray = u.random_sn_types_array(
log,
sampleNumber,
relativeSNRates,
pathToOutputPlotDirectory=pathToOutputPlotDirectory,
plot=programSettings['Plot Simulation Helper Plots'])
resultsDict['SN Types'] = snTypesArray.tolist()
log.info('generating peak magnitudes for the SNe')
peakMagnitudesArray = u.random_peak_magnitudes(
log,
peakMagnitudeDistributions,
snTypesArray,
plot=programSettings['Plot Simulation Helper Plots'])
log.info('generating the SN host extinctions array')
hostExtinctionArray = u.random_host_extinction(
log,
sampleNumber,
extinctionType,
extinctionConstant,
hostExtinctionDistributions,
plot=programSettings['Plot Simulation Helper Plots'])
log.info('generating the SN galactic extinctions array')
galacticExtinctionArray = u.random_galactic_extinction(
log,
sampleNumber,
extinctionType,
extinctionConstant,
galacticExtinctionDistribution,
plot=programSettings['Plot Simulation Helper Plots'])
log.info('generating the raw lightcurves for the SNe')
rawLightCurveDict = u.generate_numpy_polynomial_lightcurves(
log,
snLightCurves=snLightCurves,
pathToOutputDirectory=pathToOutputDirectory,
pathToOutputPlotDirectory=pathToOutputPlotDirectory,
plot=programSettings['Plot Simulation Helper Plots'])
log.info('generating the k-correction array for the SNe')
kCorrectionArray = u.build_kcorrection_array(
log,
redshiftArray,
snTypesArray,
snLightCurves,
pathToOutputDirectory=pathToOutputDirectory,
plot=programSettings['Plot Simulation Helper Plots'])
log.info('generating the observed lightcurves for the SNe')
observedFrameLightCurveInfo, peakAppMagList = u.convert_lightcurves_to_observered_frame(
log,
snLightCurves=snLightCurves,
rawLightCurveDict=rawLightCurveDict,
redshiftArray=redshiftArray,
snTypesArray=snTypesArray,
peakMagnitudesArray=peakMagnitudesArray,
kCorrectionArray=kCorrectionArray,
hostExtinctionArray=hostExtinctionArray,
galacticExtinctionArray=galacticExtinctionArray,
restFrameFilter=restFrameFilter,
pathToOutputDirectory=pathToOutputDirectory,
pathToOutputPlotDirectory=pathToOutputPlotDirectory,
polyOrder=lightCurvePolyOrder,
plot=programSettings['Plot Simulation Helper Plots'])
log.info('generating the survey observation cadence')
cadenceDictionary = ss.survey_cadence_arrays(
log,
surveyCadenceSettings,
pathToOutputDirectory=pathToOutputDirectory,
pathToOutputPlotDirectory=pathToOutputPlotDirectory,
plot=programSettings['Plot Simulation Helper Plots'])
log.info('determining if the SNe are discoverable by the survey')
discoverableList = ss.determine_if_sne_are_discoverable(
log,
redshiftArray=redshiftArray,
limitingMags=limitingMags,
observedFrameLightCurveInfo=observedFrameLightCurveInfo,
pathToOutputDirectory=pathToOutputDirectory,
pathToOutputPlotDirectory=pathToOutputPlotDirectory,
plot=programSettings['Plot Simulation Helper Plots'])
log.info(
'determining the day (if and when) each SN is first discoverable by the survey')
ripeDayList = ss.determine_when_sne_are_ripe_for_discovery(
log,
redshiftArray=redshiftArray,
limitingMags=limitingMags,
discoverableList=discoverableList,
observedFrameLightCurveInfo=observedFrameLightCurveInfo,
plot=programSettings['Plot Simulation Helper Plots'])
# log.info('determining the day when each SN is disappears fainter than the survey limiting mags')
# disappearDayList = determine_when_discovered_sne_disappear(
# log,
# redshiftArray=redshiftArray,
# limitingMags=limitingMags,
# ripeDayList=ripeDayList,
# observedFrameLightCurveInfo=observedFrameLightCurveInfo,
# plot=programSettings['Plot Simulation Helper Plots'])
log.info('determining if and when each SN is discovered by the survey')
lightCurveDiscoveryDayList, surveyDiscoveryDayList, snCampaignLengthList = ss.determine_if_sne_are_discovered(
log,
limitingMags=limitingMags,
ripeDayList=ripeDayList,
cadenceDictionary=cadenceDictionary,
observedFrameLightCurveInfo=observedFrameLightCurveInfo,
extraSurveyConstraints=extraSurveyConstraints,
plot=programSettings['Plot Simulation Helper Plots'])
resultsDict[
'Discoveries Relative to Peak Magnitudes'] = lightCurveDiscoveryDayList
resultsDict[
'Discoveries Relative to Survey Year'] = surveyDiscoveryDayList
resultsDict['Campaign Length'] = snCampaignLengthList
resultsDict['Cadence Dictionary'] = cadenceDictionary
resultsDict['Peak Apparent Magnitudes'] = peakAppMagList
now = datetime.now()
now = now.strftime("%Y%m%dt%H%M%S")
fileName = pathToOutputDirectory + \
"/simulation_results_%s.yaml" % (now,)
stream = open(fileName, 'w')
yamlContent = dict(allSettings.items() + resultsDict.items())
yaml.dump(yamlContent, stream, default_flow_style=False)
stream.close()
print "The simulation output file can be found here: %(fileName)s. Remember to update your settings file 'Simulation Results File Used for Plots' parameter with this filename before compiling the results." % locals()
if programSettings['Plot Simulation Helper Plots']:
print "The simulation helper-plots found in %(pathToOutputPlotDirectory)s" % locals()
# COMPILE AND PLOT THE RESULTS
if programSettings['Compile and Plot Results']:
pathToYamlFile = pathToOutputDirectory + "/" + \
programSettings['Simulation Results File Used for Plots']
result_log = r.log_the_survey_settings(log, pathToYamlFile)
snSurveyDiscoveryTimes, lightCurveDiscoveryTimes, snTypes, redshifts, cadenceDictionary, peakAppMagList, snCampaignLengthList = r.import_results(
log, pathToYamlFile)
snRatePlotLink, totalRate, tooFaintRate, shortCampaignRate = r.determine_sn_rate(
log,
lightCurveDiscoveryTimes,
snSurveyDiscoveryTimes,
redshifts,
surveyCadenceSettings=surveyCadenceSettings,
lowerRedshiftLimit=lowerRedshiftLimit,
upperRedshiftLimit=upperRedshiftLimit,
redshiftResolution=redshiftResolution,
surveyArea=surveyArea,
CCSNRateFraction=CCSNRateFraction,
transientToCCSNRateFraction=transientToCCSNRateFraction,
peakAppMagList=peakAppMagList,
snCampaignLengthList=snCampaignLengthList,
extraSurveyConstraints=extraSurveyConstraints,
pathToOutputPlotFolder=pathToOutputPlotDirectory)
result_log += """
## Results ##
This simulated survey discovered a total of **%s** transients per year. An extra **%s** transients were detected but deemed too faint to allow a positive transient identification, and a further **%s** transients were detected but an observational campaign of more than **%s** days could not be completed to ensure identification. See below for the various output plots.
""" % (totalRate, tooFaintRate, shortCampaignRate, extraSurveyConstraints["Observable for at least ? number of days"])
cadenceWheelLink = r.plot_cadence_wheel(
log,
cadenceDictionary,
pathToOutputPlotFolder=pathToOutputPlotDirectory)
result_log += """%s""" % (cadenceWheelLink,)
discoveryMapLink = r.plot_sn_discovery_map(
log,
snSurveyDiscoveryTimes,
peakAppMagList,
snCampaignLengthList,
redshifts,
extraSurveyConstraints,
pathToOutputPlotFolder=pathToOutputPlotDirectory)
result_log += """%s""" % (discoveryMapLink,)
ratioMapLink = r.plot_sn_discovery_ratio_map(
log,
snSurveyDiscoveryTimes,
redshifts,
peakAppMagList,
snCampaignLengthList,
extraSurveyConstraints,
pathToOutputPlotFolder=pathToOutputPlotDirectory)
result_log += """%s""" % (ratioMapLink,)
result_log += """%s""" % (snRatePlotLink,)
now = datetime.now()
now = now.strftime("%Y%m%dt%H%M%S")
mdLogPath = pathToResultsFolder + \
"simulation_result_log_%s.md" % (now,)
mdLog = open(mdLogPath, 'w')
mdLog.write(result_log)
mdLog.close()
dmd.convert_to_html(
log=log,
pathToMMDFile=mdLogPath,
css="amblin"
)
print "Results can be found here: %(pathToResultsFolder)s" % locals()
html = mdLogPath.replace(".md", ".html")
print "Open this file in your browser: %(html)s" % locals()
if "dbConn" in locals() and dbConn:
dbConn.commit()
dbConn.close()
## FINISH LOGGING ##
endTime = times.get_now_sql_datetime()
runningTime = times.calculate_time_difference(startTime, endTime)
log.info('-- FINISHED ATTEMPT TO RUN THE cl_utils.py AT %s (RUNTIME: %s) --' %
(endTime, runningTime, ))
return
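
def _dump_results_py3_sketch(allSettings, resultsDict, pathToOutputDirectory):
    """
    A minimal sketch, not part of the original module, of the results
    serialization performed in the simulation branch above, written so it
    also runs under Python 3: `file()` and `dict.items() + dict.items()`
    are py2-only, so this uses `open()` and `dict(a, **b)` instead. The
    argument names mirror the variables used above; the helper itself is
    hypothetical.
    """
    from datetime import datetime
    import yaml
    now = datetime.now().strftime("%Y%m%dt%H%M%S")
    fileName = pathToOutputDirectory + "/simulation_results_%s.yaml" % (now,)
    # merge settings and results into a single YAML document
    yamlContent = dict(allSettings, **resultsDict)
    with open(fileName, 'w') as stream:
        yaml.dump(yamlContent, stream, default_flow_style=False)
    return fileName
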
def _set_up_command_line_tool(
level="DEBUG",
logFilePath="/tmp/tmp.log"):
import logging
import logging.config
import yaml
logging.shutdown()
reload(logging)
loggerConfig = """
version: 1
formatters:
file_style:
format: '* %(asctime)s - %(name)s - %(levelname)s (%(filename)s > %(funcName)s > %(lineno)d) - %(message)s '
datefmt: '%Y/%m/%d %H:%M:%S'
console_style:
format: '* %(asctime)s - %(levelname)s: %(filename)s:%(funcName)s:%(lineno)d > %(message)s'
datefmt: '%H:%M:%S'
html_style:
format: '<div id="row" class="%(levelname)s"><span class="date">%(asctime)s</span> <span class="label">file:</span><span class="filename">%(filename)s</span> <span class="label">method:</span><span class="funcName">%(funcName)s</span> <span class="label">line#:</span><span class="lineno">%(lineno)d</span> <span class="pathname">%(pathname)s</span> <div class="right"><span class="message">%(message)s</span><span class="levelname">%(levelname)s</span></div></div>'
datefmt: '%Y-%m-%d <span class= "time">%H:%M <span class= "seconds">%Ss</span></span>'
handlers:
console:
class: logging.StreamHandler
level: """ + level + """
formatter: console_style
stream: ext://sys.stdout
development_logs:
class: logging.FileHandler
level: """ + level + """
formatter: file_style
filename: """ + logFilePath + """
mode: w
root:
level: DEBUG
handlers: [console,development_logs]"""
logging.config.dictConfig(yaml.safe_load(loggerConfig))
log = logging.getLogger(__name__)
return log
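
# Example (assumed) usage of the logger factory above; the level and log-file
# path are illustrative values, not ones required by this module:
#
#     log = _set_up_command_line_tool(level="INFO", logFilePath="/tmp/sim.log")
#     log.info('logger configured')
#
# The string-built YAML config is parsed with yaml.safe_load; on modern
# PyYAML, calling yaml.load without an explicit Loader is deprecated.
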
if __name__ == '__main__':
main()
| 40.46473 | 485 | 0.656429 | 1,718 | 19,504 | 7.37078 | 0.259604 | 0.010503 | 0.024876 | 0.038695 | 0.265182 | 0.184632 | 0.152649 | 0.129985 | 0.118297 | 0.101714 | 0 | 0.000414 | 0.257742 | 19,504 | 481 | 486 | 40.548857 | 0.874283 | 0.045324 | 0 | 0.366755 | 0 | 0.01847 | 0.258761 | 0.044778 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.063325 | null | null | 0.023747 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
00603629eb004f9787cdbd9d31855418006e15bc | 2,734 | py | Python | examples/projects/LGL/LGL-simulation.py | JamesPino/booleannet | dc7324ca6756a80de74551bb24bb6ce23b1a762d | [
"MIT"
] | null | null | null | examples/projects/LGL/LGL-simulation.py | JamesPino/booleannet | dc7324ca6756a80de74551bb24bb6ce23b1a762d | [
"MIT"
] | null | null | null | examples/projects/LGL/LGL-simulation.py | JamesPino/booleannet | dc7324ca6756a80de74551bb24bb6ce23b1a762d | [
"MIT"
] | 1 | 2019-03-13T14:52:51.000Z | 2019-03-13T14:52:51.000Z | """
LGL simulator
It is also a demonstration on how the collector works
"""
import boolean2
from boolean2 import Model, util
from random import choice
# occasionally randomized nodes
TARGETS = set( "PDGF IL15".split() )
def new_getvalue( state, name, p):
"""
Called every time a node value is used in an expression.
It will override the value for the current step only.
Returns random values for the node states
"""
global TARGETS
value = util.default_get_value( state, name, p )
if name in TARGETS:
# pick at random from True, False and original value
return choice( [True, False, value] )
else:
return value
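
def _override_fraction(trials=3000, value=True):
    """
    Standalone illustration (a hypothetical helper, not used by the script)
    of the randomization in new_getvalue: for a boolean `value`, one of the
    three options in choice([True, False, value]) duplicates it, so the
    override differs from the stored value on roughly 1/3 of calls.
    """
    flips = sum(1 for _ in range(trials)
                if choice([True, False, value]) != value)
    return flips / float(trials)  # expected to be close to 1/3
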
def run( text, nodes, repeat, steps ):
"""
Runs the simulation and collects the nodes into a collector,
a convenience class that can average the values that it collects.
"""
coll = util.Collector()
for i in xrange( repeat ):
engine = Model( mode='async', text=text )
engine.RULE_GETVALUE = new_getvalue
# minimalist initial conditions, missing nodes set to false
engine.initialize( missing=util.false )
engine.iterate( steps=steps)
coll.collect( states=engine.states, nodes=nodes )
print '- completed'
avgs = coll.get_averages( normalize=True )
return avgs
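
def _load_lgl_run(fname='LGL-run.bin'):
    """
    Hypothetical companion helper, not part of the original script: boolean2's
    util module pairs bsave with a bload pickle reader (an assumption about
    the library API), so the binary written at the end of this script can
    presumably be reloaded like this for later plotting.
    """
    return util.bload(fname)
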
if __name__ == '__main__':
# read in the text
text = open('LGL.txt').read()
# the nodes of interest that are collected over the run
# NODES = 'Apoptosis STAT3 FasL Ras'.split()
# this collects the state of all nodes
NODES = boolean2.all_nodes(text)
#
# raise this for better curves (will take about 2 seconds per repeat)
# plots were made for REPEAT = 1000, STEPS=150
#
REPEAT = 10
STEPS = 50
data = []
print '- starting simulation with REPEAT=%s, STEPS=%s' % (REPEAT, STEPS)
# a single overexpressed node
mtext = boolean2.modify_states(text=text, turnon=['Stimuli'])
avgs = run( text=mtext, repeat=REPEAT, nodes=NODES, steps=STEPS)
data.append( avgs )
# multiple overexpressed nodes
mtext = boolean2.modify_states(text=text, turnon=['Stimuli', 'Mcl1'])
avgs = run( text=mtext, repeat=REPEAT, nodes=NODES, steps=STEPS)
data.append( avgs )
mtext = boolean2.modify_states(text=text, turnon=['Stimuli', 'sFas'])
avgs = run( text=mtext, repeat=REPEAT, nodes=NODES, steps=STEPS)
data.append( avgs )
mtext = boolean2.modify_states(text=text, turnon=['Stimuli', 'Mcl1', 'sFas'])
avgs = run( text=mtext, repeat=REPEAT, nodes=NODES, steps=STEPS)
data.append( avgs )
fname = 'LGL-run.bin'
util.bsave( data, fname=fname )
print '- data saved into %s' % fname | 30.377778 | 81 | 0.653621 | 361 | 2,734 | 4.897507 | 0.401662 | 0.027149 | 0.042986 | 0.056561 | 0.253394 | 0.253394 | 0.253394 | 0.253394 | 0.227376 | 0.196833 | 0 | 0.011605 | 0.243599 | 2,734 | 90 | 82 | 30.377778 | 0.843327 | 0.168252 | 0 | 0.181818 | 0 | 0 | 0.086559 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.068182 | null | null | 0.068182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0066a97d1120a09b9055e61b5c4459d6c38fa43a | 450 | py | Python | book/migrations/0011_auto_20170603_1526.py | pyprism/Hiren-Mail-Notify | 324583a2edd25da5d2077914a79da291e00c743e | [
"MIT"
] | null | null | null | book/migrations/0011_auto_20170603_1526.py | pyprism/Hiren-Mail-Notify | 324583a2edd25da5d2077914a79da291e00c743e | [
"MIT"
] | 144 | 2015-10-18T17:19:03.000Z | 2021-06-27T07:05:56.000Z | book/migrations/0011_auto_20170603_1526.py | pyprism/Hiren-Mail-Notify | 324583a2edd25da5d2077914a79da291e00c743e | [
"MIT"
] | 1 | 2015-10-18T17:04:39.000Z | 2015-10-18T17:04:39.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2017-06-03 09:26
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('book', '0010_auto_20170603_1441'),
]
operations = [
migrations.AlterField(
model_name='book',
name='note',
field=models.TextField(blank=True, null=True),
),
]
| 21.428571 | 58 | 0.611111 | 50 | 450 | 5.32 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097264 | 0.268889 | 450 | 20 | 59 | 22.5 | 0.711246 | 0.146667 | 0 | 0 | 1 | 0 | 0.091864 | 0.060367 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |