hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f4e4cc8986a9527befc2475ef90d7896d8a24c55 | 5,376 | py | Python | silx/opencl/test/test_medfilt.py | vallsv/silx | 834bfe9272af99096faa360e1ad96291bf46a2ac | [
"CC0-1.0",
"MIT"
] | 1 | 2017-08-03T15:51:42.000Z | 2017-08-03T15:51:42.000Z | silx/opencl/test/test_medfilt.py | vallsv/silx | 834bfe9272af99096faa360e1ad96291bf46a2ac | [
"CC0-1.0",
"MIT"
] | 7 | 2016-10-19T09:27:26.000Z | 2020-01-24T13:26:56.000Z | silx/opencl/test/test_medfilt.py | vallsv/silx | 834bfe9272af99096faa360e1ad96291bf46a2ac | [
"CC0-1.0",
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Project: Median filter of images + OpenCL
# https://github.com/silx-kit/silx
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation
# files (the "Software"), to deal in the Software without
# restriction, including without limitation the rights to use,
# copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following
# conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
"""
Simple test of the median filter
"""
from __future__ import division, print_function
__authors__ = ["Jérôme Kieffer"]
__contact__ = "jerome.kieffer@esrf.eu"
__license__ = "MIT"
__copyright__ = "2013-2017 European Synchrotron Radiation Facility, Grenoble, France"
__date__ = "15/03/2017"
import sys
import time
import logging
import numpy
import unittest
from collections import namedtuple
try:
import mako
except ImportError:
mako = None
from ..common import ocl
if ocl:
import pyopencl
import pyopencl.array
from .. import medfilt
logger = logging.getLogger(__name__)
Result = namedtuple("Result", ["size", "error", "sp_time", "oc_time"])
try:
from scipy.misc import ascent
except ImportError:
def ascent():
"""Dummy image from random data"""
return numpy.random.random((512, 512))
try:
from scipy.ndimage import filters
median_filter = filters.median_filter
HAS_SCIPY = True
except ImportError:
HAS_SCIPY = False
from silx.math import medfilt2d as median_filter
@unittest.skipUnless(ocl and mako, "PyOpenCl is missing")
class TestMedianFilter(unittest.TestCase):
def setUp(self):
if ocl is None:
return
self.data = ascent().astype(numpy.float32)
self.medianfilter = medfilt.MedianFilter2D(self.data.shape, devicetype="gpu")
def tearDown(self):
self.data = None
self.medianfilter = None
def measure(self, size):
"Common measurement of accuracy and timings"
t0 = time.time()
if HAS_SCIPY:
ref = median_filter(self.data, size, mode="nearest")
else:
ref = median_filter(self.data, size)
t1 = time.time()
try:
got = self.medianfilter.medfilt2d(self.data, size)
except RuntimeError as msg:
logger.error(msg)
return
t2 = time.time()
delta = abs(got - ref).max()
return Result(size, delta, t1 - t0, t2 - t1)
@unittest.skipUnless(ocl and mako, "pyopencl is missing")
def test_medfilt(self):
"""
tests the median filter kernel
"""
r = self.measure(size=11)
if r is None:
logger.info("test_medfilt: size: %s: skipped", 11)
else:
logger.info("test_medfilt: size: %s error %s, t_ref: %.3fs, t_ocl: %.3fs" % r)
self.assertEqual(r.error, 0, 'Results differ from the reference implementation')
def benchmark(self, limit=36):
"Run some benchmarking"
try:
import PyQt5
from ...gui.matplotlib import pylab
from ...gui.utils import update_fig
except ImportError:
pylab = None
def update_fig(*ag, **kwarg):
pass
fig = pylab.figure()
fig.suptitle("Median filter of an image 512x512")
sp = fig.add_subplot(1, 1, 1)
sp.set_title(self.medianfilter.ctx.devices[0].name)
sp.set_xlabel("Window width & height")
sp.set_ylabel("Execution time (s)")
sp.set_xlim(2, limit + 1)
sp.set_ylim(0, 4)
data_size = []
data_scipy = []
data_opencl = []
plot_sp = sp.plot(data_size, data_scipy, "-or", label="scipy")[0]
plot_opencl = sp.plot(data_size, data_opencl, "-ob", label="opencl")[0]
sp.legend(loc=2)
fig.show()
update_fig(fig)
for s in range(3, limit, 2):
r = self.measure(s)
print(r)
if r.error == 0:
data_size.append(s)
data_scipy.append(r.sp_time)
data_opencl.append(r.oc_time)
plot_sp.set_data(data_size, data_scipy)
plot_opencl.set_data(data_size, data_opencl)
update_fig(fig)
fig.show()
if sys.version_info[0] < 3:
raw_input()
else:
input()
def suite():
testSuite = unittest.TestSuite()
testSuite.addTest(TestMedianFilter("test_medfilt"))
return testSuite
def benchmark():
testSuite = unittest.TestSuite()
testSuite.addTest(TestMedianFilter("benchmark"))
return testSuite
if __name__ == '__main__':
unittest.main(defaultTest="suite")
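The `measure` method above compares the OpenCL kernel against scipy's `median_filter` with `mode="nearest"`. As a rough illustration of what that reference computes, here is a minimal NumPy-only 2-D median filter (a hypothetical helper, not part of silx) using edge padding, which mimics the "nearest" boundary mode:

```python
import numpy

def naive_medfilt2d(img, size):
    """Brute-force 2-D median filter with edge ("nearest") padding."""
    half = size // 2
    padded = numpy.pad(img, half, mode="edge")
    out = numpy.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # median of the size x size window centred on (i, j)
            out[i, j] = numpy.median(padded[i:i + size, j:j + size])
    return out
```

This brute-force version is far too slow for real images, which is exactly why the OpenCL implementation under test exists.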
| 30.545455 | 90 | 0.635603 | 692 | 5,376 | 4.810694 | 0.398844 | 0.032442 | 0.018023 | 0.01532 | 0.115951 | 0.093722 | 0.027035 | 0.027035 | 0 | 0 | 0 | 0.016435 | 0.264323 | 5,376 | 175 | 91 | 30.72 | 0.825284 | 0.243862 | 0 | 0.175 | 0 | 0 | 0.119931 | 0.005407 | 0 | 0 | 0 | 0 | 0.008333 | 1 | 0.075 | false | 0.008333 | 0.158333 | 0 | 0.291667 | 0.016667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f4e4e8adc9fc784da3b4a6c6cb19a8207a9fb903 | 25,864 | py | Python | spotpy/algorithms/demcz.py | xuexianwu/spotpy | f3e1e2cdc95de46272026f6fb51f8562306d8e44 | [
"MIT"
] | 1 | 2016-04-20T17:03:41.000Z | 2016-04-20T17:03:41.000Z | spotpy/algorithms/demcz.py | xuexianwu/spotpy | f3e1e2cdc95de46272026f6fb51f8562306d8e44 | [
"MIT"
] | null | null | null | spotpy/algorithms/demcz.py | xuexianwu/spotpy | f3e1e2cdc95de46272026f6fb51f8562306d8e44 | [
"MIT"
] | 1 | 2019-02-02T08:03:59.000Z | 2019-02-02T08:03:59.000Z | # -*- coding: utf-8 -*-
'''
Copyright (c) 2015 by Tobias Houska
This file is part of Statistical Parameter Estimation Tool (SPOTPY).
:author: Tobias Houska
Implements a variant of DE-MC_Z. The sampler is a multi-chain sampler that
proposes states based on the differences between random past states.
The sampler does not use the snooker updater, but does use the crossover
probability distribution. Convergence assessment is based on a
naive implementation of the Gelman-Rubin convergence statistics.
The following papers form the basis for this algorithm:
Provides the basis for the DE-MC_Z extension (also see second paper).
C.J.F. ter Braak, and J.A. Vrugt, Differential evolution Markov chain with
snooker updater and fewer chains, Statistics and Computing, 18(4),
435-446, doi:10.1007/s11222-008-9104-9, 2008.
Introduces the original DREAM idea:
J.A. Vrugt, C.J.F. ter Braak, C.G.H. Diks, D. Higdon, B.A. Robinson, and
J.M. Hyman, Accelerating Markov chain Monte Carlo simulation by
differential evolution with self-adaptive randomized subspace sampling,
International Journal of Nonlinear Sciences and Numerical
Simulation, 10(3), 273-290, 2009.
This paper uses DREAM in an application
J.A. Vrugt, C.J.F. ter Braak, M.P. Clark, J.M. Hyman, and B.A. Robinson,
Treatment of input uncertainty in hydrologic modeling: Doing hydrology
backward with Markov chain Monte Carlo simulation, Water Resources
Research, 44, W00B09, doi:10.1029/2007WR006720, 2008.
Based on multichain_mcmc 0.3
Copyright (c) 2010 John Salvatier.
All rights reserved.
Redistribution and use in source and binary forms are permitted
provided that the above copyright notice and this paragraph are
duplicated in all such forms and that any documentation,
advertising materials, and other materials related to such
distribution and use acknowledge that the software was developed
by the <organization>. The name of the
<organization> may not be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.'''
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from . import _algorithm
from spotpy import objectivefunctions
import numpy as np
import time
class demcz(_algorithm):
'''
Implements the DE-MC_Z algorithm from ter Braak and Vrugt (2008).
Input
----------
spot_setup: class
model: function
Should be callable with a parameter combination of the parameter-function
and return an list of simulation results (as long as evaluation list)
parameter: function
When called, it should return a random parameter combination. Which can
be e.g. uniform or Gaussian
objectivefunction: function
Should return the objectivefunction for a given list of a model simulation and
observation.
evaluation: function
Should return the true values as return by the model.
dbname: str
* Name of the database where parameter, objectivefunction value and simulation results will be saved.
dbformat: str
* ram: fast suited for short sampling time. no file will be created and results are saved in an array.
* csv: A csv file will be created, which you can import afterwards.
parallel: str
* seq: Sequential sampling (default): Normal iterations on one core of your cpu.
* mpc: Multi processing: Iterations on all available cores on your cpu (recommended for windows os).
* mpi: Message Passing Interface: Parallel computing on cluster pcs (recommended for unix os).
save_sim: boolean
*True: Simulation results will be saved
*False: Simulation results will not be saved
'''
def __init__(self, spot_setup, dbname=None, dbformat=None, parallel='seq',save_sim=True):
_algorithm.__init__(self, spot_setup, dbname=dbname, dbformat=dbformat, parallel=parallel,save_sim=save_sim)
def find_min_max(self):
randompar=self.parameter()['random']
for i in range(1000):
randompar=np.column_stack((randompar,self.parameter()['random']))
return np.amin(randompar,axis=1),np.amax(randompar,axis=1)
def check_par_validity(self,par):
if len(par) == len(self.min_bound) and len(par) == len(self.max_bound):
for i in range(len(par)):
if par[i]<self.min_bound[i]:
par[i]=self.min_bound[i]
if par[i]>self.max_bound[i]:
par[i]=self.max_bound[i]
else:
print('ERROR: Bounds do not have the same lengths as the parameter array')
return par
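`check_par_validity` clamps every parameter into its `[min_bound, max_bound]` interval. Assuming NumPy arrays throughout, the same behaviour can be sketched with a single `np.clip` call (an equivalent stand-alone helper, not the class method itself):

```python
import numpy as np

def clip_to_bounds(par, min_bound, max_bound):
    """Clamp every parameter into [min_bound, max_bound] element-wise."""
    return np.clip(par, min_bound, max_bound)
```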
def sample(self, repetitions, nChains = 5, burnIn = 100, thin = 1, convergenceCriteria = .8,variables_of_interest = None, DEpairs = 2, adaptationRate ='auto', eps = 5e-2, mConvergence = True, mAccept = True):
"""
Samples from a posterior distribution using DREAM.
Parameters
----------
repetitions : int
number of draws from the sample distribution to be returned
nChains : int
number of different chains to employ
burnInSize : int
number of iterations (meaning draws / nChains) to do before doing actual sampling.
DEpairs : int
number of pairs of chains to base movements off of
eps : float
used in jittering the chains
Returns
-------
None : None
sample sets
self.history which contains the combined draws for all the chains
self.iter which is the total number of iterations
self.acceptRatio which is the acceptance ratio
self.burnIn which is the number of burn in iterations done
self.R which is the gelman rubin convergence diagnostic for each dimension
"""
starttime=time.time()
intervaltime=starttime
self.min_bound, self.max_bound = self.find_min_max()
repetitions=int(repetitions/nChains)
ndraw_max = repetitions*nChains
maxChainDraws = int(ndraw_max/nChains)
dimensions=len(self.parameter()['random'])
history = _SimulationHistory(maxChainDraws, nChains, dimensions)
#minbound,maxbound=self.find_min_max()
if variables_of_interest is not None:
slices = []
for var in variables_of_interest:
slices.append(self.slices[var])
else:
slices = [slice(None,None)]
history.add_group('interest', slices)
# initialize the temporary storage vectors
currentVectors = np.zeros((nChains, dimensions))
currentLogPs = np.zeros(nChains)
self.accepts_ratio = 0
# make a list of starting chains that at least spans the dimension space
# in this case it will be of size 2*dim
nSeedChains = int(np.ceil(dimensions* 2/nChains) * nChains)
nSeedIterations = int(nSeedChains/nChains)
if nSeedIterations <=2:
nSeedIterations=2
burnInpar=[]
for i in range(nSeedIterations):
vectors = np.zeros((nChains, dimensions))
for j in range(nChains):
randompar = self.parameter()['random']#+np.random.uniform(low=-1,high=1)
vectors[j] = randompar
#print randompar
burnInpar.append(vectors)
history.record(vectors,0,0)
#use the last nChains chains as the actual chains to track
#add the starting positions to the history
for i in range(len(burnInpar)):
self._logPs=[]
param_generator = ((rep, list(burnInpar[i][rep])) for rep in range(int(nChains)))
for rep,vector,simulations in self.repeat(param_generator):
likelist=objectivefunctions.log_p(simulations,self.evaluation)
simulationlist=simulations
self._logPs.append(likelist)
#Save everything in the database
self.datawriter.save(likelist,vector,simulations=simulations)
#print len(self._logPs)
#print len(burnInpar[i])
history.record(burnInpar[i],self._logPs,1)
# for chain in range(nChains):
# simulationlist=self.model(vectors[chain])#THIS WILL WORK ONLY FOR MULTIPLE CHAINS
# likelist=objectivefunctions.log_p(simulationlist,self.evaluation)
# self._logPs.append(likelist)
# self.datawriter.save(likelist,list(vectors[chain]),simulations=list(simulationlist),chains=chain)
# history.record(vectors,self._logPs,1)
gamma = None
# initialize the convergence diagnostic object
grConvergence = _GRConvergence()
covConvergence = _CovarianceConvergence()
# get the starting log objectivefunction and position for each of the chains
currentVectors = vectors
currentLogPs = self._logPs[-1]
#2)now loop through and sample
iter = 0
accepts_ratio_weighting = 1 - np.exp(-1.0/30)
lastRecalculation = 0
# continue sampling if:
# 1) we have not drawn enough samples to satisfy the minimum number of iterations
# 2) or any of the dimensions have not converged
# 3) and we have not done more than the maximum number of iterations
while iter<maxChainDraws:
if iter == burnIn:
history.start_sampling()
# every 5th iteration allow a big jump
if np.random.randint(5) == 0.0:
gamma = np.array([1.0])
else:
gamma = np.array([2.38 / np.sqrt( 2 * DEpairs * dimensions)])
if iter >=burnIn:
proposalVectors = _dream_proposals(currentVectors, history,dimensions, nChains, DEpairs, gamma, .05, eps)
for i in range(len(proposalVectors)):
proposalVectors[i]=self.check_par_validity(proposalVectors[i])
#print proposalVectors
else:
proposalVectors=[]
for i in range(nChains):
proposalVectors.append(self.parameter()['random'])
proposalVectors[i]=self.check_par_validity(proposalVectors[i])
#if self.bounds_ok(minbound,maxbound,proposalVectors,nChains):
proposalLogPs=[]
old_simulationlist=simulationlist
old_likelist=likelist
new_simulationlist=[]
new_likelist=[]
param_generator = ((rep, list(proposalVectors[rep])) for rep in range(int(nChains)))
for rep,vector,simulations in self.repeat(param_generator):
new_simulationlist.append(simulations)
like=objectivefunctions.log_p(simulations,self.evaluation)
self._logPs.append(like)
new_likelist.append(like)
proposalLogPs.append(like)
# for i in range(nChains):
# simulations=self.model(proposalVectors[i])#THIS WILL WORK ONLY FOR MULTIPLE CHAINS
# new_simulationlist.append(simulations)
# like=objectivefunctions.log_p(simulations,self.evaluation)
# new_likelist.append(like)
# proposalLogPs.append(like)
#apply the metrop decision to decide whether to accept or reject each chain proposal
decisions, acceptance = self._metropolis_hastings(currentLogPs,proposalLogPs, nChains)
self._update_accepts_ratio(accepts_ratio_weighting, acceptance)
# if mAccept and iter % 20 == 0:
# print self.accepts_ratio
currentVectors = np.choose(decisions[:,np.newaxis], (currentVectors, proposalVectors))
currentLogPs = np.choose(decisions, (currentLogPs, proposalLogPs))
simulationlist = list(np.choose(decisions[:,np.newaxis], (new_simulationlist, old_simulationlist)))
likelist = list(np.choose(decisions[:,np.newaxis], (new_likelist, old_likelist)))
# we only want to recalculate convergence criteria when we are past the burn in period
if iter % thin == 0:
historyStartMovementRate = adaptationRate
#try to adapt more when the acceptance rate is low and less when it is high
if adaptationRate == 'auto':
historyStartMovementRate = min((.234/self.accepts_ratio)*.5, .95)
history.record(currentVectors, currentLogPs, historyStartMovementRate,grConvergence=grConvergence.R)
for chain in range(nChains):
if not (sum(simulationlist[chain]) == -np.Inf or sum(simulationlist[chain]) == np.Inf):
self.datawriter.save(likelist[0][chain],list(currentVectors[chain]),simulations=list(simulationlist[chain]),chains=chain)
if history.nsamples > 0 and iter > lastRecalculation * 1.1 and history.nsequence_histories > dimensions:
lastRecalculation = iter
grConvergence.update(history)
covConvergence.update(history,'all')
covConvergence.update(history,'interest')
if all(grConvergence.R<convergenceCriteria):
iter =maxChainDraws
print('All chains fulfill the convergence criteria. Sampling stopped.')
iter += 1
# else:
# print 'A proposal vector was ignored'
#Progress bar
acttime=time.time()
#Refresh progressbar every second
if acttime-intervaltime>=2:
text=str(iter)+' of '+str(repetitions)
print(text)
intervaltime=time.time()
#3) finalize
# only make the second half of draws available because that's the only part used by the convergence diagnostic
self.history = history.samples
self.histo=history
self.iter = iter
self.burnIn = burnIn
self.R = grConvergence.R
text='Gelman Rubin R='+str(self.R)
print(text)
self.repeat.terminate()
try:
self.datawriter.finalize()
except AttributeError: #Happens if no database was assigned
pass
text='Duration:'+str(round((acttime-starttime),2))+' s'
print(text)
def _update_accepts_ratio(self, weighting, acceptances):
self.accepts_ratio = weighting * np.mean(acceptances) + (1-weighting) * self.accepts_ratio
def _metropolis_hastings(self,currentLogPs, proposalLogPs, nChains, jumpLogP = 0, reverseJumpLogP = 0):
"""
makes a decision about whether the proposed vector should be accepted
"""
logMetropHastRatio = (np.array(proposalLogPs) - np.array(currentLogPs)) #+ (reverseJumpLogP - jumpLogP)
decision = np.log(np.random.uniform(size = nChains)) < logMetropHastRatio
return decision, np.minimum(1, np.exp(logMetropHastRatio))
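`_metropolis_hastings` accepts each chain's proposal with probability min(1, exp(logP_new - logP_old)). A self-contained sketch of the same vectorized rule (hypothetical helper name):

```python
import numpy as np

def metropolis_decide(current_logps, proposal_logps, rng=np.random):
    """Vectorized accept/reject: accept when log(u) < logP_new - logP_old."""
    ratio = np.asarray(proposal_logps) - np.asarray(current_logps)
    decision = np.log(rng.uniform(size=len(ratio))) < ratio
    # also return the acceptance probabilities min(1, exp(ratio))
    return decision, np.minimum(1.0, np.exp(ratio))
```

Note that proposals with a higher log-posterior are always accepted, since log(u) is never positive.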
class _SimulationHistory(object):
group_indicies = {'all' : slice(None, None)}
def __init__(self, maxChainDraws, nChains, dimensions):
self._combined_history = np.zeros((nChains * maxChainDraws,dimensions ))
self._sequence_histories = np.zeros((nChains, dimensions, maxChainDraws))
self._logPSequences = np.zeros((nChains, maxChainDraws))
self._logPHistory = np.zeros(nChains * maxChainDraws)
self.r_hat = []  # note: [] * dimensions evaluates to [], so a plain list suffices
self._sampling_start = np.Inf
self._nChains = nChains
self._dimensions = dimensions
self.relevantHistoryStart = 0
self.relevantHistoryEnd = 0
def add_group(self, name, slices):
indexes = range(self._dimensions)
indicies = []
for s in slices:
indicies.extend(indexes[s])
self.group_indicies[name] = np.array(indicies)
def record(self, vectors, logPs ,increment,grConvergence=None):
if len(vectors.shape) < 3:
self._record(vectors, logPs, increment,grConvergence)
else:
for i in range(vectors.shape[2]):
self._record(vectors[:,:,i], logPs[:, i], increment,grConvergence)
def _record(self, vectors, logPs, increment,grConvergence):
try:
self._sequence_histories[:,:,self.relevantHistoryEnd] = vectors
self._combined_history[(self.relevantHistoryEnd *self._nChains) :(self.relevantHistoryEnd *self._nChains + self._nChains),:] = vectors
self._logPSequences[:, self.relevantHistoryEnd] = logPs
self._logPHistory[(self.relevantHistoryEnd *self._nChains) :(self.relevantHistoryEnd *self._nChains + self._nChains)] = logPs
self.relevantHistoryEnd += 1
if np.isnan(increment):
self.relevantHistoryStart +=0
else:
self.relevantHistoryStart += increment
self.r_hat.append(grConvergence)
except IndexError:
print('index error')
self.relevantHistoryEnd += 1
self.relevantHistoryStart += increment
pass
def start_sampling(self):
self._sampling_start = self.relevantHistoryEnd
@property
def sequence_histories(self):
return self.group_sequence_histories('all')
def group_sequence_histories(self, name):
return self._sequence_histories[:, self.group_indicies[name], int(np.ceil(self.relevantHistoryStart)):self.relevantHistoryEnd]
@property
def nsequence_histories(self):
return self.sequence_histories.shape[2]
@property
def combined_history(self):
return self.group_combined_history('all')
def group_combined_history(self, name):
#print self._combined_history
#print self.relevantHistoryStart
return self._combined_history[(int(np.ceil(self.relevantHistoryStart)) * self._nChains):(self.relevantHistoryEnd * self._nChains), self.group_indicies[name]]
@property
def ncombined_history(self):
return self.combined_history.shape[0]
@property
def samples(self):
return self.group_samples('all')
def group_samples(self, name):
if self._sampling_start < np.Inf:
start = int(max(np.ceil(self.relevantHistoryStart), self._sampling_start) * self._nChains)
end = (self.relevantHistoryEnd * self._nChains)
else:
start=0
end=0
return self._combined_history[start:end,self.group_indicies[name]]
@property
def nsamples(self):
return self.samples.shape[0]
@property
def combined_history_logps(self):
return self._logPHistory[(int(np.ceil(self.relevantHistoryStart)) * self._nChains):(self.relevantHistoryEnd * self._nChains)]
def _random_no_replace(sampleSize, populationSize, numSamples):
samples = np.zeros((numSamples, sampleSize),dtype=int)
# Use Knuth's variable names
n = sampleSize
N = populationSize
i = 0
t = 0 # total input records dealt with
m = 0 # number of items selected so far
while i < numSamples:
t = 0
m = 0
while m < n :
u = np.random.uniform() # call a uniform(0,1) random number generator
if (N - t)*u >= n - m :
t += 1
else:
samples[i,m] = t
t += 1
m += 1
i += 1
return samples
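`_random_no_replace` is Knuth's selection-sampling Algorithm S: each remaining record is kept with probability (n-m)/(N-t), which yields exactly `sampleSize` distinct, strictly increasing indices per row. A compact re-statement of the same loop (sketch):

```python
import numpy as np

def sample_no_replace(sample_size, population_size, num_samples):
    """Knuth Algorithm S: rows of sorted, distinct indices in [0, population_size)."""
    samples = np.zeros((num_samples, sample_size), dtype=int)
    for i in range(num_samples):
        t = m = 0  # records seen so far / records kept so far
        while m < sample_size:
            if (population_size - t) * np.random.uniform() >= sample_size - m:
                t += 1          # skip this record
            else:
                samples[i, m] = t  # keep this record
                t += 1
                m += 1
    return samples
```

Because each row is produced by a single left-to-right pass over the population, the selected indices come out already sorted, a property `_dream_proposals` relies on when it shifts indices past the current chain.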
class _CovarianceConvergence:
relativeVariances = {}
def update(self, history, group):
relevantHistory = history.group_combined_history(group)
self.relativeVariances[group] = self.rv(relevantHistory)
@staticmethod
def rv(relevantHistory):
end = relevantHistory.shape[0]
midpoint = int(np.floor(end / 2))
covariance1 = np.cov(relevantHistory[0:midpoint, :].transpose())
covariance2 = np.cov(relevantHistory[midpoint:end, :].transpose())
_eigenvalues1, _eigenvectors1 = _eigen(covariance1)
basis1 = (np.sqrt(_eigenvalues1)[np.newaxis,:] * _eigenvectors1)
_eigenvalues2, _eigenvectors2 = _eigen(covariance2)
basis2 = (np.sqrt(_eigenvalues2)[np.newaxis,:] * _eigenvectors2)
# project the second basis onto the first basis
try:
projection = np.dot(np.linalg.inv(basis1), basis2)
except np.linalg.linalg.LinAlgError:
projection=(np.array(basis1)*np.nan)
print('Exception happened: basis1 could not be inverted')
# find the relative size in each of the basis1 directions
return np.log(np.sum(projection**2, axis = 0)**.5)
def _eigen(a, n = -1):
if len(a.shape) == 0: # if we got a 0-dimensional array we have to turn it back into a 2 dimensional one
a = a[np.newaxis,np.newaxis]
if n == -1:
n = a.shape[0]
_eigenvalues, _eigenvectors = np.linalg.eigh(a)
indicies = np.argsort(_eigenvalues)[::-1]
return _eigenvalues[indicies[0:n]], _eigenvectors[:,indicies[0:n]]
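`_eigen` returns the eigenpairs of a symmetric matrix ordered by decreasing eigenvalue. A compact equivalent for reference (a sketch, assuming a symmetric input as in the covariance use above):

```python
import numpy as np

def eigen_desc(a, n=None):
    """Eigenpairs of a symmetric matrix, largest eigenvalues first."""
    a = np.atleast_2d(a)  # promote 0-d input, as the original does
    if n is None:
        n = a.shape[0]
    vals, vecs = np.linalg.eigh(a)
    order = np.argsort(vals)[::-1]  # eigh returns ascending order
    return vals[order[:n]], vecs[:, order[:n]]
```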
def _dream_proposals( currentVectors, history, dimensions, nChains, DEpairs, gamma, jitter, eps ):
"""
generates and returns proposal vectors given the current states
"""
sampleRange = history.ncombined_history
currentIndex = np.arange(sampleRange - nChains,sampleRange)[:, np.newaxis]
combined_history = history.combined_history
#choose some chains without replacement to combine
chains = _random_no_replace(DEpairs * 2, sampleRange - 1, nChains)
# make sure we have already selected the current chain so it is not replaced
# this ensures that the two chosen chains cannot be the same as the chain for which the jump is proposed
chains += (chains >= currentIndex)
chainDifferences = (np.sum(combined_history[chains[:, 0:DEpairs], :], axis = 1) -
np.sum(combined_history[chains[:, DEpairs:(DEpairs*2)], :], axis = 1))
e = np.random.normal(0, jitter, (nChains,dimensions))
E = np.random.normal(0, eps,(nChains,dimensions)) # could replace eps with 1e-6 here
proposalVectors = currentVectors + (1 + e) * gamma[:,np.newaxis] * chainDifferences + E
return proposalVectors
def _dream2_proposals( currentVectors, history, dimensions, nChains, DEpairs, gamma, jitter, eps ):
"""
generates and returns proposal vectors given the current states
NOT USED ATM
"""
sampleRange = history.ncombined_history
currentIndex = np.arange(sampleRange - nChains,sampleRange)[:, np.newaxis]
combined_history = history.combined_history
#choose some chains without replacement to combine
chains = _random_no_replace(1, sampleRange - 1, nChains)
# make sure we have already selected the current chain so it is not replaced
# this ensures that the two chosen chains cannot be the same as the chain for which the jump is proposed
chains += (chains >= currentIndex)
proposalVectors = combined_history[chains[:, 0], :]
return proposalVectors
class _GRConvergence:
"""
Gelman Rubin convergence diagnostic calculator class. It currently only calculates the naive
version found in the first paper. It does not check whether the variances have
stabilized, so it may be misleading sometimes.
"""
_R = np.Inf
_V = np.Inf
_VChange = np.Inf
_W = np.Inf
_WChange = np.Inf
def __init__(self):
pass
def _get_R(self):
return self._R
R = property(_get_R)
@property
def VChange(self):
return self._VChange
@property
def WChange(self):
return self._WChange
def update(self, history):
"""
Updates the convergence diagnostic with the current history.
"""
N = history.nsequence_histories
sequences = history.sequence_histories
variances = np.var(sequences,axis = 2)
means = np.mean(sequences, axis = 2)
withinChainVariances = np.mean(variances, axis = 0)
betweenChainVariances = np.var(means, axis = 0) * N
varEstimate = (1 - 1.0/N) * withinChainVariances + (1.0/N) * betweenChainVariances
self._R = np.sqrt(varEstimate/ withinChainVariances)
self._WChange = np.abs(np.log(withinChainVariances /self._W)**.5)
self._W = withinChainVariances
self._VChange = np.abs(np.log(varEstimate /self._V)**.5)
self._V = varEstimate
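The update above is the naive Gelman-Rubin statistic: R = sqrt(((1 - 1/N) * W + B/N) / W), with W the mean within-chain variance and B the between-chain variance. A stand-alone sketch makes the two limiting cases easy to check: identical chains give R slightly below 1, while over-dispersed chains push R well above 1:

```python
import numpy as np

def gelman_rubin(sequences):
    """Naive R-hat; sequences has shape (nChains, dimensions, nDraws)."""
    n = sequences.shape[2]
    w = np.mean(np.var(sequences, axis=2), axis=0)      # within-chain variance W
    b = n * np.var(np.mean(sequences, axis=2), axis=0)  # between-chain variance B
    var_est = (1.0 - 1.0 / n) * w + b / n               # pooled variance estimate
    return np.sqrt(var_est / w)
```

As in the original, `np.var` uses the population variance (ddof=0).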
| 40.602826 | 212 | 0.621791 | 2,873 | 25,864 | 5.502959 | 0.226592 | 0.017078 | 0.008855 | 0.005566 | 0.192726 | 0.146996 | 0.135927 | 0.115813 | 0.102593 | 0.102593 | 0 | 0.013169 | 0.295391 | 25,864 | 636 | 213 | 40.666667 | 0.854368 | 0.320755 | 0 | 0.159509 | 0 | 0 | 0.014602 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09816 | false | 0.009202 | 0.02454 | 0.03681 | 0.223926 | 0.02454 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
# File: bbp/comps/bband_utils.py (repo: kevinmilner/bbp)
#!/usr/bin/env python
"""
Southern California Earthquake Center Broadband Platform
Copyright 2010-2016 Southern California Earthquake Center
Utility classes for the SCEC Broadband Platform
$Id: bband_utils.py 1730 2016-09-06 20:26:43Z fsilva $
"""
from __future__ import division, print_function
# Import Python modules
import os
import re
import sys
import traceback
import subprocess
# Compile regular expressions
re_parse_property = re.compile(r'([^:= \t]+)\s*[:=]?\s*(.*)')
# Constants used by several Python scripts
# This is used to convert from accel in g to accel in cm/s/s
G2CMSS = 980.665 # Convert g to cm/s/s
# Set to the maximum allows filename in the GP codebase
GP_MAX_FILENAME = 256
# Set to the maximum allowed filename in the SDSU codebase
SDSU_MAX_FILENAME = 256
class BroadbandExternalError(Exception):
"""
Exception when an external program invoked by the Broadband
platform fails
"""
pass
class ParameterError(Exception):
"""
Exception when a parameter provided to a module is deemed invalid
"""
pass
class ProcessingError(Exception):
"""
Exception raised when a Broadband module finds an unrecoverable
error during processing and cannot go any further
"""
pass
def runprog(cmd, print_cmd=True, abort_on_error=False):
    """
    Run a program on the command line, letting its output go to
    stdout, and return the program's exit code
    """
    proc = None
    # Check if we have a binary to run
    if not os.access(cmd.split()[0], os.X_OK) and cmd.startswith("/"):
        raise BroadbandExternalError("%s does not seem to be an executable path!" %
                                     (cmd.split()[0]))
    try:
        if print_cmd:
            print("Running: %s" % (cmd))
        proc = subprocess.Popen(cmd, shell=True)
        proc.wait()
    except KeyboardInterrupt:
        print("Interrupted!")
        sys.exit(1)
    except Exception:
        print("Unexpected error returned from Subprocess call: ",
              sys.exc_info()[0])
    if abort_on_error:
        # If the command failed to start or was interrupted, abort
        if proc is None or proc.returncode is None:
            raise BroadbandExternalError("%s\n" %
                                         (traceback.format_exc()) +
                                         "%s failed!" %
                                         (cmd))
        # If we got a non-zero exit code, abort
        if proc.returncode != 0:
            raise BroadbandExternalError("%s\n" %
                                         (traceback.format_exc()) +
                                         "%s returned %d" %
                                         (cmd, proc.returncode))
    return proc.returncode if proc is not None else 1
def get_command_output(cmd, output_on_stderr=False, abort_on_error=False):
"""
Get the output of the command from the shell. Adapter from CSEP's
commandOutput function in Environment.py
"""
# Execute command using the UNIX shell
child = subprocess.Popen(cmd,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
child_data, child_error = child.communicate()
if child_error and output_on_stderr is False:
if abort_on_error:
error_msg = ("Child process '%s' failed with error code %s" %
(cmd, child_error))
raise BroadbandExternalError("%s\n" %
(traceback.format_exc()) +
"%s" % (error_msg))
else:
return ""
# Check for non-empty result string from the command
if ((child_data is None or len(child_data) == 0) and
output_on_stderr is False):
if abort_on_error:
error_msg = "Child process '%s' returned no data!" % (cmd)
raise BroadbandExternalError("%s\n" %
(traceback.format_exc()) +
"%s" % (error_msg))
else:
return ""
# If command output is on stderr
if output_on_stderr is True:
child_data = child_error
return child_data
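get_command_output() is a thin wrapper over Popen pipes; this minimal, standalone sketch (assuming a POSIX shell with `echo`) shows the core behaviour. Note that in Python 3, communicate() returns bytes unless text mode is requested.

```python
import subprocess

# Minimal sketch of what get_command_output() does internally.
# Assumes a POSIX shell with the `echo` builtin available.
child = subprocess.Popen("echo hello", shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
child_data, child_error = child.communicate()
assert child_data.strip() == b"hello"
assert child_error == b""
```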
def mkdirs(list_of_dirs, print_cmd=True):
"""
Creates all directories specified in the list_of_dirs
"""
for my_dir in list_of_dirs:
cmd = "mkdir -p %s" % (my_dir)
runprog(cmd, print_cmd=print_cmd, abort_on_error=True)
def relpath(path, start=os.curdir):
"""
Return a relative version of a path
(from Python 2.6 os.path.relpath() implementation)
"""
sep = os.sep
if not path:
raise ValueError("no path specified")
start_list = os.path.abspath(start).split(sep)
path_list = os.path.abspath(path).split(sep)
# Work out how much of the filepath is shared by start and path.
i = len(os.path.commonprefix([start_list, path_list]))
rel_list = [os.pardir] * (len(start_list) - i) + path_list[i:]
if not rel_list:
return '.'
return os.path.join(*rel_list)
def check_path_lengths(variables, max_length):
"""
This function checks each variable in the variables list and makes
sure their path lenghts are less than max_length. It raises a
ValueError exception otherwise.
"""
for var in variables:
if len(var) > max_length:
raise ValueError("Path len for %s " % (var) +
" is %d characters long, maximum is %d" %
(len(var), max_length))
def list_subdirs(d):
"""
This function returns all subdirectories inside the directory d
"""
# Return empty array if d is None
if d is None:
return []
# Use list comprehension
return [sub for sub in os.listdir(d) if os.path.isdir(os.path.join(d, sub))]
def parse_properties(filename):
"""
This function reads all properties from filename and returns a
dictionary containing all key=value pairs found in the file
"""
my_file = open(filename, 'r')
props = {}
for line in my_file:
# Strip tabs, spaces and newlines from both ends
line = line.strip(' \t\n')
# Skip comments
if line.startswith('#'):
continue
# Remove inline comments
line = line.split('#')[0]
# Skip empty lines
if len(line) == 0:
continue
result = re_parse_property.search(line)
if result:
# Property parsing successful
key = result.group(1)
val = result.group(2)
# Make key lowercase
key = key.lower()
props[key] = val
# Don't forget to close the file
my_file.close()
# All done!
return props
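The parsing loop above hinges on the re_parse_property pattern compiled at module level. A standalone sketch of the same logic, with hypothetical input lines:

```python
import re

# Standalone sketch of parse_properties() using the same regular
# expression; the sample lines below are hypothetical.
re_parse_property = re.compile(r'([^:= \t]+)\s*[:=]?\s*(.*)')
props = {}
for line in ["# a comment", "MAGNITUDE = 6.5", "strike: 122", ""]:
    line = line.strip(' \t\n').split('#')[0]  # drop comments
    if not line:
        continue
    result = re_parse_property.search(line)
    if result:
        props[result.group(1).lower()] = result.group(2)
assert props == {"magnitude": "6.5", "strike": "122"}
```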
def parse_src_file(a_srcfile):
"""
Function parses the SRC file and checks for needed keys. It
returns a dictionary containing the keys found in the src file.
"""
src_keys = parse_properties(a_srcfile)
required_keys = ["magnitude", "fault_length", "fault_width", "dlen",
"dwid", "depth_to_top", "strike", "rake", "dip",
"lat_top_center", "lon_top_center"]
for key in required_keys:
if key not in src_keys:
raise ParameterError("key %s missing in src file" % (key))
# Convert keys to floats
for key in src_keys:
src_keys[key] = float(src_keys[key])
return src_keys
def count_header_lines(a_bbpfile):
"""
Function counts and returns the number of header lines in a BBP file
"""
header_lines = 0
my_file = open(a_bbpfile, 'r')
for line in my_file:
line = line.strip()
# Check for empty lines, we count them too
if not line:
header_lines = header_lines + 1
continue
# Check for comments
if line.startswith('%') or line.startswith('#'):
header_lines = header_lines + 1
continue
# Reached non header line
break
my_file.close()
return header_lines
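A toy check of the header-counting logic in count_header_lines(): blank lines and lines starting with '%' or '#' count as header lines until the first data line is reached. The sample BBP text here is hypothetical.

```python
# Blank lines and '%'/'#' comment lines count as header lines until the
# first data line; after that, counting stops.
bbp_text = "# comment\n% another\n\n0.0 1.0 2.0 3.0\n"
header_lines = 0
for line in bbp_text.splitlines():
    line = line.strip()
    if not line or line.startswith('%') or line.startswith('#'):
        header_lines += 1
        continue
    break
assert header_lines == 3
```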
if __name__ == "__main__":
print("Testing: %s" % (sys.argv))
CMD = "/bin/date"
RESULT = runprog(CMD)
if RESULT != 0:
print("Error running cmd: %s" % (CMD))
else:
print("Success!")
# File: mkpage.py (repo: jancc/mkpage)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from distutils.dir_util import copy_tree
from sys import exit
import os
import json
import datetime
import argparse
useMarkdown = False
try:
import markdown
useMarkdown = True
except ImportError as e:
useMarkdown = False
__version__ = "0.3.0"
class Site():
def __init__(self, file, output, workingDirectory):
self.output = output
self.workingDirectory = workingDirectory
        self.loadConfig(os.path.join(workingDirectory, file))
        self.loadTemplate()
"""
Load the config json file and parse using inbuild json
parsing functionality.
"""
def loadConfig(self, file):
try:
configFile = open(file, "r")
except IOError:
print("Error: No file: " + file + " found!")
exit()
self.config = json.load(configFile)
"""
Load the template html file, in which all other files will be embedded
"""
def loadTemplate(self):
configFilename = os.path.join(self.workingDirectory, self.config["template"])
try:
templateFile = open(configFilename, "r")
except IOError:
print("Error: file " + configFilename + " not found!")
exit()
self.template = templateFile.read()
"""
Parse the optional hidden attribute of a page
"""
def isHidden(self, page):
try:
return page["hidden"]
except KeyError:
return False
"""
Prepare menu as a simple unnumbered HTML list with links.
The current page isn't a link however.
"""
def buildMenu(self, currentPage):
menu = "<ul>"
for page in self.config["pages"]:
filename, filetype = os.path.splitext(page["file"])
if not self.isHidden(page) and page != currentPage:
menu += "<li><a href='" + "%s.html" % filename + "'>" + page["title"] + "</a></li>"
elif not self.isHidden(page):
menu += "<li><a class='nav_active' href='" + "%s.html" % filename + "'>" + page["title"] + "</a></li>"
menu += "</ul>"
return menu
"""
Prepare a single page by first putting it inside the template and then
loading all other definitions.
"""
def buildPage(self, folder, page):
try:
source = open(os.path.join(folder, page["file"]), "r")
except IOError:
print("Error: Failed to read " + os.path.join(folder, page["file"]))
exit()
htmlPage = ""
filename, filetype = os.path.splitext(page["file"])
# always do lowercase comparisions for file extensions to make them case insensitive
filetype = filetype.lower()
# either write html directly into output or parse
if filetype == ".html" or filetype == ".htm":
htmlPage = source.read()
elif useMarkdown and filetype == ".md":
htmlPage = markdown.markdown(source.read())
else:
print("Unknown file type in page: %s" % page["file"])
try:
dest = open(os.path.join(self.output, "%s.html" % filename), "w")
except IOError:
print("Error: Failed to generate " + self.output + "/" + page["file"])
exit()
generated = self.template.replace("$PAGE$", htmlPage)
menu = self.buildMenu(page)
generated = generated.replace("$MENU$", menu)
generated = generated.replace("$TITLE$", self.config["title"])
generated = generated.replace("$SUBTITLE$", self.config["subtitle"])
generated = generated.replace("$AUTHOR$", self.config["author"])
generated = generated.replace("$PAGETITLE$", page["title"])
now = datetime.datetime.now()
generated = generated.replace("$YEAR$", str(now.year))
dest.write(generated)
dest.flush()
dest.close()
source.close()
return
"""
Build all pages that are defined in the config.
"""
def buildPages(self):
if not os.path.isdir(self.output):
os.makedirs(self.output)
for page in self.config["pages"]:
self.buildPage(os.path.join(self.workingDirectory, "pages"), page)
return
"""
Copy all files in the "assets" folder into the generated folder.
"""
def copyAssets(self):
if(os.path.isdir(os.path.join(self.workingDirectory, "assets"))):
copy_tree(os.path.join(self.workingDirectory, "assets"), self.output)
return
def mkpage():
argparser = argparse.ArgumentParser(description="Very simple static site generator.")
argparser.add_argument("-o", "--out",
help="Directory to output generated files (default: 'generated')",
default="generated")
argparser.add_argument("-f", "--file",
help="Path to JSON file that describes your page (default: 'page.json')",
default="page.json")
argparser.add_argument("-d", "--directory",
help="Path to directory that includes your files (default: current directory)",
default=os.getcwd())
args = argparser.parse_args()
site = Site(args.file, args.out, args.directory)
site.buildPages()
site.copyAssets()
return
if __name__ == "__main__":
mkpage()
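The Site class above reads a handful of keys from the JSON config: template, title, subtitle, author, and a pages list whose entries carry file, title, and an optional hidden flag. A hypothetical page.json matching those keys (the file names and titles are made up):

```python
import json

# Hypothetical page.json content matching the keys the Site class reads.
config = {
    "template": "template.html",
    "title": "My Site",
    "subtitle": "A static page",
    "author": "Jane Doe",
    "pages": [
        {"file": "index.md", "title": "Home"},
        {"file": "secret.html", "title": "Secret", "hidden": True},
    ],
}
text = json.dumps(config)
assert json.loads(text)["pages"][1]["hidden"] is True
```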
# File: adapters.py (repo: RechnioMateusz/weather_forecast)
import logging
from abc import ABC, abstractmethod
from hardware import BME280, TSL2561, YL83
from resources.errors import AdapterException
from resources.utils import average
class AbstractAdapter(ABC):
def __init__(self):
self.data_buffer = dict()
@abstractmethod
def initialize(self, *args, **kwargs):
...
@abstractmethod
def read_data(self, *args, **kwargs):
...
def get_data(self):
ret = dict().fromkeys(self.data_buffer)
for key, value in self.data_buffer.items():
ret.update({key: average(list_=value)})
return ret
class BME280Adapter(AbstractAdapter):
def __init__(self):
self._log = logging.getLogger("BME280_adapter")
self._log.info("Initializing BME280Adapter...")
super().__init__()
self.bme280 = None
self.data_buffer.update(
temperature=list(), pressure=list(), humidity=list()
)
self._log.info("BME280Adapter initialized...")
def initialize(self, *args, **kwargs):
self._log.info("Started initialization...")
if not args and not kwargs:
self.bme280 = BME280()
elif len(args) == 1 and not kwargs:
self.bme280 = BME280(*args)
elif not args and len(kwargs) == 1 and "i2c_id" in kwargs:
self.bme280 = BME280(**kwargs)
else:
raise AdapterException(
msg="Invalid arguments.",
desc=f"Passed arguments: {args}\t{kwargs}"
)
self._log.info("Initialization successfull")
def read_data(self, *args, **kwargs):
temperature = self.bme280.read_temperature()
pressure = self.bme280.read_pressure()
humidity = self.bme280.read_humidity()
self._log.debug(f"Temperature: {temperature}")
self._log.debug(f"Pressure: {pressure}")
self._log.debug(f"Humidity: {humidity}")
self.data_buffer.get("temperature").append(temperature)
self.data_buffer.get("pressure").append(pressure)
self.data_buffer.get("humidity").append(humidity)
class TSL2561Adapter(AbstractAdapter):
def __init__(self):
self._log = logging.getLogger("TSL2561_adapter")
self._log.info("Initializing TSL2561Adapter...")
super().__init__()
self.tsl2561 = None
self.data_buffer.update(light_intensity=list())
self._log.info("TSL2561Adapter initialized...")
def initialize(self, *args, **kwargs):
self._log.info("Started initialization...")
if not args and not kwargs:
self.tsl2561 = TSL2561()
elif len(args) == 1 and not kwargs:
self.tsl2561 = TSL2561(*args)
elif not args and len(kwargs) == 1 and "i2c_id" in kwargs:
self.tsl2561 = TSL2561(**kwargs)
else:
raise AdapterException(
msg="Invalid arguments.",
desc=f"Passed arguments: {args}\t{kwargs}"
)
self._log.info("Initialization successfull")
def read_data(self, *args, **kwargs):
light_intensity = self.tsl2561.read_full_spectrum()
self._log.debug(f"Light intensity: {light_intensity}")
self.data_buffer.get("light_intensity").append(light_intensity)
class YL83Adapter(AbstractAdapter):
def __init__(self):
self._log = logging.getLogger("YL83_adapter")
self._log.info("Initializing YL83Adapter...")
super().__init__()
self.yl83 = None
self.data_buffer.update(precipitation=list())
self._log.info("YL83Adapter initialized...")
def initialize(self, *args, **kwargs):
self._log.info("Started initialization...")
if not args and not kwargs:
self.yl83 = YL83()
elif len(args) == 1 and not kwargs:
self.yl83 = YL83(*args)
elif not args and len(kwargs) == 1 and "i2c_id" in kwargs:
self.yl83 = YL83(**kwargs)
else:
raise AdapterException(
msg="Invalid arguments.",
desc=f"Passed arguments: {args}\t{kwargs}"
)
self._log.info("Initialization successfull")
def read_data(self, *args, **kwargs):
precipitation = self.yl83.read_precipitation()
self._log.debug(f"Precipitation: {precipitation}")
self.data_buffer.get("precipitation").append(int(precipitation))
# File: gui.py (repo: kirberich/gag)
import math
import pygame
import cairo
import numpy
import Image  # classic PIL namespace; with Pillow, use "from PIL import Image"
import random
import copy
class Color(object):
def __init__(self, r = 1, g = 1, b = 1, a = 1):
self.r = r
self.g = g
self.b = b
self.a = a
    def replace(self, r = None, g = None, b = None, a = None):
        """ Returns a copy of the color with all supplied values replaced. """
        # compare against None explicitly so that 0 stays a valid channel value
        if r is None: r = self.r
        if g is None: g = self.g
        if b is None: b = self.b
        if a is None: a = self.a
        return Color(r,g,b,a)
def __repr__(self):
return 'Color object: (%s,%s,%s,%s)' % (self.r, self.g, self.b, self.a)
class Gui(object):
def __init__(self, width = 640, height = 480, caption = "DisplayTest", textureDirectory = "textures", virtual_width = None, virtual_height = None):
""" Initialize a pygame window. This is only for early testing, there probably won't be a gui later. """
pygame.init()
screen = pygame.display.set_mode((width,height))
pygame.display.set_caption(caption)
background = pygame.Surface(screen.get_size())
background.fill((255, 255, 255))
screen.blit(background, (0, 0))
pygame.display.flip()
data = numpy.empty(width * height * 4, dtype=numpy.int8)
self.cairo_surface = cairo.ImageSurface.create_for_data(data, cairo.FORMAT_ARGB32, width, height, width * 4)
self.cairo_context = cairo.Context(self.cairo_surface)
self.cairo_context.set_antialias(cairo.ANTIALIAS_SUBPIXEL)
self.cairo_context.set_line_width(0.01)
self.screen = screen
self.textureDirectory = textureDirectory
self.width = width
self.height = height
self.clock = pygame.time.Clock()
# Set virtual resolution for pixel stuff
self.virtual_width = width
self.virtual_height = height
self.pixel_width = 1
self.pixel_height = 1
if virtual_width and virtual_width < width:
self.virtual_width = virtual_width
self.pixel_width = width / virtual_width
if virtual_height and virtual_height < height:
self.virtual_height = virtual_height
self.pixel_height = height / virtual_height
def fill(self, color):
""" Fill the entire surface with one color """
self.set_color(color)
self.cairo_context.paint()
def draw_circle(self, center, radius, fill_color = None, stroke_color = None):
""" Draw a circle at center, with a given radius and optional fill_color and stroke_color """
(x, y) = center
self.cairo_context.arc(x, y, radius, 0, 2 * math.pi)
self.apply_colors(fill_color, stroke_color)
self.cairo_context.close_path()
def draw_rect(self, x, y, width, height, fill_color = None, stroke_color = None):
""" Draw a rectangle with its upper left corner at x,y, size of width,height and optional fill_color and stroke_color """
self.cairo_context.rectangle(x, y, width, height)
self.apply_colors(fill_color, stroke_color)
self.cairo_context.close_path()
def draw_pixel(self, x, y, color = None):
""" Draw a virtual pixel. The size of the pixel is determined by Gui.virtual_width and Gui.virtual_height """
self.cairo_context.rectangle(x * self.pixel_width, y * self.pixel_height, self.pixel_width, self.pixel_height)
if color: self.set_color(color)
self.cairo_context.fill()
self.cairo_context.new_path()
def draw_pixels(self, pixels):
for pixel in pixels:
self.draw_pixel(*pixel)
def draw_polygon(self, coordinates, fill_color = None, stroke_color = None):
""" Draw an n-sided polygon """
if len(coordinates) < 3: raise Exception("Polygons need to have at least three points")
self.cairo_context.move_to( coordinates[0][0], coordinates[0][1] )
for (x,y) in coordinates[1:]:
self.cairo_context.line_to(x,y)
self.apply_colors(fill_color, stroke_color)
def draw_text(self, x, y, text, fill_color = None, stroke_color = None):
self.cairo_context.move_to(x,y)
self.cairo_context.text_path(text)
self.apply_colors(fill_color, stroke_color)
def apply_colors(self, fill_color, stroke_color):
""" Apply fill and stroke colors to the current path """
if fill_color:
self.set_color(fill_color)
self.cairo_context.fill_preserve()
if stroke_color:
self.set_color(stroke_color)
self.cairo_context.stroke()
self.cairo_context.new_path()
def set_color(self, color):
self.cairo_context.set_source_rgba(color.r, color.g, color.b, color.a)
def rotate(self, angle):
""" Rotates the transformation matrix, this only has an effect on
newly drawn things and is kind of useless.
"""
self.cairo_context.rotate(self.from_degrees(angle))
    def reverse_rotate(self, angle):
        # rotate() already converts degrees to radians, so just negate the angle
        self.rotate(-angle)
def scale(self, amount=1):
self.cairo_context.scale(amount, amount)
def reverse_scale(self, amount=1):
self.scale(1.0 / amount)
def translate(self, x, y):
self.cairo_context.translate(x, y)
def reverse_translate(self, x, y):
self.translate(-x, -y)
def transform(self, translate_x = 0, translate_y = 0, scale = 1):
self.translate(translate_x, translate_y)
self.scale(scale)
def reverse_transform(self, translate_x = 0, translate_y = 0, scale = 1):
self.cairo_context.scale(1.0 / scale, 1.0 / scale)
self.cairo_context.translate(translate_x * -1, translate_y * -1)
def from_degrees(self, degrees):
return degrees * math.pi / 180.0
def cairo_drawing_test(self):
""" Just a cairo test """
# Reset background
self.cairo_context.set_source_rgba(1, 1, 1, 1)
self.cairo_context.paint()
self.cairo_context.set_line_width(100)
self.cairo_context.arc(320, 240, 200, 0, 1.9 * math.pi)
self.cairo_context.set_source_rgba(1, 0, 0, random.random())
self.cairo_context.fill_preserve()
self.cairo_context.set_source_rgba(0, 1, 0, 0.5)
self.cairo_context.stroke()
def _bgra_surf_to_rgba_string(self):
img = Image.frombuffer(
'RGBA', (self.cairo_surface.get_width(),
self.cairo_surface.get_height()),
self.cairo_surface.get_data(), 'raw', 'BGRA', 0, 1)
return img.tostring('raw', 'RGBA', 0, 1)
def update(self):
data_string = self._bgra_surf_to_rgba_string()
pygame_surface = pygame.image.frombuffer(data_string, (self.width, self.height), 'RGBA')
self.screen.blit(pygame_surface, (0,0))
        pygame.display.flip()
# File: tahmin.py (repo: cumadagli2327/ymgkproje)
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
# Load the data set
veri = pd.read_csv('c.csv')
# Determine the number of classes and the class labels
label_encoder = LabelEncoder().fit(veri['HKIsonuc'])
labels = label_encoder.transform(veri['HKIsonuc'])
classes = list(label_encoder.classes_)
# Prepare the input and output data
nb_features = 6
nb_classes = len(classes)
X = veri.drop(['HKIsonuc'], axis=1)
y = labels
# Standardize the data
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X = sc.fit_transform(X)
# Prepare the training and test data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.1)
# One-hot encode the output values
from tensorflow.keras.utils import to_categorical
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# ANN (artificial neural network) model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
model = Sequential()
model.add(Dense(16,input_dim=4, activation="relu"))
model.add(Dense(12,activation="relu"))
model.add(Dense(3,activation="softmax"))
model.summary()
# Compile the model
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics= ["accuracy"])
# Train the model
model.fit(X_train,y_train,validation_data=(X_test,y_test), epochs=500)
# Report the relevant metrics
print(("Mean training loss: ", np.mean(model.history.history["loss"])))
print(("Mean training accuracy: ", np.mean(model.history.history["accuracy"])))
print(("Mean validation loss: ", np.mean(model.history.history["val_loss"])))
print(("Mean validation accuracy: ", np.mean(model.history.history["val_accuracy"])))
tahmin = np.array([19.34, 8.42, 463.52, 14.6]).reshape(1, 4)
# predict_classes was removed in newer TF releases; there, use
# np.argmax(model.predict(tahmin), axis=1) instead
print(model.predict_classes(tahmin))
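Class predictions are recovered from softmax outputs by taking the argmax across the class axis, which is what predict_classes did internally. A standalone sketch with made-up probability rows:

```python
import numpy as np

# Recovering class labels from softmax outputs (equivalent to the old
# predict_classes behaviour): take the argmax across the class axis.
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])
classes = np.argmax(probs, axis=1)
assert classes.tolist() == [1, 0]
```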
# File: poke-rename-files/app.py (repo: vitorkaio/python-revision)
import os
from ser import rename
pokes = './pokes'
lista = []
#os.rename("path/to/current/file.foo", "path/to/new/destination/for/file.foo")
# print (f'{item}.png')
for file in os.listdir(pokes):
item = rename(file)
print(f"tranferindo {file} para {item}.jpg")
os.rename(f"{pokes}/{file}", f"./pokemons/{item}.jpg")
#for item in lista:
# print item
| 19.833333 | 78 | 0.669468 | 58 | 357 | 4.12069 | 0.431034 | 0.066946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12605 | 357 | 17 | 79 | 21 | 0.766026 | 0.358543 | 0 | 0 | 0 | 0 | 0.339286 | 0.09375 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f4f0162a66c4f189f0d0f6c06fb23655b1e3e175 | 3,022 | py | Python | yql/tests/test_yahoo_token.py | project-fondue/python-yql | 475911d987c6fc91bef3d9d24da9bd8c4bc0c4bc | [
"BSD-3-Clause"
] | 5 | 2015-03-06T16:45:46.000Z | 2020-04-04T02:29:32.000Z | yql/tests/test_yahoo_token.py | project-fondue/python-yql | 475911d987c6fc91bef3d9d24da9bd8c4bc0c4bc | [
"BSD-3-Clause"
] | null | null | null | yql/tests/test_yahoo_token.py | project-fondue/python-yql | 475911d987c6fc91bef3d9d24da9bd8c4bc0c4bc | [
"BSD-3-Clause"
] | 8 | 2015-02-16T15:27:59.000Z | 2018-12-13T06:39:52.000Z | from unittest import TestCase
from nose.tools import raises
try:
from urlparse import parse_qs, parse_qsl
except ImportError:
from cgi import parse_qs, parse_qsl
import yql
class YahooTokenTest(TestCase):
def test_create_yahoo_token(self):
token = yql.YahooToken('test-key', 'test-secret')
self.assertEqual(token.key, 'test-key')
self.assertEqual(token.secret, 'test-secret')
def test_y_token_to_string(self):
token = yql.YahooToken('test-key', 'test-secret')
token_to_string = token.to_string()
string_data = dict(parse_qsl(token_to_string))
self.assertEqual(string_data.get('oauth_token'), 'test-key')
self.assertEqual(string_data.get('oauth_token_secret'), 'test-secret')
def test_y_token_to_string2(self):
token = yql.YahooToken('test-key', 'test-secret')
token.timestamp = '1111'
token.session_handle = 'poop'
token.callback_confirmed = 'basilfawlty'
token_to_string = token.to_string()
string_data = dict(parse_qsl(token_to_string))
self.assertEqual(string_data.get('oauth_token'), 'test-key')
self.assertEqual(string_data.get('oauth_token_secret'), 'test-secret')
self.assertEqual(string_data.get('token_creation_timestamp'), '1111')
self.assertEqual(string_data.get('oauth_callback_confirmed'), 'basilfawlty')
self.assertEqual(string_data.get('oauth_session_handle'), 'poop')
def test_y_token_from_string(self):
token_string = "oauth_token=foo&oauth_token_secret=bar&"\
"oauth_session_handle=baz&token_creation_timestamp=1111"
token_from_string = yql.YahooToken.from_string(token_string)
self.assertEqual(token_from_string.key, 'foo')
self.assertEqual(token_from_string.secret, 'bar')
self.assertEqual(token_from_string.session_handle, 'baz')
self.assertEqual(token_from_string.timestamp, '1111')
@raises(ValueError)
def test_y_token_raises_value_error(self):
yql.YahooToken.from_string('')
@raises(ValueError)
def test_y_token_raises_value_error2(self):
yql.YahooToken.from_string('foo')
@raises(ValueError)
def test_y_token_raises_value_error3(self):
yql.YahooToken.from_string('oauth_token=bar')
@raises(ValueError)
def test_y_token_raises_value_error4(self):
yql.YahooToken.from_string('oauth_token_secret=bar')
@raises(AttributeError)
def test_y_token_without_timestamp_raises(self):
token = yql.YahooToken('test', 'test2')
y = yql.ThreeLegged('test', 'test2')
y.check_token(token)
def test_y_token_without_timestamp_raises2(self):
def refresh_token_replacement(token):
return 'replaced'
y = yql.ThreeLegged('test', 'test2')
y.refresh_token = refresh_token_replacement
token = yql.YahooToken('test', 'test2')
token.timestamp = 11111
self.assertEqual(y.check_token(token), 'replaced')
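The token serialisation exercised by these tests is a plain query-string round trip via parse_qsl. A standalone sketch (the Python 3 fallback shown here is urllib.parse, since cgi.parse_qsl has long been deprecated and the cgi module was removed in Python 3.13):

```python
try:
    from urlparse import parse_qsl          # Python 2
except ImportError:
    from urllib.parse import parse_qsl      # Python 3

# YahooToken.to_string()/from_string() amount to this round trip:
data = dict(parse_qsl("oauth_token=foo&oauth_token_secret=bar"))
assert data == {"oauth_token": "foo", "oauth_token_secret": "bar"}
```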
# File: data_pipeline/tools/schema_ref_json_generator.py (repo: poros/data_pipeline)
# -*- coding: utf-8 -*-
# Copyright 2016 Yelp Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
This script will create a JSON version of the Google Spreadsheet schema reference.
Google Spreadsheet:
https://docs.google.com/spreadsheets/d/1ZIE8UdMadTgBpcELOnGLNJIhOLXBsij0_f5BiwlAQss/edit#gid=8
Usage:
1. download sheets as csvs from the Google Spreadsheet into the same directory as this script
- ref_cols.csv Sheet 1 ("DW Schema Reference")
- ref_tables.csv Sheet 2 ("DW Tables")
2. download BAM's listings of table owners into the same directory as this script
- ref_owners.csv Original Version
- ref_owners_new.csv Updated Version
3. run the script and the output will be in schema_ref.json
"""
from __future__ import absolute_import
from __future__ import unicode_literals

import csv
import json

from data_pipeline.helpers.frozendict_json_encoder import FrozenDictEncoder


def _read_rows_from_file(file_name):
    rows = []
    with open(file_name, 'rb') as file:
        reader = csv.reader(file)
        for row in reader:
            rows.append(row)
    return rows


def _parse_col_row(row):
    _, _, col_name, pos, _, nullable, write_once, data_type, _, _, _, _, description, _, notes = row
    if nullable == 'NO':
        data_type += ' not null'
    if write_once == 'YES':
        data_type += ' write once'
    if notes.strip() == '0':
        notes = ''
    return {
        'name': col_name,
        'doc': description,
        'note': notes,
    }
if __name__ == '__main__':
    owners_rows = _read_rows_from_file('ref_owners.csv')
    owners_new_rows = _read_rows_from_file('ref_owners_new.csv')
    tables_rows = _read_rows_from_file('ref_tables.csv')
    cols_rows = _read_rows_from_file('ref_cols.csv')

    tables_rows = tables_rows[1:]  # drop the header row

    output = {
        'doc_source': 'https://docs.google.com/spreadsheets/d/1ZIE8UdMadTgBpcELOnGLNJIhOLXBsij0_f5BiwlAQss/edit#gid=11',
        'doc_owner': 'bam@yelp.com',
        'docs': []
    }

    for row in tables_rows:
        schema, name, category, description, _, _, notes, _ = row
        table_output = {
            'namespace': schema,
            'source': name,
            'doc': description,
            'note': notes,
            'category': category,
            'fields': []
        }
        owner_row = filter(lambda row: row[3] == name, owners_rows)
        owner_new_row = filter(lambda row: row[3] == name, owners_new_rows)
        try:
            _, source_path, _, _, _, owner = owner_row.pop()
            owner = owner.split(',')[0]
            table_output['owner_email'] = owner
            table_output['file_display'] = source_path
            table_output['file_url'] = 'https://opengrok.yelpcorp.com/xref/yelp-main/' + source_path
        except IndexError:
            if len(owner_new_row) > 0:
                _, source_path, _, _, _, owner = owner_new_row.pop()
                owner = owner.split(',')[0]
                table_output['owner_email'] = owner
                table_output['file_display'] = source_path
                table_output['file_url'] = 'https://opengrok.yelpcorp.com/xref/yelp-main/' + source_path
            else:
                table_output['owner_email'] = ''
                table_output['file_display'] = ''
                table_output['file_url'] = ''
        col_rows = filter(lambda row: row[0] == schema and row[1] == name, cols_rows)
        for row in col_rows:
            table_output['fields'].append(_parse_col_row(row))
        output['docs'].append(table_output)

    with open('schema_ref.json', 'wb') as outfile:
        outfile.write(json.dumps(output, cls=FrozenDictEncoder))
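The column-parsing step above annotates the type string with nullability and write-once flags, and treats a literal `'0'` in the notes column as empty. A standalone sketch of that behavior on one 15-column row (the sample values here are made up for illustration, not real spreadsheet data):

```python
def parse_col_row(row):
    # same logic as _parse_col_row above, reproduced so it runs standalone
    _, _, col_name, pos, _, nullable, write_once, data_type, _, _, _, _, description, _, notes = row
    if nullable == 'NO':
        data_type += ' not null'
    if write_once == 'YES':
        data_type += ' write once'
    if notes.strip() == '0':
        notes = ''  # the spreadsheet export uses '0' to mean "no notes"
    return {'name': col_name, 'doc': description, 'note': notes}


# a hypothetical exported row: schema, table, column, position, ..., description, ..., notes
sample = ['dw', 'users', 'user_id', '1', '', 'NO', 'YES', 'int', '', '', '',
          '', 'Primary key of the user', '', '0']
print(parse_col_row(sample))
# → {'name': 'user_id', 'doc': 'Primary key of the user', 'note': ''}
```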

# File: Medium/1492.ThekthFactorofn.py (YuriSpiridonov/LeetCode, MIT)

"""
    Given two positive integers n and k.

    A factor of an integer n is defined as an integer i where n % i == 0.
    Consider a list of all factors of n sorted in ascending order,
    return the kth factor in this list or return -1 if n has less
    than k factors.

    Example:
        Input: n = 12, k = 3
        Output: 3
        Explanation: Factors list is [1, 2, 3, 4, 6, 12], the 3rd factor is 3.

    Example:
        Input: n = 7, k = 2
        Output: 7
        Explanation: Factors list is [1, 7], the 2nd factor is 7.

    Example:
        Input: n = 4, k = 4
        Output: -1
        Explanation: Factors list is [1, 2, 4], there are only 3 factors.
                     We should return -1.

    Example:
        Input: n = 1, k = 1
        Output: 1
        Explanation: Factors list is [1], the 1st factor is 1.

    Example:
        Input: n = 1000, k = 3
        Output: 4
        Explanation: Factors list is [1, 2, 4, 5, 8, 10, 20, 25, 40, 50, 100,
                     125, 200, 250, 500, 1000].

    Constraints:
        - 1 <= k <= n <= 1000
"""

# Difficulty: Medium
# 207 / 207 test cases passed.
# Runtime: 28 ms, faster than 87.17% of Python3 online submissions for The kth Factor of n.
# Memory Usage: 14.1 MB, less than 76.75% of Python3 online submissions for The kth Factor of n.
class Solution:
    def kthFactor(self, n: int, k: int) -> int:
        i = 1
        while i <= n:
            if not n % i:
                k -= 1
                if k == 0:
                    return i
            i += 1
        return -1
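A quick sanity check against the examples from the problem statement; the class is repeated here so the snippet runs standalone:

```python
class Solution:
    def kthFactor(self, n: int, k: int) -> int:
        # trial division up to n; decrement k at each factor found
        i = 1
        while i <= n:
            if not n % i:
                k -= 1
                if k == 0:
                    return i
            i += 1
        return -1


s = Solution()
print(s.kthFactor(12, 3), s.kthFactor(7, 2), s.kthFactor(4, 4), s.kthFactor(1000, 3))
# → 3 7 -1 4
```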

# File: pnp/validator.py (HazardDede/pnp, MIT)

"""Contains utility methods for validating."""
import os
from typing import Any, Iterable, List, Optional

from typeguard import typechecked


@typechecked
def all_items(item_type: type, **kwargs: Iterable[Any]) -> None:
    """
    Examples:
        >>> all_items(str, a=['a', 'b', 'c'])
        >>> all_items(int, a=[1, 2, 3])
        >>> all_items(int, b=[1, 's', 3])
        Traceback (most recent call last):
        ...
        TypeError: Item value at pos 1 of argument 'b' is expected to be a <class 'int'>, but is <class 'str'>
    """
    is_iterable_but_no_str(**kwargs)
    for arg_name, arg_val in kwargs.items():
        for i, item in enumerate(arg_val):
            if not isinstance(item, item_type):
                raise TypeError(
                    "Item value at pos {} of argument '{}' is expected to be a {}, "
                    "but is {}".format(
                        i, arg_name, item_type, type(item)
                    )
                )


def is_directory(**kwargs: Any) -> None:
    """
    Examples:
        >>> is_directory(arg='/tmp')
        >>> is_directory(arg='/thisonedoesnotexists')
        Traceback (most recent call last):
        ...
        ValueError: Argument 'arg' is expected to be a directory, but is '/thisonedoesnotexists'
    """
    for arg_name, arg_value in kwargs.items():
        if not os.path.isdir(arg_value):
            raise ValueError(
                "Argument '{arg_name}' is expected to be a directory, "
                "but is '{arg_value}'".format(arg_name=arg_name, arg_value=arg_value)
            )


def is_file(**kwargs: Any) -> None:
    """
    Examples:
        >>> import tempfile
        >>> with tempfile.NamedTemporaryFile() as tmpf:
        ...     is_file(arg=tmpf.name)
        >>> is_file(arg='/doesnotexist.txt')
        Traceback (most recent call last):
        ...
        ValueError: Argument 'arg' is expected to be a file, but is '/doesnotexist.txt'
    """
    for arg_name, arg_value in kwargs.items():
        if not os.path.isfile(str(arg_value)):
            raise ValueError(
                "Argument '{arg_name}' is expected to be a file, "
                "but is '{arg_value}'".format(
                    arg_name=arg_name, arg_value=arg_value
                )
            )


def is_function(**kwargs: Any) -> None:
    """
    Examples:
        >>> def foo():
        ...     pass
        >>> is_function(foo=foo)
        >>> is_function(bar='bar')
        Traceback (most recent call last):
        ...
        TypeError: Argument 'bar' is expected to be a function/callable, but is '<class 'str'>'
        >>> is_function(baz=lambda: True)
    """
    for arg_name, arg_value in kwargs.items():
        if not callable(arg_value):
            arg_type = type(arg_value)
            raise TypeError(
                "Argument '{arg_name}' is expected to be a function/callable, "
                "but is '{arg_type}'".format(
                    arg_name=arg_name, arg_type=arg_type
                )
            )


@typechecked
def is_instance(*required_types: type, allow_none: bool = False, **kwargs: Any) -> None:
    """
    Examples:
        >>> is_instance(str, arg="i am a string")  # Should be ok
        >>> is_instance(str, allow_none=True, arg=None)  # Should be ok as well
        >>> is_instance(bool, arg="i am a string")  # Not ok
        Traceback (most recent call last):
        ...
        TypeError: Argument 'arg' is expected to be a (<class 'bool'>,), but is <class 'str'>
        >>> is_instance(bool, str, arg="i am a string")  # Should be ok, too
        >>> is_instance(str, bool, arg1="i am a string", arg2=True)  # Ok ...
        >>> is_instance(str, arg1="i am a string", arg2=True)  # Not ok
        Traceback (most recent call last):
        ...
        TypeError: Argument 'arg2' is expected to be a (<class 'str'>,), but is <class 'bool'>
    """
    for arg_name, arg_value in kwargs.items():
        if not (allow_none and arg_value is None) and not isinstance(arg_value, required_types):
            raise TypeError(
                "Argument '{arg_name}' is expected to be a {required_type}"
                ", but is {actual_type}".format(
                    arg_name=arg_name,
                    required_type=required_types,
                    actual_type=type(arg_value)
                ))


def is_iterable_but_no_str(**kwargs: Any) -> None:
    """
    Examples:
        >>> is_iterable_but_no_str(l=[])
        >>> is_iterable_but_no_str(t=("a", "b"))
        >>> is_iterable_but_no_str(s="abc")
        Traceback (most recent call last):
        ...
        TypeError: Argument 's' is expected to be a non-str iterable, but is '<class 'str'>'
    """
    for arg_name, arg_value in kwargs.items():
        # Cannot import from utils -> cyclic dependency
        if not hasattr(arg_value, '__iter__') or isinstance(arg_value, (str, bytes)):
            arg_type = type(arg_value)
            raise TypeError(
                "Argument '{arg_name}' is expected to be a non-str iterable, "
                "but is '{arg_type}'".format(
                    arg_name=arg_name, arg_type=arg_type
                )
            )


def one_not_none(**kwargs: Any) -> None:
    """
    Examples:
        >>> one_not_none(arg1=None, arg2=None, arg3='passed')
        >>> one_not_none(arg1=None, arg2=None)
        Traceback (most recent call last):
        ...
        ValueError: Arguments ['arg1', 'arg2'] expects at least one passed, but all are none
    """
    if all(x is None for x in kwargs.values()):
        raise ValueError(
            "Arguments {args} expects at least one passed, but all are none".format(
                args=sorted(list(kwargs.keys()))
            )
        )


@typechecked
def one_of(possible: Iterable[Any], **kwargs: Any) -> None:
    """
    Examples:
        >>> one_of(['a', 'b', 'c'], arg='a')
        >>> one_of(['a', 'b', 'c'], arg='z')
        Traceback (most recent call last):
        ...
        ValueError: Argument 'arg' is expected to be one of ['a', 'b', 'c'], but is 'z'
        >>> one_of(['a', 'b', 'c'], arg1='a', arg2='z')
        Traceback (most recent call last):
        ...
        ValueError: Argument 'arg2' is expected to be one of ['a', 'b', 'c'], but is 'z'
    """
    for arg_name, arg_value in kwargs.items():
        if arg_value not in possible:
            raise ValueError(
                "Argument '{arg_name}' is expected to be one of {possible}, "
                "but is '{arg_value}'".format(
                    arg_name=arg_name, possible=possible, arg_value=arg_value
                )
            )


@typechecked
def subset_of(possible: Iterable[Any], **kwargs: Any) -> None:
    """
    Examples:
        >>> subset_of(['a', 'b', 'c'], arg='a')
        >>> subset_of(['a', 'b', 'c'], arg=('a', 'b'))
        >>> subset_of(['a', 'b', 'c'], arg=('a', 'b'), arg2='c')
        >>> subset_of(['a', 'b', 'c'], arg='d')
        Traceback (most recent call last):
        ...
        ValueError: Argument 'arg' is expected to be a subset of ['a', 'b', 'c'], but is 'd'
    """
    def make_list(lst: Any) -> Optional[List[Any]]:
        if lst is None:
            return None
        if isinstance(lst, list):
            return lst
        if isinstance(lst, dict):
            return [lst]
        if hasattr(lst, '__iter__') and not isinstance(lst, str):
            return list(lst)
        return [lst]

    for arg_name, arg_value in kwargs.items():
        if not set(make_list(arg_value) or set()) <= set(possible):
            raise ValueError(
                "Argument '{arg_name}' is expected to be a subset of {possible}, "
                "but is '{arg_value}'".format(
                    arg_name=arg_name, possible=possible, arg_value=arg_value
                )
            )
f4f4495710e469abc167c610a7b652db56ad9fbf | 1,473 | py | Python | scripts/python/ocs_start_of_night_process.py | lsst-ts/ts_ocs_sequencer | e99b43e5264bcc22d664c12f1988c42411c73d5a | [
"BSD-3-Clause"
] | null | null | null | scripts/python/ocs_start_of_night_process.py | lsst-ts/ts_ocs_sequencer | e99b43e5264bcc22d664c12f1988c42411c73d5a | [
"BSD-3-Clause"
] | null | null | null | scripts/python/ocs_start_of_night_process.py | lsst-ts/ts_ocs_sequencer | e99b43e5264bcc22d664c12f1988c42411c73d5a | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-


# +
# import(s)
# -
from OcsCameraEntity import *
from OcsSequencerEntity import *

import multiprocessing
import os


# +
# function: worker_code()
# -
def worker_code(entity='', entobj=None):

    # debug output
    print('name: {0:s}'.format(multiprocessing.current_process().name))
    print('entity: {0:s}'.format(entity))
    if hasattr(os, 'getppid'):
        print('parent process id: {0:s}'.format(str(os.getppid())))
    if hasattr(os, 'getpid'):
        print('process id: {0:s}'.format(str(os.getpid())))

    # do start_of_night stuff
    if entobj:
        # enter control
        entobj.logger.info('{0:s}.entercontrol()'.format(entity))
        entobj.entercontrol()
        # start
        entobj.logger.info("{0:s}.start('Normal')".format(entity))
        entobj.start('Normal')
        # enable
        entobj.logger.info('{0:s}.enable()'.format(entity))
        entobj.enable()

    # return
    return


# +
# main()
# -
if __name__ == "__main__":

    # create shared entities
    camera = OcsCameraEntity('CCS', 'Camera', False)
    sequencer = OcsSequencerEntity('OCS', 'ocs', False)

    # create jobs for each entity:
    jobs = []
    for E in (camera, sequencer):
        j = multiprocessing.Process(target=worker_code, args=(E._entity, E))
        jobs.append(j)
        j.start()

    for j in jobs:
        j.join()
        print('{0:s}.exitcode: {1:s}'.format(j.name, str(j.exitcode)))
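The fan-out/join pattern in `main()` — one process per entity, then join every job and inspect its exit code — can be sketched without the OCS entities (the worker body and entity names here are illustrative stand-ins):

```python
import multiprocessing


def worker(name, results):
    # stand-in for worker_code(): record that this "entity" was enabled
    results[name] = 'enabled'


def run_all(names=('camera', 'sequencer')):
    manager = multiprocessing.Manager()
    results = manager.dict()
    jobs = []
    for name in names:
        j = multiprocessing.Process(target=worker, args=(name, results))
        jobs.append(j)
        j.start()
    for j in jobs:
        j.join()
        assert j.exitcode == 0  # a non-zero code would mean the worker raised
    return dict(results)


if __name__ == '__main__':
    print(run_all())
```

Joining before reading `results` guarantees every worker has finished writing; checking `exitcode` is how the parent notices an exception that happened in a child.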

# File: mousemover/mousemover.py (TheIvanovization/mousemover, MIT)

import ctypes
import sys
import time
import yaml
import os

from ui_mainwindow import *
from ui_handler import UI_Handler

# Set HiDPI Scaling
os.environ['QT_AUTO_SCREEN_SCALE_FACTOR'] = '1'


# get the bundle location and file path
def get_file_path(filename):
    bundle_dir = getattr(sys, '_MEIPASS', os.path.abspath(os.path.dirname(__file__)))
    path_to_file = os.path.abspath(os.path.join(bundle_dir, filename))
    return path_to_file


def main():
    # Display the UI
    MainWindow.show()
    # Exit the program
    sys.exit(app.exec_())


if __name__ == "__main__":
    # This is added to fix the App Icon not appearing in the Taskbar
    myappid = 'mycompany.myproduct.subproduct.version'
    ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID(myappid)

    # Load config
    with open(get_file_path('config.yml'), 'r') as stream:
        try:
            config = yaml.safe_load(stream)
        except yaml.YAMLError as exc:
            print(exc)

    # Initialize UI
    app_icon = get_file_path(config["icon_path"])
    app = QtWidgets.QApplication(sys.argv)
    app.setWindowIcon(QtGui.QIcon(app_icon))
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)

    # Initialize UI event handlers
    ui_handler = UI_Handler(ui, MainWindow, config)

    # Attach UI elements to event handlers
    ui.timerEnabled.stateChanged.connect(lambda: ui_handler.enable_timer(ui.timerEnabled))
    ui.randomMovementEnabled.stateChanged.connect(lambda: ui_handler.enable_random_movement(ui.randomMovementEnabled))
    ui.randomDelayEnabled.stateChanged.connect(lambda: ui_handler.enable_random_delay(ui.randomDelayEnabled))
    ui.minimizeToTrayEnabled.stateChanged.connect(lambda: ui_handler.enable_tray_minimize(ui.minimizeToTrayEnabled))
    ui.startButton.clicked.connect(lambda: ui_handler.start_mouse_movement(ui.startButton))
    # ui.stopButton.clicked.connect(lambda: ui_handler.stop_mouse_movement(ui.stopButton))
    ui.stopButton.clicked.connect(ui_handler.stop_mouse_movement)

    # Start the app
    main()

# File: string/Python/1078-occurrences-after-bigram-3.py (ljyljy/LeetCode-Solution-in-Good-Style, MIT)

from typing import List
class Solution:
    def findOcurrences(self, text: str, first: str, second: str) -> List[str]:
        res = []
        combination = first + ' ' + second + ' '
        text = text + ' '
        word_len = len(combination)
        start = 0
        while True:
            idx = text.find(combination, start)
            start = idx
            if idx == -1:
                break
            third_word_end = text.find(' ', start + word_len)
            third_word = text[start + word_len:third_word_end]
            if third_word != '':
                res.append(third_word)
            start = idx + word_len
        return res


if __name__ == '__main__':
    text = "alice is a good girl she is a good student"
    first = "a"
    second = "good"
    combination = first + ' ' + second
    print(text.index(combination))
    solution = Solution()
    res = solution.findOcurrences(text, first, second)
    print(res)
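The scan appends a trailing space to `text` and to the bigram so that the word after each `"first second "` occurrence can be sliced out up to the next space. As a self-contained check (the class is repeated so the snippet runs on its own), the sample sentence yields the two words that follow "a good":

```python
class Solution:
    def findOcurrences(self, text, first, second):
        # scan for "first second " and collect the word that follows each hit
        res = []
        combination = first + ' ' + second + ' '
        text = text + ' '
        word_len = len(combination)
        start = 0
        while True:
            idx = text.find(combination, start)
            start = idx
            if idx == -1:
                break
            third_word_end = text.find(' ', start + word_len)
            third_word = text[start + word_len:third_word_end]
            if third_word != '':
                res.append(third_word)
            start = idx + word_len
        return res


print(Solution().findOcurrences(
    "alice is a good girl she is a good student", "a", "good"))
# → ['girl', 'student']
```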

# File: WDCData/StockFundamentals.py (wangdecheng/QAStrategy, MIT)

import pandas as pd
from QUANTAXIS.QAUtil import (
    DATABASE
)
import pymongo

_table = DATABASE.stock_fundamentals


def init_index():
    _table.create_index([('code', pymongo.ASCENDING), ("date", pymongo.ASCENDING)], unique=True)


def query_fundamentals(codes, date):
    query_condition = {
        'date': date,
        'code': {
            '$in': codes
        }
    }
    item_cursor = _table.find(query_condition)
    items_from_collection = [item for item in item_cursor]
    df_data = pd.DataFrame(items_from_collection).drop(['_id'], axis=1)
    return df_data


if __name__ == "__main__":
    init_index()

# File: tests/ropemporium/test_callme.py (mariuszskon/autorop, MIT)

from .. import *
CWD = "./tests/ropemporium/"
BIN32 = "./callme32"
BIN64 = "./callme"
def test_callme32_local(exploit):
with cwd(CWD):
state = exploit(BIN32, lambda: process(BIN32))
state = turnkey.Classic()(state)
assert assertion.have_shell(state.target)
def test_callme_local(exploit):
with cwd(CWD):
state = exploit(BIN64, lambda: process(BIN64))
state = turnkey.Classic()(state)
assert assertion.have_shell(state.target)
| 24.35 | 54 | 0.655031 | 57 | 487 | 5.491228 | 0.421053 | 0.044728 | 0.102236 | 0.121406 | 0.594249 | 0.594249 | 0.594249 | 0.376997 | 0.376997 | 0.376997 | 0 | 0.041558 | 0.209446 | 487 | 19 | 55 | 25.631579 | 0.771429 | 0 | 0 | 0.428571 | 0 | 0 | 0.078029 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |

# File: Experiment-2/EXAMPLES/Exp-2_ClassTask-10.py (aaryarajoju/cu-py, Unlicense)

# Class Task-10
# Hero Inventory
# Hero's Inventory
# Demonstrates tuple creation

# create an empty tuple
inventory = ()

# treat the tuple as a condition
if not inventory:
    print("You are empty-handed.")

input("\nPress the enter key to continue.")

# create a tuple with some items
inventory = ("sword",
             "armor",
             "shield",
             "healing potion",
             5)

# print the tuple
print("\nThe tuple inventory is:")
print(inventory)

# print each element in the tuple
print("\nYour items:")
for item in inventory:
    print(item)

input("\n\nPress the enter key to exit.")

# File: tests/chain_tests.py (knub/skypyblue, MIT)

from unittest import TestCase
from skypyblue.models import *
from skypyblue.core import Mvine, Marker, ConstraintSystem


class ChainTests(TestCase):
    def setUp(self):
        cs = ConstraintSystem()
        self.first = None
        self.last = None
        prev = None
        n = 50
        # We need to go up to n inclusively, as this is done
        # in the original test as well
        for i in range(n + 1):
            name = "v%s" % i
            v = Variable(name, 0, cs)
            if prev is not None:
                c = ConstraintFactory.equality_constraint(prev, v, Strength.STRONG)
                cs.add_constraint(c)
            if i == 0:
                self.first = v
            if i == n:
                self.last = v
            prev = v
        self.last.stay()

    def test_chain_of_constraints(self):
        self.first.set_value(5)
        self.assertEqual(5, self.last.get_value())

# File: simulacra/tests/unit/youtube_dl/post/test_download_videos.py (vdloo/simulacra, Apache-2.0)

from mock import Mock
from flows.simulacra.youtube_dl.factory import youtube_dl_flow_factory
from flows.simulacra.youtube_dl.post import download_videos
from tests.testcase import TestCase


class TestDownloadVideos(TestCase):
    def setUp(self):
        self.open = self.set_up_patch(
            'flows.simulacra.youtube_dl.post.open'
        )
        self.open.return_value.__exit__ = lambda a, b, c, d: None
        self.file_handle = Mock()
        self.file_handle.readlines.return_value = iter([
            'some_channel1',
            'some_other_channel2'
        ])
        self.open.return_value.__enter__ = lambda x: self.file_handle
        self.post_job = self.set_up_patch(
            'flows.simulacra.youtube_dl.post.post_job'
        )

    def test_download_videos_opens_channels_file(self):
        download_videos('/tmp/some_list_of_yt_channels.txt')
        self.open.assert_called_once_with(
            '/tmp/some_list_of_yt_channels.txt'
        )

    def test_download_videos_reads_lines_from_channels_file(self):
        download_videos('/tmp/some_list_of_yt_channels.txt')
        self.file_handle.readlines.assert_called_once_with()

    def test_download_videos_posts_job_to_download_yt_videos(self):
        download_videos('/tmp/some_list_of_yt_channels.txt')
        expected_channels = ['some_channel1', 'some_other_channel2']
        self.post_job.assert_called_once_with(
            youtube_dl_flow_factory,
            hierarchy=False,
            factory_args=[expected_channels]
        )

    def test_download_videos_draws_hierarchy_of_download_yt_videos(self):
        download_videos('/tmp/some_list_of_yt_channels.txt', hierarchy=True)
        expected_channels = ['some_channel1', 'some_other_channel2']
        self.post_job.assert_called_once_with(
            youtube_dl_flow_factory,
            hierarchy=True,
            factory_args=[expected_channels]
        )

# File: module.py (adamian98/LabelNoiseFlatMinimizers, MIT)

import torch
from torch.optim.lr_scheduler import LambdaLR
import pytorch_lightning as pl
from pytorch_lightning.metrics import Accuracy
from loss import LabelSmoothingLoss
from models import resnet18, vgg16
from bisect import bisect
from torch import nn
from absl import flags
from pl_bolts.optimizers.lr_scheduler import LinearWarmupCosineAnnealingLR
FLAGS = flags.FLAGS
flags.DEFINE_enum("model", "resnet18", ["resnet18", "vgg16"], "model to use")
flags.DEFINE_enum(
"norm_layer",
"groupnorm",
["groupnorm", "batchnorm", "none"],
"normalization layer to use for resnet",
)
flags.DEFINE_integer("group_size", 32, "channels per group")
flags.DEFINE_float("lr", 1, "learning rate")
flags.DEFINE_float("momentum", 0, "momentum")
flags.DEFINE_float("weight_decay", 0, "weight decay")
flags.DEFINE_float(
"smoothing", 0, "probability of flipping a label for label smoothing/label noise"
)
flags.DEFINE_bool(
"label_noise",
False,
"whether to use randomized label noise instead of label smoothing",
)
flags.DEFINE_float("warmup", 0, "length of warmup phase (between 0 and 1)")
flags.DEFINE_float(
"div_start", float("inf"), "factor to divide learning rate by during warmup"
)
flags.DEFINE_float(
"div_end", float("inf"), "factor to divide learning rate by during cosine annealing"
)
flags.DEFINE_bool("freezeBN", False, "whether to freeze batch norm during training")
class CIFAR10Module(pl.LightningModule):
def __init__(self):
super().__init__()
model_dict = {
"resnet18": resnet18,
"vgg16": vgg16,
}
norm_dict = {
"groupnorm": lambda x: nn.GroupNorm(x // FLAGS.group_size, x),
"batchnorm": nn.BatchNorm2d,
"none": nn.Identity,
}
self.criterion = LabelSmoothingLoss(
10, FLAGS.smoothing, label_noise=FLAGS.label_noise
)
self.accuracy = Accuracy()
self.model = model_dict[FLAGS.model](
num_classes=10, norm_layer=norm_dict[FLAGS.norm_layer]
)
def forward(self, batch):
if FLAGS.freezeBN:
self.model.eval()
x, y = batch
output = self.model(x)
loss, trueloss = self.criterion(output, y)
_, predictions = output.max(-1)
accuracy = 100 * predictions.eq(y).sum() / len(y)
return loss, trueloss, accuracy
def training_step(self, batch, batch_nb):
loss, trueloss, accuracy = self.forward(batch)
self.log("loss/train", trueloss, on_epoch=True)
self.log("acc/train", accuracy, on_epoch=True)
return loss
def validation_step(self, batch, batch_nb):
_, trueloss, accuracy = self.forward(batch)
self.log("loss/val", trueloss)
self.log("acc/val", accuracy, prog_bar=True)
def configure_optimizers(self):
optimizer = torch.optim.SGD(
self.model.parameters(),
lr=FLAGS.lr,
weight_decay=FLAGS.weight_decay,
momentum=FLAGS.momentum,
)
        # NOTE: "fullbatch" and "max_epochs" are absl flags defined outside this
        # module (e.g. by the training entry point that imports it).
        if FLAGS.fullbatch:
total_steps = FLAGS.max_epochs
else:
total_steps = FLAGS.max_epochs * len(self.train_dataloader())
scheduler = {
"scheduler": LinearWarmupCosineAnnealingLR(
optimizer,
FLAGS.warmup * total_steps,
total_steps,
FLAGS.lr / FLAGS.div_start,
FLAGS.lr / FLAGS.div_end,
),
"interval": "step",
"name": "learning_rate",
}
return [optimizer], [scheduler]
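The warmup + cosine schedule configured above can be sketched as a pure function of the step index. This is a hypothetical standalone approximation of what `LinearWarmupCosineAnnealingLR` computes, not code from this repository; `base_lr`, `start_lr`, and `end_lr` stand in for `FLAGS.lr`, `FLAGS.lr / FLAGS.div_start`, and `FLAGS.lr / FLAGS.div_end`:

```python
import math

def warmup_cosine_lr(step, base_lr, warmup_steps, total_steps,
                     start_lr=0.0, end_lr=0.0):
    """Linear warmup from start_lr to base_lr, then cosine decay to end_lr."""
    if warmup_steps > 0 and step < warmup_steps:
        # linear warmup phase
        return start_lr + (base_lr - start_lr) * step / warmup_steps
    # cosine annealing phase over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return end_lr + 0.5 * (base_lr - end_lr) * (1 + math.cos(math.pi * progress))
```

With `warmup_steps=10`, `total_steps=100`, and `base_lr=1.0`, the rate rises linearly to 1.0 over the first 10 steps and then decays along a cosine to `end_lr`.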
| 32.972477 | 88 | 0.634391 | 420 | 3,594 | 5.278571 | 0.32381 | 0.05954 | 0.050519 | 0.021651 | 0.116373 | 0.07668 | 0.07668 | 0.07668 | 0.037889 | 0 | 0 | 0.014195 | 0.255147 | 3,594 | 108 | 89 | 33.277778 | 0.813971 | 0 | 0 | 0.030612 | 0 | 0 | 0.18837 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05102 | false | 0 | 0.102041 | 0 | 0.193878 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
76043c7361fbf57d3f196a2bdeefa66f40a7f8a8 | 4,987 | py | Python | test.py | EdHarry/crfasrnn_keras | a3033ec4e49a56924585f38f6c677b510e844962 | [
"MIT"
] | 2 | 2019-02-06T16:49:34.000Z | 2020-01-07T14:15:28.000Z | test.py | EdHarry/crfasrnn_keras | a3033ec4e49a56924585f38f6c677b510e844962 | [
"MIT"
] | null | null | null | test.py | EdHarry/crfasrnn_keras | a3033ec4e49a56924585f38f6c677b510e844962 | [
"MIT"
] | 1 | 2020-01-07T14:15:35.000Z | 2020-01-07T14:15:35.000Z | import sys, os, numpy as np
from PIL import Image, ImageDraw, ImageFont
from PIL.ImageColor import getcolor, getrgb
from PIL.ImageOps import grayscale
from crfrnn_model import get_crfrnn_model_def
GenImSize = 128
FitImSize = 500
def MakeCellImage(x, y, r, i):
im = Image.new(mode='F', size=(GenImSize, GenImSize))
draw = ImageDraw.Draw(im)
draw.ellipse(xy=[x-r, y-r, x+r, y+r], fill='White')
im = np.array(im).astype(np.float32)
im *= (i / 255.0)
return im
def MakeRandomCellImage(n):
im = np.zeros(shape=(GenImSize, GenImSize))
for i in range(n):
radius = np.random.randint(low=-5, high=10) + 10
intensity = (np.random.randn() * 0.1) + 0.5
intensity = max(min(intensity, 1.0), 0.0)
position = np.random.randint(low=radius, high=GenImSize-radius, size=2)
im += MakeCellImage(position[0], position[1], radius, intensity)
im_rand = im + (np.random.randn(im.shape[0], im.shape[1]) * 0.1) + 0.2
im_rand[im_rand < 0] = 0
im_rand[im_rand > 1] = 1
im[im > 0] = 1
im[im < 1] = 0
return im, im_rand
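The clipping and binarization at the tail of `MakeRandomCellImage` can be illustrated element-wise without numpy; a hypothetical scalar sketch (`clamp01_and_binarize` is an illustrative name, not part of this script):

```python
def clamp01_and_binarize(noisy, clean):
    # mirror the tail of MakeRandomCellImage(): clip the noisy image to
    # [0, 1] and reduce the clean image to a 0/1 ground-truth mask
    noisy = [min(max(v, 0.0), 1.0) for v in noisy]
    clean = [1.0 if v > 0 else 0.0 for v in clean]
    return noisy, clean
```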
def MakeInputData(n):
im, im_rand = MakeRandomCellImage(n)
im_rand_ = Image.fromarray(im_rand)
im_rand_ = im_rand_.resize((FitImSize, FitImSize), Image.ANTIALIAS)
im_rand_ = np.array(im_rand_)
im_rand_ = im_rand_.astype(np.float32)
im_rand_ -= np.mean(im_rand_.flatten())
im_rand_ /= np.std(im_rand_.flatten())
# return im, im_rand, im_rand_[np.newaxis, :, :, np.newaxis]
return im, im_rand, im_rand_.reshape(1, FitImSize, FitImSize, 1)
def image_tint(im, tint='#ff0000'):
src = Image.new('RGB', im.size)
src.paste(im)
tr, tg, tb = getrgb(tint)
tl = getcolor(tint, "L") # tint color's overall luminosity
if not tl: tl = 1 # avoid division by zero
tl = float(tl) # compute luminosity preserving tint factors
sr, sg, sb = map(lambda tv: tv/tl, (tr, tg, tb)) # per component adjustments
# create look-up tables to map luminosity to adjusted tint
# (using floating-point math only to compute table)
luts = (list(map(lambda lr: int(lr*sr + 0.5), range(256))) +
list(map(lambda lg: int(lg*sg + 0.5), range(256))) +
list(map(lambda lb: int(lb*sb + 0.5), range(256))))
l = grayscale(src) # 8-bit luminosity version of whole image
if Image.getmodebands(src.mode) < 4:
        merge_args = (src.mode, (l, l, l)) # for RGB version of grayscale
else: # include copy of src image's alpha layer
a = Image.new("L", src.size)
a.putdata(src.getdata(3))
        merge_args = (src.mode, (l, l, l, a)) # for RGBA version of grayscale
luts += range(256) # for 1:1 mapping of copied alpha values
return Image.merge(*merge_args).point(luts)
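The luminosity-preserving lookup tables that `image_tint` builds can be reproduced without PIL. A hypothetical sketch where `tr, tg, tb` and `tl` stand in for the values `getrgb()` and `getcolor(..., "L")` would return (`tint_luts` is an illustrative name):

```python
def tint_luts(tr, tg, tb, tl):
    # one 256-entry table per channel mapping luminosity l to
    # roughly l * (tint_channel / tint_luminosity)
    tl = float(tl) or 1.0  # avoid division by zero, as in image_tint()
    sr, sg, sb = tr / tl, tg / tl, tb / tl
    lut = lambda s: [int(l * s + 0.5) for l in range(256)]
    return lut(sr), lut(sg), lut(sb)
```

For a pure red tint (255, 0, 0) with luminosity 76 (roughly what an "L" conversion gives for red), a pixel at exactly the tint's luminosity maps back to the full channel value.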
saved_model_path = "SegmentationModel_Weights.h5"
nTest = 128
font = ImageFont.truetype("/usr/share/fonts/TTF/Anonymous Pro.ttf", 8, encoding="unic")
label_seg_gt = u"Segmentation (Ground Truth)"
label_im = u"Image"
label_seg = u"Computed Segmentation"
text_width_gt, text_height_gt = font.getsize(label_seg_gt)
text_width_im, text_height_im = font.getsize(label_im)
text_width_seg, text_height_seg = font.getsize(label_seg)
model = get_crfrnn_model_def()
model.load_weights(saved_model_path, by_name=True)
for i in range(nTest):
seg_gt, im, im_input = MakeInputData(n=np.random.randint(low=1, high=5))
probs = model.predict(im_input, verbose=False)[0, :, :, :]
labels = probs.argmax(axis=2)
seg_gt = Image.fromarray((seg_gt * 255).astype("uint8"))
im = Image.fromarray((im * 255).astype("uint8"))
labels = Image.fromarray((labels * 255).astype("uint8"))
labels = labels.resize((GenImSize, GenImSize), Image.NEAREST)
seg_gt = image_tint(seg_gt, tint='#8AFAAB')
im = image_tint(im, tint='#8AA0FA')
labels = image_tint(labels, tint='#F98A8A')
draw_gt = ImageDraw.Draw(seg_gt)
draw_gt.text((int((GenImSize-text_width_gt)/2), 1), label_seg_gt, 'green', font)
draw_im = ImageDraw.Draw(im)
draw_im.text((int((GenImSize-text_width_im)/2), 1), label_im, 'blue', font)
draw_seg = ImageDraw.Draw(labels)
draw_seg.text((int((GenImSize-text_width_seg)/2), 1), label_seg, 'red', font)
full_im = Image.new('RGB', (3 * GenImSize, GenImSize))
full_im.paste(im, (0, 0))
full_im.paste(seg_gt, (GenImSize, 0))
full_im.paste(labels, (2 * GenImSize, 0))
full_im.save("testOutput/test_{}.png".format(i))
sys.stdout.write("Testing: {0:.2f}%".format(100.0 * (i+1) / (nTest)) + '\r')
sys.stdout.flush()
print("Testing: {0:.2f}%".format(100.0 * (i+1) / (nTest)))
# seg_gt.save("testOutput/seg_gt_{}.png".format(i))
# im.save("testOutput/im_{}.png".format(i))
# labels.save("testOutput/seg_{}.png".format(i))
| 41.214876 | 88 | 0.627431 | 747 | 4,987 | 4.037483 | 0.282463 | 0.045756 | 0.023873 | 0.03183 | 0.095822 | 0.070955 | 0.045756 | 0.017905 | 0.017905 | 0 | 0 | 0.032724 | 0.221777 | 4,987 | 120 | 89 | 41.558333 | 0.744396 | 0.121115 | 0 | 0 | 0 | 0 | 0.057248 | 0.018319 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043011 | false | 0 | 0.053763 | 0 | 0.139785 | 0.010753 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
760773975ca8eafe13c9ad7b8a806a2308df94b0 | 885 | py | Python | Python-desenvolvimento/ex081.py | MarcosMaciel-MMRS/Desenvolvimento-python | 2b2fc54788da3ca110d495b9e80a494f2b31fb09 | [
"MIT"
] | null | null | null | Python-desenvolvimento/ex081.py | MarcosMaciel-MMRS/Desenvolvimento-python | 2b2fc54788da3ca110d495b9e80a494f2b31fb09 | [
"MIT"
] | null | null | null | Python-desenvolvimento/ex081.py | MarcosMaciel-MMRS/Desenvolvimento-python | 2b2fc54788da3ca110d495b9e80a494f2b31fb09 | [
"MIT"
] | null | null | null | #Crie um programa que vai ler vários números e colocar em uma lista. Depois disso, mostre:
#A) Quantos números foram digitados.
#B) A lista de valores, ordenada de forma decrescente.
#C) Se o valor 5 foi digitado e está ou não na lista.
valores = list()
cont = 0
while True:  # loop so the user can enter as many values as they want
n = int(input('Digite um número: '))
valores.append(n)
cont += 1
resp = ' '
while resp not in 'SN':
resp = str(input('Deseja continuar: [S/N]')).strip().upper()[0]
if resp == 'N':
break
valores.sort(reverse=True)  # present the values in descending order
print(f'Você digitou {cont} números para criar a lista: {valores}')
if 5 in valores:  # check whether the number 5 is present in the list
print('O número 5 está presente na lista.')
else:
print('O número 5 não está presente na lista.')
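The three answers the exercise asks for reduce to `len`, `sorted(..., reverse=True)`, and the `in` operator. A hypothetical input-free sketch with a fixed list (`valores_demo`) in place of `input()`:

```python
valores_demo = [7, 5, 9, 2]                      # stands in for the typed values
count = len(valores_demo)                        # (A) how many numbers
descending = sorted(valores_demo, reverse=True)  # (B) descending order
has_five = 5 in valores_demo                     # (C) is 5 in the list?
```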
| 38.478261 | 91 | 0.668927 | 139 | 885 | 4.258993 | 0.582734 | 0.047297 | 0.040541 | 0.043919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01173 | 0.229379 | 885 | 22 | 92 | 40.227273 | 0.856305 | 0.416949 | 0 | 0 | 0 | 0 | 0.35729 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7607dd74236868774cdb5ed3e06c4a1a1b59ed04 | 9,649 | py | Python | scripts/analyze_stats.py | GioCurnis/DeepGlobalRegistration | c2a08c87d28b784c3e34fdcf5aed5b98e3ad2a73 | [
"MIT"
] | null | null | null | scripts/analyze_stats.py | GioCurnis/DeepGlobalRegistration | c2a08c87d28b784c3e34fdcf5aed5b98e3ad2a73 | [
"MIT"
] | null | null | null | scripts/analyze_stats.py | GioCurnis/DeepGlobalRegistration | c2a08c87d28b784c3e34fdcf5aed5b98e3ad2a73 | [
"MIT"
] | null | null | null | # Copyright (c) Chris Choy (chrischoy@ai.stanford.edu) and Wei Dong (weidong@andrew.cmu.edu)
#
# Please cite the following papers if you use any part of the code.
# - Christopher Choy, Wei Dong, Vladlen Koltun, Deep Global Registration, CVPR 2020
# - Christopher Choy, Jaesik Park, Vladlen Koltun, Fully Convolutional Geometric Features, ICCV 2019
# - Christopher Choy, JunYoung Gwak, Silvio Savarese, 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks, CVPR 2019
from matplotlib.patches import Rectangle
import matplotlib.pyplot as plt
import numpy as np
import argparse
PROPERTY_IDX_MAP = {
'Recall': 0,
'TE (m)': 1,
'RE (deg)': 2,
'log Time (s)': 3,
'Scene ID': 4
}
def analyze_by_pair(stats, rte_thresh, rre_thresh):
'''
\input stats: (num_methods, num_pairs, num_pairwise_stats=5)
\return valid mean_stats: (num_methods, 4)
4 properties: recall, rte, rre, time
'''
num_methods, num_pairs, num_pairwise_stats = stats.shape
pairwise_stats = np.zeros((num_methods, 4))
for m in range(num_methods):
# Filter valid registrations by rte / rre thresholds
mask_rte = stats[m, :, 1] < rte_thresh
mask_rre = stats[m, :, 2] < rre_thresh
mask_valid = mask_rte * mask_rre
# Recall, RTE, RRE, Time
pairwise_stats[m, 0] = mask_valid.mean()
pairwise_stats[m, 1] = stats[m, mask_valid, 1].mean()
pairwise_stats[m, 2] = stats[m, mask_valid, 2].mean()
pairwise_stats[m, 3] = stats[m, mask_valid, 3].mean()
return pairwise_stats
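The threshold masking inside `analyze_by_pair` can be sketched in pure Python, without numpy; `recall_from_errors` is an illustrative name, not a function in this script. A pair counts as a successful registration only when both its translation and rotation errors fall below the thresholds:

```python
def recall_from_errors(te_list, re_list, rte_thresh, rre_thresh):
    # mirrors mask_rte * mask_rre followed by mask_valid.mean()
    valid = [te < rte_thresh and re < rre_thresh
             for te, re in zip(te_list, re_list)]
    return sum(valid) / len(valid)
```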
def analyze_by_scene(stats, scene_id_list, rte_thresh=0.3, rre_thresh=15):
'''
\input stats: (num_methods, num_pairs, num_pairwise_stats=5)
\return scene_wise mean stats: (num_methods, num_scenes, 4)
4 properties: recall, rte, rre, time
'''
num_methods, num_pairs, num_pairwise_stats = stats.shape
num_scenes = len(scene_id_list)
scene_wise_stats = np.zeros((num_methods, len(scene_id_list), 4))
for m in range(num_methods):
# Filter valid registrations by rte / rre thresholds
mask_rte = stats[m, :, 1] < rte_thresh
mask_rre = stats[m, :, 2] < rre_thresh
mask_valid = mask_rte * mask_rre
for s in scene_id_list:
mask_scene = stats[m, :, 4] == s
# Valid registrations in the scene
mask = mask_scene * mask_valid
# Recall, RTE, RRE, Time
scene_wise_stats[m, s, 0] = 0 if np.sum(mask_scene) == 0 else float(
np.sum(mask)) / float(np.sum(mask_scene))
scene_wise_stats[m, s, 1] = stats[m, mask, 1].mean()
scene_wise_stats[m, s, 2] = stats[m, mask, 2].mean()
scene_wise_stats[m, s, 3] = stats[m, mask, 3].mean()
return scene_wise_stats
def plot_precision_recall_curves(stats, method_names, rte_precisions, rre_precisions,
output_postfix, cmap):
'''
\input stats: (num_methods, num_pairs, 5)
\input method_names: (num_methods) string, shown as xticks
'''
num_methods, num_pairs, _ = stats.shape
rre_precision_curves = np.zeros((num_methods, len(rre_precisions)))
rte_precision_curves = np.zeros((num_methods, len(rte_precisions)))
for i, rre_thresh in enumerate(rre_precisions):
pairwise_stats = analyze_by_pair(stats, rte_thresh=np.inf, rre_thresh=rre_thresh)
rre_precision_curves[:, i] = pairwise_stats[:, 0]
for i, rte_thresh in enumerate(rte_precisions):
pairwise_stats = analyze_by_pair(stats, rte_thresh=rte_thresh, rre_thresh=np.inf)
rte_precision_curves[:, i] = pairwise_stats[:, 0]
fig = plt.figure(figsize=(10, 3))
ax1 = fig.add_subplot(1, 2, 1, aspect=3.0 / np.max(rte_precisions))
ax2 = fig.add_subplot(1, 2, 2, aspect=3.0 / np.max(rre_precisions))
for m, name in enumerate(method_names):
alpha = rre_precision_curves[m].mean()
alpha = 1.0 if alpha > 0 else 0.0
ax1.plot(rre_precisions, rre_precision_curves[m], color=cmap[m], alpha=alpha)
ax2.plot(rte_precisions, rte_precision_curves[m], color=cmap[m], alpha=alpha)
ax1.set_ylabel('Recall')
ax1.set_xlabel('Rotation (deg)')
ax1.set_ylim((0.0, 1.0))
ax2.set_xlabel('Translation (m)')
ax2.set_ylim((0.0, 1.0))
ax2.legend(method_names, loc='center left', bbox_to_anchor=(1, 0.5))
ax1.grid()
ax2.grid()
plt.tight_layout()
plt.savefig('{}_{}.png'.format('precision_recall', output_postfix))
plt.close(fig)
def plot_scene_wise_stats(scene_wise_stats, method_names, scene_names, property_name,
ylim, output_postfix, cmap):
'''
\input scene_wise_stats: (num_methods, num_scenes, 4)
\input method_names: (num_methods) string, shown as xticks
\input scene_names: (num_scenes) string, shown as legends
\input property_name: string, shown as ylabel
'''
num_methods, num_scenes, _ = scene_wise_stats.shape
assert len(method_names) == num_methods
assert len(scene_names) == num_scenes
# Initialize figure
fig = plt.figure(figsize=(14, 3))
ax = fig.add_subplot(1, 1, 1)
# Add some paddings
w = 1.0 / (num_methods + 2)
# Rightmost bar
x = np.arange(0, num_scenes) - 0.5 * w * num_methods
for m in range(num_methods):
m_stats = scene_wise_stats[m, :, PROPERTY_IDX_MAP[property_name]]
valid = not (np.logical_and.reduce(np.isnan(m_stats))
or np.logical_and.reduce(m_stats == 0))
alpha = 1.0 if valid else 0.0
ax.bar(x + m * w, m_stats, w, color=cmap[m], alpha=alpha)
plt.ylim(ylim)
plt.xlim((0 - w * num_methods, num_scenes))
plt.ylabel(property_name)
plt.xticks(np.arange(0, num_scenes), tuple(scene_names))
ax.legend(method_names, loc='center left', bbox_to_anchor=(1, 0.5))
plt.tight_layout()
plt.grid()
plt.savefig('{}_{}.png'.format(property_name, output_postfix))
plt.close(fig)
def plot_pareto_frontier(pairwise_stats, method_names, cmap):
recalls = pairwise_stats[:, 0]
times = 1.0 / pairwise_stats[:, 3]
ind = np.argsort(times)
offset = 0.05
plt.rcParams.update({'font.size': 30})
fig = plt.figure(figsize=(20, 12))
ax = fig.add_subplot(111)
ax.set_xlabel('Number of registrations per second (log scale)')
ax.set_xscale('log')
xmin = np.power(10, -2.2)
xmax = np.power(10, 1.5)
ax.set_xlim(xmin, xmax)
ax.set_ylabel('Registration recall')
ax.set_ylim(-offset, 1)
ax.set_yticks(np.arange(0, 1, step=0.2))
plots = [None for m in ind]
max_gain = -1
for m in ind[::-1]:
# 8, 9: our methods
if (recalls[m] > max_gain):
max_gain = recalls[m]
ax.add_patch(
Rectangle((0, -offset),
times[m],
recalls[m] + offset,
facecolor=(0.94, 0.94, 0.94)))
    plot, = ax.plot(times[m], recalls[m], 'o', c=cmap[m], markersize=30)  # use the cmap argument, not the global colors list
plots[m] = plot
ax.legend(plots, method_names, loc='center left', bbox_to_anchor=(1, 0.5))
plt.tight_layout()
plt.savefig('frontier.png')
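The `max_gain` scan above highlights exactly the Pareto-optimal methods: visiting methods from fastest to slowest, a method is kept when its recall beats every faster method's recall. A hypothetical standalone sketch (`pareto_front` is an illustrative name):

```python
def pareto_front(speeds, recalls):
    # visit methods in descending speed, mirroring ind[::-1] above, and
    # keep those that raise the running maximum recall (max_gain)
    order = sorted(range(len(speeds)), key=lambda i: speeds[i], reverse=True)
    front, best = [], float('-inf')
    for i in order:
        if recalls[i] > best:
            best = recalls[i]
            front.append(i)
    return front
```

Method 2 below is dominated: method 1 is both faster and more accurate, so only methods 0 and 1 sit on the frontier.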
if __name__ == '__main__':
'''
Input .npz file to analyze:
\prop npz['stats']: (num_methods, num_pairs, num_pairwise_stats=5)
5 pairwise stats properties consist of
- \bool success: decided by evaluation thresholds, will be ignored in this script
- \float rte: relative translation error (in cm)
- \float rre: relative rotation error (in deg)
- \float time: registration time for the pair (in ms)
- \int scene_id: specific for 3DMatch test sets (8 scenes in total)
\prop npz['names']: (num_methods)
Corresponding method name stored in string
'''
# Setup fonts
from matplotlib import rc
rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']})
rc('text', usetex=False)
# Parse arguments
parser = argparse.ArgumentParser()
parser.add_argument('npz', help='path to the npz file')
parser.add_argument('--output_postfix', default='', help='postfix of the output')
parser.add_argument('--end_method_index',
default=1000,
type=int,
help='reserved only for making slides')
args = parser.parse_args()
# Load npz file with aformentioned format
npz = np.load(args.npz)
stats = npz['stats']
print(stats.shape)
# Reserved only for making slides, will be skipped by default
#stats[args.end_method_index:, :, 1] = np.inf
#stats[args.end_method_index:, :, 2] = np.inf
method_names = npz['names']
scene_names = [
'Kitchen', 'Home1', 'Home2', 'Hotel1', 'Hotel2', 'Hotel3', 'Study', 'Lab'
]
cmap = plt.get_cmap('tab20b')
colors = [cmap(i) for i in np.linspace(0, 1, len(method_names))]
colors.reverse()
# Plot scene-wise bar charts
scene_wise_stats = analyze_by_scene(stats,
range(len(scene_names)),
rte_thresh=0.3,
rre_thresh=15)
plot_scene_wise_stats(scene_wise_stats, method_names, scene_names, 'Recall',
(0.0, 1.0), args.output_postfix, colors)
plot_scene_wise_stats(scene_wise_stats, method_names, scene_names, 'TE (m)',
(0.0, 0.3), args.output_postfix, colors)
plot_scene_wise_stats(scene_wise_stats, method_names, scene_names, 'RE (deg)',
(0.0, 15.0), args.output_postfix, colors)
# Plot rte/rre - recall curves
plot_precision_recall_curves(stats,
method_names,
rre_precisions=np.arange(0, 15, 0.05),
rte_precisions=np.arange(0, 0.3, 0.005),
output_postfix=args.output_postfix,
cmap=colors)
pairwise_stats = analyze_by_pair(stats, rte_thresh=0.3, rre_thresh=15)
plot_pareto_frontier(pairwise_stats, method_names, cmap=colors)
| 35.215328 | 133 | 0.658203 | 1,431 | 9,649 | 4.22362 | 0.205451 | 0.041363 | 0.041694 | 0.020847 | 0.369623 | 0.316347 | 0.274322 | 0.231966 | 0.192422 | 0.154368 | 0 | 0.02882 | 0.212457 | 9,649 | 273 | 134 | 35.344322 | 0.766548 | 0.166857 | 0 | 0.09697 | 0 | 0 | 0.063537 | 0 | 0 | 0 | 0 | 0 | 0.012121 | 1 | 0.030303 | false | 0 | 0.030303 | 0 | 0.072727 | 0.006061 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7609f7291af920ace67f27f5a7e4df4849da540f | 42,165 | py | Python | rcnn/rcnn_heads.py | Edward-Sun/TSP-Detection | da63a9f23053df22629d1ad1e2c93e548689ba84 | [
"Apache-2.0"
] | 37 | 2021-10-12T13:05:00.000Z | 2022-03-22T02:13:02.000Z | rcnn/rcnn_heads.py | Edward-Sun/TSP-Detection | da63a9f23053df22629d1ad1e2c93e548689ba84 | [
"Apache-2.0"
] | 2 | 2021-11-01T09:19:55.000Z | 2021-12-16T07:31:11.000Z | rcnn/rcnn_heads.py | Edward-Sun/TSP-Detection | da63a9f23053df22629d1ad1e2c93e548689ba84 | [
"Apache-2.0"
] | 1 | 2021-10-15T00:40:17.000Z | 2021-10-15T00:40:17.000Z | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import inspect
from typing import Dict, List, Optional, Tuple, Union
import torch
import copy
from torch import nn
import math
import numpy as np
import torch.nn.functional as F
from torch.autograd.function import Function
from detectron2.modeling.box_regression import Box2BoxTransform
from detectron2.config import configurable
from detectron2.layers import batched_nms, ShapeSpec
from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou
from detectron2.modeling.roi_heads.box_head import build_box_head
from detectron2.modeling.roi_heads.keypoint_head import build_keypoint_head
from detectron2.modeling.roi_heads.mask_head import build_mask_head
from detectron2.modeling.proposal_generator.proposal_utils import add_ground_truth_to_proposals, add_ground_truth_to_proposals_single_image
from detectron2.utils.events import get_event_storage
from detectron2.modeling.roi_heads.roi_heads import select_foreground_proposals, select_proposals_with_visible_keypoints, ROIHeads
from detectron2.modeling.roi_heads import ROI_HEADS_REGISTRY
from detectron2.modeling.matcher import Matcher
from detectron2.modeling.sampling import subsample_labels
from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers
from detectron2.utils.env import TORCH_VERSION
from .mypooler import MyROIPooler
from .my_fast_rcnn_output import MyFastRCNNOutputLayers
__all__ = ["TransformerROIHeads", "CascadeTransformerROIHeads"]
def box_cxcywh_to_xyxy(x):
x_c, y_c, w, h = x.unbind(-1)
b = [(x_c - 0.5 * w), (y_c - 0.5 * h),
(x_c + 0.5 * w), (y_c + 0.5 * h)]
return torch.stack(b, dim=-1)
def box_xyxy_to_cxcywh(x):
x0, y0, x1, y1 = x.unbind(-1)
b = [(x0 + x1) / 2, (y0 + y1) / 2,
(x1 - x0), (y1 - y0)]
return torch.stack(b, dim=-1)
def add_noise_to_boxes(boxes):
cxcy_boxes = box_xyxy_to_cxcywh(boxes)
resize_factor = torch.rand(cxcy_boxes.shape, device=cxcy_boxes.device)
new_cxcy = cxcy_boxes[..., :2] + cxcy_boxes[..., 2:] * (resize_factor[..., :2] - 0.5) * 0.2
assert (cxcy_boxes[..., 2:] > 0).all().item()
new_wh = cxcy_boxes[..., 2:] * (0.8 ** (resize_factor[..., 2:] * 2 - 1))
assert (new_wh > 0).all().item()
new_cxcy_boxes = torch.cat([new_cxcy, new_wh], dim=-1)
new_boxes = box_cxcywh_to_xyxy(new_cxcy_boxes)
return new_boxes
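A scalar sketch of the two box conversions above (hypothetical helper names, plain floats instead of tensors); the round trip xyxy → cxcywh → xyxy is exact for positive widths and heights:

```python
def xyxy_to_cxcywh(x0, y0, x1, y1):
    # corner form -> (center_x, center_y, width, height)
    return ((x0 + x1) / 2, (y0 + y1) / 2, x1 - x0, y1 - y0)

def cxcywh_to_xyxy(cx, cy, w, h):
    # (center_x, center_y, width, height) -> corner form
    return (cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h)
```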
@ROI_HEADS_REGISTRY.register()
class TransformerROIHeads(ROIHeads):
"""
It's "standard" in a sense that there is no ROI transform sharing
or feature sharing between tasks.
Each head independently processes the input features by each head's
own pooler and head.
This class is used by most models, such as FPN and C5.
To implement more models, you can subclass it and implement a different
:meth:`forward()` or a head.
"""
@configurable
def __init__(
self,
*,
box_in_features: List[str],
box_pooler: MyROIPooler,
box_head: nn.Module,
box_predictor: nn.Module,
mask_in_features: Optional[List[str]] = None,
mask_pooler: Optional[MyROIPooler] = None,
mask_head: Optional[nn.Module] = None,
keypoint_in_features: Optional[List[str]] = None,
keypoint_pooler: Optional[MyROIPooler] = None,
keypoint_head: Optional[nn.Module] = None,
train_on_pred_boxes: bool = False,
add_noise_to_proposals: bool = False,
encoder_feature: Optional[str] = None,
random_sample_size: bool = False,
random_sample_size_upper_bound: float = 1.0,
random_sample_size_lower_bound: float = 0.8,
random_proposal_drop: bool = False,
random_proposal_drop_upper_bound: float = 1.0,
random_proposal_drop_lower_bound: float = 0.8,
max_proposal_per_batch: int = 0,
visualize: bool = False,
**kwargs
):
"""
NOTE: this interface is experimental.
Args:
box_in_features (list[str]): list of feature names to use for the box head.
box_pooler (ROIPooler): pooler to extra region features for box head
box_head (nn.Module): transform features to make box predictions
box_predictor (nn.Module): make box predictions from the feature.
Should have the same interface as :class:`FastRCNNOutputLayers`.
mask_in_features (list[str]): list of feature names to use for the mask head.
None if not using mask head.
mask_pooler (ROIPooler): pooler to extra region features for mask head
mask_head (nn.Module): transform features to make mask predictions
keypoint_in_features, keypoint_pooler, keypoint_head: similar to ``mask*``.
train_on_pred_boxes (bool): whether to use proposal boxes or
predicted boxes from the box head to train other heads.
"""
super().__init__(**kwargs)
# keep self.in_features for backward compatibility
self.in_features = self.box_in_features = box_in_features
self.box_pooler = box_pooler
self.box_head = box_head
self.box_predictor = box_predictor
self.mask_on = mask_in_features is not None
if self.mask_on:
self.mask_in_features = mask_in_features
self.mask_pooler = mask_pooler
self.mask_head = mask_head
self.keypoint_on = keypoint_in_features is not None
if self.keypoint_on:
self.keypoint_in_features = keypoint_in_features
self.keypoint_pooler = keypoint_pooler
self.keypoint_head = keypoint_head
self.train_on_pred_boxes = train_on_pred_boxes
self.add_noise_to_proposals = add_noise_to_proposals
self.encoder_feature = encoder_feature
self.random_sample_size = random_sample_size
self.random_proposal_drop = random_proposal_drop
self.max_proposal_per_batch = max_proposal_per_batch
self.random_proposal_drop_upper_bound = random_proposal_drop_upper_bound
self.random_proposal_drop_lower_bound = random_proposal_drop_lower_bound
self.random_sample_size_upper_bound = random_sample_size_upper_bound
self.random_sample_size_lower_bound = random_sample_size_lower_bound
self.visualize = visualize
@classmethod
def from_config(cls, cfg, input_shape):
ret = super().from_config(cfg)
ret["visualize"] = cfg.MODEL.VISUALIZE
ret["train_on_pred_boxes"] = cfg.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES
ret["add_noise_to_proposals"] = cfg.MODEL.ROI_BOX_HEAD.ADD_NOISE_TO_PROPOSALS
ret["encoder_feature"] = cfg.MODEL.ROI_BOX_HEAD.ENCODER_FEATURE
ret["random_sample_size"] = cfg.MODEL.ROI_BOX_HEAD.RANDOM_SAMPLE_SIZE
ret["random_sample_size_upper_bound"] = cfg.MODEL.ROI_BOX_HEAD.RANDOM_SAMPLE_SIZE_UPPER_BOUND
ret["random_sample_size_lower_bound"] = cfg.MODEL.ROI_BOX_HEAD.RANDOM_SAMPLE_SIZE_LOWER_BOUND
ret["random_proposal_drop"] = cfg.MODEL.ROI_BOX_HEAD.RANDOM_PROPOSAL_DROP
ret["random_proposal_drop_upper_bound"] = cfg.MODEL.ROI_BOX_HEAD.RANDOM_PROPOSAL_DROP_UPPER_BOUND
ret["random_proposal_drop_lower_bound"] = cfg.MODEL.ROI_BOX_HEAD.RANDOM_PROPOSAL_DROP_LOWER_BOUND
ret["max_proposal_per_batch"] = cfg.MODEL.ROI_BOX_HEAD.MAX_PROPOSAL_PER_BATCH
# Subclasses that have not been updated to use from_config style construction
# may have overridden _init_*_head methods. In this case, those overridden methods
# will not be classmethods and we need to avoid trying to call them here.
# We test for this with ismethod which only returns True for bound methods of cls.
# Such subclasses will need to handle calling their overridden _init_*_head methods.
if inspect.ismethod(cls._init_box_head):
ret.update(cls._init_box_head(cfg, input_shape))
if inspect.ismethod(cls._init_mask_head):
ret.update(cls._init_mask_head(cfg, input_shape))
if inspect.ismethod(cls._init_keypoint_head):
ret.update(cls._init_keypoint_head(cfg, input_shape))
ret["proposal_matcher"] = Matcher(
cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS,
cfg.MODEL.ROI_HEADS.IOU_LABELS,
allow_low_quality_matches=False,
)
return ret
@classmethod
def _init_box_head(cls, cfg, input_shape):
# fmt: off
in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features)
sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
# fmt: on
# If StandardROIHeads is applied on multiple feature maps (as in FPN),
# then we share the same predictors and therefore the channel counts must be the same
in_channels = [input_shape[f].channels for f in in_features]
# Check all channel counts are equal
assert len(set(in_channels)) == 1, in_channels
in_channels = in_channels[0]
box_pooler = MyROIPooler(
output_size=pooler_resolution,
scales=pooler_scales,
sampling_ratio=sampling_ratio,
pooler_type=pooler_type,
)
# Here we split "box head" and "box predictor", which is mainly due to historical reasons.
# They are used together so the "box predictor" layers should be part of the "box head".
# New subclasses of ROIHeads do not need "box predictor"s.
box_head = build_box_head(
cfg, ShapeSpec(channels=in_channels, height=pooler_resolution, width=pooler_resolution)
)
box_predictor = MyFastRCNNOutputLayers(cfg, box_head.output_shape)
return {
"box_in_features": in_features,
"box_pooler": box_pooler,
"box_head": box_head,
"box_predictor": box_predictor,
}
@classmethod
def _init_mask_head(cls, cfg, input_shape):
if not cfg.MODEL.MASK_ON:
return {}
else:
raise NotImplementedError
@classmethod
def _init_keypoint_head(cls, cfg, input_shape):
if not cfg.MODEL.KEYPOINT_ON:
return {}
else:
raise NotImplementedError
def forward(
self,
images: ImageList,
features: Dict[str, torch.Tensor],
proposals: List[Instances],
targets: Optional[List[Instances]] = None,
) -> Tuple[List[Instances], Dict[str, torch.Tensor]]:
"""
See :class:`ROIHeads.forward`.
"""
del images
if self.training:
assert targets
proposals = self.label_and_sample_proposals(proposals, targets)
if self.training:
losses = self._forward_box(features, proposals, targets)
# Usually the original proposals used by the box head are used by the mask, keypoint
# heads. But when `self.train_on_pred_boxes is True`, proposals will contain boxes
# predicted by the box head.
losses.update(self._forward_mask(features, proposals))
losses.update(self._forward_keypoint(features, proposals))
return proposals, losses
else:
if self.visualize:
pred_instances, attention_maps = self._forward_box(features, proposals)
else:
attention_maps = None
pred_instances = self._forward_box(features, proposals)
# During inference cascaded prediction is used: the mask and keypoints heads are only
# applied to the top scoring box detections.
pred_instances = self.forward_with_given_boxes(features, pred_instances)
if self.visualize:
for instance, proposal in zip(pred_instances, proposals):
instance._fields["proposal"] = proposal.proposal_boxes.tensor
for instance, attention in zip(pred_instances, attention_maps):
instance._fields["attention"] = attention
return pred_instances, {}
def forward_with_given_boxes(
self, features: Dict[str, torch.Tensor], instances: List[Instances]
) -> List[Instances]:
"""
Use the given boxes in `instances` to produce other (non-box) per-ROI outputs.
This is useful for downstream tasks where a box is known, but need to obtain
other attributes (outputs of other heads).
Test-time augmentation also uses this.
Args:
features: same as in `forward()`
instances (list[Instances]): instances to predict other outputs. Expect the keys
"pred_boxes" and "pred_classes" to exist.
Returns:
instances (list[Instances]):
the same `Instances` objects, with extra
fields such as `pred_masks` or `pred_keypoints`.
"""
assert not self.training
assert instances[0].has("pred_boxes") and instances[0].has("pred_classes")
instances = self._forward_mask(features, instances)
instances = self._forward_keypoint(features, instances)
return instances
def _forward_box(
self, features: Dict[str, torch.Tensor], proposals: List[Instances], targets=None
):
"""
Forward logic of the box prediction branch. If `self.train_on_pred_boxes is True`,
the function puts predicted boxes in the `proposal_boxes` field of `proposals` argument.
Args:
features (dict[str, Tensor]): mapping from feature map names to tensor.
Same as in :meth:`ROIHeads.forward`.
proposals (list[Instances]): the per-image object proposals with
their matching ground truth.
Each has fields "proposal_boxes", and "objectness_logits",
"gt_classes", "gt_boxes".
Returns:
In training, a dict of losses.
In inference, a list of `Instances`, the predicted instances.
"""
box_features = [features[f] for f in self.box_in_features]
padded_box_features, dec_mask, inds_to_padded_inds = (
self.box_pooler(box_features, [x.proposal_boxes for x in proposals]))
enc_feature = None
enc_mask = None
if self.box_head.use_encoder_decoder:
enc_feature = features[self.encoder_feature]
b = len(proposals)
h = max([x.image_size[0] for x in proposals])
w = max([x.image_size[1] for x in proposals])
enc_mask = torch.ones((b, h, w), dtype=torch.bool, device=padded_box_features.device)
for c, image_size in enumerate([x.image_size for x in proposals]):
enc_mask[c, :image_size[0], :image_size[1]] = False
names = ["res1", "res2", "res3", "res4", "res5"]
if self.encoder_feature == "p6":
names.append("p6")
for name in names:
if name == "res1":
target_shape = ((h+1)//2, (w+1)//2)
else:
x = features[name]
target_shape = x.shape[-2:]
m = enc_mask
enc_mask = F.interpolate(m[None].float(), size=target_shape).to(torch.bool)[0]
max_num_proposals = padded_box_features.shape[1]
normalized_proposals = []
for x in proposals:
gt_box = x.proposal_boxes.tensor
img_h, img_w = x.image_size
gt_box = gt_box / torch.tensor([img_w, img_h, img_w, img_h],
dtype=torch.float32, device=gt_box.device)
gt_box = torch.cat([box_xyxy_to_cxcywh(gt_box), gt_box], dim=-1)
gt_box = F.pad(gt_box, [0, 0, 0, max_num_proposals - gt_box.shape[0]])
normalized_proposals.append(gt_box)
normalized_proposals = torch.stack(normalized_proposals, dim=0)
if self.visualize:
padded_box_features, attention_maps = self.box_head(enc_feature, enc_mask,
padded_box_features, dec_mask,
normalized_proposals)
else:
attention_maps = None
padded_box_features = self.box_head(enc_feature, enc_mask, padded_box_features, dec_mask, normalized_proposals)
box_features = padded_box_features[inds_to_padded_inds]
predictions = self.box_predictor(box_features)
if self.training:
losses = self.box_predictor.losses(predictions, proposals, targets)
# proposals is modified in-place below, so losses must be computed first.
if self.train_on_pred_boxes:
with torch.no_grad():
pred_boxes = self.box_predictor.predict_boxes_for_gt_classes(
predictions, proposals
)
for proposals_per_image, pred_boxes_per_image in zip(proposals, pred_boxes):
proposals_per_image.proposal_boxes = Boxes(pred_boxes_per_image)
return losses
else:
pred_instances, _ = self.box_predictor.inference(predictions, proposals)
if self.visualize:
return pred_instances, attention_maps
else:
return pred_instances
def _forward_mask(
self, features: Dict[str, torch.Tensor], instances: List[Instances]
) -> Union[Dict[str, torch.Tensor], List[Instances]]:
"""
Forward logic of the mask prediction branch.
Args:
features (dict[str, Tensor]): mapping from feature map names to tensor.
Same as in :meth:`ROIHeads.forward`.
instances (list[Instances]): the per-image instances to train/predict masks.
In training, they can be the proposals.
In inference, they can be the predicted boxes.
Returns:
In training, a dict of losses.
In inference, update `instances` with new fields "pred_masks" and return it.
"""
if not self.mask_on:
return {} if self.training else instances
else:
raise NotImplementedError
def _forward_keypoint(
self, features: Dict[str, torch.Tensor], instances: List[Instances]
) -> Union[Dict[str, torch.Tensor], List[Instances]]:
"""
Forward logic of the keypoint prediction branch.
Args:
features (dict[str, Tensor]): mapping from feature map names to tensor.
Same as in :meth:`ROIHeads.forward`.
instances (list[Instances]): the per-image instances to train/predict keypoints.
In training, they can be the proposals.
In inference, they can be the predicted boxes.
Returns:
In training, a dict of losses.
In inference, update `instances` with new fields "pred_keypoints" and return it.
"""
if not self.keypoint_on:
return {} if self.training else instances
else:
raise NotImplementedError
@torch.no_grad()
def label_and_sample_proposals(
self, proposals: List[Instances], targets: List[Instances]
) -> List[Instances]:
"""
Prepare some proposals to be used to train the ROI heads.
It performs box matching between `proposals` and `targets`, and assigns
training labels to the proposals.
It returns ``self.batch_size_per_image`` random samples from proposals and groundtruth
boxes, with a fraction of positives that is no larger than
``self.positive_fraction``.
Args:
See :meth:`ROIHeads.forward`
Returns:
list[Instances]:
length `N` list of `Instances`s containing the proposals
sampled for training. Each `Instances` has the following fields:
- proposal_boxes: the proposal boxes
- gt_boxes: the ground-truth box that the proposal is assigned to
(this is only meaningful if the proposal has a label > 0; if label = 0
then the ground-truth box is random)
                Other fields such as "gt_classes" and "gt_masks" that are included in `targets` are also set.
"""
gt_boxes = [copy.deepcopy(x.gt_boxes) for x in targets]
# Augment proposals with ground-truth boxes.
# In the case of learned proposals (e.g., RPN), when training starts
# the proposals will be low quality due to random initialization.
# It's possible that none of these initial
# proposals have high enough overlap with the gt objects to be used
# as positive examples for the second stage components (box head,
# cls head, mask head). Adding the gt boxes to the set of proposals
# ensures that the second stage components will have some positive
# examples from the start of training. For RPN, this augmentation improves
# convergence and empirically improves box AP on COCO by about 0.5
# points (under one tested configuration).
proposals_with_gt = []
num_fg_samples = []
num_bg_samples = []
for proposals_per_image, targets_per_image, gt_boxes_per_image in zip(proposals, targets, gt_boxes):
has_gt = len(targets_per_image) > 0
if self.add_noise_to_proposals:
proposals_per_image.proposal_boxes.tensor = (
add_noise_to_boxes(proposals_per_image.proposal_boxes.tensor))
match_quality_matrix = pairwise_iou(
targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
)
matched_idxs, matched_labels = self.proposal_matcher(match_quality_matrix)
if not torch.any(matched_labels == 1) and self.proposal_append_gt:
gt_boxes_per_image.tensor = add_noise_to_boxes(gt_boxes_per_image.tensor)
proposals_per_image = add_ground_truth_to_proposals_single_image(gt_boxes_per_image,
proposals_per_image)
match_quality_matrix = pairwise_iou(
targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
)
matched_idxs, matched_labels = self.proposal_matcher(match_quality_matrix)
sampled_idxs, gt_classes = self._sample_proposals(
matched_idxs, matched_labels, targets_per_image.gt_classes)
# Set target attributes of the sampled proposals:
proposals_per_image = proposals_per_image[sampled_idxs]
proposals_per_image.gt_classes = gt_classes
# We index all the attributes of targets that start with "gt_"
# and have not been added to proposals yet (="gt_classes").
if has_gt:
sampled_targets = matched_idxs[sampled_idxs]
                # NOTE: here the indexing wastes some compute, because heads
# like masks, keypoints, etc, will filter the proposals again,
# (by foreground/background, or number of keypoints in the image, etc)
# so we essentially index the data twice.
for (trg_name, trg_value) in targets_per_image.get_fields().items():
if trg_name.startswith("gt_") and not proposals_per_image.has(trg_name):
proposals_per_image.set(trg_name, trg_value[sampled_targets])
proposals_per_image.set('gt_idxs', sampled_targets)
else:
gt_boxes = Boxes(
targets_per_image.gt_boxes.tensor.new_zeros((len(sampled_idxs), 4))
)
proposals_per_image.gt_boxes = gt_boxes
proposals_per_image.set('gt_idxs', torch.zeros_like(sampled_idxs))
num_bg_samples.append((gt_classes == self.num_classes).sum().item())
num_fg_samples.append(gt_classes.numel() - num_bg_samples[-1])
proposals_with_gt.append(proposals_per_image)
# Log the number of fg/bg samples that are selected for training ROI heads
storage = get_event_storage()
storage.put_scalar("roi_head/num_fg_samples", np.mean(num_fg_samples))
storage.put_scalar("roi_head/num_bg_samples", np.mean(num_bg_samples))
return proposals_with_gt
def _sample_proposals(
self, matched_idxs: torch.Tensor, matched_labels: torch.Tensor, gt_classes: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Based on the matching between N proposals and M groundtruth,
sample the proposals and set their classification labels.
Args:
matched_idxs (Tensor): a vector of length N, each is the best-matched
gt index in [0, M) for each proposal.
matched_labels (Tensor): a vector of length N, the matcher's label
(one of cfg.MODEL.ROI_HEADS.IOU_LABELS) for each proposal.
gt_classes (Tensor): a vector of length M.
Returns:
Tensor: a vector of indices of sampled proposals. Each is in [0, N).
Tensor: a vector of the same length, the classification label for
each sampled proposal. Each sample is labeled as either a category in
[0, num_classes) or the background (num_classes).
"""
if self.random_sample_size:
diff = self.random_sample_size_upper_bound - self.random_sample_size_lower_bound
sample_factor = self.random_sample_size_upper_bound - np.random.rand(1)[0] * diff
nms_topk = int(matched_idxs.shape[0] * sample_factor)
matched_idxs = matched_idxs[:nms_topk]
matched_labels = matched_labels[:nms_topk]
has_gt = gt_classes.numel() > 0
# Get the corresponding GT for each proposal
if has_gt:
gt_classes = gt_classes[matched_idxs]
# Label unmatched proposals (0 label from matcher) as background (label=num_classes)
gt_classes[matched_labels == 0] = self.num_classes
# Label ignore proposals (-1 label)
gt_classes[matched_labels == -1] = -1
else:
gt_classes = torch.zeros_like(matched_idxs) + self.num_classes
sampled_fg_idxs, sampled_bg_idxs = subsample_labels(
gt_classes, self.batch_size_per_image, self.positive_fraction, self.num_classes
)
sampled_idxs = torch.cat([sampled_fg_idxs, sampled_bg_idxs], dim=0)
if self.random_proposal_drop:
diff = self.random_proposal_drop_upper_bound - self.random_proposal_drop_lower_bound
sample_factor = self.random_proposal_drop_upper_bound - np.random.rand(1)[0] * diff
nms_topk = int(sampled_idxs.shape[0] * sample_factor)
subsample_idxs = np.random.choice(sampled_idxs.shape[0], nms_topk, replace=False)
subsample_idxs = torch.from_numpy(subsample_idxs).to(sampled_idxs.device)
sampled_idxs = sampled_idxs[subsample_idxs]
return sampled_idxs, gt_classes[sampled_idxs]
class _ScaleGradient(Function):
@staticmethod
def forward(ctx, input, scale):
ctx.scale = scale
return input
@staticmethod
def backward(ctx, grad_output):
return grad_output * ctx.scale, None
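# The equivalence that `_ScaleGradient` relies on, checked numerically in a
# torch-free standalone sketch (toy quadratic losses, hypothetical names, not
# the library's API): averaging N per-stage losses gives the same gradient on
# the shared features as summing the losses and scaling the backward gradient
# by 1/N.
def _scale_gradient_demo(x=2.0, n=3, eps=1e-6):
    stages = [lambda v, k=k: (v + k) ** 2 for k in range(n)]  # toy per-stage losses

    def mean_loss(v):
        return sum(f(v) for f in stages) / n

    # numeric gradient of the averaged loss
    g_mean = (mean_loss(x + eps) - mean_loss(x - eps)) / (2 * eps)
    # sum the per-stage gradients, then scale by 1/n (what _ScaleGradient does)
    g_scaled = sum((f(x + eps) - f(x - eps)) / (2 * eps) for f in stages) / n
    return g_mean, g_scaled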
@ROI_HEADS_REGISTRY.register()
class CascadeTransformerROIHeads(TransformerROIHeads):
"""
    It's "standard" in the sense that there is no ROI transform sharing
    or feature sharing between tasks.
Each head independently processes the input features by each head's
own pooler and head.
This class is used by most models, such as FPN and C5.
To implement more models, you can subclass it and implement a different
:meth:`forward()` or a head.
"""
"""
Shengcao: Please complete the following initialization functions with proper variable names for TSP-RCNN.
Later functions will use:
num_cascade_stages: int
box_head: List[nn.Module], length num_cascade_stages
box_predictor: List[nn.Module], length num_cascade_stages
proposal_matchers: List[Matcher], length num_cascade_stages
Adapted from:
https://github.com/facebookresearch/detectron2/blob/master/detectron2/modeling/roi_heads/cascade_rcnn.py
"""
@configurable
def __init__(
self,
*,
inherit_match: bool,
box_in_features: List[str],
box_pooler: MyROIPooler,
box_heads: List[nn.Module],
box_predictors: List[nn.Module],
proposal_matchers: List[Matcher],
**kwargs):
assert "proposal_matcher" not in kwargs, (
"CascadeROIHeads takes 'proposal_matchers=' for each stage instead "
"of one 'proposal_matcher='."
)
# The first matcher matches RPN proposals with ground truth, done in the base class
kwargs["proposal_matcher"] = proposal_matchers[0]
num_stages = self.num_cascade_stages = len(box_heads)
box_heads = nn.ModuleList(box_heads)
box_predictors = nn.ModuleList(box_predictors)
assert len(box_predictors) == num_stages, f"{len(box_predictors)} != {num_stages}!"
assert len(proposal_matchers) == num_stages, f"{len(proposal_matchers)} != {num_stages}!"
super().__init__(
box_in_features=box_in_features,
box_pooler=box_pooler,
box_head=box_heads,
box_predictor=box_predictors,
**kwargs,
)
self.proposal_matchers = proposal_matchers
self.inherit_match = inherit_match
@classmethod
def from_config(cls, cfg, input_shape):
ret = super().from_config(cfg, input_shape)
ret["inherit_match"] = cfg.MODEL.ROI_BOX_CASCADE_HEAD.INHERIT_MATCH
ret.pop("proposal_matcher")
return ret
@classmethod
def _init_box_head(cls, cfg, input_shape):
# fmt: off
in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features)
sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
# cascade-specific
cascade_bbox_reg_weights = cfg.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS
cascade_ious = cfg.MODEL.ROI_BOX_CASCADE_HEAD.IOUS
assert len(cascade_bbox_reg_weights) == len(cascade_ious)
assert cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG, \
"CascadeROIHeads only support class-agnostic regression now!"
assert cascade_ious[0] == cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS[0]
# fmt: on
# If StandardROIHeads is applied on multiple feature maps (as in FPN),
# then we share the same predictors and therefore the channel counts must be the same
in_channels = [input_shape[f].channels for f in in_features]
# Check all channel counts are equal
assert len(set(in_channels)) == 1, in_channels
in_channels = in_channels[0]
box_pooler = MyROIPooler(
output_size=pooler_resolution,
scales=pooler_scales,
sampling_ratio=sampling_ratio,
pooler_type=pooler_type,
)
# Here we split "box head" and "box predictor", which is mainly due to historical reasons.
# They are used together so the "box predictor" layers should be part of the "box head".
# New subclasses of ROIHeads do not need "box predictor"s.
pooled_shape = ShapeSpec(
channels=in_channels, height=pooler_resolution, width=pooler_resolution
)
box_heads, box_predictors, proposal_matchers = [], [], []
share_output_head = cfg.MODEL.ROI_BOX_CASCADE_HEAD.SHARE_OUTPUT_HEAD
fine_tune_head = cfg.MODEL.ROI_BOX_CASCADE_HEAD.FINE_TUNE_HEAD
box_predictor = None
for match_iou, bbox_reg_weights in zip(cascade_ious, cascade_bbox_reg_weights):
box_head = build_box_head(cfg, pooled_shape)
box_heads.append(box_head)
if box_predictor is None or (not share_output_head and not fine_tune_head):
box_predictor = MyFastRCNNOutputLayers(
cfg,
box_head.output_shape,
box2box_transform=Box2BoxTransform(
weights=bbox_reg_weights),
)
elif fine_tune_head:
box_predictor = MyFastRCNNOutputLayers(
cfg,
box_head.output_shape,
box2box_transform=Box2BoxTransform(
weights=bbox_reg_weights),
finetune_on_set=True,
)
box_predictors.append(box_predictor)
proposal_matchers.append(Matcher([match_iou], [0, 1], allow_low_quality_matches=False))
return {
"box_in_features": in_features,
"box_pooler": box_pooler,
"box_heads": box_heads,
"box_predictors": box_predictors,
"proposal_matchers": proposal_matchers,
}
def _forward_box(
self, features: Dict[str, torch.Tensor], proposals: List[Instances], targets=None
) -> Union[Dict[str, torch.Tensor], List[Instances]]:
"""
Forward logic of the box prediction branch. If `self.train_on_pred_boxes is True`,
the function puts predicted boxes in the `proposal_boxes` field of `proposals` argument.
Args:
features (dict[str, Tensor]): mapping from feature map names to tensor.
Same as in :meth:`ROIHeads.forward`.
proposals (list[Instances]): the per-image object proposals with
their matching ground truth.
Each has fields "proposal_boxes", and "objectness_logits",
"gt_classes", "gt_boxes".
Returns:
In training, a dict of losses.
In inference, a list of `Instances`, the predicted instances.
"""
box_features = [features[f] for f in self.box_in_features]
head_outputs = [] # (predictor, predictions, proposals)
prev_pred_boxes = None
prev_box_features = None
prev_proposals = None
image_sizes = [x.image_size for x in proposals]
for k in range(self.num_cascade_stages):
if k > 0:
# The output boxes of the previous stage are used to create the input
# proposals of the next stage.
proposals = self._create_proposals_from_boxes(prev_pred_boxes, image_sizes)
if self.training:
if self.inherit_match:
proposals = self._inherit_labels(proposals, prev_proposals)
else:
proposals = self._match_and_label_boxes(proposals, k, targets)
prev_box_features, predictions = self._run_stage(box_features, prev_box_features, proposals, k)
prev_pred_boxes = self.box_predictor[k].predict_boxes(predictions, proposals)
prev_proposals = proposals
head_outputs.append((self.box_predictor[k], predictions, proposals))
if self.training:
losses = {}
storage = get_event_storage()
for stage, (predictor, predictions, proposals) in enumerate(head_outputs):
with storage.name_scope("stage{}".format(stage)):
stage_losses = predictor.losses(predictions, proposals)
losses.update({k + "_stage{}".format(stage): v for k, v in stage_losses.items()})
if self.train_on_pred_boxes:
# Default False
raise NotImplementedError
return losses
else:
# Use the boxes of the last head
predictor, predictions, proposals = head_outputs[-1]
pred_instances, _ = predictor.inference(predictions, proposals)
return pred_instances
@torch.no_grad()
def _inherit_labels(self, proposals, prev_proposals):
for proposals_per_image, prev_proposals_per_image in zip(proposals, prev_proposals):
proposals_per_image.gt_classes = prev_proposals_per_image.gt_classes
proposals_per_image.gt_boxes = prev_proposals_per_image.gt_boxes
proposals_per_image.gt_idxs = prev_proposals_per_image.gt_idxs
return proposals
@torch.no_grad()
def _match_and_label_boxes(self, proposals, stage, targets):
"""
Match proposals with ground-truth using the matcher at the given stage.
Label the proposals as foreground or background based on the match.
Args:
proposals (list[Instances]): One Instances for each image, with
the field "proposal_boxes".
stage (int): the current stage
targets (list[Instances]): the ground truth instances
Returns:
list[Instances]: the same proposals, but with fields "gt_classes" and "gt_boxes"
"""
num_fg_samples, num_bg_samples = [], []
for proposals_per_image, targets_per_image in zip(proposals, targets):
match_quality_matrix = pairwise_iou(
targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
)
# proposal_labels are 0 or 1
matched_idxs, proposal_labels = self.proposal_matchers[stage](match_quality_matrix)
if len(targets_per_image) > 0:
gt_classes = targets_per_image.gt_classes[matched_idxs]
# Label unmatched proposals (0 label from matcher) as background (label=num_classes)
gt_classes[proposal_labels == 0] = self.num_classes
gt_boxes = targets_per_image.gt_boxes[matched_idxs]
gt_idxs = matched_idxs
else:
gt_classes = torch.zeros_like(matched_idxs) + self.num_classes
gt_boxes = Boxes(
targets_per_image.gt_boxes.tensor.new_zeros((len(proposals_per_image), 4))
)
gt_idxs = torch.zeros_like(matched_idxs)
proposals_per_image.gt_classes = gt_classes
proposals_per_image.gt_boxes = gt_boxes
proposals_per_image.gt_idxs = gt_idxs
num_fg_samples.append((proposal_labels == 1).sum().item())
num_bg_samples.append(proposal_labels.numel() - num_fg_samples[-1])
# Log the number of fg/bg samples in each stage
storage = get_event_storage()
storage.put_scalar(
"stage{}/roi_head/num_fg_samples".format(stage),
sum(num_fg_samples) / len(num_fg_samples),
)
storage.put_scalar(
"stage{}/roi_head/num_bg_samples".format(stage),
sum(num_bg_samples) / len(num_bg_samples),
)
return proposals
def _run_stage(self, box_features, prev_box_features, proposals, stage):
"""
Args:
            box_features (list[Tensor]): #lvl input features to ROIHeads
            prev_box_features (Tensor or None): padded box features carried over from the
                previous stage (None for the first stage)
            proposals (list[Instances]): #image Instances, with the field "proposal_boxes"
            stage (int): the current stage
Returns:
Same output as `FastRCNNOutputLayers.forward()`.
"""
padded_box_features, dec_mask, inds_to_padded_inds = (
self.box_pooler(box_features, [x.proposal_boxes for x in proposals]))
enc_feature = None
enc_mask = None
if self.box_head[stage].use_encoder_decoder:
# Default False
raise NotImplementedError
max_num_proposals = padded_box_features.shape[1]
normalized_proposals = []
for x in proposals:
gt_box = x.proposal_boxes.tensor
img_h, img_w = x.image_size
gt_box = gt_box / torch.tensor([img_w, img_h, img_w, img_h],
dtype=torch.float32, device=gt_box.device)
gt_box = torch.cat([box_xyxy_to_cxcywh(gt_box), gt_box], dim=-1)
gt_box = F.pad(gt_box, [0, 0, 0, max_num_proposals - gt_box.shape[0]])
normalized_proposals.append(gt_box)
normalized_proposals = torch.stack(normalized_proposals, dim=0)
        # Shengcao: Not sure if this gradient scaling is necessary; I just copied it here.
# The original implementation averages the losses among heads,
# but scale up the parameter gradients of the heads.
# This is equivalent to adding the losses among heads,
# but scale down the gradients on features.
padded_box_features = _ScaleGradient.apply(padded_box_features, 1.0 / self.num_cascade_stages)
padded_box_features = self.box_head[stage](enc_feature, enc_mask, padded_box_features, dec_mask, normalized_proposals, prev_box_features)
box_features = padded_box_features[inds_to_padded_inds]
return padded_box_features, self.box_predictor[stage](box_features)
def _create_proposals_from_boxes(self, boxes, image_sizes):
"""
Args:
boxes (list[Tensor]): per-image predicted boxes, each of shape Ri x 4
image_sizes (list[tuple]): list of image shapes in (h, w)
Returns:
list[Instances]: per-image proposals with the given boxes.
"""
# Just like RPN, the proposals should not have gradients
boxes = [Boxes(b.detach()) for b in boxes]
proposals = []
for boxes_per_image, image_size in zip(boxes, image_sizes):
boxes_per_image.clip(image_size)
prop = Instances(image_size)
prop.proposal_boxes = boxes_per_image
proposals.append(prop)
return proposals
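# A standalone sketch (pure Python, hypothetical helper name) of the proposal
# normalization used by `_forward_box`/`_run_stage` above: divide an xyxy box
# by the image size, then prepend its (cx, cy, w, h) form, mirroring
# `box_xyxy_to_cxcywh` followed by the concatenation.
def normalize_and_augment(box_xyxy, img_h, img_w):
    # divide (x0, y0, x1, y1) by (w, h, w, h), as the tensor division above does
    x0, y0, x1, y1 = [v / s for v, s in zip(box_xyxy, (img_w, img_h, img_w, img_h))]
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    w, h = x1 - x0, y1 - y0
    return [cx, cy, w, h, x0, y0, x1, y1]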
# src/239-sliding_window_maximum.py
import collections
from typing import List


class Solution:
"""
    @Problem: given an array nums = [1,3,-1,-3,5,3,6,7] and a constant k (the window
              size), output the maximum value of every window
    @Example: Window position        Max
              ---------------        -----
              [1 3 -1] -3 5 3 6 7    3
              1 [3 -1 -3] 5 3 6 7    3
              1 3 [-1 -3 5] 3 6 7    5
              1 3 -1 [-3 5 3] 6 7    5
              1 3 -1 -3 [5 3 6] 7    6
              1 3 -1 -3 5 [3 6 7]    7
    @Idea:
        1) maintain a monotonic queue of (index, value) pairs, maximum at the front
        2) when cur ind - top ind >= k, the front maximum is no longer inside the
           current window, so pop it from the left
    @Trace: on the example above
        ----------------
        cur @1   que=[(0,1)]
        cur @3   que=[(1,3)]
        cur @-1  que=[(1,3),(2,-1)]                    now 1st max = 3
        cur @-3  que=[(1,3),(2,-1),(3,-3)]             now 2nd max = 3
        cur @5   que=[(4,5)]  (with popleft and pop)   now 3rd max = 5
        cur @3   que=[(4,5),(5,3)]                     now 4th max = 5
        ...
"""
def maxSlidingWindow(self, nums: List[int], k: int) -> List[int]:
que = collections.deque() # [[i, num]]
res = []
for i, num in enumerate(nums):
if que and i - que[0][0] >= k:
que.popleft()
while que and que[-1][1] <= num:
que.pop()
que.append([i, num])
if i >= k - 1:
res.append(que[0][1])
        return res
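# Standalone restatement of the monotonic-deque idea documented above, under a
# hypothetical function name, so the invariant is easy to check in isolation.
from collections import deque

def sliding_max(nums, k):
    q, out = deque(), []              # q holds (index, value), values decreasing
    for i, num in enumerate(nums):
        if q and i - q[0][0] >= k:    # front maximum fell out of the window
            q.popleft()
        while q and q[-1][1] <= num:  # smaller tail values can never become the max
            q.pop()
        q.append((i, num))
        if i >= k - 1:
            out.append(q[0][1])
    return out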
# src/nn.py
import numpy as np
from sklearn.utils import shuffle
from sklearn.metrics import accuracy_score,f1_score
from scipy.special import expit
def softmax(r):
shift=r-np.max(r)
exps=np.exp(shift)
return exps/np.sum(exps,axis=0)
def sigmoid(r):
return expit(r)
def swish(r):
beta=1
return r * sigmoid(beta * r)
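# A pure-Python check (standalone sketch) of the max-shift trick in `softmax`
# above: exponentiating r - max(r) avoids overflow for large inputs while
# leaving the normalized probabilities unchanged.
import math

def softmax_py(r):
    m = max(r)
    exps = [math.exp(v - m) for v in r]
    total = sum(exps)
    return [e / total for e in exps]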
class Layer:
def __init__(self, n_input, n_neurons, activation=None, weights=None):
self.eps=1/np.sqrt(n_input)
self.weights = np.random.normal(0,self.eps,(n_neurons, n_input))
self.activation = activation
self.n_input=n_input
self.n_neurons=n_neurons
self.z = np.zeros((n_neurons,1))
self.del_z = np.zeros((n_neurons,1))
self.last_activation = np.zeros((n_neurons,1))
self.input = np.zeros((n_input,1))
self.error = np.zeros([n_neurons,1])
self.delta = np.zeros([n_neurons,n_input])
def init_weight(self):
self.delta_w=np.zeros((self.n_neurons, self.n_input))
def activate(self, x,istest=False):
if(istest):
z=np.dot(self.weights,x) #+ self.bias
act=self._apply_activation(z)
return act
else:
self.input=x.reshape(-1,1)
self.z = np.dot(self.weights,self.input) #+ self.bias
self.last_activation=self._apply_activation(self.z)
return self.last_activation
def _apply_activation(self, r):
# In case no activation function was chosen
if self.activation is None:
return r
# tanh
if self.activation == 'tanh':
return np.tanh(r)
if self.activation == 'relu':
return np.maximum(r,np.zeros(r.shape))
# sigmoid
if self.activation == 'sigmoid':
return sigmoid(r)
# swish
if self.activation == 'swish':
return swish(r)
# softmax
if self.activation == 'softmax':
return softmax(r)
return r
def apply_activation_derivative(self, r, y=None, output=None):
if self.activation is None:
return r
if self.activation == 'tanh':
return 1 - (np.tanh(r)) ** 2
if self.activation == 'relu':
return (r > 0.0)*1.0
if self.activation == 'swish':
beta=1
k=swish(r)
return beta * k + (sigmoid(beta * r)*(1-beta*k))
if self.activation == 'sigmoid':
k=sigmoid(r)
return k * (1.0 - k)
if self.activation == 'softmax':
k=softmax(r)
return k * (1.0 - k)
return r
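# A finite-difference sanity check (standalone sketch) for the closed forms in
# `apply_activation_derivative`, shown here for sigmoid: s'(x) = s(x) * (1 - s(x)).
import math

def _sigmoid_grad_check(x, eps=1e-6):
    def s(v):
        return 1.0 / (1.0 + math.exp(-v))
    analytic = s(x) * (1.0 - s(x))
    numeric = (s(x + eps) - s(x - eps)) / (2 * eps)  # central difference
    return analytic, numeric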
class NeuralNetwork:
def __init__(self):
self._layers = []
def add_layer(self, layer):
self._layers.append(layer)
def feed_forward(self, X, istest=False):
for layer in self._layers:
X = layer.activate(X,istest)
return X
def backpropagation(self, X, Y, n_classes, learning_rate=1e-4,weight_reg=1e-4):
#code for backprop and weight update after going through the full batch
for layer in self._layers:
layer.init_weight()
# Loop over the layers backward
for i in range(X.shape[0]):
# Feed forward for the output
input=X[i]
y=np.eye(n_classes)[Y[i]].reshape((n_classes,))
output = self.feed_forward(input).reshape((n_classes,))
for l in reversed(range(len(self._layers))):
layer = self._layers[l]
# for the output layer
if layer == self._layers[-1]:
#print(output.shape,y.shape)
err=-(y-output)
layer.error=err.reshape(-1,1)
# The output = layer.last_activation in this case
layer.del_z = layer.apply_activation_derivative(layer.z) * layer.error
#print("llayer delta",layer.z.shape,layer.error.shape)
layer.delta = np.dot(layer.del_z,layer.input.T)
#print("err",layer.error.shape,layer.delta.shape,layer.apply_activation_derivative(output).shape)
else:
next_layer = self._layers[l + 1]
layer.error = np.dot(next_layer.weights.T, next_layer.del_z)
layer.del_z = layer.apply_activation_derivative(layer.z) * layer.error
layer.delta = np.dot(layer.del_z, layer.input.T)
layer.delta_w += layer.delta
# apply weights update from all samples in batch
for l in range(len(self._layers)):
layer = self._layers[l]
            layer.weights -= (layer.delta_w / X.shape[0] * learning_rate + weight_reg * layer.weights)
#print("l",l,layer.weights.shape)
def fit(self,X_train,y_train, n_classes, minibatch_size=10, learning_rate=1, max_epochs=1000):
for epoch in range(max_epochs):
X_train, y_train = shuffle(X_train, y_train)
for i in range(0, X_train.shape[0], minibatch_size):
# Get pair of (X, y) of the current minibatch/chunk
X_train_mini = X_train[i:i + minibatch_size]
y_train_mini = y_train[i:i + minibatch_size]
self.backpropagation(X_train_mini, y_train_mini, n_classes, learning_rate)
if epoch % (max_epochs/10) == 0:
print("epoch",epoch)
y_pred=self.predict(X_train,y_train)
print("Training set accuracy",accuracy_score(y_train,y_pred))
return self
def predict(self,X_test,y_test):
out=self.feed_forward(X_test.T,istest=True)
y_pred=np.argmax(out,axis=0)
#print("accuracy", accuracy_score(y_test,y_pred))
return y_pred
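# The `np.eye(n_classes)[y]` indexing in `backpropagation` builds one-hot rows;
# the same trick restated in pure Python (standalone sketch, hypothetical name):
def one_hot(labels, n_classes):
    return [[1.0 if c == y else 0.0 for c in range(n_classes)] for y in labels]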
760e6f1b77c5b4e77a80b57f6c3e2a3083e72729 | 1,298 | py | Python | projects/Web-Server/code.py | Kranthi-Guribilli/Python-Projects | 73059ba06079c14b022b0c80fbc6d031cddfbecb | [
"MIT"
] | 54 | 2021-11-03T08:54:50.000Z | 2022-01-09T19:16:52.000Z | projects/Web-Server/code.py | LeoLivs/Python-Projects | b681deba7220278ea8e37ec2865f8e1fb8ad4755 | [
"MIT"
] | 13 | 2021-10-31T05:01:01.000Z | 2022-01-08T13:48:07.000Z | projects/Web-Server/code.py | LeoLivs/Python-Projects | b681deba7220278ea8e37ec2865f8e1fb8ad4755 | [
"MIT"
] | 14 | 2021-10-30T20:17:50.000Z | 2022-01-09T14:15:13.000Z | from http.server import BaseHTTPRequestHandler, HTTPServer
class Server(BaseHTTPRequestHandler):
def do_GET(self):
"""Handles HTTP GET request"""
if self.path in ["/", "/index.html"]:
# Set a 200 response status code
self.send_response(200)
# Set content type to text/html because we are serving an html file
self.send_header("Content-type", "text/html")
# end_headers should be called everytime we call send_header
self.end_headers()
# set path from / to /index.html
self.path = "/index.html"
# returns index.html
file = self.open_file(self.path)
# Send index.html as response to the client
return self.wfile.write(bytes(file, "utf-8"))
else:
            # Sends a 404 error for every other path visited
            self.send_response(404)
            self.end_headers()
            return self.wfile.write(bytes("404 page not found.", "utf-8"))
def open_file(self, file_name: str):
"""Returns a file with given file name"""
file_name = file_name.split("/")[1]
with open(file_name) as f:
return f.read()
httpd = HTTPServer(("localhost", 8000), Server)
print("Serving on port 8000.")
httpd.serve_forever()
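# A standalone smoke test of the routing logic above, using a hypothetical
# handler that serves a canned body instead of reading index.html, on an
# ephemeral port so it never clashes with the :8000 server: "/" answers 200,
# anything else 404.
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class _DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in ("/", "/index.html"):
            self.send_response(200)
            self.send_header("Content-type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>ok</h1>")  # canned body instead of index.html
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

def probe(path):
    srv = HTTPServer(("localhost", 0), _DemoHandler)  # port 0 = pick a free port
    threading.Thread(target=srv.handle_request, daemon=True).start()
    conn = HTTPConnection("localhost", srv.server_address[1])
    conn.request("GET", path)
    status = conn.getresponse().status
    conn.close()
    srv.server_close()
    return status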
7610189d3ecc1aefcf13481b951dc426b486f027 | 4,019 | py | Python | scripts/training.py | anudeepsekhar/Lane-Detection-Pytorch | cfddda8a0768cf83afd87e29d605fd58aa89df59 | [
"MIT"
] | 1 | 2022-01-11T16:43:50.000Z | 2022-01-11T16:43:50.000Z | scripts/training.py | anudeepsekhar/Lane-Detection-Pytorch | cfddda8a0768cf83afd87e29d605fd58aa89df59 | [
"MIT"
] | null | null | null | scripts/training.py | anudeepsekhar/Lane-Detection-Pytorch | cfddda8a0768cf83afd87e29d605fd58aa89df59 | [
"MIT"
] | null | null | null | import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
import pandas as pd
from sklearn.model_selection import train_test_split
import numpy as np
from pathlib import Path
from torch.autograd import Variable
from tqdm import tqdm
from data_utils import LanesDataset
from model import UNet
from loss import DiscriminativeLoss, CrossEntropyLoss2d
root_dir = '/home/anudeep/lane-detection/dataset/'
df = pd.read_csv(os.path.join(root_dir,'data/paths.csv'))
X_train, X_test, y_train, y_test = train_test_split(
df.img_paths, df.mask_paths, test_size=0.2, random_state=42)
# loading data
train_dataset = LanesDataset(np.array(X_train[0:5000]), np.array(y_train[0:5000]), resize=(128,128))
train_dataloader = DataLoader(train_dataset,batch_size=1, shuffle=False, pin_memory=True, num_workers=2)
#loading model
model = UNet().cuda()
# Loss Function
criterion_disc = DiscriminativeLoss(delta_var=0.5,
delta_dist=1.5,
norm=2,
usegpu=True).cuda()
criterion_ce = CrossEntropyLoss2d()
criterion_bce_logits = torch.nn.BCEWithLogitsLoss()
# Optimizer
parameters = model.parameters()
optimizer = optim.SGD(parameters, lr=0.001, momentum=0.09, weight_decay=0.0001)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer,
mode='min',
factor=0.1,
patience=10,
verbose=True)
# Train
model_dir = Path(root_dir+'models/')
best_loss = np.inf
for epoch in range(1):
    print(f'epoch : {epoch}')
    disc_losses = []
    bce_losses = []
    for i, batched in enumerate(tqdm(train_dataloader)):
        # with tqdm(total=len(train_dataloader.dataset), unit='img') as pbar:
        images, sem_labels, ins_labels = batched
        images = Variable(images).cuda()
        sem_labels = Variable(sem_labels).cuda()
        ins_labels = Variable(ins_labels).cuda()
        model.zero_grad()
        sem_predict, ins_predict = model(images.float())
        loss = 0
        # Discriminative loss
        disc_loss = criterion_disc(ins_predict,
                                   ins_labels,
                                   [2] * len(images))
        loss += disc_loss
        if i % 50 == 0:
            print(f'Disc Loss: {disc_loss.cpu().data.numpy()}')
        disc_losses.append(disc_loss.cpu().data.numpy())
        # BCE
        bce_loss = criterion_bce_logits(sem_predict, sem_labels)
        loss += bce_loss * 3
        if i % 50 == 0:
            print(f'BCE Loss: {bce_loss.cpu().data.numpy()}')
        bce_losses.append(bce_loss.cpu().data.numpy())
        # Cross-entropy loss (disabled)
        # _, sem_labels_ce = sem_labels.max(1)
        # ce_loss = criterion_ce(ins_predict,
        #                        ins_labels.long())
        # loss += ce_loss
        # ce_losses.append(ce_loss.cpu().data.numpy()[0])
        loss.backward()
        optimizer.step()
        # break
    disc_loss = np.mean(disc_losses)
    bce_loss = np.mean(bce_losses)
    print(f'DiscriminativeLoss: {disc_loss:.4f}')
    print(f'BinaryCrossEntropyLoss: {bce_loss:.4f}')
    scheduler.step(disc_loss)
    if disc_loss < best_loss:
        best_loss = disc_loss
        print('Best Model!')
        modelname = 'model-2.pth'
        torch.save(model.state_dict(), model_dir.joinpath(modelname))
    # break
# for img, bmask, imask in train_dataloader:
# print(img.shape)
# print(bmask.shape)
# print(imask.shape)
# sem_predict, ins_predict = model(img.cuda())
# print(ins_predict.shape)
# disc_loss = criterion_disc(ins_predict.float(),
# imask.cuda().float(),
# [4] * len(img.cuda()))
# print(disc_loss.cpu().data.numpy())
# break
| 32.41129 | 104 | 0.603882 | 501 | 4,019 | 4.648703 | 0.299401 | 0.041219 | 0.028338 | 0.041219 | 0.103049 | 0.036926 | 0 | 0 | 0 | 0 | 0 | 0.021181 | 0.283404 | 4,019 | 123 | 105 | 32.674797 | 0.7875 | 0.190843 | 0 | 0.027397 | 0 | 0 | 0.077854 | 0.036911 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.205479 | 0 | 0.205479 | 0.082192 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7612322e7921680a844a8470fc0ca3366a93c48c | 1,326 | py | Python | python/2020/day3/part1.py | CalebRoberts65101/AdventOfCode | 71874ee04833f0170a63cfe242aa5a8a9b923244 | [
"MIT"
] | null | null | null | python/2020/day3/part1.py | CalebRoberts65101/AdventOfCode | 71874ee04833f0170a63cfe242aa5a8a9b923244 | [
"MIT"
] | null | null | null | python/2020/day3/part1.py | CalebRoberts65101/AdventOfCode | 71874ee04833f0170a63cfe242aa5a8a9b923244 | [
"MIT"
] | null | null | null |
with open('python\\2020\\day3\\data.txt') as f:
    lines = f.readlines()

# start in the top left, 0,0
lay = 0
hang = 0
# found by counting data
width = 30
lay_d = 3
hang_d = 1
tree_hits = 0
stop = False

# loop for part 1 (disabled: stop is set to True so the loop is skipped)
stop = True
while not stop:
    if hang == len(lines):
        stop = True
        continue
    # figure out if we collide
    if lines[hang][lay] == '#':
        print(hang, lay)
        print(lines[hang])
        tree_hits += 1
    # increment
    lay = (lay + lay_d) % (width + 1)
    hang = hang + hang_d
# print("tree hits:", tree_hits)


def tree_loop(lines, lay_d, hang_d):
    lay = 0
    hang = 0
    tree_hits = 0
    stop = False
    while not stop:
        if hang >= len(lines):
            stop = True
            continue
        # figure out if we collide
        if lines[hang][lay] == '#':
            print(hang, lay)
            print(lines[hang])
            tree_hits += 1
        # increment (width is the module-level grid width defined above)
        lay = (lay + lay_d) % (width + 1)
        hang = hang + hang_d
    return tree_hits
result_1 = tree_loop(lines, 1, 1)
result_2 = tree_loop(lines, 3, 1)
result_3 = tree_loop(lines, 5, 1)
result_4 = tree_loop(lines, 7, 1)
result_5 = tree_loop(lines, 1, 2)
print("results:", result_1, result_2, result_3, result_4, result_5)
print("answer:", result_1 * result_2 * result_3 * result_4 * result_5) | 21.737705 | 70 | 0.581448 | 208 | 1,326 | 3.538462 | 0.264423 | 0.076087 | 0.105978 | 0.024457 | 0.546196 | 0.497283 | 0.497283 | 0.497283 | 0.497283 | 0.497283 | 0 | 0.050214 | 0.294118 | 1,326 | 61 | 70 | 21.737705 | 0.736111 | 0.12368 | 0 | 0.627907 | 0 | 0 | 0.037326 | 0.022569 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023256 | false | 0 | 0 | 0 | 0.046512 | 0.139535 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7616ef890c434e455d8550201546bbbdd57349a1 | 538 | py | Python | LassoVariants/AlternateLasso/example/example_classification.py | carushi/Catactor | 27d35261249daf695659f2f329aa470922f60922 | [
"MIT"
] | 12 | 2016-11-30T04:39:18.000Z | 2021-09-11T13:57:37.000Z | LassoVariants/AlternateLasso/example/example_classification.py | carushi/Catactor | 27d35261249daf695659f2f329aa470922f60922 | [
"MIT"
] | 2 | 2018-03-05T19:01:09.000Z | 2019-10-10T00:30:55.000Z | LassoVariants/AlternateLasso/example/example_classification.py | carushi/Catactor | 27d35261249daf695659f2f329aa470922f60922 | [
"MIT"
] | 6 | 2017-08-19T17:49:51.000Z | 2022-01-09T07:41:22.000Z | # -*- coding: utf-8 -*-
"""
@author: Satoshi Hara
"""
import sys
sys.path.append('../')
import numpy as np
from AlternateLinearModel import AlternateLogisticLasso
# setting
seed = 0
num = 1000
dim = 2
dim_extra = 2
# data
np.random.seed(seed)
X = np.random.randn(num, dim + dim_extra)
for i in range(dim_extra):
    X[:, dim + i] = X[:, 0] + 0.5 * np.random.randn(num)
y = X[:, 0] + 0.3 * X[:, 1] + 0.5 * np.random.randn(num) > 0
# Alternate Lasso
mdl = AlternateLogisticLasso(rho=0.1, verbose=True)
mdl.fit(X, y)
print()
print(mdl)
| 17.933333 | 60 | 0.641264 | 89 | 538 | 3.842697 | 0.494382 | 0.093567 | 0.114035 | 0.140351 | 0.105263 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0.045045 | 0.174721 | 538 | 29 | 61 | 18.551724 | 0.725225 | 0.135688 | 0 | 0 | 0 | 0 | 0.006608 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.176471 | 0 | 0.176471 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7618025680ba1de890f72f20da5e84d5e3f29d4c | 1,195 | py | Python | examples/svdd_example_kernels.py | iqiukp/SVDD | 07a5614053cc80cbee1f5c0eacca4e74ebd0cce6 | [
"MIT"
] | 51 | 2020-03-27T07:48:22.000Z | 2021-10-05T14:45:08.000Z | examples/svdd_example_kernels.py | iqiukp/SVDD | 07a5614053cc80cbee1f5c0eacca4e74ebd0cce6 | [
"MIT"
] | 3 | 2020-12-09T19:40:33.000Z | 2021-03-25T17:27:30.000Z | examples/svdd_example_kernels.py | iqiukp/SVDD | 07a5614053cc80cbee1f5c0eacca4e74ebd0cce6 | [
"MIT"
] | 20 | 2020-06-03T07:27:18.000Z | 2021-10-05T14:45:15.000Z | # -*- coding: utf-8 -*-
import sys
sys.path.append("..")
from src.svdd import SVDD
from src.visualize import Visualization as draw
from data import PrepareData as load
# load banana-shape data
trainData, testData, trainLabel, testLabel = load.banana()
# kernel list
kernelList = {"1": {"type": 'gauss', "width": 1/24},
"2": {"type": 'linear', "offset": 0},
"3": {"type": 'ploy', "degree": 2, "offset": 0},
"4": {"type": 'tanh', "gamma": 1e-4, "offset": 0},
"5": {"type": 'lapl', "width": 1/12}
}
for i in range(len(kernelList)):
    # set SVDD parameters
    parameters = {"positive penalty": 0.9,
                  "negative penalty": 0.8,
                  "kernel": kernelList.get(str(i + 1)),
                  "option": {"display": 'on'}}
    # construct an SVDD model
    svdd = SVDD(parameters)
    # train the SVDD model
    svdd.train(trainData, trainLabel)
    # test the SVDD model
    distance, accuracy = svdd.test(testData, testLabel)
    # visualize the results
    # draw.testResult(svdd, distance)
    # draw.testROC(testLabel, distance)
    draw.boundary(svdd, trainData, trainLabel)
| 25.425532 | 64 | 0.56569 | 137 | 1,195 | 4.934307 | 0.532847 | 0.031065 | 0.038462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02662 | 0.276987 | 1,195 | 46 | 65 | 25.978261 | 0.755787 | 0.184937 | 0 | 0 | 0 | 0 | 0.147917 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.190476 | 0 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7619fafbd2cda5852a583207fe10119c3e96f887 | 2,428 | py | Python | src/nti/traversal/tests/test_location.py | NextThought/nti.traversal | a3ccfa570c6093875cb6e2905d82ac34d7574d8c | [
"Apache-2.0"
] | null | null | null | src/nti/traversal/tests/test_location.py | NextThought/nti.traversal | a3ccfa570c6093875cb6e2905d82ac34d7574d8c | [
"Apache-2.0"
] | 1 | 2020-01-02T15:14:58.000Z | 2020-01-02T15:19:42.000Z | src/nti/traversal/tests/test_location.py | NextThought/nti.traversal | a3ccfa570c6093875cb6e2905d82ac34d7574d8c | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
# pylint: disable=protected-access,too-many-public-methods,inherit-non-class
import unittest
from hamcrest import is_
from hamcrest import has_length
from hamcrest import assert_that
from zope import interface
from zope.interface import Interface
from zope.interface import directlyProvides
from zope.location.interfaces import ILocation
from nti.traversal.location import lineage
from nti.traversal.location import find_interface
from nti.traversal.tests import SharedConfiguringTestLayer
class Root(object):
    pass


@interface.implementer(ILocation)
class Location(object):
    __name__ = __parent__ = None


class TestLineage(unittest.TestCase):

    layer = SharedConfiguringTestLayer

    def _callFUT(self, context):
        return lineage(context)

    def test_lineage(self):
        o1 = Location()
        o2 = Location()
        o2.__parent__ = o1
        o3 = Location()
        o3.__parent__ = o2
        o4 = Location()
        o4.__parent__ = o3
        result = list(self._callFUT(o3))
        assert_that(result, is_([o3, o2, o1]))
        result = list(self._callFUT(o1))
        assert_that(result, is_([o1]))
        assert_that(list(self._callFUT(Root())),
                    has_length(1))


class DummyContext(object):
    __parent__ = None

    def __init__(self, next_=None, name=None):
        self.next = next_
        self.__name__ = name


class TestFindInterface(unittest.TestCase):

    layer = SharedConfiguringTestLayer

    def _callFUT(self, context, iface):
        return find_interface(context, iface)

    def test_it_interface(self):
        # pylint:disable=blacklisted-name
        baz = DummyContext()
        bar = DummyContext(baz)
        foo = DummyContext(bar)
        root = DummyContext(foo)
        root.__parent__ = None
        root.__name__ = 'root'
        foo.__parent__ = root
        foo.__name__ = 'foo'
        bar.__parent__ = foo
        bar.__name__ = 'bar'
        baz.__parent__ = bar
        baz.__name__ = 'baz'

        class IFoo(Interface):
            pass

        directlyProvides(root, IFoo)
        result = self._callFUT(baz, IFoo)
        self.assertEqual(result.__name__, 'root')
        result = self._callFUT(baz, DummyContext)
        self.assertEqual(result.__name__, 'baz')
| 24.28 | 76 | 0.667628 | 270 | 2,428 | 5.6 | 0.292593 | 0.036376 | 0.031746 | 0.030423 | 0.175926 | 0.136243 | 0.089947 | 0.089947 | 0 | 0 | 0 | 0.009777 | 0.241763 | 2,428 | 99 | 77 | 24.525253 | 0.811515 | 0.061367 | 0 | 0.060606 | 0 | 0 | 0.008791 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.075758 | false | 0.030303 | 0.212121 | 0.030303 | 0.484848 | 0.015152 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
761a9a4bbf412088e8072d8a1c219bbb324e142c | 673 | py | Python | migrations/versions/574aef3b4730_blogposts.py | MainaKamau92/apexselftaught | 9f9a3bd1ba23e57a12e173730917fb9bb7003707 | [
"MIT"
] | 4 | 2019-01-02T19:52:00.000Z | 2022-02-21T11:07:34.000Z | migrations/versions/574aef3b4730_blogposts.py | MainaKamau92/apexselftaught | 9f9a3bd1ba23e57a12e173730917fb9bb7003707 | [
"MIT"
] | 2 | 2019-12-04T13:36:54.000Z | 2019-12-04T13:49:21.000Z | migrations/versions/574aef3b4730_blogposts.py | MainaKamau92/apexselftaught | 9f9a3bd1ba23e57a12e173730917fb9bb7003707 | [
"MIT"
] | 1 | 2021-11-28T13:23:14.000Z | 2021-11-28T13:23:14.000Z | """blogposts
Revision ID: 574aef3b4730
Revises: 916e12044eea
Create Date: 2019-02-18 02:50:45.508429
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '574aef3b4730'
down_revision = '916e12044eea'
branch_labels = None
depends_on = None
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('blogposts', sa.Column('image_file', sa.String(length=1000), nullable=False))
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_column('blogposts', 'image_file')
    # ### end Alembic commands ###
| 23.206897 | 95 | 0.699851 | 83 | 673 | 5.590361 | 0.590361 | 0.05819 | 0.090517 | 0.099138 | 0.189655 | 0.189655 | 0.189655 | 0.189655 | 0 | 0 | 0 | 0.1 | 0.167905 | 673 | 28 | 96 | 24.035714 | 0.728571 | 0.432392 | 0 | 0 | 0 | 0 | 0.17971 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
761dc08b0572f2d704979584486a408b74130f4f | 60,992 | py | Python | cdbgui.py | toddn1704/Client_Database | 523426e9f80b062ea39b1317a00eac8e2c576676 | [
"MIT"
] | null | null | null | cdbgui.py | toddn1704/Client_Database | 523426e9f80b062ea39b1317a00eac8e2c576676 | [
"MIT"
] | null | null | null | cdbgui.py | toddn1704/Client_Database | 523426e9f80b062ea39b1317a00eac8e2c576676 | [
"MIT"
] | null | null | null | """cdbgui.py
Developers: Christina Hammer, Noelle Todd
Last Updated: August 19, 2014
This file contains a class version of the interface, in an effort to
make a program with no global variables.
"""
from datetime import datetime, timedelta, date
from tkinter import *
from tkinter import ttk
from tkinter import messagebox
from cdbifunc2 import *
import cdbvolunteer
import cdbcustom
class allobjects:
"""This class attempts to contain ALL labels, entries, etc.,
so that there are no global variables.
"""
def __init__(self, volunteerID, volunteerName, bgcolor):
"""This function declares all variables that are used by
more than one function.
"""
self.volID = volunteerID #the id of the volunteer who logged in
self.volunteerName = volunteerName
self.bgcolor = bgcolor
#Variables used later on
self.cursel = 0
self.selectedVisit = 0
self.id_list = []
self.mem_list = []
self.clientlist = list_people()
self.visitDict = {}
#holds entryboxes for family members
self.memDict = {}
self.info = {}
self.addmemberON = False #checks if member boxes have already been added
#dictionaries/lists used for date entry
self.month_li = ["January", "February", "March", "April",
"May", "June", "July", "August", "September",
"October", "November", "December"]
self.month_day_dict = {"January":31, "February":29, "March":31,
"April":30, "May":31, "June":30, "July":31,
"August":31, "September":30, "October":31,
"November":30, "December":31}
self.month_int = {1:"January", 2:"February", 3:"March",
4:"April", 5:"May", 6:"June", 7:"July",
8:"August", 9:"September", 10:"October",
11:"November", 12:"December"}
self.int_month = {"January":1, "February":2, "March":3,
"April":4, "May":5, "June":6, "July":7,
"August":8, "September":9, "October":10,
"November":11, "December":12}
#customize colors/fonts
#This will connect to the database itself,
#and retrieve the colors from there.
#self.bgcolor = 'light blue' #'lavender'
#self.labfont = 'Helvetica'
#self.labBGcolor = 'gray10'
#self.labFGcolor = 'white'
#self.cliSearLabBG = 'Coral'
#self.cliSearLabFG = 'white'
#configuring window
self.ciGui=Tk()
self.gridframe=Frame(self.ciGui).grid()
self.ciGui.configure(background=self.bgcolor)
self.ciGui.title('Food Pantry Database')
#CLIENT SEARCH SETUP
self.cslabel = Label(self.gridframe,text='Client Search',
font=("Helvetica", 16),fg='white',bg='gray10')\
.grid(row=0,column=0,columnspan=2, sticky=W)
self.csblank = Label(self.gridframe, text=' ',
font=('Helvetica',10), bg=self.bgcolor)\
.grid(row=0,column=2,sticky=E)
#Name Searchbox
self.ns = StringVar()
self.nameSearchEnt = Entry(self.gridframe, textvariable=self.ns)
self.nameSearchEnt.grid(row=2,column=0)
self.nameSearchEnt.bind('<Key>',self.nameSearch)
self.searchButton = Button(self.gridframe, text='Search Clients',
command=self.nameSearch)
self.searchButton.grid(row=2, column=1)
#Client Listbox
self.client_listbox = Listbox(self.gridframe,height=10,width=40)
self.client_listbox.bind('<<ListboxSelect>>', self.displayInfo )
self.client_listbox.config(exportselection=0)
self.scrollb = Scrollbar(self.gridframe)
self.client_listbox.bind('<<ListboxSelect>>',self.displayInfo )
self.client_listbox.config(yscrollcommand=self.scrollb.set)
self.scrollb.config(command=self.client_listbox.yview)
self.client_listbox.grid(row=3, column=0, rowspan=5, columnspan=2)
self.scrollb.grid(row=3, column=1, rowspan=5, sticky=E+N+S)
self.firstSep = ttk.Separator(self.gridframe, orient='vertical')\
.grid(row=1,column=2,rowspan=40,sticky=NS)
self.NCButton = Button(self.gridframe, text='New Client',
command=self.newClientDisplay, width=25)\
.grid(row=9, column=0, columnspan=2)
#CLIENT INFORMATION SETUP
self.secondSep = ttk.Separator(self.gridframe, orient='horizontal')\
.grid(row=0,column=3,columnspan=40,sticky=EW)
self.cilabel = Label(self.gridframe, text='Client Information',
font=("Helvetica", 16),fg='white',bg='gray10')\
.grid(row=0,column=3,columnspan=12, sticky=W)
self.ciblank = Label(self.gridframe, text=' ',font=('Helvetica',10),
bg=self.bgcolor).grid(row=1,column=3,sticky=E)
#First name
self.fnv = StringVar()
self.fnlabel = Label(self.gridframe, text="First Name: ",
font=('Helvetica',12),bg=self.bgcolor)\
.grid(row=2, column=3,rowspan=2,sticky=E)
self.fname = Entry(self.gridframe, textvariable=self.fnv,bd=4)
self.fname.grid(row=2, column=4, rowspan=2, columnspan=1, sticky=W)
#Last name
self.lnv = StringVar()
self.lnlabel = Label(self.gridframe, text='Last Name: ',
font=('Helvetica',12),bg=self.bgcolor)\
.grid(row=2,column=5,rowspan=2, sticky=W)
self.lname = Entry(self.gridframe, textvariable=self.lnv,bd=4)
self.lname.grid(row=2,column=6, rowspan=2, columnspan=1, sticky=W)
#Phone
self.phv = StringVar()
self.phlabel = Label(self.gridframe, text='Phone: ',
font=('Helvetica',12),bg=self.bgcolor)\
.grid(row=2, column=7,rowspan=2, sticky=E)
self.phone = Entry(self.gridframe, textvariable=self.phv, bd=4)
self.phone.grid(row=2, column=8, columnspan=2, rowspan=2, sticky=W)
#Date of Birth
self.doblabel = Label(self.gridframe, text='Date of Birth: ',
font=('Helvetica',12),bg=self.bgcolor)\
.grid(row=4,column=3, rowspan=2, sticky=E)
self.mv = StringVar()
self.dv = StringVar()
self.yv = StringVar()
#dob month combobox
self.mob = ttk.Combobox(self.gridframe, width=10, state='readonly',
values=self.month_li, textvariable=self.mv)
self.mob.bind('<<ComboboxSelected>>', self.monthbox_select)
#dob day spinbox
self.dob = Spinbox(self.gridframe, from_=0, to=0,
textvariable=self.dv, width=5, bd=4)
#dob year spinbox
self.yob = Spinbox(self.gridframe, from_=1900, to=2500,
textvariable=self.yv, width=7, bd=4)
self.mob.grid(row=4, column=4, rowspan=2, sticky=W)
self.dob.grid(row=4, column=4, rowspan=2, sticky=E)
self.yob.grid(row=4, column=5, rowspan=2)
#Age
self.agev = StringVar()
self.avallabel = Label(self.gridframe, textvariable=self.agev,
font=('Helvetica',12),bg=self.bgcolor)\
.grid(row=4,column=6, rowspan=2)
#Date Joined
self.datejoinv = StringVar()
self.djlabel = Label(self.gridframe, text="Date Joined:",
font=('Helvetica',12), bg=self.bgcolor)\
.grid(row=4,column=7,rowspan=2, sticky=E)
self.djEntry = Entry(self.gridframe, textvariable=self.datejoinv,
bd=4).grid(row=4, column=8, rowspan=2)
#VISIT INFORMATION SETUP
self.thirdSep = ttk.Separator(self.gridframe, orient='horizontal')\
.grid(row=6,column=3,columnspan=40,sticky=EW)
self.vilabel = Label(self.gridframe,text='Visit Information',
font=("Helvetica", 16),fg='white', bg='gray10')\
.grid(row=6,column=3,columnspan=12, sticky=W)
self.datelab = Label(self.gridframe, text='Date: ',
font=('Helvetica',14), bg=self.bgcolor)\
.grid(row=7,column=3)
self.notelab = Label(self.gridframe, text='Notes:',
font=('Helvetica',14), bg=self.bgcolor)\
.grid(row=7,column=4)
self.vislab = Label(self.gridframe, text='Visitor: ',
font=('Helvetica',14),bg=self.bgcolor)\
.grid(row=7,column=7, padx=10)
self.vollab = Label(self.gridframe, text='Volunteer: ',
font=('Helvetica',14),bg=self.bgcolor)\
.grid(row=9, column=7, padx=10)
self.visit_listbox = Listbox(self.gridframe,height=4,width=15,font=12, bd=4)
self.visit_listbox.bind('<<ListboxSelect>>', self.displayVisit)
self.visit_listbox.config(exportselection=0)
self.visit_scroll = Scrollbar(self.gridframe)
self.visit_listbox.config(yscrollcommand=self.visit_scroll.set)
self.visit_scroll.config(command=self.visit_listbox.yview)
self.visit_listbox.grid(row=8, column=3, rowspan=4, columnspan=1, sticky=W)
self.visit_scroll.grid(row=8, column=3, rowspan=4, columnspan=1, sticky=E+N+S)
#Entry box for visit (when new visit is added)
self.visdatev = StringVar()
self.visitdate = Entry(self.gridframe,textvariable=self.visdatev,bd=4)
#self.visitdate.grid(row=8, column=3)
#visit notes
self.notv = StringVar()
self.notescv = Text(self.gridframe, state='disabled', width=50, height=4, bd=4, font='Helvetica')
self.vnotes_scroll = Scrollbar(self.gridframe)
self.notescv.config(yscrollcommand=self.vnotes_scroll.set)
self.vnotes_scroll.config(command=self.notescv.yview)
#visit notes
self.notescv.grid(row=8, column=4, columnspan=3, rowspan=4, sticky=W, padx=10)
self.vnotes_scroll.grid(row=8, column=4, rowspan=4, columnspan=3, sticky=E+N+S)
#visit visitor
self.visv = StringVar()
self.visitor = Entry(self.gridframe,textvariable=self.visv,
state='readonly',bd=4)
self.visitor.grid(row=8, column=7, rowspan=1, sticky=E, padx=10)
#visit volunteer
self.volv = IntVar()
self.volun = Entry(self.gridframe,textvariable=self.volv,bd=4,
state='readonly')
self.volun.grid(row=10, column=7, rowspan=1, padx=10)
#Extra blank label
self.blankLab2 = Label(self.gridframe, text=' ',
font=('Helvetica',10), bg=self.bgcolor)\
.grid(row=13,column=3, rowspan=2, sticky=E)
#Visit buttons
self.newVisit = Button(self.gridframe, text='New Visit', width=15,
command=self.newvisitf)
self.newVisit.grid(row=8, column=8, sticky=W)
self.editVisit = Button(self.gridframe, text='Edit Visit', width=15,
command=self.editvisitf)
self.editVisit.grid(row=9, column=8, sticky=W)
self.deleteVisit = Button(self.gridframe, text='Delete Visit', width=15,
command=self.deletevisitf)
self.deleteVisit.grid(row=10, column=8, sticky=W)
#records/updates visit
self.saveVisit = Button(self.gridframe, text='Save Visit', width=15,
command=self.recordVisit)
self.saveVisitE = Button(self.gridframe, text='Save Visit', width=15,
command=self.savevisitf)
#self.saveVisit.grid(row=8,column=8,sticky=W)
self.cancelVisit = Button(self.gridframe, text='Cancel', width=15,
command=self.cancelvisitf)
#self.cancelVisit.grid(row=9, column=8, sticky=W)
#HOUSEHOLD INFORMATION SETUP
self.fourthSep = ttk.Separator(self.gridframe, orient='horizontal')\
.grid(row=15,column=3,columnspan=40,sticky=EW)
self.hilabel = Label(self.gridframe,text='Household Information',
font=("Helvetica", 16),fg='white', bg='gray10')\
.grid(row=15,column=3,columnspan=12, sticky=W)
#blank line
self.hiblank = Label(self.gridframe, text=' ',font=('Helvetica',10),
bg=self.bgcolor).grid(row=16,column=3,sticky=E)
#street address
self.adv = StringVar()
self.adlab = Label(self.gridframe, text='Address: ',
font=('Helvetica',12), bg=self.bgcolor)\
.grid(row=17,column=3, rowspan=2, sticky=E)
self.address = Entry(self.gridframe,textvariable=self.adv,
width=40,bd=4)
self.address.grid(row=17, column=4,columnspan=2, rowspan=2)
#apartment
self.apv = StringVar()
self.aplab = Label(self.gridframe, text='Apt: ',font=('Helvetica',12),
bg=self.bgcolor).grid(row=17,column=6,
rowspan=2, sticky=E)
self.aptn = Entry(self.gridframe,textvariable=self.apv,width=10,bd=4)
self.aptn.grid(row=17,column=7, rowspan=2, sticky=W)
#city
self.ctyv = StringVar()
self.cilab = Label(self.gridframe, text='City: ',font=('Helvetica',12),
bg=self.bgcolor).grid(row=17,column=8, rowspan=2, sticky=E)
self.city = Entry(self.gridframe,textvariable=self.ctyv,bd=4)
self.city.grid(row=17,column=9, rowspan=2, sticky=W)
#state
self.stav = StringVar()
self.stlab = Label(self.gridframe, text='State: ',
font=('Helvetica',12), bg=self.bgcolor)\
.grid(row=20,column=3, rowspan=2, sticky=E)
self.state = Entry(self.gridframe,textvariable=self.stav,bd=4)
self.state.grid(row=20,column=4, rowspan=2)
#zip
self.zpv = StringVar()
self.zilab = Label(self.gridframe, text='Zip Code: ',font=('Helvetica',12),
bg=self.bgcolor).grid(row=20, column=5, rowspan=2, sticky=E)
self.zipc = Entry(self.gridframe,textvariable=self.zpv,bd=4)
self.zipc.grid(row=20, column=6, rowspan=2)
#Date Verified
self.dverilabel = Label(self.gridframe, text='Last Verified: ',
font=('Helvetica',12),bg=self.bgcolor)\
.grid(row=20,column=7, rowspan=2, sticky=E)
self.mvv = StringVar()
self.dvv = StringVar()
self.yvv = StringVar()
self.mvv.set("")
self.dvv.set("")
self.yvv.set("")
#for month entry
self.mov = ttk.Combobox(self.gridframe, width=10, state='readonly',
values=self.month_li, textvariable=self.mvv)
#self.mob.bind('<<ComboboxSelected>>', self.monthbox_select)
#for day entry
self.dov = Spinbox(self.gridframe, from_=0, to=0,
textvariable=self.dvv, width=5, bd=4)
#for year entry
self.yov = Spinbox(self.gridframe, from_=1900, to=2500,
textvariable=self.yvv, width=9, bd=4)
self.mov.grid(row=20, column=8, rowspan=2, sticky=E, padx=10)
self.dov.grid(row=20, column=9, columnspan=2, rowspan=2, padx=10, sticky=W)
self.yov.grid(row=20, column=10, rowspan=2, padx=10, sticky=W)
#formatting labels/objects
self.blankLab5 = Label(self.gridframe, text=' ',
font=('Helvetica',12), bg=self.bgcolor)\
.grid(row=23,column=3,sticky=E)
self.blankLab6 = Label(self.gridframe, text=' ',
font=('Helvetica',10), bg=self.bgcolor)\
.grid(row=25,column=3,sticky=E)
self.fifthsep = ttk.Separator(self.gridframe, orient='horizontal')\
.grid(row=27,column=3,columnspan=40,sticky=EW, pady=10)
#The following variables will be removed and re-gridded
#as the function of the interface changes.
#
#HOUSEHOLD MEMBERS SETUP
#These variables appear on the updateClientDisplay only
#
#info display widgets
self.adl = StringVar()
self.dispad = Label(self.gridframe,textvariable=self.adl,
font=('Helvetica',12),bg=self.bgcolor)
self.chil = StringVar()
self.dischil = Label(self.gridframe,textvariable=self.chil,
font=('Helvetica',12),bg=self.bgcolor)
self.sen = StringVar()
self.dissen = Label(self.gridframe,textvariable=self.sen,
font=('Helvetica',12),bg=self.bgcolor)
self.inf = StringVar()
self.disinf = Label(self.gridframe,textvariable=self.inf,
font=('Helvetica',12),bg=self.bgcolor)
self.tot = StringVar()
self.distot = Label(self.gridframe, textvariable=self.tot,
bg=self.bgcolor,font=('Helvetica',12))
self.houseSep = ttk.Separator(self.gridframe, orient='horizontal')
self.houseSep.grid(row=23,column=3,columnspan=40,sticky=EW)
self.housetitle = Label(self.gridframe,text='Household Members',
font=("Helvetica", 16),fg='white',bg='gray10')
self.housetitle.grid(row=23,column=3,columnspan=12, sticky=W)
#listbox of family members
self.family_listbox = Listbox(self.gridframe,height=5,width=35,font=12)
self.family_listbox.config(exportselection=0)
self.fam_scroll = Scrollbar(self.gridframe)
self.family_listbox.config(yscrollcommand=self.fam_scroll.set)
self.fam_scroll.config(command=self.family_listbox.yview)
self.family_listbox.grid(row=24, column=3, rowspan=3, columnspan=2, sticky=W)
self.fam_scroll.grid(row=24, column=4, rowspan=3, columnspan=1, sticky=E+N+S)
#family member buttons
self.addmemb = Button(self.gridframe, text='Add Member', width=14,
command=self.addMemberEntryBoxes)
self.addmemb.grid(row=24,column=5,sticky=E+N+S)
self.removmemb = Button(self.gridframe, text='Remove Member',width=14,
command=self.removeMemberConfirm)
self.removmemb.grid(row=25,column=5,sticky=E+N+S)
self.viewmemb = Button(self.gridframe, text='View Member',width=14,
command=self.runViewMember)
self.viewmemb.grid(row=26,column=5,sticky=E+N+S)
#update save/cancel buttons
self.saveB = Button(self.gridframe, text='Save Changes',
command=self.updateInfo,width=20)
self.saveB.grid(row=28, column=3, columnspan=2)
self.cancelB = Button(self.gridframe, text='Cancel Changes',
command=self.cancel_changes,width=20)
self.cancelB.grid(row=28, column=5, columnspan=2)
#NEW CLIENT DISPLAY WIDGETS
#These variables appear on the newClientDisplay only
#
self.addhhsep = ttk.Separator(self.gridframe, orient='horizontal')
self.addhhtitle = Label(self.gridframe,text='Add Household Members',
font=("Helvetica", 16),fg='white',bg='gray10')
#add members to new household variable
self.q = IntVar()
self.famNum = Entry(self.gridframe, textvariable=self.q)
self.entNum = Label(self.gridframe,
text='Total Family Members: ',
font=('Helvetica',10),bg=self.bgcolor)
self.famname = Label(self.gridframe, text='Name:',
font=('Helvetica',10),bg=self.bgcolor)
self.famfn = Label(self.gridframe, text='First Name:',
font=('Helvetica',10),bg=self.bgcolor)
self.famln = Label(self.gridframe, text='Last Name:',
font=('Helvetica',10),bg=self.bgcolor)
self.famdob = Label(self.gridframe, text='Date of Birth:',
font=('Helvetica',10),bg=self.bgcolor)
self.famphone = Label(self.gridframe, text='Phone',
font=('Helvetica',10),bg=self.bgcolor)
self.fammon = Label(self.gridframe,text='mm',
font=('Helvetica',10),bg=self.bgcolor)
self.famday = Label(self.gridframe,text='dd',
font=('Helvetica',10),bg=self.bgcolor)
self.famyear = Label(self.gridframe,text='yyyy',
font=('Helvetica',10),bg=self.bgcolor)
self.newMembersB = Button(self.gridframe, text='Add Members',
command=self.familyEntryBoxes)
self.newClientSave = Button(self.gridframe, text='Save Client',
command=self.addNew)
self.cancelNewB = Button(self.gridframe, text='Cancel New Entry',
command=self.newClientDisplay)
#MENU SETUP
self.menubar = Menu(self.ciGui)
#^Essentially re-selects client
self.volmenu = Menu(self.menubar, tearoff=0)
self.volmenu.add_command(label='Log Off', command=self.logoff)
self.volmenu.add_command(label='Configure Color', command=self.configure_background)
self.menubar.add_cascade(label='Volunteers',menu=self.volmenu)
self.optionsmenu = Menu(self.menubar,tearoff=0)
self.optionsmenu.add_command(label='Quit', command=self.quitprogram)
#self.optionsmenu.add_command(label='Change Instructions', command=self.edit_instructions)
self.menubar.add_cascade(label='Options',menu=self.optionsmenu)
#Reports Menu
self.reportmenu = Menu(self.menubar,tearoff=0)
self.reportmenu.add_command(label='View Weekly Report',
command=self.weeklyReport)
self.reportmenu.add_command(label='View Monthly Report',
command=self.monthlyReport)
self.reportmenu.add_command(label='View Yearly Report',
command=self.yearlyReport)
self.reportmenu.add_command(label='View Custom Report',
command=self.customReport)
self.menubar.add_cascade(label='Reports',menu=self.reportmenu)
#add menubar to grid
self.ciGui.config(menu=self.menubar)
#instructive labels
self.instructions = Text(self.gridframe, bd=4, width=20, font=('Helvetica', 12), wrap=WORD)
with open('instructions.txt', 'r') as f:
    instruct = f.read()
self.instructions.insert('1.0', instruct)
self.i_scroll = Scrollbar(self.gridframe)
self.instructions.config(yscrollcommand=self.i_scroll.set)
self.i_scroll.config(command=self.instructions.yview)
self.instructions.grid(row=14, column=0, rowspan=20, columnspan=2, padx=10)
self.i_scroll.grid(row=14, column=0, rowspan=20, columnspan=2, sticky=E+N+S, padx=10)
#Sets some sizing stuff
for i in range(0, 10):
self.ciGui.columnconfigure(i, weight=1, minsize=10)
for i in range(0, 30):
self.ciGui.rowconfigure(i, weight=1, minsize=10)
for i in range(7, 11):
self.ciGui.rowconfigure(i, weight=1, minsize=20)
self.ciGui.rowconfigure(18, weight=1, minsize=25)
#mainloop
self.newClientDisplay()
self.ciGui.mainloop()
#DISPLAY SCREENS
def newClientDisplay(self):
"""This function will clear all irrelevant widgets, and
grid all widgets necessary for the new client screen.
"""
#clear widgets
self.clearEntries()
#grid widgets
self.addhhsep.grid(row=23,column=3,columnspan=40,sticky=EW, pady=10)
self.addhhtitle.grid(row=23,column=3,columnspan=12, sticky=W, pady=10)
self.famNum.grid(row=24, column=4)
self.entNum.grid(row=24, column=3)
self.newMembersB.grid(row=24, column=5)
self.newClientSave.grid(row=40,column=3, columnspan=2)
self.cancelNewB.grid(row=40, column=5, columnspan=2)
self.newvisitf()
self.saveVisit.grid_forget()
self.cancelVisit.grid_forget()
return
def updateClientDisplay(self):
"""This function will clear all irrelevant widgets and
grid all widgets necessary for the updating-client screen.
"""
#clear widgets
self.clearEntries()
#grid widgets
self.family_listbox.grid(row=24, column=3, rowspan=3, columnspan=2, sticky=W)
self.fam_scroll.grid(row=24, column=4, rowspan=3, columnspan=1, sticky=E+N+S)
self.addmemb.grid(row=24,column=5,sticky=E+N+S)
self.removmemb.grid(row=25,column=5,sticky=E+N+S)
self.viewmemb.grid(row=26,column=5,sticky=E+N+S)
self.housetitle.grid(row=23,column=3,columnspan=12, sticky=W)
self.houseSep.grid(row=23,column=3,columnspan=40,sticky=EW)
self.saveB.grid(row=28, column=3, columnspan=2)
self.cancelB.grid(row=28, column=5, columnspan=2)
return
#DISPLAY FOR SELECTED CLIENTS
def displayInfo(self, *args):
"""This function displays the information for a client that
has been selected in the client_listbox.
"""
try:
self.cursel = int(self.id_list[self.client_listbox.curselection()[0]])
info = select_client(self.cursel)
self.info = info
self.updateClientDisplay()
self.displayHouseholdMem(info)
self.displayVisitInfo(info)
self.displayClientInfo(info)
self.displayHouseholdInfo(info)
except IndexError:
pass
return
def displayNewInfo(self, client_id):
"""This function displays the information for a specified
client whose id is client_id.
"""
cursel = client_id
info = select_client(cursel)
self.info = info
self.updateClientDisplay()
self.displayHouseholdMem(info)
self.displayVisitInfo(info)
self.displayClientInfo(info)
self.displayHouseholdInfo(info)
return
#DISPLAY INFORMATION FUNCTIONS
def displayClientInfo(self, info, *args):
"""This function displays the client information.
"""
#retrieve info from dictionary
visitor = info["visitor"]
#set variables
self.fnv.set(visitor.firstname)
self.lnv.set(visitor.lastname)
month = self.month_int[visitor.dob.month]
self.mv.set(month)
self.dv.set(visitor.dob.day)
self.yv.set(visitor.dob.year)
self.phv.set(visitor.phone)
#parse and set datejoined
joined = str(visitor.dateJoined.month) + "/" +\
str(visitor.dateJoined.day) + "/" +\
str(visitor.dateJoined.year)
self.datejoinv.set(joined)
#set age
self.agev.set("Age: " + str(age(visitor.dob)))
return
def displayHouseholdInfo(self, info, *args):
"""This function displays the household information for
a client.
"""
#retrieve info from dictionary
house = info["household"]
#set variables
self.adv.set(house.street)
self.apv.set(house.apt)
self.ctyv.set(house.city)
self.stav.set(house.state)
self.zpv.set(house.zip)
#check dateVerified, and set variables accordingly
if house.dateVerified is not None:
month = house.dateVerified.month
self.mvv.set(self.month_int[month])
self.dvv.set(house.dateVerified.day)
self.yvv.set(house.dateVerified.year)
#parse and set label variables for all members
counts = info["agegroup_dict"]
self.adl.set("Adults: " + str(counts["adults"]))
self.chil.set("Children: " + str(counts["children"]))
self.sen.set("Seniors: " + str(counts["seniors"]))
self.inf.set("Infants: " + str(counts["infants"]))
self.tot.set("Total: " + str(counts["total"]))
#grid family member labels
self.dispad.grid(row=22,column=3,sticky=W, pady=10)
self.dischil.grid(row=22,column=4,sticky=W)
self.dissen.grid(row=22,column=5,sticky=W)
self.disinf.grid(row=22,column=6,sticky=W)
self.distot.grid(row=22,column=7,sticky=W)
return
def displayVisitInfo(self, info, *args):
"""This function displays the visit information for a client.
"""
self.clearVisits()
self.visitDict = {}
visitor = info["visitor"]
name = str(visitor.firstname)+ " " +str(visitor.lastname)
self.visv.set(name)
#visit info
visits = info["visit_list"]
if visits:
vdatelabs = []
vnlabs = []
vvisitors = []
vvols = []
vids = []
for v in visits:
d=str(v.date.month)+'/'+str(v.date.day)+'/'+str(v.date.year)
n=v.notes
vi=v.visitor
vol=v.volunteer
vid=v.visitID
vdatelabs.append(d)
vnlabs.append(n)
vvisitors.append(vi)
vvols.append(vol)
vids.append(vid)
#set variables to display first visit
self.visv.set(vvisitors[0])
self.volv.set(vvols[0])
self.notv.set(vnlabs[0])
self.notescv.config(state='normal')
self.notescv.insert('1.0', vnlabs[0])
self.notescv.config(state='disabled')
#save lists in dictionary
self.visitDict['dates'] = vdatelabs
self.visitDict['notes'] = vnlabs
self.visitDict['visitors'] = vvisitors
self.visitDict['volunteers'] = vvols
self.visitDict['ids'] = vids
for i in range(0, len(vdatelabs)):
self.visit_listbox.insert(i, vdatelabs[i])
self.visit_listbox.selection_set(0)
def displayVisit(self, *args):
"""This function will display the data for a visit when
a visit date is selected.
"""
try:
self.notescv.config(state='normal')
self.notescv.delete('1.0', END)
datev = int(self.visit_listbox.curselection()[0])
self.selectedVisit = datev
n = self.visitDict['notes']
vi = self.visitDict['visitors']
vol = self.visitDict['volunteers']
self.visv.set(vi[datev])
self.volv.set(vol[datev])
self.notv.set(n[datev])
notes = str(self.notv.get())
self.notescv.insert('1.0', notes)
self.notescv.config(state='disabled')
except IndexError:
pass
def displayHouseholdMem(self, info, *args):
"""This function displays the household information for a client.
"""
self.family_listbox.delete(0,END)
a=[]
del self.mem_list[:]
for member in info["member_list"]:
self.mem_list.append(member.id)
s=str(age(member.dob))
q='Age: '
s=q+s
x=(member.firstname, member.lastname,s)
a.append(x)
for i in range(len(a)):
self.family_listbox.insert(i,a[i])
#DISPLAY EXTRA ENTRY BOXES FOR ADDITIONAL FAMILY MEMBERS
#BUG: WHEN Add Member IS PRESSED MORE THAN ONCE, EXTRA
#BOXES HANG AROUND, AND ARE NEVER CLEARED
def familyEntryBoxes(self, *args):
"""This function generates entry boxes for adding new family members.
The entry boxes are saved in list form and added to the dictionary
memDict.
"""
#clears any boxes already displayed
self.clearFamily()
try:
n = int(self.q.get())
except ValueError:
return
#add instructive labels to grid
self.famfn.grid(row=25,column=3)
self.famln.grid(row=25,column=4)
self.famdob.grid(row=25,column=5)
self.famphone.grid(row=25,column=8)
#create lists
fnames = []
lnames = []
mm = []
dd = []
yy = []
phnum = []
#create entry boxes, grid them, and append them to a list
for i in range(0, n):
fname = Entry(self.gridframe)
fname.grid(row=26+i, column=3)
fnames.append(fname)
lname = Entry(self.gridframe)
lname.grid(row=26+i, column=4)
lnames.append(lname)
month = ttk.Combobox(self.gridframe, width=12, state='readonly',
values=self.month_li)
#month.bind('<<ComboboxSelected>>', self.monthbox_select)
month.grid(row=26+i, column=5)
mm.append(month)
day = Spinbox(self.gridframe, from_=0, to=0, width=5)
day.grid(row=26+i, column=6)
dd.append(day)
year = Spinbox(self.gridframe, from_=1900, to=2500, width=7)
year.grid(row=26+i, column=7)
yy.append(year)
phone = Entry(self.gridframe)
phone.grid(row=26+i, column=8)
phnum.append(phone)
#add all lists to dictionary
self.memDict["first"] = fnames
self.memDict["last"] = lnames
self.memDict["mm"] = mm
self.memDict["dd"] = dd
self.memDict["yy"] = yy
self.memDict["phone"] = phnum
def addMemberEntryBoxes(self, *args):
"""This function generates entry boxes for adding new family members.
The entry boxes are saved in list form and added to the dictionary
memDict.
"""
if not self.addmemberON:
#add instructive labels to grid
self.famfn.grid(row=24,column=6) #, sticky=NE)
self.famln.grid(row=24,column=8) #, sticky=NE)
self.famdob.grid(row=25,column=6)
self.famphone.grid(row=26,column=6)
#create entry boxes, grid them, and append them to a list
#first name
self.fname = Entry(self.gridframe)
self.fname.grid(row=24, column=7, sticky=W)
self.memDict["first"]=[self.fname]
#last name
self.lname = Entry(self.gridframe)
self.lname.grid(row=24, column=9, sticky=W)
self.memDict["last"]=[self.lname]
#dob: month
self.month = ttk.Combobox(self.gridframe, width=12, state='readonly',
values=self.month_li)
#self.month.bind('<<ComboboxSelected>>', self.monthbox_select)
self.month.grid(row=25, column=7, sticky=W)
self.memDict["mm"]=[self.month]
#dob: day
self.day = Spinbox(self.gridframe, from_=0, to=0, width=5)
self.day.grid(row=25, column=8, sticky=W)
self.memDict["dd"]=[self.day]
#dob: year
self.year = Spinbox(self.gridframe, from_=1900, to=2500, width=7)
self.year.grid(row=25, column=9, sticky=W)
self.memDict["yy"]=[self.year]
#phone
self.phone = Entry(self.gridframe)
self.phone.grid(row=26, column=7, sticky=W)
self.memDict["phone"]=[self.phone]
#self.addmemberON = True
#CLEAR WIDGETS FUNCTIONS
def clearVisits(self):
"""This function clears the entry boxes/visit notes
used for visits.
"""
self.visit_listbox.delete(0, END)
self.visv.set("")
self.volv.set("")
self.notv.set("")
self.notescv.config(state='normal')
self.notescv.delete('1.0', END)
self.notescv.config(state='disabled')
visitob = [self.visit_listbox, self.visit_scroll, self.visitdate,
self.newVisit, self.editVisit, self.deleteVisit,
self.saveVisit, self.saveVisitE, self.cancelVisit]
for ob in visitob:
ob.grid_forget()
self.visit_listbox.grid(row=8, column=3, rowspan=4, columnspan=1, sticky=W)
self.visit_scroll.grid(row=8, column=3, rowspan=4, columnspan=1, sticky=E+N+S)
self.newVisit.grid(row=8, column=8, sticky=W)
self.editVisit.grid(row=9, column=8, sticky=W)
self.deleteVisit.grid(row=10, column=8, sticky=W)
def clearFamily(self):
#forgets additional family members
self.family_listbox.delete(0, END)
try:
mfname = self.memDict["first"]
mlname = self.memDict["last"]
mm = self.memDict["mm"]
dd = self.memDict["dd"]
yy = self.memDict["yy"]
phnum = self.memDict["phone"]
easylist = [mfname, mlname, mm, dd,
yy, phnum]
for i in range(0, 6):
for j in range(0, len(easylist[i])):
easylist[i][j].grid_forget()
for i in range(0, 6):
easylist[i] = []
self.memDict = {}
except KeyError:
pass
def clearEntries(self):
"""This function clears the entry boxes that will never be
removed from the display.
"""
allvaries = [self.fnv, self.lnv, self.phv, self.mv, self.dv, self.yv,
self.adv, self.apv, self.q, self.agev,
self.notv, self.volv, self.visv, self.adl, self.chil,
self.sen, self.inf, self.tot, self.datejoinv, self.mvv,
self.dvv, self.yvv]
#Clears the entryboxes
for i in range(0, len(allvaries)):
allvaries[i].set("")
#sets defaulted entries
today = datetime.now()
todaystr = str(today.month)+'/'+str(today.day)+\
'/'+str(today.year)
#self.visdatev.set(todaystr)
self.datejoinv.set(todaystr)
self.ctyv.set("Troy")
self.stav.set("NY")
self.zpv.set(12180)
#new client stuff
allforgets = [self.family_listbox,
self.fam_scroll, self.addmemb, self.removmemb,
self.viewmemb, self.housetitle, self.houseSep, self.saveB,
self.cancelB, self.dispad, self.dischil, self.dissen,
self.disinf, self.distot, self.addhhsep, self.addhhtitle,
self.famNum, self.entNum, self.newMembersB,
self.newClientSave, self.cancelNewB, self.famname,
self.famfn, self.famln, self.famdob, self.famphone,
self.fammon, self.famday, self.famyear]
for i in range(0, len(allforgets)):
allforgets[i].forget()
allforgets[i].grid_forget()
#forgets additional family members
#self.family_listbox.delete(0, END)
self.clearFamily()
#forgets previous visit notes
self.clearVisits()
self.visitDict = {}
def monthbox_select(self, *args):
"""This function is called when a month is selected from the
month combobox. It will look up the month in the month_day_dict,
and assign the right number of days to the "dob" spinbox.
"""
month = self.mv.get()
days = self.month_day_dict[month]
self.dob.config(from_=1, to=days)
return
#visit buttons
def newvisitf(self):
"""This function will clear unnecessary widgets, add an entrybox
for the date, and prepopulate the date, volunteer, and visitor fields.
"""
#clear Notes, Vol, & Visitor
self.visit_listbox.grid_forget()
self.visit_scroll.grid_forget()
self.newVisit.grid_forget()
self.editVisit.grid_forget()
self.deleteVisit.grid_forget()
#set date of visit to today
today = datetime.now()
tstr = str(today.month) + "/" + str(today.day) + "/" + str(today.year)
self.visdatev.set(tstr)
self.visitdate.grid(row=8, column=3)
#prepopulate volunteer
self.volv.set(self.volunteerName)
###############self.visv.set(self....?)
#prepopulate visitor (add test to see if this exists, in case of newclient)
self.notescv.config(state='normal')
self.notescv.delete('1.0', END)
self.saveVisit.grid(row=8, column=8, sticky=W)
self.cancelVisit.grid(row=9, column=8, sticky=W)
def editvisitf(self):
"""This function sets up a display identical to the "new visit"
display, but the date, visitor, notes, and volunteer are all
prepopulated with information from the database.
"""
#gridding
self.visit_listbox.grid_forget()
self.visit_scroll.grid_forget()
self.newVisit.grid_forget()
self.editVisit.grid_forget()
self.deleteVisit.grid_forget()
#set volunteer from database
self.volv.set(self.visitDict['volunteers'][self.selectedVisit])
#set visitor from database
self.visv.set(self.visitDict['visitors'][self.selectedVisit])
#set visdatev to Visit Date from database
vdate = self.visitDict['dates'][self.selectedVisit]
self.visdatev.set(vdate)
self.visitdate.grid(row=8, column=3)
self.notescv.config(state='normal')
self.saveVisitE.grid(row=8, column=8, sticky=W)
self.cancelVisit.grid(row=9, column=8, sticky=W)
def cancelvisitf(self):
"""This function will cancel a visit/changes to a visit,
and return to the normal visit display.
"""
self.clearVisits()
d = self.visitDict["dates"]
for i in range(0, len(d)):
self.visit_listbox.insert(i, d[i])
self.visit_listbox.selection_set(0)
self.displayVisit()
def savevisitf(self):
"""This will connect to Update Visit.
"""
try:
notes = str(self.notescv.get('1.0', END))
d = str(self.visdatev.get())
da = d.split('/')
dat = date(month=int(da[0]), day=int(da[1]), year=int(da[2]))
except (ValueError, IndexError):
self.error_popup("Check the visit date!")
return
idlist = self.visitDict['ids']
vid = idlist[self.selectedVisit]
update_vis(vid, dat, notes)
#refresh screen
self.clearVisits()
pid = self.cursel
info = select_client(pid)
self.displayVisitInfo(info)
def deletevisitf(self):
"""This function will delete the selected visit, first asking
the user to confirm the action, and will update the visit display
to reflect the change. This function connects to the "delete visit"
button.
"""
conf = messagebox.askquestion(
title='Confirm Delete',
message='Are you sure you want to delete this visit?')
if conf == 'yes':
idlist = self.visitDict['ids']
vid = idlist[self.selectedVisit]
remove_visit(vid)
#refresh screen
self.clearVisits()
pid = self.cursel
info = select_client(pid)
self.displayVisitInfo(info)
return
else:
return
def cancel_changes(self):
"""This function will clear the display and refill it with
the selected client's information from the database.
"""
self.updateClientDisplay()
self.displayInfo()
return
def quitprogram(self):
"""This function safely closes the database and
interface window.
"""
quit_session()
self.ciGui.destroy()
return
def logoff(self):
"""This function closes the database and interface window,
and returns to the volunteer login page.
"""
quit_session()
self.ciGui.destroy()
cdbvolunteer.VolunteerDisplay()
return
def monthlyReport(self):
generate_monthly_report()
conf = messagebox.showinfo(title='Info',
message='Your report has been generated!')
return
def yearlyReport(self):
generate_yearly_report()
conf = messagebox.showinfo(title='Info',
message='Your report has been generated!')
return
def weeklyReport(self):
generate_weekly_report()
conf = messagebox.showinfo(title='Info',
message='Your report has been generated!')
return
def customReport(self):
"""This function allows the user to enter a start and end date
for generating the report.
"""
cw = cdbcustom.customwindow()
return
def error_popup(self, errmessage):
"""This function implements a simple pop-up window to warn user
about bad data entry.
"""
conf = messagebox.showerror(title='Error', message=errmessage)
def recordVisit(self):
"""This function will insert a new visit, clear old visit
display info, and reset the visit display.
"""
#inserts new visit
try:
vol_id = self.volID #int(self.volv.get())
except ValueError:
self.error_popup("Check volunteer id")
return
#get visit date
try:
dv = (str(self.visdatev.get())).split('/')
dvm = int(dv[0])
dvd = int(dv[1])
dvy = int(dv[2])
vdate = date(year=dvy, month=dvm, day=dvd)
except ValueError:
self.error_popup("Check visit date field!\n Enter: MM/DD/YYYY")
return
#get visit notes
try:
note = self.notescv.get("1.0", END)
except ValueError:
self.error_popup("Uh, oh! Better check the visit info!")
return
#create visitData object, and call function to record new visit
visitInfo = visitData(vol_id, visitDate=vdate, notes=note)
new_visit(self.cursel, visitInfo)
#clears old visit notes
self.clearVisits()
#refreshes visit note display
info = select_client(self.cursel)
self.displayVisitInfo(info)
#"Get All Input and Test It" functions
def getVisitorInput(self, ctype, cID=None):
"""This function tests all of the data for the visitor
entry boxes and returns an object.
"""
#Error checking for visitor's name and phone
try:
fname = str(self.fnv.get())
except ValueError:
self.error_popup("Check visitor's first name!")
return
try:
lname = str(self.lnv.get())
except ValueError:
self.error_popup("Check visitor's last name!")
return
try:
phnum = str(self.phv.get())
except ValueError:
self.error_popup("Check visitor's phone number!")
return
#Error checking for visitor's DOB
try:
month = str(self.mv.get())
dm = self.int_month[month]
except (ValueError, KeyError):
self.error_popup("Check visitor's month of birth!")
return
try:
dd = int(self.dv.get())
except ValueError:
self.error_popup("Check visitor's day of birth!")
return
try:
dy = int(self.yv.get())
except ValueError:
self.error_popup("Check visitor's year of birth!")
return
try:
DOB = date(year=dy, month=dm, day=dd)
except ValueError:
self.error_popup("Was an invalid day of birth chosen?")
return
#Error checking for datejoined
try:
dj = (str(self.datejoinv.get())).split('/')
djm = int(dj[0])
djd = int(dj[1])
djy = int(dj[2])
datejoined = date(year=djy, month=djm, day=djd)
except ValueError:
self.error_popup("Check Date Joined field!\n Enter: MM/DD/YYYY")
return
if ctype == "old":
cd = oldClientData(cID, firstname=fname, lastname=lname,
dob=DOB, phone=phnum, dateJoined=datejoined)
elif ctype == "new":
cd = newClientData(firstname=fname, lastname=lname,
dob=DOB, phone=phnum, dateJoined=datejoined)
return cd
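The Date Joined field above is parsed by splitting an MM/DD/YYYY string and building a `date`. That pattern (also used for visit dates) can be exercised in isolation; `parse_mdy` is an illustrative helper name, not part of the class:

```python
from datetime import date

# Standalone sketch of the MM/DD/YYYY parsing used for the Date Joined
# and visit date fields; raises ValueError on malformed input, which the
# GUI methods above turn into an error popup.
def parse_mdy(text):
    m, d, y = (int(p) for p in text.split('/'))
    return date(year=y, month=m, day=d)

print(parse_mdy('7/4/2015'))  # 2015-07-04
```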
def getMemberInput(self, clist):
"""This function tests all of the input data for members
entry boxes and returns a data object.
"""
#Error checking for datejoined
try:
dj = (str(self.datejoinv.get())).split('/')
djm = int(dj[0])
djd = int(dj[1])
djy = int(dj[2])
datejoined = date(year=djy, month=djm, day=djd)
except ValueError:
self.error_popup("Check Date Joined field!\n Enter: MM/DD/YYYY")
return
#Check to see if any member entry boxes exist
if self.memDict != {}:
mfname = self.memDict["first"]
mlname = self.memDict["last"]
mm = self.memDict["mm"]
dd = self.memDict["dd"]
yy = self.memDict["yy"]
phnum = self.memDict["phone"]
for i in range(0, len(mfname)):
try:
fname = str(mfname[i].get())
except ValueError:
self.error_popup("Check family member "+str(i)+"'s first name!")
return
try:
lname = str(mlname[i].get())
except ValueError:
self.error_popup("Check family member "+str(i)+"'s last name!")
return
try:
phn = str(phnum[i].get())
except ValueError:
self.error_popup("Check family member "+str(i)+"'s phone!")
return
try:
month = str(mm[i].get())
dm = self.int_month[month]
except (ValueError, KeyError):
self.error_popup("Check family member "+str(i)\
+"'s month of birth!")
return
try:
dday = int(dd[i].get())
except ValueError:
self.error_popup("Check family member "+str(i)\
+"'s day of birth!")
return
try:
dy = int(yy[i].get())
except ValueError:
self.error_popup("Check family member "+str(i)\
+"'s year of birth!")
return
try:
DOB = date(year=dy, month=dm, day=dday)
except ValueError:
self.error_popup("Was an invalid day of birth chosen for"\
" family member "+str(i)+"?")
return
ncd = newClientData(firstname=fname, lastname=lname,
dob=DOB, phone=phn, dateJoined=datejoined)
clist.append(ncd)
return clist
def getHouseholdInput(self):
"""This function tests all input for households in the household
entry boxes, and returns a data object.
"""
#get street address
try:
streeta = str(self.adv.get())
except ValueError:
self.error_popup("Check street address!")
return
#get city
try:
citya = str(self.ctyv.get())
except ValueError:
self.error_popup("Check city!")
return
#get state
try:
statea = str(self.stav.get())
except ValueError:
self.error_popup("Check state!")
return
#get zip code
try:
zipa = int(self.zpv.get())
except ValueError:
self.error_popup("Check zip code!")
return
#get apartment number
try:
apta = str(self.apv.get())
except ValueError:
self.error_popup("Check apartment number!")
return
#get date verified
if self.mvv.get() == self.dvv.get() == self.yvv.get() == "":
datev = None
else:
#get month
try:
month = str(self.mvv.get())
vm = self.int_month[month]
except (ValueError, KeyError):
self.error_popup("Check month of date verified!")
return
#get day
try:
vd = int(self.dvv.get())
except ValueError:
self.error_popup("Check day of date verified!")
return
#get year
try:
vy = int(self.yvv.get())
except ValueError:
self.error_popup("Check year of date verified!")
return
#final date testing
try:
datev = date(year=vy, month=vm, day=vd)
except ValueError:
self.error_popup("Was an invalid day for date"\
+" verified chosen?")
return
houseInfo = houseData(street=streeta, city=citya, state=statea,
zip=zipa, apt=apta, dateVerified=datev)
return houseInfo
def getVisitInput(self):
"""This function tests all visit input and returns an object.
"""
#IMPLEMENT get volunteer id
try:
v = str(self.visdatev.get())
vd = v.split('/')
vdate = date(year=int(vd[2]), month=int(vd[0]), day=int(vd[1]))
except ValueError:
self.error_popup("Check the visit date!")
return
#get visit notes
try:
note = self.notescv.get("1.0", END)
except ValueError:
note = None
visitInfo = visitData(Vol_ID=self.volID, visitDate=vdate, notes=note)
return visitInfo
def addNew(self):
"""This function adds a new household to the database.
#NOTE: we need to check checkboxes for dummy addresses
#(domestic violence address, and homeless address)
"""
#Test all input and create newClientData object for visitor;
#abort if any of the "get" helpers reported an error and returned None
cd = self.getVisitorInput("new")
if cd is None:
return
clist = [cd]
newClientInfo_list = self.getMemberInput(clist)
houseInfo = self.getHouseholdInput()
visitInfo = self.getVisitInput()
if newClientInfo_list is None or houseInfo is None or visitInfo is None:
return
#send all objects to new_household function
client_id = new_household(houseInfo, visitInfo, newClientInfo_list)
self.cursel = client_id
#refresh list of clients
self.clientlist = list_people()
#refresh screen
self.displayNewInfo(client_id)
def updateInfo(self, *args):
"""This function will update the visitor's information, the household
information, and the visit information. It will also add family members,
but it will NOT update the family members.
"""
sel_id = self.cursel
nclist = []
cd = self.getVisitorInput("old", cID=sel_id)
oldClientInfo_list = [cd]
houseInfo = self.getHouseholdInput()
newClientInfo_list = self.getMemberInput(nclist)
update_all(sel_id, houseInfo, oldClientInfo_list, newClientInfo_list)
#refresh list of clients
self.clientlist = list_people()
#refresh screen
#self.updateClientDisplay()
self.displayNewInfo(self.cursel)
def nameSearch(self, *args):
"""This function searches the client list for entries matching
the name typed in the search box, and lists the results.
"""
#removes old listbox contents
self.client_listbox.delete(0, END)
del self.id_list[:]
#get user input
name = str(self.ns.get())
nameC = name.capitalize()
#name = str(self.ns.get()).capitalize()
#NOTE:Get lowercase names, too
c = self.clientlist
#find matching names in list
found_clients = []
for i in range(len(c)):
if name in c[i][0] or nameC in c[i][0]:
found_clients.append(c[i])
found_clients.sort()
#listing just the names and addresses of the people
x=[]
for i in range(len(found_clients)):
dobstr=str(found_clients[i][1].month)+\
"/"+str(found_clients[i][1].day)+\
'/'+str(found_clients[i][1].year)
a=str(found_clients[i][0])+" --"+dobstr
x.append(a)
self.id_list.append(found_clients[i][2])
#insert results into listbox
for i in range(len(x)):
self.client_listbox.insert(i,x[i])
return
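The NOTE above ("Get lowercase names, too") points out that matching `name` and `nameC` misses differently-cased entries. A minimal sketch of the case-insensitive match it hints at (sample names are made up):

```python
# Lowercasing both sides catches entries that the name/nameC pair misses.
clients = [('maria lopez',), ('Mark Oliver',), ('DELIA MARSH',)]
query = 'Mar'
matches = [c for c in clients if query.lower() in c[0].lower()]
print(matches)  # all three sample entries match
```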
def runViewMember(self):
"""This function displays the information for a client that
has been selected in the family_listbox.
"""
try:
n = self.family_listbox.curselection()[0]
self.cursel = self.mem_list[n]
info = select_client(self.cursel)
self.displayHouseholdMem(info)
self.displayVisitInfo(info)
self.displayClientInfo(info)
self.displayHouseholdInfo(info)
except IndexError:
pass
return
def removeMemberConfirm(self):
"""This function removes a family member.
"""
try:
n = self.family_listbox.curselection()[0]
tbd = self.mem_list[n]
conf = messagebox.askquestion(
title='Confirm Removal',
message='Are you sure you want to delete this client?')
if conf == 'yes':
remove_client(tbd)
self.updateInfo()
return
else:
return
except IndexError:
pass
return
def configure_background(self, *args):
"""This function takes in a string and, if it matches a
valid color, will set the color of the interface to
the new color.
"""
import tkinter.colorchooser as cc
color = cc.askcolor()
color_name = color[1]
self.bgcolor = color_name
#self.volID = volunteerID #the id of the volunteer who logged in
#self.volunteerName = volunteerName
#self.bgcolor = bgcolor
update_vol(vol_id=self.volID, color=color_name)
self.ciGui.configure(background=self.bgcolor)
#for i in range(0, len(self.alllabs)):
# self.alllabs[i].configure(bg=self.bgcolor)
#self.cslabel.configure(self.gridframe, bg=self.bgcolor)
#for lab in self.alllabs:
# lab.config(background=self.bgcolor)
return
import matplotlib.path as mplPath
from abc import ABCMeta
import abc
from protodata.utils import read_json
from protodata.columns import create_image_column
import numpy as np
import os
import tensorflow as tf
import logging
logger = logging.getLogger(__name__)
""" General functions for data manipulation """
class TrainMode(object):
WIDE = 'wide'
DEEP = 'deep'
CNN = 'cnn'
WIDE_AND_DEEP = 'wide_and_deep'
ALL = 'wide_deep_cnn'
class DataMode(object):
TRAINING = 'training'
VALIDATION = 'validation'
TEST = 'testing'
""" Data filename pattern """
def get_filename(name_tag, shard, num_shards):
""" Returns the format of the record file names given the
data tag, the current shard and the total amount of shards"""
return '%s-%.5d-of-%.5d' % (name_tag, shard, num_shards)
def get_filename_pattern(folder, tag):
""" Returns the pattern to read record files """
return os.path.join(folder, '%s-*' % str(tag))
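A quick check of the shard-naming convention implemented by `get_filename` above, e.g. shard 3 of 128 of the training split:

```python
# Same formatting as get_filename: tag, zero-padded shard, zero-padded total.
def get_filename(name_tag, shard, num_shards):
    return '%s-%.5d-of-%.5d' % (name_tag, shard, num_shards)

print(get_filename('training', 3, 128))  # training-00003-of-00128
```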
""" Neighborhood processing functions """
def read_city_data(path):
""" Maps the neighborhoods of the city from the provided data
so each entry contains a map between the neighborhood and the
polygon delimiting it """
raw_cities = read_json(path)
cities = {}
for n in raw_cities['features']:
coords = np.array([c for c in n['geometry']['coordinates'][0][0]])
# Make sure coordinates have form Nx2
if coords.shape[1] == 3:
coords = coords[:, :-1]
if coords.shape[1] != 2:
raise ValueError('Coordinates have depth %d and should have 2'
% coords.shape[1])
# Map into polygon and add to dictionary
neigh_name = n['properties']['neighbourhood']
bb = mplPath.Path(coords)
cities.update({neigh_name: bb})
return cities
def get_neighborhood(neighs, longitude, latitude):
""" Returns the neighborhood of the input point. We assume that a
point can only be contained in one neighborhood (otherwise data is broken)
Args:
neighs: Dictionary mapping neighborhoods and their area
longitude: Longitude of the input point
latitude: Latitude of the input point
Returns:
name: Name of the neighborhood the point belongs to or None if no
valid neighborhood found
"""
for neigh, bb in neighs.items():
if bb.contains_point((longitude, latitude)):
return neigh
return None
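A toy run of the point-in-polygon lookup above, using a single made-up square neighborhood ('Centre' is illustrative, not real data):

```python
import matplotlib.path as mplPath

# One hypothetical neighborhood: the unit square from (0,0) to (1,1).
neighs = {'Centre': mplPath.Path([(0., 0.), (1., 0.), (1., 1.), (0., 1.)])}

def get_neighborhood(neighs, longitude, latitude):
    # First polygon containing the point wins; None if no polygon matches.
    for neigh, bb in neighs.items():
        if bb.contains_point((longitude, latitude)):
            return neigh
    return None

print(get_neighborhood(neighs, 0.5, 0.5))  # Centre
print(get_neighborhood(neighs, 2.0, 2.0))  # None
```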
""" Data normalization functions """
def quantile_norm(val, edges, nq):
""" Normalizes continuous values so they are converted by the formula:
x = i/(nq - 1)
Where i is the quartile number [0, nq) the input value falls in
the feature distribution and nq is the number of quartiles used.
Args:
val: Input value
edges: Histogram edges
nq: Number of quantiles
"""
if val >= edges[-1]:
quantile = len(edges)
elif val <= edges[0]:
quantile = 0
else:
quantile = np.argmax(edges >= val)
return quantile / (nq - 1.0)
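A worked run of `quantile_norm`: with `nq = 5` quantiles, `quantile_normalization` below produces `nq - 1 = 4` histogram edges, and values map into [0, 1]:

```python
import numpy as np

# Same logic as quantile_norm above: find which quantile the value falls
# in, then scale the quantile index into [0, 1].
def quantile_norm(val, edges, nq):
    if val >= edges[-1]:
        quantile = len(edges)
    elif val <= edges[0]:
        quantile = 0
    else:
        quantile = np.argmax(edges >= val)
    return quantile / (nq - 1.0)

edges = np.array([0., 1., 2., 3.])
print(quantile_norm(-1., edges, 5))  # 0.0 (at or below the lowest edge)
print(quantile_norm(1.5, edges, 5))  # 0.5 (first edge >= 1.5 has index 2)
print(quantile_norm(9., edges, 5))   # 1.0 (at or above the highest edge)
```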
def feature_normalize(dataset):
""" Normalizes feature and returns the mean, deviation and minimum and
maximum values """
mu = np.mean(dataset, axis=0)
sigma = np.std(dataset, axis=0)
min_c, max_c = np.min(dataset, axis=0), np.max(dataset, axis=0)
return mu, sigma, min_c, max_c
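For reference, `feature_normalize` applied to a tiny one-column dataset:

```python
import numpy as np

# Per-column mean, standard deviation, min and max, as in feature_normalize.
def feature_normalize(dataset):
    mu = np.mean(dataset, axis=0)
    sigma = np.std(dataset, axis=0)
    min_c, max_c = np.min(dataset, axis=0), np.max(dataset, axis=0)
    return mu, sigma, min_c, max_c

data = np.array([[1.], [1.], [4.], [4.]])
mu, sigma, min_c, max_c = feature_normalize(data)
print(mu, sigma, min_c, max_c)  # [2.5] [1.5] [1.] [4.]
```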
""" Tensorflow helpers """
def get_interval_mask(low, high, values):
""" Returns the boolean mask such that True values are those x which
are in interval low <= x < high """
# Since greater_equal and less support broadcasting, we can do this
low_t, high_t = tf.greater_equal(values, low), tf.less(values, high)
return tf.logical_and(low_t, high_t)
def copy_columns(x, num):
""" Replicates the input tensor x by concatenating its columns num times"""
# Create replicas into a 1D vector
vector = tf.tile(x, tf.stack([num]))
# Transpose and reshape
return tf.transpose(tf.reshape(vector, tf.stack([num, -1])))
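A hedged numpy analogue of what `copy_columns` computes (the tile → reshape → transpose chain): each input value becomes a row repeated across `num` columns.

```python
import numpy as np

def copy_columns_demo(x, num):
    vector = np.tile(x, num)                # replicas in a 1-D vector
    return np.reshape(vector, (num, -1)).T  # reshape, then transpose

out = copy_columns_demo(np.array([1, 2, 3]), 2)
print(out)
# [[1 1]
#  [2 2]
#  [3 3]]
```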
""" Pandas helpers """
def get_column_info(data, excluded=[]):
""" Returns the zipped pair of columns and corresponding numpy type
for columns of interest """
col_names = [i for i in data.columns.values if i not in excluded]
col_types = [data[i].dtype for i in col_names if i not in excluded]
return list(zip(col_names, col_types))


def quantile_normalization(data, train_ind, nq, excluded=[]):
    """ Normalizes into [0,1] numeric values in the dataset using
    quantile normalization as described in:
        Wide & Deep Learning for Recommender Systems.
        Cheng et al (2016)
        [https://arxiv.org/abs/1606.07792]
    Args:
        data: Pandas dataframe
        train_ind: Instances in the training set
        nq: Number of quantiles to use
        excluded: Columns to ignore in the normalization process
    Returns:
        data: Normalized dataset given the mean and standard deviation
            extracted from the training
        d: Dictionary indexed by column name where each entry contains
            the cuts for a numeric feature
    """
    d = {}
    for (name, dtype) in get_column_info(data, excluded=excluded):
        if is_numeric(dtype):
            logger.debug('Normalizing column %s' % name)
            # Compute histogram edges for training data. Note: iloc takes
            # positional indices only, so select the column by name first
            train_content = data[name].iloc[train_ind]
            hist, edges = np.histogram(train_content, bins=nq - 2)
            edges = np.array(edges)
            # Store entry in dictionary
            d[name] = edges
            # Update column
            data[name] = data[name].apply(lambda x: quantile_norm(x,
                                                                  edges,
                                                                  nq))
    return data, d


def z_scores(data, train_ind, excluded=[]):
    """ Returns a dictionary containing the min, max, mean and standard
    deviation for the numeric columns in the dataframe
    Args:
        data: Pandas dataframe
        train_ind: List of instance indices belonging to the train set
        excluded: Columns to ignore in the normalization process
    Returns:
        data: Normalized dataset given the mean and standard deviation
            extracted from the training
        d: Dictionary indexed by numeric column name where each entry
            contains its mean and std
    """
    d = {}
    for (name, dtype) in get_column_info(data, excluded=excluded):
        if is_numeric(dtype):
            # Compute mean and std on training (to_numpy replaces the
            # deprecated as_matrix)
            train_content = data[name].iloc[train_ind].to_numpy()
            mean, std, min_c, max_c = feature_normalize(train_content)
            # Store entry in dictionary
            d[name] = {'mean': mean, 'std': std, 'min': min_c, 'max': max_c}
            # Update column
            data[name] = (data[name] - mean) / std
    return data, d
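A minimal sketch of the z-score step on one column: the statistics come from a "training" subset only and are then applied to the full column (the toy values below are made up):

```python
import numpy as np

train = np.array([1.0, 2.0, 3.0, 4.0])
mean, std = np.mean(train), np.std(train)

full = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
normalized = (full - mean) / std
print(mean, std)      # 2.5 and sqrt(1.25) ~ 1.118
print(normalized[0])  # (1.0 - 2.5) / 1.118... ~ -1.342
```

Computing the statistics on the training split alone is what prevents information from the validation/test rows leaking into the normalization.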


def normalize_data(data, train_ind, zscores=True, excluded=[], nq=5):
    """ Normalizes the input
    Args:
        data: Pandas dataframe
        train_ind: List of instance indices belonging to the train set
        zscores: Whether to use z-scores normalization (True) or
            quantile normalization (False)
        excluded: Columns to ignore in the normalization process
        nq: Number of quantiles. Only used if zscores is False.
    Returns:
        data: Normalized dataset
        d: Dictionary containing metadata of the normalization. For z-scores
            it contains mean, min, max and standard deviation of each numeric
            column. For quantile norm, it contains the edges of the histogram
    """
    if zscores:
        return z_scores(data, train_ind=train_ind, excluded=excluded)
    else:
        return quantile_normalization(data,
                                      train_ind=train_ind,
                                      nq=nq,
                                      excluded=excluded)


def to_dummy(data, name):
    """ Converts a categorical column into dummy binary columns
    (one for each possible value) """
    def is_equal(x, other):
        return x == other

    logger.debug('Converting %s into dummy column ...' % name)
    for val in data[name].unique():
        # Obtain a valid string formatted name for the new column
        col_name = '_'.join([name, val])
        dummy_name = erase_special(unicode_to_str(col_name))
        # Convert according to whether it is equal or not to current value
        data[dummy_name] = data[name].apply(lambda x: is_equal(x, val))
    # Erase original column
    return data.drop(name, axis=1)
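A hedged sketch mirroring what `to_dummy` does, on a toy frame (the column name `color` and its values are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'color': ['red', 'blue', 'red']})
for val in df['color'].unique():
    # One boolean column per distinct value, named like the original helper
    df['color_' + val] = df['color'].apply(lambda x, v=val: x == v)
df = df.drop('color', axis=1)
print(df['color_red'].tolist())   # [True, False, True]
print(df['color_blue'].tolist())  # [False, True, False]
```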


def convert_to_dummy(data, excluded_columns=[]):
    """ Converts categorical variables into dummy binary variables
    Args:
        data: pandas dataframe
        excluded_columns: columns to ignore.
    """
    for (feat_name, feat_type) in get_column_info(data,
                                                  excluded=excluded_columns):
        if feat_type == np.dtype('object'):
            data = to_dummy(data, feat_name)
    return data


def convert_boolean(data, excluded_columns=[], func=float):
    """ Converts the boolean columns in the dataset into the desired type
    Args:
        data: pandas Dataframe
        excluded_columns: Columns to exclude in the conversion process
        func: Type (and function) to use to convert booleans to
            (e.g. float, int). Default is float
    """
    for (c, t) in get_column_info(data, excluded=excluded_columns):
        if t == np.dtype('bool'):
            logger.debug('Converting boolean column %s to numeric' % c)
            data[c] = data[c].apply(func)
    return data


def is_categorical(type_def):
    """ Whether the input column type represents a categorical
    column (True) or not (False) """
    return type_def == np.dtype('object')


def is_numeric(type_def):
    """ Whether the column represents a numerical value (True) or not (False) """
    return np.issubdtype(type_def, np.number)


def is_bool(type_def):
    """ Whether the column represents a boolean value """
    return np.issubdtype(type_def, bool) and not np.issubdtype(type_def,
                                                               np.number)
""" TF Serialization wrapper operations """
def int64_feature(value):
""" Wrapper for int64 proto features """
value = [value] if not isinstance(value, list) else value
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
def float64_feature(value):
""" Wrapper for floay64 proto features """
value = [value] if not isinstance(value, list) else value
return tf.train.Feature(float_list=tf.train.FloatList(value=value))
def bytes_feature(value):
""" Wrapper for byte proto features """
# Ensure we have a list of elements
value = [value] if not isinstance(value, list) else value
# Convert string into bytes if found
for i in range(len(value)):
if isinstance(value[i], str):
value[i] = str.encode(value[i])
return tf.train.Feature(bytes_list=tf.train.BytesList(value=value))


def map_feature(value, f_type):
    """ Builds the Tensorflow feature for the given feature information """
    if f_type == np.dtype('object'):
        return bytes_feature(value)
    elif f_type == np.dtype('int'):
        return int64_feature(value)
    elif f_type == np.dtype('float'):
        return float64_feature(value)
    elif f_type == np.dtype('bool'):
        return int64_feature(value.astype('int'))
    else:
        raise ValueError('Do not know how to store value {} with type {}'
                         .format(value, f_type))


def map_feature_type(np_type):
    """ Maps numpy types into accepted Tensorflow feature
    types (int64, float32 and string) """
    if np.issubdtype(np_type, np.integer):
        return tf.int64
    elif np.issubdtype(np_type, np.float):
        return tf.float32
    elif np_type == np.bool:
        return tf.int64
    elif np_type == np.object:
        return tf.string
    else:
        # Format the whole message (the original applied .format() to the
        # second string literal only, so {} was never substituted)
        raise TypeError('Could not map type {} into a '
                        'correct Tensorflow type'.format(np_type))


def str_to_unicode(x):
    """ Encodes input values into UTF-8 """
    if isinstance(x, list):
        return [to_utf8(i) for i in x]
    else:
        return to_utf8(x)


def to_utf8(x):
    """ Encodes the input value into UTF-8 """
    if isinstance(x, unicode):  # noqa
        return x  # Already unicode, no need for conversion
    elif isinstance(x, str):
        return x.decode('utf-8')
    else:
        raise ValueError('Input value must be in string format')


def erase_special(x):
    """ Removes special characters and spaces from the input string """
    return ''.join([i for i in x if i.isalnum()])


def unicode_to_str(x):
    """ Decodes input values into byte strings """
    if isinstance(x, list):
        return [to_str(i) for i in x]
    else:
        return to_str(x)


def to_str(x):
    """ Encodes the input value into a UTF-8 byte string """
    if isinstance(x, str):
        return x  # Already a string, no need for conversion
    elif isinstance(x, unicode):  # noqa
        return x.encode('utf-8')
    else:
        raise ValueError('Input value must be in unicode format')


def split_data(total, train_ratio, val_ratio):
    """ Splits the total number of instances into training, validation
    and testing and returns the corresponding indices for each set """
    # Compute instance number per set
    train_num = int(total * train_ratio)
    val_num = int(total * val_ratio)
    # Get random ordering and extract indices
    permutation = np.random.permutation(total)
    train = permutation[:train_num]
    val = permutation[train_num:train_num + val_num]
    test = permutation[train_num + val_num:]
    return train, val, test
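A self-contained sketch of the split logic with illustrative ratios; all three index sets come from one shared random permutation, so they partition the data without overlap:

```python
import numpy as np

def split_data_demo(total, train_ratio, val_ratio):
    train_num = int(total * train_ratio)
    val_num = int(total * val_ratio)
    permutation = np.random.permutation(total)
    return (permutation[:train_num],
            permutation[train_num:train_num + val_num],
            permutation[train_num + val_num:])

train, val, test = split_data_demo(10, 0.6, 0.2)
print(len(train), len(val), len(test))  # 6 2 2
```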


class ExampleColumn(object):
    """ Represents a column from a serialized proto Example """
    __metaclass__ = ABCMeta

    def __init__(self, name, type):
        self.name = name
        self.type = type

    @abc.abstractmethod
    def get_feature(self):
        """ Returns the feature type according to each column """

    @abc.abstractmethod
    def to_column(self, args=None):
        """ Returns the categorical definition of the column """


class NumericColumn(ExampleColumn):
    """ Real-valued column """

    def __init__(self, name, type, length=1):
        """ Args:
            length: Length of the feature value. Default is 1
        """
        super(NumericColumn, self).__init__(name, type)
        self.length = length

    def get_feature(self):
        return tf.FixedLenFeature([self.length], self.type)

    def to_column(self, args=None):
        return tf.contrib.layers.real_valued_column(self.name,
                                                    self.length)

    def categorize(self, args=None):
        real_valued = tf.contrib.layers.real_valued_column(self.name)
        return tf.contrib.layers.bucketized_column(real_valued,
                                                   boundaries=args)


class SparseColumn(ExampleColumn):
    """ Categorical column """

    def __init__(self, name, type, keys=5):
        """
        Args:
            keys: Number of hash buckets to use or list of keys
                present in the column.
        """
        super(SparseColumn, self).__init__(name, type)
        self.keys = keys

    def get_feature(self):
        return tf.VarLenFeature(self.type)

    def to_column(self, args=None):
        if isinstance(self.keys, list):
            return tf.contrib.layers.sparse_column_with_keys(self.name,
                                                             self.keys)
        elif isinstance(self.keys, int):
            return tf.contrib.layers.sparse_column_with_hash_bucket(self.name,
                                                                    self.keys)
        else:
            raise ValueError('Unexpected type of keys/bucket size of {}'
                             .format(self.keys))

    def to_embedding(self, dims):
        """ Returns an embedding column out of the Sparse column """
        return tf.contrib.layers.embedding_column(self.to_column(),
                                                  dimension=dims)


class ImageColumn(ExampleColumn):
    """ Column that represents an image input """

    def __init__(self, name, format):
        super(ImageColumn, self).__init__(name, tf.string)
        self.format = format

    def get_feature(self):
        return tf.FixedLenFeature([], dtype=self.type)

    def to_column(self, args=None):
        return create_image_column(name=self.name, dims=args)


def create_cross_column(cols, bucket_size=int(1e4)):
    """ Returns a cross column that has the values of the cartesian product
    of the input categorical columns """
    return tf.contrib.layers.crossed_column(cols, hash_bucket_size=bucket_size)
| 31.960452 | 82 | 0.627541 | 2,203 | 16,971 | 4.725374 | 0.189741 | 0.013833 | 0.010086 | 0.012104 | 0.269741 | 0.226801 | 0.179443 | 0.1439 | 0.115946 | 0.096254 | 0 | 0.006959 | 0.280243 | 16,971 | 530 | 83 | 32.020755 | 0.845272 | 0.328384 | 0 | 0.170732 | 0 | 0 | 0.051506 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.182927 | false | 0 | 0.036585 | 0.02439 | 0.49187 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
76214af8fcb04bb2b3e07a36313d492cfdc3f3d5 | 8,619 | py | Python | pages/views.py | Arden-Zeng/Jasper | 8859523d11cb70e4d6a635ca9caf7f85c7670fe7 | [
"MIT"
] | null | null | null | pages/views.py | Arden-Zeng/Jasper | 8859523d11cb70e4d6a635ca9caf7f85c7670fe7 | [
"MIT"
] | null | null | null | pages/views.py | Arden-Zeng/Jasper | 8859523d11cb70e4d6a635ca9caf7f85c7670fe7 | [
"MIT"
] | null | null | null | import time
import re
from django.http.response import JsonResponse
from django.shortcuts import redirect, render
from django.contrib.auth import authenticate, login, logout
from django.contrib import messages
from templates.forms import CreateUserForm
from users.models import jasperUser
from reddit.models import Post
from reddit.models import PostContainer
from reddit.models import NewsCatagory

# Create your views here.
default_news_cat = ["gnews", "tech", "cnews"]


def home_page(request):
    global default_news_cat
    display_threads = []
    saved_posts = set()
    if request.user.is_authenticated:
        if jasperUser.objects.filter(user=request.user).exists():
            jUser = jasperUser.objects.get(user=request.user)
        else:
            jUser = createJUser(request.user)
        newscats = jUser.followed_threads.all()[::1]
        for cat in newscats:
            display_threads.extend(cat.subreddit_set.all()[::1])
        saved_post_container = jUser.saved_posts.all()[::1]
        for post_container in saved_post_container:
            saved_posts.add(post_container.saved_post)
    else:
        for newscat in default_news_cat:
            display_threads.extend(
                NewsCatagory.objects.get(name=newscat).subreddit_set.all()[::1])
    display_posts = []
    for subreddit in display_threads:
        display_posts.extend(subreddit.post_set.all()[::1])
    display_posts.sort(key=lambda x: x.time, reverse=True)
    context = {'posts': display_posts, 'saved_posts': saved_posts,
               'searchTerm': "Search Jasper"}
    return render(request, "index.html", context)


def profile_page(request):
    if not request.user.is_authenticated:
        return redirect('login_page')
    jUser = jasperUser.objects.get(user=request.user)
    saved_post_container = jUser.saved_posts.all()[::1]
    saved_post_container.sort(key=lambda x: x.save_time, reverse=True)
    saved_posts = []
    for post_container in saved_post_container:
        saved_posts.append(post_container.saved_post)
    context = {'posts': saved_posts, 'saved_posts': saved_posts,
               'searchTerm': "Search Jasper"}
    return render(request, "profile.html", context)


def search_page(request, searchTerm):
    userSearch = re.sub(r'[^a-zA-Z0-9 \n\.]', '', searchTerm)
    searchTerms = userSearch.split(" ")
    posts = Post.objects.all()
    for term in searchTerms:
        posts = posts.filter(title__icontains=term)
    saved_posts = set()
    if request.user.is_authenticated:
        jUser = jasperUser.objects.get(user=request.user)
        saved_post_container = jUser.saved_posts.all()[::1]
        for post_container in saved_post_container:
            saved_posts.add(post_container.saved_post)
    context = {'posts': posts, 'searchTerm': searchTerm,
               'saved_posts': saved_posts}
    return render(request, "search.html", context)


def register_page(request, *args, **kwargs):
    if request.user.is_authenticated:
        return redirect('home_page')
    form = CreateUserForm()
    if request.method == "POST":
        form = CreateUserForm(request.POST)
        if form.is_valid():
            form.save()
            username = request.POST['username']
            password = request.POST['password1']
            user = authenticate(request, username=username, password=password)
            createJUser(user)
            login(request, user)
            return redirect('home_page')
    context = {'form': form, 'searchTerm': "Search Jasper"}
    return render(request, 'register.html', context)


def login_page(request, *args, **kwargs):
    if request.user.is_authenticated:
        return redirect('home_page')
    if request.method == "POST":
        username = str(request.POST['username'])
        password = str(request.POST['password'])
        user = authenticate(request, username=username, password=password)
        if user is not None:
            login(request, user)
            return redirect('home_page')
        else:
            # Return an 'invalid login' error message.
            messages.success(request, "Incorrect Username/Password.")
            return redirect('login_page')
    context = {'searchTerm': "Search Jasper"}
    return render(request, "login.html", context)


def logout_page(request):
    logout(request)
    return redirect('home_page')


def save_post(request):
    if request.method == "POST":
        try:
            postid = str(request.POST['postid'])
            post = Post.objects.get(id=postid)
            jUser = jasperUser.objects.get(user=request.user)
            if jUser.saved_posts.filter(saved_post_id=postid).exists():
                jUser.saved_posts.remove(
                    jUser.saved_posts.get(saved_post_id=postid))
            else:
                newPostContainer = PostContainer.objects.create(
                    saved_post=post, save_time=int(time.time()))
                jUser.saved_posts.add(newPostContainer)
            return JsonResponse({"valid": True}, status=200)
        except Exception:
            pass
    return redirect('home_page')


def modify_interest(request):
    if request.method == "POST":
        jUser = jasperUser.objects.get(user=request.user)
        interest = str(request.POST['interest'])
        valid_interests = {"gnews", "cnews", "cpoli", "apoli", "econ",
                           "tech", "env", "sci", "enter"}
        # Every branch of the original match statement toggled membership of
        # the matching category, so a single validated lookup is equivalent
        if interest in valid_interests:
            category = NewsCatagory.objects.get(name=interest)
            if jUser.followed_threads.filter(name=interest).exists():
                jUser.followed_threads.remove(category)
            else:
                jUser.followed_threads.add(category)
    return JsonResponse({"valid": True}, status=200)


def createJUser(user):
    jUser = jasperUser.objects.create(user=user)
    for catagory in default_news_cat:
        newscat = NewsCatagory.objects.get(name=catagory)
        jUser.followed_threads.add(newscat)
    return jUser
76220faca79cd7e5333e588e1c9aca8746739eee | 3,253 | py | Python | src/bite/args/bitbucket.py | radhermit/bite | 2aa2cc5db2ab0e119965741080323bfae962b787 | [
"BSD-3-Clause"
] | 3 | 2018-06-05T04:10:13.000Z | 2018-06-07T04:18:10.000Z | src/bite/args/bitbucket.py | radhermit/bite | 2aa2cc5db2ab0e119965741080323bfae962b787 | [
"BSD-3-Clause"
] | 24 | 2018-04-08T10:32:28.000Z | 2018-06-21T15:47:36.000Z | src/bite/args/bitbucket.py | radhermit/bite | 2aa2cc5db2ab0e119965741080323bfae962b787 | [
"BSD-3-Clause"
] | 1 | 2018-04-04T02:07:33.000Z | 2018-04-04T02:07:33.000Z | from functools import partial
from .. import args
from ..argparser import ParseStdin


class BitbucketOpts(args.ServiceOpts):
    """Bitbucket"""

    _service = 'bitbucket'


class Search(args.Search, BitbucketOpts):

    def add_args(self):
        super().add_args()
        # optional args
        self.opts.add_argument(
            '--sort', metavar='TERM',
            help='sorting order for search query',
            docs="""
                Requested sorting order for the given search query.

                Only one field can be sorted on for a query; compound field
                sorting is not supported.

                Sorting in descending order can be done by prefixing a given
                sorting term with '-'; otherwise, sorting is done in an
                ascending fashion by default.
            """)

        time = self.parser.add_argument_group('Time related')
        time.add_argument(
            '-c', '--created', type='time interval', metavar='TIME_INTERVAL',
            help=f'{self.service.item.type}s created within a specified time interval')
        time.add_argument(
            '-m', '--modified', type='time interval', metavar='TIME_INTERVAL',
            help=f'{self.service.item.type}s modified within a specified time interval')

        attr = self.parser.add_argument_group('Attribute related')
        attr.add_argument(
            '--id', type='id_list', action=partial(ParseStdin, 'ids'),
            help='restrict by issue ID(s)')
        attr.add_argument(
            '-p', '--priority', type='str_list', action='parse_stdin',
            help='restrict by priority (one or more)',
            docs="""
                Restrict issues returned by their priority.

                Multiple priorities can be entered as comma-separated values in
                which case results can match any of the given values.
            """)
        attr.add_argument(
            '-s', '--status', type='str_list', action='parse_stdin',
            help='restrict by status (one or more)',
            docs="""
                Restrict issues returned by their status.

                Multiple statuses can be entered as comma-separated values in
                which case results can match any of the given values.
            """)
        attr.add_argument(
            '--type', type='str_list', action='parse_stdin',
            help='restrict by type (one or more)',
            docs="""
                Restrict issues returned by their type.

                Multiple types can be entered as comma-separated values in
                which case results can match any of the given values.
            """)
        attr.add_argument(
            '--votes',
            help='restrict by number of votes or greater')
        attr.add_argument(
            '--watchers',
            help='restrict by number of watchers or greater')
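The `str_list` options above accept comma-separated values that may match any of the given entries. A hedged sketch of that parsing step (the real `str_list` type registration lives elsewhere in bite; this helper name is made up):

```python
def parse_str_list(value):
    # Split on commas and drop surrounding whitespace and empty items
    return [v.strip() for v in value.split(',') if v.strip()]

print(parse_str_list('open,closed'))  # ['open', 'closed']
```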


class Get(args.Get, BitbucketOpts):

    def add_args(self):
        super().add_args(history=True)


class Attachments(args.Attachments, BitbucketOpts):

    def add_args(self):
        super().add_args(id_map='id_str_maps', item_id=False)


class Comments(args.Comments, BitbucketOpts):
    pass


class Changes(args.Changes, BitbucketOpts):
    pass
76222e49e5bfa6989272b4022d010ae3da6259c7 | 1,196 | py | Python | Voicelab/toolkits/Voicelab/MeasureIntensityNode.py | KorewaLidesu/VoiceLab | afae638b94fd6bb52d6269122636a6f125651f13 | [
"MIT"
] | 27 | 2020-06-11T11:51:55.000Z | 2022-03-29T20:21:36.000Z | Voicelab/toolkits/Voicelab/MeasureIntensityNode.py | KorewaLidesu/VoiceLab | afae638b94fd6bb52d6269122636a6f125651f13 | [
"MIT"
] | 4 | 2020-07-01T13:17:26.000Z | 2022-03-14T15:07:31.000Z | Voicelab/toolkits/Voicelab/MeasureIntensityNode.py | KorewaLidesu/VoiceLab | afae638b94fd6bb52d6269122636a6f125651f13 | [
"MIT"
] | 6 | 2021-07-01T09:35:22.000Z | 2022-03-18T00:34:32.000Z | import parselmouth
from Voicelab.pipeline.Node import Node
from parselmouth.praat import call
from Voicelab.toolkits.Voicelab.VoicelabNode import VoicelabNode


class MeasureIntensityNode(VoicelabNode):
    def __init__(self, *args, **kwargs):
        """
        Args:
            *args:
            **kwargs:
        """
        super().__init__(*args, **kwargs)
        self.args = {"minimum_pitch": 100}

    ###########################################################################
    # process: WARIO hook called once for each voice file.
    ###########################################################################
    def process(self):
        voice = self.args["voice"]
        try:
            minimum_pitch = self.args["minimum_pitch"]
            intensity = voice.to_intensity(minimum_pitch)
            mean_intensity = intensity.get_average()
            return {
                "Intensity": intensity,
                "Mean Intensity (dB)": mean_intensity,
            }
        except Exception:
            return {
                "Mean Intensity (dB)": "Intensity measure failed"
            }
76237579317f8ab7d373dde1a00c10efe17fe4ab | 2,907 | py | Python | skylib/calibration/cosmic.py | SkynetRTN/skylib | 58fe57053db6a048f8a72d7b453ae411a2302545 | [
"Apache-2.0"
] | 1 | 2022-02-17T19:59:14.000Z | 2022-02-17T19:59:14.000Z | skylib/calibration/cosmic.py | SkynetRTN/skylib | 58fe57053db6a048f8a72d7b453ae411a2302545 | [
"Apache-2.0"
] | null | null | null | skylib/calibration/cosmic.py | SkynetRTN/skylib | 58fe57053db6a048f8a72d7b453ae411a2302545 | [
"Apache-2.0"
] | null | null | null | """
Cosmic ray correction based on Astroscrappy
correct_cosmics(): remove cosmic rays from a FITS image
"""
from __future__ import division, print_function


__all__ = ['correct_cosmics']


def correct_cosmics(img, detect=False, sigclip=4.5, sigfrac=0.3, objlim=5.0,
                    gain=1.0, readnoise=10.0, satlevel=65535, niter=4,
                    sepmed=True, cleantype='meanmask', fsmode='median',
                    psfmodel='gauss', psffwhm=2.5, psfk=None, psfsize=7,
                    psfbeta=4.765):
    """
    Remove the tracks of cosmic rays from a FITS image using Curtis
    McCully's Astroscrappy package, based on the L.A.Cosmic algorithm by
    Pieter van Dokkum

    :param astropy.io.fits.HDUList img: input image; modified in place
    :param bool detect: return a cosmics-only image instead of a cleaned input
        image
    :param float sigclip: Laplacian-to-noise limit
    :param float sigfrac: fractional detection limit for neighboring pixels
    :param float objlim: minimum contrast between Laplacian image and fine
        structure image
    :param float gain: CCD inverse gain [e-/ADU]
    :param float readnoise: CCD readout noise [e-]
    :param float satlevel: CCD saturation level [ADU]
    :param int niter: number of iterations of L.A.Cosmic
    :param bool sepmed: use the separable median filter instead of the full one
    :param str cleantype: clean algorithm to use: "median", "medmask",
        "meanmask", "idw"
    :param str fsmode: method to build the fine structure image: "median" or
        "convolve"
    :param str psfmodel: model to use to generate the PSF kernel for fsmode ==
        "convolve" and psfk == None: "gauss", "gaussx", "gaussy", "moffat"
    :param float psffwhm: FWHM of the generated PSF [pixels]
    :param int psfsize: size of the generated PSF kernel [pixels]
    :param array_like psfk: PSF kernel array to use if fsmode == "convolve"; if
        None, use psfmodel, psffwhm, and psfsize to calculate the kernel
    :param float psfbeta: beta parameter for Moffat kernel

    :return: None
    """
    from numpy import uint8
    from astroscrappy import detect_cosmics

    for hdu in img:
        if not hdu.header.get('CRCORR', False):
            crmask, cleanarr = detect_cosmics(
                hdu.data.astype(float), sigclip=sigclip, sigfrac=sigfrac,
                objlim=objlim, gain=gain, readnoise=readnoise,
                satlevel=satlevel, pssl=0.0, niter=niter, sepmed=sepmed,
                cleantype=cleantype, fsmode=fsmode, psfmodel=psfmodel,
                psffwhm=psffwhm, psfsize=psfsize, psfk=psfk, psfbeta=psfbeta)
            if detect:
                hdu.data = crmask.astype(uint8)
                hdu.header['CRONLY'] = (True, 'cosmic ray image')
            else:
                hdu.data = cleanarr
                hdu.header['CRCORR'] = (True, 'cosmics removed')
7623f82bcdfc245834d11701353c47621419d271 | 5,407 | py | Python | models/weave_vgg.py | No43problem/SSD_Pytorch | ddc548824bffbc83b540a68b176ee0261b133ee0 | [
"MIT"
] | 163 | 2018-08-01T13:03:30.000Z | 2022-01-21T09:22:23.000Z | models/weave_vgg.py | No43problem/SSD_Pytorch | ddc548824bffbc83b540a68b176ee0261b133ee0 | [
"MIT"
] | 39 | 2018-08-31T02:45:46.000Z | 2021-09-18T05:40:48.000Z | models/weave_vgg.py | No43problem/SSD_Pytorch | ddc548824bffbc83b540a68b176ee0261b133ee0 | [
"MIT"
] | 58 | 2018-08-24T08:02:37.000Z | 2021-08-28T05:19:29.000Z | # -*- coding: utf-8 -*-
# Written by yq_yao
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.nn.init as init
from models.model_helper import FpnAdapter, WeaveAdapter, weights_init, WeaveAdapter2
# from model_helper import FpnAdapter, WeaveAdapter, weights_init, WeaveAdapter2


class L2Norm(nn.Module):
    def __init__(self, n_channels, scale):
        super(L2Norm, self).__init__()
        self.n_channels = n_channels
        self.gamma = scale or None
        self.eps = 1e-10
        self.weight = nn.Parameter(torch.Tensor(self.n_channels))
        self.reset_parameters()

    def reset_parameters(self):
        init.constant_(self.weight, self.gamma)

    def forward(self, x):
        norm = x.pow(2).sum(dim=1, keepdim=True).sqrt() + self.eps
        x = x / norm
        out = self.weight.unsqueeze(0).unsqueeze(2).unsqueeze(3).expand_as(
            x) * x
        return out
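A hedged numpy sketch of the channel-wise L2 normalization that `L2Norm` applies (toy tensor with one "pixel" and two channels; the scalar `gamma` stands in for the learned per-channel weight):

```python
import numpy as np

x = np.array([[3.0, 4.0]])  # one spatial position, two channels
# L2 norm across the channel axis, with a small eps for stability
norm = np.sqrt((x ** 2).sum(axis=1, keepdims=True)) + 1e-10
gamma = 10.0
out = gamma * x / norm
print(out)  # [[6. 8.]]  -- unit vector [0.6, 0.8] rescaled by gamma
```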
# This function is derived from torchvision VGG make_layers()
# https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py
def vgg(cfg, i, batch_norm=False):
    layers = []
    in_channels = i
    for v in cfg:
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        elif v == 'C':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v
    pool5 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
    conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)
    conv7 = nn.Conv2d(1024, 1024, kernel_size=1)
    layers += [
        pool5, conv6,
        nn.ReLU(inplace=True), conv7,
        nn.ReLU(inplace=True)
    ]
    return layers

base = {
    '300': [
        64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'C', 512, 512, 512, 'M',
        512, 512, 512
    ],
    '512': [
        64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'C', 512, 512, 512, 'M',
        512, 512, 512
    ],
}


def add_extras(size):
    layers = []
    layers += [nn.Conv2d(1024, 256, kernel_size=1, stride=1)]
    layers += [nn.Conv2d(256, 256, kernel_size=3, stride=2, padding=1)]
    layers += [nn.Conv2d(256, 128, kernel_size=1, stride=1)]
    layers += [nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1)]
    return layers
class VGG16Extractor(nn.Module):
def __init__(self, size, channel_size='48'):
super(VGG16Extractor, self).__init__()
self.vgg = nn.ModuleList(vgg(base[str(size)], 3))
self.extras = nn.ModuleList(add_extras(str(size)))
self.L2Norm_4_3 = L2Norm(512, 10)
self.L2Norm_5_3 = L2Norm(1024, 8)
self.raw_channels = [512, 1024, 256, 256]
self.weave_add_channels = [(48, 48), (48, 48), (48, 48), (48, 48)]
self.weave_channels = [256, 256, 256, 256]
# self.weave = WeaveAdapter([512, 1024, 256, 256], 4)
self.weave = WeaveAdapter2(self.raw_channels, self.weave_add_channels, self.weave_channels)
self._init_modules()
def _init_modules(self):
self.extras.apply(weights_init)
    def forward(self, x):
        """Applies network layers and ops on input image(s) x.

        Args:
            x: input image or batch of images. Shape: [batch,3,300,300].

        Return:
            Depending on phase:
            test:
                Variable(tensor) of output class label predictions,
                confidence score, and corresponding location predictions for
                each object detected. Shape: [batch,topk,7]
            train:
                list of concat outputs from:
                    1: confidence layers, Shape: [batch*num_priors,num_classes]
                    2: localization layers, Shape: [batch,num_priors*4]
                    3: priorbox layers, Shape: [2,num_priors*4]
        """
        arm_sources = list()
        odm_sources = list()
        for i in range(23):
            x = self.vgg[i](x)
        # 38x38
        c2 = x
        c2 = self.L2Norm_4_3(c2)
        arm_sources.append(c2)
        for k in range(23, len(self.vgg)):
            x = self.vgg[k](x)
        # 19x19
        c3 = x
        c3 = self.L2Norm_5_3(c3)
        arm_sources.append(c3)
        # 10x10
        x = F.relu(self.extras[0](x), inplace=True)
        x = F.relu(self.extras[1](x), inplace=True)
        c4 = x
        arm_sources.append(c4)
        # 5x5
        x = F.relu(self.extras[2](x), inplace=True)
        x = F.relu(self.extras[3](x), inplace=True)
        c5 = x
        arm_sources.append(c5)
        if len(self.extras) > 4:
            x = F.relu(self.extras[4](x), inplace=True)
            x = F.relu(self.extras[5](x), inplace=True)
            c6 = x
            arm_sources.append(c6)
        odm_sources = self.weave(arm_sources)
        return arm_sources, odm_sources
def weave_vgg(size, channel_size='48'):
    return VGG16Extractor(size)


if __name__ == "__main__":
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"
    model = weave_vgg(size=300)
    print(model)
    with torch.no_grad():
        model.eval()
        x = torch.randn(1, 3, 320, 320)
        model.cuda()
        model(x.cuda())

# ===== docs/conflib/get_simp_version.py (kenyon/simp-doc, Apache-2.0) =====

import os
import re
import sys
import time
import urllib.request, urllib.error, urllib.parse
sys.path.append(os.path.abspath(os.path.dirname(os.path.dirname(__file__))))
from conflib.constants import *
def valid_version_and_release(version, release):
    return (version != SIMP_INVALID_VERSION) and (release != SIMP_INVALID_RELEASE)
def get_simp_version(rootdir=ROOTDIR, simp_github_raw_base=SIMP_GITHUB_RAW_BASE,
                     default_simp_branch=DEFAULT_SIMP_BRANCH, on_rtd=ON_RTD):
    """
    Get the SIMP version and release either from local disk or GitHub.

    PASS 1
    - When run from ReadTheDocs, attempt to extract the version from the
      simp.spec file in the git repository for a specific simp-core, where
      the version is specified by the 'READTHEDOCS_VERSION' environment
      variable.
    - When not run from ReadTheDocs, attempt to extract the version from
      a local, auto-generated release file within this project's tree.

    PASS 2
    If PASS 1 fails, attempt to extract the simp.spec file in the git
    repository for the default simp tag/branch.

    If both PASS 1 and PASS 2 fail, return invalid/unset values.

    NOTE: The invalid version value must be a numeric to allow
    ReadTheDocs to render the rest of the documentation in a
    best-effort fashion.
    """
    retval = {
        'version': SIMP_INVALID_VERSION,
        'release': SIMP_INVALID_RELEASE,
        'full_version': None,
        'version_family': None,
        'simp_branch': None
    }

    # If we're running on ReadTheDocs, we should go fetch the content from the
    # actual branch that we're using
    if on_rtd:
        rtd_version = os.environ.get('READTHEDOCS_VERSION')
        old_version_regex = re.compile(r'^4\.|^5\.|^6\.0')
        if old_version_regex.match(rtd_version) is None:
            url_tgt = '/'.join([
                simp_github_raw_base, 'simp-core', rtd_version, 'src', 'assets',
                'simp', 'build', 'simp.spec'
            ])
        else:
            url_tgt = '/'.join([
                simp_github_raw_base, 'simp-core', rtd_version, 'src', 'build',
                'simp.spec'
            ])

        result = __extract_from_url(url_tgt)
        if valid_version_and_release(result['version'], result['release']):
            retval['version'] = result['version']
            retval['release'] = result['release']
            retval['simp_branch'] = rtd_version
    else:
        # Attempt to read the auto-generated release file. This file is generated
        # by rake munge:prep for all rake doc:* and pkg:* targets.
        rel_file = os.path.join(rootdir, 'build/rpm_metadata/release')
        result = __extract_from_file(rel_file)
        retval['version'] = result['version']
        retval['release'] = result['release']
        retval['simp_branch'] = None

    if not valid_version_and_release(retval['version'], retval['release']):
        # Fall back to something valid
        url_tgt = '/'.join([
            simp_github_raw_base, 'simp-core', default_simp_branch, 'src',
            'assets', 'simp', 'build', 'simp.spec'
        ])
        result = __extract_from_url(url_tgt)
        if valid_version_and_release(result['version'], result['release']):
            retval['version'] = result['version']
            retval['release'] = result['release']
            retval['simp_branch'] = default_simp_branch

    retval['full_version'] = '-'.join([retval['version'], retval['release']])
    patch_wildcard = re.sub(r'\.\d$', '.X', retval['version'])
    minor_wildcard = re.sub(r'\.\d\.X$', '.X', patch_wildcard)
    retval['version_family'] = [patch_wildcard, minor_wildcard]
    return retval
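The two `re.sub` calls that build `version_family` can be checked in isolation. This sketch uses a made-up version string; note that both patterns assume a single-digit final component:

```python
import re

version = '6.5.0'  # illustrative value only
patch_wildcard = re.sub(r'\.\d$', '.X', version)            # '6.5.X'
minor_wildcard = re.sub(r'\.\d\.X$', '.X', patch_wildcard)  # '6.X'
print([patch_wildcard, minor_wildcard])  # ['6.5.X', '6.X']
```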
### Private Methods
def __extract_from_file(release_file):
    result = {
        'version': SIMP_INVALID_VERSION,
        'release': SIMP_INVALID_RELEASE
    }
    if os.path.isfile(release_file):
        with open(release_file, 'r') as f:
            for line in f:
                _tmp = line.split(':')
                if 'version' in _tmp:
                    result['version'] = _tmp[-1].strip()
                elif 'release' in _tmp:
                    result['release'] = _tmp[-1].strip()
    return result
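For reference, the colon-split parsing above behaves like this on a release file of the expected `key: value` form (the file contents here are invented for illustration):

```python
lines = ['version: 6.5.0\n', 'release: 0\n']  # made-up file contents
parsed = {}
for line in lines:
    _tmp = line.split(':')
    if 'version' in _tmp:   # list membership, so the key must match exactly
        parsed['version'] = _tmp[-1].strip()
    elif 'release' in _tmp:
        parsed['release'] = _tmp[-1].strip()
print(parsed)  # {'version': '6.5.0', 'release': '0'}
```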
def __extract_from_url(simp_spec_url):
    result = {
        'version': SIMP_INVALID_VERSION,
        'release': SIMP_INVALID_RELEASE
    }
    for i in range(0, MAX_SIMP_URL_GET_ATTEMPTS):
        try:
            print("NOTICE: Downloading SIMP Spec File: " + simp_spec_url, file=sys.stderr)
            simp_spec_content = urllib.request.urlopen(simp_spec_url).read().splitlines()

            # Read the version out of the spec file and run with it.
            for line in simp_spec_content:
                _tmp = line.split()
                if 'Version:'.encode('ASCII') in _tmp:
                    version_list = _tmp[-1].decode('ASCII').split('.')
                    version = '.'.join(version_list[0:3]).strip()
                    result['version'] = re.sub(r'%\{.*?\}', '', version)
                elif 'Release:'.encode('ASCII') in _tmp:
                    release = _tmp[-1].decode('ASCII').strip()
                    result['release'] = re.sub(r'%\{.*?\}', '', release)
        except urllib.error.URLError:
            print('WARNING: Could not download ' + simp_spec_url, file=sys.stderr)
            time.sleep(1)
            continue
        break
    return result

# ===== tests/test_environment.py (fastly/fastly-blocklist, MIT) =====

'''
Test environment creation with lib Environment()
'''
import unittest
import os
import argparse
from lib import Environment
class EnvironmentTests(unittest.TestCase):
    '''
    Test environment creation with Environment()
    '''

    def setUp(self):
        with open('tests.apikey', 'w') as file_apikey:
            file_apikey.write('MYTESTAPIKEY')
        self.args = argparse.Namespace(
            init=True,
            apikey='tests.apikey',
            config='tests.blocklist',
            service=['SERVICEID'],
            log='',
            block='',
            force=False,
            verbose=False
        )

    def tearDown(self):
        try:
            os.remove('tests.apikey')
            os.remove('tests.blocklist')
        except OSError:
            # the files may not exist if a test never created them
            pass
    def test_init(self):
        '''
        test init of a new config file
        '''
        # create a new environment
        env = Environment(self.args)
        # ensure apikey and config are populated
        self.assertEqual(env.apikey, 'MYTESTAPIKEY')
        self.assertTrue(env.config['services'])
        self.assertFalse(env.config['lists'])
        self.assertEqual(env.config['log'], '')

    def test_init_service_defaults(self):
        '''
        ensure service config defaults are set during init
        '''
        # create a new environment
        env = Environment(self.args)
        # ensure service defaults
        self.assertEqual(env.config['services'][0]['type'], 'recv')
        self.assertEqual(env.config['services'][0]['priority'], '10')
        self.assertEqual(env.config['services'][0]['options']['edge_only'], True)
        self.assertEqual(env.config['services'][0]['options']['var_ip'], 'client.ip')
    def test_init_config_exists(self):
        '''
        test init of a new config file where another exists
        '''
        # create a 'valid' config file
        with open('tests.blocklist', 'w') as file_config:
            file_config.write('{}')
        # try to create a new config over existing
        self.args.force = False
        with self.assertRaisesRegex(
                SystemExit,
                "config file exists"
        ):
            Environment(self.args)

    def test_init_config_exists_force(self):
        '''
        test init of a new config file where another exists
        and args.force is provided
        '''
        # create a 'valid' config file
        with open('tests.blocklist', 'w') as file_config:
            file_config.write('{}')
        # force create a new config over existing
        self.args.force = True
        env = Environment(self.args)
        # ensure apikey and config are populated
        self.assertEqual(env.apikey, 'MYTESTAPIKEY')
        self.assertTrue(env.config['services'])
        self.assertFalse(env.config['lists'])
        self.assertEqual(env.config['log'], '')
    def test_load_config(self):
        '''
        test init and load of a config file
        '''
        # create a new config file
        Environment(self.args)
        # load an existing config file
        self.args.init = False
        env = Environment(self.args)
        # ensure apikey and config are populated
        self.assertEqual(env.apikey, 'MYTESTAPIKEY')
        self.assertTrue(env.config['services'])
        self.assertFalse(env.config['lists'])
        self.assertEqual(env.config['log'], '')

    def test_load_config_auto_generate(self):
        '''
        test init of a new config file where args.init is not provided
        '''
        # auto init a config file when one doesn't exist
        self.args.init = False
        env = Environment(self.args)
        # ensure apikey and config are populated
        self.assertEqual(env.apikey, 'MYTESTAPIKEY')
        self.assertTrue(env.config['services'])
        self.assertFalse(env.config['lists'])
        self.assertEqual(env.config['log'], '')

    def test_load_config_override_services(self):
        '''
        test runtime override of args.services
        '''
        # create a new config file
        Environment(self.args)
        # load an existing config file and replace services in running config
        self.args.init = False
        self.args.service = ['SERVICE1', 'SERVICE2']
        env = Environment(self.args)
        # ensure override services are targeted
        self.assertEqual(
            len(env.config['services']),
            2
        )

    def test_save_config(self):
        '''
        test saving a config file
        '''
        # create a new config file
        Environment(self.args)
        # load existing config file, change something, and save
        self.args.init = False
        self.args.service = ['SERVICE1', 'SERVICE2']
        Environment(self.args).save_config()
        # load the modified config file
        self.args.service = []
        env = Environment(self.args)
        # ensure updated config is loaded
        self.assertEqual(
            len(env.config['services']),
            2
        )


if __name__ == '__main__':
    unittest.main()

# ===== shop/search/mixins.py (2000-ion/TIDPP-Lab3, BSD-3-Clause) =====

from django.utils.translation import get_language_from_request
from django_elasticsearch_dsl.registries import registry
from shop.models.product import ProductModel
class SearchViewMixin:
    def get_document(self, language):
        documents = registry.get_documents([ProductModel])
        try:
            return next(doc for doc in documents if doc._language == language)
        except StopIteration:
            return next(doc for doc in documents if doc._language is None)
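The `next(...)` / `StopIteration` idiom in `get_document` is a generic "prefer the exact match, else fall back to the language-neutral entry" pattern. A self-contained sketch (the `Doc` class and the languages are invented for illustration):

```python
class Doc:
    def __init__(self, language):
        self._language = language

documents = [Doc('de'), Doc(None)]

def pick(language):
    try:
        return next(doc for doc in documents if doc._language == language)
    except StopIteration:
        return next(doc for doc in documents if doc._language is None)

print(pick('de')._language, pick('fr')._language)  # de None
```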
class ProductSearchViewMixin(SearchViewMixin):
    """
    Mixin class to be added to the ProductListView to restrict that list to entities matching
    the query string.
    """
    search_fields = ['product_name', 'product_code']

    def get_renderer_context(self):
        renderer_context = super().get_renderer_context()
        if renderer_context['request'].accepted_renderer.format == 'html':
            renderer_context['search_autocomplete'] = True
        return renderer_context

    def get_queryset(self):
        query = self.request.GET.get('q')
        if query:
            language = get_language_from_request(self.request)
            document = self.get_document(language)
            search = document.search().source(excludes=['body'])
            search = search.query('multi_match', query=query, fields=self.search_fields, type='bool_prefix')
            queryset = search.to_queryset()
        else:
            queryset = super().get_queryset()
        return queryset
class CatalogSearchViewMixin(SearchViewMixin):
    """
    Mixin class to be added to the ProductListView in order to create a full-text search.
    """
    search_fields = ['product_name', 'product_code', 'body']

    def get_serializer(self, *args, **kwargs):
        kwargs.setdefault('label', 'search')
        return super().get_serializer(*args, **kwargs)

    def get_queryset(self):
        language = get_language_from_request(self.request)
        document = self.get_document(language)
        query = self.request.GET.get('q')
        search = document.search().source(excludes=['body'])
        if query:
            search = search.query('multi_match', query=query, fields=self.search_fields)
        return search.to_queryset()

# ===== adjointgrad/fit_sde.py (ronan-keane/adjointgrad, MIT) =====

# make sure to run this file in the same folder as sdegrad.py, electricdata.pkl
import pickle
import numpy as np
from sdegrad import sde, sde_mle, jump_ode, PiecewiseODE
import tensorflow as tf
import random
from tqdm import tqdm
#%% these are small models which run faster on cpu, disable gpu
try:
    # Disable all GPUS
    tf.config.set_visible_devices([], 'GPU')
    visible_devices = tf.config.get_visible_devices()
    for device in visible_devices:
        assert device.device_type != 'GPU'
except Exception:
    # Invalid device or cannot modify virtual devices once initialized.
    pass

#%% make dataset
with open('electricdata.pkl', 'rb') as f:
    data = pickle.load(f)
# excerpt of data from https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
# ~3.5 years of electrical usage from 20 customers in 15 minute intervals
# data is 2d numpy array, rows is time, columns are features (customers)
# split into train/test
split_ratio = .2
split_ind = int(len(data)*(1-split_ratio))
train = data[:split_ind]
test = data[split_ind:]
# normalize
mean = np.mean(train, axis=0)
std = np.std(train, axis=0)
train = (train-mean)/std
test = (test-mean)/std
# add periodicity: in days and years. Only use the periodicity as input, not the actual time
time = np.arange(0, len(data))
dayperiod = time/(24*4)*2*np.pi
yearperiod = time/(24*4*365)*2*np.pi
periods = np.concatenate([np.expand_dims(np.sin(dayperiod),1), np.expand_dims(np.cos(dayperiod),1),
np.expand_dims(np.sin(yearperiod),1), np.expand_dims(np.cos(yearperiod),1)], axis=1)
train = np.concatenate([train, periods[:split_ind]], axis=1)
test = np.concatenate([test, periods[split_ind:]], axis=1)
train = tf.convert_to_tensor(train, dtype=tf.float32)
test = tf.convert_to_tensor(test, dtype=tf.float32)
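A minimal stdlib-only sketch of the cyclical encoding built above: with 15-minute intervals one day is 24*4 = 96 steps, so the sin/cos day features repeat exactly every 96 indices, which is the point of feeding the period rather than the raw time:

```python
import math

def day_feats(t):
    # same formula as dayperiod above, for a single index t
    ang = t / (24 * 4) * 2 * math.pi
    return (math.sin(ang), math.cos(ang))

a, b = day_feats(0), day_feats(96)  # same clock time, one day apart
print(math.isclose(a[0], b[0], abs_tol=1e-9) and math.isclose(a[1], b[1], abs_tol=1e-9))  # True
```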
#%% make model
# class mysde(sde):
# class mysde(sde_mle):
# class mysde(jump_ode):
class mysde(PiecewiseODE):
    # Just the sde class, with periodicity added
    @tf.function
    def add_time_input_to_curstate(self, curstate, t):
        dayperiod = t / (24 * 4) * 2 * 3.1415926535
        yearperiod = t / (24 * 4 * 365) * 2 * 3.1415926535
        times = tf.stack([tf.math.sin(dayperiod), tf.math.cos(dayperiod),
                          tf.math.sin(yearperiod), tf.math.cos(yearperiod)], axis=1)
        return tf.concat([curstate, times], 1)


# model = mysde(20, pastlen=12, l2=.01, p=1e-4) # parameters for huber loss
# model = mysde(20, pastlen=12, l2=.008) # for mle loss
# model = mysde(20, 5, delta=.5, pastlen=12, l2=.008) # for jump_ode
model = mysde(20, 5, delta=.5, pastlen=12, l2=.008, p=5e-2)
#%% training loop
def training_loop(model, data, prediction_length, epochs, learning_rate, batch_size, report_every=100):
    # prediction_length is the integer length of prediction the model should do
    # epochs is the float number of epochs
    # report_every: print out objective for current batch every report_every batches
    optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)

    # randomly make batches for the requested number of epochs
    inds = list(range(model.pastlen, len(data) - prediction_length))  # possible indices of first prediction
    inds_list = []
    for i in range(int(epochs)):
        random.shuffle(inds)
        inds_list.extend(inds)
    random.shuffle(inds)
    inds_list.extend(inds[:int(len(inds) * (epochs - int(epochs)))])

    average_obj = []
    for i in range(len(inds_list) // batch_size):  # for each batch
        inds = inds_list[i * batch_size:(i + 1) * batch_size]  # starting indices for first prediction of current batch
        ind_float = tf.convert_to_tensor(inds, dtype=tf.float32)
        # make inputs for batch
        curdata = [tf.stack([train[inds[j] + k - model.pastlen, :] for j in range(len(inds))], axis=0)
                   for k in range(prediction_length + model.pastlen)]
        init_state = curdata[:model.pastlen]
        yhat = curdata[model.pastlen:]
        # do update
        obj, grad = model.grad(init_state, prediction_length, yhat, start=ind_float)
        optimizer.apply_gradients(zip(grad, model.trainable_variables))
        # reporting
        average_obj.append(obj.numpy())
        if i % report_every == 0:
            print('batch ' + str(i) + ' objective value is ' + str(obj.numpy()) +
                  ', average over past ' + str(report_every) + ' batches is ' + str(np.mean(average_obj)))
            average_obj = []
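The fractional-epoch bookkeeping at the top of `training_loop` can be sketched on toy indices (the values and seed below are arbitrary): 2.5 epochs over 10 candidate start indices yields 10 + 10 + 5 = 25 shuffled entries:

```python
import random

random.seed(0)  # illustrative only
demo_inds = list(range(10))
demo_epochs = 2.5
demo_list = []
for _ in range(int(demo_epochs)):
    random.shuffle(demo_inds)
    demo_list.extend(demo_inds)
random.shuffle(demo_inds)
demo_list.extend(demo_inds[:int(len(demo_inds) * (demo_epochs - int(demo_epochs)))])
print(len(demo_list))  # 25
```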
#%% training
# used for huber loss
# training_loop(model, train, 12, .15, 1e-4, 8)
# training_loop(model, train, 24, .15, 1e-4, 8)
# training_loop(model, train, 48, .15, 5e-5, 8)
# training_loop(model, train, 96, .15, 1e-5, 8)
# training_loop(model, train, 192, .15, 5e-6, 8)
# for mle loss
# training_loop(model, train, 3, .15, 1e-4, 8)
# training_loop(model, train, 6, .15, 1e-3, 8)
# training_loop(model, train, 12, .25, 1e-4, 8)
# training_loop(model, train, 24, .25, 5e-4, 8)
# training_loop(model, train, 48, .25, 1e-4, 8)
# training_loop(model, train, 96, .1, 1e-5, 8)
# training_loop(model, train, 192, .1, 5e-6, 8)
# for jump_ode
# training_loop(model, train, 12, .08, 1e-4, 1)
# training_loop(model, train, 24, .08, 1e-4, 1)
# training_loop(model, train, 48, .04, 2e-5, 1)
# training_loop(model, train, 96, .04, 8e-6, 1)
# for piecewise_ode
training_loop(model, train, 12, .08, 1e-4, 1)
training_loop(model, train, 24, .08, 1e-4, 1)
training_loop(model, train, 48, .04, 2e-5, 1)
training_loop(model, train, 96, .04, 8e-6, 1)
training_loop(model, train, 192, .02, 4e-6, 1)
#%% make baseline model which uses average electric in that time at that day of the week
period = 384
baseline_predictions = [np.mean(train[i::period], axis=0) for i in range(period)]
def baseline(ind):
    ind = ind % period
    return baseline_predictions[ind]
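The baseline lookup is plain modulo indexing into the per-slot averages, so indices one period apart share a prediction. A toy version (the period and values are invented for illustration, and names are prefixed to avoid shadowing the real ones):

```python
toy_period = 4
toy_preds = [10.0, 20.0, 30.0, 40.0]  # made-up per-slot averages

def toy_baseline(ind):
    return toy_preds[ind % toy_period]

print(toy_baseline(1), toy_baseline(5))  # 20.0 20.0
```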
#%% test
import matplotlib.pyplot as plt

batch_size = 200  # number of replications
prediction_length = 24 * 3 * 4
ind = 74  # starting time in test set
customer = 10  # which customer to plot
offset = len(train)

assert ind - model.pastlen >= 0
assert ind + prediction_length <= len(test)
assert customer < 20
curdata = [tf.tile(test[j + ind - model.pastlen:j + ind - model.pastlen + 1, :], [batch_size, 1]) for j in range(prediction_length + model.pastlen)]
init_state = curdata[:model.pastlen]
yhat = curdata[model.pastlen:]
ind_float = tf.convert_to_tensor([ind for i in range(batch_size)], dtype=tf.float32)

obj = model.solve(init_state, prediction_length, yhat, start=ind_float + offset)

y1 = [yhat[i][0, customer] for i in range(len(yhat))]
x = [[model.mem[i + model.pastlen][j, customer] for i in range(len(yhat))] for j in range(batch_size)]
base_y = [baseline(i + offset)[customer] for i in range(ind, ind + prediction_length)]

plt.figure()
plt.plot(y1)
print('baseline mse is ' + str(np.mean(np.square(np.array(y1) - np.array(base_y)))))
print('sde mse is ' + str(np.mean(np.square(np.array(x[0]) - np.array(y1)))))
plt.plot(base_y)
for i in range(1):
    plt.plot(x[i])
#%% fancy plot
from matplotlib.patches import Patch
from matplotlib.lines import Line2D

x = np.zeros((prediction_length, batch_size))
for i in range(len(x)):
    x[i, :] = model.mem[i + model.pastlen][:, customer]
mean = np.mean(x, axis=1)
std = np.std(x, axis=1)
t = list(range(ind + len(train), ind + len(train) + prediction_length))
input_t = list(range(ind - model.pastlen + len(train), ind + len(train)))
input_y = [init_state[i][0, customer] for i in range(model.pastlen)]

plt.figure(figsize=(13, 8))
frame = plt.gca()
plt.plot(t, y1, 'C0')
plt.fill_between(t, mean - std, mean + std, alpha=.3, facecolor='C2')
plt.plot(t, base_y, 'C1')
plt.plot(t, x[:, 0], 'C2', alpha=.3)
plt.plot(input_t, input_y, 'C3')
frame.tick_params(bottom=False)
frame.axes.xaxis.set_ticks([])
plt.ylabel('normalized demand')
plt.xlabel('time (3 days total)')
legend_elements = [Line2D([0], [0], color='C0', label='ground truth'),
                   Patch(facecolor='C2', alpha=.3, label='Piecewise ODE mean ' + u'\u00b1' + ' std dev'),
                   Line2D([0], [0], color='C2', alpha=.3, label='Piecewise ODE example prediction'),
                   Line2D([0], [0], color='C1', label='historical average'),
                   Line2D([0], [0], color='C3', label='Input')]
frame.legend(handles=legend_elements, loc='lower left')
#%% testing over entire test set
def testing_error(model, data, replications, prediction_length, starting=0, offset=0):
    assert starting >= model.pastlen
    baseline_errors = []
    pred_errors = []
    for i in tqdm(range((len(data) - starting) // prediction_length)):
        ind = starting + i * prediction_length
        curdata = [tf.tile(data[j + ind - model.pastlen:j + ind - model.pastlen + 1, :], [replications, 1]) for j in range(prediction_length + model.pastlen)]
        init_state = curdata[:model.pastlen]
        yhat = curdata[model.pastlen:]
        ind_float = tf.convert_to_tensor([ind for i in range(replications)], dtype=tf.float32)

        model.solve(init_state, prediction_length, start=ind_float + offset)

        baseline_pred = [baseline(i + offset) for i in range(ind, ind + prediction_length)]
        baseline_constant = [init_state[-1][0] for i in range(prediction_length)]
        yhat = tf.convert_to_tensor(yhat)
        baseline_error = tf.reduce_mean(tf.square(tf.convert_to_tensor(baseline_pred) - yhat[:, 0, :]))
        baseline_error_constant = tf.reduce_mean(tf.square(tf.convert_to_tensor(baseline_constant) - yhat[:, 0, :]))
        baseline_error = min(baseline_error, baseline_error_constant)

        pred = tf.reduce_mean(tf.convert_to_tensor(model.mem[model.pastlen:]), axis=1)  # average over the replications
        pred_error = tf.reduce_mean(tf.square(pred - yhat[:, 0, :]))

        baseline_errors.append(baseline_error.numpy())
        pred_errors.append(pred_error.numpy())
    return pred_errors, baseline_errors


pred_errors, baseline_errors = testing_error(model, test, 200, 192, starting=74, offset=len(train))
76305721472c1a22caa2971a3a1b3adf20b51560 | 1,828 | py | Python | setup.py | pahaz/cryptoassets | 84ed4ff5e18c221ad0c13c73695cc34c9ff17ec4 | [
"MIT"
] | 1 | 2019-11-24T23:03:43.000Z | 2019-11-24T23:03:43.000Z | setup.py | pahaz/cryptoassets | 84ed4ff5e18c221ad0c13c73695cc34c9ff17ec4 | [
"MIT"
] | null | null | null | setup.py | pahaz/cryptoassets | 84ed4ff5e18c221ad0c13c73695cc34c9ff17ec4 | [
"MIT"
] | 3 | 2019-11-24T23:03:45.000Z | 2020-11-06T08:33:46.000Z | import os
from setuptools import setup, find_packages
here = os.path.abspath(os.path.dirname(__file__))
with open(os.path.join(here, 'README.rst')) as f:
    README = f.read()
with open(os.path.join(here, 'CHANGES.rst')) as f:
    CHANGES = f.read()
requires = [
    'SQLAlchemy',
    'python-slugify',
    'block-io',
    'rainbow_logging_handler',
    'apscheduler',
    'PyYAML',
    'requests',
    'python-bitcoinrpc',
    'zope.dottedname'
]

setup(name='cryptoassets.core',
      version='0.3.dev0',
      description='A Python framework for building Bitcoin, other cryptocurrency (altcoin) and cryptoassets services',
      long_description=README + '\n\n' + CHANGES,
      # https://packaging.python.org/en/latest/distributing.html#classifiers
      classifiers=[
          'Development Status :: 4 - Beta',
          'Programming Language :: Python',
          'Intended Audience :: Developers',
          'Programming Language :: Python :: 3.4',
          'Programming Language :: Python :: 3',
          'License :: OSI Approved :: MIT License',
          'Topic :: Security :: Cryptography',
          'Intended Audience :: Financial and Insurance Industry'
      ],
      author='Mikko Ohtamaa',
      author_email='mikko@opensourcehacker.com',
      url='https://bitbucket.org/miohtama/cryptoassets',
      keywords='bitcoin litecoin dogecoin sqlalchemy',
      packages=find_packages(),
      include_package_data=True,
      zip_safe=False,
      test_suite='cryptoassets.core',
      install_requires=requires,
      entry_points="""\
      [console_scripts]
      cryptoassets-initialize-database = cryptoassets.core.service.main:initializedb
      cryptoassets-helper-service = cryptoassets.core.service.main:helper
      cryptoassets-scan-received = cryptoassets.core.service.main:scan_received
      """,
      )

# ===== tools/gen_of_gpu.py (tsingqguo/AttackTracker, Apache-2.0) =====

import os
import numpy as np
from toolkit.tvnet_pytorch.train_options import arguments
from toolkit.tvnet_pytorch.model.network import model
import scipy.io as sio
import cv2
import PIL.Image as Image
from toolkit.tvnet_pytorch.utils import *
from toolkit.datasets import DatasetFactory
from torchvision import transforms
args = arguments().parse()
cur_dir = os.path.dirname(os.path.realpath(__file__))
dataset_root = os.path.join(cur_dir, '../testing_dataset', args.dataset)
# create dataset
dataset = DatasetFactory.create_dataset(name=args.dataset,
                                        dataset_root=dataset_root,
                                        load_img=False)
save_root = '/media/vil/sda/Object_Tracking/Dataset/OTB/'
torch.set_num_threads(1)
def get_transform():
    transforms_list = [transforms.ToTensor()]
    return transforms.Compose(transforms_list)


to_tensor = get_transform()
def main():
    args.data_size = [1, 3, 224, 224]
    if cfg.CUDA:
        tvnet = model(args).cuda()
    else:
        tvnet = model(args)

    # extract optical flow
    for v_idx, video in enumerate(dataset):
        toc = 0
        flow_path = os.path.join(save_root, video.name, 'flow')
        if not os.path.exists(flow_path):
            os.mkdir(flow_path)
        for idx, (img, gt_bbox) in enumerate(video):
            res_path = os.path.join(flow_path, video.img_names[idx])
            res_path = res_path.replace('/img/', '/flow/')
            res_path = res_path.replace('.jpg', '.mat')
            if os.path.exists(res_path):
                print('existing path:' + res_path)
                continue
            tic = cv2.getTickCount()
            if idx + 1 == len(video):
                break
            img1 = Image.fromarray(
                cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
            img2 = Image.fromarray(cv2.cvtColor(cv2.imread(
                video.img_names[idx + 1]), cv2.COLOR_BGR2RGB))
            w, h = img1.size
            if args.data_size[2] != h or args.data_size[3] != w:
                args.data_size[2] = h
                args.data_size[3] = w
                tvnet = model(args).cuda()
            img1 = to_tensor(img1).cuda()
            img2 = to_tensor(img2).cuda()
            img1 = img1.view([1] + list(img1.size()))
            img2 = img2.view([1] + list(img2.size()))
            with torch.no_grad():
                u1, u2, rho = tvnet(img1, img2, True)

            # Save flow map to file, for visualization in matlab
            u1_np = np.squeeze(u1.detach().cpu().numpy())
            u2_np = np.squeeze(u2.detach().cpu().numpy())
            flow_mat = np.zeros([h, w, 2])
            flow_mat[:, :, 0] = u1_np
            flow_mat[:, :, 1] = u2_np
            print('save path:' + res_path)
            sio.savemat(res_path, {'flow': flow_mat})

            if args.vis:
                # note that the flow map can also be visualized using library cv2
                # Sample code using cv2:
                plt.imshow(img)
                # Use Hue, Saturation, Value colour model
                hsv = np.zeros(img.shape, dtype=np.uint8)
                hsv[..., 1] = 255
                mag, ang = cv2.cartToPolar(
                    u1.detach().squeeze().cpu().numpy(), u2.detach().squeeze().cpu().numpy())
                hsv[..., 0] = ang * 180 / np.pi / 2
                hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
                bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
                cv2.imshow("colored flow", bgr)
                cv2.waitKey(1)
                # cv2.destroyAllWindows()

            toc += cv2.getTickCount() - tic
        toc /= cv2.getTickFrequency()
        print('({:3d}) Video: {:12s} Time: {:5.1f}s Speed: {:3.1f}fps'.format(
            v_idx + 1, video.name, toc, idx / toc))


if __name__ == '__main__':
    main()
| 34.8125 | 93 | 0.552706 | 490 | 3,899 | 4.24898 | 0.357143 | 0.030259 | 0.028818 | 0.033141 | 0.04707 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034599 | 0.31803 | 3,899 | 111 | 94 | 35.126126 | 0.748402 | 0.060528 | 0 | 0.024096 | 0 | 0.012048 | 0.051436 | 0.011765 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024096 | false | 0 | 0.120482 | 0 | 0.156627 | 0.036145 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
# --- main.py (repo: ndreb/covid-19-data, license: MIT) ---
#!/usr/bin/env python3
import requests, hashlib, os, tempfile, io
from time import time, sleep
from tqdm import tqdm, trange
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as tkr
from datetime import datetime
from README_TEMPLATE import README_TEMPLATE
from subprocess import check_call
def fetch(url):
    acc = 0
    while True:
        try:
            print(f"fetching '{url}'")
            with requests.get(url, stream=True) as response:
                response.raise_for_status()
                dat = response.content
                print("comparing hashes")
                sig = hashlib.md5()
                for line in response.iter_lines():
                    sig.update(line)
                digest = sig.hexdigest()
            fp = os.path.join(tempfile.gettempdir(), hashlib.md5(digest.encode('utf-8')).hexdigest())
            if os.path.isfile(fp) and os.stat(fp).st_size > 0:
                print("no update available")
                acc = 0
                return False
            else:
                print(f"writing to '{fp}'")
                with open(f"{fp}.tmp", 'wb') as f:
                    f.write(dat)
                os.rename(f"{fp}.tmp", fp)
                return dat
        except Exception as error:
            acc += 1
            retry(acc, error)
def timeout(sec):
    for s in (t := trange(sec, ncols=103, leave=False, ascii=' #')):
        t.set_description(uptime())
        sleep(1)


def uptime():
    ct = time()
    et = ct - st
    d = (et // 86400) % 365
    h = (et // 3600) % 24
    m = (et // 60) % 60
    s = et % 60
    d, h, m, s = [int(i) for i in (d, h, m, s)]
    d = str(d).zfill(3)
    h, m, s = [str(i).zfill(2) for i in (h, m, s)]
    uptime = f"uptime: {d} {h}:{m}:{s}"
    return uptime


def retry(acc, error):
    print(f"\n{str(acc).zfill(2)}/10: {error}\n")
    if acc < 10:
        timeout(6)
    else:
        print("max retries exceeded")
        exit(1)
def get_array(df, col, dtype):
    return np.array(df[col], dtype=dtype)


def get_arrays(df, dictionary):
    return [get_array(df, col, dtype) for col, dtype in dictionary.items()]


def get_diff(arr):
    arr = np.asarray(arr)
    diff = np.diff(arr)
    arr = np.insert(diff, 0, arr[0])
    arr[arr < 0] = 0
    return arr


def get_diffs(*arrs):
    return [get_diff(arr) for arr in arrs]
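For reference, `get_diff` converts a cumulative series into per-day counts: it keeps the first value and clamps negative corrections (downward reporting revisions) to zero. A standalone check of that behavior, with the same logic re-declared so the snippet runs on its own:

```python
import numpy as np

def get_diff(arr):
    # first-differences, keep the first value, clamp revisions below zero
    arr = np.asarray(arr)
    diff = np.diff(arr)
    arr = np.insert(diff, 0, arr[0])
    arr[arr < 0] = 0
    return arr

totals = np.array([5, 8, 8, 7, 12])   # cumulative counts with one downward revision
print(get_diff(totals).tolist())      # [5, 3, 0, 0, 5]
```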
def write_csv(dictionary, fn):
    pd.DataFrame(dictionary).to_csv(fn, index=False)
def plot(arrays, suffix, fp):
    fig, ax = plt.subplots(2, 2, figsize=(12, 7), dpi=200)
    fig.suptitle(f"{suffix} COVID-19 Data")
    x = dates
    fig.autofmt_xdate()
    if total_cases[-1] >= 1_000_000:
        y = total_cases / 1_000_000
        ax[0, 0].get_yaxis().set_major_formatter(
            tkr.FuncFormatter(lambda y, p: f"{y}M"))
    else:
        y = total_cases
        label = 'Cases'
        ax[0, 0].get_yaxis().set_major_formatter(
            tkr.FuncFormatter(lambda y, p: f"{int(y):,d}"))
    ax[0, 0].set_title('Total Cases')
    ax[0, 0].grid(True)
    ax[0, 0].plot(x, y, color='b')
    y = new_cases
    ax[0, 1].set_title('New Cases')
    ax[0, 1].get_yaxis().set_major_formatter(
        tkr.FuncFormatter(lambda y, p: f"{int(y):,d}"))
    ax[0, 1].grid(True)
    ax[0, 1].plot(x, y, color='b')
    y = total_deaths
    ax[1, 0].set_title('Total Deaths')
    ax[1, 0].get_yaxis().set_major_formatter(
        tkr.FuncFormatter(lambda y, p: f"{int(y):,d}"))
    ax[1, 0].grid(True)
    ax[1, 0].plot(x, y, color='b')
    y = new_deaths
    ax[1, 1].set_title('New Deaths')
    ax[1, 1].get_yaxis().set_major_formatter(
        tkr.FuncFormatter(lambda y, p: f"{int(y):,d}"))
    ax[1, 1].grid(True)
    ax[1, 1].plot(x, y, color='b')
    plt.savefig(fp, bbox_inches='tight')
    plt.close(fig)
def update_readme():
    tc, td = total_cases[-1], total_deaths[-1]
    cases, deaths = new_cases[-1], new_deaths[-1]
    cmean, dmean = np.mean(new_cases[-7:]), np.mean(new_deaths[-7:])
    date = dates[-1]
    md = datetime.strptime(str(date), '%Y-%m-%d').strftime('%B %d')
    today = datetime.now().strftime('%B %d, %Y')
    today = f"{today}, {clck()} EST"
    df = pd.DataFrame({
        "U.S": ["Cases", "Deaths"],
        "Total Reported": [f"{tc:,d}", f"{td:,d}"],
        f"On {md}": [f"{cases:,d}", f"{deaths:,d}"],
        "7-Day Average": [f"{int(cmean):,d}", f"{int(dmean):,d}"]
    })
    df = df.to_markdown(index=False, disable_numparse=True)
    write_readme(README_TEMPLATE(), today, df)


def write_readme(template, date, df):
    print(f"writing to '{os.path.join(os.getcwd(), 'README.md')}'")
    with open('README.md', 'w') as f:
        f.write(template.format(date, df))
def clck():
    h = datetime.now().strftime('%H')
    h = int(h)
    m = datetime.now().strftime('%M')
    am = 'A.M'
    pm = 'P.M'
    if h < 12:
        if h == 0:
            h += 12
        c = f"{h}:{m} {am}"
    elif h >= 12:
        if h != 12:
            h -= 12
        c = f"{h}:{m} {pm}"
    return c


def mk_dir(*dirs):
    for d in dirs:
        if not os.path.isdir(d):
            print(f"creating '{os.path.join(os.getcwd(), d)}'")
            os.mkdir(d)


def parse(df, suffix):
    df = df.sort_values(by=[suffix])
    df = df[suffix]
    df = df.drop_duplicates()
    df = [df for df in df]
    return df
def push_git():
    if os.path.isdir('.git'):
        try:
            check_call('/usr/bin/git add .', shell=True)
            check_call('/usr/bin/git commit -m "Updating data."', shell=True)
        except Exception as error:
            print(f"\n{error}\n")
        acc = 0
        while True:
            try:
                check_call('/usr/bin/git push', shell=True)
                break
            except Exception as error:
                acc += 1
                retry(acc, error)


def fetch_all(urls):
    return [fetch(url) for url in urls]
def nan_to_mean(array):
    pos = []
    for i in range(len((a := array)) - 1):
        if a[i] >= 0:
            pos.append(a[i])
        else:
            if a[i+1] >= 0:
                x = pos[-1]
                y = a[i+1]
                a[i] = (y + x) / 2
                pos.append(a[i])
            elif a[i+2] >= 0:
                x = pos[-1]
                y = a[i+2]
                n = (y + x) / 2
                a[i] = (n + x) / 2
                a[i+1] = (y + x) / 2
                pos.append(a[i])
                pos.append(a[i+1])
            elif a[i+3] >= 0:
                x = pos[-1]
                y = a[i+3]
                n = (y + x) / 2
                a[i] = (n + x) / 2
                a[i+2] = (n + y) / 2
                a[i+1] = (a[i+2] + a[i]) / 2
                pos.append(a[i])
                pos.append(a[i+1])
                pos.append(a[i+2])
def plot_vacs(arrays, suffix, fn):
    x = dates
    y1 = total_doses
    y2 = first_dose
    y3 = second_dose
    fig, ax = plt.subplots(figsize=(12, 7), dpi=200)
    fig.suptitle(f"{suffix} COVID-19 Vaccinations")
    ax.grid(True)
    fig.autofmt_xdate()
    if second_dose[-1] >= 1_000_000:
        ax.get_yaxis().set_major_formatter(
            tkr.FuncFormatter(lambda y, p: f"{y / 1_000_000}M"))
    else:
        ax.get_yaxis().set_major_formatter(
            tkr.FuncFormatter(lambda y, p: f"{int(y):,d}"))
    ax.plot(x, y1, label='Total Doses')
    ax.plot(x, y2, label='First Dose')
    ax.plot(x, y3, label='Second Dose')
    ax.legend()
    plt.savefig(fn, bbox_inches='tight')
    plt.close()
st = time()
while True:
    urls = (
        'https://raw.githubusercontent.com/nytimes/covid-19-data/master/us.csv',
        'https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv',
        'https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/vaccinations/vaccinations.csv',
        'https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/vaccinations/us_state_vaccinations.csv',
        'https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/vaccinations/vaccinations-by-manufacturer.csv'
    )
    ncd, scd, nvc, svc, man = fetch_all(urls)
    if not any([ncd, scd, nvc, svc, man]):
        exit(0)  # or timeout(3600) if running from a server.
    if ncd is not False:
        df = pd.read_csv(io.StringIO(ncd.decode('utf-8')))
        us_cols_dtypes = {
            'date': 'datetime64',
            'cases': 'int64',
            'deaths': 'int64'
        }
        dates, total_cases, total_deaths = get_arrays(df, us_cols_dtypes)
        new_cases, new_deaths = get_diffs(total_cases, total_deaths)
        usd = {
            'date': dates,
            'total cases': total_cases,
            'total deaths': total_deaths,
            'new cases': new_cases,
            'new deaths': new_deaths
        }
        print(f"writing to '{os.path.join(os.getcwd(), 'us.csv')}'")
        write_csv(usd, 'us.csv')
        print(f"writing to '{os.path.join(os.getcwd(), 'us.png')}'")
        plot(usd.values(), 'U.S', 'us.png')
        update_readme()
    if scd is not False:
        d = pd.read_csv(io.StringIO(scd.decode('utf-8')))
        states = parse(d, 'state')
        mk_dir('states')
        print(f"writing to '{os.path.join(os.getcwd(), 'states')}'")
        for state in (t := tqdm(states, ncols=103, leave=False, ascii=' #')):
            t.set_description(state)
            df = d[d['state'].str.contains(f"^{state}$", case=False)]
            st_cols_dtypes = {
                'date': 'datetime64',
                'state': 'U',
                'cases': 'int64',
                'deaths': 'int64'
            }
            dates, states, total_cases, total_deaths = get_arrays(df, st_cols_dtypes)
            new_cases, new_deaths = get_diffs(total_cases, total_deaths)
            std = {
                'date': dates,
                'state': states,
                'total cases': total_cases,
                'total deaths': total_deaths,
                'new cases': new_cases,
                'new deaths': new_deaths
            }
            write_csv(std, f"states/{state}.csv")
            plot(std.values(), state, f"states/{state}.png")
    if nvc is not False:
        df = pd.read_csv(io.StringIO(nvc.decode('utf-8')))
        df = df[df['location'].str.contains("United States", case=False)]
        us_cols_dtypes = {
            'date': 'datetime64',
            'people_vaccinated': 'float64',
            'people_fully_vaccinated': 'float64'
        }
        dates, first_dose, second_dose = get_arrays(df, us_cols_dtypes)
        nan_to_mean(first_dose)
        second_dose[0:25] = 0
        nan_to_mean(second_dose)
        first_dose = first_dose.astype(np.float64)
        second_dose = second_dose.astype(np.float64)
        total_doses = np.array(first_dose + second_dose, dtype='float64')
        usv = {
            'date': dates,
            'total doses': total_doses,
            'first dose': first_dose,
            'second dose': second_dose
        }
        mk_dir('vaccinations')
        print(f"writing to '{os.path.join(os.getcwd(), 'vaccinations/us.csv')}'")
        write_csv(usv, 'vaccinations/us.csv')
        print(f"writing to '{os.path.join(os.getcwd(), 'vaccinations/us.png')}'")
        plot_vacs(usv.values(), 'U.S', 'vaccinations/us.png')
    if svc is not False:
        d = pd.read_csv(io.StringIO(svc.decode('utf-8')))
        states = parse(d, 'location')
        states.remove('Long Term Care')
        states.remove('United States')
        mk_dir('vaccinations', 'vaccinations/states')
        print(f"writing to '{os.path.join(os.getcwd(), 'vaccinations/states')}'")
        for state in (t := tqdm(states, ncols=103, leave=False, ascii=' #')):
            t.set_description(state)
            df = d[d['location'].str.contains(f"^{state}$", case=False)]
            st_cols_dtypes = {
                'date': 'datetime64',
                'location': 'U',
                'people_vaccinated': 'float32',
                'people_fully_vaccinated': 'float32'
            }
            dates, states, first_dose, second_dose = get_arrays(df, st_cols_dtypes)
            if np.isnan(first_dose[0]):
                first_dose[0] = 0
            if np.isnan(second_dose[0]):
                second_dose[0] = 0
            nan_to_mean(first_dose)
            nan_to_mean(second_dose)
            first_dose = first_dose.astype(np.int64)
            second_dose = second_dose.astype(np.int64)
            total_doses = np.array(first_dose + second_dose, dtype='int64')
            stv = {
                'date': dates,
                'state': states,
                'total doses': total_doses,
                'first dose': first_dose,
                'second dose': second_dose,
            }
            write_csv(stv, f"vaccinations/states/{state}.csv")
            plot_vacs(stv.values(), state, f"vaccinations/states/{state}.png")
    if man is not False:
        df = pd.read_csv(io.StringIO(man.decode('utf-8')))
        df = df[df['location'].str.contains("United States", case=False)]
        jj = df[df['vaccine'].str.contains('Johnson&Johnson', case=False)]
        pb = df[df['vaccine'].str.contains('Pfizer/BioNTech', case=False)]
        ma = df[df['vaccine'].str.contains('Moderna', case=False)]
        mk_dir('vaccinations')
        print(f"writing to '{os.path.join(os.getcwd(), 'vaccinations')}'")
        mf_cols_dtypes = {
            'date': 'datetime64',
            'total_vaccinations': 'int64'
        }
        jj_dates, jj_total_vaccinations = get_arrays(jj, mf_cols_dtypes)
        pb_dates, pb_total_vaccinations = get_arrays(pb, mf_cols_dtypes)
        ma_dates, ma_total_vaccinations = get_arrays(ma, mf_cols_dtypes)
        x1 = pb_dates
        x2 = ma_dates
        x3 = jj_dates
        y1 = pb_total_vaccinations
        y2 = ma_total_vaccinations
        y3 = jj_total_vaccinations
        fig, ax = plt.subplots(figsize=(12, 7), dpi=200)
        fig.suptitle('Vaccines')
        ax.grid(True)
        fig.autofmt_xdate()
        if jj_total_vaccinations[-1] >= 1_000_000:
            ax.get_yaxis().set_major_formatter(
                tkr.FuncFormatter(lambda y, p: f"{y / 1_000_000}M"))
        else:
            ax.get_yaxis().set_major_formatter(
                tkr.FuncFormatter(lambda y, p: f"{int(y):,d}"))
        ax.plot(x1, y1, label='Pfizer / BioNTech')
        ax.plot(x2, y2, label='Moderna')
        ax.plot(x3, y3, label='Johnson & Johnson')
        ax.legend()
        plt.savefig('vaccinations/vaccines.png', bbox_inches='tight')
    if any([ncd, scd, nvc, svc, man]):
        push_git()
    exit(0)  # or timeout(3600) if running from a server.
# --- Hard/MajorityElement.py (repo: roeiherz/CodingInterviews, license: MIT) ---
__author__ = 'roeiherz'
"""
A majority element is an element that makes up more than half of the items in an array.
Given a positive integers array, find the majority element. If there is no majority element, return -1.
Do this in O(N) time and O(1) space.
"""
def partition(A, p, q):
pivot = A[q]
curr = int(p)
while curr < q:
if pivot < A[curr]:
curr += 1
else:
A[p], A[curr] = A[curr], A[p]
p += 1
curr += 1
A[p], A[q] = A[q], A[p]
return p
def majority_element(A, p, q, k):
if k > q - p + 1:
return
pivot = partition(A, p, q)
if pivot == p + k:
return A[pivot]
elif pivot > p + k:
return majority_element(A, p=p, q=pivot-1, k=k)
else:
return majority_element(A, p=pivot+1, q=q, k=k)
def majority_element_eff(A):
def _get_candidate(A):
majority = 0
cnt = 0
for n in A:
if cnt == 0:
majority = n
if majority == n:
cnt += 1
else:
cnt -= 1
return majority
candidate = _get_candidate(A)
cnt = 0
for n in A:
if n == candidate:
cnt += 1
return candidate if cnt >= len(A) / 2 else -1
if __name__ == '__main__':
A = [3, 5, 3, 4, 2, 2, 1, 8]
candidate = majority_element_eff(A)
# candidate = majority_element(A, p=0, q=len(A)-1, k=(len(A)-1)/2)
# cnt = 0
# for n in A:
# if n == candidate:
# cnt += 1
#
# print candidate if cnt >= len(A) / 2 else -1
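A quick standalone sanity check of the voting idea behind `majority_element_eff` (Boyer-Moore majority vote), re-declared here so the snippet runs on its own:

```python
def majority(A):
    # Boyer-Moore vote: pair off differing elements; only a true majority
    # element can survive the cancellation.
    cand, cnt = None, 0
    for n in A:
        if cnt == 0:
            cand = n
        cnt += 1 if n == cand else -1
    # verification pass: the surviving candidate is valid only if it really
    # occurs more than half the time
    return cand if A.count(cand) > len(A) // 2 else -1

print(majority([3, 5, 3, 4, 2, 2, 1, 8]))  # -1 (no element exceeds half)
print(majority([2, 2, 1, 2, 3, 2, 2]))     # 2
```

The verification pass is essential: without it, the candidate from the voting phase is only guaranteed correct when a majority actually exists.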
# --- advancedalgorithms/graphs/problems/connecting_islands.py (repo: informramiz/data-structures-and-algorithms, license: Apache-2.0) ---
"""
Author: Ramiz Raja
Created on: 24/01/2020

Problem: In an ocean, there are n islands some of which are connected via bridges. Travelling over a
bridge has some cost attached with it. Find bridges in such a way that all islands are connected
with minimum cost of travelling.

You can assume that there is at least one possible way in which all islands are connected with each other.

You will be provided with two input parameters:
    num_islands = number of islands
    bridge_config = list of lists. Each inner list will have 3 elements:
        a. island A
        b. island B
        c. cost of bridge connecting both islands
    Each island is represented using a number.

Example:
    num_islands = 4
    bridge_config = [[1, 2, 1], [2, 3, 4], [1, 4, 3], [4, 3, 2], [1, 3, 10]]

Input parameters explanation:
    1. Number of islands = 4
    2. Island 1 and 2 are connected via a bridge with cost = 1
       Island 2 and 3 are connected via a bridge with cost = 4
       Island 1 and 4 are connected via a bridge with cost = 3
       Island 4 and 3 are connected via a bridge with cost = 2
       Island 1 and 3 are connected via a bridge with cost = 10

In this example, if we connect bridges like this...
    between 1 and 2 with cost = 1
    between 1 and 4 with cost = 3
    between 4 and 3 with cost = 2
...then we connect all 4 islands with cost = 6, which is the minimum traveling cost.
"""
from asserts.asserts import assert_
from advancedalgorithms.graphs.dijkstrasalgorithm.dijkstra_algorithm import *
from queue import PriorityQueue


def build_graph(num_islands, bridge_config):
    adjacency_list = [list() for _ in range(num_islands + 1)]
    for bridge in bridge_config:
        island_a, island_b, cost = bridge
        adjacency_list[island_a].append((island_b, cost))
        adjacency_list[island_b].append((island_a, cost))
    return adjacency_list


def get_minimum_cost_of_connecting(num_islands, bridge_config):
    """
    :param: num_islands - number of islands
    :param: bridge_config - bridge configuration as explained in the problem statement
    return: cost (int) minimum cost of connecting all islands
    """
    adjacency_list = build_graph(num_islands, bridge_config)
    start_vertex = 1
    is_visited = [False for _ in range(len(adjacency_list) + 1)]
    priority_queue = PriorityQueue()
    priority_queue.put_nowait((0, start_vertex))
    total_cost = 0
    while priority_queue.qsize() > 0:
        cost, selected_vertex = priority_queue.get_nowait()
        if is_visited[selected_vertex]:
            continue
        total_cost += cost
        is_visited[selected_vertex] = True
        # add neighbors
        for neighbor, edge_cost in adjacency_list[selected_vertex]:
            if not is_visited[neighbor]:
                priority_queue.put_nowait((edge_cost, neighbor))
    return total_cost


def test_function(test_case):
    num_islands = test_case[0]
    bridge_config = test_case[1]
    solution = test_case[2]
    output = get_minimum_cost_of_connecting(num_islands, bridge_config)
    assert_(expected=solution, actual=output)


def tests():
    num_islands = 4
    bridge_config = [[1, 2, 1], [2, 3, 4], [1, 4, 3], [4, 3, 2], [1, 3, 10]]
    solution = 6
    test_case = [num_islands, bridge_config, solution]
    test_function(test_case)

    num_islands = 5
    bridge_config = [[1, 2, 5], [1, 3, 8], [2, 3, 9]]
    solution = 13
    test_case = [num_islands, bridge_config, solution]
    test_function(test_case)

    num_islands = 5
    bridge_config = [[1, 2, 3], [1, 5, 9], [2, 3, 10], [4, 3, 9]]
    solution = 31
    test_case = [num_islands, bridge_config, solution]
    test_function(test_case)


tests()
# --- scripts/addons/Poly_Source/retopology.py (repo: Tilapiatsu/blender-custom_conf, license: MIT) ---
import bpy
from gpu_extras.batch import batch_for_shader
from bpy.types import Operator, GizmoGroup, Gizmo
import bmesh
import bgl
import gpu
from math import sin, cos, pi
from gpu.types import (
    GPUBatch,
    GPUVertBuf,
    GPUVertFormat,
)
from mathutils import Matrix, Vector
import mathutils

# Shader
from .utils.shader import vs_uni, fs_uni, vs_sm, fs_sm
shader_uni = gpu.types.GPUShader(vs_uni, fs_uni)
shader_sm = gpu.types.GPUShader(vs_sm, fs_sm)

# Activate Tool
from .utils.active_tool import active_tool

# --- Retopology Tool
tools_ret = {
    "PS_tool.poly_quilt",
    "PS_tool.poly_quilt_poly",
    "PS_tool.poly_quilt_extrude",
    "PS_tool.poly_quilt_edgeloop",
    "PS_tool.poly_quilt_loopcut",
    "PS_tool.poly_quilt_knife",
    "PS_tool.poly_quilt_delete",
    "PS_tool.poly_quilt_brush",
    "PS_tool.poly_quilt_seam",
    "builtin.poly_build",
    'mesh_tool.poly_quilt',
    'mesh_tool.poly_quilt_poly',
    'mesh_tool.poly_quilt_extrude',
    'mesh_tool.poly_quilt_edgeloop',
    'mesh_tool.poly_quilt_loopcut',
    'mesh_tool.poly_quilt_knife',
    'mesh_tool.poly_quilt_delete',
    'mesh_tool.poly_quilt_brush',
    'mesh_tool.poly_quilt_seam',
}
def PS_draw_bgl(self, context):
    if context.mode == 'EDIT_MESH':  # context.active_object != None and context.active_object.select_get() and
        #start_time = time.time()
        props = context.preferences.addons[__package__].preferences
        settings = context.scene.ps_set_

        theme = context.preferences.themes['Default']
        vertex_size = theme.view_3d.vertex_size

        # Color
        VA_Col = props.v_alone_color[0], props.v_alone_color[1], props.v_alone_color[2], props.v_alone_color[3]
        VE_Col = props.VE_color[0], props.VE_color[1], props.VE_color[2], props.VE_color[3]
        F_Col = props.F_color[0], props.F_color[1], props.F_color[2], props.opacity
        sel_Col = props.select_color[0], props.select_color[1], props.select_color[2], 1.0

        bgl.glEnable(bgl.GL_BLEND)
        bgl.glLineWidth(props.edge_width)
        bgl.glPointSize(vertex_size + props.verts_size)
        bgl.glCullFace(bgl.GL_BACK)
        if props.xray_ret == False:
            bgl.glEnable(bgl.GL_DEPTH_TEST)
            bgl.glEnable(bgl.GL_CULL_FACE)
        if props.line_smooth:
            bgl.glEnable(bgl.GL_LINE_SMOOTH)

        #bgl.glDepthRange(0, 0.99999)
        #bgl.glDepthFunc(600)
        bgl.glDepthMask(False)

        is_perspective = context.region_data.is_perspective
        if is_perspective:
            z_bias = props.z_bias / 350
        else:
            z_bias = 1.0

        tool_retopo = active_tool().idname in tools_ret  # Retopology Tools
        if tool_retopo:
            shader = shader_uni
        else:
            shader = shader_sm

        shader.bind()
        view_mat = context.region_data.perspective_matrix
        shader.uniform_float("view_mat", view_mat)
        shader.uniform_float("Z_Bias", z_bias)
        shader.uniform_float("Z_Offset", props.z_offset)

        if props.use_mod_ret:
            depsgraph = context.evaluated_depsgraph_get()

        uniques = context.objects_in_mode_unique_data
        #uniques = context.selected_objects
        #uniques = context.objects_in_mode
        for obj in uniques:
            if props.use_mod_ret and len(obj.modifiers) > 0:
                depsgraph.update()
                ob_eval = obj.evaluated_get(depsgraph)
                me = ob_eval.to_mesh()
                bm = bmesh.new()
                bm.from_mesh(me, face_normals=True, use_shape_key=False)
                bm.verts.ensure_lookup_table()
                bm.edges.ensure_lookup_table()
                bm.faces.ensure_lookup_table()
            else:
                bm = bmesh.from_edit_mesh(obj.data)

            if len(bm.verts) <= props.maxP_retop:
                # a retopology tool is selected
                if tool_retopo:
                    # all vertices
                    vCo = [obj.matrix_world @ v.co for v in bm.verts]
                    vNm = [v.normal for v in bm.verts]

                    # --- FACES
                    if settings.draw_faces:
                        loop_triangles = bm.calc_loop_triangles()
                        faces_indices = [[loop.vert.index for loop in looptris] for looptris in loop_triangles]
                        FACES = batch_for_shader(shader, 'TRIS', {"pos": vCo, 'nrm': vNm}, indices=faces_indices)
                        shader.uniform_float("color", F_Col)
                        FACES.draw(shader)

                    # --- EDGES
                    if settings.draw_edges:
                        #edges_indices = [[e.verts[0].index, e.verts[1].index] for e in bm.edges]
                        edges_ind = [e.index for e in bm.edges]
                        edges_cord = [obj.matrix_world @ v.co for i in edges_ind for v in bm.edges[i].verts]
                        eNm = [v.normal for i in edges_ind for v in bm.edges[i].verts]
                        EDGES = batch_for_shader(shader, 'LINES', {"pos": edges_cord, 'nrm': eNm})
                        shader.uniform_float("color", VE_Col)
                        EDGES.draw(shader)

                    # --- VERTS
                    # only isolated vertices
                    if settings.draw_verts:
                        vCo_one = [obj.matrix_world @ v.co for v in bm.verts if len(v.link_faces) < 1]  # not v.is_manifold] (not v.is_manifold and v.is_wire)
                        vCo_one_Nm = [v.normal for v in bm.verts if len(v.link_faces) < 1]
                        VERTS = batch_for_shader(shader, 'POINTS', {"pos": vCo_one, 'nrm': vCo_one_Nm})
                        shader.uniform_float("color", VA_Col)
                        VERTS.draw(shader)

                # a standard tool is selected
                else:
                    # --- FACES
                    vCo = [obj.matrix_world @ v.co for v in bm.verts]
                    vNm = [v.normal for v in bm.verts]
                    v_len = len(vCo)
                    if settings.draw_faces:
                        loop_triangles = bm.calc_loop_triangles()
                        faces_indices = [[loop.vert.index for loop in looptris] for looptris in loop_triangles]
                        face_col = [F_Col for i in range(v_len)]
                        FACES = batch_for_shader(shader, 'TRIS', {"pos": vCo, "col": face_col, 'nrm': vNm}, indices=faces_indices)
                        FACES.draw(shader)

                    # --- EDGES
                    if settings.draw_edges:
                        edges_ind = [e.index for e in bm.edges]
                        edges_cord = [obj.matrix_world @ v.co for i in edges_ind for v in bm.edges[i].verts]
                        eNm = [v.normal for i in edges_ind for v in bm.edges[i].verts]
                        edge_col = [VE_Col for i in range(len(edges_cord))]
                        for i, edge in enumerate(bm.edges):  # color the selected elements
                            if edge.select:
                                ind = i * 2
                                ind2 = ind + 1
                                edge_col[ind] = sel_Col
                                edge_col[ind2] = sel_Col
                        #edges_indices = [[e.verts[0].index, e.verts[1].index] for e in bm.edges]
                        EDGES = batch_for_shader(shader, 'LINES', {"pos": edges_cord, "col": edge_col, 'nrm': eNm})  # , indices=edges_indices
                        EDGES.draw(shader)

                    # --- VERTS
                    if settings.draw_verts:
                        vert_col = [VE_Col for i in range(v_len)]
                        for i, vert in enumerate(bm.verts):  # color the selected elements
                            if len(vert.link_faces) < 1:
                                vert_col[i] = VA_Col
                            if vert.select:
                                #face_col[i] = select_color_f
                                vert_col[i] = sel_Col
                                #edge_col[i] = sel_Col
                        VERTS = batch_for_shader(shader, 'POINTS', {"pos": vCo, "col": vert_col, 'nrm': vNm})
                        if context.tool_settings.mesh_select_mode[0]:
                            VERTS.draw(shader)

            if props.use_mod_ret:
                bm.free()

        """ if props.line_smooth:
            bgl.glDisable(bgl.GL_LINE_SMOOTH) """

        """ bgl.glDisable(bgl.GL_DEPTH_TEST)
        bgl.glDisable(bgl.GL_CULL_FACE)
        bgl.glLineWidth(1)
        bgl.glPointSize(1)
        bgl.glDisable(bgl.GL_BLEND) """

        #end_time = time.time()
        #print(end_time-start_time)


REFRESH = False
#----------------------------------------------- FROM GIZMO TODO
class PS_GT_draw(Gizmo):
    bl_idname = 'PS_GT_draw'

    def draw(self, context):
        global REFRESH
        if REFRESH:
            PS_draw_bgl(self, context)
            #REFRESH = False

    def setup(self):
        self.use_draw_modal = False
        #self.hide_select = True
        #self.group = PS_GGT_draw_group.bl_idname

    """ def test_select(self, context, location):
        if context.area.type == 'VIEW_3D':
            context.area.tag_redraw()
        return -1 """
class PS_GGT_draw_group(GizmoGroup):
    bl_idname = 'PS_GGT_draw_mesh'
    bl_label = "PS Draw"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'WINDOW'
    bl_options = {'3D', 'SHOW_MODAL_ALL'}  # 'DEPTH_3D', 'TOOL_INIT', 'SELECT', 'SCALE', 'PERSISTENT'

    @classmethod
    def poll(cls, context):
        settings = context.scene.ps_set_
        return settings.PS_retopology

    def setup(self, context):
        mesh = self.gizmos.new(PS_GT_draw.bl_idname)
        self.mesh = mesh

    def refresh(self, context):
        global REFRESH
        REFRESH = True

    def draw_prepare(self, context):
        settings = context.scene.ps_set_
        """ mesh = self.mesh
        if settings.PS_retopology:
            mesh.hide = False
        else:
            mesh.hide = True """
classes = [
    PS_GT_draw,
    PS_GGT_draw_group,
]


def register():
    for cls in classes:
        bpy.utils.register_class(cls)


def unregister():
    for cls in classes:
        bpy.utils.unregister_class(cls)
# --- models/attribute_model.py (repo: imwillhang/multimodal-healthcare, license: MIT) ---
import numpy as np
import torch
import random
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision as vision
import sys
from scipy.misc import imresize
from torchvision import transforms, utils
import models.modules as modules
import scipy.ndimage
import scipy.misc
def build_model(config):
    if config.mode == 2:
        return modules.AttributeNet(config)
    if config.model == 'convnet':
        return modules.FiveLayerConvnet(config)
    elif config.model == 'inceptionnet':
        return modules.InceptionNet(config)
    elif config.model == 'densenet':
        return modules.ModifiedDenseNet(config)
    elif config.model == 'resnet':
        return modules.ResNet(config)


def get_attrib_loss_and_acc(config, logits, labels, pred_attr, real_attr):
    pred = np.argmax(logits.data.cpu().numpy(), axis=1)
    acc = np.mean(pred == labels.data.cpu().numpy())
    attr_loss = config.second_loss(pred_attr, real_attr)
    if config.mode == 2:
        config.recon_weight = max(config.recon_weight, 0.1)
    loss = config.loss(logits, labels) + config.recon_weight * attr_loss
    return loss, attr_loss, acc, pred


def get_loss_and_acc(config, logits, labels):
    pred = np.argmax(logits.data.cpu().numpy(), axis=1)
    acc = np.mean(pred == labels.data.cpu().numpy())
    loss = config.loss(logits, labels)
    return loss, acc, pred
def build_and_train(config, train_fold, val_fold, test_fold):
    model = build_model(config).cuda()
    parameters = filter(lambda p: p.requires_grad, model.parameters())
    print('net is built')
    config.loss = nn.CrossEntropyLoss()
    config.second_loss = nn.L1Loss()
    config.optimizer = optim.SGD(parameters, lr=config.lr, momentum=0.9)
    if config.model == 'inceptionnett':
        from torch.optim import lr_scheduler
        config.scheduler = lr_scheduler.ReduceLROnPlateau(config.optimizer, 'min')
    best_val = 0.0
    best_test = 0.0
    best_test_epoch = 0
    save_train_loss = []
    save_train_acc = []
    save_val_loss = []
    save_val_acc = []
    save_test_acc = []
    for epoch in range(config.epochs):
        train_loss, train_acc, _, _ = run_epoch(model, config, train_fold, epoch, mode='Train')
        val_loss, val_acc, _, _ = run_epoch(model, config, val_fold, epoch, mode='Val')
        _, test_acc, all_labels, all_preds = run_epoch(model, config, test_fold, epoch, mode='Test')
        if val_acc > best_val:
            best_val = val_acc
        if test_acc > best_test:
            best_test = test_acc
            best_test_epoch = epoch
        print('Best val accuracy: {}'.format(best_val))
        print('Best test accuracy: {} at epoch {}'.format(best_test, best_test_epoch))
        if config.model == 'inceptionnett':
            config.scheduler.step(val_loss)
        save_train_loss.append(train_loss)
        save_train_acc.append(train_acc)
        save_val_loss.append(val_loss)
        save_val_acc.append(val_acc)
        save_test_acc.append(test_acc)
        np.save('outputs/train_loss_{}.npy'.format(config.experimentid), np.asarray(save_train_loss))
        np.save('outputs/train_acc_{}.npy'.format(config.experimentid), np.asarray(save_train_acc))
        np.save('outputs/val_loss_{}.npy'.format(config.experimentid), np.asarray(save_val_loss))
        np.save('outputs/val_acc_{}.npy'.format(config.experimentid), np.asarray(save_val_acc))
        np.save('outputs/test_acc_{}.npy'.format(config.experimentid), np.asarray(save_test_acc))
        np.save('outputs/labels{}.npy'.format(config.experimentid), all_labels)
        np.save('outputs/preds{}.npy'.format(config.experimentid), all_preds)
    return best_val
def prepare_data(config, images, labels, attributes, mode):
    # if config.mode == 2:
    #     images = np.array([imresize(image, (224, 224)) for image in images])
    # print(images)
    mean = 118.546752674
    std = 56.8170623267
    # 0.203418499378 0.0694907381909
    images = images.astype(np.uint8)
    if mode == 'Train' and config.augment > 0:
        transform = transforms.Compose([
            vision.transforms.ToPILImage(),
            vision.transforms.Scale(360),
            vision.transforms.RandomCrop(299),
            vision.transforms.ToTensor()
        ])
        a_images = []
        a_labels = []
        a_attribs = []
        for idx in range(len(images)):
            image = images[idx]
            label = labels[idx]
            attribs = attributes[idx]
            times = 1
            images_ = []
            images_.append(image)
            a_images.append(image)
            for _ in range(config.flips):
                rotated = scipy.ndimage.interpolation.rotate(image, random.randrange(1, 360))
                rotated = scipy.misc.imresize(rotated, (299, 299))
                images_.append(rotated)
                a_images.append(rotated)
                times += 1
            for im in images_:
                for _ in range(config.augment):
                    expim = np.expand_dims(im, axis=2)
                    a_images.append((transform(expim).numpy() * 255).squeeze())
                    times += 1
            a_labels.extend([label] * times)
            a_attribs += [attribs] * times
        images = np.asarray(a_images)
        images = np.expand_dims(images, axis=1)
        labels = np.asarray(a_labels)
        attributes = np.asarray(a_attribs).astype(np.float32)
    else:
        images = np.expand_dims(images, axis=1)
        if mode != 'Train':
            np.save('saved_images/images.npy', images)
    images = images.astype(np.float64)
    images -= mean
    images /= std
    images = torch.from_numpy(images).float()
    labels = torch.from_numpy(labels).long()
    attributes = torch.from_numpy(attributes).float()
    return images, labels, attributes
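The fixed mean/std normalization in `prepare_data` is ordinary standardization. A minimal, framework-free sketch of the same transform (the sample values here are illustrative, not from the dataset):

```python
from statistics import mean, pstdev

def standardize(values, mu=None, sigma=None):
    # Fall back to the sample's own statistics when none are given,
    # mirroring `images -= mean; images /= std` above.
    mu = mean(values) if mu is None else mu
    sigma = pstdev(values) if sigma is None else sigma
    return [(v - mu) / sigma for v in values]

pixels = [100.0, 120.0, 140.0]
z = standardize(pixels)  # zero mean, unit population std
```

With a precomputed dataset-wide `mu`/`sigma` (like the 118.5/56.8 constants above), the same values are reused at train and test time so both splits see the same scale.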
def run_epoch(model, config, fold, epoch, mode='Train'):
    '''
    Run one epoch of training or evaluation over `fold`.
    '''
    total_loss = 0.0
    total_acc = 0.0
    all_labels = []
    all_preds = []
    it = 0
    if mode == 'Train':
        model.train()
    else:
        model.eval()
    more_data = None
    if config.mode == -1:
        more_data = fold
    else:
        more_data = fold.get_iterator()
    for item in more_data:
        images, labels, attributes = item
        it += 1
        # feed into model
        if config.mode != -1:
            images, labels, attributes = prepare_data(config, images, labels, attributes, mode)
        images = Variable(images, volatile=mode != 'Train').cuda()
        labels = Variable(labels, volatile=mode != 'Train').cuda()
        attributes = Variable(attributes, volatile=mode != 'Train').cuda()
        attr_loss_num = None
        pred = None
        if config.mode != 2:
            logits = model(images)
            loss, acc, pred = get_loss_and_acc(config, logits, labels)
        else:
            logits, reconstruction = model(images)
            loss, attr_loss, acc, pred = get_attrib_loss_and_acc(config, logits, labels, reconstruction, attributes)
            attr_loss_num = attr_loss.data[0]
        loss_num = loss.data[0]
        total_loss += loss_num
        total_acc += acc
        if mode == 'Train':
            config.optimizer.zero_grad()
            loss.backward()
            nn.utils.clip_grad_norm(model.parameters(), 5.0)
            config.optimizer.step()
            if it % 50 == 0:
                print(pred, labels.data.cpu().numpy())
                if config.mode != 2:
                    print('Epoch {} | Iteration {} | Loss {} | Accuracy {} | LR {}'.format(
                        epoch, it, loss_num, acc, config.lr))
                else:
                    print('Epoch {} | Iteration {} | Loss {} | Recon Loss {} | Accuracy {} | LR {}'.format(
                        epoch, it, loss_num, attr_loss_num, acc, config.lr))
                sys.stdout.flush()
        else:
            all_labels.extend(labels.data.cpu().numpy())
            all_preds.extend(pred)
    total_loss /= it
    total_acc /= it
    print('{} loss: {}'.format(mode, total_loss))
    print('{} accuracy: {}'.format(mode, total_acc))
    return total_loss, total_acc, all_labels, all_preds
# fileProcessing.py (sayid-coder/python-reference, MIT)
#### Write a File ####
with open('somefile.txt', 'w') as file:
    file.write('tomato\npasta\ngarlic')

#### Read a File ####
with open('somefile.txt', 'r') as file:
    # Read the whole content into one string.
    content = file.read()
    # Make a list where each line of the file is an element in the list.
    file.seek(0)  # read() left the file pointer at the end, so rewind first
    print(file.readlines())

try:
    with open("fileName.dat", "r") as file:
        print(file.read())
except FileNotFoundError:  # FileExistsError is for creation, not missing files
    print('File not found.')

#### Read File Line-By-Line ####
with open('myFile.txt', 'r') as file:
    for line in file:
        print(line)

#### File System ####
import os

rootDirectory = 'C:/'
for root, folders, files in os.walk(rootDirectory):
    for filename in files:
        print(root, filename)

path = '/path/to/some/file.txt'
if os.path.isfile(path):
    print('It is a file!')
if os.path.exists(path):
    print('Exists!')

#### Text Processing ####
with open(filePath, 'r') as file:
    for line in file:
        segments = line.split('|')
        for segment in segments:
            print(segment)

#### JSON Processing ####
import json

with open(filePath, 'r') as file:
    for line in file:
        payload = json.loads(line)
        print(payload["00020001"])

#### CSV File Processing ####
import csv

with open(filePath) as file:
    csvReader = csv.reader(file)
    for row in csvReader:
        print(row[0])
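The JSON section above assumes one JSON object per line (JSON Lines). A self-contained sketch of that pattern using an in-memory file and a made-up key:

```python
import io
import json

jsonl = io.StringIO('{"a": 1}\n{"a": 2}\n')
# Parse one object per line, as the loop over the file above does.
values = [json.loads(line)["a"] for line in jsonl]
```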
# main.py (HYOUG/XOREncryption, MIT)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# script by "HYOUG"
from argparse import ArgumentParser
def xorbytes(target: bytes, key: bytes) -> bytes:
    output = []
    # Walk the key cyclically; the original never advanced `index` and
    # dropped the last key byte by taking the modulus with len(key) - 1.
    for index, byte in enumerate(target):
        output.append(byte ^ key[index % len(key)])
    return bytes(output)
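XOR with a repeating key is its own inverse: applying it twice with the same key restores the plaintext, which is why a single function serves for both encryption and decryption. A quick self-contained check (the function is restated here so the snippet runs on its own):

```python
def xorbytes(target: bytes, key: bytes) -> bytes:
    # Same cyclic-key XOR as above, written as a generator expression.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(target))

msg = b"attack at dawn"
key = b"secret"
roundtrip = xorbytes(xorbytes(msg, key), key)  # recovers msg
```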
def main() -> None:
    parser = ArgumentParser()
    parser.add_argument("fp_target", help="File pointer of the target")
    parser.add_argument("fp_key", help="File pointer of the key")
    args = parser.parse_args()
    with open(args.fp_target, "rb") as f:
        target_data = f.read()
    with open(args.fp_key, "rb") as f:
        key_data = f.read()
    new_data = xorbytes(target_data, key_data)
    if "." in args.fp_target:
        new_fp = f"{'.'.join(args.fp_target.split('.')[:-1])}_encrypted.{args.fp_target.split('.')[-1]}"
    else:
        new_fp = f"{args.fp_target}_encrypted"
    with open(new_fp, "wb") as f:
        f.write(new_data)

if __name__ == "__main__":
    main()
# src/models/wisenet_base/_DEPLOY/train.py (JanAlexanderPersonal/covid19_weak_supervision, Apache-2.0)
import torch
import numpy as np
import timeit
start = timeit.default_timer()
import misc as ms
import ann_utils as au
def main(main_dict, train_only=False):
    ms.print_welcome(main_dict)

    # EXTRACT VARIABLES
    reset = main_dict["reset"]
    epochs = main_dict["epochs"]
    batch_size = main_dict["batch_size"]
    sampler_name = main_dict["sampler_name"]
    verbose = main_dict["verbose"]
    loss_name = main_dict["loss_name"]
    metric_name = main_dict["metric_name"]
    epoch2val = main_dict["epoch2val"]
    val_batchsize = main_dict["val_batchsize"]
    metric_class = main_dict["metric_dict"][metric_name]
    loss_function = main_dict["loss_dict"][loss_name]
    predictList = main_dict["predictList"]

    # Assert everything is available
    ## Sharp proposals
    ## LCFCN points
    ## gt_annDict

    # Dataset
    train_set, val_set = ms.load_trainval(main_dict)
    train_set[0]

    # Model
    if reset == "reset" or not ms.model_exists(main_dict):
        model, opt, history = ms.init_model_and_opt(main_dict, train_set)
        print("TRAINING FROM SCRATCH EPOCH: %d/%d" % (history["epoch"], epochs))
    else:
        model, opt, history = ms.load_latest_model_and_opt(main_dict, train_set)
        print("RESUMING EPOCH %d/%d" % (history["epoch"], epochs))

    # Get Dataloader
    trainloader = ms.get_dataloader(dataset=train_set,
                                    batch_size=batch_size,
                                    sampler_class=main_dict["sampler_dict"][sampler_name])

    # SAVE HISTORY
    history["epoch_size"] = len(trainloader)
    if "trained_batch_names" in history:
        model.trained_batch_names = set(history["trained_batch_names"])
    ms.save_pkl(main_dict["path_history"], history)

    # START TRAINING
    start_epoch = history["epoch"]
    for epoch in range(start_epoch + 1, epochs):
        with torch.enable_grad():
            # %%%%%%%%%%% 1. TRAIN PHASE %%%%%%%%%%%%
            train_dict = ms.fit(model, trainloader, opt,
                                loss_function=loss_function,
                                verbose=verbose,
                                epoch=epoch)

        # Update history
        history["epoch"] = epoch
        history["trained_batch_names"] = list(model.trained_batch_names)
        history["train"] += [train_dict]

        # Save model, opt and history
        ms.save_latest_model_and_opt(main_dict, model, opt, history)

        # %%%%%%%%%%% 2. VALIDATION PHASE %%%%%%%%%%%%
        with torch.no_grad():
            for predict_name in predictList:
                if predict_name == "MAE":
                    val_dict = ms.validate(dataset=val_set,
                                           model=model,
                                           verbose=verbose,
                                           metric_class=metric_class,
                                           batch_size=val_batchsize,
                                           epoch=epoch)
                    val_dict["predict_name"] = predict_name
                    val_dict["epoch"] = epoch
                    # Update history
                    history["val"] += [val_dict]
                    # Lower is better for MAE
                    if (history["best_model"] == {} or
                            history["best_model"][metric_name] >= val_dict[metric_name]):
                        history["best_model"] = val_dict
                        ms.save_best_model(main_dict, model)
                else:
                    val_dict, pred_annList = au.validate(model, val_set,
                                                         predict_method=predict_name,
                                                         n_val=len(val_set), return_annList=True)
                    val_dict["predict_name"] = predict_name
                    val_dict["epoch"] = epoch
                    # Update history
                    history["val"] += [val_dict]
                    # Higher is better
                    if (history["best_model"] == {} or
                            history["best_model"]["0.5"] <= val_dict["0.5"]):
                        history["best_model"] = val_dict
                        ms.save_best_model(main_dict, model)
                        ms.save_pkl(main_dict["path_best_annList"], pred_annList)
                        # annList = ms.load_pkl(main_dict["path_best_annList"])
                        # gtAnnDict = au.load_gtAnnDict(main_dict)
                        # au.get_perSizeResults(gtAnnDict, annList)
        ms.save_pkl(main_dict["path_history"], history)
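The validation loop above keeps the checkpoint with the highest `"0.5"` score (assumed here to be detection quality at an 0.5 overlap threshold). The same best-model bookkeeping in isolation, with made-up scores:

```python
def update_best(best, val_dict):
    # Higher is better; an empty dict means nothing has been saved yet.
    if best == {} or best["0.5"] <= val_dict["0.5"]:
        return val_dict
    return best

best = {}
for score in (0.3, 0.7, 0.5):
    best = update_best(best, {"0.5": score})
# best["0.5"] is 0.7 after the loop
```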
# strava/commands/profile.py (bwilczynski/strava-cli, MIT)
import click
from strava import api
from strava.decorators import output_option, login_required, format_result, OutputType

_PROFILE_COLUMNS = ("key", "value")


@click.command("profile")
@output_option()
@login_required
@format_result(table_columns=_PROFILE_COLUMNS)
def get_profile(output):
    result = api.get_athlete()
    return result if output == OutputType.JSON.value else _as_table(result)


def _as_table(athlete):
    def format_name():
        return f'{athlete.get("firstname")} {athlete.get("lastname")}'

    formatted_athlete = {
        "id": athlete.get("id"),
        "username": athlete.get("username"),
        "name": format_name(),
        "email": athlete.get("email"),
    }
    return [{"key": k, "value": v} for k, v in formatted_athlete.items()]
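`_as_table` flattens the athlete payload into key/value rows for tabular output. A standalone sketch of the same transformation with a hypothetical payload (the real one comes from the Strava API):

```python
athlete = {"id": 7, "username": "ty", "firstname": "Ty",
           "lastname": "B", "email": "ty@example.com"}  # hypothetical payload

def as_table(data):
    # Build the display name, then flatten the chosen fields into rows.
    name = '{} {}'.format(data.get("firstname"), data.get("lastname"))
    flat = {"id": data.get("id"), "username": data.get("username"),
            "name": name, "email": data.get("email")}
    return [{"key": k, "value": v} for k, v in flat.items()]

rows = as_table(athlete)  # four {"key": ..., "value": ...} rows
```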
# tests/app/helpers.py (pebblecode/cirrus-marketplace-api, MIT)
from __future__ import absolute_import
import os
import json
from datetime import datetime, timedelta

from nose.tools import assert_equal, assert_in

from app import create_app, db
from app.models import Service, Supplier, ContactInformation, Framework, Lot, User, FrameworkLot, Brief, Order

TEST_SUPPLIERS_COUNT = 3

COMPLETE_DIGITAL_SPECIALISTS_BRIEF = {
    "essentialRequirements": ["MS Paint", "GIMP"],
    "startDate": "31/12/2016",
    "evaluationType": ["Work history", "Reference", "Interview"],
    "niceToHaveRequirements": ["LISP"],
    "existingTeam": "Nice people.",
    "specialistWork": "All the things",
    "workingArrangements": "Just get the work done.",
    "organisation": "Org.org",
    "location": "Wales",
    "specialistRole": "developer",
    "title": "I need a Developer",
    "priceWeighting": 85,
    "contractLength": "3 weeks",
    "culturalWeighting": 5,
    "securityClearance": "Developed vetting required.",
    "technicalWeighting": 10,
    "culturalFitCriteria": ["CULTURAL", "FIT"],
    "numberOfSuppliers": "3",
    "summary": "Doing some stuff to help out.",
    "workplaceAddress": "Aviation House"
}
class WSGIApplicationWithEnvironment(object):
    def __init__(self, app, **kwargs):
        self.app = app
        self.kwargs = kwargs

    def __call__(self, environ, start_response):
        for key, value in self.kwargs.items():
            environ[key] = value
        return self.app(environ, start_response)


class BaseApplicationTest(object):
    lots = {
        "iaas": "G6-IaaS.json",
        "saas": "G6-SaaS.json",
        "paas": "G6-PaaS.json",
        "scs": "G6-SCS.json"
    }

    config = None

    def setup(self):
        self.app = create_app('test')
        self.client = self.app.test_client()
        self.setup_authorization(self.app)

    def setup_authorization(self, app):
        """Set up bearer token and pass on all requests"""
        valid_token = 'valid-token'
        app.wsgi_app = WSGIApplicationWithEnvironment(
            app.wsgi_app,
            HTTP_AUTHORIZATION='Bearer {}'.format(valid_token))
        self._auth_tokens = app.config['DM_API_AUTH_TOKENS']
        app.config['DM_API_AUTH_TOKENS'] = valid_token

    def do_not_provide_access_token(self):
        self.app.wsgi_app = self.app.wsgi_app.app

    def setup_dummy_user(self, id=123, role='buyer'):
        with self.app.app_context():
            if User.query.get(id):
                return id
            user = User(
                id=id,
                email_address="test+{}@digital.gov.uk".format(id),
                name="my name",
                password="fake password",
                active=True,
                role=role,
                password_changed_at=datetime.now()
            )
            db.session.add(user)
            db.session.commit()

            return user.id
    def setup_dummy_briefs(self, n, title=None, status='draft', user_id=1, brief_start=1, lot="digital-specialists"):
        user_id = self.setup_dummy_user(id=user_id)

        with self.app.app_context():
            framework = Framework.query.filter(Framework.slug == "digital-outcomes-and-specialists").first()
            lot = Lot.query.filter(Lot.slug == lot).first()
            data = COMPLETE_DIGITAL_SPECIALISTS_BRIEF.copy()
            data["title"] = title
            for i in range(brief_start, brief_start + n):
                self.setup_dummy_brief(
                    id=i,
                    user_id=user_id,
                    data=dict(COMPLETE_DIGITAL_SPECIALISTS_BRIEF, title=title),
                    framework_slug="digital-outcomes-and-specialists",
                    lot_slug=lot.slug,
                    status=status,
                )
            db.session.commit()

    def setup_dummy_brief(self, id=None, user_id=1, status=None, data=None, published_at=None,
                          framework_slug="digital-outcomes-and-specialists", lot_slug="digital-specialists"):
        if published_at is not None and status is not None:
            raise ValueError("Cannot provide both status and published_at")
        if not published_at:
            if status == 'closed':
                published_at = datetime.utcnow() - timedelta(days=1000)
            else:
                published_at = None if status == 'draft' else datetime.utcnow()

        framework = Framework.query.filter(Framework.slug == framework_slug).first()
        lot = Lot.query.filter(Lot.slug == lot_slug).first()

        db.session.add(Brief(
            id=id,
            data=data,
            framework=framework,
            lot=lot,
            users=[User.query.get(user_id)],
            published_at=published_at,
        ))
    def setup_dummy_suppliers(self, n):
        with self.app.app_context():
            for i in range(n):
                db.session.add(
                    Supplier(
                        supplier_id=i,
                        name=u"Supplier {}".format(i),
                        description="",
                        clients=[]
                    )
                )
                db.session.add(
                    ContactInformation(
                        supplier_id=i,
                        contact_name=u"Contact for Supplier {}".format(i),
                        email=u"{}@contact.com".format(i),
                        postcode=u"SW1A 1AA"
                    )
                )
            db.session.commit()

    def setup_additional_dummy_suppliers(self, n, initial):
        with self.app.app_context():
            for i in range(1000, n + 1000):
                db.session.add(
                    Supplier(
                        supplier_id=i,
                        name=u"{} suppliers Ltd {}".format(initial, i),
                        description="",
                        clients=[]
                    )
                )
                db.session.add(
                    ContactInformation(
                        supplier_id=i,
                        contact_name=u"Contact for Supplier {}".format(i),
                        email=u"{}@contact.com".format(i),
                        postcode=u"SW1A 1AA"
                    )
                )
            db.session.commit()

    def setup_dummy_service(self, service_id, supplier_id=1, data=None,
                            status='published', framework_id=1, lot_id=1):
        now = datetime.utcnow()
        db.session.add(Service(service_id=service_id,
                               supplier_id=supplier_id,
                               status=status,
                               data=data or {
                                   'serviceName': 'Service {}'.format(service_id)
                               },
                               framework_id=framework_id,
                               lot_id=lot_id,
                               created_at=now,
                               updated_at=now))

    def setup_dummy_services(self, n, supplier_id=None, framework_id=1,
                             start_id=0, lot_id=1):
        with self.app.app_context():
            for i in range(start_id, start_id + n):
                self.setup_dummy_service(
                    service_id=str(2000000000 + start_id + i),
                    supplier_id=supplier_id or (i % TEST_SUPPLIERS_COUNT),
                    framework_id=framework_id,
                    lot_id=lot_id
                )
            db.session.commit()
    def setup_dummy_services_including_unpublished(self, n):
        self.setup_dummy_suppliers(TEST_SUPPLIERS_COUNT)
        self.setup_dummy_services(n)
        with self.app.app_context():
            # Add extra 'enabled' and 'disabled' services
            self.setup_dummy_service(
                service_id=str(n + 2000000001),
                supplier_id=n % TEST_SUPPLIERS_COUNT,
                status='disabled')
            self.setup_dummy_service(
                service_id=str(n + 2000000002),
                supplier_id=n % TEST_SUPPLIERS_COUNT,
                status='enabled')
            # Add an extra supplier that will have no services
            db.session.add(
                Supplier(supplier_id=TEST_SUPPLIERS_COUNT, name=u"Supplier {}"
                         .format(TEST_SUPPLIERS_COUNT))
            )
            db.session.add(
                ContactInformation(
                    supplier_id=TEST_SUPPLIERS_COUNT,
                    contact_name=u"Contact for Supplier {}".format(
                        TEST_SUPPLIERS_COUNT),
                    email=u"{}@contact.com".format(TEST_SUPPLIERS_COUNT),
                    postcode=u"SW1A 1AA"
                )
            )
            db.session.commit()

    def teardown(self):
        self.teardown_authorization()
        self.teardown_database()

    def teardown_authorization(self):
        if self._auth_tokens is None:
            del self.app.config['DM_API_AUTH_TOKENS']
        else:
            self.app.config['DM_API_AUTH_TOKENS'] = self._auth_tokens

    def teardown_database(self):
        with self.app.app_context():
            db.session.remove()
            for table in reversed(db.metadata.sorted_tables):
                if table.name not in ["lots", "frameworks", "framework_lots"]:
                    db.engine.execute(table.delete())
            FrameworkLot.query.filter(FrameworkLot.framework_id >= 100).delete()
            Framework.query.filter(Framework.id >= 100).delete()
            db.session.commit()
            db.get_engine(self.app).dispose()

    def load_example_listing(self, name):
        file_path = os.path.join("example_listings", "{}.json".format(name))
        with open(file_path) as f:
            return json.load(f)

    def string_to_time_to_string(self, value, from_format, to_format):
        return datetime.strptime(value, from_format).strftime(to_format)

    def string_to_time(self, value, from_format):
        return datetime.strptime(value, from_format)

    def set_framework_status(self, slug, status):
        Framework.query.filter_by(slug=slug).update({'status': status})
        db.session.commit()
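`string_to_time_to_string` is a thin wrapper over `strptime`/`strftime`; the conversion can be checked without the test app (the sample date is illustrative):

```python
from datetime import datetime

def string_to_time_to_string(value, from_format, to_format):
    # Parse with one format, re-render with another.
    return datetime.strptime(value, from_format).strftime(to_format)

converted = string_to_time_to_string("2016-12-31", "%Y-%m-%d", "%d/%m/%Y")
# converted == "31/12/2016"
```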
class JSONTestMixin(object):
    """
    Tests to verify endpoints that accept JSON.
    """
    endpoint = None
    method = None
    client = None

    def open(self, **kwargs):
        return self.client.open(
            self.endpoint.format(self=self),
            method=self.method,
            **kwargs
        )

    def test_non_json_causes_failure(self):
        response = self.open(
            data='this is not JSON',
            content_type='application/json')

        assert_equal(response.status_code, 400)
        assert_in(b'Invalid JSON', response.get_data())

    def test_invalid_content_type_causes_failure(self):
        response = self.open(
            data='{"services": {"foo": "bar"}}')

        assert_equal(response.status_code, 400)
        assert_in(b'Unexpected Content-Type', response.get_data())


class JSONUpdateTestMixin(JSONTestMixin):
    def test_missing_updated_by_should_fail_with_400(self):
        response = self.open(
            data='{}',
            content_type='application/json')

        assert_equal(response.status_code, 400)
        assert_in("'updated_by' is a required property", response.get_data(as_text=True))
# src/olympia/zadmin/admin.py (Sparsh-Bansal/addons-server, BSD-3-Clause)
from django.conf import settings
from django.contrib import admin, auth
from django.core.exceptions import PermissionDenied
from django.shortcuts import redirect
from django.utils.html import format_html
from django.urls import reverse

from olympia.accounts.utils import redirect_for_login

from . import models


def related_content_link(obj, related_class, related_field,
                         related_manager='objects', count=None):
    """
    Return a link to the admin changelist for the instances of related_class
    linked to the object.
    """
    url = 'admin:{}_{}_changelist'.format(
        related_class._meta.app_label, related_class._meta.model_name)
    queryset = getattr(related_class, related_manager).filter(
        **{related_field: obj})
    if count is None:
        count = queryset.count()
    return format_html(
        '<a href="{}?{}={}">{}</a>',
        reverse(url), related_field, obj.pk, count)


def related_single_content_link(obj, related_field):
    """
    Return a link to the admin change page for a related instance linked to
    the object.
    """
    instance = getattr(obj, related_field)
    if instance:
        related_class = instance._meta.model
        url = 'admin:{}_{}_change'.format(
            related_class._meta.app_label, related_class._meta.model_name)
        return format_html(
            '<a href="{}">{}</a>',
            reverse(url, args=(instance.pk,)), repr(instance))
    else:
        return ''


# Hijack the admin's login to use our pages.
def login(request):
    # If the user has permission, just send them to the index page.
    if request.method == 'GET' and admin.site.has_permission(request):
        next_path = request.GET.get(auth.REDIRECT_FIELD_NAME)
        return redirect(next_path or 'admin:index')
    # Otherwise, if they're logged in but don't have permission, return a 403.
    elif request.user.is_authenticated:
        raise PermissionDenied
    else:
        return redirect_for_login(request)


admin.site.register(models.Config)
admin.site.disable_action('delete_selected')
admin.site.site_url = settings.EXTERNAL_SITE_URL
admin.site.site_header = admin.site.index_title = 'AMO Administration'
admin.site.login = login
# play.py (corollari/markov-music, Unlicense)
import os, csv, time
def beep(f, d):
    os.system('beep -f %s -l %s' % (f, d))

with open('musica.txt') as csvfile:
    dataOrig = list(csv.reader(csvfile))

data = []
for k in dataOrig:
    data = data + k[:-1]
for j in range(len(data)):
    data[j] = int(data[j])

duration = 200
n = 1
data = data[:50]
for i in range(len(data) - 1):
    if data[i] != 0 and data[i + 1] != 0:
        for j in range(1, n + 1):
            f = 200 + (data[i] + (data[i + 1] - data[i]) * j / n) * 20  # change(200->400)
            beep(f, duration / n) if i > 0 else time.sleep(duration / 1000)  # change(20->10)
            # list(map(lambda x: x*duration/20, range(20))):
            # beep(200+i*20, duration) if i>0 else time.sleep(duration/1000)
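The sweep above linearly interpolates between consecutive note values before mapping them to a beep frequency; the mapping can be checked without audio (the helper name and defaults are illustrative, not from the script):

```python
def interp_freq(a, b, j, n, base=200, step=20):
    # Frequency for sub-step j of n between note values a and b,
    # mirroring f = 200 + (data[i] + (data[i+1] - data[i]) * j / n) * 20.
    return base + (a + (b - a) * j / n) * step

# With n == 1 the single sub-step lands exactly on the second note.
final = interp_freq(5, 9, 1, 1)  # 200 + 9 * 20
```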
# tydev/gui/list.py (tylerb94/tydev, Unlicense)
import pygame
import tydev
from tydev.gui.template import Template


class List(Template):
    def __init__(self, location, size):
        Template.__init__(self, location=location, size=size)
        self.background_color = (255, 255, 255)
        self.highlight_color = (130, 145, 255)
        self.objects = []
        self.scroll_amount = 0.0
        self.scroll_max = 0.0
        self.scroll_speed = 25.0
        self.selected = -1
        self.map = {}
        self.list_height = 0.0
        self.redraw = True
        self.object_images = []
        self.scrollbar_location = [0, 0]
        self.scrollbar_bounds = [0, 0]
        self.scrollbar_width = 10
        self.scrollbar_color = (120, 130, 230, 255)
        self.scrollbar_backcolor = (50, 50, 50, 120)

    def append(self, object):
        self.objects.append(object)
        self.redraw = True

    def clear(self):
        self.objects.clear()
        self.redraw = True

    def count(self):
        return len(self.objects)

    def draw(self):
        if self.redraw:
            self.redraw = False
            # Render all the objects in the list
            self.object_images = []
            for obj in self.objects:
                obj.draw()
                self.object_images.append(obj.image)
        self.image.fill(self.background_color)
        # Place each list object onto the list image
        y = -self.scroll_amount
        self.list_height = 0.0
        index = 0
        self.map.clear()
        for img in self.object_images:
            # Remap the y locations of each object
            self.map[str(y)] = index
            # Highlight image if selected
            if self.selected == index:
                height = img.get_height()
                width = self.image.get_width()
                pygame.draw.rect(self.image, self.highlight_color,
                                 (0, y, width, height))
            index += 1
            # Draw the image
            self.image.blit(img, (0, y))
            y += img.get_height()
            self.list_height += img.get_height()
        self.scroll_max = self.list_height - self.size[1]
        # Draw scrollbar
        if self.list_height > self.size[1]:
            x = self.image.get_width() - self.scrollbar_width
            y = 0
            w = self.scrollbar_width
            h = self.image.get_height()
            self.scrollbar_bounds = (x, h)
            pygame.draw.rect(self.image, self.scrollbar_backcolor, (int(x), int(y), int(w), int(h)), 0)
            h = (self.size[1] / self.list_height) * self.size[1]
            y = (self.size[1] - h + 1) * (self.scroll_amount / self.scroll_max)
            pygame.draw.rect(self.image, self.scrollbar_color, (int(x), int(y), int(w), int(h)), 0)

    def move(self, amount):
        self.selected += amount
        if self.selected < 0:
            self.selected = 0
        elif self.selected >= len(self.objects):
            self.selected = len(self.objects) - 1

    def event(self, event, delta):
        if self.mouse_over():
            if event.type == pygame.MOUSEWHEEL:
                if self.list_height > self.size[1]:
                    self.scroll_amount -= event.y * self.scroll_speed
                    if self.scroll_amount > self.scroll_max:
                        self.scroll_amount = self.scroll_max
                    elif self.scroll_amount < 0:
                        self.scroll_amount = 0
            elif event.type == pygame.MOUSEBUTTONDOWN:
                if event.button == 1 or event.button == 3:
                    if self.mouse_over():
                        mouse = self.get_relative_mouse()
                        y = mouse[1]
                        for index in self.map:
                            if y > float(index):
                                self.selected = self.map[index]
            elif event.type == pygame.KEYDOWN:
                # KEYDOWN events carry `key`, not `button`
                if self.has_focus:
                    if event.key == pygame.K_UP:
                        self.move(-1)
                    elif event.key == pygame.K_DOWN:
                        self.move(1)
        index = 0
        height = 0
        for obj in self.objects:
            y = self.location[1] + height - self.scroll_amount
            x = self.location[0]
            height += obj.image.get_height()
            obj.location = [x, y]
            obj.event(event, delta)
            index += 1

    def update(self, delta):
        for obj in self.objects:
            obj.update(delta)
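The wheel handler clamps the scroll offset to the range [0, scroll_max]; extracted as a pure helper, the clamping is easy to verify on its own (a sketch, not part of the class):

```python
def clamp_scroll(amount, scroll_max):
    # Keep the scroll offset within [0, scroll_max], as the
    # MOUSEWHEEL branch above does with its two if-checks.
    return max(0.0, min(amount, scroll_max))

low = clamp_scroll(-10.0, 100.0)   # clamped up to 0.0
high = clamp_scroll(250.0, 100.0)  # clamped down to 100.0
```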
# HW1-1/lib/visualize.py (b05611038/MLDS_2019SPRING, MIT)
import csv
import numpy as np
import torch
import matplotlib.pyplot as plt
def TrainHistoryPlot(his, his_label, save_name, title, axis_name, save = True):
    # his[0] must hold the iteration or epoch values;
    # the other entries of the history list are the acc or loss of the different models
plt.figure(figsize = (10, 6))
for i in range(1, len(his)):
plt.plot(his[0], his[i])
plt.title(title)
plt.xlabel(axis_name[0])
plt.ylabel(axis_name[1])
plt.legend(his_label, loc = 'upper left')
if save:
plt.savefig(save_name + '.png')
print('Picture: ' + save_name + '.png done.')
else:
plt.show()
def ModelPredictFunctionPlot(model, model_name, func, save_name, title, save = True):
    # plot a demo of each model on a function-fitting task
if len(model) != len(model_name):
raise RuntimeError('Please check the model list.')
plt.figure(figsize = (10, 6))
domain = np.arange(0, 20, 0.01)
outcome = [func(domain)]
plt.plot(domain, outcome[0], label = 'Ground truth')
for i in range(1, len(model) + 1):
outcome.append(model[i - 1](torch.tensor(domain).view(-1, 1).float()))
outcome[i] = outcome[i].view(-1, ).detach().numpy()
plt.plot(domain, outcome[i], label = model_name[i - 1])
plt.title(title)
plt.xlabel('input')
plt.ylabel('output')
plt.legend(loc = 'upper right')
if save:
plt.savefig(save_name + '.png')
print('Picture: ' + save_name + '.png done.')
else:
plt.show()
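The comment in `TrainHistoryPlot` describes the layout expected for `his`; a minimal sketch of building such a history list (all values here are invented):

```python
# his[0] holds the shared x-axis (iterations or epochs); each later entry is
# one model's curve, and his_label names those curves in the same order.
epochs = list(range(1, 6))
loss_model_a = [0.9, 0.6, 0.4, 0.3, 0.25]
loss_model_b = [1.0, 0.7, 0.5, 0.35, 0.30]
his = [epochs, loss_model_a, loss_model_b]
his_label = ['model A', 'model B']

# TrainHistoryPlot(his, his_label, 'loss_history', 'Training loss',
#                  ['epoch', 'loss'])  # would save loss_history.png
```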
# pcmdi_metrics/variability_mode/param/myParam_demo_NAM.py (jasonb5/pcmdi_metrics, BSD-3-Clause)
import datetime
import os
# =================================================
# Background Information
# -------------------------------------------------
mip = "cmip5"
exp = "historical"
frequency = "mo"
realm = "atm"
# =================================================
# Analysis Options
# -------------------------------------------------
variability_mode = "NAM" # Available domains: NAM, NAO, SAM, PNA, PDO
seasons = ["DJF"] # Available seasons: DJF, MAM, JJA, SON, monthly, yearly
ConvEOF = True  # Calculate conventional EOF for model
CBF = True # Calculate Common Basis Function (CBF) for model
# =================================================
# Miscellaneous
# -------------------------------------------------
update_json = True # False
debug = True # False
# =================================================
# Observation
# -------------------------------------------------
reference_data_name = "NOAA-CIRES_20CR"
reference_data_path = os.path.join(
"/p/user_pub/PCMDIobs/PCMDIobs2/atmos/mon/psl/20CR/gn/v20200707",
"psl_mon_20CR_BE_gn_v20200707_187101-201212.nc",
)
varOBS = "psl"
ObsUnitsAdjust = (True, "divide", 100.0) # Pa to hPa; or (False, 0, 0)
osyear = 1900
oeyear = 2005
eofn_obs = 1
# =================================================
# Models
# -------------------------------------------------
modpath = os.path.join(
"/p/css03/cmip5_css02/data/cmip5/output1/CSIRO-BOM/ACCESS1-0/historical/mon/atmos/Amon/r1i1p1/psl/1",
"psl_Amon_ACCESS1-0_historical_r1i1p1_185001-200512.nc",
)
modnames = ["ACCESS1-0"]
realization = "r1i1p1"
varModel = "psl"
ModUnitsAdjust = (True, "divide", 100.0) # Pa to hPa
msyear = 1900
meyear = 2005
eofn_mod = 1
# =================================================
# Output
# -------------------------------------------------
case_id = "{:v%Y%m%d}".format(datetime.datetime.now())
pmprdir = "/p/user_pub/pmp/pmp_results/pmp_v1.1.2"
if debug:
pmprdir = "/work/lee1043/temporary/result_test"
results_dir = os.path.join(
pmprdir,
"%(output_type)",
"variability_modes",
"%(mip)",
"%(exp)",
"%(case_id)",
"%(variability_mode)",
"%(reference_data_name)",
)
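`results_dir` is built from `%(keyword)` placeholders that the PMP driver substitutes at run time; a rough sketch of that substitution idea, with invented fill values (the real driver supplies its own):

```python
# Hypothetical fill values for illustration only.
fills = {'output_type': 'graphics', 'mip': 'cmip5', 'exp': 'historical'}
template = '%(output_type)/variability_modes/%(mip)/%(exp)'

filled = template
for key, value in fills.items():
    filled = filled.replace('%(' + key + ')', value)
# filled == 'graphics/variability_modes/cmip5/historical'
```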
nc_out = True # Write output in NetCDF
plot = True # Create map graphics
| 26.833333 | 105 | 0.501331 | 229 | 2,254 | 4.781659 | 0.593886 | 0.035616 | 0.027397 | 0.020091 | 0.038356 | 0.038356 | 0.038356 | 0 | 0 | 0 | 0 | 0.053971 | 0.12866 | 2,254 | 83 | 106 | 27.156627 | 0.503564 | 0.423691 | 0 | 0 | 0 | 0.020408 | 0.400787 | 0.277953 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.040816 | 0 | 0.040816 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
521e74ef614227491e5aa8b283c8d7e3d6ed4005 | 6,915 | py | Python | wl_sensibility.py | Algue-Rythme/GAT-Skim-Gram | e6e9db5a936e87a2adfdf81a1f00d952d800d1c8 | [
"Apache-2.0"
] | 1 | 2021-10-30T23:19:57.000Z | 2021-10-30T23:19:57.000Z | wl_sensibility.py | Algue-Rythme/GAT-Skim-Gram | e6e9db5a936e87a2adfdf81a1f00d952d800d1c8 | [
"Apache-2.0"
] | null | null | null | wl_sensibility.py | Algue-Rythme/GAT-Skim-Gram | e6e9db5a936e87a2adfdf81a1f00d952d800d1c8 | [
"Apache-2.0"
] | null | null | null | import argparse
from collections import Counter
import hashlib
import random
from joblib import Parallel, delayed
import matplotlib.pyplot as plt
import multiprocessing
import networkx as nx
import numpy as np
from tqdm import tqdm
def get_degrees(graph):
return {node:str(degree) for node, degree in graph.degree}
class WeisfeilerLehmanMachine:
"""
Weisfeiler Lehman feature extractor class.
"""
def __init__(self, graph, iterations):
"""
Initialization method which also executes feature extraction.
:param graph: The Nx graph object.
:param iterations: Number of WL iterations.
"""
self.iterations = iterations
self.graph = graph
self.features = get_degrees(graph)
self.nodes = self.graph.nodes()
self.extracted_features = [str(v) for k, v in self.features.items()]
self.per_stage = []
def do_a_recursion(self):
"""
The method does a single WL recursion.
:return new_features: The hash table with extracted WL features.
"""
new_features = {}
for node in self.nodes:
nebs = self.graph.neighbors(node)
degs = [self.features[neb] for neb in nebs]
features = [str(self.features[node])]+sorted([str(deg) for deg in degs])
features = "_".join(features)
hash_object = hashlib.md5(features.encode())
hashing = hash_object.hexdigest()
new_features[node] = hashing
self.per_stage.append(list(new_features.values()))
self.extracted_features = self.extracted_features + list(new_features.values())
return new_features
def do_recursions(self, per_stage):
"""
The method does a series of WL recursions.
"""
for _ in range(self.iterations):
self.features = self.do_a_recursion()
if not per_stage:
return [Counter(self.extracted_features)]
return [Counter(stage) for stage in self.per_stage]
def wl_procedure(graph, iterations, per_stage):
machine = WeisfeilerLehmanMachine(graph, iterations)
labels = machine.do_recursions(per_stage)
return labels
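The relabeling in `do_a_recursion` — concatenate a node's label with its sorted neighbour labels, then hash — can be shown on a toy graph without networkx (graph and starting labels are made up):

```python
import hashlib

# Tiny undirected path graph a - b - c, with degree labels as in get_degrees.
neighbors = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
labels = {'a': '1', 'b': '2', 'c': '1'}

def wl_step(neighbors, labels):
    new_labels = {}
    for node, nebs in neighbors.items():
        features = [labels[node]] + sorted(labels[neb] for neb in nebs)
        digest = hashlib.md5('_'.join(features).encode()).hexdigest()
        new_labels[node] = digest
    return new_labels

relabeled = wl_step(neighbors, labels)
# 'a' and 'c' see the same local structure ('1' with one '2' neighbour),
# so they receive the same hash; 'b' gets a different one.
```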
def get_graph(graph_type, num_nodes):
if graph_type == 'tree':
return nx.full_rary_tree(3, num_nodes)
elif graph_type == 'barbell':
return nx.barbell_graph(num_nodes // 2, num_nodes // 2)
elif graph_type == 'turan':
return nx.turan_graph(num_nodes, 16)
elif graph_type == 'wheel':
return nx.wheel_graph(num_nodes)
elif graph_type == 'cycle':
return nx.cycle_graph(num_nodes)
elif graph_type == 'ladder':
return nx.ladder_graph(num_nodes // 2)
raise ValueError
def add_random_edge(graph):
[node_1, node_2] = random.sample(list(graph.nodes()), 2)
graph.add_edge(node_1, node_2)
def remove_random_edge(graph):
node_1, node_2 = random.choice(list(graph.edges()))
graph.remove_edge(node_1, node_2)
def add_random_edges(graph, num_random_edges, cpy=False):
if cpy:
graph = nx.Graph(graph)
nodes_1 = random.choices(list(graph.nodes()),k=num_random_edges)
nodes_2 = random.choices(list(graph.nodes()),k=num_random_edges)
graph.add_edges_from([(node_a, node_b) for node_a, node_b in zip(nodes_1, nodes_2)])
return graph
def remove_random_edges(graph, num_random_edges, cpy=False):
if cpy:
graph = nx.Graph(graph)
nodes_1 = random.choices(list(graph.nodes()),k=num_random_edges)
nodes_2 = random.choices(list(graph.nodes()),k=num_random_edges)
graph.remove_edges_from([(node_a, node_b) for node_a, node_b in zip(nodes_1, nodes_2)])
return graph
def get_random_graph(graph_type, num_nodes, a_edges, r_edges):
graph = get_graph(graph_type, num_nodes)
add_random_edges(graph, a_edges)
remove_random_edges(graph, r_edges)
return graph
def get_volume(multiset):
return sum([num for _, num in multiset.items()])
def get_error(multiset_base_lst, multiset_lst):
errors = []
for multiset_base, multiset in zip(multiset_base_lst, multiset_lst):
intersection = multiset & multiset_base
union = multiset | multiset_base
volume_intersection = get_volume(intersection)
volume_union = get_volume(union)
errors.append(volume_intersection / volume_union * 100.)
return np.array(errors)
def progressive_damage_remove(graph_base, iterations, damage_max, per_stage):
multiset_base = wl_procedure(graph_base, iterations, per_stage)
graph = nx.Graph(graph_base)
errors = []
for _ in range(damage_max):
remove_random_edge(graph)
multiset = wl_procedure(graph, iterations, per_stage)
error = get_error(multiset_base, multiset)
errors.append(error)
return np.stack(errors)
def one_damage(graph_type, num_nodes, a_edges, r_edges, iterations, damage_max, per_stage):
graph_base = get_random_graph(graph_type, num_nodes, a_edges, r_edges)
return progressive_damage_remove(graph_base, iterations, damage_max, per_stage)
def experiment(graph_type, num_nodes, a_edges, r_edges, max_steps, iterations, damage_max, per_stage, plot_all=False):
args = [graph_type, num_nodes, a_edges, r_edges, iterations, damage_max, per_stage]
workers = max(multiprocessing.cpu_count()-2, 1)
errors_lst = Parallel(n_jobs=workers)(delayed(one_damage)(*args) for _ in tqdm(range(max_steps)))
if plot_all:
for error in errors_lst:
print(error)
error_avg = np.mean(errors_lst, axis=0).transpose()
print('Average\n', error_avg)
return error_avg
if __name__ == '__main__':
    seed = random.randint(1, 1000 * 1000)
    print('Seed used: %d' % seed)
    random.seed(seed)
    np.random.seed(seed + 139)
parser = argparse.ArgumentParser()
parser.add_argument('--graph_type', default='tree')
parser.add_argument('--num_nodes', type=int, default=500)
parser.add_argument('--a_edges', type=int, default=0)
parser.add_argument('--r_edges', type=int, default=0)
parser.add_argument('--max_steps', type=int, default=500)
parser.add_argument('--iterations', type=int, default=2)
parser.add_argument('--damage_max', type=int, default=10)
parser.add_argument('--per_stage', action='store_true')
args = parser.parse_args()
np.set_printoptions(precision=2)
error_avg = experiment(args.graph_type, args.num_nodes, args.a_edges, args.r_edges,
args.max_steps, args.iterations, args.damage_max, args.per_stage)
for iteration in range(error_avg.shape[0]):
plt.plot(np.arange(1, error_avg.shape[1]+1), error_avg[iteration],
marker='o', label='stage '+str(iteration+1))
plt.xlabel('Number of edges removed')
plt.ylabel('Percentage of labels in common')
plt.title('%s graph'%args.graph_type)
plt.axis([0, error_avg.shape[1]+1, 0, 100])
plt.legend()
plt.show()
# pythonblockchain/blockchain.py (Alpha5714/mlh-team-mbm, MIT)
from hashlib import sha256
from tkinter import *
import time
import sys
import webbrowser
LastHash = ""
try:
    open('HASH.dat', 'x').write('')
except FileExistsError:
    pass
#global difficulty
difficulty=1
#global index
index=[]
class block:
def __init__(self,data):
self.timestamp=self.timestampee()
self.LastHash=LastHash
self.data=data
a=[]
a=block.Hashe(data,LastHash,self.timestamp)
self.Hash=str(a[0])
self.epoch=str(a[1])
block.addBlock(self.data,self.Hash,self.LastHash,self.timestamp,self.epoch)
def reset_hash():
global LastHash
LastHash= str(open("HASH.dat",'r').read())
def timestampee(self):
a=int(float(time.time()*1000))
a=str(a)
return a
def printblock(timestamp,LastHash,Hash,data,epoch):
a=str("Block was added with \n Timestamp=%s \n lastHash= %s \n Hash=%s \n Data= %s \n epoch= %s \n \n \n"%
(str(timestamp),str(LastHash),str(Hash)[0:15],str(data),str(epoch)))
open('HASH.dat','w').write(str(Hash[0:15]))
block.reset_hash()
return a
def toHex(x):
x=x.encode('utf-32')
x=(sha256(x)).hexdigest()
return x
def Hashe(data,LastHash,timestamp):
epoch=0
#global Hash
Hash=""
temp=" "
a=""
while(temp[0:difficulty]!=a):
timestamp=block.timestampee(3)
temp=(''+str(data)+str(LastHash)+str(timestamp)+'')
cond2=str(block.toHex(("0"*difficulty)))
temp=str(block.toHex(temp))
a= str(block.toHex("0"*difficulty))
a=a[0:difficulty]
epoch=epoch+1
Hash=str(temp)
return [Hash,epoch]
def genesis():
data='1st Block'
Hash="f1r5t Ha5h"
LastHash=""
timestamp="Genesis Time"
epoch=0
block.addBlock(data,Hash,LastHash,timestamp,epoch)
def addBlock(data,Hash,LastHash,timestamp,epoch):
gene=block.printblock(timestamp,LastHash,Hash,data,epoch)
index.append(gene)
def write_html(data):
htmlstart=str(open("toread.html","r").read())
#now writing in the web
htmlend=str(open('toend.html','r').read())
total=""
open('temp_data.html','w').write(htmlstart)
for i in range(0,20):
if (i%2!=0 or i==0):
total=str(""+str(index[i]))
total=total.replace('\n','<br>')
open('temp_data.html','a').write(total)
total=""
open('temp_data.html','a').write(htmlend)
def runhtml():
webbrowser.open('temp_data.html')
block.genesis()
for i in range(1,20):
data=("THIS IS BLOCK NO. "+str(i))
index.append(str(block(data))+"")
for j in range(0,20):
if (j%2!=0 or j==0):
write_html(index[j])
runhtml()
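The `Hashe` loop above keeps rehashing until the digest's prefix matches a difficulty target; the same proof-of-work idea in a self-contained sketch (using a plain hex-prefix target rather than the double-hash comparison used above):

```python
from hashlib import sha256

def mine(data, difficulty):
    """Increment a nonce until sha256(data + nonce) starts with
    `difficulty` zero hex digits; returns (digest, nonce)."""
    target = '0' * difficulty
    nonce = 0
    while True:
        digest = sha256(f'{data}{nonce}'.encode()).hexdigest()
        if digest.startswith(target):
            return digest, nonce
        nonce += 1

digest, nonce = mine('THIS IS BLOCK NO. 1', 2)
# digest starts with '00'; each extra difficulty digit multiplies the work by 16
```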
# model/model/DenoiseBert.py (xcjthu/DisputeMJJD, MIT)
import torch
import torch.nn as nn
import torch.nn.functional as F
import json
from transformers import BertModel
from tools.accuracy_tool import multi_label_accuracy, single_label_top1_accuracy
class DenoiseBert(nn.Module):
def __init__(self, config, gpu_list, *args, **params):
super(DenoiseBert, self).__init__()
self.encoder = BertModel.from_pretrained('bert-base-chinese')
self.hidden_size = 768
self.score = nn.Linear(self.hidden_size, 1)
self.criterion = nn.CrossEntropyLoss()
self.accuracy_function = single_label_top1_accuracy
def init_multi_gpu(self, device, config, *args, **params):
return
def forward_test(self, data):
inputx = data['inputx'] # batch_size, seq_len
mask = data['mask']
_, bcls = self.encoder(inputx, attention_mask=mask)
score = self.score(bcls).squeeze(1)
return {"loss": 0, "output": list(zip(data['ids'], score.tolist()))}
def forward(self, data, config, gpu_list, acc_result, mode):
if mode == 'test':
return self.forward_test(data)
inputx = data['inputx'] # batch, seq_len
neginputx = data['neginputx']
mask = data['mask']
negmask = data['negmask']
batch = inputx.shape[0]
_, bcls = self.encoder(torch.cat([inputx, neginputx], dim = 0), attention_mask=torch.cat([mask, negmask], dim = 0))
score = self.score(bcls).squeeze(1)
pscore = score[:batch]
nscore = score[batch:]
#print(pscore.shape)
#print(nscore.shape)
scoreMat = torch.cat([pscore.unsqueeze(1), nscore.unsqueeze(0).repeat(batch, 1)], dim = 1) # batch, batch+1
loss = self.criterion(scoreMat, data['label'])
acc_result = accuracy(scoreMat, data["label"], config, acc_result)
return {"loss": loss, "acc_result": acc_result}
def accuracy(score, label, config, acc_result):
if acc_result is None:
acc_result = {'right': 0, 'pre_num': 0, 'actual_num': 0}
predict = torch.max(score, dim=1)[1]
acc_result['pre_num'] += int(score.shape[0])
acc_result['right'] += int((predict == 0).int().sum())
acc_result['actual_num'] += int(score.shape[0])
return acc_result
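In `forward`, each positive score is stacked with the batch of negative scores so the cross-entropy target is always index 0 (which is also what `accuracy` checks with `predict == 0`). The scoring idea without torch, with invented scores:

```python
import math

def contrastive_loss(pos_score, neg_scores):
    """Softmax cross-entropy where index 0 (the positive) is the target."""
    logits = [pos_score] + list(neg_scores)
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - pos_score  # -log softmax probability of index 0

loss_good = contrastive_loss(5.0, [1.0, 0.5])  # positive clearly wins -> small loss
loss_bad = contrastive_loss(0.0, [5.0, 4.0])   # negatives win -> large loss
```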
# computer-vision/image-classification/mnist_rmdl/cnn.py (tyburam/paperswithcode, MIT)
import tensorflow as tf
import random
from tensorflow.keras.layers import Flatten, Dense, Dropout, Conv2D, MaxPooling2D
from tensorflow.keras.constraints import MaxNorm
class CNN(tf.keras.Model):
def __init__(self, shape, number_of_classes, min_hidden_layer_cnn=3, max_hidden_layer_cnn=10,
min_nodes_cnn=128, max_nodes_cnn=512, dropout=0.05):
super(CNN, self).__init__()
values = list(range(min_nodes_cnn, max_nodes_cnn))
l_values = list(range(min_hidden_layer_cnn, max_hidden_layer_cnn))
n_layers = random.choice(l_values)
conv_count = random.choice(values)
self.conv0 = Conv2D(conv_count, (3, 3), padding='same', input_shape=shape, activation='relu')
self.conv1 = Conv2D(conv_count, (3, 3), activation='relu')
self.n_conv = []
for i in range(n_layers):
conv_count = random.choice(values)
self.n_conv.append(Conv2D(conv_count, (3, 3), padding='same', activation='relu'))
self.n_conv.append(MaxPooling2D(pool_size=(2, 2)))
self.n_conv.append(Dropout(dropout))
self.flat = Flatten()
self.d0 = Dense(256, activation='relu')
self.drop = Dropout(dropout)
self.d1 = Dense(number_of_classes, activation='softmax', kernel_constraint=MaxNorm(3))
def call(self, x):
x = self.conv0(x)
x = self.conv1(x)
for i in range(len(self.n_conv)):
x = self.n_conv[i](x)
x = self.flat(x)
x = self.d0(x)
x = self.drop(x)
return self.d1(x)
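The constructor draws the depth and widths of the convolutional stack at random, in the spirit of RMDL's random model search; the sampling step on its own (ranges copied from the defaults above, helper name is mine):

```python
import random

def sample_architecture(min_layers=3, max_layers=10,
                        min_nodes=128, max_nodes=512, seed=None):
    """Pick a random depth and a random filter count per conv layer,
    mirroring how CNN.__init__ uses random.choice over these ranges."""
    rng = random.Random(seed)
    n_layers = rng.choice(range(min_layers, max_layers))
    filters = [rng.choice(range(min_nodes, max_nodes)) for _ in range(n_layers)]
    return n_layers, filters

n_layers, filters = sample_architecture(seed=0)
```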
# python/cl1de_plot_utilities.py (joshuahansel/cl1de, MIT)
from file_utilities import readCSVFile
from PlotterLine import PlotterLine
def plotDataSets(data_sets, data_names):
for var in data_names:
desc, symbol = data_names[var]
plotter = PlotterLine("$x$", desc + ", $" + symbol + "$")
for i, data_set in enumerate(data_sets):
set_name, data = data_set
plotter.addSet(data["x"], data[var], set_name, color=i)
plotter.save(var + ".png")
euler1phase_data_names = {
"r": ("Density", "\\rho"),
"u": ("Velocity", "u"),
"p": ("Pressure", "p")
}
def plotEuler1PhaseDataSets(data_sets):
plotDataSets(data_sets, euler1phase_data_names)
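`plotDataSets` expects `data_sets` as `(set_name, data)` pairs, where each `data` dict maps `"x"` and every key of `data_names` to value arrays; a sketch of those shapes with made-up numbers:

```python
# Two hypothetical solution sets on the same grid.
data_exact = {'x': [0.0, 0.5, 1.0], 'r': [1.0, 0.8, 0.6],
              'u': [0.0, 0.2, 0.4], 'p': [1.0, 0.9, 0.7]}
data_approx = {'x': [0.0, 0.5, 1.0], 'r': [1.0, 0.79, 0.61],
               'u': [0.0, 0.21, 0.39], 'p': [1.0, 0.88, 0.71]}
data_sets = [('exact', data_exact), ('approx', data_approx)]

# plotEuler1PhaseDataSets(data_sets)  # would emit r.png, u.png, p.png
```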
#!/usr/bin/env python
# runtests.py (dtisza1/bluebutton-web-server, Apache-2.0)
import argparse
import django
import os
import sys
from django.conf import settings
from django.test.utils import get_runner
'''
Reference: https://docs.djangoproject.com/en/3.0/topics/testing/advanced/#defining-a-test-runner
Command line arguments:
--integration This optional flag indicates tests are to run in integration test mode.
Space separated list of Django tests to run.
For example:
$ docker-compose exec web python runtests.py apps.dot_ext.tests
For more specific use:
$ docker-compose exec web python runtests.py apps.dot_ext.tests.test_templates
For a single test:
$ docker-compose exec web python runtests.py apps.dot_ext.tests.\
test_templates.TestDOTTemplates.test_application_list_template_override
For multiple arguments:
$ docker-compose exec web python runtests.py apps.dot_ext.tests apps.accounts.tests.test_login
'''
# Parse command line arguments
parser = argparse.ArgumentParser()
parser.add_argument('--integration', help='Integration tests mode', action='store_true')
parser.add_argument('test', nargs='*')
args = parser.parse_args()
if args.integration:
# Unset ENV variables for integration type tests so default values get set.
for env_var in ['DJANGO_MEDICARE_SLSX_LOGIN_URI', 'DJANGO_MEDICARE_SLSX_REDIRECT_URI',
'DJANGO_SLSX_USERINFO_ENDPOINT', 'DJANGO_SLSX_TOKEN_ENDPOINT',
'DJANGO_SLSX_HEALTH_CHECK_ENDPOINT',
'DATABASES_CUSTOM', 'DJANGO_LOG_JSON_FORMAT_PRETTY']:
if env_var in os.environ:
del os.environ[env_var]
else:
# Unset ENV variables for Django unit type tests so default values get set.
for env_var in ['FHIR_URL', 'DJANGO_MEDICARE_SLSX_LOGIN_URI', 'DJANGO_MEDICARE_SLSX_REDIRECT_URI',
'DJANGO_SLSX_USERINFO_ENDPOINT', 'DJANGO_SLSX_TOKEN_ENDPOINT',
'DJANGO_SLSX_HEALTH_CHECK_ENDPOINT',
'DJANGO_FHIR_CERTSTORE', 'DATABASES_CUSTOM', 'DJANGO_LOG_JSON_FORMAT_PRETTY',
'DJANGO_USER_ID_ITERATIONS', 'DJANGO_USER_ID_SALT']:
if env_var in os.environ:
del os.environ[env_var]
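Both branches above clear environment variables with an `in` check followed by `del`; `os.environ.pop(var, None)` does the same in one step and is safe when the variable is absent (a sketch on a throwaway variable):

```python
import os

os.environ['DEMO_VAR'] = 'x'
os.environ.pop('DEMO_VAR', None)   # removes it
os.environ.pop('DEMO_VAR', None)   # no-op when already absent
present = 'DEMO_VAR' in os.environ
```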
if __name__ == '__main__':
os.environ['DJANGO_SETTINGS_MODULE'] = 'hhs_oauth_server.settings.test'
django.setup()
TestRunner = get_runner(settings)
test_runner = TestRunner()
# Is there a list of specific tests to run?
if args.test:
failures = test_runner.run_tests(args.test)
else:
failures = test_runner.run_tests(None)
sys.exit(bool(failures))
# dockerfiles/tasks.py (lexeii/readthedocs.org, MIT)
from invoke import task
DOCKER_COMPOSE = 'docker-compose.yml'
DOCKER_COMPOSE_SEARCH = 'docker-compose-search.yml'
DOCKER_COMPOSE_COMMAND = f'docker-compose -f {DOCKER_COMPOSE} -f {DOCKER_COMPOSE_SEARCH}'
@task
def build(c):
"""Build docker image for servers."""
c.run(f'{DOCKER_COMPOSE_COMMAND} build --no-cache', pty=True)
@task
def down(c, volumes=False):
"""Stop and remove all the docker containers."""
if volumes:
c.run(f'{DOCKER_COMPOSE_COMMAND} down -v', pty=True)
else:
c.run(f'{DOCKER_COMPOSE_COMMAND} down', pty=True)
@task
def up(c, no_search=False, init=False):
"""Start all the docker containers for a Read the Docs instance"""
INIT = ''
if init:
INIT = 'INIT=t'
if no_search:
c.run(f'{INIT} docker-compose -f {DOCKER_COMPOSE} up', pty=True)
else:
c.run(f'{INIT} {DOCKER_COMPOSE_COMMAND} up', pty=True)
@task
def shell(c, running=False, container='web'):
"""Run a shell inside a container."""
if running:
c.run(f'{DOCKER_COMPOSE_COMMAND} exec {container} /bin/bash', pty=True)
else:
c.run(f'{DOCKER_COMPOSE_COMMAND} run --rm {container} /bin/bash', pty=True)
@task
def manage(c, command):
"""Run manage.py with a specific command."""
c.run(f'{DOCKER_COMPOSE_COMMAND} run --rm web python3 manage.py {command}', pty=True)
@task
def attach(c, container):
"""Attach a tty to a running container (useful for pdb)."""
c.run(f'docker attach readthedocsorg_{container}_1', pty=True)
@task
def restart(c, containers):
"""Restart one or more containers."""
c.run(f'{DOCKER_COMPOSE_COMMAND} restart {containers}', pty=True)
# When restarting a container that nginx is connected to, we need to restart
# nginx as well because it has the IP cached
    need_nginx_restart = [
        'web',
        'proxito',
        'storage',
    ]
for extra in need_nginx_restart:
if extra in containers:
c.run(f'{DOCKER_COMPOSE_COMMAND} restart nginx', pty=True)
break
@task
def pull(c):
"""Pull all docker images required for build servers."""
images = [
('4.0', 'stable'),
('5.0', 'latest'),
]
for image, tag in images:
c.run(f'docker pull readthedocs/build:{image}', pty=True)
c.run(f'docker tag readthedocs/build:{image} readthedocs/build:{tag}', pty=True)
@task
def test(c, arguments=''):
"""Run all test suite."""
c.run(f'{DOCKER_COMPOSE_COMMAND} run --rm --no-deps web tox {arguments}', pty=True)
# src/msbuilder.py (neobepmat/BatchBuilder, MIT)
# Generic build script that builds, tests, and creates nuget packages.
#
# INSTRUCTIONS:
# Update the following project paths:
# proj Path to the project file (.csproj)
# test Path to the test project (.csproj)
# nuspec Path to the package definition for NuGet.
#
# delete any of the lines if not applicable
#
#
# Update the paths to the build tools:
# msbuild Path to msbuild
# msconfig Configuration name to be built
# test Path to mstest.exe (requires Visual Studio) (optional - delete line)
# nuget Path to nuget.exe (requires NuGet command line tool) (optional - delete line)
# trx2html Path to trx2html.exe (http://trx2html.codeplex.com/) (optional - delete line)
#
#
# USAGE:
#
# proj = r'path to project (.csproj)'
# test = r'path to project containing test (.csproj)'
# nuspec = r'path to nuspec definition (.nuspec)'
#
#
# msbuild = r'C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe'
# mstest = r'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe'
# nuget = r'C:\BuildTools\nuget\2199eada12ce\nuget.exe'
# trx2html = r'C:\BuildTools\trx2html\0.6\trx2html.exe'
#
# bld = MsBuilder(msbuild=msbuild, mstest=mstest, nuget=nuget, trx2html=trx2html)
# bld.run(proj, test, nuspec)
#
import os, shlex, subprocess, re, datetime
from platform import platform
class MsBuilder:
    def __init__(self, msbuild=None, msconfig=None, msTarget=None, msplatform=None, mstest=None, nuget=None, trx2html=None):
        # Default msbuild.exe path for the installed .NET framework version;
        # callers may pass an explicit path to override it
if msbuild==None:
self.msbuild = r'C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe'
else:
self.msbuild = msbuild
        # Path to mstest (this requires VS2010 to be installed)
if mstest==None:
self.mstest = r'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe'
else:
self.mstest = mstest
# Path to nuget packager
if nuget==None:
self.nuget = r'C:\BuildTools\nuget\2199eada12ce\nuget.exe'
else:
self.nuget = nuget
# Path to trx2html transformation tool
if trx2html==None:
self.trx2html = r'C:\BuildTools\trx2html\0.6\trx2html.exe'
else:
self.trx2html = trx2html
# Path to trx2html transformation tool
if msconfig==None:
self.msconfig = r'Release'
else:
self.msconfig = msconfig
if msplatform==None:
self.msplatform = r'x86'
else:
self.msplatform = msplatform
if msTarget==None:
self.mstarget = r'Release'
else:
self.mstarget = msTarget
def build(self, projPath):
# Ensure msbuild exists
if not os.path.isfile(self.msbuild):
raise Exception('MsBuild.exe not found. path=' + self.msbuild)
arg1 = '/t:' + self.mstarget
arg2 = '/p:Configuration=' + self.msconfig
arg3 = '/p:Platform=' + self.msplatform
p = subprocess.call([self.msbuild, projPath, arg1, arg2, arg3])
if p==1: return False # exit early
return True
def test(self, testProject):
if not os.path.isfile(self.msbuild):
raise Exception('MsBuild.exe not found. path=' + self.msbuild)
if not os.path.isfile(self.mstest):
raise Exception('MsTest.exe not found. path=' + self.mstest)
# build the test project
arg1 = '/t:Rebuild'
arg2 = '/p:Configuration=Release'
p = subprocess.call([self.msbuild, testProject, arg1, arg2])
# find the test dll
f = open(testProject)
xml = f.read()
f.close()
match = re.search(r'<AssemblyName>(.*)</AssemblyName>', xml)
if not match:
print( 'Could not find "AssemblyName" in test project file.')
return False
outputFolder = os.path.dirname(testProject) + '\\bin\\Release\\'
dll = outputFolder + match.groups()[0] + '.dll'
resultFile = outputFolder + 'testResults.trx'
if os.path.isfile(resultFile):
os.remove(resultFile)
# execute the tests in the container
arg1 = '/testcontainer:' + dll
arg2 = '/resultsfile:' + resultFile
p = subprocess.call([self.mstest, arg1, arg2])
# convert the results file
if os.path.isfile(self.trx2html):
subprocess.call([self.trx2html, resultFile])
else:
print( 'TRX to HTML converter not found. path=' + self.trx2html)
if p==1: return False # exit early
return True
def pack(self, packageSpec, version = '0.0.0.0'):
if not os.path.isfile(self.nuget):
raise Exception('Nuget.exe not found. path=' + self.nuget)
outputFolder = os.path.dirname(packageSpec) + '\\artifacts\\'
if not os.path.exists(outputFolder):
os.makedirs(outputFolder)
p = subprocess.call([self.nuget, 'pack', packageSpec, '-Version', version, '-Symbols', '-o', outputFolder])
if p==1: return False #exit early
return True
def validate(self, projectPath):
packFile = os.path.dirname(projectPath) + '\\packages.config'
if os.path.isfile(packFile):
f = open(packFile)
xml = f.read()
f.close()
print( xml)
match = re.search(r'version="0.0.0.0"', xml)
if match:
# Found a non-versioned package being used by this project
return False
else:
print( 'No "packages.config" file was found. path=' + packFile)
return True
def run(self, proj=None, test=None, nuspec=None):
summary = '';
# File header
start = datetime.datetime.now()
print( '\n'*5)
summary += self.log('STARTED BUILD - ' + start.strftime("%Y-%m-%d %H:%M:%S"))
# Build the project
if proj is not None:
buildOk = self.build(proj)
if not buildOk:
self.log('BUILD: FAILED', start)
sys.exit(100)
summary += self.log('BUILD: SUCCEEDED', start)
else:
summary += self.log('BUILD: NOT SPECIFIED')
# Build the tests and run them
if test is not None:
testOk = self.test(test)
if not testOk:
print( self.log('TESTS: FAILED', start))
sys.exit(100)
summary += self.log('TESTS: PASSED', start)
else:
summary += self.log('TESTS: NOT SPECIFIED')
# Package up the artifacts
if nuspec is not None:
packOk = self.pack(nuspec, '0.0.0.0')
if not packOk:
print( self.log('NUGET PACK: FAILED', start))
sys.exit(100)
summary += self.log('NUGET PACK: SUCCEEDED', start)
else:
summary += self.log('NUGET PACK: NOT SPECIFIED')
# Validate dependencies
if not self.validate(proj):
print( self.log('DEPENDENCIES: NOT VALIDATED - DETECTED UNVERSIONED DEPENDENCY', start))
sys.exit(100)
summary += self.log('DEPENDENCIES: VALIDATED', start)
# Build footer
stop = datetime.datetime.now()
diff = stop - start
summary += self.log('FINISHED BUILD', start)
# Build summary
print( '\n\n' + '-'*80)
print( summary)
print( '-'*80)
def log(self, message, start=None):
timestamp = ''
numsecs = ''
if start is not None:
split = datetime.datetime.now()
diff = split - start
timestamp = split.strftime("%Y-%m-%d %H:%M:%S") + '\t'
numsecs = ' (' + str(diff.seconds) + ' seconds)'
msg = timestamp + message + numsecs + '\n\n'
print( '='*10 + '> ' + msg)
return msg | 31.286957 | 124 | 0.645914 | 953 | 7,196 | 4.873033 | 0.225603 | 0.018088 | 0.027132 | 0.011843 | 0.257321 | 0.219208 | 0.178941 | 0.166667 | 0.125969 | 0.108312 | 0 | 0.020889 | 0.221651 | 7,196 | 230 | 125 | 31.286957 | 0.808249 | 0.25264 | 0 | 0.232394 | 0 | 0.007042 | 0.205969 | 0.043589 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049296 | false | 0.007042 | 0.014085 | 0 | 0.119718 | 0.084507 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
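The build(), test() and pack() methods above only treat an exit code of exactly 1 as failure (`if p==1`), but command-line tools such as MSBuild can return other nonzero codes. A minimal sketch of a stricter check, where the helper name `run_tool` is ours and the Python interpreter stands in for the build tool so the example runs anywhere:

```python
import subprocess
import sys

def run_tool(cmd):
    """Run a command and report failure for any nonzero exit code."""
    p = subprocess.call(cmd)
    return p == 0  # 0 is the only success code; 1, 2, ... all mean failure

# Invoke the Python interpreter instead of MSBuild so the sketch stays portable.
ok = run_tool([sys.executable, "-c", "print('building...')"])
failed = run_tool([sys.executable, "-c", "import sys; sys.exit(2)"])
```

With the original `p==1` test, the second command (exit code 2) would incorrectly be reported as a success.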
522a4d325c9353e0142c370ba6f40b501dc79e45 | 1,664 | py | Python | Python/Canvas/HotkeyEditor/HotkeyStyledItemDelegate.py | yoann01/FabricUI | d4d24f25245b8ccd2d206aded2b6c5f2aca09155 | [
"BSD-3-Clause"
] | null | null | null | Python/Canvas/HotkeyEditor/HotkeyStyledItemDelegate.py | yoann01/FabricUI | d4d24f25245b8ccd2d206aded2b6c5f2aca09155 | [
"BSD-3-Clause"
] | null | null | null | Python/Canvas/HotkeyEditor/HotkeyStyledItemDelegate.py | yoann01/FabricUI | d4d24f25245b8ccd2d206aded2b6c5f2aca09155 | [
"BSD-3-Clause"
] | null | null | null | #
# Copyright (c) 2010-2017 Fabric Software Inc. All rights reserved.
#
from PySide import QtCore, QtGui
from FabricEngine.Canvas.Utils import *
class HotkeyStyledItemDelegate(QtGui.QStyledItemDelegate):
keyPressed = QtCore.Signal(QtGui.QKeySequence)
def __init__(self, parent=None):
super(HotkeyStyledItemDelegate, self).__init__(parent)
self.parent = parent
def createEditor(self, parent, option, index):
self.editor = QtGui.QLineEdit(parent)
self.editor.setFrame(False)
self.editor.installEventFilter(self)
return self.editor
def setEditorData(self, editor, index):
value = index.model().data(index, QtCore.Qt.EditRole)
editor.setText(value)
def setModelData(self, editor, model, index):
value = editor.text()
model.setData(index, value, QtCore.Qt.EditRole)
def updateEditorGeometry(self, editor, option, index):
editor.setGeometry(option.rect)
def eventFilter(self, target, event):
if target is self.editor:
if event.type() == QtCore.QEvent.KeyPress:
# Gets the sequence from the event.
keySequence = GetQKeySequenceFromQKeyEvent(event)
if keySequence is not None:
self.keyPressed.emit(keySequence)
return True
if event.type() == QtCore.QEvent.MouseButtonPress:
return True
if event.type() == QtCore.QEvent.MouseButtonDblClick:
return True
if event.type() == QtCore.QEvent.MouseMove:
return True
return False
| 31.396226 | 67 | 0.626803 | 168 | 1,664 | 6.160714 | 0.422619 | 0.077295 | 0.042512 | 0.0657 | 0.117874 | 0.095652 | 0.095652 | 0 | 0 | 0 | 0 | 0.006717 | 0.284255 | 1,664 | 52 | 68 | 32 | 0.862301 | 0.059495 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.058824 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
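The eventFilter above converts key presses into a signal and swallows all mouse events so they never reach the editor. Its accept/consume decision can be mirrored without Qt as a plain predicate; the event-type names below are illustrative stand-ins for the QtCore.QEvent constants:

```python
def filter_event(event_type):
    """Return True when the delegate's editor should consume the event."""
    consumed = {"KeyPress", "MouseButtonPress", "MouseButtonDblClick", "MouseMove"}
    return event_type in consumed

# Paint-like events fall through to the editor; key and mouse events are eaten.
handled = [e for e in ["KeyPress", "Paint", "MouseMove"] if filter_event(e)]
```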
522c1ac5060d0dd5467b006cd94c060a4be7d8c8 | 1,423 | py | Python | hs_core/management/commands/remove_auto_generated_generic_metadata.py | tommac7/hydroshare | 87c4543a55f98103d2614bf4c47f7904c3f9c029 | [
"BSD-3-Clause"
] | 178 | 2015-01-08T23:03:36.000Z | 2022-03-03T13:56:45.000Z | hs_core/management/commands/remove_auto_generated_generic_metadata.py | tommac7/hydroshare | 87c4543a55f98103d2614bf4c47f7904c3f9c029 | [
"BSD-3-Clause"
] | 4,125 | 2015-01-01T14:26:15.000Z | 2022-03-31T16:38:55.000Z | hs_core/management/commands/remove_auto_generated_generic_metadata.py | tommac7/hydroshare | 87c4543a55f98103d2614bf4c47f7904c3f9c029 | [
"BSD-3-Clause"
] | 53 | 2015-03-15T17:56:51.000Z | 2022-03-17T00:32:16.000Z | """Removes unmodified GenericLogicalFiles found in composite resources. This functionality is
to remove unused GenericLogicalFiles that were created by an earlier iteration of CompositeResource
that created an aggregation for every file added to a resource.
"""
from django.core.management.base import BaseCommand
from hs_composite_resource.models import CompositeResource
from hs_file_types.models.generic import GenericLogicalFile
class Command(BaseCommand):
help = "Removes auto generated single file aggregations in the composite resource"
def handle(self, *args, **options):
resource_counter = 0
generic_files = 0
unmodified_generic_files_removed_counter = 0
for res in CompositeResource.objects.all():
resource_counter += 1
for file in res.files.all():
if type(file.logical_file) is GenericLogicalFile:
generic_files += 1
if not file.logical_file.metadata.has_modified_metadata:
file.logical_file.remove_aggregation()
unmodified_generic_files_removed_counter += 1
print(">> {} COMPOSITE RESOURCES PROCESSED.".format(resource_counter))
print(">> {} TOTAL GENERIC FILES FOUND".format(generic_files))
print(">> {} TOTAL UNMODIFIED GENERIC FILES REMOVED"
.format(unmodified_generic_files_removed_counter))
| 43.121212 | 99 | 0.700632 | 159 | 1,423 | 6.09434 | 0.45912 | 0.099071 | 0.090815 | 0.119711 | 0.111455 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005505 | 0.234013 | 1,423 | 32 | 100 | 44.46875 | 0.883486 | 0.179902 | 0 | 0 | 0 | 0 | 0.158621 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.142857 | 0 | 0.285714 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
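The handle() loop above reduces to: for every file whose logical file is a plain GenericLogicalFile with unmodified metadata, remove the aggregation and count it. That filtering can be sketched with plain objects, where `FileStub` and `prune` are hypothetical stand-ins for HydroShare's resource files and the removal loop:

```python
from dataclasses import dataclass

@dataclass
class FileStub:
    is_generic: bool
    has_modified_metadata: bool
    removed: bool = False

def prune(files):
    """Mark unmodified generic files as removed; return how many were pruned."""
    removed = 0
    for f in files:
        if f.is_generic and not f.has_modified_metadata:
            f.removed = True  # stands in for logical_file.remove_aggregation()
            removed += 1
    return removed

files = [FileStub(True, False), FileStub(True, True), FileStub(False, False)]
count = prune(files)
```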
523161b039f90859bbeabd886de8c34f480f8c75 | 1,385 | py | Python | projects/EulerMethods/backward_euler_ode_solver.py | brtymn/python-mini-projects | 25c48a0cb2f374a718f85ddee585e87797070b01 | [
"MIT"
] | null | null | null | projects/EulerMethods/backward_euler_ode_solver.py | brtymn/python-mini-projects | 25c48a0cb2f374a718f85ddee585e87797070b01 | [
"MIT"
] | null | null | null | projects/EulerMethods/backward_euler_ode_solver.py | brtymn/python-mini-projects | 25c48a0cb2f374a718f85ddee585e87797070b01 | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import numpy as np
def feval(funcName, *args):
    # eval() resolves the function object from its name string; pass trusted names only.
    return eval(funcName)(*args)
def backwardEuler(func, yinit, x_range, h):
m = len(yinit)
n = int((x_range[-1] - x_range[0])/h)
x = x_range[0]
y = yinit
xsol = np.empty(0)
xsol = np.append(xsol, x)
ysol = np.empty(0)
ysol = np.append(ysol, y)
for i in range(n):
        # Note: dividing f(x+h, y) by (1+h) solves the implicit update in closed form,
        # which is exact only for linear ODEs of the form y' = g(x) - y (as in myFunc).
yprime = feval(func, x+h, y)/(1+h)
for j in range(m):
y[j] = y[j] + h*yprime[j]
x += h
xsol = np.append(xsol, x)
for r in range(len(y)):
ysol = np.append(ysol, y[r]) # Saves all new y's
return [xsol, ysol]
def myFunc(x, y):
'''
We define our ODEs in this function.
'''
dy = np.zeros((len(y)))
dy[0] = 3*(1+x) - y[0]
return dy
h = 0.2
x = np.array([1.0, 2.0])
yinit = np.array([4.0])
[ts, ys] = backwardEuler('myFunc', yinit, x, h)
# Calculates the exact solution, for comparison
dt = int((x[-1] - x[0]) / h)
t = [x[0]+i*h for i in range(dt+1)]
yexact = []
for i in range(dt+1):
ye = 3 * t[i] + np.exp(1 - t[i])
yexact.append(ye)
plt.plot(ts, ys, 'r')
plt.plot(t, yexact, 'b')
plt.xlim(x[0], x[1])
plt.legend(["Backward Euler method",
"Exact solution manually computed"], loc = 2)
plt.xlabel('x', fontsize = 10)
plt.ylabel('y', fontsize = 10)
plt.tight_layout()
plt.show()
| 19.507042 | 61 | 0.548014 | 244 | 1,385 | 3.090164 | 0.336066 | 0.046419 | 0.023873 | 0.043767 | 0.127321 | 0.037135 | 0 | 0 | 0 | 0 | 0 | 0.031403 | 0.26426 | 1,385 | 70 | 62 | 19.785714 | 0.708538 | 0.072924 | 0 | 0.044444 | 0 | 0 | 0.049724 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.044444 | 0.022222 | 0.177778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
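The solver above folds the implicit backward-Euler update into a division by (1+h), which is exact only for right-hand sides of the form y' = g(x) - y. A generic backward-Euler step instead solves the implicit equation numerically; the sketch below (function names are ours) uses simple fixed-point iteration, which converges when h times the Lipschitz constant of f in y is below 1:

```python
def backward_euler_step(f, x, y, h, iters=50):
    """Solve y_new = y + h*f(x+h, y_new) by fixed-point iteration."""
    y_new = y  # initial guess: the current value
    for _ in range(iters):
        y_new = y + h * f(x + h, y_new)
    return y_new

def solve(f, y0, x0, x1, h):
    x, y = x0, y0
    while x < x1 - 1e-12:
        y = backward_euler_step(f, x, y, h)
        x += h
    return y

# y' = -y, y(0) = 1 has the exact solution e**(-x); compare at x = 1.
approx = solve(lambda x, y: -y, 1.0, 0.0, 1.0, 0.01)
err = abs(approx - 2.718281828459045 ** -1)
```

For this linear test problem the iteration reproduces the closed-form update y_{n+1} = y_n/(1+h) to machine precision, so the remaining error is the method's usual O(h) discretization error.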
5231f3068efad280dae966b4d5266edbdafdb2b4 | 7,531 | py | Python | jans-linux-setup/jans_setup/setup_app/installers/oxd.py | nikdavnik/jans | 5e9abc74cca766a066512eab2aca6563ce480bff | [
"Apache-2.0"
] | 18 | 2022-01-13T13:45:13.000Z | 2022-03-30T04:41:18.000Z | jans-linux-setup/jans_setup/setup_app/installers/oxd.py | nikdavnik/jans | 5e9abc74cca766a066512eab2aca6563ce480bff | [
"Apache-2.0"
] | 604 | 2022-01-13T12:32:50.000Z | 2022-03-31T20:27:36.000Z | jans-linux-setup/jans_setup/setup_app/installers/oxd.py | nikdavnik/jans | 5e9abc74cca766a066512eab2aca6563ce480bff | [
"Apache-2.0"
] | 8 | 2022-01-28T00:23:25.000Z | 2022-03-16T05:12:12.000Z | import os
import glob
import ruamel.yaml
from setup_app import paths
from setup_app.static import AppType, InstallOption
from setup_app.utils import base
from setup_app.config import Config
from setup_app.utils.setup_utils import SetupUtils
from setup_app.installers.base import BaseInstaller
class OxdInstaller(SetupUtils, BaseInstaller):
def __init__(self):
setattr(base.current_app, self.__class__.__name__, self)
self.service_name = 'oxd-server'
self.oxd_root = '/opt/oxd-server/'
self.needdb = False # we don't need backend connection in this class
self.app_type = AppType.SERVICE
self.install_type = InstallOption.OPTONAL
self.install_var = 'installOxd'
self.register_progess()
self.oxd_server_yml_fn = os.path.join(self.oxd_root, 'conf/oxd-server.yml')
def install(self):
self.logIt("Installing", pbar=self.service_name)
self.run(['tar', '-zxf', Config.oxd_package, '--no-same-owner', '--strip-components=1', '-C', self.oxd_root])
self.run(['chown', '-R', 'jetty:jetty', self.oxd_root])
if base.snap:
self.log_dir = os.path.join(base.snap_common, 'jans/oxd-server/log/')
else:
self.log_dir = '/var/log/oxd-server'
service_file = os.path.join(self.oxd_root, 'oxd-server.service')
if os.path.exists(service_file):
self.run(['cp', service_file, '/lib/systemd/system'])
else:
self.run([Config.cmd_ln, service_file, '/etc/init.d/oxd-server'])
if not os.path.exists(self.log_dir):
self.run([paths.cmd_mkdir, self.log_dir])
self.run([
'cp',
os.path.join(Config.install_dir, 'static/oxd/oxd-server.default'),
os.path.join(Config.osDefault, 'oxd-server')
])
self.log_file = os.path.join(self.log_dir, 'oxd-server.log')
if not os.path.exists(self.log_file):
open(self.log_file, 'w').close()
if not base.snap:
self.run(['chown', '-R', 'jetty:jetty', self.log_dir])
for fn in glob.glob(os.path.join(self.oxd_root,'bin/*')):
self.run([paths.cmd_chmod, '+x', fn])
self.modify_config_yml()
self.generate_keystore()
self.enable()
def modify_config_yml(self):
self.logIt("Configuring", pbar=self.service_name)
yml_str = self.readFile(self.oxd_server_yml_fn)
oxd_yaml = ruamel.yaml.load(yml_str, ruamel.yaml.RoundTripLoader)
if 'bind_ip_addresses' in oxd_yaml:
oxd_yaml['bind_ip_addresses'].append(Config.ip)
else:
for i, k in enumerate(oxd_yaml):
if k == 'storage':
break
else:
i = 1
addr_list = [Config.ip]
if base.snap:
addr_list.append('127.0.0.1')
oxd_yaml.insert(i, 'bind_ip_addresses', addr_list)
if Config.get('oxd_use_jans_storage'):
oxd_yaml['storage_configuration'].pop('dbFileLocation')
oxd_yaml['storage'] = 'jans_server_configuration'
oxd_yaml['storage_configuration']['baseDn'] = 'o=jans'
oxd_yaml['storage_configuration']['type'] = Config.jans_properties_fn
oxd_yaml['storage_configuration']['connection'] = Config.ox_ldap_properties if Config.mapping_locations['default'] == 'ldap' else Config.jansCouchebaseProperties
oxd_yaml['storage_configuration']['salt'] = os.path.join(Config.configFolder, "salt")
if base.snap:
for appenders in oxd_yaml['logging']['appenders']:
if appenders['type'] == 'file':
appenders['currentLogFilename'] = self.log_file
appenders['archivedLogFilenamePattern'] = os.path.join(base.snap_common, 'jans/oxd-server/log/oxd-server-%d{yyyy-MM-dd}-%i.log.gz')
yml_str = ruamel.yaml.dump(oxd_yaml, Dumper=ruamel.yaml.RoundTripDumper)
self.writeFile(self.oxd_server_yml_fn, yml_str)
def generate_keystore(self):
self.logIt("Generating certificate", pbar=self.service_name)
# generate oxd-server.keystore for the hostname
self.run([
paths.cmd_openssl,
'req', '-x509', '-newkey', 'rsa:4096', '-nodes',
'-out', '/tmp/oxd.crt',
'-keyout', '/tmp/oxd.key',
'-days', '3650',
'-subj', '/C={}/ST={}/L={}/O={}/CN={}/emailAddress={}'.format(Config.countryCode, Config.state, Config.city, Config.orgName, Config.hostname, Config.admin_email),
])
self.run([
paths.cmd_openssl,
'pkcs12', '-export',
'-in', '/tmp/oxd.crt',
'-inkey', '/tmp/oxd.key',
'-out', '/tmp/oxd.p12',
'-name', Config.hostname,
'-passout', 'pass:example'
])
self.run([
Config.cmd_keytool,
'-importkeystore',
'-deststorepass', 'example',
'-destkeypass', 'example',
'-destkeystore', '/tmp/oxd.keystore',
'-srckeystore', '/tmp/oxd.p12',
'-srcstoretype', 'PKCS12',
'-srcstorepass', 'example',
'-alias', Config.hostname,
])
oxd_keystore_fn = os.path.join(self.oxd_root, 'conf/oxd-server.keystore')
self.run(['cp', '-f', '/tmp/oxd.keystore', oxd_keystore_fn])
self.run([paths.cmd_chown, 'jetty:jetty', oxd_keystore_fn])
for f in ('/tmp/oxd.crt', '/tmp/oxd.key', '/tmp/oxd.p12', '/tmp/oxd.keystore'):
self.run([paths.cmd_rm, '-f', f])
def installed(self):
return os.path.exists(self.oxd_server_yml_fn)
def download_files(self, force=False):
oxd_url = os.path.join(base.current_app.app_info['JANS_MAVEN'], 'maven/io/jans/oxd-server/{0}/oxd-server-{0}-distribution.zip').format(base.current_app.app_info['ox_version'])
self.logIt("Downloading {} and preparing package".format(os.path.basename(oxd_url)))
oxd_zip_fn = '/tmp/oxd-server.zip'
oxd_tgz_fn = '/tmp/oxd-server.tgz' if base.snap else os.path.join(Config.dist_jans_dir, 'oxd-server.tgz')
tmp_dir = os.path.join('/tmp', os.urandom(5).hex())
oxd_tmp_dir = os.path.join(tmp_dir, 'oxd-server')
self.run([paths.cmd_mkdir, '-p', oxd_tmp_dir])
self.download_file(oxd_url, oxd_zip_fn)
self.run([paths.cmd_unzip, '-qqo', oxd_zip_fn, '-d', oxd_tmp_dir])
self.run([paths.cmd_mkdir, os.path.join(oxd_tmp_dir, 'data')])
if not base.snap:
service_file = 'oxd-server.init.d' if base.deb_sysd_clone else 'oxd-server.service'
service_url = 'https://raw.githubusercontent.com/GluuFederation/community-edition-package/master/package/systemd/oxd-server.service'.format(base.current_app.app_info['ox_version'], service_file)
self.download_file(service_url, os.path.join(oxd_tmp_dir, service_file))
oxd_server_sh_url = 'https://raw.githubusercontent.com/GluuFederation/oxd/master/debian/oxd-server'
self.download_file(oxd_server_sh_url, os.path.join(oxd_tmp_dir, 'bin/oxd-server'))
self.run(['tar', '-zcf', oxd_tgz_fn, 'oxd-server'], cwd=tmp_dir)
#self.run(['rm', '-r', '-f', tmp_dir])
Config.oxd_package = oxd_tgz_fn
def create_folders(self):
if not os.path.exists(self.oxd_root):
self.run([paths.cmd_mkdir, self.oxd_root])
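modify_config_yml above inserts a bind_ip_addresses entry just before the storage key when the key is missing, using ruamel.yaml's index-based insert. The same positional-insert logic can be sketched on a plain ordered dict, with no YAML dependency; the helper name `insert_before` is ours and the config keys follow the oxd-server.yml shown above:

```python
def insert_before(cfg, new_key, new_value, anchor_key):
    """Rebuild an ordered mapping with new_key placed just before anchor_key."""
    out = {}
    inserted = False
    for k, v in cfg.items():
        if k == anchor_key and not inserted:
            out[new_key] = new_value
            inserted = True
        out[k] = v
    if not inserted:  # anchor missing: append at the end
        out[new_key] = new_value
    return out

cfg = {"port": 8443, "storage": "h2", "storage_configuration": {}}
cfg = insert_before(cfg, "bind_ip_addresses", ["10.0.0.5"], "storage")
keys = list(cfg)
```

Rebuilding the mapping preserves key order, which matters here because ruamel.yaml round-trips the file and the author wants the new key to appear in a predictable place.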
| 42.548023 | 206 | 0.608419 | 976 | 7,531 | 4.495902 | 0.243852 | 0.065634 | 0.038742 | 0.034184 | 0.21969 | 0.149043 | 0.084321 | 0.051048 | 0.03464 | 0.03464 | 0 | 0.005592 | 0.240207 | 7,531 | 176 | 207 | 42.789773 | 0.761272 | 0.017129 | 0 | 0.136691 | 0 | 0.021583 | 0.227525 | 0.052589 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05036 | false | 0.028777 | 0.071942 | 0.007194 | 0.136691 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5233c1576475dcf2ff06eeb86ca49404bf46f4ce | 7,303 | py | Python | gnn_pygan/gan_attack/train.py | Guo-lab/Graph | c4c5fbc8fb5d645c16da20351b9746019cf75aab | [
"MIT"
] | null | null | null | gnn_pygan/gan_attack/train.py | Guo-lab/Graph | c4c5fbc8fb5d645c16da20351b9746019cf75aab | [
"MIT"
] | null | null | null | gnn_pygan/gan_attack/train.py | Guo-lab/Graph | c4c5fbc8fb5d645c16da20351b9746019cf75aab | [
"MIT"
] | null | null | null | import numpy as np
import scipy.sparse as sp
import torch
import torch.nn as nn
from tqdm import tqdm
import networkx as nx
import random
import math, os
from collections import defaultdict
import argparse
from models import DGI, LogReg
from utils import process
from attacker.attacker import Attacker
from estimator.estimator import mi_loss, mi_loss_neg
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('--dataset', type=str, default='cora', help='dataset') # 'cora', 'citeseer', 'polblogs'
parser.add_argument('--alpha', type=float, default=0.4)
parser.add_argument('--epsilon', type=float, default=0.1)
parser.add_argument('--tau', type=float, default=0.01)
parser.add_argument('--critic', type=str, default="bilinear") # 'inner product', 'bilinear', 'separable'
parser.add_argument('--hinge', type=bool, default=True)
parser.add_argument('--dim', type=int, default=512)
parser.add_argument('--gpu', type=str, default="0")
parser.add_argument('--save-model', type=bool, default=True)
parser.add_argument('--show-task', type=bool, default=True)
parser.add_argument('--show-attack', type=bool, default=True)
args = parser.parse_args()
dataset = args.dataset
print(args)
os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
make_adv = True
attack_rate = args.alpha
# /#/#/#/#/#/#/#/#/#/#/#/#/#/#/#/#/#/#/#/
# #/#/#/#/# training params /#/#/#/#/
# /#/#/#/#/#/#/#/#/#/#/#/#/#/#/#/#/#/#/#/
batch_size = 1
nb_epochs = 10000
patience = 20
lr = 0.001
l2_coef = 0.0
drop_prob = 0.0
hid_units = args.dim
sparse = True
if dataset == 'polblogs':
attack_mode = 'A'
else:
attack_mode = 'both'
nonlinearity = 'prelu' # special name to separate parameters
if dataset == 'polblogs':
adj, features, labels, idx_train, idx_val, idx_test = process.load_data_polblogs(dataset)
else:
adj, features, labels, idx_train, idx_val, idx_test = process.load_data(dataset)
nb_nodes = features.shape[0]
ft_size = features.shape[1]
nb_classes = labels.shape[1]
nb_edges = int(adj.sum() / 2)
n_flips = int(nb_edges * attack_rate)
A = adj.copy()
features, _ = process.preprocess_features(features, dataset=dataset)
adj = process.normalize_adj(adj + sp.eye(adj.shape[0]))
if sparse:
sp_adj = process.sparse_mx_to_torch_sparse_tensor(adj)
sp_A = process.sparse_mx_to_torch_sparse_tensor(A)
else:
adj = (adj + sp.eye(adj.shape[0])).todense()
features = torch.FloatTensor(features[np.newaxis])
if not sparse:
adj = torch.FloatTensor(adj[np.newaxis])
labels = torch.FloatTensor(labels[np.newaxis])
idx_train = torch.LongTensor(idx_train)
idx_val = torch.LongTensor(idx_val)
idx_test = torch.LongTensor(idx_test)
if torch.cuda.is_available():
print('Using CUDA')
features = features.cuda()
if sparse:
sp_adj = sp_adj.cuda()
sp_A = sp_A.cuda()
else:
adj = adj.cuda()
A = A.cuda()
labels = labels.cuda()
idx_train = idx_train.cuda()
idx_val = idx_val.cuda()
idx_test = idx_test.cuda()
sp_adj = sp_adj.to_dense()
sp_adj_ori = sp_adj.clone()
features_ori = features.clone()
sp_A = sp_A.to_dense()
encoder = DGI(ft_size, hid_units, nonlinearity, critic=args.critic)
atm = Attacker(encoder, features, nb_nodes, attack_mode=attack_mode,
show_attack=args.show_attack, gpu=torch.cuda.is_available())
optimiser = torch.optim.Adam(encoder.parameters(), lr=lr, weight_decay=l2_coef)
if torch.cuda.is_available():
encoder.cuda()
atm.cuda()
b_xent = nn.BCEWithLogitsLoss()
xent = nn.CrossEntropyLoss()
cnt_wait = 0
best = 1e9
best_t = 0
step_size_init = 20
attack_iters = 10
stepsize_x = 1e-5
# attack_mode = 'both'
drop = 0.8
epochs_drop = 20
train_lbls = torch.argmax(labels[0, idx_train], dim=1)
val_lbls = torch.argmax(labels[0, idx_val], dim=1)
test_lbls = torch.argmax(labels[0, idx_test], dim=1)
def task(embeds):
train_embs = embeds[0, idx_train]
val_embs = embeds[0, idx_val]
test_embs = embeds[0, idx_test]
log = LogReg(hid_units, nb_classes)
opt = torch.optim.Adam(log.parameters(), lr=0.01, weight_decay=0.0)
if torch.cuda.is_available():
log.cuda()
for _ in range(100):
log.train()
opt.zero_grad()
logits = log(train_embs)
loss = xent(logits, train_lbls)
loss.backward()
opt.step()
logits = log(test_embs)
preds = torch.argmax(logits, dim=1)
acc = torch.sum(preds == test_lbls).float() / test_lbls.shape[0]
return acc.detach().cpu().numpy()
for epoch in range(nb_epochs):
encoder.train()
optimiser.zero_grad()
if make_adv:
# step_size = step_size_init * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
step_size = step_size_init
step_size_x = stepsize_x
adv = atm(sp_adj, sp_A, None, n_flips, b_xent=b_xent, step_size=step_size,
eps_x=args.epsilon, step_size_x=step_size_x,
iterations=attack_iters, should_normalize=True, random_restarts=False, make_adv=True)
if attack_mode == 'A':
sp_adj = adv
elif attack_mode == 'X':
features = adv
elif attack_mode == 'both':
sp_adj = adv[0]
features = adv[1]
loss = mi_loss(encoder, sp_adj, features, nb_nodes, b_xent, batch_size, sparse)
if args.hinge:
loss_ori = mi_loss(encoder, sp_adj_ori, features_ori, nb_nodes, b_xent, batch_size, sparse)
RV = loss - loss_ori
print("RV: {}; RV-tau: {}; MI-nature: {}; MI-worst: {}".format(RV.detach().cpu().numpy(),
(RV - args.tau).detach().cpu().numpy(),
loss_ori.detach().cpu().numpy(),
loss.detach().cpu().numpy()))
if RV - args.tau < 0:
loss = loss_ori
if args.show_task and epoch%5==0:
adv = atm(sp_adj_ori, sp_A, None, n_flips, b_xent=b_xent, step_size=20,
eps_x=args.epsilon, step_size_x=1e-3,
iterations=50, should_normalize=True, random_restarts=False, make_adv=True)
if attack_mode == 'A':
embeds, _ = encoder.embed(features_ori, adv, sparse, None)
elif attack_mode == 'X':
embeds, _ = encoder.embed(adv, sp_adj_ori, sparse, None)
elif attack_mode == 'both':
embeds, _ = encoder.embed(adv[1], adv[0], sparse, None)
acc_adv = task(embeds)
embeds, _ = encoder.embed(features_ori, sp_adj_ori, sparse, None)
acc_nat = task(embeds)
print('Epoch:{} Step_size: {:.4f} Loss:{:.4f} Natural_Acc:{:.4f} Adv_Acc:{:.4f}'.format(
epoch, step_size, loss.detach().cpu().numpy(), acc_nat, acc_adv))
else:
print('Epoch:{} Step_size: {:.4f} Loss:{:.4f}'.format(epoch, step_size, loss.detach().cpu().numpy()))
if loss < best:
best = loss
best_t = epoch
cnt_wait = 0
if args.save_model:
torch.save(encoder.state_dict(), 'model.pkl')
else:
cnt_wait += 1
if cnt_wait == patience:
print('Early stopping!')
break
loss.backward()
optimiser.step() | 25.714789 | 110 | 0.630426 | 1,020 | 7,303 | 4.298039 | 0.210784 | 0.029197 | 0.042655 | 0.017336 | 0.25479 | 0.187272 | 0.170164 | 0.102646 | 0.084398 | 0.066606 | 0 | 0.015893 | 0.224565 | 7,303 | 284 | 111 | 25.714789 | 0.758255 | 0.038614 | 0 | 0.127072 | 0 | 0 | 0.054846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.005525 | false | 0 | 0.077348 | 0 | 0.088398 | 0.033149 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
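The training loop above keeps a commented-out staircase step-size schedule (step_size_init * drop^floor((1+epoch)/epochs_drop)) and a patience-based early-stopping counter around `best`/`cnt_wait`. Both mechanisms are small enough to sketch in isolation; the function names here are ours:

```python
import math

def decayed_step(step_size_init, drop, epochs_drop, epoch):
    """Staircase decay matching the commented-out schedule in the attack loop."""
    return step_size_init * math.pow(drop, math.floor((1 + epoch) / epochs_drop))

def early_stopping(losses, patience):
    """Return the epoch index at which training would stop, or None."""
    best, wait = float("inf"), 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, wait = loss, 0  # improvement resets the counter
        else:
            wait += 1
            if wait == patience:
                return epoch
    return None

step0 = decayed_step(20, 0.8, 20, 0)    # epochs 0..18 keep the initial step size
step20 = decayed_step(20, 0.8, 20, 20)  # first drop: 20 * 0.8
stop = early_stopping([3.0, 2.0, 2.5, 2.4, 2.3], patience=3)
```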
5237cf1fdafafc7f3df3d20bbeac6dd2091828d7 | 7,274 | py | Python | antenna_size.py | EzraCerpac/SatLink | d5da25d8f287ea25a7da6e91eed8b435ed8416f1 | [
"MIT"
] | 8 | 2021-02-12T00:19:19.000Z | 2022-03-14T07:36:09.000Z | antenna_size.py | EzraCerpac/SatLink | d5da25d8f287ea25a7da6e91eed8b435ed8416f1 | [
"MIT"
] | 5 | 2021-02-11T21:52:02.000Z | 2021-06-24T21:09:37.000Z | antenna_size.py | EzraCerpac/SatLink | d5da25d8f287ea25a7da6e91eed8b435ed8416f1 | [
"MIT"
] | 3 | 2021-10-04T17:23:42.000Z | 2022-03-02T07:35:43.000Z | from GrStat import GroundStation, Reception
from sat import Satellite
from pathos.pools import ParallelPool
from scipy import interpolate
import pandas as pd
import numpy as np
import pickle
import tqdm
import datetime
import sys, os
# this file contains the functions used to estimate antenna sizes and display the results in the interface
# TODO: still need to create a header in the CSV output (mp_ant_size)
def loop_graph_ant_size(args):
sat = args[0]
margin = args[1]
snr_relaxation = args[2]
ant_size = args[3]
sat.reception.ant_size = round(ant_size, 1)
availability = sat.get_availability(margin, snr_relaxation)
return availability
def point_ant_size(args): # function loop - return the availability to a given Lat/Long
min_ant_size = 0.5
max_ant_size = 10
step_ant_size = 0.2
target_availability = args[5]
point = args[0]
sat = args[1]
reception = args[2]
margin = args[3]
snr_relaxation = args[4]
lat = point['Lat']
long = point['Long']
station = GroundStation(lat, long)
sat.set_grstation(station)
sat.set_reception(reception)
ant_size_vector = np.arange(min_ant_size, max_ant_size, step_ant_size)
for ant_size in ant_size_vector:
sat.reception.ant_size = round(ant_size, 1)
if sat.get_availability(margin, snr_relaxation) >= target_availability:
sat.reception.ant_size = round(round(ant_size, 1) - 0.1, 1)
if sat.get_availability(margin, snr_relaxation) >= target_availability:
point['ant size'] = round(round(ant_size, 1) - 0.1, 1)
else:
point['ant size'] = round(ant_size, 1)
break
return point
def sp_ant_size(): # this function runs the availability for a single point and shows a complete output
with open('temp\\args.pkl', 'rb') as f:
(site_lat, site_long, sat_long, freq, max_eirp, sat_height, max_bw, bw_util,
modcod, pol, roll_off, ant_eff, lnb_gain, lnb_temp, aditional_losses,
cable_loss, max_depoint, max_ant_size, min_ant_size, margin, cores) = pickle.load(f)
f.close()
#####################################
##### ground station parameters #####
#####################################
# creating a ground station object
station = GroundStation(site_lat, site_long)
##############################
### satellite parameters ###
##############################
data = pd.read_csv('models\\Modulation_dB.csv', sep=';')
line = data.loc[(data.Modcod) == modcod]
# tech = line['Tech'].values[0]
mod = line['Modulation'].values[0]
fec = line['FEC'].values[0]
    # creating the satellite object
satellite = Satellite(sat_long, freq, max_eirp, sat_height, max_bw, bw_util, 0, 0, mod, roll_off, fec)
    # assigning the ground station to the satellite
satellite.set_grstation(station)
##############################
    ### reception parameters  ###
##############################
    polarization_loss = 3  # polarization loss, in dB
    # creating the receiver object
reception = Reception(None, ant_eff, aditional_losses, polarization_loss, lnb_gain, lnb_temp, cable_loss,
max_depoint)
    # assigning a reception chain to the satellite link
    satellite.set_reception(reception)  # set the receiver of the satellite link
###################################
######### OUTPUTS #########
###################################
############ SNR target's calcullation ################
step = 0.2
interp_step = int(round((max_ant_size - min_ant_size) * 100))
ant_size_vector = np.arange(min_ant_size, max_ant_size, step)
ant_size_vector_interp = np.linspace(min_ant_size, max_ant_size, interp_step)
# parallel loop for each antenna size
    pool = ParallelPool(nodes=round(cores / 2))  # FIXME: worker count is halved here; make it configurable
availability_vector = list(pool.imap(loop_graph_ant_size, [(satellite, margin, 1, ant_size) for ant_size in ant_size_vector]))
pool.clear()
ant_size_vector = np.array(ant_size_vector)
availability_vector = np.array(availability_vector)
ant_size_vector = ant_size_vector[availability_vector > 60]
availability_vector = availability_vector[availability_vector > 60]
# a_BSpline = interpolate.make_interp_spline(ant_size_vector, availability_vector, k=2)
# availability_vector_interp = a_BSpline(ant_size_vector_interp)
availability_vector_interp = 0
with open('temp\\args.pkl', 'wb') as f:
pickle.dump(
[ant_size_vector, ant_size_vector_interp, availability_vector, availability_vector_interp], f)
f.close()
return
def mp_ant_size():
with open('temp\\args.pkl', 'rb') as f: # opening the input variables in the temp file
(gr_station_path, sat_long, freq, max_eirp, sat_height, max_bw, bw_util,
modcod, pol, roll_off, ant_eff, lnb_gain, lnb_temp, aditional_losses,
cable_loss, max_depoint, availability_target, snr_relaxation, margin, threads) = pickle.load(f)
f.close()
# reading the input table
# dir = 'models\\'
# file = 'CitiesBrazil'
# cities = pd.read_csv(dir + file + '.csv', sep=';', encoding='latin1')
# cities['availability'] = np.nan # creating an empty results column
point_list = pd.read_csv(gr_station_path, sep=';', encoding='latin1') # creating a point dataframe from csv file
data = pd.read_csv('models\\Modulation_dB.csv', sep=';')
line = data.loc[data.Modcod == modcod]
# tech = line['Tech'].values[0]
mod = line['Modulation'].values[0]
fec = line['FEC'].values[0]
# creating the satellite object
sat = Satellite(sat_long, freq, max_eirp, sat_height, max_bw, bw_util, 0, 0, mod, roll_off, fec)
polarization_loss = 3
reception = Reception(None, ant_eff, aditional_losses, polarization_loss, lnb_gain, lnb_temp, cable_loss,
max_depoint) # creating the receiver object
# ======================== PARALLEL POOL =============================
pool = ParallelPool(nodes=threads) # creating the parallel pool
sys.stderr = open('temp\\out.txt', 'w') # to print the output dynamically
print('initializing . . .', file=sys.stderr)
# running the parallel pool
data = list(
tqdm.tqdm(pool.imap(point_ant_size,
[(point, sat, reception, margin, snr_relaxation, availability_target) for index, point in
point_list.iterrows()]),
total=len(point_list)))
pool.clear()
point_list.drop(point_list.index, inplace=True)
point_list = point_list.append(data, ignore_index=True)
# saving the results into a csv file
dir = 'results'
if not os.path.exists(dir):
os.makedirs(dir)
point_list.to_csv(dir + '\\' + 'results_ant ' + datetime.datetime.now().strftime('%y-%m-%d_%H-%M-%S') + '.csv', sep=';',
encoding='latin1')
print('Complete!!!', file=sys.stderr)
sys.stderr.close()
return
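# --- Illustrative sketch (assumption, not from the original code) -----------
# DataFrame.append as used in mp_ant_size was removed in pandas 2.x; the same
# drop-and-refill pattern can be written with pd.concat, e.g.:
def _demo_concat_results():
    import pandas as pd
    point_list = pd.DataFrame({'city': ['A'], 'availability': [0.0]})
    data = [{'city': 'A', 'availability': 99.1},
            {'city': 'B', 'availability': 97.4}]
    point_list.drop(point_list.index, inplace=True)   # empty the frame
    return pd.concat([point_list, pd.DataFrame(data)], ignore_index=True)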
# -*- coding: utf-8 -*-
import time
from .. import Metrics
from ..Exceptions import StoryscriptRuntimeError
from ..Story import Story
from ..constants.LineSentinels import LineSentinels
from ..processing import Lexicon
class Stories:
@staticmethod
def story(app, logger, story_name):
return Story(app, story_name, logger)
@staticmethod
def save(logger, story, start):
"""
Saves the narration and the results for each line.
"""
logger.log('story-save', story.name, story.app_id)
@staticmethod
async def execute(logger, story):
"""
Executes each line in the story
"""
line_number = story.first_line()
while line_number:
result = await Lexicon.execute_line(logger, story, line_number)
# Sentinels are not allowed to escape from here.
if LineSentinels.is_sentinel(result):
raise StoryscriptRuntimeError(
message=f'A sentinel has escaped ({result})!',
story=story, line=story.line(line_number))
line_number = result
logger.log('story-execution', line_number)
@classmethod
async def run(cls,
app, logger, story_name, *, story_id=None,
block=None, context=None,
function_name=None):
start = time.time()
try:
logger.log('story-start', story_name, story_id)
story = cls.story(app, logger, story_name)
story.prepare(context)
if function_name:
raise StoryscriptRuntimeError('No longer supported')
elif block:
with story.new_frame(block):
await Lexicon.execute_block(logger, story,
story.line(block))
else:
await cls.execute(logger, story)
logger.log('story-end', story_name, story_id)
Metrics.story_run_success.labels(app_id=app.app_id,
story_name=story_name) \
.observe(time.time() - start)
except BaseException as err:
Metrics.story_run_failure.labels(app_id=app.app_id,
story_name=story_name) \
.observe(time.time() - start)
raise err
finally:
Metrics.story_run_total.labels(app_id=app.app_id,
story_name=story_name) \
.observe(time.time() - start)
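# --- Illustrative sketch (assumption, not part of storyruntime) -------------
# The control flow of Stories.execute in miniature: each executed line yields
# the next line number, and a falsy result ends the story.
def _demo_execute(program, first_line):
    executed = []
    line_number = first_line
    while line_number:
        executed.append(line_number)
        line_number = program.get(line_number)  # next line, or None at the end
    return executed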
# exercise 8.1.2
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from toolbox_02450 import rocplot, confmatplot
font_size = 15
plt.rcParams.update({'font.size': font_size})
# Load Matlab data file and extract variables of interest
mat_data = loadmat('../Data/wine2.mat')
X = mat_data['X']
y = mat_data['y'].squeeze()
attributeNames = [name[0] for name in mat_data['attributeNames'][0]]
classNames = [name[0][0] for name in mat_data['classNames']]
N, M = X.shape
C = len(classNames)
# Create crossvalidation partition for evaluation
# using stratification and 95 pct. split between training and test
K = 20
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.95, stratify=y)
# Try to change the test_size to e.g. 50 % and 99 % - how does that change the
# effect of regularization? How do different runs of test_size=.99 compare
# to each other?
# Standardize the training and test set based on training set mean and std
mu = np.mean(X_train, 0)
sigma = np.std(X_train, 0)
X_train = (X_train - mu) / sigma
X_test = (X_test - mu) / sigma
# Fit regularized logistic regression model to training data to predict
# the type of wine
lambda_interval = np.logspace(-8, 2, 50)
train_error_rate = np.zeros(len(lambda_interval))
test_error_rate = np.zeros(len(lambda_interval))
coefficient_norm = np.zeros(len(lambda_interval))
for k in range(0, len(lambda_interval)):
mdl = LogisticRegression(penalty='l2', C=1/lambda_interval[k] )
mdl.fit(X_train, y_train)
y_train_est = mdl.predict(X_train).T
y_test_est = mdl.predict(X_test).T
train_error_rate[k] = np.sum(y_train_est != y_train) / len(y_train)
test_error_rate[k] = np.sum(y_test_est != y_test) / len(y_test)
w_est = mdl.coef_[0]
coefficient_norm[k] = np.sqrt(np.sum(w_est**2))
min_error = np.min(test_error_rate)
opt_lambda_idx = np.argmin(test_error_rate)
opt_lambda = lambda_interval[opt_lambda_idx]
plt.figure(figsize=(8,8))
#plt.plot(np.log10(lambda_interval), train_error_rate*100)
#plt.plot(np.log10(lambda_interval), test_error_rate*100)
#plt.plot(np.log10(opt_lambda), min_error*100, 'o')
plt.semilogx(lambda_interval, train_error_rate*100)
plt.semilogx(lambda_interval, test_error_rate*100)
plt.semilogx(opt_lambda, min_error*100, 'o')
plt.text(1e-8, 3, "Minimum test error: " + str(np.round(min_error*100,2)) + ' % at 1e' + str(np.round(np.log10(opt_lambda),2)))
plt.xlabel(r'Regularization strength, $\log_{10}(\lambda)$')
plt.ylabel('Error rate (%)')
plt.title('Classification error')
plt.legend(['Training error','Test error','Test minimum'],loc='upper right')
plt.ylim([0, 4])
plt.grid()
plt.show()
plt.figure(figsize=(8,8))
plt.semilogx(lambda_interval, coefficient_norm,'k')
plt.ylabel('L2 Norm')
plt.xlabel(r'Regularization strength, $\log_{10}(\lambda)$')
plt.title('Parameter vector L2 norm')
plt.grid()
plt.show()
print('Ran Exercise 8.1.2')
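# --- Illustrative sketch (assumption) ---------------------------------------
# The model-selection step above in miniature: pick the regularization
# strength whose test error rate is lowest.
def _demo_pick_lambda():
    import numpy as np
    lambda_interval = np.logspace(-4, 2, 7)
    test_error_rate = np.array([0.31, 0.22, 0.12, 0.08, 0.11, 0.19, 0.40])
    opt_lambda_idx = np.argmin(test_error_rate)   # index of the minimum error
    return lambda_interval[opt_lambda_idx]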
import datetime
import random
import numpy as np
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtMultimedia import QSound
import sys
from .resources import *
class Ui_main_window(object):
def setupUi(self, main_window):
self.running_applet = False
self.counter = 0
self.app_no = 0
self.recall_mode = False
self.watch_counter = 0
self.is_watch_reset = True
self.watch_timer = QtCore.QTimer()
self.watch_timer.timeout.connect(self.run_watch)
self.watch_timer.setInterval(100)
main_window.setObjectName("main_window")
main_window.resize(739, 446)
main_window.setStyleSheet("background-color: rgb(18, 18, 18);")
self.centralwidget = QtWidgets.QWidget(main_window)
self.centralwidget.setObjectName("centralwidget")
self.stacked_windows = QtWidgets.QStackedWidget(self.centralwidget)
self.stacked_windows.setGeometry(QtCore.QRect(0, -20, 731, 441))
self.stacked_windows.setObjectName("stacked_windows")
# ========================================================================================
self.page_main_menu = QtWidgets.QWidget()
self.page_main_menu.setObjectName("page_main_menu")
self.app_descr = QtWidgets.QTextBrowser(self.page_main_menu)
self.app_descr.setGeometry(QtCore.QRect(100, 130, 551, 81))
self.app_descr.setStyleSheet('font: 20pt "Sans Serif";\n' "")
self.app_descr.setObjectName("app_descr")
self.button_fmw = QtWidgets.QPushButton(self.page_main_menu)
self.button_fmw.setGeometry(QtCore.QRect(430, 240, 201, 23))
self.button_fmw.setStyleSheet("color: rgb(255, 255, 255);")
self.button_fmw.setObjectName("button_fmw")
self.button_fmw.clicked.connect(self.open_window_15min_words)
self.button_exit = QtWidgets.QPushButton(self.page_main_menu)
self.button_exit.setGeometry(QtCore.QRect(340, 400, 80, 23))
self.button_exit.setStyleSheet("color: rgb(255, 255, 255);")
self.button_exit.setObjectName("button_exit")
self.button_exit.clicked.connect(self.exit_the_app)
self.speed_cards_button = QtWidgets.QPushButton(self.page_main_menu)
self.speed_cards_button.setGeometry(QtCore.QRect(140, 240, 91, 23))
self.speed_cards_button.setStyleSheet("color: rgb(255, 255, 255);")
self.speed_cards_button.setObjectName("speed_cards_button")
self.speed_cards_button.clicked.connect(self.open_window_speed_cards)
self.app_title = QtWidgets.QLabel(self.page_main_menu)
self.app_title.setGeometry(QtCore.QRect(200, 60, 331, 71))
self.app_title.setStyleSheet(
'font: 30pt "Sans Serif";\n' "color: rgb(255, 56, 56)"
)
self.app_title.setObjectName("app_title")
self.button_fmn = QtWidgets.QPushButton(self.page_main_menu)
self.button_fmn.setGeometry(QtCore.QRect(260, 240, 141, 23))
self.button_fmn.setStyleSheet("color: rgb(255, 255, 255);")
self.button_fmn.setObjectName("button_fmn")
self.button_fmn.clicked.connect(self.open_window_5min_nums)
self.button_sn = QtWidgets.QPushButton(self.page_main_menu)
self.button_sn.setGeometry(QtCore.QRect(140, 290, 121, 23))
self.button_sn.setStyleSheet("color: rgb(255, 255, 255);")
self.button_sn.setObjectName("button_sn")
self.button_sn.clicked.connect(self.open_window_sn)
self.button_bn = QtWidgets.QPushButton(self.page_main_menu)
self.button_bn.setGeometry(QtCore.QRect(280, 290, 121, 23))
self.button_bn.setStyleSheet("color: rgb(255, 255, 255);")
self.button_bn.setObjectName("button_bn")
self.button_bn.clicked.connect(self.open_window_bn)
self.heart_img = QtWidgets.QLabel(self.page_main_menu)
self.heart_img.setGeometry(QtCore.QRect(560, 70, 41, 41))
self.heart_img.setText("")
self.heart_img.setPixmap(QtGui.QPixmap(":/dat/img/love.png"))
self.heart_img.setScaledContents(True)
self.heart_img.setObjectName("heart_img")
self.stacked_windows.addWidget(self.page_main_menu)
# _______________________ SPEED CARDS____________________________
self.page_sc = QtWidgets.QWidget()
self.page_sc.setObjectName("page_sc")
self.time_delay_label = QtWidgets.QLabel(self.page_sc)
self.time_delay_label.setGeometry(QtCore.QRect(30, 60, 121, 16))
self.time_delay_label.setStyleSheet("color:rgb(255, 255, 255)")
self.time_delay_label.setObjectName("time_delay_label")
self.time_delay_label.setVisible(True)
self.begin_sc = QtWidgets.QPushButton(self.page_sc)
self.begin_sc.setGeometry(QtCore.QRect(30, 100, 80, 23))
self.begin_sc.setStyleSheet("color: rgb(255, 255, 255);")
self.begin_sc.setObjectName("begin_sc")
self.begin_sc.clicked.connect(self.applet_sc)
self.begin_sc.setVisible(True)
self.pause_sc = QtWidgets.QPushButton(self.page_sc)
self.pause_sc.setGeometry(QtCore.QRect(130, 100, 80, 23))
self.pause_sc.setStyleSheet("color: rgb(255, 255, 255);")
self.pause_sc.setObjectName("pause_sc")
self.pause_sc.setVisible(False)
self.pause_sc.clicked.connect(self.pause_action)
self.hide_timer_sc = QtWidgets.QCheckBox(self.page_sc)
self.hide_timer_sc.setGeometry(QtCore.QRect(230, 100, 82, 23))
self.hide_timer_sc.setStyleSheet("color: rgb(255, 255, 255);")
self.hide_timer_sc.setObjectName("hide_timer_sc")
self.hide_timer_sc.clicked.connect(self.hide_timer)
self.hide_timer_sc.setVisible(True)
self.recall_sc = QtWidgets.QPushButton(self.page_sc)
self.recall_sc.setGeometry(QtCore.QRect(330, 100, 80, 23))
self.recall_sc.setStyleSheet("color: rgb(255, 255, 255);")
self.recall_sc.setObjectName("recall_sc")
self.recall_sc.setVisible(False)
self.recall_sc.clicked.connect(self.recall_sc_fn)
self.doubleSpinBox = QtWidgets.QDoubleSpinBox(self.page_sc)
self.doubleSpinBox.setGeometry(QtCore.QRect(160, 60, 61, 24))
self.doubleSpinBox.setStyleSheet("background-color: rgb(102, 102, 102);")
self.doubleSpinBox.setObjectName("doubleSpinBox")
self.doubleSpinBox.setMinimum(0)
self.doubleSpinBox.setDecimals(2)
self.doubleSpinBox.setVisible(True)
self.next_sc = QtWidgets.QPushButton(self.page_sc)
self.next_sc.setGeometry(QtCore.QRect(590, 280, 80, 23))
self.next_sc.setStyleSheet("color: rgb(255, 255, 255);")
self.next_sc.clicked.connect(self.manual_next)
self.next_sc.setObjectName("next_sc")
self.next_sc.setVisible(False)
self.prev_sc = QtWidgets.QPushButton(self.page_sc)
self.prev_sc.setGeometry(QtCore.QRect(590, 320, 80, 23))
self.prev_sc.setStyleSheet("color: rgb(255, 255, 255);")
self.prev_sc.setObjectName("prev_sc")
self.prev_sc.clicked.connect(self.manual_prev)
self.prev_sc.setVisible(False)
self.button_quit = QtWidgets.QPushButton(self.page_sc)
self.button_quit.setGeometry(QtCore.QRect(430, 100, 141, 23))
self.button_quit.setStyleSheet("color: rgb(255, 255, 255);")
self.button_quit.setObjectName("button_quit")
self.button_quit.clicked.connect(self.return_to_main_menu)
self.exit_sc = QtWidgets.QPushButton(self.page_sc)
self.exit_sc.setGeometry(QtCore.QRect(590, 100, 80, 23))
self.exit_sc.setStyleSheet("color: rgb(255, 255, 255);")
self.exit_sc.setObjectName("exit_sc")
self.exit_sc.clicked.connect(self.exit_the_app)
self.card_image = QtWidgets.QLabel(self.page_sc)
self.card_image.setGeometry(QtCore.QRect(270, 150, 181, 261))
self.card_image.setText("")
pixmap = QtGui.QPixmap(":/dat/sc/back.png")
self.card_image.setPixmap(pixmap.scaled(150, 300, QtCore.Qt.KeepAspectRatio))
self.lcdNumber = QtWidgets.QLCDNumber(self.page_sc)
self.lcdNumber.setGeometry(QtCore.QRect(590, 160, 111, 31))
self.lcdNumber.setObjectName("lcdNumber")
self.checkBox = QtWidgets.QCheckBox(self.page_sc)
self.checkBox.setGeometry(QtCore.QRect(590, 210, 85, 21))
self.checkBox.setStyleSheet("color: rgb(255, 255, 255);")
self.checkBox.setObjectName("checkBox")
self.checkBox.stateChanged.connect(self.manual_mode)
self.card_no_sc = QtWidgets.QLabel(self.page_sc)
self.card_no_sc.setGeometry(QtCore.QRect(530, 300, 41, 20))
self.card_no_sc.setStyleSheet("color:rgb(255, 255, 255)")
self.card_no_sc.setObjectName("card_no_sc")
self.stacked_windows.addWidget(self.page_sc)
# ___________________5 MINUTE NUMBERS___________________________
self.page_fmn = QtWidgets.QWidget()
self.page_fmn.setObjectName("page_fmn")
# quit button
self.quit_fmn = QtWidgets.QPushButton(self.page_fmn)
self.quit_fmn.setGeometry(QtCore.QRect(210, 50, 131, 23))
self.quit_fmn.setStyleSheet("color: rgb(255, 255, 255);")
self.quit_fmn.setObjectName("quit_fmn")
self.quit_fmn.clicked.connect(self.return_to_main_menu)
# begin button
self.begin_fmn = QtWidgets.QPushButton(self.page_fmn)
self.begin_fmn.setGeometry(QtCore.QRect(30, 50, 80, 23))
self.begin_fmn.setStyleSheet("color: rgb(255, 255, 255);")
self.begin_fmn.setObjectName("begin_fmn")
self.begin_fmn.clicked.connect(self.applet_fmn)
# exit button
self.exit_fmn = QtWidgets.QPushButton(self.page_fmn)
self.exit_fmn.setGeometry(QtCore.QRect(350, 50, 80, 23))
self.exit_fmn.setStyleSheet("color: rgb(255, 255, 255);")
self.exit_fmn.setObjectName("exit_fmn")
self.exit_fmn.clicked.connect(self.exit_the_app)
# prev button
self.prev_fmn = QtWidgets.QPushButton(self.page_fmn)
self.prev_fmn.setGeometry(QtCore.QRect(230, 380, 80, 23))
self.prev_fmn.setStyleSheet("color: rgb(255, 255, 255);")
self.prev_fmn.setVisible(False)
self.prev_fmn.setObjectName("prev_fmn")
self.prev_fmn.clicked.connect(self.click_prev)
# next button
self.next_fmn = QtWidgets.QPushButton(self.page_fmn)
self.next_fmn.setGeometry(QtCore.QRect(380, 380, 80, 23))
self.next_fmn.setStyleSheet("color: rgb(255, 255, 255);")
self.next_fmn.setVisible(False)
self.next_fmn.setObjectName("next_fmn")
self.next_fmn.clicked.connect(self.click_next)
# recall button
self.recall_fmn = QtWidgets.QPushButton(self.page_fmn)
self.recall_fmn.setGeometry(QtCore.QRect(120, 50, 80, 23))
self.recall_fmn.setStyleSheet("color: rgb(255, 255, 255);")
self.recall_fmn.setObjectName("recall_fmn")
self.recall_fmn.setVisible(False)
self.recall_fmn.clicked.connect(self.recall_fmn_fn)
# lcd watch
self.timer_fmn = QtWidgets.QLCDNumber(self.page_fmn)
self.timer_fmn.setGeometry(QtCore.QRect(563, 32, 131, 41))
self.timer_fmn.setObjectName("timer_fmn")
# number display panel
self.disp_panel_fmn = QtWidgets.QTextBrowser(self.page_fmn)
self.disp_panel_fmn.setGeometry(QtCore.QRect(90, 100, 511, 251))
self.disp_panel_fmn.setObjectName("disp_panel_fmn")
self.disp_panel_fmn.setStyleSheet(
'font: 13.5pt "Sans Serif";\n' "color: rgb(255, 255, 255);"
)
# page no label
self.page_no_fmn = QtWidgets.QLabel(self.page_fmn)
self.page_no_fmn.setGeometry(QtCore.QRect(327, 380, 34, 21))
self.page_no_fmn.setObjectName("page_no_fmn")
self.page_no_fmn.setStyleSheet("color: rgb(255, 255, 255);")
# hide clock checkbox
self.hide_timer_fmn = QtWidgets.QCheckBox(self.page_fmn)
self.hide_timer_fmn.setGeometry(QtCore.QRect(620, 90, 85, 21))
self.hide_timer_fmn.setStyleSheet("color: rgb(255, 255, 255);")
self.hide_timer_fmn.setObjectName("hide_timer_fmn")
self.hide_timer_fmn.stateChanged.connect(self.hide_timer)
# show next digit in recall
self.show_next_digit_fmn = QtWidgets.QPushButton(self.page_fmn)
self.show_next_digit_fmn.setGeometry(QtCore.QRect(660, 160, 31, 31))
self.show_next_digit_fmn.setStyleSheet("color: rgb(255, 255, 255);")
self.show_next_digit_fmn.setObjectName("show_next_digit")
self.show_next_digit_fmn.setVisible(False)
self.show_next_digit_fmn.clicked.connect(self.show_next_digit_fmn_fn)
self.stacked_windows.addWidget(self.page_fmn)
# ___________________15 MINUTE RANDOM WORDS______________________
self.page_fmw = QtWidgets.QWidget()
self.page_fmw.setObjectName("page_fmw")
self.begin_fmw = QtWidgets.QPushButton(self.page_fmw)
self.begin_fmw.setGeometry(QtCore.QRect(30, 50, 80, 23))
self.begin_fmw.setStyleSheet("color: rgb(255, 255, 255);")
self.begin_fmw.setObjectName("begin_fmw")
self.begin_fmw.setVisible(True)
self.begin_fmw.clicked.connect(self.applet_fmw)
self.recall_fmw = QtWidgets.QPushButton(self.page_fmw)
self.recall_fmw.setGeometry(QtCore.QRect(130, 50, 80, 23))
self.recall_fmw.setStyleSheet("color: rgb(255, 255, 255);")
self.recall_fmw.setObjectName("recall_fmw")
self.recall_fmw.setVisible(False)
self.recall_fmw.clicked.connect(self.recall_fmw_fn)
self.quit_fmw = QtWidgets.QPushButton(self.page_fmw)
self.quit_fmw.setGeometry(QtCore.QRect(230, 50, 131, 23))
self.quit_fmw.setStyleSheet("color: rgb(255, 255, 255);")
self.quit_fmw.setObjectName("quit_fmw")
self.quit_fmw.clicked.connect(self.return_to_main_menu)
self.exit_fmw = QtWidgets.QPushButton(self.page_fmw)
self.exit_fmw.setGeometry(QtCore.QRect(380, 50, 80, 23))
self.exit_fmw.setStyleSheet("color: rgb(255, 255, 255);")
self.exit_fmw.setObjectName("exit_fmw")
self.exit_fmw.clicked.connect(self.exit_the_app)
self.timer_fmw = QtWidgets.QLCDNumber(self.page_fmw)
self.timer_fmw.setGeometry(QtCore.QRect(620, 40, 84, 33))
self.timer_fmw.setObjectName("timer_fmw")
self.hide_timer_fmw = QtWidgets.QCheckBox(self.page_fmw)
self.hide_timer_fmw.setGeometry(QtCore.QRect(510, 50, 85, 21))
self.hide_timer_fmw.setStyleSheet("color: rgb(255, 255, 255);")
self.hide_timer_fmw.setObjectName("hide_timer_fmw")
self.hide_timer_fmw.clicked.connect(self.hide_timer)
self.hide_timer_fmw.setVisible(True)
self.table_fmw = QtWidgets.QTableView(self.page_fmw)
self.table_fmw.setGeometry(QtCore.QRect(110, 90, 566, 331))
self.table_fmw.setStyleSheet(
"color: rgb(255, 255, 255);\n" "gridline-color: rgb(255, 255, 255);"
)
self.table_fmw.setObjectName("table_fmw")
self.table_fmw.setShowGrid(True)
self.table_fmw.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
self.model = QtGui.QStandardItemModel() # SELECTING THE MODEL
self.table_fmw.setModel(self.model) # SETTING THE MODEL
self.stacked_windows.addWidget(self.page_fmw)
# ________________ SPOKEN NUMBERS____________________________
self.page_sn = QtWidgets.QWidget()
self.page_sn.setObjectName("page_sn")
self.begin_sn = QtWidgets.QPushButton(self.page_sn)
self.begin_sn.setGeometry(QtCore.QRect(30, 40, 80, 23))
self.begin_sn.setStyleSheet("color: rgb(255, 255, 255);")
self.begin_sn.setObjectName("begin_sn")
self.begin_sn.clicked.connect(self.applet_sn)
self.recall_sn = QtWidgets.QPushButton(self.page_sn)
self.recall_sn.setGeometry(QtCore.QRect(130, 40, 80, 23))
self.recall_sn.setStyleSheet("color: rgb(255, 255, 255);")
self.recall_sn.setObjectName("recall_sn")
self.recall_sn.setVisible(False)
self.recall_sn.clicked.connect(self.recall_sn_fn)
self.quit_sn = QtWidgets.QPushButton(self.page_sn)
self.quit_sn.setGeometry(QtCore.QRect(440, 400, 151, 23))
self.quit_sn.setStyleSheet("color: rgb(255, 255, 255);")
self.quit_sn.setObjectName("quit_sn")
self.quit_sn.clicked.connect(self.return_to_main_menu)
self.exit_sn = QtWidgets.QPushButton(self.page_sn)
self.exit_sn.setGeometry(QtCore.QRect(620, 400, 80, 23))
self.exit_sn.setStyleSheet("color: rgb(255, 255, 255);")
self.exit_sn.setObjectName("exit_sn")
self.exit_sn.clicked.connect(self.exit_the_app)
self.timer_sn = QtWidgets.QLCDNumber(self.page_sn)
self.timer_sn.setGeometry(QtCore.QRect(550, 30, 151, 31))
self.timer_sn.setObjectName("timer_sn")
self.hide_timer_sn = QtWidgets.QCheckBox(self.page_sn)
self.hide_timer_sn.setGeometry(QtCore.QRect(610, 80, 85, 21))
self.hide_timer_sn.setStyleSheet("color: rgb(255, 255, 255);")
self.hide_timer_sn.setObjectName("hide_timer_sn")
self.hide_timer_sn.stateChanged.connect(self.hide_timer)
self.num_disp_sn = QtWidgets.QTextBrowser(self.page_sn)
self.num_disp_sn.setGeometry(QtCore.QRect(30, 80, 181, 331))
self.num_disp_sn.setObjectName("num_disp_sn")
self.num_disp_sn.setVisible(False)
self.num_disp_sn.setStyleSheet(
'font: 13.5pt "Sans Serif";\n color: rgb(255, 255, 255);'
)
self.speaker_icon_sn = QtWidgets.QLabel(self.page_sn)
self.speaker_icon_sn.setGeometry(QtCore.QRect(350, 120, 161, 141))
self.speaker_icon_sn.setText("")
self.speaker_icon_sn.setObjectName("speaker_icon_label")
pixmap = QtGui.QPixmap(":/dat/img/speaker_icon.png")
self.speaker_icon_sn.setPixmap(
pixmap.scaled(150, 300, QtCore.Qt.KeepAspectRatio)
)
self.speaker_icon_sn.setVisible(False)
self.stacked_windows.addWidget(self.page_sn)
# ___________________BINARY NUMBERS _________________________
self.page_bn = QtWidgets.QWidget()
self.page_bn.setObjectName("page_bn")
self.prev_bn = QtWidgets.QPushButton(self.page_bn)
self.prev_bn.setGeometry(QtCore.QRect(237, 388, 80, 23))
self.prev_bn.setStyleSheet("color: rgb(255, 255, 255);")
self.prev_bn.setObjectName("prev_bn")
self.prev_bn.setDisabled(True)
self.prev_bn.setVisible(False)
self.prev_bn.clicked.connect(self.click_prev_bn)
self.quit_bn = QtWidgets.QPushButton(self.page_bn)
self.quit_bn.setGeometry(QtCore.QRect(217, 58, 131, 23))
self.quit_bn.setStyleSheet("color: rgb(255, 255, 255);")
self.quit_bn.setObjectName("quit_bn")
self.quit_bn.clicked.connect(self.return_to_main_menu)
self.checkbox_bn = QtWidgets.QCheckBox(self.page_bn)
self.checkbox_bn.setGeometry(QtCore.QRect(627, 98, 85, 21))
self.checkbox_bn.setStyleSheet("color: rgb(255, 255, 255);")
self.checkbox_bn.setObjectName("checkbox_bn")
self.checkbox_bn.stateChanged.connect(self.hide_timer)
self.begin_bn = QtWidgets.QPushButton(self.page_bn)
self.begin_bn.setGeometry(QtCore.QRect(37, 58, 80, 23))
self.begin_bn.setStyleSheet("color: rgb(255, 255, 255);")
self.begin_bn.setObjectName("begin_bn")
self.begin_bn.clicked.connect(self.applet_bn)
self.page_no_bn = QtWidgets.QLabel(self.page_bn)
self.page_no_bn.setGeometry(QtCore.QRect(337, 388, 34, 21))
self.page_no_bn.setObjectName("page_no_bn")
self.page_no_bn.setStyleSheet("color: rgb(255, 255, 255);")
self.disp_panel_bn = QtWidgets.QTextBrowser(self.page_bn)
self.disp_panel_bn.setGeometry(QtCore.QRect(97, 108, 511, 251))
self.disp_panel_bn.setObjectName("disp_panel_bn")
self.disp_panel_bn.setStyleSheet(
'font: 13.5pt "Sans Serif";\n' "color: rgb(255, 255, 255);"
)
self.button_exit_bn = QtWidgets.QPushButton(self.page_bn)
self.button_exit_bn.setGeometry(QtCore.QRect(357, 58, 80, 23))
self.button_exit_bn.setStyleSheet("color: rgb(255, 255, 255);")
self.button_exit_bn.setObjectName("button_exit_bn")
self.button_exit_bn.clicked.connect(self.exit_the_app)
self.recall_bn = QtWidgets.QPushButton(self.page_bn)
self.recall_bn.setGeometry(QtCore.QRect(127, 58, 80, 23))
self.recall_bn.setStyleSheet("color: rgb(255, 255, 255);")
self.recall_bn.setObjectName("recall_bn")
self.recall_bn.setVisible(False)
self.recall_bn.clicked.connect(self.recall_bn_fn)
self.next_bn = QtWidgets.QPushButton(self.page_bn)
self.next_bn.setGeometry(QtCore.QRect(387, 388, 80, 23))
self.next_bn.setStyleSheet("color: rgb(255, 255, 255);")
self.next_bn.setObjectName("next_bn")
self.next_bn.setDisabled(True)
self.next_bn.setVisible(False)
self.next_bn.clicked.connect(self.click_next_bn)
self.timer_bn = QtWidgets.QLCDNumber(self.page_bn)
self.timer_bn.setGeometry(QtCore.QRect(570, 40, 131, 41))
self.timer_bn.setObjectName("timer_bn")
self.show_next_digit_bn = QtWidgets.QPushButton(self.page_bn)
self.show_next_digit_bn.setGeometry(QtCore.QRect(660, 160, 31, 31))
self.show_next_digit_bn.setStyleSheet("color: rgb(255, 255, 255);")
self.show_next_digit_bn.setObjectName("show_next_digit")
self.show_next_digit_bn.setVisible(False)
self.show_next_digit_bn.clicked.connect(self.show_next_digit_bn_fn)
self.stacked_windows.addWidget(self.page_bn)
# ========================================================================================
main_window.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(main_window)
self.menubar.setGeometry(QtCore.QRect(0, 0, 739, 20))
self.menubar.setObjectName("menubar")
main_window.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(main_window)
self.statusbar.setObjectName("statusbar")
main_window.setStatusBar(self.statusbar)
self.retranslateUi(main_window)
QtCore.QMetaObject.connectSlotsByName(main_window)
# ========================================================================================
def pause_action(self):
if self.running_applet:
print("Pausing the applet")
if self.app_no == 1:
self.pause_sc.setText(self._translate("main_window", "unpause"))
self.running_applet = not self.running_applet
else:
print("Unpausing the applet")
if self.app_no == 1:
self.pause_sc.setText(self._translate("main_window", "pause"))
self.running_applet = not self.running_applet
self.start_watch()
if self.app_no == 1:
self.update_image()
def exit_the_app(self):
print("Exiting the application")
exit()
def open_window_speed_cards(self):
print("speed_cards")
self.stacked_windows.setCurrentIndex(1)
def open_window_5min_nums(self):
print("5min")
self.stacked_windows.setCurrentIndex(2)
def open_window_15min_words(self):
print("15min")
self.stacked_windows.setCurrentIndex(3)
def open_window_sn(self):
print("spoken numbers")
self.stacked_windows.setCurrentIndex(4)
def open_window_bn(self):
print("spoken_numbers")
self.stacked_windows.setCurrentIndex(5)
def return_to_main_menu(self):
self.watch_reset()
self.recall_mode = False
self.counter = 0
if self.app_no == 1:
pixmap = QtGui.QPixmap(":/dat/sc/back.png")
self.card_image.setPixmap(
pixmap.scaled(150, 300, QtCore.Qt.KeepAspectRatio)
)
if self.app_no == 2:
self.disp_panel_fmn.setPlainText(self._translate("main_window", ""))
self.page_no_fmn.setText(
self._translate("main_window", "{}/50".format(self.counter + 1))
)
self.next_fmn.setDisabled(True)
self.prev_fmn.setDisabled(True)
self.begin_fmn.setText(self._translate("main_window", "begin"))
self.show_next_digit_fmn.setVisible(False)
if self.app_no == 3:
self.table_fmw.setVisible(False)
self.recall_fmw.setVisible(False)
self.begin_fmw.setText(self._translate("main_window", "begin"))
if self.app_no == 4:
self.speaker_icon_sn.setVisible(False)
self.num_disp_sn.setVisible(False)
self.recall_sn.setVisible(False)
if self.app_no == 5:
self.disp_panel_bn.setPlainText(self._translate("main_window", ""))
self.page_no_bn.setText(
self._translate("main_window", "{}/50".format(self.counter + 1))
)
self.next_bn.setDisabled(True)
self.prev_bn.setDisabled(True)
self.begin_bn.setText(self._translate("main_window", "begin"))
self.show_next_digit_bn.setVisible(False)
self.app_no = 0
self.running_applet = False
print("returning to the main menu")
self.stacked_windows.setCurrentIndex(0)
def update_image(self):
if self.counter < 52:
print("counter = " + str(self.counter))
pic_temp = str(self.images[self.counter])
pixmap = QtGui.QPixmap(pic_temp)
print(str(self.images[self.counter]))
self.card_image.setPixmap(
pixmap.scaled(150, 300, QtCore.Qt.KeepAspectRatio)
)
self.card_image.setObjectName("card_image")
self.counter += 1
self.card_no_sc.setText(
self._translate("main_window", "{}/52".format(str(self.counter)))
)
if not self.checkBox.isChecked() and self.running_applet:
QtCore.QTimer.singleShot(self.time_step * 1000, self.update_image)
# elif self.running_applet == False:
# self.counter += 1
# update_image(self)
# self.running_applet = False
# self.stop_watch()
# self.begin_sc.setText(self._translate("main_window","begin"))
# self.pause_sc.setVisible(False)
# self.counter = 0
def applet_sc(self):
self.app_no = 1
self.time_step = self.doubleSpinBox.value()
if not self.running_applet:  # clicked when 'begin' visible, applet not running
self.begin_sc.setText(self._translate("main_window", "stop"))
self.pause_sc.setVisible(True)
self.time_delay_label.setVisible(True)
self.doubleSpinBox.setVisible(True)
self.start_watch()
self.recall_sc.setVisible(False)
ranks = [str(n) for n in range(2, 11)] + list("JQKA")
suits = ["S", "D", "C", "H"]
deck = [rank + suit for suit in suits for rank in ranks]
cards = random.sample(deck, 52)
img_path = ":/dat/sc/"
self.images = [img_path + card + ".png" for card in cards]
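# Illustrative result: the deck holds 52 strings such as "2S", "10D" and
# "KH"; random.sample produces a full shuffle of it, and every card maps
# to a resource path such as ":/dat/sc/KH.png".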
self.counter = 0
self.running_applet = True
self.update_image()
else: # clicked when 'stop' visible, applet running
self.begin_sc.setText(self._translate("main_window", "begin"))
self.pause_sc.setVisible(False)
self.stop_watch()
self.recall_sc.setVisible(True)
self.running_applet = False
def recall_sc_fn(self):
print("speedcards recall")
self.counter = 0
pixmap = QtGui.QPixmap(":/dat/sc/back.png")
self.card_image.setPixmap(pixmap.scaled(150, 300, QtCore.Qt.KeepAspectRatio))
self.card_no_sc.setText(self._translate("main_window", "--/52"))
self.recall_sc.setVisible(False)
self.time_delay_label.setVisible(False)
self.doubleSpinBox.setVisible(False)
self.next_sc.setVisible(True)
self.prev_sc.setVisible(True)
self.begin_sc.setText(self._translate("main_window", "begin"))
self.watch_reset()
self.checkBox.setChecked(True)
def showLCD(self):
    # str(timedelta) omits the fractional part for whole seconds, so append
    # ".000" and truncate to the 11 characters shown on the LCD
    text = (str(datetime.timedelta(milliseconds=self.watch_counter)) + ".000")[:11]
    displays = {
        1: self.lcdNumber,
        2: self.timer_fmn,
        3: self.timer_fmw,
        4: self.timer_sn,
        5: self.timer_bn,
    }
    display = displays.get(self.app_no)
    if display is not None:
        display.setDigitCount(11)
        display.display("0:00:00.000" if self.is_watch_reset else text)
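# Example of the raw value being formatted above: with watch_counter == 61500,
# str(datetime.timedelta(milliseconds=61500)) == "0:01:01.500000".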
def run_watch(self):
self.watch_counter += 100
self.showLCD()
def start_watch(self):
print("started timer")
self.watch_timer.start()
self.is_watch_reset = False
def stop_watch(self):
self.watch_timer.stop()
self.watch_counter = 0
def watch_reset(self):
self.watch_timer.stop()
self.watch_counter = 0
self.is_watch_reset = True
self.showLCD()
def display_num_matrix(self):
self.matr2str = "\n".join(
" ".join("%d" % x for x in y) for y in self.num_matrix[self.counter]
)
self.disp_panel_fmn.setPlainText(self._translate("main_window", self.matr2str))
def show_next_digit_fmn_fn(self):
    self.x_count += 1
    if self.x_count * self.y_count > 150:
        self.click_next()
    if self.x_count == 16:  # TODO: the row width should not be hardcoded
        self.x_count = 1
        self.y_count += 1
        self.recall_str += "\n "
    else:
        self.recall_str += " "
    self.recall_str += str(
        self.num_matrix[self.counter][self.y_count - 1][self.x_count - 1]
    )
    self.disp_panel_fmn.setPlainText(
        self._translate("main_window", self.recall_str)
    )
def click_next(self):
if self.counter >= 49:
self.next_fmn.setDisabled(True)
self.prev_fmn.setDisabled(False)
else:
self.next_fmn.setDisabled(False)
self.prev_fmn.setDisabled(False)
self.counter += 1
self.page_no_fmn.setText(
self._translate("main_window", "{}/50".format(self.counter + 1))
)
if self.recall_mode:
self.x_count = 0
self.y_count = 1
self.recall_str = ""
self.disp_panel_fmn.setPlainText(self._translate("main_window", self.recall_str))
else:
self.display_num_matrix()
def click_prev(self):
if self.counter <= 0:
self.prev_fmn.setDisabled(True)
self.next_fmn.setDisabled(False)
else:
self.prev_fmn.setDisabled(False)
self.next_fmn.setDisabled(False)
self.counter -= 1
self.page_no_fmn.setText(
self._translate("main_window", "{}/50".format(self.counter + 1))
)
if self.recall_mode:
self.x_count = 0
self.y_count = 1
self.recall_str = ""
self.disp_panel_fmn.setPlainText(self._translate("main_window", self.recall_str))
else:
self.display_num_matrix()
def recall_fmn_fn(self):
self.watch_reset()
self.counter = 0
self.recall_mode = True
self.recall_str = ""
self.x_count = 0
self.y_count = 1
self.recall_fmn.setVisible(False)
self.disp_panel_fmn.setVisible(True)
self.disp_panel_fmn.setPlainText(self._translate("main_window", self.recall_str))
self.show_next_digit_fmn.setVisible(True)
self.next_fmn.setVisible(True)
self.next_fmn.setDisabled(False)
self.prev_fmn.setVisible(True)
self.prev_fmn.setDisabled(False)
self.page_no_fmn.setVisible(True)
self.page_no_fmn.setText(
self._translate("main_window", "{}/50".format(self.counter + 1))
)
def applet_fmn(self):
self.app_no = 2
if self.running_applet:
print("clicked stop:applet_fmn")
self.begin_fmn.setText(self._translate("main_window", "begin"))
self.running_applet = False
self.stop_watch()
self.next_fmn.setVisible(False)
self.prev_fmn.setVisible(False)
self.disp_panel_fmn.setVisible(False)
self.recall_fmn.setVisible(True)
self.page_no_fmn.setVisible(False)
else:
print("clicked begin:applet_fmn")
self.begin_fmn.setText(self._translate("main_window", "stop"))
self.counter = 0
self.running_applet = True
self.next_fmn.setVisible(True)
self.prev_fmn.setVisible(True)
self.next_fmn.setDisabled(False)
self.prev_fmn.setDisabled(False)
self.disp_panel_fmn.setVisible(True)
self.recall_fmn.setVisible(False)
self.page_no_fmn.setVisible(True)
self.num_matrix = np.random.randint(
10, size=(50, 10, 15)
)  # 50 pages of random digits, each a 10x15 matrix
self.start_watch()
self.page_no_fmn.setText(
self._translate("main_window", "{}/50".format(self.counter + 1))
)
self.display_num_matrix()
def manual_mode(self):
self.next_sc.setVisible(self.checkBox.isChecked())
self.prev_sc.setVisible(self.checkBox.isChecked())
if self.checkBox.isChecked():
    # effectively disable the auto-advance timer while in manual mode
    self.time_step = 1704351
else:
    self.time_step = self.doubleSpinBox.value()
def manual_next(self):
if self.counter < 52:
self.update_image()
def manual_prev(self):
self.counter -= 2
if self.counter < 0:
self.counter = 0
self.update_image()
def display_num_matrix_bn(self):
self.matr2str = "\n".join(
" ".join("%d" % x for x in y) for y in self.num_matrix_bn[self.counter]
)
self.disp_panel_bn.setPlainText(self._translate("main_window", self.matr2str))
def show_next_digit_bn_fn(self):
    self.x_count += 1
    if self.x_count * self.y_count > 150:
        self.click_next_bn()
    if self.x_count == 16:  # TODO: the row width should not be hardcoded
        self.x_count = 1
        self.y_count += 1
        self.recall_str += "\n "
    else:
        self.recall_str += " "
    self.recall_str += str(
        self.num_matrix_bn[self.counter][self.y_count - 1][self.x_count - 1]
    )
    self.disp_panel_bn.setPlainText(
        self._translate("main_window", self.recall_str)
    )
def click_next_bn(self):
if self.counter >= 49:
self.next_bn.setDisabled(True)
self.prev_bn.setDisabled(False)
else:
self.next_bn.setDisabled(False)
self.prev_bn.setDisabled(False)
self.counter += 1
self.page_no_bn.setText(
self._translate("main_window", "{}/50".format(self.counter + 1))
)
if self.recall_mode:
self.x_count = 0
self.y_count = 1
self.recall_str = ""
self.disp_panel_bn.setPlainText(self._translate("main_window", self.recall_str))
else:
self.display_num_matrix_bn()
def click_prev_bn(self):
if self.counter <= 0:
self.prev_bn.setDisabled(True)
self.next_bn.setDisabled(False)
else:
self.prev_bn.setDisabled(False)
self.next_bn.setDisabled(False)
self.counter -= 1
self.page_no_bn.setText(
self._translate("main_window", "{}/50".format(self.counter + 1))
)
if self.recall_mode:
self.x_count = 0
self.y_count = 1
self.recall_str = ""
self.disp_panel_bn.setPlainText(self._translate("main_window", self.recall_str))
else:
self.display_num_matrix_bn()
def recall_bn_fn(self):
self.watch_reset()
self.counter = 0
self.recall_mode = True
self.recall_str = ""
self.disp_panel_bn.setPlainText(self._translate("main_window", self.recall_str))
self.x_count = 0
self.y_count = 1
self.recall_bn.setVisible(False)
self.show_next_digit_bn.setVisible(True)
self.disp_panel_bn.setVisible(True)
self.next_bn.setVisible(True)
self.prev_bn.setVisible(True)
self.page_no_bn.setVisible(True)
self.page_no_bn.setText(
self._translate("main_window", "{}/50".format(self.counter + 1))
)
def applet_bn(self):
self.app_no = 5
print("clicked begin applet bn")
if self.running_applet:
self.begin_bn.setText(self._translate("main_window", "begin"))
self.running_applet = False
self.stop_watch()
self.next_bn.setVisible(False)
self.prev_bn.setVisible(False)
self.disp_panel_bn.setVisible(False)
self.recall_bn.setVisible(True)
self.page_no_bn.setVisible(False)
else:
self.begin_bn.setText(self._translate("main_window", "stop"))
self.counter = 0
self.running_applet = True
self.next_bn.setVisible(True)
self.prev_bn.setVisible(True)
self.next_bn.setDisabled(False)
self.prev_bn.setDisabled(False)
self.recall_bn.setVisible(False)
self.disp_panel_bn.setVisible(True)
self.page_no_bn.setVisible(True)
self.num_matrix_bn = np.random.randint(
2, size=(50, 10, 15)
)  # 50 pages of random binary digits, each a 10x15 matrix
self.start_watch()
self.page_no_bn.setText(
self._translate("main_window", "{}/50".format(self.counter + 1))
)
self.display_num_matrix_bn()
def hide_timer(self):
    timers = {
        1: (self.lcdNumber, self.hide_timer_sc),
        2: (self.timer_fmn, self.hide_timer_fmn),
        3: (self.timer_fmw, self.hide_timer_fmw),
        4: (self.timer_sn, self.hide_timer_sn),
        5: (self.timer_bn, self.checkbox_bn),
    }
    pair = timers.get(self.app_no)
    if pair is not None:
        display, checkbox = pair
        display.setVisible(not checkbox.isChecked())
def update_sn(self):
print("entered update_sn")
if self.counter < 1000 and self.running_applet:
print("counter = " + str(self.counter))
print(":/dat/sn/" + str(self.num_list_sn[self.counter]) + ".wav")
QSound.play(":/dat/sn/" + str(self.num_list_sn[self.counter]) + ".wav")
self.counter += 1
QtCore.QTimer.singleShot(1000, self.update_sn)
def recall_sn_fn(self):
self.watch_reset()
self.recall_sn.setVisible(False)
self.num_disp_sn.setVisible(True)
ls2str = ""
for i in range(self.counter):
ls2str = ls2str + str(i + 1) + ") " + str(self.num_list_sn[i]) + "\n"
self.num_disp_sn.setPlainText(self._translate("main_window", ls2str))
def applet_sn(self):
self.app_no = 4
print("clicked begin applet sn")
if not self.running_applet: # when begin clicked while not running
self.begin_sn.setText(self._translate("main_window", "stop"))
self.counter = 0
self.running_applet = True
self.num_list_sn = np.random.randint(10, size=(1000))
self.start_watch()
self.recall_sn.setVisible(False)
self.num_disp_sn.setVisible(False)
self.speaker_icon_sn.setVisible(True)
self.update_sn()
else: # when stop clicked while running
self.begin_sn.setText(self._translate("main_window", "begin"))
self.running_applet = False
self.stop_watch()
self.speaker_icon_sn.setVisible(False)
self.num_disp_sn.setVisible(False)
self.recall_sn.setVisible(True)
def populate_fmw(self):
    # Read the word bank through the Qt resource system: the builtin open()
    # cannot read ":/..." resource paths, so use QFile with QTextStream.
    qfile = QtCore.QFile(":/dat/wordbank.txt")
    qfile.open(QtCore.QFile.ReadOnly | QtCore.QFile.Text)
    stream = QtCore.QTextStream(qfile)
    word_list_full = []
    while not stream.atEnd():
        word_list_full.append(stream.readLine().strip())
    qfile.close()
    # randomly select 500 words and arrange them as a 100x5 table
    word_list = random.choices(word_list_full, k=500)
    word_matrix = [[word_list[5 * j + i] for i in range(5)] for j in range(100)]
for value in word_matrix:
row = []
for item in value:
cell = QtGui.QStandardItem(str(item))
row.append(cell)
self.model.appendRow(row)
def applet_fmw(self):
self.app_no = 3
if not self.running_applet:
print("clicked begin:applet_fmw")
self.running_applet = True
self.table_fmw.setVisible(True)
self.begin_fmw.setVisible(True)
self.begin_fmw.setText(self._translate("main_window", "stop"))
self.recall_fmw.setVisible(False)
self.counter = 0
for idx in range(100):
self.model.removeRow(99 - idx)
self.start_watch()
self.populate_fmw()
else:
print("clicked stop:applet_fmw")
self.running_applet = False
self.table_fmw.setVisible(False)
self.begin_fmw.setVisible(True)
self.begin_fmw.setText(self._translate("main_window", "restart"))
self.recall_fmw.setVisible(True)
self.stop_watch()
def recall_fmw_fn(self, signal):
def double_click_event():
    # placeholder: revealing the double-clicked row is not implemented yet
    print("double click on words table")
self.watch_reset()
print("recall_fmw")
self.begin_fmw.setVisible(True)
self.begin_fmw.setText("restart")
self.table_fmw.setVisible(True)
self.recall_fmw.setVisible(False)
# for i in range(5):
# self.table_fmw.hideColumn(i)
self.table_fmw.setShowGrid(True)
self.table_fmw.doubleClicked.connect(double_click_event)
# ========================================================================================
def retranslateUi(self, main_window):
self._translate = QtCore.QCoreApplication.translate
main_window.setWindowTitle(self._translate("main_window", "MainWindow"))
self.app_descr.setHtml(
self._translate(
"main_window",
'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">\n'
'<html><head><meta name="qrichtext" content="1" /><style type="text/css">\n'
"p, li { white-space: pre-wrap; }\n"
"</style></head><body style=\" font-family:'Sans Serif'; font-size:20pt; font-weight:400; font-style:normal;\">\n"
'<p align="center" style=" margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;"><span style=" font-size:12pt; color:#ffffff;">This app is aimed at users who wish to hone their memorizing skills for events like XMT and WMC. Choose from the following list to begin your training session:</span></p></body></html>',
)
)
self.button_fmw.setText(
self._translate("main_window", "15-minute random list of words")
)
self.button_exit.setText(self._translate("main_window", "Exit"))
self.speed_cards_button.setText(self._translate("main_window", "Speed Cards"))
self.app_title.setText(self._translate("main_window", "MnemoPy Arena"))
self.button_fmn.setText(self._translate("main_window", "5-minute numbers"))
self.time_delay_label.setText(
self._translate("main_window", "Enter time delay :")
)
self.begin_sc.setText(self._translate("main_window", "begin"))
self.pause_sc.setText(self._translate("main_window", "pause"))
self.hide_timer_sc.setText(self._translate("main_window", "hide timer"))
self.hide_timer_fmw.setText(self._translate("main_window", "hide timer"))
self.recall_sc.setText(self._translate("main_window", "recall"))
self.button_quit.setText(self._translate("main_window", "quit to main menu"))
self.exit_sc.setToolTip(
self._translate(
"main_window",
"<html><head/><body><p>exit the program</p></body></html>",
)
)
self.exit_sc.setWhatsThis(
self._translate(
"main_window",
"<html><head/><body><p>exit the program</p></body></html>",
)
)
self.exit_sc.setText(self._translate("main_window", "exit"))
self.quit_fmn.setText(self._translate("main_window", "quit to main menu"))
self.begin_fmn.setText(self._translate("main_window", "begin"))
self.exit_fmn.setText(self._translate("main_window", "exit"))
self.prev_fmn.setText(self._translate("main_window", "prev"))
self.disp_panel_fmn.setPlainText(self._translate("main_window", ""))
self.next_fmn.setText(self._translate("main_window", "next"))
self.checkBox.setText(self._translate("main_window", "manual"))
self.recall_fmn.setText(self._translate("main_window", "recall"))
self.hide_timer_fmn.setText(self._translate("main_window", "hide clock"))
self.next_sc.setText(self._translate("main_window", "next"))
self.prev_sc.setText(self._translate("main_window", "prev"))
self.button_sn.setText(self._translate("main_window", "Spoken numbers"))
self.button_bn.setText(self._translate("main_window", "Binary numbers"))
self.begin_fmw.setText(self._translate("main_window", "begin"))
self.recall_fmw.setText(self._translate("main_window", "recall"))
self.quit_fmw.setText(self._translate("main_window", "quit to main menu"))
self.exit_fmw.setText(self._translate("main_window", "exit"))
self.begin_sn.setText(self._translate("main_window", "begin"))
self.recall_sn.setText(self._translate("main_window", "recall"))
self.quit_sn.setText(self._translate("main_window", "quit to main menu"))
self.exit_sn.setText(self._translate("main_window", "exit"))
self.hide_timer_sn.setText(self._translate("main_window", "hide timer"))
self.prev_bn.setText(self._translate("main_window", "prev"))
self.quit_bn.setText(self._translate("main_window", "quit to main menu"))
self.checkbox_bn.setText(self._translate("main_window", "hide clock"))
self.begin_bn.setText(self._translate("main_window", "begin"))
self.button_exit_bn.setText(self._translate("main_window", "exit"))
self.recall_bn.setText(self._translate("main_window", "recall"))
self.next_bn.setText(self._translate("main_window", "next"))
self.card_no_sc.setText(self._translate("main_window", "--/52"))
self.show_next_digit_fmn.setText(self._translate("main_window", ">"))
self.show_next_digit_bn.setText(self._translate("main_window", ">"))
| 46.657469 | 383 | 0.633705 | 6,197 | 49,037 | 4.745683 | 0.074552 | 0.036383 | 0.059438 | 0.071169 | 0.622088 | 0.529226 | 0.452549 | 0.363596 | 0.261825 | 0.221701 | 0 | 0.03879 | 0.239799 | 49,037 | 1,050 | 384 | 46.701905 | 0.750121 | 0.039236 | 0 | 0.353879 | 0 | 0.003188 | 0.108271 | 0.00391 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043571 | false | 0 | 0.007439 | 0 | 0.052072 | 0.025505 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
524514da290d2935ceb86dac13afcf7d933b0596 | 25,056 | py | Python | app/console_interface.py | igorxxl8/Calistra | ced32a53f42a8d7a2309a1eb15acef42418a3ecb | [
"MIT"
] | null | null | null | app/console_interface.py | igorxxl8/Calistra | ced32a53f42a8d7a2309a1eb15acef42418a3ecb | [
"MIT"
] | null | null | null | app/console_interface.py | igorxxl8/Calistra | ced32a53f42a8d7a2309a1eb15acef42418a3ecb | [
"MIT"
] | null | null | null | import os
import sys
import uuid
from app.configuration import Files, Configuration
from app.command_parser import get_parsers
from app.formatted_argparse import FormattedParser
from app.help_functions import *
from app.parser_args import ParserArgs
from app.printer import Printer
from app.user_wrapper import (
UserWrapperStorage,
UserWrapperController,
LoginError,
LogoutError,
SaveUserError
)
from calistra_lib.exceptions.base_exception import AppError
from calistra_lib.interface import Interface
from calistra_lib.logger import (
log_cli,
get_logger,
set_logger_config_file,
set_logger_enabled
)
from calistra_lib.plan.json_plan_storage import JsonPlanStorage
from calistra_lib.plan.plan_controller import PlanController
from calistra_lib.queue.json_queue_storage import JsonQueueStorage
from calistra_lib.queue.queue_controller import QueueController
from calistra_lib.task.json_task_storage import JsonTaskStorage
from calistra_lib.task.task import TaskStatus
from calistra_lib.task.task_controller import TaskController
from calistra_lib.user.json_user_storage import JsonUserStorage
from calistra_lib.user.user_controller import UserController
ERROR_CODE = 1
TASK_KEY_BYTES = 8
QUEUE_KEY_BYTES = 4
PLAN_KEY_BYTES = 6
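# os.urandom(n).hex() yields a 2*n-character hex string, so with the byte
# counts above task keys are 16 hex characters long, queue keys 8 and
# plan keys 12.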
def check_program_data_files(folder, files):
if not os.path.exists(folder):
os.mkdir(folder)
for file in files:
if not os.path.exists(file[0]):
with open(file[0], 'w') as file_obj:
file_obj.write(file[1])
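# Minimal usage sketch (hypothetical folder and file names): each entry
# pairs a file path with its default content, written only when the file
# does not exist yet.
#   check_program_data_files('calistra_data', [
#       ('calistra_data/users.json', '[]'),
#       ('calistra_data/tasks.json', '[]'),
#   ])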
def run() -> int:
"""
Start program
:return: int - exit code
"""
# check settings
Configuration.apply_settings()
set_logger_enabled(Configuration.is_logger_enabled())
# check for files and create it if they missed
check_program_data_files(Files.FOLDER, Files.FILES)
# set logging configuration
if os.path.exists(Files.LOG_CONFIG):
set_logger_config_file(Files.LOG_CONFIG)
else:
set_logger_config_file(Files.DEFAULT_LOG_CONFIG_FILE)
# create the CLI logger
cli_logger = get_logger()
cli_logger.info('Start program.')
# load data from storage and create entities controllers
users_wrapper_storage = UserWrapperStorage(Files.AUTH_FILE, Files.ONLINE)
queue_storage = JsonQueueStorage(Files.QUEUES_FILE)
queue_controller = QueueController(queue_storage)
task_storage = JsonTaskStorage(Files.TASKS_FILE)
task_controller = TaskController(task_storage)
user_storage = JsonUserStorage(Files.USERS_FILE)
user_controller = UserController(user_storage)
plan_storage = JsonPlanStorage(Files.PLANS_FILE)
plan_controller = PlanController(plan_storage)
# init library interface
library = Interface(users_wrapper_storage.online_user, queue_controller,
user_controller, task_controller, plan_controller)
# update reminders, deadlines, queues and other entities
library.update_all()
_show_new_messages(library)
parser = get_parsers()
args = vars(parser.parse_args())
cli_logger.debug('Console args: {}'.format(args))
# check that target is defined
target = args.pop(ParserArgs.TARGET.name)
if target is None:
parser.error('target is required', need_to_exit=False)
return ERROR_CODE
if target == ParserArgs.UPDATE.name:
return 0
# check that action is defined
action = args.pop(ParserArgs.ACTION)
if action is None:
FormattedParser.active_sub_parser.error(
'action is required', need_to_exit=False)
return ERROR_CODE
# check that target is config and do action with it
if target == ParserArgs.CONFIG.name:
if action == ParserArgs.SET:
return _set_configuration(args)
if action == ParserArgs.RESET:
conf_type_obj = args.pop(ParserArgs.CONFIG_TYPE.name)
return _reset_configuration(conf_type_obj)
# check that target is user and do action with it
if target == ParserArgs.USER.name:
if action == ParserArgs.ADD:
return _add_user(
nick=args.pop(ParserArgs.NICKNAME.name),
password=args.pop(ParserArgs.PASSWORD.name),
users_storage=users_wrapper_storage,
library=library
)
if action == ParserArgs.LOGIN.name:
return _login(
nick=args.pop(ParserArgs.NICKNAME.name),
password=args.pop(ParserArgs.PASSWORD.name),
users_storage=users_wrapper_storage,
library=library
)
if action == ParserArgs.LOGOUT.name:
return _logout(users_wrapper_storage)
if action == ParserArgs.SHOW:
long = args.pop(ParserArgs.LONG.dest)
sortby = args.pop(ParserArgs.SORT_BY.dest)
return _show_user_tasks(library, long, sortby)
# check that target is queue and do action with it
if target == ParserArgs.QUEUE.name:
if action == ParserArgs.ADD:
return _add_queue(
name=args.pop(ParserArgs.QUEUE_NAME.name).strip(' '),
library=library
)
if action == ParserArgs.DELETE:
return _del_queue(
key=args.pop(ParserArgs.QUEUE_NAME.name).strip(' '),
recursive=args.pop(ParserArgs.RECURSIVE.dest),
library=library
)
if action == ParserArgs.SET:
key = args.pop(ParserArgs.KEY.name)
new_name = args.pop(ParserArgs.NEW_NAME.dest)
if new_name is None:
parser.active_sub_parser.help()
return 0
return _edit_queue(
key=key,
new_name=new_name,
library=library
)
if action == ParserArgs.SHOW:
return _show_queue_tasks(
key=args.pop(ParserArgs.KEY.name),
opened=args.pop(ParserArgs.OPEN_TASKS.dest),
archive=args.pop(ParserArgs.SOLVED_TASKS.dest),
failed=args.pop(ParserArgs.FAILED_TASKS.dest),
long=args.pop(ParserArgs.LONG.dest),
library=library,
sortby=args.pop(ParserArgs.SORT_BY.dest)
)
if action == ParserArgs.FIND:
name = args.pop(ParserArgs.QUEUE_NAME.name)
return _find_queues(name, library)
# check that target is task and do action with it
if target == ParserArgs.TASK.name:
if action == ParserArgs.ADD:
return _add_task(args, library)
if action == ParserArgs.SET:
return _edit_task(args, library)
if action == ParserArgs.DELETE:
return _del_task(args, library)
if action == ParserArgs.SHOW:
return _show_task(
args.pop(ParserArgs.KEY.name),
library,
args.pop(ParserArgs.LONG.dest)
)
if action == ParserArgs.FIND:
return _find_task(args, library)
if action == ParserArgs.ACTIVATE:
key = args.pop(ParserArgs.KEY.name)
return _activate_task(key, library)
# check that target is plan and do action with it
if target == ParserArgs.PLAN.name:
if action == ParserArgs.ADD:
name = args.pop(ParserArgs.PLAN_NAME.name)
period = args.pop(ParserArgs.PLAN_PERIOD.name)
activation_time = args.pop(ParserArgs.PLAN_ACTIVATION_TIME.name)
reminder = args.pop(ParserArgs.TASK_REMINDER.dest)
return _add_plan(name, activation_time, period, reminder, library)
if action == ParserArgs.SET:
key = args.pop(ParserArgs.KEY.name)
new_name = args.pop(ParserArgs.PLAN_NAME_OPTIONAL.dest)
period = args.pop(ParserArgs.PLAN_PERIOD_OPTIONAL.dest)
activation_time = args.pop(
ParserArgs.PLAN_ACTIVATION_TIME_OPTIONAL.dest)
reminder = args.pop(ParserArgs.TASK_REMINDER.dest)
return _edit_plan(key, new_name, period, activation_time,
reminder, library)
if action == ParserArgs.SHOW:
return _show_plans(library)
if action == ParserArgs.DELETE:
key = args.pop(ParserArgs.KEY.name)
return _delete_plan(key, library)
if target == ParserArgs.NOTIFICATIONS.name:
if action == ParserArgs.SHOW:
notifications = library.online_user.notifications
print('Notifications for user "{}":'.format(
library.online_user.nick)
)
if _show_messages(notifications):
print('Notifications not found!')
library.clear_new_messages()
if action == ParserArgs.DELETE:
_del_notifications(
library,
_all=args.pop(ParserArgs.ALL.dest),
old=args.pop(ParserArgs.OLD.dest)
)
# =================================================
# functions for work with program configuration   =
# =================================================
@log_cli
def _set_configuration(args):
"""
This method using for set configurations of program and logger
:param args: arguments from user console input
:return: exit code
"""
conf_type_obj = args.pop(ParserArgs.CONFIG_TYPE.name)
if conf_type_obj == ParserArgs.SETTINGS.name:
storage_path = args.pop(ParserArgs.STORAGE_PATH.dest)
try:
Configuration.edit_settings(storage_path)
except Exception as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Program settings edited...')
elif conf_type_obj == ParserArgs.LOGGER.name:
level = args.pop(ParserArgs.LOGGER_LEVEL.dest)
enabled = args.pop(ParserArgs.ENABLED_LOGGER.dest)
log_file_path = args.pop(ParserArgs.LOG_FILE_PATH.dest)
Configuration.edit_logger_configs(Files.LOG_CONFIG, level, enabled,
log_file_path)
print('Logger configs edited...')
return 0
@log_cli
def _reset_configuration(conf_type_obj):
if (conf_type_obj == ParserArgs.LOGGER.name and
os.path.exists(Files.LOG_CONFIG)):
os.remove(Files.LOG_CONFIG)
print('Logger configs reset...')
if (conf_type_obj == ParserArgs.SETTINGS.name and
os.path.exists(Files.SETTINGS)):
Files.set_default()
Configuration.reset_settings()
os.remove(Files.SETTINGS)
print('Program settings reset...')
return 0
# =================================================
# functions for work with user's account instance =
# =================================================
@log_cli
def _add_user(nick, password, users_storage, library: Interface):
"""
Function for creating a user. Calls the library interface to create the
corresponding user entry in the library
:param nick: user nick
:param password: user password
:param users_storage: place where store all users
:param library: library interface, join all library functions in one
interface
:return: int - error code
"""
try:
users_storage.add_user(nick, password)
except SaveUserError as e:
sys.stderr.write(str(e))
return ERROR_CODE
uid = uuid.uuid4().int
queue_key = os.urandom(QUEUE_KEY_BYTES).hex()
library.add_user(nick, uid, queue_key)
print('User "{}" successfully created!'.format(nick))
return 0
@log_cli
def _login(nick, password, users_storage, library) -> int:
"""
Function for login user in system
:param nick: user nick
:param password: user password
:param users_storage: place where store all users
:param library: library interface, join all library functions in one
interface
:return: error code or 0 in case success
"""
controller = UserWrapperController(users_storage)
try:
controller.login(nick, password)
except LoginError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('User "{}" now is online.'.format(nick))
library.set_online_user(nick)
_show_new_messages(library)
return 0
@log_cli
def _logout(users_storage) -> int:
"""
:param users_storage:
:return: error code or 0 in case success
"""
controller = UserWrapperController(users_storage)
try:
controller.logout()
except LogoutError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('All users now offline.')
return 0
@log_cli
def _show_new_messages(library) -> int:
if library.online_user is None:
return ERROR_CODE
new_messages = library.online_user.new_messages
if new_messages:
print('Reminders and new messages:')
_show_messages(new_messages)
library.clear_new_messages()
print(Printer.LINE)
return 0
@log_cli
def _show_messages(messages) -> int:
if messages:
reminders = []
for message in messages[:]: # type: str
if message.lower().startswith('reminder'):
reminders.append(message)
messages.remove(message)
Printer.print_reminders(reversed(reminders))
print()
Printer.print_notifications(reversed(messages))
return 0
return ERROR_CODE
@log_cli
def _del_notifications(library, _all, old) -> int:
try:
library.clear_notifications(old)
except ValueError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Notifications deleted')
return 0
@log_cli
def _show_user_tasks(library, long, sortby) -> int:
try:
print('User: "{}".'.format(library.get_online_user().nick))
queues = library.get_user_queues()
Printer.print_queues(queues)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
if sortby is None:
sortby = ParserArgs.TASK_PRIORITY.dest.lower()
author_tasks, responsible_tasks = library.get_user_tasks()
print('Tasks:')
author_tasks.sort(key=lambda x: x.__dict__[sortby], reverse=True)
responsible_tasks.sort(key=lambda x: x.__dict__[sortby], reverse=True)
Printer.print_tasks(author_tasks, "Author", long)
Printer.print_tasks(responsible_tasks, "Responsible", long)
return 0
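# Note: sorting above relies on x.__dict__[sortby], so the sort key
# supplied on the command line must match a task attribute name exactly
# (the priority attribute is used by default).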
# =================================================
# functions for work with queue instance =
# =================================================
@log_cli
def _add_queue(name, library):
key = os.urandom(QUEUE_KEY_BYTES).hex()
try:
added_queue = library.add_queue(name=name, key=key)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Queue "{}" added. It\'s key - {}'.format(added_queue.name, key))
return 0
@log_cli
def _del_queue(key, recursive, library):
try:
deleted_queue = library.remove_queue(
key=key,
recursive=recursive)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Queue "{}" deleted'.format(deleted_queue.name))
return 0
@log_cli
def _edit_queue(key, new_name, library):
try:
new_name = check_str_len(new_name)
except ValueError as e:
sys.stderr.write(str(e))
return ERROR_CODE
try:
library.edit_queue(key, new_name)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Queue {} now have new name "{}"'.format(key, new_name))
return 0
@log_cli
def _show_queue(library) -> int:
try:
queues = library.get_user_queues()
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
for queue in queues:
print('Queue name: "{}", key = {}'.format(queue.name, queue.key))
return 0
@log_cli
def _show_queue_tasks(key, library, opened, archive, failed, long, sortby):
def load_tasks(task_keys):
_tasks = []
for _key in task_keys:
task = library.get_task(key=_key)
_tasks.append(task)
_tasks.sort(key=lambda x: x.__dict__[sortby], reverse=True)
return _tasks
if not opened and not archive and not failed:
opened = True
try:
queue = library.get_queue(key)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
if sortby is None:
sortby = ParserArgs.TASK_PRIORITY.dest.lower()
print('Queue: "{}", key {}\nTasks:'.format(queue.name, queue.key))
if opened:
tasks = load_tasks(queue.opened_tasks)
Printer.print_tasks(tasks, TaskStatus.OPENED, long, Printer.CL_YELLOW)
if archive:
tasks = load_tasks(queue.solved_tasks)
Printer.print_tasks(tasks, TaskStatus.SOLVED, long, Printer.CL_BLUE)
if failed:
tasks = load_tasks(queue.failed_tasks)
Printer.print_tasks(tasks, TaskStatus.FAILED, long, Printer.CL_RED)
return 0
@log_cli
def _find_queues(name, library: Interface) -> int:
queues = library.find_queues(name)
print('Search:')
Printer.print_queues(queues, 'Results for "{}"'.format(name))
return 0
# =================================================
# functions for work with task instance =
# =================================================
@log_cli
def _add_task(args, library) -> int:
key = os.urandom(TASK_KEY_BYTES).hex()
queue_key = args.pop(ParserArgs.TASK_QUEUE.dest)
try:
name = args.pop(ParserArgs.TASK_NAME.name).strip(' ')
name = check_str_len(name)
description = args.pop(ParserArgs.TASK_DESCRIPTION.dest)
description = check_str_len(description)
related = args.pop(ParserArgs.TASK_RELATED.dest)
check_related_correctness(related)
responsible = args.pop(ParserArgs.TASK_RESPONSIBLE.dest)
responsible = check_responsible_correctness(responsible)
priority = args.pop(ParserArgs.TASK_PRIORITY.dest)
priority = check_priority_correctness(priority)
progress = args.pop(ParserArgs.TASK_PROGRESS.dest)
progress = check_progress_correctness(progress)
start = args.pop(ParserArgs.TASK_START.dest)
start = check_time_format(start, entity=ParserArgs.TASK.name)
deadline = args.pop(ParserArgs.TASK_DEADLINE.dest)
deadline = check_time_format(deadline, entity=ParserArgs.TASK.name)
check_terms_correctness(start, deadline)
tags = args.pop(ParserArgs.TASK_TAGS.dest)
tags = check_tags_correctness(tags)
reminder = args.pop(ParserArgs.TASK_REMINDER.dest)
reminder = check_reminder_format(reminder, entity=ParserArgs.TASK.name)
except ValueError as e:
sys.stderr.write(str(e))
return ERROR_CODE
try:
library.create_task(name, queue_key, description,
args.pop(ParserArgs.TASK_PARENT.dest), related,
responsible,
priority, progress, start, deadline, tags, reminder,
key
)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Task "{}" added. Its key - {}'.format(name, key))
return 0
@log_cli
def _edit_task(args, library) -> int:
key = args.pop(ParserArgs.KEY.name)
try:
name = args.pop(ParserArgs.NEW_NAME.dest)
name = check_str_len(name)
description = args.pop(ParserArgs.TASK_DESCRIPTION.dest)
description = check_str_len(description)
related = args.pop(ParserArgs.TASK_RELATED.dest)
check_related_correctness(related, action=ParserArgs.SET)
responsible = args.pop(ParserArgs.TASK_RESPONSIBLE.dest)
responsible = check_responsible_correctness(responsible,
action=ParserArgs.SET)
priority = args.pop(ParserArgs.TASK_PRIORITY.dest)
priority = check_priority_correctness(priority, action=ParserArgs.SET)
progress = args.pop(ParserArgs.TASK_PROGRESS.dest)
progress = check_progress_correctness(progress, action=ParserArgs.SET)
start = args.pop(ParserArgs.TASK_START.dest)
start = check_time_format(
start, entity=ParserArgs.TASK.name, action=ParserArgs.SET)
deadline = args.pop(ParserArgs.TASK_DEADLINE.dest)
deadline = check_time_format(
deadline, entity=ParserArgs.TASK.name, action=ParserArgs.SET)
check_terms_correctness(start, deadline)
tags = args.pop(ParserArgs.TASK_TAGS.dest)
tags = check_tags_correctness(tags, action=ParserArgs.SET)
reminder = args.pop(ParserArgs.TASK_REMINDER.dest)
reminder = check_reminder_format(reminder, entity=ParserArgs.TASK.name,
action=ParserArgs.SET)
status = args.pop(ParserArgs.TASK_STATUS.dest)
status = check_status_correctness(status, action=ParserArgs.SET)
except ValueError as e:
sys.stderr.write(str(e))
return ERROR_CODE
try:
library.edit_task(key, name, description,
args.pop(ParserArgs.TASK_PARENT.dest), related,
responsible, priority, progress, start, deadline,
tags, reminder, status)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Task with key "{}" edited'.format(key))
return 0
@log_cli
def _del_task(args, library) -> int:
try:
tasks = library.remove_task(
key=args.pop(ParserArgs.KEY.name),
recursive=args.pop(ParserArgs.RECURSIVE.dest)
)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
for task in tasks:
print('Task "{}" deleted'.format(task.name))
return 0
@log_cli
def _show_task(key, library, long) -> int:
try:
task = library.get_task(key)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Main task:')
if long:
Printer.print_task_fully(task)
else:
Printer.print_task_briefly(task)
sub_tasks = library.task_controller.get_sub_tasks(task)
if sub_tasks:
Printer.print_tasks(sub_tasks, "Sub tasks:")
return 0
@log_cli
def _find_task(task_params, library) -> int:
try:
tasks = library.find_task(task_params)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Search:')
Printer.print_tasks(tasks, 'Result:')
return 0
@log_cli
def _activate_task(key, library) -> int:
try:
task = library.activate_task(key)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Participation in task "{}" is confirmed!'.format(task.name))
return 0
# =================================================
# functions for work with plan instance =
# =================================================
@log_cli
def _add_plan(name, activation_time, period, reminder, library) -> int:
key = os.urandom(PLAN_KEY_BYTES).hex()
try:
check_period_format(period)
check_str_len(name)
activation_time = check_time_format(activation_time,
entity=ParserArgs.PLAN.name)
validate_activation_time(activation_time)
reminder = check_reminder_format(reminder, entity=ParserArgs.PLAN.name)
plan = library.add_plan(key, name, period, activation_time, reminder)
except ValueError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Plan "{}" successfully created. '
'Its key - {}'.format(plan.name, plan.key)
)
return 0
@log_cli
def _edit_plan(key, new_name, period, activation_time, reminder,
library) -> int:
try:
reminder = check_reminder_format(reminder, ParserArgs.PLAN.name,
ParserArgs.SET)
plan = library.edit_plan(key, new_name, period,
activation_time, reminder)
except ValueError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Plan "{}"({}) was successfully edited.'.format(plan.name, plan.key))
return 0
@log_cli
def _show_plans(library) -> int:
plans = library.get_plans()
if plans:
print('Plans:')
for i, plan in enumerate(plans, start=1):
print('{}) {}'.format(i, plan))
else:
print('calistra: Plans not found')
return 0
@log_cli
def _delete_plan(key, library) -> int:
try:
plan = library.del_plan(key)
except AppError as e:
sys.stderr.write(str(e))
return ERROR_CODE
print('Plan "{}"({}) successfully deleted'.format(plan.name, plan.key))
return 0
| 30.97157 | 80 | 0.630587 | 2,952 | 25,056 | 5.158198 | 0.090108 | 0.03126 | 0.075918 | 0.034478 | 0.539962 | 0.472582 | 0.412688 | 0.34314 | 0.313456 | 0.301767 | 0 | 0.002047 | 0.259179 | 25,056 | 808 | 81 | 31.009901 | 0.818285 | 0.082854 | 0 | 0.405498 | 0 | 0 | 0.035647 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04811 | false | 0.010309 | 0.037801 | 0 | 0.216495 | 0.087629 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5245ec313baa01b218b015463d7468714caad528 | 14,647 | py | Python | dualbound/Arnoldi/shell_Green_Taylor_Arnoldi_spatialDiscretization_mp.py | PengningChao/emdb-sphere | d20ac81ab4fd744f87788bda46d3aa19598658ee | [
"MIT"
] | null | null | null | dualbound/Arnoldi/shell_Green_Taylor_Arnoldi_spatialDiscretization_mp.py | PengningChao/emdb-sphere | d20ac81ab4fd744f87788bda46d3aa19598658ee | [
"MIT"
] | null | null | null | dualbound/Arnoldi/shell_Green_Taylor_Arnoldi_spatialDiscretization_mp.py | PengningChao/emdb-sphere | d20ac81ab4fd744f87788bda46d3aa19598658ee | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Aug 5 21:46:50 2020
@author: pengning
"""
import numpy as np
import scipy.special as sp
import matplotlib.pyplot as plt
from .shell_domain import shell_rho_M, shell_rho_N
import mpmath
from mpmath import mp
from .dipole_field import mp_spherical_jn, mp_vec_spherical_jn, mp_spherical_yn, mp_vec_spherical_yn, mp_vec_spherical_djn, mp_vec_spherical_dyn
from .spherical_Green_Taylor_Arnoldi_speedup import mp_re, mp_im
from .shell_Green_Taylor_Arnoldi_spatialDiscretization import complex_to_mp, grid_integrate_trap, rgrid_Mmn_plot,rgrid_Nmn_plot, rgrid_Mmn_normsqr, rgrid_Nmn_normsqr, rgrid_Mmn_vdot,rgrid_Nmn_vdot
def mp_to_complex(mpcarr):
a = np.zeros(len(mpcarr), dtype=complex)  # the np.complex alias was removed in NumPy 1.24
for i in range(len(mpcarr)):
a[i] = complex(mpcarr[i])
return a
def shell_Green_grid_Mmn_vec_mp(n,k, rsqrgrid,rdiffgrid, RgMgrid, ImMgrid, vecMgrid):
"""
evaluates G(r,r')*vecM(r') over a shell region from R1 to R2
the region coordinates are contained in rsqrgrid, a grid of r^2, and rdiffgrid, the distances between neighboring grid points; these instead of the original rgrid are given so that they only need to be computed once in main Arnoldi method
"""
#rsqrgrid = rgrid**2
#rdiffgrid = np.diff(rgrid)
RgMvecMrsqr_grid = RgMgrid*vecMgrid*rsqrgrid
Im_newvecMgrid = k**3 * grid_integrate_trap(RgMvecMrsqr_grid, rdiffgrid) * RgMgrid
Re_ImMfactgrid = np.zeros_like(rsqrgrid, dtype=type(mp.one))
Re_ImMfactgrid[1:] = k**3 * np.cumsum((RgMvecMrsqr_grid[:-1]+RgMvecMrsqr_grid[1:])*rdiffgrid/2.0)
rev_ImMvecMrsqr_grid = np.flip(ImMgrid*vecMgrid*rsqrgrid) #reverse the grid direction to evaluate integrands of the form kr' to kR2
Re_RgMfactgrid = np.zeros_like(rsqrgrid, dtype=type(mp.one))
Re_RgMfactgrid[:-1] = k**3 * np.flip(np.cumsum( (rev_ImMvecMrsqr_grid[:-1]+rev_ImMvecMrsqr_grid[1:])*np.flip(rdiffgrid)/2.0 ))
Re_newvecMgrid = -ImMgrid*Re_ImMfactgrid - RgMgrid*Re_RgMfactgrid
return Re_newvecMgrid + 1j*Im_newvecMgrid
def shell_Green_grid_Arnoldi_RgandImMmn_step_mp(n,k, invchi, rgrid,rsqrgrid,rdiffgrid, RgMgrid, ImMgrid, unitMvecs, Gmat, plotVectors=False):
"""
using a mpf valued grid
this method does one more Arnoldi step, given existing Arnoldi vectors in unitMvecs
the last two entries in unitMvecs are unitMvecs[-2]=G*unitMvecs[-4] and unitMvecs[-1]=G*unitMvecs[-3], without orthogonalization and normalization
the indices are -1 and -3 because we alternate between generating new vectors from the RgM line and from the ImM line
so len(unitMvecs) = len(Gmat)+2 going in and going out of the method
this is setup for most efficient iteration since G*unitMvec is only computed once
the unitMvecs list is modified in place; a new enlarged Gmat mpmatrix is returned at the end
for each iteration we only advance Gmat by 1 row and 1 column
Gmat here is an mpmatrix
"""
#first, begin by orthogonalizing and normalizing unitMvecs[-1]
vecnum = Gmat.rows
for i in range(vecnum):
coef = Gmat[i,vecnum-2]
unitMvecs[-2] -= coef*unitMvecs[i]
unitMvecs[-2][:] = mp_re(unitMvecs[-2][:]) #the Arnoldi vectors should all be real since RgM is a family head and only non-zero singular vector of AsymG
norm = mp.sqrt(rgrid_Mmn_normsqr(unitMvecs[-2], rsqrgrid,rdiffgrid))
unitMvecs[-2] /= norm
if plotVectors:
rgrid_Mmn_plot(unitMvecs[-2], rgrid)
#get new vector
newvecM = shell_Green_grid_Mmn_vec_mp(n,k, rsqrgrid,rdiffgrid, RgMgrid,ImMgrid, unitMvecs[-2])
newvecM[:] = mp_re(newvecM)
vecnum += 1
Gmat.rows+=1; Gmat.cols+=1
for i in range(vecnum):
Gmat[i,vecnum-1] = rgrid_Mmn_vdot(unitMvecs[i], newvecM, rsqrgrid,rdiffgrid)
Gmat[vecnum-1,i] = Gmat[i,vecnum-1]
unitMvecs.append(newvecM) #append to end of unitMvecs for next round of iteration
return Gmat
def shell_Green_grid_Arnoldi_RgandImMmn_Uconverge_mp(n,k,R1,R2, invchi, gridpts=1000, Unormtol=1e-10, veclim=3, delveclim=2, maxveclim=40, plotVectors=False):
np.seterr(over='raise',under='raise',invalid='raise')
#for high angular momentum number could have floating point issues; in this case, raise error. Outer method will catch the error and use the mpmath version instead
rgrid = np.linspace(R1,R2,gridpts)
rsqrgrid = rgrid**2
rdiffgrid = np.diff(rgrid)
"""
RgMgrid = sp.spherical_jn(n, k*rgrid) #the argument for radial part of spherical waves is kr
ImMgrid = sp.spherical_yn(n, k*rgrid)
RgMgrid = RgMgrid.astype(np.complex)
ImMgrid = ImMgrid.astype(np.complex)
RgMgrid = complex_to_mp(RgMgrid)
ImMgrid = complex_to_mp(ImMgrid)
"""
RgMgrid = mp_vec_spherical_jn(n, k*rgrid)
ImMgrid = mp_vec_spherical_yn(n, k*rgrid)
vecRgMgrid = RgMgrid / mp.sqrt(rgrid_Mmn_normsqr(RgMgrid, rsqrgrid,rdiffgrid))
vecImMgrid = ImMgrid - rgrid_Mmn_vdot(vecRgMgrid, ImMgrid, rsqrgrid,rdiffgrid)*vecRgMgrid
vecImMgrid /= mp.sqrt(rgrid_Mmn_normsqr(vecImMgrid,rsqrgrid,rdiffgrid))
if plotVectors:
rgrid_Mmn_plot(vecRgMgrid,rgrid)
rgrid_Mmn_plot(vecImMgrid,rgrid)
unitMvecs = [vecRgMgrid,vecImMgrid]
GvecRgMgrid = shell_Green_grid_Mmn_vec_mp(n,k, rsqrgrid,rdiffgrid, RgMgrid,ImMgrid, vecRgMgrid)
GvecImMgrid = shell_Green_grid_Mmn_vec_mp(n,k, rsqrgrid,rdiffgrid, RgMgrid,ImMgrid, vecImMgrid)
Gmat = mp.zeros(2,2)
Gmat[0,0] = rgrid_Mmn_vdot(vecRgMgrid, GvecRgMgrid, rsqrgrid,rdiffgrid)
Gmat[0,1] = rgrid_Mmn_vdot(vecRgMgrid, GvecImMgrid, rsqrgrid,rdiffgrid)
Gmat[1,0] = Gmat[0,1]
Gmat[1,1] = rgrid_Mmn_vdot(vecImMgrid,GvecImMgrid, rsqrgrid,rdiffgrid)
Uinv = mp.eye(2)*invchi-Gmat
unitMvecs.append(GvecRgMgrid)
unitMvecs.append(GvecImMgrid) #append unorthogonalized, unnormalized Arnoldi vector for further iterations
b = mp.matrix([mp.one])
prevUnorm = 1 / Uinv[0,0]
i=2
while i<veclim:
Gmat = shell_Green_grid_Arnoldi_RgandImMmn_step_mp(n,k,invchi, rgrid,rsqrgrid,rdiffgrid, RgMgrid, ImMgrid, unitMvecs, Gmat, plotVectors=plotVectors)
i += 1
print(i)
if i==maxveclim:
break
if i==veclim:
#solve for first column of U and see if its norm has converged
Uinv = mp.eye(Gmat.rows)*invchi-Gmat
b.rows = i
x = mp.lu_solve(Uinv, b)
Unorm = mp.norm(x)
print('Unorm', Unorm, flush=True)
if np.abs(prevUnorm-Unorm) > np.abs(Unorm)*Unormtol:
veclim += delveclim
prevUnorm = Unorm
return rgrid,rsqrgrid,rdiffgrid, RgMgrid, ImMgrid, unitMvecs, Uinv, Gmat
def shell_Green_grid_Nmn_vec_mp(n,k, rsqrgrid,rdiffgrid, RgBgrid,RgPgrid, ImBgrid,ImPgrid, vecBgrid,vecPgrid):
"""
evaluates G(r,r')*vecN(r') over a shell region from R1 to R2
the region coordinates are contained in rsqrgrid, a grid of r^2, and rdiffgrid, the distances between neighboring grid points; these instead of the original rgrid are given so that they only need to be computed once in main Arnoldi method
"""
#rsqrgrid = rgrid**2
#rdiffgrid = np.diff(rgrid)
RgNvecNrsqr_grid = (RgBgrid*vecBgrid+RgPgrid*vecPgrid)*rsqrgrid
imfac = k**3 * grid_integrate_trap(RgNvecNrsqr_grid, rdiffgrid)
Im_newvecBgrid = imfac * RgBgrid
Im_newvecPgrid = imfac * RgPgrid
Re_ImNfactgrid = np.zeros_like(rsqrgrid, dtype=type(1j*mp.one))
Re_ImNfactgrid[1:] = k**3 * np.cumsum((RgNvecNrsqr_grid[:-1]+RgNvecNrsqr_grid[1:])*rdiffgrid/2.0)
rev_ImNvecNrsqr_grid = np.flip((ImBgrid*vecBgrid + ImPgrid*vecPgrid) * rsqrgrid) #reverse the grid direction to evaluate integrands of the form kr' to kR2
Re_RgNfactgrid = np.zeros_like(rsqrgrid, dtype=type(1j*mp.one))
Re_RgNfactgrid[:-1] = k**3 * np.flip(np.cumsum( (rev_ImNvecNrsqr_grid[:-1]+rev_ImNvecNrsqr_grid[1:])*np.flip(rdiffgrid)/2.0 ))
Re_newvecBgrid = -ImBgrid*Re_ImNfactgrid - RgBgrid*Re_RgNfactgrid
Re_newvecPgrid = -ImPgrid*Re_ImNfactgrid - RgPgrid*Re_RgNfactgrid - vecPgrid #last term is delta contribution
return Re_newvecBgrid + 1j*Im_newvecBgrid, Re_newvecPgrid + 1j*Im_newvecPgrid
def shell_Green_grid_Arnoldi_RgandImNmn_step_mp(n,k, invchi, rgrid,rsqrgrid,rdiffgrid, RgBgrid,RgPgrid, ImBgrid,ImPgrid, unitBvecs,unitPvecs, Gmat, plotVectors=False):
"""
this method does one more Arnoldi step, given existing Arnoldi vectors in unitNvecs
the last two entries in unitNvecs are unitNvecs[-2]=G*unitNvecs[-4] and unitNvecs[-1]=G*unitNvecs[-3], without orthogonalization and normalization
the indices are -1 and -3 because we alternate between generating new vectors from the RgN line and from the ImN line
so len(unitNvecs) = len(Gmat)+2 going in and going out of the method
this is setup for most efficient iteration since G*unitNvec is only computed once
the unitBvecs and unitPvecs lists are modified in place; a new enlarged Gmat mpmatrix is returned at the end
for each iteration we only advance Gmat by 1 row and 1 column
"""
#first, begin by orthogonalizing and normalizing unitMvecs[-1]
vecnum = Gmat.rows
for i in range(vecnum):
coef = Gmat[i,vecnum-2]
unitBvecs[-2] -= coef*unitBvecs[i]; unitPvecs[-2] -= coef*unitPvecs[i]
#the Arnoldi vectors should all be real since RgM is a family head and only non-zero singular vector of AsymG
unitBvecs[-2][:] = mp_re(unitBvecs[-2][:]); unitPvecs[-2][:] = mp_re(unitPvecs[-2][:])
norm = mp.sqrt(rgrid_Nmn_normsqr(unitBvecs[-2],unitPvecs[-2], rsqrgrid,rdiffgrid))
unitBvecs[-2] /= norm; unitPvecs[-2] /= norm
if plotVectors:
rgrid_Nmn_plot(unitBvecs[-2],unitPvecs[-2], rgrid)
#get new vector
newvecB,newvecP = shell_Green_grid_Nmn_vec_mp(n,k, rsqrgrid,rdiffgrid, RgBgrid,RgPgrid, ImBgrid,ImPgrid, unitBvecs[-2],unitPvecs[-2])
newvecB[:] = mp_re(newvecB); newvecP[:] = mp_re(newvecP)
vecnum += 1
Gmat.rows+=1; Gmat.cols+=1
for i in range(vecnum):
Gmat[i,vecnum-1] = rgrid_Nmn_vdot(unitBvecs[i],unitPvecs[i], newvecB,newvecP, rsqrgrid,rdiffgrid)
Gmat[vecnum-1,i] = Gmat[i,vecnum-1]
unitBvecs.append(newvecB); unitPvecs.append(newvecP) #append to end of unitNvecs for next round of iteration
return Gmat
def shell_Green_grid_Arnoldi_RgandImNmn_Uconverge_mp(n,k,R1,R2, invchi, gridpts=1000, Unormtol=1e-10, veclim=3, delveclim=2, maxveclim=40, plotVectors=False):
np.seterr(over='raise',under='raise',invalid='raise')
#for high angular momentum number could have floating point issues; in this case, raise error. Outer method will catch the error and use the mpmath version instead
rgrid = np.linspace(R1,R2,gridpts)
rsqrgrid = rgrid**2
rdiffgrid = np.diff(rgrid)
"""
RgBgrid = sp.spherical_jn(n, k*rgrid)/(k*rgrid) + sp.spherical_jn(n,k*rgrid,derivative=True) #the argument for radial part of spherical waves is kr
RgPgrid = np.sqrt(n*(n+1))*sp.spherical_jn(n, k*rgrid)/(k*rgrid)
ImBgrid = sp.spherical_yn(n, k*rgrid)/(k*rgrid) + sp.spherical_yn(n,k*rgrid,derivative=True)
ImPgrid = np.sqrt(n*(n+1))*sp.spherical_yn(n, k*rgrid)/(k*rgrid)
RgBgrid = RgBgrid.astype(np.complex)
RgPgrid = RgPgrid.astype(np.complex)
ImBgrid = ImBgrid.astype(np.complex)
ImPgrid = ImPgrid.astype(np.complex)
RgBgrid = complex_to_mp(RgBgrid)
RgPgrid = complex_to_mp(RgPgrid)
ImBgrid = complex_to_mp(ImBgrid)
ImPgrid = complex_to_mp(ImPgrid)
"""
RgBgrid = mp_vec_spherical_jn(n,k*rgrid)/(k*rgrid) + mp_vec_spherical_djn(n,k*rgrid)
RgPgrid = mp.sqrt(n*(n+1))*mp_vec_spherical_jn(n, k*rgrid)/(k*rgrid)
ImBgrid = mp_vec_spherical_yn(n, k*rgrid)/(k*rgrid) + mp_vec_spherical_dyn(n,k*rgrid)
ImPgrid = mp.sqrt(n*(n+1))*mp_vec_spherical_yn(n, k*rgrid)/(k*rgrid)
RgN_normvec = mp.sqrt(rgrid_Nmn_normsqr(RgBgrid,RgPgrid, rsqrgrid,rdiffgrid))
RgN_vecBgrid = RgBgrid / RgN_normvec
RgN_vecPgrid = RgPgrid / RgN_normvec
#next generate the orthonormal head for the outgoing wave series
coef = rgrid_Nmn_vdot(RgN_vecBgrid,RgN_vecPgrid, ImBgrid,ImPgrid, rsqrgrid,rdiffgrid)
ImN_vecBgrid = ImBgrid - coef*RgN_vecBgrid
ImN_vecPgrid = ImPgrid - coef*RgN_vecPgrid
ImN_normvec = mp.sqrt(rgrid_Nmn_normsqr(ImN_vecBgrid,ImN_vecPgrid, rsqrgrid,rdiffgrid))
ImN_vecBgrid /= ImN_normvec
ImN_vecPgrid /= ImN_normvec
if plotVectors:
rgrid_Nmn_plot(mp_to_complex(RgN_vecBgrid),mp_to_complex(RgN_vecPgrid),rgrid)
rgrid_Nmn_plot(mp_to_complex(ImN_vecBgrid),mp_to_complex(ImN_vecPgrid),rgrid)
unitBvecs = [RgN_vecBgrid,ImN_vecBgrid]
unitPvecs = [RgN_vecPgrid,ImN_vecPgrid]
GvecRgBgrid, GvecRgPgrid = shell_Green_grid_Nmn_vec_mp(n,k, rsqrgrid,rdiffgrid, RgBgrid,RgPgrid, ImBgrid,ImPgrid, RgN_vecBgrid,RgN_vecPgrid)
GvecImBgrid, GvecImPgrid = shell_Green_grid_Nmn_vec_mp(n,k, rsqrgrid,rdiffgrid, RgBgrid,RgPgrid, ImBgrid,ImPgrid, ImN_vecBgrid,ImN_vecPgrid)
Gmat = mp.zeros(2,2)
Gmat[0,0] = rgrid_Nmn_vdot(RgN_vecBgrid,RgN_vecPgrid, GvecRgBgrid,GvecRgPgrid, rsqrgrid,rdiffgrid)
Gmat[0,1] = rgrid_Nmn_vdot(RgN_vecBgrid,RgN_vecPgrid, GvecImBgrid,GvecImPgrid, rsqrgrid,rdiffgrid)
Gmat[1,0] = Gmat[0,1]
Gmat[1,1] = rgrid_Nmn_vdot(ImN_vecBgrid,ImN_vecPgrid, GvecImBgrid,GvecImPgrid, rsqrgrid,rdiffgrid)
Uinv = mp.eye(2)*invchi-Gmat
unitBvecs.append(GvecRgBgrid); unitPvecs.append(GvecRgPgrid)
unitBvecs.append(GvecImBgrid); unitPvecs.append(GvecImPgrid) #append unorthogonalized, unnormalized Arnoldi vector for further iterations
b = mp.matrix([mp.one])
prevUnorm = 1 / Uinv[0,0]
i=2
while i<veclim:
Gmat = shell_Green_grid_Arnoldi_RgandImNmn_step_mp(n,k,invchi, rgrid,rsqrgrid,rdiffgrid, RgBgrid,RgPgrid, ImBgrid,ImPgrid, unitBvecs,unitPvecs, Gmat, plotVectors=plotVectors)
i += 1
print(i)
if i==maxveclim:
break
if i==veclim:
#solve for first column of U and see if its norm has converged
Uinv = mp.eye(Gmat.rows)*invchi-Gmat
b.rows = i
x = mp.lu_solve(Uinv, b)
Unorm = mp.norm(x)
print('Unorm', Unorm, flush=True)
if np.abs(prevUnorm-Unorm) > np.abs(Unorm)*Unormtol:
veclim += delveclim
prevUnorm = Unorm
return rgrid,rsqrgrid,rdiffgrid, RgBgrid,RgPgrid, ImBgrid,ImPgrid, unitBvecs,unitPvecs, Uinv, Gmat
| 48.339934 | 242 | 0.709429 | 2,121 | 14,647 | 4.748232 | 0.151815 | 0.005958 | 0.011121 | 0.005561 | 0.629431 | 0.586238 | 0.550094 | 0.525072 | 0.486347 | 0.458544 | 0 | 0.015193 | 0.186659 | 14,647 | 302 | 243 | 48.5 | 0.830186 | 0.236294 | 0 | 0.4 | 0 | 0 | 0.004 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041176 | false | 0 | 0.052941 | 0 | 0.135294 | 0.023529 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
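Both `shell_Green_grid_*_vec_mp` routines above assemble their integrals from `grid_integrate_trap` plus cumulative trapezoid sums over `rsqrgrid`/`rdiffgrid`. A plain-float sketch of that trapezoid rule — the body here is an assumption inferred from how the function is called, not the package's actual implementation:

```python
import numpy as np

def grid_integrate_trap(fgrid, rdiffgrid):
    # composite trapezoid rule on a (possibly non-uniform) radial grid:
    # sum over intervals of 0.5 * (f_i + f_{i+1}) * dr_i
    return np.sum((fgrid[:-1] + fgrid[1:]) * rdiffgrid / 2.0)

rgrid = np.linspace(0.0, 1.0, 1001)
approx = grid_integrate_trap(rgrid**2, np.diff(rgrid))  # integral of r^2 on [0, 1]
```

Passing `np.diff(rgrid)` as the spacing is the same convention the Arnoldi code uses, which is why `rdiffgrid` only needs to be computed once per grid.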
524690b6022bc5b06f95bfc8ab15bde750cf7ead | 745 | py | Python | sqlalchemy_collectd/server/listener.py | sqlalchemy/sqlalchemy-collectd | f074fb09b9368213f9c1371a64c5aef4a1e73242 | [
"MIT"
] | 24 | 2018-02-12T04:53:20.000Z | 2022-02-12T22:05:54.000Z | sqlalchemy_collectd/server/listener.py | sqlalchemy/sqlalchemy-collectd | f074fb09b9368213f9c1371a64c5aef4a1e73242 | [
"MIT"
] | 11 | 2018-02-15T08:22:42.000Z | 2022-01-06T15:54:42.000Z | sqlalchemy_collectd/server/listener.py | sqlalchemy/sqlalchemy-collectd | f074fb09b9368213f9c1371a64c5aef4a1e73242 | [
"MIT"
] | 9 | 2018-02-14T08:55:47.000Z | 2021-12-02T07:33:29.000Z | import logging
import threading
import typing
if typing.TYPE_CHECKING:
from .receiver import Receiver
log = logging.getLogger("sqlalchemy_collectd")
def _receive(receiver: "Receiver"):
while True:
try:
receiver.receive()
except Exception:
log.error("message receiver caught an exception", exc_info=True)
except BaseException as be:
log.info(
"message receiver thread caught %s exception, exiting"
% type(be).__name__
)
break
def listen(receiver: "Receiver"):
global listen_thread
listen_thread = threading.Thread(target=_receive, args=(receiver,))
listen_thread.daemon = True
listen_thread.start()
| 24.032258 | 76 | 0.641611 | 79 | 745 | 5.886076 | 0.506329 | 0.103226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.27651 | 745 | 30 | 77 | 24.833333 | 0.862709 | 0 | 0 | 0 | 0 | 0 | 0.165101 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.173913 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
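The `_receive` loop above deliberately distinguishes `Exception` (log and keep receiving) from `BaseException` (log and exit, e.g. on interpreter shutdown or `KeyboardInterrupt`). A small synchronous sketch of that control flow, with a locally defined `Stop` signal standing in for a real interrupt:

```python
import logging

log = logging.getLogger("receive_demo")

class Stop(BaseException):
    """Non-Exception signal, like KeyboardInterrupt, that should end the loop."""

def receive_loop(receiver):
    # mirror _receive: survive ordinary exceptions, break on BaseException
    handled = 0
    while True:
        try:
            receiver()
            handled += 1
        except Exception:
            log.error("receiver caught an exception", exc_info=True)
            handled += 1
        except BaseException as be:
            log.info("receiver caught %s, exiting", type(be).__name__)
            break
    return handled

events = iter([None, ValueError("bad packet"), None, Stop()])

def receiver():
    event = next(events)
    if isinstance(event, BaseException):
        raise event
```

Because `except Exception` is listed first, the `ValueError` is absorbed and the loop continues; only the `BaseException`-derived `Stop` reaches the second clause and breaks out.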
524763f6378c293fcd27f9f54dd646dbb8d1f69d | 661 | py | Python | app/main.py | karma-git/cicd_ec2 | 3b55a0fabded0aeae7623c155a5319cf98849fb2 | [
"WTFPL"
] | null | null | null | app/main.py | karma-git/cicd_ec2 | 3b55a0fabded0aeae7623c155a5319cf98849fb2 | [
"WTFPL"
] | null | null | null | app/main.py | karma-git/cicd_ec2 | 3b55a0fabded0aeae7623c155a5319cf98849fb2 | [
"WTFPL"
] | null | null | null | """
Fast API application
ref: https://fastapi.tiangolo.com/
"""
import os
from socket import gethostname
from datetime import datetime
from uuid import uuid4
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
async def runtime_info() -> dict:
return {
"hostname": gethostname(),
"timestamp": datetime.now(),
"uuid": uuid4(),
}
@app.get("/info")
async def application_info() -> dict:
return {
"commit": os.environ.get("CI_COMMIT_SHORT_SHA"),
"pipeline": os.environ.get("CI_PIPELINE_ID"),
"tag": os.environ.get("CI_COMMIT_TAG"),
}
# CI TRIGGER 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
| 20.65625 | 56 | 0.633888 | 93 | 661 | 4.408602 | 0.569892 | 0.065854 | 0.087805 | 0.102439 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.223903 | 661 | 31 | 57 | 21.322581 | 0.746589 | 0.164902 | 0 | 0.1 | 0 | 0 | 0.165441 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.35 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
524ae3bb7a1380ac60a2ff16c03a95c0ba9e192b | 5,028 | py | Python | tf_yarn/tensorflow/metrics.py | nateagr/tf-yarn | 1f958256291a4cacc3c122900c86831b7882f1e3 | [
"Apache-2.0"
] | null | null | null | tf_yarn/tensorflow/metrics.py | nateagr/tf-yarn | 1f958256291a4cacc3c122900c86831b7882f1e3 | [
"Apache-2.0"
] | null | null | null | tf_yarn/tensorflow/metrics.py | nateagr/tf-yarn | 1f958256291a4cacc3c122900c86831b7882f1e3 | [
"Apache-2.0"
] | null | null | null | import time
import os
import logging.config
from typing import Union, List
import tensorflow as tf
import skein
from tf_yarn.event import broadcast
from tf_yarn.tensorflow import experiment, keras_experiment
from tf_yarn import mlflow
from tf_yarn._task_commons import n_try, is_chief, get_task
logger = logging.getLogger(__name__)
class StepPerSecondHook(tf.estimator.StepCounterHook):
def __init__(
self,
every_n_steps=100,
every_n_secs=None,
output_dir=None,
summary_writer=None
):
tf.estimator.StepCounterHook.__init__(
self,
every_n_steps=every_n_steps,
every_n_secs=every_n_secs,
output_dir=output_dir,
summary_writer=summary_writer
)
def _log_and_record(self, elapsed_steps: int, elapsed_time: float, global_step: int):
if is_chief():
steps_per_sec = elapsed_steps / elapsed_time
mlflow.log_metric(f"steps_per_sec_{n_try()}", steps_per_sec, step=global_step)
class EvalMonitorHook(tf.estimator.SessionRunHook):
'''
Hook to generate statistics about evaluator usage
Usage: tf.estimator.EvalSpec(..., hooks=[EvalMonitorHook()])
'''
def __init__(self):
self.client = skein.ApplicationClient.from_current()
self.task = get_task()
self.step_counter = 0
self.eval_start_time = 0.0
self.eval_step_dur_accu = 0.0
self.start_time = time.time()
def before_run(self, run_context):
self.eval_start_time = time.time()
return tf.estimator.SessionRunArgs(tf.compat.v1.train.get_global_step())
def after_run(self, _run_context, run_values):
self.step_counter += 1
cur_time = time.time()
self.eval_step_dur_accu += cur_time - self.eval_start_time
self.broadcast('eval_step_mean_duration', str(self.eval_step_dur_accu / self.step_counter))
self.broadcast(
'awake_time_ratio',
str(self.eval_step_dur_accu / (cur_time - self.start_time))
)
self.broadcast('nb_eval_steps', str(self.step_counter))
self.broadcast('last_training_step', str(run_values.results))
def broadcast(self, key: str, value: str):
broadcast(self.client, f'{self.task}/{key}', value)
def get_all_metrics(model_path):
events = _gen_events_iterator(model_path)
dataframe = {
'step': list(),
'name': list(),
'value': list()
}
for event in events:
summary = event.summary
if summary:
for value in summary.value:
if value.simple_value:
dataframe['step'].append(event.step)
dataframe['name'].append(value.tag)
dataframe['value'].append(value.simple_value)
return dataframe
def _is_event_file(filename):
return os.path.basename(filename).startswith('events.out')
def _gen_events_iterator(model_path):
event_file = next((filename for filename in tf.compat.v1.gfile.ListDirectory(model_path)
if _is_event_file(filename)))
assert isinstance(event_file, str)
return tf.compat.v1.train.summary_iterator(os.path.join(model_path, event_file))
def _hook_name_already_exists(
hook: tf.estimator.SessionRunHook,
hooks: List[tf.estimator.SessionRunHook]) -> bool:
hook_name = type(hook).__name__
return len([h for h in hooks
if type(h).__name__ == hook_name]) > 0
def _add_monitor_to_experiment(
my_experiment: Union[experiment.Experiment, keras_experiment.KerasExperiment]
) -> Union[experiment.Experiment, keras_experiment.KerasExperiment]:
if isinstance(my_experiment, experiment.Experiment):
logger.info(f"configured training hooks: {my_experiment.train_spec.hooks}")
training_hooks = list(my_experiment.train_spec.hooks)
if my_experiment.config.log_step_count_steps is not None:
steps_per_second_hook = StepPerSecondHook(
every_n_steps=my_experiment.config.log_step_count_steps
)
if not _hook_name_already_exists(steps_per_second_hook, training_hooks):
training_hooks.append(steps_per_second_hook)
else:
logger.warning("do not add StepPerSecondHook as there is already one configured")
monitored_train_spec = my_experiment.train_spec._replace(
hooks=training_hooks
)
monitored_eval_spec = my_experiment.eval_spec._replace(
hooks=(EvalMonitorHook(), *my_experiment.eval_spec.hooks)
)
my_experiment = my_experiment._replace(
eval_spec=monitored_eval_spec, train_spec=monitored_train_spec)
elif isinstance(my_experiment, keras_experiment.KerasExperiment):
logger.warning("equivalent of StepPerSecondHook not yet implemented for KerasExperiment")
else:
raise ValueError("experiment must be an Experiment or a KerasExperiment")
return my_experiment
| 35.160839 | 99 | 0.681185 | 628 | 5,028 | 5.113057 | 0.256369 | 0.048583 | 0.012457 | 0.018686 | 0.155403 | 0.082529 | 0.040486 | 0.018686 | 0 | 0 | 0 | 0.003358 | 0.230111 | 5,028 | 142 | 100 | 35.408451 | 0.826143 | 0.022275 | 0 | 0.036364 | 0 | 0 | 0.080065 | 0.015931 | 0 | 0 | 0 | 0 | 0.009091 | 1 | 0.1 | false | 0 | 0.090909 | 0.009091 | 0.263636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
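`_hook_name_already_exists` above deduplicates hooks by class name rather than by identity, so two distinct `StepPerSecondHook` instances still count as the same hook. The check reduces to the following, with plain classes standing in for the tf.estimator hook types:

```python
def hook_name_already_exists(hook, hooks):
    # membership by type name, matching _hook_name_already_exists above
    hook_name = type(hook).__name__
    return any(type(h).__name__ == hook_name for h in hooks)

class StepPerSecondHook:  # stand-in for the real tf.estimator-based hook
    pass

class EvalMonitorHook:  # stand-in
    pass
```

This is why `_add_monitor_to_experiment` can safely be called on an experiment whose user already configured a `StepPerSecondHook`: the name match short-circuits the append.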
524d3f9238edfe504582ed378f0ba7eec353ca1d | 6,417 | py | Python | test.py | huangzhikun1995/IPM-Net | 9a4bfdeb3f8b38cd592d5a669b484b489b64a24a | [
"Net-SNMP",
"Xnet",
"RSA-MD"
] | 175 | 2020-04-25T11:30:45.000Z | 2022-01-20T07:35:55.000Z | test.py | huangzhikun1995/IPM-Net | 9a4bfdeb3f8b38cd592d5a669b484b489b64a24a | [
"Net-SNMP",
"Xnet",
"RSA-MD"
] | 10 | 2020-05-02T07:02:28.000Z | 2022-03-31T07:14:23.000Z | test.py | huangzhikun1995/IPM-Net | 9a4bfdeb3f8b38cd592d5a669b484b489b64a24a | [
"Net-SNMP",
"Xnet",
"RSA-MD"
] | 14 | 2020-04-25T13:55:46.000Z | 2021-05-12T00:21:37.000Z | # -*- coding: utf-8 -*-
from __future__ import print_function
from utils import get_config, pytorch03_to_pytorch04
from trainer import IPMNet_Trainer
from torch.autograd import Variable
import argparse
import torchvision.utils as vutils
import sys
import torch
import os
import random
from torchvision import transforms
from PIL import Image
import time
# number of iterations
num = '01000000'
# config name
name = 'config'
if not os.path.isdir('./outputs/%s'%name):
assert 0, "please change the name to your model name"
parser = argparse.ArgumentParser()
parser.add_argument('--config', type=str, default='./outputs/%s/config.yaml'%name, help="net configuration")
parser.add_argument('--checkpoint', type=str, default="./outputs/%s/checkpoints/gen_%s.pt"%(name, num), help="checkpoint of autoencoders")
parser.add_argument('--a2b', type=int, default=1, help="1 for a2b and 0 for b2a")
parser.add_argument('--sty_num', type=int, default=30, help="the number of randomly selected reference images.")
parser.add_argument('--seed', type=int, default=10, help="random seed")
parser.add_argument('--num_style',type=int, default=10, help="number of styles to sample")
parser.add_argument('--synchronized', action='store_true', help="whether to use synchronized style codes or not")
parser.add_argument('--output_only', action='store_true', help="whether to save only the output images")
parser.add_argument('--output_path', type=str, default="./test_results/%s/%s"%(name, num), help="output image path")
parser.add_argument('--trainer', type=str, default='IPMNet')
opts = parser.parse_args()
opts.input = '/home/username/IPM-Net/datasets/makeup/testB/'
opts.input_mask ='/home/username/IPM-Net/datasets/makeup/testB_mask/'
opts.input_texture ='/home/username/IPM-Net/datasets/makeup/testB_highcontract/'
opts.style = '/home/username/IPM-Net/datasets/makeup/testA/'
opts.style_mask ='/home/username/IPM-Net/datasets/makeup/testA_mask/'
opts.style_texture ='/home/username/IPM-Net/datasets/makeup/testA_highcontract/'
if not os.path.exists(opts.output_path):
    os.makedirs(opts.output_path)
torch.manual_seed(opts.seed)
torch.cuda.manual_seed(opts.seed)
# Load experiment setting
config = get_config(opts.config)
opts.num_style = 1 if opts.style != '' else opts.num_style
# Setup model and data loader
# config['vgg_model_path'] = opts.output_path
if opts.trainer == 'IPMNet':
    style_dim = config['gen']['style_dim']
    trainer = IPMNet_Trainer(config)
else:
    sys.exit("Only the IPMNet trainer is supported")
try:
    state_dict = torch.load(opts.checkpoint)
    trainer.gen_a.load_state_dict(state_dict['a'])
    trainer.gen_b.load_state_dict(state_dict['b'])
except Exception:
    # fall back to converting a PyTorch 0.3 checkpoint to the 0.4 format
    state_dict = pytorch03_to_pytorch04(torch.load(opts.checkpoint), opts.trainer)
    trainer.gen_a.load_state_dict(state_dict['a'])
    trainer.gen_b.load_state_dict(state_dict['b'])
trainer.cuda()
trainer.eval()
encode = trainer.gen_a.encode #if opts.a2b else trainer.gen_b.encode # encode function
style_encode = trainer.gen_b.encode #if opts.a2b else trainer.gen_a.encode # encode function
decode = trainer.gen_b.decode #if opts.a2b else trainer.gen_a.decode # decode function
if 'new_size' in config:
    new_size = config['new_size']
else:
    new_size = config['new_size_a']
if 'crop_image_height' in config and 'crop_image_width' in config:
    height = config['crop_image_height']
    width = config['crop_image_width']
# get file lists
image_files = os.listdir(opts.input)
style_files = os.listdir(opts.style)
img_transform = transforms.Compose([transforms.Resize(new_size),
                                    transforms.CenterCrop((height, width)),
                                    transforms.ToTensor(),
                                    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
mask_transform = transforms.Compose([transforms.Resize(new_size),
                                     transforms.CenterCrop((height, width)),
                                     transforms.ToTensor()])
with torch.no_grad():
    for i in range(len(image_files)):
        # time_start = time.time()
        img = str(i) + '.png'
        source = Variable(img_transform(Image.open(opts.input + img).convert('RGB')).unsqueeze(0).cuda())
        source_mask = Variable(mask_transform(Image.open(opts.input_mask + img)).unsqueeze(0).cuda())
        source_texture = Variable(mask_transform(Image.open(opts.input_texture + img)).unsqueeze(0).cuda())
        subpath = os.path.join(opts.output_path, str(i))
        if not os.path.exists(subpath):
            os.makedirs(subpath)
        # save source image
        path = os.path.join(subpath, '%d_0.png' % i)
        image_save = vutils.make_grid(source, nrow=source.size(0), padding=0, normalize=True, scale_each=True)
        vutils.save_image(image_save, path, nrow=1)
        for n, sty in enumerate(style_files):
            style_image = Variable(img_transform(Image.open(opts.style + sty).convert('RGB')).unsqueeze(0).cuda()) if opts.style != '' else None
            style_mask = Variable(mask_transform(Image.open(opts.style_mask + sty)).unsqueeze(0).cuda())
            style_texture = Variable(mask_transform(Image.open(opts.style_texture + sty)).unsqueeze(0).cuda())
            if i == 0:
                style_savepath = os.path.join(opts.output_path, 'style')
                if not os.path.exists(style_savepath):
                    os.makedirs(style_savepath)
                style_save = vutils.make_grid(style_image, nrow=source.size(0), padding=0, normalize=True, scale_each=True)
                vutils.save_image(style_save, os.path.join(style_savepath, 'style_%d.png' % n), nrow=1)
            content, _, c_facial_mask = encode(source, source_mask, source_texture)
            _, source_sty, _ = style_encode(source, source_mask, style_texture)
            _, style, _ = style_encode(style_image, style_mask, style_texture)
            # make the makeup controllable
            level = 0
            new_sty = level * source_sty.unsqueeze(0) + (1 - level) * style.unsqueeze(0)
            outputs = decode(content, new_sty, c_facial_mask)
            # save transferred image
            path = os.path.join(subpath, '%d_%s.png' % (i + 1, sty))
            image_save = vutils.make_grid(outputs, nrow=source.size(0), padding=0, normalize=True, scale_each=True)
            vutils.save_image(image_save, path, nrow=1)
        print(i)
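The `level` variable above linearly mixes the source's own style code with the reference style code (`level=0` keeps only the reference style). A minimal standalone sketch of that interpolation, with made-up style vectors (illustrative only, not the model's tensors):

```python
import numpy as np

def mix_styles(source_sty, ref_sty, level):
    # level=0 -> pure reference style, level=1 -> pure source style
    return level * source_sty + (1 - level) * ref_sty

mixed = mix_styles(np.array([1.0, 1.0]), np.array([0.0, 2.0]), 0.5)
print(mixed)  # [0.5 1.5]
```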
# ---
# File: mycodetests/final_single_bn.py (repo: sajib-4414/comparative-study-with-pgmpy, MIT license)
import numpy as np
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference.EliminationOrder import WeightedMinFill, MinWeight, MinNeighbors, MinFill
model = BayesianModel([('c', 'd'), ('d', 'g'), ('i', 'g'), ('i', 's'), ('s', 'j'), ('g', 'l'), ('l', 'j'), ('j', 'h'), ('g', 'h')])
cpd_c = TabularCPD('c', 2, np.random.rand(2, 1))
cpd_d = TabularCPD('d', 2, np.random.rand(2, 2), ['c'], [2])
cpd_g = TabularCPD('g', 3, np.random.rand(3, 4), ['d', 'i'], [2, 2])
cpd_i = TabularCPD('i', 2, np.random.rand(2, 1))
cpd_s = TabularCPD('s', 2, np.random.rand(2, 2), ['i'], [2])
cpd_j = TabularCPD('j', 2, np.random.rand(2, 4), ['l', 's'], [2, 2])
cpd_l = TabularCPD('l', 2, np.random.rand(2, 3), ['g'], [3])
cpd_h = TabularCPD('h', 2, np.random.rand(2, 6), ['g', 'j'], [3, 2])
model.add_cpds(cpd_c, cpd_d, cpd_g, cpd_i, cpd_s, cpd_j, cpd_l, cpd_h)
def run_model_with_heuristic(heuristic, model):
    return heuristic(model).get_elimination_order()
print("with WeightedMinFill:", run_model_with_heuristic(WeightedMinFill, model))
print("with MinFill:", run_model_with_heuristic(MinFill, model))
print("with MinWeight:", run_model_with_heuristic(MinWeight, model))
print("with MinNeighbors:", run_model_with_heuristic(MinNeighbors, model)) | 56.96 | 131 | 0.669242 | 248 | 1,424 | 3.633065 | 0.181452 | 0.071032 | 0.106548 | 0.100999 | 0.266371 | 0.219756 | 0.186459 | 0.146504 | 0.146504 | 0.146504 | 0 | 0.026025 | 0.109551 | 1,424 | 25 | 132 | 56.96 | 0.684543 | 0 | 0 | 0.136364 | 0 | 0 | 0.071579 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.181818 | 0.045455 | 0.272727 | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
# ---
# File: tests/nn/rnn_test.py (repo: TylerYep/edutorch, MIT license)
import numpy as np
from edutorch.nn import RNN
from tests.gradient_check import estimate_gradients, rel_error
def test_rnn_forward() -> None:
    N, T, D, H = 2, 3, 4, 5
    x = np.linspace(-0.1, 0.3, num=N * T * D).reshape(N, T, D)
    model = RNN(D, H, N)
    model.h0 = np.linspace(-0.3, 0.1, num=N * H).reshape(N, H)
    model.Wx = np.linspace(-0.2, 0.4, num=D * H).reshape(D, H)
    model.Wh = np.linspace(-0.4, 0.1, num=H * H).reshape(H, H)
    model.b = np.linspace(-0.7, 0.1, num=H)
    h = model(x)
    expected_h = np.asarray(
        [
            [
                [-0.42070749, -0.27279261, -0.11074945, 0.05740409, 0.22236251],
                [-0.39525808, -0.22554661, -0.0409454, 0.14649412, 0.32397316],
                [-0.42305111, -0.24223728, -0.04287027, 0.15997045, 0.35014525],
            ],
            [
                [-0.55857474, -0.39065825, -0.19198182, 0.02378408, 0.23735671],
                [-0.27150199, -0.07088804, 0.13562939, 0.33099728, 0.50158768],
                [-0.51014825, -0.30524429, -0.06755202, 0.17806392, 0.40333043],
            ],
        ]
    )
    assert np.allclose(expected_h, h)
def test_rnn_backward() -> None:
    N, D, T, H = 2, 3, 10, 5
    x = np.random.randn(N, T, D)
    h0 = np.random.randn(N, H)
    Wx = np.random.randn(D, H)
    Wh = np.random.randn(H, H)
    b = np.random.randn(H)
    model = RNN(D, H, N)
    model.h0 = h0
    model.Wx = Wx
    model.Wh = Wh
    model.b = b
    out = model(x)
    dout = np.random.randn(*out.shape)
    dx, dh0, dWx, dWh, db = model.backward(dout)
    params = {"h0": h0, "Wx": Wx, "Wh": Wh, "b": b}
    dx_num, dh0_num, dWx_num, dWh_num, db_num = estimate_gradients(
        model, dout, x, params
    )
    assert np.allclose(dx_num, dx), f"dx error: {rel_error(dx_num, dx)}"
    assert np.allclose(dh0_num, dh0), f"dh0 error: {rel_error(dh0_num, dh0)}"
    assert np.allclose(dWx_num, dWx), f"dWx error: {rel_error(dWx_num, dWx)}"
    assert np.allclose(dWh_num, dWh), f"dWh error: {rel_error(dWh_num, dWh)}"
    assert np.allclose(db_num, db), f"db error: {rel_error(db_num, db)}"
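`estimate_gradients` above numerically checks the analytic backward pass. The core idea is a centered finite difference; a generic sketch for a scalar-valued function (not edutorch's implementation):

```python
import numpy as np

def numeric_grad(f, x, h=1e-5):
    # centered finite differences: df/dx_i ~ (f(x + h*e_i) - f(x - h*e_i)) / (2h)
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + h
        fp = f(x)
        x[idx] = orig - h
        fm = f(x)
        x[idx] = orig  # restore before moving on
        grad[idx] = (fp - fm) / (2 * h)
        it.iternext()
    return grad

g = numeric_grad(lambda v: (v ** 2).sum(), np.array([1.0, -2.0]))
print(g)  # close to [2. -4.]
```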
# ---
# File: Examples/VoiceMESSAGE/main_data_function.py (repo: davidhozic/Discord-Shiller, MIT license)
import framework, datetime, secret
from framework import discord
############################################################################################
# It's VERY IMPORTANT that you use @framework.data_function!
############################################################################################
@framework.data_function
def get_data(param1, param2):
    return framework.AUDIO("VoiceMessage.mp3")
guilds = [
    framework.GUILD(
        guild_id=123456789,       # ID of server (guild)
        messages_to_send=[        # list of MESSAGE objects
            framework.VoiceMESSAGE(
                start_period=None,        # if None, messages will be sent on a fixed period (end_period)
                end_period=15,            # if start_period is None, this is the fixed sending period;
                                          # if start_period is defined, it is the upper limit of the randomized period
                data=get_data(1, 2),      # data parameter
                channel_ids=[123456789],  # IDs of all the channels you want this message to be sent into
                start_now=True            # start sending now (True) or wait until the first period
            ),
        ],
        generate_log=True  # generate a file log of sent messages (and failed attempts) for this server
    )
]
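The period semantics described in the comments above can be sketched as follows (an assumption based on those comments, not the framework's actual implementation): with `start_period=None` the period is fixed at `end_period`, otherwise it is randomized between the two bounds.

```python
import random

def next_period(start_period, end_period):
    if start_period is None:
        return end_period  # fixed sending period
    return random.uniform(start_period, end_period)  # randomized period

print(next_period(None, 15))           # 15
print(5 <= next_period(5, 15) <= 15)   # True
```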
############################################################################################
if __name__ == "__main__":
framework.run( token=secret.C_TOKEN, # MANDATORY,
server_list=guilds, # MANDATORY
is_user=False, # OPTIONAL
user_callback=None, # OPTIONAL
server_log_output="History", # OPTIONAL
debug=True) # OPTIONAL
# ---
# File: taxcalc/parameters.py (repo: jlyons871/Tax-Calculator, MIT license)
import numpy as np
from .utils import expand_array
import os
import json
from pkg_resources import resource_stream, Requirement
DEFAULT_START_YEAR = 2013
class Parameters(object):
    CUR_PATH = os.path.abspath(os.path.dirname(__file__))
    PARAM_FILENAME = "params.json"
    params_path = os.path.join(CUR_PATH, PARAM_FILENAME)

    # Mapping of year to inflation rate
    __rates = {2013: 0.015, 2014: 0.020, 2015: 0.022, 2016: 0.020, 2017: 0.021,
               2018: 0.022, 2019: 0.023, 2020: 0.024, 2021: 0.024, 2022: 0.024,
               2023: 0.024, 2024: 0.024}
    @classmethod
    def from_file(cls, file_name, **kwargs):
        if file_name:
            with open(file_name) as f:
                params = json.loads(f.read())
        else:
            params = None
        return cls(data=params, **kwargs)
    def __init__(self, start_year=DEFAULT_START_YEAR, budget_years=12,
                 inflation_rate=None, inflation_rates=None, data=None,
                 **kwargs):

        if inflation_rate and inflation_rates:
            raise ValueError("Can only specify either one constant inflation"
                             " rate or a list of inflation rates")

        self._inflation_rates = None

        if inflation_rate:
            self._inflation_rates = [inflation_rate] * budget_years

        if inflation_rates:
            assert len(inflation_rates) == budget_years
            self._inflation_rates = [inflation_rates[start_year + i]
                                     for i in range(0, budget_years)]

        if not self._inflation_rates:
            self._inflation_rates = [self.__rates[start_year + i]
                                     for i in range(0, budget_years)]

        self._current_year = start_year
        self._start_year = start_year
        self._budget_years = budget_years

        if data:
            self._vals = data
        else:
            self._vals = default_data(metadata=True)

        # INITIALIZE
        for name, data in self._vals.items():
            cpi_inflated = data.get('cpi_inflated', False)
            values = data['value']
            setattr(self, name, expand_array(values,
                                             inflate=cpi_inflated,
                                             inflation_rates=self._inflation_rates,
                                             num_years=budget_years))

        self.set_year(start_year)
    def update(self, year_mods):
        """
        Take a dictionary of year: {name: val} mods and set them on this
        Params object.

        'year_mods' is a dictionary of year: mods where mods is a dict of
        key:value pairs and key_cpi:Bool pairs. The key_cpi:Bool pairs
        indicate whether the value for 'key' should be inflated.

        Parameters
        ----------
        year_mods: dict
        """
        if not all(isinstance(k, int) for k in year_mods.keys()):
            raise ValueError("Every key must be a year, e.g. 2011, 2012, etc.")

        defaults = default_data(metadata=True)

        for year, mods in year_mods.items():
            num_years_to_expand = (self.start_year + self.budget_years) - year
            for name, values in mods.items():
                if name.endswith("_cpi"):
                    continue
                if name in defaults:
                    default_cpi = defaults[name].get('cpi_inflated', False)
                else:
                    default_cpi = False
                cpi_inflated = mods.get(name + "_cpi", default_cpi)

                if year == self.start_year and year == self.current_year:
                    nval = expand_array(values,
                                        inflate=cpi_inflated,
                                        inflation_rates=self._inflation_rates,
                                        num_years=num_years_to_expand)
                    setattr(self, name, nval)
                elif year <= self.current_year and year >= self.start_year:
                    # advance until the parameter is in line with the
                    # current year
                    num_years_to_skip = self.current_year - year
                    offset_year = year - self.start_year
                    inf_rates = [self._inflation_rates[offset_year + i]
                                 for i in range(0, num_years_to_expand)]
                    nval = expand_array(values,
                                        inflate=cpi_inflated,
                                        inflation_rates=inf_rates,
                                        num_years=num_years_to_expand)
                    if self.current_year > self.start_year:
                        cur_val = getattr(self, name)
                        offset = self.current_year - self.start_year
                        cur_val[offset:] = nval[num_years_to_skip:]
                    else:
                        setattr(self, name, nval[num_years_to_skip:])
                else:  # year > current_year
                    msg = ("Can't specify a parameter for a year that is in"
                           " the future because we don't know how to fill in"
                           " the values for the years between {0} and {1}.")
                    raise ValueError(msg.format(self.current_year, year))

        # Set up the '_X = [a, b, ...]' variables as 'X = a'
        self.set_year(self._current_year)
    @property
    def current_year(self):
        return self._current_year

    @property
    def start_year(self):
        return self._start_year

    @property
    def budget_years(self):
        return self._budget_years

    def increment_year(self):
        self._current_year += 1
        self.set_year(self._current_year)

    def set_year(self, yr):
        for name, vals in self._vals.items():
            arr = getattr(self, name)
            setattr(self, name[1:], arr[yr - self._start_year])
def default_data(metadata=False, start_year=None):
    """ Retrieve the default parameters """
    parampath = Parameters.params_path

    if not os.path.exists(parampath):
        path_in_egg = os.path.join("taxcalc", Parameters.PARAM_FILENAME)
        buf = resource_stream(Requirement.parse("taxcalc"), path_in_egg)
        _bytes = buf.read()
        as_string = _bytes.decode("utf-8")
        params = json.loads(as_string)
    else:
        with open(Parameters.params_path) as f:
            params = json.load(f)

    if start_year:
        for k, v in params.items():
            first_year = v.get('start_year', DEFAULT_START_YEAR)
            assert isinstance(first_year, int)

            if start_year < first_year:
                msg = "Can't set a start year of {0}, because it is before {1}"
                raise ValueError(msg.format(start_year, first_year))

            # Set the new start year:
            v['start_year'] = start_year

            # Work with the values
            vals = v['value']
            last_year_for_data = first_year + len(vals) - 1

            if last_year_for_data < start_year:
                if v['row_label']:
                    v['row_label'] = ["2015"]
                # Need to produce new values
                new_val = vals[-1]
                if v['cpi_inflated'] is True:
                    if isinstance(new_val, list):
                        for y in range(last_year_for_data, start_year):
                            new_val = [x * (1.0 + Parameters._Parameters__rates[y]) for x in new_val]
                    else:
                        for y in range(last_year_for_data, start_year):
                            new_val *= 1.0 + Parameters._Parameters__rates[y]
                # Set the new values
                v['value'] = [new_val]
            else:
                # Need to get rid of [first_year, ..., start_year-1] values
                years_to_chop = start_year - first_year
                if v['row_label']:
                    v['row_label'] = v['row_label'][years_to_chop:]
                v['value'] = v['value'][years_to_chop:]

    if metadata:
        return params
    else:
        return {k: v['value'] for k, v in params.items()}
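`expand_array` (imported from `.utils`) carries parameter values forward using the per-year inflation rates, much like the compounding loop in `default_data` above. An illustrative sketch of that forward inflation for a scalar value (not taxcalc's actual implementation):

```python
def inflate_forward(value, rates):
    # one entry per year: the start value, then compounded by each year's rate
    out = [value]
    for r in rates:
        out.append(out[-1] * (1.0 + r))
    return out

series = inflate_forward(100.0, [0.015, 0.020])
print(series)  # start value plus two inflated years
```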
# ---
# File: manila/tests/common/test_config.py (repo: gouthampacha/manila, Apache-2.0 license)
# Copyright 2015 Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import ddt
from manila.common import config
from manila.common import constants
from manila import exception
from manila import test
from manila.tests import utils as test_utils
VALID_CASES = [proto.lower() for proto in constants.SUPPORTED_SHARE_PROTOCOLS]
VALID_CASES.extend([proto.upper() for proto in VALID_CASES])
VALID_CASES.append(','.join(case for case in VALID_CASES))
@ddt.ddt
class VerifyConfigShareProtocolsTestCase(test.TestCase):

    @ddt.data(*VALID_CASES)
    def test_verify_share_protocols_valid_cases(self, proto):
        data = dict(DEFAULT=dict(enabled_share_protocols=proto))
        with test_utils.create_temp_config_with_opts(data):
            config.verify_share_protocols()

    @ddt.data(None, '', 'fake', [], ['fake'], [VALID_CASES[0] + 'fake'])
    def test_verify_share_protocols_invalid_cases(self, proto):
        data = dict(DEFAULT=dict(enabled_share_protocols=proto))
        with test_utils.create_temp_config_with_opts(data):
            self.assertRaises(
                exception.ManilaException, config.verify_share_protocols)
# ---
# File: 1st_100/problem040.py (repo: takekoputa/project-euler, MIT license)
# Question: https://projecteuler.net/problem=40
def get_ith_digit_of(n, i):
    return int(str(n)[i])

N = [10**i for i in range(7)]
i = 0
k = 1
prod = 1
lower = 1  # position of the first digit of [10**(k-1), 10**k)
upper = 9  # position of the last digit of [10**(k-1), 10**k)
for n in N:
    while not (n >= lower and n <= upper):
        k = k + 1
        lower = upper + 1
        upper = lower + k * 9 * 10 ** (k - 1) - 1
    num = 10**(k-1) + (n - lower) // k
    digit_pos = (n - lower) % k
    prod = prod * get_ith_digit_of(num, digit_pos)
print(prod)
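The arithmetic above locates digits of Champernowne's constant without ever materializing it. A brute-force cross-check that simply builds the digit string (memory-hungry but simple):

```python
# concatenate 1, 2, 3, ... until at least 10**6 digits are available
digits = ''.join(str(n) for n in range(1, 200000))
prod = 1
for n in [10**i for i in range(7)]:
    prod *= int(digits[n - 1])  # positions are 1-indexed in the problem
print(prod)  # 210
```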
# ---
# File: grizzly_cli/init.py (repo: Biometria-se/grizzly-cli, MIT license)
from typing import Generator
from argparse import Namespace as Arguments
from os import path
from pathlib import Path
from .utils import ask_yes_no
from . import EXECUTION_CONTEXT, register_parser
from .argparse import ArgumentSubParser
# prefix components:
space = '    '
branch = '│   '
# pointers:
tee = '├── '
last = '└── '
@register_parser(order=1)
def create_parser(sub_parser: ArgumentSubParser) -> None:
    # grizzly-cli init
    init_parser = sub_parser.add_parser('init', description=(
        'create a skeleton project with required structure and files.'
    ))
    init_parser.add_argument(
        'project',
        nargs=None,
        type=str,
        help='project name, a directory will be created with this name',
    )
    init_parser.add_argument(
        '--grizzly-version',
        type=str,
        required=False,
        default=None,
        help='specify which grizzly version to use for project, default is latest',
    )
    init_parser.add_argument(
        '--with-mq',
        action='store_true',
        default=False,
        required=False,
        help='if grizzly should be installed with IBM MQ support (external dependencies excluded)',
    )
    init_parser.add_argument(
        '-y', '--yes',
        action='store_true',
        default=False,
        required=False,
        help='automagically answer yes on any questions',
    )

    if init_parser.prog != 'grizzly-cli init':  # pragma: no cover
        init_parser.prog = 'grizzly-cli init'
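A quick standalone check of the flags registered above, rebuilt with a plain `argparse.ArgumentParser` (mirroring the `add_argument` calls, without grizzly-cli's sub-parser plumbing):

```python
import argparse

p = argparse.ArgumentParser(prog='grizzly-cli init')
p.add_argument('project', type=str)
p.add_argument('--grizzly-version', type=str, default=None)
p.add_argument('--with-mq', action='store_true', default=False)
p.add_argument('-y', '--yes', action='store_true', default=False)

args = p.parse_args(['myproject', '--with-mq', '-y'])
print(args.project, args.with_mq, args.yes, args.grizzly_version)  # myproject True True None
```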
def tree(dir_path: Path, prefix: str = '') -> Generator[str, None, None]:
    '''A recursive generator, given a directory Path object
    will yield a visual tree structure line by line
    with each line prefixed by the same characters

    credit: https://stackoverflow.com/a/59109706
    '''
    contents = sorted(list(dir_path.iterdir()))
    # contents each get pointers that are ├── with a final └── :
    pointers = [tee] * (len(contents) - 1) + [last]
    for pointer, sub_path in zip(pointers, contents):
        yield prefix + pointer + sub_path.name
        if sub_path.is_dir():  # extend the prefix and recurse:
            extension = branch if pointer == tee else space
            # i.e. space because last, └── , above so no more │
            yield from tree(sub_path, prefix=prefix + extension)
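A quick check of `tree()` against a temporary directory (the function and its prefix constants are re-declared here so the snippet is self-contained):

```python
import tempfile
from pathlib import Path
from typing import Generator

space, branch, tee, last = '    ', '│   ', '├── ', '└── '

def tree(dir_path: Path, prefix: str = '') -> Generator[str, None, None]:
    contents = sorted(list(dir_path.iterdir()))
    pointers = [tee] * (len(contents) - 1) + [last]
    for pointer, sub_path in zip(pointers, contents):
        yield prefix + pointer + sub_path.name
        if sub_path.is_dir():
            extension = branch if pointer == tee else space
            yield from tree(sub_path, prefix=prefix + extension)

with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / 'environments').mkdir()
    (Path(tmp) / 'requirements.txt').write_text('grizzly-loadtester\n')
    lines = list(tree(Path(tmp)))

print(lines)  # ['├── environments', '└── requirements.txt']
```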
def init(args: Arguments) -> int:
    if path.exists(path.join(EXECUTION_CONTEXT, args.project)):
        print(f'"{args.project}" already exists in {EXECUTION_CONTEXT}')
        return 1

    if all([path.exists(path.join(EXECUTION_CONTEXT, p)) for p in ['environments', 'features', 'requirements.txt']]):
        print('oops, looks like you are already in a grizzly project directory', end='\n\n')
        print(EXECUTION_CONTEXT)
        for line in tree(Path(EXECUTION_CONTEXT)):
            print(line)
        return 1

    layout = f'''
    {args.project}
    ├── environments
    │   └── {args.project}.yaml
    ├── features
    │   ├── environment.py
    │   ├── steps
    │   │   └── steps.py
    │   ├── {args.project}.feature
    │   └── requests
    └── requirements.txt
'''
    message = f'the following structure will be created:\n{layout}'

    if not args.yes:
        ask_yes_no(f'{message}\ndo you want to create grizzly project "{args.project}"?')
    else:
        print(message)

    # create project root
    structure = Path(path.join(EXECUTION_CONTEXT, args.project))
    structure.mkdir()

    # create requirements.txt
    grizzly_dependency = 'grizzly-loadtester'
    if args.with_mq:
        grizzly_dependency = f'{grizzly_dependency}[mq]'
    if args.grizzly_version is not None:
        grizzly_dependency = f'{grizzly_dependency}=={args.grizzly_version}'
    (structure / 'requirements.txt').write_text(f'{grizzly_dependency}\n')

    # create environments/
    structure_environments = structure / 'environments'
    structure_environments.mkdir()

    # create environments/<project>.yaml
    (structure_environments / f'{args.project}.yaml').write_text('''configuration:
  template:
    host: https://localhost
''')

    # create features/ directory
    structure_features = structure / 'features'
    structure_features.mkdir()

    # create features/<project>.feature
    (structure_features / f'{args.project}.feature').write_text('''Feature: Template feature file
  Scenario: Template scenario
    Given a user of type "RestApi" with weight "1" load testing "$conf::template.host"
''')

    # create features/environment.py
    (structure_features / 'environment.py').write_text('from grizzly.environment import *\n\n')

    # create features/requests directory
    (structure_features / 'requests').mkdir()

    # create features/steps directory
    structure_feature_steps = structure_features / 'steps'
    structure_feature_steps.mkdir()

    # create features/steps/steps.py
    (structure_feature_steps / 'steps.py').write_text('from grizzly.steps import *\n\n')

    print(f'successfully created project "{args.project}", with the following options:')
    print(f'{" " * 2}\u2022 {"with" if args.with_mq else "without"} IBM MQ support')
    if args.grizzly_version is not None:
        print(f'{" " * 2}\u2022 pinned to grizzly version {args.grizzly_version}')
    else:
        print(f'{" " * 2}\u2022 latest grizzly version')

    return 0
# ---
# File: Projective-Geometry/tony/com.tonybeltramelli.homography/PersonTracker.py (repo: tonybeltramelli/Graphics-And-Vision, Apache-2.0 license)
__author__ = 'tbeltramelli'
from AHomography import *
class PersonTracker(AHomography):
    _input = None
    _data = None
    _map = None
    _counter = 0
    _tracking_output_path = ""

    def __init__(self, video_path, map_path, tracking_data_path, tracking_output_path, homography_output_path):
        self._data = self.get_tracking_data(tracking_data_path)
        self._map = UMedia.get_image(map_path)
        self._tracking_output_path = tracking_output_path
        self._homography_output_path = homography_output_path
        UMedia.load_media(video_path, self.process)

    def process(self, img):
        self._input = img
        self.define_map_homography([img, self._map])
        x, y = self.get_person_position()
        x, y = self.get_2d_transform_from_homography(x, y, self._homography)
        cv2.circle(self._map, (x, y), 3, (0, 255, 0))
        UMedia.show(self._input, self._map)

    def get_person_position(self):
        self._counter += 1
        if self._counter >= len(self._data):
            cv2.imwrite(self._tracking_output_path, self._map)
            return 0, 0
        row = self._data[self._counter]
        colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
        for i, box in enumerate(row):
            cv2.rectangle(self._input, box[0], box[1], colors[i])
        box = row[len(row) - 1]
        base_x = box[0][0] + ((box[1][0] - box[0][0]) / 2)
        base_y = box[1][1]
        cv2.circle(self._input, (base_x, base_y), 1, (0, 255, 255))
        return base_x, base_y

    def get_tracking_data(self, tracking_data_path):
        data = np.loadtxt(tracking_data_path)
        length, n = data.shape
        boxes = []
        for i in range(length):
            boxes.append(self.get_tracking_boxes(data[i, :]))
        return boxes

    def get_tracking_boxes(self, data):
        points = [(int(data[i]), int(data[i + 1])) for i in range(0, len(data) - 1, 2)]
        boxes = []
        # integer division so range() gets an int under Python 3
        for i in range(0, len(data) // 2, 2):
            box = tuple(points[i:i + 2])
            boxes.append(box)
        return boxes
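`get_2d_transform_from_homography` conceptually maps an image point into map coordinates through a 3×3 homography. An illustrative sketch of that projective mapping (not the AHomography implementation):

```python
import numpy as np

def apply_homography(H, x, y):
    # homogeneous multiply, then divide by the projective scale
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

H = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 2.0],
              [0.0, 0.0, 1.0]])  # scale by 2, then translate by (1, 2)
print(apply_homography(H, 3, 4))  # (7.0, 10.0)
```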
# ---
# File: dpsutil/attrdict/defaultdict.py (repo: connortran216/DPS_Util, MIT license)
from inspect import isclass
from .attrdict import AttrDict
class DefaultDict(AttrDict):
"""
DefaultDict restricts a dict to (key, value) pairs that were defined in advance.
Implemented from dpsutil.attrdict.AttrDict.
Example:
your_dict = DefaultDict(a=1, b=2)
your_dict.a # return: 1
# Set key-value if it wasn't defined before
your_dict.c = 4 # raise KeyError
# To avoid KeyError
your_dict.set_default('c', 4)
# Next, you set key a=5
your_dict.a = 5
your_dict.a # return: 5
# After clearing a value, the default value is restored.
your_dict.clear('a')
or
del your_dict['a']
your_dict.a # return: 1
# Delete key.
your_dict.clear('a')
or
del your_dict['a']
# Remove key:
your_dict.del_default('a')
or
your_dict.remove('a')
==================
Supported decorator:
@attrdict.default_dict
Decorator that creates a DefaultDict from the attributes of a class.
Support attribute alias:
@attrdict.default_dict
class CustomDict:
a=1
b=2
custom_dict = CustomDict()
custom_dict.a # return: 1
custom_dict.b # return: 2
Use cases:
- When you need to control your params or hyperparameter config.
- Making sure your dict only stores certain fields.
"""
def __init__(self, *args, **default_params):
self._setattr("__default", {})
self.setdefault(**AttrDict(*args, **default_params))
super().__init__()
def __setitem__(self, key, value):
if not self._default_contain(key):
raise KeyError("Key not in default keys.")
return super().__setitem__(key, value)
def __delitem__(self, key):
"""
Clear and set back value to default.
"""
try:
super().__delitem__(key)
super().__setitem__(key, self.get_default(key))
except KeyError as e:
if self.get_default(key) is None:
raise e
def __setattr__(self, key, value):
if not self._default_contain(key):
raise KeyError("Key not in default keys.")
return super().__setattr__(key, value)
def __delattr__(self, key, **kwargs):
"""
Clear and set back value to default.
"""
try:
super().__delattr__(key)
super().__setattr__(key, self.get_default(key))
except KeyError as e:
if key not in super().__getattribute__(f"_{self.__class__.__name__}__default"):
raise e
def __call__(self, _data):
if type(_data) is bytes:
self.from_buffer(_data)
elif isinstance(_data, dict):
self.update(_data)
else:
raise TypeError(f"Type {type(_data)} isn't supported!")
return self
def _default_contain(self, key):
return key in super().__getattribute__(f"_{self.__class__.__name__}__default")
def get_default(self, key):
return super().__getattribute__(f"_{self.__class__.__name__}__default")[key]
def del_default(self, key):
"""
Clear key and return its value.
"""
value = self.__getitem__(key)
self._del_default(key)
return value
def _del_default(self, key):
super().__delitem__(key)
super().__getattribute__(f"_{self.__class__.__name__}__default").__delitem__(key)
def setdefault(self, _k=None, _v=None, **kwargs):
"""
Change default value of key.
Example:
a = {k: v}
for k, v in a.items():
defaultdict.setdefault(k, v)
or
defaultdict.setdefault(**a)
"""
configs_default = AttrDict(_k, _v, **kwargs)
super().__getattribute__(f"_{self.__class__.__name__}__default").update(configs_default)
self.update(configs_default)
def update(self, __m, **kwargs) -> None:
kwargs.update(__m)
for k, v in kwargs.items():
self[k] = v
def get(self, key):
try:
value = self.__getitem__(key)
except KeyError:
return None
return value
def remove(self, key):
"""
Clear key and value.
"""
self._del_default(key)
def pop(self, key):
"""
Clear key and return its value.
"""
return self.del_default(key)
def popitem(self):
"""
Pop the last item.
:return:
"""
k, v = super().popitem()
super().__getattribute__(f"_{self.__class__.__name__}__default").__delitem__(k)
return k, v
def copy(self):
_copy = DefaultDict(super().__getattribute__(f"_{self.__class__.__name__}__default"))
_copy.update(self)
return _copy
class TypedDict(AttrDict):
"""
Dict that accepts a single type for all elements.
Raises TypeOfValueError if the type of a set value differs from the type defined before.
Example:
your_dict = TypedDict(int)
your_dict.a = 1 # it's working
your_dict.a = 2.0 # raise error TypeOfValueError
# Decorator is not supported
"""
def __init__(self, _type, _args=None, _kwargs=None):
super().__init__()
if _kwargs is None:
_kwargs = {}
if _args is None:
_args = ()
if type(_args) is list:
_args = tuple(_args)
assert type(_args) is tuple
assert isinstance(_kwargs, dict)
if not isclass(_type):
raise TypeError(f"Only support type class. But got {_type}")
self._setattr('__type', _type)
self._setattr('__args', _args)
self._setattr('__kwargs', _kwargs)
def __setitem__(self, key, value=None):
if value is None:
value = self.type(*self.__getattribute__(f"_{self.__class__.__name__}__args"),
**self.__getattribute__(f"_{self.__class__.__name__}__kwargs"))
if not isinstance(value, self.type):
try:
value = self.type(value)
except ValueError as e:
e.args = f"Default is {self.type}. Got {type(value)}",
raise e
super().__setitem__(key, value)
def add(self, key, value=None, force=False):
if key in self and not force:
raise KeyError(f'Key "{key}" already exists!')
self.__setitem__(key, value)
@property
def type(self):
return self.__getattribute__(f"_{self.__class__.__name__}__type")
def setdefault(*args, **kwargs):
raise AttributeError
def default_params(self):
raise AttributeError
@staticmethod
def fromkeys(*args, **kwargs):
raise AttributeError
def set_type(self, _type):
if not isclass(_type):
raise TypeError(f"Only support type class. But got {_type}")
_values = list(self.values())
for idx, _value in enumerate(_values):
_values[idx] = _type(_value)
self._setattr('__type', _type)
for k, v in zip(self.keys(), _values):
self[k] = v
class DefaultTypeDict(DefaultDict):
"""
DefaultTypeDict ensures that items set on the dict match the type of their default value.
Implemented from DefaultDict.
Example:
your_dict = DefaultTypeDict(a=int, b=float)
your_dict.a = 1 # It's working
your_dict.a = "default" # Error TypeOfValueError
Custom class:
class abc(DefaultTypeDict):
a: 1
b: float
your_dict = abc()
your_dict.a = 1 # It's working
your_dict.a = "default" # Error TypeOfValueError
==================
Supported decorator:
@attrdict.default_type_dict
Decorator that creates a DefaultTypeDict based on the attributes of a class.
Raises TypeError if, for the same key, the value's type does not match the type in the annotations.
Support attribute alias:
@attrdict.default_type_dict
class CustomDict:
a: float = 1 # value will be cast to the type defined in the annotation.
b = 2 # if the type is not annotated, the type of the value is used.
c: int = 'abcd' # raise TypeError
# if key 'c' is removed from the class above, the resulting dict behaves like:
custom_dict = CustomDict()
custom_dict.a # return: 1.0
custom_dict.b # return: 2
"""
def __setitem__(self, key, value):
if key not in super().__getattribute__(f"_{self.__class__.__name__}__default"):
raise KeyError(f"Not found '{key}'")
_type = self.get_default(key)
if not isclass(_type):
_type = _type.__class__
if isclass(value):
value = value()
if not isinstance(value, _type):
try:
value = _type(value)
except ValueError as e:
e.args = f"Default is {_type}. Got {type(value)}",
raise e
return super().__setitem__(key, value)
def __setattr__(self, key, value):
if key not in super().__getattribute__(f"_{self.__class__.__name__}__default"):
raise KeyError(f"Not found '{key}'")
_type = self.get_default(key)
if not isclass(_type):
_type = _type.__class__
if isclass(value):
value = value()
if not isinstance(value, _type):
try:
value = _type(value)
except ValueError as e:
e.args = f"Default is {_type}. Got {type(value)}",
raise e
return super().__setattr__(key, value)
def setdefault(self, _k=None, _v=None, **kwargs):
super().setdefault(_k, _v, **kwargs)
if _k:
kwargs.update({_k: _v})
for k, v in kwargs.items():
_type = v if isclass(v) else v.__class__
if k in self and not isinstance(self[k], _type):
self[k] = _type(self[k])
__all__ = ['DefaultDict', 'DefaultTypeDict', 'TypedDict']
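A minimal standalone sketch of the DefaultDict semantics above, using only a plain dict (dpsutil's AttrDict is not required here). The class name and attribute names are assumptions for illustration, not part of the library:

```python
# Only keys defined at construction may be set; "deleting" a key restores
# its default value instead of removing it.
class RestrictedDefaults(dict):
    def __init__(self, **defaults):
        super().__init__(defaults)
        self._defaults = dict(defaults)

    def __setitem__(self, key, value):
        if key not in self._defaults:
            raise KeyError("Key not in default keys.")
        super().__setitem__(key, value)

    def __delitem__(self, key):
        # Deleting resets the key to its default value.
        super().__setitem__(key, self._defaults[key])

d = RestrictedDefaults(a=1, b=2)
d['a'] = 5
del d['a']      # resets to the default
print(d['a'])   # 1
```

This captures the two core behaviours documented in DefaultDict: setting an undefined key raises KeyError, and deletion restores the default.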
| 29.108883 | 96 | 0.571021 | 1,202 | 10,159 | 4.451747 | 0.142263 | 0.035881 | 0.020183 | 0.049337 | 0.447767 | 0.36666 | 0.317137 | 0.288357 | 0.237526 | 0.211736 | 0 | 0.003479 | 0.320898 | 10,159 | 348 | 97 | 29.192529 | 0.772141 | 0.28802 | 0 | 0.439024 | 0 | 0 | 0.122568 | 0.061808 | 0 | 0 | 0 | 0 | 0.012195 | 1 | 0.170732 | false | 0 | 0.012195 | 0.018293 | 0.286585 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5257e0aa35e7430b3b24144db93916b29ef06292 | 5,196 | py | Python | assets/src/ba_data/python/ba/osmusic.py | Benefit-Zebra/ballistica | eb85df82cff22038e74a2d93abdcbe9cd755d782 | [
"MIT"
] | 6 | 2021-04-16T14:25:25.000Z | 2021-11-18T17:20:19.000Z | assets/src/ba_data/python/ba/osmusic.py | Benefit-Zebra/ballistica | eb85df82cff22038e74a2d93abdcbe9cd755d782 | [
"MIT"
] | 1 | 2021-08-30T10:09:06.000Z | 2021-09-21T10:44:15.000Z | assets/src/ba_data/python/ba/osmusic.py | Benefit-Zebra/ballistica | eb85df82cff22038e74a2d93abdcbe9cd755d782 | [
"MIT"
] | 2 | 2021-04-20T15:39:27.000Z | 2021-07-18T08:45:56.000Z | # Released under the MIT License. See LICENSE for details.
#
"""Music playback using OS functionality exposed through the C++ layer."""
from __future__ import annotations
import os
import random
import threading
from typing import TYPE_CHECKING
import _ba
from ba._music import MusicPlayer
if TYPE_CHECKING:
from typing import Callable, Any, Union, List, Optional
class OSMusicPlayer(MusicPlayer):
"""Music player that talks to internal C++ layer for functionality.
(internal)"""
def __init__(self) -> None:
super().__init__()
self._want_to_play = False
self._actually_playing = False
@classmethod
def get_valid_music_file_extensions(cls) -> List[str]:
"""Return file extensions for types playable on this device."""
# FIXME: should ask the C++ layer for these; just hard-coding for now.
return ['mp3', 'ogg', 'm4a', 'wav', 'flac', 'mid']
def on_select_entry(self, callback: Callable[[Any], None],
current_entry: Any, selection_target_name: str) -> Any:
# pylint: disable=cyclic-import
from bastd.ui.soundtrack.entrytypeselect import (
SoundtrackEntryTypeSelectWindow)
return SoundtrackEntryTypeSelectWindow(callback, current_entry,
selection_target_name)
def on_set_volume(self, volume: float) -> None:
_ba.music_player_set_volume(volume)
def on_play(self, entry: Any) -> None:
music = _ba.app.music
entry_type = music.get_soundtrack_entry_type(entry)
name = music.get_soundtrack_entry_name(entry)
assert name is not None
if entry_type == 'musicFile':
self._want_to_play = self._actually_playing = True
_ba.music_player_play(name)
elif entry_type == 'musicFolder':
# Launch a thread to scan this folder and give us a random
# valid file within it.
self._want_to_play = True
self._actually_playing = False
_PickFolderSongThread(name, self.get_valid_music_file_extensions(),
self._on_play_folder_cb).start()
def _on_play_folder_cb(self,
result: Union[str, List[str]],
error: Optional[str] = None) -> None:
from ba import _language
if error is not None:
rstr = (_language.Lstr(
resource='internal.errorPlayingMusicText').evaluate())
if isinstance(result, str):
err_str = (rstr.replace('${MUSIC}', os.path.basename(result)) +
'; ' + str(error))
else:
err_str = (rstr.replace('${MUSIC}', '<multiple>') + '; ' +
str(error))
_ba.screenmessage(err_str, color=(1, 0, 0))
return
# There's a chance a stop could have been issued before our thread
# returned. If that's the case, don't play.
if not self._want_to_play:
print('_on_play_folder_cb called with _want_to_play False')
else:
self._actually_playing = True
_ba.music_player_play(result)
def on_stop(self) -> None:
self._want_to_play = False
self._actually_playing = False
_ba.music_player_stop()
def on_app_shutdown(self) -> None:
_ba.music_player_shutdown()
class _PickFolderSongThread(threading.Thread):
def __init__(self, path: str, valid_extensions: List[str],
callback: Callable[[Union[str, List[str]], Optional[str]],
None]):
super().__init__()
self._valid_extensions = valid_extensions
self._callback = callback
self._path = path
def run(self) -> None:
from ba import _language
from ba._general import Call
do_print_error = True
try:
_ba.set_thread_name('BA_PickFolderSongThread')
all_files: List[str] = []
valid_extensions = ['.' + x for x in self._valid_extensions]
for root, _subdirs, filenames in os.walk(self._path):
for fname in filenames:
if any(fname.lower().endswith(ext)
for ext in valid_extensions):
all_files.insert(random.randrange(len(all_files) + 1),
root + '/' + fname)
if not all_files:
do_print_error = False
raise RuntimeError(
_language.Lstr(resource='internal.noMusicFilesInFolderText'
).evaluate())
_ba.pushcall(Call(self._callback, all_files, None),
from_other_thread=True)
except Exception as exc:
from ba import _error
if do_print_error:
_error.print_exception()
try:
err_str = str(exc)
except Exception:
err_str = '<ENCERR4523>'
_ba.pushcall(Call(self._callback, self._path, err_str),
from_other_thread=True)
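The folder-scan thread above builds its playlist by inserting each matching file at a uniformly random index, which yields a shuffled list without a separate shuffle pass (the "inside-out" variant of Fisher-Yates). A hedged standalone sketch of that loop, with an assumed function name and a flat filename list instead of `os.walk`:

```python
import random

# Collect files with a valid extension, inserting each at a random
# position so the final playlist order is uniformly random.
def pick_folder_songs(filenames, valid_extensions=('mp3', 'ogg', 'wav')):
    exts = tuple('.' + e for e in valid_extensions)
    shuffled = []
    for fname in filenames:
        if fname.lower().endswith(exts):
            shuffled.insert(random.randrange(len(shuffled) + 1), fname)
    return shuffled

songs = pick_folder_songs(['a.mp3', 'b.txt', 'c.ogg', 'd.WAV'])
print(sorted(songs))  # ['a.mp3', 'c.ogg', 'd.WAV']
```

The order of `songs` varies per run; only the set of selected files is deterministic.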
| 38.205882 | 79 | 0.580062 | 576 | 5,196 | 4.946181 | 0.319444 | 0.014742 | 0.02106 | 0.02457 | 0.127764 | 0.058266 | 0.058266 | 0.058266 | 0.030186 | 0 | 0 | 0.002879 | 0.331601 | 5,196 | 135 | 80 | 38.488889 | 0.817449 | 0.105081 | 0 | 0.145631 | 0 | 0 | 0.047372 | 0.018603 | 0 | 0 | 0 | 0.007407 | 0.009709 | 1 | 0.097087 | false | 0 | 0.126214 | 0 | 0.271845 | 0.048544 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52594cf6b7777920a97c3a55f55f74487fe1da36 | 2,710 | py | Python | eukarya/scripts_nonsql/run_ROC_stats.py | ESDeutekom/ComparingOrthologies | 090b95c29f70865f2c3d6d408f565482f49c0f44 | [
"MIT"
] | 2 | 2020-07-15T10:37:31.000Z | 2020-10-22T08:44:21.000Z | eukarya/scripts_nonsql/run_ROC_stats.py | ESDeutekom/ComparingOrthologies | 090b95c29f70865f2c3d6d408f565482f49c0f44 | [
"MIT"
] | 1 | 2021-11-05T02:40:09.000Z | 2021-11-10T10:36:57.000Z | eukarya/scripts_nonsql/run_ROC_stats.py | ESDeutekom/ComparingOrthologies | 090b95c29f70865f2c3d6d408f565482f49c0f44 | [
"MIT"
] | null | null | null | #!/hosts/linuxhome/scarab/eva2/Programs/miniconda3/bin/python
#python3
import sys
import random
import pandas as pd
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, auc
from multiprocessing import Process
from ROC_statistics import *
#Get ROC statistics with permutations and bootstrapping
#There is a large difference in data size between the positive and negative sets.
#Run from main directory
file_dir = "./Results/Distances/"
#Interaction files mapped to orthologies and their distances between phylogenetic profiles
files = ["eggnog_diamond_distances",\
"eggnog_hmmer_corrected_distances",\
"orthofinder_diamond_e-3_distances",\
"orthofinder_blast_e-3_distances",\
"broccoli_distances",\
"panther_different_distances",\
"Sonicparanoid_sensitive_distances",\
"Swiftortho_c50_distances"]
out_file = "./Results/roc_bootstraps"
file_list = ["".join([file_dir, el]) for el in files]
def bootstrap_all(file_list, out_file):
#make dictionary with all the dataframes of different distances
data_dict = {}
for file in file_list:
name = file.split("/")[-1]
#only read in pair, cosine and Interaction; exclude the 'RusselNeg' pseudo-negative set.
data_dict[name] = pd.read_csv(file, engine = 'python', sep = "\t", index_col = False, \
usecols=['pair', 'cosine', 'Interaction']).query('Interaction != "RusselNeg"')
data_dict[name].loc[:,'cosine'].astype(float)
#data_dict['egg_hmm_distances_5pubID']#.query('Interaction != "BioGrid"').sort_values(by = ['Interaction'], ascending = True)
#confidence intervals
c_ints = {} #confidence intervals dictionary (key = orthology, value = confidence interval)
#permute_auc = {} #
for name in data_dict:
#default boots/perms = 1000, bootstrap the AUC values
#since we have an extreme positive and negative set size difference,
#do we need to make a selection of the big negative set first?
#or, as John said, is the imbalance also not a problem when calculating the AUC?
c_ints[name] = bootstrap_auc(data_dict[name].loc[:,'Interaction'],\
data_dict[name].loc[:,'cosine'])
for name, value in c_ints.items():
print("%s\t%f\t%f\n" % (name, value[0], value[1]))
return c_ints
if __name__ == '__main__':
# Process(target=bootstrap_all, args =(file_list,)).start()
Process(target=bootstrap_all, args =(file_list[0:2],out_file,)).start()
Process(target=bootstrap_all, args = (file_list[2:4],out_file,)).start()
Process(target=bootstrap_all, args = (file_list[4:6],out_file,)).start()
Process(target=bootstrap_all, args = (file_list[6:8],out_file,)).start()
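The script relies on `bootstrap_auc` from the local `ROC_statistics` module, which is not shown here. A hedged sketch of what such a function typically does: resample (label, score) pairs with replacement, compute the AUC of each resample (here via the Mann-Whitney U statistic rather than sklearn), and report a percentile confidence interval. All names and defaults below are assumptions, not the original implementation:

```python
import random

# AUC as the probability that a random positive outscores a random negative
# (Mann-Whitney U / (n_pos * n_neg)), with ties counted as half wins.
def auc_mann_whitney(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Percentile bootstrap confidence interval for the AUC.
def bootstrap_auc(labels, scores, n_boots=1000, alpha=0.05):
    pairs = list(zip(labels, scores))
    aucs = []
    for _ in range(n_boots):
        sample = [random.choice(pairs) for _ in pairs]
        ls, ss = zip(*sample)
        if 0 < sum(ls) < len(ls):  # need both classes in the resample
            aucs.append(auc_mann_whitney(ls, ss))
    aucs.sort()
    lo = aucs[int(alpha / 2 * len(aucs))]
    hi = aucs[int((1 - alpha / 2) * len(aucs)) - 1]
    return lo, hi

random.seed(1)
labels = [1] * 20 + [0] * 20
scores = [0.9] * 20 + [0.1] * 20
print(bootstrap_auc(labels, scores, n_boots=200))  # (1.0, 1.0)
```

Note the rank-based AUC is insensitive to class imbalance in expectation, which is why the comment above asks whether subsampling the negative set is even necessary; imbalance mainly widens the bootstrap interval.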
| 45.932203 | 129 | 0.703321 | 368 | 2,710 | 4.983696 | 0.459239 | 0.034896 | 0.059978 | 0.068157 | 0.146129 | 0.123228 | 0.123228 | 0.103053 | 0.080153 | 0.080153 | 0 | 0.0103 | 0.176015 | 2,710 | 58 | 130 | 46.724138 | 0.811017 | 0.362731 | 0 | 0.052632 | 0 | 0 | 0.2137 | 0.133489 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026316 | false | 0 | 0.210526 | 0 | 0.263158 | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
525cae8718f08e705c6cbc8f68291e4c9f2f821a | 1,494 | py | Python | src/zaif.py | colticol/CryptoCurrencyProfit | 99fd29b828c7f5a21b02051249e66a90ba2ecf1f | [
"BSD-2-Clause"
] | null | null | null | src/zaif.py | colticol/CryptoCurrencyProfit | 99fd29b828c7f5a21b02051249e66a90ba2ecf1f | [
"BSD-2-Clause"
] | 3 | 2018-01-11T07:34:50.000Z | 2018-07-24T13:48:24.000Z | src/zaif.py | colticol/CryptoCurrencyTax | 99fd29b828c7f5a21b02051249e66a90ba2ecf1f | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
import pandas as pd
import re
from exchange import Exchange
class Zaif(Exchange):
"""docstring for Zaif"""
def __init__(self, f_trades, jpy):
self.trades = pd.read_csv(f_trades, parse_dates=['日時'])
self.trades['日時'] = self.trades['日時'].dt.normalize()
self.trades = pd.merge(self.trades, jpy.getJPY(), left_on='日時', right_on='snapped_at', how='left')
def calcTotal(self, controller):
# for trades
for _, row in self.getTrades().iterrows():
# Set Currency Unit
unit_amount, unit = self.splitUnitCurrency(row['価格'])
if unit == 'BTC' or unit == 'BCH':
price = unit_amount * row['p' + unit]
elif unit == 'JPY':
price = unit_amount * 1.0
else:
print('[Zaif]: Unknown 価格 ' + row['価格'])
# Set Dictionary
amount, currency = self.splitUnitCurrency(row['数量'])
# Buy or Sell
if row['注文'] == '買い': # buy
controller.buy(currency, price * amount, amount)
controller.sell(unit, price * amount, price * amount)
elif row['注文'] == '売り': # sell
controller.sell(currency, price * amount, amount)
controller.buy(unit, price * amount, price * amount)
return controller
def splitUnitCurrency(self, s):
index = re.search('[A-Z]', s).start()
return float(s[:index]), s[index:]
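A standalone sketch of `splitUnitCurrency` above: Zaif's CSV stores quantities as strings such as `'0.05BTC'`, so the string is split at the first uppercase letter into a numeric amount and a currency code. The function name is an assumption; the logic mirrors the method:

```python
import re

# Split '0.05BTC' -> (0.05, 'BTC') at the first uppercase letter.
def split_unit_currency(s):
    index = re.search('[A-Z]', s).start()
    return float(s[:index]), s[index:]

print(split_unit_currency('0.05BTC'))  # (0.05, 'BTC')
print(split_unit_currency('1200JPY'))  # (1200.0, 'JPY')
```

Note that `re.search` returns `None` for strings with no uppercase letter, so malformed rows would raise an AttributeError here, as in the original.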
| 38.307692 | 106 | 0.544846 | 175 | 1,494 | 4.565714 | 0.422857 | 0.082603 | 0.030038 | 0.035044 | 0.152691 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002918 | 0.311914 | 1,494 | 38 | 107 | 39.315789 | 0.774319 | 0.07095 | 0 | 0 | 0 | 0 | 0.050872 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0 | 0.107143 | 0 | 0.321429 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
525e334714eb214e4a6f911c418914177e016131 | 3,543 | py | Python | pretix_pizzabot/management/commands/import_appsmart.py | raphaelm/pretix-pizzabot | 4ef2d4a1c5a199fcfe32d4da926521622f20784f | [
"Apache-2.0"
] | null | null | null | pretix_pizzabot/management/commands/import_appsmart.py | raphaelm/pretix-pizzabot | 4ef2d4a1c5a199fcfe32d4da926521622f20784f | [
"Apache-2.0"
] | null | null | null | pretix_pizzabot/management/commands/import_appsmart.py | raphaelm/pretix-pizzabot | 4ef2d4a1c5a199fcfe32d4da926521622f20784f | [
"Apache-2.0"
] | null | null | null | import sys
import requests
from django.core.files.base import ContentFile
from django.core.management.base import BaseCommand
from pretix.base.models import Event
class Command(BaseCommand):
help = "Import data from appsmart"
def add_arguments(self, parser):
parser.add_argument('event_id', type=int)
parser.add_argument('url', type=str)
parser.add_argument('branch_id', type=int)
def handle(self, *args, **options):
event = Event.objects.get(pk=options.get('event_id'))
event.items.all().delete()
event.categories.all().delete()
event.quotas.get_or_create(name="all", size=None)
r = requests.get(
'{url}get-categories/{branch_id}'.format(**options)
)
for i, record in enumerate(r.json()['d']):
self.add_category(options, i, event, record)
def add_category(self, options, i, event, record):
cat = event.categories.create(
name=record.get('name'),
description=record.get('description') or "",
position=i
)
r = requests.get(
'{options[url]}get-products-of-category/{options[branch_id]}/{r[id]}'.format(
options=options, r=record
)
)
for i, record in enumerate(r.json()['d']):
self.add_item(options, i, event, cat, record)
def add_item(self, options, i, event, category, record):
r = requests.get(
'{options[url]}get-single-product/{options[branch_id]}/{r[id]}'.format(
options=options, r=record
)
)
record = r.json()['d']
for sizeid, s in record['sizes'].items():
item = event.items.create(
name='{record[name]} – {s[name]}'.format(record=record, s=s),
description=record['description'] or '',
category=category,
position=i,
default_price=s.get('delivery_price')
)
event.quotas.get(name="all").items.add(item)
if record.get('picurl'):
imgf = requests.get(record.get('picurl'))
item.picture.save(
'picture.jpg',
ContentFile(imgf.content)
)
item.save()
for ig in record.get('basic_ingredients_groups').values():
self.add_ingredients_group(sizeid, i, event, item, ig)
for ig in record.get('extra_ingredients_groups').values():
self.add_ingredients_group(sizeid, i, event, item, ig)
def add_ingredients_group(self, sizeid, i, event, item, record):
cat = event.categories.create(
name='{} – {}'.format(
record.get('description'),
item.name,
),
is_addon=True
)
max_quan = record.get('max_quan')
if 0 < record.get('free_quan') < max_quan:
max_quan = record.get('free_quan')
if max_quan == -1:
max_quan = len(record.get('ingredients')) + 1
item.addons.create(
addon_category=cat,
min_count=record.get('min_quan'),
max_count=max_quan
)
for i in record.get('ingredients').values():
item = event.items.create(
name=i.get('name'),
category=cat,
default_price=i.get('price_diff').get(sizeid).get('price')
)
event.quotas.get(name="all").items.add(item)
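The add-on quantity rule in `add_ingredients_group` can be isolated as a small pure function: a positive free quantity caps the maximum, and `-1` means unlimited, modelled as one more than the number of ingredients. The function name is an assumption for illustration:

```python
# Derive the effective maximum add-on count from the appsmart record fields.
def effective_max_quan(max_quan, free_quan, n_ingredients):
    if 0 < free_quan < max_quan:
        max_quan = free_quan
    if max_quan == -1:
        max_quan = n_ingredients + 1
    return max_quan

print(effective_max_quan(5, 2, 10))   # 2  (free quantity caps the max)
print(effective_max_quan(-1, 0, 10))  # 11 (unlimited -> ingredients + 1)
```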
| 35.079208 | 89 | 0.546712 | 409 | 3,543 | 4.633252 | 0.242054 | 0.061741 | 0.027441 | 0.02533 | 0.295515 | 0.253298 | 0.191029 | 0.191029 | 0.191029 | 0.150923 | 0 | 0.001238 | 0.315834 | 3,543 | 100 | 90 | 35.43 | 0.779703 | 0 | 0 | 0.174419 | 0 | 0.011628 | 0.129551 | 0.058425 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05814 | false | 0 | 0.069767 | 0 | 0.151163 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52624e098464fc85ce6b1d78ae147345b2406918 | 4,710 | py | Python | parakeet/frontend/cn_frontend.py | lym0302/Parakeet | 97b7000aa2be182d3ff4681f435f8c1463e97083 | [
"Apache-2.0"
] | null | null | null | parakeet/frontend/cn_frontend.py | lym0302/Parakeet | 97b7000aa2be182d3ff4681f435f8c1463e97083 | [
"Apache-2.0"
] | null | null | null | parakeet/frontend/cn_frontend.py | lym0302/Parakeet | 97b7000aa2be182d3ff4681f435f8c1463e97083 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import jieba.posseg as psg
import numpy as np
import paddle
import re
from g2pM import G2pM
from parakeet.frontend.tone_sandhi import ToneSandhi
from parakeet.frontend.cn_normalization.text_normlization import TextNormalizer
from pypinyin import lazy_pinyin, Style
from parakeet.frontend.generate_lexicon import generate_lexicon
class Frontend():
def __init__(self, g2p_model="pypinyin"):
self.tone_modifier = ToneSandhi()
self.text_normalizer = TextNormalizer()
self.punc = ":,;。?!“”‘’':,;.?!"
# g2p_model can be pypinyin and g2pM
self.g2p_model = g2p_model
if self.g2p_model == "g2pM":
self.g2pM_model = G2pM()
self.pinyin2phone = generate_lexicon(
with_tone=True, with_erhua=False)
def _get_initials_finals(self, word):
initials = []
finals = []
if self.g2p_model == "pypinyin":
orig_initials = lazy_pinyin(
word, neutral_tone_with_five=True, style=Style.INITIALS)
orig_finals = lazy_pinyin(
word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
for c, v in zip(orig_initials, orig_finals):
if re.match(r'i\d', v):
if c in ['z', 'c', 's']:
v = re.sub('i', 'ii', v)
elif c in ['zh', 'ch', 'sh', 'r']:
v = re.sub('i', 'iii', v)
initials.append(c)
finals.append(v)
elif self.g2p_model == "g2pM":
pinyins = self.g2pM_model(word, tone=True, char_split=False)
for pinyin in pinyins:
pinyin = pinyin.replace("u:", "v")
if pinyin in self.pinyin2phone:
initial_final_list = self.pinyin2phone[pinyin].split(" ")
if len(initial_final_list) == 2:
initials.append(initial_final_list[0])
finals.append(initial_final_list[1])
elif len(initial_final_list) == 1:
initials.append('')
finals.append(initial_final_list[1])
else:
# If it's not pinyin (possibly punctuation) or no conversion is required
initials.append(pinyin)
finals.append(pinyin)
return initials, finals
# if merge_sentences, merge all sentences into one phone sequence
def _g2p(self, sentences, merge_sentences=True):
segments = sentences
phones_list = []
for seg in segments:
phones = []
seg = psg.lcut(seg)
initials = []
finals = []
seg = self.tone_modifier.pre_merge_for_modify(seg)
for word, pos in seg:
if pos == 'eng':
continue
sub_initials, sub_finals = self._get_initials_finals(word)
sub_finals = self.tone_modifier.modified_tone(word, pos,
sub_finals)
initials.append(sub_initials)
finals.append(sub_finals)
# assert len(sub_initials) == len(sub_finals) == len(word)
initials = sum(initials, [])
finals = sum(finals, [])
for c, v in zip(initials, finals):
# NOTE: post process for pypinyin outputs
# we discriminate i, ii and iii
if c and c not in self.punc:
phones.append(c)
if v and v not in self.punc:
phones.append(v)
# add sp between sentence (replace the last punc with sp)
if initials[-1] in self.punc:
phones.append('sp')
phones_list.append(phones)
if merge_sentences:
phones_list = sum(phones_list, [])
return phones_list
def get_phonemes(self, sentence):
sentences = self.text_normalizer.normalize(sentence)
phonemes = self._g2p(sentences)
return phonemes
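The pypinyin branch of `_get_initials_finals` rewrites the final `i` to `ii` after z/c/s and to `iii` after zh/ch/sh/r, so the three phonetically distinct "i" vowels of Mandarin get distinct phone symbols. A standalone sketch of just that post-processing step (the function name is an assumption; the rewrite rule is taken from the code above):

```python
import re

# Disambiguate the pinyin final 'i' based on the preceding initial.
def postprocess(initial, final):
    if re.match(r'i\d', final):
        if initial in ['z', 'c', 's']:
            final = re.sub('i', 'ii', final)
        elif initial in ['zh', 'ch', 'sh', 'r']:
            final = re.sub('i', 'iii', final)
    return initial, final

print(postprocess('s', 'i4'))   # ('s', 'ii4')
print(postprocess('sh', 'i4'))  # ('sh', 'iii4')
print(postprocess('x', 'i4'))   # ('x', 'i4')  -- unchanged
```

The trailing digit is the tone number produced by `Style.FINALS_TONE3`, with `neutral_tone_with_five=True` mapping the neutral tone to 5.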
| 41.681416 | 92 | 0.572187 | 552 | 4,710 | 4.733696 | 0.327899 | 0.042863 | 0.036739 | 0.025258 | 0.093379 | 0.077306 | 0.035974 | 0.035974 | 0.035974 | 0.035974 | 0 | 0.011247 | 0.339278 | 4,710 | 112 | 93 | 42.053571 | 0.828406 | 0.198726 | 0 | 0.070588 | 0 | 0 | 0.018652 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047059 | false | 0 | 0.105882 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52629716534d1a743641bdea955149cf6682212e | 1,580 | py | Python | samples/analyzer_level/dotnet/annotation/main.py | CAST-projects/Extension-SDK | 7d9233d8e94bf72d3dd516257bc16838f35307ab | [
"MIT"
] | 3 | 2017-09-24T21:21:37.000Z | 2022-03-09T04:08:46.000Z | samples/analyzer_level/dotnet/annotation/main.py | CAST-projects/Extension-SDK | 7d9233d8e94bf72d3dd516257bc16838f35307ab | [
"MIT"
] | null | null | null | samples/analyzer_level/dotnet/annotation/main.py | CAST-projects/Extension-SDK | 7d9233d8e94bf72d3dd516257bc16838f35307ab | [
"MIT"
] | 4 | 2016-09-06T06:24:41.000Z | 2020-01-28T17:17:16.000Z | from cast.analysers import log, external_link, filter, create_link
import cast.analysers.dotnet
def link_to_table(type_, table_name):
# search all tables or views with table_name as name
tables = external_link.find_objects(table_name, filter.tables_or_views)
# the position of the link will be the position of the class
position = type_.get_position()
for table in tables:
create_link('useLink', type_, table, position)
class Extension(cast.analysers.dotnet.Extension):
"""
Handle links from classes to tables with System.Data.Linq.Mapping.TableAttribute
"""
def start_type(self, type_):
"""
@type type_: cast.analysers.Type
"""
# iterate on annotations of the class
for annotation in type_.get_annotations():
# annotation is a structure, first element is the 'annotation class'
annotationType = annotation[0]
if annotationType.get_fullname() == 'System.Data.Linq.Mapping.TableAttribute':
# now we can focus on parameters of the annotation
parameters = annotation[1]
if 'Name' in parameters:
table_name = parameters['Name']
else:
# default value when no parameter is provided...
table_name = type_.get_name()
log.debug('firing link between ' + str(type_) + ' and ' + table_name)
link_to_table(type_, table_name)
| 33.617021 | 90 | 0.599367 | 179 | 1,580 | 5.111732 | 0.424581 | 0.068852 | 0.04153 | 0.032787 | 0.128962 | 0.052459 | 0 | 0 | 0 | 0 | 0 | 0.001874 | 0.324684 | 1,580 | 46 | 91 | 34.347826 | 0.85567 | 0.268354 | 0 | 0 | 0 | 0 | 0.071107 | 0.035104 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.105263 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |