hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
30838b215cd243be104eaed5f18458e8b036722f | 4,342 | py | Python | main.py | 1blackghost/Fall_Management | ebd980c9866a873d681deca56e32e4a94ed64502 | [
"MIT"
] | 2 | 2021-12-17T15:49:36.000Z | 2022-01-27T05:14:34.000Z | main.py | 1blackghost/Fall_Management | ebd980c9866a873d681deca56e32e4a94ed64502 | [
"MIT"
] | null | null | null | main.py | 1blackghost/Fall_Management | ebd980c9866a873d681deca56e32e4a94ed64502 | [
"MIT"
] | null | null | null | from flask import Flask, render_template, request, session, redirect, url_for
app=Flask(__name__)
app.config['SECRET_KEY']="thisisasecretkey"
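# NOTE: this demo persists records as Python literals in data.txt/add.txt and
# parses them back with eval(); fine for a toy app, unsafe for production data.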
@app.route("/logout")
def logout():
if 'user' in session and 'role' in session:
session.pop('user',None)
session.pop('role',None)
return redirect(url_for('home'))
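# Main view: for regular users it first completes a second signup step (medical
# history saved to add.txt), then shows their record; for ambulance drivers it
# parses the stored location string and renders a map.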
@app.route('/home',methods=['GET','POST'])
def root():
if 'user' in session and 'role' in session:
if session['step']=="False":
if request.method=='POST':
history=request.form['history']
medicines=request.form['medicines']
doctor=request.form["doctor"]
r1=request.form['r1']
r2=request.form['r2']
with open("add.txt",'r') as f:
d=eval(f.read())
main=[]
main.append(session['user'])
main.append(history)
main.append(medicines)
main.append(doctor)
main.append(r1)
main.append(r2)
d.append(main)
with open("add.txt",'w') as f:
f.write(str(d))
with open("data.txt",'r') as f:
d=eval(f.read())
for i in d:
if i[0]==session['user']:
i[5]="True"
with open('data.txt','w') as f:
f.write(str(d))
session['step']="True"
return redirect(url_for('root'))
return render_template('step2.html')
if session['role']=="Ambulance Driver":
with open("data.txt",'r') as f:
d=eval(f.read())
for i in d:
if i[0]==session['user']:
try:
param=i[4]
param=param.split(":")
lat=param[4]
lng=param[6]
except Exception:
return render_template("ambulance.html",username=session['user'],error="Oops! Maps Could Not Be Loaded With Location Off!")
if request.method=="POST":
return render_template('map.html',lat=lat,lng=lng,name="Your Location: ",error="")
return render_template("ambulance.html",username=session['user'])
with open("add.txt",'r') as f:
d=eval(f.read())
for i in d:
if i[0]==session['user']:
history=i[1]
medicines=i[2]
doctor=i[3]
r1=i[4]
r2=i[5]
return render_template('root.html',username=session['user'],history=history,medicines=medicines,doctor=doctor,r1=r1,r2=r2)
else:
return redirect(url_for('login'))
@app.errorhandler(404)
def page_not_found(e):
print(e)
return render_template('404.html'), 404
@app.route("/")
def home():
return render_template("home.html")
@app.route("/signup",methods=["GET","POST"])
def signup():
if request.method=="POST":
username=request.form['name']
email=request.form['email']
password=request.form['password']
conf_password=request.form['conf']
loc=request.form["demo"]
print(password,conf_password)
error="Something went wrong!"
try:
role=request.form['options']
except Exception:
return render_template("signup.html",error=error)
if username=="":
return render_template("signup.html",error=error)
if password=="":
return render_template("signup.html",error=error)
if email=="":
return render_template("signup.html",error=error)
if conf_password=="":
return render_template("signup.html",error=error)
if role=="":
return render_template("signup.html",error="Role Error")
if str(password)!=str(conf_password):
return render_template("signup.html",error="Passwords Don't Match!")
data_list=[]
data_list.append(username)
data_list.append(email)
data_list.append(password)
data_list.append(role)
data_list.append(loc)
if role=="User":
data_list.append("False")
else:
data_list.append("None")
with open("data.txt",'r') as f:
d=eval(f.read())
d.append(data_list)
with open('data.txt','w') as f:
f.write(str(d))
session['user']=str(username)
session['role']=str(role)
if role=="User":
session['step']=str("False")
else:
session['step']=str("True")
return redirect(url_for("root"))
return render_template("signup.html")
@app.route("/login",methods=['GET','POST'])
def login():
if request.method=="POST":
username=request.form['name']
password=request.form['password']
loc=request.form["demo"]
with open("data.txt","r") as f:
d=eval(f.read())
In=False
for i in d:
if i[0]==username or i[1]==username:
if i[2]==password:
session['user']=username
session['role']=str(i[3])
session['step']=i[5]
In=True
return redirect(url_for('root'))
if not In:
return render_template("login.html",error="No Match Or User Not Found!")
return render_template("login.html")
if __name__=="__main__":
app.run(debug=True) | 27.308176 | 129 | 0.651082 | 644 | 4,342 | 4.312112 | 0.178571 | 0.073461 | 0.122434 | 0.074901 | 0.39323 | 0.368743 | 0.346057 | 0.342096 | 0.182211 | 0.111631 | 0 | 0.009836 | 0.15707 | 4,342 | 159 | 130 | 27.308176 | 0.748907 | 0 | 0 | 0.342466 | 0 | 0 | 0.17085 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041096 | false | 0.068493 | 0.006849 | 0.006849 | 0.19863 | 0.013699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3086deb196e467ecb0e3c292f9798f7ee852eb32 | 6,468 | py | Python | data_structure/busca_profundidade_largura.py | uadson/data-structure | e7c62ff732b9b89e57b9b08dfc6f777e57a52397 | [
"MIT"
] | null | null | null | data_structure/busca_profundidade_largura.py | uadson/data-structure | e7c62ff732b9b89e57b9b08dfc6f777e57a52397 | [
"MIT"
] | null | null | null | data_structure/busca_profundidade_largura.py | uadson/data-structure | e7c62ff732b9b89e57b9b08dfc6f777e57a52397 | [
"MIT"
] | null | null | null | class Vertice:
def __init__(self, rotulo):
self.rotulo = rotulo
self.visitado = False
self.adjacentes = []
def adiciona_adjacente(self, adjacente):
self.adjacentes.append(adjacente)
def mostra_adjacentes(self):
for i in self.adjacentes:
print(i.vertice.rotulo, i.custo)
class Adjacente:
def __init__(self, vertice, custo):
self.vertice = vertice
self.custo = custo
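# Grafo: the Romania road map from Russell & Norvig's AIMA book, the classic
# worked example for search algorithms (edge values are road distances).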
class Grafo:
arad = Vertice('Arad')
zerind = Vertice('Zerind')
oradea = Vertice('Oradea')
sibiu = Vertice('Sibiu')
timisoara = Vertice('Timisoara')
lugoj = Vertice('Lugoj')
mehadia = Vertice('Mehadia')
dobreta = Vertice('Dobreta')
craiova = Vertice('Craiova')
rimnicu = Vertice('Rimnicu')
fagaras = Vertice('Fagaras')
pitesti = Vertice('Pitesti')
bucharest = Vertice('Bucharest')
giurgiu = Vertice('Giurgiu')
arad.adiciona_adjacente(Adjacente(zerind, 75))
arad.adiciona_adjacente(Adjacente(sibiu, 140))
arad.adiciona_adjacente(Adjacente(timisoara, 118))
zerind.adiciona_adjacente(Adjacente(arad, 75))
zerind.adiciona_adjacente(Adjacente(oradea, 71))
oradea.adiciona_adjacente(Adjacente(zerind, 71))
oradea.adiciona_adjacente(Adjacente(sibiu, 151))
sibiu.adiciona_adjacente(Adjacente(oradea, 151))
sibiu.adiciona_adjacente(Adjacente(arad, 140))
sibiu.adiciona_adjacente(Adjacente(fagaras, 99))
sibiu.adiciona_adjacente(Adjacente(rimnicu, 80))
timisoara.adiciona_adjacente(Adjacente(arad, 118))
timisoara.adiciona_adjacente(Adjacente(lugoj, 111))
lugoj.adiciona_adjacente(Adjacente(timisoara, 111))
lugoj.adiciona_adjacente(Adjacente(mehadia, 70))
mehadia.adiciona_adjacente(Adjacente(lugoj, 70))
mehadia.adiciona_adjacente(Adjacente(dobreta, 75))
dobreta.adiciona_adjacente(Adjacente(mehadia, 75))
dobreta.adiciona_adjacente(Adjacente(craiova, 120))
craiova.adiciona_adjacente(Adjacente(dobreta, 120))
craiova.adiciona_adjacente(Adjacente(pitesti, 138))
craiova.adiciona_adjacente(Adjacente(rimnicu, 146))
rimnicu.adiciona_adjacente(Adjacente(craiova, 146))
rimnicu.adiciona_adjacente(Adjacente(sibiu, 80))
rimnicu.adiciona_adjacente(Adjacente(pitesti, 97))
fagaras.adiciona_adjacente(Adjacente(sibiu, 99))
fagaras.adiciona_adjacente(Adjacente(bucharest, 211))
pitesti.adiciona_adjacente(Adjacente(rimnicu, 97))
pitesti.adiciona_adjacente(Adjacente(craiova, 138))
pitesti.adiciona_adjacente(Adjacente(bucharest, 101))
bucharest.adiciona_adjacente(Adjacente(fagaras, 211))
bucharest.adiciona_adjacente(Adjacente(pitesti, 101))
bucharest.adiciona_adjacente(Adjacente(giurgiu, 90))
grafo = Grafo()
import numpy as np
class FilaCircular:
def __init__(self, capacidade):
self.capacidade = capacidade
self.inicio = 0
self.final = -1
self.numero_elementos = 0
# Changed the data type so the array can hold arbitrary objects
self.valores = np.empty(self.capacidade, dtype=object)
def __fila_vazia(self):
return self.numero_elementos == 0
def __fila_cheia(self):
return self.numero_elementos == self.capacidade
def enfileirar(self, valor):
if self.__fila_cheia():
print('A fila está cheia')
return
if self.final == self.capacidade - 1:
self.final = -1
self.final += 1
self.valores[self.final] = valor
self.numero_elementos += 1
def desenfileirar(self):
if self.__fila_vazia():
print('A fila já está vazia')
return
temp = self.valores[self.inicio]
self.inicio += 1
if self.inicio == self.capacidade:  # wrap around once past the last slot
self.inicio = 0
self.numero_elementos -= 1
return temp
def primeiro(self):
if self.__fila_vazia():
return -1
return self.valores[self.inicio]
import numpy as np
class Pilha:
def __init__(self, capacidade):
self.__capacidade = capacidade
self.__topo = -1
# Changed the array's data type
self.__valores = np.empty(self.__capacidade, dtype=object)
def __pilha_cheia(self):
if self.__topo == self.__capacidade - 1:
return True
else:
return False
def __pilha_vazia(self):
if self.__topo == -1:
return True
else:
return False
def empilhar(self, valor):
if self.__pilha_cheia():
print('A pilha está cheia')
else:
self.__topo += 1
self.__valores[self.__topo] = valor
def desempilhar(self):
# Returns the popped element
if self.__pilha_vazia():
print('A pilha está vazia')
return None
else:
temp = self.__valores[self.__topo]
self.__topo -= 1
return temp
def ver_topo(self):
if self.__topo != -1:
return self.__valores[self.__topo]
else:
return -1
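# Depth-first search driven by the stack above: each vertex is marked visited
# when pushed, and buscar() recurses from the current top of the stack.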
class BuscaProfundidade:
def __init__(self, inicio):
self.inicio = inicio
self.inicio.visitado = True
self.pilha = Pilha(20)
self.pilha.empilhar(inicio)
def buscar(self):
topo = self.pilha.ver_topo()
print('Topo: {}'.format(topo.rotulo))
for adjacente in topo.adjacentes:
print('Topo é {}. {} já foi visitada? {}'.format(topo.rotulo, adjacente.vertice.rotulo, adjacente.vertice.visitado))
if adjacente.vertice.visitado == False:
adjacente.vertice.visitado = True
self.pilha.empilhar(adjacente.vertice)
print('Empilhou {}'.format(adjacente.vertice.rotulo))
self.buscar()
print('Desempilhou: {}'.format(self.pilha.desempilhar().rotulo))
print()
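# Breadth-first search using the circular queue: the front vertex is dequeued,
# its unvisited neighbours are enqueued, and buscar() recurses while the queue
# is non-empty.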
class BuscaLargura:
def __init__(self, inicio):
self.inicio = inicio
self.inicio.visitado = True
self.fila = FilaCircular(20)
self.fila.enfileirar(inicio)
def buscar(self):
primeiro = self.fila.primeiro()
print('-------')
print('Primeiro da fila: {}'.format(primeiro.rotulo))
temp = self.fila.desenfileirar()
print('Desenfileirou: {}'.format(temp.rotulo))
for adjacente in primeiro.adjacentes:
print('Primeiro era {}. {} já foi visitado? {}'.format(temp.rotulo, adjacente.vertice.rotulo, adjacente.vertice.visitado))
if adjacente.vertice.visitado == False:
adjacente.vertice.visitado = True
self.fila.enfileirar(adjacente.vertice)
print('Enfileirou: {}'.format(adjacente.vertice.rotulo))
if self.fila.numero_elementos > 0:
self.buscar()
#busca_profundidade = BuscaProfundidade(grafo.arad)
#busca_profundidade.buscar()
busca_largura = BuscaLargura(grafo.arad)
busca_largura.buscar() | 29.266968 | 128 | 0.687693 | 753 | 6,468 | 5.734396 | 0.151394 | 0.133858 | 0.198703 | 0.028717 | 0.322603 | 0.146132 | 0.138027 | 0.124595 | 0.101899 | 0.080593 | 0 | 0.020933 | 0.19496 | 6,468 | 221 | 129 | 29.266968 | 0.808335 | 0.024273 | 0 | 0.225434 | 0 | 0 | 0.052331 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115607 | false | 0 | 0.011561 | 0.011561 | 0.33526 | 0.086705 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
308884d924ff840a10ba014007f164cd905b2668 | 11,014 | py | Python | Assignment 4/NNet.py | Chirag-Galani/B551-Elements-Of-Artificial-Intelligence | 6e6d04bf17522768c176145e86ccc31e8ea903b4 | [
"MIT"
] | null | null | null | Assignment 4/NNet.py | Chirag-Galani/B551-Elements-Of-Artificial-Intelligence | 6e6d04bf17522768c176145e86ccc31e8ea903b4 | [
"MIT"
] | null | null | null | Assignment 4/NNet.py | Chirag-Galani/B551-Elements-Of-Artificial-Intelligence | 6e6d04bf17522768c176145e86ccc31e8ea903b4 | [
"MIT"
] | 2 | 2021-12-01T20:38:02.000Z | 2021-12-01T22:42:38.000Z | #!/usr/bin/env python
'''
#
# A neural network is a supervised learning algorithm that uses neurons (mathematical functions which accept weighted inputs and produce an output through an activation function)
# in order to learn a problem and then evaluate and predict the outcomes for data it has not yet seen.
#
# Problem : We have been given a training data set which contains the correct orientations of around 36,000 images; we use it to train our program.
# We then have to predict/assign an orientation for the images in the test data set.
#
# Formulation : We utilize the pixels of the image, each representing a unique feature, which are fed into the neural network; training then produces a model file.
# The model file contains the weights and biases which are to be fed into the input, hidden and output layers.
#
# Citation : 1. Discussed with Zoher Kachwala, Umang Mehta, Chetan Patil and Kushal Giri.
# 2. Understanding the whole process of back propagation neural network:
# https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
# 3. Used as a reference to check the working of code with a small sample input and expected output data &
# Understanding the implementation of batch gradient descent & bias matrix and also learnt about the keepdims parameter used for numpy.sum:
# https://www.analyticsvidhya.com/blog/2017/05/neural-network-from-scratch-in-python-and-r/
# 4. Referred for understanding the mathematical formulas and its working:
# 1. https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/
# 2. http://neuralnetworksanddeeplearning.com/chap2.html
#
# Code-Description : The class file present here is called from orient.py based on the input provided
#
# NOTE: please provide extension .npz for the model file instead of .txt
#
# Training:
# orient.py train train_file.txt model_file.npz nnet
# NOTE: please provide extension .npz for the model file instead of .txt
# Initialize with random weights between -1 and 1 for input-to-hidden, hidden-to-output, hidden bias and output bias. The network corrects itself by evaluating against the training data via feed forward.
# We compute the error rate on the basis of mis-classification and use it to reduce the error via back-propagation, then update the biases and weights again. This continues for the number of iterations we
# defined. We evaluate each iteration by computing its root mean square error. The output-bias update is the sum of all the output deltas times the learning rate, and the hidden-layer bias update
# is the sum of the hidden-layer deltas for each image. Finally, we store the weights and biases into the model file.
#
# Testing:
# orient.py test test_file.txt model_file.npz nnet
# NOTE: please provide extension .npz for the model file instead of .txt
# We initialize all the weights and biases as per the model file. Then we perform feed forward to compute the predicted orientation of each test image. We report the accuracy as the correctly identified images
# over the total images.
#
#
# Problems(P), Assumptions(A), Simplifications (S), Design Decisions (DD):
# Experimented with the learning rates to get better output (DD)
# Experimented with the hidden layer to get better output (DD)
# Stochastic gradient descent takes a lot of time for computation (P)
# Implemented batch gradient descent, to get better performance (DD)
# Store the model file as a default file format supported by numpy, instead of txt (DD)
#
# Analysis :
# We evaluated the implementation for various values of epochs and hidden layer, with a constant learning rate of 0.0001
# It takes approx 15 minutes for training and approx 5 mins for the test process, on the whole set.
# We tried multiple values but decided to show the following to illustrate our process; we came to 564 after much analysis.
#
# hidden layer size | epochs | Accuracy |
# ------------------------------------------------------------------
# 20 | 1000 | 69.88 |
# 20 | 2000 | 68.82 |
# 20 | 500 | 70.41 |
# 10 | 1000 | 67.97 |
# 10 | 2000 | 69.35 |
# 10 | 5000 | 66.38 |
# 8 | 1000 | 70.94 |
# 8 | 2000 | 69.03 |
# 8 | 5000 | 68.08 |
# 8 | 1500 | 68.39 |
# 8 | 6000 | 69.67 |
# 6 | 2000 | 70.837 |
# 6 | 1000 | 69.56 |
# 6 | 4377 | 71.36 |
# ------------------------------------------------------------------
'''
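# Illustrative sketch (not part of the assignment code): one batch
# gradient-descent step with toy shapes, mirroring the update rule implemented
# in train() below; all names here are hypothetical, bias updates omitted.
#
#   import numpy as np
#   X = np.random.rand(4, 3)                         # 4 samples, 3 features
#   Y = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])   # one-hot targets, 2 classes
#   W1 = np.random.uniform(-1, 1, (3, 5))            # input -> hidden
#   W2 = np.random.uniform(-1, 1, (5, 2))            # hidden -> output
#   sig = lambda z: 1.0 / (1.0 + np.exp(-z))
#   H = sig(np.dot(X, W1))                           # feed forward
#   O = sig(np.dot(H, W2))
#   dO = (Y - O) * O * (1 - O)                       # delta at the output
#   dH = np.dot(dO, W2.T) * H * (1 - H)              # delta back at the hidden layer
#   W2 += np.dot(H.T, dO) * 0.0001                   # weight updates, alpha = 0.0001
#   W1 += np.dot(X.T, dH) * 0.0001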
import os
import numpy as np
np.warnings.filterwarnings('ignore')
class NNet:
# Variable Initialization
def __init__(self):
self.input_layer_size = 192
self.hidden_layer_size = 6 #Hidden layer size
self.output_layer_size = 4
self.epoch_iterations = 4377 #Training iterations
self.alpha = 0.0001 # Learning rate
# self.epoch_iterations = 100
self.possible_output = [0, 90, 180, 270] #Different orientation types
'''
This function converts an orientation value into the one-hot output-layer format:
0
90
180
270
'''
def returnBinForm(self,orientation):
single_output = [0,0,0,0]
single_output[self.possible_output.index(int(orientation))]=1
return single_output
# Sigmoid Function
def sigmoid(self,xx):
return 1 / (1 + np.exp(-xx))
# Derivative of Sigmoid Function
def d_sigmoid(self,xx):
return xx * (1 - xx)
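# Note: d_sigmoid expects the sigmoid *output* a = sigmoid(x), since a * (1 - a)
# is the derivative expressed in terms of the output; that is how it is called
# below on hidden_layer and actual_output.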
#Train file needs the train_file.txt and the model file name without any file format extension
#as numpy saves the model in npz format
def train(self,trainFile, modelFile):
input_text_size = sum(1 for line in open(trainFile))
input_data = np.zeros(shape=(input_text_size, self.input_layer_size)) #a numpy matrix of 36976 x 192
output_data = np.zeros(shape=(input_text_size, self.output_layer_size))
i = -1
with open(trainFile) as f:
for line in f:
i = i + 1
item = line.split()
for j in range(len(item) - 2):
input_data[i][j] = int(item[j + 2])
output_data[i] = self.returnBinForm(item[1])
random_start = -1
random_end = +1
np.random.seed(1)
input_to_hidden = np.random.uniform(random_start, random_end, size=(self.input_layer_size, self.hidden_layer_size))
bias_hidden_layer = np.random.uniform(random_start, random_end, size=(self.hidden_layer_size))
hidden_to_output = np.random.uniform(random_start, random_end, size=(self.hidden_layer_size, self.output_layer_size))
output_bias = np.random.uniform(random_start, random_end, size=(self.output_layer_size))
for i in range(self.epoch_iterations + 1):
#Feed Forward network
hidden_layer = self.sigmoid(np.dot(input_data, input_to_hidden) + bias_hidden_layer)
actual_output = self.sigmoid(np.dot(hidden_layer, hidden_to_output) + output_bias)
#Back Propogation network
error_value = output_data - actual_output
delta_output = error_value * self.d_sigmoid(actual_output)
delta_hidden_layer = np.dot(delta_output, (np.transpose(hidden_to_output))) * self.d_sigmoid(hidden_layer)
hidden_to_output += np.dot(np.transpose(hidden_layer), delta_output) * self.alpha
input_to_hidden += np.dot(np.transpose(input_data), delta_hidden_layer) * self.alpha
output_bias += np.sum(delta_output, axis=0) * self.alpha
bias_hidden_layer += np.sum(delta_hidden_layer, axis=0) * self.alpha
RMSError= np.sum(np.square(error_value))*0.5
#print str(i)+" RMS= "+str(RMSError)
np.savez_compressed(modelFile, input_to_hidden=input_to_hidden, hidden_to_output=hidden_to_output,
bias_hidden_layer=bias_hidden_layer, output_bias=output_bias)
def test(self,modelFile, testFile, outputFile):
#Save data to model
test_file_lines = sum(1 for line in open(testFile))
test_X = np.zeros(shape=(test_file_lines, self.input_layer_size))
test_correct_orientation = np.zeros(shape=(test_file_lines))
#Load data from model
modelData = np.load(modelFile)
input_to_hidden = modelData['input_to_hidden']
hidden_to_output = modelData['hidden_to_output']
bias_hidden_layer = modelData['bias_hidden_layer']
output_bias = modelData['output_bias']
test_label=dict()
i = -1
with open(testFile) as f:
for line in f:
i = i + 1
x = line.split()
for j in range(len(x) - 2):
test_X[i][j] = x[j + 2]
test_correct_orientation[i] = x[1]
test_label[i] = x[0]
correct_track_counter = 0
total_track_counter = 0
if os.path.isfile(outputFile):
os.remove(outputFile)
# Feed forward once for the whole test set (the weights are fixed), then read each row's prediction
hidden_layer = self.sigmoid(np.dot(test_X, input_to_hidden) + bias_hidden_layer)
actual_output = self.sigmoid(np.dot(hidden_layer, hidden_to_output) + output_bias)
for i_counter in range(len(test_X)):
predicted_test_out = self.possible_output[np.argmax(actual_output[i_counter])]
if predicted_test_out == int(test_correct_orientation[i_counter]):
correct_track_counter += 1
total_track_counter += 1
with open(outputFile, 'a') as outs:
output_line = test_label[i_counter] + " " + str(predicted_test_out) + "\n"
outs.write(output_line)
# print "Results = ", str(self.epoch_iterations) + " == " + str((correct_track_counter * 100.0) / len(test_X))
print "Accuracy: " + str((correct_track_counter * 100.0) / len(test_X))
# train("train-data.txt", "nnet_model")
# test("nnet_model.npz","test-data.txt","output.txt")
| 55.626263 | 233 | 0.605684 | 1,428 | 11,014 | 4.530812 | 0.273109 | 0.045904 | 0.021638 | 0.011128 | 0.207419 | 0.178053 | 0.144513 | 0.119011 | 0.10881 | 0.080062 | 0 | 0.034388 | 0.305611 | 11,014 | 197 | 234 | 55.908629 | 0.811585 | 0.062194 | 0 | 0.094118 | 0 | 0 | 0.016723 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.023529 | null | null | 0.011765 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
308a8ead8c09d9060f476c9189d0c618fd81a7a3 | 1,027 | py | Python | documents/migrations/0001_initial.py | iliadmitriev/taskcamp | f0da4aa5694bd1f2235cddcf0e3026b07d957e2d | [
"MIT"
] | 4 | 2021-11-30T10:28:17.000Z | 2022-01-31T07:44:08.000Z | documents/migrations/0001_initial.py | iliadmitriev/taskcamp | f0da4aa5694bd1f2235cddcf0e3026b07d957e2d | [
"MIT"
] | 66 | 2021-08-17T08:20:20.000Z | 2022-03-31T02:20:53.000Z | documents/migrations/0001_initial.py | iliadmitriev/taskcamp | f0da4aa5694bd1f2235cddcf0e3026b07d957e2d | [
"MIT"
] | null | null | null | # Generated by Django 3.1.7 on 2021-03-22 07:18
from django.db import migrations, models
import documents.helpers
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Document',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('uploaded', models.DateTimeField(auto_now_add=True, verbose_name='Date and time uploaded')),
('document', models.FileField(upload_to=documents.helpers.document_upload_path, verbose_name='Document')),
('title', models.CharField(blank=True, max_length=100, verbose_name='Title')),
('description', models.CharField(blank=True, max_length=500, verbose_name='Description')),
],
options={
'verbose_name': 'Document',
'verbose_name_plural': 'Documents',
},
),
]
| 34.233333 | 122 | 0.607595 | 104 | 1,027 | 5.836538 | 0.567308 | 0.126853 | 0.062603 | 0.079077 | 0.108731 | 0.108731 | 0 | 0 | 0 | 0 | 0 | 0.027926 | 0.26777 | 1,027 | 29 | 123 | 35.413793 | 0.779255 | 0.043817 | 0 | 0 | 1 | 0 | 0.146939 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3090bd4ae1bc431a0e0140abb9b38829f936bb15 | 1,219 | py | Python | joytest.py | AlessioMorale/fport-joy-bridge | ed5ff13acc391863d8df4005a3848743bf363d27 | [
"MIT"
] | null | null | null | joytest.py | AlessioMorale/fport-joy-bridge | ed5ff13acc391863d8df4005a3848743bf363d27 | [
"MIT"
] | null | null | null | joytest.py | AlessioMorale/fport-joy-bridge | ed5ff13acc391863d8df4005a3848743bf363d27 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from evdev import UInput, UInputError, ecodes, AbsInfo
from evdev import util
from fport import FportParser, FportMessageControl
import serial
from time import sleep
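# Bridges FrSky F.Port control frames read from a serial port into a virtual
# Linux joystick created through uinput (evdev).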
if __name__ == '__main__':
device = None
def handler(message):
if isinstance(message, FportMessageControl):
print("Handled:", message)
pass
if False:  # disabled stub; would need global counter/device to run
counter = counter + 1
device.write(ecodes.EV_ABS, ecodes.ABS_X, counter % 255)
device.syn()
try:
description = 'TstAM'
default_props = AbsInfo(value=0, min=0, max=2048, fuzz=0, flat=0, resolution=0)
events = {ecodes.EV_ABS: [
(ecodes.ABS_X, default_props),
(ecodes.ABS_Y, default_props),
(ecodes.ABS_Z, default_props),
(ecodes.ABS_RZ, default_props)
], ecodes.EV_KEY:[], ecodes.EV_REL: []}
device = UInput(events=events)
counter = 0
parser = FportParser(handler)
ui = UInput()
with serial.Serial('/dev/ttyUSB0', 115200, timeout=1) as ser:
while True:
s = ser.read(100)
parser.parse(s)
finally:
device.close()
pass | 31.25641 | 87 | 0.584085 | 142 | 1,219 | 4.859155 | 0.535211 | 0.065217 | 0.104348 | 0.091304 | 0.06087 | 0.06087 | 0 | 0 | 0 | 0 | 0 | 0.030916 | 0.31009 | 1,219 | 39 | 88 | 31.25641 | 0.789536 | 0.017227 | 0 | 0.057143 | 0 | 0 | 0.027546 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0.057143 | 0.142857 | 0 | 0.171429 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3093f2eef129b7ff15a03825ab1986f511920339 | 1,058 | py | Python | bitHopper/Logic/__init__.py | DavidVorick/bitHopper | 72544e50b05aa2529ab0adf505bf9a9b609ae64c | [
"MIT"
] | 2 | 2018-04-24T07:30:32.000Z | 2018-06-19T18:13:38.000Z | bitHopper/Logic/__init__.py | KirillShaman/bitHopper | 72544e50b05aa2529ab0adf505bf9a9b609ae64c | [
"MIT"
] | null | null | null | bitHopper/Logic/__init__.py | KirillShaman/bitHopper | 72544e50b05aa2529ab0adf505bf9a9b609ae64c | [
"MIT"
] | 1 | 2018-04-24T07:30:33.000Z | 2018-04-24T07:30:33.000Z | """
This module contains all of the server selection logic.
It supplies one function:
get_server(), which returns a (server, user, password) tuple describing where to mine.
It has two external dependencies.
1) btcnet_info via btcnet_wrapper
2) a way to pull getworks for checking if we should delag pools
"""
import ServerLogic
import bitHopper.Configuration.Workers as Workers
def get_server():
"""
Returns a valid server, worker, username tuple
Note this isn't quite a perfectly even distribution but it
works well enough
"""
return _select(list(generate_tuples(ServerLogic.get_server())))
i = 0
def generate_tuples(server):
"""
Generates a tuple of server, user, password for valid servers
"""
tokens = Workers.get_worker_from(server)
for user, password in tokens:
yield (server, user, password)
def _select(item):
"""
Selection utility function
"""
global i
i = i + 1 if i < 10**10 else 0
if len(item) == 0:
raise ValueError("No item available")
return item[i % len(item)]
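# Example (hypothetical pool data, assuming workers are configured):
#   server, user, password = get_server()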
| 24.045455 | 67 | 0.680529 | 150 | 1,058 | 4.726667 | 0.58 | 0.038082 | 0.050776 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012392 | 0.23724 | 1,058 | 43 | 68 | 24.604651 | 0.866171 | 0.458412 | 0 | 0 | 0 | 0 | 0.033333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.133333 | 0.133333 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3098dcc308b0f38894e8932ddd5c2709945d9041 | 846 | py | Python | Ago-Dic-2019/DanielM/PracticaUno/3.9_DinnerGuests.py | Arbupa/DAS_Sistemas | 52263ab91436b2e5a24ce6f8493aaa2e2fe92fb1 | [
"MIT"
] | 41 | 2017-09-26T09:36:32.000Z | 2022-03-19T18:05:25.000Z | Ago-Dic-2019/DanielM/PracticaUno/3.9_DinnerGuests.py | Arbupa/DAS_Sistemas | 52263ab91436b2e5a24ce6f8493aaa2e2fe92fb1 | [
"MIT"
] | 67 | 2017-09-11T05:06:12.000Z | 2022-02-14T04:44:04.000Z | Ago-Dic-2019/DanielM/PracticaUno/3.9_DinnerGuests.py | Arbupa/DAS_Sistemas | 52263ab91436b2e5a24ce6f8493aaa2e2fe92fb1 | [
"MIT"
] | 210 | 2017-09-01T00:10:08.000Z | 2022-03-19T18:05:12.000Z | # 3-9. Dinner Guests: Working with one of the programs from Exercises 3-4 through 3-7 (page 46),
# use len() to print a message indicating the number of people you are inviting to dinner.
guests = ['Antonio', 'Emanuel', 'Francisco']
message = "1.- Hello dear uncle " + guests[0] + ", I hope you can come this 16th for a mexican dinner in my house."
print(message)
message = "2.- Hi " + guests[1] + "! The next monday we'll have a dinner, you should come here to spend time with " \
"friends for a while, also we will have some beers. "
print(message)
message = "3.- Hello grandpa " + guests[2] + "!, my mother told me that we will have a dinner next monday and we want" \
" that you come here because we miss you. "
print(message)
print('\n')
print(len(guests)) | 49.764706 | 120 | 0.628842 | 131 | 846 | 4.061069 | 0.572519 | 0.067669 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025806 | 0.267139 | 846 | 17 | 121 | 49.764706 | 0.832258 | 0.216312 | 0 | 0.272727 | 0 | 0 | 0.571861 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.454545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
309b0655db3c1fd9b58f914f090d62e8f523e861 | 8,802 | py | Python | argocd_client/models/v1alpha1_cluster.py | thepabloaguilar/argocd-client | a6c4ff268a63ee6715f9f837b9225b798aa6bde2 | [
"BSD-3-Clause"
] | 1 | 2021-09-29T11:57:07.000Z | 2021-09-29T11:57:07.000Z | argocd_client/models/v1alpha1_cluster.py | thepabloaguilar/argocd-client | a6c4ff268a63ee6715f9f837b9225b798aa6bde2 | [
"BSD-3-Clause"
] | 1 | 2020-09-09T00:28:57.000Z | 2020-09-09T00:28:57.000Z | argocd_client/models/v1alpha1_cluster.py | thepabloaguilar/argocd-client | a6c4ff268a63ee6715f9f837b9225b798aa6bde2 | [
"BSD-3-Clause"
] | 2 | 2020-10-13T18:31:59.000Z | 2021-02-15T12:52:33.000Z | # coding: utf-8
"""
Consolidate Services
Description of all APIs # noqa: E501
The version of the OpenAPI document: version not set
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from argocd_client.configuration import Configuration
class V1alpha1Cluster(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'config': 'V1alpha1ClusterConfig',
'connection_state': 'V1alpha1ConnectionState',
'info': 'V1alpha1ClusterInfo',
'name': 'str',
'namespaces': 'list[str]',
'refresh_requested_at': 'V1Time',
'server': 'str',
'server_version': 'str'
}
attribute_map = {
'config': 'config',
'connection_state': 'connectionState',
'info': 'info',
'name': 'name',
'namespaces': 'namespaces',
'refresh_requested_at': 'refreshRequestedAt',
'server': 'server',
'server_version': 'serverVersion'
}
def __init__(self, config=None, connection_state=None, info=None, name=None, namespaces=None, refresh_requested_at=None, server=None, server_version=None, local_vars_configuration=None): # noqa: E501
"""V1alpha1Cluster - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._config = None
self._connection_state = None
self._info = None
self._name = None
self._namespaces = None
self._refresh_requested_at = None
self._server = None
self._server_version = None
self.discriminator = None
if config is not None:
self.config = config
if connection_state is not None:
self.connection_state = connection_state
if info is not None:
self.info = info
if name is not None:
self.name = name
if namespaces is not None:
self.namespaces = namespaces
if refresh_requested_at is not None:
self.refresh_requested_at = refresh_requested_at
if server is not None:
self.server = server
if server_version is not None:
self.server_version = server_version
@property
def config(self):
"""Gets the config of this V1alpha1Cluster. # noqa: E501
:return: The config of this V1alpha1Cluster. # noqa: E501
:rtype: V1alpha1ClusterConfig
"""
return self._config
@config.setter
def config(self, config):
"""Sets the config of this V1alpha1Cluster.
:param config: The config of this V1alpha1Cluster. # noqa: E501
:type: V1alpha1ClusterConfig
"""
self._config = config
@property
def connection_state(self):
"""Gets the connection_state of this V1alpha1Cluster. # noqa: E501
:return: The connection_state of this V1alpha1Cluster. # noqa: E501
:rtype: V1alpha1ConnectionState
"""
return self._connection_state
@connection_state.setter
def connection_state(self, connection_state):
"""Sets the connection_state of this V1alpha1Cluster.
:param connection_state: The connection_state of this V1alpha1Cluster. # noqa: E501
:type: V1alpha1ConnectionState
"""
self._connection_state = connection_state
@property
def info(self):
"""Gets the info of this V1alpha1Cluster. # noqa: E501
:return: The info of this V1alpha1Cluster. # noqa: E501
:rtype: V1alpha1ClusterInfo
"""
return self._info
@info.setter
def info(self, info):
"""Sets the info of this V1alpha1Cluster.
:param info: The info of this V1alpha1Cluster. # noqa: E501
:type: V1alpha1ClusterInfo
"""
self._info = info
@property
def name(self):
"""Gets the name of this V1alpha1Cluster. # noqa: E501
:return: The name of this V1alpha1Cluster. # noqa: E501
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""Sets the name of this V1alpha1Cluster.
:param name: The name of this V1alpha1Cluster. # noqa: E501
:type: str
"""
self._name = name
@property
def namespaces(self):
"""Gets the namespaces of this V1alpha1Cluster. # noqa: E501
Holds list of namespaces which are accessible in that cluster. Cluster level resources would be ignored if namespace list is not empty. # noqa: E501
:return: The namespaces of this V1alpha1Cluster. # noqa: E501
:rtype: list[str]
"""
return self._namespaces
@namespaces.setter
def namespaces(self, namespaces):
"""Sets the namespaces of this V1alpha1Cluster.
Holds list of namespaces which are accessible in that cluster. Cluster level resources would be ignored if namespace list is not empty. # noqa: E501
:param namespaces: The namespaces of this V1alpha1Cluster. # noqa: E501
:type: list[str]
"""
self._namespaces = namespaces
@property
def refresh_requested_at(self):
"""Gets the refresh_requested_at of this V1alpha1Cluster. # noqa: E501
:return: The refresh_requested_at of this V1alpha1Cluster. # noqa: E501
:rtype: V1Time
"""
return self._refresh_requested_at
@refresh_requested_at.setter
def refresh_requested_at(self, refresh_requested_at):
"""Sets the refresh_requested_at of this V1alpha1Cluster.
:param refresh_requested_at: The refresh_requested_at of this V1alpha1Cluster. # noqa: E501
:type: V1Time
"""
self._refresh_requested_at = refresh_requested_at
@property
def server(self):
"""Gets the server of this V1alpha1Cluster. # noqa: E501
:return: The server of this V1alpha1Cluster. # noqa: E501
:rtype: str
"""
return self._server
@server.setter
def server(self, server):
"""Sets the server of this V1alpha1Cluster.
:param server: The server of this V1alpha1Cluster. # noqa: E501
:type: str
"""
self._server = server
@property
def server_version(self):
"""Gets the server_version of this V1alpha1Cluster. # noqa: E501
:return: The server_version of this V1alpha1Cluster. # noqa: E501
:rtype: str
"""
return self._server_version
@server_version.setter
def server_version(self, server_version):
"""Sets the server_version of this V1alpha1Cluster.
:param server_version: The server_version of this V1alpha1Cluster. # noqa: E501
:type: str
"""
self._server_version = server_version
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, V1alpha1Cluster):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, V1alpha1Cluster):
return True
return self.to_dict() != other.to_dict()
| 28.859016 | 204 | 0.60918 | 972 | 8,802 | 5.365226 | 0.139918 | 0.036817 | 0.128859 | 0.115053 | 0.463663 | 0.376989 | 0.353787 | 0.209204 | 0.136337 | 0.08092 | 0 | 0.030551 | 0.30459 | 8,802 | 304 | 205 | 28.953947 | 0.821434 | 0.355828 | 0 | 0.089552 | 1 | 0 | 0.071163 | 0.009102 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164179 | false | 0 | 0.029851 | 0 | 0.328358 | 0.014925 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
309ddfbd9a766a8a929b5b31731638f34ca48d1c | 1,489 | py | Python | src/msgraph/model/shared.py | microsoftarchive/msgraph-sdk-python | 1320ba9116be0d00a1d7fce3484ea979e24ee82d | [
"MIT"
] | 7 | 2019-07-17T06:59:53.000Z | 2021-05-13T15:23:37.000Z | src/msgraph/model/shared.py | microsoftarchive/msgraph-sdk-python | 1320ba9116be0d00a1d7fce3484ea979e24ee82d | [
"MIT"
] | null | null | null | src/msgraph/model/shared.py | microsoftarchive/msgraph-sdk-python | 1320ba9116be0d00a1d7fce3484ea979e24ee82d | [
"MIT"
] | 2 | 2020-06-30T13:06:59.000Z | 2021-06-03T09:47:35.000Z | # -*- coding: utf-8 -*-
"""
# Copyright (c) Microsoft Corporation. All Rights Reserved. Licensed under the MIT License. See License in the project root for license information.
#
# This file was generated and any changes will be overwritten.
"""
from __future__ import unicode_literals
from ..model.identity_set import IdentitySet
from ..graph_object_base import GraphObjectBase
class Shared(GraphObjectBase):
def __init__(self, prop_dict={}):
self._prop_dict = prop_dict
@property
def owner(self):
"""
Gets and sets the owner
Returns:
:class:`IdentitySet<microsoft.graph.model.identity_set.IdentitySet>`:
The owner
"""
if "owner" in self._prop_dict:
if isinstance(self._prop_dict["owner"], GraphObjectBase):
return self._prop_dict["owner"]
else:
self._prop_dict["owner"] = IdentitySet(self._prop_dict["owner"])
return self._prop_dict["owner"]
return None
@owner.setter
def owner(self, val):
self._prop_dict["owner"] = val
@property
def scope(self):
"""Gets and sets the scope
Returns:
str:
The scope
"""
if "scope" in self._prop_dict:
return self._prop_dict["scope"]
else:
return None
@scope.setter
def scope(self, val):
self._prop_dict["scope"] = val
| 26.589286 | 151 | 0.589657 | 168 | 1,489 | 5.005952 | 0.386905 | 0.123662 | 0.171225 | 0.121284 | 0.170036 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000976 | 0.311619 | 1,489 | 55 | 152 | 27.072727 | 0.819512 | 0.288784 | 0 | 0.296296 | 0 | 0 | 0.052743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.185185 | false | 0 | 0.111111 | 0 | 0.518519 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
30abbfa62591f83ed2fd836ae5d584e0bbebca55 | 1,554 | py | Python | errors.py | Rested/multi-translate | 565ef2ac7e8b5f94595cecc78b4076a3bc9be45e | [
"MIT"
] | 1 | 2021-08-22T14:43:11.000Z | 2021-08-22T14:43:11.000Z | errors.py | Rested/multi-translate | 565ef2ac7e8b5f94595cecc78b4076a3bc9be45e | [
"MIT"
] | null | null | null | errors.py | Rested/multi-translate | 565ef2ac7e8b5f94595cecc78b4076a3bc9be45e | [
"MIT"
] | null | null | null | from typing import Optional
from fastapi import HTTPException
class BaseMultiTranslateError(HTTPException):
def __init__(self, detail: Optional[str] = None):
super().__init__(status_code=400, detail=detail, headers={})
class TranslationEngineNotConfiguredError(BaseMultiTranslateError):
"""A problem with the configuration of the translation engine"""
class DetectionError(BaseMultiTranslateError):
"""A problem detecting the language of an empty from_language request"""
class DetectionNotSupportedError(BaseMultiTranslateError):
"""Detection is not supported for this engine"""
class TranslationError(BaseMultiTranslateError):
"""A problem performing or parsing the translation"""
class EngineApiError(BaseMultiTranslateError):
"""An error reported by an api service"""
class UnsupportedLanguagePairError(BaseMultiTranslateError):
"""The from to language pair is not supported"""
class InvalidISO6391CodeError(BaseMultiTranslateError):
"""Is not a valid iso-639-1 code"""
class AlignmentNotSupportedError(BaseMultiTranslateError):
"""Alignment is not supported for this language combination"""
class AlignmentError(BaseMultiTranslateError):
"""Alignment failed despite being supported"""
class InvalidEngineNameError(BaseMultiTranslateError):
"""Invalid engine name"""
class NoValidEngineConfiguredError(BaseMultiTranslateError):
"""No valid engine is configured"""
class InvalidLanguagePreferencesError(BaseMultiTranslateError):
"""Invalid language preferences"""
| 27.263158 | 76 | 0.780566 | 143 | 1,554 | 8.412587 | 0.517483 | 0.016625 | 0.077307 | 0.028263 | 0.034913 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008172 | 0.133848 | 1,554 | 56 | 77 | 27.75 | 0.885587 | 0.323037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.117647 | 0 | 0.941176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
30ae6e3f4e096d6682d25c21fc12e815e531f331 | 30,320 | py | Python | SRC/__mainv2__.py | Prog-LucasAlves/dados_financeiros_b3 | e9cb2f6ed0e7b1a524fe68d75a444e458aad5689 | [
"MIT"
] | 3 | 2021-11-06T02:04:08.000Z | 2022-01-12T19:33:19.000Z | SRC/__mainv2__.py | Prog-LucasAlves/dados_financeiros_b3 | e9cb2f6ed0e7b1a524fe68d75a444e458aad5689 | [
"MIT"
] | 4 | 2021-11-06T01:44:42.000Z | 2022-03-03T16:21:39.000Z | SRC/__mainv2__.py | Prog-LucasAlves/dados_financeiros_b3 | e9cb2f6ed0e7b1a524fe68d75a444e458aad5689 | [
"MIT"
] | null | null | null | # This Python file uses the following encoding: utf-8
'''
Author: Lucas Alves
Linkedin: https://www.linkedin.com/in/lucasalves-ast/
'''
# TODO #4 Update Python 3.9.5 -> 3.9.9
# Import internal modules
import __conectdb__
import __query__
import __check__
import __check_semana__
import __list__
# Import third-party libraries
import backoff
from bs4 import BeautifulSoup
import requests
import time
from datetime import date, datetime, timedelta
import logging
from tqdm import tqdm
import pandas as pd
# TODO #1 Create a scheduler
# ANSI colors used in the script
RED = "\033[1;31m"
GREEN = "\033[0;32m"
GREEN_T = "\033[92m"
RESET = "\033[0;0m"
YELLOW = "\033[1;33m"
BLUE = "\033[1;34m"
GRAY = "\033[1;35m"
#####
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
@backoff.on_exception(backoff.expo, (), max_tries=10)
# Start of the data-collection function
def dados():
# Current data - creates a DataFrame holding only the current data
dados_atual = pd.DataFrame(columns=[
'papel','tipo','empresa','setor','cotacao','dt_ult_cotacao','min_52_sem','max_52_sem','vol_med','valor_mercado','valor_firma','ult_balanco_pro','nr_acoes','os_dia','pl','lpa','pvp','vpa','p_ebit','marg_bruta','psr','marg_ebit','p_ativo','marg_liquida','p_cap_giro','ebit_ativo','p_ativo_circ_liq','roic','div_yield','roe','ev_ebitda','liquidez_corr','ev_ebit','cres_rec','ativo','disponibilidades','ativo_circulante','divd_bruta','divd_liquida','patr_liquido','lucro_liquido_12m','lucro_liquido_3m' ]
)
# Variable (dt) - sets which day the data will be collected for
# E.g.: dt = date.today() - timedelta(days=3) -> goes back 3 days on the calendar
dt = date.today() - timedelta(days=0)
dt_sem = dt.weekday()
# Variable dt_dia_sem - checks which day of the week it is (if Saturday or Sunday, no data is collected)
dt_dia_sem = __check_semana__.DIAS[dt_sem]
dt = dt.strftime("%d/%m/%Y")
# Checks whether the weekday is Saturday or Sunday
if __check__.data_check != dt or dt_dia_sem == "Sábado" or dt_dia_sem == "Domingo":
print(f"+{GRAY} Site não atualizado {RESET}+")
print("--------------------------------------")
print(f"Hoje é dia: {dt} - {dt_dia_sem} ")
print(f"Data do site é: {__check__.data_check} - {__check__.day}")
print("--------------------------------------")
else:
print(f"+{GREEN_T} Site atualizado vamos começar a coletar os dados. {RESET}+")
# Checks whether the connection to the database was established
if __conectdb__.verifica_conexao() == False:
return print(
f"""
+{RED} Conexão não estabelecida com o Banco de Dados, verifique: {RESET}+
-{RED} Docker {RESET}
"""
)
else:
print(
f"""
+{GREEN_T} Conexão estabelecida com sucesso ao Banco de Dados. {RESET}+ """
)
print("-------------------------------------------------------")
# Start the script's execution timer
inicio = time.time()
# Variable (acao) - stores the list of stock tickers
acao = __list__.lst_acao
# Counter variable
n = 0
# Iterates over the list of ticker codes
for i in tqdm(acao):
try:
# Queries the database to check whether the data is already stored (keys: data_ult_cotacao / papel)
query_consult_bd = f" SELECT data_dado_inserido, papel \
FROM dados \
WHERE data_ult_cotacao = '{dt}' \
AND papel = '{i}' "
result = __conectdb__.se_dados(query_consult_bd)
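# __conectdb__.se_dados() runs the raw SQL string; dt and the ticker i are
# interpolated directly into the query text rather than bound as parameters.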
# --- #
if result != []:
print(f"+{YELLOW} Dados da ação: {i}, já cadastrados {RESET}+")
else:
# The data-collection (scraping) script starts here
hearder = {"user-agent": "Mozilla/5.0"}
url = f"https://fundamentus.com.br/detalhes.php?papel={i}+"
page = requests.get(url, headers=hearder)
soup = BeautifulSoup(page.content, "html.parser")
dados = soup.findAll("div", {"class": "conteudo clearfix"})
# Creates the lists where the scraped values will be stored
for data in dados:
dadosI = []
papel = []
tipo = []
empresa = []
setor = []
cotacao = []
dt_ult_cotacao = []
min_52_sem = []
max_52_sem = []
vol_med = []
valor_mercado = []
valor_firma = []
ult_balanco_pro = []
nr_acoes = []
os_dia = []
pl = []
lpa = []
pvp = []
vpa = []
p_ebit = []
marg_bruta = []
psr = []
marg_ebit = []
p_ativo = []
marg_liquida = []
p_cap_giro = []
ebit_ativo = []
p_ativo_circ_liq = []
roic = []
div_yield = []
roe = []
ev_ebitda = []
liquidez_corr = []
ev_ebit = []
cres_rec = []
ativo = []
disponibilidades = []
ativo_circulante = []
divd_bruta = []
divd_liquida = []
patr_liquido = []
lucro_liquido_12m = []
lucro_liquido_3m = []
dadosI = data.find_all("span", {"class": "txt"})
dadosO = data.find_all("span", {"class": "oscil"})
#
papel.append(dadosI[0].text)
if "Papel" in papel[0]:
papel.append(dadosI[1].text.strip())
else:
papel.append(0)
#
tipo.append(dadosI[4].text)
if "Tipo" in tipo[0]:
tipo.append(dadosI[5].text.strip())
else:
tipo.append(0)
#
empresa.append(dadosI[8].text)
if "Empresa" in empresa[0]:
empresa.append(dadosI[9].text)
else:
empresa.append(0)
#
setor.append(dadosI[12].text)
if "Setor" in setor[0]:
setor.append(dadosI[13].text)
else:
setor.append(0)
#
cotacao.append(dadosI[2].text)
if "Cotação" in cotacao[0]:
cotacao.append(dadosI[3].text)
else:
cotacao.append(0)
#
dt_ult_cotacao.append(dadosI[6].text)
if "Data últ cot" in dt_ult_cotacao[0]:
dt_ult_cotacao.append(dadosI[7].text)
else:
dt_ult_cotacao.append(0)
#
min_52_sem.append(dadosI[10].text)
if "Min 52 sem" in min_52_sem[0]:
min_52_sem.append(dadosI[11].text)
else:
min_52_sem.append(0)
#
max_52_sem.append(dadosI[14].text)
if "Max 52 sem" in max_52_sem[0]:
max_52_sem.append(dadosI[15].text)
else:
max_52_sem.append(0)
#
vol_med.append(dadosI[18].text)
if "Vol $ méd (2m)" in vol_med[0]:
vol_med.append(dadosI[19].text)
else:
vol_med.append(0)
#
valor_mercado.append(dadosI[20].text)
if "Valor de mercado" in valor_mercado[0]:
valor_mercado.append(dadosI[21].text)
else:
valor_mercado.append(0)
#
valor_firma.append(dadosI[24].text)
if "Valor da firma" in valor_firma[0]:
valor_firma.append(dadosI[25].text)
else:
valor_firma.append(0)
#
ult_balanco_pro.append(dadosI[22].text)
if "Últ balanço processado" in ult_balanco_pro[0]:
ult_balanco_pro.append(dadosI[23].text)
else:
ult_balanco_pro.append(0)
#
nr_acoes.append(dadosI[26].text)
if "Nro. Ações" in nr_acoes[0]:
nr_acoes.append(dadosI[27].text.replace(".", ""))
else:
nr_acoes.append(0)
#
os_dia.append(dadosI[30].text)
if "Dia" in os_dia[0]:
os_dia.append(
dadosO[0]
.text.replace("\n", "")
.replace(",", ".")
.replace("%", "")
)
else:
os_dia.append(0)
#
pl.append(dadosI[31].text)
if "P/L" in pl[0]:
pl.append(
dadosI[32].text.replace(".", "").replace(",", ".")
)
else:
pl.append(0)
#
lpa.append(dadosI[33].text)
if "LPA" in lpa[0]:
lpa.append(dadosI[34].text.replace(",", "."))
else:
lpa.append(0)
#
pvp.append(dadosI[36].text)
if "P/VP" in pvp[0]:
pvp.append(
dadosI[37].text.replace(".", "").replace(",", ".")
)
else:
pvp.append(0)
#
vpa.append(dadosI[38].text)
if "VPA" in vpa[0]:
vpa.append(
dadosI[39].text.replace(".", "").replace(",", ".")
)
else:
vpa.append(0)
#
p_ebit.append(dadosI[41].text)
if "P/EBIT" in p_ebit:
p_ebit.append(
dadosI[42].text.replace("\n", "").replace(",", ".")
)
if len(p_ebit[1]) <= 1:
p_ebit[1] = 0
else:
p_ebit.append(0)
#
marg_bruta.append(dadosI[43].text)
if "Marg. Bruta" in marg_bruta:
marg_bruta.append(
dadosI[44]
.text.replace("\n", "")
.replace(".", "")
.replace(",", ".")
.replace("%", "")
)
if len(marg_bruta[1]) <= 1:
marg_bruta[1] = 0
else:
marg_bruta.append(0)
#
psr.append(dadosI[46].text)
if "PSR" in psr:
psr.append(
dadosI[47]
.text.replace("\n", "")
.replace(".", "")
.replace(",", ".")
)
if len(psr[1]) <= 1:
psr[1] = 0
else:
psr.append(0)
#
marg_ebit.append(dadosI[48].text)
if "Marg. EBIT" in marg_ebit:
marg_ebit.append(
dadosI[49]
.text.replace("\n", "")
.replace(".", "")
.replace(",", ".")
.replace("%", "")
)
if len(marg_ebit[1]) <= 1:
marg_ebit[1] = 0
else:
marg_ebit.append(0)
#
p_ativo.append(dadosI[51].text)
if "P/Ativos" in p_ativo:
p_ativo.append(
dadosI[52]
.text.replace("\n", "")
.replace(".", "")
.replace(",", ".")
)
if len(p_ativo[1]) <= 1:
p_ativo[1] = 0
else:
p_ativo.append(0)
#
marg_liquida.append(dadosI[53].text)
if "Marg. Líquida" in marg_liquida:
marg_liquida.append(
dadosI[54]
.text.replace("\n", "")
.replace(".", "")
.replace(",", ".")
.replace("%", "")
)
if len(marg_liquida[1]) <= 1:
marg_liquida[1] = 0
else:
marg_liquida.append(0)
#
p_cap_giro.append(dadosI[56].text)
if "P/Cap. Giro" in p_cap_giro:
p_cap_giro.append(
dadosI[57]
.text.replace("\n", "")
.replace(".", "")
.replace(",", ".")
)
if len(p_cap_giro[1]) <= 1:
p_cap_giro[1] = 0
else:
p_cap_giro.append(0)
#
ebit_ativo.append(dadosI[58].text)
if "EBIT / Ativo" in ebit_ativo:
ebit_ativo.append(
dadosI[59]
.text.replace(".", "")
.replace(",", ".")
.replace("%", "")
)
if len(ebit_ativo[1]) <= 1:
ebit_ativo[1] = 0
else:
ebit_ativo.append(0)
#
p_ativo_circ_liq.append(dadosI[61].text)
if "P/Ativ Circ Liq" in p_ativo_circ_liq:
p_ativo_circ_liq.append(
dadosI[62]
.text.replace("\n", "")
.replace(".", "")
.replace(",", ".")
)
if len(p_ativo_circ_liq[1]) <= 1:
p_ativo_circ_liq[1] = 0
else:
p_ativo_circ_liq.append(0)
#
roic.append(dadosI[63].text)
if "ROIC" in roic:
roic.append(
dadosI[64]
.text.replace("\n", "")
.replace(".", "")
.replace(",", ".")
.replace("%", "")
)
if len(roic[1]) <= 1:
roic[1] = 0
else:
roic.append(0)
#
div_yield.append(dadosI[66].text)
if "Div. Yield" in div_yield:
div_yield.append(
dadosI[67].text.replace(",", ".").replace("%", "")
)
if len(div_yield[1]) <= 1:
div_yield[1] = 0
else:
div_yield.append(0)
#
roe.append(dadosI[68].text)
if "ROE" in roe:
roe.append(
dadosI[69]
.text.replace("\n", "")
.replace(".", "")
.replace(",", ".")
.replace("%", "")
)
if len(roe[1]) <= 1:
roe[1] = 0
else:
roe.append(0)
#
ev_ebitda.append(dadosI[71].text)
if "EV / EBITDA" in ev_ebitda:
ev_ebitda.append(
dadosI[72]
.text.replace("\n", "")
.replace(".", "")
.replace(",", ".")
)
if len(ev_ebitda[1]) <= 1:
ev_ebitda[1] = 0
else:
ev_ebitda.append(0)
#
liquidez_corr.append(dadosI[73].text)
if "Liquidez Corr" in liquidez_corr:
liquidez_corr.append(
dadosI[74].text.replace("\n", "").replace(",", ".")
)
if len(liquidez_corr[1]) <= 1:
liquidez_corr[1] = 0
else:
liquidez_corr.append(0)
#
ev_ebit.append(dadosI[76].text)
if "EV / EBIT" in ev_ebit:
ev_ebit.append(
dadosI[77]
.text.replace("\n", "")
.replace(".", "")
.replace(",", ".")
)
if len(ev_ebit[1]) <= 1:
ev_ebit[1] = 0
else:
ev_ebit.append(0)
#
cres_rec.append(dadosI[81].text)
if "Cres. Rec (5a)" in cres_rec:
cres_rec.append(
dadosI[82]
.text.replace("\n", "")
.replace(",", ".")
.replace("%", "")
)
if len(cres_rec[1]) <= 1:
cres_rec[1] = 0
else:
cres_rec.append(0)
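# Editor's note: every indicator block above repeats the same "read label, verify,
# normalize value" dance. A minimal helper sketch (hypothetical names, same behavior)
# that would collapse the pattern:
# def coleta(dest, i_label, i_val, label, pct=False):
#     dest.append(dadosI[i_label].text)
#     if label in dest[0]:
#         v = dadosI[i_val].text.replace("\n", "").replace(".", "").replace(",", ".")
#         dest.append(v.replace("%", "") if pct else v)
#         if len(dest[1]) <= 1:
#             dest[1] = 0
#     else:
#         dest.append(0)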
#
if setor[1] == "Intermediários Financeiros":
ativo.append("Ativo")
ativo.append(dadosI[87].text)
disponibilidades.append("Disponibilidades")
disponibilidades.append("0")
ativo_circulante.append("Ativo Circulante")
ativo_circulante.append("0")
divd_bruta.append("Dív. Bruta")
divd_bruta.append("0")
divd_liquida.append("Dív. Líquida")
divd_liquida.append("0")
patr_liquido.append("Patrim. Líq")
patr_liquido.append(dadosI[93].text)
lucro_liquido_12m.append("Lucro Líquido")
lucro_liquido_12m.append(dadosI[106].text)
lucro_liquido_3m.append("Lucro Líquido")
lucro_liquido_3m.append(dadosI[108].text)
else:
ativo.append(dadosI[86].text)
if "Ativo" in ativo:
ativo.append(dadosI[87].text)
else:
ativo.append(0)
#
disponibilidades.append(dadosI[90].text)
if "Disponibilidades" in disponibilidades:
disponibilidades.append(dadosI[91].text)
else:
disponibilidades.append(0)
#
ativo_circulante.append(dadosI[94].text)
if "Ativo Circulante" in ativo_circulante:
ativo_circulante.append(dadosI[95].text)
else:
ativo_circulante.append(0)
#
divd_bruta.append(dadosI[88].text)
if "Dív. Bruta" in divd_bruta:
divd_bruta.append(dadosI[89].text)
else:
divd_bruta.append(0)
#
divd_liquida.append(dadosI[92].text)
if "Dív. Líquida" in divd_liquida:
divd_liquida.append(dadosI[93].text)
else:
divd_liquida.append(0)
#
patr_liquido.append(dadosI[96].text)
if "Patrim. Líq" in patr_liquido:
patr_liquido.append(dadosI[97].text)
else:
patr_liquido.append(0)
#
lucro_liquido_12m.append(dadosI[109].text)
if "Lucro Líquido" in lucro_liquido_12m:
lucro_liquido_12m.append(dadosI[110].text)
else:
lucro_liquido_12m.append(0)
#
lucro_liquido_3m.append(dadosI[111].text)
if "Lucro Líquido" in lucro_liquido_3m:
lucro_liquido_3m.append(dadosI[112].text)
else:
lucro_liquido_3m.append(0)
# Query to insert the scraped data into the Postgres database
query_insert_bd = f" INSERT INTO dados VALUES ( '{dt}','{papel[1]}','{tipo[1]}','{empresa[1]}','{setor[1]}','{cotacao[1]}','{dt_ult_cotacao[1]}','{min_52_sem[1]}','{max_52_sem[1]}','{vol_med[1]}','{valor_mercado[1]}','{valor_firma[1]}','{ult_balanco_pro[1]}','{nr_acoes[1]}','{os_dia[1]}','{pl[1]}','{lpa[1]}','{pvp[1]}','{vpa[1]}','{p_ebit[1]}','{marg_bruta[1]}','{psr[1]}','{marg_ebit[1]}','{p_ativo[1]}','{marg_liquida[1]}','{p_cap_giro[1]}','{ebit_ativo[1]}','{p_ativo_circ_liq[1]}','{roic[1]}','{div_yield[1]}','{roe[1]}','{ev_ebitda[1]}','{liquidez_corr[1]}','{ev_ebit[1]}','{cres_rec[1]}','{ativo[1]}','{disponibilidades[1]}','{ativo_circulante[1]}','{divd_bruta[1]}','{divd_liquida[1]}','{patr_liquido[1]}','{lucro_liquido_12m[1]}','{lucro_liquido_3m[1]}' ) "
# Insert the scraped data into the Postgres database
__conectdb__.in_dados(query_insert_bd)
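# Editor's note: interpolating values straight into the SQL string is injection-prone
# and breaks on quotes in company names. A safer sketch, assuming in_dados could
# forward parameters to the driver (hypothetical signature):
# placeholders = ", ".join(["%s"] * 43)
# __conectdb__.in_dados(f"INSERT INTO dados VALUES ({placeholders})",
#                       (dt, papel[1], tipo[1], ..., lucro_liquido_3m[1]))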
print(
f"+{GREEN} Dados da ação: {i}, gravados com sucesso {RESET}+"
)
# --- #
n += 1
# Current data - append the freshly scraped values to the DataFrame
dados_atual.loc[dados_atual.shape[0]] = [
papel[1],tipo[1],empresa[1],setor[1],cotacao[1],dt_ult_cotacao[1],min_52_sem[1],max_52_sem[1],vol_med[1],valor_mercado[1],valor_firma[1],ult_balanco_pro[1],nr_acoes[1],os_dia[1],pl[1],lpa[1],pvp[1],vpa[1],p_ebit[1],marg_bruta[1],psr[1],marg_ebit[1],p_ativo[1],marg_liquida[1],p_cap_giro[1],ebit_ativo[1],p_ativo_circ_liq[1],roic[1],div_yield[1],roe[1],ev_ebitda[1],liquidez_corr[1],ev_ebit[1],cres_rec[1],ativo[1],disponibilidades[1],ativo_circulante[1],divd_bruta[1],divd_liquida[1],patr_liquido[1],lucro_liquido_12m[1],lucro_liquido_3m[1]
]
# Current data - write the current values out to a .csv file
dados_atual.to_csv('../Dados_Atual/dados.csv', sep=';')
except Exception as e:
print(f"+{RED} Data for ticker {i} not saved: {e} {RESET} +")
return
# Remove rows with empty values from the dados table (ref.: papel column)
delete_vazio = __query__.delete_vazio_query
__conectdb__.in_dados(delete_vazio)
# Remove duplicated rows from the dados table (ref.: papel / data_ult_cotacao columns)
delete_duplicados = __query__.delete_duplicados_query
__conectdb__.in_dados(delete_duplicados)
# Back up the database
csv_file_name = "../Backup/some_file.csv"
bk = __query__.backup_query
with open(csv_file_name, "w") as f:
__conectdb__.bk(bk, f)
# Stop the script timer
fim = time.time()
hours, rem = divmod(fim - inicio, 3600)
minutes, seconds = divmod(rem, 60)
# Done
print(f"{RED}-----------------{RESET}")
print(f"{BLUE}Finalizou. {n} Empresa(s) Cadastrada(s)")
print(
"Tempo: {:0>2}:{:0>2}:{:05.2f}".format(
int(hours), int(minutes), seconds
)
)
print(f"{RESET}{RED}-----------------{RESET}")
dados() | 49.381107 | 795 | 0.342183 | 2,432 | 30,320 | 4.064967 | 0.176809 | 0.105604 | 0.019421 | 0.028829 | 0.36142 | 0.283229 | 0.238418 | 0.198058 | 0.164677 | 0.144447 | 0 | 0.038663 | 0.552144 | 30,320 | 614 | 796 | 49.381107 | 0.689373 | 0.054914 | 0 | 0.203629 | 0 | 0.002016 | 0.0991 | 0.035075 | 0 | 0 | 0 | 0.001629 | 0 | 1 | 0.002016 | false | 0 | 0.02621 | 0 | 0.032258 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30b4cc58be44d21f1ee7aa4386ff6c15fa728253 | 2,750 | py | Python | client/hr_services/doctype/employee_loan_application/employee_loan_application.py | mhbu50/client | b99003b872a1599ba8c4b0ca948610a1f49d527f | [
"MIT"
] | null | null | null | client/hr_services/doctype/employee_loan_application/employee_loan_application.py | mhbu50/client | b99003b872a1599ba8c4b0ca948610a1f49d527f | [
"MIT"
] | null | null | null | client/hr_services/doctype/employee_loan_application/employee_loan_application.py | mhbu50/client | b99003b872a1599ba8c4b0ca948610a1f49d527f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe, math
from frappe import _
from frappe.utils import flt
from frappe.model.mapper import get_mapped_doc
from frappe.model.document import Document
from erpnext.hr.doctype.employee_loan.employee_loan import get_monthly_repayment_amount, check_repayment_method
class EmployeeLoanApplication(Document):
def validate(self):
check_repayment_method(self.repayment_method, self.loan_amount, self.repayment_amount, self.repayment_periods)
self.validate_loan_amount()
self.get_repayment_details()
self.validate_emp()
if self.workflow_state:
if "Rejected" in self.workflow_state:
# cancel the rejected application (the earlier extra assignment to 1 was dead code)
self.docstatus = 2
def validate_emp(self):
if self.get('__islocal'):
if u'CEO' in frappe.get_roles(frappe.session.user):
self.workflow_state = "Created By CEO"
elif u'Director' in frappe.get_roles(frappe.session.user):
self.workflow_state = "Created By Director"
elif u'Manager' in frappe.get_roles(frappe.session.user):
self.workflow_state = "Created By Manager"
elif u'Line Manager' in frappe.get_roles(frappe.session.user):
self.workflow_state = "Created By Line Manager"
elif u'Employee' in frappe.get_roles(frappe.session.user):
self.workflow_state = "Pending"
def validate_loan_amount(self):
maximum_loan_limit = frappe.db.get_value('Loan Type', self.loan_type, 'maximum_loan_amount')
if maximum_loan_limit and self.loan_amount > maximum_loan_limit:
frappe.throw(_("Loan Amount cannot exceed Maximum Loan Amount of {0}").format(maximum_loan_limit))
def get_repayment_details(self):
if self.repayment_method == "Repay over Number of Months":
self.repayment_amount = get_monthly_repayment_amount(self.repayment_method, self.loan_amount, self.rate_of_interest, self.repayment_periods)
if self.rate_of_interest > 0:
if self.repayment_method == "Repay Once":
monthly_interest_rate = flt(self.rate_of_interest) / (12 * 100)
self.repayment_periods = math.ceil((math.log(self.repayment_amount) - math.log(self.repayment_amount - \
(self.loan_amount*monthly_interest_rate)))/(math.log(1+monthly_interest_rate)))
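# Derivation note: the computation above solves the annuity formula
# A = P*i*(1+i)^n / ((1+i)^n - 1) for n, giving
# n = ceil((ln A - ln(A - P*i)) / ln(1 + i)),
# with A = repayment_amount, P = loan_amount and i = monthly_interest_rate.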
self.total_payable_amount = self.repayment_amount * self.repayment_periods
self.total_payable_interest = self.total_payable_amount - self.loan_amount
@frappe.whitelist()
def make_employee_loan(source_name, target_doc = None):
doclist = get_mapped_doc("Employee Loan Application", source_name, {
"Employee Loan Application": {
"doctype": "Employee Loan",
"validation": {
"docstatus": ["=", 1]
}
}
}, target_doc)
return doclist | 41.044776 | 143 | 0.768 | 384 | 2,750 | 5.236979 | 0.273438 | 0.084038 | 0.059175 | 0.039781 | 0.302337 | 0.229736 | 0.229736 | 0.197911 | 0.14918 | 0.14918 | 0 | 0.006683 | 0.129455 | 2,750 | 67 | 144 | 41.044776 | 0.833333 | 0.049455 | 0 | 0 | 0 | 0 | 0.131367 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.12963 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30b91860bc2a1d24f754665429d1043bd8923a57 | 17,616 | py | Python | rdll.py | juntalis/python-ctypes-sandbox | a71783d79dff693fbf6d4f77b1ee864c8c1f30c9 | [
"WTFPL"
] | 3 | 2015-06-07T11:33:00.000Z | 2021-01-28T09:05:13.000Z | rdll.py | juntalis/python-ctypes-sandbox | a71783d79dff693fbf6d4f77b1ee864c8c1f30c9 | [
"WTFPL"
] | null | null | null | rdll.py | juntalis/python-ctypes-sandbox | a71783d79dff693fbf6d4f77b1ee864c8c1f30c9 | [
"WTFPL"
] | 1 | 2015-06-07T11:40:15.000Z | 2015-06-07T11:40:15.000Z | # encoding: utf-8
"""
This program is free software. It comes without any warranty, to
the extent permitted by applicable law. You can redistribute it
and/or modify it under the terms of the Do What The Fuck You Want
To Public License, Version 2, as published by Sam Hocevar. See
http://sam.zoy.org/wtfpl/COPYING for more details.
TODO: Make this not suck.
"""
import os
from _ctypes import FUNCFLAG_CDECL as _FUNCFLAG_CDECL,\
FUNCFLAG_STDCALL as _FUNCFLAG_STDCALL,\
FUNCFLAG_PYTHONAPI as _FUNCFLAG_PYTHONAPI,\
FUNCFLAG_USE_ERRNO as _FUNCFLAG_USE_ERRNO,\
FUNCFLAG_USE_LASTERROR as _FUNCFLAG_USE_LASTERROR
from _kernel32 import PLoadLibraryW as PLoadLibrary
from extern import pefile
import functools
from _kernel32 import *
from struct import calcsize as _calcsz
# Utility stuff (decorators/base classes/functions)
def memoize(obj):
"""
From the Python Decorator Library (http://wiki.python.org/moin/PythonDecoratorLibrary):
Cache the results of a function call with specific arguments. Note that this decorator ignores **kwargs.
"""
cache = obj.cache = {}
@functools.wraps(obj)
def memoizer(*args, **kwargs):
if args not in cache:
cache[args] = obj(*args, **kwargs)
return cache[args]
return memoizer
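# Caveat worth remembering (illustrative): because **kwargs are ignored by the cache
# key, f(1, k=2) and f(1, k=3) would return the same memoized result.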
def _find_parent_process():
"""
Obtain the process and thread identifiers of the parent process.
BOOL get_parent_process( LPPROCESS_INFORMATION ppi )
{
HANDLE hSnap;
PROCESSENTRY32 pe;
THREADENTRY32 te;
DWORD id = GetCurrentProcessId();
BOOL fOk;
hSnap = CreateToolhelp32Snapshot( TH32CS_SNAPPROCESS|TH32CS_SNAPTHREAD, id );
if (hSnap == INVALID_HANDLE_VALUE)
return FALSE;
find_proc_id( hSnap, id, &pe );
if (!find_proc_id( hSnap, pe.th32ParentProcessID, &pe ))
{
CloseHandle( hSnap );
return FALSE;
}
te.dwSize = sizeof(te);
for (fOk = Thread32First( hSnap, &te ); fOk; fOk = Thread32Next( hSnap, &te ))
if (te.th32OwnerProcessID == pe.th32ProcessID)
break;
CloseHandle( hSnap );
ppi->dwProcessId = pe.th32ProcessID;
ppi->dwThreadId = te.th32ThreadID;
return fOk;
}
"""
pid = GetCurrentProcessId()
hSnap = CreateToolhelp32Snapshot(PROC_THREAD_SNAPSHOT, 0)
if hSnap == NULL:
raise WinError('Could not create a Toolhelp32Snapshot')
(fOk, pe) = _find_proc_id(hSnap, pid)
if fOk == FALSE:
raise WinError('Could not find current proc')
ppid = pe.th32ParentProcessID
fOk, ppe = _find_proc_id(hSnap, ppid)
if fOk == FALSE:
raise WinError('Could not find parent proc id')
te = THREADENTRY32()
te.dwSize = SZTHREADENTRY
fOk = Thread32First(hSnap, byref(te))
while fOk != FALSE:
if te.th32OwnerProcessID == ppe.th32ProcessID: break
fOk = Thread32Next(hSnap, byref(te))
if fOk == FALSE:
raise WinError('Could not find thread.')
CloseHandle(hSnap)
return ppe.th32ProcessID, te.th32ThreadID
def _find_proc_id(hSnap, pid):
"""
Search each process in the snapshot for id.
BOOL find_proc_id( HANDLE snap, DWORD id, LPPROCESSENTRY32 ppe )
{
BOOL fOk;
ppe->dwSize = sizeof(PROCESSENTRY32);
for (fOk = Process32First( snap, ppe ); fOk; fOk = Process32Next( snap, ppe ))
if (ppe->th32ProcessID == id)
break;
return fOk;
}
"""
ppe = PROCESSENTRY32()
ppe.dwSize = SZPROCESSENTRY
fOk = Process32First(hSnap, byref(ppe))
while fOk != FALSE:
if ppe.th32ProcessID == pid: break
fOk = Process32Next(hSnap, byref(ppe))
return fOk, ppe
def _bypid(pid):
"""
Find a process and it's main thread by its process ID.
"""
hSnap = CreateToolhelp32Snapshot(PROC_THREAD_SNAPSHOT, 0)
if hSnap == NULL: raise WinError('Could not create a Toolhelp32Snapshot')
(fOk, pe) = _find_proc_id(hSnap, pid)
if fOk == FALSE: raise WinError('Could not find process by id: %d' % pid)
# Find the thread
te = THREADENTRY32()
te.dwSize = SZTHREADENTRY
fOk = Thread32First(hSnap, byref(te))
while fOk != FALSE:
if te.th32OwnerProcessID == pe.th32ProcessID: break
fOk = Thread32Next(hSnap, byref(te))
if fOk == FALSE: raise WinError('Could not find thread.')
CloseHandle(hSnap)
return pe.th32ProcessID, te.th32ThreadID
def _pack_args(*args):
""" Pack multiple arguments into """
class _Args(Structure): pass
fields = []
for i, arg in enumerate(args):
fields.append(('arg%d' % i, type(arg),))
_Args._fields_ = fields
Args = _Args()
for i, arg in enumerate(args):
try:
setattr(Args, 'arg%d' % i, arg)
except:
try:
setattr(Args, 'arg%d' % i, arg.value)
except:
setattr(Args, 'arg%d' % i, arg.contents)
return Args
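# Illustrative use: _pack_args(c_int(1), c_int(2)) yields a Structure with fields
# arg0/arg1, i.e. the arguments laid out contiguously so a single lpParameter
# pointer can carry them all into the remote thread.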
_szp1 = lambda a: len(a) + 1
def _isptr(typ):
return hasattr(typ, '_type_') and (typ._type_ == 'P' or type(typ._type_) != str)
def _pynumtyp2ctype(arg, typ=None):
if typ is None: typ = type(arg)
if typ == int:
if arg < 0:
#ctyp = c_short
#if arg > c_short_max or arg < c_short_min:
ctyp = c_int
if arg > c_int_max or arg < c_int_min:
ctyp = c_longlong if arg > c_long_max or arg < c_long_min else c_long
return ctyp
else:
#ctyp = c_ushort
#if arg > c_ushort_max:
ctyp = c_uint
if arg > c_uint_max:
ctyp = c_ulonglong if arg > c_ulong_max else c_ulong
return ctyp
elif typ == long:
if arg < 0:
return c_longlong if arg > c_long_max or arg < c_long_min else c_long
else:
return c_ulonglong if arg > c_ulong_max else c_ulong
elif typ == float:
ctyp = c_float
try: result = ctyp(arg)
except:
ctyp = c_double
try: result = ctyp(arg)
except: ctyp = c_longdouble
return ctyp
else:
raise Exception("Arg doesn't appear to be a number-type. Arg: %s Type: %s" % (str(arg), str(typ)))
def _carrtype(val, typ, size, num=True):
buf = typ()
larg = len(val) - 1
for i in range(0, size - 1):
if i > larg: continue
if type(val[i]) in [str, unicode] and num:
val[i] = ord(val[i])
buf[i] = val[i]
return buf
def _pychars2ctype(arg, size = None, typ=None):
if typ is None: typ = type(arg)
if size is None: size = len(arg)
if typ == str:
return c_char_p, create_string_buffer(arg, size)
elif typ == unicode:
return c_wchar_p, create_unicode_buffer(arg, size)
elif typ == buffer:
#noinspection PyTypeChecker
argtype = c_ubyte * size
return argtype, _carrtype(list(arg), argtype, size)
elif typ == bytearray:
size += 1
#noinspection PyTypeChecker,PyUnresolvedReferences
argtype = c_byte * size
return argtype, _carrtype(list(arg), argtype, size - 1)
def py2ctype(arg):
""" TODO: Use this in the allocation/argtype stuff in RCFuncPtr """
typ = type(arg)
if typ in [str, unicode, buffer, bytearray]:
ctyp, cval = _pychars2ctype(arg, typ=typ)
return cval
elif typ in [ int, long, float ]:
ctyp = _pynumtyp2ctype(arg, typ)
return ctyp(arg)
elif typ in [list, set, tuple]:
arg = list(arg)
size = len(arg) + 1
argtype = c_int
numtyp = True
# Only going to handle collections of strings, unicode strings, and numbers
for argi in arg:
typ = type(argi)
if typ in [ str, unicode ]:
argtype, dummy = _pychars2ctype(argi, typ=typ)
numtyp = False
break
elif typ in [ long, int, float ]:
argtype = _pynumtyp2ctype(argi, typ)
if typ == float: numtyp = False
break
return _carrtype(arg, argtype * size, size, num=numtyp)
else:
raise Exception("Don't know what to do with arg.\nArg: %s\nType: %s" % (arg, type(arg)))
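# Rough mapping performed above (for reference): str/unicode/buffer/bytearray become
# ctypes string or byte buffers, int/long/float pick the smallest fitting c_* type,
# and list/set/tuple become a homogeneous ctypes array one slot longer than the input.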
class _RCFuncPtr(object):
_addr_ = 0
_flags_ = None
_restype_ = None
_funcptr_ = None
_hprocess_ = None
def _valueof(self, arg):
if not hasattr(arg, '_type_'):
return arg
elif hasattr(arg, 'value'):
return arg.value
elif hasattr(arg, 'contents'):
return arg.contents
else:
return arg
#raise Exception('Don\'t know how to get the value of arg.\nType: %s' % type(arg))
def _valtoargtype(self, arg, argtype):
result = 0
if type(arg) in [str, unicode]:
if argtype == c_char_p:
result = create_string_buffer(arg, len(arg) + 1)
elif argtype == c_wchar_p:
result = create_unicode_buffer(arg, len(arg) + 1)
elif argtype._type_ == c_ubyte:
result = (c_ubyte * len(arg) + 1)()
for i, c in enumerate(arg):
result[i] = c
else:
raise Exception('Don\'t know how to convert string, "%s" into type: %s' % (arg, argtype))
# Array type
elif hasattr(argtype, '_length_')\
or type(argtype._type_) != str: # Pointer type
try:
result = cast(arg, argtype)
except:
result = arg
elif hasattr(argtype, 'value'):
try: result = argtype(arg)
except: result = arg
else:
try: result = cast(arg, c_void_p)
except: result = arg
#raise Exception('Don\'t know how to convert arg to argtype.\nArg: %s\nArgtype: %s' % (arg, argtype))
return result
def _alloc_set_var(self, val):
"""
BOOL alloc_set_varA(LPCSTR* buffer, HANDLE hProcess, LPCSTR val)
{
SIZE_T buflen = (lstrlen(val) + 1) * sizeof(const char);
if (!(*buffer = (LPCSTR) VirtualAllocEx(hProcess, NULL, buflen, MEM_COMMIT, PAGE_READWRITE)))
return_error("Could not allocate memory for our test call.");
if (!WriteProcessMemory(hProcess, (LPVOID)*buffer, (LPCVOID)val, (SIZE_T)buflen, NULL))
return_error("Could write to our remote variable..");
return TRUE;
}
"""
buflen = sizeof(val)
buffer = VirtualAllocEx(self._hprocess_, 0L, buflen, MEM_COMMIT, PAGE_READWRITE)
if buffer == NULL:
raise Exception('Could not allocate our remote buffer.')
try:
if WriteProcessMemory(self._hprocess_, LPCVOID(buffer), val, buflen, ULONG_PTR(0)) == FALSE:
raise Exception('Could not write to our remote variable.')
except ArgumentError:
if WriteProcessMemory(self._hprocess_, LPCVOID(buffer), addressof(val), buflen, ULONG_PTR(0)) == FALSE:
raise Exception('Could not write to our remote variable.')
return buffer
def __call__(self, *more): # real signature unknown; restored from __doc__
""" x.__call__(...) <==> x(...) """
funcptr = self._funcptr_
result = DWORD(0L) if not hasattr(funcptr, 'restype') or funcptr.restype is None else funcptr.restype()
lpParameter = NULL
if not hasattr(funcptr, 'noalloc') or not funcptr.noalloc:
if funcptr.argtypes is not None and len(funcptr.argtypes) > 0:
args = []
argcount = len(more)
for i, argtype in enumerate(funcptr.argtypes):
arg = 0
if i >= argcount:
arg = argtype()
elif hasattr(more[i], '_type_'):
if more[i]._type_ == argtype:
arg = more[i]
else:
arg = self._valtoargtype(self._valueof(more[i]), argtype)
else:
arg = self._valtoargtype(more[i], argtype)
args.append(arg)
if argcount > 1:
lpParameter = _pack_args(*args)
else:
lpParameter = args[0]
if hasattr(lpParameter, '_b_needsfree_') and lpParameter._b_needsfree_ == 1 and bool(lpParameter):
lpParameter = self._alloc_set_var(lpParameter)
elif len(more) > 0:
if len(more) == 1:
lpParameter = cast(more[0], c_void_p)
else:
tlen = len(self.argtypes) if hasattr(self, 'argtypes') else 0
more = list(more)
for i, arg in enumerate(more):
if i > tlen: more[i] = py2ctype(arg)
else:
typ = self.argtypes[i]
if typ == c_char_p:
more[i] = create_string_buffer(arg)
elif typ == c_wchar_p:
more[i] = create_unicode_buffer(arg)
elif _isptr(typ):
more[i] = cast(arg,typ)
else:
more[i] = self.argtypes[i](arg)
lpParameter = _pack_args(*more)
hRemoteThread = CreateRemoteThread(
self._hprocess_, NULL_SECURITY_ATTRIBUTES, 0,
cast(self._addr_, LPTHREAD_START_ROUTINE),
lpParameter, 0L, byref(c_ulong(0L))
)
if hRemoteThread == NULL:
if hasattr(lpParameter, '_b_needsfree_') and lpParameter._b_needsfree_ == 1 and bool(lpParameter):
VirtualFreeEx(self._hprocess_, lpParameter, 0, MEM_RELEASE)
CloseHandle(self._hprocess_)
raise WinError('Failed to start our remote thread.')
WaitForSingleObject(hRemoteThread, INFINITE)
GetExitCodeThread(hRemoteThread, cast(byref(result), LPDWORD))
CloseHandle(hRemoteThread)
if hasattr(lpParameter, '_b_needsfree_') and lpParameter._b_needsfree_ == 1 and bool(lpParameter):
VirtualFreeEx(self._hprocess_, lpParameter, 0, MEM_RELEASE)
return result
def __init__(self, offset, funcid, rdll):
self._addr_ = offset
if self._flags_ == _FUNCFLAG_CDECL:
self._funcptr_ = CFUNCTYPE(self._restype_)
elif self._flags_ == _FUNCFLAG_STDCALL:
self._funcptr_ = WINFUNCTYPE(self._restype_)
elif self._flags_ == _FUNCFLAG_PYTHONAPI:
self._funcptr_ = PYFUNCTYPE(self._restype_)
self._funcptr_._func_flags_ = self._flags_
def __nonzero__(self):
""" x.__nonzero__() <==> x != 0 """
return self._funcptr_.__nonzero__()
def __repr__(self): # real signature unknown; restored from __doc__
""" x.__repr__() <==> repr(x) """
return self._funcptr_.__repr__()
@memoize
def _has(self, key): return key in dir(_RCFuncPtr)
def __setattr__(self, key, value):
if self._has(key):
super(_RCFuncPtr, self).__setattr__(key, value)
else:
setattr(self._funcptr_, key, value)
def __getattr__(self, key):
return super(_RCFuncPtr, self).__getattr__(key) if\
self._has(key) else\
getattr(self._funcptr_, key)
class RCDLL(object):
_func_flags_ = _FUNCFLAG_CDECL
_func_restype_ = c_int
_hprocess_ = 0
_hthread_ = 0
_exports_ = {}
_funcs_ = {}
def __init__(self, name = None, pid = 0, thid = 0, mode = DEFAULT_MODE, handle = None, use_errno = False,
use_last_error = False):
if name is None and handle is None:
raise WindowsError('We need either a name or a handle to a preloaded DLL to create a DLL interface.')
elif name is None:
self._name = GetModuleFileName(handle)
else:
self._name = name
flags = self._func_flags_
if use_errno:
flags |= _FUNCFLAG_USE_ERRNO
if use_last_error:
flags |= _FUNCFLAG_USE_LASTERROR
self._hthread_ = thid
pi, ti = 0, 0
if pid == 0:
check = _find_parent_process()
if check is None:
raise WinError('Failed to open our parent process and no pid specified.')
pi, ti = check
else:
pi, ti = _bypid(pid)
if self._hthread_ == 0:
self._hthread_ = ti
self._hprocess_ = OpenProcess(PROCESS_MOST, FALSE, pi)
class _FuncPtr(_RCFuncPtr):
_flags_ = flags
_restype_ = self._func_restype_
_hprocess_ = self._hprocess_
self._FuncPtr = _FuncPtr
self._handle = self.__inject__()
if self._handle == 0:
raise WindowsError('Could not inject your library: %s' % self._name)
self.__populate_exports__()
def __inject__(self):
val = create_unicode_buffer(self._name, len(self._name) + 1)
buflen = sizeof(val)
buffer = VirtualAllocEx(self._hprocess_, 0L, buflen, MEM_COMMIT, PAGE_READWRITE)
if buffer == NULL:
raise Exception('Could not allocate our remote buffer.')
if WriteProcessMemory(self._hprocess_, buffer, cast(val, LPCVOID), buflen, ULONG_PTR(0)) == FALSE:
raise Exception('Could not write to our remote variable.')
hRemoteThread = CreateRemoteThread(
self._hprocess_, NULL_SECURITY_ATTRIBUTES, 0,
PLoadLibrary, buffer, 0L, byref(c_ulong(0L))
)
if hRemoteThread == NULL:
VirtualFreeEx(self._hprocess_, buffer, 0, MEM_RELEASE)
CloseHandle(self._hprocess_)
raise WinError('Failed to start our remote thread.')
WaitForSingleObject(hRemoteThread, INFINITE)
result = c_ulong(0)
GetExitCodeThread(hRemoteThread, byref(result))
CloseHandle(hRemoteThread)
VirtualFreeEx(self._hprocess_, buffer, 0, MEM_RELEASE)
return result.value
def __populate_exports__(self):
if len(os.path.splitext(self._name)[1].lower()) == 0:
self._name += '.dll'
pe = pefile.PE(self._name, fast_load = True)
direxport = pe.OPTIONAL_HEADER.DATA_DIRECTORY[0]
exportsobj = pe.parse_export_directory(direxport.VirtualAddress, direxport.Size)
pe.close()
for export in exportsobj.symbols:
self._exports_[export.name] =\
self._exports_[export.ordinal] =\
self._handle + export.address
def __repr__(self):
return "<%s '%s', handle %x at %x>" %\
(self.__class__.__name__, self._name,
(self._handle & (_sys.maxint * 2 + 1)),
id(self) & (_sys.maxint * 2 + 1))
def __getattr__(self, name):
if name.startswith('__') and name.endswith('__'):
raise AttributeError(name)
func = self.__getitem__(name)
super(RCDLL, self).__setattr__(name, func)
self._funcs_[name] = func
#setattr(self, name, func)
return func
def __setattr__(self, key, value):
if key in self._exports_.keys():
self._funcs_[key] = value
else:
super(RCDLL, self).__setattr__(key, value)
def __getitem__(self, name_or_ordinal):
if name_or_ordinal in self._funcs_.keys():
return self._funcs_[name_or_ordinal]
ordinal = isinstance(name_or_ordinal, (int, long))
if not self._exports_.has_key(name_or_ordinal):
if ordinal: raise WindowsError('Could not find address of function at ordinal: %d' % name_or_ordinal)
else: raise WindowsError('Could not find address of function named: %s' % name_or_ordinal)
func = self._FuncPtr(self._exports_[name_or_ordinal], name_or_ordinal, self)
if not ordinal:
func.__name__ = name_or_ordinal
return func
class RPyDLL(RCDLL):
"""This class represents the Python library itself. It allows to
access Python API functions. The GIL is not released, and
Python exceptions are handled correctly.
"""
_func_flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI
class RWinDLL(RCDLL):
"""This class represents a dll exporting functions using the
Windows stdcall calling convention.
"""
_func_flags_ = _FUNCFLAG_STDCALL
rcdll = LibraryLoader(RCDLL)
rwindll = LibraryLoader(RWinDLL)
rpydll = LibraryLoader(RPyDLL)
if __name__ == '__main__':
testdll = RCDLL('testdll.dll')
Initialize = testdll.Initialize
Initialize.restype = None
Initialize.argtypes = []
Initialize()
testdll.Finalize()
| 30.689895 | 106 | 0.699535 | 2,431 | 17,616 | 4.828877 | 0.178116 | 0.010904 | 0.011074 | 0.012522 | 0.280348 | 0.26544 | 0.236051 | 0.21075 | 0.166965 | 0.166965 | 0 | 0.011008 | 0.185229 | 17,616 | 573 | 107 | 30.743456 | 0.80687 | 0.036614 | 0 | 0.246973 | 0 | 0.009685 | 0.071183 | 0 | 0 | 0 | 0 | 0.00349 | 0 | 0 | null | null | 0.002421 | 0.016949 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30b94d8884a9988e3bf75f6547fb55a3b5efb3c6 | 254 | py | Python | pyneos/mypaths.py | kavehshamsi/neos | 2ecfbd821c1cd53d1fecf1b51d25df8124955345 | [
"MIT"
] | null | null | null | pyneos/mypaths.py | kavehshamsi/neos | 2ecfbd821c1cd53d1fecf1b51d25df8124955345 | [
"MIT"
] | null | null | null | pyneos/mypaths.py | kavehshamsi/neos | 2ecfbd821c1cd53d1fecf1b51d25df8124955345 | [
"MIT"
] | null | null | null |
import os
home_dir = os.path.expanduser('~')
neos_dir = '.' # have this point to your neos directory
neos_cmd = neos_dir + '/bin/neos'
abc_cmd = neos_dir + '/bin/abc'
abclib_path = neos_dir + '/cells/simpler.lib'
neos_bench_dir = neos_dir + '/bench/'
| 23.090909 | 55 | 0.69685 | 41 | 254 | 4.04878 | 0.512195 | 0.210843 | 0.120482 | 0.156627 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153543 | 254 | 10 | 56 | 25.4 | 0.772093 | 0.149606 | 0 | 0 | 0 | 0 | 0.207547 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30bda8ef7e38b0405c6b820fe73dc16fb49cbd7d | 1,245 | py | Python | house_prices_regression_model/predict.py | jmonsalverodilla/house_prices_regression_model | 28fd24e777fcf838acffda6ea669e1339d92819d | [
"MIT"
] | null | null | null | house_prices_regression_model/predict.py | jmonsalverodilla/house_prices_regression_model | 28fd24e777fcf838acffda6ea669e1339d92819d | [
"MIT"
] | null | null | null | house_prices_regression_model/predict.py | jmonsalverodilla/house_prices_regression_model | 28fd24e777fcf838acffda6ea669e1339d92819d | [
"MIT"
] | null | null | null | import typing as t
import numpy as np
import pandas as pd
from house_prices_regression_model import __version__ as VERSION
from house_prices_regression_model.processing.data_manager import load_pipeline
from house_prices_regression_model.config.core import load_config_file, SETTINGS_PATH
from house_prices_regression_model.processing.data_validation import validate_inputs
# Config files
config = load_config_file(SETTINGS_PATH)
PIPELINE_ARTIFACT_NAME = config["PIPELINE_ARTIFACT_NAME"]
pipeline_file_name = f"{PIPELINE_ARTIFACT_NAME}_v{VERSION}.pkl"
cb_pipe = load_pipeline(file_name=pipeline_file_name)
# Function
def make_prediction(*, input_data: t.Union[pd.DataFrame, dict]) -> dict:
"""Make a prediction using a saved model pipeline."""
df = pd.DataFrame(input_data)
validated_df, error_dict = validate_inputs(input_data=df)
errors_list = list(error_dict.values())
results = {'model_output': None}
if error_dict == {}:
log_predictions = cb_pipe.predict(validated_df)
predictions = [np.exp(pred) for pred in log_predictions]
results['model_output'] = predictions
else:
results['model_output'] = 'Errors making prediction: ' + ' '.join(map(str, errors_list))
return results
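# Usage sketch (feature names are assumptions, not the real schema):
# preds = make_prediction(input_data={"GrLivArea": [1710], "OverallQual": [7]})
# print(preds["model_output"])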
| 40.16129 | 95 | 0.774297 | 171 | 1,245 | 5.298246 | 0.403509 | 0.039735 | 0.066225 | 0.110375 | 0.220751 | 0.09713 | 0.09713 | 0 | 0 | 0 | 0 | 0 | 0.138153 | 1,245 | 30 | 96 | 41.5 | 0.844362 | 0.055422 | 0 | 0 | 0 | 0 | 0.105218 | 0.052181 | 0.043478 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.304348 | 0 | 0.391304 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
30c15578533a2994c5fc4ffc5aaf53e8cae12930 | 3,355 | py | Python | src/gui/SelectActorsFormWrapper.py | perfidia/afefuc | 9717f446dab909cbdac0dc57374859f75436238e | [
"MIT"
] | null | null | null | src/gui/SelectActorsFormWrapper.py | perfidia/afefuc | 9717f446dab909cbdac0dc57374859f75436238e | [
"MIT"
] | null | null | null | src/gui/SelectActorsFormWrapper.py | perfidia/afefuc | 9717f446dab909cbdac0dc57374859f75436238e | [
"MIT"
] | 1 | 2021-10-01T18:09:33.000Z | 2021-10-01T18:09:33.000Z | '''
Created on Apr 25, 2013
@author: Bartosz Alchimowicz
'''
from PyQt4 import QtCore, QtGui
from generated.ui.SelectForm import Ui_SelectForm
#from format import model
#from utils import converter
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
_fromUtf8 = lambda s: s
class SelectActorsTableModel(QtCore.QAbstractTableModel):
def __init__(self, parent, project, item, target, unselectable):
QtCore.QAbstractItemModel.__init__(self, parent)
self.project = project
self.parent = parent
#self.item = item
self.target = target
self.unselectable = []
unselectable = [a.item for a in unselectable]
for i, a in enumerate(project.actors):
if a in unselectable:
self.unselectable.append(i)
def rowCount(self, index):
return len(self.project.actors)
def columnCount(self, parent):
return 2
def index(self, row, column, parent):
if not parent.isValid():
return self.createIndex(row, column, None)
def data(self, index, role):
column = index.column()
if column == 0 and role == QtCore.Qt.DisplayRole:
return QtCore.QVariant(self.project.actors[index.row()].identifier)
elif column == 1 and role == QtCore.Qt.DisplayRole:
return QtCore.QVariant(self.project.actors[index.row()].name)
def flags(self, index):
flags = super(QtCore.QAbstractTableModel, self).flags(index)
if index.row() in self.unselectable:
flags = QtCore.Qt.NoItemFlags
return flags
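# Rows flagged in self.unselectable report NoItemFlags, so Qt renders them
# greyed-out and refuses to select them in the view.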
def parent(self, index):
return QtCore.QModelIndex()
class SelectActorsFormWrapper():
def __init__(self, parent, project, item, target, unselectable, single):
self.parent = parent
self.dialog = QtGui.QDialog()
self.form = Ui_SelectForm()
self.project = project
self.item = None #item
self.target = target
self.unselectable = unselectable
self.single = single
def load(self):
toSelect = [i.item for i in self.target]
tmp = self.form.itemsView.selectionModel()
for i, a in enumerate(self.project.actors):
if a in toSelect:
tmp.select(
self.model.createIndex(i, 0, None),
QtGui.QItemSelectionModel.Select | QtGui.QItemSelectionModel.Rows
)
def show(self):
self.form.setupUi(self.dialog)
self.model = SelectActorsTableModel(self.form.itemsView, self.project, self.item, self.target, self.unselectable)
self.form.itemsView.setModel(self.model)
self.form.itemsView.horizontalHeader().setResizeMode(0, QtGui.QHeaderView.ResizeToContents)
self.form.itemsView.horizontalHeader().setResizeMode(1, QtGui.QHeaderView.Stretch)
self.form.itemsView.horizontalHeader().hide()
self.form.itemsView.verticalHeader().hide()
self.form.itemsView.setSelectionBehavior(QtGui.QAbstractItemView.SelectRows)
if self.single:
self.form.itemsView.setSelectionMode(QtGui.QAbstractItemView.SingleSelection)
QtCore.QObject.connect(self.form.boxButton, QtCore.SIGNAL(_fromUtf8("accepted()")), self.clickedOKButton)
QtCore.QObject.connect(self.form.boxButton, QtCore.SIGNAL(_fromUtf8("rejected()")), self.clickedCancelButton)
self.load()
self.dialog.exec_()
def clickedCancelButton(self):
self.dialog.close()
def clickedOKButton(self):
indexes = set([i.row() for i in self.form.itemsView.selectedIndexes()])
while len(self.target): del self.target[0] # empty the list in place so external references stay valid
for i in indexes:
self.target.append(self.project.actors[i].get_ref())
self.dialog.close()
| 28.675214 | 115 | 0.742176 | 421 | 3,355 | 5.866983 | 0.28266 | 0.045344 | 0.068826 | 0.040081 | 0.244534 | 0.179757 | 0.179757 | 0.140891 | 0.103644 | 0.05749 | 0 | 0.006554 | 0.135917 | 3,355 | 116 | 116 | 28.922414 | 0.845464 | 0.03845 | 0 | 0.101266 | 1 | 0 | 0.006223 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.151899 | false | 0 | 0.025316 | 0.037975 | 0.291139 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30c22525eb175856677ab787ab580cdf22d02aef | 737 | py | Python | guillotina/db/strategies/simple.py | rboixaderg/guillotina | fcae65c2185222272f3b8fee4bc2754e81e0e983 | [
"BSD-2-Clause"
] | 173 | 2017-03-10T18:26:12.000Z | 2022-03-03T06:48:56.000Z | guillotina/db/strategies/simple.py | rboixaderg/guillotina | fcae65c2185222272f3b8fee4bc2754e81e0e983 | [
"BSD-2-Clause"
] | 921 | 2017-03-08T14:04:43.000Z | 2022-03-30T10:28:56.000Z | guillotina/db/strategies/simple.py | rboixaderg/guillotina | fcae65c2185222272f3b8fee4bc2754e81e0e983 | [
"BSD-2-Clause"
] | 60 | 2017-03-16T19:59:44.000Z | 2022-03-03T06:48:59.000Z | from guillotina import configure
from guillotina import glogging
from guillotina.db.interfaces import IDBTransactionStrategy
from guillotina.db.interfaces import ITransaction
from guillotina.db.strategies.base import BaseStrategy
logger = glogging.getLogger("guillotina")
@configure.adapter(for_=ITransaction, provides=IDBTransactionStrategy, name="simple")
class SimpleStrategy(BaseStrategy):
async def tpc_begin(self):
await self.retrieve_tid()
if self._transaction._db_txn is None:
await self._storage.start_transaction(self._transaction)
async def tpc_finish(self):
# do actual db commit
if self.writable_transaction:
await self._storage.commit(self._transaction)
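# Note: tpc_begin opens the underlying DB transaction lazily, and tpc_finish
# commits only when the transaction actually performed writes (writable_transaction).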
| 33.5 | 85 | 0.766621 | 84 | 737 | 6.571429 | 0.488095 | 0.126812 | 0.086957 | 0.094203 | 0.115942 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162822 | 737 | 21 | 86 | 35.095238 | 0.894652 | 0.02578 | 0 | 0 | 0 | 0 | 0.022346 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
30c5f2bbaaebb57c2278a7e13761c8ab234ce074 | 945 | py | Python | app/grandchallenge/products/migrations/0007_projectairfiles.py | nlessmann/grand-challenge.org | 36abf6ccb40e2fc3fd3ff00e81deabd76f7e1ef8 | [
"Apache-2.0"
] | 101 | 2018-04-11T14:48:04.000Z | 2022-03-28T00:29:48.000Z | app/grandchallenge/products/migrations/0007_projectairfiles.py | nlessmann/grand-challenge.org | 36abf6ccb40e2fc3fd3ff00e81deabd76f7e1ef8 | [
"Apache-2.0"
] | 1,733 | 2018-03-21T11:56:16.000Z | 2022-03-31T14:58:30.000Z | app/grandchallenge/products/migrations/0007_projectairfiles.py | nlessmann/grand-challenge.org | 36abf6ccb40e2fc3fd3ff00e81deabd76f7e1ef8 | [
"Apache-2.0"
] | 42 | 2018-06-08T05:49:07.000Z | 2022-03-29T08:43:01.000Z | # Generated by Django 3.1.11 on 2021-07-01 20:18
from django.db import migrations, models
import grandchallenge.core.storage
class Migration(migrations.Migration):
dependencies = [
("products", "0006_product_ce_under"),
]
operations = [
migrations.CreateModel(
name="ProjectAirFiles",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("title", models.CharField(max_length=150)),
(
"study_file",
models.FileField(
upload_to=grandchallenge.core.storage.get_pdf_path
),
),
],
),
]
| 25.540541 | 74 | 0.433862 | 70 | 945 | 5.7 | 0.8 | 0.090226 | 0.125313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046559 | 0.477249 | 945 | 36 | 75 | 26.25 | 0.761134 | 0.048677 | 0 | 0.172414 | 1 | 0 | 0.070234 | 0.023411 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.068966 | 0 | 0.172414 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30c78c08bfce5b97f5737d7a5077457a88b30664 | 225 | py | Python | shop/shop/products/validators.py | nikolaynikolov971/NftShop | 09a535a6f708f0f6da5addeb8781f9bdcea72cf3 | [
"MIT"
] | null | null | null | shop/shop/products/validators.py | nikolaynikolov971/NftShop | 09a535a6f708f0f6da5addeb8781f9bdcea72cf3 | [
"MIT"
] | null | null | null | shop/shop/products/validators.py | nikolaynikolov971/NftShop | 09a535a6f708f0f6da5addeb8781f9bdcea72cf3 | [
"MIT"
] | null | null | null | from django.core.exceptions import ValidationError
def is_alpha_or_space_validator(value):
result = all(c.isalpha() or c.isspace() for c in value)
if not result:
raise ValidationError("Write a valid name.")
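# Example behaviour: "John Doe" passes silently, "John3" raises ValidationError.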
| 28.125 | 59 | 0.728889 | 33 | 225 | 4.848485 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.182222 | 225 | 7 | 60 | 32.142857 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0.084444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30c939a4b0dc0d1167348e5202a204de6c5d17f1 | 762 | py | Python | amlearn/utils/tests/test_data.py | Qi-max/amlearn | 88189519bc1079ab5085d5871169c223e0d03057 | [
"BSD-3-Clause-LBNL"
] | 12 | 2019-02-07T16:45:29.000Z | 2021-03-15T12:44:07.000Z | amlearn/utils/tests/test_data.py | Qi-max/amlearn | 88189519bc1079ab5085d5871169c223e0d03057 | [
"BSD-3-Clause-LBNL"
] | 2 | 2018-11-22T04:59:10.000Z | 2019-12-05T14:22:29.000Z | amlearn/utils/tests/test_data.py | Qi-max/amlearn | 88189519bc1079ab5085d5871169c223e0d03057 | [
"BSD-3-Clause-LBNL"
] | 5 | 2020-12-03T07:18:50.000Z | 2022-01-20T09:17:47.000Z | import numpy as np
from amlearn.utils.basetest import AmLearnTest
from amlearn.utils.data import get_isometric_lists
class test_data(AmLearnTest):
def setUp(self):
pass
def test_get_isometric_lists(self):
test_lists = [[1, 2, 3], [4], [5, 6], [1, 2, 3]]
isometric_lists = \
get_isometric_lists(test_lists, limit_width=80, fill_value=0)
self.assertEqual(np.array(isometric_lists).shape, (4, 80))
test_arrays = np.array([np.array([1, 2, 3]), np.array([4]),
np.array([5, 6]), np.array([1, 2, 3])])
isometric_arrays = \
get_isometric_lists(test_arrays, limit_width=80, fill_value=0)
self.assertEqual(np.array(isometric_arrays).shape, (4, 80))
| 34.636364 | 74 | 0.624672 | 108 | 762 | 4.203704 | 0.324074 | 0.10793 | 0.14978 | 0.052863 | 0.277533 | 0.23348 | 0.23348 | 0.23348 | 0.23348 | 0.23348 | 0 | 0.051724 | 0.238845 | 762 | 21 | 75 | 36.285714 | 0.731034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.125 | false | 0.0625 | 0.1875 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
30ce16a2a000257fc608a7033745bf529c0115b7 | 239 | py | Python | Ejercicios/for.py | julioh/python-chile | da7fc9cdfb03c69c36a98903b80a45c795f2c543 | [
"MIT"
] | null | null | null | Ejercicios/for.py | julioh/python-chile | da7fc9cdfb03c69c36a98903b80a45c795f2c543 | [
"MIT"
] | null | null | null | Ejercicios/for.py | julioh/python-chile | da7fc9cdfb03c69c36a98903b80a45c795f2c543 | [
"MIT"
] | null | null | null | #! /usr/bin/python
# -*- coding: iso-8859-15 -*-
n = int(input("Ingrese la cantidad de datos: "))
suma = 0
for i in range(n):
x = float(input("Ingrese el dato: "))
suma = suma + x
prom = suma / n
print("El promedio es: " ,prom) | 26.555556 | 48 | 0.585774 | 39 | 239 | 3.589744 | 0.74359 | 0.171429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037838 | 0.225941 | 239 | 9 | 49 | 26.555556 | 0.718919 | 0.188285 | 0 | 0 | 0 | 0 | 0.326425 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30d9ddb09f55d2ace753f27b29e345ba784b61c0 | 2,945 | py | Python | Mailbox/models.py | Positron11/Corkran | 53f463ca4f02e2617205bf67c123923dda2ce403 | [
"MIT"
] | null | null | null | Mailbox/models.py | Positron11/Corkran | 53f463ca4f02e2617205bf67c123923dda2ce403 | [
"MIT"
] | 3 | 2020-06-23T17:13:11.000Z | 2021-04-08T19:57:20.000Z | Mailbox/models.py | Positron11/Corkran | 53f463ca4f02e2617205bf67c123923dda2ce403 | [
"MIT"
] | null | null | null | from django.db import models
from datetime import datetime
from django.contrib import messages
from django.dispatch import receiver
from django.contrib.auth.models import User
from django.db.models.signals import post_save
from polymorphic.models import PolymorphicModel
from Blog.models import Comment, Article, Announcement
# base mail model
class Mail(PolymorphicModel):
recipient = models.ForeignKey(User, related_name="mail", on_delete=models.CASCADE)
date = models.DateTimeField(default=datetime.now, blank=True)
email_reminder = models.BooleanField(default=False)
heading = models.CharField(max_length=100)
read = models.BooleanField(default=False)
# show self as heading when queried
def __str__(self):
return self.heading
# get all children
def get_children(self):
rel_objs = self._meta.related_objects
return [getattr(self, x.get_accessor_name()) for x in rel_objs if x.model != type(self)]
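# Because Mail is polymorphic, a query such as Mail.objects.filter(recipient=user)
# yields concrete subclass instances (NewArticleMail, NewCommentMail, ...) directly.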
# new announcement mail
class NewAnnouncementMail(Mail):
announcement = models.ForeignKey(Announcement, on_delete=models.CASCADE)
# new article mail
class NewArticleMail(Mail):
article = models.ForeignKey(Article, on_delete=models.CASCADE)
# new comment mail
class NewCommentMail(Mail):
article = models.ForeignKey(Article, on_delete=models.CASCADE)
comment = models.ForeignKey(Comment, on_delete=models.CASCADE)
# announcement creation receiver
@receiver(post_save, sender=Announcement)
def new_announcement_notification(sender, instance, created, **kwargs):
# if new announcement
if created:
# send message to all users
for user in User.objects.all():
message_to_all = NewAnnouncementMail(recipient=user, heading="New Announcement.", announcement=instance)
message_to_all.save()
# article creation receiver
@receiver(post_save, sender=Article)
def new_article_notification(sender, instance, created, **kwargs):
# if new article
if created:
# send message to all subscribed users
for profile in instance.author.subscribed.all():
message_to_subscribed = NewArticleMail(recipient=profile.user, heading=f"New Article by {instance.author.username}.", article=instance)
message_to_subscribed.save()
# comment creation receiver
@receiver(post_save, sender=Comment)
def new_comment_notification(sender, instance, created, **kwargs):
# if new comment
if created:
# if the comment is a reply
if instance.parent:
if instance.author != instance.parent.author:
message_to_comment_author = NewCommentMail(recipient=instance.parent.author, heading="New Reply to Your Comment.", article=instance.article, comment=instance)
message_to_comment_author.save()
# send message to author of article if comment is not by same author
elif instance.author != instance.article.author:
message_to_article_author = NewCommentMail(recipient=instance.article.author, heading="New Comment on Your Article.", article=instance.article, comment=instance)
message_to_article_author.save()
| 35.914634 | 164 | 0.788795 | 388 | 2,945 | 5.860825 | 0.260309 | 0.043536 | 0.030783 | 0.046174 | 0.230871 | 0.218997 | 0.146878 | 0.048373 | 0.048373 | 0 | 0 | 0.00116 | 0.121562 | 2,945 | 81 | 165 | 36.358025 | 0.877851 | 0.137861 | 0 | 0.104167 | 0 | 0 | 0.046392 | 0.010706 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104167 | false | 0 | 0.166667 | 0.020833 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
30dc1db20603dba0a6581e8467b5ea5697262a2a | 7,400 | py | Python | archive/extras/cat_dog_estimator/main.py | Pandinosaurus/tensorflow-workshop | 31c7ca9f9248d00da37a13ded55eada4c493c463 | [
"Apache-2.0"
] | 796 | 2016-09-27T17:35:54.000Z | 2021-08-23T06:15:03.000Z | archive/extras/cat_dog_estimator/main.py | Pandinosaurus/tensorflow-workshop | 31c7ca9f9248d00da37a13ded55eada4c493c463 | [
"Apache-2.0"
] | 18 | 2016-09-27T20:36:53.000Z | 2020-08-13T12:33:15.000Z | archive/extras/cat_dog_estimator/main.py | Pandinosaurus/tensorflow-workshop | 31c7ca9f9248d00da37a13ded55eada4c493c463 | [
"Apache-2.0"
] | 328 | 2016-09-27T17:36:06.000Z | 2021-01-18T03:17:17.000Z | """
A very simplified introduction to TensorFlow using Estimator API for training a
cat vs. dog classifier from the CIFAR-10 dataset. This version is intentionally
simplified and has a lot of room for improvement in both speed and accuracy.
Usage:
python main.py [train|predict] [predict file]
"""
import sys
import numpy as np
from PIL import Image
import tensorflow as tf
# Data file saved by extract_cats_dogs.py
DATA_FILE = 'catdog_data.npy'
NUM_IMAGES = 10000
# Model checkpoints and logs are saved here. If you want to train from scratch,
# be sure to delete everything in MODEL_DIR/ or change the directory.
MODEL_DIR = 'models'
# Some of the tunable hyperparameters are set here
LEARNING_RATE = 0.01
MOMENTUM = 0.9
TRAIN_EPOCHS = 20
BATCH_SIZE = 32
def model_fn(features, labels, mode):
"""Defines the CNN model that runs on the data.
The model we run is 3 convolutional layers followed by 1 fully connected
layer before the output. This is much simpler than most CNN models and is
designed to run decently on CPU. With a GPU, it is possible to scale to
more layers and more filters per layer.
Args:
features: batch_size x 32 x 32 x 3 uint8 images
labels: batch_size x 1 uint8 labels (0 or 1)
mode: TRAIN, EVAL, or PREDICT
Returns:
EstimatorSpec which defines the model to run
"""
# Preprocess the features by converting to floats in [-0.5, 0.5]
features = tf.cast(features, tf.float32)
features = (features / 255.0) - 0.5
# Define the CNN network
# conv1: 32 x 32 x 3 -> 32 x 32 x 16
net = tf.layers.conv2d(
inputs=features,
filters=16, # 16 channels after conv
kernel_size=3, # 3x3 conv kernel
padding='same', # Output tensor is same shape
activation=tf.nn.relu) # ReLU activation
# pool1: 32 x 32 x 16 -> 16 x 16 x 16
net = tf.layers.max_pooling2d(
inputs=net,
pool_size=2,
strides=2) # Downsample 2x
# conv2: 16 x 16 x 16 -> 16 x 16 x 32
net = tf.layers.conv2d(
inputs=net,
filters=32,
kernel_size=3,
padding='same',
activation=tf.nn.relu)
# pool2: 16 x 16 x 32 -> 8 x 8 x 32
net = tf.layers.max_pooling2d(
inputs=net,
pool_size=2,
strides=2)
# conv3: 8 x 8 x 32 -> 8 x 8 x 64
net = tf.layers.conv2d(
inputs=net,
filters=64,
kernel_size=3,
padding='same',
activation=tf.nn.relu)
# flat: 8 x 8 x 64 -> 4096
net = tf.contrib.layers.flatten(net)
# fc4: 4096 -> 1000
net = tf.layers.dense(
inputs=net,
units=1000,
activation=tf.nn.relu)
# output: 1000 -> 2
logits = tf.layers.dense(
inputs=net,
units=2)
# Softmax for probabilities
probabilities = tf.nn.softmax(logits)
predictions = tf.argmax(
input=logits,
axis=1,
output_type=tf.int32)
# Return maximum prediction if we're running PREDICT
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={
'prediction': predictions,
'probability': probabilities})
# Loss function and optimizer for training
loss = tf.losses.softmax_cross_entropy(
onehot_labels=tf.one_hot(labels, depth=2),
logits=logits)
train_op = tf.train.MomentumOptimizer(
LEARNING_RATE, MOMENTUM).minimize(
loss=loss,
global_step=tf.train.get_global_step())
# Accuracy for evaluation
eval_metric_ops = {
'accuracy': tf.metrics.accuracy(
labels=labels,
predictions=predictions)}
# EVAL uses loss and eval_metric_ops, TRAIN uses loss and train_op
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
def input_fn_wrapper(is_training):
"""Input function wrapper for training and eval.
A wrapper funcution is used because we want to have slightly different
behavior for the dataset during training (shuffle and loop data) and
evaluation (don't shuffle and run exactly once).
Args:
is_training: bool for if the model is training
Returns:
function with signature () -> features, labels
where features and labels are the same shapes expected by model_fn
"""
def input_fn():
data = np.load(DATA_FILE).item()
np_input_fn = tf.estimator.inputs.numpy_input_fn(
x={'x': data['images']},
y=data['labels'],
batch_size=BATCH_SIZE,
shuffle=is_training,
num_epochs=None if is_training else 1)
features_dict, labels = np_input_fn()
# Since the only feature is the image itself, return the image directly
# instead of the features dict
return features_dict['x'], labels
return input_fn
def process_image(image_file):
"""Convert PIL Image to a format that the network can accept.
Operations performed:
- Load image file
- Central crop square
- Resize to 32 x 32
- Convert to numpy array
Args:
image_file: str file name of image
Returns:
numpy.array image which shape [1, 32, 32, 3]
Assumes that image is RGB and at least 32 x 32.
"""
image = Image.open(image_file)
width, height = image.size
min_dim = min(width, height)
left = (width - min_dim) / 2
top = (height - min_dim) / 2
right = (width + min_dim) / 2
bottom = (height + min_dim) / 2
image = image.crop((left, top, right, bottom))
image = image.resize((32, 32), resample=Image.BILINEAR)
image = np.asarray(image, dtype=np.uint8)
image = np.reshape(image, [1, 32, 32, 3])
return image
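# Illustrative use: process_image('cat.jpg') returns a uint8 array of shape
# (1, 32, 32, 3), ready to feed the predict input function below.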
def main():
if len(sys.argv) < 2 or sys.argv[1] not in ['train', 'predict']:
print 'Usage: python main.py [train|predict] [predict file]'
sys.exit()
tf.logging.set_verbosity(tf.logging.INFO)
# Create the estimator object that is used by train, evaluate, and predict
# Note that model_fn is not called until the first usage of the model.
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=tf.estimator.RunConfig().replace(
model_dir=MODEL_DIR))
if sys.argv[1] == 'train':
steps_per_epoch = NUM_IMAGES / BATCH_SIZE
for epoch in xrange(TRAIN_EPOCHS):
estimator.train(
input_fn=input_fn_wrapper(True),
steps=steps_per_epoch)
# Evaluating on the same dataset as training for simplicity, normally
# this is a very bad idea since you are not testing how well your
# model generalizes to unseen data.
estimator.evaluate(input_fn=input_fn_wrapper(False))
else: # sys.argv[1] == 'predict'
if len(sys.argv) < 3:
print 'Usage: python main.py predict [predict file]'
sys.exit()
image = process_image(sys.argv[2])
# Define a new input function for prediction which outputs a single image
def predict_input_fn():
np_input_fn = tf.estimator.inputs.numpy_input_fn(
x={'x': image},
num_epochs=1,
shuffle=False)
features_dict = np_input_fn()
return features_dict['x']
pred_dict = estimator.predict(
input_fn=predict_input_fn).next()
print 'Probability of cat: %.5f\tProbability of dog: %.5f' % (
pred_dict['probability'][1], pred_dict['probability'][0])
print 'Prediction %s' % ('CAT' if pred_dict['prediction'] == 1 else 'DOG')
if __name__ == '__main__':
main()
| 29.365079 | 79 | 0.662838 | 1,091 | 7,400 | 4.388634 | 0.298808 | 0.023392 | 0.006266 | 0.005013 | 0.155388 | 0.118212 | 0.084378 | 0.070593 | 0.053885 | 0.037176 | 0 | 0.034686 | 0.244189 | 7,400 | 251 | 80 | 29.482072 | 0.821384 | 0.184054 | 0 | 0.259259 | 0 | 0 | 0.068823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.02963 | null | null | 0.02963 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30dc839ebe6b8f98f13b1bedf2d15538a5ad4f51 | 3,547 | py | Python | c2cgeoportal/tests/test_wfsparsing.py | pgiraud/c2cgeoportal | 3ec955c5c67d16256af726a62d586b3f4ec3b500 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | c2cgeoportal/tests/test_wfsparsing.py | pgiraud/c2cgeoportal | 3ec955c5c67d16256af726a62d586b3f4ec3b500 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | c2cgeoportal/tests/test_wfsparsing.py | pgiraud/c2cgeoportal | 3ec955c5c67d16256af726a62d586b3f4ec3b500 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2013, Camptocamp SA
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# The views and conclusions contained in the software and documentation are those
# of the authors and should not be interpreted as representing official policies,
# either expressed or implied, of the FreeBSD Project.
from unittest import TestCase
class TestWFSParsing(TestCase):
def test_is_get_feature(self):
from c2cgeoportal.lib.wfsparsing import is_get_feature
from c2cgeoportal.tests.xmlstr import getfeature
assert is_get_feature(getfeature)
def test_is_get_feature_not(self):
from c2cgeoportal.lib.wfsparsing import is_get_feature
assert not is_get_feature('<is_not>foo</is_not>')
def test_limit_featurecollection_outlimit(self):
from xml.etree.ElementTree import fromstring
from c2cgeoportal.lib.wfsparsing import limit_featurecollection
from c2cgeoportal.tests.xmlstr import featurecollection_outlimit
content = limit_featurecollection(featurecollection_outlimit)
collection = fromstring(content.encode('utf-8'))
features = collection.findall(
'{http://www.opengis.net/gml}featureMember'
)
self.assertEquals(len(features), 200)
content = limit_featurecollection(featurecollection_outlimit, limit=2)
collection = fromstring(content.encode('utf-8'))
features = collection.findall(
'{http://www.opengis.net/gml}featureMember'
)
self.assertEquals(len(features), 2)
def test_limit_featurecollection_inlimit(self):
from xml.etree.ElementTree import fromstring
from c2cgeoportal.lib.wfsparsing import limit_featurecollection
from c2cgeoportal.tests.xmlstr import featurecollection_inlimit
content = limit_featurecollection(featurecollection_inlimit)
collection = fromstring(content.encode('utf-8'))
features = collection.findall(
'{http://www.opengis.net/gml}featureMember'
)
self.assertEquals(len(features), 199)
| 47.293333 | 81 | 0.74993 | 435 | 3,547 | 6.034483 | 0.386207 | 0.054857 | 0.027429 | 0.055238 | 0.492571 | 0.447238 | 0.447238 | 0.447238 | 0.447238 | 0.408381 | 0 | 0.009368 | 0.187482 | 3,547 | 74 | 82 | 47.932432 | 0.901457 | 0.43135 | 0 | 0.5 | 0 | 0 | 0.079277 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 1 | 0.105263 | false | 0 | 0.342105 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
30e3bdfd7871899ee3a520a4d40bafa5df32a895 | 1,059 | py | Python | app/ac_common/testrunner.py | marwahaha/allecena | b8f8a15ca0dbc80e745febf0e81263ec197e7363 | [
"Apache-2.0"
] | 3 | 2018-04-29T15:40:37.000Z | 2020-04-15T20:37:08.000Z | app/ac_common/testrunner.py | marwahaha/allecena | b8f8a15ca0dbc80e745febf0e81263ec197e7363 | [
"Apache-2.0"
] | 1 | 2019-10-30T20:35:46.000Z | 2019-10-30T20:35:46.000Z | app/ac_common/testrunner.py | marwahaha/allecena | b8f8a15ca0dbc80e745febf0e81263ec197e7363 | [
"Apache-2.0"
] | 2 | 2019-08-04T02:54:22.000Z | 2021-03-03T21:03:11.000Z | # coding: utf-8
import os
from django.test.runner import DiscoverRunner
from django.conf import settings
class AcTestRunner(DiscoverRunner):
def __init__(self, **kwargs):
super(AcTestRunner, self).__init__(**kwargs)
        self.no_db = kwargs.get('no_db') or False
os.environ['DJANGO_LIVE_TEST_SERVER_ADDRESS'] = settings.DJANGO_LIVE_TEST_SERVER_ADDRESS
    def setup_databases(self, **kwargs):
        if not self.no_db:
            return super(AcTestRunner, self).setup_databases(**kwargs)
    def teardown_databases(self, old_config, **kwargs):
        if not self.no_db:
            return super(AcTestRunner, self).teardown_databases(old_config, **kwargs)
@classmethod
def add_arguments(cls, parser):
super(AcTestRunner, cls).add_arguments(parser)
parser.add_argument('-n', '--no-db', action='store_true', dest='no_db', default=False,
help='Do not use DB for testing')
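# Usage sketch (assumption: this module is importable as ac_common.testrunner,
# matching the repo layout). In settings.py:
#   TEST_RUNNER = 'ac_common.testrunner.AcTestRunner'
# then skip database creation entirely with:
#   python manage.py test --no-db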
| 32.090909 | 98 | 0.653447 | 133 | 1,059 | 4.962406 | 0.421053 | 0.042424 | 0.095455 | 0.039394 | 0.287879 | 0.154545 | 0.154545 | 0.154545 | 0.154545 | 0.154545 | 0 | 0.001236 | 0.236072 | 1,059 | 32 | 99 | 33.09375 | 0.814586 | 0.012276 | 0 | 0.26087 | 0 | 0 | 0.086207 | 0.029693 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0.086957 | 0.130435 | 0 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
30e7d355b3ba63a871cabdf99afcb8758eaf40d1 | 1,157 | py | Python | cogs/restart.py | snoringninja/niftybot-discord | b3f7e92e2be3fda06e87ceb00a65b8dc85eec67c | [
"MIT"
] | null | null | null | cogs/restart.py | snoringninja/niftybot-discord | b3f7e92e2be3fda06e87ceb00a65b8dc85eec67c | [
"MIT"
] | 11 | 2018-06-06T19:01:08.000Z | 2019-07-29T14:55:03.000Z | cogs/restart.py | snoringninja/niftybot-discord | b3f7e92e2be3fda06e87ceb00a65b8dc85eec67c | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
restart.py
@author xNifty
@site - https://snoring.ninja
Restart the python process
I wouldn't recommend using this
"""
import os
import sys
from discord.ext import commands
from resources.config import ConfigLoader
class Restart():
"""Restart()
Restart the bot python process; I wouldn't recommend using
this in its current state
"""
def __init__(self, bot):
self.bot = bot
self.owner_id = ConfigLoader().load_config_setting_int('BotSettings', 'owner_id')
@commands.command(pass_context=True, no_pm=True)
async def restart(self, ctx):
"""Handles calling the restart process
if invoked by the bot owner
"""
user_id = ctx.message.author.id
if int(user_id) == self.owner_id:
await self.bot.say("Restarting!")
await self.bot.logout()
await self.restart_process()
async def restart_process(self):
"""Restart the python process
"""
os.execv(sys.executable, ['python'] + sys.argv)
def setup(bot):
"""This makes it so we can actually use it."""
bot.add_cog(Restart(bot))
| 24.104167 | 89 | 0.640449 | 154 | 1,157 | 4.701299 | 0.5 | 0.038674 | 0.044199 | 0.063536 | 0.107735 | 0.107735 | 0.107735 | 0.107735 | 0 | 0 | 0 | 0.001144 | 0.244598 | 1,157 | 47 | 90 | 24.617021 | 0.827231 | 0.237684 | 0 | 0 | 0 | 0 | 0.049724 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0.052632 | 0.210526 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
30e94968db2242192ef6d7100f4410fa00a4184b | 2,433 | py | Python | webapp/models.py | JagerCox/InverseRelation | 721c739e542ff26cd14e393227c2a5702a79093c | [
"MIT"
] | null | null | null | webapp/models.py | JagerCox/InverseRelation | 721c739e542ff26cd14e393227c2a5702a79093c | [
"MIT"
] | null | null | null | webapp/models.py | JagerCox/InverseRelation | 721c739e542ff26cd14e393227c2a5702a79093c | [
"MIT"
] | null | null | null | from django.db import models
from phonenumber_field.modelfields import PhoneNumberField
class Contact(models.Model):
name = models.CharField(max_length=25, help_text="Example: John", null=False, blank=False)
surname = models.CharField(max_length=100, help_text="Example: Doe", null=False, blank=False)
nick_name = models.CharField(max_length=25, help_text="Example: J4Nthng", null=True, blank=True)
alias = models.CharField(max_length=25, help_text="Example: Jonny", null=True, blank=True)
    place = models.CharField(max_length=512, help_text="Example: We met two years ago in a congress about...",
null=False, blank=False)
birth_date = models.DateField(help_text="Format YYYY/MM/DD Ex: 2018/06/30", null=True, blank=True)
phone_number_one = PhoneNumberField(help_text="Example: +34611111111", null=False, blank=False)
phone_number_two = PhoneNumberField(help_text="Example: +34622222222", null=True, blank=True)
phone_number_three = PhoneNumberField(help_text="Example: +34633333333", null=True, blank=True)
email_one = models.EmailField(help_text="Example: johndoe@mail.com", null=True, blank=True)
email_two = models.EmailField(help_text="Example: johndoe@yahoo.com", null=True,
blank=True)
email_three = models.EmailField(help_text="Example: johndoe@gmail.com", null=True,
blank=True)
email_four = models.EmailField(help_text="Example: johndoe@proton.com", null=True,
blank=True)
telegram_user = models.CharField(max_length=30, null=True, blank=True, help_text="Example: @johndoe")
github_user = models.CharField(max_length=30, null=True, blank=True, help_text="Example: johndoe")
bitbucket_user = models.CharField(max_length=30, null=True, blank=True, help_text="Example: johndoe")
facebook_user = models.CharField(max_length=30, null=True, blank=True, help_text="Example: johndoe")
pinterest_user = models.CharField(max_length=30, null=True, blank=True, help_text="Example: johndoe")
twitter_user = models.CharField(max_length=30, null=True, blank=True, help_text="Example: @johndoe")
additional_data = models.CharField(max_length=100, null=True, blank=True, help_text="Example: Doe")
def __str__(self):
return 'Name/Surname/Nick({}/{}/{})'.format(self.name, self.surname, self.nick_name)
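# Usage sketch (hypothetical values) via the standard Django ORM:
#   Contact.objects.create(name='John', surname='Doe',
#                          place='We met two years ago in a congress',
#                          phone_number_one='+34611111111')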
| 65.756757 | 113 | 0.709001 | 324 | 2,433 | 5.148148 | 0.256173 | 0.095923 | 0.170863 | 0.16307 | 0.574341 | 0.515588 | 0.345923 | 0.326739 | 0.302158 | 0.248201 | 0 | 0.033873 | 0.162762 | 2,433 | 36 | 114 | 67.583333 | 0.784978 | 0 | 0 | 0.103448 | 0 | 0 | 0.183313 | 0.011097 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.068966 | 0.034483 | 0.862069 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30eaa2105d07e43d9a8ec898704e63e5994f330e | 12,680 | py | Python | MyUtils/ImageProcessing.py | mairob/Semantic-segmentation-and-Depth-estimation | d9624cdbde000a0c41e1025f89aa6edfdf947045 | [
"MIT"
] | 6 | 2018-06-15T21:18:58.000Z | 2021-07-05T08:41:21.000Z | MyUtils/ImageProcessing.py | mairob/Semantic-segmentation-and-Depth-estimation | d9624cdbde000a0c41e1025f89aa6edfdf947045 | [
"MIT"
] | null | null | null | MyUtils/ImageProcessing.py | mairob/Semantic-segmentation-and-Depth-estimation | d9624cdbde000a0c41e1025f89aa6edfdf947045 | [
"MIT"
] | 4 | 2018-06-15T21:19:08.000Z | 2021-07-05T08:41:23.000Z | #! /usr/bin/python3
#############################################################
### Helper File for TFRecords and Image manipulation ########
#############################################################
import tensorflow as tf
import numpy as np
## Label mapping for Cityscapes (34 classes)
Cityscapes34_ID_2_RGB = [(0,0,0), (0,0,0), (0,0,0), (0,0,0), (0,0,0)
                       # 0=unlabeled, 1=ego vehicle, 2=rectification border, 3=out of roi, 4=static
,(111,74,0),(81,0,81),(128,64,128),(244,35,232),(250,170,160)
# 5=dynamic, 6=ground, 7=road, 8=sidewalk, 9=parking
,(230,150,140), (70,70,70), (102,102,156),(190,153,153),(180,165,180)
# 10=rail track, 11=building, 12=wall, 13=fence, 14=guard rail
,(150,100,100),(150,120, 90),(153,153,153),(153,153,153),(250,170, 30)
# 15= bridge, 16=tunnel, 17=pole, 18=polegroup, 19=traffic light
,(220,220,0),(107,142,35),(152,251,152),(70,130,180),(220,20,60)
# 20=traffic sign 21=vegetation, 22=terrain, 23=sky, 24=person
,(255,0,0),(0,0,142),(0,0,70),(0,60,100),(0,0,90), (0,0,110), (0,80,100), (0,0,230), (119, 11, 32)]
                       # 25=rider, 26=car, 27=truck, 28=bus, 29=caravan, 30=trailer, 31=train, 32=motorcycle, 33=bicycle
## Label mapping for Cityscapes (19 classes + '255'=wildcard)
Cityscapes20_ID_2_RGB = [(128,64,128),(244,35,232), (70,70,70), (102,102,156),(190,153,153)
#0=road, 1=sidewalk, 2=building, 3=wall, 4=fence
,(153,153,153), (250,170, 30), (220,220,0),(107,142,35),(152,251,152),(70,130,180),(220,20,60)
# 5= pole, 6=traffic light, 7= traffic sign, 8= vegetation,9= terrain, 10=sky, 11=person
,(255,0,0),(0,0,142),(0,0,70),(0,60,100), (0,80,100), (0,0,230), (119, 11, 32), (255,255,255)]
# 12=rider, 13=car, 14=truck, 15=bus, 16=train, 17=motorcycle, 18=bicycle, #255 --cast via tf.minimum
Pred_2_ID = [7, 8, 11, 12, 13
#0=road, 1=sidewalk, 2=building, 3=wall, 4=fence
,17 , 19, 20, 21, 22, 23, 24
# 5= pole, 6=traffic light, 7= traffic sign, 8= vegetation,9= terrain, 10=sky, 11=person
,25 , 26, 27, 28, 31, 32, 33, -1]
# 12=rider, 13=car, 14=truck, 15=bus, 16=train, 17=motorcycle, 18=bicycle, #255 --cast via tf.minimum
##################################################################################
################## Functions for Image Preprocessing #############################
##################################################################################
def read_and_decode(filename_queue, hasDisparity=False, constHeight=1024, constWidth=1024):
"""Decode images from TF-Records Bytestream. TF-Record must be compiled with the "make_tf_record.py"-script!
Args:
filename_queue: String representation of TF-Records (returned from tf.train.string_input_producer([TFRECORD_FILENAME])
        hasDisparity: Boolean, set True when the record also contains a disparity map
        constHeight, constWidth: Expected shapes of the images to decode
    Returns:
        Decoded image and mask (plus the disparity map when hasDisparity is True)
    """
with tf.name_scope("Input_Decoder"):
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
if not hasDisparity:
features = tf.parse_single_example(
serialized_example,
features={
'image_raw': tf.FixedLenFeature([], tf.string),
'mask_raw': tf.FixedLenFeature([], tf.string)
})
image = tf.decode_raw(features['image_raw'], tf.uint8)
annotation = tf.decode_raw(features['mask_raw'], tf.uint8)
image_shape = tf.stack([constHeight, constWidth, 3])
annotation_shape = tf.stack([constHeight, constWidth, 1])
image = tf.reshape(image, image_shape)
annotation = tf.reshape(annotation, annotation_shape)
return image, annotation
else:
features = tf.parse_single_example(
serialized_example,
features={
'image_raw': tf.FixedLenFeature([], tf.string),
'mask_raw': tf.FixedLenFeature([], tf.string),
'disp_raw': tf.FixedLenFeature([], tf.string)
})
image = tf.decode_raw(features['image_raw'], tf.uint8)
annotation = tf.decode_raw(features['mask_raw'], tf.uint8)
            disparity = tf.decode_raw(features['disp_raw'], tf.int16)  # disparity is stored as 16-bit integers
image_shape = tf.stack([constHeight, constWidth, 3])
masks_shape = tf.stack([constHeight, constWidth, 1])
image = tf.reshape(image, image_shape)
annotation = tf.reshape(annotation, masks_shape)
disparity = tf.reshape(disparity, masks_shape)
return image, annotation, disparity
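# Usage sketch for the TF1 queue-based pipeline this decoder targets
# (assumption: 'train.tfrecords' was produced by the make_tf_record.py script):
#   filename_queue = tf.train.string_input_producer(['train.tfrecords'])
#   image, annotation = read_and_decode(filename_queue)
#   image_batch, annotation_batch = tf.train.shuffle_batch(
#       [image, annotation], batch_size=4, capacity=64, min_after_dequeue=16)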
def decode_labels(mask, num_images=1, num_classes=20, label=Cityscapes20_ID_2_RGB):
"""Decode batch of segmentation masks.
Args:
mask: result of inference after taking argmax.
num_images: number of images to decode from the batch.
num_classes: number of classes to predict (including background).
label: List, which value to assign for different classes
Returns:
A batch with num_images RGB images of the same size as the input.
"""
from PIL import Image
n, h, w, c = mask.shape
assert(n >= num_images), 'Batch size %d should be greater or equal than number of images to save %d.' % (n, num_images)
outputs = np.zeros((num_images, h, w, 3), dtype=np.uint8)
for i in range(num_images):
img = Image.new('RGB', (len(mask[i, 0]), len(mask[i])))
pixels = img.load()
for j_, j in enumerate(mask[i, :, :, 0]):
for k_, k in enumerate(j):
if k < num_classes:
pixels[k_,j_] = label[k]
outputs[i] = np.array(img)
return outputs
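# Usage sketch (assumption: `pred` is the [N, H, W, 1] integer output of
# tf.argmax over the logits, already evaluated to a numpy array):
#   color_masks = decode_labels(pred, num_images=1)  # -> [1, H, W, 3] uint8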
def apply_with_random_selector(x, func, num_cases):
    """Computes func(x, sel), with sel sampled uniformly from [0, num_cases - 1]."""
    from tensorflow.python.ops import control_flow_ops
with tf.name_scope("Random_Selector"):
sel = tf.random_uniform([], maxval=num_cases, dtype=tf.int32)
# Pass the real x only to one of the func calls.
return control_flow_ops.merge([
func(control_flow_ops.switch(x, tf.equal(sel, case))[1], case)
for case in range(num_cases)])[0]
def distort_color(image, color_ordering=0, fast_mode=True, scope=None):
    """Distorts brightness, saturation, hue and contrast of a float image in
    [0, 1]; the order of the operations is selected by `color_ordering`."""
    with tf.name_scope("Color_distortion"):
if fast_mode:
if color_ordering == 0:
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
else:
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
else:
if color_ordering == 0:
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_hue(image, max_delta=0.2)
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
elif color_ordering == 1:
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
image = tf.image.random_hue(image, max_delta=0.2)
elif color_ordering == 2:
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
image = tf.image.random_hue(image, max_delta=0.2)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
elif color_ordering == 3:
image = tf.image.random_hue(image, max_delta=0.2)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
else:
raise ValueError('color_ordering must be in [0, 3]')
# The random_* ops do not necessarily clamp.
return tf.clip_by_value(image, 0.0, 1.0)
from tensorflow.python.ops import control_flow_ops
def flip_randomly_left_right_image_with_annotation(image_tensor, annotation_tensor):
"""Flips an image randomly and applies the same to an annotation tensor.
Args:
image_tensor, annotation_tensor: 3-D-Tensors
Returns:
Flipped image and gt.
"""
random_var = tf.random_uniform(maxval=2, dtype=tf.int32, shape=[])
randomly_flipped_img = control_flow_ops.cond(pred=tf.equal(random_var, 0),
fn1=lambda: tf.image.flip_left_right(image_tensor),
fn2=lambda: image_tensor)
randomly_flipped_annotation = control_flow_ops.cond(pred=tf.equal(random_var, 0),
fn1=lambda: tf.image.flip_left_right(annotation_tensor),
fn2=lambda: annotation_tensor)
return randomly_flipped_img, randomly_flipped_annotation
def random_crop_and_pad_image_and_labels(image, sem_labels, dep_labels, size):
    """Randomly crops `image` together with both label maps (despite the
    name, no padding is applied).
    Args:
        image: A Tensor with shape [H, W, 3]
        sem_labels: A Tensor with shape [H, W, 1] (semantic segmentation)
        dep_labels: A Tensor with shape [H, W, 1] (depth/disparity)
        size: A Tensor with shape [2] giving the crop height and width.
    Returns:
        A tuple of (cropped_image, cropped_sem_label, cropped_dep_label).
    """
combined = tf.concat([image, sem_labels, dep_labels], axis=2)
print("combined : ", str(combined.get_shape()[:]))
combined_crop = tf.random_crop(combined, [size[0], size[1],5])
print("combined_crop : ", str(combined_crop.get_shape()[:]))
channels = tf.unstack(combined_crop, axis=-1)
image = tf.stack([channels[0],channels[1],channels[2]], axis=-1)
sem_label = tf.expand_dims(channels[3], axis=2)
dep_label = tf.expand_dims(channels[4], axis=2)
return image, sem_label, dep_label
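# Usage sketch; tf.concat requires all three tensors to share one dtype, so a
# common cast (e.g. to tf.float32) is assumed before calling:
#   img, sem, dep = random_crop_and_pad_image_and_labels(
#       tf.cast(image, tf.float32), tf.cast(annotation, tf.float32),
#       tf.cast(disparity, tf.float32), [512, 512])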
def preprocessImage(image, central_crop_fraction= 0.875):
with tf.name_scope("Preprocessing"):
if image.dtype != tf.float32:
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
distorted_image = apply_with_random_selector( image, lambda x, ordering: distort_color(x, ordering, fast_mode=True),num_cases=4)
image = tf.subtract(distorted_image, 0.5)
image = tf.multiply(image, 2.0)
return image
##################################################################################
################## Functions for Image Postprocessing #############################
##################################################################################
def generate_prediction_Img(mask, num_images=1, num_classes= 20, label=Pred_2_ID):
"""Decode batch of segmentation masks.
Args:
mask: result of inference after taking argmax.
num_images: number of images to decode from the batch.
num_classes: number of classes to predict (including background).
label: List, which value to assign for different classes
Returns:
A batch with num_images RGB images of the same size as the input.
"""
from PIL import Image
n, h, w, c = mask.shape
assert(n >= num_images), 'Batch size %d should be greater or equal than number of images to save %d.' % (n, num_images)
outputs = np.zeros((num_images, h, w), dtype=np.uint8)
for i in range(num_images):
img = Image.new('L', (len(mask[i, 0]), len(mask[i])))
pixels = img.load()
for j_, j in enumerate(mask[i, :, :, 0]):
for k_, k in enumerate(j):
if k < num_classes:
pixels[k_,j_] = label[k]
outputs[i] = np.array(img)
return outputs
def plot_depthmap(mask):
"""Network output as [w, h, 1]-Tensor is transformed to a heatmap for easier visual interpretation
Args:
mask: result of inference (depth = 1)
Returns:
A RGB-Image (representation of the depth prediction as heatmap
"""
import matplotlib.pyplot as plt
cmap = plt.get_cmap('hot')
gray = mask[0,:,:,0].astype(np.uint16)
divisor = np.max(gray) - np.min(gray)
if divisor != 0:
normed = (gray - np.min(gray)) / divisor
else:
normed = (gray - np.min(gray))
rgba_img = cmap(normed)
rgb_img = np.delete(rgba_img, 3,2)
return (65535 * rgb_img).astype(np.float32) | 44.181185 | 133 | 0.593612 | 1,735 | 12,680 | 4.204035 | 0.217291 | 0.007952 | 0.034549 | 0.049356 | 0.539073 | 0.516041 | 0.506992 | 0.496298 | 0.470524 | 0.459145 | 0 | 0.074469 | 0.238565 | 12,680 | 287 | 134 | 44.181185 | 0.680994 | 0 | 0 | 0.43949 | 0 | 0 | 0.040866 | 0 | 0 | 0 | 0 | 0 | 0.012739 | 0 | null | null | 0 | 0.044586 | null | null | 0.012739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30ecb86d7b4b8c0041f3054b609592352ad97efe | 9,468 | py | Python | src/config/utilities.py | ionitadaniel19/testframeworksevolution | 873b7d7e7de770b9407840f6ccd662929b5dd3b6 | [
"MIT"
] | null | null | null | src/config/utilities.py | ionitadaniel19/testframeworksevolution | 873b7d7e7de770b9407840f6ccd662929b5dd3b6 | [
"MIT"
] | null | null | null | src/config/utilities.py | ionitadaniel19/testframeworksevolution | 873b7d7e7de770b9407840f6ccd662929b5dd3b6 | [
"MIT"
] | null | null | null | '''
Created on 24.05.2014
@author: ionitadaniel19
'''
import logging.config
import os
import json
from xlsmanager import easyExcel
from constants import *
import traceback
import copy
def setup_logging(default_path='logging.json', default_level=logging.INFO,env_key='LOG_CFG'):
"""Setup logging configuration"""
path = os.path.join(os.path.dirname(os.path.abspath(__file__)),default_path)
value = os.getenv(env_key, None)
if value:
path = value
if os.path.exists(path):
with open(path, 'r') as f:
config = json.load(f)
logging.config.dictConfig(config)
else:
logging.basicConfig(level=default_level)
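# Usage sketch: call setup_logging() once at process start; the LOG_CFG
# environment variable overrides the bundled logging.json, e.g. (shell):
#   LOG_CFG=/etc/myapp/logging.json python run_tests.py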
def load_browser_driver(browser_driver_path):
"""Setup browser driver configuration"""
return os.path.join(os.path.dirname(os.path.abspath(__file__)),browser_driver_path)
def get_webdriver_selector_element(element_name):
element=None
selector=None
if element_name.startswith("css="):
element = element_name.split('=', 1)[-1]
selector= SELECTOR_CSS
elif element_name.startswith("xpath=") or element_name.startswith("//"):
element = element_name.split('=', 1)[-1]
selector= SELECTOR_XPATH
elif element_name.startswith("id="):
element = element_name.split('=', 1)[-1]
selector= SELECTOR_ID
elif element_name.startswith("link="):
element = element_name.split('=', 1)[-1]
selector= SELECTOR_LINK
elif element_name.startswith("name=") or element_name.find("=") == -1:
element = element_name.split('=', 1)[-1]
selector= SELECTOR_NAME
elif element_name.startswith("class="):
element = element_name.split('=', 1)[-1]
selector= SELECTOR_CLASS
elif element_name.startswith("tag="):
element = element_name.split('=', 1)[-1]
selector= SELECTOR_TAG
else:
raise Exception("Incorrect element %s.It should be one of type:css,xpath,id,link,name,class,tag." %element_name)
return (selector,element)
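# Usage sketch (assumption: the SELECTOR_* constants from `constants` map to
# selenium locator strategies and `driver` is a live webdriver instance):
#   selector, value = get_webdriver_selector_element('css=#login > input')
#   element = driver.find_element(selector, value)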
def get_data_driven_scenario_values(scenario_id=1,xls_file=DEF_DATA_PATH,sheet_name="Data"):
data_xls_keys_cols={'scenario':1,'login':2,'select':4}
data_driven_data={CELL_USER:'',CELL_PWD:'',CELL_ANSWER:'',CELL_EXPECTED:''}
    # create the workbook handle before the try block so the finally-close
    # below cannot hit an unbound name if the constructor raises
    xls_sheet = easyExcel(xls_file, sheet_name)
    try:
last_row=xls_sheet.get_sheet_last_row(sheet_name)
found=False
scenario_row=0
for row in range(1,last_row):
if xls_sheet.getCell(row,data_xls_keys_cols['scenario'])==scenario_id:
found=True
scenario_row=row
break
        if not found:
            raise Exception('Scenario %s not found in xls file %s sheet %s' % (scenario_id, xls_file, sheet_name))
        # stop at the first blank login value, or after at most 5 rows
        for index_row in range(scenario_row, scenario_row + 5):
            if xls_sheet.getCell(index_row, data_xls_keys_cols['login']) is None:
                break
            if xls_sheet.getCell(index_row, data_xls_keys_cols['login']) == CELL_USER:
                # the actual value is one column to the right
                data_driven_data[CELL_USER] = xls_sheet.getCell(index_row, data_xls_keys_cols['login'] + 1)
            if xls_sheet.getCell(index_row, data_xls_keys_cols['login']) == CELL_PWD:
                data_driven_data[CELL_PWD] = xls_sheet.getCell(index_row, data_xls_keys_cols['login'] + 1)
            if xls_sheet.getCell(index_row, data_xls_keys_cols['select']) == CELL_ANSWER:
                data_driven_data[CELL_ANSWER] = xls_sheet.getCell(index_row, data_xls_keys_cols['select'] + 1)
            if xls_sheet.getCell(index_row, data_xls_keys_cols['select']) == CELL_EXPECTED:
                data_driven_data[CELL_EXPECTED] = xls_sheet.getCell(index_row, data_xls_keys_cols['select'] + 1)
return data_driven_data
except Exception,ex:
print ex
return None
finally:
xls_sheet.close()
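# Usage sketch (assumption: the workbook at DEF_DATA_PATH has a "Data" sheet
# laid out as data_xls_keys_cols above expects; do_login() is hypothetical):
#   data = get_data_driven_scenario_values(scenario_id=1)
#   do_login(data[CELL_USER], data[CELL_PWD])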
def get_simple_hybrid_driven_scenario_values(scenario_id=1,xls_file=DEF_DATA_PATH,sheet_name="HybridSimple"):
data_xls_keys_cols={'scenario':1,'function':2,'parameters':3}
hybrid_driven_dict={FRAMEWORK_FUNCTIONS:'',PARAMETERS:[]}
hybrid_driven_data=[] #list of dictionaries of hybrid_driven_dict type
    xls_sheet = easyExcel(xls_file, sheet_name)
    try:
last_row=xls_sheet.get_sheet_last_row(sheet_name)
found=False
scenario_row=0
for row in range(1,last_row):
if xls_sheet.getCell(row,data_xls_keys_cols['scenario'])==scenario_id:
found=True
scenario_row=row
break
        if not found:
            raise Exception('Scenario %s not found in xls file %s sheet %s' % (scenario_id, xls_file, sheet_name))
        # stop at the first blank function cell, or after at most 5 rows
        for index_row in range(scenario_row, scenario_row + 5):
            function_cell = xls_sheet.getCell(index_row, data_xls_keys_cols['function'])
            if function_cell is None:
                break
            temp_hybrid_dict = copy.deepcopy(hybrid_driven_dict)
            if function_cell in (CELL_F_REMEMBER_ME, CELL_F_LOGIN,
                                 CELL_F_SELECT_ANSWER, CELL_F_SHOW_ANSWER):
                temp_hybrid_dict[FRAMEWORK_FUNCTIONS] = function_cell
                parameters = xls_sheet.getCell(index_row, data_xls_keys_cols['parameters'])
                if parameters is not None:
                    temp_hybrid_dict[PARAMETERS] = parameters.split("&&")
            hybrid_driven_data.append(temp_hybrid_dict)
return hybrid_driven_data
except Exception,ex:
print ex
return None
finally:
xls_sheet.close()
def get_keywords_driven_scenario_values(scenario_id=1,xls_file=DEF_DATA_PATH,sheet_name="Keyword"):
data_xls_keys_cols={'scenario':1,'action':2,'window':3,'locator':4,'parameters':5}
keyword_driven_dict={FRAMEWORK_FUNCTIONS:'',PARAMETERS:[],PAGE_WINDOW:'',LOCATOR:''}
keyword_driven_data=[] #list of dictionaries of keyword_driven_dict type
    xls_sheet = easyExcel(xls_file, sheet_name)
    try:
last_row=xls_sheet.get_sheet_last_row(sheet_name)
found=False
scenario_row=0
for row in range(1,last_row):
if xls_sheet.getCell(row,data_xls_keys_cols['scenario'])==scenario_id:
found=True
scenario_row=row
break
        if not found:
            raise Exception('Scenario %s not found in xls file %s sheet %s' % (scenario_id, xls_file, sheet_name))
        # find the row where the next scenario starts (or fall back to the last row)
        next_scenario_row = last_row
        for next_row in range(scenario_row + 1, last_row):
            if xls_sheet.getCell(next_row, data_xls_keys_cols['scenario']) is not None:
                next_scenario_row = next_row
                break
        # stop at the first blank action value or at the next scenario
        for index_row in range(scenario_row, next_scenario_row):
            if xls_sheet.getCell(index_row, data_xls_keys_cols['action']) is None:
                break
            temp_keyword_dict = copy.deepcopy(keyword_driven_dict)
            temp_keyword_dict[FRAMEWORK_FUNCTIONS] = xls_sheet.getCell(index_row, data_xls_keys_cols['action'])
            if xls_sheet.getCell(index_row, data_xls_keys_cols['window']) is not None:
                temp_keyword_dict[PAGE_WINDOW] = xls_sheet.getCell(index_row, data_xls_keys_cols['window'])
            if xls_sheet.getCell(index_row, data_xls_keys_cols['locator']) is not None:
                temp_keyword_dict[LOCATOR] = xls_sheet.getCell(index_row, data_xls_keys_cols['locator'])
            if xls_sheet.getCell(index_row, data_xls_keys_cols['parameters']) is not None:
                temp_keyword_dict[PARAMETERS] = xls_sheet.getCell(index_row, data_xls_keys_cols['parameters']).split("&&")
            keyword_driven_data.append(temp_keyword_dict)
return keyword_driven_data
except Exception,ex:
print ex
return None
finally:
xls_sheet.close()
| 47.104478 | 123 | 0.649873 | 1,239 | 9,468 | 4.613398 | 0.121065 | 0.060182 | 0.071204 | 0.097096 | 0.703464 | 0.678272 | 0.646781 | 0.619314 | 0.554059 | 0.554059 | 0 | 0.007848 | 0.246303 | 9,468 | 200 | 124 | 47.34 | 0.793161 | 0.031052 | 0 | 0.475309 | 0 | 0.006173 | 0.073982 | 0.004299 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.04321 | null | null | 0.018519 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30f171763de2f4636a1f96ffc3321d80779aff6d | 670 | py | Python | api/migrations/0001_initial.py | study-abacus/admin-site | 045168cae3edcc95a3bb068d7b1ba19a87bf3070 | [
"MIT"
] | 1 | 2020-10-19T09:26:38.000Z | 2020-10-19T09:26:38.000Z | api/migrations/0001_initial.py | study-abacus/admin-site | 045168cae3edcc95a3bb068d7b1ba19a87bf3070 | [
"MIT"
] | 10 | 2018-10-25T21:06:12.000Z | 2021-06-10T20:57:46.000Z | api/migrations/0001_initial.py | study-abacus/admin-site | 045168cae3edcc95a3bb068d7b1ba19a87bf3070 | [
"MIT"
] | 1 | 2020-10-19T08:55:16.000Z | 2020-10-19T08:55:16.000Z | # Generated by Django 2.1.2 on 2019-06-21 15:16
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='ContactQuery',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=256)),
('email', models.EmailField(max_length=254)),
('phone_number', models.CharField(max_length=20)),
('message', models.TextField()),
],
),
]
| 26.8 | 114 | 0.567164 | 67 | 670 | 5.567164 | 0.701493 | 0.072386 | 0.096515 | 0.128686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049145 | 0.301493 | 670 | 24 | 115 | 27.916667 | 0.747863 | 0.067164 | 0 | 0 | 1 | 0 | 0.070626 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
30f24f3c782ca580ac6fe9778ca672c384b9687e | 930 | py | Python | helper.py | chin-gyou/lstm | 087ae684e0f4fbd91d36140a20de2b8ce1e30790 | [
"MIT"
] | null | null | null | helper.py | chin-gyou/lstm | 087ae684e0f4fbd91d36140a20de2b8ce1e30790 | [
"MIT"
] | null | null | null | helper.py | chin-gyou/lstm | 087ae684e0f4fbd91d36140a20de2b8ce1e30790 | [
"MIT"
] | null | null | null | def num_contain(f,token):
with open(f) as fin:
lines=fin.readlines()
r=[l for l in lines if token not in l]
print(len(r))
def combine(f1, f2, w):
    """Join corresponding lines of ``f1`` and ``f2`` (space separated) into ``w``."""
    l1 = open(f1).readlines()
    l2 = open(f2).readlines()
    pair = list(zip(l1, l2))  # materialize so indexing also works on Python 3
    print(pair[0])
    r = [lin1.strip() + " " + lin2.strip() + "\n" for lin1, lin2 in zip(l1, l2)]
    print(r[0])
with open(w,"w") as fout:
fout.writelines(r)
def combine_for_lstm(f1, f2, w):
    """Join line i of ``f1`` with lines i and i+400 of ``f2``; assumes ``f2``
    holds two 400-line blocks that belong to the same examples."""
    l1 = open(f1).readlines()
    l2 = open(f2).readlines()
    r = [lin1.strip() + " " + lin2.strip() + " " + lin3 for lin1, lin2, lin3 in zip(l1, l2[:400], l2[400:])]
with open(w,"w") as fout:
fout.writelines(r)
print(w)
#num_contain('../models/tf/8192-2048nd/complete.txt','<S>')
#combine('../data/generate.txt','../models/tf/biglstm/generate_all','../models/tf/biglstm/generate.txt')
combine_for_lstm('../data/generate.txt','../models/tf/biglstm/generate_all','../models/tf/biglstm/complete.txt')
| 33.214286 | 112 | 0.627957 | 154 | 930 | 3.74026 | 0.311688 | 0.069444 | 0.104167 | 0.119792 | 0.503472 | 0.4375 | 0.4375 | 0.4375 | 0.4375 | 0.329861 | 0 | 0.056747 | 0.147312 | 930 | 27 | 113 | 34.444444 | 0.669609 | 0.173118 | 0 | 0.363636 | 0 | 0 | 0.121252 | 0.08605 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a50a8c12b43e3147dc21fc80b072b9b3ccc322f0 | 583 | py | Python | src/princeton_scraper_cos_courses/__init__.py | jlumbroso/princeton-scraper-cos-courses | 9d666d827a75a60c95c713e101f45d8354e7b40f | [
"Unlicense"
] | 1 | 2021-09-16T16:28:47.000Z | 2021-09-16T16:28:47.000Z | src/princeton_scraper_cos_courses/__init__.py | jlumbroso/princeton-scraper-cos-courses | 9d666d827a75a60c95c713e101f45d8354e7b40f | [
"Unlicense"
] | null | null | null | src/princeton_scraper_cos_courses/__init__.py | jlumbroso/princeton-scraper-cos-courses | 9d666d827a75a60c95c713e101f45d8354e7b40f | [
"Unlicense"
] | null | null | null |
"""
Library to fetch and parse the public Princeton COS courses history as a
Python dictionary or JSON data source.
"""
__version__ = '1.0.0'
__author__ = "Jérémie Lumbroso <lumbroso@cs.princeton.edu>"
__all__ = [
"CosCourseInstance",
"CosCourseTerm",
"fetch_cos_courses",
]
from princeton_scraper_cos_courses.parsing import CosCourseInstance
from princeton_scraper_cos_courses.parsing import CosCourseTerm
from princeton_scraper_cos_courses.cos_courses import fetch_cos_courses
version_info = tuple(int(v) if v.isdigit() else v for v in __version__.split('.'))
| 23.32 | 82 | 0.778731 | 78 | 583 | 5.423077 | 0.564103 | 0.165485 | 0.141844 | 0.163121 | 0.274232 | 0.20331 | 0.20331 | 0 | 0 | 0 | 0 | 0.005964 | 0.137221 | 583 | 24 | 83 | 24.291667 | 0.83499 | 0.190395 | 0 | 0 | 0 | 0 | 0.209957 | 0.058442 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.272727 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a5109f1adc34ba30dd99b10be532856ef200eb92 | 268 | py | Python | tests/test_print.py | syegulalp/myjit | 7427fee86a871a9c3a45704839ef7d0249773fc0 | [
"MIT"
] | 11 | 2021-03-17T15:08:54.000Z | 2022-02-21T18:31:25.000Z | tests/test_print.py | syegulalp/myjit | 7427fee86a871a9c3a45704839ef7d0249773fc0 | [
"MIT"
] | null | null | null | tests/test_print.py | syegulalp/myjit | 7427fee86a871a9c3a45704839ef7d0249773fc0 | [
"MIT"
] | null | null | null | import unittest
from jit import jit
from jit import j_types as j
@jit
def test_print(x: j.i64):
return print(x)
class Test(unittest.TestCase):
    def test_void_zero(self):
        # the jit-compiled print appears to return the number of characters
        # written ("8\n" -> 2, "64\n" -> 3)
        self.assertEqual(test_print(8), 2)
        self.assertEqual(test_print(64), 3)
| 17.866667 | 43 | 0.697761 | 44 | 268 | 4.113636 | 0.522727 | 0.149171 | 0.143646 | 0.265193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03271 | 0.201493 | 268 | 14 | 44 | 19.142857 | 0.813084 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.3 | 0.1 | 0.7 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a51178638a04068eaacf47738aff0a5fca5d3d5f | 3,794 | py | Python | lab.py | LiuDaveLiu/dj-spineimaging | ab7a18e8698a604dfb977def4a65f9c391389f83 | [
"MIT"
] | null | null | null | lab.py | LiuDaveLiu/dj-spineimaging | ab7a18e8698a604dfb977def4a65f9c391389f83 | [
"MIT"
] | null | null | null | lab.py | LiuDaveLiu/dj-spineimaging | ab7a18e8698a604dfb977def4a65f9c391389f83 | [
"MIT"
] | 1 | 2019-02-27T15:18:43.000Z | 2019-02-27T15:18:43.000Z | import datajoint as dj
schema = dj.schema('boazmohar_lab', locals())
@schema
class Person(dj.Manual):
definition = """
username : varchar(12)
----
fullname : varchar(60)
"""
contents = [('boazmohar', 'Boaz Mohar')]
@schema
class Rig(dj.Lookup):  # This list will be ever-changing and expanding for the lab. I don't think it should be a lookup table.
definition = """
rig : varchar(16)
---
room : varchar(20) # example 2w.342
rig_description : varchar(1024)
"""
contents = [('Spine2P', '2c.382', '3D resonant high NA 2P microscope for dendrite and spine imaging')]
@schema
class AnimalSource(dj.Lookup):
definition = """
animal_source : varchar(30)
"""
contents = zip(['Jackson Labs', 'Charles River', 'MMRRC', 'Taconic', 'Other'])
@schema
class Species(dj.Lookup):
definition = """
species : varchar(60)
"""
contents = zip(['mus musculus'])
@schema
class Strain(dj.Lookup):  # This list will be ever-changing and expanding for the lab. I don't think it should be a lookup table.
definition = """
# Mouse strain
strain : varchar(30) # mouse strain
"""
contents = zip(['Syt17 (NO14)', 'Chrna2 OE25', 'wt'])
@schema
class GeneModification(dj.Lookup):  # This list will be ever-changing and expanding for the lab. I don't think it should be a lookup table.
definition = """
gene_modification : varchar(60)
"""
contents = zip(['Syt17-cre', 'ACTB-tTa', 'Chrna2-cre', 'CamK2a-tTA', 'TITL-GCaMP6f'])
@schema
class Subject(dj.Manual): # I prefer animal rather than subject
definition = """
subject_id : int # institution animal ID
---
-> Species
date_of_birth : date
date_of_surgery : date
sex : enum('M','F','Unknown')
-> [nullable] AnimalSource
"""
class GeneModification(dj.Part):
definition = """
# Subject gene modifications
-> Subject
-> GeneModification
"""
class Strain(dj.Part):
definition = """
-> Subject
-> Strain
"""
@schema
class WaterRestriction(dj.Manual):
definition = """
-> Subject
water_restriction_number : varchar(16) # WR number
---
wr_start_date : date
wr_start_weight : Decimal(6,3)
"""
@schema
class VirusSource(dj.Lookup):
definition = """
virus_source : varchar(60)
"""
contents = zip(['Janelia', 'UPenn', 'Addgene', 'UNC'])
@schema
class Virus(dj.Lookup):
definition = """
virus_id : int unsigned
---
-> VirusSource
virus_name : varchar(64)
titer : Decimal(20,1)
order_date : date
remarks : varchar(256)
"""
@schema
class VirusReference(dj.Lookup):
definition = """
virus_reference : varchar(60)
"""
contents = zip(['Bregma', 'lambda'])
@schema
class Surgery(dj.Manual):
definition = """
-> Subject
surgery_id : int # surgery number
---
date_of_surgery : date
description : varchar(256)
"""
class VirusInjection(dj.Part): # I am unsure if part table entry will be enforced
definition = """
# Virus injections
-> Surgery
injection_id : int
---
-> Virus
-> VirusReference
ml_location : Decimal(8,3) # um from ref left is positive
ap_location : Decimal(8,3) # um from ref anterior is positive
dv_location : Decimal(8,3) # um from dura dorsal is positive
location_name : varchar(60)
volume : Decimal(10,3) # in nl
dilution : Decimal (10, 2) # 1 to how much
"""
| 24.960526 | 137 | 0.570374 | 417 | 3,794 | 5.122302 | 0.402878 | 0.061798 | 0.039794 | 0.037453 | 0.171348 | 0.171348 | 0.160581 | 0.136236 | 0.136236 | 0.136236 | 0 | 0.0292 | 0.304955 | 3,794 | 151 | 138 | 25.125828 | 0.780812 | 0.102003 | 0 | 0.442623 | 0 | 0 | 0.630697 | 0.013819 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.008197 | 0 | 0.295082 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a516fe5403ac004bb7f05d5813cb02f12722720e | 26,362 | py | Python | files/spam-filter/tracspamfilter/admin.py | Puppet-Finland/puppet-trac | ffdf467ba80ff995778c30b0bdc6dc3e7d4e6cd3 | [
"BSD-2-Clause"
] | null | null | null | files/spam-filter/tracspamfilter/admin.py | Puppet-Finland/puppet-trac | ffdf467ba80ff995778c30b0bdc6dc3e7d4e6cd3 | [
"BSD-2-Clause"
] | null | null | null | files/spam-filter/tracspamfilter/admin.py | Puppet-Finland/puppet-trac | ffdf467ba80ff995778c30b0bdc6dc3e7d4e6cd3 | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Copyright (C) 2015 Edgewall Software
# Copyright (C) 2015 Dirk Stöcker <trac@dstoecker.de>
# All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at http://trac.edgewall.com/license.html.
#
# This software consists of voluntary contributions made by many
# individuals. For the exact contribution history, see the revision
# history and logs, available at http://projects.edgewall.com/trac/.
#
# Author: Dirk Stöcker <trac@dstoecker.de>
import urllib2
from trac.admin import IAdminPanelProvider
from trac.config import BoolOption, IntOption
from trac.core import Component, implements
from trac.web.api import HTTPNotFound
from trac.web.chrome import (
ITemplateProvider, add_link, add_script, add_script_data, add_stylesheet)
from tracspamfilter.api import _, gettext, ngettext
from tracspamfilter.filters.akismet import AkismetFilterStrategy
from tracspamfilter.filters.blogspam import BlogSpamFilterStrategy
from tracspamfilter.filters.botscout import BotScoutFilterStrategy
from tracspamfilter.filters.fspamlist import FSpamListFilterStrategy
from tracspamfilter.filters.stopforumspam import StopForumSpamFilterStrategy
from tracspamfilter.filtersystem import FilterSystem
from tracspamfilter.model import LogEntry, Statistics
try:
from tracspamfilter.filters.bayes import BayesianFilterStrategy
except ImportError: # SpamBayes not installed
BayesianFilterStrategy = None
try:
from tracspamfilter.filters.httpbl import HttpBLFilterStrategy
from tracspamfilter.filters.ip_blacklist import IPBlacklistFilterStrategy
from tracspamfilter.filters.url_blacklist import URLBlacklistFilterStrategy
except ImportError: # DNS python not installed
HttpBLFilterStrategy = None
IPBlacklistFilterStrategy = None
URLBlacklistFilterStrategy = None
try:
from tracspamfilter.filters.mollom import MollomFilterStrategy
except ImportError: # Mollom not installed
MollomFilterStrategy = None
class SpamFilterAdminPageProvider(Component):
"""Web administration panel for configuring and monitoring the spam
filtering system.
"""
implements(ITemplateProvider)
implements(IAdminPanelProvider)
MAX_PER_PAGE = 10000
MIN_PER_PAGE = 5
DEF_PER_PAGE = IntOption('spam-filter', 'spam_monitor_entries', '100',
"How many monitor entries are displayed by default "
"(between 5 and 10000).", doc_domain='tracspamfilter')
train_only = BoolOption('spam-filter', 'show_train_only', False,
"Show the buttons for training without deleting entry.",
doc_domain='tracspamfilter')
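    # Example trac.ini snippet for the two options above (values illustrative):
    #   [spam-filter]
    #   spam_monitor_entries = 200
    #   show_train_only = true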
# IAdminPanelProvider methods
def get_admin_panels(self, req):
if 'SPAM_CONFIG' in req.perm:
yield ('spamfilter', _("Spam Filtering"),
'config', _("Configuration"))
if 'SPAM_MONITOR' in req.perm:
yield ('spamfilter', _("Spam Filtering"),
'monitor', _("Monitoring"))
def render_admin_panel(self, req, cat, page, path_info):
if page == 'config':
if req.method == 'POST':
if self._process_config_panel(req):
req.redirect(req.href.admin(cat, page))
data = self._render_config_panel(req, cat, page)
else:
if req.method == 'POST':
if self._process_monitoring_panel(req):
req.redirect(req.href.admin(cat, page,
page=req.args.getint('page'),
num=req.args.getint('num')))
if path_info:
data = self._render_monitoring_entry(req, cat, page, path_info)
page = 'entry'
else:
data = self._render_monitoring_panel(req, cat, page)
data['allowselect'] = True
data['monitor'] = True
add_script_data(req, {
'bayestext': _("SpamBayes determined spam probability "
"of %s%%"),
'sel100text': _("Select 100.00%% entries") % (),
'sel90text': _("Select >90.00%% entries") % (),
'sel10text': _("Select <10.00%% entries") % (),
'sel0text': _("Select 0.00%% entries") % (),
'selspamtext': _("Select Spam entries"),
'selhamtext': _('Select Ham entries')
})
add_script(req, 'spamfilter/adminmonitor.js')
add_script_data(req, {'toggleform': 'spammonitorform'})
add_script(req, 'spamfilter/toggle.js')
add_stylesheet(req, 'spamfilter/admin.css')
data['accmgr'] = 'ACCTMGR_USER_ADMIN' in req.perm
return 'admin_spam%s.html' % page, data
# ITemplateProvider methods
def get_htdocs_dirs(self):
from pkg_resources import resource_filename
return [('spamfilter', resource_filename(__name__, 'htdocs'))]
def get_templates_dirs(self):
from pkg_resources import resource_filename
return [resource_filename(__name__, 'templates')]
# Internal methods
def _render_config_panel(self, req, cat, page):
req.perm.require('SPAM_CONFIG')
filter_system = FilterSystem(self.env)
strategies = []
for strategy in filter_system.strategies:
for variable in dir(strategy):
if variable.endswith('karma_points'):
strategies.append({
'name': strategy.__class__.__name__,
'karma_points': getattr(strategy, variable),
'variable': variable,
'karma_help': gettext(getattr(strategy.__class__,
variable).__doc__)
})
add_script(req, 'spamfilter/adminconfig.js')
return {
'strategies': sorted(strategies, key=lambda x: x['name']),
'min_karma': filter_system.min_karma,
'authenticated_karma': filter_system.authenticated_karma,
'attachment_karma': filter_system.attachment_karma,
'register_karma': filter_system.register_karma,
'trust_authenticated': filter_system.trust_authenticated,
'logging_enabled': filter_system.logging_enabled,
'nolog_obvious': filter_system.nolog_obvious,
'purge_age': filter_system.purge_age,
'spam_monitor_entries_min': self.MIN_PER_PAGE,
'spam_monitor_entries_max': self.MAX_PER_PAGE,
'spam_monitor_entries': self.DEF_PER_PAGE
}
def _process_config_panel(self, req):
req.perm.require('SPAM_CONFIG')
spam_config = self.config['spam-filter']
min_karma = req.args.as_int('min_karma')
if min_karma is not None:
spam_config.set('min_karma', min_karma)
attachment_karma = req.args.as_int('attachment_karma')
if attachment_karma is not None:
spam_config.set('attachment_karma', attachment_karma)
register_karma = req.args.as_int('register_karma')
if register_karma is not None:
spam_config.set('register_karma', register_karma)
authenticated_karma = req.args.as_int('authenticated_karma')
if authenticated_karma is not None:
spam_config.set('authenticated_karma', authenticated_karma)
for strategy in FilterSystem(self.env).strategies:
for variable in dir(strategy):
if variable.endswith('karma_points'):
key = strategy.__class__.__name__ + '_' + variable
points = req.args.get(key)
if points is not None:
option = getattr(strategy.__class__, variable)
self.config.set(option.section, option.name, points)
logging_enabled = 'logging_enabled' in req.args
spam_config.set('logging_enabled', logging_enabled)
nolog_obvious = 'nolog_obvious' in req.args
spam_config.set('nolog_obvious', nolog_obvious)
trust_authenticated = 'trust_authenticated' in req.args
spam_config.set('trust_authenticated', trust_authenticated)
if logging_enabled:
purge_age = req.args.as_int('purge_age')
if purge_age is not None:
spam_config.set('purge_age', purge_age)
spam_monitor_entries = req.args.as_int('spam_monitor_entries',
min=self.MIN_PER_PAGE,
max=self.MAX_PER_PAGE)
if spam_monitor_entries is not None:
spam_config.set('spam_monitor_entries', spam_monitor_entries)
self.config.save()
return True
def _render_monitoring_panel(self, req, cat, page):
req.perm.require('SPAM_MONITOR')
pagenum = req.args.as_int('page', 1) - 1
pagesize = req.args.as_int('num', self.DEF_PER_PAGE,
min=self.MIN_PER_PAGE,
max=self.MAX_PER_PAGE)
total = LogEntry.count(self.env)
if total < pagesize:
pagenum = 0
elif total <= pagenum * pagesize:
pagenum = (total - 1) / pagesize
offset = pagenum * pagesize
entries = list(LogEntry.select(self.env, limit=pagesize,
offset=offset))
if pagenum > 0:
add_link(req, 'prev',
req.href.admin(cat, page, page=pagenum, num=pagesize),
_("Previous Page"))
if offset + pagesize < total:
add_link(req, 'next',
req.href.admin(cat, page, page=pagenum + 2, num=pagesize),
_("Next Page"))
return {
'enabled': FilterSystem(self.env).logging_enabled,
'entries': entries,
'offset': offset + 1,
'page': pagenum + 1,
'num': pagesize,
'total': total,
'train_only': self.train_only
}
def _render_monitoring_entry(self, req, cat, page, entry_id):
req.perm.require('SPAM_MONITOR')
entry = LogEntry.fetch(self.env, entry_id)
if not entry:
raise HTTPNotFound(_("Log entry not found"))
previous = entry.get_previous()
if previous:
add_link(req, 'prev', req.href.admin(cat, page, previous.id),
_("Log Entry %(id)s", id=previous.id))
add_link(req, 'up', req.href.admin(cat, page), _("Log Entry List"))
        next_entry = entry.get_next()
        if next_entry:
            add_link(req, 'next', req.href.admin(cat, page, next_entry.id),
                     _("Log Entry %(id)s", id=next_entry.id))
return {'entry': entry, 'train_only': self.train_only}
def _process_monitoring_panel(self, req):
req.perm.require('SPAM_TRAIN')
filtersys = FilterSystem(self.env)
spam = 'markspam' in req.args or 'markspamdel' in req.args
train = spam or 'markham' in req.args or 'markhamdel' in req.args
delete = 'delete' in req.args or 'markspamdel' in req.args or \
'markhamdel' in req.args or 'deletenostats' in req.args
deletestats = 'delete' in req.args
if train or delete:
entries = req.args.getlist('sel')
if entries:
if train:
filtersys.train(req, entries, spam=spam, delete=delete)
elif delete:
filtersys.delete(req, entries, deletestats)
if 'deleteobvious' in req.args:
filtersys.deleteobvious(req)
return True
class ExternalAdminPageProvider(Component):
"""Web administration panel for configuring the External spam filters."""
implements(IAdminPanelProvider)
# IAdminPanelProvider methods
def get_admin_panels(self, req):
if 'SPAM_CONFIG' in req.perm:
yield ('spamfilter', _("Spam Filtering"),
'external', _("External Services"))
def render_admin_panel(self, req, cat, page, path_info):
req.perm.require('SPAM_CONFIG')
data = {}
spam_config = self.config['spam-filter']
akismet = AkismetFilterStrategy(self.env)
stopforumspam = StopForumSpamFilterStrategy(self.env)
botscout = BotScoutFilterStrategy(self.env)
fspamlist = FSpamListFilterStrategy(self.env)
ip_blacklist_default = ip6_blacklist_default = \
url_blacklist_default = None
if HttpBLFilterStrategy:
ip_blacklist = IPBlacklistFilterStrategy(self.env)
ip_blacklist_default = ip_blacklist.servers_default
ip6_blacklist_default = ip_blacklist.servers6_default
url_blacklist = URLBlacklistFilterStrategy(self.env)
url_blacklist_default = url_blacklist.servers_default
mollom = 0
if MollomFilterStrategy:
mollom = MollomFilterStrategy(self.env)
blogspam = BlogSpamFilterStrategy(self.env)
if req.method == 'POST':
if 'cancel' in req.args:
req.redirect(req.href.admin(cat, page))
akismet_api_url = req.args.get('akismet_api_url')
akismet_api_key = req.args.get('akismet_api_key')
mollom_api_url = req.args.get('mollom_api_url')
mollom_public_key = req.args.get('mollom_public_key')
mollom_private_key = req.args.get('mollom_private_key')
stopforumspam_api_key = req.args.get('stopforumspam_api_key')
botscout_api_key = req.args.get('botscout_api_key')
fspamlist_api_key = req.args.get('fspamlist_api_key')
httpbl_api_key = req.args.get('httpbl_api_key')
ip_blacklist_servers = req.args.get('ip_blacklist_servers')
ip6_blacklist_servers = req.args.get('ip6_blacklist_servers')
url_blacklist_servers = req.args.get('url_blacklist_servers')
blogspam_api_url = req.args.get('blogspam_api_url')
blogspam_skip_tests = req.args.get('blogspam_skip_tests')
use_external = 'use_external' in req.args
train_external = 'train_external' in req.args
skip_external = req.args.get('skip_external')
stop_external = req.args.get('stop_external')
skip_externalham = req.args.get('skip_externalham')
stop_externalham = req.args.get('stop_externalham')
try:
verified_key = akismet.verify_key(req, akismet_api_url,
akismet_api_key)
if akismet_api_key and not verified_key:
data['akismeterror'] = 'The API key is invalid'
data['error'] = 1
except urllib2.URLError, e:
            data['akismeterror'] = e.reason[1]
data['error'] = 1
if mollom:
try:
verified_key = mollom.verify_key(req, mollom_api_url,
mollom_public_key,
mollom_private_key)
except urllib2.URLError, e:
data['mollomerror'] = e.reason[1]
data['error'] = 1
else:
if mollom_public_key and mollom_private_key and \
not verified_key:
data['mollomerror'] = 'The API keys are invalid'
data['error'] = 1
if not data.get('error', 0):
spam_config.set('akismet_api_url', akismet_api_url)
spam_config.set('akismet_api_key', akismet_api_key)
spam_config.set('mollom_api_url', mollom_api_url)
spam_config.set('mollom_public_key', mollom_public_key)
spam_config.set('mollom_private_key', mollom_private_key)
spam_config.set('stopforumspam_api_key', stopforumspam_api_key)
spam_config.set('botscout_api_key', botscout_api_key)
spam_config.set('fspamlist_api_key', fspamlist_api_key)
spam_config.set('httpbl_api_key', httpbl_api_key)
if HttpBLFilterStrategy:
if ip_blacklist_servers != ip_blacklist_default:
spam_config.set('ip_blacklist_servers',
ip_blacklist_servers)
else:
spam_config.remove('ip_blacklist_servers')
if ip6_blacklist_servers != ip6_blacklist_default:
spam_config.set('ip6_blacklist_servers',
ip6_blacklist_servers)
else:
spam_config.remove('ip6_blacklist_servers')
if url_blacklist_servers != url_blacklist_default:
spam_config.set('url_blacklist_servers',
url_blacklist_servers)
else:
spam_config.remove('url_blacklist_servers')
spam_config.set('blogspam_json_api_url',
blogspam_api_url)
spam_config.set('blogspam_json_skip_tests',
blogspam_skip_tests)
spam_config.set('use_external', use_external)
spam_config.set('train_external', train_external)
spam_config.set('skip_external', skip_external)
spam_config.set('stop_external', stop_external)
spam_config.set('skip_externalham', skip_externalham)
spam_config.set('stop_externalham', stop_externalham)
self.config.save()
req.redirect(req.href.admin(cat, page))
else:
filter_system = FilterSystem(self.env)
use_external = filter_system.use_external
train_external = filter_system.train_external
skip_external = filter_system.skip_external
stop_external = filter_system.stop_external
skip_externalham = filter_system.skip_externalham
stop_externalham = filter_system.stop_externalham
blogspam_api_url = blogspam.api_url
blogspam_skip_tests = ','.join(blogspam.skip_tests)
akismet_api_url = akismet.api_url
akismet_api_key = akismet.api_key
mollom_public_key = mollom_private_key = mollom_api_url = None
if MollomFilterStrategy:
mollom_api_url = mollom.api_url
mollom_public_key = mollom.public_key
mollom_private_key = mollom.private_key
stopforumspam_api_key = stopforumspam.api_key
botscout_api_key = botscout.api_key
fspamlist_api_key = fspamlist.api_key
httpbl_api_key = spam_config.get('httpbl_api_key')
ip_blacklist_servers = spam_config.get('ip_blacklist_servers')
ip6_blacklist_servers = spam_config.get('ip6_blacklist_servers')
url_blacklist_servers = spam_config.get('url_blacklist_servers')
if HttpBLFilterStrategy:
data['blacklists'] = 1
data['ip_blacklist_default'] = ip_blacklist_default
data['ip6_blacklist_default'] = ip6_blacklist_default
data['url_blacklist_default'] = url_blacklist_default
if MollomFilterStrategy:
data['mollom'] = 1
data['mollom_public_key'] = mollom_public_key
data['mollom_private_key'] = mollom_private_key
data['mollom_api_url'] = mollom_api_url
data['blogspam_api_url'] = blogspam_api_url
data['blogspam_skip_tests'] = blogspam_skip_tests
data['blogspam_methods'] = blogspam.getmethods()
data.update({
'akismet_api_key': akismet_api_key,
'akismet_api_url': akismet_api_url,
'httpbl_api_key': httpbl_api_key,
'stopforumspam_api_key': stopforumspam_api_key,
'botscout_api_key': botscout_api_key,
'fspamlist_api_key': fspamlist_api_key,
'use_external': use_external,
'train_external': train_external,
'skip_external': skip_external,
'stop_external': stop_external,
'skip_externalham': skip_externalham,
'stop_externalham': stop_externalham,
'ip_blacklist_servers': ip_blacklist_servers,
'ip6_blacklist_servers': ip6_blacklist_servers,
'url_blacklist_servers': url_blacklist_servers
})
add_script(req, 'spamfilter/adminexternal.js')
add_stylesheet(req, 'spamfilter/admin.css')
return 'admin_external.html', data
class BayesAdminPageProvider(Component):
"""Web administration panel for configuring the Bayes spam filter."""
if BayesianFilterStrategy:
implements(IAdminPanelProvider)
# IAdminPanelProvider methods
def get_admin_panels(self, req):
if 'SPAM_CONFIG' in req.perm:
yield 'spamfilter', _("Spam Filtering"), 'bayes', _("Bayes")
def render_admin_panel(self, req, cat, page, path_info):
req.perm.require('SPAM_CONFIG')
bayes = BayesianFilterStrategy(self.env)
hammie = bayes._get_hammie()
data = {}
if req.method == 'POST':
if 'train' in req.args:
bayes.train(None, None, req.args['bayes_content'], '127.0.0.1',
spam='spam' in req.args['train'].lower())
req.redirect(req.href.admin(cat, page))
elif 'test' in req.args:
bayes_content = req.args['bayes_content']
data['content'] = bayes_content
try:
data['score'] = hammie.score(bayes_content.encode('utf-8'))
except Exception, e:
self.log.warn('Bayes test failed: %s', e, exc_info=True)
data['error'] = unicode(e)
else:
if 'reset' in req.args:
self.log.info('Resetting SpamBayes training database')
self.env.db_transaction("DELETE FROM spamfilter_bayes")
elif 'reduce' in req.args:
self.log.info('Reducing SpamBayes training database')
bayes.reduce()
min_training = req.args.as_int('min_training')
if min_training is not None and \
min_training != bayes.min_training:
self.config.set('spam-filter', 'bayes_min_training',
min_training)
self.config.save()
min_dbcount = req.args.as_int('min_dbcount')
if min_dbcount is not None and \
min_dbcount != bayes.min_dbcount:
self.config.set('spam-filter', 'bayes_min_dbcount',
min_dbcount)
self.config.save()
req.redirect(req.href.admin(cat, page))
ratio = ''
nspam = hammie.bayes.nspam
nham = hammie.bayes.nham
if nham and nspam:
if nspam > nham:
ratio = _("(ratio %.1f : 1)") % (float(nspam) / float(nham))
else:
ratio = _("(ratio 1 : %.1f)") % (float(nham) / float(nspam))
dblines, dblines_spamonly, dblines_hamonly, dblines_reduce = \
bayes.dblines()
dblines_mixed = dblines - dblines_hamonly - dblines_spamonly
data.update({
'min_training': bayes.min_training,
'min_dbcount': bayes.min_dbcount,
'dblines': dblines,
'dblinesreducenum': dblines_reduce,
'dblinesspamonly':
ngettext("%(num)d spam", "%(num)d spam", dblines_spamonly),
'dblineshamonly':
ngettext("%(num)d ham", "%(num)d ham", dblines_hamonly),
'dblinesreduce':
ngettext("%(num)d line", "%(num)d lines", dblines_reduce),
'dblinesmixed':
ngettext("%(num)d mixed", "%(num)d mixed", dblines_mixed),
'nspam': nspam,
'nham': nham,
'ratio': ratio
})
add_script_data(req, {'hasdata': True if nham + nspam > 0 else False})
add_script(req, 'spamfilter/adminbayes.js')
add_stylesheet(req, 'spamfilter/admin.css')
return 'admin_bayes.html', data
class StatisticsAdminPageProvider(Component):
"""Web administration panel for spam filter statistics."""
implements(IAdminPanelProvider)
# IAdminPanelProvider methods
def get_admin_panels(self, req):
if 'SPAM_CONFIG' in req.perm:
yield ('spamfilter', _("Spam Filtering"),
'statistics', _("Statistics"))
def render_admin_panel(self, req, cat, page, path_info):
req.perm.require('SPAM_CONFIG')
stats = Statistics(self.env)
if req.method == 'POST':
if 'clean' in req.args:
stats.clean(req.args['strategy'])
elif 'cleanall' in req.args:
stats.cleanall()
req.redirect(req.href.admin(cat, page))
strategies, overall = stats.getstats()
data = {'strategies': strategies, 'overall': overall}
add_stylesheet(req, 'spamfilter/admin.css')
return 'admin_statistics.html', data
| 43.573554 | 80 | 0.582278 | 2,720 | 26,362 | 5.373529 | 0.130882 | 0.027778 | 0.025794 | 0.012315 | 0.409962 | 0.302545 | 0.19636 | 0.134784 | 0.114874 | 0.082033 | 0 | 0.005462 | 0.326379 | 26,362 | 604 | 81 | 43.645695 | 0.817603 | 0.029778 | 0 | 0.210744 | 0 | 0 | 0.16048 | 0.021541 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.049587 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a5179480fa77effa02a250c5278f91140c5e5c57 | 2,130 | py | Python | bin/convert_to_wdiff.py | zerogerc/wikiedits | d9c91d448254ce6f43abb977d492b0d878f6aacc | [
"Apache-2.0"
] | null | null | null | bin/convert_to_wdiff.py | zerogerc/wikiedits | d9c91d448254ce6f43abb977d492b0d878f6aacc | [
"Apache-2.0"
] | null | null | null | bin/convert_to_wdiff.py | zerogerc/wikiedits | d9c91d448254ce6f43abb977d492b0d878f6aacc | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
from difflib import SequenceMatcher
SKIP_COMMENTS = False
ONE_LINE_COMMENTS = False
def main():
err = None
cor = None
comment = ''
for line in sys.stdin:
line = line.strip()
if line.startswith('###'):
if not SKIP_COMMENTS:
comment += line + "\n"
err = None
cor = None
elif line:
if err is None:
err = line
else:
cor = line
if comment:
if ONE_LINE_COMMENTS:
print minimize_comment(comment)
else:
print comment.strip()
comment = ''
text = wdiff(err.split(), cor.split())
if text:
print text
else:
print cor
err = None
cor = None
def minimize_comment(comment):
return comment.replace("\n### ", ' ').strip()
#.replace("\n###", ',').replace('### ', '### {').strip() + '}'
def wdiff(err_toks, cor_toks):
result = ''
matcher = SequenceMatcher(None, err_toks, cor_toks)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
err = ' '.join(err_toks[i1:i2])
cor = ' '.join(cor_toks[j1:j2])
if tag == 'replace':
result += "[-{}-] {{+{}+}} ".format(err, cor)
elif tag == 'insert':
result += "{{+{}+}} ".format(cor)
elif tag == 'delete':
result += "[-{}-] ".format(err)
else:
result += err + ' '
return result.strip()
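# Example of the markup produced (deterministic, since it
# depends only on difflib):
#   wdiff("a b c".split(), "a x c".split())
#   -> 'a [-b-] {+x+} c'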
if __name__ == '__main__':
if '-h' in sys.argv or '--help' in sys.argv:
print "Example: cat enwiki.xxx.txt | perl path/to/mosesdecoder/.../tokenizer-for-wiked.perl -no-escape -skip | python convert_to_wdiff.py [--skip-comments] [--one-line-comments]"
exit()
if '--skip-comments' in sys.argv:
SKIP_COMMENTS = True
if '--one-line-comments' in sys.argv:
ONE_LINE_COMMENTS = True
main()
| 26.962025 | 186 | 0.480751 | 230 | 2,130 | 4.330435 | 0.33913 | 0.060241 | 0.075301 | 0.042169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006737 | 0.37277 | 2,130 | 78 | 187 | 27.307692 | 0.738772 | 0.044131 | 0 | 0.2 | 0 | 0.016667 | 0.142292 | 0.034585 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.05 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a5179f68c24505447773476e48e4e052fb2455ce | 7,143 | py | Python | apps/stock_members.py | ohjho/open_terminal | 35e3fdc0db65a8f91c9e7d2a8685e23a59799f47 | [
"Apache-2.0"
] | null | null | null | apps/stock_members.py | ohjho/open_terminal | 35e3fdc0db65a8f91c9e7d2a8685e23a59799f47 | [
"Apache-2.0"
] | 3 | 2021-04-20T02:37:17.000Z | 2021-08-24T07:24:53.000Z | apps/stock_members.py | ohjho/open_terminal | 35e3fdc0db65a8f91c9e7d2a8685e23a59799f47 | [
"Apache-2.0"
] | null | null | null | import os, sys, json, requests
import streamlit as st
from stqdm import stqdm
import pandas as pd
#Paths
cwdir = os.path.dirname(os.path.realpath(__file__))
sys.path.insert(1, os.path.join(cwdir, "../"))
from toolbox.st_utils import show_plotly
from toolbox.yf_utils import tickers_parser, get_stocks_obj, get_stocks_info
from toolbox.data_utils import JsonReader, JsonLookUp
STOCK_UNIVERSE = JsonReader(os.path.join(cwdir,'../data/index_definition.json'))
def get_index_members(index_name, index_dicts = STOCK_UNIVERSE, limit = None):
'''
return a list of yf tickers for the given index
references:
https://medium.com/wealthy-bytes/5-lines-of-python-to-automate-getting-the-s-p-500-95a632e5e567
https://medium.com/financial-data-analysis/step-1-web-scraping-hong-kong-hsi-stock-price-7d8606c07c57
https://tcoil.info/build-simple-stock-trading-bot-advisor-in-python/
'''
if index_name not in [d['index'] for d in index_dicts]:
return None
else:
idict = JsonLookUp(index_dicts, searchKey = 'index', searchVal = index_name)
tables = pd.read_html(
requests.get(idict['url'], headers={'User-agent': 'Mozilla/5.0'}).text,
header= 0)
df = tables[idict['df']]
# Special Handling
if index_name.startswith('^DVD'):
df['Symbol'] = df['Company'].apply( lambda x: x.split()[-1].split(":")[-1].replace(")",''))
elif index_name in ['^NDX']:
df['Symbol'] = df['Ticker']
elif index_name == '^TX60':
#TODO: remove columns with NaN
#df = df.dropna(by = ['Symbol'])
df['Symbol'] = df['Symbol'].apply(lambda x: str(x).replace('.', '-')+'.TO')
elif index_name in ['^HSI', '^HCM','^HCL', '^HSTECH', '^H35']:
df['Symbol'] = [str(s).zfill(4)+'.HK' for s in df['Code'].tolist()]
# TODO: apply limit
return df['Symbol'].tolist()
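# Illustrative usage (the members returned depend on the live
# source tables, so these values are examples only):
#   get_index_members('^HSI') -> ['0001.HK', '0002.HK', ...]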
@st.cache
def get_etf_holdings(etf_ticker, parse = False):
'''
prototype function, doesn't work yet
'''
# TODO: need chromium webdriver
# see: https://medium.com/hackernoon/python-notebook-research-to-replicate-etf-using-free-data-ca9f88eb7349
# or scrap zacks: https://stackoverflow.com/questions/64908086/using-python-to-identify-etf-holdings
ref_url = f'https://www.barchart.com/etfs-funds/quotes/{etf_ticker}/constituents?page=all'
# tables = pd.read_html(ref_url, header = {'User-Agent': 'Mozilla/5.0'})
tables = pd.read_html(requests.get(ref_url,
headers={'User-agent': 'Mozilla/5.0'}).text,
attrs={"class":"constituents"} if parse else None)
print(len(tables))
df = tables[2]
return df
def showIndices(l_indices = STOCK_UNIVERSE, st_asset = st, as_df = False):
with st_asset.beta_expander('available indices'):
if as_df:
df = pd.DataFrame(l_indices).set_index('index')
st.write(df)
else:
for i in l_indices:
st.write(f'`{i["index"]}`: [{i["name"]}]({i["url"]})')
@st.cache(suppress_st_warning=True)
def get_members_info(asset, tqdm_func = stqdm):
'''
return a list of json object containing info for each member within the asset
Args:
asset: Index name (must be in STOCK_UNIVERSE) or list of tickers
'''
l_tickers = None
if type(asset) == list:
l_tickers = asset
elif type(asset) == str:
l_tickers = get_index_members(asset)
if l_tickers:
results = get_stocks_info(" ".join(l_tickers), tqdm_func = tqdm_func)
return results
else:
return None
@st.cache
def get_members_info_df(asset, l_keys = ['symbol', 'longName']):
info_json = get_members_info(asset = asset)
df = pd.DataFrame(info_json)
return df[l_keys]
def get_index_tickers(st_asset = st.sidebar):
with st_asset:
l_indices = [d['index'] for d in STOCK_UNIVERSE]
idx = st.selectbox('Index', options = [''] + l_indices)
if idx:
l_members = get_index_members(index_name = idx)
ref_security = JsonLookUp(STOCK_UNIVERSE,
searchKey = 'index', searchVal = idx, resultKey = 'reference_security')
st.info(f'''
Found {len(l_members)} index members and reference security: {ref_security}
''')
if st.checkbox('Load members to tickers field', value = False):
return ' '.join(l_members)
else:
return ''
else:
return ''
def Main():
with st.sidebar.beta_expander("MBRS"):
st.info(f'''
Getting Indices members and ETFs holdings (coming soon)
* data by [yfinance](https://github.com/ranaroussi/yfinance)
''')
showIndices(st_asset = st.sidebar)
default_tickers = get_index_tickers(
st_asset = st.sidebar.beta_expander('Load an Index', expanded = True)
)
with st.sidebar.beta_expander('settings', expanded = False):
df_height = st.number_input("members' df height", value = 500, min_value = 200)
tickers = tickers_parser(
st.text_input("index members' tickers [space separated]",
value = default_tickers)
)
if tickers:
with st.beta_expander('display keys'):
l_col, r_col = st.beta_columns(2)
with l_col:
l_keys_des = st.multiselect('descriptive',
options = ['longName', 'previousClose','sector', 'fullTimeEmployees', 'country', 'industry', 'currency', 'exchangeTimezoneName'],
default = ['longName'])
l_keys_vol = st.multiselect('volume',
options = ['averageVolume10days', 'circulatingSupply', 'sharesOutstanding', 'sharesShort','sharesPercentSharesOut', 'floatShares', 'shortRatio', 'heldPercentInsiders', 'impliedSharesOutstanding']
)
with r_col:
l_keys_dvd = st.multiselect('dividend related',
options = ['dividendRate', 'exDividendDate', 'dividendYield', 'lastDividendDate', 'exDividendDate', 'lastDividendValue']
)
l_keys_fun = st.multiselect('fundamental',
options = ['marketCap','trailingPE','priceToSalesTrailing12Month','forwardPE', 'profileMargins', 'forwardEps','bookValue', 'priceToBook', 'payoutRatio']
)
l_keys = l_keys_des + l_keys_vol + l_keys_dvd + l_keys_fun
if len(l_keys) < 1:
st.warning(f'no key selected.')
return None
# st.subheader(f'Members of `{idx}`')
st.subheader(f'Index Members stats')
data = get_members_info_df(asset = tickers.split(), l_keys=['symbol'] + l_keys)
st.dataframe(data, height = df_height)
# TODO: ticker selector to return a space-separated string for use in other apps
if __name__ == '__main__':
Main()
| 42.266272 | 227 | 0.600308 | 854 | 7,143 | 4.85363 | 0.340749 | 0.016888 | 0.008685 | 0.01158 | 0.087334 | 0.043426 | 0.030398 | 0.01544 | 0 | 0 | 0 | 0.012624 | 0.268095 | 7,143 | 168 | 228 | 42.517857 | 0.780222 | 0.146997 | 0 | 0.132231 | 0 | 0.008264 | 0.217529 | 0.021121 | 0 | 0 | 0 | 0.017857 | 0 | 1 | 0.057851 | false | 0 | 0.057851 | 0 | 0.198347 | 0.008264 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eb48ef960c6913ce20374139a49fb2f9e16ae84a | 2,450 | py | Python | accounts/forms.py | rijalanupraj/halkapan | a1b5964034a4086a890f839ba4d3d2885a54235f | [
"MIT"
] | null | null | null | accounts/forms.py | rijalanupraj/halkapan | a1b5964034a4086a890f839ba4d3d2885a54235f | [
"MIT"
] | null | null | null | accounts/forms.py | rijalanupraj/halkapan | a1b5964034a4086a890f839ba4d3d2885a54235f | [
"MIT"
] | null | null | null | # External Import
from django.contrib.auth import get_user_model
from django import forms
from django.contrib.auth.forms import UserCreationForm, AuthenticationForm
from django.core.exceptions import ValidationError
from django.contrib import messages
from django.urls import reverse
from django.contrib.sites.shortcuts import get_current_site
from django.template.loader import render_to_string
User = get_user_model()
class UserRegistrationForm(UserCreationForm):
email = forms.EmailField(max_length=250, widget=forms.EmailInput(
attrs={'placeholder': 'Email',
'class': 'myInput'}
))
username = forms.CharField(label='Username', widget=forms.TextInput(
attrs={'placeholder': 'Username', 'class': 'myInput'}))
password1 = forms.CharField(label='Password', widget=forms.PasswordInput(
attrs={'placeholder': 'Password', 'class': 'myInput'}))
password2 = forms.CharField(label='Confirmation Password', widget=forms.PasswordInput(
attrs={'placeholder': 'Confirm Password', 'class': 'myInput'}))
class Meta:
model = User
fields = ['username', 'email', 'password1', 'password2']
def clean(self):
# guard against missing values when field-level validation has already failed
email = (self.cleaned_data.get('email') or '').lower()
username = (self.cleaned_data.get('username') or '').lower()
if User.objects.filter(email=email).exists():
raise forms.ValidationError(" Email exists")
elif User.objects.filter(username=username).exists():
raise forms.ValidationError(f"{username} Username Already Taken")
return self.cleaned_data
class LoginForm(AuthenticationForm):
username = forms.CharField(label='Username', widget=forms.TextInput(
attrs={'placeholder': 'Username or Email', 'class': 'myInput', 'id': 'username'}))
password = forms.CharField(label="Password", widget=forms.PasswordInput(
attrs={'placeholder': 'Password', 'class': 'myInput', 'id': 'password'})
)
def confirm_login_allowed(self, user):
if not user.is_active:
url = reverse("resend_verification")
URL = f"<a href='{url}'>Resend Verification URL</a>"
messages.error(
self.request, f'{URL}')
raise ValidationError(
("This account is inactive. Check your email"),
code='inactive',
)
class SendEmailVerificationForm(forms.Form):
email = forms.EmailField(required=True)
| 38.888889 | 90 | 0.672245 | 259 | 2,450 | 6.297297 | 0.355212 | 0.04905 | 0.058246 | 0.05886 | 0.232986 | 0.232986 | 0.203556 | 0.203556 | 0.203556 | 0.203556 | 0 | 0.003568 | 0.199184 | 2,450 | 62 | 91 | 39.516129 | 0.827727 | 0.006122 | 0 | 0.040816 | 0 | 0 | 0.197287 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040816 | false | 0.142857 | 0.163265 | 0 | 0.44898 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
eb4abce918b424db6423be51cf90882e0dc4decd | 4,037 | py | Python | umbra/common/protobuf/umbra_grpc.py | RafaelAPB/umbra | cf075bbe73e46540e9edee25f9ec3d0828620d5f | [
"Apache-2.0"
] | null | null | null | umbra/common/protobuf/umbra_grpc.py | RafaelAPB/umbra | cf075bbe73e46540e9edee25f9ec3d0828620d5f | [
"Apache-2.0"
] | null | null | null | umbra/common/protobuf/umbra_grpc.py | RafaelAPB/umbra | cf075bbe73e46540e9edee25f9ec3d0828620d5f | [
"Apache-2.0"
] | null | null | null | # Generated by the Protocol Buffers compiler. DO NOT EDIT!
# source: umbra.proto
# plugin: grpclib.plugin.main
import abc
import typing
import grpclib.const
import grpclib.client
if typing.TYPE_CHECKING:
import grpclib.server
import google.protobuf.struct_pb2
import google.protobuf.timestamp_pb2
from umbra.common.protobuf import umbra_pb2
class BrokerBase(abc.ABC):
@abc.abstractmethod
async def Manage(self, stream: 'grpclib.server.Stream[umbra_pb2.Config, umbra_pb2.Report]') -> None:
pass
@abc.abstractmethod
async def Measure(self, stream: 'grpclib.server.Stream[umbra_pb2.Evaluation, umbra_pb2.Status]') -> None:
pass
def __mapping__(self) -> typing.Dict[str, grpclib.const.Handler]:
return {
'/umbra.Broker/Manage': grpclib.const.Handler(
self.Manage,
grpclib.const.Cardinality.UNARY_UNARY,
umbra_pb2.Config,
umbra_pb2.Report,
),
'/umbra.Broker/Measure': grpclib.const.Handler(
self.Measure,
grpclib.const.Cardinality.UNARY_UNARY,
umbra_pb2.Evaluation,
umbra_pb2.Status,
),
}
class BrokerStub:
def __init__(self, channel: grpclib.client.Channel) -> None:
self.Manage = grpclib.client.UnaryUnaryMethod(
channel,
'/umbra.Broker/Manage',
umbra_pb2.Config,
umbra_pb2.Report,
)
self.Measure = grpclib.client.UnaryUnaryMethod(
channel,
'/umbra.Broker/Measure',
umbra_pb2.Evaluation,
umbra_pb2.Status,
)
class ScenarioBase(abc.ABC):
@abc.abstractmethod
async def Establish(self, stream: 'grpclib.server.Stream[umbra_pb2.Workflow, umbra_pb2.Status]') -> None:
pass
def __mapping__(self) -> typing.Dict[str, grpclib.const.Handler]:
return {
'/umbra.Scenario/Establish': grpclib.const.Handler(
self.Establish,
grpclib.const.Cardinality.UNARY_UNARY,
umbra_pb2.Workflow,
umbra_pb2.Status,
),
}
class ScenarioStub:
def __init__(self, channel: grpclib.client.Channel) -> None:
self.Establish = grpclib.client.UnaryUnaryMethod(
channel,
'/umbra.Scenario/Establish',
umbra_pb2.Workflow,
umbra_pb2.Status,
)
class MonitorBase(abc.ABC):
@abc.abstractmethod
async def Listen(self, stream: 'grpclib.server.Stream[umbra_pb2.Instruction, umbra_pb2.Snapshot]') -> None:
pass
def __mapping__(self) -> typing.Dict[str, grpclib.const.Handler]:
return {
'/umbra.Monitor/Listen': grpclib.const.Handler(
self.Listen,
grpclib.const.Cardinality.UNARY_UNARY,
umbra_pb2.Instruction,
umbra_pb2.Snapshot,
),
}
class MonitorStub:
def __init__(self, channel: grpclib.client.Channel) -> None:
self.Listen = grpclib.client.UnaryUnaryMethod(
channel,
'/umbra.Monitor/Listen',
umbra_pb2.Instruction,
umbra_pb2.Snapshot,
)
class AgentBase(abc.ABC):
@abc.abstractmethod
async def Probe(self, stream: 'grpclib.server.Stream[umbra_pb2.Instruction, umbra_pb2.Snapshot]') -> None:
pass
def __mapping__(self) -> typing.Dict[str, grpclib.const.Handler]:
return {
'/umbra.Agent/Probe': grpclib.const.Handler(
self.Probe,
grpclib.const.Cardinality.UNARY_UNARY,
umbra_pb2.Instruction,
umbra_pb2.Snapshot,
),
}
class AgentStub:
def __init__(self, channel: grpclib.client.Channel) -> None:
self.Probe = grpclib.client.UnaryUnaryMethod(
channel,
'/umbra.Agent/Probe',
umbra_pb2.Instruction,
umbra_pb2.Snapshot,
)
| 28.230769 | 111 | 0.600446 | 406 | 4,037 | 5.795567 | 0.179803 | 0.105397 | 0.072673 | 0.061198 | 0.684233 | 0.631959 | 0.487888 | 0.317042 | 0.317042 | 0.238844 | 0 | 0.011648 | 0.298241 | 4,037 | 142 | 112 | 28.429577 | 0.81892 | 0.025762 | 0 | 0.527778 | 1 | 0 | 0.131077 | 0.087809 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0.046296 | 0.074074 | 0.037037 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eb50c22d247814c193179eadb9b7153398a4d13c | 1,086 | py | Python | leetcode/2.Add_Two_Numbers/python/add_two_numbers_v2.py | realXuJiang/research_algorithms | 8f2876288cb607b9eddb2aa75f51a1d574b51ec4 | [
"Apache-2.0"
] | 1 | 2019-08-12T09:32:30.000Z | 2019-08-12T09:32:30.000Z | leetcode/2.Add_Two_Numbers/python/add_two_numbers_v2.py | realXuJiang/research_algorithms | 8f2876288cb607b9eddb2aa75f51a1d574b51ec4 | [
"Apache-2.0"
] | null | null | null | leetcode/2.Add_Two_Numbers/python/add_two_numbers_v2.py | realXuJiang/research_algorithms | 8f2876288cb607b9eddb2aa75f51a1d574b51ec4 | [
"Apache-2.0"
] | null | null | null | class ListNode(object):
def __init__(self, x):
self.val = x
self.next = None
def addTwoNumbers(l1, l2):
carry = 0
root = n = ListNode(0)
while l1 or l2 or carry:
v1 = v2 = 0
if l1:
v1 = l1.val
l1 = l1.next
if l2:
v2 = l2.val
l2 = l2.next
total = int(v1) + int(v2) + carry
carry, val = divmod(total, 10)
n.next = ListNode(val)
n = n.next
return root.next
def createListNode(l1):
head = ListNode(str(l1)[0])
temp = head
for val in str(l1)[1:]:
temp.next = ListNode(val)
temp = temp.next
return head
def printLN(l1):
res = ''
while l1 is not None:
res += str(l1.val) + ' -> '
l1 = l1.next
return res
l1 = createListNode(243)
l2 = createListNode(564)
print 'Input1:' + printLN(l1)
print 'Input2:' + printLN(l2)
res = addTwoNumbers(l1, l2)
print 'Output:' + printLN(res)
| 23.608696 | 44 | 0.483425 | 140 | 1,086 | 3.721429 | 0.314286 | 0.028791 | 0.065259 | 0.034549 | 0.049904 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078221 | 0.399632 | 1,086 | 45 | 45 | 24.133333 | 0.720859 | 0.025783 | 0 | 0.05 | 0 | 0 | 0.023674 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eb566541f6de184d1ac1cf427e1c154f7fe641b8 | 1,067 | py | Python | setup.py | kain88-de/numkit | 31b948b5d6f9093fbb35db98496dd69046511afe | [
"BSD-3-Clause"
] | null | null | null | setup.py | kain88-de/numkit | 31b948b5d6f9093fbb35db98496dd69046511afe | [
"BSD-3-Clause"
] | null | null | null | setup.py | kain88-de/numkit | 31b948b5d6f9093fbb35db98496dd69046511afe | [
"BSD-3-Clause"
] | null | null | null | #! /usr/bin/python
"""Setuptools-based setup script for numkit.
For a basic installation just type the command::
python setup.py install
"""
from setuptools import setup, find_packages
setup(name='numkit',
version='1.1.0-dev',
description='numerical first aid kit',
author='Oliver Beckstein',
author_email='orbeckst@gmail.com',
url = 'https://github.com/Becksteinlab/numkit',
classifiers=[
'Development Status :: 4 - Beta',
'Environment :: Console',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: BSD License',
'Operating System :: POSIX',
'Programming Language :: Python',
'Topic :: Scientific/Engineering',
'Topic :: Software Development :: Libraries :: Python Modules',
],
packages=find_packages('src'),
package_dir={'': 'src'},
scripts=[],
license='BSD',
long_description=open('README.rst').read(),
tests_require = ['numpy>=1.9', 'pytest'],
install_requires=['numpy>=1.9', 'scipy']
)
| 29.638889 | 71 | 0.614808 | 113 | 1,067 | 5.743363 | 0.752212 | 0.03698 | 0.021572 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009768 | 0.232427 | 1,067 | 35 | 72 | 30.485714 | 0.782662 | 0.12746 | 0 | 0 | 0 | 0 | 0.469122 | 0.023835 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.04 | 0 | 0.04 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eb58292318b92d933a7b5900cc0d082100be1cdb | 44,215 | py | Python | Proj029Pipelines/PipelineMetaomics.py | CGATOxford/proj029 | f0a8ea63b4f086e673aa3bf8b7d3b9749261b525 | [
"BSD-3-Clause"
] | 3 | 2016-04-04T22:54:14.000Z | 2017-04-01T09:37:54.000Z | Proj029Pipelines/PipelineMetaomics.py | CGATOxford/proj029 | f0a8ea63b4f086e673aa3bf8b7d3b9749261b525 | [
"BSD-3-Clause"
] | null | null | null | Proj029Pipelines/PipelineMetaomics.py | CGATOxford/proj029 | f0a8ea63b4f086e673aa3bf8b7d3b9749261b525 | [
"BSD-3-Clause"
] | null | null | null | ####################################################
####################################################
# functions and classes used in conjunction with
# pipeline_metaomics.py
####################################################
####################################################
# import libraries
import sys
import re
import os
import itertools
import sqlite3
import CGAT.IOTools as IOTools
import CGATPipelines.Pipeline as P
from rpy2.robjects import r as R
import pandas
import numpy as np
####################################################
####################################################
####################################################
# SECTION 1
####################################################
####################################################
####################################################
def buildDiffStats(infile, outfile, db, connection):
'''
build differential abundance statistics
at different p-value and Fold change
thresholds for each comparison
'''
tablename = P.toTable(os.path.basename(infile))
statement = "ATTACH '%(db)s' as diff;" % locals()
connection.execute(statement)
# build table of results at different thresholds
ps = [0.01, 0.05, 0.1]
fcs = [0, 0.5, 1, 1.5, 2]
# build results for each pair
pairs = [("HhaIL10R", "WT"), ("WT", "aIL10R"), ("Hh", "WT")]
outf = open(outfile, "w")
outf.write("group1\tgroup2\tadj_P_Val\tlogFC\tnumber\n")
for pair in pairs:
p1, p2 = pair[0], pair[1]
for p, fc in itertools.product(ps, fcs):
statement = """SELECT COUNT(*)
FROM diff.%(tablename)s
WHERE group1 == "%(p1)s"
AND group2 == "%(p2)s"
AND adj_P_Val < %(p)f
AND abs(logFC) > %(fc)f""" % locals()
for data in connection.execute(statement).fetchall():
outf.write("\t".join([p1, p2, str(p), str(fc), str(data[0])]) + "\n")
outf.close()
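####################################################
####################################################
####################################################
# A self-contained sketch of the thresholding performed in
# buildDiffStats, run against an in-memory table with the same
# column names (the rows here are made up for illustration,
# not pipeline output).
def example_count_at_thresholds():
    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE diff
                   (group1 TEXT, group2 TEXT,
                    adj_P_Val REAL, logFC REAL)""")
    con.executemany("INSERT INTO diff VALUES (?,?,?,?)",
                    [("HhaIL10R", "WT", 0.001, 2.5),
                     ("HhaIL10R", "WT", 0.20, 0.1),
                     ("Hh", "WT", 0.04, -1.2)])
    for p, fc in itertools.product([0.01, 0.05, 0.1], [0, 1, 2]):
        # count features passing both an FDR and an absolute
        # log2 fold-change cut-off for one comparison
        n = con.execute("""SELECT COUNT(*) FROM diff
                           WHERE group1 = ? AND group2 = ?
                           AND adj_P_Val < ? AND ABS(logFC) > ?""",
                        ("HhaIL10R", "WT", p, fc)).fetchone()[0]
        yield p, fc, n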
####################################################
####################################################
####################################################
# SECTION 2
####################################################
####################################################
####################################################
def buildCommonList(rnadb, dnadb, outfile):
'''
build a list of NOGs/genera that were found in
common after filtering between RNA and
DNA data sets
'''
# select appropriate table depending on
# whether we want genera or NOGs
if "genera" in outfile:
tablename = "genus_diamond_aggregated_counts_diff"
else:
tablename = "gene_counts_diff"
# connect to respective
# databases for RNA and DNA
dbh_rna = sqlite3.connect(rnadb)
cc_rna = dbh_rna.cursor()
dbh_dna = sqlite3.connect(dnadb)
cc_dna = dbh_dna.cursor()
# collect NOGs/genera and write to
# file
outf = open(outfile, "w")
rna = set()
dna = set()
for gene in cc_rna.execute("""
SELECT taxa
FROM %s
WHERE group1 == "HhaIL10R"
AND group2 == "WT"
""" % tablename).fetchall():
rna.add(gene[0])
for gene in cc_dna.execute("""SELECT taxa
FROM %s
WHERE group1 == "HhaIL10R"
AND group2 == "WT"
""" % tablename).fetchall():
dna.add(gene[0])
for gene in rna.intersection(dna):
outf.write(gene + "\n")
####################################################
####################################################
####################################################
def buildDiffList(db,
commonset,
outfile,
fdr=0.05,
l2fold=1,
tablename=None):
'''
build a list of differentially abundant
features (NOGs or genera) between colitis and steady state
'''
# list of common NOGs for sql statement
common = set([x[:-1] for x in open(commonset).readlines()])
common = "(" + ",".join(['"'+x+'"' for x in common]) + ")"
# connect to database
dbh = sqlite3.connect(db)
cc = dbh.cursor()
# remove any genes that are different between Hh and steady state
# or between aIL10R and steady state
hh = set([x[0] for x in cc.execute("""SELECT taxa
FROM %s \
WHERE group1 == "Hh" \
AND group2 == "WT" \
AND adj_P_Val < %f""" % (tablename, fdr)).fetchall()])
# sql list
hh = "(" + ",".join(['"'+x+'"' for x in hh]) + ")"
ail10r = set([x[0] for x in cc.execute("""SELECT taxa
FROM %s
WHERE group1 == "WT"
AND group2 == "aIL10R"
AND adj_P_Val < %f""" % (tablename, fdr)).fetchall()])
# sql list
ail10r = "(" + ",".join(['"'+x+'"' for x in ail10r]) + ")"
outf = open(outfile, "w")
for gene in cc.execute("""SELECT taxa
FROM %s
WHERE group1 == "HhaIL10R"
AND group2 == "WT"
AND adj_P_Val < %f
AND (logFC > %i OR logFC < -%i)
AND taxa IN %s
AND taxa NOT IN %s
AND taxa NOT IN %s
ORDER BY logFC DESC""" % (tablename, fdr, l2fold, l2fold, common, hh, ail10r)).fetchall():
outf.write(gene[0] + "\n")
outf.close()
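####################################################
####################################################
####################################################
# The SQL filtering in buildDiffList restated as set logic
# (a sketch for clarity; the four sets would come from the
# same queries as above).
def colitis_specific(hhail10r_vs_wt, common, hh_vs_wt, wt_vs_ail10r):
    '''
    keep features significant in HhaIL10R vs WT that are in
    the common RNA/DNA set and not already different in
    either control comparison
    '''
    return (hhail10r_vs_wt & common) - hh_vs_wt - wt_vs_ail10r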
####################################################
####################################################
####################################################
def heatmapDiffFeatures(diff_list,
matrix,
outfile):
'''
draw heatmap of differentially abundant features
'''
R('''library(gplots)''')
R('''library(gtools)''')
R('''diff <- read.csv("%s", header=F, sep="\t", stringsAsFactors=F)''' % diff_list)
R('''dat <- read.csv("%s", header=T, stringsAsFactors=F, sep="\t")''' % matrix)
R('''rownames(dat) <- dat$taxa''')
R('''dat <- dat[, 1:ncol(dat)-1]''')
R('''dat <- dat[diff[,1],]''')
R('''dat <- na.omit(dat)''')
R('''dat <- dat[, mixedsort(colnames(dat))]''')
R('''samples <- colnames(dat)''')
R('''dat <- t(apply(dat, 1, scale))''')
R('''colnames(dat) <- samples''')
R('''cols <- colorRampPalette(c("blue", "white", "red"))''')
R('''pdf("%s")''' % outfile)
R('''heatmap.2(as.matrix(dat), col = cols, scale = "row", trace = "none", Rowv = F, Colv = F, margins = c(15,15),
distfun = function(x) dist(x, method = "manhattan"),
hclustfun = function(x) hclust(x, method = "ward.D2"))''')
R["dev.off"]()
####################################################
####################################################
####################################################
def buildDiffGeneOverlap(dnafile, rnafile, outfile):
'''
overlap differentially abundant NOGs between
RNA and DNA data sets
'''
dna = set([x[:-1] for x in open(dnafile).readlines()])
rna = set([x[:-1] for x in open(rnafile).readlines()])
ndna = len(dna)
nrna = len(rna)
overlap = len(dna.intersection(rna))
outf = open(outfile, "w")
outf.write("nDNA\tnRNA\tnoverlap\n%(ndna)i\t%(nrna)i\t%(overlap)i\n" % locals())
outf.close()
####################################################
####################################################
####################################################
def testSignificanceOfOverlap(common, overlap, outfile):
'''
Test the significance of the overlap between the
RNA and DNA lists using a hypergeometric test
'''
R('''pop <- read.csv("%s", header = F, sep = "\t", stringsAsFactors = F)''' % common)
R('''overlaps <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % overlap)
# total genes in population
R('''npop <- nrow(pop)''')
# x = number of white balls picked = overlap
R('''x <- overlaps$noverlap''')
# m = total number of white balls = total diff in RNA analysis
R('''m <- overlaps$nRNA''')
# n = total number of black balls = total - diff in RNA analysis
R('''n <- npop - m''')
# k = total balls sampled = number of genera different in DNA analysis
R('''k <- overlaps$nDNA''')
# hypergeometric test
R('''p <- 1-phyper(x,m,n,k)''')
# write result
R('''res <- matrix(ncol = 2, nrow = 5)''')
R('''res[1,1] <- "x"''')
R('''res[2,1] <- "m"''')
R('''res[3,1] <- "n"''')
R('''res[4,1] <- "k"''')
R('''res[5,1] <- "p-value"''')
R('''res[1,2] <- x''')
R('''res[2,2] <- m''')
R('''res[3,2] <- n''')
R('''res[4,2] <- k''')
R('''res[5,2] <- p''')
R('''print(res)''')
R('''write.table(as.data.frame(res), file = "%s", quote = F, sep = "\t", row.names = F)''' % outfile)
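####################################################
####################################################
####################################################
# scipy counterpart of the phyper() call above (a sketch,
# assuming scipy is available; the pipeline itself uses R).
# R's 1 - phyper(x, m, n, k) is P(X > x) for a hypergeometric
# draw of k balls from m white and n black.
def overlap_pvalue(x, m, n, k):
    from scipy.stats import hypergeom
    return hypergeom.sf(x, m + n, m, k)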
####################################################
####################################################
####################################################
def scatterplotAbundanceEstimates(dnamatrix,
rnamatrix,
outfile):
'''
scatterplot abundance estimates between DNA and RNA
data sets
'''
R('''rna <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % rnamatrix)
R('''rownames(rna) <- rna$taxa''')
R('''rna <- rna[,1:ncol(rna)-1]''')
R('''dna <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % dnamatrix)
R('''rownames(dna) <- dna$taxa''')
R('''dna <- dna[,1:ncol(dna)-1]''')
# intersection of taxa/NOGs present
R('''keep <- intersect(rownames(rna), rownames(dna))''')
# get data where there is rna and dna
R('''rna <- rna[keep,]''')
R('''dna <- dna[keep,]''')
# take averages
R('''rna.ave <- data.frame(apply(rna, 1, mean))''')
R('''dna.ave <- data.frame(apply(dna, 1, mean))''')
R('''print(cor(dna.ave,rna.ave)[[1]])''')
R('''png("%s")''' % outfile)
R('''plot(dna.ave[,1],
rna.ave[,1],
pch = 16,
col = "slateGrey",
xlab = "Mean DNA abundance",
ylab = "Mean RNA abundance",
main = paste("N = ", nrow(dna.ave), sep = ""))
abline(lm(rna.ave[,1]~dna.ave[,1]))''')
R["dev.off"]()
####################################################
####################################################
####################################################
def buildDetectionOverlap(rnacounts, dnacounts, outfile):
'''
build detection overlaps between RNA and DNA
data sets
'''
R('''rna <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % rnacounts)
R('''rownames(rna) <- rna$taxa''')
R('''rna <- rna[,1:ncol(rna)]''')
R('''dna <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % dnacounts)
R('''rownames(dna) <- dna$taxa''')
R('''dna <- dna[,1:ncol(dna)]''')
R('''taxa.rna <- rownames(rna)''')
R('''taxa.dna <- rownames(dna)''')
# union of taxa across samples
R('''nrna = length(taxa.rna)''')
R('''ndna = length(taxa.dna)''')
# get overlapping
R('''noverlap = length(intersect(taxa.rna, taxa.dna))''')
R('''result = data.frame(nrna = nrna, ndna = ndna, noverlap = noverlap)''')
R('''write.table(result, file = "%s", sep = "\t", quote = F, row.names = F)''' % outfile)
####################################################
####################################################
####################################################
def plotAbundanceLevelsOfOverlap(rnacounts,
dnacounts,
outfile,
of=None):
'''
plot abundance levels of taxa/NOGs that do
and don't overlap between data sets
'''
R('''library(ggplot2)''')
# get rna reads per million
R('''rna <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % rnacounts)
R('''rownames(rna) <- rna$taxa''')
R('''rna <- rna[,2:ncol(rna)]''')
R('''rna <- sweep(rna, 2, colSums(rna)/1000000, "/")''')
# get dna reads per million
R('''dna <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % dnacounts)
R('''rownames(dna) <- dna$taxa''')
R('''dna <- dna[,2:ncol(dna)]''')
R('''dna <- sweep(dna, 2, colSums(dna)/1000000, "/")''')
# common and distinct sets
R('''common <- intersect(rownames(dna), rownames(rna))''')
R('''rna.only <- setdiff(rownames(rna), rownames(dna))''')
R('''dna.only <- setdiff(rownames(dna), rownames(rna))''')
# boxplot the abundance levels
R('''rna.common <- apply(rna[common,], 1, mean)''')
R('''dna.common <- apply(dna[common,], 1, mean)''')
R('''rna.distinct <- apply(rna[rna.only,], 1, mean)''')
R('''dna.distinct <- apply(dna[dna.only,], 1, mean)''')
if of == "genes":
# placeholder so the code still runs: there are
# no RNA-only NOGs in the gene-level analysis
R('''rna.distinct <- rep(0, 20)''')
else:
R('''rna.distinct <- rna.distinct''')
# test significance between groups
R('''wtest1 <- wilcox.test(rna.common, rna.distinct)''')
R('''wtest2 <- wilcox.test(dna.common, dna.distinct)''')
R('''wtest3 <- wilcox.test(rna.common, dna.distinct)''')
R('''wtest4 <- wilcox.test(dna.common, rna.distinct)''')
R('''wtest5 <- wilcox.test(dna.common, rna.common)''')
R('''res <- data.frame("rna.common_vs_rna.distinct" = wtest1$p.value,
"dna.common_vs_dna.distinct" = wtest2$p.value,
"rna.common_vs_dna.distinct" = wtest3$p.value,
"dna.common_vs_rna.distinct" = wtest4$p.value,
"dna.common_vs_rna.common" = wtest5$p.value)''')
outname_sig = outfile[:-4] + ".sig"
R('''write.table(res, file = "%s", row.names = F, sep = "\t", quote = F)''' % outname_sig)
# create dataframe for plotting
R('''dat <- data.frame(values = c(dna.distinct, dna.common, rna.common, rna.distinct),
status = c(rep("unique.dna", length(dna.distinct)),
rep("common.dna", length(dna.common)),
rep("common.rna", length(rna.common)),
rep("unique.rna", length(rna.distinct))))''')
R('''plot1 <- ggplot(dat, aes(x = factor(status, levels = status), y = values, stat = "identity"))''')
R('''plot1 + geom_boxplot() + scale_y_log10()''')
R('''ggsave("%s")''' % outfile)
####################################################
####################################################
####################################################
# SECTION 3
####################################################
####################################################
####################################################
def runPCA(infile, outfile):
'''
run pca analysis - this outputs
a plot coloured by condition and
also the loadings
'''
if "RNA" in infile:
suffix = "rna"
else:
suffix = "dna"
if "gene" in infile:
xlim, ylim = 40,40
else:
xlim, ylim = 12,7
outname_plot = P.snip(outfile, ".loadings.tsv").replace("/", "/%s_" % suffix) + ".pca.pdf"
R('''dat <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % infile)
R('''rownames(dat) <- dat$taxa''')
R('''dat <- dat[, 1:ncol(dat)-1]''')
R('''pc <- prcomp(t(dat))''')
R('''conds <- unlist(strsplit(colnames(dat), ".R[0-9]"))[seq(1, ncol(dat)*2, 2)]''')
R('''conds <- unlist(strsplit(conds, ".", fixed = T))[seq(2, length(conds)*2, 2)]''')
# plot the principal components
R('''library(ggplot2)''')
R('''pcs <- data.frame(pc$x)''')
R('''pcs$cond <- conds''')
# get variance explained
R('''imps <- c(summary(pc)$importance[2], summary(pc)$importance[5])''')
R('''p <- ggplot(pcs, aes(x = PC1, y = PC2, colour = cond, size = 3)) + geom_point()''')
R('''p2 <- p + xlab(imps[1]) + ylab(imps[2])''')
R('''p3 <- p2 + scale_colour_manual(values = c("slateGrey", "green", "red", "blue"))''')
R('''p3 + xlim(c(-%i, %i)) + ylim(c(-%i, %i))''' % (xlim, xlim, ylim, ylim))
R('''ggsave("%s")''' % outname_plot)
# get the loadings
R('''loads <- data.frame(pc$rotation)''')
R('''loads$taxa <- rownames(loads)''')
# write out data
R('''write.table(loads, file = "%s", sep = "\t", row.names = F, quote = F)''' % outfile.replace("/", "/%s_" % suffix))
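####################################################
####################################################
####################################################
# A numpy-only restatement of the prcomp() call in runPCA
# (a sketch for clarity, not used by the pipeline): prcomp
# on the transposed matrix centres each feature and returns
# sample scores (pc$x) and feature loadings (pc$rotation).
def pca_sketch(counts):
    '''
    counts: features x samples array; samples are treated
    as observations, as in prcomp(t(dat))
    '''
    X = counts.T - counts.T.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U * S                        # pc$x
    loadings = Vt.T                       # pc$rotation
    explained = S ** 2 / (S ** 2).sum()   # proportion of variance
    return scores, loadings, explained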
####################################################
####################################################
####################################################
def plotPCALoadings(infile, outfile):
'''
plot PCA loadings
'''
R('''library(ggplot2)''')
R('''library(grid)''')
R('''dat <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % infile)
R('''top5pc1 <- dat[order(-dat$PC1),][1:5,]''')
R('''bottom5pc1 <- dat[order(dat$PC1),][1:5,]''')
R('''top5pc2 <- dat[order(-dat$PC2),][1:5,]''')
R('''bottom5pc2 <- dat[order(dat$PC2),][1:5,]''')
R('''totext <- data.frame(rbind(top5pc1, bottom5pc1, top5pc2, bottom5pc2))''')
R('''dat$x <- 0''')
R('''dat$y <- 0''')
R('''p <- ggplot(dat, aes(x = x, y = y, xend = PC1, yend = PC2, colour = taxa))''')
R('''p2 <- p + geom_segment(arrow = arrow(length = unit(0.2, "cm")))''')
R('''p2 + geom_text(data = totext, aes(x = PC1, y = PC2, label = totext$taxa, size = 6)) + xlim(c(-0.5,0.5)) + ylim(c(-0.5,0.25))''')
R('''ggsave("%s")''' % outfile)
####################################################
####################################################
####################################################
def barchartProportions(infile, outfile):
'''
boxplot the percentage of reads mapping to each
candidate genus (one plot is saved per genus)
'''
R('''library(ggplot2)''')
R('''library(gtools)''')
R('''library(reshape)''')
R('''dat <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % infile)
R('''rownames(dat) <- dat$taxa''')
# get rid of the taxa column
R('''dat <- dat[,1:ncol(dat)-1]''')
R('''dat.percent <- data.frame(apply(dat, 2, function(x) x*100))''')
# candidate genera
R('''candidates <- c("Peptoniphilus",
"Deferribacter",
"Escherichia",
"Lactobacillus",
"Turicibacter",
"Akkermansia",
"Bifidobacterium",
"Methylacidiphilum")''')
R('''dat.percent <- dat.percent[candidates,]''')
R('''dat.percent <- dat.percent[,mixedsort(colnames(dat.percent))]''')
# add the taxa column back for reshaping
R('''dat.percent$taxa <- rownames(dat.percent)''')
# reshape and plot
outname = P.snip(outfile, ".pdf")
R('''dat.percent <- melt(dat.percent)''')
R('''conds <- unlist(strsplit(as.character(dat.percent$variable), ".R[0-9]"))[seq(1, nrow(dat.percent)*2, 2)]''')
R('''conds <- unlist(strsplit(conds, ".", fixed = T))[seq(2, length(conds)*2, 2)]''')
R('''dat.percent$cond <- conds''')
R('''for (taxon in candidates){
outname <- paste("%s", paste("_", taxon, sep=""), ".pdf", sep="")
dat.percent.restrict <- dat.percent[dat.percent$taxa==taxon,]
plot1 <- ggplot(dat.percent.restrict,
aes(x=factor(cond, levels=c("WT","aIL10R", "Hh", "HhaIL10R")),
y=value, group=cond, colour=cond, label=variable))
plot1 + geom_boxplot() + geom_jitter() + geom_text() + scale_colour_manual(values=c("darkGreen", "red", "grey", "blue"))
ggsave(outname)}''' % outname)
####################################################
####################################################
####################################################
# SECTION 4
####################################################
####################################################
####################################################
def buildRNADNARatio(dnadiff, rnadiff, outfile):
'''
build ratio of RNAfold/DNAfold
'''
R('''rna <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % rnadiff)
R('''dna <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % dnadiff)
R('''rna <- rna[rna$group1 == "HhaIL10R" & rna$group2 == "WT",]''')
R('''dna <- dna[dna$group1 == "HhaIL10R" & dna$group2 == "WT",]''')
R('''rownames(rna) <- rna$taxa''')
R('''rownames(dna) <- dna$taxa''')
R('''rna <- rna[,1:ncol(rna)-1]''')
R('''dna <- dna[,1:ncol(dna)-1]''')
# only look at those that are present in both
R('''keep <- intersect(rownames(rna), rownames(dna))''')
R('''rna <- rna[keep,]''')
R('''dna <- dna[keep,]''')
R('''rna.ratio <- rna$logFC''')
R('''dna.ratio <- dna$logFC''')
R('''rna.p <- rna$adj.P.Val''')
R('''dna.p <- dna$adj.P.Val''')
R('''ratio <- data.frame(gene = keep,
dna = dna.ratio,
rna = rna.ratio,
pdna = dna.p,
prna = rna.p,
ratio = rna.ratio - dna.ratio)''')
R('''write.table(ratio,
file = "%s",
sep = "\t",
row.names = F,
quote = F)''' % outfile)
####################################################
####################################################
####################################################
def annotateRNADNARatio(RNADNARatio,
dnalist,
rnalist,
outfile):
'''
annotate NOGs as to whether they were differentially
regulated in metagenomic, metatranscriptomic or both
data sets
'''
rna_diff = set([y[:-1] for y in open(rnalist).readlines()])
dna_diff = set([y[:-1] for y in open(dnalist).readlines()])
inf = IOTools.openFile(RNADNARatio)
inf.readline()
outf = IOTools.openFile(outfile, "w")
outf.write("gene\tdna\trna\tpdna\tprna\tratio\tstatus\n")
for line in inf.readlines():
gene, dna, rna, pdna, prna, ratio = line[:-1].split("\t")
gene = gene.strip('"')
dna, rna = float(dna), float(rna)
if gene in rna_diff and gene in dna_diff and dna > 0 and rna > 0:
status = "up.both"
elif gene in rna_diff and gene in dna_diff and dna < 0 and rna < 0:
status = "down.both"
elif gene in rna_diff and rna > 0:
status = "up.RNA"
elif gene in rna_diff and rna < 0:
status = "down.RNA"
elif gene in dna_diff and dna > 0:
status = "up.DNA"
elif gene in dna_diff and dna < 0:
status = "down.DNA"
else:
status = "NS"
outf.write("%(gene)s\t%(dna)s\t%(rna)s\t%(pdna)s\t%(prna)s\t%(ratio)s\t%(status)s\n" % locals())
outf.close()
####################################################
####################################################
####################################################
def plotSets(infile, outfile):
'''
plot the fold changes in RNA and DNA analyses
and label by how they are regulated in DNA and
RNA analyses
MUST HAVE GOI FILE IN WORKING DIR - not ideal
'''
R('''library(ggplot2)''')
# read in data
R('''dat <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % infile)
# get nog 2 gene map
R('''cog2gene <- read.csv("goi.tsv", header = F, stringsAsFactors = F, sep = "\t", row.names = 1)''')
# relabel NS so it sorts last in the colour legend
R('''dat$status[dat$status == "NS"] = "z"''')
R('''genes <- dat$gene''')
# regression model
R('''mod1 <- lm(dat$rna~dat$dna)''')
R('''intercept <- mod1[[1]][1]''')
R('''slope = mod1[[1]][2]''')
R('''print(summary(mod1))''')
# prediction intervals
R('''pred.ints <- predict(mod1, interval = "prediction", level = 0.95)''')
# add to data.frame
R('''dat$lwr <- pred.ints[,2]''')
R('''dat$upr <- pred.ints[,3]''')
# add labels
R('''dat$goi <- cog2gene[dat$gene,]''')
R('''dat$pointsize <- ifelse(!(is.na(dat$goi)), 10, 1)''')
# plot
R('''plot1 <- ggplot(dat, aes(x = dna, y = rna, alpha = 1, colour = status))''')
R('''plot2 <- plot1 + geom_point(shape = 18, aes(size = pointsize))''')
R('''plot3 <- plot2 + scale_size_area() + xlim(c(-5,5))''')
R('''plot4 <- plot3 + scale_colour_manual(values = c("blue",
"brown",
"darkGreen",
"orange",
"purple",
"red",
"grey"))''')
R('''plot5 <- plot4 + geom_abline(intercept = intercept, slope = slope)''')
# prediction intervals
R('''plot6 <- plot5 + geom_line(aes(x = dna, y = lwr), linetype = "dashed", colour = "black")''')
R('''plot7 <- plot6 + geom_line(aes(x = dna, y = upr), linetype = "dashed", colour = "black")''')
R('''plot7 + geom_text(aes(label = goi))''')
R('''ggsave("%s")''' % outfile)
####################################################
####################################################
####################################################
def buildGenesOutsidePredictionInterval(infile, outfile):
'''
annotate genes as being outside prediction
interval - these are the NOGs that we are
defining as colitis-responsive
'''
R('''dat <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % infile)
# keep the gene identifiers
R('''genes <- dat$gene''')
# regression model
R('''mod1 <- lm(dat$rna~dat$dna)''')
# prediction intervals
R('''pred.ints <- predict(mod1, interval = "prediction", level = 0.95)''')
# add to data.frame
R('''dat$lwr <- pred.ints[,2]''')
R('''dat$upr <- pred.ints[,3]''')
# annotate with whether or not they are above
# prediction intervals
R('''dat$pi_status[dat$rna > dat$upr & dat$status == "up.RNA"] <- "diff.up.rna"''')
R('''dat$pi_status[dat$rna > dat$upr & dat$status == "down.DNA"] <- "diff.down.dna"''')
R('''dat$pi_status[dat$rna > dat$upr & dat$status == "up.both"] <- "diff.up.rna"''')
R('''dat$pi_status[dat$rna < dat$lwr & dat$status == "down.RNA"] <- "diff.down.rna"''')
R('''dat$pi_status[dat$rna < dat$lwr & dat$status == "up.DNA"] <- "diff.up.dna"''')
R('''dat$pi_status[dat$rna < dat$lwr & dat$status == "down.both"] <- "diff.down.rna"''')
# write results
R('''write.table(dat, file = "%s", sep = "\t", quote = F, row.names = F)''' % outfile)
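####################################################
####################################################
####################################################
# The prediction-interval step above restated with statsmodels
# (a sketch, assuming statsmodels is available; the pipeline
# itself does this in R via predict(mod, interval="prediction")).
def prediction_interval(dna_fold, rna_fold, alpha=0.05):
    import statsmodels.api as sm
    X = sm.add_constant(np.asarray(dna_fold))
    fit = sm.OLS(np.asarray(rna_fold), X).fit()
    # obs=True requests prediction (not confidence) intervals
    ci = fit.get_prediction(X).conf_int(obs=True, alpha=alpha)
    return ci[:, 0], ci[:, 1]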
####################################################
####################################################
####################################################
# SECTION 6
####################################################
####################################################
####################################################
def buildGenusCogCountsMatrix(infile, outfile):
'''
build cog x genus proportion
matrix
'''
inf = IOTools.openFile(infile)
header = inf.readline()
result = {}
# create container for results
for line in inf.readlines():
data = line[:-1].split("\t")
cog, taxa = data[0], data[1]
if taxa == "unassigned": continue
result[cog] = {}
# get average % taxa per cog
inf = IOTools.openFile(infile)
header = inf.readline()
for line in inf.readlines():
data = line[:-1].split("\t")
if len(data) == 19:
cog, taxa = data[0], data[1]
values = map(float,data[3:])
elif len(data) == 20:
cog, taxa = data[0], data[1]
values = map(float,data[4:])
else:
cog, taxa = data[0], data[1]
values = map(float,data[2:])
if taxa == "unassigned": continue
ave = np.mean(values)
try:
result[cog][taxa] = ave
except KeyError: continue
df = pandas.DataFrame(result)
df.to_csv(outfile, sep = "\t", na_rep = 0)
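####################################################
####################################################
####################################################
# The same aggregation as a pandas sketch (assumes the first
# two columns of the input are the NOG and the genus, followed
# by per-sample counts; the variable-width handling above is
# glossed over here).
def genus_cog_matrix_sketch(infile):
    df = pandas.read_csv(infile, sep="\t")
    cog_col, taxa_col = df.columns[0], df.columns[1]
    df = df[df[taxa_col] != "unassigned"]
    df["ave"] = df.iloc[:, 2:].mean(axis=1)
    return df.pivot_table(index=taxa_col, columns=cog_col,
                          values="ave", fill_value=0)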
####################################################
####################################################
####################################################
def mergePathwaysAndGenusCogCountsMatrix(annotations,
matrix,
outfile):
'''
merge cog annotations and per taxa cog counts
'''
# read annotations
R('''anno <- read.csv("%s", header=T, stringsAsFactors=F, sep="\t", row.names=1)''' % annotations)
R('''anno.no.pathways <- anno[,1:ncol(anno)-1]''')
R('''anno.p <- sweep(anno.no.pathways, 2, colSums(anno.no.pathways), "/")''')
R('''anno.p$average <- rowMeans(anno.p)''')
R('''anno.p$pathway <- anno$taxa''')
# read matrix
R('''mat <- read.csv("%s", header=T, stringsAsFactors=F, sep="\t", row.names=1)''' % matrix)
R('''mat <- data.frame(t(mat))''')
R('''mat$ref <- rownames(mat)''')
# split pathway annotations
R('''for (pathway in unique(anno.p$pathway)){
if (pathway == "Function unknown"){next}
# some weirdness with some names
pw <- gsub("/", "_", pathway)
outname <- paste("candidate_pathways.dir", paste(pw, "tsv", sep = "."), sep="/")
outname <- gsub(" ", "_", outname)
print(outname)
anno.p2 <- anno.p[anno.p$pathway == pathway,]
anno.p2 <- anno.p2[order(anno.p2$average, decreasing=T),]
# top 10
# anno.p2 <- anno.p2[1:10,]
# merge with matrix
mat2 <- mat[rownames(anno.p2),]
mat2$pathway <- anno.p2$pathway
write.table(mat2, file=outname, sep="\t", row.names=F)}''')
####################################################
####################################################
####################################################
def plotNumberOfTaxaPerPathway(infiles, outfile):
'''
plot the average number of taxa expressing genes
in each pathway
'''
tmp = P.getTempFilename(".")
infs = " ".join(infiles)
statement = '''awk 'FNR==1 && NR!=1 { while (/ref/) getline; }1 {print}' %(infs)s > %(tmp)s'''
P.run()
R('''library(ggplot2)''')
R('''library(plyr)''')
R('''library(reshape)''')
R('''dat <-read.csv("%s", header=T, stringsAsFactors=F, sep="\t")''' % tmp)
R('''t <- ncol(dat)''')
R('''dat <- na.omit(dat)''')
R('''pathways <- dat$pathway''')
R('''dat2 <- dat[,1:ncol(dat)-1]''')
R('''dat2 <- dat2[,1:ncol(dat2)-1]''')
# colsums gives the total number of taxa expressing each NOG
R('''col.sums <- data.frame(t(sapply(split(dat2, pathways), colSums)))''')
R('''rownames(col.sums) <- unique(pathways)''')
# rowsums gives the total number of taxa expressing
# at least one NOG per pathway
R('''total.taxa <- data.frame(rowSums(col.sums > 0))''')
R('''total.taxa$pathway <- rownames(col.sums)''')
# sort by highest
R('''total.taxa <- total.taxa[order(total.taxa[,1], decreasing=T), ]''')
R('''colnames(total.taxa) <- c("value", "pathway")''')
R('''plot1 <- ggplot(total.taxa, aes(x=factor(pathway,levels=pathway), y=value/t, stat="identity"))''')
R('''plot1 + geom_bar(stat="identity") + theme(axis.text.x=element_text(angle=90))''')
R('''ggsave("%s")''' % outfile)
os.unlink(tmp)
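####################################################
####################################################
####################################################
# The awk one-liner above concatenates the per-pathway tables
# while dropping the repeated header line of every file after
# the first; a pure-python sketch of the same merge:
def concat_skip_headers(infiles, outfile):
    with open(outfile, "w") as outf:
        for i, fn in enumerate(infiles):
            with open(fn) as inf:
                header = inf.readline()
                if i == 0:
                    outf.write(header)
                outf.writelines(inf)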
####################################################
####################################################
####################################################
def plotTaxaContributionsToCandidatePathways(matrix,
outfile):
'''
heatmap the genus contributions to NOGs in the
candidate pathways
'''
R('''library(ggplot2)''')
R('''library(gplots)''')
R('''library(pheatmap)''')
R('''mat <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % matrix)
R('''mat <- na.omit(mat)''')
R('''print(mat$ref)''')
# use NOG identifiers as rownames
R('''rownames(mat) <- mat$ref''')
R('''mat2 <- mat[,1:ncol(mat)-1]''')
R('''mat2 <- mat2[,1:ncol(mat2)-1]''')
# only keep those genera that contribute > 5% to
# a NOG
R('''mat2 <- mat2[,colSums(mat2) > 5]''')
R('''cols <- colorRampPalette(c("white", "blue"))(75)''')
R('''pdf("%s")''' % outfile)
R('''pheatmap(mat2,
color=cols,
cluster_cols=T,
cluster_rows=T,
cluster_method="ward.D2")''')
R["dev.off"]()
####################################################
####################################################
####################################################
def plotMaxTaxaContribution(matrix, annotations, outfile):
'''
plot the distribution of maximum genus
contribution per gene set
'''
R('''library(ggplot2)''')
R('''dat <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % matrix)
R('''annotations <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % annotations)
R('''maximums <- apply(dat, 2, max)''')
R('''dat2 <- data.frame("cog" = colnames(dat), "max" = maximums)''')
R('''dat3 <- merge(dat2, annotations, by.x = "cog", by.y = "gene")''')
R('''dat3$pi_status <- ifelse(dat3$status == "NS", "NS", dat3$pi_status)''')
R('''dat3$pi_status[is.na(dat3$pi_status)] <- "other_significant"''')
R('''plot1 <- ggplot(dat3, aes(x = as.numeric(as.character(max)), group = pi_status, colour = pi_status))''')
R('''plot2 <- plot1 + stat_ecdf(size = 1.1)''')
R('''plot2 + scale_colour_manual(values = c("cyan3",
"darkorchid",
"black",
"darkgoldenrod2",
"grey",
"darkBlue"))''')
R('''ggsave("%s")''' % outfile)
####################################################
####################################################
####################################################
def testSignificanceOfMaxTaxaContribution(matrix, annotations, outfile):
'''
Test the significance of distribution differences
compared to the NS group
'''
R('''library(ggplot2)''')
R('''dat <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % matrix)
R('''annotations <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % annotations)
R('''maximums <- apply(dat, 2, max)''')
R('''dat2 <- data.frame("cog" = colnames(dat), "max" = maximums)''')
R('''dat3 <- merge(dat2, annotations, by.x = "cog", by.y = "gene")''')
R('''dat3$pi_status <- ifelse(dat3$status == "NS", "NS", dat3$pi_status)''')
R('''diff.up.rna <- as.numeric(as.character(dat3$max[dat3$pi_status == "diff.up.rna"]))''')
R('''diff.down.rna <- as.numeric(as.character(dat3$max[dat3$pi_status == "diff.down.rna"]))''')
R('''diff.up.dna <- as.numeric(as.character(dat3$max[dat3$pi_status == "diff.up.dna"]))''')
R('''diff.down.dna <- as.numeric(as.character(dat3$max[dat3$pi_status == "diff.down.dna"]))''')
R('''ns <- as.numeric(as.character(dat3$max[dat3$pi_status == "NS"]))''')
# ks tests
R('''ks1 <- ks.test(diff.up.rna, ns)''')
R('''ks2 <- ks.test(diff.down.rna, ns)''')
R('''ks3 <- ks.test(diff.up.dna, ns)''')
R('''ks4 <- ks.test(diff.down.dna, ns)''')
R('''res <- data.frame("RNAGreaterThanDNA.up.pvalue" = ks1$p.value,
"RNAGreaterThanDNA.up.D" = ks1$statistic,
"RNAGreaterThanDNA.down.pvalue" = ks2$p.value,
"RNAGreaterThanDNA.down.D" = ks2$statistic,
"DNAGreaterThanRNA.up.pvalue" = ks3$p.value,
"DNAGreaterThanRNA.up.D" = ks3$statistic,
"DNAGreaterThanRNA.down.pvalue" = ks4$p.value,
"DNAGreaterThanRNA.down.D" = ks4$statistic)''')
R('''write.table(res, file = "%s", sep = "\t", quote = F, row.names = F)''' % outfile)
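####################################################
####################################################
####################################################
# scipy counterpart of the two-sample ks.test calls above
# (a sketch, assuming scipy is available):
def ks_vs_ns(values, ns_values):
    from scipy.stats import ks_2samp
    d, p = ks_2samp(values, ns_values)
    return d, p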
####################################################
####################################################
####################################################
def heatmapTaxaCogProportionMatrix(matrix, annotations, outfile):
'''
plot the taxa associated with each cog on
a heatmap
'''
R('''library(gplots)''')
R('''library(gtools)''')
R('''dat <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t", row.names = 1)''' % matrix)
R('''annotations <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % annotations)
R('''rownames(annotations) <- annotations$gene''')
# get genes present in both - not sure why these are different
# in the first place - need to check
R('''genes <- intersect(rownames(annotations), colnames(dat))''')
R('''dat <- dat[, genes]''')
R('''dat <- dat[grep("unassigned", rownames(dat), invert = T),]''')
R('''genera <- rownames(dat)''')
R('''rownames(dat) <- genera''')
R('''colnames(dat) <- genes''')
R('''annotations <- annotations[genes,]''')
R('''annotations <- annotations[order(annotations$pi_status),]''')
# only for the COGs that have RNA fold > DNA fold up-regulated
R('''annotations <- annotations[annotations$pi_status == "diff.up.rna",]''')
R('''annotations <- na.omit(annotations)''')
R('''dat <- dat[,rownames(annotations)]''')
R('''annotation <- data.frame(cluster = as.character(annotations$pi_status))''')
R('''rownames(annotation) <- rownames(annotations)''')
R('''colors1 <- c("grey")''')
R('''names(colors1) <- c("diff.up.rna")''')
R('''anno_colors <- list(cluster = colors1)''')
R('''cols <- colorRampPalette(c("white", "darkBlue"))(150)''')
R('''dat <- dat[,colSums(dat > 50) >= 1]''')
R('''dat <- dat[rowSums(dat > 10) >= 1,]''')
    # values are not read as numeric in all instances, so coerce explicitly
R('''dat2 <- data.frame(t(apply(dat, 1, as.numeric)))''')
R('''colnames(dat2) <- colnames(dat)''')
R('''pdf("%s", height = 10, width = 15)''' % outfile)
R('''library(pheatmap)''')
R('''pheatmap(dat2,
clustering_distance_cols = "manhattan",
clustering_method = "ward",
annotation = annotation,
annotation_colors = anno_colors,
cluster_rows = T,
cluster_cols = F,
color = cols,
fontsize = 8)''')
R["dev.off"]()
####################################################
####################################################
####################################################
def scatterplotPerCogTaxaDNAFoldRNAFold(taxa_cog_rnadiff,
taxa_cog_dnadiff,
cog_rnadiff,
cog_dnadiff):
'''
scatterplot fold changes for per genus cog
    differences for NOGs of interest
'''
R('''library(ggplot2)''')
# read in cogs + taxa
R('''dna <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % taxa_cog_dnadiff)
R('''dna <- dna[dna$group2 == "WT" & dna$group1 == "HhaIL10R",]''')
R('''rna <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % taxa_cog_rnadiff)
R('''rna <- rna[rna$group2 == "WT" & rna$group1 == "HhaIL10R",]''')
# read in cogs alone
R('''dna.cog <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % cog_dnadiff)
R('''dna.cog <- dna.cog[dna.cog$group2 == "WT" & dna.cog$group1 == "HhaIL10R",]''')
R('''rna.cog <- read.csv("%s", header = T, stringsAsFactors = F, sep = "\t")''' % cog_rnadiff)
R('''rna.cog <- rna.cog[rna.cog$group2 == "WT" & rna.cog$group1 == "HhaIL10R",]''')
# merge data for cogs + taxa
R('''dat <- merge(dna, rna,
by.x = "taxa",
by.y = "taxa",
all.x = T,
all.y = T,
suffixes = c(".dna.taxa.cog", ".rna.taxa.cog"))''')
# sub NA for 0
R('''dat[is.na(dat)] <- 0''')
# NOTE these are specified and hardcoded
# here - NOGs of interest
R('''cogs <- c("COG0783", "COG2837", "COG0435","COG5520", "COG0508", "COG0852")''')
# iterate over cogs and scatterplot
# fold changes in DNA and RNA analysis.
    # if a COG is not present in one or the other then its fold change will
# be 0
R('''for (cog in cogs){
dat2 <- dat[grep(cog, dat$taxa),]
dna.cog2 <- dna.cog[grep(cog, dna.cog$taxa),]
rna.cog2 <- rna.cog[grep(cog, rna.cog$taxa),]
# add the data for COG fold changes and abundance
dat3 <- data.frame("genus" = append(dat2$taxa, cog),
"dna.fold" = append(dat2$logFC.dna.taxa.cog, dna.cog2$logFC),
"rna.fold" = append(dat2$logFC.rna.taxa.cog, rna.cog2$logFC),
"abundance" = append(dat2$AveExpr.rna.taxa.cog, rna.cog2$AveExpr))
suffix <- paste(cog, "scatters.pdf", sep = ".")
outname <- paste("scatterplot_genus_cog_fold.dir", suffix, sep = "/")
plot1 <- ggplot(dat3, aes(x = dna.fold, y = rna.fold, size = log10(abundance), label = genus))
plot2 <- plot1 + geom_point(shape = 18)
plot3 <- plot2 + geom_text(hjust = 0.5, vjust = 1) + scale_size(range = c(3,6))
plot4 <- plot3 + geom_abline(intercept = 0, slope = 1, colour = "blue")
plot5 <- plot4 + geom_hline(yintercept = c(-1,1), linetype = "dashed")
plot6 <- plot5 + geom_vline(xintercept = c(-1,1), linetype = "dashed")
plot7 <- plot6 + geom_hline(yintercept = 0) + geom_vline(xintercept = 0)
ggsave(outname)
}''')
| 39.690305 | 140 | 0.455728 | 4,861 | 44,215 | 4.11582 | 0.141329 | 0.009397 | 0.008997 | 0.023092 | 0.350927 | 0.308692 | 0.282851 | 0.25636 | 0.253061 | 0.237117 | 0 | 0.018031 | 0.254913 | 44,215 | 1,113 | 141 | 39.725966 | 0.589273 | 0.122379 | 0 | 0.239234 | 0 | 0.130782 | 0.626881 | 0.104157 | 0.004785 | 0 | 0 | 0 | 0 | 1 | 0.038278 | false | 0 | 0.017544 | 0 | 0.055821 | 0.009569 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eb6d0eb419a5d2952ce72fb2728210efca01373f | 956 | py | Python | setup.py | breathe/NotebookScripter | ecdc9b0842e88179df76f96e4fbcf93032effaa5 | [
"MIT"
] | 23 | 2018-11-20T16:50:20.000Z | 2021-11-16T11:36:43.000Z | setup.py | breathe/NotebookScripter | ecdc9b0842e88179df76f96e4fbcf93032effaa5 | [
"MIT"
] | 5 | 2018-11-21T10:57:30.000Z | 2019-12-20T21:53:36.000Z | setup.py | breathe/NotebookScripter | ecdc9b0842e88179df76f96e4fbcf93032effaa5 | [
"MIT"
] | 1 | 2019-06-13T04:32:13.000Z | 2019-06-13T04:32:13.000Z | from setuptools import setup
setup(
name='NotebookScripter',
version='6.0.0',
packages=('NotebookScripter',),
url='https://github.com/breathe/NotebookScripter',
license='MIT',
author='N. Ben Cohen',
author_email='breathevalue@icloud.com',
install_requires=(
"ipython",
"nbformat"
),
tests_require=(
"nose",
"coverage",
"snapshottest",
"matplotlib"
),
description='Expose ipython jupyter notebooks as callable functions. More info here https://github.com/breathe/NotebookScripter',
long_description='Expose ipython jupyter notebooks as callable functions. More info here https://github.com/breathe/NotebookScripter',
classifiers=(
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: Implementation :: CPython')
)
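# Illustrative usage note (not part of the original setup.py): with this file
# in place the package installs for development via `pip install -e .`; the
# packages under tests_require are only pulled in by `python setup.py test`
# style workflows, not by a normal install.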
| 32.965517 | 139 | 0.64749 | 93 | 956 | 6.612903 | 0.580645 | 0.053659 | 0.068293 | 0.102439 | 0.411382 | 0.35122 | 0.35122 | 0.35122 | 0.35122 | 0.35122 | 0 | 0.008086 | 0.223849 | 956 | 28 | 140 | 34.142857 | 0.820755 | 0 | 0 | 0.074074 | 0 | 0 | 0.59205 | 0.024059 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.037037 | 0 | 0.037037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eb70ba677dc1dd7a466c342944b35d0477841536 | 3,244 | py | Python | thejoker/distributions.py | adrn/thejoker | e77182bdb368e20127a17cc76ba1083ab77746ea | [
"MIT"
] | 22 | 2016-09-05T00:01:14.000Z | 2021-05-14T19:28:23.000Z | thejoker/distributions.py | adrn/thejoker | e77182bdb368e20127a17cc76ba1083ab77746ea | [
"MIT"
] | 111 | 2016-09-04T18:21:00.000Z | 2022-03-13T06:38:27.000Z | thejoker/distributions.py | adrn/thejoker | e77182bdb368e20127a17cc76ba1083ab77746ea | [
"MIT"
] | 8 | 2016-09-04T17:12:34.000Z | 2022-02-18T13:12:09.000Z | # Third-party
import astropy.units as u
import numpy as np
import pymc3 as pm
from pymc3.distributions import generate_samples
import aesara_theano_fallback.tensor as tt
import exoplanet.units as xu
__all__ = ['UniformLog', 'FixedCompanionMass']
class UniformLog(pm.Continuous):
def __init__(self, a, b, **kwargs):
"""A distribution over a value, x, that is uniform in log(x) over the
domain :math:`(a, b)`.
"""
self.a = float(a)
self.b = float(b)
assert (self.a > 0) and (self.b > 0)
self._fac = np.log(self.b) - np.log(self.a)
shape = kwargs.get("shape", None)
if shape is None:
testval = 0.5 * (self.a + self.b)
else:
testval = 0.5 * (self.a + self.b) + np.zeros(shape)
kwargs["testval"] = kwargs.pop("testval", testval)
super(UniformLog, self).__init__(**kwargs)
def _random(self, size=None):
uu = np.random.uniform(size=size)
return np.exp(uu * self._fac + np.log(self.a))
def random(self, point=None, size=None):
return generate_samples(
self._random,
dist_shape=self.shape,
broadcast_shape=self.shape,
size=size,
)
    def logp(self, value):
        # density of a log-uniform variate on (a, b): p(x) = 1 / (x * (ln b - ln a))
        return -tt.log(value) - np.log(self._fac)
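# Editorial note (not from the upstream source): UniformLog._random() is an
# inverse-CDF sampler -- u ~ U(0, 1) is mapped through
# exp(u * (log(b) - log(a)) + log(a)), the quantile function of a log-uniform
# variate on (a, b), so samples always land inside the domain.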
class FixedCompanionMass(pm.Normal):
r"""
A distribution over velocity semi-amplitude, :math:`K`, that, at
fixed primary mass, is a fixed Normal distribution in companion mass. This
has the form:
.. math::
p(K) \propto \mathcal{N}(K \,|\, \mu_K, \sigma_K)
\sigma_K = \sigma_{K, 0} \, \left(\frac{P}{P_0}\right)^{-1/3} \,
\left(1 - e^2\right)^{-1}
where :math:`P` and :math:`e` are period and eccentricity, and
``sigma_K0`` and ``P0`` are parameters of this distribution that must
be specified.
"""
@u.quantity_input(sigma_K0=u.km/u.s, P0=u.day, max_K=u.km/u.s)
def __init__(self, P, e, sigma_K0, P0, mu=0., max_K=500*u.km/u.s,
K_unit=None, **kwargs):
self._sigma_K0 = sigma_K0
self._P0 = P0
self._max_K = max_K
if K_unit is not None:
            self._sigma_K0 = self._sigma_K0.to(K_unit)
self._max_K = self._max_K.to(self._sigma_K0.unit)
if hasattr(P, xu.UNIT_ATTR_NAME):
self._P0 = self._P0.to(getattr(P, xu.UNIT_ATTR_NAME))
sigma_K0 = self._sigma_K0.value
P0 = self._P0.value
sigma = tt.min([self._max_K.value,
sigma_K0 * (P/P0)**(-1/3) / np.sqrt(1-e**2)])
super().__init__(mu=mu, sigma=sigma)
class Kipping13Long(pm.Beta):
def __init__(self):
r"""
The inferred long-period eccentricity distribution from Kipping (2013).
"""
super().__init__(1.12, 3.09)
class Kipping13Short(pm.Beta):
def __init__(self):
r"""
The inferred short-period eccentricity distribution from Kipping (2013).
"""
super().__init__(0.697, 3.27)
class Kipping13Global(pm.Beta):
def __init__(self):
r"""
The inferred global eccentricity distribution from Kipping (2013).
"""
super().__init__(0.867, 3.03)
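# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of thejoker): one way these priors can
# be combined in a pymc3 model. The bounds and the sigma_K0/P0 values below
# are assumptions chosen purely for illustration.
if __name__ == '__main__':
    with pm.Model():
        P = UniformLog('P', a=2., b=1000.)  # orbital period, uniform in log
        e = Kipping13Global('e')            # global eccentricity prior
        K = FixedCompanionMass('K', P=P, e=e,
                               sigma_K0=30 * u.km / u.s,
                               P0=365.25 * u.day)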
| 28.964286 | 80 | 0.589088 | 467 | 3,244 | 3.877944 | 0.299786 | 0.042518 | 0.03037 | 0.008283 | 0.220872 | 0.156267 | 0.156267 | 0.135284 | 0 | 0 | 0 | 0.034702 | 0.271578 | 3,244 | 111 | 81 | 29.225225 | 0.731697 | 0.241985 | 0 | 0.048387 | 1 | 0 | 0.020294 | 0 | 0 | 0 | 0 | 0 | 0.016129 | 1 | 0.129032 | false | 0 | 0.096774 | 0.032258 | 0.354839 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eb72965a0a45325849d4f2b7f020b22ca5abd6a7 | 1,369 | py | Python | pythonUtils/reptile/produce_name.py | MarcWong/matlabTools | 183b437707e2a9ddf7a8b5be0a4e874109fcfaa7 | [
"MIT"
] | null | null | null | pythonUtils/reptile/produce_name.py | MarcWong/matlabTools | 183b437707e2a9ddf7a8b5be0a4e874109fcfaa7 | [
"MIT"
] | null | null | null | pythonUtils/reptile/produce_name.py | MarcWong/matlabTools | 183b437707e2a9ddf7a8b5be0a4e874109fcfaa7 | [
"MIT"
] | null | null | null | #usage: python produce_name.py yourpath file_type
# -*- coding: utf-8 -*-
import sys
import os
import shutil
import hashlib
def md5(text):
    # hash the source path so the generated file name is stable and unique
    m = hashlib.md5()
    m.update(text.encode('utf8'))
    return m.hexdigest()
if __name__ == '__main__':
    file_path = sys.argv[1]
    file_type = sys.argv[2]
    dst_path = './'
    departments = os.listdir(file_path)
    # file_type '1' selects breath papers, '2' selects guidelines
    if file_type == '1':
        head = 'http://y-doctor-oss.oss-cn-shenzhen.aliyuncs.com/paper/breath/'
    elif file_type == '2':
        head = 'http://y-doctor-oss.oss-cn-shenzhen.aliyuncs.com/guideline/'
    with open('index.txt', 'w') as fp_write:
        for diss in departments:
            if diss == '.DS_Store':
                continue
            if os.path.isdir(file_path + diss):
                disease_name = diss
                dir_path = file_path + diss + '/'
                files = os.listdir(dir_path)
                for file in files:
                    if file == '.DS_Store':
                        continue
                    if os.path.isfile(dir_path + file):
                        new_file_name = md5(dir_path + file)
                        print(new_file_name)
                        shutil.copyfile(dir_path + file, dst_path + new_file_name)
                        fp_write.write(file + '\t' + head + new_file_name + '\n')
| 31.837209 | 79 | 0.582177 | 182 | 1,369 | 4.175824 | 0.379121 | 0.046053 | 0.057895 | 0.059211 | 0.255263 | 0.255263 | 0.194737 | 0.194737 | 0.194737 | 0.194737 | 0 | 0.00924 | 0.288532 | 1,369 | 42 | 80 | 32.595238 | 0.771047 | 0.107378 | 0 | 0.057143 | 0 | 0.028571 | 0.139573 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.114286 | null | null | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eb73a3b7734e41fc3fe55a18a60b1a2823f19e7c | 2,649 | py | Python | demo/cheeseboard/app.py | dw/acid | 3aabb3940f23c052ed7a009cff5d84cc50b099fb | [
"Apache-2.0"
] | 15 | 2015-09-24T03:57:49.000Z | 2020-08-25T22:44:20.000Z | demo/cheeseboard/app.py | dw/acid | 3aabb3940f23c052ed7a009cff5d84cc50b099fb | [
"Apache-2.0"
] | 2 | 2015-06-21T02:06:20.000Z | 2019-11-14T14:02:39.000Z | demo/cheeseboard/app.py | dw/acid | 3aabb3940f23c052ed7a009cff5d84cc50b099fb | [
"Apache-2.0"
] | 1 | 2019-09-11T03:13:52.000Z | 2019-09-11T03:13:52.000Z | #
# Copyright 2013, David Wilson.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
import time
import urllib
import acid
import acid.meta
import bottle
import wheezy.template.engine
import wheezy.template.ext.core
import wheezy.template.loader
import models
templates = wheezy.template.engine.Engine(
loader=wheezy.template.loader.FileLoader(['templates']),
extensions=[wheezy.template.ext.core.CoreExtension()])
store = models.init_store()
# Hack to avoid attempting to create Post collection from RO txn
with store.begin(write=True):
list(models.Post.iter())
models.Post.find()
def getint(name, default=None):
try:
return int(bottle.request.params.get(name))
except (ValueError, TypeError):
return default
@bottle.route('/')
def index():
t0 = time.time()
hi = getint('hi')
with store.begin():
posts = list(models.Post.iter(hi=hi, reverse=True, max=5))
highest_id = models.Post.collection().findkey(reverse=True)
t1 = time.time()
older = None
newer = None
if posts:
oldest = posts[-1].key[0] - 1
if oldest > 0:
older = '?hi=' + str(oldest)
if posts[0].key < highest_id:
newer = '?hi=' + str(posts[0].key[0] + 5)
template = templates.get_template('index.html')
return template.render({
'error': bottle.request.query.get('error'),
'posts': posts,
'older': older,
'newer': newer,
'msec': int((t1 - t0) * 1000)
})
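# Editorial note: `hi` implements keyset pagination over post keys. The
# "older" link passes the key just below the lowest post on the current page,
# and the "newer" link jumps forward by the page size (5), so no OFFSET-style
# scan over the store is ever needed.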
@bottle.route('/static/<filename>')
def static(filename):
return bottle.static_file(filename, root='static')
@bottle.post('/newpost')
def newpost():
post = models.Post(name=bottle.request.POST.name,
text=bottle.request.POST.text)
try:
with store.begin(write=True):
post.save()
    except acid.errors.ConstraintError as e:
return bottle.redirect('.?error=' + urllib.quote(str(e)))
return bottle.redirect('.')
if 'debug' in sys.argv:
bottle.run(host='0.0.0.0', port=8000, debug=True)
else:
import bjoern
bjoern.run(bottle.default_app(), '0.0.0.0', 8000)
| 27.309278 | 74 | 0.659117 | 359 | 2,649 | 4.846797 | 0.43454 | 0.034483 | 0.006897 | 0.018391 | 0.026437 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019552 | 0.208381 | 2,649 | 96 | 75 | 27.59375 | 0.810205 | 0.231408 | 0 | 0.0625 | 0 | 0 | 0.05894 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.171875 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eb7ce2e0e0ccae64594656f88ac709f3c1f99613 | 13,011 | py | Python | aadcUser/frAIburg_ThriftFacenet/python/gen-py/ExtIf/ttypes.py | PhilJd/frAIburg | 7585999953486bceb945f1eb7a96cbe94ea72186 | [
"BSD-3-Clause"
] | 10 | 2017-11-21T09:34:36.000Z | 2021-07-06T21:15:28.000Z | aadcUser/frAIburg_ThriftFacenet/python/gen-py/ExtIf/ttypes.py | PhilJd/frAIburg | 7585999953486bceb945f1eb7a96cbe94ea72186 | [
"BSD-3-Clause"
] | null | null | null | aadcUser/frAIburg_ThriftFacenet/python/gen-py/ExtIf/ttypes.py | PhilJd/frAIburg | 7585999953486bceb945f1eb7a96cbe94ea72186 | [
"BSD-3-Clause"
] | null | null | null | #
# Autogenerated by Thrift Compiler (0.9.1)
#
# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
#
# options string: py
#
from thrift.Thrift import TType, TMessageType, TException, TApplicationException
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol, TProtocol
try:
from thrift.protocol import fastbinary
except:
fastbinary = None
class TransportDef:
"""
****************************************************************************
interface objects
****************************************************************************
"""
IMAGEDATA = 0
STRINGDATA = 1
_VALUES_TO_NAMES = {
0: "IMAGEDATA",
1: "STRINGDATA",
}
_NAMES_TO_VALUES = {
"IMAGEDATA": 0,
"STRINGDATA": 1,
}
class TDataRaw:
"""
Attributes:
- raw_data
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'raw_data', None, None, ), # 1
)
def __init__(self, raw_data=None,):
self.raw_data = raw_data
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.raw_data = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TDataRaw')
if self.raw_data is not None:
oprot.writeFieldBegin('raw_data', TType.STRING, 1)
oprot.writeString(self.raw_data)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.raw_data is None:
raise TProtocol.TProtocolException(message='Required field raw_data is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TObjectResult:
"""
Attributes:
- classification
- distance
- selected
- bbox_xmin
- bbox_ymin
- bbox_xmax
- bbox_ymax
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'classification', None, None, ), # 1
(2, TType.DOUBLE, 'distance', None, None, ), # 2
(3, TType.BOOL, 'selected', None, None, ), # 3
(4, TType.DOUBLE, 'bbox_xmin', None, None, ), # 4
(5, TType.DOUBLE, 'bbox_ymin', None, None, ), # 5
(6, TType.DOUBLE, 'bbox_xmax', None, None, ), # 6
(7, TType.DOUBLE, 'bbox_ymax', None, None, ), # 7
)
def __init__(self, classification=None, distance=None, selected=None, bbox_xmin=None, bbox_ymin=None, bbox_xmax=None, bbox_ymax=None,):
self.classification = classification
self.distance = distance
self.selected = selected
self.bbox_xmin = bbox_xmin
self.bbox_ymin = bbox_ymin
self.bbox_xmax = bbox_xmax
self.bbox_ymax = bbox_ymax
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.classification = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.DOUBLE:
self.distance = iprot.readDouble();
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.BOOL:
self.selected = iprot.readBool();
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.DOUBLE:
self.bbox_xmin = iprot.readDouble();
else:
iprot.skip(ftype)
elif fid == 5:
if ftype == TType.DOUBLE:
self.bbox_ymin = iprot.readDouble();
else:
iprot.skip(ftype)
elif fid == 6:
if ftype == TType.DOUBLE:
self.bbox_xmax = iprot.readDouble();
else:
iprot.skip(ftype)
elif fid == 7:
if ftype == TType.DOUBLE:
self.bbox_ymax = iprot.readDouble();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TObjectResult')
if self.classification is not None:
oprot.writeFieldBegin('classification', TType.STRING, 1)
oprot.writeString(self.classification)
oprot.writeFieldEnd()
if self.distance is not None:
oprot.writeFieldBegin('distance', TType.DOUBLE, 2)
oprot.writeDouble(self.distance)
oprot.writeFieldEnd()
if self.selected is not None:
oprot.writeFieldBegin('selected', TType.BOOL, 3)
oprot.writeBool(self.selected)
oprot.writeFieldEnd()
if self.bbox_xmin is not None:
oprot.writeFieldBegin('bbox_xmin', TType.DOUBLE, 4)
oprot.writeDouble(self.bbox_xmin)
oprot.writeFieldEnd()
if self.bbox_ymin is not None:
oprot.writeFieldBegin('bbox_ymin', TType.DOUBLE, 5)
oprot.writeDouble(self.bbox_ymin)
oprot.writeFieldEnd()
if self.bbox_xmax is not None:
oprot.writeFieldBegin('bbox_xmax', TType.DOUBLE, 6)
oprot.writeDouble(self.bbox_xmax)
oprot.writeFieldEnd()
if self.bbox_ymax is not None:
oprot.writeFieldBegin('bbox_ymax', TType.DOUBLE, 7)
oprot.writeDouble(self.bbox_ymax)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.classification is None:
raise TProtocol.TProtocolException(message='Required field classification is unset!')
if self.distance is None:
raise TProtocol.TProtocolException(message='Required field distance is unset!')
if self.selected is None:
raise TProtocol.TProtocolException(message='Required field selected is unset!')
if self.bbox_xmin is None:
raise TProtocol.TProtocolException(message='Required field bbox_xmin is unset!')
if self.bbox_ymin is None:
raise TProtocol.TProtocolException(message='Required field bbox_ymin is unset!')
if self.bbox_xmax is None:
raise TProtocol.TProtocolException(message='Required field bbox_xmax is unset!')
if self.bbox_ymax is None:
raise TProtocol.TProtocolException(message='Required field bbox_ymax is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TImageParams:
"""
Attributes:
- height
- width
- bytesPerPixel
- name
"""
thrift_spec = (
None, # 0
(1, TType.I16, 'height', None, None, ), # 1
(2, TType.I16, 'width', None, None, ), # 2
(3, TType.I16, 'bytesPerPixel', None, None, ), # 3
(4, TType.STRING, 'name', None, None, ), # 4
)
def __init__(self, height=None, width=None, bytesPerPixel=None, name=None,):
self.height = height
self.width = width
self.bytesPerPixel = bytesPerPixel
self.name = name
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I16:
self.height = iprot.readI16();
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.I16:
self.width = iprot.readI16();
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.I16:
self.bytesPerPixel = iprot.readI16();
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.STRING:
self.name = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TImageParams')
if self.height is not None:
oprot.writeFieldBegin('height', TType.I16, 1)
oprot.writeI16(self.height)
oprot.writeFieldEnd()
if self.width is not None:
oprot.writeFieldBegin('width', TType.I16, 2)
oprot.writeI16(self.width)
oprot.writeFieldEnd()
if self.bytesPerPixel is not None:
oprot.writeFieldBegin('bytesPerPixel', TType.I16, 3)
oprot.writeI16(self.bytesPerPixel)
oprot.writeFieldEnd()
if self.name is not None:
oprot.writeFieldBegin('name', TType.STRING, 4)
oprot.writeString(self.name)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.height is None:
raise TProtocol.TProtocolException(message='Required field height is unset!')
if self.width is None:
raise TProtocol.TProtocolException(message='Required field width is unset!')
if self.bytesPerPixel is None:
raise TProtocol.TProtocolException(message='Required field bytesPerPixel is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TIoException(TException):
"""
thrown by services
Attributes:
- message
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'message', None, None, ), # 1
)
def __init__(self, message=None,):
self.message = message
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.message = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TIoException')
if self.message is not None:
oprot.writeFieldBegin('message', TType.STRING, 1)
oprot.writeString(self.message)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __str__(self):
return repr(self)
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
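# ---------------------------------------------------------------------------
# Illustrative usage sketch (hand-written, not Thrift-generated): round-trips
# a TImageParams struct through TBinaryProtocol in memory. Field values are
# assumptions chosen purely for illustration.
if __name__ == '__main__':
  buf = TTransport.TMemoryBuffer()
  params = TImageParams(height=480, width=640, bytesPerPixel=3, name='cam0')
  params.validate()
  params.write(TBinaryProtocol.TBinaryProtocol(buf))
  decoded = TImageParams()
  decoded.read(TBinaryProtocol.TBinaryProtocol(TTransport.TMemoryBuffer(buf.getvalue())))
  assert decoded == params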
| 31.503632 | 188 | 0.648067 | 1,535 | 13,011 | 5.286645 | 0.09316 | 0.017868 | 0.032163 | 0.037708 | 0.708318 | 0.631423 | 0.580776 | 0.570795 | 0.499692 | 0.460382 | 0 | 0.009441 | 0.226578 | 13,011 | 412 | 189 | 31.580097 | 0.796979 | 0.043194 | 0 | 0.569231 | 1 | 0 | 0.058843 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.089231 | false | 0 | 0.012308 | 0.030769 | 0.218462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eb7ea27d449cd305cfd059a8b717ab286d3a4004 | 1,005 | py | Python | hwtHls/netlist/dumpStreamNodes.py | Nic30/hwtHls | 1fac6ed128318e698d51e15e9871249ddf243e1c | [
"MIT"
] | 8 | 2018-09-25T03:28:11.000Z | 2021-12-15T07:44:38.000Z | hwtHls/netlist/dumpStreamNodes.py | Nic30/hwtHls | 1fac6ed128318e698d51e15e9871249ddf243e1c | [
"MIT"
] | 1 | 2020-12-21T10:56:44.000Z | 2020-12-21T10:56:44.000Z | hwtHls/netlist/dumpStreamNodes.py | Nic30/hwtHls | 1fac6ed128318e698d51e15e9871249ddf243e1c | [
"MIT"
] | 2 | 2018-09-25T03:28:18.000Z | 2021-12-15T10:28:35.000Z | from io import StringIO
from hwtHls.netlist.transformations.rtlNetlistPass import RtlNetlistPass
from hwtHls.allocator.allocator import ConnectionsOfStage
class RtlNetlistPassDumpStreamNodes(RtlNetlistPass):
def __init__(self, out: StringIO, close=False):
self.out = out
self.close = close
def apply(self, hls: "HlsStreamProc", to_hw: "SsaSegmentToHwPipeline"):
if to_hw.backward_edges:
self.out.write(f"########## backedges ##########\n")
for e in to_hw.backward_edges:
self.out.write(repr(e))
self.out.write("\n")
self.out.write("\n")
for st_i, st in enumerate(to_hw.hls.allocator._connections_of_stage):
st: ConnectionsOfStage
self.out.write(f"########## st {st_i:d} ##########\n")
if st.sync_node is not None:
self.out.write(repr(st.sync_node))
self.out.write("\n")
if self.close:
self.out.close()
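# Illustrative usage sketch (not part of hwtHls): this pass is normally run by
# the HLS pass manager; `hls` and `to_hw` below stand in for a populated
# HlsStreamProc / SsaSegmentToHwPipeline pair and are assumptions.
#
#     buff = StringIO()
#     RtlNetlistPassDumpStreamNodes(buff, close=False).apply(hls, to_hw)
#     print(buff.getvalue())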
| 33.5 | 77 | 0.59403 | 120 | 1,005 | 4.833333 | 0.391667 | 0.12069 | 0.144828 | 0.067241 | 0.1 | 0.1 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0.265672 | 1,005 | 29 | 78 | 34.655172 | 0.785908 | 0 | 0 | 0.136364 | 0 | 0 | 0.108458 | 0.021891 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0.090909 | 0.136364 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
eb873ee05f656225ec226d30c33f22616f66c979 | 2,924 | py | Python | sites/mysites/models.py | cmwaura/Final_Red_Scrap | 6b1b78de7d1129cda787e9f4688ddd409af39eb5 | [
"MIT"
] | null | null | null | sites/mysites/models.py | cmwaura/Final_Red_Scrap | 6b1b78de7d1129cda787e9f4688ddd409af39eb5 | [
"MIT"
] | null | null | null | sites/mysites/models.py | cmwaura/Final_Red_Scrap | 6b1b78de7d1129cda787e9f4688ddd409af39eb5 | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.db import models
from django.db.models.signals import pre_delete
from scrapy_djangoitem import DjangoItem
from dynamic_scraper.models import Scraper, SchedulerRuntime
from django.dispatch import receiver
# Create your models here.
class JobWebsite(models.Model):
'''
	This is the job website that we will be scraping from. For instance, if we are looking for all
	the business analyst positions on indeed.com, this particular Django model specifies which website is used for
	which particular job.
	It inherits from the Django class models.Model to do the job.
	The scraper field is the actual scraper: it borrows from the dynamic_scraper modules and is responsible
	for using Scrapy functions to scrape whatever website we desire.
	The scraper_runtime is, as the name suggests, the scheduler runtime. It borrows from the SchedulerRuntime class of the dynamic_scraper modules.
'''
title = models.CharField(max_length=250)
url = models.URLField()
scraper = models.ForeignKey(Scraper, blank= True, null=True, on_delete=models.SET_NULL)
scraper_runtime = models.ForeignKey(SchedulerRuntime, blank= True, null=True, on_delete=models.SET_NULL)
class JobAd(models.Model):
'''
	This particular class is concerned with receiving all the scraped material once the job is done. For instance, if we are scraping
	business analyst positions from a company in Mountain View, then what we expect is:
	title = Jr Business Analyst
	url = www.company.com/careers.html (or something similar)
	description = "this is a junior level position for college graduates that requires 10 years of freaking experience. We don't care
	that you are only 23; you must have started working since you were 12. Actually scratch that: by the time you were conceived, if you
	didn't know what batch processing was, don't even bother applying because we will not consider you."
	company = company
	location = virtual location with virtual address
'''
job_website = models.ForeignKey(JobWebsite)
checker_runtime = models.ForeignKey(SchedulerRuntime, blank= True, null=True, on_delete=models.SET_NULL)
title = models.CharField(max_length=250)
url = models.URLField()
description = models.TextField(blank=True)
company = models.CharField(max_length=200)
location = models.CharField(max_length=300)
def __str__(self):
return self.title
class JobAdItem(DjangoItem):
'''
	This is a Scrapy requirement: it allows results from the Scrapy run to be
	saved through the Django ORM into the configured SQLite/PostgreSQL database.
'''
django_model = JobAd
@receiver(pre_delete)
def pre_delete_handler(sender, instance, using, **kwargs):
if isinstance(instance, JobWebsite):
if instance.scraper_runtime:
instance.scraper_runtime.delete()
if isinstance(instance, JobAd):
if instance.checker_runtime:
instance.checker_runtime.delete()
pre_delete.connect(pre_delete_handler)
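def _example_build_item(response):
	'''
	Illustrative sketch (not part of this app): how a Scrapy spider callback
	might populate a JobAdItem from a crawled page. The CSS selectors are
	assumptions for illustration, not selectors used by any scraper here.
	'''
	item = JobAdItem()
	item['title'] = response.css('h1::text').extract_first()
	item['url'] = response.url
	item['company'] = response.css('.company::text').extract_first()
	item['location'] = response.css('.location::text').extract_first()
	item['description'] = ' '.join(response.css('.summary ::text').extract())
	return item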
| 38.473684 | 131 | 0.793434 | 427 | 2,924 | 5.34192 | 0.40281 | 0.019728 | 0.031565 | 0.042087 | 0.14292 | 0.127137 | 0.127137 | 0.127137 | 0.127137 | 0.067514 | 0 | 0.007217 | 0.147059 | 2,924 | 75 | 132 | 38.986667 | 0.907378 | 0.516416 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.1875 | 0.03125 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
eb8b1a7716dbcc642fea899106feb3ac86d6cd27 | 194 | py | Python | src/__init__.py | ukc-co663/dependency-solver-2019-ds576 | a4b029ff53d87ee5e0f58f3f8f431b48410d2de0 | [
"MIT"
] | 1 | 2019-03-05T15:15:11.000Z | 2019-03-05T15:15:11.000Z | src/__init__.py | ukc-co663/dependency-solver-2019-ds576 | a4b029ff53d87ee5e0f58f3f8f431b48410d2de0 | [
"MIT"
] | null | null | null | src/__init__.py | ukc-co663/dependency-solver-2019-ds576 | a4b029ff53d87ee5e0f58f3f8f431b48410d2de0 | [
"MIT"
] | null | null | null | __all__ = [
'cyclic',
'dep_expander',
'dep_manager',
'logger',
'package_filter',
'package',
'parse_input',
'sat_solver_satispy',
'topo_packages',
'util'
] | 16.166667 | 25 | 0.561856 | 18 | 194 | 5.444444 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.273196 | 194 | 12 | 26 | 16.166667 | 0.695035 | 0 | 0 | 0 | 0 | 0 | 0.523077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eb8b58946c79e1e6114e3885fd5783239dfde293 | 1,979 | py | Python | tags-to-csv.py | just-trees/ec2-tags-to-csv | c0f1db7087b07282390abc9725868bf03ed7f20b | [
"MIT"
] | 9 | 2016-08-24T13:22:52.000Z | 2021-04-26T22:31:36.000Z | tags-to-csv.py | just-trees/ec2-tags-to-csv | c0f1db7087b07282390abc9725868bf03ed7f20b | [
"MIT"
] | null | null | null | tags-to-csv.py | just-trees/ec2-tags-to-csv | c0f1db7087b07282390abc9725868bf03ed7f20b | [
"MIT"
] | 7 | 2016-08-24T06:16:21.000Z | 2021-04-26T22:31:26.000Z | #!/usr/bin/env python
import boto3
import botocore
import argparse
import csv
# parse command line arguments
def parse_args():
parser = argparse.ArgumentParser(prog='tags-to-csv', description='Get instance tags in CSV format.')
# required
parser.add_argument('-o', '--out', required=True, action='store', dest='output_file', type=str, help='path to where the output should be written')
# optional
    parser.add_argument('-r', '--region', action='store', default='us-east-1', dest='aws_region', type=str, help='AWS region to use.')
parser.add_argument('-v', '--version', action='version', version='0.1')
args = parser.parse_args()
return args
def get_instances(filters=None):
    # avoid a mutable default argument; fall back to an empty filter list
    reservations = {}
    try:
        reservations = ec2.describe_instances(
            Filters=filters or []
        )
    except botocore.exceptions.ClientError as e:
        print(e.response['Error']['Message'])
instances = []
for reservation in reservations.get('Reservations', []):
for instance in reservation.get('Instances', []):
instances.append(instance)
return instances
#
# Main
#
def main():
global args
global ec2
args = parse_args()
ec2 = boto3.client('ec2', region_name=args.aws_region)
instances = get_instances()
tag_set = []
for instance in instances:
for tag in instance.get('Tags', []):
if tag.get('Key'):
tag_set.append(tag.get('Key'))
tag_set = list(set(tag_set))
with open(args.output_file, 'w') as csvfile:
fieldnames = ['InstanceId'] + tag_set
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for instance in instances:
row = {}
for tag in instance.get('Tags', []):
row[tag.get('Key')] = tag.get('Value')
row['InstanceId'] = instance.get('InstanceId')
writer.writerow(row)
if __name__ == "__main__":
main()
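# Illustrative usage note (not part of the original script): example
# invocation, assuming AWS credentials are available through the usual boto3
# mechanisms:
#
#     python tags-to-csv.py -o tags.csv -r eu-west-1
#
# The CSV gets one row per instance: an InstanceId column plus one column per
# distinct tag key found across all instances.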
| 28.271429 | 150 | 0.618494 | 236 | 1,979 | 5.072034 | 0.411017 | 0.025063 | 0.042607 | 0.030075 | 0.063492 | 0.038429 | 0 | 0 | 0 | 0 | 0 | 0.005984 | 0.24002 | 1,979 | 69 | 151 | 28.681159 | 0.789894 | 0.036382 | 0 | 0.083333 | 0 | 0 | 0.146316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.083333 | null | null | 0.020833 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebac4bfc3e33998220d5884f2925dfb01805bca2 | 30,104 | py | Python | pyVHDLModel/VHDLModel.py | Xiretza/pyVHDLModel | 2301a2fa79b737852e6c53ed77f376b958371810 | [
"Apache-2.0"
] | null | null | null | pyVHDLModel/VHDLModel.py | Xiretza/pyVHDLModel | 2301a2fa79b737852e6c53ed77f376b958371810 | [
"Apache-2.0"
] | null | null | null | pyVHDLModel/VHDLModel.py | Xiretza/pyVHDLModel | 2301a2fa79b737852e6c53ed77f376b958371810 | [
"Apache-2.0"
] | null | null | null | # =============================================================================
# __ ___ _ ____ _ __ __ _ _
# _ __ _ \ \ / / | | | _ \| | | \/ | ___ __| | ___| |
# | '_ \| | | \ \ / /| |_| | | | | | | |\/| |/ _ \ / _` |/ _ \ |
# | |_) | |_| |\ V / | _ | |_| | |___| | | | (_) | (_| | __/ |
# | .__/ \__, | \_/ |_| |_|____/|_____|_| |_|\___/ \__,_|\___|_|
# |_| |___/
# ==============================================================================
# Authors: Patrick Lehmann
#
# Python module: An abstract VHDL language model.
#
# Description:
# ------------------------------------
# TODO:
#
# License:
# ==============================================================================
# Copyright 2017-2021 Patrick Lehmann - Boetzingen, Germany
# Copyright 2016-2017 Patrick Lehmann - Dresden, Germany
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0
# ==============================================================================
#
"""
:copyright: Copyright 2007-2021 Patrick Lehmann - Bötzingen, Germany
:license: Apache License, Version 2.0
This module contains a document language model for VHDL.
"""
# load dependencies
from enum import Enum
from pathlib import Path
from typing import Any, List
from pydecor.decorators import export
__all__ = []
#__api__ = __all__ # FIXME: disabled due to a bug in pydecor's export decorator
@export
class ModelEntity:
"""
``ModelEntity`` is a base class for all classes in the VHDL language model,
except for mixin classes (see multiple inheritance) and enumerations.
Each entity in this model has a reference to its parent entity. Therefore
a protected variable :attr:`_parent` is available and a readonly property
:attr:`Parent`.
"""
_parent: 'ModelEntity'
def __init__(self):
self._parent = None
@property
def Parent(self) -> 'ModelEntity':
"""Returns a reference to the parent entity."""
return self._parent
@export
class NamedEntity:
"""
A ``NamedEntity`` is a mixin class for all VHDL entities that have names.
A protected variable :attr:`_name` is available to derived classes as well as
a readonly property :attr:`Name` for public access.
"""
_name: str
def __init__(self, name: str):
self._name = name
@property
def Name(self) -> str:
"""Returns a model entity's name."""
return self._name
@export
class LabeledEntity:
"""
A ``LabeledEntity`` is a mixin class for all VHDL entities that can have
labels.
A protected variable :attr:`_label` is available to derived classes as well
as a readonly property :attr:`Label` for public access.
"""
_label: str
def __init__(self, label: str):
self._label = label
@property
def Label(self) -> str:
"""Returns a model entity's label."""
return self._label
@export
class Design(ModelEntity):
"""
	A ``Design`` represents all loaded and analysed files (see
	:class:`~pyVHDLModel.VHDLModel.Document`). It's the root of this
	document-object-model (DOM). It contains at least one VHDL library
	(see :class:`~pyVHDLModel.VHDLModel.Library`).
"""
_libraries: List['Library'] #: List of all libraries defined for a design
_documents: List['Document'] #: List of all documents loaded for a design
def __init__(self):
super().__init__()
self._libraries = []
self._documents = []
@property
def Libraries(self) -> List['Library']:
"""Returns a list of all libraries specified for this design."""
return self._libraries
@property
def Documents(self) -> List['Document']:
"""Returns a list of all documents (files) loaded for this design."""
return self._documents
@export
class Library(ModelEntity, NamedEntity):
"""
A ``Library`` represents a VHDL library. It contains all *primary* design
units.
"""
_contexts: List['Context'] #: List of all contexts defined in a library.
_configurations: List['Configuration'] #: List of all configurations defined in a library.
_entities: List['Entity'] #: List of all entities defined in a library.
_packages: List['Package'] #: List of all packages defined in a library.
def __init__(self, name: str):
super().__init__()
NamedEntity.__init__(self, name)
self._contexts = []
self._configurations = []
self._entities = []
self._packages = []
@property
def Contexts(self) -> List['Context']:
"""Returns a list of all context declarations loaded for this design."""
return self._contexts
@property
def Configurations(self) -> List['Configuration']:
"""Returns a list of all configuration declarations loaded for this design."""
return self._configurations
@property
def Entities(self) -> List['Entity']:
"""Returns a list of all entity declarations loaded for this design."""
return self._entities
@property
def Packages(self) -> List['Package']:
"""Returns a list of all package declarations loaded for this design."""
return self._packages
@export
class Document(ModelEntity):
"""
	A ``Document`` represents a source file. It contains primary and secondary
design units.
"""
_path: Path #: path to the document. ``None`` if virtual document.
_contexts: List['Context'] #: List of all contexts defined in a document.
_configurations: List['Configuration'] #: List of all configurations defined in a document.
_entities: List['Entity'] #: List of all entities defined in a document.
_architectures: List['Architecture'] #: List of all architectures defined in a document.
_packages: List['Package'] #: List of all packages defined in a document.
_packageBodies: List['PackageBody'] #: List of all package bodies defined in a document.
def __init__(self, path: Path):
super().__init__()
self._path = path
self._contexts = []
self._configurations = []
self._entities = []
self._architectures = []
self._packages = []
self._packageBodies = []
@property
def Path(self) -> Path:
return self._path
@property
def Contexts(self) -> List['Context']:
"""Returns a list of all context declarations found in this document."""
return self._contexts
@property
def Configurations(self) -> List['Configuration']:
"""Returns a list of all configuration declarations found in this document."""
return self._configurations
@property
def Entities(self) -> List['Entity']:
"""Returns a list of all entity declarations found in this document."""
return self._entities
@property
def Architectures(self) -> List['Architecture']:
"""Returns a list of all architecture declarations found in this document."""
return self._architectures
@property
def Packages(self) -> List['Package']:
"""Returns a list of all package declarations found in this document."""
return self._packages
@property
def PackageBodies(self) -> List['PackageBody']:
"""Returns a list of all package body declarations found in this document."""
return self._packageBodies
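# Illustrative usage sketch (not part of the model): assembling a minimal
# design tree by hand. Real front-ends populate these lists while analysing
# sources; the names below are assumptions for illustration only.
def _exampleDesign() -> Design:
	design = Design()
	library = Library("work")
	design.Libraries.append(library)
	document = Document(Path("toplevel.vhdl"))
	design.Documents.append(document)
	return design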
@export
class Direction(Enum):
"""
A ``Direction`` is an enumeration and represents a direction (``to`` or ``downto``)
in a range.
"""
To = 0
DownTo = 1
@export
class Mode(Enum):
"""
A ``Mode`` is an enumeration and represents a direction (``in``, ``out``, ...)
for how objects are passed.
"""
Default = 0
In = 1
Out = 2
InOut = 3
Buffer = 4
Linkage = 5
@export
class Class(Enum):
"""
A ``Class`` is an enumeration and represents an object's class (``constant``,
``signal``, ...).
"""
Default = 0
Constant = 1
Variable = 2
Signal = 3
File = 4
Type = 5
Subprogram = 6
@export
class BaseType(ModelEntity, NamedEntity):
"""``BaseType`` is the base class of all type entities in this model."""
def __init__(self, name: str):
super().__init__()
NamedEntity.__init__(self, name)
@export
class Type(BaseType):
pass
@export
class SubType(BaseType):
_type: 'SubType'
_baseType: Type
_range: 'Range'
_resolutionFunction: 'Function'
def __init__(self, name: str):
super().__init__(name)
@property
def Type(self) -> 'SubType':
return self._type
@property
def BaseType(self) -> Type:
return self._baseType
@property
def Range(self) -> 'Range':
return self._range
@property
def ResolutionFunction(self) -> 'Function':
return self._resolutionFunction
@export
class ScalarType(BaseType):
pass
@export
class NumericType:
pass
@export
class DiscreteType:
pass
@export
class CompositeType(BaseType):
pass
@export
class ProtectedType(BaseType):
pass
@export
class AccessType(BaseType):
pass
@export
class FileType(BaseType):
pass
@export
class EnumeratedType(ScalarType, DiscreteType):
_elements: List
def __init__(self, name: str):
super().__init__(name)
self._elements = []
@property
def Elements(self) -> List:
return self._elements
@export
class IntegerType(ScalarType, NumericType, DiscreteType):
_leftBound: 'Expression'
_rightBound: 'Expression'
def __init__(self, name: str):
super().__init__(name)
@export
class RealType(ScalarType, NumericType):
_leftBound: 'Expression'
_rightBound: 'Expression'
def __init__(self, name: str):
super().__init__(name)
# TODO: PhysicalType
@export
class ArrayType(CompositeType):
_dimensions: List['Range']
_elementType: SubType
def __init__(self, name: str):
super().__init__(name)
self._dimensions = []
@property
def Dimensions(self):
return self._dimensions
@property
def ElementType(self):
return self._elementType
@export
class RecordTypeMember(ModelEntity):
def __init__(self, name: str):
super().__init__()
self._name = name
self._subType = None
@property
def Name(self):
return self._name
@export
class RecordType(BaseType):
_members: List[RecordTypeMember]
def __init__(self, name: str):
super().__init__(name)
self._members = []
@property
def Members(self):
return self._members
@export
class Expression:
pass
@export
class Literal:
pass
@export
class IntegerLiteral:
_value: int
def __init__(self, value: int):
self._value = value
@property
def Value(self):
return self._value
@export
class FloatingPointLiteral:
_value: float
def __init__(self, value: float):
self._value = value
@property
def Value(self):
return self._value
# CharacterLiteral
# StringLiteral
# BitStringLiteral
# EnumerationLiteral
# PhysicalLiteral
@export
class UnaryExpression(Expression):
_operand: Expression
def __init__(self):
pass
@property
def Operand(self):
return self._operand
@export
class FunctionCall(Expression):
pass
@export
class QualifiedExpression(Expression):
pass
@export
class BinaryExpression(Expression):
_leftOperand: Expression
_rightOperand: Expression
def __init__(self):
pass
@property
def LeftOperand(self):
return self._leftOperand
@property
def RightOperand(self):
return self._rightOperand
# AddingExpression
# MultiplyingExpression
# LogicalExpression
# ShiftExpression
@export
class Range:
_leftBound: Any
_rightBound: Any
_direction: Direction
def __init__(self):
pass
@export
class InterfaceItem(ModelEntity):
_name: str
_mode: Mode
def __init__(self, name: str, mode: Mode):
super().__init__()
self._name = name
self._mode = mode
@property
def Name(self) -> str:
return self._name
@property
def Mode(self) -> Mode:
return self._mode
@export
class GenericInterfaceItem(InterfaceItem):
pass
@export
class PortInterfaceItem(InterfaceItem):
pass
@export
class ParameterInterfaceItem(InterfaceItem):
pass
@export
class GenericConstantInterfaceItem(GenericInterfaceItem):
	_subType: SubType # FIXME: add documentation
_defaultExpression: Expression # FIXME: add documentation
@property
def SubType(self) -> SubType:
return self._subType
@property
def DefaultExpression(self) -> Expression:
return self._defaultExpression
@export
class GenericTypeInterfaceItem(GenericInterfaceItem):
pass
@export
class GenericSubprogramInterfaceItem(GenericInterfaceItem):
pass
@export
class GenericPackageInterfaceItem(GenericInterfaceItem):
pass
@export
class PortSignalInterfaceItem(PortInterfaceItem):
_subType: SubType
_defaultExpression: Expression
def __init__(self, name: str, mode: Mode):
super().__init__(name, mode)
@property
def SubType(self) -> SubType:
return self._subType
@property
def DefaultExpression(self) -> Expression:
return self._defaultExpression
@export
class ParameterConstantInterfaceItem(ParameterInterfaceItem):
pass
@export
class ParameterVariableInterfaceItem(ParameterInterfaceItem):
_subType: SubType
_mode: Mode
_defaultExpression: Expression
	def __init__(self, name: str, mode: Mode):
		super().__init__(name, mode)
@property
def SubType(self) -> SubType:
return self._subType
@property
def Mode(self) -> Mode:
return self._mode
@property
def DefaultExpression(self) -> Expression:
return self._defaultExpression
@export
class ParameterSignalInterfaceItem(ParameterInterfaceItem):
pass
@export
class ParameterFileInterfaceItem(ParameterInterfaceItem):
pass
# class GenericItem(ModelEntity):
# def __init__(self):
# super().__init__()
# self._name = None
# self._subType = None
# self._init = None
#
#
# class PortItem(ModelEntity):
# def __init__(self):
# super().__init__()
# self._name = None
# self._subType = None
# self._init = None
# self._mode = None
# self._class = None
@export
class LibraryReference(ModelEntity):
_library: Library
def __init__(self):
super().__init__()
self._library = None
@property
def Library(self) -> Library:
return self._library
@export
class Use(ModelEntity):
_library: Library
_package: 'Package'
_item: str
def __init__(self):
super().__init__()
@property
def Library(self) -> Library:
return self._library
@property
def Package(self) -> 'Package':
return self._package
@property
def Item(self) -> str:
return self._item
@export
class PrimaryUnit(ModelEntity, NamedEntity):
def __init__(self, name: str):
super().__init__()
NamedEntity.__init__(self, name)
@export
class SecondaryUnit(ModelEntity, NamedEntity):
def __init__(self, name: str):
super().__init__()
NamedEntity.__init__(self, name)
@export
class Context(PrimaryUnit):
_uses: List[Use]
	def __init__(self, name: str):
super().__init__(name)
self._uses = []
@property
def Uses(self) -> List[Use]:
return self._uses
@export
class Entity(PrimaryUnit):
_libraryReferences: List[LibraryReference]
_uses: List[Use]
_genericItems: List[GenericInterfaceItem]
_portItems: List[PortInterfaceItem]
_declaredItems: List # FIXME: define liste element type e.g. via Union
_bodyItems: List['ConcurrentStatement']
def __init__(self, name: str):
super().__init__(name)
self._libraryReferences = []
self._uses = []
self._genericItems = []
self._portItems = []
self._declaredItems = []
self._bodyItems = []
@property
def LibraryReferences(self) -> List[LibraryReference]:
return self._libraryReferences
@property
def Uses(self) -> List[Use]:
return self._uses
@property
def GenericItems(self) -> List[GenericInterfaceItem]:
return self._genericItems
@property
def PortItems(self) -> List[PortInterfaceItem]:
return self._portItems
@property
def DeclaredItems(self) -> List: # FIXME: define liste element type e.g. via Union
return self._declaredItems
@property
def BodyItems(self) -> List['ConcurrentStatement']:
return self._bodyItems
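# Illustrative sketch (not part of the model): declaring an entity with one
# generic and one port; the names and modes are assumptions for illustration.
def _exampleEntity() -> Entity:
	entity = Entity("Counter")
	entity.GenericItems.append(GenericConstantInterfaceItem("BITS", Mode.In))
	entity.PortItems.append(PortSignalInterfaceItem("Clock", Mode.In))
	return entity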
@export
class Architecture(SecondaryUnit):
_entity: Entity
_libraryReferences: List[Library]
_uses: List[Use]
_declaredItems: List # FIXME: define liste element type e.g. via Union
_bodyItems: List['ConcurrentStatement']
def __init__(self, name: str):
super().__init__(name)
self._libraryReferences = []
self._uses = []
self._declaredItems = []
self._bodyItems = []
@property
def Entity(self) -> Entity:
return self._entity
@property
def LibraryReferences(self) -> List[Library]:
return self._libraryReferences
@property
def Uses(self) -> List[Use]:
return self._uses
@property
def DeclaredItems(self) -> List: # FIXME: define liste element type e.g. via Union
return self._declaredItems
@property
def BodyItems(self) -> List['ConcurrentStatement']:
return self._bodyItems
@export
class AssociationItem(ModelEntity):
_formal: str # FIXME: defined type
_actual: Expression
def __init__(self):
super().__init__()
@property
def Formal(self): # FIXME: defined return type
return self._formal
@property
def Actual(self) -> Expression:
return self._actual
@export
class GenericAssociationItem(InterfaceItem):
pass
@export
class PortAssociationItem(InterfaceItem):
pass
@export
class ParameterAssociationItem(InterfaceItem):
pass
@export
class Configuration(ModelEntity, NamedEntity):
def __init__(self, name: str):
super().__init__()
NamedEntity.__init__(self, name)
@export
class Instantiation:
pass
@export
class Package(PrimaryUnit):
_libraryReferences: List[Library]
_uses: List[Use]
_genericItems: List[GenericInterfaceItem]
_declaredItems: List
def __init__(self, name: str):
super().__init__(name)
self._libraryReferences = []
self._uses = []
self._genericItems = []
self._declaredItems = []
@property
def LibraryReferences(self) -> List[Library]:
return self._libraryReferences
@property
def Uses(self) -> List[Use]:
return self._uses
@property
def GenericItems(self) -> List[GenericInterfaceItem]:
return self._genericItems
@property
def DeclaredItems(self) -> List:
return self._declaredItems
@export
class PackageBody(SecondaryUnit):
_package: Package
_libraryReferences: List[Library]
_uses: List[Use]
_declaredItems: List
def __init__(self, name: str):
super().__init__(name)
self._libraryReferences = []
self._uses = []
self._declaredItems = []
@property
def Package(self) -> Package:
return self._package
@property
def LibraryReferences(self) -> List[Library]:
return self._libraryReferences
@property
def Uses(self) -> List[Use]:
return self._uses
@property
def DeclaredItems(self) -> List:
return self._declaredItems
@export
class PackageInstantiation(PrimaryUnit, Instantiation):
_packageReference: Package
_genericAssociations: List[GenericAssociationItem]
def __init__(self, name: str):
super().__init__(name)
Instantiation.__init__(self)
self._genericAssociations = []
@property
def PackageReference(self) -> Package:
return self._packageReference
@property
def GenericAssociations(self) -> List[GenericAssociationItem]:
return self._genericAssociations
@export
class Object(ModelEntity, NamedEntity):
_subType: SubType
def __init__(self, name: str):
super().__init__()
NamedEntity.__init__(self, name)
@property
def SubType(self) -> SubType:
return self._subType
@export
class BaseConstant(Object):
pass
@export
class Constant(BaseConstant):
_defaultExpression: Expression
def __init__(self, name: str):
super().__init__(name)
@property
def DefaultExpression(self) -> Expression:
return self._defaultExpression
@export
class DeferredConstant(BaseConstant):
_constantReference: Constant
def __init__(self, name: str):
super().__init__(name)
@property
def ConstantReference(self) -> Constant:
return self._constantReference
@export
class Variable(Object):
_defaultExpression: Expression
def __init__(self, name: str):
super().__init__(name)
@property
def DefaultExpression(self) -> Expression:
return self._defaultExpression
@export
class Signal(Object):
_defaultExpression: Expression
def __init__(self, name: str):
super().__init__(name)
@property
def DefaultExpression(self) -> Expression:
return self._defaultExpression
@export
class SubProgramm(ModelEntity, NamedEntity):
_genericItems: List[GenericInterfaceItem]
_parameterItems: List[ParameterInterfaceItem]
_declaredItems: List
_bodyItems: List['SequentialStatement']
def __init__(self, name: str):
super().__init__()
NamedEntity.__init__(self, name)
self._genericItems = []
self._parameterItems = []
self._declaredItems = []
self._bodyItems = []
@property
def GenericItems(self) -> List[GenericInterfaceItem]:
return self._genericItems
@property
def ParameterItems(self) -> List[ParameterInterfaceItem]:
return self._parameterItems
@property
def DeclaredItems(self) -> List:
return self._declaredItems
@property
def BodyItems(self) -> List['SequentialStatement']:
return self._bodyItems
@export
class Procedure(SubProgramm):
pass
@export
class Function(SubProgramm):
_returnType: SubType
_isPure: bool = True
def __init__(self, name: str):
super().__init__(name)
@property
def ReturnType(self) -> SubType:
return self._returnType
@property
def IsPure(self) -> bool:
return self._isPure
@export
class SubprogramInstantiation(ModelEntity, Instantiation):
def __init__(self):
super().__init__()
Instantiation.__init__(self)
self._subprogramReference = None
@export
class ProcedureInstantiation(SubprogramInstantiation):
pass
@export
class FunctionInstantiation(SubprogramInstantiation):
pass
@export
class Method:
def __init__(self):
self._protectedType = None
@export
class ProcedureMethod(Procedure, Method):
def __init__(self, name: str):
super().__init__(name)
Method.__init__(self)
@export
class FunctionMethod(Function, Method):
def __init__(self, name: str):
super().__init__(name)
Method.__init__(self)
@export
class Statement(ModelEntity, LabeledEntity):
def __init__(self, label: str = None):
super().__init__()
LabeledEntity.__init__(self, label)
@export
class ConcurrentStatement(Statement):
pass
@export
class SequentialStatement(Statement):
pass
@export
class ProcessStatement(ConcurrentStatement):
_parameterItems: List[Signal]
_declaredItems: List # TODO: create a union for (concurrent / sequential) DeclaredItems
_bodyItems: List[SequentialStatement]
def __init__(self, label: str = None):
super().__init__(label=label)
self._parameterItems = []
self._declaredItems = []
self._bodyItems = []
@property
def ParameterItems(self) -> List[Signal]:
return self._parameterItems
@property
def DeclaredItems(self) -> List:
return self._declaredItems
@property
def BodyItems(self) -> List[SequentialStatement]:
return self._bodyItems
# TODO: could be unified with ProcessStatement if 'List[ConcurrentStatement]' becomes parametric to T
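# A hedged sketch of that unification (hypothetical, not part of this model):
# the shared body-item plumbing could live in one generic mixin, e.g.
#   from typing import Generic, TypeVar
#   TStmt = TypeVar('TStmt', bound=Statement)
#   class StatementBody(Generic[TStmt]):
#       _bodyItems: List[TStmt]
#       @property
#       def BodyItems(self) -> List[TStmt]:
#           return self._bodyItems
# ProcessStatement would then mix in StatementBody[SequentialStatement] and
# BlockStatement would mix in StatementBody[ConcurrentStatement].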
@export
class BlockStatement:
_declaredItems: List # TODO: create a union for (concurrent / sequential) DeclaredItems
_bodyItems: List[ConcurrentStatement]
def __init__(self):
self._declaredItems = []
self._bodyItems = []
@property
def DeclaredItems(self) -> List:
return self._declaredItems
@property
def BodyItems(self) -> List[ConcurrentStatement]:
return self._bodyItems
@export
class ConcurrentBlockStatement(ConcurrentStatement, BlockStatement):
_portItems: List[PortInterfaceItem]
def __init__(self, label: str = None):
super().__init__(label=label)
BlockStatement.__init__(self)
self._portItems = []
@property
def PortItems(self) -> List[PortInterfaceItem]:
return self._portItems
@export
class BaseConditional:
_condition: Expression
def __init__(self):
super().__init__()
@property
def Condition(self) -> Expression:
return self._condition
@export
class BaseBranch:
pass
@export
class BaseConditionalBranch(BaseBranch, BaseConditional):
def __init__(self):
super().__init__()
BaseConditional.__init__(self)
@export
class BaseIfBranch(BaseConditionalBranch):
pass
@export
class BaseElsifBranch(BaseConditionalBranch):
pass
@export
class BaseElseBranch(BaseBranch):
pass
@export
class GenerateBranch(ModelEntity):
pass
@export
class IfGenerateBranch(GenerateBranch, BaseIfBranch):
def __init__(self):
super().__init__()
BaseIfBranch.__init__(self)
@export
class ElsifGenerateBranch(GenerateBranch, BaseElsifBranch):
def __init__(self):
super().__init__()
BaseElsifBranch.__init__(self)
@export
class ElseGenerateBranch(GenerateBranch, BaseElseBranch):
def __init__(self):
super().__init__()
BaseElseBranch.__init__(self)
@export
class GenerateStatement(ConcurrentStatement):
def __init__(self, label: str = None):
super().__init__(label=label)
self._declaredItems = []
self._bodyItems = []
@property
def DeclaredItems(self):
return self._declaredItems
@property
def BodyItems(self):
return self._bodyItems
@export
class IfGenerateStatement(GenerateStatement):
_ifBranch: IfGenerateBranch
_elsifBranches: List['ElsifGenerateBranch']
_elseBranch: ElseGenerateBranch
def __init__(self, label: str = None):
super().__init__(label=label)
self._elsifBranches = []
@export
class ForGenerateStatement(GenerateStatement):
_loopIndex: Constant
_range: Range
def __init__(self, label: str = None):
super().__init__(label=label)
@property
def LoopIndex(self) -> Constant:
return self._loopIndex
@property
def Range(self) -> Range:
return self._range
# TODO: CaseGenerateStatement
# class CaseGenerateStatement(GenerateStatement):
# def __init__(self):
# super().__init__()
# self._expression = None
# self._cases = []
@export
class Assignment:
_target: Object
_expression: Expression
def __init__(self):
super().__init__()
@property
def Target(self) -> Object:
return self._target
@property
def Expression(self) -> Expression:
return self._expression
@export
class SignalAssignment(Assignment):
pass
@export
class VariableAssignment(Assignment):
pass
@export
class ConcurrentSignalAssignment(ConcurrentStatement, SignalAssignment):
def __init__(self, label: str = None):
super().__init__(label=label)
SignalAssignment.__init__(self)
@export
class SequentialSignalAssignment(SequentialStatement, SignalAssignment):
def __init__(self):
super().__init__()
SignalAssignment.__init__(self)
@export
class SequentialVariableAssignment(SequentialStatement, VariableAssignment):
def __init__(self):
super().__init__()
VariableAssignment.__init__(self)
@export
class ReportStatement:
_message: Expression
_severity: Expression
def __init__(self):
super().__init__()
@property
def Message(self) -> Expression:
return self._message
@property
def Severity(self) -> Expression:
return self._severity
@export
class AssertStatement(ReportStatement):
_condition: Expression
def __init__(self):
super().__init__()
@property
def Condition(self) -> Expression:
return self._condition
@export
class ConcurrentAssertStatement(ConcurrentStatement, AssertStatement):
def __init__(self, label: str = None):
super().__init__(label=label)
AssertStatement.__init__(self)
@export
class SequentialReportStatement(SequentialStatement, ReportStatement):
def __init__(self):
super().__init__()
ReportStatement.__init__(self)
@export
class SequentialAssertStatement(SequentialStatement, AssertStatement):
def __init__(self):
super().__init__()
AssertStatement.__init__(self)
@export
class Branch(ModelEntity):
pass
@export
class IfBranch(Branch, BaseIfBranch):
def __init__(self):
super().__init__()
BaseIfBranch.__init__(self)
@export
class ElsifBranch(Branch, BaseElsifBranch):
def __init__(self):
super().__init__()
BaseElsifBranch.__init__(self)
@export
class ElseBranch(Branch, BaseElseBranch):
def __init__(self):
super().__init__()
BaseElseBranch.__init__(self)
@export
class CompoundStatement(SequentialStatement):
_bodyItems: List[SequentialStatement]
def __init__(self):
super().__init__()
self._bodyItems = []
@property
def BodyItems(self) -> List[SequentialStatement]:
return self._bodyItems
@export
class IfStatement(CompoundStatement):
_ifBranch: IfBranch
_elsifBranches: List['ElsifBranch']
_elseBranch: ElseBranch
def __init__(self):
super().__init__()
self._elsifBranches = []
@property
def IfBranch(self) -> IfBranch:
return self._ifBranch
@property
def ElsIfBranches(self) -> List['ElsifBranch']:
return self._elsifBranches
@property
def ElseBranch(self) -> ElseBranch:
return self._elseBranch
@export
class LoopStatement(CompoundStatement):
pass
@export
class ForLoopStatement(LoopStatement):
_loopIndex: Constant
_range: Range
def __init__(self):
super().__init__()
@property
def LoopIndex(self) -> Constant:
return self._loopIndex
@property
def Range(self) -> Range:
return self._range
@export
class WhileLoopStatement(LoopStatement, BaseConditional):
def __init__(self):
super().__init__()
BaseConditional.__init__(self)
@export
class LoopControlStatement(ModelEntity, BaseConditional):
_loopReference: LoopStatement
def __init__(self):
super().__init__()
BaseConditional.__init__(self)
@property
def LoopReference(self) -> LoopStatement:
return self._loopReference
@export
class NextStatement(LoopControlStatement):
pass
@export
class ExitStatement(LoopControlStatement):
pass
@export
class ReturnStatement(SequentialStatement):
pass
| 20.244788 | 101 | 0.710171 | 3,182 | 30,104 | 6.384978 | 0.118793 | 0.06497 | 0.041689 | 0.022887 | 0.484668 | 0.453463 | 0.428607 | 0.391249 | 0.356647 | 0.336319 | 0 | 0.001886 | 0.172137 | 30,104 | 1,486 | 102 | 20.258412 | 0.813338 | 0.198811 | 0 | 0.688161 | 0 | 0 | 0.020302 | 0 | 0 | 0 | 0 | 0.002019 | 0.005285 | 1 | 0.184989 | false | 0.048626 | 0.004228 | 0.090909 | 0.553911 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ebb13a34f8127fcf23ca00f30833fd6aeea18caf | 7,778 | py | Python | dabble/nn.py | bibarz/bibarz.github.io | 354f9dc8484a745737230f59524966d5452dd818 | [
"MIT"
] | null | null | null | dabble/nn.py | bibarz/bibarz.github.io | 354f9dc8484a745737230f59524966d5452dd818 | [
"MIT"
] | null | null | null | dabble/nn.py | bibarz/bibarz.github.io | 354f9dc8484a745737230f59524966d5452dd818 | [
"MIT"
] | null | null | null | import collections
import math
import cv2
import scipy.weave
import numpy as np
import time
import os
import random
class Atan(object):
@classmethod
def fwd(cls, x):
return np.arctan(x)
@classmethod
def derivative(cls, x):
return 1./ (1 + x ** 2.)
class Tanh(object):
@classmethod
def fwd(cls, x):
return np.tanh(x)
@classmethod
def derivative(cls, x):
return 1 - np.tanh(x) ** 2
class Relu(object):
@classmethod
def fwd(cls, x):
return x * (x > 0)
@classmethod
def derivative(cls, x):
return (x > 0).astype(np.float)
class NN(object):
def __init__(self, sizes, types, init_w_scale = 0.1, init_b_scale = 0.1,
eta = 0.1, momentum = 0.):
assert len(types) == len(sizes) - 1
self._sizes = sizes
self._types = types
self._init_w_scale = init_w_scale
self._init_b_scale = init_b_scale
self._eta = eta
self._momentum = momentum
self._initialize()
def _initialize(self):
self._w = [self._init_w_scale * np.random.randn(self._sizes[i], self._sizes[i + 1]) / np.sqrt(self._sizes[i])
for i in range(len(self._sizes) - 1)]
self._b = [self._init_b_scale * np.random.randn(self._sizes[i + 1])
for i in range(len(self._sizes) - 1)]
self._dw = [np.zeros((self._sizes[i], self._sizes[i + 1]))
for i in range(len(self._sizes) - 1)]
self._db = [np.zeros(self._sizes[i + 1])
for i in range(len(self._sizes) - 1)]
def train(self, batch, teacher):
assert batch.shape[1:] == tuple(self._sizes[0:1])
i = [] # layer inputs
o = [batch] # layer outputs
for k in range(len(self._w)):
i.append(np.dot(o[-1], self._w[k]) + self._b[k])
o.append(self._types[k].fwd(i[-1]))
e = teacher - o[-1]
self._last_gb = []
self._last_gw = []
for k in range(len(self._w))[::-1]:
e *= self._types[k].derivative(i[k]) # backprop the error across the neurons
self._last_gb = [np.mean(e, axis=0)] + self._last_gb # prepending, not adding
self._last_gw = [np.mean(o[k][..., None] * e[:, None, :], axis=0)] + self._last_gw # prepending, not adding
e = np.dot(e, self._w[k].T) # backprop the error across the bridge
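# Parameter update below: keep an exponential moving average of the
# batch-mean gradients, d = momentum * d + (1 - momentum) * g, then
# step the parameters by w += eta * d (and likewise for the biases).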
for k, (db, dw) in enumerate(zip(self._last_gb, self._last_gw)):
self._db[k] = self._momentum * self._db[k] + (1 - self._momentum) * db
self._dw[k] = self._momentum * self._dw[k] + (1 - self._momentum) * dw
self._b[k] += self._eta * self._db[k]
self._w[k] += self._eta * self._dw[k]
def predict(self, batch):
assert batch.shape[1:] == tuple(self._sizes[0:1])
o = batch # layer outputs
for k in range(len(self._w)):
i = np.dot(o, self._w[k]) + self._b[k]
o = self._types[k].fwd(i)
return o
def copy(self):
c = NN([s for s in self._sizes], [t for t in self._types],
self._init_w_scale, self._init_b_scale, self._eta, self._momentum)
c._w = [w.copy() for w in self._w]
c._b = [b.copy() for b in self._b]
c._dw = [dw.copy() for dw in self._dw]
c._db = [db.copy() for db in self._db]
return c
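# Minimal usage sketch (hypothetical shapes/values; the tests below exercise
# the same API):
# net = NN([4, 8, 1], [Tanh, Tanh], eta=0.5, momentum=0.9)
# x = np.random.random((16, 4)) # batch of 16 four-feature samples
# t = np.ones((16, 1)) # teacher signal, one target per sample
# net.train(x, t) # single batched gradient step
# y = net.predict(x) # forward pass only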
def test_gradients():
sizes = [5, 3, 2]
eps = 1e-9
n = NN(sizes, [Atan] * (len(sizes) - 1), init_w_scale=0.1)
w = [x.copy() for x in n._w]
b = [x.copy() for x in n._b]
input = np.random.random((1, sizes[0]))
output = np.ones((1, sizes[-1]))
e0 = 0.5 * np.sum((n.predict(input) - output) ** 2)
n.train(input, output)
gb = [x.copy() for x in n._last_gb]
gw = [x.copy() for x in n._last_gw]
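# Note: _last_gw/_last_gb store the ascent direction on (teacher - output),
# which train() *adds* to the parameters, so each component should match the
# negated finite-difference slope of the squared error, i.e. (e0 - e1) / eps.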
for k in range(len(sizes) - 1):
for i in range(sizes[k]):
for j in range(sizes[k + 1]):
w_prime = w[k].copy()
w_prime[i, j] += eps
n._w = [x for x in w]
n._w[k] = w_prime
n._b = b
e1 = 0.5 * np.sum((n.predict(input) - output) ** 2)
assert np.allclose(gw[k][i, j], (e0 - e1) / eps, rtol=1e-3)
for j in range(sizes[k + 1]):
b_prime = b[k].copy()
b_prime[j] += eps
n._b = [x for x in b]
n._b[k] = b_prime
n._w = w
e1 = 0.5 * np.sum((n.predict(input) - output) ** 2)
assert np.allclose(gb[k][j], (e0 - e1) / eps, rtol=1e-3)
def test_train():
# Test that we learn correctly to predict two outputs, each latched to
# one particular input, with opposite signs
sizes = [10, 5, 2]
n_samples = 200
batch_size = 50
training_input = np.random.random((n_samples, sizes[0]))
for j in range(sizes[0]):
n = NN(sizes, [Tanh] * (len(sizes) - 1), init_w_scale=0.1, eta=0.5, momentum=0.95)
training_output = np.vstack((0.999 * ((training_input[:, j] > 0.5) * 2 - 1),
0.999 * ((training_input[:, (j + sizes[0] / 2) % sizes[0]] < 0.5) * 2 - 1))).T
e0 = np.sqrt(np.mean((n.predict(training_input) - training_output) ** 2, axis=0))
for _ in range(1000):
for begin in range(0, n_samples, batch_size):
end = min(n_samples, begin + batch_size)
n.train(training_input[begin:end], training_output[begin:end])
e1 = np.sqrt(np.mean((n.predict(training_input) - training_output) ** 2, axis=0))
# with these parameters (batching and momentum are particularly useful),
# training should reduce error to about 5% of the initial
assert np.all(e1 < e0 * 0.1)
def test_copy():
sizes = [10, 5, 2]
n_samples = 200
batch_size = 50
training_input = np.random.random((n_samples, sizes[0]))
n = NN(sizes, [Tanh] * (len(sizes) - 1), init_w_scale=0.1, eta=0.5, momentum=0.95)
training_output = np.vstack((0.999 * ((training_input[:, 0] > 0.5) * 2 - 1),
0.999 * ((training_input[:, sizes[0] / 2] < 0.5) * 2 - 1))).T
n_out_0 = n.predict(training_input)
cn = n.copy()
cn_out_0 = cn.predict(training_input)
assert np.array_equal(n_out_0, cn_out_0)
# do some training of the original network
for _ in range(10):
for begin in range(0, n_samples, batch_size):
end = min(n_samples, begin + batch_size)
n.train(training_input[begin:end], training_output[begin:end])
# the original network has changed
n_out_1 = n.predict(training_input)
assert not np.array_equal(n_out_0, n_out_1)
# but the copy is intact
assert np.array_equal(cn_out_0, cn.predict(training_input))
# Now do the exact same training on the copy
for _ in range(10):
for begin in range(0, n_samples, batch_size):
end = min(n_samples, begin + batch_size)
cn.train(training_input[begin:end], training_output[begin:end])
cn_out_1 = cn.predict(training_input)
# now the output of the copy is the same as the output of the original
assert np.array_equal(n_out_1, cn_out_1)
# Train some more and check they still go together
for _ in range(10):
for begin in range(0, n_samples, batch_size):
end = min(n_samples, begin + batch_size)
n.train(training_input[begin:end], training_output[begin:end])
cn.train(training_input[begin:end], training_output[begin:end])
assert np.array_equal(n.predict(training_input), cn.predict(training_input))
if __name__ == "__main__":
test = True
if test:
test_gradients()
test_train()
test_copy()
print "Passed all tests!"
| 38.315271 | 120 | 0.564027 | 1,216 | 7,778 | 3.421875 | 0.143092 | 0.033646 | 0.043259 | 0.023552 | 0.57318 | 0.502043 | 0.465032 | 0.376352 | 0.327085 | 0.298005 | 0 | 0.033418 | 0.292106 | 7,778 | 202 | 121 | 38.504951 | 0.722303 | 0.084469 | 0 | 0.297619 | 0 | 0 | 0.00352 | 0 | 0 | 0 | 0 | 0 | 0.065476 | 0 | null | null | 0.005952 | 0.047619 | null | null | 0.005952 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebb3d6fab765a0117b5d480ea9aba0880f2a0728 | 4,900 | py | Python | api/robot_abstract.py | alexisvincent/downy | 55f18e603c387dbd543f85eb07098e8f170fc93c | [
"Apache-2.0"
] | null | null | null | api/robot_abstract.py | alexisvincent/downy | 55f18e603c387dbd543f85eb07098e8f170fc93c | [
"Apache-2.0"
] | null | null | null | api/robot_abstract.py | alexisvincent/downy | 55f18e603c387dbd543f85eb07098e8f170fc93c | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python2.4
#
# Copyright 2009 Google Inc. All Rights Reserved.
"""Defines the generic robot classes.
This module provides the Robot class and RobotListener interface,
as well as some helper functions for web requests and responses.
"""
__author__ = 'davidbyttow@google.com (David Byttow)'
import events
import model
import ops
import simplejson
import util
def ParseJSONBody(json_body):
"""Parse a JSON string and return a context and an event list."""
json = simplejson.loads(json_body)
# TODO(davidbyttow): Remove this once no longer needed.
data = util.CollapseJavaCollections(json)
context = ops.CreateContext(data)
event_list = [model.CreateEvent(event_data) for event_data in data['events']]
return context, event_list
def SerializeContext(context, version):
"""Return a JSON string representing the given context."""
context_dict = util.Serialize(context)
context_dict['version'] = version
return simplejson.dumps(context_dict)
class Robot(object):
"""Robot metadata class.
This class holds on to basic robot information like the name and profile.
It also maintains the list of event handlers and cron jobs and
dispatches events to the appropriate handlers.
"""
def __init__(self, name, version, image_url='', profile_url=''):
"""Initializes self with robot information."""
self._handlers = {}
self.name = name
self.version = version
self.image_url = image_url
self.profile_url = profile_url
self.cron_jobs = []
def RegisterListener(self, listener):
"""Registers all event handlers exported by the given object.
Args:
listener: an object with methods corresponding to wave events.
Methods should be named either in camel case, e.g. 'OnBlipSubmitted',
or in lowercase, e.g. 'on_blip_submitted', with names corresponding
to the event names in the events module.
"""
for event in dir(events):
if event.startswith('_'):
continue
lowercase_method_name = 'on_' + event.lower()
camelcase_method_name = 'On' + util.ToUpperCamelCase(event)
if hasattr(listener, lowercase_method_name):
handler = getattr(listener, lowercase_method_name)
elif hasattr(listener, camelcase_method_name):
handler = getattr(listener, camelcase_method_name)
else:
continue
if callable(handler):
self.RegisterHandler(event, handler)
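# Hedged illustration of the convention above (hypothetical listener class;
# either naming style is picked up by RegisterListener):
# class MyListener(object):
#   def on_blip_submitted(self, properties, context):  # lowercase style
#     ...
#   def OnWaveletSelfAdded(self, properties, context):  # camel-case style
#     ...
# robot.RegisterListener(MyListener())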
def RegisterHandler(self, event_type, handler):
"""Registers a handler on a specific event type.
Multiple handlers may be registered on a single event type and are
guaranteed to be called in order.
The handler takes two arguments, the event properties and the Context of
this session. For example:
def OnParticipantsChanged(properties, context):
pass
Args:
event_type: An event type to listen for.
handler: A function handler which takes two arguments, event properties
and the Context of this session.
"""
self._handlers.setdefault(event_type, []).append(handler)
def RegisterCronJob(self, path, seconds):
"""Registers a cron job to surface in capabilities.xml."""
self.cron_jobs.append((path, seconds))
def HandleEvent(self, event, context):
"""Calls all of the handlers associated with an event."""
for handler in self._handlers.get(event.type, []):
# TODO(jacobly): pass the event in to the handlers directly
# instead of passing the properties dictionary.
handler(event.properties, context)
def GetCapabilitiesXml(self):
"""Return this robot's capabilities as an XML string."""
lines = ['<w:version>%s</w:version>' % self.version]
lines.append('<w:capabilities>')
for capability in self._handlers:
lines.append(' <w:capability name="%s"/>' % capability)
lines.append('</w:capabilities>')
if self.cron_jobs:
lines.append('<w:crons>')
for job in self.cron_jobs:
lines.append(' <w:cron path="%s" timerinseconds="%s"/>' % job)
lines.append('</w:crons>')
robot_attrs = ' name="%s"' % self.name
if self.image_url:
robot_attrs += ' imageurl="%s"' % self.image_url
if self.profile_url:
robot_attrs += ' profileurl="%s"' % self.profile_url
lines.append('<w:profile%s/>' % robot_attrs)
return ('<?xml version="1.0"?>\n'
'<w:robot xmlns:w="http://wave.google.com/extensions/robots/1.0">\n'
'%s\n</w:robot>\n') % ('\n'.join(lines))
def GetProfileJson(self):
"""Returns JSON body for any profile handler.
Returns:
String of JSON to be sent as a response.
"""
data = {}
data['name'] = self.name
data['imageUrl'] = self.image_url
data['profileUrl'] = self.profile_url
# TODO(davidbyttow): Remove this java nonsense.
data['javaClass'] = 'com.google.wave.api.ParticipantProfile'
return simplejson.dumps(data)
| 33.793103 | 80 | 0.686939 | 639 | 4,900 | 5.173709 | 0.319249 | 0.019056 | 0.025408 | 0.015124 | 0.058681 | 0.039322 | 0.024803 | 0.024803 | 0 | 0 | 0 | 0.00256 | 0.202857 | 4,900 | 144 | 81 | 34.027778 | 0.84383 | 0.375714 | 0 | 0.027778 | 0 | 0.013889 | 0.148416 | 0.036501 | 0 | 0 | 0 | 0.020833 | 0 | 1 | 0.125 | false | 0 | 0.069444 | 0 | 0.263889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebc3f12d5c99d4e6a1f3086f75aadcab92ac7ac3 | 357 | py | Python | setup.py | tradr-project/doxygen_catkin | 12e5592b62830ddc6c4a4ca8612310f55ac97e4e | [
"BSD-3-Clause"
] | 5 | 2018-01-15T08:25:39.000Z | 2022-03-07T01:03:50.000Z | setup.py | jackiecx/doxygen_catkin | 12e5592b62830ddc6c4a4ca8612310f55ac97e4e | [
"BSD-3-Clause"
] | 1 | 2021-08-31T04:00:09.000Z | 2021-08-31T04:00:09.000Z | setup.py | jackiecx/doxygen_catkin | 12e5592b62830ddc6c4a4ca8612310f55ac97e4e | [
"BSD-3-Clause"
] | 14 | 2015-08-11T07:29:20.000Z | 2022-03-24T08:30:05.000Z |
## ! DO NOT MANUALLY INVOKE THIS setup.py, USE CATKIN INSTEAD
from distutils.core import setup
from catkin_pkg.python_setup import generate_distutils_setup
# fetch values from package.xml
setup_args = generate_distutils_setup(
packages=['doxygen_catkin'],
package_dir={'':'bin'},
scripts=['bin/doxygen-catkin-filegen']
)
setup(**setup_args)
| 23.8 | 61 | 0.756303 | 48 | 357 | 5.416667 | 0.583333 | 0.130769 | 0.169231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134454 | 357 | 14 | 62 | 25.5 | 0.841424 | 0.246499 | 0 | 0 | 1 | 0 | 0.162879 | 0.098485 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebca78cf85e23623d8ce8975f718a8c0042b6089 | 1,030 | py | Python | setup.py | garethtilley/referredby | 5c6a52f17efc7ffd618078458844d456eb8b6683 | [
"0BSD"
] | null | null | null | setup.py | garethtilley/referredby | 5c6a52f17efc7ffd618078458844d456eb8b6683 | [
"0BSD"
] | null | null | null | setup.py | garethtilley/referredby | 5c6a52f17efc7ffd618078458844d456eb8b6683 | [
"0BSD"
] | null | null | null | # -*- coding: utf-8 -*-
#
# setup.py
# referredby
#
"""
Packaging for the referredby project.
"""
from setuptools import setup
VERSION = '0.1.3'
setup(
name='referredby',
description="Parsing referrer URLS for common search engines.",
url="http://github.com/larsyencken/referredby/",
version=VERSION,
author="Lars Yencken",
author_email="lars@yencken.org",
license="ISC",
long_description=open('README.rst').read(),
packages=[
'referredby',
],
test_suite='test_referredby',
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: ISC License (ISCL)',
'Natural Language :: English',
"Programming Language :: Python :: 2",
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
],
)
| 25.121951 | 67 | 0.6 | 105 | 1,030 | 5.847619 | 0.609524 | 0.185668 | 0.2443 | 0.127036 | 0.087948 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01928 | 0.24466 | 1,030 | 40 | 68 | 25.75 | 0.769923 | 0.079612 | 0 | 0.071429 | 0 | 0 | 0.556624 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035714 | 0 | 0.035714 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebcc6309f0776662cef038525e082e3a9f063621 | 2,795 | py | Python | handlers/admins_tools_handl.py | bbt-t/Yuuko | 6b0d7862b14fe739be52d87ff8c8610a3f4548e1 | [
"Apache-2.0"
] | null | null | null | handlers/admins_tools_handl.py | bbt-t/Yuuko | 6b0d7862b14fe739be52d87ff8c8610a3f4548e1 | [
"Apache-2.0"
] | null | null | null | handlers/admins_tools_handl.py | bbt-t/Yuuko | 6b0d7862b14fe739be52d87ff8c8610a3f4548e1 | [
"Apache-2.0"
] | null | null | null | from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters import Command
from aiogram.types import Message, CallbackQuery
from sqlalchemy.exc import NoResultFound
from loader import dp, logger_guru
from utils.database_manage.sql.sql_commands import DB_USERS
from utils.keyboards.admins_tools_kb import tools_choice_kb
from utils.misc.notify_users import send_a_message_to_all_users
@dp.message_handler(Command('admin_tools'))
async def go_to_admin_panel(message: Message, state: FSMContext) -> None:
lang: str = await DB_USERS.select_bot_language(telegram_id=message.from_user.id)
await message.answer(
'Чего изволите?' if lang == 'ru' else 'What would you like?', reply_markup=tools_choice_kb
)
await state.set_state('admin_in_action')
async with state.proxy() as data:
data['lang']: str = lang
@dp.callback_query_handler(text={'reset_user_codeword', 'make_newsletter'}, state='admin_in_action')
async def choose_an_action(call: CallbackQuery, state: FSMContext) -> None:
async with state.proxy() as data:
lang: str = data.get('lang')
if call.data == 'reset_user_codeword':
await call.message.answer('id пользователя?' if lang == 'ru' else 'user id?')
await state.set_state('accept_user_id')
elif call.data == 'make_newsletter':
await call.message.answer('текст рассылки?' if lang == 'ru' else 'mailing text?')
await state.set_state('receiving_mailing_text')
await call.message.delete_reply_markup()
@dp.message_handler(state='accept_user_id')
async def take_user_id(message: Message, state: FSMContext) -> None:
async with state.proxy() as data:
lang: str = data.get('lang')
try:
if await DB_USERS.check_personal_pass(telegram_id=message.text):
await DB_USERS.update_personal_pass(telegram_id=message.text, personal_pass=None)
await message.answer('СДЕЛАНО!' if lang == 'ru' else 'MADE!')
except NoResultFound:
logger_guru.exception('Failed attempt to reset the code word!')
await message.reply(
'Что-то пошло не так...смотри логи' if lang == 'ru' else 'Something went wrong...look at the logs'
)
finally:
await state.finish()
@dp.message_handler(state='receiving_mailing_text')
async def make_newsletter_to_all_users(message: Message, state: FSMContext) -> None:
async with state.proxy() as data:
lang: str = data.get('lang')
if not (result := await send_a_message_to_all_users(msg=message.text)):
await message.answer('Готово!' if lang == 'ru' else 'YAHOO!')
else:
await message.answer(
f'Что-то пошло не так: {result}' if lang == 'ru' else
f'Something went wrong: {result}'
)
await state.finish()
| 40.507246 | 110 | 0.703041 | 390 | 2,795 | 4.841026 | 0.323077 | 0.022246 | 0.029661 | 0.044492 | 0.242055 | 0.18697 | 0.115466 | 0.115466 | 0.115466 | 0.115466 | 0 | 0 | 0.186404 | 2,795 | 68 | 111 | 41.102941 | 0.830255 | 0 | 0 | 0.2 | 0 | 0 | 0.176029 | 0.015742 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.036364 | 0.145455 | 0 | 0.145455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebce968d2e85b503c8dd92b078bf65aef8b7b726 | 384 | py | Python | accounts/migrations/0004_auto_20200102_1952.py | venieri/cancan | 5a10d6aa9911733e78eee062025a7b1af96e6167 | [
"CC0-1.0"
] | null | null | null | accounts/migrations/0004_auto_20200102_1952.py | venieri/cancan | 5a10d6aa9911733e78eee062025a7b1af96e6167 | [
"CC0-1.0"
] | null | null | null | accounts/migrations/0004_auto_20200102_1952.py | venieri/cancan | 5a10d6aa9911733e78eee062025a7b1af96e6167 | [
"CC0-1.0"
] | null | null | null | # Generated by Django 3.0.1 on 2020-01-02 19:52
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('accounts', '0003_auto_20200102_1847'),
]
operations = [
migrations.RenameField(
model_name='account',
old_name='addreed_to_rules',
new_name='agreed_to_rules',
),
]
| 20.210526 | 48 | 0.606771 | 43 | 384 | 5.186047 | 0.813953 | 0.06278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113139 | 0.286458 | 384 | 18 | 49 | 21.333333 | 0.70073 | 0.117188 | 0 | 0 | 1 | 0 | 0.204748 | 0.068249 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebd0e053d31d6fa50f5235449f18875a98067924 | 1,225 | py | Python | lixian_commands/readd.py | ntkrnl/xunlei-lixian | ea418bf58671d727a66bc395f9407233b8346d4a | [
"MIT"
] | 1 | 2021-01-26T17:28:36.000Z | 2021-01-26T17:28:36.000Z | lixian_commands/readd.py | ntkrnl/xunlei-lixian | ea418bf58671d727a66bc395f9407233b8346d4a | [
"MIT"
] | null | null | null | lixian_commands/readd.py | ntkrnl/xunlei-lixian | ea418bf58671d727a66bc395f9407233b8346d4a | [
"MIT"
] | null | null | null | from lixian_commands.util import *
from lixian_cli_parser import *
from lixian_encoding import default_encoding
import lixian_help
import lixian_query
@command_line_parser(help=lixian_help.readd)
@with_parser(parse_login)
@with_parser(parse_logging)
@command_line_option('deleted')
@command_line_option('expired')
@command_line_option('all')
def readd_task(args):
if args.deleted:
status = 'deleted'
elif args.expired:
status = 'expired'
else:
raise NotImplementedError('Please use --expired or --deleted')
client = create_client(args)
if status == 'expired' and args.all:
return client.readd_all_expired_tasks()
to_readd = lixian_query.search_tasks(client, args)
non_bt = []
bt = []
if not to_readd:
return
print "Below files are going to be re-added:"
for x in to_readd:
print x['name'].encode(default_encoding)
if x['type'] == 'bt':
bt.append((x['bt_hash'], x['id']))
else:
non_bt.append((x['original_url'], x['id']))
if non_bt:
urls, ids = zip(*non_bt)
client.add_batch_tasks(urls, ids)
for hash, id in bt:
client.add_torrent_task_by_info_hash2(hash, id)
| 29.878049 | 70 | 0.66449 | 171 | 1,225 | 4.502924 | 0.421053 | 0.057143 | 0.066234 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001047 | 0.220408 | 1,225 | 40 | 71 | 30.625 | 0.805236 | 0 | 0 | 0.052632 | 0 | 0 | 0.115102 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.131579 | null | null | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebd1ec0dc2b71198cfd1f63f4c0430eed11130da | 8,438 | py | Python | CalculationUI.py | atharvaagrawal/analysis-of-NSE | 578020878f66478967dd91782fb9f4f01c815431 | [
"MIT"
] | 4 | 2019-07-04T16:34:15.000Z | 2021-11-23T03:15:35.000Z | CalculationUI.py | atharvaagrawal/analysis-of-NSE | 578020878f66478967dd91782fb9f4f01c815431 | [
"MIT"
] | null | null | null | CalculationUI.py | atharvaagrawal/analysis-of-NSE | 578020878f66478967dd91782fb9f4f01c815431 | [
"MIT"
] | 2 | 2021-07-04T15:09:29.000Z | 2021-11-12T04:06:21.000Z | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'Calculation.ui'
#
# Created by: PyQt5 UI code generator 5.9.2
#
# WARNING! All changes made in this file will be lost!
from Calculation.CalculatingLast5Days import CalculatingLast5Days
from PyQt5 import QtCore, QtGui, QtWidgets
class CalculationUi_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(933, 444)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.lblHeading = QtWidgets.QLabel(self.centralwidget)
self.lblHeading.setGeometry(QtCore.QRect(10, 10, 909, 109))
font = QtGui.QFont()
font.setPointSize(22)
font.setBold(True)
font.setWeight(75)
self.lblHeading.setFont(font)
self.lblHeading.setStyleSheet("background-color: rgb(140, 244, 255)")
self.lblHeading.setFrameShape(QtWidgets.QFrame.NoFrame)
self.lblHeading.setAlignment(QtCore.Qt.AlignCenter)
self.lblHeading.setObjectName("lblHeading")
self.horizontalLayoutWidget = QtWidgets.QWidget(self.centralwidget)
self.horizontalLayoutWidget.setGeometry(QtCore.QRect(10, 130, 911, 301))
self.horizontalLayoutWidget.setObjectName("horizontalLayoutWidget")
self.horizontalLayout_2 = QtWidgets.QHBoxLayout(self.horizontalLayoutWidget)
self.horizontalLayout_2.setContentsMargins(0, 0, 0, 0)
self.horizontalLayout_2.setObjectName("horizontalLayout_2")
self.horizontalLayout_6 = QtWidgets.QHBoxLayout()
self.horizontalLayout_6.setObjectName("horizontalLayout_6")
self.verticalLayout_6 = QtWidgets.QVBoxLayout()
self.verticalLayout_6.setObjectName("verticalLayout_6")
self.btnlast5days = QtWidgets.QPushButton(self.horizontalLayoutWidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.btnlast5days.sizePolicy().hasHeightForWidth())
self.btnlast5days.setSizePolicy(sizePolicy)
font = QtGui.QFont()
font.setFamily("Times New Roman")
font.setPointSize(14)
font.setBold(True)
font.setWeight(75)
self.btnlast5days.setFont(font)
self.btnlast5days.setStyleSheet("")
self.btnlast5days.setObjectName("btnlast5days")
self.verticalLayout_6.addWidget(self.btnlast5days)
self.btnlast2 = QtWidgets.QPushButton(self.horizontalLayoutWidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.btnlast2.sizePolicy().hasHeightForWidth())
self.btnlast2.setSizePolicy(sizePolicy)
font = QtGui.QFont()
font.setFamily("Times New Roman")
font.setPointSize(14)
font.setBold(True)
font.setItalic(False)
font.setWeight(75)
self.btnlast2.setFont(font)
self.btnlast2.setStyleSheet("")
self.btnlast2.setObjectName("btnlast2")
self.verticalLayout_6.addWidget(self.btnlast2)
self.btnNifty50FromNiftyAll_3 = QtWidgets.QPushButton(self.horizontalLayoutWidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.btnNifty50FromNiftyAll_3.sizePolicy().hasHeightForWidth())
self.btnNifty50FromNiftyAll_3.setSizePolicy(sizePolicy)
font = QtGui.QFont()
font.setFamily("Times New Roman")
font.setPointSize(14)
font.setBold(True)
font.setItalic(False)
font.setWeight(75)
self.btnNifty50FromNiftyAll_3.setFont(font)
self.btnNifty50FromNiftyAll_3.setStyleSheet("")
self.btnNifty50FromNiftyAll_3.setObjectName("btnNifty50FromNiftyAll_3")
self.verticalLayout_6.addWidget(self.btnNifty50FromNiftyAll_3)
self.horizontalLayout_6.addLayout(self.verticalLayout_6)
self.horizontalLayout_2.addLayout(self.horizontalLayout_6)
self.horizontalLayout_7 = QtWidgets.QHBoxLayout()
self.horizontalLayout_7.setObjectName("horizontalLayout_7")
self.verticalLayout_7 = QtWidgets.QVBoxLayout()
self.verticalLayout_7.setObjectName("verticalLayout_7")
self.btnlast1 = QtWidgets.QPushButton(self.horizontalLayoutWidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.btnlast1.sizePolicy().hasHeightForWidth())
self.btnlast1.setSizePolicy(sizePolicy)
font = QtGui.QFont()
font.setFamily("Times New Roman")
font.setPointSize(14)
font.setBold(True)
font.setItalic(False)
font.setWeight(75)
self.btnlast1.setFont(font)
self.btnlast1.setStyleSheet("")
self.btnlast1.setObjectName("btnlast1")
self.verticalLayout_7.addWidget(self.btnlast1)
self.btnlast4 = QtWidgets.QPushButton(self.horizontalLayoutWidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.btnlast4.sizePolicy().hasHeightForWidth())
self.btnlast4.setSizePolicy(sizePolicy)
font = QtGui.QFont()
font.setFamily("Times New Roman")
font.setPointSize(14)
font.setBold(True)
font.setItalic(False)
font.setWeight(75)
self.btnlast4.setFont(font)
self.btnlast4.setStyleSheet("")
self.btnlast4.setObjectName("btnlast4")
self.verticalLayout_7.addWidget(self.btnlast4)
self.btnlast5 = QtWidgets.QPushButton(self.horizontalLayoutWidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.btnlast5.sizePolicy().hasHeightForWidth())
self.btnlast5.setSizePolicy(sizePolicy)
font = QtGui.QFont()
font.setFamily("Times New Roman")
font.setPointSize(14)
font.setBold(True)
font.setItalic(False)
font.setWeight(75)
self.btnlast5.setFont(font)
self.btnlast5.setStyleSheet("")
self.btnlast5.setObjectName("btnlast5")
self.verticalLayout_7.addWidget(self.btnlast5)
self.horizontalLayout_7.addLayout(self.verticalLayout_7)
self.horizontalLayout_2.addLayout(self.horizontalLayout_7)
MainWindow.setCentralWidget(self.centralwidget)
# Calling Function
self.objCalculation = CalculatingLast5Days()
self.btnlast5days.clicked.connect(self.objCalculation.calculatingData)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.lblHeading.setText(_translate("MainWindow", "Calulation of NSE Data"))
self.btnlast5days.setText(_translate("MainWindow", "Last 5 Days"))
self.btnlast2.setText(_translate("MainWindow", "Last "))
self.btnNifty50FromNiftyAll_3.setText(_translate("MainWindow", "Last "))
self.btnlast1.setText(_translate("MainWindow", "Last"))
self.btnlast4.setText(_translate("MainWindow", "Last "))
self.btnlast5.setText(_translate("MainWindow", "Last "))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = CalculationUi_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
| 49.05814 | 109 | 0.702892 | 756 | 8,438 | 7.767196 | 0.190476 | 0.061308 | 0.054326 | 0.021458 | 0.439884 | 0.384026 | 0.366996 | 0.358651 | 0.358651 | 0.342302 | 0 | 0.028567 | 0.199336 | 8,438 | 171 | 110 | 49.345029 | 0.840586 | 0.023821 | 0 | 0.354839 | 1 | 0 | 0.060817 | 0.005709 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012903 | false | 0 | 0.019355 | 0 | 0.03871 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebd43551b7fa5af4e421313a7473ebde193df318 | 1,094 | py | Python | cute.py | eight04/node_vm2 | 08e7655b477f95cac67edb8c7e510cfd0b533f5a | [
"MIT"
] | 42 | 2017-07-11T16:25:35.000Z | 2022-03-25T04:22:08.000Z | cute.py | eight04/node_vm2 | 08e7655b477f95cac67edb8c7e510cfd0b533f5a | [
"MIT"
] | 34 | 2017-10-16T12:50:55.000Z | 2022-03-28T17:26:00.000Z | cute.py | eight04/node_vm2 | 08e7655b477f95cac67edb8c7e510cfd0b533f5a | [
"MIT"
] | 6 | 2019-03-05T06:52:16.000Z | 2021-12-01T09:35:28.000Z | #! python3
from xcute import cute, LiveReload
cute(
pkg_name = "node_vm2",
lint = [
'cd node_vm2/vm-server && npm test && cd ..',
'pylint {pkg_name}'
],
test = ['lint', 'python test.py', 'readme_build'],
bump_pre = 'test',
bump_post = ['dist', 'release', 'publish', 'install'],
dist_pre = 'x-clean build dist',
dist = 'python setup.py sdist bdist_wheel',
release = [
'git add .',
'git commit -m "Release v{version}"',
'git tag -a v{version} -m "Release v{version}"'
],
publish = [
'twine upload dist/*',
'git push --follow-tags'
],
publish_err = 'start https://pypi.python.org/pypi/{pkg_name}/',
install = 'pip install -e .',
readme_build = [
'python setup.py --long-description | x-pipe build/readme/index.rst',
'rst2html5.py --no-raw --exit-status=1 --verbose '
'build/readme/index.rst build/readme/index.html'
],
readme_pre = "readme_build",
readme = LiveReload("README.rst", "readme_build", "build/readme"),
doc_build = "sphinx-build docs build/docs",
doc_pre = "doc_build",
doc = LiveReload(["{pkg_name}", "docs"], "doc_build", "build/docs")
)
| 28.789474 | 71 | 0.645338 | 153 | 1,094 | 4.48366 | 0.457516 | 0.080175 | 0.069971 | 0.046647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006529 | 0.159963 | 1,094 | 37 | 72 | 29.567568 | 0.739935 | 0.008227 | 0 | 0.117647 | 0 | 0 | 0.594096 | 0.061808 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.029412 | 0 | 0.029412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebd79d4bda2a0d97bcf4306ed7b808cf533646e1 | 931 | py | Python | ex17.py | mparikh15/python_exercises | a1b19aa772eab1f88b756557b66830c15481c7cc | [
"MIT"
] | null | null | null | ex17.py | mparikh15/python_exercises | a1b19aa772eab1f88b756557b66830c15481c7cc | [
"MIT"
] | null | null | null | ex17.py | mparikh15/python_exercises | a1b19aa772eab1f88b756557b66830c15481c7cc | [
"MIT"
] | null | null | null | from sys import argv
from os.path import exists
script, from_file, to_file = argv # takes script, origin and export file as inputs
print "Copying from %s to %s" % (from_file, to_file) # Just saying what's going on
# we could do these two on one line, how?
indata = open(from_file).read() # Made it shorter, by directly reading into new var
#indata = in_file.read() #indata stores all the info in the file !!! Commented out, redundant after above line
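# (For the curious: the whole copy could even be one line, at the cost of
# clarity: open(to_file, 'w').write(open(from_file).read()) )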
print "The input file is %d bytes long" % len(indata) # prints out len
print "Does the output file exist? %f" % exists(to_file) # checks that an output file exists
print "Ready, hit RETURN to continue, CTRL-C to abort." # user abort option
raw_input()
out_file = open(to_file, 'w') # new variable, that stores the image of to_file
out_file.write(indata) # puts indata into the new file
print "Alright, all done."
out_file.close() #closing up shop
| 38.791667 | 111 | 0.716434 | 159 | 931 | 4.113208 | 0.566038 | 0.045872 | 0.030581 | 0.042813 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196563 | 931 | 23 | 112 | 40.478261 | 0.874332 | 0.464017 | 0 | 0 | 0 | 0 | 0.319654 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.153846 | null | null | 0.384615 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebdad1427769475e77d3e3d80ead6f03cbd6c8f1 | 7,122 | py | Python | InstagramAPIApplication/datasetcrawler.py | SocioRecSys/FashionRec_DataCollection | 8bb97beda01b34fa1eaab06d003faa95b2b2808f | [
"RSA-MD"
] | 2 | 2020-05-03T00:51:41.000Z | 2020-08-23T17:19:59.000Z | InstagramAPIApplication/datasetcrawler.py | SocioRecSys/FashionRec_DataCollection | 8bb97beda01b34fa1eaab06d003faa95b2b2808f | [
"RSA-MD"
] | 5 | 2021-06-08T22:55:28.000Z | 2022-03-12T00:33:42.000Z | InstagramAPIApplication/datasetcrawler.py | SocioRecSys/FashionRec_DataCollection | 8bb97beda01b34fa1eaab06d003faa95b2b2808f | [
"RSA-MD"
] | null | null | null | #from app import app
import os
from flask import Flask,render_template,request, url_for, redirect,session
from constant import *
from storeDataToDB import InstagramDataStore
import json,ast
import requests
import urllib
import urllib2
global accessToken_gl
from pymongo import MongoClient
app = Flask(__name__)
client = MongoClient('localhost:27017')
db = client.instagramapidata
@app.route('/',methods=['GET', 'POST'])
#@app.route('/index',methods=['GET', 'POST'])
def index():
if request.method == 'GET':
code = request.args.get('code')
username = request.args.get('username')
media_show = request.args.get('show_media')
if code:
print code
url = 'https://api.instagram.com/oauth/access_token'
values = {
'client_id':'329baa060a044ea58e966a10e30f5473',
'client_secret':'30c24436db944af89e953549bb8de647',
'redirect_uri':REDIRECT_URI,
'code':code,
'grant_type':'authorization_code'
}
data_access_token = urllib.urlencode(values)
print data_access_token
req = urllib2.Request(url, data_access_token)
response = urllib2.urlopen(req)
response_string = response.read()
response.close()
print str(response_string)
instagram_data = json.loads(response_string)
#accessToken_gl.append(json.loads(response_string))
if 'access_token' in instagram_data:
print ' ACCESSTOKEN::'+instagram_data['access_token'] + " of USER:: "+instagram_data['user']['username']
login_user = instagram_data['user']['username']
#db.useraccesstokeninfo.create_index([('username', ASCENDING)], unique=True)
access_token = instagram_data['access_token']
checkcsr = db.useraccesstokeninfo.find({"username":login_user})
if (checkcsr.count()>0):
pass
else:
db.useraccesstokeninfo.insert({"username":login_user,"accesstoken":access_token})
return redirect(url_for('login',current_user = login_user))
else:
print 'authentication failure'
if username:
print 'MAIN PAGE USER SEARCH' + str(username)
user_dict = {'5467508547':'umu2017','4841977337':'wara.kab'}
for uid,name in user_dict.items():
if username in name:
print uid
print name
break
else:
name = 'NoUser'
return render_template("templates_view.html",username = name)
if username and media_show:
print "//////"+ username + "///"+ media_show
database = InstagramDataStore()
all_user = InstagramDataStore.retrieve_from_db_alldata(database)
#alldata = all_user
print 'datasetCrawler retrieves data from database'
data =[]
for user in all_user:
#deleting _id as json cannot dump it
current_user_data_show = user[u'name']
if username in current_user_data_show:
print current_user_data_show
del user[u'_id']
alldata = ast.literal_eval(json.dumps(user))
#print type(alldata)
for key,value in alldata.items():
print key, 'corresponds to',value
data.append(value)
total_user = all_user.count()
#print data
#print len(data)
return render_template("templates_view.html", content=data, row=total_user,current_user = username)
if request.method == 'POST':
print 'post method'
return render_template("templates_view.html")
@app.route('/login',methods=['GET', 'POST'])
def login():
if request.method == 'GET':
username = request.args.get('current_user')
media_list_show = request.args.get('show_media')
if username:
print 'LOGGED IN USER:: '+request.args.get('current_user')
## here the media list is shown
if media_list_show:
print 'SHOW MEDIA:'
database = InstagramDataStore()
all_user = InstagramDataStore.retrieve_from_db_alldata(database)
#alldata = all_user
print 'datasetCrawler retrieves data from database'
data =[]
for user in all_user:
#deleting _id as json cannot dump it
current_user_data_show = user[u'name']
if username in current_user_data_show:
print current_user_data_show
del user[u'_id']
alldata = ast.literal_eval(json.dumps(user))
#print type(alldata)
for key,value in alldata.items():
print key, 'corresponds to',value
data.append(value)
total_user = all_user.count()
return render_template("login.html", content=data, row=total_user,current_user = username)
return render_template("login.html",current_user = username)
@app.route('/showdataset',methods=['GET', 'POST'])
def showdataset():
if request.method == 'GET':
username = request.args.get('username')
media_show = request.args.get('show_media')
database = InstagramDataStore()
all_user = InstagramDataStore.retrieve_from_db_alldata(database)
#alldata = all_user
print 'datasetCrawler retrieves data from database'
data =[]
for user in all_user:
#deleting _id as json cannot dump it
current_user_data_show = user[u'name']
if username in current_user_data_show:
del user[u'_id']
alldata = ast.literal_eval(json.dumps(user))
#print type(alldata)
for key,value in alldata.items():
#print key, 'corresponds to',value
data.append(value)
total_user = all_user.count()
return render_template("templates_view.html", content=data, row=total_user,current_user = username)
@app.route('/logininstagram',methods=['GET', 'POST'])
def logininstagram():
if request.method == 'POST':
return redirect(url_for('index'))
if request.method == 'GET':
instagram_client_id = '329baa060a044ea58e966a10e30f5473'
instagram_client_secret = '30c24436db944af89e953549bb8de647'
instagram_redirect_url = REDIRECT_URI
login_url = 'https://api.instagram.com/oauth/authorize/?client_id=' + instagram_client_id + '&redirect_uri=' +instagram_redirect_url + '&response_type=code&scope=basic'
print 'redirecting to -> '+login_url
return redirect(login_url )
@app.route("/logout",methods=['GET', 'POST'])
def logout():
"""Logout Form"""
# #session['logged_in'] = False
url = 'http://instagram.com/accounts/logout/'
return redirect(url)
@app.route('/policy',methods=['GET', 'POST'])
def policy():
if request.method == 'POST':
return redirect(url_for('index'))
# show the form, it wasn't submitted
return render_template('policy.html')
@app.route('/description',methods=['GET', 'POST'])
def description():
if request.method == 'POST':
return redirect(url_for('index'))
# show the form, it wasn't submitted
return render_template('description.html')
if __name__== "__main__":
app.secret_key = "123"
app.run(host = '130.237.20.58')
| 32.081081 | 176 | 0.656276 | 840 | 7,122 | 5.366667 | 0.192857 | 0.036602 | 0.024845 | 0.033718 | 0.523957 | 0.478705 | 0.449867 | 0.44299 | 0.430124 | 0.380878 | 0 | 0.024275 | 0.224937 | 7,122 | 221 | 177 | 32.226244 | 0.792391 | 0.082982 | 0 | 0.421053 | 0 | 0 | 0.189031 | 0.024495 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.006579 | 0.059211 | null | null | 0.131579 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebdd311d83a28a33b9ca97f008b55de04389e5d0 | 4,930 | py | Python | library(data.table).py | edward-jiang3649/data_project | 110789e6db9fd27c284861b8b0da532c78949191 | [
"MIT"
] | null | null | null | library(data.table).py | edward-jiang3649/data_project | 110789e6db9fd27c284861b8b0da532c78949191 | [
"MIT"
] | null | null | null | library(data.table).py | edward-jiang3649/data_project | 110789e6db9fd27c284861b8b0da532c78949191 | [
"MIT"
] | null | null | null |
# Which city has the most traffic? Which city has the least?
# Which month is the busiest of the year?
# Which airport route is the busiest?
library(data.table)
library(tidyverse)
library(stringr)
library(plotly)
library(readr)
library(xml2)
# ----------------------------------------------------------------------------------------
airport <- read_csv('../input/au-dom-traffic/audomcitypairs-20180406.csv')
airport$City1 <- airport$City1 %>% str_to_lower()
airport$City1 <- airport$City1 %>% str_to_title()
airport$City2 <- airport$City2 %>% str_to_lower()
airport$City2 <- airport$City2 %>% str_to_title()
airport <- airport %>% filter(Year < 2018)
airport <- airport %>% filter(Year >= 2000)
# ----------------------------------------------------------------------------------------
city <- fread("../input/world-cities-database/worldcitiespop.csv", data.table=FALSE)
## Read 15.1 % of 3173958 rows
## Read 24.6 % of 3173958 rows
## Read 34.0 % of 3173958 rows
## Read 37.2 % of 3173958 rows
## Read 47.3 % of 3173958 rows
## Read 61.4 % of 3173958 rows
## Read 79.4 % of 3173958 rows
## Read 88.2 % of 3173958 rows
## Read 3173958 rows and 7 (of 7) columns from 0.153 GB file in 00:00:13
city.australia <- city %>% filter(Country == "au")
city.australia <- city.australia %>% select(-Country, -Population, -Region, -City)
names(city.australia)[1] <- "City"
# 5 Data Component
airport %>% str()
# Classes 'tbl_df', 'tbl' and 'data.frame': 13169 obs. of 12 variables:
# $ City1 : chr "Albury" "Albury" "Albury" "Albury" ...
# $ City2 : chr "Sydney" "Sydney" "Sydney" "Sydney" ...
# $ Month : num 36526 36557 36586 36617 36647 ...
# $ Passenger_Trips : num 8708 8785 10390 9693 9831 ...
# $ Aircraft_Trips : num 401 398 423 394 418 403 458 589 566 580 ...
# $ Passenger_Load_Factor: num 62.5 63.6 70.4 70.7 67.3 67 63.9 59.5 64.2 53.1 ...
# $ Distance_GC_(km) : num 452 452 452 452 452 452 452 452 452 452 ...
# $ RPKs : num 3936016 3970820 4696280 4381236 4443612 ...
# $ ASKs : num 6297264 6243024 6667904 6196016 6601912 ...
# $ Seats : num 13932 13812 14752 13708 14606 ...
# $ Year : num 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 ...
# $ Month_num : num 1 2 3 4 5 6 7 8 9 10 ...
# - attr(*, "spec")=
# .. cols(
# .. City1 = col_character(),
# .. City2 = col_character(),
# .. Month = col_double(),
# .. Passenger_Trips = col_double(),
# .. Aircraft_Trips = col_double(),
# .. Passenger_Load_Factor = col_double(),
# .. `Distance_GC_(km)` = col_double(),
# .. RPKs = col_double(),
# .. ASKs = col_double(),
# .. Seats = col_double(),
# .. Year = col_double(),
# .. Month_num = col_double()
# .. )
port.city <- c("Adelaide", "Albury", "Alice Springs", "Armidale", "Ayers Rock",
"Ballina", "Brisbane", "Broome", "Bundaberg", "Burnie", "Cairns", "Canberra",
"Coffs Harbour", "Darwin", "Devonport", "Dubbo", "Emerald", "Geraldton",
"Gladstone", "Gold Coast", "Hamilton Island", "Hervey Bay", "Hobart",
"Kalgoorlie", "Karratha", "Launceston", "Mackay", "Melbourne", "Mildura",
"Moranbah", "Mount Isa", "Newcastle", "Newman", "Perth", "Port Hedland",
"Port Lincoln", "Port Macquarie", "Proserpine", "Rockhampton", "Sunshine Coast",
"Sydney", "Tamworth", "Townsville", "Wagga Wagga")
city.australia <- city.australia %>% filter(City %in% port.city)
airport <- merge(airport, city.australia, by.x="City1", by.y="City")
names(airport)[13] <- "City1.Latitude"
names(airport)[14] <- "City1.Longitude"
airport <- merge(airport, city.australia, by.x="City2", by.y="City")
names(airport)[15] <- "City2.Latitude"
names(airport)[16] <- "City2.Longitude"
# 7.1 Map Visualization of all routes
airport <- airport %>% mutate(id=rownames(airport))
airport.1 <- airport %>%
select(-contains("Latitude"), -contains("Longitude"))
airport.1 <- airport.1 %>%
gather('City1', 'City2', key="Airport.type", value="City")
airport.1$Airport.type <- airport.1$Airport.type %>% str_replace(pattern="City1", replacement="Departure")
airport.1$Airport.type <- airport.1$Airport.type %>% str_replace(pattern="City2", replacement="Arrive")
airport.1 <- merge(airport.1, city.australia, by.x="City", by.y="City")
world.map <- map_data("world")
au.map <- world.map %>% filter(region == "Australia")
au.map <- fortify(au.map)
ggplot() +
geom_map(data=au.map, map=au.map,
aes(x=long, y=lat, group=group, map_id=region),
fill="white", colour="black") +
ylim(-43, -10) +
xlim(110, 155) +
geom_point(data=airport.1, aes(x=Longitude, y=Latitude)) +
geom_line(data=airport.1, aes(x=Longitude, y=Latitude, group=id), colour="red", alpha=.1) +
labs(title="Australian Domestic Aircraft Routes")
| 41.779661 | 109 | 0.602434 | 632 | 4,930 | 4.631329 | 0.401899 | 0.030065 | 0.035531 | 0.046464 | 0.185856 | 0.148275 | 0.108644 | 0.084728 | 0.061496 | 0.037581 | 0 | 0.116074 | 0.196146 | 4,930 | 117 | 110 | 42.136752 | 0.622508 | 0.345639 | 0 | 0 | 0 | 0 | 0.222536 | 0.031387 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebe066301f9db184fccc963e211e96a39a32434b | 1,852 | py | Python | src/incognita/preprocessing/setup_district_boundaries.py | the-scouts/incognita | f0634f3d812b816ad3d46f55e88d7524ebf81d32 | [
"MIT"
] | 2 | 2019-06-14T08:05:24.000Z | 2021-01-03T00:18:07.000Z | src/incognita/preprocessing/setup_district_boundaries.py | the-scouts/geo_mapping | f0634f3d812b816ad3d46f55e88d7524ebf81d32 | [
"MIT"
] | 47 | 2019-06-17T21:27:57.000Z | 2021-03-11T00:27:47.000Z | src/incognita/preprocessing/setup_district_boundaries.py | the-scouts/incognita | f0634f3d812b816ad3d46f55e88d7524ebf81d32 | [
"MIT"
] | null | null | null | import time
import geopandas as gpd
import pandas as pd
from incognita.data.scout_census import load_census_data
from incognita.geographies import district_boundaries
from incognita.logger import logger
from incognita.utility import config
from incognita.utility import filter
from incognita.utility import timing
if __name__ == "__main__":
start_time = time.time()
logger.info(f"Starting at {time.strftime('%H:%M:%S', time.localtime(start_time))}")
census_data = load_census_data()
census_data = filter.filter_records(census_data, "Census_ID", {20})
# Remove Jersey, Guernsey, and Isle of Man as they have invalid lat/long coordinates for their postcodes
census_data = filter.filter_records(census_data, "C_name", {"Bailiwick of Guernsey", "Isle of Man", "Jersey"}, exclude_matching=True)
# low resolution shape data
world_low_res = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))
uk_shape = world_low_res.loc[world_low_res.name == "United Kingdom", "geometry"].array.data[0]
# # high resolution shape data
# uk_shape = gpd.read_file(r"S:\Development\incognita\data\UK Shape\GBR_adm0.shp")["geometry"].array.data[0]
logger.info("UK outline shapefile loaded.")
district_polygons = district_boundaries.create_district_boundaries(census_data, clip_to=uk_shape)
logger.info("District boundaries estimated!")
location_ids = census_data[["D_ID", "C_ID", "R_ID", "X_ID"]].dropna(subset=["D_ID"]).drop_duplicates().astype("Int64")
district_polygons = pd.merge(district_polygons, location_ids, how="left", on="D_ID")
logger.info("Added County, Region & Country location codes.")
district_polygons.to_file(config.SETTINGS.folders.boundaries / "districts-borders-uk.geojson", driver="GeoJSON")
logger.info("District boundaries saved.")
timing.close(start_time)
| 46.3 | 137 | 0.75378 | 260 | 1,852 | 5.134615 | 0.453846 | 0.067416 | 0.044944 | 0.058427 | 0.058427 | 0.058427 | 0.058427 | 0 | 0 | 0 | 0 | 0.004329 | 0.12689 | 1,852 | 39 | 138 | 47.487179 | 0.821274 | 0.142009 | 0 | 0 | 0 | 0 | 0.231838 | 0.0518 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.346154 | 0 | 0.346154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
ebe0c716c2fa3c66d4bdfcd4205a4e9c067dbd26 | 471 | py | Python | setup.py | drolando/pyramid_zipkin-example | a377b84912a8e8b5c2a6a5f50d8406e06cbda53f | [
"Apache-2.0"
] | 11 | 2016-10-14T03:23:10.000Z | 2021-06-19T09:13:22.000Z | setup.py | drolando/pyramid_zipkin-example | a377b84912a8e8b5c2a6a5f50d8406e06cbda53f | [
"Apache-2.0"
] | 4 | 2016-10-13T11:21:48.000Z | 2019-02-26T01:55:31.000Z | setup.py | drolando/pyramid_zipkin-example | a377b84912a8e8b5c2a6a5f50d8406e06cbda53f | [
"Apache-2.0"
] | 6 | 2016-12-01T07:53:33.000Z | 2021-06-23T01:10:57.000Z | #!/usr/bin/python
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='pyramid_zipkin-example',
version='0.1',
author='OpenZipkin',
author_email='zipkin-user@googlegroups.com',
license='Apache 2.0',
url='https://github.com/openzipkin/pyramid_zipkin-example',
description='See how much time python services spend on an http request',
install_requires=[
'pyramid',
'requests',
'pyramid_zipkin',
]
)
| 24.789474 | 77 | 0.649682 | 56 | 471 | 5.375 | 0.767857 | 0.129568 | 0.13289 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013333 | 0.203822 | 471 | 18 | 78 | 26.166667 | 0.789333 | 0.080679 | 0 | 0 | 0 | 0 | 0.491879 | 0.116009 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ebe3cdd5919b1539b46ad13dc42c9b3bad8c9f65 | 280 | py | Python | 10. File Name/main.py | MahmudX/Algorithms | df498929b5d3fc0f0d558b3369c2aa9804c292f1 | [
"MIT"
] | null | null | null | 10. File Name/main.py | MahmudX/Algorithms | df498929b5d3fc0f0d558b3369c2aa9804c292f1 | [
"MIT"
] | null | null | null | 10. File Name/main.py | MahmudX/Algorithms | df498929b5d3fc0f0d558b3369c2aa9804c292f1 | [
"MIT"
] | null | null | null | n = int(input())
s = str(input())
removal = 0
counter = 0  # length of the current run of consecutive 'x' characters
for x in s:
    if x != 'x':
        # a run of 'x' just ended; every run longer than two costs (length - 2) deletions
        if counter >= 3:
            removal += counter - 2
        counter = 0
    elif x == 'x':
        counter += 1
# a run of 'x' may extend to the end of the string; account for it here
if counter >= 3:
    removal += counter - 2
print(removal)
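# Example: for s = "xxaxxxbxxxx" the runs of 'x' have lengths 2, 3 and 4; the
# first needs no deletions, the second needs 1, and the trailing run (caught by
# the final check) needs 2, so the program prints 3.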
| 18.666667 | 35 | 0.460714 | 38 | 280 | 3.394737 | 0.447368 | 0.124031 | 0.155039 | 0.263566 | 0.387597 | 0.387597 | 0 | 0 | 0 | 0 | 0 | 0.047337 | 0.396429 | 280 | 14 | 36 | 20 | 0.715976 | 0 | 0 | 0.428571 | 0 | 0 | 0.007519 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ccda408a9fc107d667abef139bfbc493590b5a5a | 2,205 | py | Python | tests/unit-tests/test_pkg_configs_devmode.py | releng-tool/releng-tool | cd8728f35a7bdaf6ef90fd019e8c33bc5da8b265 | [
"BSD-2-Clause"
] | 7 | 2019-04-06T21:21:22.000Z | 2021-12-10T04:07:20.000Z | tests/unit-tests/test_pkg_configs_devmode.py | releng-tool/releng-tool | cd8728f35a7bdaf6ef90fd019e8c33bc5da8b265 | [
"BSD-2-Clause"
] | 1 | 2019-10-01T20:03:10.000Z | 2019-10-02T20:28:00.000Z | tests/unit-tests/test_pkg_configs_devmode.py | releng-tool/releng-tool | cd8728f35a7bdaf6ef90fd019e8c33bc5da8b265 | [
"BSD-2-Clause"
] | 1 | 2021-07-23T17:00:57.000Z | 2021-07-23T17:00:57.000Z | # -*- coding: utf-8 -*-
# Copyright 2021 releng-tool
from releng_tool.opts import RelengEngineOptions
from releng_tool.packages.exceptions import RelengToolInvalidPackageKeyValue
from releng_tool.packages.manager import RelengPackageManager
from releng_tool.registry import RelengRegistry
from tests.support.pkg_config_test import TestPkgConfigsBase
class TestPkgConfigsDevmode(TestPkgConfigsBase):
def test_pkgconfig_devmode_ignore_cache_disabled(self):
pkg, _, _ = self.LOAD('devmode-ignore-cache-disabled')
self.assertEqual(pkg.devmode_ignore_cache, False)
def test_pkgconfig_devmode_ignore_cache_enabled(self):
pkg, _, _ = self.LOAD('devmode-ignore-cache-enabled')
self.assertEqual(pkg.devmode_ignore_cache, True)
def test_pkgconfig_devmode_ignore_cache_invalid(self):
with self.assertRaises(RelengToolInvalidPackageKeyValue):
self.LOAD('devmode-ignore-cache-invalid')
def test_pkgconfig_devmode_ignore_cache_missing(self):
pkg, _, _ = self.LOAD('missing')
self.assertIsNone(pkg.devmode_ignore_cache)
def test_pkgconfig_devmode_revision_invalid(self):
with self.assertRaises(RelengToolInvalidPackageKeyValue):
self.LOAD('devmode-revision-invalid-type')
def test_pkgconfig_devmode_revision_missing(self):
pkg, _, _ = self.LOAD('missing')
self.assertFalse(pkg.has_devmode_option)
def test_pkgconfig_devmode_revision_valid_default(self):
pkg, _, _ = self.LOAD('devmode-revision-valid')
self.assertEqual(pkg.revision, 'dummy')
self.assertEqual(pkg.version, 'dummy')
self.assertTrue(pkg.has_devmode_option)
def test_pkgconfig_devmode_revision_valid_enabled(self):
# force engine options to default packages to internal
opts = RelengEngineOptions()
opts.devmode = True
registry = RelengRegistry()
manager = RelengPackageManager(opts, registry)
pkg, _, _ = self.LOAD('devmode-revision-valid', manager=manager)
self.assertEqual(pkg.revision, 'my-devmode-revision')
self.assertEqual(pkg.version, 'my-devmode-revision')
self.assertTrue(pkg.has_devmode_option)
| 41.603774 | 76 | 0.739683 | 243 | 2,205 | 6.432099 | 0.242798 | 0.083173 | 0.115163 | 0.117722 | 0.525912 | 0.457454 | 0.254639 | 0.170186 | 0.170186 | 0.070377 | 0 | 0.002729 | 0.169161 | 2,205 | 52 | 77 | 42.403846 | 0.850437 | 0.045805 | 0 | 0.157895 | 0 | 0 | 0.104762 | 0.075238 | 0 | 0 | 0 | 0 | 0.315789 | 1 | 0.210526 | false | 0 | 0.131579 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cce30443f995ba90a50489c29b9243eb7d862590 | 4,392 | py | Python | newpriority.py | NathanZabriskie/microfiction | 65de5de8749abf26a48598edf07732d0f3468ac2 | [
"MIT"
] | 1 | 2020-11-15T03:34:14.000Z | 2020-11-15T03:34:14.000Z | newpriority.py | NathanZabriskie/microfiction | 65de5de8749abf26a48598edf07732d0f3468ac2 | [
"MIT"
] | null | null | null | newpriority.py | NathanZabriskie/microfiction | 65de5de8749abf26a48598edf07732d0f3468ac2 | [
"MIT"
] | 1 | 2020-11-15T03:35:02.000Z | 2020-11-15T03:35:02.000Z | import helpers as h
import random
import heapq as hq
numChildren = 4
strikes = 3
maxSpecies = 10
class Species:
def __init__(self,s,node):
self.seed = s
self.isDead = False
self.heap = []
self.stale = 0
self.lowsc = node.score
self.bestsc = node.score
self.bestch = node
self.secondch = None
self.push(node)
def checkBest(self,curr):
if curr.score < self.lowsc:
self.lowsc = curr.score
if curr.score > self.bestsc:
self.bestsc = curr.score
self.secondch = self.bestch
self.bestch = curr
self.stale = 0
return True
return False
def push(self,node):
if self.stale > strikes:
return
if not self.heap:
self.isDead = False
hq.heappush(self.heap,(-node.score,node))
def step(self):
if self.isDead:
return []
if not self.heap:
self.isDead = True
return []
curr = hq.heappop(self.heap)[1]
if not self.checkBest(curr):
self.stale += 1
if self.stale > strikes:
self.isDead = True
return []
childs = []
for i in xrange(numChildren):
newch = curr.getChild()
if newch is not None:
childs.append(newch)
return childs
class Settings:
#rf is function that takes a "locks" list (see "formats" functions in micro.py)
#canR is list of indices that can be regenerated
def __init__(self,rf,canR):
self.regen = rf
self.canRegen = canR
class Node:
#s is string (artifact)
#sett is Settings object
def __init__(self,s,sett):
self.s = s
self.sett = sett
self.isbad = False
try: # fixes unicode characters trying to sneak through; see https://stackoverflow.com/questions/517923/what-is-the-best-way-to-remove-accents-in-a-python-unicode-string
self.words = h.strip(s).split()
except Exception as e:
#print s, e
self.isbad = True
self.score = None#sett.calcScore
#print "--Created node [",s,"]",self.score
def getChild(self):
i = random.choice(self.sett.canRegen)
lock = self.words[:]
lock[i] = None
temp = self.sett.regen(lock)
if not temp:
return None
news,fraw = temp
if not news:
return None
child = Node(news,self.sett)
if child.isbad:
return None
#TODO! This rejects too many, I think? Test more! Maybe make it not match the original story...?
#if h.numMatch(self.words,child.words) > 2: #too similar
# print self.words,child.words
# return None
return child
def getIndex(story, i):
return h.strip(story.split(' ')[i])
#stories can be a string or list
#NOT STRIPPED
def best(stories,regenf,canRegen,scoref,fraw,norm=False):
if type(stories) != list:
stories = [stories]
species = {}
seedi = fraw['root']['index']
bad = True
for s in stories:
if not s:
continue
seed = getIndex(s,seedi)
root = Node(s,Settings(regenf,canRegen))
if root.isbad:
break
root.score = scoref([s])[0]
species[seed] = Species(seed,root)
bad = False
if bad:
print "Refiner got no stories!"
return None
while True:
#print "--------------------------------"
children = []
allDead = True
for k in sorted(species.keys(),key=lambda x: species[x].bestsc,reverse=True)[:maxSpecies]:
p = species[k]
if not p.isDead:
allDead = False
children += p.step()
if allDead and not children:
break
if not children:
continue
#print "Num species, children:",len(species.keys()),",",len(children)
#raw = [h.strip(c.s) for c in children]
scores = scoref([c.s for c in children])
for i,child in enumerate(children):
child.score = scores[i]
k = getIndex(child.s,seedi)
if k not in species:
ni2 = Species(k,child)
species[k] = ni2
else:
species[k].push(child)
#print len(species)
lowest = 1000
highest = -1000
choices = []
for k in species:
p = species[k]
if p.lowsc < lowest:
lowest = p.lowsc
if p.bestsc > highest:
highest = p.bestsc
#print p.bestch.s,p.bestsc
assert p.bestch.score == p.bestsc
choices.append((p.bestch.s, p.bestsc))
if p.secondch: #balance between not letting good stories getting buried under slightly better stories and minimizing species score bias
choices.append((p.secondch.s,p.secondch.score))
if norm:
choices = [(s,h.rangify(c,lowest,highest,0,1)) for s,c in choices]
choices = sorted(choices,key=lambda x: x[1],reverse=True)[:maxSpecies]
if False:
for s,c in choices:
print s,c
m = min([c[1] for c in choices])
if m >=0:
m = 0
i = h.weighted_choice(choices,-m)
return choices[i] + (choices,)
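#Sketch of the expected call shape (the argument values below are illustrative;
#the real regen/score functions come from micro.py):
#   result = best(stories, regenf, [1, 3], scoref, {'root': {'index': 0}})
#   if result:
#       story, score, runners_up = result[0], result[1], result[2]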
| 24.4 | 171 | 0.664845 | 680 | 4,392 | 4.275 | 0.275 | 0.01376 | 0.011352 | 0.008256 | 0.050912 | 0.03096 | 0.019952 | 0 | 0 | 0 | 0 | 0.009119 | 0.201047 | 4,392 | 179 | 172 | 24.536313 | 0.819322 | 0.213798 | 0 | 0.158621 | 0 | 0 | 0.009615 | 0 | 0 | 0 | 0 | 0.005587 | 0.006897 | 0 | null | null | 0 | 0.02069 | null | null | 0.013793 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cce35f5232bb1193dda86864eb9f31fbad3e7841 | 7,005 | py | Python | test_right_product.py | mirrkka/seleniumcourse | d2c2aea82896009a697bb523153182dfc2f0e6fd | [
"Apache-2.0"
] | null | null | null | test_right_product.py | mirrkka/seleniumcourse | d2c2aea82896009a697bb523153182dfc2f0e6fd | [
"Apache-2.0"
] | null | null | null | test_right_product.py | mirrkka/seleniumcourse | d2c2aea82896009a697bb523153182dfc2f0e6fd | [
"Apache-2.0"
] | null | null | null | import pytest
import time
from selenium import webdriver
from driver_fixture import *
import re
import ast
#a) the product name text on the main page matches the one on the product page
#b) the prices on the main page match those on the product page (both the regular and the promotional price)
#c) the regular price is struck through and grey (a colour counts as "grey" when its RGBa representation has equal values in the R, G and B channels)
#d) the promotional price is bold and red (a colour counts as "red" when the G and B channels of its RGBa representation are zero)
#(the colours must be checked on each page independently, and the colours on the two pages may differ)
#e) the promotional price is displayed larger than the regular one (this, too, must be checked on each page independently)
def test_product(driver):
driver.get("http://localhost/litecart")
time.sleep(3)
duck = driver.find_elements_by_xpath("//div[@id='box-campaigns']//li[@class='product column shadow hover-light']")
link_product=[]
name = []
regular_price = []
reduced_price = []
reduced_price_tag = []
regular_price_style_text_decoration = []
regular_price_style_font_size = []
reduced_price_style_font_weight = []
reduced_price_style_font_size = []
for elements in duck:
link_product.append((elements.find_element_by_xpath(".//a[@class='link']").get_attribute('href')))
name.append(elements.find_element_by_xpath(".//div[@class='name']").get_attribute('textContent'))
regular_price.append(elements.find_element_by_xpath(".//div[@class='price-wrapper']/s").get_attribute('textContent'))
reduced_price.append(elements.find_element_by_xpath(".//div[@class='price-wrapper']/strong").get_attribute('textContent'))
reduced_price_tag.append(elements.find_element_by_xpath(".//div[@class='price-wrapper']/strong").get_attribute('tagName'))
regular_price_style_text_decoration.append(elements.find_element_by_xpath("//s[@class='regular-price']").value_of_css_property('text-decoration'))
regular_price_style_font_size.append(elements.find_element_by_xpath("//s[@class='regular-price']").value_of_css_property('font-size'))
reduced_price_style_font_weight.append(elements.find_element_by_xpath("//strong[@class='campaign-price']").value_of_css_property('font-weight'))
reduced_price_style_font_size.append(elements.find_element_by_xpath("//strong[@class='campaign-price']").value_of_css_property('font-size'))
print(name)
print(regular_price)
print(reduced_price)
print(regular_price_style_text_decoration)
print(regular_price_style_font_size)
print(reduced_price_style_font_weight)
print(reduced_price_style_font_size)
print(reduced_price_tag)
for k in range(len(name)):
driver.get(link_product[k])
a_name = driver.find_element_by_xpath(".//*[@id='box-product']/div[1]/h1").get_attribute('textContent')
a_regular_price = driver.find_element_by_xpath("//s[@class='regular-price']").get_attribute('textContent')
a_reduced_price = driver.find_element_by_xpath("//strong[@class='campaign-price']").get_attribute('textContent')
a_regular_price_style_text_decoration = driver.find_element_by_xpath("//s[@class='regular-price']").value_of_css_property('text-decoration')
a_regular_price_style_font_size = driver.find_element_by_xpath("//s[@class='regular-price']").value_of_css_property('font-size')
a_reduced_price_style_font_weight = driver.find_element_by_xpath("//strong[@class='campaign-price']").value_of_css_property('font-weight')
a_reduced_price_style_font_size = driver.find_element_by_xpath("//strong[@class='campaign-price']").value_of_css_property('font-size')
a_reduced_price_tag = driver.find_element_by_xpath("//strong[@class='campaign-price']").get_attribute('tagName')
a_regular_price_style_text_decoration_color = driver.find_element_by_xpath("//s[@class='regular-price']").value_of_css_property('color')
rgba = a_regular_price_style_text_decoration_color
r, g, b, alpha = ast.literal_eval(rgba.strip("rgba"))
hex_value = '#%02x%02x%02x' % (r, g, b)
a_reduced_price_style_color = driver.find_element_by_xpath("//strong[@class='campaign-price']").value_of_css_property('color')
rgba = a_reduced_price_style_color
r, g, b, alpha = ast.literal_eval(rgba.strip("rgba"))
hex = '#%02x%02x%02x' % (r, g, b)
print(hex)
print(hex_value)
print(a_name)
print(a_regular_price)
print(a_reduced_price)
print(a_regular_price_style_text_decoration)
print(a_regular_price_style_font_size)
print(a_reduced_price_style_font_weight)
print(a_reduced_price_style_font_size)
print(a_regular_price_style_text_decoration_color)
        assert name[k] == a_name  # a) the product name matches between the main page and the product page
        assert regular_price[k] == a_regular_price  # b) the regular price matches between the main page and the product page
        assert reduced_price[k] == a_reduced_price  # b) the promotional price matches between the main page and the product page
        assert regular_price_style_text_decoration[k] != a_regular_price_style_text_decoration
        assert regular_price_style_font_size[k] < a_regular_price_style_font_size  # the promotional price is larger than the regular one (checked on each page independently)
        assert reduced_price_tag == ['STRONG']  # bold font checked via the tag name
        assert a_reduced_price_tag == 'STRONG'  # bold font checked via the tag name
        assert (reduced_price_style_font_weight[k] == a_reduced_price_style_font_weight) > (regular_price_style_font_size[k] == a_regular_price_style_font_size)
        assert regular_price_style_text_decoration[k].__contains__("line-through solid rgb(119, 119, 119)")  # the regular price is struck through (line-through is present)
        assert a_regular_price_style_text_decoration.__contains__("line-through solid rgb(102, 102, 102)")  # the regular price is struck through (line-through is present)
        assert hex_value == '#666666'  # the regular price is grey on the product page
        assert hex == '#cc0000'  # the promotional price is red on the product page
def test_product_color(driver):
driver.get("http://localhost/litecart")
time.sleep(3)
color = driver.find_element_by_xpath("//s[@class='regular-price']").value_of_css_property('color')
rgba = color
r, g, b, alpha = ast.literal_eval(rgba.strip("rgba"))
hex2 = '#%02x%02x%02x' % (r, g, b)
color2 = driver.find_element_by_xpath("//strong[@class='campaign-price']").value_of_css_property('color')
rgba = color2
r, g, b, alpha = ast.literal_eval(rgba.strip("rgba"))
hex3 = '#%02x%02x%02x' % (r, g, b)
print(hex2)
print(hex3)
    assert hex2 == '#777777'  # check the regular price's grey colour on the listing page
    assert hex3 == '#cc0000'  # check the promotional price's red colour on the listing page
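# A possible helper (a sketch, with an assumed name) that would collapse the
# repeated rgba-string parsing above into a single call:
#
#   def css_color_to_hex(rgba):
#       r, g, b, alpha = ast.literal_eval(rgba.strip("rgba"))
#       return '#%02x%02x%02x' % (r, g, b)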
| 52.276119 | 170 | 0.740757 | 988 | 7,005 | 4.919028 | 0.163968 | 0.08642 | 0.073457 | 0.077778 | 0.76893 | 0.714403 | 0.617284 | 0.50535 | 0.480658 | 0.445679 | 0 | 0.012282 | 0.1399 | 7,005 | 133 | 171 | 52.669173 | 0.794357 | 0.190578 | 0 | 0.086957 | 0 | 0 | 0.200497 | 0.117031 | 0 | 0 | 0 | 0 | 0.152174 | 1 | 0.021739 | false | 0 | 0.065217 | 0 | 0.086957 | 0.217391 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cce77b6ce1740e65f98b8582e1e1fc207488ea4d | 1,085 | py | Python | geotrek/outdoor/urls.py | IdrissaD/Geotrek-admin | c5aba155d5665fdccdde0620a1024e02ebe71a7c | [
"BSD-2-Clause"
] | null | null | null | geotrek/outdoor/urls.py | IdrissaD/Geotrek-admin | c5aba155d5665fdccdde0620a1024e02ebe71a7c | [
"BSD-2-Clause"
] | null | null | null | geotrek/outdoor/urls.py | IdrissaD/Geotrek-admin | c5aba155d5665fdccdde0620a1024e02ebe71a7c | [
"BSD-2-Clause"
] | null | null | null | from django.conf import settings
from geotrek.common.urls import PublishableEntityOptions
from geotrek.outdoor import models as outdoor_models
from geotrek.outdoor import views as outdoor_views # noqa Fix an import loop
from mapentity.registry import registry
app_name = 'outdoor'
urlpatterns = []
class SiteEntityOptions(PublishableEntityOptions):
document_public_view = outdoor_views.SiteDocumentPublic
document_public_booklet_view = outdoor_views.SiteDocumentBookletPublic
markup_public_view = outdoor_views.SiteMarkupPublic
class CourseEntityOptions(PublishableEntityOptions):
document_public_view = outdoor_views.CourseDocumentPublic
document_public_booklet_view = outdoor_views.CourseDocumentBookletPublic
markup_public_view = outdoor_views.CourseMarkupPublic
urlpatterns += registry.register(outdoor_models.Site, SiteEntityOptions,
menu=settings.SITE_MODEL_ENABLED)
urlpatterns += registry.register(outdoor_models.Course, CourseEntityOptions,
menu=settings.COURSE_MODEL_ENABLED)
| 38.75 | 77 | 0.799078 | 109 | 1,085 | 7.688073 | 0.385321 | 0.100239 | 0.114558 | 0.105012 | 0.379475 | 0.217184 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153917 | 1,085 | 27 | 78 | 40.185185 | 0.912854 | 0.021198 | 0 | 0 | 0 | 0 | 0.006604 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.263158 | 0 | 0.684211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ccee05b4c0934983fd14692e6ee9512161f8e6e7 | 1,697 | py | Python | tripleo_ansible/tests/plugins/filter/test_range_list.py | beagles/tripleo-ansible | 7faddd87cffc8903a9cdedc7a6454cdf44aeed67 | [
"Apache-2.0"
] | 22 | 2018-08-29T12:33:15.000Z | 2022-03-30T00:17:25.000Z | tripleo_ansible/tests/plugins/filter/test_range_list.py | beagles/tripleo-ansible | 7faddd87cffc8903a9cdedc7a6454cdf44aeed67 | [
"Apache-2.0"
] | 1 | 2020-02-07T20:54:34.000Z | 2020-02-07T20:54:34.000Z | tripleo_ansible/tests/plugins/filter/test_range_list.py | beagles/tripleo-ansible | 7faddd87cffc8903a9cdedc7a6454cdf44aeed67 | [
"Apache-2.0"
] | 19 | 2019-07-16T04:42:00.000Z | 2022-03-30T00:17:29.000Z | # Copyright 2020 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tripleo_ansible.ansible_plugins.filter import range_list
from tripleo_ansible.tests import base as tests_base
class TestRangeListFilters(tests_base.TestCase):
def setUp(self):
super(TestRangeListFilters, self).setUp()
self.filters = range_list.FilterModule()
def test_run_with_ranges(self):
num_list = "0,22,23,24,25,60,65,66,67"
expected_result = "0,22-25,60,65-67"
result = self.filters.range_list(num_list)
self.assertEqual(result, expected_result)
def test_run_with_no_range(self):
num_list = "0,22,24,60,65,67"
expected_result = "0,22,24,60,65,67"
result = self.filters.range_list(num_list)
self.assertEqual(result, expected_result)
def test_run_with_empty_input(self):
num_list = ""
self.assertRaises(Exception,
self.filters.range_list, num_list)
def test_run_with_invalid_input(self):
num_list = ",d"
self.assertRaises(Exception,
self.filters.range_list, num_list)
| 36.106383 | 78 | 0.685916 | 238 | 1,697 | 4.731092 | 0.453782 | 0.049734 | 0.071048 | 0.08881 | 0.314387 | 0.262877 | 0.248668 | 0.248668 | 0.248668 | 0.156306 | 0 | 0.044207 | 0.226871 | 1,697 | 46 | 79 | 36.891304 | 0.814024 | 0.352976 | 0 | 0.333333 | 0 | 0 | 0.069252 | 0.023084 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.208333 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ccf9a7fee215cd56c5d2c27aaa915471648ac3e0 | 773 | py | Python | examples/tutorial_python/1_extract_pose.py | shalalalatuzki/openpose | 896167d317f328343d3694dba9f5b4640de61d84 | [
"DOC",
"MIT-CMU"
] | null | null | null | examples/tutorial_python/1_extract_pose.py | shalalalatuzki/openpose | 896167d317f328343d3694dba9f5b4640de61d84 | [
"DOC",
"MIT-CMU"
] | null | null | null | examples/tutorial_python/1_extract_pose.py | shalalalatuzki/openpose | 896167d317f328343d3694dba9f5b4640de61d84 | [
"DOC",
"MIT-CMU"
] | 1 | 2020-01-06T08:16:01.000Z | 2020-01-06T08:16:01.000Z | import sys
import cv2
import os
dir_path = os.path.dirname(os.path.realpath(__file__))
sys.path.append('../../python')
from openpose import *
params = dict()
params["logging_level"] = 3
params["output_resolution"] = "-1x-1"
params["net_resolution"] = "-1x368"
params["model_pose"] = "COCO"
params["alpha_pose"] = 0.6
params["scale_gap"] = 0.3
params["scale_number"] = 1
params["render_threshold"] = 0.05
params["num_gpu_start"] = 0
params["disable_blending"] = False
params["default_model_folder"] = dir_path + "/../../../models/"
openpose = OpenPose(params)
img = cv2.imread(dir_path + "/../../../examples/media/COCO_val2014_000000000192.jpg")
arr, output_image = openpose.forward(img, True)
print(arr)
while 1:
cv2.imshow("output", output_image)
cv2.waitKey(15)
| 27.607143 | 85 | 0.706339 | 109 | 773 | 4.788991 | 0.559633 | 0.04023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05644 | 0.10608 | 773 | 27 | 86 | 28.62963 | 0.698987 | 0 | 0 | 0 | 0 | 0 | 0.32859 | 0.069858 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.16 | null | null | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ccff523cb93eee5f10b99246d2750d49d4b60abb | 261 | py | Python | forloops/07.py | mallimuondu/python-homworks | 352721a8e77d0b3bdb7a8a54197b6a04e1aec3c0 | [
"MIT"
] | null | null | null | forloops/07.py | mallimuondu/python-homworks | 352721a8e77d0b3bdb7a8a54197b6a04e1aec3c0 | [
"MIT"
] | null | null | null | forloops/07.py | mallimuondu/python-homworks | 352721a8e77d0b3bdb7a8a54197b6a04e1aec3c0 | [
"MIT"
] | null | null | null | cars= {
"bugati":1000$,
"ferari":9000$
}
b = int(input("whow m any are you printing: "))
c = list()
for b in range(d):
f = cars[input ("write what you want to select: ")]
g = e.append(f)
print("total amount is " + str(total)) | 21.75 | 56 | 0.54023 | 40 | 261 | 3.525 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043011 | 0.287356 | 261 | 12 | 57 | 21.75 | 0.715054 | 0 | 0 | 0 | 0 | 0 | 0.343511 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
690ad0e7658ae07f293da3ae6c500805b40500c0 | 1,040 | py | Python | Data Preprocessing/Manual Method.py | roupenminassian/UTS-DSI-x-Disability-Research-Network | e08378594f09560a477521f22f62a47622e07cdd | [
"MIT"
] | null | null | null | Data Preprocessing/Manual Method.py | roupenminassian/UTS-DSI-x-Disability-Research-Network | e08378594f09560a477521f22f62a47622e07cdd | [
"MIT"
] | null | null | null | Data Preprocessing/Manual Method.py | roupenminassian/UTS-DSI-x-Disability-Research-Network | e08378594f09560a477521f22f62a47622e07cdd | [
"MIT"
] | null | null | null | # These lines of code allow for manually chunking sentences or paragraphs of extracted information to a list
# The string that is defined for 'A' will be manually updated each time the previous string has been added to GovList
# This means that we are incrementally adding sentences and paragraphs to this list
# This particular formatting is beneficial for BM25, allowing us to retrieve the most relevant sentences or paragraphs related to our query
# The pickle library also allows us to save this list for later use.
import pickle #import pickle library
GovList = [] #Setup an empty list
A = "[Insert Text Here]" #Define A to a string. This will change everytime the previous string is added to the list.
GovList.append(A) #Append string to list
len(GovList) #Check the length of the list
with open("test.txt", "wb") as fp: #Pickling the list in order to save to a file
... pickle.dump(GovList, fp)
with open("/content/drive/MyDrive/test.txt","rb") as fp:# Unpickling the list to use in the notebook
b = pickle.load(fp)
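# A minimal sketch of the BM25 retrieval this list format is built for; it assumes
# the third-party rank_bm25 package (pip install rank-bm25), and the query below
# is purely illustrative.
from rank_bm25 import BM25Okapi
tokenized_corpus = [passage.split() for passage in b]  # b is the unpickled GovList
bm25 = BM25Okapi(tokenized_corpus)
query = "support services for people with disability".split()  # illustrative query
top_passages = bm25.get_top_n(query, b, n=3)  # the 3 most relevant chunks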
| 47.272727 | 138 | 0.761538 | 177 | 1,040 | 4.474576 | 0.514124 | 0.035354 | 0.05303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002336 | 0.176923 | 1,040 | 21 | 139 | 49.52381 | 0.922897 | 0.749038 | 0 | 0 | 0 | 0 | 0.245968 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.111111 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69170f355d442cc7becc20e80a9796b70651f43b | 663 | py | Python | views/terminal_view.py | rbenamotz/LEMPA | eab84e2494aac0d1461582c7f83405cb4ab7c16e | [
"MIT"
] | 83 | 2020-08-11T21:03:21.000Z | 2022-02-27T17:52:31.000Z | views/terminal_view.py | rbenamotz/LEMPA | eab84e2494aac0d1461582c7f83405cb4ab7c16e | [
"MIT"
] | 7 | 2020-09-06T17:10:04.000Z | 2021-05-25T11:53:18.000Z | views/terminal_view.py | rbenamotz/LEMPA | eab84e2494aac0d1461582c7f83405cb4ab7c16e | [
"MIT"
] | 6 | 2020-09-05T23:42:01.000Z | 2021-06-21T04:09:03.000Z | from views import View
HEADER_LEN = 70
class TerminalView(View):
def __init__(self, app):
super().__init__(app)
def cleanup(self):
print("Goodbye")
def print(self, txt):
if (txt):
print("\033[92m{}\033[39m".format(txt))
def detail(self, txt):
print(txt)
def error(self, e):
if not e:
return
print("\n\n{}".format("|" * HEADER_LEN))
print("\033[31m{}\033[39m".format(e))
print("!" * HEADER_LEN)
def header(self):
if self.app and self.app.app_state:
print("\n\n\033[7m{:^{w}}\033[0m".format(self.app.app_state, w=HEADER_LEN))
| 22.862069 | 87 | 0.544495 | 92 | 663 | 3.771739 | 0.380435 | 0.103746 | 0.069164 | 0.086455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063025 | 0.282051 | 663 | 28 | 88 | 23.678571 | 0.665966 | 0 | 0 | 0 | 0 | 0 | 0.11463 | 0.037707 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.047619 | 0 | 0.428571 | 0.380952 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
691e359bcb9bd3066852922dd6dc0c8be344038f | 699 | py | Python | hello_world_cpp/launch/server.launch.py | fjnkt98/hello_world | 095301082410a73ebee4a38970d04a2b9fcccae4 | [
"BSD-2-Clause"
] | null | null | null | hello_world_cpp/launch/server.launch.py | fjnkt98/hello_world | 095301082410a73ebee4a38970d04a2b9fcccae4 | [
"BSD-2-Clause"
] | null | null | null | hello_world_cpp/launch/server.launch.py | fjnkt98/hello_world | 095301082410a73ebee4a38970d04a2b9fcccae4 | [
"BSD-2-Clause"
] | null | null | null | import launch
import launch_ros
def generate_launch_description():
server_node_container = launch_ros.actions.ComposableNodeContainer(
node_name='server_node_container',
node_namespace='',
package='rclcpp_components',
node_executable='component_container',
composable_node_descriptions=[
launch_ros.descriptions.ComposableNode(
package='hello_world_cpp',
node_plugin='hello_world_cpp::Server',
node_name='server'
)
],
output='screen'
)
return launch.LaunchDescription([server_node_container])
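# Assumed invocation via the standard ros2 CLI once the package is built and
# sourced (package and file names taken from this record):
#   ros2 launch hello_world_cpp server.launch.py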
| 33.285714 | 71 | 0.595136 | 58 | 699 | 6.758621 | 0.5 | 0.102041 | 0.145408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.329041 | 699 | 20 | 72 | 34.95 | 0.835821 | 0 | 0 | 0 | 1 | 0 | 0.153076 | 0.062947 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.111111 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69211ba2113bac15f1fed9f83d65445f94323e53 | 2,198 | py | Python | todobackend/todobackend/settings.py | Grox-Ni/todoapp | 7a68c5a7c6384085eeba4b3fc29ae44e20c098a6 | [
"MIT"
] | null | null | null | todobackend/todobackend/settings.py | Grox-Ni/todoapp | 7a68c5a7c6384085eeba4b3fc29ae44e20c098a6 | [
"MIT"
] | null | null | null | todobackend/todobackend/settings.py | Grox-Ni/todoapp | 7a68c5a7c6384085eeba4b3fc29ae44e20c098a6 | [
"MIT"
] | null | null | null | import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
SECRET_KEY = '_6872wmne5+gnr6x9f&$js=)c12f+kn*#p35o-ou3^qy5h3^ab'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'todoapi.apps.TodosConfig',
'rest_framework',
'corsheaders',
)
MIDDLEWARE_CLASSES = (
'corsheaders.middleware.CorsMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
)
ROOT_URLCONF = 'todobackend.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'todobackend.wsgi.application'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'todolist',
'USER': 'root',
'PASSWORD':'123456',
'HOST':'',
'PORT':'',
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.8/howto/static-files/
STATIC_URL = '/static/'
CORS_ORIGIN_ALLOW_ALL = True
| 23.634409 | 70 | 0.67061 | 218 | 2,198 | 6.633028 | 0.559633 | 0.107884 | 0.047026 | 0.020747 | 0.062241 | 0.040111 | 0.040111 | 0 | 0 | 0 | 0 | 0.016844 | 0.189718 | 2,198 | 92 | 71 | 23.891304 | 0.795059 | 0.116015 | 0 | 0 | 0 | 0 | 0.558884 | 0.458161 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.016393 | 0.016393 | 0 | 0.016393 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
693a0199dc8894a4cfdc4a4894ca98fde9d509ce | 404 | py | Python | is_cuda_available.py | chumingqian/Model_Compression_For_YOLOV4 | 3bc803ff6ebb4000bf1f2cafc61c7711fea7a2ab | [
"Apache-2.0"
] | 13 | 2020-12-14T02:22:47.000Z | 2021-08-07T09:58:09.000Z | is_cuda_available.py | chumingqian/Model_Compression_For_YOLOV4 | 3bc803ff6ebb4000bf1f2cafc61c7711fea7a2ab | [
"Apache-2.0"
] | 2 | 2021-02-02T17:37:40.000Z | 2021-02-10T01:40:11.000Z | is_cuda_available.py | chumingqian/Model_Compression_For_YOLOV3-V4 | e4a6c51084e55c18ad51e854fafccb90c7e9f2dc | [
"Apache-2.0"
] | 3 | 2021-12-08T17:20:32.000Z | 2022-01-06T06:55:21.000Z | import os
# os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
# os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import torch
flag = torch.cuda.is_available()
print(flag)
ngpu= 1
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
print(device)
print("\n")
# print('/n')
print(torch.cuda.get_device_name(0))
print(torch.rand(3,3).cuda()) | 23.764706 | 86 | 0.710396 | 69 | 404 | 4.014493 | 0.507246 | 0.097473 | 0.093863 | 0.144404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019553 | 0.113861 | 404 | 17 | 87 | 23.764706 | 0.75419 | 0.339109 | 0 | 0 | 0 | 0 | 0.041825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
693b7325f433bc079a0e3cb877d5731a3566eef2 | 6,542 | py | Python | codegen/codegen/host_codegen.py | spcl/fblas | 96425fbdbaeab6f43997d839836b8224a04f3b53 | [
"BSD-3-Clause"
] | 68 | 2019-02-07T21:30:21.000Z | 2022-02-16T20:09:27.000Z | codegen/codegen/host_codegen.py | spcl/fblas | 96425fbdbaeab6f43997d839836b8224a04f3b53 | [
"BSD-3-Clause"
] | 2 | 2019-03-15T17:49:03.000Z | 2019-07-24T14:05:35.000Z | codegen/codegen/host_codegen.py | spcl/fblas | 96425fbdbaeab6f43997d839836b8224a04f3b53 | [
"BSD-3-Clause"
] | 25 | 2019-03-15T03:00:15.000Z | 2021-08-04T10:21:43.000Z | import json
from codegen import json_definitions as jd
from codegen import json_writer as jw
from codegen import fblas_routine
from codegen import fblas_types
import codegen.generator_definitions as gd
from codegen.fblas_helper import FBLASHelper
import logging
import os
import jinja2
from typing import List
class HostAPICodegen:
_output_path = ""
def __init__(self, output_path: str):
self._output_path = output_path
def generateRoutines(self, routines: List[fblas_routine.FBLASRoutine]):
"""
Generates the code for the given routines
:param routines:
:return:
"""
routine_id = 0
json_routines = []
for r in routines:
print("Generating: " + r.user_name)
#dispatch
method_name = "_codegen_" + r.blas_name
method = getattr(self, method_name)
jr = method(r, routine_id)
routine_id = routine_id + 1
json_routines.append(jr)
#Output json for generated routines
json_content = {"routine": json_routines}
jw.write_to_file(self._output_path+"generated_routines.json", json_content)
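    # A minimal usage sketch (the output path and routine list here are assumed,
    # not taken from this module):
    #   codegen = HostAPICodegen("/tmp/fblas_generated/")
    #   codegen.generateRoutines(routines)  # routines: List[fblas_routine.FBLASRoutine]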
def _write_file(self, path, content, append=False):
print("Generating file: "+path)
with open(path, "a" if append else "w") as f:
if append is True:
f.write("\n")
f.write(content)
def _read_template_file(self, path):
templates = os.path.join(os.path.dirname(__file__), "../../templates")
loader = jinja2.FileSystemLoader(searchpath=templates)
logging.basicConfig()
logger = logging.getLogger('logger')
logger = jinja2.make_logging_undefined(logger=logger, base=jinja2.Undefined)
env = jinja2.Environment(loader=loader, undefined=logger)
env.lstrip_blocks = True
env.trim_blocks = True
return env.get_template(path)
def _codegen_dot(self, routine: fblas_routine.FBLASRoutine, id: int):
template = self._read_template_file("1/dot.cl")
chan_in_x_name = gd.CHANNEL_IN_VECTOR_X_BASE_NAME+str(id)
chan_in_y_name = gd.CHANNEL_IN_VECTOR_Y_BASE_NAME+str(id)
chan_out = gd.CHANNEL_OUT_SCALAR_BASE_NAME+str(id)
channels_routine = {"channel_in_vector_x": chan_in_x_name, "channel_in_vector_y": chan_in_y_name, "channel_out_scalar": chan_out}
output_path = self._output_path + "/" + routine.user_name+".cl"
self._write_file(output_path, template.render(routine=routine, channels=channels_routine))
#add helpers
template = self._read_template_file("helpers/"+gd.TEMPLATE_READ_VECTOR_X)
channels_helper = {"channel_out_vector": chan_in_x_name}
helper_name_read_x = gd.HELPER_READ_VECTOR_X_BASE_NAME+str(id)
self._write_file(output_path, template.render(helper_name=helper_name_read_x, helper=routine, channels=channels_helper), append=True)
#Read y
template = self._read_template_file("helpers/" + gd.TEMPLATE_READ_VECTOR_Y)
channels_helper = {"channel_out_vector": chan_in_y_name}
helper_name_read_y = gd.HELPER_READ_VECTOR_Y_BASE_NAME + str(id)
self._write_file(output_path, template.render(helper_name=helper_name_read_y, helper=routine, channels=channels_helper),
append=True)
#Write scalar
template = self._read_template_file("helpers/" + gd.TEMPLATE_WRITE_SCALAR)
channels_helper = {"channel_in_scalar": chan_out}
helper_name_write_scalar = gd.HELPER_WRITE_SCALAR_BASE_NAME + str(id)
self._write_file(output_path, template.render(helper_name=helper_name_write_scalar, helper=routine, channels=channels_helper),
append=True)
#create the json entries
        json = {}  # note: this local name shadows the json module imported at the top of the file
jw.add_commons(json, routine)
jw.add_incx(json, routine)
jw.add_incy(json, routine)
jw.add_item(json, jd.GENERATED_READ_VECTOR_X, helper_name_read_x)
jw.add_item(json, jd.GENERATED_READ_VECTOR_Y, helper_name_read_y)
jw.add_item(json, jd.GENERATED_WRITE_SCALAR, helper_name_write_scalar)
return json
def _codegen_axpy(self, routine: fblas_routine.FBLASRoutine, id: int):
template = self._read_template_file("1/axpy.cl")
chan_in_x_name = gd.CHANNEL_IN_VECTOR_X_BASE_NAME+str(id)
chan_in_y_name = gd.CHANNEL_IN_VECTOR_Y_BASE_NAME+str(id)
chan_out = gd.CHANNEL_OUT_VECTOR_BASE_NAME+str(id)
channels_routine = {"channel_in_vector_x": chan_in_x_name, "channel_in_vector_y": chan_in_y_name, "channel_out_vector": chan_out}
output_path = self._output_path + "/" + routine.user_name+".cl"
self._write_file(output_path, template.render(routine=routine, channels=channels_routine))
#add helpers
template = self._read_template_file("helpers/"+gd.TEMPLATE_READ_VECTOR_X)
channels_helper = {"channel_out_vector": chan_in_x_name}
helper_name_read_x = gd.HELPER_READ_VECTOR_X_BASE_NAME+str(id)
self._write_file(output_path, template.render(helper_name=helper_name_read_x, helper=routine, channels=channels_helper), append=True)
#Read y
template = self._read_template_file("helpers/" + gd.TEMPLATE_READ_VECTOR_Y)
channels_helper = {"channel_out_vector": chan_in_y_name}
helper_name_read_y = gd.HELPER_READ_VECTOR_Y_BASE_NAME + str(id)
self._write_file(output_path, template.render(helper_name=helper_name_read_y, helper=routine, channels=channels_helper),
append=True)
#Write vector
template = self._read_template_file("helpers/" + gd.TEMPLATE_WRITE_VECTOR)
channels_helper = {"channel_in_vector": chan_out}
helper_name_write_vector = gd.HELPER_WRITE_VECTOR_BASE_NAME + str(id)
self._write_file(output_path, template.render(helper_name=helper_name_write_vector, helper=routine, channels=channels_helper),
append=True)
#create the json entries
json = {}
jw.add_commons(json, routine)
jw.add_incx(json, routine)
jw.add_incy(json, routine)
jw.add_item(json, jd.GENERATED_READ_VECTOR_X, helper_name_read_x)
jw.add_item(json, jd.GENERATED_READ_VECTOR_Y, helper_name_read_y)
jw.add_item(json, jd.GENERATED_WRITE_VECTOR, helper_name_write_vector)
return json
| 43.324503 | 141 | 0.684653 | 869 | 6,542 | 4.741082 | 0.127733 | 0.058252 | 0.032039 | 0.037864 | 0.645631 | 0.629126 | 0.629126 | 0.629126 | 0.629126 | 0.604854 | 0 | 0.001779 | 0.226842 | 6,542 | 150 | 142 | 43.613333 | 0.812772 | 0.032712 | 0 | 0.407767 | 0 | 0 | 0.061205 | 0.003666 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058252 | false | 0 | 0.106796 | 0 | 0.213592 | 0.019417 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
693d4412ea5398c8c6577225252aaea517df327c | 5,221 | py | Python | test/test_vehicle.py | pchevallier/bimmer_connected | 7cfeced15b8db1970761ca8345408b7f839bb085 | [
"Apache-2.0"
] | 141 | 2019-08-28T06:23:37.000Z | 2022-03-31T08:30:33.000Z | test/test_vehicle.py | pchevallier/bimmer_connected | 7cfeced15b8db1970761ca8345408b7f839bb085 | [
"Apache-2.0"
] | 344 | 2019-08-06T01:35:44.000Z | 2022-03-31T20:19:21.000Z | test/test_vehicle.py | EddyK69/bimmer_connected | 809b3725423af0b43aed1b4a68bc76f4713f8130 | [
"Apache-2.0"
] | 44 | 2019-09-13T17:48:48.000Z | 2022-03-11T20:24:28.000Z | """Tests for ConnectedDriveVehicle."""
import unittest
from unittest import mock
from test import load_response_json, BackendMock, TEST_USERNAME, TEST_PASSWORD, TEST_REGION, \
G31_VIN, F48_VIN, I01_VIN, I01_NOREX_VIN, F15_VIN, F45_VIN, F31_VIN, TEST_VEHICLE_DATA, \
ATTRIBUTE_MAPPING, MISSING_ATTRIBUTES, ADDITIONAL_ATTRIBUTES, G30_PHEV_OS7_VIN, AVAILABLE_STATES_MAPPING
from bimmer_connected.vehicle import ConnectedDriveVehicle, DriveTrainType
from bimmer_connected.account import ConnectedDriveAccount
_VEHICLES = load_response_json('vehicles.json')['vehicles']
G31_VEHICLE = _VEHICLES[0]
class TestVehicle(unittest.TestCase):
"""Tests for ConnectedDriveVehicle."""
def test_drive_train(self):
"""Tests around drive_train attribute."""
vehicle = ConnectedDriveVehicle(None, G31_VEHICLE)
self.assertEqual(DriveTrainType.CONVENTIONAL, vehicle.drive_train)
def test_parsing_attributes(self):
"""Test parsing different attributes of the vehicle."""
backend_mock = BackendMock()
with mock.patch('bimmer_connected.account.requests', new=backend_mock):
account = ConnectedDriveAccount(TEST_USERNAME, TEST_PASSWORD, TEST_REGION)
for vehicle in account.vehicles:
print(vehicle.name)
self.assertIsNotNone(vehicle.drive_train)
self.assertIsNotNone(vehicle.name)
self.assertIsNotNone(vehicle.has_internal_combustion_engine)
self.assertIsNotNone(vehicle.has_hv_battery)
self.assertIsNotNone(vehicle.drive_train_attributes)
self.assertIsNotNone(vehicle.has_statistics_service)
self.assertIsNotNone(vehicle.has_weekly_planner_service)
self.assertIsNotNone(vehicle.has_destination_service)
self.assertIsNotNone(vehicle.has_rangemap_service)
def test_drive_train_attributes(self):
"""Test parsing different attributes of the vehicle."""
backend_mock = BackendMock()
with mock.patch('bimmer_connected.account.requests', new=backend_mock):
account = ConnectedDriveAccount(TEST_USERNAME, TEST_PASSWORD, TEST_REGION)
for vehicle in account.vehicles:
self.assertEqual(vehicle.vin in [G31_VIN, F48_VIN, F15_VIN, F45_VIN, F31_VIN, G30_PHEV_OS7_VIN],
vehicle.has_internal_combustion_engine)
self.assertEqual(vehicle.vin in [I01_VIN, I01_NOREX_VIN, G30_PHEV_OS7_VIN],
vehicle.has_hv_battery)
self.assertEqual(vehicle.vin in [I01_VIN],
vehicle.has_range_extender)
def test_parsing_of_lsc_type(self):
"""Test parsing the lsc type field."""
backend_mock = BackendMock()
with mock.patch('bimmer_connected.account.requests', new=backend_mock):
account = ConnectedDriveAccount(TEST_USERNAME, TEST_PASSWORD, TEST_REGION)
for vehicle in account.vehicles:
self.assertIsNotNone(vehicle.lsc_type)
def test_available_attributes(self):
"""Check that available_attributes returns exactly the arguments we have in our test data."""
backend_mock = BackendMock()
with mock.patch('bimmer_connected.account.requests', new=backend_mock):
account = ConnectedDriveAccount(TEST_USERNAME, TEST_PASSWORD, TEST_REGION)
for vin, dirname in TEST_VEHICLE_DATA.items():
vehicle = account.get_vehicle(vin)
print(vehicle.name)
status_data = load_response_json('{}/status.json'.format(dirname))
existing_attributes = status_data['vehicleStatus'].keys()
existing_attributes = sorted([ATTRIBUTE_MAPPING.get(a, a) for a in existing_attributes
if a not in MISSING_ATTRIBUTES])
expected_attributes = sorted([a for a in vehicle.available_attributes if a not in ADDITIONAL_ATTRIBUTES])
self.assertListEqual(existing_attributes, expected_attributes)
def test_available_state_services(self):
"""Check that available_attributes returns exactly the arguments we have in our test data."""
backend_mock = BackendMock()
with mock.patch('bimmer_connected.account.requests', new=backend_mock):
account = ConnectedDriveAccount(TEST_USERNAME, TEST_PASSWORD, TEST_REGION)
vehicles = load_response_json('vehicles.json')
for test_vehicle in vehicles['vehicles']:
vehicle = account.get_vehicle(test_vehicle['vin'])
print(vehicle.name)
services_to_check = {
k: v
for k, v in test_vehicle.items()
if k in list(AVAILABLE_STATES_MAPPING)
}
available_services = ['STATUS']
for key, value in services_to_check.items():
if AVAILABLE_STATES_MAPPING[key].get(value):
available_services += AVAILABLE_STATES_MAPPING[key][value]
if vehicle.drive_train != DriveTrainType.CONVENTIONAL:
available_services += ['EFFICIENCY', 'NAVIGATION']
self.assertListEqual(sorted(vehicle.available_state_services), sorted(available_services))
| 48.794393 | 117 | 0.690864 | 575 | 5,221 | 5.993043 | 0.2 | 0.031921 | 0.07545 | 0.041788 | 0.513639 | 0.402786 | 0.354034 | 0.308474 | 0.308474 | 0.308474 | 0 | 0.010924 | 0.2285 | 5,221 | 106 | 118 | 49.254717 | 0.844588 | 0.078529 | 0 | 0.269231 | 0 | 0 | 0.055136 | 0.034591 | 0 | 0 | 0 | 0 | 0.205128 | 1 | 0.076923 | false | 0.076923 | 0.064103 | 0 | 0.153846 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
69487310ad1cbe8b3580ebe35ae8596a6868c8cd | 2,364 | py | Python | keynote-export.py | jmckind/keynote-export | 405443a7734b87397f062526da9d7b7fe54f080a | [
"MIT"
] | 1 | 2015-11-19T02:59:05.000Z | 2015-11-19T02:59:05.000Z | keynote-export.py | jmckind/keynote-export | 405443a7734b87397f062526da9d7b7fe54f080a | [
"MIT"
] | null | null | null | keynote-export.py | jmckind/keynote-export | 405443a7734b87397f062526da9d7b7fe54f080a | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from AppKit import NSURL, NSMutableDictionary
from ScriptingBridge import SBApplication
import sys
DEBUG = False
BUNDLE = 'com.apple.iWork.Keynote'
SAVING_OPTIONS = {
'yes': 0x79657320, # 'yes '
'no': 0x6E6F2020, # 'no '
'ask': 0x61736B20, # 'ask '
}
EXPORT_FORMAT = {
'Khtm': 0x4B68746D, # HTML
'Kmov': 0x4B6D6F76, # QuickTime movie
'Kpdf': 0x4B706466, # PDF
'Kimg': 0x4B696D67, # Image
'Kppt': 0x4B707074, # Microsoft PowerPoint
'Kkey': 0x4B6B6579, # Keynote 09
}
IMAGE_FORMATS = {
'Kifj': 0x4B69666A, # JPEG
'Kifp': 0x4B696670, # PNG
'Kift': 0x4B696674, # TIFF
}
MOVIE_FORMATS = {
'Kmf3': 0x4B6D6633, # 360p
'Kmf5': 0x4B6D6635, # 540p
'Kmf7': 0x4B6D6637, # 720p
}
PRINT_WHAT = {
'Kpwi': 0x4B707769, # individual slides
'Kpwn': 0x4B70776E, # slides with notes
'Kpwh': 0x4B707768, # handouts
}
EXPORT_PROPERTIES = {
'Kxic': 0x4B786963, # compressed image quality, ranging from 0.0 to 1.0
'Kxif': 0x4B786966, # format for resulting images
'Kxmf': 0x4B786D66, # format for exported movie
'Kxpw': 0x4B787077, # choose whether to include notes, etc.
'Kxpa': 0x4B787061, # print each stage of builds
'Kxps': 0x4B787073, # include skipped slides
'Kxpb': 0x4B787062, # add borders around slides
'Kxpn': 0x4B78706E, # include slide numbers
'Kxpd': 0x4B787064, # include date
'Kxkf': 0x4B786B66, # export in raw KPF
'KxPW': 0x4B785057, # password
'KxPH': 0x4B785048, # password hint
}
if len(sys.argv) < 2:
print "usage: %s <keynote-file>" % sys.argv[0]
sys.exit(-1)
# Export options
keynote_file = sys.argv[1]
to_file = NSURL.fileURLWithPath_(keynote_file.split('.key')[0] + '.pdf')
as_format = EXPORT_FORMAT['Kpdf']
with_properties = NSMutableDictionary.dictionaryWithDictionary_({
})
if DEBUG:
print(" KEYNOTE_FILE: %s" % keynote_file)
print(" TO_FILE: %s" % to_file)
print(" AS_FORMAT: %s" % as_format)
print("WITH_PROPERTIES: %s" % with_properties)
# Open Keynote file
keynote = SBApplication.applicationWithBundleIdentifier_(BUNDLE)
doc = keynote.open_(keynote_file)
# Export to format
doc.exportTo_as_withProperties_(to_file, as_format, with_properties)
# Close keynote
doc.closeSaving_savingIn_(SAVING_OPTIONS['no'], None)
keynote.quitSaving_(SAVING_OPTIONS['no'])
| 27.811765 | 75 | 0.678511 | 273 | 2,364 | 5.74359 | 0.578755 | 0.049107 | 0.015306 | 0.022959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131027 | 0.192893 | 2,364 | 84 | 76 | 28.142857 | 0.690776 | 0.226734 | 0 | 0 | 0 | 0 | 0.14222 | 0.012828 | 0 | 0 | 0.167317 | 0 | 0 | 0 | null | null | 0 | 0.046154 | null | null | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69517993578b4ee9869a253cdcca60e4b9c8c7b3 | 7,072 | py | Python | pomade/assertions.py | saucelabs/pomade | 3be5f9910ed3678e32d2efb174a13087930ed68f | [
"Apache-2.0"
] | null | null | null | pomade/assertions.py | saucelabs/pomade | 3be5f9910ed3678e32d2efb174a13087930ed68f | [
"Apache-2.0"
] | 1 | 2016-03-08T16:02:03.000Z | 2016-03-09T10:22:07.000Z | pomade/assertions.py | saucelabs/pomade | 3be5f9910ed3678e32d2efb174a13087930ed68f | [
"Apache-2.0"
] | null | null | null | from pprint import pformat
import json
import time
import traceback
import sys
from config import SPIN_TIMEOUT
class FailTestException(Exception):
pass
def spinAssert(msg, test, timeout=None, args=[]):
timeout = timeout or SPIN_TIMEOUT
name = getattr(test, '__name__', 'unknown')
last_e = None
for i in xrange(timeout):
try:
if not test(*args):
raise AssertionError(msg)
if i > 0:
print msg, "success on %s (%s)" % (i + 1, name)
break
except FailTestException:
raise
except Exception, e:
if (str(e), type(e)) != (str(last_e), type(last_e)):
print msg, "(try: %s):" % (i + 1), str(e), type(e)
traceback.print_exc(file=sys.stdout)
last_e = e
time.sleep(1)
else:
print "%s fail (%s tries) (%s)" % (msg, i + 1, name)
raise AssertionError(msg)
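# A minimal usage sketch (the predicate below is assumed, not part of this module):
# spinAssert re-runs the test roughly once per second until it passes or `timeout`
# seconds elapse, which suits polling a slow backend.
#
#   def job_is_done():
#       return backend.job_status(job_id) == 'complete'  # hypothetical backend call
#
#   spinAssert("job never completed", job_is_done, timeout=120)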
class PomadeAssertions(object):
def _format(self, var):
formatted_var = pformat(var)
return formatted_var
def assert_equal(self, first, second, message=None):
self.assertEqual(first, second, message)
def assert_not_equal(self, first, second, message=None):
self.assertNotEqual(first, second, message)
def assert_is_valid_json(self, filename):
try:
with open(filename) as fo:
json.load(fo)
except ValueError, e:
self.fail(filename + " is not valid json (%s)" % e.message)
def assert_less(self, first, second, message=None):
message = (message if message
else "%s not less than %s" % (first, second))
self.assertTrue(first < second, message)
def assert_less_equal(self, first, second, message=None):
message = (message if message
else "%s not less than or equal to %s" % (first, second))
self.assertTrue(first <= second, message)
def assert_greater(self, first, second, message=None):
message = (message if message
else "%s not greater than %s" % (first, second))
self.assertTrue(first > second, message)
def assert_greater_equal(self, first, second, message=None):
message = (message if message
else "%s not greater than or equal to %s" % (first, second))
self.assertTrue(first >= second, message)
def assert_none(self, item, message=None):
message = (message if message
else "%s should have been None" % pformat(item))
self.assertTrue(item is None, message)
def assert_not_none(self, item, message=None):
message = (message if message
else "%s should not have been None" % pformat(item))
self.assertFalse(item is None, message)
def assert_excepts(self, exception_type, func, *args, **kwargs):
excepted = False
try:
val = func(*args, **kwargs)
print ("assert_excepts: Crap. That wasn't supposed to work."
" Here's what I got: ", pformat(val))
except exception_type, e:
print ("assert_excepts: Okay, %s failed the way it was supposed"
" to: %s" % (func, e))
excepted = True
self.assertTrue(excepted, "assert_excepts: calling %s didn't raise %s"
% (func, exception_type))
def assert_in(self, needle, haystack, message=None):
return self.assert_contains(haystack, needle, message)
def assert_not_in(self, needle, haystack, message=None):
return self.assert_not_contains(haystack, needle, message)
def assert_contains(self, haystack, needle, message=None):
displaystack = self._format(haystack)
message = (message if message
else "%s not found in %s" % (needle, displaystack))
its_in_there = False
try:
if needle in haystack:
its_in_there = True
except:
pass
try:
if not its_in_there and haystack in needle:
print "! HEY !" * 5
print "HEY! it looks like you called assert_contains backwards"
print "! HEY !" * 5
        except Exception:
pass
self.assertTrue(needle in haystack, message)
def assert_any(self, conditions, message=None):
message = (message if message
else "%s were all False" % pformat(conditions))
self.assertTrue(any(conditions), message)
def assert_not_any(self, conditions, message=None):
message = (message if message
else "%s was not all False" % pformat(conditions))
self.assertFalse(any(conditions), message)
def assert_not_contains(self, haystack, needle, message=None):
displaystack = self._format(haystack)
message = (message if message
else "%s not wanted but found in %s" % (needle, displaystack))
self.assertFalse(needle in haystack, message)
def assert_startswith(self, haystack, needle, message=None):
displaystack = self._format(haystack)
message = (message if message
else "%s should have been at the beginning of %s"
% (needle, displaystack))
self.assertTrue(haystack.startswith(needle), message)
def assert_endswith(self, haystack, needle, message=None):
displaystack = self._format(haystack)
message = (message if message
else "%s should have been at the end of %s"
% (needle, displaystack))
self.assertTrue(haystack.endswith(needle), message)
def assert_not_startswith(self, haystack, needle, message=None):
displaystack = self._format(haystack)
message = (message if message
else "%s should not have been at the beginning of %s"
% (needle, displaystack))
self.assertFalse(haystack.startswith(needle), message)
    def assert_is(self, expected, actual, message=None):
        message = message if message else "%s is not %s" % (expected, actual)
        self.assertTrue(expected is actual, message)
    def assert_is_not(self, expected, actual, message=None):
        message = message if message else "%s is %s" % (expected, actual)
        self.assertTrue(expected is not actual, message)
def spinAssert(self, *args, **kwargs):
return spinAssert(*args, **kwargs)
class BasicAssertions(object):
# PomadeAssertions depends on these basic assertions, which come with
# unittest.TestCase but we don't want to subclass that here. So let's
# just duplicate the functionality, sigh
def assertTrue(self, test, msg=None):
msg = msg or "%s was not true" % test
        assert test is True, msg
def assertEqual(self, obj1, obj2, msg=None):
msg = msg or "%s != %s" % (obj1, obj2)
assert obj1 == obj2, msg
    def assertFalse(self, test, msg=None):
msg = msg or "%s was not false" % test
assert test is False, msg
    def assertNotEqual(self, obj1, obj2, msg=None):
msg = msg or "%s == %s" % (obj1, obj2)
assert obj1 != obj2, msg | 36.453608 | 79 | 0.609729 | 868 | 7,072 | 4.890553 | 0.184332 | 0.044523 | 0.067845 | 0.081272 | 0.58775 | 0.534982 | 0.438398 | 0.389164 | 0.389164 | 0.365135 | 0 | 0.003774 | 0.288179 | 7,072 | 194 | 80 | 36.453608 | 0.839491 | 0.024604 | 0 | 0.227273 | 0 | 0 | 0.111095 | 0 | 0 | 0 | 0 | 0 | 0.383117 | 0 | null | null | 0.019481 | 0.038961 | null | null | 0.064935 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69578623cdda52681346851eaabb75c2b5fbaa8b | 1,739 | py | Python | src/mlregression/mlreg.py | muhlbach/ml-regression | 59dfa5acc9841729d632030492e029bb329ce3ed | [
"MIT"
] | 1 | 2021-11-12T22:45:32.000Z | 2021-11-12T22:45:32.000Z | src/mlregression/mlreg.py | muhlbach/ml-regression | 59dfa5acc9841729d632030492e029bb329ce3ed | [
"MIT"
] | 1 | 2021-11-15T22:14:10.000Z | 2021-11-16T15:56:14.000Z | src/mlregression/mlreg.py | muhlbach/ml-regression | 59dfa5acc9841729d632030492e029bb329ce3ed | [
"MIT"
] | null | null | null | #------------------------------------------------------------------------------
# Libraries
#------------------------------------------------------------------------------
# Standard
import numpy as np
# User
from .base.base_mlreg import BaseMLRegressor
#------------------------------------------------------------------------------
# MLRegressor
#------------------------------------------------------------------------------
class MLRegressor(BaseMLRegressor):
"""
    Implements the main MLRegressor interface: a thin wrapper around
    BaseMLRegressor that exposes the cross-validation and sample-splitting
    settings as constructor arguments.
"""
# -------------------------------------------------------------------------
# Constructor function
# -------------------------------------------------------------------------
    def __init__(self,
                 estimator,
                 param_grid=None,
                 cv_params=None,
                 fold_type="KFold",
                 n_cv_folds=5,
                 shuffle=False,
                 test_size=None,
                 max_n_models=50,
                 n_cf_folds=2,
                 verbose=False,
                 ):
        # Build the cv_params dict here rather than as a default argument:
        # a mutable default would be shared across all instances.
        if cv_params is None:
            cv_params = {'scoring':None,
                         'n_jobs':None,
                         'refit':True,
                         'verbose':0,
                         'pre_dispatch':'2*n_jobs',
                         'error_score':np.nan,
                         'return_train_score':False}
super().__init__(
estimator=estimator,
param_grid=param_grid,
cv_params=cv_params,
fold_type=fold_type,
n_cv_folds=n_cv_folds,
shuffle=shuffle,
test_size=test_size,
max_n_models=max_n_models,
n_cf_folds=n_cf_folds,
verbose=verbose)
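#------------------------------------------------------------------------------
# Example usage (a sketch, not part of the module; assumes scikit-learn is
# installed and that X, y are a feature matrix and a target vector; the
# fit/predict interface is assumed to come from BaseMLRegressor)
#------------------------------------------------------------------------------
# from sklearn.ensemble import RandomForestRegressor
# mlreg = MLRegressor(estimator=RandomForestRegressor())
# mlreg.fit(X=X, y=y)
# y_hat = mlreg.predict(X=X)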
| 35.489796 | 79 | 0.343301 | 120 | 1,739 | 4.6 | 0.475 | 0.048913 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005177 | 0.333525 | 1,739 | 48 | 80 | 36.229167 | 0.471096 | 0.320299 | 0 | 0 | 0 | 0 | 0.068339 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.0625 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6958db9fdde1fe0bbdc0e41e1041c7e74f35fc42 | 818 | py | Python | code_soup/common/vision/models/__init__.py | gchhablani/code-soup | eec666b6cd76bad9c7133a185bb85021b4a390f0 | [
"MIT"
] | 18 | 2021-07-29T16:21:02.000Z | 2021-12-13T12:58:15.000Z | code_soup/common/vision/models/__init__.py | gchhablani/code-soup | eec666b6cd76bad9c7133a185bb85021b4a390f0 | [
"MIT"
] | 93 | 2021-08-04T02:48:15.000Z | 2022-01-16T04:58:51.000Z | code_soup/common/vision/models/__init__.py | gchhablani/code-soup | eec666b6cd76bad9c7133a185bb85021b4a390f0 | [
"MIT"
] | 27 | 2021-08-06T06:51:34.000Z | 2021-11-02T05:47:18.000Z | from torchvision.models import (
alexnet,
densenet121,
densenet161,
densenet169,
densenet201,
googlenet,
inception_v3,
mnasnet0_5,
mnasnet0_75,
mnasnet1_0,
mnasnet1_3,
mobilenet_v2,
mobilenet_v3_large,
mobilenet_v3_small,
resnet18,
resnet34,
resnet50,
resnet101,
resnet152,
resnext50_32x4d,
resnext101_32x8d,
shufflenet_v2_x0_5,
shufflenet_v2_x1_0,
shufflenet_v2_x1_5,
shufflenet_v2_x2_0,
squeezenet1_0,
squeezenet1_1,
vgg11,
vgg13,
vgg16,
vgg19,
wide_resnet50_2,
wide_resnet101_2,
)
from code_soup.common.vision.models.allconvnet import AllConvNet
from code_soup.common.vision.models.nin import NIN
from code_soup.common.vision.models.simple_cnn_classifier import SimpleCnnClassifier
| 20.45 | 84 | 0.717604 | 96 | 818 | 5.75 | 0.552083 | 0.086957 | 0.065217 | 0.097826 | 0.163043 | 0.163043 | 0 | 0 | 0 | 0 | 0 | 0.124409 | 0.223716 | 818 | 39 | 85 | 20.974359 | 0.744882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.105263 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6958fc1c0934f28a0597e1de1f9bbf912390777b | 4,198 | py | Python | desktop/core/ext-py/django-celery-beat-1.4.0/django_celery_beat/migrations/0005_add_solarschedule_events_choices_squashed_0009_merge_20181012_1416.py | maulikjs/hue | 59ac879b55bb6fb26ecb4e85f4c70836fc21173f | [
"Apache-2.0"
] | 5,079 | 2015-01-01T03:39:46.000Z | 2022-03-31T07:38:22.000Z | desktop/core/ext-py/django-celery-beat-1.4.0/django_celery_beat/migrations/0005_add_solarschedule_events_choices_squashed_0009_merge_20181012_1416.py | zks888/hue | 93a8c370713e70b216c428caa2f75185ef809deb | [
"Apache-2.0"
] | 1,623 | 2015-01-01T08:06:24.000Z | 2022-03-30T19:48:52.000Z | desktop/core/ext-py/django-celery-beat-1.4.0/django_celery_beat/migrations/0005_add_solarschedule_events_choices_squashed_0009_merge_20181012_1416.py | zks888/hue | 93a8c370713e70b216c428caa2f75185ef809deb | [
"Apache-2.0"
] | 2,033 | 2015-01-04T07:18:02.000Z | 2022-03-28T19:55:47.000Z | # Generated by Django 2.1.2 on 2018-10-12 14:18
from __future__ import absolute_import, unicode_literals
from django.db import migrations, models
import django_celery_beat.validators
import timezone_field.fields
class Migration(migrations.Migration):
replaces = [
('django_celery_beat', '0005_add_solarschedule_events_choices'),
('django_celery_beat', '0006_auto_20180210_1226'),
('django_celery_beat', '0006_auto_20180322_0932'),
('django_celery_beat', '0007_auto_20180521_0826'),
('django_celery_beat', '0008_auto_20180914_1922'),
]
dependencies = [
('django_celery_beat', '0004_auto_20170221_0000'),
]
operations = [
migrations.AlterField(
model_name='solarschedule',
name='event',
field=models.CharField(
choices=[('dawn_astronomical', 'dawn_astronomical'),
('dawn_civil', 'dawn_civil'),
('dawn_nautical', 'dawn_nautical'),
('dusk_astronomical', 'dusk_astronomical'),
('dusk_civil', 'dusk_civil'),
('dusk_nautical', 'dusk_nautical'),
('solar_noon', 'solar_noon'), ('sunrise', 'sunrise'),
('sunset', 'sunset')], max_length=24,
verbose_name='event'),
),
migrations.AlterModelOptions(
name='crontabschedule',
options={
'ordering': ['month_of_year', 'day_of_month', 'day_of_week',
'hour', 'minute', 'timezone'],
'verbose_name': 'crontab', 'verbose_name_plural': 'crontabs'},
),
migrations.AlterModelOptions(
name='crontabschedule',
options={
'ordering': ['month_of_year', 'day_of_month', 'day_of_week',
'hour', 'minute', 'timezone'],
'verbose_name': 'crontab', 'verbose_name_plural': 'crontabs'},
),
migrations.AddField(
model_name='crontabschedule',
name='timezone',
field=timezone_field.fields.TimeZoneField(default='UTC'),
),
migrations.AddField(
model_name='periodictask',
name='one_off',
field=models.BooleanField(default=False,
verbose_name='one-off task'),
),
migrations.AddField(
model_name='periodictask',
name='start_time',
field=models.DateTimeField(blank=True, null=True,
verbose_name='start_time'),
),
migrations.AlterField(
model_name='crontabschedule',
name='day_of_month',
field=models.CharField(default='*', max_length=124, validators=[
django_celery_beat.validators.day_of_month_validator],
verbose_name='day of month'),
),
migrations.AlterField(
model_name='crontabschedule',
name='day_of_week',
field=models.CharField(default='*', max_length=64, validators=[
django_celery_beat.validators.day_of_week_validator],
verbose_name='day of week'),
),
migrations.AlterField(
model_name='crontabschedule',
name='hour',
field=models.CharField(default='*', max_length=96, validators=[
django_celery_beat.validators.hour_validator],
verbose_name='hour'),
),
migrations.AlterField(
model_name='crontabschedule',
name='minute',
field=models.CharField(default='*', max_length=240, validators=[
django_celery_beat.validators.minute_validator],
verbose_name='minute'),
),
migrations.AlterField(
model_name='crontabschedule',
name='month_of_year',
field=models.CharField(default='*', max_length=64, validators=[
django_celery_beat.validators.month_of_year_validator],
verbose_name='month of year'),
),
]
| 40.365385 | 78 | 0.557408 | 372 | 4,198 | 5.951613 | 0.27957 | 0.065041 | 0.086721 | 0.070461 | 0.506775 | 0.429991 | 0.277326 | 0.256549 | 0.208672 | 0.208672 | 0 | 0.039873 | 0.324917 | 4,198 | 103 | 79 | 40.757282 | 0.741355 | 0.010719 | 0 | 0.43299 | 1 | 0 | 0.239942 | 0.036618 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.041237 | 0 | 0.082474 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
695a62db915f9a1f77713cced18912c22747ce67 | 1,253 | py | Python | python/cracking_codes_with_python/k_columnar_transposition_cipher_hack.py | MerrybyPractice/book-challanges-and-tutorials | f2d3e07c673fb6e3244e164f4fb03f80b8ec781b | [
"MIT"
] | null | null | null | python/cracking_codes_with_python/k_columnar_transposition_cipher_hack.py | MerrybyPractice/book-challanges-and-tutorials | f2d3e07c673fb6e3244e164f4fb03f80b8ec781b | [
"MIT"
] | null | null | null | python/cracking_codes_with_python/k_columnar_transposition_cipher_hack.py | MerrybyPractice/book-challanges-and-tutorials | f2d3e07c673fb6e3244e164f4fb03f80b8ec781b | [
"MIT"
] | null | null | null | # Columnar Transposition Hack per Cracking Codes with Python
# https://www.nostarch.com/crackingcodes/ (BSD Licensed)
import pyperclip
from j_detect_english import is_english
from g_decrypt_columnar_transposition_cipher import decrypt_message as decrypt
def hack_transposition(text):
print('Press Ctrl-C to quit at any time.')
print('Hacking...')
for key in range(1, len(text)):
print('Trying key #%s...' % (key))
print()
print('...')
decrypted_text = decrypt(key, text)
print()
print('...')
if is_english(decrypted_text):
print()
print('Possible encryption hack:')
print('Key %s: %s' % (key, decrypted_text[:100]))
print()
print('Enter D if done, anything else to continue the hack:')
response = input('>')
if response.strip().upper().startswith('D'):
return decrypted_text
return None
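# Example (hypothetical): given a ciphertext produced by the columnar
# transposition encrypter, hack_transposition(ciphertext) tries every key
# from 1 to len(ciphertext)-1 and pauses whenever a candidate decryption
# reads as English.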
def main(text):
hacked_text = hack_transposition(text)
if hacked_text == None:
print('Failed to hack the Columnar Transposition Encryption')
else:
print('Copying hacked string to clipboard:')
print(hacked_text)
pyperclip.copy(hacked_text)
if __name__ == '__main__':
text = input('What would you like to decrypt? ')
main(text)
| 24.568627 | 79 | 0.667199 | 161 | 1,253 | 5.024845 | 0.484472 | 0.044499 | 0.051916 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00404 | 0.209896 | 1,253 | 50 | 80 | 25.06 | 0.813131 | 0.090184 | 0 | 0.176471 | 0 | 0 | 0.248021 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.088235 | 0 | 0.205882 | 0.441176 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
695d84e2cab5875c13913366e8fd2643e1ba6224 | 498 | py | Python | tasks.py | larsbutler/celery-examples | 52682891d245e7ada9c6c0584267489aac55a9e7 | [
"BSD-3-Clause"
] | 19 | 2015-06-26T09:24:37.000Z | 2020-05-11T01:58:57.000Z | tasks.py | Stackato-Apps/celery-examples | 34ee4eea3a0b37cb12485a77f2ec75447abc5f31 | [
"BSD-3-Clause"
] | null | null | null | tasks.py | Stackato-Apps/celery-examples | 34ee4eea3a0b37cb12485a77f2ec75447abc5f31 | [
"BSD-3-Clause"
] | 8 | 2015-06-26T09:24:38.000Z | 2018-07-24T10:03:41.000Z | from celery.decorators import task
@task
def make_pi(num_calcs):
"""
Simple pi approximation based on the Leibniz formula for pi.
http://en.wikipedia.org/wiki/Leibniz_formula_for_pi
:param num_calcs: defines the length of the sequence
:type num_calcs: positive int
:returns: an approximation of pi
"""
print "Approximating pi with %s iterations" % num_calcs
pi = 0.0
for k in xrange(num_calcs):
pi += 4 * ((-1)**k / ((2.0 * k) + 1))
return pi
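# Example (hypothetical; requires a running broker and a Celery worker):
#   result = make_pi.delay(1000000)
#   result.get()   # -> approximately 3.14159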
| 26.210526 | 64 | 0.654618 | 76 | 498 | 4.171053 | 0.605263 | 0.126183 | 0.107256 | 0.119874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018519 | 0.240964 | 498 | 18 | 65 | 27.666667 | 0.820106 | 0 | 0 | 0 | 0 | 0 | 0.148305 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
695f3000e009b77a96dbba802da859035c0c537e | 474 | py | Python | tests/test_datareactor.py | data-dev/DataReactor | 26cd08129d348cf5ff3596c3e509619c59e300b8 | [
"MIT"
] | 1 | 2022-02-08T11:10:08.000Z | 2022-02-08T11:10:08.000Z | tests/test_datareactor.py | data-dev/DataReactor | 26cd08129d348cf5ff3596c3e509619c59e300b8 | [
"MIT"
] | null | null | null | tests/test_datareactor.py | data-dev/DataReactor | 26cd08129d348cf5ff3596c3e509619c59e300b8 | [
"MIT"
] | null | null | null | import tempfile
import unittest
from glob import glob
from parameterized import parameterized
from datareactor import DataReactor
class TestDataReactor(unittest.TestCase):
@parameterized.expand(glob("datasets/**/"))
def test_datasets(self, path_to_example):
reactor = DataReactor()
with tempfile.TemporaryDirectory() as path_to_output:
reactor.transform(
path_to_example,
path_to_output
)
| 23.7 | 61 | 0.685654 | 48 | 474 | 6.583333 | 0.5 | 0.075949 | 0.082278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.248945 | 474 | 19 | 62 | 24.947368 | 0.88764 | 0 | 0 | 0 | 0 | 0 | 0.025316 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.357143 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
69647d7ac20b03e313df97bd4c323fc9f26c4caa | 707 | py | Python | packages/M2Crypto-0.21.1/demo/ssl/xmlrpc_cli.py | RaphaelPrevost/Back2Shops | 5f2d369e82fe2a7b9b3a6c55782319b23d142dfd | [
"CECILL-B"
] | null | null | null | packages/M2Crypto-0.21.1/demo/ssl/xmlrpc_cli.py | RaphaelPrevost/Back2Shops | 5f2d369e82fe2a7b9b3a6c55782319b23d142dfd | [
"CECILL-B"
] | 6 | 2021-03-31T19:21:50.000Z | 2022-01-13T01:46:09.000Z | packages/M2Crypto-0.21.1/demo/ssl/xmlrpc_cli.py | RaphaelPrevost/Back2Shops | 5f2d369e82fe2a7b9b3a6c55782319b23d142dfd | [
"CECILL-B"
] | null | null | null | #!/usr/bin/env python
"""Demonstration of M2Crypto.xmlrpclib2.
Copyright (c) 1999-2004 Ng Pheng Siong. All rights reserved."""
from M2Crypto import Rand
from M2Crypto.m2xmlrpclib import Server, SSL_Transport
def ZServerSSL():
# Server is Zope-2.6.4 on ZServerSSL/0.12.
zs = Server('https://127.0.0.1:8443/', SSL_Transport())
print zs.propertyMap()
def xmlrpc_srv():
# Server is ../https/START_xmlrpc.py or ./xmlrpc_srv.py.
zs = Server('https://127.0.0.1:39443', SSL_Transport())
print zs.Testing(1, 2, 3)
print zs.BringOn('SOAP')
if __name__ == '__main__':
Rand.load_file('../randpool.dat', -1)
#ZServerSSL()
xmlrpc_srv()
Rand.save_file('../randpool.dat')
| 26.185185 | 63 | 0.671853 | 104 | 707 | 4.403846 | 0.576923 | 0.078603 | 0.056769 | 0.069869 | 0.082969 | 0.082969 | 0.082969 | 0 | 0 | 0 | 0 | 0.074324 | 0.162659 | 707 | 26 | 64 | 27.192308 | 0.699324 | 0.181047 | 0 | 0 | 0 | 0 | 0.187633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.153846 | null | null | 0.230769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
15ce0be79bed3abf0dae31bdcec96b21cba4ea89 | 1,621 | py | Python | Utilities/CPUmining.py | C3ald/Token-Project | 278938978d3d198c28502d7cfde5b18c3479ed27 | [
"MIT"
] | 1 | 2022-01-19T14:46:58.000Z | 2022-01-19T14:46:58.000Z | Utilities/CPUmining.py | C3ald/Token-Project | 278938978d3d198c28502d7cfde5b18c3479ed27 | [
"MIT"
] | 6 | 2022-01-19T15:14:55.000Z | 2022-03-18T22:51:15.000Z | Utilities/CPUmining.py | C3ald/Token-Project | 278938978d3d198c28502d7cfde5b18c3479ed27 | [
"MIT"
] | null | null | null | import time as t
import hashlib
class Calibrate:
""" Calibration class for CPU mining """
def __init__(self):
pass
def calibrate(self):
""" Calibrates the cpu power """
time_started = t.time()
for x in range(10000000):
hashlib.sha512('hash'.encode())
hashlib.blake2b('hash'.encode())
time_finished = t.time()
time_passed = time_finished - time_started
        # return the elapsed seconds for the 10,000,000-iteration workload
return time_passed
def run(self):
""" Runs the calibration """
cali = []
for x in range(5):
cal = self.calibrate()
cali.append(cal)
total = 0
for x in cali:
total = total + x
average = total / len(cali)
print('calibration done!')
print(average)
return average
class Mining:
	""" CPU mining algorithm """
	def __init__(self):
		calibrate = Calibrate()
		# average seconds taken by the calibration run (smaller = faster CPU)
		self.calibration_time = calibrate.run()
	def calculate_difficulty(self):
		""" Calculates block difficulty from the calibration time """
		# faster machines must find more leading zeros; the elif/else
		# branches also cover the boundary values of exactly 1 and 4 seconds
		if self.calibration_time < 1:
			return '000000000000000000'
		elif self.calibration_time < 4:
			return '000000000000000'
		else:
			return '00000000000'
def run(self, previous_proof):
difficulty = self.calculate_difficulty()
start = t.time()
proof = 1
while True:
hashd = hashlib.sha256(str(proof**2 -previous_proof**2).encode()).hexdigest()
if hashd[:len(difficulty)] == difficulty:
end_time = t.time()
passed = end_time - start
print(passed)
return proof
else:
proof = proof + 1
# def random_question()
if __name__ == '__main__':
	mining = Mining()
print(len('000000000000000000'))
print(mining.run(10)) | 21.051948 | 80 | 0.675509 | 212 | 1,621 | 4.976415 | 0.358491 | 0.047393 | 0.052133 | 0.080569 | 0.085308 | 0.083412 | 0 | 0 | 0 | 0 | 0 | 0.075443 | 0.198643 | 1,621 | 77 | 81 | 21.051948 | 0.736721 | 0.115978 | 0 | 0.037736 | 0 | 0 | 0.06776 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113208 | false | 0.09434 | 0.037736 | 0 | 0.301887 | 0.09434 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
15ce533fbe73c6678aa437ad3be6859922a5ef65 | 23,147 | py | Python | Assignment 1/problem1/route.py | Chirag-Galani/B551-Elements-Of-Artificial-Intelligence | 6e6d04bf17522768c176145e86ccc31e8ea903b4 | [
"MIT"
] | null | null | null | Assignment 1/problem1/route.py | Chirag-Galani/B551-Elements-Of-Artificial-Intelligence | 6e6d04bf17522768c176145e86ccc31e8ea903b4 | [
"MIT"
] | null | null | null | Assignment 1/problem1/route.py | Chirag-Galani/B551-Elements-Of-Artificial-Intelligence | 6e6d04bf17522768c176145e86ccc31e8ea903b4 | [
"MIT"
] | 2 | 2021-12-01T20:38:02.000Z | 2021-12-01T22:42:38.000Z | #!/usr/bin/env python
#Assignment 1 ; Question1
#Team: Apurva Gupta, Anshul Jain, Chirag Gilani
#To find good driving directions between a pair of cities given by the user, using four algorithms.
#We have included our observations in the readme file of question 1.
import sys
import Queue
import numpy
import math
#Heuristic function: calculates the great circle distance between a city and the goal.
#We require the latitude and longitude of the city and the goal to estimate the distance between them.
def calculate_heuristic(city,goal):
if city==goal:
return 0.00
lat_city = city_gps1[city][0]
long_city = city_gps1[city][1]
lat_goal = city_gps1[goal][0]
long_goal = city_gps1[goal][1]
    # convert degrees to radians (math.radians applies the exact pi/180 scaling)
    long_city = math.radians(float(long_city))
    long_goal = math.radians(float(long_goal))
    lat_city = math.radians(float(lat_city))
    lat_goal = math.radians(float(lat_goal))
    longitude_distance = long_goal - long_city
    latitude_distance = lat_goal - lat_city
    # haversine formula for the central angle between the two points
    temp = (numpy.sin(latitude_distance/2.0))**2 + (numpy.cos(lat_city) * numpy.cos(lat_goal) * ((numpy.sin(longitude_distance/2.0))**2))
great_circle_distance = 2 * numpy.arcsin(numpy.sqrt(temp))
mile_distance = float(3959 * great_circle_distance)
return (mile_distance)
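#Example (hypothetical lookup): once city_gps1 is populated from city-gps.txt,
#calculate_heuristic("Bloomington,_Indiana","Indianapolis,_Indiana") returns the
#straight-line mileage between the two cities. Because a road can never be
#shorter than the great circle distance, this heuristic is admissible for A*.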
#It attempts to find the longest route between two cities using the A* framework
#(the priority is negated so the queue expands the highest-cost path first).
def astar_longtour(adjacency_list,start,goal):
visited = []
highway_list =[]
h1 = 0
h2 = 0
q1 = Queue.PriorityQueue()
error = ["None"]
error1 = ["None1"]
if start not in adjacency_list.keys() or goal not in adjacency_list.keys():
return error
g = 0
h = calculate_heuristic(start,goal)
q1.put((float(g)+float(h),float(g),start,start))
goal1 = False
t_cost = 0
visited1 = []
a = 0
while q1.empty()==False:
f,cost_path,vertex, path = q1.get()
if vertex==goal:
goal1= True
t_cost = cost_path
path1 = path
#print "visited nodes",len(visited)
return path1
for next in adjacency_list[vertex]:
speed = int(speed_list[vertex][next])
if vertex not in visited and next not in visited and next not in visited1 and int(speed) != int(0):
cost_node = float(float(adjacency_list[vertex][next])+float(cost_path))
if next in city_data:
h1 = calculate_heuristic(next,goal)
f1 = float(float(cost_node)+float(h1))
q1.put((-1*f1,float(cost_node),next,path+"#"+next))
else:
next2 = next
for next_node in adjacency_list[next2]:
speed1 = int(speed_list[next2][next_node])
if next_node not in visited1 and speed1!=0:
cost_node1 = float(float(adjacency_list[next2][next_node])+float(cost_node))
path_node1 = path+"#"+next2+"#"+next_node
highway_list.append((next_node,cost_node1,path_node1))
visited1.append(next2)
while len(highway_list) != 0:
next_t,cost_t,path_t = highway_list.pop()
if next_t in visited1:
a = a+1
else:
if next_t in city_data:
h2 = calculate_heuristic(next_t,goal)
f2 = float(float(cost_t)+float(h2))
q1.put((-1*f2,float(cost_t),next_t,path_t))
else:
for next_node1 in adjacency_list[next_t]:
speed2 = int(speed_list[next_t][next_node1])
if next_node1 not in visited1 and speed2!=0:
cost_node2 = float(float(adjacency_list[next_t][next_node1])+float(cost_t))
path_node2 = path_t+"#"+next_node1
highway_list.append((next_node1,cost_node2,path_node2))
visited1.append(next_t)
visited.append(vertex)
if goal1 == False:
return error1
#For A* to find the shortest distance, we use the great circle distance between two cities as the heuristic.
#For handling highway intersections, we explore all cities reachable from each highway intersection and compute the great circle distance from those cities.
#Assumption: we ignore paths with zero or null speed.
def astar_distance(adjacency_list,start,goal):
visited = []
highway_list =[]
h1 = 0
h2 = 0
q1 = Queue.PriorityQueue()
error = ["None"]
error1 = ["None1"]
if start not in adjacency_list.keys() or goal not in adjacency_list.keys():
return error
g = 0
h = calculate_heuristic(start,goal)
q1.put((float(g)+float(h),float(g),start,start))
goal1 = False
t_cost = 0
visited1 = []
a = 0
while q1.empty()==False:
f,cost_path,vertex, path = q1.get()
if vertex==goal:
goal1= True
t_cost = cost_path
path1 = path
#print "visited nodes",len(visited)
return path1
for next in adjacency_list[vertex]:
speed = int(speed_list[vertex][next])
if vertex not in visited and next not in visited and vertex not in visited1 and int(speed) != int(0):
cost_node = float(float(adjacency_list[vertex][next])+float(cost_path))
if next in city_data:
h1 = calculate_heuristic(next,goal)
f1 = float(float(cost_node)+float(h1))
q1.put((f1,float(cost_node),next,path+"#"+next))
else:
next2 = next
for next_node in adjacency_list[next2]:
speed1 = int(speed_list[next2][next_node])
if next_node not in visited1 and speed1!=0:
cost_node1 = float(float(adjacency_list[next2][next_node])+float(cost_node))
path_node1 = path+"#"+next2+"#"+next_node
highway_list.append((next_node,cost_node1,path_node1))
visited1.append(next2)
while len(highway_list) != 0:
next_t,cost_t,path_t = highway_list.pop()
if next_t in visited1:
a = a+1
else:
if next_t in city_data:
h2 = calculate_heuristic(next_t,goal)
f2 = float(float(cost_t)+float(h2))
q1.put((f2,float(cost_t),next_t,path_t))
else:
for next_node1 in adjacency_list[next_t]:
speed2 = int(speed_list[next_t][next_node1])
if next_node1 not in visited1 and speed2!=0:
cost_node2 = float(float(adjacency_list[next_t][next_node1])+float(cost_t))
path_node2 = path_t+"#"+next_node1
highway_list.append((next_node1,cost_node2,path_node2))
visited1.append(next_t)
visited.append(vertex)
if goal1 == False:
return error1
#For A* with the minimum amount of time, the heuristic is (great circle distance from A to the goal) / (speed on the segment used to reach A).
#We assume the speed at which a person was travelling remains constant until the goal is reached.
#Also, we do not have exact coordinates for highway intersections, so we compute distances from all cities connected to an intersection.
#Assumption: paths with zero or null speed are ignored.
def astar_time(time_list,start,goal):
visited = []
highway_list =[]
h1 = 0
h2 = 0
q1 = Queue.PriorityQueue()
error = ["None"]
error1 = ["None1"]
if start not in time_list.keys() or goal not in time_list.keys():
return error
g = 0
h = calculate_heuristic(start,goal)
q1.put((float(g)+float(h),float(g),start,start))
goal1 = False
t_cost = 0
visited1 = []
a = 0
while q1.empty()==False:
f,cost_path,vertex, path = q1.get()
if vertex==goal:
goal1= True
t_cost = cost_path
path1 = path
#print "visited nodes",len(visited)
return path1
for next in time_list[vertex]:
if vertex not in visited and next not in visited:
cost_node = float(float(time_list[vertex][next])+float(cost_path))
if next in city_data:
h1 = calculate_heuristic(next,goal)
speed1 = speed_list[vertex][next]
if float(speed1)!=float(0):
ex_time = float(float(h1)/float(speed1))
f1 = float(float(cost_node)+float(ex_time))
q1.put((f1,float(cost_node),next,path+"#"+next))
else:
next2 = next
for next_node in adjacency_list[next2]:
if next_node not in visited1:
cost_node1 = float(float(time_list[next2][next_node])+float(cost_node))
path_node1 = path+"#"+next2+"#"+next_node
speed_node1 = float(speed_list[next2][next_node])
if float(speed_node1)!=float(0):
highway_list.append((next_node,cost_node1,path_node1,speed_node1))
visited1.append(next2)
while len(highway_list) != 0:
next_t,cost_t,path_t,speed_t = highway_list.pop()
if next_t in visited1:
a = a+1
else:
if next_t in city_data:
h2 = calculate_heuristic(next_t,goal)
if float(speed_t)!=float(0):
ex_time1 = float(float(h2)/float(speed_t))
f2 = float(float(cost_t)+float(ex_time1))
q1.put((f2,float(cost_t),next_t,path_t))
else:
for next_node1 in time_list[next_t]:
if next_node1 not in visited1:
cost_node2 = float(float(time_list[next_t][next_node1])+float(cost_t))
path_node2 = path_t+"#"+next_node1
speed_node2 = float(speed_list[next_t][next_node1])
if float(speed_node2)!=float(0):
highway_list.append((next_node1,cost_node2,path_node2,speed_node2))
visited1.append(next_t)
visited.append(vertex)
if goal1 == False:
return error1
#The A* algorithm uses a heuristic function to reduce the state space while still finding the optimal answer.
#For A* with the smallest number of segments, we have taken heuristic = 1.
def astar_segment(start,goal):
visited = []
highway_list =[]
h1 = 1
h2 = 1
q1 = Queue.PriorityQueue()
error = ["None"]
error1 = ["None1"]
if start not in adjacency_list.keys() or goal not in adjacency_list.keys():
return error
g = 0
h = 1
q1.put((int(g)+int(h),int(g),start,start))
goal1 = False
t_cost = 0
while q1.empty()==False:
f1,cost_path,vertex, path = q1.get()
if vertex==goal:
goal1= True
t_cost = cost_path
path1 = path
#print "visited",len(visited)
return path1
for next in adjacency_list[vertex]:
speed = int(speed_list[vertex][next])
if vertex not in visited and next not in visited and speed!=0:
cost_node = int(1) + int(cost_path)
h1 = 1
f = int(h1)+cost_node
q1.put((f,int(cost_node),next,path+"#"+next))
visited.append(vertex)
if goal1 == False:
return error1
#Uniform cost search - time
#It returns a path with the smallest amount of time but explores a lot of the state space.
#It takes a priority queue and explores the path which takes the smallest amount of time.
#It finds an optimal answer.
def ucs_time(time_list,start,goal):
visited = []
q1 = Queue.PriorityQueue()
error = ["None"]
error1 = ["None1"]
if start not in time_list.keys() or goal not in time_list.keys():
return error
for next in time_list[start]:
w = float((time_list[start][next]))
speed = int(speed_list[start][next])
if speed!=0:
q1.put((float(w),next,start+"#"+next))
goal1 = False
t_cost = 0
visited.append(start)
while q1.empty()==False:
cost_path,vertex, path = q1.get()
if vertex==goal:
goal1= True
t_cost = cost_path
path1 = path
#print "visited nodes",len(visited)
return path1
for next in time_list[vertex]:
speed = int(speed_list[vertex][next])
if vertex not in visited and next not in visited and speed!=0:
cost_node = float((time_list[vertex][next]))+float((cost_path))
q1.put((float(cost_node),next,path+"#"+next))
visited.append(vertex)
if goal1 == False:
return error1
#Uniform cost search - segments
#We take a priority queue and store the number of edges traversed so far,
#trying to find a path with the smallest number of edges.
#It returns an optimal answer with the minimum number of edges.
#BFS also returns a path with the minimum number of edges, but the paths returned by the two algorithms might differ.
def ucs_segment(start,goal):
visited = []
q1 = Queue.PriorityQueue()
error = ["None"]
error1 = ["None1"]
if start not in adjacency_list.keys() or goal not in adjacency_list.keys():
return error
for next in adjacency_list[start]:
w =0
speed = int(speed_list[start][next])
if speed!=0:
q1.put((int(w),next,start+"#"+next))
goal1 = False
t_cost = 0
visited.append(start)
while q1.empty()==False:
cost_path,vertex, path = q1.get()
if vertex==goal:
goal1= True
t_cost = cost_path
path1 = path
#print "visited",len(visited)
return path1
for next in adjacency_list[vertex]:
speed = int(speed_list[vertex][next])
if vertex not in visited and next not in visited and speed!=0:
cost_node = int(1) + int(cost_path)
q1.put((int(cost_node),next,path+"#"+next))
visited.append(vertex)
if goal1 == False:
return error1
#Uniform cost search - distance
#The algorithm takes a priority queue and expands the node with the smallest cumulative distance.
#It explores a lot of the state space but returns an optimal answer.
def ucs_distance(adjacency_list,start,goal):
visited = []
q1 = Queue.PriorityQueue()
error = ["None"]
error1 = ["None1"]
if start not in adjacency_list.keys() or goal not in adjacency_list.keys():
return error
for next in adjacency_list[start]:
w = adjacency_list[start][next]
speed = int(speed_list[start][next])
if speed!=0:
q1.put((int(w),next,start+"#"+next))
goal1 = False
t_cost = 0
visited.append(start)
while q1.empty()==False:
cost_path,vertex, path = q1.get()
if vertex==goal:
goal1= True
t_cost = cost_path
path1 = path
#print "visited",len(visited)
return path1
for next in adjacency_list[vertex]:
speed = int(speed_list[vertex][next])
if vertex not in visited and next not in visited and speed!=0:
cost_node = int(adjacency_list[vertex][next])+int(cost_path)
q1.put((int(cost_node),next,path+"#"+next))
visited.append(vertex)
if goal1 == False:
return error1
def ucs_longtour(adjacency_list,start,goal):
visited = []
q1 = Queue.PriorityQueue()
error = ["None"]
error1 = ["None1"]
if start not in adjacency_list.keys() or goal not in adjacency_list.keys():
return error
for next in adjacency_list[start]:
w = adjacency_list[start][next]
speed = int(speed_list[start][next])
if speed!=0:
#referred https://stackoverflow.com/questions/15124097/priority-queue-with-higher-priority-first-in-python to find how to put element with high value first
q1.put((-1*int(w),next,start+"#"+next))
goal1 = False
t_cost = 0
visited.append(start)
while q1.empty()==False:
cost_path,vertex, path = q1.get()
if vertex==goal:
goal1= True
t_cost = cost_path
path1 = path
#print "visited",len(visited)
return path1
for next in adjacency_list[vertex]:
speed = int(speed_list[vertex][next])
if vertex not in visited and next not in visited and speed!=0:
cost_node = int(adjacency_list[vertex][next])-int(cost_path)
q1.put((-1*int(cost_node),next,path+"#"+next))
visited.append(vertex)
if goal1 == False:
return error1
#It calculates total time taken to go from source to destination
def calculate_time(p):
time = 0
i = 0
while i<len(p):
if i != len(p)-1:
first_city = p[i]
next_city = p[i+1]
time += time_list[first_city][next_city]
i = i+1
return time
#It calculates total distance travelled from source to destination
def calculate_distance(p):
distance = 0
i = 0
while i<len(p)-1:
first_city = p[i]
next_city = p[i+1]
distance += int(adjacency_list[first_city][next_city])
i = i+1
return distance
#Print the route data in the required output format
def print_data(path):
final_path= []
final_path1 = []
p = ""
if "None" in path:
print "Start node or goal node doesnot exist"
return 0
elif "None1" in path:
print "There is no goal from source to destination"
return 0
else:
if routing_algo == "uniform" or routing_algo == "astar":
final_path= path.split("#")
else:
final_path = path
start = final_path[0] + " "
p = start
for i in range(1,len(final_path)):
next = final_path[i] + " "
p = p + next
i = 0
while i<len(final_path)-1:
start = final_path[i]
next = final_path[i+1]
speed = speed_list[start][next]
highway = highway_list[start][next]
distance = adjacency_list[start][next]
print start,"To",next,"Distance:",distance,"Speed Limit:",speed,"Travel From Highway number:",highway
i = i +1
time = calculate_time(final_path)
cost = calculate_distance(final_path)
print cost,round(time,3),p
return 1
#Breadth first search
#Assumption: when the speed is 0 or null, the path is not considered
def bfs(adjacency_list,start,goal):
queue = [(start,[start])]
visited =[]
error = ["None"]
error1 = ["None1"]
IsGoal = False
if start not in adjacency_list.keys() or goal not in adjacency_list.keys():
return error
while queue:
(vertex,path) = queue.pop(0)
for next in adjacency_list[vertex]:
speed = int(speed_list[vertex][next])
if vertex not in visited and speed!=0:
if next==goal:
IsGoal = True
# print "visited nodes",len(visited)
return path + [next]
else:
queue.append((next,path + [next]))
visited.append(vertex)
if IsGoal == False:
return error1
#Depth first search
#We use a visited list to avoid cycles
#Assumption: when the speed is 0 or null, the path is not considered
def dfs(adjacency_list,start,goal):
queue1 = [(start,[start])]
visited =[]
error = ["None"]
error1 = ["None1"]
IsGoal = False
if start not in adjacency_list.keys() or goal not in adjacency_list.keys():
return error
while queue1:
(vertex,path) = queue1.pop()
for next in adjacency_list[vertex]:
speed = int(speed_list[vertex][next])
if vertex not in visited and speed!=0:
if next==goal:
IsGoal = True
#print "visited nodes",len(visited)
return path + [next]
else:
queue1.append((next,path+[next]))
visited.append(vertex)
if IsGoal== False:
return error1
i = 0
adjacency_list = {}
splitlines=[]
time_list={}
speed_list={}
highway_list={}
#Reading the road-segments file and storing it as a list of lists
with open('road-segments.txt','r') as route_file:
lines = route_file.read().splitlines()
newline = [i.split(" ") for i in lines]
#Creating dictionaries for storing distance,speed and highway
for i in range(0,len(newline)-1):
start_city = newline[i][0]
next_city = newline[i][1]
distance = newline[i][2]
if len(newline[i][3].strip())==0:
speed = int(0)
else:
speed = int(newline[i][3])
highway = newline[i][4]
if start_city not in adjacency_list.keys():
adjacency_list[start_city]={}
adjacency_list[start_city][next_city]=distance
speed_list[start_city]={}
speed_list[start_city][next_city]=int(speed)
time_list[start_city]={}
if int(speed)!=0: #DivideByZero error handling
time_list[start_city][next_city]=float(int(distance))/float(int(speed))
else:
time_list[start_city][next_city] = float(0)
else:
adjacency_list[start_city][next_city] = distance
speed_list[start_city][next_city] = int(speed)
if int(speed)!=0: #DivideByZero error handling
time_list[start_city][next_city] = float(int(distance))/float(int(speed))
else:
time_list[start_city][next_city] = float(0)
if start_city not in highway_list.keys():
highway_list[start_city] = {}
highway_list[start_city][next_city]=highway
else:
highway_list[start_city][next_city] = highway
if next_city not in adjacency_list.keys():
adjacency_list[next_city] = {}
adjacency_list[next_city][start_city] = distance
speed_list[next_city] = {}
speed_list[next_city][start_city] = int(speed)
time_list[next_city]={}
if int(speed)!=0: #DivideByZero error handling
time_list[next_city][start_city]=float(int(distance))/float(int(speed))
else:
time_list[next_city][start_city] = float(0)
else:
adjacency_list[next_city][start_city] = distance
speed_list[next_city][start_city] = int(speed)
if int(speed)!=0: #DivideByZero error handling
time_list[next_city][start_city] = float(int(distance))/float(int(speed))
else:
time_list[next_city][start_city] = float(0)
if next_city not in highway_list.keys():
highway_list[next_city] = {}
highway_list[next_city][start_city] = highway
else:
highway_list[next_city][start_city]= highway
city_gps1={}
#Reading the city master data and storing it in a list
city_data = []
with open('city-gps.txt','r') as city_gps:
lines = city_gps.read().splitlines()
newline = [i.split(" ") for i in lines]
for city_name,lattitude,longitude in newline:
city_data.append(city_name)
city_gps1[city_name] = [lattitude,longitude]
#Accepting data from command line inputs
start = sys.argv[1]
goal = sys.argv[2]
routing_algo = sys.argv[3]
cost_function = sys.argv[4]
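#Example run (hypothetical city names; they must match entries in city-gps.txt):
#   python route.py Bloomington,_Indiana Indianapolis,_Indiana astar distance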
#Conditions for various routing algorithms and cost functions
if routing_algo == "bfs" and (cost_function=="distance" or cost_function=="time" or cost_function=="segments" or cost_function=="longtour"):#BFS is same for all cost functions.It returns a path with minimum number of segments
total_path1 = bfs(adjacency_list,start,goal)
print_data(total_path1)
elif routing_algo == "dfs" and (cost_function=="distance" or cost_function=="time" or cost_function=="segments" or cost_function=="longtour"):#DFS is same for all cost functions. It does not return a optimal solution.
total_path1 = dfs(adjacency_list,start,goal)
print_data(total_path1)
elif routing_algo == "uniform" and cost_function=="distance":
total_path1 = ucs_distance(adjacency_list,start,goal)
print_data(total_path1)
elif routing_algo == "uniform" and cost_function == "time":
total_path1 = ucs_time(time_list,start,goal)
print_data(total_path1)
elif routing_algo == "uniform" and cost_function == "segments":
total_path1 = ucs_segment(start,goal)
print_data(total_path1)
elif routing_algo == "uniform" and cost_function == "longtour":
total_path1 = ucs_longtour(adjacency_list,start,goal)
print_data(total_path1)
elif routing_algo == "astar" and cost_function == "distance":
total_path1 = astar_distance(adjacency_list,start,goal)
print_data(total_path1)
elif routing_algo == "astar" and cost_function == "time":
total_path1 = astar_time(time_list,start,goal)
print_data(total_path1)
elif routing_algo == "astar" and cost_function == "segments":
total_path1 = astar_segment(start,goal)
print_data(total_path1)
elif routing_algo == "astar" and cost_function == "longtour":
total_path1 = astar_longtour(adjacency_list,start,goal)
print_data(total_path1)
else:#In case of wrong command line inputs
print "Please enter the correct routing algorithm and cost function!(bfs,dfs,uniform,astar) and (distance,segments,time,longtour)"
| 33.497829 | 225 | 0.669763 | 3,440 | 23,147 | 4.347384 | 0.089244 | 0.056503 | 0.034102 | 0.021665 | 0.704647 | 0.682915 | 0.648679 | 0.628017 | 0.608158 | 0.589435 | 0 | 0.023929 | 0.214628 | 23,147 | 690 | 226 | 33.546377 | 0.798724 | 0.1538 | 0 | 0.681979 | 0 | 0.001767 | 0.030013 | 0.003329 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.008834 | null | null | 0.028269 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
15e87b87619b41df7d75ea8b043b7dbd742c2cb3 | 1,929 | py | Python | api_lambda_function.py | yuyasugano/defi-test | 959cded7440058e12aeaf9e8c75e06394f450332 | [
"MIT"
] | 3 | 2020-04-06T11:41:06.000Z | 2021-06-04T21:45:36.000Z | api_lambda_function.py | yuyasugano/defi-test | 959cded7440058e12aeaf9e8c75e06394f450332 | [
"MIT"
] | null | null | null | api_lambda_function.py | yuyasugano/defi-test | 959cded7440058e12aeaf9e8c75e06394f450332 | [
"MIT"
] | null | null | null | #!/usr/bin/python
import json
import boto3
import decimal
from decimal import Decimal
from boto3.dynamodb.conditions import Key, Attr
from botocore.exceptions import ClientError
TABLE_NAME = 'Tokens'
dynamodb = boto3.resource('dynamodb', region_name='ap-northeast-1')
table = dynamodb.Table(TABLE_NAME)
# Helper class to convert a DynamoDB item to JSON
class DecimalEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, decimal.Decimal):
if abs(o) % 1 > 0:
return float(o)
else:
return int(o)
return super(DecimalEncoder, self).default(o)
# Helper function to convert decimal object to float
def decimal_to_float(obj):
if isinstance(obj, Decimal):
return float(obj)
raise TypeError
def get_lastprice(name):
try:
response = table.query(
KeyConditionExpression=Key('name').eq(name)
)
except ClientError as e:
        print(e.response['Error']['Message'])
else:
item = max(response['Items'], key=(lambda x: x['datetime']))
        record = decimal_to_float(item['tvl']['USD']['value'])
print('Latest record for {}: {}'.format(name, record))
return record
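# Example invocation (hypothetical data; assumes the 'Tokens' table stores
# items keyed by 'name' with a 'datetime' sort attribute and a nested
# item['tvl']['USD']['value'] figure):
#   get_lastprice('uniswap')   # -> latest TVL value as a float
# lambda_handler below expects an API Gateway proxy event such as:
#   {'queryStringParameters': {'name': 'uniswap'}}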
def lambda_handler(event, context):
if event['queryStringParameters'] is None:
return {
'isBase64Encoded': False,
'statusCode': 500,
'headers': {},
'body': json.dumps('Internal server error')
}
try:
message = get_lastprice(event['queryStringParameters']['name'])
except Exception as e:
        print(e.args)
return {
'isBase64Encoded': False,
'statusCode': 400,
'headers': {},
'body': json.dumps('Bad request')
}
else:
return {
'isBase64Encoded': False,
'statusCode': 200,
'headers': {},
'body': json.dumps(message, cls=DecimalEncoder)
}
| 28.791045 | 71 | 0.585796 | 207 | 1,929 | 5.415459 | 0.454106 | 0.018733 | 0.069581 | 0.096343 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015373 | 0.291861 | 1,929 | 66 | 72 | 29.227273 | 0.805271 | 0 | 0 | 0.245614 | 0 | 0 | 0.153506 | 0.023192 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.087719 | null | null | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
15ed3c737319751d6781c65bc474c7c96a12786a | 12,148 | py | Python | tests/broker/test_refresh_user.py | ned21/aquilon | 6562ea0f224cda33b72a6f7664f48d65f96bd41a | [
"Apache-2.0"
] | 7 | 2015-07-31T05:57:30.000Z | 2021-09-07T15:18:56.000Z | tests/broker/test_refresh_user.py | ned21/aquilon | 6562ea0f224cda33b72a6f7664f48d65f96bd41a | [
"Apache-2.0"
] | 115 | 2015-03-03T13:11:46.000Z | 2021-09-20T12:42:24.000Z | tests/broker/test_refresh_user.py | ned21/aquilon | 6562ea0f224cda33b72a6f7664f48d65f96bd41a | [
"Apache-2.0"
] | 13 | 2015-03-03T11:17:59.000Z | 2021-09-09T09:16:41.000Z | #!/usr/bin/env python
# -*- cpy-indent-level: 4; indent-tabs-mode: nil -*-
# ex: set expandtab softtabstop=4 shiftwidth=4:
#
# Copyright (C) 2014-2018 Contributor
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Module for testing the refresh user principals command."""
import os
import pwd
import unittest
if __name__ == "__main__":
import utils
utils.import_depends()
from brokertest import TestBrokerCommand
class TestRefreshUser(TestBrokerCommand):
def test_110_grant_testuser4_root(self):
command = ["grant_root_access", "--user", "testuser4",
"--personality", "utunused/dev"] + self.valid_just_tcm
self.successtest(command)
def test_111_verify_testuser4_root(self):
command = ["show_personality", "--personality", "utunused/dev"]
out = self.commandtest(command)
self.matchoutput(out, "Root Access User: testuser4", command)
command = ["cat", "--personality", "utunused/dev",
"--archetype", "aquilon"]
out = self.commandtest(command)
self.matchoutput(out, "testuser4", command)
def test_200_refresh(self):
command = ["refresh_user"]
err = self.statustest(command)
self.matchoutput(err,
"Duplicate UID: 1236 is already used by testuser3, "
"skipping dup_uid.",
command)
self.matchoutput(err, "Added 3, deleted 1, updated 2 users.", command)
def test_210_verify_all(self):
command = ["show_user", "--all"]
out = self.commandtest(command)
self.matchoutput(out, "testuser1", command)
self.matchoutput(out, "testuser2", command)
self.matchoutput(out, "testuser3", command)
self.matchclean(out, "testuser4", command)
self.matchclean(out, "bad_line", command)
self.matchclean(out, "dup_uid", command)
self.matchclean(out, "foo", command)
self.matchoutput(out, "testbot1", command)
self.matchoutput(out, "testbot2", command)
def test_210_verify_testuser1(self):
command = ["show_user", "--username", "testuser1"]
out = self.commandtest(command)
self.searchoutput(out, r'User: testuser1$', command)
self.searchoutput(out, r'Type: human$', command)
self.searchoutput(out, r'UID: 1234$', command)
self.searchoutput(out, r'GID: 423$', command)
self.searchoutput(out, r'Full Name: test user 1$', command)
self.searchoutput(out, r'Home Directory: /tmp$', command)
def test_210_verify_testuser3(self):
command = ["show_user", "--username", "testuser3"]
out = self.commandtest(command)
self.searchoutput(out, r'User: testuser3$', command)
self.searchoutput(out, r'Type: human$', command)
self.searchoutput(out, r'UID: 1236$', command)
self.searchoutput(out, r'GID: 655$', command)
self.searchoutput(out, r'Full Name: test user 3$', command)
self.searchoutput(out, r'Home Directory: /tmp/foo$', command)
def test_210_verify_testbot1_robot(self):
command = ["show_user", "--username", "testbot1"]
out = self.commandtest(command)
self.searchoutput(out, r'User: testbot1$', command)
self.searchoutput(out, r'Type: robot$', command)
self.searchoutput(out, r'UID: 1337$', command)
self.searchoutput(out, r'GID: 655$', command)
self.searchoutput(out, r'Full Name: test bot 1$', command)
self.searchoutput(out, r'Home Directory: /tmp/bothome1$', command)
def test_210_verify_testbot2_not_robot(self):
command = ["show_user", "--username", "testbot2"]
out = self.commandtest(command)
self.searchoutput(out, r'User: testbot2$', command)
self.searchoutput(out, r'Type: human$', command)
self.searchoutput(out, r'UID: 1338$', command)
self.searchoutput(out, r'GID: 655$', command)
self.searchoutput(out, r'Full Name: test bot 2$', command)
self.searchoutput(out, r'Home Directory: /tmp/bothome2$', command)
def test_220_verify_testuser4_root_gone(self):
command = ["show_personality", "--personality", "utunused/dev"]
out = self.commandtest(command)
self.matchclean(out, "testuser4", command)
command = ["cat", "--personality", "utunused/dev",
"--archetype", "aquilon"]
out = self.commandtest(command)
self.matchclean(out, "testuser4", command)
def test_300_update_testuser3(self):
self.noouttest(["update_user", "--username", "testuser3",
"--uid", "1237", "--gid", "123",
"--full_name", "Some other name",
"--home_directory", "/tmp"] + self.valid_just_sn)
def test_300_update_testbot1(self):
self.noouttest(["update_user", "--username", "testbot1",
"--type", "human"] + self.valid_just_sn)
def test_300_update_testbot2(self):
self.noouttest(["update_user", "--username", "testbot2",
"--type", "robot"] + self.valid_just_sn)
def test_301_verify_testuser3_before_sync(self):
command = ["show_user", "--username", "testuser3"]
out = self.commandtest(command)
self.searchoutput(out, r'User: testuser3$', command)
self.searchoutput(out, r'Type: human$', command)
self.searchoutput(out, r'UID: 1237$', command)
self.searchoutput(out, r'GID: 123$', command)
self.searchoutput(out, r'Full Name: Some other name$', command)
self.searchoutput(out, r'Home Directory: /tmp$', command)
def test_301_verify_testbot1_before_sync(self):
command = ["show_user", "--username", "testbot1"]
out = self.commandtest(command)
self.searchoutput(out, r'User: testbot1$', command)
self.searchoutput(out, r'Type: human$', command)
def test_301_verify_testbot2_before_sync(self):
command = ["show_user", "--username", "testbot2"]
out = self.commandtest(command)
self.searchoutput(out, r'User: testbot2$', command)
self.searchoutput(out, r'Type: robot$', command)
def test_305_refresh_again(self):
command = ['refresh_user', '--incremental']
err = self.partialerrortest(command)
self.matchoutput(err,
'Duplicate UID: 1236 is already used by testuser3, '
'skipping dup_uid.',
command)
self.matchoutput(err,
'Updating human user testuser3 (uid = 1236, was '
'1237; gid = 655, was 123; '
'full_name = test user 3, was Some other name; '
'home_dir = /tmp/foo, was /tmp)',
command)
self.matchoutput(err,
'Updating robot user testbot1 (type = robot, was '
'human)',
command)
self.matchoutput(err,
'Updating human user testbot2 (type = human, was '
'robot)',
command)
def test_310_verify_testuser1_again(self):
command = ["show_user", "--username", "testuser1"]
out = self.commandtest(command)
self.searchoutput(out, r'User: testuser1$', command)
self.searchoutput(out, r'Type: human$', command)
self.searchoutput(out, r'UID: 1234$', command)
self.searchoutput(out, r'GID: 423$', command)
self.searchoutput(out, r'Full Name: test user 1$', command)
self.searchoutput(out, r'Home Directory: /tmp$', command)
def test_310_verify_testuser3_again(self):
command = ["show_user", "--username", "testuser3"]
out = self.commandtest(command)
self.searchoutput(out, r'User: testuser3$', command)
self.searchoutput(out, r'Type: human$', command)
self.searchoutput(out, r'UID: 1236$', command)
self.searchoutput(out, r'GID: 655$', command)
self.searchoutput(out, r'Full Name: test user 3$', command)
self.searchoutput(out, r'Home Directory: /tmp/foo$', command)
def test_310_verify_testbot1_again(self):
command = ["show_user", "--username", "testbot1"]
out = self.commandtest(command)
self.searchoutput(out, r'User: testbot1$', command)
self.searchoutput(out, r'Type: robot$', command)
def test_310_verify_testbot2_again(self):
command = ["show_user", "--username", "testbot2"]
out = self.commandtest(command)
self.searchoutput(out, r'User: testbot2$', command)
self.searchoutput(out, r'Type: human$', command)
def test_310_verify_all_again(self):
command = ["show_user", "--all"]
out = self.commandtest(command)
self.matchoutput(out, "testuser1", command)
self.matchoutput(out, "testuser2", command)
self.matchoutput(out, "testuser3", command)
self.matchclean(out, "testuser4", command)
self.matchclean(out, "bad_line", command)
self.matchclean(out, "dup_uid", command)
self.matchoutput(out, "testbot1", command)
self.matchoutput(out, "testbot2", command)
def test_320_add_users(self):
limit = self.config.getint("broker", "user_delete_limit")
for i in range(limit + 5):
name = "testdel_%d" % i
uid = i + 5000
self.noouttest(["add_user", "--username", name, "--uid", uid,
"--gid", 1000, "--full_name", "Delete test",
"--home_directory", "/tmp"] + self.valid_just_tcm)
def test_321_refresh_refuse(self):
limit = self.config.getint("broker", "user_delete_limit")
command = ["refresh_user"]
out = self.statustest(command)
self.matchoutput(out,
"Cowardly refusing to delete %s users, because "
"it is over the limit of %s. Use the "
"--ignore_delete_limit option to override." %
(limit + 5, limit),
command)
self.matchoutput(out, "deleted 0,", command)
def test_322_verify_still_there(self):
command = ["show_user", "--all"]
out = self.commandtest(command)
limit = self.config.getint("broker", "user_delete_limit")
for i in range(limit + 5):
name = "testdel_%d" % i
self.matchoutput(out, name, command)
def test_323_refresh_override(self):
limit = self.config.getint("broker", "user_delete_limit")
command = ["refresh", "user", "--ignore_delete_limit"]
out = self.statustest(command)
self.matchoutput(out,
"Added 0, deleted %s, updated 0 users." % (limit + 5),
command)
def test_324_verify_all_gone(self):
command = ["show_user", "--all"]
out = self.commandtest(command)
self.matchoutput(out, "testuser1", command)
self.matchoutput(out, "testuser2", command)
self.matchoutput(out, "testuser3", command)
self.matchclean(out, "testuser4", command)
self.matchclean(out, "bad_line", command)
self.matchclean(out, "dup_uid", command)
self.matchclean(out, "testdel_", command)
self.matchoutput(out, "testbot1", command)
self.matchoutput(out, "testbot2", command)
if __name__ == '__main__':
suite = unittest.TestLoader().loadTestsFromTestCase(TestRefreshUser)
unittest.TextTestRunner(verbosity=2).run(suite)
| 44.014493 | 79 | 0.608166 | 1,361 | 12,148 | 5.296841 | 0.166789 | 0.135802 | 0.159523 | 0.18033 | 0.729089 | 0.694271 | 0.653766 | 0.621029 | 0.606464 | 0.58746 | 0 | 0.029919 | 0.259878 | 12,148 | 275 | 80 | 44.174545 | 0.771883 | 0.060257 | 0 | 0.599099 | 0 | 0 | 0.23486 | 0.003686 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117117 | false | 0 | 0.027027 | 0 | 0.148649 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
15f09f466f08acc7d158d1e5f5bf092afb8c1bc3 | 642 | py | Python | temp-down/application.py | scavicchio/easyWaltonTracker | 05fd5b12e9e6d9e21f7209baca3b8137c013f002 | [
"MIT"
] | 2 | 2018-05-10T04:50:11.000Z | 2018-05-10T04:50:13.000Z | temp-down/application.py | scavicchio/easyWaltonTracker | 05fd5b12e9e6d9e21f7209baca3b8137c013f002 | [
"MIT"
] | 5 | 2018-06-11T22:23:06.000Z | 2020-02-28T02:20:52.000Z | temp-down/application.py | scavicchio/easyWaltonTracker | 05fd5b12e9e6d9e21f7209baca3b8137c013f002 | [
"MIT"
] | null | null | null |
from flask import Flask, render_template, request, url_for, redirect

application = app = Flask(__name__)


@app.before_request
def check_for_maintenance():
    # Send every request to the maintenance page while the site is down.
    if request.path != url_for('maintenance'):
        return redirect(url_for('maintenance'))
    # Or alternatively, don't redirect:
    # return 'Sorry, off for maintenance!', 503


@app.route('/maintenance')
def maintenance():
    return render_template('downsite.html')


# Run the app.
if __name__ == "__main__":
    # Setting debug to True enables debug output. This line should be
    # removed before deploying a production app.
    app.run(host='0.0.0.0', port=8080, debug=True)
| 30.571429 | 69 | 0.707165 | 85 | 642 | 5.105882 | 0.564706 | 0.129032 | 0.078341 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020952 | 0.182243 | 642 | 21 | 70 | 30.571429 | 0.805714 | 0.302181 | 0 | 0 | 0 | 0 | 0.124154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090909 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
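A quick smoke test of the catch-all redirect, as a sketch; it assumes the module above is importable as application:

# Hypothetical check that every path is redirected to the maintenance page;
# assumes the file above is on the import path as "application".
from application import app

client = app.test_client()
resp = client.get('/any/path')
assert resp.status_code == 302                              # redirected...
assert resp.headers['Location'].endswith('/maintenance')   # ...to /maintenance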
15f5866d34b5d50188b37efacae2ab46191d04ac | 1,231 | py | Python | services/twilio.py | sourceperl/docker.mqttwarn | 9d87337f766843c8bdee34eba8d29776e7032009 | [
"MIT"
] | null | null | null | services/twilio.py | sourceperl/docker.mqttwarn | 9d87337f766843c8bdee34eba8d29776e7032009 | [
"MIT"
] | null | null | null | services/twilio.py | sourceperl/docker.mqttwarn | 9d87337f766843c8bdee34eba8d29776e7032009 | [
"MIT"
] | 2 | 2016-09-03T09:12:17.000Z | 2020-03-03T11:58:40.000Z |
#!/usr/bin/env python
# -*- coding: utf-8 -*-

__author__ = 'Jan-Piet Mens <jpmens()gmail.com>'
__copyright__ = 'Copyright 2014 Jan-Piet Mens'
__license__ = """Eclipse Public License - v 1.0 (http://www.eclipse.org/legal/epl-v10.html)"""

HAVE_TWILIO = True
try:
    from twilio.rest import TwilioRestClient
except ImportError:
    HAVE_TWILIO = False


def plugin(srv, item):
    '''expects (accountSID, authToken, from, to) in addrs'''

    srv.logging.debug("*** MODULE=%s: service=%s, target=%s", __file__, item.service, item.target)

    if not HAVE_TWILIO:
        srv.logging.warn("twilio-python is not installed")
        return False

    try:
        account_sid, auth_token, from_nr, to_nr = item.addrs
    except ValueError:
        srv.logging.warn("Twilio target is incorrectly configured")
        return False

    text = item.message

    try:
        client = TwilioRestClient(account_sid, auth_token)
        message = client.messages.create(body=text,
                                         to=to_nr,
                                         from_=from_nr)
        srv.logging.debug("Twilio returns %s" % (message.sid))
    except Exception as e:
        srv.logging.warn("Twilio failed: %s" % (str(e)))
        return False

    return True
| 28.627907 | 98 | 0.62632 | 155 | 1,231 | 4.793548 | 0.522581 | 0.067295 | 0.056528 | 0.080754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00974 | 0.249391 | 1,231 | 42 | 99 | 29.309524 | 0.794372 | 0.034119 | 0 | 0.2 | 0 | 0.033333 | 0.242478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.066667 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
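The plugin's guard paths can be exercised without the twilio package by passing stub objects; the Stub names below are illustrative, not part of mqttwarn, and the snippet assumes it runs in the same module as the plugin above:

# Hypothetical harness for the plugin's error handling; no Twilio account
# or twilio package is needed to hit the failure branches.
import logging


class StubService(object):
    logging = logging.getLogger("stub")


class StubItem(object):
    service = "twilio"
    target = "sms"
    addrs = ["sid_only"]    # deliberately malformed: four elements expected
    message = "hello"


# A malformed target logs a warning and the plugin reports failure.
assert plugin(StubService(), StubItem()) is False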
c603335b3b633aab6abb9698186f5935369ba9de | 1,805 | py | Python | big_o.py | IanDCarroll/BigO | ed9af977a96df88c60e5eca3fb541416db91ee30 | [
"MIT"
] | 1 | 2019-09-07T21:18:40.000Z | 2019-09-07T21:18:40.000Z | big_o.py | IanDCarroll/BigO | ed9af977a96df88c60e5eca3fb541416db91ee30 | [
"MIT"
] | 1 | 2016-11-15T16:53:14.000Z | 2016-11-16T22:26:26.000Z | big_o.py | IanDCarroll/BigO | ed9af977a96df88c60e5eca3fb541416db91ee30 | [
"MIT"
] | null | null | null |
class BigO_of_1(object):
    def check_index_0_is_int(self, value_list):
        if value_list[0] == int(value_list[0]):
            return True


class BigO_of_N(object):
    def double_values(self, value_list):
        for i in range(0, len(value_list)):
            value_list[i] *= 2
        return value_list


class BigO_of_N_Squared(object):
    def create_spam_field(self, value_list):
        for i in range(0, len(value_list)):
            value_list[i] = []
            for j in range(0, len(value_list)):
                value_list[i].append('spam')
        return value_list


class BigO_of_N_Cubed(object):
    def create_spam_space(self, value_list):
        for i in range(0, len(value_list)):
            value_list[i] = []
            for j in range(0, len(value_list)):
                value_list[i].append([])
                for k in range(0, len(value_list)):
                    value_list[i][j].append('spam')
        return value_list


class BigO_of_N_to_the_Fourth(object):
    def create_spam_hyperspace(self, value_list):
        for i in range(0, len(value_list)):
            value_list[i] = []
            for j in range(0, len(value_list)):
                value_list[i].append([])
                for k in range(0, len(value_list)):
                    value_list[i][j].append([])
                    for l in range(0, len(value_list)):
                        value_list[i][j][k].append('spam')
        return value_list


class BigO_of_2_to_the_N(object):
    def get_factorial(self, value):
        final_number = 0
        if value > 1:
            final_number = value * self.get_factorial(value - 1)
            return final_number
        else:
            return 1


class BigO_of_N_log_N(object):
    def sort_list(self, value_list):
        return sorted(value_list)
| 33.425926 | 64 | 0.572853 | 262 | 1,805 | 3.664122 | 0.183206 | 0.309375 | 0.083333 | 0.114583 | 0.56875 | 0.56875 | 0.56875 | 0.540625 | 0.503125 | 0.438542 | 0 | 0.016221 | 0.316898 | 1,805 | 53 | 65 | 34.056604 | 0.762368 | 0 | 0 | 0.382979 | 0 | 0 | 0.006648 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148936 | false | 0 | 0 | 0.021277 | 0.468085 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
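The growth rates these classes illustrate can be made visible with timeit; a sketch, assuming it runs in the same module as the classes above, with machine-dependent timings:

# Illustrative timing of the O(n) vs O(n^2) helpers; the absolute numbers
# vary by machine, the point is the relative growth as n doubles.
import timeit

for n in (100, 200, 400):
    linear = timeit.timeit(
        lambda: BigO_of_N().double_values(list(range(n))), number=100)
    quadratic = timeit.timeit(
        lambda: BigO_of_N_Squared().create_spam_field(list(range(n))), number=100)
    print("n=%4d  O(n)=%.4fs  O(n^2)=%.4fs" % (n, linear, quadratic))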
c6045bad43e4ebedcf50caee95ac7467f2df8e94 | 429 | py | Python | renamedb.py | mochisoft/OC_Offline | 058b129a8b221f46a3fcbe3fccd36e9983eac2ff | [
"MIT"
] | null | null | null | renamedb.py | mochisoft/OC_Offline | 058b129a8b221f46a3fcbe3fccd36e9983eac2ff | [
"MIT"
] | null | null | null | renamedb.py | mochisoft/OC_Offline | 058b129a8b221f46a3fcbe3fccd36e9983eac2ff | [
"MIT"
] | null | null | null |
import glob, os


def rename(dir, pattern):
    for pathAndFilename in glob.iglob(os.path.join(dir, pattern)):
        print(pathAndFilename)
        title, ext = os.path.splitext(os.path.basename(pathAndFilename))
        print(title)
        print(ext)
        # Note: every match is renamed to the same base name, so a second
        # match with the same extension will collide.
        os.rename(pathAndFilename,
                  os.path.join(dir, 'study_name_to_be_renamed_to' + ext))


rename(r'C:\Path_to_where_the_extracted_zipped_file_is_stored\extracted', r'*.backup')
| 35.75 | 89 | 0.717949 | 61 | 429 | 4.836066 | 0.540984 | 0.081356 | 0.067797 | 0.088136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170163 | 429 | 11 | 90 | 39 | 0.828652 | 0 | 0 | 0 | 0 | 0 | 0.226107 | 0.207459 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.111111 | null | null | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
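Because all matches share one target name, a collision-safe variant numbers each match; a sketch, not part of the original script:

# Hypothetical collision-safe rename: suffix each match with an index so
# later renames cannot overwrite earlier ones.
import glob
import os


def rename_numbered(dir, pattern, new_base):
    for n, path in enumerate(sorted(glob.iglob(os.path.join(dir, pattern)))):
        _, ext = os.path.splitext(path)
        os.rename(path, os.path.join(dir, '%s_%03d%s' % (new_base, n, ext)))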
c60e6d500a7f4402b2213e04baa96746a9880683 | 723 | py | Python | gui/profile/urls.py | klebed/esdc-ce | 2c9e4591f344247d345a83880ba86777bb794460 | [
"Apache-2.0"
] | 97 | 2016-11-15T14:44:23.000Z | 2022-03-13T18:09:15.000Z | gui/profile/urls.py | klebed/esdc-ce | 2c9e4591f344247d345a83880ba86777bb794460 | [
"Apache-2.0"
] | 334 | 2016-11-17T19:56:57.000Z | 2022-03-18T10:45:53.000Z | gui/profile/urls.py | klebed/esdc-ce | 2c9e4591f344247d345a83880ba86777bb794460 | [
"Apache-2.0"
] | 33 | 2017-01-02T16:04:13.000Z | 2022-02-07T19:20:24.000Z |
from django.conf.urls import patterns, url

urlpatterns = patterns(
    'gui.profile.views',

    # Profile pages, with url prefix: accounts/profile
    url(r'^$', 'index', name='profile'),
    url(r'^api_keys/$', 'apikeys', name='profile_apikeys'),
    url(r'^update/$', 'update', name='profile_update'),
    url(r'^password/$', 'password_change', name='profile_password'),
    url(r'^activate/$', 'activation', name='profile_activation'),
    url(r'^ssh_key/(?P<action>add|delete)/$', 'sshkey', name='profile_sshkey'),
    url(r'^impersonate/user/(?P<username>[A-Za-z0-9@.+_-]+)/$', 'start_impersonation', name='start_impersonation'),
    url(r'^impersonate/cancel/$', 'stop_impersonation', name='stop_impersonation'),
)
| 45.1875 | 115 | 0.665284 | 90 | 723 | 5.2 | 0.488889 | 0.068376 | 0.047009 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003125 | 0.114799 | 723 | 15 | 116 | 48.2 | 0.728125 | 0.06639 | 0 | 0 | 0 | 0 | 0.554235 | 0.156018 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.083333 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
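patterns() with dotted string view names was removed in Django 1.10; a rough modern equivalent for a few of these routes, as a sketch, assuming the views import cleanly from gui.profile.views:

# Hypothetical Django >= 2.0 rewrite of a subset of the routes above.
from django.urls import path, re_path
from gui.profile import views

urlpatterns = [
    path('', views.index, name='profile'),
    path('api_keys/', views.apikeys, name='profile_apikeys'),
    re_path(r'^ssh_key/(?P<action>add|delete)/$', views.sshkey, name='profile_sshkey'),
]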
c614e8587dd436a8c02676924e1ca14eecbd8b56 | 1,009 | py | Python | debug/deploy_zipped.py | micprog/SystemVerilog | 7eca705e87f87b94478fe222fc91d54d488cc8e3 | [
"Apache-2.0"
] | 29 | 2017-09-25T20:33:29.000Z | 2022-03-15T17:57:45.000Z | debug/deploy_zipped.py | micprog/SystemVerilog | 7eca705e87f87b94478fe222fc91d54d488cc8e3 | [
"Apache-2.0"
] | 50 | 2019-09-26T20:58:35.000Z | 2022-03-31T20:30:00.000Z | debug/deploy_zipped.py | micprog/SystemVerilog | 7eca705e87f87b94478fe222fc91d54d488cc8e3 | [
"Apache-2.0"
] | 15 | 2018-11-21T11:36:18.000Z | 2022-03-15T17:58:18.000Z |
import util
from deploy_config import PACKAGE_CONTROL_SETTINGS_FILE, SUBLIME_SETTINGS_FILE, PACKAGE_NAME, SRC, DST_ZIPPED, IGNORE_DIRS
import time

print('[deploy] Deployment to Installed Packages ...')

# Tell Package Control and Sublime Text to leave the package alone while
# the zipped copy is swapped in.
util.change_settings(PACKAGE_CONTROL_SETTINGS_FILE,
                     "auto_upgrade_ignore", PACKAGE_NAME, action='add')
util.change_settings(PACKAGE_CONTROL_SETTINGS_FILE,
                     "in_process_packages", PACKAGE_NAME, action='add')
util.change_settings(SUBLIME_SETTINGS_FILE,
                     "ignored_packages", PACKAGE_NAME, action='add')

time.sleep(2)
util.in_installed_packages(src=SRC, dst=DST_ZIPPED, action='install', ignore_dirs=IGNORE_DIRS)
time.sleep(1)

# Re-enable the package once the zipped copy is in place.
util.change_settings(SUBLIME_SETTINGS_FILE,
                     "ignored_packages", PACKAGE_NAME, action='del')
util.change_settings(PACKAGE_CONTROL_SETTINGS_FILE,
                     "in_process_packages", PACKAGE_NAME, action='del')

print('[deploy] Deployment to Installed Packages DONE')
| 40.36 | 123 | 0.723489 | 122 | 1,009 | 5.614754 | 0.295082 | 0.122628 | 0.131387 | 0.151825 | 0.656934 | 0.643796 | 0.527007 | 0.429197 | 0.429197 | 0.429197 | 0 | 0.00243 | 0.184341 | 1,009 | 24 | 124 | 42.041667 | 0.829891 | 0 | 0 | 0.277778 | 0 | 0 | 0.205285 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
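util.change_settings is not shown in this file; a plausible sketch of such a helper, under the assumption that the settings files are plain JSON with list-valued keys (hypothetical, not the repo's actual code):

# Hypothetical implementation of a change_settings helper: add or remove a
# value in a list-valued key of a JSON settings file.
import json


def change_settings(settings_file, key, value, action='add'):
    with open(settings_file) as f:
        settings = json.load(f)
    values = settings.setdefault(key, [])
    if action == 'add' and value not in values:
        values.append(value)
    elif action == 'del' and value in values:
        values.remove(value)
    with open(settings_file, 'w') as f:
        json.dump(settings, f, indent=4)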