| seq_id (string, 4-11 chars) | text (string, 113-2.92M chars) | repo_name (string, 4-125 chars, nullable) | sub_path (string, 3-214 chars) | file_name (string, 3-160 chars) | file_ext (18 classes) | file_size_in_byte (int64, 113-2.92M) | program_lang (1 class) | lang (93 classes) | doc_type (1 class) | stars (int64, 0-179k, nullable) | dataset (3 classes) | pt (78 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
19965217882 |
# coding: utf-8
# In[79]:
# An event is some movement or action such as:
# - standup,
# - sitdown,
# - (taking a) step,
# - utter (a word).
# When no movement is occurring, we call the situation a "nothing" or "none" event.
# Sometimes there is no gap between events, e.g. when we are walking, we take one step after another.
# But with events like "standing up", we cannot have multiple "standing up" events one after the other.
# To distinguish between such events, we call the ones which do not occur more than once without some other
# event in between (e.g. situp liedown situp) transition events,
# and the ones which can happen repetitively, possibly with some "none" events in between, repetitive events.
# So we have three categories of events:
# transition events
# repetitive events
# none events
# We need to detect these different events given a stream of measurements taken while these events are being
# performed.
# Events will have a start time and an end time. The challenge is to identify the start and end time correctly.
# Once the start time and end time are identified correctly, the measurements between these two times need to be
# mapped to one of the many possible events.
# The problem of identifying the start and end time itself is a difficult one.
# One can start by assuming each event will have a none event before and after.
# However, this may not be true for repetitive events.
# For transition events, even if there is a none event before and after, the period of the none event may be very short
# and hence it may be difficult to detect.
# method 1:
# if events are bounded before and after by none events, detecting the none events and then detecting the event
# bounded by them is expected to provide higher accuracy in the detection of events.
# consider the following sequence:
# none (3s), liedown(4s), none (3s).
#
# if event detection used 2s windows without consideration for none events, then the above sequence will be split
# into
# window1: none (2s),
# window2: none (1s) liedown(1s),
# window3: liedown(2s),
# window4: liedown(1s) none(1s),
# window5: none(2s)
# Since windows 2 and 4 contain a mix of events, and window 3 contains only part of an event, it may be hard to detect the
# correct event for these windows.
#
# on the other hand, if none windows of 1s duration are used to detect other events, and the other events
# are allowed to be of duration say 1 to 6s, then the above sequence would be split into:
# window1: none(1s)
# window2: none(1s)
# window3: none(1s)
# window4: liedown(4s)
# window5: none(1s)
# ...
# Since the windows are pure in the sense that they contain complete events of only a single type, it
# should be easier to identify the actual events in this case.
# we also split the activities into strong and weak events.
# The idea is, strong events, if not considered over their full duration, may be identified as weak events.
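# The window split sketched in the comments above can be written out directly.
# This toy helper and its labels are illustrative only, not part of the pipeline:

```python
def pure_windows(sequence):
    # sequence: list of (label, duration-in-seconds) pairs
    windows = []
    for label, duration in sequence:
        if label == "none":
            # chop "none" stretches into 1s windows
            windows.extend(("none", 1) for _ in range(duration))
        else:
            # keep other events whole (assumed to last 1s to 6s)
            windows.append((label, duration))
    return windows

print(pure_windows([("none", 3), ("liedown", 4), ("none", 3)]))
```

# Every resulting window is "pure": it holds a complete event of a single type.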
# In[80]:
# it is assumed that there is an event identifier which, given a window of accelerometer readings, identifies
# the event from a set of events that the readings most closely resemble. The event identifier in our case will be
# an ML pipeline.
# it is assumed that the shortest duration of an event is 1s, longest is 6s.
# from random import randint
# get_ipython().magic('run ./globalVars')
# get_ipython().magic('run ./eventClassifier')
from eventClassifier import eventClassifier
from globalVars import strongEvents
from globalVars import eventsDict
minWinSize = 1
maxWinSize = 6
# In[81]:
def find_event(accl_readings, offset):
offset = round(offset)
win = accl_readings.loc[(accl_readings.time >= offset) & (accl_readings.time < offset+minWinSize)]
if(win.shape[0] == 0):
return (None, offset)
initEvent = eventClassifier(win)
# if ((initEvent == "none") | (initEvent in strongEvents)):
# return (initEvent, minWinSize)
if (initEvent == eventsDict["rest"][0]):
return (initEvent, minWinSize)
# try to find a strongEvent.
# if for the given window size the event is not recognized, extend the window till maxWinSize.
# if for the given window size the event is recognized as a strongEvent, try to extend the window size
# to the max possible, but still within maxWinSize, such that the event still remains a strongEvent.
strongEventFound = False
winSize = minWinSize + 1
maxRecordedTime = round(accl_readings['time'].max())
    while (winSize <= maxWinSize) and (offset + winSize <= maxRecordedTime):
win = accl_readings.loc[(accl_readings.time >= offset) & (accl_readings.time < offset+winSize)]
# print(win)
event = eventClassifier(win)
if (event in strongEvents):
strongEventFound = True
strongEvent = event
longestWinSize = winSize
winSize = winSize + 1
    if strongEventFound:
return (strongEvent, longestWinSize)
return (initEvent, minWinSize)
# In[82]:
def find_events(accl_readings):
currOffset = accl_readings['time'].min()
eventsSequence = []
(nextEvent, winSize) = find_event(accl_readings, currOffset)
    while nextEvent is not None:
eventsSequence.append((nextEvent, currOffset, winSize))
currOffset = currOffset + winSize
(nextEvent, winSize) = find_event(accl_readings, currOffset)
return eventsSequence
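# Hypothetical usage sketch. find_event/find_events expect a pandas DataFrame
# with a 'time' column plus a classifier; both the stub classifier and the
# synthetic readings below are made up, and scan() re-implements the same
# greedy grow-while-strong loop in miniature:

```python
import pandas as pd

def stub_classifier(win):
    # Pretend any window lying entirely inside [3s, 7s) is a "liedown".
    return "liedown" if win["time"].min() >= 3 and win["time"].max() < 7 else "rest"

def scan(readings, min_win=1, max_win=6, strong={"liedown"}):
    events = []
    offset = round(readings["time"].min())
    end = round(readings["time"].max())
    while offset < end:
        label, size = None, min_win
        for w in range(min_win, max_win + 1):
            if offset + w > end:
                break
            win = readings[(readings.time >= offset) & (readings.time < offset + w)]
            if win.empty:
                break
            cand = stub_classifier(win)
            if cand in strong:
                label, size = cand, w        # keep growing the strong event
        if label is None:                    # fall back to the minimal window
            win = readings[(readings.time >= offset) & (readings.time < offset + min_win)]
            label = stub_classifier(win)
        events.append((label, offset, size))
        offset += size
    return events

readings = pd.DataFrame({"time": [i * 0.5 for i in range(20)]})  # 0.0s .. 9.5s
print(scan(readings))
```

# The scan yields one 4s "liedown" event surrounded by 1s "rest" windows,
# mirroring the (event, offset, winSize) tuples produced by find_events.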
| dilettante98/HAR | current/eventStreamPartitioner.py | eventStreamPartitioner.py | py | 5,423 | python | en | code | 0 | github-code | 13 |
31293353955 | from flask import jsonify#,request
from app.transit.webscrapping_extended_new import planet_points
from .intercept_planet import intercept_planet_starts
#--------------------------- check exaltation, detriment, fall --------------------------
def cal_status(btd,btm,bty):
natal = planet_points(btd, btm-1, bty,(13,9,18,'asc'),(13,6,41,'mc'),'status')
#print(natal)
check_status = [{"det":[11],"ext":[1],"fall":[7],"lord":[6]},
{"det":[10],"ext":[2],"fall":[8],"lord":[4]},
{"det":[9,12],"ext":[6],"fall":[12],"lord":[3,6]},
{"det":[8,1],"ext":[12],"fall":[6],"lord":[2,7]},
{"det":[7,2],"ext":[10],"fall":[4],"lord":[1,8]},
{"det":[6,3],"ext":[4],"fall":[10],"lord":[12,9]},
{"det":[4,6],"ext":[7],"fall":[1],"lord":[10,11]},
{"det":[5],"ext":[8],"fall":[2],"lord":[11]},
{"det":[6],"ext":[4],"fall":[11],"lord":[12]}]
check_Interception = {"intercept":[4, 10]}
status = {} #{'mars_det': 'det', 'sat_ext': 'ext'}
intercept = {}
#planet position sun=3,moon=4...
p_pos = {'sun':3,'moon':4,'merc':5,'ven':6,'mars':7}
n = 0
while n != len(natal):
for k, v in check_status[n].items():
if natal[n][1] in v:
status[f"{natal[n][3]}"] = k
break
#check for intercepting planet intercept_planet_starts(day,month,yr,planet_position)
if natal[n][1] in check_Interception["intercept"] and natal[n][3] in ['sun','moon','merc','ven','mars']:
print(f'planet_deg = {natal[n][1]}, and planet_pos = {p_pos[natal[n][3]]}')
intercept[f"{natal[n][3]}"] = intercept_planet_starts(btd,btm,bty,p_pos[natal[n][3]])
elif natal[n][1] in check_Interception["intercept"] and natal[n][3] not in ['sun','moon','merc','ven','mars']:
            intercept[f"{natal[n][3]}"] = 'will not happen in this lifetime'
#else:
#status[f"{natal[n][3]}_int"] = ['int',natal[n][3]]
n+=1
#--------------------------- check interception --------------------------
'''When a planet is within an orb of 3 degrees of any of these points, it will be found to exercise a
much stronger influence in the life than otherwise.'''
    cardinal_signs = {'sign': [1, 4, 7, 10], 'deg': [0,1,2,3,4,10,11,12,13,14,15,16,23,24,25,26,27,28,29]}  # ['AR','CN','LI','CP']: [1,13,26]
    fixed_signs = {'sign': [2, 5, 8, 11], 'deg': [6,7,8,9,10,11,12,18,19,20,21,22,23,24]}  # TU, LE, SC, AQ: 9, 21
    common_signs = {'sign': [3, 6, 9, 12], 'deg': [1,2,3,4,5,6,7,14,15,16,17,18,19,20]}  # GE, VI, SG, PI: 4, 17
    critical_deg = []
    def critical_degrees(planet_deg, planet_sign, planet_name):
        if planet_sign in cardinal_signs['sign'] and planet_deg in cardinal_signs['deg']:
            return planet_name
        elif planet_sign in common_signs['sign'] and planet_deg in common_signs['deg']:
            return planet_name
        elif planet_sign in fixed_signs['sign'] and planet_deg in fixed_signs['deg']:
            return planet_name
        # print(f'{planet_name} is not in critical degrees')
    for a_planet in natal:
        deg = a_planet[0]
        sign = a_planet[1]
        name = a_planet[3]
        result = critical_degrees(deg, sign, name)
        if result is not None:
            critical_deg.append(result)
#print("critical_deg = ",critical_deg)
return jsonify(status,intercept,critical_deg)
#print(cal_status(21,3,1985))
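# A dependency-free sketch of the critical-degree test above; the sign/degree
# tables are copied from it, and the two example calls are invented:

```python
CARDINAL = {"sign": [1, 4, 7, 10],
            "deg": [0, 1, 2, 3, 4, 10, 11, 12, 13, 14, 15, 16, 23, 24, 25, 26, 27, 28, 29]}
FIXED = {"sign": [2, 5, 8, 11],
         "deg": [6, 7, 8, 9, 10, 11, 12, 18, 19, 20, 21, 22, 23, 24]}
COMMON = {"sign": [3, 6, 9, 12],
          "deg": [1, 2, 3, 4, 5, 6, 7, 14, 15, 16, 17, 18, 19, 20]}

def is_critical(deg, sign):
    # True when the degree is a critical degree for the sign's quality
    return any(sign in table["sign"] and deg in table["deg"]
               for table in (CARDINAL, FIXED, COMMON))

print(is_critical(13, 1))   # 13 degrees of a cardinal sign -> True
print(is_critical(5, 2))    # 5 degrees of a fixed sign -> False
```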
| oprincely/map | app/api/planet_status.py | planet_status.py | py | 3,456 | python | en | code | 0 | github-code | 13 |
35064908801 | import boto3
import time
time.sleep(600)
l=["Namenode","Datanode1","Datanode2"]
p={}
client = boto3.client('ec2',region_name='us-east-1')
response = client.describe_instances()
for r in response['Reservations']:
for i in r['Instances']:
for j in i['Tags']:
if j[u'Key']=="Name":
p[j[u'Value']]=i['PrivateDnsName']
p1={}
for i in p:
if i in l:
p1[i]=p[i]
f=open("/home/ec2-user/hostmapping.json",'r')
data=f.read()
f.close()
data=data.replace("Namenode",p1['Namenode'])
data=data.replace("Datanode1",p1['Datanode1'])
data=data.replace("Datanode2",p1['Datanode2'])
f=open("/home/ec2-user/hostmapping.json",'w')
f.write(data)
f.close()
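# Stripped of the AWS calls, the templating step at the end of the script
# amounts to plain string replacement. A minimal, AWS-free sketch (the DNS
# names below are made up):

```python
def fill_host_mapping(template, dns_by_name):
    # replace each logical node name with its private DNS name
    for name, dns in dns_by_name.items():
        template = template.replace(name, dns)
    return template

mapping = {
    "Namenode": "ip-10-0-0-1.ec2.internal",
    "Datanode1": "ip-10-0-0-2.ec2.internal",
    "Datanode2": "ip-10-0-0-3.ec2.internal",
}
print(fill_host_mapping('{"master": "Namenode", "workers": ["Datanode1", "Datanode2"]}', mapping))
```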
| amarwalke95/multinode-hdp-cluster | Python Script/pyth.py | pyth.py | py | 778 | python | en | code | 0 | github-code | 13 |
26801621352 | # -*- coding: utf-8 -*-
import re,sys
import numpy as np
from math import log
class PageSequence:
def extractFeatures(self, pages, pagenums):
feats=[]
maxx=0
for i in range(len(pages)):
feats.append({})
if pagenums[i] > maxx:
maxx=pagenums[i]
numbers=[None]*(maxx+1)
numbers[0]={}
numbers[0][-1]=1
for i in range(len(pages)):
pagenum=pagenums[i]
text=""
lines=pages[i]
for idx, line in enumerate(lines):
if idx < 5 or idx >= len(lines)-5:
text+=line.rstrip()
tokens=re.split("\s+", text.lower())
seen={}
numbers[pagenum]={}
# null
numbers[pagenum][-1]=1
if pagenum == 0:
continue
for token in tokens:
if re.match("^[0-9]+$", token) != None:
number=int(token)
if abs(pagenum-number) < 25:
numbers[pagenum][number]=1
#print len(numbers)
for i in range(maxx):
if numbers[i] == None:
numbers[i]={}
numbers[i][-1]=1
viterbi={}
backpointer={}
viterbi[0]={}
viterbi[0][-1]=log(10)
numbers[0][-1]=1
maxpath={}
maxpath[0]={}
maxpath[0][-1]=-1
for i in range(1,maxx):
# print i
# print numbers[i]
viterbi[i]={}
backpointer[i]={}
maxpath[i]={}
# viterbi[i]=[]
# backpointer[i]=[]
j=i-1
for num_i in numbers[i]:
viterbi[i][num_i]=log(sys.float_info.max)
backpointer[i][num_i]=-1
# print "num j", j, numbers[j]
for num_j in numbers[j]:
if (num_j > num_i or (num_j == num_i)) and (num_i != -1 and num_j != -1):
# print num_j, num_i, "too big"
continue
#print num_j, maxpath[j]
# if num_j not in maxpath[j]:
# continue
# max_on_path=maxpath[j][num_j]
# if num_i <= max_on_path and num_i != -1 and max_on_path != -1:
# continue
diff=abs(num_i-num_j)
if num_i == -1 and num_j == -1:
diff=10
elif num_i == -1 or num_j == -1:
diff=100
# print diff, num_i, num_j, i, j
vit=viterbi[j][num_j] + log(diff)
# print vit
# print "vit: %s %.10f %.5f" % (num_j, vit, diff), i, j, num_i
if vit < viterbi[i][num_i]:
viterbi[i][num_i]=vit
backpointer[i][num_i]=num_j
# maxpath[i][num_i]=num_j
# if maxpath[j][num_j] > num_j:
# maxpath[i][num_i]=maxpath[j][num_j]
# print "back %s" % backpointer[i][num_i]
final=sys.float_info.max
finalpointer=-1
for num_j in numbers[maxx-1]:
vit=viterbi[maxx-1][num_j]
if vit < final:
final=vit
finalpointer=num_j
#print "final", final
pointer=finalpointer
stack=[]
for i in reversed(range(1,maxx)):
#print i
stack.append(pointer)
pointer=backpointer[i][pointer]
# print i, pointer
counts=np.zeros(maxx)
c=0
for idx, val in enumerate(reversed(stack)):
#print val,
if val != -1:
offset=(idx-val)+1
if offset >= 0:
counts[offset]+=1
c+=1
if c >= 20:
break
#print
argmax=np.argmax(counts)
firstpage=-1
firstval=-1
for idx, val in enumerate(reversed(stack)):
#print val,
if val != -1:
offset=(idx-val)+1
if offset == argmax:
firstpage=idx
firstval=val
break
# print "argmax: %s" % argmax, counts[argmax], firstpage, firstval
#print counts
if counts[argmax] >= 5:
for p in range(len(pages)):
i=pagenums[p]
feats[p]["page_sequence:page_count_identified"]=1
if i < argmax:
feats[p]["page_sequence:before_first_inferred_page"]=1
else:
feats[p]["page_sequence:after_first_inferred_page"]=1
if firstpage != -1:
if i < firstpage:
feats[p]["page_sequence:before_first_marked_page"]=1
else:
feats[p]["page_sequence:after_first_marked_page"]=1
return feats
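# The dynamic program above is a shortest-path (Viterbi) decode over candidate
# page numbers, with transition cost log|a - b| so that consecutive, smoothly
# increasing numbers are preferred. A toy version of the same idea (the
# candidate sets per page are invented):

```python
from math import log

def best_sequence(candidates):
    # candidates[i] is the set of numbers printed on physical page i
    cost = {n: 0.0 for n in candidates[0]}
    back = []
    for layer in candidates[1:]:
        new_cost, new_back = {}, {}
        for b in layer:
            prev = min(cost, key=lambda a: cost[a] + log(max(abs(b - a), 1)))
            new_cost[b] = cost[prev] + log(max(abs(b - prev), 1))
            new_back[b] = prev
        cost, back = new_cost, back + [new_back]
    # backtrack from the cheapest final number
    seq = [min(cost, key=cost.get)]
    for bp in reversed(back):
        seq.append(bp[seq[-1]])
    return list(reversed(seq))

print(best_sequence([{3, 17}, {4, 90}, {5, 2}]))  # -> [3, 4, 5]
```

# Unlike the full implementation, this toy omits the "null" candidate and the
# ordering constraint, but it shows why spurious numbers (17, 90, 2) lose.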
| dbamman/book-segmentation | code/features/PageSequence.py | PageSequence.py | py | 3,671 | python | en | code | 12 | github-code | 13 |
23384164844 | # -*- coding: utf-8 -*-
import scrapy
import re
# from lxml import etree
# import logging
# logger = logging.getLogger(__name__)
class ImdbSpider(scrapy.Spider):
name = 'imdb'
allowed_domains = ['imdb.cn']
start_urls = ['http://www.imdb.cn/IMDB250/']
def parse(self, response):
with open('./imdb.html','wb') as f:
f.write(response.text.encode('utf-8'))
# endurls = re.findall('<a href="(/p/.*?)" title=',response.text)
# html = etree.HTML(str(response.text))
# logger.warning(response)
film_list = response.xpath("//div[@class='ss-3 clear']//a")
for film in film_list:
item = {}
item["href"] = film.xpath("@href").extract_first()
item["file_name"] = film.xpath("./div[@class='honghe']/div[@class='honghe-1']/div[@class='honghe-2']/div[@class='honghe-3']/p[@class='bb']/text()").extract_first()
item["rate"] = film.xpath("./div[@class='honghe']/div[@class='honghe-1']/div[@class='honghe-2']/span/i/text()").extract_first()
item["other_name"] = film.xpath("./div[@class='honghe']/div[@class='honghe-1']/div[@class='honghe-4 clear']/p[1]/i/text()").extract_first()
item["en_name"] = film.xpath("./div[@class='honghe']/div[@class='honghe-1']/div[@class='honghe-4 clear']/p[2]/text()").extract_first()
item["director_name"] = film.xpath("./div[@class='honghe']/div[@class='honghe-1']/div[@class='honghe-4 clear']/p[3]/span/text()").extract_first()
yield scrapy.Request(
"http://www.imdb.cn{}".format(item["href"]),
callback=self.parse_detail,
meta= {"item": item}
)
next_url = response.xpath("//div[@class='page-1 clear']/a[1]/@href").extract_first()
next_tag = response.xpath("//div[@class='page-1 clear']/a[1]/text()").extract_first()
        if next_tag == "下一页":  # "下一页" means "next page"
yield scrapy.Request(
"http://www.imdb.cn{}".format(next_url),
callback=self.parse
)
def parse_detail(self, response):
item = response.meta["item"]
item["content"] = response.xpath("//div[@class='fk-4 clear']//div[@class='bdd clear']").extract_first()
yield item
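# The spider builds absolute URLs by concatenating relative hrefs onto the
# host with str.format. urllib.parse.urljoin does the same job and also copes
# with hrefs that are already absolute (the /p/ path below is only an example
# shape, matching the '/p/...' pattern mentioned in the comments):

```python
from urllib.parse import urljoin

print(urljoin("http://www.imdb.cn", "/p/example"))      # joined onto the host
print(urljoin("http://www.imdb.cn", "http://other/x"))  # absolute hrefs pass through
```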
| dreamzhangyuyi/mySpider | mySpider/spiders/imdb.py | imdb.py | py | 2,289 | python | en | code | 0 | github-code | 13 |
72060662097 | #!/usr/bin/env python
# coding=utf-8
"""
Script for sobol sampling
https://people.sc.fsu.edu/~jburkardt/py_src/sobol/sobol_lib.py
"""
import math
import numpy as np
import random as rd
def i4_uniform(a, b, seed):
# *****************************************************************************80
#
## I4_UNIFORM returns a scaled pseudorandom I4.
#
# Discussion:
#
# The pseudorandom number will be scaled to be uniformly distributed
# between A and B.
#
# Licensing:
#
# This code is distributed under the MIT license.
#
# Modified:
#
# 22 February 2011
#
# Author:
#
# Original MATLAB version by John Burkardt.
# PYTHON version by Corrado Chisari
#
# Reference:
#
# Paul Bratley, Bennett Fox, Linus Schrage,
# A Guide to Simulation,
# Springer Verlag, pages 201-202, 1983.
#
# Pierre L'Ecuyer,
# Random Number Generation,
# in Handbook of Simulation,
# edited by Jerry Banks,
# Wiley Interscience, page 95, 1998.
#
# Bennett Fox,
# Algorithm 647:
# Implementation and Relative Efficiency of Quasirandom
# Sequence Generators,
# ACM Transactions on Mathematical Software,
# Volume 12, Number 4, pages 362-376, 1986.
#
# Peter Lewis, Allen Goodman, James Miller
# A Pseudo-Random Number Generator for the System/360,
# IBM Systems Journal,
# Volume 8, pages 136-143, 1969.
#
# Parameters:
#
# Input, integer A, B, the minimum and maximum acceptable values.
#
# Input, integer SEED, a seed for the random number generator.
#
# Output, integer C, the randomly chosen integer.
#
# Output, integer SEED, the updated seed.
#
assert seed !=0, 'I4_UNIFORM - Fatal error!'
seed = math.floor(seed)
a = round(a)
b = round(b)
seed = np.mod(seed, 2147483647)
if (seed < 0):
seed = seed + 2147483647
k = math.floor(seed / 127773)
seed = 16807 * (seed - k * 127773) - k * 2836
if (seed < 0):
seed = seed + 2147483647
r = seed * 4.656612875E-10
#
# Scale R to lie between A-0.5 and B+0.5.
#
r = (1.0 - r) * (min(a, b) - 0.5) + r * (max(a, b) + 0.5)
#
# Use rounding to convert R to an integer between A and B.
#
value = round(r)
value = max(value, min(a, b))
value = min(value, max(a, b))
c = value
return [c, int(seed)]
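# The seed update inside i4_uniform is the Lewis-Goodman-Miller "minimal
# standard" generator: seed' = 16807 * seed mod (2^31 - 1), computed with
# Schrage's factorization (q = 127773, r = 2836) to avoid overflow. A quick
# sketch of just that step, checked against the classic Park & Miller value:

```python
def lehmer_step(seed):
    # one multiplicative-congruential step via Schrage's trick
    k = seed // 127773
    seed = 16807 * (seed - k * 127773) - k * 2836
    if seed < 0:
        seed += 2147483647
    return seed

seed = 1
for _ in range(10000):
    seed = lehmer_step(seed)
print(seed)  # 1043618065, the well-known check value for seed 1
```

# Threading the returned seed through successive calls is what makes the
# sampling in this script reproducible for a given starting seed.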
def do_sobol_sampling(min_val, max_val, nb_samples, seed=1, do_print=False):
"""
Do sobol sampling for range of min/max values with specific seed.
Requires uniform distribution
Parameters
----------
min_val : float
Minimal value
max : float
Maximal value
nb_samples : int
Number of samples for list_of_cs
seed : int, optional
Seed for random number generation (default: 1)
do_print : bool, optional
Defines, if list should be printed out (default: False)
Returns
-------
list_of_cs : list (of floats)
List with chosen values
"""
# Increase float input values. This is done to prevent generation of
# repeating int output values
min_val *= 1000000
max_val *= 1000000
list_of_cs = []
while len(list_of_cs) < nb_samples:
[c, seed] = i4_uniform(a=min_val, b=max_val, seed=seed)
if c not in list_of_cs:
list_of_cs.append(c)
if do_print:
print(list_of_cs)
# Reconvert list values
for i in range(len(list_of_cs)):
val = list_of_cs[i]
val /= 1000000
list_of_cs[i] = round(val, 4)
assert val >= min_val / 1000000, 'Sampling values is smaller than min.'
assert val <= max_val / 1000000, 'Sampling values is larger than max.'
return list_of_cs
if __name__ == '__main__':
# Nb. of samples
nb_samples = 100
min_val = 0.5
max_val = 20.5
# Do sobol sampling with uniform distribution
list_of_cs = do_sobol_sampling(min_val=min_val, max_val=max_val,
nb_samples=nb_samples, seed=1,
do_print=False)
print('Sampling list:')
print(list_of_cs)
def only_for_comparison_random_sampling(min_val, max_val, nb_samples):
"""
Perform random sampling
Parameters
----------
min_val : float
Minimal value
max : float
Maximal value
nb_samples : int
Number of samples for list_of_cs
Returns
-------
list_rd : list
List of floats (sample values)
"""
list_rd = []
# Increase float input values. This is done to prevent generation of
# repeating int output values
min_val *= 1000000
max_val *= 1000000
for i in range(nb_samples):
            val = rd.randint(int(min_val), int(max_val)) / 1000000
list_rd.append(val)
return list_rd
list_rnd = only_for_comparison_random_sampling(min_val=min_val,
max_val=max_val,
nb_samples=nb_samples)
import matplotlib.pyplot as plt
plt.hist(list_of_cs, bins=nb_samples * 5, label='Sobol')
plt.title('Sobol')
plt.show()
plt.close()
plt.hist(list_rnd, bins=nb_samples * 5, label='Random')
plt.title('Random')
plt.show()
plt.close()
plt.hist(list_of_cs, bins=nb_samples*5, label='Sobol')
plt.hist(list_rnd, bins=nb_samples*5, label='Random')
plt.legend()
plt.show()
plt.close() | RWTH-EBC/pyCity_calc | pycity_calc/toolbox/mc_helpers/experiments/sobol_script.py | sobol_script.py | py | 5,737 | python | en | code | 7 | github-code | 13 |
73771910418 | #!/usr/bin/python3
# -*-coding:Utf-8 -*
"""
This is the main file:
it launches the graphical interface.
"""
import sys
import os
from tkinter import Tk, PhotoImage, Frame, Canvas, Button
from tkinter.messagebox import askretrycancel
from tkinter.filedialog import askopenfilename
from tkinter.ttk import *
from valeur_entree import val_num
def interface(x_fenetre=800, y_fenetre=640):
"""
La fonction interface permet de créer une interface,
Ce sera l'interface principal ou il suffira de cliquer
sur un bouton pour appeler les différents fonctions du projet
Args:
Il est possible lors de l'éxecution du programme,
de rentrer x_fenetre et y_fenetre via la méthode sys.argv
x_fenetre (int): Longueur de la fenêtre
y_fenetre (int): Hauteur de la fenêtre
"""
    # Create the interface with its size, icon, background colour, ...
    # fenetre_tkinter is the main window object of the program
fenetre_tkinter = Tk()
fenetre_tkinter.title("Gestionnaire de la Blackliste")
fenetre_tkinter.geometry(str(x_fenetre)+'x'+str(y_fenetre)+'+300+0')
fenetre_tkinter.configure(background="white")
    # Create a frame (a sub-area of a window, see the Tkinter documentation)
frame_liste_num = Frame(
fenetre_tkinter,
width=x_fenetre/2,
height=y_fenetre-10,
)
    # Position of the frame
frame_liste_num.place(x=10, y=10)
combobox = Combobox(
frame_liste_num,
width=45,
height=int(y_fenetre-20),
background="white",
values = ["0000000000", "0933909398"],
state = "normal"
)
    combobox.bind('<<ComboboxSelected>>', lambda event: print(combobox.get()))
combobox.pack()
    # The canvas lets us draw the image in the frame
    # at a given position
frame_bouton = Frame(
fenetre_tkinter,
width=x_fenetre/2-20,
height=y_fenetre-20
)
    # Create a frame in which to insert the buttons that call
    # the various functions of the program
frame_bouton.place(x=x_fenetre/2+10, y=10)
canvas_bouton = Canvas(
frame_bouton,
width=x_fenetre/2-20,
height=y_fenetre-20, background="white"
)
    # canvas_bouton will let us draw these
    # buttons
bouton_formulaire = Button(
canvas_bouton,
text="Récuperer une page avec un formulaire",
command=lambda: formulaire(
valeur_entree_formulaire()
),
        # command is the button option that executes the call
        # to the bound function.
        # We go through a lambda so that parameters can be passed
        # without the function being executed when
        # projet_python.py is launched
width=int(x_fenetre/32+5)
)
bouton_correcteur = Button(
canvas_bouton,
text="Corriger une page HTML",
command=lambda: correcteur_de_html_css(
chercher_fichier('html')
),
width=int(x_fenetre/32+5)
)
    # Gather all the buttons in a list
liste_bouton = []
liste_bouton.append(bouton_formulaire)
liste_bouton.append(bouton_correcteur)
i = (y_fenetre/4)-70
    # Walk through the list and draw the buttons,
    # centring them in the canvas
for bouton in liste_bouton:
bouton.pack()
canvas_bouton.create_window(x_fenetre/4-10, i, window=bouton)
i += 60
canvas_bouton.pack()
    # fenetre_tkinter.mainloop() is the call that
    # runs the interface
fenetre_tkinter.mainloop()
fenetre_tkinter.quit()
interface() | Gladorme/SMS-Project | SMS-PRoject-Server/www/html/projet_python.py | projet_python.py | py | 3,681 | python | fr | code | 2 | github-code | 13 |
26270268529 | def on_button_pressed_a():
music.stop_all_sounds()
input.on_button_pressed(Button.A, on_button_pressed_a)
def intruder():
if pins.digital_read_pin(DigitalPin.P16) == 1:
alarm()
serial.write_line("HUMAN DETECTED")
while pins.digital_read_pin(DigitalPin.P4) == 1:
strip.show_color(neopixel.colors(NeoPixelColors.RED))
strip.show()
basic.pause(100)
strip.show_color(neopixel.colors(NeoPixelColors.BLUE))
strip.show()
basic.pause(100)
def alarm():
serial.write_line("Alarm system activated")
music.start_melody(["c5", "", "c5", ""], MelodyOptions.FOREVER_IN_BACKGROUND)
def worker():
strip.show_color(neopixel.colors(NeoPixelColors.WHITE))
strip: neopixel.Strip = None
thieve = 0
led.enable(False)
strip = neopixel.create(DigitalPin.P2, 8, NeoPixelMode.RGB)
ds = DS1302.create(DigitalPin.P13, DigitalPin.P14, DigitalPin.P15)
ds.start()
serial.write_line("" + str(ds.get_hour()) + ":" + ("" + str(ds.get_minute())))
esp8266.init(SerialPin.P16, SerialPin.P15, BaudRate.BAUD_RATE115200)
esp8266.connect_wi_fi("PandaRouter", "Panda1234")
def on_forever():
if pins.digital_read_pin(DigitalPin.P4) == 1:
serial.write_line("Pintu Buka")
if ds.get_hour() > 19:
worker()
serial.write_line("Pekerja")
if ds.get_hour() < 19:
intruder()
serial.write_line("Pencuri")
if pins.digital_read_pin(DigitalPin.P4) == 0:
strip.clear()
serial.write_line("Pintu Tutup")
basic.pause(400)
basic.forever(on_forever) | PandaMerah/ATM_MainHub | main.py | main.py | py | 1,603 | python | en | code | 0 | github-code | 13 |
17333152224 | """Helper tools for converting between AaC objects and Pygls objects."""
from pygls.lsp import Position, Range
from typeguard import check_type
from aac.lang.definitions.source_location import SourceLocation
def source_location_to_position(location: SourceLocation) -> Position:
"""Convert a source location to a position."""
check_type("location", location, SourceLocation)
return Position(line=location.line, character=location.column)
def source_location_to_range(location: SourceLocation) -> Range:
"""Convert a source location to a range."""
check_type("location", location, SourceLocation)
return Range(
start=Position(line=location.line, character=location.column),
end=Position(line=location.line, character=location.column + location.span),
)
def source_locations_to_range(location1: SourceLocation, location2: SourceLocation) -> Range:
"""Convert two source locations to a range."""
check_type("location1", location1, SourceLocation)
check_type("location2", location2, SourceLocation)
return Range(
start=Position(line=location1.line, character=location1.column),
end=Position(line=location2.line, character=location2.column),
)
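# The helpers above are one-to-one field mappings. A dependency-free sketch
# with stand-in classes (the real Position/Range come from pygls and
# SourceLocation from aac; Loc and Pos below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Loc:   # stand-in for aac's SourceLocation
    line: int
    column: int
    span: int

@dataclass
class Pos:   # stand-in for pygls' Position
    line: int
    character: int

def to_range(loc):
    # mirrors source_location_to_range: span characters from (line, column)
    return (Pos(loc.line, loc.column), Pos(loc.line, loc.column + loc.span))

start, end = to_range(Loc(line=4, column=2, span=7))
print(start, end)  # the range covers characters 2..9 on line 4
```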
| jondavid-black/AaC | python/src/aac/plugins/first_party/lsp_server/conversion_helpers.py | conversion_helpers.py | py | 1,227 | python | en | code | 14 | github-code | 13 |
5738698188 | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
cc = """
Created on Fri Jan 7 05:05:36 2022
@author: santosg \n
This program prints the list of directories found
inside a parent directory, in this case './'
"""
import sys
import time
import funcionesLinux
print(cc)
time.sleep(3)
val = input("Enter the parent directory [./]: ")
dir1 = './'
if len(val) > 0:
dir1 = val
res = funcionesLinux.SacaLista_Dir_Otro(dir1)
if len(res) > 0:
print(res)
else:
    print('No directories found')
4139573974 | """Test functions in utils/ directory"""
import os
import sys
import unittest
# get base directory and import util files
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from utils import _threading, download_youtube, query_itunes, query_youtube
class testThreading(unittest.TestCase):
"""Test utils/_threading.py"""
def setUp(self):
pass
def example_func_for_threading(self, value):
"""Example function for testing threading"""
return value + 1
def test_threading(self):
"""Test _threading.map_threads for proper threading functionality"""
iterable = [i for i in range(500)]
total_value_sum_one = _threading.map_threads(self.example_func_for_threading, iterable)
self.assertEqual(len(list(total_value_sum_one)), 500)
class testYouTubeQuery(unittest.TestCase):
"""Test utils/youtube_query.py"""
def setUp(self):
self.playlist_url = "https://www.youtube.com/playlist?list=PL3PhWT10BW3Urh8ZXXpuU9h526ChwgWKy"
self.video_url = "https://www.youtube.com/watch?v=pgXozIma-Oc&list=PL3PhWT10BW3Urh8ZXXpuU9h526ChwgWKy&index=4"
self.video_info_list = [
{"title": "Test1", "id": 1, "duration": 100},
{"title": "Test2", "id": 2, "duration": 200},
]
def test_get_youtube_playlist_content_false_override(self):
"""Test query_youtube.get_youtube_content with false error override
for a playlist url"""
override_error = False
try:
youtube_video_dict = query_youtube.get_youtube_content(self.playlist_url, override_error)
except RuntimeError: # successfully threw RuntimeError
youtube_video_dict = {}
self.assertIsInstance(youtube_video_dict, dict)
def test_get_youtube_video_content_false_override(self):
"""Test query_youtube.get_youtube_content with false error override
for a video url"""
override_error = False
try:
youtube_video_dict = query_youtube.get_youtube_content(self.video_url, override_error)
except RuntimeError:
youtube_video_dict = {}
self.assertIsInstance(youtube_video_dict, dict)
def test_get_youtube_playlist_content_true_override(self):
"""Test query_youtube.get_youtube_content with error override
for a playlist url"""
override_error = True
youtube_video_dict = query_youtube.get_youtube_content(self.playlist_url, override_error)
self.assertIsInstance(youtube_video_dict, dict)
def test_get_youtube_video_content_true_override(self):
"""Test query_youtube.get_youtube_content with error override
for a video url"""
override_error = True
youtube_video_dict = query_youtube.get_youtube_content(self.video_url, override_error)
self.assertIsInstance(youtube_video_dict, dict)
def test_get_playlist_video_info(self):
"""Test fetching individual urls in a playlist url"""
youtube_playlist_videos_tuple = query_youtube.get_playlist_video_info(self.playlist_url)
self.assertIsInstance(youtube_playlist_videos_tuple, tuple)
def test_get_video_info_false_override(self):
"""Test getting video information with false error override"""
override_error = False
args = (self.video_url, override_error)
try:
video_info = query_youtube.get_video_info(args)
except RuntimeError:
video_info = {}
self.assertIsInstance(video_info, dict)
def test_get_video_info_true_override(self):
"""Test getting video information with error override"""
override_error = True
args = (self.video_url, override_error)
video_info = query_youtube.get_video_info(args)
self.assertIsInstance(video_info, dict)
def test_video_content_to_dict(self):
"""Test a list of video info dictionaries is successfully converted
to a dict type"""
video_list_to_dict = query_youtube.video_content_to_dict(self.video_info_list)
self.assertIsInstance(video_list_to_dict, dict)
class testiTunesQuery(unittest.TestCase):
"""Test utils/itunes_query.py"""
def setUp(self):
# threading is accomplished in main.py
self.youtube_video_key_value = (
"Bob Marley - Blackman Redemption",
{"id": "KlmPOxwoC6Y", "duration": 212},
)
self.row_index = 0
self.video_url_for_oembed = "https://www.youtube.com/watch?v=kZyCXjNDuv8"
def test_thread_query_itunes(self):
"""Test accurate parsing of youtube video url to retrieve
iTunes metadata."""
args = (self.row_index, self.youtube_video_key_value)
itunes_return_arg = query_itunes.thread_query_itunes(args)
return_row_index = itunes_return_arg[0]
return_itunes_json = itunes_return_arg[1]
self.assertEqual(return_row_index, 0)
self.assertIsInstance(return_itunes_json, dict)
def test_get_itunes_metadata(self):
"""Test retrieving iTunes metadata as a high level function"""
itunes_meta_data = query_itunes.get_itunes_metadata(self.video_url_for_oembed)
self.assertIsInstance(itunes_meta_data, dict)
def test_oembed_title_non_url(self):
"""Test converting a non-url string to oembed. Should raise
TypeError"""
with self.assertRaises(TypeError):
query_itunes.oembed_title("invalid_url")
def test_oembed_title_url(self):
"""Test the conversion of a youtube video url to an oembed format
for simple extraction of video information."""
video_title = query_itunes.oembed_title(self.video_url_for_oembed)
self.assertIsInstance(video_title, str)
def test_query_itunes(self):
"""Test low level function to fetch iTunes metadata based on the
youtube video title."""
youtube_video_title = self.youtube_video_key_value[0]
itunes_query_results = query_itunes.query_itunes(youtube_video_title)
self.assertIsInstance(itunes_query_results, list)
class testYouTubeDownload(unittest.TestCase):
"""Test utils/download_youtube.py"""
def setUp(self):
self.test_dirpath = os.path.dirname(os.path.abspath(__file__))
self.test_mp4_dirpath = os.path.join(self.test_dirpath, "mp4")
self.mp3_args_for_thread_query_youtube = (
("No Time This Time - The Police", {"id": "nbXACcsTn84", "duration": 198}),
(
self.test_dirpath,
self.test_mp4_dirpath,
),
{
"song": "No Time This Time",
"album": "Reggatta de Blanc (Remastered)",
"artist": "The Police",
"genre": "Rock",
"artwork": "https://is2-ssl.mzstatic.com/image/thumb/Music128/v4/21/94/c7/2194c796-c7f0-2c4b-2f94-ac247bab22a5/source/600x600bb.jpg",
},
False,
)
self.mp4_args_for_thread_query_youtube = (
("No Time This Time - The Police", {"id": "nbXACcsTn84", "duration": 198}),
(
self.test_dirpath,
self.test_mp4_dirpath,
),
{
"song": "No Time This Time",
"album": "Reggatta de Blanc (Remastered)",
"artist": "The Police",
"genre": "Rock",
"artwork": "https://is2-ssl.mzstatic.com/image/thumb/Music128/v4/21/94/c7/2194c796-c7f0-2c4b-2f94-ac247bab22a5/source/600x600bb.jpg",
},
True,
)
self.mp3_filepath = os.path.join(self.test_dirpath, "No Time This Time.mp3")
self.m4a_filepath = os.path.join(self.test_dirpath, "No Time This Time.m4a")
def test_get_youtube_mp4(self):
"""Test download of mp4 file (m4a) using the setUp var above"""
download_youtube.thread_query_youtube(self.mp4_args_for_thread_query_youtube)
assert os.path.exists(self.m4a_filepath)
os.remove(self.m4a_filepath) # remove generated m4a file
def test_get_youtube_mp3(self):
"""Test download of mp3 file (mp3) using the setUp var above"""
download_youtube.thread_query_youtube(self.mp3_args_for_thread_query_youtube)
assert os.path.exists(self.mp3_filepath)
os.remove(self.mp3_filepath) # remove generated mp3 file
def tearDown(self):
import shutil
if os.path.exists(self.test_mp4_dirpath):
# remove mp4 dir if it exists
shutil.rmtree(self.test_mp4_dirpath)
# TODO: add tests for mp3 and mp4 annotations -- above tests are for high-level functions.
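The cleanup in tearDown above follows a common create-in-setUp, remove-in-tearDown pattern; a minimal standalone sketch (hypothetical `TmpDirTest`, not part of this suite) looks like:

```python
import os
import shutil
import tempfile
import unittest

# Hypothetical standalone sketch of the setUp/tearDown cleanup pattern used
# by testYouTubeDownload above: create artifacts in setUp, remove them in tearDown.
class TmpDirTest(unittest.TestCase):
    def setUp(self):
        # Each test gets its own scratch directory
        self.scratch_dir = tempfile.mkdtemp()

    def test_scratch_dir_exists(self):
        self.assertTrue(os.path.isdir(self.scratch_dir))

    def tearDown(self):
        # Mirror the shutil.rmtree cleanup above
        if os.path.exists(self.scratch_dir):
            shutil.rmtree(self.scratch_dir)
```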
if __name__ == "__main__":
unittest.main()
| irahorecka/youtube2audio | tests/test_utils.py | test_utils.py | py | 8,748 | python | en | code | 141 | github-code | 13 |
24715771244 | import requests
import re
import os
import time
headers = {'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36'}
def generateParameters(target_index):
"""生成请求参数"""
para = {
'append': 'list-home',
'paged': target_index,
'action': 'ajax_load_posts',
'query': '',
'page': 'home'
}
return para
def requestHTML(target_url):
"""请求网页"""
print(target_url)
response = requests.get(target_url, headers=headers, verify=False)
res_html = response.text
return res_html
def parseHTML(target_html):
"""解析网页"""
titles = re.findall('<h1 class="post-title h3">(.*?)</h1>', target_html)
if len(titles) > 0:
title = titles[-1]
else:
title = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))
res_dir_name = 'girls_pic/' + title
    # makedirs also creates the parent girls_pic/ directory if it is missing
    os.makedirs(res_dir_name, exist_ok=True)
res_urls = re.findall('<a href="(.*?)" alt=".*?" title=".*?">', target_html)
print(res_urls)
return res_dir_name, res_urls
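As a self-contained illustration of the two `re.findall` patterns above, run against a hypothetical HTML snippet rather than a real page:

```python
import re

# Hypothetical HTML snippet illustrating the two patterns used in parseHTML above.
sample_html = (
    '<h1 class="post-title h3">Sample Gallery</h1>\n'
    '<a href="https://example.com/pic/1.jpg" alt="1" title="1">'
)
print(re.findall('<h1 class="post-title h3">(.*?)</h1>', sample_html))
# ['Sample Gallery']
print(re.findall('<a href="(.*?)" alt=".*?" title=".*?">', sample_html))
# ['https://example.com/pic/1.jpg']
```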
def savePic(target_dir_name, target_urls):
"""保存图片"""
for target_url in target_urls:
        # Derive the image file name from the last segment of its URL
file_name = target_dir_name + '/' + target_url.split('/')[-1]
if not os.path.exists(file_name):
response = requests.get(target_url, headers=headers)
with open(file_name, 'wb') as f:
f.write(response.content)
            print(f'\rSaved {file_name} successfully', end='')
    # Print a trailing newline after the \r progress output
print()
if __name__ == '__main__':
requests.packages.urllib3.disable_warnings()
requests.adapters.DEFAULT_RETRIES = 5
requestUrl = 'https://www.vmgirls.com/wp-admin/admin-ajax.php'
for index in range(11, 50):
print(f'index{index}')
res = requests.post(requestUrl, data=generateParameters(index), headers=headers)
urls = re.findall('<a href="(.*?)" class="list-title text-md h-2x" target="_blank">', res.text)
for url in urls:
html = requestHTML(url)
        # Use a distinct name so the outer `urls` loop list is not shadowed
        dir_name, pic_urls = parseHTML(html)
        savePic(dir_name, pic_urls)
nav_links = re.findall('<a href="(.*?)" class="post-page-numbers">.*?</a>', html)
for link in nav_links:
html = requestHTML(link)
            dir_name, pic_urls = parseHTML(html)
            savePic(dir_name, pic_urls)
| Sternoo/Requests | crowGirls.py | crowGirls.py | py | 2,485 | python | en | code | 0 | github-code | 13 |
14275359473 | # This files contains your custom actions which can be used to run
# custom Python code.
#
# See this guide on how to implement these action:
# https://rasa.com/docs/rasa/custom-actions
# This is a simple example for a custom action which utters "Hello World!"
from .information import CovidInformation as covidInfo
from .vaccineslots import Slots as slots
import re
import datetime
from typing import Any, Text, Dict, List
import os
import uuid
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher
from rasa_sdk.events import UserUtteranceReverted, FollowupAction
#
#
class ActionCovidSearch(Action):
def name(self) -> Text:
return "action_covid_search"
def run(
self,
dispatcher: CollectingDispatcher,
tracker: Tracker,
domain: Dict[Text, Any],
) -> List[Dict[Text, Any]]:
print("Inside ActionCovidSearch action !!!!")
sender_id = uuid.uuid4()
entities = tracker.latest_message["entities"]
country = "India"
stage = None
states = None
top_bottom = None
rate = None
for item in entities:
print(item)
if item["entity"].lower() == "country":
print(item["value"])
country = item["value"].capitalize()
elif item["entity"].lower() == "stage":
print(item["value"])
stage = item["value"]
elif item["entity"].lower() == "state":
print(item["value"])
if states:
states.append(item["value"])
else:
states = []
states.append(item["value"])
elif item["entity"].lower() == "top_bottom":
print(item["value"])
top_bottom = item["value"]
elif item["entity"].lower() == "rate":
print(item["value"])
rate = item["value"]
if stage or states or top_bottom or rate:
covid_info = covidInfo().get_count_by_country(
country, stage, states, top_bottom, rate,sender_id
)
else:
covid_info = "Kindly rephrase your question."
if top_bottom:
dispatcher.utter_message(text=covid_info,image=f'http://localhost:7000/img/charts/{sender_id}.png')
else :
dispatcher.utter_message(text=covid_info)
return []
class ActionVaccineStatus(Action):
def name(self) -> Text:
return "action_vaccine_status"
def run(
self,
dispatcher: CollectingDispatcher,
tracker: Tracker,
domain: Dict[Text, Any],
) -> List[Dict[Text, Any]]:
print("Inside ActionVaccineStatus action !!!!")
entities = tracker.latest_message["entities"]
country = "India"
level = None
for item in entities:
print(item)
if item["entity"].lower() == "country":
print(item["value"])
country = item["value"].capitalize()
elif item["entity"].lower() == "level":
print(item["value"])
level = item["value"]
if level:
vaccine_info = covidInfo().get_vaccine_info(country, level)
else:
vaccine_info = "Kindly rephrase your question ."
dispatcher.utter_message(text=vaccine_info)
return []
class ActionVaccineSlot(Action):
def name(self) -> Text:
return "action_vaccine_slot"
def run(
self,
dispatcher: CollectingDispatcher,
tracker: Tracker,
domain: Dict[Text, Any],
) -> List[Dict[Text, Any]]:
print("Inside ActionVaccineSlot action !!!!")
entities = tracker.latest_message["entities"]
country = "India"
district = None
pincode = None
slots_info = None
for item in entities:
if item["entity"].lower() == "country":
print(item["value"])
country = item["value"].capitalize()
elif item["entity"].lower() == "district":
print(item["value"])
district = item["value"]
elif item["entity"].lower() == "pincode":
print(item["value"])
pincode = item["value"]
if district:
current_date = datetime.datetime.today()
date_list = [current_date + datetime.timedelta(days=x) for x in range(7)]
date_str = [x.strftime("%d-%m-%Y") for x in date_list]
available_slots_info = []
for inp_date in date_str:
slots_available_message = slots().get_slots_by_district(
district, inp_date
)
if slots_available_message is None:
print("incorrect district ")
slots_info = "Kindly provide the correct district name."
break
if not slots_available_message:
if not available_slots_info:
available_slots_info.append(
f"No slots available for District: {district.lower().title()}, as of Date: {inp_date}"
)
else:
available_slots_info.append("\n")
available_slots_info.append(
f"No slots available for District: {district.lower().title()}, as of Date: {inp_date}"
)
else:
if not available_slots_info:
available_slots_info.append(
f"Below slots available for District: {district.lower().title()}, as of Date: {inp_date}"
)
available_slots_info.append("\n")
available_slots_info.append("".join(slots_available_message))
break
else:
available_slots_info.append("\n")
available_slots_info.append(
f"Below slots available for District: {district.lower().title()}, as of Date: {inp_date}"
)
available_slots_info.append("\n")
available_slots_info.append("".join(slots_available_message))
break
if slots_info is None:
slots_info = "".join(available_slots_info)
elif pincode:
current_date = datetime.datetime.today()
date_list = [current_date + datetime.timedelta(days=x) for x in range(7)]
date_str = [x.strftime("%d-%m-%Y") for x in date_list]
available_slots_info = []
for inp_date in date_str:
slots_available_message = slots().get_slots_by_pincode(
pincode, inp_date
)
if not slots_available_message:
if not available_slots_info:
available_slots_info.append(
f"No slots available for Pincode: {pincode}, as of Date: {inp_date}"
)
else:
available_slots_info.append("\n")
available_slots_info.append(
f"No slots available for Pincode: {pincode}, as of Date: {inp_date}"
)
else:
if not available_slots_info:
available_slots_info.append(
f"Below slots available for Pincode: {pincode}, as of Date: {inp_date}"
)
available_slots_info.append("\n")
available_slots_info.append("".join(slots_available_message))
break
else:
available_slots_info.append("\n")
available_slots_info.append(
f"Below slots available for Pincode: {pincode}, as of Date: {inp_date}"
)
available_slots_info.append("\n")
available_slots_info.append("".join(slots_available_message))
break
slots_info = "".join(available_slots_info)
else:
slots_info = "Kindly rephrase your question ."
dispatcher.utter_message(text=slots_info)
return []
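The seven-day lookahead in `ActionVaccineSlot` above builds its date strings as follows; this standalone sketch uses a fixed start date instead of `datetime.datetime.today()` so the output is deterministic:

```python
import datetime

# Standalone sketch of the 7-day date window built in ActionVaccineSlot above,
# with a fixed start date (2021-06-01) rather than today's date.
start = datetime.date(2021, 6, 1)
date_list = [start + datetime.timedelta(days=x) for x in range(7)]
date_str = [d.strftime("%d-%m-%Y") for d in date_list]
print(date_str[0], date_str[-1])  # 01-06-2021 07-06-2021
```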
class DefaultFallbackAction(Action):
def name(self) -> Text:
return "default_fallback_action"
def run(
self,
dispatcher: CollectingDispatcher,
tracker: Tracker,
domain: Dict[Text, Any],
) -> List[Dict[Text, Any]]:
print("Inside DefaultFallbackAction action !!!!")
dispatcher.utter_message(template="utter_fallback")
return []
class CovidInfoAction(Action):
def name(self) -> Text:
return "covid_info_action"
def run(
self,
dispatcher: CollectingDispatcher,
tracker: Tracker,
domain: Dict[Text, Any],
) -> List[Dict[Text, Any]]:
print("Inside CovidInfoAction action !!!!")
dispatcher.utter_message(template="utter_covid_info")
return [
UserUtteranceReverted(),
FollowupAction(tracker.active_form.get("name")),
]
| sumanentc/COVID-19-bot | actions/actions.py | actions.py | py | 9,461 | python | en | code | 1 | github-code | 13 |
13934404665 | import sys
import tensorflow as tf
from PIL import Image, ImageFilter
import os
import pickle
import glob
import pprint
import operator
sy = ['!', '(', ')', '+', '-', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '=', 'a', 'alpha', 'b', 'beta', 'c', 'cos', 'd', 'div', 'e', 'f', 'forward_slash', 'g', 'gamma', 'geq', 'gt', 'h',
'i', 'infty', 'int', 'j', 'k', 'l', 'leq', 'lim', 'log', 'lt', 'm', 'n', 'neq', 'o', 'p', 'phi', 'pi', 'q', 'r', 's', 'sin', 'sqrt', 'sum', 't', 'tan', 'theta', 'mul', 'u', 'v', 'w', 'x', 'y', 'z']
slash_sy = ['tan', 'sqrt', 'mul', 'pi', 'phi', 'theta', 'sin', 'alpha', 'beta', 'gamma',
'infty', 'leq', 'sum', 'geq', 'neq', 'lim', 'log', 'int', 'frac', 'cos', 'bar', 'div', '^', '_']
variable = [i for i in sy if i not in slash_sy]
brules = {}
def update(im_name, symbol_list):
im = Image.open(im_name)
list_len = len(symbol_list)
for i in range(list_len):
if i >= len(symbol_list): break
symbol = symbol_list[i]
predict_result = symbol[1]
# deal with equal mark
if predict_result == "-":
if i < (len(symbol_list) - 1):
s1 = symbol_list[i+1]
if s1[1] == "-" and abs(s1[2] - symbol[2]) < 30 and abs(s1[4] - symbol[4]) < 30:
updateEqual(symbol, s1, symbol_list, im, i)
continue
# deal with bar
if predict_result == "-":
if i < (len(symbol_list) - 2):
s1 = symbol_list[i+1]
s2 = symbol_list[i+2]
if isVSame(symbol, s1) and (not isVSame(symbol, s2)):
updateBar(symbol, symbol_list, im, i)
continue
# deal with division mark
if predict_result == "-":
if i < (len(symbol_list) - 2):
s1 = symbol_list[i+1]
s2 = symbol_list[i+2]
if s1[3] < symbol[3] and s2[3] > symbol[3] and (s2[2] - s1[2]) < 30:
if s1[1] == "dot" and s2[1] == "dot":
updateDivision(symbol, s1, s2, symbol_list, im, i)
continue
# deal with fraction
if predict_result == "-":
j = i
upPart = 0
underPart = 0
while j < len(symbol_list):
tmp = symbol_list[j]
if tmp[2] > symbol[2] and tmp[4] < symbol[4] and tmp[5] > symbol[3]: upPart += 1
if tmp[2] > symbol[2] and tmp[4] < symbol[4] and tmp[3] < symbol[5]: underPart += 1
j += 1
if upPart > 0 and underPart > 0:
updateFrac(symbol, symbol_list, im, i)
continue
# deal with dots
if predict_result == "dot":
if i < (len(symbol_list) - 2):
s1 = symbol_list[i+1]
s2 = symbol_list[i+2]
if symbol_list[i+1][1] == "dot" and symbol_list[i+2][1] == "dot":
updateDots(symbol, s1, s2, symbol_list, im, i)
continue
# deal with i
if predict_result == "dot":
if i < (len(symbol_list) - 1):
s1 = symbol_list[i+1]
if s1[1] == "1" and abs(s1[2] - symbol[2]) < 30:
updateI(symbol, s1, symbol_list, im, i)
continue
if i > 1:
s1 = symbol_list[i-1]
if s1[1] == "1" and abs(s1[2] - symbol[2]) < 30:
updateI(symbol, s1, symbol_list, im, i)
continue
# deal with +-
if i < (len(symbol_list) - 1):
if (symbol[1] == "+" and symbol_list[i+1][1] == "-") or (symbol[1] == "-" and symbol_list[i+1][1] == "+"):
x,y,xw,yh = symbol[2:]
x1, y1, xw1, yh1 = symbol_list[i+1][2:]
cenX = x + (xw - x) / 2
cenX1 = x1 + (xw1 - x1) / 2
s1 = symbol_list[i+1]
if abs(cenX - cenX1) < 15:
updatePM(symbol, s1, symbol_list, im, i)
continue
return symbol_list
def toLatex(symbol_list):
s = []
i = 0
while (i < len(symbol_list)):
symbol = symbol_list[i]
value = symbol[1]
if value == 'frac':
upper = []
under = []
i = i + 1
while (i < len(symbol_list) and (isUpperFrac(symbol, symbol_list[i]) or isUnderFrac(symbol, symbol_list[i]))):
if isUpperFrac(symbol, symbol_list[i]): upper.append(symbol_list[i])
if isUnderFrac(symbol, symbol_list[i]): under.append(symbol_list[i])
i = i + 1
if len(upper) > 0 and upper[len(upper) - 1][1] not in variable:
upper.pop()
i = i - 1
if len(under) > 0 and under[len(under) - 1][1] not in variable:
under.pop()
i = i - 1
upper_string = '{' + toLatex(upper) + '}'
under_string = '{' + toLatex(under) + '}'
s.append('\\frac'+upper_string+under_string)
continue
elif value == 'sqrt':
outer = []
inner = []
i = i + 1
while (i < len(symbol_list) and isInner(symbol, symbol_list[i])):
inner.append(symbol_list[i])
i = i + 1
if len(inner) > 0 and inner[len(inner) - 1][1] not in variable:
inner.pop()
i = i - 1
inner_string = '{' + toLatex(inner) + '}'
s.append('\\sqrt'+inner_string)
continue
elif value in slash_sy:
s.append('\\' + value)
base = i
elif i > 0 and (s[len(s) - 1] in slash_sy):
# need to consider about range within squrt and frac
s.append('{'+value+'}')
elif i < len(symbol_list) - 1 and isUpperSymbol(symbol, symbol_list[i+1]) and (symbol[1] in variable) and (symbol_list[i+1][1] in variable):
s.append(value)
s.append('^{')
i = i+1
while (i < len(symbol_list) and isUpperSymbol(symbol, symbol_list[i])):
s.append(symbol_list[i][1])
i = i + 1
s.append('}')
continue
elif i < len(symbol_list) - 1 and isLowerSymbol(symbol, symbol_list[i+1]) and (symbol[1] in variable) and (symbol_list[i+1][1] in variable):
s.append(value)
s.append('_{')
i = i+1
while (i < len(symbol_list) and isLowerSymbol(symbol, symbol_list[i])):
s.append(symbol_list[i][1])
i = i + 1
s.append('}')
continue
else:
s.append(value)
base = i
i = i + 1
return "".join(s)
def isVSame(cur, next):
cur_center_x = cur[2] + (cur[4] - cur[2])/2
next_center_x = next[2] + (next[4] - next[2])/2
if abs(cur_center_x - next_center_x) < 30: return True
else: return False
def isInner(cur, next):
if next[3] < cur[5] and next[2] > cur[2] and next[4] - cur[4] < 10: return True
else: return False
def isUpperFrac(cur, next):
if next[5] < cur[3] and next[2] - cur[2] > -10 and next[4] - cur[4] < 10: return True
else: return False
def isUnderFrac(cur, next):
if next[3] > cur[5] and next[2] - cur[2] > -10 and next[4] - cur[4] < 10: return True
else: return False
def isUpperSymbol(cur, next):
cur_center = cur[3] + (cur[5] - cur[3])/2
next_center = next[3] + (next[5] - next[3])/2
cur_center_x = cur[2] + (cur[4] - cur[2])/2
if next_center < cur_center - (next[5] - next[3])/2 and next[2] > cur_center_x: return True
else: return False
def isLowerSymbol(cur, next):
cur_center = cur[3] + (cur[5] - cur[3])/2
next_center = next[3] + (next[5] - next[3])/2
cur_center_x = cur[2] + (cur[4] - cur[2])/2
if next_center > cur_center + (next[5] - next[3])/2 and next[2] > cur_center_x: return True
else: return False
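The superscript test above is purely geometric: the next symbol's vertical center must sit above the current symbol and to the right of its horizontal center. A self-contained sketch with synthetic boxes (no image data) mirrors that check:

```python
# Self-contained sketch of the superscript geometry in isUpperSymbol above.
# Boxes are (value, x, y, xw, yh) tuples with synthetic coordinates.
def is_upper(cur, nxt):
    cur_center_y = cur[2] + (cur[4] - cur[2]) / 2
    nxt_center_y = nxt[2] + (nxt[4] - nxt[2]) / 2
    cur_center_x = cur[1] + (cur[3] - cur[1]) / 2
    return nxt_center_y < cur_center_y - (nxt[4] - nxt[2]) / 2 and nxt[1] > cur_center_x

base_sym = ('x', 0, 50, 40, 100)   # body-height symbol
raised = ('2', 45, 20, 70, 45)     # smaller box, raised and to the right
inline = ('y', 45, 50, 85, 100)    # same baseline -> not a superscript
print(is_upper(base_sym, raised), is_upper(base_sym, inline))  # True False
```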
def area(symbol):
return (symbol[4] - symbol[2]) * (symbol[5] - symbol[3])
def updateEqual(symbol,s1,symbol_list, im, i):
new_x = min(symbol[2], s1[2])
new_y = min(symbol[3], s1[3])
new_xw = max(symbol[4], s1[4])
new_yh = max(symbol[5], s1[5])
new_symbol = (im.crop((new_x, new_y, new_xw, new_yh)), "=", new_x, new_y, new_xw, new_yh)
symbol_list[i] = new_symbol
symbol_list.pop(i+1)
def updateDivision(symbol,s1,s2,symbol_list, im, i):
new_x = min(symbol[2], s1[2], s2[2])
new_y = min(symbol[3], s1[3], s2[3])
new_xw = max(symbol[4], s1[4], s2[4])
new_yh = max(symbol[5], s1[5], s2[5])
new_symbol = (im.crop((new_x, new_y, new_xw, new_yh)), "div", new_x, new_y, new_xw, new_yh)
symbol_list[i] = new_symbol
symbol_list.pop(i+2)
symbol_list.pop(i+1)
def updateDots(symbol,s1,s2,symbol_list, im, i):
new_x = min(symbol[2], s1[2], s2[2])
new_y = min(symbol[3], s1[3], s2[3])
new_xw = max(symbol[4], s1[4], s2[4])
new_yh = max(symbol[5], s1[5], s2[5])
new_symbol = (im.crop((new_x, new_y, new_xw, new_yh)), "dots", new_x, new_y, new_xw, new_yh)
symbol_list[i] = new_symbol
symbol_list.pop(i+2)
symbol_list.pop(i+1)
def updateI(symbol,s1,symbol_list, im, i):
new_x = min(symbol[2], s1[2])
new_y = min(symbol[3], s1[3])
new_xw = max(symbol[4], s1[4])
new_yh = max(symbol[5], s1[5])
new_symbol = (im.crop((new_x, new_y, new_xw, new_yh)), "i", new_x, new_y, new_xw, new_yh)
symbol_list[i] = new_symbol
symbol_list.pop(i+1)
def updatePM(symbol,s1,symbol_list, im, i):
new_x = min(symbol[2], s1[2])
new_y = min(symbol[3], s1[3])
new_xw = max(symbol[4], s1[4])
new_yh = max(symbol[5], s1[5])
new_symbol = (im.crop((new_x, new_y, new_xw, new_yh)), "pm", new_x, new_y, new_xw, new_yh)
symbol_list[i] = new_symbol
symbol_list.pop(i+1)
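Every `update*` helper above merges two detections by taking the min/max corners of their bounding boxes before re-cropping; a minimal sketch of that union:

```python
# Sketch of the bounding-box union performed by the update* helpers above:
# the merged symbol (e.g. two dashes becoming '=') spans both input boxes.
def bbox_union(a, b):
    # a, b are (x, y, xw, yh) corner boxes
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

top_bar = (10, 40, 30, 45)
bottom_bar = (10, 55, 30, 60)
print(bbox_union(top_bar, bottom_bar))  # (10, 40, 30, 60)
```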
def updateBar(symbol,symbol_list, im, i):
x, y, xw, yh = symbol[2:]
new_symbol = (symbol[0], "bar", x, y, xw, yh)
symbol_list[i] = new_symbol
def updateFrac(symbol,symbol_list, im, i):
x, y, xw, yh = symbol[2:]
new_symbol = (symbol[0], "frac", x, y, xw, yh)
symbol_list[i] = new_symbol | madhavgoyal98/MathsVision | predict_function.py | predict_function.py | py | 10,277 | python | en | code | 0 | github-code | 13 |
24592149506 | """
Script Name: ANOVA Analysis of Growth Count Matrices
Description:
This Python script is designed to perform one-way Analysis of Variance (ANOVA) on multiple matrices of growth count data.
It reads data from a CSV file and processes it to calculate the F-statistic and significance probability value.
The purpose is to test if there are any statistically significant differences between the means of the different groups.
Dependencies:
- SciPy for statistical calculations
- pandas for data manipulation
Functions:
- flatten(matrix): Flattens a 2D matrix into a 1D list
- ANOVA(*matrixs): Performs ANOVA on multiple 2D matrices
Usage:
1. Ensure that you have the required libraries installed.
2. Place the CSV file of the experiment in the same directory as this script.
3. Run the script.
Example:
An example is provided within the script to demonstrate its usage. Uncomment it to see how it works.
"""
__appname__ = 'ANOVAtest'
__author__ = 'ANQI WANG (aw222@ic.ac.uk)'
__version__ = '0.0.1'
__license__ = "None"
from scipy import stats # Import stats module from scipy library for statistical calculations
import pandas as pd # Import pandas library for data manipulation
# Define a function to flatten a 2D matrix into a 1D list
def flatten(matrix):
'''
Flatten a 2D matrix into a 1D list
'''
result = []
for sublist in matrix:
result += sublist # Append each element of the sublist to the result list
return result
# Define a function to perform ANOVA on multiple matrices
def ANOVA(*matrixs):
'''
    Take the final growth count matrices and perform a one-way ANOVA across them
'''
flatten_matrixs = []
for matrix in matrixs:
# Flatten each matrix into a 1D list and store in a new variable
flatten_matrixs.append(flatten(matrix))
anova_table = stats.f_oneway(*flatten_matrixs) # Perform one-way ANOVA
print(anova_table) # Directly print the result
# Read experimental data from a CSV file.
# stats.f_oneway needs at least two groups, so each CSV column is treated as
# one group and wrapped as a one-column matrix for flatten().
data1 = pd.read_csv('ExperimentName.csv')
ANOVA(*[[[value] for value in data1[column].dropna()] for column in data1.columns])
# Test code:
# Define example data for groups
# group1 = [[12], [13], [11], [14], [10]]
# group2 = [[10], [15], [9], [12], [11]]
# group3 = [[14], [16], [15], [17], [13]]
# Comment explaining F-statistic and significance probability value
'''
F: F-statistic, the ratio of the between-group mean square to the within-group mean square.
Pr(>F): Significance probability value, used to evaluate the probability of the null hypothesis being true.
If the significance probability value is less than 0.05, the null hypothesis can be rejected, indicating that there is a significant difference in the means of the groups.
'''
# ANOVA(group1, group2, group3) # Prints the F-statistic and significance probability value
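Using the commented example groups above (flattened to plain lists), the F-statistic that `stats.f_oneway` reports can be reproduced with a stdlib-only sketch:

```python
# Stdlib-only sketch of the one-way ANOVA F-statistic that stats.f_oneway
# computes, applied to the flattened example groups above.
def f_statistic(*groups):
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

g1, g2, g3 = [12, 13, 11, 14, 10], [10, 15, 9, 12, 11], [14, 16, 15, 17, 13]
print(round(f_statistic(g1, g2, g3), 3))  # 5.417
```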
| AnqiW222/CMEE_MSc_Project | code/ANOVAtest.py | ANOVAtest.py | py | 2,801 | python | en | code | 0 | github-code | 13 |
13419751173 | from functools import reduce
from typing import Dict, List
from Code.Backend.Domain.DiscountPolicyObjects.AndDiscount import AndDiscount
from Code.Backend.Domain.DiscountPolicyObjects.ConditionalDiscount import ConditionalDiscount
from Code.Backend.Domain.DiscountPolicyObjects.MaxDiscount import MaxDiscount
from Code.Backend.Domain.DiscountPolicyObjects.OrDiscount import OrDiscount
from Code.Backend.Domain.DiscountPolicyObjects.SumDiscount import SumDiscount
from Code.Backend.Domain.DiscountPolicyObjects.VisibleDiscount import VisibleDiscount
from Code.Backend.Domain.DiscountPolicyObjects.XorDiscount import XorDiscount
from Code.Backend.Domain.Product import Product
from Code.DAL.Objects.store import Discount, ComplexDiscount
class DiscountPolicy:
def __init__(self):
"""
"""
self.__discounts = {}
self.__authorized_for_discount = None # TODO
self.id_counter = 0
def calculate_basket(self, products: List[Product], user_status, quantity_dic):
# TODO need to check if user is authorized.
products_discounts: Dict[str, List[float]] = {prdct.get_ID(): [] for prdct in
products}
for discount in self.__discounts.values():
discount.calculate_price(quantity_dic, products, [products_discounts])
price = 0
for p in products:
# check if there is a discount
if products_discounts[p.get_ID()]:
price += (p.get_price() * quantity_dic[p.get_ID()]) \
- (max(products_discounts[p.get_ID()])) * p.get_price() * quantity_dic[p.get_ID()]
# if not discount
else:
price += p.get_price() * quantity_dic[p.get_ID()]
return price
def get_visible_discounts(self):
lst = []
for discount in self.__discounts.values():
if isinstance(discount, VisibleDiscount):
lst.append(discount)
return lst
def get_conditional_discounts(self):
lst = []
for discount in self.__discounts.values():
if isinstance(discount, ConditionalDiscount):
lst.append(discount)
return lst
def get_combined_discounts(self):
lst = []
for discount in self.__discounts.values():
if not isinstance(discount, ConditionalDiscount) and not isinstance(discount, VisibleDiscount):
lst.append(discount)
return lst
def add_visible_discount(self, discount_price, end_date, discount_on, Type):
if discount_price >= 1:
raise ValueError("cant get discount over 100%")
if discount_price <= 0:
raise ValueError("discount cant be 0 or negative")
self.id_counter += 1
discount = VisibleDiscount(discount_price, end_date, discount_on, Type)
discount.set_id(self.id_counter)
self.__discounts[self.id_counter] = discount
return discount
def add_conditional_discount(self, discount_price, end_date, discount_on,
Type, dic_of_products_and_quantity,
min_price_for_discount):
if discount_price >= 1:
raise ValueError("cant get discount over 100%")
if discount_price <= 0:
raise ValueError("discount cant be 0 or negative")
self.id_counter += 1
discount = ConditionalDiscount(discount_price, end_date, discount_on,
Type,
dic_of_products_and_quantity,
min_price_for_discount)
discount.set_id(self.id_counter)
self.__discounts[self.id_counter] = discount
return discount
def add_or_discount(self, first_discount, second_discount):
try:
discount1 = self.__discounts.pop(first_discount)
discount2 = self.__discounts.pop(second_discount)
self.id_counter += 1
discount = OrDiscount(discount1, discount2)
discount.set_id(self.id_counter)
self.__discounts[self.id_counter] = discount
return discount
except:
raise ValueError("no discount was found with the given id")
def add_and_discount(self, first_discount, second_discount):
try:
discount1 = self.__discounts.pop(first_discount)
discount2 = self.__discounts.pop(second_discount)
self.id_counter += 1
discount = AndDiscount(discount1, discount2)
discount.set_id(self.id_counter)
self.__discounts[self.id_counter] = discount
return discount
except:
raise ValueError("no discount was found with the given id")
def add_xor_discount(self, first_discount, second_discount):
try:
discount1 = self.__discounts.pop(first_discount)
discount2 = self.__discounts.pop(second_discount)
self.id_counter += 1
discount = XorDiscount(discount1, discount2)
discount.set_id(self.id_counter)
self.__discounts[self.id_counter] = discount
return discount
except:
raise ValueError("no discount was found with the given id")
def add_sum_discount(self, first_discount, second_discount):
try:
discount1 = self.__discounts.pop(first_discount)
discount2 = self.__discounts.pop(second_discount)
self.id_counter += 1
discount = SumDiscount(discount1, discount2)
discount.set_id(self.id_counter)
self.__discounts[self.id_counter] = discount
return discount
except:
raise ValueError("no discount was found with the given id")
def add_max_discount(self, first_discount, second_discount):
try:
discount1 = self.__discounts.pop(first_discount)
discount2 = self.__discounts.pop(second_discount)
self.id_counter += 1
discount = MaxDiscount(discount1, discount2)
discount.set_id(self.id_counter)
self.__discounts[self.id_counter] = discount
return discount
except:
raise ValueError("no discount was found with the given id")
def create_discount_policy_from_db(self, discounts: List[Discount],
complexDiscounts: List[ComplexDiscount], id_counter):
persist_discounts = {x.id: self.__create_discount_from_db(x) for x in discounts}
persist_complex_discounts = self.__create_complex_discount_from_db(complexDiscounts, persist_discounts)
self.__discounts = persist_discounts.update(persist_complex_discounts)
self.id_counter = id_counter
def __create_discount_from_db(self, discountDB: Discount):
if discountDB.is_visible:
# TODO add discountDB.discount
discount = VisibleDiscount(discountDB.discount_value, discountDB.end_date, discountDB.discount_on, discountDB.type)
discount.set_id(discountDB.id)
return discount
else:
dic_of_prod = {discountDB.product_id: discountDB.min_count_of_product}
discount = ConditionalDiscount(discountDB.discount_value, discountDB.end_date,
discountDB.discount_on, discountDB.type,
dic_of_prod, discountDB.min_price_of_product)
discount.set_id(discountDB.id)
return discount
def __create_complex_discount_from_db(self, complexDiscounts, persist_discounts):
res = {}
for discountDB in complexDiscounts:
self.__complex_rec(discountDB.id, complexDiscounts, persist_discounts, res)
return res
def __complex_rec(self, discount_id, complexDiscounts, persist_discounts, res):
curDiscountDB = next(filter(lambda x: x.id == discount_id, complexDiscounts))
# exit term
if not curDiscountDB or (curDiscountDB and curDiscountDB.id in res.keys()):
return
# check if not complex discount is son
if curDiscountDB.first_discount in persist_discounts.keys():
first_discount = persist_discounts[curDiscountDB.first_discount]
else:
# check if already made the complex discount
if curDiscountDB.first_discount in res.keys():
first_discount = res[curDiscountDB.first_discount]
else:
first_discount = self.__complex_rec(curDiscountDB.first_discount, complexDiscounts, persist_discounts, res)
# check if not complex discount is son
if curDiscountDB.second_discount in persist_discounts.keys():
second_discount = persist_discounts[curDiscountDB.second_discount]
else:
# check if already made the complex discount
if curDiscountDB.second_discount in res.keys():
second_discount = res[curDiscountDB.second_discount]
else:
second_discount = self.__complex_rec(curDiscountDB.second_discount, complexDiscounts, persist_discounts, res)
if curDiscountDB.type_ == 0: # or
discount = OrDiscount(first_discount, second_discount)
elif curDiscountDB.type_ == 1: # and
discount = AndDiscount(first_discount, second_discount)
elif curDiscountDB.type_ == 2: # xor
discount = XorDiscount(first_discount, second_discount)
elif curDiscountDB.type_ == 3: # sum
discount = SumDiscount(first_discount, second_discount)
else: # 4 max
discount = MaxDiscount(first_discount, second_discount)
discount.set_id(curDiscountDB.id)
res[discount.discount_id] = discount
return discount
| yanay94sun/Workshop | Code/Backend/Domain/DiscountPolicyObjects/DiscountPolicy.py | DiscountPolicy.py | py | 9,872 | python | en | code | 0 | github-code | 13 |
35205574199 | import re
from util import aoc
NAMES = {
"one": 1,
"two": 2,
"three": 3,
"four": 4,
"five": 5,
"six": 6,
"seven": 7,
"eight": 8,
"nine": 9,
}
def part_one(input):
return either_part(input, r"\d", int)
def either_part(input, digit_expr, parse_digit):
re_first = re.compile(r".*?(" + digit_expr + ").*")
re_last = re.compile(r".*(" + digit_expr + ").*?")
nums = []
for line in input.splitlines():
f = re_first.fullmatch(line).group(1)
l = re_last.fullmatch(line).group(1)
nums.append(parse_digit(f) * 10 + parse_digit(l))
return sum(n for n in nums)
def part_two(input):
return either_part(
input,
r"\d|" + "|".join(NAMES.keys()),
lambda s: int(s) if len(s) == 1 else NAMES[s],
)
if __name__ == "__main__":
aoc.solve(
__file__,
None,
part_one,
part_two, # not 55330
)
| barneyb/aoc-2023 | python/aoc2023/day01/trebuchet.py | trebuchet.py | py | 933 | python | en | code | 0 | github-code | 13 |
19220187227 | """
Daily coding problem #3:
Given the root to a binary tree, implement serialize(root),
which serializes the tree into a string, and deserialize(s),
which deserializes the string back into the tree.
Code author: Hoang Tuan Anh
Date: 10/09/2019
"""
from collections import deque
class Node:
def __init__(self, val, left=None, right=None):
self.val = val
self.left = left
self.right = right
def serialize(root):
    string_tree = root.val + " "
    q = deque()
    q.append(root)
    while q:
        s = q.popleft()
        # Always emit two entries per node ("None" for a missing child) so that
        # the k-th node's children sit at fixed slots 2k+1 and 2k+2 of the output.
        if s.left is not None:
            q.append(s.left)
            string_tree = string_tree + s.left.val + " "
        else:
            string_tree = string_tree + "None" + " "
        if s.right is not None:
            q.append(s.right)
            string_tree = string_tree + s.right.val + " "
        else:
            string_tree = string_tree + "None" + " "
    return string_tree
def deserialize(s):
    #Turn string s into a list of node values (the split leaves a trailing '')
    list_node = s.split(" ")[:-1]
    list_length = len(list_node)
    #serialize always writes two child slots per node, so the k-th node taken
    #from the queue finds its children at positions 2k+1 and 2k+2
    left_child_pos = 1
    right_child_pos = 2
    root = Node(list_node[0])
    q_node = deque()
    q_node.append(root)
    while q_node:
        s = q_node.popleft()
        if left_child_pos < list_length and list_node[left_child_pos] != "None":
            s.left = Node(list_node[left_child_pos])
            q_node.append(s.left)
        if right_child_pos < list_length and list_node[right_child_pos] != "None":
            s.right = Node(list_node[right_child_pos])
            q_node.append(s.right)
        left_child_pos = left_child_pos + 2
        right_child_pos = right_child_pos + 2
    return root
def main():
node = Node('root', Node('left', Node('left.left', Node('left.left.left'), Node('left.left.right')), ), Node('right', Node('right.left'), Node('right.right')))
string_tree = serialize(node)
print (string_tree)
assert deserialize(serialize(node)).left.left.val == 'left.left'
if __name__ == '__main__':
main() | peteranh/practice | serialise tree/solution.py | solution.py | py | 3,102 | python | en | code | 0 | github-code | 13 |
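The level-order encoding above stays decodable for arbitrary trees only if every absent child leaves a "None" placeholder, so that positions in the string line up with nodes taken from the queue. A minimal self-contained sketch of that idea (illustrative names, not the original file's API):

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def serialize(root):
    # Level-order walk; every real node contributes exactly two child slots.
    out, q = [root.val], deque([root])
    while q:
        node = q.popleft()
        for child in (node.left, node.right):
            out.append(child.val if child else "None")
            if child:
                q.append(child)
    return " ".join(out)

def deserialize(s):
    vals = s.split(" ")
    root, q, pos = Node(vals[0]), deque(), 1
    q.append(root)
    while q and pos < len(vals):
        node = q.popleft()
        # Each dequeued node consumes exactly the two slots it wrote.
        for side in ("left", "right"):
            if pos < len(vals) and vals[pos] != "None":
                child = Node(vals[pos])
                setattr(node, side, child)
                q.append(child)
            pos += 1
    return root

tree = Node("root", Node("left", Node("left.left")), Node("right"))
print(serialize(tree))
assert deserialize(serialize(tree)).left.left.val == "left.left"
```

The string comparison against `"None"` (rather than a truthiness check) matters: the placeholder is a non-empty string and would otherwise be turned into a real node.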
20880318133 | """Tests for the hyalus.run.clean module"""
__author__ = "David McConnell"
__credits__ = ["David McConnell"]
__maintainer__ = "David McConnell"
from datetime import date
from pathlib import Path
import shutil
from unittest.mock import patch
import pytest
from hyalus.run import clean
from hyalus.run.common import HyalusRun
# pylint: disable=duplicate-code
OUTER_DIR = Path(__file__).parent
RUNS_DIR = OUTER_DIR / "runs_dir"
TEST_RUN_1 = HyalusRun(RUNS_DIR / "runtest_1_2023-02-09_ey2S4AGY")
TEST_RUN_2 = HyalusRun(RUNS_DIR / "runtest_2_2023-02-10_ndTVVsed")
TEST_RUN_7 = HyalusRun(RUNS_DIR / "runtest_7_2023-02-11_5KUBAvgo")
@pytest.fixture(name="runs_dir")
def fixture_runs_dir(tmp_path):
"""Copy contents of RUNS_DIR to tmp_path and then return it"""
shutil.copytree(RUNS_DIR, tmp_path, dirs_exist_ok=True)
return tmp_path
class TestHyalusCleanRunner:
"""Tests for the HyalusCleanRunner class"""
def test_confirm_test_run_removal_yes(self):
"""Test confirmation of run removal when runs should be removed based on user input"""
runner = clean.HyalusCleanRunner(RUNS_DIR, force=False)
for return_value in ["y", "yes"]:
with patch("builtins.input", return_value=return_value):
assert runner.confirm_test_run_removal([TEST_RUN_1])
def test_confirm_test_run_removal_no(self):
"""Test confirmation of run removal when runs should not be removed based on user input"""
runner = clean.HyalusCleanRunner(RUNS_DIR, force=False)
for return_value in ["some", "other", "values"]:
with patch("builtins.input", return_value=return_value):
assert not runner.confirm_test_run_removal([TEST_RUN_1])
def test_confirm_test_run_removal_force(self):
"""Test that confirm_test_run_removal always returns True when force=True"""
runner = clean.HyalusCleanRunner(RUNS_DIR, force=True)
for return_value in ["yes", "y", "some", "other", "values"]:
with patch("builtins.input", return_value=return_value):
assert runner.confirm_test_run_removal([TEST_RUN_1])
def test_run_no_tests_found(self, capsys, runs_dir):
"""Test path for when no tests are found for removal"""
runner = clean.HyalusCleanRunner(runs_dir, to_clean=["runtest_99"], force=True)
expected_fs_objs = len(list(runs_dir.iterdir()))
expected_msg = f"Couldn't find any test runs to remove in {runs_dir} based on given criteria"
runner.run()
assert expected_fs_objs == len(list(runs_dir.iterdir()))
assert capsys.readouterr().out.strip('\n') == expected_msg
def test_run_tests_found_1(self, capsys, runs_dir):
"""Test path for when tests are found for removal, case one"""
runner = clean.HyalusCleanRunner(runs_dir, force=True)
expected_fs_objs = len(list(runs_dir.iterdir())) - 3
expected_msg = "3 old test runs have been removed"
runner.run()
assert expected_fs_objs == len(list(runs_dir.iterdir()))
assert capsys.readouterr().out.strip('\n') == expected_msg
def test_run_tests_found_2(self, capsys, runs_dir):
"""Test path for when no tests are found for removal"""
runner = clean.HyalusCleanRunner(runs_dir, oldest=date(2023, 2, 10), newest=date(2023, 2, 10), force=True)
expected_fs_objs = len(list(runs_dir.iterdir())) - 1
expected_msg = "1 old test runs have been removed"
runner.run()
assert expected_fs_objs == len(list(runs_dir.iterdir()))
assert capsys.readouterr().out.strip('\n') == expected_msg
def test_run_removal_canceled(self, capsys, runs_dir):
"""Test that when test run removal is canceled by the user no tests get removed"""
runner = clean.HyalusCleanRunner(runs_dir)
expected_fs_objs = len(list(runs_dir.iterdir()))
expected_msg = "Test run removal canceled"
with patch("builtins.input", return_value="n"):
runner.run()
assert expected_fs_objs == len(list(runs_dir.iterdir()))
assert capsys.readouterr().out.strip('\n') == expected_msg
| dvmcconnell/hyalus | tests/run/test_clean.py | test_clean.py | py | 4,165 | python | en | code | 0 | github-code | 13 |
21562176539 |
import sys

from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(800, 600)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.frame = QtWidgets.QFrame(self.centralwidget)
self.frame.setGeometry(QtCore.QRect(-10, -20, 881, 591))
self.frame.setFrameShape(QtWidgets.QFrame.StyledPanel)
self.frame.setFrameShadow(QtWidgets.QFrame.Raised)
self.frame.setObjectName("frame")
self.pushButton = QtWidgets.QPushButton(self.frame)
self.pushButton.setGeometry(QtCore.QRect(640, 510, 121, 41))
font = QtGui.QFont()
font.setFamily("URW Bookman L")
font.setPointSize(14)
font.setBold(True)
font.setItalic(True)
font.setWeight(75)
self.pushButton.setFont(font)
self.pushButton.setCursor(QtGui.QCursor(QtCore.Qt.ArrowCursor))
self.pushButton.setObjectName("pushButton")
self.listWidget = QtWidgets.QListWidget(self.frame)
self.listWidget.setGeometry(QtCore.QRect(20, 40, 256, 41))
self.listWidget.setObjectName("listWidget")
item = QtWidgets.QListWidgetItem()
font = QtGui.QFont()
font.setFamily("Chilanka")
font.setPointSize(22)
font.setItalic(True)
item.setFont(font)
brush = QtGui.QBrush(QtGui.QColor(85, 170, 255))
brush.setStyle(QtCore.Qt.Dense6Pattern)
item.setBackground(brush)
self.listWidget.addItem(item)
self.listWidget_2 = QtWidgets.QListWidget(self.frame)
self.listWidget_2.setGeometry(QtCore.QRect(150, 80, 256, 151))
self.listWidget_2.setObjectName("listWidget_2")
item = QtWidgets.QListWidgetItem()
font = QtGui.QFont()
font.setFamily("URW Chancery L")
font.setPointSize(16)
font.setBold(True)
font.setItalic(True)
font.setWeight(75)
item.setFont(font)
brush = QtGui.QBrush(QtGui.QColor(170, 0, 255))
brush.setStyle(QtCore.Qt.FDiagPattern)
item.setBackground(brush)
self.listWidget_2.addItem(item)
self.listWidget_3 = QtWidgets.QListWidget(self.frame)
self.listWidget_3.setGeometry(QtCore.QRect(20, 260, 256, 41))
self.listWidget_3.setObjectName("listWidget_3")
item = QtWidgets.QListWidgetItem()
font = QtGui.QFont()
font.setFamily("Chilanka")
font.setPointSize(24)
font.setItalic(True)
item.setFont(font)
brush = QtGui.QBrush(QtGui.QColor(85, 85, 255))
brush.setStyle(QtCore.Qt.Dense6Pattern)
item.setBackground(brush)
self.listWidget_3.addItem(item)
self.listWidget_4 = QtWidgets.QListWidget(self.frame)
self.listWidget_4.setGeometry(QtCore.QRect(110, 300, 291, 211))
self.listWidget_4.setObjectName("listWidget_4")
item = QtWidgets.QListWidgetItem()
font = QtGui.QFont()
font.setFamily("URW Chancery L")
font.setPointSize(16)
font.setBold(True)
font.setItalic(True)
font.setWeight(75)
item.setFont(font)
brush = QtGui.QBrush(QtGui.QColor(85, 0, 127))
brush.setStyle(QtCore.Qt.FDiagPattern)
item.setBackground(brush)
self.listWidget_4.addItem(item)
self.listWidget_5 = QtWidgets.QListWidget(self.frame)
self.listWidget_5.setGeometry(QtCore.QRect(440, 260, 256, 41))
self.listWidget_5.setObjectName("listWidget_5")
item = QtWidgets.QListWidgetItem()
font = QtGui.QFont()
font.setFamily("Chilanka")
font.setPointSize(22)
font.setItalic(True)
item.setFont(font)
brush = QtGui.QBrush(QtGui.QColor(85, 0, 255))
brush.setStyle(QtCore.Qt.Dense6Pattern)
item.setBackground(brush)
self.listWidget_5.addItem(item)
self.listWidget_6 = QtWidgets.QListWidget(self.frame)
self.listWidget_6.setGeometry(QtCore.QRect(430, 40, 261, 41))
self.listWidget_6.setObjectName("listWidget_6")
item = QtWidgets.QListWidgetItem()
font = QtGui.QFont()
font.setFamily("Chilanka")
font.setPointSize(22)
font.setItalic(True)
item.setFont(font)
brush = QtGui.QBrush(QtGui.QColor(85, 0, 255))
brush.setStyle(QtCore.Qt.Dense6Pattern)
item.setBackground(brush)
self.listWidget_6.addItem(item)
self.listWidget_7 = QtWidgets.QListWidget(self.frame)
self.listWidget_7.setGeometry(QtCore.QRect(480, 80, 281, 91))
self.listWidget_7.setObjectName("listWidget_7")
item = QtWidgets.QListWidgetItem()
font = QtGui.QFont()
font.setFamily("URW Chancery L")
font.setPointSize(16)
font.setBold(True)
font.setItalic(True)
font.setWeight(75)
item.setFont(font)
brush = QtGui.QBrush(QtGui.QColor(85, 0, 127))
brush.setStyle(QtCore.Qt.FDiagPattern)
item.setBackground(brush)
self.listWidget_7.addItem(item)
self.listWidget_8 = QtWidgets.QListWidget(self.frame)
self.listWidget_8.setGeometry(QtCore.QRect(480, 300, 301, 181))
self.listWidget_8.setObjectName("listWidget_8")
item = QtWidgets.QListWidgetItem()
font = QtGui.QFont()
font.setFamily("URW Chancery L")
font.setPointSize(16)
font.setBold(True)
font.setItalic(True)
font.setWeight(75)
item.setFont(font)
brush = QtGui.QBrush(QtGui.QColor(170, 170, 255))
brush.setStyle(QtCore.Qt.FDiagPattern)
item.setBackground(brush)
self.listWidget_8.addItem(item)
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 25))
self.menubar.setObjectName("menubar")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.pushButton.clicked.connect(self.quit)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "Help"))
self.pushButton.setText(_translate("MainWindow", "Close"))
__sortingEnabled = self.listWidget.isSortingEnabled()
self.listWidget.setSortingEnabled(False)
item = self.listWidget.item(0)
item.setText(_translate("MainWindow", "Technology Used:"))
self.listWidget.setSortingEnabled(__sortingEnabled)
__sortingEnabled = self.listWidget_2.isSortingEnabled()
self.listWidget_2.setSortingEnabled(False)
item = self.listWidget_2.item(0)
item.setText(_translate("MainWindow", "Python\n"
"Image processing\n"
"Machine learning\n"
"Deep learning \n"
""))
self.listWidget_2.setSortingEnabled(__sortingEnabled)
__sortingEnabled = self.listWidget_3.isSortingEnabled()
self.listWidget_3.setSortingEnabled(False)
item = self.listWidget_3.item(0)
item.setText(_translate("MainWindow", "Features:"))
self.listWidget_3.setSortingEnabled(__sortingEnabled)
__sortingEnabled = self.listWidget_4.isSortingEnabled()
self.listWidget_4.setSortingEnabled(False)
item = self.listWidget_4.item(0)
item.setText(_translate("MainWindow", "Factors that our program takes into\n"
"consideration include, the usage of a\n"
"mobile phone, eating and drinking,\n"
"conversation with co-passengers,\n"
"self-grooming, reading or watching\n"
"videos and adjusting the radio or\n"
"music player."))
self.listWidget_4.setSortingEnabled(__sortingEnabled)
__sortingEnabled = self.listWidget_5.isSortingEnabled()
self.listWidget_5.setSortingEnabled(False)
item = self.listWidget_5.item(0)
item.setText(_translate("MainWindow", "Working"))
self.listWidget_5.setSortingEnabled(__sortingEnabled)
__sortingEnabled = self.listWidget_6.isSortingEnabled()
self.listWidget_6.setSortingEnabled(False)
item = self.listWidget_6.item(0)
item.setText(_translate("MainWindow", "Data Set:"))
self.listWidget_6.setSortingEnabled(__sortingEnabled)
__sortingEnabled = self.listWidget_7.isSortingEnabled()
self.listWidget_7.setSortingEnabled(False)
item = self.listWidget_7.item(0)
item.setText(_translate("MainWindow", "Since, creating our own data set\n"
"would be a tough task, we prefer a\n"
"pre-developed data set."))
self.listWidget_7.setSortingEnabled(__sortingEnabled)
__sortingEnabled = self.listWidget_8.isSortingEnabled()
self.listWidget_8.setSortingEnabled(False)
item = self.listWidget_8.item(0)
item.setText(_translate("MainWindow", "While driving, the driver\'s behaviour\n"
"is continuously monitored through\n"
"2-D pictures clicked by a camera\n"
"placed on the dashboard, and the\n"
"driver is immediately notified if\n"
"he/she is found to be distracted."))
self.listWidget_8.setSortingEnabled(__sortingEnabled)
def quit(self):
sys.exit()
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
| singhv1shal/Driver-Safety-Interface | Help.py | Help.py | py | 9,670 | python | en | code | 1 | github-code | 13 |
7455493930 | from pants.backend.project_info.dependents import Dependents, DependentsRequest
from pants.engine.addresses import Addresses
from pants.engine.internals.selectors import Get, MultiGet
from pants.engine.rules import collect_rules, rule
from pants.engine.target import (
Dependencies,
DependenciesRequest,
HydratedSources,
HydrateSourcesRequest,
SourcesField,
Targets,
TransitiveTargets,
TransitiveTargetsRequest,
)
from depgraph.backend.structs import (
AddressWithFilter,
GraphDataDeps,
GraphDataRequest,
GraphDataReverseDeps,
SourceFiles,
)
PYTHON_SOURCE_CODE_TARGETS = (
"python_source",
"python_sources",
"python_test",
"python_tests",
"python_test_utils",
)
@rule(
desc="Get source files from an address, optionally keeping only targets of source code nature."
)
async def get_source_files_from_address_with_filter(
address_with_filter: AddressWithFilter,
) -> SourceFiles:
files: list[str] = []
targets = await Get(Targets, Addresses([address_with_filter.address]))
target = targets[0]
if target.alias in ("file", "resource", *PYTHON_SOURCE_CODE_TARGETS):
all_sources = await Get(HydratedSources, HydrateSourcesRequest(target.get(SourcesField)))
files.extend(all_sources.snapshot.files)
elif not address_with_filter.sources_only:
files.append(str(target.address))
return SourceFiles(files)
@rule(desc="Get dependents data out of the dependency graph.")
async def get_dependents(graph_data_request: GraphDataRequest) -> GraphDataReverseDeps:
target_to_deps: dict[str, list[str]] = {}
for target in graph_data_request.targets:
total_files: set[str] = set()
dependents = await Get(
Dependents,
DependentsRequest(
(target.address,),
transitive=graph_data_request.transitive,
include_roots=False,
),
)
results = await MultiGet(
Get(
SourceFiles,
AddressWithFilter,
AddressWithFilter(address=d, sources_only=graph_data_request.sources_only),
)
for d in dependents
)
for result in results:
total_files.update(result)
target_filepath_or_address = next(
iter(
await Get(SourceFiles, AddressWithFilter, AddressWithFilter(address=target.address))
)
)
target_to_deps[target_filepath_or_address] = sorted(total_files)
return GraphDataReverseDeps(data=target_to_deps)
@rule(desc="Get dependencies data out of the dependency graph.")
async def get_dependencies(graph_data_request: GraphDataRequest) -> GraphDataDeps:
    # To analyze only targets that can have dependencies, filter with: if t.has_fields([Dependencies])
    valid_targets = list(graph_data_request.targets)
target_to_deps: dict[str, list[str]] = {}
for target in valid_targets:
total_files: list[str] = []
if graph_data_request.transitive:
transitive_deps_request_result = await Get(
TransitiveTargets, TransitiveTargetsRequest([target.address])
)
targets = transitive_deps_request_result.dependencies
else: # direct dependencies only
targets = await Get(
Targets, # type: ignore
DependenciesRequest(target.get(Dependencies)),
)
# ignore non-sources targets, if requested
relevant_targets = (
(t for t in targets if t.alias in PYTHON_SOURCE_CODE_TARGETS)
if graph_data_request.sources_only
else (t for t in targets)
)
results = await MultiGet(
Get(SourceFiles, AddressWithFilter, AddressWithFilter(address=d.address))
for d in relevant_targets
)
for result in results:
total_files.extend(result)
# convert object to filepath, if applicable
target_filepath_or_address = next(
iter(
await Get(SourceFiles, AddressWithFilter, AddressWithFilter(address=target.address))
)
)
target_to_deps[target_filepath_or_address] = sorted(total_files)
return GraphDataDeps(data=target_to_deps)
def rules():
return collect_rules()
| AlexTereshenkov/pants-dep-graph | src/depgraph/backend/rules.py | rules.py | py | 4,356 | python | en | code | 1 | github-code | 13 |
29338434113 | import random
from aqt import mw
from aqt.qt import *
from anki.hooks import addHook, runHook
from anki.utils import intTime
from .config import *
ADDON_NAME='3ft_Under'
class ThreeFeetUnder:
def __init__(self):
self.config=Config(ADDON_NAME)
addHook(ADDON_NAME+'.configLoaded', self.onConfigLoaded)
self.setupMenu()
def setupMenu(self):
menu=None
for a in mw.form.menubar.actions():
if '&Study' == a.text():
menu=a.menu()
# menu.addSeparator()
break
if not menu:
menu=mw.form.menubar.addMenu('&Study')
qact=QAction("Bury 3ft Under", mw)
qact.triggered.connect(self.checkStats)
menu.addAction(qact)
def onConfigLoaded(self):
if self.config.get('auto_bury_on_startup', True):
self.checkStats()
def checkStats(self):
use_mod=self.config.get('use_modification_time',False)
scan_days=self.config.get('scan_days',3)
mod_cutoff=mw.col.sched.dayCutoff-(86400*scan_days)
if use_mod:
sql="mod > %d" % mod_cutoff
else:
cid_cutoff=mod_cutoff*1000 #convert to cid time
sql="id > %d" % cid_cutoff
newCards=mw.col.db.list("""
select id from cards where type=0 and
queue=0 and odid=0 and %s"""%sql)
if newCards:
self.toBury(newCards)
mw.reset() #update view
def toBury(self, cids):
mw.checkpoint(_("Bury 3ft Under"))
mw.col.db.executemany("""
update cards set queue=-2,mod=%d,usn=%d where id=?"""%
(intTime(), mw.col.usn()), ([i] for i in cids))
mw.col.log(cids)
tfu=ThreeFeetUnder()
| lovac42/3ft_Under | src/three_ft_Under/tft_under.py | tft_under.py | py | 1,722 | python | en | code | 0 | github-code | 13 |
4262660114 | import os
class BatchRename():
    '''
    Batch-rename the image files in a folder
    '''
    def __init__(self):
        self.path = r'/running_saved/fabric_shortcut'  # folder containing the files to rename
        self.save_path = r'/running_saved/fabric_video'  # folder that receives the renamed images
    def rename(self):
        filelist = os.listdir(self.path)  # list the files in the source folder
        total_num = len(filelist)  # number of files
        i = 0  # renamed files are numbered sequentially starting from 0
        for item in filelist:
            print(item)
            if item.endswith('.jpg'):  # source images are .jpg; adjust the extension check for .png or other formats
                src = os.path.join(os.path.abspath(self.path), item)  # full path of the current image
                dst = os.path.join(os.path.abspath(self.save_path), str(i) + '.jpg')  # destination path and new name; adjust as needed
                try:
                    os.rename(src, dst)
                    print('converting %s to %s ...' % (src, dst))
                    i = i + 1
                except OSError:
                    continue
        print('total %d to rename & converted %d jpgs' % (total_num, i))
if __name__ == '__main__':
# demo = BatchRename()
# demo.rename()
    import os
    path = r'/running_saved/fabric_video'  # image folder
    for file in os.listdir(path):
        if os.path.isfile(os.path.join(path, file)):
            fname, ext = os.path.splitext(file)
            on = os.path.join(path, file)
            nn = os.path.join(path, str(fname).zfill(6) + ext)  # zero-pad the name to 6 digits; change the width as needed
            os.rename(on, nn)
| linhuaizhou/yida_gedc_fabric4show | frame_processing/rename_demo.py | rename_demo.py | py | 1,881 | python | zh | code | 1 | github-code | 13 |
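The zero-padded sequential renaming both halves of the script perform can be exercised safely against a temporary directory; `pad_rename` is an illustrative helper for this sketch, and the six-digit width mirrors the `zfill(6)` call above:

```python
import os
import tempfile

def pad_rename(folder, width=6, ext=".jpg"):
    # Give every matching file a zero-padded sequential name: 000000.jpg, 000001.jpg, ...
    i = 0
    for name in sorted(os.listdir(folder)):
        if name.endswith(ext):
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, str(i).zfill(width) + ext))
            i += 1

with tempfile.TemporaryDirectory() as d:
    for name in ("b.jpg", "a.jpg", "c.txt"):
        open(os.path.join(d, name), "w").close()
    pad_rename(d)
    renamed = sorted(os.listdir(d))
print(renamed)  # ['000000.jpg', '000001.jpg', 'c.txt']
```

Iterating over a snapshot from `os.listdir` before any renames happen avoids re-visiting files under their new names mid-loop.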
17726161412 | from __future__ import annotations
from cell import Cell
class LinkedList:
def __init__(self):
sentinel = Cell(None, None, None)
sentinel.next = sentinel
sentinel.prev = sentinel
self.size = 0
self.sentinel = sentinel
def is_empty(self):
return self.size == 0
def __len__(self):
return self.size
def head(self):
if self.is_empty():
return None
return self.sentinel.next
def tail(self):
if self.is_empty():
return None
return self.sentinel.prev
def __str__(self):
s = ""
c = self.sentinel
for i in range(len(self)):
c = c.next
            s += str(c.value) + " "
return s[:-1]
def lookup(self, item: int):
c = self.sentinel
for i in range(len(self)):
c = c.next
if c.value == item:
return c
return None
def cell_at(self, index: int):
if index >= len(self):
raise IndexError("Error, index is too big")
c = self.sentinel
for i in range(index + 1):
c = c.next
return c
def get(self, idx: Cell):
c = self.sentinel
while c.next != idx:
c = c.next
if c == self.sentinel:
raise IndexError("Cell not in the list")
return c.next.value
def set(self, idx: Cell, item: int):
c = self.sentinel
while c.next != idx:
c = c.next
if c == self.sentinel:
raise IndexError("Cell not in the list")
c.next.value = item
return self
def insert(self, item: int, neighbor: Cell, after: bool = True):
c = self.sentinel
while c.next != neighbor:
c = c.next
if c == self.sentinel:
raise IndexError("Cell not in the list")
# Arrived to the cell before the neighbor
if not after:
new = Cell(item, neighbor, c)
c.next = new
new.next.prev = new
else:
c = c.next.next # Go to the cell after the neighbor
new = Cell(item, c, neighbor)
c.prev = new
new.prev.next = new
self.size += 1
return self
def append(self, item: int):
return self.insert(item, self.sentinel.prev)
def prepend(self, item: int):
return self.insert(item, self.sentinel.next, False)
def remove(self, cell: Cell):
c = self.sentinel
while c.next != cell:
c = c.next
if c == self.sentinel:
raise IndexError("Cell not in the list")
c.next = c.next.next
c.next.prev = c
self.size -= 1
return self
def extend(self, l: LinkedList):
if not l.is_empty():
if self.is_empty():
self = l
else:
c = self.tail() # Take last cell of l1
c.next = l.head() # Its last cell is the first of l2
l.head().prev = c # The previous value of the first cell of l2 is now the last of l1
l.tail().next = (
self.sentinel
) # The next value of the last cell of l2 is now the sentinel of l1
self.sentinel.prev = (
l.tail()
) # The previous value of the sentinel of l1 is now the last cell of l2
self.size += len(l) # Change the size and it's all good :)
return self
def reverse(self, k: int):
if k > len(self) or k <= 0:
raise IndexError("Index error")
for i in range(k // 2):
self.swap(i, k - i - 1)
return self
def swap(self, i: int, j: int):
if i > j:
return self.swap(j, i)
before_c1 = self.cell_at(i - 1)
after_c1 = self.cell_at(i + 1)
before_c2 = self.cell_at(j - 1)
        try:
            after_c2 = self.cell_at(j + 1)
        except IndexError:
            # j is the last index, so the cell after it is the sentinel
            after_c2 = self.sentinel
c1 = self.cell_at(i)
c2 = self.cell_at(j)
if i + 1 != j: # If i and j are not consecutive
c1.next, c1.prev, c2.next, c2.prev = c2.next, c2.prev, c1.next, c1.prev
before_c1.next = c2
after_c1.prev = c2
before_c2.next = c1
after_c2.prev = c1
else:
c1.next, c1.prev, c2.next, c2.prev = c2.next, c2, c1, c1.prev
before_c1.next = c2
after_c2.prev = c1
return self
if __name__ == "__main__":
l = LinkedList()
l.append(4)
l.append(5)
l.prepend(1)
print(l)
l = l.reverse(3)
print(l)
| Inkkonu/PolytechClasses | S5_Algorithmic/pw4/linked_list.py | linked_list.py | py | 4,751 | python | en | code | 1 | github-code | 13 |
32650445416 | from flask import Flask, jsonify, abort, request, render_template
from flask_sqlalchemy import SQLAlchemy
#from flask_restless import APIManager
#from flask_restful import Api
#api = Api(app)
app = Flask(__name__)
app.config.from_object('config')
db = SQLAlchemy(app)
import models
@app.route('/dev/<string:id>/')
def get_dev(id):
    dev = models.iotDevice.query.get(id)
    if dev is None:
        abort(404)
    return jsonify({'dev': dev.deviceNum})
@app.route('/dev/iotDevice/', methods = ['POST'])
def create_dev():
if not request.json or not 'uName' in request.json:
abort(400)
print(request.json['deviceNum'])
dev = models.iotDevice(request.json['uName'], request.json['deviceNum'])
devBackup = models.iotDeviceBackup(request.json['uName'], request.json['deviceNum'])
db.session.add(dev)
db.session.add(devBackup)
db.session.commit()
devAdded = models.iotDevice.query.filter_by(userName=dev.userName).first()
return jsonify( { 'iotDevice': devAdded.deviceNum }) , 201
if __name__ == '__main__':
app.run(debug = True)
| ray-x/flaskMultiDb | app.py | app.py | py | 1,016 | python | en | code | 0 | github-code | 13 |
39431862680 | # Access Websites for Price and Items
# Calculate list of User Items (shopping list)
# Return Prices and highlight Cheapest Store
import requests
from bs4 import BeautifulSoup
# Get the user's shopping list (comma-separated)
shopping = input('Please enter your items: ')
# Split the input into a list of lower-cased items
shopping_list = [item.strip().lower() for item in shopping.split(',')]
print(shopping_list)
# Scrape data for users items
if __name__ == '__main__':
response = requests.get(' ***URL HERE***')
response.raise_for_status()
html = BeautifulSoup(response.text, 'html.parser')
print(html)
| ChrisQuestad/Code_Guild_Labs | Python/Grocery_App.py | Grocery_App.py | py | 605 | python | en | code | 0 | github-code | 13 |
72641498899 | import mysql.connector
mydb = mysql.connector.connect(
host="localhost",
port="3306",
user="root",
password="yous1/2*3-LOLIl",
database="mydatabase"
)
cursor = mydb.cursor()
cursor.execute("DROP DATABASE IF EXISTS mydatabase; CREATE DATABASE mydatabase;")
cursor.execute(" SHOW DATABASES ")
for x in cursor:
print(x)
cursor.execute("""
CREATE TABLE customers(
name VARCHAR(255),
address VARCHAR(255)
)
""")
cursor.execute("""
SHOW TABLES;
""")
cursor.execute("""
ALTER TABLE customers ADD COLUMN id INT AUTO_INCREMENT PRIMARY KEY
""")
insert_client = """ INSERT INTO customers (name,address) VALUES (%s,%s) """
val = ("Youssef","Ecole Drissia, Rue A, Idrissia 2")
cursor.execute(insert_client,val)
mydb.commit()
print(cursor.rowcount,"record inserted")
insert_client = """ INSERT INTO customers (name,address) VALUES (%s,%s) """
val = [("Youssef","Ecole Drissia, Rue A, Idrissia 2"),("Ayoub","Derb Sadni")]
cursor.executemany(insert_client,val)
mydb.commit()
print(cursor.rowcount,"record inserted , ID : ", cursor.lastrowid)
cursor.execute(""" SELECT * FROM customers Where address ='Derb Sadni'""")
customers = cursor.fetchall()
for customer in customers:
print(customer)
cursor.execute(""" SELECT * FROM customers WHERE address LIKE '%d%'""")
customers = cursor.fetchall()
for customer in customers:
print(customer)
afficher = """ SELECT * FROM customers WHERE address = %s ORDER BY id DESC"""
val = ("Derb Sadni",)
cursor.execute(afficher,val)
customers = cursor.fetchall()
for customer in customers:
print(customer)
| YoussefJemmane/ENSA | Python/TPs/TP5/EX1.py | EX1.py | py | 1,610 | python | en | code | 0 | github-code | 13 |
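The parameterized `execute`/`executemany` pattern above is driver-agnostic; the same flow can be sketched with the standard-library `sqlite3` module (whose placeholder style is `?` rather than MySQL's `%s`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, address TEXT)"
)

# Single insert with a parameter tuple (never interpolate values into the SQL string)
cur.execute("INSERT INTO customers (name, address) VALUES (?, ?)",
            ("Youssef", "Ecole Drissia, Rue A, Idrissia 2"))

# Bulk insert with executemany
rows = [("Youssef", "Ecole Drissia, Rue A, Idrissia 2"), ("Ayoub", "Derb Sadni")]
cur.executemany("INSERT INTO customers (name, address) VALUES (?, ?)", rows)
conn.commit()

cur.execute("SELECT name FROM customers WHERE address = ? ORDER BY id DESC",
            ("Derb Sadni",))
print(cur.fetchall())  # [('Ayoub',)]
```

Passing values as a separate tuple lets the driver handle quoting and escaping, which is also why the MySQL version above builds its queries the same way.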
38580542347 | #! /usr/bin/python3
# -*- coding: utf-8 -*-
from flask import Flask, render_template, send_file, request, jsonify, send_from_directory
import os, subprocess, time, threading
from flask_socketio import SocketIO, send, emit
import requests, logging, random, sys
import matplotlib as mpl
import matplotlib.pyplot as plt
from flask_cors import CORS
from GraphsForResourceToPNG import *
from GameTimeDemon import *
from ResourseTrand import *
from PlayersAPI import *
from TradeRequestPlayersListClass import *
from DataBaseAPIandLog import *
app = Flask(__name__, template_folder="templates")
CORS(app)
sio = SocketIO(app)
app_log = logging.getLogger('werkzeug')
file_handler = logging.FileHandler('log/flask.log', 'w')
app_log.addHandler(file_handler)
app_log.setLevel(logging.INFO)
def DailyCycleUpdates():
while 1:
today = GameTime.GameDate()
while(today == GameTime.GameDate()): time.sleep(0.1)
TradeRequestPlayersList.setGameDate(GameTime.GameDate())
status = Players.setGameDate()
gold = Gold.getNewTrand()
wood = Wood.getNewTrand()
rock = Rock.getNewTrand()
Graph = Graphs.NewElement(gold, wood, rock, GameTime.GameDateMini())
with app.test_request_context('/'):
sio.emit("market_and_table_online_and_gametime",
{"money": str(gold), "wood": str(wood), "rock": str(rock),
"status": status,
"gametime": GameTime.GameDate(),
"graph": f"data:image/png;base64,{Graph}"},
broadcast=True)
time.sleep(1)
def SituationalUpdates():
while 1:
if(Log.getChanged()):
with app.test_request_context('/'):
sio.emit("get_log", {"log": Log.LogRead(10)}, broadcast=True)
Log.setChanged(False)
if(TradeRequestPlayersList.getChanged()):
with app.test_request_context('/'):
sio.emit("trade_players_list", {"tradeplayerlist": TradeRequestPlayersList.GetList()}, broadcast=True)
TradeRequestPlayersList.setChanged(False)
for i in Players.Players:
if(i.getChanged()):
with app.test_request_context('/'):
emit("status_player", {
"Money": i.Money,
"Gold": i.Gold,
"Wood": i.Wood,
"Rock": i.Rock},
room=i.sid, namespace='/')
#print(i.name, "SEND STATUS")
i.setChanged(False)
time.sleep(1)
TradeRequestPlayersList = TradeRequestPlayersListClass()
GameTime = GameTimeServerClass(k=4096)
Graphs = GraphsClass(10)
Gold = ResourseTrand(GameTime.GameDate())
Wood = ResourseTrand(GameTime.GameDate())
Rock = ResourseTrand(GameTime.GameDate())
Log = DataBaseAPI()
clear_log = False
clear_users = False
for i in sys.argv:
if(i.find("--clear_users") != -1 or i.find("-cu") != -1): clear_users = True
elif(i.find("--clear_log") != -1 or i.find("-cl") != -1): clear_log = True
Players = PlayersClass(10, clear_users, clear_log)
demon = threading.Thread(target=DailyCycleUpdates)
demon.daemon = True
demon.start()
demon2 = threading.Thread(target=SituationalUpdates)
demon2.daemon = True
demon2.start()
@sio.on('send_sid')
def send_sid(name):
id_sid = Players.getIdPlayerForName(name)
Players.Players[id_sid].sid = request.sid
print("{} Зашёл в игру".format(name))
Log.LogWrite("В игру вошёл {}".format(name), color="green")
Players.Players[id_sid].online = "Online"
@sio.on('disconnect')
def disconnect():
try:
id_sid = Players.getIdPlayerForSId(request.sid)
print("{0} Вышел из игры".format(Players.Players[id_sid].name))
Log.LogWrite("Из игры вышел {}".format(name), color="green")
Players.Players[id_sid].online = time.strftime("%X %d.%m.%Y", time.gmtime(time.time()))
except BaseException: pass
@app.route("/")
def index(): return render_template("index.html")
@app.route('/favicon.ico')
def favicon():
return send_from_directory(os.path.join(app.root_path, 'static'), 'favicon.ico', mimetype='image/vnd.microsoft.icon')
@app.route("/send_to_log", methods=["POST"])
def send_to_log():
Log.LogWrite(request.form.get("message"), "message", request.form.get("login"))
return jsonify({})
@app.route("/get_tradeplayerlist", methods=["POST"])
def get_tradeplayerlist():
if(request.form.get("type") == "approv"):
TradeRequest = (TradeRequestPlayersList.GetLine(request.form.get("id")))[0]
Player1 = Players.getIdPlayerForName(TradeRequest[2])
Player2 = Players.getIdPlayerForName(request.form.get("login"))
if(TradeRequest[3] == "Sale"):
Player_tmp = Player1
Player1 = Player2
Player2 = Player_tmp
        result = False
        if(TradeRequest[4] == "Gold" and Players.Players[Player2].Gold >= TradeRequest[5]):
            Players.Players[Player2].Gold -= TradeRequest[5]
            Players.Players[Player1].Gold += TradeRequest[5]
            result = True
        elif(TradeRequest[4] == "Wood" and Players.Players[Player2].Wood >= TradeRequest[5]):
            Players.Players[Player2].Wood -= TradeRequest[5]
            Players.Players[Player1].Wood += TradeRequest[5]
            result = True
        elif(TradeRequest[4] == "Rock" and Players.Players[Player2].Rock >= TradeRequest[5]):
            Players.Players[Player2].Rock -= TradeRequest[5]
            Players.Players[Player1].Rock += TradeRequest[5]
            result = True
        if(result):
            # Transfer the money only after the resource transfer has succeeded
            Players.Players[Player2].Money += int(TradeRequest[6])*int(TradeRequest[5])
            Players.Players[Player1].Money -= int(TradeRequest[6])*int(TradeRequest[5])
            TradeRequestPlayersList.DeleteLine(TradeRequest[0])
            Log.LogWrite("Player {0} and player {1} closed a deal for {2} coins"
                .format(Players.Players[Player1].name, Players.Players[Player2].name, TradeRequest[6]), "game")
elif(request.form.get("type") == "close"):
TradeRequestPlayersList.DeleteLine(request.form.get("id"))
        Log.LogWrite("Player {0} cancelled their offer"
        .format(request.form.get("login")), "game")
return jsonify({})
@app.route("/user_status_or_trade", methods=["POST"])
def Trade():
login = request.form.get("login")
password = request.form.get("password")
typeRequest = request.form.get("type")
RequestPlayer = Players.getPlayerForName(login)
Players.setActive(login)
typeResource = request.form.get("typeResource")
typeTransaction = request.form.get("typeTransaction")
Quantity = request.form.get("Quantity")
if(typeRequest == "trade with market"):
if(typeResource == "Gold"): Price = Gold.getTrand()
elif(typeResource == "Wood"): Price = Wood.getTrand()
elif(typeResource == "Rock"): Price = Rock.getTrand()
Players.TradingWithMarket(login, typeTransaction, typeResource, int(Quantity), Price)
elif(typeRequest == "trade with players"):
Price = request.form.get("Price")
TradeRequestPlayersList.AppendToList(login, typeTransaction, typeResource, int(Quantity), int(Price))
        Log.LogWrite("Player {0} posted a trade offer for {1} coins"
        .format(request.form.get("login"), Price), "game")
return jsonify({})
@app.route("/login", methods=["POST"])
def Login():
login = request.form.get("login")
password = request.form.get("password")
LoginPlayer = Player(None, login, password, request.remote_addr)
LoginUserWithUniqueName = Players.CheckUserUniquenessName(LoginPlayer)
if(LoginUserWithUniqueName):
Players.append(LoginPlayer)
print(Players)
return jsonify({"answer": "LOGINOK"})
elif(not LoginUserWithUniqueName):
LoginUserWithValidationPassword = Players.PasswordValidation(LoginPlayer)
if(not LoginUserWithValidationPassword): return jsonify({"answer": open('SystemMessages/NameBusy.txt').read()})
else: return jsonify({"answer": "LOGINOK"})
sio.run(app, host="0.0.0.0", port=5001)
#app.run(host='0.0.0.0', port=5001, debug=False)
| StepanovPlaton/WarTrade | run.py | run.py | py | 8,394 | python | en | code | 0 | github-code | 13 |
21245668641 | import math
EPSILON = 1e-08
def solve(a: float, b: float, c: float) -> tuple[float, float] | tuple[None, None]:
    """Compute the roots of a quadratic equation.
    a•x^2 + b•x + c = 0
    Args:
        a, b, c: Coefficients of the quadratic equation.
    Raises:
        ValueError: Argument a equals 0.
        TypeError: At least one argument is not of type float.
    Returns:
        The roots x1 and x2 of the quadratic equation.
    """
if not all(isinstance(arg, float) for arg in (a, b, c)):
raise TypeError("all arguments must be float")
    if abs(0. - a) <= EPSILON:
        raise ValueError("argument 'a' can't be equal to 0")
    d = b ** 2 - 4 * a * c
    if d < -EPSILON:
        return (None, None)
    d = max(d, 0.0)  # clamp tiny negative discriminants before sqrt
    return (
        (-b + math.sqrt(d)) / (2 * a),
        (-b - math.sqrt(d)) / (2 * a),
    )
| vakhet/otus_architecture_and_patterns | module_03/quadratic_equation.py | quadratic_equation.py | py | 982 | python | ru | code | 0 | github-code | 13 |
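A quick check of the corrected root formula — a self-contained sketch (the coefficients below are made up for illustration; note the division by `(2 * a)` rather than the precedence bug `/ 2 * a`):

```python
import math

def solve(a, b, c, eps=1e-8):
    """Roots of a*x^2 + b*x + c = 0; (None, None) if they are complex."""
    if abs(a) <= eps:
        raise ValueError("'a' must be non-zero")
    d = b * b - 4 * a * c
    if d < -eps:
        return (None, None)
    d = max(d, 0.0)                      # clamp tiny negatives before sqrt
    root = math.sqrt(d)
    # parentheses matter: divide by (2 * a), don't multiply by a
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(solve(2.0, -10.0, 12.0))  # roots 3.0 and 2.0
```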
11427206218 | from collections import deque
monsters = deque(int(x) for x in input().split(','))
soldier = [int(x) for x in input().split(',')]
counter = 0
while monsters and soldier:
current_armour = monsters.popleft()
current_strike = soldier.pop()
if current_strike >= current_armour:
counter += 1
current_strike -= current_armour
if soldier:
soldier[-1] += current_strike
elif not soldier and current_strike > 0:
soldier.append(current_strike)
else:
current_armour -= current_strike
monsters.append(current_armour)
if not monsters:
print("All monsters have been killed!")
if not soldier:
print("The soldier has been defeated.")
print(f"Total monsters killed: {counter}") | KrisKov76/SoftUni-Courses | python_advanced_09_2023/00_python_advanced_exam/01. Monster Extermination.py | 01. Monster Extermination.py | py | 760 | python | en | code | 0 | github-code | 13 |
7198015115 | from collections import OrderedDict
from matcher.allocation import Allocation
from matcher.exceptions import BadRequestException
from datetime import datetime
MAX_RESERVED_AMOUNT = 25000.00
MIN_RESERVED_AMOUNT = 5.00
MAX_MATCHED_AMOUNT = 25000.00
MIN_MATCHED_AMOUNT = 5.00
MIN_DONATION_AMOUNT = 5.00
MAX_DONATION_AMOUNT = 25000.00
# Valid statuses
RESERVED = "Reserved"
COLLECTED = "Collected"
EXPIRED = "Expired"
class FundMatcher(object):
def __init__(self, match_funds):
"""
Core algorithm to match donation to matchfunds
Instantiated with an ordered dict of match funds
"""
match_funds.sort(reverse=False, key=lambda x: x.match_order)
self.match_funds = OrderedDict([(mf.match_fund_id, mf) for mf in match_funds])
self.allocation_state = {}
def get_match_funds_as_list(self):
return list(self.match_funds.values())
def reserve_funds(self, donation):
"""
        Method takes a Donation object and reserves this donation against match funds.
        Depending on the funds available in the match fund, the donation is either fully
        or partially matched, or not matched at all.
        The match_fund_state is updated with the result.
        Assumes that the donation id of a donation is unique.
"""
donation_balance = donation.amount
allocations = []
for match_fund in self.match_funds.values():
matching_amount_required = donation_balance * (match_fund.matching_ratio_as_float_multiplier)
if match_fund.total_amount == 0:
# fund is exhausted - move on to next match_fund
continue
if match_fund.total_amount >= matching_amount_required:
# full match
allocation = Allocation(match_fund.match_fund_id, matching_amount_required, RESERVED)
allocations.append(allocation)
donation_balance = 0
match_fund.total_amount -= matching_amount_required
break
else:
# partial match
matched_allocated_amount = matching_amount_required - match_fund.total_amount
allocation = Allocation(match_fund.match_fund_id, match_fund.total_amount, RESERVED)
allocations.append(allocation)
donation_balance -= (matching_amount_required - matched_allocated_amount) / match_fund.matching_ratio_as_float_multiplier
match_fund.total_amount = 0
if donation_balance == 0:
# donation has been matched completely - break from for
break
allocation_state_doc = {
'allocations': allocations,
'created_time': datetime.now(),
'updated_time': datetime.now(),
'original_donation': donation.amount,
'donation_balance_unmatched': donation_balance,
'overall_status': RESERVED
}
self.allocation_state[donation.donation_id] = allocation_state_doc
def collect_donation(self, donation_id):
"""
Collect a donation that is set to Reserved
        Sets the status of all allocations against that donation to Collected.
Throws errors if the allocation status is not Reserved
"""
# Is donation_id valid?
if not donation_id in self.allocation_state:
raise BadRequestException("Invalid donation id %s" % donation_id)
allocations = self.allocation_state[donation_id]['allocations']
# Ensure that only allocation status of RESERVED are COLLECTED
for allocation in allocations:
if not allocation.status == RESERVED:
raise BadRequestException("Invalid collection request. Allocation is not reserved")
else:
allocation.status = COLLECTED
self.allocation_state[donation_id]['allocations'] = allocations
self.allocation_state[donation_id]['updated_time'] = datetime.now()
self.allocation_state[donation_id]['overall_status'] = COLLECTED
def expire_donation(self, donation_id):
"""
Expire a donation
        Finds the donation based on id, checks to see if it is RESERVED.
If it is, then set the status to EXPIRED and return the matched funds
so that new donations can use them.
Note: the instructions mention not to do this for previously collected donations - but
not new donations.
"""
if donation_id not in self.allocation_state:
raise BadRequestException("Invalid donation_id %s" % donation_id)
allocations = self.allocation_state[donation_id]['allocations']
for allocation in allocations:
if not allocation.status == RESERVED:
raise BadRequestException("Invalid collection request. Allocation is not reserved")
else:
allocation.status = EXPIRED
allocation_match_fund = self.match_funds[allocation.match_fund_id]
allocation_match_fund.total_amount += allocation.match_fund_allocation
self.match_funds[allocation.match_fund_id] = allocation_match_fund
self.allocation_state[donation_id]['allocations'] = allocations
self.allocation_state[donation_id]['updated_time'] = datetime.now()
self.allocation_state[donation_id]['overall_status'] = EXPIRED
def list_match_fund_allocations(self):
"""
Only list RESERVED or COLLECTED allocations
Returns the allocations
"""
all_allocations = []
for donation_id in self.allocation_state.keys():
if self.allocation_state[donation_id]['overall_status'] != EXPIRED:
output_doc = {
'donation_id': donation_id,
**self.allocation_state[donation_id]
}
allocations = [a.to_dict() for a in self.allocation_state[donation_id]['allocations']]
output_doc['allocations'] = allocations
all_allocations.append(output_doc)
return all_allocations | TClark000/fund-matching-pytest | matcher/fund_matcher.py | fund_matcher.py | py | 6,170 | python | en | code | 0 | github-code | 13 |
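The heart of `reserve_funds` is proportional allocation across an ordered list of funds. A stripped-down, dependency-free sketch of that arithmetic (fund names, balances and ratios below are invented, and plain lists stand in for the repo's `MatchFund`/`Allocation` classes):

```python
def reserve(donation, funds):
    """funds: ordered list of [name, balance, ratio]; mutated in place.
    Returns (allocations, unmatched donation balance)."""
    allocations = []
    balance = donation
    for fund in funds:
        name, available, ratio = fund
        if available == 0:
            continue                       # fund exhausted, try the next one
        required = balance * ratio
        if available >= required:          # full match
            allocations.append((name, required))
            fund[1] -= required
            balance = 0
            break
        # partial match: consume the fund, reduce the balance accordingly
        allocations.append((name, available))
        balance -= available / ratio
        fund[1] = 0
    return allocations, balance

funds = [["A", 50.0, 1.0], ["B", 100.0, 1.0]]
print(reserve(80.0, funds))  # fund A covers 50, fund B the remaining 30
```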
36933266943 | import email.utils
def users(user):
    # username must start with a letter; '_', '-' and '.' are the only
    # non-alphanumeric characters allowed after it
    if not user[0].isalpha():
        return False
    user = user.replace('_', '')
    user = user.replace('-', '')
    user = user.replace('.', '')
    return user.isalnum()
n=int(input())
for i in range(n):
mail=input()
temp=email.utils.parseaddr(mail)
domain=temp[1]
if '@' in domain and '.' in domain:
temp=domain.rpartition('@')
user=temp[0]
domain=temp[2]
domain=domain.rpartition('.')
extension=domain[2]
domain=domain[0]
if(domain.isalpha() and extension.isalpha() and len(extension)<=3 and users(user)):
print(mail)
| imhariprakash/Courses | python/hackerrank programs/hackerrank-email-validation-py/main.py | main.py | py | 691 | python | en | code | 4 | github-code | 13 |
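The same username/domain/extension rules can be collapsed into a single regular expression. A sketch that mirrors the checks above (the sample addresses are illustrative):

```python
import re

# letter first, then letters/digits/_-. ; alphabetic domain; extension <= 3 letters
PATTERN = re.compile(r"^[A-Za-z][A-Za-z0-9._-]*@[A-Za-z]+\.[A-Za-z]{1,3}$")

def is_valid(addr):
    return bool(PATTERN.match(addr))

print(is_valid("dexter@hotmail.com"))   # True
print(is_valid("virus!@variable.:p"))   # False
```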
41593880242 | #url: https://www.hackerrank.com/challenges/piling-up/problem
# Enter your code here. Read input from STDIN. Print output to STDOUT
import collections
T = int(input())
for i in range(T):
n = int(input())
x = collections.deque(map(int, input().split()))
while len(x) > 1 and x[0] >= x[1]:
x.popleft()
while len(x) > 1 and x[-1] >= x[-2]:
x.pop()
if len(x) <= 1:
print("Yes")
else:
print("No")
| Huido1/Hackerrank | Python/07 - Collections/08 - Piling Up!.py | 08 - Piling Up!.py | py | 452 | python | en | code | 0 | github-code | 13 |
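The trimming solution above is equivalent to simulating the pile directly: always take the larger of the two ends and require the picks to be non-increasing. A sketch of that simulation:

```python
from collections import deque

def can_pile(cubes):
    d = deque(cubes)
    last = float("inf")          # side length of the cube placed last
    while d:
        # greedily take the larger end
        pick = d.popleft() if d[0] >= d[-1] else d.pop()
        if pick > last:
            return False         # would have to stack a bigger cube on a smaller one
        last = pick
    return True

print(can_pile([4, 3, 2, 1, 3, 4]))  # True
print(can_pile([1, 3, 2]))           # False
```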
42074473059 |
import cx_Oracle
import pandas as pd
import pyodbc
import sqlalchemy as sqla
import datetime as dt
import string
import platform
import os
import logging
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.bash_operator import BashOperator
#from airflow.hooks.mssql_hook import MsSqlHook
from airflow.hooks.oracle_hook import OracleHook
#from airflow.hooks.S3_hook import S3Hook
from airflow.contrib.operators.ssh_operator import SSHOperator
from airflow.operators.email_operator import EmailOperator
from airflow.operators.oracle_operator import OracleOperator
from airflow.operators.dummy_operator import DummyOperator
#from airflow.operators.dagrun_operator import TriggerDagRunOperator
# from airflow.operators.sensors import ExternalTaskSensor
# from airflow.operators.dagrun_operator import TriggerDagRunOperator, DagRunOrder
from airflow.models import DagRun
from airflow.sensors.base_sensor_operator import BaseSensorOperator
from airflow.utils.db import provide_session
from airflow.utils.decorators import apply_defaults
from airflow.utils.state import State
from airflow.sensors.sql import SqlSensor
default_args = {
'owner': 'Lin Wang',
'depends_on_past': False,
'start_date': dt.datetime(2021,9,1),
'email': ['Lin.Wang@vnsny.org','Neha.Teli@vnsny.org','Ripul.Patel@vnsny.org'],
'email_on_failure': True,
'email_on_retry': True,
'retries': 1,
'retry_delay': dt.timedelta(minutes=15),
# 'queue': 'bash_queue',
# 'pool': 'backfill',
# 'priority_weight': 10,
# 'end_date': datetime(2016, 1, 1),
# 'wait_for_downstream': False,
# 'dag': dag,
# 'sla': timedelta(hours=2),
# 'execution_timeout': timedelta(seconds=300),
# 'on_failure_callback': some_function,
# 'on_success_callback': some_other_function,
# 'on_retry_callback': another_function,
# 'sla_miss_callback': yet_another_function,
# 'trigger_rule': 'all_success'
}
def process_fmm(**kwargs):
print(kwargs['mvname'])
#today=dt.datetime.today()
query="""
begin
etl.p_refresh_mv('""" + kwargs['mvname'] + """');
end;
"""
print(query)
# Create a table in Oracle database
try:
sql_hook = OracleHook(oracle_conn_id='CHOICEBI')
conn=sql_hook.get_conn()
cursor = conn.cursor()
# Creating a table srollno heading which is number
#conn.run(query)
cursor.execute(query)
print("Command executed successful")
except cx_Oracle.DatabaseError as e:
print("There is a problem with Oracle", e)
raise;
#
# by writing finally if any error occurs
# then also we can close the all database operation
finally:
if cursor:
cursor.close()
if conn:
conn.close()
dag = DAG(
'CCSS',
default_args=default_args,
catchup=False,
schedule_interval='15 2 * * *', #2.15 AM EST
description='CCSS MATERLIZED VIEWS REFRESH'
)
###############################################################
# Check DW_OWNER.F9_STATS data loading
DUMMY_OPERATOR_All_NODE_In=DummyOperator(
task_id='DUMMY_OPERATOR_All_NODE_In',
dag=dag,
)
# connection
def Insert_script():
hook=OracleHook(oracle_conn_id='CHOICEBI')
conn=hook.get_connection(conn_id='CHOICEBI')
HOST=conn.host
USER=conn.login
PASSWORD=conn.password
SCHEMA =conn.schema
PORT=conn.port
SID="NEXUS2"
engine =sqla.create_engine("oracle://{user}:{password}@{host}:{port}/{sid}".format(user=USER, password=PASSWORD, host=HOST, database=SCHEMA, port=PORT, sid=SID))
oracle_c = engine.connect()
cnt_chk_DAS="""
select count(table_n)
from DW_OWNER.F9_STATS
where trunc(updated) = (trunc(sysdate)-1)
and table_n in('F9_AGENT', 'F9_CALL_LOG', 'F9_CALL_SEGMENT','F9_ACD_QUEUE', 'F9_CONTACT', 'F9_DNIS')
group by trunc(updated)
"""
df_read_oracle_cont=pd.read_sql(cnt_chk_DAS,con=oracle_c)
cnt_chk_q=df_read_oracle_cont.iloc[0,0]
if cnt_chk_q == 6:
DUMMY_OPERATOR_All_NODE_In=DummyOperator(
task_id='DUMMY_OPERATOR_All_NODE_In',
dag=dag,
)
else:
print('source load is not complete')
oracle_c.close()
###############################################################
#Node1 - Task1: Refresh MV_DIM_F9_AGENT
N1T1_MV_DIM_F9_AGENT=PythonOperator(
task_id='N1T1_MV_DIM_F9_AGENT',
python_callable=process_fmm,
op_kwargs={'mvname':'CHOICEBI.MV_DIM_F9_AGENT'},
dag=dag
)
N1T1_MV_DIM_F9_AGENT.set_upstream(DUMMY_OPERATOR_All_NODE_In)
#Node1 - Task2: #REFRESH SKILLS FROM ALL TABLES
N1T2_MERGE_INTO_MV_DIM_F9_SKILL=OracleOperator(
task_id='N1T2_MERGE_INTO_MV_DIM_F9_SKILL',
oracle_conn_id='CHOICEBI',
sql="""
begin MERGE INTO MV_DIM_F9_SKILL A USING
(
select distinct * from
(
SELECT DISTINCT SKILL FROM DW_OWNER.F9_CALL_LOG WHERE SKILL IS NOT NULL
UNION ALL
SELECT DISTINCT SKILL FROM DW_OWNER.F9_ACD_QUEUE WHERE SKILL IS NOT NULL
UNION ALL
SELECT DISTINCT SKILL FROM DW_OWNER.F9_CALL_SEGMENT WHERE SKILL IS NOT NULL
UNION ALL
SELECT DISTINCT SKILL FROM DW_OWNER.F9_AGENT WHERE SKILL IS NOT NULL
union all
SELECT DISTINCT SKILL FROM CHOICEBI.V_DIM_F9_AGENT_SKILL_MAP WHERE SKILL IS NOT NULL
)
) B
ON (A.SKILL = B.SKILL)
WHEN NOT MATCHED THEN
INSERT (
dl_skill_sk,
SKILL)
VALUES
(
SEQ_F9_SKILL.NEXTVAL,
B.SKILL
);END;""",
autocommit ='True',
dag=dag
)
N1T2_MERGE_INTO_MV_DIM_F9_SKILL.set_upstream(DUMMY_OPERATOR_All_NODE_In)
#Node1 - Task3: #REFRESH CAMPAIGN FROM ALL TABLES
N1T3_MERGE_INTO_MV_DIM_F9_CAMPAIGN=OracleOperator(
task_id='N1T3_MERGE_INTO_MV_DIM_F9_CAMPAIGN',
oracle_conn_id='CHOICEBI',
sql="""
begin MERGE INTO MV_DIM_F9_CAMPAIGN A USING
(
select distinct * from
(
SELECT DISTINCT CAMPAIGN FROM DW_OWNER.F9_CALL_LOG WHERE SKILL IS NOT NULL
UNION ALL
SELECT DISTINCT CAMPAIGN FROM DW_OWNER.F9_ACD_QUEUE WHERE SKILL IS NOT NULL
UNION ALL
SELECT DISTINCT CAMPAIGN FROM DW_OWNER.F9_CALL_SEGMENT WHERE SKILL IS NOT NULL
UNION ALL
SELECT DISTINCT CAMPAIGN FROM DW_OWNER.F9_AGENT WHERE SKILL IS NOT NULL
)
) B
ON (A.CAMPAIGN = B.CAMPAIGN)
WHEN NOT MATCHED THEN
INSERT (
dl_CAMPAIGN_sk,
CAMPAIGN)
VALUES
(
SEQ_F9_CAMPAIGN.NEXTVAL,
B.CAMPAIGN
);END;""",
autocommit ='True',
dag=dag
)
N1T3_MERGE_INTO_MV_DIM_F9_CAMPAIGN.set_upstream(DUMMY_OPERATOR_All_NODE_In)
#Node1 - Task4: Refresh MV_DIM_F9_CONTACT
N1T4_MV_DIM_F9_CONTACT=PythonOperator(
task_id='N1T4_MV_DIM_F9_CONTACT',
python_callable=process_fmm,
op_kwargs={'mvname':'CHOICEBI.MV_DIM_F9_CONTACT'},
dag=dag
)
N1T4_MV_DIM_F9_CONTACT.set_upstream(DUMMY_OPERATOR_All_NODE_In)
#Node2 - Task1: Refresh MV_FACT_F9_MASTER
N2T1_MV_FACT_F9_MASTER=PythonOperator(
task_id='N2T1_MV_FACT_F9_MASTER',
python_callable=process_fmm,
op_kwargs={'mvname':'CHOICEBI.MV_FACT_F9_MASTER'},
dag=dag
)
[N1T2_MERGE_INTO_MV_DIM_F9_SKILL, N1T3_MERGE_INTO_MV_DIM_F9_CAMPAIGN]>>N2T1_MV_FACT_F9_MASTER
#Node2 - Task2: Refresh MV_DIM_F9_CAMPAIGN_SKILL_MAP
N2T2_MV_DIM_F9_CAMPAIGN_SKILL_MAP=PythonOperator(
task_id='N2T2_MV_DIM_F9_CAMPAIGN_SKILL_MAP',
python_callable=process_fmm,
op_kwargs={'mvname':'CHOICEBI.MV_DIM_F9_CAMPAIGN_SKILL_MAP'},
dag=dag
)
[N1T2_MERGE_INTO_MV_DIM_F9_SKILL, N1T3_MERGE_INTO_MV_DIM_F9_CAMPAIGN]>>N2T2_MV_DIM_F9_CAMPAIGN_SKILL_MAP
#Node2 - Task3: Refresh MV_FACT_F9_CALL_LOG
N2T3_MV_FACT_F9_CALL_LOG=PythonOperator(
task_id='N2T3_MV_FACT_F9_CALL_LOG',
python_callable=process_fmm,
op_kwargs={'mvname':'CHOICEBI.MV_FACT_F9_CALL_LOG'},
dag=dag
)
[N1T2_MERGE_INTO_MV_DIM_F9_SKILL, N1T3_MERGE_INTO_MV_DIM_F9_CAMPAIGN]>>N2T3_MV_FACT_F9_CALL_LOG
#Node2 - Task4: Refresh MV_FACT_F9_AGENT_ACTIVITY_LOG
N2T4_MV_FACT_F9_AGENT_ACTIVITY_LOG=PythonOperator(
task_id='N2T4_MV_FACT_F9_AGENT_ACTIVITY_LOG',
python_callable=process_fmm,
op_kwargs={'mvname':'CHOICEBI.MV_FACT_F9_AGENT_ACTIVITY_LOG'},
dag=dag
)
[N1T2_MERGE_INTO_MV_DIM_F9_SKILL, N1T3_MERGE_INTO_MV_DIM_F9_CAMPAIGN]>>N2T4_MV_FACT_F9_AGENT_ACTIVITY_LOG
#Node2 - Task5: Refresh MV_FACT_F9_ACD_QUEUE
N2T5_MV_FACT_F9_ACD_QUEUE=PythonOperator(
task_id='N2T5_MV_FACT_F9_ACD_QUEUE',
python_callable=process_fmm,
op_kwargs={'mvname':'CHOICEBI.MV_FACT_F9_ACD_QUEUE'},
dag=dag
)
[N1T2_MERGE_INTO_MV_DIM_F9_SKILL, N1T3_MERGE_INTO_MV_DIM_F9_CAMPAIGN]>>N2T5_MV_FACT_F9_ACD_QUEUE
#Node2 - Task6: Refresh MV_FACT_F9_CALL_SEGMENT
N2T6_MV_FACT_F9_CALL_SEGMENT=PythonOperator(
task_id='N2T6_MV_FACT_F9_CALL_SEGMENT',
python_callable=process_fmm,
op_kwargs={'mvname':'CHOICEBI.MV_FACT_F9_CALL_SEGMENT'},
dag=dag
)
[N1T2_MERGE_INTO_MV_DIM_F9_SKILL, N1T3_MERGE_INTO_MV_DIM_F9_CAMPAIGN]>>N2T6_MV_FACT_F9_CALL_SEGMENT
#Node2 - Task8: Refresh MV_DIM_F9_AGENT_SKILL_MAP
N2T8_MV_DIM_F9_AGENT_SKILL_MAP=PythonOperator(
task_id='N2T8_MV_DIM_F9_AGENT_SKILL_MAP',
python_callable=process_fmm,
op_kwargs={'mvname':'CHOICEBI.MV_DIM_F9_AGENT_SKILL_MAP'},
dag=dag
)
N2T8_MV_DIM_F9_AGENT_SKILL_MAP.set_upstream(N1T2_MERGE_INTO_MV_DIM_F9_SKILL)
#Node 3 - Task 1: Trigger bat file: CALL_CENTER_DAILY_LOAD
N3T1_CALL_CENTER_DAILY_LOAD = SSHOperator (
ssh_conn_id='ssh_MSTR',
task_id='N3T1_CALL_CENTER_DAILY_LOAD',
command="""E:\Support\MicroStrategy\Command_Manager_Scripts\Enterprise\CALL_CENTER_DAILY_LOAD.bat""",
dag =dag
)
N3T1_CALL_CENTER_DAILY_LOAD.set_upstream(N2T1_MV_FACT_F9_MASTER) | 58173/vnsny_CODE | Airflow/CCSS MV Refresh_OLD.py | CCSS MV Refresh_OLD.py | py | 9,825 | python | en | code | 0 | github-code | 13 |
35181638470 | #!/usr/bin/env python3
import argparse
import sys
import pandas as pd
import numpy as np
from pybedtools import BedTool
BED_FIELDS = ["chrom", "start", "end", "name", "score", "strand"]
def argument_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(
usage="annotate_peaks.py -i <bed> --features <bed> --genes <bed> -o <output>"
)
parser.add_argument(
"-s", "--summits", metavar="<path>", help="summits in bed format"
)
parser.add_argument("-p", "--peaks")
parser.add_argument(
"-f", "--features", metavar="<path>", help="annotations in bed format"
)
parser.add_argument(
"-o", "--output", metavar="<path>", help="path where to store annotated summits"
)
return parser
def features_overlaping_summits(summits: BedTool, features: BedTool) -> pd.DataFrame:
return (
summits.intersect(features, wa=True, wb=True)
.to_dataframe(
names=[f"summit_{field}" for field in BED_FIELDS]
+ ["peak_name"]
+ [f"feature_{field}" for field in BED_FIELDS]
+ [f"gene_{field}" for field in BED_FIELDS]
+ ["gene_type"]
)
.drop(
columns=[
f"feature_{field}"
for field in ["chrom", "start", "end", "strand", "score"]
]
)
)
def get_genes_from_features(features: BedTool) -> pd.DataFrame:
return BedTool.from_dataframe(
features.to_dataframe(
usecols=[i for i in range(6, 13)],
names=[f"gene_{field}" for field in BED_FIELDS + ["type"]],
)
)
def genes_inside_peaks(peaks: BedTool, genes: BedTool) -> pd.DataFrame:
return peaks.intersect(genes, F=1, wa=True, wb=True).to_dataframe(
names=[f"peak_{field}" for field in BED_FIELDS]
+ [f"gene_{field}" for field in BED_FIELDS + ["type"]]
)
def peaks_to_summits(peaks: pd.DataFrame, summits: pd.DataFrame):
result = (
summits.merge(peaks, on="peak_name")
.drop(
columns=[
f"peak_{field}"
for field in ["chrom", "start", "end", "score", "strand"]
]
)
.drop_duplicates()
)
result["feature_name"] = "gene_inside_peak"
return result
def tss_tts_distance(row):
"""Calculates TSS and TTS distances"""
if row.gene_strand == "+":
tss = row.summit_start - row.gene_start
tts = row.summit_start - row.gene_end
elif row.gene_strand == "-":
tss = row.gene_end - row.summit_start
tts = row.gene_start - row.summit_start
else:
return pd.Series([pd.NA, pd.NA])
return pd.Series([tss, tts])
def main():
# Parse arguments
parser = argument_parser()
args = parser.parse_args()
if len(sys.argv) < 3:
parser.print_help()
        sys.exit()
# read summits and genes
summits = BedTool(args.summits)
peaks = BedTool(args.peaks)
features = BedTool(args.features)
genes = get_genes_from_features(features)
# Intersect summits with features
annotated_summits = features_overlaping_summits(summits, features)
# Short genes inside peaks
inside_peaks = genes_inside_peaks(peaks, genes)
inside_summits = peaks_to_summits(
inside_peaks,
summits.to_dataframe(
names=[f"summit_{field}" for field in BED_FIELDS] + ["peak_name"]
),
)
inside_summits = inside_summits[
~(inside_summits.gene_name.isin(annotated_summits.gene_name))
]
# Concatenate both
column_order = (
[f"summit_{field}" for field in BED_FIELDS]
+ ["peak_name"]
+ [f"gene_{field}" for field in BED_FIELDS + ["type"]]
+ ["feature_name"]
)
inside_summits = inside_summits[column_order]
annotated_summits = annotated_summits[column_order]
annotated = pd.concat([annotated_summits, inside_summits], axis=0).drop_duplicates()
# Calculate TSS and TTS distance
annotated[["tss_distance", "tts_distance"]] = annotated.apply(
tss_tts_distance, axis=1
)
column_order = (
[f"summit_{field}" for field in BED_FIELDS]
+ ["peak_name"]
+ [f"gene_{field}" for field in BED_FIELDS + ["type"]]
+ ["tss_distance", "tts_distance", "feature_name"]
)
# Write to file
annotated[column_order].sort_values(["summit_chrom", "summit_start"]).to_csv(
args.output, sep="\t", header=False, index=False
)
if __name__ == "__main__":
main()
| jperezalemany/DiegoMartin2022 | chip_seq/workflow/scripts/annotate_summits.py | annotate_summits.py | py | 4,538 | python | en | code | 0 | github-code | 13 |
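The sign convention in `tss_tts_distance` is easy to sanity-check with a plain object standing in for a DataFrame row (the coordinates below are invented; no pandas needed):

```python
from types import SimpleNamespace

def tss_tts(row):
    """Distance from the summit to transcription start/end,
    signed along the gene's strand (mirrors tss_tts_distance)."""
    if row.gene_strand == "+":
        return row.summit_start - row.gene_start, row.summit_start - row.gene_end
    if row.gene_strand == "-":
        return row.gene_end - row.summit_start, row.gene_start - row.summit_start
    return None, None

fwd = SimpleNamespace(summit_start=150, gene_start=100, gene_end=300, gene_strand="+")
rev = SimpleNamespace(summit_start=150, gene_start=100, gene_end=300, gene_strand="-")
print(tss_tts(fwd))  # (50, -150): 50 bp past the TSS, 150 bp before the TTS
print(tss_tts(rev))  # (150, -50): on the '-' strand the TSS sits at gene_end
```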
16081890755 | import pickle
from heapq import heappush, heappushpop, heappop
from collections import deque
CAPACITY=5
N_GRAM=3
class JobTitleRecommender:
def __init__(self, v_tree_dir='bin/v_tree.pkl', trie_dir='bin/trie.pkl'):
# vocabulary tree for ngram likelihood estimates (next word prediction)
with open(v_tree_dir, 'rb') as pickle_in:
self.v_tree = pickle.load(pickle_in, encoding='utf8')
# trie for autocomplete
with open(trie_dir, 'rb') as pickle_in:
self.trie = pickle.load(pickle_in, encoding='utf8')
# Retrive top CAPACITY results using unigram scores
def auto_complete(self, prefix, capacity=CAPACITY):
min_heap = []
trie_generator = self.trie.all_words_beginning_with_prefix(prefix)
while True:
try:
val = next(trie_generator)
if len(min_heap) < capacity:
heappush(min_heap, (self.v_tree.get_likelihood(val), val))
else:
heappushpop(min_heap, (self.v_tree.get_likelihood(val), val))
except StopIteration:
break
return sorted([heappop(min_heap) for i in range(len(min_heap))], reverse=True)
# Retrive top CAPACITY results from the V-ary tree
# likelihoods are precalculated in the Language Model module
# Based upon Maximum Likelihood, a sequence of words (context) has a set of 'next word' results.
# This is used to predict the next possible words based on training data
def predict_next_word(self, text, capacity=CAPACITY):
text = '<s> <s> '+text
text = ' '.join(text.split())
min_heap = []
context = tuple(text.split()[-N_GRAM+1:])
l = self.v_tree.word_prediction(context)
for word_lklhd in l:
if len(min_heap) < capacity:
heappush(min_heap, word_lklhd)
else:
heappushpop(min_heap, word_lklhd)
return sorted([heappop(min_heap) for i in range(len(min_heap))], reverse=True)
def main():
job_title_recommender = JobTitleRecommender()
# print(job_title_recommender.auto_complete('ae'))
print(job_title_recommender.predict_next_word('java'))
if __name__ == "__main__":
main() | nikola-spasojevic/Interpolated_Language_Model | jobtitlerecommender/job_title_recommender.py | job_title_recommender.py | py | 1,975 | python | en | code | 0 | github-code | 13 |
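Both retrieval methods use the same bounded min-heap pattern to keep the top `capacity` results in O(n log k). The pattern in isolation, with toy scores:

```python
from heapq import heappush, heappushpop

def top_k(scored_items, k):
    """Keep the k highest-scoring (score, item) pairs seen so far."""
    heap = []
    for pair in scored_items:
        if len(heap) < k:
            heappush(heap, pair)
        else:
            heappushpop(heap, pair)   # push, then evict the current minimum
    return sorted(heap, reverse=True)

print(top_k([(0.1, "a"), (0.9, "b"), (0.4, "c"), (0.7, "d")], 2))
# [(0.9, 'b'), (0.7, 'd')]
```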
10289731731 |
from django.conf.urls import include, url
from .views import (
ver_clientes,
consultar_cliente_nit,
consultar_cliente_nombre,
registrar_cliente,
mayorista,
descuento,
)
urlpatterns = [
url(r'^$',ver_clientes.as_view(),name='ver_clientes'),
url(r'^nit/$',consultar_cliente_nit.as_view(),name='consultar_nit'),
url(r'^nombre/$',consultar_cliente_nombre.as_view(),name='consultar_nombre'),
url(r'^registrar_cliente/$',registrar_cliente.as_view(),name='registrar_cliente'),
url(r'^mayorista/$',mayorista.as_view(),name='mayorista'),
url(r'^descuento/$',descuento.as_view(),name='descuento'),
]
| corporacionrst/software_RST | app/cliente_proveedor/cliente/urls.py | urls.py | py | 611 | python | es | code | 0 | github-code | 13 |
7649967295 | import numpy as np
class Shear(object):
def __init__(self):
pass
def create_shear(self, angle=45, lambda_1=1.2, lambda_2=0.8,
shift=[ [0], [-1] ], center=[[13], [13]]):
# define params for shearing matrix
self.angle = angle
self.theta = np.radians(angle)
self.lambda_1 = lambda_1
self.lambda_2 = lambda_2
self.shift = np.asarray(shift)
self.center = np.asarray(center) # get the center of image
# create shear
self.create_matrices()
def create_matrices(self):
self.P = np.asarray([[np.cos(self.theta), -np.sin(self.theta)],
[np.sin(self.theta), np.cos(self.theta)]]) # orthonormal basis
self.Lambda = np.diag([self.lambda_1, self.lambda_2])
self.Lambda_inv = np.diag([1/self.lambda_1, 1/self.lambda_2])
self.A = np.matmul(np.transpose(self.P),
np.matmul(self.Lambda,self.P)) # create shear
# define inverse shear
self.A_inverse = np.matmul(np.transpose(self.P),
np.matmul(self.Lambda_inv, self.P))
#self.b = self.center + self.shift
def apply_inverse(self, input_coord):
#inverse_point1 = self.A_inverse @ (input_coord - self.center- self.shift ) + self.center
inverse_point = np.matmul(self.A_inverse, (input_coord - self.center - self.shift)) + self.center
#print(inverse_point-inverse_point1)
x = inverse_point[0][0]
y = inverse_point[1][0]
return inverse_point, x, y
def find_inverse_point(self, i, j, imag):
input_coord = np.asarray([ [i], [j] ]) # get current input coordinate
# get the point that gets mapped to input_coord from applying A
# we use A_inverse here to get that
original_point, x, y = self.apply_inverse(input_coord)
if x > 27 or y > 27 or x<0 or y<0:
return 0 # original point is outside of grid
# get points on grid that are close to original inverse_point
x_floor = np.floor(x)
x_ceil = np.ceil(x)
y_floor = np.floor(y)
y_ceil = np.ceil(y)
point_1 = np.asarray([ [x_floor], [y_floor] ])
point_2 = np.asarray([ [x_floor], [y_ceil] ])
point_3 = np.asarray([ [x_ceil], [y_floor] ])
point_4 = np.asarray([ [x_ceil], [y_ceil] ])
all_points = [point_1, point_2, point_3, point_4]
points = []
for point in all_points:
bool_val = any([np.array_equal(point,point_prime) for point_prime in points])
if not bool_val:
points.append(point)
# get rid of repeat points (happens if x or y is already an integer)
def fractional_value(point_val):
x_temp = point_val[0][0]
y_temp = point_val[1][0]
if x_temp > 27 or y_temp > 27 or x_temp < 0 or y_temp < 0:
pixel = 0
else:
pixel = imag[int(x_temp), int(y_temp)]
# compute similariy as e^{-||x - y||_2}
similarity_metric = np.exp(-np.linalg.norm(original_point-point_val))
return pixel, similarity_metric
pixel_value = 0
total_dist = 0
for point in points:
imag_val, similarity_metric = fractional_value(point)
if imag_val > 0:
total_dist += similarity_metric
pixel_value += imag_val*similarity_metric
if total_dist > 0:
final_pixel_val = pixel_value / total_dist
else:
final_pixel_val = pixel_value
return final_pixel_val
def shear_image(self,image):
sheared_image = np.zeros((28,28))
for i in range(28):
for j in range(28):
pixel_ij = self.find_inverse_point(i, j, image)
sheared_image[i,j] = pixel_ij
return sheared_image
| enegrini/Applications-of-No-Collision-Transportation-Maps-in-Manifold-Learning | code/Functions/Shear_LOT.py | Shear_LOT.py | py | 3,955 | python | en | code | 1 | github-code | 13 |
35886074344 | import pybullet as pb
from source.utils import rotate
from .i_matrix import IMatrix
class ContractVMArgs(object):
def __init__(self, camera_pos, target_pos, up_vector):
self.camera_pos = camera_pos
self.target_pos = target_pos
self.vector_up = up_vector
class ViewMatrixData(object):
def __init__(self, position, angles, up_vector, orient, offset):
super().__init__()
self.position = position
self.angles = angles
self.up_vector = up_vector
self.orient = orient
self.offset = offset
def get(self) -> ContractVMArgs:
angles = [-self.angles[0], -self.angles[1], -self.angles[2]]
orient_rot = rotate(self.orient, angles)
offset_rot = rotate(self.offset, angles)
camera_pos = self.position + offset_rot
target_pos = camera_pos + orient_rot
up_vector = self.up_vector
return ContractVMArgs(camera_pos, target_pos, up_vector)
class ViewMatrix(IMatrix):
def __init__(self, data: ViewMatrixData):
self.data = data
super().__init__()
def update(self):
args = self.data.get()
self._matrix = pb.computeViewMatrix(
args.camera_pos,
args.target_pos,
args.vector_up
)
| AntivistRock/AIIJC-AI-in-robotics | source/engine/camera/view_matrix.py | view_matrix.py | py | 1,293 | python | en | code | 1 | github-code | 13 |
28244658503 | from typing import List
arr= [1, 2, 3, 4, 5, 7, 8, 11, 18]
target = 12
arr2 = [3,2,4]
# l r
target2 = 6
# Output: 1 3
# !!sorted arrr ints
# target
# 2 numbs add to target
# return indices
# 0n
# **list comp to eliminate right side where > target
def twoSum(nums: List[int], target: int) -> List[int]:
    # two-pointer scan; requires nums to be sorted ascending
    left, right = 0, len(nums) - 1
    while right > left:
        two_sum = nums[left] + nums[right]
        if two_sum == target:
            return [left, right]
        if two_sum > target:
            right -= 1
        else:
            left += 1
    return []

print(twoSum(arr, target))  # [0, 7]: 1 + 11 == 12
| thefrankharvey/cs | algorithms/two-pointers/two-sum.py | two-sum.py | py | 615 | python | en | code | 0 | github-code | 13 |
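The two-pointer scan relies on the array being sorted; for unsorted input such as `arr2`, the standard O(n) alternative is a one-pass hash map. A sketch (not part of the original file):

```python
from typing import List

def two_sum_unsorted(nums: List[int], target: int) -> List[int]:
    seen = {}                      # value -> index of earlier occurrence
    for i, n in enumerate(nums):
        if target - n in seen:     # complement already visited
            return [seen[target - n], i]
        seen[n] = i
    return []

print(two_sum_unsorted([3, 2, 4], 6))  # [1, 2]
```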
12269628222 |
# Creating a hash table with collision handling
class HashTable:
def __init__(self):
self.Max = 10
self.arr = [[] for i in range(self.Max)]
#Hash function
def get_hash(self, key):
h = 0
for char in key:
h += ord(char)
return h % self.Max
# Create a function to add key value pair in the hash table
def __setitem__(self, key, val):
h = self.get_hash(key)
found = False
for idx, element in enumerate(self.arr[h]):
if len(element)==2 and element[0]==key:
self.arr[h][idx] = (key,val)
found = True
break
if not found:
self.arr[h].append((key, val))
# Create a function to get key value
def __getitem__(self, key):
h = self.get_hash(key)
for element in self.arr[h]:
if element[0] == key:
return element[1]
def __delitem__(self, key):
h = self.get_hash(key)
for index, element in enumerate(self.arr[h]):
if element[0] == key:
del self.arr[h][index]
t = HashTable()
t["march 6"] = 120
t["march 6"] = 78
t["march 8"] = 67
t["march 9"] = 4
t["march 17"] = 459
print(t["march 6"]) | moussa-sanou/Python4 | EPI/Dictionaries/collision.py | collision.py | py | 1,254 | python | en | code | 0 | github-code | 13 |
25528110897 | #! /usr/bin/env python3
import rospy
from std_msgs.msg import Int64
rospy.init_node('pwm')
pwm_l = rospy.Publisher('/control_l',Int64,queue_size=1)
pwm_r = rospy.Publisher('/control_r',Int64,queue_size=1)
a = Int64()
b = Int64()
a.data = 65
b.data = 65
rate = rospy.Rate(10)  # 10 Hz; rate was used below but never defined
while not rospy.is_shutdown():
pwm_l.publish(a)
pwm_r.publish(b)
rate.sleep() | luppyfox/keng_boat_biw_odyssey | keng/test_robot01/src/PWM01.py | PWM01.py | py | 377 | python | en | code | 0 | github-code | 13 |
42363145111 | #! /usr/bin/env python
#
# reduce a EDGE galaxy from the GBT-EDGE survey
# all work occurs in a subdirectory of the "galaxy" name
#
# e.g. ./reduce.py [options] NGC0001 [...]
#
# options:
# -noweather
# -offtype PCA
# -nproc 4
# -scanblorder 7
# -posblorder 3
# -pixperbeam 3
# -rmsthresh 1.1
# -hanning 2
# -smooth 2
import os
import sys
import glob
import numpy as np
from functools import partial
from astropy.table import Table
from astropy.coordinates import SkyCoord
from astropy.wcs import wcs
from astropy.io import fits
import astropy.units as u
from gbtpipe.ArgusCal import calscans, ZoneOfAvoidance,SpatialMask, SpatialSpectralMask, NoMask
from gbtpipe import griddata
from degas import postprocess
from degas.masking import buildmasks
#from skimage import morphology as morph
from spectral_cube import SpectralCube
from radio_beam import Beam
import argparse
__version__ = "29-dec-2022"
def edgemask(galaxy, maskfile=None):
"""
    Based on a (0,1) input mask file, this will use gal_12CO.fits
    or a handpicked one if maskfile is given.
"""
if maskfile == None:
maskname = '../masks/mask_{0}.fits'.format(galaxy)
print("Reading default mask file %s" % maskname)
else:
if maskfile[0] == '/':
maskname = maskfile
else:
maskname = '../masks/' + maskfile
buildmasks(maskname, galname=galaxy, outdir='./',
setups=['12CO'], grow_v=50, grow_xy=2)
    # buildmasks writes the file outdir + galname + '.12co.mask.fits';
    # return that filename
return galaxy+'.12co.mask.fits'
def edgegrid(galaxy, badfeed=[], maskfile=None):
"""
    maskfile    can be None, in which case no masking is done
    badfeed     a list of (0-based) feeds that should be excluded
                from the file list
"""
filelist = glob.glob(galaxy +'*_pol0.fits')
for bf in badfeed:
fl = glob.glob(galaxy +'*_feed%s_*.fits' % bf)
n = len(filelist)
for fli in fl:
filelist.remove(fli)
if len(filelist) == n:
print("Warning, could not remove",fli)
else:
print("Not using ",fli)
n = len(filelist)
filename = galaxy + '__12CO'
edgetrim = 64
outdir='.'
plotTimeSeries=True
scanblorder=7
posblorder=5
if maskfile == None:
windowStrategy='simple'
else:
windowStrategy='cubemask'
# example way too smooth pipeline
smooth_v = 3
smooth_xy = 3
# do a bit
smooth_v = 1
smooth_xy = 1
# do nothing
smooth_v = 0
smooth_xy = 0
# Peter's quicklook pipeline
smooth_v = 2
smooth_xy = 2
# Alberto's preference
smooth_v = 2
smooth_xy = 0
# Erik's original
smooth_v = 1
smooth_xy = 1.3
# new trial
smooth_v = 2
smooth_xy = 1.3
griddata(filelist,
startChannel=edgetrim,
endChannel=1024-edgetrim,
outdir='.',
flagSpike=True, spikeThresh=1.5,
flagRMS=True, plotTimeSeries=plotTimeSeries,
flagRipple=True, rippleThresh=1.3,
pixPerBeam=4.0,
rmsThresh=1.3,
robust=False,
blorder=scanblorder,
plotsubdir='timeseries/',
windowStrategy=windowStrategy, # 'cubemask' or 'simple'
maskfile=maskfile,
dtype=np.float32,
outname=filename)
postprocess.cleansplit(filename + '.fits',
spectralSetup='12CO',
HanningLoops=smooth_v, # was: 1
spatialSmooth=smooth_xy, # was: 1.3
Vwindow=1500*u.km/u.s,
CatalogFile='../GBTEDGE.cat',
maskfile=maskfile,
blorder=posblorder)
# @todo - match with setting
if False:
s = SpectralCube.read(galaxy+'_12CO_rebase{0}_smooth1.3_hanning1.fits'.format(posblorder))
s2 = s.convolve_to(Beam(12*u.arcsec))
s2.write(galaxy+'_12CO_12arcsec.fits', overwrite=True)
def galcenter(galaxy):
"""
"""
CatalogFile = '../GBTEDGE.cat'
Catalog = Table.read(CatalogFile, format='ascii')
match = np.zeros_like(Catalog, dtype=bool)
galcoord = None
for index, row in enumerate(Catalog):
if galaxy in row['NAME']:
match[index] = True
MatchRow = Catalog[match]
galcoord = SkyCoord(MatchRow['RA'],
MatchRow['DEC'],
unit=(u.hourangle, u.deg))
return(galcoord)
def getscans(gal, select=[], parfile='gals.pars'):
"""
allowed formats:
GAL SEQ START STOP REF1,REF2
GAL SEQ START STOP # cheating: REF1=START-2 REF2=STOP+1
"""
print("getscans: ",gal,select)
scans = []
fp = open(parfile)
lines = fp.readlines()
for line in lines:
if line[0] == '#':
continue
try:
line = line.split('#')[0] # removing trailing comments
w = line.split()
if len(w) < 4:
continue
if gal != w[0]:
continue
seq = int(w[1])
start = int(w[2])
stop = int(w[3])
if len(w) == 4:
refscans = [start-2, stop+1]
elif len(w) == 5:
ss = w[4].split(',')
refscans = [int(ss[0]),int(ss[1])]
else:
print("Skipping long line",line.strip())
continue
if len(select)==0 or seq in select:
scans.append( (seq,start,stop,refscans) )
print('%s: found %s' % (gal,scans[-1]))
else:
print('%s: skipping %s' % (gal,line))
except:
print('Skipping bad line: ',line.strip())
return scans
def my_calscans(gal, scan, maskstrategy, maskfile, pid='AGBT21B_024', rawdir='../rawdata'):
"""
    Calibrate the scans of one session for galaxy gal.
    @todo support badfeeds=[]
"""
seq = scan[0]
start = scan[1]
stop = scan[2]
refscans = scan[3]
dirname = '%s/%s_%02d/%s_%02d.raw.vegas' % (rawdir,pid,seq,pid,seq)
OffType = 'PCA' # 'linefit' 'median' 'median2d'
if maskstrategy == None:
calscans(dirname, start=start, stop=stop, refscans=refscans, OffType=OffType, nProc=1, opacity=True, varfrac=0.1)
else:
calscans(dirname, start=start, stop=stop, refscans=refscans, OffType=OffType, nProc=1, opacity=True, OffSelector=maskstrategy, varfrac=0.1)
def main(args):
"""
parse arguments (@todo use parseargs) and execute pipeline
"""
do_scan = True
do_mask = False
badfeed = []
grabwild = False
grabmask = False
grabseq = False
do_seed = False
mask2 = None
dryrun = False
seq = []
for gal in args:
if grabwild:
grabwild = False
print("Warning: removing feeds '%s'" % gal)
badfeed = [int(x) for x in gal.split(',')]
print("badfeeds",badfeed)
continue
if grabmask:
mask2 = gal
print("Using mask '%s'" % mask2)
grabmask = False
continue
if grabseq:
grabseq = False
print("Using seq %s" % gal)
seq = [int(x) for x in gal.split(',')]
print("seq", seq)
continue
if gal == '-s':
print("Warning: skipping accumulating scans, only doing gridding. Affects mask")
do_scan = False
continue
if gal == '-n':
dryrun = True
continue
if gal == '-f':
grabwild = True
continue
if gal == '-g':
grabseq = True
continue
if gal == '-M':
do_mask = True
continue
if gal == '-m':
grabmask = True
do_mask = True
continue
if gal == '-h':
print("Usage: %s [-h] [-s] [-M] [-m mfile] [-f f1,f2,...] [-g g1,g2,...] galaxy" % sys.argv[0])
print("Version: %s" % __version__)
print(" -h help")
print(" -s skip scan building (assumed you've done it before).")
print(" if mask changed, do not use -s")
print(" -M add masking (needs special masks/mask_GAL.fits file)")
print(" -m mfile use masking file and deeper GAL/MASK/<results>")
print(" -f f1,... comma separated list of bad feeds (0-based numbers)")
print(" -g g1,... comma separated list of good sessions (1,2,...) [PJT only]")
print(" -n dryrun - report sessions/scans found and exit")
print(" galaxy galaxy name(s), e.g. NGC0001, as they appear in gals.pars")
print(" In theory multiple galaxies can be used, probably not with -m,-g,-f")
continue
if do_seed:
# this doesn't seem to work
print("Warning: fixed seed=123 for reproducable cubes")
np.random.seed(123)
print("Trying galaxy %s" % gal)
scans = getscans(gal, seq)
if dryrun:
return
if len(scans) > 0:
os.makedirs(gal, exist_ok=True)
os.chdir(gal)
# keep track of sessions
fp = open("sessions.log","a")
for scan in scans:
fp.write("%d\n" % scan[0])
fp.close()
# log this last pipeline run
cmd = 'date +%Y-%m-%dT%H:%M:%S >> runs.log'
os.system(cmd)
if do_mask:
maskfile = edgemask(gal, mask2) # make mask file
print("Using mask from %s" % maskfile)
hdu = fits.open(maskfile)
mask = hdu[0].data
#maskstrategy=partial(SpatialSpectralMask, mask=hdu[0].data, wcs=wcs.WCS(hdu[0].header), offpct=50)
maskstrategy=partial(SpatialMask, mask=np.any(mask, axis=0), wcs=wcs.WCS(hdu[0].header).celestial)
else:
maskstrategy = None
maskfile = None
print('maskfile',maskfile)
if do_scan:
for scan in scans:
my_calscans(gal, scan, maskstrategy, maskfile)
edgegrid(gal, badfeed, maskfile)
os.chdir('..')
else:
print("Skipping %s: no entry found in gals.pars" % gal)
if __name__ == "__main__":
main(sys.argv[1:])
| teuben/GBT-EDGE | reduce.py | reduce.py | py | 10,719 | python | en | code | 0 | github-code | 13 |
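getscans() in reduce.py accepts two line formats in gals.pars; the defaulting of the reference scans (REF1 = START-2, REF2 = STOP+1 when no refs are given) can be sketched in isolation. This is a simplified parser over hypothetical input lines, not the original function:

```python
def parse_line(line):
    # "GAL SEQ START STOP [REF1,REF2]"; without refs, default to START-2 and STOP+1.
    w = line.split('#')[0].split()   # drop trailing comments, as getscans does
    gal, seq, start, stop = w[0], int(w[1]), int(w[2]), int(w[3])
    if len(w) == 5:
        r = w[4].split(',')
        refscans = [int(r[0]), int(r[1])]
    else:
        refscans = [start - 2, stop + 1]
    return (gal, seq, start, stop, refscans)

print(parse_line("NGC0001 1 10 20"))        # ('NGC0001', 1, 10, 20, [8, 21])
print(parse_line("NGC0001 2 30 40 28,41"))  # ('NGC0001', 2, 30, 40, [28, 41])
```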
70457129937 | from modules.cloud import AWS, FIREHOSE, S3, SQS, chunker, logger
from modules.static import *
import json
logger.info('Importing constants')
logger.info(f'Region: {REGION}')
logger.info(f'Account id: {ACCOUNT_ID}')
aws = AWS(REGION, ACCOUNT_ID)
s3 = S3(REGION, ACCOUNT_ID, BUCKET_NAME)
sqs = SQS(REGION, ACCOUNT_ID, SQS_QUEUES[1])
firehose = FIREHOSE(REGION, ACCOUNT_ID, FIREHOSE_DS)
logger.info(f'Reading messages from queue {SQS_QUEUES[1]} on SQS')
sqs_read_messages = sqs.read_message()
while sqs_read_messages is not None:
for sqs_message in sqs_read_messages:
json_sqs_to_dict = json.loads(sqs_message)
key = json_sqs_to_dict['MessageBody']['key']
        logger.info(f'Reading {key} from S3')
read_s3_object = s3.read_object(key, 'json')
        ## Send records to Firehose in batches
        logger.info('Preparing to send message batches to Firehose')
for chunk_record in chunker(read_s3_object, MAX_BATCH_SIZE):
batch_records = list(map(lambda record: {'Data':record}, chunk_record))
            logger.info(f'Batch with {MAX_BATCH_SIZE} records sent')
firehose.put_records(batch_records, 'batch')
    ## Send records to Firehose in batches
    logger.info('Reading the SQS queue again')
sqs_read_messages = sqs.read_message()
logger.info('End of record streaming to Firehose') | codeis4fun/aws-auto-deployment | manual_pipeline/4_from_sqs_to_firehose.py | 4_from_sqs_to_firehose.py | py | 1,383 | python | en | code | 1 | github-code | 13
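`chunker` is imported from `modules.cloud`, which is not shown in this file; a minimal generator consistent with how it is used above (an assumption about its behavior, not the project's actual implementation) could look like:

```python
def chunker(seq, size):
    # Yield successive slices of at most `size` items from seq.
    seq = list(seq)
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

records = [f"record-{n}" for n in range(10)]
for chunk in chunker(records, 4):
    print(len(chunk))  # 4, 4, 2
```

Batching like this matters here because Firehose `PutRecordBatch` caps the number of records per call, which is what `MAX_BATCH_SIZE` guards against.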
40963677802 | # -*- coding: UTF-8 -*-
"""
# @Time : 2019-10-23 22:05
# @Author : yanlei
# @FileName: 回调函数_爬取数据.py
"""
import requests
from multiprocessing import Pool
def get_data(url):
    response = requests.get(url)
    if response.status_code == 200:
        return url, response.content.decode('utf-8')
    return url, ''  # always return a tuple so the callback can unpack it
def call_back(args):
url, content = args
print(url, len(content))
url_list = [
'https://www.baidu.com',
'https://www.sohu.com',
'https://www.sogou.com',
'https://www.runoob.com',
'https://leetcode-cn.com',
'https://cn.bing.com',
]
p = Pool(2)
for url in url_list:
p.apply_async(get_data, args=(url, ), callback=call_back)
p.close()
p.join()
| Yanl05/FullStack | 并发编程/进程池/回调函数_爬取数据.py | 回调函数_爬取数据.py | py | 698 | python | en | code | 0 | github-code | 13 |
74525643856 | import tensorflow as tf
import tensorflow_addons as tfa
from sle_gan.network.common_layers import GLU
class InputBlock(tf.keras.layers.Layer):
"""
Input Block
Input shape: (B, 1, 1, 256)
Output shape: (B, 4, 4, 256)
"""
def __init__(self, filters: int, **kwargs):
super().__init__(**kwargs)
self.conv2d_transpose = tf.keras.layers.Conv2DTranspose(filters=filters * 2,
kernel_size=(4, 4),
strides=(1, 1))
self.normalization = tf.keras.layers.BatchNormalization()
self.glu = GLU()
def call(self, inputs, **kwargs):
x = self.conv2d_transpose(inputs)
x = self.normalization(x)
x = self.glu(x)
return x
class UpSamplingBlock(tf.keras.layers.Layer):
def __init__(self, output_filters: int, **kwargs):
super().__init__(**kwargs)
self.output_filters = output_filters
self.upsampling = tf.keras.layers.UpSampling2D(size=(2, 2), interpolation="nearest")
self.conv2d = tf.keras.layers.Conv2D(filters=output_filters * 2, kernel_size=(3, 3), padding="same")
self.normalization = tf.keras.layers.BatchNormalization()
self.glu = GLU()
def call(self, inputs, **kwargs):
x = self.upsampling(inputs)
x = self.conv2d(x)
x = self.normalization(x)
x = self.glu(x)
return x
class SkipLayerExcitationBlock(tf.keras.layers.Layer):
"""
Skip-Layer Excitation Block
This block receives 2 feature maps, a high and a low resolution one. Then transforms the low resolution feature map
and at the end it is multiplied along the channel dimension with the high resolution input.
E.g.:
Inputs:
- High_res shape: (B, 128, 128, 64)
- Low_res shape: (B, 8, 8, 512)
Output:
- shape: (B, 128, 128, 64)
"""
def __init__(self, input_low_res_filters: int, input_high_res_filters: int, **kwargs):
super().__init__(**kwargs)
self.pooling = tfa.layers.AdaptiveAveragePooling2D(output_size=(4, 4), data_format="channels_last")
self.conv2d_1 = tf.keras.layers.Conv2D(filters=input_low_res_filters,
kernel_size=(4, 4),
strides=1,
padding="valid")
self.leaky_relu = tf.keras.layers.LeakyReLU(alpha=0.1)
self.conv2d_2 = tf.keras.layers.Conv2D(filters=input_high_res_filters,
kernel_size=(1, 1),
strides=1,
padding="valid")
def call(self, inputs, **kwargs):
x_low, x_high = inputs
x = self.pooling(x_low)
x = self.conv2d_1(x)
x = self.leaky_relu(x)
x = self.conv2d_2(x)
x = tf.nn.sigmoid(x)
return x * x_high
class OutputBlock(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.conv = tf.keras.layers.Conv2D(filters=3, kernel_size=3, strides=1, padding="same")
def call(self, inputs, **kwargs):
x = self.conv(inputs)
x = tf.nn.tanh(x)
return x
class Generator(tf.keras.models.Model):
"""
Input of the Generator is in shape: (B, 1, 1, 256)
"""
def __init__(self, output_resolution: int, *args, **kwargs):
super().__init__(*args, **kwargs)
assert output_resolution in [256, 512, 1024], "Resolution should be 256 or 512 or 1024"
self.output_resolution = output_resolution
self.input_block = InputBlock(filters=1024)
# Every layer is initiated, but we might not use the last ones. It depends on the resolution
self.upsample_8 = UpSamplingBlock(512)
self.upsample_16 = UpSamplingBlock(256)
self.upsample_32 = UpSamplingBlock(128)
self.upsample_64 = UpSamplingBlock(128)
self.upsample_128 = UpSamplingBlock(64)
self.upsample_256 = UpSamplingBlock(32)
self.upsample_512 = UpSamplingBlock(16)
self.upsample_1024 = UpSamplingBlock(8)
self.sle_8_128 = SkipLayerExcitationBlock(self.upsample_8.output_filters, self.upsample_128.output_filters)
self.sle_16_256 = SkipLayerExcitationBlock(self.upsample_16.output_filters, self.upsample_256.output_filters)
self.sle_32_512 = SkipLayerExcitationBlock(self.upsample_32.output_filters, self.upsample_512.output_filters)
self.output_image = OutputBlock()
def initialize(self, batch_size: int = 1):
sample_input = tf.random.normal(shape=(batch_size, 1, 1, 256), mean=0, stddev=1.0, dtype=tf.float32)
sample_output = self.call(sample_input)
return sample_output
@tf.function
def call(self, inputs, training=None, mask=None):
x = self.input_block(inputs) # --> (B, 4, 4, 1024)
x_8 = self.upsample_8(x) # --> (B, 8, 8, 512)
x_16 = self.upsample_16(x_8) # --> (B, 16, 16, 256)
x_32 = self.upsample_32(x_16) # --> (B, 32, 32, 128)
x_64 = self.upsample_64(x_32) # --> (B, 64, 64, 128)
x_128 = self.upsample_128(x_64) # --> (B, 128, 128, 64)
x_sle_128 = self.sle_8_128([x_8, x_128]) # --> (B, 128, 128, 64)
x_256 = self.upsample_256(x_sle_128) # --> (B, 256, 256, 32)
x = self.sle_16_256([x_16, x_256]) # --> (B, 256, 256, 32)
if self.output_resolution > 256:
x_512 = self.upsample_512(x) # --> (B, 512, 512, 16)
x = self.sle_32_512([x_32, x_512]) # --> (B, 512, 512, 16)
if self.output_resolution > 512:
x = self.upsample_1024(x) # --> (B, 1024, 1024, 8)
image = self.output_image(x) # --> (B, resolution, resolution, 3)
return image
| gaborvecsei/SLE-GAN | sle_gan/network/generator.py | generator.py | py | 6,094 | python | en | code | 68 | github-code | 13 |
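`GLU` is imported from `sle_gan.network.common_layers`, which is not shown here. The operation it stands for (Gated Linear Unit) halves the channel dimension: one half passes through, the other becomes a sigmoid gate. A NumPy sketch of that gating (an assumption about the layer's behavior, and the reason every Conv2D above uses `filters * 2`):

```python
import numpy as np

def glu(x):
    # Split the last (channel) axis in half; gate the first half with the second.
    a, b = np.split(x, 2, axis=-1)
    return a * (1.0 / (1.0 + np.exp(-b)))  # sigmoid gate

x = np.ones((1, 4, 4, 8))   # (B, H, W, C) with C = 8
y = glu(x)
print(y.shape)              # (1, 4, 4, 4): channels halved
```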
21964909992 | import matplotlib.pyplot as plt
x_values = range(1, 1001)
y_values = [x**2 for x in x_values]
"""
x_values = [1, 2, 3, 4, 5]
y_values = [1, 4, 9, 16, 25]
"""
plt.style.use('seaborn')
fig, ax = plt.subplots()
# Using a Colormap
ax.scatter(x_values, y_values, c=y_values, cmap=plt.cm.Blues, s=10)
# Defining Custom Colors
# ax.scatter(x_values, y_values, c='red', s=10)
# green:
# ax.scatter(x_values, y_values, c=(0, 0.8, 0), s=10)
# ---
# ax.scatter(x_values, y_values, s=10)
# ax.scatter(x_values, y_values, s=100)
# ax.scatter(2, 4, s=200)
# Set chart title and label axes.
ax.set_title("Square Numbers", fontsize=24)
ax.set_xlabel("Value", fontsize=14)
ax.set_ylabel("Square of Value", fontsize=14)
# Set size of tick labels.
ax.tick_params(axis='both', which='major', labelsize=14)
# Set the range for each axis.
ax.axis([0, 1100, 0, 1100000])
plt.show()
# saves the figure.
# plt.savefig('savefig_plot2.png', bbox_inches='tight')
| pranjal779/Eric-Matthes | Data Visualization/projectcode/scatter_squares.py | scatter_squares.py | py | 943 | python | en | code | 2 | github-code | 13 |
17971467809 | #!/usr/bin/env python
# coding: utf-8
# In[130]:
import numpy as np
# In[151]:
step = 0
a = []
for _ in range(1,1000):        # 999 independent walks
    step=0
    for _ in range(0,1000):    # 1000 dice throws per walk (no longer shadows the outer loop index)
        out = np.random.randint(1,7)
        if out < 4 and step!=0:    # 1-3: one step down, but never below 0
            step = step -1
        if out >= 4 and out < 6:   # 4-5: one step up
            step = step+1
        if out == 6:               # 6: throw again and climb that many steps
            m = np.random.randint(1,7)
            step = step+m
    a.append(step)
b = np.mean(a)
b
# In[ ]:
# In[ ]:
# In[ ]:
| preetithakur1/learning_python | staircase.py | staircase.py | py | 488 | python | en | code | 0 | github-code | 13 |
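The simulated mean above can be sanity-checked against the expected drift per throw: outcomes 1-3 move down one (ignoring the floor at step 0), 4-5 move up one, and a 6 adds a second throw with mean 3.5. A quick check of that arithmetic:

```python
# Expected change per throw, ignoring the floor at step 0:
drift = (3 * (-1) + 2 * (+1) + 1 * 3.5) / 6
print(drift)          # ~0.4167
print(drift * 1000)   # ~417 expected steps after 1000 throws
```

Because the floor at 0 blocks some of the downward moves, the simulated mean should come out slightly above this estimate.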
16268877934 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from flask import Flask, request, jsonify
#from flask_sslify import SSLify
app = Flask(__name__)
# sslify = SSLify(app,subdomains=True)
# EAALNyShJXH8BAOo5o88mnzJu43t8TqeBl42qGNOna3Gx1RBhPUGvBcFB6tY6RXYNH7Df68Aj6IK3KMtRw9bBiHkeD5h6X7kAAlAxgFb5fiHbp3Udhx2sY7FET8xfIbz5tiLsFynlhDGz0W30Fz4FQFcuL1KZClwZAZA3U0l4gZDZD
@app.route('/webhook', methods=['GET','POST'])
def webhook():
if request.method == 'GET':
VERIFY_TOKEN = "ga75HpoblY9qBtOKo2m8QXauNvBoKQzt" # Key for Verify Token
hubverify = request.args.get('hub.verify_token') # Get Verify Key tokem
hubchallenge = request.args.get('hub.challenge') # For return to Facebook must to 'CHALLENGE_ACCEPTED'
hubmode = request.args.get('hub.mode') # Mode must to 'subscribe'
if hubverify == VERIFY_TOKEN and hubmode == "subscribe": # Check data verify and mode
print('WEBHOOK_VERIFIED')
return hubchallenge , 200 # Return 'CHALLENGE_ACCEPTED'
else:
return 'You Wrong Something' , 200
    elif request.method == 'POST':
        data = request.get_json()
        if data['object'] == "page":
            print(data['entry'][0]['messaging'][0]['message'])
            #print(data['entry'])
            #print(data)
            return 'EVENT_RECEIVED' , 200
        return 'Forbidden' , 403
    else:
        return 'Forbidden' , 403
@app.route('/', methods=['GET','POST','PUT','DELETE'])
def index():
if request.method == 'GET' or request.method == 'POST' or request.method == 'PUT' or request.method == 'DELETE':
return 'Service Not Found', 404
if __name__ == '__main__':
app.run(debug=True)
| mosragcool/WebHook_CapReport | src/main.py | main.py | py | 1,913 | python | en | code | 0 | github-code | 13 |
40963110404 | import pandas as pd
import turtle
raw_df = pd.read_csv('50_states.csv')
timmy = turtle.Turtle()
timmy.penup()
timmy.hideturtle()
def draw(state_name):
row = raw_df[raw_df['state'] == state_name]
print(int(row['x']))
timmy.goto(int(row['x']), int(row['y']))
timmy.write(state_name, align='center', font=("Arial", 10, 'normal'))
image = 'blank_states_img.gif'
screen = turtle.Screen()
screen.screensize(canvwidth=730, canvheight=500)
screen.title('U.S. State Game')
screen.addshape(image)
turtle.shape(image)
state_list = raw_df['state'].to_list()
correct_guesses = []
while len(correct_guesses) < 50:
    answer_state = screen.textinput(title=f'{len(correct_guesses)}/50 States', prompt="What's another state's name?")
    if answer_state is None or answer_state.title() == "Exit":  # textinput returns None if the dialog is cancelled
        break
    answer_state = answer_state.title()
if answer_state not in correct_guesses and answer_state in state_list:
correct_guesses.append(answer_state)
draw(answer_state)
missing_states = {
"state":list(set(state_list) - set(correct_guesses))
}
pd.DataFrame(missing_states).to_csv('states_to_learn.csv') | jjbondoc/learning-python | hundred-days-of-code/day_025_pandas/sporcle/main.py | main.py | py | 1,083 | python | en | code | 0 | github-code | 13 |
38259447891 | import numpy as np
import pandas as pd
train = pd.read_csv('./practice/dacon/data/train/train.csv')
submission = pd.read_csv('./practice/dacon/data/sample_submission.csv')
day = 4
def split_to_seq(data):
tmp = []
for i in range(48):
tmp1 = pd.DataFrame()
for j in range(int(len(data)/48)):
tmp2 = data.iloc[j*48+i,:]
tmp2 = tmp2.to_numpy()
tmp2 = tmp2.reshape(1,tmp2.shape[0])
tmp2 = pd.DataFrame(tmp2)
tmp1 = pd.concat([tmp1,tmp2])
x = tmp1.to_numpy()
tmp.append(x)
return np.array(tmp)
def make_cos(dataframe): # computes the cosine for each time step of a column, based on when the sun rises and sets
dataframe /=dataframe
c = dataframe.dropna()
d = c.to_numpy()
def into_cosine(seq):
for i in range(len(seq)):
if i < len(seq)/2:
seq[i] = float((len(seq)-1)/2) - (i)
if i >= len(seq)/2:
seq[i] = seq[len(seq) - i - 1]
seq = seq/ np.max(seq) * np.pi/2
seq = np.cos(seq)
return seq
d = into_cosine(d)
dataframe = dataframe.replace(to_replace = np.NaN, value = 0)
dataframe.loc[dataframe['cos'] == 1] = d
return dataframe
def preprocess_data(data, is_train = True):
a = pd.DataFrame()
for i in range(int(len(data)/48)):
tmp = pd.DataFrame()
tmp['cos'] = data.loc[i*48:(i+1)*48-1,'TARGET']
tmp['cos'] = make_cos(tmp)
a = pd.concat([a,tmp])
data['cos'] = a
data.insert(1,'GHI',data['DNI']*data['cos']+data['DHI'])
data.insert(1,'Time',data['Hour']*2+data['Minute']/30.)
temp = data.copy()
temp = temp[['Time','TARGET','GHI','DHI','DNI','WS','RH','T']]
if is_train == True:
temp['TARGET1'] = temp['TARGET'].shift(-48).fillna(method = 'ffill')
temp['TARGET2'] = temp['TARGET'].shift(-96).fillna(method = 'ffill')
temp = temp.dropna()
return temp.iloc[:-96]
elif is_train == False:
return temp.iloc[-48*day:, :]
df_train = preprocess_data(train)
df_test = []
for i in range(81):
file_path = './practice/dacon/data/test/%d.csv'%i
temp = pd.read_csv(file_path)
temp = preprocess_data(temp,is_train=False)
temp = split_to_seq(temp)
df_test.append(temp)
df_test = np.array(df_test)
print(df_test.shape) # (81, 48, 4, 8)
# train = split_to_seq(df_train)
# test = split_to_seq(x_test)
# print(train.shape)(48, 1093, 10)
# print(test.shape) #(48, 324, 8) | dongjaeseo/study | practice/make_seq.py | make_seq.py | py | 2,526 | python | en | code | 2 | github-code | 13 |
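`split_to_seq` above regroups rows by their time-of-day slot (48 half-hour slots per day) with nested loops; for a NumPy array whose row count is a multiple of 48, the same regrouping is a reshape plus transpose. This is an equivalent sketch of the indexing, not a drop-in replacement for the DataFrame version:

```python
import numpy as np

def split_to_seq_np(data):
    # (days*48, features) -> (48, days, features): one block per half-hour slot.
    # Row j*48+i of the input becomes element [i, j] of the output,
    # matching data.iloc[j*48+i, :] in the loop version above.
    days = data.shape[0] // 48
    return data.reshape(days, 48, -1).transpose(1, 0, 2)

data = np.arange(96 * 3).reshape(96, 3)   # 2 days, 3 features
out = split_to_seq_np(data)
print(out.shape)                          # (48, 2, 3)
```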
72797006738 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""maria.py
This script validates our model against datasets from against which MARIA was
evaluated.
"""
import argparse
import copy
import json
import logging
import pprint
import random
import warnings
from pathlib import Path
from typing import Dict, List, Tuple
import hydra
import numpy as np
import pandas as pd
import pytorch_lightning as pl
import torch
from Bio.SeqIO.FastaIO import SimpleFastaParser
from mhciipresentation.constants import (
AA_TO_INT,
LEN_BOUNDS_HUMAN,
USE_CONTEXT,
USE_GPU,
USE_SUBSET,
)
from experiments.inference import make_inference, setup_model
from mhciipresentation.loaders import (
load_K562_dataset,
load_melanoma_dataset,
load_pseudosequences,
load_uniprot,
)
from mhciipresentation.metrics import (
build_scalar_metrics,
build_vector_metrics,
compute_performance_metrics,
save_performance_metrics,
)
from mhciipresentation.paths import DATA_DIR, EPITOPES_DIR, RAW_DATA
from mhciipresentation.utils import (
assign_pseudosequences,
encode_aa_sequences,
flatten_lists,
get_accelerator,
get_hydra_logging_directory,
make_dir,
make_predictions_with_transformer,
render_precision_recall_curve,
render_roc_curve,
sample_from_human_uniprot,
set_pandas_options,
)
from omegaconf import DictConfig
from pyprojroot import here
from pytorch_lightning import loggers as pl_loggers
from pytorch_lightning.callbacks.progress.rich_progress import RichProgressBar
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from tqdm import tqdm
set_pandas_options()
logger = logging.getLogger(__name__)
cfg: DictConfig
def load_DRB1_0101_DRB1_0404() -> List[str]:
"""Loads DRB1_0101 and DRB1_0404
Returns:
List[str]: the two pseudosequences
"""
pseudosequences = load_pseudosequences()
return pseudosequences.loc[
pseudosequences.Name.isin(["DRB1_0101", "DRB1_0404"])
].Pseudosequence.to_list()
def handle_K562_dataset(ligands: pd.DataFrame, fname: str) -> None:
"""Make predictions on K562 ligand dataset
Args:
ligands (pd.DataFrame): ligands to assign pseudosequences to.
title (str): title of the resulting ROC curve.
fname (str): filename of resulting plot.
"""
# ligands = ligands.loc[ligands.Sequence.str.len() <= 21]
n_len = ligands.Sequence.str.len().value_counts().sort_index().to_dict()
decoys = sample_from_human_uniprot(n_len)
ligands["label"] = 1
decoys = pd.DataFrame(flatten_lists(decoys), columns=["Sequence"])
decoys["label"] = 0
decoys = assign_pseudosequences(ligands, decoys)
data = pd.concat(
[
ligands[["Sequence", "label", "Pseudosequence"]],
decoys[["Sequence", "label", "Pseudosequence"]],
]
)
data["peptides_and_pseudosequence"] = data["Sequence"].astype(str) + data[
"Pseudosequence"
].astype(str)
device = torch.device("cuda" if USE_GPU else "cpu") # training device
if cfg.model.feature_set == "seq_mhc":
input_dim = 33 + 2 + 34
X = encode_aa_sequences(
data.peptides_and_pseudosequence,
AA_TO_INT,
)
elif cfg.model.feature_set == "seq_only":
input_dim = 33 + 2
X = encode_aa_sequences(
data.Sequence,
AA_TO_INT,
)
else:
raise ValueError(
f"Unknown feature set {cfg.model.feature_set}. "
"Please choose from seq_only or seq_and_mhc"
)
make_inference(
X,
data.label.values,
cfg,
input_dim,
get_hydra_logging_directory() / "K562" / fname,
)
def handle_melanoma_dataset(ligands: pd.DataFrame, fname: str) -> None:
"""Makes predictions on the melanoma dataset.
Args:
ligands (pd.DataFrame): ligands eluted from melanoma tissues
title (str): title of the resulting ROC curve.
fname (str): filename of resulting plot.
"""
ligands = ligands.loc[ligands.Sequence.str.len() <= 25]
n_len = ligands.Sequence.str.len().value_counts().sort_index().to_dict()
decoys = sample_from_human_uniprot(n_len)
ligands["label"] = 1
decoys = pd.DataFrame(flatten_lists(decoys), columns=["Sequence"])
decoys["label"] = 0
data = pd.concat(
[
ligands[["Sequence", "label"]],
decoys[["Sequence", "label"]],
]
)
device = torch.device("cuda" if USE_GPU else "cpu") # training device
if cfg.model.feature_set == "seq_only":
input_dim = 33 + 2
X = encode_aa_sequences(
data.Sequence,
AA_TO_INT,
)
make_inference(
X,
data.label.values,
cfg,
input_dim,
get_hydra_logging_directory() / "melanoma",
)
else:
logger.info("Not possible")
@hydra.main(
version_base="1.3", config_path=str(here() / "conf"), config_name="config"
)
def main(mariaconfig: DictConfig) -> None:
global cfg
cfg = mariaconfig
make_dir(Path("./data/evaluation/"))
logger.info("Handle K562 datasets")
DRB1_0101_ligands, DRB1_0404_ligands = load_K562_dataset()
# To exclude shorter peptides in the test set
DRB1_0101_ligands = DRB1_0101_ligands.loc[
DRB1_0101_ligands.Sequence.str.len() >= 15
]
# To exclude peptides shorter than the binding pocket
DRB1_0404_ligands = DRB1_0404_ligands.loc[
(DRB1_0404_ligands.Sequence.str.len() >= 9)
]
DRB1_0101, DRB1_0404 = load_DRB1_0101_DRB1_0404()
DRB1_0101_ligands["Pseudosequence"] = DRB1_0101
DRB1_0404_ligands["Pseudosequence"] = DRB1_0404
logger.info("DRB1_0101")
handle_K562_dataset(
DRB1_0101_ligands,
"DRB1_0101_ligands",
)
logger.info("DRB1_0404")
handle_K562_dataset(
DRB1_0404_ligands,
"DRB1_0404_ligands",
)
logger.info("DRB1_0404")
melanoma_dataset = load_melanoma_dataset()
logger.info("Handle melanoma datasets")
logger.info(melanoma_dataset.shape)
handle_melanoma_dataset(melanoma_dataset, "Melanoma")
if __name__ == "__main__":
main()
| Novartis/AEGIS | experiments/evaluation/maria.py | maria.py | py | 6,236 | python | en | code | 9 | github-code | 13 |
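`handle_K562_dataset` above samples length-matched decoys: for every ligand length it draws the same number of random sequences, so positives and negatives have identical length distributions. The matching logic alone can be sketched without the UniProt dependency (hypothetical random decoys, not real proteome sampling):

```python
import random

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def sample_decoys(n_len):
    # n_len maps sequence length -> how many decoys of that length to draw,
    # mirroring Sequence.str.len().value_counts() in the code above.
    decoys = []
    for length, count in n_len.items():
        for _ in range(count):
            decoys.append(''.join(random.choice(AMINO) for _ in range(length)))
    return decoys

d = sample_decoys({9: 2, 15: 3})
print(sorted(len(s) for s in d))  # [9, 9, 15, 15, 15]
```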
20900619570 | #CTI_110
#M5HW1_DISTANCE TRAVELED
#JOE FRYE
#10/22/2017
speed= int(input(" how fast was the vehicle going?"))
time= int(input("how fas has the vehicle gone for?"))
print("hour(s)","\t distance traveled")
for time in range(0,4):
distance=time*speed
print(time,'\t\t\t',distance)
| fryej7125/cti110 | M5HW1_FRYE.py | M5HW1_FRYE.py | py | 311 | python | en | code | 0 | github-code | 13 |
12888564860 | import discord
import random
from discord.ext import commands
class Random(commands.Cog):
def __init__(self, bot):
self.bot = bot
# example of this would be input: "Mamatay ka na." -> output: "MaMatAy KA nA."
@commands.command(name="memeify", aliases=["spongebob"], help="Spongebob loves you and me together.", description="What user inputs becomes a Spongebob version of the string.")
async def memeify(self, context, *, input_string: str):
output_string = []
for char in input_string:
cap = random.randint(0, 1)
if cap:
output_string.append(char.upper())
else:
output_string.append(char.lower())
final_output = ''.join(output_string)
await context.channel.send(f"{context.author.mention} {final_output}")
@commands.command(name="roll", aliases=["lucky", "dice"], help="Just rolling a dice...", description="User inputs a number x to get a random number from 1-x. If no input, a standard six-sided dice will be rolled.")
    async def roll(self, context, limit: str = "6"):  # default matches the help text's six-sided die
# strictly numeric only
if limit.isnumeric():
result = random.randint(1, int(limit))
await context.channel.send(f"{context.author.mention} Oi gago, nakuha mo **{result}**")
# as long as input is provided and it's not a number
else:
await context.channel.send(f"Hoy {context.author.mention}, hindi number nilagay mo tanga! Batukan kita dyan eh!")
def setup(bot):
bot.add_cog(Random(bot))
'''
Mainly got code from this gist: https://gist.github.com/EvieePy/d78c061a4798ae81be9825468fe146be
API Documentation can be found here: https://discordpy.readthedocs.io/en/latest/
''' | brainfrozeno00o/DeeDeeEs-Discord-Bot | cogs/random-stuff.py | random-stuff.py | py | 1,755 | python | en | code | 0 | github-code | 13 |
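The case randomization inside `memeify` does not depend on discord.py at all; the core transform can be exercised on its own (a standalone sketch of the same loop):

```python
import random

def spongebobify(text):
    # Randomly upper- or lower-case each character, as memeify does.
    out = []
    for ch in text:
        out.append(ch.upper() if random.randint(0, 1) else ch.lower())
    return ''.join(out)

print(spongebobify("hello there"))  # e.g. "hElLo ThErE" (varies per run)
```

Whatever the random choices, the result always lowercases back to the original input, which is an easy invariant to test.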
18824526076 | # -*- coding: utf-8 -*-
import os
import sys
import click
import logging
import numpy as np
import pandas as pd
import boto3
from dotenv import get_variable
env_file = '/home/ubuntu/science/quora_question_pairs/.env'
S3_BUCKET = get_variable(env_file, 'S3_BUCKET')
S3_DATA_PATH = get_variable(env_file, 'S3_DATA_PATH')
PROJECT_DIR = get_variable(env_file, 'PROJECT_DIR')
CHUNKSIZE = 4 * int(get_variable(env_file, 'CHUNKSIZE'))
TEST_ROWS = int(get_variable(env_file, 'TEST_ROWS'))
@click.command()
@click.argument('test', type=click.Path(), default='False')
def main(test):
if test == 'True': # Don't chunk
for f_name, ix_id in zip(['train', 'test'], ['id', 'test_id']):
print('Downloading (test)', f_name)
D = pd.read_csv(
's3://' + S3_BUCKET + '/' + S3_DATA_PATH + '/raw/' + f_name +
'.csv',
index_col=ix_id,
nrows=TEST_ROWS)
D.to_csv(
PROJECT_DIR + '/data/raw/' + f_name + '_test.csv',
mode='w',
index_label='id')
else:
for f_name, ix_id in zip(['train', 'test'], ['id', 'test_id']):
D_it = pd.read_csv(
's3://' + S3_BUCKET + '/' + S3_DATA_PATH + '/raw/' + f_name +
'.csv',
chunksize=CHUNKSIZE,
index_col=ix_id)
D0 = D_it.get_chunk()
D0.to_csv(
PROJECT_DIR + '/data/raw/' + f_name + '.csv',
mode='w',
index_label='id')
del D0
i = 0
for Di in D_it:
i += 1
print('Downloading ', f_name, ' chunk: ', i, end='\r')
sys.stdout.flush()
Di.to_csv(
PROJECT_DIR + '/data/raw/' + f_name + '.csv',
mode='a',
header=False,
index_label='id')
print()
return
if __name__ == '__main__':
main()
| RJTK/kaggle_quora | src/data/download_raw_data.py | download_raw_data.py | py | 2,010 | python | en | code | 0 | github-code | 13 |
73539347538 | from cvxopt import matrix, solvers
import numpy
from common import *
def f(features, b, w):
return sum([a * b for a, b in zip(features, w)]) + b
def train_svm(training_set, C):
solvers.options['show_progress'] = False
m = len(training_set) # number of training examples
dim = 30 # dimension of the feature vector
# explanation of this weird stuff is at <http://cvxopt.org/userguide/coneprog.html#quadratic-programming>
# structure of variables:
# 0: bias
# [1, 1 + dim): weights
# [1 + dim, 1 + dim + m): regularisation variables (xis)
pP = [[0.0 for _ in range(1 + dim + m)] for _ in range(1 + dim + m)]
# set ||w||^2 constraint
for i in range(1, 1 + dim):
pP[i][i] = 1.0
P = matrix(pP)
pq = [0.0 for _ in range(1 + dim + m)]
for i in range(1 + dim, 1 + dim + m):
pq[i] = C
q = matrix(pq)
pG = [[0.0 for _ in range(1 + dim + m)] for _ in range(2 * m)]
ph = [0.0 for i in range(2 * m)]
# set \xi_i >= 0 constraints
for i in range(0, m):
pG[i][1 + dim + i] = -1.0
# no need to set right hand side
# set constraints on training set points
# y_i (w^T x_i + b) >= 1 - \xi_i
for i in range(0, m):
ph[m + i] = -1.0 # rhs
point = training_set[i]
line = pG[i + m]
# bias coefficient
line[0] = -point.correct
        # dot product coefficients
        for j, x_j in enumerate(point.features):
            line[1 + j] = -point.correct * x_j
        # regularisation coefficient
        line[1 + dim + i] = -1.0
G = matrix(([[pG[i][j] for i in range(2 * m)] for j in range(1 + dim + m)])) # cvxopt uses column-major order
h = matrix(ph)
# no equality constraints
sol = solvers.qp(P, q, G, h)['x']
b = sol[0]
w = [sol[i] for i in range(1, 1 + dim)]
xi = [sol[i] for i in range(1 + dim, 1 + dim + m)]
return b, w, xi
def test_svm(test_set, b, theta):
res = []
for e in test_set:
r = f(e.features, b, theta)
        # classify by the sign of the decision function
        res.append(1 if r >= 0 else -1)
return res | anton-bannykh/ml-2013 | dmitry.gerasimov/lab-svm/svm.py | svm.py | py | 2,146 | python | en | code | 4 | github-code | 13 |
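The QP above encodes each training point's margin constraint as one row of `G`. A dependency-free sketch of that encoding, checked against a hand-picked feasible solution on a made-up 1-D dataset (cvxopt itself is not needed for the check):

```python
# Variables are stacked as [b, w_1..w_d, xi_1..xi_m]; each point (x_i, y_i)
# contributes the row of G encoding
#     -y_i * (w . x_i + b) - xi_i <= -1   <=>   y_i * (w . x_i + b) >= 1 - xi_i
# The sample data and the feasible solution below are made up for the check.

def dot(u, v):
    return sum(a * c for a, c in zip(u, v))

def constraint_row(x, y, dim, m, i):
    """G-matrix row for training point i over variables [b, w, xi]."""
    row = [0.0] * (1 + dim + m)
    row[0] = -y                      # bias coefficient
    for j, x_j in enumerate(x):
        row[1 + j] = -y * x_j        # weight coefficients
    row[1 + dim + i] = -1.0          # slack coefficient
    return row

# 1-D dataset: x=2 labelled +1, x=-2 labelled -1; b=0, w=[1], xi=[0, 0]
# separates it with margin >= 1, so every row must satisfy G.z <= -1.
solution = [0.0, 1.0, 0.0, 0.0]
rows = [constraint_row([2.0], 1, 1, 2, 0),
        constraint_row([-2.0], -1, 1, 2, 1)]
feasible = all(dot(row, solution) <= -1.0 for row in rows)
print(feasible)
```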
42414403065 | from selenium.webdriver.common.keys import Keys
from functional_tests.base import FunctionalTest
class LayoutTest(FunctionalTest):
def test_layout_styling(self):
# I'm opening home page and expect to see nice CENTERED task field
self.browser.get(self.live_server_url)
self.browser.set_window_size(1024, 768)
input_box = self.get_input_box_id()
# rounding errors ( ͡° ͜ʖ ͡°)
self.assertAlmostEqual(input_box.location["x"] + input_box.size['width'] / 2,
512,
delta=10)
        # Then I add a task and expect to see the same field position on the new page
input_box = self.get_input_box_id()
input_box.send_keys("my new task")
input_box.send_keys(Keys.ENTER)
self.wait_for_row_task_table('1: my new task')
input_box = self.get_input_box_id()
self.assertAlmostEqual(input_box.location["x"] + input_box.size['width'] / 2,
512,
delta=10) | festeh/mytodolist | functional_tests/test_layout.py | test_layout.py | py | 1,060 | python | en | code | 0 | github-code | 13 |
29044621463 | # -*- coding: utf-8 -*-
"""
Created on Mon Mar 29 14:52:38 2021
@author: claum
"""
# maps label to attribute name and types
label_attr_map = {
# ==============================================================================================================
"empty_A=": ["empty_A", float],
"empty_c=": ["empty_c", float],
"empty_m=": ["empty_m", float],
"Endurance=": ["Endurance", float], # [s]
"Range=": ["Range", float], # [ft]
"W_crew=": ["W_crew", float], # [lb]
"W_payload=": ["W_payload", float], # [lb]
"W_0=": ["W_0", float], # [lb]
# V_cruise = 557.849409449 # [ft * s^-1]
"V_cruise=": ["V_cruise", float], # [ft * s^-1]
"eta_prop=": ["eta_prop", float],
"PSFC=": ["PSFC", float], # [lb * (hr * bhp)^-1]
"TSFC_cruise=": ["TSFC_cruise", float], # [s^-1]
"TSFC_loiter=": ["TSFC_loiter", float], # [s^-1]
"D_fus=": ["D_fus", float], # [ft]
"l_fus=": ["l_fus", float], # [ft]
"slend_ratio=": ["slend_ratio", float],
"S_wet_fus=": ["S_wet_fus", float], # [ft^2]
"S_wet_wing=": ["S_wet_wing", float], # [ft^2]
"taper_htail=": ["taper_htail", float],
"taper_vtail=": ["taper_vtail", float],
"thick_root=": ["thick_root", float],
"thick_tip=": ["thick_tip", float],
"tau=": ["tau", float],
"S_wet_htail=": ["S_wet_htail", float], # [ft^2]
"S_wet_vtail=": ["S_wet_vtail", float], # [ft^2]
"S_wet_total=": ["S_wet_total", float], # [ft^2]
"S_ratio=": ["S_ratio", float], # S_wet/S_ref
"Aspect_ratio=": ["Aspect_ratio", float], # span^2/S_ref
"AR_wet=": ["AR_wet ", float], # AR/S_ratio
"W_guess=": ["W_guess", float], # [lb]
# ==============================================================================================================
}
# ============================================================================
class Params(object):
def __init__(self, input_file_name):
with open(input_file_name, 'r') as input_file:
for line in input_file:
row = line.split()
label = row[0]
data = row[1:] # rest of row is data list
attr = label_attr_map[label][0]
datatypes = label_attr_map[label][1:]
values = [(datatypes[i](data[i])) for i in range(len(data))]
self.__dict__[attr] = values if len(values) > 1 else values[0]
# ============================================================================
| CMirabella180890/Aircraft-Design | Raymer_sizing/params.py | params.py | py | 2,748 | python | en | code | 0 | github-code | 13 |
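A sketch of how the loader above consumes its input: each line is `<label>= <value ...>`, looked up in `label_attr_map` for an attribute name and per-column types. This variant reads from a list of lines instead of a file so it stays self-contained; the trimmed-down map and the values are illustrative, not from the original data file:

```python
# Trimmed-down label map: attribute name plus the type of each data column.
label_attr_map = {
    "W_0=": ["W_0", float],
    "Range=": ["Range", float],
}

class Params:
    def __init__(self, lines):
        for line in lines:
            row = line.split()
            label, data = row[0], row[1:]        # label, then data columns
            attr = label_attr_map[label][0]
            datatypes = label_attr_map[label][1:]
            values = [datatypes[i](data[i]) for i in range(len(data))]
            # a single value is unwrapped; multiple values stay a list
            self.__dict__[attr] = values if len(values) > 1 else values[0]

p = Params(["W_0= 12500.0", "Range= 980000.0"])
print(p.W_0, p.Range)
```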
1793899351 | """
MultipleLingersTelegramFilterByCommandTrigger telgram bot chat handlers to control multiple Lingers
"""
# Operation specific imports
import ast
import json
import threading
from collections import defaultdict
from telegram import ReplyKeyboardMarkup
from telegram.ext import CommandHandler, MessageHandler, ConversationHandler, Filters
import LingerConstants
import LingerTriggers.LingerBaseTrigger as lingerTriggers
CHOOSING_LINGER = 1
CHOOSING_COMMANDS = 2
class MultipleLingersTelegramFilterByCommandTrigger(lingerTriggers.LingerBaseTrigger):
"""Trigger that engaged when a new mail is recieved thread"""
DEFAULT_END_PHRASE = "Done"
DEFAULT_REFRESH_PHRASE = "Refresh"
DEFAULT_REFRESH_INTERVAL = 300.0
def __init__(self, configuration):
super(MultipleLingersTelegramFilterByCommandTrigger, self).__init__(configuration)
self.actions_by_labels = defaultdict(list)
# Fields
self.telegram_bot_adapter_uuid = configuration["telegram_bot_adapter"]
self.command_word = configuration["command_word"]
        # TODO: this should already be a list in the configuration system
self.authorized_users = ast.literal_eval(configuration["authorized_users"])
self.lock = threading.Lock()
self.running = False
self.lingers_names = []
self.chosen_linger = None
self.scheduled_job = None
self.lingers_commands_lists = {}
self.lingers_layouts = ReplyKeyboardMarkup(self.lingers_names, one_time_keyboard=True)
self.markup = self.lingers_layouts
# The conversation structure
self.conversation_handler = ConversationHandler(
entry_points=[CommandHandler(self.command_word, self.start_command)],
states={
CHOOSING_LINGER: [MessageHandler(Filters.text, self.trigger_choose_linger)],
CHOOSING_COMMANDS: [MessageHandler(Filters.text, self.trigger_get_command)],
},
fallbacks=[]
)
self.logger.debug("MultipleLingersTelegramFilterByCommandTrigger initialized")
def telegram_bot_adapter(self):
"""Getter for the bot adapter"""
return self.get_adapter_by_uuid(self.telegram_bot_adapter_uuid)
def start_command(self, bot, update): # Telegram Handler method, can't change signature pylint: disable=w0613
"""Starting the commands listen loop"""
# Check authorization
if str(update.message.from_user.id) not in self.authorized_users:
update.message.reply_text("Nice to meet you.")
return ConversationHandler.END
        # Else, user is authorized
update.message.reply_text(
"Hi, Choose Linger to command.",
reply_markup=self.markup)
return CHOOSING_LINGER
def trigger_choose_linger(self, bot, update): # Telegram Handler method, can't change signature pylint: disable=w0613
"""Checking trigger if should enagage an action"""
# Check authorization
if str(update.message.from_user.id) not in self.authorized_users:
return ConversationHandler.END
        # Else, user is authorized
self.chosen_linger = update.message.text
if self.chosen_linger == self.DEFAULT_END_PHRASE:
self.chosen_linger = None
update.message.reply_text("Good bye")
return ConversationHandler.END
if self.chosen_linger == self.DEFAULT_REFRESH_PHRASE:
self.chosen_linger = None
update.message.reply_text("Refreshing lingers...\nThis might take some time...")
# TODO: Set not yet got from lingers, to update about command loading
self.collect_commands_from_lingers()
update.message.reply_text("Sent requests for commands", reply_markup=self.markup)
return CHOOSING_LINGER
elif self.chosen_linger in self.lingers_names:
# TODO: Here should reply about the last time linger returned answer
#update.message.reply_text(
# "Loading commands from linger: {}".format(self.chosen_linger))
if self.lingers_commands_lists.get(self.chosen_linger, None):
self.markup = self.lingers_commands_lists[self.chosen_linger]["Markup"]
update.message.reply_text(
"Commands loaded from Linger: {}".format(self.chosen_linger),
reply_markup=self.markup)
return CHOOSING_COMMANDS
else:
# No commands for given linger
linger_name = self.chosen_linger
# Un-setting the current chosen linger
self.chosen_linger = None
update.message.reply_text("No commands were loaded from Linger: {}".format(linger_name), reply_markup=self.markup)
return CHOOSING_LINGER
def trigger_get_command(self, bot, update): # Telegram Handler method, can't change signature pylint: disable=w0613
"""Checking trigger if should engage an action"""
# Check authorization
if str(update.message.from_user.id) in self.authorized_users:
command = update.message.text
if command == self.DEFAULT_END_PHRASE:
self.chosen_linger = None
self.markup = self.lingers_layouts
update.message.reply_text("Choose Linger to command.", reply_markup=self.markup)
return CHOOSING_LINGER
elif command in self.lingers_commands_lists[self.chosen_linger]["Commands"]:
update.message.reply_text(
"Executing command: {}".format(command))
self.trigger_engaged(command)
update.message.reply_text(
"Command: {} finished executing, ready for commands.".format(command),
reply_markup=self.markup)
else:
update.message.reply_text(
"Unknown command: {}".format(self.chosen_linger),
reply_markup=self.markup)
return CHOOSING_COMMANDS
else:
return ConversationHandler.END
def trigger_engaged(self, command=None):
trigger_data = {}
result = None
if command:
trigger_data[LingerConstants.COMMAND_NAME] = command
for action in self.actions_by_labels[self.chosen_linger]:
result = self.trigger_specific_action_callback(self.uuid, action.uuid, trigger_data)
return result
def collect_commands_from_lingers(self):
"""Requesting commands from all the lingers"""
for linger_name in self.lingers_names:
self.logger.debug("Requesting commands for linger %s", linger_name)
trigger_data = {LingerConstants.TRIGGER_ACTION: LingerConstants.REQUEST_COMMAND_ACTION,
LingerConstants.TRIGGER_CALLBACK: self.command_retrieve_callback}
for action in self.actions_by_labels[linger_name]:
self.trigger_specific_action_callback(self.uuid, action.uuid, trigger_data)
def start(self):
# Building the list of lingers to command
keyboard_layout = []
self.lingers_names.sort()
for linger_name in self.lingers_names:
keyboard_layout += [[linger_name]]
keyboard_layout += [[self.DEFAULT_REFRESH_PHRASE]]
keyboard_layout += [[self.DEFAULT_END_PHRASE]]
self.logger.debug(keyboard_layout)
self.lingers_layouts = ReplyKeyboardMarkup(keyboard_layout, one_time_keyboard=True)
self.markup = self.lingers_layouts
self.telegram_bot_adapter().add_handler(self.conversation_handler)
self.subscribe_to_actions()
def command_retrieve_callback(self, linger_name, payload, **kwargs):
"""
Loads command retrieved from another linger, as a callback
"""
self.logger.debug("Got payload:%s for linger:%s", payload, linger_name)
received_commands_list = None
if payload:
try:
loaded_payload = json.loads(payload.decode("utf-8"))
received_commands_list = loaded_payload.get(LingerConstants.LABELS_LIST, None)
except ValueError:
self.logger.error("Not a JSON", exc_info=True)
received_commands_list = None
except TypeError:
self.logger.error("Got bytes instead of string", exc_info=True)
received_commands_list = None
self.logger.debug("got commands %s", received_commands_list)
if received_commands_list:
keyboard_layout = []
commands = []
received_commands_list.sort()
for command in received_commands_list:
keyboard_layout += [[command]]
commands.append(command)
keyboard_layout += [[self.DEFAULT_END_PHRASE]]
with self.lock:
self.lingers_commands_lists[linger_name] = {"Markup": ReplyKeyboardMarkup(keyboard_layout, one_time_keyboard=True),
"Commands": commands}
def subscribe_to_actions(self):
for linger_name in self.lingers_names:
trigger_data = {LingerConstants.TRIGGER_ACTION: LingerConstants.SUBSCRIBE_ACTION,
LingerConstants.TRIGGER_CALLBACK: self.command_retrieve_callback,
                            LingerConstants.LINGER_NAME: linger_name}
for action in self.actions_by_labels[linger_name]:
self.trigger_specific_action_callback(self.uuid, action.uuid, trigger_data)
def unsubscribe_from_actions(self):
for linger_name in self.lingers_names:
trigger_data = {LingerConstants.TRIGGER_ACTION: LingerConstants.UNSUBSCRIBE_ACTION,
LingerConstants.TRIGGER_CALLBACK: self.command_retrieve_callback}
for action in self.actions_by_labels[linger_name]:
self.trigger_specific_action_callback(self.uuid, action.uuid, trigger_data)
def stop(self):
self.telegram_bot_adapter().remove_handler(self.conversation_handler)
self.unsubscribe_from_actions()
def register_action(self, action):
super(MultipleLingersTelegramFilterByCommandTrigger, self).register_action(action)
self.actions_by_labels[action.label] += [action]
self.lingers_names += [action.label]
class TelegramFilterByCommandTriggerFactory(lingerTriggers.LingerBaseTriggerFactory):
"""TelegramFilterByCommandTriggerFactory generates MultipleLingersTelegramFilterByCommandTrigger instances"""
def __init__(self):
super(TelegramFilterByCommandTriggerFactory, self).__init__()
self.item = MultipleLingersTelegramFilterByCommandTrigger
@staticmethod
def get_instance_name():
"""Returns instance name"""
return "MultipleLingersTelegramFilterByCommandTrigger"
def get_fields(self):
fields, optional_fields = super(TelegramFilterByCommandTriggerFactory, self).get_fields()
fields += [('telegram_bot_adapter', 'uuid'),
('command_word', 'string'),
('authorized_users', ('array', 'string'))]
return fields, optional_fields
| GreenBlast/Linger | LingerTriggers/MultipleLingersTelegramFilterByCommandTrigger.py | MultipleLingersTelegramFilterByCommandTrigger.py | py | 11,307 | python | en | code | 0 | github-code | 13 |
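The handler above is, at its core, a two-state conversation: pick a Linger, then issue commands to it, with "Done" stepping back out. A dependency-free sketch of that state machine (names and replies shortened; the real class delegates the dispatching to python-telegram-bot's `ConversationHandler`):

```python
CHOOSING_LINGER, CHOOSING_COMMANDS, END = 1, 2, 0

class Conversation:
    """Two-state command conversation: choose a target, then issue commands."""

    def __init__(self, lingers):
        self.lingers = lingers
        self.state = CHOOSING_LINGER
        self.chosen = None

    def handle(self, text):
        if self.state == CHOOSING_LINGER:
            if text == "Done":
                self.state = END
                return "Good bye"
            if text in self.lingers:
                self.chosen = text
                self.state = CHOOSING_COMMANDS
                return "Commands loaded from Linger: " + text
            return "Unknown Linger: " + text
        if self.state == CHOOSING_COMMANDS:
            if text == "Done":
                self.chosen = None
                self.state = CHOOSING_LINGER
                return "Choose Linger to command."
            return "Executing command: " + text

conv = Conversation(["kitchen", "garage"])
replies = [conv.handle("kitchen"), conv.handle("lights_on"), conv.handle("Done")]
print(replies)
```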
33573621035 | # Importing the required packages
from flask import Flask, request, render_template
import telegram
import os
from nltk.chat.eliza import eliza_chatbot
# Bot credentials
from botcontroller.credentials import BOT_TOKEN, BOT_USERNAME, URL
# Initialize flask app
app = Flask(__name__)
# Initialize telegram bot
bot = telegram.Bot(token = BOT_TOKEN)
@app.route("/")
def index():
return render_template("index.html")
@app.route(f"/{BOT_TOKEN}", methods = ["POST"])
def respond():
"""
Desc : This function defines the logic that controls how the telegram bot responds when a message is sent
"""
# When a user sends a message in Telegram, we can receive the message as a JSON object and convert it to a Telegram object using the telegram module
new_message = telegram.Update.de_json(request.get_json(force = True), bot)
chat_id = new_message.message.chat_id
message_id = new_message.message.message_id
# Encoding text for unicode compatibility
text = new_message.message.text.encode("utf-8").decode()
print(f"[RECEIVED TEXT] : {text}")
# For a welcome message
if text == "/start":
welcome_msg = f"Hi {new_message.message.from_user.first_name}, I'm Toyosi - The favourite mental health bot. Let's talk about your issues, Be free with me"
bot.sendMessage(chat_id = chat_id, reply_to_message_id = message_id, text = welcome_msg)
else:
bot.sendMessage(chat_id = chat_id, reply_to_message_id = message_id, text = eliza_chatbot.respond(text))
return ""
@app.route("/set_webhook")
def setWebhook():
"""
    Desc : Registers the webhook so Telegram forwards new updates to this server
"""
hook = bot.setWebhook(os.path.join(URL, BOT_TOKEN))
if hook:
return "Webhook successfully set."
return "Webhook configuration failed."
if __name__ == "__main__":
app.run(debug = True, threaded = True) | rexsimiloluwah/telebot | bot.py | bot.py | py | 1,904 | python | en | code | 0 | github-code | 13 |
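The bot delegates every reply to NLTK's Eliza (`eliza_chatbot.respond`). The core idea behind that responder is a list of regex patterns tried in order, with the first match's groups spliced into a response template. A minimal stdlib-only sketch of the same idea (the rules here are illustrative, far smaller than NLTK's rule set):

```python
import re

# Ordered (pattern, template) rules; the first match wins, "{0}" takes group 1.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r".*"),                "Please tell me more."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*match.groups())

print(respond("I feel lonely"))
```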
36662114938 | #######################################################
### Get one row data to calculate.
### Calculate block(R1~4/Gr1~4/Gb1~4/B1~4) std and avg.
import numpy as np
import time
import csv
import datetime
import os
StartTime = time.time()
#######################################################
### Change the parameters to match the settings
nWidth = 8000
nHeight = 6000
nFileCount = 100
sFilePath = '/home/dino/RawShared/20211111_fulldark/'
sFileTempTime = '20211111160205'
sFileTempFormat = 'P10'
g_sFilePathFolder = [
'0x0010', '0x0020', '0x0030', '0x0040', '0x0050', '0x0060', '0x0070', '0x0080', '0x0090', '0x00A0', '0x00B0', '0x00C0', \
]
bExposureRaw = False # True/False
nFileExposureIM = 1
nFileExposureID = 30
nFileExposureCount = 10
nFileExposureInterval = 1
nFileExposureIntervalNum = 1
nROI_X = 1444#3998
nROI_Y = 337#2998
nROI_W = 4 #multiple of 4
nROI_H = 4 #multiple of 4
sSavePath = '/home/dino/RawShared/Output/'
### Change the parameters to match the settings
#######################################################
if not bExposureRaw:
# Normal
sFileTempName = 'FrameID0_W{0:d}_H{1:d}_{2:s}_{3:s}_{4:04d}.raw'
sSaveStdFile = 'STD_{}.csv'
sSaveAvgFile = 'AVG_{}.csv'
sSaveTempFile = '{}_Single_{}.csv'
sSaveOrganizeTempFile = '{}_{}.csv'
nFileExposureIntervalNum = 1
else:
# Exposure
sFileTempName = 'FrameID0_W{0:d}_H{1:d}_{2:s}_{3:s}_{4:04d}_{5:d}_{6:d}.raw'
sSaveStdFile = 'STD_{}_{}_{}_{}.csv'
sSaveAvgFile = 'AVG_{}_{}_{}_{}.csv'
sSaveTempFile = '{}_{}_{}_{}.csv'
sSaveOrganizeTempFile = '{}_{}_{}.csv'
#PixelRow_array = np.zeros((nFileCount, nROI_W))
lCsvStdRow = []
lCsvAvgRow = []
NowDate = datetime.datetime.now()
#TimeInfo = '{:04d}{:02d}{:02d}{:02d}{:02d}{:02d}'.format(NowDate.year, NowDate.month, NowDate.day, NowDate.hour, NowDate.minute, NowDate.second)
TimeInfo = sFileTempTime
#print(TimeInfo)
def Save_CSV(FileName, RowInfo):
with open(FileName, 'a+') as f:
# create the csv writer
csv_writer = csv.writer(f)
# write a row to the csv file
#print(RowInfo)
csv_writer.writerow(RowInfo)
def Cal_Information(y, nCount, ChannelArray, sColor):
for i in range(0, nCount+1):
if i < nCount:
for j in range(0, 4):
lCsvStdRow.append('{}_STD_{}{}'.format(i, sColor, j+1))
#print(lCsvStdRow)
Channel_STD = np.std(ChannelArray[i,j])
lCsvStdRow.append(Channel_STD.tolist())
#print(lCsvStdRow)
lCsvAvgRow.append('{}_AVG_{}{}'.format(i, sColor, j+1))
Channel_AVG = np.average(ChannelArray[i,j])
lCsvAvgRow.append(Channel_AVG.tolist())
#print(lCsvStdRow)
#print(lCsvAvgRow)
elif i == nCount: # Total
for j in range(0, 4):
ChannelAllPixel = ChannelArray[:,j,:].flatten()
lCsvStdRow.append('Total_STD_{}{}'.format(sColor, j+1))
Channel_STD = np.std(ChannelAllPixel)
lCsvStdRow.append(Channel_STD.tolist())
lCsvAvgRow.append('Total_AVG_{}{}'.format(sColor, j+1))
Channel_AVG = np.average(ChannelAllPixel)
lCsvAvgRow.append(Channel_AVG.tolist())
#print(lCsvStdRow)
#print(lCsvAvgRow)
def Cal_Save_AllInformation(y, nCount, ChannelArray, sColor, sSaveFileName, nExpIndex, sSaveOrgFile):
#print(ChannelArray)
#print('')
lRawInfo = []
lRawInfo.clear()
lRawInfo = ['', 'Ch1_AVG', 'Ch1_STD', 'Ch2_AVG', 'Ch2_STD', 'Ch3_AVG', 'Ch3_STD', 'Ch4_AVG', 'Ch4_STD']
Save_CSV(sSaveFileName, lRawInfo)
for i in range(0, nCount+1):
if i < nCount:
lRawInfo.clear()
lRawInfo.append('Frame{}'.format(i))
for j in range(0, 4):
#print(ChannelArray[i,j])
Channel_AVG = np.average(ChannelArray[i,j])
lRawInfo.append(Channel_AVG.tolist())
Channel_STD = np.std(ChannelArray[i,j])
lRawInfo.append(Channel_STD.tolist())
Save_CSV(sSaveFileName, lRawInfo)
elif i == nCount: # Total
lRawMin = []
lRawMin.clear()
lRawMin.append('Min:')
lRawMax = []
lRawMax.clear()
lRawMax.append('Max:')
lRawOrglInfo = []
lRawOrglInfo.clear()
lRawOrglInfo.append('Exp{}'.format(nExpIndex))
lRawInfo.clear()
lRawInfo.append('FrameTotal')
for j in range(0, 4):
ChannelAllPixel = ChannelArray[:,j,:].flatten()
#print(ChannelAllPixel)
#print('Min: ', np.min(ChannelAllPixel))
#print('Max: ', np.max(ChannelAllPixel))
lRawMin.append(np.min(ChannelAllPixel))
lRawMin.append('')
lRawMax.append(np.max(ChannelAllPixel))
lRawMax.append('')
Channel_AVG = np.average(ChannelAllPixel)
lRawOrglInfo.append(Channel_AVG.tolist())
lRawInfo.append(Channel_AVG.tolist())
Channel_STD = np.std(ChannelAllPixel)
lRawOrglInfo.append(Channel_STD.tolist())
lRawInfo.append(Channel_STD.tolist())
Save_CSV(sSaveFileName, lRawInfo)
Save_CSV(sSaveFileName, lRawMin)
Save_CSV(sSaveFileName, lRawMax)
Save_CSV(sSaveOrgFile, lRawOrglInfo)
def ParsingPixel():
nCount = nFileCount
if bExposureRaw:
nCount = nFileExposureCount
#print('nROI_X: ', nROI_X)
#print('nROI_Y: ', nROI_Y)
#Get the numbers of every channel
nR_Gb_Len = nROI_W//4 * nROI_H//4
nGr_B_Len = nROI_W//4 * nROI_H//4
#print(nR_Gb_Len)
#print(nGr_B_Len)
#Get the leftest pixel offset
nWOffset = nROI_X % 4
    #Set the organized result save file
if not bExposureRaw:
sSaveOrgRFile = sSavePath+sSaveOrganizeTempFile.format(TimeInfo, 'R')
sSaveOrgGrFile = sSavePath+sSaveOrganizeTempFile.format(TimeInfo, 'Gr')
sSaveOrgGbFile = sSavePath+sSaveOrganizeTempFile.format(TimeInfo, 'Gb')
sSaveOrgBFile = sSavePath+sSaveOrganizeTempFile.format(TimeInfo, 'B')
else:
sSaveOrgRFile = sSavePath+sSaveOrganizeTempFile.format(TimeInfo, nFileExposureID, 'R')
sSaveOrgGrFile = sSavePath+sSaveOrganizeTempFile.format(TimeInfo, nFileExposureID, 'Gr')
sSaveOrgGbFile = sSavePath+sSaveOrganizeTempFile.format(TimeInfo, nFileExposureID, 'Gb')
sSaveOrgBFile = sSavePath+sSaveOrganizeTempFile.format(TimeInfo, nFileExposureID, 'B')
if os.path.exists(sSaveOrgRFile):
os.remove(sSaveOrgRFile)
if os.path.exists(sSaveOrgGrFile):
os.remove(sSaveOrgGrFile)
if os.path.exists(sSaveOrgGbFile):
os.remove(sSaveOrgGbFile)
if os.path.exists(sSaveOrgBFile):
os.remove(sSaveOrgBFile)
lRawInfo = []
lRawInfo.clear()
lRawInfo = ['', 'Ch1_AVG', 'Ch1_STD', 'Ch2_AVG', 'Ch2_STD', 'Ch3_AVG', 'Ch3_STD', 'Ch4_AVG', 'Ch4_STD']
Save_CSV(sSaveOrgRFile, lRawInfo)
Save_CSV(sSaveOrgGrFile, lRawInfo)
Save_CSV(sSaveOrgGbFile, lRawInfo)
Save_CSV(sSaveOrgBFile, lRawInfo)
#Every exposure interval
for h in range(0, nFileExposureIntervalNum):
#4 Quad channel (1~4) of 4Channel (R/Gr/Gb/B)
ChannelR_array = np.zeros((nCount, 4, nR_Gb_Len))
ChannelGr_array = np.zeros((nCount, 4, nGr_B_Len))
ChannelGb_array = np.zeros((nCount, 4, nR_Gb_Len))
ChannelB_array = np.zeros((nCount, 4, nGr_B_Len))
#The exposure time index
if not bExposureRaw:
nExposureIntervalIndex = 0
else:
nExposureIntervalIndex = h*nFileExposureInterval+nFileExposureIM
'''
#Set the every channel saving file (R/Gr/Gb/B) (Std&Avg)
sSaveRStdFile = sSavePath+sSaveStdFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'R')
sSaveRAvgFile = sSavePath+sSaveAvgFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'R')
sSaveGrStdFile = sSavePath+sSaveStdFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'Gr')
sSaveGrAvgFile = sSavePath+sSaveAvgFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'Gr')
sSaveGbStdFile = sSavePath+sSaveStdFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'Gb')
sSaveGbAvgFile = sSavePath+sSaveAvgFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'Gb')
sSaveBStdFile = sSavePath+sSaveStdFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'B')
sSaveBAvgFile = sSavePath+sSaveAvgFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'B')
if os.path.exists(sSaveRStdFile):
os.remove(sSaveRStdFile)
if os.path.exists(sSaveRAvgFile):
os.remove(sSaveRAvgFile)
if os.path.exists(sSaveGrStdFile):
os.remove(sSaveGrStdFile)
if os.path.exists(sSaveGrAvgFile):
os.remove(sSaveGrAvgFile)
if os.path.exists(sSaveGbStdFile):
os.remove(sSaveGbStdFile)
if os.path.exists(sSaveGbAvgFile):
os.remove(sSaveGbAvgFile)
if os.path.exists(sSaveBStdFile):
os.remove(sSaveBStdFile)
if os.path.exists(sSaveBAvgFile):
os.remove(sSaveBAvgFile)
'''
#Set the every channel saving file (R/Gr/Gb/B) (Total)
if not bExposureRaw:
sSaveRFile = sSavePath+sSaveTempFile.format(TimeInfo, 'R')
sSaveGrFile = sSavePath+sSaveTempFile.format(TimeInfo, 'Gr')
sSaveGbFile = sSavePath+sSaveTempFile.format(TimeInfo, 'Gb')
sSaveBFile = sSavePath+sSaveTempFile.format(TimeInfo, 'B')
else:
sSaveRFile = sSavePath+sSaveTempFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'R')
sSaveGrFile = sSavePath+sSaveTempFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'Gr')
sSaveGbFile = sSavePath+sSaveTempFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'Gb')
sSaveBFile = sSavePath+sSaveTempFile.format(TimeInfo, nExposureIntervalIndex, nFileExposureID, 'B')
if os.path.exists(sSaveRFile):
os.remove(sSaveRFile)
if os.path.exists(sSaveGrFile):
os.remove(sSaveGrFile)
if os.path.exists(sSaveGbFile):
os.remove(sSaveGbFile)
if os.path.exists(sSaveBFile):
os.remove(sSaveBFile)
#Every file of one exposure time index
for k in range(0, nCount):
nR0Index, nR1Index, nR2Index, nR3Index = 0, 0, 0, 0
nGr0Index, nGr1Index, nGr2Index, nGr3Index = 0, 0, 0, 0
nGb0Index, nGb1Index, nGb2Index, nGb3Index = 0, 0, 0, 0
nB0Index, nB1Index, nB2Index, nB3Index = 0, 0, 0, 0
#Set the source file
if not bExposureRaw:
sFileTemp = sFilePath+sFileTempName.format(nWidth, nHeight, sFileTempTime, sFileTempFormat, k)
else:
nContinueFileIndex = k+((h+nFileExposureIM-1)*nFileExposureCount)
sFileTemp = sFilePath+sFileTempName.format(nWidth, nHeight, sFileTempTime, sFileTempFormat, nContinueFileIndex, nExposureIntervalIndex, nFileExposureID)
#if k == 0:
# print('File: ' + sFileTemp)
#input_file = open(sFileTemp, 'rb')
#input_array = np.fromfile(input_file, dtype=np.uint16, count=-1, sep="", offset=0)
#input_array = input_array.reshape((nHeight, nWidth))
#print('Frame{} Index:{} Max: {}'.format(k, np.argmax(input_array), np.max(input_array)))
#print('Frame{} Index:{} Min: {}'.format(k, np.argmin(input_array), np.min(input_array)))
#arr_condition = np.where(input_array > 70)
#print('Frame{} Size:{} >70: {}'.format(k, np.size(arr_condition), arr_condition))
#input_file.close()
for i in range(nROI_Y, nROI_Y+nROI_H):
bNeedCal = False
nPixelOffset = nWidth * i * 2 + nROI_X * 2
#print('nPixelOffset: ', nPixelOffset)
input_file = open(sFileTemp, 'rb')
#Get all pixel of one range row
input_array = np.fromfile(input_file, dtype=np.uint16, count=nROI_W, sep="", offset=nPixelOffset)
input_file.close()
#print('input_array: ', input_array)
if i%4==0: #R1~2+Gr1~2
for l in range(0, nROI_W):
if (l+nWOffset)%4==0: #R1
#print('h:{}, i:{}, k:{}, Index:{}, l:{}'.format(h, i, k, nR0Index, l))
ChannelR_array[k,0,nR0Index] = input_array[l]
nR0Index += 1
elif (l+nWOffset)%4==1: #R2
ChannelR_array[k,1,nR1Index] = input_array[l]
nR1Index += 1
elif (l+nWOffset)%4==2: #Gr1
ChannelGr_array[k,0,nGr0Index] = input_array[l]
nGr0Index += 1
elif (l+nWOffset)%4==3: #Gr2
ChannelGr_array[k,1,nGr1Index] = input_array[l]
nGr1Index += 1
elif i%4==1: #R3~4+Gr3~4
for l in range(0, nROI_W):
if (l+nWOffset)%4==0: #R3
ChannelR_array[k,2,nR2Index] = input_array[l]
nR2Index += 1
elif (l+nWOffset)%4==1: #R4
ChannelR_array[k,3,nR3Index] = input_array[l]
nR3Index += 1
elif (l+nWOffset)%4==2: #Gr3
ChannelGr_array[k,2,nGr2Index] = input_array[l]
nGr2Index += 1
elif (l+nWOffset)%4==3: #Gr4
ChannelGr_array[k,3,nGr3Index] = input_array[l]
nGr3Index += 1
elif i%4==2: #Gb1~2+B1~2
for l in range(0, nROI_W):
if (l+nWOffset)%4==0: #Gb1
ChannelGb_array[k,0,nGb0Index] = input_array[l]
nGb0Index += 1
elif (l+nWOffset)%4==1: #Gb2
ChannelGb_array[k,1,nGb1Index] = input_array[l]
nGb1Index += 1
elif (l+nWOffset)%4==2: #B1
ChannelB_array[k,0,nB0Index] = input_array[l]
nB0Index += 1
elif (l+nWOffset)%4==3: #B2
ChannelB_array[k,1,nB1Index] = input_array[l]
nB1Index += 1
elif i%4==3: #Gb3~4+B3~4
for l in range(0, nROI_W):
if (l+nWOffset)%4==0: #Gb3
ChannelGb_array[k,2,nGb2Index] = input_array[l]
nGb2Index += 1
elif (l+nWOffset)%4==1: #Gb4
ChannelGb_array[k,3,nGb3Index] = input_array[l]
nGb3Index += 1
elif (l+nWOffset)%4==2: #B3
ChannelB_array[k,2,nB2Index] = input_array[l]
nB2Index += 1
elif (l+nWOffset)%4==3: #B4
ChannelB_array[k,3,nB3Index] = input_array[l]
nB3Index += 1
#Save the R information
#print(h)
#lCsvStdRow.clear()
#lCsvAvgRow.clear()
#Save_CSV(sSaveRStdFile, lCsvStdRow)
#Save_CSV(sSaveRAvgFile, lCsvAvgRow)
Cal_Save_AllInformation(i, nCount, ChannelR_array, 'R', sSaveRFile, nExposureIntervalIndex, sSaveOrgRFile)
#Save the G information
#lCsvStdRow.clear()
#lCsvAvgRow.clear()
#Save_CSV(sSaveGrStdFile, lCsvStdRow)
#Save_CSV(sSaveGrAvgFile, lCsvAvgRow)
Cal_Save_AllInformation(i, nCount, ChannelGr_array, 'Gr', sSaveGrFile, nExposureIntervalIndex, sSaveOrgGrFile)
#Save the Gb information
#lCsvStdRow.clear()
#lCsvAvgRow.clear()
#Save_CSV(sSaveGbStdFile, lCsvStdRow)
#Save_CSV(sSaveGbAvgFile, lCsvAvgRow)
Cal_Save_AllInformation(i, nCount, ChannelGb_array, 'Gb', sSaveGbFile, nExposureIntervalIndex, sSaveOrgGbFile)
#Save the B information
#lCsvStdRow.clear()
#lCsvAvgRow.clear()
#Save_CSV(sSaveBStdFile, lCsvStdRow)
#Save_CSV(sSaveBAvgFile, lCsvAvgRow)
Cal_Save_AllInformation(i, nCount, ChannelB_array, 'B', sSaveBFile, nExposureIntervalIndex, sSaveOrgBFile)
nEachIntervalTime = time.time()
print("Durning Each Interval Time(sec): {}".format(nEachIntervalTime - StartTime))
def CallMain(nWidth, nHeight, nX, nY, nROI_W, nROI_H, nFileCounts, FileTimeStamp, InputFolder, ArrayFolder, OutputFolder):
listVarOfGlobals = globals()
listVarOfGlobals['nWidth'] = nWidth
listVarOfGlobals['nHeight'] = nHeight
listVarOfGlobals['nFileCount'] = nFileCounts
listVarOfGlobals['sFilePath'] = InputFolder
listVarOfGlobals['sFileTempTime'] = '20211111160205'
listVarOfGlobals['sFileTempFormat'] = 'P10'
listVarOfGlobals['bExposureRaw'] = False # True/False
listVarOfGlobals['nFileExposureIM'] = 1
listVarOfGlobals['nFileExposureID'] = 30
listVarOfGlobals['nFileExposureCount'] = 10
listVarOfGlobals['nFileExposureInterval'] = 1
listVarOfGlobals['nFileExposureIntervalNum'] = 1
listVarOfGlobals['nROI_X'] = nX
listVarOfGlobals['nROI_Y'] = nY
listVarOfGlobals['nROI_W'] = nROI_W
listVarOfGlobals['nROI_H'] = nROI_H
listVarOfGlobals['sSavePath'] = OutputFolder
#print(listVarOfGlobals['g_sFilePathFolder'])
listVarOfGlobals['g_sFilePathFolder'] = ArrayFolder
#print(listVarOfGlobals['g_sFilePathFolder'])
#ParsingPixel()
pass
if __name__ == "__main__":
print("Main")
ParsingPixel()
EndTime = time.time()
print("Durning Program Time(sec): ", EndTime - StartTime) | dinoliang/SampleCode | Python/raw/channelrowparse_maxmin.py | channelrowparse_maxmin.py | py | 18,774 | python | en | code | 0 | github-code | 13 |
5075432955 | from src.model.user import User
from src.model.base import db
admin = User(title='admin', release_date='dssd')
guest = User(title='guest', release_date='ds')
db.session.add(admin)
db.session.add(guest)
db.session.commit()
print(User.query.all())
| balramsinghindia/python-flask-sqlalchemy | queries.py | queries.py | py | 249 | python | en | code | 0 | github-code | 13 |
11510832829 |
# Recursive-descent parser with Pratt-style expression parsing. Based on:
# http://www.craftinginterpreters.com/parsing-expressions.html
# http://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-expression-parsing-made-easy/
import json
import re
from collections import defaultdict
from contextlib import contextmanager
from .errors import PyxellError as err
from .lexer import tokenize, Token, ASSIGNMENT_OPERATORS
class Fixity:
PREFIX = True
NON_PREFIX = False
EXPR_OPERATOR_PRECEDENCE = defaultdict(lambda: 0)
for precedence, (fixity, ops) in enumerate(reversed([
(Fixity.NON_PREFIX, ['.', '?.']),
(Fixity.NON_PREFIX, ['[', '?[']),
(Fixity.NON_PREFIX, ['(', '@(']),
(Fixity.NON_PREFIX, ['!']),
(Fixity.NON_PREFIX, ['^', '^^']),
(Fixity.PREFIX, ['+', '-']),
(Fixity.NON_PREFIX, ['/']),
(Fixity.NON_PREFIX, ['//', '%', '*', '&']),
(Fixity.NON_PREFIX, ['+', '-']),
(Fixity.NON_PREFIX, ['%%']),
(Fixity.NON_PREFIX, ['??']),
(Fixity.NON_PREFIX, ['...', '..']),
(Fixity.NON_PREFIX, ['by']),
(Fixity.PREFIX, ['...']),
(Fixity.NON_PREFIX, ['==', '!=', '<', '<=', '>', '>=']),
(Fixity.PREFIX, ['not']),
(Fixity.NON_PREFIX, ['and']),
(Fixity.NON_PREFIX, ['or']),
(Fixity.NON_PREFIX, ['?']),
(Fixity.PREFIX, ['lambda']),
]), 1):
for op in ops:
EXPR_OPERATOR_PRECEDENCE[fixity, op] = precedence
TYPE_OPERATOR_PRECEDENCE = defaultdict(lambda: 0)
for precedence, op in enumerate(reversed(['?', '??', '?...', '*', '...', '->']), 1):
TYPE_OPERATOR_PRECEDENCE[op] = precedence
class PyxellParser:
def __init__(self, lines, filepath, start_position=(1, 1)):
tokens = tokenize(lines, start_position)
self.tokens = tokens[:-1]
self.eof_token = tokens[-1]
self.index = 0
self.filepath = filepath
def raise_syntax_error(self, token):
raise err(self.filepath, token.position, err.InvalidSyntax())
def eof(self):
return self.index >= len(self.tokens)
def peek(self, offset=0):
if self.index + offset < len(self.tokens):
return self.tokens[self.index + offset]
return self.eof_token
def pop(self):
token = self.peek()
self.index += 1
return token
def backtrack(self, count=1):
self.index -= count
def check(self, *words):
""" Returns whether the current tokens match all the given words. """
for i, word in enumerate(words):
if self.peek(i).text != word:
return False
return True
def match(self, *words):
""" Consumes the current tokens if they match all the given words. Returns whether there was a match. """
if not self.check(*words):
return False
self.index += len(words)
return True
def expect(self, *words):
""" Consumes the current tokens if they match all the given words. Returns True if there was a match, raises a syntax error otherwise. """
for i, word in enumerate(words):
token = self.pop()
if token.text != word:
self.raise_syntax_error(token)
return True
def node(self, name, token):
return {
'node': name,
'position': token.position,
}
def expr_node(self, name, token):
return {
**self.node(name, token),
'op': self.op_node(token),
}
def op_node(self, token):
return {
'position': token.position,
'text': token.text,
}
def id_node(self, token, placeholder_allowed=False):
if token.type == Token.ID:
return {
**self.expr_node('AtomId', token),
'name': token.text,
}
if token.text == '_' and placeholder_allowed:
return self.expr_node('AtomPlaceholder', token)
@contextmanager
def try_parse(self, backtrack=False):
index = self.index
try:
yield
if backtrack:
self.index = index
except err:
self.index = index
def parse_program(self):
token = self.peek()
stmts = []
while not self.eof():
stmts.append(self.parse_stmt())
self.expect(';')
return {
**self.node('Block', token),
'stmts': stmts,
}
def parse_block(self):
token = self.peek()
stmts = []
self.expect('{')
while not self.check('}'):
stmts.append(self.parse_stmt())
self.expect(';')
self.expect('}')
return {
**self.node('Block', token),
'stmts': stmts,
}
def parse_stmt(self):
token = self.pop()
if token.text == 'use':
return {
**self.node('StmtUse', token),
'id': self.parse_id(),
'detail': self.match('hiding') and ['hiding', *self.parse_id_list()] or ['all'],
}
if token.text == 'skip':
return {
**self.node('StmtSkip', token),
}
if token.text == 'print':
return {
**self.node('StmtPrint', token),
'exprs': [] if self.check(';') else self.parse_expr_list(),
}
if token.text == 'return':
return {
**self.node('StmtReturn', token),
'expr': None if self.check(';') else self.parse_tuple_expr(),
}
if token.text == 'yield':
return {
**self.node('StmtYield', token),
'expr': None if self.check(';') else self.parse_tuple_expr(),
}
if token.text == 'if':
exprs = [self.parse_tuple_expr()]
self.expect('do')
blocks = [self.parse_block()]
while self.match(';', 'elif'):
exprs.append(self.parse_tuple_expr())
self.expect('do')
blocks.append(self.parse_block())
if self.match(';', 'else'):
self.expect('do')
blocks.append(self.parse_block())
return {
**self.node('StmtIf', token),
'exprs': exprs,
'blocks': blocks,
}
if token.text in {'while', 'until'}:
return {
**self.node(f'Stmt{token.text.capitalize()}', token),
'expr': self.parse_tuple_expr(),
'label': self.match('label') and self.parse_id() or None,
'block': self.expect('do') and self.parse_block(),
'else': self.match(';', 'else') and self.expect('do') and self.parse_block(),
}
if token.text == 'for':
return {
**self.node('StmtFor', token),
'vars': self.parse_expr_list(),
'iterables': self.expect('in') and self.parse_expr_list(),
'label': self.match('label') and self.parse_id() or None,
'block': self.expect('do') and self.parse_block(),
'else': self.match(';', 'else') and self.expect('do') and self.parse_block(),
}
if token.text in {'break', 'continue'}:
return {
**self.node('StmtLoopControl', token),
'stmt': token.text,
'label': None if self.check(';') else self.parse_id(),
}
if token.text == 'func':
return {
**self.node('StmtFunc', token),
**self.parse_func_header(),
'block': self.match('def') and self.parse_block() or self.expect('extern') and None,
}
if token.text == 'class':
return {
**self.node('StmtClass', token),
'id': self.parse_id(),
'base': self.match(':') and self.parse_type() or None,
'members': self.expect('def') and self.parse_class_member_list(),
}
self.backtrack()
if token.type == Token.ID and self.peek(1).text == ':':
return {
**self.node('StmtDecl', token),
**self.parse_decl(),
}
exprs = [self.parse_tuple_expr()]
op_token = self.pop()
for op in ASSIGNMENT_OPERATORS:
if op_token.text == op:
return {
**self.expr_node('StmtAssgExpr', Token(op[:-1], op_token.type, op_token.position)),
'position': token.position,
'exprs': [exprs[0], self.parse_tuple_expr()],
}
self.backtrack()
while self.match('='):
exprs.append(self.parse_tuple_expr())
return {
**self.node('StmtAssg', token),
'exprs': exprs,
}
def parse_class_member_list(self):
members = []
self.expect('{')
while not self.check('}'):
members.append(self.parse_class_member())
self.expect(';')
self.expect('}')
return members
def parse_class_member(self):
token = self.pop()
if token.text == 'func':
return {
**self.node('ClassMethod', token),
**self.parse_func_header(),
'block': self.match('def') and self.parse_block() or self.expect('abstract') and None,
}
if token.text in {'constructor', 'destructor'}:
return {
**self.node(f'Class{token.text.capitalize()}', token),
'id': {
**self.expr_node('AtomId', token),
'name': f'<{token.text}>',
},
'args': [],
'ret': {
**self.node('TypeId', token),
'name': 'Void',
},
'block': self.expect('def') and self.parse_block(),
}
self.backtrack() # backtrack if no keyword has been matched
return {
**self.node(f'ClassField', token),
**self.parse_decl(),
}
def parse_func_header(self):
return {
'id': self.parse_id(),
'typevars': self.match('<') and (self.parse_id_list(), self.expect('>'))[0] or [],
'args': self.parse_func_arg_list(),
'ret': self.match(':') and self.parse_type() or None,
}
def parse_func_arg_list(self):
args = []
self.expect('(')
while not self.check(')'):
args.append(self.parse_func_arg())
if not self.match(','):
break
self.expect(')')
return args
def parse_func_arg(self):
return {
**self.node('FuncArg', self.peek()),
'variadic': self.match('...'),
**self.parse_decl(placeholder_allowed=True, type_required=False, tuple_expr_allowed=False),
}
def parse_decl(self, placeholder_allowed=False, type_required=True, tuple_expr_allowed=True):
return {
'id': self.parse_id(placeholder_allowed=placeholder_allowed),
'type': (self.expect(':') if type_required else self.match(':')) and self.parse_type() or None,
'expr': self.match('=') and (self.parse_tuple_expr() if tuple_expr_allowed else self.parse_expr()) or None,
}
def parse_id_list(self, **kwargs):
ids = [self.parse_id(**kwargs)]
while self.match(','):
ids.append(self.parse_id(**kwargs))
return ids
def parse_id(self, **kwargs):
token = self.pop()
id = self.id_node(token, **kwargs)
if not id:
self.raise_syntax_error(token)
return id
def parse_interpolation_expr(self):
expr = self.parse_tuple_expr()
if not self.eof():
self.raise_syntax_error(self.peek())
return expr
def parse_tuple_expr(self):
token = self.peek()
exprs = self.parse_expr_list()
if len(exprs) == 1:
return exprs[0]
return {
**self.expr_node('ExprTuple', token),
'exprs': exprs,
}
def parse_expr_list(self):
exprs = [self.parse_expr()]
while self.match(','):
exprs.append(self.parse_expr())
return exprs
def parse_expr(self, precedence=0):
# When calling `parse_expr()` recursively, the `precedence` argument should be equal to the precedence of the
# recently parsed operator, if it's left-associative, or that precedence minus one, if it's right-associative.
token = self.pop()
expr = self.parse_expr_prefix_op(token)
while EXPR_OPERATOR_PRECEDENCE[Fixity.NON_PREFIX, self.peek().text] > precedence:
expr = self.parse_expr_non_prefix_op(expr, self.pop())
expr['position'] = token.position
return expr
def parse_expr_prefix_op(self, token):
precedence = EXPR_OPERATOR_PRECEDENCE[Fixity.PREFIX, token.text]
id = self.id_node(token, placeholder_allowed=True)
if id:
return id
if token.type == Token.NUMBER:
text = token.text.replace('_', '').lower()
if any(text.startswith(prefix) for prefix in ['0b', '0o', '0x']):
bases = {'b': 2, 'o': 8, 'x': 16}
value = int(text, bases[text[1]])
elif any(c in text for c in 'ef'):
value = float(text.replace('f', ''))
elif any(c in text for c in '.r'):
value = text.replace('r', '')
else:
value = int(text)
return {
**self.expr_node('AtomInt' if isinstance(value, int) else 'AtomFloat' if isinstance(value, float) else 'AtomRat', token),
'value': value,
}
if token.text in {'false', 'true'}:
return {
**self.expr_node('AtomBool', token),
'value': token.text == 'true',
}
if token.type in {Token.CHAR, Token.STRING}:
value = token.text[1:-1]
i = 0
while i < len(value):
if value[i] == '\\':
i += 1
if value[i] not in {'\\', '\'', '"', 'n', 'r', 't', '0'}:
raise err(self.filepath, (token.position[0], token.position[1] + i), err.InvalidEscapeSequence(value[i-1:i+1]))
i += 1
return {
**self.expr_node(f'Atom{token.type.capitalize()}', token),
'value': value,
}
if token.text in {'null', 'super', 'this'}:
return {
**self.expr_node(f'Atom{token.text.capitalize()}', token),
}
if token.text == '(': # grouping
return {
**self.parse_tuple_expr(),
'_parenthesized': self.expect(')'),
}
if token.text in {'[', '{'}: # containers
closing_bracket = chr(ord(token.text) + 2)
kind = 'array'
comprehensions = self.parse_comprehensions()
if comprehensions:
exprs = [self.parse_expr()]
if token.text == '{':
kind = 'set'
if self.match(':'):
kind = 'dict'
exprs.append(self.parse_expr())
return {
**self.expr_node('ExprComprehension', token),
'kind': kind,
'comprehensions': comprehensions,
'exprs': (exprs, self.expect(closing_bracket))[0],
}
items = []
if token.text == '{' and self.match(':'):
kind = 'dict'
else:
if token.text == '{':
kind = 'set'
with self.try_parse(backtrack=True):
if self.match('...:') or self.parse_expr() and self.match(':'):
kind = 'dict'
while not self.check(closing_bracket):
if kind in {'array', 'set'}:
items.append(self.parse_expr())
elif kind == 'dict':
items.append(self.parse_dict_item())
if not self.match(','):
break
return {
**self.expr_node('ExprCollection', token),
'kind': kind,
'items': (items, self.expect(closing_bracket))[0],
}
if token.text in {'+', '-', 'not'}: # prefix operators
return {
**self.expr_node('ExprUnaryOp', token),
'expr': self.parse_expr(precedence),
}
if token.text == '...': # spread operator
return {
**self.expr_node('ExprSpread', token),
'expr': self.parse_expr(precedence),
}
if token.text == 'lambda':
return {
**self.expr_node('ExprLambda', token),
'ids': [] if self.check(':') else self.parse_id_list(placeholder_allowed=True),
'expr': self.expect(':') and self.parse_expr(precedence),
}
self.raise_syntax_error(token)
def parse_expr_non_prefix_op(self, left, token):
precedence = EXPR_OPERATOR_PRECEDENCE[Fixity.NON_PREFIX, token.text]
if token.text in {'.', '?.'}: # attribute access
return {
**self.expr_node('ExprAttr', token),
'expr': left,
'safe': token.text.startswith('?'),
'id': self.parse_id(),
}
if token.text in {'[', '?['}: # element access or slicing
safe = token.text.startswith('?')
slice = None
with self.try_parse():
exprs = [None] * 3
with self.try_parse():
exprs[0] = self.parse_expr()
self.expect(':')
with self.try_parse():
exprs[1] = self.parse_expr()
with self.try_parse():
self.expect(':')
with self.try_parse():
exprs[2] = self.parse_expr()
slice = exprs
if slice:
return {
**self.expr_node('ExprSlice', token),
'safe': safe,
'expr': left,
'slice': (slice, self.expect(']'))[0],
}
return {
**self.expr_node('ExprIndex', token),
'safe': safe,
'exprs': [left, (self.parse_tuple_expr(), self.expect(']'))[0]],
}
if token.text in {'(', '@('}: # function call
args = []
while not self.check(')'):
args.append(self.parse_call_arg())
if not self.match(','):
break
self.expect(')')
return {
**self.expr_node('ExprCall', token),
'expr': left,
'partial': token.text.startswith('@'),
'args': args,
}
if token.text in {'!'}: # postfix operators
return {
**self.expr_node('ExprUnaryOp', token),
'expr': left,
}
if token.text in {'^', '^^', '??', 'and', 'or'}: # right-associative infix operators
return {
**self.expr_node('ExprBinaryOp', token),
'exprs': [left, self.parse_expr(precedence - 1)],
}
if token.text in {'/', '//', '%', '*', '&', '+', '-', '%%'}: # left-associative infix operators
return {
**self.expr_node('ExprBinaryOp', token),
'exprs': [left, self.parse_expr(precedence)],
}
if token.text in {'...', '..'}: # range operators
exprs = [left]
with self.try_parse(): # infinite range if no second expression
exprs.append(self.parse_expr(precedence))
inclusive = token.text == '..'
if len(exprs) == 1 and inclusive:
self.raise_syntax_error(token)
return {
**self.expr_node('ExprRange', token),
'exprs': exprs,
'inclusive': inclusive,
}
if token.text == 'by':
return {
**self.expr_node('ExprBy', token),
'exprs': [left, self.parse_expr(precedence)],
}
if token.text in {'==', '!=', '<', '<=', '>', '>='}: # comparison operators
right = self.parse_expr(precedence - 1)
chained = right['node'] == 'ExprCmp' and not right.get('_parenthesized')
return {
**self.expr_node('ExprCmp', token),
'exprs': [left, *right['exprs']] if chained else [left, right],
'ops': [self.op_node(token), *right['ops']] if chained else [self.op_node(token)],
}
if token.text == '?': # ternary conditional operator
return {
**self.expr_node('ExprCond', token),
'exprs': [left, self.parse_expr(), self.expect(':') and self.parse_expr(precedence - 1)],
}
# No syntax error here since `EXPR_OPERATOR_PRECEDENCE` is 0 for unknown operators anyway.
def parse_dict_item(self):
token = self.peek()
if self.match('...:'):
return {
**self.expr_node('DictSpread', token),
'expr': self.parse_expr(),
}
return {
**self.node('DictPair', token),
'exprs': [self.parse_expr(), self.expect(':') and self.parse_expr()],
}
def parse_comprehensions(self):
comprehensions = []
while self.check('for') or self.check('if'):
comprehensions.append(self.parse_comprehension())
if comprehensions:
self.expect('yield')
return comprehensions
def parse_comprehension(self):
token = self.pop()
if token.text == 'for':
return {
**self.node('ComprehensionIteration', token),
'vars': self.parse_expr_list(),
'iterables': self.expect('in') and self.parse_expr_list(),
}
if token.text == 'if':
return {
**self.node('ComprehensionPredicate', token),
'expr': self.parse_tuple_expr(),
}
def parse_call_arg(self):
token = self.peek()
return {
**self.node('CallArg', token),
'id': (self.parse_id(), self.expect('='))[0] if token.type == Token.ID and self.peek(1).text == '=' else None,
'expr': self.parse_expr(),
}
def parse_type(self, precedence=0):
# When calling `parse_type()` recursively, the `precedence` argument should be equal to the precedence of the
# recently parsed operator, if it's left-associative, or that precedence minus one, if it's right-associative.
type = self.parse_type_prefix_op(self.pop())
while TYPE_OPERATOR_PRECEDENCE[self.peek().text] > precedence:
type = self.parse_type_non_prefix_op(type, self.pop())
return type
def parse_type_prefix_op(self, token):
if token.type == Token.ID:
return {
**self.node('TypeId', token),
'name': token.text,
}
if token.text == '(': # grouping
if self.match(')') and self.check('->'): # function without arguments
return None
return {
**self.parse_type(),
'_parenthesized': self.expect(')'),
}
if token.text in {'[', '{'}: # containers
closing_bracket = chr(ord(token.text) + 2)
subtypes = [self.parse_type()]
kind = 'array'
if token.text == '{':
if self.match(':'):
kind = 'dict'
subtypes.append(self.parse_type())
else:
kind = 'set'
return {
**self.node('TypeCollection', token),
'kind': kind,
'subtypes': (subtypes, self.expect(closing_bracket))[0],
}
self.raise_syntax_error(token)
def parse_type_non_prefix_op(self, left, token):
precedence = TYPE_OPERATOR_PRECEDENCE[token.text]
if token.text[0] == '?': # nullable (note that there are three possible tokens: '?', '??', and '?...')
left = {
**self.node('TypeNullable', token),
'subtype': left,
}
return self.parse_type_non_prefix_op(left, Token(token.text[1:], token.type, token.position)) if len(token.text) > 1 else left
if token.text == '*': # tuple
right = self.parse_type(precedence - 1)
chained = right['node'] == 'TypeTuple' and not right.get('_parenthesized')
return {
**self.node('TypeTuple', token),
'types': [left, *right['types']] if chained else [left, right],
}
if token.text == '...': # generator
return {
**self.node('TypeGenerator', token),
'subtype': left,
}
if token.text == '->': # function
right = self.parse_type(precedence - 1)
chained = right['node'] == 'TypeFunc' and not right.get('_parenthesized')
left = [] if left is None else [left]
return {
**self.node('TypeFunc', token),
'types': [*left, *right['types']] if chained else [*left, right],
}
# No syntax error here since `TYPE_OPERATOR_PRECEDENCE` is 0 for unknown operators anyway.
| adamsol/Pyxell | src/parser.py | parser.py | py | 26,079 | python | en | code | 51 | github-code | 13 |
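The `parse_expr` loop in the Pyxell parser above is the core of Pratt-style expression parsing: parse a prefix expression, then keep consuming infix operators whose precedence exceeds the current binding power, recursing with `precedence - 1` for right-associative operators. A minimal self-contained sketch of that loop over arithmetic tokens (the operator table and tuple-based AST here are invented for the sketch, not Pyxell's own):

```python
# Minimal Pratt-style expression parser, illustrating the precedence
# loop used by PyxellParser.parse_expr in simplified form.
PRECEDENCE = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
RIGHT_ASSOC = {'^'}

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def pop():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def parse_expr(precedence=0):
        # Prefix position: a parenthesized group or a number literal.
        token = pop()
        if token == '(':
            left = parse_expr(0)
            assert pop() == ')', 'expected closing parenthesis'
        else:
            left = float(token)
        # Consume infix operators binding tighter than `precedence`.
        while peek() in PRECEDENCE and PRECEDENCE[peek()] > precedence:
            op = pop()
            # Right-associative operators recurse with precedence - 1,
            # mirroring the convention documented in parse_expr.
            next_prec = PRECEDENCE[op] - 1 if op in RIGHT_ASSOC else PRECEDENCE[op]
            left = (op, left, parse_expr(next_prec))
        return left

    return parse_expr()
```

For example, `parse(['1', '+', '2', '*', '3'])` groups the multiplication first, while `parse(['2', '^', '3', '^', '2'])` nests to the right.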
3725416190 |
"""Utility functions for NumPy-based reinforcement learning algorithms."""
import numpy as np
from garage._dtypes import TrajectoryBatch
from garage.misc import tensor_utils
from garage.sampler.utils import rollout
def samples_to_tensors(paths):
"""Return processed sample data based on the collected paths.
Args:
paths (list[dict]): A list of collected paths.
Returns:
dict: Processed sample data, with keys
* undiscounted_returns (list[float])
* success_history (list[float])
* complete (list[bool])
"""
success_history = [
path['success_count'] / path['running_length'] for path in paths
]
undiscounted_returns = [path['undiscounted_return'] for path in paths]
# check if the last path is complete
complete = [path['dones'][-1] for path in paths]
samples_data = dict(undiscounted_returns=undiscounted_returns,
success_history=success_history,
complete=complete)
return samples_data
def obtain_evaluation_samples(policy, env, max_path_length=1000,
num_trajs=100):
"""Sample the policy for num_trajs trajectories and return average values.
Args:
policy (garage.Policy): Policy to use as the actor when
gathering samples.
        env (garage.envs.GarageEnv): The environment used to obtain
trajectories.
max_path_length (int): Maximum path length. The episode will
terminate when length of trajectory reaches max_path_length.
num_trajs (int): Number of trajectories.
Returns:
TrajectoryBatch: Evaluation trajectories, representing the best
current performance of the algorithm.
"""
paths = []
# Use a finite length rollout for evaluation.
for _ in range(num_trajs):
path = rollout(env,
policy,
max_path_length=max_path_length,
deterministic=True)
paths.append(path)
return TrajectoryBatch.from_trajectory_list(env.spec, paths)
def paths_to_tensors(paths, max_path_length, baseline_predictions, discount):
"""Return processed sample data based on the collected paths.
Args:
paths (list[dict]): A list of collected paths.
max_path_length (int): Maximum length of a single rollout.
        baseline_predictions (numpy.ndarray): Predicted value of GAE
(Generalized Advantage Estimation) Baseline.
discount (float): Environment reward discount.
Returns:
        dict: Processed sample data, with keys
* observations (numpy.ndarray): Padded array of the observations of
the environment
* actions (numpy.ndarray): Padded array of the actions fed to the
the environment
* rewards (numpy.ndarray): Padded array of the acquired rewards
* agent_infos (dict): a dictionary of {stacked tensors or
dictionary of stacked tensors}
* env_infos (dict): a dictionary of {stacked tensors or
dictionary of stacked tensors}
        * valids (numpy.ndarray): Padded array of the validity information
"""
baselines = []
returns = []
for idx, path in enumerate(paths):
# baselines
path['baselines'] = baseline_predictions[idx]
baselines.append(path['baselines'])
# returns
path['returns'] = tensor_utils.discount_cumsum(path['rewards'],
discount)
returns.append(path['returns'])
obs = [path['observations'] for path in paths]
obs = tensor_utils.pad_tensor_n(obs, max_path_length)
actions = [path['actions'] for path in paths]
actions = tensor_utils.pad_tensor_n(actions, max_path_length)
rewards = [path['rewards'] for path in paths]
rewards = tensor_utils.pad_tensor_n(rewards, max_path_length)
agent_infos = [path['agent_infos'] for path in paths]
agent_infos = tensor_utils.stack_tensor_dict_list([
tensor_utils.pad_tensor_dict(p, max_path_length) for p in agent_infos
])
env_infos = [path['env_infos'] for path in paths]
env_infos = tensor_utils.stack_tensor_dict_list(
[tensor_utils.pad_tensor_dict(p, max_path_length) for p in env_infos])
valids = [np.ones_like(path['returns']) for path in paths]
valids = tensor_utils.pad_tensor_n(valids, max_path_length)
samples_data = dict(observations=obs,
actions=actions,
rewards=rewards,
agent_infos=agent_infos,
env_infos=env_infos,
valids=valids)
return samples_data
| jaekyeom/IBOL | garaged/src/garage/np/_functions.py | _functions.py | py | 4,795 | python | en | code | 28 | github-code | 13 |
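`paths_to_tensors` above relies on `tensor_utils.discount_cumsum` to turn each path's per-step rewards into discounted returns. A minimal NumPy sketch of that computation (an illustrative reimplementation for clarity, not garage's own code):

```python
import numpy as np

def discount_cumsum(rewards, discount):
    """Discounted cumulative sum: returns[t] = sum_k discount**k * rewards[t+k].

    Illustrative stand-in for the helper called in paths_to_tensors.
    """
    returns = np.zeros(len(rewards), dtype=float)
    running = 0.0
    # Walk backwards so each step reuses the already-discounted tail.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + discount * running
        returns[t] = running
    return returns
```

For instance, rewards `[1, 1, 1]` with `discount=0.5` yield returns `[1.75, 1.5, 1.0]`.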
17041795824 |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import json
from alipay.aop.api.constant.ParamConstants import *
class AlipayInsSceneInshealthserviceprodItemoperationrecordQueryModel(object):
def __init__(self):
self._ant_ser_prod_no = None
self._init_time_end = None
self._init_time_start = None
self._merchant_item_code = None
self._page_no = None
self._page_size = None
self._refresh_type_list = None
self._status_list = None
@property
def ant_ser_prod_no(self):
return self._ant_ser_prod_no
@ant_ser_prod_no.setter
def ant_ser_prod_no(self, value):
self._ant_ser_prod_no = value
@property
def init_time_end(self):
return self._init_time_end
@init_time_end.setter
def init_time_end(self, value):
self._init_time_end = value
@property
def init_time_start(self):
return self._init_time_start
@init_time_start.setter
def init_time_start(self, value):
self._init_time_start = value
@property
def merchant_item_code(self):
return self._merchant_item_code
@merchant_item_code.setter
def merchant_item_code(self, value):
self._merchant_item_code = value
@property
def page_no(self):
return self._page_no
@page_no.setter
def page_no(self, value):
self._page_no = value
@property
def page_size(self):
return self._page_size
@page_size.setter
def page_size(self, value):
self._page_size = value
@property
def refresh_type_list(self):
return self._refresh_type_list
@refresh_type_list.setter
def refresh_type_list(self, value):
if isinstance(value, list):
self._refresh_type_list = list()
for i in value:
self._refresh_type_list.append(i)
@property
def status_list(self):
return self._status_list
@status_list.setter
def status_list(self, value):
if isinstance(value, list):
self._status_list = list()
for i in value:
self._status_list.append(i)
def to_alipay_dict(self):
params = dict()
if self.ant_ser_prod_no:
if hasattr(self.ant_ser_prod_no, 'to_alipay_dict'):
params['ant_ser_prod_no'] = self.ant_ser_prod_no.to_alipay_dict()
else:
params['ant_ser_prod_no'] = self.ant_ser_prod_no
if self.init_time_end:
if hasattr(self.init_time_end, 'to_alipay_dict'):
params['init_time_end'] = self.init_time_end.to_alipay_dict()
else:
params['init_time_end'] = self.init_time_end
if self.init_time_start:
if hasattr(self.init_time_start, 'to_alipay_dict'):
params['init_time_start'] = self.init_time_start.to_alipay_dict()
else:
params['init_time_start'] = self.init_time_start
if self.merchant_item_code:
if hasattr(self.merchant_item_code, 'to_alipay_dict'):
params['merchant_item_code'] = self.merchant_item_code.to_alipay_dict()
else:
params['merchant_item_code'] = self.merchant_item_code
if self.page_no:
if hasattr(self.page_no, 'to_alipay_dict'):
params['page_no'] = self.page_no.to_alipay_dict()
else:
params['page_no'] = self.page_no
if self.page_size:
if hasattr(self.page_size, 'to_alipay_dict'):
params['page_size'] = self.page_size.to_alipay_dict()
else:
params['page_size'] = self.page_size
if self.refresh_type_list:
if isinstance(self.refresh_type_list, list):
for i in range(0, len(self.refresh_type_list)):
element = self.refresh_type_list[i]
if hasattr(element, 'to_alipay_dict'):
self.refresh_type_list[i] = element.to_alipay_dict()
if hasattr(self.refresh_type_list, 'to_alipay_dict'):
params['refresh_type_list'] = self.refresh_type_list.to_alipay_dict()
else:
params['refresh_type_list'] = self.refresh_type_list
if self.status_list:
if isinstance(self.status_list, list):
for i in range(0, len(self.status_list)):
element = self.status_list[i]
if hasattr(element, 'to_alipay_dict'):
self.status_list[i] = element.to_alipay_dict()
if hasattr(self.status_list, 'to_alipay_dict'):
params['status_list'] = self.status_list.to_alipay_dict()
else:
params['status_list'] = self.status_list
return params
@staticmethod
def from_alipay_dict(d):
if not d:
return None
o = AlipayInsSceneInshealthserviceprodItemoperationrecordQueryModel()
if 'ant_ser_prod_no' in d:
o.ant_ser_prod_no = d['ant_ser_prod_no']
if 'init_time_end' in d:
o.init_time_end = d['init_time_end']
if 'init_time_start' in d:
o.init_time_start = d['init_time_start']
if 'merchant_item_code' in d:
o.merchant_item_code = d['merchant_item_code']
if 'page_no' in d:
o.page_no = d['page_no']
if 'page_size' in d:
o.page_size = d['page_size']
if 'refresh_type_list' in d:
o.refresh_type_list = d['refresh_type_list']
if 'status_list' in d:
o.status_list = d['status_list']
return o
| alipay/alipay-sdk-python-all | alipay/aop/api/domain/AlipayInsSceneInshealthserviceprodItemoperationrecordQueryModel.py | AlipayInsSceneInshealthserviceprodItemoperationrecordQueryModel.py | py | 5,685 | python | en | code | 241 | github-code | 13 |
3560239242 |
from django.db import models
from django.contrib.auth.models import User
import jdatetime
from datetime import timedelta,date
from django.core.validators import MaxValueValidator, MinValueValidator
from djmoney.models.fields import MoneyField
from django.db.models import Avg,Max,Min
from argparse import Namespace
#from django_jalali.db import models as jmodels
class Restaurant(models.Model):
name = models.CharField(max_length=200)
address = models.TextField(null=True, blank=True)
photo = models.ImageField(upload_to="app1/photos/", null=True, blank=True)
menu = models.TextField(null=True, blank=True)
tags = models.CharField(max_length=200)
pub_date = models.DateTimeField(auto_now_add=True)
def __str__(self):
return self.name
class Meta:
verbose_name_plural = 'رستوران'
class Hotel(models.Model):
name = models.CharField(max_length=200)
address = models.TextField(null=True, blank=True)
photo = models.ImageField(upload_to="app1/photos/", null=True, blank=True)
services = models.TextField(null=True, blank=True)
tags = models.CharField(max_length=200)
pub_date = models.DateTimeField(auto_now_add=True)
def __str__(self):
return self.name
class Meta:
verbose_name_plural = 'هتل'
class Pictures(models.Model):
title = models.CharField(max_length=200)
address = models.TextField(null=True, blank=True)
photo = models.ImageField(upload_to="app1/photos/", null=True, blank=True)
pub_date = models.DateTimeField(auto_now_add=True)
owner = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True)
def __str__(self):
return self.title
class Meta:
verbose_name_plural = 'عکس'
class Tour(models.Model):
title = models.CharField(max_length=200)
DistAddress = models.TextField(null=True, blank=True)
photo = models.ImageField(upload_to="app1/photos/", null=True, blank=True)
Comment = models.TextField(null=True, blank=True)
tags = models.CharField(max_length=200)
DepartureDate = models.DateTimeField()
ReturnDate = models.DateTimeField()
pub_date = models.DateTimeField(auto_now_add=True)
registeredUsers = models.ManyToManyField(User)
    galaryPictures = models.ManyToManyField(Pictures, blank=True)  # null has no effect on ManyToManyField
def __str__(self):
return self.title
class Meta:
verbose_name_plural = 'تور'
class villaCategory(models.Model):
title = models.CharField(max_length=200)
tags = models.CharField(max_length=1000)
comment = models.TextField()
def __str__(self):
return self.title
class Meta:
verbose_name_plural = 'دسته بندی ویلا'
class Villa(models.Model):
title = models.CharField(max_length=200)
villaCategory = models.ForeignKey(villaCategory,on_delete=models.CASCADE,null=True, blank=True)
address = models.TextField(null=True, blank=True)
photo = models.ImageField(upload_to="app1/photos/", null=True, blank=True)
comment = models.TextField(null=True, blank=True)
pub_date = models.DateTimeField(auto_now_add=True)
    galaryPictures = models.ManyToManyField(Pictures, blank=True)  # null has no effect on ManyToManyField
serchArea = models.CharField(max_length=600,null=True, blank=True)
latitude = models.FloatField(null=True, blank=True,default = 0)
longitude = models.FloatField(null=True, blank=True,default = 0)
minPrice = models.FloatField(null=True, blank=True,default = 0)
maxPrice = models.FloatField(null=True, blank=True,default = 0)
avgPrice = models.FloatField(null=True, blank=True,default = 0)
owner = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True)
def __str__(self):
return self.title
class Meta:
verbose_name_plural = 'ویلا'
ordering = ('-pub_date',)
def save(self, *args, **kwargs):
if not self.id:
super().save(*args, **kwargs)
Area = self.serchArea.split(',')
if self.serchArea:
self.latitude = Area[0]
self.longitude = Area[1]
super(Villa, self).save(*args, **kwargs)
class villaVote(models.Model):
owner = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True)
villa = models.ForeignKey(Villa,on_delete=models.CASCADE,null=True, blank=True)
comment = models.TextField('توضیحات',null=True, blank=True)
rate = models.IntegerField('نمره',default=0,validators=[MaxValueValidator(5), MinValueValidator(-5)],null=True, blank=True)
category = models.IntegerField('دسته',default=0,validators=[MaxValueValidator(1), MinValueValidator(-1)],null=True, blank=True)
pub_date = models.DateTimeField(auto_now_add=True,null=True, blank=True)
ownerTitle = models.CharField(max_length=100,null=True, blank=True)
def __str__(self):
return ' ویلا: ' + self.villa.title + ' تاریخ: ' + str(jdatetime.date.fromgregorian(date=self.pub_date)) + ' توسط: ' + self.owner.first_name + ' ' + self.owner.last_name
def save(self, *args, **kwargs):
if not self.id:
super().save(*args, **kwargs)
self.ownerTitle = self.owner.first_name + ' ' + self.owner.last_name
super(villaVote, self).save(*args, **kwargs)
class Meta:
verbose_name_plural = 'نظرات ویلا'
STATUS_CHOICES = (
(0, ("آزاد")),
(1, ("اجاره شده")),
(2, ("رزرو شده")),
(3, ("دردست تعمیر")),
(4, ("اجاره داده نمی شود"))
)
ACTIVE_CHOISES = (
(0, ("غیرفعال")),
(1, ("فعال"))
)
class villaDateStatus(models.Model):
villaId = models.ForeignKey(Villa,on_delete=models.CASCADE,null=True, blank=True)
statusId = models.IntegerField(default=0,choices= STATUS_CHOICES)
date = models.DateField()
jdateYear = models.IntegerField(null=True, blank=True)
jdateMonth = models.IntegerField(null=True, blank=True)
jdateDay = models.IntegerField(null=True, blank=True)
jdateWeekDay = models.CharField(max_length=20,null=True, blank=True)
price = MoneyField(max_digits=14, decimal_places=0,default_currency = 'IRR' )
class Meta:
ordering = ('villaId','date',)
class villaStatus(models.Model):
owner = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True)
villa = models.ForeignKey(Villa,on_delete=models.CASCADE,null=True, blank=True)
comment = models.TextField('توضیحات',null=True, blank=True)
pub_date = models.DateTimeField(auto_now_add=True)
j_fromDateYear = models.IntegerField('از سال ',default=1397,validators=[MaxValueValidator(1400), MinValueValidator(1397)])
j_fromDateMonth = models.IntegerField('از ماه ',default=1,validators=[MaxValueValidator(12), MinValueValidator(1)])
j_fromDateDay = models.IntegerField('از روز ',default=1,validators=[MaxValueValidator(31), MinValueValidator(1)])
j_toDateYear = models.IntegerField('تا سال ',default=1397,validators=[MaxValueValidator(1400), MinValueValidator(1397)])
j_toDateMonth = models.IntegerField('تا ماه ',default=10,validators=[MaxValueValidator(12), MinValueValidator(1)])
j_toDateDay = models.IntegerField('تا روز ',default=1,validators=[MaxValueValidator(31), MinValueValidator(1)])
STATUS_OF_VILLA = models.IntegerField('وضعیت',default=0,choices= STATUS_CHOICES)
price = MoneyField('قیمت',max_digits=14, decimal_places=0,default_currency = 'IRR' )
fromDate = models.DateTimeField(null=True, blank=True)
toDate = models.DateTimeField(null=True, blank=True)
def save(self, *args, **kwargs):
if not self.id:
super().save(*args, **kwargs)
self.fromDate = jdatetime.date(self.j_fromDateYear,self.j_fromDateMonth, self.j_fromDateDay).togregorian()
self.toDate = jdatetime.date(self.j_toDateYear,self.j_toDateMonth, self.j_toDateDay).togregorian()
single_date = self.fromDate
count = 1
allPrice = 0
        # iterate over every day in the inclusive range [fromDate, toDate];
        # advancing single_date at the top of the loop avoids the original
        # off-by-one that processed fromDate twice
        for n in range(int((self.toDate - self.fromDate).days) + 1):
            single_date = self.fromDate + timedelta(n)
            jdate = jdatetime.date.fromgregorian(date=single_date)
            jdate_year = jdate.year
            jdate_month = jdate.month
            jdate_day = jdate.day
            # jdatetime weekdays: Saturday == 0 ... Friday == 6
            weekday_names = {0: 'شنبه', 1: 'یکشنبه', 2: 'دوشنبه', 3: 'سه شنبه',
                             4: 'چهارشنبه', 5: 'پنجشنبه', 6: 'جمعه'}
            jdate_weekday = weekday_names[jdate.weekday()]
            OBJvillaDateStatus, created = villaDateStatus.objects.get_or_create(villaId=self.villa, date=single_date)
            villaDateStatus.objects.filter(villaId=self.villa, date=single_date).update(
                statusId=self.STATUS_OF_VILLA,
                jdateYear=jdate_year,
                jdateMonth=jdate_month,
                jdateDay=jdate_day,
                jdateWeekDay=jdate_weekday,
                price=self.price)
            count = count + 1
            allPrice = allPrice + self.price
today = date.today()
priceDic = villaDateStatus.objects.filter(villaId = self.villa,date__gte=today).aggregate(Avg('price'),Min('price'),Max('price'))
price = Namespace(**priceDic)
avg = price.price__avg - (price.price__avg % 10000 )
msg = str(price.price__avg)+'/'+str(price.price__avg % 10000 )+'/'+str(avg)+'\n'
# with open('E:/logs.txt','a') as log:
# log.write(msg)
Villa.objects.filter(villastatus__id=self.id).update(avgPrice =avg,minPrice = price.price__min,maxPrice = price.price__max)
super(villaStatus, self).save(*args, **kwargs)
def __str__(self):
return self.villa.title + ' - از: ' + str(jdatetime.date.fromgregorian(date=self.fromDate)) + ' تا: ' + str(jdatetime.date.fromgregorian(date=self.toDate)) +' : '+ str( self.STATUS_OF_VILLA)
class Meta:
verbose_name_plural = 'وضعیت ویلا'
class customer(models.Model):
username = models.CharField(max_length=100)
password = models.CharField(max_length=100)
address = models.TextField(null = True,blank = True)
phonenumber = models.CharField(max_length=20,null = True,blank = True)
firstname = models.CharField(max_length=100,null = True,blank = True)
lastname = models.CharField(max_length=100, null=True, blank=True)
active = models.IntegerField(default=1,choices= ACTIVE_CHOISES)
def __str__(self):
return self.username
class Meta:
verbose_name_plural = 'مشتری'
| abbas-ezoji/proj | app1/models.py | models.py | py | 11,380 | python | en | code | 0 | github-code | 13 |
6964202906 | import monkey
from . import state
from . import factory
from . import util
def reset_invincibility():
state.invincible = False
def mario_is_hit(player, foe):
if state.invincible:
return
if state.mario_state == 0:
player.set_state('dead')
s = monkey.script()
ii = s.add(monkey.delay(1))
ii = s.add(monkey.move_accelerated(id=player.id,
timeout=1,
velocity=monkey.vec3(0, 100, 0),
acceleration=monkey.vec3(0, -state.gravity, 0)), ii)
s.add(monkey.callfunc(util.restart), ii)
monkey.play(s)
else:
state.mario_state -= 1
state.invincible = True
st = state.mario_states[state.mario_state]
player.set_model(monkey.get_sprite(st['model']))
player.get_controller().set_size(st['size'], st['center'])
s = monkey.script()
ii = s.add(monkey.blink(id=player.id, duration=state.invincible_duration, period=0.2))
s.add(monkey.callfunc(reset_invincibility), ii)
monkey.play(s)
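The hit logic above (downgrade one power level, start an invincibility window, die only at the lowest level) can be sketched engine-free; `PlayerState` is hypothetical and stands in for the fields of the `state` module:

```python
class PlayerState:
    def __init__(self, power=1):
        self.power = power          # like state.mario_state
        self.invincible = False

    def hit(self):
        """Return True if the hit kills the player, False otherwise."""
        if self.invincible:
            return False            # blink period still active
        if self.power == 0:
            return True             # player.set_state('dead') in the game
        self.power -= 1             # downgrade one power level
        self.invincible = True      # blink period starts
        return False
```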
def jump_on_foe(player, foe, callback=None):
a = player.get_dynamics()
a.velocity.y = 200
foe.set_state('dead')
def fire():
player = monkey.engine().get_node(state.player_id)
main = monkey.engine().get_node(state.cn)
aa = factory.fireball(player.x, player.y+20, -1 if player.flip_x else 1)
main.add(aa)
def make_coin(b):
pos = b.position
def f(comp, node):
if node.position[1] < pos[1]:
node.remove()
node = monkey.Node()
node.set_model(monkey.get_sprite('sprites/flying_coin'))
node.set_position(pos[0], pos[1], 1)
mm = monkey.move_dynamics(1.)
mm.set_velocity(0, 250, 0)
mm.set_constant_force(0, -state.gravity, 0)
mm.set_callback(f)
node.add_component(mm)
state.coins += 1
update_coin()
main = monkey.engine().get_node(state.cn)
main.add(node)
def make_powerup(id):
def f(b):
pos = b.get_parent().position
node = factory.powerup(pos[0] + 8, pos[1], id)
main = monkey.engine().get_node(state.cn)
main.add(node)
return f
def update_coin():
monkey.engine().get_node(state.coin_label).set_text('{:02d}'.format(state.coins))
def hit_sensor(a, b, dist):
pp = b.get_parent()
v = a.get_dynamics().velocity
if v.y < 0:
return
v.y = 0
pp.get_collider().set_collision_flag(util.flags.platform)
if pp.user_data['hits'] > 0:
if pp.user_data['hits'] == 1:
pp.set_animation('taken')
print(pp.user_data['hits'])
pp.user_data['hits']-=1
ad = pp.get_move_dynamics()
ad.set_velocity(0, 50, 0)
pp.user_data['callback'](b)
elif pp.user_data['hits'] == -1:
if state.mario_state == 0:
ad = pp.get_move_dynamics()
ad.set_velocity(0, 50, 0)
else:
pp.remove()
a.get_dynamics().velocity.y = 0
main = monkey.engine().get_node(state.cn)
pos = pp.position
main.add(factory.brick_piece(0, pos[0], pos[1], -100, 150))
main.add(factory.brick_piece(0, pos[0], pos[1], -50, 250))
main.add(factory.brick_piece(0, pos[0], pos[1], 100, 150))
main.add(factory.brick_piece(0, pos[0], pos[1], 50, 250))
def hit_powerup(player, b, dist):
b.user_data['callback'](player)
b.remove()
def hit_goomba(player, foe, dist):
print(dist.x, dist.y, dist.z)
print(player.id, " " , foe.id)
if dist.y > 0:
jump_on_foe(player, foe)
else:
mario_is_hit(player, foe)
def hit_koopa(player, foe, dist):
if foe.state == 'dead':
direction = 0
if dist.x != 0:
direction = -1 if dist.x > 0 else 1
else:
direction = -1 if player.x > foe.x else 1
foe.set_state('fly', dir=direction)
else:
hit_goomba(player, foe, dist)
def hit_gk(goomba, koopa, dist):
if koopa.state == 'fly':
foe_killed(goomba, -50 if koopa.x > goomba.x else 50)
def foe_killed(foe, vx):
foe.set_state('dead2')
s = monkey.script()
ii = s.add(monkey.move_accelerated(id=foe.id,
timeout=0.5,
velocity=monkey.vec3(vx, 100, 0),
acceleration=monkey.vec3(0, -state.gravity, 0)))
s.add(monkey.remove(id=foe.id), ii)
monkey.play(s)
def fire_hit_foe(foe, fire, dist):
fire.remove()
foe_killed(foe, -50 if fire.x > foe.x else 50)
def hit_hotspot(player, hotspot, dist):
on_start = hotspot.user_data.get('on_start')
if on_start:
on_start()
rm = hotspot.user_data.get('remove', True)
if rm:
hotspot.remove()
def leave_hotspot(player, hotspot):
on_end = hotspot.user_data.get('on_end')
if on_end:
on_end() | fabr1z10/monkey_examples | demo/game/rooms/functions.py | functions.py | py | 4,959 | python | en | code | 0 | github-code | 13 |
49215718274 | import copy
import random
import math
import pdb
import tttFunctions as tFn
import strFile
class Ai:
def __init__(self, symbol, algo, name):
self.symbol = symbol
self.func = algo
self.name = name
def choose(self, board):
return self.func(board, self.symbol)
# All the AI functions will take in 2 arguments:
# board and symbol of the person who's turn it is
# Return will be the coordinate of the move
def legalMoves(board):
# Determines what cells are available on a board
# Arguments: board as list of list
# Return: a list of open cells
legalMoves = [];
for r in range(len(board)):
for c in range(len(board[r])):
if board[r][c] == None:
for key, val in strFile.choiceMap.items():
if val == (r, c):
legalMoves.append(key)
break
return legalMoves
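Assuming `strFile.choiceMap` maps the move keys 1-9 to `(row, col)` board cells, `legalMoves` can be sketched in a self-contained form; the row-major mapping below is a plausible stand-in, not the project's actual one:

```python
# Hypothetical row-major mapping from move keys 1-9 to (row, col) cells.
CHOICE_MAP = {r * 3 + c + 1: (r, c) for r in range(3) for c in range(3)}

def legal_moves(board):
    """Return the move keys whose cells are still empty (None)."""
    return [key for key, (r, c) in CHOICE_MAP.items() if board[r][c] is None]
```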
def minimaxAi(board, symbol):
    bestMove = None
    bestScore = None
    possibleMoves = legalMoves(board)
    for move in possibleMoves:
        testBoard = copy.deepcopy(board)
        newBoard = tFn.updateBoard(testBoard, symbol, move)
        score = minimaxScore(newBoard, symbol, True)
        if bestScore is None or score > bestScore:
            bestMove = move
            bestScore = score
    return bestMove
def minimaxScore(board, symbol, isMaximizer):
# Runs the minimax algorithm
#pdb.set_trace()
results = tFn.checkBoard(board, symbol)
full = results[0]
won = results[1]
moveSet = legalMoves(board)
if symbol == "X":
minPlayer = "O"
else:
minPlayer = "X"
# Base Case
if won == None and full:
return 0
elif won == True:
return 10
elif won == False:
return -10
scores = []
for move in moveSet:
testBoard = copy.deepcopy(board)
newBoard = tFn.updateBoard(testBoard, symbol, move)
        tempScore = minimaxScore(newBoard, minPlayer, not isMaximizer)  # alternate max/min plies
scores.append(tempScore)
if isMaximizer:
return max(scores)
else:
return min(scores)
def randomAi(board, symbol):
# Randomly chooses an open space
possibleMoves = legalMoves(board)
choice = random.randrange(0, len(possibleMoves))
return possibleMoves[choice]
def showWinningMoves(board, symbol):
# Function: show winning cells
# Argument: board as a list of lists
# Return: a list of possible winning cells (1-9) index
openCells = legalMoves(board)
winningMoves = []
    # Columns (the redundant outer r loop previously appended each move three times)
    for c in range(len(board[0])):
        if board[0][c] == symbol and board[1][c] == symbol:
            if tFn.coordsToMap(2, c) in openCells:
                winningMoves.append((2, c))
        elif board[0][c] == symbol and board[2][c] == symbol:
            if tFn.coordsToMap(1, c) in openCells:
                winningMoves.append((1, c))
        elif board[1][c] == symbol and board[2][c] == symbol:
            if tFn.coordsToMap(0, c) in openCells:
                winningMoves.append((0, c))
    # Rows
for r in range(len(board)):
if board[r][0] == symbol and board[r][1] == symbol:
if tFn.coordsToMap(r, 2) in openCells:
winningMoves.append((r,2))
elif board[r][1] == symbol and board[r][2] == symbol:
if tFn.coordsToMap(r, 0) in openCells:
winningMoves.append((r,0))
elif board[r][0] == symbol and board[r][2] == symbol:
if tFn.coordsToMap(r, 1) in openCells:
winningMoves.append((r,1))
# Diagonals
if board[1][1] == symbol:
if board[0][0] == symbol and tFn.coordsToMap(2, 2) in openCells:
winningMoves.append((2,2))
elif board[2][2] == symbol and tFn.coordsToMap(0, 0) in openCells:
winningMoves.append((0,0))
elif board[2][0] == symbol and tFn.coordsToMap(0, 2) in openCells:
winningMoves.append((0,2))
elif board[0][2] == symbol and tFn.coordsToMap(2, 0) in openCells:
winningMoves.append((2,0))
return winningMoves
def findWinningAi(board, symbol):
# If a winning space is open, will choose it. Random otherwise
winningMoves = showWinningMoves(board, symbol)
if len(winningMoves) > 0:
for key, val in strFile.choiceMap.items():
if val == winningMoves[0]:
choice = key
break
else:
choice = randomAi(board, symbol)
return choice
def findWinLossAi(board, symbol):
# Will claim a winning spot or block a losing spot.
# Random otherwise
if symbol == "X":
otherSymbol = "O"
else:
otherSymbol = "X"
myWinningMoves = showWinningMoves(board, symbol)
otherWinningMoves = showWinningMoves(board, otherSymbol)
if len(myWinningMoves) > 0:
for key, val in strFile.choiceMap.items():
if val == myWinningMoves[0]:
choice = key
break
elif len(otherWinningMoves) > 0:
for key, val in strFile.choiceMap.items():
if val == otherWinningMoves[0]:
choice = key
break
else:
        choice = randomAi(board, symbol)
return choice
| StewartJake/pythonProjects | ticTacToe/gameAi.py | gameAi.py | py | 5,304 | python | en | code | 0 | github-code | 13 |
11562655681 | from flask import Flask, render_template, request, redirect, url_for
app = Flask(__name__)
posts = []
@app.route('/')
def homepage():
return render_template('home.html')
@app.route('/blog')
def blog_page():
return render_template('blog.html', posts=posts)
@app.route('/post', methods=['GET', 'POST'])
def add_post():
if request.method == 'POST':
title = request.form['title']
content = request.form['content']
global posts
posts.append({
'title': title,
'content': content
})
return redirect(url_for('blog_page'))
return render_template('new_post.html')
@app.route('/post/<string:title>')
def see_post(title):
global posts
for post in posts:
if post['title'] == title:
return render_template('post.html', post=post)
return render_template('post.html', post=None)
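The title lookup in `see_post` is a linear scan over the in-memory list; a dict keyed by title would make both lookup and duplicate detection O(1). `PostStore` below is a hypothetical, Flask-free illustration of that alternative, not part of the app:

```python
class PostStore:
    """In-memory post storage keyed by title, mirroring the `posts` list above."""
    def __init__(self):
        self._posts = {}

    def add(self, title, content):
        self._posts[title] = {'title': title, 'content': content}

    def find(self, title):
        # Returns None when missing, like the fall-through in see_post()
        return self._posts.get(title)
```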
def shutdown_server():
'''
The Werkzeug server that is used by the app.run() command
can be shut down starting with Werkzeug 0.8.
This can be helpful for small applications that should serve as a
frontend to a simple library on a user's computer.
:return:
'''
func = request.environ.get('werkzeug.server.shutdown')
if func is None:
raise RuntimeError('Not running with the Werkzeug Server')
func()
@app.route('/shutdown', methods=['POST'])
def shutdown():
'''
You can shutdown the server by calling this function.
The shutdown functionality is written in a way that the server
will finish handling the current request and then stop.
Source: http://flask.pocoo.org/snippets/67/
:return:
'''
shutdown_server()
return 'Server shutting down...'
if __name__ == '__main__':
app.run(debug=True)
| ikostan/automation_with_python | video_code_section_10/app.py | app.py | py | 1,780 | python | en | code | 0 | github-code | 13 |
26512161210 | '''
Escreva um programa que leia a velocidade de um carro.
Se ele ultrapassar 80km/hr, mostre uma mensagem dizendo que ele foi multado e o valor da multa.
A multa custa R$7,00 por cada km acima do limite
'''
from random import randrange
from time import sleep
velR=randrange(60,180)
print("Você estava dirigindo a {}km em uma pista de 80km".format(velR))
sleep(3)
if velR>80:
print("Você será multado em R${:.2f}".format(7*(velR-80)))
else:
print("Você não será multado!")
input() | MLucasf/PythonExercises | ex029.py | ex029.py | py | 499 | python | pt | code | 0 | github-code | 13 |
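The fine computation in the exercise above is a one-liner; a small helper (hypothetical, not part of the exercise) makes it testable without the random input:

```python
def fine(speed, limit=80, rate=7.0):
    """Return the fine in R$: R$7.00 per km/h over the limit, 0 otherwise."""
    over = speed - limit
    return rate * over if over > 0 else 0.0
```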
2650381902 | import sys
import random
import math
def dist(list1,list2):
sum=0
for i in range(len(list1)):
sum+=(list1[i]-list2[i])**2
return sum
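`dist` returns the *squared* Euclidean distance; since the square root is monotonic, nearest-centroid comparisons are unaffected by skipping it. A self-contained check (re-defining the function so the snippet runs on its own):

```python
def dist(list1, list2):
    total = 0
    for a, b in zip(list1, list2):
        total += (a - b) ** 2
    return total

# dist([0, 0], [3, 4]) is 25, i.e. 5**2 -- the square of the true distance.
```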
def mean(list1):
mean=[]
for col in range(len(list1[0])):
mean.append(round((sum([item[col] for item in list1]))/len(list1),2))
return mean
#Read Data File
DataList = open(sys.argv[1]).readlines()
DataList = [line.split() for line in DataList]
DataList = [list(map(float, line)) for line in DataList]
row=len(DataList)
col=len(DataList[0])
k_value=int(sys.argv[2])
Data=[]
mean_cluster=[]
split=row//k_value
data=DataList[:]
random.shuffle(DataList)
# Initial Clusters
for i in range(k_value):
Cluster_Data=[]
if i==k_value-1:
x=row
else:
x=(i+1)*split
for k in range(i*split,x):
Cluster_Data.append(DataList[k])
Data.append(Cluster_Data)
mean_cluster.append(mean(Cluster_Data))
# K-Mean
prev_obj = float('inf')
count=0
while True:
count+=1
obj=0
for d in Data:
j=d[:]
for k in range(len(j)):
min_dist=float('inf')
for i in range(k_value):
x=dist(mean_cluster[i],j[k])
obj+=x
if x<min_dist:
min_dist=x
idx=i
Data[idx].append(j[k])
d.remove(j[k])
if prev_obj - obj == 0:
break
prev_obj = obj
for i in range(k_value):
if Data[i]==[]:
mean_cluster[i]=[]
for k in range(col):
mean_cluster[i].append(0)
else:
mean_cluster[i]=mean(Data[i])
#printing datapoints and cluster
for i in range(row):
for j in range(len(Data)):
if data[i] in Data[j]:
print(j,i)
| Suraj-Jha1508/Machine_Learning_CS675 | Assignments/K-Mean(8)/K-Means.py | K-Means.py | py | 1,766 | python | en | code | 1 | github-code | 13 |
71084003219 | users = {}
def register(username: str, license_plate: str):
    if username in users:
        print(f'ERROR: already registered with plate number {users[username]}')
else:
users[username] = license_plate
print(f'{username} registered {license_plate} successfully')
def unregister(username: str):
if username not in users.keys():
print(f'ERROR: user {username} not found')
else:
print(f'{username} unregistered successfully')
users.pop(username)
def print_users():
for (user, plate) in users.items():
print(f'{user} => {plate}')
def su_parking():
n = int(input())
for _ in range(n):
command = input().split(' ')
if command[0] == 'register':
register(command[1], command[2])
elif command[0] == 'unregister':
unregister(command[1])
print_users()
# Driver code
if __name__ == '__main__':
# function call
su_parking()
| bobsan42/SoftUni-Learning-42 | ProgrammingFunadamentals/a25DictionariesExrecises/suparking.py | suparking.py | py | 959 | python | en | code | 0 | github-code | 13 |
8861129225 | # Input
L = ['goat', 'ant', 'bat', 'zebra', 'monkey']
# Processing
# Arun
L.append('buffalo')
#L.replace('goat', 'giraffe')
L.remove('goat')
L.append('giraffe')
L.sort(reverse=True)
# Output
print(L)
# ['zebra', 'monkey', 'giraffe', 'buffalo', 'bat', 'ant']
| mindful-ai/oracle-aug20 | day_01/labs/lab_03.py | lab_03.py | py | 288 | python | en | code | 0 | github-code | 13 |
11261698585 | import sys
simple, sliding = -1, -3
q = [-1,-1,-1,-1]
for i, line in enumerate(sys.stdin):
curr = i % 4
q[curr] = int(line.strip())
sliding += int(q[curr] > q[(i+1) % 4])
simple += int(q[curr] > q[(i-1) % 4])
print(simple, sliding) | Tethik/advent-of-code | 2021/01/both-golf.py | both-golf.py | py | 261 | python | en | code | 0 | github-code | 13 |
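The circular buffer above works because comparing two overlapping three-element window sums reduces to comparing the elements that differ: `sum(a[i-2:i+1]) > sum(a[i-3:i])` iff `a[i] > a[i-3]`. A direct, non-streaming sketch of the same counts:

```python
def count_increases(depths, window=1):
    # Number of indices where the value `window` steps back is smaller;
    # window=1 is the simple count, window=3 the sliding-window count.
    return sum(1 for i in range(window, len(depths)) if depths[i] > depths[i - window])
```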
35128858169 | """
# Definition for a Node.
class Node:
def __init__(self, val=None, children=None):
self.val = val
self.children = children
"""
class Solution:
def maxDepth(self, root: 'Node') -> int:
if not root:
return 0
depth = 0
stack = [(root, 1)]
while stack:
node, level = stack.pop()
depth = max(depth, level)
            if node.children:  # children may be None for a leaf node
                for child in node.children:
                    stack.append((child, level + 1))
return depth
"""Iterative DFS using an explicit stack. Unlike a binary tree, we loop over
all of a node's children instead of visiting left and right. A leaf cannot be
detected with `not node.left and not node.right` here, so the depth is simply
updated with the maximum level seen for every node popped from the stack."""
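A self-contained check of the approach (re-declaring a minimal `Node`, since the LeetCode definition is only given as a comment):

```python
class Node:
    def __init__(self, val=None, children=None):
        self.val = val
        self.children = children or []

def max_depth(root):
    """Iterative stack-based DFS over an N-ary tree."""
    if not root:
        return 0
    depth, stack = 0, [(root, 1)]
    while stack:
        node, level = stack.pop()
        depth = max(depth, level)
        for child in node.children:
            stack.append((child, level + 1))
    return depth
```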
| aakanksha-j/LeetCode | 559. Maximum Depth of N-ary Tree/iterative_stack_dfs_2.py | iterative_stack_dfs_2.py | py | 891 | python | en | code | 0 | github-code | 13 |
33516219118 | # -------------------------------------------------------------------------------
# Name: Load HUCs
#
# Purpose: Loop over a ShapeFile of HUC8 polygons and insert essential
# information into SQLite database
#
# Author: Philip Bailey
#
# Date: 12 Aug 2019
#
# -------------------------------------------------------------------------------
import argparse
import sqlite3
import os
import re
from osgeo import ogr
def load_hucs(polygons, database):
# Get the input flow lines layer
driver = ogr.GetDriverByName("ESRI Shapefile")
data_source = driver.Open(polygons, 0)
layer = data_source.GetLayer()
print('{:,} features in polygon ShapeFile {}'.format(layer.GetFeatureCount(), polygons))
values = []
for inFeature in layer:
values.append((inFeature.GetField('HUC8'), inFeature.GetField('NAME'), inFeature.GetField('AREASQKM')))
# Open connection to SQLite database. This will create the file if it does not already exist.
conn = sqlite3.connect(database)
# Fill the table using bulk operation
print('{:,} features about to be written to database {}'.format(len(values), database))
conn.executemany("INSERT INTO HUCs (HUC8, Name, AreaSqKm) values (?, ?, ?)", values)
conn.commit()
conn.close()
print('Process completed successfully')
def huc_info(database):
    conn = sqlite3.connect(database)
    curs = conn.cursor()
    hucs = {}
    # Column is named HUC8, matching the INSERT in load_hucs()
    for row in curs.execute("SELECT HUC8, Name FROM HUCs").fetchall():
        hucs[row[0]] = row[1]
    conn.close()
    return hucs
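The dict-building pattern in `huc_info` can be exercised against an in-memory SQLite database; the table and columns below follow the `INSERT` statement in `load_hucs` above, and `huc_lookup` is a hypothetical condensed equivalent:

```python
import sqlite3

def huc_lookup(conn):
    """Map HUC8 code -> name, as huc_info() does for a file-backed database."""
    return {huc: name for huc, name in conn.execute("SELECT HUC8, Name FROM HUCs")}

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE HUCs (HUC8 TEXT, Name TEXT, AreaSqKm REAL)")
conn.executemany("INSERT INTO HUCs (HUC8, Name, AreaSqKm) VALUES (?, ?, ?)",
                 [('16010201', 'Bear Lake', 2727.0)])
```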
def get_hucs_present(top_level_folder, database):
all_hucs = huc_info(database)
present_folders = {}
for subdir, dirs, files in os.walk(top_level_folder):
for dir in dirs:
            x = re.search(r'_([0-9]{8})\Z', dir)
if x:
present_folders[x[1]] = os.path.join(top_level_folder, subdir, dir)
present_files = {}
for huc, path in present_folders.items():
present_files[int(huc)] = {}
# TODO: Paths need to be reset
raise Exception('PATHS NEED TO BE RESET')
SOMEPATH = os.path.join(os.environ['SOME_PATH'], 'BearLake_16010201/Inputs/04_Anthropogenic/06_LandOwnership/Land_Ownership_01/NationalSurfaceManagementAgency.shp')
search_items = {
'Network': os.path.join(path, 'Outputs', 'Output_SHP', '01_Perennial_Network', '02_Combined_Capacity_Model'),
'Roads': os.path.join(path, 'Inputs', '04_Anthropogenic', '02_Roads', 'Roads_01'),
'Rail': os.path.join(path, 'Inputs', '04_Anthropogenic', '03_Railroads', 'Railroads_01'),
'Canals': os.path.join(path, 'Inputs', '04_Anthropogenic', '04_Canals', 'Canals_01', 'NHDCanalsDitches.shp'),
            'LandOwnership': os.path.join(path, 'Inputs', '04_Anthropogenic', '06_LandOwnership', 'Land_Ownership_01', SOMEPATH),
'ExistingVeg': os.path.join(path, 'Inputs', '01_Vegetation', '01_ExistingVegetation', 'Ex_Veg_01')
}
for key, folder in search_items.items():
for root, dirs, files in os.walk(folder):
for file in files:
if file.endswith('.shp'):
present_files[int(huc)][key] = os.path.join(root, file)
elif file.endswith('.tif'):
present_files[int(huc)][key] = os.path.join(root, file)
print(len(present_files), 'HUCs found in', top_level_folder)
return present_files
def main():
parser = argparse.ArgumentParser()
parser.add_argument('polygons', help='Path to ShapeFile of HUC8 polygons', type=argparse.FileType('r'))
parser.add_argument('database', help='Path to SQLite database', type=argparse.FileType('r'))
args = parser.parse_args()
load_hucs(args.polygons.name, args.database.name)
if __name__ == '__main__':
main()
| Riverscapes/riverscapes-tools | lib/commons/scripts/load_hucs.py | load_hucs.py | py | 3,914 | python | en | code | 10 | github-code | 13 |
1569416077 | from kdtree import KDTree
import sys, time, random, csv, math
from align_eigenspaces import *
from ContactGeometry1 import *
import numpy as np
import matplotlib as plt
from scipy.optimize import fmin_bfgs
def dot_product(Va, Vb):
    # No need to transpose here; it is done earlier
d=0
for i in range(len(Va)):
d+=Va[i]*Vb[i]
return d
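`dot_product` assumes both vectors are already row-aligned (the transpose happens before the call); a quick self-contained sanity check:

```python
def dot_product(Va, Vb):
    d = 0
    for i in range(len(Va)):
        d += Va[i] * Vb[i]
    return d

# Equivalent to sum(a * b for a, b in zip(Va, Vb)).
```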
def newContactScoreFxn(x, Xa, Xb, Va, Vb, sig):
#(n1, n2, n3, px, py, pz)
o = x[0:3]
p = x[3:]
Xar = np.dot(Rot(o), Xa)+ np.reshape(p, (3,1))
Cab = ContactMatrix(Xar,Xb,sig)
scr = -(np.dot(Va.T,np.dot(Cab,Vb))).sum()
return scr
FileName = 'PDBFiles/d1cida1.ent'
pdb = ReadPDB(FileName)
FileName = 'PDBFiles/d2rhea_.ent'
pdb2 = ReadPDB(FileName)
rawProteinA = CA_coordinates(pdb2)
rawProteinB = CA_coordinates(pdb2)
sig = 8
tol = 0.001
itermax = 100
#full list of atoms in protein a
Xafull = np.array(pdb[['x','y','z']],dtype='float64')
Xa1 = Xafull[:15]#only take first 15 atoms (for now)
#full list of atoms in protein b
Xbfull = np.array(pdb2[['x','y','z']],dtype='float64')
Xb1 =Xbfull[:15]#only take first 15 atoms (for now)
Xa = PrincipleAxesTransform(rawProteinA)#cut all but 15 atoms
Xb = PrincipleAxesTransform(rawProteinB)
Ca = ContactMatrix(Xa,Xa,sig)
Cb = ContactMatrix(Xb,Xb,sig)
tempa= Eig(Ca)
la = tempa[0]
Va = tempa[1]
#####################
tempb = Eig(Cb)
lb = tempb[0]
Vb = tempb[1]
#--------------------------------------------------
x0 = np.array([0.001,0,0,0,0,0])
xopt,scr,A,B,a,b,c = fmin_bfgs(ContactScore,x0,args=(Xa,Xb,Va,Vb,sig),full_output=1,disp=1)
#--------------------------------------------------
CA_Coord_ProteinA = rawProteinA.T
CA_Coord_ProteinB = rawProteinB.T
limitedListCA_ProtA = CA_Coord_ProteinA
limitedListCA_ProtB = CA_Coord_ProteinB
#listXa is the list of atoms in protein a
listXa=[]
for i in range(len(limitedListCA_ProtA)):
tupp = tuple([round(limitedListCA_ProtA[i][0], 3)])+tuple([round(limitedListCA_ProtA[i][1],3)])+tuple([round(limitedListCA_ProtA[i][2], 3)])+tuple([Va[i].T])
#eigenvector Va.T is appended to each node
listXa.append(tupp)
#listXb is the list of atoms in protein b
listXb=[]
for i in range(len(limitedListCA_ProtB)):
tupp = tuple([round(limitedListCA_ProtB[i][0], 3)])+tuple([round(limitedListCA_ProtB[i][1],3)])+tuple([round(limitedListCA_ProtB[i][2], 3)])+tuple([Vb[i]])
#eigenvector Vb is appended to each node
listXb.append(tupp)
data1 = listXa
data2 = listXb
Tree1 = KDTree.construct_from_data(data1)
Tree2 = KDTree.construct_from_data(data2)
score = 0
#print("####################################")
#Times for KD Tree approach
startT = time.time()
for i in range(len(data1)):
#finds the atoms within radius 30 of query pt
score += Tree2.queryrange(query_point=data1[i], r = 50)
solveTime = time.time() - startT
print(solveTime)
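The commented-out brute-force block below scores each neighbour pair with a Gaussian contact weight, C = exp(-0.5 * d^2 / sigma^2) with sigma = 8 (hence the 64). A standalone sketch of that weight; `contact_weight` is a hypothetical helper name:

```python
import math

def contact_weight(sq_dist, sigma=8.0):
    """Gaussian contact weight: 1 at zero distance, decaying toward 0 far away."""
    return math.exp(-0.5 * sq_dist / (sigma ** 2))
```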
#Time for non-tree approach
#startT = time.time()
#total =0
#for i in range(len(data1)):
# #all points within dist r of the nn points on Tree2!
# rangepoints =Tree2.queryrange(query_point = data1[i], r = 50)
# #print(rangepoints)
# for k in range(len(rangepoints)):
# sd = rangepoints[k][-1] #Cij value of contact matrix
# C = math.exp(-0.5*(sd/64))
# #No Longer Necessary, calc happen recursively
# #only prints non-zero values of C
# if(C>.01):
# #print(C)
# total = total+C*dot_product(data1[i][-1], rangepoints[k][0][-1])
#solveTime = time.time() - startT
#print(solveTime) | itsvismay/MultiDimensionalTrees | version3.py | version3.py | py | 3,386 | python | en | code | 1 | github-code | 13 |
21524828774 | """
This program compiles an Excel spreadsheet for manually mapping dataset-specific species names to
a common taxonomy.
We are currently doing the counting here instead of as a part of the Cosmos DB query
- see SDK issue in notes.
It first goes through the list of datasets in the `datasets` table to find out which "species" are in each
dataset and the count of its "occurrences" (each sequence is counted as 1 if the class label is on the sequence;
each image is counted as 1 as well if the class label is on the image level; so a sequence/image count mixture).
This information is saved in a JSON file in the `output_dir` for each dataset.
Once this information is collected, for each "species" in a dataset, it queries the TOP 100 sequences
where the "species" is in the list of class names at either the sequence or the image level. It samples
7 of these TOP 100 sequences (sequences returned by TOP may have little variety) and from each sequence
samples an image to surface as an example. The spreadsheet is then prepared, adding a Bing search URL
with the species class name as the query string and fields to filter and fill in Excel.
Because querying for all species present in a dataset may take a long time, a dataset is only queried
if it does not yet have a JSON file in the `output_dir`.
Leave out the flag `--query_species` if you only want to prepare the spreadsheet using previously queried
species presence result.
Example invocation:
```
python data_management/megadb/query_and_upsert_examples/species_by_dataset.py --output_dir /Users/siyuyang/Source/temp_data/CameraTrap/megadb_query_results/species_by_dataset_trial --query_species
```
"""
import argparse
import json
import os
import urllib
from collections import Counter
from datetime import datetime
from random import sample
import pandas as pd
from openpyxl import Workbook
from openpyxl.utils.dataframe import dataframe_to_rows
from tqdm import tqdm
from data_management.megadb.megadb_utils import MegadbUtils
NUMBER_EXAMPLES_PER_SPECIES = 7
NUMBER_SEQUENCES_TO_QUERY = 100
def query_species_by_dataset(megadb_utils, output_dir):
# which datasets are already processed?
queried_datasets = os.listdir(output_dir)
queried_datasets = set([i.split('.json')[0] for i in queried_datasets if i.endswith('.json')])
datasets_table = megadb_utils.get_datasets_table()
dataset_names = list(datasets_table.keys())
dataset_names = [i for i in dataset_names if i not in queried_datasets]
print(f'{len(queried_datasets)} datasets already queried. Querying species in {len(dataset_names)} datasets...')
for dataset_name in dataset_names:
print(f'Querying dataset {dataset_name}...')
query_seq_level = '''
SELECT VALUE seq.class
FROM seq
WHERE ARRAY_LENGTH(seq.class) > 0 AND NOT ARRAY_CONTAINS(seq.class, "empty") AND NOT ARRAY_CONTAINS(seq.class, "__label_unavailable")
'''
results = megadb_utils.query_sequences_table(query_seq_level, partition_key=dataset_name)
counter = Counter()
for i in results:
counter.update(i)
# cases when the class field is on the image level (images in a sequence had different class labels)
# 'caltech' dataset is like this
query_image_level = '''
SELECT VALUE seq.images
FROM sequences seq
WHERE (SELECT VALUE COUNT(im) FROM im IN seq.images WHERE ARRAY_LENGTH(im.class) > 0) > 0
'''
results_im = megadb_utils.query_sequences_table(query_image_level, partition_key=dataset_name)
for seq_images in results_im:
for im in seq_images:
if 'class' in im:
counter.update(im['class'])
with open(os.path.join(output_dir, f'{dataset_name}.json'), 'w') as f:
json.dump(counter, f, indent=2)
def get_example_images(megadb_utils, dataset_name, class_name):
datasets_table = megadb_utils.get_datasets_table()
query_both_levels = '''
SELECT TOP {} VALUE seq
FROM seq
WHERE ARRAY_CONTAINS(seq.class, "{}") OR (SELECT VALUE COUNT(im) FROM im IN seq.images WHERE ARRAY_CONTAINS(im.class, "{}")) > 0
'''.format(NUMBER_SEQUENCES_TO_QUERY, class_name, class_name)
sequences = megadb_utils.query_sequences_table(query_both_levels, partition_key=dataset_name)
sample_seqs = sample(sequences, min(len(sequences), NUMBER_EXAMPLES_PER_SPECIES)) # sample 7 sequences if possible
image_urls = []
for i, seq in enumerate(sample_seqs):
sample_image = sample(seq['images'], 1)[0] # sample one image from each sequence
img_path = sample_image['file']
img_path = MegadbUtils.get_full_path(datasets_table, dataset_name, img_path)
img_path = urllib.parse.quote_plus(img_path)
dataset_info = datasets_table[dataset_name]
img_url = 'https://{}.blob.core.windows.net/{}/{}{}'.format(
dataset_info["storage_account"],
dataset_info["container"],
img_path,
dataset_info["container_sas_key"]
)
image_urls.append(img_url)
if len(image_urls) < NUMBER_EXAMPLES_PER_SPECIES:
image_urls.extend([None] * (NUMBER_EXAMPLES_PER_SPECIES - len(image_urls)))
assert len(image_urls) == NUMBER_EXAMPLES_PER_SPECIES
return image_urls
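The sample-then-pad pattern at the end of `get_example_images` guarantees a fixed-width spreadsheet row even when fewer sequences exist than examples wanted; it can be isolated as a small helper (`sample_padded` is hypothetical, same semantics):

```python
from random import sample

def sample_padded(items, k):
    """Sample up to k items, padding with None so the result always has length k."""
    chosen = sample(items, min(len(items), k))
    return chosen + [None] * (k - len(chosen))
```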
def make_spreadsheet(megadb_utils, output_dir):
all_classes = set()
class_in_multiple_ds = {}
species_by_dataset = {}
classes_excluded = ['car', 'vehicle', 'empty', '__label_unavailable', 'error']
# read species presence info from the JSON files for each dataset
for file_name in os.listdir(output_dir):
if not file_name.endswith('.json'):
continue
dataset_name = file_name.split('.json')[0]
print(f'Processing dataset {dataset_name}')
with open(os.path.join(output_dir, file_name)) as f:
species_in_dataset = json.load(f)
species_valid = {}
for class_name, count in species_in_dataset.items():
if class_name not in classes_excluded:
species_valid[class_name] = count
# has this class name appeared in a previous dataset?
if class_name in all_classes:
class_in_multiple_ds[class_name] = True
else:
class_in_multiple_ds[class_name] = False # first appearance
all_classes.update(list(species_valid.keys()))
species_by_dataset[dataset_name] = species_valid
# get the columns to populate the spreadsheet
    # strangely the order in the Pandas dataframe and spreadsheet seems to follow the order of insertion here
cols = {
'dataset': [],
'occurrences': [], # count of sequences/images mixture where this class name appears
'species_label': [],
'bing_url': [],
'is_common': [], # is this class name seen already / need to be labeled again?
'taxonomy_name': [],
'common_name': [],
'is_typo': [], # there is a typo in the class name, but correct taxonomy name can be inferred
'not_applicable': [], # labels like "human-cattle" where a taxonomy name would not be applicable
'other_notes': [], # other info in the class name, like male/female
        'is_new': [] # not in previous versions of this spreadsheet
}
for i in range(NUMBER_EXAMPLES_PER_SPECIES):
cols[f'example{i + 1}'] = []
cols['example_mislabeled'] = []
for dataset_name, species_count in species_by_dataset.items():
print(dataset_name)
species_count_tups = sorted(species_count.items(), key=lambda x: x[1], reverse=True)
for class_name, class_count in tqdm(species_count_tups):
cols['dataset'].append(dataset_name)
cols['occurrences'].append(class_count)
cols['species_label'].append(class_name)
bing_url = 'https://www.bing.com/search?q={}'.format(urllib.parse.quote_plus(class_name))
cols['bing_url'].append(bing_url)
example_images_sas_urls = get_example_images(megadb_utils, dataset_name, class_name)
for i, url in enumerate(example_images_sas_urls):
cols[f'example{i + 1}'].append(url)
cols['is_common'].append(class_in_multiple_ds[class_name])
cols['taxonomy_name'].append('')
cols['common_name'].append('')
cols['is_typo'].append('')
cols['other_notes'].append('')
cols['not_applicable'].append('')
cols['is_new'].append(True)
cols['example_mislabeled'].append('')
# make the spreadsheet
spreadsheet = pd.DataFrame.from_dict(cols)
print(spreadsheet.head(5))
wb = Workbook()
ws = wb.active
for r in dataframe_to_rows(spreadsheet, index=False, header=True):
ws.append(r)
# Bing search URL
for i_row, cell in enumerate(ws['D']): # TODO hardcoded column number
if i_row > 0:
cell.hyperlink = cell.value
cell.style = 'Hyperlink'
# example image SAS URLs TODO hardcoded column number - need to change if number of examples changes or col order changes
sas_cols = [ws['L'], ws['M'], ws['N'], ws['O'], ws['P'], ws['Q'], ws['R']]
assert len(sas_cols) == NUMBER_EXAMPLES_PER_SPECIES
for i_example, ws_col in enumerate(sas_cols):
for i_row, cell in enumerate(ws_col):
if i_row > 0:
if cell.value is not None:
if not isinstance(cell.value, str):
print(f'WARNING cell.value is {cell.value}, type is {type(cell.value)}')
continue
cell.hyperlink = cell.value
cell.value = f'example{i_example + 1}'
cell.style = 'Hyperlink'
date = datetime.now().strftime('%Y_%m_%d')
wb.save(os.path.join(output_dir, f'species_by_dataset_{date}.xlsx'))
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--output_dir',
required=True,
help='Path to directory where the JSONs containing species count for each dataset live')
parser.add_argument('--query_species',
action='store_true',
help=('If flagged, query what species are present in a dataset. '
'Otherwise, create a spreadsheet for labeling the taxonomy'))
args = parser.parse_args()
assert 'COSMOS_ENDPOINT' in os.environ and 'COSMOS_KEY' in os.environ
os.makedirs(args.output_dir, exist_ok=True)
megadb_utils = MegadbUtils()
if args.query_species:
query_species_by_dataset(megadb_utils, args.output_dir)
make_spreadsheet(megadb_utils, args.output_dir)
if __name__ == '__main__':
main()
| UCSD-E4E/Owl_Classification_Interface | src/MegaDetector/cameratraps/detection/data_management/megadb/query_and_upsert_examples/species_by_dataset.py | species_by_dataset.py | py | 10,859 | python | en | code | 0 | github-code | 13 |
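The TODO comments above flag the hardcoded column letters (`ws['D']`, `ws['L']`..`ws['R']`) that must be kept in sync with the `cols` dict by hand. A small sketch of deriving the letter from the header order instead (the helper name is hypothetical; it assumes fewer than 27 columns, i.e. single letters A..Z):

```python
# Column order mirrors the insertion order of the `cols` dict above.
HEADERS = [
    "dataset", "occurrences", "species_label", "bing_url", "is_common",
    "taxonomy_name", "common_name", "is_typo", "not_applicable",
    "other_notes", "is_new", "example1", "example2", "example3",
    "example4", "example5", "example6", "example7", "example_mislabeled",
]

def column_letter_for(header_name):
    """Return the spreadsheet column letter for a header (A..Z range only)."""
    idx = HEADERS.index(header_name)  # 0-based position in the header row
    return chr(ord("A") + idx)        # 0 -> "A", 3 -> "D", 11 -> "L", ...

bing_col = column_letter_for("bing_url")
```

With this lookup, reordering or adding columns cannot silently point the hyperlink styling at the wrong column.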
18145583439 | from pwn import remote
def main():
#r = process("./service")
r = remote("107.21.135.41", 2222)
r.recvuntil("menu: ")
r.sendline("1")
r.interactive()
for x in range(100):
line = r.recvuntil("? ")
print(line)
words = line.split()
a = int(words[4])
b = int(words[6][:-1])
print("a: %d b: %d" %(a, b))
c = a + b
print(" Sum:", c)
r.sendline("%d" %c)
#r.close()
if __name__ == "__main__":
main()
| aditya70/ss-course | lab1/c2.py | c2.py | py | 509 | python | en | code | 0 | github-code | 13 |
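The index-based parsing above (`words[4]`, `words[6][:-1]`) breaks if the prompt wording shifts by one token. A regex-based variant is less brittle; the exact prompt format (`b'... 12 + 34? '`) is an assumption here:

```python
import re

def answer_sum_prompt(line):
    """Extract the two operands from a prompt like b'What is 12 + 34? '
    (format assumed) and return their sum."""
    m = re.search(rb"(\d+)\s*\+\s*(\d+)", line)
    if m is None:
        raise ValueError("no 'a + b' pattern found in prompt")
    return int(m.group(1)) + int(m.group(2))
```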
14274304806 | #python
# File: mc_lxRename_rename.py
# Author: Matt Cox
# Description: Bulk renames a selection of items, changing their names to the rename string.
import lx
import re
import sys  # used in the exception report in the except block below
lxRRenameText = lx.eval( "user.value mcRename.rename ?" )
if len(lxRRenameText) != 0:
try:
lxRSelectedItems = lx.evalN('query sceneservice selection ? all')
for x in lxRSelectedItems:
lx.eval('select.Item %s' %str(x))
try:
lx.eval('item.name "%s"'%(lxRRenameText))
except:
lx.eval('dialog.setup error')
lx.eval('dialog.title {Error}')
lx.eval('dialog.msg {Unable to rename items.}')
lx.eval('dialog.open')
lx.eval('select.drop item')
for x in lxRSelectedItems:
lx.eval('select.Item %s add' %str(x))
except:
        lx.out('Exception "%s" on line: %d' % (sys.exc_info()[1], sys.exc_info()[2].tb_lineno)) | Tilapiatsu/modo-tila_customconfig | mc_lxRename/Scripts/mc_lxRename_rename.py | mc_lxRename_rename.py | py | 949 | python | en | code | 2 | github-code | 13 |
25123696191 | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Feb 14 22:19:03 2023
@author: Themanwhosoldtheworld
https://leetcode.com/problems/sqrtx/
"""
class Solution:
def mySqrt(self, x: int) -> int:
low=0
high=x
while(low<=high):
mid=low+(high-low)//2
if (mid*mid<=x and (mid+1)*(mid+1)>x):
return mid
elif mid*mid<x:
low=mid+1
else:
high=mid-1
return low | themanwhosoldtheworld7/LeetCode-Python | Sqrt.py | Sqrt.py | py | 498 | python | en | code | 0 | github-code | 13 |
15550968235 | from pwn import *
from pwn import p64
from ctypes import *
debug = 0
gdb_is = 0
# context(arch='i386',os = 'linux', log_level='DEBUG')
context(arch='amd64',os = 'linux', log_level='DEBUG')
if debug:
context.terminal = ['/mnt/c/Users/sagiriking/AppData/Local/Microsoft/WindowsApps/wt.exe','nt','Ubuntu','-c']
r = process("./pwn")
else:
host = "challenge-a75c68062475dd2f.sandbox.ctfhub.com"
    r = connect(host,21831)  # remote connection
gdb_is =0
if gdb_is:
gdb.attach(r,'b* 0x40078b')
pause()
pass
libc = cdll.LoadLibrary('libc.so.6')
elf = ELF('./pwn')
r.sendlineafter(b'someting:\n',b'A'* 0x70 + b'junkjunk'+ p64(0x000400285)+p64(0x0400777))
v0 = libc.time(0)
libc.srand(v0)
v3 = libc.rand()
print(f'v0 = {hex(v0)}')
print(f'v3 = {hex(v3)}')
r.sendline(str(v3).encode())
r.interactive()
| Sagiring/Sagiring_pwn | hub/one_hub/hub_one.py | hub_one.py | py | 859 | python | en | code | 1 | github-code | 13 |
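The exploit works because the service seeds libc's `rand()` with `time(0)`, so a local libc driven through `ctypes` reproduces the exact same stream. A minimal sketch of that determinism (Linux-only; the `libc.so.6` name is assumed):

```python
import ctypes

# Load the C library; on glibc-based Linux systems it is named libc.so.6.
libc = ctypes.CDLL("libc.so.6")

def rand_stream(seed, n):
    """Seed libc's PRNG with srand() and return its first n rand() outputs."""
    libc.srand(seed)
    return [libc.rand() for _ in range(n)]
```

Same seed, same stream — which is exactly what lets the client at the other end of the socket predict the "random" check value.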
28389657662 | import sys, os, time, pyautogui, math
PACKAGE_PARENT = '../..'
SCRIPT_DIR = os.path.dirname(os.path.realpath(os.path.join(os.getcwd(), os.path.expanduser(__file__))))
sys.path.append(os.path.normpath(os.path.join(SCRIPT_DIR, PACKAGE_PARENT)))
from master_bot.master import Bot as runeBot
player = runeBot(os.path.normpath(os.path.join(SCRIPT_DIR, PACKAGE_PARENT)))
#player.selectThing("bank")
#player.resetCamera()
#player.bankWith2ItemCraft("clay","water-bucket",count=14)
#player.bankWith2ItemCraft("dough","dish",count=14,report=False)
count = 5000
while count > 1:
count-=1
for i in range(0,3):
player.selectThing("mobs/chickenCenter")
time.sleep(1)
time.sleep(5)
for i in range(0,360,45):
for dis in range(40,200,40):
            click_x = math.cos(math.radians(i)) * dis  # i is in degrees; math.cos expects radians
            click_y = math.sin(math.radians(i)) * dis
player.clickMouse(pos=[player.center[0] + click_y,player.center[1] + click_x],mouseType="right")
time.sleep(0.1)
if player.selectThing("mobs/chickenAttack"):
time.sleep(10)
else: pyautogui.click()
| SimSam115/orsr_bbb | tests/usingMaster_bot/killChickens.py | killChickens.py | py | 1,119 | python | en | code | 0 | github-code | 13 |
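The nested loops above sweep click targets over concentric rings around the screen center: an angle every 45° and several radii per angle. The same polar-to-Cartesian pattern can be sketched as a pure function (names hypothetical):

```python
import math

def ring_offsets(step_deg=45, radii=(40, 80, 120, 160)):
    """(dx, dy) click offsets on concentric rings around a center point."""
    pts = []
    for deg in range(0, 360, step_deg):
        rad = math.radians(deg)        # math.cos/math.sin take radians
        for dis in radii:
            pts.append((dis * math.cos(rad), dis * math.sin(rad)))
    return pts
```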
29478144455 | def main():
#write your code below this line
i = 0
num = int(input("How many times?"))
while (i < num):
print_text()
i += 1
def print_text():
print("In a hole in the ground there lived a method")
if __name__ == '__main__':
main()
| den01-python-programming-exercises/exercise-2-22-reprint-jakeleesh | src/exercise.py | exercise.py | py | 272 | python | en | code | 0 | github-code | 13 |
12653066200 | import numpy as np
import cv2 as cv
#--- Arithmetic operations on images: addition ---
x = np.uint8([250])
y = np.uint8([10])
# 250+10 = 260 => 255: OpenCV addition is a saturating operation
print(cv.add(x, y))
# 250+10 = 260 % 256 = 4: NumPy addition is a modular operation
print(x+y)
#--- Arithmetic operations on images: image blending (two images) ---
# Give the images different weights to create a blended or transparent effect
img1 = cv.imread('D:/PycharmProjects/pythonProject1/Opencv 4.5/images/add1.jpg')
img2 = cv.imread('D:/PycharmProjects/pythonProject1/Opencv 4.5/images/add2.jpg')
# cv.imshow('img1', img1)
# cv.imshow('img2', img2)
# Note: the two images must have the same depth and type
dst = cv.addWeighted(img1, 0.7, img2, 0.3, 0)  # weights of img1 and img2 are 0.7 and 0.3 respectively
cv.imshow('dst', dst)
cv.waitKey(0)
cv.destroyAllWindows()
| Darling1116/Greeting_1116 | Opencv/lesson_3/Add_1.py | Add_1.py | py | 808 | python | zh | code | 0 | github-code | 13 |
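The saturating behavior of `cv.add` can be reproduced in plain NumPy by widening to a larger dtype before adding and clipping back — a sketch that makes the uint8 wrap-around vs. saturation contrast testable without OpenCV installed:

```python
import numpy as np

x = np.uint8([250])
y = np.uint8([10])

# NumPy uint8 addition wraps modulo 256: (250 + 10) % 256 = 4
wrapped = x + y

# Saturating add: widen so the sum fits, then clip to the uint8 range
saturated = np.clip(x.astype(np.int16) + y.astype(np.int16), 0, 255).astype(np.uint8)
```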
40424741685 | from data_structures.hashtable import Hashtable
import re
def hashtable_repeated_word(word):
regex_string = re.compile('[^a-zA-Z ]')
words_strip = regex_string.sub('', word)
words = words_strip.lower().split()
    seen = set()
    for word in words:
        if word in seen:
            return word
        else:
            seen.add(word)
| LieslW/data-structures-and-algorithms | python/code_challenges/hashtable_repeated_word.py | hashtable_repeated_word.py | py | 356 | python | en | code | 0 | github-code | 13 |
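Despite the `Hashtable` import, the function above needs only constant-time set membership to find the first repeated word. A self-contained restatement with the same normalization (strip non-letters, lowercase, split):

```python
import re

def first_repeated_word(text):
    """Return the first word appearing twice, ignoring case and punctuation."""
    words = re.sub(r"[^a-zA-Z ]", "", text).lower().split()
    seen = set()
    for word in words:
        if word in seen:
            return word   # first word whose second occurrence we hit
        seen.add(word)
    return None           # no word repeats
```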
71190983699 | import random  # import the random module to simulate randomly lost packets
from socket import *
# Create a UDP socket
serverSocket = socket(AF_INET, SOCK_DGRAM)
serverSocket.bind(('', 12000))
print('Server started!\n')
while True:
    # Generate a random integer from 0 to 10
    rand = random.randint(0, 10)
    # Receive the client packet along with the client address
    message, address = serverSocket.recvfrom(1024)
    print(message.decode() + '\n')
    # Uppercase the message from the client
    message = message.upper()
    # If the random integer is less than 4, do not send a reply (simulated loss)
    if rand < 4:
        continue
serverSocket.sendto(message, address) | young-trigold/computer_networking | socket_propramming/udp_ping/UDPPingServer.py | UDPPingServer.py | py | 633 | python | zh | code | 1 | github-code | 13 |
71471396177 | from typing import Any, Dict, Sequence
import numpy as np
import onnx
from onnx.backend.test.case.base import Base
from onnx.backend.test.case.node import expect
class Concat(Base):
@staticmethod
def export() -> None:
test_cases: Dict[str, Sequence[Any]] = {
"1d": ([1, 2], [3, 4]),
"2d": ([[1, 2], [3, 4]], [[5, 6], [7, 8]]),
"3d": (
[[[1, 2], [3, 4]], [[5, 6], [7, 8]]],
[[[9, 10], [11, 12]], [[13, 14], [15, 16]]],
),
}
for test_case, values_ in test_cases.items():
values = [np.asarray(v, dtype=np.float32) for v in values_]
for i in range(len(values[0].shape)):
in_args = ["value" + str(k) for k in range(len(values))]
node = onnx.helper.make_node(
"Concat", inputs=list(in_args), outputs=["output"], axis=i
)
output = np.concatenate(values, i)
expect(
node,
inputs=list(values),
outputs=[output],
name="test_concat_" + test_case + "_axis_" + str(i),
)
for i in range(-len(values[0].shape), 0):
in_args = ["value" + str(k) for k in range(len(values))]
node = onnx.helper.make_node(
"Concat", inputs=list(in_args), outputs=["output"], axis=i
)
output = np.concatenate(values, i)
expect(
node,
inputs=list(values),
outputs=[output],
name="test_concat_" + test_case + "_axis_negative_" + str(abs(i)),
)
| onnx/onnx | onnx/backend/test/case/node/concat.py | concat.py | py | 1,751 | python | en | code | 15,924 | github-code | 13 |
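The second loop above exercises negative axis values: NumPy (and the ONNX `Concat` spec) treat `axis=-k` as `ndim - k`, so both spellings must produce identical concatenations. A quick standalone check of that equivalence:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]], dtype=np.float32)
b = np.array([[5, 6], [7, 8]], dtype=np.float32)

# axis=-1 addresses the last dimension, i.e. axis=1 for these 2-D inputs
out_neg = np.concatenate([a, b], axis=-1)
out_pos = np.concatenate([a, b], axis=1)
```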
30774665276 | from collections import Counter, namedtuple
import traceback
import numpy as np
import pandas as pd
import pytz
import statsmodels.formula.api as smf
from ..exceptions import MissingModelParameterError, UnrecognizedModelTypeError
from ..features import compute_temperature_features
from ..metrics import ModelMetrics
from ..transform import (
day_counts,
overwrite_partial_rows_with_nan,
)
from ..warnings import EEMeterWarning
__all__ = (
"CalTRACKUsagePerDayCandidateModel",
"CalTRACKUsagePerDayModelResults",
"DataSufficiency",
"ModelPrediction",
"fit_caltrack_usage_per_day_model",
"caltrack_sufficiency_criteria",
"caltrack_usage_per_day_predict",
"plot_caltrack_candidate",
"get_too_few_non_zero_degree_day_warning",
"get_total_degree_day_too_low_warning",
"get_parameter_negative_warning",
"get_parameter_p_value_too_high_warning",
"get_single_cdd_only_candidate_model",
"get_single_hdd_only_candidate_model",
"get_single_cdd_hdd_candidate_model",
"get_intercept_only_candidate_models",
"get_cdd_only_candidate_models",
"get_hdd_only_candidate_models",
"get_cdd_hdd_candidate_models",
"select_best_candidate",
)
ModelPrediction = namedtuple("ModelPrediction", ["result", "design_matrix", "warnings"])
def _noneify(value):
if value is None:
return None
return None if np.isnan(value) else value
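`_noneify` exists because NaN is a valid float but serializes as the non-standard token `NaN` under `json.dumps`; mapping it to `None` yields a proper JSON `null`. A standalone restatement of the behavior using only the standard library (`math.isnan` in place of `np.isnan`):

```python
import json
import math

def noneify(value):
    """Map NaN to None so the value serializes as JSON null."""
    if value is None:
        return None
    return None if math.isnan(value) else value
```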
class CalTRACKUsagePerDayModelResults(object):
"""Contains information about the chosen model.
Attributes
----------
status : :any:`str`
A string indicating the status of this result. Possible statuses:
- ``'NO DATA'``: No baseline data was available.
- ``'NO MODEL'``: No candidate models qualified.
- ``'SUCCESS'``: A qualified candidate model was chosen.
method_name : :any:`str`
The name of the method used to fit the baseline model.
model : :any:`eemeter.CalTRACKUsagePerDayCandidateModel` or :any:`None`
The selected candidate model, if any.
r_squared_adj : :any:`float`
The adjusted r-squared of the selected model.
candidates : :any:`list` of :any:`eemeter.CalTRACKUsagePerDayCandidateModel`
A list of any model candidates encountered during the model
selection and fitting process.
warnings : :any:`list` of :any:`eemeter.EEMeterWarning`
A list of any warnings reported during the model selection and fitting
process.
metadata : :any:`dict`
An arbitrary dictionary of metadata to be associated with this result.
This can be used, for example, to tag the results with attributes like
an ID::
{
'id': 'METER_12345678',
}
settings : :any:`dict`
A dictionary of settings used by the method.
totals_metrics : :any:`ModelMetrics`
A ModelMetrics object, if one is calculated and associated with this
model. (This initializes to None.) The ModelMetrics object contains
model fit information and descriptive statistics about the underlying data,
with that data expressed as period totals.
avgs_metrics : :any:`ModelMetrics`
A ModelMetrics object, if one is calculated and associated with this
model. (This initializes to None.) The ModelMetrics object contains
model fit information and descriptive statistics about the underlying data,
with that data expressed as daily averages.
"""
def __init__(
self,
status,
method_name,
interval=None,
model=None,
r_squared_adj=None,
candidates=None,
warnings=None,
metadata=None,
settings=None,
):
self.status = status # NO DATA | NO MODEL | SUCCESS
self.method_name = method_name
self.interval = interval
self.model = model
self.r_squared_adj = r_squared_adj
if candidates is None:
candidates = []
self.candidates = candidates
if warnings is None:
warnings = []
self.warnings = warnings
if metadata is None:
metadata = {}
self.metadata = metadata
if settings is None:
settings = {}
self.settings = settings
self.totals_metrics = None
self.avgs_metrics = None
def __repr__(self):
return (
"CalTRACKUsagePerDayModelResults(status='{}', method_name='{}',"
" r_squared_adj={})".format(
self.status, self.method_name, self.r_squared_adj
)
)
def json(self, with_candidates=False):
"""Return a JSON-serializable representation of this result.
The output of this function can be converted to a serialized string
with :any:`json.dumps`.
"""
def _json_or_none(obj):
return None if obj is None else obj.json()
data = {
"status": self.status,
"method_name": self.method_name,
"interval": self.interval,
"model": _json_or_none(self.model),
"r_squared_adj": _noneify(self.r_squared_adj),
"warnings": [w.json() for w in self.warnings],
"metadata": self.metadata,
"settings": self.settings,
"totals_metrics": _json_or_none(self.totals_metrics),
"avgs_metrics": _json_or_none(self.avgs_metrics),
"candidates": None,
}
if with_candidates:
data["candidates"] = [candidate.json() for candidate in self.candidates]
return data
@classmethod
def from_json(cls, data):
"""Loads a JSON-serializable representation into the model state.
The input of this function is a dict which can be the result
of :any:`json.loads`.
"""
# "model" is a CalTRACKUsagePerDayCandidateModel that was serialized
model = None
d = data.get("model")
if d:
model = CalTRACKUsagePerDayCandidateModel.from_json(d)
c = cls(
data.get("status"),
data.get("method_name"),
interval=data.get("interval"),
model=model,
r_squared_adj=data.get("r_squared_adj"),
candidates=data.get("candidates"),
warnings=data.get("warnings"),
metadata=data.get("metadata"),
settings=data.get("settings"),
)
# Note the metrics do not contain all the data needed
# for reconstruction (like the input pandas) ...
d = data.get("avgs_metrics")
if d:
c.avgs_metrics = ModelMetrics.from_json(d)
d = data.get("totals_metrics")
if d:
c.totals_metrics = ModelMetrics.from_json(d)
return c
def predict(
self,
prediction_index,
temperature_data,
with_disaggregated=False,
with_design_matrix=False,
**kwargs
):
return self.model.predict(
prediction_index,
temperature_data,
with_disaggregated=with_disaggregated,
with_design_matrix=with_design_matrix,
**kwargs
)
def plot(
self,
ax=None,
title=None,
figsize=None,
with_candidates=False,
candidate_alpha=None,
temp_range=None,
):
"""Plot a model fit.
Parameters
----------
ax : :any:`matplotlib.axes.Axes`, optional
Existing axes to plot on.
title : :any:`str`, optional
Chart title.
figsize : :any:`tuple`, optional
(width, height) of chart.
with_candidates : :any:`bool`
If True, also plot candidate models.
candidate_alpha : :any:`float` between 0 and 1
Transparency at which to plot candidate models. 0 fully transparent,
1 fully opaque.
Returns
-------
ax : :any:`matplotlib.axes.Axes`
Matplotlib axes.
"""
try:
import matplotlib.pyplot as plt
except ImportError: # pragma: no cover
raise ImportError("matplotlib is required for plotting.")
if figsize is None:
figsize = (10, 4)
if ax is None:
fig, ax = plt.subplots(figsize=figsize)
if temp_range is None:
temp_range = (20, 90)
if with_candidates:
for candidate in self.candidates:
candidate.plot(ax=ax, temp_range=temp_range, alpha=candidate_alpha)
self.model.plot(ax=ax, best=True, temp_range=temp_range)
if title is not None:
ax.set_title(title)
return ax
class CalTRACKUsagePerDayCandidateModel(object):
"""Contains information about a candidate model.
Attributes
----------
model_type : :any:`str`
        The type of model, e.g., :code:`'hdd_only'`.
formula : :any:`str`
The R-style formula for the design matrix of this model, e.g., :code:`'meter_value ~ hdd_65'`.
status : :any:`str`
A string indicating the status of this model. Possible statuses:
- ``'NOT ATTEMPTED'``: Candidate model not fitted due to an issue
encountered in data before attempt.
- ``'ERROR'``: A fatal error occurred during model fit process.
- ``'DISQUALIFIED'``: The candidate model fit was disqualified
from the model selection process because of a decision made after
candidate model fit completed, e.g., a bad fit, or a parameter out
of acceptable range.
- ``'QUALIFIED'``: The candidate model fit is acceptable and can be
considered during model selection.
model_params : :any:`dict`, default :any:`None`
A flat dictionary of model parameters which must be serializable
using the :any:`json.dumps` method.
model : :any:`object`
The raw model (if any) used in fitting. Not serialized.
result : :any:`object`
The raw modeling result (if any) returned by the `model`. Not serialized.
r_squared_adj : :any:`float`
The adjusted r-squared of the candidate model.
warnings : :any:`list` of :any:`eemeter.EEMeterWarning`
A list of any warnings reported during creation of the candidate model.
"""
def __init__(
self,
model_type,
formula,
status,
model_params=None,
model=None,
result=None,
r_squared_adj=None,
warnings=None,
):
self.model_type = model_type
self.formula = formula
self.status = status # NOT ATTEMPTED | ERROR | QUALIFIED | DISQUALIFIED
self.model = model
self.result = result
self.r_squared_adj = r_squared_adj
if model_params is None:
model_params = {}
self.model_params = model_params
if warnings is None:
warnings = []
self.warnings = warnings
def __repr__(self):
return (
"CalTRACKUsagePerDayCandidateModel(model_type='{}', formula='{}', status='{}',"
" r_squared_adj={})".format(
self.model_type,
self.formula,
self.status,
round(self.r_squared_adj, 3)
if self.r_squared_adj is not None
else None,
)
)
def json(self):
"""Return a JSON-serializable representation of this result.
The output of this function can be converted to a serialized string
with :any:`json.dumps`.
"""
return {
"model_type": self.model_type,
"formula": self.formula,
"status": self.status,
"model_params": self.model_params,
"r_squared_adj": _noneify(self.r_squared_adj),
"warnings": [w.json() for w in self.warnings],
}
@classmethod
def from_json(cls, data):
"""Loads a JSON-serializable representation into the model state.
The input of this function is a dict which can be the result
of :any:`json.loads`.
"""
c = cls(
data.get("model_type"),
data.get("formula"),
data.get("status"),
model_params=data.get("model_params"),
r_squared_adj=data.get("r_squared_adj"),
warnings=data.get("warnings"),
)
return c
def predict(
self,
prediction_index,
temperature_data,
with_disaggregated=False,
with_design_matrix=False,
**kwargs
):
"""Predict"""
return caltrack_usage_per_day_predict(
self.model_type,
self.model_params,
prediction_index,
temperature_data,
with_disaggregated=with_disaggregated,
with_design_matrix=with_design_matrix,
**kwargs
)
def plot(
self,
best=False,
ax=None,
title=None,
figsize=None,
temp_range=None,
alpha=None,
**kwargs
):
"""Plot"""
return plot_caltrack_candidate(
self,
best=best,
ax=ax,
title=title,
figsize=figsize,
temp_range=temp_range,
alpha=alpha,
**kwargs
)
class DataSufficiency(object):
"""Contains the result of a data sufficiency check.
Attributes
----------
status : :any:`str`
A string indicating the status of this result. Possible statuses:
- ``'NO DATA'``: No baseline data was available.
- ``'FAIL'``: Data did not meet criteria.
- ``'PASS'``: Data met criteria.
criteria_name : :any:`str`
The name of the criteria method used to check for baseline data sufficiency.
warnings : :any:`list` of :any:`eemeter.EEMeterWarning`
A list of any warnings reported during the check for baseline data sufficiency.
data : :any:`dict`
A dictionary of data related to determining whether a warning should be generated.
settings : :any:`dict`
A dictionary of settings (keyword arguments) used.
"""
def __init__(self, status, criteria_name, warnings=None, data=None, settings=None):
self.status = status # NO DATA | FAIL | PASS
self.criteria_name = criteria_name
if warnings is None:
warnings = []
self.warnings = warnings
if data is None:
data = {}
self.data = data
if settings is None:
settings = {}
self.settings = settings
def __repr__(self):
return (
"DataSufficiency("
"status='{status}', criteria_name='{criteria_name}')".format(
status=self.status, criteria_name=self.criteria_name
)
)
def json(self):
"""Return a JSON-serializable representation of this result.
The output of this function can be converted to a serialized string
with :any:`json.dumps`.
"""
return {
"status": self.status,
"criteria_name": self.criteria_name,
"warnings": [w.json() for w in self.warnings],
"data": self.data,
"settings": self.settings,
}
def _get_parameter_or_raise(model_type, model_params, param):
try:
return model_params[param]
except KeyError:
raise MissingModelParameterError(
'"{}" parameter required for model_type: {}'.format(param, model_type)
)
def _caltrack_predict_design_matrix(
model_type,
model_params,
data,
disaggregated=False,
input_averages=False,
output_averages=False,
):
"""An internal CalTRACK predict method for use with a design matrix of the form
used in model fitting.
Given a set model type, parameters, and daily temperatures, return model
predictions.
Parameters
----------
model_type : :any:`str`
Model type (e.g., ``'cdd_hdd'``).
model_params : :any:`dict`
Parameters as stored in :any:`eemeter.CalTRACKUsagePerDayCandidateModel.model_params`.
data : :any:`pandas.DataFrame`
Data over which to predict. Assumed to be like the format of the data used
for fitting, although it need only have the columns. If not giving data
with a `pandas.DatetimeIndex` it must have the column `n_days`,
representing the number of days per prediction period (otherwise
inferred from DatetimeIndex).
disaggregated : :any:`bool`, optional
If True, return results as a :any:`pandas.DataFrame` with columns
``'base_load'``, ``'heating_load'``, and ``'cooling_load'``
input_averages : :any:`bool`, optional
If HDD and CDD columns expressed as period totals, select False. If HDD
and CDD columns expressed as period averages, select True. If prediction
period is daily, results should be the same either way. Matters for billing.
output_averages : :any:`bool`, optional
If True, prediction returned as averages (not totals). If False, returned
as totals.
Returns
-------
prediction : :any:`pandas.Series` or :any:`pandas.DataFrame`
Returns results as series unless ``disaggregated=True``.
"""
zeros = pd.Series(0, index=data.index)
ones = zeros + 1
if isinstance(data.index, pd.DatetimeIndex):
days_per_period = day_counts(data.index)
else:
try:
days_per_period = data["n_days"]
except KeyError:
raise ValueError("Data needs DatetimeIndex or an n_days column.")
# TODO(philngo): handle different degree day methods and hourly temperatures
if model_type in ["intercept_only", "hdd_only", "cdd_only", "cdd_hdd"]:
intercept = _get_parameter_or_raise(model_type, model_params, "intercept")
if output_averages == False:
base_load = intercept * days_per_period
else:
base_load = intercept * ones
elif model_type is None:
raise ValueError("Model not valid for prediction: model_type=None")
else:
raise UnrecognizedModelTypeError(
"invalid caltrack model type: {}".format(model_type)
)
if model_type in ["hdd_only", "cdd_hdd"]:
beta_hdd = _get_parameter_or_raise(model_type, model_params, "beta_hdd")
heating_balance_point = _get_parameter_or_raise(
model_type, model_params, "heating_balance_point"
)
hdd_column_name = "hdd_%s" % heating_balance_point
hdd = data[hdd_column_name]
if input_averages == True and output_averages == False:
heating_load = hdd * beta_hdd * days_per_period
elif input_averages == True and output_averages == True:
heating_load = hdd * beta_hdd
elif input_averages == False and output_averages == False:
heating_load = hdd * beta_hdd
else:
heating_load = hdd * beta_hdd / days_per_period
else:
heating_load = zeros
if model_type in ["cdd_only", "cdd_hdd"]:
beta_cdd = _get_parameter_or_raise(model_type, model_params, "beta_cdd")
cooling_balance_point = _get_parameter_or_raise(
model_type, model_params, "cooling_balance_point"
)
cdd_column_name = "cdd_%s" % cooling_balance_point
cdd = data[cdd_column_name]
if input_averages == True and output_averages == False:
cooling_load = cdd * beta_cdd * days_per_period
elif input_averages == True and output_averages == True:
cooling_load = cdd * beta_cdd
elif input_averages == False and output_averages == False:
cooling_load = cdd * beta_cdd
else:
cooling_load = cdd * beta_cdd / days_per_period
else:
cooling_load = zeros
# If any of the rows of input data contained NaNs, restore the NaNs
# Note: If data contains ANY NaNs at all, this declares the entire row a NaN.
# TODO(philngo): Consider making this more nuanced.
def _restore_nans(load):
load = load[data.sum(axis=1, skipna=False).notnull()].reindex(data.index)
return load
base_load = _restore_nans(base_load)
heating_load = _restore_nans(heating_load)
cooling_load = _restore_nans(cooling_load)
if disaggregated:
return pd.DataFrame(
{
"base_load": base_load,
"heating_load": heating_load,
"cooling_load": cooling_load,
}
)
else:
return base_load + heating_load + cooling_load
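For an `hdd_only` model with `input_averages=False` and `output_averages=False`, the arithmetic above reduces to `intercept * n_days + beta_hdd * HDD_total` per period: only the intercept is scaled by period length, because the degree days are already period totals. A toy standalone illustration (parameter values hypothetical; this is not the eemeter API):

```python
import pandas as pd

# Hypothetical fitted parameters for an hdd_only model
params = {"intercept": 5.0, "beta_hdd": 0.3}

# Three billing periods: total heating degree days and days per period
data = pd.DataFrame({"hdd_65": [10.0, 0.0, 4.0], "n_days": [30, 30, 31]})

# Totals in, totals out: scale the intercept by period length only
base_load = params["intercept"] * data["n_days"]
heating_load = params["beta_hdd"] * data["hdd_65"]
predicted = base_load + heating_load
```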
def caltrack_usage_per_day_predict(
model_type,
model_params,
prediction_index,
temperature_data,
degree_day_method="daily",
with_disaggregated=False,
with_design_matrix=False,
):
"""CalTRACK predict method.
Given a model type, parameters, hourly temperatures, a
:any:`pandas.DatetimeIndex` index over which to predict meter usage,
return model predictions as totals for the period (so billing period totals,
daily totals, etc.). Optionally include the computed design matrix or
disaggregated usage in the output dataframe.
Parameters
----------
model_type : :any:`str`
Model type (e.g., ``'cdd_hdd'``).
model_params : :any:`dict`
Parameters as stored in :any:`eemeter.CalTRACKUsagePerDayCandidateModel.model_params`.
temperature_data : :any:`pandas.DataFrame`
Hourly temperature data to use for prediction. Time period should match
the ``prediction_index`` argument.
prediction_index : :any:`pandas.DatetimeIndex`
Time period over which to predict.
with_disaggregated : :any:`bool`, optional
If True, return results as a :any:`pandas.DataFrame` with columns
``'base_load'``, ``'heating_load'``, and ``'cooling_load'``.
with_design_matrix : :any:`bool`, optional
If True, return results as a :any:`pandas.DataFrame` with columns
``'n_days'``, ``'n_days_dropped'``, ``n_days_kept``, and
``temperature_mean``.
Returns
-------
prediction : :any:`pandas.DataFrame`
Columns are as follows:
- ``predicted_usage``: Predicted usage values computed to match
``prediction_index``.
- ``base_load``: modeled base load (only for ``with_disaggregated=True``).
- ``cooling_load``: modeled cooling load (only for ``with_disaggregated=True``).
- ``heating_load``: modeled heating load (only for ``with_disaggregated=True``).
- ``n_days``: number of days in period (only for ``with_design_matrix=True``).
- ``n_days_dropped``: number of days dropped because of insufficient
data (only for ``with_design_matrix=True``).
- ``n_days_kept``: number of days kept because of sufficient data
(only for ``with_design_matrix=True``).
- ``temperature_mean``: mean temperature during given period.
(only for ``with_design_matrix=True``).
    predict_warnings : :any:`list` of :any:`eemeter.EEMeterWarning`, if any.
"""
if model_params is None:
raise MissingModelParameterError("model_params is None.")
predict_warnings = []
cooling_balance_points = []
heating_balance_points = []
if "cooling_balance_point" in model_params:
cooling_balance_points.append(model_params["cooling_balance_point"])
if "heating_balance_point" in model_params:
heating_balance_points.append(model_params["heating_balance_point"])
design_matrix = compute_temperature_features(
prediction_index,
temperature_data,
heating_balance_points=heating_balance_points,
cooling_balance_points=cooling_balance_points,
degree_day_method=degree_day_method,
use_mean_daily_values=False,
)
if degree_day_method == "daily":
design_matrix["n_days"] = (
design_matrix.n_days_kept + design_matrix.n_days_dropped
)
else:
design_matrix["n_days"] = (
design_matrix.n_hours_kept + design_matrix.n_hours_dropped
) / 24
if design_matrix.dropna().empty:
if with_disaggregated:
empty_columns = {
"predicted_usage": [],
"base_load": [],
"heating_load": [],
"cooling_load": [],
}
else:
empty_columns = {"predicted_usage": []}
if with_design_matrix:
empty_columns.update({col: [] for col in design_matrix.columns})
predict_warnings.append(
EEMeterWarning(
qualified_name=("eemeter.caltrack.compute_temperature_features"),
description=(
"Design matrix empty, compute_temperature_features failed"
),
data={"temperature_data": temperature_data},
)
)
return ModelPrediction(
pd.DataFrame(empty_columns),
design_matrix=pd.DataFrame(),
warnings=predict_warnings,
)
results = _caltrack_predict_design_matrix(
model_type,
model_params,
design_matrix,
input_averages=False,
output_averages=False,
).to_frame("predicted_usage")
if with_disaggregated:
disaggregated = _caltrack_predict_design_matrix(
model_type,
model_params,
design_matrix,
disaggregated=True,
input_averages=False,
output_averages=False,
)
results = results.join(disaggregated)
if with_design_matrix:
results = results.join(design_matrix)
return ModelPrediction(
result=results, design_matrix=design_matrix, warnings=predict_warnings
)
def get_too_few_non_zero_degree_day_warning(
model_type, balance_point, degree_day_type, degree_days, minimum_non_zero
):
"""Return an empty list or a single warning wrapped in a list regarding
non-zero degree days for a set of degree days.
Parameters
----------
model_type : :any:`str`
Model type (e.g., ``'cdd_hdd'``).
balance_point : :any:`float`
The balance point in question.
degree_day_type : :any:`str`
The type of degree days (``'cdd'`` or ``'hdd'``).
degree_days : :any:`pandas.Series`
A series of degree day values.
minimum_non_zero : :any:`int`
Minimum allowable number of non-zero degree day values.
Returns
-------
warnings : :any:`list` of :any:`eemeter.EEMeterWarning`
Empty list or list of single warning.
"""
warnings = []
n_non_zero = int((degree_days > 0).sum())
if n_non_zero < minimum_non_zero:
warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_daily.{model_type}.too_few_non_zero_{degree_day_type}".format(
model_type=model_type, degree_day_type=degree_day_type
)
),
description=(
"Number of non-zero daily {degree_day_type} values below accepted minimum."
" Candidate fit not attempted.".format(
degree_day_type=degree_day_type.upper()
)
),
data={
"n_non_zero_{degree_day_type}".format(
degree_day_type=degree_day_type
): n_non_zero,
"minimum_non_zero_{degree_day_type}".format(
degree_day_type=degree_day_type
): minimum_non_zero,
"{degree_day_type}_balance_point".format(
degree_day_type=degree_day_type
): balance_point,
},
)
)
return warnings
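The non-zero check above can be sketched without pandas — a minimal illustration of the same counting logic, using a plain list in place of the degree-day Series the real function receives:

```python
# Hypothetical degree-day values for six periods (plain list standing in
# for a pandas Series; the real code uses (degree_days > 0).sum()).
degree_days = [0.0, 0.0, 3.5, 12.1, 0.0, 8.0]
minimum_non_zero = 4

# Count strictly positive values, mirroring the boolean-sum in the function.
n_non_zero = sum(1 for v in degree_days if v > 0)

# The warning fires when the count falls below the minimum.
should_warn = n_non_zero < minimum_non_zero
```

With three non-zero values against a minimum of four, the warning would be emitted.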
def get_total_degree_day_too_low_warning(
model_type,
balance_point,
degree_day_type,
avg_degree_days,
period_days,
minimum_total,
):
"""Return an empty list or a single warning wrapped in a list regarding
the total summed degree day values.
Parameters
----------
model_type : :any:`str`
Model type (e.g., ``'cdd_hdd'``).
balance_point : :any:`float`
The balance point in question.
degree_day_type : :any:`str`
The type of degree days (``'cdd'`` or ``'hdd'``).
avg_degree_days : :any:`pandas.Series`
A series of average degree day values.
period_days : :any:`pandas.Series`
A series containing day counts.
minimum_total : :any:`float`
Minimum allowable total sum of degree day values.
Returns
-------
warnings : :any:`list` of :any:`eemeter.EEMeterWarning`
Empty list or list of single warning.
"""
warnings = []
total_degree_days = (avg_degree_days * period_days).sum()
if total_degree_days < minimum_total:
warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_daily.{model_type}.total_{degree_day_type}_too_low".format(
model_type=model_type, degree_day_type=degree_day_type
)
),
description=(
"Total {degree_day_type} below accepted minimum."
" Candidate fit not attempted.".format(
degree_day_type=degree_day_type.upper()
)
),
data={
"total_{degree_day_type}".format(
degree_day_type=degree_day_type
): total_degree_days,
"total_{degree_day_type}_minimum".format(
degree_day_type=degree_day_type
): minimum_total,
"{degree_day_type}_balance_point".format(
degree_day_type=degree_day_type
): balance_point,
},
)
)
return warnings
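The total is a day-weighted sum of per-period averages; a plain-list sketch of the same arithmetic (the real function does this with a pandas element-wise product and `.sum()`):

```python
# Hypothetical per-period average degree days and period lengths in days.
avg_degree_days = [1.2, 0.0, 3.4]
period_days = [30, 31, 29]

# Weight each average by its period length, then sum:
# 1.2*30 + 0.0*31 + 3.4*29 = 134.6
total_degree_days = sum(a * d for a, d in zip(avg_degree_days, period_days))
```

If `total_degree_days` falls below `minimum_total`, the warning is appended.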
def get_parameter_negative_warning(model_type, model_params, parameter):
"""Return an empty list or a single warning wrapped in a list indicating
whether model parameter is negative.
Parameters
----------
model_type : :any:`str`
Model type (e.g., ``'cdd_hdd'``).
model_params : :any:`dict`
Parameters as stored in :any:`eemeter.CalTRACKUsagePerDayCandidateModel.model_params`.
parameter : :any:`str`
The name of the parameter, e.g., ``'intercept'``.
Returns
-------
warnings : :any:`list` of :any:`eemeter.EEMeterWarning`
Empty list or list of single warning.
"""
warnings = []
if model_params.get(parameter, 0) < 0:
warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_daily.{model_type}.{parameter}_negative".format(
model_type=model_type, parameter=parameter
)
),
description=(
"Model fit {parameter} parameter is negative. Candidate model rejected.".format(
parameter=parameter
)
),
data=model_params,
)
)
return warnings
def get_parameter_p_value_too_high_warning(
model_type, model_params, parameter, p_value, maximum_p_value
):
"""Return an empty list or a single warning wrapped in a list indicating
whether model parameter p-value is too high.
Parameters
----------
model_type : :any:`str`
Model type (e.g., ``'cdd_hdd'``).
model_params : :any:`dict`
Parameters as stored in :any:`eemeter.CalTRACKUsagePerDayCandidateModel.model_params`.
parameter : :any:`str`
The name of the parameter, e.g., ``'intercept'``.
p_value : :any:`float`
The p-value of the parameter.
maximum_p_value : :any:`float`
The maximum allowable p-value of the parameter.
Returns
-------
warnings : :any:`list` of :any:`eemeter.EEMeterWarning`
Empty list or list of single warning.
"""
warnings = []
if p_value > maximum_p_value:
data = {
"{}_p_value".format(parameter): p_value,
"{}_maximum_p_value".format(parameter): maximum_p_value,
}
data.update(model_params)
warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_daily.{model_type}.{parameter}_p_value_too_high".format(
model_type=model_type, parameter=parameter
)
),
description=(
"Model fit {parameter} p-value is too high. Candidate model rejected.".format(
parameter=parameter
)
),
data=data,
)
)
return warnings
def get_fit_failed_candidate_model(model_type, formula):
"""Return a Candidate model that indicates the fitting routine failed.
Parameters
----------
model_type : :any:`str`
Model type (e.g., ``'cdd_hdd'``).
formula : :any:`str`
The candidate model formula.
Returns
-------
candidate_model : :any:`eemeter.CalTRACKUsagePerDayCandidateModel`
Candidate model instance with status ``'ERROR'``, and warning with
traceback.
"""
warnings = [
EEMeterWarning(
qualified_name="eemeter.caltrack_daily.{}.model_results".format(model_type),
description=(
"Error encountered in statsmodels.formula.api.ols method. (Empty data?)"
),
data={"traceback": traceback.format_exc()},
)
]
return CalTRACKUsagePerDayCandidateModel(
model_type=model_type, formula=formula, status="ERROR", warnings=warnings
)
def get_intercept_only_candidate_models(data, weights_col):
"""Return a list of a single candidate intercept-only model.
Parameters
----------
data : :any:`pandas.DataFrame`
A DataFrame containing at least the column ``meter_value``.
DataFrames of this form can be made using the
:any:`eemeter.create_caltrack_daily_design_matrix` or
:any:`eemeter.create_caltrack_billing_design_matrix` methods.
weights_col : :any:`str` or None
The name of the column (if any) in ``data`` to use as weights.
Returns
-------
candidate_models : :any:`list` of :any:`CalTRACKUsagePerDayCandidateModel`
List containing a single intercept-only candidate model.
"""
model_type = "intercept_only"
formula = "meter_value ~ 1"
if weights_col is None:
weights = 1
else:
weights = data[weights_col]
try:
model = smf.wls(formula=formula, data=data, weights=weights)
except Exception:
return [get_fit_failed_candidate_model(model_type, formula)]
result = model.fit()
# CalTrack 3.3.1.3
model_params = {"intercept": result.params["Intercept"]}
model_warnings = []
# CalTrack 3.4.3.2
for parameter in ["intercept"]:
model_warnings.extend(
get_parameter_negative_warning(model_type, model_params, parameter)
)
if len(model_warnings) > 0:
status = "DISQUALIFIED"
else:
status = "QUALIFIED"
return [
CalTRACKUsagePerDayCandidateModel(
model_type=model_type,
formula=formula,
status=status,
warnings=model_warnings,
model_params=model_params,
model=model,
result=result,
r_squared_adj=0,
)
]
def get_single_cdd_only_candidate_model(
data,
minimum_non_zero_cdd,
minimum_total_cdd,
beta_cdd_maximum_p_value,
weights_col,
balance_point,
):
"""Return a single candidate cdd-only model for a particular balance
point.
Parameters
----------
data : :any:`pandas.DataFrame`
A DataFrame containing at least the column ``meter_value`` and
``cdd_<balance_point>``
DataFrames of this form can be made using the
:any:`eemeter.create_caltrack_daily_design_matrix` or
:any:`eemeter.create_caltrack_billing_design_matrix` methods.
minimum_non_zero_cdd : :any:`int`
Minimum allowable number of non-zero cooling degree day values.
minimum_total_cdd : :any:`float`
Minimum allowable total sum of cooling degree day values.
beta_cdd_maximum_p_value : :any:`float`
The maximum allowable p-value of the beta cdd parameter.
weights_col : :any:`str` or None
The name of the column (if any) in ``data`` to use as weights.
balance_point : :any:`float`
The cooling balance point for this model.
Returns
-------
candidate_model : :any:`CalTRACKUsagePerDayCandidateModel`
A single cdd-only candidate model, with any associated warnings.
"""
model_type = "cdd_only"
cdd_column = "cdd_%s" % balance_point
formula = "meter_value ~ %s" % cdd_column
if weights_col is None:
weights = 1
else:
weights = data[weights_col]
period_days = weights
degree_day_warnings = []
degree_day_warnings.extend(
get_total_degree_day_too_low_warning(
model_type,
balance_point,
"cdd",
data[cdd_column],
period_days,
minimum_total_cdd,
)
)
degree_day_warnings.extend(
get_too_few_non_zero_degree_day_warning(
model_type, balance_point, "cdd", data[cdd_column], minimum_non_zero_cdd
)
)
if len(degree_day_warnings) > 0:
return CalTRACKUsagePerDayCandidateModel(
model_type=model_type,
formula=formula,
status="NOT ATTEMPTED",
warnings=degree_day_warnings,
)
try:
model = smf.wls(formula=formula, data=data, weights=weights)
except Exception:
return get_fit_failed_candidate_model(model_type, formula)
result = model.fit()
r_squared_adj = result.rsquared_adj
beta_cdd_p_value = result.pvalues[cdd_column]
# CalTrack 3.3.1.3
model_params = {
"intercept": result.params["Intercept"],
"beta_cdd": result.params[cdd_column],
"cooling_balance_point": balance_point,
}
model_warnings = []
# CalTrack 3.4.3.2
for parameter in ["intercept", "beta_cdd"]:
model_warnings.extend(
get_parameter_negative_warning(model_type, model_params, parameter)
)
model_warnings.extend(
get_parameter_p_value_too_high_warning(
model_type,
model_params,
parameter,
beta_cdd_p_value,
beta_cdd_maximum_p_value,
)
)
if len(model_warnings) > 0:
status = "DISQUALIFIED"
else:
status = "QUALIFIED"
return CalTRACKUsagePerDayCandidateModel(
model_type=model_type,
formula=formula,
status=status,
warnings=model_warnings,
model_params=model_params,
model=model,
result=result,
r_squared_adj=r_squared_adj,
)
def get_cdd_only_candidate_models(
data, minimum_non_zero_cdd, minimum_total_cdd, beta_cdd_maximum_p_value, weights_col
):
"""Return a list of all possible candidate cdd-only models.
Parameters
----------
data : :any:`pandas.DataFrame`
A DataFrame containing at least the column ``meter_value`` and 1 to n
columns with names of the form ``cdd_<balance_point>``. All columns
with names of this form will be used to fit a candidate model.
DataFrames of this form can be made using the
:any:`eemeter.create_caltrack_daily_design_matrix` or
:any:`eemeter.create_caltrack_billing_design_matrix` methods.
minimum_non_zero_cdd : :any:`int`
Minimum allowable number of non-zero cooling degree day values.
minimum_total_cdd : :any:`float`
Minimum allowable total sum of cooling degree day values.
beta_cdd_maximum_p_value : :any:`float`
The maximum allowable p-value of the beta cdd parameter.
weights_col : :any:`str` or None
The name of the column (if any) in ``data`` to use as weights.
Returns
-------
candidate_models : :any:`list` of :any:`CalTRACKUsagePerDayCandidateModel`
A list of cdd-only candidate models, with any associated warnings.
"""
balance_points = [int(col[4:]) for col in data.columns if col.startswith("cdd")]
candidate_models = [
get_single_cdd_only_candidate_model(
data,
minimum_non_zero_cdd,
minimum_total_cdd,
beta_cdd_maximum_p_value,
weights_col,
balance_point,
)
for balance_point in balance_points
]
return candidate_models
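The balance points are recovered from design-matrix column names by stripping the four-character ``cdd_`` prefix; a minimal sketch with hypothetical column names:

```python
# Hypothetical design-matrix columns; only the cdd_<balance_point>
# columns contribute candidate models.
columns = ["meter_value", "cdd_65", "cdd_70", "hdd_60"]

# col[4:] drops the "cdd_" prefix, leaving the integer balance point.
balance_points = [int(col[4:]) for col in columns if col.startswith("cdd")]
```

One cdd-only candidate model is then fit per recovered balance point.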
def get_single_hdd_only_candidate_model(
data,
minimum_non_zero_hdd,
minimum_total_hdd,
beta_hdd_maximum_p_value,
weights_col,
balance_point,
):
"""Return a single candidate hdd-only model for a particular balance
point.
Parameters
----------
data : :any:`pandas.DataFrame`
A DataFrame containing at least the column ``meter_value`` and
``hdd_<balance_point>``
DataFrames of this form can be made using the
:any:`eemeter.create_caltrack_daily_design_matrix` or
:any:`eemeter.create_caltrack_billing_design_matrix` methods.
minimum_non_zero_hdd : :any:`int`
Minimum allowable number of non-zero heating degree day values.
minimum_total_hdd : :any:`float`
Minimum allowable total sum of heating degree day values.
beta_hdd_maximum_p_value : :any:`float`
The maximum allowable p-value of the beta hdd parameter.
weights_col : :any:`str` or None
The name of the column (if any) in ``data`` to use as weights.
balance_point : :any:`float`
The heating balance point for this model.
Returns
-------
candidate_model : :any:`CalTRACKUsagePerDayCandidateModel`
A single hdd-only candidate model, with any associated warnings.
"""
model_type = "hdd_only"
hdd_column = "hdd_%s" % balance_point
formula = "meter_value ~ %s" % hdd_column
if weights_col is None:
weights = 1
else:
weights = data[weights_col]
period_days = weights
degree_day_warnings = []
degree_day_warnings.extend(
get_total_degree_day_too_low_warning(
model_type,
balance_point,
"hdd",
data[hdd_column],
period_days,
minimum_total_hdd,
)
)
degree_day_warnings.extend(
get_too_few_non_zero_degree_day_warning(
model_type, balance_point, "hdd", data[hdd_column], minimum_non_zero_hdd
)
)
if len(degree_day_warnings) > 0:
return CalTRACKUsagePerDayCandidateModel(
model_type=model_type,
formula=formula,
status="NOT ATTEMPTED",
warnings=degree_day_warnings,
)
try:
model = smf.wls(formula=formula, data=data, weights=weights)
except Exception:
return get_fit_failed_candidate_model(model_type, formula)
result = model.fit()
r_squared_adj = result.rsquared_adj
beta_hdd_p_value = result.pvalues[hdd_column]
# CalTrack 3.3.1.3
model_params = {
"intercept": result.params["Intercept"],
"beta_hdd": result.params[hdd_column],
"heating_balance_point": balance_point,
}
model_warnings = []
# CalTrack 3.4.3.2
for parameter in ["intercept", "beta_hdd"]:
model_warnings.extend(
get_parameter_negative_warning(model_type, model_params, parameter)
)
model_warnings.extend(
get_parameter_p_value_too_high_warning(
model_type,
model_params,
parameter,
beta_hdd_p_value,
beta_hdd_maximum_p_value,
)
)
if len(model_warnings) > 0:
status = "DISQUALIFIED"
else:
status = "QUALIFIED"
return CalTRACKUsagePerDayCandidateModel(
model_type=model_type,
formula=formula,
status=status,
warnings=model_warnings,
model_params=model_params,
model=model,
result=result,
r_squared_adj=r_squared_adj,
)
def get_hdd_only_candidate_models(
data, minimum_non_zero_hdd, minimum_total_hdd, beta_hdd_maximum_p_value, weights_col
):
"""
Parameters
----------
data : :any:`pandas.DataFrame`
A DataFrame containing at least the column ``meter_value`` and 1 to n
columns with names of the form ``hdd_<balance_point>``. All columns
with names of this form will be used to fit a candidate model.
DataFrames of this form can be made using the
:any:`eemeter.create_caltrack_daily_design_matrix` or
:any:`eemeter.create_caltrack_billing_design_matrix` methods.
minimum_non_zero_hdd : :any:`int`
Minimum allowable number of non-zero heating degree day values.
minimum_total_hdd : :any:`float`
Minimum allowable total sum of heating degree day values.
beta_hdd_maximum_p_value : :any:`float`
The maximum allowable p-value of the beta hdd parameter.
weights_col : :any:`str` or None
The name of the column (if any) in ``data`` to use as weights.
Returns
-------
candidate_models : :any:`list` of :any:`CalTRACKUsagePerDayCandidateModel`
A list of hdd-only candidate models, with any associated warnings.
"""
balance_points = [int(col[4:]) for col in data.columns if col.startswith("hdd")]
candidate_models = [
get_single_hdd_only_candidate_model(
data,
minimum_non_zero_hdd,
minimum_total_hdd,
beta_hdd_maximum_p_value,
weights_col,
balance_point,
)
for balance_point in balance_points
]
return candidate_models
def get_single_cdd_hdd_candidate_model(
data,
minimum_non_zero_cdd,
minimum_non_zero_hdd,
minimum_total_cdd,
minimum_total_hdd,
beta_cdd_maximum_p_value,
beta_hdd_maximum_p_value,
weights_col,
cooling_balance_point,
heating_balance_point,
):
"""Return and fit a single candidate cdd_hdd model for a particular selection
of cooling balance point and heating balance point
Parameters
----------
data : :any:`pandas.DataFrame`
A DataFrame containing at least the column ``meter_value`` and
``hdd_<heating_balance_point>`` and ``cdd_<cooling_balance_point>``
DataFrames of this form can be made using the
:any:`eemeter.create_caltrack_daily_design_matrix` or
:any:`eemeter.create_caltrack_billing_design_matrix` methods.
minimum_non_zero_cdd : :any:`int`
Minimum allowable number of non-zero cooling degree day values.
minimum_non_zero_hdd : :any:`int`
Minimum allowable number of non-zero heating degree day values.
minimum_total_cdd : :any:`float`
Minimum allowable total sum of cooling degree day values.
minimum_total_hdd : :any:`float`
Minimum allowable total sum of heating degree day values.
beta_cdd_maximum_p_value : :any:`float`
The maximum allowable p-value of the beta cdd parameter.
beta_hdd_maximum_p_value : :any:`float`
The maximum allowable p-value of the beta hdd parameter.
weights_col : :any:`str` or None
The name of the column (if any) in ``data`` to use as weights.
cooling_balance_point : :any:`float`
The cooling balance point for this model.
heating_balance_point : :any:`float`
The heating balance point for this model.
Returns
-------
candidate_model : :any:`CalTRACKUsagePerDayCandidateModel`
A single cdd-hdd candidate model, with any associated warnings.
"""
model_type = "cdd_hdd"
cdd_column = "cdd_%s" % cooling_balance_point
hdd_column = "hdd_%s" % heating_balance_point
formula = "meter_value ~ %s + %s" % (cdd_column, hdd_column)
n_days_column = None
if weights_col is None:
weights = 1
else:
weights = data[weights_col]
period_days = weights
degree_day_warnings = []
degree_day_warnings.extend(
get_total_degree_day_too_low_warning(
model_type,
cooling_balance_point,
"cdd",
data[cdd_column],
period_days,
minimum_total_cdd,
)
)
degree_day_warnings.extend(
get_too_few_non_zero_degree_day_warning(
model_type,
cooling_balance_point,
"cdd",
data[cdd_column],
minimum_non_zero_cdd,
)
)
degree_day_warnings.extend(
get_total_degree_day_too_low_warning(
model_type,
heating_balance_point,
"hdd",
data[hdd_column],
period_days,
minimum_total_hdd,
)
)
degree_day_warnings.extend(
get_too_few_non_zero_degree_day_warning(
model_type,
heating_balance_point,
"hdd",
data[hdd_column],
minimum_non_zero_hdd,
)
)
if len(degree_day_warnings) > 0:
return CalTRACKUsagePerDayCandidateModel(
model_type=model_type,
formula=formula,
status="NOT ATTEMPTED",
warnings=degree_day_warnings,
)
try:
model = smf.wls(formula=formula, data=data, weights=weights)
except Exception:
return get_fit_failed_candidate_model(model_type, formula)
result = model.fit()
r_squared_adj = result.rsquared_adj
beta_cdd_p_value = result.pvalues[cdd_column]
beta_hdd_p_value = result.pvalues[hdd_column]
# CalTrack 3.3.1.3
model_params = {
"intercept": result.params["Intercept"],
"beta_cdd": result.params[cdd_column],
"beta_hdd": result.params[hdd_column],
"cooling_balance_point": cooling_balance_point,
"heating_balance_point": heating_balance_point,
}
model_warnings = []
# CalTrack 3.4.3.2
for parameter in ["intercept", "beta_cdd", "beta_hdd"]:
model_warnings.extend(
get_parameter_negative_warning(model_type, model_params, parameter)
)
model_warnings.extend(
get_parameter_p_value_too_high_warning(
model_type,
model_params,
parameter,
beta_cdd_p_value,
beta_cdd_maximum_p_value,
)
)
model_warnings.extend(
get_parameter_p_value_too_high_warning(
model_type,
model_params,
parameter,
beta_hdd_p_value,
beta_hdd_maximum_p_value,
)
)
if len(model_warnings) > 0:
status = "DISQUALIFIED"
else:
status = "QUALIFIED"
return CalTRACKUsagePerDayCandidateModel(
model_type=model_type,
formula=formula,
status=status,
warnings=model_warnings,
model_params=model_params,
model=model,
result=result,
r_squared_adj=r_squared_adj,
)
def get_cdd_hdd_candidate_models(
data,
minimum_non_zero_cdd,
minimum_non_zero_hdd,
minimum_total_cdd,
minimum_total_hdd,
beta_cdd_maximum_p_value,
beta_hdd_maximum_p_value,
weights_col,
):
"""Return a list of candidate cdd_hdd models for a particular selection
of cooling balance point and heating balance point
Parameters
----------
data : :any:`pandas.DataFrame`
A DataFrame containing at least the column ``meter_value`` and 1 to n
columns each of the form ``hdd_<heating_balance_point>``
and ``cdd_<cooling_balance_point>``. DataFrames of this form can be
made using the
:any:`eemeter.create_caltrack_daily_design_matrix` or
:any:`eemeter.create_caltrack_billing_design_matrix` methods.
minimum_non_zero_cdd : :any:`int`
Minimum allowable number of non-zero cooling degree day values.
minimum_non_zero_hdd : :any:`int`
Minimum allowable number of non-zero heating degree day values.
minimum_total_cdd : :any:`float`
Minimum allowable total sum of cooling degree day values.
minimum_total_hdd : :any:`float`
Minimum allowable total sum of heating degree day values.
beta_cdd_maximum_p_value : :any:`float`
The maximum allowable p-value of the beta cdd parameter.
beta_hdd_maximum_p_value : :any:`float`
The maximum allowable p-value of the beta hdd parameter.
weights_col : :any:`str` or None
The name of the column (if any) in ``data`` to use as weights.
Returns
-------
candidate_models : :any:`list` of :any:`CalTRACKUsagePerDayCandidateModel`
A list of cdd_hdd candidate models, with any associated warnings.
"""
cooling_balance_points = [
int(col[4:]) for col in data.columns if col.startswith("cdd")
]
heating_balance_points = [
int(col[4:]) for col in data.columns if col.startswith("hdd")
]
# CalTrack 3.2.2.1
candidate_models = [
get_single_cdd_hdd_candidate_model(
data,
minimum_non_zero_cdd,
minimum_non_zero_hdd,
minimum_total_cdd,
minimum_total_hdd,
beta_cdd_maximum_p_value,
beta_hdd_maximum_p_value,
weights_col,
cooling_balance_point,
heating_balance_point,
)
for cooling_balance_point in cooling_balance_points
for heating_balance_point in heating_balance_points
if heating_balance_point <= cooling_balance_point
]
return candidate_models
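The nested comprehension above enumerates only balance-point pairs where the heating balance point does not exceed the cooling balance point (CalTrack 3.2.2.1); a small sketch with hypothetical balance points:

```python
# Hypothetical balance points recovered from cdd_/hdd_ column names.
cooling_balance_points = [65, 70]
heating_balance_points = [55, 60, 70]

# Keep only pairs with heating_balance_point <= cooling_balance_point,
# mirroring the filter in the comprehension above.
pairs = [
    (c, h)
    for c in cooling_balance_points
    for h in heating_balance_points
    if h <= c
]
```

Here `(65, 70)` is excluded because a heating balance point above the cooling balance point would imply overlapping heating and cooling regimes.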
def select_best_candidate(candidate_models):
"""Select and return the best candidate model based on r-squared and
qualification.
Parameters
----------
candidate_models : :any:`list` of :any:`eemeter.CalTRACKUsagePerDayCandidateModel`
Candidate models to select from.
Returns
-------
(best_candidate, warnings) : :any:`tuple` of :any:`eemeter.CalTRACKUsagePerDayCandidateModel` or :any:`None` and :any:`list` of `eemeter.EEMeterWarning`
Return the candidate model with highest r-squared or None if none meet
the requirements, and a list of warnings about this selection (or lack
of selection).
"""
best_r_squared_adj = -np.inf
best_candidate = None
# CalTrack 3.4.3.3
for candidate in candidate_models:
if (
candidate.status == "QUALIFIED"
and candidate.r_squared_adj > best_r_squared_adj
):
best_candidate = candidate
best_r_squared_adj = candidate.r_squared_adj
if best_candidate is None:
warnings = [
EEMeterWarning(
qualified_name="eemeter.caltrack_daily.select_best_candidate.no_candidates",
description="No qualified model candidates available.",
data={
"status_count:{}".format(status): count
for status, count in Counter(
[c.status for c in candidate_models]
).items()
},
)
]
return None, warnings
return best_candidate, []
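The selection rule reduces to a single pass keeping the qualified candidate with the highest adjusted r-squared; a minimal sketch using a namedtuple as a stand-in for `CalTRACKUsagePerDayCandidateModel`:

```python
from collections import namedtuple

# Hypothetical stand-in for CalTRACKUsagePerDayCandidateModel with
# just the two fields the selection logic reads.
Candidate = namedtuple("Candidate", ["status", "r_squared_adj"])

candidates = [
    Candidate("QUALIFIED", 0.61),
    Candidate("DISQUALIFIED", 0.95),  # ignored despite best fit
    Candidate("QUALIFIED", 0.78),
]

best, best_r2 = None, float("-inf")
for c in candidates:
    if c.status == "QUALIFIED" and c.r_squared_adj > best_r2:
        best, best_r2 = c, c.r_squared_adj
```

Note the disqualified candidate is skipped even though its r-squared is highest — qualification is checked before fit quality.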
def fit_caltrack_usage_per_day_model(
data,
fit_cdd=True,
use_billing_presets=False,
minimum_non_zero_cdd=10,
minimum_non_zero_hdd=10,
minimum_total_cdd=20,
minimum_total_hdd=20,
beta_cdd_maximum_p_value=1,
beta_hdd_maximum_p_value=1,
weights_col=None,
fit_intercept_only=True,
fit_cdd_only=True,
fit_hdd_only=True,
fit_cdd_hdd=True,
):
"""CalTRACK daily and billing methods using a usage-per-day modeling
strategy.
Parameters
----------
data : :any:`pandas.DataFrame`
A DataFrame containing at least the column ``meter_value`` and 1 to n
columns each of the form ``hdd_<heating_balance_point>``
and ``cdd_<cooling_balance_point>``. DataFrames of this form can be
made using the :any:`eemeter.create_caltrack_daily_design_matrix` or
:any:`eemeter.create_caltrack_billing_design_matrix` methods.
Should have a :any:`pandas.DatetimeIndex`.
fit_cdd : :any:`bool`, optional
If True, fit CDD models unless overridden by ``fit_cdd_only`` or
``fit_cdd_hdd`` flags. Should be set to ``False`` for gas meter data.
use_billing_presets : :any:`bool`, optional
Use presets appropriate for billing models. Otherwise defaults are
appropriate for daily models.
minimum_non_zero_cdd : :any:`int`, optional
Minimum allowable number of non-zero cooling degree day values.
minimum_non_zero_hdd : :any:`int`, optional
Minimum allowable number of non-zero heating degree day values.
minimum_total_cdd : :any:`float`, optional
Minimum allowable total sum of cooling degree day values.
minimum_total_hdd : :any:`float`, optional
Minimum allowable total sum of heating degree day values.
beta_cdd_maximum_p_value : :any:`float`, optional
The maximum allowable p-value of the beta cdd parameter. The default
value is the most permissive possible (i.e., 1). This is here
for backwards compatibility with CalTRACK 1.0 methods.
beta_hdd_maximum_p_value : :any:`float`, optional
The maximum allowable p-value of the beta hdd parameter. The default
value is the most permissive possible (i.e., 1). This is here
for backwards compatibility with CalTRACK 1.0 methods.
weights_col : :any:`str` or None, optional
The name of the column (if any) in ``data`` to use as weights. Weight
must be the number of days of data in the period.
fit_intercept_only : :any:`bool`, optional
If True, fit and consider intercept_only model candidates.
fit_cdd_only : :any:`bool`, optional
If True, fit and consider cdd_only model candidates. Ignored if
``fit_cdd=False``.
fit_hdd_only : :any:`bool`, optional
If True, fit and consider hdd_only model candidates.
fit_cdd_hdd : :any:`bool`, optional
If True, fit and consider cdd_hdd model candidates. Ignored if
``fit_cdd=False``.
Returns
-------
model_results : :any:`eemeter.CalTRACKUsagePerDayModelResults`
Results of running CalTRACK daily method. See :any:`eemeter.CalTRACKUsagePerDayModelResults`
for more details.
"""
if use_billing_presets:
# CalTrack 3.2.2.2.1
minimum_non_zero_cdd = 0
minimum_non_zero_hdd = 0
# CalTrack 3.2.2.2.2
minimum_total_cdd = 20
minimum_total_hdd = 20
# CalTrack 3.4.2
if weights_col is None:
raise ValueError(
"If using billing presets, the weights_col argument must be specified."
)
interval = "billing"
else:
interval = "daily"
# clean data by overwriting with NaN any partial rows that have missing temperature or meter data
data = overwrite_partial_rows_with_nan(data)
if data.dropna().empty:
return CalTRACKUsagePerDayModelResults(
status="NO DATA",
method_name="caltrack_usage_per_day",
warnings=[
EEMeterWarning(
qualified_name="eemeter.caltrack_usage_per_day.no_data",
description=("No data available. Cannot fit model."),
data={},
)
],
)
# collect all candidate results, then validate all at once
# CalTrack 3.4.3.1
candidates = []
if fit_intercept_only:
candidates.extend(
get_intercept_only_candidate_models(data, weights_col=weights_col)
)
if fit_hdd_only:
candidates.extend(
get_hdd_only_candidate_models(
data=data,
minimum_non_zero_hdd=minimum_non_zero_hdd,
minimum_total_hdd=minimum_total_hdd,
beta_hdd_maximum_p_value=beta_hdd_maximum_p_value,
weights_col=weights_col,
)
)
# cdd models ignored for gas
if fit_cdd:
if fit_cdd_only:
candidates.extend(
get_cdd_only_candidate_models(
data=data,
minimum_non_zero_cdd=minimum_non_zero_cdd,
minimum_total_cdd=minimum_total_cdd,
beta_cdd_maximum_p_value=beta_cdd_maximum_p_value,
weights_col=weights_col,
)
)
if fit_cdd_hdd:
candidates.extend(
get_cdd_hdd_candidate_models(
data=data,
minimum_non_zero_cdd=minimum_non_zero_cdd,
minimum_non_zero_hdd=minimum_non_zero_hdd,
minimum_total_cdd=minimum_total_cdd,
minimum_total_hdd=minimum_total_hdd,
beta_cdd_maximum_p_value=beta_cdd_maximum_p_value,
beta_hdd_maximum_p_value=beta_hdd_maximum_p_value,
weights_col=weights_col,
)
)
# find best candidate result
best_candidate, candidate_warnings = select_best_candidate(candidates)
warnings = candidate_warnings
if best_candidate is None:
status = "NO MODEL"
r_squared_adj = None
else:
status = "SUCCESS"
r_squared_adj = best_candidate.r_squared_adj
model_result = CalTRACKUsagePerDayModelResults(
status=status,
method_name="caltrack_usage_per_day",
interval=interval,
model=best_candidate,
candidates=candidates,
r_squared_adj=r_squared_adj,
warnings=warnings,
settings={
"fit_cdd": fit_cdd,
"minimum_non_zero_cdd": minimum_non_zero_cdd,
"minimum_non_zero_hdd": minimum_non_zero_hdd,
"minimum_total_cdd": minimum_total_cdd,
"minimum_total_hdd": minimum_total_hdd,
"beta_cdd_maximum_p_value": beta_cdd_maximum_p_value,
"beta_hdd_maximum_p_value": beta_hdd_maximum_p_value,
},
)
if best_candidate is not None:
if best_candidate.model_type in ["cdd_hdd"]:
num_parameters = 2
elif best_candidate.model_type in ["hdd_only", "cdd_only"]:
num_parameters = 1
else:
num_parameters = 0
predicted_avgs = _caltrack_predict_design_matrix(
best_candidate.model_type,
best_candidate.model_params,
data,
input_averages=True,
output_averages=True,
)
model_result.avgs_metrics = ModelMetrics(
data.meter_value, predicted_avgs, num_parameters
)
predicted_totals = _caltrack_predict_design_matrix(
best_candidate.model_type,
best_candidate.model_params,
data,
input_averages=True,
output_averages=False,
)
days_per_period = day_counts(data.index)
data_totals = data.meter_value * days_per_period
model_result.totals_metrics = ModelMetrics(
data_totals, predicted_totals, num_parameters
)
return model_result
def caltrack_sufficiency_criteria(
data_quality,
requested_start,
requested_end,
num_days=365,
min_fraction_daily_coverage=0.9, # TODO: needs to be per year
min_fraction_hourly_temperature_coverage_per_period=0.9,
):
"""CalTRACK daily data sufficiency criteria.
.. note::
For CalTRACK compliance, ``min_fraction_daily_coverage`` must be set
at ``0.9`` (section 2.2.1.2), and requested_start and requested_end must
not be None (section 2.2.4).
Parameters
----------
data_quality : :any:`pandas.DataFrame`
A DataFrame containing at least the column ``meter_value`` and the two
columns ``temperature_null``, containing a count of null hourly
temperature values for each meter value, and ``temperature_not_null``,
containing a count of not-null hourly temperature values for each
meter value. Should have a :any:`pandas.DatetimeIndex`.
requested_start : :any:`datetime.datetime`, timezone aware (or :any:`None`)
The desired start of the period, if any, especially if this is
different from the start of the data. If given, warnings
are reported on the basis of this start date instead of data start
date. Must be explicitly set to ``None`` in order to use data start date.
requested_end : :any:`datetime.datetime`, timezone aware (or :any:`None`)
The desired end of the period, if any, especially if this is
different from the end of the data. If given, warnings
are reported on the basis of this end date instead of data end date.
Must be explicitly set to ``None`` in order to use data end date.
num_days : :any:`int`, optional
Exact number of days allowed in data, including extent given by
``requested_start`` or ``requested_end``, if given.
min_fraction_daily_coverage : :any:`float`, optional
Minimum fraction of days of data in total data extent for which data
must be available.
min_fraction_hourly_temperature_coverage_per_period : :any:`float`, optional
Minimum fraction of hours of temperature data coverage in a particular
period. Anything below this causes the whole period to be considered
missing.
Returns
-------
data_sufficiency : :any:`eemeter.DataSufficiency`
An object containing sufficiency status and warnings for this data.
"""
criteria_name = "caltrack_sufficiency_criteria"
if data_quality.dropna().empty:
return DataSufficiency(
status="NO DATA",
criteria_name=criteria_name,
warnings=[
EEMeterWarning(
qualified_name="eemeter.caltrack_sufficiency_criteria.no_data",
description=("No data available."),
data={},
)
],
)
data_start = data_quality.index.min().tz_convert("UTC")
data_end = data_quality.index.max().tz_convert("UTC")
n_days_data = (data_end - data_start).days
if requested_start is not None:
# check for gap at beginning
requested_start = requested_start.astimezone(pytz.UTC)
n_days_start_gap = (data_start - requested_start).days
else:
n_days_start_gap = 0
if requested_end is not None:
# check for gap at end
requested_end = requested_end.astimezone(pytz.UTC)
n_days_end_gap = (requested_end - data_end).days
else:
n_days_end_gap = 0
critical_warnings = []
if n_days_end_gap < 0:
# CalTRACK 2.2.4
critical_warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_sufficiency_criteria"
".extra_data_after_requested_end_date"
),
description=("Extra data found after requested end date."),
data={
"requested_end": requested_end.isoformat(),
"data_end": data_end.isoformat(),
},
)
)
n_days_end_gap = 0
if n_days_start_gap < 0:
# CalTRACK 2.2.4
critical_warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_sufficiency_criteria"
".extra_data_before_requested_start_date"
),
description=("Extra data found before requested start date."),
data={
"requested_start": requested_start.isoformat(),
"data_start": data_start.isoformat(),
},
)
)
n_days_start_gap = 0
n_days_total = n_days_data + n_days_start_gap + n_days_end_gap
n_negative_meter_values = data_quality.meter_value[
data_quality.meter_value < 0
].shape[0]
if n_negative_meter_values > 0:
# CalTrack 2.3.5
critical_warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_sufficiency_criteria" ".negative_meter_values"
),
description=(
"Found negative meter data values, which may indicate presence"
" of solar net metering."
),
data={"n_negative_meter_values": n_negative_meter_values},
)
)
# TODO(philngo): detect and report unsorted or repeated values.
# create masks showing which daily or billing periods meet criteria
valid_meter_value_rows = data_quality.meter_value.notnull()
valid_temperature_rows = (
data_quality.temperature_not_null
/ (data_quality.temperature_not_null + data_quality.temperature_null)
) > min_fraction_hourly_temperature_coverage_per_period
valid_rows = valid_meter_value_rows & valid_temperature_rows
# get number of days per period - for daily this should be a series of ones
row_day_counts = day_counts(data_quality.index)
# apply masks, giving total
n_valid_meter_value_days = int((valid_meter_value_rows * row_day_counts).sum())
n_valid_temperature_days = int((valid_temperature_rows * row_day_counts).sum())
n_valid_days = int((valid_rows * row_day_counts).sum())
median = data_quality.meter_value.median()
upper_quantile = data_quality.meter_value.quantile(0.75)
lower_quantile = data_quality.meter_value.quantile(0.25)
iqr = upper_quantile - lower_quantile
extreme_value_limit = median + (3 * iqr)
n_extreme_values = data_quality.meter_value[
data_quality.meter_value > extreme_value_limit
].shape[0]
max_value = float(data_quality.meter_value.max())
if n_days_total > 0:
fraction_valid_meter_value_days = n_valid_meter_value_days / float(n_days_total)
fraction_valid_temperature_days = n_valid_temperature_days / float(n_days_total)
fraction_valid_days = n_valid_days / float(n_days_total)
else:
# unreachable, I think.
fraction_valid_meter_value_days = 0
fraction_valid_temperature_days = 0
fraction_valid_days = 0
if n_days_total != num_days:
critical_warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_sufficiency_criteria"
".incorrect_number_of_total_days"
),
description=("Total data span does not match the required value."),
data={"num_days": num_days, "n_days_total": n_days_total},
)
)
if fraction_valid_days < min_fraction_daily_coverage:
critical_warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_sufficiency_criteria"
".too_many_days_with_missing_data"
),
description=(
"Too many days in data have missing meter data or"
" temperature data."
),
data={"n_valid_days": n_valid_days, "n_days_total": n_days_total},
)
)
if fraction_valid_meter_value_days < min_fraction_daily_coverage:
critical_warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_sufficiency_criteria"
".too_many_days_with_missing_meter_data"
),
description=("Too many days in data have missing meter data."),
data={
"n_valid_meter_data_days": n_valid_meter_value_days,
"n_days_total": n_days_total,
},
)
)
if fraction_valid_temperature_days < min_fraction_daily_coverage:
critical_warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_sufficiency_criteria"
".too_many_days_with_missing_temperature_data"
),
description=("Too many days in data have missing temperature data."),
data={
"n_valid_temperature_data_days": n_valid_temperature_days,
"n_days_total": n_days_total,
},
)
)
if len(critical_warnings) > 0:
status = "FAIL"
else:
status = "PASS"
non_critical_warnings = []
if n_extreme_values > 0:
# CalTRACK 2.3.6
non_critical_warnings.append(
EEMeterWarning(
qualified_name=(
"eemeter.caltrack_sufficiency_criteria" ".extreme_values_detected"
),
description=(
                    "Extreme values (greater than median + (3 * IQR))"
                    " must be flagged for manual review."
),
data={
"n_extreme_values": n_extreme_values,
"median": median,
"upper_quantile": upper_quantile,
"lower_quantile": lower_quantile,
"extreme_value_limit": extreme_value_limit,
"max_value": max_value,
},
)
)
warnings = critical_warnings + non_critical_warnings
sufficiency_data = {
"extra_data_after_requested_end_date": {
"requested_end": requested_end.isoformat() if requested_end else None,
"data_end": data_end.isoformat(),
"n_days_end_gap": n_days_end_gap,
},
"extra_data_before_requested_start_date": {
"requested_start": requested_start.isoformat() if requested_start else None,
"data_start": data_start.isoformat(),
"n_days_start_gap": n_days_start_gap,
},
"negative_meter_values": {"n_negative_meter_values": n_negative_meter_values},
"incorrect_number_of_total_days": {
"num_days": num_days,
"n_days_total": n_days_total,
},
"too_many_days_with_missing_data": {
"n_valid_days": n_valid_days,
"n_days_total": n_days_total,
},
"too_many_days_with_missing_meter_data": {
"n_valid_meter_data_days": n_valid_meter_value_days,
"n_days_total": n_days_total,
},
"too_many_days_with_missing_temperature_data": {
"n_valid_temperature_data_days": n_valid_temperature_days,
"n_days_total": n_days_total,
},
"extreme_values_detected": {
"n_extreme_values": n_extreme_values,
"median": median,
"upper_quantile": upper_quantile,
"lower_quantile": lower_quantile,
"extreme_value_limit": extreme_value_limit,
"max_value": max_value,
},
}
return DataSufficiency(
status=status,
criteria_name=criteria_name,
warnings=warnings,
data=sufficiency_data,
settings={
"num_days": num_days,
"min_fraction_daily_coverage": min_fraction_daily_coverage,
"min_fraction_hourly_temperature_coverage_per_period": min_fraction_hourly_temperature_coverage_per_period,
},
)
def plot_caltrack_candidate(
candidate,
best=False,
ax=None,
title=None,
figsize=None,
temp_range=None,
alpha=None,
**kwargs
):
"""Plot a CalTRACK candidate model.
Parameters
----------
candidate : :any:`eemeter.CalTRACKUsagePerDayCandidateModel`
A candidate model with a predict function.
best : :any:`bool`, optional
Whether this is the best candidate or not.
ax : :any:`matplotlib.axes.Axes`, optional
Existing axes to plot on.
title : :any:`str`, optional
Chart title.
figsize : :any:`tuple`, optional
(width, height) of chart.
temp_range : :any:`tuple`, optional
(min, max) temperatures to plot model.
alpha : :any:`float` between 0 and 1, optional
Transparency, 0 fully transparent, 1 fully opaque.
**kwargs
Keyword arguments for :any:`matplotlib.axes.Axes.plot`
Returns
-------
ax : :any:`matplotlib.axes.Axes`
Matplotlib axes.
"""
try:
import matplotlib.pyplot as plt
except ImportError: # pragma: no cover
raise ImportError("matplotlib is required for plotting.")
if figsize is None:
figsize = (10, 4)
if ax is None:
fig, ax = plt.subplots(figsize=figsize)
if candidate.status == "QUALIFIED":
color = "C2"
elif candidate.status == "DISQUALIFIED":
color = "C3"
else:
return
if best:
color = "C1"
alpha = 1
temp_min, temp_max = (30, 90) if temp_range is None else temp_range
temps = np.arange(temp_min, temp_max)
data = {"n_days": np.ones(temps.shape)}
prediction_index = pd.date_range(
"2017-01-01T00:00:00Z", periods=len(temps), freq="D"
)
temps_hourly = pd.Series(temps, index=prediction_index).resample("H").ffill()
prediction = candidate.predict(
prediction_index, temps_hourly, "daily"
).result.predicted_usage
plot_kwargs = {"color": color, "alpha": alpha or 0.3}
plot_kwargs.update(kwargs)
ax.plot(temps, prediction, **plot_kwargs)
if title is not None:
ax.set_title(title)
return ax
| openeemeter/eemeter | eemeter/caltrack/usage_per_day.py | usage_per_day.py | py | 78,117 | python | en | code | 197 | github-code | 13 |
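The extreme-value check in `caltrack_sufficiency_criteria` above flags meter values greater than median + 3 * IQR (CalTRACK 2.3.6). A minimal standalone sketch of that rule in plain Python, without pandas — the function name and the interpolating quantile helper here are illustrative, not part of eemeter:

```python
from statistics import median as _median

def extreme_value_count(values, iqr_multiplier=3):
    """Count values above median + iqr_multiplier * IQR (CalTRACK 2.3.6-style)."""
    ordered = sorted(values)
    med = _median(ordered)

    def quantile(q):
        # Linear interpolation between order statistics, matching
        # pandas' default Series.quantile behavior.
        pos = q * (len(ordered) - 1)
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, len(ordered) - 1)
        return ordered[lo] + (ordered[hi] - ordered[lo]) * frac

    iqr = quantile(0.75) - quantile(0.25)
    limit = med + iqr_multiplier * iqr
    return sum(1 for v in values if v > limit), limit
```

On a series of ten identical readings plus one spike, the IQR collapses to zero and the spike alone is flagged, which mirrors how the eemeter code counts `n_extreme_values`.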
29807395266 | # shutil.which supported from Python 3.3+
from shutil import which
from json import loads
import subprocess
class Launch:
# Check if a shell command is available on the system.
@staticmethod
def check_shell_tool(name):
return which(name) is not None
@staticmethod
def check_py_gtk():
try:
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk, GdkPixbuf
return True
except ModuleNotFoundError:
return False
@staticmethod
def check_py_toml():
try:
import toml
return True
except ModuleNotFoundError:
return False
@staticmethod
def check_sox():
return Launch.check_shell_tool("sox")
@staticmethod
def check_sox_pulse():
sox_help = subprocess.run(["sox", "--help"], capture_output=True, encoding="utf8")
stdout = sox_help.stdout
sox_drivers_prefix = "AUDIO DEVICE DRIVERS: "
for line in stdout.split("\n"):
if line.startswith(sox_drivers_prefix):
drivers = line[len(sox_drivers_prefix):].split(" ")
return "pulseaudio" in drivers
return False
@staticmethod
def check_pactl():
return Launch.check_shell_tool("pactl")
@staticmethod
def determine_audio_server():
pactl_info = subprocess.run(["pactl", "--format=json", "info"], capture_output=True, encoding="utf8")
server_info = loads(pactl_info.stdout)
if "server_name" in server_info:
return server_info["server_name"]
return None
| lyrebird-voice-changer/lyrebird | app/core/launch.py | launch.py | py | 1,649 | python | en | code | 1,770 | github-code | 13 |
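`check_sox_pulse` above shells out to `sox --help` and scans the output for the driver list. Extracting that scan into a pure function over captured text makes the logic testable without spawning a process; the sample help output below is illustrative:

```python
def parse_sox_drivers(help_text):
    """Extract the audio device driver list from `sox --help` output."""
    prefix = "AUDIO DEVICE DRIVERS: "
    for line in help_text.split("\n"):
        if line.startswith(prefix):
            # The remainder of the line is a space-separated driver list.
            return line[len(prefix):].split(" ")
    return []

sample = "SoX v14.4.2\nAUDIO DEVICE DRIVERS: alsa oss pulseaudio\n"
print("pulseaudio" in parse_sox_drivers(sample))  # True
```

`check_sox_pulse` could then reduce to running the subprocess and passing its stdout to this helper.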
8224451868 | import time
import os
import argparse
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
import pandas as pd
from tqdm import tqdm
from joblib import Parallel, delayed
from gtda.homology import VietorisRipsPersistence
from sklearn.metrics import f1_score
import matplotlib.pyplot as plt
import torch
from torch import nn
from cpd_model import *
def main():
###############################################
# Passing command line arguments
###############################################
parser = argparse.ArgumentParser()
parser.add_argument('--dataset_path', help = 'files containing the datasets',
default = './Datasets/PAMAP2_Dataset/Protocol/subject102.dat', type = str)
parser.add_argument('--model_save_path', help = 'Path to save the models',
default = './Saved_Models/model_contrastive.pth', type = str)
parser.add_argument('--plots_save_path', help = 'Path to save the histogram plots during training',
default = './Saved_Models/Plots/', type = str)
parser.add_argument('--window_len', help = 'Length of window to be used for computing persistence diagrams',
default = 256, type = int)
parser.add_argument('--hidden_dim', help = 'Output dimension of intermediate MLP layers',
default = 64, type = int)
parser.add_argument('--num_similar', help = 'Number of similar pairs per class (for contrastive loss)',
default = 50, type = int)
parser.add_argument('--num_dissimilar', help = '''Number of dissimilar pairs for every pair of
adjacent segments (for contrastive loss)''', default = 20, type = int)
parser.add_argument('--loss_type', help = 'Type of loss function for learning similarity between windows',
default = 'contrastive', choices = ['contrastive', 'triplet'], type = str)
parser.add_argument('--train_ratio', help = 'Fraction of pairs to use for training',
default = 0.6, type = float)
parser.add_argument('--num_epochs', help = 'Number of epochs to train for',
default = 1000, type = int)
parser.add_argument('--learning_rate', help = 'Learning rate for training',
default = 5e-3, type = float)
parser.add_argument('--use_cuda', help = 'Whether to use GPUs for training',
action = 'store_false') # Use GPUs by default
args = parser.parse_args()
use_cuda = args.use_cuda and torch.cuda.is_available()
torch_device = torch.device('cuda') if use_cuda else torch.device('cpu')
################################################
# Reading and preprocessing the data
################################################
print('Reading and preprocessing starts')
arr = np.loadtxt(args.dataset_path)
col_name_map = {0: 'timestamp (s)', 1: 'activityID', 2: 'heart rate (bpm)', 4: 'hand_acc1', 5: 'hand_acc2', 6: 'hand_acc3',
21: 'chest_acc1', 22: 'chest_acc2', 23: 'chest_acc3', 38: 'ankle_acc1', 39: 'ankle_acc2', 40: 'ankle_acc3'}
df_timeseries = pd.DataFrame.from_dict({col_name_map[col_num]: arr[:, col_num]
for col_num in col_name_map})
temp_arr = np.array(df_timeseries.iloc[:, 3:])
valid_row_ind = ((~np.isnan(temp_arr).any(axis = 1)) & (df_timeseries['activityID'] != 0))
df_ts_clean = df_timeseries.loc[valid_row_ind, :]
df_ts_clean.set_index(np.arange(len(df_ts_clean)), inplace = True)
df_ts_clean = df_ts_clean.astype({'activityID': int})
break_pts_orig = [0] + [ind for ind in range(1, len(df_ts_clean)) if (df_ts_clean.iloc[ind, 1]
!= df_ts_clean.iloc[ind-1, 1])] + [len(df_ts_clean)]
intervals_labeled = [([break_pts_orig[ind], break_pts_orig[ind+1]], df_ts_clean.iloc[break_pts_orig[ind], 1])
for ind in range(len(break_pts_orig)-1)]
Xdata_cols = np.array(df_ts_clean.iloc[:, 3:])
ydata_cols = np.array(df_ts_clean['activityID'])
print('Reading and preprocessing done\n')
##############################################################
# Dividing into windows and computing persistence diagrams
##############################################################
print('Persistence diagram computation starts')
Xdata_arr, ylabels_arr = [], []
for interval in intervals_labeled:
window_inds = np.arange(interval[0][0], interval[0][1], args.window_len)
window_start_end = sliding_window_view(window_inds, 2)
for window_row in window_start_end:
Xdata_arr.append(Xdata_cols[window_row[0]: window_row[1], :])
ylabels_arr.append(interval[1])
ylabels_arr = np.array(ylabels_arr)
# Sequential computation of persistence diagrams
homology_dim = [0, 1]
# Xdata_homology = []
# for X_arr in tqdm(Xdata_arr):
# X_arr_homo = VietorisRipsPersistence(homology_dimensions = homology_dim).\
# fit_transform(X_arr[None, :, :])[0, :, :]
# Xdata_homology.append(X_arr_homo)
def get_persistence_diagram(X_arr, homology_dim = homology_dim):
X_arr_homo = VietorisRipsPersistence(homology_dimensions = homology_dim).\
fit_transform(X_arr[None, :, :])[0, :, :]
return X_arr_homo
start_time = time.time()
Xdata_homology = Parallel(n_jobs = -1)(delayed(get_persistence_diagram)(X_arr)
for X_arr in Xdata_arr)
end_time = time.time()
print('Persistence diagram computation done. Time taken = {}\n'.format(end_time-start_time))
#############################################################
# Preparing the training data (similar and dissimilar pairs)
#############################################################
print('Data preparation starts')
def get_intervals(arr):
break_pts = [0] + [ind for ind in range(1, len(arr)) if (arr[ind] != arr[ind-1])] + [arr.shape[0]]
break_start_end = sliding_window_view(break_pts, 2)
return break_start_end
seg_intervals = get_intervals(ylabels_arr)
seg_labels = np.array([ylabels_arr[interval[0]] for interval in seg_intervals])
complete_dataset = PersisDiagDataset(np.array(Xdata_homology, dtype = object), ylabels_arr)
# Sampling similar and dissimilar pairs
similar_pairs = []
for interval in seg_intervals:
seg_rand_inds = np.random.randint(interval[0], high = interval[1]-1, size = (args.num_similar, 1))
seg_similar_adj = np.concatenate((seg_rand_inds, seg_rand_inds+1), axis = 1)
similar_pairs.append(seg_similar_adj)
similar_pairs = np.concatenate(similar_pairs, axis = 0)
similar_pairs = torch.tensor(similar_pairs).to(torch_device)
# Sampling dissimilar pair indices
dissimilar_pairs = []
for ind1 in range(len(seg_intervals)):
for ind2 in range(ind1+1, len(seg_intervals)):
if seg_labels[ind1] != seg_labels[ind2]:
pairs_col0 = np.random.randint(seg_intervals[ind1, 0], high = seg_intervals[ind1, 1],
size = (args.num_dissimilar, 1))
pairs_col1 = np.random.randint(seg_intervals[ind2, 0], high = seg_intervals[ind2, 1],
size = (args.num_dissimilar, 1))
dissimilar_pairs.append(np.concatenate([pairs_col0, pairs_col1], axis = 1))
dissimilar_pairs = np.concatenate(dissimilar_pairs, axis = 0)
dissimilar_pairs = torch.tensor(dissimilar_pairs).to(torch_device)
print('Similar pairs = {}, dissimilar pairs = {}'.format(similar_pairs.shape, dissimilar_pairs.shape))
window_pairs = torch.cat([similar_pairs, dissimilar_pairs], dim = 0)
dissim_labels = torch.cat([torch.zeros(similar_pairs.shape[0]),
torch.ones(dissimilar_pairs.shape[0])], dim = 0).to(torch.long).to(torch_device)
# Divide pairs into train and test sets
tot_windows = window_pairs.shape[0]
permuted = torch.randperm(tot_windows)
window_pairs_train = window_pairs[permuted[:int(args.train_ratio*tot_windows)], :]
dissim_labels_train = dissim_labels[permuted[:int(args.train_ratio*tot_windows)]]
#print(dissim_labels_train)
window_pairs_test = window_pairs[permuted[int(args.train_ratio*tot_windows):], :]
dissim_labels_test = dissim_labels[permuted[int(args.train_ratio*tot_windows):]]
#print(dissim_labels_test)
print('Data preparation done\n')
########################################################
# Initializing and training the model
########################################################
disc_model = Disc_Model(128, 128, 128, 128, 128)
if use_cuda:
disc_model.cuda()
optimizer = torch.optim.Adam(list(disc_model.parameters()), lr = args.learning_rate)
for epoch in range(args.num_epochs):
print('__________________________________________')
print('Epoch = {}'.format(epoch))
##################
# Training
##################
dists_train = train_one_epoch_contr(disc_model, complete_dataset, window_pairs_train,
dissim_labels_train, optimizer)
dists_train = dists_train.cpu().detach().numpy()
labels_train = dissim_labels_train.cpu().detach().numpy()
if epoch % 50 == 0:
# Generating plot
fig = plt.figure()
#plt.xlim([0.0, 10.0])
plt.hist(dists_train[labels_train == 0], label = 'similar', bins = 100, density = True)
plt.hist(dists_train[labels_train == 1], label = 'dissimilar', bins = 100, alpha = 0.5, density = True)
plt.legend()
plt.title('Train')
plt.savefig(os.path.join(args.plots_save_path, 'train_epoch_{}.png'.format(epoch)))
plt.close(fig)
##################
# Evaluation
##################
dists_test = eval_one_epoch_contr(disc_model, complete_dataset, window_pairs_test,
dissim_labels_test)
dists_test = dists_test.cpu().detach().numpy()
labels_test = dissim_labels_test.cpu().numpy()
if epoch % 50 == 0:
# Generating plot
fig = plt.figure()
#plt.xlim([0.0, 10.0])
plt.hist(dists_test[labels_test == 0], label = 'similar', bins = 100, density = True)
plt.hist(dists_test[labels_test == 1], label = 'dissimilar', bins = 100, alpha = 0.5, density = True)
plt.legend()
plt.title('Test')
plt.savefig(os.path.join(args.plots_save_path, 'test_epoch_{}.png'.format(epoch)))
plt.close(fig)
print('__________________________________________')
torch.save({ 'epoch': args.num_epochs,
'model_state_dict': disc_model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
}, args.model_save_path)
if __name__ == '__main__':
main()
| shubham-kashyapi/Time-Series-TDA | train.py | train.py | py | 11,217 | python | en | code | 0 | github-code | 13 |
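`get_intervals` in the training script above finds run boundaries in a label array and pairs them up with `sliding_window_view`. The same result with plain Python and `zip`, as a reference sketch of what that function computes:

```python
def get_intervals_py(labels):
    """Return (start, end) half-open index pairs for runs of equal labels."""
    breaks = ([0]
              + [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]
              + [len(labels)])
    # Adjacent break points delimit one constant-label segment each.
    return list(zip(breaks[:-1], breaks[1:]))
```

Each pair corresponds to one activity segment, so `seg_labels` in the script is just the label at each pair's start index.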
17043216144 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import json
from alipay.aop.api.constant.ParamConstants import *
class AlipayMerchantStoreShopcodeCreateModel(object):
def __init__(self):
self._address = None
self._category_id = None
self._city_code = None
self._district_code = None
self._latitude = None
self._longitude = None
self._merchant_logon_id = None
self._operator_id = None
self._out_biz_no = None
self._phone_no = None
self._province_code = None
self._shop_front_photo = None
self._shop_id = None
self._shop_name = None
self._shop_no = None
self._smid = None
self._tokens = None
@property
def address(self):
return self._address
@address.setter
def address(self, value):
self._address = value
@property
def category_id(self):
return self._category_id
@category_id.setter
def category_id(self, value):
self._category_id = value
@property
def city_code(self):
return self._city_code
@city_code.setter
def city_code(self, value):
self._city_code = value
@property
def district_code(self):
return self._district_code
@district_code.setter
def district_code(self, value):
self._district_code = value
@property
def latitude(self):
return self._latitude
@latitude.setter
def latitude(self, value):
self._latitude = value
@property
def longitude(self):
return self._longitude
@longitude.setter
def longitude(self, value):
self._longitude = value
@property
def merchant_logon_id(self):
return self._merchant_logon_id
@merchant_logon_id.setter
def merchant_logon_id(self, value):
self._merchant_logon_id = value
@property
def operator_id(self):
return self._operator_id
@operator_id.setter
def operator_id(self, value):
self._operator_id = value
@property
def out_biz_no(self):
return self._out_biz_no
@out_biz_no.setter
def out_biz_no(self, value):
self._out_biz_no = value
@property
def phone_no(self):
return self._phone_no
@phone_no.setter
def phone_no(self, value):
self._phone_no = value
@property
def province_code(self):
return self._province_code
@province_code.setter
def province_code(self, value):
self._province_code = value
@property
def shop_front_photo(self):
return self._shop_front_photo
@shop_front_photo.setter
def shop_front_photo(self, value):
self._shop_front_photo = value
@property
def shop_id(self):
return self._shop_id
@shop_id.setter
def shop_id(self, value):
self._shop_id = value
@property
def shop_name(self):
return self._shop_name
@shop_name.setter
def shop_name(self, value):
self._shop_name = value
@property
def shop_no(self):
return self._shop_no
@shop_no.setter
def shop_no(self, value):
self._shop_no = value
@property
def smid(self):
return self._smid
@smid.setter
def smid(self, value):
self._smid = value
@property
def tokens(self):
return self._tokens
@tokens.setter
def tokens(self, value):
if isinstance(value, list):
self._tokens = list()
for i in value:
self._tokens.append(i)
def to_alipay_dict(self):
params = dict()
if self.address:
if hasattr(self.address, 'to_alipay_dict'):
params['address'] = self.address.to_alipay_dict()
else:
params['address'] = self.address
if self.category_id:
if hasattr(self.category_id, 'to_alipay_dict'):
params['category_id'] = self.category_id.to_alipay_dict()
else:
params['category_id'] = self.category_id
if self.city_code:
if hasattr(self.city_code, 'to_alipay_dict'):
params['city_code'] = self.city_code.to_alipay_dict()
else:
params['city_code'] = self.city_code
if self.district_code:
if hasattr(self.district_code, 'to_alipay_dict'):
params['district_code'] = self.district_code.to_alipay_dict()
else:
params['district_code'] = self.district_code
if self.latitude:
if hasattr(self.latitude, 'to_alipay_dict'):
params['latitude'] = self.latitude.to_alipay_dict()
else:
params['latitude'] = self.latitude
if self.longitude:
if hasattr(self.longitude, 'to_alipay_dict'):
params['longitude'] = self.longitude.to_alipay_dict()
else:
params['longitude'] = self.longitude
if self.merchant_logon_id:
if hasattr(self.merchant_logon_id, 'to_alipay_dict'):
params['merchant_logon_id'] = self.merchant_logon_id.to_alipay_dict()
else:
params['merchant_logon_id'] = self.merchant_logon_id
if self.operator_id:
if hasattr(self.operator_id, 'to_alipay_dict'):
params['operator_id'] = self.operator_id.to_alipay_dict()
else:
params['operator_id'] = self.operator_id
if self.out_biz_no:
if hasattr(self.out_biz_no, 'to_alipay_dict'):
params['out_biz_no'] = self.out_biz_no.to_alipay_dict()
else:
params['out_biz_no'] = self.out_biz_no
if self.phone_no:
if hasattr(self.phone_no, 'to_alipay_dict'):
params['phone_no'] = self.phone_no.to_alipay_dict()
else:
params['phone_no'] = self.phone_no
if self.province_code:
if hasattr(self.province_code, 'to_alipay_dict'):
params['province_code'] = self.province_code.to_alipay_dict()
else:
params['province_code'] = self.province_code
if self.shop_front_photo:
if hasattr(self.shop_front_photo, 'to_alipay_dict'):
params['shop_front_photo'] = self.shop_front_photo.to_alipay_dict()
else:
params['shop_front_photo'] = self.shop_front_photo
if self.shop_id:
if hasattr(self.shop_id, 'to_alipay_dict'):
params['shop_id'] = self.shop_id.to_alipay_dict()
else:
params['shop_id'] = self.shop_id
if self.shop_name:
if hasattr(self.shop_name, 'to_alipay_dict'):
params['shop_name'] = self.shop_name.to_alipay_dict()
else:
params['shop_name'] = self.shop_name
if self.shop_no:
if hasattr(self.shop_no, 'to_alipay_dict'):
params['shop_no'] = self.shop_no.to_alipay_dict()
else:
params['shop_no'] = self.shop_no
if self.smid:
if hasattr(self.smid, 'to_alipay_dict'):
params['smid'] = self.smid.to_alipay_dict()
else:
params['smid'] = self.smid
if self.tokens:
if isinstance(self.tokens, list):
for i in range(0, len(self.tokens)):
element = self.tokens[i]
if hasattr(element, 'to_alipay_dict'):
self.tokens[i] = element.to_alipay_dict()
if hasattr(self.tokens, 'to_alipay_dict'):
params['tokens'] = self.tokens.to_alipay_dict()
else:
params['tokens'] = self.tokens
return params
@staticmethod
def from_alipay_dict(d):
if not d:
return None
o = AlipayMerchantStoreShopcodeCreateModel()
if 'address' in d:
o.address = d['address']
if 'category_id' in d:
o.category_id = d['category_id']
if 'city_code' in d:
o.city_code = d['city_code']
if 'district_code' in d:
o.district_code = d['district_code']
if 'latitude' in d:
o.latitude = d['latitude']
if 'longitude' in d:
o.longitude = d['longitude']
if 'merchant_logon_id' in d:
o.merchant_logon_id = d['merchant_logon_id']
if 'operator_id' in d:
o.operator_id = d['operator_id']
if 'out_biz_no' in d:
o.out_biz_no = d['out_biz_no']
if 'phone_no' in d:
o.phone_no = d['phone_no']
if 'province_code' in d:
o.province_code = d['province_code']
if 'shop_front_photo' in d:
o.shop_front_photo = d['shop_front_photo']
if 'shop_id' in d:
o.shop_id = d['shop_id']
if 'shop_name' in d:
o.shop_name = d['shop_name']
if 'shop_no' in d:
o.shop_no = d['shop_no']
if 'smid' in d:
o.smid = d['smid']
if 'tokens' in d:
o.tokens = d['tokens']
return o
| alipay/alipay-sdk-python-all | alipay/aop/api/domain/AlipayMerchantStoreShopcodeCreateModel.py | AlipayMerchantStoreShopcodeCreateModel.py | py | 9,219 | python | en | code | 241 | github-code | 13 |
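The generated model above follows one uniform pattern: private attributes behind properties, a `to_alipay_dict` that serializes only populated fields, and a `from_alipay_dict` that rebuilds the object. The core round-trip behavior, reduced to a two-field toy class (this is an illustrative sketch, not part of the Alipay SDK):

```python
class ToyModel:
    def __init__(self):
        self._shop_name = None
        self._shop_no = None

    @property
    def shop_name(self):
        return self._shop_name

    @shop_name.setter
    def shop_name(self, value):
        self._shop_name = value

    @property
    def shop_no(self):
        return self._shop_no

    @shop_no.setter
    def shop_no(self, value):
        self._shop_no = value

    def to_alipay_dict(self):
        # Only truthy fields are serialized, mirroring the generated code's
        # `if self.<field>:` guards.
        return {k: v for k, v in
                (("shop_name", self.shop_name), ("shop_no", self.shop_no))
                if v}

    @staticmethod
    def from_alipay_dict(d):
        if not d:
            return None
        o = ToyModel()
        o.shop_name = d.get("shop_name")
        o.shop_no = d.get("shop_no")
        return o
```

The generated file is this pattern repeated per field, with extra branches for nested objects (`to_alipay_dict` on values) and list-typed fields.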
30587527540 | import re
import sys
import text_cleaner
from pprint import pprint
from mongo.mongo_provider import MongoProvider
mongo_provider = MongoProvider()
author_address_regex = r";\s*(?![^[]*])"
bracket_regex = r"\[(.*?)\]"
publications_collection = mongo_provider.get_publications_collection()
wos_collection = mongo_provider.get_wos_collection()
def parse_author_address_stirng(author_address_string):
author_addresses = [
author_address
for author_address in re.split(author_address_regex, author_address_string)
]
author_to_addresses = {}
for author_address in author_addresses:
match = re.search(bracket_regex, author_address)
if not match:
continue
address = author_address[match.span()[1]:].strip()
author_list_string = match.group(1)
authors = [author.strip() for author in author_list_string.split(";")]
for author in authors:
if author in author_to_addresses:
author_to_addresses[author].add(address)
else:
author_to_addresses[author] = set()
author_to_addresses[author].add(address)
return author_to_addresses
def clean_entries():
for idx, doc in enumerate(wos_collection.find(no_cursor_timeout=True)):
if idx % 10 == 0:
print(f"STATUS: {idx}")
cleaned_entry = {}
# Raw data
_id = doc["_id"]
author_list_string = doc["Author Full Name"]
title = doc.get("Document Title")
doc_type = doc.get("Document Type")
abstract = doc.get("Abstract")
author_address_string = doc["Author Address"]
cleaned_entry["_id"] = _id
cleaned_entry["title"] = title
cleaned_entry["abstract"] = abstract
cleaned_entry["documentType"] = doc_type
# Parse the author list
authors = [author.strip() for author in author_list_string.split(";")]
# Get author to addresses dict
author_to_addresses = parse_author_address_stirng(author_address_string)
author_entries = []
for author in authors:
author_entry = {}
address_set = author_to_addresses.get(author, "")
author_entry["name"] = author
author_entry["addresses"] = [address for address in address_set]
author_entries.append(author_entry)
cleaned_entry["authors"] = author_entries
# Clean text
raw_text = title + " " + abstract
cleaned_text = text_cleaner.clean_text(raw_text)
tokens = text_cleaner.tokenize_text(cleaned_text)
cleaned_entry["tokens"] = tokens
publications_collection.insert_one(cleaned_entry)
print(f"STATUS: {idx}")
if __name__ == "__main__":
publications_collection.drop()
clean_entries()
| juliomarcopineda/jpl-academic-divisions | clean_wos.py | clean_wos.py | py | 2,837 | python | en | code | 0 | github-code | 13 |
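The `author_address_regex` used above splits on semicolons only when they are not inside a `[...]` author group, so names within a group stay together while distinct address entries separate. A small demonstration on a synthetic Web of Science-style string:

```python
import re

# Split on ';' unless it is followed by a closing ']' before any opening '['.
author_address_regex = r";\s*(?![^[]*])"
sample = "[Smith, J.; Doe, A.] Univ X, Dept Y; [Lee, K.] Univ Z"
parts = re.split(author_address_regex, sample)
print(parts)  # ['[Smith, J.; Doe, A.] Univ X, Dept Y', '[Lee, K.] Univ Z']
```

The semicolon between the two authors is protected by the negative lookahead, while the one between the two address entries triggers a split.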
2032728310 | import sys
rl = sys.stdin.readline
# Euclidean algorithm
def GCD(A, B):  # greatest common divisor
if B == 0:
return A
return GCD(B, A % B)
T = int(rl())
for i in range(T):
A, B = map(int, rl().split())
    print(A * B // GCD(A, B))  # floor division keeps the result exact for large inputs
| YeonHoLee-dev/Python | BAEKJOON/[1934] 최소공배수.py | [1934] 최소공배수.py | py | 253 | python | ko | code | 0 | github-code | 13 |
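The same least-common-multiple computation is available through Python's standard library — `math.gcd` has existed since 3.5 — and integer floor division avoids the float round-trip used in the script above:

```python
from math import gcd

def lcm(a, b):
    # a*b / gcd(a, b) with integer arithmetic only: exact even for huge inputs.
    return a * b // gcd(a, b)

print(lcm(1, 45000), lcm(6, 10), lcm(13, 17))  # 45000 30 221
```

On Python 3.9+, `math.lcm` does this directly.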
74302135378 | # +
import numpy as np
from functools import wraps
from time import time
def timing(f):
@wraps(f)
def wrap(*args, **kw):
ts = time()
result = f(*args, **kw)
te = time()
print(f'Elapsed Time: {(te-ts): 2.4f} sec')
return result
return wrap
# -
DAY = 7
def readnumpy():
x = np.loadtxt(f'inputs/{DAY}.txt')
print("Input:", x)
print("Shape:", x.shape)
print("Min:", x.min())
print("Max:", x.max())
return x
def readlines():
with open(f'inputs/{DAY}.txt') as f:
x = [line.rstrip() for line in f]
print("Lines:", len(x))
return x
x = readlines() # readnumpy()
@timing
def solve_task1(x):
crabs = np.array(list(map(int, x[0].split(','))))
return int(np.abs(crabs - np.median(crabs)).sum())
print("Task 1 Result:", solve_task1(x))
@timing
def solve_task2(x):
crabs = np.array(list(map(int, x[0].split(',')))).astype(int)
fuels = []
    positions = list(range(crabs.min(), crabs.max() + 1))  # inclusive, so the rightmost crab position is also tried
for pos in positions:
delta = np.abs(crabs - pos)
fuels.append((delta * (delta + 1) / 2).sum())
return int(np.min(fuels))
print("Task 2 Result:", solve_task2(x))
| jonasgrebe/py-aoc-2021 | 07.py | 07.py | py | 1,218 | python | en | code | 0 | github-code | 13 |
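Both tasks above minimize total fuel over candidate positions: task 1's cost is linear in distance (optimal at the median), while task 2's is the triangular number of the distance (optimal near the mean). A numpy-free sketch of both cost functions, checked against the published AoC 2021 day 7 sample input:

```python
def fuel_linear(crabs, pos):
    # Task 1: each step costs 1 unit of fuel.
    return sum(abs(c - pos) for c in crabs)

def fuel_triangular(crabs, pos):
    # Task 2: moving n steps costs 1 + 2 + ... + n = n*(n+1)//2.
    return sum(abs(c - pos) * (abs(c - pos) + 1) // 2 for c in crabs)

example = [16, 1, 2, 0, 4, 2, 7, 1, 2, 14]  # AoC 2021 day 7 sample input
best1 = min(fuel_linear(example, p) for p in range(min(example), max(example) + 1))
best2 = min(fuel_triangular(example, p) for p in range(min(example), max(example) + 1))
print(best1, best2)  # 37 168
```

The closed-form `n*(n+1)//2` is the same triangular cost the script computes vectorized as `delta * (delta + 1) / 2`.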
1838365052 | from datetime import date
class Prodotto:
    M = "altamente disponibile"
    D = "disponibile"
    E = "non disponibile"
    MAXSCORTE = 1000
    MAXORDINE = 350

    def __init__(self, nome):
        self.nome = nome
        self.quantita = 0
        self.stato_scorte = Prodotto.E
        self._acquirenti = dict()

    @property
    def quantita(self):
        return self._quantita

    @quantita.setter
    def quantita(self, value):
        self._quantita = value

    @property
    def stato_scorte(self):
        return self._stato_scorte

    @stato_scorte.setter
    def stato_scorte(self, value):
        if value == Prodotto.M:
            self.immagazzina = lambda *args: print("non puoi acquistare altri prodotti")
            self.vendi = self._vendi
        elif value == Prodotto.D:
            self.immagazzina = self._immagazzina
            self.vendi = self._vendi
        elif value == Prodotto.E:
            self.immagazzina = self._immagazzina
            self.vendi = lambda *args: print("non puoi vendere altri prodotti")
        self._stato_scorte = value

    def aggiorna(self, nome, numero, data):
        self._acquirenti[nome] = (numero, data)

    def _vendi(self, nomeAcquirente, numero):
        if numero > Prodotto.MAXORDINE or numero > self.quantita:
            print("Attenzione: vendita di {} unita` del prodotto {} non possibile".format(numero, self.nome))
            return
        else:
            self.quantita = self.quantita - numero
            self.aggiorna(nomeAcquirente, numero, date.today())
            if self.quantita == 0:
                self.stato_scorte = Prodotto.E
            if 0 < self.quantita < Prodotto.MAXSCORTE:
                self.stato_scorte = Prodotto.D

    def elimina_scorte(self):
        self.quantita = 0
        self.stato_scorte = Prodotto.E

    def _immagazzina(self, numero):
        if numero <= 0:
            return
        self.quantita = self.quantita + numero
        if 0 < self.quantita < Prodotto.MAXSCORTE:
            self.stato_scorte = Prodotto.D
        if self.quantita == Prodotto.MAXSCORTE:
            self.stato_scorte = Prodotto.M


def main():
    p1 = Prodotto("Paperini")
    print("Inizialmente il prodotto {} e` nello stato {}".format(p1.nome, p1.stato_scorte))
    print("Immagazziniamo {} unita di prodotto {}".format(Prodotto.MAXSCORTE, p1.nome))
    p1.immagazzina(Prodotto.MAXSCORTE)
    print("Il prodotto {} e` nello stato {} e la quantita` di prodotto disponibile e` {}".format(
        p1.nome, p1.stato_scorte, p1.quantita))
    print("SupermarketSun vuole acquistare 100 unita` di prodotto {}".format(p1.nome))
    p1.vendi("SupermarketSun", 100)
    print("SupermarketlongS vuole acquistare 160 unita` di prodotto {}".format(p1.nome))
    p1.vendi("SupermarketLongS", 160)
    print("SupermarketFoop vuole acquistare 150 unita` di prodotto {}".format(p1.nome))
    p1.vendi("SupermarketFoop", 150)
    print("SupermarketPrai vuole acquistare 110 unita` di prodotto {}".format(p1.nome))
    p1.vendi("SupermarketPrai", 110)
    print("SupermarketLongS vuole acquistare 150 unita` di prodotto {}".format(p1.nome))
    p1.vendi("SupermarketLongS", 150)
    print("SupermarketRonald vuole acquistare 120 unita` di prodotto {}".format(p1.nome))
    p1.vendi("SupermarketRonald", 120)
    print("SupermarketPrai vuole acquistare 140 unita` di prodotto {}".format(p1.nome))
    p1.vendi("SupermarketPrai", 140)
    print("SupermarketRonald vuole acquistare 150 unita` di prodotto {}".format(p1.nome))
    p1.vendi("SupermarketRonald", 150)
    print("SupermarketSun vuole acquistare 30 unita` di prodotto {}".format(p1.nome))
    p1.vendi("SupermarketSun", 30)
    print("\nQuesti sono gli ultimi acquisti effettuati da ciascun cliente del prodotto ", p1.nome)
    for k, v in p1._acquirenti.items():
        print("{} ha acquistato {} unita` il giorno {}".format(k, v[0], v[1]))
    print("Eliminiamo scorte del prodotto {}".format(p1.nome))
    p1.elimina_scorte()
    if p1.quantita == 0:
        print("Non vi sono piu` scorte del prodotto {} in magazzino".format(p1.nome))
    else:
        print("qualcosa non va nell'implementazione")


if __name__ == "__main__":
    main()
"""Il programma deve stampare:
Inizialmente il prodotto Paperini e` nello stato non disponibile
Immagazziniamo 1000 unita di prodotto Paperini
Il prodotto Paperini e` nello stato altamente disponibile e la quantita` di prodotto disponibile e` 1000
SupermarketSun vuole acquistare 100 unita` di prodotto Paperini
SupermarketlongS vuole acquistare 160 unita` di prodotto Paperini
SupermarketFoop vuole acquistare 150 unita` di prodotto Paperini
SupermarketPrai vuole acquistare 110 unita` di prodotto Paperini
SupermarketLongS vuole acquistare 150 unita` di prodotto Paperini
SupermarketRonald vuole acquistare 120 unita` di prodotto Paperini
SupermarketPrai vuole acquistare 140 unita` di prodotto Paperini
SupermarketRonald vuole acquistare 150 unita` di prodotto Paperini
Attenzione: vendita di 150 unita` del prodotto Paperini non possibile
SupermarketSun vuole acquistare 30 unita` di prodotto Paperini
Questi sono gli ultimi acquisti effettuati da ciascun cliente del prodotto Paperini
SupermarketSun ha acquistato 30 unita` il giorno 2019-12-16
SupermarketLongS ha acquistato 150 unita` il giorno 2019-12-16
SupermarketFoop ha acquistato 150 unita` il giorno 2019-12-16
SupermarketPrai ha acquistato 140 unita` il giorno 2019-12-16
SupermarketRonald ha acquistato 120 unita` il giorno 2019-12-16
Eliminiamo scorte del prodotto Paperini
Non vi sono piu` scorte del prodotto Paperini in magazzino
"""
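The `stato_scorte` setter above swaps the instance's public methods (`immagazzina`, `vendi`) whenever the state changes, so behavior follows the state without if-chains inside every call — a lightweight State pattern. A minimal standalone sketch of the same technique (the `Switch` class and its names are invented here, not part of the file):

```python
class Switch:
    def __init__(self):
        self.state = "off"  # goes through the setter, which binds press()

    @property
    def state(self):
        return self._state

    @state.setter
    def state(self, value):
        # Rebind the public method per state: the next press() flips it back.
        if value == "on":
            self.press = self._turn_off
        else:
            self.press = self._turn_on
        self._state = value

    def _turn_on(self):
        self.state = "on"

    def _turn_off(self):
        self.state = "off"


s = Switch()
s.press()
print(s.state)  # on
s.press()
print(s.state)  # off
```

Because `press` is rebound as an instance attribute, attribute lookup finds it before the class, so each instance carries its own state-dependent behavior.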
| DanyR2001/Codice-Percorso-Universitario | Terzo anno/Programmazione Avanzata/Primi esercizi/ripasso/Esercitazione 13-2-2022/Es1.py | Es1.py | py | 5,706 | python | it | code | 0 | github-code | 13 |
34521916162 | import pytest
from utils import *
def test_total_word_count():
    data = {'0100405060': ['the', 'red', 'magnet', 'elephant', 'market'],
            '0100405040': ['elephant', 'magnet', 'the', 'market'],
            '0410500030': ['the', 'red', 'violin', 'wolf'],
            '3900339302': ['the', 'yellow', 'magnet', 'potato', 'the', 'bob', 'elephant']}
    assert total_word_count(data, "the") == 5
    assert total_word_count(data, "red") == 2
    assert total_word_count(data, "magnet") == 3


def test_unique_count():
    data = {'0101010101': ['the', 'quick', 'brown', 'fox'],
            '0202020202': ['jumped', 'over', 'the', 'lazy', 'dog'],
            '0303030202': ['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']}
    # Test Case 1
    chapters = ['01', '02', '03']
    word = 'the'
    assert unique_count(data, chapters, word) == 3
    # Test Case 2
    chapters = ['01', '02', '03']
    word = 'jumped'
    assert unique_count(data, chapters, word) == 2
    # Test Case 3
    chapters = ['01', '02', '03']
    word = 'cat'
    assert unique_count(data, chapters, word) == 0


def test_split_tarrifs():
    tarrifs = ['0100405060', '0100405040', '0410500030', '3900339302']
    expected_result = {
        'chapters': ['01', '01', '04', '39'],
        'headings': ['0100', '0100', '0410', '3900'],
        'subheadings': ['010040', '010040', '041050', '390033'],
        'duty rates': ['01004050', '01004050', '04105000', '39003393']
    }
    tarrifs2 = ['0203040566', '0197405040', '9876543211', '1928374653']
    expected_result2 = {
        'chapters': ['02', '01', '98', '19'],
        'headings': ['0203', '0197', '9876', '1928'],
        'subheadings': ['020304', '019740', '987654', '192837'],
        'duty rates': ['02030405', '01974050', '98765432', '19283746']
    }
    # test case 1
    assert split_tarrifs(tarrifs) == expected_result
    # test case 2
    assert split_tarrifs(tarrifs2) == expected_result2


def test_total_chapter_values():
    data = {
        "01004050": ["red", "apple", "market", "dog", "green"],
        "01005050": ["banana", "carrot", "flower", "dog", "elephant"],
        "02004050": ["red", "banana", "market", "elephant", "green"],
        "03004050": ["carrot", "dog", "elephant", "green"],
        "04004050": ["red", "apple", "dog", "elephant", "green"],
        "05004050": ["apple", "carrot", "dog", "elephant", "green"],
        "05004150": ["apple", "carrot", "dog", "elephant", "green"],
    }
    chapters = ["0100", "0200", "0300"]
    word = "dog"
    assert max_word_occurrences(data, chapters, word) == 2


@pytest.fixture
def unique_words():
    return ['elephant', 'market', 'potato', 'red', 'the', 'violin', 'yellow']


@pytest.fixture
def tarrifs_data():
    return {
        'chapters': ['01', '01', '04', '39'],
        'headings': ['0100', '0100', '0410', '3900'],
        'subheadings': ['010040', '010040', '041050', '390033'],
        'duty rates': ['01004050', '01004050', '04105000', '39003393']
    }


@pytest.fixture
def input_data():
    return {
        '0100405060': ['the', 'red', 'elephant', 'market'],
        '0100405040': ['elephant', 'the', 'market'],
        '0410500030': ['the', 'red', 'violin'],
        '3900339302': ['the', 'yellow', 'potato', 'the', 'elephant']
    }


def test_solution(unique_words, tarrifs_data, input_data):
    result = solution(unique_words, tarrifs_data, input_data)
    assert result['Word'] == unique_words
    assert result['TotalCount'] == [3, 2, 1, 2, 5, 1, 1]
    assert result['SingleChapterMaxCount'] == [2, 2, 1, 1, 2, 1, 1]
    assert result['UniqueChaptersCount'] == [2, 1, 1, 2, 3, 1, 1]
    assert result['SingleHeadingMaxCount'] == [2, 2, 1, 1, 2, 1, 1]
    assert result['UniqueHeadingCount'] == [2, 1, 1, 2, 3, 1, 1]
    assert result['SingleSubheadingMaxCount'] == [2, 2, 1, 1, 2, 1, 1]
    assert result['UniqueSubheadingCount'] == [2, 1, 1, 2, 3, 1, 1]
    assert result['SingleDutyRateMaxCount'] == [2, 2, 1, 1, 2, 1, 1]
    assert result['UniqueDutyRateCount'] == [2, 1, 1, 2, 3, 1, 1]
    assert result['SingleTariffMaxCount'] == [1, 1, 1, 1, 2, 1, 1]
    assert result['UniqueTariffCount'] == [3, 2, 1, 2, 4, 1, 1]


@pytest.fixture
def test_data():
    unique_words2 = ['apple', 'banana', 'carrot', 'dog', 'elephant', 'flower', 'green']
    tariff_data2 = {
        'chapters': ['01', '02', '03', '04', '05'],
        'headings': ['0100', '0200', '0300', '0400', '0500'],
        'subheadings': ['010040', '020040', '030040', '040040', '050040'],
        'duty rates': ['01004050', '02004050', '03004050', '04004050', '05004050']
    }
    input_data2 = {
        '0100405010': ['dog', 'elephant', 'green', 'banana', 'apple', 'flower', 'carrot', 'banana', 'carrot',
                       'elephant', 'banana', 'apple'],
        '0200405020': ['dog', 'carrot', 'green', 'elephant', 'flower', 'banana', 'apple', 'elephant', 'carrot',
                       'green'],
        '0300405020': ['green', 'elephant', 'carrot', 'banana', 'dog', 'apple', 'flower', 'banana', 'carrot',
                       'elephant'],
        '0400405020': ['dog', 'elephant', 'green', 'banana', 'carrot', 'flower', 'dog', 'apple', 'elephant',
                       'carrot'],
        '0500405010': ['flower', 'dog', 'banana', 'carrot', 'elephant', 'green', 'apple', 'banana', 'carrot',
                       'dog'],
    }
    return unique_words2, tariff_data2, input_data2


def test_solution2(test_data):
    unique_words2, tariff_data2, input_data2 = test_data
    result = solution(unique_words2, tariff_data2, input_data2)
    assert result['Word'] == unique_words2
    assert result['TotalCount'] == [6, 9, 10, 7, 9, 5, 6]
    assert result['SingleChapterMaxCount'] == [2, 3, 2, 2, 2, 1, 2]
    assert result['UniqueChaptersCount'] == [5, 5, 5, 5, 5, 5, 5]
    assert result['SingleHeadingMaxCount'] == [2, 3, 2, 2, 2, 1, 2]
    assert result['UniqueHeadingCount'] == [5, 5, 5, 5, 5, 5, 5]
    assert result['SingleSubheadingMaxCount'] == [2, 3, 2, 2, 2, 1, 2]
    assert result['UniqueSubheadingCount'] == [5, 5, 5, 5, 5, 5, 5]
    assert result['SingleDutyRateMaxCount'] == [2, 3, 2, 2, 2, 1, 2]
    assert result['UniqueDutyRateCount'] == [5, 5, 5, 5, 5, 5, 5]
    assert result['SingleTariffMaxCount'] == [2, 3, 2, 2, 2, 1, 2]
    assert result['UniqueTariffCount'] == [5, 5, 5, 5, 5, 5, 5]
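utils.py itself is not part of this file, but the expected_result fixtures above pin down what split_tarrifs has to do: slice each 10-digit tariff code into its 2-, 4-, 6-, and 8-digit prefixes. A hedged sketch consistent with those fixtures (the real implementation may differ):

```python
def split_tarrifs(tarrifs):
    # Each tariff code nests its classification levels as prefixes:
    # chapter (2 digits) < heading (4) < subheading (6) < duty rate (8).
    return {
        'chapters': [t[:2] for t in tarrifs],
        'headings': [t[:4] for t in tarrifs],
        'subheadings': [t[:6] for t in tarrifs],
        'duty rates': [t[:8] for t in tarrifs],
    }

print(split_tarrifs(['0100405060'])['duty rates'])  # ['01004050']
```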
| Levakov023/Python | 1/test_utils.py | test_utils.py | py | 6,514 | python | en | code | 0 | github-code | 13 |
17521336747 | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
def CreateLogisticRegressionModel(dataframe,
                                  outcome_variable,
                                  list_of_predictor_variables,
                                  scale_predictor_variables=False,
                                  test_size=0.2,
                                  show_classification_plot=True,
                                  lambda_for_regularization=0.001,
                                  max_iterations=1000,
                                  random_seed=412):
    # Keep only the predictors and outcome variable
    dataframe = dataframe[list_of_predictor_variables + [outcome_variable]].copy()

    # Keep complete cases
    dataframe.replace([np.inf, -np.inf], np.nan, inplace=True)
    dataframe.dropna(inplace=True)
    print("Count of examples eligible for inclusion in model training and testing:", len(dataframe.index))

    # Scale the predictors, if requested
    if scale_predictor_variables:
        # Scale predictors
        scaler = StandardScaler()
        dataframe[list_of_predictor_variables] = scaler.fit_transform(dataframe[list_of_predictor_variables])
        # Show the peak-to-peak range of each predictor
        print("\nPeak-to-peak range of each predictor:")
        print(np.ptp(dataframe[list_of_predictor_variables], axis=0))

    # Split dataframe into training and test sets
    train, test = train_test_split(
        dataframe,
        test_size=test_size,
        random_state=random_seed
    )

    # Create logistic regression model
    model = LogisticRegression(
        max_iter=max_iterations,
        random_state=random_seed,
        C=1 - lambda_for_regularization,
        fit_intercept=True
    )

    # Train the model using the training set and show fitting summary
    model.fit(train[list_of_predictor_variables], train[outcome_variable])
    print(f"\nNumber of iterations completed: {model.n_iter_}")

    # Show parameters of the model
    b_norm = model.intercept_
    w_norm = model.coef_
    print(f"\nModel parameters: w: {w_norm}, b: {b_norm}")

    # Predict the test data
    test['Predicted'] = model.predict(test[list_of_predictor_variables])

    # Calculate the accuracy score
    score = model.score(test[list_of_predictor_variables], test[outcome_variable])

    # Print the confusion matrix
    confusion_matrix = metrics.confusion_matrix(
        test[outcome_variable],
        test['Predicted']
    )
    if show_classification_plot:
        plt.figure(figsize=(9, 9))
        sns.heatmap(
            confusion_matrix,
            annot=True,
            fmt=".3f",
            linewidths=.5,
            square=True,
            cmap='Blues_r'
        )
        plt.ylabel('Actual label')
        plt.xlabel('Predicted label')
        all_sample_title = 'Accuracy Score: {0}'.format(score)
        plt.title(all_sample_title, size=15)
        plt.show()
    else:
        print("Confusion matrix:")
        print(confusion_matrix)

    # Return the model (and the fitted scaler, needed to transform new data)
    if scale_predictor_variables:
        dict_return = {
            'model': model,
            'scaler': scaler
        }
        return dict_return
    else:
        return model
# # Test the function
# from sklearn import datasets
# iris = pd.DataFrame(datasets.load_iris(as_frame=True).data)
# iris['species'] = datasets.load_iris(as_frame=True).target
# # iris = iris[iris['species'] != 2]
# logistic_reg_model = CreateLogisticRegressionModel(
# dataframe=iris,
# outcome_variable='species',
# list_of_predictor_variables=['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
# )
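One caveat worth flagging about the regularization argument above: in scikit-learn, LogisticRegression's `C` parameter is the *inverse* of the regularization strength, so `C = 1 - lambda_for_regularization` only approximates "almost no penalty" when lambda is tiny; it does not track the usual lambda convention as lambda grows. A sketch of the conventional mapping (the helper name is invented for illustration):

```python
def c_from_lambda(lam):
    # scikit-learn's C is the inverse of the regularization strength:
    # larger C means a weaker penalty, smaller C a stronger one.
    if lam <= 0:
        raise ValueError("lambda must be positive")
    return 1.0 / lam

print(c_from_lambda(0.001))  # 1000.0
print(c_from_lambda(1.0))    # 1.0
```

With lambda = 0.001, `1 - lambda` gives C ≈ 0.999 while `1 / lambda` gives C = 1000 — a substantially different penalty, so the choice deserves an explicit comment in the function.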
| KyleProtho/AnalysisToolBox | Python/PredictiveAnalytics/CreateLogisticRegressionModel.py | CreateLogisticRegressionModel.py | py | 3,936 | python | en | code | 0 | github-code | 13 |
25698264245 | import paho.mqtt.client as mqtt
import requests
import json
connected = False
def on_connect(client, userdata, flags, rc):
    global connected
    print("Connected with result code " + str(rc))
    connected = True
    client.subscribe("auck/*tempandpress*")
    print('connected')


# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    print(msg.payload)
    try:
        p = json.loads(msg.payload.decode())
        q = requests.post('http://127.0.0.1:8000/', p)
        print(q.content)
    except Exception:
        pass


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

print('connecting...')
client.connect("iot.eclipse.org", 1883, 60)

# while not connected:
#     pass

client.loop_start()
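A note on the last line: `loop_start()` runs the paho-mqtt network loop in a background thread and returns immediately, so a script that ends here may exit before any message arrives; `client.loop_forever()` blocks instead. The commented-out busy-wait above can also be replaced by a polling helper with a timeout — a standalone sketch (the helper name is invented; in the script you would pass `lambda: connected`):

```python
import time

def wait_until(flag_getter, timeout=5.0, poll=0.1):
    # Poll a boolean-returning callable until it turns True or the timeout
    # expires; returns whether the condition was met in time.
    deadline = time.monotonic() + timeout
    while not flag_getter():
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)
    return True

print(wait_until(lambda: True))                # True: flag already set
print(wait_until(lambda: False, timeout=0.3))  # False: timed out
```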
| abdool-sp/trial | data/utils.py | utils.py | py | 784 | python | en | code | 0 | github-code | 13 |
21485578792 | from pymongo import MongoClient
import datetime
marathon_public_url = "54.148.237.235"
mongo_port = 10109
client = MongoClient(marathon_public_url, mongo_port)
db = client.test_database
post1 = {"author": "Mike",
"text": "My first blog post!",
"tags": ["mongodb", "python", "pymongo"],
"date": datetime.datetime.utcnow()}
post2 = {"author": "Mark",
"text": "It was the best of times it was the worst of times...",
"tags": ["mongodb", "python", "pymongo","literature"],
"date": datetime.datetime.utcnow()}
posts = db.posts
print(posts)
post_id = posts.insert_one(post1).inserted_id
print(post_id)
post_id = posts.insert_one(post2).inserted_id
posts = db.posts
print(posts.find_one({"author": "Mark"}))
| markfjohnson/dcos_spark_demo | mongodb/Mongo_Hello_World.py | Mongo_Hello_World.py | py | 757 | python | en | code | 0 | github-code | 13 |
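For readers unfamiliar with the query above: `find_one` takes a filter document and returns the first matching record (or `None`). A pure-Python stand-in showing the same matching shape, with no server required — the data and helper here are invented for illustration:

```python
posts = [
    {"author": "Mike", "tags": ["mongodb", "python", "pymongo"]},
    {"author": "Mark", "tags": ["mongodb", "python", "pymongo", "literature"]},
]

def find_one(collection, query):
    # Naive equality match on every key in the filter document,
    # mimicking the shape of pymongo's find_one({"author": "Mark"}).
    for doc in collection:
        if all(doc.get(k) == v for k, v in query.items()):
            return doc
    return None

print(find_one(posts, {"author": "Mark"})["tags"][-1])  # literature
```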