| blob_id | language | repo_name | path | src_encoding | length_bytes | score | int_score | detected_licenses | license_type | text | download_success |
|---|---|---|---|---|---|---|---|---|---|---|---|
87de2ea3dcbf008e2362efef63b0a2341595da7d | Python | lidongdongbuaa/leetcode2.0 | /图/网格搜索 grid/DFS模板1/542. 01 Matrix.py | UTF-8 | 8,363 | 3.671875 | 4 | [] | no_license | #!/usr/bin/python3
# -*- coding: utf-8 -*-
# @Time : 2020/3/28 14:29
# @Author : LI Dongdong
# @FileName: 542. 01 Matrix.py
''''''
'''
Problem summary: in a matrix, compute each 1's distance to the nearest 0, i.e. the shortest distance from that cell to the "outside"
Key points:
DFS, BFS;
reverse thinking: computing the distance from a to b can be done by computing it from b to a;
change each 1 cell to a huge value (2**30 - 1), then shrink it by comparison
put all the starting points into the queue first, then traverse; once a cell has been visited (i.e. it no longer holds the huge value), its value is already optimal, because BFS expands level by level, so any earlier visit came from a closer 0
Solutions:
Solution 1: run BFS from every 1 to find its nearest 0, time O(N^2)
Solution 2: change the 1s to a huge value and run BFS from every 0; whenever a reachable neighbouring 1 holds a value larger than the new distance, update it, so it ends up with the distance to its nearest 0. time O(N)
Methods and analysis: solution 1; solution 2
time complexity order:
space complexity order: O(N)
How this might be asked:
'''
'''
find the distance from one No-0 to 0 distance
input:
matrix, list[[]]; shape range? None? only one elem?
output:
matrix
corner case
matrix is None, return None
matrix is [[]], return [[]]
A. brute force: BFS, record the distance from node to 0
Method:
1. corner case
2. build a res matrix
3. traverse each node in the matrix
if node is not 0, bfs its neighbor and record the distance, if meet 0, return distance and add to res matrix
Time complexity: O(N^2)
Space: O(N)
Pitfalls:
1. Time limit exceeded!
2. Huge space usage
'''
from copy import deepcopy
from typing import List
class Solution:
def updateMatrix(self, matrix: List[List[int]]) -> List[List[int]]:
if not matrix: # corner case
return None
if matrix == [[]]:
return [[]]
r, c = len(matrix), len(matrix[0])
res = [[0 for _ in range(c)] for _ in range(r)]
visit = [[0 for _ in range(c)] for _ in range(r)]
        def bfs(i, j, visited): # visit i, j and its neighbors, calculate the distance to 0 and add to res
from collections import deque
queue = deque()
queue.append([[i, j], 0])
visited[i][j] = 1
while queue:
[i, j], d = queue.popleft()
if 0 <= i - 1 and visited[i - 1][j] == 0:
if matrix[i - 1][j] == 1:
queue.append([[i - 1, j], d + 1])
visited[i - 1][j] = 1
else:
return d + 1
if i + 1 <= r - 1 and visited[i + 1][j] == 0:
if matrix[i + 1][j] == 1:
queue.append([[i + 1, j], d + 1])
visited[i + 1][j] = 1
else:
return d + 1
if j - 1 >= 0 and visited[i][j - 1] == 0:
if matrix[i][j - 1] == 1:
queue.append([[i, j - 1], d + 1])
visited[i][j - 1] = 1
else:
return d + 1
if j + 1 <= c - 1 and visited[i][j + 1] == 0:
if matrix[i][j + 1] == 1:
queue.append([[i, j + 1], d + 1])
visited[i][j + 1] = 1
else:
return d + 1
for i in range(r):
for j in range(c):
if matrix[i][j] == 0:
pass
else:
visited = deepcopy(visit)
d = bfs(i, j, visited)
res[i][j] = d
return res
'''
Times out
B. BFS - search from the 0 cells (reverse thinking)
Method:
1. corner case
2. change 1 to max value in matrix
3. traverse all 0 nodes i by bfs, using a queue to save node index and distance
    if i's neighbor is 0, return
    else:
        if distance < i's neighbor's value, replace it with distance and search its neighbors
Time: O(N^2)
Space: O(N)
'''
class Solution:
def updateMatrix(self, matrix: List[List[int]]) -> List[List[int]]:
if matrix is None:
return None
if matrix == []:
return []
if matrix == [[]]:
return [[]]
r, c = len(matrix), len(matrix[0])
for i in range(r):
for j in range(c):
if matrix[i][j] == 1:
matrix[i][j] = float('inf')
def bfs(i, j): # calculate distance from 0 to 1 and save in matrix
from collections import deque
queue = deque()
queue.append([[i, j], 0])
while queue:
[i, j], d = queue.popleft()
if 0 <= i - 1 and d + 1 < matrix[i - 1][j]:
queue.append([[i - 1, j], d + 1])
matrix[i - 1][j] = d + 1
if i + 1 <= r - 1 and d + 1 < matrix[i + 1][j]:
queue.append([[i + 1, j], d + 1])
matrix[i + 1][j] = d + 1
if 0 <= j - 1 and d + 1 < matrix[i][j - 1]:
queue.append([[i, j - 1], d + 1])
matrix[i][j - 1] = d + 1
if j + 1 <= c - 1 and d + 1 < matrix[i][j + 1]:
queue.append([[i, j + 1], d + 1])
matrix[i][j + 1] = d + 1
for i in range(r):
for j in range(c):
if matrix[i][j] == 0:
bfs(i, j)
return matrix
'''
C. BFS - from 0 to search
Tricks:
1. Push all the queue seed nodes up front, which saves time since every cell has to be traversed anyway
2. Reverse thinking: initialise the distances to a maximum value, then accumulate the distance outward from the 0 cells
3. Only visit unvisited cells; once a cell has been visited it already holds its minimum value, which cuts the time complexity
Time: O(N); new cells are added to the queue only if their current distance is greater than the calculated distance, so cells are not likely to be added multiple times.
Space: O(N)
'''
class Solution:
def updateMatrix(self, matrix: List[List[int]]) -> List[List[int]]:
if matrix is None:
return None
if matrix == []:
return []
if matrix == [[]]:
return [[]]
r, c = len(matrix), len(matrix[0])
res = [[0 for _ in range(c)] for _ in range(r)]
from collections import deque
queue = deque()
for i in range(r):
for j in range(c):
if matrix[i][j] == 1:
res[i][j] = 2**30 - 1
else:
queue.append([i, j])
while queue:
i, j = queue.popleft()
for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
if 0 <= x <= r - 1 and 0 <= y <= c - 1 and res[x][y] == 2**30 - 1:
queue.append([x, y])
res[x][y] = res[i][j] + 1
return res
'''
Times out; write it following the template
D. DFS - from each 0, find the shortest path length to its neighbouring 1s
Method:
1. corner case
2. change 1 to inf
3. traverse every 0 node i
dfs to search i's not 0 neighbors, if path < the neighbors' value, renew the value as path
4. return matrix
Time: O(N^2)
Space: O(N) used by recursion stack
'''
class Solution:
def updateMatrix(self, matrix: List[List[int]]) -> List[List[int]]:
if not matrix: # corner case
return None
if matrix == [[]]:
return [[]]
r, c = len(matrix), len(matrix[0])
for i in range(r):
for j in range(c):
if matrix[i][j] == 1:
matrix[i][j] = float('inf')
        def dfs(i, j, d): # depth-first search over i,j's non-0 neighbours, updating the path length
if i < 0 or j < 0 or i > r - 1 or j > c - 1 or matrix[i][j] < d:
return
matrix[i][j] = d
for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
dfs(x, y, d + 1)
for i in range(r):
for j in range(c):
if matrix[i][j] == 0:
dfs(i, j, 0)
return matrix
| true |
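Method C above (the multi-source BFS seeded with every 0) can be exercised outside the LeetCode harness; a minimal standalone sketch of the same idea (the function name and test grid are mine):

```python
from collections import deque

def update_matrix(matrix):
    # multi-source BFS: seed the queue with every 0 cell, then expand
    # outward one layer at a time; the first visit to a cell is optimal
    r, c = len(matrix), len(matrix[0])
    INF = 2 ** 30 - 1
    res = [[0 if matrix[i][j] == 0 else INF for j in range(c)]
           for i in range(r)]
    queue = deque((i, j) for i in range(r) for j in range(c)
                  if matrix[i][j] == 0)
    while queue:
        i, j = queue.popleft()
        for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= x < r and 0 <= y < c and res[x][y] == INF:
                res[x][y] = res[i][j] + 1
                queue.append((x, y))
    return res

print(update_matrix([[0, 0, 0], [0, 1, 0], [1, 1, 1]]))
# → [[0, 0, 0], [0, 1, 0], [1, 2, 1]]
```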
26a67e6cae6a759018ac6d8ef35a1ef8fa84f4de | Python | fewva/my_python | /10/aero_temper.py | UTF-8 | 370 | 3.078125 | 3 | [] | no_license | import numpy
with open('temper.stat', 'r') as file:
lines = [float(x) for x in [line.strip() for line in file]]
    print(f"Maximum value: {max(lines)}, Minimum value: {min(lines)},\nMean temperature: {numpy.mean(lines)} Number of unique temperatures: {len(set(lines))}") | true |
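The one-liner above packs parsing and reporting together; the same statistics, unpacked (the sample `temper.stat` written here is invented so the snippet runs on its own):

```python
import numpy

# write a small sample file so the snippet is self-contained
with open('temper.stat', 'w') as f:
    f.write('\n'.join(['21.5', '19.0', '23.2', '19.0']))

with open('temper.stat', 'r') as f:
    temps = [float(line.strip()) for line in f]

print(f"max={max(temps)} min={min(temps)} "
      f"mean={numpy.mean(temps)} unique={len(set(temps))}")
```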
35d0c1555ac0bd64ad6bd28483ae16adaff980e9 | Python | StevenMaharaj/ecg | /Steven_Maharaj_695281_code_task_1/e_test.py | UTF-8 | 139 | 2.625 | 3 | [] | no_license | from LCG import LCG_alogorithm
a,b,m = 825,0,997
x = LCG_alogorithm(a,b,m,seed = 2,iterations=50)
dice_throws = (x%6)+1
print(dice_throws) | true |
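`LCG_alogorithm` is imported from a module that is not shown; a minimal sketch of what a linear congruential generator with these parameters might look like (the vectorised numpy return is an assumption, made so that `x % 6` works elementwise):

```python
import numpy as np

def lcg(a, b, m, seed, iterations):
    # linear congruential recurrence: x_{n+1} = (a * x_n + b) mod m
    xs = []
    x = seed
    for _ in range(iterations):
        x = (a * x + b) % m
        xs.append(x)
    return np.array(xs)

x = lcg(825, 0, 997, seed=2, iterations=50)
dice_throws = (x % 6) + 1   # map each iterate onto a die face 1..6
print(dice_throws[:10])
```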
5bb4c1ce464a7f623b120ddcc484ce1b21eda052 | Python | ymarfoq/outilACVDesagregation | /programme/write_results_disaggregation_UP.py | UTF-8 | 3,301 | 2.578125 | 3 | [] | no_license | def write_results_disaggregation_UP(impact_method, full_results_UP, CF_categories, UP_meta_info, UP_list, all_system_scores,
all_unit_scores, level_reached, system_scores, system_number,systems):
from openpyxl.workbook import Workbook
from openpyxl.worksheet import Worksheet
from construct_path import construct_path
from copy import copy
filename = 'disaggregation_' + impact_method + '_system' + str(system_number) + '.xlsx'
wb = Workbook(encoding = 'mac_roman') #creating a workbook
ws = Worksheet(wb, title = 'result') #creating a sheet inside the workbook
#creation of headers
header2 = ['instance ID',
'parent instance ID',
'unit process',
'demand',
'unit',
'pruned',
'level',
'infrastructure']
for impact_category in CF_categories:
header2.append(impact_category)
for i in range(1, level_reached + 1):
header2.append('path level ' + str(i))
header2.append('country')
for i in range(4):
header2.append('category ' + str(i))
header1 = []
for i in range(header2.index(CF_categories[0])):
header1.append('')
for impact_category in CF_categories:
matrix_line = CF_categories.index(impact_category)
header1.append(str(system_scores[matrix_line,0]))
ws.append(header1) #writing the header
ws.append(header2) #writing the header
#content of the file
for instance_ID in range(len(full_results_UP)): #for each instance of the full_results
path = construct_path(full_results_UP, instance_ID, UP_list, impact_method, system_number) #construction of the path between the instance and the RF
level = len(path) - 1 #level will be incremented from zero
matrix_column = full_results_UP[instance_ID]['UP_number']
UP_name = UP_list[matrix_column]
demand = copy(full_results_UP[instance_ID]['demand'])
#fetching the info to fill the line
line = [instance_ID,
full_results_UP[instance_ID]['parent'],
UP_name,
demand,
UP_meta_info[UP_name]['unit'],
full_results_UP[instance_ID]['pruned'],
level,
UP_meta_info[UP_name]['Infrastructure']]
for impact_category in CF_categories: #system or unit scores.
matrix_line = CF_categories.index(impact_category)
if full_results_UP[instance_ID]['pruned'] == 1:
line.append(all_system_scores[matrix_line, matrix_column] * demand)
else:
line.append(all_unit_scores[matrix_line, matrix_column] * demand)
for i in range(1, level_reached + 1): #path
try:
line.append(path[i])
except IndexError:
line.append('')
#complementary info
line.append(UP_meta_info[UP_name]['Country'])
for i in range(4):
try:
line.append(UP_meta_info[UP_name]['Category type'][i])
except IndexError:
line.append('')
#print line
ws.append(line) #writing the header
ws.freeze_panes = 'D3'
wb.add_sheet(ws)
wb.save(filename)
| true |
b18bf3fd5f0237b36910bb3c99b250a46f57d646 | Python | smur232/NaiveBayes | /classifier.py | UTF-8 | 4,671 | 2.84375 | 3 | [] | no_license | import string, pickle, time
from math import log
from collections import Counter, defaultdict, deque
from web_crawler import begin_crawling_from_this_page, determine_category_file, collect_links
from article_parser import get_all_body_p_tags_bbc, get_soup_of_page
from pickle_reader import read_object_from
def get_most_likely_category(word_count_new_article):
probability_dict = dict()
categories = ['business', 'asia', 'technology', 'uk', 'europe']
for category in categories:
update_probabilities(category)
category_word_probabilities = read_object_from(category + '_probability.p', defaultdict)
probability_dict[category] = get_total_probability(category_word_probabilities, word_count_new_article, category)
largest_probability = max(probability_dict.values())
likely_category = [x for x, y in probability_dict.items() if y == largest_probability]
return likely_category[0]
def count_words_in_article(url):
soup = get_soup_of_page(url)
p_tags = get_all_body_p_tags_bbc(soup)
word_counter = Counter()
for pTag in p_tags:
contents = str(pTag.contents[0])
if 'href' not in contents and 'span' not in contents:
word_counter.update(split_into_words(contents))
return word_counter
def split_into_words(contents):
return [word.strip(string.punctuation).lower() for word in contents.split()]
def get_probability_of_word(word, probability_dict, total_word_count):
probability = probability_dict[word]
if probability == 0:
return 1/(total_word_count+2)
return probability
def get_probability_of_category(category):
a = read_object_from('articles_read_counter.p', Counter)
total_num_articles = sum(a.values())
return a[category] / total_num_articles
def get_total_probability(probability_of_words_for_category, new_article_word_counter, category):
    total_probability = log(get_probability_of_category(category))  # log prior, so it combines correctly with the summed log-likelihoods below
total_num_of_words = probability_of_words_for_category['num_of_words']
for word, count in new_article_word_counter.items():
if len(word) > 3:
total_probability += (log(get_probability_of_word(word, probability_of_words_for_category, total_num_of_words)))
return total_probability
def update_probabilities(category):
word_count = read_object_from(category + '.p', Counter)
total_num_words = sum(word_count.values())
# prob_of_word_not_seen = lambda: 1/total_num_words
word_probabilities = defaultdict(int)
word_probabilities['num_of_words'] = total_num_words
for word, count in word_count.items():
        # add one to every count (Laplace smoothing)
word_probabilities[word] = (count+1)/(total_num_words + 2)
pickle.dump(word_probabilities, open(category + '_probability.p', 'wb'))
return word_probabilities
def get_count_from_text_file(filename):
word_counter = Counter()
with open(filename) as f:
lines = f.readlines()
for line in lines:
word_counter.update(split_into_words(line))
return word_counter
def test_precision_recall(url, category, max_num_of_articles):
links = deque()
links.append(url)
count = 0
articles_in_training_set = read_object_from('visited_pages_set.p', set)
articles_in_testing = read_object_from('tested_articles_url.p', set)
print(articles_in_testing)
while links and count < max_num_of_articles:
try:
next_url = links.popleft()
soup = get_soup_of_page(next_url)
links.extend(collect_links(soup))
if next_url in articles_in_training_set or next_url in articles_in_testing:
continue
time.sleep(1)
article_category = determine_category_file(next_url)
if article_category != category:
continue
word_counter_new_article = count_words_in_article(next_url)
category_guess = get_most_likely_category(word_counter_new_article)
print('Currently going through ', next_url, ':')
articles_in_testing.add(next_url)
count += 1
print(' Your guess is', category_guess, '. The actual category is', article_category)
except AttributeError:
print('something went wrong, here', next_url, 'we will look at the next link')
continue
except Exception as e:
print('an unexpected error occurred, we will look at the next link: ', e)
continue
print('I have looked at', count, 'articles')
pickle.dump(articles_in_testing, open('tested_articles_url.p', 'wb'))
| true |
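`get_total_probability` scores a class as a prior combined with the summed log-likelihoods of the article's words. A toy, self-contained sketch of that scoring (the counts and the add-one smoothing denominator here are illustrative, not the exact formula used above):

```python
from collections import Counter
from math import log

def log_score(article_words, class_counts, prior):
    # log P(class) + sum over words of count(w) * log P(w | class),
    # with add-one (Laplace) smoothing over the class vocabulary
    total = sum(class_counts.values())
    vocab = len(class_counts)
    score = log(prior)
    for word, n in article_words.items():
        p = (class_counts[word] + 1) / (total + vocab)
        score += n * log(p)
    return score

business = Counter({"market": 5, "shares": 3, "bank": 2})
tech = Counter({"software": 6, "chip": 3, "market": 1})
article = Counter("market shares rally".split())

scores = {"business": log_score(article, business, 0.5),
          "tech": log_score(article, tech, 0.5)}
print(max(scores, key=scores.get))  # → business
```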
6060d10b91a5727626e7cc8e1d3ccd4fe67f936b | Python | hummelm10/FastlyPythonAPI | /FastlyPythonCLI/scripts/getAllTokens.py | UTF-8 | 1,597 | 2.625 | 3 | ["MIT"] | permissive | #custom
from .generateKey import generateKey
from .utils import clear
from .utils import getKeyFromConfig
#custom classes
from .utils import bcolors
from .utils import DataFrameFromDict
#Python packages
from xml.dom import minidom
import requests
import pandas
from pandas.io.json import json_normalize
def getAllTokens(printInput=True):
print('Getting all current tokens...')
print("API Key: " + getKeyFromConfig())
header={"Accept":"application/json"}
header.update({"Fastly-Key":getKeyFromConfig()})
r=requests.get("https://api.fastly.com/tokens",headers=header)
if r.status_code == 401:
print(bcolors.WARNING + "Return Message:" + bcolors.ENDC, end =" ")
print(r.json()['msg'])
input('Press ENTER to continue...')
clear()
elif r.status_code == 200:
with DataFrameFromDict(r.json()) as df:
df['ID'] = df['id']
df['User ID'] = df['user_id']
df['Customer ID'] = df['customer_id']
df['Name'] = df['name']
df['Scope'] = df['scope']
df['Last Used At'] = df['last_used_at']
df['Expiration'] = df['expires_at']
df['IP'] = df['ip']
print(bcolors.OKBLUE + bcolors.UNDERLINE + "FASTLY CURRENT API KEYS" + bcolors.ENDC + bcolors.ENDC)
print(df)
if printInput:
input("Press ENTER to continue...")
else:
return df
else:
print(bcolors.WARNING + "Unknown Response: " + str(r.status_code) + bcolors.ENDC)
input("Press ENTER to continue...")
exit()
return | true |
f4dce1394e025188b3ec0555b8ffe04f47005038 | Python | akash-kaushik/Python-Array-Programs | /Python Program for Find reminder of array multiplication divided by n.py | UTF-8 | 853 | 3.953125 | 4 | [] | no_license | # -*- coding: utf-8 -*-
"""
Created on Fri Sep 25 09:38:15 2020
@author: Akash Kaushik
"""
def user_input():
number = "wrong"
while number.isdigit() == False:
number = input()
return int(number)
print("Enter the length of an ARRAY:")
l = user_input()
def array_input(l):
my_array = []
for i in range(0,l):
print("Enter the", (i+1), "value of an array:")
my_array.append(user_input())
return my_array
new_array = []
new_array = array_input(l)
def array_mlt(new_array,l):
mult = 1
for i in range(0,l):
mult = mult * new_array[i]
return mult
mulltip = array_mlt(new_array,l)
print("Enter the value of the divisor:")
div = user_input()
rem = mulltip % div
print("Multiplication =", mulltip, "\nRemainder =", rem)
| true |
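The program above multiplies everything first and only then takes the remainder, which can create a huge intermediate product. Because (a·b) mod n = ((a mod n)·(b mod n)) mod n, the remainder can instead be folded in at every step — a sketch:

```python
def remainder_of_product(arr, n):
    # (a * b) % n == ((a % n) * (b % n)) % n, applied left to right,
    # so the running value never grows beyond n * max(arr)
    rem = 1
    for value in arr:
        rem = (rem * value) % n
    return rem

print(remainder_of_product([100, 10, 5, 25, 35, 14], 11))  # → 9
```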
f2b094824aeb101421962a2b5b6a6098e5350803 | Python | park0902/python-source | /RNN TEST/test3.py | UTF-8 | 8,683 | 3.015625 | 3 | [] | no_license | # '''
# This script shows how to predict stock prices using a basic RNN
# '''
# import tensorflow as tf
# import numpy as np
# import matplotlib.pyplot as plt
# import os
#
# tf.set_random_seed(777) # reproducibility
#
#
# def MinMaxScaler(data):
# numerator = data - np.min(data, 0)
# denominator = np.max(data, 0) - np.min(data, 0)
# return numerator / (denominator + 1e-7)
#
#
# # train Parameters
# seq_length = 7
# data_dim = 8
# hidden_dim = 100
# output_dim = 1
# learning_rate = 0.01
# iterations = 1000
#
# # high diff_24h diff_per_24h bid ask low volume last
# # 2318.82 2228.7 4.043612869 2319.4 2319.99 2129.78 4241.641516 2318.82
# xy = np.loadtxt('d:\\data\\bitcoin_okcoin_usd222.csv', delimiter=',')
# # xy = xy[::-1] # reverse order (chronically ordered)
# xy = MinMaxScaler(xy)
# x = xy
# y = xy[:, [-1]] # last as label
#
# # build a dataset
# dataX = []
# dataY = []
# for i in range(0, len(y) - seq_length):
# _x = x[i:i + seq_length]
# _y = y[i + seq_length] # Next last price
# # print(_x, "->", _y)
# dataX.append(_x)
# dataY.append(_y)
#
#
# # train/test split
# train_size = int(len(dataY) * 0.7)
# test_size = len(dataY) - train_size
# trainX, testX = np.array(dataX[0:train_size]), np.array(
# dataX[train_size:len(dataX)])
# trainY, testY = np.array(dataY[0:train_size]), np.array(
# dataY[train_size:len(dataY)])
#
#
# # input place holders
# X = tf.placeholder(tf.float32, [None, seq_length, data_dim])
# Y = tf.placeholder(tf.float32, [None, 1])
#
#
# # build a LSTM network
# cell = tf.contrib.rnn.BasicLSTMCell(
# num_units=hidden_dim, state_is_tuple=True, activation=tf.tanh)
# outputs, _states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
# Y_pred = tf.contrib.layers.fully_connected(
# outputs[:, -1], output_dim, activation_fn=None) # We use the last cell's output
#
#
# # cost/loss
# loss = tf.reduce_sum(tf.square(Y_pred - Y)) # sum of the squares
#
#
# # optimizer
# optimizer = tf.train.AdamOptimizer(learning_rate)
# train = optimizer.minimize(loss)
#
#
# # RMSE
# targets = tf.placeholder(tf.float32, [None, 1])
# predictions = tf.placeholder(tf.float32, [None, 1])
# rmse = tf.sqrt(tf.reduce_mean(tf.square(targets - predictions)))
#
# with tf.Session() as sess:
# init = tf.global_variables_initializer()
# sess.run(init)
#
# # Training step
# for i in range(iterations):
# _, step_loss = sess.run([train, loss], feed_dict={
# X: trainX, Y: trainY})
# print("[step: {}] loss: {}".format(i, step_loss))
#
#
# # Test step
# test_predict = sess.run(Y_pred, feed_dict={X: testX})
# rmse_val = sess.run(rmse, feed_dict={
# targets: testY, predictions: test_predict})
# print("RMSE: {}".format(rmse_val))
#
#
# # Plot predictions
# plt.plot(testY)
# plt.plot(test_predict)
# plt.xlabel("Time Period")
# plt.ylabel("Stock Price")
# plt.show()
import tensorflow as tf
import numpy as np
from tensorflow.contrib import *
import time
class Model:
def __init__(self, n_inputs, n_sequences, n_hiddens, n_outputs, hidden_layer_cnt, file_name, model_name):
self.n_inputs = n_inputs
self.n_sequences = n_sequences
self.n_hiddens = n_hiddens
self.n_outputs = n_outputs
self.hidden_layer_cnt = hidden_layer_cnt
self.file_name = file_name
self.model_name = model_name
self._build_net()
def _build_net(self):
with tf.variable_scope(self.model_name):
with tf.name_scope('input_layer'):
self.X = tf.placeholder(tf.float32, [None, self.n_sequences, self.n_inputs])
self.Y = tf.placeholder(tf.float32, [None, self.n_outputs])
with tf.name_scope('GRU'):
self.cell = tf.nn.rnn_cell.GRUCell(num_units=self.n_hiddens, activation=tf.tanh)
self.cell = tf.nn.rnn_cell.DropoutWrapper(self.cell, output_keep_prob=0.5)
self.multi_cell = tf.nn.rnn_cell.MultiRNNCell([self.cell] * self.hidden_layer_cnt, state_is_tuple= False)
outputs, _states = tf.nn.dynamic_rnn(self.multi_cell, self.X, dtype=tf.float32)
self.Y_pred = tf.contrib.layers.fully_connected(outputs[:, -1], self.n_outputs, activation_fn=None)
# with tf.name_scope('GRU'):
# self.cell = tf.nn.rnn_cell.GRUCell(num_units=self.n_hiddens, activation=tf.tanh)
# self.cell = tf.nn.rnn_cell.DropoutWrapper(self.cell, output_keep_prob=0.5)
# self.multi_cell = tf.nn.rnn_cell.MultiRNNCell([self.cell] * self.hidden_layer_cnt)
# outputs, _states = tf.nn.dynamic_rnn(self.multi_cell, self.X, dtype=tf.float32)
# self.FC = tf.reshape(outputs, [-1, self.n_hiddens])
# self.outputs = tf.contrib.layers.fully_connected(self.FC, self.n_outputs, activation_fn=None)
# self.outputs = tf.reshape(self.n_outputs, [None, self.n_sequences, self.n_outputs])
#
# self.sequence_loss = tf.contrib.seq2seq.sequence_loss(logits=self.outputs, targets=self.Y)
# self.loss = tf.reduce_mean(self.sequence_loss)
# with tf.name_scope('LSTM'):
# self.cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=self.n_hiddens, state_is_tuple=True, activation=tf.tanh)
# self.cell = tf.nn.rnn_cell.DropoutWrapper(self.cell, input_keep_prob=0.5)
# self.multi_cell = tf.nn.rnn_cell.MultiRNNCell([self.cell] * self.hidden_layer_cnt)
# self.multi_cell = tf.nn.rnn_cell.DropoutWrapper(self.multi_cell, output_keep_prob=0.5)
# outputs, _states = tf.nn.dynamic_rnn(self.multi_cell, self.X, dtype=tf.float32)
# self.Y_pred = tf.contrib.layers.fully_connected(outputs[:, -1], self.n_outputs, activation_fn=None)
# self.flat = tf.reshape(outputs, [-1, self.n_hiddens])
# self.logits = layers.linear(self.flat, self.n_inputs)
#
#
# self.cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=self.logits, labels=self.Y))
# self.optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(self.cost)
# self.accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.arg_max(self.logits, 1), tf.arg_max(self.Y, 1)), dtype=tf.float32))
# # self.accuracy = tf.reduce_mean(tf.cast(tf.equal(self.Y, tf.cast(self.Y, tf.uint8)), dtype=tf.float32))
self.loss = tf.reduce_sum(tf.square(self.Y_pred-self.Y))
self.optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
self.train = self.optimizer.minimize(self.loss)
def min_max_scaler(data):
return (data - np.min(data, axis=0)) / (np.max(data, axis=0) - np.min(data, axis=0) + 1e-5)
def read_data(file_name):
data = np.loadtxt('d:/data/' + file_name, delimiter=',', skiprows=1)
data = data[:, 1:]
data = data[np.sum(np.isnan(data), axis=1) == 0]
data = min_max_scaler(data)
# return data, data[:, [3]]
return data
def data_setting(data, sequence_length):
dataX = []
dataY = []
x = data
y = data[:, [-3]]
    for i in range(0, len(y)-sequence_length):
        _x = x[i:i + sequence_length]  # window of length sequence_length
        _y = y[i + sequence_length]    # the next value is the label
        dataX.append(_x)
        dataY.append(_y)
train_size = int(len(dataY)*0.7)
# test_size = len(dataY) - train_size
trainX, trainY = np.array(dataX[0:train_size]), np.array(dataY[0:train_size])
testX, testY = np.array(dataX[train_size:len(dataX)]), np.array(dataY[train_size:len(dataY)])
return trainX, trainY, testX, testY
# print(read_data('bitstampUSD_1-min_data_2012-01-01_to_2017-05-31.csv'))
file_names=['bitstampUSD_1-min_data_2012-01-01_to_2017-05-31.csv']
batch_size = 1000
n_inputs = 7
n_sequences = 60
n_hiddens = 100
n_outputs = 1
hidden_layer_cnt = 5
iterations = 1000
model = Model(n_inputs,n_sequences,n_hiddens,n_outputs,hidden_layer_cnt,file_name=file_names[0],model_name='RNN')
data = read_data(file_names[0])
trainX, trainY, testX, testY = data_setting(data, n_sequences)
with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    # Training step: the graph lives on the Model instance
    for i in range(iterations):
        _, step_loss = sess.run([model.train, model.loss],
                                feed_dict={model.X: trainX, model.Y: trainY})
        print("[step: {}] loss: {}".format(i, step_loss))
    # Test step
    test_predict = sess.run(model.Y_pred, feed_dict={model.X: testX})
    rmse_val = np.sqrt(np.mean(np.square(test_predict - testY)))
    print("RMSE: {}".format(rmse_val))
| true |
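`data_setting` is meant to build overlapping windows of length `n_sequences`, with the value that follows each window as its label. The windowing itself can be sketched (and checked) in isolation:

```python
import numpy as np

def make_windows(series, seq_len):
    # each sample is seq_len consecutive rows; the label is the value
    # immediately after the window
    xs, ys = [], []
    for i in range(len(series) - seq_len):
        xs.append(series[i:i + seq_len])
        ys.append(series[i + seq_len])
    return np.array(xs), np.array(ys)

data = np.arange(10, dtype=np.float32)
X, y = make_windows(data, 3)
print(X.shape, y.shape)  # → (7, 3) (7,)
```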
9cec942fc817e1fd0c3d22812887376a88cd46f1 | Python | osayi/python_intro | /data_visualization/mole_mimic.py | UTF-8 | 1,151 | 3.765625 | 4 | [] | no_license | from random import choice
class RandomWalker():
    '''A class to describe how the attributes up and
    down will move randomly'''
def __init__(self, tot_points=7000):
''' initializing total number of points to be walked'''
self.tot_points = tot_points
''' setting points to begin at origin'''
self.vert_values = [0]
self.hor_values = [0]
def walk_path(self):
''' calculate all the steps or points in walk'''
'''this sets the vertical values at a max of 7000'''
while len(self.vert_values) < self.tot_points:
'''deciding where and how far it will go vertically'''
vert_direction = choice([1,-1])
vert_distance = choice([0,1,2,3,4])
vert_steps = vert_direction * vert_distance
'''deciding where and how far it will go horizontally'''
hor_direction = choice([1,-1])
hor_distance = choice([0,1,2,3,4])
hor_steps = hor_direction * hor_distance
if hor_steps == 0 or vert_steps ==0:
continue
'''calculate the next steps'''
next_vert = self.vert_values[-1] + vert_steps
            next_hor = self.hor_values[-1] + hor_steps
self.vert_values.append(next_vert)
self.hor_values.append(next_hor)
| true |
0f55fcb6a4bcc64f981019fc77d885187c30165d | Python | chryswoods/brunel | /python/Brunel/_importer.py | UTF-8 | 17,440 | 2.625 | 3 | ["MIT"] | permissive |
__all__ = ["getDefaultImporters", "extractPersonName",
"isPerson", "importPerson", "isBusiness", "importBusiness",
"importConnection", "importPositions", "importAffiliations",
"importSources", "importType", "importNotes",
"importProject", "importSource", "importBiography",
"importEdgeSources", "importSharedLinks",
"importProjectDates"]
def _clean_string(s):
return str(s).lstrip().rstrip()
def _get_url(s):
return _clean_string(s)
def _get_date(s):
from ._daterange import Date as _Date
return _Date(s)
def _get_daterange(s):
parts = s.split(":")
dates = []
for part in parts:
dates.append(_get_date(part))
from ._daterange import DateRange as _DateRange
if len(dates) == 0:
return _DateRange.null()
elif len(dates) == 1:
return _DateRange(both=dates[0])
elif len(dates) > 2:
raise ValueError(f"Invalid number of dates? {dates}")
else:
return _DateRange(start=dates[0], end=dates[1])
def isPerson(node, importers=None):
try:
return str(node.Label).find("&") == -1
except Exception:
return False
def extractPersonName(name):
name = name.lstrip().rstrip()
orig_name = name
titles = []
firstnames = []
surnames = []
suffixes = []
state = {}
# some special cases
if name == "Brunel, I.K.":
firstnames.append("Isambard")
firstnames.append("Kingdom")
surnames.append("Brunel")
state["gender"] = "male"
elif name == "Wm Symonds":
firstnames.append("W.")
firstnames.append("M.")
surnames.append("Symonds")
elif name == "Mr John Edye":
firstnames.append("John")
surnames.append("Edye")
titles.append("Mr.")
else:
name = name.replace("'", "")
name = name.replace(".", " ")
s = name.lower().find("(the elder)")
if s != -1:
suffixes.append("(the elder)")
name = name[0:s]
parts = name.split(",")
possible_titles = {"captain": "Captain",
"cpt": "Captain",
"superintendent": "Superintendent",
"dr": "Dr.",
"doctor": "Dr.",
"prof": "Prof.",
"mr": "Mr.",
"ms": "Ms.",
"mrs": "Mrs.",
"miss": "Miss.",
"rn": "RN",
"rev": "Rev."
}
for part in parts[0].split(" "):
for surname in part.split("."):
try:
titles.append(possible_titles[surname.lower()])
except KeyError:
if len(surname) > 0:
surnames.append(surname)
try:
for part in parts[1].split(" "):
for firstname in part.split("."):
try:
titles.append(possible_titles[firstname.lower()])
except KeyError:
if len(firstname) == 1:
firstnames.append(f"{firstname}.")
elif len(firstname) > 1:
firstnames.append(firstname)
except Exception:
pass
state["titles"] = titles
state["firstnames"] = firstnames
state["surnames"] = surnames
state["suffixes"] = suffixes
state["orig_name"] = orig_name
if "Mr." in state["titles"]:
state["gender"] = "male"
elif "Mrs." in state["titles"] or "Ms." in state["titles"] or \
"Miss." in state["titles"]:
state["gender"] = "female"
return state
def importProjectDates(node, importers=None):
from ._daterange import DateRange as _DateRange
from ._daterange import Date as _Date
try:
dates = str(node["Date (joined project: left project)"])
except Exception:
return _DateRange.null()
if dates is None or dates == "0":
return _DateRange.null()
import re as _re
pattern = _re.compile(r":")
raw = dates
dates = pattern.split(dates)
try:
if len(dates) == 1:
return _DateRange(both=_Date(dates[0]))
elif len(dates) == 2:
return _DateRange(start=_Date(dates[0]), end=_Date(dates[1]))
else:
print(f"Could not interpret project dates from {raw}")
print(node)
return _DateRange.null()
except Exception:
print(f"Could not interpret project dates from {raw}")
print(node)
return _DateRange.null()
def importPerson(node, project, importers=None):
    if importers is None:
        importers = {}

    # Fall back to the module-level functions when no custom importer is
    # supplied. (A bare `except KeyError: f = f` would raise
    # UnboundLocalError, because assigning to the name makes it local, so
    # the defaults are pulled from globals() instead.)
    extractPersonName = importers.get("extractPersonName",
                                      globals()["extractPersonName"])
    importPositions = importers.get("importPositions",
                                    globals()["importPositions"])
    importAffiliations = importers.get("importAffiliations",
                                       globals()["importAffiliations"])
    importSources = importers.get("importSources",
                                  globals()["importSources"])
    importNotes = importers.get("importNotes", globals()["importNotes"])
    importProjectDates = importers.get("importProjectDates",
                                       globals()["importProjectDates"])
pid = project.getID()
try:
name = str(node.Label)
state = extractPersonName(name)
state["positions"] = {}
state["sources"] = {}
state["affiliations"] = {}
state["notes"] = {}
state["positions"][pid] = importPositions(node, importers=importers)
state["sources"][pid] = importSources(node, importers=importers)
state["affiliations"][pid] = importAffiliations(node,
importers=importers)
state["notes"][pid] = importNotes(node, importers=importers)
state["projects"] = {pid: importProjectDates(node,
importers=importers)}
from ._person import Person as _Person
return _Person(state)
except Exception as e:
print(f"Cannot load Person\n{node}\nError = {e}\n")
raise
return None
def isBusiness(node, importers=None):
return str(node.Label).find("&") != -1
def importBusiness(node, project, importers=None):
    if importers is None:
        importers = {}

    # same pattern as importPerson: dict.get with globals() defaults avoids
    # the UnboundLocalError that `except KeyError: f = f` would raise
    importPositions = importers.get("importPositions",
                                    globals()["importPositions"])
    importAffiliations = importers.get("importAffiliations",
                                       globals()["importAffiliations"])
    importSources = importers.get("importSources",
                                  globals()["importSources"])
    importNotes = importers.get("importNotes", globals()["importNotes"])
from ._daterange import DateRange as _DateRange
pid = project.getID()
try:
name = str(node.Label)
positions = {pid: importPositions(node, importers=importers)}
sources = {pid: importSources(node, importers=importers)}
affiliations = {pid: importAffiliations(node, importers=importers)}
notes = {pid: importNotes(node, importers=importers)}
from ._business import Business as _Business
return _Business({"name": name,
"positions": positions,
"sources": sources,
"affiliations": affiliations,
"projects": {project.getID(): _DateRange.null()},
"notes": notes})
except Exception as e:
print(f"Cannot load Business {node}: {e}")
return None
def importSharedLinks(edge, importers=None):
affiliations = importers["affiliations"]
result = []
import re as _re
pattern = _re.compile(r":")
for affiliation in pattern.split(str(edge["Shared Links"])):
affiliation = extractAffiliationName(affiliation)
affiliation = affiliations.add(affiliation)
if affiliation:
result.append(affiliation.getID())
return result
def importEdgeSources(edge, importers=None):
sources = importers["sources"]
asources = {}
csources = {}
import re as _re
from ._daterange import Date as _Date
duration = _Date()
pattern = _re.compile(r":")
dates = pattern.split(str(edge["Dates of AS"]))
adates = []
for date in dates:
date = _Date(date)
adates.append(date)
duration = duration.merge(date)
dates = pattern.split(str(edge["Dates of CS"]))
cdates = []
for date in dates:
date = _Date(date)
cdates.append(date)
duration = duration.merge(date)
data = pattern.split(str(edge["Afiliation Sources (AS)"]))
dates = adates
while len(dates) < len(data):
dates.append(duration)
for i in range(0, len(data)):
source = extractSourceName(data[i])
date = _Date(dates[i])
duration = duration.merge(date)
source = sources.add(source)
if source:
if source.updateDate(date, force=True):
sources.update(source)
id = source.getID()
if id not in asources:
asources[id] = []
asources[id].append(date)
data = pattern.split(str(edge["Correspondence Sources (CS)"]))
    dates = cdates  # the CS column pairs with the CS dates, not the AS dates
while len(dates) < len(data):
dates.append(duration)
for i in range(0, len(data)):
source = extractSourceName(data[i])
date = _Date(dates[i])
duration = duration.merge(date)
source = sources.add(source)
if source:
if source.updateDate(date, force=True):
sources.update(source)
id = source.getID()
if id not in csources:
csources[id] = []
csources[id].append(date)
return (duration, asources, csources)
def importConnection(edge, project, mapping=None, importers=None):
    # Fall back to the module-level importers via globals(); assigning the
    # name to itself in the except branch would raise UnboundLocalError.
    try:
        importEdgeSources = importers["importEdgeSources"]
    except KeyError:
        importEdgeSources = globals()["importEdgeSources"]
    try:
        importSharedLinks = importers["importSharedLinks"]
    except KeyError:
        importSharedLinks = globals()["importSharedLinks"]
    try:
        importNotes = importers["importNotes"]
    except KeyError:
        importNotes = globals()["importNotes"]
    try:
        importType = importers["importType"]
    except KeyError:
        importType = globals()["importType"]
from ._daterange import DateRange as _DateRange
try:
if mapping:
n0 = int(edge.Source)
n1 = int(edge.Target)
if n0 is None:
raise KeyError(f"Unspecified n0 {n0} <=> {n1}")
if n1 is None:
raise KeyError(f"Unspecified n1 {n0} <=> {n1}")
if n0 not in mapping:
raise KeyError(f"No node0 with ID {n0}")
if n1 not in mapping:
raise KeyError(f"No node1 with ID {n1}")
n0 = mapping[n0]
n1 = mapping[n1]
else:
n0 = edge.Source
n1 = edge.Target
if n0 is None:
raise KeyError(f"Unspecified n0 {n0} <=> {n1}")
if n1 is None:
raise KeyError(f"Unspecified n1 {n0} <=> {n1}")
notes = importNotes(edge, importers=importers, isEdge=True)
(duration, asources, csources) = importEdgeSources(edge,
importers=importers)
typ = importType(edge, importers=importers)
shared_links = importSharedLinks(edge, importers=importers)
from ._connection import Connection as _Connection
return _Connection({"n0": n0,
"n1": n1,
"notes": notes,
"affiliations": asources,
"correspondances": csources,
"duration": duration,
"shared": shared_links,
"projects": {project.getID(): _DateRange.null()},
"weights": {project.getID(): 1.0},
"type": typ,
})
except Exception as e:
print(f"\nFailed to add connection!\n{e}\n{edge}\n\n")
return None
def extractPositionName(position):
position = position.lower().lstrip().rstrip()
return position
def importPositions(node, importers=None):
positions = importers["positions"]
result = []
import re as _re
pattern = _re.compile(r":")
for position in pattern.split(str(node["Position(s)"])):
position = extractPositionName(position)
position = positions.add(position)
if position:
result.append(position.getID())
return result
def extractAffiliationName(affiliation):
if not affiliation:
return None
affiliation = affiliation.lstrip().rstrip()
lower = affiliation.lower()
if lower == "nan" or affiliation == "_":
return None
return affiliation
def importAffiliations(node, importers=None):
affiliations = importers["affiliations"]
result = []
import re as _re
pattern = _re.compile(r":")
for affiliation in pattern.split(str(node["Other Affiliations"])):
affiliation = extractAffiliationName(affiliation)
affiliation = affiliations.add(affiliation)
if affiliation:
result.append(affiliation.getID())
return result
def extractSourceName(source):
source = source.lstrip().rstrip()
lower = source.lower()
if lower == "nan" or lower == "_":
return None
return source
def importSources(node, importers=None):
sources = importers["sources"]
result = []
import re as _re
pattern = _re.compile(r":")
for source in pattern.split(str(node["Source(s)"])):
name = extractSourceName(source)
source = sources.add(name)
if source:
result.append(source.getID())
return result
def importNotes(node, importers=None, isEdge=False):
return []
def importType(edge, importers=None):
try:
typ = str(edge.Link).lower()
except Exception:
return None
if typ.find("indirect") != -1:
return "indirect"
elif typ.find("direct") != -1:
return "direct"
else:
print(f"Invalid link type? {typ}")
return None
def importSource(data, importers=None):
props = {"name": _clean_string(data.Source),
"description": _clean_string(data.Description)}
from ._source import Source as _Source
return _Source(props)
def importProject(data, importers=None):
props = {"name": _clean_string(data.Name),
"duration": _get_daterange(data.Duration),
"url": _get_url(data.URL)}
from ._project import Project as _Project
return _Project(props)
def importBiography(data, importers=None):
name = str(data.Node).lstrip().rstrip()
bio = str(data.Biography).lstrip().rstrip()
if importers is None:
return
social = importers["social"]
    try:
        extractPersonName = importers["extractPersonName"]
    except Exception:
        # globals() avoids an UnboundLocalError on the fallback assignment
        extractPersonName = globals()["extractPersonName"]
from ._person import Person as _Person
person = None
try:
person_name = extractPersonName(name)
person = _Person(person_name)
node = social.people().find(person.getName(), best_match=True)
return (node, bio)
except Exception:
pass
try:
node = social.businesses().find(name)
return (node, bio)
except Exception:
pass
# let's try to find a person with the same surname...
nodes = None
try:
nodes = social.people().find(person.getSurname())
if isinstance(nodes, _Person):
nodes = [nodes]
for node in nodes:
if node.couldBe(person):
return (node, bio)
except Exception:
pass
if nodes:
print(f"Nearest matches are {nodes}")
if name == "Humphries, Francis":
data.Node = "Humphrys, Francis"
return importBiography(data, importers)
print(f"There is nothing called {name} for which to give a biography!")
return (None, None)
def getDefaultImporters():
return {"isPerson": isPerson, "extractPersonName": extractPersonName,
"importPerson": importPerson,
"isBusiness": isBusiness, "importBusiness": importBusiness,
"importConnection": importConnection,
"importPositions": importPositions,
"importAffiliations": importAffiliations,
"importSources": importSources, "importNotes": importNotes,
"importType": importType,
"importSource": importSource, "importProject": importProject,
"importBiography": importBiography,
"importEdgeSources": importEdgeSources,
"importSharedLinks": importSharedLinks,
"importProjectDates": importProjectDates}
| true |
804e75d30ed528fe1ca2947a5dad9f7b5093e192 | Python | TonyQue/python_test_save | /for_test.py | UTF-8 | 62 | 3.125 | 3 | [] | no_license | L = ['Bart', 'Lisa', 'Adam']
for n in L:
print('Hello,',n) | true |
bcb5aa73b4dc9a142060be8ee4f91d9c670c87d1 | Python | chyt123/cosmos | /coding_everyday/lc500+/lc784/csh.py | UTF-8 | 569 | 3.328125 | 3 | [] | no_license | class Solution(object):
def letterCasePermutation(self, S):
"""
:type S: str
:rtype: List[str]
"""
result = ['']
s = S.lower()
for i in s:
if 'a' <= i <= 'z':
lower = [x + i for x in result]
upper = [x + i.upper() for x in result]
result = lower + upper
else:
result = [x + i for x in result]
return result
if __name__ == "__main__":
sol = Solution()
S = '12345'
print sol.letterCasePermutation(S)
| true |
a7c782a73fa63f2e9ad03cbf38b8550a71c7d30d | Python | yjzhang96/UTI-VFI | /models/networks.py | UTF-8 | 5,796 | 2.609375 | 3 | [] | no_license | import torch
import torch.nn as nn
from numpy.random import normal
from numpy.linalg import svd
from math import sqrt
import torch.nn.functional as F
def _get_orthogonal_init_weights(weights):
fan_out = weights.size(0)
fan_in = weights.size(1) * weights.size(2) * weights.size(3)
u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False)
if u.shape == (fan_out, fan_in):
return torch.Tensor(u.reshape(weights.size()))
else:
return torch.Tensor(v.reshape(weights.size()))
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
# m.weight.data.normal_(0.0, 0.02)
nn.init.xavier_uniform_(m.weight.data)
if hasattr(m.bias, 'data'):
m.bias.data.fill_(0)
elif classname.find('BatchNorm2d') != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
def pixel_reshuffle(input, upscale_factor):
"""Rearranges elements in a tensor of shape ``[*, C, H, W]`` to a
tensor of shape ``[C*r^2, H/r, W/r]``.
See :class:`~torch.nn.PixelShuffle` for details.
Args:
input (Variable): Input
upscale_factor (int): factor to increase spatial resolution by
Examples:
>>> input = autograd.Variable(torch.Tensor(1, 3, 12, 12))
>>> output = pixel_reshuffle(input,2)
>>> print(output.size())
torch.Size([1, 12, 6, 6])
"""
batch_size, channels, in_height, in_width = input.size()
# // division is to keep data type unchanged. In this way, the out_height is still int type
out_height = in_height // upscale_factor
out_width = in_width // upscale_factor
input_view = input.contiguous().view(batch_size, channels, out_height, upscale_factor, out_width, upscale_factor)
channels = channels * upscale_factor * upscale_factor
shuffle_out = input_view.permute(0, 1, 3, 5, 2, 4).contiguous()
return shuffle_out.view(batch_size, channels, out_height, out_width)
class RDB_block(nn.Module):
def __init__(self, inChannels, growRate, kSize=3):
super(RDB_block, self).__init__()
Cin = inChannels
G = growRate
self.conv = nn.Sequential(*[
nn.Conv2d(Cin, G, kSize, padding=(kSize - 1) // 2, stride=1),
nn.ReLU()
])
def forward(self, x):
out = self.conv(x)
return torch.cat((x, out), 1)
class RDB(nn.Module):
def __init__(self, growRate0, growRate, nConvLayers, kSize=3):
super(RDB, self).__init__()
G0 = growRate0
G = growRate
C = nConvLayers
convs = []
for c in range(C):
convs.append(RDB_block(G0 + c * G, G))
self.convs = nn.Sequential(*convs)
# Local Feature Fusion
self.LFF = nn.Conv2d(G0 + C * G, G0, 1, padding=0, stride=1)
def forward(self, x):
return self.LFF(self.convs(x)) + x
class Residual_Net(nn.Module):
def __init__(self, in_channel, out_channel, n_RDB):
super(Residual_Net, self).__init__()
self.G0 = 96
kSize = 3
# number of RDB blocks, conv layers, out channels
self.D = n_RDB
self.C = 5
self.G = 48
# Shallow feature extraction net
self.SFENet1 = nn.Conv2d(in_channel*4, self.G0, 5, padding=2, stride=1)
self.SFENet2 = nn.Conv2d(self.G0, self.G0, kSize, padding=(kSize - 1) // 2, stride=1)
        # Residual dense blocks and dense feature fusion
self.RDBs = nn.ModuleList()
for i in range(self.D):
self.RDBs.append(
RDB(growRate0=self.G0, growRate=self.G, nConvLayers=self.C)
)
# Global Feature Fusion
self.GFF = nn.Sequential(*[
nn.Conv2d(self.D * self.G0, self.G0, 1, padding=0, stride=1),
nn.Conv2d(self.G0, self.G0, kSize, padding=(kSize - 1) // 2, stride=1)
])
# Up-sampling net
self.UPNet = nn.Sequential(*[
nn.Conv2d(self.G0, 256, kSize, padding=(kSize - 1) // 2, stride=1),
nn.PixelShuffle(2),
nn.Conv2d(64, out_channel, kSize, padding=(kSize - 1) // 2, stride=1)
])
def forward(self, input):
B_shuffle = pixel_reshuffle(input, 2)
f__1 = self.SFENet1(B_shuffle)
x = self.SFENet2(f__1)
RDBs_out = []
for i in range(self.D):
x = self.RDBs[i](x)
RDBs_out.append(x)
x = self.GFF(torch.cat(RDBs_out, 1))
x += f__1
# residual output
Residual = self.UPNet(x)
return Residual
class Deblur_2step(nn.Module):
def __init__(self, input_c, output_c=6, only_stage1=False):
super(Deblur_2step,self).__init__()
self.deblur_net = Residual_Net(input_c,12,n_RDB=20)
self.refine_net = Residual_Net(9,output_c,n_RDB=10)
self.only_stage1 = only_stage1
def forward(self, B0,B1,B2,B3):
input1 = torch.cat((B0,B1,B2,B3),1)
# with torch.no_grad():
if self.only_stage1:
res1 = self.deblur_net(input1)
deblur_out = torch.split(res1 + torch.cat((B1, B1, B2, B2), 1), 3, 1)
return deblur_out
else:
res1 = self.deblur_net(input1).detach()
deblur_out = torch.split(res1 + torch.cat((B1, B1, B2, B2), 1), 3, 1)
deblur_B1 = torch.cat((B1, deblur_out[0],deblur_out[1]),1)
deblur_B2 = torch.cat((B2, deblur_out[2],deblur_out[3]),1)
res2_B1 = self.refine_net(deblur_B1)
res2_B2 = self.refine_net(deblur_B2)
refine_B1 = torch.split(res2_B1 + torch.cat((deblur_out[0], deblur_out[1]), 1), 3, 1)
refine_B2 = torch.split(res2_B2 + torch.cat((deblur_out[2], deblur_out[3]), 1), 3, 1)
refine_out = refine_B1 + refine_B2
return deblur_out, refine_out
| true |
d5dfe29685a15e5acf2e3dd42fe227018ee250d6 | Python | d0c-s4vage/lookatme | /examples/file_loader_ext/hello_world.py | UTF-8 | 100 | 3.484375 | 3 | [
"MIT"
] | permissive | def hello_world(arg1):
print(f"Hello World, {arg1}")
hello_world(input("What is your name? "))
| true |
00d4c1442e6d65a8a4ebb3a8bb95b040d88df197 | Python | dsbaek/python_speech_weather | /speech_weather_test.py | UTF-8 | 1,236 | 3.171875 | 3 | [] | no_license | import speech_recognition as sr
from time import sleep
from pyowm import OWM
#import pygal
#bar_chart = pygal.Bar()
#bar_chart.add('Fibonacci', [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]) # Add some values
#bar_chart.render_to_file('bar_chart.svg')
r = sr.Recognizer()
mic = sr.Microphone()
API_key = '(your key)'
owm = OWM(API_key)
while True:
    print('Which place would you like the weather for? ')
with mic as source:
audio = r.listen(source)
talk = r.recognize_google(audio, language = 'en')
    if talk == 'thank you':
        print('Okay, understood.')
    else:
        print('Weather info for', talk)
obs = owm.weather_at_place(talk)
w = obs.get_weather()
l = obs.get_location()
        temp = w.get_temperature  # bound method; called below as temp(unit='celsius')
rain = w.get_rain()
print(l.get_name()+':', w.get_status()+',', temp(unit='celsius')['temp'], '˚C')
if w.get_status() == 'Rain':
            print('It is currently raining in', l.get_name() + '.', 'Take an umbrella!')
"""print('강수확율:' , rain , '%')
if rain > 50:
print('우산을 챙겨 가세요')
"""
sleep(1)
| true |
bb71ed6e3d7ab8b580d96bc949bb00c50c194863 | Python | iiseeland/quant_ghl | /py27_standard/charpter2/mmap-example-2.py | UTF-8 | 523 | 3.328125 | 3 | [] | no_license | # -*- coding:utf-8 -*-
# Example 2-14. 对映射区域使用字符串方法和正则表达式
# File: mmap-example-2.py
import mmap
import os, string, re
def mapfile(filename):
file = open(filename, "r+")
size = os.path.getsize(filename)
return mmap.mmap(file.fileno(), size)
data = mapfile("samples/sample.txt")
# search
index = data.find("small")
print index, repr(data[index-5:index+15])
# regular expressions work too!
m = re.search("small", data)
print m.start(), m.group()
| true |
6f06423875434a1c61213b8425f90c4590b6130e | Python | lim-jonguk/ICE_HW8 | /석채원_5.py | UTF-8 | 548 | 3.96875 | 4 | [] | no_license | #5번, B835193 석채원
class Employee:
    # class variables
employeeList = list()
employeenum = 0
def __init__(self,name,salary):
self.name = name
self.salary = salary
Employee.employeenum+=1
        Employee.employeeList.append('SN : %d Name: %s Salary: %s'%(Employee.employeenum,self.name,self.salary))
def main() :
Employee("사장",1200)
Employee("김수철",300)
Employee("이영애",600)
for i in Employee("장동철",400).employeeList :
print(i)
main()
| true |
a4edd328db26fdc7642ee70f0ab5ff2902a5dae2 | Python | sabrysingery/HalowTV | /plugin.video.swefilm/utils.py | UTF-8 | 490 | 2.9375 | 3 | [
"MIT"
] | permissive | import urllib2
import HTMLParser
def fetch_html(url):
request = urllib2.Request(url, headers={
'Accept' : '*/*',
'Accept-Encoding': '*',
'User-Agent': 'curl/7.43.0'
})
contents = urllib2.urlopen(request).read()
return contents
def safe_decode(word):
h = HTMLParser.HTMLParser()
word = h.unescape(word)
s = ''
for letter in word:
if ord(letter) > 127:
s += '_'
else:
s += letter
return s
| true |
587d729c379ff742183ec6fa2eef22b2df3db782 | Python | Relthdrarxash/Reo | /convertisseur_fini_cascade.py | UTF-8 | 2,440 | 3.796875 | 4 | [] | no_license | import re
def convert_deci_reo(nombre) :
"""
Convertit un entier naturel en reo tahitien.
:param nombre: un entier naturel
:type nombre: str
:return: le nombre en reo tahitien
:rtype: str
Valeur max : 9 999 999
Rappels des règles d'écriture en reo tahitien
- Les chiffres de zéro à neuf sont rendus par des mots spécifiques
- Les dizaines se forment en posant le chiffre multiplicateur avant le mot pour dix (’ahuru), à l’exception de dix lui-même (valable pour les centaines, milliers, etc)
- Les nombres composés se forment en reliant le chiffre de l’unité à la dizaine avec le coordinateur 'ma'
"""
    # The digits; zero is not written out
numera = {
"0" : "’aore",
"1" : "ho’e",
"2" : "piti",
"3" : "toru",
"4" : "maha",
"5" : "pae",
"6" : "ono",
"7" : "hitu",
"8" : "va’u",
"9" : "iva"
}
    # The multiples of 10
puissance_dix = {
"0" : "",
"1" : "’ahuru",
"2" : "hanere",
"3" : "tauatini",
"4" : "’ahuru tauatini",
"5" : "hanere tauatini",
"6" : "mirioni",
"7" : "’ahuru mirioni",
"8" : "hanere mirioni",
"9" : "miria"
}
    ## Over to you !! ##
reo = ""
i = 0
for i in range(len(nombre)):
puissance = len(nombre) - i - 1
if nombre[i] != "0":
            if puissance > 1 and nombre[i+1] != "0":
                reo += numera[nombre[i]] + " " + puissance_dix[str(puissance)] + " e "
elif puissance == 1:
if nombre[i] == "0":
pass
elif nombre[i] == "1":
if nombre[i+1] != "0":
reo += puissance_dix[str(puissance)] + " ma "
else :
reo += puissance_dix[str(puissance)]
else :
reo += numera[nombre[i]] + " " + puissance_dix[str(puissance)] + " ma "
elif puissance == 0:
reo += numera[nombre[i]]
elif reo == "":
reo += numera[nombre[i]]
return reo
# User input and validation
nombre_deci = ""
while not re.match("^[0-9]+$", nombre_deci) :
    nombre_deci = input("Please enter a number: ")
print(f"{nombre_deci} in reo tahiti is: {convert_deci_reo(nombre_deci)}")
b3218ecebe7c182cc09a7245186961e2e1a56c0b | Python | a1ec/MIT-OCW-6.00SC | /misc-exercises/square_root_bi.py | UTF-8 | 875 | 3.90625 | 4 | [] | no_license | # Problem Set 0
# Name: A C
# Collaborators: none
# Time Spent: 3:30
def withinEpsilon(a, b, epsilon):
""" Returns bool on whether difference between a and b is within epsilon"""
if abs(a-b) >= epsilon:
        return False
else:
return True
x = float(raw_input("Enter x: "))
lowBound = 0.0
upBound = max(x, 1.0)
epsilon = 0.01
i = 0
a = upBound / 2.0
# loop until find cube root or pass input
while not withinEpsilon(a**2, x, epsilon) and a <= upBound:
if a**2 > x:
upBound = a
else:
lowBound = a
# condition where stuck in loop
if upBound == lowBound:
break
a = (upBound+lowBound)/2.0
i += 1
print i, ": a:,", a
print lowBound, "-", upBound
print "Iterations: ", i
if withinEpsilon(a**2, x, epsilon):
print a, "is close to the square root of", x
else:
print "Failed on square root of", x
| true |
6073d54ae199c7af9d040867f058fe5d607a5ad6 | Python | rallen0150/class_notes | /practice/simplehistogram.py | UTF-8 | 349 | 4.03125 | 4 | [] | no_license | def histogram(value):
for x in value:
graph = "*" * x
print(graph)
return "" # Returns nothing
numbers = []
x = input("How many values would you like to input?\n>")
x = int(x)
for n in range(x):
value = input("Enter a number: ")
value = int(value)
numbers.append(value)
graph = histogram(numbers)
print(graph)
| true |
f6ba98a407e800cf43186905573487eed9dec8a5 | Python | lcaldara/python3 | /exercicios curso em video/manipulando string.py | UTF-8 | 1,281 | 4.0625 | 4 | [] | no_license | import random
frase = 'Curso em Vídeo Python'
#slicing
fat = frase[9:21:2]
#[9:13] slices from 9 to 13
#[9:21:2] slices from 9 to 21 skipping 2
#[:5] slices from the start to 5
#[15:] slices from 15 to the end
#[9::3] slices from 9 to the end skipping 3
print(frase[9:21:2])
print(len(frase))
print(len(fat))
#length of the phrase
print(frase.count('o'))
#counts how many 'o' there are in the phrase
print(frase.count('o', 0, 13))
#counts the 'o's from 0 to 13
print(frase.find('deo'))
#finds it in the phrase
print('Curso' in frase)
#returns a boolean telling whether the string is in the phrase
print(frase.replace('Python', 'Android'))
#replaces every word Python with Android
print(frase.upper())
#puts the whole string in uppercase
print(frase.lower())
print(frase.capitalize())
#capitalizes only the first letter of the first word
print(frase.title())
#capitalizes the first letter of every word
print(frase.split())
#splits on the spaces
print(len(frase.split()))
#counts the words
print(frase.upper().count('O'))
#converts to uppercase and counts the 'O's
dividido = frase.split()
print(dividido[0])
#prints the first word of the dividido list
print(dividido[2][3])
#prints the third letter (counting from letter zero) inside word 2 (counting from word zero)
| true |
7373d8d05edc2e64fbb8a20fc3054ccd4d9aafaa | Python | matthew-cheney/kattis-solutions | /solutions/neighborhoodwatch.py | UTF-8 | 932 | 3.03125 | 3 | [] | no_license | N, K = [int(x) for x in input().split(' ')]
watches = set()
for k in range(K):
watches.add(int(input()) - 1)
total = N * (N - 1)
total //= 2
streak = False
dangInARow = 0
for h in range(N):
if h not in watches:
# dangerous house
# continuing dangerous streak
if streak:
dangInARow += 1
# starting dangerous streak
else:
streak = True
dangInARow += 1
else:
# ending dangerous streak
if streak:
dangWalks = dangInARow * (dangInARow - 1)
dangWalks //= 2
total -= dangWalks
streak = False
dangInARow = 0
# all safe
# do nothing
# check last streak
if streak:
dangWalks = dangInARow * (dangInARow - 1)
dangWalks //= 2
total -= dangWalks
# add walks to own house
total += len(watches)
print(total) | true |
b3ae8b02f9fb9428717e8dfbcbd71367a81ea425 | Python | ParulProgrammingHub/assignment-1-palakparekh | /program10.py | UTF-8 | 270 | 3.34375 | 3 | [] | no_license | principle=int(input("enter the principle amount:"))
rate=int(input("enter the rate:"))
time=int(input("enter the time:"))
def simple_interest(principle,time,rate):
    # simple interest = P * R * T / 100 (rate entered as a percentage)
    A=(principle*rate*time)/100
    print("total simple interest:",A)
simple_interest(principle,time,rate)
| true |
da6a7ba1a273dc0822696f96626ac2046c6abe43 | Python | 39239580/dev_ops_recom_sys_ser | /code_tools/tf_record_work/tf_recordmake.py | UTF-8 | 2,427 | 2.671875 | 3 | [] | no_license | import tensorflow as tf
import numpy as np
# from tensorflow.examples.tutorials.mnist import *
# from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.examples.tutorials.mnist import input_data
# from tensorflow.models.offic
# from tensorflow.
"""
TFrecord 支持的数据类型有三种
字符串,整数,浮点型, Int64List,
BytestList, FloatList
"""
# creates an integer (Int64List) feature
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
# creates a string (BytesList) feature
def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
# creates a float (FloatList) feature
def _float_feature(value):
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def creat_tf_example(params, data_type=None):
    # writer = tf.python_io.TFRecordWriter(params["filename"])  # old API
    writer = tf.io.TFRecordWriter(params["filename"])  # new API
    if not data_type:  # image data
for index in range(params["num_example"]):
            # convert the image matrix into a string
image_raw = params["data_obj"][index].tostring()
            # convert one sample into an Example Protocol Buffer and write all the information into this structure
tf_example = creat_image_format(params["pixels"], params["label"], image_raw)
            writer.write(tf_example.SerializeToString())  # write one Example into the TFRecord file
writer.close()
def creat_image_format(pixels, labels, image_raw):  # for image data
tf_example = tf.train.Example(features=tf.train.Features(
feature={"image_raw": _bytes_feature(image_raw),
"label": _int64_feature(np.argmax(labels)),
"pixels": _int64_feature(pixels)
}
)
)
return tf_example
def TFrecord_image():
mnist = input_data.read_data_sets("./mnist_data/data", dtype=tf.uint8, one_hot=True)
images = mnist.train.images
labels = mnist.train.labels
    # image resolution
pixels = images.shape[1]
num_example = mnist.train.num_examples
filename = "./mnist_data/output.tfrecords" # 保存成 2进制文件
params = {"filename": filename,
"label": labels,
"pixels": pixels,
"num_example": num_example,
"data_obj": images
}
creat_tf_example(params)
TFrecord_image()
| true |
acc719da73712811d8f1e7d496779c917022f0bc | Python | carlsummer/AI | /neuron/neuron-multi_output/network.py | UTF-8 | 1,320 | 2.984375 | 3 | [] | no_license | import tensorflow as tf
# [None, 3072]
x = tf.placeholder(tf.float32, [None, 3072])
# [None],eg:[0,5,6,3]
y = tf.placeholder(tf.int64, [None])
# (3072,10)
w = tf.get_variable('w', [x.get_shape()[-1], 10],
initializer=tf.random_normal_initializer(0, 1))
# (10,)
b = tf.get_variable("b", [10], initializer=tf.constant_initializer(0.0))
# [None,3072]*[3072,10] = [None,10]
y_ = tf.matmul(x, w) + b
# mean square loss
"""平方差损失函数
# e^x / sum(e^x)
# [[0.01,0.09,....,0.03],[]]
p_y = tf.nn.softmax(y_)
# 5 ->[0,0,0,0,1,0,0,0,0,0]
y_one_hot = tf.one_hot(y, 10, dtype=tf.float32)
loss = tf.reduce_mean(tf.square(y_one_hot - p_y))
"""
"""交叉熵损失函数"""
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)
# y_ ->softmax
# y->one_hot
# loss = ylogy_
"""
# [None,1]
p_y_1 = tf.nn.sigmoid(y_)  # map y_ to a value in (0, 1)
# [None ,1]
y_reshaped = tf.reshape(y, (-1, 1))
y_reshaped_float = tf.cast(y_reshaped, tf.float32)
loss = tf.reduce_mean(tf.square(y_reshaped_float - p_y_1))
"""
# bool
predict = tf.argmax(y_, 1)  # take the argmax along the second dimension
# [1,0,1,1,1,0,0,0]
correct_prediction = tf.equal(predict, y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float64))
with tf.name_scope('train_op'):
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
| true |
98e90202d68812998b1f423eb4b6ae4f0883f549 | Python | CHESTERFIELD/simple-chat | /src/project/run_load_users.py | UTF-8 | 816 | 3.203125 | 3 | [] | no_license | """Simple chat script that load users to storage"""
import json
import os
from simple_chat_etcd_storage import Storage, User
def load_users_to_storage(file_path: str, etcd_storage: Storage):
"""Load list of users to storage from json file"""
with open(file_path, "r") as f:
for user in json.load(f):
etcd_storage.put_user(User(user['login'], user['full_name']))
return True
def get_etcd_storage_connection(host, port) -> 'Storage':
"""Creates new one etcd connection for working with storage"""
etcd_storage = Storage(host, port)
return etcd_storage
if __name__ == '__main__':
storage = get_etcd_storage_connection(
os.environ['ETCD_SERVER_HOST'], os.environ['ETCD_SERVER_PORT'])
load_users_to_storage(os.path.abspath('data/users.json'), storage)
| true |
7bd87d7bf2c0dcaf0233b56cd332360a6e6f714d | Python | FilippoRanza/rr-scheduler | /controller/controller/math_helper.py | UTF-8 | 1,131 | 3.53125 | 4 | [
"MIT"
] | permissive | #! /usr/bin/python
"""
Mathematical routines
to implement controller.py
complex numerical operations.
Here mainly to keep controller.py clean.
"""
import numpy as np
def fixed_norm(vec):
"""
Return the norm of the vector.
If the norm is zero return 1 instead
"""
norm = np.linalg.norm(vec)
if norm == 0:
norm = 1
return norm
def normed_variance(iterable):
"""
    Function takes as input an iterable (a generator for example),
    converts it into an ndarray and normalizes it.
Returns normalized vector variance.
"""
vec = list(iterable)
np_vec = np.array(vec)
np_vec = np_vec / fixed_norm(np_vec)
return np.var(np_vec)
def ceil(num):
"""
    ceil function but returns an integer
"""
return int(np.ceil(num))
def pythagoras(cat_a, cat_b=None):
"""
    Compute the length of the hypotenuse
    knowing the lengths of the two catheti.
    The return value is rounded up to the next integer (e.g. pythagoras(3, 4) == 5).
"""
if cat_b is None:
cat_b = cat_a
cat_a **= 2
cat_b **= 2
cat_c = cat_a + cat_b
cat_c = np.sqrt(cat_c)
return ceil(cat_c)
| true |
47f78cea083cda04f06f9d1d8b51e0cc7d91a9e6 | Python | rileyjohngibbs/adventOfCode2020 | /day03/solve.py | UTF-8 | 793 | 3.328125 | 3 | [] | no_license | from functools import reduce
TREE = '#'
OPEN = '.'
def main():
input_ = load_input()
print(part1(input_))
print(part2(input_))
def load_input():
with open('input.txt') as f:
return [line.strip() for line in f.readlines()]
def part1(input_: list):
return traverse(input_, 3, 1)
def part2(input_: list):
return reduce(
lambda x, y: x * traverse(input_, *y),
[(1, 1), (3, 1), (5, 1), (7, 1), (1, 2)],
1
)
def traverse(input_: list, right: int, down: int) -> int:
width = len(input_[0])
tree_count = 0
x, y = 0, 0
while y < len(input_):
if input_[y][x] == TREE:
tree_count += 1
x = (x + right) % width
y += down
return tree_count
if __name__ == '__main__':
main()
| true |
23e764e7143b8545ac69a76c1c5638c8bf99960e | Python | sambapython/raisingstarts | /Assignment/db.py | UTF-8 | 586 | 2.90625 | 3 | [] | no_license | import sqlite3
def get_con():
con = sqlite3.connect('db1.db')
return con
def create_tables():
con = get_con()
con.execute("create table persons(id int,name varchar(20))")
con.close()
def inser_dummpy_data():
con = get_con()
ids=range(10)
names=['n1','n2','n3','n4','n5','n6','n7','n8','n9','n10']
for id_per,name_per in zip(ids,names):
query = "insert into persons values(%d,'%s')"%(id_per,name_per)
con.execute(query)
con.commit()
con.close()
def browse(table_name):
con = get_con()
cur = con.cursor()
cur.execute("select * from %s"%table_name)
return cur.fetchall() | true |
7a2e559e92a41418b3a7672a240b50d73b465788 | Python | Riyanshi851/covid19tracker | /COVID-O-METER.py | UTF-8 | 89,590 | 2.84375 | 3 | [] | no_license | from tkinter import *
from tkinter import ttk
from PIL import ImageTk, Image
import time
from tkinter import messagebox
import requests
from bs4 import BeautifulSoup
import pandas as pd
from difflib import get_close_matches
#DATA EXTRACTION BY WEB SCRAPING USING BEAUTIFUL SOUP
#-------------------------------------------------------------------------------------------------------------------------------#
page_link=requests.get('https://www.worldometers.info/coronavirus/')
soup_object=BeautifulSoup(page_link.content, 'html.parser')
soup_object2=soup_object.find(id='main_table_countries_today')
rows=soup_object2.find_all('tr')
column=[v.get_text() for v in rows[0].find_all('th')]
covid_list=[]
country_list=[]
for i in range(1,len(rows)):
row_data=[x.get_text().replace("\n", "") for x in rows[i].find_all('td')]
individual_dic={}
country_list.append(row_data[1])
individual_dic['Country']=row_data[1].rstrip()
individual_dic['Total Cases']=row_data[2].rstrip()
individual_dic['New Cases']=row_data[3].rstrip()
individual_dic['Total Deaths']=row_data[4].rstrip()
individual_dic['Total Recovered']=row_data[6].rstrip()
covid_list.append(individual_dic)
covid_df=pd.DataFrame(covid_list, columns=['Country', 'Total Cases','New Cases','Total Deaths','Total Recovered'], index=country_list)
page_link_india=requests.get('https://en.wikipedia.org/wiki/Timeline_of_the_COVID-19_pandemic_in_India')
soup_object=BeautifulSoup(page_link_india.content, 'html.parser')
my_table=soup_object.find(id='covid19-container')
rows=my_table.find_all('tr')
list1=[]
for x in rows:
list1.append(x.get_text())
states_list=[]
india_covid_list=[]
for i in range(2,len(rows)-2):
row_heading=[x.get_text() for x in rows[i].find_all('th')]
states_list.append(row_heading)
total_stats=states_list[0]
states_list.pop(0)
data=[]
for i in range(2,len(rows)-2):
row_data=[x.get_text() for x in rows[i].find_all('td')]
data.append(row_data)
data.pop(0)
for i in range(0,36):
state_dic={}
state_dic['State']=states_list[i][0].replace("\n","").rstrip()
state_dic['Total Cases']=data[i][0].replace("\n","").rstrip().replace("[a]","").replace("[b]","")
state_dic['Deaths']=data[i][1].replace("\n","").rstrip().replace("[a]","").replace("[b]","")
state_dic['Total Recovered']=data[i][2].replace("\n","").rstrip().replace("[a]","").replace("[b]","")
state_dic['Active Cases']=data[i][3].replace("\n","").rstrip().replace("[a]","").replace("[b]","")
india_covid_list.append(state_dic)
state_dic1={}
state_dic1['State']="India"
state_dic1['Total Cases']=total_stats[1].replace("\n","").rstrip().replace("*","")
state_dic1['Deaths']=total_stats[2].replace("\n","").rstrip()
state_dic1['Total Recovered']=total_stats[3].replace("\n","").rstrip()
state_dic1['Active Cases']=total_stats[4].replace("\n","").rstrip()
india_covid_list.append(state_dic1)
india_covid_df=pd.DataFrame(india_covid_list, columns=['State', 'Total Cases','Deaths','Total Recovered','Active Cases'])
india_covid_df=india_covid_df.set_index('State')
#GUI
#------------------------------------------------------------------------------------------------------------------------------------#
root=Tk()
root.title("COVID-O-METER")
root.geometry("1000x1000")
root.iconbitmap(r"C:\Users\riyan\OneDrive\Desktop\COVID-O-METER\virus.ico")
covid_notebook=ttk.Notebook(root)
covid_notebook.pack()
live_updates_frame=Frame(covid_notebook, width=1000, height=1000, bg="#e3fdfd" )
live_updates_frame.pack(fill="both", expand=1)
local_frame=Frame(covid_notebook, width=1000, height=1000, bg="#e3fdfd")
local_frame.pack(fill="both", expand=1)
search_frame=Frame(covid_notebook, width=1000, height=1000, bg="#e3fdfd")
search_frame.pack(fill="both", expand=1)
answer_frame=Frame(covid_notebook, height=1000, width=1000, bg="#e3fdfd")
answer_frame.pack(fill="both", expand=1)
faq_frame=Frame(covid_notebook,height=1000, width=1000, bg="#e3fdfd")
faq_frame.pack(fill="both", expand=1)
covid_notebook.add(live_updates_frame, text="COVID-19 Live Updates")
covid_notebook.add(local_frame, text="COVID-19 India")
covid_notebook.add(faq_frame, text="Frequently Asked Questions")
covid_notebook.add(search_frame, text="Search")
covid_notebook.add(answer_frame, text="Search Results")
covid_notebook.hide(4)
#All continent windows
def north_america():
    north_america1=Toplevel() # child window of the main root, not a second Tk() instance
north_america1.geometry("400x400")
north_america1.iconbitmap(r"C:\Users\riyan\Downloads\favicon (2).ico")
north_america1.title("North America")
north_america1.config(bg="#d3f4ff")
north_america_updates_canvas=Canvas(north_america1, height=200, width=400, bg="#d3f4ff", bd=0, highlightthickness=0, relief='ridge')
north_america_updates_canvas.grid()
north_america_updates_canvas.create_rectangle(10,20,390,193, fill="#d3f4ff",outline="black", width=3)
north_america_total_cases=covid_df.loc['North America','Total Cases']
north_america_total_recovered=covid_df.loc['North America', 'Total Recovered']
north_america_total_deaths=covid_df.loc['North America' ,'Total Deaths']
north_america_new_cases=covid_df.loc['North America', 'New Cases']
north_america_updates_canvas.create_text(200,40, text="Total Cases: " + str(north_america_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
north_america_updates_canvas.create_text(200,80, text="Total Recovered: " + str(north_america_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
north_america_updates_canvas.create_text(200,120, text="Total Deaths: " + str(north_america_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
north_america_updates_canvas.create_text(200,160, text="New Cases: " + str(north_america_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
def selected_option_north_america(event):
option_selected=north_america_combo.get()
north_america_updates_canvas.delete("delete_text")
selected_country_total_cases=covid_df.loc[option_selected, 'Total Cases']
selected_country_total_recovered=covid_df.loc[option_selected, 'Total Recovered']
selected_country_total_deaths=covid_df.loc[option_selected, 'Total Deaths']
selected_country_new_cases=covid_df.loc[option_selected,'New Cases']
north_america_updates_canvas.create_text(200,50, text="Total Cases: " + str(selected_country_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
north_america_updates_canvas.create_text(200,90, text="Total Recovered: " + str(selected_country_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
north_america_updates_canvas.create_text(200,130, text="Total Deaths: " + str(selected_country_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
north_america_updates_canvas.create_text(200,170, text="New Cases: " + str(selected_country_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
north_america_countries_list=["North America","USA","Mexico","Canada","Dominican Republic","Panama","Guatemala", "Honduras","Haiti","El Salvador","Cuba","Costa Rica",
"Nicaragua","Jamaica","Martinique","Guadeloupe","Cayman Islands","Bermuda","Trinidad and Tobago","Bahamas","Aruba","Barbados","Sint Maarten","Saint Martin",
"St. Vincent Grenadines","Antigua and Barbuda","Grenada","Curaçao","Belize","Saint Lucia","Dominica","Saint Kitts and Nevis","Greenland","Turks and Caicos",
"Montserrat","British Virgin Islands","Caribbean Netherlands","St. Barth","Anguilla","Saint Pierre Miquelon"]
select_country_label=Label(north_america1, text="Select Country", bg="#d3f4ff", font=('Helvetica',12,'bold','underline'), fg="#084177")
select_country_label.grid(pady=20)
    north_america_combo=ttk.Combobox(north_america1, values=north_america_countries_list, state='readonly', width=30, height=6) # 'values' is the documented option name
north_america_combo.current(0)
north_america_combo.bind("<<ComboboxSelected>>", selected_option_north_america)
north_america_combo.grid()
    # no per-window mainloop: the main root's event loop already services this window
def south_america():
    south_america1=Toplevel() # child window of the main root, not a second Tk() instance
south_america1.geometry("400x400")
south_america1.iconbitmap(r"C:\Users\riyan\Downloads\favicon (2).ico")
south_america1.title("South America")
south_america1.config(bg="#d3f4ff")
south_america_updates_canvas=Canvas(south_america1, height=200, width=400, bg="#d3f4ff",bd=0, highlightthickness=0, relief='ridge')
south_america_updates_canvas.grid()
south_america_updates_canvas.create_rectangle(10,20,390,193, fill="#d3f4ff",outline="black", width=3)
south_america_total_cases=covid_df.loc['South America','Total Cases']
south_america_total_recovered=covid_df.loc['South America', 'Total Recovered']
south_america_total_deaths=covid_df.loc['South America' ,'Total Deaths']
south_america_new_cases=covid_df.loc['South America', 'New Cases']
south_america_updates_canvas.create_text(200,50, text="Total Cases: " + str(south_america_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
south_america_updates_canvas.create_text(200,90, text="Total Recovered: " + str(south_america_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
south_america_updates_canvas.create_text(200,130, text="Total Deaths: " + str(south_america_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
south_america_updates_canvas.create_text(200,170, text="New Cases: " + str(south_america_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
def selected_option_south_america(event):
option_selected=south_america_combo.get()
south_america_updates_canvas.delete("delete_text")
selected_country_total_cases=covid_df.loc[option_selected, 'Total Cases']
selected_country_total_recovered=covid_df.loc[option_selected, 'Total Recovered']
selected_country_total_deaths=covid_df.loc[option_selected, 'Total Deaths']
selected_country_new_cases=covid_df.loc[option_selected,'New Cases']
south_america_updates_canvas.create_text(200,50, text="Total Cases: " + str(selected_country_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
south_america_updates_canvas.create_text(200,90, text="Total Recovered: " + str(selected_country_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
south_america_updates_canvas.create_text(200,130, text="Total Deaths: " + str(selected_country_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
south_america_updates_canvas.create_text(200,170, text="New Cases: " + str(selected_country_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
south_america_countries_list=["South America","Brazil","Peru","Chile","Ecuador","Colombia","Argentina","Bolivia","Venezuela","Paraguay","Uruguay","French Guiana","Guyana","Suriname","Falkland Islands"]
select_country_label=Label(south_america1, text="Select Country", bg="#d3f4ff", font=('Helvetica',12,'bold','underline'), fg="#084177")
select_country_label.grid(pady=20)
    south_america_combo=ttk.Combobox(south_america1, values=south_america_countries_list, state='readonly',width=30,height=6)
south_america_combo.current(0)
south_america_combo.bind("<<ComboboxSelected>>", selected_option_south_america)
south_america_combo.grid()
def asia():
    asia1=Toplevel() # child window of the main root, not a second Tk() instance
asia1.geometry("400x400")
asia1.iconbitmap(r"C:\Users\riyan\Downloads\favicon (2).ico")
asia1.title("Asia")
asia1.config(bg="#d3f4ff")
asia_updates_canvas=Canvas(asia1, height=200, width=400, bg="#d3f4ff",bd=0, highlightthickness=0, relief='ridge')
asia_updates_canvas.grid()
asia_updates_canvas.create_rectangle(10,20,390,193, fill="#d3f4ff",outline="black", width=3)
asia_total_cases=covid_df.loc['Asia','Total Cases']
asia_total_recovered=covid_df.loc['Asia', 'Total Recovered']
asia_total_deaths=covid_df.loc['Asia' ,'Total Deaths']
asia_new_cases=covid_df.loc['Asia', 'New Cases']
asia_updates_canvas.create_text(200,50, text="Total Cases: " + str(asia_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
asia_updates_canvas.create_text(200,90, text="Total Recovered: " + str(asia_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
asia_updates_canvas.create_text(200,130, text="Total Deaths: " + str(asia_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
asia_updates_canvas.create_text(200,170, text="New Cases: " + str(asia_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
def selected_option_asia(event):
option_selected=asia_combo.get()
asia_updates_canvas.delete("delete_text")
selected_country_total_cases=covid_df.loc[option_selected, 'Total Cases']
selected_country_total_recovered=covid_df.loc[option_selected, 'Total Recovered']
selected_country_total_deaths=covid_df.loc[option_selected, 'Total Deaths']
selected_country_new_cases=covid_df.loc[option_selected,'New Cases']
asia_updates_canvas.create_text(200,50, text="Total Cases: " + str(selected_country_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
asia_updates_canvas.create_text(200,90, text="Total Recovered: " + str(selected_country_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
asia_updates_canvas.create_text(200,130, text="Total Deaths: " + str(selected_country_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
asia_updates_canvas.create_text(200,170, text="New Cases: " + str(selected_country_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
asia_countries_list=["Asia","India","Iran","Turkey","Saudi Arabia","Pakistan","China","Qatar","Bangladesh","UAE","Singapore","Kuwait","Indonesia","Philippines","Afghanistan","Israel",
"Oman","Japan","Bahrain","Iraq","Armenia","Kazakhstan","S. Korea","Malaysia","Azerbaijan","Tajikistan","Uzbekistan","Nepal","Thailand","Kyrgyzstan","Maldives","Sri Lanka","Lebanon",
"Hong Kong","Cyprus","Jordan","Georgia","Yemen","Palestine","Taiwan","Vietnam","Myanmar","Mongolia","Syria","Brunei ","Cambodia","Bhutan","Macao","Timor-Leste","Laos"]
select_country_label=Label(asia1, text="Select Country", bg="#d3f4ff", font=('Helvetica',12,'bold','underline'), fg="#084177")
select_country_label.grid(pady=20)
    asia_combo=ttk.Combobox(asia1, values=asia_countries_list, state='readonly', width=30, height=6)
asia_combo.current(0)
asia_combo.bind("<<ComboboxSelected>>", selected_option_asia)
asia_combo.grid()
def oceania():
    oceania1=Toplevel() # child window of the main root, not a second Tk() instance
oceania1.geometry("400x400")
oceania1.iconbitmap(r"C:\Users\riyan\Downloads\favicon (2).ico")
oceania1.title("Oceania")
oceania1.config(bg="#d3f4ff")
oceania_updates_canvas=Canvas(oceania1, height=200, width=400, bg="#d3f4ff",bd=0, highlightthickness=0, relief='ridge')
oceania_updates_canvas.grid()
oceania_updates_canvas.create_rectangle(10,20,390,193, fill="#d3f4ff",outline="black", width=3)
oceania_total_cases=covid_df.loc['Oceania','Total Cases']
oceania_total_recovered=covid_df.loc['Oceania', 'Total Recovered']
oceania_total_deaths=covid_df.loc['Oceania' ,'Total Deaths']
oceania_new_cases=covid_df.loc['Oceania', 'New Cases']
oceania_updates_canvas.create_text(200,50, text="Total Cases: " + str(oceania_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
oceania_updates_canvas.create_text(200,90, text="Total Recovered: " + str(oceania_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
oceania_updates_canvas.create_text(200,130, text="Total Deaths: " + str(oceania_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
oceania_updates_canvas.create_text(200,170, text="New Cases: " + str(oceania_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
def selected_option_oceania(event):
option_selected=oceania_combo.get()
oceania_updates_canvas.delete("delete_text")
selected_country_total_cases=covid_df.loc[option_selected, 'Total Cases']
selected_country_total_recovered=covid_df.loc[option_selected, 'Total Recovered']
selected_country_total_deaths=covid_df.loc[option_selected, 'Total Deaths']
selected_country_new_cases=covid_df.loc[option_selected,'New Cases']
oceania_updates_canvas.create_text(200,50, text="Total Cases: " + str(selected_country_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
oceania_updates_canvas.create_text(200,90, text="Total Recovered: " + str(selected_country_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
oceania_updates_canvas.create_text(200,130, text="Total Deaths: " + str(selected_country_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
oceania_updates_canvas.create_text(200,170, text="New Cases: " + str(selected_country_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
oceania_countries_list=["Oceania","Australia","New Zealand","French Polynesia","New Caledonia","Fiji","Papua New Guinea"]
select_country_label=Label(oceania1, text="Select Country", bg="#d3f4ff", font=('Helvetica',12,'bold','underline'), fg="#084177")
select_country_label.grid(pady=20)
    oceania_combo=ttk.Combobox(oceania1, values=oceania_countries_list, state='readonly', width=30, height=6)
oceania_combo.current(0)
oceania_combo.bind("<<ComboboxSelected>>", selected_option_oceania)
oceania_combo.grid()
def europe():
    europe1=Toplevel() # child window of the main root, not a second Tk() instance
europe1.geometry("400x400")
europe1.iconbitmap(r"C:\Users\riyan\Downloads\favicon (2).ico")
europe1.title("Europe")
europe1.config(bg="#d3f4ff")
europe_updates_canvas=Canvas(europe1, height=200, width=400, bg="#d3f4ff",bd=0, highlightthickness=0, relief='ridge')
europe_updates_canvas.grid()
europe_updates_canvas.create_rectangle(10,20,390,193, fill="#d3f4ff",outline="black", width=3)
europe_total_cases=covid_df.loc['Europe','Total Cases']
europe_total_recovered=covid_df.loc['Europe', 'Total Recovered']
europe_total_deaths=covid_df.loc['Europe' ,'Total Deaths']
europe_new_cases=covid_df.loc['Europe', 'New Cases']
europe_updates_canvas.create_text(200,50, text="Total Cases: " + str(europe_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
europe_updates_canvas.create_text(200,90, text="Total Recovered: " + str(europe_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
europe_updates_canvas.create_text(200,130, text="Total Deaths: " + str(europe_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
europe_updates_canvas.create_text(200,170, text="New Cases: " + str(europe_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
def selected_option_europe(event):
option_selected=europe_combo.get()
europe_updates_canvas.delete("delete_text")
selected_country_total_cases=covid_df.loc[option_selected, 'Total Cases']
selected_country_total_recovered=covid_df.loc[option_selected, 'Total Recovered']
selected_country_total_deaths=covid_df.loc[option_selected, 'Total Deaths']
selected_country_new_cases=covid_df.loc[option_selected,'New Cases']
europe_updates_canvas.create_text(200,50, text="Total Cases: " + str(selected_country_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
europe_updates_canvas.create_text(200,90, text="Total Recovered: " + str(selected_country_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
europe_updates_canvas.create_text(200,130, text="Total Deaths: " + str(selected_country_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
europe_updates_canvas.create_text(200,170, text="New Cases: " + str(selected_country_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
europe_countries_list=["Europe","Russia","Spain","UK","Italy","Germany","France","Belgium","Belarus","Netherlands","Sweden","Portugal","Switzerland","Ukraine",
"Poland","Ireland","Romania","Austria","Denmark","Serbia","Moldova","Czechia","Norway","Finland","Luxembourg","Hungary","North Macedonia","Greece","Bulgaria",
"Bosnia and Herzegovina","Croatia","Estonia","Iceland","Lithuania","Slovakia","Slovenia","Albania","Latvia","Andorra","San Marino","Malta","Channel Islands",
"Isle of Man","Montenegro","Faeroe Islands","Gibraltar","Monaco","Liechtenstein","Vatican City"]
select_country_label=Label(europe1, text="Select Country", bg="#d3f4ff", font=('Helvetica',12,'bold','underline'), fg="#084177")
select_country_label.grid(pady=20)
    europe_combo=ttk.Combobox(europe1, values=europe_countries_list, state='readonly', width=30, height=6)
europe_combo.current(0)
europe_combo.bind("<<ComboboxSelected>>", selected_option_europe)
europe_combo.grid()
def africa():
    africa1=Toplevel() # child window of the main root, not a second Tk() instance
africa1.geometry("400x400")
africa1.iconbitmap(r"C:\Users\riyan\Downloads\favicon (2).ico")
africa1.title("Africa")
africa1.config(bg="#d3f4ff")
africa_updates_canvas=Canvas(africa1, height=200, width=400, bg="#d3f4ff",bd=0, highlightthickness=0, relief='ridge')
africa_updates_canvas.grid()
africa_updates_canvas.create_rectangle(10,20,390,193, fill="#d3f4ff",outline="black", width=3)
africa_total_cases=covid_df.loc['Africa','Total Cases']
africa_total_recovered=covid_df.loc['Africa', 'Total Recovered']
africa_total_deaths=covid_df.loc['Africa' ,'Total Deaths']
africa_new_cases=covid_df.loc['Africa', 'New Cases']
africa_updates_canvas.create_text(200,50, text="Total Cases: " + str(africa_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
africa_updates_canvas.create_text(200,90, text="Total Recovered: " + str(africa_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
africa_updates_canvas.create_text(200,130, text="Total Deaths: " + str(africa_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
africa_updates_canvas.create_text(200,170, text="New Cases: " + str(africa_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
def selected_option_africa(event):
option_selected=africa_combo.get()
africa_updates_canvas.delete("delete_text")
selected_country_total_cases=covid_df.loc[option_selected, 'Total Cases']
selected_country_total_recovered=covid_df.loc[option_selected, 'Total Recovered']
selected_country_total_deaths=covid_df.loc[option_selected, 'Total Deaths']
selected_country_new_cases=covid_df.loc[option_selected,'New Cases']
africa_updates_canvas.create_text(200,50, text="Total Cases: " + str(selected_country_total_cases), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
africa_updates_canvas.create_text(200,90, text="Total Recovered: " + str(selected_country_total_recovered), font=('Helvetica', 18, 'bold'), fill="green", tag="delete_text")
africa_updates_canvas.create_text(200,130, text="Total Deaths: " + str(selected_country_total_deaths), font=('Helvetica', 18, 'bold'), fill="black", tag="delete_text")
africa_updates_canvas.create_text(200,170, text="New Cases: " + str(selected_country_new_cases), font=('Helvetica', 18, 'bold'), fill="red",tag="delete_text")
africa_countries_list=["Africa","South Africa","Egypt","Nigeria","Algeria","Ghana","Morocco","Cameroon","Sudan","Senegal","Djibouti","Guinea","DRC","Ivory Coast","Gabon","Kenya","Somalia",
"Ethiopia","Mayotte","CAR","South Sudan","Mali","Guinea-Bissau","Equatorial Guinea","Zambia","Madagascar","Tunisia","Mauritania","Sierra Leone","Niger","Burkina Faso","Chad",
"Congo","Uganda","Cabo Verde","Sao Tome and Principe","Tanzania","Togo","Réunion","Rwanda","Malawi","Mozambique","Liberia","Mauritius","Eswatini","Benin","Zimbabwe","Libya",
"Comoros","Angola","Burundi","Botswana","Eritrea","Namibia","Gambia","Seychelles","Western Sahara","Lesotho"]
select_country_label=Label(africa1, text="Select Country", bg="#d3f4ff", font=('Helvetica',12,'bold','underline'), fg="#084177")
select_country_label.grid(pady=20)
    africa_combo=ttk.Combobox(africa1, values=africa_countries_list, state='readonly', width=30, height=6)
africa_combo.current(0)
africa_combo.bind("<<ComboboxSelected>>", selected_option_africa)
africa_combo.grid()
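# The six continent handlers above differ only in window title, covid_df row,
# and country list. A single generic builder could replace them; the names
# region_stats and open_region_window below are illustrative, not from the original:

```python
STAT_COLUMNS = ("Total Cases", "Total Recovered", "Total Deaths", "New Cases")
STAT_COLORS = {"Total Recovered": "green", "New Cases": "red"}

def region_stats(df, region):
    # Pull the four headline figures for a region from a covid_df-style frame
    return [(col, df.loc[region, col]) for col in STAT_COLUMNS]

def open_region_window(title, regions, df):
    # Generic version of the six near-identical continent windows:
    # one Toplevel (not a second Tk() root), one canvas, one combobox.
    import tkinter as tk
    from tkinter import ttk
    win = tk.Toplevel()
    win.geometry("400x400")
    win.title(title)
    win.config(bg="#d3f4ff")
    canvas = tk.Canvas(win, height=200, width=400, bg="#d3f4ff",
                       bd=0, highlightthickness=0, relief="ridge")
    canvas.grid()
    canvas.create_rectangle(10, 20, 390, 193, outline="black", width=3)

    def draw(region):
        # Clear the previous numbers and redraw the four stat lines
        canvas.delete("stats")
        for i, (col, value) in enumerate(region_stats(df, region)):
            canvas.create_text(200, 40 + 40 * i, text=f"{col}: {value}",
                               font=("Helvetica", 18, "bold"),
                               fill=STAT_COLORS.get(col, "black"), tag="stats")

    combo = ttk.Combobox(win, values=regions, state="readonly", width=30)
    combo.current(0)
    combo.bind("<<ComboboxSelected>>", lambda e: draw(combo.get()))
    combo.grid(pady=20)
    draw(regions[0])
```

# e.g. the body of asia() would reduce to:
# open_region_window("Asia", asia_countries_list, covid_df)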
#world_canvas
world_canvas=Canvas(live_updates_frame, height=200, width=1000)
world_canvas.grid(row=0, column=0, columnspan=3, pady=15)
world_image=Image.open(r"C:\Users\riyan\OneDrive\Desktop\gui python(tkinter)\world-map.jpg")
world_image_resize=world_image.resize((1000,300), Image.LANCZOS) # Image.ANTIALIAS was removed in Pillow 10; LANCZOS is the same filter
final_world_image=ImageTk.PhotoImage(world_image_resize)
world_canvas.create_image(500,100,image=final_world_image)
world_canvas.create_text(500,25, text="WORLD", font=('Helvetica', 24, 'bold', 'underline'), fill="#1f4068")
total_cases_world=covid_df.loc['World', 'Total Cases']
new_cases_world=covid_df.loc['World','New Cases']
total_deaths_world=covid_df.loc['World', 'Total Deaths']
total_recovered_world=covid_df.loc['World','Total Recovered']
world_canvas.create_text(325, 90, text="Total Cases: " + str(total_cases_world), font=('Helvetica',18,'bold'), fill='black')
world_canvas.create_text(325, 140, text="New Cases: " + str(new_cases_world), font=('Helvetica',18,'bold'), fill='red')
world_canvas.create_text(700, 90, text="Total Deaths: " + str(total_deaths_world), font=('Helvetica',18,'bold'), fill='black')
world_canvas.create_text(700, 140, text="Total Recovered: " + str(total_recovered_world), font=('Helvetica',18,'bold'), fill='green')
#north america canvas
north_america_canvas=Canvas(live_updates_frame, height=250, width=200, bg="#e3fdfd")
north_america_canvas.grid(row=1, column=0, padx=0)
north_america_image=Image.open(r"C:\Users\riyan\OneDrive\Desktop\gui python(tkinter)\north-america.png")
north_america_resize_image=north_america_image.resize((220,200), Image.LANCZOS)
final_north_america_image=ImageTk.PhotoImage(north_america_resize_image)
north_america_canvas.create_image(0,0, anchor=NW , image=final_north_america_image)
def on_enter_north_america(event):
north_america_button['foreground']='blue'
def on_leave_north_america(event):
north_america_button['foreground']='black'
north_america_button=Button(live_updates_frame, text="North America", borderwidth=0, font=('Helvetica', 16, 'bold', 'underline'), bg="#e3fdfd" , command=north_america )
north_america_button_window=north_america_canvas.create_window(100, 225 , window=north_america_button)
north_america_button.bind("<Enter>", on_enter_north_america)
north_america_button.bind("<Leave>", on_leave_north_america)
#South America canvas
south_america_canvas=Canvas(live_updates_frame, height=250, width=200, bg="#e3fdfd")
south_america_canvas.grid(row=1, column=1, padx=0)
south_america_image=Image.open(r"C:\Users\riyan\OneDrive\Desktop\gui python(tkinter)\south-america.png")
south_america_resize_image=south_america_image.resize((220,200), Image.LANCZOS)
final_south_america_image=ImageTk.PhotoImage(south_america_resize_image)
south_america_canvas.create_image(0,0, anchor=NW , image=final_south_america_image)
def on_enter_south_america(event):
south_america_button['foreground']='blue'
def on_leave_south_america(event):
south_america_button['foreground']='black'
south_america_button=Button(live_updates_frame, text="South America", borderwidth=0, font=('Helvetica', 16, 'bold', 'underline'), bg="#e3fdfd", command=south_america )
south_america_button_window=south_america_canvas.create_window(100, 225 , window=south_america_button)
south_america_button.bind("<Enter>", on_enter_south_america)
south_america_button.bind("<Leave>", on_leave_south_america)
#Asia Canvas
asia_canvas=Canvas(live_updates_frame, height=250, width=200, bg="#e3fdfd")
asia_canvas.grid(row=1, column=2, padx=0)
asia_image=Image.open(r"C:\Users\riyan\OneDrive\Desktop\gui python(tkinter)\asia.png")
asia_resize_image=asia_image.resize((220,190), Image.LANCZOS)
final_asia_image=ImageTk.PhotoImage(asia_resize_image)
asia_canvas.create_image(0,0, anchor=NW , image=final_asia_image)
def on_enter_asia(event):
asia_button['foreground']='blue'
def on_leave_asia(event):
asia_button['foreground']='black'
asia_button=Button(live_updates_frame, text="Asia", borderwidth=0, font=('Helvetica', 16, 'bold', 'underline'), bg="#e3fdfd", command=asia )
asia_button_window=asia_canvas.create_window(100, 225 , window=asia_button)
asia_button.bind("<Enter>", on_enter_asia)
asia_button.bind("<Leave>", on_leave_asia)
#Africa Canvas
africa_canvas=Canvas(live_updates_frame, height=250, width=200, bg="#e3fdfd")
africa_canvas.grid(row=2, column=0, padx=0, pady=15)
africa_image=Image.open(r"C:\Users\riyan\Downloads\africa.png")
africa_resize_image=africa_image.resize((200,200), Image.LANCZOS)
final_africa_image=ImageTk.PhotoImage(africa_resize_image)
africa_canvas.create_image(0,0, anchor=NW , image=final_africa_image)
def on_enter_africa(event):
africa_button['foreground']='blue'
def on_leave_africa(event):
africa_button['foreground']='black'
africa_button=Button(live_updates_frame, text="Africa", borderwidth=0, font=('Helvetica', 16, 'bold', 'underline'), bg="#e3fdfd", command=africa)
africa_button_window=africa_canvas.create_window(100, 225 , window=africa_button)
africa_button.bind("<Enter>", on_enter_africa)
africa_button.bind("<Leave>", on_leave_africa)
#oceania Canvas
oceania_canvas=Canvas(live_updates_frame, height=250, width=200, bg="#e3fdfd")
oceania_canvas.grid(row=2, column=1, padx=0, pady=15)
oceania_image=Image.open(r"C:\Users\riyan\OneDrive\Desktop\gui python(tkinter)\australia.jpg")
oceania_resize_image=oceania_image.resize((220,200), Image.LANCZOS)
final_oceania_image=ImageTk.PhotoImage(oceania_resize_image)
oceania_canvas.create_image(0,0, anchor=NW , image=final_oceania_image)
def on_enter_oceania(event):
oceania_button['foreground']='blue'
def on_leave_oceania(event):
oceania_button['foreground']='black'
oceania_button=Button(live_updates_frame, text="Oceania", borderwidth=0, font=('Helvetica', 16, 'bold', 'underline'), bg="#e3fdfd",command=oceania )
oceania_button_window=oceania_canvas.create_window(100, 225 , window=oceania_button)
oceania_button.bind("<Enter>", on_enter_oceania)
oceania_button.bind("<Leave>", on_leave_oceania)
#Europe Canvas
europe_canvas=Canvas(live_updates_frame, height=250, width=200, bg="#e3fdfd")
europe_canvas.grid(row=2, column=2, padx=0, pady=15)
europe_image=Image.open(r"C:\Users\riyan\OneDrive\Desktop\gui python(tkinter)\europe.png")
europe_resize_image=europe_image.resize((210,200), Image.LANCZOS)
final_europe_image=ImageTk.PhotoImage(europe_resize_image)
europe_canvas.create_image(0,0, anchor=NW , image=final_europe_image)
def on_enter_europe(event):
europe_button['foreground']='blue'
def on_leave_europe(event):
europe_button['foreground']='black'
europe_button=Button(live_updates_frame, text="Europe", borderwidth=0, font=('Helvetica', 16, 'bold', 'underline'), bg="#e3fdfd", command=europe)
europe_button_window=europe_canvas.create_window(100, 225 , window=europe_button)
europe_button.bind("<Enter>", on_enter_europe)
europe_button.bind("<Leave>", on_leave_europe)
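# Each continent button above repeats an identical on_enter/on_leave pair.
# One helper (bind_hover is an illustrative name) can attach both bindings:

```python
def bind_hover(widget, enter_color="blue", leave_color="black"):
    # Recolor the widget's text on mouse-over and restore it on mouse-out
    widget.bind("<Enter>", lambda e: widget.configure(foreground=enter_color))
    widget.bind("<Leave>", lambda e: widget.configure(foreground=leave_color))
```

# e.g. bind_hover(asia_button) would replace on_enter_asia/on_leave_asia and both bind calls.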
#COVID-19 INDIA
def goa():
blue_bullet()
india_canvas.delete("delete")
goa_button=Button(local_frame, image=red_small_bullet_image, bg="#e3fdfd", command=goa, borderwidth=0)
india_canvas.create_window(300,580, window=goa_button)
total_cases=india_covid_df.loc['Goa', 'Total Cases']
deaths=india_covid_df.loc['Goa','Deaths']
recovered=india_covid_df.loc['Goa','Total Recovered']
active_cases=india_covid_df.loc['Goa','Active Cases']
india_canvas.create_text(800,30, text="State: Goa", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def himachal():
blue_bullet()
india_canvas.delete("delete")
hp_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=himachal)
india_canvas.create_window(375,145, window=hp_button)
total_cases=india_covid_df.loc['Himachal Pradesh', 'Total Cases']
deaths=india_covid_df.loc['Himachal Pradesh','Deaths']
recovered=india_covid_df.loc['Himachal Pradesh','Total Recovered']
active_cases=india_covid_df.loc['Himachal Pradesh','Active Cases']
india_canvas.create_text(800,30, text="State: Himachal Pradesh", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def punjab():
blue_bullet()
india_canvas.delete("delete")
punjab_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=punjab)
india_canvas.create_window(335,180, window=punjab_button)
total_cases=india_covid_df.loc['Punjab', 'Total Cases']
deaths=india_covid_df.loc['Punjab','Deaths']
recovered=india_covid_df.loc['Punjab','Total Recovered']
active_cases=india_covid_df.loc['Punjab','Active Cases']
india_canvas.create_text(800,30, text="State: Punjab", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def haryana():
blue_bullet()
india_canvas.delete("delete")
haryana_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=haryana)
india_canvas.create_window(355,230, window=haryana_button)
total_cases=india_covid_df.loc['Haryana', 'Total Cases']
deaths=india_covid_df.loc['Haryana','Deaths']
recovered=india_covid_df.loc['Haryana','Total Recovered']
active_cases=india_covid_df.loc['Haryana','Active Cases']
india_canvas.create_text(800,30, text="State: Haryana", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def uttarakhand():
blue_bullet()
india_canvas.delete("delete")
uttarakhand_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=uttarakhand)
india_canvas.create_window(420,190, window=uttarakhand_button)
total_cases=india_covid_df.loc['Uttarakhand', 'Total Cases']
deaths=india_covid_df.loc['Uttarakhand','Deaths']
recovered=india_covid_df.loc['Uttarakhand','Total Recovered']
active_cases=india_covid_df.loc['Uttarakhand','Active Cases']
india_canvas.create_text(800,30, text="State: Uttarakhand", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def uttar_pradesh():
blue_bullet()
india_canvas.delete("delete")
up_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=uttar_pradesh)
india_canvas.create_window(450,280, window=up_button)
total_cases=india_covid_df.loc['Uttar Pradesh', 'Total Cases']
deaths=india_covid_df.loc['Uttar Pradesh','Deaths']
recovered=india_covid_df.loc['Uttar Pradesh','Total Recovered']
active_cases=india_covid_df.loc['Uttar Pradesh','Active Cases']
india_canvas.create_text(800,30, text="State: Uttar Pradesh", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def rajasthan():
blue_bullet()
india_canvas.delete("delete")
raja_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=rajasthan)
india_canvas.create_window(290,290, window=raja_button)
total_cases=india_covid_df.loc['Rajasthan', 'Total Cases']
deaths=india_covid_df.loc['Rajasthan','Deaths']
recovered=india_covid_df.loc['Rajasthan','Total Recovered']
active_cases=india_covid_df.loc['Rajasthan','Active Cases']
india_canvas.create_text(800,30, text="State: Rajasthan", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def bihar():
blue_bullet()
india_canvas.delete("delete")
bihar_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=bihar)
india_canvas.create_window(570,320, window=bihar_button)
total_cases=india_covid_df.loc['Bihar', 'Total Cases']
deaths=india_covid_df.loc['Bihar','Deaths']
recovered=india_covid_df.loc['Bihar','Total Recovered']
active_cases=india_covid_df.loc['Bihar','Active Cases']
india_canvas.create_text(800,30, text="State: Bihar", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def sikkim():
blue_bullet()
india_canvas.delete("delete")
sikkim_button=Button(local_frame, image=red_small_bullet_image, borderwidth=0, bg="#e3fdfd", command=sikkim)
india_canvas.create_window(627,265, window=sikkim_button)
total_cases=india_covid_df.loc['Sikkim', 'Total Cases']
deaths=india_covid_df.loc['Sikkim','Deaths']
recovered=india_covid_df.loc['Sikkim','Total Recovered']
active_cases=india_covid_df.loc['Sikkim','Active Cases']
india_canvas.create_text(800,30, text="State: Sikkim", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def arunachal_pradesh():
blue_bullet()
india_canvas.delete("delete")
arunachal_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=arunachal_pradesh)
india_canvas.create_window(760,250, window=arunachal_button)
total_cases=india_covid_df.loc['Arunachal Pradesh', 'Total Cases']
deaths=india_covid_df.loc['Arunachal Pradesh','Deaths']
recovered=india_covid_df.loc['Arunachal Pradesh','Total Recovered']
active_cases=india_covid_df.loc['Arunachal Pradesh','Active Cases']
india_canvas.create_text(800,30, text="State: Arunachal Pradesh", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def assam():
blue_bullet()
india_canvas.delete("delete")
assam_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=assam)
india_canvas.create_window(730,300, window=assam_button)
total_cases=india_covid_df.loc['Assam', 'Total Cases']
deaths=india_covid_df.loc['Assam','Deaths']
recovered=india_covid_df.loc['Assam','Total Recovered']
active_cases=india_covid_df.loc['Assam','Active Cases']
india_canvas.create_text(800,30, text="State: Assam", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def nagaland():
blue_bullet()
india_canvas.delete("delete")
nagaland_button=Button(local_frame, image=red_medium_bullet_image, borderwidth=0, bg="#e3fdfd", command=nagaland)
india_canvas.create_window(766,305, window=nagaland_button)
total_cases=india_covid_df.loc['Nagaland', 'Total Cases']
deaths=india_covid_df.loc['Nagaland','Deaths']
recovered=india_covid_df.loc['Nagaland','Total Recovered']
active_cases=india_covid_df.loc['Nagaland','Active Cases']
india_canvas.create_text(800,30, text="State: Nagaland", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def manipur():
blue_bullet()
india_canvas.delete("delete")
manipur_button=Button(local_frame, image=red_medium_bullet_image, borderwidth=0, bg="#e3fdfd", command=manipur)
india_canvas.create_window(750,340, window=manipur_button)
total_cases=india_covid_df.loc['Manipur', 'Total Cases']
deaths=india_covid_df.loc['Manipur','Deaths']
recovered=india_covid_df.loc['Manipur','Total Recovered']
active_cases=india_covid_df.loc['Manipur','Active Cases']
india_canvas.create_text(800,30, text="State: Manipur", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def mizoram():
blue_bullet()
india_canvas.delete("delete")
mizoram_button=Button(local_frame, image=red_medium_bullet_image, borderwidth=0, bg="#e3fdfd", command=mizoram)
india_canvas.create_window(725,375, window=mizoram_button)
total_cases=india_covid_df.loc['Mizoram', 'Total Cases']
deaths=india_covid_df.loc['Mizoram','Deaths']
recovered=india_covid_df.loc['Mizoram','Total Recovered']
active_cases=india_covid_df.loc['Mizoram','Active Cases']
india_canvas.create_text(800,30, text="State: Mizoram", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def tripura():
blue_bullet()
india_canvas.delete("delete")
tripura_button=Button(local_frame, image=red_small_bullet_image, borderwidth=0, bg="#e3fdfd", command=tripura)
india_canvas.create_window(696,370, window=tripura_button)
total_cases=india_covid_df.loc['Tripura', 'Total Cases']
deaths=india_covid_df.loc['Tripura','Deaths']
recovered=india_covid_df.loc['Tripura','Total Recovered']
active_cases=india_covid_df.loc['Tripura','Active Cases']
india_canvas.create_text(800,30, text="State: Tripura", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def meghalaya():
blue_bullet()
india_canvas.delete("delete")
meghalaya_button=Button(local_frame, image=red_medium_bullet_image, borderwidth=0, bg="#e3fdfd", command=meghalaya)
india_canvas.create_window(680,322, window=meghalaya_button)
total_cases=india_covid_df.loc['Meghalaya', 'Total Cases']
deaths=india_covid_df.loc['Meghalaya','Deaths']
recovered=india_covid_df.loc['Meghalaya','Total Recovered']
active_cases=india_covid_df.loc['Meghalaya','Active Cases']
india_canvas.create_text(800,30, text="State: Meghalaya", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def maharashtra():
blue_bullet()
india_canvas.delete("delete")
maharashtra_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=maharashtra)
india_canvas.create_window(350,470, window=maharashtra_button)
total_cases=india_covid_df.loc['Maharashtra', 'Total Cases']
deaths=india_covid_df.loc['Maharashtra','Deaths']
recovered=india_covid_df.loc['Maharashtra','Total Recovered']
active_cases=india_covid_df.loc['Maharashtra','Active Cases']
india_canvas.create_text(800,30, text="State: Maharashtra", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def gujarat():
blue_bullet()
india_canvas.delete("delete")
gujarat_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=gujarat)
india_canvas.create_window(250,390, window=gujarat_button)
total_cases=india_covid_df.loc['Gujarat', 'Total Cases']
deaths=india_covid_df.loc['Gujarat','Deaths']
recovered=india_covid_df.loc['Gujarat','Total Recovered']
active_cases=india_covid_df.loc['Gujarat','Active Cases']
india_canvas.create_text(800,30, text="State: Gujarat", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def madhya_pradesh():
blue_bullet()
india_canvas.delete("delete")
mp_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=madhya_pradesh)
india_canvas.create_window(400,380, window=mp_button)
total_cases=india_covid_df.loc['Madhya Pradesh', 'Total Cases']
deaths=india_covid_df.loc['Madhya Pradesh','Deaths']
recovered=india_covid_df.loc['Madhya Pradesh','Total Recovered']
active_cases=india_covid_df.loc['Madhya Pradesh','Active Cases']
india_canvas.create_text(800,30, text="State: Madhya Pradesh", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def chhattisgarh():
blue_bullet()
india_canvas.delete("delete")
chhattisgarh_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=chhattisgarh)
india_canvas.create_window(480,430, window=chhattisgarh_button)
total_cases=india_covid_df.loc['Chhattisgarh', 'Total Cases']
deaths=india_covid_df.loc['Chhattisgarh','Deaths']
recovered=india_covid_df.loc['Chhattisgarh','Total Recovered']
active_cases=india_covid_df.loc['Chhattisgarh','Active Cases']
india_canvas.create_text(800,30, text="State: Chhattisgarh", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def jharkhand():
blue_bullet()
india_canvas.delete("delete")
jharkhand_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=jharkhand)
india_canvas.create_window(555,380, window=jharkhand_button)
total_cases=india_covid_df.loc['Jharkhand', 'Total Cases']
deaths=india_covid_df.loc['Jharkhand','Deaths']
recovered=india_covid_df.loc['Jharkhand','Total Recovered']
active_cases=india_covid_df.loc['Jharkhand','Active Cases']
india_canvas.create_text(800,30, text="State: Jharkhand", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def west_bengal():
blue_bullet()
india_canvas.delete("delete")
wb_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=west_bengal)
india_canvas.create_window(610,380, window=wb_button)
total_cases=india_covid_df.loc['West Bengal', 'Total Cases']
deaths=india_covid_df.loc['West Bengal','Deaths']
recovered=india_covid_df.loc['West Bengal','Total Recovered']
active_cases=india_covid_df.loc['West Bengal','Active Cases']
india_canvas.create_text(800,30, text="State: West Bengal", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def odisha():
blue_bullet()
india_canvas.delete("delete")
odisha_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=odisha)
india_canvas.create_window(550,440, window=odisha_button)
total_cases=india_covid_df.loc['Odisha', 'Total Cases']
deaths=india_covid_df.loc['Odisha','Deaths']
recovered=india_covid_df.loc['Odisha','Total Recovered']
active_cases=india_covid_df.loc['Odisha','Active Cases']
india_canvas.create_text(800,30, text="State: Odisha", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def karnatka():
blue_bullet()
india_canvas.delete("delete")
karnatka_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=karnatka)
india_canvas.create_window(340,600, window=karnatka_button)
total_cases=india_covid_df.loc['Karnataka', 'Total Cases']
deaths=india_covid_df.loc['Karnataka','Deaths']
recovered=india_covid_df.loc['Karnataka','Total Recovered']
active_cases=india_covid_df.loc['Karnataka','Active Cases']
india_canvas.create_text(800,30, text="State: Karnataka", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def telangana():
blue_bullet()
india_canvas.delete("delete")
telangana_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=telangana)
india_canvas.create_window(410,520, window=telangana_button)
total_cases=india_covid_df.loc['Telangana', 'Total Cases']
deaths=india_covid_df.loc['Telangana','Deaths']
recovered=india_covid_df.loc['Telangana','Total Recovered']
active_cases=india_covid_df.loc['Telangana','Active Cases']
india_canvas.create_text(800,30, text="State: Telangana", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def andhra_pradesh():
blue_bullet()
india_canvas.delete("delete")
ap_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=andhra_pradesh)
india_canvas.create_window(415,580, window=ap_button)
total_cases=india_covid_df.loc['Andhra Pradesh', 'Total Cases']
deaths=india_covid_df.loc['Andhra Pradesh','Deaths']
recovered=india_covid_df.loc['Andhra Pradesh','Total Recovered']
active_cases=india_covid_df.loc['Andhra Pradesh','Active Cases']
india_canvas.create_text(800,30, text="State: Andhra Pradesh", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def kerala():
blue_bullet()
india_canvas.delete("delete")
kerala_button=Button(local_frame, image=red_small_bullet_image, borderwidth=0, bg="#e3fdfd", command=kerala)
india_canvas.create_window(358,695, window=kerala_button)
total_cases=india_covid_df.loc['Kerala', 'Total Cases']
deaths=india_covid_df.loc['Kerala','Deaths']
recovered=india_covid_df.loc['Kerala','Total Recovered']
active_cases=india_covid_df.loc['Kerala','Active Cases']
india_canvas.create_text(800,30, text="State: Kerala", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def tamil_nadu():
blue_bullet()
india_canvas.delete("delete")
tn_button=Button(local_frame, image=red_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=tamil_nadu)
india_canvas.create_window(410,680, window=tn_button)
total_cases=india_covid_df.loc['Tamil Nadu', 'Total Cases']
deaths=india_covid_df.loc['Tamil Nadu','Deaths']
recovered=india_covid_df.loc['Tamil Nadu','Total Recovered']
active_cases=india_covid_df.loc['Tamil Nadu','Active Cases']
india_canvas.create_text(800,30, text="State: Tamil Nadu", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def delhi():
blue_bullet()
india_canvas.delete("delete")
total_cases=india_covid_df.loc['Delhi', 'Total Cases']
deaths=india_covid_df.loc['Delhi','Deaths']
recovered=india_covid_df.loc['Delhi','Total Recovered']
active_cases=india_covid_df.loc['Delhi','Active Cases']
india_canvas.create_text(800,30, text="Delhi", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def jk():
blue_bullet()
india_canvas.delete("delete")
total_cases=india_covid_df.loc['Jammu and Kashmir', 'Total Cases']
deaths=india_covid_df.loc['Jammu and Kashmir','Deaths']
recovered=india_covid_df.loc['Jammu and Kashmir','Total Recovered']
active_cases=india_covid_df.loc['Jammu and Kashmir','Active Cases']
india_canvas.create_text(800,30, text="Jammu & Kashmir", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def puducherry():
blue_bullet()
india_canvas.delete("delete")
total_cases=india_covid_df.loc['Puducherry', 'Total Cases']
deaths=india_covid_df.loc['Puducherry','Deaths']
recovered=india_covid_df.loc['Puducherry','Total Recovered']
active_cases=india_covid_df.loc['Puducherry','Active Cases']
india_canvas.create_text(800,30, text="Puducherry", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def andaman_nicobar():
blue_bullet()
india_canvas.delete("delete")
total_cases=india_covid_df.loc['Andaman and Nicobar Islands', 'Total Cases']
deaths=india_covid_df.loc['Andaman and Nicobar Islands','Deaths']
recovered=india_covid_df.loc['Andaman and Nicobar Islands','Total Recovered']
active_cases=india_covid_df.loc['Andaman and Nicobar Islands','Active Cases']
india_canvas.create_text(800,30, text="Andaman & Nicobar ", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def chandigarh():
blue_bullet()
india_canvas.delete("delete")
total_cases=india_covid_df.loc['Chandigarh', 'Total Cases']
deaths=india_covid_df.loc['Chandigarh','Deaths']
recovered=india_covid_df.loc['Chandigarh','Total Recovered']
active_cases=india_covid_df.loc['Chandigarh','Active Cases']
india_canvas.create_text(800,30, text="Chandigarh", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def lakshadweep():
blue_bullet()
india_canvas.delete("delete")
total_cases=india_covid_df.loc['Lakshadweep', 'Total Cases']
deaths=india_covid_df.loc['Lakshadweep','Deaths']
recovered=india_covid_df.loc['Lakshadweep','Total Recovered']
active_cases=india_covid_df.loc['Lakshadweep','Active Cases']
india_canvas.create_text(800,30, text="Lakshadweep", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def ladakh():
blue_bullet()
india_canvas.delete("delete")
total_cases=india_covid_df.loc['Ladakh', 'Total Cases']
deaths=india_covid_df.loc['Ladakh','Deaths']
recovered=india_covid_df.loc['Ladakh','Total Recovered']
active_cases=india_covid_df.loc['Ladakh','Active Cases']
india_canvas.create_text(800,30, text="Ladakh", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def dadra_nagarhaveli_daman_diu():
blue_bullet()
india_canvas.delete("delete")
total_cases=india_covid_df.loc['Dadra and Nagar Haveli and Daman and Diu', 'Total Cases']
deaths=india_covid_df.loc['Dadra and Nagar Haveli and Daman and Diu','Deaths']
recovered=india_covid_df.loc['Dadra and Nagar Haveli and Daman and Diu','Total Recovered']
active_cases=india_covid_df.loc['Dadra and Nagar Haveli and Daman and Diu','Active Cases']
india_canvas.create_text(800,30, text="Dadra-Nagar Haveli & Daman-Diu", font=('Helvetica', 16,'underline', 'bold'), tag="delete")
india_canvas.create_text(800, 70, text="Total Cases: " + total_cases, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 110, text="Total Deaths: " + deaths, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 150, text="Total Recovered: " + recovered, font=('Helvetica', 16),tag="delete")
india_canvas.create_text(800, 190, text="Active Cases: " + active_cases, font=('Helvetica', 16),tag="delete")
def union_territory():
blue_bullet()
delhi_button=Button(local_frame, image=black_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=delhi)
india_canvas.create_window(375,238, window=delhi_button)
jk_button=Button(local_frame, image=black_final_bullet_image, borderwidth=0, bg="#e3fdfd", command=jk)
india_canvas.create_window(360,80, window=jk_button)
puducherry_button=Button(local_frame, image=black_final_bullet_image, bg="#e3fdfd", command=puducherry, borderwidth=0)
india_canvas.create_window(430,668, window=puducherry_button)
andaman_button=Button(local_frame, image=black_final_bullet_image, bg="#e3fdfd", command=andaman_nicobar, borderwidth=0)
india_canvas.create_window(700,600, window=andaman_button)
chandigarh_button=Button(local_frame, image=black_final_bullet_image, bg="#e3fdfd", command=chandigarh, borderwidth=0)
india_canvas.create_window(362,180, window=chandigarh_button)
lakshwadeep_button=Button(local_frame, image=black_final_bullet_image, bg="#e3fdfd", command=lakshadweep, borderwidth=0)
india_canvas.create_window(220,650, window=lakshwadeep_button)
ladakh_button=Button(local_frame, image=black_final_bullet_image, bg="#e3fdfd", command=ladakh, borderwidth=0)
india_canvas.create_window(300,580, window=ladakh_button)
daman_button=Button(local_frame, image=black_final_bullet_image, bg="#e3fdfd", command=dadra_nagarhaveli_daman_diu, borderwidth=0)
india_canvas.create_window(282,455, window=daman_button)
daman_button2=Button(local_frame, image=black_final_bullet_image, bg="#e3fdfd", command=dadra_nagarhaveli_daman_diu, borderwidth=0)
india_canvas.create_window(250,445, window=daman_button2)
india_canvas=Canvas(local_frame, height=1000, width=1000, bg="#e3fdfd")
india_canvas.grid()
india_image=Image.open(r"C:\Users\riyan\Downloads\india-removebg-preview (1).png")
india_image_resize=india_image.resize((1000,800), Image.ANTIALIAS)
final_india_image=ImageTk.PhotoImage(india_image_resize)
india_canvas.create_image(0,0, anchor=NW, image=final_india_image)
bullet_image=Image.open(r"C:\Users\riyan\Downloads\circle-cropped (7).png")
bullet_image_resize=bullet_image.resize((20,20), Image.ANTIALIAS)
final_bullet_image=ImageTk.PhotoImage(bullet_image_resize)
medium_bullet_resize=bullet_image.resize((14,14), Image.ANTIALIAS)
medium_bullet_image=ImageTk.PhotoImage(medium_bullet_resize)
small_bullet_resize=bullet_image.resize((11,11), Image.ANTIALIAS)
small_bullet_image=ImageTk.PhotoImage(small_bullet_resize)
black_bullet_image=Image.open(r"C:\Users\riyan\OneDrive\Desktop\gui python(tkinter)\redtri.png")
black_bullet_image_resize=black_bullet_image.resize((15,15), Image.ANTIALIAS)
black_final_bullet_image=ImageTk.PhotoImage(black_bullet_image_resize)
red_bullet_image=Image.open(r"C:\Users\riyan\Downloads\circle-cropped (8).png")
red_bullet_image_resize=red_bullet_image.resize((20,20), Image.ANTIALIAS)
red_final_bullet_image=ImageTk.PhotoImage(red_bullet_image_resize)
red_medium_bullet_resize=red_bullet_image.resize((14,14), Image.ANTIALIAS)
red_medium_bullet_image=ImageTk.PhotoImage(red_medium_bullet_resize)
red_small_bullet_resize=red_bullet_image.resize((11,11), Image.ANTIALIAS)
red_small_bullet_image=ImageTk.PhotoImage(red_small_bullet_resize)
def on_enter(event):
union_button['foreground']='blue'
def on_leave(event):
union_button['foreground']='black'
union_button=Button(local_frame, text="View Union Territories", borderwidth=0 , bg="#e3fdfd", command=union_territory, fg="black", font=('Helvetica',14,'bold','underline'),highlightthickness=0, relief='ridge')
india_canvas.create_window(120,50, window=union_button)
union_button.bind("<Enter>", on_enter)
union_button.bind("<Leave>", on_leave)
def normal_text():
india_canvas.create_text(900,680, text="TOTAL",font=('Helvetica',14,'bold','underline'), fill="black")
local_frame.after(500, animate_text)
def animate_text():
india_canvas.create_text(900,680, text="TOTAL",font=('Helvetica',14,'bold','underline'), fill="blue")
local_frame.after(500, normal_text)
india_canvas.create_oval(850,650,950,750,fill="white")
india_canvas.create_text(900,680, text="TOTAL",font=('Helvetica',14,'bold','underline') )
local_frame.after(500, animate_text)
total=india_covid_df.loc['India','Total Cases']
india_canvas.create_text(900,715, text=total, font=('Helvetica',14,'bold'), fill="red")
def blue_bullet():
goa_button=Button(local_frame, image=small_bullet_image, bg="#e3fdfd", command=goa, borderwidth=0)
india_canvas.create_window(300,580, window=goa_button)
hp_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=himachal)
india_canvas.create_window(375,145, window=hp_button)
punjab_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=punjab)
india_canvas.create_window(335,180, window=punjab_button)
uttarakhand_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=uttarakhand)
india_canvas.create_window(420,190, window=uttarakhand_button)
haryana_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=haryana)
india_canvas.create_window(355,230, window=haryana_button)
raja_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=rajasthan)
india_canvas.create_window(290,290, window=raja_button)
up_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=uttar_pradesh)
india_canvas.create_window(450,280, window=up_button)
bihar_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=bihar)
india_canvas.create_window(570,320, window=bihar_button)
sikkim_button=Button(local_frame, image=small_bullet_image, borderwidth=0, bg="#e3fdfd", command=sikkim)
india_canvas.create_window(627,265, window=sikkim_button)
arunachal_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=arunachal_pradesh)
india_canvas.create_window(760,250, window=arunachal_button)
assam_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=assam)
india_canvas.create_window(730,300, window=assam_button)
nagaland_button=Button(local_frame, image=medium_bullet_image, borderwidth=0, bg="#e3fdfd", command=nagaland)
india_canvas.create_window(766,305, window=nagaland_button)
manipur_button=Button(local_frame, image=medium_bullet_image, borderwidth=0, bg="#e3fdfd", command=manipur)
india_canvas.create_window(750,340, window=manipur_button)
mizoram_button=Button(local_frame, image=medium_bullet_image, borderwidth=0, bg="#e3fdfd", command=mizoram)
india_canvas.create_window(725,375, window=mizoram_button)
tripura_button=Button(local_frame, image=small_bullet_image, borderwidth=0, bg="#e3fdfd", command=tripura)
india_canvas.create_window(696,370, window=tripura_button)
meghalaya_button=Button(local_frame, image=medium_bullet_image, borderwidth=0, bg="#e3fdfd", command=meghalaya)
india_canvas.create_window(680,322, window=meghalaya_button)
maharashtra_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=maharashtra)
india_canvas.create_window(350,470, window=maharashtra_button)
gujarat_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=gujarat)
india_canvas.create_window(250,390, window=gujarat_button)
mp_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=madhya_pradesh)
india_canvas.create_window(400,380, window=mp_button)
chhattisgarh_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=chhattisgarh)
india_canvas.create_window(480,430, window=chhattisgarh_button)
jharkhand_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=jharkhand)
india_canvas.create_window(555,380, window=jharkhand_button)
wb_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=west_bengal)
india_canvas.create_window(610,380, window=wb_button)
odisha_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=odisha)
india_canvas.create_window(550,440, window=odisha_button)
karnatka_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=karnatka)
india_canvas.create_window(340,600, window=karnatka_button)
telangana_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=telangana)
india_canvas.create_window(410,520, window=telangana_button)
ap_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=andhra_pradesh)
india_canvas.create_window(415,580, window=ap_button)
tn_button=Button(local_frame, image=final_bullet_image, borderwidth=0, bg="#e3fdfd", command=tamil_nadu)
india_canvas.create_window(410,680, window=tn_button)
kerala_button=Button(local_frame, image=small_bullet_image, borderwidth=0, bg="#e3fdfd", command=kerala)
india_canvas.create_window(358,695, window=kerala_button)
blue_bullet()
def pressed_enter_key(event):
question_asked=search_box.get().title().rstrip()
if question_asked=="Usa" or question_asked=="Us":
question_asked="USA"
elif question_asked=="Uae":
question_asked=question_asked.upper()
elif question_asked=="Uk":
question_asked=question_asked.upper()
question_asked1=get_close_matches(question_asked, country_list, n=1, cutoff=0.7)
if (len(question_asked1)!=0):
try:
question_asked=question_asked1[0]
total_cases=covid_df.loc[question_asked, 'Total Cases']
new_cases=covid_df.loc[question_asked,'New Cases']
deaths=covid_df.loc[question_asked,'Total Deaths']
total_recovered=covid_df.loc[question_asked,'Total Recovered']
search_box.delete(0,END)
covid_notebook.add(answer_frame, text="Search Results")
covid_notebook.hide(3)
answer_frame_canvas.grid(padx=200, pady=125)
answer_frame_canvas.delete("delete_answer")
answer_frame_canvas.create_rectangle(5,5,600,500,fill="#f2d6eb",outline="black", width=5, tag="delete_answer")
answer_frame_canvas.create_text(300, 100, text= question_asked, font=('Roboto',20, 'bold','underline'), tag="delete_answer")
answer_frame_canvas.create_text(300, 150, text="Total Cases: " + total_cases, font=('Roboto',18, 'bold'), tag="delete_answer")
answer_frame_canvas.create_text(300, 200, text="New Cases : " + new_cases, font=('Roboto',18, 'bold'), tag="delete_answer", fill="red")
answer_frame_canvas.create_text(300, 250, text="Total Deaths: " + deaths, font=('Roboto',18, 'bold'), tag="delete_answer")
answer_frame_canvas.create_text(300, 300, text="Total Recovered: " + total_recovered, font=('Roboto',18, 'bold'),fill="green" ,tag="delete_answer")
answer_frame_canvas.create_window(300, 400, window=back_button)
except KeyError:
covid_notebook.add(answer_frame, text="Search Results")
covid_notebook.hide(3)
answer_frame_canvas.grid(padx=200, pady=125)
answer_frame_canvas.delete("delete_answer")
answer_frame_canvas.create_rectangle(5,5,600,500,fill="#f2d6eb",outline="black", width=5, tag="delete_answer")
search_box.delete(0,END)
answer_frame_canvas.create_image(300, 200, image=error_final_image)
messagebox.showerror("Not Found","Please enter a valid country/continent.")
choose_search_frame()
else:
covid_notebook.add(answer_frame, text="Search Results")
covid_notebook.hide(3)
answer_frame_canvas.grid(padx=200, pady=125)
answer_frame_canvas.delete("delete_answer")
answer_frame_canvas.create_rectangle(5,5,600,500,fill="#f2d6eb",outline="black", width=5, tag="delete_answer")
search_box.delete(0,END)
answer_frame_canvas.create_image(300, 200, image=error_final_image)
messagebox.showerror("Not Found","Please enter a valid country/continent.")
choose_search_frame()
def choose_search_frame():
covid_notebook.add(search_frame, text="Search")
covid_notebook.hide(4)
search_image=Image.open(r"C:\Users\riyan\Downloads\mag-glass-removebg-preview-removebg-preview (1).png")
search_image_resize=search_image.resize((30,30), Image.ANTIALIAS)
search_final_image=ImageTk.PhotoImage(search_image_resize)
error_image=Image.open(r"C:\Users\riyan\Downloads\circle-cropped (9).png")
error_image_resize=error_image.resize((250,250), Image.ANTIALIAS)
error_final_image=ImageTk.PhotoImage(error_image_resize)
search_canvas=Canvas(search_frame, height=1000, width=1000, bg="#e3fdfd")
search_canvas.grid()
search_label1=Label(search_frame, text="S", font=('Roboto',45,'bold'), bg="#e3fdfd", fg="#4285F4")
search_canvas.create_window(430, 250, window=search_label1)
search_label2=Label(search_frame, text="e", font=('Roboto',45,'bold'), bg="#e3fdfd", fg="#DB4437")
search_canvas.create_window(470, 250, window=search_label2)
search_label3=Label(search_frame, text="a", font=('Roboto',45,'bold'), bg="#e3fdfd", fg="#F4B400")
search_canvas.create_window(510, 250, window=search_label3)
search_label4=Label(search_frame, text="r", font=('Roboto',45,'bold'), bg="#e3fdfd", fg="#4285F4")
search_canvas.create_window(545, 250, window=search_label4)
search_label5=Label(search_frame, text="c", font=('Roboto',45,'bold'), bg="#e3fdfd", fg="#0F9D58")
search_canvas.create_window(580 ,250, window=search_label5)
search_label6=Label(search_frame, text="h", font=('Roboto',45,'bold'), bg="#e3fdfd", fg="#DB4437")
search_canvas.create_window(615, 250, window=search_label6)
search_box=Entry(search_frame, font=('Helvetica','18'),width=40)
search_canvas.create_window(500, 340, window=search_box)
question_asked=search_box.get()
search_box.bind('<Return>', pressed_enter_key)
search_button=Button(search_frame, image=search_final_image, bg="white", borderwidth=0, command=lambda : pressed_enter_key(1))
search_canvas.create_window(758,340, window=search_button)
info_label=Label(search_frame, text="Enter a Country or Continent", font=('Roboto', 16, 'bold','underline'), bg="#e3fdfd", fg="black")
search_canvas.create_window(500, 400, window=info_label )
def navigate_first_tab():
covid_notebook.select(0)
def search_normal_text():
all_updates_button=Button(text="COVID-19 LIVE-UPDATES HERE",font=('Times New Roman', 30, 'bold', 'underline'),fg="#142850", bg="#e3fdfd", borderwidth=0,command=navigate_first_tab)
search_canvas.create_window(510, 670, window=all_updates_button )
search_frame.after(500, search_animate_text)
def search_animate_text():
all_updates_button=Button(text="COVID-19 LIVE-UPDATES HERE",font=('Times New Roman', 30, 'bold', 'underline'),fg="#fe346e", bg="#e3fdfd",command=navigate_first_tab, borderwidth=0)
search_canvas.create_window(510, 670, window=all_updates_button )
search_frame.after(500, search_normal_text)
all_updates_button=Button(text="COVID-19 LIVE-UPDATES HERE",font=('Times New Roman', 30, 'bold', 'underline'),fg="#142850", bg="#e3fdfd", command=navigate_first_tab, borderwidth=0)
search_canvas.create_window(510, 670, window=all_updates_button )
search_frame.after(500, search_animate_text)
answer_frame_canvas=Canvas(answer_frame, height=500,width=600, bg="#f2d6eb")
back_button=Button(answer_frame, text="Back to Search", command=choose_search_frame, bg="white", fg="black", font=('Roboto',20,'bold'), borderwidth=1)
faq_frame_canvas=Canvas(faq_frame, height=600, width=600, bg="#f2d6eb")
faq_frame_canvas.grid(padx=200, pady=100)
faq_frame_canvas.create_rectangle(5,5,600,600,fill="#f2d6eb",outline="black", width=5)
faq_frame_canvas.create_text(300,75, text="COVID-19 FAQ", font=('Balsamiq Sans', 30, 'bold', 'underline'))
question_list=['What is the source of the virus?','How does the virus spread?','How can I protect myself?',
'What are the symptoms of COVID-19?','What is the recovery time of the virus?', 'Can the virus spread through food?']
answer_list=['It is caused by a coronavirus called SARS-CoV-2, which originated in bats. However, the exact source is unknown.',
'Spreads mainly through respiratory droplets produced when an infected person coughs or sneezes. Spread is more likely when people are in close contact (within about 6 feet).',
'Wash hands often, avoid close contact, cover your mouth when around people, clean and disinfect.',
'Fever, cough, shortness of breath, nausea, headache, sore throat, fatigue are some of the common symptoms.',
'They found that for people with mild disease, recovery time is about two weeks, while people with severe or critical disease recover within three to six weeks.',
'It is highly unlikely that people can contract COVID-19 from food or food packaging.']
def selected_question(event):
option=faq_combo.get()
faq_frame_canvas.delete("ans")
for i in range(0,len(question_list)):
if option==question_list[i]:
index=i
break
answer=answer_list[index].split()
one=""
two=""
three=""
four=""
if index==1 or index==4:
for i in range(0,5):
one=one + " " + answer[i]
for i in range(5,10):
two=two + " " + answer[i]
for i in range(10,15):
three=three + " " + answer[i]
for i in range(15, len(answer)):
four=four + " " + answer[i]
faq_frame_canvas.create_text(300,150, text=one, font=('Helvetica',18,'bold'), tag="ans")
faq_frame_canvas.create_text(300,200, text=two, font=('Helvetica',18,'bold'), tag="ans")
faq_frame_canvas.create_text(300,250, text=three, font=('Helvetica',18,'bold'), tag="ans")
        faq_frame_canvas.create_text(300,300, text=four, font=('Helvetica',18,'bold'), tag="ans")
else:
for i in range(0,6):
one=one + " " + answer[i]
for i in range(6,12):
two=two + " " + answer[i]
for i in range(12,len(answer)):
three=three + " " + answer[i]
faq_frame_canvas.create_text(300,200, text=one, font=('Helvetica',18,'bold'), tag="ans")
faq_frame_canvas.create_text(300,250, text=two, font=('Helvetica',18,'bold'), tag="ans")
faq_frame_canvas.create_text(300,300, text=three, font=('Helvetica',18,'bold'), tag="ans")
faq_combo=ttk.Combobox(value=question_list, width=60, height=6, state='readonly')
faq_combo.bind("<<ComboboxSelected>>", selected_question)
faq_frame_canvas.create_window(300, 400, window=faq_combo)
root.mainloop() | true |
45a8869b07c62b859edf59a4cc1015500341a7cd | Python | sunyeongchoi/sydsyd_challenge | /argorithm/OXquiz.py | UTF-8 | 281 | 3.140625 | 3 | [] | no_license | import sys
input = sys.stdin.readline
T = int(input())
for _ in range(T):
ox = list(input().rstrip())
cnt = 1
sum = 0
for index, i in enumerate(ox):
if i == 'O':
sum += cnt
cnt += 1
else:
cnt = 1
print(sum)
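The same run-length scoring, factored into a standalone function for testing (a hypothetical helper, not part of the original submission):

```python
def ox_score(ox: str) -> int:
    """Score an OX quiz string: each consecutive 'O' is worth one more point than the previous."""
    score, streak = 0, 0
    for ch in ox:
        streak = streak + 1 if ch == 'O' else 0  # reset the streak on 'X'
        score += streak
    return score

print(ox_score("OOXXOXXOOO"))  # -> 10
```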
| true |
db6f68f22bb0c73827b2379ebc617601b7ca94ad | Python | MedicalEntomologist/VecID | /main.py | UTF-8 | 1,506 | 2.75 | 3 | [] | no_license | from Bio import SeqIO
import json
from itertools import count
def shred(seqrecord, k=33):
kmer_set = set()
for i in range(0,len(seqrecord.seq) - k):
kmer_set.add(str(seqrecord.seq[i:i+k]))
return seqrecord.name, kmer_set
def build_database(seqs, fp):
db = dict(kmers=dict(), names=dict())
for seq, i in zip(seqs, count()):
name, kmer_set = shred(seq)
#db[name] = kmer_set
for kmer in kmer_set:
if kmer not in db['kmers']:
#sets can't be JSON serialized
db['kmers'][kmer] = list()
db['kmers'][kmer].append(i)
db['names'][i] = name
json.dump(db, fp, indent=2)
def query_database(query, db):
qname, qset = shred(query)
votes = dict()
#find kmer matches in db
for kmer in qset:
        name_ids = db['kmers'].get(kmer, [])  # query k-mers unseen in the db cast no votes
for name_id in name_ids:
if name_id not in votes:
votes[name_id] = 0
votes[name_id] = votes[name_id] + 1
#figure out which db kmer set was matched to the most
    best_name_id = 0
    best_count = 0
    for name_id, num_votes in votes.items():  # renamed to avoid shadowing itertools.count
        if num_votes > best_count:
            best_name_id = name_id
            best_count = num_votes
actual_name = db['names'][str(best_name_id)]
#return the name of the db reference sequence that the query kmers most matched to
return actual_name
if __name__ == '__main__':
    print("building database")
    build_database(SeqIO.parse(open('four_reads.fasta', 'r'), 'fasta'), open('db.json', 'w'))
    print("querying database")
    print(query_database(next(SeqIO.parse(open('one_read.fasta', 'r'), 'fasta')), json.load(open('db.json', 'r'))))
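A minimal, dependency-free sketch of the same k-mer voting idea (Python 3; hypothetical sequences stand in for the FASTA files, and `collections.Counter` replaces the manual vote loop):

```python
from collections import Counter

def shred(name, seq, k=4):
    """Return the sequence name and its set of k-mers."""
    return name, {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_db(named_seqs, k=4):
    """Map each k-mer to the list of reference names that contain it."""
    db = {}
    for name, seq in named_seqs:
        for kmer in shred(name, seq, k)[1]:
            db.setdefault(kmer, []).append(name)
    return db

def query(db, seq, k=4):
    """Vote for references sharing k-mers with the query; return the best match."""
    votes = Counter()
    for kmer in shred('query', seq, k)[1]:
        votes.update(db.get(kmer, []))   # unseen k-mers cast no votes
    return votes.most_common(1)[0][0] if votes else None

refs = [('ref1', 'ACGTACGTAC'), ('ref2', 'TTTTGGGGCC')]
db = build_db(refs)
print(query(db, 'ACGTACG'))  # -> ref1
```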
743bf83d6feb096edf7a80ed054d431ad9210206 | Python | MrZQAQ/autogithubhosts | /main.py | UTF-8 | 1,976 | 2.65625 | 3 | [] | no_license | import os,sys,ctypes
import datetime
import platform
import shutil
import get_ip_utils
hostLocation = ''
addr2ip = {}
sites = []
def loadSites():
with open('sites.txt','r',encoding='UTF-8') as sitefile:
line = sitefile.readline()
while line:
if not line.startswith('#'):
sites.append(line.strip('\n'))
line = sitefile.readline()
def checkPlatform():
global hostLocation
if platform.system() == 'Windows':
hostLocation = hostLocation + r'C:\Windows\System32\drivers\etc\hosts'
elif platform.system() == 'Linux':
hostLocation = hostLocation + r"/etc/hosts"
else:
        print('\nOnly works on Windows/Linux\n')
        raise OSError('unsupported platform: ' + platform.system())
def dropDuplication(line):
if ('# managed by autogithubhosts' in line) or ('#github.com start' in line) or ('#github.com end' in line):
return True
return False
def getIp():
global sites
for site in sites:
trueip=get_ip_utils.getIpFromipapi(site)
if trueip != None:
addr2ip[site] = trueip
def updateHost():
global addr2ip
global hostLocation
today = datetime.date.today()
with open(hostLocation, "r") as f1:
f1_lines = f1.readlines()
with open("temphost", "w") as f2:
for line in f1_lines:
if dropDuplication(line) == False:
f2.write(line)
f2.write('\n#github.com start' +
' **** ' + str(today) +' update ****\n')
for key in addr2ip:
f2.write(addr2ip[key] + "\t" + key + "\t# managed by autogithubhosts\n")
f2.write('#github.com end ********')
os.remove(hostLocation)
shutil.move('temphost',hostLocation)
if __name__ == '__main__':
print('载入列表...')
loadSites()
print('检测平台类型...')
checkPlatform()
print('获取ip...')
getIp()
print('写入文件...')
updateHost()
print('操作完成') | true |
0abb19f26579cab0f53c9c5f407652dc4d7d9aa3 | Python | ZenMoore/learn_CTC_pytorch | /steps/get_model_units.py | UTF-8 | 588 | 3.140625 | 3 | [] | no_license |
import sys
if len(sys.argv) != 2:
print("We need training text to generate the modelling units.")
sys.exit(1)
train_text = sys.argv[1]
units_file = 'data/units'
units = {}
with open(train_text, 'r') as fin:
line = fin.readline()
while line:
line = line.strip().split(' ')
for char in line[1:]:
            if char not in units:
                units[char] = True
line = fin.readline()
with open(units_file, 'w') as fwriter:
    for char in units:
        print(char, file=fwriter)
| true |
01948378c33ea8399f0110356be1c7a5ccd9effe | Python | alsenydiallo/AutoGrader | /src/map-files.py | UTF-8 | 1,861 | 2.953125 | 3 | [] | no_license | import subprocess
import os
import sys
def mapfiles(src, target, mapfile='files.map', clean=False):
with open(os.path.join(src, mapfile), 'rt') as fin:
knownfiles = {}
for l in fin.readlines():
l = l.strip()
if not l: continue
k,v = l.split()
knownfiles[k] = v
print(knownfiles)
for srcfile in os.listdir(src):
if srcfile not in knownfiles:
if srcfile.startswith('.') or srcfile == mapfile:
continue
else:
                print("Don't know about", srcfile)
sys.exit(1)
nonignoredfiles = [k for k in knownfiles if knownfiles[k] != 'ignore']
if clean:
for k in nonignoredfiles:
subprocess.call('rm -f %s/%s-%s'%(target,k,knownfiles[k]), shell=True)
else:
for k in nonignoredfiles:
subprocess.call('cp %s/%s %s/%s-%s'%(src,k,target,k,knownfiles[k]), shell=True)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description="""
Copy files from one directory (src) to another (target) based on
rules from a files.map file. The files.map file indicates the
role of each file in the source directory, and by default lives in
the source directory. files.map lists one file per line with the
file's role. File with the role 'ignore' are not copied to the
target directory. All other files are copied and given a
corresponding suffix for their role (e.g., '-handout' in the
target directory for a file whose role is 'handout'.""")
parser.add_argument('--clean', action='store_true')
parser.add_argument('src', help='src directory')
parser.add_argument('target', help='target directory')
args = parser.parse_args()
mapfiles(args.src, args.target, clean=args.clean)
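An illustrative run of just the files.map parsing step from mapfiles() (hypothetical file names and roles; the copy/remove shell calls are skipped):

```python
import io

# A made-up files.map: one "<filename> <role>" pair per line.
mapfile_text = """\
notes.txt ignore
hw1.c handout
hw1_sol.c solution
"""

knownfiles = {}
for l in io.StringIO(mapfile_text):   # stands in for the open() call
    l = l.strip()
    if not l:
        continue
    k, v = l.split()
    knownfiles[k] = v

# Files whose role is 'ignore' are never copied to the target directory.
nonignoredfiles = [k for k in knownfiles if knownfiles[k] != 'ignore']
print(sorted(nonignoredfiles))  # -> ['hw1.c', 'hw1_sol.c']
```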
| true |
28abeab8562c78ef9ac0a04e45e1dccb2d0019d6 | Python | nawaf2545/python-exe | /ex9sum numlist.py | UTF-8 | 147 | 3.46875 | 3 | [] | no_license | numbers = [6,5,1,9,7,4]
total = 0
for item in numbers:
#total = total +item
    total += item
else:
print("NO numbers left")
print(total)
| true |
c44d22e5caa21b8c48df2c6fa07bf3664e44156e | Python | NTU-Power/crawler | /mod/crawIO.py | UTF-8 | 1,682 | 2.75 | 3 | [] | no_license | from tabulate import tabulate
def printPowerData(powerDataTuple):
(powerAttrs, powerData) = powerDataTuple
print(powerAttrs)
print(tabulate(powerData))
print('')
def printMeters(meterTuples):
print(tabulate(meterTuples))
def dumpMetersCSV(meterTuples, csvFileName):
with open(csvFileName, 'w') as csvFile:
csvFile.write('MeterID,MeterName\n')
for oneTuple in meterTuples:
csvFile.write(oneTuple[0]+','+oneTuple[1]+'\n')
def loadMetersCSV(csvFileName):
with open(csvFileName) as csvFile:
csvLines = csvFile.readlines()
meterIDs = [a.split(',')[0] for a in csvLines[1:]]
meterNames = [a.split(',')[1][0:-1] for a in csvLines[1:]]
return meterIDs, meterNames
def loadMappingCSV(csvFileName):
with open(csvFileName) as csvFile:
csvLines = csvFile.readlines()
buildingIDs = [a.split(',')[0] for a in csvLines[1:]]
buildingNames = [a.split(',')[1] for a in csvLines[1:]]
PowerMeterStrs = [a.split('"')[1] for a in csvLines[1:]]
PowerMeterCleanStrs = [0]*len(PowerMeterStrs)
for i, oneStr in enumerate(PowerMeterStrs):
cleanStr = oneStr.replace('[', '').replace(']', '')\
.replace(',', '').replace('\'', '')\
.replace(' ', '').replace('(','')
PowerMeterCleanStrs[i] = cleanStr
PowerMeters = \
[
[
(oneMeterStr[0], oneMeterStr[1:])
for oneMeterStr in cleanStr.split(')')[0:-1]
] for cleanStr in PowerMeterCleanStrs
]
return zip(buildingIDs, buildingNames, PowerMeters)
| true |
2297f64e91efd3298e49afbeb90880ec5cf93544 | Python | DeokO/Deeplearning-from-scratch | /ch05_Backpropagation/ex02_activation_layers_propagation.py | UTF-8 | 2,433 | 3.109375 | 3 | [] | no_license | # Written assuming the inputs are numpy arrays
import numpy as np
#Relu function
class Relu:
def __init__(self):
        self.mask = None  # boolean array: True where the input is <= 0
def forward(self, x):
self.mask = (x<=0)
out = x.copy()
out[self.mask] = 0
return out
def backward(self, dout):
dout[self.mask] = 0
dx = dout * 1
return dx
x = np.array([[1.0, -0.5], [-2.0, 3.0]])
print(x)
mask = (x<=0)
print(mask)
#Sigmoid function
class Sigmoid:
def __init__(self):
self.out = None
def forward(self, x):
out = 1 / (1+np.exp(-x))
self.out = out
return out
def backward(self, dout):
dx = dout * (1.0 - self.out) * self.out
return dx
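The sigmoid backward formula dout * (1 - y) * y can be sanity-checked against a numerical derivative (standalone sketch; the sigmoid is re-defined inline):

```python
import numpy as np

def sigmoid_fn(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.5, -1.2, 2.0])
y = sigmoid_fn(x)
analytic = (1.0 - y) * y                                 # backward pass with dout = 1
eps = 1e-5
numeric = (sigmoid_fn(x + eps) - sigmoid_fn(x - eps)) / (2 * eps)  # central difference
print(np.allclose(analytic, numeric, atol=1e-7))  # -> True
```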
# Affine layer with batch support
X_dot_W = np.array([[0,0,0], [10, 10, 10]])
B = np.array([1, 2, 3])
X_dot_W
X_dot_W + B # broadcasting adds B to every row -> in backprop, each sample's gradient must be accumulated into the bias elements
dY = np.array([[1, 2, 3], [4, 5, 6]])
dY
dB = np.sum(dY, axis=0) # sum over the batch axis (per column)
dB
#Affine layer
class Affine:
def __init__(self, W, b):
self.b = b
self.W = W
self.x = None
self.dW = None
self.db = None
def forward(self, x):
self.x = x
out = np.dot(x, self.W) + self.b
return out
def backward(self, dout):
dx = np.dot(dout, self.W.T)
self.dW = np.dot(self.x.T, dout)
self.db = np.sum(dout, axis=0)
return dx
# Softmax-with-Loss layer implementation
from common.functions import *
class SoftmaxWithLoss:
def __init__(self):
        self.loss = None # loss
        self.y = None # output of softmax
        self.t = None # ground-truth labels (one-hot vectors)
def forward(self, x, t):
self.t = t
self.y = softmax(x)
self.loss = cross_entropy_error(self.y, self.t)
return self.loss
def backward(self, dout=1):
batch_size = self.t.shape[0]
        # Note: divide by the batch size so each sample's error is propagated to the previous layer; learning uses the batch-averaged gradient
dx = (self.y - self.t) / batch_size
return dx
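A numeric sanity check of this layer (softmax and cross_entropy_error are re-implemented inline here, since common.functions is not shown; one-hot targets are assumed):

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=1, keepdims=True)        # subtract the row max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_error(y, t):
    return -np.sum(t * np.log(y + 1e-7)) / y.shape[0]

x = np.array([[0.3, 2.9, 4.0], [0.0, 0.0, 0.0]])
t = np.array([[0, 0, 1], [0, 1, 0]])            # one-hot targets
y = softmax(x)
loss = cross_entropy_error(y, t)
dx = (y - t) / x.shape[0]                        # what SoftmaxWithLoss.backward returns

print(np.allclose(dx.sum(axis=1), 0.0))          # -> True: each row's gradient sums to zero
```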
| true |
ba2a94081ff03d9e2718a9345ccfb6f2f7a703d7 | Python | novdulawan/python-function | /function_stdoutput.py | UTF-8 | 390 | 4.03125 | 4 | [] | no_license | #!/usr/bin/env python
#Author: Novelyn G. Dulawan
#Date: March 16, 2016
#Purpose: Python Script for displaying output in three ways
def MyFunc(name, age):
print "Hi! My name is ", name + "and my age is", age
print "Hi! My name is %s and my age is %d" %(name, age)
print "Hi! My name is {} and my age is {}".format(name,age)
MyFunc("Mary", 19)  # MyFunc prints directly and returns None, so don't print its result
| true |
a8be7f7045fcaba0cffee50bf7b8f7af61f6f3fe | Python | cklasicki/dataalgorithms | /tasks/arrays/pascal_triangle.py | UTF-8 | 856 | 4.34375 | 4 | [] | no_license | '''
Points to note:
1. We have to return a list.
2. The elements of n^th row are made up of elements of (n-1)^th row. This comes up till the 1^st row. We will follow a top-down approach.
3. Except for the first and last element, any other element at position `j` in the current row is the sum of elements at position `j` and `j-1` in the previous row.
4. Be careful about the edge cases, example, an index should never be a NEGATIVE at any point of time.
'''
def nth_row_pascal(n):
if n == 0:
return [1]
current_row = [1]
for i in range(1, n+1):
previous_row = current_row
current_row = [1]
for j in range(1, i):
next_number = previous_row[j] + previous_row[j-1]
current_row.append(next_number)
current_row.append(1)
return current_row
print( nth_row_pascal(10) ) | true |
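The recurrence can be cross-checked against the closed form C(n, k); here is a compact variant of the same top-down construction (math.comb requires Python 3.8+):

```python
from math import comb  # Python 3.8+

def pascal_row(n):
    """Build the n-th row (0-indexed) by repeatedly summing adjacent pairs."""
    row = [1]
    for _ in range(n):
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return row

print(pascal_row(10) == [comb(10, k) for k in range(11)])  # -> True
```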
6ea4356cb3531535ba1f716fb31f4f5fcd5c48c8 | Python | LorenzoVaralo/ExerciciosCursoEmVideo | /Mundo 3/Ex114.py | UTF-8 | 360 | 3.34375 | 3 | [] | no_license | '''Crie um codigo em python que teste se o site pudim está
acessivel pelo computador usado.'''
import urllib3
url = 'http://www.pudim.com.br/'
http = urllib3.PoolManager()
try:
urlcode = http.request('GET', url)
except:
    print('\033[1;31mCould not connect to the pudim site!\033[m')
else:
    print('\033[1;32mThe pudim site is working!\033[m')
| true |
78458b77bb00974148167226116154198d42e912 | Python | neviim/ListaCarrosGT5 | /listaCarrosGT5.py | UTF-8 | 1,059 | 2.75 | 3 | [] | no_license | #!/usr/bin/env python
# encoding: utf-8
from BeautifulSoup import BeautifulSoup
import urllib2
url = "http://www.gran-turismo.com/local/jp/data1/products/gt5/carlist_en.html"
pagina = urllib2.urlopen(url).read()
soup = BeautifulSoup(pagina)
soup.prettify() # re-renders the parsed markup as formatted HTML
table = soup.find('table')
rows = table.findAll('tr')
for tr in rows:
    # Extract the standard/premium info
cols1 = tr.findAll('td') # [<td class="icon2"> </td>, <td class="icon"><img src="./images/icon02.jpg" width="16"
# height="14" alt="standard" /></td>, <td><p>AC Cars 427 S/C '66</p></td>]
    # Get the car's code.
textID = ''.join(tr.find(text=True))
#textCarro = ''.join(cols1.find(text=True))
print textID, cols1
#splT1 = str((cols1[1])).split('"') # <td class="icon"><img src="./images/icon02.jpg" width="16" height="14" alt="standard" /></td>
#print (textID, splT1[9]) # Retira da string a informacao de standard/premium
# ------------
print | true |
e445c4792f47c79ddbecad4886407a5d6656b791 | Python | GalOz666/redhat_gregex | /test_units.py | UTF-8 | 585 | 3.21875 | 3 | [] | no_license | from matchers import FileMatcher, StringMatcher
file = 'lorem.txt'
a = FileMatcher(file, 'ut')
b = StringMatcher('Ut enim ad minim veniam, \n'
'quis nostrud exercitation ullamco laboris nisi ut \n'
'aliquip ex ea commodo consequat.', 'ut')
def test_normal():
c = a.line_dict
assert len(c) == 3
a.print_normal()
b.print_normal()
def test_file_color():
a.print_color()
b.print_color()
def test_carret():
a.print_with_caret()
b.print_with_caret()
def test_machine():
a.print_machine()
b.print_machine()
| true |
77df60265c6c144971b5dc2fd8d9e9cf98353bbb | Python | wonjongah/JAVA_Python_Cplusplus | /Python/itertools/count.py | UTF-8 | 203 | 3.578125 | 4 | [] | no_license | from itertools import count
# for n in count(10):
# print(n)
for n, s in zip(count(10), ["a", "b", "c", "d"]):
print(n, s)
for n, s in zip(count(100, 2), ["e", "f", "g", "h"]):
print(n, s) | true |
ca75a46ea18afee754cb9a67b24cba2163dec414 | Python | Miguel-leon2000/Arreglos | /2_dimensiones.py | UTF-8 | 1,403 | 3.640625 | 4 | [] | no_license | Matriz = [
[1, 2, 3, 4, 5, 6, 7],
[8, 9, 10, 11, 12, 13, 14],
[15, 16, 17, 18, 19, 20, 21],
[22, 23, 24, 25, 26, 27, 28],
[29, 30, 31, 32, 33, 34, 35],
[36, 37, 38, 39, 40, 41, 42],
[43, 44, 45, 46, 47, 48, 49]
]
# print(len(Matriz))
# print(len(Matriz[0]))
# print(len(Matriz[0][0]))
print("-----------Top-Bottom / Left-Right--------------------------")
for f in range(0, len(Matriz)):
for c in range(0, len(Matriz[f])):
print(Matriz[f][c], "\t", end="")
print("\n")
print("-----------Top-Bottom / Right-Left---------------------------")
for f in range(0, len(Matriz)):
for c in range(len(Matriz[f]) - 1, -1, -1):
print(Matriz[f][c], "\t", end="")
print("\n")
print("-----------Bottom-Top / Left-Right---------------------------")
for f in range(len(Matriz) - 1, -1, -1):
for c in range(len(Matriz[f])):
print(Matriz[f][c], "\t", end="")
print("\n")
print("-----------Bottom-Top / Right-Left---------------------------")
for f in range(len(Matriz) - 1, -1, -1):
for c in range(len(Matriz[f]) - 1, -1, -1):
print(Matriz[f][c], "\t", end="")
print("\n")
print("-----------------------Rotate------------------------------------------")
for f in range(0, len(Matriz)):
for c in range(0, len(Matriz[f])):
print(Matriz[c][f], "\t", end="")
print("\n")
| true |
9418059cb37a0914194b454ba65a81faf6465ab3 | Python | karisjochen/Taylor_Swift_Text_Generator | /genius_scrapping_addalbum.py | UTF-8 | 4,007 | 2.890625 | 3 | [] | no_license |
CLIENT_ACCESS_TOKEN = 'ENTER TOKEN HERE'
import lyricsgenius as genius
from openpyxl import Workbook
import datetime
import pandas as pd
import numpy as np
#calling api and scrapping data
api = genius.Genius(CLIENT_ACCESS_TOKEN, skip_non_songs = True, remove_section_headers = True, replace_default_terms= True)
#creates artist object with properties: name, image_url, songs, and num_songs
taylorswift = api.search_artist( 'Taylor Swift',max_songs=None, sort='popularity', get_full_info=False)
#preparing data for dataframe
ranking_list = np.arange(start=1, stop=taylorswift.num_songs+1)
title_list = []
artist_list = []
lyrics_list = []
album_list = []
year_list = []
for song in taylorswift.songs:
title = song.title
title_list.append(title)
artist = song.artist
artist_list.append(artist)
lyrics = song.lyrics
lyrics_list.append(lyrics)
try:
testsong = api.search_song(title, taylorswift.name)
except:
print('unable to call api to search song', song)
continue
try:
album = testsong.album
album_list.append(album)
except:
album = 'NaN'
album_list.append(album)
try:
year = testsong.year
year_list.append(year)
except:
year = 'NaN'
year_list.append(year)
datadict = {'rank': ranking_list, 'title': title_list, 'artist': artist_list,\
'lyrics': lyrics_list, 'album': album_list, 'year': year_list}
pd.set_option('max_colwidth',5000)
tswiftdf= pd.DataFrame(datadict)
'''
************************************************************************************************
DATA CLEANSING
************************************************************************************************
'''
#good idea just to get an overview of the dataframe, number of columns, rows, datatypes, number of null values
tswiftdf.info()
print()
print()
tswiftdf.drop_duplicates(subset='lyrics', inplace=True)
#excluded_termskj = ['(Original)', '[Liner Notes]', '[Discography List]','(Intro)','(Live)', '(Acoustic)', '(Remix)',\
# '(Live/2011)', '(Voice Memmo)', '(Grammys 2016)', '(Piano Version)', '(Piano/Vocal)',\
# '(Solo Version)', '(Alternate Demo)', '(Alternate Version)','(Demo)', '(Vevo Version)', '(For Daddy Gene)',\
# '(Radio Disney Version)', '(Happy Voting)', '(Pop Mix)', '(Digital Dog Radio Mix)']
#drop rows in place that consist of words in remove_titles variable
remove_titles = 'Live|Acoustic|Remix|version|Disney|Demo|Radio|Memo|Mix|Grammys|Dog|Liner'
tswiftdf.drop(tswiftdf[tswiftdf['title'].str.contains(remove_titles, case=False)].index, inplace = True)
tswiftdf['lyrics'] = tswiftdf['lyrics'].str.replace('\n', ' ')  # str.replace works on substrings; a space keeps words on adjacent lines separate
#test that desired \n was removed
print(tswiftdf.iloc[0].loc['lyrics'])
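A pitfall worth knowing when stripping newlines here: `Series.replace` without `regex=True` only substitutes cells whose *entire* value equals the pattern, while `Series.str.replace` substitutes substrings inside each cell. A small standalone illustration:

```python
import pandas as pd

s = pd.Series(["line one\nline two", "\n"])

# value-level replace: only the cell that is exactly "\n" changes
assert s.replace('\n', '').tolist() == ["line one\nline two", ""]

# substring replace: newlines inside cells are replaced too
assert s.str.replace('\n', ' ').tolist() == ["line one line two", " "]
```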
# Check if any text was truncated
pd_width = pd.get_option('max_colwidth')
maxsize = tswiftdf['lyrics'].map(len).max()
n_truncated = (tswiftdf['lyrics'].map(len) > pd_width).sum()
print("\nTEXT LENGTH:")
print("{:<17s}{:>6d}".format(" Max. Accepted", pd_width))
print("{:<17s}{:>6d}".format(" Max. Observed", maxsize))
print("{:<17s}{:>6d}".format(" Truncated", n_truncated))
print()
print()
#actual date was imported into year column. i still want this data but I also just want to know the year
tswiftdf['year'] = pd.to_datetime(tswiftdf['year'], format = '%Y/%m/%d')
tswiftdf['date'] = tswiftdf['year']
#extract just the year
tswiftdf['year'] = pd.DatetimeIndex(tswiftdf['year']).year
#get rid of the time component and just keeps the date
tswiftdf['date'] = tswiftdf['date'].dt.date
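The year/date split used above can be seen on a toy series — the same `pd.DatetimeIndex(...).year` and `.dt.date` accessors, with made-up dates for illustration:

```python
import datetime
import pandas as pd

col = pd.to_datetime(pd.Series(["2008/11/11", "2012/10/22"]), format="%Y/%m/%d")

# extract just the year as integers
years = pd.DatetimeIndex(col).year
assert list(years) == [2008, 2012]

# drop the time component, keeping plain date objects
dates = col.dt.date
assert dates.tolist() == [datetime.date(2008, 11, 11), datetime.date(2012, 10, 22)]
```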
#make sure the datatypes are correct
#not sure why they revert back to int and string object but the cleanup worked in the process
tswiftdf.info()
print()
print()
#reindex rank column after all data cleansing
tswiftdf['rank']= np.arange(start=1, stop=tswiftdf.shape[0]+1)
#write cleaned up df to excel file
tswiftdf.to_excel('TaylorSwift_Songs.xlsx',sheet_name='song_data')
print('TSWIZZLE cleanup complete')
print()
print('LETS GET ANALYZING')
| true |
3440298c55375096e44d209118a979aa353705f3 | Python | SondreHerr/IoT-Lab-gruppe-A | /Python/senseHat/printLetter_senseHat.py | UTF-8 | 305 | 2.84375 | 3 | [] | no_license | from sense_hat import SenseHat
from time import sleep
from random import randint
sense = SenseHat()
r = randint(0, 255)
sense.show_letter("j", (r, 0, 0))
sleep(2)
r = randint(0, 255)
sense.show_letter("e", (0, r, 0))
sleep(2)
r = randint(0, 255)
sense.show_letter("w", (0, 0, r))
sleep(2)
sense.clear() | true |
41815754c99718bcbf509e6ae434c4249e131303 | Python | chetanmreddy/robotics-projects | /nomad_camera_tracking/nodes/SSC32.py | UTF-8 | 884 | 2.796875 | 3 | [] | no_license | #
# manage interaction with a LynxMotion SSC-32 via a serial port
#
import serial
import time
lineEnd = '\r\n'
class SSC32:
def __init__(self, serialPort):
self.port = serial.Serial(port = serialPort,
baudrate = 9600,
timeout = 1)
self.port.setRTS(level = True)
self.port.setDTR(level = True)
self.firmwareVersion = self.requestResponse('VER')
return
def sendCommand(self, command):
if (len(command) > 0):
self.port.write(command)
self.port.write(lineEnd)
time.sleep(0.05)
return
def getResponse(self):
response = ''
while (self.port.inWaiting() == 0):
continue
while (self.port.inWaiting() > 0):
response += self.port.read()
return response
def requestResponse(self, request):
self.sendCommand(request)
return self.getResponse()
def commandNoResponse(self, command):
self.sendCommand(command)
return
| true |
dec15d14a30e298b11d239c2efd0ae2951a147a1 | Python | bboatright013/flaskpractice | /calc/app.py | UTF-8 | 1,182 | 3.53125 | 4 | [] | no_license | # Put your app in here.
from flask import Flask, request
import operations
app = Flask(__name__)
@app.route('/math/<func>')
def maths(func):
print(request.args)
a = int(request.args['a'])
b = int(request.args['b'])
    # map names to the functions themselves and call only the requested one
    # (building the dict with computed values would run all four operations
    # eagerly, so b == 0 would crash with ZeroDivisionError even for "add")
    functions = {
        "add": operations.add,
        "sub": operations.sub,
        "mult": operations.mult,
        "div": operations.div
    }
    op = functions.get(func)
    if op is None:
        return f"unknown operation: {func}"
    x = op(a, b)
return f"{a} and {b} = {x}"
@app.route('/add')
def adds():
print(request.args)
a = int(request.args['a'])
b = int(request.args['b'])
x = operations.add(a,b)
return f"{a} and {b} = {x}"
@app.route('/sub')
def subs():
print(request.args)
a = int(request.args['a'])
b = int(request.args['b'])
x = operations.sub(a,b)
return f"{a} and {b} = {x}"
@app.route('/mult')
def mults():
print(request.args)
a = int(request.args['a'])
b = int(request.args['b'])
x = operations.mult(a,b)
return f"{a} and {b} = {x}"
@app.route('/div')
def divs():
print(request.args)
a = int(request.args['a'])
b = int(request.args['b'])
x = operations.div(a,b)
return f"{a} and {b} = {x}" | true |
5730f367e36e50091945e00472903d3ea888ff39 | Python | ray-project/ray | /python/ray/data/tests/test_file_based_datasource.py | UTF-8 | 2,396 | 2.5625 | 3 | [
"MIT",
"BSD-3-Clause",
"Apache-2.0"
] | permissive | import os
import pyarrow
import pytest
import ray
from ray.data.block import BlockAccessor
from ray.data.datasource import FileBasedDatasource
from ray.data.datasource.file_based_datasource import (
OPEN_FILE_MAX_ATTEMPTS,
_open_file_with_retry,
)
class MockFileBasedDatasource(FileBasedDatasource):
def _write_block(
self, f: "pyarrow.NativeFile", block: BlockAccessor, **writer_args
):
f.write(b"")
@pytest.mark.parametrize("num_rows", [0, 1])
def test_write_preserves_user_directory(num_rows, tmp_path, ray_start_regular_shared):
ds = ray.data.range(num_rows)
path = os.path.join(tmp_path, "test")
os.mkdir(path) # User-created directory
ds.write_datasource(MockFileBasedDatasource(), dataset_uuid=ds._uuid, path=path)
assert os.path.isdir(path)
def test_write_creates_dir(tmp_path, ray_start_regular_shared):
ds = ray.data.range(1)
path = os.path.join(tmp_path, "test")
ds.write_datasource(
MockFileBasedDatasource(), dataset_uuid=ds._uuid, path=path, try_create_dir=True
)
assert os.path.isdir(path)
def test_open_file_with_retry(ray_start_regular_shared):
class FlakyFileOpener:
def __init__(self, max_attempts: int):
self.retry_attempts = 0
self.max_attempts = max_attempts
def open(self):
self.retry_attempts += 1
if self.retry_attempts < self.max_attempts:
raise OSError(
"When creating key x in bucket y: AWS Error SLOW_DOWN during "
"PutObject operation: Please reduce your request rate."
)
return "dummy"
original_max_attempts = OPEN_FILE_MAX_ATTEMPTS
try:
# Test openning file successfully after retries.
opener = FlakyFileOpener(3)
assert _open_file_with_retry("dummy", lambda: opener.open()) == "dummy"
# Test exhausting retries and failed eventually.
ray.data.datasource.file_based_datasource.OPEN_FILE_MAX_ATTEMPTS = 3
opener = FlakyFileOpener(4)
with pytest.raises(OSError):
_open_file_with_retry("dummy", lambda: opener.open())
finally:
ray.data.datasource.file_based_datasource.OPEN_FILE_MAX_ATTEMPTS = (
original_max_attempts
)
if __name__ == "__main__":
import sys
sys.exit(pytest.main(["-v", __file__]))
| true |
84e4004311721ec8e70acb2ac62c795f9de4d324 | Python | ayhong16/Hack2021 | /matching_data.py | UTF-8 | 599 | 2.90625 | 3 | [] | no_license | def matching(ID, current, future):
for x in range(len(future)):
for y in range(len(current)):
if future[x] == current[y]:
print('Match found: Survey ID {:4} currently lives in Survey ID {:4} room'.format(ID[y],ID[x]))
return ''
ID = [1234,4321,9876,8236,7284]
current = ['H231','G487','H897','A827','N827']
future = ['G487','N827','H231','A827','H897']
print(matching(ID, current, future))
'''
Instead of printing the Survey ID numbers for the matches, it would print a
hyperlink leading to the pictures that the other student posted.
'''
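A minimal sketch of that idea, assuming each survey ID maps to a URL for the pictures the other student posted — the URL scheme below is invented for illustration, not part of the project:

```python
def match_link(survey_id, base_url="https://example.com/photos"):
    # Hypothetical URL scheme; the real site/route is not specified above.
    return "{}/{}".format(base_url, survey_id)

assert match_link(1234) == "https://example.com/photos/1234"
```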
| true |
db9250bb55b5eb73514bf44dfeb1024635f7a8df | Python | wqtwjt1996/UHT | /network/uht_net.py | UTF-8 | 2,980 | 2.640625 | 3 | [] | no_license | import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
from network.resnet import ResNet50
def init_weights(modules):
for m in modules:
if isinstance(m, nn.Conv2d):
init.xavier_uniform_(m.weight.data)
if m.bias is not None:
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
m.weight.data.normal_(0, 0.01)
m.bias.data.zero_()
class upsample(nn.Module):
def __init__(self, input_1, input_2, output):
super(upsample, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(input_1 + input_2, input_2, kernel_size=1),
nn.BatchNorm2d(input_2),
nn.ReLU(inplace=True),
nn.Conv2d(input_2, output, kernel_size=3, padding=1),
nn.BatchNorm2d(output),
nn.ReLU(inplace=True)
)
def forward(self, x):
x = self.conv(x)
return x
class UHT_Net(nn.Module):
def __init__(self):
super(UHT_Net, self).__init__()
self.basenet = ResNet50()
self.upconv1 = upsample(2048, 1024, 512)
self.upconv2 = upsample(512, 512, 256)
self.upconv3 = upsample(256, 256, 128)
self.upconv4 = upsample(128, 64, 32)
self.biggen = nn.Sequential(
nn.Upsample(scale_factor=4, mode='bilinear')
)
self.conv_cls = nn.Sequential(
nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(16, 16, kernel_size=1), nn.ReLU(inplace=True),
nn.Conv2d(16, 1, kernel_size=1),
)
init_weights(self.upconv1.modules())
init_weights(self.upconv2.modules())
init_weights(self.upconv3.modules())
init_weights(self.upconv4.modules())
init_weights(self.conv_cls.modules())
def forward(self, x):
FM = self.basenet(x)
y1 = F.interpolate(FM[4], size=FM[3].size()[2:], mode='bilinear')
y1 = torch.cat([y1, FM[3]], dim=1)
y1 = self.upconv1(y1)
y2 = F.interpolate(y1, size=FM[2].size()[2:], mode='bilinear')
y2 = torch.cat([y2, FM[2]], dim=1)
y2 = self.upconv2(y2)
y3 = F.interpolate(y2, size=FM[1].size()[2:], mode='bilinear')
y3 = torch.cat([y3, FM[1]], dim=1)
y3 = self.upconv3(y3)
y4 = F.interpolate(y3, size=FM[0].size()[2:], mode='bilinear')
y4 = torch.cat([y4, FM[0]], dim=1)
y_final = self.upconv4(y4)
res = self.conv_cls(self.biggen(y_final))
return res
if __name__ == '__main__':
model = UHT_Net().cuda()
output = model(torch.randn(1, 3, 512, 512).cuda())
print(output.shape) | true |
831235253b69c4c5ec36f78cdee5a9332c0ee6d5 | Python | thar/TuentiChallenge2 | /Reto4/karts.py | UTF-8 | 2,108 | 3.140625 | 3 | [] | no_license | #!/usr/bin/python
#Autor: Miguel Angel Julian Aguilar
#e-mail: miguel.a.j82@gmail.com
#Not so proud of this one
import sys
lines=sys.stdin.readlines()
try:
numero_de_casos=int(lines.pop(0).strip())
except:
print 'Error: No se pudieron obtener el numero de casos'
exit()
for caso in range(numero_de_casos):
pista=lines.pop(0).strip()
grupos=lines.pop(0).strip()
pistaDatos=pista.split()
grupos=grupos.split()
races=int(pistaDatos[0])
karts=int(pistaDatos[1])
groupsN=int(pistaDatos[2])
litros=0
gruposEnCarrera=[]
gruposDeCarrera=[0]
indice=0
freeKarts=karts
gruposAdded=0
freeKarts1=[]
indicesCarrerasIndex0=[]
while len(gruposDeCarrera)<=races:
if indice==0:
if freeKarts in freeKarts1:
racesBeforeLoop=indicesCarrerasIndex0[freeKarts1.index(freeKarts)]
                if freeKarts != 0:
racesBeforeLoop=racesBeforeLoop-1
gruposDeCarrera.pop(-1)
loopSize=len(gruposDeCarrera)-racesBeforeLoop
loopRaces=gruposDeCarrera[(len(gruposDeCarrera)-loopSize):]
litros=sum(gruposDeCarrera[:racesBeforeLoop])+((races-racesBeforeLoop)/loopSize)*sum(loopRaces)+sum(loopRaces[:(races-racesBeforeLoop)%loopSize])
break
else:
freeKarts1.append(freeKarts)
indicesCarrerasIndex0.append(len(gruposDeCarrera))
if freeKarts>=int(grupos[indice]) and gruposAdded<groupsN:
freeKarts=freeKarts-int(grupos[indice])
gruposDeCarrera[-1]=gruposDeCarrera[-1]+int(grupos[indice])
else:
if indice==0:
litros=(races/len(gruposDeCarrera))*sum(gruposDeCarrera)+sum(gruposDeCarrera[0:races%len(gruposDeCarrera)])
break
gruposDeCarrera.append(int(grupos[indice]))
freeKarts=karts-int(grupos[indice])
gruposAdded=0
indice=(indice+1)%groupsN
gruposAdded=gruposAdded+1
if len(gruposDeCarrera)>races:
litros=sum(gruposDeCarrera[:races])
print litros
| true |
24607224f3b903ca928b1df333c50ba9409ccfd3 | Python | damccorm/DistributedGraphProcessingSystem | /vertex_cover/vcWorker.py | UTF-8 | 2,129 | 3 | 3 | [] | no_license | """
Worker node with purpose of computing a 2x approximation vertex cover.
Each vertex should initially have 0 as its value.
All code created by Daniel McCormick.
"""
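For reference, the classic 2x-approximation this worker distributes can be sketched sequentially: repeatedly pick any edge with both endpoints uncovered and add both endpoints to the cover. This is a standalone sketch of the underlying idea, not the Pregel-style message protocol implemented below:

```python
def two_approx_vertex_cover(edges):
    """Greedy maximal-matching 2-approximation: take both ends of any uncovered edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Every edge ends up covered; the cover is at most twice the optimum size.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
cover = two_approx_vertex_cover(edges)
assert all(u in cover or v in cover for u, v in edges)
```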
def compute(vertex, input_value, round_number, incoming_messages, send_message_to_vertex):
if round_number == 1:
return vertex, vertex.vertex_number
    if vertex.active:
if round_number % 3 == 1 and vertex.vertex_value == -1:
vertex.active = False
if len(incoming_messages) > 0:
vertex.vertex_value = 1
send_message_to_vertex(vertex, int(incoming_messages[0].sending_vertex), "ADD")
elif round_number%3 == 2:
if len(incoming_messages) > 0:
vertex.vertex_value = 1
vertex.active = False
elif input_value is None:
vertex.active = False
elif int(input_value) == vertex.vertex_number:
for v in vertex.outgoing_edges:
send_message_to_vertex(vertex, v, "MAYBE_ADD")
vertex.vertex_value = -1
else:
if len(incoming_messages) > 0:
send_message_to_vertex(vertex, int(incoming_messages[0].sending_vertex), "MAYBE_ADD")
else:
return vertex, None
return vertex, vertex.vertex_number
def output_function(vertex):
if vertex.vertex_value == 1:
print "Vertex", vertex.vertex_number, "is part of the cover"
else:
print "Vertex", vertex.vertex_number, "is not part of the cover"
if __name__ == '__main__':
if __package__ is None:
import sys
from os import path
sys.path.append( path.dirname( path.dirname( path.abspath(__file__) ) ) )
from worker import Worker
else:
from ..worker import Worker
master_ip_address = None
own_ip_address = "127.0.0.2"
if len(sys.argv) > 1:
master_ip_address = sys.argv[1]
if len(sys.argv) > 2:
own_ip_address = sys.argv[2]
compute_lambda = lambda vertex, input_value, round_number, incoming_messages, send_message_to_vertex: compute(vertex, input_value, round_number, incoming_messages, send_message_to_vertex)
output_lambda = lambda vertex: output_function(vertex)
worker = Worker(master_ip_address, own_ip_address, compute_lambda, output_lambda)
else:
print "ERROR, must add the address of the master as an argument"
| true |
211873bd9ab66997cb5ab4b8095926a3971cc87b | Python | sealot/yaoyingip | /getip.py | UTF-8 | 633 | 2.640625 | 3 | [] | no_license | import requests
import regex
from os import system
import time
session=requests.session()
response1=session.get("http://ip138.com/")
html1=response1.text
reg1=r"<iframe src=\"(http:\/\/[\S]+)\" [\s\S]+>[\s\S]*<\/iframe>"
pattern1=regex.compile(reg1)
url=pattern1.findall(html1)
response2=session.get(url[0])
html2=response2.text
reg2=r"<title>您的IP地址是:([\S]+)<\/title>"
pattern2=regex.compile(reg2)
ip=pattern2.findall(html2)
with open('ip.txt','w') as f:
f.write(ip[0])
system('git add .')
system('git commit -m %s' % time.strftime("%Y%m%d%H%M%S", time.localtime()))
system('git push origin master')
| true |
b11e75e6e8b02fa127f7150747d72644a6a56c85 | Python | caser789/leetcode | /notes/stone.py | UTF-8 | 4,132 | 3.34375 | 3 | [
"MIT"
] | permissive | class Solution(object):
def lastStoneWeight(self, stones):
"""
:type stones: List[int]
:rtype: int
"""
if not stones:
return 0
pq = MaxPriorityQueue()
for stone in stones:
pq.push(stone)
while len(pq) > 1:
a = pq.pop()
b = pq.pop()
s = abs(a-b)
if s:
pq.push(s)
if not pq: return 0
return pq.pop()
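For comparison, the same stone simulation can lean on the standard library's `heapq` (a min-heap, so weights are stored negated) instead of the hand-rolled MaxPriorityQueue defined next:

```python
import heapq

def last_stone_weight(stones):
    # Negate weights so Python's min-heap behaves like a max-heap.
    heap = [-s for s in stones]
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)   # heaviest stone
        b = -heapq.heappop(heap)   # second heaviest
        if a != b:
            heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0

assert last_stone_weight([2, 7, 4, 1, 8, 1]) == 1
```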
class MaxPriorityQueue(object):
def __init__(self):
self.keys = [None] * 2
self.n = 0
def __len__(self):
"""
>>> q = MaxPriorityQueue()
>>> len(q)
0
>>> q.push(1)
>>> q.push(2)
>>> q.push(3)
>>> len(q)
3
>>> q.pop()
3
>>> len(q)
2
"""
return self.n
def push(self, i):
"""
>>> q = MaxPriorityQueue()
>>> q.push(1)
>>> q.max
1
>>> q.push(3)
>>> q.max
3
>>> q.push(2)
>>> q.max
3
"""
if self.n + 1 == len(self.keys):
self._resize(len(self.keys)*2)
self.n += 1
self.keys[self.n] = i
self._swim(self.n)
def pop(self):
"""
>>> q = MaxPriorityQueue()
>>> q.push(1)
>>> q.push(3)
>>> q.push(2)
>>> q.pop()
3
>>> q.pop()
2
>>> q.pop()
1
>>> q.pop()
Traceback (most recent call last):
...
IndexError: underflow
"""
if not self.n:
raise IndexError('underflow')
keys = self.keys
res = keys[1]
keys[1], keys[self.n] = keys[self.n], keys[1]
keys[self.n] = None
self.n -= 1
self._sink(1)
if self.n and self.n * 4 == len(self.keys) - 1:
            self._resize(len(self.keys) // 2)
return res
@property
def max(self):
"""
>>> q = MaxPriorityQueue()
>>> q.max
Traceback (most recent call last):
...
IndexError: underflow
>>> q = MaxPriorityQueue()
>>> q.keys = [None, 1, 2, 3]
>>> q.n = 3
>>> q.max
1
"""
if not self.n:
raise IndexError('underflow')
return self.keys[1]
def _swim(self, n):
"""
>>> q = MaxPriorityQueue()
>>> q.keys = [None, 1, 2, None, 3]
>>> q.n = 4
>>> q._swim(4)
>>> q.keys
[None, 3, 1, None, 2]
>>> q = MaxPriorityQueue()
>>> q.keys = [None, 2, 1, None, 3]
>>> q.n = 4
>>> q._swim(4)
>>> q.keys
[None, 3, 2, None, 1]
"""
keys = self.keys
        while n > 1 and keys[n // 2] < keys[n]:
            keys[n // 2], keys[n] = keys[n], keys[n // 2]
            n //= 2
def _sink(self, n):
"""
>>> q = MaxPriorityQueue()
>>> q.keys = [None, 1, 2, 3]
>>> q.n = 3
>>> q._sink(1)
>>> q.keys
[None, 3, 2, 1]
>>> q = MaxPriorityQueue()
>>> q.keys = [None, 2, 1, 3]
>>> q.n = 3
>>> q._sink(1)
>>> q.keys
[None, 3, 1, 2]
"""
keys = self.keys
while 2 * n <= self.n:
i = 2 * n
if i < self.n and keys[i+1] > keys[i]:
i = i + 1
if keys[i] <= keys[n]:
break
keys[n], keys[i] = keys[i], keys[n]
n = i
def _resize(self, n):
"""
>>> # 1. test resize up
>>> q = MaxPriorityQueue()
>>> q.keys = [None, 1, 2]
>>> q.n = 2
>>> q._resize(6)
>>> q.keys
[None, 1, 2, None, None, None]
>>> # 2. test resize down
>>> q = MaxPriorityQueue()
>>> q.keys = [None, 1, 2, None, None, None]
>>> q.n = 2
>>> q._resize(3)
>>> q.keys
[None, 1, 2]
"""
tmp = [None] * n
for i in range(self.n+1):
tmp[i] = self.keys[i]
self.keys = tmp
| true |
4dde4ae6a9b9d4abfeb8601d199efc420566546b | Python | c0derzer0/SPFlow | /src/spn/structure/leaves/parametric/Sampling.py | UTF-8 | 2,036 | 2.71875 | 3 | [
"Apache-2.0"
] | permissive | """
Created on April 15, 2018
@author: Alejandro Molina
@author: Antonio Vergari
"""
from spn.algorithms.Sampling import add_leaf_sampling
from spn.structure.leaves.parametric.Parametric import (
Parametric,
Gaussian,
Gamma,
Poisson,
Categorical,
LogNormal,
Geometric,
Exponential,
Bernoulli,
CategoricalDictionary,
)
import numpy as np
from spn.structure.leaves.parametric.utils import get_scipy_obj_params
import logging
logger = logging.getLogger(__name__)
def sample_parametric_node(node, n_samples, data, rand_gen):
assert isinstance(node, Parametric)
assert n_samples > 0
X = None
if (
isinstance(node, Gaussian)
or isinstance(node, Gamma)
or isinstance(node, LogNormal)
or isinstance(node, Poisson)
or isinstance(node, Geometric)
or isinstance(node, Exponential)
or isinstance(node, Bernoulli)
):
scipy_obj, params = get_scipy_obj_params(node)
X = scipy_obj.rvs(size=n_samples, random_state=rand_gen, **params)
elif isinstance(node, Categorical):
X = rand_gen.choice(np.arange(node.k), p=node.p, size=n_samples)
elif isinstance(node, CategoricalDictionary):
vals = []
ps = []
for v, p in node.p.items():
vals.append(v)
ps.append(p)
X = rand_gen.choice(vals, p=ps, size=n_samples)
else:
raise Exception("Node type unknown: " + str(type(node)))
return X
def add_parametric_sampling_support():
add_leaf_sampling(Gaussian, sample_parametric_node)
add_leaf_sampling(Gamma, sample_parametric_node)
add_leaf_sampling(LogNormal, sample_parametric_node)
add_leaf_sampling(Poisson, sample_parametric_node)
add_leaf_sampling(Geometric, sample_parametric_node)
add_leaf_sampling(Exponential, sample_parametric_node)
add_leaf_sampling(Bernoulli, sample_parametric_node)
add_leaf_sampling(Categorical, sample_parametric_node)
add_leaf_sampling(CategoricalDictionary, sample_parametric_node)
| true |
ecbcd6fa583dc6741e6866c299604d6840c2bf92 | Python | NIL-zhuang/IRBL | /src/main/python/laprob/runall.py | UTF-8 | 3,930 | 2.546875 | 3 | [
"MIT"
] | permissive | from tqdm import tqdm
from collections import defaultdict
from numpy import mean
import os
import shutil
import time
from preprocess import CodePreprocess, BugPreprocessor, MethodExtractor
from bb import BugBugSim
from bs import BugSourceSim
from ss import SourceSourceSim
from constants import dependency_matrix, projects_path, alpha, beta, final_score_path, cheat_score_path, proj
from calculate import Calculate, markRate
from edgeCombine import EdgeCombination
from metric import Metric
def printRunTime(func):
def wrapper(*args, **kw):
local_time = time.time()
        result = func(*args, **kw)
        print('current function [%s] run time is %.2f seconds' % (func.__name__, time.time() - local_time))
        return result  # pass the wrapped function's return value through
return wrapper
def cleanFiles():
preprocessDirs = ['bugComponent', 'bugs', 'srcComponent', 'words']
preprocessFiles = ['bugs.json', 'codes.json', 'methods.json', 'final_score.json',
'code2vec.json', 'report2vec.json', 'word_idx.json',
'bb.npy', 'bs.npy', 'BHG.npy', 'F.npy']
for dir in preprocessDirs:
dir = os.path.join(projects_path, dir)
if os.path.exists(dir):
shutil.rmtree(dir)
for file in preprocessFiles:
file = os.path.join(projects_path, file)
if os.path.exists(file):
os.remove(file)
print('Finish removing files')
def preprocessAll():
'''
    Extract components, methods, bug reports and summaries from each source file's AST
'''
CodePreprocess().getFiles()
BugPreprocessor().bugPreprocess()
MethodExtractor().methodExtract()
# @printRunTime
def bbSim(portionK=0.75):
    '''Build the bug-bug similarity matrix'''
bb = BugBugSim()
# matrix = bb.calculateMatrix()
matrix = bb.calculateMatrixRefine()
bb.simplify(matrix, k=portionK)
    print("Finished generating the bb matrix")
# @printRunTime
def bsSim(portionK=0.75):
    '''Bug-source similarity'''
bs = BugSourceSim()
# matrix = bs.calculateMatrix()
matrix = bs.calculateMatrixRefine()
bs.modifyEdge(matrix, k=portionK)
    print("Finished generating the bs matrix")
# @printRunTime
def ssSim():
    '''Source-source similarity'''
ss = SourceSourceSim()
ss.output_result(dependency_matrix)
    print("Finished generating the ss matrix")
def edgeCombination(alphaCal, betaCal):
    '''Construct the BHG matrix'''
edges = EdgeCombination()
edges.normalize()
edges.BHG(a=alphaCal, b=betaCal)
    print("Finished combining the BHG matrix")
def calculate(markRate=0.05, iterations=1000):
    '''Run label propagation and compute the scores'''
cal = Calculate(markLabelRate=markRate, quiet=True)
cal.optimization(iter=iterations)
cal.saveRes()
cal.saveCheatRes()
def evaluate(resPath):
    '''Evaluate'''
metric = Metric(resPath)
res = metric.getRes()
return res
def fullEvaluate(iterTimes):
    '''Sample multiple times and take the average'''
res = defaultdict(list)
cheatRes = defaultdict(list)
for i in tqdm(range(iterTimes), desc='full iterations'):
calculate()
for k, v in evaluate(final_score_path).items():
res[k].append(v)
for k, v in evaluate(cheat_score_path).items():
cheatRes[k].append(v)
print("\n==============={}=================".format(proj))
for k, v in res.items():
print("mean", k, round(float(mean(v)), 4))
print("\n============max {}================".format(proj))
for k, v in res.items():
print("max", k, round(float(max(v)), 4))
print("\n===============cheat {}============".format(proj))
for k, v in cheatRes.items():
print("mean", k, round(float(mean(v)), 4))
print("\n============max {}=================".format(proj))
for k, v in cheatRes.items():
print("max", k, round(float(max(v)), 4))
if __name__ == "__main__":
# preprocessAll()
bbSim()
bsSim()
ssSim()
edgeCombination(alpha, beta)
fullEvaluate(10)
| true |
7390b8339b92c4b19dab918ad2df65c321dad46a | Python | krosn/PTU-Helper | /main.py | UTF-8 | 4,073 | 3.40625 | 3 | [
"MIT"
] | permissive | import json
from level import Level
from location import Location
from pokemon import Pokemon
from typing import Any, Callable, List
pk_json_file = 'data/pokemon.json'
def _confirm(msg: str) -> bool:
print(msg)
confirm = input('Is this correct? [y/yes]').lower()
return confirm in ['y', 'yes', '']
def _prompt_location() -> Location:
loc_name = input('Enter the location of the pokemon: ').lower()
if loc_name not in Location._known_locations:
xp_mult = float(input('Add an xp multiplier for this new location: '))
return Location.add_new_location(loc_name, xp_mult)
else:
return Location.get_from_name(loc_name)
def _prompt_name() -> str:
return input('Enter a name: ')
def _prompt_index(msg: str, iterable: List[Any]) -> int:
def in_range(index: int):
return 0 <= index < len(iterable)
while True:
try:
index = int(input(msg))
except ValueError:
print('Please enter an integer...')
continue
if in_range(index):
return index
else:
print('Out of range, try again...')
def p_add_xp(pokemons: List[Pokemon]) -> None:
try:
xp = int(input('Enter the amount of xp earned: '))
xp = max(xp, 0)
except ValueError:
print('Invalid xp amount. Please specify an integer')
return
if not _confirm(f'Will add {xp} xp'):
return
for pokemon in pokemons:
levelup = pokemon.add_xp(xp)
if levelup:
print(f'{pokemon.name} leveled up to lvl {pokemon.level.number}')
def p_create_pokemon(pokemons: List[Pokemon]) -> None:
try:
name = _prompt_name()
location = _prompt_location()
xp = int(input('Enter the starting xp of the pokemon: '))
level = Level(xp)
except:
print('Failed to create pokemon')
return
if _confirm(f'Will create {name}, with {xp} xp, located in {location.name}'):
pokemon = Pokemon(name, location, level)
pokemons.append(pokemon)
print(f'Created {pokemon}')
def p_list_pokemon(pokemons: List[Pokemon]) -> None:
for idx, pokemon in enumerate(pokemons):
print(f'{idx}) {pokemon}')
def p_move_pokemon(pokemons: List[Pokemon]) -> None:
index = _prompt_index('Enter the pokemon # you wish to move: ', pokemons)
location = _prompt_location()
if _confirm(f'Will move {pokemons[index].name} to {location.name}'):
pokemons[index].location = location
def p_remove_pokemon(pokemons: List[Pokemon]) -> None:
index = _prompt_index('Enter the pokemon # you wish to remove: ', pokemons)
    if _confirm(f'Will delete {pokemons[index].name}. THIS IS PERMANENT.'):
del pokemons[index]
def p_rename_pokemon(pokemons: List[Pokemon]) -> None:
index = _prompt_index('Enter the pokemon # you wish to rename: ', pokemons)
name = _prompt_name()
if _confirm(f'Will rename {pokemons[index].name} to {name}'):
pokemons[index].name = name
def p_quit(pokemons: List[Pokemon]) -> None:
print('Saving...')
try:
Pokemon.save_pokemon(pokemons, pk_json_file)
Location.save()
except Exception as ex:
print(f'Failed to save: {ex}')
finally:
print('Exiting...')
exit(0)
options = {
0: ('List pokemon', p_list_pokemon),
1: ('Add xp', p_add_xp),
2: ('Create a pokemon', p_create_pokemon),
3: ('Move a pokemon', p_move_pokemon),
4: ('Rename a pokemon', p_rename_pokemon),
5: ('Save and Quit', p_quit)
}
def prompt(pokemons: List[Pokemon]) -> None:
print()
print('Options:')
for index in options:
print(f'{index}) {options[index][0]}')
choice = _prompt_index('', options)
print()
options[choice][1](pokemons)
if __name__ == "__main__":
pokemons = Pokemon.load_pokemon(pk_json_file)
print(f'Loaded the following pokemon from {pk_json_file}')
for pokemon in pokemons:
print(pokemon)
while True:
prompt(pokemons) | true |
0eaf6871f4951170ec9b85fce62de9bd6d6b7de1 | Python | lzjwlt/wake_online | /server/app.py | UTF-8 | 1,592 | 2.8125 | 3 | [] | no_license | from flask import Flask, request
import re
app = Flask(__name__)
class Store:
def __init__(self):
self.__data = []
self.alias = {'lzj1':'00E0701B7792'}
def delete(self,data):
if data in self.__data:
self.__data.remove(data)
def add(self, data):
if data not in self.__data:
self.__data.append(data)
def get(self):
return self.__data
def isMac(str):
r = re.match("[A-Fa-f0-9]{12}",str)
if r:
return True
return False
store = Store()
@app.route("/wake/alias/<name>")
def wake_name(name):
add = store.alias.get(name)
if add:
wake(add)
return 'OK'
return 'name not exists'
@app.route("/wake/<address>")
def wake(address):
if len(address) == 12:
store.add(address)
return "ok"
return "len is not 12"
@app.route("/delete/<address>")
def delete(address):
store.delete(address)
return "delete:"+address
@app.route("/")
@app.route("/<address>", methods=['GET','DELETE', 'PUT'])
def get_wake_list(address=None):
if request.method == 'GET':
data = store.get()
result = ""
for mac in data:
result += mac + "\n"
return result
if not isMac(address):
return 'fail'
if request.method == 'PUT':
if address is None:
return 'fail'
store.add(address)
return 'OK'
if request.method == 'DELETE' :
store.delete(address)
return 'Del'+address
if __name__ == '__main__':
app.debug = True
app.run(host='0.0.0.0')
| true |
3b6e6ae70843fdef493d60803f0759f927d7f8d9 | Python | thomasjhuang/altair_saver | /altair_saver/tests/test_core.py | UTF-8 | 3,451 | 2.5625 | 3 | [
"BSD-3-Clause"
] | permissive | import io
import json
from typing import List, Union, Type
import altair as alt
import pandas as pd
import pytest
from altair_saver import (
save,
render,
BasicSaver,
HTMLSaver,
NodeSaver,
Saver,
SeleniumSaver,
)
from altair_saver._utils import JSONDict, mimetype_to_fmt
FORMATS = ["html", "pdf", "png", "svg", "vega", "vega-lite"]
def check_output(out: Union[str, bytes], fmt: str) -> None:
"""Do basic checks on output to confirm correct type, and non-empty."""
if fmt in ["png", "pdf"]:
assert isinstance(out, bytes)
elif fmt in ["vega", "vega-lite"]:
assert isinstance(out, str)
dct = json.loads(out)
assert len(dct) > 0
else:
assert isinstance(out, str)
assert len(out) > 0
@pytest.fixture
def chart() -> alt.Chart:
data = pd.DataFrame({"x": range(10), "y": range(10)})
return alt.Chart(data).mark_line().encode(x="x", y="y")
@pytest.fixture
def spec(chart: alt.Chart) -> JSONDict:
return chart.to_dict()
@pytest.mark.parametrize("fmt", FORMATS)
def test_save_chart(chart: alt.TopLevelMixin, fmt: str) -> None:
fp: Union[io.BytesIO, io.StringIO]
if fmt in ["png", "pdf"]:
fp = io.BytesIO()
else:
fp = io.StringIO()
save(chart, fp, fmt=fmt)
check_output(fp.getvalue(), fmt)
@pytest.mark.parametrize("fmt", FORMATS)
def test_save_spec(spec: JSONDict, fmt: str) -> None:
fp: Union[io.BytesIO, io.StringIO]
if fmt in ["png", "pdf"]:
fp = io.BytesIO()
else:
fp = io.StringIO()
save(spec, fp, fmt=fmt)
check_output(fp.getvalue(), fmt)
@pytest.mark.parametrize("method", ["node", "selenium", BasicSaver, HTMLSaver])
@pytest.mark.parametrize("fmt", FORMATS)
def test_save_chart_method(
spec: JSONDict, fmt: str, method: Union[str, Type[Saver]]
) -> None:
fp: Union[io.BytesIO, io.StringIO]
if fmt in ["png", "pdf"]:
fp = io.BytesIO()
else:
fp = io.StringIO()
valid_formats: List[str] = []
if method == "node":
valid_formats = NodeSaver.valid_formats
elif method == "selenium":
valid_formats = SeleniumSaver.valid_formats
elif isinstance(method, type):
valid_formats = method.valid_formats
else:
raise ValueError(f"unrecognized method: {method}")
if fmt not in valid_formats:
with pytest.raises(ValueError):
save(spec, fp, fmt=fmt, method=method)
else:
save(spec, fp, fmt=fmt, method=method)
check_output(fp.getvalue(), fmt)
@pytest.mark.parametrize("inline", [True, False])
def test_html_inline(spec: JSONDict, inline: bool) -> None:
fp = io.StringIO()
save(spec, fp, fmt="html", inline=inline)
html = fp.getvalue()
cdn_url = "https://cdn.jsdelivr.net"
if inline:
assert cdn_url not in html
else:
assert cdn_url in html
def test_render_spec(spec: JSONDict) -> None:
bundle = render(spec, fmts=FORMATS)
assert len(bundle) == len(FORMATS)
for mimetype, content in bundle.items():
fmt = mimetype_to_fmt(mimetype)
if isinstance(content, dict):
check_output(json.dumps(content), fmt)
else:
check_output(content, fmt)
def test_infer_mode(spec: JSONDict) -> None:
vg_spec = render(spec, "vega").popitem()[1]
vl_svg = render(spec, "svg").popitem()[1]
vg_svg = render(vg_spec, "svg").popitem()[1]
assert vl_svg == vg_svg
| true |
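The tests above repeat the same buffer choice in several places: binary formats (`png`, `pdf`) go through `io.BytesIO`, text formats through `io.StringIO`. A tiny illustrative helper captures that convention — `buffer_for` and `BINARY_FMTS` are hypothetical names, not part of altair_saver:

```python
import io

BINARY_FMTS = {"png", "pdf"}  # formats whose save() output is bytes

def buffer_for(fmt):
    """Return an in-memory file-like object matching the output type for fmt."""
    return io.BytesIO() if fmt in BINARY_FMTS else io.StringIO()

for fmt in ["html", "pdf", "png", "svg", "vega", "vega-lite"]:
    print(fmt, type(buffer_for(fmt)).__name__)
```

With such a helper, each parametrized test could build its file-like object in one line instead of an if/else block.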
26980a10a918620bc79ab2ee7916b2f861a75e25 | Python | BeatrizC14/SSA | /features.py | UTF-8 | 6,167 | 2.6875 | 3 | [] | no_license | import cv2
import matplotlib.pyplot as plt
import numpy as np
import math
from numpy.core.fromnumeric import mean
def get_FD_CV(x, y, im):
Cb = round(im[ y , x , 2 ])
Cr = round(im[ y , x , 1 ])
    FD_CV = (19.75 * Cb - 4.46 * Cr) / 255 - 8.18 # TODO: Confirm
return FD_CV
def get_FD_YCV(x, y, imgYCC):
Y = imgYCC[y, x, 0]
Cb = imgYCC[y, x, 2]
Cr = imgYCC[y, x, 1]
return (8.60*Y + 25.50*Cb - 5.01*Cr)/255 - 15.45
def getPatchTexture(im, x, y, patch_size):
imgYCC = cv2.cvtColor(im, cv2.COLOR_BGR2YCR_CB)
half_patch_size = patch_size // 2
texture = 0
imgWidth = im.shape[1]
imgHeight = im.shape[0]
if (x < half_patch_size): x = half_patch_size
if (x >= imgWidth - half_patch_size): x = imgWidth - half_patch_size - 1
if (y < half_patch_size): y = half_patch_size
if (y >= imgHeight - half_patch_size): y = imgHeight - half_patch_size - 1
'''ix = im.shape[1]//2
iy = im.shape[0]//2'''
center_pixel = imgYCC[y, x, 0]/2
dx = -half_patch_size
while dx <= half_patch_size:
dy = -half_patch_size
while dy <= half_patch_size:
if not(dx == 0 and dy == 0):
indx = x + dx
indy = y + dy
value = imgYCC[indy, indx, 0]/2
texture += abs( int(value) - int(center_pixel) )
dy += 1
dx += 1
texture /= (patch_size * patch_size - 1)
return texture
def get_FD_RGB(x, y, imgBGR):
B = imgBGR[y, x, 0]
G = imgBGR[y, x, 1]
R = imgBGR[y, x, 2]
return ((-3.77*R - 1.25*G + 12.40*B)/255 - 4.62)
def get_FD_HSV(x, y, imgBGR):
imgHSV = cv2.cvtColor(imgBGR, cv2.COLOR_BGR2HSV)
H = imgHSV[y, x, 0]
S = imgHSV[y, x, 1]
V = imgHSV[y, x, 2]
return (3.35*H/179 + 2.55*S/255 + 8.58*V/255 - 7.51)
def get_yco(y, img):
return (y/img.shape[0])
def patch_mean(imgYCC, x, y, patch_size):
half_patch_size = patch_size // 2
imgWidth = imgYCC.shape[1]
imgHeight = imgYCC.shape[0]
    # The x/y correction below is also done in get_PSD, so if patch_mean is called from inside that function, comment out this part
if (x < half_patch_size): x = half_patch_size
if (x >= imgWidth - half_patch_size): x = imgWidth - half_patch_size - 1
if (y < half_patch_size): y = half_patch_size
if (y >= imgHeight - half_patch_size): y = imgHeight - half_patch_size - 1
###
patch = imgYCC[y-half_patch_size:y+half_patch_size, x-half_patch_size:x+half_patch_size, 0]
'''
mean = 0
dx = -half_patch_size
while dx <= half_patch_size:
dy = -half_patch_size
while dy <= half_patch_size:
indx = x + dx
indy = y + dy
value = imgYCC[indy, indx, 0]/2
mean += int(value);
dy += 1
dx += 1
mean /= (patch_size * patch_size)'''
return np.mean(patch)
def get_PSD(imgYCC, x, y, patch_size):
half_patch_size = patch_size // 2
imgWidth = imgYCC.shape[1]
imgHeight = imgYCC.shape[0]
# x and y Correction
if (x < half_patch_size): x = half_patch_size
if (x >= imgWidth - half_patch_size): x = imgWidth - half_patch_size - 1
if (y < half_patch_size): y = half_patch_size
if (y >= imgHeight - half_patch_size): y = imgHeight - half_patch_size - 1
patch = imgYCC[y-half_patch_size:y+half_patch_size, x-half_patch_size:x+half_patch_size, 0]
L = patch.size
'''
s = 0
mean = patch_mean(imgYCC, x, y, patch_size)
dx = -half_patch_size
while dx <= half_patch_size:
dy = -half_patch_size
while dy <= half_patch_size:
indx = x + dx
indy = y + dy
value = imgYCC[indy, indx, 0]/2
s += (value - mean)**2
dy += 1
dx += 1
s = math.sqrt( s/(patch_size * patch_size) )'''
return np.sqrt((1/L)*np.sum((patch - np.mean(patch))**2))
def get_uniformity(x, y, im, patch_size):
half_patch_size = patch_size // 2
imgWidth = im.shape[1]
imgHeight = im.shape[0]
if (x < half_patch_size): x = half_patch_size
if (x >= imgWidth - half_patch_size): x = imgWidth - half_patch_size - 1
if (y < half_patch_size): y = half_patch_size
if (y >= imgHeight - half_patch_size): y = imgHeight - half_patch_size - 1
sqr = (x - half_patch_size, x + half_patch_size, y - half_patch_size, y + half_patch_size)
patch = im[sqr[2]:sqr[3], sqr[0]:sqr[1], 0]/255
bins = 10
hist, _ = np.histogram(patch, bins, (0, 1))
# visualize intensity histogram
'''n, edges, _ = plt.hist(patch.flatten(), bins, (0, 1))
plt.show()'''
p = hist/(patch_size**2)
u = np.sum(p**2)
return u
def get_gradient(img, x, y):
if (x >= 0 and x < img.shape[1] and y >= 0 and y < img.shape[0]):
if x > 0:
Y1 = img[ y, x-1, 0]/2
else:
Y1 = img[ y, 0, 0]/2
if x < (img.shape[1] - 1):
Y2 = img[ y, x+1, 0]/2
else:
Y2 = img[ y, img.shape[1]-1, 0]/2
dx = abs(round(Y2) - round(Y1))
if y > 0:
Y1 = img[ y-1, x, 0]/2
else:
Y1 = img[ 0, x, 0]/2
if y < (img.shape[0] -1):
Y2 = img[ y+1, x, 0]/2
else:
Y2 = img[ img.shape[0] -1, x, 0]/2
dy = abs(round(Y2) - round(Y1))
grad = dx + dy
return grad
def get_grayness(im, x, y):
Cb = im[ y , x , 2 ]
Cr = im[ y , x , 1 ]
g = (Cb/255 - 0.5)**2 + (Cr/255 - 0.5)**2
return g
if __name__ == "__main__":
img = cv2.imread('../imag.jpg')
#img = cv2.imread('imag.jpg')
imgYCC = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)
height = imgYCC.shape[0]
width = imgYCC.shape[1]
patch_size = 10
'''FD_YCC = get_FD_YCV(0, 0, imgYCC)
print(FD_YCC)'''
for y in range(0, height):
for x in range(0, width):
#m=getPatchTexture(img, x, y, 10)
#m=get_PSD(imgYCC, x, y, patch_size)
m=get_grayness(img, x, y)
print(m)
| true |
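`get_PSD` above keeps a commented-out loop next to its vectorized NumPy return value; both implement the patch standard deviation sqrt((1/L) * sum((x - mean)^2)). As a dependency-free sketch (not the project's code), the same formula in pure Python makes it easy to sanity-check either version:

```python
import math

def patch_std(patch):
    """Standard deviation of a 2D patch: sqrt((1/L) * sum((x - mean)^2))."""
    values = [v for row in patch for v in row]
    l = len(values)
    mean = sum(values) / l
    return math.sqrt(sum((v - mean) ** 2 for v in values) / l)

patch = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
# mean = 5, sum of squared deviations = 60, so std = sqrt(60/9)
print(round(patch_std(patch), 4))
```

The result agrees with `np.sqrt((1/L) * np.sum((patch - np.mean(patch))**2))` from the file above.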
afe87767ab6a589bd228a11126a6c07d8fc15212 | Python | 14point4kbps/tankbot | /tankPython/ps3botcontrol.py | UTF-8 | 2,444 | 2.9375 | 3 | [] | no_license | #!/usr/bin/python2.7
'''
ps3 controller for serial com using xbee to control tank robot.
'''
import serial
import pygame, sys, time
from pygame.locals import *
pygame.init()
pygame.joystick.init()
joystick = pygame.joystick.Joystick(0)
joystick.init()
screen = pygame.display.set_mode((400,300))
pygame.display.set_caption('Hello World')
ser = serial.Serial('/dev/ttyUSB0',9600) #set to your xbee serial port
interval = 0.01
dir = "x"
# get count of joysticks=1, axes=27, buttons=19 for DualShock 3
joystick_count = pygame.joystick.get_count()
numaxes = joystick.get_numaxes()
numbuttons = joystick.get_numbuttons()
loopQuit = False
while loopQuit == False:
ser.write('%s\n' % dir)
# test joystick axes
# 1:-1 = left forward 1:0.2 = left reverse
# 3:-1 = right forward, 3:0.2 = right reverse
# 1:0 and 3:0 = stop for both
if joystick.get_axis(1) < 0 and joystick.get_axis(3) < 0:
dir = "w"
print("forward")
elif joystick.get_axis(1) > 0 and joystick.get_axis(3) > 0:
dir = "s"
print("reverse")
elif joystick.get_axis(1) < 0 and joystick.get_axis(3) > 0:
dir = "d"
print("right turn")
elif joystick.get_axis(1) > 0 and joystick.get_axis(3) < 0:
dir = "a"
print("left turn")
elif joystick.get_axis(1) == 0 and joystick.get_axis(3) < 0:
dir = "q"
print("slow right turn")
elif joystick.get_axis(1) < 0 and joystick.get_axis(3) == 0:
dir = "e"
print("slow left turn")
#elif joystick.get_axis(1) > 0 and joystick.get_axis(3) == 0:
# dir = "c"
# print("rev slow right turn")
#elif joystick.get_axis(1) == 0 and joystick.get_axis(3) < 0:
# dir = "z"
# print("rev slow left turn")
elif joystick.get_axis(1) == 0 and joystick.get_axis(3) == 0:
dir = "x"
print("stopped")
#test controller buttons
#outstr = ""
#for i in range (0,numbuttons):
# button = joystick.get_button(i)
# outstr = outstr + str(i) + ";" + str(button) + "|"
#print(outstr)
for event in pygame.event.get():
if event.type == QUIT:
loopQuit = True
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
loopQuit = True
pygame.display.update()
time.sleep(interval)
pygame.quit()
sys.exit()
| true |
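The if/elif chain above maps the two stick axes to single-character drive commands. That mapping can be factored into a pure function that is easy to test without pygame or a serial port — `axes_to_command` is a hypothetical helper name, sketched under the same thresholds used above (negative axis value = stick pushed forward):

```python
def axes_to_command(left, right):
    """Map the two stick axis values to a drive command; 'x' means stop."""
    if left < 0 and right < 0:
        return "w"   # forward
    if left > 0 and right > 0:
        return "s"   # reverse
    if left < 0 and right > 0:
        return "d"   # right turn
    if left > 0 and right < 0:
        return "a"   # left turn
    if left == 0 and right < 0:
        return "q"   # slow right turn
    if left < 0 and right == 0:
        return "e"   # slow left turn
    return "x"       # stopped

print(axes_to_command(-1.0, -1.0))  # w
print(axes_to_command(0.0, 0.0))    # x
```

The main loop would then reduce to `dir = axes_to_command(joystick.get_axis(1), joystick.get_axis(3))`.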
47abc79b70f40212edb13921da2c952fc631d5f8 | Python | chaerui7967/K_Digital_Training | /Python_KD_basic/Function/function_EX.py | UTF-8 | 4,992 | 3.828125 | 4 | [] | no_license | # Read a product price and order quantity; print the order total: order()
def order():
    a=eval(input('Enter product price: '))
    b=eval(input('Enter order quantity: '))
print('---------------')
return a,b,a*b
aaa=order()
print(f'Product price: {aaa[0]} won\nOrder quantity: {aaa[1]} item(s)\nOrder total: {aaa[2]} won')
# Read a name and age; return them in dictionary form
def myInfo():
info=dict()
    name, age = input('Enter name, age (separated by ","): ').split(',')
info['name'] = name
info['age'] = int(age)
return info
infoD = myInfo()
print(infoD)
# When the key is not known
print(infoD[list(infoD.keys())[0]])
for key, value in infoD.items():
print(key,':',value)
for key in infoD.keys():
print(key,':',infoD[key])
# Define and call functions that perform the four basic arithmetic operations
#add, sub, mul, div
def add(x,y): return x+y
def sub(x,y): return x-y
def mul(x,y): return x*y
def div(x,y): return x/y
x,y = int(input('Enter number 1: ')), int(input('Enter number 2: '))
print(f'{x} + {y} = {add(x,y)}\n{x} - {y} = {sub(x,y)}'
f'\n{x} * {y} = {mul(x,y)}\n{x} / {y} = {div(x,y)}')
# order(): pass the product price and order quantity; compute and return the order total, discount, and amount due
def order(a,b):
cho, hal = a * b, 0
if cho >= 100000:
hal = cho*0.1
elif cho >= 50000:
hal = cho*0.05
gi = cho - hal
return cho, hal, gi
for i in range(3):
    price, amount = int(input('Enter product price: ')), int(input('Enter order quantity: '))
print('-'*20)
qty, discount, total = order(price, amount)
    print(f'Order total: {qty:,} won\nDiscount: {int(discount):,} won\nAmount due: {int(total):,} won\n')
# Right alignment: format().rjust(width)
orders = order(price, amount)  # use a new name so the order() function is not shadowed
print('%s' %format(orders[0],',').rjust(10))
# Using a dictionary
def order(a,b):
cho, hal = a * b, 0
if cho >= 100000:
hal = cho*0.1
elif cho >= 50000:
hal = cho*0.05
gi = cho - hal
dic = {'price':a,'qty':b,'amount':cho,'discount':hal,'total':gi}
return dic
price, amount = int(input('Enter product price: ')), int(input('Enter order quantity: '))
print('-'*20)
orders = order(price, amount)
print(orders)
for i in orders.keys():
print(f'{i} : {orders[i]}')
# Practice problem
def sub(x,y):
global a
a=7
x,y = y,x
b=3
print(a,b,x,y)
a,b,x,y = 1,2,3,4
sub(x,y)
print(a,b,x,y)
# A list modified inside a function stays modified: the reference (address) is the same
def showList(mylist):
mylist[0] = 100
print(mylist)
mylist = [1,2,3,4]
showList(mylist)
print(mylist)
# Practice
def getProduct(prdList):
name = prdList['name']
price = prdList['price']
return {'name':name,'price':price}
productL = [{'name' : 'laptop','price':123000, 'maker': 'LG'},
            {'name' : 'note','price':111000, 'maker': 'LG'},
            {'name': 'book', 'price': 222000, 'maker': 'LG'}]
for pro in productL:
prdInfo = getProduct(pro)
print(prdInfo)
# Define a factorial function
# n! = n*(n-1)*...
def factorial(n):
a=n
for i in range(a-1,0,-1):
a*=i
return a
print(factorial(4))
# Alternative: a recursive function
def factorial(n):
return n*factorial(n-1) if n>1 else 1
# Exercise: use filter to return the even numbers
def even(x): return x%2==0
result = list(filter(even, [2,3,4,5,6,7]))
print(result)
# Alternative
def isEven(inputData):
if isinstance(inputData, list) or isinstance(inputData, tuple):
result = filter(lambda x: x%2 == 0, inputData)
if isinstance(inputData, list): result = list(result)
else: result = tuple(result)
return result
else:
raise Exception
# TEST
from random import randint
testData = [randint(0, 100) for i in range(10)]
print(f"Test data created: {testData}\nFiltered result: {isEven(testData)}")
# Recursion practice
aa = int(input('Enter a number: '))
def sun(aa):
print(aa, end=" ")
if aa>0:
return sun(aa-1)
else:
return
sun(aa)
# Another way
def count(num):
if num:
print(num, end=' ')
return count(num-1)
count(49)
a=10
def selfcall():
global a
a-=1
print('ha', end='')
if a < 1:
return
return selfcall()
# Add the elements at the same index of two lists and output a single list
# 1. Definition with def
listA = [1,2,3,4]
listB = [10,20,30,40]
def result1(a,b):
newL = []
for i in range(len(a)):
newL.append(a[i]+b[i])
return newL
print(result1(listA,listB))
# Alternative
def hap(a,b): return [x+y for x, y in zip(a,b)]
# 2. Definition with a lambda expression
print(list(map(lambda x,y : x+y,listA,listB)))
## Alternative
from random import randint
list1 = [randint(1,10) for i in range(4)]
list2 = [randint(11,40) for i in range(4)]
def addList(list1, list2):
resultList = list()
for i, j in zip(list1, list2): resultList.append(i+j)
return resultList
# OR
addList = lambda x,y: [i+j for i,j in zip(x,y)]
addList(list1, list2) | true |
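The factorial section above defines the function twice — once with a loop, once recursively. A quick sketch confirming the two formulations agree on small inputs:

```python
def fact_iter(n):
    """Iterative factorial: multiply 2..n into an accumulator."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def fact_rec(n):
    """Recursive factorial, as in the one-liner above."""
    return n * fact_rec(n - 1) if n > 1 else 1

for n in range(8):
    assert fact_iter(n) == fact_rec(n)
print(fact_iter(5))  # 120
```

Both return 1 for n = 0 and n = 1, matching the mathematical convention 0! = 1.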
ce379caa2f1b20663956d20ac19403a92ffa8c60 | Python | korobopolly/OpenCV | /ch07/threshold1.py | UTF-8 | 388 | 2.53125 | 3 | [] | no_license | import sys
import numpy as np
import cv2
src = cv2.imread('cells.png', cv2.IMREAD_GRAYSCALE)
if src is None:
print('Image load failed!')
sys.exit()
_, dst1 = cv2.threshold(src, 100, 255, cv2.THRESH_BINARY)
_, dst2 = cv2.threshold(src, 210, 255, cv2.THRESH_BINARY)
cv2.imshow('src', src)
cv2.imshow('dst1', dst1)
cv2.imshow('dst2', dst2)
cv2.waitKey()
cv2.destroyAllWindows()
| true |
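`cv2.threshold` with `THRESH_BINARY` sets a pixel to `maxval` when its value is strictly greater than the threshold, and to 0 otherwise. A dependency-free sketch of that rule on plain nested lists (illustrative only, not OpenCV's implementation):

```python
def threshold_binary(pixels, thresh, maxval=255):
    """Per-pixel THRESH_BINARY: maxval if value > thresh else 0."""
    return [[maxval if v > thresh else 0 for v in row] for row in pixels]

img = [[90, 100, 101],
       [209, 210, 255]]
print(threshold_binary(img, 100))  # [[0, 0, 255], [255, 255, 255]]
print(threshold_binary(img, 210))  # [[0, 0, 0], [0, 0, 255]]
```

This mirrors why `dst1` (threshold 100) keeps far more white pixels than `dst2` (threshold 210) in the script above.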
2f4d54532e4205ba188f370e7beb986ee0d00bf2 | Python | marscher/openpathsampling | /openpathsampling/storage/storage.py | UTF-8 | 16,708 | 2.515625 | 3 | [] | no_license | """
Created on 06.07.2014
@author: JDC Chodera
@author: JH Prinz
"""
import logging
logger = logging.getLogger(__name__)
init_log = logging.getLogger('openpathsampling.initialization')
import openpathsampling as paths
import simtk.unit as u
from openpathsampling.netcdfplus import NetCDFPlus
from openpathsampling.netcdfplus import WeakLRUCache, WeakValueCache
# =============================================================================================
# OPS SPECIFIC STORAGE
# =============================================================================================
class Storage(NetCDFPlus):
"""
A netCDF4 wrapper to store trajectories based on snapshots of an OpenMM
simulation. This allows effective storage of shooting trajectories
"""
def get_unit(self, dimension):
"""
Return a simtk.Unit instance from the unit_system the is of the specified dimension, e.g. length, time
"""
return u.Unit({self.unit_system.base_units[u.BaseDimension(dimension)]: 1.0})
@property
def template(self):
"""
Return the template snapshot from the storage
Returns
-------
Snapshot
the initial snapshot
"""
if self._template is None:
self._template = self.snapshots.load(int(self.variables['template_idx'][0]))
return self._template
def _setup_class(self):
super(Storage, self)._setup_class()
# use MD units
self.dimension_units = {
'length': u.nanometers,
'velocity': u.nanometers / u.picoseconds,
'energy': u.kilojoules_per_mole
}
def clone(self, filename, subset):
"""
Creates a copy of the netCDF file and allows to reduce the used atoms.
Notes
-----
This is mostly used to remove water but keep the data intact.
"""
storage2 = Storage(filename=filename, template=self.template.subset(subset), mode='w')
# Copy all configurations and momenta to new file in reduced form
for obj in self.configurations:
storage2.configurations.save(obj.copy(subset=subset), idx=self.configurations.index[obj])
for obj in self.momenta:
storage2.momenta.save(obj.copy(subset=subset), idx=self.momenta.index[obj])
        # All others should be copied one to one. We do this explicitly although we could just copy all
# and exclude configurations and momenta, but this seems cleaner
for storage_name in [
'pathmovers', 'topologies', 'networks', 'details', 'trajectories',
'shootingpointselectors', 'engines', 'volumes',
'samplesets', 'ensembles', 'transitions', 'steps', 'pathmovechanges',
'samples', 'snapshots', 'pathsimulators', 'cvs'
]:
self.clone_storage(storage_name, storage2)
storage2.close()
# TODO: Need to copy cvs without caches!
def clone_empty(self, filename):
"""
Creates a copy of the netCDF file and replicates only the static parts which I consider
ensembles, volumes, engines, path movers, shooting point selectors. We do not need to
reconstruct collective variables since these need to be created again completely and then
the necessary arrays in the file will be created automatically anyway.
Notes
-----
This is mostly used to restart with a fresh file. Same setup, no results.
"""
storage2 = Storage(filename=filename, template=self.template, mode='w')
for storage_name in [
'pathmovers', 'topologies', 'networks',
'shootingpointselectors', 'engines', 'volumes',
'ensembles', 'transitions', 'pathsimulators'
]:
self.clone_storage(storage_name, storage2)
storage2.close()
@property
def n_atoms(self):
return self.topology.n_atoms
@property
def n_spatial(self):
return self.topology.n_spatial
def __init__(self, filename, mode=None,
template=None, units=None):
"""
        Create a netcdfplus storage for OPS Objects
Parameters
----------
filename : string
filename of the netcdf file to be used or created
mode : string, default: None
the mode of file creation, one of 'w' (write), 'a' (append) or
None, which will append any existing files.
template : openpathsampling.Snapshot
a Snapshot instance that contains a reference to a Topology, the
number of atoms and used units
units : dict of {str : simtk.unit.Unit } or None
representing a dict of string representing a dimension
('length', 'velocity', 'energy') pointing to
the simtk.unit.Unit to be used. If not None overrides the
standard units used
"""
self._template = template
super(Storage, self).__init__(filename, mode, units=units)
def _register_storages(self):
"""
Register all Stores used in the OpenPathSampling Storage
"""
# objects with special storages
self.add('trajectories', paths.storage.TrajectoryStore())
self.add('snapshots', paths.storage.SnapshotStore())
self.add('configurations', paths.storage.ConfigurationStore())
self.add('momenta', paths.storage.MomentumStore())
self.add('samples', paths.storage.SampleStore())
self.add('samplesets', paths.storage.SampleSetStore())
self.add('pathmovechanges', paths.storage.PathMoveChangeStore())
self.add('steps', paths.storage.MCStepStore())
self.add('cvs', paths.storage.ObjectDictStore(paths.CollectiveVariable, paths.Snapshot))
self.collectivevariables = self.cvs
# normal objects
self.add('details', paths.netcdfplus.ObjectStore(paths.Details, has_name=False))
self.add('topologies', paths.netcdfplus.ObjectStore(paths.Topology, has_name=True))
self.add('pathmovers', paths.netcdfplus.ObjectStore(paths.PathMover, has_name=True))
# self.add('shootingpoints'
# paths.storage.ObjectStore(paths.ShootingPoint, has_name=False))
self.add('shootingpointselectors',
paths.netcdfplus.ObjectStore(paths.ShootingPointSelector, has_name=True))
self.add('engines', paths.netcdfplus.ObjectStore(paths.DynamicsEngine, has_name=True))
self.add('pathsimulators',
paths.netcdfplus.ObjectStore(paths.PathSimulator, has_name=True))
self.add('transitions', paths.netcdfplus.ObjectStore(paths.Transition, has_name=True))
self.add('networks',
paths.netcdfplus.ObjectStore(paths.TransitionNetwork, has_name=True))
self.add('schemes',
paths.netcdfplus.ObjectStore(paths.MoveScheme, has_name=True))
# nestable objects
self.add('volumes',
paths.netcdfplus.ObjectStore(paths.Volume, nestable=True, has_name=True))
self.add('ensembles',
paths.netcdfplus.ObjectStore(paths.Ensemble, nestable=True, has_name=True))
# special stores
# self.add('names', paths.storage.NameStore())
def _initialize(self):
# Set global attributes.
setattr(self, 'title', 'OpenPathSampling Storage')
setattr(self, 'ConventionVersion', '0.2')
self.set_caching_mode('default')
template = self._template
if template.topology is not None:
self.topology = template.topology
else:
raise RuntimeError("A Storage needs a template snapshot with a topology")
if 'atom' not in self.dimensions:
self.createDimension('atom', self.topology.n_atoms)
# spatial dimensions
if 'spatial' not in self.dimensions:
self.createDimension('spatial', self.n_spatial)
# update the units for dimensions from the template
self.dimension_units.update(paths.tools.units_from_snapshot(template))
self._init_storages()
# TODO: Might not need to save topology
logger.info("Saving topology")
self.topologies.save(self.topology)
logger.info("Create initial template snapshot")
# Save the initial configuration
self.snapshots.save(template)
self.createVariable('template_idx', 'i4', 'scalar')
self.variables['template_idx'][:] = self.snapshots.index[template]
def _restore(self):
self.set_caching_mode('default')
self._restore_storages()
self.topology = self.topologies[0]
def sync_all(self):
"""
Convenience function to sync `self.cvs` and `self` at once.
Under most circumstances, you want to sync `self.cvs` and `self` at
the same time. This just makes it easier to do that.
"""
self.cvs.sync()
self.sync()
def set_caching_mode(self, mode='default'):
"""
Set default values for all caches
Parameters
----------
        mode : str
One of the following values is allowed 'default', 'production',
'analysis', 'off', 'lowmemory' and 'memtest'
"""
available_cache_sizes = {
'default': self.default_cache_sizes,
'analysis': self.analysis_cache_sizes,
'production': self.production_cache_sizes,
'off': self.no_cache_sizes,
'lowmemory': self.lowmemory_cache_sizes,
'memtest': self.memtest_cache_sizes
}
if mode in available_cache_sizes:
# We need cache sizes as a function. Otherwise we will reuse the same
# caches for each storage and that will cause problems! Lots of...
cache_sizes = available_cache_sizes[mode]()
else:
raise ValueError(
"mode '" + mode + "' is not supported. Try one of " +
str(available_cache_sizes.keys())
)
for store_name, caching in cache_sizes.iteritems():
if hasattr(self, store_name):
store = getattr(self, store_name)
store.set_caching(caching)
@staticmethod
def default_cache_sizes():
"""
Cache sizes for standard sessions for medium production and analysis.
"""
return {
'trajectories': WeakLRUCache(10000),
'snapshots': WeakLRUCache(10000),
'configurations': WeakLRUCache(10000),
'momenta': WeakLRUCache(10000),
'samples': WeakLRUCache(25000),
'samplesets': False,
'cvs': True,
'pathmovers': True,
'shootingpointselectors': True,
'engines': True,
'pathsimulators': True,
'volumes': True,
'ensembles': True,
'pathmovechanges': False,
'transitions': True,
'networks': True,
'details': False,
'steps': WeakLRUCache(1000)
}
@staticmethod
def lowmemory_cache_sizes():
"""
Cache sizes for very low memory
This uses even less caching than production runs. Mostly used for debugging.
"""
return {
'trajectories': WeakLRUCache(10),
'snapshots': WeakLRUCache(100),
'configurations': WeakLRUCache(10),
'momenta': WeakLRUCache(10),
'samples': WeakLRUCache(25),
'samplesets': False,
'cvs': True,
'pathmovers': True,
'shootingpointselectors': True,
'engines': True,
'pathsimulators': True,
'volumes': True,
'ensembles': True,
'pathmovechanges': False,
'transitions': True,
'networks': True,
'details': False,
'steps': WeakLRUCache(10)
}
@staticmethod
def memtest_cache_sizes():
"""
Cache Sizes for memtest debugging sessions
Memtest will cache everything weak to measure if there is some object left in
memory that should have been disposed of.
"""
return {
'trajectories': WeakLRUCache(10),
'snapshots': WeakLRUCache(10),
'configurations': WeakLRUCache(10),
'momenta': WeakLRUCache(10),
'samples': WeakLRUCache(10),
'samplesets': WeakLRUCache(10),
'cvs': WeakLRUCache(10),
'pathmovers': WeakLRUCache(10),
'shootingpointselectors': WeakLRUCache(10),
'engines': WeakLRUCache(10),
'pathsimulators': WeakLRUCache(10),
'volumes': WeakLRUCache(10),
'ensembles': WeakLRUCache(10),
'pathmovechanges': WeakLRUCache(10),
'transitions': WeakLRUCache(10),
'networks': WeakLRUCache(10),
'details': WeakLRUCache(10),
'steps': WeakLRUCache(10)
}
#
@staticmethod
def analysis_cache_sizes():
"""
Cache Sizes for analysis sessions
Analysis caching is very large to allow fast processing
"""
return {
'trajectories': WeakLRUCache(500000),
'snapshots': WeakLRUCache(100000),
'configurations': WeakLRUCache(10000),
'momenta': WeakLRUCache(1000),
'samples': WeakLRUCache(1000000),
'samplesets': WeakLRUCache(100000),
'cvs': True,
'pathmovers': True,
'shootingpointselectors': True,
'engines': True,
'pathsimulators': True,
'volumes': True,
'ensembles': True,
'pathmovechanges': WeakLRUCache(250000),
'transitions': True,
'networks': True,
'details': False,
'steps': WeakLRUCache(50000)
}
@staticmethod
def production_cache_sizes():
"""
Cache Sizes for production runs
Production. No loading assumed, only last 1000 steps and a few other
objects for error testing
"""
return {
'trajectories': WeakLRUCache(100),
'snapshots': WeakLRUCache(100),
'configurations': WeakLRUCache(1000),
'momenta': WeakLRUCache(1000),
'samples': WeakLRUCache(100),
'samplesets': False,
'cvs': False,
'pathmovers': False,
'shootingpointselectors': False,
'engines': False,
'pathsimulators': False,
'volumes': False,
'ensembles': False,
'pathmovechanges': False,
'transitions': False,
'networks': False,
'details': False,
'steps': WeakLRUCache(10)
}
# No caching (so far only CVs internal storage is there)
@staticmethod
def no_cache_sizes():
"""
Set cache sizes to no caching at all.
Notes
-----
This is VERY SLOW and only used for debugging.
"""
return {
'trajectories': False,
'snapshots': False,
'configurations': False,
'momenta': False,
'samples': False,
'samplesets': False,
'cvs': False,
'pathmovers': False,
'shootingpointselectors': False,
'engines': False,
'pathsimulators': False,
'volumes': False,
'ensembles': False,
'pathmovechanges': False,
'transitions': False,
'networks': False,
'details': False,
'steps': False
}
class AnalysisStorage(Storage):
"""
Open a storage in read-only and do caching useful for analysis.
"""
def __init__(self, filename):
"""
Parameters
----------
filename : str
The filename of the storage to be opened
"""
super(AnalysisStorage, self).__init__(
filename=filename,
mode='r'
)
self.set_caching_mode('analysis')
# Let's go caching
AnalysisStorage.cache_for_analysis(self)
@staticmethod
def cache_for_analysis(storage):
storage.samples.cache_all()
storage.samplesets.cache_all()
storage.cvs.cache_all()
storage.volumes.cache_all()
storage.ensembles.cache_all()
storage.pathmovers.cache_all()
storage.pathmovechanges.cache_all()
storage.steps.cache_all()
# storage.trajectories.cache_all()
| true |
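`set_caching_mode` above mostly selects sizes for `WeakLRUCache` instances; the cache implementation itself is imported from `openpathsampling.netcdfplus` and not shown here. As a generic illustration of the eviction policy a bounded LRU cache applies (not the actual `WeakLRUCache` code), here is a minimal sketch built on `collections.OrderedDict`:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal bounded LRU cache; evicts the least recently used entry."""
    def __init__(self, maxsize):
        self.maxsize = maxsize
        self._data = OrderedDict()
    def __setitem__(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # drop the oldest entry
    def __getitem__(self, key):
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]
    def __contains__(self, key):
        return key in self._data

cache = LRUCache(2)
cache["a"] = 1
cache["b"] = 2
_ = cache["a"]       # touch "a" so "b" becomes the oldest
cache["c"] = 3       # evicts "b"
print("b" in cache)  # False
print("a" in cache)  # True
```

The trade-off between the `analysis`, `production`, and `lowmemory` modes above is essentially just the `maxsize` handed to caches like this one.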
31764daf02fffffb04008df0c2271b1a56d1a5a2 | Python | jordyjordy/mma-lab | /Code/Assignment2/test_sift.py | UTF-8 | 842 | 2.546875 | 3 | [] | no_license | import cv2
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import numpy as np
sift = cv2.xfeatures2d.SIFT_create()
im1 = cv2.imread('../../Images/nieuwekerk1.jpg',cv2.IMREAD_GRAYSCALE)
im2 = cv2.imread('../../Images/nieuwekerk2.jpg',cv2.IMREAD_GRAYSCALE)
#keypoints = sift.detect(im,None)
#k_im = cv2.drawKeypoints(im, keypoints, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
kp1, desc1 = sift.detectAndCompute(im1, None)
kp2, desc2 = sift.detectAndCompute(im2, None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
# Match descriptors.
matches = bf.match(desc1,desc2)
# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)
# Draw the first 12 matches.
img3 = cv2.drawMatches(im1,kp1,im2,kp2,matches[:12], None, flags=2)
plt.imshow(img3)
plt.show()
| true |
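`BFMatcher` with `crossCheck=True` keeps a match only when the two descriptors are mutual nearest neighbours, and the script then sorts the matches by distance. A pure-Python sketch of that matching logic on toy 2-D descriptors (real SIFT descriptors are 128-D; this is only an illustration):

```python
def match_cross_check(desc1, desc2):
    """Brute-force matching with cross-check, sorted by distance (i, j, dist)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    def nearest(d, pool):
        return min(range(len(pool)), key=lambda k: dist(d, pool[k]))
    matches = []
    for i, d in enumerate(desc1):
        j = nearest(d, desc2)
        if nearest(desc2[j], desc1) == i:  # mutual nearest neighbours only
            matches.append((i, j, dist(d, desc2[j])))
    return sorted(matches, key=lambda m: m[2])

d1 = [(0.0, 0.0), (5.0, 5.0)]
d2 = [(4.9, 5.1), (0.2, 0.1)]
print(match_cross_check(d1, d2))
```

Sorting by distance, as the script does before `drawMatches`, puts the most confident correspondences first.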
4a1959afde401e056282264846c66b80f41d6419 | Python | xhj2501/Crawler_train | /数据解析/bs4解析基础.py | UTF-8 | 1,107 | 3.0625 | 3 | [] | no_license | from bs4 import BeautifulSoup
# Load the data of the local HTML document into this object
with open('./uestc.html', 'r', encoding='utf-8') as fp:
    soup = BeautifulSoup(fp, 'lxml')
# print(soup.a)  # soup.tagName returns the first tagName tag that appears in the HTML
# print(soup.div)
# print(soup.find('div'))  # find('tagName') is equivalent to soup.div
# print(soup.find('div', class_="title-l hide-line-tag"))  # locate by attribute (class_/id/attr=)
# print(soup.find_all('a'))  # find_all('tagName') returns all matching tags (as a list)
# print(soup.select('.outter'))  # select('some selector (id/class/tag... selector)') returns a list.
# print(soup.select('.outter > .content')[0])  # hierarchical selectors: '>' means one level, ' ' means multiple levels
# Getting the text data inside a tag: soup.a.text/string/get_text()
# text/get_text(): gets all the text content inside a tag
# string: gets only the text content that is a direct child of the tag
# print(soup.select('.outter > .content')[0].text)
# print(soup.a['href'])  # get an attribute value from a tag: soup.a['href']
| true |
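The notes above rely on BeautifulSoup (with the `lxml` parser). For comparison, roughly the same `soup.a['href']` lookup — the `href` of the first `<a>` tag — can be done with only the standard library's `html.parser`. This is an illustrative sketch, not a replacement for bs4:

```python
from html.parser import HTMLParser

class FirstHref(HTMLParser):
    """Collect the href of the first <a> tag, roughly like soup.a['href']."""
    def __init__(self):
        super().__init__()
        self.href = None
    def handle_starttag(self, tag, attrs):
        if tag == "a" and self.href is None:
            self.href = dict(attrs).get("href")

parser = FirstHref()
parser.feed('<div><a href="https://example.org">x</a><a href="/b">b</a></div>')
print(parser.href)  # https://example.org
```

BeautifulSoup's advantage is the tree it builds on top of such events, which is what makes `find_all` and CSS `select` possible.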
a5ca6d9264f75943b97f5a4c60a831c8fbac4685 | Python | mesomagik/python_tetris_game | /map.py | UTF-8 | 3,434 | 3.109375 | 3 | [] | no_license | class Map(object):
def __init__(self, height, width, squareSize):
self.squareSize = squareSize
self.height = height
self.width = width
self.mapMatrix = [[0 for i in range(width)] for j in range(height)]
self.connectedBricks = [[0 for i in range(width)] for j in range(height)]
self.points = 0
def ClearMap(self):
print("\n" * 25)
self.mapMatrix = [[0 for i in range(self.width)] for j in range(self.height)]
for i in range(self.height):
for j in range(self.width):
if self.connectedBricks[i][j] == 1:
self.mapMatrix[i][j] = 1
def PrintMap(self):
for j in range(self.width):
print("XX", end='')
print()
scoreString = "SCORE: " + str(self.points)
print("XX ", end='')
print(scoreString, end='')
for iter in range(self.width - 4):
print(" ", end='')
print("XX")
for j in range(self.width):
print("XX", end='')
print()
for i in range(self.height):
for j in range(self.width):
if self.mapMatrix[i][j] == 0:
print(" ", end='')
else:
print("XX", end='')
print("|")
def AddBrick(self, brick):
for lines in range(len(brick.shape)):
for itemInLine in range(len(brick.shape[lines])):
self.mapMatrix[brick.posWidth + lines][brick.posHeight + itemInLine] = brick.shape[lines][itemInLine]
def AddConnectedBrick(self, brick):
for lines in range(len(brick.shape)):
for itemInLine in range(len(brick.shape[lines])):
if brick.shape[lines][itemInLine] == 1:
self.connectedBricks[brick.posWidth + lines][brick.posHeight + itemInLine] = 1
self.CheckIfLineIsFull()
def CheckCollision(self, brick):
for lines in range(len(brick.shape)):
for itemInLine in range(len(brick.shape[lines])):
if brick.posWidth + len(brick.shape) >= self.height:
return True
elif brick.shape[lines][itemInLine] == 1 and self.mapMatrix[brick.posWidth + lines][brick.posHeight + itemInLine] == 1:
self.UpBrick(brick)
return True
return False
def DropBrick(self, brick):
brick.posWidth += 1
def MoveLeft(self, brick):
if brick.posHeight > 0:
brick.posHeight -= 1
def MoveRight(self, brick):
if brick.posHeight + len(brick.shape[0]) < self.width:
brick.posHeight += 1
def UpBrick(self, brick):
brick.posWidth -= 1
def CheckIfLineIsFull(self):
for line in range(self.height):
counter = 0
for itemInLine in range(self.width):
if self.connectedBricks[line][itemInLine] == 0:
break
counter += 1
if counter == self.width:
self.DeleteLineAndAddPoint(line)
def DeleteLineAndAddPoint(self, lineNumber):
print(len(self.connectedBricks))
del self.connectedBricks[lineNumber]
newList = [[0 for i in range(self.width)]]
newList = newList + self.connectedBricks
self.connectedBricks = newList
print(str(len(self.connectedBricks)))
self.points += 1
| true |
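`CheckIfLineIsFull` and `DeleteLineAndAddPoint` above scan the board and delete one completed row at a time, prepending a fresh empty row. The same idea can be expressed as a pure function over the grid, which is easier to test without the rest of the class (a sketch, not the class's code):

```python
def clear_full_lines(grid, width):
    """Remove full rows, pad new empty rows on top; return (grid, cleared)."""
    kept = [row for row in grid if 0 in row]   # rows with at least one empty cell
    cleared = len(grid) - len(kept)
    new_rows = [[0] * width for _ in range(cleared)]
    return new_rows + kept, cleared

grid = [[0, 1, 0],
        [1, 1, 1],
        [1, 0, 1]]
grid, n = clear_full_lines(grid, 3)
print(n)     # 1
print(grid)  # [[0, 0, 0], [0, 1, 0], [1, 0, 1]]
```

Returning the number of cleared rows would also let the caller award points in one place instead of inside the deletion routine.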
b46a7620179b464c6a7dfc959b604ca0cc631acd | Python | powerhouseofthecell/JPG2XML | /app/models/build/note length/classify_note_length.py | UTF-8 | 1,669 | 3 | 3 | [] | no_license | from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.models import Model, load_model
import numpy as np
from PIL import Image
import os, os.path
# this function should be called when the images have been created and assigned to the tmp/data directory
# *** please also go back and zero index the notes, just to make this whole thing a little bit easier ***
def classify():
# load the model that we will be using to classify the image
# maybe pass in the name of the model to use as an argument?
# or potentially include a default and use that unless you train your own?
model = load_model('model96.h5')
# data to be classified will need to be rescaled
prediction_datagen = ImageDataGenerator(rescale=1./255)
# count how many files are being classified
numOfImages = len([name for name in os.listdir('tmp/data/.') if os.path.isfile('tmp/data/' + name)])
# pull all the data to be classified from the tmp directory
prediction_generator = prediction_datagen.flow_from_directory('tmp', target_size=(150, 150), batch_size=1, class_mode=None, shuffle=False)
# classify the data
results = model.predict_generator(prediction_generator, numOfImages)
return(results.round())
# print the results ***FOR NOW: THIS SHOULD BECOME OBSOLETE SHORTLY
print(classify())
#########
# KEEP IN MIND, THIS REQUIRES THAT A TMP DIRECTORY BE CREATED WITH SUBDIRECTORY DATA IN ORDER FOR THIS TO WORK
# ALSO, THE RESULTS COULD EASILY BE RETURNED IN SUCH A WAY AS TO BE READABLE
# THE INDICES OF THE RESULTS ARRAY CORRESPOND TO THE 'POSITION' OF THE NOTE IN THE MUSICAL PIECE
#########
8d94e9a1a7d050ee74ad28d7eec24ef9fe74a36c | Python | agateblue/lifter | /tests/test_document_backend.py | UTF-8 | 2,299 | 2.71875 | 3 | ["ISC"] | permissive |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import datetime
import unittest
# import mock
BASE_DIR = os.path.abspath(os.path.dirname(__file__))
DATA_PATH = os.path.join(BASE_DIR, 'data', 'log.sample')
from lifter.backends import document
from lifter.models import Model
from lifter import adapters, parsers
class LogEntry(Model):
pass
class Adapter(adapters.RegexAdapter):
regex = '(?P<level>.*) - (?P<date>.*) - (?P<message>.*)'
def clean_date(self, data, value, model, field):
        year, month, day = [int(part) for part in value.split('/')]
return datetime.date(year, month, day)
class TestDocumentBackend(unittest.TestCase):
def setUp(self):
self.store = document.DocumentStore(url='file://' + DATA_PATH)
def test_manager_can_load_objects_from_file(self):
manager = self.store.query(LogEntry, adapter=Adapter())
values = list(manager.all())
self.assertEqual(len(values), 3)
self.assertEqual(values[0].level, 'INFO')
self.assertEqual(values[0].date, datetime.date(2016, 3, 23))
self.assertEqual(values[0].message, 'Something happened')
self.assertEqual(values[1].level, 'ERROR')
self.assertEqual(values[1].date, datetime.date(2016, 3, 23))
self.assertEqual(values[1].message, 'Something BAD happened')
self.assertEqual(values[2].level, 'DEBUG')
self.assertEqual(values[2].date, datetime.date(2016, 3, 21))
self.assertEqual(values[2].message, 'Hello there')
def test_can_filter_data_from_file_backend(self):
manager = self.store.query(LogEntry, adapter=Adapter())
self.assertEqual(manager.all().count(), 3)
self.assertEqual(manager.filter(level='ERROR').count(), 1)
self.assertEqual(manager.filter(level='ERROR').first().message, 'Something BAD happened')
def test_can_use_custom_parser(self):
class DummyParser(parsers.Parser):
def parse(self, content):
return content.split('\n')
parser = DummyParser()
store = document.DocumentStore(url='file://' + DATA_PATH, parser=parser)
        manager = store.query(LogEntry, adapter=Adapter())  # use the store built with the custom parser
values = list(manager.all())
self.assertEqual(len(values), 3)
375c9216a0e6d6a38ec328f73f411cae81b6c04c | Python | Juanragal/ProyectoHolaMundo | /ikeaNamingFurniture.py | UTF-8 | 1,529 | 3.609375 | 4 | [] | no_license |
import random
vocales=["a","e","i","o","u","y","ä","å","ö","ü"]
vocalestilde=["ä","å","ö","ü"]
consonantes=["š","ž","b","c","d","f","j","k","g","h","l","m","n","p","q","r","s","t","v","z","w","x"]
letras=vocales+consonantes
letras.sort()
furnitures=["Mesa comedor","Canapé","Silla","Comoda","Mesita de noche","Mueble baño","Tocador"]
def haytresconsonante(nombre):
if nombre[0] in consonantes and nombre[1] in consonantes and nombre[2]in consonantes:
return True
def haytrevocales(nombre):
if nombre[0] in vocales and nombre[1] in vocales and nombre[2] in vocales:
return True
def dostildes(nombre):
    # True when the name already holds at least two accented vowels
    if len(set(nombre).intersection(vocalestilde)) >= 2:
        return True
def naming(opcion):
n=0
name=random.choice(letras)
while n!=opcion:
if len(name)<3:
name = random.choice(letras)+name
n+=1
elif haytresconsonante(name):
name = random.choice(vocales)+name
n+=1
elif dostildes(name):
name = str(random.choice(vocales))+name
n+=1
elif haytrevocales(name):
name = random.choice(consonantes)+name
n+=1
else:
name = random.choice(letras)+name
n+=1
name=name[::-1]
return name.capitalize()
nom_compuesto=random.choice(range(1,3))
x=0
nombrefinal=random.choice(furnitures)
while x!=nom_compuesto:
nombrefinal=nombrefinal+" "+naming(random.choice(range(3,9)))
x+=1
print(nombrefinal)
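Counting accented vowels, as `dostildes` attempts, is simplest with a set intersection; this is a standalone sketch, not code from the script:

```python
vocalestilde = {"ä", "å", "ö", "ü"}

def accent_count(name):
    # intersecting with the character set counts the distinct accented vowels present
    return len(set(name) & vocalestilde)

print(accent_count("smörgåsbord"))  # 2
print(accent_count("sofa"))         # 0
```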
ab40e66bcbf88394b1d441e0ff0bad5fb112340a | Python | JunctionChao/python_trick | /ContextManger/2_contextmanager.py | UTF-8 | 1,103 | 3.828125 | 4 | [] | no_license |
"""
The @contextmanager decorator makes it much easier to create a context manager.
In a generator decorated with @contextmanager, the yield statement splits the
function body in two: all the code before yield runs when the with block starts
(i.e. when the interpreter calls __enter__), and the code after yield runs when
the with block ends (i.e. when __exit__ is called).
"""
import contextlib
@contextlib.contextmanager
def looking_glass():
import sys
original_write = sys.stdout.write
def reverse_write(text):
original_write(text[::-1])
sys.stdout.write = reverse_write
    yield 'JABBERWOCKY'  # code before yield runs when the with block starts; code after runs when it ends
sys.stdout.write = original_write
if __name__ == '__main__':
    # runs the code before yield, then pauses at the yield
with looking_glass() as what:
print('Alice, Kitty and Snowdrop') # pordwonS dna yttiK ,ecilA
print(what) # YKCOWREBBAJ
    # runs the code after yield, restoring sys.stdout.write
    print(what)  # JABBERWOCKY
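For comparison, the same behavior can be written as a class-based context manager: the code before `yield` corresponds to `__enter__`, the code after it to `__exit__`. This is a sketch, not part of the original file; the stream is injected (a small `Recorder` stand-in instead of `sys.stdout`) to keep it self-contained:

```python
class LookingGlass:
    """Class-based equivalent of the decorated generator."""
    def __init__(self, stream):
        self.stream = stream

    def __enter__(self):
        # before-yield part: swap in a reversing write
        self.original_write = self.stream.write
        self.stream.write = lambda text: self.original_write(text[::-1])
        return 'JABBERWOCKY'

    def __exit__(self, exc_type, exc_value, traceback):
        # after-yield part: restore; returning False propagates exceptions
        self.stream.write = self.original_write
        return False

class Recorder:
    def __init__(self):
        self.chunks = []
    def write(self, text):
        self.chunks.append(text)

rec = Recorder()
with LookingGlass(rec) as what:
    rec.write('Alice')
print(''.join(rec.chunks))  # ecilA
print(what)                 # JABBERWOCKY
```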
f44d201f01eac59333ad68a02dac9c5839260c74 | Python | guest7735/kkutu | /unifox_crawling.py | UTF-8 | 331 | 2.609375 | 3 | [] | no_license |
import requests
from bs4 import BeautifulSoup
res = requests.get("http://unifox.kr/")
res.encoding = 'utf-8'
html = res.text
bs = BeautifulSoup(html, "html.parser")
# print(bs)  # prints the entire page HTML
k = bs.select_one("#pass > div:nth-child(3) > p:nth-child(2)").text
print(list(map(lambda x: x.strip(), k.split())))
fb566de0b6613fe2e3e307d076fb133b4d9197be | Python | bitw1ze/crypto-puzzles | /p44.py | UTF-8 | 1,419 | 2.765625 | 3 | [] | no_license |
#!/usr/bin/env python3.2
import sys
from hashlib import sha1
from helpers import *
from mymath import invmod
from mydsa import sign, generate_keypair, H, Q, Signature, PrivateKey
def recover_nonce():
infile = "p44-input.txt" if len(sys.argv) == 1 else sys.argv[1]
r = 0
s = 0
signed = []
with open(infile) as fh:
for i, line in enumerate(fh.readlines()):
line = line[:-1]
if i % 4 == 0:
message = s2b(line)
elif i % 4 == 1:
s = int(line)
elif i % 4 == 2:
r = int(line)
signed.append(Signature(message, r, s))
found_key = False
    for i, z1 in enumerate(signed):
        for j, z2 in enumerate(signed):
            if i == j or z1.r != z2.r:
                continue
            m1, s1 = H(z1.m), z1.s
            m2, s2 = H(z2.m), z2.s
            k = ((m1 - m2) * invmod((s1 - s2) % Q, Q)) % Q
            x = ((z1.s * k - m1) * invmod(z1.r, Q)) % Q
            key_digest = sha1(bytes(hex(x)[2:], 'utf8')).hexdigest()
            # keep trying pairs until one yields the known key fingerprint
            if key_digest == 'ca8f6f7c66fa362d40760d135b763eb8527d3d52':
                return x
    raise Exception("Failed to find key!")
def main():
x = recover_nonce()
print("Found your private key!")
print(hex(x)[2:])
if __name__ == '__main__':
sys.exit(main())
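The recovery algebra used above can be sanity-checked with toy numbers. All parameters here are made-up tiny values, not the challenge's, and r is fixed by hand rather than computed as (g^k mod p) mod q; `pow(k, -1, q)` stands in for the module's `invmod` and needs Python 3.8+. With a repeated nonce, s1 - s2 = k^-1 (m1 - m2) mod q, so k and then x fall out:

```python
q = 101  # toy subgroup order (prime)
k = 57   # the reused nonce
x = 33   # private key to be "recovered"
r = 88   # identical for both signatures because k repeats

def sign_toy(m):
    # DSA signing equation: s = k^-1 * (m + x*r) mod q
    return (pow(k, -1, q) * (m + x * r)) % q

m1, m2 = 17, 64
s1, s2 = sign_toy(m1), sign_toy(m2)

# recover k from the signature pair, then x from either signature
k_rec = ((m1 - m2) * pow((s1 - s2) % q, -1, q)) % q
x_rec = ((s1 * k_rec - m1) * pow(r, -1, q)) % q
print(k_rec, x_rec)  # 57 33
```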
13f785199e91b5c5a59fef18886f9f84b647c08f | Python | rehanshikkalgarrs15/SMS | /lab6/InverseTransform_variates_exponential.py | UTF-8 | 1,451 | 3.609375 | 4 | [] | no_license |
import random, math
import matplotlib.pyplot as plt
def generateRandomNumbers(N):
global R
'''
this function will generate N random numbers
'''
for i in range(N):
R.append(random.random())
def formula(r):
'''
general formula
Xi = -(1/lambda)*(ln(1-Ri))
'''
global l_lambda
    n1 = -(1 / l_lambda)  # minus sign from Xi = -(1/lambda)*ln(1-Ri)
n2 = math.log(1 - r)
return n1 * n2
def formula1(r):
'''
general formula
Ri(b-a)+a = X
'''
a = 10
b= 20
return r*(b-a) + a
def inverseVariatesForExponentDistribution(R):
'''
this function stores inverse variates for each random number
'''
global X
for i in range(len(R)):
X.append(formula(R[i]))
print("using Exponent distribution")
for i in X:
print(i)
    # graph: X-axis -> random numbers, Y-axis -> random variates
plt.plot(R,X)
plt.show()
def inverseVariatesForUniformDistribution(R):
'''
this function stores inverse variates for each random number
'''
global X1
for i in range(len(R)):
X1.append(formula1(R[i]))
print("\nusing Uniform distribution")
for i in X1:
print(i)
    # graph: X-axis -> random numbers, Y-axis -> random variates
plt.plot(R,X1)
plt.show()
R = [0.1306,0.0422,0.6597,0.7965,0.7696] #random numbers
X = list() #inverse variates ForExponentDistribution
X1 = list() #inverse variates ForUniformDistribution
l_lambda = 0.5
#N = int(input())
#generateRandomNumbers(N) #generates N random numbers
inverseVariatesForExponentDistribution(R)
inverseVariatesForUniformDistribution(R)
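As a standalone sanity check (a sketch, not part of the lab file): applying the exponential CDF F(x) = 1 - e^(-lambda*x) to a variate generated by the inverse transform X = -(1/lambda)*ln(1 - R) must give back the original uniform draw R.

```python
import math
import random

lam = 0.5

def exp_variate(r):
    # inverse of the exponential CDF
    return -(1 / lam) * math.log(1 - r)

def exp_cdf(x):
    return 1 - math.exp(-lam * x)

random.seed(0)
for _ in range(1000):
    r = random.random()
    # round-trip through the CDF recovers the uniform draw
    assert abs(exp_cdf(exp_variate(r)) - r) < 1e-9
```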
fb07b59416fda181f4f7be92134ec9559db4b6ea | Python | mfschmidt/PyGEST | /bin/pgcomp | UTF-8 | 2,009 | 2.921875 | 3 | ["MIT"] | permissive |
#!/usr/bin/env python3
import sys
import os
import argparse
from pygest.algorithms import file_is_equivalent
extensions = ['json', 'tsv', 'log', 'df']
def file_set(filepath):
""" From one path, return three files for results, one for df, and their base. """
d = {}
e = []
if os.path.isfile(filepath):
if "." in filepath:
d['base'] = filepath[:filepath.rfind(".")]
for ext in extensions:
if os.path.isfile(d['base'] + "." + ext):
d[ext] = d['base'] + "." + ext
else:
            e.append("{} exists, but has no extension; should be .df, .tsv, .json, or .log".format(filepath))
else:
e.append("{} does not exist.".format(filepath))
if len(e) > 0:
print(", ".join(e))
return None
return d
def main():
parser, arguments = parse_args()
set_a = file_set(arguments.a)
set_b = file_set(arguments.b)
if set_a is None or set_b is None:
sys.exit(1)
results = {}
for ext in extensions:
if ext in set_a and ext in set_b:
results[ext] = file_is_equivalent(set_a[ext], set_b[ext], arguments.verbose)
    # .get() tolerates extensions that were missing from one of the file sets
    if results.get('df'):
        return "IDENTICAL"
    elif results.get('tsv') and results.get('json') and results.get('log'):
        return "IDENTICAL"
    elif results.get('tsv'):
        return "MATCH"
    else:
        return "MISS"
def parse_args():
""" Grab the command and pass the rest along. """
parser = argparse.ArgumentParser(description="PyGEST comparison tool command-line interface")
parser.add_argument("a",
help="A file from the first set")
parser.add_argument("b",
help="A file from the second set")
parser.add_argument("-v", "--verbose", dest="verbose", default=False, action='store_true',
help="Verbosely describe similarities and differences.")
args = parser.parse_args()
return parser, args
if __name__ == "__main__":
sys.exit(main())
| true |
620a188a2b91c60f87edd99ef9c34e4598fb21f7 | Python | adunStudio/RoguelikeTutorial | /Extras 04: Scrolling maps.py | UTF-8 | 35,842 | 2.59375 | 3 | [] | no_license |
import tcod
import math
import textwrap
import shelve
SCREEN_WIDTH = 80
SCREEN_HEIGHT = 50
WINDOW_TITLE = "Python 3 tcod tutorial"
FULL_SCREEN = False
LIMIT_FPS = 20
MAP_WIDTH = 100
MAP_HEIGHT = 100
CAMERA_WIDTH = 80
CAMERA_HEIGHT = 43
BAR_WIDTH = 20
PANEL_HEIGHT = 7
PANEL_Y = SCREEN_HEIGHT - PANEL_HEIGHT
MSG_X = BAR_WIDTH + 2
MSG_WIDTH = SCREEN_WIDTH - BAR_WIDTH # -2
MSG_HEIGHT = PANEL_HEIGHT - 1
PLAYER_SPEED = 2
PLAYER_ATTACK_SPEED = 5
DEFAULT_SPEED = 8
DEFAULT_ATTACK_SPEED = 20
ROOM_MAX_SIZE = 10
ROOM_MIN_SIZE = 6
MAX_ROOMS = 30
EXPLORE_MODE = False
color_dark_wall = tcod.Color(0, 0, 100)
color_light_wall = tcod.Color(130, 110, 50)
color_dark_ground = tcod.Color(50, 50, 150)
color_light_ground = tcod.Color(200, 180, 50)
FOV_ALGO = 0 #default FOV algorithm
FOV_LIGHT_WALLS = True
TORCH_RADIUS = 7
INVENTORY_WIDTH = 50
HEAL_AMOUNT = 40
LIGHTNING_DAMAGE = 40
LIGHTNING_RANGE = 5
CONFUSE_NUM_TURNS = 10
CONFUSE_RANGE = 8
FIREBALL_DAMAGE = 25
FIREBALL_RADIUS = 3
LEVEL_UP_BASE = 200
LEVEL_UP_FACTOR = 150 # 200 + player.level * 150
LEVEL_SCREEN_WIDTH = 40
CHARACTER_SCREEN_WIDTH = 30
class Fighter:
def __init__(self, hp, defense, power, xp, attack_speed=DEFAULT_ATTACK_SPEED ,death_function=None):
self.base_max_hp = hp
self.hp = hp
self.base_defense = defense
self.base_power = power
self.xp = xp
self.attack_speed = attack_speed
self.death_function = death_function
@property
def power(self):
bonus = sum(equipment.power_bonus for equipment in get_all_equipped(self.owner))
return self.base_power + bonus
@property
def defense(self):
bonus = sum(equipment.defense_bonus for equipment in get_all_equipped(self.owner))
return self.base_defense + bonus
@property
def max_hp(self):
bonus = sum(equipment.max_hp_bonus for equipment in get_all_equipped(self.owner))
return self.base_max_hp + bonus
def take_damage(self, damage):
if damage > 0:
self.hp -= damage
if self.hp <= 0:
function = self.death_function
if function is not None:
function(self.owner)
if self.owner != player:
player.fighter.xp += self.xp
def attack(self, target):
damage = self.power - target.fighter.defense
if damage > 0:
message(self.owner.name.capitalize() + ' attacks ' + target.name + ' for ' + str(damage) + ' hit points.')
target.fighter.take_damage(damage)
else:
message(self.owner.name.capitalize() + ' attacks ' + target.name + ' but it has no effect!')
self.owner.wait = self.attack_speed
def heal(self, amount):
self.hp += amount
if self.hp > self.max_hp:
self.hp = self.max_hp
class BasicMonster:
def take_turn(self):
monster = self.owner
if tcod.map_is_in_fov(fov_map, monster.x, monster.y):
if monster.distance_to(player) >= 2:
monster.move_astar(player)
elif player.fighter.hp > 0:
monster.fighter.attack(player)
class ConfuseMonster:
def __init__(self, old_ai, num_turns=CONFUSE_NUM_TURNS):
self.old_ai = old_ai
self.num_turns = num_turns
def take_turn(self):
if self.num_turns > 0:
self.owner.move(tcod.random_get_int(0, -1, 1), tcod.random_get_int(0, -1, 1))
self.num_turns -= 1
else:
self.owner.ai = self.old_ai
message('The ' + self.owner.name + ' is no longer confused!', tcod.red)
class Rect:
def __init__(self, x, y, w, h):
# top-left
self.x1 = x
self.y1 = y
self.x2 = x + w
self.y2 = y + h
def center(self):
center_x = (self.x1 + self.x2) // 2
center_y = (self.y1 + self.y2) // 2
return (center_x, center_y)
def intersect(self, other):
return (self.x1 <= other.x2 and self.x2 >= other.x1 and
self.y1 <= other.y2 and self.y2 >= other.y1)
class Tile:
def __init__(self, blocked, block_sight = None):
self.explored = False
self.blocked = blocked
if block_sight is None:
block_sight = blocked
self.block_sight = block_sight
class Object:
def __init__(self, x, y, char, name, color, blocks = False, always_visible=False, speed = DEFAULT_SPEED, fighter = None, ai = None, item = None, equipment = None):
self.x = x
self.y = y
self.char = char
self.name = name
self.color = color
self.blocks = blocks
self.always_visible = always_visible
self.speed = speed
self.wait = 0
self.fighter = fighter
if self.fighter:
self.fighter.owner = self
self.ai = ai
if self.ai:
self.ai.owner = self
self.item = item
if self.item:
self.item.owner = self
self.equipment = equipment
if self.equipment:
self.equipment.owner = self
self.item = Item()
self.item.owner = self
def move(self, dx, dy):
global map
if not is_blocked(self.x + dx, self.y + dy):
self.x += dx
self.y += dy
self.wait = self.speed
def move_towards(self, target_x, target_y):
dx = target_x - self.x
dy = target_y - self.y
dist = math.sqrt(dx ** 2 + dy ** 2)
# Normalize
dx = int(round(dx / dist))
dy = int(round(dy / dist))
self.move(dx, dy)
def move_astar(self, target):
fov = tcod.map_new(MAP_WIDTH, MAP_HEIGHT)
for y1 in range(MAP_HEIGHT):
for x1 in range(MAP_WIDTH):
tcod.map_set_properties(fov, x1, y1, not map[x1][y1].block_sight, not map[x1][y1].blocked)
for obj in objects:
if obj.blocks and obj != self and obj != target:
tcod.map_set_properties(fov, obj.x, obj.y, True, False)
        path = tcod.path_new_using_map(fov, 0)  # 0: no diagonal moves; 1.41: diagonal cost (sqrt 2)
tcod.path_compute(path, self.x, self.y, target.x, target.y)
if not tcod.path_is_empty(path) and tcod.path_size(path) < 25:
x, y = tcod.path_walk(path, True)
if x or y:
self.x = x
self.y = y
self.wait = self.speed
else:
self.move_towards(target.x, target.y)
tcod.path_delete(path)
def distance_to(self, other):
dx = other.x - self.x
dy = other.y - self.y
return math.sqrt(dx ** 2 + dy ** 2)
def distance(self, x, y):
return math.sqrt((x - self.x) ** 2 + (y - self.y) ** 2)
def send_to_back(self):
global objects
objects.remove(self)
objects.insert(0, self)
def draw(self):
(x, y) = to_camera_coordinates(self.x, self.y)
if EXPLORE_MODE:
visible = tcod.map_is_in_fov(fov_map, self.x, self.y) or self.always_visible
if not visible:
return
if x is not None:
tcod.console_set_default_foreground(con, self.color)
tcod.console_put_char(con, x, y, self.char, tcod.BKGND_NONE)
def clear(self):
(x, y) = to_camera_coordinates(self.x, self.y)
if x is not None:
tcod.console_put_char(con, x, y, ' ', tcod.BKGND_NONE)
class Equipment:
def __init__(self, slot, power_bonus=0, defense_bonus=0, max_hp_bonus=0):
self.slot = slot
self.is_equipped = False
self.power_bonus = power_bonus
self.defense_bonus = defense_bonus
self.max_hp_bonus = max_hp_bonus
def toggle_equip(self):
if self.is_equipped:
self.dequip()
else:
self.equip()
def equip(self):
old_equipment = get_equipped_in_slot(self.slot)
if old_equipment is not None:
old_equipment.dequip()
self.is_equipped = True
message('Equipped ' + self.owner.name + ' on ' + self.slot + '.', tcod.light_green)
def dequip(self):
if not self.is_equipped:
return
self.is_equipped = False
message('Dequipped ' + self.owner.name + ' on ' + self.slot + '.', tcod.light_green)
class Item:
def __init__(self, use_function=None):
self.use_function = use_function
def use(self):
if self.owner.equipment:
self.owner.equipment.toggle_equip()
return
if self.use_function is None:
message("the" + self.owner.name + " cannot be used")
else:
if self.use_function() != "cancelled":
inventory.remove(self.owner)
def pick_up(self):
if len(inventory) >= 26:
message('Your inventory is full, cannot pick up ' + self.owner.name + '.', tcod.red)
else:
inventory.append(self.owner)
objects.remove(self.owner)
message('You picked up a ' + self.owner.name + '!', tcod.green)
equipment = self.owner.equipment
if equipment and get_equipped_in_slot(equipment.slot) is None:
equipment.equip()
def drop(self):
if self.owner.equipment:
self.owner.equipment.dequip()
objects.append(self.owner)
inventory.remove(self.owner)
self.owner.x = player.x
self.owner.y = player.y
message('You dropped a ' + self.owner.name + '.', tcod.yellow)
def make_map():
global map, objects, stairs
objects = [player]
map = [ [Tile(True) for y in range(MAP_HEIGHT)] for x in range(MAP_WIDTH) ]
rooms = []
num_rooms = 0
for r in range(MAX_ROOMS):
w = tcod.random_get_int(0, ROOM_MIN_SIZE, ROOM_MAX_SIZE)
h = tcod.random_get_int(0, ROOM_MIN_SIZE, ROOM_MAX_SIZE)
# random position without going out of the boundaries of the map
x = tcod.random_get_int(0, 0, MAP_WIDTH - w - 1)
y = tcod.random_get_int(0, 0, MAP_HEIGHT - h - 1)
new_room = Rect(x, y, w, h)
failed = False
for other_room in rooms:
if new_room.intersect(other_room):
failed = True
break
if not failed:
create_room(new_room)
(new_x, new_y) = new_room.center()
#room_no = Object(new_x, new_y, chr(65 + num_rooms), 'room number', tcod.white, blocks=False)
#objects.insert(0, room_no)
#room_no.send_to_back()
if num_rooms == 0:
player.x = new_x
player.y = new_y
else:
(prev_x, prev_y) = rooms[num_rooms-1].center()
if tcod.random_get_int(0, 0, 1) == 1:
create_h_tunnel(prev_x, new_x, prev_y)
create_v_tunnel(prev_y, new_y, new_x)
else:
create_v_tunnel(prev_y, new_y, prev_x)
create_h_tunnel(prev_x, new_x, new_y)
place_objects(new_room)
rooms.append(new_room)
num_rooms += 1
stairs = Object(new_x, new_y, '<', "stairs", tcod.white, always_visible=True)
objects.append(stairs)
stairs.send_to_back()
def create_room(room):
global map
for x in range(room.x1 + 1, room.x2):
for y in range(room.y1 + 1, room.y2):
map[x][y].blocked = False
map[x][y].block_sight = False
def create_h_tunnel(x1, x2, y):  # horizontal
global map
for x in range(min(x1, x2), max(x1, x2) + 1):
map[x][y].blocked = False
map[x][y].block_sight = False
def create_v_tunnel(y1, y2, x):  # vertical
global map
for y in range(min(y1, y2), max(y1, y2) + 1):
map[x][y].blocked = False
map[x][y].block_sight = False
def place_objects(room):
max_monsters = from_dungeon_level([[2, 1], [3, 4], [5, 6]])
monster_chances = {}
monster_chances["orc"] = 80
monster_chances["troll"] = from_dungeon_level([[15, 3], [30, 5], [60, 7]])
for i in range(0, max_monsters):
x = tcod.random_get_int(0, room.x1 + 1, room.x2 - 1)
y = tcod.random_get_int(0, room.y1 + 1, room.y2 - 1)
if not is_blocked(x, y):
choice = random_choice(monster_chances)
if choice == "orc": # 80% chance of getting an orc
fighter_component = Fighter(hp=20, defense=0, power=4, xp=35, death_function=monster_death)
ai_component = BasicMonster()
monster = Object(x, y, 'o', 'orc', tcod.desaturated_green, blocks=True, fighter=fighter_component, ai=ai_component)
if choice == "troll":
fighter_component = Fighter(hp=30, defense=2, power=8, xp=100, death_function=monster_death)
ai_component = BasicMonster()
monster = Object(x, y, 'T', 'troll', tcod.darker_green, blocks=True, fighter=fighter_component, ai=ai_component)
objects.append(monster)
max_items = from_dungeon_level([[1, 1], [2, 4]])
item_chances = {} # {'heal': 70, 'lightning': 10, 'fireball': 10, 'confuse': 10}
item_chances["heal"] = 35
item_chances['lightning'] = from_dungeon_level([[25, 4]])
item_chances['fireball'] = from_dungeon_level([[25, 6]])
item_chances['confuse'] = from_dungeon_level([[10, 2]])
item_chances["sword"] = from_dungeon_level([[5, 4]])
item_chances["shield"] = from_dungeon_level([[15, 8]])
for i in range(0, max_items):
x = tcod.random_get_int(0, room.x1 + 1, room.x2 - 1)
y = tcod.random_get_int(0, room.y1 + 1, room.y2 - 1)
if not is_blocked(x, y):
choice = random_choice(item_chances)
if choice == 'heal':
item_component = Item(use_function=cast_heal)
item = Object(x, y, '!', 'healing potion', tcod.violet, item=item_component)
if choice == 'lightning':
item_component = Item(use_function=cast_lightning)
item = Object(x, y, '#', 'scroll of lightning bolt', tcod.light_yellow, item=item_component)
if choice == 'fireball':
item_component = Item(use_function=cast_fireball)
item = Object(x, y, '#', 'scroll of fireball', tcod.light_red, item=item_component)
if choice == 'confuse':
item_component = Item(use_function=cast_confuse)
item = Object(x, y, '#', 'scroll of confusion', tcod.light_yellow, item=item_component)
if choice == 'sword':
equipment_component = Equipment(slot="right hand", power_bonus=3)
item = Object(x, y, '/', "sword", tcod.sky, equipment=equipment_component)
if choice == 'shield':
equipment_component = Equipment(slot='left hand', defense_bonus=1)
item = Object(x, y, '[', 'shield', tcod.darker_orange, equipment=equipment_component)
objects.append(item)
item.send_to_back()
def is_blocked(x, y):
if map[x][y].blocked:
return True
for object in objects:
if object.blocks and object.x == x and object.y == y:
return True
return False
def player_death(player):
global game_state
message("You Died!", tcod.red)
    game_state = "dead"
player.char = "%"
player.color = tcod.dark_red
def monster_death(monster):
message('The ' + monster.name.capitalize() + ' is dead! You gain ' + str(monster.fighter.xp) + ' experience points.', tcod.orange)
monster.char = '%'
monster.color = tcod.dark_red
monster.blocks = False
monster.fighter = None
monster.ai = None
monster.name = 'remains of ' + monster.name
monster.send_to_back()
def cast_heal():
if player.fighter.hp == player.fighter.max_hp:
message("you are already at full health.", tcod.red)
return "cancelled"
message("your wounds start to feel better!", tcod.light_violet)
player.fighter.heal(HEAL_AMOUNT)
def cast_lightning():
monster = closest_monster(LIGHTNING_RANGE)
if monster is None:
message("No enemy is close enough to strike.", tcod.red)
return "cancelled"
message('A lighting bolt strikes the ' + monster.name + ' with a loud thunder! The damage is ' + str(LIGHTNING_DAMAGE) + ' hit points.', tcod.light_blue)
monster.fighter.take_damage(LIGHTNING_DAMAGE)
def cast_confuse():
message('Left-click an enemy to confuse it, or right-click to cancel.', tcod.light_cyan)
monster = target_monster(CONFUSE_RANGE)
if monster is None:
return 'cancelled'
old_ai = monster.ai
monster.ai = ConfuseMonster(old_ai=old_ai)
monster.ai.owner = monster
message('The eyes of the ' + monster.name + ' look vacant, as he starts to stumble around!', tcod.light_green)
def cast_fireball():
message('Left-click a target tile for the fireball, or right-click to cancel.', tcod.light_cyan)
(x, y) = target_tile()
(x, y) = (camera_x + x, camera_y + y)
if x is None:
return "cancelled"
message('The fireball explodes, burning everything within ' + str(FIREBALL_RADIUS) + ' tiles!', tcod.orange)
for obj in objects:
if obj.distance(x, y) <= FIREBALL_RADIUS and obj.fighter:
message('The ' + obj.name + ' gets burned for ' + str(FIREBALL_DAMAGE) + ' hit points.', tcod.orange)
obj.fighter.take_damage(FIREBALL_DAMAGE)
def closest_monster(max_range):
    closest_enemy = None
    closest_dist = max_range + 1
    for object in objects:
        if object.fighter and not object == player and tcod.map_is_in_fov(fov_map, object.x, object.y):
            # remember this monster if it is the closest seen so far
            # (no early break: we must scan all monsters to find the nearest)
            dist = player.distance_to(object)
            if dist < closest_dist:
                closest_enemy = object
                closest_dist = dist
    return closest_enemy
def target_tile(max_range=None):
    global key, mouse
    while True:
        tcod.console_flush()
        tcod.sys_check_for_event(tcod.EVENT_KEY | tcod.EVENT_MOUSE, key, mouse)
        render_all()
        (x, y) = (mouse.cx, mouse.cy)
        # the mouse position is camera-relative; convert to map space for the FOV/range checks
        (map_x, map_y) = (camera_x + x, camera_y + y)
        if mouse.lbutton_pressed and tcod.map_is_in_fov(fov_map, map_x, map_y) and (max_range is None or player.distance(map_x, map_y) <= max_range):
            return (x, y)
        if mouse.rbutton_pressed or key.vk == tcod.KEY_ESCAPE:
            return (None, None)
def target_monster(max_range=None):
    while True:
        (x, y) = target_tile(max_range)
        if x is None:
            return None
        # target_tile returns camera-relative coordinates; convert to map space
        (x, y) = (camera_x + x, camera_y + y)
        for obj in objects:
            if obj.x == x and obj.y == y and obj.fighter and obj != player:
                return obj
def inventory_menu(header):
if len(inventory) == 0:
options = ['Inventory is empty.']
else:
options = []
for item in inventory:
text = item.name
if item.equipment and item.equipment.is_equipped:
text = text + " (on " + item.equipment.slot + ")"
options.append(text)
index = menu(header, options, INVENTORY_WIDTH)
if index is None or len(inventory) == 0:
return None
return inventory[index].item
def menu(header, options, width):
global key, mouse
if len(options) > 26: raise ValueError('Cannot have a menu with more than 26 options.')
header_height = tcod.console_get_height_rect(con, 0, 0, width, SCREEN_HEIGHT, header)
if header == '':
header_height = 0
height = len(options) + header_height
window = tcod.console_new(SCREEN_WIDTH, SCREEN_HEIGHT)
tcod.console_set_default_foreground(window, tcod.white)
tcod.console_print_rect_ex(window, 0, 1, width, height, tcod.BKGND_NONE, tcod.LEFT, header)
y = header_height
letter_index = ord('a')
for option_text in options:
text = '(' + chr(letter_index) + ') ' + option_text
tcod.console_print_ex(window, 0, y, tcod.BKGND_NONE, tcod.LEFT, text)
y += 1
letter_index += 1
x = int(SCREEN_WIDTH / 2 - width / 2)
y = int(SCREEN_HEIGHT / 2 - height / 2)
tcod.console_blit(window, 0, 0, width, height, 0, x, y, 1.0, 0.7)
x_offset = x
y_offset = y + header_height
while True:
tcod.console_flush()
        tcod.sys_check_for_event(tcod.EVENT_KEY_PRESS | tcod.EVENT_MOUSE, key, mouse)
if mouse.lbutton_pressed:
(menu_x, menu_y) = (mouse.cx - x_offset, mouse.cy - y_offset)
if 0 <= menu_x and menu_x <= width and 0 <= menu_y and menu_y < height - header_height:
return menu_y
if mouse.rbutton_pressed or key.vk == tcod.KEY_ESCAPE:
return None
if key.vk == tcod.KEY_ENTER and key.lalt:
tcod.console_set_fullscreen(not tcod.console_is_fullscreen())
index = key.c - ord('a')
if index >= 0 and index < len(options):
return index
if index >= 0 and index <= 26:
return None
def msgbox(text, width=50):
menu(text, [], width)
def main_menu():
img = tcod.image_load("/Users/adun/Desktop/RoguelikeTutorial/princess.png") # 160 * 100
while not tcod.console_is_window_closed():
tcod.image_blit_2x(img, 0, 0, 0)
choice = menu("", ["Play a new game, ", "Continue last game", "Quit"], 24)
if choice == 1:
try:
load_game()
except:
msgbox("\n No saved game to load.\n", 24)
continue
play_game()
elif choice == 0:
new_game()
play_game()
elif choice == 2:
break
def check_level_up():
level_up_xp = LEVEL_UP_BASE + player.level * LEVEL_UP_FACTOR
if player.fighter.xp >= level_up_xp:
player.level += 1
player.fighter.xp -= level_up_xp
message('Your battle skills grow stronger! You reached level ' + str(player.level) + '!', tcod.yellow)
choice = None
while choice == None:
choice = menu('Level up! Choose a stat to raise:\n',
['Constitution (+20 HP, from ' + str(player.fighter.max_hp) + ')',
'Strength (+1 attack, from ' + str(player.fighter.power) + ')',
'Agility (+1 defense, from ' + str(player.fighter.defense) + ')'], LEVEL_SCREEN_WIDTH)
if choice == 0:
player.fighter.base_max_hp += 20
player.fighter.hp += 20
elif choice == 1:
player.fighter.base_power += 1
elif choice == 2:
player.fighter.base_defense += 1
def random_choice_index(chances):
dice = tcod.random_get_int(0, 1, sum(chances))
running_sum = 0
choice = 0
for w in chances:
running_sum += w
if dice <= running_sum:
return choice
choice += 1
def random_choice(chances_dict):
chances = chances_dict.values()
strings = list(chances_dict.keys())
return strings[random_choice_index(chances)]
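`random_choice_index` walks a cumulative sum until the dice roll falls inside a slot; with the roll injected explicitly (a sketch, not the module's function) the mapping is easy to see:

```python
def choice_index(chances, dice):
    # dice is assumed to be in 1..sum(chances); walk the cumulative sum
    running_sum = 0
    for choice, w in enumerate(chances):
        running_sum += w
        if dice <= running_sum:
            return choice

# with chances [80, 20]: rolls 1..80 land on index 0, rolls 81..100 on index 1
print(choice_index([80, 20], 80))  # 0
print(choice_index([80, 20], 81))  # 1
```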
def from_dungeon_level(table):
for (value, level) in reversed(table):
if dungeon_level >= level:
return value
return 0
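Each [value, level] pair in these tables means "value applies from that dungeon level upward"; a standalone sketch of the lookup, with the level passed in explicitly (names here are made up):

```python
def value_from_level(table, level):
    # same reversed walk as from_dungeon_level
    for (value, lvl) in reversed(table):
        if level >= lvl:
            return value
    return 0

table = [[2, 1], [3, 4], [5, 6]]  # the max-monsters table used in place_objects
print(value_from_level(table, 1))  # 2
print(value_from_level(table, 5))  # 3
print(value_from_level(table, 6))  # 5
```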
def get_equipped_in_slot(slot):
for obj in inventory:
if obj.equipment and obj.equipment.slot == slot and obj.equipment.is_equipped:
return obj.equipment
return None
def get_all_equipped(obj):
if obj == player:
equipped_list = []
for item in inventory:
if item.equipment and item.equipment.is_equipped:
equipped_list.append(item.equipment)
return equipped_list
else:
return []
def move_camera(target_x, target_y):
global camera_x, camera_y, fov_recompute
x = target_x - CAMERA_WIDTH // 2
y = target_y - CAMERA_HEIGHT // 2
if x > MAP_WIDTH - CAMERA_WIDTH - 1: x = MAP_WIDTH - CAMERA_WIDTH - 1
if y > MAP_HEIGHT - CAMERA_HEIGHT - 1: y = MAP_HEIGHT - CAMERA_HEIGHT - 1
if x < 0: x = 0
if y < 0: y = 0
if x != camera_x or y != camera_y: fov_recompute = True
(camera_x, camera_y) = (x, y)
def to_camera_coordinates(x, y):
(x, y) = (x - camera_x, y - camera_y)
if x < 0 or y < 0 or x >= CAMERA_WIDTH or y >= CAMERA_HEIGHT:
return (None, None)
return (x, y)
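The clamping in move_camera can be exercised standalone (a sketch with the camera and map sizes passed explicitly instead of the module globals):

```python
def clamp_camera(target_x, target_y, cam_w, cam_h, map_w, map_h):
    # center the camera on the target...
    x = target_x - cam_w // 2
    y = target_y - cam_h // 2
    # ...then clamp it to the map bounds, exactly as move_camera does
    x = max(0, min(x, map_w - cam_w - 1))
    y = max(0, min(y, map_h - cam_h - 1))
    return (x, y)

print(clamp_camera(3, 2, 80, 43, 100, 100))    # (0, 0): pinned to the top-left corner
print(clamp_camera(50, 50, 80, 43, 100, 100))  # (10, 29): centered on the target
print(clamp_camera(99, 99, 80, 43, 100, 100))  # (19, 56): pinned to the maximum scroll
```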
def render_all():
global fov_map, color_dark_wall, color_light_wall
global color_dark_ground, color_light_ground
global fov_recompute
move_camera(player.x, player.y)
if fov_recompute:
# recompute FOV if needed (the player moved or something)
fov_recompute = False
tcod.map_compute_fov(fov_map, player.x, player.y, TORCH_RADIUS, FOV_LIGHT_WALLS, FOV_ALGO)
# go through all tiles, and set their background color according to the FOV
for y in range(CAMERA_HEIGHT):
for x in range(CAMERA_WIDTH):
(map_x, map_y) = (camera_x + x, camera_y + y)
visible = tcod.map_is_in_fov(fov_map, map_x, map_y)
wall = map[map_x][map_y].block_sight
if EXPLORE_MODE:
if not visible:
# it's out of the player's FOV
if map[map_x][map_y].explored:
if wall:
tcod.console_set_char_background(con, x, y, color_dark_wall, tcod.BKGND_SET)
else:
tcod.console_set_char_background(con, x, y, color_dark_ground, tcod.BKGND_SET)
else:
# it's visible
if wall:
tcod.console_set_char_background(con, x, y, color_light_wall, tcod.BKGND_SET)
else:
tcod.console_set_char_background(con, x, y, color_light_ground, tcod.BKGND_SET)
map[map_x][map_y].explored = True
else:
if not visible:
# it's out of the player's FOV
if wall:
tcod.console_set_char_background(con, x, y, color_dark_wall, tcod.BKGND_SET)
else:
tcod.console_set_char_background(con, x, y, color_dark_ground, tcod.BKGND_SET)
else:
# it's visible
if wall:
tcod.console_set_char_background(con, x, y, color_light_wall, tcod.BKGND_SET)
else:
tcod.console_set_char_background(con, x, y, color_light_ground, tcod.BKGND_SET)
# draw all objects in the list
for object in objects:
if object != player:
object.draw()
player.draw()
# blit the contents of "con" to the root console
tcod.console_blit(con, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0, 0)
# GUI
tcod.console_set_default_background(panel, tcod.black)
tcod.console_clear(panel)
message_y = 1
for (line, color) in game_msgs:
tcod.console_set_default_foreground(panel, color)
tcod.console_print_ex(panel, MSG_X, message_y, tcod.BKGND_NONE, tcod.LEFT, line)
message_y += 1
tcod.console_set_default_foreground(panel, tcod.light_gray)
name = get_names_under_mouse()
tcod.console_print_ex(panel, 1, 0, tcod.BKGND_NONE, tcod.LEFT, name)
tcod.console_print_ex(panel, 1, 3, tcod.BKGND_NONE, tcod.LEFT, 'Dungeon level ' + str(dungeon_level))
render_bar(1, 1, BAR_WIDTH, "HP", player.fighter.hp, player.fighter.max_hp, tcod.light_red, tcod.darker_red)
tcod.console_blit(panel, 0, 0, SCREEN_WIDTH, PANEL_HEIGHT, 0, 0, PANEL_Y)
def handle_keys():
global key
if key.vk == tcod.KEY_ENTER and key.lalt:
tcod.console_set_fullscreen(not tcod.console_is_fullscreen())
elif key.vk == tcod.KEY_ESCAPE:
return 'exit' # exit game
if game_state == 'playing':
if player.wait > 0:
player.wait -= 1
return
if key.vk == tcod.KEY_UP or key.vk == tcod.KEY_KP8:
player_move_or_attack(0, -1)
elif key.vk == tcod.KEY_DOWN or key.vk == tcod.KEY_KP2:
player_move_or_attack(0, 1)
elif key.vk == tcod.KEY_LEFT or key.vk == tcod.KEY_KP4:
player_move_or_attack(-1, 0)
elif key.vk == tcod.KEY_RIGHT or key.vk == tcod.KEY_KP6:
player_move_or_attack(1, 0)
elif key.vk == tcod.KEY_HOME or key.vk == tcod.KEY_KP7:
player_move_or_attack(-1, -1)
elif key.vk == tcod.KEY_PAGEUP or key.vk == tcod.KEY_KP9:
player_move_or_attack(1, -1)
elif key.vk == tcod.KEY_END or key.vk == tcod.KEY_KP1:
player_move_or_attack(-1, 1)
elif key.vk == tcod.KEY_PAGEDOWN or key.vk == tcod.KEY_KP3:
player_move_or_attack(1, 1)
elif key.vk == tcod.KEY_KP5:
pass
else:
key_char = chr(key.c)
if key_char == 'g':
for object in objects:
if object.x == player.x and object.y == player.y and object.item:
object.item.pick_up()
break
if key_char == 'i':
# show the inventory
    chosen_item = inventory_menu('Press the key next to an item to use it, or any other to cancel.\n')
    if chosen_item is not None:
     chosen_item.use()
if key_char == 'd':
chosen_item = inventory_menu('Press the key next to an item to drop it, or any other to cancel.\n')
if chosen_item is not None:
chosen_item.drop()
if key_char == 'a':
if stairs.x == player.x and stairs.y == player.y:
next_level()
if key_char == 'c':
level_up_xp = LEVEL_UP_BASE + player.level * LEVEL_UP_FACTOR
msgbox(
'Character Information\n\nLevel: ' + str(player.level) + '\nExperience: ' + str(player.fighter.xp) +
'\nExperience to level up: ' + str(level_up_xp) + '\n\nMaximum HP: ' + str(player.fighter.max_hp) +
'\nAttack: ' + str(player.fighter.power) + '\nDefense: ' + str(player.fighter.defense),
CHARACTER_SCREEN_WIDTH)
else:
return 'didnt-take-turn'
def player_move_or_attack(dx, dy):
global fov_recompute
# the coordinates the player is moving to/attacking
x = player.x + dx
y = player.y + dy
# try to find an attackable object there
target = None
for object in objects:
if object.fighter and object.x == x and object.y == y:
target = object
break
# attack if target found, move otherwise
if target is not None:
player.fighter.attack(target)
else:
player.move(dx, dy)
fov_recompute = True
con = tcod.console_new(SCREEN_WIDTH, SCREEN_HEIGHT)
panel = tcod.console_new(SCREEN_WIDTH, SCREEN_HEIGHT)
def get_names_under_mouse():
global mouse
(x, y) = (mouse.cx, mouse.cy)
(x, y) = (x + camera_x, y + camera_y)
names = [obj.name for obj in objects if obj.x == x and obj.y == y and tcod.map_is_in_fov(fov_map, obj.x, obj.y)]
names = ','.join(names)
return names.capitalize()
def message(new_msg, color = tcod.white):
new_msg_lines = textwrap.wrap(new_msg, MSG_WIDTH)
for line in new_msg_lines:
if len(game_msgs) == MSG_HEIGHT:
del game_msgs[0]
game_msgs.append((line, color))
def render_bar(x, y, total_width, name, value, maximum, bar_color, back_color):
bar_width = int(float(value) / maximum * total_width)
tcod.console_set_default_background(panel, back_color)
tcod.console_rect(panel, x, y, total_width, 1, False, tcod.BKGND_SCREEN)
tcod.console_set_default_background(panel, bar_color)
if bar_width > 0:
tcod.console_rect(panel, x, y, bar_width, 1, False, tcod.BKGND_SCREEN)
tcod.console_set_default_foreground(panel, tcod.white)
tcod.console_print_ex(panel, x + total_width // 2, y, tcod.BKGND_NONE, tcod.CENTER, name + ': ' + str(value) + '/' + str(maximum))
def next_level():
global dungeon_level
dungeon_level += 1
message('You take a moment to rest, and recover your strength.', tcod.light_violet)
player.fighter.heal(player.fighter.max_hp // 2)
message('After a rare moment of peace, you descend deeper into the heart of the dungeon...', tcod.red)
make_map()
initialize_fov()
def new_game():
global player, inventory, game_msgs, game_state, dungeon_level
dungeon_level = 1
fighter_component = Fighter(hp=100, defense=1, power=4, xp=0, attack_speed=PLAYER_ATTACK_SPEED ,death_function=player_death)
player = Object(0, 0, '@', 'player', tcod.white, blocks=True, speed=PLAYER_SPEED, fighter=fighter_component)
player.level = 1
make_map()
initialize_fov()
inventory = []
game_msgs = []
game_state = "playing"
 message("Welcome stranger! Prepare to perish in the Tombs of the Ancient Kings.", tcod.red)
equipment_component = Equipment(slot='right hand', power_bonus=2)
obj = Object(0, 0, '-', 'dagger', tcod.sky, equipment=equipment_component)
inventory.append(obj)
equipment_component.equip()
obj.always_visible = True
def initialize_fov():
global fov_map, fov_recompute
fov_recompute = True
fov_map = tcod.map_new(MAP_WIDTH, MAP_HEIGHT)
for y in range(MAP_HEIGHT):
for x in range(MAP_WIDTH):
tcod.map_set_properties(fov_map, x, y, not map[x][y].block_sight, not map[x][y].blocked)
tcod.console_clear(con)
def play_game():
global camera_x, camera_y, key, mouse
(camera_x, camera_y) = (0, 0)
while not tcod.console_is_window_closed():
tcod.sys_check_for_event(tcod.EVENT_KEY_PRESS | tcod.EVENT_MOUSE, key, mouse)
render_all()
tcod.console_flush()
check_level_up()
for object in objects:
object.clear()
player_action = handle_keys()
if player_action == 'exit':
save_game()
break
if game_state == "playing":
for object in objects:
if object.ai:
if object.wait > 0:
object.wait -= 1
else:
object.ai.take_turn()
def save_game():
file = shelve.open("savegame", "n")
file["map"] = map
file["objects"] = objects
file["player_index"] = objects.index(player)
file['inventory'] = inventory
file['game_msgs'] = game_msgs
file['game_state'] = game_state
file['stairs_index'] = objects.index(stairs)
file['dungeon_level'] = dungeon_level
file.close()
def load_game():
global map, objects, player, inventory, game_msgs, game_state, stairs, dungeon_level
file = shelve.open("savegame", "r")
map = file["map"]
objects = file['objects']
player = objects[file['player_index']] # get index of player in objects list and access it
inventory = file['inventory']
game_msgs = file['game_msgs']
game_state = file['game_state']
stairs = objects[file['stairs_index']]
dungeon_level = file['dungeon_level']
file.close()
initialize_fov()
def main():
global key, mouse
key = tcod.Key()
mouse = tcod.Mouse()
tcod.console_set_custom_font('/Users/adun/Desktop/RoguelikeTutorial/arial10x10.png', tcod.FONT_TYPE_GREYSCALE | tcod.FONT_LAYOUT_TCOD)
tcod.console_init_root(SCREEN_WIDTH, SCREEN_HEIGHT, WINDOW_TITLE, FULL_SCREEN)
tcod.sys_set_fps(LIMIT_FPS)
main_menu()
if __name__ == "__main__":
main()
| true |
77790c61a794f70915ef79e1ad3f81b2f8196bf3 | Python | JosephLevinthal/Research-projects | /5 - Notebooks e Data/1 - Análises numéricas/Arquivos David/Atualizados/logDicas-master/data/2019-1/224/users/4376/codes/1734_2496.py | UTF-8 | 93 | 3.40625 | 3 | [] | no_license | 
i=int(input("number"))
soma=0
while(i!=-1):
	soma=soma+i
	i=int(input("number"))
print(soma)
| true |
9fe86fc37878f08c1149f878ff01720deb2b8604 | Python | mukund1985/Python-Tutotrial | /Advance_Python/15. Thread/15. SingleTaskingUsingThread1.py | UTF-8 | 399 | 3.78125 | 4 | [] | no_license | 
# Single Tasking using a Thread
from threading import Thread
from time import *
class MyExam:
def solve_question(self):
self.task1()
self.task2()
self.task3()
def task1(self):
print("Question 1 Solved")
def task2(self):
print("Question 2 Solved")
def task3(self):
print("Question 3 Solved")
mye = MyExam()
t = Thread(target=mye.solve_question)
t.start()
| true |
82dd891347738132cbcd19714cfde72addbe6dc5 | Python | AveryHuo/PeefyLeetCode | /src/Python/601-700/645.SetMismatch.py | UTF-8 | 551 | 3.328125 | 3 | ["Apache-2.0"] | permissive | 
from collections import Counter
class Solution:
def findErrorNums(self, nums):
"""
:type nums: List[int]
:rtype: List[int]
"""
counter = Counter(nums)
ele1, ele2, n = 0, 0, len(nums)
for k in counter.keys():
if counter[k] > 1:
ele1 = k
break
ele2 = int((n + 1) * n / 2 - sum(counter.keys()))
return [ele1, ele2]
if __name__ == '__main__':
solution = Solution()
print(solution.findErrorNums([1, 2, 2, 4]))
else:
pass
| true |
a5c3801bc8c16ff8d52a95d637c5d1086f06ab2a | Python | scumechanics/Pre-trained-Deep-Learning-Models-For-Rapid-Analysis-Of-Piezoelectric-Hysteresis-Loops-SHO-Fitting | /codes/algorithm/TRPCGOptimizerv2.py | UTF-8 | 12,880 | 2.8125 | 3 | ["BSD-3-Clause"] | permissive | 
"""
Created on Sun Jan 24 16:34:00 2021
@author: Martin Takac
"""
import numpy as np
import tensorflow as tf
import torch
class TRPCGOptimizerv2:
cgopttol = 1e-7
c0tr = 0.2
c1tr = 0.25
c2tr = 0.75 # when to accept
t1tr = 0.75
t2tr = 2.0
radius_max = 5.0 # max radius
radius_initial = 1.0
radius = radius_initial
@tf.function
def computeHessianProduct(self, x, y, v):
with tf.GradientTape() as tape:
with tf.GradientTape() as tape2:
out = self.model(x)
loss = tf.keras.losses.mean_squared_error(out, y)
loss = tf.reduce_mean(loss)
grad = tape2.gradient(loss, self.model.trainable_variables)
gradSum = tf.reduce_sum([tf.reduce_sum(g*p0i)
for g, p0i in zip(grad, v)])
Hp = tape.gradient(gradSum, self.model.trainable_variables)
return Hp
def __init__(self, model, radius, precondition,
cgopttol=1e-7, c0tr=0.0001, c1tr=0.1, c2tr=0.75, t1tr=0.25, t2tr=2.0, radius_max=2.0,
radius_initial=0.1):
self.model = model
self.cgopttol = cgopttol
self.c0tr = c0tr
self.c1tr = c1tr
self.c2tr = c2tr
self.t1tr = t1tr
self.t2tr = t2tr
self.radius_max = radius_max
self.radius_initial = radius_initial
self.radius = radius
self.cgmaxiter = sum([tf.size(w).numpy()
for w in self.model.trainable_weights])
self.d = self.cgmaxiter
self.cgmaxiter = min(120, self.cgmaxiter)
self.iterationCounterForAdamTypePreconditioning = 0
self.precondition = precondition
if self.precondition != 0:
self.DiagPrecond = [w.data*0.0 for w in self.model.parameters()]
self.DiagScale = 0.0
def findroot(self, x, p):
aa = 0.0
bb = 0.0
cc = 0.0
for e in range(len(x)):
aa += tf.reduce_sum(p[e]*p[e])
bb += tf.reduce_sum(p[e]*x[e])
cc += tf.reduce_sum(x[e]*x[e])
bb = bb*2.0
cc = cc - self.radius**2
alpha = (-2.0*cc)/(bb + tf.sqrt(bb**2-(4.0*aa*cc)))
return alpha
def computeListNorm(self, lst):
return np.sum([tf.reduce_sum(ri*ri) for ri in lst])**0.5
def computeListNormSq(self, lst):
return np.sum([tf.reduce_sum(ri*ri) for ri in lst])
def computeDotProducts(self, u, v):
return tf.reduce_sum(tf.stack([tf.reduce_sum(ui * vi) for ui, vi in zip(u, v)], 0))
def normOfVar(self, x):
return tf.sqrt(self.computeDotProducts(x, x))
def CGSolver(self, loss_grad, x, y):
cg_iter = 0 # iteration counter
x0 = [w.numpy()*0.0 for w in self.model.trainable_weights]
if self.precondition == 0:
r0 = [i+0.0 for i in loss_grad] # set initial residual to gradient
normGrad = self.normOfVar(r0)
# set initial conjugate direction to -r0
p0 = [-i+0.0 for i in loss_grad]
self.cgopttol = self.computeListNormSq(loss_grad)
self.cgopttol = self.cgopttol**0.5
self.cgopttol = (min(0.5, self.cgopttol**0.5))*self.cgopttol
else:
r0 = [(i.data+0.0)*pr.data for i,
pr in zip(loss_grad, self.SquaredPreconditioner)]
p0 = [-(i.data+0.0)*pr.data for i,
pr in zip(loss_grad, self.SquaredPreconditioner)]
self.cgopttol = self.computeListNormSq(r0)
self.cgopttol = self.cgopttol.data.item()**0.5
self.cgopttol = (min(0.5, self.cgopttol**0.5))*self.cgopttol
cg_term = 0
j = 0
while 1:
j += 1
self.CG_STEPS_TOOK = j
# if CG does not solve model within max allowable iterations
if j > self.cgmaxiter:
j = j-1
p1 = x0
print('\n\nCG has issues !!!\n\n')
break
# hessian vector product
if self.precondition == 0:
Hp = self.computeHessianProduct(x, y, p0)
else:
loss_grad_direct \
= np.sum([(gi*(si*pr.data)).sum() for gi, si, pr in zip(loss_grad, p0, self.SquaredPreconditioner)])
Hp = torch.autograd.grad(loss_grad_direct, self.model.parameters(
), retain_graph=True) # hessian-vector in tuple
Hp = [g*pr.data for g,
pr in zip(Hp, self.SquaredPreconditioner)]
pHp = tf.reduce_sum([tf.reduce_sum(Hpi*p0i)
for Hpi, p0i in zip(Hp, p0)])
# if nonpositive curvature detected, go for the boundary of trust region
if pHp <= 0:
tau = self.findroot(x0, p0)
p1 = [xi+tau*p0i for xi, p0i in zip(x0, p0)]
cg_term = 1
break
# if positive curvature
# vector product
rr0 = self.computeListNormSq(r0)
# update alpha
alpha = (rr0/pHp)
x1 = [xi+alpha*pi for xi, pi in zip(x0, p0)]
norm_x1 = self.computeListNorm(x1)
if norm_x1 >= self.radius:
tau = self.findroot(x0, p0)
p1 = [xi+tau*pi for xi, pi in zip(x0, p0)]
cg_term = 2
break
# update residual
r1 = [ri+alpha*Hpi for ri, Hpi in zip(r0, Hp)]
norm_r1 = self.computeListNorm(r1)
if norm_r1 < self.cgopttol:
p1 = x1
cg_term = 3
break
rr1 = self.computeListNormSq(r1)
beta = (rr1/rr0)
# update conjugate direction for next iterate
p1 = [-ri+beta*pi for ri, pi in zip(r1, p0)]
p0 = p1
x0 = x1
r0 = r1
cg_iter = j
if self.precondition != 0:
p1 = [pi*pr.data for pi, pr in zip(p1, self.SquaredPreconditioner)]
d = p1
return d, cg_iter, cg_term
def assignToModel(self, newX):
for w, nw in zip(self.model.trainable_weights, newX):
w.assign(nw)
def addToModel(self, d):
for w, di in zip(self.model.trainable_weights, d):
w.assign_add(di)
def computeLoss(self, x, y):
out = self.model(x)
loss = tf.keras.losses.mean_squared_error(out, y)
loss = tf.reduce_mean(loss)
return loss
def computeLossAndGrad(self, x, y):
with tf.GradientTape() as tape:
loss = self.computeLoss(x, y)
grad = tape.gradient(loss, self.model.trainable_variables)
return loss, grad
def step(self, x, y):
loss, grad = self.computeLossAndGrad(x, y)
w0 = [w.numpy()+0.0 for w in self.model.trainable_weights]
update = 3
while update == 3:
update = 2
# Conjugate Gradient Method
d, cg_iter, cg_term = self.CGSolver(grad, x, y)
Hd = self.computeHessianProduct(x, y, d)
dHd = tf.reduce_sum([tf.reduce_sum(Hdi*di)
for Hdi, di in zip(Hd, d)])
gd = tf.reduce_sum([tf.reduce_sum(gi*di)
for gi, di in zip(grad, d)])
norm_d = self.computeListNorm(d)
denominator = -gd - 0.5*(dHd)
self.addToModel(d)
loss_new = self.computeLoss(x, y)
numerator = loss - loss_new
# ratio
rho = numerator/denominator
if rho < self.c1tr: # shrink radius
self.radius = self.t1tr*self.radius
update = 0
# and np.abs(norm_d.data.item() - self.radius) < 1e-10: # enlarge radius
if rho > self.c2tr:
self.radius = min(self.t2tr*self.radius, self.radius_max)
update = 1
# otherwise, radius remains the same
if rho <= self.c0tr: # reject d
update = 3
self.assignToModel(w0)
lossTMP, grad = self.computeLossAndGrad(x, y)
print('rejecting .... radius: %1.6e FVALNew %1.6e, DeltaF %1.6e ' % (
self.radius, lossTMP, numerator))
if self.radius < 1e-15:
break
return loss, d, rho, update, cg_iter, cg_term, grad, norm_d, numerator, denominator, self.radius
def stepMAE(self, loss, MAE, Coor, AtomTypes, Grid, Label):
update = 3
w0 = [a.data+0.0 for a in self.model.parameters()]
loss_grad = torch.autograd.grad(
loss, self.model.parameters(), create_graph=True, retain_graph=True)
if self.precondition == 1:
for gi, di in zip(loss_grad, self.DiagPrecond):
di.data.set_(di.data*self.DiagScale+(1-self.DiagScale)*gi*gi)
di.data[di.data == 0] += 1.0
self.DiagScale = 0.95
if self.precondition == 2: # Martens paper
self.DiagScale = 0.001 # set lambda to what value?
self.exponent = 0.75 # based on paper
for gi, di in zip(loss_grad, self.DiagPrecond):
di.data.set_((gi*gi + self.DiagScale)**self.exponent)
if self.precondition == 3:
for gi, di in zip(loss_grad, self.DiagPrecond):
di.data.set_(1.0-self.DiagScale+self.DiagScale*gi*gi)
self.DiagScale = 1e-2
if self.precondition == 4:
for gi, di in zip(loss_grad, self.DiagPrecond):
di.data.set_(di.data*self.DiagScale+(1-self.DiagScale)*gi*gi)
di.data[di.data == 0] += 1.0
self.DiagScale = 0.99
if self.precondition == 5:
for gi, di in zip(loss_grad, self.DiagPrecond):
di.data.set_(di.data*self.DiagScale+(1-self.DiagScale)*gi*gi)
di.data[di.data == 0] += 1.0
self.DiagScale = 0.90
if self.precondition == 6:
for gi, di in zip(loss_grad, self.DiagPrecond):
di.data.set_(di.data*self.DiagScale +
(1-self.DiagScale)*torch.abs(gi))
di.data[di.data == 0] += 1.0
self.DiagScale = 0.95
if self.precondition in [7, 8, 9]:
if self.precondition == 7:
self.DiagScale = 0.99
if self.precondition == 8:
self.DiagScale = 0.95
if self.precondition == 9:
self.DiagScale = 0.90
self.iterationCounterForAdamTypePreconditioning += 1
for gi, di in zip(loss_grad, self.DiagPrecond):
di.data.set_(di.data*self.DiagScale +
(1-self.DiagScale)*torch.abs(gi))
di.data[di.data == 0] += 1.0
while update == 3:
update = 2
# Conjugate Gradient Method
d, cg_iter, cg_term = self.CGSolver(loss_grad)
for wi, di in zip(self.model.parameters(), d):
wi.data.set_(wi.data+0.0+di)
# MSE loss plus penalty term
with torch.no_grad():
loss_new = Projection_Error(XData, YData, idx, n_steps)
numerator = loss.data.item() - loss_new.data.item()
loss_grad_direct = np.sum([(gi*di).sum()
for gi, di in zip(loss_grad, d)])
Hd = torch.autograd.grad(loss_grad_direct, self.model.parameters(
), retain_graph=True) # hessian-vector in tuple
dHd = np.sum([(Hdi*di).sum() for Hdi, di in zip(Hd, d)])
gd = np.sum([(gi*di).sum() for gi, di in zip(loss_grad, d)])
norm_d = self.computeListNorm(d)
denominator = -gd.data.item() - 0.5*(dHd.data.item())
# ratio
rho = numerator/denominator
if rho < self.c1tr: # shrink radius
self.radius = self.t1tr*self.radius
update = 0
# and np.abs(norm_d.data.item() - self.radius) < 1e-10: # enlarge radius
if rho > self.c2tr:
self.radius = min(self.t2tr*self.radius, self.radius_max)
update = 1
# otherwise, radius remains the same
if rho <= self.c0tr: # reject d
update = 3
for wi, w0i in zip(self.model.parameters(), w0):
wi.data.set_(w0i.data)
return d, rho, update, cg_iter, cg_term, loss_grad, norm_d, numerator, denominator
| true |
3a425b4b66883f4bacdbe1abe5f32ce128c7bd0a | Python | Exia-Aix-2016/Data-Project | /statistics.py | UTF-8 | 373 | 2.5625 | 3 | ["MIT"] | permissive | 
def solution_stat(solution):
return {
"execution_time": solution["execution_time"],
"distance": solution["total_distance"],
"trucks": len(list(filter(lambda x: x["distance"] > 0, solution["vehicles"])))
}
def config_stat(solution_stats, generator):
return {
"cities": generator.cities,
"stats": solution_stats
}
| true |
ea98ca0dc647c0a8a36114bc145625a55f212826 | Python | YunwenShen/GitHook | /commit-msg.py | UTF-8 | 811 | 2.65625 | 3 | [] | no_license | 
#! /usr/bin/env python
# -*- encoding: utf-8 -*-
import sys
import re
pattern = re.compile("^(feat|fix|polish|docs|style|refactor|perf|test|workflow|ci|chore|types)(\(.+\))?: .{1,50}")
def validate_commit_msg(msg: str):
"""
校验git commit 内容格式是否满足要求
:param msg:
:return: void
"""
if msg.startswith("Revert"):
sys.exit(0)
elif msg.startswith("Merge"):
sys.exit(0)
elif re.match(pattern, msg):
sys.exit(0)
else:
print("invalid commit format")
sys.exit(1)
if __name__ == "__main__":
file_name = sys.argv[1]
print(file_name)
commit_msg = ""
with open(file_name, encoding="utf-8") as file:
for line in file.readlines():
commit_msg += line
validate_commit_msg(commit_msg)
| true |
e423eec0fb0dd5ce0af64d1327e79056df583154 | Python | phillipmurray/mini-1 | /Animated plots.py | UTF-8 | 1,716 | 3.078125 | 3 | [] | no_license | 
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import pandas as pd
import os
# this is to create the data to use in the plot
# can be any data
os.chdir("c:\\Users\ebayuser")
# if want to subsample the data
indices = np.arange(0,24999, 10)
x = pd.read_csv("x_coord.csv", ",").to_numpy()
x_min, x_max = min(x) - 0.2*(max(x) - min(x)), max(x) + 0.2*(max(x) - min(x))
y = pd.read_csv("y_coord.csv").to_numpy()
y_min, y_max = min(y) - 0.2*(max(y) - min(y)), max(y) + 0.2*(max(y) - min(y))
# this is to create the actual plots
fig, ax = plt.subplots(figsize=(20,20))
line, = ax.plot(x, y)
# update function
def update(num, x, y, line):
line.set_data(x[:50*num], y[:50*num])
line.axes.axis([x_min, x_max, y_min, y_max])
return line,
ani = animation.FuncAnimation(fig, update, int(len(x)/50), fargs=[x, y, line],
interval=25, blit=True)
# save animation to file
ani.save('Matern2.gif')
n = 25000
dx = 1/np.sqrt(n) * np.random.normal(0,1,n)
dy = 1/np.sqrt(n) * np.random.normal(0,1,n)
x = np.cumsum(dx)
y = np.cumsum(dy)
x_min, x_max = min(x) - 0.2*(max(x) - min(x)), max(x) + 0.2*(max(x) - min(x))
y_min, y_max = min(y) - 0.2*(max(y) - min(y)), max(y) + 0.2*(max(y) - min(y))
fig, ax = plt.subplots(figsize=(20,20))
line, = ax.plot(x, y)
# update function
def update(num, x, y, line):
line.set_data(x[:50*num], y[:50*num])
line.axes.axis([x_min, x_max, y_min, y_max])
return line,
ani = animation.FuncAnimation(fig, update, int(len(x)/50), fargs=[x, y, line],
interval=25, blit=True)
ani.save('test.gif')
| true |
3cfb99cd232d4b2e1f1d460dd43d2ea2f56a945b | Python | muzny/ds2001-web | /.www/cs/slides/p5_slides_final.py | UTF-8 | 2,871 | 4.5 | 4 | [] | no_license | 
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Felix Muzny
DS 2001 - CS
October 6/7, 2021
Practicum 5 - "Slides"!
"""
"""
Logistics notes:
- This week, we **will** have a small amount of
homework to complete before practicum next week
"""
"""
Warm-up:
Write the equivalent __while__ loop to the following for loop:
(Go ahead and do this in your p5.py file, you can leave it there
in comments or not)
"""
ls = ["a", "b", "c", "d"]
print("loop 1")
# range is going to produce [0, ...., len(ls) - 1]
# so that the variable i can go from 0 to len(ls) - 1
for i in range(len(ls)):
print(ls[i])
print()
count = 0
while count < len(ls):
print(ls[count])
count += 1
# if you printed out ls[count] here instead, you'd get
# an error because you run off the end of the list!
#Write the equivalent __while__ loop to the following for loop:
# start (inclusive), stop (exclusive), increment
# [0, 2, 4, .... len(ls) - 1]
print("loop 2")
for i in range(0, len(ls), 2):
print(ls[i])
print()
count = 0
while count < len(ls):
print(ls[count])
count += 2
print("loop 3")
#Write the equivalent __while__ loop to the following for loop:
# letter takes the next __value__ in the list (rather than the position)
for letter in ls:
print(letter)
#print(ls[letter]) # ERROR
print()
count = 0
while count < len(ls):
letter = ls[count]
print(letter)
count += 1
"""
(links for this discussion are on the course website)
1) Playing with ELIZA
http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm
What is one thing that your group found that
ELIZA is unable to do?
"""
# Programming review topics for the day:
# string manipulation
# list manipulation & looping
# 2) demo some string manipulation
def main():
print()
print("p5 slides")
# String manipulation
# Strings are immutable
color = "purple"
print(color[len(color) - 1])
print(len(color))
#color[0] = "b" # ERROR
color = "burple"
# upper()
color = color.upper()
print(color)
#print(upper_color)
# lower()
# strip()
print()
# Lists are mutable
# Change a string list element into the upper case version
animals = ["dog", "cat", "wombat", "panther", "ocelot"]
print(animals)
animals[0] = "ferret"
print(animals)
animals[0] = animals[0].upper()
print(animals)
# change all the elements in the list into the
# upper case versions
for i in range(len(animals)):
print(animals[i])
animals[i] = animals[i].upper()
print(animals)
# while loop version
    index = 0
    while index < len(animals):
        animals[index] = animals[index].upper()
        index += 1
# list slicing - gives you a sublist
# start:stop (stop is exclusive)
print(animals[1:4])
main()
| true |
91b43cf42d1bcccb38b62e0a1e2e8d62c130ee2f | Python | senweim/JumpKingAtHome | /Level.py | UTF-8 | 8,190 | 2.65625 | 3 | [] | no_license | 
#!/usr/bin/env python
#
#
#
#
import pygame
import collections
import os
import math
import sys
from hiddenwalls import HiddenWalls
from Platforms import Platforms
from Background import Backgrounds
from Props import Props
from weather import Weathers
from scrolling import Scrollers
from BackgroundMusic import BackgroundAudio
from NPC import NPCs
from Names import Names
from Readable import Readables
from Flyers import Flyers
from Ending_Animation import Ending_Animation
from Wind import Wind
class Level:
def __init__(self, screen, level):
self.screen = screen
self.level = level
self.found = False
self.platforms = None
self.background = None
self.midground = None
self.foreground = None
self.weather = None
self.hiddenwalls = None
self.props = None
self.scrollers = None
self.shake = None
self.background_audio = None
self.npc = None
self.name = None
self.readable = None
self.flyer = None
class Levels:
def __init__(self, screen):
self.max_level = 42
self.current_level = 0
self.current_level_name = None
self.screen = screen
# Objects
self.platforms = Platforms()
self.background = Backgrounds("BG").backgrounds
self.midground = Backgrounds("MG").backgrounds
self.foreground = Backgrounds("FG").backgrounds
self.props = Props().props
self.weather = Weathers().weather
self.hiddenwalls = HiddenWalls().hiddenwalls
self.scrollers = Scrollers()
self.npcs = NPCs().npcs
self.names = Names()
self.readables = Readables().readables
self.flyers = Flyers()
self.Ending_Animation = Ending_Animation()
# Audio
self.background_audio = BackgroundAudio().level_audio
self.channels = [pygame.mixer.Channel(2), pygame.mixer.Channel(3), pygame.mixer.Channel(4), pygame.mixer.Channel(5)]
for channel in self.channels:
channel.set_volume(1.0)
# Movement
self.shake_var = 0
self.shake_levels = [39, 40, 41]
self.wind = Wind(self.screen)
self.levels = {}
self._load_levels()
# Ending
self.ending = False
self.END = False
def blit1(self):
try:
current_level = self.levels[self.current_level]
if current_level.background:
current_level.background.blitme(self.screen)
if current_level.scrollers:
for scroller in current_level.scrollers:
scroller.blitme(self.screen, "bg")
if current_level.midground:
current_level.midground.blitme(self.screen)
if current_level.props:
for prop in current_level.props:
prop.blitme(self.screen)
if current_level.flyer:
current_level.flyer.blitme(self.screen)
if current_level.npc:
current_level.npc.blitme(self.screen)
if current_level.weather:
current_level.weather.blitme(self.screen, self.wind.rect)
except Exception as e:
print("BLIT1 ERROR: ", e)
def blit2(self):
try:
current_level = self.levels[self.current_level]
if current_level.foreground:
current_level.foreground.blitme(self.screen)
if current_level.hiddenwalls:
for hiddenwall in current_level.hiddenwalls:
hiddenwall.blitme(self.screen)
if current_level.scrollers:
for scroller in current_level.scrollers:
scroller.blitme(self.screen, "fg")
if current_level.npc:
current_level.npc.blitmetext(self.screen)
if current_level.readable:
current_level.readable.blitmetext(self.screen)
if self.names.active:
self.names.blitme(self.screen)
if os.environ.get("hitboxes"):
if current_level.platforms:
for platform in current_level.platforms:
pygame.draw.rect(self.screen, (255, 0, 0), platform.rect, 1)
if self.END:
self.Ending_Animation.blitme(self.screen)
except Exception as e:
print("BLIT2 ERROR: ", e)
def update_levels(self, king, babe, agentCommand=None):
self.update_wind(king)
self.update_hiddenwalls(king)
self.update_npcs(king)
self.update_readables(king)
self.update_flyers(king)
self.update_discovery(king)
self.update_audio()
if self.ending:
self.END = self.Ending_Animation.update(self.levels[self.current_level], king, babe)
else:
king.update(agentCommand=agentCommand)
babe.update(king)
def update_flyers(self, king):
try:
current_level = self.levels[self.current_level]
if current_level.flyer:
current_level.flyer.update(king)
except Exception as e:
print("UPDATEFLYERS ERROR: ", e)
def update_audio(self):
try:
if not self.ending:
current_level = self.levels[self.current_level]
for index, audio in enumerate(current_level.background_audio):
if not audio:
self.channels[index].stop()
elif audio != [channel.get_sound() for channel in self.channels][index]:
self.channels[index].play(audio)
self.names.play_audio()
else:
for channel in self.channels:
channel.stop()
self.Ending_Animation.update_audio()
except Exception as e:
print("UPDATEAUDIO ERROR: ", e)
def update_discovery(self, king):
try:
if not king.isFalling:
if self.levels[self.current_level].name != self.current_level_name:
self.current_level_name = self.levels[self.current_level].name
if self.current_level_name:
self.names.opacity = 255
self.names.active = True
self.names.blit_name = self.current_level_name
self.names.blit_type = self.levels[self.current_level].found
self.levels[self.current_level].found = True
except Exception as e:
print("UPDATEDISCOVERY ERROR: ", e)
def update_readables(self, king):
try:
if self.levels[self.current_level].readable:
self.levels[self.current_level].readable.update(king)
except Exception as e:
print("UPDATEREADABLES ERROR:", e)
def update_npcs(self, king):
try:
for npc in self.npcs.values():
npc.update(king)
except Exception as e:
print("UPDATENPCS ERROR:", e)
def update_hiddenwalls(self, king):
try:
if self.levels[self.current_level].hiddenwalls:
for hiddenwall in self.levels[self.current_level].hiddenwalls:
hiddenwall.check_collision(king)
except Exception as e:
print("UPDATEHIDDENWALLS ERROR: ", e)
def update_wind(self, king):
try:
wind = self.wind.calculate_wind(king)
if self.levels[self.current_level].weather:
if self.levels[self.current_level].weather.hasWind:
if not king.lastCollision:
king.angle, king.speed = king.physics.add_vectors(king.angle, king.speed, math.pi / 2, wind / 50)
elif not king.lastCollision.type == "Snow":
king.angle, king.speed = king.physics.add_vectors(king.angle, king.speed, math.pi / 2, wind / 50)
except Exception as e:
print("UPDATEWIND ERROR: ", e)
def _load_levels(self):
try:
for i in range(0, self.max_level + 1):
self.levels[i] = Level(self.screen, i)
try:
self.levels[i].background = self.background[i]
except:
pass
try:
self.levels[i].midground = self.midground[i]
except:
pass
try:
self.levels[i].foreground = self.foreground[i]
except:
pass
try:
self.levels[i].platforms = self.platforms.platforms(i)
except:
pass
try:
self.levels[i].props = self.props[i]
except:
pass
try:
self.levels[i].weather = self.weather[i]
except:
pass
try:
self.levels[i].hiddenwalls = self.hiddenwalls[i]
except:
pass
try:
self.levels[i].scrollers = self.scrollers.scrollers[i]
except:
pass
try:
if i in self.shake_levels:
self.levels[i].shake = True
except:
pass
try:
self.levels[i].background_audio = self.background_audio[i]
except:
pass
try:
self.levels[i].npc = self.npcs[i]
except:
pass
try:
self.levels[i].name = self.names.names[i]
except:
pass
try:
self.levels[i].readable = self.readables[i]
except:
pass
try:
self.levels[i].flyer = self.flyers.flyers[i]
except:
pass
except Exception as e:
print("LOAD LEVELS ERROR: ", e)
def reset(self):
self.current_level = 0
self.wind.__init__(self.screen)
self.scrollers.__init__()
self.flyers.__init__()
| true |
82d39ed0bf452788927f8fff2e72c856f17789fb | Python | ArturoBarrios9000/CYPEnriqueBC | /libro/problemas_resueltos/Capitulo2/Problema2_10.py | UTF-8 | 770 | 4.03125 | 4 | [] | no_license | A=int(input("Enter a positive integer value for A:"))
B=int(input("Enter a positive integer value for B:"))
C=int(input("Enter a positive integer value for C:"))
if A>B:
    if A>C:
        print(f"The value of A {A} is the largest")
    elif A==C:
        print(f"A and C are the largest, with a value of {A}")
    else:
        print(f"The value of C {C} is the largest")
elif A==B:
    if A>C:
        print(f"A and B are the largest, with a value of {B}")
    elif A==C:
        print(f"A, B and C are equal, with a value of {A}")
    else:
        print(f"The value of C {C} is the largest")
elif B>C:
    print(f"The value of B {B} is the largest")
elif B==C:
    print(f"B and C are the largest, with a value of {B}")
else:
    print(f"The value of C {C} is the largest")
print("End")
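The same "which inputs are largest" answer can be computed with the built-in `max()` instead of the nested ifs; `largest` is a hypothetical helper name used only for this sketch.

```python
def largest(a, b, c):
    # Map each label to its value, find the maximum, and return every label
    # whose value ties with that maximum (handles the A==B==C case too).
    values = {'A': a, 'B': b, 'C': c}
    top = max(values.values())
    return [name for name, v in values.items() if v == top]

print(largest(3, 5, 5))  # -> ['B', 'C']
```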
| true |
33f8c7e5ed00e981c62cf3288450b779657d9703 | Python | edwinlo/adventofcode2020 | /day_7/solution.py | UTF-8 | 1,882 | 3.65625 | 4 | [] | no_license | from collections import defaultdict
from re import findall, match
# gives [color bag, bag1, bag2 ..]
input_txt = [line.strip('.').split('contain') for line in open('input.txt').read().split('\n')]
def build_graph_pt_1(arr):
graph = defaultdict(set)
for line in arr:
        src_bag = match(r'([\w ]+bag)', line[0]).group()
        dest_bags = findall(r'\d ([\w ]+bag)', line[1])
for bag in dest_bags:
graph[src_bag].add(bag)
return graph
def shiny_bag_colors(graph):
"""
Run DFS for each node trying to find a path to shiny bag
- accounts for cycles & disconnected graphs
"""
def dfs(curr, visited):
if curr in visited:
return False
if curr == "shiny gold bag":
return True
visited.add(curr)
for bag in graph[curr]:
if dfs(bag, visited):
return True
return False
counts = 0
bags = list(graph.keys())
for bag in bags:
if bag != 'shiny gold bag' and dfs(bag, set()):
counts += 1
return counts
def build_graph_pt_2(arr):
graph = defaultdict(set)
for line in arr:
        src_bag = match(r'([\w ]+) bag', line[0]).group()
        dest_bags = findall(r'(\d+) ([\w ]+bag)', line[1])
for bag in dest_bags:
bag_count = int(bag[0])
bag_name = bag[1]
graph[src_bag].add((bag_count, bag_name))
return graph
def shiny_bag_count(graph):
"""
Run DFS for shiny bag node until no bags are left
- accumulate number of bags in path
- assume that there can't be cycles
"""
def dfs(curr):
count = 0
for i, bag in graph[curr]:
count += i + (i * dfs(bag))
return count
return dfs("shiny gold bag")
graph_pt_1 = build_graph_pt_1(input_txt)
print(shiny_bag_colors(graph_pt_1)) #224
graph_pt_2 = build_graph_pt_2(input_txt)
print(shiny_bag_count(graph_pt_2)) #1488
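The part-2 recursion can be sanity-checked on a tiny hand-built graph (hypothetical data, not the real puzzle input); `count_inside` mirrors the nested `dfs` above as a standalone function.

```python
# Toy version of the part-2 counting: each entry maps a bag to the
# (count, inner-bag) pairs it directly contains.
toy_graph = {
    "shiny gold bag": {(2, "red bag")},
    "red bag": {(3, "blue bag")},
    "blue bag": set(),
}

def count_inside(graph, bag):
    # i bags at this level, plus everything nested inside each of those i bags
    return sum(i + i * count_inside(graph, inner) for i, inner in graph[bag])

# 2 red bags + 2 * 3 blue bags = 8 bags in total
print(count_inside(toy_graph, "shiny gold bag"))  # -> 8
```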
| true |
d1ae4bafcf96fabf0e8df445f8d80252150a7d98 | Python | gsimore/LPTHW | /python2/ex10.py | UTF-8 | 801 | 4.15625 | 4 | [] | no_license | print "I am 6'2\" tall." # escape double-quote inside string
print 'I am 6\'2" tall.' # escape single-quote inside string
tabby_cat = "\tI'm tabbed in." # \t will tab in your string
persian_cat = "I'm split \non a line." # \n will create a line break in your string
backslash_cat = "I'm \\ a \\ %s." # \\ will print one \
r_cat = "I'm a %r."
fat_cat = """
I'll do a list:
\t* Cat food
\t* Fishies
\t* Catnip\n\t* Grass
"""
# \n\t is another way to make a line break and then a tab in the string
print tabby_cat
print persian_cat
print backslash_cat % 'cat'
print r_cat % 'cat' # note the difference between %r and %s
print fat_cat
# something else to try out:
while True: #print this forever.
for i in ["/","-","|","\\","|"]: #makes list of characters: / - | \ |
print "%s\r" % i,
| true |
84abf155421a91cf795d1c74d63907c1751a20f6 | Python | windelbouwman/jodawg | /p2p/lib/messaging.py | UTF-8 | 5,307 | 2.65625 | 3 | [] | no_license | #!/usr/bin/env python3
#
# Jodawg Peer-to-Peer Communicator
#
# messaging.py: message encapsulation
#
# Copyright (C) 2013 Almer S. Tigelaar & Windel Bouwman
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#

import copy
import datetime
import uuid

#
# So, the downside of an unencrypted envelope
# is that anyone can see who was the sender
# and receiver. To mask that one could
# encrypt the envelope as well, and always
# forward a message to a number of nodes
# only the node with the right private
# key can really decrypt the message.
#
# When store and forwarding, multiple peers
# must be specified as destination.
class MessageHeader(object):
"""A message header contains meta-data concerning
the message, and is essentially a dictionary with named fields."""
__slots__ = [ "identifier", "revision", "datetime", "fields" ]
def __init__(self):
self.identifier = uuid.uuid5(uuid.NAMESPACE_DNS, 'jodawg.org')
self.revision = 0
self.datetime = datetime.datetime.now()
self.fields = {}
def set_field(self, name, value):
assert name is not None and len(name) > 0
assert value is not None and len(value) > 0
self.fields[name] = value
def get_field(self, name):
assert name is not None and len(name) > 0
return self.fields[name]
class MessagePart(object):
"""A message part can be added to a message."""
pass
class MessagePartText(MessagePart):
"""A specific message part consisting of only text data."""
__slots__ = [ "content" ]
def __init__(self, _content=""):
self.content = _content
class Message(object):
"""A message consists of a header and
one or more message parts. When a message needs to
be revised. It must be cloned, then changed and re-added
to a message log."""
__slots__ = [ "header", "parts" ]
def __init__(self, _header=None):
        if _header is not None:
self.header = _header
else:
self.header = MessageHeader()
self.parts = []
def add_part(self, message_part):
self.parts.append(message_part)
    def clone_revision(self):
        """Clones the entire message to make a new revision
        of any of its contents."""
        clone = copy.deepcopy(self)
        clone.header.revision += 1
        return clone
class SystemMessage(Message):
pass
class MessageLog(object):
"""A message log is a collection of messages.
intended for two or more participants. A message
log must be synchronized between nodes associated
with the specified participants."""
__slots__ = [ "revision", "participants", "messages", "message_tracking" ]
def __init__(self):
self.participants = set([])
self.messages = []
self.message_tracking = set([])
self.revision = 0
def add_participant(self, user):
"""Adds a participant to the log.
NOTE: The same participant may be added multiple times, this has
no real effect, but it's not considered erroneous either.
@param user The user to add.
"""
self.participants.add(user)
# Only output status message in group chat
if len(self.participants) > 2:
message = SystemMessage()
message.add_part(MessagePartText("User " + str(user) + " was added"))
            self.append_message(message) # This also increases the revision, forcing everyone to sync
else:
self.revision += 1
def remove_participant(self, user):
"""Removes a participant from this log.
@param The user to remove.
"""
self.participants.remove(user)
assert len(self.participants) > 1 # at least two
message = SystemMessage()
message.add_part(MessagePartText("User " + str(user) + " was removed"))
        self.append_message(message) # This also increases the revision, forcing everyone to sync
def append_message(self, message):
"""Appends a new message to the end of the log.
NOTE: You can add a unique message + revision ONLY ONCE.
In theory the same message could be appended to multiple logs.
@param message The message to append.
"""
# no point in adding messages if there's only one person
assert len(self.participants) > 1
# Can't add the same message, same revision twice
assert (message.header.identifier, message.header.revision) not in self.message_tracking
self.messages.append(message)
self.message_tracking.add((message.header.identifier, message.header.revision))
self.revision += 1
class MessagingService():
def __init__(self):
pass
| true |
a6866a2b2c315e92458d85d1230b9d2f5e42a1cd | Python | utsengar/python-dynamo | /dynamo/storage/storage_node.py | UTF-8 | 7,350 | 2.71875 | 3 | [] | no_license | #!/usr/bin/env python
# ------------------------------------------------------
# Imports
# ------------------------------------------------------
import logging
import xmlrpclib
import socket
from SimpleXMLRPCServer import SimpleXMLRPCServer
from optparse import OptionParser
from datetime import datetime, timedelta
from dynamo.storage.datastore_view import DataStoreView
from dynamo.storage.persistence.sqlite_persistence_layer import SqlitePersistenceLayer
# ------------------------------------------------------
# Config
# ------------------------------------------------------
logging.basicConfig(level=logging.INFO)
# ------------------------------------------------------
# Implementation
# ------------------------------------------------------
class StorageNode(object):
"""
A storage node.
"""
GET = 'GET'
PUT = 'PUT'
def __init__(self, servers, port):
"""
Parameters:
servers : list(str)
A list of servers. Each server name is in the
format {host/ip}:port
port : int
Port number to start on
"""
self.port = int(port)
self.server = None
if servers is None:
servers = []
# Add myself to the servers list
self.my_name = str(self)
servers.append(self.my_name)
self.datastore_view = DataStoreView(servers)
# Load the persistence layer
self._load_persistence_layer()
def __del__(self):
"""
Destructor
"""
if self.persis:
self.persis.close()
if self.server:
self.server.server_close()
def __str__(self):
"""
Builds a string representation of the storage node
:rtype: str
:returns: A string representation of the storage node
"""
        if getattr(self, 'port', None):
            return '%s:%s' % (socket.gethostbyname(socket.gethostname()), self.port)
        else:
            return '%s' % socket.gethostbyname(socket.gethostname())
# ------------------------------------------------------
# Public methods
# ------------------------------------------------------
def run(self):
"""
Main storage node loop
"""
self.server = SimpleXMLRPCServer(('', self.port), allow_none=True)
self.server.register_function(self.get, "get")
self.server.register_function(self.put, "put")
self.server.serve_forever()
# ------------------------------------------------------
# RPC methods
# ------------------------------------------------------
def get(self, key):
"""
Gets a key
:Parameters:
key : str
The key value
"""
logging.debug('Getting key=%s' % key)
# Make sure I am supposed to have this key
respon_node = self.datastore_view.get_node(key)
if respon_node != self.my_name:
logging.info("I'm not responsible for %s (%s vs %s)" % (key,
respon_node,
self.my_name))
return None
# Read it from the database
result = self.persis.get_key(key)
# If the contexts don't line up then return the most recent
value = None
if len(result) == 1:
value = result[0][1]
else:
value = self._reconcile_conflict(result)[0]
logging.debug('Returning value=%s' % value)
return value
def put(self, key, value, context=None):
"""
Puts a key value in the datastore
:Parameters:
key : str
The key name
value : str
The value
context : str
Should be only be None for now. In the future an application will be
able to add a custom context string
:rtype: str
:returns 200 if the operation succeeded, 400 otherwise
"""
# Make sure I am supposed to have this key
if self.datastore_view.get_node(key) != self.my_name:
logging.info("I'm not responsible for %s" % key)
return None
        res_code = None
        try:
            # Write it to the database
            self.persis.put_key(key, value)
            res_code = '200'
        except Exception:
            logging.error('Error putting key=%s value=%s into the persistence layer' %
                          (key, value))
            res_code = '400'
return res_code
# ------------------------------------------------------
# Private methods
# ------------------------------------------------------
def _reconcile_conflict(self, result):
"""
Reconciles the conflict between a number of values. Note
that currently this defaults to taking the last written value.
In the future this will be expanded to allow application specific
conflict resolution
:Parameters:
result : list(tuples)
A list of result tuples from the persistence layer in the form
[(id, "value", "date"), ...]
        :rtype: tuple(str, datetime)
        :returns A (value, date) tuple of the chosen version
"""
last_result = None
last_date = None
for res in result:
if last_result is None:
last_result = res[1]
last_date = self._parse_date(res[2])
else:
date = self._parse_date(res[2])
if date > last_date:
last_date = date
last_result = res[1]
return (last_result, last_date)
def _load_persistence_layer(self):
"""
Loads the persistence layer
"""
# Setup my persistence layer
self.persis = SqlitePersistenceLayer(self.my_name)
self.persis.init_persistence()
def _parse_date(self, datestr):
"""
Parses an iso formatted date
:Parameters:
datestr : str
An iso formatted date
:rtype: datetime
:returns A date object
"""
date_str, micros = datestr.split('.')
date = datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S")
date += timedelta(microseconds=float(micros))
return date
# ------------------------------------------------------
# Main
# ------------------------------------------------------
def parse_args():
parser = OptionParser()
parser.add_option('-s', '--server', dest='servers',
help='List of storage nodes(one per server)',
action='append', default=[])
parser.add_option('-p', '--port', dest='port', default=25000,
help='Port to start the storage node on')
options, args = parser.parse_args()
return options
if __name__ == '__main__':
options = parse_args()
storage_node = StorageNode(options.servers, options.port)
storage_node.run()
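The ISO-ish timestamp handling in `_parse_date` can be exercised on its own; this standalone copy (hypothetical name `parse_date`, duplicated so it runs without a node) mirrors the method above.

```python
from datetime import datetime, timedelta

def parse_date(datestr):
    # Split off the fractional part, parse the rest, then re-add the microseconds.
    date_str, micros = datestr.split('.')
    date = datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S")
    return date + timedelta(microseconds=float(micros))

d = parse_date("2019-05-01 12:30:45.000123")
print(d.year, d.microsecond)  # -> 2019 123
```

Note that, like the original, this treats the digits after the dot as a literal microsecond count rather than a decimal fraction of a second.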
| true |
42ff874fcd69998581416a15865815eec4a01780 | Python | choga88/dpm | /파이썬/python_example/pickle2.py | UTF-8 | 107 | 2.8125 | 3 | [] | no_license | import pickle
f=open("text2.txt",'rb')
for i in range(1,11):
data = pickle.load(f)
print(data)
f.close()
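The reader above assumes ten objects were pickled back-to-back into the file. A matching in-memory sketch of that write/read round trip (using `io.BytesIO` so no file is touched):

```python
import io
import pickle

buf = io.BytesIO()
for i in range(1, 11):
    pickle.dump(i * 10, buf)  # ten consecutive pickled objects, like text2.txt

buf.seek(0)
values = [pickle.load(buf) for _ in range(10)]  # load them back one at a time
print(values[0], values[-1])  # -> 10 100
```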
| true |
66fbefe35ef8eb77a3f078008e60e465e4c416f1 | Python | MyIsaak/Dev-Logger | /main.py | UTF-8 | 3,489 | 2.609375 | 3 | [] | no_license | import json
import os
import datetime
import argparse
import git
from mss import mss
from twitter.api import Twitter
try:
to_unicode = unicode
except NameError:
to_unicode = str
# Get the settings from settings.json
settings = {}
with open('settings.json') as outfile:
settings = json.load(outfile)
# Setup the parser
parser = argparse.ArgumentParser(
description='A simpler faster way to log your development' +
'process that tweets and pushes commits with screenshots')
parser.add_argument('message', type=str,
help='text to be commited and tweeted')
parser.add_argument('--append', action="store_true",
help='Commit on the same day as the last commit')
parser.add_argument('--offline', action="store_true",
help='Only update the readme file, no commit or tweet')
parser.add_argument('--text', action="store_true",
help="Don't take a screenshot for the latest log")
args = parser.parse_args()
# Setting up repo
if args.offline is False:
repo = git.Repo(settings['repo'])
# Initialize the data.json file
def setupData():
data = {
'day': 1,
'log': [""]
}
# Write JSON file
with open('data.json', 'w', encoding='utf8') as outfile:
str_ = json.dumps(data,
indent=4, sort_keys=True,
separators=(',', ': '), ensure_ascii=False)
outfile.write(to_unicode(str_))
if os.path.exists('data.json') is False:
setupData()
data = {}
with open('data.json') as outfile:
    try:
        data = json.load(outfile)
    except Exception:
        setupData()
        with open('data.json') as fresh:
            data = json.load(fresh)
with open('data.json', 'w', encoding='utf8') as outfile:
if data['log'] is None:
setupData()
if data['day'] is None:
data['day'] = 1
day = data['day']
logTitle = '### Day ' + str(day) + ': ' + \
datetime.datetime.now().strftime("%B %d, %A")
if args.append is False:
if data['log'][-1] == "":
data['log'][-1] = (logTitle + '\n\n' + args.message)
else:
data['log'].append(logTitle + '\n\n' + args.message)
data['day'] += 1
else:
data['log'][-1] += ('\n\n' + args.message)
# The simplest use, save a screenshot of the 1st monitor
# TODO: Use the callback of sct.save with lambda
    if args.text is False:
        with mss() as sct:
            shot_path = sct.shot(output=settings['gallery'] + '/{date:%Y-%m-%d-%s}.png')
            # link the fresh screenshot into the log entry as a markdown image
            data['log'][-1] += ('\n\n[](Screenshot)')

    str_ = json.dumps(data, indent=4, sort_keys=True, separators=(',', ': '), ensure_ascii=False)
    outfile.write(to_unicode(str_))
# Login and send tweet to Twitter
if args.offline is False:
twitter = Twitter(settings['username'], settings['password'])
twitter.statuses.update(data['log'][-1] + ' ' + settings['hashtags'])
# Update devlog markdown file with screenshot and commit
with open(settings['logpath'], 'w', encoding='utf8') as outfile:
str_ = ""
for log in data['log']:
str_ += log + '\n\n'
outfile.write(to_unicode(str_))
if args.offline is False:
repo.git.commit('-m', args.message, author=settings['email'])
| true |
93e3b2897ca703b020d52e13d43bc4f872775d3a | Python | ambarish710/python_concepts | /leetcode/medium/22_Generate_Parentheses.py | UTF-8 | 1,948 | 4.34375 | 4 | [] | no_license | # Given n pairs of parentheses, write a function to generate all combinations of well-formed parentheses.
#
#
#
# Example 1:
#
# Input: n = 3
# Output: ["((()))","(()())","(())()","()(())","()()()"]
# Example 2:
#
# Input: n = 1
# Output: ["()"]
#
#
# Constraints:
#
# 1 <= n <= 8
# Logic
# Classic example of recursion + backtracking...
# You write a method which takes openbracket, closingbracket count, max number of brackets allowed and s == current temp variable
# Have a few basic checks and boom you're done
# Basic check 1 --> if closingbrackets == n:
# Add the current list (conv to string) and put it inside a list
#       Used a list because lists are mutable and strings aren't
# if openbrackets > closingbrackets:
# Add a new closing bracket
# recursively call the same function or self again by incrementing closing bracket
# pop the recently added bracket --> This comes as part of backtracking
# if openbrackets < n:
# Add a new opening bracket
# recursively call the same function or self again by incrementing opening bracket
# pop the recently added bracket --> This comes as part of backtracking
from typing import List

class Solution:
def generateParenthesis(self, n: int) -> List[str]:
self.output = []
        def backtracking(openbrackets, closingbrackets, n, s=None):
            if s is None:
                s = []
if closingbrackets == n:
self.output.append("".join(s))
return
else:
if openbrackets > closingbrackets:
s.append(")")
backtracking(openbrackets, closingbrackets + 1, n, s)
s.pop()
if openbrackets < n:
s.append("(")
backtracking(openbrackets + 1, closingbrackets, n, s)
s.pop()
return
backtracking(openbrackets=0, closingbrackets=0, n=n)
return self.output | true |
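The same backtracking idea as a standalone function (hypothetical name `generate_parenthesis`, mirroring the class method so it can run outside LeetCode):

```python
def generate_parenthesis(n):
    output = []

    def backtrack(open_count, close_count, s):
        if close_count == n:          # every bracket closed: record one result
            output.append("".join(s))
            return
        if open_count > close_count:  # a ')' is legal only with an unmatched '('
            s.append(")")
            backtrack(open_count, close_count + 1, s)
            s.pop()                   # undo the choice (backtracking)
        if open_count < n:            # still allowed to open another '('
            s.append("(")
            backtrack(open_count + 1, close_count, s)
            s.pop()                   # undo the choice (backtracking)

    backtrack(0, 0, [])
    return output

print(generate_parenthesis(2))  # -> ['()()', '(())']
```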
30cfd3857ac5d953d491f423c2bf579144187802 | Python | jcmaeng/SDCND_Capstone | /ros/src/tl_detector/tl_detector.py | UTF-8 | 9,464 | 2.515625 | 3 | [
"MIT"
] | permissive | #!/usr/bin/env python
import rospy
from std_msgs.msg import Int32, Header
from geometry_msgs.msg import Quaternion, PoseStamped, Pose
from styx_msgs.msg import TrafficLightArray, TrafficLight
from styx_msgs.msg import Lane
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from light_classification.tl_classifier import TLClassifier
import tf
import os
import math
import cv2
import yaml
STATE_COUNT_THRESHOLD = 3
class TLDetector(object):
def __init__(self):
rospy.init_node('tl_detector')
self.pose = None
self.waypoints = None
self.camera_image = None
self.lights = []
sub1 = rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
sub2 = rospy.Subscriber('/base_waypoints', Lane, self.waypoints_cb)
'''
/vehicle/traffic_lights provides you with the location of the traffic light in 3D map space and
helps you acquire an accurate ground truth data source for the traffic light
classifier by sending the current color state of all traffic lights in the
simulator. When testing on the vehicle, the color state will not be available. You'll need to
rely on the position of the light and the camera image to predict it.
'''
sub3 = rospy.Subscriber('/vehicle/traffic_lights', TrafficLightArray, self.traffic_cb)
sub6 = rospy.Subscriber('/image_color', Image, self.image_cb)
config_string = rospy.get_param("/traffic_light_config")
self.config = yaml.load(config_string)
self.light_positions = self.config['stop_line_positions']
# stop_line_positions
self.upcoming_red_light_pub = rospy.Publisher('/traffic_waypoint', Int32, queue_size=1)
self.bridge = CvBridge()
self.light_classifier = TLClassifier()
self.listener = tf.TransformListener()
self.state = TrafficLight.UNKNOWN
self.last_state = TrafficLight.UNKNOWN
self.last_wp = -1
self.state_count = 0
rospy.spin()
def pose_cb(self, msg):
self.pose = msg
def waypoints_cb(self, waypoints):
self.waypoints = waypoints
def traffic_cb(self, msg):
self.lights = msg.lights
def image_cb(self, msg):
"""Identifies red lights in the incoming camera image and publishes the index
of the waypoint closest to the red light's stop line to /traffic_waypoint
Args:
msg (Image): image from car-mounted camera
"""
self.has_image = True
self.camera_image = msg
self.process_traffic_lights()
'''
Publish upcoming red lights at camera frequency.
Each predicted state has to occur `STATE_COUNT_THRESHOLD` number
of times till we start using it. Otherwise the previous stable state is
used.
'''
"""
if self.state != light_state:
self.state_count = 0
self.state = light_state
elif self.state_count >= STATE_COUNT_THRESHOLD:
self.last_state = self.state
light_waypoint = light_waypoint if light_state == TrafficLight.RED else -1
self.last_wp = light_waypoint
self.upcoming_red_light_pub.publish(Int32(light_waypoint))
else:
self.upcoming_red_light_pub.publish(Int32(self.last_wp))
self.state_count += 1
"""
# --------------------------------------------------------------------------------------------
def light_loc(self,state, lx, ly, lz, lyaw):
# light state initialization
light = TrafficLight()
#
light.state = state
# header position
light.header = Header()
light.header.stamp = rospy.Time.now()
light.header.frame_id = 'world'
# pose position
light.pose = PoseStamped()
light.pose.header.stamp = rospy.Time.now()
light.pose.header.frame_id = 'world'
light.pose.pose.position.x = lx
light.pose.pose.position.y = ly
light.pose.pose.position.z = lz
q_from_euler = tf.transformations.quaternion_from_euler(0.0, 0.0, math.pi*lyaw/180.0)
light.pose.pose.orientation = Quaternion(*q_from_euler)
return light
def dist2d(self, x1, y1, x2, y2):
return math.sqrt((x2-x1)**2 + (y2-y1)**2)
def dist3d(self,pos1, pos2):
return math.sqrt((pos1.x-pos2.x)**2 + (pos1.y-pos2.y)**2 + (pos1.z-pos2.z)**2)
# ------------------------------------------------------------------------------------------
def get_closest_waypoint(self, pose):
"""Identifies the closest path waypoint to the given position
https://en.wikipedia.org/wiki/Closest_pair_of_points_problem
Args:
pose (Pose): position to match a waypoint to
Returns:
int: index of the closest waypoint in self.waypoints
"""
#TODO implement by using kd tree(scipy~, see line 61~62)
dist = float('inf')
# dl = lambda a, b: math.sqrt((a.x-b.x)**2 + (a.y-b.y)**2 + (a.z-b.z)**2)
closest_wp_idx = 0
for i in range(len(self.waypoints.waypoints)):
new_dist = self.dist3d(pose.position, self.waypoints.waypoints[i].pose.pose.position)
if new_dist < dist:
dist = new_dist
closest_wp_idx = i
return closest_wp_idx
def get_light_state(self, light):
"""Determines the current color of the traffic light
Args:
light (TrafficLight): light to classify
Returns:
int: ID of traffic light color (specified in styx_msgs/TrafficLight)
"""
# for testing
# return light.state
        if(not self.has_image):
            self.prev_light_loc = None
            return TrafficLight.UNKNOWN
#self.camera_image.encoding = "rgb8"
cv_image = self.bridge.imgmsg_to_cv2(self.camera_image, "rgb8")
# cv_image = self.bridge.imgmsg_to_cv2(self.camera_image, "bgr8")
# Get classification
state = self.light_classifier.get_classification(cv_image)
if state == TrafficLight.UNKNOWN and self.last_state:
state = self.last_state
return state
def process_traffic_lights(self):
"""Finds closest visible traffic light, if one exists, and determines its
location and color
Returns:
int: index of waypoint closes to the upcoming stop line for a traffic light (-1 if none exists)
int: ID of traffic light color (specified in styx_msgs/TrafficLight)
"""
#closest_light = None
#line_wp_idx = None
light = None
light_waypoint = None
if self.waypoints == None or self.lights == None:
return -1, TrafficLight.UNKNOWN
# List of positions that correspond to the line to stop in front of for a given intersection
if self.pose and self.waypoints:
# car_position
car_wp = self.get_closest_waypoint(self.pose.pose)
light_positions = self.light_positions
min_dist = float('inf')
for i, light_pos in enumerate(self.light_positions):
light_now = self.light_loc(TrafficLight.UNKNOWN, light_pos[0], light_pos[1],
0.0, 0.0)
light_wp = self.get_closest_waypoint(light_now.pose.pose)
light_dist = self.dist2d(self.waypoints.waypoints[car_wp].pose.pose.position.x,
self.waypoints.waypoints[car_wp].pose.pose.position.y,
self.waypoints.waypoints[light_wp].pose.pose.position.x,
self.waypoints.waypoints[light_wp].pose.pose.position.y)
if (light_wp % len(self.waypoints.waypoints)) > (car_wp % len(self.waypoints.waypoints)) and (light_dist < 100) and (light_dist < min_dist):
light = light_now
closest_light_wp = light_wp
min_dist = light_dist
"""
uint8 UNKNOWN=4
uint8 GREEN=2
uint8 YELLOW=1
uint8 RED=0
"""
if light:
state = self.get_light_state(light)
light_wp = closest_light_wp
rospy.logwarn("Traffic light id: {}, and its color state: {}".format(closest_light_wp, state))
else:
state = TrafficLight.UNKNOWN
light_wp = -1
else:
state = TrafficLight.RED
light_wp = -1
if self.state != state:
self.state_count = 0
self.state = state
elif self.state_count >= STATE_COUNT_THRESHOLD:
self.last_state = self.state
if state not in [TrafficLight.RED, TrafficLight.YELLOW]:
light_wp = -1
self.last_wp = light_wp
self.upcoming_red_light_pub.publish(Int32(light_wp))
else:
self.upcoming_red_light_pub.publish(Int32(self.last_wp))
self.state_count += 1
if __name__ == '__main__':
try:
TLDetector()
except rospy.ROSInterruptException:
rospy.logerr('Could not start traffic node.')
| true |
8e4354171ea3f1a8c9c861e603638fcb80b632de | Python | MysteriousSonOfGod/py | /dataSet.py | UTF-8 | 772 | 3.15625 | 3 | [] | no_license | # This file takes input from the dataset of Kickstarter and formulates the
# dictionary object which could later be manupulated programatically
import os
os.getcwd()
path = r'.\Data Sets\Kaggle\kickstarter-project-statistics'
dataFile = 'most_backed.csv'
os.chdir(path)
print('*****Opening File****')
dataFile = open(dataFile,mode='r')
data = []
data = dataFile.readline().split(',')
data[0] ='Sno'
print('*****Initiating Header*****')
print(data)
master = {}
#records = {}
for i in range(1,11):
row = dataFile.readline().split(',')
records = {}
for j in range(0,len(data)):
key = data[j]
records[key]=row[j]
#print(records)
master[i] = records
print('******Closing File******')
dataFile.close()
print(master[9])
print(master)
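The same records-keyed-by-row-number structure can be built with `csv.DictReader`, which pairs each row with the header automatically (a sketch over in-memory sample data, not the real most_backed.csv):

```python
import csv
import io

sample = io.StringIO("Sno,name,amt\n1,alpha,10\n2,beta,20\n")
reader = csv.DictReader(sample)  # uses the first row as the keys
master2 = {i: dict(row) for i, row in enumerate(reader, start=1)}
print(master2[1])  # -> {'Sno': '1', 'name': 'alpha', 'amt': '10'}
```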
| true |
26439882e52ca6d135e120c752af35581672125b | Python | gcbanevicius/email_scripts | /email_strip.py | UTF-8 | 904 | 2.734375 | 3 | [] | no_license | #!/usr/bin/python
import sys
import csv
import os
f1 = open("passes1.csv", "r")
f2 = open("passes2.csv", "r")
f3 = open("passes3.csv", "r")
c1 = csv.reader(f1)
c2 = csv.reader(f2)
c3 = csv.reader(f3)
outfi = open("passes_bounced.txt", "w")
for row1 in c1:
found = False
email = row1[3]
#set name var if they have a middle name
if row1[1] != '':
name = row1[2] + ' ' + row1[1] + ' ' + row1[0]
    #set name var if they don't have a middle name
else:
name = row1[2] + ' ' + row1[0]
outfi.write("%s,%s," %(name,email) )
for row2 in c2:
if name in row2[0]:
outfi.write("%s\n" %row2[2] )
found = True
break
f2.seek(0)
if found == True:
continue
for row3 in c3:
if name in row3[0]:
outfi.write("%s\n" %row3[2] )
found = True
break
f3.seek(0)
if found == True:
continue
outfi.write("NO PASSWORD...\n")
outfi.close()
| true |
8c72d41a5372d62895088a874c6016af4464212b | Python | asv-github/bp4u-irc-bots | /fraybot.py | UTF-8 | 1,630 | 2.828125 | 3 | [
"MIT"
] | permissive | #!/usr/bin/python
from bot import *
import re, random, threading
class FrayBot(Bot):
initdefaults = {"chans": {"#yolo"}, "nick": "FrayBot", "user": "fraybot", "longuser": "H. Fraybot"}
def __init__(self, **kwargs):
for k, v in self.initdefaults.items():
if k not in kwargs:
kwargs[k] = v
Bot.__init__(self,**kwargs)
print("Loading booklist into memory....")
with open("books","r", encoding="UTF-8") as booklistfile:
self.booklist = list(filter(lambda s : not s.startswith("#"), booklistfile.readlines()))
print ("Loaded %d books." % len(self.booklist))
def handle_pm(self, what, fromwhom):
if what == "books plz":
self.spam_reuse()
else:
self.say("Sorry, the item has been claimed.",fromwhom)
def spam_reuse(self):
# Construct a spammy reuse message
numbooks = int(random.triangular(3.5, 9.5, 6.5)) # Number of books: Let's try 6, plus or minus 3. (.5 corrects for rounding down)
books = random.sample(self.booklist,numbooks)
message = ["Reuse: Books (will send ONLY if you send interoffice address):"] + [str(i+1) + ". " + books[i] for i in range(numbooks)]
for chan in self.chans:
for line in message:
self.say(line, chan)
def repeatedly_spam_reuse(self, avgperiod=3600):
self.spam_reuse();
self.spamtimer = threading.Timer(random.expovariate(1.0 / avgperiod), self.repeatedly_spam_reuse, kwargs={'avgperiod':avgperiod}) # Poisson process with average time between spams of avgperiod
self.spamtimer.start()
if __name__ == "__main__":
fraybot = FrayBot(chans=['#tetazoo','#nanometer'])
fraybot.repeatedly_spam_reuse(avgperiod=24*7*3600)
while True:
fraybot.process()
| true |