# File: todo_app/display.py (repo: WeaverDyl/python-todo, MIT)
import os
import math
import shutil
import textwrap
from datetime import datetime
from terminaltables import AsciiTable
class Display:
def __init__(self):
self.colors = {
'RED': '\033[38;5;196m',
'ORANGE': '\033[38;5;220m',
'GREEN': '\033[38;5;46m',
'BOLD': '\u001b[1m',
'UNDERLINE': '\u001b[4m',
'RESET': '\033[0m'
}
# Defines where to insert newlines in case of
# situations where one task has some long columns
self.max_col_widths = {
'ID': 4,
'Added': 10,
'Title': 30,
'Description': 30,
'Due': 10,
'Finished': 1
}
def color_message(self, message, *args):
""" Sets a message to be a specific color from the colors dict before resetting """
args_list = [str(color) for color in args]
colors = ''.join([self.colors[i] for i in args_list])
return ''.join([colors, message, self.colors['RESET']])
def print_error(self, message):
""" Prints a message in bold, red characters """
print(self.color_message(message, 'BOLD', 'RED'))
def print_success(self, message):
""" Prints a message in bold, green characters """
print(self.color_message(message, 'BOLD', 'GREEN'))
def print_message(self, message):
""" Prints a message in bold characters """
print(self.color_message(message, 'BOLD'))
@staticmethod
def clear_terminal():
""" Clears a terminal to prepare for output """
os.system('cls' if os.name == 'nt' else 'clear')
def print_welcome(self):
""" Prints a simple welcome message. """
Display.clear_terminal()
self.print_message('Welcome to python-todo!\n')
def print_commands(self):
""" Prints a list of available commands to run the program with.
Shown when the user has an empty task list """
commands = [[self.color_message(i, 'BOLD') for i in ['Commands', 'Description']],
['-a/--add', 'Add a new element to a task list'],
['-r/--remove', 'Remove an element from a task list'],
['-f/--finish', 'Finish a task in a task list'],
['-u/--unfinish', 'Unfinish a task in a task list'],
['-c/--change', 'Change parts of an existing task'],
['-v/--view', 'View the whole task list']]
table_data = commands
table = AsciiTable(table_data)
table.inner_row_border = True
if not self.check_table_fit(table):
            self.print_message('Try adding a task to your list! Just call `python-todo -a`.')
else:
self.print_message('Try adding a task to your list! Here\'s the available commands:')
print(table.table)
@staticmethod
def check_table_fit(table):
""" Returns true if a terminaltable will fit within the width of
the current terminal width"""
term_width = shutil.get_terminal_size().columns
table_width = table.table_width
if table_width > term_width:
return False
return True
def format_row(self, tasks):
""" Performs formatting tasks such as changing task completions from (0,1) to (X/✓) """
formatted_tasks = []
for task in tasks:
# Format specific columns
title = task['Title']
description = task['Description']
timestamp = task['Added']
finished = task['Finished?']
due = task['Due']
formatted_timestamp = self.format_time(timestamp)
formatted_finished = self.color_message('✓', 'GREEN', 'BOLD') if finished == 1 else self.color_message('X', 'BOLD', 'RED')
formatted_due = self.format_due_date(due, finished)
# Wrap long lines in the title or description
formatted_title = self.format_long_lines(title, 'Title')
formatted_description = self.format_long_lines(description, 'Description')
task['Title'] = formatted_title
task['Description'] = formatted_description
task['Added'] = formatted_timestamp
task['Finished?'] = formatted_finished
task['Due'] = formatted_due
formatted_tasks.append(task)
return formatted_tasks
@staticmethod
def format_time(timestamp):
""" Returns a nice timestamp telling the user how old a task is.
Returns strings such as '1d ago' """
timestamp_datetime = datetime.strptime(timestamp, '%Y-%m-%d %H:%M:%S')
curr_time = datetime.now()
total_time_diff = curr_time - timestamp_datetime
# Time Constants
SECONDS_IN_MIN = 60
SECONDS_IN_HOUR = 3600
SECONDS_IN_DAY = 86400
SECONDS_IN_WEEK = 604800
SECONDS_IN_MONTH = 2592000
SECONDS_IN_YEAR = 31536000
# Print out formatted time difference
if total_time_diff.total_seconds() < 10:
            return 'just now'
if total_time_diff.total_seconds() < SECONDS_IN_MIN:
seconds_passed = math.floor(total_time_diff.total_seconds())
return f'{seconds_passed}s ago'
if total_time_diff.total_seconds() < SECONDS_IN_HOUR:
minutes_passed = math.floor(total_time_diff.total_seconds() / SECONDS_IN_MIN)
return f'{minutes_passed}m ago'
if total_time_diff.total_seconds() < SECONDS_IN_DAY:
hours_passed = math.floor(total_time_diff.total_seconds() / SECONDS_IN_HOUR)
return f'{hours_passed}h ago'
if total_time_diff.total_seconds() < SECONDS_IN_WEEK:
days_passed = math.floor(total_time_diff.total_seconds() / SECONDS_IN_DAY)
return f'{days_passed}d ago'
if total_time_diff.total_seconds() < SECONDS_IN_MONTH:
weeks_passed = math.floor(total_time_diff.total_seconds() / SECONDS_IN_WEEK)
return f'{weeks_passed}w ago'
if total_time_diff.total_seconds() < SECONDS_IN_YEAR:
months_passed = math.floor(total_time_diff.total_seconds() / SECONDS_IN_MONTH)
return f'{months_passed}mo ago'
years_passed = math.floor(total_time_diff.total_seconds() / SECONDS_IN_YEAR)
return f'{years_passed}yr ago'
@staticmethod
def validate_date(date_str):
""" Ensures that the date given is in an acceptable format """
for date_format in ('%m/%d/%Y', '%m-%d-%Y'):
try:
if datetime.strptime(date_str, date_format):
return True
except ValueError:
pass
return False
def format_due_date(self, due_date, finished):
""" Formats the due date column to be colored based on how close
the task is to its due date. (Red = overdue, etc...)"""
# Don't format tasks that don't have a due date or are finished
if due_date == '' or finished == 1:
return due_date
curr_time = datetime.now()
try:
due_date_time = datetime.strptime(due_date, '%m-%d-%Y')
except ValueError:
due_date_time = datetime.strptime(due_date, '%m/%d/%Y')
time_until_due = due_date_time - curr_time
SECONDS_IN_DAY = 86400
# Overdue tasks are colored red
if int(time_until_due.total_seconds()) < 0:
return self.color_message(due_date, 'RED', 'BOLD')
# Tasks due in 24 hours or less are colored orange
if int(time_until_due.total_seconds()) < SECONDS_IN_DAY:
return self.color_message(due_date, 'ORANGE', 'BOLD')
return due_date
def format_long_lines(self, long_text, element):
wrapper = textwrap.TextWrapper(width=self.max_col_widths[element])
return '\n'.join(wrapper.wrap(text=long_text))
def print_task_list_formatted(self, rows):
""" Prints each formatted task to the terminal in the form
of a table """
header = [self.color_message(i, 'BOLD') for i in ['ID', 'Added', 'Title', 'Description', 'Due', 'Finished?']]
        table_data = [list(task.values()) for task in rows]  # terminaltables expects lists of lists
table_data.insert(0, header) # The column headers are the first element of the list
table = AsciiTable(table_data) # Create the table -- but test width before printing
table.inner_row_border = True # Separates each task
if not self.check_table_fit(table):
max_width_table = table.table_width
term_width = shutil.get_terminal_size().columns
self.print_message(f'The task list has a width of {max_width_table} and cannot fit in the terminal of width {term_width}.')
return
# The table fits and we can print it
self.print_message('Here are your current tasks:')
print(table.table)
# Methods for ADDING tasks
def ask_user_title(self):
""" Asks the user for the title of the task """
title = ''
while title == '':
title = input(self.color_message('Give your task a name: ', 'BOLD'))
if title == '':
self.print_error('The title can\'t be an empty string!')
return title
def ask_user_description(self):
""" Gets an optional description from the user """
description = input(self.color_message('Optionally, give your task a description: ', 'BOLD'))
return description
def ask_user_due(self):
""" Gets an optional due date for the task from the user """
date = ''
asked = False
while not asked or not self.validate_date(date):
date = input(self.color_message('Optionally, give your task a due date (\'mm/dd/yyyy\' or \'mm-dd-yyyy\'): ', 'BOLD'))
asked = True
if date == '':
return date
if not self.validate_date(date):
self.print_error('That\'s not a valid date format!')
return date
def ask_user_finished(self):
""" Asks a user if a task is finished """
valid_responses = {
'yes': True,
'y': True,
'no': False,
'n': False
}
default_resp = False
while True:
user_resp = input(self.color_message('Is the task already finished? (y/N): ', 'BOLD')).lower()
if user_resp in valid_responses:
return valid_responses[user_resp]
if user_resp == '':
return default_resp
self.print_error('That\'s not a valid answer! Answer (y/N).')
def ask_user_id(self, action):
""" Ask the user for a task ID to remove/finish/unfinish/update """
row_id = input(self.color_message(f'What task would you like to {action}? (Enter an ID or `-1` to cancel): ', 'BOLD'))
return row_id
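The bucketing logic in `format_time` can be exercised on its own; below is a minimal standalone sketch of the same thresholds (the `time_ago` helper is illustrative, not part of the app):

```python
import math

# Mirrors the threshold constants used in Display.format_time
BUCKETS = [
    (60, 1, 's'),              # under a minute -> seconds
    (3600, 60, 'm'),           # under an hour  -> minutes
    (86400, 3600, 'h'),        # under a day    -> hours
    (604800, 86400, 'd'),      # under a week   -> days
    (2592000, 604800, 'w'),    # under a month  -> weeks
    (31536000, 2592000, 'mo'), # under a year   -> months
]

def time_ago(delta_seconds):
    if delta_seconds < 10:
        return 'just now'
    for limit, unit, suffix in BUCKETS:
        if delta_seconds < limit:
            return f'{math.floor(delta_seconds / unit)}{suffix} ago'
    return f'{math.floor(delta_seconds / 31536000)}yr ago'
```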

# File: tests/controllers/controller_with_throttling.py (repo: DmitryKhursevich/winter, MIT)
from http import HTTPStatus
import winter.web
from winter.web import ExceptionHandler
from winter.web.exceptions import ThrottleException
class CustomThrottleExceptionHandler(ExceptionHandler):
@winter.response_status(HTTPStatus.TOO_MANY_REQUESTS)
def handle(self, exception: ThrottleException) -> str:
return 'custom throttle exception'
@winter.route_get('with-throttling/')
@winter.web.no_authentication
class ControllerWithThrottling:
@winter.route_get()
@winter.web.throttling('5/s')
def simple_method(self) -> int:
return 1
@winter.route_post()
def simple_post_method(self) -> int:
return 1
@winter.route_get('same/')
@winter.web.throttling('5/s')
def same_simple_method(self) -> int:
return 1
@winter.route_get('without-throttling/')
def method_without_throttling(self):
pass
@winter.route_get('custom-handler/')
@winter.web.throttling('5/s')
@winter.throws(ThrottleException, CustomThrottleExceptionHandler)
def simple_method_with_custom_handler(self) -> int:
return 1
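The `'5/s'` rules above are enforced by winter; conceptually, such a rule is a fixed-window counter. A minimal sketch of that policy, independent of the framework (the `RateLimiter` name and API are illustrative):

```python
import time

class RateLimiter:
    def __init__(self, limit, window=1.0):
        self.limit = limit
        self.window = window
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        now = time.monotonic()
        # Reset the counter when a new window begins.
        if now - self.window_start >= self.window:
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False
```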

# File: fork_process/dataPreprocess/data_extraction_2.py (repo: JianboTang/modified_GroundHog, BSD-3-Clause)
import numpy
import pickle
readfile1 = open('intermediate_data/post_1.txt','r');
readfile2 = open('intermediate_data/cmnt_1.txt','r');
writefile = open('intermediate_data/dictionary.pkl', 'wb')  # pickle needs binary mode
#writefile1 = open('intermediate_data/post_2.txt','w');
#writefile2 = open('intermediate_data/cmnt_2.txt','w');
def staticDict(dictionary,lline):
    for i in range(len(lline)):
if lline[i] in dictionary:
dictionary[lline[i]] += 1;
else:
dictionary[lline[i]] = 1;
return dictionary
def preprocess(line):
    # the file is opened in text mode, so each line is already a str
    lline = [x for x in list(line) if x != ' ']
    del lline[-1]  # drop the trailing newline
    return lline
def dictPrint(dictionary):
for x in dictionary:
        print(x, ":", dictionary[x])
def main(count):
dict1 = {};
dict2 = {};
i = 0;
while i < count:
line1 = readfile1.readline();
line2 = readfile2.readline();
if not line1 or not line2:
print "touch the end of file"
break;
lline1 = preprocess(line1);
lline2 = preprocess(line2);
dict1 = staticDict(dict1,lline1);
dict2 = staticDict(dict2,lline2);
i += 1;
print "print the first dictionary"
dictPrint(dict1);
print "print the second dictionary"
dictPrint(dict2);
pickle.dump(dict1,writefile);
pickle.dump(dict2,writefile);
if __name__ == '__main__':
main(1000000);
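`staticDict` plus `preprocess` amount to a per-character frequency count; with the standard library the same tally can be sketched as follows (the sample input is made up):

```python
from collections import Counter

def char_frequencies(lines):
    # Count every non-space character, mirroring staticDict
    # applied to each preprocessed line.
    counts = Counter()
    for line in lines:
        counts.update(ch for ch in line.rstrip('\n') if ch != ' ')
    return counts

freqs = char_frequencies(['a b a\n', 'b c\n'])
```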

# File: core/translator.py (repo: bfu4/mdis, MIT)
from typing import List
from parser import parse_bytes, split_bytes_from_lines, get_bytes, parse_instruction_set, wrap_parsed_set
from reader import dump_file_hex_with_locs
class Translator:
"""
Class handling file translations from *.mpy to hex dumps and opcodes
"""
def __init__(self, file: str):
"""
Create new translator
:param file: location of the file
"""
self.file = file
def get_file_hex(self):
"""
Get a full hex dump of the file
:return:
"""
return dump_file_hex_with_locs(self.file)
def get_file_hex_at(self, _from: str, _to: str):
"""
Get a byte dump at a specified location
:param _from: from address
:param _to: to address
:return: bytes from address {_from} to address {_to}
"""
return parse_bytes(self.get_file_hex(), _from, _to)
def get_file(self):
"""
Get the file name
:return:
"""
return self.file
def get_magic(self) -> str:
"""
Get the magic number
:return:
"""
return "".join(self.get_all_bytes()[0][:8])
def get_all_bytes(self):
"""
Get all of the bytes
:return: all of the bytes
"""
return get_bytes(self.get_file_hex().split("\n"))
def get_split_bytes(self) -> List[List[str]]:
"""
Get all of the bytes per line
:return: bytes in list form
"""
split = split_bytes_from_lines(self.get_all_bytes())
split[0] = split[0][4:]
return split
def get_bytes_at(self, _from: str, _to: str) -> List[List[str]]:
"""
Get the bytes between the specified locations
:param _from: start address
:param _to: end address
:return: bytes
"""
return split_bytes_from_lines(self.get_file_hex_at(_from, _to))
def get_instruction_set(self) -> List[str]:
"""
Get the file's instruction set
:return: set
"""
bl = self.get_split_bytes()
# offset of 8, start at first BC_BASE_RESERVED
list_with_offset = bl[0][4:]
_bytes = self.__flatten([list_with_offset, bl[1]])
_set = parse_instruction_set(_bytes)
return wrap_parsed_set(_set)
def get_instructions_at(self, _from: str, _to: str) -> List[str]:
"""
Get the instructions between addresses
:param _from: start address
:param _to: end address
:return: instructions
"""
_bytes = self.__flatten(self.get_bytes_at(_from, _to))
_set = parse_instruction_set(_bytes)
return wrap_parsed_set(_set)
def __flatten(self, _list):
# Lambda replaced by def flatten due to E731
return [item for sublist in _list for item in sublist]
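`__flatten`'s nested comprehension is equivalent to `itertools.chain.from_iterable`; a quick illustration on made-up byte strings:

```python
from itertools import chain

nested = [['de', 'ad'], ['be', 'ef']]
flat = [item for sublist in nested for item in sublist]  # as in __flatten
also_flat = list(chain.from_iterable(nested))            # stdlib equivalent
```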

# File: extra_tests/ctypes_tests/test_unions.py (repo: nanjekyejoannah/pypy, Apache-2.0/OpenSSL)
import sys
from ctypes import *
def test_getattr():
class Stuff(Union):
_fields_ = [('x', c_char), ('y', c_int)]
stuff = Stuff()
stuff.y = ord('x') | (ord('z') << 24)
if sys.byteorder == 'little':
assert stuff.x == b'x'
else:
assert stuff.x == b'z'
def test_union_of_structures():
class Stuff(Structure):
_fields_ = [('x', c_int)]
class Stuff2(Structure):
_fields_ = [('x', c_int)]
class UnionofStuff(Union):
_fields_ = [('one', Stuff),
('two', Stuff2)]
u = UnionofStuff()
u.one.x = 3
assert u.two.x == 3
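Both tests rely on union fields sharing storage at offset 0; a small sketch of that property (field names are illustrative):

```python
from ctypes import Union, c_char, c_int, sizeof

class Stuff(Union):
    _fields_ = [('x', c_char), ('y', c_int)]

# Every field starts at offset 0, so the union is only as large
# as its largest member, not the sum of the members.
union_size = sizeof(Stuff)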

# File: .ipynb_checkpoints/main2-checkpoint.py (repo: jcus/python-challenge, RSA-MD; Jupyter notebook JSON)
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "001887f2",
"metadata": {},
"outputs": [],
"source": [
"# import os modules to create path across operating system to load csv file\n",
"import os\n",
"# module for reading csv files\n",
"import csv"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "77c0f7d8",
"metadata": {},
"outputs": [],
"source": [
"# read csv data and load to budgetDB\n",
"csvpath = os.path.join(\"Resources\",\"budget_data.csv\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2da0e1e",
"metadata": {},
"outputs": [],
"source": [
"# creat a txt file to hold the analysis\n",
"outputfile = os.path.join(\"Analysis\",\"budget_analysis.txt\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f3c0fd89",
"metadata": {},
"outputs": [],
"source": [
"# set var and initialize to zero\n",
"totalMonths = 0 \n",
"totalBudget = 0"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4f807576",
"metadata": {},
"outputs": [],
"source": [
"# set list to store all of the monthly changes\n",
"monthChange = [] \n",
"months = []"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ad264653",
"metadata": {},
"outputs": [],
"source": [
"# use csvreader object to import the csv library with csvreader object\n",
"with open(csvpath, newline = \"\") as csvfile:\n",
"# # create a csv reader object\n",
" csvreader = csv.reader(csvfile, delimiter=\",\")\n",
" \n",
" # skip the first row since it has all of the column information\n",
" #next(csvreader)\n",
" \n",
"#header: date, profit/losses\n",
"print(csvreader)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "27fc81c1",
"metadata": {},
"outputs": [],
"source": [
"for p in csvreader:\n",
" print(\"date: \" + p[0])\n",
" print(\"profit: \" + p[1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "83749f03",
"metadata": {},
"outputs": [],
"source": [
"# read the header row\n",
"header = next(csvreader)\n",
"print(f\"csv header:{header}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b441a20",
"metadata": {},
"outputs": [],
"source": [
"# move to the next row (first row)\n",
"firstRow = next(csvreader)\n",
"totalMonths = (len(f\"[csvfile.index(months)][csvfile]\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a815e200",
"metadata": {},
"outputs": [],
"source": [
"output = (\n",
" f\"Financial Anaylsis \\n\"\n",
" f\"------------------------- \\n\"\n",
" f\"Total Months: {totalMonths} \\n\")\n",
"print(output)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6bf35c14",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

# File: mnt/us/kapps/apps/gallery/gallery.py (repo: PhilippMundhenk/kapps, MIT)
from core.kapp import Kapp
from core.httpResponse import HTTPResponse
from core.Kcommand import Kcommand
import uuid
import os
class GetImage(Kcommand):
getImageHash = str(uuid.uuid4())
def __init__(self):
super(GetImage, self).__init__(
"GetImage", self.getImageHash)
class ViewImage(Kcommand):
viewImageHash = str(uuid.uuid4())
def __init__(self):
super(ViewImage, self).__init__(
"ViewImage", self.viewImageHash)
class GalleryApp(Kapp):
name = "Gallery"
def getImageCallback(self, kcommand):
with open(kcommand.getParameter("path"), 'r') as file:
return HTTPResponse(content=file.read())
def viewImageCallback(self, kcommand):
cmd = GetImage()
cmd.params = dict(kcommand.params)
return HTTPResponse(content=self.getRes("image.html").replace("$IMAGE$", "<img style=\"width:100%;\" src=" + cmd.toURL() + " />"))
def homeCallback(self, kcommand):
path = "/mnt/us/images/"
files = os.listdir(path)
paths = [os.path.join(path, basename) for basename in files]
text = ""
for p in paths:
text = text + "<tr><td>"
imageURL = ViewImage().setParameter("path", p).toURL()
text = text + "<a href=\"" + \
imageURL + "\">" + p.replace(path, "") + "</a>"
text = text + "</td></tr>"
return HTTPResponse(content=self.getRes("imageList.html").replace("$IMAGES$", text))
def iconCallback(self, kcommand):
return HTTPResponse(content=self.getRes("icon.png"))
def register(appID, appPath, ctx):
print("register " + GalleryApp.name)
app = GalleryApp(appID, appPath, ctx)
app.subscribe(GetImage(), app.getImageCallback)
app.subscribe(ViewImage(), app.viewImageCallback)
return app
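`homeCallback`'s directory walk can be tried in isolation, with a temporary directory standing in for `/mnt/us/images/`:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as path:
    # create two placeholder image files
    for name in ('a.png', 'b.png'):
        open(os.path.join(path, name), 'w').close()
    files = sorted(os.listdir(path))
    paths = [os.path.join(path, basename) for basename in files]
    names = [os.path.basename(p) for p in paths]
```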

# File: chapter-7/chassis/demo.py (repo: wallacei/microservices-in-action-copy, MIT)
import json
import datetime
import requests
from nameko.web.handlers import http
from nameko.timer import timer
from statsd import StatsClient
from circuitbreaker import circuit
class DemoChassisService:
name = "demo_chassis_service"
statsd = StatsClient('localhost', 8125, prefix='simplebank-demo')
@http('GET', '/health')
@statsd.timer('health')
def health(self, _request):
return json.dumps({'ok': datetime.datetime.utcnow().__str__()})
@http('GET', '/external')
@circuit(failure_threshold=5, expected_exception=ConnectionError)
@statsd.timer('external')
def external_request(self, _request):
response = requests.get('https://jsonplaceholder.typicode.com/posts/1')
return json.dumps({'code': response.status_code, 'body': response.text})
@http('GET', '/error')
@circuit(failure_threshold=5, expected_exception=ZeroDivisionError)
@statsd.timer('http_error')
    def error_http_request(self, _request):
        # deliberately divides by zero so the circuit breaker sees ZeroDivisionError
        return json.dumps({'result': 1 / 0})
class HealthCheckService:
name = "health_check_service"
statsd = StatsClient('localhost', 8125, prefix='simplebank-demo')
@timer(interval=10)
@statsd.timer('check_demo_service')
def check_demo_service(self):
response = requests.get('http://0.0.0.0:8000/health')
print("DemoChassisService HEALTH CHECK: status_code {}, response: {}".format(
response.status_code, response.text))
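The `@circuit` decorator above comes from the third-party `circuitbreaker` package. As a rough illustration of the idea only (not that library's actual implementation), a minimal breaker that fails fast after a fixed number of consecutive failures could look like this; the name `flaky` is a hypothetical example:

```python
import functools

def circuit(failure_threshold, expected_exception):
    """Toy circuit breaker: rejects calls once too many consecutive failures occur."""
    def decorator(fn):
        state = {"failures": 0}

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if state["failures"] >= failure_threshold:
                raise RuntimeError("circuit open")  # fail fast, skip the real call
            try:
                result = fn(*args, **kwargs)
            except expected_exception:
                state["failures"] += 1
                raise
            state["failures"] = 0  # any success closes the circuit again
            return result

        return wrapper
    return decorator

@circuit(failure_threshold=2, expected_exception=ValueError)
def flaky():
    raise ValueError("boom")
```

After two failed calls to `flaky`, a third call raises `RuntimeError("circuit open")` without invoking the wrapped function at all, which is the behaviour the `/external` and `/error` endpoints rely on.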
# --- URI 1017.py (repo: Azefalo/Cluble-de-Programacao-UTFPR, license: MIT) ---
# https://www.beecrowd.com.br/judge/en/problems/view/1017
car_efficiency = 12 # Km/L
time = int(input())
average_speed = int(input())
liters = (time * average_speed) / car_efficiency
print(f"{liters:.3f}")
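The same computation, pulled into a reusable function so it can be checked without stdin (the 12 km/L efficiency is the constant from the problem statement):

```python
def liters_needed(hours, avg_speed_kmh, km_per_liter=12):
    """Fuel needed to drive for `hours` at `avg_speed_kmh`."""
    distance_km = hours * avg_speed_kmh
    return distance_km / km_per_liter

# e.g. 10 hours at an average of 85 km/h
print(f"{liters_needed(10, 85):.3f}")  # → 70.833
```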
# --- src/cpp/convert.py (repo: shindavid/splendor, license: MIT) ---
filename = '../py/cards.py'
f = open(filename)
color_map = {
'W' : 'eWhite',
'U' : 'eBlue',
'G' : 'eGreen',
'R' : 'eRed',
'B' : 'eBlack',
'J' : 'eGold',
}
color_index_map = {
'W' : 0,
'U' : 1,
'G' : 2,
'R' : 3,
'B' : 4
}
def convert(cost_str):
cost_array = [0,0,0,0,0]
tokens = [x.strip() for x in cost_str.split(',')]
for token in tokens:
subtokens = token.split(':')
color_index = color_index_map[subtokens[0]]
count = int(subtokens[1])
cost_array[color_index] = count
return ', '.join([str(x) for x in cost_array])
ID = 0
first = True
for line in f:
if line.count('_add_card'):
if first:
first = False
continue
lp = line.find('(')
rp = line.find(')')
lb = line.find('{')
rb = line.find('}')
cost_str = line[lb+1:rb]
tokens = line[lp+1:rp].split(',')
level = int(tokens[0].strip()) - 1
points = int(tokens[1].strip())
color = color_map[tokens[2].strip()]
        print(' {%2d, {%s}, %s, %s, %s},' % (ID, convert(cost_str), points, level, color))
ID += 1
ID = 0
f = open(filename)
first = True
for line in f:
if line.count('_add_noble'):
if first:
first = False
continue
lp = line.find('(')
rp = line.find(')')
lb = line.find('{')
rb = line.find('}')
cost_str = line[lb+1:rb]
        print(' {%s, 3, {%s}},' % (ID, convert(cost_str)))
ID += 1
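The cost-string parsing above can be exercised in isolation. A self-contained version of `convert` (same color mapping, same output format) behaves like this:

```python
color_index_map = {'W': 0, 'U': 1, 'G': 2, 'R': 3, 'B': 4}

def convert(cost_str):
    """Turn 'W:1, B:2' into a fixed-order count string like '1, 0, 0, 0, 2'."""
    cost_array = [0, 0, 0, 0, 0]
    for token in cost_str.split(','):
        color, count = token.strip().split(':')
        cost_array[color_index_map[color.strip()]] = int(count)
    return ', '.join(str(x) for x in cost_array)

print(convert('W:1, B:2'))  # → 1, 0, 0, 0, 2
```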
# --- pytglib/api/types/rich_text_phone_number.py (repo: iTeam-co/pytglib, license: MIT) ---
from ..utils import Object
class RichTextPhoneNumber(Object):
"""
A rich text phone number
Attributes:
ID (:obj:`str`): ``RichTextPhoneNumber``
Args:
text (:class:`telegram.api.types.RichText`):
Text
phone_number (:obj:`str`):
Phone number
Returns:
RichText
Raises:
:class:`telegram.Error`
"""
ID = "richTextPhoneNumber"
def __init__(self, text, phone_number, **kwargs):
self.text = text # RichText
self.phone_number = phone_number # str
@staticmethod
def read(q: dict, *args) -> "RichTextPhoneNumber":
text = Object.read(q.get('text'))
phone_number = q.get('phone_number')
return RichTextPhoneNumber(text, phone_number)
# --- tests/core/scenario_finder/file_filters/test_file_filter.py (repo: nikitanovosibirsk/vedro, license: Apache-2.0) ---
from pytest import raises
from vedro._core._scenario_finder._file_filters import FileFilter
def test_file_filter():
with raises(Exception) as exc_info:
FileFilter()
assert exc_info.type is TypeError
assert "Can't instantiate abstract class FileFilter" in str(exc_info.value)
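The behaviour being tested here, refusing to instantiate an abstract base, comes from Python's `abc` machinery. A minimal stand-in sketch of how a class like vedro's `FileFilter` presumably achieves it (the `filter` method name is an assumption for illustration):

```python
from abc import ABC, abstractmethod

class FileFilter(ABC):
    """Hypothetical stand-in for vedro's FileFilter abstract base."""

    @abstractmethod
    def filter(self, path: str) -> bool:
        ...

try:
    FileFilter()  # abstract classes cannot be instantiated directly
except TypeError as e:
    message = str(e)
```

Any concrete subclass that implements `filter` instantiates normally; only the bare base raises `TypeError`, which is exactly what the test asserts.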
# --- create_tacacs.py (repo: cromulon-actual/ise_automation, license: MIT) ---
from ciscoisesdk import IdentityServicesEngineAPI
from ciscoisesdk.exceptions import ApiError
from dotenv import load_dotenv
import os
from pprint import pprint as ppr
load_dotenv()
admin = os.getenv("ISE_ADMIN")
pw = os.getenv("ISE_PW")
base_url = os.getenv("ISE_URL")
api = IdentityServicesEngineAPI(
username=admin, password=pw, base_url=base_url, version="3.0.0", verify=False)
print("=" * 50)
# Get Admin Users
search_result = api.admin_user.get_all()
ppr(search_result.response)
print("=" * 50)
# Get All TACACS Users
search_result = api.tacacs_profile.get_all()
ppr(search_result.response)
print("=" * 50)
# --- home/tests/add-remove sector.py (repo: caggri/FOFviz, license: MIT) ---
from selenium import webdriver
import time
chromedriver = "C:/Users/deniz/chromedriver/chromedriver"
driver = webdriver.Chrome(chromedriver)
driver.get('http://127.0.0.1:8000/')
dashboard = '//*[@id="accordionSidebar"]/li[1]/a'
sectors_1 = '//*[@id="sectors"]'
sectors_1_element = '//*[@id="sectors"]/option[4]'
add_sector = '//*[@id="select_filter_form"]/div[1]/input[1]'
remove_sector = '//*[@id="select_filter_form"]/div[1]/input[2]'
sectors_2 = '//*[@id="sectors2"]'
sectors_2_element = '//*[@id="sectors2"]/option[4]'
time.sleep(2)
driver.find_element_by_xpath(dashboard).click()
time.sleep(5)
driver.find_element_by_xpath(sectors_1).click()
time.sleep(2)
driver.find_element_by_xpath(sectors_1_element).click()
time.sleep(5)
driver.find_element_by_xpath(add_sector).click()
time.sleep(5)
driver.find_element_by_xpath(sectors_2).click()
time.sleep(2)
driver.find_element_by_xpath(sectors_2_element).click()
time.sleep(5)
driver.find_element_by_xpath(remove_sector).click()
# --- netensorflow/api_samples/ann_creation_and_usage.py (repo: psigelo/NeTensorflow, license: MIT) ---
import tensorflow as tf
from netensorflow.ann.ANN import ANN
from netensorflow.ann.macro_layer.MacroLayer import MacroLayer
from netensorflow.ann.macro_layer.layer_structure.InputLayerStructure import InputLayerStructure
from netensorflow.ann.macro_layer.layer_structure.LayerStructure import LayerStructure, LayerType
from netensorflow.ann.macro_layer.layer_structure.layers.FullConnected import FullConnected
from netensorflow.ann.macro_layer.layer_structure.layers.FullConnectedWithSoftmaxLayer import FullConnectedWithSoftmaxLayer
'''
ann Creation and simple usage, the goal of this code is simply run the most simpler artificial neural network
'''
def main():
# tensorflow
tf_sess = tf.Session()
# Layers:
input_dim = [None, 3]
hidden_layer = FullConnected(inputs_amount=20)
out_layer = FullConnectedWithSoftmaxLayer(inputs_amount=10)
# Layer Structures
input_layer_structure = InputLayerStructure(input_dim)
hidden_layer_structure = LayerStructure('Hidden', layer_type=LayerType.ONE_DIMENSION, layers=[hidden_layer])
    output_layer_structure = LayerStructure('Output', layer_type=LayerType.ONE_DIMENSION, layers=[out_layer])
# Macro Layer
macro_layers = MacroLayer(layers_structure=[input_layer_structure, hidden_layer_structure, output_layer_structure])
# ann
ann = ANN(macro_layers=macro_layers, tf_session=tf_sess, base_folder='./tensorboard_logs/')
ann.connect_and_initialize()
# Execute
for it in range(100):
import numpy as np
input_tensor_value = [np.random.uniform(0.0, 10.0, 3)]
print(ann.run(global_iteration=it, input_tensor_value=input_tensor_value))
if __name__ == '__main__':
main()
# --- python/arachne/runtime/rpc/logger.py (repo: fixstars/arachne, license: MIT) ---
import logging
class Logger(object):
stream_handler = logging.StreamHandler()
formatter = logging.Formatter("[%(levelname)s %(pathname)s:%(lineno)d] %(message)s")
stream_handler.setFormatter(formatter)
stream_handler.setLevel(logging.INFO)
my_logger = logging.Logger("arachne.runtime.rpc")
my_logger.addHandler(stream_handler)
@staticmethod
def logger():
return Logger.my_logger
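A quick way to see what this formatter emits is to point a similar handler at an in-memory stream instead of stderr; this sketch mirrors the setup above with a hypothetical logger name:

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("[%(levelname)s %(pathname)s:%(lineno)d] %(message)s"))
handler.setLevel(logging.INFO)

logger = logging.Logger("arachne.demo")
logger.addHandler(handler)
logger.info("server started")

# e.g. "[INFO /path/to/script.py:12] server started\n"
record = stream.getvalue()
```

Note that `logging.Logger(...)` is instantiated directly here, as in the original, so the logger is not registered in the global `logging` hierarchy; `logging.getLogger(name)` is the more common choice when shared configuration matters.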
# --- hackerrank/python/introduction/function.py (repo: wingkwong/competitive-programming, license: MIT) ---
def is_leap(year):
leap = False
    # A year is a leap year if it is evenly divisible by 4,
    # unless it is also divisible by 100, in which case it is not,
    # unless it is additionally divisible by 400, in which case it is.
leap = (year % 4 == 0 and (year % 400 == 0 or year % 100 != 0))
return leap
year = int(input())
print(is_leap(year))
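The rule can be checked against the standard boundary cases: century years are leap years only when they are divisible by 400. A self-contained copy of the predicate makes that easy to verify:

```python
def is_leap(year):
    return year % 4 == 0 and (year % 400 == 0 or year % 100 != 0)

# 2000 and 2400 are leap (divisible by 400); 1900 is not (century, not by 400)
cases = {2000: True, 1900: False, 2012: True, 2019: False, 2400: True}
for year, expected in cases.items():
    assert is_leap(year) == expected
```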
# --- functions/dissectData/lambda_handler.py (repo: zinedine-zeitnot/anomaly-detection, license: MIT) ---
from data_dissector import DataDissector
def handler(event, _):
switchpoint_trio = DataDissector.dissect_data(data=event['data'])
return {
"switchpoint": switchpoint_trio.switchpoint,
"preSwitchAverage": switchpoint_trio.pre_switch_average,
"postSwitchAverage": switchpoint_trio.post_switch_average,
}
# --- Chapter 3-Regression/2.py (repo: FatiniNadhirah5/Datacamp-Machine-Learning-with-Apache-Spark-2019, license: FSFAP) ---
# Flight duration model: Just distance
# In this exercise you'll build a regression model to predict flight duration (the duration column).
# For the moment you'll keep the model simple, including only the distance of the flight (the km column) as a predictor.
# The data are in flights. The first few records are displayed in the terminal. These data have also been split into training and testing sets and are available as flights_train and flights_test.
# Instructions
# Create a linear regression object. Specify the name of the label column. Fit it to the training data.
# Make predictions on the testing data.
# Create a regression evaluator object and use it to evaluate RMSE on the testing data.
from pyspark.ml.regression import LinearRegression
from pyspark.ml.evaluation import RegressionEvaluator
# Create a regression object and train on training data
regression = LinearRegression(labelCol='duration').fit(flights_train)
# Create predictions for the testing data and take a look at the predictions
predictions = regression.transform(flights_test)
predictions.select('duration', 'prediction').show(5, False)
# Calculate the RMSE
RegressionEvaluator(labelCol='duration').evaluate(predictions)
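`RegressionEvaluator` reports RMSE by default. The metric itself is simple to compute by hand, as this pure-Python sketch illustrates (no Spark required; the sample values are made up):

```python
import math

def rmse(predictions, labels):
    """Root-mean-square error between predicted and true values."""
    squared_errors = [(p - y) ** 2 for p, y in zip(predictions, labels)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# two flights, each predicted 10 minutes off: sqrt((100 + 100) / 2) = 10.0
print(rmse([110.0, 95.0], [100.0, 105.0]))  # → 10.0
```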
# --- gipsy/admin.py (repo: marwahaha/gipsy-1, license: MIT) ---
from django.contrib import admin
class ChildrenInline(admin.TabularInline):
sortable_field_name = "order"
class GipsyMenu(admin.ModelAdmin):
inlines = [ChildrenInline]
exclude = ('parent',)
list_display = ['name', 'order']
ordering = ['order']
def get_queryset(self, request):
"""Overrides default queryset to only display parent items"""
query = super(GipsyMenu, self).get_queryset(request)
return query.filter(parent__isnull=True)
# --- QDeblend/process/host_profiles.py (repo: brandherd/QDeblend3D, license: MIT) ---
import numpy, math
from scipy import special
"""
The Sersic Profile
Formulae for Sersic profile taken from Graham & Driver (2005)
bibcode: 2005PASA...22..118G
"""
class Sersic:
def __init__(self, size, x_c, y_c, mag, n, r_e, e=0., theta=0., osfactor=10,
osradius=2):
self.size = size
self.x_c = x_c
self.y_c = y_c
self.n = n
self.r_e = r_e
self.e = e
self.theta = theta
self.osf = osfactor
self.osr = osradius
flux = 10**(-0.4*mag)
self._get_kappa()
self.sigma_e = flux/(2*math.pi*(r_e**2)*math.exp(self.kappa)*n*
(self.kappa)**(-2.*n)*special.gamma(2.*n)*(1.-e))
self._make_array()
def _get_kappa(self):
init = 1.9992*self.n - 0.3271
self.kappa = self.__newton_it(init)
def __newton_it(self, x0, epsilon=1e-8):
for i in range(2000):
x0 -= self.__gammainc(x0)[0]/self.__gammainc(x0)[1]
if abs(self.__gammainc(x0)[0]) <= epsilon:
break
if i == 1999:
                print('Warning: Iteration failed!')
return x0
def __gammainc(self, x):
f = special.gammainc(2*self.n, x) - 0.5
df = (math.exp(-x) * x**(2.*self.n - 1.))/special.gamma(2.*self.n)
return (f, df)
def _make_array(self):
self.array = numpy.fromfunction(self._draw, self.size, dtype='float32')
if self.osf != 1:
csize = ((2*self.osr+1)*self.osf, (2*self.osr+1)*self.osf)
x_n = int(round(self.x_c))
y_n = int(round(self.y_c))
self.x_c += (self.osr - round(self.x_c))
self.y_c += (self.osr - round(self.y_c))
self.x_c *= self.osf
self.y_c *= self.osf
self.x_c += 0.5*(self.osf-1.)
self.y_c += 0.5*(self.osf-1.)
self.r_e *= self.osf
self.sigma_e /= (self.osf)**2
carray = numpy.fromfunction(self._draw, csize, dtype='float32')
s1_size = (2*self.osr+1, (2*self.osr+1)*self.osf, self.osf)
s2_size = (2*self.osr+1, 2*self.osr+1, self.osf)
step1 = numpy.sum(numpy.reshape(carray, s1_size, 'C'), axis=2)
step2 = numpy.sum(numpy.reshape(step1, s2_size, 'F'), axis=2)
self.array[y_n-self.osr:y_n+self.osr+1,
x_n-self.osr:x_n+self.osr+1] = step2
def _draw(self, y, x):
u = (x-self.x_c)*math.sin(self.theta)-(y-self.y_c)*math.cos(self.theta)
v = (y-self.y_c)*math.sin(self.theta)+(x-self.x_c)*math.cos(self.theta)
r = numpy.sqrt(u**2 + (v/(1. - self.e))**2)
return self.sigma_e*numpy.exp(-self.kappa*((r/self.r_e)**(1/self.n)-1))
def cut_area(in_array, center, radius, output=''):
x_i = round(center[0], 0)
y_i = round(center[1], 0)
shape = in_array.shape
out_array = numpy.zeros((2*radius+1, 2*radius+1), dtype='float32')
out_shape = out_array.shape
xmin = max(0, int(x_i - radius))
xmax = min(shape[1], int(x_i + radius + 1))
ymin = max(0, int(y_i - radius))
ymax = min(shape[0], int(y_i + radius + 1))
xlo = max(0, int(radius - x_i))
xhi = out_shape[1] - max(0, int(x_i + radius + 1 - shape[1]))
ylo = max(0, int(radius - y_i))
yhi = out_shape[0] - max(0, int(y_i + radius + 1 - shape[0]))
out_array[ylo:yhi,xlo:xhi] = in_array[ymin:ymax,xmin:xmax]
if output == 'full':
filled_pix = numpy.zeros(out_shape, dtype='int16')
filled_pix[ylo:yhi,xlo:xhi] += 1
return out_array, filled_pix
else:
return out_array
def paste_area(in_array, out_shape, refpix_in, refpix_out, out_array=None):
xpix_in = int(round(refpix_in[0], 0))
ypix_in = int(round(refpix_in[1], 0))
xpix_out = int(round(refpix_out[0], 0))
ypix_out = int(round(refpix_out[1], 0))
in_shape = in_array.shape
xmin = max(0, xpix_in - xpix_out)
xmax = in_shape[1] - max(0, (in_shape[1]-xpix_in) - (out_shape[1]-xpix_out))
ymin = max(0, ypix_in - ypix_out)
ymax = in_shape[0] - max(0, (in_shape[0]-ypix_in) - (out_shape[0]-ypix_out))
xlo = max(0, xpix_out - xpix_in)
xhi = out_shape[1] - max(0, (out_shape[1]-xpix_out) - (in_shape[1]-xpix_in))
ylo = max(0, ypix_out - ypix_in)
yhi = out_shape[0] - max(0, (out_shape[0]-ypix_out) - (in_shape[0]-ypix_in))
if out_array is None:
out_array = numpy.zeros(out_shape, dtype='float32')
if ylo < out_array.shape[0] and yhi > 0:
if xlo < out_array.shape[1] and xhi > 0:
out_array[ylo:yhi, xlo:xhi] = in_array[ymin:ymax, xmin:xmax]
    return out_array
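Both `cut_area` and `paste_area` reduce to the same clip-and-pad index bookkeeping: intersect the window with the source array, then copy the overlap into the right offset of the zero-filled output. A pure-Python, list-based sketch of the extraction half (a hypothetical helper, not the NumPy implementation above) shows the idea:

```python
def cut_window(grid, center, radius):
    """Extract a (2r+1)x(2r+1) window around center=(x, y), zero-padding off-grid pixels."""
    x_i, y_i = round(center[0]), round(center[1])
    size = 2 * radius + 1
    out = [[0.0] * size for _ in range(size)]
    for oy in range(size):
        for ox in range(size):
            sy, sx = y_i - radius + oy, x_i - radius + ox
            if 0 <= sy < len(grid) and 0 <= sx < len(grid[0]):
                out[oy][ox] = grid[sy][sx]  # inside the grid: copy the pixel
    return out

grid = [[1, 2], [3, 4]]
window = cut_window(grid, (0, 0), 1)  # corner pixel: five of nine cells are padding
```

The NumPy version does the same thing without the double loop by computing the overlap bounds (`xmin..xmax`, `ylo..yhi`) once and assigning a single slice.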
# --- tests/data/test_make_dataset.py (repo: dnsosa/drug-lit-contradictory-claims, license: MIT) ---
"""Tests for making datasets for contradictory-claims."""
# -*- coding: utf-8 -*-
import os
import unittest
from contradictory_claims.data.make_dataset import load_drug_virus_lexicons, load_mancon_corpus_from_sent_pairs, \
load_med_nli, load_multi_nli
from .constants import drug_lex_path, mancon_sent_pairs, mednli_dev_path, mednli_test_path, mednli_train_path, \
multinli_test_path, multinli_train_path, sample_drug_lex_path, sample_mancon_sent_pairs, \
sample_multinli_test_path, sample_multinli_train_path, sample_virus_lex_path, virus_lex_path
class TestMakeDataset(unittest.TestCase):
"""Tests for making datasets for contradictory-claims."""
@unittest.skip("This test can be used to check that datasets are found at the correct locations locally")
def test_find_files(self):
"""Test that input files are found properly."""
self.assertTrue(os.path.isfile(multinli_train_path),
"MultiNLI training data not found at {}".format(multinli_train_path))
self.assertTrue(os.path.isfile(multinli_test_path),
"MultiNLI test data not found at {}".format(multinli_test_path))
self.assertTrue(os.path.isfile(mednli_train_path),
"MedNLI training data not found at {}".format(mednli_train_path))
self.assertTrue(os.path.isfile(mednli_dev_path),
"MedNLI dev set data not found at {}".format(mednli_dev_path))
self.assertTrue(os.path.isfile(mednli_test_path),
"MedNLI test data not found at {}".format(mednli_test_path))
self.assertTrue(os.path.isfile(mancon_sent_pairs),
"ManConCorpus sentence pairs training data not found at {}".format(mancon_sent_pairs))
self.assertTrue(os.path.isfile(drug_lex_path),
"Drug lexicon not found at {}".format(drug_lex_path))
self.assertTrue(os.path.isfile(virus_lex_path),
"Virus lexicon not found at {}".format(virus_lex_path))
@unittest.skip("This test can be used locally to check that MultiNLI loads properly")
def test_load_multi_nli(self):
"""Test that MultiNLI is loaded as expected."""
x_train, y_train, x_test, y_test = load_multi_nli(multinli_train_path, multinli_test_path)
self.assertEqual(len(x_train), 391165)
self.assertEqual(y_train.shape, (391165, 3))
self.assertEqual(len(x_test), 9897)
self.assertEqual(y_test.shape, (9897, 3))
def test_load_multi_nli_sample(self):
"""Test that MultiNLI SAMPLE DATA are loaded as expected."""
x_train, y_train, x_test, y_test = load_multi_nli(sample_multinli_train_path, sample_multinli_test_path)
self.assertEqual(len(x_train), 49)
self.assertEqual(y_train.shape, (49, 3))
self.assertEqual(len(x_test), 49)
self.assertEqual(y_test.shape, (49, 3))
@unittest.skip("This test can be used locally to check that MedNLI loads properly")
def test_load_med_nli(self):
"""Test that MedNLI is loaded as expected."""
x_train, y_train, x_test, y_test = load_med_nli(mednli_train_path, mednli_dev_path, mednli_test_path)
self.assertEqual(len(x_train), 12627)
self.assertEqual(y_train.shape, (12627, 3))
self.assertEqual(len(x_test), 1422)
self.assertEqual(y_test.shape, (1422, 3))
@unittest.skip("This test can be used locally to check that ManConCorpus loads properly")
def test_load_mancon_corpus_from_sent_pairs(self):
"""Test that ManConCorpus is loaded as expected."""
x_train, y_train, x_test, y_test = load_mancon_corpus_from_sent_pairs(mancon_sent_pairs)
self.assertEqual(len(x_train), 14328)
self.assertEqual(y_train.shape, (14328, 3))
self.assertEqual(len(x_test), 3583)
self.assertEqual(y_test.shape, (3583, 3))
def test_load_mancon_corpus_from_sent_pairs_sample(self):
"""Test that ManConCorpus is loaded as expected."""
x_train, y_train, x_test, y_test = load_mancon_corpus_from_sent_pairs(sample_mancon_sent_pairs)
self.assertEqual(len(x_train), 39)
self.assertEqual(y_train.shape, (39, 3))
self.assertEqual(len(x_test), 10)
self.assertEqual(y_test.shape, (10, 3))
def test_load_drug_virus_lexicons(self):
"""Test that the virus and drug lexicons are loaded properly."""
drug_names, virus_names = load_drug_virus_lexicons(sample_drug_lex_path, sample_virus_lex_path)
drugs = ["hydroxychloroquine", "remdesivir", "ritonavir", "chloroquine", "lopinavir"]
virus_syns = ["COVID-19", "SARS-CoV-2", "Coronavirus Disease 2019"]
self.assertTrue(set(drugs).issubset(set(drug_names)))
self.assertTrue(set(virus_syns).issubset(set(virus_names)))
# --- src/controllers/storage.py (repo: koddas/python-oop-consistency-lab, license: MIT) ---
from entities.serializable import Serializable
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('topics', '0002_alter_modelingprocess_modeling_type'),
]
operations = [
migrations.AddField(
model_name='topic',
name='word',
field=models.JSONField(default='{}'),
preserve_default=False,
),
]
f08133a0ab8681553c9936415f848d5882f36db1 | 1,150 | py | Python | src/controllers/storage.py | koddas/python-oop-consistency-lab | 8ee3124aa230359d296fdfbe0c23773602769c8c | ["MIT"]
from entities.serializable import Serializable
class Storage:
'''
Storage represents a file storage that stores and retrieves objects
'''
def __init__(self):
pass
    def save(self, filename: str, data: Serializable) -> bool:
        '''
        Stores a serializable object. If the object isn't explicitly marked as
        being serializable, this method will fail.
        '''
        if not isinstance(data, Serializable):
            return False
        # Context manager guarantees the file is closed even if the write fails
        with open(filename, "w") as f:
            f.write(data.serialize())
        return True
    def read(self, filename: str, class_name: type) -> Serializable:
        '''
        Retrieves a serialized object. You specify the type of the object to
        deserialize by passing the class (as a type, not a string) as the
        second parameter.
        '''
        if not issubclass(class_name, Serializable):
            return None
        with open(filename, "r") as f:
            data = f.read()
        return class_name.deserialize(data)
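A round-trip sketch of the save/read pattern above. The real `entities.serializable.Serializable` is not part of this snippet, so a minimal JSON-backed stand-in (`Serializable` and the payload class `Note`) is assumed here.

```python
import json
import os
import tempfile

# Assumed stand-in for entities.serializable.Serializable: any object whose
# state survives a serialize()/deserialize() round trip.
class Serializable:
    def serialize(self) -> str:
        return json.dumps(self.__dict__)

    @classmethod
    def deserialize(cls, data: str):
        obj = cls()
        obj.__dict__.update(json.loads(data))
        return obj

# Hypothetical payload class for the demo
class Note(Serializable):
    def __init__(self, text: str = ""):
        self.text = text

# Write then read back through a temp file, mirroring Storage.save()/read()
path = os.path.join(tempfile.mkdtemp(), "note.json")
note = Note("remember the milk")
with open(path, "w") as f:
    f.write(note.serialize())
with open(path) as f:
    restored = Note.deserialize(f.read())
print(restored.text)  # remember the milk
```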
f081d74683e4da50d27ee2a254cfa3157f59305b | 924 | py | Python | tests/functional/modules/test_zos_tso_command.py | IBM/zos-core-collection-ftp | 017d2e031d64984571bd9bb330f49adaced387a6 | ["Apache-2.0"]
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import os
import sys
import warnings
import ansible.constants
import ansible.errors
import ansible.utils
import pytest
from pprint import pprint
# The positive path test
def test_zos_tso_command_listuser(ansible_adhoc):
hosts = ansible_adhoc(inventory='localhost', connection='local')
print('--- hosts.all ---')
pprint(hosts.all)
pprint(hosts.all.options)
pprint(vars(hosts.all.options['inventory_manager']))
    pprint(hosts.all.options['inventory_manager']._inventory.hosts)
results = hosts.localhost.zos_tso_command(commands=["LU"])
print('--- results.contacted ---')
pprint(results.contacted)
for result in results.contacted.values():
assert result.get("output")[0].get("rc") == 0
assert result.get("changed") is True
b2b1db3b982c41901d0ae5c563cb502c2d0bce3e | 3,366 | py | Python | audio_pouring/utils/network.py | lianghongzhuo/MultimodalPouring | 6495c7de9afad396f39bd7ac25e1a150e74479d2 | ["MIT"]
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Author : Hongzhuo Liang
# E-mail : liang@informatik.uni-hamburg.de
# Description:
# Date : 15/10/2019: 22:13
# File Name : network
import argparse
import numpy as np
import torch
def worker_init_fn(pid):
np.random.seed(torch.initial_seed() % (2 ** 31 - 1))
def my_collate(batch):
batch = list(filter(lambda x: x is not None, batch))
return torch.utils.data.dataloader.default_collate(batch)
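`my_collate` above drops samples that a `Dataset.__getitem__` returned as `None` (e.g. unreadable files) before handing the batch to torch's `default_collate`. The same filtering idea, shown torch-free so the sketch runs without a GPU stack (`collate_skip_none` is an illustrative name, not part of this module):

```python
def collate_skip_none(batch, collate=list):
    # Discard failed (None) samples, then delegate to the real collate fn
    kept = [sample for sample in batch if sample is not None]
    return collate(kept)

print(collate_skip_none([(1, "a"), None, (2, "b"), None]))  # [(1, 'a'), (2, 'b')]
```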
def parse():
parser = argparse.ArgumentParser(description="audio2height")
parser.add_argument("--tag", type=str, default="")
parser.add_argument("--epoch", type=int, default=500)
parser.add_argument("--mode", choices=["train", "test"], default="train")
parser.add_argument("--bs", type=int, default=10)
parser.add_argument("--hidden-dim", type=int, default=256)
parser.add_argument("--layer-num", type=int, default=1)
parser.add_argument("--lstm", action="store_true")
parser.add_argument("--cuda", action="store_true")
parser.add_argument("--gpu", type=int, default=0)
parser.add_argument("--bottle-train", type=str, default="0")
parser.add_argument("--bottle-test", type=str, default="")
parser.add_argument("--lr", type=float, default=0.0001)
parser.add_argument("--snr_db", type=float, required=True)
parser.add_argument("--mono-coe", type=float, default=0.001)
parser.add_argument("--load-model", type=str, default="")
parser.add_argument("--load-epoch", type=int, default=-1)
parser.add_argument("--model-path", type=str, default="./assets/learned_models", help="pre-trained model path")
parser.add_argument("--data-path", type=str, default="h5py_dataset", help="data path")
parser.add_argument("--log-interval", type=int, default=10)
parser.add_argument("--save-interval", type=int, default=10)
parser.add_argument("--robot", action="store_true")
parser.add_argument("--multi", action="store_true")
parser.add_argument("--minus_wrench_first", action="store_true")
parser.add_argument("--stft_force", action="store_true")
parser.add_argument("--bidirectional", action="store_true")
parser.add_argument("--draw_acc_fig", action="store_true")
parser.add_argument("--acc_fig_name", type=str, default="")
parser.add_argument("--multi-detail", choices=["2loss2rnn", "2loss1rnn", "1loss1rnn", "audio_only", "a_guide_f",
"a_f_early_fusion", "force_only", "1loss2rnn"], default="audio_only")
args = parser.parse_args()
if args.bottle_test == "":
args.bottle_test = args.bottle_train
if args.tag != "":
args.tag += "_"
base = args.tag + "{}_{}{}_h{}_bs{}_bottle{}to{}_mono_coe{}_snr{}_{}_{}_{}_{}"
tag = base.format("multi" if args.multi else "audio", "lstm" if args.lstm else "gru", args.layer_num,
args.hidden_dim, args.bs, args.bottle_train, args.bottle_test, args.mono_coe, args.snr_db,
args.multi_detail, "minus_wrench_first" if args.minus_wrench_first else "raw",
"stft_force" if args.stft_force else "raw_force",
"bidirectional" if args.bidirectional else "unidirectional")
args.tag = tag
args.acc_fig_name = "snr{}_{}".format(args.snr_db, "lstm" if args.lstm else "gru")
return args
b2b1ff5ef4ba336018262956f57a372c5c93879b | 4,312 | py | Python | FederatedSDNSecurity/main.py | Beaconproj/CrossCloudVNFSimulation | 97023e05b57e54503259ae866608de6189b8c9a9 | ["MIT"]
'''
Created on 12 Jan. 2016
@author: phm
'''
import FederatedSDN
import FederatedSDNSecurity
import FederatedSecurityAgent
import VNFManager
import CloudManager
import ssl, socket
inputMessage="Press enter to continue"
federationCloudManagers=[]
cloudFederationMembers=[
["cloud_man_1","vnf_manager_1", "network_segment_1", [["" ]],[""]],
["cloud_man_2","vnf_manager_2", "network_segment_2", [[""]],[""]],
["cloud_man_3","vnf_manager_3", "network_segment_3", [[""]],[""]]
]
print "-----------Initial setup of Cloud_1, cloud_2 and cloud_3 -----------"
cloudMember=""
for cloudMember in cloudFederationMembers:
# create a cloud manager
cloud_manager=CloudManager.CloudManager(cloudMember[0])
print "Cloud manager", cloud_manager.getName(), "in federation"
federationCloudManagers.append(cloud_manager)
# set the network segments
cloud_manager.setNetworkSegments(cloudMember[2])
# create a VNF manager
vnfManager=VNFManager.VNFManager(cloudMember[1])
cloud_manager.setVNFManager(vnfManager)
print "------------ start Federated SDN ---------------"
# create a federated SDN
fedSDN=FederatedSDN.FederatedSDN("fedSDN_1")
print "FederatedSDN", fedSDN.getIdentifier(), "created"
# create a Federated SDN security
fedSDNSecurity=FederatedSDNSecurity.FederatedSDNSecurity("fedSDNSec_1")
print "FederatedSDNSecurity", fedSDNSecurity.getName(), "created"
print "------------- Create a Federated Cloud Network --------------------"
# get the network segments to be federated
network_segments=["network_segment_1","network_segment_2"]
#cloud_member=""
#for cloud_member in cloudFederationMembers:
# network_segments.append(cloud_member[2])
#print "network segments:", network_segments
fedSDN.createNetworkFederation("FedCloudNetwork_1", network_segments)
print "Federated network", fedSDN.getNetworkFederationSegments("FedCloudNetwork_1"), "created"
# Associate a FederatedSecurityAgent with each network segment"
cloudManager=""
for cloudManager in federationCloudManagers:
network_segment=cloudManager.getNetworkSegments()
fedSecAg=FederatedSecurityAgent.FederatedSecurityAgent("fedSecAg_"+network_segment[0])
#print "SecAgent", fedSecAg.getName(), "created"
fedSecAg.setVNFManager(cloudManager.getVNFManager())
fedSecAg.setNetworkSegment(cloudManager.getNetworkSegments())
fedSDNSecurity.addSecurityAgent(fedSecAg)
#print "----------- Analyse existing security VNF of federation network segments ----------"
#print "------------- Adapt VNF to respect global security policy: start new VNF and re-configure existing VNF --------"
print "------------- Deploy, configure and start VNF to respect global security policy --------"
wait = raw_input(inputMessage)
fedSDNSecurity.readYAMLfile("YAML1.txt")
#fedSDNSecurity.readYAMLfileV2("Cloud1-2-Heat.yaml")
print "-------- Verify that global security policy is correctly implemented in each federation cloud network ----------"
wait = raw_input(inputMessage)
fedSDNSecurity.verifySecurityPolicy(fedSDN)
print "------------- Run the network federation --------------"
wait = raw_input(inputMessage)
print "VM_1: send packet to VM_2 with protocol HTTP"
print "VM_2: received packet from VM_1"
print " "
print "VM_1: send packet to VM_2 with protocol SKYPE"
print "DPI_1: unauthorized protocol detected: SKYPE"
print "FW_1: reconfiguring firewall on network network_segment_1 to block SKYPE protocol"
print "------------- now add a new network_segment_3 to the federation and extend the security policy--------------"
wait = raw_input(inputMessage)
# add network segment to federation
fedSDN.addNetworkSegment("network_segment_3")
print "Federated network", fedSDN.getNetworkFederationSegments("FedCloudNetwork_1"), "extended"
fedSDNSecurity.readYAMLfile("YAML2.txt")
print "-------- Verify that global security policy is implemented VNF per network Segment ----------"
wait = raw_input(inputMessage)
fedSDNSecurity.verifySecurityPolicy(fedSDN)
print "------------- Run the network federation --------------"
wait = raw_input(inputMessage)
print "VM_1: send packet to VM_3 with protocol X "
print "ENCRYPT_1: VM_3 is in untrusted cloud: encrypt packet"
print "DECRYPT_3: packet for VM_3 from VM_1 is encrypted: decrypt packet using key XXX "
b2b541552dee04f9e9bcd11e4c109a74ce0c81b7 | 1,697 | py | Python | timesketch/lib/cypher/insertable_string.py | Marwolf/timesketch | 8fbbb3d0a5a50dc0214fc56a9bbec82050908103 | ["Apache-2.0"]
# Copyright 2017 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Module containing the InsertableString class."""
class InsertableString(object):
"""Class that accumulates insert and replace operations for a string and
later performs them all at once so that positions in the original string
can be used in all of the operations.
"""
def __init__(self, input_string):
self.input_string = input_string
self.to_insert = []
def insert_at(self, pos, s):
"""Add an insert operation at given position."""
self.to_insert.append((pos, pos, s))
def replace_range(self, start, end, s):
"""Add a replace operation for given range. Assume that all
replace_range operations are disjoint, otherwise undefined behavior.
"""
self.to_insert.append((start, end, s))
def apply_insertions(self):
"""Return a string obtained by performing all accumulated operations."""
to_insert = reversed(sorted(self.to_insert))
result = self.input_string
for start, end, s in to_insert:
result = result[:start] + s + result[end:]
return result
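A quick demonstration of the accumulate-then-apply idea: every position refers to the ORIGINAL string, and the operations are applied back-to-front so earlier indices stay valid. The class is restated here in condensed form only so the snippet runs standalone.

```python
# Condensed restatement of InsertableString for a self-contained demo
class InsertableString(object):
    def __init__(self, input_string):
        self.input_string = input_string
        self.to_insert = []

    def insert_at(self, pos, s):
        self.to_insert.append((pos, pos, s))

    def replace_range(self, start, end, s):
        self.to_insert.append((start, end, s))

    def apply_insertions(self):
        result = self.input_string
        # Apply from the highest position down, so indices into the
        # original string are never shifted by earlier edits
        for start, end, s in reversed(sorted(self.to_insert)):
            result = result[:start] + s + result[end:]
        return result

s = InsertableString("hello world")
s.replace_range(6, 11, "there")   # original indices of "world"
s.insert_at(5, ",")               # original index right after "hello"
print(s.apply_insertions())       # hello, there
```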
b2b7e4ac8602126a7252025c382dc07c1f558b19 | 1,181 | py | Python | fabrikApi/views/assembly/list.py | demokratiefabrik/fabrikApi | a56bb57d59a5e7cbbeeb77889c02d82f2a04c682 | ["MIT"]
""" Assemblies List View. """
import logging
from datetime import datetime
from cornice.service import Service
from fabrikApi.models.assembly import DBAssembly
from fabrikApi.models.mixins import arrow
# from fabrikApi.util.cors import CORS_LOCATION, CORS_MAX_AGE
logger = logging.getLogger(__name__)
# SERVICES
assemblies = Service(cors_origins=('*',),
name='assemblies',
description='List Assemblies.',
path='/assemblies')
@assemblies.get(permission='public')
def get_assemblies(request):
"""Returns all assemblies which are either public or accessible by the current user.
"""
# load all active assemblies
# TODO: filter only active assemblies
assemblies = request.dbsession.query(DBAssembly).all()
for assembly in assemblies:
# assembly.patch()
assembly.setup_lineage(request)
# show only assemblies with at least view permission.
assemblies = list(
filter(lambda assembly: request.has_public_permission(assembly),
assemblies)
)
assemblies = {v.identifier: v for v in assemblies}
    return {
        'assemblies': assemblies,
        'access_date': arrow.utcnow()
    }
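The view filters with a lambda and then keys the survivors by identifier. The same shape in plain Python; `has_permission` and the dict records below are hypothetical stand-ins for `request.has_public_permission` and `DBAssembly` rows:

```python
def visible_by_identifier(records, has_permission):
    # Keep only records the caller may view, then index them by identifier
    allowed = list(filter(has_permission, records))
    return {r["identifier"]: r for r in allowed}

records = [{"identifier": 1, "public": True}, {"identifier": 2, "public": False}]
visible = visible_by_identifier(records, lambda r: r["public"])
print(sorted(visible))  # [1]
```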
b2b9128938a7476610fbf31df937ff94978048ae | 1,514 | py | Python | tests/TestMetrics.py | gr33ndata/irlib | 4a518fec994b1a89cdc7d09a8170efec3d7e6615 | ["MIT"]
from unittest import TestCase
from irlib.metrics import Metrics
class TestMetrics(TestCase):
def setUp(self):
self.m = Metrics()
def test_jaccard_same_len(self):
with self.assertRaises(ValueError):
self.m.jaccard_vectors(
[0, 1],
[0, 1, 2, 3]
)
def test_jaccard_empty(self):
e = self.m.jaccard_vectors([],[])
self.assertEqual(e,1)
def test_jaccard_int(self):
e = self.m.jaccard_vectors(
[0, 2, 1, 3],
[0, 1, 2, 3]
)
self.assertEqual(e,0.75)
def test_jaccard_bool(self):
e = self.m.jaccard_vectors(
[False, False, True, True, True ],
[False, True , True, True, False]
)
self.assertEqual(e,0.4)
def test_euclid_same_len(self):
with self.assertRaises(ValueError):
self.m.euclid_vectors(
[0, 1, 2, 3],
[0, 1]
)
def test_euclid(self):
e = self.m.euclid_vectors([1,1],[4,5])
self.assertEqual(e,5)
def test_cos_same_len(self):
with self.assertRaises(ValueError):
self.m.cos_vectors(
[0, 1, 2],
[1, 1]
)
def test_cos_0(self):
c = self.m.cos_vectors([1,0,1],[0,1,0])
self.assertEqual(round(c,5),float(0))
def test_cos_1(self):
c = self.m.cos_vectors([1,1,1],[1,1,1])
self.assertEqual(round(c,5),float(1))
b2bff192f3852a8121825cff9ab0d2dc48bcad15 | 999 | py | Python | esp8266/boot.py | AlexGolovko/UltrasonicDeeper | 598020854a1bff433bce1582bf05625a6cb646c8 | ["MIT"]
# This file is executed on every boot (including wake-boot from deepsleep)
import esp
import gc
import machine
import network
esp.osdebug(None)
# machine.freq(160000000)
def do_connect(wifi_name, wifi_pass):
ssid = 'microsonar'
password = 'microsonar'
ap_if = network.WLAN(network.AP_IF)
ap_if.active(True)
# ap_if.config(essid=ssid, password=password)
ap_if.config(essid=ssid, authmode=network.AUTH_OPEN)
while not ap_if.active():
pass
print('Access Point created')
print(ap_if.ifconfig())
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlans = wlan.scan()
if wifi_name in str(wlans):
print('connecting to network...')
wlan.connect(wifi_name, wifi_pass)
while not wlan.isconnected():
pass
print('network config:', wlan.ifconfig())
else:
wlan.active(False)
machine.Pin(2, machine.Pin.OUT).off()
do_connect('royter', 'traveller22')
gc.collect()
print('wifi connected')
b2c0fe0d284c1df72e6a811ac09a5b401ed7fb9b | 509 | py | Python | tests/schema_mapping/expected/generated_example3.py | loyada/typed-py | 8f946ed0cddb38bf7fd463a4c8111a592ccae31a | ["MIT"]
from typedpy import *
class Person(Structure):
first_name = String()
last_name = String()
age = Integer(minimum=1)
_required = ['first_name', 'last_name']
class Groups(Structure):
groups = Array(items=Person)
_required = ['groups']
# ********************
class Example1(Structure):
people = Array(items=Person)
id = Integer()
i = Integer()
s = String()
m = Map(items=[String(), Person])
groups = Groups
_required = ['groups', 'id', 'm', 'people']
b2c83a9626d327c18df6c74ffc572fe2774106fd | 1,504 | py | Python | gopage/web_helper.py | wavegu/gopage | ff83cea34a82570627c74c5bad45ebc02ecaaff6 | ["MIT"]
# encoding: utf-8
import urllib2
from proxy_helper import ProxyHelper
proxyHelper = ProxyHelper()
class WebHelper:
def __init__(self):
pass
@classmethod
def get_page_content_from_url(cls, page_url):
"""
get html content from web page with given url
:param page_url: url of the page to be read
:return: page_content
"""
try:
proxy_ip = 'http://:@' + proxyHelper.choose_proxy()
print 'getting content from [' + page_url + ']', 'ip=' + proxy_ip
# print 'getting content from [' + page_url.decode('utf-8').encode('cp936') + ']', 'ip=' + proxy_ip
proxy = urllib2.ProxyHandler({'http': 'http://:@' + str(proxy_ip)})
auth = urllib2.HTTPBasicAuthHandler()
opener = urllib2.build_opener(proxy, auth, urllib2.HTTPHandler)
opener.addheaders = [('User-agent', 'Mozilla/5.0 (Windows NT 6.1 WOW64 rv:23.0) Gecko/20130406 Firefox/23.0')]
conn = opener.open(page_url)
page_content = conn.read()
return page_content
        except (urllib2.URLError, urllib2.HTTPError) as e:  # tuple form; 'A or B' evaluates to A and only caught URLError
print '[Error]@WebHelper.get_page_content_from_url:', page_url
print e
return None
if __name__ == '__main__':
page_content = WebHelper.get_page_content_from_url('https://www.google.com/search?hl=en&safe=off&q=wave')
with open('test_result.html', 'w') as test_result:
        test_result.write(page_content)
b2d1a8016c0b95e209c421ed0aa8314cc552c1ba | 491 | py | Python | art/migrations/0007_alter_artimage_project.py | rrozander/Art-Website | 2cedba90f2adc30d9e83e957903e890af7863eac | ["MIT"]
# Generated by Django 3.2.12 on 2022-03-01 23:06
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('art', '0006_auto_20220301_1452'),
]
operations = [
migrations.AlterField(
model_name='artimage',
name='project',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='images', to='art.project'),
),
]
b2d418afad092a7839f43f08bf37f5d322277d2e | 392 | py | Python | fehler_auth/migrations/0003_auto_20220416_1626.py | dhavall13/fehler_core | dd27802d5b227a32aebcc8bfde68e78a69a36d66 | ["MIT"]
# Generated by Django 2.2.27 on 2022-04-16 16:26
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('fehler_auth', '0002_auto_20211002_1511'),
]
operations = [
migrations.AlterField(
model_name='invite',
name='email',
field=models.EmailField(max_length=255),
),
]
b2d6f5f40b8910601f5ded38d8738f9d70e406e6 | 835 | py | Python | agents/antifa.py | fan-weiwei/mercury-unicorn | 6c36d6baeaaee990a622caa0d7790dbd9982962c | ["Apache-2.0"]
from agents.agent import Agent
from random import randint
class Antifa(Agent):
def __init__(self):
super().__init__()
self.is_spy = False
def __str__(self):
return 'Basic Antifa'
def assign_mission(self, board):
number_to_assign = board.number_to_assign()
board.add_to_mission(self.seating_position)
while len(board.players_on_mission) < number_to_assign:
random_index = randint(0,board.number_of_players - 1)
if random_index not in board.players_on_mission:
board.add_to_mission(random_index)
def play_mission(self, board):
""" No other option but pass for the good guys """
return 'Pass'
def vote(self, board):
if board.stall_counter == 4:
return 1
return randint(0, 1)
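`assign_mission` above rejection-samples distinct seats until the team is full. The same loop as a standalone helper, with the board details stripped out (the function name and parameters are illustrative, not part of the Agent API):

```python
import random

def pick_mission_team(own_seat, number_of_players, team_size):
    # Start with the agent's own seat, then rejection-sample distinct others
    team = [own_seat]
    while len(team) < team_size:
        candidate = random.randint(0, number_of_players - 1)
        if candidate not in team:
            team.append(candidate)
    return team

team = pick_mission_team(own_seat=0, number_of_players=5, team_size=3)
print(sorted(team))
```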
b2da2f7f294d67b0e66ebfb594c13ddc9e71fc29 | 13,568 | py | Python | timecat/apps/users/views.py | LinXueyuanStdio/memp | c6f6609cec7c54ec23881838dacb5f4ffba2e68c | ["Apache-2.0"]
import json
from django.urls import reverse
from django.shortcuts import render
from django.db.models import Q
from django.views.generic.base import View
from django.contrib.auth import authenticate,login,logout
from django.contrib.auth.backends import ModelBackend
from django.contrib.auth.hashers import make_password
from django.http import HttpResponse,HttpResponseRedirect
from rest_framework import viewsets
from rest_framework.permissions import IsAuthenticated
from pure_pagination import Paginator, EmptyPage, PageNotAnInteger
from apps.operation.models import UserCourse,UserFavorite,UserMessage
from apps.organization.models import CourseOrg,Teacher
from apps.course.models import Course
from apps.utils.email_send import send_register_eamil
from apps.utils.mixin_utils import LoginRequiredMixin
from .models import CustomUser,EmailVerifyRecord
from .models import Banner
from .forms import LoginForm,RegisterForm,ForgetPwdForm,ModifyPwdForm
from .forms import UploadImageForm,UserInfoForm
from .serializers import UserSerializer
class UsersViewSet(viewsets.ModelViewSet):
queryset = CustomUser.objects.all()
serializer_class = UserSerializer
permission_classes = (IsAuthenticated,)
# Allow login with either username or email
# Subclass ModelBackend because it provides the authenticate() hook
class CustomBackend(ModelBackend):
def authenticate(self, request, username=None, password=None, **kwargs):
try:
# get() must match exactly one user (duplicates make get() fail); Q builds an OR query
user = CustomUser.objects.get(Q(username=username)|Q(email=username))
# Django stores passwords hashed, so never compare password==password
# AbstractUser (which CustomUser extends) provides check_password(self, raw_password)
if user.check_password(password):
return user
except Exception:
return None
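The backend above matches one credential string against either the username or the email column, then verifies the stored hash instead of comparing raw strings. A minimal standalone sketch of that pattern (hypothetical in-memory store; SHA-256 stands in for Django's real PBKDF2 hasher):

```python
import hashlib

# Hypothetical in-memory user store, reachable by username OR email.
USERS = [
    {"username": "alice", "email": "alice@example.com",
     "pw_hash": hashlib.sha256(b"secret").hexdigest()},
]

def authenticate(identifier, password):
    """Match `identifier` against username or email, then verify the hash."""
    matches = [u for u in USERS
               if identifier in (u["username"], u["email"])]
    if len(matches) != 1:  # mirror get(): exactly one match required
        return None
    user = matches[0]
    if hashlib.sha256(password.encode()).hexdigest() == user["pw_hash"]:
        return user
    return None
```

The single-match check mirrors `objects.get()`, which raises when zero or multiple rows match.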
class IndexView(View):
'''Home page'''
def get(self,request):
# carousel banners
all_banners = Banner.objects.all().order_by('index')
# courses
courses = Course.objects.filter(is_banner=False)[:6]
# banner courses
banner_courses = Course.objects.filter(is_banner=True)[:3]
# course organizations
course_orgs = Course.objects.all()[:15]
return render(request,'index.html',{
'all_banners':all_banners,
'courses':courses,
'banner_courses':banner_courses,
'course_orgs':course_orgs,
})
# Login
class LoginView(View):
'''User login'''
def get(self,request):
return render(request, 'login.html')
def post(self,request):
# Instantiate the form
login_form = LoginForm(request.POST)
if login_form.is_valid():
# Get the submitted username and password
user_name = request.POST.get('username', None)
pass_word = request.POST.get('password', None)
# authenticate() returns a user object on success, None on failure
user = authenticate(username=user_name, password=pass_word)
# A non-None result means authentication succeeded
if user is not None:
if user.is_active:
# Only activated accounts may log in
login(request, user)
return HttpResponseRedirect(reverse('index'))
else:
return render(request, 'login.html', {'msg': 'Account is not activated', 'login_form': login_form})
# Only return an error message when the username or password is wrong
else:
return render(request, 'login.html', {'msg': 'Invalid username or password','login_form':login_form})
# is_valid() already failed, so the bound form carries its own errors
else:
return render(request,'login.html',{'login_form':login_form})
# Account activation
class ActiveUserView(View):
def get(self, request, active_code):
# Check whether an email verification record exists
all_record = EmailVerifyRecord.objects.filter(code=active_code)
if all_record:
for record in all_record:
# Get the matching email address
email = record.email
# Find the user with that email
user = CustomUser.objects.get(email=email)
user.is_active = True
user.save()
# Invalid code: show the activation-failure page
else:
return render(request,'active_fail.html')
# On success, go to the login page
return render(request, "login.html")
class LogoutView(View):
'''User logout'''
def get(self,request):
logout(request)
return HttpResponseRedirect(reverse('index'))
# Registration
class RegisterView(View):
'''User registration'''
def get(self,request):
register_form = RegisterForm()
return render(request,'register.html',{'register_form':register_form})
def post(self,request):
register_form = RegisterForm(request.POST)
if register_form.is_valid():
user_name = request.POST.get('email', None)
# If the user already exists, show an error
if CustomUser.objects.filter(email=user_name):
return render(request, 'register.html', {'register_form':register_form,'msg': 'User already exists'})
pass_word = request.POST.get('password', None)
# Create a new user profile
user_profile = CustomUser()
user_profile.username = user_name
user_profile.email = user_name
user_profile.is_active = False
# Hash the password before saving it to the database
user_profile.password = make_password(pass_word)
user_profile.save()
send_register_eamil(user_name,'register')
return render(request,'login.html')
else:
return render(request,'register.html',{'register_form':register_form})
class ForgetPwdView(View):
'''Password recovery'''
def get(self,request):
forget_form = ForgetPwdForm()
return render(request,'forgetpwd.html',{'forget_form':forget_form})
def post(self,request):
forget_form = ForgetPwdForm(request.POST)
if forget_form.is_valid():
email = request.POST.get('email',None)
send_register_eamil(email,'forget')
return render(request, 'send_success.html')
else:
return render(request,'forgetpwd.html',{'forget_form':forget_form})
class ResetView(View):
def get(self, request, active_code):
all_records = EmailVerifyRecord.objects.filter(code=active_code)
if all_records:
for record in all_records:
email = record.email
return render(request, "password_reset.html", {"email":email})
else:
return render(request, "active_fail.html")
return render(request, "login.html")
class ModifyPwdView(View):
'''Reset the user password'''
def post(self, request):
modify_form = ModifyPwdForm(request.POST)
if modify_form.is_valid():
pwd1 = request.POST.get("password1", "")
pwd2 = request.POST.get("password2", "")
email = request.POST.get("email", "")
if pwd1 != pwd2:
return render(request, "password_reset.html", {"email":email, "msg":"Passwords do not match!"})
user = CustomUser.objects.get(email=email)
user.password = make_password(pwd2)
user.save()
return render(request, "login.html")
else:
email = request.POST.get("email", "")
return render(request, "password_reset.html", {"email":email, "modify_form":modify_form })
class UserinfoView(LoginRequiredMixin, View):
"""
User profile information
"""
def get(self, request):
return render(request, 'usercenter-info.html', {})
def post(self, request):
user_info_form = UserInfoForm(request.POST, instance=request.user)
if user_info_form.is_valid():
user_info_form.save()
return HttpResponse('{"status":"success"}', content_type='application/json')
else:
return HttpResponse(json.dumps(user_info_form.errors), content_type='application/json')
class UploadImageView(LoginRequiredMixin,View):
'''Update the user avatar'''
def post(self,request):
# Uploaded files arrive in request.FILES, so it must be passed as an extra argument
image_form = UploadImageForm(request.POST,request.FILES)
if image_form.is_valid():
image = image_form.cleaned_data['image']
request.user.image = image
request.user.save()
return HttpResponse('{"status":"success"}', content_type='application/json')
else:
return HttpResponse('{"status":"fail"}', content_type='application/json')
class UpdatePwdView(View):
"""
Change the user password from the personal center
"""
def post(self, request):
modify_form = ModifyPwdForm(request.POST)
if modify_form.is_valid():
pwd1 = request.POST.get("password1", "")
pwd2 = request.POST.get("password2", "")
if pwd1 != pwd2:
return HttpResponse('{"status":"fail","msg":"Passwords do not match"}', content_type='application/json')
user = request.user
user.password = make_password(pwd2)
user.save()
return HttpResponse('{"status":"success"}', content_type='application/json')
else:
return HttpResponse(json.dumps(modify_form.errors), content_type='application/json')
class SendEmailCodeView(LoginRequiredMixin, View):
'''Send the email-change verification code'''
def get(self,request):
email = request.GET.get('email','')
if CustomUser.objects.filter(email=email):
return HttpResponse('{"email":"Email already exists"}', content_type='application/json')
send_register_eamil(email,'update_email')
return HttpResponse('{"status":"success"}', content_type='application/json')
class UpdateEmailView(LoginRequiredMixin, View):
'''Change the email address'''
def post(self, request):
email = request.POST.get("email", "")
code = request.POST.get("code", "")
existed_records = EmailVerifyRecord.objects.filter(email=email, code=code, send_type='update_email')
if existed_records:
user = request.user
user.email = email
user.save()
return HttpResponse('{"status":"success"}', content_type='application/json')
else:
return HttpResponse('{"email":"Invalid verification code"}', content_type='application/json')
class MyCourseView(LoginRequiredMixin, View):
'''My courses'''
def get(self, request):
user_courses = UserCourse.objects.filter(user=request.user)
return render(request, "usercenter-mycourse.html", {
"user_courses":user_courses,
})
class MyFavOrgView(LoginRequiredMixin,View):
'''My favorite course organizations'''
def get(self, request):
org_list = []
fav_orgs = UserFavorite.objects.filter(user=request.user, fav_type=2)
# fav_orgs only stores ids, so look each organization up by id
for fav_org in fav_orgs:
# fav_id is the organization's id
org_id = fav_org.fav_id
# fetch the organization object
org = CourseOrg.objects.get(id=org_id)
org_list.append(org)
return render(request, "usercenter-fav-org.html", {
"org_list": org_list,
})
class MyFavTeacherView(LoginRequiredMixin, View):
'''My favorite teachers'''
def get(self, request):
teacher_list = []
fav_teachers = UserFavorite.objects.filter(user=request.user, fav_type=3)
for fav_teacher in fav_teachers:
teacher_id = fav_teacher.fav_id
teacher = Teacher.objects.get(id=teacher_id)
teacher_list.append(teacher)
return render(request, "usercenter-fav-teacher.html", {
"teacher_list": teacher_list,
})
class MyFavCourseView(LoginRequiredMixin,View):
"""
My favorite courses
"""
def get(self, request):
course_list = []
fav_courses = UserFavorite.objects.filter(user=request.user, fav_type=1)
for fav_course in fav_courses:
course_id = fav_course.fav_id
course = Course.objects.get(id=course_id)
course_list.append(course)
return render(request, 'usercenter-fav-course.html', {
"course_list":course_list,
})
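The three favorite views above repeat one pattern: a UserFavorite row stores only fav_id, so each id must be resolved to its object before rendering. A generic sketch of that resolution step (hypothetical names; a plain dict stands in for the ORM lookup):

```python
def resolve_favorites(favorites, lookup):
    """Map each favorite's fav_id through `lookup` (id -> object)."""
    return [lookup[fav["fav_id"]] for fav in favorites]

# Toy data mimicking UserFavorite rows and a Course table.
courses = {1: "django-course", 2: "flask-course"}
favs = [{"fav_id": 2}, {"fav_id": 1}]
```

One such helper parameterized by the model would replace the three near-identical loops.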
class MyMessageView(LoginRequiredMixin, View):
'''My messages'''
def get(self, request):
all_message = UserMessage.objects.filter(user=request.user.id)
# GET.get() never raises PageNotAnInteger; the paginator call does
page = request.GET.get('page', 1)
p = Paginator(all_message, 4, request=request)
try:
messages = p.page(page)
except PageNotAnInteger:
messages = p.page(1)
return render(request, "usercenter-message.html", {
"messages":messages,
})
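MyMessageView guards an untrusted ?page= query value before slicing the message list into pages of 4. The same guard can be sketched without Django (hypothetical helper; 1-indexed pages):

```python
def paginate(items, per_page, page):
    """Return the 1-indexed `page` slice, falling back to page 1 on bad input."""
    try:
        page = max(1, int(page))
    except (TypeError, ValueError):
        page = 1  # non-numeric input: show the first page
    start = (page - 1) * per_page
    return items[start:start + per_page]
```

An out-of-range page simply yields an empty slice rather than raising.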
from django.shortcuts import render_to_response
def pag_not_found(request):
# Global 404 handler
response = render_to_response('404.html', {})
response.status_code = 404
return response
def page_error(request):
# Global 500 handler
response = render_to_response('500.html', {})
response.status_code = 500
return response
############ gui/Ui_sales_transaction.py (repo: kim-song/kimsong-apriori, license: MIT) ############
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'd:\MITE12\ksapriori\gui\sales_transaction.ui'
#
# Created by: PyQt5 UI code generator 5.12.3
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_FormSalesTransaction(object):
def setupUi(self, FormSalesTransaction):
FormSalesTransaction.setObjectName("FormSalesTransaction")
FormSalesTransaction.resize(989, 466)
self.groupBox = QtWidgets.QGroupBox(FormSalesTransaction)
self.groupBox.setGeometry(QtCore.QRect(10, 10, 181, 451))
font = QtGui.QFont()
font.setFamily("Segoe UI")
font.setPointSize(9)
self.groupBox.setFont(font)
self.groupBox.setObjectName("groupBox")
self.pushButton = QtWidgets.QPushButton(self.groupBox)
self.pushButton.setGeometry(QtCore.QRect(80, 190, 75, 23))
self.pushButton.setObjectName("pushButton")
self.widget = QtWidgets.QWidget(self.groupBox)
self.widget.setGeometry(QtCore.QRect(10, 31, 151, 140))
self.widget.setObjectName("widget")
self.verticalLayout = QtWidgets.QVBoxLayout(self.widget)
self.verticalLayout.setContentsMargins(0, 0, 0, 0)
self.verticalLayout.setObjectName("verticalLayout")
self.label = QtWidgets.QLabel(self.widget)
font = QtGui.QFont()
font.setFamily("Segoe UI")
font.setPointSize(9)
self.label.setFont(font)
self.label.setObjectName("label")
self.verticalLayout.addWidget(self.label)
self.fromDate = QtWidgets.QDateEdit(self.widget)
font = QtGui.QFont()
font.setFamily("Segoe UI")
font.setPointSize(9)
self.fromDate.setFont(font)
self.fromDate.setObjectName("fromDate")
self.verticalLayout.addWidget(self.fromDate)
self.label_2 = QtWidgets.QLabel(self.widget)
font = QtGui.QFont()
font.setFamily("Segoe UI")
font.setPointSize(9)
self.label_2.setFont(font)
self.label_2.setObjectName("label_2")
self.verticalLayout.addWidget(self.label_2)
self.toDate = QtWidgets.QDateEdit(self.widget)
font = QtGui.QFont()
font.setFamily("Segoe UI")
font.setPointSize(9)
self.toDate.setFont(font)
self.toDate.setObjectName("toDate")
self.verticalLayout.addWidget(self.toDate)
self.label_3 = QtWidgets.QLabel(self.widget)
font = QtGui.QFont()
font.setFamily("Segoe UI")
font.setPointSize(9)
self.label_3.setFont(font)
self.label_3.setObjectName("label_3")
self.verticalLayout.addWidget(self.label_3)
self.lineEdit = QtWidgets.QLineEdit(self.widget)
self.lineEdit.setObjectName("lineEdit")
self.verticalLayout.addWidget(self.lineEdit)
self.tableWidget = QtWidgets.QTableWidget(FormSalesTransaction)
self.tableWidget.setGeometry(QtCore.QRect(200, 20, 781, 441))
self.tableWidget.setObjectName("tableWidget")
self.tableWidget.setColumnCount(8)
self.tableWidget.setRowCount(0)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(0, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(1, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(2, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(3, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(4, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(5, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(6, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(7, item)
self.retranslateUi(FormSalesTransaction)
QtCore.QMetaObject.connectSlotsByName(FormSalesTransaction)
def retranslateUi(self, FormSalesTransaction):
_translate = QtCore.QCoreApplication.translate
FormSalesTransaction.setWindowTitle(_translate("FormSalesTransaction", "Sales Transaction"))
self.groupBox.setTitle(_translate("FormSalesTransaction", "Filters"))
self.pushButton.setText(_translate("FormSalesTransaction", "Search"))
self.label.setText(_translate("FormSalesTransaction", "From Date"))
self.label_2.setText(_translate("FormSalesTransaction", "To Date"))
self.label_3.setText(_translate("FormSalesTransaction", "Item Code"))
item = self.tableWidget.horizontalHeaderItem(0)
item.setText(_translate("FormSalesTransaction", "Document No"))
item = self.tableWidget.horizontalHeaderItem(1)
item.setText(_translate("FormSalesTransaction", "Posting Date"))
item = self.tableWidget.horizontalHeaderItem(2)
item.setText(_translate("FormSalesTransaction", "Item Code"))
item = self.tableWidget.horizontalHeaderItem(3)
item.setText(_translate("FormSalesTransaction", "Item Label"))
item = self.tableWidget.horizontalHeaderItem(4)
item.setText(_translate("FormSalesTransaction", "Description"))
item = self.tableWidget.horizontalHeaderItem(5)
item.setText(_translate("FormSalesTransaction", "Quantity"))
item = self.tableWidget.horizontalHeaderItem(6)
item.setText(_translate("FormSalesTransaction", "Price"))
item = self.tableWidget.horizontalHeaderItem(7)
item.setText(_translate("FormSalesTransaction", "Amount"))
############ setup.py (repo: dboddie/Beeware-Hello-VOC, license: MIT) ############
#!/usr/bin/env python
from setuptools import setup, find_packages
setup(
name='{{ cookiecutter.app_name }}',
version='0.0.1',
description='{{ cookiecutter.description }}',
author='{{ cookiecutter.author }}',
author_email='{{ cookiecutter.author_email }}',
license='{{ cookiecutter.license }}',
packages=find_packages(
exclude=['docs', 'tests', 'android']
),
classifiers=[
'Development Status :: 1 - Planning',
'License :: OSI Approved :: {{ cookiecutter.license }}',
],
install_requires=[
],
options={
'app': {
'formal_name': '{{ cookiecutter.formal_name }}',
'bundle': '{{ cookiecutter.bundle }}'
},
}
)
| 25.964286 | 64 | 0.569464 | 62 | 727 | 6.548387 | 0.564516 | 0.059113 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007353 | 0.251719 | 727 | 27 | 65 | 26.925926 | 0.738971 | 0.02751 | 0 | 0.083333 | 0 | 0 | 0.456091 | 0.133144 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.041667 | 0 | 0.041667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
############ src/app.py (repo: chunyuyuan/NEWS_2019_network-master, license: MIT) ############
from flask import Flask, request, render_template, send_file, Response
import io
import base64
import csv
import json
import time
from collections import OrderedDict
import numpy
import pandas as pd
from numpy import genfromtxt
from flask import jsonify
from flask_cors import CORS
from LoadingNetwork import EchoWebSocket
import shutil
import gc
from tornado.wsgi import WSGIContainer
from tornado.web import Application, FallbackHandler
from tornado.websocket import WebSocketHandler
from tornado.ioloop import IOLoop
app = Flask('flasknado')
#app = Flask(__name__)
app.debug = True
CORS(app)
## initial network csv data ############################
with open('NetworkWithDistance.txt') as f:
rawdata = f.readlines()
# you may also want to remove whitespace characters like `\n` at the end
# of each line
rawdata = [x.strip() for x in rawdata]
my_data = genfromtxt('networkwithdist.csv', delimiter=',')
# my_data=numpy.delete(my_data,(0),axis=0)
header = ['id', 'id_to', 'lon', 'lat', 'basinid']
frame = pd.DataFrame(my_data, columns=header)
data = []
MY_GLOBAL = []
with open('tempcsv.csv') as f:
for line in f:
temp = line.strip().split(',')
data.append(temp)
#############################
data1 = []
with open('MyFile1.txt') as f:
r = 0
for line in f:
if(r > 0):
data2 = []
# print(line)
temp = line.split("\",")
data2.append(temp[0][1:])
temp1 = temp[1].split(",[")
data2.append(temp1[0])
data2.append(temp1[1][:-2])
data1.append(data2)
r += 1
header = ['celllist', 'cellid', 'cellto']
frame_celllist = pd.DataFrame(data1, columns=header)
frame_celllist = frame_celllist.drop_duplicates()
del data1[:]
##################
data_c = []
with open('powerplant_cell_loc.csv') as f:
r = 0
for line in f:
if(r > 0):
data_cc = line.split(",")
data_c.append(data_cc)
# print(line)
r += 1
header = ['cellid', 'loc']
frame_cell = pd.DataFrame(data_c, columns=header)
frame_cell = frame_cell.drop_duplicates()
del data_c[:]
########################################################
##################traversal network upstream############
def find_upstream(value):
gc.collect()
ii = 0
li = []
temp = []
a = frame.ix[int(value)]
temp.append(int(value))
MY_GLOBAL[:] = []
i = 0
z = 0
zz = 0
jstring = ''
while z < len(temp):
item = frame.ix[temp[z]]
z += 1
x = data[int(float(item['id']))]
#print x
i = 1
while i < len(x):
xx = frame.ix[int(float(x[i]))]
jstring += '{"type": "Feature","geometry": { "type": "MultiLineString", "coordinates": [[[' + str(float(xx['lon'])) + ',' + str(float(xx['lat'])) + '],[' + str(float(item['lon'])) + ',' + str(
float(item['lat'])) + ']]]},"properties": {"id_to": ' + str(int(xx['id_to'])) + ',"id":' + str(int(xx['id'])) + ',"lat":' + str(float(xx['lat'])) + ',"lon": ' + str(float(xx['lon'])) + '}},'
ii += 1
temp.append(int(float(x[i])))
i += 1
if ii % 1000 == 0:
# print(ii)
MY_GLOBAL.append((int)((ii + 1) / (200000 * 1.0) * 100))
# print(checkInt,ii,len(temp))
## print ii
# temp.append(xx)
#d = OrderedDict()
#d['type'] = 'FeatureCollection'
#d['features'] = li
#print li
# print(jstring)
MY_GLOBAL.append(100)
return jstring[:-1], 200
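find_upstream is a breadth-first walk: every cell popped from temp contributes one feature per upstream neighbour, and those neighbours are queued in turn. A self-contained sketch of the same traversal over a toy adjacency table (hypothetical data, not the real 200k-cell network):

```python
from collections import deque

# Toy upstream adjacency: cell id -> ids of the cells that drain into it.
UPSTREAM = {1: [2, 3], 2: [4], 3: [], 4: []}
COORDS = {1: (0.0, 0.0), 2: (-1.0, 1.0), 3: (1.0, 1.0), 4: (-2.0, 2.0)}

def upstream_features(start):
    """BFS from `start`, building one GeoJSON-style feature per upstream edge."""
    features = []
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        for up in UPSTREAM[cell]:
            features.append({
                "type": "Feature",
                "geometry": {
                    "type": "MultiLineString",
                    "coordinates": [[list(COORDS[up]), list(COORDS[cell])]],
                },
                "properties": {"id": up, "id_to": cell},
            })
            queue.append(up)
    return features
```

Building dicts and serializing with json.dumps would also be less fragile than hand-concatenating the jstring above.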
##################traversal network downstream############
def find_downstream(value, sourceid):
#print value,sourceid
ii = 0
li = []
temp = []
jstring = ''
# MY_GLOBAL[:]=[]
a = frame.ix[int(value)]
temp.append(a)
check = True
z = 0
while z < len(temp) and check:
item = temp[z]
z += 1
if(item['id_to'] == sourceid):
check = False
# break
## print item
# if(item['id']==sourceid):
# check=False
x = frame.ix[frame['id'] == item['id_to']]
#print x
i = 0
while i < len(x):
# d = OrderedDict()
xx = x.ix[x.index[i]]
jstring += '{"type": "Feature","geometry": { "type": "MultiLineString", "coordinates": [[[' + str(float(xx['lon'])) + ',' + str(float(xx['lat'])) + '],[' + str(float(item['lon'])) + ',' + str(
float(item['lat'])) + ']]]},"properties": {"id_to": ' + str(int(xx['id_to'])) + ',"id":' + str(int(xx['id'])) + ',"lat":' + str(float(xx['lat'])) + ',"lon": ' + str(float(xx['lon'])) + '}},'
# d['type'] = 'Feature'
# d['geometry'] = {
# 'type': 'MultiLineString',
# 'coordinates': [[[float(xx['lon']),float(xx['lat'])],[float(item['lon']), float(item['lat'])]]]
# }
# d['properties'] = { "id":int(xx['id']),"id_to":int(xx['id_to']),"lon": float(xx['lon']),"lat": float(xx['lat'])
# }
# li.append(d)
# d=OrderedDict()
i += 1
ii += 1
temp.append(xx)
# if(item['id']==sourceid):
# check=False
# MY_GLOBAL.append(100)
# d = OrderedDict()
# d['type'] = 'FeatureCollection'
# d['features'] = li
# print li
# if (check==False):
return jstring[:-1], 200
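find_downstream needs no queue: each cell has a single id_to successor, so the walk is plain pointer-chasing until the target cell is reached. A sketch under the same assumption (toy pointer table; None marks the outlet):

```python
# Toy downstream pointers: each cell drains into exactly one next cell.
DOWNSTREAM = {4: 2, 2: 1, 3: 1, 1: None}

def downstream_path(start, stop):
    """Follow the id -> id_to chain from `start` until `stop` (inclusive)."""
    path = [start]
    cur = start
    while cur != stop and DOWNSTREAM.get(cur) is not None:
        cur = DOWNSTREAM[cur]
        path.append(cur)
    return path
```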
##################traversal network downstream############
def find_downstream1(value):
#print value,sourceid
ii = 0
li = []
temp = []
jstring = ''
# MY_GLOBAL[:]=[]
a = frame.ix[int(value)]
temp.append(a)
check = True
z = 0
while z < len(temp) and check:
item = temp[z]
z += 1
## print item
# if(item['id']==sourceid):
# check=False
x = frame.ix[frame['id'] == item['id_to']]
#print x
i = 0
while i < len(x):
# d = OrderedDict()
xx = x.ix[x.index[i]]
jstring += '{"type": "Feature","geometry": { "type": "MultiLineString", "coordinates": [[[' + str(float(xx['lon'])) + ',' + str(float(xx['lat'])) + '],[' + str(float(item['lon'])) + ',' + str(
float(item['lat'])) + ']]]},"properties": {"id_to": ' + str(int(xx['id_to'])) + ',"id":' + str(int(xx['id'])) + ',"lat":' + str(float(xx['lat'])) + ',"lon": ' + str(float(xx['lon'])) + '}},'
# d['type'] = 'Feature'
# d['geometry'] = {
# 'type': 'MultiLineString',
# 'coordinates': [[[float(xx['lon']),float(xx['lat'])],[float(item['lon']), float(item['lat'])]]]
# }
# d['properties'] = { "id":int(xx['id']),"id_to":int(xx['id_to']),"lon": float(xx['lon']),"lat": float(xx['lat'])
# }
# li.append(d)
# d=OrderedDict()
i += 1
ii += 1
temp.append(xx)
# if(item['id']==sourceid):
# check=False
# MY_GLOBAL.append(100)
# d = OrderedDict()
# d['type'] = 'FeatureCollection'
# d['features'] = li
# print li
# if (check==False):
return jstring[:-1], 200
#######################pp upstream#######################
def find_upstream_pp(cellid):
gc.collect()
# header=['celllist','cellid','cellto']
# header=['cellid','loc']
templi = frame_celllist[frame_celllist['cellid']
== cellid]['celllist'].tolist()
templist = templi[0][1:-1].split(",")
z = 0
jstring = ''
while z < len(templist):
curid = templist[z].strip()
# print(curid,templist)
curidloc = frame_cell[frame_cell['cellid'] == curid]['loc'].tolist()
curidloc1 = curidloc[0].split("_")
# print(curidloc1[0],curidloc1[1][:-1],curidloc[0])
z += 1
temp = frame_celllist[frame_celllist['cellid']
== curid]['cellto'].tolist()
print(temp)
temp = temp[0].split(",")
if len(temp) == 1 and temp[0][:-1] == "none":
# print(temp[0])
continue
else:
zz = 0
while zz < len(temp):
# print(temp[zz],temp)
x = temp[zz]
zz += 1
if zz == len(temp):
nextloc = frame_cell[frame_cell['cellid']
== x[:-1]]['loc'].tolist()
else:
nextloc = frame_cell[frame_cell['cellid']
== x]['loc'].tolist()
nextloc1 = nextloc[0].split("_")
# print(nextloc1[0],nextloc1[1][:-1],nextloc1)
jstring += '{"type": "Feature","geometry": { "type": "MultiLineString", "coordinates": [[[' + str(curidloc1[0]) + ',' + str(curidloc1[1][:-1]) + '],[' + str(
nextloc1[0]) + ',' + str(nextloc1[1][:-1]) + ']]]},"properties": {"lat":' + str(curidloc1[1][:-1]) + ',"lon": ' + str(curidloc1[0]) + '}},'
# jstring+='{"type": "Feature","geometry": { "type": "MultiLineString", "coordinates": [[['+str(float(xx['lon']))+','+str(float(xx['lat']))+'],['+str(float(item['lon']))+','+str(float(item['lat']))+']]]},"properties": {"id_to": '+str(int(xx['id_to']))+',"id":'+str(int(xx['id']))+',"lat":'+str(float(xx['lat']))+',"lon": '+str(float(xx['lon']))+'}},';
return jstring[:-1], 200
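find_upstream_pp stores its cell lists as '[id1, id2, ...]' strings in the DataFrame and re-parses them on every request. The parsing step, isolated as a hypothetical helper:

```python
def parse_cell_list(raw):
    """Parse a '[id1, id2, ...]' string column into a list of id strings."""
    body = raw.strip()[1:-1]  # drop the surrounding brackets
    return [part.strip() for part in body.split(",") if part.strip()]
```

Reusing one parser would also remove the zz == len(temp) special-casing of the trailing element above.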
#######################pp downstream#######################
def find_downstream_pp(cellid, dcellid):
gc.collect()
# header=['celllist','cellid','cellto']
# header=['cellid','loc']
print(cellid, dcellid)
templi = frame_celllist[frame_celllist['cellid']
== cellid]['celllist'].tolist()
templist = templi[0][1:-1].split(",")
z = len(templist) - 1
jstring = ''
while z > 0:
print(templist[z].strip())
curid = templist[z].strip()
if curid != str(dcellid):
z -= 1
else:
print(z)
break
while z > 0:
curid = templist[z].strip()
# print(curid,templist)
curidloc = frame_cell[frame_cell['cellid'] == curid]['loc'].tolist()
curidloc1 = curidloc[0].split("_")
# print(curidloc1[0],curidloc1[1][:-1],curidloc[0])
temp = frame_celllist[frame_celllist['cellid']
== templist[z].strip()]['cellto'].tolist()
z -= 1
print(temp)
temp = temp[0].split(",")
if len(temp) == 1 and temp[0][:-1] == "none":
# print(temp[0])
z -= 1
continue
else:
zz = 0
aaaa = 'false'
while zz < len(temp):
# print(temp[zz],temp)
x = temp[zz]
zz += 1
if zz == len(temp):
if x[:-1] == curid:
aaaa = 'true'
nextloc = frame_cell[frame_cell['cellid']
== x[:-1]]['loc'].tolist()
else:
if x == curid:
aaaa = 'true'
nextloc = frame_cell[frame_cell['cellid']
== x]['loc'].tolist()
if aaaa == 'true':
nextloc1 = nextloc[0].split("_")
# print(nextloc1[0],nextloc1[1][:-1],nextloc1)
jstring += '{"type": "Feature","geometry": { "type": "MultiLineString", "coordinates": [[[' + str(curidloc1[0]) + ',' + str(curidloc1[1][:-1]) + '],[' + str(
nextloc1[0]) + ',' + str(nextloc1[1][:-1]) + ']]]},"properties": {"lat":' + str(curidloc1[1][:-1]) + ',"lon": ' + str(curidloc1[0]) + '}},'
# jstring+='{"type": "Feature","geometry": { "type": "MultiLineString", "coordinates": [[['+str(float(xx['lon']))+','+str(float(xx['lat']))+'],['+str(float(item['lon']))+','+str(float(item['lat']))+']]]},"properties": {"id_to": '+str(int(xx['id_to']))+',"id":'+str(int(xx['id']))+',"lat":'+str(float(xx['lat']))+',"lon": '+str(float(xx['lon']))+'}},';
print(jstring)
if len(jstring) > 0:
return jstring[:-1], 200
else:
return jstring, 200
@app.route("/", methods=['GET', 'POST'])
def index():
print(request)
return render_template('test1.html')
@app.route("/api/", methods=['GET', 'POST'])
def update():
print(request.method)
if request.method == "POST":
source = request.form["source"]
dist = request.form["dist"]
pic = request.form["pic"]
downfirst = request.form["downfirst"]
pp = request.form["pp"]
print(pp, source, dist, downfirst, pic)
if(pp == 'yes'):
upstream = request.form["upstream"]
if(upstream == 'yes'):
ucellid = request.form["ucellid"]
re, ii = find_upstream_pp(ucellid)
# print(re)
return json.dumps(re), ii
# if(upstream=='no'):
### ucellid = request.form["ucellid"]
# dcellid = request.form["dcellid"]
# re,ii=find_downstream_pp(ucellid,dcellid)
# print(re)
# if(pp=='no'):
source = request.form["source"]
dist = request.form["dist"]
pic = request.form["pic"]
downfirst = request.form["downfirst"]
#print dist
if(downfirst == 'no'):
if(source == 'yes'):
sourceid = request.form["sourceid"]
#print sourceid
import time
start = time.time()
re, ii = find_upstream(sourceid)
end = time.time()
#print ii,(end-start)
# print(re)
# print(MY_GLOBAL)
return json.dumps(re), ii
if(dist == 'yes'):
distid = request.form["distid"]
sourceid = request.form["sourceid"]
MY_GLOBAL[:] = []
#print distid,sourceid
re, ii = find_downstream(int(distid), int(sourceid))
print (re)
gc.collect()
MY_GLOBAL.append(100)
return json.dumps(re, sort_keys=False, indent=4), ii
if(downfirst == 'yes'):
if(dist == 'yes'):
distid = request.form["distid"]
sourceid = request.form["sourceid"]
MY_GLOBAL[:] = []
#print distid,sourceid
re, ii = find_downstream1(int(distid))
print (re)
gc.collect()
MY_GLOBAL.append(100)
return json.dumps(re, sort_keys=False, indent=4), ii
if(pic == 'yes'):
#print request.form
MY_GLOBAL[:] = []
start1 = request.form["dist_lat"]
start2 = request.form["dist_lon"]
goal1 = request.form["source_lat"]
goal2 = request.form["source_lon"]
fromdate = request.form["from"]
todate = request.form["to"]
import time
before = time.time()
output, str1, str2, str3 = LoadingNetwork.main(
[start1, start2], [goal1, goal2], fromdate, todate, rawdata)
#print str1,str2,str3
after = time.time()
print ("time,", after - before)
if(isinstance(output, str)):
return output, 201
else:
# gc.collect()
#print base64.b64encode(output.getvalue())
return base64.b64encode(
output.getvalue()) + "***" + str1 + "***" + str2 + "***" + str3, 200
class WebSocket(WebSocketHandler):
    def on_message(self, message):
        # self.write_message("Received: " + message)
        # self.write_message("Received2: " + message)
        # Split before using m (the original printed m before assigning it)
        m = message[1:-1].split("&")
        print("Received message: " + m[0])
        print("Received message: " + m[1])
        print("Received message: " + m[2])
        print("Received message: " + m[3])
        print("Received message: " + m[4])
        print("Received message: " + m[5])
        print("Received message: " + m[6])
        source = m[0].split("=")[1]
        value = m[1].split("=")[1]
        dist = m[2].split("=")[1]
        value1 = m[3].split("=")[1]
        pic = m[4].split("=")[1]
        downfirst = m[5].split("=")[1]
        pp = m[6].split("=")[1]
        print(pp, source, dist, downfirst, pic, value, value1)
        # ################################# upstream #################################
        if downfirst == 'no':
            if source == 'yes':
                gc.collect()
                ii = 0
                li = []
                temp = []
                a = frame.ix[int(value)]
                temp.append(int(value))
                i = 0
                z = 0
                zz = 0
                jstring = ''
                while z < len(temp):
                    item = frame.ix[temp[z]]
                    z += 1
                    x = data[int(float(item['id']))]
                    # print x
                    i = 1
                    while i < len(x):
                        xx = frame.ix[int(float(x[i]))]
                        jstring += '{"type": "Feature","geometry": { "type": "MultiLineString", "coordinates": [[[' + str(float(xx['lon'])) + ',' + str(float(xx['lat'])) + '],[' + str(float(item['lon'])) + ',' + str(
                            float(item['lat'])) + ']]]},"properties": {"id_to": ' + str(int(xx['id_to'])) + ',"id":' + str(int(xx['id'])) + ',"lat":' + str(float(xx['lat'])) + ',"lon": ' + str(float(xx['lon'])) + '}},'
                        ii += 1
                        temp.append(int(float(x[i])))
                        i += 1
                    if len(jstring) > 1500000:
                        zz += 5
                        self.write_message(jstring[:-1])
                        self.write_message('~' + str(zz * 1.0 / 100))
                        jstring = ''
                self.write_message(jstring[:-1])
                self.write_message('~1')
            # ############################### downstream ###############################
            if dist == 'yes':
                ii = 0
                li = []
                temp = []
                jstring = ''
                # MY_GLOBAL[:] = []
                a = frame.ix[int(value1)]
                temp.append(a)
                check = True
                z = 0
                zz = 0
                while z < len(temp) and check:
                    item = temp[z]
                    z += 1
                    if item['id_to'] == int(value):
                        check = False
                    x = frame.ix[frame['id'] == item['id_to']]
                    # print x
                    i = 0
                    while i < len(x):
                        # d = OrderedDict()
                        xx = x.ix[x.index[i]]
                        jstring += '{"type": "Feature","geometry": { "type": "MultiLineString", "coordinates": [[[' + str(float(xx['lon'])) + ',' + str(float(xx['lat'])) + '],[' + str(float(item['lon'])) + ',' + str(
                            float(item['lat'])) + ']]]},"properties": {"id_to": ' + str(int(xx['id_to'])) + ',"id":' + str(int(xx['id'])) + ',"lat":' + str(float(xx['lat'])) + ',"lon": ' + str(float(xx['lon'])) + '}},'
                        i += 1
                        ii += 1
                        temp.append(xx)
                    if len(jstring) > 150000:
                        zz += 5
                        self.write_message(jstring[:-1])
                        self.write_message('~' + str(zz * 1.0 / 100))
                        jstring = ''
                self.write_message(jstring[:-1])
                self.write_message('~1')
        # ############################### downfirst ################################
        if downfirst == 'yes':
            if dist == 'yes':
                ii = 0
                li = []
                temp = []
                jstring = ''
                # MY_GLOBAL[:] = []
                a = frame.ix[int(value1)]
                temp.append(a)
                z = 0
                zz = 0
                while z < len(temp):
                    item = temp[z]
                    z += 1
                    # if(item['id']==sourceid):
                    #     check=False
                    x = frame.ix[frame['id'] == item['id_to']]
                    # print x
                    i = 0
                    while i < len(x):
                        # d = OrderedDict()
                        xx = x.ix[x.index[i]]
                        jstring += '{"type": "Feature","geometry": { "type": "MultiLineString", "coordinates": [[[' + str(float(xx['lon'])) + ',' + str(float(xx['lat'])) + '],[' + str(float(item['lon'])) + ',' + str(
                            float(item['lat'])) + ']]]},"properties": {"id_to": ' + str(int(xx['id_to'])) + ',"id":' + str(int(xx['id'])) + ',"lat":' + str(float(xx['lat'])) + ',"lon": ' + str(float(xx['lon'])) + '}},'
                        # d['type'] = 'Feature'
                        # d['geometry'] = {
                        #     'type': 'MultiLineString',
                        #     'coordinates': [[[float(xx['lon']),float(xx['lat'])],[float(item['lon']), float(item['lat'])]]]
                        # }
                        # d['properties'] = { "id":int(xx['id']),"id_to":int(xx['id_to']),"lon": float(xx['lon']),"lat": float(xx['lat']) }
                        # li.append(d)
                        # d=OrderedDict()
                        i += 1
                        ii += 1
                        temp.append(xx)
                    # if(item['id']==sourceid):
                    #     check=False
                    # MY_GLOBAL.append(100)
                    # d = OrderedDict()
                    # d['type'] = 'FeatureCollection'
                    # d['features'] = li
                    # print li
                    # if (check==False):
                    if len(jstring) > 150000:
                        zz += 5
                        self.write_message(jstring[:-1])
                        self.write_message('~' + str(zz * 1.0 / 100))
                        jstring = ''
                self.write_message(jstring[:-1])
                self.write_message('~1')
        # if(downfirst == 'yes'):
        if pic == 'yes':
            # print request.form
            # "&dist_lat="+dist_lat+"&dist_lon="+dist_lon+"&source_lat="+source_lat+"&source_lon="+source_lon+"&from="+value3.value+"&to="+value4.value);
            start1 = m[7].split("=")[1]
            start2 = m[8].split("=")[1]
            goal1 = m[9].split("=")[1]
            goal2 = m[10].split("=")[1]
            fromdate = m[11].split("=")[1]
            todate = m[12].split("=")[1]
            print(start1, start2, goal1, goal2, fromdate, todate)
            import time
            before = time.time()
            output, str1, str2, str3 = LoadingNetwork.main(
                [start1, start2], [goal1, goal2], fromdate, todate, rawdata)
            # print str1, str2, str3
            # print(output)
            after = time.time()
            print("time,", after - before)
            # The HTTP route returns the encoded image; here the send was left
            # commented out in the original:
            # if isinstance(output, str):
            #     return output, 201
            # else:
            #     return base64.b64encode(output.getvalue()) + "***" + str1 + "***" + str2 + "***" + str3, 200
if __name__ == "__main__":
    container = WSGIContainer(app)
    server = Application([
        (r'/websocket/', WebSocket),
        (r'/we/', EchoWebSocket),
        (r'.*', FallbackHandler, dict(fallback=container))
    ])
    server.listen(5000)
    IOLoop.instance().start()
    # test()
# File: ddtrace/contrib/aiobotocore/__init__.py (repo: tophatmonocle/dd-trace-py, license: BSD-3-Clause)
"""
The aiobotocore integration will trace all AWS calls made with the ``aiobotocore``
library. This integration isn't enabled when applying the default patching.
To enable it, you must run ``patch_all(aiobotocore=True)``
::

    import aiobotocore.session
    from ddtrace import patch

    # If not patched yet, you can patch aiobotocore specifically
    patch(aiobotocore=True)

    # This will report spans with the default instrumentation
    session = aiobotocore.session.get_session()
    lambda_client = session.create_client('lambda', region_name='us-east-1')

    # This query generates a trace
    lambda_client.list_functions()
"""
from ...utils.importlib import require_modules


required_modules = ['aiobotocore.client']

with require_modules(required_modules) as missing_modules:
    if not missing_modules:
        from .patch import patch

        __all__ = ['patch']
b2e84727d200add756532d87eca711fb92b61dde | 1,570 | py | Python | setup.py | Peque/mmsim | b3a78ad0119db6ee8df349a89559ea8006c85db1 | [
"BSD-3-Clause"
] | null | null | null | setup.py | Peque/mmsim | b3a78ad0119db6ee8df349a89559ea8006c85db1 | [
"BSD-3-Clause"
] | null | null | null | setup.py | Peque/mmsim | b3a78ad0119db6ee8df349a89559ea8006c85db1 | [
"BSD-3-Clause"
] | null | null | null | """
Setup module.
"""
from setuptools import setup
from mmsim import __version__
setup(
name='mmsim',
version=__version__,
description='A simple Micromouse Maze Simulator server',
long_description="""The server can load different mazes and any client
can connect to it to ask for the current position walls, move from
one cell to another and visualize the simulated micromouse state.""",
url='https://github.com/Theseus/mmsim',
author='Miguel Sánchez de León Peque',
author_email='peque@neosit.es',
license='BSD License',
classifiers=[
'Development Status :: 3 - Alpha',
'Topic :: Utilities',
'License :: OSI Approved :: BSD License',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: Implementation :: CPython',
],
keywords='micromouse maze server simulator',
entry_points={
'console_scripts': [
'mmsim = mmsim.commands:launch',
],
},
packages=['mmsim'],
install_requires=[
'click',
'numpy',
'pyqtgraph',
'pyqt5',
'pyzmq'],
extras_require={
'docs': [
'doc8',
'sphinx',
'sphinx_rtd_theme',
],
'lint': [
'flake8',
'flake8-bugbear',
'flake8-per-file-ignores',
'flake8-quotes',
'pep8-naming',
],
'test': [
'pytest',
'pytest-cov',
],
},
)
| 26.610169 | 77 | 0.54586 | 148 | 1,570 | 5.682432 | 0.702703 | 0.067776 | 0.08918 | 0.061831 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010387 | 0.325478 | 1,570 | 58 | 78 | 27.068966 | 0.783758 | 0.00828 | 0 | 0.09434 | 0 | 0 | 0.503551 | 0.028405 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.037736 | 0 | 0.037736 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b2e911b19926607cd6e241f7b09f34ddcf0231cd | 2,556 | py | Python | fastinference.py | wkcw/VariousDiscriminator-CycleGan | de9c033aeed1c429f37c531056c1f74cb51a771c | [
"MIT"
] | null | null | null | fastinference.py | wkcw/VariousDiscriminator-CycleGan | de9c033aeed1c429f37c531056c1f74cb51a771c | [
"MIT"
] | null | null | null | fastinference.py | wkcw/VariousDiscriminator-CycleGan | de9c033aeed1c429f37c531056c1f74cb51a771c | [
"MIT"
] | null | null | null | """
A fast version of the original inference.
Constructing one graph to infer all the samples.
Originally one graph for each sample.
"""
import tensorflow as tf
import os
from model import CycleGAN
import utils
import scipy.misc
import numpy as np

try:
    from os import scandir
except ImportError:
    # Python 2 polyfill module
    from scandir import scandir

FLAGS = tf.flags.FLAGS

tf.flags.DEFINE_string('model', '', 'model path (.pb)')
tf.flags.DEFINE_string('input', 'data/apple', 'input image path')
tf.flags.DEFINE_string('output', 'samples/apple', 'output image path')
tf.flags.DEFINE_integer('image_size', 128, 'image size, default: 128')


def data_reader(input_dir):
    file_paths = []
    for img_file in scandir(input_dir):
        if img_file.name.endswith('.jpg') and img_file.is_file():
            file_paths.append(img_file.path)
    return file_paths


def inference():
    graph = tf.Graph()
    with graph.as_default():
        with tf.gfile.FastGFile(FLAGS.model, 'rb') as model_file:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(model_file.read())
        input_image = tf.placeholder(tf.float32, shape=[FLAGS.image_size, FLAGS.image_size, 3])
        [output_image] = tf.import_graph_def(graph_def,
                                             input_map={'input_image': input_image},
                                             return_elements=['output_image:0'],
                                             name='output')
        # print type(output_image), output_image
    file_list = data_reader(FLAGS.input)
    whole = len(file_list)
    cnt = 0
    with tf.Session(graph=graph) as sess:
        for file in file_list:
            tmp_image = scipy.misc.imread(file)
            tmp_image = scipy.misc.imresize(tmp_image, (FLAGS.image_size, FLAGS.image_size, 3))
            processed_image = tmp_image / 127.5 - 1
            processed_image = np.asarray(processed_image, dtype=np.float32)
            predicted_image = sess.run(output_image, feed_dict={input_image: processed_image})
            predicted_image = np.squeeze(predicted_image)
            # print tmp_image.shape, predicted_image.shape
            save_image = np.concatenate((tmp_image, predicted_image), axis=1)
            print(cnt)
            output_file_name = file.split('/')[-1]
            try:
                os.makedirs(FLAGS.output)
            except os.error:
                pass
            scipy.misc.imsave(FLAGS.output + '/{}'.format(output_file_name), save_image)
            cnt += 1
            # Progress: the original used Python 2 floor division (cnt//whole),
            # which is 0 until the loop finishes; a true fraction was intended.
            if cnt / whole > 0.05:
                print(cnt / whole, 'done')


def main(unused_argv):
    inference()


if __name__ == '__main__':
    tf.app.run()
# File: omop_cdm/utility_programs/load_concept_files_into_db.py (repo: jhajagos/CommonDataModelMapper, license: Apache-2.0)
import argparse
import json
import sys
import os

try:
    from utility_functions import load_csv_files_into_db, generate_vocabulary_load
except ImportError:
    sys.path.insert(0, os.path.abspath(os.path.join(os.path.split(__file__)[0], os.path.pardir, os.path.pardir, "src")))
    from utility_functions import load_csv_files_into_db, generate_vocabulary_load


def main(vocab_directory, connection_string, schema, vocabularies=["CONCEPT"]):
    vocab_list = generate_vocabulary_load(vocab_directory, vocabularies)
    vocab_data_dict = {}
    for pair in vocab_list:
        vocab_data_dict[pair[1]] = pair[0]

    load_csv_files_into_db(connection_string, vocab_data_dict, schema_ddl=None, indices_ddl=None,
                           i_print_update=1000, truncate=True, schema=schema, delimiter="\t")


if __name__ == "__main__":
    arg_parse_obj = argparse.ArgumentParser(description="Load concept/vocabulary files into database")
    arg_parse_obj.add_argument("-c", "--config-file-name", dest="config_file_name", help="JSON config file", default="../hi_config.json")
    arg_parse_obj.add_argument("--connection-uri", dest="connection_uri", default=None)
    arg_parse_obj.add_argument("--schema", dest="schema", default=None)
    arg_parse_obj.add_argument("--load-concept_ancestor", default=False, action="store_true", dest="load_concept_ancestor")
    arg_parse_obj.add_argument("--full-concept-files", default=False, action="store_true", dest="load_full_concept_files")

    arg_obj = arg_parse_obj.parse_args()

    print("Reading config file '%s'" % arg_obj.config_file_name)
    with open(arg_obj.config_file_name) as f:
        config = json.load(f)

    if arg_obj.connection_uri is None:
        connection_uri = config["connection_uri"]
    else:
        connection_uri = arg_obj.connection_uri

    if arg_obj.schema is None:
        schema = config["schema"]
    else:
        schema = arg_obj.schema

    if arg_obj.load_full_concept_files:
        vocabularies_to_load = ["CONCEPT", "CONCEPT_ANCESTOR", "CONCEPT_CLASS", "CONCEPT_RELATIONSHIP",
                                "CONCEPT_SYNONYM", "DOMAIN", "DRUG_STRENGTH", "RELATIONSHIP", "VOCABULARY"]
    elif arg_obj.load_concept_ancestor:
        vocabularies_to_load = ["CONCEPT", "CONCEPT_ANCESTOR"]
    else:
        vocabularies_to_load = ["CONCEPT"]

    main(config["json_map_directory"], connection_uri, schema, vocabularies=vocabularies_to_load)
# File: DNN_Experiments/MaskRCNN/convert.py (repo: wmjpillow/FlameDetectionAPP, license: MIT)
#!/usr/bin/env python
# convert jpg to png
from glob import glob
import cv2

pngs = glob('./*.jpg')
for j in pngs:
    img = cv2.imread(j)
    cv2.imwrite(j[:-3] + 'png', img)

# delete jpg files
import glob  # note: rebinds `glob` from the function above to the module
import os

dir = "/Users/wangmeijie/ALLImportantProjects/FlameDetectionAPP/Models/MaskRCNN/02_26_2020/Mask_RCNN/dataset/train"
for jpgpath in glob.iglob(os.path.join(dir, '*.jpg')):
    os.remove(jpgpath)
# File: scratch/movielens-mongodb.py (repo: crcsmnky/movielens-data-exports, license: Apache-2.0)
"""
usage: python movielens-mongodb.py [movies] [ratings] [links]
"""
import sys
import re
import csv
import os

# import tmdbsimple as tmdb
from pymongo import MongoClient
from pymongo import ASCENDING, DESCENDING

from datetime import datetime
from time import sleep


def import_movies(db, mfile):
    movies = []
    mcsv = csv.DictReader(mfile)
    for row in mcsv:
        movie = {
            'movieid': int(row['movieId']),
            'title': row['title'].split(' (')[0],
            'year': row['title'].split(' (')[-1][:-1],
            'genres': row['genres'].split('|')
        }
        movies.append(movie)
        if (len(movies) % 1000) == 0:
            # print count, "movies inserted"
            db.command('insert', 'movies', documents=movies, ordered=False)
            movies = []
    if len(movies) > 0:
        db.command('insert', 'movies', documents=movies, ordered=False)


def import_ratings(db, rfile):
    count = 0
    ratings, movies, users = [], [], []
    rcsv = csv.DictReader(rfile)
    for row in rcsv:
        rating = {
            'movieid': int(row['movieId']),
            'userid': int(row['userId']),
            'rating': float(row['rating']),
            'ts': datetime.fromtimestamp(float(row['timestamp']))
        }
        ratings.append(rating)

        movie_update = {
            'q': {'movieid': int(row['movieId'])},
            'u': {'$inc': {
                'ratings': 1,
                'total_rating': float(row['rating'])
            }}
        }
        movies.append(movie_update)

        user_update = {
            'q': {'userid': int(row['userId'])},
            'u': {'$inc': {'ratings': 1}},
            'upsert': True
        }
        users.append(user_update)

        count += 1
        if (count % 1000) == 0:
            # print count, "ratings inserted, movies updated, users updated"
            db.command('insert', 'ratings', documents=ratings, ordered=False)
            db.command('update', 'movies', updates=movies, ordered=False)
            db.command('update', 'users', updates=users, ordered=False)
            ratings, movies, users = [], [], []
    if count > 0:
        db.command('insert', 'ratings', documents=ratings, ordered=False)
        db.command('update', 'movies', updates=movies, ordered=False)
        db.command('update', 'users', updates=users, ordered=False)


def import_links(db, lfile):
    count = 0
    movies = []
    lcsv = csv.DictReader(lfile)
    for row in lcsv:
        try:
            movies.append({
                'q': {'movieid': int(row['movieId'])},
                'u': {'$set': {
                    'imdb': row['imdbId'],
                    'tmdb': row['tmdbId']
                }}
            })
            count += 1
        except:
            continue
        if (count % 1000) == 0:
            db.command('update', 'movies', updates=movies, ordered=False)
            movies = []
    if count > 0:
        db.command('update', 'movies', updates=movies, ordered=False)


def create_genres(db):
    docs = list(db.movies.aggregate([
        {'$unwind': '$genres'},
        {'$group': {
            '_id': '$genres',
            'count': {'$sum': 1}
        }},
    ], cursor={}))
    genres = [
        {'_id': idx, 'name': doc['_id'], 'count': doc['count']}
        for idx, doc in enumerate(docs)
    ]
    db.command('insert', 'genres', documents=genres, ordered=False)


def update_avg_ratings(db):
    movies = db.movies.find()
    for m in movies:
        try:
            db.movies.update_one({'_id': m['_id']}, {'$set': {'avg_rating': float(m['total_rating']) / m['ratings']}})
        except:
            continue


def get_poster_links(db):
    # requires the tmdbsimple import that is commented out above
    tmdb.API_KEY = '[YOUR API KEY HERE]'
    conf = tmdb.Configuration()
    imgurl = conf.info()['images']['base_url'] + 'w154' + '{path}'
    allmovies = db.movies.find()
    for i in range(0, allmovies.count(), 40):
        print(i)
        for j in range(i, i + 40):
            try:
                movie = tmdb.Movies(int(allmovies[j]['tmdb'])).info()
                db.movies.update_one(
                    {'_id': allmovies[j]['_id']},
                    {'$set': {'poster': imgurl.format(path=movie['poster_path'])}}
                )
            except:
                continue
        sleep(10)


def ensure_indexes(db):
    db.movies.ensure_index("movieid")
    db.movies.ensure_index("ratings")
    db.movies.ensure_index("genres")
    db.ratings.ensure_index([("userid", ASCENDING), ("movieid", ASCENDING)])
    db.users.ensure_index("userid")
    db.genres.ensure_index("name")


def main():
    host = os.environ.get('MONGODB_HOST', 'localhost')
    port = os.environ.get('MONGODB_PORT', 27017)
    database = os.environ.get('MONGODB_DB', 'movieweb')
    db = MongoClient(host, port)[database]

    with open(sys.argv[1]) as mfile:
        import_movies(db, mfile)
    with open(sys.argv[2]) as rfile:
        import_ratings(db, rfile)
    with open(sys.argv[3]) as lfile:
        import_links(db, lfile)

    create_genres(db)
    update_avg_ratings(db)
    get_poster_links(db)
    # ensure_indexes(db)


if __name__ == '__main__':
    main()
# File: aito/client/requests/query_api_request.py (repo: AitoDotAI/aito-python-tools, license: MIT)
"""Aito `Query API <https://aito.ai/docs/api/#query-api>`__ Request Class"""
import re
from abc import ABC
from typing import Dict, Optional, Union, List

from .aito_request import AitoRequest, _PatternEndpoint, _PostRequest
from ..responses import SearchResponse, PredictResponse, RecommendResponse, EvaluateResponse, SimilarityResponse, \
    MatchResponse, RelateResponse, HitsResponse


class QueryAPIRequest(_PostRequest, _PatternEndpoint, AitoRequest, ABC):
    """Request to a `Query API <https://aito.ai/docs/api/#query-api>`__
    """
    #: the Query API path
    path: str = None  # get around of not having abstract class attribute
    _query_api_paths = ['_search', '_predict', '_recommend', '_evaluate', '_similarity', '_match', '_relate', '_query']

    def __init__(self, query: Dict):
        """
        :param query: an Aito query if applicable, optional
        :type query: Dict
        """
        if self.path is None:
            raise NotImplementedError('The API path must be implemented')
        endpoint = self._endpoint_from_path(self.path)
        super().__init__(method=self.method, endpoint=endpoint, query=query)

    @classmethod
    def _endpoint_pattern(cls):
        return re.compile(f"^{cls._api_version_endpoint_prefix}/({'|'.join(cls._query_api_paths)})$")

    @classmethod
    def make_request(cls, method: str, endpoint: str, query: Optional[Union[Dict, List]]) -> 'AitoRequest':
        for sub_cls in cls.__subclasses__():
            if method == sub_cls.method and endpoint == QueryAPIRequest._endpoint_from_path(sub_cls.path):
                return sub_cls(query=query)
        raise ValueError(f"invalid {cls.__name__} with '{method}({endpoint})'")

    @classmethod
    def _endpoint_from_path(cls, path: str):
        """return the query api endpoint from the Query API path"""
        if path not in cls._query_api_paths:
            raise ValueError(f"path must be one of {'|'.join(cls._query_api_paths)}")
        return f'{cls._api_version_endpoint_prefix}/{path}'


class SearchRequest(QueryAPIRequest):
    """Request to the `Search API <https://aito.ai/docs/api/#post-api-v1-search>`__"""
    #: the Query API path
    path: str = '_search'
    #: the class of the response for this request class
    response_cls = SearchResponse


class PredictRequest(QueryAPIRequest):
    """Request to the `Predict API <https://aito.ai/docs/api/#post-api-v1-predict>`__"""
    #: the Query API path
    path: str = '_predict'
    #: the class of the response for this request class
    response_cls = PredictResponse


class RecommendRequest(QueryAPIRequest):
    """Request to the `Recommend API <https://aito.ai/docs/api/#post-api-v1-recommend>`__"""
    #: the Query API path
    path: str = '_recommend'
    #: the class of the response for this request class
    response_cls = RecommendResponse


class EvaluateRequest(QueryAPIRequest):
    """Request to the `Evaluate API <https://aito.ai/docs/api/#post-api-v1-evaluate>`__"""
    #: the Query API path
    path: str = '_evaluate'
    #: the class of the response for this request class
    response_cls = EvaluateResponse


class SimilarityRequest(QueryAPIRequest):
    """Request to the `Similarity API <https://aito.ai/docs/api/#post-api-v1-similarity>`__"""
    #: the Query API path
    path: str = '_similarity'
    #: the class of the response for this request class
    response_cls = SimilarityResponse


class MatchRequest(QueryAPIRequest):
    """Request to the `Match query <https://aito.ai/docs/api/#post-api-v1-match>`__"""
    #: the Query API path
    path: str = '_match'
    #: the class of the response for this request class
    response_cls = MatchResponse


class RelateRequest(QueryAPIRequest):
    """Request to the `Relate API <https://aito.ai/docs/api/#post-api-v1-relate>`__"""
    #: the Query API path
    path: str = '_relate'
    #: the class of the response for this request class
    response_cls = RelateResponse


class GenericQueryRequest(QueryAPIRequest):
    """Request to the `Generic Query API <https://aito.ai/docs/api/#post-api-v1-query>`__"""
    #: the Query API path
    path: str = '_query'
    #: the class of the response for this request class
    response_cls = HitsResponse
# File: sapy_script/SAP.py (repo: fkfouri/sapy_script, license: MIT)
from multiprocessing import Pool, Manager
from time import sleep
from wmi import WMI
from win32com.client import GetObject
from subprocess import Popen
from collections import Iterable
from tqdm import tqdm
from os import getpid
from sapy_script.Session import Session
session_process = None
all_processes_id = []
def _on_init(sid, p_ids):
p_ids.append(getpid())
global session_process
app = SAP.app()
i = 0
while True:
con = app.Children(i)
if con.Children(0).Info.systemsessionid == sid:
session = con.Children(p_ids.index(getpid()))
session_process = Session(session)
break
i = i + 1
def _task_executor(task):
task['func'](task['data'])
class SAP:
def __init__(self, max_sessions=16):
self._con = None
self._tasks = []
self.max_sessions = max_sessions
self.session = lambda i=0: Session(self._con.Children(i))
@staticmethod
def app():
"""Open SAPGui"""
wmi_obj = WMI()
sap_exists = len(wmi_obj.Win32_Process(name='saplgpad.exe')) > 0
if not sap_exists:
Popen(['C:\Program Files (x86)\SAP\FrontEnd\SAPgui\saplgpad.exe'])
while True:
try:
#temp = GetObject("SAPGUI").GetScriptingEngine
#temp.Change("teste 456", "", "", "", "", ".\LocalSystem", "")
#objService.Change(,, , , , , ".\LocalSystem", "")
return GetObject("SAPGUI").GetScriptingEngine
except:
sleep(1)
pass
def connect(self, environment, client=None, user=None, password=None, lang=None, force=False):
con = SAP.app().OpenConnection(environment, True)
session = Session(con.Children(0))
if client is not None:
session.findById("wnd[0]/usr/txtRSYST-MANDT").Text = client
if user is not None:
session.findById("wnd[0]/usr/txtRSYST-BNAME").Text = user
if password is not None:
session.findById("wnd[0]/usr/pwdRSYST-BCODE").Text = password
if lang is not None:
session.findById("wnd[0]/usr/txtRSYST-LANGU").Text = lang
session.findById("wnd[0]").sendVKey(0)
# Eventual tela de mudanca de senha
change_pwd = False
try:
session.findById("wnd[1]/usr/pwdRSYST-NCODE").text = ''
session.findById("wnd[1]/usr/pwdRSYST-NCOD2").text = ''
change_pwd = True
except:
pass
if change_pwd:
raise ValueError('Please, set a new Password')
# Derruba conexão SAP
if force:
try:
session.findById("wnd[1]/usr/radMULTI_LOGON_OPT1").select()
session.findById("wnd[1]/tbar[0]/btn[0]").press()
except:
pass
else:
try:
session.findById("wnd[1]/usr/radMULTI_LOGON_OPT1").select()
session.findById("wnd[1]").sendVKey(12)
return False
except:
pass
        # Test the connection
if session.is_connected():
self._con = con
return True
self._con = None
return False
@property
def connected(self):
return self.session().is_connected()
@staticmethod
def session():
global session_process
return session_process
def sid(self):
return self.session().Info.systemsessionid
def logout(self):
session = self.session()
session.findById("wnd[0]/tbar[0]/okcd").text = "/nex"
session.findById("wnd[0]").sendVKey(0)
del session
self._con = None
@property
def number_of_sessions(self):
return 0 if self._con is None else len(self._con.Children)
@number_of_sessions.setter
def number_of_sessions(self, value):
size = self.number_of_sessions
if size == 0:
return
value = min(max(int(value), 1), self.max_sessions)
minus = value < size
arr = list(range(size, value))
arr.extend(reversed(range(value, size)))
for i in arr:
if minus:
session = self.session(i)
session.findById("wnd[0]/tbar[0]/okcd").text = "/i"
session.findById("wnd[0]").sendVKey(0)
else:
self.session().createSession()
sleep(0.5)
def clear_tasks(self):
self._tasks = []
def add_task(self, func, data):
for dt in data:
self._tasks.append({'func': func, 'data': dt})
def execute_tasks(self, resize_sessions=False):
total = len(self._tasks)
if total == 0:
return
if resize_sessions:
self.number_of_sessions = total
size = self.number_of_sessions
if size == 0:
return
sess_manager = Manager().list([])
pool = Pool(processes=self.number_of_sessions, initializer=_on_init, initargs=(self.sid(), sess_manager))
response = list(tqdm(pool.imap_unordered(_task_executor, self._tasks)))
pool.close()
pool.join()
return list(response)
def execute_function(self, func, data, resize_sessions=False):
if not isinstance(data, Iterable):
data = [data]
self.clear_tasks()
self.add_task(func=func, data=data)
response = self.execute_tasks(resize_sessions=resize_sessions)
self.clear_tasks()
return response
@staticmethod
def multi_arguments(func):
def convert_args(pr):
return func(**pr)
return convert_args
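The `multi_arguments` decorator above adapts a keyword-argument function so that `execute_tasks` can hand each task's `data` dict over as a single positional argument. A minimal standalone sketch of the same adapter (the sample function and task data here are illustrative, not part of the SAP wrapper):

```python
def multi_arguments(func):
    # Wrap func so it can be called with one dict that is
    # unpacked into keyword arguments.
    def convert_args(params):
        return func(**params)
    return convert_args

def make_entry(material, qty):
    return "%s x%d" % (material, qty)

run_entry = multi_arguments(make_entry)

# Each task's data is a plain dict, as in add_task()/execute_tasks().
tasks = [{"material": "M-01", "qty": 2}, {"material": "M-02", "qty": 5}]
results = [run_entry(t) for t in tasks]
```

This is what lets `Pool.imap_unordered` (which passes exactly one argument per item) drive functions that were written with named parameters.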
# File: jake/test/test_audit.py (lvcarlosja/jake, Apache-2.0)
#
# Copyright 2019-Present Sonatype Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
""" test_audit.py , for all your testing of audit py needs """
import unittest
import json
from pathlib import Path
from ..audit.audit import Audit
from ..types.results_decoder import ResultsDecoder
class TestAudit(unittest.TestCase):
""" TestAudit is responsible for testing the Audit class """
def setUp(self):
self.func = Audit()
def test_call_audit_results_prints_output(self):
""" test_call_audit_results_prints_output ensures that when called with
a valid result, audit_results returns the number of vulnerabilities found """
filename = Path(__file__).parent / "ossindexresponse.txt"
with open(filename, "r") as stdin:
response = json.loads(
stdin.read(),
cls=ResultsDecoder)
self.assertEqual(self.func.audit_results(response),
self.expected_results())
@staticmethod
def expected_results():
""" Weeee, I'm helping! """
return 3
# File: vintage_commands.py (ktuan89/Vintage, MIT/Unlicense)
import sublime, sublime_plugin
import os
#import subprocess
def is_legal_path_char(c):
# XXX make this platform-specific?
return c not in " \n\"|*<>{}[]()"
def move_while_path_character(view, start, is_at_boundary, increment=1):
while True:
if not is_legal_path_char(view.substr(start)):
break
start = start + increment
if is_at_boundary(start):
break
return start
class ViOpenFileUnderSelectionCommand(sublime_plugin.TextCommand):
def run(self, edit):
#sel = self.view.sel()[0]
region = self.view.sel()[0]
"""if not sel.empty():
file_name = self.view.substr(sel)
else:
caret_pos = self.view.sel()[0].begin()
current_line = self.view.line(caret_pos)
left = move_while_path_character(
self.view,
caret_pos,
lambda x: x < current_line.begin(),
increment=-1)
right = move_while_path_character(
self.view,
caret_pos,
lambda x: x > current_line.end(),
increment=1)
file_name = self.view.substr(sublime.Region(left + 1, right))"""
if region.empty():
line = self.view.line(region)
file_name = self.view.substr(line)
else:
file_name = self.view.substr(region)
"""file_name = os.path.join(os.path.dirname(self.view.file_name()),
file_name)"""
if file_name.endswith(":"):
file_name = file_name[0:-1]
if os.path.exists(file_name):
self.view.window().open_file(file_name)
class CopyCurrentWord(sublime_plugin.TextCommand):
def run(self, edit):
for region in self.view.sel():
if region.empty():
sublime.set_clipboard(self.view.substr(self.view.word(region.begin())))
class OpenFileInXcode(sublime_plugin.TextCommand):
def run(self, edit):
if self.view.file_name() is not None:
#print self.view.file_name()
#subprocess.call(["open", "-a", "/Applications/Xcode.app", self.view.file_name()])
os.system("open -a /Applications/Xcode.app '" + self.view.file_name() + "'")
class ViSaveAndExit(sublime_plugin.WindowCommand):
def run(self):
self.window.run_command('save')
self.window.run_command('close')
if len(self.window.views()) == 0:
self.window.run_command('close')
#class MoveFocusedViewToBeginning(sublime_plugin.EventListener):
# def on_activated(self, view):
# view.window().set_view_index(view, 0, 0)
class ExtendedSwitcherHaha(sublime_plugin.WindowCommand):
# declarations
open_files = []
open_views = []
window = []
settings = []
# lets go
def run(self, list_mode):
print "Here here here"
# self.view.insert(edit, 0, "Hello, World!")
self.open_files = []
self.open_views = []
self.window = sublime.active_window()
self.settings = sublime.load_settings('ExtendedSwitcher.sublime-settings')
for f in self.getViews(list_mode):
# if skip the current active is enabled do not add the current file it for selection
if self.settings.get('skip_current_file') == True:
if f.id() == self.window.active_view().id():
continue
self.open_views.append(f) # add the view object
file_name = f.file_name() # get the full path
if file_name:
if f.is_dirty():
file_name += self.settings.get('mark_dirty_file_char') # if there are any unsaved changes to the file
if self.settings.get('show_full_file_path') == True:
self.open_files.append(os.path.basename(file_name) + ' - ' + os.path.dirname(file_name))
else:
self.open_files.append(os.path.basename(file_name))
else:
self.open_files.append("Untitled")
if self.check_for_sorting() == True:
self.sort_files()
self.window.show_quick_panel(self.open_files, self.tab_selected) # show the file list
# display the selected open file
def tab_selected(self, selected):
if selected > -1:
self.window.focus_view(self.open_views[selected])
return selected
# sort the files for display in alphabetical order
def sort_files(self):
open_files = self.open_files
open_views = []
open_files.sort()
for f in open_files:
for fv in self.open_views:
if fv.file_name():
f = f.replace(" - " + os.path.dirname(fv.file_name()),'')
if (f == os.path.basename(fv.file_name())) or (f == os.path.basename(fv.file_name())+self.settings.get('mark_dirty_file_char')):
open_views.append(fv)
self.open_views.remove(fv)
if f == "Untitled" and not fv.file_name():
open_views.append(fv)
self.open_views.remove(fv)
self.open_views = open_views
# flags for sorting
def check_for_sorting(self):
if self.settings.has("sort"):
return self.settings.get("sort", False)
def getViews(self, list_mode):
views = []
# get only the open files for the active_group
if list_mode == "active_group":
views = self.window.views_in_group(self.window.active_group())
# get all open view if list_mode is window or active_group doesnt not have any files open
if (list_mode == "window") or (len(views) < 1):
views = self.window.views()
return views
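The `move_while_path_character` helper above walks outward from the caret while characters still look path-like. A self-contained sketch of the same expansion over a plain string (no Sublime API; the sample line and caret position are made up for illustration):

```python
def is_legal_path_char(c):
    return c not in " \n\"|*<>{}[]()"

def path_under_caret(line, caret):
    # Expand left and right from the caret while characters are path-like.
    left = caret
    while left > 0 and is_legal_path_char(line[left - 1]):
        left -= 1
    right = caret
    while right < len(line) and is_legal_path_char(line[right]):
        right += 1
    return line[left:right]

word = path_under_caret("open /tmp/notes.txt now", 8)
```

The plugin does the same thing with `view.substr` and a line boundary instead of string indexing.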
# File: bitten/tests/notify.py (dokipen/bitten, BSD-3-Clause)
# -*- coding: utf-8 -*-
#
# Copyright (C) 2007 Ole Trenner, <ole@jayotee.de>
# All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution.
import unittest
from trac.db import DatabaseManager
from trac.test import EnvironmentStub, Mock
from trac.web.session import DetachedSession
from bitten.model import schema, Build, BuildStep, BuildLog
from bitten.notify import BittenNotify, BuildNotifyEmail
class BittenNotifyBaseTest(unittest.TestCase):
def setUp(self):
self.env = EnvironmentStub(enable=['trac.*', 'bitten.notify.*'])
repos = Mock(get_changeset=lambda rev: Mock(author='author', rev=rev))
self.env.get_repository = lambda authname=None: repos
db = self.env.get_db_cnx()
cursor = db.cursor()
connector, _ = DatabaseManager(self.env)._get_connector()
for table in schema:
for stmt in connector.to_sql(table):
cursor.execute(stmt)
db.commit()
class BittenNotifyTest(BittenNotifyBaseTest):
"""unit tests for BittenNotify dispatcher class"""
def setUp(self):
BittenNotifyBaseTest.setUp(self)
self.dispatcher = BittenNotify(self.env)
self.failed_build = Build(self.env, status=Build.FAILURE)
self.successful_build = Build(self.env, status=Build.SUCCESS)
def test_do_notify_on_failed_build(self):
self.set_option(BittenNotify.notify_on_failure, 'true')
self.assertTrue(self.dispatcher._should_notify(self.failed_build),
'notifier should be called for failed builds.')
def test_do_not_notify_on_failed_build(self):
self.set_option(BittenNotify.notify_on_failure, 'false')
self.assertFalse(self.dispatcher._should_notify(self.failed_build),
'notifier should not be called for failed build.')
def test_do_notify_on_successful_build(self):
self.set_option(BittenNotify.notify_on_success, 'true')
self.assertTrue(self.dispatcher._should_notify(self.successful_build),
'notifier should be called for successful builds when configured.')
def test_do_not_notify_on_successful_build(self):
self.set_option(BittenNotify.notify_on_success, 'false')
self.assertFalse(self.dispatcher._should_notify(self.successful_build),
'notifier should not be called for successful build.')
def set_option(self, option, value):
self.env.config.set(option.section, option.name, value)
class BuildNotifyEmailTest(BittenNotifyBaseTest):
"""unit tests for BittenNotifyEmail class"""
def setUp(self):
BittenNotifyBaseTest.setUp(self)
self.env.config.set('notification','smtp_enabled','true')
self.notifications_sent_to = []
def send(to, cc, hdrs={}):
self.notifications_sent_to = to
def noop(*args, **kw):
pass
self.email = Mock(BuildNotifyEmail, self.env,
begin_send=noop,
finish_send=noop,
send=send)
self.build = Build(self.env, status=Build.SUCCESS, rev=123)
def test_notification_is_sent_to_author(self):
self.email.notify(self.build)
self.assertTrue('author' in self.notifications_sent_to,
'Recipient list should contain the author')
# TODO functional tests of generated mails
def suite():
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(BittenNotifyTest, 'test'))
suite.addTest(unittest.makeSuite(BuildNotifyEmailTest, 'test'))
return suite
if __name__ == '__main__':
unittest.main(defaultTest='suite')
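The four dispatcher tests above toggle `notify_on_failure` / `notify_on_success` and check whether `_should_notify` fires. The decision they exercise reduces to a simple predicate; a standalone sketch of that logic (not Bitten's actual implementation):

```python
def should_notify(status, notify_on_failure, notify_on_success):
    # Notify on a failed build only if notify_on_failure is set,
    # and on a successful build only if notify_on_success is set.
    if status == "failure":
        return notify_on_failure
    return notify_on_success

checks = [
    should_notify("failure", True, False),   # failed build, option on
    should_notify("failure", False, True),   # failed build, option off
    should_notify("success", False, True),   # successful build, option on
    should_notify("success", True, False),   # successful build, option off
]
```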
# File: open_data/dataset/migrations/0005_keyword_squashed_0006_remove_keyword_relevancy.py (balfroim/OpenData, MIT)
# Generated by Django 3.2 on 2021-04-21 13:01
from django.db import migrations, models
class Migration(migrations.Migration):
replaces = [('dataset', '0005_keyword'), ('dataset', '0006_remove_keyword_relevancy')]
dependencies = [
('dataset', '0004_alter_theme_id'),
]
operations = [
migrations.CreateModel(
name='Keyword',
fields=[
('word', models.CharField(max_length=64, primary_key=True, serialize=False)),
('datasets', models.ManyToManyField(blank=True, related_name='keywords', to='dataset.ProxyDataset')),
],
),
]
# File: MachineLearning/potential_field.py (JhaAman/lihax, MIT)
#!/usr/bin/env python
import math, socket, struct, numpy as np, sys
# Get the gradient of the potential of an obstacle
# particle at (ox, oy) with the origin at (mx, my)
# Get the potential of an obstacle particle at (ox, oy)
# with the origin at (mx, my)
def potential(mx, my, ox, oy):
    return 1.0 / ((mx - ox)**2 + (my - oy)**2)**0.5
class PotentialField():
def __init__(self):
#socket initialization
self.host_ip = socket.gethostname()
self.receiving_port = 5510
self.sending_port = 6510
self.sockR = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.sockS = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.sockS.connect((self.host_ip, self.sending_port))
self.sockR.bind((self.host_ip, self.receiving_port))
# cumulative speed - used to build up momentum
self.speed_c = 0
def grad(self,dist, mx, my, ox, oy):
c = -1/((mx - ox)**2 + (my - oy)**2)**1.5
return c*(mx - ox), c*(my - oy)
# calculate the total gradient from an array of lidar ranges
# with origin at (my_x, my_y)
def calc_gradient(self, ranges, my_x, my_y):
gradient_x = 0 # sum of dU/dx
gradient_y = 0 # sum of dU/dy
# ignore the edges of the lidar FOV, usually noisy
for i in range(len(ranges) - 180, 180, -1):
r = ranges[i]
deg = -(270.0/1080) * i # convert index of range to degree of range
deg += 225 # lidar FOV starts at -45 deg
px = r * math.cos(math.radians(deg)) # convert from polar to x coord
py = r * math.sin(math.radians(deg)) # convert from polar to y coord
gx, gy = self.grad(r, my_x, my_y, px, py) # compute gradient at rectangular coordinates
# add point's gradient into sum
gradient_x += gx
gradient_y += gy
return (gradient_x, gradient_y)
# lidar subscriber callback
def receive_lidar(self, STEER_BIAS=0, PUSH_MULTIPLIER=19.7, STEER_GRAD_PROPORTION=20.0, SPEED_GRAD_PROPORTION=-0.001, MOMENTUM_MU=0.95, UPDATE_INFLUENCE=0.11, REVERSE_SPEED_MULTIPLIER=-2.3, MIN_SPEED_CLAMP=-0.9, MAX_SPEED_CLAMP=1.0):
while True:
packet = self.sockR.recvfrom(65565)[0]
ranges = struct.unpack("1080f", packet)
# compute gradient sums from lidar ranges
grad_x, grad_y = self.calc_gradient(ranges, 0, 0)
grad_x += STEER_BIAS * self.grad(0.1, 0, 0, 0.1, 0)[0]
# place repelling particle behind origin (the car) to
# push the car forward. 14 is a multiplier to give more push.
grad_y += PUSH_MULTIPLIER * self.grad(0.1, 0, 0, 0, -0.1)[1]
# magnitude of gradient (euclidian dist)
grad_magnitude = math.sqrt(grad_x**2 + grad_y**2)
# steering proportional to potential gradient w.r.t. x
steer = grad_x / STEER_GRAD_PROPORTION # OR? math.atan2(grad_x, grad_y)
# the speed update at this instance: proportional to gradient magnitude
# and sign depends of sign of gradient w.r.t y
speed = (SPEED_GRAD_PROPORTION * grad_magnitude * np.sign(grad_y))*100-194
# update the cumulative momentum using the speed update at this instance.
# speed_c is multiplied by some constant < 1 to simulate friction and
# speed is multiplied by some constant > 0 to determine the influence of the
# speed update at this instance.
self.speed_c = MOMENTUM_MU*self.speed_c + UPDATE_INFLUENCE * speed
# if speed is less than -1, clamp it. also, the steering is multiplied
# by a negative constant < -1 to make it back out in a way that
# orients the car in the direction it would want to turn if it were
# not too close.
speed_now = self.speed_c
if self.speed_c < 0:
if self.speed_c > -0.2:
speed_now = -0.7
steer *= REVERSE_SPEED_MULTIPLIER
# print("reversing")
if self.speed_c < MIN_SPEED_CLAMP:
speed_now = MIN_SPEED_CLAMP
elif self.speed_c > MAX_SPEED_CLAMP:
# if speed is greater than 1, clamp it
speed_now = MAX_SPEED_CLAMP
# create and publish drive message using steer and speed_c
# print "Speed: " + str(speed)
# print "Speed c: " + str(self.speed_c)
# print "Speed now: " + str(speed_now)
message = struct.pack("2f", speed_now, -steer)
self.sockS.send(message)
self.sockR.close()
self.sockS.close()
print "STOPPED!!!"
sys.exit(1)
pf = PotentialField()
pf.receive_lidar()
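The field computation above sums, over every lidar return, the gradient of a 1/r potential at the car's origin. A dependency-free sketch of that inner loop (the obstacle points and origin here are made up for illustration):

```python
def grad(mx, my, ox, oy):
    # Gradient of U = 1/dist at (mx, my) due to a particle at (ox, oy),
    # matching the grad() method in the class above.
    c = -1.0 / ((mx - ox) ** 2 + (my - oy) ** 2) ** 1.5
    return c * (mx - ox), c * (my - oy)

def total_gradient(points, mx=0.0, my=0.0):
    gx = sum(grad(mx, my, px, py)[0] for px, py in points)
    gy = sum(grad(mx, my, px, py)[1] for px, py in points)
    return gx, gy

# Two obstacles mirrored about the y axis: their x components cancel,
# while the y components add up, pushing away from the obstacles.
gx, gy = total_gradient([(1.0, 1.0), (-1.0, 1.0)])
```

In the full node, steering is then taken proportional to the x component and speed to the gradient magnitude.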
# File: decomp/c/gen.py (nihilus/epanos, MIT)
from itertools import imap, chain
from pycparser import c_generator, c_ast
from decomp import data, ida
from decomp.c import decl as cdecl, types as ep_ct
from decomp.cpu import ida as cpu_ida
XXX_INTRO_HACK = cpu_ida.ida_current_cpu().insns.support_header + '''
#include <stdint.h>
typedef union EPANOS_REG {
uint8_t u8;
int32_t i32;
uint32_t u32;
int64_t i64;
uint64_t u64;
float s;
double d;
} EPANOS_REG;
typedef struct EPANOS_ARGS {
EPANOS_REG v0;
EPANOS_REG v1;
EPANOS_REG a0;
EPANOS_REG a1;
EPANOS_REG a2;
EPANOS_REG a3;
EPANOS_REG a4;
EPANOS_REG a5;
EPANOS_REG a6;
EPANOS_REG a7;
EPANOS_REG f0;
EPANOS_REG f2;
EPANOS_REG f12;
EPANOS_REG f13;
EPANOS_REG f14;
EPANOS_REG f15;
EPANOS_REG f16;
EPANOS_REG f17;
EPANOS_REG f18;
EPANOS_REG f19;
} EPANOS_ARGS;
'''
gen_from_node = c_generator.CGenerator().visit
flatten = chain.from_iterable
def c_for_insn(ea, our_fns, extern_reg_map, stkvars):
while True:
(ea, c) = cpu_ida.ida_current_cpu().gen.fmt_insn(ea, our_fns, extern_reg_map, stkvars, from_delay=False)
yield c
if ea == ida.BADADDR:
break
def generate(ea, decl, our_fns, extern_reg_map, stkvar_map, stkvar_decls):
    '''ea_t -> c_ast() -> frozenset(str) -> {str : reg_sig} ->
    {str : {int : tinfo_t}} -> {str : [c_ast]} -> c_ast'''
try:
stkvars = stkvar_map[decl.name]
var_decls = stkvar_decls[decl.name]
except KeyError:
stkvars = {}
var_decls = []
start_ea = ida.get_func(ea).startEA
body = [XXX_STACKVAR_HACK()] + [var_decls] + [x for x in
c_for_insn(start_ea, our_fns, extern_reg_map, stkvars)]
funcdef = c_ast.FuncDef(decl, None, c_ast.Compound(flatten(body)))
return funcdef
def XXX_STACKVAR_HACK():
# XXX FIXME this will be going away once we've added elision of unnecessary
# stack variables (probably will just stick declarations into the AST)
regs = list(c_ast.Decl(x, [], [], [], c_ast.TypeDecl(x, [], c_ast.IdentifierType(['EPANOS_REG'])), None, None)
for x in
list('t%s' % str(n) for n in range(4, 8))
+ list('s%s' % str(n) for n in range(0, 8))
+ ['at', 't8', 't9', 'gp', 'sp', 'ra', 'fp', 'f1']
+ list('f%s' % str(n) for n in range(3, 12))
+ list('f%s' % str(n) for n in range(20, 32)))
regs += [c_ast.Decl('EPANOS_fp_cond', [], [], [], c_ast.TypeDecl('EPANOS_fp_cond', [], c_ast.IdentifierType(['int'])), None, None)]
return regs
def run(externs, our_fns, cpp_filter, cpp_all, decompile=True):
'''frozenset(str) -> frozenset(str) -> str -> str -> opt:bool -> [c_ast]'''
global OUR_FNS, EXTERN_REG_MAP, STKVAR_MAP # for repl convenience
OUR_FNS = our_fns
fn_segs = data.get_segs(['extern', '.text'])
rodata_segs = data.get_segs(['.rodata', '.srdata'])
data_segs = data.get_segs(['.data', '.bss'])
lit_segs = data.get_segs(['.lit4', '.lit8'])
num_lits = data.get_num_literals(lit_segs)
str_lits = data.get_str_literals(rodata_segs)
data_txt = data.get_data(data_segs, cpp_filter)
# XXX FIXME this will be going away once we've added emitting numeric and
# string constants directly at their site of use
if decompile is True:
for (k, v) in num_lits.iteritems():
ty = type(v)
if ty is ep_ct.cfloat:
print 'float %s = %s;' % (k, v)
elif ty is ep_ct.cdouble:
print 'double %s = %s;' % (k, v)
else:
raise Exception('o no')
for (k, v) in str_lits.iteritems():
print 'const char *%s = %s;' % (k, data.c_stringify(v))
protos = map(cdecl.make_internal_fn_decl, our_fns)
(lib_fns, tds) = data.get_fns_and_types(fn_segs, externs, cpp_all)
all_tds = {x.name: x for x in tds}
typedefs = cdecl.resolve_typedefs(all_tds)
EXTERN_REG_MAP = data.get_fn_arg_map(lib_fns, typedefs)
STKVAR_MAP = data.get_stkvars(our_fns)
stkvar_decls = data.make_stkvar_txt(our_fns, STKVAR_MAP, cpp_filter)
if decompile is True:
print XXX_INTRO_HACK
return gen_from_node(c_ast.FileAST(
data_txt +
protos +
list(generate(ida.loc_by_name(decl.name), decl, our_fns,
EXTERN_REG_MAP, STKVAR_MAP, stkvar_decls)
for decl in protos)))
else:
return
def repl_make_insn(ea, from_delay):
# for testing: print the C that will be generated from a line of assembly.
# note that if you ask for the ea of an insn in a delay slot, you get only
# that instruction; if you ask for a delayed instruction, you get both
try:
stkvars = STKVAR_MAP[ida.get_func_name(ea)]
except KeyError:
stkvars = {}
return list(gen_from_node(x) for x in
cpu_ida.ida_current_cpu().gen.fmt_insn(
ea, OUR_FNS, EXTERN_REG_MAP, stkvars, from_delay).c)
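`flatten = chain.from_iterable` above is used to splice the per-instruction statement lists into a single flat function body. A tiny illustration of that splice (the statement strings are made up):

```python
from itertools import chain

flatten = chain.from_iterable

# Each "instruction" contributes a list of statements; the function
# body is their concatenation, with empty contributions dropped.
per_insn = [["a = 1;"], ["b = a;", "c = b;"], []]
body = list(flatten(per_insn))
```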
# File: example/py/CHSHInequality/chsh_inequality.py (samn33/qlazy, Apache-2.0)
import random
from qlazy import QState
def classical_strategy(trials=1000):
win_cnt = 0
for _ in range(trials):
# random bits by Charlie (x,y)
x = random.randint(0,1)
y = random.randint(0,1)
# response by Alice (a)
a = 0
# response by Bob (b)
b = 0
# count up if win
if (x and y) == (a+b)%2:
win_cnt += 1
print("== result of classical strategy (trials:{0:d}) ==".format(trials))
print("* win prob. = ", win_cnt/trials)
def quantum_strategy(trials=1000):
win_cnt = 0
for _ in range(trials):
# random bits by Charlie (x,y)
x = random.randint(0,1)
y = random.randint(0,1)
# make entangled 2 qubits (one for Alice and another for Bob)
qs = QState(2).h(0).cx(0,1)
# response by Alice (a)
if x == 0:
# measurement of Z-basis (= Ry(0.0)-basis)
sa = qs.m([0], shots=1, angle=0.0, phase=0.0).lst
if sa == 0:
a = 0
else:
a = 1
else:
# measurement of X-basis (or Ry(0.5*PI)-basis)
sa = qs.mx([0], shots=1).lst
# sa = qs.m([0], shots=1, angle=0.5, phase=0.0).lst
if sa == 0:
a = 0
else:
a = 1
# response by Bob (b)
if y == 0:
# measurement of Ry(0.25*PI)-basis
sb = qs.m([1], shots=1, angle=0.25, phase=0.0).lst
if sb == 0:
b = 0
else:
b = 1
else:
# measurement of Ry(-0.25*PI)-basis
sb = qs.m([1], shots=1, angle=-0.25, phase=0.0).lst
if sb == 0:
b = 0
else:
b = 1
# count up if win
if (x and y) == (a+b)%2:
win_cnt += 1
print("== result of quantum strategy (trials:{0:d}) ==".format(trials))
print("* win prob. = ", win_cnt/trials)
if __name__ == '__main__':
classical_strategy()
quantum_strategy()
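With the measurement bases used above, each of the four (x, y) input pairs is won with probability cos²(π/8), so the quantum strategy's empirical win rate should approach about 0.854 (Tsirelson's bound for CHSH), while any classical strategy is capped at 0.75. A quick, qlazy-independent check of those numbers:

```python
import math

# Win probability of the quantum strategy with the Ry-rotated bases
# used above; the classical optimum is the constant-answer strategy.
quantum_win = math.cos(math.pi / 8) ** 2   # ~0.8536
classical_win = 0.75
advantage = quantum_win - classical_win
```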
# File: Murphi/ModularMurphi/TemplateClass.py (icsa-caps/HieraGen, MIT)
import inspect
import os
import re
class TemplateHandler:
# Constant definitions
tab = " "
nl = "\n"
sem = ";"
end = sem + nl
def __init__(self, template_dir: str):
self.templatepath = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) + \
"/../" + template_dir
####################################################################################################################
# REPLACE DYNAMIC
####################################################################################################################
def _openTemplate(self, filename):
return re.sub(r'^\#.*\n?', '', open(self.templatepath + "/" + filename, "r").read(), flags=re.MULTILINE)
def _stringReplKeys(self, refstring, replacekeys):
inputstr = refstring
for ind in range(0, len(replacekeys)):
inputstr = self._stringRepl(inputstr, ind, replacekeys[ind])
return inputstr
def _stringRepl(self, string, ind, keyword):
        return re.sub(r"\$" + str(ind) + r"\$", keyword, string)
def _addtabs(self, string, count):
tabstring = ""
for ind in range(0, count):
tabstring += self.tab
outstr = ""
for line in string.splitlines():
outstr += tabstring + line + self.nl
return outstr
@staticmethod
def _testInt(string):
try:
int(string)
return True
except ValueError:
return False
@staticmethod
def _testOperator(string):
if string.isalpha():
return True
return False
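`_stringReplKeys` above fills numbered `$0$`, `$1$`, … placeholders in a Murphi template with the supplied keywords. A standalone sketch of that replacement scheme (the template string here is illustrative):

```python
import re

def repl_keys(template, keys):
    # Replace each $i$ placeholder with keys[i], in index order,
    # as _stringReplKeys does via _stringRepl.
    out = template
    for i, key in enumerate(keys):
        out = re.sub(r"\$" + str(i) + r"\$", key, out)
    return out

rule = repl_keys('rule "$0$" $1$ ==> begin end;', ["SendReq", "state = I"])
```

Note that `re.sub` treats backslashes in the replacement string specially, so keys containing `\` would need escaping first.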
# File: setup.py (PaulDodd/pynomial, Apache-2.0)
import sys
from setuptools import setup, find_packages
setup(
name='pynomial',
version='0.0.0',
packages=find_packages(),
author='Paul M Dodd',
author_email='pdodd@umich.edu',
description="python package for combinatorial problems",
url="https://github.com/PaulDodd/pynomial.git",
install_requires=[], # install_requires or something else?
)
| 26.928571 | 64 | 0.70557 | 47 | 377 | 5.553191 | 0.765957 | 0.091954 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009554 | 0.167109 | 377 | 13 | 65 | 29 | 0.821656 | 0.092838 | 0 | 0 | 0 | 0 | 0.35503 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65489ab1059af5d3af74f3912af4a5da4c39124a | 383 | py | Python | las1.2.py | Theskill19/sweetpotato | 7cb46c412f400bcd51838db365038a766cf593cd | [
"CC0-1.0"
] | null | null | null | las1.2.py | Theskill19/sweetpotato | 7cb46c412f400bcd51838db365038a766cf593cd | [
"CC0-1.0"
] | null | null | null | las1.2.py | Theskill19/sweetpotato | 7cb46c412f400bcd51838db365038a766cf593cd | [
"CC0-1.0"
] | null | null | null | # 2. The user enters a time in seconds.
# Convert the time to hours, minutes and seconds and print it in hh:mm:ss format.
# Use string formatting.
time = int(input("Enter the time in seconds "))
hours = time // 3600
minutes = (time - hours * 3600) // 60
seconds = time - (hours * 3600 + minutes * 60)
print(f"Time in hh:mm:ss format {hours} : {minutes} : {seconds}") | 42.555556 | 75 | 0.681462 | 55 | 383 | 4.745455 | 0.545455 | 0.091954 | 0.10728 | 0.091954 | 0.10728 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055016 | 0.193211 | 383 | 9 | 76 | 42.555556 | 0.789644 | 0.383812 | 0 | 0 | 0 | 0 | 0.367257 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
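The same hours/minutes/seconds split can also be written with `divmod`, which yields quotient and remainder in one call — a minimal alternative sketch:

```python
def hhmmss(total_seconds):
    # divmod(a, b) returns (a // b, a % b)
    hours, rest = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(hhmmss(3725))  # → 01:02:05
```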
654b5be42b94507090bb99be14ad14d6bad404c8 | 427 | py | Python | src/bananas/drf/errors.py | beshrkayali/django-bananas | 8e832ca91287c5b3eed5af8de948c67fd026c4b9 | [
"MIT"
] | 26 | 2015-04-07T12:18:26.000Z | 2021-07-23T18:05:52.000Z | src/bananas/drf/errors.py | beshrkayali/django-bananas | 8e832ca91287c5b3eed5af8de948c67fd026c4b9 | [
"MIT"
] | 55 | 2016-10-25T08:13:50.000Z | 2022-03-04T12:53:24.000Z | src/bananas/drf/errors.py | beshrkayali/django-bananas | 8e832ca91287c5b3eed5af8de948c67fd026c4b9 | [
"MIT"
] | 16 | 2015-10-13T10:11:59.000Z | 2021-11-11T12:30:32.000Z | from rest_framework import status
from rest_framework.exceptions import APIException
class PreconditionFailed(APIException):
status_code = status.HTTP_412_PRECONDITION_FAILED
default_detail = "An HTTP precondition failed"
default_code = "precondition_failed"
class BadRequest(APIException):
status_code = status.HTTP_400_BAD_REQUEST
default_detail = "Validation failed"
default_code = "bad_request"
| 28.466667 | 53 | 0.800937 | 49 | 427 | 6.653061 | 0.44898 | 0.165644 | 0.104294 | 0.171779 | 0.196319 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016438 | 0.145199 | 427 | 14 | 54 | 30.5 | 0.876712 | 0 | 0 | 0 | 0 | 0 | 0.173302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e8da2871c87cf2076690059ef9b906d674072b5a | 846 | py | Python | setup.py | stshrive/pycense | 5bfd1b7b6b326a5592f58d621ee596c6c1d8a490 | [
"MIT"
] | null | null | null | setup.py | stshrive/pycense | 5bfd1b7b6b326a5592f58d621ee596c6c1d8a490 | [
"MIT"
] | 5 | 2018-09-15T23:40:11.000Z | 2018-10-05T22:57:13.000Z | setup.py | stshrive/pycense | 5bfd1b7b6b326a5592f58d621ee596c6c1d8a490 | [
"MIT"
] | 1 | 2018-10-04T23:43:42.000Z | 2018-10-04T23:43:42.000Z | import os
import setuptools
VERSION = "1.0.0a1+dev"
INSTALL_REQUIRES = [
'pip-licenses',
]
CLASSIFIERS = [
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'Intended Audience :: Science/Research'
]
def read(fname):
return open(os.path.join(os.path.dirname(__file__), fname)).read()
setuptools.setup(
name='pycense',
version=VERSION,
description='Python package license inspector.',
long_description=read('README.md'),
license='MIT',
author='Microsoft Corporation',
author_email='stshrive@microsoft.com', # TODO: not one person :)
url='https://github.com/stshrive/pycense',
zip_safe=True,
classifiers=CLASSIFIERS,
entry_points = {'console_scripts': ['pycense=pycense.__main__:__main__']},
packages=['pycense',],
install_requires=INSTALL_REQUIRES
)
| 24.171429 | 78 | 0.682033 | 96 | 846 | 5.802083 | 0.677083 | 0.08079 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011348 | 0.166667 | 846 | 34 | 79 | 24.882353 | 0.778723 | 0.027187 | 0 | 0 | 0 | 0 | 0.380488 | 0.067073 | 0 | 0 | 0 | 0.029412 | 0 | 1 | 0.035714 | false | 0 | 0.071429 | 0.035714 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e8e227fac1aeb6cb15d17a60f96b91194af13f7f | 639 | py | Python | api/cueSearch/migrations/0005_searchcardtemplate_connectiontype.py | cuebook/CueSearch | 8bf047de273b27bba41b8bf4e266aac1eee7f81a | [
"Apache-2.0"
] | 3 | 2022-02-10T17:00:19.000Z | 2022-03-29T14:31:25.000Z | api/cueSearch/migrations/0005_searchcardtemplate_connectiontype.py | cuebook/CueSearch | 8bf047de273b27bba41b8bf4e266aac1eee7f81a | [
"Apache-2.0"
] | null | null | null | api/cueSearch/migrations/0005_searchcardtemplate_connectiontype.py | cuebook/CueSearch | 8bf047de273b27bba41b8bf4e266aac1eee7f81a | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.2.5 on 2022-02-18 08:20
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
("dataset", "0001_initial"),
("cueSearch", "0004_auto_20220217_0217"),
]
operations = [
migrations.AddField(
model_name="searchcardtemplate",
name="connectionType",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
to="dataset.connectiontype",
),
),
]
| 24.576923 | 61 | 0.57277 | 62 | 639 | 5.790323 | 0.693548 | 0.066852 | 0.077994 | 0.122563 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08046 | 0.319249 | 639 | 25 | 62 | 25.56 | 0.744828 | 0.070423 | 0 | 0.105263 | 1 | 0 | 0.177365 | 0.076014 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e8e29b7b8972a06149daa7c111affc519cbc1ff4 | 629 | py | Python | FormulaAccordingPrint.py | FreeBirdsCrew/Brainstorming_Codes | 9d06216cd0772ce56586acff2c240a210b94ba1f | [
"Apache-2.0"
] | 1 | 2020-12-11T10:24:08.000Z | 2020-12-11T10:24:08.000Z | FormulaAccordingPrint.py | FreeBirdsCrew/Brainstorming_Codes | 9d06216cd0772ce56586acff2c240a210b94ba1f | [
"Apache-2.0"
] | null | null | null | FormulaAccordingPrint.py | FreeBirdsCrew/Brainstorming_Codes | 9d06216cd0772ce56586acff2c240a210b94ba1f | [
"Apache-2.0"
] | null | null | null | """
Write a program that calculates and prints the value according to the given formula:
Q = Square root of [(2 * C * D)/H]
Following are the fixed values of C and H:
C is 50. H is 30.
D is the variable whose values should be input to your program in a comma-separated sequence.
Example
Let us assume the following comma separated input sequence is given to the program:
100,150,180
The output of the program should be:
18,22,24
"""
import math
c = 50
h = 30
value = []
items = [x for x in input().split(',')]
for d in items:
    value.append(str(int(round(math.sqrt(2 * c * float(d) / h)))))
print(','.join(value)) | 28.590909 | 94 | 0.694754 | 116 | 629 | 3.758621 | 0.551724 | 0.022936 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049702 | 0.200318 | 629 | 22 | 95 | 28.590909 | 0.817097 | 0 | 0 | 0 | 0 | 0 | 0.011299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
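The computation described in the docstring can be packaged as a reusable function (the function name and defaults are illustrative):

```python
import math

def q_values(d_values, c=50, h=30):
    # Q = sqrt((2 * C * D) / H), rounded to the nearest integer
    return [int(round(math.sqrt(2 * c * d / h))) for d in d_values]

print(','.join(str(q) for q in q_values([100, 150, 180])))  # → 18,22,24
```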
e8e693322ca4748ac11e7ad6f26ec9749c3ce95e | 904 | py | Python | Python Notebook/Python files/data_utility.py | wilfy9249/Capstone-Fall-18 | 832632eb00a10240e0ad16c364449d5020814c83 | [
"MIT"
] | 2 | 2018-10-24T21:32:17.000Z | 2019-02-19T21:15:29.000Z | Python Notebook/Python files/data_utility.py | wilfy9249/Capstone-Fall-18 | 832632eb00a10240e0ad16c364449d5020814c83 | [
"MIT"
] | null | null | null | Python Notebook/Python files/data_utility.py | wilfy9249/Capstone-Fall-18 | 832632eb00a10240e0ad16c364449d5020814c83 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
# In[1]:
import numpy as np
import pandas as pd
import os
# In[2]:
# function to list the contents of the parent directory
def getCurrentDirectory():
    listDirectory = os.listdir('../')
    return listDirectory
# In[3]:
#function to read csv file
def readCsvFile(path):
crimes_original = pd.read_csv(path, low_memory=False)
return crimes_original
# In[4]:
#function to filter Data
def filterData(data,column,value):
filterData = data.loc[data[column] == value]
return filterData
# In[5]:
#function to get count of a value
def getCount(data,column,columnName):
data_count = pd.DataFrame({columnName:data.groupby(column).size()}).reset_index()
return data_count
# In[7]:
#function to sort
def sortValue(data,column,ascBoolean):
sorted_data = data.sort_values(column,ascending = ascBoolean)
return sorted_data
# In[ ]:
| 14.580645 | 85 | 0.692478 | 124 | 904 | 4.967742 | 0.508065 | 0.081169 | 0.042208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00955 | 0.189159 | 904 | 61 | 86 | 14.819672 | 0.830832 | 0.234513 | 0 | 0 | 0 | 0 | 0.004451 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0 | 0.166667 | 0 | 0.722222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e8ec71dfe68f78e0bbd64c46510e470c4242fa2e | 1,228 | py | Python | aiphysim/models/spacetime.py | perovai/deepkoopman | eb6de915f5ea1f20b47cb3a22a384f55c30f0558 | [
"MIT"
] | null | null | null | aiphysim/models/spacetime.py | perovai/deepkoopman | eb6de915f5ea1f20b47cb3a22a384f55c30f0558 | [
"MIT"
] | 10 | 2021-07-07T09:24:33.000Z | 2021-09-27T14:32:59.000Z | aiphysim/models/spacetime.py | perovai/deepkoopman | eb6de915f5ea1f20b47cb3a22a384f55c30f0558 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
class SpaceTime(nn.Module):
def __init__(self, opts):
# TODO: Add things like no. of hidden layers to opts
pass
class LSTM(nn.Module):
# This class is largely derived from
# https://stackabuse.com/time-series-prediction-using-lstm-with-pytorch-in-python on 20210701.
def __init__(self, input_size=2, hidden_layer_size=100, output_size=2):
# param input_size: number of components in input vector
# param output_size: number of components in output vector
# param hidden_layer_size: number of components in hidden layer
super().__init__()
self.hidden_layer_size = hidden_layer_size
self.lstm = nn.LSTM(input_size, hidden_layer_size)
self.linear = nn.Linear(hidden_layer_size, output_size)
self.hidden_cell = (
torch.zeros(1, 1, self.hidden_layer_size),
torch.zeros(1, 1, self.hidden_layer_size),
)
def forward(self, input_seq):
lstm_out, self.hidden_cell = self.lstm(
input_seq.view(len(input_seq), 1, -1), self.hidden_cell
)
predictions = self.linear(lstm_out.view(len(input_seq), -1))
return predictions[-1]
| 34.111111 | 98 | 0.666124 | 173 | 1,228 | 4.479769 | 0.358382 | 0.127742 | 0.154839 | 0.085161 | 0.273548 | 0.08 | 0.08 | 0.08 | 0 | 0 | 0 | 0.022436 | 0.237785 | 1,228 | 35 | 99 | 35.085714 | 0.805556 | 0.286645 | 0 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028571 | 0 | 1 | 0.142857 | false | 0.047619 | 0.095238 | 0 | 0.380952 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e8ed02dac89d480ead9705b1ad919290dfc731c8 | 934 | py | Python | fabfile/text.py | nprapps/austin | 45237e878260678bbeb57801e798b89e67ad4e0b | [
"MIT"
] | 7 | 2015-01-26T16:02:49.000Z | 2015-04-01T12:37:52.000Z | fabfile/text.py | nprapps/austin | 45237e878260678bbeb57801e798b89e67ad4e0b | [
"MIT"
] | 272 | 2015-01-26T16:37:22.000Z | 2016-04-04T17:08:55.000Z | fabfile/text.py | nprapps/austin | 45237e878260678bbeb57801e798b89e67ad4e0b | [
"MIT"
] | 4 | 2015-03-05T00:38:17.000Z | 2021-02-23T10:26:28.000Z | #!/usr/bin/env python
"""
Commands related to syncing copytext from Google Docs.
"""
from fabric.api import task
from termcolor import colored
import app_config
from etc.gdocs import GoogleDoc
@task(default=True)
def update():
"""
Downloads a Google Doc as an Excel file.
"""
if app_config.COPY_GOOGLE_DOC_URL is None:
    print(colored('You have set COPY_GOOGLE_DOC_URL to None. If you want to use a Google Sheet, set COPY_GOOGLE_DOC_URL to the URL of your sheet in app_config.py', 'blue'))
    return
else:
doc = {}
url = app_config.COPY_GOOGLE_DOC_URL
if 'key' in url:
bits = url.split('key=')
bits = bits[1].split('&')
doc['key'] = bits[0]
else:
bits = url.split('/d/')
bits = bits[1].split('/')
doc['key'] = bits[0]
g = GoogleDoc(**doc)
g.get_auth()
g.get_document()
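The branching above, which pulls the document key out of either URL shape, can be factored into a small helper — a sketch using a made-up key:

```python
def extract_doc_key(url):
    # Old-style URLs carry the key in a ?key=... query parameter;
    # new-style URLs embed it in the /d/<key>/ path segment.
    if 'key' in url:
        return url.split('key=')[1].split('&')[0]
    return url.split('/d/')[1].split('/')[0]

print(extract_doc_key('https://docs.google.com/spreadsheets/d/abc123/edit'))  # → abc123
```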
| 24.578947 | 175 | 0.586724 | 133 | 934 | 3.984962 | 0.473684 | 0.084906 | 0.098113 | 0.120755 | 0.267925 | 0.267925 | 0.09434 | 0.09434 | 0 | 0 | 0 | 0.006079 | 0.295503 | 934 | 37 | 176 | 25.243243 | 0.799392 | 0.021413 | 0 | 0.173913 | 0 | 0.043478 | 0.206549 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.173913 | null | null | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e8f5e5aaaf237abae1b7ee4f6b5a71282972a181 | 428 | py | Python | snapmerge/home/migrations/0010_project_email.py | R4356th/smerge | 2f2a6a4acfe3903ed4f71d90537f7277248e8b59 | [
"MIT"
] | 13 | 2018-07-16T09:59:55.000Z | 2022-01-27T19:07:17.000Z | snapmerge/home/migrations/0010_project_email.py | R4356th/smerge | 2f2a6a4acfe3903ed4f71d90537f7277248e8b59 | [
"MIT"
] | 55 | 2018-07-16T12:17:58.000Z | 2022-03-17T16:10:30.000Z | snapmerge/home/migrations/0010_project_email.py | R4356th/smerge | 2f2a6a4acfe3903ed4f71d90537f7277248e8b59 | [
"MIT"
] | 4 | 2019-10-10T20:16:49.000Z | 2021-03-12T07:15:50.000Z | # Generated by Django 2.0.1 on 2018-07-26 12:51
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('home', '0009_auto_20180726_1214'),
]
operations = [
migrations.AddField(
model_name='project',
name='email',
field=models.EmailField(blank=True, max_length=254, null=True, verbose_name='Email'),
),
]
| 22.526316 | 97 | 0.61215 | 49 | 428 | 5.22449 | 0.816327 | 0.070313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10828 | 0.266355 | 428 | 18 | 98 | 23.777778 | 0.707006 | 0.10514 | 0 | 0 | 1 | 0 | 0.115486 | 0.060367 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e8fb499b8bff3a7d1f52fd19a5495fe47b6684e5 | 2,165 | py | Python | bot/lib/controller/GetCommentTask.py | nullwriter/ig-actor | a089107657ccdf11ba213160c4cc5d3690cecd76 | [
"MIT"
] | null | null | null | bot/lib/controller/GetCommentTask.py | nullwriter/ig-actor | a089107657ccdf11ba213160c4cc5d3690cecd76 | [
"MIT"
] | null | null | null | bot/lib/controller/GetCommentTask.py | nullwriter/ig-actor | a089107657ccdf11ba213160c4cc5d3690cecd76 | [
"MIT"
] | null | null | null | import time
import re
from FileLogger import FileLogger as FL
import datetime
class GetCommentTask:
def __init__(self, task, name="extract-comment"):
self.task = task
self.name = name
self.comments = []
self.log = FL('Extracted Comments {:%Y-%m-%d %H:%M:%S}.txt'.format(datetime.datetime.now()))
def init_task(self):
hash_index = 0
loop = True
max_index = len(self.task.hashtags)
next_max_id = ""
while loop:
self.task.check_ops_limit()
current_hash = self.task.hashtags[hash_index]
self.task.api.getHashtagFeed(current_hash, maxid=next_max_id)
print("")
print("CURRENT HASHTAG = " + current_hash)
print("")
ig_media = self.task.api.LastJson
if "next_max_id" not in ig_media:
print("####### Changing hashtag #######")
hash_index += 1
next_max_id = ""
if hash_index >= max_index - 1:
break
else:
next_max_id = self.do_task(ig_media)
def do_task(self, ig_media):
last_max_id = ig_media['next_max_id']
if "ranked_items" in ig_media:
key = "ranked_items"
else:
key = "items"
for ig in ig_media[key]:
self.task.api.getMediaComments(ig["id"])
for c in reversed(self.task.api.LastJson['comments']):
txt = c['text']
if self.check_string(txt):
self.comments.append(txt)
print("Comment = " + txt)
self.log.add_to_file(txt=txt)
self.task.task_count += 1
time.sleep(1)
time.sleep(self.task.get_time_delay())
return last_max_id
def check_string(self, text):
    """Check that the string doesn't contain special non-English characters, '@', or 'Follow Me'."""
    pattern = re.compile(r"^(?!follow|followme)[\s\w\d\?><;,\{\}\[\]\-_\+=!\#\$%^&\*\|\']*$")
    return pattern.match(text)
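The negative lookahead in `check_string` rejects comments that start with "follow"/"followme", while the character class whitelists ordinary words and punctuation — a small standalone demonstration (the sample comments are made up):

```python
import re

pattern = re.compile(r"^(?!follow|followme)[\s\w?><;,{}\[\]\-_+=!#$%^&*|']*$")

print(bool(pattern.match("nice photo 123")))  # → True
print(bool(pattern.match("follow me pls")))   # → False (lookahead rejects it)
print(bool(pattern.match("@user check dm")))  # → False ('@' is not allowed)
```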
| 28.116883 | 100 | 0.525635 | 257 | 2,165 | 4.229572 | 0.385214 | 0.080957 | 0.049678 | 0.034959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004916 | 0.342263 | 2,165 | 76 | 101 | 28.486842 | 0.758427 | 0 | 0 | 0.113208 | 0 | 0 | 0.127299 | 0.030978 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.075472 | null | null | 0.09434 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e8fc53ef376367f7c8b17273ac9b30e8bcf26788 | 4,981 | py | Python | scripts/mc_counting_same_origin.py | jonassagild/Track-to-Track-Fusion | 6bb7fbe6a6e2d9a2713c47f211899226485eee79 | [
"MIT"
] | 4 | 2021-06-16T19:33:56.000Z | 2022-03-14T06:47:41.000Z | scripts/mc_counting_same_origin.py | jonassagild/Track-to-Track-Fusion | 6bb7fbe6a6e2d9a2713c47f211899226485eee79 | [
"MIT"
] | 2 | 2021-06-08T16:18:45.000Z | 2021-11-25T09:38:08.000Z | scripts/mc_counting_same_origin.py | jonassagild/Track-to-Track-Fusion | 6bb7fbe6a6e2d9a2713c47f211899226485eee79 | [
"MIT"
] | 4 | 2020-09-28T04:54:17.000Z | 2021-10-15T15:58:38.000Z | """
Script to run MC sims on the three association techniques when the tracks' origins are equal. Used to calculate the
total number of correctly associated tracks and the total number of tracks falsely not associated from the same target.
"""
import numpy as np
from stonesoup.types.state import GaussianState
from data_association.CountingAssociator import CountingAssociator
from data_association.bar_shalom_hypothesis_associators import HypothesisTestDependenceAssociator, \
HypothesisTestIndependenceAssociator
from trackers.kf_dependent_fusion_async_sensors import KalmanFilterDependentFusionAsyncSensors
from utils import open_object
from utils.scenario_generator import generate_scenario_3
start_seed = 0
end_seed = 5 # normally 500
num_mc_iterations = end_seed - start_seed
# params
save_fig = False
# scenario parameters
sigma_process_list = [0.3] # [0.05, 0.05, 0.05, 0.5, 0.5, 0.5, 3, 3, 3]
sigma_meas_radar_list = [50] # [5, 30, 200, 5, 30, 200, 5, 30, 200]
sigma_meas_ais_list = [10] # [10] * 9
radar_meas_rate = 1 # relevant radar meas rates: 1
ais_meas_rate_list = [6] # relevant AIS meas rates: 2 - 12
timesteps = 200
# associator params
association_distance_threshold = 10
consecutive_hits_confirm_association = 3
consecutive_misses_end_association = 2
# dicts to store final results for printing in a latex friendly way
Pc_overall = {} # Pc is the percentage of correctly associating tracks that originate from the same target
something_else_overall = {}
stats = []
for sigma_process, sigma_meas_radar, sigma_meas_ais, ais_meas_rate in zip(sigma_process_list, sigma_meas_radar_list,
sigma_meas_ais_list, ais_meas_rate_list):
for seed in range(start_seed, end_seed):
# generate scenario
generate_scenario_3(seed=seed, permanent_save=False, radar_meas_rate=radar_meas_rate,
ais_meas_rate=ais_meas_rate, sigma_process=sigma_process,
sigma_meas_radar=sigma_meas_radar, sigma_meas_ais=sigma_meas_ais,
timesteps=timesteps)
folder = "temp" # temp instead of seed, as it is not a permanent save
# load ground truth and the measurements
data_folder = "../scenarios/scenario3/" + folder + "/"
ground_truth = open_object.open_object(data_folder + "ground_truth.pk1")
measurements_radar = open_object.open_object(data_folder + "measurements_radar.pk1")
measurements_ais = open_object.open_object(data_folder + "measurements_ais.pk1")
# load start_time
start_time = open_object.open_object(data_folder + "start_time.pk1")
# prior
initial_covar = np.diag([sigma_meas_radar * sigma_meas_ais, sigma_meas_radar * sigma_process,
sigma_meas_radar * sigma_meas_ais, sigma_meas_radar * sigma_process]) ** 2
prior = GaussianState([1, 1.1, -1, 0.9], initial_covar, timestamp=start_time)
kf_dependent_fusion = KalmanFilterDependentFusionAsyncSensors(start_time, prior,
sigma_process_radar=sigma_process,
sigma_process_ais=sigma_process,
sigma_meas_radar=sigma_meas_radar,
sigma_meas_ais=sigma_meas_ais)
tracks_fused_dependent, tracks_radar, tracks_ais = kf_dependent_fusion.track_async(
start_time, measurements_radar, measurements_ais, fusion_rate=1)
# use the CountingAssociator to evaluate whether the tracks are associated
associator = CountingAssociator(association_distance_threshold, consecutive_hits_confirm_association,
consecutive_misses_end_association)
num_correct_associations = 0
num_false_mis_associations = 0
for i in range(1, len(tracks_radar)):
# use the associator to check the association
associated = associator.associate_tracks(tracks_radar[:i], tracks_ais[:i])
if associated:
num_correct_associations += 1
else:
num_false_mis_associations += 1
# save the number of correct associations and false mis associations in a dict
stats_individual = {'seed': seed, 'num_correct_associations': num_correct_associations,
'num_false_mis_associations': num_false_mis_associations}
stats.append(stats_individual)
# todo count the number of associations that turn out to be correct
# calc the #correct_associations and #false_mis_associations
tot_num_correct_associations = sum([stat['num_correct_associations'] for stat in stats])
tot_num_false_mis_associations = sum([stat['num_false_mis_associations'] for stat in stats])
print("")
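The counting rule behind `CountingAssociator` — confirm after enough consecutive close distances, end after enough consecutive misses — can be sketched as follows (the real class's API differs; this only illustrates the rule, using the thresholds configured above):

```python
def counting_associate(distances, threshold=10, confirm_hits=3, end_misses=2):
    hits = misses = 0
    associated = False
    for d in distances:
        if d < threshold:
            hits += 1
            misses = 0
            if hits >= confirm_hits:
                associated = True  # confirmed after enough consecutive hits
        else:
            misses += 1
            hits = 0
            if misses >= end_misses:
                associated = False  # dropped after enough consecutive misses
    return associated

print(counting_associate([2, 3, 1, 50, 4]))  # → True
```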
| 49.81 | 116 | 0.682995 | 608 | 4,981 | 5.264803 | 0.273026 | 0.056232 | 0.04811 | 0.053421 | 0.20806 | 0.1612 | 0.11059 | 0.084349 | 0.072477 | 0.072477 | 0 | 0.022402 | 0.256173 | 4,981 | 99 | 117 | 50.313131 | 0.841565 | 0.20779 | 0 | 0 | 0 | 0 | 0.052094 | 0.037028 | 0 | 0 | 0 | 0.010101 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.015873 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3302f95944549893e6c718830b8f06c614895c10 | 8,700 | py | Python | Python/cs611python.py | david145/CS6112018 | 7a74c239bf5157507594157b5871c9d0c70fcc23 | [
"MIT"
] | null | null | null | Python/cs611python.py | david145/CS6112018 | 7a74c239bf5157507594157b5871c9d0c70fcc23 | [
"MIT"
] | 1 | 2018-10-29T17:41:08.000Z | 2018-10-29T17:41:08.000Z | Python/cs611python.py | david145/CS6112018 | 7a74c239bf5157507594157b5871c9d0c70fcc23 | [
"MIT"
] | null | null | null | print("\n")
print("PythonExercises-v2 by David Bochan")
print("\n")
print("=== EXERCISE 1 ===")
print("\n")
print("(a) 5 / 3 = " + str(5 / 3))
print("=> with python3 you can receive a float even if you divide two \
integers")
print("\n")
print("(b) 5 % 3 = " + str(5 % 3))
print("=> % is the modulus which divides left hand operand by right hand \
operand and returns remainder")
print("\n")
print("(c) 5.0 / 3 = " + str(5.0 / 3))
print("=> outputs a float number.. there is no difference if a plain 5 or 5.0 \
is used")
print("\n")
print("(d) 5 / 3.0 = " + str(5 / 3.0))
print("=> outputs a float number.. there is no difference if a plain 3 or 3.0 \
is used")
print("\n")
print("(e) 5.2 % 3 = " + str(5.2 % 3))
print("=> % is the modulus which divides left hand operand by right hand \
operand and returns remainder")
print("\n")
print("=== EXERCISE 2 ===")
print("\n")
print("(a) 2000.3 ** 200 = ...")
try:
print(str(2000.3 ** 200))
except OverflowError as e:
print("=> The python3 interpreter throws a OverflowError " + str(e))
print("\n")
print("(b) 1.0 + 1.0 - 1.0 = " + str(1.0 + 1.0 - 1.0))
print("=> Addition and substraction of float values which results in another \
float value")
print("\n")
print("(c) 1.0 + 1.0e20 - 1.0e20 = " + str(1.0 + 1.0e20 - 1.0e20))
print("=> 1.0 + 1.0e20 is rounded as close as possible, which is 1.0e20 and \
after substraction of it again it results in 0.0")
print("\n")
print("=== EXERCISE 3 ===")
print("\n")
print("(a) float(123) = " + str(float(123)))
print("=> Takes the integer value 123 as input and casts it to the float \
value 123.0")
print("\n")
print("(b) float('123') = " + str(float('123')))
print("=> Takes the string '123' as input and casts it to the float value \
123.0")
print("\n")
print("(c) float('123.23') = " + str(float('123.23')))
print("=> Takes the string '123.23' as input and casts it to the float value \
123.23")
print("\n")
print("(d) int(123.23) = " + str(int(123.23)))
print("=> Takes the float 123.23 as input and casts it to the integer value \
123")
print("\n")
print("(e) int('123.23') = ...")
try:
int('123.23')
except ValueError as e:
print("=> The int() function can't parse a string containing a decimal point \
and thus throws a ValueError (" + str(e) + ")")
print("\n")
print("(f) int(float('123.23')) = " + str(int(float(123.23))))
print("=> As we cast the string to float first, we can use it as a input to \
the int() function and receive a integer")
print("\n")
print("(g) str(12) = " + str(12))
print("=> Takes the integer 12 as input and casts it to the string '12'")
print("\n")
print("(h) str(12.2) = " + str(12.2))
print("=> Takes the float 12.2 as input and casts it to the string '12.2'")
print("\n")
print("(i) bool('a') = " + str(bool('a')))
print("=> Because an actual value (the character 'a') is passed to the bool() \
function, True is returned")
print("\n")
print("(j) bool(0) = " + str(bool(0)))
print("=> The boolean value False equals 0 in python, thus False is returned")
print("\n")
print("(k) bool(0.1) = " + str(bool(0.1)))
print("=> Because a value != 0 is provided in the bool() function, \
it returns True")
print("\n")
print("=== EXERCISE 4 ===")
print("\n")
print("range(5) = {}".format(range(5)))
print("=> range(5) returns a sequence of integers from 0 to 4. for i in \
range(5) is consequently iterating over the sequence of integers")
print("\n")
print("type(range(5)) = {}".format(type(range(5))))
print("=> The type function returns an object's class. For range(5) the class \
range is returned")
print("\n")
print("=== EXERCISE 5 ===")
print("\n")
def div_by_number(numbers_list, max_found):
number_found = 0
x = 1
while number_found < max_found:
for number in numbers_list:
if x % number == 0:
print(x)
number_found = number_found + 1
x = x + 1
numbers_list = [5, 7, 11]
print("div_by_number({}, 20)\n".format(numbers_list))
div_by_number(numbers_list, 20)
print("\n")
print("=== EXERCISE 6 ===")
print("\n")
print("(a) & (b)\n")
def is_prime(n):
if n <= 3:
return n > 1
elif n % 2 == 0 or n % 3 == 0:
return False
i = 5
while i * i <= n:
if n % i == 0 or n % (i + 2) == 0:
return False
i = i + 6
return True
print("is_prime(0) = {}\n".format(is_prime(0)))
print("is_prime(1) = {}\n".format(is_prime(1)))
print("is_prime(3) = {}\n".format(is_prime(3)))
print("is_prime(7) = {}\n".format(is_prime(7)))
print("is_prime(8) = {}\n".format(is_prime(8)))
print("is_prime(112331) = {}".format(is_prime(112331)))
def primes_up_to(n):
primes = []
for i in range(0, n):
if is_prime(i):
primes.append(i)
return primes
print("\n(c) primes_up_to(100) = {}".format(primes_up_to(100)))
def first_primes(n):
primes = []
i = 0
while len(primes) < n:
if is_prime(i):
primes.append(i)
i = i + 1
return primes
print("\n(d) first_primes(12) = {}".format(first_primes(12)))
print("\n")
print("=== EXERCISE 7 ===")
print("\n")
print("(a) print_elements(elements_list)\n")
def print_elements(elements):
for element in elements:
print(element)
elements_list = [12, "abc", 92.2, "hello"]
print_elements(elements_list)
print("\n(b) print_elements_reverse(elements_list)\n")
def print_elements_reverse(elements):
for element in elements[::-1]:
print(element)
print_elements_reverse(elements_list)
print("\n(c) len_elements(elements_list)\n")
def len_elements(elements):
count = 0
for _ in elements:
count = count + 1
return count
print("len_elements(elements_list) = {}".format(len_elements(elements_list)))
print("\n")
print("=== EXERCISE 8 ===")
a = [12, "abc", 92.2, "hello"]
print("\n")
print("(a) a = {}".format(a))
print("\n(b) b = a")
b = a
print("\n(c) b[1] = 'changed'")
b[1] = "changed"
print("\n(d) a = {}".format(a))
print("=> b is bound to the same object as a, so when b[1] was changed \
a[1] also shows the change")
print("\n(e) c = a[:]")
c = a[:]
print("\n(f) c[2] = 'also changed'")
c[2] = "also changed"
print("\n(g) a = {}".format(a))
print("=> A copy of the list a was created with a[:] and assigned to c, thus \
a[2] did not change when c[2] changed")
def set_first_elem_to_zero(l):
if len(l) > 0:
l[0] = 0
return l
numbers = [12, 21, 214, 3]
print("\n...")
print("\nnumbers = {}".format(numbers))
print("set_first_elem_to_zero(numbers) = \
{}".format(set_first_elem_to_zero(numbers)))
print("numbers = {}".format(numbers))
print("=> The original list also changed, even though we did not assign \
the returned list to it (same binding)")
print("\n")
print("=== EXERCISE 9 ===")
elements = [[1,3], [3,6]]
print("\n")
print("elements = {}".format(elements))
flat_list = lambda l: [element for sublist in l for element in sublist]
print("flat_list(elements) = {}".format(flat_list(elements)))
print("\n")
print("=== EXERCISE 10 ===")
import matplotlib.pyplot as plt
import numpy as np
t = np.arange(0.0, 2.0, 0.01)
s = np.sin(t - 2) ** 2 * np.e ** (-t ** 2)
fig, ax = plt.subplots()
ax.plot(t, s)
ax.set(xlabel='x', ylabel='y',
title='Exercise 10')
plt.show()
print("\n")
print("See Figure_1.png")
print("\n")
print("=== EXERCISE 11 ===")
def product_iteration(numbers):
product = 0
if len(numbers) > 0:
product = numbers.pop()
for number in numbers:
product = product * number
return product
from functools import reduce
def product_recursive(numbers):
if len(numbers) > 0:
return reduce((lambda x, y: x * y), numbers)
else:
return 0
numbers = [21, 12, 10, 128, 2]
empty_list = []
print("\n")
print("product_iteration(numbers) = {}".format(product_iteration(numbers)))
print("product_iteration(empty_list) = \
{}".format(product_iteration(empty_list)))
numbers = [21, 12, 10, 128, 2]
print("\n")
print("product_recursive(numbers) = {}".format(product_recursive(numbers)))
print("product_recursive(empty_list) = \
{}".format(product_recursive(empty_list)))
print("\n")
print("=== EXERCISE 12 ===")
print("\n\nGood to know!")
print("\n")
print("=== EXERCISE 13 ===")
def read_file(filename):
with open(filename, 'r') as myfile:
data=myfile.read().replace('\n', '')
return data
file_content = read_file("emails.txt")
print("\n\nread_file('emails.txt')\n\n{}".format(file_content))
import re
def extract_email(string):
match = re.findall(r'[\w\.-]+@[\w\.-]+\.\w+', string)
return match
print("\nextract_email(file_content)\
\n\n{}".format(extract_email(file_content))) | 23.138298 | 79 | 0.608046 | 1,400 | 8,700 | 3.704286 | 0.166429 | 0.06479 | 0.091207 | 0.047628 | 0.260509 | 0.17258 | 0.120902 | 0.120902 | 0.099306 | 0.081566 | 0 | 0.051979 | 0.192874 | 8,700 | 376 | 80 | 23.138298 | 0.686557 | 0 | 0 | 0.254753 | 0 | 0.003802 | 0.201356 | 0.026779 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045627 | false | 0.003802 | 0.015209 | 0 | 0.110266 | 0.551331 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
# File: checkov/terraform/checks/resource/kubernetes/PodSecurityContext.py  (repo: pmalkki/checkov, Apache-2.0)
from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
class PodSecurityContext(BaseResourceCheck):
def __init__(self):
# CIS-1.5 5.7.3
name = "Apply security context to your pods and containers"
# Security context can be set at pod or container level.
id = "CKV_K8S_29"
supported_resources = ['kubernetes_pod', 'kubernetes_deployment', 'kubernetes_daemonset']
categories = [CheckCategories.GENERAL_SECURITY]
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def scan_resource_conf(self, conf) -> CheckResult:
if "spec" not in conf:
self.evaluated_keys = [""]
return CheckResult.FAILED
spec = conf['spec'][0]
if spec.get("container"):
containers = spec.get("container")
for idx, container in enumerate(containers):
if type(container) != dict:
return CheckResult.UNKNOWN
if not container.get("security_context"):
                    self.evaluated_keys = [f"spec/[0]/container/{idx}"]
return CheckResult.FAILED
return CheckResult.PASSED
if spec.get("template") and isinstance(spec.get("template"), list):
template = spec.get("template")[0]
if template.get("spec") and isinstance(template.get("spec"), list):
temp_spec = template.get("spec")[0]
if temp_spec.get("container"):
containers = temp_spec.get("container")
for idx, container in enumerate(containers):
if type(container) != dict:
return CheckResult.UNKNOWN
if not container.get("security_context"):
                            self.evaluated_keys = [f"spec/[0]/template/[0]/spec/[0]/container/{idx}"]
return CheckResult.FAILED
return CheckResult.PASSED
return CheckResult.FAILED
check = PodSecurityContext()
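The check walks either `spec/container` or, for deployments/daemonsets, `spec/template/spec/container`. A dependency-free sketch of that same walk (function name and toy confs are mine, not checkov's API), so the traversal can be tried without installing checkov:

```python
def pods_have_security_context(conf):
    """Standalone sketch of the walk in PodSecurityContext.scan_resource_conf:
    True only when every container block defines security_context."""
    spec = conf.get('spec', [{}])[0]
    containers = spec.get('container')
    if not containers:
        # Deployments/daemonsets nest the pod spec under template/spec.
        template = (spec.get('template') or [{}])[0]
        containers = (template.get('spec') or [{}])[0].get('container', [])
    return bool(containers) and all(
        isinstance(c, dict) and c.get('security_context') for c in containers
    )

passing = {'spec': [{'container': [{'security_context': [{'run_as_user': 1000}]}]}]}
failing = {'spec': [{'container': [{'image': 'nginx'}]}]}
assert pods_have_security_context(passing) is True
assert pods_have_security_context(failing) is False
```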
# File: repos/system_upgrade/common/models/selinux.py  (repo: sm00th/leapp-repository, Apache-2.0)
from leapp.models import fields, Model
from leapp.topics import SystemInfoTopic, TransactionTopic
class SELinuxModule(Model):
"""SELinux module in cil including priority"""
topic = SystemInfoTopic
name = fields.String()
priority = fields.Integer()
content = fields.String()
# lines removed due to content invalid on RHEL 8
removed = fields.List(fields.String())
class SELinuxModules(Model):
"""
List of selinux modules that are not part of distribution policy
modules - list of custom policy modules (priority != 100,200)
templates - List of installed udica templates
"""
topic = SystemInfoTopic
modules = fields.List(fields.Model(SELinuxModule))
templates = fields.List(fields.Model(SELinuxModule))
class SELinuxCustom(Model):
"""SELinux customizations returned by semanage export"""
topic = SystemInfoTopic
commands = fields.List(fields.String())
removed = fields.List(fields.String())
class SELinuxRequestRPMs(Model):
"""
SELinux related RPM packages that need to be present after upgrade
Listed packages provide types that where used in policy
customizations (to_install), or the corresponding policy
was installed on RHEL-7 installation with priority 200
(to_keep).
"""
topic = TransactionTopic
to_keep = fields.List(fields.String(), default=[])
to_install = fields.List(fields.String(), default=[])
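The `SELinuxModules` docstring defines custom modules as those installed at a priority other than 100 or 200 (the distribution priorities). A small sketch of that filter over hypothetical (name, priority) pairs, without requiring leapp itself:

```python
# Hypothetical module list; 100 and 200 are the distribution priorities
# called out in the SELinuxModules docstring above.
installed = [('my_policy', 400), ('base', 100), ('vendor', 200), ('udica_tpl', 300)]

DISTRIBUTION_PRIORITIES = {100, 200}
custom = [name for name, prio in installed if prio not in DISTRIBUTION_PRIORITIES]
assert custom == ['my_policy', 'udica_tpl']
```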
# File: annotator_web.py  (repo: j20100/Seg_Annotator, CC-BY-4.0)
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import argparse
import base64
from bson import ObjectId
import datetime
from flask import Flask, Markup, Response, abort, current_app, escape, flash, \
    make_response, redirect, render_template, request, url_for
from flask_login import LoginManager, UserMixin, current_user, login_required, \
login_user, logout_user
from werkzeug.utils import secure_filename
from functools import update_wrapper, wraps
from gridfs import GridFS
from jinja2 import evalcontextfilter
from binascii import a2b_base64
from OpenSSL import SSL
from flask import session
from flask_socketio import SocketIO, emit
import json
import hashlib
import pandas as pd
import pymongo
import re
import subprocess
import threading
import time
import uuid
import urllib.parse
import webcolors
import os
import glob
from flask_cors import CORS
curr_annotated_img = []
def hash_password(password):
"""This function hashes the password with SHA256 and a random salt"""
salt = uuid.uuid4().hex
return hashlib.sha256(salt.encode() + password.encode()).hexdigest() + ':' + salt
def check_password(hashed_password, user_password):
"""This function checks a password against a SHA256:salt entry"""
password, salt = hashed_password.split(':')
return password == hashlib.sha256(salt.encode() + user_password.encode()).hexdigest()
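A round-trip check of the two password helpers (reproduced verbatim below so the block runs on its own): a password hashed with a random salt must verify against itself and reject anything else.

```python
import hashlib
import uuid

# The two helpers above, reproduced so this block is self-contained.
def hash_password(password):
    salt = uuid.uuid4().hex
    return hashlib.sha256(salt.encode() + password.encode()).hexdigest() + ':' + salt

def check_password(hashed_password, user_password):
    password, salt = hashed_password.split(':')
    return password == hashlib.sha256(salt.encode() + user_password.encode()).hexdigest()

stored = hash_password('s3cret')
assert check_password(stored, 's3cret') is True
assert check_password(stored, 'wrong') is False
```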
def admin_required(func):
"""Function wrapper to allow only logged in admins to access the page."""
@wraps(func)
def decorated_function(*args, **kwargs):
if not current_user.is_admin():
return redirect(url_for('bad_permissions'))
return func(*args, **kwargs)
return decorated_function
# Load default configuration from local file
with open('config.json') as config:
conf = argparse.Namespace(**json.load(config))
# Argument parser strings
app_description = "annotator Website Application\n\n" \
"All information can be found at https://github.com/seg_annotator.\n" \
"Modify file 'config.json' to edit the application's configuration.\n" \
"There are other command line arguments that can be used:"
help_host = "Hostname of the Flask app. Default: {0}".format(conf.app_host)
help_port = "Port of the Flask app. Default: {0}".format(conf.app_port)
help_debug = "Start Flask app in debug mode. Default: {0}".format(conf.debug)
# Set up the command-line arguments
parser = argparse.ArgumentParser(description=app_description,
formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('-H', '--app_host', help=help_host, default=conf.app_host)
parser.add_argument('-P', '--app_port', help=help_port, default=conf.app_port)
parser.add_argument('-D', '--debug', dest='debug', action='store_true', help=help_debug)
parser.set_defaults(debug=conf.debug)
# Update default configs with command line args
args = parser.parse_args()
conf.__dict__.update(args.__dict__)
# Get MongoDB Database Client
client = pymongo.MongoClient()
annotator = client['annotator']
fs = GridFS(annotator)
# Validate MongoDB is started, else exit
try:
client.server_info()
except pymongo.errors.ServerSelectionTimeoutError:
print('MongoDB is not started. Restart it before launching the web app again.')
quit()
# Create Flask Application
app = Flask(__name__)
CORS(app)
app.secret_key = uuid.uuid4().hex # Required to use log in and session manager
login_manager = LoginManager()
login_manager.init_app(app)
# ROS variable
ros_pid = None
socketio = SocketIO(app)
@socketio.on('disconnect')
def disconnect_user():
print('DISCONNECTING USER')
# user_logs = list(annotator.logs.find().skip((annotator.logs).count() - 1))
# user = user_logs[-1]
# annotator.logs.update_one(user, {'$set' : { 'stop_time' : time.time()}})
logout_user()
# session.pop(app.secret_key, None)
# User class
class User(UserMixin):
"""User Class making DB-stored parameters accessible from HTML templates."""
def __init__(self, username):
self.username = username
user = annotator.credentials.find_one({'username': username})
self.admin = user['admin']
self.nb_images = user['nb_images']
def get_id(self):
return self.username
def is_admin(self):
return self.admin
# Login Manager Configuration
@login_manager.user_loader
def load_user(user_id):
return User(user_id)
@login_manager.unauthorized_handler
def unauthorized_callback():
return redirect('/login?next=' + request.path)
# Application routes
@app.route('/')
def go_home():
return redirect(url_for('home'))
@app.route('/login', methods=['GET', 'POST'])
def login():
if request.method == 'POST':
next_page = request.args.get('next')
username = request.form['username']
password = request.form['password']
user = annotator.credentials.find_one({'username': username})
if user and check_password(user['password'], password):
if user['active']: # Inactived users should not be able to log in
login_user(User(username))
annotator.credentials.update_one(user, {'$set':
{'last_login' : time.time()}})
# If an admin logs in and there is at least one inactived user, show it
if user['admin'] and annotator.credentials.find_one({'active': False}):
flash('At least one user account has to be activated', 'info')
return redirect(url_for('manage_users'))
annotator.logs.insert_one({'start_time' : time.time(),
'username' : username,
'stop_time' : 0,
'nb_images' : 0})
return redirect(next_page or url_for('home'))
else:
flash('Account not yet activated by an administrator', 'warning')
else:
flash('Invalid credentials', 'danger')
return render_template('login.html')
else:
return render_template('login.html')
@app.route('/logout')
@login_required
def logout():
user_logs = list(annotator.logs.find().skip((annotator.logs).count() - 1))
user = user_logs[-1]
annotator.logs.update_one(user, {'$set' : { 'stop_time' : time.time()}})
logout_user()
return redirect(url_for('home'))
@app.route('/create_account', methods=['GET', 'POST'])
def create_account():
if request.method == 'POST':
next = request.args.get('next')
username = request.form['username'].strip()
password = request.form['password']
password_confirm = request.form['password_confirm']
if not password:
flash('Password cannot be empty', 'danger')
return render_template('create_account.html')
if password != password_confirm:
flash('Both password entries do not match', 'danger')
return render_template('create_account.html')
if not username.replace('_', '').isalnum():
# Only allow letters, numbers and underscore characters in usernames
flash('Invalid username (letters, numbers and underscores only)', 'danger')
return render_template('create_account.html')
user = annotator.credentials.find_one({'username': username})
if user or not username: # Check if username is not empty or already taken
flash('Username not available', 'danger')
return render_template('create_account.html')
active = False
admin = False
# If this is the first user to register, make it active and admin
if not annotator.credentials.find_one():
active = True
admin = True
flash('First account created, activated and is administrator, congratulations!', 'success')
# Create a new user account
annotator.credentials.insert_one({'username': username,
'password': hash_password(password),
'active': active,
'nb_images' : 0,
'admin': admin})
flash('Account created successfully', 'success')
return redirect(url_for('login'))
else:
return render_template('create_account.html')
@app.route('/change_password', methods=['GET', 'POST'])
def change_password():
if request.method == 'POST':
username = request.form['username']
old_password = request.form['old_password']
new_password = request.form['new_password']
user = annotator.credentials.find_one({'username': username})
if user and check_password(user['password'], old_password):
if not new_password:
flash('Password cannot be empty', 'danger')
return render_template('change_password.html')
# Modify password
annotator.credentials.update_one(user, {'$set': {
'password': hash_password(new_password)}})
flash('Password changed successfully', 'success')
return redirect(url_for('login'))
else:
flash('Invalid credentials', 'danger')
return render_template('change_password.html')
else:
return render_template('change_password.html')
@app.route('/home')
def home():
return render_template('index.html')
def sortKeyFunc(s):
t = s.split('/')
k=t[3].split('.')
s=k[0].split('_')
return int(s[2])
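`sortKeyFunc` silently assumes paths of the form `static/data/annotations/<a>_<b>_<number>.png` (four path components; stem with at least three underscore-separated fields, the third numeric) and sorts by that number. A standalone copy with hypothetical filenames makes the assumed naming scheme explicit:

```python
def sortKeyFunc(s):
    # Same logic as above: 4th path component, stem split on '_',
    # third underscore-separated field parsed as the sort index.
    t = s.split('/')
    k = t[3].split('.')
    s = k[0].split('_')
    return int(s[2])

# Hypothetical filenames following the assumed <a>_<b>_<number>.png scheme.
files = ['static/data/annotations/img_frame_10.png',
         'static/data/annotations/img_frame_2.png']
files.sort(key=sortKeyFunc)
assert files[0].endswith('img_frame_2.png')   # numeric sort: 2 before 10
assert sortKeyFunc(files[-1]) == 10
```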
@app.route('/load_new_img', methods = ['POST'])
def uploader_new_img():
if request.method == 'POST':
global curr_annotated_img
directory = "static/data/annotations/"
searchlabel = os.path.join(directory, "*.png" )
with open('/home/jonathan/Seg_Annotator/static/data/dataset.json') as f:
data = json.load(f)
print(data)
fileslabel = glob.glob(searchlabel)
fileslabel.sort(key=sortKeyFunc)
i = 0
        print("Doing the currently annotated img now")
print(curr_annotated_img)
print(fileslabel[i])
while fileslabel[i] in curr_annotated_img :
i=i+1
print("THIS ONE PASSED")
print(fileslabel[i])
newImgAnnot = fileslabel[i]
t = fileslabel[i].split('/')
#print(t)
newImg=t[0]+"/"+t[1]+"/"+"images"+"/"+t[3]
#print("Sending new img")
#print(newImg)
#print("Sending new img annot")
#print(newImgAnnot)
send = newImg+":"+newImgAnnot
#print(send)
curr_annotated_img.append(newImgAnnot)
return send
@app.route('/uploader', methods = ['POST'])
def uploader_file():
if request.method == 'POST':
pic = request.form['file']
username = request.form['username']
filename = request.form['filename']
#f.save(secure_filename(f.filename))
up = urllib.parse.urlparse(pic)
head, data = up.path.split(',', 1)
bits = head.split(';')
mime_type = bits[0] if bits[0] else 'text/plain'
charset, b64 = 'ASCII', False
for bit in bits:
if bit.startswith('charset='):
charset = bit[8:]
elif bit == 'base64':
b64 = True
binary_data = a2b_base64(data)
directory = "static/data/annotations/"
test = os.listdir( directory )
for item in test:
if item.startswith(filename):
os.remove( os.path.join( directory, item ) )
timestr = time.strftime("%Y%m%d-%H%M%S")
with open("static/data/annotations/" + filename + "_corrected_" + timestr, 'wb') as f:
f.write(binary_data)
user = annotator.credentials.find_one({'username': username})
user_logs = list(annotator.logs.find().skip((annotator.logs).count() - 1))
user_stats = user_logs[-1]
nb_images = user['nb_images']
nb_images = nb_images + 1
nb_images_stats = user_stats['nb_images']
nb_images_stats = nb_images_stats + 1
annotator.logs.update_one(user_stats, {'$set': {'nb_images': nb_images_stats}})
annotator.credentials.update_one(user, {'$set': {'nb_images': nb_images}})
searchlabel = os.path.join(directory, "*.png" )
fileslabel = glob.glob(searchlabel)
fileslabel.sort()
        return "Done sending images"
@app.route('/updater', methods = ['POST'])
def updater_URL():
if request.method == 'POST':
annotURL = request.form["URL"]
directory = "static/data/annotations/"
test = os.listdir(directory)
realURL = "NONE"
for item in test:
if item.startswith(annotURL[25:]):
realURL = item
return "static/data/annotations/" + realURL
@app.route('/annotator')
@login_required
def annotator_edit():
username = current_user.get_id()
return render_template('annotator.html', username=username)
@app.route('/dataset')
@login_required
def dataset():
username = current_user.get_id()
return render_template('dataset.html', username=username)
@app.route('/logs')
@admin_required
def logs():
logs = list(annotator.logs.find())
return render_template('logs.html', logs=logs)
@app.route('/logs/<protocol>')
def log_highlights(protocol):
    # valid_protocol() is assumed to be defined elsewhere in the project
    if not valid_protocol(protocol):
return redirect(url_for('logs'))
# Get database of current protocol
db = client[protocol]
started = db.steps.count()
done = db.steps.count({'end': {'$exists': True}})
info = db.protocol.find_one()
json_protocol = {}
if info:
# Pretty print the raw protocol
json_protocol = json.dumps(info['protocol'], indent=4, sort_keys=True)
return render_template('log_highlights.html', active='Highlights', \
protocol=protocol, json_protocol=json_protocol, \
started=started, done=done, db=db)
@app.route('/logs/delete/<id>')
@login_required
@admin_required
def delete_logs(id):
    # Delete only the requested log entry (matches the flash message below)
    annotator.logs.delete_one({'_id': ObjectId(id)})
    flash("Entry {0} deleted successfully".format(id), 'info')
return redirect(url_for('logs'))
@app.route('/manage_users')
@login_required
@admin_required
def manage_users():
user_list = list(annotator.credentials.find())
return render_template('manage_users.html', users=user_list)
@app.route('/manage_users/activate/<username>')
@login_required
@admin_required
def activate_user(username):
"""Activate a user account."""
user = annotator.credentials.find_one({'username': username})
if not user['active']:
annotator.credentials.update_one(user, {'$set': {'active': True}})
flash("User {0} activated successfully".format(username), 'success')
else:
flash("User {0} is already active".format(username), 'warning')
return redirect(url_for('manage_users'))
@app.route('/manage_users/demote/<username>')
@login_required
@admin_required
def demote_user(username):
"""Remove admin privileges of another administrator."""
user = annotator.credentials.find_one({'username': username})
if current_user.get_id() == username:
flash('Cannot revert yourself to standard user', 'danger')
elif user:
if user['admin']:
annotator.credentials.update_one(user, {'$set': {'admin': False}})
flash("User {0} reverted to standard user successfully".format(username), 'info')
else:
flash("User {0} is already a standard user".format(username), 'warning')
else:
flash("Cannot revert unknown user {0} to standard user".format(username), 'warning')
return redirect(url_for('manage_users'))
@app.route('/manage_users/promote/<username>')
@login_required
@admin_required
def promote_user(username):
"""Give admin privileges from a normal user."""
user = annotator.credentials.find_one({'username': username})
if user:
if user['admin']:
flash("User {0} is already an administrator".format(username), 'warning')
else:
annotator.credentials.update_one(user, {'$set': {'admin': True}})
flash("User {0} promoted to administrator successfully".format(username), 'info')
else:
flash("Cannot promote unknown user {0} to administrator".format(username), 'warning')
return redirect(url_for('manage_users'))
@app.route('/manage_users/delete/<username>')
@login_required
@admin_required
def delete_user(username):
"""Delete a user account that is not yours."""
user = annotator.credentials.find_one({'username': username})
if current_user.get_id() == username:
flash('Cannot delete yourself', 'danger')
elif user:
annotator.credentials.delete_one(user)
flash("User {0} deleted successfully".format(username), 'info')
else:
flash("Cannot delete unknown user {0}".format(username), 'warning')
return redirect(url_for('manage_users'))
@app.route('/bad_permissions')
def bad_permissions():
"""Function called if a normal user tries to get to an admin reserved page."""
return render_template('bad_permissions.html')
@app.errorhandler(404)
def page_not_found(error):
"""This method handles all unexisting route requests."""
return render_template('404.html'), 404
# Add objects that can be called from the Jinja2 HTML templates
@app.template_filter()
@evalcontextfilter
def nl2br(eval_ctx, value):
"""Converts new lines to paragraph breaks in HTML."""
_paragraph_re = re.compile(r'(?:\r\n|\r|\n){2,}')
result = '\n\n'.join('<p>%s</p>' % p.replace('\n', '<br>\n') \
for p in _paragraph_re.split(escape(value)))
result = result.replace(' ', ' ')
if eval_ctx.autoescape:
result = Markup(result)
return result
def crossdomain(origin=None, methods=None, headers=None, max_age=21600,
attach_to_all=True, automatic_options=True):
"""Decorator function that allows crossdomain requests.
Courtesy of
https://blog.skyred.fi/articles/better-crossdomain-snippet-for-flask.html
"""
    if methods is not None:
        methods = ', '.join(sorted(x.upper() for x in methods))
    if headers is not None and not isinstance(headers, str):
        headers = ', '.join(x.upper() for x in headers)
    if origin is not None and not isinstance(origin, str):
        origin = ', '.join(origin)
    if isinstance(max_age, datetime.timedelta):
        max_age = max_age.total_seconds()
def get_methods():
""" Determines which methods are allowed
"""
if methods is not None:
return methods
options_resp = current_app.make_default_options_response()
return options_resp.headers['allow']
def decorator(f):
"""The decorator function
"""
def wrapped_function(*args, **kwargs):
"""Caries out the actual cross domain code
"""
if automatic_options and request.method == 'OPTIONS':
resp = current_app.make_default_options_response()
else:
resp = make_response(f(*args, **kwargs))
if not attach_to_all and request.method != 'OPTIONS':
return resp
h = resp.headers
h['Access-Control-Allow-Origin'] = origin
h['Access-Control-Allow-Methods'] = get_methods()
h['Access-Control-Max-Age'] = str(max_age)
h['Access-Control-Allow-Credentials'] = 'true'
h['Access-Control-Allow-Headers'] = \
"Origin, X-Requested-With, Content-Type, Accept, Authorization"
if headers is not None:
h['Access-Control-Allow-Headers'] = headers
return resp
f.provide_automatic_options = False
return update_wrapper(wrapped_function, f)
return decorator
def convert_ts(ts):
"""Convert timestamp to human-readable string"""
return datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d_%H:%M:%S')
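`datetime.fromtimestamp` converts in the local timezone, so the exact date produced by `convert_ts` is machine-dependent; only the output shape can be asserted portably. A standalone copy of the helper:

```python
import datetime
import re

def convert_ts(ts):
    # Same one-liner as convert_ts above.
    return datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d_%H:%M:%S')

# Only the *shape* of the output is asserted here, not a specific date,
# because fromtimestamp() depends on the local timezone.
assert re.fullmatch(r'\d{4}-\d{2}-\d{2}_\d{2}:\d{2}:\d{2}', convert_ts(0))
```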
def format_sidebar(name, icon, url):
"""
Used to generate HTML line for sidebar in layout.html.
- name is the name of the tab
- icon is the glyphicon name
"""
current_url = request.path.split('/')[1]
active = ' class="active"' if url == current_url else ''
html = '<li{0}><a href="/{1}"><i style="float:left; margin-right: 14px;">' \
'<span class="glyphicon glyphicon-{2}"></span></i>{3}' \
'</a></li>'.format(active, url, icon, name)
return Markup(html)
# Make some variables and functions available from Jinja2 HTML templates
app.jinja_env.globals.update(conf=conf,
force_type = Markup('onselect="return false" ' \
'onpaste="return false" ' \
'oncopy="return false" ' \
'oncut="return false" ' \
'ondrag="return false" ' \
'ondrop="return false" ' \
'autocomplete=off'),
format_sidebar=format_sidebar,
convert_ts=convert_ts)
# Start the application
if __name__ == '__main__':
#context = SSL.Context(SSL.TLSv1_2_METHOD)
#context.use_privatekey_file('host.key')
#context.use_certificate_file('host.cert')
socketio.run(app, host=conf.app_host, port=int(conf.app_port), ssl_context=('cert.pem', 'key.pem'))
# File: core/migrations/0002_meetup.py  (repo: hatsem78/django_docker_nginex_nginx_gunicorn, MIT)
# Generated by Django 2.2.16 on 2020-09-29 23:21
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('core', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Meetup',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('date', models.DateTimeField(help_text='Date start Meetup')),
('description', models.TextField(blank=True)),
('count_beer', models.IntegerField(default=0, help_text='Count Beer')),
('maximum_temperature', models.FloatField(blank=True, default=0, help_text='Maximum Temperature')),
('count_participants', models.IntegerField(default=0, help_text='Count Participants')),
('direction', models.CharField(max_length=350)),
],
options={
'verbose_name_plural': 'Meetup',
},
),
]
# File: yocto/poky/bitbake/lib/bb/ui/crumbs/hobcolor.py  (repo: jxtxinbing/ops-build, Apache-2.0)
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
class HobColors:
WHITE = "#ffffff"
PALE_GREEN = "#aaffaa"
ORANGE = "#eb8e68"
PALE_RED = "#ffaaaa"
GRAY = "#aaaaaa"
LIGHT_GRAY = "#dddddd"
SLIGHT_DARK = "#5f5f5f"
DARK = "#3c3b37"
BLACK = "#000000"
PALE_BLUE = "#53b8ff"
DEEP_RED = "#aa3e3e"
KHAKI = "#fff68f"
OK = WHITE
RUNNING = PALE_GREEN
WARNING = ORANGE
ERROR = PALE_RED
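A quick sanity check on the palette above (the list below copies the raw hex values from `HobColors`): every entry should be a well-formed `#rrggbb` triple, and each can be split into byte components.

```python
import re

# Raw values from HobColors above; each should be a '#rrggbb' hex triple.
palette = ["#ffffff", "#aaffaa", "#eb8e68", "#ffaaaa", "#aaaaaa", "#dddddd",
           "#5f5f5f", "#3c3b37", "#000000", "#53b8ff", "#aa3e3e", "#fff68f"]

HEX_RE = re.compile(r'^#[0-9a-f]{6}$')
assert all(HEX_RE.match(c) for c in palette)

# Splitting one entry (PALE_BLUE) into its (r, g, b) byte components:
r, g, b = (int("#53b8ff"[i:i + 2], 16) for i in (1, 3, 5))
assert (r, g, b) == (0x53, 0xb8, 0xff)
```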
# File: venv/Lib/site-packages/PyQt4/examples/designer/calculatorform/ui_calculatorform.py  (repo: prateekfxtd/ns_Startup, MIT)
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'calculatorform.ui'
#
# Created: Mon Jan 23 13:21:45 2006
# by: PyQt4 UI code generator vsnapshot-20060120
#
# WARNING! All changes made in this file will be lost!
import sys
from PyQt4 import QtCore, QtGui
class Ui_CalculatorForm(object):
def setupUi(self, CalculatorForm):
CalculatorForm.setObjectName("CalculatorForm")
CalculatorForm.resize(QtCore.QSize(QtCore.QRect(0,0,400,300).size()).expandedTo(CalculatorForm.minimumSizeHint()))
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Policy(5),QtGui.QSizePolicy.Policy(5))
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(CalculatorForm.sizePolicy().hasHeightForWidth())
CalculatorForm.setSizePolicy(sizePolicy)
self.gridlayout = QtGui.QGridLayout(CalculatorForm)
self.gridlayout.setMargin(9)
self.gridlayout.setSpacing(6)
self.gridlayout.setObjectName("gridlayout")
spacerItem = QtGui.QSpacerItem(40,20,QtGui.QSizePolicy.Expanding,QtGui.QSizePolicy.Minimum)
self.gridlayout.addItem(spacerItem,0,6,1,1)
self.label_3_2 = QtGui.QLabel(CalculatorForm)
self.label_3_2.setGeometry(QtCore.QRect(169,9,20,52))
self.label_3_2.setAlignment(QtCore.Qt.AlignCenter)
self.label_3_2.setObjectName("label_3_2")
self.gridlayout.addWidget(self.label_3_2,0,4,1,1)
self.vboxlayout = QtGui.QVBoxLayout()
self.vboxlayout.setMargin(1)
self.vboxlayout.setSpacing(6)
self.vboxlayout.setObjectName("vboxlayout")
self.label_2_2_2 = QtGui.QLabel(CalculatorForm)
self.label_2_2_2.setGeometry(QtCore.QRect(1,1,36,17))
self.label_2_2_2.setObjectName("label_2_2_2")
self.vboxlayout.addWidget(self.label_2_2_2)
self.outputWidget = QtGui.QLabel(CalculatorForm)
self.outputWidget.setGeometry(QtCore.QRect(1,24,36,27))
self.outputWidget.setFrameShape(QtGui.QFrame.Box)
self.outputWidget.setFrameShadow(QtGui.QFrame.Sunken)
self.outputWidget.setAlignment(QtCore.Qt.AlignAbsolute|QtCore.Qt.AlignBottom|QtCore.Qt.AlignCenter|QtCore.Qt.AlignHCenter|QtCore.Qt.AlignHorizontal_Mask|QtCore.Qt.AlignJustify|QtCore.Qt.AlignLeading|QtCore.Qt.AlignLeft|QtCore.Qt.AlignRight|QtCore.Qt.AlignTop|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter|QtCore.Qt.AlignVertical_Mask)
self.outputWidget.setObjectName("outputWidget")
self.vboxlayout.addWidget(self.outputWidget)
self.gridlayout.addLayout(self.vboxlayout,0,5,1,1)
spacerItem1 = QtGui.QSpacerItem(20,40,QtGui.QSizePolicy.Minimum,QtGui.QSizePolicy.Expanding)
self.gridlayout.addItem(spacerItem1,1,2,1,1)
self.vboxlayout1 = QtGui.QVBoxLayout()
self.vboxlayout1.setMargin(1)
self.vboxlayout1.setSpacing(6)
self.vboxlayout1.setObjectName("vboxlayout1")
self.label_2 = QtGui.QLabel(CalculatorForm)
self.label_2.setGeometry(QtCore.QRect(1,1,46,19))
self.label_2.setObjectName("label_2")
self.vboxlayout1.addWidget(self.label_2)
self.inputSpinBox2 = QtGui.QSpinBox(CalculatorForm)
self.inputSpinBox2.setGeometry(QtCore.QRect(1,26,46,25))
self.inputSpinBox2.setObjectName("inputSpinBox2")
self.vboxlayout1.addWidget(self.inputSpinBox2)
self.gridlayout.addLayout(self.vboxlayout1,0,3,1,1)
self.label_3 = QtGui.QLabel(CalculatorForm)
self.label_3.setGeometry(QtCore.QRect(63,9,20,52))
self.label_3.setAlignment(QtCore.Qt.AlignCenter)
self.label_3.setObjectName("label_3")
self.gridlayout.addWidget(self.label_3,0,1,1,1)
self.vboxlayout2 = QtGui.QVBoxLayout()
self.vboxlayout2.setMargin(1)
self.vboxlayout2.setSpacing(6)
self.vboxlayout2.setObjectName("vboxlayout2")
self.label = QtGui.QLabel(CalculatorForm)
self.label.setGeometry(QtCore.QRect(1,1,46,19))
self.label.setObjectName("label")
self.vboxlayout2.addWidget(self.label)
self.inputSpinBox1 = QtGui.QSpinBox(CalculatorForm)
self.inputSpinBox1.setGeometry(QtCore.QRect(1,26,46,25))
self.inputSpinBox1.setObjectName("inputSpinBox1")
self.vboxlayout2.addWidget(self.inputSpinBox1)
self.gridlayout.addLayout(self.vboxlayout2,0,0,1,1)
self.retranslateUi(CalculatorForm)
QtCore.QMetaObject.connectSlotsByName(CalculatorForm)
def tr(self, string):
return QtGui.QApplication.translate("CalculatorForm", string, None, QtGui.QApplication.UnicodeUTF8)
def retranslateUi(self, CalculatorForm):
CalculatorForm.setObjectName(self.tr("CalculatorForm"))
CalculatorForm.setWindowTitle(self.tr("Calculator Form"))
self.label_3_2.setObjectName(self.tr("label_3_2"))
self.label_3_2.setText(self.tr("="))
self.label_2_2_2.setObjectName(self.tr("label_2_2_2"))
self.label_2_2_2.setText(self.tr("Output"))
self.outputWidget.setObjectName(self.tr("outputWidget"))
self.outputWidget.setText(self.tr("0"))
self.label_2.setObjectName(self.tr("label_2"))
self.label_2.setText(self.tr("Input 2"))
self.inputSpinBox2.setObjectName(self.tr("inputSpinBox2"))
self.label_3.setObjectName(self.tr("label_3"))
self.label_3.setText(self.tr("+"))
self.label.setObjectName(self.tr("label"))
self.label.setText(self.tr("Input 1"))
self.inputSpinBox1.setObjectName(self.tr("inputSpinBox1"))
| 47.429752 | 343 | 0.699076 | 663 | 5,739 | 5.957768 | 0.211161 | 0.072911 | 0.035443 | 0.016203 | 0.201519 | 0.145316 | 0.07443 | 0.035443 | 0.018734 | 0 | 0 | 0.048562 | 0.181913 | 5,739 | 120 | 344 | 47.825 | 0.792758 | 0.040251 | 0 | 0 | 1 | 0 | 0.050182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032609 | false | 0 | 0.021739 | 0.01087 | 0.076087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
331c64b5688bf0f03e29dded9df8fd3ced9edae7 | 461 | py | Python | tests/programs/misc/causality_1.py | astraldawn/pylps | e9964a24bb38657b180d441223b4cdb9e1dadc8a | [
"MIT"
] | 1 | 2018-05-19T18:28:12.000Z | 2018-05-19T18:28:12.000Z | tests/programs/misc/causality_1.py | astraldawn/pylps | e9964a24bb38657b180d441223b4cdb9e1dadc8a | [
"MIT"
] | 12 | 2018-04-26T00:58:11.000Z | 2018-05-13T22:03:39.000Z | tests/programs/misc/causality_1.py | astraldawn/pylps | e9964a24bb38657b180d441223b4cdb9e1dadc8a | [
"MIT"
] | null | null | null | from pylps.core import *
initialise(max_time=2)
create_fluents('test(_, _)')
create_actions('hello(_, _)')
create_variables('Person', 'Years', 'NewYears', 'OldYears',)
initially(test('A', 0),)
reactive_rule(True).then(
hello('A', 5),
)
hello(Person, Years).initiates(test(Person, NewYears)).iff(
test(Person, OldYears), NewYears.is_(OldYears + Years)
)
hello(Person, Years).terminates(test(Person, OldYears))
execute(debug=False)
show_kb_log()
| 18.44 | 60 | 0.704989 | 59 | 461 | 5.305085 | 0.610169 | 0.105431 | 0.102236 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007299 | 0.10846 | 461 | 24 | 61 | 19.208333 | 0.754258 | 0 | 0 | 0 | 0 | 0 | 0.10846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3322558b16b9e4d86a91248106c527a75ff6b943 | 2,350 | py | Python | miyamoto/test/mocks.py | caedesvvv/miyamoto | d781dffeb0b2af3ce679f13114f47e6965d1cdb1 | [
"MIT"
] | 1 | 2015-01-20T17:32:19.000Z | 2015-01-20T17:32:19.000Z | miyamoto/test/mocks.py | caedesvvv/miyamoto | d781dffeb0b2af3ce679f13114f47e6965d1cdb1 | [
"MIT"
] | null | null | null | miyamoto/test/mocks.py | caedesvvv/miyamoto | d781dffeb0b2af3ce679f13114f47e6965d1cdb1 | [
"MIT"
] | null | null | null | from twisted.web import server, resource
class MockSubscriber(resource.Resource):
isLeaf = True
def render_GET(self, request):
if request.path.endswith('/callback'):
return request.args.get('hub.challenge', [''])[0]
else:
return "Huh?"
class MockPublisher(resource.Resource):
isLeaf = True
def render(self, request):
host = '%s:%s' % (request.host.host, request.host.port)
if request.path.endswith('/happycats.xml'):
request.setHeader('content-type', 'application/atom+xml')
return """<?xml version="1.0"?>
<feed>
<!-- Normally here would be source, title, etc ... -->
<link rel="hub" href="http://%s/" />
<link rel="self" href="http://%s%s" />
<updated>2008-08-11T02:15:01Z</updated>
<!-- Example of a full entry. -->
<entry>
<title>Heathcliff</title>
<link href="http://publisher.example.com/happycat25.xml" />
<id>http://publisher.example.com/happycat25.xml</id>
<updated>2008-08-11T02:15:01Z</updated>
<content>
What a happy cat. Full content goes here.
</content>
</entry>
<!-- Example of an entity that isn't full/is truncated. This is implied
by the lack of a <content> element and a <summary> element instead. -->
<entry >
<title>Heathcliff</title>
<link href="http://publisher.example.com/happycat25.xml" />
<id>http://publisher.example.com/happycat25.xml</id>
<updated>2008-08-11T02:15:01Z</updated>
<summary>
What a happy cat!
</summary>
</entry>
<!-- Meta-data only; implied by the lack of <content> and
<summary> elements. -->
<entry>
<title>Garfield</title>
<link rel="alternate" href="http://publisher.example.com/happycat24.xml" />
<id>http://publisher.example.com/happycat25.xml</id>
<updated>2008-08-11T02:15:01Z</updated>
</entry>
<!-- Context entry that's meta-data only and not new. Implied because the
update time on this entry is before the //atom:feed/updated time. -->
<entry>
<title>Nermal</title>
<link rel="alternate" href="http://publisher.example.com/happycat23s.xml" />
<id>http://publisher.example.com/happycat25.xml</id>
<updated>2008-07-10T12:28:13Z</updated>
</entry>
</feed>""" % (host, host, request.path)
else:
return 'Huh?' | 33.098592 | 80 | 0.625106 | 304 | 2,350 | 4.828947 | 0.355263 | 0.070845 | 0.108992 | 0.125341 | 0.433924 | 0.409401 | 0.361717 | 0.341281 | 0.341281 | 0.275886 | 0 | 0.047492 | 0.202553 | 2,350 | 71 | 81 | 33.098592 | 0.735859 | 0 | 0 | 0.4 | 0 | 0.033333 | 0.747342 | 0.122926 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.016667 | 0 | 0.183333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33269c8198e5473d3ddda4bf83fff9637afee268 | 1,793 | py | Python | src/graph2.py | gpu0/nnGraph | ae68af41804ce95dd4dbd6deeea57e377915acc9 | [
"MIT"
] | null | null | null | src/graph2.py | gpu0/nnGraph | ae68af41804ce95dd4dbd6deeea57e377915acc9 | [
"MIT"
] | null | null | null | src/graph2.py | gpu0/nnGraph | ae68af41804ce95dd4dbd6deeea57e377915acc9 | [
"MIT"
] | null | null | null | # t = 2 * (x*y + max(z,w))
class Num:
def __init__(self, val):
self.val = val
def forward(self):
return self.val
def backward(self, val):
        print(val)
class Mul:
def __init__(self, left, right):
self.left = left
self.right = right
def forward(self):
self.left_fw = self.left.forward()
self.right_fw = self.right.forward()
return self.left_fw * self.right_fw
def backward(self, val):
self.left.backward(val * self.right_fw)
self.right.backward(val * self.left_fw)
class Factor:
def __init__(self, center, factor):
self.center = center
self.factor = factor
def forward(self):
return self.factor * self.center.forward()
def backward(self, val):
self.center.backward(val * self.factor)
class Add:
def __init__(self, left, right):
self.left = left
self.right = right
def forward(self):
return self.left.forward() + self.right.forward()
def backward(self, val):
self.left.backward(val)
self.right.backward(val)
class Max:
def __init__(self, left, right):
self.left = left
self.right = right
def forward(self):
self.left_fw = self.left.forward()
self.right_fw = self.right.forward()
self.out = 0
if self.left_fw > self.right_fw:
self.out = 1
return self.left_fw
return self.right_fw
def backward(self, val):
self.left.backward(val * self.out)
self.right.backward(val * (1 - self.out))
if __name__ == '__main__':
x = Num(3)
y = Num(-4)
z = Num(2)
w = Num(-1)
p = Mul(x, y)
q = Max(z, w)
r = Add(p, q)
t = Factor(r, 2)
    print(t.forward())
t.backward(1)
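A quick arithmetic cross-check of the graph built above (a sketch, not part of the original file): the expression is t = 2 * (x*y + max(z, w)), so with x=3, y=-4, z=2, w=-1 the forward pass should produce -20.

```python
# Direct evaluation of the same expression the graph computes,
# for comparison with the t.forward() call above.
x, y, z, w = 3, -4, 2, -1
t_val = 2 * (x * y + max(z, w))  # 2 * (-12 + 2) = -20
```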
| 25.985507 | 57 | 0.572783 | 251 | 1,793 | 3.932271 | 0.139442 | 0.145897 | 0.06079 | 0.091185 | 0.602837 | 0.512665 | 0.444782 | 0.444782 | 0.444782 | 0.444782 | 0 | 0.007974 | 0.300614 | 1,793 | 68 | 58 | 26.367647 | 0.779107 | 0.013385 | 0 | 0.377049 | 0 | 0 | 0.004527 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.032787 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
332899d38b421fab9695a008493c81d6890c2e39 | 2,114 | py | Python | var/spack/repos/builtin/packages/jemalloc/package.py | ilagunap/spack | 510f869c3ae8ac2721debd29e98076212ee75852 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 2 | 2018-11-16T02:42:57.000Z | 2019-06-06T19:18:50.000Z | var/spack/repos/builtin/packages/jemalloc/package.py | ilagunap/spack | 510f869c3ae8ac2721debd29e98076212ee75852 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 18 | 2021-03-12T16:22:58.000Z | 2022-03-02T17:07:08.000Z | var/spack/repos/builtin/packages/jemalloc/package.py | ilagunap/spack | 510f869c3ae8ac2721debd29e98076212ee75852 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null | # Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Jemalloc(AutotoolsPackage):
"""jemalloc is a general purpose malloc(3) implementation that emphasizes
fragmentation avoidance and scalable concurrency support."""
homepage = "http://jemalloc.net/"
url = "https://github.com/jemalloc/jemalloc/releases/download/4.0.4/jemalloc-4.0.4.tar.bz2"
version('5.2.1', sha256='34330e5ce276099e2e8950d9335db5a875689a4c6a56751ef3b1d8c537f887f6')
version('5.2.0', sha256='74be9f44a60d2a99398e706baa921e4efde82bf8fd16e5c0643c375c5851e3b4')
version('4.5.0', sha256='9409d85664b4f135b77518b0b118c549009dc10f6cba14557d170476611f6780')
version('4.4.0', sha256='a7aea63e9718d2f1adf81d87e3df3cb1b58deb86fc77bad5d702c4c59687b033')
version('4.3.1', sha256='f7bb183ad8056941791e0f075b802e8ff10bd6e2d904e682f87c8f6a510c278b')
version('4.2.1', sha256='5630650d5c1caab95d2f0898de4fe5ab8519dc680b04963b38bb425ef6a42d57')
version('4.2.0', sha256='b216ddaeb901697fe38bd30ea02d7505a4b60e8979092009f95cfda860d46acb')
version('4.1.0', sha256='fad06d714f72adb4265783bc169c6d98eeb032d57ba02d87d1dcb4a2d933ec8e')
version('4.0.4', sha256='3fda8d8d7fcd041aa0bebbecd45c46b28873cf37bd36c56bf44961b36d0f42d0')
variant('stats', default=False, description='Enable heap statistics')
variant('prof', default=False, description='Enable heap profiling')
variant(
'jemalloc_prefix', default='none',
description='Prefix to prepend to all public APIs',
values=None,
multi=False
)
def configure_args(self):
spec = self.spec
args = []
if '+stats' in spec:
args.append('--enable-stats')
if '+prof' in spec:
args.append('--enable-prof')
je_prefix = spec.variants['jemalloc_prefix'].value
if je_prefix != 'none':
args.append('--with-jemalloc-prefix={0}'.format(je_prefix))
return args
| 44.041667 | 100 | 0.726585 | 205 | 2,114 | 7.463415 | 0.512195 | 0.036601 | 0.005882 | 0.037909 | 0.071895 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24297 | 0.15894 | 2,114 | 47 | 101 | 44.978723 | 0.617548 | 0.150426 | 0 | 0 | 0 | 0.03125 | 0.513483 | 0.338202 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.03125 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
332bc827b9397befa3b3df403d9170657a773ac3 | 2,372 | py | Python | Cura/Cura/plugins/VersionUpgrade/VersionUpgrade34to35/__init__.py | TIAO-JI-FU/3d-printing-with-moveo-1 | 100ecfd1208fe1890f8bada946145d716b2298eb | [
"MIT"
] | null | null | null | Cura/Cura/plugins/VersionUpgrade/VersionUpgrade34to35/__init__.py | TIAO-JI-FU/3d-printing-with-moveo-1 | 100ecfd1208fe1890f8bada946145d716b2298eb | [
"MIT"
] | null | null | null | Cura/Cura/plugins/VersionUpgrade/VersionUpgrade34to35/__init__.py | TIAO-JI-FU/3d-printing-with-moveo-1 | 100ecfd1208fe1890f8bada946145d716b2298eb | [
"MIT"
] | null | null | null | # Copyright (c) 2018 Ultimaker B.V.
# Cura is released under the terms of the LGPLv3 or higher.
from typing import Any, Dict, TYPE_CHECKING
from . import VersionUpgrade34to35
if TYPE_CHECKING:
from UM.Application import Application
upgrade = VersionUpgrade34to35.VersionUpgrade34to35()
def getMetaData() -> Dict[str, Any]:
return {
"version_upgrade": {
# From To Upgrade function
("preferences", 6000004): ("preferences", 6000005, upgrade.upgradePreferences),
("definition_changes", 4000004): ("definition_changes", 4000005, upgrade.upgradeInstanceContainer),
("quality_changes", 4000004): ("quality_changes", 4000005, upgrade.upgradeInstanceContainer),
("quality", 4000004): ("quality", 4000005, upgrade.upgradeInstanceContainer),
("user", 4000004): ("user", 4000005, upgrade.upgradeInstanceContainer),
("machine_stack", 4000004): ("machine_stack", 4000005, upgrade.upgradeStack),
("extruder_train", 4000004): ("extruder_train", 4000005, upgrade.upgradeStack),
},
"sources": {
"preferences": {
"get_version": upgrade.getCfgVersion,
"location": {"."}
},
"machine_stack": {
"get_version": upgrade.getCfgVersion,
"location": {"./machine_instances"}
},
"extruder_train": {
"get_version": upgrade.getCfgVersion,
"location": {"./extruders"}
},
"definition_changes": {
"get_version": upgrade.getCfgVersion,
"location": {"./definition_changes"}
},
"quality_changes": {
"get_version": upgrade.getCfgVersion,
"location": {"./quality_changes"}
},
"quality": {
"get_version": upgrade.getCfgVersion,
"location": {"./quality"}
},
"user": {
"get_version": upgrade.getCfgVersion,
"location": {"./user"}
}
}
}
def register(app: "Application") -> Dict[str, Any]:
return { "version_upgrade": upgrade }
| 38.258065 | 111 | 0.52403 | 167 | 2,372 | 7.287425 | 0.353293 | 0.103533 | 0.097781 | 0.172555 | 0.387839 | 0.239934 | 0 | 0 | 0 | 0 | 0 | 0.075016 | 0.35371 | 2,372 | 61 | 112 | 38.885246 | 0.718852 | 0.072091 | 0 | 0.142857 | 0 | 0 | 0.232135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040816 | false | 0 | 0.061224 | 0.040816 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
333dcc9167d75b30415a919b52fe0903e6d5c709 | 1,453 | py | Python | convert-to-loom/convert-to-loom-3.py | mzager/dv-pipelines | 3356753cc56a5298bb075f12681f9282d8f08658 | [
"MIT"
] | 3 | 2020-02-24T21:08:11.000Z | 2020-05-19T18:26:01.000Z | convert-to-loom/convert-to-loom-3.py | mzager/dv-pipelines | 3356753cc56a5298bb075f12681f9282d8f08658 | [
"MIT"
] | null | null | null | convert-to-loom/convert-to-loom-3.py | mzager/dv-pipelines | 3356753cc56a5298bb075f12681f9282d8f08658 | [
"MIT"
] | 2 | 2020-01-04T00:23:07.000Z | 2020-02-26T17:54:34.000Z | #!/bin/python3
import sys
import os
import pandas as pd
import scanpy as sc
import anndata
from anndata import AnnData
sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)
working_dir = os.getcwd()
adata = sc.read(os.path.join(working_dir, 'gene_count.txt'),
cache=True,
delimiter=' ').transpose()
print('adata variables:')
print(dir(adata))
df_cell = pd.read_csv(os.path.join(working_dir, 'cell_annotate.csv'), delimiter=',')
df_gene = pd.read_csv(os.path.join(working_dir, 'gene_annotate.csv'), delimiter=',')
df_cell.index = df_cell["sample"]
df_gene.index = df_gene["gene_id"]
adata.obs = df_cell # based off of index
adata.var = df_gene
# save the loom file
adata.write_loom("output.loom")
# gene count - cell by gene, based on index. They're exported using the same index
# annotations on the columns
# gene index annotations on the rows
# other thing to look at - anndata. CDS objects (cell dataset, in R)
# h5ad, loom, cds, and Seurat. And now our pubweb format.
# Top row gives the dimensions of the dataset -- it's a sparse matrix,
# and it's mostly zeroes, e.g. "58 1 1".
# Matrix Market format, https://math.nist.gov/MatrixMarket/formats.html
# First column is the row, second column is the column, the third is the
# value. The index is 1-based.
# TODO: convert this to a dense matrix -> they're straight up data tables.
# HDF5 is either sparse matrices or strongly typed big lists.
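A minimal sketch (not from the original script) of densifying Matrix Market style coordinate triples with plain Python; the triples and shape below are hypothetical examples.

```python
# Each entry is (row, col, value) with 1-based indices, as in the
# Matrix Market coordinate format described above.
triples = [(1, 2, 5.0), (3, 1, 7.0)]
n_rows, n_cols = 3, 2
dense = [[0.0] * n_cols for _ in range(n_rows)]
for r, c, v in triples:
    dense[r - 1][c - 1] = v  # shift to 0-based
```

In practice `scipy.io.mmread` does the same thing and returns a sparse matrix you can `.toarray()`.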
| 31.586957 | 94 | 0.720578 | 237 | 1,453 | 4.337553 | 0.531646 | 0.038911 | 0.029183 | 0.049611 | 0.083658 | 0.083658 | 0.05642 | 0.05642 | 0 | 0 | 0 | 0.010753 | 0.167928 | 1,453 | 45 | 95 | 32.288889 | 0.839537 | 0.36958 | 0 | 0 | 0 | 0 | 0.100887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.24 | null | null | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
333fc7cf7e1820391a570f954460758928bb90e7 | 753 | py | Python | 069_totient_maximum.py | fbcom/project-euler | 3c2194f797d54a0cc04031cd0be153f6a6f849ad | [
"MIT"
] | null | null | null | 069_totient_maximum.py | fbcom/project-euler | 3c2194f797d54a0cc04031cd0be153f6a6f849ad | [
"MIT"
] | null | null | null | 069_totient_maximum.py | fbcom/project-euler | 3c2194f797d54a0cc04031cd0be153f6a6f849ad | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# A Solution to "Totient maximum" – Project Euler Problem No. 69
# by Florian Buetow
#
# Sourcecode: https://github.com/fbcom/project-euler
# Problem statement: https://projecteuler.net/problem=69
def is_prime(n):
if n < 2:
return False
if n == 2:
return True
if n % 2 == 0:
return False
for d in range(3, int(n**0.5)+1, 2):
if n % d == 0:
return False
return True
# Solve
solution = n = i = 2 # starting with a prime
while n < 1000*1000:
i = i + 1
while not is_prime(i):
i += 1
solution = n
    n = n * i  # a primorial (product of the first primes) maximizes n/phi(n)
print("Solution:", solution)
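The loop above builds a primorial; a hedged cross-check (not part of the original solution) is to compute Euler's totient directly and confirm that 510510 = 2*3*5*7*11*13*17 has the small phi the problem rewards.

```python
def phi(n):
    """Euler's totient via trial factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result
```

For example, phi(510510) is 92160, giving the large n/phi(n) ratio the search exploits.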
| 22.818182 | 95 | 0.586985 | 120 | 753 | 3.675 | 0.566667 | 0.027211 | 0.027211 | 0.045351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04878 | 0.292165 | 753 | 32 | 96 | 23.53125 | 0.776735 | 0.446215 | 0 | 0.263158 | 0 | 0 | 0.022167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
334183eb998b219f40fc5f9ed455577122ddc46b | 1,401 | py | Python | lectures/07-python-dictionaries/examples/gashlycrumb.py | mattmiller899/biosys-analytics | ab24a4c7206ed9a865e896daa57cee3c4e62df1f | [
"MIT"
] | 4 | 2019-01-10T17:12:37.000Z | 2019-03-01T18:25:07.000Z | lectures/07-python-dictionaries/examples/gashlycrumb.py | mattmiller899/biosys-analytics | ab24a4c7206ed9a865e896daa57cee3c4e62df1f | [
"MIT"
] | null | null | null | lectures/07-python-dictionaries/examples/gashlycrumb.py | mattmiller899/biosys-analytics | ab24a4c7206ed9a865e896daa57cee3c4e62df1f | [
"MIT"
] | 33 | 2019-01-05T17:03:47.000Z | 2019-11-11T20:48:24.000Z | #!/usr/bin/env python3
"""dictionary lookup"""
import os
import sys
args = sys.argv[1:]
if len(args) != 1:
print('Usage: {} LETTER'.format(os.path.basename(sys.argv[0])))
sys.exit(1)
letter = args[0].upper()
lines = """
A is for Amy who fell down the stairs.
B is for Basil assaulted by bears.
C is for Clara who wasted away.
D is for Desmond thrown out of a sleigh.
E is for Ernest who choked on a peach.
F is for Fanny sucked dry by a leech.
G is for George smothered under a rug.
H is for Hector done in by a thug.
I is for Ida who drowned in a lake.
J is for James who took lye by mistake.
K is for Kate who was struck with an axe.
L is for Leo who choked on some tacks.
M is for Maud who was swept out to sea.
N is for Neville who died of ennui.
O is for Olive run through with an awl.
P is for Prue trampled flat in a brawl.
Q is for Quentin who sank on a mire.
R is for Rhoda consumed by a fire.
S is for Susan who perished of fits.
T is for Titus who flew into bits.
U is for Una who slipped down a drain.
V is for Victor squashed under a train.
W is for Winnie embedded in ice.
X is for Xerxes devoured by mice.
Y is for Yorick whose head was bashed in.
Z is for Zillah who drank too much gin.
""".strip().splitlines()
lookup = {}
for line in lines:
lookup[line[0]] = line
if letter in lookup:
print(lookup[letter])
else:
print('I do not know "{}"'.format(letter))
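The lookup table above keys each line by its first character; the same table can be built in one dict comprehension (an equivalent sketch, not in the original, with two sample lines).

```python
lines = [
    "A is for Amy who fell down the stairs.",
    "B is for Basil assaulted by bears.",
]
lookup = {line[0]: line for line in lines}
```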
| 26.942308 | 67 | 0.708779 | 284 | 1,401 | 3.496479 | 0.556338 | 0.130916 | 0.022155 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006335 | 0.211278 | 1,401 | 51 | 68 | 27.470588 | 0.892308 | 0.027837 | 0 | 0 | 0 | 0 | 0.750737 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047619 | 0 | 0.047619 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3343300fe9fc63763059c54d0198c6efedcd6999 | 1,774 | py | Python | Lesson 12 Keras/Test_Keras.py | alchemz/Self-Driving-Car-Engineer-Nanodegree | 70d6ae9d741b6c53712e0099af04597dc0ba0291 | [
"MIT"
] | 1 | 2021-03-20T12:32:35.000Z | 2021-03-20T12:32:35.000Z | Lesson 12 Keras/Test_Keras.py | alchemz/Self-Driving-Car-Engineer-Nanodegree | 70d6ae9d741b6c53712e0099af04597dc0ba0291 | [
"MIT"
] | null | null | null | Lesson 12 Keras/Test_Keras.py | alchemz/Self-Driving-Car-Engineer-Nanodegree | 70d6ae9d741b6c53712e0099af04597dc0ba0291 | [
"MIT"
] | null | null | null | # Load pickled data
import pickle
import numpy as np
import tensorflow as tf
tf.python.control_flow_ops = tf  # workaround for an old Keras / TensorFlow incompatibility
with open('small_train_traffic.p', mode='rb') as f:
data = pickle.load(f)
X_train, y_train = data['features'], data['labels']
# Initial Setup for Keras
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
# TODO: Build the Final Test Neural Network in Keras Here
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(5))
model.add(Activation('softmax'))
# preprocess data
X_normalized = np.array(X_train / 255.0 - 0.5 )
from sklearn.preprocessing import LabelBinarizer
label_binarizer = LabelBinarizer()
y_one_hot = label_binarizer.fit_transform(y_train)
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, nb_epoch=10, validation_split=0.2)
with open('small_test_traffic.p', 'rb') as f:
data_test = pickle.load(f)
X_test = data_test['features']
y_test = data_test['labels']
# preprocess data
X_normalized_test = np.array(X_test / 255.0 - 0.5 )
y_one_hot_test = label_binarizer.fit_transform(y_test)
print("Testing")
# TODO: Evaluate the test data in Keras Here
metrics = model.evaluate(X_normalized_test, y_one_hot_test)
# TODO: UNCOMMENT CODE
for metric_i in range(len(model.metrics_names)):
metric_name = model.metrics_names[metric_i]
metric_value = metrics[metric_i]
print('{}: {}'.format(metric_name, metric_value))
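The preprocessing above maps 8-bit pixel values into a zero-centered range; a small standalone check of that transform (assuming inputs in [0, 255]).

```python
def normalize(x):
    # Same transform as X / 255.0 - 0.5 used in the script above.
    return x / 255.0 - 0.5
```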
| 30.067797 | 79 | 0.756483 | 274 | 1,774 | 4.711679 | 0.375912 | 0.055771 | 0.021689 | 0.013943 | 0.088304 | 0.046476 | 0 | 0 | 0 | 0 | 0 | 0.023642 | 0.117813 | 1,774 | 59 | 80 | 30.067797 | 0.801278 | 0.108794 | 0 | 0.051282 | 0 | 0 | 0.087039 | 0.02859 | 0 | 0 | 0 | 0.016949 | 0 | 1 | 0 | false | 0 | 0.205128 | 0 | 0.205128 | 0.051282 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3344178b165547c21ead9ed9691b9fa23693168d | 2,360 | py | Python | crude/extrude_crude/take_05_model_run.py | i2mint/crude | 29fea0c7cd7cf58055b14bc47519bd527469f8d0 | [
"Apache-2.0"
] | null | null | null | crude/extrude_crude/take_05_model_run.py | i2mint/crude | 29fea0c7cd7cf58055b14bc47519bd527469f8d0 | [
"Apache-2.0"
] | null | null | null | crude/extrude_crude/take_05_model_run.py | i2mint/crude | 29fea0c7cd7cf58055b14bc47519bd527469f8d0 | [
"Apache-2.0"
] | null | null | null | """
Same as take_04_model_run, but where the dispatch is not as manual.
"""
from front.crude import KT, StoreName, Mall
from crude.extrude_crude.extrude_crude_util import mall, np, apply_model
# ---------------------------------------------------------------------------------------
# dispatchable function:
from front.crude import prepare_for_crude_dispatch
f = prepare_for_crude_dispatch(apply_model, param_to_mall_map=mall)
assert all(
f("fitted_model_1", "test_fvs") == np.array([[0.0], [1.0], [0.5], [2.25], [-1.5]])
)
def simple_mall_dispatch_core_func(
key: KT, action: str, store_name: StoreName, mall: Mall
):
if not store_name:
# if store_name empty, list the store names (i.e. the mall keys)
return list(mall)
else: # if not, get the store
store = mall[store_name]
if action == "list":
key = key.strip() # to handle some invisible whitespace that would screw things
return list(filter(lambda k: key in k, store))
elif action == "get":
return store[key]
# TODO: the function doesn't see updates made to mall. Fix.
# Just the partial (with mall set), but without the mall arg visible (otherwise it would be dispatched too)
def explore_mall(key: KT, action: str, store_name: StoreName):
return simple_mall_dispatch_core_func(key, action, store_name, mall=mall)
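The mall pattern above resolves string keys to stored objects before calling the underlying function; a generic, hypothetical sketch of that idea with plain dicts (not the actual `prepare_for_crude_dispatch` implementation).

```python
# Minimal stand-in for crude dispatch: look up string-valued arguments
# in named stores before calling the wrapped function.
def crude(func, stores):
    def wrapped(**kwargs):
        resolved = {k: stores[k][v] if k in stores else v
                    for k, v in kwargs.items()}
        return func(**resolved)
    return wrapped

demo_mall = {"x": {"two": 2}, "y": {"three": 3}}
add = crude(lambda x, y: x + y, demo_mall)
```

Here `add(x="two", y="three")` dereferences both keys through the stores, then calls the function with the stored values.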
# Attempt to do this with i2.wrapper
# from functools import partial
# from i2.wrapper import rm_params_ingress_factory, wrap
#
# without_mall_param = partial(
# wrap, ingress=partial(rm_params_ingress_factory, params_to_remove="mall")
# )
# mall_exploration_func = without_mall_param(
# partial(simple_mall_dispatch_core_func, mall=mall)
# )
# mall_exploration_func.__name__ = "explore_mall"
if __name__ == "__main__":
from crude.util import ignore_import_problems
with ignore_import_problems:
from streamlitfront.base import dispatch_funcs
from functools import partial
dispatchable_apply_model = prepare_for_crude_dispatch(
apply_model, output_store="model_results"
)
# extra, to get some defaults in:
dispatchable_apply_model = partial(
dispatchable_apply_model,
fitted_model="fitted_model_1",
fvs="test_fvs",
)
app = dispatch_funcs([dispatchable_apply_model, explore_mall])
app()
| 34.202899 | 89 | 0.677542 | 320 | 2,360 | 4.70625 | 0.375 | 0.046481 | 0.058433 | 0.045817 | 0.140106 | 0.122842 | 0.042497 | 0 | 0 | 0 | 0 | 0.00898 | 0.197881 | 2,360 | 68 | 90 | 34.705882 | 0.786582 | 0.372458 | 0 | 0 | 0 | 0 | 0.049485 | 0 | 0 | 0 | 0 | 0.014706 | 0.027778 | 1 | 0.055556 | false | 0 | 0.194444 | 0.027778 | 0.361111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
334ae316e44a87d9de4896827430dd6a339557e7 | 647 | py | Python | api/admin.py | Neoklosch/QuestionBasedServer | 7e690944355471d3a9507b0414cbaf1f800bab97 | [
"Apache-2.0"
] | null | null | null | api/admin.py | Neoklosch/QuestionBasedServer | 7e690944355471d3a9507b0414cbaf1f800bab97 | [
"Apache-2.0"
] | 7 | 2020-06-05T17:05:00.000Z | 2022-03-11T23:13:05.000Z | api/admin.py | Cookie-Monsters/uQu-Backend | 7e690944355471d3a9507b0414cbaf1f800bab97 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from api.models import Answer, Question, User
from django import forms
class AnswerAdmin(admin.ModelAdmin):
model = Answer
class QuestionAdmin(admin.ModelAdmin):
model = Question
# class UserForm(forms.ModelForm):
# password = forms.CharField(widget=forms.PasswordInput)
#
# def __init__(self, *args, **kwargs):
# super(UserForm, self).__init__(*args, **kwargs)
#
# class Meta:
# model = User
class UserAdmin(admin.ModelAdmin):
model = User
admin.site.register(Answer, AnswerAdmin)
admin.site.register(Question, QuestionAdmin)
admin.site.register(User, UserAdmin)
| 20.870968 | 60 | 0.718702 | 74 | 647 | 6.175676 | 0.432432 | 0.098468 | 0.131291 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170015 | 647 | 30 | 61 | 21.566667 | 0.851024 | 0.347759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
334de2467a9aad2d75dceb60553a04a3e344347f | 2,895 | py | Python | src/moreos/parsing.py | sigmavirus24/moreos | a32056d89fa519499e704f978db32b737977e2d7 | [
"MIT"
] | 3 | 2020-12-16T16:43:57.000Z | 2021-06-03T10:54:55.000Z | src/moreos/parsing.py | sigmavirus24/moreos | a32056d89fa519499e704f978db32b737977e2d7 | [
"MIT"
] | null | null | null | src/moreos/parsing.py | sigmavirus24/moreos | a32056d89fa519499e704f978db32b737977e2d7 | [
"MIT"
] | null | null | null | """Parsing utilities for moreos."""
import re
import attr
@attr.s(frozen=True)
class ABNF:
"""Container of regular expressions both raw and compiled for parsing."""
# From https://tools.ietf.org/html/rfc2616#section-2.2
ctl = control_characters = "\x7f\x00-\x1f"
digit = "0-9"
separators = r"\[\]\(\)<>@,;:\\\"/?={}\s"
token = f"[^{ctl}{separators}]+"
# RFC1123 date components
wkday = "(?:Mon|Tue|Wed|Thu|Fri|Sat|Sun)"
month = "(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)"
time = f"[{digit}]{{2}}:[{digit}]{{2}}:[{digit}]{{2}}"
date1 = f"[{digit}]{{1,2}} {month} [{digit}]{{4}}"
# NOTE(sigmavirus24) This allows some nonsense but it's a decent
# high-level check
rfc1123_date = f"{wkday}, {date1} {time} GMT"
# From https://tools.ietf.org/html/rfc1034#section-3.5, enhanced by
# https://tools.ietf.org/html/rfc1123#section-2.1
letter = "A-Za-z"
let_dig = f"{letter}{digit}"
let_dig_hyp = f"{let_dig}-"
ldh_str = f"[{let_dig_hyp}]+"
# This is where the update from rfc1123#section2.1 is relevant
label = f"[{let_dig}](?:(?:{ldh_str})?[{let_dig}])?"
subdomain = f"\\.?(?:{label}\\.)*(?:{label})"
# From https://tools.ietf.org/html/rfc6265#section-3.1
# NOTE: \x5b = [, \x5d = ] so let's escape those directly
cookie_octet = "[\x21\x23-\x2b\\\x2d-\x3a\x3c-\\[\\]-\x7e]"
cookie_value = f'(?:{cookie_octet}*|"{cookie_octet}*")'
cookie_name = token
cookie_pair = f"(?P<name>{cookie_name})=(?P<value>{cookie_value})"
_any_char_except_ctls_or_semicolon = f"[^;{ctl}]+"
extension_av = _any_char_except_ctls_or_semicolon
httponly_av = "(?P<httponly>HttpOnly)"
secure_av = "(?P<secure>Secure)"
path_value = _any_char_except_ctls_or_semicolon
path_av = f"Path=(?P<path>{path_value})"
domain_value = subdomain
domain_av = f"Domain=(?P<domain>{domain_value})"
non_zero_digit = "1-9"
max_age_av = f"Max-Age=(?P<max_age>[{non_zero_digit}][{digit}]*)"
sane_cookie_date = rfc1123_date
expires_av = f"Expires=(?P<expires>{sane_cookie_date})"
samesite_value = "(?:Strict|Lax|None)"
samesite_av = f"SameSite=(?P<samesite>{samesite_value})"
cookie_av = (
f"(?:{expires_av}|{max_age_av}|{domain_av}|{path_av}|"
f"{secure_av}|{httponly_av}|{samesite_av}|{extension_av})"
)
set_cookie_string = f"{cookie_pair}(?:; {cookie_av})*"
# Not specified in either RFC
client_cookie_string = f"(?:({cookie_name})=({cookie_value}))(?:; )?"
# Pre-compiled versions of the above ABNF
separators_re = re.compile(f"[{separators}]+")
control_characters_re = re.compile(f"[{ctl}]+")
cookie_name_re = token_re = re.compile(token)
cookie_value_re = re.compile(cookie_value)
set_cookie_string_re = re.compile(set_cookie_string)
client_cookie_string_re = re.compile(client_cookie_string)
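As a quick sanity check on the grammar above, the `set_cookie_string` pattern can be exercised against a typical `Set-Cookie` header. The sketch below re-declares only the pieces it needs from the definitions above; the header value `sessionid=abc123; Path=/; Secure; HttpOnly` is an invented example.

```python
import re

# Minimal subset of the ABNF above, reproduced verbatim
ctl = "\x7f\x00-\x1f"
separators = r"\[\]\(\)<>@,;:\\\"/?={}\s"
token = f"[^{ctl}{separators}]+"
cookie_octet = "[\x21\x23-\x2b\\\x2d-\x3a\x3c-\\[\\]-\x7e]"
cookie_value = f'(?:{cookie_octet}*|"{cookie_octet}*")'
cookie_pair = f"(?P<name>{token})=(?P<value>{cookie_value})"
_any_char_except_ctls_or_semicolon = f"[^;{ctl}]+"
set_cookie_string = f"{cookie_pair}(?:; {_any_char_except_ctls_or_semicolon})*"
set_cookie_string_re = re.compile(set_cookie_string)

# The name/value pair is captured; the attribute list after it is consumed
m = set_cookie_string_re.match("sessionid=abc123; Path=/; Secure; HttpOnly")
print(m.group("name"), m.group("value"))  # sessionid abc123
```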

# === tests/test_resources/fixtures/py3_tokens.py (cbillingham/docconvert, BSD-3-Clause) ===
"""Module docstring!"""
a = 1
b = 2
@bleh
@blah
def greet(
name: str,
age: int,
*args,
test='oh yeah',
**kwargs
) -> ({a: 1, b: 2}
):
"""Generic short description
Longer description of this function that does nothing
:param arg1: Desc for arg1
:type arg1: arg1_type
:returns: Desc for return
:rtype: `return_type`
:raises: RaisesError
"""
pass
def greet2(name: str, age: int, *args, test='oh yeah', **kwargs) -> {a: 1, b: 2}: return 1
"""This should not be considered a docstring.
:param arg1: Desc for arg1
:type arg1: arg1_type
""" | 17.764706 | 90 | 0.596026 | 86 | 604 | 4.151163 | 0.523256 | 0.089636 | 0.02521 | 0.033613 | 0.408964 | 0.408964 | 0.408964 | 0.408964 | 0.408964 | 0.207283 | 0 | 0.035874 | 0.261589 | 604 | 34 | 91 | 17.764706 | 0.764574 | 0.359272 | 0 | 0 | 0 | 0 | 0.056225 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.071429 | 0 | 0.071429 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
33515a257e86ab2541599ce4633c5188ac0eb93e | 4,393 | py | Python | ddot/api.py | agary-ucsd/ddot | 6f3755843e11bcf308634f188caca7fc9d4e2da3 | [
"MIT"
] | null | null | null | ddot/api.py | agary-ucsd/ddot | 6f3755843e11bcf308634f188caca7fc9d4e2da3 | [
"MIT"
] | null | null | null | ddot/api.py | agary-ucsd/ddot | 6f3755843e11bcf308634f188caca7fc9d4e2da3 | [
"MIT"
] | null | null | null | import sys
import argparse
import bottle
import pandas as pd
from bottle import Bottle, HTTPError, request
from gevent.pywsgi import WSGIServer
from geventwebsocket.handler import WebSocketHandler
from ddot import Ontology
import tempfile
import os
import csv
path_this = os.path.dirname(os.path.abspath(__file__))
os.environ["PATH"] += os.pathsep + os.path.join(path_this, '..')
print(os.environ["PATH"])
from ddot import generate_clixo_file
bottle.BaseRequest.MEMFILE_MAX = 1024 * 1024
api = Bottle()
@api.get('/api/networks')
def get_networks():
return 'get_networks complete'
# NOTE: a second handler below also registers POST /api/ontology; Bottle keeps
# only one handler per method+path, so this CLIXO-rows endpoint gets its own
# route and function name.
@api.post('/api/ontology/clixo')
def upload_clixo_file():
#=================
# default values
#=================
alpha = 0.05
beta = 0.5
try:
data = request.files.get('file')
except Exception as e:
raise HTTPError(500, e)
if data and data.file:
if (request.query.alpha):
alpha = request.query.alpha
if (request.query.beta):
beta = request.query.beta
with tempfile.NamedTemporaryFile('w', delete=False) as f:
f.write(data.file.read())
f_name = f.name
f.close()
try:
clixo_file = generate_clixo_file(f_name, alpha, beta)
return_json = {}
with open(clixo_file, 'r') as tsvfile:
reader = csv.DictReader(filter(lambda row: row[0] != '#', tsvfile), dialect='excel-tab', fieldnames=['a', 'b', 'c', 'd'])
counter = 0
for row in reader:
return_json[counter] = [row.get('a'), row.get('b'), row.get('c'), row.get('d')]
counter += 1
return return_json
        except Exception as e:  # clixo failures are not OverflowErrors
            print('Error with running clixo: %s' % e)
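The comment-skipping `DictReader` pattern used in the handler above can be tried on an in-memory TSV. The GO-term row below is an invented example; the column names `a`–`d` mirror the handler.

```python
import csv
import io

# One comment line (starts with '#') followed by one data row
tsv = "# clixo header comment\nGO:1\tGO:2\tdefault\t0.9\n"

reader = csv.DictReader(
    filter(lambda row: row[0] != '#', io.StringIO(tsv)),
    dialect='excel-tab',
    fieldnames=['a', 'b', 'c', 'd'],
)
rows = [[r['a'], r['b'], r['c'], r['d']] for r in reader]
print(rows)  # the '#' line was filtered out before parsing
```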
@api.post('/api/ontology')
def upload_file():
#=================
# default values
#=================
alpha = 0.05
beta = 0.5
try:
data = request.files.get('file')
except Exception as e:
raise HTTPError(500, e)
if data and data.file:
if (request.query.alpha):
alpha = request.query.alpha
if (request.query.beta):
beta = request.query.beta
with tempfile.NamedTemporaryFile('w', delete=False) as f:
f.write(data.file.read())
f_name = f.name
f.close()
try:
clixo_file = generate_clixo_file(f_name, alpha, beta)
with open(clixo_file, 'r') as f_saved:
df = pd.read_csv(f_saved, sep='\t', engine='python', header=None, comment='#')
print(df.columns)
ont1 = Ontology.from_table(df, clixo_format=True, parent=0, child=1)
ont_url, G = ont1.to_ndex(name='MODY',
ndex_server='http://test.ndexbio.org',
ndex_pass='scratch2',
ndex_user='scratch2',
layout='bubble-collect',
visibility='PUBLIC')
                print('File has been processed: %s' % ont_url)
                if ont_url is not None and len(ont_url) > 0 and 'http' in ont_url:
                    uuid = ont_url.split('/')[-1]
                    return 'File has been processed. UUID: %s \n' % uuid
                else:
                    return 'File has been processed. UUID: %s \n' % ont_url
        except Exception as e:  # clixo failures are not OverflowErrors
            print('Error with running clixo: %s' % e)
else:
raise HTTPError(422, '**** FILE IS MISSING ****')
return "Unable to complete process. See stack message above."
# run the web server
def main():
status = 0
parser = argparse.ArgumentParser()
parser.add_argument('port', nargs='?', type=int, help='HTTP port', default=8383)
args = parser.parse_args()
    print('starting web server on port %s' % args.port)
    print('press control-c to quit')
try:
server = WSGIServer(('0.0.0.0', args.port), api, handler_class=WebSocketHandler)
server.serve_forever()
except KeyboardInterrupt:
print('exiting main loop')
except Exception as e:
exit_str = 'could not start web server: %s' % e
print(exit_str)
status = 1
    print('exiting with status %d' % status)
return status
if __name__ == '__main__':
sys.exit(main())

# === crystaltoolgui/tabs/tabresmatcher.py (jingshenSN2/CrystalTool, MIT) ===
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'tabresmatcher.ui'
#
# Created by: PyQt5 UI code generator 5.15.4
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_tabresmatcher(object):
def setupUi(self, tabresmatcher):
tabresmatcher.setObjectName("tabresmatcher")
tabresmatcher.resize(697, 570)
self.horizontalLayout = QtWidgets.QHBoxLayout(tabresmatcher)
self.horizontalLayout.setObjectName("horizontalLayout")
self.vL_match_1 = QtWidgets.QVBoxLayout()
self.vL_match_1.setObjectName("vL_match_1")
self.l_match_res = QtWidgets.QLabel(tabresmatcher)
self.l_match_res.setObjectName("l_match_res")
self.vL_match_1.addWidget(self.l_match_res)
self.lV_match_res = QtWidgets.QListView(tabresmatcher)
self.lV_match_res.setSelectionMode(QtWidgets.QAbstractItemView.ExtendedSelection)
self.lV_match_res.setObjectName("lV_match_res")
self.vL_match_1.addWidget(self.lV_match_res)
self.hL_match_pB1 = QtWidgets.QHBoxLayout()
self.hL_match_pB1.setObjectName("hL_match_pB1")
self.pB_match_choose_res = QtWidgets.QPushButton(tabresmatcher)
self.pB_match_choose_res.setObjectName("pB_match_choose_res")
self.hL_match_pB1.addWidget(self.pB_match_choose_res)
self.l_solve_count = QtWidgets.QLabel(tabresmatcher)
self.l_solve_count.setObjectName("l_solve_count")
self.hL_match_pB1.addWidget(self.l_solve_count)
self.vL_match_1.addLayout(self.hL_match_pB1)
self.horizontalLayout.addLayout(self.vL_match_1)
self.line = QtWidgets.QFrame(tabresmatcher)
self.line.setFrameShape(QtWidgets.QFrame.VLine)
self.line.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line.setObjectName("line")
self.horizontalLayout.addWidget(self.line)
self.vL_match_2 = QtWidgets.QVBoxLayout()
self.vL_match_2.setContentsMargins(-1, 25, -1, -1)
self.vL_match_2.setObjectName("vL_match_2")
self.hL_match_pdb = QtWidgets.QHBoxLayout()
self.hL_match_pdb.setObjectName("hL_match_pdb")
self.pB_pdb = QtWidgets.QPushButton(tabresmatcher)
self.pB_pdb.setObjectName("pB_pdb")
self.hL_match_pdb.addWidget(self.pB_pdb)
self.l_pdb = QtWidgets.QLabel(tabresmatcher)
self.l_pdb.setObjectName("l_pdb")
self.hL_match_pdb.addWidget(self.l_pdb)
self.vL_match_2.addLayout(self.hL_match_pdb)
self.fL_match_1 = QtWidgets.QFormLayout()
self.fL_match_1.setObjectName("fL_match_1")
self.l_old_algorithm = QtWidgets.QLabel(tabresmatcher)
self.l_old_algorithm.setObjectName("l_old_algorithm")
self.fL_match_1.setWidget(0, QtWidgets.QFormLayout.LabelRole, self.l_old_algorithm)
self.l_loss_atom = QtWidgets.QLabel(tabresmatcher)
self.l_loss_atom.setObjectName("l_loss_atom")
self.fL_match_1.setWidget(1, QtWidgets.QFormLayout.LabelRole, self.l_loss_atom)
self.sB_loss_atom = QtWidgets.QSpinBox(tabresmatcher)
self.sB_loss_atom.setObjectName("sB_loss_atom")
self.fL_match_1.setWidget(1, QtWidgets.QFormLayout.FieldRole, self.sB_loss_atom)
self.l_threshold = QtWidgets.QLabel(tabresmatcher)
self.l_threshold.setObjectName("l_threshold")
self.fL_match_1.setWidget(2, QtWidgets.QFormLayout.LabelRole, self.l_threshold)
self.cB_threshold = QtWidgets.QComboBox(tabresmatcher)
self.cB_threshold.setObjectName("cB_threshold")
self.cB_threshold.addItem("")
self.cB_threshold.addItem("")
self.cB_threshold.addItem("")
self.cB_threshold.addItem("")
self.cB_threshold.addItem("")
self.cB_threshold.addItem("")
self.cB_threshold.addItem("")
self.fL_match_1.setWidget(2, QtWidgets.QFormLayout.FieldRole, self.cB_threshold)
self.l_dBS_threshold = QtWidgets.QLabel(tabresmatcher)
self.l_dBS_threshold.setObjectName("l_dBS_threshold")
self.fL_match_1.setWidget(3, QtWidgets.QFormLayout.LabelRole, self.l_dBS_threshold)
self.dSB_threshold = QtWidgets.QDoubleSpinBox(tabresmatcher)
self.dSB_threshold.setObjectName("dSB_threshold")
self.fL_match_1.setWidget(3, QtWidgets.QFormLayout.FieldRole, self.dSB_threshold)
self.rB_old_algorithm = QtWidgets.QRadioButton(tabresmatcher)
self.rB_old_algorithm.setText("")
self.rB_old_algorithm.setObjectName("rB_old_algorithm")
self.fL_match_1.setWidget(0, QtWidgets.QFormLayout.FieldRole, self.rB_old_algorithm)
self.vL_match_2.addLayout(self.fL_match_1)
self.hL_match_thickness = QtWidgets.QHBoxLayout()
self.hL_match_thickness.setObjectName("hL_match_thickness")
self.l_match_thick = QtWidgets.QLabel(tabresmatcher)
self.l_match_thick.setObjectName("l_match_thick")
self.hL_match_thickness.addWidget(self.l_match_thick)
self.cB_thick_x = QtWidgets.QCheckBox(tabresmatcher)
self.cB_thick_x.setObjectName("cB_thick_x")
self.hL_match_thickness.addWidget(self.cB_thick_x)
self.cB_thick_y = QtWidgets.QCheckBox(tabresmatcher)
self.cB_thick_y.setObjectName("cB_thick_y")
self.hL_match_thickness.addWidget(self.cB_thick_y)
self.cB_thick_z = QtWidgets.QCheckBox(tabresmatcher)
self.cB_thick_z.setObjectName("cB_thick_z")
self.hL_match_thickness.addWidget(self.cB_thick_z)
self.vL_match_2.addLayout(self.hL_match_thickness)
self.gL_match_output = QtWidgets.QGridLayout()
self.gL_match_output.setObjectName("gL_match_output")
self.cB_Ra = QtWidgets.QCheckBox(tabresmatcher)
self.cB_Ra.setChecked(True)
self.cB_Ra.setObjectName("cB_Ra")
self.gL_match_output.addWidget(self.cB_Ra, 4, 1, 1, 1)
self.cB_Rb = QtWidgets.QCheckBox(tabresmatcher)
self.cB_Rb.setChecked(True)
self.cB_Rb.setObjectName("cB_Rb")
self.gL_match_output.addWidget(self.cB_Rb, 4, 2, 1, 1)
self.l_match_output = QtWidgets.QLabel(tabresmatcher)
self.l_match_output.setObjectName("l_match_output")
self.gL_match_output.addWidget(self.l_match_output, 1, 0, 1, 1)
self.cB_Nm = QtWidgets.QCheckBox(tabresmatcher)
self.cB_Nm.setChecked(True)
self.cB_Nm.setObjectName("cB_Nm")
self.gL_match_output.addWidget(self.cB_Nm, 1, 2, 1, 1)
self.cB_Tm = QtWidgets.QCheckBox(tabresmatcher)
self.cB_Tm.setChecked(True)
self.cB_Tm.setObjectName("cB_Tm")
self.gL_match_output.addWidget(self.cB_Tm, 1, 1, 1, 1)
self.cB_R1 = QtWidgets.QCheckBox(tabresmatcher)
self.cB_R1.setChecked(True)
self.cB_R1.setObjectName("cB_R1")
self.gL_match_output.addWidget(self.cB_R1, 5, 1, 1, 1)
self.cB_Alpha = QtWidgets.QCheckBox(tabresmatcher)
self.cB_Alpha.setChecked(True)
self.cB_Alpha.setObjectName("cB_Alpha")
self.gL_match_output.addWidget(self.cB_Alpha, 5, 3, 1, 1)
self.cB_Rweak = QtWidgets.QCheckBox(tabresmatcher)
self.cB_Rweak.setChecked(True)
self.cB_Rweak.setObjectName("cB_Rweak")
self.gL_match_output.addWidget(self.cB_Rweak, 5, 2, 1, 1)
self.cB_Rw = QtWidgets.QCheckBox(tabresmatcher)
self.cB_Rw.setChecked(True)
self.cB_Rw.setObjectName("cB_Rw")
self.gL_match_output.addWidget(self.cB_Rw, 1, 3, 1, 1)
self.cB_Rc = QtWidgets.QCheckBox(tabresmatcher)
self.cB_Rc.setChecked(True)
self.cB_Rc.setObjectName("cB_Rc")
self.gL_match_output.addWidget(self.cB_Rc, 4, 3, 1, 1)
self.vL_match_2.addLayout(self.gL_match_output)
self.hL_match_sort = QtWidgets.QHBoxLayout()
self.hL_match_sort.setObjectName("hL_match_sort")
self.l_match_sort = QtWidgets.QLabel(tabresmatcher)
self.l_match_sort.setObjectName("l_match_sort")
self.hL_match_sort.addWidget(self.l_match_sort)
self.lE_match_sort = QtWidgets.QLineEdit(tabresmatcher)
self.lE_match_sort.setInputMask("")
self.lE_match_sort.setMaxLength(32767)
self.lE_match_sort.setObjectName("lE_match_sort")
self.hL_match_sort.addWidget(self.lE_match_sort)
self.vL_match_2.addLayout(self.hL_match_sort)
spacerItem = QtWidgets.QSpacerItem(20, 40, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
self.vL_match_2.addItem(spacerItem)
self.hL_match_start = QtWidgets.QHBoxLayout()
self.hL_match_start.setObjectName("hL_match_start")
self.pB_match_start = QtWidgets.QPushButton(tabresmatcher)
self.pB_match_start.setObjectName("pB_match_start")
self.hL_match_start.addWidget(self.pB_match_start)
self.l_match_start = QtWidgets.QLabel(tabresmatcher)
self.l_match_start.setObjectName("l_match_start")
self.hL_match_start.addWidget(self.l_match_start)
self.vL_match_2.addLayout(self.hL_match_start)
self.bar_match = QtWidgets.QProgressBar(tabresmatcher)
self.bar_match.setProperty("value", 0)
self.bar_match.setAlignment(QtCore.Qt.AlignLeading|QtCore.Qt.AlignLeft|QtCore.Qt.AlignVCenter)
self.bar_match.setObjectName("bar_match")
self.vL_match_2.addWidget(self.bar_match)
self.horizontalLayout.addLayout(self.vL_match_2)
self.retranslateUi(tabresmatcher)
self.cB_threshold.setCurrentIndex(0)
QtCore.QMetaObject.connectSlotsByName(tabresmatcher)
def retranslateUi(self, tabresmatcher):
_translate = QtCore.QCoreApplication.translate
tabresmatcher.setWindowTitle(_translate("tabresmatcher", "Form"))
self.l_match_res.setText(_translate("tabresmatcher", "RES文件"))
self.pB_match_choose_res.setText(_translate("tabresmatcher", "选择RES文件"))
self.l_solve_count.setText(_translate("tabresmatcher", "已选0个"))
self.pB_pdb.setText(_translate("tabresmatcher", "选择待搜索结构(pdb)"))
self.l_pdb.setText(_translate("tabresmatcher", "未选择"))
self.l_old_algorithm.setText(_translate("tabresmatcher", "使用旧算法"))
self.l_loss_atom.setText(_translate("tabresmatcher", "可损失原子数"))
self.l_threshold.setText(_translate("tabresmatcher", "汇报阈值基于"))
self.cB_threshold.setCurrentText(_translate("tabresmatcher", "无"))
self.cB_threshold.setItemText(0, _translate("tabresmatcher", "无"))
self.cB_threshold.setItemText(1, _translate("tabresmatcher", "Tm(匹配上次数)"))
self.cB_threshold.setItemText(2, _translate("tabresmatcher", "Nm(匹配上原子数)"))
self.cB_threshold.setItemText(3, _translate("tabresmatcher", "Rwm(质量加权匹配比例)"))
self.cB_threshold.setItemText(4, _translate("tabresmatcher", "Rwe2(电子加权匹配比例)"))
self.cB_threshold.setItemText(5, _translate("tabresmatcher", "Ram(元素匹配相似度)"))
self.cB_threshold.setItemText(6, _translate("tabresmatcher", "Rc(坐标匹配相似度)"))
self.l_dBS_threshold.setText(_translate("tabresmatcher", "汇报阈值"))
self.l_match_thick.setText(_translate("tabresmatcher", "晶胞加层"))
self.cB_thick_x.setText(_translate("tabresmatcher", "x"))
self.cB_thick_y.setText(_translate("tabresmatcher", "y"))
self.cB_thick_z.setText(_translate("tabresmatcher", "z"))
self.cB_Ra.setText(_translate("tabresmatcher", "Ra"))
self.cB_Rb.setText(_translate("tabresmatcher", "Rb"))
self.l_match_output.setText(_translate("tabresmatcher", "输出指标"))
self.cB_Nm.setText(_translate("tabresmatcher", "Nm"))
self.cB_Tm.setText(_translate("tabresmatcher", "Tm"))
self.cB_R1.setText(_translate("tabresmatcher", "R1"))
self.cB_Alpha.setText(_translate("tabresmatcher", "Alpha"))
self.cB_Rweak.setText(_translate("tabresmatcher", "Rweak"))
self.cB_Rw.setText(_translate("tabresmatcher", "Rw"))
self.cB_Rc.setText(_translate("tabresmatcher", "Rc"))
self.l_match_sort.setText(_translate("tabresmatcher", "排序规则"))
self.lE_match_sort.setText(_translate("tabresmatcher", "-Tm,-Nm"))
self.pB_match_start.setText(_translate("tabresmatcher", "开始匹配"))
self.l_match_start.setText(_translate("tabresmatcher", "未开始匹配"))

# === generalRunFiles/tideCompare.py (wesleybowman/karsten, MIT) ===
from __future__ import division
import numpy as np
import pandas as pd
import netCDF4 as nc
from datetime import datetime, timedelta
import cPickle as pickle
import sys
sys.path.append('/home/wesley/github/UTide/')
from utide import ut_solv
import scipy.io as sio
from stationClass import station
def mjd2num(x):
y = x + 678942
return y
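The constant 678942 is the offset between Modified Julian Date and MATLAB datenum: MJD day 0 is 1858-11-17, and MATLAB counts days from year 0, which in Python-ordinal terms is `toordinal() + 366`. A self-contained check:

```python
from datetime import datetime

def mjd2num(x):
    # MATLAB datenum = Modified Julian Date + 678942
    return x + 678942

# MJD 0 is 1858-11-17; MATLAB's datenum for that date is toordinal() + 366
assert mjd2num(0) == datetime(1858, 11, 17).toordinal() + 366
print(mjd2num(0))  # 678942
```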
def closest_point(points, lon, lat):
point_list = np.array([lon,lat]).T
closest_dist = ((point_list[:, 0] - points[:, 0, None])**2 +
(point_list[:, 1] - points[:, 1, None])**2)
closest_point_indexes = np.argmin(closest_dist, axis=1)
return closest_point_indexes
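The broadcasting trick in `closest_point` compares every query point against every grid node in one shot, then takes the argmin per query point. A minimal run with invented coordinates (assuming NumPy is available):

```python
import numpy as np

def closest_point(points, lon, lat):
    point_list = np.array([lon, lat]).T                  # grid nodes, shape (n, 2)
    closest_dist = ((point_list[:, 0] - points[:, 0, None])**2 +
                    (point_list[:, 1] - points[:, 1, None])**2)
    # one row per query point, one column per grid node
    return np.argmin(closest_dist, axis=1)

lon = [0.0, 1.0, 2.0]
lat = [0.0, 0.0, 0.0]
stations = np.array([[0.9, 0.1], [2.2, -0.1]])  # invented query points
idx = closest_point(stations, lon, lat)
print(idx)  # index of the nearest grid node for each station
```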
def datetime2matlabdn(dt):
# ordinal = dt.toordinal()
mdn = dt + timedelta(days=366)
frac = (dt-datetime(dt.year, dt.month, dt.day, 0, 0, 0)).seconds / \
(24.0 * 60.0 * 60.0)
return mdn.toordinal() + frac
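`datetime2matlabdn` shifts by 366 days because MATLAB's datenum epoch (year 0) sits 366 days before Python's ordinal epoch (year 1), then adds the elapsed fraction of the day. A self-contained check on an arbitrary date:

```python
from datetime import datetime, timedelta

def datetime2matlabdn(dt):
    mdn = dt + timedelta(days=366)
    frac = (dt - datetime(dt.year, dt.month, dt.day, 0, 0, 0)).seconds / \
        (24.0 * 60.0 * 60.0)
    return mdn.toordinal() + frac

# midnight gives a whole-day datenum; noon adds exactly 0.5
midnight = datetime2matlabdn(datetime(2012, 1, 1))
noon = datetime2matlabdn(datetime(2012, 1, 1, 12))
print(midnight, noon)
```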
def tideGauge(datafiles, Struct):
dgFilename = '/array/home/rkarsten/common_tidal_files/data/observed/DG/TideGauge/DigbyWharf_015893_20140115_2221_Z.mat'
gpFilename = '/array/home/rkarsten/common_tidal_files/data/observed/GP/TideGauge/Westport_015892_20140325_1212_Z.mat'
dgtg = sio.loadmat(dgFilename, struct_as_record=False, squeeze_me=True)
gptg = sio.loadmat(gpFilename, struct_as_record=False, squeeze_me=True)
ut_constits = ['M2','S2','N2','K2','K1','O1','P1','Q1']
print 'Westport TideGauge'
coef_gptg = ut_solv(gptg['RBR'].date_num_Z,
(gptg['RBR'].data-np.mean(gptg['RBR'].data)), [],
gptg['RBR'].lat, cnstit=ut_constits, notrend=True,
rmin=0.95, method='ols', nodiagn=True, linci=True,
ordercnstit='frq')
print 'DigbyWharf TideGauge'
coef_dgtg = ut_solv(dgtg['RBR'].date_num_Z,
(dgtg['RBR'].data-np.mean(dgtg['RBR'].data)), [],
dgtg['RBR'].lat, cnstit=ut_constits, notrend=True,
rmin=0.95, method='ols', nodiagn=True, linci=True,
ordercnstit='frq')
struct = np.array([])
for filename in datafiles:
print filename
data = nc.Dataset(filename, 'r')
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
time = data.variables['time'][:]
time = mjd2num(time)
tg_gp_id = np.argmin(np.sqrt((lon-gptg['RBR'].lon)**2+(lat-gptg['RBR'].lat)**2))
tg_dg_id = np.argmin(np.sqrt((lon-dgtg['RBR'].lon)**2+(lat-dgtg['RBR'].lat)**2))
#elgp = data.variables['zeta'][tg_gp_id, :]
#eldg = data.variables['zeta'][tg_dg_id, :]
elgp = data.variables['zeta'][:, tg_gp_id]
eldg = data.variables['zeta'][:, tg_dg_id]
coef_dg = ut_solv(time, eldg, [], dgtg['RBR'].lat, cnstit=ut_constits,
notrend=True, rmin=0.95, method='ols', nodiagn=True,
linci=True, ordercnstit='frq')
coef_gp = ut_solv(time, elgp, [], gptg['RBR'].lat, cnstit=ut_constits,
notrend=True, rmin=0.95, method='ols', nodiagn=True,
linci=True, ordercnstit='frq')
Name = filename.split('/')[-3]
Name = '2012_run'
print Name
obs_loc = {'name':Name, 'type':'TideGauge',
'mod_time':time, 'dg_time':dgtg['RBR'].date_num_Z,
'gp_time':gptg['RBR'].date_num_Z,
'lon':lon, 'lat':lat,
'dg_tidegauge_harmonics': coef_dgtg,
'gp_tidegauge_harmonics':coef_gptg,
'dg_mod_harmonics': coef_dg,
'gp_mod_harmonics': coef_gp,
'dg_tg_data':dgtg['RBR'].data,
'gp_tg_data':gptg['RBR'].data,
'eldg':eldg, 'elgp':elgp}
struct = np.hstack((struct, obs_loc))
Struct[Name] = np.hstack((Struct[Name], struct))
#pickle.dump(struct, open("structADCP.p", "wb"))
return Struct
def adcp(datafiles, debug=False):
if debug:
adcpFilename = '/home/wesley/github/karsten/adcp/testADCP.txt'
else:
adcpFilename = '/array/home/107002b/github/karsten/adcp/acadia_dngrid_adcp_2012.txt'
#adcpFilename = '/home/wesleyb/github/karsten/adcp/dngrid_adcp_2012.txt'
adcp = pd.read_csv(adcpFilename)
for i,v in enumerate(adcp['Latitude']):
path = adcp.iloc[i, -1]
if path != 'None':
print adcp.iloc[i, 0]
#print lonlat[i,1], uvnodell[ii,1]
ADCP = pd.read_csv(path, index_col=0)
ADCP.index = pd.to_datetime(ADCP.index)
adcpTime = np.empty(ADCP.index.shape)
for j, jj in enumerate(ADCP.index):
adcpTime[j] = datetime2matlabdn(jj)
adcpCoef = ut_solv(adcpTime, ADCP['u'].values,
ADCP['v'].values, v,
cnstit='auto', rmin=0.95, notrend=True,
method='ols', nodiagn=True, linci=True,
conf_int=True)
adcpData = adcpCoef
obs = pd.DataFrame({'u':ADCP['u'].values, 'v':ADCP['v'].values})
Struct = {}
for filename in datafiles:
print filename
data = nc.Dataset(filename, 'r')
#x = data.variables['x'][:]
#y = data.variables['y'][:]
lon = data.variables['lon'][:]
lat = data.variables['lat'][:]
lonc = data.variables['lonc'][:]
latc = data.variables['latc'][:]
ua = data.variables['ua']
va = data.variables['va']
time = data.variables['time'][:]
#trinodes = data.variables['nv'][:]
time = mjd2num(time)
lonlat = np.array([adcp['Longitude'], adcp['Latitude']]).T
#index = closest_point(lonlat, lon, lat)
index = closest_point(lonlat, lonc, latc)
adcpData = pd.DataFrame()
runData = pd.DataFrame()
Name = filename.split('/')[-3]
Name = '2012_run'
print Name
struct = np.array([])
for i, ii in enumerate(index):
path = adcp.iloc[i, -1]
if path != 'None':
print adcp.iloc[i, 0]
coef = ut_solv(time, ua[:, ii], va[:, ii], lonlat[i, 1],
cnstit='auto', rmin=0.95, notrend=True,
method='ols', nodiagn=True, linci=True,
conf_int=True)
runData = coef
mod = pd.DataFrame({'ua':ua[:, ii], 'va':va[:, ii]})
obs_loc = {'name':adcp.iloc[i,0], 'type':'ADCP', 'lat':lonlat[i,-1],
'lon':lonlat[0,0], 'obs_timeseries':obs,
'mod_timeseries':mod, 'obs_time':adcpTime,
'mod_time':time,'speed_obs_harmonics':adcpData,
'speed_mod_harmonics':runData}
struct = np.hstack((struct, obs_loc))
Struct[Name] = struct
return Struct
def main(debug=False):
if debug:
datafiles = ['/array/data1/rkarsten/dncoarse_bctest_old/output/dn_coarse_0001.nc',
'/array/data1/rkarsten/dncoarse_bctest/output/dn_coarse_0001.nc']
#datafiles = ['/home/wesley/ncfiles/smallcape_force_0001.nc']
else:
# datafiles = ['/array/data1/rkarsten/dncoarse_bctest_old/output/dn_coarse_0001.nc',
# '/array/data1/rkarsten/dncoarse_bctest/output/dn_coarse_0001.nc',
# '/array/data1/rkarsten/dncoarse_bctest2/output/dn_coarse_0001.nc',
# '/array/data1/rkarsten/dncoarse_bctest_all/output/dn_coarse_0001.nc',
# '/array/data1/rkarsten/dncoarse_bctest_EC/output/dn_coarse_0001.nc',
# '/array/data1/rkarsten/dncoarse_bctest_timeseries/output/dn_coarse_0001.nc']
#datafiles = ['/array2/data3/rkarsten/dncoarse_3D/output2/dn_coarse_station_timeseries.nc']
# datafiles = ['/EcoII/EcoEII_server_data_tree/data/simulated/FVCOM/dncoarse/calibration/bottom_roughness/0.0015/output/dngrid_0001.nc',
# '/EcoII/EcoEII_server_data_tree/data/simulated/FVCOM/dncoarse/calibration/bottom_roughness/0.0020/output/dngrid_0001.nc',
# '/EcoII/EcoEII_server_data_tree/data/simulated/FVCOM/dncoarse/calibration/bottom_roughness/0.0025/output/dngrid_0001.nc',
# '/EcoII/EcoEII_server_data_tree/data/simulated/FVCOM/dncoarse/calibration/bottom_roughness/0.0030/output/dngrid_0001.nc']
#
datafiles = ['/array/home/116822s/2012_run.nc']
#'/array/data1/rkarsten/dncoarse_stationtest/output/dn_coarse_0001.nc']
saveName = 'struct2012_run.p'
Struct = adcp(datafiles, debug=False)
if debug:
pickle.dump(Struct, open("structADCP.p", "wb"))
Struct = tideGauge(datafiles, Struct)
pickle.dump(Struct, open(saveName, "wb"))
return Struct
if __name__ == '__main__':
main()

# === app/calc/permissions.py (sajeeshen/WebCalculatorAPI, MIT) ===
from rest_framework import permissions
class IsSuperUser(permissions.IsAdminUser):
def has_permission(self, request, view):
is_admin = super().has_permission(request, view)
return request.method in permissions.SAFE_METHODS or is_admin
class IsUser(permissions.BasePermission):
def has_object_permission(self, request, view, obj):
if request.user:
if request.user.is_superuser:
return True
else:
return obj == request.user
else:
return False
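The object-permission logic above can be exercised without Django by restating it against simple stand-in user objects; `IsUserLogic`, the `username` attribute, and the users below are invented for illustration, not part of the original module.

```python
from types import SimpleNamespace

class IsUserLogic:
    """Standalone restatement of IsUser.has_object_permission for illustration."""
    def has_object_permission(self, request, view, obj):
        if request.user:
            if request.user.is_superuser:
                return True
            return obj == request.user
        return False

admin = SimpleNamespace(username="admin", is_superuser=True)
alice = SimpleNamespace(username="alice", is_superuser=False)
bob = SimpleNamespace(username="bob", is_superuser=False)

perm = IsUserLogic()
print(perm.has_object_permission(SimpleNamespace(user=admin), None, bob))    # superuser: True
print(perm.has_object_permission(SimpleNamespace(user=alice), None, alice))  # own object: True
print(perm.has_object_permission(SimpleNamespace(user=alice), None, bob))    # other's object: False
```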

# === clone-zadara-volume.py (harvard-dce/mh-backup, Apache-2.0) ===
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import time
import logging
import logging.config
import logging.handlers
import yaml
from zadarest import ZConsoleClient
from zadarest import ZVpsaClient
logger = None
def setup_logging( log_conf=None ):
if log_conf is None:
logging.basicConfig( level=logging.DEBUG )
else:
logging.config.dictConfig( dict( log_conf ) )
logging.info( 'start logging for %s at %s' %
( __name__, time.strftime( "%y%m%d-%H%M", time.localtime() ) ) )
def read_config( config_file_path ):
with open( config_file_path, 'r' ) as ymlfile:
        config = yaml.safe_load( ymlfile )  # safe_load: config needs no arbitrary tags
# some validation of config
if 'zadara_cloud_console' not in config.keys() or 'url' not in config['zadara_cloud_console'].keys():
logger.critical('missing zadara CLOUD CONSOLE URL config')
exit( 1 )
if 'zadara_vpsa' not in config.keys() or 'volume_export_path' not in config['zadara_vpsa'].keys():
logger.critical('missing zadara volume EXPORT PATH config')
exit( 1 )
if 'logging' not in config.keys():
config['logging'] = None
return config
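For reference, a config file that passes the validation above would look roughly like this; the key names come from the checks in `read_config`, while every value is a placeholder.

```yaml
zadara_cloud_console:
  url: https://console.example.zadara.com
zadara_vpsa:
  volume_export_path: /export/ex-pool/my-volume
logging:            # optional; passed to logging.config.dictConfig
  version: 1
```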
def get_value_from_env_or_user_input( env_var_name, msg="enter your value: " ):
value = None
if env_var_name in os.environ:
value = os.environ[ env_var_name ]
while not value:
value = str( raw_input( msg ) )
return value
def setup_zadara_console_client():
token = get_value_from_env_or_user_input(
'ZADARA_CONSOLE_ACCESS_TOKEN',
'enter your zadara CONSOLE access token: ' )
zcon = ZConsoleClient( cfg['zadara_cloud_console']['url'], token )
logger.debug('set zconsole for url(%s)' % cfg['zadara_cloud_console']['url'] )
logger.debug('zconsole object is (%s)' % zcon )
return zcon
def setup_zadara_vpsa_client( z_console_client, vpsa_id ):
token = get_value_from_env_or_user_input(
'ZADARA_VPSA_ACCESS_TOKEN',
'enter your zadara VPSA token: ' )
zvpsa = ZVpsaClient( z_console_client, vpsa_token=token, vpsa_id=vpsa_id )
logger.debug('set zvpsa for id (%d)' % vpsa_id )
logger.debug('zvpsa object is (%s)' % zvpsa )
return zvpsa
def setup_zadara_client():
zcon = setup_zadara_console_client()
vpsa_token = get_value_from_env_or_user_input(
'ZADARA_VPSA_ACCESS_TOKEN',
'enter your zadara VPSA token: ' )
os.environ['ZADARA_VPSA_ACCESS_TOKEN'] = vpsa_token
vpsa = zcon.vpsa_by_export_path( cfg['zadara_vpsa']['volume_export_path'], vpsa_token )
if vpsa is None:
logger.critical(
'vpsa with export_path(%s) not found; maybe it is hibernated?' %
cfg['zadara_vpsa']['volume_export_path'] )
exit( 1 )
logger.debug('found vpsa with export_path (%s)! it has id (%d)' % (
cfg['zadara_vpsa']['volume_export_path'], vpsa['id']) )
zcli = setup_zadara_vpsa_client( zcon, vpsa['id'] )
return zcli
def print_snapshot_list_from_volume( cli, volume ):
snapshots = {}
snap_list = cli.get_snapshots_for_cgroup( volume['cg_name'] )
if snap_list is None or 0 == len( snap_list ):
logger.critical(
'no snapshots available for volume with export_path(%s)' %
volume['nfs_export_path'] )
exit( 1 )
logger.debug('return from snapshot list has (%d) elements' % len( snap_list ) )
i = 1
print 'available snapshots for volume with export_path(%s):' % volume['nfs_export_path']
for s in snap_list:
print '%d: %s [%s]' % ( i, s['modified_at'], s['display_name'] )
snapshots[i] = s
i += 1
return snapshots
def clone_from_snapshot( cli, volume, snapshot_id ):
timestamp = time.strftime( "%y%m%d_%H%M", time.localtime() )
#clone_volume_display_name = 'clone_snap_%s_on_%s' % ( snapshot_id.replace('-', '_'), timestamp )
clone_volume_display_name = 'clone_on_%s' % timestamp
logger.debug( 'cloning volume (%s) with display_name (%s), from snapshot_id (%s)' %
( volume['cg_name'], clone_volume_display_name, snapshot_id ) )
clone = cli.clone_volume(
cgroup=volume['cg_name'],
clone_name=clone_volume_display_name,
snap_id=snapshot_id )
timeout_in_sec = 5
max_checks = 5
i = 0
while clone is None and i < max_checks:
time.sleep( timeout_in_sec )
clone = cli.get_volume_by_display_name( clone_volume_display_name )
i += 1
if i == max_checks and clone is None:
logger.critical('error cloning volume')
exit( 1 )
logger.debug( 'cloned volume object is (%s)' % clone )
return clone
def shift_export_paths( cli, source_volume, clone_volume ):
timestamp = time.strftime( "%y%m%d-%H%M", time.localtime() )
de_facto_export_path = source_volume['nfs_export_path']
inactive_export_path = '%s_%s' % ( source_volume['nfs_export_path'], timestamp )
logger.debug('preparing to shift export paths: (%s)-->(%s)-->X(%s)' %
( inactive_export_path, de_facto_export_path, clone_volume['nfs_export_path'] ) )
src_servers = cli.detach_volume_from_all_servers( source_volume['name'] )
src_volume_name = cli.update_export_name_for_volume(
source_volume['name'],
os.path.basename( inactive_export_path ) )
logger.debug('detached source_volume from all servers (%s)' % src_servers )
clone_volume_name = cli.update_export_name_for_volume(
clone_volume['name'],
os.path.basename( de_facto_export_path ) )
clone_servers = cli.attach_volume_to_servers( clone_volume['name'], src_servers )
logger.debug('attached all servers to clone volume (%s)' % clone_servers )
logger.debug('src_volume_name(%s) and clone_volume_name(%s)' % ( src_volume_name,
clone_volume_name ) )
return ( src_volume_name, clone_volume_name )
def copy_snapshot_policies( cli, source_volume, clone_volume ):
src_policies = cli.get_snapshot_policies_for_cgroup( source_volume['cg_name'] )
logger.debug('policies from src_volume (%s)' % src_policies )
for p in src_policies:
cli.attach_snapshot_policy_to_cgroup( clone_volume['cg_name'], p['name'] )
logger.debug('policies now attached to clone_volume as well...')
return src_policies
if __name__ == '__main__':
cfg = read_config( 'config.yml' )
setup_logging( cfg['logging'] )
logger = logging.getLogger( __name__ )
logger.info('STEP 1. logging configured!')
logger.info('STEP 2. setting up zadara client...')
zcli = setup_zadara_client()
    logger.info('STEP 3. finding volume to be cloned by export_path (%s)' %
        cfg['zadara_vpsa']['volume_export_path'])
    volume_to_clone_info = zcli.get_volume_by_export_path( cfg['zadara_vpsa']['volume_export_path'] )
    if volume_to_clone_info is None:
        logger.critical('volume with export_path(%s) not found' % cfg['zadara_vpsa']['volume_export_path'])
        exit( 1 )
logger.info('STEP 4. volume found (%s); printing snapshots available',
volume_to_clone_info['display_name'] )
snapshots = print_snapshot_list_from_volume( zcli, volume_to_clone_info )
s_index = None
    while s_index not in snapshots:
s_index = int( raw_input('which snapshot to clone? [1..%d]: ' % len( snapshots ) ) )
logger.info('STEP 5. snapshot picked (%s), cloning...' % snapshots[ s_index
]['display_name'] )
clone_info = clone_from_snapshot(
zcli,
volume_to_clone_info,
snapshots[ s_index ]['name'] )
logger.info('STEP 6. cloned as volume (%s); changing export_paths...' %
clone_info['display_name'] )
( src_path, clone_path ) = shift_export_paths(
zcli,
volume_to_clone_info,
clone_info )
logger.info('STEP 7. attaching snapshot policies...')
p_list = copy_snapshot_policies(
zcli,
volume_to_clone_info,
clone_info )
logger.info('STEP 8. remount shared storage in mh nodes and we are done.')
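The clone step in `clone_from_snapshot` polls the VPSA up to `max_checks` times with a fixed sleep between status checks. That retry pattern can be sketched generically; this is a minimal illustration, and `poll_until` with its injected `sleep` is not part of the Zadara client API:

```python
import time

def poll_until(fetch, max_checks=5, interval_sec=5, sleep=time.sleep):
    """Call fetch() up to max_checks times; return the first non-None result."""
    for _ in range(max_checks):
        result = fetch()
        if result is not None:
            return result
        sleep(interval_sec)  # wait before the next status check
    return None

# Simulate a clone that only becomes visible on the third status check.
attempts = iter([None, None, 'clone_on_170705_1200'])
clone = poll_until(lambda: next(attempts), interval_sec=0)
# clone == 'clone_on_170705_1200'
```

Injecting `sleep` keeps the retry logic testable without real delays.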
# === tools/find-broken-edition-parm.py (repo: ctheune/assembly-cms, license: ZPL-2.1) ===
# Copyright (c) 2010 gocept gmbh & co. kg
# See also LICENSE.txt
import zope.traversing.api
stack = [root['summer10']]
while stack:
page = stack.pop()
for edition in page.editions:
for tag in edition.parameters:
            if ':' not in tag:
print zope.traversing.api.getPath(edition)
stack.extend(page.subpages)
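The script walks the page tree iteratively with an explicit stack rather than recursion. The same traversal shape, sketched with a toy `Page` class (a hypothetical stand-in for the CMS objects, whose editions actually carry parameter tags):

```python
class Page(object):
    def __init__(self, name, tags=(), subpages=()):
        self.name = name
        self.tags = list(tags)          # stands in for edition parameter tags
        self.subpages = list(subpages)

def find_broken(root):
    """Depth-first walk: collect names of pages with a tag missing the ':' separator."""
    broken, stack = [], [root]
    while stack:
        page = stack.pop()
        if any(':' not in tag for tag in page.tags):
            broken.append(page.name)
        stack.extend(page.subpages)
    return broken

child = Page('summer10/news', tags=['lang:en', 'oops'])
root = Page('summer10', tags=['lang:de'], subpages=[child])
# find_broken(root) == ['summer10/news']
```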
# === jqi/completion.py (repo: jan-g/jqi, licenses: CC-BY-3.0, Apache-2.0) ===
from .lexer import lex
from .parser import top_level
from .completer import *
from .eval import make_env, splice
def completer(s, offset, start=top_level):
evaluator = start.parse(lex(s, offset))
def complete(stream="", env=None):
if env is None:
env = {}
env = make_env().update(env) # Install standard bindings
try:
_ = evaluator(splice(env, stream))
return []
except Completion as c:
return c.completions, c.pos if c.pos is not None else (offset, offset)
return complete
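`completer` returns a closure that runs the parsed evaluator and converts a `Completion` exception into a candidate list. The control flow can be sketched without the jqi internals; the `Completion` class below is a toy, and the real jqi types differ:

```python
class Completion(Exception):
    def __init__(self, completions, pos=None):
        self.completions = completions
        self.pos = pos

def make_completer(evaluator, offset):
    def complete(stream=""):
        try:
            evaluator(stream)
            return []  # evaluation finished normally: nothing left to complete
        except Completion as c:
            # fall back to the cursor offset when the exception carries no position
            return c.completions, c.pos if c.pos is not None else (offset, offset)
    return complete

def needs_completion(stream):
    raise Completion(['.foo', '.bar'])

complete = make_completer(needs_completion, offset=3)
# complete() == (['.foo', '.bar'], (3, 3))
```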
# === server/shserver/GetCategory.py (repo: AsherYang/ThreeLine, license: Apache-2.0) ===
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Author: AsherYang
Email: 1181830457@qq.com
Date: 2017/7/24
Desc: get weidian token
@see https://wiki.open.weidian.com/api#94
url = https://api.vdian.com/api?param={"showNoCate":"0"}&public={"method":"weidian.cate.get.list","access_token":"9882ff6e635aac4740646cf93f2389320007487713","version":"1.0"}
必须为get 请求
"""
import json
import time
import DbUtil
import OpenRequest
import TokenConstant
from Category import Category
from ShJsonDecode import categoryDecode
import GetToken
"""
从微店获取商品分类
url = https://api.vdian.com/api?param={"showNoCate":"0"}&public={"method":"weidian.cate.get.list","access_token":"9882ff6e635aac4740646cf93f2389320007487713","version":"1.0"}
"""
def getCategoryFromNet(showNoCate="0", version="1.0", path="api"):
header = {
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36',
}
params = {"showNoCate": showNoCate}
# GetToken.doGetToken()
pub = {"method": "weidian.cate.get.list", "access_token": GetToken.doGetToken(),
"version": version,
"lang": "python",
"sdkversion": TokenConstant.version}
url = "%s%s?param=%s&public=%s" % (TokenConstant.domain, path, params, pub)
body = OpenRequest.http_get(url, header=header)
categoryList = json.loads(body, cls=categoryDecode)
# print "body = " + body
print len(categoryList)
# for category in categoryList:
# print category.cate_id
# print " , name = " + category.cate_name
# print " , description = " + category.description
# print category.update_time
return categoryList
"""
从数据库获取商品分类
"""
def getCategoryFromDb():
print '--- getCategoryFromDb start ---'
query = "select * from sh_category"
results = DbUtil.query(query)
print results
if results is None:
return None
categoryList = []
for row in results:
category = Category()
row_id = row[0]
cate_id = row[1]
cate_name = row[2]
parent_id = row[3]
parent_cate_name = row[4]
sort_num = row[5]
cate_item_num = row[6]
description = row[7]
listUrl = row[8]
shopName = row[9]
shopLogo = row[10]
updateTime = row[11]
category.cate_id = cate_id
category.cate_name = cate_name
category.parent_id = parent_id
category.parent_cate_name = parent_cate_name
category.sort_num = sort_num
category.cate_item_num = cate_item_num
category.description = description
category.listUrl = listUrl
category.shopName = shopName
category.shopLogo = shopLogo
category.update_time = updateTime
categoryList.append(category)
# print "row_id = %s, access_token = %s, expire_in = %s, update_time = %s " %(row_id, access_token, expire_in, update_time)
return categoryList
"""
保存商品分类进数据库
"""
def saveCategoryToDb(categoryList=None):
print '--- saveCategoryToDb start ---'
if categoryList is None or len(categoryList) == 0:
print "categoryList is None could not save to db."
return
else:
insert = 'insert into sh_category (cate_id, cate_name, parent_id, parent_cate_name, sort_num, cate_item_num,' \
' description, listUrl, shopName, shopLogo, update_time) '
sql_select_str = ''
currentTime = int(time.time())
for category in categoryList:
sql_select_str += "SELECT '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s' union " \
% (category.cate_id, category.cate_name, category.parent_id, category.parent_cate_name,
category.sort_num, category.cate_item_num, category.description, category.listUrl,
category.shopName, category.shopLogo, currentTime)
        # Concatenate the SELECT ... UNION fragments onto the INSERT statement
insert = insert + sql_select_str
        # Trim the trailing 'union ' (last six characters)
insert = insert[:-6]
        # Saving categories requires clearing the existing rows first
delete = 'delete from sh_category;'
DbUtil.delete(delete)
DbUtil.insert(insert)
"""
category 更新策略:
每天只更新一次
返回值:当前有效的 category list
"""
def doGetCategory():
currentTime = int(time.time())
categoryList = getCategoryFromDb()
    # Update interval: 1 day = 24 * 60 * 60 seconds
updateInterval = 24 * 60 * 60
print currentTime
if categoryList is None:
print "categoryList is None 正在更新"
categoryNetList = getCategoryFromNet()
saveCategoryToDb(categoryNetList)
return categoryNetList
    lastTime = int(categoryList[0].update_time)
if (currentTime - lastTime < updateInterval):
print "从数据库中拿到 %d 条 category 数据" %(len(categoryList))
return categoryList
else:
print "categoryList is 日期太久了 正在更新"
categoryNetList = getCategoryFromNet()
saveCategoryToDb(categoryNetList)
return categoryNetList
if __name__ == '__main__':
doGetCategory()
# categoryNetList = getCategoryFromNet()
    # saveCategoryToDb(categoryNetList)
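`doGetCategory` implements a refresh-at-most-once-per-day cache: serve the stored rows while their timestamp is fresh, otherwise refetch and persist. The policy, separated from the Weidian/MySQL specifics (all names in this sketch are illustrative):

```python
import time

ONE_DAY_SEC = 24 * 60 * 60

def get_with_ttl(load_cached, fetch_fresh, save, ttl_sec=ONE_DAY_SEC, now=time.time):
    cached = load_cached()
    if cached is not None and now() - cached['updated_at'] < ttl_sec:
        return cached          # still fresh: skip the network round trip
    fresh_value = fetch_fresh()
    save(fresh_value)
    return fresh_value

store = {'row': {'updated_at': 0, 'items': ['stale']}}   # very old cache entry
fresh = {'updated_at': int(time.time()), 'items': ['new']}
result = get_with_ttl(lambda: store['row'], lambda: fresh,
                      lambda v: store.update(row=v))
# the stale entry is replaced and the fresh record returned
```

Injecting `now` makes the staleness rule easy to unit-test without sleeping.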
# === resources/code/train/Python/cp.py (repo: searene/PLDetector, license: MIT) ===
# This file is part of the Hotwire Shell project API.
# Copyright (C) 2007 Colin Walters <walters@verbum.org>
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE X CONSORTIUM BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
# THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import os, sys, shutil, stat
import hotwire
import hotwire.fs
from hotwire.fs import FilePath
from hotwire.builtin import Builtin, BuiltinRegistry, MultiArgSpec
from hotwire.builtins.fileop import FileOpBuiltin
if '_' not in globals(): globals()['_'] = lambda x: x
class CpBuiltin(FileOpBuiltin):
__doc__ = _("""Copy sources to destination.""")
def __init__(self):
super(CpBuiltin, self).__init__('cp', aliases=['copy'],
hasstatus=True,
argspec=MultiArgSpec('files', min=2))
def execute(self, context, args):
assert len(args) > 0
target = FilePath(args[-1], context.cwd)
try:
target_is_dir = stat.S_ISDIR(os.stat(target).st_mode)
target_exists = True
        except OSError:
target_is_dir = False
target_exists = False
sources = args[:-1]
assert len(sources) > 0
if (not target_is_dir) and len(sources) > 1:
raise ValueError(_("Can't copy multiple items to non-directory"))
sources_total = len(sources)
self._status_notify(context, sources_total, 0)
if target_is_dir:
for i,source in enumerate(sources):
hotwire.fs.copy_file_or_dir(FilePath(source, context.cwd), target, True)
self._status_notify(context, sources_total, i+1)
else:
hotwire.fs.copy_file_or_dir(FilePath(sources[0], context.cwd), target, False)
self._status_notify(context, sources_total, 1)
return []
BuiltinRegistry.getInstance().register_hotwire(CpBuiltin())
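The target check at the top of `execute` encodes cp's usual contract: more than one source requires an existing directory destination. Extracted as a standalone Python 3 helper (the function name and message are ours for illustration, not Hotwire's API):

```python
import os
import stat
import tempfile

def validate_copy_target(sources, target):
    """Return whether target is a directory; reject multi-source copies otherwise."""
    try:
        target_is_dir = stat.S_ISDIR(os.stat(target).st_mode)
    except OSError:
        target_is_dir = False  # missing target counts as a non-directory
    if not target_is_dir and len(sources) > 1:
        raise ValueError("Can't copy multiple items to non-directory")
    return target_is_dir

with tempfile.TemporaryDirectory() as d:
    assert validate_copy_target(['a', 'b'], d) is True        # directory target: fine
assert validate_copy_target(['a'], '/no/such/path') is False  # single source: allowed
```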
# === kf_lib_data_ingest/templates/my_ingest_package/ingest_package_config.py (repo: kids-first/kf-lib-data-ingest, license: Apache-2.0) ===
""" Ingest Package Config """
# The list of entities that will be loaded into the target service. These
# should be class_name values of your target API config's target entity
# classes.
target_service_entities = [
"family",
"participant",
"diagnosis",
"phenotype",
"outcome",
"biospecimen",
"read_group",
"sequencing_experiment",
"genomic_file",
"biospecimen_genomic_file",
"sequencing_experiment_genomic_file",
"read_group_genomic_file",
]
# All paths are relative to the directory this file is in
extract_config_dir = "extract_configs"
transform_function_path = "transform_module.py"
# TODO - Replace this with your own unique identifier for the project. This
# will become CONCEPT.PROJECT.ID during the Load stage.
project = "SD_ME0WME0W"
| 27.413793 | 75 | 0.732075 | 102 | 795 | 5.5 | 0.676471 | 0.078431 | 0.096257 | 0.110517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003072 | 0.181132 | 795 | 28 | 76 | 28.392857 | 0.858679 | 0.450314 | 0 | 0 | 0 | 0 | 0.522353 | 0.24 | 0 | 0 | 0 | 0.035714 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# === src/dbmanage/database_from_csv.py (repo: resolutedreamer/NESLDashboard, license: MIT) ===
#!/usr/bin/env python
"""
Written by Anthony Nguyen 2015/06/20
"""
import sqlite3
import sys
#import smap_analytics as smap_analytics
open_this_file = None
if len(sys.argv) == 2:
open_this_file = sys.argv[1]
else:
open_this_file = "config/smap_2015.csv"
with sqlite3.connect('dashboard.db') as conn:
c = conn.cursor()
# Remove any existing house_layout table
command = '''DROP TABLE IF EXISTS house_layout'''
print command
c.execute(command)
# Create a new house_layout table
command = '''CREATE TABLE house_layout
(path VARCHAR(100) PRIMARY KEY, uuid VARCHAR(36) NOT NULL UNIQUE,
heat_map_enable BOOLEAN, x_coord INT NOT NULL, y_coord INT NOT NULL,
room VARCHAR(16), description VARCHAR(255), tab_type VARCHAR(16),
channel_units VARCHAR(16))'''
print command
c.execute(command)
#Parse the file of UUIDs and add each as a row
with open(open_this_file) as f:
for line in f.readlines():
row = line.split(",")
#print row
uuid = row[0]
path = row[1]
heat_map_enable = row[2]
x_coord = int(row[3])
y_coord = int(row[4])
room = row[5]
description = row[6]
tab_type = row[7]
channel_units = row[8]
# Insert a row of data
command = '''INSERT INTO house_layout VALUES ('%s','%s','%s','%d','%d','%s','%s','%s',
'%s')'''%(path, uuid, heat_map_enable, x_coord, y_coord, room, description, tab_type, channel_units)
print command
c.execute(command)
# Print the whole table at the end to make sure it works
c.execute("SELECT * FROM house_layout")
print c.fetchall()
# Save (commit) the changes
    conn.commit()
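The script above splices values into its INSERT with %-formatting, which breaks on quotes in the data and is injection-prone. A safer equivalent uses `executemany` with `?` placeholders (a sketch with a reduced column set, not the full house_layout schema):

```python
import sqlite3

rows = [
    ('uuid-1', '/house/livingroom/temp', 'livingroom'),
    ('uuid-2', '/house/kitchen/temp', 'kitchen'),
]

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE house_layout (uuid TEXT PRIMARY KEY, path TEXT, room TEXT)')
# the sqlite3 driver handles quoting/escaping for each placeholder
c.executemany('INSERT INTO house_layout VALUES (?, ?, ?)', rows)
conn.commit()
c.execute('SELECT room FROM house_layout ORDER BY uuid')
rooms = [r[0] for r in c.fetchall()]
# rooms == ['livingroom', 'kitchen']
```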
# === RevCompLibrary.py (repo: sayloren/fennel, license: Apache-2.0) ===
"""
Script to perform RC sorting
Wren Saylor
July 5 2017
Copyright 2017 Harvard University, Wu Lab
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import argparse
import pandas as pd
from FangsLibrary import run_sliding_window_for_each_nucleotide_string
from ElementLibrary import get_bedtools_features
from MethylationLibrary import collect_methylation_data_by_element
import GlobalVariables
# Methylation RCsorting
def sort_methylation_by_directionality(negStr,posStr):
posMeth = collect_methylation_data_by_element(posStr)
negMeth = collect_methylation_data_by_element(negStr)
# Zip reversed range to make a dictionary for replacing the location of the neg methylation
originalRange = range(0,GlobalVariables.num)
reverseRange = originalRange[::-1]
rangeDict = dict(zip(originalRange,reverseRange))
# Zip reverse complement sequence for replacing the nucleotides for neg methylation
seqDict = {'A':'T','T':'A','C':'G','G':'C','N':'N'}
# Convert neg Meth df
negMeth['methLocNew'] = negMeth.methLoc.map(rangeDict)
negMeth['CytosineNew'] = negMeth.Cytosine.map(seqDict)
negMeth['ContextNew'] = negMeth.Context.map(seqDict)
negMethNew = negMeth[['id','methLocNew','methPer','methCov','methFreq','CytosineNew','ContextNew','tissue']]
negMethNew.columns = ['id','methLoc','methPer','methCov','methFreq','Cytosine','Context','tissue']
# Concat pos and revised neg meth dfs
frames = [posMeth,negMethNew]
catMerge = pd.concat(frames)
# Update Frequencey count column
catMerge['methFreqNew'] = catMerge.groupby(['methLoc','tissue','Cytosine'])['methLoc'].transform('count')
outMerge = catMerge[['id','methLoc','methPer','methCov','methFreqNew','Cytosine','Context','tissue']]
outMerge.columns = ['id','methLoc','methPer','methCov','methFreq','Cytosine','Context','tissue']
return outMerge
# Sliding window RCsorting
def sort_sliding_window_by_directionality(negStr,posStr):
negDF, negNames = run_sliding_window_for_each_nucleotide_string(negStr['reverseComplement'],negStr['id'])
posDF, posNames = run_sliding_window_for_each_nucleotide_string(posStr['combineString'],posStr['id'])
compWindow = []
for x, y in zip(negDF, posDF):
tempCat = pd.concat([x,y],axis=1)
tempGroup = tempCat.groupby(tempCat.columns,axis=1).sum()
compWindow.append(tempGroup)
return compWindow, negNames
# Separate on plus and minus orientation, RCsort and return methylation and sliding window computations
def sort_elements_by_directionality(directionFeatures):
negStr = (directionFeatures[(directionFeatures['compareBoundaries'] == '-')])
posStr = (directionFeatures[(directionFeatures['compareBoundaries'] == '+')])
compWindow, compNames = sort_sliding_window_by_directionality(negStr,posStr)
if any(x in GlobalVariables.graphs for x in ['methylation','cluster','methextend']):
groupMeth = sort_methylation_by_directionality(negStr,posStr)
else:
groupMeth = None
return groupMeth,compWindow,compNames
def main(directionFeatures):
groupMeth,compWindow,compNames = sort_elements_by_directionality(directionFeatures)
print 'Completed reverse complement sorting for {0} items, with {1} bin sorting'.format(len(directionFeatures.index),GlobalVariables.binDir)
return groupMeth,compWindow,compNames
if __name__ == "__main__":
main()
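The RC sort hinges on two small mappings: a reversed-coordinate dictionary for minus-strand methylation positions and a base-complement dictionary. Both in isolation (a minimal sketch, independent of the pandas pipeline):

```python
COMPLEMENT = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C', 'N': 'N'}

def rc_position(loc, length):
    """Map a 0-based minus-strand position into plus-strand coordinates."""
    return length - 1 - loc

def reverse_complement(seq):
    return ''.join(COMPLEMENT[base] for base in reversed(seq))

# rc_position(0, 100) == 99; reverse_complement('ATCG') == 'CGAT'
```

Reverse complement is an involution, so applying it twice returns the original sequence.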