hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
35306592e94033dce0c58ab7e8eff39511c6e4c8 | 4,401 | py | Python | ripper/constants.py | alexmon1989/russia_ddos | 6bee2718a4d9fb9a495ffe7063a3dfc68bdafa0d | [
"MIT"
] | 199 | 2022-02-28T23:28:02.000Z | 2022-03-30T18:00:45.000Z | ripper/constants.py | alexmon1989/russia_ddos | 6bee2718a4d9fb9a495ffe7063a3dfc68bdafa0d | [
"MIT"
] | 14 | 2022-03-05T21:48:34.000Z | 2022-03-18T12:28:36.000Z | ripper/constants.py | alexmon1989/russia_ddos | 6bee2718a4d9fb9a495ffe7063a3dfc68bdafa0d | [
"MIT"
] | 40 | 2022-03-02T00:19:31.000Z | 2022-03-28T01:48:09.000Z | from _version import __version__
###############################################
# Constants | Logo and help messages
###############################################
VERSION = f'v{__version__}'
USAGE = 'Usage: %prog [options] arg'
EPILOG = 'Example: dripper -t 100 -m tcp-flood -s tcp://192.168.0.1:80'
GITHUB_OWNER = 'alexmon1989'
GITHUB_REPO = 'russia_ddos'
GITHUB_ID = f'{GITHUB_OWNER}/{GITHUB_REPO}'
GITHUB_URL = f'https://github.com/{GITHUB_ID}'
LOGO_COLOR = f'''[deep_sky_blue1]
██████╗ ██████═╗██╗██████╗ ██████╗ ███████╗██████═╗
██╔══██╗██╔══██║██║██╔══██╗██╔══██╗██╔════╝██╔══██║
██║ ██║██████╔╝██║██████╔╝██████╔╝█████╗ ██████╔╝[bright_yellow]
██║ ██║██╔══██╗██║██╔═══╝ ██╔═══╝ ██╔══╝ ██╔══██╗
██████╔╝██║ ██║██║██║ ██║ ███████╗██║ ██║
╚═════╝ ╚═╝ ╚═╝╚═╝╚═╝ ╚═╝ ╚══════╝╚═╝ ╚═╝
[green]{VERSION}
[grey53]
It is the end user's responsibility to obey all applicable laws.
It is just a server testing script, and your IP is visible.
Please, make sure you are ANONYMOUS!
[u blue link={GITHUB_URL}]{GITHUB_URL}[/]
'''
LOGO_NOCOLOR = f'''
██████╗ ██████═╗██╗██████╗ ██████╗ ███████╗██████═╗
██╔══██╗██╔══██║██║██╔══██╗██╔══██╗██╔════╝██╔══██║
██║ ██║██████╔╝██║██████╔╝██████╔╝█████╗ ██████╔╝
██║ ██║██╔══██╗██║██╔═══╝ ██╔═══╝ ██╔══╝ ██╔══██╗
██████╔╝██║ ██║██║██║ ██║ ███████╗██║ ██║
╚═════╝ ╚═╝ ╚═╝╚═╝╚═╝ ╚═╝ ╚══════╝╚═╝ ╚═╝
{VERSION}
It is the end user's responsibility to obey all applicable laws.
It is just a server testing script, and your IP is visible.
Please, make sure you are ANONYMOUS!
{GITHUB_URL}
'''
BANNER = '\n\n[r][deep_sky_blue1]#StandWith[bright_yellow]Ukraine[/]'
CONTROL_CAPTION = f'[grey53]Press [green]CTRL+C[grey53] to interrupt process.{BANNER}\n'
DEFAULT_CURRENT_IP_VALUE = '...detecting'
HOST_IN_PROGRESS_STATUS = 'HOST_IN_PROGRESS'
HOST_FAILED_STATUS = 'HOST_FAILED'
HOST_SUCCESS_STATUS = 'HOST_SUCCESS'
# ==== Badge templates ====
BADGE_INFO = '[bold gray0 on cyan] {message} [/]'
BADGE_WARN = '[bold gray0 on orange1] {message} [/]'
BADGE_ERROR = '[bold white on red1] {message} [/]'
# ==== Formats and Constants
DATE_TIME_FULL = '%Y-%m-%d %H:%M:%S'
DATE_TIME_SHORT = '%H:%M:%S'
# ==== Defaults for Input ARGS ===
ARGS_DEFAULT_PORT = 80
ARGS_DEFAULT_THREADS_COUNT = 'auto'
ARGS_DEFAULT_HEALTH_CHECK = 1
ARGS_DEFAULT_HTTP_ATTACK_METHOD = 'GET'
ARGS_DEFAULT_HTTP_REQUEST_PATH = '/'
ARGS_DEFAULT_SOCK_TIMEOUT = 1
ARGS_DEFAULT_PROXY_TYPE = 'socks5'
# ==== Defaults ====
GEOIP_NOT_DEFINED = '--'
CONNECT_TO_HOST_MAX_RETRY = 5
MIN_SCREEN_WIDTH = 100
MIN_UPDATE_HOST_STATUSES_TIMEOUT = 120
SUCCESSFUL_CONNECTIONS_CHECK_PERIOD_SEC = 300
NO_SUCCESSFUL_CONNECTIONS_DIE_PERIOD_SEC = 300
HTTP_STATUS_CODE_CHECK_PERIOD_SEC = 10
UPDATE_CURRENT_IP_CHECK_PERIOD_SEC = 60
TARGET_STATS_AUTO_PAGINATION_INTERVAL_SECONDS = 5
MIN_ALIVE_AVAILABILITY_PERCENTAGE = 50
DEFAULT_LOG_LEVEL = 'warn'
DEFAULT_LOG_SIZE = 5
MAX_AUTOSCALE_CPU_PERCENTAGE = 80
MAX_FAILED_FAILED_AUTOSCALE_TESTS = 5
DEFAULT_AUTOSCALE_TEST_SECONDS = 0.5
DEFAULT_MIN_RND_PACKET_LEN = 1
DEFAULT_MAX_RND_PACKET_LEN = 1024
# ==== Sockets ====
PROXY_MAX_FAILURE_RATIO = 0.8
PROXY_MIN_VALIDATION_REQUESTS = 8
CLOUDFLARE_TAGS = [
'cloudflare',
'cf-spinner-please-wait',
'we are checking your browser...',
'Cloudflare Ray ID'
]
# ==== Error messages ====
GETTING_SERVER_IP_ERR_MSG = 'Can\'t get server IP. Packet sending failed. Check your VPN.'
NO_SUCCESSFUL_CONNECTIONS_ERR_MSG = 'There have been no successful connections for more than 2 min. ' \
                                    'Check your VPN or change the host/port. ' \
                                    'If you are using a proxy list, proxy validation might be in progress.'
YOUR_IP_WAS_CHANGED_ERR_MSG = 'Your IP was changed!!! Check VPN connection.'
CANNOT_SEND_REQUEST_ERR_MSG = 'Cannot send Request or Packet. Host does not respond.'
NO_MORE_PROXIES_ERR_MSG = 'There are no more operational proxies to work with the host.'
MSG_YOUR_IP_WAS_CHANGED = 'IP changed'
MSG_CHECK_VPN_CONNECTION = 'Check VPN'
MSG_DONT_USE_VPN_WITH_PROXY = 'Do not use VPN with proxy'
NO_CONNECTIONS_ERR_MSG = f"There were no successful connections for more " \
f"than {NO_SUCCESSFUL_CONNECTIONS_DIE_PERIOD_SEC // 60} minutes. " \
f"Your attack is ineffective."
TARGET_DEAD_ERR_MSG = "[orange1]Target should be dead!"
NO_MORE_TARGETS_LEFT_ERR_MSG = 'No more valid targets left'
| 36.07377 | 93 | 0.623949 | 605 | 4,401 | 5.099174 | 0.423141 | 0.015559 | 0.037277 | 0.015559 | 0.301459 | 0.278768 | 0.256078 | 0.256078 | 0.161426 | 0.161426 | 0 | 0.019221 | 0.14883 | 4,401 | 121 | 94 | 36.371901 | 0.663374 | 0.041354 | 0 | 0.193548 | 0 | 0.010753 | 0.561331 | 0.10736 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.010753 | 0 | 0.010753 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
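As a usage illustration for the badge templates above, here is a minimal sketch; it assumes the `rich` library (which the markup targets) and made-up message texts:

```python
from rich.console import Console

console = Console()
# The badge constants are plain format strings carrying rich markup.
console.print(BADGE_INFO.format(message='GeoIP database loaded'))
console.print(BADGE_ERROR.format(message='Connection lost'))
```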
3534938252381b99f7a835d8be5dc85221768e9e | 5,798 | py | Python | etl/common/brapi.py | bilalelhoudaigui/plant-brapi-etl-data-lookup-gnpis | 973dc444eac6d1cc80c020dd8b9a4656f70eeafb | [
"BSD-3-Clause"
] | 3 | 2018-06-04T09:14:55.000Z | 2018-10-25T14:32:03.000Z | etl/common/brapi.py | bilalelhoudaigui/plant-brapi-etl-data-lookup-gnpis | 973dc444eac6d1cc80c020dd8b9a4656f70eeafb | [
"BSD-3-Clause"
] | 18 | 2020-06-04T07:08:17.000Z | 2022-02-02T17:02:17.000Z | etl/common/brapi.py | bilalelhoudaigui/plant-brapi-etl-data-lookup-gnpis | 973dc444eac6d1cc80c020dd8b9a4656f70eeafb | [
"BSD-3-Clause"
] | 4 | 2019-04-18T12:53:19.000Z | 2019-11-22T08:53:19.000Z | import itertools
import json
import re
from functools import partial
from itertools import chain
from typing import Tuple, List
import requests
from etl.common.utils import join_url_path, remove_falsey, replace_template, remove_none, is_collection
from pyhashxx import hashxx
class BreedingAPIIterator:
"""
Iterate through BrAPI result pages.
If no pagination is required, the first and only page will contain the one BrAPI object.
"""
def __init__(self, brapi_url, call, logger=None):
self.page = 0
self.page_size = None
self.is_paginated = 'page-size' in call
if self.is_paginated:
self.page_size = call['page-size']
self.total_pages = 1
self.brapi_url = brapi_url
self.call = call.copy()
self.logger = logger
# Py3-style iterator interface
def __next__(self):
return self.next()
def __iter__(self):
return self
def next(self):
if self.page >= self.total_pages:
raise StopIteration
return self.__fetch_page()
def __fetch_page(self):
url = join_url_path(self.brapi_url, self.call['path'])
headers = {'Accept': 'application/json, application/ld+json'}
params = {}
if self.is_paginated:
params = {'page': self.page, 'pageSize': self.page_size}
if 'param' in self.call:
params.update(self.call['param'])
params_json = json.dumps(params)
if self.logger:
self.logger.debug('Fetching {} {} {}'.format(self.call['method'], url.encode('utf-8'), params_json))
response = None
if self.call['method'] == 'GET':
response = requests.get(url, params=params, headers=headers, verify=False)
elif self.call['method'] == 'POST':
headers['Content-type'] = 'application/json'
response = requests.post(url, data=params_json, headers=headers, verify=False)
if response.status_code != 200:
try:
message = response.json()['metadata']
except ValueError:
message = str(response.content)
self.total_pages = -1
raise BrapiServerError(message)
content = response.json()
if self.is_paginated:
self.total_pages = max(content['metadata']['pagination']['totalPages'], 1)
self.page += 1
else:
self.total_pages = -1
if self.is_paginated:
return content['result']['data']
else:
return [content['result']]
@staticmethod
def fetch_all(brapi_url, call, logger=None):
"""Iterate through all BrAPI objects for given call (does pagination automatically if needed)"""
return chain.from_iterable(BreedingAPIIterator(brapi_url, call, logger))
class BrapiServerError(Exception):
pass
def get_identifier(entity_name, data):
"""
Get identifier from BrAPI object or generate one from hashed string json representation
"""
entity_id = entity_name + 'DbId'
data_id = data.get(entity_id)
if not data_id:
simplified_object = remove_falsey(data, predicate=lambda x: x and not isinstance(x, set))
json_rep = json.dumps(simplified_object, sort_keys=True)
data_id = str(hashxx(json_rep.encode()))
data[entity_id] = str(data_id)
return data_id
def get_call_id(call):
return call['method'] + " " + call["path"]
def get_implemented_calls(source, logger):
implemented_calls = set()
calls_call = {'method': 'GET', 'path': '/calls', 'page-size': 100}
for call in BreedingAPIIterator.fetch_all(source['brapi:endpointUrl'], calls_call, logger):
for method in call["methods"]:
implemented_calls.add(method + " " + call["call"].replace('/brapi/v1/', '').replace(' /', ''))
return implemented_calls
def get_implemented_call(source, call_group, context=None):
calls = call_group['call'].copy()
if not isinstance(calls, list):
calls = [calls]
for call in calls:
call_id = get_call_id(call)
if call_id in source['implemented-calls']:
call = call.copy()
if context:
call['path'] = replace_template(call['path'], context)
if 'param' in call:
call['param'] = call['param'].copy()
for param_name in call['param']:
call['param'][param_name] = replace_template(call['param'][param_name], context)
return call
if call_group.get('required'):
calls_description = "\n".join(map(get_call_id, calls))
raise NotImplementedError('{} does not implement required call in list:\n{}'
.format(source['schema:identifier'], calls_description))
return None
def get_entity_links(data: dict, id_field: str) -> List[Tuple[str, List[str], str]]:
"""
List links in a nested BrAPI object.
Can list DbIds, URIs, or PUIs using the field pattern "{entity}(DbId|PUI|URI)s?"
"""
def get_entry_link(path, entry):
key, value = entry
new_path = [*path, key]
if isinstance(key, str):
match = re.search(f"^(\\w+){id_field}(s?)$", key)
if match and value:
entity_name, plural = match.groups()
return [(entity_name, new_path, value)]
return get_links(new_path, value)
def get_links(path, data):
if is_collection(data):
if isinstance(data, dict):
entries = data.items()
else:
entries = enumerate(data)
return itertools.chain.from_iterable(remove_none(map(partial(get_entry_link, path), entries)))
return list(get_links([], data))
| 33.131429 | 112 | 0.611073 | 704 | 5,798 | 4.869318 | 0.259943 | 0.016336 | 0.021879 | 0.019837 | 0.025088 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003549 | 0.271128 | 5,798 | 174 | 113 | 33.321839 | 0.80762 | 0.077613 | 0 | 0.073171 | 0 | 0 | 0.085839 | 0.00416 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105691 | false | 0.00813 | 0.073171 | 0.02439 | 0.317073 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
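A minimal sketch of how `BreedingAPIIterator.fetch_all` might be driven; the endpoint URL, the `/studies` path, and the page size are assumptions for illustration, not values from this module:

```python
# Hypothetical BrAPI endpoint and call descriptor.
studies_call = {'method': 'GET', 'path': '/studies', 'page-size': 100}
for study in BreedingAPIIterator.fetch_all('https://example.org/brapi/v1', studies_call):
    print(study.get('studyDbId'))
```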
353701ba094a9027ecc3b83b7a83991a0d038851 | 4,215 | py | Python | clocker/viewer.py | Brokdar/Clocker | 95952019c12ea4157ae4feda27fe8ae3413a9819 | [
"MIT"
] | 1 | 2022-02-11T22:40:05.000Z | 2022-02-11T22:40:05.000Z | clocker/viewer.py | Brokdar/Clocker | 95952019c12ea4157ae4feda27fe8ae3413a9819 | [
"MIT"
] | null | null | null | clocker/viewer.py | Brokdar/Clocker | 95952019c12ea4157ae4feda27fe8ae3413a9819 | [
"MIT"
] | null | null | null | """This module is responsible for a visual representation of the model data"""
from calendar import Calendar
from rich.console import Console
from rich.style import Style
from rich.table import Table
from clocker import converter
from clocker.model import AbsenceType, WorkDay
from clocker.statistics import StatisticHandler
class Viewer:
"""Viewer class for displaying a single WorkDay or a set of WorkDays"""
def __init__(self, statistic: StatisticHandler):
self.__stats = statistic
self.__console = Console()
def display(self, day: WorkDay):
"""Displays a specific WorkDay
Args:
day (WorkDay): Workday to be displayed
"""
table = _table(f'Working Day - {converter.date_to_str(day.date)}')
table.add_row(*self.__convert(day))
self.__console.print(table)
    def display_set(self, title: str, data: list[WorkDay]):
        """Displays a set of WorkDay records under a common title.
        Args:
            title (str): title of the rendered table
            data (list[WorkDay]): records to be displayed, sorted by date
        """
table = _table(title)
data.sort(key=lambda o: o.date)
for day in data:
table.add_row(*self.__convert(day))
self.__console.print(table)
def display_month(self, month: int, year: int, data: list[WorkDay]):
"""Displays all workday records of the given month and year.
Args:
month (int): month of the records to display
year (int): years of the records to display
data (list[WorkDay]): all WorkDay records of the month
"""
table = _table(f'Working Days - {month:02}/{year}')
data.sort(key=lambda o: o.date)
cal = Calendar()
idx = 0
for day in cal.itermonthdates(year, month):
if day.month != month or day.year != year:
continue
style = Style()
if day.weekday() >= 5:
style += Style(color='grey42')
if idx < len(data) and day == data[idx].date:
if data[idx].absence == AbsenceType.HOLIDAY:
table.add_row(*self.__convert(data[idx]), style=Style(color='cyan'))
else:
table.add_row(*self.__convert(data[idx]), style=style)
idx += 1
else:
table.add_row(converter.date_to_str(day), style=style)
self.__console.print(table)
def display_statistics(self, data: list[WorkDay]):
"""Displays a statistic object
Args:
data (list[WorkDay]): data set to be analyzed
"""
statistics = self.__stats.collect(data)
self.__console.print(' | '.join([
f'Vacation {statistics.count.vacation}/{statistics.accessable_vacation_days} ({statistics.accessable_vacation_days - statistics.count.vacation})', # pylint: disable = line-too-long
f'Flexday {statistics.count.flex}',
f'Sickness {statistics.count.sick}',
f'Flextime {self.__colorize(converter.delta_to_str(statistics.flextime))}'
]))
def __convert(self, workday: WorkDay) -> list:
if workday.is_absence_day():
return [converter.date_to_str(workday.date), converter.enum_to_abbreviation(workday.absence)]
return [
converter.date_to_str(workday.date),
converter.enum_to_abbreviation(workday.absence),
converter.time_to_str(workday.begin) if workday.begin is not None else None,
converter.time_to_str(workday.end) if workday.end is not None else None,
converter.delta_to_str(workday.pause),
converter.delta_to_str(workday.duration),
converter.delta_to_str(self.__stats.flextime(workday))
]
@staticmethod
def __colorize(value: str) -> str:
return f'[red]{value}[/]' if value.startswith('-') else f'[green]{value}[/]'
def _table(title: str):
table = Table(title=title)
table.add_column('Date')
table.add_column('Type', justify='center')
table.add_column('Start')
table.add_column('End')
table.add_column('Pause')
table.add_column('Duration')
table.add_column('Flextime', justify='right')
return table
| 34.834711 | 194 | 0.60427 | 500 | 4,215 | 4.928 | 0.252 | 0.038961 | 0.039773 | 0.029221 | 0.307224 | 0.191153 | 0.157468 | 0.138799 | 0.138799 | 0.107143 | 0 | 0.002323 | 0.284935 | 4,215 | 120 | 195 | 35.125 | 0.815196 | 0.131673 | 0 | 0.121622 | 0 | 0.013514 | 0.131171 | 0.078878 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0 | 0.094595 | 0.013514 | 0.27027 | 0.054054 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
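A hypothetical driver for the `Viewer` class; it assumes `StatisticHandler` can be constructed without arguments and that `workdays` is a `list[WorkDay]` loaded elsewhere (neither is shown in this file):

```python
from clocker.statistics import StatisticHandler
from clocker.viewer import Viewer

viewer = Viewer(StatisticHandler())      # assumed no-arg constructor
viewer.display_month(3, 2022, workdays)  # workdays: list[WorkDay], loaded elsewhere
viewer.display_statistics(workdays)
```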
353756ae9dde5bd7aeed7c407e5ee54d75a533fb | 6,669 | py | Python | clocwalk/libs/analyzer/gradle.py | ksiswhite/clocwalk | 884b5c3efe61d005a003749bcf4bae079fac8e70 | [
"Apache-2.0"
] | null | null | null | clocwalk/libs/analyzer/gradle.py | ksiswhite/clocwalk | 884b5c3efe61d005a003749bcf4bae079fac8e70 | [
"Apache-2.0"
] | null | null | null | clocwalk/libs/analyzer/gradle.py | ksiswhite/clocwalk | 884b5c3efe61d005a003749bcf4bae079fac8e70 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
import os
import re
from clocwalk.libs.core.common import recursive_search_files
from clocwalk.libs.core.data import logger
__product__ = 'Java'
__version__ = '0.1'
"""
https://docs.gradle.org/current/dsl/org.gradle.api.Project.html#N14F2A
https://docs.gradle.org/current/javadoc/org/gradle/api/Project.html#files-java.lang.Object...-
"""
def find_include_file(content):
"""
https://docs.gradle.org/current/dsl/org.gradle.api.Project.html#org.gradle.api.Project:rootProject
:param content:
:return:
"""
result = None
kw = re.compile(r'rootProject\.file\("(.+)?"\)')
conf_content = ''
if isinstance(content, list):
conf_content = '\n'.join(content)
elif isinstance(content, str):
conf_content = content
find_list = kw.findall(conf_content)
if find_list:
result = [_ for _ in list(set(find_list)) if _.endswith('.gradle')] # FIXME .gradle other?
return result
def find_keyword_block(content, keyword='dependencies', l_bracket='{', r_bracket='}'):
"""
:param content:
:param keyword:
:param l_bracket:
:param r_bracket:
:return:
"""
kw = re.compile(' {0} '.format(keyword), re.I)
result = {}
left_brackets = 0
right_brackets = 0
line_number = 0
current_dep_num = None
if isinstance(content, str):
content_list = content.split('\n')
elif isinstance(content, list):
content_list = content
else:
# FIXME raise Exception?
content_list = None
if content_list:
is_found_keyword = False
for item in content_list:
line_number += 1
if kw.search(item):
current_dep_num = line_number
result[str(current_dep_num)] = []
is_found_keyword = True
left_brackets += item.count(l_bracket)
right_brackets += item.count(r_bracket)
continue
if is_found_keyword:
if item.strip() and item.strip() not in (r_bracket,):
result[str(current_dep_num)].append(item)
left_brackets += item.count(l_bracket)
right_brackets += item.count(r_bracket)
if left_brackets == right_brackets:
current_dep_num, is_found_keyword = None, False
return result
def find_version_info(content, keyword, name):
"""
:param content:
:param keyword:
:param name:
:return:
"""
result = ''
version_list = find_keyword_block(content, keyword, l_bracket='[', r_bracket=']')
if version_list:
for _, item in version_list.items():
for line in item:
if ':' in line:
current_line = line.strip()
n, v = current_line.split(':')
if name == n.strip():
result = v.strip().replace('"', '').replace("'", "").replace(',', '')
return result
return result
def find_product_info(content, origin_file=None):
"""
:param content:
:param origin_file:
:return:
"""
conf_content = content
if isinstance(content, str):
conf_content = conf_content.split('\n')
result = []
version = {}
for item in conf_content:
current_line = item.strip()
# full name
# compile group: 'org.apache.struts', name: 'struts2-core', version: '2.5.5'
if ' group ' in item and ' name ' in item and ' version ' in item:
line = current_line[current_line.index(" ") + 1:]
product = {
'new_version': '', 'cve': '', 'parent_file': '', 'origin_file': origin_file
}
for b in line.split(","):
if ":" in b:
key, value = b.split(":")
if 'group' in key:
key = 'vendor'
elif 'name' in key:
key = 'product'
elif 'version' in key:
v_r = re.search(r'\$\{*(\w+?)\.(\w+)?\}*', value)
if v_r:
section, name = v_r.group(1), v_r.group(2)
value = find_version_info(content, section, name)
product[key] = value
if product:
result.append(product)
        else:  # fallback for shorthand notation, e.g. implementation "group:name:version"
info_re = re.search(r"[\"']{1}(.+?)[\"']{1}", current_line)
if info_re:
info = info_re.group(1).split(':')
if len(info) == 2:
result.append({
'vendor': info[0],
'product': info[1],
'version': '',
'new_version': '',
'cve': '',
'parent_file': '',
'origin_file': origin_file,
})
elif len(info) == 3:
value = info[2]
v_r = re.search(r'\$\{*(\w+?)\.(\w+)?\}*', info[2])
if v_r:
section, name = v_r.group(1), v_r.group(2)
value = find_version_info(content, section, name)
result.append({
'vendor': info[0],
'product': info[1],
'version': value,
'new_version': '',
'cve': '',
'parent_file': '',
'origin_file': origin_file,
})
return result
def start(**kwargs):
"""
:param kwargs:
:return:
"""
code_dir = kwargs.get('code_dir', '')
file_list = recursive_search_files(code_dir, '*/build.gradle')
result = []
for item in file_list:
origin_file = item[len(code_dir) + 1:]
logger.info('[-] Start analysis "{0}" file...'.format(origin_file))
with open(item, 'rb') as fp:
content = fp.read().decode()
include_file = find_include_file(content)
if include_file:
path, _ = os.path.split(item)
for f in include_file:
full_path = os.path.join(path, f)
with open(full_path, 'rb') as fpi:
result.extend(find_product_info(fpi.read().decode(), full_path[len(code_dir) + 1:]))
dependencies = find_keyword_block(content)
for key, value in dependencies.items():
result.extend(find_product_info(value, origin_file))
return result
| 32.0625 | 108 | 0.501874 | 720 | 6,669 | 4.447222 | 0.1875 | 0.034354 | 0.026234 | 0.023735 | 0.301062 | 0.20331 | 0.186446 | 0.178326 | 0.178326 | 0.111805 | 0 | 0.008059 | 0.367371 | 6,669 | 207 | 109 | 32.217391 | 0.750889 | 0.070925 | 0 | 0.283688 | 0 | 0 | 0.066882 | 0.012222 | 0 | 0 | 0 | 0.009662 | 0 | 1 | 0.035461 | false | 0 | 0.028369 | 0 | 0.106383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
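To make the block-extraction behaviour concrete, a small sketch on an in-memory snippet. Note that the pattern `' {0} '.format(keyword)` only matches when the keyword has whitespace on both sides, so the `dependencies` line is indented here:

```python
snippet = """
 dependencies {
    implementation 'com.example:demo:1.0.0'
}
"""
blocks = find_keyword_block(snippet)
# -> {'2': ["    implementation 'com.example:demo:1.0.0'"]}
for _, lines in blocks.items():
    print(find_product_info(lines, origin_file='build.gradle'))
# -> [{'vendor': 'com.example', 'product': 'demo', 'version': '1.0.0', ...}]
```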
353bb5a963d4e5e65d96f97798cfb9499a78f1de | 2,850 | py | Python | Day-052/insta_follower.py | adrianurdar/100DaysOfCode-Bootcamp | af6340a75979f15cb26687931c64aa8e072de242 | [
"MIT"
] | 1 | 2020-11-18T11:02:43.000Z | 2020-11-18T11:02:43.000Z | Day-052/insta_follower.py | adrianurdar/100DaysOfCode-Bootcamp | af6340a75979f15cb26687931c64aa8e072de242 | [
"MIT"
] | null | null | null | Day-052/insta_follower.py | adrianurdar/100DaysOfCode-Bootcamp | af6340a75979f15cb26687931c64aa8e072de242 | [
"MIT"
] | null | null | null | import os
import time
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import ElementClickInterceptedException
IG_EMAIL = os.environ.get("IG_EMAIL")
IG_PWD = os.environ.get("IG_PWD")
SIMILAR_ACCOUNT = "chefilacutite"
class InstaFollower:
def __init__(self):
self.driver = webdriver.Chrome(ChromeDriverManager().install())
def login(self):
self.driver.get("https://www.instagram.com/accounts/login/")
# Accept cookies
cookies = self.driver.find_element_by_css_selector("body > div.RnEpo.Yx5HN > div > div > div > div.mt3GC "
"> button.aOOlW.bIiDR")
cookies.click()
time.sleep(1)
# login
username_input = self.driver.find_element_by_css_selector("#loginForm > div > div:nth-child(1) > div > label "
"> input")
username_input.send_keys(IG_EMAIL)
pwd_input = self.driver.find_element_by_css_selector("#loginForm > div > div:nth-child(2) > div > label "
"> input")
pwd_input.send_keys(IG_PWD)
pwd_input.send_keys(Keys.ENTER)
time.sleep(2)
def find_followers(self):
search_similar_account = self.driver.find_element_by_xpath('//*[@id="react-root"]/section/nav/div[2]/div/div'
'/div[2]/input')
search_similar_account.send_keys(SIMILAR_ACCOUNT)
time.sleep(2)
search_element = self.driver.find_element_by_xpath('//*[@id="react-root"]/section/nav/div[2]/div/div/div[2]/'
'div[4]/div/a[1]')
search_element.click()
time.sleep(2)
# Click on followers
followers = self.driver.find_element_by_css_selector("#react-root > section > main > div > header > section "
"> ul > li:nth-child(2) > a")
followers.click()
time.sleep(2)
element_inside_pop_up = self.driver.find_element_by_xpath('/html/body/div[5]/div/div/div[2]')
for i in range(10):
self.driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", element_inside_pop_up)
time.sleep(2)
def follow(self):
elements = self.driver.find_elements_by_css_selector("li button")
for element in elements:
try:
element.click()
time.sleep(2)
except ElementClickInterceptedException:
self.driver.find_element_by_xpath('/html/body/div[6]/div/div/div/div[3]/button[2]').click()
| 43.181818 | 118 | 0.58 | 329 | 2,850 | 4.817629 | 0.297872 | 0.052997 | 0.079495 | 0.105994 | 0.277603 | 0.249842 | 0.249842 | 0.20694 | 0.20694 | 0.157729 | 0 | 0.013664 | 0.306667 | 2,850 | 65 | 119 | 43.846154 | 0.788462 | 0.013684 | 0 | 0.156863 | 0 | 0.039216 | 0.215176 | 0.081582 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078431 | false | 0 | 0.117647 | 0 | 0.215686 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
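A hypothetical driver for the bot above; it assumes `IG_EMAIL`/`IG_PWD` are set in the environment and Chrome is installed:

```python
bot = InstaFollower()
bot.login()
bot.find_followers()
bot.follow()
```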
35400a81ba57c5aab1733bc92615e530e006f249 | 3,451 | py | Python | tests/unittests/test_dataset.py | JasonSWFu/speechbrain | cb78ba2b33fceba273b055dc471535344c3053f0 | [
"Apache-2.0"
] | 3,913 | 2021-03-14T13:54:52.000Z | 2022-03-30T05:09:55.000Z | tests/unittests/test_dataset.py | JasonSWFu/speechbrain | cb78ba2b33fceba273b055dc471535344c3053f0 | [
"Apache-2.0"
] | 667 | 2021-03-14T20:11:17.000Z | 2022-03-31T04:07:17.000Z | tests/unittests/test_dataset.py | JasonSWFu/speechbrain | cb78ba2b33fceba273b055dc471535344c3053f0 | [
"Apache-2.0"
] | 785 | 2021-03-14T13:20:57.000Z | 2022-03-31T03:26:03.000Z | def test_dynamic_item_dataset():
from speechbrain.dataio.dataset import DynamicItemDataset
import operator
data = {
"utt1": {"foo": -1, "bar": 0, "text": "hello world"},
"utt2": {"foo": 1, "bar": 2, "text": "how are you world"},
"utt3": {"foo": 3, "bar": 4, "text": "where are you world"},
"utt4": {"foo": 5, "bar": 6, "text": "hello nation"},
}
dynamic_items = [
{"provides": "foobar", "func": operator.add, "takes": ["foo", "bar"]}
]
output_keys = ["text"]
dataset = DynamicItemDataset(data, dynamic_items, output_keys)
assert dataset[0] == {"text": "hello world"}
dataset.set_output_keys(["id", "foobar"])
assert dataset[1] == {"id": "utt2", "foobar": 3}
dataset.add_dynamic_item(operator.sub, ["bar", "foo"], "barfoo")
dataset.set_output_keys(["id", "barfoo"])
assert dataset[1] == {"id": "utt2", "barfoo": 1}
# Iterate:
barfoosum = 0
for data_point in iter(dataset):
barfoosum += data_point["barfoo"]
assert barfoosum == 4
def test_filtered_sorted_dynamic_item_dataset():
from speechbrain.dataio.dataset import DynamicItemDataset
import operator
data = {
"utt1": {"foo": -1, "bar": 0, "text": "hello world"},
"utt2": {"foo": 1, "bar": 2, "text": "how are you world"},
"utt3": {"foo": 3, "bar": 4, "text": "where are you world"},
"utt4": {"foo": 5, "bar": 6, "text": "hello nation"},
}
dynamic_items = [
{"provides": "foobar", "func": operator.add, "takes": ["foo", "bar"]}
]
output_keys = ["text"]
dataset = DynamicItemDataset(data, dynamic_items, output_keys)
subset = dataset.filtered_sorted(key_min_value={"foo": 3})
# Note: subset is not a shallow view!
dataset.set_output_keys(["id", "foo"])
assert subset[0] == {"text": "where are you world"}
subset.set_output_keys(["id", "foo"])
assert subset[0] == {"id": "utt3", "foo": 3}
# Note: now making a subset from a version which had id and foo as output keys
subset = dataset.filtered_sorted(key_max_value={"bar": 2})
assert len(subset) == 2
assert subset[0] == {"id": "utt1", "foo": -1}
dataset.add_dynamic_item(operator.sub, ["bar", "foo"], "barfoo")
subset = dataset.filtered_sorted(key_test={"barfoo": lambda x: x == 1})
assert len(subset) == 4
assert subset[3] == {"id": "utt4", "foo": 5}
subset = dataset.filtered_sorted(key_min_value={"foo": 3, "bar": 2})
assert subset[0]["id"] == "utt3"
subset = dataset.filtered_sorted(
key_min_value={"foo": 3}, key_max_value={"foobar": 7}
)
assert len(subset) == 1
subset = dataset.filtered_sorted(
key_min_value={"foo": 3}, key_max_value={"foobar": 3}
)
assert len(subset) == 0
subset = dataset.filtered_sorted(select_n=1, key_min_value={"foo": 3})
assert len(subset) == 1
assert subset[0]["id"] == "utt3"
# Can filter twice!
subset = dataset.filtered_sorted(key_min_value={"foo": 3})
subsetsubset = subset.filtered_sorted(key_max_value={"bar": 4})
assert len(subset) == 2
assert len(subsetsubset) == 1
# Can sort:
subset = dataset.filtered_sorted(sort_key="foo", reverse=True)
assert subset[0]["id"] == "utt4"
# Can filter and sort at the same time:
subset = dataset.filtered_sorted(
key_max_value={"foo": 1}, sort_key="foo", reverse=True
)
assert subset[0]["id"] == "utt2"
| 38.775281 | 82 | 0.600985 | 454 | 3,451 | 4.420705 | 0.196035 | 0.083707 | 0.104634 | 0.134529 | 0.735924 | 0.609367 | 0.593921 | 0.551071 | 0.520179 | 0.377678 | 0 | 0.026519 | 0.213272 | 3,451 | 88 | 83 | 39.215909 | 0.712707 | 0.054187 | 0 | 0.479452 | 0 | 0 | 0.159349 | 0 | 0 | 0 | 0 | 0 | 0.260274 | 1 | 0.027397 | false | 0 | 0.054795 | 0 | 0.082192 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
35434e5acb9cd219e983dca868cd3f017fcae178 | 3,351 | py | Python | pycode/archives/RegRFSVMCompare.py | syitong/randrelu | 0236ed84ce24b46b8d877d858f8a04927e846ca8 | [
"MIT"
] | null | null | null | pycode/archives/RegRFSVMCompare.py | syitong/randrelu | 0236ed84ce24b46b8d877d858f8a04927e846ca8 | [
"MIT"
] | null | null | null | pycode/archives/RegRFSVMCompare.py | syitong/randrelu | 0236ed84ce24b46b8d877d858f8a04927e846ca8 | [
"MIT"
] | null | null | null | ### This code is for comparing the performance of L2, L1 regularized
### RFSVM and KSVM.
import csv
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# my module
import rff
### set up data parameters
gap = 0.5
label_prob = 0.9
samplesize = 1500
logclist = np.arange(-3,4.5,0.5)
trials = 10
### set up feature parameters
X_pool_fraction = 0.3
feature_pool_size = 100
n_components = 10
### generate train and test dataset
X,Y = rff.unit_circle_ideal(gap,label_prob,samplesize)
X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size = 0.33,random_state=0)
### estimate gamma in the rbf kernel. gamma here is actually 1/variance
gamma = rff.gamma_est(X_train)
### rbf kernel support vector machine
kscore = list()
ksparsity = list()
for idx in range(len(logclist)):
C = 10**logclist[idx]
clf = svm.SVC(C=C,gamma=gamma)
clf.fit(X_train,Y_train)
kscore.append(clf.score(X_test,Y_test))
ksparsity.append(clf.n_support_)
### full and sparse random features method
l1score_list = list()
l2score_list = list()
for idx in range(trials):
l1score = list()
l2score = list()
l1sparsity = list()
l2sparsity = list()
#rbf_feature = rff.myRBFSampler(gamma=gamma,n_old_features=X_train.shape[1])
rbf_feature = RBFSampler(gamma=gamma,n_components=20)
X_train_til = rbf_feature.fit_transform(X_train)
X_test_til = rbf_feature.transform(X_test)
m = X_train_til.shape[0]
for jdx in range(len(logclist)):
C = 10**logclist[jdx]
clfl1 = SGDClassifier(loss='hinge',penalty='l1',alpha=1/C/m)
#clfl1 = svm.LinearSVC(penalty='l1',C=C,dual=False)
clfl1.fit(X_train_til,Y_train)
l1score.append(clfl1.score(X_test_til,Y_test))
l1sparsity.append(np.sum(clfl1.coef_!=0))
clfl2 = SGDClassifier(loss='hinge',penalty='l2',alpha=1/C/m)
#clfl2 = svm.LinearSVC(loss='hinge',penalty='l2',C=C)
clfl2.fit(X_train_til,Y_train)
l2score.append(clfl2.score(X_test_til,Y_test))
l2sparsity.append(np.sum(clfl2.coef_!=0))
l1score_list.append(np.array(l1score))
l2score_list.append(np.array(l2score))
np.set_printoptions(precision=2)
print(idx)
l1score_list = np.array(l1score_list)
l2score_list = np.array(l2score_list)
l1score_mean = np.sum(l1score_list,axis=0) / trials
l2score_mean = np.sum(l2score_list,axis=0) / trials
plt.plot(logclist,kscore,'r-o',fillstyle='none')
plt.plot(logclist,l2score_mean,'b--s',fillstyle='none')
plt.plot(logclist,l1score_mean,'g:x',fillstyle='none')
plt.xlabel('$\log(C)$')
plt.ylabel('accuracy')
plt.savefig('image/results.eps')
with open('result/l1spasity.csv','w',newline='') as csvfile:
datawriter = csv.writer(csvfile,delimiter=' ')
datawriter.writerow(l1sparsity)
with open('result/l2spasity.csv','w',newline='') as csvfile:
datawriter = csv.writer(csvfile,delimiter=' ')
datawriter.writerow(l2sparsity)
with open('result/kspasity.csv','w',newline='') as csvfile:
datawriter = csv.writer(csvfile,delimiter=' ')
datawriter.writerow(ksparsity)
| 36.423913 | 86 | 0.704864 | 496 | 3,351 | 4.608871 | 0.326613 | 0.023622 | 0.015748 | 0.01706 | 0.188976 | 0.152668 | 0.121172 | 0.095801 | 0.095801 | 0.095801 | 0 | 0.03067 | 0.163235 | 3,351 | 91 | 87 | 36.824176 | 0.784593 | 0.145927 | 0 | 0.041667 | 0 | 0 | 0.049234 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
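As a sanity check of what `RBFSampler` approximates (separate from the experiment above): with enough components, the inner products of random Fourier features converge to the exact RBF kernel:

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.metrics.pairwise import rbf_kernel

X_demo = np.random.RandomState(0).randn(20, 2)
Z = RBFSampler(gamma=1.0, n_components=5000, random_state=0).fit_transform(X_demo)
print(np.abs(Z @ Z.T - rbf_kernel(X_demo, gamma=1.0)).max())  # small, e.g. < 0.1
```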
354458639ea7afcbdcc97689428659c49f9c1d58 | 663 | py | Python | Tuples-and-Sets/battle_of_names.py | dechevh/Python-Advanced | 9daf33771b9096db77bcbf05ae2a4591b876c723 | [
"MIT"
] | 2 | 2020-09-15T19:12:26.000Z | 2020-09-15T19:12:30.000Z | Tuples-and-Sets/battle_of_names.py | dechevh/Python-Advanced | 9daf33771b9096db77bcbf05ae2a4591b876c723 | [
"MIT"
] | 1 | 2021-07-06T09:20:49.000Z | 2021-07-06T09:20:49.000Z | Tuples-and-Sets/battle_of_names.py | dechevh/Python-Advanced | 9daf33771b9096db77bcbf05ae2a4591b876c723 | [
"MIT"
] | null | null | null | n = int(input())
odd_set = set()
even_set = set()
for i in range(1, n + 1):
name = input()
summed = sum([ord(x) for x in name]) // i
if summed % 2 == 0:
even_set.add(summed)
else:
odd_set.add(summed)
odd_sum = sum(odd_set)
even_sum = sum(even_set)
if odd_sum == even_sum:
union_values = odd_set.union(even_set)
print(', '.join([str(x) for x in union_values]))
elif odd_sum > even_sum:
different_values = odd_set.difference(even_set)
print(', '.join([str(x) for x in different_values]))
else:
symmetric_values = odd_set.symmetric_difference(even_set)
print(', '.join([str(x) for x in symmetric_values]))
| 25.5 | 61 | 0.641026 | 110 | 663 | 3.636364 | 0.272727 | 0.09 | 0.05 | 0.07 | 0.245 | 0.245 | 0.245 | 0.245 | 0.245 | 0.18 | 0 | 0.007634 | 0.209653 | 663 | 25 | 62 | 26.52 | 0.755725 | 0 | 0 | 0.095238 | 0 | 0 | 0.00905 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
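A short worked trace of the scoring rule above on assumed input, showing how each name maps to `sum(ord) // position`:

```python
names = ['Ann', 'Bob']  # hypothetical input names
for i, name in enumerate(names, start=1):
    print(name, sum(ord(x) for x in name) // i)
# Ann: (65 + 110 + 110) // 1 = 285 -> odd set
# Bob: (66 + 111 + 98) // 2 = 137 -> odd set
```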
35496750c9300139f95c827178997ddc5cbb94bc | 4,693 | py | Python | test/units/test_oci_db_node.py | slmjy/oci-ansible-modules | 4713699064f4244b4554b5b2f97b5e5443fa2d6e | [
"Apache-2.0"
] | 106 | 2018-06-29T16:38:56.000Z | 2022-02-16T16:38:56.000Z | test/units/test_oci_db_node.py | slmjy/oci-ansible-modules | 4713699064f4244b4554b5b2f97b5e5443fa2d6e | [
"Apache-2.0"
] | 122 | 2018-09-11T12:49:39.000Z | 2021-05-01T04:54:22.000Z | test/units/test_oci_db_node.py | slmjy/oci-ansible-modules | 4713699064f4244b4554b5b2f97b5e5443fa2d6e | [
"Apache-2.0"
] | 78 | 2018-07-04T05:48:54.000Z | 2022-03-09T06:33:12.000Z | # Copyright (c) 2018, Oracle and/or its affiliates.
# This software is made available to you under the terms of the GPL 3.0 license or the Apache 2.0 license.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Apache License v2.0
# See LICENSE.TXT for details.
import pytest
from nose.plugins.skip import SkipTest
import logging
from ansible.modules.cloud.oracle import oci_db_node
from ansible.module_utils.oracle import oci_db_utils, oci_utils
try:
import oci
from oci.util import to_dict
from oci.database.models import DbNode
from oci.exceptions import ServiceError, ClientError
except ImportError:
raise SkipTest("test_oci_db_node.py requires `oci` module")
class FakeModule(object):
def __init__(self, **kwargs):
self.params = kwargs
def fail_json(self, *args, **kwargs):
self.exit_args = args
self.exit_kwargs = kwargs
raise Exception(kwargs["msg"])
def exit_json(self, *args, **kwargs):
self.exit_args = args
self.exit_kwargs = kwargs
@pytest.fixture()
def db_client(mocker):
mock_db_client = mocker.patch("oci.database.database_client.DatabaseClient")
return mock_db_client.return_value
@pytest.fixture()
def execute_function_and_wait_patch(mocker):
return mocker.patch.object(oci_db_utils, "execute_function_and_wait")
@pytest.fixture()
def get_existing_resource_patch(mocker):
return mocker.patch.object(oci_utils, "get_existing_resource")
@pytest.fixture()
def db_node_action_patch(mocker):
return mocker.patch.object(oci_db_node, "db_node_action")
def setUpModule():
logging.basicConfig(
filename="/tmp/oci_ansible_module.log", filemode="a", level=logging.INFO
)
oci_db_node.set_logger(logging)
def test_perform_db_node_action(
db_client, get_existing_resource_patch, db_node_action_patch
):
module = get_module(dict())
db_node = get_db_node("AVAILABLE")
get_existing_resource_patch.return_value = db_node
db_node_action_patch.return_value = {"db_node": to_dict(db_node), "changed": True}
result = oci_db_node.perform_db_node_action(db_client, module)
assert result["db_node"]["hostname"] is db_node.hostname
def test_create_or_update_db_node_client_error(db_client, get_existing_resource_patch):
error_message = "Db Node id is mandatory"
module = get_module(dict())
get_existing_resource_patch.side_effect = ClientError(Exception(error_message))
try:
oci_db_node.perform_db_node_action(db_client, module)
except Exception as ex:
assert error_message in ex.args[0]
def test_create_or_update_db_node_service_error(
db_client, get_existing_resource_patch, db_node_action_patch
):
error_message = "Internal Server Error"
module = get_module(dict())
db_node = get_db_node("AVAILABLE")
get_existing_resource_patch.return_value = db_node
db_node_action_patch.side_effect = ServiceError(
499, "InternalServerError", dict(), error_message
)
try:
oci_db_node.perform_db_node_action(db_client, module)
except Exception as ex:
assert error_message in ex.args[0]
def test_db_node_action_change_in_desired_state(
db_client, execute_function_and_wait_patch
):
module = get_module(dict({"state": "stop"}))
db_node = get_db_node("AVAILABLE")
execute_function_and_wait_patch.return_value = {
"db_node": to_dict(db_node),
"changed": True,
}
result = oci_db_node.db_node_action(db_client, module, db_node)
assert result["changed"] is True
def test_db_node_action_no_change_in_desired_state(db_client):
module = get_module(dict({"state": "start"}))
db_node = get_db_node("AVAILABLE")
result = oci_db_node.db_node_action(db_client, module, db_node)
assert result["changed"] is False
def test_db_node_action_reset(db_client, execute_function_and_wait_patch):
module = get_module(dict({"state": "reset"}))
db_node = get_db_node("AVAILABLE")
execute_function_and_wait_patch.return_value = {
"db_node": to_dict(db_node),
"changed": True,
}
result = oci_db_node.db_node_action(db_client, module, db_node)
assert result["changed"] is True
def get_db_node(lifecycle_state):
db_node = DbNode()
db_node.hostname = "ansibledbnode"
db_node.lifecycle_state = lifecycle_state
return db_node
def get_response(status, header, data, request):
return oci.Response(status, header, data, request)
def get_module(additional_properties):
params = {"db_node_id": "ocid1.dbnode.aaaa"}
params.update(additional_properties)
module = FakeModule(**params)
return module
| 31.709459 | 106 | 0.740038 | 677 | 4,693 | 4.774003 | 0.223043 | 0.107673 | 0.059406 | 0.05198 | 0.52599 | 0.478032 | 0.444926 | 0.405941 | 0.381807 | 0.381807 | 0 | 0.005098 | 0.164074 | 4,693 | 147 | 107 | 31.92517 | 0.818761 | 0.062646 | 0 | 0.398148 | 0 | 0 | 0.097883 | 0.026406 | 0 | 0 | 0 | 0 | 0.055556 | 1 | 0.157407 | false | 0 | 0.092593 | 0.037037 | 0.324074 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
354a22e83e30c00f035c1d7de3f29a933437d790 | 609 | py | Python | Chapter 14/ASCII Table 2.py | smartdong/PythonPractise | e1fe421b24d7ec8b26d5e34f70f2692ce825e967 | [
"MIT"
] | null | null | null | Chapter 14/ASCII Table 2.py | smartdong/PythonPractise | e1fe421b24d7ec8b26d5e34f70f2692ce825e967 | [
"MIT"
] | null | null | null | Chapter 14/ASCII Table 2.py | smartdong/PythonPractise | e1fe421b24d7ec8b26d5e34f70f2692ce825e967 | [
"MIT"
] | null | null | null | chars = "☺☻♥♦♣♠•◘○◙♂♀♪♫☼►◄↕‼¶§▬↨↑↓→←∟↔▲▼ !\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~⌂ÇüéâäàåçêëèïîìÄÅÉæÆôöòûùÿÖÜ¢£¥₧ƒáíóúñѪº¿⌐¬½¼¡«»░▒▓│┤╡╢╖╕╣║╗╝╜╛┐└┴┬├─┼╞╟╚╔╩╦╠═╬╧╨╤╥╙╘╒╓╫╪┘┌█▄▌▐▀αßΓπΣσµτΦΘΩδ∞φε∩≡±≥≤⌠⌡÷≈°∙·√ⁿ²■? "
cols = 8  # glyphs printed per table row
rows = 256//cols  # 32 rows cover character codes 0-255
table = list("" for n in range(rows+1))
char = 0
for col in range(1,cols+1):
for row in range(1,rows+1):
table[row] += '{:3.0f}'.format(char) + ' '
table[row] += chars[char]
table[row] += '\t'
char += 1
print(len(chars))
for row in table:
print(row)
| 32.052632 | 268 | 0.479475 | 75 | 609 | 5.346667 | 0.533333 | 0.052369 | 0.0399 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048583 | 0.188834 | 609 | 18 | 269 | 33.833333 | 0.536437 | 0 | 0 | 0 | 0 | 0.071429 | 0.074199 | 0.052277 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
354bd4991be99ea827b1ed5f1fde17dd25275483 | 4,536 | py | Python | unittests/tools/test_meterian_parser.py | mtcolman/django-DefectDojo | 76175aca446e077884bdb5e1d8e2a671a0840775 | [
"BSD-3-Clause"
] | 2 | 2022-03-29T11:37:23.000Z | 2022-03-31T18:32:35.000Z | unittests/tools/test_meterian_parser.py | mtcolman/django-DefectDojo | 76175aca446e077884bdb5e1d8e2a671a0840775 | [
"BSD-3-Clause"
] | 206 | 2020-04-20T16:03:18.000Z | 2022-01-15T23:07:48.000Z | unittests/tools/test_meterian_parser.py | mtcolman/django-DefectDojo | 76175aca446e077884bdb5e1d8e2a671a0840775 | [
"BSD-3-Clause"
] | 1 | 2020-12-06T15:44:44.000Z | 2020-12-06T15:44:44.000Z | from ..dojo_test_case import DojoTestCase
from dojo.models import Test
from dojo.tools.meterian.parser import MeterianParser
class TestMeterianParser(DojoTestCase):
def test_meterianParser_invalid_security_report_raise_ValueError_exception(self):
with self.assertRaises(ValueError):
testfile = open("unittests/scans/meterian/report_invalid.json")
parser = MeterianParser()
findings = parser.get_findings(testfile, Test())
def test_meterianParser_report_has_no_finding(self):
testfile = open("unittests/scans/meterian/report_no_vulns.json")
parser = MeterianParser()
findings = parser.get_findings(testfile, Test())
testfile.close()
self.assertEqual(0, len(findings))
def test_meterianParser_report_has_one_findings(self):
testfile = open("unittests/scans/meterian/report_one_vuln.json")
parser = MeterianParser()
findings = parser.get_findings(testfile, Test())
testfile.close()
self.assertEqual(1, len(findings))
def test_meterianParser_report_has_many_findings(self):
testfile = open("unittests/scans/meterian/report_many_vulns.json")
parser = MeterianParser()
findings = parser.get_findings(testfile, Test())
testfile.close()
self.assertEqual(20, len(findings))
def test_meterianParser_finding_has_fields(self):
testfile = open("unittests/scans/meterian/report_one_vuln.json")
parser = MeterianParser()
findings = parser.get_findings(testfile, Test())
testfile.close()
finding = findings[0]
self.assertEqual(1, len(findings))
self.assertEqual("date-and-time:0.6.3", finding.title)
self.assertEqual("2021-06-02", finding.date)
self.assertEqual("High", finding.severity)
self.assertEqual("Issue severity of: **High** from a base " +
"CVSS score of: **7.5**", finding.severity_justification)
self.assertEqual("date-and-time is an npm package for manipulating " +
"date and time. In date-and-time before version 0.14.2, there a regular " +
"expression involved in parsing which can be exploited to to cause a denial " +
"of service. This is fixed in version 0.14.2.", finding.description)
self.assertEqual("7be36211-b569-30c0-8851-26b4bb8740ca", finding.unique_id_from_tool)
self.assertEqual("CVE-2020-26289", finding.cve)
self.assertEqual(400, finding.cwe)
self.assertTrue(finding.mitigation.startswith("## Remediation"))
self.assertTrue("Upgrade date-and-time to version 0.14.2 or higher." in finding.mitigation)
self.assertTrue("https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26289" in finding.references, "found " + finding.references)
self.assertTrue("https://nvd.nist.gov/vuln/detail/CVE-2020-26289" in finding.references, "found " + finding.references)
self.assertTrue("https://www.npmjs.com/package/date-and-time" in finding.references, "found " + finding.references)
self.assertTrue("https://github.com/knowledgecode/date-and-time/security/advisories/GHSA-r92x-f52r-x54g" in finding.references, "found " + finding.references)
self.assertTrue("https://github.com/knowledgecode/date-and-time/commit/9e4b501eacddccc8b1f559fb414f48472ee17c2a" in finding.references, "found " + finding.references)
self.assertTrue("Manifest file", finding.file_path)
self.assertEqual(["nodejs"], finding.tags)
def test_meterianParser_finding_has_no_remediation(self):
testfile = open("unittests/scans/meterian/report_one_vuln_no_remediation.json")
parser = MeterianParser()
findings = parser.get_findings(testfile, Test())
testfile.close()
finding = findings[0]
self.assertTrue(finding.mitigation.startswith("We were not able to provide a safe version for this library."))
self.assertTrue("You should consider replacing this component as it could be an " +
"issue for the safety of your application." in finding.mitigation)
def test_meterianParser_dual_language_report_has_two_findins(self):
testfile = open("unittests/scans/meterian/report_multi_language.json")
parser = MeterianParser()
findings = parser.get_findings(testfile, Test())
testfile.close()
self.assertEqual(2, len(findings))
self.assertIn("nodejs", findings[0].tags)
self.assertIn("ruby", findings[1].tags)
| 48.255319 | 174 | 0.699735 | 538 | 4,536 | 5.775093 | 0.310409 | 0.067589 | 0.028323 | 0.058577 | 0.540715 | 0.457998 | 0.445124 | 0.390409 | 0.353396 | 0.317348 | 0 | 0.031216 | 0.187831 | 4,536 | 93 | 175 | 48.774194 | 0.812161 | 0 | 0 | 0.361111 | 0 | 0.041667 | 0.295194 | 0.082231 | 0 | 0 | 0 | 0 | 0.375 | 1 | 0.097222 | false | 0 | 0.041667 | 0 | 0.152778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
101bc85fcefa0a4f60961b4135e29195f9b35b02 | 3,781 | py | Python | DeepSaki/layers/fourier_pooling.py | sascha-kirch/DeepSaki | cfe6bd6537a2b0793d4db4041f2efb37d480cb4c | [
"MIT"
] | null | null | null | DeepSaki/layers/fourier_pooling.py | sascha-kirch/DeepSaki | cfe6bd6537a2b0793d4db4041f2efb37d480cb4c | [
"MIT"
] | null | null | null | DeepSaki/layers/fourier_pooling.py | sascha-kirch/DeepSaki | cfe6bd6537a2b0793d4db4041f2efb37d480cb4c | [
"MIT"
] | null | null | null | import tensorflow as tf
class rFFTPooling2D(tf.keras.layers.Layer):
'''
Pooling in frequency domain by truncating higher frequencies. Layer input is assumed to be in the spatial domain.
args:
- isChannelFirst: True or False. If True, input shape is assumed to be [batch,channel,height,width]. If False, input shape is assumed to be [batch,height,width,channel]
- truncatedFrequencies: "high" or "low": if "high", high frequency values are truncated, if "low", low frequencies are truncated
- **kwargs: keyword arguments passed to the parent class tf.keras.layers.Layer.
'''
def __init__(self,
isChannelFirst = False,
truncatedFrequencies = "low",
**kwargs
):
super(rFFTPooling2D, self).__init__(**kwargs)
self.isChannelFirst = isChannelFirst
self.truncatedFrequencies=truncatedFrequencies
def build(self, input_shape):
super(rFFTPooling2D, self).build(input_shape)
if self.isChannelFirst:
batch_size, inp_filter, inp_height, inp_width = input_shape
else:
batch_size, inp_height, inp_width, inp_filter = input_shape
self.offset_height = int(inp_height/2)
self.offset_width = 0
self.target_height = int(inp_height/2)
        self.target_width = int(inp_width/4 + 1) # 1/4 because the real spectrum already has half the width and the filter only applies to positive frequencies in width
def call(self, inputs):
if not self.built:
raise ValueError('This model has not yet been built.')
if not self.isChannelFirst: #layer assumes channel first due to FFT
inputs = tf.einsum("bhwc->bchw",inputs)
inputs_F = tf.signal.rfft2d(inputs)
if self.truncatedFrequencies == "high":
inputs_F = tf.signal.fftshift(inputs_F, axes=[-2]) #shift frequencies to be able to crop in center
shape = tf.shape(inputs_F)
outputs_F = tf.slice(inputs_F, begin=[0,0,self.offset_height,self.offset_width],size=[shape[0],shape[1],self.target_height,self.target_width]) # Tf.slice instead of tf.image.crop, because the latter assumes channel last
if self.truncatedFrequencies == "high":
outputs_F = tf.signal.ifftshift(outputs_F, axes=[-2]) #reverse shift
outputs = tf.signal.irfft2d(outputs_F)
#reverse to previous channel config!
if not self.isChannelFirst:
outputs = tf.einsum("bchw->bhwc",outputs)
return outputs
def get_config(self):
config = super(rFFTPooling2D, self).get_config()
config.update({
"isChannelFirst":self.isChannelFirst,
"truncatedFrequencies":self.truncatedFrequencies
})
return config
class FourierPooling2D(tf.keras.layers.Layer):
'''
Pooling in frequency domain by truncating high frequencies. Layer input is assumed to be in the frequency domain.
args:
- isChannelFirst: True or False. If True, input shape is assumed to be [batch,channel,height,width]. If False, input shape is assumed to be [batch,height,width,channel]
- **kwargs: keyword arguments passed to the parent class tf.keras.layers.Layer.
'''
def __init__(self,
isChannelFirst = False,
**kwargs
):
super(FourierPooling2D, self).__init__(**kwargs)
self.isChannelFirst = isChannelFirst
def call(self, inputs):
if self.isChannelFirst:
inputs = tf.einsum("bchw->bhwc",inputs)
outputs = tf.image.central_crop(inputs, 0.5) #assumes channel last
#reverse to previous channel config!
if self.isChannelFirst:
outputs = tf.einsum("bhwc->bchw",outputs)
return outputs
def get_config(self):
config = super(FourierPooling2D, self).get_config()
config.update({
"isChannelFirst":self.isChannelFirst
})
return config
| 39.385417 | 223 | 0.688707 | 482 | 3,781 | 5.292531 | 0.244813 | 0.077617 | 0.020384 | 0.028224 | 0.453156 | 0.412387 | 0.333203 | 0.333203 | 0.261074 | 0.224226 | 0 | 0.008073 | 0.2137 | 3,781 | 95 | 224 | 39.8 | 0.849983 | 0.323988 | 0 | 0.460317 | 0 | 0 | 0.052946 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.015873 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
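A minimal shape check for `rFFTPooling2D`; the input size is an assumption for illustration:

```python
import tensorflow as tf

layer = rFFTPooling2D(truncatedFrequencies="low")
x = tf.random.normal([1, 16, 16, 3])  # channel-last input
y = layer(x)
print(y.shape)  # (1, 8, 8, 3): both spatial dimensions halved
```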
101bee9583c424a24266bf77601ebc417f60a0da | 18,258 | py | Python | gluon/tests/test_dal.py | spiffytech/MobileBlur | f9d2469caa05f0fe5c05c2ec83d1480cf6b770d8 | [
"BSD-3-Clause"
] | null | null | null | gluon/tests/test_dal.py | spiffytech/MobileBlur | f9d2469caa05f0fe5c05c2ec83d1480cf6b770d8 | [
"BSD-3-Clause"
] | null | null | null | gluon/tests/test_dal.py | spiffytech/MobileBlur | f9d2469caa05f0fe5c05c2ec83d1480cf6b770d8 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Unit tests for gluon.sql
"""
import sys
import os
if os.path.isdir('gluon'):
sys.path.append(os.path.realpath('gluon'))
else:
sys.path.append(os.path.realpath('../'))
import unittest
import datetime
from dal import DAL, Field, Table, SQLALL
ALLOWED_DATATYPES = [
'string',
'text',
'integer',
'boolean',
'double',
'blob',
'date',
'time',
'datetime',
'upload',
'password',
]
def setUpModule():
pass
def tearDownModule():
if os.path.isfile('sql.log'):
os.unlink('sql.log')
class TestFields(unittest.TestCase):
def testFieldName(self):
# Check that Fields cannot start with underscores
self.assertRaises(SyntaxError, Field, '_abc', 'string')
# Check that Fields cannot contain punctuation other than underscores
self.assertRaises(SyntaxError, Field, 'a.bc', 'string')
# Check that Fields cannot be a name of a method or property of Table
for x in ['drop', 'on', 'truncate']:
self.assertRaises(SyntaxError, Field, x, 'string')
# Check that Fields allows underscores in the body of a field name.
self.assert_(Field('a_bc', 'string'),
"Field isn't allowing underscores in fieldnames. It should.")
def testFieldTypes(self):
        # Check that string and password default length is 512
        for typ in ['string', 'password']:
            self.assert_(Field('abc', typ).length == 512,
                         "Default length for type '%s' is not 512" % typ)

        # Check that upload default length is 512
        self.assert_(Field('abc', 'upload').length == 512,
                     "Default length for type 'upload' is not 512")
        # Check that a Table passed as the type creates a reference
        self.assert_(Field('abc', Table(None, 'temp')).type
                     == 'reference temp',
                     'Passing a Table does not result in a reference type.')
def testFieldLabels(self):
# Check that a label is successfully built from the supplied fieldname
self.assert_(Field('abc', 'string').label == 'Abc',
'Label built is incorrect')
self.assert_(Field('abc_def', 'string').label == 'Abc Def',
'Label built is incorrect')
def testFieldFormatters(self): # Formatter should be called Validator
# Test the default formatters
for typ in ALLOWED_DATATYPES:
f = Field('abc', typ)
if typ not in ['date', 'time', 'datetime']:
isinstance(f.formatter('test'), str)
else:
isinstance(f.formatter(datetime.datetime.now()), str)
def testRun(self):
db = DAL('sqlite:memory:')
for ft in ['string', 'text', 'password', 'upload', 'blob']:
db.define_table('t', Field('a', ft, default=''))
self.assertEqual(db.t.insert(a='x'), 1)
self.assertEqual(db().select(db.t.a)[0].a, 'x')
db.t.drop()
db.define_table('t', Field('a', 'integer', default=1))
self.assertEqual(db.t.insert(a=3), 1)
self.assertEqual(db().select(db.t.a)[0].a, 3)
db.t.drop()
db.define_table('t', Field('a', 'double', default=1))
self.assertEqual(db.t.insert(a=3.1), 1)
self.assertEqual(db().select(db.t.a)[0].a, 3.1)
db.t.drop()
db.define_table('t', Field('a', 'boolean', default=True))
self.assertEqual(db.t.insert(a=True), 1)
self.assertEqual(db().select(db.t.a)[0].a, True)
db.t.drop()
db.define_table('t', Field('a', 'date',
default=datetime.date.today()))
t0 = datetime.date.today()
self.assertEqual(db.t.insert(a=t0), 1)
self.assertEqual(db().select(db.t.a)[0].a, t0)
db.t.drop()
db.define_table('t', Field('a', 'datetime',
default=datetime.datetime.today()))
t0 = datetime.datetime(
1971,
12,
21,
10,
30,
55,
0,
)
self.assertEqual(db.t.insert(a=t0), 1)
self.assertEqual(db().select(db.t.a)[0].a, t0)
db.t.drop()
db.define_table('t', Field('a', 'time', default='11:30'))
t0 = datetime.time(10, 30, 55)
self.assertEqual(db.t.insert(a=t0), 1)
self.assertEqual(db().select(db.t.a)[0].a, t0)
db.t.drop()
class TestAll(unittest.TestCase):
def setUp(self):
self.pt = Table(None,'PseudoTable',Field('name'),Field('birthdate'))
def testSQLALL(self):
ans = 'PseudoTable.id, PseudoTable.name, PseudoTable.birthdate'
self.assertEqual(str(SQLALL(self.pt)), ans)
class TestTable(unittest.TestCase):
def testTableCreation(self):
# Check for error when not passing type other than Field or Table
self.assertRaises(SyntaxError, Table, None, 'test', None)
persons = Table(None, 'persons',
Field('firstname','string'),
Field('lastname', 'string'))
# Does it have the correct fields?
self.assert_(set(persons.fields).issuperset(set(['firstname',
'lastname'])))
# ALL is set correctly
self.assert_('persons.firstname, persons.lastname'
in str(persons.ALL))
def testTableAlias(self):
db = DAL('sqlite:memory:')
persons = Table(db, 'persons', Field('firstname',
'string'), Field('lastname', 'string'))
aliens = persons.with_alias('aliens')
        # Check that they are different table instances with the same fields
self.assert_(persons is not aliens)
self.assert_(set(persons.fields) == set(aliens.fields))
def testTableInheritance(self):
persons = Table(None, 'persons', Field('firstname',
'string'), Field('lastname', 'string'))
customers = Table(None, 'customers',
Field('items_purchased', 'integer'),
persons)
self.assert_(set(customers.fields).issuperset(set(
['items_purchased', 'firstname', 'lastname'])))
class TestInsert(unittest.TestCase):
def testRun(self):
db = DAL('sqlite:memory:')
db.define_table('t', Field('a'))
self.assertEqual(db.t.insert(a='1'), 1)
self.assertEqual(db.t.insert(a='1'), 2)
self.assertEqual(db.t.insert(a='1'), 3)
self.assertEqual(db(db.t.a == '1').count(), 3)
self.assertEqual(db(db.t.a == '1').update(a='2'), 3)
self.assertEqual(db(db.t.a == '2').count(), 3)
self.assertEqual(db(db.t.a == '2').delete(), 3)
db.t.drop()
class TestSelect(unittest.TestCase):
def testRun(self):
db = DAL('sqlite:memory:')
db.define_table('t', Field('a'))
self.assertEqual(db.t.insert(a='1'), 1)
self.assertEqual(db.t.insert(a='2'), 2)
self.assertEqual(db.t.insert(a='3'), 3)
self.assertEqual(len(db(db.t.id > 0).select()), 3)
self.assertEqual(db(db.t.id > 0).select(orderby=~db.t.a
| db.t.id)[0].a, '3')
self.assertEqual(len(db(db.t.id > 0).select(limitby=(1, 2))), 1)
self.assertEqual(db(db.t.id > 0).select(limitby=(1, 2))[0].a,
'2')
self.assertEqual(len(db().select(db.t.ALL)), 3)
self.assertEqual(len(db(db.t.a == None).select()), 0)
self.assertEqual(len(db(db.t.a != None).select()), 3)
self.assertEqual(len(db(db.t.a > '1').select()), 2)
self.assertEqual(len(db(db.t.a >= '1').select()), 3)
self.assertEqual(len(db(db.t.a == '1').select()), 1)
self.assertEqual(len(db(db.t.a != '1').select()), 2)
self.assertEqual(len(db(db.t.a < '3').select()), 2)
self.assertEqual(len(db(db.t.a <= '3').select()), 3)
self.assertEqual(len(db(db.t.a > '1')(db.t.a < '3').select()), 1)
self.assertEqual(len(db((db.t.a > '1') & (db.t.a < '3')).select()), 1)
self.assertEqual(len(db((db.t.a > '1') | (db.t.a < '3')).select()), 3)
self.assertEqual(len(db((db.t.a > '1') & ~(db.t.a > '2')).select()), 1)
self.assertEqual(len(db(~(db.t.a > '1') & (db.t.a > '2')).select()), 0)
db.t.drop()
class TestBelongs(unittest.TestCase):
def testRun(self):
db = DAL('sqlite:memory:')
db.define_table('t', Field('a'))
self.assertEqual(db.t.insert(a='1'), 1)
self.assertEqual(db.t.insert(a='2'), 2)
self.assertEqual(db.t.insert(a='3'), 3)
self.assertEqual(len(db(db.t.a.belongs(('1', '3'))).select()),
2)
self.assertEqual(len(db(db.t.a.belongs(db(db.t.id
> 2)._select(db.t.a))).select()), 1)
self.assertEqual(len(db(db.t.a.belongs(db(db.t.a.belongs(('1',
'3')))._select(db.t.a))).select()), 2)
self.assertEqual(len(db(db.t.a.belongs(db(db.t.a.belongs(db
(db.t.a.belongs(('1', '3')))._select(db.t.a)))._select(
db.t.a))).select()),
2)
db.t.drop()
class TestLike(unittest.TestCase):
def testRun(self):
db = DAL('sqlite:memory:')
db.define_table('t', Field('a'))
self.assertEqual(db.t.insert(a='abc'), 1)
self.assertEqual(len(db(db.t.a.like('a%')).select()), 1)
self.assertEqual(len(db(db.t.a.like('%b%')).select()), 1)
self.assertEqual(len(db(db.t.a.like('%c')).select()), 1)
self.assertEqual(len(db(db.t.a.like('%d%')).select()), 0)
self.assertEqual(len(db(db.t.a.lower().like('A%')).select()), 1)
self.assertEqual(len(db(db.t.a.lower().like('%B%')).select()),
1)
self.assertEqual(len(db(db.t.a.lower().like('%C')).select()), 1)
self.assertEqual(len(db(db.t.a.upper().like('A%')).select()), 1)
self.assertEqual(len(db(db.t.a.upper().like('%B%')).select()),
1)
self.assertEqual(len(db(db.t.a.upper().like('%C')).select()), 1)
db.t.drop()
class TestDatetime(unittest.TestCase):
def testRun(self):
db = DAL('sqlite:memory:')
db.define_table('t', Field('a', 'datetime'))
self.assertEqual(db.t.insert(a=datetime.datetime(1971, 12, 21,
11, 30)), 1)
self.assertEqual(db.t.insert(a=datetime.datetime(1971, 11, 21,
10, 30)), 2)
self.assertEqual(db.t.insert(a=datetime.datetime(1970, 12, 21,
9, 30)), 3)
self.assertEqual(len(db(db.t.a == datetime.datetime(1971, 12,
21, 11, 30)).select()), 1)
self.assertEqual(len(db(db.t.a.year() == 1971).select()), 2)
self.assertEqual(len(db(db.t.a.month() == 12).select()), 2)
self.assertEqual(len(db(db.t.a.day() == 21).select()), 3)
self.assertEqual(len(db(db.t.a.hour() == 11).select()), 1)
self.assertEqual(len(db(db.t.a.minutes() == 30).select()), 3)
self.assertEqual(len(db(db.t.a.seconds() == 0).select()), 3)
db.t.drop()
class TestExpressions(unittest.TestCase):
def testRun(self):
db = DAL('sqlite:memory:')
db.define_table('t', Field('a', 'integer'))
self.assertEqual(db.t.insert(a=1), 1)
self.assertEqual(db.t.insert(a=2), 2)
self.assertEqual(db.t.insert(a=3), 3)
self.assertEqual(db(db.t.a == 3).update(a=db.t.a + 1), 1)
self.assertEqual(len(db(db.t.a == 4).select()), 1)
db.t.drop()
class TestJoin(unittest.TestCase):
def testRun(self):
db = DAL('sqlite:memory:')
db.define_table('t1', Field('a'))
db.define_table('t2', Field('a'), Field('b', db.t1))
i1 = db.t1.insert(a='1')
i2 = db.t1.insert(a='2')
i3 = db.t1.insert(a='3')
db.t2.insert(a='4', b=i1)
db.t2.insert(a='5', b=i2)
db.t2.insert(a='6', b=i2)
self.assertEqual(len(db(db.t1.id
== db.t2.b).select(orderby=db.t1.a
| db.t2.a)), 3)
self.assertEqual(db(db.t1.id == db.t2.b).select(orderby=db.t1.a
| db.t2.a)[2].t1.a, '2')
self.assertEqual(db(db.t1.id == db.t2.b).select(orderby=db.t1.a
| db.t2.a)[2].t2.a, '6')
self.assertEqual(len(db().select(db.t1.ALL, db.t2.ALL,
left=db.t2.on(db.t1.id == db.t2.b),
orderby=db.t1.a | db.t2.a)), 4)
self.assertEqual(db().select(db.t1.ALL, db.t2.ALL,
left=db.t2.on(db.t1.id == db.t2.b),
orderby=db.t1.a | db.t2.a)[2].t1.a, '2')
self.assertEqual(db().select(db.t1.ALL, db.t2.ALL,
left=db.t2.on(db.t1.id == db.t2.b),
orderby=db.t1.a | db.t2.a)[2].t2.a, '6')
self.assertEqual(db().select(db.t1.ALL, db.t2.ALL,
left=db.t2.on(db.t1.id == db.t2.b),
orderby=db.t1.a | db.t2.a)[3].t1.a, '3')
self.assertEqual(db().select(db.t1.ALL, db.t2.ALL,
left=db.t2.on(db.t1.id == db.t2.b),
orderby=db.t1.a | db.t2.a)[3].t2.a, None)
self.assertEqual(len(db().select(db.t1.ALL, db.t2.id.count(),
left=db.t2.on(db.t1.id == db.t2.b),
orderby=db.t1.a | db.t2.a, groupby=db.t1.a)),
3)
self.assertEqual(db().select(db.t1.ALL, db.t2.id.count(),
left=db.t2.on(db.t1.id == db.t2.b),
orderby=db.t1.a | db.t2.a,
groupby=db.t1.a)[0]._extra[db.t2.id.count()],
1)
self.assertEqual(db().select(db.t1.ALL, db.t2.id.count(),
left=db.t2.on(db.t1.id == db.t2.b),
orderby=db.t1.a | db.t2.a,
groupby=db.t1.a)[1]._extra[db.t2.id.count()],
2)
self.assertEqual(db().select(db.t1.ALL, db.t2.id.count(),
left=db.t2.on(db.t1.id == db.t2.b),
orderby=db.t1.a | db.t2.a,
groupby=db.t1.a)[2]._extra[db.t2.id.count()],
0)
db.t1.drop()
db.t2.drop()
class TestMinMaxSum(unittest.TestCase):
def testRun(self):
db = DAL('sqlite:memory:')
db.define_table('t', Field('a', 'integer'))
self.assertEqual(db.t.insert(a=1), 1)
self.assertEqual(db.t.insert(a=2), 2)
self.assertEqual(db.t.insert(a=3), 3)
s = db.t.a.min()
self.assertEqual(db(db.t.id > 0).select(s)[0]._extra[s], 1)
s = db.t.a.max()
self.assertEqual(db(db.t.id > 0).select(s)[0]._extra[s], 3)
s = db.t.a.sum()
self.assertEqual(db(db.t.id > 0).select(s)[0]._extra[s], 6)
s = db.t.a.count()
self.assertEqual(db(db.t.id > 0).select(s)[0]._extra[s], 3)
db.t.drop()
#class TestCache(unittest.
# def testRun(self):
# cache = cache.ram
# db = DAL('sqlite:memory:')
# db.define_table('t', Field('a'))
# db.t.insert(a='1')
# r1 = db().select(db.t.ALL, cache=(cache, 1000))
# db.t.insert(a='1')
# r2 = db().select(db.t.ALL, cache=(cache, 1000))
# self.assertEqual(r1.response, r2.response)
# db.t.drop()
class TestMigrations(unittest.TestCase):
def testRun(self):
db = DAL('sqlite://.storage.db')
db.define_table('t', Field('a'), migrate='.storage.table')
db.commit()
db = DAL('sqlite://.storage.db')
db.define_table('t', Field('a'), Field('b'),
migrate='.storage.table')
db.commit()
db = DAL('sqlite://.storage.db')
db.define_table('t', Field('a'), Field('b', 'text'),
migrate='.storage.table')
db.commit()
db = DAL('sqlite://.storage.db')
db.define_table('t', Field('a'), migrate='.storage.table')
db.t.drop()
db.commit()
def tearDown(self):
if os.path.exists('.storage.db'):
os.unlink('.storage.db')
if os.path.exists('.storage.table'):
os.unlink('.storage.table')
class TestReference(unittest.TestCase):
def testRun(self):
db = DAL('sqlite:memory:')
db.define_table('t', Field('name'), Field('a','reference t'))
db.commit()
x = db.t.insert(name='max')
assert x.id == 1
assert x['id'] == 1
x.a = x
assert x.a == 1
x.update_record()
y = db.t[1]
assert y.a == 1
assert y.a.a.a.a.a.a.name == 'max'
z=db.t.insert(name='xxx', a = y)
assert z.a == y.id
db.t.drop()
db.commit()
class TestClientLevelOps(unittest.TestCase):
def testRun(self):
db = DAL('sqlite:memory:')
db.define_table('t', Field('a'))
db.commit()
db.t.insert(a="test")
rows1 = db(db.t.id>0).select()
rows2 = db(db.t.id>0).select()
rows3 = rows1 & rows2
assert len(rows3) == 2
rows4 = rows1 | rows2
assert len(rows4) == 1
rows5 = rows1.find(lambda row: row.a=="test")
assert len(rows5) == 1
rows6 = rows2.exclude(lambda row: row.a=="test")
assert len(rows6) == 1
rows7 = rows5.sort(lambda row: row.a)
assert len(rows7) == 1
db.t.drop()
db.commit()
class TestVirtualFields(unittest.TestCase):
def testRun(self):
db = DAL('sqlite:memory:')
db.define_table('t', Field('a'))
db.commit()
db.t.insert(a="test")
class Compute:
def a_upper(row): return row.t.a.upper()
db.t.virtualfields.append(Compute())
assert db(db.t.id>0).select().first().a_upper == 'TEST'
db.t.drop()
db.commit()
if __name__ == '__main__':
    unittest.main(exit=False)  # exit=False so the cleanup call below is reachable
    tearDownModule()
| 37.261224 | 80 | 0.520813 | 2,530 | 18,258 | 3.732806 | 0.103953 | 0.042567 | 0.027531 | 0.027319 | 0.637336 | 0.587992 | 0.551673 | 0.531766 | 0.492588 | 0.435303 | 0 | 0.034469 | 0.289736 | 18,258 | 489 | 81 | 37.337423 | 0.693785 | 0.063424 | 0 | 0.330709 | 0 | 0 | 0.084984 | 0.001231 | 0 | 0 | 0 | 0 | 0.32021 | 1 | 0.068241 | false | 0.013123 | 0.013123 | 0.002625 | 0.12336 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
101e895d0ee7e75046948a39fa8c33b31c2e4dcf | 3,110 | py | Python | extract_csvs.py | aryehgigi/pybart_rule_based_evaluation | bec41400734323f92124a8614cc138083e4274ad | [
"Apache-2.0"
] | null | null | null | extract_csvs.py | aryehgigi/pybart_rule_based_evaluation | bec41400734323f92124a8614cc138083e4274ad | [
"Apache-2.0"
] | null | null | null | extract_csvs.py | aryehgigi/pybart_rule_based_evaluation | bec41400734323f92124a8614cc138083e4274ad | [
"Apache-2.0"
] | null | null | null | from collections import defaultdict
import csv
import json
import math
import os
import argparse
def list_files(names):
for i, name in enumerate(names):
print(i, name)
def main(names):
rels = defaultdict(defaultdict)
general = dict()
for name in names:
try:
with open(logs_dir + name, "r") as f:
d = json.load(f)
except json.decoder.JSONDecodeError:
continue
is_test = 'test' in name
name = name.replace("_dev", "")
name = name.replace("_test", "")
if name in general:
            general[name][5 if is_test else 1:None if is_test else 5] = [d[1], d[2], d[3], d[4]]  # [5:] covers all four test slots
else:
general[name] = [name, float('inf'), float('inf'), float('inf'), float('inf'), d[1], d[2], d[3], d[4]] if is_test else \
[name, d[1], d[2], d[3], d[4], float('inf'), float('inf'), float('inf'), float('inf')]
for rel, scores in d[0].items():
if (rel in rels) and (name in rels[rel].keys()):
                rels[rel][name][7 if is_test else 1:None if is_test else 7] = \
                    [scores['precision'], scores['recall'], scores['f1'], scores['relevant'], scores['retrieved'], scores['retrievedAndRelevant']]
else:
rels[rel][name] = [name, -math.inf, -math.inf, -math.inf, -math.inf, -math.inf, -math.inf, scores['precision'], scores['recall'], scores['f1'], scores['relevant'], scores['retrieved'], scores['retrievedAndRelevant']] if is_test else \
[name, scores['precision'], scores['recall'], scores['f1'], scores['relevant'], scores['retrieved'], scores['retrievedAndRelevant'], -math.inf, -math.inf, -math.inf, -math.inf, -math.inf, -math.inf]
for rel, scores in rels.items():
with open(logs_dir + "output/" + rel + ".csv", "w") as f:
writer = csv.writer(f)
            writer.writerows(scores.values())  # write the row lists, not the dict keys
print(rel)
for score in scores.values():
print("\t{: <43}{: >9.4f}{: >9.4f}{: >9.4f}{: >9}{: >9}{: >9}{: >9.4f}{: >9.4f}{: >9.4f}{: >9}{: >9}{: >9}".format(*score))
print()
with open(logs_dir + "output/" + "general.csv", "w") as f:
writer = csv.writer(f)
writer.writerows(general.values())
print("general")
for score in general.values():
print("\t{: <70}{: >11.6f}{: >11.6f}{: >11.6f}{: >11.6f}{: >11.6f}{: >11.6f}{: >11.6f}{: >11.6f}".format(*score))
if __name__ == "__main__":
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument('-a', '--action', type=str, default='ex')
arg_parser.add_argument('-i', '--file_indices', action='append', default=None)
args = arg_parser.parse_args()
logs_dir = "/home/inbaryeh/spike/server/logs/"
names = [f for f in os.listdir(logs_dir) if os.path.isfile(os.path.join(logs_dir, f))]
if args.action == 'ex':
if args.file_indices:
names = [names[int(idx)] for idx in args.file_indices]
main(names)
elif args.action == 'ls':
list_files(names)
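
# Hedged usage sketch (file indices and log contents are illustrative):
#   python extract_csvs.py -a ls            # list the files found in logs_dir
#   python extract_csvs.py -a ex -i 0 -i 2  # extract CSVs only for files 0 and 2
#   python extract_csvs.py                  # default: extract CSVs for all files
# Per-relation CSVs plus general.csv are written to logs_dir + "output/".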
| 39.871795 | 250 | 0.551768 | 425 | 3,110 | 3.955294 | 0.249412 | 0.04997 | 0.065437 | 0.083284 | 0.433076 | 0.372397 | 0.372397 | 0.372397 | 0.320048 | 0.30577 | 0 | 0.030238 | 0.255627 | 3,110 | 77 | 251 | 40.38961 | 0.695896 | 0 | 0 | 0.066667 | 0 | 0.033333 | 0.162379 | 0.010611 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.1 | 0 | 0.133333 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
101f5580bef8ca4281031c3ae2a39d3eb6d07df8 | 242 | py | Python | courses/python/cursoemvideo/exercicios/ex079.py | bdpcampos/public | dda57c265718f3e1cc0d6bce73f149051f5647ef | [
"MIT"
] | 3 | 2020-04-28T01:42:09.000Z | 2020-05-03T12:05:23.000Z | courses/python/cursoemvideo/exercicios/ex079.py | bdpcampos/public | dda57c265718f3e1cc0d6bce73f149051f5647ef | [
"MIT"
] | null | null | null | courses/python/cursoemvideo/exercicios/ex079.py | bdpcampos/public | dda57c265718f3e1cc0d6bce73f149051f5647ef | [
"MIT"
] | null | null | null |
numeros = list()
n = 0
while n != -1:
    n = int(input('Enter a number [type -1 to exit]: '))
if n in numeros:
        print('The number already exists in the list!')
elif n != -1:
numeros.append(n)
print(sorted(numeros)) | 14.235294 | 62 | 0.557851 | 37 | 242 | 3.648649 | 0.648649 | 0.02963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023392 | 0.293388 | 242 | 17 | 63 | 14.235294 | 0.766082 | 0 | 0 | 0 | 0 | 0 | 0.280992 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
101f65b6c939a98e74b665fece7691ae914ae821 | 1,268 | py | Python | utils/net_utils.py | zsc/End-to-end-ASR-Transformer | 3e02ff6210badb588134a81eb17f8c9ab59e735f | [
"Apache-2.0"
] | 7 | 2021-12-08T04:07:48.000Z | 2022-01-10T07:27:29.000Z | utils/net_utils.py | zsc/End-to-end-ASR-Transformer | 3e02ff6210badb588134a81eb17f8c9ab59e735f | [
"Apache-2.0"
] | 1 | 2021-12-08T05:14:47.000Z | 2021-12-08T05:14:47.000Z | utils/net_utils.py | zsc/End-to-end-ASR-Transformer | 3e02ff6210badb588134a81eb17f8c9ab59e735f | [
"Apache-2.0"
] | 1 | 2021-12-08T05:13:44.000Z | 2021-12-08T05:13:44.000Z | import logging
import numpy as np
import megengine.module as M
import megengine.functional as F
import megengine as mge
def pad_list(xs, pad_value):
    """Perform padding for the list of tensors."""
    n_batch = len(xs)
    max_len = max(x.shape[0] for x in xs)
    # Build the padded output with MegEngine ops (F.full + slice assignment).
    pad = F.full((n_batch, max_len) + tuple(xs[0].shape[1:]), pad_value,
                 dtype=xs[0].dtype)
    for i in range(n_batch):
        pad[i, : xs[i].shape[0]] = xs[i]
    return pad
def mask_by_length(xs, lengths, fill=0):
    """Mask tensor according to length."""
    assert xs.shape[0] == len(lengths)
    ret = F.full(xs.shape, fill, dtype=xs.dtype)
    for i, l in enumerate(lengths):
        ret[i, :l] = xs[i, :l]
    return ret
def make_pad_mask(lengths, maxlen=None):
if not isinstance(lengths, list):
lengths = lengths.tolist()
bs = int(len(lengths))
if maxlen is None:
maxlen = int(max(lengths))
seq_range = mge.Tensor(F.arange(0, maxlen, dtype="int32"))
seq_range_expand = F.broadcast_to(
F.reshape(seq_range, (1, seq_range.shape[0])), (bs, maxlen)
)
seq_length_expand = mge.Tensor(lengths).reshape(-1, 1)
mask = seq_range_expand >= seq_length_expand
return mask
def make_non_pad_mask(lengths, maxlen=None):
return ~make_pad_mask(lengths, maxlen)
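
# Hedged usage sketch (assumes MegEngine is installed; not part of the
# original module):
if __name__ == "__main__":
    # For lengths [2, 3] and maxlen 4 the padded positions come out True.
    demo_mask = make_pad_mask([2, 3], maxlen=4)
    print(demo_mask.numpy())
    # expected:
    # [[False False  True  True]
    #  [False False False  True]]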
| 27.565217 | 73 | 0.649842 | 205 | 1,268 | 3.868293 | 0.326829 | 0.050441 | 0.052963 | 0.075662 | 0.095839 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013986 | 0.210568 | 1,268 | 45 | 74 | 28.177778 | 0.778222 | 0.057571 | 0 | 0 | 0 | 0 | 0.004223 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 1 | 0.121212 | false | 0 | 0.151515 | 0.030303 | 0.393939 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
101f78131c2243a997c322ca3666f7ff8463501f | 3,513 | py | Python | plugins/jiraclient/alerta_jiraclient.py | p-24/alerta-contrib | ef014b6b1dc0c574f4634261f67b299ae9e6dc4d | [
"MIT"
] | null | null | null | plugins/jiraclient/alerta_jiraclient.py | p-24/alerta-contrib | ef014b6b1dc0c574f4634261f67b299ae9e6dc4d | [
"MIT"
] | null | null | null | plugins/jiraclient/alerta_jiraclient.py | p-24/alerta-contrib | ef014b6b1dc0c574f4634261f67b299ae9e6dc4d | [
"MIT"
] | null | null | null | import os
import datetime
from jira import JIRA
import logging
from alerta.exceptions import ApiError
try:
from alerta.plugins import app # alerta >= 5.0
except ImportError:
from alerta.app import app # alerta < 5.0
from alerta.plugins import PluginBase
LOG = logging.getLogger('alerta.plugins.jira')
JIRA_API_URL = os.environ.get('JIRA_API_URL') or app.config.get('JIRA_API_URL', None)
JIRA_API_USERNAME = os.environ.get('JIRA_API_USERNAME') or app.config.get('JIRA_API_USERNAME', '')
JIRA_API_PASSWORD = os.environ.get('JIRA_API_PASSWORD') or app.config.get('JIRA_API_PASSWORD', '')
JIRA_PROJECT_KEY = os.environ.get('JIRA_PROJECT_KEY') or app.config.get('JIRA_PROJECT_KEY', '')
JIRA_ISSUE_TYPE = os.environ.get('JIRA_ISSUE_TYPE') or app.config.get('JIRA_ISSUE_TYPE', 'Bug')
#JIRA_API_URL = 'http://uat-servicedesk.nvidiangn.net:8080'
#JIRA_API_USERNAME = 'moogqa'
#JIRA_API_PASSWORD = 'moogqa'
#JIRA_PROJECT_KEY = 'MOOG'
#JIRA_ISSUE_TYPE = 'Incident' # Default 'Bug'
class jiraClientEscalate(PluginBase):
def jirakey_retrieval(self,alert):
if(alert.attributes.get('jiraKey')):
return alert.attributes['jiraKey']
else:
alert.attributes['jiraKey'] = "None"
return "None"
def pre_receive(self, alert):
return alert
def post_receive(self, alert):
return
def status_change(self, alert, status, text):
if alert.status == status:
return
#if alert.status == 'ack' and alert.attributes.get("jiraKey") == "None":
if status == 'ack':
if self.jirakey_retrieval(alert) == "None":
#issue1 = jira.issue(alert.attributes.get("jiraKey"))
#if issue1.fields.status == "Closed" or issue1.fields.status == "Done"):
#options =
summary = "%s on %s" % (alert.event, alert.resource)
description = alert.text
if 'moreInfo' in alert.attributes:
description = description + alert.attributes['moreInfo']
jira_client = JIRA(options={'server': JIRA_API_URL}, basic_auth=(JIRA_API_USERNAME, JIRA_API_PASSWORD))
issue_dict = {
'project': {'key': JIRA_PROJECT_KEY},
'summary': summary,
'description': description,
'issuetype': {'name': JIRA_ISSUE_TYPE}
}
if 'Insight Id' in alert.attributes:
issue_dict['customfield_10900'] = alert.attributes['Insight Id']
if 'Customer' in alert.attributes:
issue_dict['customfield_10002'] = alert.attributes['Customer']
if 'jiraProduct' in alert.attributes:
issue_dict['customfield_10422'] = alert.attributes['jiraProduct']
                try:
                    new_issue = jira_client.create_issue(fields=issue_dict)
                    alert.attributes['jiraKey'] = str(new_issue)
                    jiralink = '%s/%s' % (JIRA_API_URL, alert.attributes['jiraKey'])
                    a = """<h3><a href="{}">{}</a></h3>""".format(jiralink, alert.attributes['jiraKey'])
                    alert.attributes['jiraLink'] = a
                except Exception as e:
                    raise RuntimeError("Jira: Failed to create issue - %s", e)
            else:
                # A ticket already exists for this alert; do not create a duplicate.
                raise ApiError("Jira: Ticket already exists", alert.attributes['jiraKey'])
return alert, status, text
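
# Hedged configuration sketch (values are illustrative; the variable names
# come from the os.environ/app.config lookups at the top of this file):
#   export JIRA_API_URL=https://jira.example.com
#   export JIRA_API_USERNAME=alerta-bot
#   export JIRA_API_PASSWORD=secret
#   export JIRA_PROJECT_KEY=OPS
#   export JIRA_ISSUE_TYPE=Incident
# The plugin fires when an alert's status changes to 'ack' and no 'jiraKey'
# attribute is present yet.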
| 39.033333 | 115 | 0.622545 | 410 | 3,513 | 5.160976 | 0.263415 | 0.127599 | 0.028355 | 0.037807 | 0.200378 | 0.140359 | 0 | 0 | 0 | 0 | 0 | 0.010622 | 0.249644 | 3,513 | 89 | 116 | 39.47191 | 0.792109 | 0.14489 | 0 | 0.067797 | 0 | 0 | 0.174029 | 0.007028 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067797 | false | 0.033898 | 0.152542 | 0.033898 | 0.338983 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
101fcc8c3be8f3a2db6f59e0aa2bd6aacbda05ed | 7,169 | py | Python | plot/plot_utils.py | irenetrampoline/clustering-interval-censored | f6ab06a6cf3098ffe006d1b95d1b4f1d158b0bc4 | [
"MIT"
] | 1 | 2022-02-03T08:47:45.000Z | 2022-02-03T08:47:45.000Z | plot/plot_utils.py | irenetrampoline/clustering-interval-censored | f6ab06a6cf3098ffe006d1b95d1b4f1d158b0bc4 | [
"MIT"
] | null | null | null | plot/plot_utils.py | irenetrampoline/clustering-interval-censored | f6ab06a6cf3098ffe006d1b95d1b4f1d158b0bc4 | [
"MIT"
] | null | null | null | import numpy as np
from matplotlib import pyplot as plt
from sklearn.manifold import TSNE
import torch
def clean_plot():
ax = plt.subplot(111)
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
def plot_delta_comp(true_delta, pred_delta, fname, labels=[], title=None):
clean_plot()
if len(labels) > 0:
uniq_labels = np.unique(labels)
for lab in uniq_labels:
lab_idx = np.where(labels == lab)[0]
plt.plot(true_delta[lab_idx], pred_delta[lab_idx],'.')
else:
plt.plot(true_delta, pred_delta, '.')
if title:
plt.title(title)
plt.xlabel('true delta')
plt.ylabel('predicted delta')
plt.savefig(fname)
plt.close()
def plot_latent_labels(test_z, test_labels, fname, title=None):
if type(test_z) != np.ndarray:
test_z = test_z.detach().numpy()
plt.figure()
clean_plot()
N_patients, N_dims = test_z.shape
N_clusters = len(np.unique(test_labels))
if N_dims == 2:
for c in range(N_clusters):
            c_ix = np.where(test_labels == c)[0]
plt.plot(test_z[c_ix,0], test_z[c_ix,1],'.')
plt.xlabel('latent dim 1')
plt.ylabel('latent dim 2')
else:
z_transformed = TSNE(n_components=2).fit_transform(test_z)
for c in range(N_clusters):
c_ix = np.where(test_labels == c)[0]
plt.plot(z_transformed[c_ix,0], z_transformed[c_ix,1],'.')
plt.xlabel('latent dim 1')
plt.ylabel('latent dim 2')
plt.xlim(-20,20)
plt.ylim(-20,20)
if title:
plt.title(title)
plt.savefig(fname)
plt.close()
# print('Figure saved to %s' % fname)
def plot_latent(model, test_data_dict, fname='../figs/latent_test.pdf'):
device = torch.device('cpu')
test_X = torch.tensor(test_data_dict['obs_t_collect']).to(device)
test_Y = torch.tensor(test_data_dict['Y_collect']).to(device)
test_z, _ = model.get_mu(test_X,test_Y)
test_z = test_z.detach().numpy()
test_labels = model.subtypes_km.predict(test_z)
plt.figure()
clean_plot()
N_patients, N_dims = test_z.shape
N_clusters = len(np.unique(test_labels))
if N_dims == 2:
for c in range(N_clusters):
            c_ix = np.where(test_labels == c)[0]
plt.plot(test_z[c_ix,0], test_z[c_ix,1],'.')
plt.xlabel('z1')
plt.ylabel('z2')
plt.savefig(fname)
else:
z_transformed = TSNE(n_components=2).fit_transform(test_z)
for c in range(N_clusters):
c_ix = np.where(test_labels == c)[0]
plt.plot(z_transformed[c_ix,0], z_transformed[c_ix,1],'.')
plt.xlabel('z1')
plt.ylabel('z2')
plt.savefig(fname)
plt.close()
print('Figure saved to %s' % fname)
return
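
# Hedged usage sketch (synthetic data; file name is illustrative):
#   z = np.random.randn(100, 2)
#   labels = np.random.randint(0, 3, size=100)
#   plot_latent_labels(z, labels, 'latent_demo.pdf', title='demo')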
def plot_subtypes(subtypes, is_sigmoid, plot_true=True, fname=None):
if is_sigmoid:
plot_sigmoid(subtypes, plot_true, fname=fname)
else:
plot_quadratic(subtypes, plot_true, fname=fname)
def plot_quadratic(subtypes, plot_true, max_time=4, fname=None):
"""
    Given learned subtypes for the quadratic function, plot them
"""
K = len(subtypes)
D = len(subtypes[0][0][0])
feat_names = [str(i) for i in range(D)]
# plt.figure(figsize=(12,10))
ax = plt.subplot(111)
colors = ['#ff7f0e','#1f77b4', '#2ca02c', '#d62728', '#9467bd']
f_ix = 0
for c in range(K):
plot_col_quadratic(ax, subtypes[c][0][0][f_ix], subtypes[c][1][0][f_ix], subtypes[c][2][0][f_ix], max_time, colors[c])
if plot_true:
plot_col_quadratic(ax, 2., -7.8, 7.2, max_time, colors[c], ':')
plot_col_quadratic(ax, 0., 0., 2., max_time, colors[c], ':')
ax.set_xlim([0,max_time])
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.grid()
if fname==None:
fname = '../figs/quadratic_subtypes.pdf'
plt.savefig(fname)
print('Figure saved to %s' % fname)
def plot_sigmoid(subtypes, plot_true, fname=None):
"""
Given learned subtypes for sigmoid function, plot them
"""
K = len(subtypes)
D = len(subtypes[0][0][0])
max_time = 10
feat_names = [str(i) for i in range(D)]
# plt.figure(figsize=(12,10))
fig, axs = plt.subplots(1,3, figsize=(12,4))
colors = ['#ff7f0e','#1f77b4', '#2ca02c', '#d62728', '#9467bd']
# Plot mean (with shaded std) for each dimension with each subtype (healthy/parkinson's) plotted on the same graph
for f_ix, (col,ax) in enumerate(zip(feat_names,axs.flatten())):
for c in range(K):
plot_col_sigmoid(ax, subtypes[c][0][0][f_ix], subtypes[c][1][0][f_ix],max_time, colors[c])
ax.title.set_text(col)
ax.set_xlim([0,max_time])
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.grid()
if fname==None:
fname = '../figs/sigmoid_subtypes.pdf'
plt.savefig(fname)
print('Figure saved to %s' % fname)
# def plot_col(c_ix,data_dict, s_value=None, color='b'):
# if s_value == s_value:
# s_idx = np.where(data_dict['s_collect'] == s_value)[0]
# times = data_dict['t_collect'][s_idx].flatten()
# vals = data_dict['Y_collect'][s_idx,:,c_ix].flatten()
# valid_idx = np.where(vals != -1000.)[0]
# else:
# s_idx = np.where(data_dict['s_collect'] == s_value)[0]
# times = data_dict['t_collect'].flatten()
# vals = data_dict['Y_collect'][:,:,c_ix].flatten()
# valid_idx = np.where(vals != -1000.)[0]
# val_mean, times1, _ = binned_statistic(times[valid_idx], vals[valid_idx], statistic='mean', bins=20)
# val_std, times2, _ = binned_statistic(times[valid_idx], vals[valid_idx], statistic='std', bins=20)
# valid_idx = np.where(~np.isnan(val_mean))[0]
# ax.plot(times1[valid_idx], val_mean[valid_idx], color, linestyle='--')
# p1 = val_mean[valid_idx] - val_std[valid_idx]
# p2 = val_mean[valid_idx] + val_std[valid_idx]
# t = times1[valid_idx]
# ax.fill_between(t, p1, p2, color=color, alpha=0.1)
def plot_col_sigmoid(ax, sig0, sig1, max_time, color='b'):
xs = np.linspace(0,max_time, 100)
ys = [sigmoid(sig0 + sig1*x) for x in xs]
ax.plot(xs,ys, color, linewidth=5)
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def plot_col_quadratic(ax, a, b, c, max_time, color='b',linestyle='-'):
xs = np.linspace(0,max_time, 100)
ys = [quad_function(a,b,c,x) for x in xs]
ax.plot(xs,ys, color, linewidth=5, linestyle=linestyle)
def quad_function(a,b,c,X):
return a*X*X + b*X + c | 33.189815 | 126 | 0.601339 | 1,086 | 7,169 | 3.767035 | 0.171271 | 0.019555 | 0.043999 | 0.04571 | 0.660963 | 0.616231 | 0.543632 | 0.526522 | 0.498411 | 0.474456 | 0 | 0.029681 | 0.238666 | 7,169 | 216 | 127 | 33.189815 | 0.719861 | 0.19124 | 0 | 0.631944 | 0 | 0 | 0.065437 | 0.014097 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076389 | false | 0 | 0.027778 | 0.013889 | 0.125 | 0.020833 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1020081ef4c06bc2f35f226d1efa054552b6f6b4 | 2,765 | py | Python | Road Lane Detection.py | teejaytanmay/Visual-Recognition | a4257151c7a5667910184780554c5a7b9f6972b0 | [
"MIT"
] | null | null | null | Road Lane Detection.py | teejaytanmay/Visual-Recognition | a4257151c7a5667910184780554c5a7b9f6972b0 | [
"MIT"
] | null | null | null | Road Lane Detection.py | teejaytanmay/Visual-Recognition | a4257151c7a5667910184780554c5a7b9f6972b0 | [
"MIT"
] | null | null | null | import numpy as np
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import math
def region_of_interest(img, vertices):
mask = np.zeros_like(img)
match_mask_color = 255
cv2.fillPoly(mask, vertices, match_mask_color)
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=3):
line_img = np.zeros(
(
img.shape[0],
img.shape[1],
3
),
dtype=np.uint8
)
img = np.copy(img)
if lines is None:
return
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(line_img, (x1, y1), (x2, y2), color, thickness)
img = cv2.addWeighted(img, 0.9, line_img, 1.0, 0.0)
return img
def pipeline(image):
height = image.shape[0]
width = image.shape[1]
region_of_interest_vertices = [
(width*1/10, height),
(width/2 , height / 2),
(width*8.5/10, height),
]
gray_image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
cannyed_image = cv2.Canny(gray_image, 100, 200)
cropped_image = region_of_interest(
cannyed_image,
np.array(
[region_of_interest_vertices],
np.int32
),
)
lines = cv2.HoughLinesP(
cropped_image,
rho=6,
theta=np.pi / 60,
threshold=160,
lines=np.array([]),
minLineLength=40,
maxLineGap=25
)
left_line_x = []
left_line_y = []
right_line_x = []
right_line_y = []
for line in lines:
for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # avoid division by zero on vertical segments
            slope = float(y2 - y1) / float(x2 - x1)
            if math.fabs(slope) < 0.5:
                continue  # nearly horizontal segments are not lane lines
if slope <= 0:
left_line_x.extend([x1, x2])
left_line_y.extend([y1, y2])
else:
right_line_x.extend([x1, x2])
right_line_y.extend([y1, y2])
min_y = int(image.shape[0]*0.6)
max_y = int(image.shape[0]*1.2)
poly_left = np.poly1d(np.polyfit(
left_line_y,
left_line_x,
deg=1
))
left_x_start = int(poly_left(max_y))
left_x_end = int(poly_left(min_y))
poly_right = np.poly1d(np.polyfit(
right_line_y,
right_line_x,
deg=1
))
right_x_start = int(poly_right(max_y))
right_x_end = int(poly_right(min_y))
line_image = draw_lines(
image,
[[
[left_x_start, max_y, left_x_end, min_y],
[right_x_start, max_y, right_x_end, min_y],
]],
thickness=20,
)
return line_image
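
# Worked example of the slope rule above: in image coordinates y grows
# downward, so a segment from (100, 400) to (300, 300) has slope
# (300 - 400) / (300 - 100) = -0.5 and is assigned to the left lane, while
# positive-slope segments are assigned to the right lane.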
img14 = mpimg.imread('road.jpeg')
img15 = pipeline(img14)
cv2.imshow('roads',img15)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imwrite('roads1.jpeg',img15)
| 22.663934 | 68 | 0.570705 | 391 | 2,765 | 3.810742 | 0.28133 | 0.032215 | 0.042953 | 0.016107 | 0.155705 | 0.041611 | 0.041611 | 0.041611 | 0.041611 | 0.041611 | 0 | 0.05907 | 0.308137 | 2,765 | 121 | 69 | 22.85124 | 0.719812 | 0 | 0 | 0.1 | 0 | 0 | 0.009042 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03 | false | 0.01 | 0.05 | 0 | 0.12 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10225d660f8c860cafe80123da3507b0517cf5bd | 6,007 | py | Python | delta_tracking_fusion_rc_car/scripts/cube_marker_publisher.py | deltaautonomy/delta_rc_car | 398d25704361bc80f94ec4663263182f24cafdc2 | [
"BSD-3-Clause"
] | 1 | 2020-02-11T20:30:19.000Z | 2020-02-11T20:30:19.000Z | delta_tracking_fusion_rc_car/scripts/cube_marker_publisher.py | deltaautonomy/delta_rc_car | 398d25704361bc80f94ec4663263182f24cafdc2 | [
"BSD-3-Clause"
] | null | null | null | delta_tracking_fusion_rc_car/scripts/cube_marker_publisher.py | deltaautonomy/delta_rc_car | 398d25704361bc80f94ec4663263182f24cafdc2 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
Author : Heethesh Vhavle
Email : heethesh@cmu.edu
Version : 1.0.0
Date : Apr 13, 2019
'''
# Python 2/3 compatibility
from __future__ import print_function, absolute_import, division
# ROS modules
import rospy
# ROS messages
from geometry_msgs.msg import Point
from visualization_msgs.msg import Marker
from jsk_rviz_plugins.msg import Pictogram
########################### Functions ###########################
def make_label(text, position, frame_id='/map', marker_id=0,
duration=0.5, color=[1.0, 1.0, 1.0]):
"""
Helper function for generating visualization markers
Args:
text (str): Text string to be displayed.
position (list): List containing [x,y,z] positions
frame_id (str): ROS TF frame id
marker_id (int): Integer identifying the label
        duration (float): How long, in seconds, the label will be displayed for
color (list): List of label color floats from 0 to 1 [r,g,b]
Returns:
Marker: A text view marker which can be published to RViz
"""
marker = Marker()
marker.header.frame_id = frame_id
marker.header.stamp = rospy.Time.now()
marker.id = marker_id
marker.type = marker.TEXT_VIEW_FACING
marker.text = text
marker.action = marker.ADD
marker.scale.x = 0.3
marker.scale.y = 0.3
marker.scale.z = 0.3
marker.color.a = 1.0
marker.color.r = color[0]
marker.color.g = color[1]
marker.color.b = color[2]
marker.lifetime = rospy.Duration(duration)
marker.pose.orientation.w = 1.0
marker.pose.position.x = position[0]
marker.pose.position.y = position[1]
marker.pose.position.z = position[2]
return marker
def make_pictogram(character, position, frame_id='/map',
duration=0.5, color=[1.0, 1.0, 1.0]):
"""
Helper function for generating visualization markers
Args:
character (str): Character (icon) to be displayed.
position (list): List containing [x,y,z] positions
frame_id (str): ROS TF frame id
        duration (float): How long, in seconds, the pictogram will be displayed for
color (list): List of label color floats from 0 to 1 [r,g,b]
Returns:
Pictogram: A jsk_rviz_plugins/Pictogram message which can be published to RViz
"""
msg = Pictogram()
msg.action = Pictogram.ADD
msg.header.frame_id = frame_id
msg.header.stamp = rospy.Time.now()
msg.mode = Pictogram.PICTOGRAM_MODE
msg.character = character
msg.speed = 1.0
msg.ttl = duration
msg.size = 0.5
msg.color.r = color[0]
msg.color.g = color[1]
msg.color.b = color[2]
msg.color.a = 1.0
msg.pose.orientation.x = 0.0
msg.pose.orientation.y = -1.0
msg.pose.orientation.z = 0.0
msg.pose.orientation.w = 1.0
msg.pose.position.x = position[0]
msg.pose.position.y = position[1]
msg.pose.position.z = position[2]
return msg
def make_trajectory(trajectory, frame_id='/map', marker_id=0,
duration=0.5, color=[1.0, 1.0, 1.0]):
"""
Helper function for generating visualization markers
Args:
trajectory (array-like): (n, 2) array-like trajectory data
frame_id (str): ROS TF frame id
marker_id (int): Integer identifying the trajectory
        duration (float): How long, in seconds, the trajectory will be displayed for
color (list): List of color floats from 0 to 1 [r,g,b]
Returns:
Marker: A trajectory marker message which can be published to RViz
"""
marker = Marker()
marker.header.stamp = rospy.Time.now()
marker.header.frame_id = frame_id
marker.id = marker_id
marker.type = marker.LINE_STRIP
marker.action = marker.ADD
for x, y in trajectory:
point = Point()
point.x = x
point.y = y
point.z = 0.15
marker.points.append(point)
marker.scale.x = 0.03
marker.color.r = color[0]
marker.color.g = color[1]
marker.color.b = color[2]
marker.color.a = 1.0
marker.lifetime = rospy.Duration(duration)
return marker
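
# Hedged usage sketch (topic name is illustrative):
#   traj = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
#   marker = make_trajectory(traj, frame_id='/map', marker_id=1)
#   rospy.Publisher('trajectory_marker', Marker, queue_size=1).publish(marker)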
def make_cuboid(position, scale, frame_id='/map', marker_id=0,
duration=0, color=[1.0, 1.0, 1.0]):
"""
Helper function for generating visualization markers
Args:
position (list): List containing [x, y, z] positions
scale (list): List containing [x, y, z] dimensions
frame_id (str): ROS TF frame id
marker_id (int): Integer identifying the label
        duration (float): How long, in seconds, the cube will be displayed for (0 means forever)
color (list): List of label color floats from 0 to 1 [r, g, b]
Returns:
Marker: A cube marker which can be published to RViz
"""
marker = Marker()
marker.header.frame_id = frame_id
marker.id = marker_id
marker.type = marker.CUBE
marker.text = str(marker_id)
marker.action = marker.ADD
marker.scale.x = scale[0]
marker.scale.y = scale[1]
marker.scale.z = scale[2]
marker.color.r = color[0]
marker.color.g = color[1]
marker.color.b = color[2]
marker.color.a = 1.0
marker.lifetime = rospy.Duration(duration)
marker.pose.orientation.w = 1.0
marker.pose.position.x = position[0]
marker.pose.position.y = position[1]
marker.pose.position.z = position[2]
return marker
def publisher():
# Setup node
rospy.init_node('marker_publisher', anonymous=True)
pub = rospy.Publisher('marker_publisher', Marker, queue_size=10)
# Publish rate
r = rospy.Rate(0.25)
# Randomly publish some data
while not rospy.is_shutdown():
# Create the message array
msg = make_cuboid([0, 0, 0], [0.05, 0.05, 0.05])
# Header stamp and publish the message
msg.header.stamp = rospy.Time.now()
pub.publish(msg)
# Sleep
r.sleep()
if __name__ == '__main__':
try:
publisher()
except rospy.ROSInterruptException:
pass
| 29.885572 | 86 | 0.633428 | 867 | 6,007 | 4.319493 | 0.179931 | 0.011749 | 0.006409 | 0.008545 | 0.631242 | 0.591989 | 0.553004 | 0.505741 | 0.479306 | 0.479306 | 0 | 0.02921 | 0.247711 | 6,007 | 200 | 87 | 30.035 | 0.799513 | 0.350757 | 0 | 0.424528 | 0 | 0 | 0.015573 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04717 | false | 0.009434 | 0.04717 | 0 | 0.132075 | 0.009434 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10243321bc6e7aa0144c0fc0afdb19986290d3db | 6,491 | py | Python | policy/pen_in_cup_controller.py | YunchuZhang/Visually-Grounded-Library-of-Behaviors-for-Generalizing-Manipulation-Across-Objects-Configurations- | 896afda942dfc04e4aaad2ee751c32df1eb17913 | [
"MIT"
] | 1 | 2022-03-14T22:25:17.000Z | 2022-03-14T22:25:17.000Z | policy/pen_in_cup_controller.py | YunchuZhang/Visually-Grounded-Library-of-Behaviors | 896afda942dfc04e4aaad2ee751c32df1eb17913 | [
"MIT"
] | null | null | null | policy/pen_in_cup_controller.py | YunchuZhang/Visually-Grounded-Library-of-Behaviors | 896afda942dfc04e4aaad2ee751c32df1eb17913 | [
"MIT"
] | null | null | null | from policy.policy import Policy
import os
import numpy as np
import pcp_utils
from pcp_utils.utils import Config
from pcp_utils.load_ddpg import load_policy
class PenincupController(Policy):
class Config(Config):
policy_name = "pen_in_cup_controller"
policy_model_path = ""
model_name = None
max_path_length = 110
def __init__(self, config:Config):
self.config = config
self.policy_name = config.policy_name
self.max_path_length = config.max_path_length
def run_forwards(self, env, num_rollouts, obj, path_length=None):
acc_reward = 0
acc_success = 0
obj_xpos = env.env.sim.data.get_body_xpos('object0').copy()
obj_xmat = env.env.sim.data.get_body_xmat('object0').copy()
bbox_points = pcp_utils.np_vis.compute_bounding_box_from_obj_xml(obj.obj_xml_file,obj_xpos,obj_xmat,obj.scale)
bounds, center, extents = pcp_utils.np_vis.get_bbox_attribs(bbox_points)
self.center, self.extents = center, extents
for iter_id in range(num_rollouts):
print("ITERATION NUMBER ", iter_id)
obs = env.reset()
#ep_actions, ep_observations, ep_infos
success, cur_reward = self.goToGoal(env, obs)
acc_reward += cur_reward
acc_success += success
success_rate = acc_success/num_rollouts
avg_reward = acc_reward/num_rollouts
return {'avg_reward':avg_reward, 'success_rate':success_rate}
def goToGoal(self, env, lastObs):
goal = lastObs['desired_goal']
objectPos = lastObs['observation'][3:6]
object_rel_pos = lastObs['observation'][6:9]
episodeAcs = []
episodeObs = []
episodeInfo = []
cur_reward = []
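        # Scripted phases of this controller: (1) move the gripper ~8 cm above
        # the object, (2) descend onto it, (3) close the gripper for a few
        # steps, (4) lift ~22 cm, (5) carry to the (raised) goal, (6) open the
        # gripper to release the object and wait out the remaining steps.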
object_oriented_goal = object_rel_pos.copy()
# object_oriented_goal[2] += self.extents[2]/2.0 # add height of half bbox
object_oriented_goal[2] += 0.08 # first make the gripper go slightly above the object
timeStep = 0 #count the total number of timesteps
episodeObs.append(lastObs)
while np.linalg.norm(object_oriented_goal) >= 0.005 and timeStep <= env._max_episode_steps:
env.render()
action = np.zeros(4,)
object_oriented_goal = object_rel_pos.copy()
object_oriented_goal[2] += 0.08
for i in range(len(object_oriented_goal)):
action[i] = object_oriented_goal[i]*10
action[len(action)-1] = 0.05 #open
obsDataNew, reward, done, info = env.step(action)
timeStep += 1
episodeAcs.append(action)
episodeInfo.append(info)
episodeObs.append(obsDataNew)
cur_reward.append(reward)
objectPos = obsDataNew['observation'][3:6]
object_rel_pos = obsDataNew['observation'][6:9]
while np.linalg.norm(object_rel_pos) >= 0.02 and timeStep <= env._max_episode_steps:
env.render()
action = np.zeros(4,)
for i in range(len(object_rel_pos)):
action[i] = object_rel_pos[i]*10
# action[len(action)-1] = -0.005
action[len(action)-1] -= 0.005
obsDataNew, reward, done, info = env.step(action)
timeStep += 1
episodeAcs.append(action)
episodeInfo.append(info)
episodeObs.append(obsDataNew)
cur_reward.append(reward)
objectPos = obsDataNew['observation'][3:6]
object_rel_pos = obsDataNew['observation'][6:9]
## ... for properly grasping the cup before lifting ... ##
for i in range(12):
env.render()
action = np.zeros(4,)
action[len(action)-1] = -0.005
obsDataNew, reward, done, info = env.step(action)
timeStep += 1
episodeAcs.append(action)
episodeInfo.append(info)
episodeObs.append(obsDataNew)
cur_reward.append(reward)
# now that I have grasped the object I am just going to lift it
lift_pos = objectPos.copy()
lift_pos[2] += 0.22
# print(f'lift_pos: {lift_pos}')
# print(f'objectPos: {objectPos}')
# print(lift_pos)
while np.linalg.norm(lift_pos - objectPos) >= 0.05 and timeStep <= env._max_episode_steps:
env.render()
action = np.zeros(4,)
for j in range(len(lift_pos - objectPos)):
action[j] = (lift_pos - objectPos)[j]*10
action[len(action)-1] = -0.005
obsDataNew, reward, done, info = env.step(action)
timeStep += 1
episodeAcs.append(action)
episodeInfo.append(info)
episodeObs.append(obsDataNew)
cur_reward.append(reward)
objectPos = obsDataNew['observation'][3:6]
object_rel_pos = obsDataNew['observation'][6:9]
goal[2] += 0.21
while np.linalg.norm(goal - objectPos) >= 0.01 and timeStep <= env._max_episode_steps:
env.render()
action = np.zeros(4,)
for i in range(len(goal - objectPos)):
action[i] = (goal - objectPos)[i]*10
action[len(action)-1] = -0.005
obsDataNew, reward, done, info = env.step(action)
timeStep += 1
episodeAcs.append(action)
episodeInfo.append(info)
episodeObs.append(obsDataNew)
cur_reward.append(reward)
objectPos = obsDataNew['observation'][3:6]
object_rel_pos = obsDataNew['observation'][6:9]
while True: #limit the number of timesteps in the episode to a fixed duration
env.render()
action = np.zeros(4,)
            action[len(action)-1] = 0.05 # open the gripper to release the object (0.05 is labeled 'open' above)
obsDataNew, reward, done, info = env.step(action)
timeStep += 1
episodeAcs.append(action)
episodeInfo.append(info)
episodeObs.append(obsDataNew)
cur_reward.append(reward)
objectPos = obsDataNew['observation'][3:6]
object_rel_pos = obsDataNew['observation'][6:9]
if timeStep >= env._max_episode_steps: break
success = 0
        cur_reward = np.sum(cur_reward)
        if cur_reward > -1 * env._max_episode_steps:
            success = 1
return success, cur_reward
#return episodeAcs, episodeObs, episodeInfo | 36.880682 | 118 | 0.594824 | 786 | 6,491 | 4.723919 | 0.206107 | 0.031511 | 0.035551 | 0.030164 | 0.507406 | 0.489362 | 0.457043 | 0.451656 | 0.445193 | 0.445193 | 0 | 0.027104 | 0.300878 | 6,491 | 176 | 119 | 36.880682 | 0.791097 | 0.085965 | 0 | 0.5 | 0 | 0 | 0.036849 | 0.00355 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022388 | false | 0 | 0.044776 | 0 | 0.097015 | 0.007463 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1024e6add7f07ac567ae890cfd5de37be8f88c2b | 5,813 | py | Python | examples/slope_dist.py | NAU-PIXEL/roughness | dfaa3d2bc448a2ca19cb2d6001cc5dcf8ee26f82 | [
"MIT"
] | null | null | null | examples/slope_dist.py | NAU-PIXEL/roughness | dfaa3d2bc448a2ca19cb2d6001cc5dcf8ee26f82 | [
"MIT"
] | 2 | 2021-11-18T16:26:19.000Z | 2021-11-18T16:39:08.000Z | examples/slope_dist.py | NAU-PIXEL/roughness | dfaa3d2bc448a2ca19cb2d6001cc5dcf8ee26f82 | [
"MIT"
] | 1 | 2021-10-09T08:01:11.000Z | 2021-10-09T08:01:11.000Z | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: percent
# format_version: '1.3'
# jupytext_version: 1.12.0
# kernelspec:
# display_name: 'Python 3.7.7 64-bit (''.venv'': poetry)'
# name: python3
# ---
# %% [markdown]
# # Slope distributions
#
# Several examples of slope distributions computed from analytic equations and raytracing binning code. Most plots show probability of a surface slope occurring vs facet slope in [0, 90] degrees (where 0 degrees is a flat, level facet).
# %%
import numpy as np
import matplotlib.pyplot as plt
from roughness import config as cfg
from roughness import roughness as rn
from roughness import helpers as rh
plt.style.use("dark_background")
SAVEFIGS = False
lookup = rn.load_los_lookup(cfg.FLOOKUP)
# %% [markdown]
# ## RMS (Shepard 1995)
#
# Analytical RMS slope distribution using equations from Shepard (1995).
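#
# A hedged sketch of one standard model behind such curves (my reconstruction;
# the exact expression used here lives in `roughness.slope_dist`): if the two
# slope components of an isotropic Gaussian surface are i.i.d. normal with
# standard deviation $\sigma = \tan\theta_{rms}$, the slope magnitude
# $\tan\theta$ is Rayleigh distributed, giving
#
# $$P(\theta) = \frac{\tan\theta\,\sec^2\theta}{\tan^2\theta_{rms}}
# \exp\!\left(-\frac{\tan^2\theta}{2\tan^2\theta_{rms}}\right)$$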
# %%
theta = np.arange(90)
rms_arr = (5, 10, 20, 30, 40, 50)
plt.figure()
for rms in rms_arr:
p_theta = rn.slope_dist(np.radians(theta), np.radians(rms), "rms")
plt.plot(theta, p_theta, label=f"RMS$={rms}^o$")
plt.title("Shepard gaussian slope distribution vs RMS")
plt.ylabel("$P(\\theta)$")
plt.xlabel("Facet $\\theta$ angle [deg]")
plt.xlim(0, 90)
plt.ylim(0, 0.15)
plt.legend(ncol=2)
if SAVEFIGS:
plt.savefig(cfg.FIG_SLOPE_DIST_SHEP, dpi=300)
# %% [markdown]
# ## Theta-bar (Hapke 1984)
#
# Analytical slope distributions using equations from Hapke (1984).
# %%
theta = np.arange(90)
tbar_arr = (5, 10, 20, 30, 40, 50)
plt.figure()
for tbar in tbar_arr:
p_theta = rn.slope_dist(np.radians(theta), np.radians(tbar), "tbar")
label = "$\\bar{\\theta}=$"
plt.plot(theta, p_theta, label=label + f"${tbar}^o$")
plt.title("Hapke gaussian slope distributions vs theta-bar")
plt.ylabel("$P(\\theta)$")
plt.xlabel("Facet $\\theta$ angle [deg]")
plt.xlim(0, 90)
plt.ylim(0, 0.15)
plt.legend(ncol=2)
if SAVEFIGS:
plt.savefig(cfg.FIG_SLOPE_DIST_HAPKE, dpi=300)
# %% [markdown]
# ## Slope distributions from lineofsight lookup tables
# %%
rms_arr = (5, 10, 20, 30, 40, 50)
plt.figure()
for rms in rms_arr:
facet_table = rn.get_los_table(rms, 0, 270, lookup, "total")
p_theta = np.sum(facet_table, axis=0) / np.nansum(facet_table)
plt.plot(facet_table.theta, p_theta, label=f"RMS$={rms}^o$")
plt.title("Synthetic gaussian surface RMS rough slope distributions")
plt.ylabel("$P(\\theta)$")
plt.xlabel("Facet $\\theta$ angle [deg]")
plt.xlim(0, 90)
plt.ylim(0, None)
plt.legend(ncol=2)
if SAVEFIGS:
plt.savefig(cfg.FIG_SLOPE_DIST_GSURF, dpi=300)
# %% [markdown]
# ## Visible facets from lineofsight lookup vs RMS
#
# Viewing azimuth is particularly important for higher RMS values and shows the distinction between syn-facing slopes (surface facets oriented towards the spacecraft) and anti-facing slopes (surface facets oriented away from the spacecraft).
# %%
azs = lookup.az.values
theta = lookup.theta.values
rms_arr = (5, 10, 20, 30, 40, 50)
sc_theta = 60
sc_az = 270
plt.figure()
for rms in rms_arr:
    # Compute prob of total facets being visible from sc_az
facet_table = rn.get_los_table(rms, sc_theta, sc_az, lookup, "total")
view_table = rn.get_view_table(rms, sc_theta, sc_az, lookup)
vis_facet_table = facet_table * view_table
# Get syn facets facing sc_az and anti facets 180 degrees from sc_az
p_theta270 = vis_facet_table[np.argmin(np.abs(azs - 270))]
p_theta90 = vis_facet_table[np.argmin(np.abs(azs - 90))]
# Normalize and plot both curves
p_theta270 = p_theta270 / np.nansum(p_theta270)
p_theta90 = p_theta90 / np.nansum(p_theta90)
(line,) = plt.plot(theta, p_theta270, label=f"RMS={rms}$^o$")
plt.plot(theta, p_theta90, ls=":", lw=3, c=line.get_color())
plt.title(f"Visible slopes vs RMS (with view angle $\\theta$={sc_theta}$^o$)")
plt.ylabel("$P(\\theta)$")
plt.xlabel("Facet $\\theta$ angle [deg]")
plt.xlim(0, 90)
plt.ylim(0, None)
# Make label lines for legend
plt.plot([0, 1e-3], [0, 0], "w-", label="syn facets")
plt.plot([0, 1e-3], [0, 0], "w:", lw=3, label="anti facets")
plt.legend()
if SAVEFIGS:
plt.savefig(cfg.FIG_SLOPE_DIST_VIS_RMS, dpi=300)
# %% [markdown]
# ## Visible facets from lineofsight lookup vs view angle
#
# Viewing angle (spacecraft emission angle) mainly affects anti-facing slopes (surface facets oriented away from the spacecraft), but has little effect on syn-facing slopes (surface facets oriented towards the spacecraft), even at high roughness.
# %%
azs = lookup.az.values
theta = lookup.theta.values
sc_thetas = (0, 15, 30, 45, 60, 75)
rms = 40
sc_az = 270
plt.figure()
for sc_theta in sc_thetas:
    # Compute prob of total facets being visible from sc_az
facet_table = rn.get_los_table(rms, sc_theta, sc_az, lookup, "total")
view_table = rn.get_view_table(rms, sc_theta, sc_az, lookup)
vis_facet_table = facet_table * view_table
# Get syn facets facing sc_az and anti facets 180 degrees from sc_az
p_theta270 = vis_facet_table[np.argmin(np.abs(azs - 270))]
p_theta90 = vis_facet_table[np.argmin(np.abs(azs - 90))]
# Normalize and plot both curves
p_theta270 = p_theta270 / np.nansum(p_theta270)
p_theta90 = p_theta90 / np.nansum(p_theta90)
(line,) = plt.plot(
theta, p_theta270, label=f"view $\\theta$={sc_theta}$^o$"
)
plt.plot(theta, p_theta90, ls=":", lw=3, c=line.get_color())
plt.title(f"Visible slopes vs view angle (with RMS={rms}$^o$)")
plt.ylabel("$P(\\theta)$")
plt.xlabel("Facet $\\theta$ angle [deg]")
plt.xlim(0, 90)
plt.ylim(0, None)
# Make label lines for legend
plt.plot([0, 1e-3], [0, 0], "w-", label="syn facets")
plt.plot([0, 1e-3], [0, 0], "w:", lw=3, label="anti facets")
plt.legend()
if SAVEFIGS:
plt.savefig(cfg.FIG_SLOPE_DIST_VIS_THETA, dpi=300)
| 32.841808 | 245 | 0.691897 | 943 | 5,813 | 4.138918 | 0.209968 | 0.03587 | 0.018447 | 0.019985 | 0.641558 | 0.635409 | 0.619523 | 0.588522 | 0.584166 | 0.529849 | 0 | 0.04908 | 0.158782 | 5,813 | 176 | 246 | 33.028409 | 0.74908 | 0.307242 | 0 | 0.638095 | 0 | 0 | 0.160575 | 0.012352 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047619 | 0 | 0.047619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1027b258a2503b3e6ec7ec57cd558eee5891c1c8 | 15,939 | py | Python | tentacle/strategy.py | splendor-kill/ml-five | 4da5c192bbdc9175542833a86f5ec65fc955dc10 | [
"MIT"
] | 72 | 2016-10-20T13:01:30.000Z | 2021-12-16T09:17:32.000Z | tentacle/strategy.py | splendor-kill/ml-five | 4da5c192bbdc9175542833a86f5ec65fc955dc10 | [
"MIT"
] | null | null | null | tentacle/strategy.py | splendor-kill/ml-five | 4da5c192bbdc9175542833a86f5ec65fc955dc10 | [
"MIT"
] | 16 | 2016-11-25T10:43:59.000Z | 2018-07-12T16:12:03.000Z | from _hashlib import new
import pickle
import random
from scipy.special import expit
import matplotlib.pyplot as plt
import numpy as np
from tentacle.board import Board
from tentacle.dfs import Searcher
from tentacle.dnn3 import DCNN3
from tentacle.game import Game
from tentacle.mcts import MonteCarlo
from tentacle.mcts1 import MCTS1
class Strategy(object):
def __init__(self):
self.stand_for = None
self.is_learning = False
def needs_update(self):
return self.is_learning
def update(self, old, new):
pass
def update_at_end(self, old, new):
pass
def preferred_move(self, board):
pass
def preferred_board(self, old, moves, context):
'''
Parameters
------------
old : board
the old board
moves: list(board)
all possible moves
context: hash
game context
Returns:
------------
board : board
the preferred board
'''
if not moves:
return old
if len(moves) == 1:
return moves[0]
board_most_value = max(moves, key=lambda m: self.board_value(m, context))
return board_most_value
def board_value(self, board, context):
'''estimate the value of board
Returns:
------------
value : float
the estimate value
'''
pass
def close(self):
pass
def save(self, file):
pass
def load(self, file):
pass
def setup(self):
pass
def mind_clone(self):
pass
class StrategyProb(Strategy):
'''base class for using probabilities
Attributes:
--------------
probs : hash
prob factors
'''
def __init__(self):
super().__init__()
self.probs = {}
def board_probabilities(self, board, context):
pass
def board_value(self, board, context):
self.board_probabilities(board, context)
return self.probs[0]
class StrategyTD(StrategyProb):
'''
Attributes:
hidden_neurons_num : int
number of hidden layer nodes
is_learning : bool
        whether to update the weights
alpha : float
1st layer learning rate (typically 1/features_num)
beta : float
2nd layer learning rate (typically 1/hidden_neurons_num)
gamma : float
discount-rate parameter (typically 0.9)
lambdaa : float
trace decay parameter (should be <= gamma)
-----------------
output_weights: numpy.2darray
the weights of output layer, shape = (output_units, hidden_units + 1)
hidden_weights: numpy.2darray
        the weights of the hidden layer, shape = (hidden_units + 1, features + 1)
'''
def __init__(self, features_num, hidden_neurons_num):
super().__init__()
self.is_learning = True
self.features_num = features_num
self.hidden_neurons_num = hidden_neurons_num
self.alpha = 0.1
self.beta = 0.1
self.gamma = .9
self.lambdaa = 0.1
self.epsilon = 0.05
self.hidden_weights = np.random.rand(self.hidden_neurons_num + 1, self.features_num + 1)
# self.hidden_weights -= 0.5
self.hidden_weights *= 0.1
self.output_weights = np.random.rand(1, self.hidden_neurons_num + 1)
# self.output_weights -= 0.5
self.output_weights *= 0.1
self.setup()
# print(np.shape(self.hidden_weights))
# print(np.shape(self.output_weights))
def setup(self):
self.prev_state = None
self.hidden_traces = np.zeros((self.hidden_neurons_num + 1, self.features_num + 1))
self.output_traces = np.zeros((1, self.hidden_neurons_num + 1))
def preferred_board(self, old, moves, context):
if not moves:
return old
if len(moves) == 1:
return moves[0]
if np.random.rand() < self.epsilon: # exploration
the_board = random.choice(moves)
the_board.exploration = True
return the_board
else:
board_most_value = max(moves, key=lambda m: self.board_value(m, context))
return board_most_value
def board_probabilities(self, board, context):
inputs = self.get_input_values(board)
hiddens = self.get_hidden_values(inputs)
prob_win = self.get_output(hiddens)
self.probs[0] = prob_win
def get_input_values(self, board):
'''
Returns:
-----------
vector: numpy.1darray
the input vector
'''
# print('boar.stone shape: ' + str(board.stones.shape))
v = board.stones
# print('vectorized board shape: ' + str(v.shape))
# print('b[%d], w[%d]' % (black, white))
iv = np.zeros(v.shape[0] * 2 + 3)
iv[0] = 1.
iv[1:v.shape[0] + 1] = (v == Board.STONE_BLACK).astype(int)
iv[v.shape[0] + 1:v.shape[0] * 2 + 1] = (v == Board.STONE_WHITE).astype(int)
who = board.whose_turn_now()
iv[-2] = 1 if who == Board.STONE_BLACK else 0 # turn to black move
iv[-1] = 1 if who == Board.STONE_WHITE else 0 # turn to white move
# print(iv.shape)
# print(iv)
return iv
def get_hidden_values(self, inputs):
v = self.hidden_weights.dot(inputs)
# print(self.hidden_weights.shape)
# print(inputs.shape)
# print(v.shape)
v = expit(v)
v[0] = 1.
return v
def get_output(self, hiddens):
v = self.output_weights.dot(hiddens)
# print(self.hidden_weights.shape)
# print(hiddens.shape)
# print(v.shape)
return expit(v)
# return v
def update_at_end(self, old, new):
if not self.needs_update():
return
if new.winner == Board.STONE_EMPTY:
reward = 0
else:
reward = 2 if self.stand_for == new.winner else -2
if old is None:
if self.prev_state is not None:
self._update_impl(self.prev_state, new, reward)
else:
self._update_impl(old, new, reward)
def update(self, old, new):
if not self.needs_update():
return
if self.prev_state is None:
self.prev_state = old
return
if new is None:
self._update_impl(self.prev_state, old, 0)
self.prev_state = old
def _update_impl(self, old, new, reward):
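        # TD(lambda) backward-view update: decay and accumulate the eligibility
        # traces of both layers, then move each weight matrix along trace * delta,
        # where delta = reward + gamma * V(new) - V(old).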
# print('old', old.stones)
# print('new', new.stones)
old_inputs = self.get_input_values(old)
# print('old input', old_inputs)
old_hiddens = self.get_hidden_values(old_inputs)
old_output = self.get_output(old_hiddens)
# update traces
dw2 = old_output * (1 - old_output) * old_hiddens
# dw2 = old_hiddens
self.output_traces = self.lambdaa * self.output_traces + dw2
dw1 = dw2 * (1 - old_hiddens) * self.output_weights
# dw1 = self.output_weights
# print('dw1', dw1.shape)
# print('hidden traces', self.hidden_traces.shape)
# print('dw1:', dw1)
self.hidden_traces = self.lambdaa * self.hidden_traces + np.outer(dw1, old_inputs)
new_input = self.get_input_values(new)
# print('new input', new_input)
new_output = self.get_output(self.get_hidden_values(new_input))
delta = reward + self.gamma * new_output - old_output
# print('delta[{: 12.6g}], old[{: 15.6g}], new[{: 12.6g}], reward[{: 1.1f}]'.format(delta[0], old_output[0], new_output[0], reward))
# bak = np.copy(self.output_weights)
self.output_weights += self.beta * delta * self.output_traces
self.hidden_weights += self.alpha * delta * self.hidden_traces
# print(np.allclose(bak, self.output_weights))
def save(self, file):
np.savez(file,
hidden_weights=self.hidden_weights,
output_weights=self.output_weights,
hidden_traces=self.hidden_traces,
output_traces=self.output_traces,
features_num=self.features_num,
hidden_neurons_num=self.hidden_neurons_num,
alpha=self.alpha,
beta=self.beta,
gamma=self.gamma,
lambdaa=self.lambdaa,
epsilon=self.epsilon
)
print('save OK')
def load(self, file):
dat = np.load(file)
self.hidden_weights = dat['hidden_weights']
self.output_weights = dat['output_weights']
self.hidden_traces = dat['hidden_traces']
self.output_traces = dat['output_traces']
self.features_num = dat['features_num']
self.hidden_neurons_num = dat['hidden_neurons_num']
self.alpha = dat['alpha']
self.beta = dat['beta']
self.gamma = dat['gamma']
self.lambdaa = dat['lambdaa']
self.epsilon = dat['epsilon']
print('features[%d], hiddens[%d]' % (self.features_num, self.hidden_neurons_num))
print('load OK')
def mind_clone(self):
s = StrategyTD(self.features_num, self.hidden_neurons_num)
s.is_learning = False
s.alpha = self.alpha
s.beta = self.beta
s.gamma = self.gamma
s.lambdaa = self.lambdaa
s.epsilon = self.epsilon
s.hidden_weights = np.copy(self.hidden_weights)
s.output_weights = np.copy(self.output_weights)
s.hidden_traces = np.copy(self.hidden_traces)
s.output_traces = np.copy(self.output_traces)
return s
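# Illustrative construction (a sketch, not part of the original module): since
# get_input_values builds a vector of length 2*N + 3 for a board with N cells
# (two stone planes plus a bias and two turn flags), and the hidden layer
# consumes features_num + 1 inputs, features_num should be 2*N + 2. The hidden
# size below is a made-up example value.
# td = StrategyTD(features_num=2 * Board.BOARD_SIZE ** 2 + 2, hidden_neurons_num=40)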
class StrategyHuman(Strategy):
def __init__(self):
super().__init__()
def preferred_board(self, old, moves, context):
game = context
if game.over:
return
game.wait_human = True
plt.title('set down a stone')
happy = False
while not happy:
pts = np.asarray(plt.ginput(1, timeout=-1, show_clicks=False))
if len(pts) != 1:
continue
i, j = map(round, (pts[0, 0], pts[0, 1]))
loc = int(i * Board.BOARD_SIZE + j)
if old.stones[loc] == Board.STONE_EMPTY:
return [b for b in moves if b.stones[loc] != Board.STONE_EMPTY][0]
else:
plt.title('invalid move')
continue
class StrategyNetBot(Strategy):
def __init__(self, cond):
super().__init__()
self.cond = cond
def preferred_board(self, old, moves, context):
game = context
while True:
self.cond.wait()
i, j = 0, 0
loc = int(i * Board.BOARD_SIZE + j)
if old.stones[loc] == Board.STONE_EMPTY:
return [b for b in moves if b.stones[loc] != Board.STONE_EMPTY][0]
else:
print('invalid move')
continue
class StrategyRand(Strategy):
def __init__(self):
super().__init__()
def preferred_board(self, old, moves, context):
return random.choice(moves)
class StrategyHeuristic(Strategy):
def __init__(self):
super().__init__()
def preferred_board(self, old, moves, context):
'''
        prefer cells with many empty neighbours or many same-colour stones among the surrounding cells
'''
game = context
offset = np.array([[-1, -1], [-1, 0], [-1, 1],
[0, -1], [0, 1],
                           [1, -1], [1, 0], [1, 1]], int)
loc = np.where(old.stones == 0)
box = []
for i in loc[0]:
row, col = divmod(i, Board.BOARD_SIZE)
neighbors = offset + (row, col)
s, space = 0, 0
for x, y in neighbors:
if 0 <= x < Board.BOARD_SIZE and 0 <= y < Board.BOARD_SIZE:
p = x * Board.BOARD_SIZE + y
if old.stones[p] == game.whose_turn:
s += 1
if old.stones[p] == Board.STONE_EMPTY:
space += 1
box.append((row, col, s, space))
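        # Rank candidate cells: a same-colour neighbour counts twice as much as an empty one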
box.sort(key=lambda t: 2 * t[2] + t[3], reverse=True)
if len(box) != 0:
loc = box[0]
# print('place here(%d,%d), %d pals' % (loc[0], loc[1], loc[2]))
return [b for b in moves if b.stones[loc[0] * Board.BOARD_SIZE + loc[1]] != Board.STONE_EMPTY][0]
else:
return random.choice(moves)
class StrategyMinMax(Strategy):
def __init__(self):
super().__init__()
self.searcher = Searcher()
def preferred_board(self, old, moves, context):
game = context
self.searcher.board = old.stones.reshape((-1, Board.BOARD_SIZE)).tolist()
DEPTH = 1
score, row, col = self.searcher.search(game.whose_turn, DEPTH)
# print('score%d, loc(%d, %d)'%(score, row, col))
x = old.stones.copy()
x[row * Board.BOARD_SIZE + col] = game.whose_turn
b = Board()
b.stones = x
return b
class Auditor(object):
def on_episode_start(self):
pass
def swallow(self, who, st0, st1, **kwargs):
pass
def absorb(self, winner, **kwargs):
pass
class StrategyMC(Strategy, Auditor):
def __init__(self):
super().__init__()
self.mc = MonteCarlo()
def preferred_board(self, old, moves, context):
game = context
return self.mc.select(old, moves, game.whose_turn, context=game)
def update(self, old, new):
pass
def on_episode_start(self):
self.mc.void()
def swallow(self, who, st0, st1, **kwargs):
self.mc.swallow(who, st0, st1, **kwargs)
def absorb(self, winner, **kwargs):
self.mc.absorb(winner, **kwargs)
def save(self, file):
with open(file, 'wb') as f:
pickle.dump(self.mc.net, f)
print('save OK')
def load(self, file):
with open(file, 'rb') as f:
self.mc.net = pickle.load(f)
print('load OK')
class StrategyMCTS1(Strategy, Auditor):
def __init__(self):
super().__init__()
self.brain = DCNN3(False, True, False)
self.brain.run()
self.mcts = MCTS1(self._value_fn, self._policy_fn, self._rollout_fn)
self.last_state = None
def preferred_board(self, old, moves, context):
if not moves:
raise Exception('should be ended')
if self.last_state is not None:
oppo_action = np.where(old.stones != self.last_state.stones)[0][0]
self.mcts.update_with_move(oppo_action)
best_move = self.mcts.get_move(old)
v = old.stones
if v[best_move] == Board.STONE_EMPTY:
for m in moves:
if m.stones[best_move] != Board.STONE_EMPTY:
self.last_state = m
self.mcts.update_with_move(best_move)
return m
raise Exception('impossible')
def _value_fn(self, board):
state, _ = self.get_input_values(board.stones)
v = self.brain.get_state_value(state)
return v
def _policy_fn(self, board):
_, _, legal_moves = Game.possible_moves(board)
state, _ = self.get_input_values(board.stones)
probs = self.brain.get_move_probs(state)
probs = probs[0, legal_moves]
return list(zip(legal_moves, probs))
def _rollout_fn(self, board, legal_moves):
state, _ = self.get_input_values(board.stones)
probs = self.brain.get_move_probs(state)
return probs
def get_input_values(self, board):
state, _ = self.brain.adapt_state(board)
legal = (board == Board.STONE_EMPTY)
return state, legal
if __name__ == '__main__':
mcts = StrategyMCTS1()
board = Board()
mcts.preferred_board(board, None, None)
| 29.571429 | 140 | 0.567288 | 2,006 | 15,939 | 4.321535 | 0.139083 | 0.032299 | 0.027685 | 0.021802 | 0.341216 | 0.248356 | 0.193909 | 0.151805 | 0.137732 | 0.120198 | 0 | 0.014799 | 0.31746 | 15,939 | 538 | 141 | 29.626394 | 0.782057 | 0.154527 | 0 | 0.354839 | 0 | 0 | 0.01854 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16129 | false | 0.041056 | 0.035191 | 0.005865 | 0.313783 | 0.017595 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1029785cb87f985491f45d8d8e683b5b641dd7a3 | 2,296 | py | Python | run.py | lvyilin/DGC | 957a5ed4787d05d04f05589db0f5d4ff0edf378e | [
"MIT"
] | 6 | 2020-05-06T10:17:06.000Z | 2021-10-06T03:48:16.000Z | run.py | lvyilin/DGC | 957a5ed4787d05d04f05589db0f5d4ff0edf378e | [
"MIT"
] | null | null | null | run.py | lvyilin/DGC | 957a5ed4787d05d04f05589db0f5d4ff0edf378e | [
"MIT"
] | 3 | 2020-03-07T04:55:28.000Z | 2021-03-01T01:50:23.000Z | import argparse
import logging
import os
import subprocess
from io import StringIO
import pandas as pd
import configs
import utils
from datahandler import DataHandler
from dgc import DGC
def main(tag, seed, dataset):
opts = getattr(configs, 'config_%s' % dataset)
opts['work_dir'] = './results/%s/' % tag
    if opts['verbose']:
        logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(message)s')
    else:
        logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s')
utils.create_dir(opts['work_dir'])
utils.create_dir(os.path.join(opts['work_dir'],
'checkpoints'))
with utils.o_gfile((opts['work_dir'], 'params.txt'), 'w') as text:
text.write('Parameters:\n')
for key in opts:
text.write('%s : %s\n' % (key, opts[key]))
data = DataHandler(opts, seed)
model = DGC(opts, tag)
model.train(data)
def get_free_gpu(num=1):
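    # Query per-GPU memory via `nvidia-smi --query-gpu` and parse its CSV output
    # with pandas, then return the indices of the `num` GPUs with the most free memory.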
gpu_stats = subprocess.check_output(["nvidia-smi", "--format=csv", "--query-gpu=memory.used,memory.free"])
gpu_df = pd.read_csv(StringIO(gpu_stats.decode('utf8')),
names=['memory.used', 'memory.free'],
skiprows=1)
gpu_df['memory.free'] = gpu_df['memory.free'].map(lambda x: int(x.rstrip(' [MiB]')))
gpu_df = gpu_df.sort_values(by='memory.free', ascending=False)
print('GPU usage:\n{}'.format(gpu_df))
free_gpus = []
for i in range(num):
print('Returning GPU{} with {} free MiB'.format(gpu_df.index[i], gpu_df.iloc[i]['memory.free']))
free_gpus.append(str(gpu_df.index[i]))
return ','.join(free_gpus)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("--exp", default='mnist',
help='dataset [mnist/celeba]')
parser.add_argument("--seed", type=int, default=1,
help='random seed for imbalance data generation')
FLAGS = parser.parse_args()
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
free_gpu_id = get_free_gpu(num=1)
os.environ["CUDA_VISIBLE_DEVICES"] = free_gpu_id
os.environ["OMP_NUM_THREADS"] = "8"
dataset_name = FLAGS.exp
seed = FLAGS.seed
tag = '%s_seed%02d' % (dataset_name, seed)
main(tag, seed, dataset_name)
| 33.764706 | 110 | 0.627178 | 314 | 2,296 | 4.407643 | 0.407643 | 0.032514 | 0.031792 | 0.026012 | 0.052023 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004432 | 0.21385 | 2,296 | 67 | 111 | 34.268657 | 0.762327 | 0 | 0 | 0 | 0 | 0 | 0.216028 | 0.015244 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.185185 | 0 | 0.240741 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
102dddc7627a9ecaaf6755f2fd709c1b3f163f69 | 1,164 | py | Python | Pytorch/5-CNN/nn_conv2d.py | pengchenyu111/PaperCodeReplication | 7b8681654e25b7d707f4b4d7ebcfb85ffc0fd52a | [
"Apache-2.0"
] | null | null | null | Pytorch/5-CNN/nn_conv2d.py | pengchenyu111/PaperCodeReplication | 7b8681654e25b7d707f4b4d7ebcfb85ffc0fd52a | [
"Apache-2.0"
] | null | null | null | Pytorch/5-CNN/nn_conv2d.py | pengchenyu111/PaperCodeReplication | 7b8681654e25b7d707f4b4d7ebcfb85ffc0fd52a | [
"Apache-2.0"
] | null | null | null | import torch
import torchvision
from torch import nn
from torch.nn import Conv2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
dataset = torchvision.datasets.CIFAR10("./data", train=False, transform=torchvision.transforms.ToTensor(),
download=True)
dataloader = DataLoader(dataset, batch_size=64)
class Tudui(nn.Module):
def __init__(self):
super(Tudui, self).__init__()
self.conv1 = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)
def forward(self, x):
x = self.conv1(x)
return x
tudui = Tudui()
writer = SummaryWriter("../logs")
step = 0
for data in dataloader:
imgs, targets = data
output = tudui(imgs)
print(imgs.shape)
print(output.shape)
# torch.Size([64, 3, 32, 32])
writer.add_images("input", imgs, step)
# torch.Size([64, 6, 30, 30]) -> [xxx, 3, 30, 30]
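    # SummaryWriter.add_images expects 1- or 3-channel images, so the 6 conv
    # channels are folded into extra batch entries before logging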
output = torch.reshape(output, (-1, 3, 30, 30))
writer.add_images("output", output, step)
step = step + 1
writer.close()
| 25.304348 | 106 | 0.665808 | 156 | 1,164 | 4.833333 | 0.435897 | 0.047745 | 0.037135 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.209622 | 1,164 | 45 | 107 | 25.866667 | 0.776087 | 0.065292 | 0 | 0 | 0 | 0 | 0.02212 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.225806 | 0 | 0.354839 | 0.064516 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
102e5861e25bbeb8eece09f40df632bfa7bcbf7b | 873 | py | Python | make_us_rich/utils/directory_cleaning.py | ChainYo/make-me-rich | ad3bbc23bef4840f80799e0fd4903767d9a57a72 | [
"Apache-2.0"
] | 11 | 2022-02-06T18:01:29.000Z | 2022-02-23T15:51:48.000Z | make_us_rich/utils/directory_cleaning.py | ChainYo/make-me-rich | ad3bbc23bef4840f80799e0fd4903767d9a57a72 | [
"Apache-2.0"
] | null | null | null | make_us_rich/utils/directory_cleaning.py | ChainYo/make-me-rich | ad3bbc23bef4840f80799e0fd4903767d9a57a72 | [
"Apache-2.0"
] | 1 | 2022-02-14T10:41:53.000Z | 2022-02-14T10:41:53.000Z | from pathlib import Path
from shutil import rmtree
from typing import List, Union
def clean_dir(path_to_clean: Union[str, Path], exception: List[str]) -> None:
"""
Removes all files and directories in the given path if they don't match the exception list.
Parameters
----------
path_to_clean : Union[str, Path]
Directory path to clean. If it is a string, it will be converted to a Path object.
exception : List[str]
List of files and directories to keep. If a file or directory is in this list, it will not be removed.
"""
if isinstance(path_to_clean, str):
path_to_clean = Path(path_to_clean)
items_to_remove = [item for item in path_to_clean.iterdir() if item.name not in exception]
for item in items_to_remove:
if item.is_dir():
rmtree(item)
else:
item.unlink()
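# Usage sketch (illustrative; the path and names are made up):
# clean_dir("./build", exception=["keep.txt", "cache"])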
| 33.576923 | 110 | 0.666667 | 136 | 873 | 4.147059 | 0.404412 | 0.074468 | 0.136525 | 0.056738 | 0.08156 | 0.08156 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25315 | 873 | 25 | 111 | 34.92 | 0.865031 | 0.415808 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.25 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
102ebf98a1c7a842bac58dc29f474bfacca5f62a | 5,114 | py | Python | recognition/arcface_paddle/static/utils/verification.py | qaz734913414/insightface | 4101fe608ca1d38604a23d53f32314ce8a28fe79 | [
"MIT"
] | 12,377 | 2017-12-04T02:46:57.000Z | 2022-03-31T16:48:31.000Z | recognition/arcface_paddle/static/utils/verification.py | qaz734913414/insightface | 4101fe608ca1d38604a23d53f32314ce8a28fe79 | [
"MIT"
] | 1,851 | 2017-12-05T05:41:23.000Z | 2022-03-30T13:06:22.000Z | recognition/arcface_paddle/static/utils/verification.py | qaz734913414/insightface | 4101fe608ca1d38604a23d53f32314ce8a28fe79 | [
"MIT"
] | 4,198 | 2017-12-05T02:57:19.000Z | 2022-03-30T10:29:37.000Z | # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
import os
import numpy as np
import sklearn.preprocessing
import paddle
import logging
from typing import List
from utils.verification import evaluate
from datasets import load_bin
def test(rank, batch_size, data_set, executor, test_program, data_feeder,
fetch_list):
data_list = data_set[0]
issame_list = data_set[1]
embeddings_list = []
# data_list[0] for normalize
# data_list[1] for flip_left_right
for i in range(len(data_list)):
data = data_list[i]
embeddings = None
ba = 0
while ba < data.shape[0]:
bb = min(ba + batch_size, data.shape[0])
count = bb - ba
_data = []
for k in range(bb - batch_size, bb):
_data.append((data[k], ))
[_embeddings] = executor.run(test_program,
fetch_list=fetch_list,
feed=data_feeder.feed(_data),
use_program_cache=True)
if embeddings is None:
embeddings = np.zeros((data.shape[0], _embeddings.shape[1]))
embeddings[ba:bb, :] = _embeddings[(batch_size - count):, :]
ba = bb
embeddings_list.append(embeddings)
xnorm = 0.0
xnorm_cnt = 0
for embed in embeddings_list:
xnorm += np.sqrt((embed * embed).sum(axis=1)).sum(axis=0)
xnorm_cnt += embed.shape[0]
xnorm /= xnorm_cnt
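    # Fuse the two test-time augmentations (original and horizontally flipped
    # images) by summing their embeddings, then L2-normalize before evaluation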
embeddings = embeddings_list[0] + embeddings_list[1]
embeddings = sklearn.preprocessing.normalize(embeddings)
_, _, accuracy, val, val_std, far = evaluate(
embeddings, issame_list, nrof_folds=10)
acc, std = np.mean(accuracy), np.std(accuracy)
return acc, std, xnorm
class CallBackVerification(object):
def __init__(self,
frequent,
rank,
batch_size,
test_program,
feed_list,
fetch_list,
val_targets,
rec_prefix,
image_size=(112, 112)):
self.frequent: int = frequent
self.rank: int = rank
self.batch_size: int = batch_size
self.test_program: paddle.static.Program = test_program
self.feed_list: List[paddle.fluid.framework.Variable] = feed_list
self.fetch_list: List[paddle.fluid.framework.Variable] = fetch_list
self.highest_acc_list: List[float] = [0.0] * len(val_targets)
self.ver_list: List[object] = []
self.ver_name_list: List[str] = []
self.init_dataset(
val_targets=val_targets,
data_dir=rec_prefix,
image_size=image_size)
gpu_id = int(os.getenv("FLAGS_selected_gpus", 0))
place = paddle.CUDAPlace(gpu_id)
self.executor = paddle.static.Executor(place)
self.data_feeder = paddle.fluid.DataFeeder(
place=place, feed_list=self.feed_list, program=self.test_program)
def ver_test(self, global_step: int):
for i in range(len(self.ver_list)):
test_start = time.time()
acc2, std2, xnorm = test(
self.rank, self.batch_size, self.ver_list[i], self.executor,
self.test_program, self.data_feeder, self.fetch_list)
logging.info('[%s][%d]XNorm: %f' %
(self.ver_name_list[i], global_step, xnorm))
logging.info('[%s][%d]Accuracy-Flip: %1.5f+-%1.5f' %
(self.ver_name_list[i], global_step, acc2, std2))
if acc2 > self.highest_acc_list[i]:
self.highest_acc_list[i] = acc2
logging.info('[%s][%d]Accuracy-Highest: %1.5f' % (
self.ver_name_list[i], global_step, self.highest_acc_list[i]))
test_end = time.time()
logging.info("test time: {:.4f}".format(test_end - test_start))
def init_dataset(self, val_targets, data_dir, image_size):
for name in val_targets:
path = os.path.join(data_dir, name + ".bin")
if os.path.exists(path):
data_set = load_bin(path, image_size)
self.ver_list.append(data_set)
self.ver_name_list.append(name)
def __call__(self, num_update):
if self.rank == 0 and num_update > 0 and num_update % self.frequent == 0:
self.ver_test(num_update)
| 39.038168 | 82 | 0.584083 | 644 | 5,114 | 4.431677 | 0.290373 | 0.024527 | 0.019271 | 0.026279 | 0.099159 | 0.05466 | 0.029432 | 0.020322 | 0.020322 | 0 | 0 | 0.014849 | 0.315213 | 5,114 | 130 | 83 | 39.338462 | 0.800114 | 0.125733 | 0 | 0 | 0 | 0 | 0.028439 | 0.010867 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050505 | false | 0 | 0.080808 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10309c5b33228fd5b9722bb807e1dc3097e10020 | 2,759 | py | Python | Data-Structures/hash_table/hash_table.py | nastinsk/python-data-structures-and-algorithms | 505b26a70fb846f6e9d0681bbe4f77e3797acf2d | [
"MIT"
] | null | null | null | Data-Structures/hash_table/hash_table.py | nastinsk/python-data-structures-and-algorithms | 505b26a70fb846f6e9d0681bbe4f77e3797acf2d | [
"MIT"
] | null | null | null | Data-Structures/hash_table/hash_table.py | nastinsk/python-data-structures-and-algorithms | 505b26a70fb846f6e9d0681bbe4f77e3797acf2d | [
"MIT"
] | 3 | 2020-05-31T03:25:49.000Z | 2020-12-05T21:03:13.000Z |
class _Node:
""" Class for the Node instances"""
def __init__(self, key, value):
self.key = key
self.value = value
self.next = None
class LinkedList:
""" Class for the LinkedLists instances"""
def __init__(self):
"""Method to iniate a LinkedList"""
self.head = None
def insert(self, key, value):
"""Method to insert new node to the beginnig of the list"""
node = _Node(key, value)
node.next = self.head
self.head = node
def includes(self, key):
"""Method to check if the given value in the liked list"""
current = self.head
while current:
if current.key == key:
return current.value
else:
current = current.next
return False
class HashTable:
"""Class to create a instance of Hash Table data structure"""
def __init__(self, size=1024):
"""Method to initalise Hash table instance, takes the integer as a parameter to create a hash table based on the array of the given length"""
self._array = [0 for i in range(size)]
self.size = size
def hash(self, key):
"""Method that takes in an arbitrary key and returns an index in the collection."""
key_chars = list(str(key))
char_sum = 0
for char in key_chars:
char_sum += ord(char)
index = (char_sum * 599) % self.size
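        # Worked example (illustrative): with size=1024, hash("ab") maps to
        # (ord('a') + ord('b')) * 599 % 1024 = (97 + 98) * 599 % 1024 = 116805 % 1024 = 69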
return index
def add(self, key, value):
"""Method that takes in both the key and value. This method hash the key, and add the key and value pair to the table, handling collisions as needed."""
index = self.hash(key)
if self._array[index] == 0:
ll = LinkedList()
ll.insert(key, value)
self._array[index] = ll
else:
ll = self._array[index]
if ll.includes(key):
raise KeyValueAlreadyExists
else:
ll.insert(key, value)
def get(self, key):
"""Method that takes in the key and returns the value from the table."""
index = self.hash(key)
if self._array[index] != 0 and self._array[index].includes(key):
return self._array[index].includes(key)
else:
return None
def contains(self, key):
"""Method that takes in the key and returns a boolean, indicating if the key exists in the table already"""
index = self.hash(key)
if self._array[index] != 0 and self._array[index].includes(key):
return True
else:
return False
class KeyValueAlreadyExists(Exception):
"""Raised when the given key already exists in the hash table"""
pass
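# Minimal usage sketch (illustrative, not part of the original module):
if __name__ == '__main__':
    ht = HashTable(size=1024)
    ht.add('apple', 42)
    assert ht.contains('apple') and ht.get('apple') == 42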
| 26.27619 | 160 | 0.580283 | 362 | 2,759 | 4.345304 | 0.251381 | 0.051494 | 0.071202 | 0.04323 | 0.188175 | 0.172282 | 0.157025 | 0.157025 | 0.157025 | 0.136046 | 0 | 0.006472 | 0.328017 | 2,759 | 104 | 161 | 26.528846 | 0.841963 | 0.306633 | 0 | 0.245614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0.017544 | 0 | 0 | 0.350877 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1030b0e1e96c39f542bcea40261125040b467973 | 1,993 | py | Python | tests/deploy/deploy.py | blarghmatey/pyinfra | b8287618d66a4e00963c88a3ef191c94e8320f70 | [
"MIT"
] | 1,532 | 2015-06-13T19:48:52.000Z | 2022-03-26T15:32:45.000Z | tests/deploy/deploy.py | blarghmatey/pyinfra | b8287618d66a4e00963c88a3ef191c94e8320f70 | [
"MIT"
] | 729 | 2015-09-24T08:42:39.000Z | 2022-03-31T07:15:44.000Z | tests/deploy/deploy.py | blarghmatey/pyinfra | b8287618d66a4e00963c88a3ef191c94e8320f70 | [
"MIT"
] | 419 | 2015-12-16T21:00:34.000Z | 2022-03-05T21:05:07.000Z | from os import path
from utils import call_file_op
from pyinfra import host, local, state
from pyinfra.api import deploy
from pyinfra.operations import files, server
@deploy('My nested deploy')
def my_nested_deploy(state, host):
server.shell(
name='First nested deploy operation',
commands='echo first nested_deploy_op',
state=state, host=host,
)
@deploy('My deploy')
def my_deploy(state, host):
server.shell(
name='First deploy operation',
commands='echo first_deploy_op',
state=state, host=host,
)
my_nested_deploy(state=state, host=host)
server.shell(
name='Second deploy operation',
commands='echo second_deploy_op',
state=state, host=host,
)
server.shell(
name='First main operation',
commands='echo first_main_op',
)
# Create some conditional branches
if host.name == 'somehost':
server.shell(
name='Second main operation',
commands='echo second_main_op',
)
elif host.name == 'anotherhost':
local.include(path.join('tasks', 'a_task.py'))
# Include the whole file again, but for all hosts
local.include(path.join('tasks', 'a_task.py'))
# Execute the @deploy function
my_deploy()
# Do a loop which will generate duplicate op hashes
for i in range(2):
server.shell(
name='Loop-{0} main operation'.format(i),
commands='echo loop_{0}_main_operation'.format(i),
)
call_file_op()
with state.preserve_loop_order([1, 2]) as loop_items:
for item in loop_items():
server.shell(
name='Order loop {0}'.format(item),
commands='echo loop_{0}'.format(item),
)
server.shell(
name='2nd Order loop {0}'.format(item),
commands='echo loop_{0}'.format(item),
)
if host.name == 'somehost':
files.template(
name='Final limited operation',
src='templates/a_template.j2',
dest='/a_template',
is_template=True,
)
| 24.012048 | 58 | 0.641244 | 263 | 1,993 | 4.730038 | 0.307985 | 0.07074 | 0.096463 | 0.061093 | 0.380225 | 0.324759 | 0.236334 | 0.12701 | 0.075563 | 0.075563 | 0 | 0.007227 | 0.236327 | 1,993 | 82 | 59 | 24.304878 | 0.810118 | 0.079779 | 0 | 0.278689 | 0 | 0 | 0.254784 | 0.02515 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032787 | false | 0 | 0.081967 | 0 | 0.114754 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
103379ad0e7016495742dacd4aa052af5fc71df0 | 2,083 | py | Python | MCSH/first_time_setup.py | RealAllenDa/MinecraftServerHelper | 888217070443c0cc04823ebe4a41c7f24ff785ec | [
"MIT"
] | null | null | null | MCSH/first_time_setup.py | RealAllenDa/MinecraftServerHelper | 888217070443c0cc04823ebe4a41c7f24ff785ec | [
"MIT"
] | null | null | null | MCSH/first_time_setup.py | RealAllenDa/MinecraftServerHelper | 888217070443c0cc04823ebe4a41c7f24ff785ec | [
"MIT"
] | null | null | null | """
***************************************
MCSH - A Minecraft Server Helper.
Coded by AllenDa 2020.
Licensed under MIT.
***************************************
Module name: MCSH.first_time_setup
Module Revision: 0.0.1-18
Module Description:
Guides the user through first-time setup routines.
"""
from MCSH.consts import MCSH_version, LOGGING_COLORS, TUI_COLORS
from MCSH.logging import log
MODULE_NAME = "first_time_setup"
def startup_guide():
"""
The entrance of the startup guide.
Included parts:
Language, Check pre.req., Generate config, Evaluate computer.
"""
log(MODULE_NAME, "DEBUG", "Initializing first-time setup guide...")
    # Console colouring check
_choose_colours()
# Computer evaluation
def _choose_colours():
print("Welcome to Minecraft Server Helper (MCSH) ver.{}!\n".format(MCSH_version) +
"Now, the setup program will print a few ANSI characters.\n"
"Choose yes and enable the console colouring "
"if you see characters in different colours.\n"
"Choose no and disable the console colouring "
"if you see characters with a '\\033..m' and no colours.")
for i in LOGGING_COLORS:
print("Testing Logging_Colors: {}This line should be the color of {}.\033[0m".format(LOGGING_COLORS[i], i))
for i in TUI_COLORS:
print("Testing TUI_Colors: {}This line should be the color of {}.\033[0m".format(TUI_COLORS[i], i))
seen_colours = input("Do you see the colours described above? [y/n]: ")
from MCSH.consts import config_instance
if seen_colours.lower() == "y":
log(MODULE_NAME, "INFO", "Successfully enabled console colouring.")
config_instance.program_config["color_enabled"] = True
config_instance.update_config()
else:
log(MODULE_NAME, "INFO", "Successfully disabled console colouring.", True)
config_instance.program_config["color_enabled"] = False
config_instance.update_config()
def _evaluate_computer():
log(MODULE_NAME, "DEBUG", "[Step 3/4] Evaluating computer...")
| 37.872727 | 115 | 0.662026 | 269 | 2,083 | 4.981413 | 0.416357 | 0.044776 | 0.048507 | 0.029851 | 0.271642 | 0.228358 | 0.119403 | 0.064179 | 0.064179 | 0.064179 | 0 | 0.013182 | 0.198752 | 2,083 | 54 | 116 | 38.574074 | 0.789694 | 0.216995 | 0 | 0.068966 | 0 | 0 | 0.433333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0 | 0.103448 | 0 | 0.206897 | 0.137931 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1035e02df02cb8357fc290dc2aba63b6c1ba4281 | 1,640 | py | Python | backend/utils/id_generator.py | methodpark/digitaleswarten | 024c0b88df54e9727925b202e139b3c5b2ce73d6 | [
"Apache-2.0"
] | 10 | 2020-03-20T19:14:43.000Z | 2020-10-29T21:31:40.000Z | backend/utils/id_generator.py | methodpark/digitaleswarten | 024c0b88df54e9727925b202e139b3c5b2ce73d6 | [
"Apache-2.0"
] | 41 | 2020-03-20T20:27:55.000Z | 2020-03-24T21:49:37.000Z | backend/utils/id_generator.py | methodpark/digitaleswarten | 024c0b88df54e9727925b202e139b3c5b2ce73d6 | [
"Apache-2.0"
] | 1 | 2020-03-21T09:31:51.000Z | 2020-03-21T09:31:51.000Z | import random
import time
from hashlib import sha1
random.seed()
WORDLIST = {
'adjective': [
'angenehm', 'attraktiv', 'aufmerksam', 'bunt', 'blau', 'charmant',
'dankbar', 'edel', 'frei', 'gelb', 'glatt', 'hell', 'ideal', 'jung',
'leicht', 'lieb', 'luftig', 'mutig', 'nah', 'neu', 'offen', 'poetisch',
'rein', 'rund', 'sicher', 'treu', 'wach', 'warm', 'weich', 'zart',
'zentral', 'zivil'
],
'noun': [
'amulett', 'arm', 'ball', 'baum', 'dach', 'eimer', 'engel', 'film',
'foto', 'freiheit', 'haus', 'insel', 'kugel', 'liebe', 'mutter',
'maus', 'nase', 'natur', 'obst', 'orgel', 'papier', 'quelle', 'radio',
'ritter', 'sand', 'stein', 'uhr', 'vater', 'vogel', 'wasser', 'zahn'
],
'verb': [
'atmen', 'baden', 'bilden', 'danken', 'deuten', 'essen', 'haben',
'heilen', 'hoffen', 'jubeln', 'kreisen', 'lachen', 'leben', 'leuchten',
'loben', 'lohnen', 'malen', 'mischen', 'ordnen', 'planen', 'pfeifen',
'reden', 'rollen', 'sehen', 'stehen', 'teilen', 'trinken', 'wollen',
'zelten'
]
}
def generate_place_id():
"""
Returns:
- String: Human-readable id phrase
"""
return random.choice(WORDLIST['adjective']) + \
random.choice(WORDLIST['noun']) + \
random.choice(WORDLIST['verb'])
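# Example output (illustrative): 'warmkugelmalen', i.e. adjective + noun + verb,
# lowercase and with no separator.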
def generate_queue_id(queue_name):
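    # Short, mostly-unique id: the first 4 hex chars of SHA-1(queue_name) plus
    # the last two digits of the current Unix time.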
hasher = sha1()
hasher.update(queue_name.encode('utf-8'))
name_hash = hasher.hexdigest()[:4]
time_stamp = str(int(time.time()))[-2:]
return name_hash + time_stamp
def generate_entry_id(name):
return generate_queue_id(name)
| 33.469388 | 79 | 0.54939 | 171 | 1,640 | 5.187135 | 0.754386 | 0.037204 | 0.067644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003922 | 0.222561 | 1,640 | 48 | 80 | 34.166667 | 0.691765 | 0.028659 | 0 | 0.052632 | 0 | 0 | 0.335029 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.078947 | 0.026316 | 0.236842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
103892534d1ad570018b58043ecff94c1a6af1b9 | 1,091 | py | Python | api/views/PhotosGroupedByDate.py | reneraab/librephotos | a3972ab520586e721c67f283b1a50ccb7abe2b01 | [
"MIT"
] | null | null | null | api/views/PhotosGroupedByDate.py | reneraab/librephotos | a3972ab520586e721c67f283b1a50ccb7abe2b01 | [
"MIT"
] | null | null | null | api/views/PhotosGroupedByDate.py | reneraab/librephotos | a3972ab520586e721c67f283b1a50ccb7abe2b01 | [
"MIT"
] | null | null | null | import datetime
import pytz
utc = pytz.UTC
class PhotosGroupedByDate:
def __init__(self, location, date, photos):
self.photos = photos
self.date = date
self.location = location
def get_photos_ordered_by_date(photos):
from collections import defaultdict
groups = defaultdict(list)
for photo in photos:
if photo.exif_timestamp:
groups[photo.exif_timestamp.date().strftime("%Y-%m-%d")].append(photo)
else:
groups[photo.exif_timestamp].append(photo)
groupedPhoto = list(groups.values())
result = []
    noTimestampPhotos = None
for group in groupedPhoto:
location = ""
if group[0].exif_timestamp:
date = group[0].exif_timestamp.date().strftime("%Y-%m-%d")
result.append(PhotosGroupedByDate(location, date, group))
else:
date = "No timestamp"
noTimestampPhotos = PhotosGroupedByDate(location, date, group)
# add no timestamp last
    if noTimestampPhotos is not None:
result.append(noTimestampPhotos)
return result
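# Usage sketch (illustrative): `photos` is any iterable of objects exposing an
# `exif_timestamp` attribute; photos without a timestamp land in a final
# "No timestamp" group.
# groups = get_photos_ordered_by_date(photos)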
| 26.609756 | 82 | 0.63703 | 116 | 1,091 | 5.87931 | 0.362069 | 0.095308 | 0.079179 | 0.070381 | 0.124633 | 0.082111 | 0.082111 | 0 | 0 | 0 | 0 | 0.002478 | 0.260312 | 1,091 | 40 | 83 | 27.275 | 0.842627 | 0.019248 | 0 | 0.066667 | 0 | 0 | 0.026217 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.1 | 0 | 0.233333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1038e857379eaf1cf6966d3a63465f0a0cc5d934 | 3,976 | py | Python | packages/augur-core/tests/libraries/test_mailbox.py | autun12/augur | 71ec78e09c1bba3ef15a9f90336edc78c76b5c9e | [
"MIT"
] | null | null | null | packages/augur-core/tests/libraries/test_mailbox.py | autun12/augur | 71ec78e09c1bba3ef15a9f90336edc78c76b5c9e | [
"MIT"
] | null | null | null | packages/augur-core/tests/libraries/test_mailbox.py | autun12/augur | 71ec78e09c1bba3ef15a9f90336edc78c76b5c9e | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from ethereum.tools import tester
from ethereum.tools.tester import TransactionFailed
from pytest import fixture, raises
from utils import stringToBytes, EtherDelta, TokenDelta
def test_mailbox_eth_happy_path(localFixture, mailbox):
# We can send some ETH to the mailbox
with EtherDelta(100, mailbox.address, localFixture.chain, "Deposit did not work"):
assert mailbox.depositEther(value=100)
# We can also withdraw the ETH balance of the mailbox
with EtherDelta(100, tester.a0, localFixture.chain, "Withdraw did not work"):
assert mailbox.withdrawEther()
def test_mailbox_tokens_happy_path(localFixture, mailbox, token):
# We can send some Tokens to the mailbox
assert token.faucet(100)
with TokenDelta(token, 100, mailbox.address, "Token deposit did not work"):
with TokenDelta(token, -100, tester.a0, "Token deposit did not work"):
token.transfer(mailbox.address, 100)
# The mailbox owner can withdraw these tokens
with TokenDelta(token, 100, tester.a0, "Token withdraw did not work"):
with TokenDelta(token, -100, mailbox.address, "Token withdraw did not work"):
mailbox.withdrawTokens(token.address)
def test_mailbox_eth_failure(localFixture, mailbox):
# We send some ETH to the mailbox
with EtherDelta(100, mailbox.address, localFixture.chain, "Deposit did not work"):
assert mailbox.depositEther(value=100)
# Withdrawing as someone other than the owner will fail
with raises(TransactionFailed):
mailbox.withdrawEther(sender=tester.k1)
def test_mailbox_tokens_failure(localFixture, mailbox, token):
# We send some Tokens to the mailbox
assert token.faucet(100)
with TokenDelta(token, 100, mailbox.address, "Token deposit did not work"):
with TokenDelta(token, -100, tester.a0, "Token deposit did not work"):
token.transfer(mailbox.address, 100)
# Withdrawing as someone other than the owner will fail
with raises(TransactionFailed):
mailbox.withdrawTokens(token.address, sender=tester.k1)
def test_mailbox_cash_happy_path(localFixture, mailbox, cash):
# We can send some Cash to the mailbox
assert cash.depositEther(value=100)
assert cash.balanceOf(tester.a0) == 100
with TokenDelta(cash, 100, mailbox.address, "Deposit did not work"):
assert cash.transfer(mailbox.address, 100)
# We can withdraw "Ether" and the Cash balance in the mailbox will be given to the owner as Ether
with EtherDelta(100, tester.a0, localFixture.chain, "Withdraw did not work"):
assert mailbox.withdrawEther()
@fixture(scope="session")
def localSnapshot(fixture, controllerSnapshot):
fixture.resetToSnapshot(controllerSnapshot)
fixture.uploadAugur()
# Upload a token
fixture.uploadAndAddToController("solidity_test_helpers/StandardTokenHelper.sol")
# Upload Cash
cash = fixture.uploadAndAddToController("../source/contracts/trading/Cash.sol")
cash.setController(fixture.contracts['Controller'].address)
# Upload the mailbox
name = "Mailbox"
targetName = "MailboxTarget"
fixture.uploadAndAddToController("../source/contracts/reporting/Mailbox.sol", targetName, name)
fixture.uploadAndAddToController("../source/contracts/libraries/Delegator.sol", name, "delegator", constructorArgs=[fixture.contracts['Controller'].address, stringToBytes(targetName)])
fixture.contracts[name] = fixture.applySignature(name, fixture.contracts[name].address)
fixture.contracts[name].initialize(tester.a0)
return fixture.createSnapshot()
@fixture
def localFixture(fixture, localSnapshot):
fixture.resetToSnapshot(localSnapshot)
return fixture
@fixture
def mailbox(localFixture):
return localFixture.contracts['Mailbox']
@fixture
def token(localFixture):
return localFixture.contracts['StandardTokenHelper']
@fixture
def cash(localFixture):
return localFixture.contracts['Cash']
| 39.366337 | 188 | 0.743209 | 475 | 3,976 | 6.178947 | 0.210526 | 0.022487 | 0.037479 | 0.040545 | 0.390119 | 0.370017 | 0.350937 | 0.321635 | 0.321635 | 0.321635 | 0 | 0.020733 | 0.162978 | 3,976 | 100 | 189 | 39.76 | 0.861178 | 0.137072 | 0 | 0.349206 | 0 | 0 | 0.149546 | 0.048288 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.15873 | false | 0 | 0.063492 | 0.047619 | 0.301587 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
103a7ed96f6f1e0c265624227616da7f0358645b | 1,565 | py | Python | backend/controller/mailers.py | vertex-ai-now/crmint | dc6b66a0b24b98c295fe22c04dbd3d7119c1fd46 | [
"Apache-2.0"
] | null | null | null | backend/controller/mailers.py | vertex-ai-now/crmint | dc6b66a0b24b98c295fe22c04dbd3d7119c1fd46 | [
"Apache-2.0"
] | null | null | null | backend/controller/mailers.py | vertex-ai-now/crmint | dc6b66a0b24b98c295fe22c04dbd3d7119c1fd46 | [
"Apache-2.0"
] | 1 | 2022-02-15T04:24:17.000Z | 2022-02-15T04:24:17.000Z | # Copyright 2018 Google Inc
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mailers"""
# from google.appengine.api import mail
from controller.app_data import APP_DATA
class AppMailer(object):
def recipients(self, other_recipients):
from controller.models import GeneralSetting
gsetting = GeneralSetting.where(name='emails_for_notifications').first()
if gsetting is None or gsetting.value is None:
recipients = other_recipients
else:
recipients = list(set(gsetting.value.split() + other_recipients))
return recipients
class NotificationMailer(AppMailer):
SENDER = "CRMintApp %s Notification <%s>" % (
APP_DATA['app_title'],
APP_DATA['notification_sender_email']
)
def finished_pipeline(self, pipeline):
recipients = self.recipients(pipeline.recipients)
if recipients:
subject = "Pipeline %s %s." % (pipeline.name, pipeline.status)
# mail.send_mail(sender=self.SENDER,
# to=recipients,
# subject=subject,
# body=subject)
| 33.297872 | 76 | 0.706709 | 197 | 1,565 | 5.543147 | 0.548223 | 0.054945 | 0.02381 | 0.029304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006431 | 0.205112 | 1,565 | 46 | 77 | 34.021739 | 0.871383 | 0.460703 | 0 | 0 | 0 | 0 | 0.125457 | 0.059683 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.105263 | 0 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
103bc23f6c704413453c0c7a3e41c19916632877 | 4,222 | py | Python | facenet/celeba_refine_split_anno.py | hqbao/dlp_tf | e8fe3281470faebbe8e36caf55025c270e84c44f | [
"MIT"
] | null | null | null | facenet/celeba_refine_split_anno.py | hqbao/dlp_tf | e8fe3281470faebbe8e36caf55025c270e84c44f | [
"MIT"
] | null | null | null | facenet/celeba_refine_split_anno.py | hqbao/dlp_tf | e8fe3281470faebbe8e36caf55025c270e84c44f | [
"MIT"
] | 1 | 2021-12-30T08:55:37.000Z | 2021-12-30T08:55:37.000Z | import numpy as np
anno_file_path = 'anno/celeba_id_landmark_anno.txt'
train1_anno_file_path = 'anno/celeba_train1_anno.txt'
train2_anno_file_path = 'anno/celeba_train2_anno.txt'
test1_anno_file_path = 'anno/celeba_test1_anno.txt'
test2_anno_file_path = 'anno/celeba_test2_anno.txt'
total_identities = 1000
anno_file = open(anno_file_path, 'r')
train1_anno_file = open(train1_anno_file_path, 'w')
train2_anno_file = open(train2_anno_file_path, 'w')
test1_anno_file = open(test1_anno_file_path, 'w')
test2_anno_file = open(test2_anno_file_path, 'w')
lines = anno_file.readlines()
id_dist = {}
for line_idx in range(len(lines)):
line = lines[line_idx][:-1]
anno = line.split(' ')
image_id = int(anno[0][:-4])
identity = int(anno[1])
landmark = list(map(int, anno[2:]))
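	# Landmarks as [y, x]: A, B = left/right eye, C, D = left/right mouth corner,
	# E = nose (assuming the standard CelebA landmark order: eyes, nose, mouth corners)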
A = [landmark[1], landmark[0]]
B = [landmark[3], landmark[2]]
C = [landmark[7], landmark[6]]
D = [landmark[9], landmark[8]]
E = [landmark[5], landmark[4]]
x_ab = B[1] - A[1]
y_ac = D[0] - A[0]
x_ea = E[1] - A[1]
x_eb = B[1] - E[1]
if x_ea <= 0 or x_eb <= 0:
continue
if max(x_ea/x_eb, x_eb/x_ea) > 10:
continue
x_ea_per_eb = abs(min(x_ea/x_eb, 2))
x_eb_per_ea = abs(min(x_eb/x_ea, 2))
left = int(A[1] - 0.5*x_ab - (0.4*x_ea_per_eb)*x_ab)
right = int(B[1] + 0.5*x_ab + (0.4*x_eb_per_ea)*x_ab)
top = int(A[0] - y_ac)
bottom = int(top + 1.1*(right - left))
bbox = [top, left, bottom, right] # [y1, x1, y2, x2]
if identity not in id_dist:
id_dist[identity] = [[image_id]+bbox]
else:
id_dist[identity].append([image_id]+bbox)
yx = []
for identity in sorted(id_dist):
yx.append(id_dist[identity])
train_yx_list = []
test_yx_list = []
for i in range(len(yx)):
x_list = yx[i]
x_list_len = len(x_list)
if x_list_len >= 28:
train_yx_list.append(x_list[:25])
test_yx_list.append(x_list[25:28])
print('Train identities: {}'.format(len(train_yx_list)))
print('Test identities: {}'.format(len(test_yx_list)))
train1_yx_list = train_yx_list[:total_identities]
train2_yx_list = train_yx_list[total_identities:2*total_identities]
test1_yx_list = test_yx_list[:total_identities]
test2_yx_list = test_yx_list[total_identities:2*total_identities]
train1xy2d = np.zeros((total_identities*25, 6), dtype='int64')
train2xy2d = np.zeros((total_identities*25, 6), dtype='int64')
test1xy2d = np.zeros((total_identities*3, 6), dtype='int64')
test2xy2d = np.zeros((total_identities*3, 6), dtype='int64')
for i in range(total_identities):
for j in range(25):
image_id = train1_yx_list[i][j][0]
identity = i
bbox = train1_yx_list[i][j][1:]
train1xy2d[i*25+j] = [image_id, identity] + bbox
for i in range(total_identities):
for j in range(25):
image_id = train2_yx_list[i][j][0]
identity = i
bbox = train2_yx_list[i][j][1:]
train2xy2d[i*25+j] = [image_id, identity] + bbox
for i in range(total_identities):
for j in range(3):
image_id = test1_yx_list[i][j][0]
identity = i
bbox = test1_yx_list[i][j][1:]
test1xy2d[i*3+j] = [image_id, identity] + bbox
for i in range(total_identities):
for j in range(3):
image_id = test2_yx_list[i][j][0]
identity = i
bbox = test2_yx_list[i][j][1:]
test2xy2d[i*3+j] = [image_id, identity] + bbox
np.random.shuffle(train1xy2d)
np.random.shuffle(train2xy2d)
np.random.shuffle(test1xy2d)
np.random.shuffle(test2xy2d)
for i in range(train1xy2d.shape[0]):
line = str(train1xy2d[i, 0]).zfill(6) + '.jpg ' + ' '.join(list(map(str, list(train1xy2d[i, 1:])))) + '\n'
train1_anno_file.write(line)
for i in range(train2xy2d.shape[0]):
line = str(train2xy2d[i, 0]).zfill(6) + '.jpg ' + ' '.join(list(map(str, list(train2xy2d[i, 1:])))) + '\n'
train2_anno_file.write(line)
for i in range(test1xy2d.shape[0]):
line = str(test1xy2d[i, 0]).zfill(6) + '.jpg ' + ' '.join(list(map(str, list(test1xy2d[i, 1:])))) + '\n'
test1_anno_file.write(line)
for i in range(test2xy2d.shape[0]):
line = str(test2xy2d[i, 0]).zfill(6) + '.jpg ' + ' '.join(list(map(str, list(test2xy2d[i, 1:])))) + '\n'
test2_anno_file.write(line)
print('Train samples: {}, {}, test samples: {}, {}'.format(train1xy2d.shape[0], train2xy2d.shape[0], test1xy2d.shape[0], test2xy2d.shape[0]))
anno_file.close()
train1_anno_file.close()
test1_anno_file.close()
| 29.117241 | 141 | 0.68901 | 751 | 4,222 | 3.635153 | 0.141145 | 0.067399 | 0.043956 | 0.036264 | 0.409524 | 0.334066 | 0.320147 | 0.25348 | 0.133333 | 0.133333 | 0 | 0.056753 | 0.131928 | 4,222 | 144 | 142 | 29.319444 | 0.688131 | 0.00379 | 0 | 0.127273 | 0 | 0 | 0.066159 | 0.032842 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.018182 | 0 | 0.018182 | 0.027273 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
103ce200673ca1170d9b05ac1f82a3e9e138ca9d | 2,766 | py | Python | main.py | ShlokC/tradeTrickPY | 1d171ccb5c236aa2e0b82b1b2d9d4bbf2bfb78c1 | [
"MIT"
] | null | null | null | main.py | ShlokC/tradeTrickPY | 1d171ccb5c236aa2e0b82b1b2d9d4bbf2bfb78c1 | [
"MIT"
] | null | null | null | main.py | ShlokC/tradeTrickPY | 1d171ccb5c236aa2e0b82b1b2d9d4bbf2bfb78c1 | [
"MIT"
] | null | null | null | from flask import Flask, jsonify, render_template
import pypyodbc
import os
import numpy as np
import io
import base64
import pandas as pd
from sklearn import linear_model
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn import tree
import matplotlib.pyplot as plt
app = Flask(__name__)
@app.route('/')
def hello_world():
Connection = pypyodbc.connect('Driver={ODBC Driver 13 for SQL Server};Server=tcp:tradetricksql.database.windows.net,1433;Database=tradeTrickDB;Uid=shlok@tradetricksql;Pwd=MySQL@01;Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;')
cursor = Connection.cursor()
SQLCommand = ("SELECT * FROM BankNiftyData WHERE id > ?")
values = [2]
cursor.execute(SQLCommand,values)
results = cursor.fetchall()
#print(results)
return jsonify(results)
#return 'Hello, World!'
@app.route('/linearRegression')
def linearRegression():
try:
THIS_FOLDER = os.path.dirname(os.path.abspath(__file__))
filename_path = os.path.join(THIS_FOLDER, 'timedata.csv')
        balance_data = pd.read_csv(filename_path, sep=',', header=0)
        headers = list(balance_data.columns.values)
        X = balance_data.values[:, 1]
        X = X.reshape(X.size, 1)
        Y = balance_data.values[:, 0]
        Y = Y.reshape(Y.size, 1)
X_train, X_test, y_train, y_test = train_test_split( X, Y, test_size = 0.3, random_state = 100)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# Make predictions using the testing set
y_pred = regr.predict(X_test)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
mse =mean_squared_error(y_test, y_pred)
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(y_test, y_pred))
vScr= r2_score(y_test, y_pred)
# Plot outputs
plt.scatter(X_test, y_test, color='green')
plt.plot(X_test, y_pred, color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
img = io.BytesIO()
plt.savefig(img, format='png')
img.seek(0)
        data = base64.encodebytes(img.getvalue())
plot_url = base64.b64encode(img.getvalue()).decode()
img_tag ='<img src="data:image/png;base64,{}">'.format(plot_url)
return render_template('output.html',Coefficients=regr.coef_, mse=mse, vscr=vScr, result=data.decode('utf8'))
except OSError as err:
        return jsonify(str(err))
if __name__ == '__main__':
app.run()
| 35.922078 | 244 | 0.720897 | 393 | 2,766 | 4.890585 | 0.435115 | 0.01821 | 0.041623 | 0.020812 | 0.044745 | 0.044745 | 0.027055 | 0 | 0 | 0 | 0 | 0.01719 | 0.158713 | 2,766 | 76 | 245 | 36.394737 | 0.808767 | 0.090383 | 0 | 0 | 0 | 0.016129 | 0.163941 | 0.07858 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.241935 | 0 | 0.322581 | 0.048387 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
103e209fe080f06c94e307ec8087042b7c67ca55 | 2,353 | py | Python | code/exp_tunedalpha/runscript.py | ludwigbald/probprec | 227a924a725551f4531cbe682da4830305f55277 | [
"MIT"
] | null | null | null | code/exp_tunedalpha/runscript.py | ludwigbald/probprec | 227a924a725551f4531cbe682da4830305f55277 | [
"MIT"
] | null | null | null | code/exp_tunedalpha/runscript.py | ludwigbald/probprec | 227a924a725551f4531cbe682da4830305f55277 | [
"MIT"
] | null | null | null | """Simple run script using SORunner."""
import torch.optim as optim
import deepobs.pytorch as pyt
from sorunner import SORunner
from probprec import Preconditioner
class PreconditionedSGD(Preconditioner):
"""docstring for PreconditionedSGD"""
def __init__(self, *args, **kwargs):
super(PreconditionedSGD, self).__init__(*args, optim_class = optim.SGD, **kwargs)
# Preconditioned SGD, but with the learning rate pinned to the tuned F-MNIST
# baseline value in _init_the_optimizer rather than taken from the hyperparameters
class TunedFmnistPreconditionedSGD(Preconditioner):
def __init__(self, *args, **kwargs):
super(TunedFmnistPreconditionedSGD, self).__init__(*args, optim_class = optim.SGD, **kwargs)
def _init_the_optimizer(self):
for group in self.param_groups:
group.update(lr=0.11288378916846883)
print("[_init_the_optimizer] Group Learning Rate:", group['lr'])
self.optim_hyperparams.pop("lr", None)
print("[_init_the_optimizer] Initializing ", self.optim_class.__name__, " with: ", self.optim_hyperparams)
self.the_optimizer = self.optim_class(
self.param_groups, **self.optim_hyperparams)
# Preconditioned SGD, but with the learning rate pinned to the tuned CIFAR-10
# baseline value in _init_the_optimizer rather than taken from the hyperparameters
class TunedCifarPreconditionedSGD(Preconditioner):
def __init__(self, *args, **kwargs):
super(TunedCifarPreconditionedSGD, self).__init__(*args, optim_class = optim.SGD, **kwargs)
def _init_the_optimizer(self):
for group in self.param_groups:
group.update(lr=0.04832930238571752)
print("[_init_the_optimizer] Group Learning Rate:", group['lr'])
self.optim_hyperparams.pop("lr", None)
print("[_init_the_optimizer] Initializing ", self.optim_class.__name__, " with: ", self.optim_hyperparams)
self.the_optimizer = self.optim_class(
self.param_groups, **self.optim_hyperparams)
# and its hyperparameters for correct file naming, these are the optimal learning rates for SGD from the baselines
hyperparams_fmnist = {'lr': {"type": float, 'default': 0.11288378916846883}}
hyperparams_cifar = {'lr': {"type": float, 'default': 0.04832930238571752}}
# create the runner instances
frunner = SORunner(TunedFmnistPreconditionedSGD, hyperparams_fmnist)
# create the runner instances
crunner = SORunner(TunedCifarPreconditionedSGD, hyperparams_cifar)
frunner.run(testproblem='fmnist_2c2d')
crunner.run(testproblem='cifar10_3c3d')
| 37.349206 | 114 | 0.728432 | 266 | 2,353 | 6.161654 | 0.293233 | 0.054912 | 0.058572 | 0.051251 | 0.529591 | 0.467358 | 0.451495 | 0.402685 | 0.38072 | 0.38072 | 0 | 0.039614 | 0.163196 | 2,353 | 62 | 115 | 37.951613 | 0.792788 | 0.127072 | 0 | 0.459459 | 0 | 0 | 0.110348 | 0.041197 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135135 | false | 0 | 0.162162 | 0 | 0.378378 | 0.108108 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
103e7ad934891f7c56ad137ea27df1a8312e7e45 | 1,469 | py | Python | src/kol/request/SearchPlayerRequest.py | danheath/temppykol | 7f9621b44df9f9d2d9fc0a5b2a06db116b9ccfab | [
"BSD-3-Clause"
] | 19 | 2015-02-16T08:30:49.000Z | 2020-05-01T06:06:33.000Z | src/kol/request/SearchPlayerRequest.py | danheath/temppykol | 7f9621b44df9f9d2d9fc0a5b2a06db116b9ccfab | [
"BSD-3-Clause"
] | 5 | 2015-01-13T23:01:54.000Z | 2016-11-30T15:23:43.000Z | src/kol/request/SearchPlayerRequest.py | danheath/temppykol | 7f9621b44df9f9d2d9fc0a5b2a06db116b9ccfab | [
"BSD-3-Clause"
] | 19 | 2015-05-28T09:36:19.000Z | 2022-03-15T23:19:29.000Z | from GenericRequest import GenericRequest
from kol.manager import PatternManager

# Query-type constants for the form's "startswith" field.
STARTSWITH = 1
CONTAINS = 2
ENDSWITH = 3

class SearchPlayerRequest(GenericRequest):
    def __init__(self, session, queryString, queryType=STARTSWITH, pvpOnly=False, hardcoreOnly=None, searchLevel=None, searchRanking=None):
        super(SearchPlayerRequest, self).__init__(session)
        self.url = session.serverURL + "searchplayer.php"
        self.requestData["searchstring"] = queryString
        self.requestData['startswith'] = queryType
        self.requestData['searching'] = 'Yep'
        if pvpOnly:
            self.requestData['pvponly'] = 1
        if hardcoreOnly is not None:
            # 1 restricts the search to hardcore players, 2 to softcore; 0 searches both.
            if hardcoreOnly:
                self.requestData['hardcoreonly'] = 1
            else:
                self.requestData['hardcoreonly'] = 2
        else:
            self.requestData['hardcoreonly'] = 0
        if searchLevel:
            self.requestData['searchlevel'] = searchLevel
        if searchRanking:
            self.requestData['searchranking'] = searchRanking

    def parseResponse(self):
        searchPattern = PatternManager.getOrCompilePattern('searchPlayers')
        players = []
        for player in searchPattern.finditer(self.responseText):
            userId = int(player.group(1))
            name = player.group(2)
            p = {'userName': name, 'userId': userId}
            players.append(p)
        self.responseData['players'] = players
| 35.829268 | 139 | 0.63853 | 131 | 1,469 | 7.099237 | 0.442748 | 0.145161 | 0.087097 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008318 | 0.263445 | 1,469 | 40 | 140 | 36.725 | 0.851201 | 0 | 0 | 0.058824 | 0 | 0 | 0.102791 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.147059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
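A hedged usage sketch: it assumes a logged-in pykol Session and the doRequest plumbing inherited from GenericRequest; the credentials and query string are placeholders.

# Hypothetical usage of SearchPlayerRequest.
session = Session()                        # kol's login wrapper, assumed to exist in this codebase
session.login("someUser", "somePassword")
request = SearchPlayerRequest(session, "bor", queryType=CONTAINS, pvpOnly=True)
response = request.doRequest()             # provided by GenericRequest
for player in response["players"]:
    print(player["userId"], player["userName"])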
103f01e708f926e0eae5bd3790ca21f7e8ebb3d6 | 3,830 | py | Python | code/run_fishers.py | kkatekim/micm | 8b79bc83a5023a6bd03ad1ab04332a6427dd778d | [
"MIT"
] | null | null | null | code/run_fishers.py | kkatekim/micm | 8b79bc83a5023a6bd03ad1ab04332a6427dd778d | [
"MIT"
] | null | null | null | code/run_fishers.py | kkatekim/micm | 8b79bc83a5023a6bd03ad1ab04332a6427dd778d | [
"MIT"
] | null | null | null | import pandas as pd
import argparse
import numpy as np  # needed below for fishers_df.astype(np.int32)
from pathlib import Path

import hail as hl

import utils

'''Runs fisher's test on variants found in protein domain (includes MPC>=2 and AF_NFE=0).'''

def calculate_fishers(case_carrier, control_carrier, case_noncarrier, control_noncarrier):
    '''Runs fisher's test on one gene and returns a list with pvalue, OR, and CI.'''
    result = hl.eval(hl.fisher_exact_test(case_carrier, case_noncarrier, control_carrier, control_noncarrier))
    return [result["p_value"], result["odds_ratio"], result["ci_95_lower"], result["ci_95_upper"]]

def fishers_test(df, variant):
    '''Runs fisher's test on specified variant and returns df with pval, OR, and CI added.'''
    case = "case_" + variant
    control = "control_" + variant
    fishers_df = pd.DataFrame(index=df.index)
    fishers_df["case_carrier"] = df[case].values
    fishers_df["control_carrier"] = df[control].values
    # 3864 cases and 7839 controls in total, so non-carriers are the remainder.
    fishers_df["case_noncarrier"] = 3864 - df[case].values
    fishers_df["control_noncarrier"] = 7839 - df[control].values
    fishers_df = fishers_df.astype(np.int32)
    # col names
    col_p = "pval_" + variant
    col_or = "OR_" + variant
    col_lowci = "lowci_" + variant
    col_highci = "highci_" + variant
    fishers_df[[col_p, col_or, col_lowci, col_highci]] = fishers_df.apply(lambda x: calculate_fishers(
        x.case_carrier, x.control_carrier, x.case_noncarrier, x.control_noncarrier),
        axis=1, result_type="expand")
    return fishers_df.drop(labels=["case_carrier", "control_carrier", "case_noncarrier", "control_noncarrier"], axis=1)

def run_fishers_on_variants(df, mpc=False):
    '''Runs fishers test on all variants.'''
    if not mpc:
        variants = ["synonymous", "missense", "PTVs"]
        new_df = utils.get_case_control_per_variant(df)
    else:
        variants = ["missense_mpc>=2"]
        new_df = utils.get_case_control_per_variant(df, mpc)
    fishers_list = []
    for variant in variants:
        tmp = fishers_test(new_df, variant)  # fishers_test is defined above in this module
        fishers_list.append(tmp)
    return new_df.join(pd.concat(fishers_list, axis=1))

def merge_variants_mpc_fishers(all_df, mpc_df, file_name=None):
    '''Merges df of all variants and MPC>=2 variants and writes to file (if needed).'''
    combined_df = pd.merge(all_df, mpc_df, on="gene", how="outer")
    if file_name is not None:
        out_file = Path("../data/summaryData/{}_fishers.csv".format(file_name))
        combined_df.to_csv(out_file)
    return combined_df

if __name__ == "__main__":
    # first create case control count
    # find mpc >= 0
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", "--input", type=str, default=None)
    parser.add_argument("-o", "--output", type=str, default=None)
    args = parser.parse_args()
    if args.input is not None:
        docs_file = Path(args.input)
    else:
        docs_file = (
            Path(__file__)
            .resolve()
            .parents[1]
            .joinpath("data", "proteinDomain", "variants_in_protein_domain.csv")
        )
    if args.output is not None:
        filename = args.output  # fixed typo: was args.outout
    else:
        filename = "protein_variants"
    df = pd.read_csv(docs_file)
    all_df = run_fishers_on_variants(df)
    mpc_df = run_fishers_on_variants(utils.find_subset(df, "MPC", 2, ">="), True)
    combined = merge_variants_mpc_fishers(all_df, mpc_df, filename)
    afe_df = utils.find_subset(df, "AF_NFE", 0, "=")
    afe_all_df = run_fishers_on_variants(afe_df)
    afe_mpc_df = run_fishers_on_variants(utils.find_subset(afe_df, "MPC", 2, ">="), True)
    afe_combined = merge_variants_mpc_fishers(afe_all_df, afe_mpc_df, "afe_{}".format(filename))
    print(df.shape[0])
    print(combined.shape[0])
    print(afe_combined.shape[0]) | 35.462963 | 119 | 0.669191 | 537 | 3,830 | 4.480447 | 0.271881 | 0.037406 | 0.024938 | 0.041563 | 0.242727 | 0.192436 | 0.137157 | 0.137157 | 0.063175 | 0 | 0 | 0.009552 | 0.207311 | 3,830 | 108 | 120 | 35.462963 | 0.782938 | 0.086162 | 0 | 0.041667 | 0 | 0 | 0.115771 | 0.018901 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.069444 | 0 | 0.180556 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
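For intuition, the per-gene 2x2 contingency test above can be reproduced without Hail. A minimal sketch using scipy (scipy.stats.fisher_exact returns the odds ratio and p-value, though not the 95% CI that hl.fisher_exact_test also reports); the counts are made up:

from scipy.stats import fisher_exact

# Rows: case vs control; columns: carrier vs non-carrier.
table = [[30, 3834], [20, 7819]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR={odds_ratio:.3f}, p={p_value:.3g}")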
1044d9a720994b98d9af5ed5bede93c137e63162 | 363 | py | Python | falconer/users.py | wsz/falconer | 8331de6d311c96f87963971390cf1bd6da29cc83 | [
"MIT"
] | null | null | null | falconer/users.py | wsz/falconer | 8331de6d311c96f87963971390cf1bd6da29cc83 | [
"MIT"
] | null | null | null | falconer/users.py | wsz/falconer | 8331de6d311c96f87963971390cf1bd6da29cc83 | [
"MIT"
] | null | null | null | import json
import falcon

class Resource:
    def on_get(self, req, resp):
        users = {
            'users': [
                {
                    'name': 'Admin',
                    'email': 'admin@example.com'
                }
            ]
        }
        resp.body = json.dumps(users, ensure_ascii=False)
        resp.status = falcon.HTTP_200
| 18.15 | 57 | 0.432507 | 33 | 363 | 4.666667 | 0.757576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015306 | 0.460055 | 363 | 19 | 58 | 19.105263 | 0.770408 | 0 | 0 | 0 | 0 | 0 | 0.099174 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
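A hedged sketch of how this resource would typically be mounted; falcon.API is the pre-3.0 constructor that matches the resp.body attribute used above, and the '/users' route is an assumption:

# Hypothetical wiring for the Resource above.
app = falcon.API()  # falcon.App() and resp.text replace these in Falcon >= 3.0
app.add_route('/users', Resource())
# Serve with any WSGI server, e.g.: gunicorn users:app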
10472c072c45ab39419d2442a2d613d961d499e4 | 13,852 | py | Python | src/py/metrics.py | DCBIA-OrthoLab/CBCT_seg | 427d5b19fdeb52acebdee895a5f15ba21404a8a4 | [
"Unlicense"
] | null | null | null | src/py/metrics.py | DCBIA-OrthoLab/CBCT_seg | 427d5b19fdeb52acebdee895a5f15ba21404a8a4 | [
"Unlicense"
] | null | null | null | src/py/metrics.py | DCBIA-OrthoLab/CBCT_seg | 427d5b19fdeb52acebdee895a5f15ba21404a8a4 | [
"Unlicense"
] | 4 | 2021-07-13T15:52:01.000Z | 2022-03-26T02:32:58.000Z | import argparse
import glob
import math
import os
import time

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from numba import jit, prange
from sklearn import metrics

from utils import *

@jit(nopython=True, nogil=True, cache=True, parallel=True, fastmath=True)
def compute_tp_tn_fp_fn(y_true, y_pred):
    # Accumulate the confusion-matrix counts in a single parallel pass.
    tp = 0
    tn = 0
    fp = 0
    fn = 0
    for i in prange(y_pred.size):
        tp += y_true[i] * y_pred[i]
        tn += (1-y_true[i]) * (1-y_pred[i])
        fp += (1-y_true[i]) * y_pred[i]
        fn += y_true[i] * (1-y_pred[i])
    return tp, tn, fp, fn

def compute_precision(tp, fp):
    return tp / (tp + fp)

def compute_recall(tp, fn):
    return tp / (tp + fn)

def compute_f1_score(precision, recall):
    try:
        return (2*precision*recall) / (precision + recall)
    except ZeroDivisionError:
        return 0

def compute_fbeta_score(precision, recall, beta):
    try:
        return ((1 + beta**2) * precision * recall) / (beta**2 * precision + recall)
    except ZeroDivisionError:
        return 0

def compute_accuracy(tp, tn, fp, fn):
    return (tp + tn)/(tp + tn + fp + fn)

def compute_auc(GT, pred):
    return metrics.roc_auc_score(GT, pred)

def compute_auprc(GT, pred):
    prec, rec, thresholds = metrics.precision_recall_curve(GT, pred)
    # print(prec, rec, thresholds)
    plt.plot(prec, rec)
    plt.show()
    # return metrics.auc(prec, rec)

def compute_average_precision(GT, pred):
    ratio = sum(GT)/np.size(GT)
    return metrics.average_precision_score(GT, pred), ratio

def main(args):
    #====== Numba compilation ======
    # The 2 dummy calls below are important: they force numba to compile the
    # uint8 and float32 specializations before any timed processing happens.
    compute_tp_tn_fp_fn(np.array([0,0,0], dtype=np.uint8), np.array([0,1,0], dtype=np.uint8))
    compute_tp_tn_fp_fn(np.array([0,0,0], dtype=np.float32), np.array([0,1,0], dtype=np.float32))
    #===============================
    out = args.out
    if not os.path.exists(os.path.dirname(out)):
        os.makedirs(os.path.dirname(out))
    model_name = args.model_name
    number_epochs = args.epochs
    batch_size = args.batch_size
    NumberFilters = args.number_filters
    lr = args.learning_rate
    cv_fold = args.cv_fold
    model_params = ['Number Epochs', 'Batch Size', 'Number Filters', 'Learning Rate', 'Empty col', 'Empty col2', 'Empty col3', 'CV']
    param_values = [number_epochs, batch_size, NumberFilters, lr, '', '', '', '']
    Params = pd.Series(param_values, index=model_params, name='Params values')
    metrics_names = ['AUPRC','AUPRC - Baseline','F1_Score','Fbeta_Score','Accuracy','Recall','Precision','CV fold']
    Metrics = pd.Series(metrics_names, index=model_params, name='Model\\Metrics')
    if not os.path.exists(out):
        Folder_Metrics = pd.DataFrame(columns=model_params)
        Image_Metrics = pd.DataFrame(columns=model_params)
    else:
        Metrics_file = pd.ExcelFile(out)
        Folder_Metrics = pd.read_excel(Metrics_file, 'Sheet1', index_col=0, header=None)
        Folder_Metrics = Folder_Metrics[Folder_Metrics.columns[:8]]
        Folder_Metrics.columns = model_params
        Image_Metrics = pd.read_excel(Metrics_file, 'Sheet2', index_col=0, header=None)
        Image_Metrics.columns = model_params
    matching_values = (Folder_Metrics.values[:,:4] == Params.values[:4]).all(1)
    if not matching_values.any():
        Folder_Metrics = Folder_Metrics.append(pd.Series(['Number Epochs', 'Batch Size', 'Number Filters', 'Learning Rate', '', '', '', 'CV'], name='Params', index=model_params), ignore_index=False)
        Folder_Metrics = Folder_Metrics.append(Params, ignore_index=False)
        Folder_Metrics = Folder_Metrics.append(Metrics, ignore_index=False)
        Folder_Metrics = Folder_Metrics.append(pd.Series(name='', dtype='object'), ignore_index=False)
    matching_values = (Image_Metrics.values[:,:4] == Params.values[:4]).all(1)
    if not matching_values.any():
        Image_Metrics = Image_Metrics.append(pd.Series(['Number Epochs', 'Batch Size', 'Number Filters', 'Learning Rate', '', '', '', 'File Name'], name='Params', index=model_params), ignore_index=False)
        Image_Metrics = Image_Metrics.append(pd.Series(param_values, index=model_params, name='Params values'), ignore_index=False)
        Image_Metrics = Image_Metrics.append(pd.Series(['AUPRC','AUPRC - Baseline','F1_Score','Fbeta_Score','Accuracy','Recall','Precision','File Name'], index=model_params, name='Model\\Metrics'), ignore_index=False)
        Image_Metrics = Image_Metrics.append(pd.Series(name='', dtype='object'), ignore_index=False)
    arrays = [range(len(Folder_Metrics)), Folder_Metrics.index]
    Index = pd.MultiIndex.from_arrays(arrays, names=('number', 'name'))
    Folder_Metrics.set_index(Index, inplace=True)
    arrays = [range(len(Image_Metrics)), Image_Metrics.index]
    Index = pd.MultiIndex.from_arrays(arrays, names=('number', 'name'))
    Image_Metrics.set_index(Index, inplace=True)
    idx1 = Folder_Metrics[(Folder_Metrics.values[:,:4] == Params.values[:4]).all(1)].index.get_level_values('number').tolist()[0]
    idx2 = Image_Metrics[(Image_Metrics.values[:,:4] == Params.values[:4]).all(1)].index.get_level_values('number').tolist()[0]
    img_fn_array = []
    if args.pred_img:
        img_obj = {}
        img_obj["img"] = args.pred_img
        img_obj["GT"] = args.groundtruth_img
        if args.pred_raw_img:
            img_obj['raw'] = args.pred_raw_img
        img_fn_array.append(img_obj)
    if args.pred_dir:
        normpath_img = os.path.normpath("/".join([args.pred_dir, '*', '']))
        normpath_GT = os.path.normpath("/".join([args.groundtruth_dir, '*', '']))
        if args.pred_raw_dir:
            normpath_raw = os.path.normpath("/".join([args.pred_raw_dir, '*', '']))
        img_list = []
        for img_fn in glob.iglob(normpath_img, recursive=True):
            if args.tool == 'RCSeg':
                img_split = os.path.basename(img_fn).split("_")
                if img_split[0] == img_split[-2] or (img_split[-2] not in ['upper', 'lower']):
                    img_list.append(img_fn)
            else:
                img_list.append(img_fn)
        if args.pred_raw_dir:
            for (img_fn, GT_fn, raw_fn) in zip(sorted(img_list), sorted(glob.iglob(normpath_GT, recursive=True)), sorted(glob.iglob(normpath_raw, recursive=True))):
                if os.path.isfile(img_fn) and True in [ext in img_fn for ext in [".nrrd", ".nrrd.gz", ".nii", ".nii.gz", ".gipl", ".gipl.gz"]]:
                    img_obj = {}
                    img_obj["img"] = img_fn
                    img_obj["GT"] = GT_fn
                    img_obj["raw"] = raw_fn
                    img_fn_array.append(img_obj)
        else:
            for (img_fn, GT_fn) in zip(sorted(img_list), sorted(glob.iglob(normpath_GT, recursive=True))):
                if os.path.isfile(img_fn) and True in [ext in img_fn for ext in [".nrrd", ".nrrd.gz", ".nii", ".nii.gz", ".gipl", ".gipl.gz"]]:
                    img_obj = {}
                    img_obj["img"] = img_fn
                    img_obj["GT"] = GT_fn
                    img_fn_array.append(img_obj)
    total_values = pd.DataFrame(columns=model_params)
    for img_obj in img_fn_array:
        startTime = time.time()
        pred_path = img_obj["img"]
        GT_path = img_obj["GT"]
        pred, _ = ReadFile(pred_path)
        GT, _ = ReadFile(GT_path, verbose=0)
        pred = Normalize(pred, out_min=0, out_max=1)
        GT = Normalize(GT, out_min=0, out_max=1)
        # Binarize both volumes at 0.5 before computing the metrics.
        pred[pred<=0.5] = 0
        pred[pred>0.5] = 1
        GT[GT<=0.5] = 0
        GT[GT>0.5] = 1
        pred = np.array(pred).flatten()
        GT = np.array(GT).flatten()
        GT = np.uint8(GT > 0.5)
        tp, tn, fp, fn = compute_tp_tn_fp_fn(GT, pred)
        recall = compute_recall(tp, fn)
        precision = compute_precision(tp, fp)
        f1 = compute_f1_score(precision, recall)
        fbeta = compute_fbeta_score(precision, recall, 2)
        acc = compute_accuracy(tp, tn, fp, fn)
        if 'raw' in img_obj:
            raw_path = img_obj["raw"]
            raw, _ = ReadFile(raw_path, verbose=0)
            raw = Normalize(raw, out_min=0, out_max=1)
            raw = np.array(raw).flatten()
            # auc = compute_auc(GT, raw)
            compute_auprc(GT, raw)
            auprc, ratio = compute_average_precision(GT, raw)
        else:
            # auc = compute_auc(GT, pred)
            # auprc = compute_auprc(GT, raw)
            auprc, ratio = compute_average_precision(GT, pred)
        metrics_line = [auprc, ratio, f1, fbeta, acc, recall, precision]
        metrics_line.append(os.path.basename(pred_path).split('.')[0])
        total_values.loc[len(total_values)] = metrics_line
        stopTime = time.time()
        print('Processing completed in {0:.2f} seconds'.format(stopTime-startTime))
    means = total_values[total_values.columns.drop('CV')].mean()
    stds = total_values[total_values.columns.drop('CV')].std()
    stds = [0 if math.isnan(x) else x for x in stds]
    values = [(f"{mean:.4f}"+' \u00B1 '+f"{std:.4f}") for (mean, std) in zip(means, stds)]
    values.append(cv_fold)
    line = pd.DataFrame([values], columns=model_params)
    Index_line = pd.MultiIndex.from_arrays([[idx1+1.5], [model_name]], names=('number', 'name'))
    line.set_index(Index_line, inplace=True)
    Folder_Metrics = Folder_Metrics.append(line, ignore_index=False)
    Folder_Metrics = Folder_Metrics.sort_index()
    Folder_Metrics = Folder_Metrics.set_index(Folder_Metrics.index.droplevel('number').rename('Params'))
    index_number = [idx2+1+(1/(len(total_values)+1)*(i+1)) for i in range(len(total_values))]
    index_name = [model_name for i in range(len(total_values))]
    Index_line = pd.MultiIndex.from_arrays([index_number, index_name], names=('number', 'name'))
    total_values.set_index(Index_line, inplace=True)
    Image_Metrics = Image_Metrics.append(total_values, ignore_index=False)
    Image_Metrics = Image_Metrics.sort_index()
    Image_Metrics = Image_Metrics.set_index(Image_Metrics.index.droplevel('number').rename('Params'))
    writer = pd.ExcelWriter(out, engine='xlsxwriter')
    Folder_Metrics.to_excel(writer, sheet_name='Sheet1', header=False)
    Image_Metrics.to_excel(writer, sheet_name='Sheet2', header=False)
    workbook = writer.book
    worksheet1 = writer.sheets['Sheet1']
    worksheet2 = writer.sheets['Sheet2']
    row_format = workbook.add_format({'bold': True, 'align': 'center', 'valign': 'vcenter'})
    for ind, row in enumerate(Folder_Metrics.index):
        if row in ['Params', 'Model\\Metrics']:
            worksheet1.set_row(ind, 15, row_format)
    for ind, row in enumerate(Image_Metrics.index):
        if row in ['Params', 'Model\\Metrics']:
            worksheet2.set_row(ind, 15, row_format)
        elif row not in ['Params values']:
            worksheet2.set_row(ind, 15, workbook.add_format({'num_format': '0.0000', 'align': 'center', 'valign': 'vcenter'}))
    col_format = workbook.add_format({'align': 'center', 'valign': 'vcenter'})
    for ind, col in enumerate(Folder_Metrics.columns):
        column_len = Folder_Metrics[col].astype(str).str.len().max() + 2
        worksheet1.set_column(ind+1, ind+1, column_len, col_format)
    for ind, col in enumerate(Image_Metrics.columns):
        column_len = Image_Metrics[col].astype(str).str.len().max() + 2
        worksheet2.set_column(ind+1, ind+1, column_len, col_format)
    indexcol_len = Folder_Metrics.index.astype(str).str.len().max() + 2
    worksheet1.set_column(0, 0, indexcol_len, col_format)
    indexcol_len = Image_Metrics.index.astype(str).str.len().max() + 2
    worksheet2.set_column(0, 0, indexcol_len, col_format)
    writer.save()

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Evaluation metrics', formatter_class=argparse.ArgumentDefaultsHelpFormatter)

    input_params = parser.add_argument_group('Input files')
    predicted_files = input_params.add_mutually_exclusive_group(required=True)
    predicted_files.add_argument('--pred_img', type=str, help='Input predicted reconstructed 3D image')
    predicted_files.add_argument('--pred_dir', type=str, help='Input directory with predicted reconstructed 3D images')
    predicted_raw_files = input_params.add_mutually_exclusive_group()
    predicted_raw_files.add_argument('--pred_raw_img', type=str, help='Input raw predicted reconstructed 3D image')
    predicted_raw_files.add_argument('--pred_raw_dir', type=str, help='Input directory with raw predicted reconstructed 3D images')
    groundtruth_files = input_params.add_mutually_exclusive_group(required=True)
    groundtruth_files.add_argument('--groundtruth_img', type=str, help='Input original 3D images (ground truth)')
    groundtruth_files.add_argument('--groundtruth_dir', type=str, help='Input directory with original 3D images (ground truth)')

    output_params = parser.add_argument_group('Output parameters')
    output_params.add_argument('--out', type=str, help='Output filename', required=True)

    training_parameters = parser.add_argument_group('Training parameters')
    training_parameters.add_argument('--tool', type=str, help='Name of the tool used', default='MandSeg')
    training_parameters.add_argument('--model_name', type=str, help='name of the model', default='CBCT_seg_model')
    training_parameters.add_argument('--epochs', type=int, help='number of training epochs', default=20)
    training_parameters.add_argument('--batch_size', type=int, help='batch_size value', default=16)
    training_parameters.add_argument('--learning_rate', type=float, help='Learning rate', default=0.00001)
    training_parameters.add_argument('--number_filters', type=int, help='Number of filters', default=16)
    training_parameters.add_argument('--cv_fold', type=int, help='number of the cross-validation fold', default=1)

    args = parser.parse_args()
    main(args)
| 46.639731 | 216 | 0.661204 | 1,919 | 13,852 | 4.55185 | 0.140698 | 0.047625 | 0.023927 | 0.032742 | 0.557642 | 0.438008 | 0.323412 | 0.262278 | 0.217058 | 0.174699 | 0 | 0.014396 | 0.192608 | 13,852 | 296 | 217 | 46.797297 | 0.766631 | 0.016748 | 0 | 0.135593 | 0 | 0 | 0.115274 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042373 | false | 0 | 0.04661 | 0.016949 | 0.131356 | 0.004237 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
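A quick sanity check of the confusion-matrix helpers above (a minimal sketch; the arrays are made up):

y_true = np.array([1, 0, 1, 1, 0], dtype=np.uint8)
y_pred = np.array([1, 0, 0, 1, 1], dtype=np.uint8)
tp, tn, fp, fn = compute_tp_tn_fp_fn(y_true, y_pred)  # -> (2, 1, 1, 1)
precision = compute_precision(tp, fp)                 # 2/3
recall = compute_recall(tp, fn)                       # 2/3
print(compute_f1_score(precision, recall))            # 0.666...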
104848ce3a03e243f36daf0633f599b83f507b88 | 1,167 | py | Python | Python_Advanced_Softuni/Multidimensional_Lists_Exercise/venv/bombs.py | borisboychev/SoftUni | 22062312f08e29a1d85377a6d41ef74966d37e99 | [
"MIT"
] | 1 | 2020-12-14T23:25:19.000Z | 2020-12-14T23:25:19.000Z | Python_Advanced_Softuni/Multidimensional_Lists_Exercise/venv/bombs.py | borisboychev/SoftUni | 22062312f08e29a1d85377a6d41ef74966d37e99 | [
"MIT"
] | null | null | null | Python_Advanced_Softuni/Multidimensional_Lists_Exercise/venv/bombs.py | borisboychev/SoftUni | 22062312f08e29a1d85377a6d41ef74966d37e99 | [
"MIT"
] | null | null | null | def explode(bomb_r, bomb_c, size, m):
    bomb = m[bomb_r][bomb_c]
    # Damage the 3x3 neighbourhood around the bomb by the bomb's own value.
    for row in range(bomb_r - 1, bomb_r + 2):
        for col in range(bomb_c - 1, bomb_c + 2):
            current_pos = [row, col]
            if is_valid(current_pos, size) and m[current_pos[0]][current_pos[1]] > 0:
                m[current_pos[0]][current_pos[1]] -= bomb


def is_valid(pos, size):
    r = pos[0]
    c = pos[1]
    return 0 <= r < size and 0 <= c < size


n = int(input())
matrix = []
for _ in range(n):
    matrix.append([int(x) for x in input().split()])

bomb_nums = input().split()
for bomb in bomb_nums:
    tokens = [int(x) for x in bomb.split(',')]
    bomb_row = tokens[0]
    bomb_col = tokens[1]
    if matrix[bomb_row][bomb_col] > 0:
        explode(bomb_row, bomb_col, n, matrix)
        matrix[bomb_row][bomb_col] = 0

alive_count = 0
alive_cells_sum = 0
for row in range(n):
    for col in range(n):
        if matrix[row][col] > 0:
            alive_count += 1
            alive_cells_sum += matrix[row][col]

print(f'Alive cells: {alive_count}')
print(f'Sum: {alive_cells_sum}')
for row in matrix:
    print(' '.join([str(x) for x in row]))
| 24.3125 | 90 | 0.588689 | 197 | 1,167 | 3.304569 | 0.19797 | 0.092166 | 0.036866 | 0.032258 | 0.162826 | 0.132104 | 0 | 0 | 0 | 0 | 0 | 0.024306 | 0.25964 | 1,167 | 47 | 91 | 24.829787 | 0.729167 | 0 | 0 | 0 | 0 | 0 | 0.042845 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0 | 0.088235 | 0.088235 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
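A worked example of the expected stdin/stdout, traced by hand (the input is made up):

# stdin:
# 3
# 5 5 5
# 5 9 5
# 5 5 5
# 0,0
#
# stdout (the bomb's value 5 is subtracted across its 3x3 neighbourhood):
# Alive cells: 6
# Sum: 29
# 0 0 5
# 0 4 5
# 5 5 5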
104c7b9ea2d55cb62dd87de202fec12e2b37d470 | 4,893 | py | Python | src/openfermion/third_party/_higham_test.py | mpharrigan/OpenFermion | ae5bbaed60faa019fae9d47d6e578933874e074d | [
"Apache-2.0"
] | null | null | null | src/openfermion/third_party/_higham_test.py | mpharrigan/OpenFermion | ae5bbaed60faa019fae9d47d6e578933874e074d | [
"Apache-2.0"
] | null | null | null | src/openfermion/third_party/_higham_test.py | mpharrigan/OpenFermion | ae5bbaed60faa019fae9d47d6e578933874e074d | [
"Apache-2.0"
] | null | null | null | # BSD 3-Clause License
#
# Copyright (c) 2018 Rigetti & Co, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# pylint: disable=C
from itertools import product
import numpy as np
import pytest
from openfermion.third_party._higham import (heaviside, higham_polynomial,
                                             higham_root, map_to_tensor,
                                             map_to_matrix,
                                             fixed_trace_positive_projection)


def test_heaviside():
    assert np.isclose(heaviside(0), 1.0)
    assert np.isclose(heaviside(0.5), 1.0)
    assert np.isclose(heaviside(-0.5), 0.0)
    assert np.isclose(heaviside(-0.5, -1), 1.0)
    assert np.isclose(heaviside(-2, -1), 0)


def test_higham_polynomial():
    eigs = np.arange(10)
    assert np.isclose(higham_polynomial(eigs, eigs[-1]), 0.0)
    assert np.isclose(higham_polynomial(eigs, 0), sum(eigs))
    assert np.isclose(higham_polynomial(eigs, 5), sum(eigs[5:] - 5))
    assert np.isclose(higham_polynomial(eigs, 8), sum(eigs[8:] - 8))


def test_higham_root():
    dim = 20
    np.random.seed(42)
    mat = np.random.random((dim, dim))
    mat = 0.5 * (mat + mat.T)
    w, _ = np.linalg.eigh(mat)
    target_trace = np.round(w[-1] - 1)
    sigma = higham_root(w, target_trace)
    assert np.isclose(higham_polynomial(w, shift=sigma), target_trace)

    with pytest.raises(ValueError):
        higham_root(w, target_trace=-1)

    tw = higham_root(w, target_trace=0)
    assert np.isclose(tw, w[-1])


def test_matrix_2_tensor():
    dim = 10
    np.random.seed(42)
    mat = np.random.random((dim**2, dim**2))
    mat = 0.5 * (mat + mat.T)
    tensor = map_to_tensor(mat)
    for p, q, r, s in product(range(dim), repeat=4):
        assert np.isclose(tensor[p, q, r, s], mat[p * dim + q, r * dim + s])

    test_mat = map_to_matrix(tensor)
    assert np.allclose(test_mat, mat)

    with pytest.raises(TypeError):
        map_to_tensor(np.zeros((4, 4, 4, 4)))
    with pytest.raises(TypeError):
        map_to_matrix(np.zeros((4, 4)))


def test_reconstruction():
    dim = 20
    np.random.seed(42)
    mat = np.random.random((dim, dim))
    mat = 0.5 * (mat + mat.T)
    test_mat = np.zeros_like(mat)
    w, v = np.linalg.eigh(mat)
    for i in range(w.shape[0]):
        test_mat += w[i] * v[:, [i]].dot(v[:, [i]].T)
    assert np.allclose(test_mat - mat, 0.0)

    test_mat = fixed_trace_positive_projection(mat, np.trace(mat))
    assert np.isclose(np.trace(test_mat), np.trace(mat))
    w, v = np.linalg.eigh(test_mat)
    assert np.all(w >= -(float(4.0E-15)))

    mat = np.arange(16).reshape((4, 4))
    mat = 0.5 * (mat + mat.T)
    mat_tensor = map_to_tensor(mat)
    trace_mat = np.trace(mat)
    true_mat = fixed_trace_positive_projection(mat, trace_mat)
    test_mat = map_to_matrix(
        fixed_trace_positive_projection(mat_tensor, trace_mat))
    assert np.allclose(true_mat, test_mat)
    assert np.allclose(true_mat,
                       fixed_trace_positive_projection(true_mat, trace_mat))


def test_mlme():
    """
    Test from fig 1 of maximum likelihood minimum effort!
    """
    eigs = np.array(
        list(reversed([3.0 / 5, 1.0 / 2, 7.0 / 20, 1.0 / 10, -11.0 / 20])))
    target_trace = 1.0
    sigma = higham_root(eigs, target_trace)
    shifted_eigs = np.multiply(heaviside(eigs - sigma), (eigs - sigma))
    assert np.allclose(shifted_eigs, [0, 0, 1.0 / 5, 7.0 / 20, 9.0 / 20])
| 37.351145 | 80 | 0.669119 | 746 | 4,893 | 4.274799 | 0.289544 | 0.047664 | 0.061148 | 0.030103 | 0.366573 | 0.291 | 0.137974 | 0.113515 | 0.087175 | 0.076513 | 0 | 0.031307 | 0.216636 | 4,893 | 130 | 81 | 37.638462 | 0.800678 | 0.325158 | 0 | 0.168831 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246753 | 1 | 0.077922 | false | 0 | 0.051948 | 0 | 0.12987 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
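For context, the fixed-trace positive projection these tests exercise can be sketched directly from the eigendecomposition. This is a hedged re-implementation (bisection stands in for whatever root finder the real _higham module uses):

def fixed_trace_positive_projection_sketch(mat, target_trace):
    # Project a symmetric matrix onto {A : A >= 0, tr(A) == target_trace}.
    w, v = np.linalg.eigh(mat)
    lo, hi = w.min() - target_trace, w.max()
    for _ in range(100):  # bisect for sigma with sum(max(w - sigma, 0)) == target_trace
        sigma = 0.5 * (lo + hi)
        if np.maximum(w - sigma, 0).sum() > target_trace:
            lo = sigma
        else:
            hi = sigma
    w_new = np.maximum(w - sigma, 0)
    return (v * w_new) @ v.T  # equals v @ diag(w_new) @ v.T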
104c8d60de5269a8abfbd3958e9bc1a67cec1fe1 | 2,356 | py | Python | scripts/list_new_commits.py | captainsafia/mybinder.org-deploy | fb7d233fc4c3e8ed5c055d71ef95daa5eb7c8da6 | [
"BSD-3-Clause"
] | 1 | 2019-12-15T06:25:06.000Z | 2019-12-15T06:25:06.000Z | scripts/list_new_commits.py | captainsafia/mybinder.org-deploy | fb7d233fc4c3e8ed5c055d71ef95daa5eb7c8da6 | [
"BSD-3-Clause"
] | null | null | null | scripts/list_new_commits.py | captainsafia/mybinder.org-deploy | fb7d233fc4c3e8ed5c055d71ef95daa5eb7c8da6 | [
"BSD-3-Clause"
] | null | null | null | from yaml import safe_load as load
import requests
print('Fetching the SHA for live BinderHub and repo2docker...')
# Load master requirements
url_requirements = "https://raw.githubusercontent.com/jupyterhub/mybinder.org-deploy/master/mybinder/requirements.yaml"
requirements = load(requests.get(url_requirements).text)
binderhub_dep = [ii for ii in requirements['dependencies'] if ii['name'] == 'binderhub'][0]
bhub_live = binderhub_dep['version'].split('-')[-1]
url_binderhub_requirements = "https://raw.githubusercontent.com/jupyterhub/binderhub/{}/helm-chart/binderhub/requirements.yaml".format(bhub_live)
requirements = load(requests.get(url_binderhub_requirements).text)
jupyterhub_dep = [ii for ii in requirements['dependencies'] if ii['name'] == 'jupyterhub'][0]
jhub_live = jupyterhub_dep['version'].split('-')[-1]
# Load master repo2docker
url_helm_chart = "https://raw.githubusercontent.com/jupyterhub/mybinder.org-deploy/master/mybinder/values.yaml"
helm_chart = requests.get(url_helm_chart)
helm_chart = load(helm_chart.text)
r2d_live = helm_chart['binderhub']['config']['BinderHub']['build_image'].split(':')[-1]
print('Fetching latest commit SHA for BinderHub and repo2docker...')
# Load latest r2d commit
url = "https://api.github.com/repos/jupyter/repo2docker/commits"
resp = requests.get(url)
r2d_master = resp.json()[0]['sha']
# Load latest binderhub and jupyterhub commits
repos = {'jupyterhub': 'zero-to-jupyterhub-k8s', 'binderhub': 'binderhub'}
latest_hash = {}
for i_repo, i_url in repos.items():
url = "https://api.github.com/repos/jupyterhub/{}/commits".format(i_url)
resp = requests.get(url)
# Grab the *second to latest* commit since this will be the image SHA
# The latest commit is the "merge" commit and is excluded.
latest_hash[i_repo] = resp.json()[1]['sha']
url_bhub = 'https://github.com/jupyterhub/binderhub/compare/{}...{}'.format(bhub_live, latest_hash['binderhub'][:7])
url_r2d = 'https://github.com/jupyter/repo2docker/compare/{}...{}'.format(r2d_live, r2d_master[:7])
url_jhub = 'https://github.com/jupyterhub/zero-to-jupyterhub-k8s/compare/{}...{}'.format(jhub_live, latest_hash['jupyterhub'][:7])
print('---------------------\n')
print('BinderHub: {}'.format(url_bhub))
print('repo2docker: {}'.format(url_r2d))
print('JupyterHub: {}'.format(url_jhub))
print('\n---------------------')
| 48.081633 | 145 | 0.727504 | 317 | 2,356 | 5.271293 | 0.242902 | 0.037702 | 0.041891 | 0.050269 | 0.27289 | 0.202274 | 0.135248 | 0.135248 | 0.135248 | 0.135248 | 0 | 0.011606 | 0.085739 | 2,356 | 48 | 146 | 49.083333 | 0.76416 | 0.102292 | 0 | 0.060606 | 0 | 0.090909 | 0.449715 | 0.032258 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.060606 | 0 | 0.060606 | 0.212121 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
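For reference, a hedged sketch of the parsed requirements.yaml shape the lookups above assume (field names follow the code; the versions are made up):

requirements_example = {
    "dependencies": [
        {"name": "binderhub", "version": "0.2.0-abc1234"},  # SHA is the part after the last '-'
        {"name": "jupyterhub", "version": "0.8.0-def5678"},
    ],
}
sha = [d for d in requirements_example["dependencies"]
       if d["name"] == "binderhub"][0]["version"].split("-")[-1]
assert sha == "abc1234"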
104ec0177c671b19dd488790a9a439c37947a6bb | 6,293 | py | Python | train.py | rampage644/memn2n-tensorflow | 661b3b9e5af6d906d5ae2073286ef5f461a95db6 | [
"Apache-2.0"
] | 1 | 2016-12-03T11:04:06.000Z | 2016-12-03T11:04:06.000Z | train.py | rampage644/memn2n-tensorflow | 661b3b9e5af6d906d5ae2073286ef5f461a95db6 | [
"Apache-2.0"
] | 1 | 2016-11-23T13:08:18.000Z | 2016-11-23T13:08:18.000Z | train.py | rampage644/memn2n-tensorflow | 661b3b9e5af6d906d5ae2073286ef5f461a95db6 | [
"Apache-2.0"
] | null | null | null | '''Train model'''
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import memn2n.model
import memn2n.util
FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_integer('embedding_size', 15, 'Dimension for word embedding')
tf.app.flags.DEFINE_integer('sentence_length', 0, 'Sentence length. Provide to redefine automatically calculated (max would be taken).')
tf.app.flags.DEFINE_integer('memory_size', 50, 'Memory size. Provide to redefine automatically calculated (min would be taken).')
tf.app.flags.DEFINE_integer('task_id', 0, 'Task number to test and train or (in case of independent train)')
tf.app.flags.DEFINE_integer('epoch', 1, 'Epoch count')
tf.app.flags.DEFINE_integer('anneal_every', 10, 'Anneal (halve) learning rate every `anneal_every` epoch')
tf.app.flags.DEFINE_integer('batch_size', 32, 'Batch size')
tf.app.flags.DEFINE_integer('hops', 3, 'Hops (layers) count')
tf.app.flags.DEFINE_float('learning_rate', 0.001, 'Starting learning rate')
tf.app.flags.DEFINE_string('train_dir', os.getcwd(), 'Directory with training files')
tf.app.flags.DEFINE_string('log_dir', os.getcwd(), 'Directory for tensorboard logs')
tf.app.flags.DEFINE_string('ckpt_dir', os.getcwd(), 'Directory for saving/restoring checkpoints')
tf.app.flags.DEFINE_boolean('pe', False, 'Enable position encoding')
tf.app.flags.DEFINE_boolean('joint', False, 'Train model jointly (that is on all tasks instead of one).')
plt.style.use('fivethirtyeight')
def main(argv=None):
    word2idx, idx2word = memn2n.util.load_vocabulary(FLAGS.train_dir)

    if FLAGS.joint:
        train = []
        for task_id in range(1, 21):
            train_task, test_task = memn2n.util.load_dataset_for(task_id, FLAGS.train_dir)
            train.extend(train_task)
            train.extend(test_task)
        train_task, test_task = memn2n.util.load_dataset_for(FLAGS.task_id, FLAGS.train_dir)
        test = list(train_task) + list(test_task)
    else:
        train, test = memn2n.util.load_dataset_for(FLAGS.task_id, FLAGS.train_dir)
        data = list(train) + list(test)
        # keep 10% for validation
        train_size = int((1 - 0.1) * len(data))
        train, test = data[:train_size], data[train_size:]

    memory_size = min(
        memn2n.util.calc_memory_capacity_for(train),
        FLAGS.memory_size
    )
    sentence_length = max(
        memn2n.util.calc_sentence_length_for(train),
        FLAGS.sentence_length
    )

    mem_train, query_train, answer_train = memn2n.util.vectorize_dataset(train, word2idx, memory_size, sentence_length)
    mem_test, query_test, answer_test = memn2n.util.vectorize_dataset(test, word2idx, memory_size, sentence_length)

    with tf.Session() as sess:
        steps_per_epoch = len(mem_train) // FLAGS.batch_size + 1
        print('Model details:')
        for (name, value) in (
                ('step per epoch', steps_per_epoch),
                ('epoch', FLAGS.epoch),
                ('anneal every', FLAGS.anneal_every),
                ('position encoding', FLAGS.pe),
                ('hops', FLAGS.hops),
                ('learning_rate', FLAGS.learning_rate),
                ('vocab_size', len(word2idx)),
                ('embedding size', FLAGS.embedding_size),
                ('sentence length', sentence_length),
                ('memory size', memory_size)
        ):
            print('{}: {}'.format(name, value))

        model = memn2n.model.MemN2N(
            steps_per_epoch,
            FLAGS.epoch,
            FLAGS.anneal_every,
            FLAGS.pe,
            FLAGS.hops,
            FLAGS.learning_rate,
            len(word2idx),
            FLAGS.embedding_size,
            sentence_length,
            memory_size
        )
        sess.run(tf.global_variables_initializer())
        saver = tf.train.Saver()
        writer = tf.train.SummaryWriter(FLAGS.log_dir, graph=tf.get_default_graph())

        saved_model = tf.train.latest_checkpoint(FLAGS.ckpt_dir)
        if saved_model:
            saver.restore(sess, saved_model)
        else:
            print('Previous model not found, starting from scratch.')
            if not os.path.exists(FLAGS.ckpt_dir):
                os.makedirs(FLAGS.ckpt_dir)

        loss_history = []
        accuracy_history = []
        t = []
        for e in range(FLAGS.epoch):
            for step in range(0, len(mem_train), FLAGS.batch_size):
                # FIXME: the last batch may be smaller than `batch_size`
                start, end = step, step+FLAGS.batch_size if step + FLAGS.batch_size < len(mem_train) else None
                loss, predicted, summary, _ = sess.run([model.loss, model.predicted, model.summary_op, model.train_op], {
                    model.x: mem_train[start:end],
                    model.q: query_train[start:end],
                    model.a: answer_train[start:end]
                })
                loss_history.append(loss)
                t.append(tf.train.global_step(sess, model.global_step))
                writer.add_summary(summary)
                accuracy_history.append(np.array([
                    sess.run(model.accuracy, {
                        model.x: mem_train[start:end],
                        model.q: query_train[start:end],
                        model.a: answer_train[start:end]}),
                    sess.run(model.accuracy, {
                        model.x: mem_test,
                        model.q: query_test,
                        model.a: answer_test})
                ]))
            print('\rEpoch: {}/{}'.format(e+1, FLAGS.epoch), end='')
            saver.save(sess, os.path.join(FLAGS.ckpt_dir, 'memn2n'), global_step=model.global_step)

        accuracy_history = np.asarray(accuracy_history)
        print()
        print('Accuracy train: {}, test: {}'.format(accuracy_history[-1, 0], accuracy_history[-1, 1]))

        _, (ax1, ax2) = plt.subplots(2, 1)
        ax1.set_title('Loss')
        ax1.plot(t, loss_history)
        ax1.plot(t, np.r_[loss_history[:19], memn2n.util.moving_average(loss_history, n=20)])
        ax2.set_title('Accuracy')
        ax2.plot(accuracy_history[:, 0])
        ax2.plot(accuracy_history[:, 1])
        plt.show()


if __name__ == '__main__':
    tf.app.run()
| 40.082803 | 136 | 0.631019 | 810 | 6,293 | 4.687654 | 0.250617 | 0.021069 | 0.039505 | 0.058994 | 0.267053 | 0.122465 | 0.109297 | 0.096392 | 0.077956 | 0.062681 | 0 | 0.014144 | 0.247259 | 6,293 | 156 | 137 | 40.339744 | 0.787418 | 0.015732 | 0 | 0.0625 | 0 | 0 | 0.15177 | 0 | 0 | 0 | 0 | 0.00641 | 0 | 1 | 0.007813 | false | 0 | 0.078125 | 0 | 0.085938 | 0.054688 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
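A hedged example invocation using the flags defined above (paths and values are placeholders; tf.app.run() forwards control to main()):

# python train.py --task_id 1 --epoch 60 --anneal_every 15 --hops 3 --pe=True \
#     --train_dir ./data/tasks_1-20_v1-2/en --log_dir ./logs --ckpt_dir ./ckpt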
1050f6f1d4c1beb7784a6f531c279a091eaebea9 | 1,121 | py | Python | objects.py | bokonV2/TopUsersVkWeb | 63f1124e6ce204de8c564141c2b0be7314cdecb5 | [
"MIT"
] | null | null | null | objects.py | bokonV2/TopUsersVkWeb | 63f1124e6ce204de8c564141c2b0be7314cdecb5 | [
"MIT"
] | null | null | null | objects.py | bokonV2/TopUsersVkWeb | 63f1124e6ce204de8c564141c2b0be7314cdecb5 | [
"MIT"
] | null | null | null | class Person:
    id = int()
    name = str()
    lastname = str()
    photo = str()
    bdate = str()

    def __init__(self, id, name, lastname, photo, bdate):
        self.id = id
        self.name = name
        self.lastname = lastname
        self.photo = photo
        self.bdate = bdate

    def gets(self):
        # Render the person as a VK-style mention string.
        return f"*id{self.id} ({self.name} {self.lastname})"


class Groups:
    url_group = str()
    date = "datetime"
    url_chat = str()
    money = int()
    type_send = str()
    range_send = str()
    message = str()
    design = list()
    days = int()

    def __init__(self,
                 url_group="", date=None,
                 url_chat="", money=0,
                 type_send="", range_send="",
                 message="", design=None):
        # design=None avoids the shared-mutable-default pitfall of design=[].
        if design is None:
            design = []
        if isinstance(date, str):
            # date = date_transl(date)
            pass
        self.url_group = url_group
        self.date = date
        self.url_chat = url_chat
        self.money = money
        self.type_send = type_send
        self.range_send = range_send
        self.message = message
        self.design = design
        # date_get_days is expected to be provided by the surrounding module.
        self.days = date_get_days(date)
| 21.980392 | 57 | 0.535236 | 136 | 1,121 | 4.213235 | 0.257353 | 0.055846 | 0.038394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001346 | 0.337199 | 1,121 | 50 | 58 | 22.42 | 0.769852 | 0.021409 | 0 | 0 | 0 | 0 | 0.048402 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075 | false | 0.025 | 0 | 0.025 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
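Hypothetical usage of the Person record above (the values are made up):

p = Person(1, "Pavel", "Durov", "https://example.com/photo.jpg", "10.10.1984")
print(p.gets())  # -> "*id1 (Pavel Durov)"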
10525bdb4509a909e8ac31096e482f086a5e66a8 | 5,317 | py | Python | mlonmcu/setup/setup.py | tum-ei-eda/mlonmcu | 0d5c114b85f2ae9e48e7d815bfce8df04c2bdb46 | [
"Apache-2.0"
] | 3 | 2022-03-07T09:38:12.000Z | 2022-03-24T09:28:36.000Z | mlonmcu/setup/setup.py | tum-ei-eda/mlonmcu | 0d5c114b85f2ae9e48e7d815bfce8df04c2bdb46 | [
"Apache-2.0"
] | 24 | 2022-03-07T16:09:32.000Z | 2022-03-31T08:08:51.000Z | mlonmcu/setup/setup.py | tum-ei-eda/mlonmcu | 0d5c114b85f2ae9e48e7d815bfce8df04c2bdb46 | [
"Apache-2.0"
] | 1 | 2022-03-07T09:38:17.000Z | 2022-03-07T09:38:17.000Z | #
# Copyright (c) 2022 TUM Department of Electrical and Computer Engineering.
#
# This file is part of MLonMCU.
# See https://github.com/tum-ei-eda/mlonmcu.git for further info.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import shutil
from tqdm import tqdm
from mlonmcu.logging import get_logger
from mlonmcu.feature.type import FeatureType
from mlonmcu.feature.features import get_matching_features
from mlonmcu.config import filter_config
from .tasks import Tasks
from .task import TaskGraph
from mlonmcu.utils import ask_user
logger = get_logger()
class Setup:
"""MLonMCU dependency management interface."""
FEATURES = []
DEFAULTS = {
"print_outputs": False,
}
REQUIRED = []
def __init__(self, features=None, config=None, context=None, tasks_factory=Tasks):
self.config = config if config else {}
self.features = self.process_features(features)
self.config = filter_config(self.config, "setup", self.DEFAULTS, self.REQUIRED)
self.context = context
self.tasks_factory = tasks_factory
self.verbose = bool(self.config["print_outputs"])
def clean_cache(self, interactive=True):
assert self.context is not None
deps_dir = self.context.environment.lookup_path("deps").path
cache_file = deps_dir / "cache.ini"
if cache_file.is_file():
print(f"The dependency cache file ({cache_file}) will be removed.")
if ask_user("Are your sure?", default=not interactive, interactive=interactive):
print(f"Removing {cache_file} ...")
os.remove(cache_file)
def clean_dependencies(self, interactive=True):
assert self.context is not None
self.clean_cache(interactive=interactive)
deps_dir = self.context.environment.lookup_path("deps").path
subdirs = ["src", "build", "install"]
print(f"All dependencies will be removed from {deps_dir}.")
if ask_user("Are your sure?", default=not interactive, interactive=interactive):
for subdir in subdirs:
full_path = deps_dir / subdir
print(f"Removing contents of {full_path} ...")
shutil.rmtree(full_path, ignore_errors=True)
full_path.mkdir(exist_ok=True)
def process_features(self, features):
if features is None:
return []
features = get_matching_features(features, FeatureType.SETUP)
for feature in features:
# Not need to list features explicitly
# assert (
# feature.name in self.FEATURES
# ), f"Incompatible feature: {feature.name}"
feature.add_setup_config(self.config)
return features
def get_dependency_order(self):
self.tasks_factory.reset_changes()
task_graph = TaskGraph(
self.tasks_factory.registry.keys(),
self.tasks_factory.dependencies,
self.tasks_factory.providers,
)
V, E = task_graph.get_graph()
order = task_graph.get_order()
order_str = " -> ".join(order)
logger.debug("Determined dependency order: %s" % order_str)
return order
def setup_progress_bar(self, enabled):
if enabled:
pbar = tqdm(
total=len(self.tasks_factory.registry),
desc="Installing dependencies",
ncols=100,
bar_format="{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}s]",
)
return pbar
else:
logger.info("Installing dependencies...")
return None
def write_cache_file(self):
logger.debug("Updating dependency cache")
cache_file = self.context.environment.paths["deps"].path / "cache.ini"
self.context.cache.write_to_file(cache_file)
def invoke_single_task(self, name, progress=False, write_cache=True, rebuild=False):
assert name in self.tasks_factory.registry, f"Invalid task name: {name}"
func = self.tasks_factory.registry[name]
func(self.context, progress=progress, rebuild=rebuild, verbose=self.verbose)
def install_dependencies(
self,
progress=False,
write_cache=True,
rebuild=False,
):
assert self.context is not None
order = self.get_dependency_order()
pbar = self.setup_progress_bar(progress)
for task in order:
func = self.tasks_factory.registry[task]
func(self.context, progress=progress, rebuild=rebuild, verbose=self.verbose)
if pbar:
pbar.update(1)
if pbar:
pbar.close()
if write_cache:
self.write_cache_file()
logger.info("Finished installing dependencies")
return True
| 36.923611 | 92 | 0.648862 | 648 | 5,317 | 5.188272 | 0.311728 | 0.039262 | 0.042832 | 0.035693 | 0.17906 | 0.162403 | 0.156454 | 0.156454 | 0.129685 | 0.074955 | 0 | 0.003043 | 0.258228 | 5,317 | 143 | 93 | 37.181818 | 0.849391 | 0.160617 | 0 | 0.105769 | 0 | 0 | 0.108882 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 1 | 0.086538 | false | 0 | 0.096154 | 0 | 0.278846 | 0.057692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
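TaskGraph comes from a sibling module that is not shown here. As a hedged sketch, the topological ordering it is assumed to return can be computed with Kahn's algorithm (names below are assumptions, not the real mlonmcu API):

from collections import deque

def topological_order(nodes, dependencies):
    # dependencies maps each task name to the collection of tasks it depends on.
    indegree = {n: len(dependencies.get(n, ())) for n in nodes}
    dependents = {n: [] for n in nodes}
    for n, deps in dependencies.items():
        for d in deps:
            dependents[d].append(n)
    queue = deque(n for n, k in indegree.items() if k == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in dependents[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    return order  # dependencies always precede their dependents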
1052bebe50c01487d2c64126d7fcea6f8bd57e56 | 1,157 | py | Python | brunodb/database_sqlite.py | dave31415/brunodb | 57f71b6ee9e08fc8539efeb9b6b935beb232b6f4 | [
"MIT"
] | null | null | null | brunodb/database_sqlite.py | dave31415/brunodb | 57f71b6ee9e08fc8539efeb9b6b935beb232b6f4 | [
"MIT"
] | null | null | null | brunodb/database_sqlite.py | dave31415/brunodb | 57f71b6ee9e08fc8539efeb9b6b935beb232b6f4 | [
"MIT"
] | null | null | null | import sqlite3
import logging

from brunodb.sqlite_utils import get_db
from brunodb.database_generic import DBaseGeneric
from brunodb.format_query import format_sql_in_context

logger = logging.getLogger(__file__)


def db_is_open(db):
    try:
        db.execute('SELECT 1')
    except sqlite3.ProgrammingError:
        return False
    return True


class DBaseSqlite(DBaseGeneric):
    def __init__(self, db_file, isolation_level=None, journal_mode=None):
        if isolation_level is None:
            isolation_level = "DEFERRED"
        if journal_mode is None:
            journal_mode = "OFF"
        super().__init__()
        self.db_file = db_file
        self.db = get_db(filename=db_file,
                         isolation_level=isolation_level,
                         journal_mode=journal_mode)
        self.db_type = 'sqlite'
        logger.info('Tables: %s' % self.tables.__repr__())

    def is_open(self):
        return db_is_open(self.db)

    def truncate(self, table_name):
        self.db.commit()
        sql = format_sql_in_context('DELETE FROM {table_name}', {'table_name': table_name}, None)
        self.raw_sql_query(sql)
| 27.547619 | 97 | 0.657736 | 147 | 1,157 | 4.816327 | 0.387755 | 0.050847 | 0.031073 | 0.050847 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003484 | 0.255834 | 1,157 | 41 | 98 | 28.219512 | 0.818815 | 0 | 0 | 0 | 0 | 0 | 0.059637 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.16129 | 0.032258 | 0.419355 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
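A hedged usage sketch (get_db, DBaseGeneric and raw_sql_query live in sibling brunodb modules not shown here; the path and table name are placeholders):

db = DBaseSqlite('/tmp/example.sqlite')
if db.is_open():
    db.truncate('events')  # issues "DELETE FROM events" via format_sql_in_context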
1053d2df93cabe0af592bad00f344bf5601449c9 | 10,202 | py | Python | tests/main/helpers/test_search_helpers.py | pocketstefan/digitalmarketplace-buyer-frontend | f4d27f03d5f3accb29eaa61e5ec8d9e5eb60c306 | [
"MIT"
] | null | null | null | tests/main/helpers/test_search_helpers.py | pocketstefan/digitalmarketplace-buyer-frontend | f4d27f03d5f3accb29eaa61e5ec8d9e5eb60c306 | [
"MIT"
] | null | null | null | tests/main/helpers/test_search_helpers.py | pocketstefan/digitalmarketplace-buyer-frontend | f4d27f03d5f3accb29eaa61e5ec8d9e5eb60c306 | [
"MIT"
] | null | null | null | import mock
import pytest
from werkzeug.datastructures import MultiDict

from app.main.helpers import search_helpers, framework_helpers
from ...helpers import BaseApplicationTest


def test_should_hide_both_next_and_prev_if_no_services():
    assert not search_helpers.pagination(0, 100)["show_prev"]
    assert not search_helpers.pagination(0, 100)["show_next"]


def test_should_hide_both_next_and_prev_if_less_services_than_page():
    assert not search_helpers.pagination(50, 100)["show_prev"]
    assert not search_helpers.pagination(50, 100)["show_next"]


def test_should_hide_prev_if_page_one():
    assert not search_helpers.pagination(101, 100)["show_prev"]


def test_should_show_prev_if_after_page_one():
    assert search_helpers.pagination(101, 100, 2)["show_prev"]


def test_should_show_prev_if_last_page():
    assert search_helpers.pagination(201, 100, 2)["show_prev"]


def test_show_next():
    assert search_helpers.pagination(101, 100)["show_next"]
    assert search_helpers.pagination(101, 100, 1)["show_next"]


def test_hide_next_if_last_page():
    assert not search_helpers.pagination(101, 100, 2)["show_next"]


def test_show_prev_as_last_page_if_too_big_page():
    assert search_helpers.pagination(101, 100, 20)["show_prev"]


def test_set_total_pages():
    assert search_helpers.pagination(99, 100)["total_pages"] == 1
    assert search_helpers.pagination(100, 100)["total_pages"] == 1
    assert search_helpers.pagination(101, 100)["total_pages"] == 2


def test_should_set_next_page():
    assert search_helpers.pagination(99, 100)["next_page"] is None
    assert search_helpers.pagination(101, 100)["next_page"] == 2
    assert search_helpers.pagination(201, 100, 2)["next_page"] == 3


def test_should_set_prev_page():
    assert search_helpers.pagination(99, 100)["prev_page"] is None
    assert search_helpers.pagination(101, 100, 2)["prev_page"] == 1
    assert search_helpers.pagination(301, 100, 3)["prev_page"] == 2
    assert search_helpers.pagination(301, 100, 100)["prev_page"] == 4


def test_should_strip_page_from_multidict():
    params = MultiDict()
    params.add("this", "that")
    params.add("page", 100)
    parsed = search_helpers.query_args_for_pagination(params)
    assert parsed['this'] == 'that'
    assert 'page' not in parsed


@pytest.mark.parametrize("test_input,expected", (
    (1, 1),
    (100, 1),
    (101, 2),
    (200, 2),
    (201, 3),
    (1001, 11),
    (0, 1),
))
def test_should_calculate_correct_page_total(test_input, expected):
    assert search_helpers.total_pages(test_input, 100) == expected


@pytest.mark.parametrize("test_input,expected", (
    (1, 1),
    (100, 100),
    (-1, None),
    ("aa", None),
))
def test_should_reject_invalid_page(test_input, expected):
    assert search_helpers.valid_page(test_input) == expected


class TestBuildSearchQueryHelpers(BaseApplicationTest):
    def setup(self):
        self.lot_filters = [
            {'label': 'section1', 'filters': [
                {'name': 'question1', 'value': 'true'},
                {'name': 'question2', 'value': 'true'},
                {'name': 'question3', 'value': 'option1'},
                {'name': 'question3', 'value': 'option2'},
                {'name': 'question3', 'value': 'option3'},
            ]},
            {'label': 'section2', 'filters': [
                {'name': 'question4', 'value': 'true'},
                {'name': 'question5', 'value': 'true'},
                {'name': 'question6', 'value': 'option1'},
                {'name': 'question6', 'value': 'option2'},
                {'name': 'question6', 'value': 'option3'},
            ]},
        ]
        self._lots_by_slug = framework_helpers.get_lots_by_slug(
            self._get_framework_fixture_data('g-cloud-6')['frameworks']
        )
        self.g6_framework = self._get_framework_fixture_data('g-cloud-6')['frameworks']

    def _request(self, params):
        return mock.Mock(args=MultiDict(params))

    def _loader(self, question_types=None):
        question_types = question_types or {
            'question1': {'type': 'boolean'},
            'question4': {},
            'question3': {'type': 'radios'},
            'question6': {'type': 'checkboxes'},
            'page': {},
            'lot': {},
            'q': {},
        }

        def _mock_get_question(question):
            return question_types[question]

        loader = mock.Mock()
        loader.get_question = _mock_get_question
        return loader

    def test_get_filters_from_request(self):
        request = self._request({
            'q': '',
            'page': 1,
            'someFilter': 'filter',
            'otherFilter': [1, 2],
        })
        assert search_helpers.get_filters_from_request(request.args).to_dict(False) == {
            'someFilter': ['filter'],
            'otherFilter': [1, 2],
        }

    def test_allowed_request_lot_filters(self):
        assert search_helpers.allowed_request_lot_filters(self.lot_filters) == {
            ('question1', 'true'),
            ('question2', 'true'),
            ('question3', 'option1'),
            ('question3', 'option2'),
            ('question3', 'option3'),
            ('question4', 'true'),
            ('question5', 'true'),
            ('question6', 'option1'),
            ('question6', 'option2'),
            ('question6', 'option3'),
        }

    def test_clean_request_args(self):
        filters = MultiDict({
            'question1': 'true',
            'question2': ['true', 'false', 1],
            'question3': ['option1', 'true', 'option5', 'option2', 2, None],
            'question6': '',
            'question4': 'false',
            'lot': 'saas',
            'q': 'email',
            'page': 9,
            'parentCategory': 'collaborative working',
            'unknown': 'key',
        })
        assert search_helpers.clean_request_args(filters, self.lot_filters, self._lots_by_slug) == MultiDict({
            'question1': 'true',
            'question2': 'true',
            'question3': ['option1', 'option2'],
            'q': 'email',
            'lot': 'saas',
            'page': 9,
        })

    def test_clean_request_args_strips_args_for_aggregation(self):
        """With the for_aggregation kwarg set the clean_request_args method should strip parentCategory and page keys"""
        filters = MultiDict({
            'question1': 'true',
            'question2': ['true', 'false', 1],
            'question3': ['option1', 'true', 'option5', 'option2', 2, None],
            'question6': '',
            'question4': 'false',
            'lot': 'saas',
            'q': 'email',
            'page': 9,
            'parentCategory': 'collaborative working',
            'unknown': 'key',
        })
        results = search_helpers.clean_request_args(filters, self.lot_filters, self._lots_by_slug, for_aggregation=True)
        assert results == MultiDict({
            'question1': 'true',
            'question2': 'true',
            'question3': ['option1', 'option2'],
            'q': 'email',
            'lot': 'saas',
        })

    def test_clean_request_args_incorrect_lot(self):
        filters = MultiDict({
            'lot': 'saaspaas',
        })
        assert search_helpers.clean_request_args(filters, self.lot_filters, self._lots_by_slug) == MultiDict({})

    def test_group_request_filters(self):
        filters = MultiDict({
            'question1': 'true',
            'question3': ['option1', 'option2'],
            'question4': 'true',
            'question6': ['option1', 'option3'],
        })
        assert search_helpers.group_request_filters(filters, self._loader()) == {
            'question1': 'true',
            'question4': 'true',
            'question3': 'option1,option2',
            'question6': ['option1', 'option3'],
        }

    def test_replace_g5_search_dots(self):
        assert search_helpers.replace_g5_search_dots("some text 5.G4.1005.001 text") == "some text 5-G4-1005-001 text"

    def test_replace_g5_search_dots_no_id(self):
        assert search_helpers.replace_g5_search_dots("some text 5.G4.1005 text") == "some text 5.G4.1005 text"

    def test_build_search_query(self):
        request = self._request({
            'page': 5,
            'q': 'email',
            'non': 1,
            'newkey': 'true',
            'lot': 'saas',
            'question1': 'true',
            'question3': ['option1', 'option2'],
            'question4': 'true',
            'question6': ['option1', 'option3'],
        })
        assert search_helpers.build_search_query(request.args, self.lot_filters, self._loader(),
                                                 self._lots_by_slug) == {
            'page': 5,
            'q': 'email',
            'lot': 'saas',
            'question1': 'true',
            'question4': 'true',
            'question3': 'option1,option2',
            'question6': ['option1', 'option3'],
        }

    def test_build_search_query_unknown_lot_is_dropped(self):
        request = self._request({
            'lot': 'saasaas',
        })
        assert search_helpers.build_search_query(request.args, self.lot_filters, self._loader(),
                                                 self._lots_by_slug) == {}

    def test_build_search_query_multiple_lots_are_all_dropped(self):
        request = self._request({
            'lot': 'saas,paas',
        })
        assert search_helpers.build_search_query(request.args, self.lot_filters, self._loader(),
                                                 self._lots_by_slug) == {}

    def test_build_search_query_no_keywords_drops_q_parameter(self):
        request = self._request({
            'q': '',
        })
        assert search_helpers.build_search_query(request.args, self.lot_filters, self._loader(),
                                                 self._lots_by_slug) == {}

    def test_build_search_query_no_page_drops_page_parameter(self):
        request = self._request({
            'page': '',
        })
        assert search_helpers.build_search_query(request.args, self.lot_filters, self._loader(),
                                                 self._lots_by_slug) == {}
| 34.120401 | 120 | 0.57626 | 1,083 | 10,202 | 5.122807 | 0.156971 | 0.089041 | 0.099315 | 0.078407 | 0.626352 | 0.559481 | 0.483598 | 0.425919 | 0.3531 | 0.293259 | 0 | 0.044318 | 0.278965 | 10,202 | 298 | 121 | 34.234899 | 0.709897 | 0.01039 | 0 | 0.40678 | 0 | 0 | 0.175223 | 0 | 0 | 0 | 0 | 0 | 0.161017 | 1 | 0.131356 | false | 0 | 0.021186 | 0.008475 | 0.169492 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
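Inferred from the assertions above, a hedged sketch of the dict that search_helpers.pagination(total, page_size, page=1) appears to return (reverse-engineered from the tests, not taken from the implementation):

# pagination(101, 100, 1) would yield something shaped like:
example_pagination = {
    "show_prev": False,
    "show_next": True,
    "total_pages": 2,
    "prev_page": None,
    "next_page": 2,
}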
105680434b919caf59e18bf52a83e42d2c63058b | 1,840 | py | Python | 2019/day6/part2.py | niklind/advent-of-code | c5736e5ec9f830f4e80b962874d28360e3735674 | [
"MIT"
] | null | null | null | 2019/day6/part2.py | niklind/advent-of-code | c5736e5ec9f830f4e80b962874d28360e3735674 | [
"MIT"
] | null | null | null | 2019/day6/part2.py | niklind/advent-of-code | c5736e5ec9f830f4e80b962874d28360e3735674 | [
"MIT"
] | null | null | null | class Node(object):
"""Generic tree node."""
def __init__(self, name):
self.name = name
self.parent = None
self.children = []
def __repr__(self):
return "Node(" + self.name + ")"
def add_parent(self, node):
assert isinstance(node, Node)
self.parent = node
def add_child(self, node):
assert isinstance(node, Node)
self.children.append(node)
def parse_data(file_name):
with open(file_name, 'r') as file:
return [line.strip("\n") for line in file]
def find_orbits(relations):
nodes = {}
for relation in relations:
child, parent = parse_relation(relation)
add_orbit(child, parent, nodes)
return count_orbital_transfers(nodes['YOU'], nodes['SAN'])
def parse_relation(line):
relation = line.split(")")
return relation[1], relation[0]
def add_orbit(current, parent, nodes):
current_node = nodes.get(current, Node(current))
parent_node = nodes.get(parent, Node(parent))
current_node.add_parent(parent_node)
parent_node.add_child(current_node)
nodes[current] = current_node
nodes[parent] = parent_node
def count_orbital_transfers(origin, destination):
steps = 0
current = origin.parent
while True:
depth = has_child(current, destination, 0)
if depth > -1:
return steps + depth
current = current.parent
steps += 1
def has_child(current, destination, depth):
if destination in current.children:
return depth
depth += 1
for child in current.children:
temp = has_child(child, destination, depth)
if temp > -1:
return temp
return -1
if __name__ == "__main__":
data = parse_data('input.txt')
result = find_orbits(data)
print("Result: " + str(result)) # 562
| 23.896104 | 62 | 0.629348 | 229 | 1,840 | 4.860262 | 0.275109 | 0.053908 | 0.043127 | 0.043127 | 0.06469 | 0.06469 | 0.06469 | 0 | 0 | 0 | 0 | 0.008791 | 0.258152 | 1,840 | 76 | 63 | 24.210526 | 0.806593 | 0.0125 | 0 | 0.037037 | 0 | 0 | 0.022639 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 1 | 0.185185 | false | 0 | 0 | 0.018519 | 0.351852 | 0.018519 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1057e5aafe644860bb594dbef3b670bf4626daba | 1,939 | py | Python | src/software/jetson_nano/broadcasts/robot_broadcast_receiver.py | jonl112/Software | 61a028a98d5c0dd5e79bf055b231633290ddbf9f | [
"MIT"
] | null | null | null | src/software/jetson_nano/broadcasts/robot_broadcast_receiver.py | jonl112/Software | 61a028a98d5c0dd5e79bf055b231633290ddbf9f | [
"MIT"
] | null | null | null | src/software/jetson_nano/broadcasts/robot_broadcast_receiver.py | jonl112/Software | 61a028a98d5c0dd5e79bf055b231633290ddbf9f | [
"MIT"
] | null | null | null | import argparse
import socket
from time import time
from proto.announcement_pb2 import Announcement
RECEIVE_TIMEOUT_SECONDS = 0.2
def receive_announcements(port: int, duration: int) -> [Announcement]:
"""
    Returns a list of Announcements, without duplicates, received on a specified port
    :param duration: how long to listen for announcements, in seconds
:param port: the port to listen for announcements on
:return: a list of Announcements, without duplicates
"""
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
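    # short per-recv timeout so the loop below can re-check the overall deadline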
receiver.settimeout(RECEIVE_TIMEOUT_SECONDS)
receiver.bind(("", port))
announcements = []
timeout = time() + duration
while time() < timeout:
try:
data = receiver.recv(1024)
except socket.timeout: # ignore timeout errors
continue
else:
# parse announcement protobuf
announcement = Announcement()
announcement.ParseFromString(data)
# filter out duplicates
if announcement not in announcements:
announcements.append(announcement)
return announcements
def main():
# get command line args
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--port", required=True, type=int, help="port to listen on")
ap.add_argument(
"-d" "--duration",
required=True,
type=int,
help="how long to listen for announcements. Recommended > 2",
)
args = vars(ap.parse_args())
port = args["port"]
duration = args["duration"]
announcements = receive_announcements(port, duration)
for announcement in announcements:
print(
f"robot_id: {announcement.robot_id} \nip_addr: {announcement.ip_addr} \nmac_addr: {announcement.mac_addr} \n"
)
if __name__ == "__main__":
main()
| 30.777778 | 121 | 0.660134 | 219 | 1,939 | 5.716895 | 0.442922 | 0.025559 | 0.026358 | 0.057508 | 0.145367 | 0.108626 | 0 | 0 | 0 | 0 | 0 | 0.006854 | 0.24755 | 1,939 | 62 | 122 | 31.274194 | 0.851268 | 0.186694 | 0 | 0 | 0 | 0.02381 | 0.139715 | 0.043984 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.095238 | 0 | 0.166667 | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
105c810f558537896f675fb4c43bfe877ba68617 | 14,794 | py | Python | tests/test_invenio_pidstore.py | torchingloom/invenio-pidstore | 8f6e99787da5a07d38712f7eb16146608304cdb9 | [
"MIT"
] | 6 | 2015-08-19T12:52:03.000Z | 2021-08-25T03:57:03.000Z | tests/test_invenio_pidstore.py | torchingloom/invenio-pidstore | 8f6e99787da5a07d38712f7eb16146608304cdb9 | [
"MIT"
] | 87 | 2015-07-15T17:17:37.000Z | 2020-12-10T08:29:59.000Z | tests/test_invenio_pidstore.py | torchingloom/invenio-pidstore | 8f6e99787da5a07d38712f7eb16146608304cdb9 | [
"MIT"
] | 37 | 2015-07-16T07:38:42.000Z | 2022-01-13T10:38:24.000Z | # -*- coding: utf-8 -*-
#
# This file is part of Invenio.
# Copyright (C) 2015-2018 CERN.
#
# Invenio is free software; you can redistribute it and/or modify it
# under the terms of the MIT License; see LICENSE file for more details.
"""Model tests."""
from __future__ import absolute_import, print_function
import pytest
import uuid
from mock import patch
from sqlalchemy.exc import SQLAlchemyError
from invenio_pidstore.errors import PIDAlreadyExists, PIDDoesNotExistError, \
PIDInvalidAction, PIDObjectAlreadyAssigned
from invenio_pidstore.models import PersistentIdentifier, PIDStatus, Redirect
@patch('invenio_pidstore.models.logger')
def test_pid_creation(logger, app, db):
"""Test pid creation."""
with app.app_context():
assert PersistentIdentifier.query.count() == 0
pid = PersistentIdentifier.create('doi', '10.1234/foo')
assert PersistentIdentifier.query.count() == 1
assert pid.pid_type == 'doi'
assert pid.pid_value == '10.1234/foo'
assert pid.pid_provider is None
assert pid.status == PIDStatus.NEW
assert pid.object_type is None
assert pid.object_uuid is None
assert logger.info.called
rec_uuid = uuid.uuid4()
pid = PersistentIdentifier.create(
'rec', '2', status=PIDStatus.REGISTERED, object_type='rec',
object_uuid=rec_uuid)
assert PersistentIdentifier.query.count() == 2
assert pid.pid_type == 'rec'
assert pid.pid_value == '2'
assert pid.pid_provider is None
assert pid.status == PIDStatus.REGISTERED
assert pid.object_type == 'rec'
assert pid.object_uuid == rec_uuid
# Can't duplicate existing persistent identifier
assert not logger.exception.called
pytest.raises(
PIDAlreadyExists, PersistentIdentifier.create, 'rec', '2')
assert logger.exception.called
with patch('invenio_pidstore.models.db.session.begin_nested') as mock:
mock.side_effect = SQLAlchemyError()
pytest.raises(SQLAlchemyError, PersistentIdentifier.create,
'rec', '2')
assert logger.exception.call_args[0][0].startswith(
"Failed to create")
def test_alembic(app, db):
"""Test alembic recipes."""
ext = app.extensions['invenio-db']
if db.engine.name == 'sqlite':
raise pytest.skip('Upgrades are not supported on SQLite.')
assert not ext.alembic.compare_metadata()
db.drop_all()
ext.alembic.upgrade()
assert not ext.alembic.compare_metadata()
ext.alembic.stamp()
ext.alembic.downgrade(target='96e796392533')
ext.alembic.upgrade()
assert not ext.alembic.compare_metadata()
def test_pidstatus_as():
"""Test PID status."""
assert PIDStatus.NEW.title == 'New'
assert PIDStatus.RESERVED.title == 'Reserved'
assert next(iter(PIDStatus)) == 'N'
def test_pid_get(app, db):
"""Test pid retrieval."""
with app.app_context():
PersistentIdentifier.create('doi', '10.1234/foo')
assert PersistentIdentifier.get('doi', '10.1234/foo')
pytest.raises(
PIDDoesNotExistError,
PersistentIdentifier.get,
'doi', '10.1234/bar'
)
# PID with provider
doi = '10.1234/a'
PersistentIdentifier.create('doi', doi, pid_provider='dcite')
assert PersistentIdentifier.get('doi', doi)
assert PersistentIdentifier.get(
'doi', doi, pid_provider='dcite')
pytest.raises(
PIDDoesNotExistError,
PersistentIdentifier.get,
'doi', doi, pid_provider='cref'
)
# Retrieve by object
myuuid = uuid.uuid4()
doi = '10.1234/b'
PersistentIdentifier.create(
'doi', doi, object_type='rec', object_uuid=myuuid)
pid = PersistentIdentifier.get_by_object('doi', 'rec', myuuid)
assert pid.pid_value == doi
pytest.raises(
PIDDoesNotExistError,
PersistentIdentifier.get_by_object,
'doi', 'rec', uuid.uuid4()
)
@patch('invenio_pidstore.models.logger')
def test_pid_assign(logger, app, db):
"""Test pid object assignment."""
with app.app_context():
# No assigned object
pid = PersistentIdentifier.create('doi', '10.1234/foo')
assert not pid.has_object()
assert pid.get_assigned_object() is None
assert pid.get_assigned_object('rec') is None
# Assign object
rec_uuid = uuid.uuid4()
pid.assign('rec', rec_uuid)
assert logger.info.call_args[0][0].startswith("Assigned")
assert 'pid' in logger.info.call_args[1]['extra']
assert pid.has_object()
assert pid.get_assigned_object() == rec_uuid
assert pid.get_assigned_object('rec') == rec_uuid
assert pid.get_assigned_object('oth') is None
        # Doesn't raise
pid.assign('rec', rec_uuid)
# Assign without overwrite (uuid as str and uuid)
new_uuid = uuid.uuid4()
pytest.raises(PIDObjectAlreadyAssigned, pid.assign, 'rec', new_uuid)
pytest.raises(
PIDObjectAlreadyAssigned, pid.assign, 'rec', str(new_uuid))
# Assign with overwrite
pid.assign('rec', str(new_uuid), overwrite=True)
assert pid.has_object()
assert pid.get_assigned_object() == new_uuid
assert pid.get_assigned_object('rec') == new_uuid
assert pid.get_assigned_object('oth') is None
# Assign with SQLError
pid = PersistentIdentifier.create('recid', '101')
with patch('invenio_pidstore.models.db.session.begin_nested') as mock:
mock.side_effect = SQLAlchemyError()
pytest.raises(SQLAlchemyError, pid.assign, 'rec', uuid.uuid4())
@patch('invenio_pidstore.models.logger')
def test_pid_unassign_noobject(logger, app, db):
"""Test unassign."""
with app.app_context():
pid = PersistentIdentifier.create('recid', '101')
assert pid.unassign()
pid.assign('rec', uuid.uuid4())
with patch('invenio_pidstore.models.db.session.begin_nested') as mock:
mock.side_effect = SQLAlchemyError()
pytest.raises(SQLAlchemyError, pid.unassign)
assert logger.exception.call_args[0][0].startswith(
"Failed to unassign")
assert 'pid' in logger.exception.call_args[1]['extra']
def test_pid_assign_deleted(app, db):
"""Test pid object assignment."""
with app.app_context():
pid = PersistentIdentifier.create(
'doi', '10.1234/foo', status=PIDStatus.DELETED)
pytest.raises(PIDInvalidAction, pid.assign, 'rec', uuid.uuid4())
@patch('invenio_pidstore.models.logger')
def test_reserve(logger, app, db):
"""Test pid reserve."""
with app.app_context():
i = 1
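        # reserving succeeds only from the NEW and RESERVED states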
for s in [PIDStatus.NEW, PIDStatus.RESERVED]:
pid = PersistentIdentifier.create('rec', str(i), status=s)
i += 1
assert pid.reserve()
assert logger.info.call_args[0][0].startswith(
"Reserved PID")
for s in [PIDStatus.REGISTERED, PIDStatus.DELETED,
PIDStatus.REDIRECTED]:
pid = PersistentIdentifier.create('rec', str(i), status=s)
i += 1
pytest.raises(PIDInvalidAction, pid.reserve)
# Test logging of bad errors.
pid = PersistentIdentifier.create('rec', str(i))
with patch('invenio_pidstore.models.db.session.begin_nested') as mock:
mock.side_effect = SQLAlchemyError()
pytest.raises(SQLAlchemyError, pid.reserve)
assert logger.exception.call_args[0][0].startswith(
"Failed to reserve")
assert 'pid' in logger.exception.call_args[1]['extra']
@patch('invenio_pidstore.models.logger')
def test_register(logger, app, db):
"""Test pid register."""
with app.app_context():
i = 1
for s in [PIDStatus.NEW, PIDStatus.RESERVED]:
pid = PersistentIdentifier.create('rec', str(i), status=s)
i += 1
assert pid.register()
assert logger.info.call_args[0][0].startswith(
"Registered PID")
for s in [PIDStatus.REGISTERED, PIDStatus.DELETED,
PIDStatus.REDIRECTED]:
pid = PersistentIdentifier.create('rec', str(i), status=s)
i += 1
pytest.raises(PIDInvalidAction, pid.register)
# Test logging of bad errors.
pid = PersistentIdentifier.create('rec', str(i),
status=PIDStatus.RESERVED)
with patch('invenio_pidstore.models.db.session.begin_nested') as mock:
mock.side_effect = SQLAlchemyError()
pytest.raises(SQLAlchemyError, pid.register)
assert logger.exception.call_args[0][0].startswith(
"Failed to register")
assert 'pid' in logger.exception.call_args[1]['extra']
@patch('invenio_pidstore.models.logger')
def test_delete(logger, app, db):
"""Test pid delete."""
with app.app_context():
i = 1
for s in [PIDStatus.RESERVED, PIDStatus.RESERVED,
PIDStatus.REDIRECTED, PIDStatus.DELETED]:
pid = PersistentIdentifier.create('rec', str(i), status=s)
i += 1
assert pid.delete()
assert logger.info.call_args[0][0] == "Deleted PID."
# New persistent identifiers are removed completely
count = PersistentIdentifier.query.count()
pid = PersistentIdentifier.create('rec', str(i), status=PIDStatus.NEW)
db.session.commit()
assert PersistentIdentifier.query.count() == count + 1
pid.delete()
assert PersistentIdentifier.query.count() == count
assert logger.info.call_args[0][0] == "Deleted PID (removed)."
pid = PersistentIdentifier.create('rec', str(i + 1))
with patch('invenio_pidstore.models.db.session.begin_nested') as mock:
mock.side_effect = SQLAlchemyError()
pytest.raises(SQLAlchemyError, pid.delete)
assert logger.exception.call_args[0][0].startswith(
"Failed to delete")
assert 'pid' in logger.exception.call_args[1]['extra']
@patch('invenio_pidstore.models.logger')
def test_redirect(logger, app, db):
"""Test redirection."""
with app.app_context():
pid1 = PersistentIdentifier.create(
'rec', '1', status=PIDStatus.REGISTERED, object_type='rec',
object_uuid=uuid.uuid4())
pid2 = PersistentIdentifier.create(
'doi', '2', status=PIDStatus.REGISTERED, object_type='rec',
object_uuid=uuid.uuid4())
# Can't redirect these statuses
i = 10
for s in [PIDStatus.NEW, PIDStatus.RESERVED, PIDStatus.DELETED, ]:
pid = PersistentIdentifier.create('rec', str(i), status=s)
i += 1
pytest.raises(PIDInvalidAction, pid.redirect, pid1)
pid = PersistentIdentifier.create(
'rec', str(i), status=PIDStatus.REGISTERED)
        # Can't redirect to non-existing pid.
pytest.raises(PIDDoesNotExistError, pid.redirect,
PersistentIdentifier())
pid.redirect(pid1)
assert logger.info.call_args[0][0].startswith("Redirected")
assert 'pid' in logger.info.call_args[1]['extra']
assert pid.status == PIDStatus.REDIRECTED
assert pid.object_type is None
assert pid.object_uuid is not None
new_pid = pid.get_redirect()
assert new_pid.pid_type == 'rec'
assert new_pid.pid_value == '1'
# You can redirect an already redirected pid
pid.redirect(pid2)
new_pid = pid.get_redirect()
assert new_pid.pid_type == 'doi'
assert new_pid.pid_value == '2'
# Assign with SQLError
with patch('invenio_pidstore.models.db.session.begin_nested') as mock:
mock.side_effect = SQLAlchemyError()
pytest.raises(SQLAlchemyError, pid.redirect, '1')
assert logger.exception.call_args[0][0].startswith(
"Failed to redirect")
assert 'pid' in logger.exception.call_args[1]['extra']
def test_redirect_cleanup(app, db):
"""Test proper clean up from redirects."""
with app.app_context():
pid1 = PersistentIdentifier.create(
'recid', '1', status=PIDStatus.REGISTERED, object_type='rec',
object_uuid=uuid.uuid4())
pid2 = PersistentIdentifier.create(
'recid', '2', status=PIDStatus.REGISTERED, object_type='rec',
object_uuid=uuid.uuid4())
pid3 = PersistentIdentifier.create(
'recid', '3', status=PIDStatus.REGISTERED)
db.session.commit()
assert Redirect.query.count() == 0
pid3.redirect(pid1)
assert Redirect.query.count() == 1
pid3.redirect(pid2)
assert Redirect.query.count() == 1
pytest.raises(
PIDObjectAlreadyAssigned, pid3.assign, 'rec', uuid.uuid4())
pid3.unassign()
assert Redirect.query.count() == 0
@patch('invenio_pidstore.models.logger')
def test_sync_status(logger, app, db):
"""Test sync status."""
with app.app_context():
pid = PersistentIdentifier.create(
'rec', '1', status=PIDStatus.REGISTERED, object_type='rec',
object_uuid=uuid.uuid4())
pytest.raises(PIDInvalidAction, pid.reserve)
calls = logger.info.call_count
assert pid.sync_status(PIDStatus.NEW)
assert logger.info.call_count == calls + 1
assert pid.reserve()
calls = logger.info.call_count
assert pid.sync_status(PIDStatus.RESERVED)
assert logger.info.call_count == calls
with patch('invenio_pidstore.models.db.session.begin_nested') as mock:
mock.side_effect = SQLAlchemyError()
pytest.raises(SQLAlchemyError, pid.sync_status, PIDStatus.NEW)
assert logger.exception.call_args[0][0].startswith(
"Failed to sync status")
assert 'pid' in logger.exception.call_args[1]['extra']
def test_repr(app, db):
"""Test representation."""
with app.app_context():
pid = PersistentIdentifier.create(
'recid', '1', status=PIDStatus.REGISTERED, object_type='rec',
object_uuid='de3bb351-bc1a-4e51-8605-c6cd9589a560')
assert str(pid) == \
"<PersistentIdentifier recid:1 / " \
"rec:de3bb351-bc1a-4e51-8605-c6cd9589a560 (R)>"
pid = PersistentIdentifier.create(
'recid', '2', status=PIDStatus.REGISTERED)
assert str(pid) == "<PersistentIdentifier recid:2 (R)>"
| 38.030848 | 78 | 0.626403 | 1,677 | 14,794 | 5.413238 | 0.121049 | 0.040648 | 0.063891 | 0.045825 | 0.708857 | 0.620181 | 0.56334 | 0.515422 | 0.45043 | 0.40857 | 0 | 0.01998 | 0.255712 | 14,794 | 388 | 79 | 38.128866 | 0.804468 | 0.066716 | 0 | 0.477663 | 0 | 0 | 0.105624 | 0.053541 | 0 | 0 | 0 | 0 | 0.28866 | 1 | 0.04811 | false | 0 | 0.024055 | 0 | 0.072165 | 0.003436 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
105eb31399db13ffe192c164d7ec4f1b146a7717 | 8,576 | py | Python | animeflv/__init__.py | Alucard7795/animeApi | 10b5a18e7e033abb661de3e1431a321ac7ceb5f0 | [
"MIT"
] | null | null | null | animeflv/__init__.py | Alucard7795/animeApi | 10b5a18e7e033abb661de3e1431a321ac7ceb5f0 | [
"MIT"
] | null | null | null | animeflv/__init__.py | Alucard7795/animeApi | 10b5a18e7e033abb661de3e1431a321ac7ceb5f0 | [
"MIT"
] | null | null | null | import cloudscraper
from bs4 import BeautifulSoup
from urllib.parse import unquote, quote
import json, re
def parseTable(table):
    columns = [x.string for x in table.thead.tr.find_all('th')]
    rows = []
    for row in table.tbody.find_all('tr'):
        values = row.find_all('td')
        if len(values) != len(columns):
            raise ValueError("row cell count does not match header column count")
        rows.append({h: x for h, x in zip(columns, values)})
    return rows
BASE_URL = 'https://animeflv.net'
BROWSE_URL = 'https://animeflv.net/browse?'
SEARCH_URL = 'https://animeflv.net/browse?q='
ANIME_VIDEO_URL = 'https://animeflv.net/ver/'
BASE_EPISODE_IMG_URL = 'https://cdn.animeflv.net/screenshots/'
# BASE_JIKA_URL = ' https://api.jikan.moe/v3/search/anime?q='
# BASE_JIKA_ANIME_URL = 'https://api.jikan.moe/v3/anime/'
# BASE_MYANIME_LIST_URL = 'https://myanimelist.net/character/'
class AnimeFLV(object):
def __init__(self, *args, **kwargs):
session = kwargs.get('session', None)
self.__scraper = cloudscraper.create_scraper(sess=session)
def downloadLinksByEpisodeID(self, id, **kwargs):
"""
Get download links of specific episode.
Return a list of dictionaries like:
[
{
"server": "...",
"url": "..."
},
...
]
:param id: Episode id, like as '36557/nanatsu-no-taizai-1'.
:param **kwargs: Optional arguments for filter output (see doc).
:rtype: list
"""
res = self.__scraper.get(f"{ANIME_VIDEO_URL}{id}")
body = res.text
soup = BeautifulSoup(body, 'lxml')
table = soup.find('table', attrs={'class': 'RTbl'})
latin = kwargs.get('lat', False)
subtitled = kwargs.get('sub', True)
try:
rows = parseTable(table)
ret = []
for row in rows:
if row['FORMATO'].string == 'SUB' and subtitled\
or row['FORMATO'].string == 'LAT' and latin:
ret.append({
'server': row['SERVIDOR'].string,
'url': re.sub(r'^http[s]?://ouo.io/[A-Za-z0-9]+/[A-Za-z0-9]+\?[A-Za-z0-9]+=', '',
unquote(row['DESCARGAR'].a['href']))
})
return ret
except Exception:
return []
def search(self, query):
"""
Search in animeflv.net by query.
Return a list of dictionaries like:
[
{
"id": "...",
"title": "...",
"poster": " ... ",
"banner": "...",
"type": "...",
"synopsis": "...",
"rating": "..."
"debut": "...",
},
...
]
:param query: Query information like: 'Nanatsu no Taizai'.
:rtype: list
"""
res = self.__scraper.get(f"{SEARCH_URL}{quote(query)}")
body = res.text
soup = BeautifulSoup(body, 'lxml')
elements = soup.select('div.Container ul.ListAnimes li article')
ret = []
for element in elements:
try:
ret.append({
'id': element.select_one('div.Description a.Button')['href'][1:],
'title': element.select_one('a h3').string,
                    'poster': element.select_one('a div.Image figure img').get('src') or element.select_one('a div.Image figure img').get('data-cfsrc'),
                    'banner': (element.select_one('a div.Image figure img').get('src') or element.select_one('a div.Image figure img').get('data-cfsrc')).replace('covers', 'banners').strip(),
'type': element.select_one('div.Description p span.Type').string,
'synopsis': element.select('div.Description p')[1].string.strip(),
'rating': element.select_one('div.Description p span.Vts').string,
'debut': element.select_one('a span.Estreno').string.lower() if element.select_one('a span.Estreno') else None
})
except Exception:
pass
return ret
def getVideoServers(self, id, **kwargs):
"""
        Get the video servers for an episode by parsing the inline player data.
Return a list of dictionaries.
:param id: Episode id, like as '36557/nanatsu-no-taizai-1'.
:rtype: list
"""
res = self.__scraper.get(f"{ANIME_VIDEO_URL}{id}")
body = res.text
soup = BeautifulSoup(body, 'lxml')
scripts = soup.find_all('script')
latin = kwargs.get('lat', False)
subtitled = kwargs.get('sub', True)
servers = []
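        # AnimeFLV embeds the available players as JSON in an inline
        # "var videos = {...}" script, keyed by language (SUB/LAT)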
for script in scripts:
content = str(script)
if 'var videos = {' in content:
videos = content.split('var videos = ')[1].split(';')[0]
data = json.loads(videos)
if 'SUB' in data and subtitled:
servers.append(data['SUB'])
if 'LAT' in data and latin:
servers.append(data['LAT'])
return servers
def getAnimeInfo(self, id):
"""
Get information about specific anime.
Return a dictionary.
:param id: Anime id, like as 'anime/1590/nanatsu-no-taizai'.
:rtype: dict
"""
episodes, genres, extraInfo = self.__getAnimeEpisodesInfo__(id)
return {
'id': id,
'title': extraInfo['title'] or None,
'poster': extraInfo['poster'] or None,
'banner': extraInfo['banner'] or None,
'synopsis': extraInfo['synopsis'] or None,
'rating': extraInfo['rating'] or None,
'debut': extraInfo['debut'] or None,
'type': extraInfo['type'] or None,
'genres': genres or None,
'episodes': episodes or None
}
def __getAnimeEpisodesInfo__(self, id):
res = self.__scraper.get(f"{BASE_URL}/{id}")
body = res.text
soup = BeautifulSoup(body, 'lxml')
extraInfo = {
"title": soup.select_one('body div.Wrapper div.Body div div.Ficha.fchlt div.Container h1.Title').string,
"poster": BASE_URL + '/' + soup.select_one('body div div div div div aside div.AnimeCover div.Image figure img')['src'],
"synopsis": soup.select_one('body div div div div div main section div.Description p').string.strip(),
"rating": soup.select_one('body div div div.Ficha.fchlt div.Container div.vtshr div.Votes span#votes_prmd').string,
"debut": soup.select_one('body div.Wrapper div.Body div div.Container div.BX.Row.BFluid.Sp20 aside.SidebarA.BFixed p.AnmStts').string,
"type": soup.select_one('body div.Wrapper div.Body div div.Ficha.fchlt div.Container span.Type').string
}
extraInfo['banner'] = extraInfo['poster'].replace('covers' , 'banners').strip()
genres = []
for element in soup.select('main.Main section.WdgtCn nav.Nvgnrs a'):
if '=' in element['href']:
genres.append(element['href'].split('=')[1])
info_ids = []
episodes_data = []
episodes = []
try:
for script in soup.find_all('script'):
contents = str(script)
if 'var anime_info = [' in contents:
anime_info = contents.split('var anime_info = ')[1].split(';')[0]
info_ids.append(json.loads(anime_info))
if 'var episodes = [' in contents:
data = contents.split('var episodes = ')[1].split(';')[0]
episodes_data.extend(json.loads(data))
AnimeThumbnailsId = info_ids[0][0]
animeId = info_ids[0][2]
# nextEpisodeDate = info_ids[0][3] if len(info_ids[0]) > 4 else None
for episode, id in episodes_data:
episodes.append({
'episode': episode,
'id': f'{id}/{animeId}-{episode}',
'imagePreview': f'{BASE_EPISODE_IMG_URL}{AnimeThumbnailsId}/{episode}/th_3.jpg'
})
except Exception:
pass
return (episodes, genres, extraInfo)
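# Minimal usage sketch (requires network access to animeflv.net; the episode
# id below is the example from the docstrings):
#   api = AnimeFLV()
#   results = api.search('Nanatsu no Taizai')
#   links = api.downloadLinksByEpisodeID('36557/nanatsu-no-taizai-1')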
__version__ = '0.0.1'
__title__ = 'animeflv'
__author__ = 'Jorge Alejandro Jimenez Luna'
__license__ = 'MIT'
__copyright__ = 'Copyright 2021 RevDev' | 37.286957 | 180 | 0.533232 | 957 | 8,576 | 4.670846 | 0.252874 | 0.028188 | 0.028635 | 0.022819 | 0.287696 | 0.250112 | 0.210291 | 0.17472 | 0.170694 | 0.147204 | 0 | 0.009325 | 0.324743 | 8,576 | 230 | 181 | 37.286957 | 0.762563 | 0.139692 | 0 | 0.219858 | 0 | 0.014184 | 0.241487 | 0.036187 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049645 | false | 0.014184 | 0.028369 | 0 | 0.134752 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1061c8a032a6a852ad602929d7c70168ec30fc0b | 15,048 | py | Python | annotation_connector/ann_extractor.py | ScholarIndex/LinkedBooks | 0cae008427ed1eb34a882e9d85f24b42b3ee3a28 | [
"MIT"
] | null | null | null | annotation_connector/ann_extractor.py | ScholarIndex/LinkedBooks | 0cae008427ed1eb34a882e9d85f24b42b3ee3a28 | [
"MIT"
] | 6 | 2020-03-20T18:10:01.000Z | 2021-09-29T17:31:17.000Z | annotation_connector/ann_extractor.py | ScholarIndex/LinkedBooks | 0cae008427ed1eb34a882e9d85f24b42b3ee3a28 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Linked Books
Parser that extracts annotations (i.e. files with annotations) and exports a series of pickle objects to be used by pattern matching facilities
Exports:
1- lines with citations and without citations for Sup and Unsup extraction of lines with citations
2- annotations and list of tags (for consistency checks)
"""
__author__ = """Giovanni Colavizza"""
import os, codecs, pickle, jellyfish
from collections import defaultdict
from scripts.fs_crawlers import walklevel
from scripts.text_processing import find_all
from collections import OrderedDict
from parsers.data_structures import annotation, page, c_line
import matplotlib.pyplot as plt
#from nltk.tokenize import WordPunctTokenizer
#tokenizer = WordPunctTokenizer()
# constants definition
data_directory = "extraction"
base_dir = "/Users/colavizz/Projects/working_directory/annotations/lb_1"
out_dir = "/Users/colavizz/Projects/working_directory/annotations/extracted_annotations"
logger = codecs.open("parsers/output/ann_extractor_logger.csv", "w", "utf-8")
separator = "&"
separator_train = " "
general_annotations_primary = ['Primary-Partial','Primary-Full']
general_annotations_secondary = ['Secondary-Partial','Secondary-Full']
general_annotations_partial = ['Primary-Partial','Secondary-Partial']
general_annotations_full = ['Primary-Full','Secondary-Full']
general_annotations = ['Primary-Partial','Primary-Full','Secondary-Partial','Secondary-Full']
general_annotations_discard = ['Implicit','Full','Partial']
#specific_annotations = []
# TODO: also consider to MERGE categories, e.g. Editor and Author and Curator!
specific_annotations_discard = ['Other']#, 'Conjunction','TopicDate','Parchment','Chapter','Period','Column','Protocollo','Mazzo','Table','Voce','ArchivalUnit','Citation','Responsible','Box','Website']
citation_to_find = ["senato terra", "senato mar", "giustizia vecchia", "savi alle decime", "notarile", "provveditori al sal", "consiglio dei x", "consiglio x", "avogaria di comun", "notarile testamenti", "notarile, testamenti", "avogaria"]
citations_found = {x: {} for x in citation_to_find}
# TODO: this dump doesn't work, do it in annotation extractor.
#simply dumps an annotation for dedicated matching
def dump_annotation(text, category):
text = text.replace("\n","")
f = codecs.open(os.path.join(data_directory, "NOCONTEXT_tags_train"+".txt"), "a", "utf-8")
t = codecs.open(os.path.join(data_directory, "NOCONTEXT_category_train"+".txt"), "a", "utf-8")
a = codecs.open(os.path.join(data_directory, "NOCONTEXT_PRIMARY_train"+".txt"), "a", "utf-8")
b = codecs.open(os.path.join(data_directory, "NOCONTEXT_SECONDARY_train"+".txt"), "a", "utf-8")
c = codecs.open(os.path.join(data_directory, "NOCONTEXT_PARTIAL_train"+".txt"), "a", "utf-8")
d = codecs.open(os.path.join(data_directory, "NOCONTEXT_FULL_train"+".txt"), "a", "utf-8")
for n, word in enumerate(text.split()):
word = word.replace("\r","")
if category in general_annotations:
t.write(word+separator_train+str(n)+separator_train+category+"\n")
if category in general_annotations_primary:
a.write(word+separator_train+str(n)+separator_train+category+"\n")
elif category in general_annotations_secondary:
b.write(word+separator_train+str(n)+separator_train+category+"\n")
if category in general_annotations_full:
d.write(word+separator_train+str(n)+separator_train+category+"\n")
elif category in general_annotations_partial:
c.write(word+separator_train+str(n)+separator_train+category+"\n")
elif not category in specific_annotations_discard and not category in general_annotations_discard:
f.write(word+separator_train+str(n)+separator_train+category+"\n")
    if category not in specific_annotations_discard and category not in general_annotations_discard and category not in general_annotations:
        f.write("\n")
    if category in general_annotations:
        t.write("\n")
    if category in general_annotations_primary:
        a.write("\n")
    if category in general_annotations_secondary:
        b.write("\n")
    if category in general_annotations_partial:
        c.write("\n")
    if category in general_annotations_full:
        d.write("\n")
    # close every handle unconditionally; closing only on matching categories
    # leaked open file descriptors on every other call
    for handle in (f, t, a, b, c, d):
        handle.close()
def find_citations(text, year, corpus):
    # record match counts and short contexts per archival series, corpus and year
    # (callers pass the publication year, so the parameter is named accordingly)
    for c in citation_to_find:
        positions = [x for x in find_all(text.lower(), c)]
        if len(positions) > 0:
            contexts = [text[x-35:x+35] for x in positions]
            if corpus in citations_found[c]:
                if year in citations_found[c][corpus]:
                    citations_found[c][corpus][year]["count"] += len(positions)
                    citations_found[c][corpus][year]["list"].extend(contexts)
                else:
                    citations_found[c][corpus][year] = {"count": len(positions), "list": contexts}
            else:
                citations_found[c][corpus] = {year: {"count": len(positions), "list": contexts}}
def apply_annotations(line, ann_page):
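    # token-level alignment: tag each word on the line with the category/tag of
    # any annotation whose character span covers the word's start offset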
words = line.text.split()
start = line.start
for n, word in enumerate(words):
line.annotations[n] = {"word": word, "citation_category": "None", "citation_tag": "None", "start": start, "pos_in_line": n, "pos_in_cat": 0}
start += len(word)+1
for ann in ann_page:
#print(ann)
#print(line.text)
for n, word in line.annotations.items():
if word["start"] in range(ann.span[0],ann.span[1]):
if ann.category in general_annotations:
line.annotations[n]["citation_category"] = ann.category
matches_in_ann = list()
for m, w in enumerate(ann.text.split()):
# usually to fix punctuation not taken into account in annotation..
if jellyfish.levenshtein_distance(w, word["word"]) < 2:
matches_in_ann.append(m)
if len(matches_in_ann) == 1:
line.annotations[n]["pos_in_cat"] = matches_in_ann[0]
else:
for m in matches_in_ann:
context_minus = min(m, n)
context_max = min(len(ann.text.split()), len(words))
if ann.text.split()[m-context_minus:m+context_max] == words[n-context_minus:n+context_max]:
line.annotations[n]["pos_in_cat"] = m
break
elif ann.category in general_annotations_discard or ann.category in specific_annotations_discard:
continue
else:
line.annotations[n]["citation_tag"] = ann.category
return line
# lines printer (to review)
def print_lines(lines, out_file, separator="&"):
with codecs.open(out_file, "w", "utf-8") as f:
for item in lines:
for row in item[4].values():
out = str(item[0])+separator+str(item[1])+separator+str(item[2])+separator+str(row["ann"])+separator+str(row["txt"])+"\n"
f.write(out)
def main():
# data structures
annotations_store = list()
lines_store = list()
annotation_tags = set()
previous_page = page()
current_page = page()
previous_ann_page = OrderedDict()
current_ann_page = OrderedDict()
continuations = list()
annotations_by_year = defaultdict(int)
ann_counter = 0
# parse corpus
for root, dirs, files in walklevel(base_dir, 2):
for file in files:
if ".ann" in file:
ann_file = file
txt_file = file.replace(".ann",".txt")
corpus = root.split("/")[-2]
bid = root.split("/")[-1]
page_nr = int(file.split(".")[-2].split("_")[-1])
try:
year = int(bid[:4])
                except ValueError:
year = 0
                # check the files exist and are non-empty before reading them
                if not os.path.isfile(os.path.join(root, txt_file)):
                    logger.write(file+separator+"missing TXT file\n")
                    continue
                if os.path.getsize(os.path.join(root, ann_file)) == 0:
                    continue
                full_text = codecs.open(os.path.join(root, txt_file), "r", "utf-8").read()
                find_citations(full_text, year, corpus)
                annotations = codecs.open(os.path.join(root, ann_file), "r", "utf-8").read()
annotations_by_year[year] += 1
#print("Parsing "+corpus+" - "+bid+" - "+file)
# get and store list of files with annotations (for each folder, BID)
annotation_spans = list()
previous_ann_page = current_ann_page
current_ann_page = OrderedDict()
hasContinuation = False
for n, row in enumerate(annotations.split("\n")):
data = row.split("\t")
if len(data) > 1:
#if ann_file == "1998_15117.04.201518-19-26_page_35.ann":
# print(data)
                        ann_type = data[0][:1]
                        if ann_type in ("A", "R"):
                            if "Continuation" in data[1]:
                                continuations.append(data[1].split()[1])
                                hasContinuation = True
                            continue
                        ann_id = data[0]
category = ""
span = ""
text = ""
if len(data) == 3 and len(data[2]) > 0:
category = data[1].split()[0]
span = " ".join(data[1].split()[1:]).strip()
span = (int(span.split()[0]), int(span.split()[-1]))
if category in general_annotations:
ann_counter += 1
annotation_spans.append(span)
text = data[2]
else:
text = data[1]
if len(category) > 0:
annotation_tags.add(category)
#if ann_file == "1998_15117.04.201518-19-26_page_35.ann":
# print(category +" "+text+" "+str(span))
                        current_ann_page[n] = annotation(ann_type, ann_id, bid, corpus, txt_file, category, span, text)
dump_annotation(text, category)
# TODO: if needed expand on the representation of annotations with hierarchy and link to continuations
# sort and make a hierarchy of annotations
# merge continuations
# change and see if we need to process previous and last pages.
for ann in current_ann_page.values():
annotations_store.append(ann)
# process each corresponding text file
previous_page = current_page
current_page = page(bid, corpus, txt_file, full_text, page_nr, year, hasContinuation)
row_spans = [x for x in find_all(full_text, "\n")]
row_text = full_text.split("\n")
assert len(row_spans) == (len(row_text)-1)
pred = 0
keys = list()
for n, end in enumerate(row_spans):
keys.append((pred, n))
current_line = c_line(pred, end, n, row_text[n], False)
assert full_text[pred:end] == row_text[n]
current_page.addLine(current_line, n)
pred = end+1
annotation_spans = sorted(annotation_spans, key= lambda t: t[0])
keys = sorted(keys, key= lambda t: t[0])
for span in annotation_spans:
key = max([x for x in keys if x[0] <= span[0]])
key_pos = keys.index(key)
while key[0] <= span[1]:
current_page.lines[key[1]].hasAnnotation = True
key_pos += 1
if key_pos <= len(keys)-1:
key = keys[key_pos]
else:
break
# add annotations to the lines of the page
for n, line in current_page.lines.items():
annotations = list()
for ann in current_ann_page.values():
if ann.span:
if line.start <= ann.span[0] or ann.span[1] < line.end or (ann.span[0] < line.start < line.end < ann.span[1]):
annotations.append(ann)
#if current_page.filename == "1979_11217.04.201518-19-26_page_80.txt":
# for ann in annotations:
# print(ann.category + " " + ann.text + " " + str(ann.span))
#if current_page.filename == "1998_15117.04.201518-19-26_page_35.txt" and line.hasAnnotation:
# print(line.text)
current_page.lines[n] = apply_annotations(line, annotations)
#if current_page.filename == "1998_15117.04.201518-19-26_page_35.txt" and line.hasAnnotation:
# print(line.annotations)
# output, for each txt_file with annotations!
lines_store.append(current_page)
# store all data structures
logger.write(ann_file+separator+"extracted\n")
#print("Done "+corpus+" - "+bid+" - "+file)
pickle.dump(annotations_store, open("parsers/output/annotations.p", "wb"))
pickle.dump(annotation_tags, open("parsers/output/annotation_tags.p", "wb"))
pickle.dump(lines_store, open("parsers/output/lines_store.p", "wb"))
#print_lines(lines_store, "./output/lines.csv", separator)
#print(annotations_store[17].text)
print(annotation_tags)
print(len(lines_store))
print(ann_counter)
print(len(continuations))
for year, n in annotations_by_year.items():
print(str(year) + ": " + str(n))
#for line in lines_store[17].lines.values():
# print(line.annotations)
logger.close()
if __name__ == "__main__":
main()
print("DONE!!")
#print(citations_found)
figsize = (15,10)
year_start = 1960
for series, data in citations_found.items():
plt.figure(figsize=figsize)
for corpus, years in data.items():
data2 = sorted([(year, val["count"]) for year,val in years.items() if year > year_start], key=lambda t:t[0])
plt.plot([point[0] for point in data2], [point[1] for point in data2], label=corpus)
plt.legend(loc='best')
title = series
plt.title(title)
plt.savefig("parsers/plots/"+title+'.pdf') | 48.699029 | 239 | 0.568514 | 1,809 | 15,048 | 4.577667 | 0.177999 | 0.04782 | 0.032846 | 0.0541 | 0.289337 | 0.243449 | 0.206859 | 0.167371 | 0.117256 | 0.117256 | 0 | 0.020869 | 0.309011 | 15,048 | 309 | 240 | 48.699029 | 0.775534 | 0.148325 | 0 | 0.098712 | 0 | 0 | 0.092967 | 0.027961 | 0 | 0 | 0 | 0.003236 | 0.008584 | 1 | 0.021459 | false | 0 | 0.030043 | 0 | 0.055794 | 0.030043 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1063ba33c7c8379300e909203804cb70e901a853 | 9,424 | py | Python | tests/dataset_test.py | kilsenp/person-multi-task-dataset | 2f186cafa3db2c77d8c6c4309b2cadc13d4f92ab | [
"MIT"
] | 4 | 2020-10-08T03:31:36.000Z | 2021-03-06T08:06:23.000Z | reid/scripts/triplet_reid/tests/dataset_test.py | VisualComputingInstitute/CROWDBOT_perception | df98f3f658c39fb3fa4ac0456f1214f7918009f6 | [
"MIT"
] | 7 | 2021-06-08T20:55:10.000Z | 2022-02-10T00:38:32.000Z | reid/scripts/triplet_reid/tests/dataset_test.py | VisualComputingInstitute/CROWDBOT_perception | df98f3f658c39fb3fa4ac0456f1214f7918009f6 | [
"MIT"
] | null | null | null | from datasets.dataset import ConcatDataset, MultiDataset, unique_header
from builders.dataloader_builder import build_collate_fn, ERROR_STRING
from torch.utils.data import DataLoader
from datasets.utils import make_dataset_default, cv2_loader
from samplers.sequential_sampler import SequentialSampler
from samplers.batch_sampler import BatchSampler
from samplers.multi_sampler import ConcatenatedSamplerLongest
import pytest
from datasets.dummy import create_dummy_data, create_dummy_pid_data, DummyDataset
import torch
from datasets.reid_dataset import rewrite_pids, ReidDataset, ConcatReidDataset
from datasets.attribute_dataset import AttributeReidDataset, AttributeDataset
from datasets.mpii import make_dataset as make_mpii
from utils import visualize
import imgaug as ia
from imgaug import augmenters as iaa
from PIL import Image
import numpy as np
import builders.dataset_builder as dataset_builder
from datasets.attribute.market import make_market_attribute
from datasets.attribute.duke_mtmc import make_duke_attribute
from augmentations import ToTensor
from settings import Config
from builders import dataloader_builder
market_config = {
"name": "market1501",
"source_file": Config.MARKET_SOURCE,
"data_dir": Config.MARKET_DATA,
"loader_fn": "cv2",
"transform": {
"resize": {
"width": 256,
"height": 256
},
"debug": True
}
}
dataloader_cfg = {
"sampler": {
"type": "sequential",
"dataset": market_config,
"batch_size": 1
}
}
def test_dataset_simple():
dataloader = dataloader_builder.build(dataloader_cfg)
for idx, data in enumerate(dataloader):
assert 'img' in data
assert 'path' in data
assert isinstance(data['img'], torch.Tensor)
assert data['path'] != ERROR_STRING
if idx > 500:
break
def test_multi_dataset():
size1 = 70
size2 = 100
dummy_cfg_small = {
"name": "dummy",
"id": "dummy_small",
"size": size1,
"data_dir": "/"
}
dummy_cfg_large = {
"name": "dummy",
"id": "dummy_large",
"size": size2,
"data_dir": "/"
}
sequential_cfg1 = {
"type": "sequential",
"dataset": dummy_cfg_small,
"batch_size": 1,
"drop_last": True
}
sequential_cfg2 = {
"type": "sequential",
"dataset": dummy_cfg_large,
"batch_size": 1,
"drop_last": True
}
sampler_cfg = {
"type": "concatenated_longest",
"samplers": {
"sampler1": sequential_cfg1,
"sampler2": sequential_cfg2
}
}
dataloader_cfg = {
"sampler": sampler_cfg
}
dataloader = dataloader_builder.build(dataloader_cfg)
for idx, data in enumerate(dataloader):
assert data['path'][0].startswith("dummy_small")
assert data['path'][1].startswith("dummy_large")
    # the concatenated-longest sampler should iterate max(size1, size2) times
    assert idx == max(size1, size2) - 1
def test_concat_dataset():
size1 = 70
size2 = 100
name1 = "Dummy1"
name2 = "Dummy2"
dataset1 = DummyDataset(lambda: create_dummy_pid_data(size1, 30, name1), name1)
dataset2 = DummyDataset(lambda: create_dummy_data(size2, name2), name2)
dataset = ConcatDataset([dataset1, dataset2])
assert len(dataset) == size1 + size2
sampler = SequentialSampler(dataset)
collate_fn = build_collate_fn(dataset.header)
dataloader = DataLoader(
dataset,
sampler=sampler,
num_workers=1,
collate_fn=collate_fn
)
for idx, data in enumerate(dataloader):
if idx < size1:
            # the sequential sampler yields batches of size 1
assert data['path'][0].startswith(name1)
assert data['pid'][0] != -1
else:
assert data['path'][0].startswith(name2)
assert data['pid'][0] == -1
def test_unique_headers():
class HeaderDataset(object):
def __init__(self, header):
self.header = header
header1 = HeaderDataset({'test': 1})
header2 = HeaderDataset({'test': 1})
header = unique_header([header1, header2])
assert type(header) == dict
assert header['test'] == 1
header3 = HeaderDataset({'test': 2})
with pytest.raises(RuntimeError):
header = unique_header([header1, header3])
def test_concat_reid_dataset():
size1 = 70
size2 = 100
name1 = "Dummy1"
name2 = "Dummy2"
pid1 = 30
pid2 = 30
dataset1 = DummyDataset(lambda: create_dummy_pid_data(size1, pid1, name1), name1)
dataset2 = DummyDataset(lambda: create_dummy_pid_data(size2, pid2, name2), name2)
dataset = ConcatReidDataset([dataset1, dataset2])
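    # the concatenated dataset should expose the combined number of identities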
assert dataset.num_labels == pid1 + pid2
def test_rewrite_pids():
d1 = {'pid': 'a'}
d2 = {'pid': 'b'}
d3 = {'pid': 'c'}
d4 = {'pid': 'a'}
data = [d1, d2, d3, d4]
num_labels, label_dic = rewrite_pids(data)
assert num_labels == 3
assert d4['pid'] == 0
def test_make_market_attribute_train():
data, headers, dataset_info = make_market_attribute(Config.MARKET_ATTRIBUTE, "train")
assert len(data) == 751
def test_market_attribute_dataset():
market_attribute_cfg = {
"data_dir": Config.MARKET_ATTRIBUTE,
"split": 'train',
'name': 'market1501_attribute'
}
data = dataset_builder.build(market_attribute_cfg)
assert data[0]['hat'] == 0
assert data[174]['hat'] == 1
assert data[0]['upcolor'] == 2
assert data[0]['downcolor'] == 6
for idx, d in enumerate(data):
        assert d['downcolor'] != 9, idx
def test_make_market_attribute_gallery():
data, headers, dataset_info = make_market_attribute(Config.MARKET_ATTRIBUTE, "train")
assert len(data) == 751
def test_make_duke_attribute_gallery():
data, headers, dataset_info = make_duke_attribute(Config.DUKE_ATTRIBUTE, "train")
assert len(data) == 702
def test_duke_attribute_dataset():
duke_attribute_cfg = {
"data_dir": Config.DUKE_ATTRIBUTE,
"split": 'train',
'name': 'duke_mtmc_attribute'
}
data = dataset_builder.build(duke_attribute_cfg)
assert data[0]['hat'] == 0
assert data[4]['hat'] == 1
assert data[7]['upcolor'] == 5
assert data[336]['downcolor'] == 5
for idx, d in enumerate(data):
        assert d['gender'] < 2, idx
        assert d['top'] < 2, idx
        assert d['boots'] < 2, idx
        assert d['hat'] < 2, idx
        assert d['backpack'] < 2, idx
        assert d['bag'] < 2, idx
        assert d['handbag'] < 2, idx
        assert d['shoes'] < 2, idx
        assert d['upcolor'] < 8, idx
        assert d['downcolor'] < 7, idx
def test_make_mpii():
data, headers, dataset_info = make_mpii(Config.MPII_SOURCE, Config.MPII_DATA, "mpii")
def test_viz():
data, _, dataset_info = make_mpii(Config.MPII_SOURCE, Config.MPII_DATA, "mpii")
joint_info = dataset_info['joint_info']
for d in data[:5]:
coords = d['coords']
# find top left
top_x = 9999
top_y = 9999
bottom_x = 0
bottom_y = 0
for coord in coords:
x, y = coord
if x < top_x:
top_x = x
if x > bottom_x:
bottom_x = x
if y < top_y:
top_y = y
if y > bottom_y:
bottom_y = y
bbox = [(top_x, top_y), (bottom_x, bottom_y)]
visualize(d['path'], d['coords'], joint_info.stick_figure_edges, bbox)
def test_pose_imgaug():
data, headers, dataset_info = make_mpii(Config.MPII_SOURCE, Config.MPII_DATA, "mpii")
joint_info = dataset_info['joint_info']
ia.seed(1)
seq = iaa.Sequential([
iaa.Affine(
rotate=10,
scale=(0.5, 0.7)
) # rotate by exactly 10deg and scale to 50-70%, affects keypoints
])
for d in data[:5]:
image = np.asarray(Image.open(d['path']))
coords = d['coords']
keypoints = ia.KeypointsOnImage.from_coords_array(coords, image.shape)
seq_det = seq.to_deterministic()
image_aug = seq_det.augment_images([image])[0]
keypoints_aug = seq_det.augment_keypoints([keypoints])[0]
open_cv_image = np.array(image_aug)
# Convert RGB to BGR
open_cv_image = open_cv_image[:, :, ::-1].copy()
visualize(open_cv_image, keypoints_aug.get_coords_array(), joint_info.stick_figure_edges)
transform_cfg = {
"RandomHorizontalFlipWithPairs": {'p': 0.5},
"RandomCrop": {
"width": 128,
"height": 256,
"scale": 1.125
}
}
mpii_config = {
"name": "mpii",
"split": "train",
"source_file": Config.MPII_SOURCE,
"data_dir": Config.MPII_DATA,
"loader_fn": "cv2",
"transform": {
"affinewithcrop": {
"translate_percent": [-0.02, 0.02],
"scale": [0.75, 1.25]
},
"fliplrwithpairs": {"p": 0.5},
"resize": {
"width": 256,
"height": 256
}
},
"width": 256,
"height": 256,
"debug": True
}
def test_pose_dataset():
dataset = dataset_builder.build(mpii_config)
for data in dataset:
assert data['img'].shape == (3, 256, 256)
print(data['coords'])
break
| 27.08046 | 97 | 0.608871 | 1,122 | 9,424 | 4.902852 | 0.210339 | 0.029086 | 0.016361 | 0.015997 | 0.308853 | 0.215779 | 0.19142 | 0.163788 | 0.13543 | 0.105981 | 0 | 0.035725 | 0.269312 | 9,424 | 347 | 98 | 27.158501 | 0.763143 | 0.013264 | 0 | 0.229927 | 0 | 0 | 0.097816 | 0.003121 | 0 | 0 | 0 | 0 | 0.142336 | 1 | 0.058394 | false | 0 | 0.087591 | 0 | 0.149635 | 0.007299 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
106687ef31f6625f1777cb55e6a77d211abb6838 | 4,073 | py | Python | phase_triggered_tms/pre_post/BMI.py | HUtangge/experimental-paradigm | 866fa504f0c9ec63366ff497c1491a44f9b38bb4 | [
"MIT"
] | null | null | null | phase_triggered_tms/pre_post/BMI.py | HUtangge/experimental-paradigm | 866fa504f0c9ec63366ff497c1491a44f9b38bb4 | [
"MIT"
] | null | null | null | phase_triggered_tms/pre_post/BMI.py | HUtangge/experimental-paradigm | 866fa504f0c9ec63366ff497c1491a44f9b38bb4 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Jan 30 17:10:26 2020
@author: Ethan
Markers:
bmi_start
prepare
nothing, imagine, or move
relax
bmi_end
"""
def bmi_main():
    import numpy as np
    import reiz
from reiz.visual import Mural, Background
canvas = reiz.Canvas()
canvas.open()
def countdown(canvas, sek):
for i in range(0, sek):
cue = reiz.Cue(canvas, visualstim=[bg, Mural(text=str(sek - i), color=(0.18, 0.18, 0.18))])
cue.show(duration=1)
def part2(cuetype, image_lib):
if "Nothing" in cuetype:
DispImage = image_lib.Nothing
elif "Imagine" in cuetype:
DispImage = image_lib.Imagine
elif "Open" in cuetype:
DispImage = image_lib.Open
elif "Close" in cuetype:
DispImage = image_lib.Close
return DispImage
bg = Background(color='gray')
states = ("Nothing", "Imagine", "Open", "Close")
image_lib = reiz.visual.read_folder(r'C:\Users\Messung\Desktop\study-phase-triggered-TMS\phase_triggered_tms\pre_post')
nBlocks = 3
tiles = np.tile(states, (4))
block_tiles = np.tile(states, (nBlocks, 4))
for i in range(nBlocks):
block_tiles[i, :] = np.random.permutation(tiles)
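    # each block presents every state four times, in a freshly shuffled order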
canvas.start_run = False
start_protocol = reiz.Cue(
canvas, visualstim=[bg, Mural(text='Press F5 to start BMI', color=(0.18, 0.18, 0.18))])
while not canvas.start_run:
start_protocol.show(duration=0.1)
countdown(canvas, 3)
reiz.Cue(canvas,visualstim=[bg, reiz.visual.Mural("BMI Task:", position=[0, 0.5], fontsize=1.5, color=(0.18, 0.18, 0.18)),
reiz.visual.Mural("Bitte folgen Sie den Anweisungen", position=[0, -0.25], fontsize=1, color=(0.18, 0.18, 0.18))]).show(duration=5)
reiz.Cue(canvas,visualstim=[bg, reiz.visual.Mural("Bilder werden angezeigt.", position=[0, 0.4], fontsize=1, color=(0.18, 0.18, 0.18)),
reiz.visual.Mural("Bitte 3 Sekunden lang durchführen", position=[0, -0.4], fontsize=1, color=(0.18, 0.18, 0.18))]).show(5)
reiz.Cue(canvas,visualstim=[bg, image_lib.Open, reiz.visual.Mural("Öffnen Ihre rechte Hand", position=[0, 0.7], fontsize=1, color=(0.18, 0.18, 0.18))]).show(5)
reiz.Cue(canvas,visualstim=[bg, image_lib.Close, reiz.visual.Mural("Schließe Ihre rechte Hand", position=[0, 0.7], fontsize=1, color=(0.18, 0.18, 0.18))]).show(5)
reiz.Cue(canvas,visualstim=[bg, image_lib.Imagine,
reiz.visual.Mural("Stellen sich vor Ihre rechte Hand zu öffnen", position=[0, 0.7], fontsize=0.7, color=(0.18, 0.18, 0.18))]).show(5)
reiz.Cue(canvas,visualstim=[bg, image_lib.Nothing,
reiz.visual.Mural("Mach nichts", position=[0, 0.7], fontsize=1, color=(0.18, 0.18, 0.18))]).show(10)
reiz.marker.push('bmi_start')
for k in range(nBlocks):
canvas.start_run = False
start_protocol = reiz.Cue(
canvas, visualstim=[bg, Mural(text="Press F5 to start block " + str(k + 1), color=(0.18, 0.18, 0.18))])
while not canvas.start_run:
start_protocol.show(duration=0.1)
countdown(canvas, 3)
for cue in range(np.size(block_tiles, 1)):
reiz.marker.push("prepare_" + str(k) + '_' + str(cue))
reiz.Cue(canvas, visualstim=[bg, reiz.visual.Mural("Bereitmachen", position=[0, 0.4], fontsize=1, color=(0.18, 0.18, 0.18))]).show(3)
reiz.marker.push(str(block_tiles[k,cue]) + '_' + str(k) + '_' + str(cue))
reiz.Cue(canvas, visualstim=[bg, part2(block_tiles[k,cue], image_lib)]).show(3)
reiz.marker.push("relax_" + str(k) + '_' + str(cue))
reiz.Cue(canvas, visualstim=[bg, reiz.visual.Mural("Entspannen", position=[0, -0.4], fontsize=1, color=(0.18, 0.18, 0.18))]).show(5)
reiz.marker.push('bmi_end')
canvas.close() | 43.329787 | 167 | 0.594157 | 593 | 4,073 | 4.015177 | 0.224283 | 0.049139 | 0.043679 | 0.065519 | 0.534649 | 0.468711 | 0.467031 | 0.446871 | 0.412852 | 0.398152 | 0 | 0.065154 | 0.242573 | 4,073 | 94 | 168 | 43.329787 | 0.706645 | 0.040265 | 0 | 0.16129 | 0 | 0 | 0.112891 | 0.02074 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048387 | false | 0 | 0.064516 | 0 | 0.129032 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
106790f4cf8ad337e86d57da6b042eb48665a36c | 5,273 | py | Python | apps/dash-floris-gch/app.py | JeroenvdSande/dash-sample-apps | 106fa24693cfdaf47c06466a0aed78e642344f91 | [
"MIT"
] | 2,332 | 2019-05-10T18:24:20.000Z | 2022-03-30T21:46:29.000Z | apps/dash-floris-gch/app.py | JeroenvdSande/dash-sample-apps | 106fa24693cfdaf47c06466a0aed78e642344f91 | [
"MIT"
] | 384 | 2019-05-09T19:19:56.000Z | 2022-03-12T00:58:24.000Z | apps/dash-floris-gch/app.py | JeroenvdSande/dash-sample-apps | 106fa24693cfdaf47c06466a0aed78e642344f91 | [
"MIT"
] | 3,127 | 2019-05-16T17:20:45.000Z | 2022-03-31T17:59:07.000Z | import base64
from io import BytesIO
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output, State
import floris.tools as wfct
import matplotlib.pyplot as plt
import reusable_components as rc # see reusable_components.py
# ############ Create helper functions ############
def mpl_to_b64(fig, format="png", dpi=300, **kwargs):
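    # render the figure to an in-memory image and wrap it in a base64 data URI,
    # ready to be used directly as an html.Img src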
b_io = BytesIO()
fig.savefig(b_io, format=format, bbox_inches="tight", dpi=dpi, **kwargs)
b64_enc = base64.b64encode(b_io.getvalue()).decode("utf-8")
return f"data:image/{format};base64," + b64_enc
def build_visualizations(x_loc, y_loc, yaw_1, wd, gch, minSpeed=4, maxSpeed=8.0):
fi = wfct.floris_interface.FlorisInterface("./data/example_input.json")
fi.set_gch(gch)
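    # toggle the Gauss-Curl Hybrid (GCH) wake model for the with/without comparison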
fi.reinitialize_flow_field(
wind_direction=wd, layout_array=((0, 126 * 7, 126 * 14), (0, 0, 0))
)
fi.calculate_wake(yaw_angles=[yaw_1, 0, 0])
# Horizontal plane
fig, ax = plt.subplots()
wfct.visualization.visualize_cut_plane(
fi.get_hor_plane(), ax=ax, minSpeed=minSpeed, maxSpeed=maxSpeed
)
ax.axhline(y_loc, color="w", ls="--", lw=1)
ax.axvline(x_loc, color="w", ls="--", lw=1)
horiz_b64 = mpl_to_b64(fig)
plt.close(fig)
# Cross (x-normal) plane
fig, ax = plt.subplots()
wfct.visualization.visualize_cut_plane(
fi.get_cross_plane(x_loc=x_loc), ax=ax, minSpeed=minSpeed, maxSpeed=maxSpeed
)
wfct.visualization.reverse_cut_plane_x_axis_in_plot(ax)
x_plane_b64 = mpl_to_b64(fig)
plt.close(fig)
# Cross (y-normal) plane
fig, ax = plt.subplots()
wfct.visualization.visualize_cut_plane(
fi.get_y_plane(y_loc=y_loc), ax=ax, minSpeed=minSpeed, maxSpeed=maxSpeed
)
wfct.visualization.reverse_cut_plane_x_axis_in_plot(ax)
y_plane_b64 = mpl_to_b64(fig)
plt.close(fig)
return horiz_b64, x_plane_b64, y_plane_b64
# ############ Initialize app ############
app = dash.Dash(__name__, external_stylesheets=[rc.MATERALIZE_CSS])
server = app.server
# ############ Build components and layouts ############
navbar = html.Nav(
html.Div(
className="nav-wrapper teal",
children=[
html.Img(
src=app.get_asset_url("dash-logo.png"),
style={"float": "right", "height": "100%", "padding-right": "15px"},
),
html.A(
"GCH and Cut Plane Visualization in FLORIS",
className="brand-logo",
href="https://plotly.com/dash/",
style={"padding-left": "15px"},
),
],
)
)
controls = [
rc.CustomSlider(id="wind-direction", min=250, max=290, label="Wind Direction"),
rc.CustomSlider(id="yaw-angle", min=-30, max=30, label="Yaw angle T1"),
rc.CustomSlider(
id="x-loc", min=0, max=3000, value=500, label="X Normal Plane Intercept"
),
rc.CustomSlider(id="y-loc", min=-100, max=100, label="Y Normal Plane Intercept"),
]
left_section = rc.Card(
rc.CardContent(
[
rc.CardTitle("Horizontal Cut Plane"),
html.Img(id="gch-horizontal", style={"width": "100%"}),
rc.CardTitle("Cross (X-Normal) Cut Plane"),
html.Img(id="gch-x-normal", style={"width": "100%"}),
rc.CardTitle("Cross (Y-Normal) Cut Plane"),
html.Img(id="gch-y-normal", style={"width": "100%"}),
]
)
)
right_section = rc.Card(
rc.CardContent(
[
rc.CardTitle("Horizontal Cut Plane"),
html.Img(id="no-gch-horizontal", style={"width": "100%"}),
rc.CardTitle("Cross (X-Normal) Cut Plane"),
html.Img(id="no-gch-x-normal", style={"width": "100%"}),
rc.CardTitle("Cross (Y-Normal) Cut Plane"),
html.Img(id="no-gch-y-normal", style={"width": "100%"}),
]
)
)
app.layout = html.Div(
style={"--slider_active": "teal"},
# className="container",
children=[
navbar,
html.Br(),
rc.Row(
rc.Col(
rc.Card(rc.CardContent(rc.Row([rc.Col(c, width=3) for c in controls]))),
width=12,
)
),
rc.Row(
[
rc.Col([html.H4("Results with GCH"), left_section], width=6),
rc.Col([html.H4("Results without GCH"), right_section], width=6),
]
),
],
)
@app.callback(
Output("gch-horizontal", "src"),
Output("gch-x-normal", "src"),
Output("gch-y-normal", "src"),
Input("x-loc", "value"),
Input("y-loc", "value"),
Input("yaw-angle", "value"),
Input("wind-direction", "value"),
)
def gch_update(x_loc, y_loc, yaw_1, wd):
return build_visualizations(x_loc, y_loc, yaw_1, wd, gch=True)
@app.callback(
Output("no-gch-horizontal", "src"),
Output("no-gch-x-normal", "src"),
Output("no-gch-y-normal", "src"),
Input("x-loc", "value"),
Input("y-loc", "value"),
Input("yaw-angle", "value"),
Input("wind-direction", "value"),
)
def no_gch_update(x_loc, y_loc, yaw_1, wd):
return build_visualizations(x_loc, y_loc, yaw_1, wd, gch=False)
if __name__ == "__main__":
app.run_server(debug=True, threaded=False, processes=2)
| 31.017647 | 88 | 0.593021 | 709 | 5,273 | 4.251058 | 0.269394 | 0.031851 | 0.013935 | 0.029861 | 0.460518 | 0.428998 | 0.394492 | 0.392833 | 0.390511 | 0.358991 | 0 | 0.029608 | 0.231367 | 5,273 | 169 | 89 | 31.201183 | 0.714039 | 0.035274 | 0 | 0.289855 | 0 | 0 | 0.177587 | 0.010388 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028986 | false | 0 | 0.065217 | 0.014493 | 0.123188 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1067b1a398ac5ee0e81efafd57f4798e6f8e07f8 | 13,965 | py | Python | scripts/minify.py | russellw/Ayane | 8109f9f134053fa1ededd2a4ff54da050291244e | [
"MIT"
] | null | null | null | scripts/minify.py | russellw/Ayane | 8109f9f134053fa1ededd2a4ff54da050291244e | [
"MIT"
] | null | null | null | scripts/minify.py | russellw/Ayane | 8109f9f134053fa1ededd2a4ff54da050291244e | [
"MIT"
] | null | null | null | import inspect
import subprocess
import re
import sys
import logging
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.DEBUG)
# numbers larger than 2000 silently fail
sys.setrecursionlimit(2000)
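# the TPTP reader below is a recursive-descent parser, so deeply nested
# formulas need extra Python stack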
def first(s):
for x in s:
return x
assert False
def dbg(a):
info = inspect.getframeinfo(inspect.currentframe().f_back)
logger.debug(f"{info.filename}:{info.function}:{info.lineno}: {repr(a)}")
def remove(s, i):
s = list(s)
del s[i]
return s
def check_tuples(x):
if isinstance(x, tuple):
for y in x:
check_tuples(y)
return
if isinstance(x, list):
raise ValueError(x)
def imp(x, y):
return "|", ("~", x), y
def size(x):
if type(x) in (list, tuple):
n = 0
for y in x:
n += size(y)
return n
return 1
def isFn(a):
if type(a) != str:
return
return a[0].islower()
def isVar(a):
if type(a) != str:
return
return a[0].isupper()
def getInds(t, a, r):
if type(a) == str:
if not isFn(a):
return
if t == "ind":
r.add(a)
return
o = a[0]
if o in ("!", "?"):
assert t == "bool"
getInds(t, a[2], r)
return
if o in ("&", "|", "<=>", "~"):
assert t == "bool"
for b in a[1:]:
getInds(t, b, r)
return
if isFn(o) or o == "=":
for b in a[1:]:
getInds("ind", b, r)
return
raise Exception(o)
def getGlobalInds(xs):
r = set()
for x in xs:
getInds("bool", x, r)
return r
def freeVars(a):
free = []
def rec(a, bound):
if isinstance(a, tuple):
if a[0] in ("!", "?"):
bound = bound.copy()
for x in a[1]:
bound.add(x)
rec(a[2], bound)
return
for b in a[1:]:
rec(b, bound)
return
if isVar(a):
if a not in bound and a not in free:
free.append(a)
rec(a, set())
return free
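# Worked example (illustrative, not executed):
#   freeVars(("!", ("X",), ("p", "X", "Y"))) == ["Y"]
# "X" is bound by the universal quantifier, so only "Y" is free.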
######################################## parser
def read_tptp(filename, xs, select=True):
text = open(filename).read()
if text and text[-1] != "\n":
text += "\n"
# tokenizer
ti = 0
tok = ""
def err(msg):
line = 1
for i in range(ti):
if text[i] == "\n":
line += 1
raise ValueError(f"{filename}:{line}: {repr(tok)}: {msg}")
def lex():
nonlocal ti
nonlocal tok
while ti < len(text):
c = text[ti]
# space
if c.isspace():
ti += 1
continue
            # line comment
            if c in ("%", "#"):
                while text[ti] != "\n":
                    ti += 1
                continue
# block comment
if text[ti : ti + 2] == "/*":
ti += 2
while text[ti : ti + 2] != "*/":
ti += 1
ti += 2
continue
# word
if c.isalpha() or c == "$":
i = ti
ti += 1
while text[ti].isalnum() or text[ti] == "_":
ti += 1
tok = text[i:ti]
return
# quote
if c in ("'", '"'):
i = ti
ti += 1
while text[ti] != c:
if text[ti] == "\\":
ti += 1
ti += 1
ti += 1
tok = text[i:ti]
return
# number
if c.isdigit() or (c == "-" and text[ti + 1].isdigit()):
# integer part
i = ti
ti += 1
while text[ti].isalnum():
ti += 1
# rational
if text[ti] == "/":
ti += 1
while text[ti].isdigit():
ti += 1
# real
else:
if text[ti] == ".":
ti += 1
while text[ti].isalnum():
ti += 1
if text[ti - 1] in ("e", "E") and text[ti] in ("+", "-"):
ti += 1
while text[ti].isdigit():
ti += 1
tok = text[i:ti]
return
# punctuation
if text[ti : ti + 3] in ("<=>", "<~>"):
tok = text[ti : ti + 3]
ti += 3
return
if text[ti : ti + 2] in ("!=", "=>", "<=", "~&", "~|"):
tok = text[ti : ti + 2]
ti += 2
return
tok = c
ti += 1
return
# end of file
tok = None
def eat(o):
if tok == o:
lex()
return True
def expect(o):
if tok != o:
err(f"expected '{o}'")
lex()
# terms
def args():
expect("(")
r = []
if tok != ")":
r.append(atomic_term())
while tok == ",":
lex()
r.append(atomic_term())
expect(")")
return tuple(r)
def atomic_term():
o = tok
# higher-order terms
if tok == "!":
raise "Inappropriate"
# syntax sugar
if eat("$greater"):
s = args()
return "$less", s[1], s[0]
if eat("$greatereq"):
s = args()
return "$lesseq", s[1], s[0]
lex()
if tok == "(":
s = args()
return (o,) + s
return o
def infix_unary():
x = atomic_term()
o = tok
if o == "=":
lex()
return "=", x, atomic_term()
if o == "!=":
lex()
return "~", ("=", x, atomic_term())
return x
def unitary_formula():
o = tok
if o == "(":
lex()
x = logic_formula()
expect(")")
return x
if o == "~":
lex()
return "~", unitary_formula()
if o in ("!", "?"):
lex()
# variables
expect("[")
v = []
v.append(atomic_term())
while tok == ",":
lex()
v.append(atomic_term())
expect("]")
# body
expect(":")
x = o, tuple(v), unitary_formula()
return x
return infix_unary()
def logic_formula():
x = unitary_formula()
o = tok
if o in ("&", "|"):
v = [o, x]
while eat(o):
v.append(unitary_formula())
return tuple(v)
if o == "<=>":
lex()
return o, x, unitary_formula()
if o == "=>":
lex()
return imp(x, unitary_formula())
if o == "<=":
lex()
return imp(unitary_formula(), x)
if o == "<~>":
lex()
return "~", ("<=>", x, unitary_formula())
if o == "~&":
lex()
return "~", ("&", x, unitary_formula())
if o == "~|":
lex()
return "~", ("|", x, unitary_formula())
return x
# top level
def ignore():
if eat("("):
while not eat(")"):
ignore()
return
lex()
def selecting(name):
return select is True or name in select
def annotated_formula():
lex()
expect("(")
# name
name = atomic_term()
expect(",")
# role
role = atomic_term()
expect(",")
if role == "type":
while tok != ")":
ignore()
else:
x = logic_formula()
if selecting(name):
if role == "conjecture":
x = "~", x
xs.append(x)
# annotations
if tok == ",":
while tok != ")":
ignore()
# end
expect(")")
expect(".")
def include():
lex()
expect("(")
# tptp
tptp = os.getenv("TPTP")
if not tptp:
err("TPTP environment variable not set")
# file
filename1 = atomic_term()
# select
select1 = select
if eat(","):
expect("[")
select1 = []
while True:
name = atomic_term()
if selecting(name):
select1.append(name)
if not eat(","):
break
expect("]")
# include
read_tptp(tptp + "/" + filename1, xs, select1)
# end
expect(")")
expect(".")
lex()
while tok:
if tok in ("cnf", "fof", "tff"):
annotated_formula()
continue
if tok == "include":
include()
continue
err("unknown language")
######################################## printing
outf = None
def pr(x):
    if not isinstance(x, str):
x = str(x)
outf.write(x)
def prargs(x):
pr("(")
for i in range(1, len(x)):
if i > 1:
pr(",")
prterm(x[i])
pr(")")
def need_parens(x, parent):
if not parent:
return
if x[0] in ("&", "<=>", "|"):
return parent[0] in ("&", "<=>", "?", "!", "~", "|")
def prterm(x, parent=None):
if isinstance(x, tuple):
o = x[0]
# infix
if o == "=":
prterm(x[1])
pr("=")
prterm(x[2])
return
if o in ("&", "<=>", "|"):
if need_parens(x, parent):
pr("(")
for i in range(1, len(x)):
if i > 1:
pr(f"\n{o} ")
prterm(x[i], x)
if need_parens(x, parent):
pr(")")
return
# prefix/infix
if o == "~":
pr("~")
prterm(x[1], x)
return
# prefix
if o in ("?", "!"):
pr(o)
pr("[")
v = x[1]
for i in range(len(v)):
if i:
pr(",")
y = v[i]
pr(y)
pr("]:")
prterm(x[2], x)
return
pr(o)
prargs(x)
return
pr(x)
formnames = 0
def prformula(x):
global formnames
formnames += 1
pr("fof")
pr("(f")
# name
pr(formnames)
pr(", ")
# role
pr("plain")
pr(", ")
# content
prterm(x)
# end
pr(").\n")
def write_tmp(xs):
global formnames
global outf
formnames = 0
outf = open("b.p", "w")
for x in xs:
prformula(x)
outf.close()
######################################## shrink
def squant(x):
used = freeVars(x[2])
vs = []
for y in x[1]:
if y in used:
vs.append(y)
if not vs:
return x[2]
    return x[0], tuple(vs), x[2]  # keep only the quantified variables that are actually used
def shrink(t, x):
if type(x) is not tuple:
return [x]
o = x[0]
if o in ("!", "?"):
assert t == "bool"
r = [x, squant(x)]
xs1 = shrink(t, x[2])
for y in xs1:
r.append((o, x[1], y))
if len(x[1]) > 1:
used = freeVars(x[2])
for i in range(len(x[1])):
vs = list(x[1])
if vs[i] in used:
continue
del vs[i]
r.append((o, tuple(vs), y))
return r
if o in ("&", "|", "<=>"):
assert t == "bool"
r = [x]
for i in range(1, len(x)):
for z in shrink(t, x[i]):
y = list(x)
y[i] = z
r.append(tuple(y))
y = list(x)
del y[i]
if len(y) == 2:
y = y[1]
else:
y = tuple(y)
r.append(y)
return r
if o in ("~",):
assert t == "bool"
r = [x]
xs1 = shrink(t, x[1])
r.extend(xs1)
for y in xs1:
r.append((o, y))
return r
assert isFn(o) or o == "="
r = [x]
if t == "ind":
r.append(indVal)
for i in range(1, len(x)):
for z in shrink("ind", x[i]):
y = list(x)
y[i] = z
r.append(tuple(y))
return r
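# Illustrative example (not executed): shrink("bool", ("&", "p", "q")) yields
# the original formula, copies with one argument replaced by each of its own
# shrunk forms, and the single conjuncts "p" and "q" on their own.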
def shrinks(xs):
r = []
for i in range(len(xs)):
for x in shrink("bool", xs[i]):
ys = xs[:i] + [x] + xs[i + 1 :]
r.append(ys)
return r
######################################## top level
def good_test(xs):
write_tmp(xs)
cmd = ["./ayane", "b.p"]
p = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
stdout, stderr = p.communicate()
stdout = str(stdout, "utf-8")
stderr = str(stderr, "utf-8")
"""
if stderr:
print(stderr, end="")
exit(1)
if p.returncode:
raise Exception(str(p.returncode))
"""
m = re.search(r"FAILED", stdout)
if not m:
# print(stdout)
# print(stderr)
pass
return m
xs = []
read_tptp("a.p", xs)
assert good_test(xs)
indVal = first(getGlobalInds(xs))
while 1:
# print(xs)
print(f"size: {size(xs)}")
xss = shrinks(xs)
for ys in xss:
# print(f"size: {size(ys)}")
if size(ys) >= size(xs):
continue
if not good_test(ys):
continue
print(ys)
xs = ys
break
else:
write_tmp(xs)
exit(0)
| 21.12708 | 77 | 0.368278 | 1,532 | 13,965 | 3.327024 | 0.144909 | 0.01236 | 0.015696 | 0.021189 | 0.215813 | 0.181872 | 0.141456 | 0.108299 | 0.075927 | 0.055915 | 0 | 0.014674 | 0.472968 | 13,965 | 660 | 78 | 21.159091 | 0.677853 | 0.029646 | 0 | 0.412826 | 0 | 0 | 0.038517 | 0.003481 | 0 | 0 | 0 | 0 | 0.016032 | 1 | 0.072144 | false | 0.002004 | 0.01002 | 0.004008 | 0.206413 | 0.004008 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1069707a36080b47a174a720dec41b8ae371de1c | 1,478 | py | Python | SockTimeout.py | Alwaysproblem/Socket-receive-timeout | d5c3ea25a2b5f4d88870204c1c47bac950c0c887 | [
"Apache-2.0"
] | 1 | 2019-03-06T03:47:00.000Z | 2019-03-06T03:47:00.000Z | SockTimeout.py | Alwaysproblem/Socket-receive-timeout | d5c3ea25a2b5f4d88870204c1c47bac950c0c887 | [
"Apache-2.0"
] | null | null | null | SockTimeout.py | Alwaysproblem/Socket-receive-timeout | d5c3ea25a2b5f4d88870204c1c47bac950c0c887 | [
"Apache-2.0"
] | null | null | null | import socket
import threading as td
import time
class SockRecvTimeout(socket.socket):
def __init__(self,family=socket.AF_INET, type=socket.SOCK_STREAM, proto=0, fileno=None):
        super().__init__(family, type, proto=proto, fileno=fileno)
self.recv_flag = False
self.recv_data = None
# self.addr = None
self.RecvTimeout = False
def _recv(self, buff_size):
self.recv_data = self.recv(buff_size)
self.recv_flag = True
def _recvTimeout(self, buff_size, Timeout, sample_interval):
t = td.Thread(target=self._recv, args=[buff_size])
        t.daemon = True  # setDaemon() is deprecated
t.start()
self.recv_flag = False
self.RecvTimeout = False
        Tstart = time.monotonic()  # time.clock() was removed in Python 3.8
        while not self.recv_flag and time.monotonic() - Tstart < Timeout:
time.sleep(sample_interval)
        if self.recv_flag:
self.RecvTimeout = False
else:
self.RecvTimeout = True
def recvTimeout(self, buff_size, Timeout, sample_interval):
"""
buff_size refer to the buffer size
Timeout refer to the timeout
sample_interveal refer to the interval of
"""
t = td.Thread(target=self._recvTimeout, args=[buff_size, Timeout, sample_interval])
t.start()
t.join()
return self.recv_data, self.RecvTimeout
# return self.recv_data, self.addr, self.RecvTimeout
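# Minimal usage sketch (HOST/PORT are placeholders, not part of the original
# module; assumes some TCP server is listening there):
if __name__ == "__main__":
    HOST, PORT = "127.0.0.1", 8000
    sock = SockRecvTimeout()
    sock.connect((HOST, PORT))
    data, timed_out = sock.recvTimeout(1024, Timeout=5.0, sample_interval=0.05)
    print("timed out" if timed_out else "received: %r" % data)
    sock.close()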
| 32.844444 | 93 | 0.609608 | 180 | 1,478 | 4.811111 | 0.311111 | 0.101617 | 0.069284 | 0.055427 | 0.295612 | 0.15358 | 0.117783 | 0.117783 | 0.117783 | 0 | 0 | 0.001931 | 0.299053 | 1,478 | 44 | 94 | 33.590909 | 0.833977 | 0.118403 | 0 | 0.233333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.1 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
106b61c1681737e906f7fffb68a5bffc93a2d0c6 | 422 | py | Python | examples/simple_generator_consumer.py | ZygusPatryk/amqpstorm | 0f3ad84a529f12769d34638a88c38f3055cb05cd | [
"MIT"
] | 140 | 2016-06-07T18:53:57.000Z | 2022-03-23T01:50:15.000Z | examples/simple_generator_consumer.py | ZygusPatryk/amqpstorm | 0f3ad84a529f12769d34638a88c38f3055cb05cd | [
"MIT"
] | 85 | 2016-04-11T23:32:32.000Z | 2022-03-19T07:21:21.000Z | examples/simple_generator_consumer.py | ZygusPatryk/amqpstorm | 0f3ad84a529f12769d34638a88c38f3055cb05cd | [
"MIT"
] | 38 | 2016-04-20T20:21:13.000Z | 2022-03-23T05:31:58.000Z | import logging
from amqpstorm import Connection
logging.basicConfig(level=logging.INFO)
with Connection('localhost', 'guest', 'guest') as connection:
with connection.channel() as channel:
channel.queue.declare('simple_queue')
channel.basic.consume(queue='simple_queue', no_ack=False)
for message in channel.build_inbound_messages():
print(message.body)
message.ack()
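# A matching publisher for this consumer could look like (sketch, same amqpstorm API):
#
#     with Connection('localhost', 'guest', 'guest') as connection:
#         with connection.channel() as channel:
#             channel.queue.declare('simple_queue')
#             channel.basic.publish(body='Hello World!', routing_key='simple_queue')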
| 30.142857 | 65 | 0.701422 | 49 | 422 | 5.938776 | 0.591837 | 0.09622 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191943 | 422 | 13 | 66 | 32.461538 | 0.853372 | 0 | 0 | 0 | 0 | 0 | 0.101896 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
106c1cd14c2884a5f7d2e0c30c0548004107aed7 | 7,274 | py | Python | examples/AIJ Case G/aij_case_g_results.py | SimScaleGmbH/external-building-aerodynamics | 8ab6ce7bf7e0835d9b200c55461cd6966479f94a | [
"MIT"
] | null | null | null | examples/AIJ Case G/aij_case_g_results.py | SimScaleGmbH/external-building-aerodynamics | 8ab6ce7bf7e0835d9b200c55461cd6966479f94a | [
"MIT"
] | null | null | null | examples/AIJ Case G/aij_case_g_results.py | SimScaleGmbH/external-building-aerodynamics | 8ab6ce7bf7e0835d9b200c55461cd6966479f94a | [
"MIT"
] | null | null | null | import pathlib
import matplotlib as mpl
import matplotlib.image as image
import matplotlib.pyplot as plt
import pandas as pd
import simscale_eba.ResultProcessing as res
experimental_velocity_path = pathlib.Path.cwd() / "AIJ_Case_G_Normalised_Velocity.xlsx"
experimental_velocity = pd.read_excel(experimental_velocity_path, index_col=0)
experimental_tke_path = pathlib.Path.cwd() / "AIJ_Case_G_Normalised_TKE.xlsx"
experimental_tke = pd.read_excel(experimental_tke_path, index_col=0)
ref_speed = 5.586
result = res.directional_result()
result.find_project("AIJ Case: G - API")
result.find_simulation("Case G - URANS - Power Law")
result.find_run("Run 1")
result.query_results()
results = result.results
options = result.return_result_options()
category = "PROBE_POINT_PLOT_STATISTICAL_DATA"
average_velocity_mag = {}
tke_dict = {}
velocity_rans_dict = {}
tke_rans_dict = {}
for i in range(1, 8):
name = "Pole{}".format(i)
result.download_result(category, name)
download_dict = result.download_dict
items = result._find_item(category, name)
path = download_dict[category][name][None]
results = res.probes_to_dataframe(path)
source_points = pd.read_csv(pathlib.Path(name + ".csv"), index_col=0, header=None)
source_points.columns = ["X", "Y", "Z"]
u_mag = results["UMag"]
tke_rans = results["k"]["AVG"]
variance_rans = ((2 / 3) * tke_rans) ** 0.5
variance_resolved = u_mag["STDDEV"]
variance_total = (variance_rans ** 2 + variance_resolved ** 2) ** 0.5
variance_total.index = source_points["Z"].round(1)
tke_total = (3 / 2) * variance_total ** 2
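    # The lines above combine the resolved fluctuation (STDDEV of UMag) with the
    # modelled RANS part (sigma = sqrt(2/3 * k), assuming isotropy) in quadrature,
    # then convert the total back to a TKE via k = 3/2 * sigma^2.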
u_mag.index = source_points["Z"].round(1)
average_velocity_mag[name] = u_mag["AVG"]
tke_dict[name] = tke_total
velocity_rans_path = pathlib.Path.cwd() / "AIJ_TU2_Velocity" / (name + ".csv")
try:
velocity_rans_dict[name] = pd.read_csv(velocity_rans_path, index_col=1)
    except FileNotFoundError:
print("{} does not have reported RANS data".format(name))
tke_rans_path = pathlib.Path.cwd() / "AIJ_TU2_TKE" / (name + ".csv")
try:
tke_rans_dict[name] = pd.read_csv(tke_rans_path, index_col=1)
    except FileNotFoundError:
print("{} does not have reported RANS data".format(name))
label_dict = {
"Pole1": 0.25,
"Pole2": 0.5,
"Pole3": 1,
"Pole4": 2,
"Pole5": 3,
"Pole6": 4,
"Pole7": 5,
}
mpl.rcParams['figure.dpi'] = 2400
aspect_image = 1 / 12
aspect_plot = 1 / 8
multiplier = aspect_image / aspect_plot
distribution = [1, multiplier, multiplier, multiplier, multiplier, multiplier, multiplier, multiplier]
fig, axs = plt.subplots(1, 8, sharey=True, gridspec_kw={'width_ratios': distribution})
im = image.imread('setup.png')
axs[0].imshow(im, extent=(0, 1, 0, 12), zorder=-1)
axs[0].set_aspect(aspect=aspect_image)
axs[0].set_ylabel("Height (m)", fontsize=5)
axs[0].set_yticks([1.5, 3, 4.5, 6, 9, 12])
axs[0].tick_params(axis='y', labelsize=5)
axs[0].tick_params(axis='x', labelsize=5)
for i in range(1, 8):
result_name = "Pole{}".format(i)
plot_list = []
if result_name in velocity_rans_dict.keys():
plot_list.append(velocity_rans_dict[result_name]["velocity"])
plot_list.append(velocity_rans_dict[result_name].index)
l2, = axs[i].plot(*plot_list, '-ro', markerfacecolor='none', markeredgecolor='red', markersize=3, linewidth=0.5,
markeredgewidth=0.5)
l3, = axs[i].plot(experimental_velocity.iloc[:, i - 1], experimental_velocity.index, 'ko',
markerfacecolor='none', markeredgecolor='black', markersize=3, )
l1, = axs[i].plot(average_velocity_mag[result_name] / ref_speed, average_velocity_mag[result_name].index)
l = [l1, l2, l3]
        legend_plot = i
else:
l2, = axs[i].plot(experimental_velocity.iloc[:, i - 1], experimental_velocity.index, 'ko',
markerfacecolor='none', markeredgecolor='black', markersize=3, )
l1, = axs[i].plot(average_velocity_mag[result_name] / ref_speed, average_velocity_mag[result_name].index)
axs[i].set_xlim(0, 1)
axs[i].set_ylim(0, 12)
axs[i].set_title("x/H=" + str(label_dict[result_name]), fontsize=5)
axs[i].set_xlabel("U/Uh (-)", fontsize=5)
axs[i].grid(color='black', linestyle='--', linewidth=0.5)
axs[i].tick_params(axis='x', labelsize=5)
axs[i].set_aspect(aspect=aspect_plot)
# fig.subplots_adjust(bottom=-0.5)
handles, labels = axs[legend_plot].get_legend_handles_labels()
model = "uRANS"
labels = ["SimScale - {} - Power Law Profile".format(model), "AIJ - RANS", "Experimental"]
fig.legend(l,
labels,
loc='lower center',
bbox_to_anchor=(0.5, 0.25),
fontsize=5,
frameon=False
)
fig.suptitle("SimScale vs Experimental Results, for AIJ Case G", y=0.7)
plt.savefig('velocity_results.png')
# TKE Plot
aspect_image = 0.1 / 12
aspect_plot = 0.1 / 8
multiplier = aspect_image / aspect_plot
distribution = [1, multiplier, multiplier, multiplier, multiplier, multiplier, multiplier, multiplier]
fig, axs = plt.subplots(1, 8, sharey=True, gridspec_kw={'width_ratios': distribution})
axs[0].imshow(im, extent=(0, 0.1, 0, 12), zorder=-1)
axs[0].set_aspect(aspect=aspect_image)
axs[0].set_ylabel("Height (m)", fontsize=5)
axs[0].set_yticks([1.5, 3, 4.5, 6, 9, 12])
axs[0].set_xticks([0.05, 0.1])
axs[0].tick_params(axis='y', labelsize=5)
axs[0].tick_params(axis='x', labelsize=5)
for i in range(1, 8):
result_name = "Pole{}".format(i)
plot_list = []
if result_name in tke_rans_dict.keys():
plot_list.append(tke_rans_dict[result_name]["tke"])
plot_list.append(tke_rans_dict[result_name].index)
l2, = axs[i].plot(*plot_list, '-ro', markerfacecolor='none', markeredgecolor='red', markersize=3, linewidth=0.5,
markeredgewidth=0.5)
l3, = axs[i].plot(experimental_tke.iloc[:, i - 1], experimental_tke.index, 'ko', markerfacecolor='none',
markeredgecolor='black', markersize=3, )
l1, = axs[i].plot(tke_dict[result_name] / ref_speed ** 2, tke_dict[result_name].index)
l = [l1, l2, l3]
        legend_plot = i
else:
l2, = axs[i].plot(experimental_tke.iloc[:, i - 1], experimental_tke.index, 'ko', markerfacecolor='none',
markeredgecolor='black', markersize=3, )
l1, = axs[i].plot(tke_dict[result_name] / ref_speed ** 2, tke_dict[result_name].index)
axs[i].set_xlim(0, 0.1)
axs[i].set_ylim(0, 12)
axs[i].set_xticks([0.05, 0.1])
axs[i].set_title("x/H=" + str(label_dict[result_name]), fontsize=5)
axs[i].set_xlabel("TKE/Uh² (-)", fontsize=5)
axs[i].grid(color='black', linestyle='--', linewidth=0.5)
axs[i].tick_params(axis='x', labelsize=5)
axs[i].set_aspect(aspect=aspect_plot)
# fig.subplots_adjust(bottom=-0.5)
handles, labels = axs[1].get_legend_handles_labels()
labels = ["SimScale - {} - Power Law Profile".format(model), "AIJ - RANS", "Experimental"]
fig.legend(l,
labels,
loc='lower center',
bbox_to_anchor=(0.5, 0.25),
fontsize=5,
frameon=False
)
fig.suptitle("SimScale vs Experimental Results, for AIJ Case G", y=0.7)
plt.savefig('tke_results.png')
| 36.189055 | 120 | 0.659472 | 1,062 | 7,274 | 4.313559 | 0.186441 | 0.021829 | 0.016809 | 0.069854 | 0.692644 | 0.692644 | 0.656843 | 0.639162 | 0.593539 | 0.593539 | 0 | 0.035138 | 0.182293 | 7,274 | 200 | 121 | 36.37 | 0.735037 | 0.010173 | 0 | 0.462025 | 0 | 0 | 0.109088 | 0.013619 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037975 | 0 | 0.037975 | 0.012658 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
106e74c39aef33d713a8dc39fa90a44a46430c88 | 10,034 | py | Python | pipconflictchecker/checker.py | ambitioninc/pip-conflict-checker | b460622a3e26d3f34f0f1b7dba7b967a739040bb | [
"MIT"
] | 59 | 2015-05-05T02:43:22.000Z | 2021-12-07T13:34:58.000Z | pipconflictchecker/checker.py | ambitioninc/pip-conflict-checker | b460622a3e26d3f34f0f1b7dba7b967a739040bb | [
"MIT"
] | 8 | 2017-02-10T20:02:31.000Z | 2021-02-01T16:23:54.000Z | pipconflictchecker/checker.py | ambitioninc/pip-conflict-checker | b460622a3e26d3f34f0f1b7dba7b967a739040bb | [
"MIT"
] | 18 | 2015-05-28T19:25:45.000Z | 2020-10-30T09:02:46.000Z | from __future__ import absolute_import
from __future__ import unicode_literals
import operator

from pkg_resources import parse_version
try:
from pip import get_installed_distributions # pragma: no cover
except ImportError: # pragma: no cover
# pip >= 10.0.0
from pkg_resources import working_set # pragma: no cover
def get_installed_distributions(): # pragma: no cover
return working_set
class Conflict(object):
"""
Class that contains information about a dependency conflict
"""
def __init__(self, project_name, required_project_name, installed_version, specs):
super(Conflict, self).__init__()
self.project_name = project_name
self.required_project_name = required_project_name
self.installed_version = installed_version
self.specs = specs
self.readable_specs = self.create_readable_specs()
def create_readable_specs(self):
readable_specs = []
for spec in self.specs:
readable_specs.append('{0}{1}'.format(
spec[0],
spec[1]
))
return ','.join(readable_specs)
class Validator(object):
def __init__(self, installed_version, required_version_specs):
super(Validator, self).__init__()
self.installed_version = installed_version
self.required_version_specs = sorted(required_version_specs, key=lambda spec: parse_version(spec[1]))
def is_valid(self):
"""
Checks that the installed version is valid within the required versions
"""
# Init is valid to false
is_valid = False
# Get the booleans of all the checks
in_ranges = self.in_ranges()
in_exacts = self.in_exacts()
in_excludes = self.in_excludes()
# Determine if this is a valid installed version
if (in_ranges or in_exacts) and not in_excludes:
is_valid = True
return is_valid
def in_ranges(self):
"""
Determine if the installed version is in one of the required ranges
"""
# Get the ranges
ranges = self.get_required_version_ranges()
# If there are no ranges return true
if not len(ranges):
return True
# Set the default to false
in_ranges = False
# Keep a list of the results for determining if a version is within a range
results = []
# Loop over the ranges and determine if the installed version is in this range
for spec_range in ranges:
spec_results = []
for spec in spec_range:
                if spec is not None:
                    # apply the comparison operator directly instead of
                    # building a string and calling eval()
                    compare = {
                        '<': operator.lt, '<=': operator.le,
                        '>': operator.gt, '>=': operator.ge,
                    }[spec[0]]
                    spec_results.append(compare(
                        parse_version(self.installed_version),
                        parse_version(spec[1])
                    ))
# If any spec was false the overall range is false
if False in spec_results:
results.append(False)
else:
results.append(True)
# If the installed version is within any of the ranges, the overall result is true
if True in results:
in_ranges = True
# Return the result
return in_ranges
def in_exacts(self):
"""
Determine if the installed version matches one of the exact versions
"""
# Set the default response to false
in_exacts = False
# Loop over the specs and check for an exact match
exacts = self.get_required_version_exacts()
for spec in exacts:
if spec[1] == self.installed_version:
in_exacts = True
# Return the response
return in_exacts
def in_excludes(self):
"""
Determine if the installed version matches one of the excluded versions
"""
# Set the default response to false
in_excludes = False
# Check installed version against the excluded versions
excludes = self.get_required_version_excludes()
for spec in excludes:
if spec[1] == self.installed_version:
in_excludes = True
# Return the response
return in_excludes
def get_required_version_ranges(self):
"""
Determines the ranges that a version has to exist within
"""
# List of all allowed ranges
ranges = []
# Keep track of the minimum required spec
min_spec = None
# Keep track of the maximum required spec
max_spec = None
# Loop over all the required specs and calculate the ranges
for spec in self.required_version_specs:
comparison = spec[0]
# Check if this should be the max
if comparison in ['<=', '<']:
max_spec = spec
# Check if this should be the min value
elif comparison in ['>=', '>']:
min_spec = spec
# Check if we have both a min and a max spec if so push it onto the ranges and reset
if min_spec and max_spec:
ranges.append((min_spec, max_spec))
min_spec = None
max_spec = None
# Add the last range if we need to
if min_spec or max_spec:
ranges.append((min_spec, max_spec))
# Return the ranges
return ranges
def get_required_version_exacts(self):
"""
Returns a list of versions that must be exact
"""
# List of exact versions
exacts = []
# Loop over all the required specs get the exacts
for spec in self.required_version_specs:
comparison = spec[0]
# Check if the comparison is exact
if comparison == '==':
exacts.append(spec)
# Return the exact versions
return exacts
def get_required_version_excludes(self):
"""
Returns a list of versions that we need to exclude
"""
# List of excluded versions
excluded = []
# Loop over all the required specs get the exacts
for spec in self.required_version_specs:
comparison = spec[0]
# Check if the comparison is exact
if comparison == '!=':
excluded.append(spec)
# Return the excluded version specs
return excluded
class Checker(object):
"""
Class that contains all the checker methods that find dependency conflicts
"""
def get_requirement_versions(self):
"""
Returns a dictionary of project_name => dict of projects that requires it with lists of requirements
"""
distributions = get_installed_distributions()
dist_requirements = {}
# Compute the dist requirements and versions
for dist in distributions:
dist_requirement_dict = dist_requirements.get(dist.project_name, {})
dist_requirements[dist.project_name] = dist_requirement_dict
for requirement in dist.requires():
dist_requirement_dict = dist_requirements.get(requirement.project_name, {})
dist_requirement_list = dist_requirement_dict.get(dist.project_name, set())
for spec in requirement.specs:
dist_requirement_list.add(spec)
dist_requirement_dict[dist.project_name] = dist_requirement_list
dist_requirements[requirement.project_name] = dist_requirement_dict
# Return the dict
return dist_requirements
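    # Shape of the mapping returned above (illustrative values):
    #   {'six': {'mock': {('>=', '1.9')}, 'astroid': {('<', '2.0')}}}
    # i.e. required project -> requiring project -> set of (comparator, version).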
def get_installed_versions(self):
"""
Returns a dict of project_name => version installed
"""
distributions = get_installed_distributions()
dist_versions = {}
# Build the installed versions dict
for dist in distributions:
dist_versions[dist.project_name] = dist.version
# Return the dict
return dist_versions
def get_conflicts(self):
"""
Checks the requirements against the installed projects to find any version conflicts
"""
requirement_versions = self.get_requirement_versions()
installed_versions = self.get_installed_versions()
# Find any requirement conflicts
conflicts = []
for project_name, requirements in requirement_versions.items():
# If this requirement is not in the installed versions, just continue
if project_name not in installed_versions:
continue
# Get the installed version
installed_version = installed_versions[project_name]
# Loop over the required dictionaries and determine if we have any dependency conflicts
for required_project_name, specs in requirements.items():
# Create a validator
validator = Validator(installed_version=installed_version, required_version_specs=specs)
if not validator.is_valid():
conflicts.append(Conflict(**{
'project_name': project_name,
'required_project_name': required_project_name,
'installed_version': installed_version,
'specs': specs
}))
# Return the conflicts
return conflicts
# Main entry point for console script
def main():
checker = Checker()
conflicts = checker.get_conflicts()
if conflicts:
print('-' * 50)
print(' Conflicts Detected')
print('-' * 50)
for conflict in conflicts:
output_string = (
' - ',
'{project_name}({installed_version}) ',
'{required_project_name}({readable_specs})'
)
print(''.join(output_string).format(
**conflict.__dict__
))
return 1
return 0
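# Example of using Validator directly (illustrative spec tuples, in the
# (comparator, version) form produced by pkg_resources requirements):
#
#     v = Validator('1.4.2', [('>=', '1.0'), ('<', '2.0'), ('!=', '1.4.1')])
#     v.is_valid()  # True: 1.4.2 lies in [1.0, 2.0) and is not excluded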
| 33.006579 | 109 | 0.603349 | 1,139 | 10,034 | 5.118525 | 0.147498 | 0.04717 | 0.01235 | 0.027444 | 0.29211 | 0.22247 | 0.141509 | 0.093825 | 0.069468 | 0.069468 | 0 | 0.003592 | 0.334064 | 10,034 | 303 | 110 | 33.115512 | 0.868902 | 0.265597 | 0 | 0.148148 | 0 | 0 | 0.033579 | 0.022103 | 0 | 0 | 0 | 0 | 0.006173 | 1 | 0.092593 | false | 0 | 0.037037 | 0.006173 | 0.240741 | 0.024691 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
106f12a0e7add7477b9aadea71890307457fa18b | 1,282 | py | Python | src/main/python/laprob/cheat.py | NIL-zhuang/IRBL | 4f787e2bf065f728f086dfad07d71ef6210dd159 | [
"MIT"
] | null | null | null | src/main/python/laprob/cheat.py | NIL-zhuang/IRBL | 4f787e2bf065f728f086dfad07d71ef6210dd159 | [
"MIT"
] | null | null | null | src/main/python/laprob/cheat.py | NIL-zhuang/IRBL | 4f787e2bf065f728f086dfad07d71ef6210dd159 | [
"MIT"
] | null | null | null | from constants import projects_path
from mapper import MapperGenerator
from fileFilter import FileIndex, FileFilter
from util import FileIdx
import os
import json
class CleanUnfixedFiles():
def __init__(self):
self.generateFiles()
self.fileIdx = FileIdx()
self.buggyFiles = self.getAllBuggyFiles()
def generateFiles(self):
FileFilter().filterFile()
FileIndex().storeIdx()
MapperGenerator().generate()
def cleanFiles(self):
srcPath = os.path.join(projects_path, 'src')
for file in self.buggyFiles:
file = os.path.join(srcPath, file)
os.remove(file)
def getAllBuggyFiles(self):
files = set()
        with open(os.path.join(projects_path, 'bug_src_map.json'), 'r') as f:
            mapper = json.load(f)
for k, v in mapper.items():
for item in v:
files.add(self.fileIdx.idx2file(item))
return files
def cleanMiddleFiles(self):
cleanFile = ['bug_src_map.json', 'fileIndex.csv']
for file in cleanFile:
os.remove(os.path.join(projects_path, file))
def cheat(self):
self.cleanFiles()
self.cleanMiddleFiles()
self.generateFiles()
if __name__ == '__main__':
CleanUnfixedFiles().cheat()
| 27.276596 | 86 | 0.631825 | 142 | 1,282 | 5.56338 | 0.373239 | 0.060759 | 0.050633 | 0.068354 | 0.083544 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001054 | 0.25975 | 1,282 | 46 | 87 | 27.869565 | 0.831401 | 0 | 0 | 0.054054 | 0 | 0 | 0.044462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162162 | false | 0 | 0.162162 | 0 | 0.378378 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10709fe03e6ac02fc5bc3bd44e11de49985efd41 | 5,658 | py | Python | src/offlineExp/mf.py | BetsyHJ/DANCER | 6393a6422eec8fa0002624d118469537578f580f | [
"MIT"
] | 5 | 2022-01-18T02:19:29.000Z | 2022-03-23T12:42:04.000Z | src/offlineExp/mf.py | BetsyHJ/DANCER | 6393a6422eec8fa0002624d118469537578f580f | [
"MIT"
] | null | null | null | src/offlineExp/mf.py | BetsyHJ/DANCER | 6393a6422eec8fa0002624d118469537578f580f | [
"MIT"
] | null | null | null | import torch
from torch import nn
from torch.nn.init import xavier_uniform_, xavier_normal_, constant_
from torch.nn.parameter import Parameter
class MF(nn.Module):
'''
Time-aware matrix factorization
Cite: Collaborative filtering with temporal dynamics
We only consider q_i(t) here when modeling r_{u,i,t}
'''
def __init__(self, config, data, debiasing=False, output_dim=2):
super(MF, self).__init__()
self.task = config['task']
# load parameter info
self.debiasing = debiasing
self.embedding_size = int(config['embedding_size'])
self.loss_type = config['loss_type']
# self.lr_decay_step = int(config['lr_decay_step'])
self.batch_size = int(config['batch_size'])
self.n_items = data.n_items
self.n_users = data.n_users
self.output_dim = output_dim
# define layers and loss
self.user_embedding = nn.Embedding(self.n_users, self.embedding_size)
self.item_embedding = nn.Embedding(self.n_items, self.embedding_size)
self.m = None
reduction = 'mean'
if self.task == 'OPPT':
reduction = 'none'
if self.loss_type.upper() == 'CE':
if self.debiasing:
self.loss_fct = nn.CrossEntropyLoss(reduction='none')
else:
self.loss_fct = nn.CrossEntropyLoss()
elif self.loss_type.upper() == 'MSE':
self.loss_fct = nn.MSELoss(reduction=reduction)
elif self.loss_type.upper() == 'NLL':
# self.loss_fct = nn.NLLLoss(reduction='none')
# self.loss_fct = nn.BCEWithLogitsLoss()
self.loss_fct = nn.BCELoss(reduction=reduction)
self.m = nn.Sigmoid()
if self.output_dim > 2:
                self.loss_fct = nn.CrossEntropyLoss(reduction=reduction)  # was '==', a no-op comparison
                self.m = nn.Softmax(dim=-1)
else:
raise NotImplementedError("Make sure 'loss_type' in ['CE', 'MSE', 'NLL']!")
# parameters initialization
self.apply(self._init_weights)
def _init_weights(self, module):
if isinstance(module, nn.Embedding):
xavier_normal_(module.weight)
elif isinstance(module, nn.GRU):
            xavier_uniform_(module.weight_hh_l0)
            xavier_uniform_(module.weight_ih_l0)
elif isinstance(module, nn.Linear):
xavier_normal_(module.weight)
constant_(module.bias, 0.0)
elif isinstance(module, Parameter):
            constant_(module, 0.0)  # a Parameter has no .weight attribute
def _gather_indexes(self, output, gather_index):
"""Gathers the vectors at the spexific positions over a minibatch"""
gather_index = gather_index.view(-1, 1, 1).expand(-1, -1, output.shape[-1])
output_tensor = output.gather(dim=1, index=gather_index)
return output_tensor.squeeze(1)
def forward(self, user, item):
user_e = self.user_embedding(user)
item_e = self.item_embedding(item)
# if self.loss_type.upper() == 'NLL':
# scores = torch.mul(user_e, item_e).sum(-1).float() # [B, D] -> [B]
# scores = torch.sigmoid(scores).unsqueeze(-1) #[B 1] for obser
# return torch.cat((1.0 - scores, scores), -1) # [B, 2]
output = torch.mul(user_e, item_e).sum(-1).float() # [B, D] -> [B]
if self.m is None:
return output
return self.m(output)
def calculate_loss(self, interaction):
user = interaction['user']
item = interaction['item']
pred = self.forward(user, item)
target = interaction['target'].float()
loss = self.loss_fct(pred, target)
# if self.debiasing:
# ctr = torch.reciprocal(interaction['ctr']) # [B]
# loss = torch.mul(loss, ctr).sum() # [B] -> [1]
return loss
def predict(self, interaction):
user = interaction['user']
item = interaction['item']
pred = self.forward(user, item)
return pred
def full_sort_predict(self, interaction):
user = interaction['user']
test_items_emb = self.item_embedding.weight.view(self.n_items, 1, self.embedding_size) # [N D]
scores = torch.matmul(self.user_embedding(user), test_items_emb.transpose(0, 1)) # [B D], [D N] -> [B N]
return scores
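# Minimal usage sketch (illustrative; the config keys are exactly the ones read
# in __init__, and 'SOME_TASK' is a placeholder for any task other than 'OPPT'):
#
#     config = {'task': 'SOME_TASK', 'embedding_size': 64,
#               'loss_type': 'MSE', 'batch_size': 256}
#     model = MF(config, data)
#     loss = model.calculate_loss({'user': u, 'item': i, 'target': y})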
class MF_dnn(MF):
def __init__(self, config, data, debiasing=False):
super(MF_dnn, self).__init__(config, data, debiasing)
# self.dense = nn.Linear(1, 1)
self.b = Parameter(torch.Tensor(1))
# self.w = Parameter(torch.Tensor(1))
class MF_v(MF):
def __init__(self, config, data, debiasing=False):
super(MF_v, self).__init__(config, data, debiasing)
# self.dense = nn.Linear(1, 1)
self.b = Parameter(torch.Tensor(1))
self.b_u = nn.Embedding(self.n_users, 1)
self.b_i = nn.Embedding(self.n_items, 1)
# self.w = Parameter(torch.Tensor(1))
self.apply(self._init_weights)
print('-*-*-*-* We use s_{uit} = v_u * v_i + b + b_u + b_i *-*-*-*-')
def forward(self, user, item):
user_e = self.user_embedding(user)
item_e = self.item_embedding(item)
r_ui = torch.mul(user_e, item_e).sum(-1).float() # [B, D] -> [B]
# # W * v_u * v_i + b
# f_uit = self.dense(r_ui.unsqueeze(1)).squeeze().float() # [B]
# # v_u * v_i + b
# f_uit = r_ui + self.b
# # v_u * v_i + b_u + b_i + b
f_uit = r_ui + self.b + self.b_u(user).squeeze() + self.b_i(item).squeeze()
if self.m is not None:
return self.m(f_uit)
return f_uit | 40.414286 | 113 | 0.593142 | 754 | 5,658 | 4.246684 | 0.209549 | 0.03248 | 0.027483 | 0.02842 | 0.397876 | 0.296065 | 0.235478 | 0.205497 | 0.196752 | 0.196752 | 0 | 0.009732 | 0.273595 | 5,658 | 140 | 114 | 40.414286 | 0.769343 | 0.19141 | 0 | 0.239583 | 0 | 0 | 0.042765 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104167 | false | 0 | 0.041667 | 0 | 0.260417 | 0.010417 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10714339f756289446a85583ab0f5357b5bc2250 | 7,746 | py | Python | astm_file2mysql_pentra_xlr.py | nishishailesh/astm_general | 20519b0e71065226334b443c11f3dd9333e536d9 | [
"MIT"
] | 3 | 2020-11-04T15:42:47.000Z | 2021-08-14T17:32:41.000Z | astm_file2mysql_pentra_xlr.py | nishishailesh/astm_general | 20519b0e71065226334b443c11f3dd9333e536d9 | [
"MIT"
] | 1 | 2021-07-19T13:54:18.000Z | 2021-08-01T17:47:34.000Z | astm_file2mysql_pentra_xlr.py | nishishailesh/astm_general | 20519b0e71065226334b443c11f3dd9333e536d9 | [
"MIT"
] | 1 | 2021-08-16T08:40:57.000Z | 2021-08-16T08:40:57.000Z | #!/usr/bin/python3
import sys, io
import logging
import time
import zlib
import astm_file2mysql_general as astmg
import base64
import struct
#apt search python3-matplotlib
#apt install python3-matplotlib
#import matplotlib.pyplot as plt
#import numpy as np
import datetime
#to ensure that password is not in main sources
#prototype file is as follows
'''
example /var/gmcs_config/astm_var.py
#!/usr/bin/python3.7
my_user='uuu'
my_pass='ppp'
'''
'''
if anything is redirected, last newline is added.
To prevent it, use following
I needed this while outputting relevant data to a file via stdout redirection
echo -n `./astm_file2mysql_general.py` > x
'''
sys.path.append('/var/gmcs_config')
import astm_var
#print(dir(astm_var))
#Globals for configuration################
#used by parent class astm_file (so be careful, they are must)
log=1
my_host='127.0.0.1'
my_user=astm_var.my_user
my_pass=astm_var.my_pass
my_db='cl_general'
inbox='/root/yumizen_h500.data/'
archived='/root/yumizen_h500.arch/'
log_filename='/var/log/yumizen_h500.log'
logging.basicConfig(filename=log_filename,level=logging.DEBUG)
if log==0:
logging.disable(logging.CRITICAL)
#sub-class for yumizen H500 ASTM#########
#zlib.decompress(data, wbits=MAX_WBITS, bufsize=DEF_BUF_SIZE)
#https://docs.python.org/3/library/zlib.html
#Read docs for -15, for no header
def decode_base64_and_inflate( b64string ):
decoded_data = base64.b64decode( b64string )
return zlib.decompress( decoded_data , -15)
#not used in this project
def deflate_and_base64_encode( string_val ):
zlibbed_str = zlib.compress( string_val )
compressed_string = zlibbed_str[2:-4]
return base64.b64encode( compressed_string )
def mk_num_tuple_from_def_base_byte_str(def_base_byte_str):
non_base_inflated_str=decode_base64_and_inflate(def_base_byte_str)
length=len(non_base_inflated_str)
num_tuple=()
count=0
while count<length:
x=non_base_inflated_str[count:count+4]
#FLOATLE Little Enedian Float
#https://docs.python.org/2/library/struct.html#format-characters
num_value=struct.unpack('f',x)
num_tuple=num_tuple + (num_value)
count=count+4
return num_tuple
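# Illustrative check of the payload format: each group of 4 bytes is one
# little-endian float32, e.g. on a little-endian machine
#     struct.unpack('f', b'\x00\x00\x80?')  ->  (1.0,)
# so a deflated+base64 payload of N*4 bytes yields an N-element tuple.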
def mk_histogram_from_tuple(xy,heading,x_axis,y_axis,axis_range_tuple):
    # imported locally because the module-level matplotlib import is commented out
    import matplotlib.pyplot as plt
#print(x)
#print(y)
plt.plot(xy[0], xy[1])
plt.xlabel(x_axis)
plt.ylabel(y_axis)
plt.axis(axis_range_tuple)
plt.title('HISTOGRAM: '+heading)
f = io.BytesIO()
plt.savefig(f, format='png')
f.seek(0)
data=f.read()
f.close()
plt.close() #otherwise graphs will be overwritten, in next loop
return data
def mk_matrix_from_tuple(xy,heading,x_axis,y_axis,axis_range_tuple):
    # imported locally because the module-level matplotlib import is commented out
    import matplotlib.pyplot as plt
#print(x)
#print(y)
'''
0 for LYM box
1 for MON box
2 for NEU box
3 for EOS box
4 for LIC box
5 for ALY box
6 for LL box
7 for RN box
8 for RM box
'''
colors=('blue','green','red','cyan','#8B6914','#FB00EF','#1E90FF','#FFA500','#95FC01')
plt.text(0,axis_range_tuple[3]-axis_range_tuple[1]*0.05,' LYM',color=colors[0])
plt.text(0,axis_range_tuple[3]-axis_range_tuple[1]*0.10,' MON',color=colors[1])
plt.text(0,axis_range_tuple[3]-axis_range_tuple[1]*0.15,' NEU',color=colors[2])
plt.text(0,axis_range_tuple[3]-axis_range_tuple[1]*0.20,' EOS',color=colors[3])
plt.text(0,axis_range_tuple[3]-axis_range_tuple[1]*0.25,' LIC',color=colors[4])
plt.text(0,axis_range_tuple[3]-axis_range_tuple[1]*0.30,' ALY',color=colors[5])
plt.text(0,axis_range_tuple[3]-axis_range_tuple[1]*0.35,' LL',color=colors[6])
plt.text(0,axis_range_tuple[3]-axis_range_tuple[1]*0.40,' RN',color=colors[7])
plt.text(0,axis_range_tuple[3]-axis_range_tuple[1]*0.45,' RM',color=colors[8])
for i in range(0,len(xy[0])):
try:
color=colors[int(xy[3][i])]
except Exception as my_ex:
color='black'
plt.plot(xy[0][i], xy[1][i],'ro',markersize=1,color=color)
plt.xlabel(x_axis)
plt.ylabel(y_axis)
plt.axis(axis_range_tuple)
plt.title('MATRIX: '+heading)
f = io.BytesIO()
plt.savefig(f, format='png')
f.seek(0)
data=f.read()
f.close()
plt.close() #otherwise graphs will be overwritten, in next loop
return data
class yumizenp500(astmg.astm_file):
#"yumizon_code":(lis_num,multiplication factor)
yumizon_to_lis={
"MCV":(5,1),
"NEU%":(39,1),
"RDW-CV":(8,1),
"RBC":(2,1),
"WBC":(1,1000),
"PLT":(9,1000),
"MON%":(42,1),
"HGB":(3,1),
"LYM%":(40,1),
"BAS%":(43,1),
"MCH":(6,1),
"MCHC":(7,1),
"HCT":(4,1),
"EOS%":(41,1),
"RbcAlongRes":(22,1),
"PltAlongRes":(23,1),
"LMNEResAbs":(24,1)
}
def mk_sql(self):
con=self.get_link(my_host,my_user,my_pass,my_db)
for each_sample in self.final_data:
msg='sample_id is {}'.format(each_sample[0])
sample_id=each_sample[0]
logging.debug(msg)
            if not sample_id.rstrip(' ').isnumeric():
                logging.debug('sample_id is not a number')
                return False
####main sql edit as per your need####
#reuse sql
prepared_sql='insert into primary_result \
(sample_id,examination_id,result,uniq) \
values \
(%s,%s,%s,%s) \
ON DUPLICATE KEY UPDATE result=%s'
prepared_sql_blob='insert into primary_result_blob \
(sample_id,examination_id,result,uniq) \
values \
(%s,%s,%s,%s) \
ON DUPLICATE KEY UPDATE result=%s'
#56, remark, once only, no uniq value
data_tpl=(
sample_id,\
56,\
'Done on automated Yumizen H500',\
'',\
'Done on automated Yumizen H500'
)
cur=self.run_query(con,prepared_sql,data_tpl)
self.close_cursor(cur)
for each_result in each_sample[1]:
if(each_result[0]=='R'):
msg='Examination: {} --> Result {}'.format(each_result[2],each_result[3])
logging.debug(msg)
examination_name_tuple=each_result[2].split(self.s3)
msg='Examination tuple: {} '.format(examination_name_tuple)
logging.debug(msg)
ex_name=examination_name_tuple[3]
ex_result=each_result[3]
uniq=each_result[11]
uniq_for_M=uniq
msg='(sid,eid,res,uniq)= ({} , {} , {}, {})'.format(sample_id,ex_name,ex_result,uniq)
logging.debug(msg)
try:
if(ex_name in self.yumizon_to_lis):
data_tpl=(
sample_id,\
self.yumizon_to_lis[ex_name][0],\
float(ex_result)*self.yumizon_to_lis[ex_name][1],\
uniq,\
float(ex_result)*self.yumizon_to_lis[ex_name][1]
)
cur=self.run_query(con,prepared_sql,data_tpl)
self.close_cursor(cur)
except Exception as my_ex:
logging.debug(my_ex)
logging.debug('\033[0;31mresult of ('+ex_result+') can not be converted to float for multiplication?\033[0m')
continue
self.close_link(con)
#Main Code###############################
if __name__=='__main__':
#print('__name__ is ',__name__,',so running code')
while True:
m=yumizenp500(inbox,archived)
if(m.get_first_file()):
m.analyse_file()
m.mk_tuple()
m.mk_sql()
m.archive_file()
time.sleep(1)
#break; #useful during debugging
| 30.257813 | 121 | 0.612832 | 1,131 | 7,746 | 3.98939 | 0.289125 | 0.043883 | 0.068262 | 0.023936 | 0.275044 | 0.254211 | 0.249335 | 0.249335 | 0.249335 | 0.249335 | 0 | 0.04215 | 0.243481 | 7,746 | 255 | 122 | 30.376471 | 0.727816 | 0.136458 | 0 | 0.275449 | 0 | 0 | 0.093361 | 0.015161 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035928 | false | 0.011976 | 0.05988 | 0 | 0.143713 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1072ae60e09bd2717a225322efd4ee4212380bc3 | 4,396 | py | Python | new_plot_tra.py | xuyufan936831611/vo_imu | 8a5753384b4a5c08dc83edf718d76a2ac308a298 | [
"MIT"
] | null | null | null | new_plot_tra.py | xuyufan936831611/vo_imu | 8a5753384b4a5c08dc83edf718d76a2ac308a298 | [
"MIT"
] | null | null | null | new_plot_tra.py | xuyufan936831611/vo_imu | 8a5753384b4a5c08dc83edf718d76a2ac308a298 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# Software License Agreement (BSD License)
#
# Copyright (c) 2013, Juergen Sturm, TUM
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of TUM nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# the resulting .ply file can be viewed for example with meshlab
# sudo apt-get install meshlab
"""
This script plots a trajectory into an image sequence.
"""
import numpy
import argparse
import sys
import os
from associate import *
from evaluate import *
from generate_pointcloud import *
from PIL import Image, ImageDraw
focalLength = 525.0
centerX = 319.5
centerY = 239.5
def point(pose, px, py, pz):
"""
Project a 3D point into the camera.
Input:
pose -- camera pose
px,py,pz -- point in global frame
Output:
u,v -- pixel coordinates
"""
p = pose.dot(numpy.matrix([[px], [py], [pz], [1]]))
X = p[0, 0]
Y = p[1, 0]
Z = p[2, 0]
u = X / Z * focalLength + centerX
v = Y / Z * focalLength + centerY
return [u, v]
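# Worked example (illustrative): with an identity pose, a point at (0, 0, 1)
# one metre in front of the camera projects to the principal point,
# point(numpy.eye(4), 0, 0, 1) == [319.5, 239.5]; X and Y are scaled by
# focalLength / Z before the centre offset is added.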
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='''
This script plots a trajectory into an image sequence.
''')
parser.add_argument('image_list', help='input image list (format: timestamp filename)')
parser.add_argument('trajectory_file', help='input trajectory (format: timestamp tx ty tz qx qy qz qw)')
parser.add_argument('out_image', help='file name of the result (format: png)')
args = parser.parse_args()
image_list = read_file_list(args.image_list)
pose_list = read_file_list(args.trajectory_file)
traj = read_trajectory(args.trajectory_file)
matches = associate(image_list, pose_list, 0, 0.02)
    stamps = sorted(image_list.keys())  # dict.keys() is not sortable in Python 3
matches_dict = dict(matches)
for stamp in stamps:
image_file = image_list[stamp][0]
image = Image.open(image_file)
        print("image stamp: %f" % stamp)
        if stamp in matches_dict:
            print("pose stamp: %f" % matches_dict[stamp])
pose = traj[matches_dict[stamp]]
            stamps = sorted(traj.keys())
xy = []
draw = ImageDraw.Draw(image)
size = 0.01
for s in stamps:
p = traj[s]
rel_pose = numpy.dot(numpy.linalg.inv(pose), p)
if rel_pose[2, 3] < 0.01: continue
u, v = point(rel_pose, 0, 0, 0)
if u < 0 or v < 0 or u > 640 or v > 480: continue
draw.line(point(rel_pose, 0, 0, 0) + point(rel_pose, size, 0, 0), fill="#ff0000")
draw.line(point(rel_pose, 0, 0, 0) + point(rel_pose, 0, size, 0), fill="#00ff00")
draw.line(point(rel_pose, 0, 0, 0) + point(rel_pose, 0, 0, size), fill="#0000ff")
del draw
image.save(os.path.splitext(args.out_image)[0] + "-%f.png" % stamp)
| 35.451613 | 109 | 0.643995 | 609 | 4,396 | 4.573071 | 0.405583 | 0.008618 | 0.030162 | 0.028007 | 0.156912 | 0.124955 | 0.119569 | 0.119569 | 0.119569 | 0.087253 | 0 | 0.023486 | 0.263876 | 4,396 | 123 | 110 | 35.739837 | 0.837145 | 0.414013 | 0 | 0.067797 | 0 | 0 | 0.127579 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016949 | false | 0 | 0.135593 | 0 | 0.169492 | 0.033898 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10745f333838cd624bbd1eb5afeccb36529e0167 | 1,557 | py | Python | chapter-9/taxi_modules/feat_gridtensor.py | outerbounds/dsbook | 411b55c2057a3ba1e1d893cde03d6ec97d529969 | [
"Apache-2.0"
] | 27 | 2021-05-29T14:36:34.000Z | 2022-03-22T10:12:40.000Z | chapter-9/taxi_modules/feat_gridtensor.py | saibaldas/dsbook | be6b4670ed33a2001de8f28f6fb4151111cb26ca | [
"Apache-2.0"
] | null | null | null | chapter-9/taxi_modules/feat_gridtensor.py | saibaldas/dsbook | be6b4670ed33a2001de8f28f6fb4151111cb26ca | [
"Apache-2.0"
] | 6 | 2021-05-29T14:36:40.000Z | 2022-03-09T14:57:46.000Z | from metaflow import profile
NUM_HASH_BINS = 10000
PRECISION = 6
class FeatureEncoder():
NAME = 'grid'
FEATURE_LIBRARIES = {'python-geohash': '0.8.5',
'tensorflow-base': '2.6.0'}
CLEAN_FIELDS = ['pickup_latitude', 'pickup_longitude',
'dropoff_latitude', 'dropoff_longitude']
@classmethod
def _coords_to_grid(cls, table):
import geohash
plon = table['pickup_longitude'].to_numpy()
plat = table['pickup_latitude'].to_numpy()
dlon = table['dropoff_longitude'].to_numpy()
dlat = table['dropoff_latitude'].to_numpy()
trips = []
for i in range(len(plat)):
pcode = geohash.encode(plat[i], plon[i], precision=PRECISION)
dcode = geohash.encode(dlat[i], dlon[i], precision=PRECISION)
trips.append((pcode, dcode))
return trips
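    # At PRECISION = 6 a geohash cell is roughly 1.2 km x 0.6 km, so each trip
    # is reduced to a coarse (pickup_cell, dropoff_cell) string pair before the
    # hashing trick below.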
@classmethod
def encode(cls, table):
from tensorflow.keras.layers import Hashing, IntegerLookup
with profile('coordinates to grid'):
grid = cls._coords_to_grid(table)
hashing_trick = Hashing(NUM_HASH_BINS)
multi_hot = IntegerLookup(vocabulary=list(range(NUM_HASH_BINS)),
output_mode='multi_hot',
sparse=True)
with profile('creating tensor'):
tensor = multi_hot(hashing_trick(grid))
return {'tensor': tensor}
@classmethod
def merge(cls, shards):
return {key: [s[key] for s in shards] for key in shards[0]} | 37.97561 | 73 | 0.59666 | 175 | 1,557 | 5.125714 | 0.417143 | 0.031215 | 0.036789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011743 | 0.289017 | 1,557 | 41 | 74 | 37.97561 | 0.798555 | 0 | 0 | 0.081081 | 0 | 0 | 0.141207 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081081 | false | 0 | 0.081081 | 0.027027 | 0.351351 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1074c9b0d171e08aaf0c9852318cf23417d27bfc | 620 | py | Python | poc/pyApp/__init__.py | id-shiv/utillib | fc1186ac9cc505b884ff7cfdeccbea2bddf78d8a | [
"MIT"
] | null | null | null | poc/pyApp/__init__.py | id-shiv/utillib | fc1186ac9cc505b884ff7cfdeccbea2bddf78d8a | [
"MIT"
] | null | null | null | poc/pyApp/__init__.py | id-shiv/utillib | fc1186ac9cc505b884ff7cfdeccbea2bddf78d8a | [
"MIT"
] | null | null | null | import tkinter as tk
from tkinter import filedialog, Text
import os
app = tk.Tk()
def add_file():
file_name = filedialog.askopenfile(initialdir="/", title="Select Directory")
print(file_name)
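    # Note: filedialog.askopenfile returns an opened file object; use
    # filedialog.askopenfilename instead if only the path string is wanted.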
canvas = tk.Canvas(app, height=700, width=700, bg="#263D42")
canvas.pack()
frame = tk.Frame(app, bg="white")
frame.place(relwidth=0.8, relheight=0.8, relx=0.1, rely=0.1)
open_file = tk.Button(frame, text="Open File", padx=10, pady=5, fg="black", bg="orange", command=add_file)
open_file.pack()
run_apps = tk.Button(frame, text="Run Apps", padx=10, pady=5, fg="black", bg="orange")
run_apps.pack()
app.mainloop() | 24.8 | 106 | 0.7 | 103 | 620 | 4.135922 | 0.475728 | 0.056338 | 0.061033 | 0.079812 | 0.122066 | 0.122066 | 0.122066 | 0.122066 | 0 | 0 | 0 | 0.045872 | 0.120968 | 620 | 25 | 107 | 24.8 | 0.73578 | 0 | 0 | 0 | 0 | 0 | 0.109501 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.1875 | 0 | 0.25 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1076727daaaa58526d7fc2ce53c7b46cf7d0f9e7 | 2,751 | py | Python | 2021/19/scan.py | svenaron/aoc | f24c0d89810907f03b4710c2132590cddb298828 | [
"MIT"
] | null | null | null | 2021/19/scan.py | svenaron/aoc | f24c0d89810907f03b4710c2132590cddb298828 | [
"MIT"
] | null | null | null | 2021/19/scan.py | svenaron/aoc | f24c0d89810907f03b4710c2132590cddb298828 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import numpy as np
from io import StringIO
from itertools import permutations
def perms(arr):
for columns in permutations(range(3)):
for x in (1, -1):
for y in (1, -1):
for z in (1, -1):
a = arr[:, columns] * [x, y, z]
a = a[np.lexsort(np.rot90(a))]
for r in range(len(a)):
yield np.roll(a, r, 0)
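# perms enumerates all 48 orientations (6 axis permutations x 8 sign choices,
# i.e. the 24 proper rotations plus their reflections); each oriented copy is
# lexicographically sorted and then row-rolled so that align() can compare
# candidate beacon rows positionally.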
def parse(data):
scans = [np.genfromtxt(StringIO(s), delimiter=',',
dtype=int, skip_header=1)
for s in data.split("\n\n")]
return [s[np.lexsort(np.rot90(s))] for s in scans]
def align(scanners):
remain = list(enumerate(scanners))
done = [remain.pop(0) + (np.array((0, 0, 0)),)]
while remain:
found = False
for ai, a, _ in done:
aset = {tuple(p) for p in a}
for i, (bi, b) in enumerate(remain):
sz = min(len(b), len(a))
for bb in perms(b):
delta = a[:sz] - bb[:sz]
unq, cnt = np.unique(delta, axis=0, return_counts=True)
if max(cnt) < 2:
continue
for j, c in sorted(enumerate(cnt), key=lambda x: x[1]):
offset = unq[j]
aligned = bb + offset
bset = {tuple(p) for p in aligned}
common = aset.intersection(bset)
if len(common) >= 12:
remain.pop(i)
done.append((bi, aligned, offset))
print(f"{len(done)} done, {len(remain)} remain")
found = True
break
if found:
break
if found:
break
if not found:
print("uh oh, found none on entire iteration, giving up")
with open('output', 'w') as f:
f.write(str(remain))
f.write("\n\n")
f.write(str(done))
return None
return done
def solve(scanners):
beacons = set()
positions = list()
for i, scan, pos in scanners:
beacons.update({tuple(p) for p in scan})
positions.append(pos)
p1 = len(beacons)
p2 = max([np.abs(a - b).sum() for a in positions for b in positions])
return p1, p2
if __name__ == "__main__":
with open('sample') as f:
sample = f.read().strip()
scanners = parse(sample)
print(solve(align(scanners)))
with open('input') as f:
data = parse(f.read().strip())
scanners = align(data)
print(solve(align(data)))
| 31.988372 | 76 | 0.457652 | 341 | 2,751 | 3.659824 | 0.366569 | 0.007212 | 0.009615 | 0.024038 | 0.053686 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01623 | 0.417666 | 2,751 | 85 | 77 | 32.364706 | 0.762797 | 0.00727 | 0 | 0.069444 | 0 | 0 | 0.044322 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.041667 | 0 | 0.152778 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1079b66f8c89731f5a85b803957ae43d6fb58650 | 1,880 | py | Python | setup.py | matwey/coniferest | 3189f6b0a9f083bc5a4b6186ad1aec38b0f7c19d | [
"MIT"
] | null | null | null | setup.py | matwey/coniferest | 3189f6b0a9f083bc5a4b6186ad1aec38b0f7c19d | [
"MIT"
] | 20 | 2021-08-03T13:30:55.000Z | 2021-10-19T22:56:08.000Z | setup.py | matwey/coniferest | 3189f6b0a9f083bc5a4b6186ad1aec38b0f7c19d | [
"MIT"
] | 1 | 2022-01-20T14:48:39.000Z | 2022-01-20T14:48:39.000Z | import sys
from pathlib import Path
from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext
from Cython.Build import cythonize
import numpy as np
extra_compile_args = []
extra_link_args = []
# macOS Clang doesn't support OpenMP
if sys.platform != 'darwin':
extra_compile_args.append('-fopenmp')
extra_link_args.append('-fopenmp')
extensions = [Extension("coniferest.calc_mean_paths",
["coniferest/calc_mean_paths.pyx"],
include_dirs=[np.get_include()],
extra_compile_args=extra_compile_args,
extra_link_args=extra_link_args,
)]
def get_readme():
return (Path(__file__).parent / 'README.md').read_text(encoding='utf8')
setup(name='coniferest',
version='0.0.2',
description='Coniferous forests for better machine learning',
long_description=get_readme(),
long_description_content_type='text/markdown',
url='https://github.com/snad-space/coniferest',
author='Vladimir Korolev, SNAD team',
author_email='balodja@gmail.com',
license='MIT',
classifiers=[
'Development Status :: 3 - Alpha',
'License :: OSI Approved :: MIT License',
'Intended Audience :: Science/Research',
'Natural Language :: English',
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX :: Linux',
'Operating System :: MacOS',
'Programming Language :: Python',
'Topic :: Scientific/Engineering'
],
packages=['coniferest', 'coniferest.sklearn'],
package_data={
'': ['*.pxd'],
},
ext_modules=cythonize(extensions),
      install_requires=['numpy', 'scikit-learn', 'matplotlib'],  # 'sklearn' is a deprecated PyPI alias
      cmdclass={
          'build_ext': build_ext
      },
zip_safe=False)
| 30.819672 | 75 | 0.623404 | 196 | 1,880 | 5.765306 | 0.602041 | 0.028319 | 0.056637 | 0.055752 | 0.051327 | 0.051327 | 0 | 0 | 0 | 0 | 0 | 0.003564 | 0.253723 | 1,880 | 60 | 76 | 31.333333 | 0.801853 | 0.018085 | 0 | 0 | 0 | 0 | 0.33044 | 0.042322 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.122449 | 0.020408 | 0.163265 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
107b0fad3e9b3f8de340519a6673b6541e169982 | 1,560 | py | Python | example/sample.py | marrow/contentment | 494ef441acdeb7aefee61ff6295ba202f4a2c79c | [
"MIT"
] | 2 | 2016-08-20T19:51:19.000Z | 2018-07-26T13:59:46.000Z | example/sample.py | marrow/contentment | 494ef441acdeb7aefee61ff6295ba202f4a2c79c | [
"MIT"
] | 11 | 2015-11-12T18:22:02.000Z | 2022-03-11T23:14:32.000Z | example/sample.py | marrow/contentment | 494ef441acdeb7aefee61ff6295ba202f4a2c79c | [
"MIT"
] | 3 | 2015-11-09T09:15:43.000Z | 2016-11-17T01:38:00.000Z | import logging
import logging.config
import pymongo
from web.contentment.taxonomy import Taxonomy
logging.config.dictConfig({
'version': 1,
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'formatter': 'json',
# 'level': 'debug',
'stream': 'ext://sys.stdout',
}
},
'loggers': {
'web': {
'level': 'DEBUG',
'handlers': ['console'],
'propagate': False
},
},
'root': {
'level': 'INFO',
'handlers': ['console']
},
'formatters': {
'json': {
'()': 'web.contentment.util.JSONFormatter',
'format': '%(asctime)s\t%(levelname)s\t%(name)s:%(funcName)s:%(lineno)s %(message)s',
# 'force_keys': ('levelname', 'lineno'),
}
}
})
cli = pymongo.MongoClient()
db = cli.test
db.assets.drop()
assets = db.assets
assets.insert_one({'name': '/', 'path': '/'})
assets.insert_one({'name': 'company'})
assets.insert_one({'name': 'about'})
assets.insert_one({'name': 'careers'})
assets.insert_one({'name': 'services'})
assets.insert_one({'name': 'rita'})
taxonomy = Taxonomy(collection=assets)
from time import time
start = time()
result = taxonomy.named('/').insert(0, taxonomy.named('company'))
duration = (time() - start) * 1000
print("Unattached:", duration, "ms")
__import__('pprint').pprint(result.children.first())
start = time()
result = taxonomy.named('/').insert(0, taxonomy.named('company'))
duration = (time() - start) * 1000
print("Attached:", duration, "ms")
print("taxonomy.named('/').insert(0, taxonomy.named('company'))")
| 23.283582 | 92 | 0.60641 | 169 | 1,560 | 5.532544 | 0.420118 | 0.077005 | 0.096257 | 0.121925 | 0.216043 | 0.216043 | 0.216043 | 0.173262 | 0.173262 | 0.173262 | 0 | 0.009281 | 0.171154 | 1,560 | 66 | 93 | 23.636364 | 0.713844 | 0.035897 | 0 | 0.113208 | 0 | 0.018868 | 0.298 | 0.113333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.113208 | 0 | 0.113208 | 0.075472 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
107b3c2b69167025f895f63a4e7faa30d684efa0 | 989 | pyw | Python | website.pyw | sravanireddy1102/website-blocker | b1978ebe14664c901a75aee759c4d67ccdc2e436 | [
"MIT"
] | 2 | 2021-06-16T13:58:14.000Z | 2021-06-16T13:58:17.000Z | website.pyw | sravanireddy1102/website-blocker | b1978ebe14664c901a75aee759c4d67ccdc2e436 | [
"MIT"
] | null | null | null | website.pyw | sravanireddy1102/website-blocker | b1978ebe14664c901a75aee759c4d67ccdc2e436 | [
"MIT"
] | null | null | null | import time
from datetime import datetime as dt
hosts_path = r'C:\Windows\System32\drivers\etc\hosts'  # raw string so backslashes are never treated as escapes
redirect='127.0.0.1'
websites=['www.instagram.com','www.netflix.com','facebook.com','twitter.com']
while True:
    now = dt.now()
    if dt(now.year, now.month, now.day, 9) < now < dt(now.year, now.month, now.day, 22):
print("working hours :)")
with open(hosts_path,'r+') as file:
content=file.read()
for website in websites:
if website in content:
pass
else:
file.write(redirect+" "+website+"\n")
else:
with open(hosts_path,'r+') as file:
content=file.readlines()
            file.seek(0)  # go back to the beginning of the file
for line in content:
if not any(website in line for website in websites):
file.write(line)
file.truncate()
time.sleep(10)
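# Note (illustrative, not from the original script): editing the hosts file
# requires administrator rights on Windows, and browsers may cache DNS, so a
# newly blocked site can stay reachable until the cache is flushed
# (e.g. `ipconfig /flushdns`).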
| 31.903226 | 115 | 0.54095 | 130 | 989 | 4.092308 | 0.476923 | 0.065789 | 0.026316 | 0.041353 | 0.240602 | 0.240602 | 0.240602 | 0.240602 | 0.240602 | 0.109023 | 0 | 0.020772 | 0.318504 | 989 | 30 | 116 | 32.966667 | 0.768546 | 0.034378 | 0 | 0.166667 | 0 | 0 | 0.134929 | 0.040261 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.041667 | 0.083333 | 0 | 0.083333 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
107caa4f2a266f72974c18a5f1b011344f327b33 | 1,845 | py | Python | src/decoder.py | aaronzguan/Unaligned-Phoneme-Sequence-Prediction | 760653155e24227abc02dffc2c8cb4b3bccbf62d | [
"MIT"
] | null | null | null | src/decoder.py | aaronzguan/Unaligned-Phoneme-Sequence-Prediction | 760653155e24227abc02dffc2c8cb4b3bccbf62d | [
"MIT"
] | null | null | null | src/decoder.py | aaronzguan/Unaligned-Phoneme-Sequence-Prediction | 760653155e24227abc02dffc2c8cb4b3bccbf62d | [
"MIT"
] | null | null | null | import torch
import Levenshtein as Lev
from ctcdecode import CTCBeamDecoder
class BeamCTCDecoder():
def __init__(self, PHONEME_MAP, blank_index=0, beam_width=100):
# Add the blank to the phoneme_map as the first element
if PHONEME_MAP[blank_index] != ' ':
PHONEME_MAP.insert(0, ' ')
# Define the int_to_char dictionary
        self.int_to_char = dict(enumerate(PHONEME_MAP))
self._decoder = CTCBeamDecoder(PHONEME_MAP, blank_id=blank_index, beam_width=beam_width, log_probs_input=True)
def decode(self, probs, sizes=None):
probs, sizes = probs.cpu(), sizes.cpu()
out, _, _, seq_lens = self._decoder.decode(probs, sizes)
# out: shape (batch_size, beam_width, seq_len)
# seq_lens: shape (batch_size, beam_width)
# The best sequences are indexed 0 in the beam_width dimension.
strings = self.convert_to_strings(out[:, 0, :], seq_lens[:, 0])
return strings
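    # Minimal usage sketch (hypothetical tensors; assumes `log_probs` has shape
    # (batch, max_seq_len, n_labels) and `lens` holds per-utterance lengths):
    #
    #   decoder = BeamCTCDecoder(PHONEME_MAP, blank_index=0, beam_width=100)
    #   transcripts = decoder.decode(log_probs, lens)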
def convert_to_strings(self, out, seq_len):
"""
:param out: (batch_size, sequence_length)
:param seq_len: (batch_size)
:return:
"""
out = out.cpu()
results = []
for b, utt in enumerate(out):
size = seq_len[b]
if size > 0:
# Map each integer to the char using the int_to_char dictionary
# Only get the original len and remove all the padding elements
transcript = ''.join(map(lambda x: self.int_to_char[x.item()], utt[:size]))
else:
transcript = ''
transcript = transcript.replace(' ', '')
results.append(transcript)
return results
def Lev_dist(self, s1, s2):
s1, s2 = s1.replace(' ', ''), s2.replace(' ', '')
return Lev.distance(s1, s2) | 40.108696 | 118 | 0.600542 | 238 | 1,845 | 4.441176 | 0.386555 | 0.056764 | 0.034059 | 0.037843 | 0.085147 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012947 | 0.288347 | 1,845 | 46 | 119 | 40.108696 | 0.792079 | 0.238482 | 0 | 0 | 0 | 0 | 0.003676 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137931 | false | 0 | 0.103448 | 0 | 0.37931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
107cd544f0ba33041df74c1f95ae47bccc03eecd | 2,829 | py | Python | arpym_template/estimation/flexible_probabilities.py | xshi19/arpym-template | 9cfb9cb37effb5b7bfb4e704537f4c3b7087c9fd | [
"BSD-2-Clause"
] | null | null | null | arpym_template/estimation/flexible_probabilities.py | xshi19/arpym-template | 9cfb9cb37effb5b7bfb4e704537f4c3b7087c9fd | [
"BSD-2-Clause"
] | null | null | null | arpym_template/estimation/flexible_probabilities.py | xshi19/arpym-template | 9cfb9cb37effb5b7bfb4e704537f4c3b7087c9fd | [
"BSD-2-Clause"
] | null | null | null | from collections import namedtuple
import pandas as pd
import numpy as np
from scipy.stats import norm
class FlexibleProbabilities(object):
"""
Flexible Probabilities
"""
def __init__(self, data):
self.x = data
self.p = np.ones(len(data))/len(data)
def shape(self):
return self.x.shape
def mean(self):
"""
Sample mean with flexible probabilities
"""
return np.dot(self.p, self.x)
def cov(self):
"""
Sample covariance with flexible probabilities
"""
x_ = self.x - np.mean(self.x, axis=0)
return np.dot(np.multiply(np.transpose(x_), self.p), x_)
def equal_weight(self):
"""
Equally weighted probabilities
"""
self.p = np.ones(len(self.x))/len(self.x)
def exponential_decay(self, tau):
"""
        Exponential decay probabilities
"""
t_ = len(self.x)
        self.p = np.exp(-np.log(2) / tau * (t_ - np.arange(0, t_)))
        self.p = self.p / np.sum(self.p)
def smooth_kernel(self, z=None, z_star=None, h=None, gamma=2):
"""
Smooth kernel probabilities
"""
if z is None:
z = self.x[:,0]
if z_star is None:
z_star = np.mean(z)
if h is None:
h = np.std(z)
self.p = np.exp(-(np.abs(z - z_star)/h)**gamma)
        self.p = self.p / np.sum(self.p)
def effective_scenarios(self, Type=None):
"""
        This function computes the Effective Number of Scenarios (ENS) of the
        Flexible Probabilities via different types of functions.
        INPUTS
        Type : [namedtuple] entropy type: 'Exp' (exponential of the entropy,
               the default) or 'GenExp' (generalized exponential of the entropy)
        OUTPUTS
        ens : [scalar] Effective Number of Scenarios
        NOTE
        The exponential of the entropy is used by default. To use the
        generalized exponential of the entropy instead, set Type.Entropy to
        anything other than 'Exp' and supply the scalar exponent Type.g.
For details on the exercise, see here: https://www.arpm.co/lab/redirect.php?permalink=EBEffectNbScenFun
"""
if Type is None:
Type = namedtuple('type',['Entropy'])
Type.Entropy = 'Exp'
if Type.Entropy != 'Exp':
Type.Entropy = 'GenExp'
        # compute the ENS
p_ = self.p
if Type.Entropy == 'Exp':
            p_[p_ == 0] = 10 ** (-250)  # avoid log(0) in ens computation
ens = np.exp(-p_@np.log(p_.T))
else:
ens = np.sum(p_ ** Type.g) ** (-1 / (Type.g - 1))
return ens
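# Minimal usage sketch (illustrative data, not part of the original module;
# assumes NumPy >= 1.17 for default_rng). For equal weights the ENS
# exp(-sum_t p_t * ln(p_t)) equals the number of observations.
if __name__ == "__main__":
    x = np.random.default_rng(0).normal(size=(100, 3))
    fp = FlexibleProbabilities(x)
    fp.exponential_decay(tau=25)
    print(fp.mean())                 # probability-weighted sample mean
    print(fp.effective_scenarios())  # < 100 because the weights are uneven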
| 27.201923 | 111 | 0.525627 | 353 | 2,829 | 4.144476 | 0.33711 | 0.044429 | 0.028708 | 0.047163 | 0.102529 | 0.031442 | 0.031442 | 0.031442 | 0.031442 | 0 | 0 | 0.007773 | 0.363379 | 2,829 | 103 | 112 | 27.466019 | 0.804553 | 0.297985 | 0 | 0.046512 | 0 | 0 | 0.015205 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.186047 | false | 0 | 0.093023 | 0.023256 | 0.395349 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
107d7d2063f32a6f0f9dc010454644debca098ba | 25,011 | py | Python | astred/aligned.py | BramVanroy/astred | 450e4d071319ea02768db9fd0b2c6e12b037676c | [
"Apache-2.0"
] | 10 | 2020-03-25T10:23:49.000Z | 2021-12-18T02:35:37.000Z | astred/aligned.py | BramVanroy/astred | 450e4d071319ea02768db9fd0b2c6e12b037676c | [
"Apache-2.0"
] | 2 | 2021-10-07T09:56:55.000Z | 2022-03-01T10:57:24.000Z | astred/aligned.py | BramVanroy/astred | 450e4d071319ea02768db9fd0b2c6e12b037676c | [
"Apache-2.0"
] | null | null | null | from __future__ import annotations
import operator
from copy import deepcopy
from dataclasses import dataclass, field
from itertools import combinations
from typing import ClassVar, Dict, List, Optional, Set, Tuple, Union
from .aligner import Aligner
from .enum import EditOperation, Side, SpanType
from .pairs import IdxPair
from .sentence import Sentence
from .span import NullSpan, Span, SpanPair
from .tree import AstredConfig, Tree
from .utils import cached_property, pair_combs, rebase_to_idxs, unique_list
from .word import WordPair, spanpair_to_wordpairs
@dataclass(eq=False)
class AlignedSentences:
"""'AlignedSentences' is the main entry point for using this library. The focus lies on syntactic measures between
a source and target sentence. 'AlignedSentences' takes as input at least a source and target :class:`Sentence`,
and word alignments for that sentence pair.
"""
src: Sentence
tgt: Sentence
word_aligns: Union[List[Union[IdxPair, Tuple[int, int]]], str] = field(default=None)
aligner: Optional[Aligner] = field(default=None, repr=False)
allow_mwg: bool = field(default=True)
make_copies: bool = field(default=False)
aligned_words: List[WordPair] = field(default_factory=list, init=False, repr=False)
word_cross: int = field(default=0, init=False)
aligned_seq_spans: List[SpanPair] = field(default_factory=list, init=False, repr=False)
seq_aligns: List[IdxPair] = field(default_factory=list, init=False, repr=False)
seq_cross: int = field(default=0, init=False)
aligned_sacr_spans: List[SpanPair] = field(default_factory=list, init=False, repr=False)
sacr_aligns: List[IdxPair] = field(default_factory=list, init=False, repr=False)
sacr_cross: int = field(default=0, init=False)
ted_config: AstredConfig = field(default=AstredConfig(), repr=False)
ted: int = field(default=0, init=False)
ted_ops: List[Tuple[Tree]] = field(default_factory=list, repr=False, init=False)
# Keep a class variable for the aligner
_aligner: ClassVar[Aligner] = field(default=None, repr=False)
def __getitem__(self, idx):
return self.aligned_words[idx]
def __iter__(self):
return iter(self.aligned_words)
def __len__(self):
return len(self.aligned_words)
def __repr__(self):
return (
f"{self.__class__.__name__}(src={self.src.text}, tgt={self.tgt.text},"
f" aligns={[(i.src, i.tgt) for i in self.word_aligns]})"
)
def __post_init__(self):
if any(w.is_null for w in self.src.words + self.tgt.words):
raise ValueError(
"Your sentence(s) cannot contain NULL before passing it to an AlignedSentences"
" constructor because that means it has already been aligned and metrics have already"
" been calculate for all words involved."
)
# Copy input so that the given Sentence is not modified in place
if self.make_copies:
self.src = deepcopy(self.src)
self.tgt = deepcopy(self.tgt)
self.init_word_aligns()
self.attach_self_to_sentences()
# NULL is added to the front of the sentences here
self.attach_sentences()
self.aligned_words = [WordPair(self.src[align.src], self.tgt[align.tgt]) for align in self.word_aligns]
self.attach_pairs(self.aligned_words)
self.set_cross(self.aligned_words, "word_cross")
# SEQUENCES
self.create_seq_spans()
self.attach_pairs(self.aligned_seq_spans)
self.set_cross(self.aligned_seq_spans, "seq_cross")
if self.src.tree and self.tgt.tree:
# SACR
self.create_sacr_spans()
self.attach_pairs(self.aligned_sacr_spans)
self.set_cross(self.aligned_sacr_spans, "sacr_cross")
# TED
self.set_connected()
self.set_ted()
@cached_property
def giza_word_aligns(self):
return " ".join([f"{p.src-1}-{p.tgt-1}" for p in self.word_aligns if p.src and p.tgt])
@property
def idxs_d(self) -> Dict[str, Set[int]]:
"""Extracts the unique source and target word indices from the word alignments.
        :return: a dictionary with "src" and "tgt" keys holding the respective word indices
"""
src, tgt = zip(*self.word_aligns)
return {"src": src, "tgt": tgt}
@property
def no_null_sacr_pairs(self):
"""Removes any NULL alignments (-1 to exclude MWG from comparison. Is included in list, though)
:return:
"""
return [pair for pair in self.aligned_sacr_spans if not any(p.is_null for p in pair[:-1])]
@property
def no_null_seq_pairs(self):
"""Removes any NULL alignments (-1 to exclude MWG from comparison. Is included in list, though)
:return:
"""
return [pair for pair in self.aligned_seq_spans if not any(p.is_null for p in pair[:-1])]
@property
def no_null_word_pairs(self):
"""Removes any NULL alignments
:return:
"""
return [pair for pair in self.aligned_words if not any(p.is_null for p in pair)]
def num_changes(self, attr="deprel"):
num_changes = self.src.num_changes(attr)
assert num_changes == self.tgt.num_changes(attr)
return num_changes
@staticmethod
def attach_pairs(pairs: List[Union[SpanPair, WordPair]]):
"""Attach the "src" and "tgt" items in a list of pairs to each other, effectively adding them to
their "aligned" attribute. This can be done both for aligned :class:`WordPair` and :class:`SpanPair`.
:param pairs: a list of :class:`WordPair`s or :class:`SpanPair`s
"""
for pair in pairs:
pair.src.add_aligned(pair.tgt)
pair.tgt.add_aligned(pair.src)
@staticmethod
def check_mwg_and_external_align(pairs: List[WordPair], src_ids: Set[int], tgt_ids: Set[int]) -> Tuple[bool, bool]:
"""For a given list of :class:`WordPair`, and a set of its ``src_ids`` and ``tgt_ids``, check whether this
group is a multi-word expression (MWG) and whether any of the involved words is aligned with words outside of
this group. A multi-word expression here is defined as a group of more than one source and target words, and
for which all words in the source group are aligned with all words in the target group, and vice-versa.
:param pairs: a list of :class:`WordPair`
:param src_ids: a set containing all the source indices (int) in ``pairs``
        :param tgt_ids: a set containing all the target indices (int) in ``pairs``
:return: a tuple of booleans indicating: (i) whether this list of pairs is a MWG; (ii) whether any of
the involved words is aligned to words that are not part of any of the involved :class:`WordPair`s.
"""
n_src = len(unique_list([p.src for p in pairs]))
n_tgt = len(unique_list([p.tgt for p in pairs]))
# MWG must consist of more than one source and target word
# Later we then check whether each word is aligned with all other words in the group
is_mwg = n_src > 1 and n_tgt > 1
has_external_align = False
for wordpair in pairs:
aligned_to_src = set([w.id for w in wordpair.src.aligned])
aligned_to_tgt = set([w.id for w in wordpair.tgt.aligned])
            # Check whether each source word is attached to all target words
# If it is set to False once, don't try to change it.
if is_mwg and (aligned_to_src != tgt_ids or aligned_to_tgt != src_ids):
is_mwg = False
# Check whether the aligned indices of all words are a subset of the actual idxs.
# If it is not a subset (and it contains more idxs than the actual idxs), then that
# means that that word is aligned with a word outside of this pair.
if not aligned_to_src.issubset(tgt_ids) or not aligned_to_tgt.issubset(src_ids):
has_external_align = True
            # Break because these properties cannot change anymore.
if not is_mwg and has_external_align:
break
return is_mwg, has_external_align
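    # Illustrative example (hypothetical ids): with pairs covering src words {1, 2}
    # and tgt words {5, 6}, the group is a MWG only if words 1 and 2 are each
    # aligned to exactly {5, 6} (and vice versa); if word 2 is additionally aligned
    # to tgt word 9, the group has an external alignment.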
def init_word_aligns(self):
if not self.word_aligns:
if not self.aligner:
cls = self.__class__
if not cls._aligner:
cls._aligner = Aligner()
self.word_aligns = [IdxPair(*val) for val in cls._aligner.align_from_objs(self.src, self.tgt)]
else:
self.word_aligns = [IdxPair(*val) for val in self.aligner.align_from_objs(self.src, self.tgt)]
elif isinstance(self.word_aligns, str):
try:
self.word_aligns = [IdxPair(*map(int, align.split("-"))) for align in self.word_aligns.split(" ")]
except ValueError as exc:
raise ValueError(
"The passed alignments could not be parsed successfully. Make sure that they are"
" written in the correct format as pairs of src_idx-tgt_idx"
) from exc
elif not isinstance(self.word_aligns, IdxPair):
self.word_aligns = [IdxPair(*val) for val in self.word_aligns]
# +1 because 0-index is reserved for NULL
self.word_aligns = [IdxPair(p.src + 1, p.tgt + 1) for p in self.word_aligns]
self.add_null_aligns()
self.word_aligns.sort(key=operator.attrgetter("src", "tgt"))
@staticmethod
def has_internal_cross(pairs: List):
for pair1, pair2 in combinations(pairs, 2):
if pair2.tgt.id < pair1.tgt.id:
return True
return False
@staticmethod
def idxs_are_consecutive(idxs: List[int]):
return sorted(idxs) == list(range(min(idxs), max(idxs) + 1))
def add_null_aligns(self):
# Fill in 0 idx for words that are not aligned
# The second list comprehension will already take into account the added idxs of the first one
# That ensures that the NULL words are not added twice.
self.word_aligns += [IdxPair(idx, 0) for idx in range(len(self.src) + 1) if idx not in self.idxs_d["src"]]
self.word_aligns += [IdxPair(0, idx) for idx in range(len(self.tgt) + 1) if idx not in self.idxs_d["tgt"]]
def attach_sentences(self):
# This setter adds NULL at the front of the sentence
self.tgt.aligned_sentence = self.src
self.src.side = Side.SRC
self.src.aligned_sentence = self.tgt
self.tgt.side = Side.TGT
def attach_self_to_sentences(self):
self.src.aligned_sentences = self
self.tgt.aligned_sentences = self
def is_valid_sequence(self, pairs, src_ids, tgt_ids):
# Check if:
# - src and tgt idxs are consecutive and the group has no external alignments
# - if there are internal crosses, only allow this group if it's MWG and MWG is allowed
# - if no internal cross at this stage, it is a valid group
is_mwg, has_external_align = self.check_mwg_and_external_align(pairs, src_ids, tgt_ids)
idxs_consec = self.idxs_are_consecutive(src_ids) and self.idxs_are_consecutive(tgt_ids)
is_valid = False
if idxs_consec and not has_external_align:
# If there is an internal cross, this can only be a valid group if it is a MWG
if self.has_internal_cross(pairs):
is_valid = self.allow_mwg and is_mwg
else:
# When we got this far, it must be a valid group:
# - src and tgt ids are consecutive
# - there are no external alignments
# - there are no internal crosses
is_valid = True
return is_valid, is_mwg
def create_sacr_spans(self):
def is_valid_sacr_pair(pair):
            _is_valid = (pair.src.is_valid_subtree and pair.tgt.is_valid_subtree) or (self.allow_mwg and spanpair.is_mwg)
_is_valid = _is_valid or (pair.src.is_null and pair.tgt.is_null)
return _is_valid
src_word_groups = []
tgt_word_groups = []
sacr_spans: List[Tuple[int, int, bool]] = []
found: Dict[str, Set[int]] = {"src": set(), "tgt": set()}
def add_found(spair, s_ids, t_ids):
found["src"].update(s_ids)
found["tgt"].update(t_ids)
s_words, t_words = map(list, spair[:-1]) # Exclude mwg
src_word_groups.append(s_words)
tgt_word_groups.append(t_words)
sacr_spans.append((min(s_ids), min(t_ids), spair.is_mwg))
# This should probably be written more DRY-y
for spanpair in self.aligned_seq_spans:
src_ids = set([w.id for w in spanpair.src])
tgt_ids = set([w.id for w in spanpair.tgt])
# Does this span pair contain just one source and one target word?
is_singles = len(spanpair.src) == 1 and len(spanpair.tgt) == 1
# If any of the src or tgt ids have already been found as a good match, continue
# because a word can only ever belong to one group
# single pairs should always be accepted but are dealt with separately in "create_spans"
# Always continue if this pair is a singles
if not is_singles and (not src_ids.isdisjoint(found["src"]) or not tgt_ids.isdisjoint(found["tgt"])):
continue
if is_singles or is_valid_sacr_pair(spanpair):
add_found(spanpair, src_ids, tgt_ids)
else:
wpairs = spanpair_to_wordpairs(spanpair)
for pairs in pair_combs(wpairs, min_length=2):
src_ids, tgt_ids = map(set, zip(*[(p.src.id, p.tgt.id) for p in pairs]))
tmp_is_singles = len(src_ids) == 1 and len(tgt_ids) == 1
if not is_singles and (
not src_ids.isdisjoint(found["src"]) or not tgt_ids.isdisjoint(found["tgt"])
):
continue
# First check if this new group is a valid sequence group
is_valid_seq, is_mwg = self.is_valid_sequence(pairs, src_ids, tgt_ids)
if not is_valid_seq:
continue
src_words, tgt_words = map(list, zip(*pairs))
tmp_src = Span(
id=1, words=unique_list(src_words), span_type=SpanType.SACR, attach=False, is_mwg=is_mwg
)
tmp_tgt = Span(
id=1, words=unique_list(tgt_words), span_type=SpanType.SACR, attach=False, is_mwg=is_mwg
)
tmp_spanpair = SpanPair(tmp_src, tmp_tgt, is_mwg)
if tmp_is_singles or is_valid_sacr_pair(tmp_spanpair):
add_found(tmp_spanpair, src_ids, tgt_ids)
self.create_spans(sacr_spans, src_word_groups, tgt_word_groups, found, span_type=SpanType.SACR)
def create_seq_spans(self):
src_word_groups = []
tgt_word_groups = []
seq_spans = []
found = {"src": set(), "tgt": set()}
# pair_combs never returns groups that contain any NULL item
for pairs in pair_combs(self.aligned_words, min_length=2):
src_ids, tgt_ids = map(set, zip(*[(p.src.id, p.tgt.id) for p in pairs]))
# If any of the src or tgt ids have already been found as a good match, continue
# because a word can only ever belong to one group
# single pairs should always be accepted
if not src_ids.isdisjoint(found["src"]) or not tgt_ids.isdisjoint(found["tgt"]):
continue
is_valid, is_mwg = self.is_valid_sequence(pairs, src_ids, tgt_ids)
if is_valid:
found["src"].update(src_ids)
found["tgt"].update(tgt_ids)
src_words, tgt_words = map(list, zip(*pairs))
src_word_groups.append(src_words)
tgt_word_groups.append(tgt_words)
seq_spans.append((min(src_ids), min(tgt_ids), is_mwg))
self.create_spans(seq_spans, src_word_groups, tgt_word_groups, found, span_type=SpanType.SEQ)
def create_spans(self, spans, src_word_groups, tgt_word_groups, found, span_type: SpanType):
# Deal with single pairs separately because unlike other spans, they can be connected with
# multiple other spans. This includes NULL
# `pair_combs` starts with the largest groups, so if the current `pairs` only consists
# of one pair, then that must be a valid pair because it did not belong in other groups
# This also takes care of pairs with NULL because they are always just one pair
# (see self.pair_combs).
# Single pairs with the same src or tgt can appear multiple times (so don't add to "found"):
# when an item is aligned with multiple items and they do not belong to a larger group together,
        # then those separate alignments will be separate groups.
for p in self.aligned_words:
if (p.src.id in found["src"] and p.tgt.id in found["tgt"]) and not (p.src.is_null or p.tgt.is_null):
continue
src_word_groups.append([p.src])
tgt_word_groups.append([p.tgt])
spans.append((p.src.id, p.tgt.id, False))
spans = sorted(set(spans), key=operator.itemgetter(0, 1))
src_idxs, tgt_idxs, mwgs = zip(*spans)
spans = list(zip(rebase_to_idxs(src_idxs), rebase_to_idxs(tgt_idxs), mwgs))
# Convert src/tgt words in groups of words so that they appear in the same order as in the original sentence
# So the first item will always be a Null word
src_word_groups = sorted(unique_list(src_word_groups), key=lambda l: min([w.id for w in l]))
tgt_word_groups = sorted(unique_list(tgt_word_groups), key=lambda l: min([w.id for w in l]))
# Convert the groups into actual spans. First items are the NULL spans
# This means that just like Null words, Null spans have id=0
src_spans = [
NullSpan(null_word=words[0], span_type=span_type)
if words[0].is_null
else Span(id=idx, words=words, span_type=span_type, doc=self.src)
for idx, words in enumerate(src_word_groups)
]
tgt_spans = [
NullSpan(null_word=words[0], span_type=span_type)
if words[0].is_null
else Span(id=idx, words=words, span_type=span_type, doc=self.tgt)
for idx, words in enumerate(tgt_word_groups)
]
# Attach spans to original sentences
setattr(self.src, f"{span_type}_spans", src_spans)
setattr(self.tgt, f"{span_type}_spans", tgt_spans)
# Set MWG
for src_idx, tgt_idx, mwg in spans:
src_spans[src_idx].is_mwg = mwg
tgt_spans[tgt_idx].is_mwg = mwg
# Create span alignment pairs
setattr(
self,
f"aligned_{span_type}_spans",
[SpanPair(src_spans[src_idx], tgt_spans[tgt_idx], mwg) for src_idx, tgt_idx, mwg in spans],
)
setattr(
self,
f"{span_type}_aligns",
[IdxPair(src_idx, tgt_idx) for src_idx, tgt_idx, _ in spans],
)
def set_connected(self, attr="deprel"):
def get_all_connected(start):
done = set()
def recursive_connected(item):
item_repr = f"{item.doc.side}-{item.id}"
if item_repr in done:
return []
done.add(item_repr)
connects = []
for i in item.aligned:
i_connects = recursive_connected(i)
if i_connects:
connects.extend(i_connects)
return item.aligned + connects
return sorted(unique_list(recursive_connected(start)), key=operator.attrgetter("id"))
def get_connected_repr(group):
src_words = [_w for _w in group if _w.side == Side.SRC]
return "|".join(
[
f"{src.id}.{getattr(src, attr)}:"
+ ",".join([f"{tgt.id}.{getattr(tgt, attr)}" for tgt in src.aligned if not tgt.is_null])
for src in src_words
if not src.is_null
]
)
connected_set = set()
# For every source and target word, find all connected words
# To be as efficient as possible, we keep track of items that we already found.
# This makes sense, because an item can only be found once because _all_ connected items
# are taken into account.
for word in self.src.words + self.tgt.words:
word_repr = f"{word.doc.side}-{word.id}"
if word_repr in connected_set:
continue
connected_group = get_all_connected(word)
connected_repr = get_connected_repr(connected_group)
# Iterate over all the words that are connected to this word
for connected_word in connected_group:
c_repr = f"{connected_word.doc.side}-{connected_word.id}"
if c_repr in connected_set:
continue
connected_word.connected_repr = connected_repr
# Set c.connected to all connected words that we found EXCLUDING c itself
for w in connected_group:
w_repr = f"{w.doc.side}-{w.id}"
if c_repr != w_repr:
connected_word.connected.append(w)
connected_set.add(c_repr)
def set_cross(self, aligned, attr: str):
# Given a set of aligned pairs, set a specific cross specified by `attr`
for pair1, pair2 in combinations(aligned, 2):
all_items = [pair1.src, pair1.tgt, pair2.src, pair2.tgt]
# NULL alignments cannot cause crosses
if any(item.is_null for item in all_items):
continue
if pair2.tgt.id < pair1.tgt.id:
setattr(self, attr, getattr(self, attr) + 1)
pair1.src.aligned_cross[pair1.tgt.id] += 1
pair1.tgt.aligned_cross[pair1.src.id] += 1
pair2.src.aligned_cross[pair2.tgt.id] += 1
pair2.tgt.aligned_cross[pair2.src.id] += 1
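    # Illustrative example (hypothetical ids): word alignments 1-2 and 2-1 cross,
    # because the pair with the later source word maps to an earlier target word;
    # a monotone alignment such as 1-1 and 2-2 contributes no crosses.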
def set_ted(self):
# Also sets edit operation for a tree's node. This edit operation is the edit operation that is necessary
        # to change this node into its aligned node, e.g. by matching (~ same connected_repr), renaming (-> other connected_repr),
# or deleting (-> None). As such, no nodes can have "INSERTION" because we do not have None nodes. That does
# not mean of course that a tree cannot have insertion operations. It just means that we have no place to put
# them because we do not have None nodes.
# TED between an aligned src and tgt sentence are symmetric. However, that is not the same as
# summing up the astred_cost of each word in the sentence! TED for AlignedSentences counts all operations,
# including insertions. But a word can never have the "insertion" operation
# (because insertion is from None -> a Word). Hence, insertion costs will be missing when counting the differences
# on the word level. DO NOT DO THAT.
self.ted, self.ted_ops = self.src.tree.get_distance(self.tgt.tree, config=self.ted_config)
ted_tgt, _ = self.tgt.tree.get_distance(self.src.tree, config=self.ted_config)
assert self.ted == ted_tgt
cost = 0
for src_match, tgt_match in self.ted_ops:
# Node repr as used by the AstredConfig to calculate TED:
# By default, the representation is attr="connected_repr" to calculate ASTrED
# But a custom config can be used as well, e.g. to calculate regular TED, with attr="deprel"
src_repr = getattr(src_match.node, self.ted_config.attr) if src_match else None
tgt_repr = getattr(tgt_match.node, self.ted_config.attr) if tgt_match else None
if src_repr == tgt_repr:
src_match.astred_op = EditOperation.MATCH
tgt_match.astred_op = EditOperation.MATCH
elif src_repr is None:
tgt_match.astred_op = EditOperation.DELETION
cost += self.ted_config.costs[EditOperation.DELETION]
elif tgt_repr is None:
src_match.astred_op = EditOperation.DELETION
cost += self.ted_config.costs[EditOperation.DELETION]
else:
src_match.astred_op = EditOperation.RENAME
tgt_match.astred_op = EditOperation.RENAME
cost += self.ted_config.costs[EditOperation.RENAME]
assert self.ted == cost
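# Minimal usage sketch (illustrative; the exact Sentence construction depends on
# the parser configured elsewhere in this package, so treat this as a sketch):
#
#   src = Sentence.from_text("I like eating apples", "en")
#   tgt = Sentence.from_text("Ik eet graag appels", "nl")
#   aligned = AlignedSentences(src, tgt, word_aligns="0-0 1-2 2-1 3-3")
#   print(aligned.word_cross, aligned.seq_cross, aligned.sacr_cross, aligned.ted)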
| 46.575419 | 128 | 0.624805 | 3,560 | 25,011 | 4.221067 | 0.119944 | 0.009583 | 0.01677 | 0.01118 | 0.291409 | 0.247887 | 0.206628 | 0.175018 | 0.143542 | 0.134092 | 0 | 0.003717 | 0.289992 | 25,011 | 536 | 129 | 46.662313 | 0.842494 | 0.258446 | 0 | 0.122807 | 0 | 0.002924 | 0.046012 | 0.011503 | 0 | 0 | 0 | 0 | 0.008772 | 1 | 0.090643 | false | 0.005848 | 0.040936 | 0.017544 | 0.245614 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
108063978bb8c56f873b9f2e566ce0b467ce45f9 | 738 | py | Python | acoustic/data/format.py | DavisDevasia/acoustid-server | b4b2acbc198b3d0497df04c2294d9f030133ede5 | [
"MIT"
] | null | null | null | acoustic/data/format.py | DavisDevasia/acoustid-server | b4b2acbc198b3d0497df04c2294d9f030133ede5 | [
"MIT"
] | null | null | null | acoustic/data/format.py | DavisDevasia/acoustid-server | b4b2acbc198b3d0497df04c2294d9f030133ede5 | [
"MIT"
] | null | null | null | # Copyright (C) 2011 Lukas Lalinsky
# Distributed under the MIT license, see the LICENSE file for details.
import logging
from sqlalchemy import sql
from acoustic import tables as schema
logger = logging.getLogger(__name__)
def find_or_insert_format(conn, name):
"""
Find a format in the database, create it if it doesn't exist yet.
"""
with conn.begin():
query = sql.select([schema.format.c.id], schema.format.c.name == name)
        format_id = conn.execute(query).scalar()  # avoid shadowing the id() builtin
        if format_id is None:
            insert_stmt = schema.format.insert().values(name=name)
            format_id = conn.execute(insert_stmt).inserted_primary_key[0]
            logger.info("Inserted format %d with name %s", format_id, name)
        return format_id
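# Usage sketch (assumes an existing SQLAlchemy engine; illustrative only):
#
#   with engine.connect() as conn:
#       flac_id = find_or_insert_format(conn, 'FLAC')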
| 30.75 | 78 | 0.672087 | 106 | 738 | 4.575472 | 0.575472 | 0.074227 | 0.053608 | 0.057732 | 0.086598 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008757 | 0.226287 | 738 | 23 | 79 | 32.086957 | 0.84063 | 0.228997 | 0 | 0 | 0 | 0 | 0.056261 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.230769 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1082813367f7ffc44e27b57c424df1faf6530e4c | 2,806 | py | Python | code/cli.py | KyriakosP188/Blockchain_Project | 1da5127db96dcc783ca9cd8e8789987e8cbf104f | [
"MIT"
] | null | null | null | code/cli.py | KyriakosP188/Blockchain_Project | 1da5127db96dcc783ca9cd8e8789987e8cbf104f | [
"MIT"
] | null | null | null | code/cli.py | KyriakosP188/Blockchain_Project | 1da5127db96dcc783ca9cd8e8789987e8cbf104f | [
"MIT"
] | null | null | null | from transaction import Transaction
import pyfiglet
import requests
import pickle
import cmd
class Noobcash(cmd.Cmd):
intro = '\nWelcome to the noobcash client. Type help or ? to list commands.\n'
prompt = 'noobcash> '
def preloop(self):
print(pyfiglet.figlet_format('noobcash'))
self.port = input('Enter the port of your wallet: ')
self.ip = '127.0.0.1'
def do_t(self, args):
't <recipient_id> <amount>\nSend the specified amount of NBC coins to the wallet of the node with the given ID.'
args = args.split(' ')
if len(args) != 2:
print('Please provide <recipient_id> and <amount> to create the transaction.')
return
try:
response = requests.post('http://' + self.ip + ':' + self.port + '/create_new_transaction',
data=pickle.dumps((int(args[0]), int(args[1]))))
            if response.status_code == 200:
                print(f'Transaction of {args[1]} NBC coins to node{args[0]} completed successfully.')
            elif response.status_code in (402, 403, 404):
                # the server reports the failure reason in the response body
                print(response.json()['message'])
            else:
                print('Transaction failed. Check recipient ID or the system may be down.')
except:
print('Connection failed.')
def do_view(self, _):
'View the transactions of the current last block of the blockchain.'
try:
response = requests.get('http://' + self.ip + ':' + self.port + '/view_last_transactions')
transactions = pickle.loads(response._content)
for i in range(len(transactions)):
print('Transaction', i)
print('Sender Address:')
print(transactions[i].sender_address)
print('Recipient Address:')
print(transactions[i].receiver_address)
print('Amount:', transactions[i].amount)
print('ID:', transactions[i].transaction_id)
if i != len(transactions) - 1:
print('')
except:
print('Connection failed.')
def do_balance(self, _):
'Check your wallet balance.'
try:
response = requests.get('http://' + self.ip + ':' + self.port + '/get_balance')
balance = pickle.loads(response._content)
print(f'You have {balance} NBC coins in your wallet.')
except:
print('Connection failed.')
def do_exit(self, _):
'Exit the noobcash client.'
return True
if __name__ == "__main__":
Noobcash().cmdloop() | 40.085714 | 120 | 0.564861 | 314 | 2,806 | 4.94586 | 0.356688 | 0.020605 | 0.046362 | 0.027044 | 0.18416 | 0.172569 | 0.110753 | 0.110753 | 0.051513 | 0 | 0 | 0.012461 | 0.313614 | 2,806 | 70 | 121 | 40.085714 | 0.793873 | 0.081611 | 0 | 0.190476 | 0 | 0.015873 | 0.294264 | 0.016388 | 0 | 0 | 0 | 0 | 0 | 1 | 0.079365 | false | 0 | 0.079365 | 0 | 0.238095 | 0.301587 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10843daa3295a3a7afe2aa13c4baf2cae6d3cf1c | 9,700 | py | Python | boid.py | kiwi-fruitiwi/flocking | ddbb28e30ea3b7735a055254fc5c256210e56e25 | [
"MIT"
] | 1 | 2021-09-29T00:38:09.000Z | 2021-09-29T00:38:09.000Z | boid.py | kiwi-fruitiwi/flocking | ddbb28e30ea3b7735a055254fc5c256210e56e25 | [
"MIT"
] | null | null | null | boid.py | kiwi-fruitiwi/flocking | ddbb28e30ea3b7735a055254fc5c256210e56e25 | [
"MIT"
] | null | null | null |
# an automated bird! kind of like a bird and droid
class Boid:
def __init__(self):
self.position = PVector(random(width), random(height))
self.velocity = PVector().random2D().setMag(random(2.5, 4.5))
self.acceleration = PVector()
self.max_force = random(0.15, 0.25)
self.max_speed = random(2.5, 4.5)
self.ACC_VECTOR_SCALE = 100
self.r = 16
self.bee_img = loadImage("bee.png")
# update the boid's position, velocity, and acceleration
def update(self):
self.velocity.add(self.acceleration)
self.position.add(self.velocity)
self.velocity.limit(self.max_speed)
self.acceleration.mult(0)
# draw the acceleration vector
# TODO: add arrow
def show_acc_vector(self):
pushMatrix()
translate(self.position.x, self.position.y)
stroke(200, 100, 100, 50)
strokeWeight(2)
line(0, 0, self.ACC_VECTOR_SCALE*self.acceleration.x, self.ACC_VECTOR_SCALE*self.acceleration.y)
noStroke()
popMatrix()
# display as a bee!
def show_bee(self):
pushMatrix()
translate(self.position.x, self.position.y)
# rotate(self.velocity.heading())
image(self.bee_img, 0, 0)
popMatrix()
def show(self):
# self.show_acc_vector()
# rotate the object to point where its velocity vector points
pushMatrix()
translate(self.position.x, self.position.y)
# draw vel vector
VEL_VECTOR_SCALE = 10
stroke(0, 100, 100, 50)
strokeWeight(1)
# velocity vector isn't useful because vehicles rotate in that direction
# line(0, 0, VEL_VECTOR_SCALE*self.vel.x, VEL_VECTOR_SCALE*self.vel.y)
noStroke()
# rotate
rotate(self.velocity.heading())
# this is where we draw our object. we're going to try for a 9S Hackbot
# https://puu.sh/I3E19/9d32002c25.png
r = self.r
T = 0.4 # how far away is the tip away from the origin?
C = 0.2 # what is the radius of the inner circle?
B = 0.3 # how far away is the butt away from the origin?
fill(0, 0, 100, 75)
stroke(0, 0, 0, 100)
strokeWeight(1)
beginShape()
vertex(r, 0) # front tip
vertex(0, r*T) # top
vertex(-r*T, 0) # butt
vertex(0, -r*T) # bottom
vertex(r, 0) # front tip
endShape()
fill(0, 0, 0, 90)
circle(0, 0, r*C)
stroke(0, 0, 0, 100)
strokeWeight(1)
line(0, 0, -r*T, 0) # line to the butt
x = (r*T)/(sqrt(3)+T)
line(0, 0, x, sqrt(3)*x) # line to the top 120 degrees
line(0, 0, x, -sqrt(3)*x) # line to the bottom 120 degrees
# two little squares in the back
rectMode(CENTER)
fill(0, 0, 100, 50)
strokeWeight(1)
square(r*-B, r*T, r*0.2)
square(r*-B, -r*T, r*0.2)
rectMode(CORNER)
popMatrix()
# draw the velocity vector? unnecessary because we rotate to that direction
def show_simple(self):
# self.show_acc_vector()
# strokeWeight(10)
stroke(0, 0, 90)
# point(self.position.x, self.position.y)
fill(0, 0, 90, 30)
circle(self.position.x, self.position.y, 10)
# wrap off the edges
def edges(self):
if self.position.x > width:
self.position.x = 0
elif self.position.x < 0:
self.position.x = width
if self.position.y > height:
self.position.y = 0
elif self.position.y < 0:
self.position.y = height
def apply_force(self, force):
# F=ma, but we assure m=1 so our force vector becomes an acceleration vector
self.acceleration.add(force)
# applies flock behaviors to all boids
def flock(self, boids):
alignment = self.align(boids, 40)
self.acceleration.add(alignment)
cohesion = self.cohere(boids, 40)
self.acceleration.add(cohesion)
separation = self.separate(boids, 30).mult(1.5)
self.acceleration.add(separation)
# steering force = desired velocity - current velocity, as per Craig Reynolds's
# sbfac paper. Desired velocity should be a vector with direction toward where we
# want to go. Our steering force acts like a correction; it corrects for our current
# velocity and steers us to cancel that and toward our target
#
# returns a force vector steering this boid to its target's position
def seek_target(self, target_position):
# a vector pointing from us to our target, which we will treat as a velocity
# instead of the position it actually is
desired_velocity = PVector.sub(target_position, self.position)
return self.seek_velocity(desired_velocity)
# this needs to be called with a desired velocity
# as per Craig Reynolds's paper, steering force = desired velocity - current velocity
# seek calls this with PVector.sub(target_position, self.position)
#
# returns a force vector steering this boid toward its provided desired velocity
def seek_velocity(self, desired_velocity):
# set this velocity to our max speed
desired_velocity.setMag(self.max_speed)
# steering force = desired velocity - current velocity
# note that we are taking a velocity vector and are going to treat it as an
# acceleration vector :o
steering = PVector.sub(desired_velocity, self.velocity)
steering.limit(self.max_force)
return steering
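    # Worked example (illustrative numbers): if the desired velocity is (4, 0) at
    # max_speed and the current velocity is (0, 4), the raw steering force is
    # (4, -4); limiting it to max_force only trims its magnitude, so the boid
    # turns gradually instead of snapping to the new heading.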
def evade(self, target_position):
        return self.seek_target(target_position).mult(-1)
# try to steer toward the same heading as neighboring boids within a perception radius
# this implementation uses seek!
# returns a zero force PVector if there are no other boids within a radius
def align(self, boids, perception_radius):
total = 0 # total number of neighboring boids we use for calculating the avg heading
average = PVector() # this vector will hold the average heading of neighboring boids
# find the average heading of neighboring boids
for boid in boids:
distance = PVector.dist(self.position, boid.position)
# only calculate for other boids (not us!) within the radius
if boid != self and distance < perception_radius:
total += 1
average.add(boid.velocity) # velocity contains heading information
if total > 0:
average.div(total)
return self.seek_velocity(average)
else:
return PVector()
# steer to move toward the average location of nearby flockmates. This is cohesion
def cohere(self, boids, perception_radius):
total = 0
average = PVector(0, 0) # this is our desired velocity
# find the average of the positions of all the boids
for boid in boids:
distance = PVector.dist(self.position, boid.position)
# only calculate within a desired perception radius
if boid != self and distance < perception_radius:
total += 1 # count how many are within our radius to divide later for average
# in self.align, we added the other boids' velocities. here we add position!
average.add(boid.position)
steering_force = average
if total > 0:
steering_force.div(total) # this is our desired velocity!
# note that we subtract our position from the average position first;
# this is the main difference from self.align!
return self.seek_target(steering_force)
        # note that if we didn't find anything, we return the zero vector
return PVector(0, 0)
# steer to avoid crowding local flockmates
def separate(self, boids, perception_radius):
total = 0
average = PVector(0, 0) # this is our desired velocity
# find the average of the positions of all the boids
for boid in boids:
distance = PVector.dist(self.position, boid.position)
# only calculate within a desired perception radius
            # the `0 < distance` guard skips exactly overlapping boids and avoids a zero division
            if boid != self and 0 < distance < perception_radius:
                difference = PVector.sub(self.position, boid.position)
                # we want this difference to be inversely proportional to the distance between
                # self and other; the further away it is, the lower the magnitude we want
                difference.div(distance)
total += 1 # count how many are within our radius to divide later for average
# in self.align, we added the other boids' velocities. here we add position!
average.add(difference)
steering_force = average
if total > 0:
steering_force.div(total) # this is our desired velocity!
return self.seek_velocity(steering_force)
        # note that if we didn't find anything, we return the zero vector
return PVector(0, 0)
| 37.596899 | 104 | 0.588144 | 1,246 | 9,700 | 4.530498 | 0.210273 | 0.05527 | 0.020726 | 0.015058 | 0.389725 | 0.35465 | 0.267671 | 0.246413 | 0.233835 | 0.216475 | 0 | 0.02717 | 0.335979 | 9,700 | 257 | 105 | 37.743191 | 0.849247 | 0.382371 | 0 | 0.326087 | 0 | 0 | 0.001185 | 0 | 0 | 0 | 0 | 0.003891 | 0 | 1 | 0.108696 | false | 0 | 0 | 0.007246 | 0.181159 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10877dbab5b85428b227a62825db3cc87801dc10 | 480 | py | Python | 4. 01.07.2021/1. Secret Messages. New character.py | AntonVasko/CodeClub-2021-SUMMER | 14a80168bb7c2eb3c0c157d6d5b7630c05decb31 | [
"CC0-1.0"
] | null | null | null | 4. 01.07.2021/1. Secret Messages. New character.py | AntonVasko/CodeClub-2021-SUMMER | 14a80168bb7c2eb3c0c157d6d5b7630c05decb31 | [
"CC0-1.0"
] | null | null | null | 4. 01.07.2021/1. Secret Messages. New character.py | AntonVasko/CodeClub-2021-SUMMER | 14a80168bb7c2eb3c0c157d6d5b7630c05decb31 | [
"CC0-1.0"
] | null | null | null | #Secret Messages. New character
alphabet = 'abcdefghijklmnopqrstuvwxyz'
key = int(input('Please input key '))
character = input('Please enter a character ')
position = alphabet.find(character)
print('Position of a character ', character, ' is ', position)
newPosition = (position + key) % 26
print('New position of a character ', character, ' is ', newPosition)
newCharacter = alphabet[newPosition]
print('New character of a new position of ', newPosition, ' is ', newCharacter)
| 40 | 79 | 0.741667 | 57 | 480 | 6.245614 | 0.350877 | 0.08427 | 0.061798 | 0.11236 | 0.174157 | 0.174157 | 0 | 0 | 0 | 0 | 0 | 0.004854 | 0.141667 | 480 | 11 | 80 | 43.636364 | 0.859223 | 0.0625 | 0 | 0 | 0 | 0 | 0.371938 | 0.057906 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10895ebadbbfd8dbfaa26bda3163ee7b826faea8 | 2,023 | py | Python | algorithms/floyd_warshall.py | SteliosKliafas/shortest_path_algorithms | 0f28973bce7d53feeee202424b448b5007b5df68 | [
"MIT"
] | null | null | null | algorithms/floyd_warshall.py | SteliosKliafas/shortest_path_algorithms | 0f28973bce7d53feeee202424b448b5007b5df68 | [
"MIT"
] | null | null | null | algorithms/floyd_warshall.py | SteliosKliafas/shortest_path_algorithms | 0f28973bce7d53feeee202424b448b5007b5df68 | [
"MIT"
] | null | null | null | import numpy as np
def floyd_warshall(matrix):
vertices = len(matrix)
fw_distance_matrix = matrix.copy() # make a copy of matrix, (if there is no distance keep the default ones)
fw_distance_matrix[np.isnan(fw_distance_matrix)] = np.inf # fill indirect paths as well in fw_distance_matrix
path_matrix = np.zeros((vertices, vertices)) # create the path matrix initially filled with 0's
for i in range(vertices):
for j in range(vertices):
path_matrix[i, j] = i # replace each line with the corresponding vertex
for k in range(vertices):
for i in range(vertices):
for j in range(vertices):
if i != j:
if fw_distance_matrix[i][j] > fw_distance_matrix[i][k] + fw_distance_matrix[k][j]: # if our default value is larger replace it
fw_distance_matrix[i][j] = fw_distance_matrix[i][k] + fw_distance_matrix[k][j] # update shortest distance from i to j
path_matrix[i, j] = path_matrix[k, j] # update the optimal previous node
else:
fw_distance_matrix[i][j] = 0 # distances from the same node equal to 0
# print("Floyd-Warshall distances matrix: \n")
# print(fw_distance_matrix)
# print("\nPath matrix: \n")
# print(path_matrix)
path = reconstruct_path(path_matrix, len(matrix) - 1) # reconstruct the path to the destination node
print("Floyd-Warshall Shortest Path: ", path, "\nCost of Shortest Path: ", fw_distance_matrix[0][len(fw_distance_matrix[0])-1])
return path
def reconstruct_path(path_matrix, destination, path=None):
    if path is None:  # avoid the mutable-default-argument pitfall between calls
        path = []
    source = 0
    destination = int(destination)
    if source == destination:  # return path if destination is reached
        path += [source]
        shortest_path = list(reversed(path))
        return shortest_path
    else:
        path += [destination]  # add current node
        # pass `path` explicitly so the accumulated nodes survive the recursion
        return reconstruct_path(path_matrix, path_matrix[source, destination], path)
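# Minimal usage sketch (illustrative 3-node graph; np.nan marks a missing edge):
if __name__ == "__main__":
    graph = np.array([[0.0, 4.0, np.nan],
                      [np.nan, 0.0, 1.0],
                      [2.0, np.nan, 0.0]])
    floyd_warshall(graph)  # expected: shortest path [0, 1, 2] with cost 5.0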
| 48.166667 | 147 | 0.654474 | 280 | 2,023 | 4.575 | 0.282143 | 0.10929 | 0.174863 | 0.066354 | 0.157689 | 0.143638 | 0.143638 | 0.143638 | 0.143638 | 0.143638 | 0 | 0.005284 | 0.251607 | 2,023 | 41 | 148 | 49.341463 | 0.840819 | 0.30697 | 0 | 0.193548 | 0 | 0 | 0.039711 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.032258 | 0 | 0.193548 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
108a33d19a84feea4427ab50f511d6682ebec54e | 11,577 | py | Python | ozone-framework-python-server/tests/stacks/test_stacks_model.py | aamduka/ozone | 3fdbf232f5ea70661204a632e45310ca9d374973 | [
"Apache-2.0"
] | 6 | 2020-02-21T22:06:31.000Z | 2020-12-08T10:48:07.000Z | ozone-framework-python-server/tests/stacks/test_stacks_model.py | aamduka/ozone | 3fdbf232f5ea70661204a632e45310ca9d374973 | [
"Apache-2.0"
] | 12 | 2019-12-26T17:38:40.000Z | 2022-02-10T14:15:55.000Z | ozone-framework-python-server/tests/stacks/test_stacks_model.py | aamduka/ozone | 3fdbf232f5ea70661204a632e45310ca9d374973 | [
"Apache-2.0"
] | 4 | 2019-09-20T01:20:33.000Z | 2020-09-05T01:15:51.000Z | import json
from django.test import TransactionTestCase
from people.models import Person, PersonWidgetDefinition
from domain_mappings.models import RelationshipType, MappingType, DomainMapping
from dashboards.models import Dashboard
from stacks.models import Stack, StackGroups
from owf_groups.models import OwfGroup
from widgets.models import WidgetDefinition
create_stack_payload = {
'name': 'test stack 1',
'description': 'test description 1'
}
create_stack_payload2 = {
'name': 'test stack share',
'description': 'test description 1'
}
dashboard1_payload = {
'name': 'test dash 1',
'description': 'description for test dash 1',
'type': '',
'locked': '',
'layout_config': '{\"backgroundWidgets\":[],\"panels\":[],\"tree\":null}'
}
dashboard2_payload = {
'name': 'test dash 2',
'description': 'description for test dash 2',
'type': '',
'locked': '',
'layout_config': '{\"backgroundWidgets\":[],\"panels\":[],\"tree\":null}'
}
dashboard3_payload = {
'name': 'test dash 3',
'description': 'description for test dash 3',
'type': '',
'locked': '',
'layout_config': '{\"backgroundWidgets\":[],\"panels\":[],\"tree\":null}'
}
class StacksModelTests(TransactionTestCase):
fixtures = ['resources/fixtures/default_data.json', ]
def setUp(self):
self.admin_user = Person.objects.get(pk=1)
self.regular_user = Person.objects.get(pk=2)
self.stack = Stack.create(self.regular_user, create_stack_payload)
self.group = OwfGroup.objects.create(name="Test Group For Stack Tests")
self.group.add_user(self.admin_user)
self.group.add_user(self.regular_user)
# set all users in test group requires_sync to false
self.group.people.all().update(requires_sync=False)
def test_user_can_create_stack(self):
created_stack_id = self.stack.id
created_stack = Stack.objects.get(pk=created_stack_id)
self.assertTrue(created_stack.stack_context)
# check that default group got created and assigned to the stack
default_stack_group = created_stack.default_group
self.assertIsNotNone(default_stack_group)
self.assertEqual(default_stack_group.stack_default, True)
self.assertEqual(default_stack_group.automatic, False)
# check that the requesting user got added to the default group
self.assertIsNotNone(default_stack_group.people.get(pk=self.regular_user.id))
# check that the owner of the stack is the user
self.assertEqual(created_stack.owner.id, self.regular_user.id)
# check that a group dashboard got created
group_dashboard = Dashboard.objects.get(stack=created_stack_id, user=None)
self.assertIsNotNone(group_dashboard)
self.assertEqual(group_dashboard.name, created_stack.name)
# check that a personal dashboard got created
user_dashboard = Dashboard.objects.get(stack=created_stack_id, user=self.regular_user)
self.assertIsNotNone(user_dashboard)
self.assertEqual(user_dashboard.name, group_dashboard.name)
# check that the default group owns dashboard domain mapping get created
group_dashboard_domain_mapping = DomainMapping.objects.get(
src_id=default_stack_group.id,
src_type=MappingType.group,
relationship_type=RelationshipType.owns,
dest_id=group_dashboard.id,
dest_type=MappingType.dashboard
)
self.assertIsNotNone(group_dashboard_domain_mapping)
# check that the personal dash is a cloneOf group dash domain mapping get created
user_dashboard_domain_mapping = DomainMapping.objects.get(
src_id=user_dashboard.id,
src_type=MappingType.dashboard,
relationship_type=RelationshipType.cloneOf,
dest_id=group_dashboard.id,
dest_type=MappingType.dashboard
)
self.assertIsNotNone(user_dashboard_domain_mapping)
def test_add_group_to_stack(self):
instance = self.stack.add_group(self.group)
self.assertTrue(isinstance(instance, StackGroups))
self.assertEqual(instance.stack, self.stack)
self.assertEqual(instance.group, self.group)
        # ensure every user in the group now has requires_sync set to True
        self.assertTrue(all(self.group.people.values_list('requires_sync', flat=True)))
for user in self.group.people.all():
self.assertTrue(user.requires_sync)
def test_share_stack(self):
# data setup
widget1 = WidgetDefinition.objects.create(
visible=True,
image_url_medium='image_url_medium',
image_url_small='image_url_small',
singleton=False,
width=100,
height=100,
widget_url='widget url',
display_name='test widget 1'
)
widget2 = WidgetDefinition.objects.create(
visible=True,
image_url_medium='image_url_medium',
image_url_small='image_url_small',
singleton=False,
width=100,
height=100,
widget_url='widget url',
display_name='test widget 2'
)
widget3 = WidgetDefinition.objects.create(
visible=True,
image_url_medium='image_url_medium',
image_url_small='image_url_small',
singleton=False,
width=100,
height=100,
widget_url='widget url',
display_name='test widget 3'
)
user_widget1 = PersonWidgetDefinition.objects.create(
person=self.regular_user,
widget_definition=widget1
)
user_widget2 = PersonWidgetDefinition.objects.create(
person=self.regular_user,
widget_definition=widget2
)
user_widget3 = PersonWidgetDefinition.objects.create(
person=self.regular_user,
widget_definition=widget3
)
group_dash1, user_dash1 = self.stack.add_dashboard(self.regular_user, dashboard1_payload)
group_dash2, user_dash2 = self.stack.add_dashboard(self.regular_user, dashboard2_payload)
layout_config = {
"tree": {
"direction": "row",
"first": "02d98075-2fd8-42f0-8e35-f24cd88d8856",
"second": "b84f9fb1-e825-40b8-92bb-61937f9cd98f"
},
"panels": [{
"id": "02d98075-2fd8-42f0-8e35-f24cd88d8856",
"title": "Test Fit Panel",
"type": "fit",
"widgets": [{
"id": "ce14a7e5-e815-4759-b5a8-46f345edffc6",
"userWidgetId": user_widget1.id
}]
}, {
"id": "b84f9fb1-e825-40b8-92bb-61937f9cd98f",
"title": "Test Accordion Panel",
"type": "accordion",
"widgets": [{
"id": "e71ec8c6-f9e4-4258-a8cf-b348d7e91296",
"userWidgetId": user_widget2.id
}, {
"id": "d74106d3-8eb3-41e1-8e2a-8785be3a49fd",
"userWidgetId": user_widget3.id
}],
"collapsed": []
}],
"backgroundWidgets": []
}
user_dash1.locked = True
user_dash1.layout_config = json.dumps(layout_config)
user_dash1.save()
user_dash2.marked_for_deletion = True
user_dash2.save()
# method under test
self.stack.share()
# check that dashboards got deleted if they were marked for deletion
group_dash2_mappings = DomainMapping.objects.filter(
dest_type=MappingType.dashboard,
dest_id=group_dash2.id
)
self.assertFalse(group_dash2_mappings.exists())
self.assertFalse(Dashboard.objects.filter(pk=user_dash2.id).exists())
# check that the group dashboard got updated with the owner's dashboard
group_dashboard = Dashboard.objects.get(pk=group_dash1.id)
user_dashboard = Dashboard.objects.get(pk=user_dash1.id)
self.assertEqual(group_dashboard.name, user_dashboard.name)
self.assertEqual(group_dashboard.description, user_dashboard.description)
self.assertEqual(group_dashboard.type, user_dashboard.type)
self.assertEqual(group_dashboard.locked, user_dashboard.locked)
self.assertEqual(group_dashboard.layout_config, user_dashboard.layout_config)
# check that new widgets from the dashboard got added to the stack
widgets_to_stack_mapping = DomainMapping.objects.filter(
src_id=self.stack.default_group.id,
src_type=MappingType.group,
relationship_type=RelationshipType.owns,
dest_type=MappingType.widget
)
self.assertEqual(widgets_to_stack_mapping.count(), 3)
def test_user_can_restore_stack(self):
# data setup
stack = Stack.create(self.admin_user, create_stack_payload2)
group_dash1, user_dash1 = stack.add_dashboard(self.admin_user, dashboard1_payload)
group_dash2, user_dash2 = stack.add_dashboard(self.admin_user, dashboard2_payload)
# add user to stack and sync user
stack.default_group.add_user(self.regular_user)
self.regular_user.sync_dashboards()
        # check that the user has 3 dashboards that are part of this stack
user_dashboards = Dashboard.objects.filter(user=self.regular_user, stack=stack)
self.assertEqual(user_dashboards.count(), 3)
# make modifications to stack's dashboards
layout_config = {
"tree": {
"direction": "row",
"first": "02d98075-2fd8-42f0-8e35-f24cd88d8893",
},
"panels": [{
"id": "02d98075-2fd8-42f0-8e35-f24cd88d8856",
"title": "Test Fit Panel",
"type": "fit",
"widgets": []
}],
"backgroundWidgets": []
}
user_dash1.layout_config = json.dumps(layout_config)
user_dash1.save()
name_update = 'dashboard name updated'
user_dash2.name = name_update
user_dash2.save()
        # add a new dashboard after user sync has occurred to ensure the user gets a copy of it on restore
stack.add_dashboard(self.admin_user, dashboard3_payload)
stack.share()
# restore stack as a regular user
stack.restore(self.regular_user)
user_dashboards = Dashboard.objects.filter(user=self.regular_user, stack=stack)
# check that this user has updated dashboards
shared_user_dashboard_1 = user_dashboards.get(name=dashboard1_payload['name'])
shared_user_dashboard_2_exists = user_dashboards.filter(name=name_update).exists()
shared_user_dashboard_3_exists = user_dashboards.filter(name=dashboard3_payload['name']).exists()
self.assertEqual(shared_user_dashboard_1.layout_config, user_dash1.layout_config)
self.assertTrue(shared_user_dashboard_2_exists)
# confirm user received a copy of a dashboard added to stack without the user being synced
self.assertTrue(shared_user_dashboard_3_exists)
| 40.197917 | 106 | 0.629956 | 1,257 | 11,577 | 5.587112 | 0.159109 | 0.033319 | 0.034173 | 0.016232 | 0.415919 | 0.322797 | 0.295458 | 0.260715 | 0.223836 | 0.178556 | 0 | 0.033489 | 0.277792 | 11,577 | 287 | 107 | 40.337979 | 0.806482 | 0.099076 | 0 | 0.331839 | 0 | 0 | 0.132115 | 0.051581 | 0 | 0 | 0 | 0 | 0.130045 | 1 | 0.022422 | false | 0 | 0.035874 | 0 | 0.067265 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
108a6ab127244ab679e61b92b2865a4ff1c5d5c6 | 279 | py | Python | 1197.py | cravo-e-canela/URI-Online-Judge | 69d1d77d1760e75ff80cc5de84ec1e70f6424bd1 | [
"MIT"
] | 1 | 2020-12-13T21:30:36.000Z | 2020-12-13T21:30:36.000Z | 1197.py | cravo-e-canela/URI-Online-Judge | 69d1d77d1760e75ff80cc5de84ec1e70f6424bd1 | [
"MIT"
] | null | null | null | 1197.py | cravo-e-canela/URI-Online-Judge | 69d1d77d1760e75ff80cc5de84ec1e70f6424bd1 | [
"MIT"
] | null | null | null | # URI Online Judge 1197: read integer pairs until EOF, then print each total displacement
deslocamento = []
while True:
    try:
        aux = input().split()
        # each line contributes first_value * (second_value * 2) to the displacement
        calculo = (int(aux[1]) * 2) * int(aux[0])
        deslocamento.append(calculo)
    except EOFError:
        break
for i in range(len(deslocamento)):
    print(deslocamento[i])
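# Sample run (hypothetical input, derived from the formula above): the lines
# "3 2" and "5 1" would print 12 and 10, since each pair yields first * (second * 2).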
| 16.411765 | 49 | 0.555556 | 36 | 279 | 4.305556 | 0.666667 | 0.077419 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.290323 | 279 | 16 | 50 | 17.4375 | 0.752525 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
108afb7a52a3aff616dd551a25f765ab7ded8ad5 | 3,043 | py | Python | perceptual_quality/pyramids/laplacian_test.py | google-research/perceptual-quality | 478157a5457c3b335e55de8fb2a4b779fe385143 | [
"Apache-2.0"
] | 30 | 2020-12-17T10:35:17.000Z | 2022-03-20T12:24:58.000Z | perceptual_quality/pyramids/laplacian_test.py | google-research/perceptual-quality | 478157a5457c3b335e55de8fb2a4b779fe385143 | [
"Apache-2.0"
] | 1 | 2021-01-31T12:40:36.000Z | 2021-02-18T19:21:45.000Z | perceptual_quality/pyramids/laplacian_test.py | google-research/perceptual-quality | 478157a5457c3b335e55de8fb2a4b779fe385143 | [
"Apache-2.0"
] | 5 | 2021-01-30T13:04:48.000Z | 2022-01-16T12:08:02.000Z | # Copyright 2021 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for Laplacian pyramid."""
from absl.testing import parameterized
from perceptual_quality.pyramids import laplacian
import tensorflow as tf
class LaplacianTest(tf.test.TestCase, parameterized.TestCase):
@parameterized.parameters(-1, 0)
def test_invalid_num_levels_fails(self, num_levels):
with self.assertRaises(ValueError):
laplacian.LaplacianPyramid(num_levels=num_levels)
def test_invalid_data_format_fails(self):
with self.assertRaises(ValueError):
laplacian.LaplacianPyramid(data_format=3)
@parameterized.parameters("channels_first", "channels_last")
def test_invalid_shape_fails(self, data_format):
pyramid = laplacian.LaplacianPyramid(data_format=data_format)
with self.assertRaises(ValueError):
pyramid(tf.zeros([16]))
@parameterized.parameters(1, 2, 3)
def test_number_and_shape_of_scales_match_channels_first(self, num_levels):
pyramid = laplacian.LaplacianPyramid(
num_levels=num_levels, data_format="channels_first")
image = tf.zeros((3, 32, 16))
subbands = pyramid(image)
self.assertLen(subbands, num_levels)
expected_shapes = [(3, 32, 16), (3, 16, 8), (3, 8, 4)]
for subband, shape in zip(subbands, expected_shapes):
self.assertEqual(subband.shape, shape)
@parameterized.parameters(1, 2)
def test_number_and_shape_of_scales_match_channels_last(self, num_levels):
pyramid = laplacian.LaplacianPyramid(
num_levels=num_levels, data_format="channels_last")
image = tf.zeros((1, 16, 16, 2))
subbands = pyramid(image)
self.assertLen(subbands, num_levels)
expected_shapes = [(1, 16, 16, 2), (1, 8, 8, 2)]
for subband, shape in zip(subbands, expected_shapes):
self.assertEqual(subband.shape, shape)
@parameterized.parameters(1, 2, 3)
def test_number_and_shape_of_scales_match_valid(self, num_levels):
pyramid = laplacian.LaplacianPyramid(
num_levels=num_levels, padding="valid", data_format="channels_last")
image = tf.zeros((48, 64))
subbands = pyramid(image)
expected_shapes = {
1: [(48, 64)],
2: [(40, 56), (22, 30)],
3: [(40, 56), (14, 22), (9, 13)],
}[num_levels]
self.assertLen(subbands, num_levels)
for subband, shape in zip(subbands, expected_shapes):
self.assertEqual(subband.shape, shape)
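# Usage sketch (constructor arguments mirror those exercised in the tests above;
# other defaults may differ):
#
#   pyramid = laplacian.LaplacianPyramid(num_levels=3, data_format="channels_last")
#   subbands = pyramid(tf.zeros((1, 64, 64, 3)))  # list of 3 progressively smaller subbands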
if __name__ == "__main__":
tf.test.main()
| 38.518987 | 80 | 0.705882 | 401 | 3,043 | 5.164589 | 0.326683 | 0.073877 | 0.046354 | 0.065669 | 0.472236 | 0.45775 | 0.395944 | 0.371801 | 0.371801 | 0.347658 | 0 | 0.032902 | 0.161025 | 3,043 | 78 | 81 | 39.012821 | 0.7783 | 0.222806 | 0 | 0.384615 | 0 | 0 | 0.034101 | 0 | 0 | 0 | 0 | 0 | 0.173077 | 1 | 0.115385 | false | 0 | 0.057692 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
108d00474562381df14caf24b90dc68a132f8541 | 14,194 | py | Python | Python - Track Controller/MCHE201_ControlApp/app.py | DocVaughan/MCHE201---Intro-to-Eng-Design | 383258d155fa2178b87988356120d04d6da15506 | [
"BSD-3-Clause"
] | 9 | 2016-09-22T20:35:52.000Z | 2021-03-21T18:45:20.000Z | Python - Track Controller/MCHE201_ControlApp/app.py | DocVaughan/MCHE201---Intro-to-Eng-Design | 383258d155fa2178b87988356120d04d6da15506 | [
"BSD-3-Clause"
] | null | null | null | Python - Track Controller/MCHE201_ControlApp/app.py | DocVaughan/MCHE201---Intro-to-Eng-Design | 383258d155fa2178b87988356120d04d6da15506 | [
"BSD-3-Clause"
] | 3 | 2015-02-03T20:11:35.000Z | 2022-03-30T03:06:34.000Z | import time
import logging
import numpy as np
import serial
import threading
from threading import Thread
from flask import Flask, render_template, session, request
from flask_socketio import SocketIO, emit, join_room, leave_room, \
close_room, rooms, disconnect
logging.basicConfig(level=logging.DEBUG,
format='[%(levelname)s] (%(threadName)-10s) %(message)s',
)
app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)
thread = None
start = False
ON_RASPPI = True
HARDWARE_CONNECTED = True
ROUND_DURATION = 30.0
@app.route('/')
def full():
return render_template('index.html')
@app.route('/sections')
def sections():
return render_template('sections.html')
@socketio.on('my event', namespace='/MCHE201')
def test_message(message):
session['receive_count'] = session.get('receive_count', 0) + 1
emit('my response',
{'data': message['data'], 'count': session['receive_count']})
@socketio.on('my broadcast event', namespace='/MCHE201')
def test_broadcast_message(message):
global start
session['receive_count'] = session.get('receive_count', 0) + 1
logging.debug('Message data = {}'.format(message['data']))
if message['data'] == 1111:
logging.debug('Message data = {}'.format(message['data']))
with lock:
start = True
elif message['data'] == 0:
logging.debug('Message data = {}'.format(message['data']))
with lock:
start = False
# @socketio.on('join', namespace='/MCHE201')
# def join(message):
# join_room(message['room'])
# session['receive_count'] = session.get('receive_count', 0) + 1
# emit('my response',
# {'data': 'In rooms: ' + ', '.join(request.namespace.rooms),
# 'count': session['receive_count']})
#
#
# @socketio.on('leave', namespace='/MCHE201')
# def leave(message):
# leave_room(message['room'])
# session['receive_count'] = session.get('receive_count', 0) + 1
# emit('my response',
# {'data': 'In rooms: ' + ', '.join(request.namespace.rooms),
# 'count': session['receive_count']})
#
#
# @socketio.on('close room', namespace='/MCHE201')
# def close(message):
# session['receive_count'] = session.get('receive_count', 0) + 1
# emit('my response', {'data': 'Room ' + message['room'] + ' is closing.',
# 'count': session['receive_count']},
# room=message['room'])
# close_room(message['room'])
#
#
# @socketio.on('my room event', namespace='/MCHE201')
# def send_room_message(message):
# session['receive_count'] = session.get('receive_count', 0) + 1
# emit('my response',
# {'data': message['data'], 'count': session['receive_count']},
# room=message['room'])
@socketio.on('disconnect request', namespace='/MCHE201')
def disconnect_request():
session['receive_count'] = session.get('receive_count', 0) + 1
emit('my response',
{'data': 'Disconnected!', 'count': session['receive_count']})
disconnect()
@socketio.on('connect', namespace='/MCHE201')
def test_connect():
emit('my response', {'data': 'Connected', 'duration': ROUND_DURATION})
@socketio.on('disconnect', namespace='/MCHE201')
def test_disconnect():
print('Client disconnected')
class oceanControls(object):
""" Class to wrap the ASCII protocol for controlling the Ocean Controls
Relay module"""
def __init__(self, port, baudrate = 9600, address = 00):
self.ser = serial.Serial(port, baudrate,
bytesize=8, parity='N',
stopbits=1, timeout=0.1)
self.address = address
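        # The ASCII frames produced by the methods below (taken from their
        # format strings) look like this for address 0:
        #   '@00 ON 1\r'      -> close relay 1
        #   '@00 OFF 0\r'     -> open all relays
        #   '@00 TR 2 015\r'  -> close relay 2 for 1.5 s
        #   '@00 RS 03\r'     -> query the state of relay 3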
def turnRelayOn(self, relay_number):
""" Method to turn on an individual relay
Input arguments:
relay_number = The relay number to control
Returns:
nothing
Created: Joshua Vaughan - joshua.vaughan@louisiana.edu - 03/15/16
"""
if relay_number in [1, 2, 3, 4, 5, 6, 7, 8]:
self.ser.write('@{:02d} ON {}\r'.format(self.address, relay_number).encode('utf-8'))
else:
raise ValueError('Please enter a relay number between 1 and 8.')
def turnRelayOff(self, relay_number):
""" Method to turn off an individual relay
Input arguments:
relay_number = The relay number to control
Returns:
nothing
Created: Joshua Vaughan - joshua.vaughan@louisiana.edu - 03/15/16
"""
if relay_number in [1, 2, 3, 4, 5, 6, 7, 8]:
self.ser.write('@{:02d} OFF {}\r'.format(self.address, relay_number).encode('utf-8'))
else:
raise ValueError('Please enter a relay number between 1 and 8.')
def timedRelayOn(self, relay_number, time_on):
""" Method to turn on an individual relay for a set time
Input arguments:
relay_number = The relay number to control
time_on = the time the relay should remain on (s)
Returns:
nothing
Created: Joshua Vaughan - joshua.vaughan@louisiana.edu - 03/15/16
"""
if relay_number in [1, 2, 3, 4, 5, 6, 7, 8]:
# Convert the time input (s) to the number of ms the relay should be on
time_tenths = int(time_on * 10)
if time_tenths < 1 or time_tenths > 255:
raise ValueError('The time must be between 0.1s and 25.5s')
if not np.isclose((time_on / 0.1) % 1, 0):
raise ValueError('The resolution of this command is only 0.1s.\n\
Please enter a value that is a multiple of 0.1s.')
self.ser.write('@{:02d} TR {} {:03d}\r'.format(self.address, relay_number, time_tenths).encode('utf-8'))
else:
raise ValueError('Please enter a relay number between 1 and 8.')
def turnAllOn(self):
""" Method to turn on all relays
Input arguments:
nothing
Returns:
nothing
Created: Joshua Vaughan - joshua.vaughan@louisiana.edu - 03/15/16
"""
self.ser.write('@{:02d} ON {}\r'.format(self.address, 0).encode('utf-8'))
def turnAllOff(self):
""" Method to turn off all relays
Input arguments:
nothing
Returns:
nothing
Created: Joshua Vaughan - joshua.vaughan@louisiana.edu - 03/15/16
"""
self.ser.write('@{:02d} OFF {}\r'.format(self.address, 0).encode('utf-8'))
def isDigitalInputOn(self, digital_input_number):
""" Method that checks the status of an individual digital input
        Input Arguments:
digital_input_number = The input number to check
Returns:
Boolean indicating if input is High/On (True) or Low/Ooff (False)
Created: Joshua Vaughan - joshua.vaughan@louisiana.edu - 03/16/16
"""
if digital_input_number in [1, 2, 3, 4]:
self.ser.flushInput()
# May need to change to below in versions of PySerial >= 3.0
# self.ser.reset_input_buffer()
self.ser.write('@{:02d} IS {:02d}\r'.format(self.address, digital_input_number).encode('utf-8'))
# TODO: Be more elegant about this
status_string = self.ser.readlines()[-1]
status = int(status_string.split()[-1])
if status:
return True
else:
return False
else:
raise ValueError('Please enter a digital input number between 1 and 4.')
def isRelayOn(self, relay_number):
""" Method that checks the status of an individual relay
        Input Arguments:
relay_number = The relay number to control
Returns:
Boolean indicating if relay is on (True) or off (False)
Created: Joshua Vaughan - joshua.vaughan@louisiana.edu - 03/15/16
"""
if relay_number in [1, 2, 3, 4, 5, 6, 7, 8]:
# self.ser.flushInput()
# May need to change to below in versions of PySerial >= 3.0
# self.ser.reset_input_buffer()
self.ser.write('@{:02d} RS {:02d}\r'.format(self.address, relay_number).encode('utf-8'))
# TODO: Be more elegant about this
status_string = self.ser.readlines()[-1]
status = int(status_string.split()[-1])
if status:
return True
else:
return False
else:
raise ValueError('Please enter a relay number between 1 and 8.')
def printRelayStatus(self, relay_number):
""" Method to print the status of an individual relay
        Input Arguments:
relay_number = The relay number to control
Returns:
nothing
Created: Joshua Vaughan - joshua.vaughan@louisiana.edu - 03/15/16
"""
if relay_number in [1, 2, 3, 4, 5, 6, 7, 8]:
            if self.isRelayOn(relay_number):
print('Relay {} is on.'.format(relay_number))
else:
print('Relay {} is off.'.format(relay_number))
else:
raise ValueError('Please enter a relay number between 1 and 8.')
def printDigitalInputStatus(self, digital_input_number):
""" Method to print the status of an individual digital input
        Input Arguments:
relay_number = The digital input number to check
Returns:
nothing
Created: Joshua Vaughan - joshua.vaughan@louisiana.edu - 03/16/16
"""
if digital_input_number in [1, 2, 3, 4]:
            if self.isDigitalInputOn(digital_input_number):
print('Input {} is High/On.'.format(digital_input_number))
else:
print('Input {} is Low/Off.'.format(digital_input_number))
else:
raise ValueError('Please enter a digital input number between 1 and 4.')
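# Usage sketch (hypothetical port path, matching the instantiation in the
# __main__ block below):
#
#   controller = oceanControls('/dev/ttyUSB0')
#   controller.turnRelayOn(1)       # close relay 1
#   controller.printRelayStatus(1)  # -> "Relay 1 is on."
#   controller.turnAllOff()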
class hardware_loop(threading.Thread):
"""
Class to control the threaded hardware loop
"""
def __init__(self):
threading.Thread.__init__(self, name = 'Hardware')
logging.debug('Hardware thread starting...')
self.running = True
def run(self):
"""
Main control loop
"""
global start
logging.debug('Hardware thread running...')
while self.running:
if start:
logging.info('Starting Countdown...')
                # Close (energize) all the relays
                if HARDWARE_CONNECTED:
controller.turnAllOn()
logging.info('Turning all on.')
start_time = time.time()
while time.time() - start_time < ROUND_DURATION:
elapsed_time = time.time() - start_time
# if start:
# logging.debug('Elapsed Time {:0.2f}'.format(elapsed_time))
# else:
# if HARDWARE_CONNECTED:
# # Open all the relays
# controller.turnAllOff()
#
# logging.info('Turning all off.')
# break
socketio.emit('my response',
{'data': 'time', 'elapsed_time': '{:0f}'.format(elapsed_time)},
namespace='/MCHE201')
time.sleep(0.2)
else:
if HARDWARE_CONNECTED:
controller.turnAllOff()
logging.info('Turning all off.')
socketio.emit('my response',
{'data': '0000'},
namespace='/MCHE201')
with lock:
start = False
time.sleep(0.5)
def stop(self):
self.running = False
if __name__ == '__main__':
if HARDWARE_CONNECTED:
if ON_RASPPI:
# Define an instance of the oceanControls class for use on Rasp Pi
controller = oceanControls('/dev/ttyUSB0')
else:
# Define an instance of the oceanControls class on Dr. Vaughan's MacBook
controller = oceanControls('/dev/tty.usbserial-AL01H195')
    # Now define the relationship between the Ocean Controls outputs and the track
# Define the values for red then increment around the track CW
# Red - Blue - Black - Yellow
# Should allow easier changing in the future
red_relay = 1
red_LED = 5
blue_relay = red_relay + 1
blue_LED = red_LED + 1
black_relay = blue_relay + 1
black_LED = blue_LED + 1
yellow_relay = black_relay + 1
yellow_LED = black_LED + 1
# Define the digital input position of the hardware switch
hardware_start_switch = 4
# Create a lock
lock = threading.Lock()
# hardware_thread = threading.Thread(name = 'Hardware', target = hardware_loop)
hardware_thread = hardware_loop()
hardware_thread.daemon = True
hardware_thread.start()
try:
logging.debug('Starting Flask app')
socketio.run(app, host='0.0.0.0', port=5000)
#socketio.run(app)
except (KeyboardInterrupt, SystemExit):
hardware_thread.stop()
hardware_thread.join()
logging.debug('KeyboardInterrupt or SystemExit exception. Exiting.\n\n')
| 32.332574 | 116 | 0.545935 | 1,574 | 14,194 | 4.820203 | 0.173443 | 0.047845 | 0.032556 | 0.021352 | 0.526427 | 0.485436 | 0.474759 | 0.434296 | 0.422433 | 0.372611 | 0 | 0.029276 | 0.343032 | 14,194 | 438 | 117 | 32.406393 | 0.784343 | 0.32091 | 0 | 0.336957 | 0 | 0 | 0.15585 | 0.003058 | 0 | 0 | 0 | 0.004566 | 0 | 1 | 0.108696 | false | 0 | 0.038043 | 0.01087 | 0.190217 | 0.038043 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1091a33512f17e8f77c987291edd04181ec2ffca | 5,437 | py | Python | fixtures/reddit.py | mitodl/open-discussions | ab6e9fac70b8a1222a84e78ba778a7a065c20541 | [
"BSD-3-Clause"
] | 12 | 2017-09-27T21:23:27.000Z | 2020-12-25T04:31:30.000Z | fixtures/reddit.py | mitodl/open-discussions | ab6e9fac70b8a1222a84e78ba778a7a065c20541 | [
"BSD-3-Clause"
] | 3,293 | 2017-06-30T18:16:01.000Z | 2022-03-31T18:01:34.000Z | fixtures/reddit.py | mitodl/open-discussions | ab6e9fac70b8a1222a84e78ba778a7a065c20541 | [
"BSD-3-Clause"
] | 1 | 2020-04-13T12:19:57.000Z | 2020-04-13T12:19:57.000Z | """Reddit fixtures"""
# pylint: disable=redefined-outer-name, unused-argument
from types import SimpleNamespace
import pytest
from channels import api
from channels.constants import CHANNEL_TYPE_PRIVATE, CHANNEL_TYPE_PUBLIC, LINK_TYPE_SELF
from channels.factories.models import PostFactory
from channels.factories.reddit import RedditFactories, FactoryStore
from channels.proxies import PostProxy
from channels.utils import render_article_text
@pytest.fixture
def praw_settings(settings, cassette_exists):
"""Settings needed to use Api client"""
if cassette_exists:
settings.OPEN_DISCUSSIONS_REDDIT_CLIENT_ID = "client_id"
settings.OPEN_DISCUSSIONS_REDDIT_SECRET = "secret"
settings.OPEN_DISCUSSIONS_REDDIT_URL = "https://reddit.local"
settings.OPEN_DISCUSSIONS_REDDIT_VALIDATE_SSL = False
settings.OPEN_DISCUSSIONS_CHANNEL_POST_LIMIT = 25
return settings
@pytest.fixture()
def reddit_factories(use_betamax, cassette_name, cassette_exists):
"""RedditFactories fixture"""
store = FactoryStore(cassette_name)
ctx = RedditFactories(store)
if cassette_exists:
store.load()
yield ctx
if not cassette_exists:
store.write()
@pytest.fixture()
def reddit_user(reddit_factories):
"""Override the user fixture to use reddit_factories"""
return reddit_factories.user("contributor")
@pytest.fixture()
def reddit_staff_user(reddit_factories):
"""Override the staff_user fixture to use reddit_factories"""
from channels.test_utils import no_ssl_verification
with no_ssl_verification():
return reddit_factories.user("staff_user", is_staff=True)
@pytest.fixture()
def reddit_index_user(reddit_factories):
"""Override the staff_user fixture to use reddit_factories"""
from channels.test_utils import no_ssl_verification
with no_ssl_verification():
return reddit_factories.user("index_user", is_staff=True)
@pytest.fixture()
def private_channel(reddit_factories, staff_user):
"""Returns a standard private channel for tests"""
return reddit_factories.channel(
"private_channel", staff_user, channel_type=CHANNEL_TYPE_PRIVATE
)
@pytest.fixture
def public_channel(reddit_factories, staff_user):
"""Returns a standard public channel for tests"""
return reddit_factories.channel(
"public_channel", staff_user, channel_type=CHANNEL_TYPE_PUBLIC
)
@pytest.fixture()
def staff_api(staff_user):
"""A fixture for an Api instance configured with the staff user"""
return api.Api(staff_user)
@pytest.fixture()
def contributor_api(user):
"""A fixture for an Api instance configured with the contributor user"""
return api.Api(user)
@pytest.fixture()
def private_channel_and_contributor(private_channel, staff_api, user):
"""Fixture for a channel and a user who is a contributor"""
staff_api.add_contributor(user.username, private_channel.name)
staff_api.add_subscriber(user.username, private_channel.name)
return private_channel, user
@pytest.fixture()
def subscribed_channels(reddit_factories, staff_user, staff_api, user):
"""Fixture for five channels with a user who is a contributor & subscriber"""
channels = []
for i in range(5):
channels.append(
reddit_factories.channel(
"private_channel_{}".format(i),
staff_user,
channel_type=CHANNEL_TYPE_PRIVATE,
)
)
staff_api.add_contributor(user.username, channels[i].name)
staff_api.add_subscriber(user.username, channels[i].name)
return channels
@pytest.fixture()
def reddit_submission_obj():
"""A dummy Post object"""
article_content = {"text": "some text"}
return SimpleNamespace(
author=SimpleNamespace(name="testuser"),
article_content=article_content,
plain_text=render_article_text(article_content),
subreddit=SimpleNamespace(
display_name="channel_1", title="Channel", subreddit_type="public"
),
selftext="Body text",
score=1,
created=12345,
id="a",
title="Post Title",
num_comments=1,
is_self=True,
likes=1,
banned_by=None,
edited=False,
permalink="http://reddit.local/r/channel_1/a/post-title",
)
@pytest.fixture()
def reddit_comment_obj(mocker, reddit_submission_obj):
"""A dummy Comment object"""
return SimpleNamespace(
parent=mocker.Mock(return_value=reddit_submission_obj),
submission=reddit_submission_obj,
author=SimpleNamespace(name="testuser"),
subreddit=reddit_submission_obj.subreddit,
body="Comment text",
id="b",
score=1,
created=12345,
likes=1,
banned_by=None,
edited=False,
permalink=lambda: "/r/{}/{}".format(
reddit_submission_obj.subreddit.display_name,
"/r/{}/comments/a/post-title/43".format(
reddit_submission_obj.subreddit.display_name
),
),
)
@pytest.fixture()
def post_proxy(reddit_submission_obj):
"""A dummy PostProxy object based on the reddit_submission_obj fixture"""
post = PostFactory.create(
post_id=reddit_submission_obj.id,
channel__name=reddit_submission_obj.subreddit.display_name,
post_type=LINK_TYPE_SELF,
)
return PostProxy(reddit_submission_obj, post)
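# Usage sketch (hypothetical test; pytest injects the fixtures above by name):
#
#   def test_contributor_sees_channel(private_channel_and_contributor, contributor_api):
#       channel, user = private_channel_and_contributor
#       # exercise contributor_api against `channel` here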
| 31.427746 | 88 | 0.707559 | 655 | 5,437 | 5.616794 | 0.212214 | 0.065235 | 0.060886 | 0.035879 | 0.399565 | 0.322642 | 0.273172 | 0.155477 | 0.109269 | 0.109269 | 0 | 0.005044 | 0.197719 | 5,437 | 172 | 89 | 31.610465 | 0.838377 | 0.13684 | 0 | 0.312 | 0 | 0 | 0.060429 | 0.006498 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112 | false | 0 | 0.08 | 0 | 0.296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1093e0c425df24b9951dbb1acde7e2f1ded4595c | 1,329 | py | Python | nodes/0.9.x/python/GlobalParameter.GetValue.py | jdehotin/Clockworkfordynamo | 59226ea8292c57acfa1aa476efd40f0e78c9b965 | [
"MIT"
] | 147 | 2016-02-24T16:37:03.000Z | 2022-02-18T12:10:34.000Z | nodes/0.9.x/python/GlobalParameter.GetValue.py | jdehotin/Clockworkfordynamo | 59226ea8292c57acfa1aa476efd40f0e78c9b965 | [
"MIT"
] | 269 | 2016-02-25T14:04:14.000Z | 2022-03-26T07:30:53.000Z | nodes/0.9.x/python/GlobalParameter.GetValue.py | jdehotin/Clockworkfordynamo | 59226ea8292c57acfa1aa476efd40f0e78c9b965 | [
"MIT"
] | 89 | 2016-03-16T18:21:56.000Z | 2022-02-03T14:34:30.000Z | import clr
clr.AddReference('RevitAPI')
from Autodesk.Revit.DB import *
clr.AddReference("RevitNodes")
import Revit
clr.ImportExtensions(Revit.Elements)
clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Persistence import DocumentManager
doc = DocumentManager.Instance.CurrentDBDocument
params = UnwrapElement(IN[0])
elementlist = list()
for param in params:
# in Revit 2016 R2 or later
try:
# any params that do not have a unit
if str(param.GetDefinition().UnitType) == "UT_Number":
# booleans
if str(param.GetDefinition().ParameterType) == "YesNo":
elementlist.append(param.GetValue().Value == 1)
# parameter types that contain element ids
elif str(param.GetDefinition().ParameterType) == "Image" or str(param.GetDefinition().ParameterType) == "Material":
elementlist.append(param.Document.GetElement(param.GetValue().Value))
# everything else
else:
elementlist.append(param.GetValue().Value)
# any params with units: convert vals to display unit
else:
formatoptions = doc.GetUnits().GetFormatOptions(param.GetDefinition().UnitType)
elementlist.append(UnitUtils.ConvertFromInternalUnits(param.GetValue().Value,formatoptions.DisplayUnits))
	# any earlier Revit version does not support global params
except:
elementlist.append(list())
OUT = elementlist | 35.918919 | 118 | 0.766742 | 153 | 1,329 | 6.653595 | 0.529412 | 0.088409 | 0.082515 | 0.100196 | 0.068762 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006009 | 0.123401 | 1,329 | 37 | 119 | 35.918919 | 0.867811 | 0.176825 | 0 | 0.074074 | 0 | 0 | 0.053358 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1094432a3096d1003d7a7bd5436d17a59bcb25aa | 11,170 | py | Python | preprocessing.py | ChristianOrr/subclassed-madnet-keras | 8d99cfddc653f665ae722c3bc1cca67c5ab81e65 | [
"Apache-2.0"
] | null | null | null | preprocessing.py | ChristianOrr/subclassed-madnet-keras | 8d99cfddc653f665ae722c3bc1cca67c5ab81e65 | [
"Apache-2.0"
] | null | null | null | preprocessing.py | ChristianOrr/subclassed-madnet-keras | 8d99cfddc653f665ae722c3bc1cca67c5ab81e65 | [
"Apache-2.0"
] | null | null | null | import os
import tensorflow as tf
import numpy as np
class StereoDatasetCreator():
"""
Takes paths to left and right stereo image directories
and creates a tf.data.Dataset that returns a batch of left
and right images, (Optional) returns the disparities as a target
using the disparities directories.
Init Args:
left_dir: path to left images folder
right_dir: path to right images folder
batch_size: desired batch size
height: desired height of the image (will be reshaped to this height if necessary)
width: desired width of the image (will be reshaped to this width if necessary)
shuffle: True/False
(Optional) disp_dir: path to disparity maps folder
Returns:
object that can be called to return a tf.data.Dataset
dataset will return values of the form:
{'left_input': (batch, height, width, 3), 'right_input': (batch, height, width, 3)},
(Optional) (batch, height, width, 1) else None
This can prepare MADNet data for training/evaluation and prediction
"""
def __init__(self, left_dir, right_dir, height, width, batch_size=1, shuffle=False, disp_dir=None):
self.left_dir = left_dir
self.right_dir = right_dir
self.disp_dir = disp_dir
self.batch_size = batch_size
self.height = height
self.width = width
self.shuffle = shuffle
self.left_names = tf.constant(
sorted([name for name in os.listdir(left_dir) if os.path.isfile(f"{self.left_dir}/{name}")]
)
)
self.right_names = tf.constant(
sorted([name for name in os.listdir(right_dir) if os.path.isfile(f"{self.right_dir}/{name}")])
)
if self.disp_dir is not None:
self.disp_names = tf.constant(
sorted([name for name in os.listdir(disp_dir) if os.path.isfile(f"{self.disp_dir}/{name}")])
)
# Check that there is a left image for every right image
self.num_left = len(self.left_names)
self.num_right = len(self.right_names)
if self.num_left != self.num_right:
raise ValueError(f"Number of right and left images do not match. "
f"Left number: {self.num_left}. Right number: {self.num_right}")
if self.disp_dir is not None:
self.num_disp = len(self.disp_names)
if self.num_disp != self.num_left:
raise ValueError(f"Number of disparity and left/right images do not match. "
f"Disparity number: {self.num_disp}. "
f"Left number: {self.num_left}. "
f"Right number: {self.num_right}.")
def _get_image(self, path):
"""
Get a single image helper function
Converts image to float32, normalises values to 0-1
and resizes to the desired shape
Args:
path to image (will be in Tensor format, since its called in a graph)
Return:
Tensor in the shape (height, width, 3)
"""
# Using tf.io.read_file since it can take a tensor as input
raw = tf.io.read_file(path)
# Converts to float32 and normalises values
image = tf.io.decode_image(raw, channels=3, dtype=tf.float32, expand_animations=False)
# Change dimensions to the desired model dimensions
image = tf.image.resize(image, [self.height, self.width], method="bilinear")
return image
def readPFM(self, file):
"""
Load a pfm file as a numpy array
Args:
file: path to the file to be loaded
Returns:
content of the file as a numpy array
with shape (height, width, channels)
"""
file = open(file, 'rb')
color = None
width = None
height = None
scale = None
endian = None
header = file.readline().rstrip()
if header == b'PF':
color = True
elif header == b'Pf':
color = False
else:
raise Exception('Not a PFM file.')
dims = file.readline()
try:
width, height = list(map(int, dims.split()))
except:
raise Exception('Malformed PFM header.')
scale = float(file.readline().rstrip())
if scale < 0: # little-endian
endian = '<'
scale = -scale
else:
endian = '>' # big-endian
data = np.fromfile(file, endian + 'f')
shape = (height, width, 3) if color else (height, width, 1)
data = np.reshape(data, shape)
data = np.flipud(data)
        file.close()
        return data
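    # For reference, the PFM layout this parser expects (inferred from the
    # reads above) is three ASCII header lines followed by raw floats:
    #   PF          <- 'PF' = 3-channel color, 'Pf' = 1-channel grayscale
    #   960 540     <- width height
    #   -1.0        <- scale; a negative value means little-endian data
    # then width*height*channels binary float32 values stored bottom-up
    # (hence the np.flipud above).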
def _get_pfm(self, path):
"""
Reads a single pfm disparity file and
returns a disparity map
Args:
path: path to the disparity file (will be in Tensor format, since its called in a graph)
Returns:
Tensor disparity map with shape (height, width, 1)
"""
# Convert tensor to a string
path = path.numpy().decode("ascii")
#disp_map = cv2.imread(path, cv2.IMREAD_UNCHANGED)
disp_map = self.readPFM(path)
# Set inf values to 0 (0 is infinitely far away, so basically the same)
disp_map[disp_map == np.inf] = 0
# convert values to positive
if disp_map.mean() < 0:
disp_map *= -1
# Change dimensions to the desired (height, width, channels)
# Using nearest neighbour interpolation for sparse groundtruth disparities
disp_map = tf.image.resize(disp_map, [self.height, self.width], method="nearest")
return disp_map
def _get_disp(self, disp_name):
"""
Args:
disp_name: Tensor string, name of the disparity file
Returns:
disparity map in the format [height, width, 1],
with float32 values representing the absolute
pixel disparity.
"""
disp_extension = tf.strings.split(disp_name, sep=".")[-1].numpy().decode()
disp_path = f"{self.disp_dir}/" + disp_name
if disp_extension == "pfm" or disp_extension == "PFM":
# wrapping in py_function so that the function can execute eagerly and run non tensor ops
disp_map = tf.py_function(func=self._get_pfm, inp=[disp_path], Tout=tf.float32)
elif disp_extension == "png" or disp_extension == "PNG":
disp_bytes = tf.io.read_file(disp_path)
# Using uint16 for higher precision
disp_map = tf.io.decode_png(disp_bytes, dtype=tf.uint16)
disp_map = tf.cast(disp_map, dtype=tf.float32)
disp_map = disp_map / 256.0
# Using nearest neighbour interpolation for sparse groundtruth disparities
disp_map = tf.image.resize(disp_map, [self.height, self.width], method="nearest")
else:
raise ValueError("Unsupported disparity file detected "
"only .pfm and .png disparities are supported. \n"
f"Detected extension: .{disp_extension} from: {disp_name}")
return disp_map
def _process_single_batch(self, index):
"""
Processes a single batch using index to find the files
Args:
index: Tensor integer
Returns:
stereo input dictionary, target
"""
left_name = self.left_names[index]
right_name = self.right_names[index]
left_image = self._get_image(f"{self.left_dir}/" + left_name)
right_image = self._get_image(f"{self.right_dir}/" + right_name)
disp_map = None
if self.disp_dir is not None:
disp_name = self.disp_names[index]
disp_map = tf.py_function(func=self._get_disp, inp=[disp_name], Tout=tf.float32)
return {'left_input': left_image, 'right_input': right_image}, disp_map
def __call__(self):
"""
Creates and returns a tensorflow data.Dataset
The dataset is shuffled, batched and prefetched
"""
indexes = list(range(self.num_left))
indexes_ds = tf.data.Dataset.from_tensor_slices(indexes)
if self.shuffle:
            indexes_ds = indexes_ds.shuffle(buffer_size=self.num_left, seed=101, reshuffle_each_iteration=False)
ds = indexes_ds.map(self._process_single_batch)
ds = ds.batch(batch_size=self.batch_size, drop_remainder=True)
ds = ds.prefetch(buffer_size=10)
return ds
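# Usage sketch (hypothetical paths; mirrors the constructor documented above):
#
#   creator = StereoDatasetCreator(
#       left_dir="data/left", right_dir="data/right",
#       height=320, width=1216, batch_size=4, shuffle=True, disp_dir="data/disp")
#   dataset = creator()  # yields ({'left_input', 'right_input'}, disparity) batches
#   for inputs, target in dataset.take(1):
#       print(inputs['left_input'].shape)  # (4, 320, 1216, 3)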
class StereoGenerator(tf.keras.utils.Sequence):
"""
    This class is currently not working.
    Please use the StereoDatasetCreator instead for data preparation.
The Input data has shape (None, None, None, None) for each image when training
Takes paths to left and right stereo image directories
and creates a generator that returns a batch of left
and right images.
"""
def __init__(self, left_dir, right_dir, batch_size, height, width, shuffle):
self.left_dir = left_dir
self.right_dir = right_dir
self.batch_size = batch_size
self.height = height
self.width = width
self.shuffle = shuffle
self.left_paths = [path for path in os.listdir(left_dir) if os.path.isfile(f"{self.left_dir}/{path}")]
self.right_paths = [path for path in os.listdir(right_dir) if os.path.isfile(f"{self.right_dir}/{path}")]
# Check that there is a left image for every right image
self.num_left = len(self.left_paths)
self.num_right = len(self.right_paths)
if self.num_left != self.num_right:
raise ValueError(f"Number of right and left images do now match. "
f"Left number: {self.num_left}. Right number: {self.num_right}")
# Check if images names are identical
self.left_paths.sort()
self.right_paths.sort()
if self.left_paths != self.right_paths:
raise ValueError("Left and right image names do not match. "
"Please make sure left and right image names are identical")
def __len__(self):
# Denotes the number of batches per epoch
return self.num_left // self.batch_size
def _get_image(self, image_dir, image_name):
# get a single image helper function
image = tf.keras.preprocessing.image.load_img(f"{image_dir}/{image_name}")
image_arr = tf.keras.preprocessing.image.img_to_array(image)
image_arr = tf.image.resize(image_arr, (self.height, self.width)).numpy()
return image_arr/255.
def __getitem__(self, batch_index):
index = batch_index * self.batch_size
left_batch = self.left_paths[index: self.batch_size + index]
right_batch = self.right_paths[index: self.batch_size + index]
left_images = tf.constant([self._get_image(self.left_dir, image_name) for image_name in left_batch])
right_images = tf.constant([self._get_image(self.right_dir, image_name) for image_name in right_batch])
return {'left_input': left_images, 'right_input': right_images}, None
| 41.524164 | 113 | 0.615936 | 1,492 | 11,170 | 4.455094 | 0.175603 | 0.022115 | 0.018204 | 0.008274 | 0.350534 | 0.309914 | 0.275613 | 0.238905 | 0.213028 | 0.201896 | 0 | 0.006472 | 0.294539 | 11,170 | 268 | 114 | 41.679104 | 0.837056 | 0.286392 | 0 | 0.176871 | 0 | 0 | 0.122043 | 0.018179 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07483 | false | 0 | 0.020408 | 0.006803 | 0.170068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10961c54e1c0ec39a3b635c262d1cc42c0b3d0e9 | 4,968 | py | Python | erinyes/stress/memory_leak_assistant.py | enthought/erinyes | e135542dc8608072f630fa1ae0f45ca30aac9e5c | [
"BSD-3-Clause"
] | 1 | 2017-02-15T18:36:31.000Z | 2017-02-15T18:36:31.000Z | erinyes/stress/memory_leak_assistant.py | enthought/erinyes | e135542dc8608072f630fa1ae0f45ca30aac9e5c | [
"BSD-3-Clause"
] | null | null | null | erinyes/stress/memory_leak_assistant.py | enthought/erinyes | e135542dc8608072f630fa1ae0f45ca30aac9e5c | [
"BSD-3-Clause"
] | null | null | null | #------------------------------------------------------------------------------
# Copyright (c) 2013, Enthought, Inc.
# All rights reserved.
#------------------------------------------------------------------------------
import gc
import os
from multiprocessing import Process
from multiprocessing.queues import Queue
import psutil
class MemoryLeakAssistant(object):
""" Assistant methods used to assert against memory leaks in unittests.
"""
def assertMemoryUsage(self, process, usage, slack=0, msg=None):
""" Assert that the memory usage does not exceed the provided limit.
Parameters
----------
process : psutil.Process
The process to check.
usage : float
The target memory usage. This is used as a soft-limit.
msg : str
The message to show on AssertionError.
slack : float
The percentage (relative to `usage`) that we allow the
process memory usage to exceed the soft limit. The default is 0.0
Raises
------
AssertionError :
if the current memory usage of the process is higher than
:math:`usage * (1 + slack)`.
"""
current_usage = self._memory_usage(process)
hard_limit = usage * (1 + slack)
if hard_limit < current_usage:
if msg is None:
difference = (current_usage - usage) / usage
msg = "Memory leak of {:.2%}".format(difference)
raise AssertionError(msg)
def assertReturnsMemory(self, function, args=None, iterations=100,
slack=0.0, msg=None):
""" Assert that the function does not retain memory over a number of
runs.
Parameters
----------
func : callable
The function to check. The function should take no arguments.
args : tuple
The tuple of arguments to pass to the callable.
iterations : int
The number of times to run the function. Default is 100.
msg : str
The message to show on AssertionError.
slack : float
The percentage (relative to the first run) that we allow the
process memory usage to exceed the expected. The default is 0.0
Note
----
The function is executed in-process thus any memory leaks will be
there to cause problems to other tests that are part of the currently
running test suite.
"""
process = psutil.Process(os.getpid())
def test_function():
if args is None:
function()
else:
function(*args)
gc.collect()
baseline = self._memory_usage(process)
try:
            for index in range(iterations):
test_function()
gc.collect()
self.assertMemoryUsage(process, baseline, slack=slack)
except AssertionError:
leak = (self._memory_usage(process) - baseline) / baseline
if msg is None:
msg = "Memory leak of {:.2%} after {} iterations"
raise AssertionError(msg.format(leak, index + 1))
else:
raise AssertionError(msg)
def assertDoesNotLeak(self, function, args=None, slack=0.2,
iterations=100):
""" Repeat the execution of a function in a child process while
monitoring the memory usage.
The method checks that the memory usage of the process at the end of
each run does not exceed on average (1 + slack) times the usage of
the first run and returns the appropriate errors.
.. note:: The memory leak could be so bad that the process goes out of
memory. In such a case the method returns the exception traceback.
"""
queue = Queue()
process = Process(
            target=_check_for_memory_leak,
args=(function, iterations, slack, queue, args)
)
self._assertChildProcessFinishes(process, queue)
def _memory_usage(self, process):
        return float(process.memory_info().rss)
def _assertChildProcessFinishes(self, process, queue):
try:
process.start()
process.join()
outcome = queue.get_nowait()
finally:
# Make sure that the process has terminated
process.terminate()
if outcome != 'FINISHED':
self.fail(outcome)
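# Usage sketch (hypothetical function under test): mix the assistant into a
# unittest.TestCase, which supplies the self.fail used above. assertReturnsMemory
# runs the function in-process, assertDoesNotLeak in a monitored child process:
#
#   class MyLeakTest(unittest.TestCase, MemoryLeakAssistant):
#       def test_no_leak(self):
#           self.assertReturnsMemory(my_function, args=(data,), iterations=50, slack=0.1)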
def _check_for_memory_leak(function, iterations, slack, queue, args=None):
assistant = MemoryLeakAssistant()
try:
        assistant.assertReturnsMemory(function,
                                      iterations=iterations,
                                      args=args,
                                      slack=slack)
except Exception as error:
queue.put(error)
return
queue.put('FINISHED')
| 34.5 | 79 | 0.560588 | 536 | 4,968 | 5.147388 | 0.324627 | 0.043856 | 0.015223 | 0.023922 | 0.159478 | 0.097862 | 0.082639 | 0.082639 | 0.082639 | 0.082639 | 0 | 0.008513 | 0.337963 | 4,968 | 143 | 80 | 34.741259 | 0.830344 | 0.396538 | 0 | 0.166667 | 0 | 0 | 0.029874 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.106061 | false | 0 | 0.075758 | 0.015152 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10969e2280bfa79ea259df52f4d03214e5b48aa5 | 702 | py | Python | app/main/views/index.py | by46/coffee | f12e1e95f12da7e322a432a6386a1147c5549c3b | [
"MIT"
] | null | null | null | app/main/views/index.py | by46/coffee | f12e1e95f12da7e322a432a6386a1147c5549c3b | [
"MIT"
] | null | null | null | app/main/views/index.py | by46/coffee | f12e1e95f12da7e322a432a6386a1147c5549c3b | [
"MIT"
] | null | null | null | from flask import render_template
from flask_restful import Resource
from flask_wtf import Form
from flask_wtf.file import FileField
from werkzeug.utils import secure_filename
from app.main import api
@api.resource('/api/v1/version')
class Version(Resource):
def get(self):
return dict(version='0.0.1')
class PhotoForm(Form):
file = FileField("Your photo")
def upload():
form = PhotoForm()
if form.validate_on_submit():
filename = secure_filename(form.file.data.filename)
form.file.data.save('uploads/' + filename)
else:
filename = None
return render_template('main/upload.html', form=form, filename=filename)
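# Note: upload() is not bound to a URL in this module; presumably it is
# registered on the app or blueprint elsewhere. A minimal sketch of binding it
# directly (hypothetical blueprint name) would be:
#
#   some_blueprint.add_url_rule('/upload', view_func=upload, methods=['GET', 'POST'])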
| 24.206897 | 77 | 0.68661 | 91 | 702 | 5.197802 | 0.461538 | 0.07611 | 0.05074 | 0.084567 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007246 | 0.213675 | 702 | 28 | 78 | 25.071429 | 0.849638 | 0 | 0 | 0 | 0 | 0 | 0.080119 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.3 | 0.05 | 0.65 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
109a88fcb63a41ebda8b2f136c7e27056f7d3cec | 7,721 | py | Python | Krypton/Res/ToolKit/ConsoleUtils.py | BolunHan/Krypton | 8caf8e8efad6172ea0783c777e7df49a2ac512cb | [
"MIT"
] | null | null | null | Krypton/Res/ToolKit/ConsoleUtils.py | BolunHan/Krypton | 8caf8e8efad6172ea0783c777e7df49a2ac512cb | [
"MIT"
] | null | null | null | Krypton/Res/ToolKit/ConsoleUtils.py | BolunHan/Krypton | 8caf8e8efad6172ea0783c777e7df49a2ac512cb | [
"MIT"
] | null | null | null | import io
import logging
import shutil
import sys
import threading
import time
from enum import Enum
from typing import Callable, Iterable, Union, Sized, Optional
from .LoggerUtils import temp_log
__all__ = ['Progress', 'GetInput', 'count_ordinal', 'TerminalStyle']
# noinspection SpellCheckingInspection
class TerminalStyle(Enum):
CEND = '\33[0m'
CBOLD = '\33[1m'
CITALIC = '\33[3m'
CURL = '\33[4m'
CBLINK = '\33[5m'
CBLINK2 = '\33[6m'
CSELECTED = '\33[7m'
CBLACK = '\33[30m'
CRED = '\33[31m'
CGREEN = '\33[32m'
CYELLOW = '\33[33m'
CBLUE = '\33[34m'
CVIOLET = '\33[35m'
CBEIGE = '\33[36m'
CWHITE = '\33[37m'
CBLACKBG = '\33[40m'
CREDBG = '\33[41m'
CGREENBG = '\33[42m'
CYELLOWBG = '\33[43m'
CBLUEBG = '\33[44m'
CVIOLETBG = '\33[45m'
CBEIGEBG = '\33[46m'
CWHITEBG = '\33[47m'
CGREY = '\33[90m'
CRED2 = '\33[91m'
CGREEN2 = '\33[92m'
CYELLOW2 = '\33[93m'
CBLUE2 = '\33[94m'
CVIOLET2 = '\33[95m'
CBEIGE2 = '\33[96m'
CWHITE2 = '\33[97m'
CGREYBG = '\33[100m'
CREDBG2 = '\33[101m'
CGREENBG2 = '\33[102m'
CYELLOWBG2 = '\33[103m'
CBLUEBG2 = '\33[104m'
CVIOLETBG2 = '\33[105m'
CBEIGEBG2 = '\33[106m'
CWHITEBG2 = '\33[107m'
@staticmethod
def color_table():
"""
prints table of formatted text format options
"""
for style in range(8):
for fg in range(30, 38):
s1 = ''
for bg in range(40, 48):
_format = ';'.join([str(style), str(fg), str(bg)])
s1 += '\x1b[%sm %s \x1b[0m' % (_format, _format)
print(s1)
print('\n')
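# Usage sketch: wrap text in a style's escape sequence and reset with CEND:
#
#   print(TerminalStyle.CRED.value + 'error' + TerminalStyle.CEND.value)
#   TerminalStyle.color_table()  # preview every style/fg/bg combination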
class Progress(object):
DEFAULT = '{prompt} [{bar}] {progress:>7.2%} {eta}{done}'
MINI = '{prompt} {progress:.2%}'
FULL = '{prompt} [{bar}] {done_tasks}/{total_tasks} {progress:>7.2%}, {remaining} to go {eta}{done}'
def __init__(self, tasks: Union[int, Iterable], prompt: str = 'Progress:', format_spec: str = DEFAULT, **kwargs):
self.prompt = prompt
self.format_spec = format_spec
self._width = kwargs.pop('width', None)
self.tick_size = kwargs.pop('tick_size', 0.0001)
self.progress_symbol = kwargs.pop('progress_symbol', '=')
self.blank_symbol = kwargs.pop('blank_symbol', ' ')
if isinstance(tasks, int):
self.total_tasks = tasks
self.tasks = range(self.total_tasks)
elif isinstance(tasks, (Sized, Iterable)):
self.total_tasks = len(tasks)
self.tasks = tasks
if 'outputs' not in kwargs:
self.outputs = [sys.stdout]
else:
outputs = kwargs.pop('outputs')
if outputs is None:
self.outputs = []
elif isinstance(outputs, Iterable):
self.outputs = outputs
else:
self.outputs = [outputs]
self.start_time = time.time()
self.done_tasks = 0
self.done_time = None
self.iter_task = None
self.last_output = -1
@property
def eta(self):
remaining = self.total_tasks - self.done_tasks
time_cost = time.time() - self.start_time
if self.done_tasks == 0:
eta = float('inf')
else:
eta = time_cost / self.done_tasks * remaining
return eta
@property
def work_time(self):
if self.done_time:
work_time = self.done_time - self.start_time
else:
work_time = time.time() - self.start_time
return work_time
@property
def is_done(self):
return self.done_tasks == self.total_tasks
@property
def progress(self):
return self.done_tasks / self.total_tasks
@property
def remaining(self):
return self.total_tasks - self.done_tasks
@property
def width(self):
if self._width:
width = self._width
else:
width = shutil.get_terminal_size().columns
return width
def format_progress(self):
if self.is_done:
eta = ''
done = f'All done in {self.work_time:,.2f} seconds'
else:
eta = f'ETA: {self.eta:,.2f} seconds'
done = ''
args = {
'total_tasks': self.total_tasks,
'done_tasks': self.done_tasks,
'progress': self.progress,
'remaining': self.remaining,
'work_time': self.work_time,
'eta': eta,
'done': done,
'prompt': self.prompt,
'bar': '',
}
bar_size = max(10, self.width - len(self.format_spec.format_map(args)))
progress_size = round(bar_size * self.progress)
args['bar'] = self.progress_symbol * progress_size + self.blank_symbol * (bar_size - progress_size)
progress_str = self.format_spec.format_map(args)
return progress_str
def reset(self):
self.done_tasks = 0
self.done_time = None
self.last_output = -1
def output(self):
progress_str = self.format_progress()
self.last_output = self.progress
for output in self.outputs:
if isinstance(output, Callable):
output(progress_str)
elif isinstance(output, logging.Logger):
temp_log(logger=output, level=logging.INFO, msg=progress_str)
            elif isinstance(output, io.TextIOBase):
print('\r' + progress_str, file=output, end='')
else:
pass
def __call__(self, *args, **kwargs):
return self.format_progress()
def __next__(self):
try:
if (not self.tick_size) or self.progress >= self.tick_size + self.last_output:
self.output()
self.done_tasks += 1
return self.iter_task.__next__()
        except StopIteration:
            self.done_tasks = self.total_tasks
            self.done_time = time.time()
            self.output()
            raise StopIteration()
def __iter__(self):
self.reset()
self.start_time = time.time()
self.iter_task = self.tasks.__iter__()
return self
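# Usage sketch: Progress is itself iterable, so wrapping any task collection
# renders a live bar on stdout:
#
#   for task in Progress(range(500), prompt='Crunching:', format_spec=Progress.FULL):
#       time.sleep(0.01)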
class GetInput(object):
def __init__(self, timeout=5, prompt_message: Optional[str] = None, default_value: Optional[str] = None):
if prompt_message is None:
prompt_message = f'Please respond in {timeout} seconds: '
self.timeout = timeout
self.default_value = default_value
self.prompt_message = prompt_message
self._input = None
self.input_thread: Optional[threading.Thread] = None
self.show()
def show(self):
self.input_thread = threading.Thread(target=self.get_input)
self.input_thread.daemon = True
self.input_thread.start()
self.input_thread.join(timeout=self.timeout)
# input_thread.terminate()
if self._input is None:
print(f"No input was given within {self.timeout} seconds. Use {self.default_value} as default value.")
self._input = self.default_value
def get_input(self):
self._input = None
self._input = input(self.prompt_message)
return
@property
def input(self):
return self._input
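# Usage sketch: the prompt is shown on construction and .input yields either the
# typed value or the default once the timeout elapses:
#
#   answer = GetInput(timeout=5, default_value='yes').input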
def count_ordinal(n: int) -> str:
"""
Convert an integer into its ordinal representation::
make_ordinal(0) => '0th'
make_ordinal(3) => '3rd'
make_ordinal(122) => '122nd'
make_ordinal(213) => '213th'
"""
n = int(n)
suffix = ['th', 'st', 'nd', 'rd', 'th'][min(n % 10, 4)]
if 11 <= (n % 100) <= 13:
suffix = 'th'
return str(n) + suffix
| 28.702602 | 117 | 0.56573 | 914 | 7,721 | 4.607221 | 0.280088 | 0.028497 | 0.033959 | 0.018048 | 0.124199 | 0.081928 | 0.038946 | 0.038946 | 0.038946 | 0.022797 | 0 | 0.043861 | 0.309027 | 7,721 | 268 | 118 | 28.809701 | 0.745455 | 0.035358 | 0 | 0.123223 | 0 | 0.009479 | 0.114177 | 0.006358 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090047 | false | 0.004739 | 0.042654 | 0.023697 | 0.407583 | 0.018957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
109ab88a9bb37aa8a037a26d33f9022ea2747b30 | 71,623 | py | Python | pyplan_core/classes/PyplanFunctions.py | pyplan/pyplan-core | 21b991a16feb1141b3ff7e3ac75a3aee54f80d0d | [
"MIT"
] | 4 | 2020-04-29T20:24:44.000Z | 2021-03-03T17:09:32.000Z | pyplan_core/classes/PyplanFunctions.py | pyplan/pyplan-core | 21b991a16feb1141b3ff7e3ac75a3aee54f80d0d | [
"MIT"
] | 2 | 2020-08-24T17:49:00.000Z | 2021-01-19T16:09:03.000Z | pyplan_core/classes/PyplanFunctions.py | pyplan/pyplan-core | 21b991a16feb1141b3ff7e3ac75a3aee54f80d0d | [
"MIT"
] | 4 | 2021-01-23T13:06:31.000Z | 2021-12-16T13:11:40.000Z | import importlib
import ntpath
import os
import re
import subprocess
import time
import numpy as np
import pandas as pd
import xarray as xr
from openpyxl import load_workbook
from .ws.settings import NotLevels
try:
from StringIO import StringIO as BytesIO
except ImportError:
from io import BytesIO
class PyplanFunctions(object):
def __init__(self, model=None):
self.model = model
def release(self):
self.model = None
def set_domain(self, dataArray, domainDic, defaultValue=None):
"""Reindexes the dataArray by applying the indices of the domainDic param
Ex.
pp.set_domain(da,{"time":time_idex, "products":product_index})
"""
_da = dataArray
for key in domainDic:
_da = _da.reindex({key: domainDic[key].values})
_da = _da.rename({key: domainDic[key].name})
if not defaultValue is None:
_da = _da.fillna(defaultValue)
return _da
def build_report(self, values, name="Report", report_index=None):
"""DEPRECATED. Use the create_report function instead
Concatenates the values list of nodes along the report_index dimension
"""
_titles = [str(xx.name) for xx in values]
_index = None
if report_index is None:
_index = pd.Index(_titles, name=name)
else:
_index = report_index
return xr.concat(values, _index)
def create_dataarray(self, value, coords, dtype=None):
"""Creates a dataarray using an atomic value distributed along all dimensions
Ex.
pp.create_dataarray(1., coords=[time_idex, product_index])
"""
_data = np.full(tuple([(len(x)) for x in coords]), value, dtype=dtype)
return xr.DataArray(_data, coords)
def find(self, param1, param2, compareType=1, caseSensitive=True):
"""
param1: value or indexarray for compare
param2: index compare to
compareType: exact=1, start_with=2, end_with=3, contain=4
caseSensitive: able to differentiate between uppercase and lowercase (by default True)
If param1 is a scalar (numeric or str) and param2 is an index: return a dataArray indexed by param2 with True
on ocurrences of param2
Ex. pp.find("te", region, cp.end_with)
If param1 is an index and param2 is an index too: return a dataArray indexed by param1 and param2 with True
on ocurrences of param1 on param2
Ex. pp.find(subregion, region, cp.contain)
"""
def _internalFn(item, value):
if not isinstance(item, str):
item = str(item)
if not isinstance(value, str):
value = str(value)
if compareType == 1:
if caseSensitive:
return item == value
else:
return item.lower() == value.lower()
elif compareType == 2:
if caseSensitive:
return item[:len(value)] == value
else:
return item[:len(value)].lower() == value.lower()
elif compareType == 3:
if caseSensitive:
return item[-len(value):] == value
else:
return item[-len(value):].lower() == value.lower()
elif compareType == 4:
if caseSensitive:
return value in item
else:
return value.lower() in item.lower()
if (isinstance(param1, str) or str(param1).isnumeric()) and isinstance(param2, pd.Index):
vfn = np.vectorize(_internalFn)
return xr.DataArray(vfn(param2.values, param1), [param2])
if isinstance(param1, pd.Index) and isinstance(param2, pd.Index):
_res = self.create_dataarray(False, [param1, param2], dtype=bool)
for row in param1.values:
for col in param2.values:
_res.loc[{param1.name: slice(row, row), param2.name: slice(
col, col)}] = _internalFn(col, row)
return _res
def apply_fn(self, obj, applyFn, *args):
"""Applies "applyFn" to "obj" where obj can be DataArray or Index
Ex.
        pp.apply_fn(dataArray, node_function)
"""
vfn = np.vectorize(applyFn)
if isinstance(obj, pd.Index):
return pd.Index(np.unique(vfn(obj.values, *args)))
if isinstance(obj, xr.DataArray):
return xr.apply_ufunc(vfn, obj, *args)
return None
def subset(self, cube):
"""Returns an index with the elements of the index for which cube is true. The function is used to create a new
index that is a subset of an existing index.
Ex. pp.subset(sales>0)
"""
cond = cube > 0
values = cond.coords[cond.dims[0]].values[cond.values]
return pd.Index(values)
def split_text(self, param1, separator, part=None):
"""Returns a DataArray object with text values formed by splitting the elements of param1 text values at each
occurrence of separator "separator".
The DataArray will have the original dimension plus a new dimension 'Parts' of length (number of separators + 1).
All text values must have the same number of separators separator.
"""
if isinstance(param1, pd.Index):
param1 = xr.DataArray(param1.values, [param1])
_q_separators = self.apply_fn(param1, lambda x: x.count(separator))
        _max_q_separators = int(_q_separators.max().values)
_result_coords = ['Part ' + str(i)
for i in range(1, _max_q_separators + 2)]
_result_dim = pd.Index(_result_coords)
_result_dim.name = "Parts"
_results = []
for _part in range(_max_q_separators + 1):
_dataarray = self.apply_fn(
param1, lambda x: x.split(separator)[_part])
_results.append(_dataarray)
_res = xr.concat(_results, dim=_result_dim)
if not part is None:
_res = _res.sel(Parts="Part " + str(part), drop=True)
return _res
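# Illustrative usage (a sketch; `codes` is an assumed index with values like "AR-001"):
#   pp.split_text(codes, "-")            # DataArray with a new 'Parts' dimension ('Part 1', 'Part 2')
#   pp.split_text(codes, "-", part=1)    # only the text before the first separator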
def get_pos(self, index):
"""Returns a DataArray with indexed by index and its positions as values
Ex. pp.get_pos(time_index)
"""
return xr.DataArray(range(0, len(index)), [index])
def concat_index(self, *args):
"""Concatenates two or more indexes and/or atomic values and returns a single new index
Ex.
pp.concat_index(index1,index2,index3,value1,value2)
"""
_list = []
for arg in args:
if isinstance(arg, pd.Index):
values = (arg.values).tolist()
_list.extend(values)
else:
_list.append(arg)
seripandas = pd.Series(_list)
return pd.Index(seripandas.unique())
def linear_depreciation(self, investments, usefulLife, timeIndex, includeInCurrentMonth=False, timeIndexFormat='%Y.%m'):
"""Returns the straight-line depreciation of dataArray investments over its usefulLife.
investments: DataArray containing investments
usefulLife: DataArray with number of years of life expectancy
timeIndex: Time dimension of dataArray. Must be a Pandas Index
includeInCurrentMonth: Whether to start depreciating in month t or month t+1
timeIndexFormat: i.e. for '2016.01' would be '%Y.%m'
"""
# Depreciation amount (safe division by zero)
_usefulLife_months = usefulLife.astype(int) * 12
_usefulLife_months_den = xr.where(
_usefulLife_months == 0, 1, _usefulLife_months)
_depreciation = xr.where(_usefulLife_months ==
0, 0, investments / _usefulLife_months_den)
# Calculate first and last months to depreciate
_df_per = _usefulLife_months.to_dataframe('first').reset_index()
_df_per['key'] = 1
_df_time = pd.DataFrame(timeIndex)
_df_time['key'] = 1
_df = _df_time.merge(_df_per, on=['key']).drop(columns=['key'])
_df['ending'] = pd.to_datetime(
_df[timeIndex.name].str.replace('.', '-', regex=False)).dt.to_period('M')
_df['ending'] = (_df['ending'] + _df['first']
).dt.strftime(timeIndexFormat)
# Get dimensions indexes and names
_getNodeFn = self.model.getNode
_da_dims_names = list(usefulLife.dims)
_da_dims = {timeIndex.name: timeIndex}
for dim in _da_dims_names:
_da_dims.update({dim: _getNodeFn(dim).result})
# DataArray with ending date
_ending = self.dataarray_from_pandas(
_df, _da_dims, valueColumns='ending', defaultValue='')
# Allocate depreciation to corresponding periods
_depreciations = investments * 0.
for t in timeIndex:
_ending_month = self.subscript(_ending, timeIndex, t)
_depreciation_amount_t = self.subscript(
_depreciation, timeIndex, t)
if includeInCurrentMonth:
_depreciacion_t = xr.where((self.to_dataarray(timeIndex) >= t) & (
self.to_dataarray(timeIndex) < _ending_month), _depreciation_amount_t, 0.)
else:
_depreciacion_t = xr.where((self.to_dataarray(timeIndex) > t) & (
self.to_dataarray(timeIndex) <= _ending_month), _depreciation_amount_t, 0.)
_depreciations = _depreciations + _depreciacion_t
return _depreciations
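# Illustrative usage (a sketch; `capex`, `life_years` and `time` are assumed model objects):
#   pp.linear_depreciation(capex, life_years, time)   # monthly straight-line depreciation per period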
def irr(self, flow, time_index):
"""Returns the Internal Rate of Return (IRR) of a series of periodic payments (negative values) and inflows
(positive values). The IRR is the discount rate at which the Net Present Value (NPV) of the flows equals zero.
The variable flow must be indexed by time_index.
If the cash flow never changes sign, pp.irr() has no solution and returns NAN (Not A Number).
"""
_getNodeFn = self.model.getNode
try:
# np.irr was removed in NumPy >= 1.20; prefer the numpy_financial package when available
from numpy_financial import irr as _irr_fn
except ImportError:
_irr_fn = np.irr
_rest_of_indexes_labels = np.setdiff1d(flow.dims, [time_index.name])
_cube = None
if len(_rest_of_indexes_labels) == 0:
_cube = _irr_fn(flow.values)
else:
_rest_of_indexes = [_getNodeFn(
xx).result for xx in _rest_of_indexes_labels]
_cube = self.create_dataarray(0., _rest_of_indexes)
_multivalues = [idx.values for idx in _rest_of_indexes]
_values = pd.MultiIndex.from_product(_multivalues).values
for _item in _values:
_filter = {}
for _nn in range(len(_item)):
_filter[_rest_of_indexes[_nn].name] = _item[_nn]
_toIrr = flow.sel(_filter).values
_irr = _irr_fn(_toIrr)
_cube.loc[_filter] = _irr
return _cube
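# Illustrative usage (a sketch; `cash_flow` is an assumed DataArray indexed by `time_index`,
# e.g. with values [-1000, 300, 420, 680]):
#   pp.irr(cash_flow, time_index)   # scalar IRR, or a DataArray over the remaining dimensions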
def copy_as_values(self, source, targetId):
"""Copy values of datArray "source" into dataArray with id 'targetId'. This function alters the definition of
dataArray with 'targetId' identifier.
source: dataArray/index to copy values from
targetId: identifier (string) of the target node
"""
_getNodeFn = self.model.getNode
if isinstance(source, str):
source = _getNodeFn(source).result
if not isinstance(source, xr.DataArray) and not isinstance(source, pd.Index) and not isinstance(source, float) and not isinstance(source, int):
raise ValueError(
"The 'source' parameter must be a xr.DataArray, pd.Index, float or int")
if not isinstance(targetId, str):
raise ValueError(
"The 'targetId' parameter must be a string (identifier of node)")
newDef = ""
if isinstance(source, float) or isinstance(source, int):
newDef = f"result = {str(source)}"
elif isinstance(source, xr.DataArray):
_indexes = str(list(source.dims)).replace("'", '')
np.set_printoptions(threshold=np.prod(source.values.shape))
_data = np.array2string(source.values, separator=",", precision=20, formatter={
'float_kind': lambda x: "np.nan" if np.isnan(x) else repr(x)}).replace('\n', '')
newDef = f"result = xr.DataArray({_data},{_indexes})"
elif isinstance(source, pd.Index):
np.set_printoptions(threshold=np.prod(source.values.shape))
_data = np.array2string(source.values, separator=",", precision=20, formatter={
'float_kind': lambda x: "np.nan" if np.isnan(x) else repr(x)}).replace('\n', '')
newDef = f"result = pd.Index({_data})"
_getNodeFn(targetId).definition = newDef
return True
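# Illustrative usage (a sketch; `prices` is an assumed DataArray and "prices_input" an assumed node id):
#   pp.copy_as_values(prices, "prices_input")   # freezes the current values into the target node's definition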
def excel_connection(self, filepath, useOpenpyxl=False, dataOnly=True, readOnly=True):
"""Creates Excel object from filepath.
filepath: path to excel file
useOpenpyxl: bool
dataOnly: bool. True to only get values, not the formulas
readOnly: bool
Ex.
pp.excel_connection("\\path\\to\\the\\excelfile.xlsx")
"""
if self.model.isLinux():
filepath = filepath.replace("\\", "/")
_getNodeFn = self.model.getNode
fullFilename = filepath
if not os.path.isfile(fullFilename):
fullFilename = _getNodeFn("current_path").result + filepath
if os.path.isfile(fullFilename):
if useOpenpyxl:
return load_workbook(fullFilename, data_only=dataOnly, read_only=readOnly)
else:
return filepath
else:
raise ValueError("File not found")
def subscript(self, dataArray, indexes, values):
"""Filters dataArray using the filterList filters.
dataArray: dataArray to be filtered
indexes: the index to filter
values: the value to filter
Ex.
pp.subscript(dataArray, index, value)
"""
if not isinstance(dataArray, xr.DataArray):
raise ValueError(
"the 'dataArray' parameter must be of the type xr.DataArray")
if not isinstance(indexes, list):
indexes = [indexes]
if not isinstance(values, list):
values = [values]
res = dataArray
filterDic = {}
for _pos, indexItem in enumerate(indexes):
filterDic[indexItem.name] = values[_pos]
if len(filterDic) > 0:
res = res.sel(filterDic, drop=True)
return res
def change_index(self, dataArray, oldIndex, newIndex, compareMode=1, defaultValue=None):
"""Changes index of a dataArray object.
compareMode: 1: by Value (default), 2: by pos
Ex.
pp.change_index(dataArray, oldIndex, newIndex)
"""
_da = dataArray
if compareMode == 1:
_temp = _da.reindex({oldIndex.name: newIndex.values})
_temp[newIndex.name] = _temp[oldIndex.name]
_temp = _temp.swap_dims(
{oldIndex.name: newIndex.name}).drop(oldIndex.name)
if not defaultValue is None:
_temp = _temp.fillna(defaultValue)
return _temp
else:
if len(oldIndex.values) == len(newIndex.values):
_tmp = _da.copy()
_tmp = _tmp.assign_coords({oldIndex.name: newIndex.values})
_tmp = _tmp.rename({oldIndex.name: newIndex.name})
return _tmp
else:
raise ValueError(
"change_index by position for indices of different sizes is not implemented")
def kind_to_string(self, kind):
"""Returns the data type on human-readable string
"""
if kind in {'U', 'S'}:
return "string"
elif kind in {'b'}:
return "boolean"
elif kind in {'i', 'u', 'f', 'c'}:
return "numeric"
elif kind in {'m', 'M'}:
return "date"
elif kind in {'O'}:
return "object"
elif kind in {'V'}:
return "void"
def pandas_from_excel(self, excel, sheetName=None, namedRange=None, cellRange=None, indexes=None, driver=""):
"""Returns a pandas DataFrame from Excel spreadsheet.
excel: excel file path or openpyxl workbook object
sheetName: sheet name to be read
namedRange: range name to be read
cellRange: used together with sheetName to read from a single cell range
indexes: List of columns names to be set as index of dataframe
Ex.
pp.pandas_from_excel(excelNode,"Sheet 1")
pp.pandas_from_excel(excelNode,namedRange="name_range")
pp.pandas_from_excel(excelNode,"Sheet 1",cellRange="A1:H10")
This function automatically generates pickles for every named range in the Excel file
when the excel parameter is a string.
"""
# When excel param is a string, this function tries to read from automatically generated
# pickles for every named range if they are newer than the Excel file (its modified date).
# If they do not exist or are outdated, it tries to generate one pickle for every named range in
# the spreadsheet.
# Requirements:
# - the process must have write permissions,
# - the workbook must have named ranges.
# Otherwise, it loads the spreadsheet using the openpyxl library and then reads the sheet,
# named range or cell range.
if isinstance(excel, str):
if not os.path.isfile(excel):
excel = os.path.join(self.model.getNode(
"current_path").result, excel)
filepath = excel
# Only read/generate pickles for named ranges
if namedRange is not None:
orig_dir, single_filename = os.path.split(filepath)
filename, _ = os.path.splitext(single_filename)
target_dir = os.path.join(orig_dir, f".{filename}")
picklepath = os.path.join(target_dir, f"{namedRange}.pkl")
# Read from pickle if it is newer than Excel file
if os.path.isfile(picklepath) and os.path.getmtime(picklepath) >= os.path.getmtime(filepath):
return self.__read_pickle_df(filepath=picklepath, indexes=indexes)
else:
wb = load_workbook(
filepath, data_only=True, read_only=True)
named_ranges = [
r.name for r in wb.defined_names.definedName]
# Check if user has writing permissions to generate new pickles and if namedRange exists
if os.access(excel, os.W_OK) and namedRange in named_ranges:
flag_filename = 'flag.tmp'
flag_filepath = os.path.join(target_dir, flag_filename)
# Clean potentially old flag files
self.__remove_old_file(
filepath=flag_filepath, maxElapsedMinutes=60)
# If flag file exists (optimization is running), read directly from Excel
if os.path.isfile(flag_filepath):
return self.pandas_from_excel(wb, sheetName, namedRange, cellRange, indexes)
else:
self.__generate_pkl_from_excel(
workbook=wb, filepath=filepath, targetDir=target_dir,
maxFileSizeMB=100, flagFilename=flag_filename)
# Read file
if os.path.isfile(picklepath):
return self.__read_pickle_df(filepath=picklepath, indexes=indexes)
else:
return self.pandas_from_excel(wb, sheetName, namedRange, cellRange, indexes)
# Read directly from Excel
else:
return self.pandas_from_excel(wb, sheetName, namedRange, cellRange, indexes)
else:
wb = load_workbook(filepath, data_only=True, read_only=True)
return self.pandas_from_excel(wb, sheetName, namedRange, cellRange, indexes)
elif "openpyxl.workbook" in str(type(excel)):
rangeToRead = None
if not namedRange is None:
the_range = excel.defined_names[namedRange]
dests = the_range.destinations
for title, coord in dests:
ws = excel[title]
rangeToRead = ws[coord]
elif not cellRange is None:
ws = excel[sheetName]
rangeToRead = ws[cellRange]
else:
rangeToRead = excel[sheetName]
cols = []
values = []
for index, row in enumerate(rangeToRead):
if index == 0:
cols = [str(c.value) for c in row]
else:
values.append([c.value for c in row])
df = pd.DataFrame(values, None, cols)
if not indexes is None:
if isinstance(indexes, str):
indexes = [indexes]
toIndex = []
for indexColumn in indexes:
if indexColumn in df.columns.values:
toIndex.append(indexColumn)
if len(toIndex) > 0:
df.set_index(toIndex, inplace=True)
return df.dropna(how="all")
else:
raise ValueError("excel must be a string or openpyxl workbook")
def index_from_pandas(self, dataframe, columnName=None, removeEmpty=True):
"""Returns a pandas.Index from an column of a pandas dataframe.
dataframe: pandas dataframe
columnName: dataframe column name used for create cp.index. By default is created using the first column
removeEmpty: True for remove empty rows
Ex.
pp.index_from_pandas(df)
pp.index_from_pandas(df,"column10")
"""
_serie = None
if columnName is None:
_serie = dataframe[dataframe.columns[0]]
else:
_serie = dataframe[columnName]
if removeEmpty:
_serie = _serie.dropna()  # avoid mutating a slice of the original dataframe
if self.kind_to_string(_serie.dtype.kind) == "string" or self.kind_to_string(_serie.dtype.kind) == "object":
_serie = _serie[_serie != ""]
return pd.Index(_serie.unique())
def index_from_excel(self, excel, sheetName=None, namedRange=None, cellRange=None, columnName=None, removeEmpty=True):
"""Returns a pandas.Index from an excel file.
excel: pp.excel object
sheetName: sheet name to be read
namedRange: name of the range to be read
cellRange: used together with sheetName to read from a single cell range
columnName: dataframe column used to create the index. By default the first column is used
removeEmpty: True to remove empty rows
Ex.
pp.index_from_excel(excelNode,"Sheet 1")
pp.index_from_excel(excelNode,namedRange="name_range")
pp.index_from_excel(excelNode,namedRange="name_range", columnName="indicadores")
"""
if isinstance(excel, str) or "openpyxl.workbook" in str(type(excel)):
_df = self.pandas_from_excel(
excel, sheetName, namedRange, cellRange)
return self.index_from_pandas(_df, columnName, removeEmpty)
else:
raise ValueError(
"excel can be excel_connection object or a str path to the filename")
def dataarray_from_pandas(self, dataframe, domainDic, valueColumns, defaultValue=None, valueColumnsAsDim=True, sumDuplicateRecords=True):
"""Returns a DataArray (valueColumns is string or (valueColumns is pd.Index and valueColumnsAsDim is True))
or Dataset (valueColumns is a list or (valueColumns is a pd.Index and valueColumnsAsDim is False)) from
a Pandas dataframe applying the set_domain function.
dataframe: Pandas dataframe with no index columns.
domainDic: Dictionary of column names and index names. Ex. {'Column Name': index_name}.
valueColumns: String, list or pd.Index. Dataframe's value columns.
defaultValue: Default value when applying set_domain function.
valueColumnsAsDim: If True, valueColumns becomes a dimension of resulting DataArray. If False, each value
column becomes a variable of the resulting Dataset.
sumDuplicateRecords: If True, sums identical rows. Otherwise, removes duplicates (except the first one).
Ex.
pp.dataarray_from_pandas(sales_dataframe, {'Sales Channel': sales_channels, 'Month': time}, 'Sales', 0.)
"""
_index_value_columns = None
# Check if valueColumns is string, list, np.ndarray or pd.Index (transform to list) and indexes is dict.
if isinstance(valueColumns, pd.Index):
_index_value_columns = valueColumns.copy()
_index_value_columns_name = _index_value_columns.name
valueColumns = valueColumns.values.tolist()
elif isinstance(valueColumns, np.ndarray):
valueColumns = valueColumns.tolist()
elif not isinstance(valueColumns, str) and not isinstance(valueColumns, list):
raise ValueError(
"valueColumns must be a string, a list or a pd.Index")
if not isinstance(domainDic, dict):
raise ValueError("Indexes must be a dictionary")
# Transform indexes into list and create list with all columns.
_index_cols = list(domainDic.keys())
_cols = _index_cols.copy()
if isinstance(valueColumns, list):
_cols = _cols + valueColumns
else:
_cols.append(valueColumns)
# If valueColumnsAsDim is True, check if every column is in dataframe and filter it.
if (valueColumnsAsDim is True) and isinstance(_index_value_columns, pd.Index):
_df_columns = dataframe.columns.values.tolist()
_cols = [value for value in _df_columns if value in _cols]
_filtered_value_columns = [
value for value in _cols if value not in _index_cols]
# Filter dataframe by columns.
_df = dataframe[_cols]
# Sum identical rows or remove duplicates.
if sumDuplicateRecords is True:
_df = _df.groupby(_index_cols, as_index=False).sum()
else:
_duplicate_rows = _df.duplicated(_index_cols)
_df = _df[~_duplicate_rows]
# If valueColumnsAsDim is True, melt valueColumns.
if (valueColumnsAsDim is True) and isinstance(_index_value_columns, pd.Index):
# Unpivot dataframe from wide format to long format by valueColumns.
_df = pd.melt(_df, id_vars=_index_cols, value_vars=_filtered_value_columns,
var_name=_index_value_columns_name, value_name='values')
_index_cols = _index_cols + [_index_value_columns_name]
domainDic[_index_value_columns_name] = _index_value_columns
# Create DataArray
_data = _df.set_index(_index_cols)['values'].to_xarray()
# Apply set_domain function to DataArray / Dataset.
_data = self.set_domain(_data, domainDic, defaultValue)
else:
# Create DataArray / Dataset.
_data = _df.set_index(_index_cols)[valueColumns].to_xarray()
# Apply set_domain function to DataArray / Dataset.
_data = self.set_domain(_data, domainDic, defaultValue)
return _data
def dataarray_from_excel(self, excel, sheetName=None, namedRange=None, cellRange=None, indexes=None, valueColumns=None, indexColumnHeaders=None, replaceByIndex=None, defaultValue=0):
"""Returns a xr.DataArray from excel file.
excel: excel_connection object.
sheetName: sheet name to be read
namedRange: name of the range to be read.
cellRange: used with sheetName to read from a simple range.
indexes: pd.Index objects to perform a change_index operation.
valueColumns: string with the column name of the dataframe that contains the values. pd.Index with column
names to convert columns to index.
indexColumnHeaders (optional): column names in pandas to parse with indexes. Used if header on dataframe
is not equal to index identifiers.
replaceByIndex (optional): replace index used in valueColumns by this index (using change_index).
Ex.
pp.dataarray_from_excel(excelNode,"Sheet 1",indexes=[indicadores],valueColumns="descuentos")
pp.dataarray_from_excel(excelNode,namedRange="nombre_rango",indexes=[indicadores],valueColumns=time)
"""
dataframe = self.pandas_from_excel(
excel, sheetName, namedRange, cellRange)
# Check size of dataframe. If it is empty, create empty dataArray. Else, proceed
if len(dataframe) == 0:
if not isinstance(indexes, list):
indexes = [indexes]
if isinstance(valueColumns, pd.Index):
indexes.append(valueColumns)
_data = np.full(tuple([(len(x)) for x in indexes]), defaultValue)
return xr.DataArray(_data, indexes)
else:
valueIndex = None
if isinstance(valueColumns, pd.Index):
valueIndex = valueColumns
valueColumns = valueIndex.tolist()
elif isinstance(valueColumns, str):
valueColumns = [valueColumns]
if indexColumnHeaders is None:
indexColumnHeaders = [index.name for index in indexes]
# Create total index and index names
_allindexes = indexes
_allIndexNames = indexColumnHeaders[:]
if not valueIndex is None:
_allindexes.append(valueIndex)
_allIndexNames.append("data_index")
# fill missing columns to prevent melt errors
cols_not_in_df = [
col for col in valueColumns if col not in dataframe.columns]
for col in cols_not_in_df:
dataframe[col] = np.nan
_full = dataframe.reset_index()[indexColumnHeaders + valueColumns].melt(
id_vars=indexColumnHeaders,
value_vars=valueColumns,
var_name="data_index",
value_name="data_value")
# sum to accumulate over duplicate records
_full = _full.groupby(_allIndexNames, as_index=False).sum()
_dtype = _full["data_value"].dtype
_dataType = self.kind_to_string(_dtype.kind)
if _dataType == "string":
_full = _full[(_full["data_value"] != "") &
(_full['data_value'].notna())]
else:
_full = _full[(_full["data_value"] != 0) &
(_full['data_value'].notna())]
_full.set_index(_allIndexNames, inplace=True)
_da = _full["data_value"].to_xarray()
# If indexed, rename index
if not indexes is None and not indexColumnHeaders is None:
if not isinstance(indexes, list):
indexes = [indexes]
idxPos = 0
for cubeIndex in indexes:
newIndexName = cubeIndex.name
if idxPos <= len(indexColumnHeaders)-1:
oldIndexName = indexColumnHeaders[idxPos]
if not newIndexName in _da.coords:
_da.coords[newIndexName] = _da.coords[oldIndexName]
_da = _da.swap_dims(
{oldIndexName: newIndexName}).drop(oldIndexName)
idxPos += 1
# Reindex to complete combinations
_da = _da.reindex({newIndexName: cubeIndex.values})
if not valueIndex is None:
newIndexName = valueIndex.name
oldIndexName = "data_index"
if not newIndexName in _da.coords:
_da.coords[newIndexName] = _da.coords[oldIndexName]
_da = _da.swap_dims(
{oldIndexName: newIndexName}).drop(oldIndexName)
# Reindex to complete combinations
_da = _da.reindex({newIndexName: valueIndex.values})
if not replaceByIndex is None:
_da = self.change_index(_da, valueIndex, replaceByIndex, 2)
return _da.fillna(defaultValue)
def to_dataarray(self, index):
"""Converts an index into DataArray indexed by index and with its values
Ex.
pp.to_dataarray(time_index)
"""
return xr.DataArray(index.values, [index])
def goal_seek(self, nodeIdX, nodeIdObjective, goal=0, startValue=1, matrixIndex=None):
"""Finds the value of nodeIdX that makes nodeIdObjective equal to goal.
nodeIdX: String with id of node X
nodeIdObjective: String with id of node Objective
matrixIndex: Index for multidimensional goal seek
"""
_getNodeFn = self.model.getNode
if self._exists_module("scipy"):
from scipy.optimize import newton
if matrixIndex is None:
def _f(x):
_getNodeFn(nodeIdX).definition = "result = " + str(x)
value = _getNodeFn(nodeIdObjective).result
return value - goal
_res = newton(_f, x0=startValue)
return _res
else:
_indexName = matrixIndex.name
for item in matrixIndex:
def _f(x):
_values = _getNodeFn(nodeIdX).result
_values.loc[{_indexName: slice(item, item)}] = x
np.set_printoptions(
threshold=np.prod(_values.values.shape))
data = np.array2string(_values.values, separator=",", precision=20,
formatter={
'float_kind': lambda x: "np.nan" if np.isnan(x) else repr(x)}
).replace('\n', '')
_getNodeFn(
nodeIdX).definition = f"result = xr.DataArray({data},[{_indexName}])"
value = _getNodeFn(nodeIdObjective).result
return self.subscript(value, matrixIndex, item)
_res = newton(_f, x0=startValue)
# After solving every element of matrixIndex, return the node's updated values
# (an assumed fix: the original code never returned from this branch)
return _getNodeFn(nodeIdX).result
else:
raise ValueError("scipy library not found")
def _exists_module(self, import_name):
"""Return true if module is installed
"""
try:
importlib.import_module(import_name)
return True
except ImportError:
return False
def install_library(self, pypi_name, import_name=None):
"""DEPRECATED. Use Lib manager instead
"""
if import_name is None:
import_name = pypi_name
if not self._exists_module(import_name):
# check in lib folder
# install lib
os.system(f"pip install {pypi_name}")
importlib.invalidate_caches()
if not self._exists_module(import_name):
raise ValueError(f"Can't install the module '{import_name}'")
return True
def create_time(self, date_start, date_end, freq='M', format='%Y.%m'):
"""Creates time index usign start, end dates and freq. The result is formated with format parameter
Ex.
pp.create_time('2016.01','2018.12')
pp.create_time('2016.01.01','2016.12.31',freq='D',format='%d/%m/%Y')
"""
if "." in date_start:
date_start = date_start.replace('.', '-')
if "." in date_end:
date_end = date_end.replace('.', '-')
return pd.Index(pd.period_range(start=date_start, end=date_end, freq=freq).strftime(format))
def lookup(self, dataArray, dataMap, sharedIndex, defaultValue=0):
"""Returns the value of dataArray indexed by the index of dataMap.
dataArray must be indexed by sharedIndex and dataArray values must correspond to elements of sharedIndex.
For example: Let's say you have a dataArray with an estimated inflation rate by Country ("inflation_rate"
is the name of the dataArray; "country" is the name of the index) and you want to assign it to the
corresponding Company depending on its location. On the other hand, there's a many-to-one map where each
Company is allocated to a single Country ("country_to_company_allocation"). The sharedIndex, in this case,
is Country ("country").
As a result,
pp.lookup( inflation_rate , country_to_company_allocation , country )
will return the estimated inflation rate by Company.
"""
try:
return dataArray.sel({sharedIndex.name: dataMap}, drop=True)
except Exception:
valuesOk = dataMap[dataMap.isin(sharedIndex.values)]
lookOk = dataArray.sel({sharedIndex.name: valuesOk}, drop=True)
final = lookOk.reindex(
{dataMap.dims[0]: dataMap.coords[dataMap.dims[0]].values})
return final.fillna(defaultValue)
def aggregate(self, dataArray, mapInfo, sourceIndex, targetIndex, aggregationFunction='sum'):
"""Converts dataArray, originally indexed by sourceIndex, to a dataArray indexed by targetIndex, aggregating
according to the mapInfo‘s allocation of targetIndex: sourceIndex.
mapInfo: gives the value of targetIndex for each element of sourceIndex (If the map does not match then the
element will not be set into target index and information will be lost)
aggregationFuction (optional): especifies the function to be used when grouping data (sum, mean, min, max,
median)
Ex. for aggregating time information into annual index, the syntax is:
pp.aggregate(dataArray, timeToYearsMap, time, years)
"""
# Transform map and targetIndex to list
if not isinstance(mapInfo, list):
mapInfo = [mapInfo]
if not isinstance(targetIndex, list):
targetIndex = [targetIndex]
if len(mapInfo) == len(targetIndex):
# Create dataframe map with new indexes
_map = pd.DataFrame(columns=[sourceIndex.name]).set_index(
sourceIndex.name)
for i in range(len(mapInfo)):
_map_i = mapInfo[i].to_dataframe(targetIndex[i].name)
_map = _map.join(_map_i, how='outer')
_df = dataArray.to_dataframe('value')
_empty_filter = _df["value"] != 0
# Drop rows with 0 if dataframe is not empty (to avoid error)
if len(_df[_empty_filter]) != 0:
_df = _df[_empty_filter]
# Join new dimensions (target) and drop original (source)
_df = _df.join(_map).reset_index()
_df.drop(columns=[sourceIndex.name], inplace=True)
_newDimList = [
xx for xx in dataArray.dims if xx not in [sourceIndex.name]]
# Groupby dataframe by new dimensions
for i in range(len(targetIndex)):
_newDimList.append(targetIndex[i].name)
_df = _df.groupby(_newDimList).agg(aggregationFunction)
# Transform to Xarray DataArray
_da = _df["value"].to_xarray()
# Reindex dimensions
_reindexDic = {}
for t_index in targetIndex:
_reindexDic.update({t_index.name: t_index.values})
for coord in dataArray.coords:
if coord != sourceIndex.name:
_reindexDic[coord] = dataArray.coords[coord].values
_da = _da.reindex(_reindexDic)
return _da.fillna(0)
else:
raise ValueError(
'mapInfo and targetIndex must have the same number of elements')
def choice(self, index, selection, includeAll=False):
"""DEPRECATED: Use pp.selector instead.
Returns the element in the "selection" position of the index.
"""
if selection == 0 and includeAll == 1:
return "All"
else:
values = None
if isinstance(index, pd.Index):
values = (index.values[:1000]).tolist()
elif isinstance(index, np.ndarray):
values = (index[:1000]).tolist()
else:
values = list(index)[:1000]
if not values is None and len(values) >= selection:
return values[selection-1]
return ""
def dynamic(self, dataArray, index, shift, initialValues=None):
"""Performs cyclic calculations between nodes.
dataArray: dataArray on which to perform the cyclic dependency calculation
index: Index from dataArray to shift
shift: number of elements to shift. Can be positive or negative
initialValues (optional): scalar or 1-dim dataArray. Initial values to apply to the first "shift" elements
"""
_da = dataArray.shift({index.name: (shift*-1)})
if not initialValues is None:
_da = _da.fillna(initialValues)
return _da
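# Illustrative usage (a sketch; `sales` and `time` are assumed model objects):
#   pp.dynamic(sales, time, -1, initialValues=0)   # sales of the previous period, 0 where none exists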
def slice_dataarray(self, dataArray, index, position):
"""Filters dataArray by integer position along the specified index.
dataArray: dataArray to be filtered
index: pp.index
position: int
Ex.
pp.slice_dataarray(dataArray1, index1, 0)
"""
if not isinstance(dataArray, xr.DataArray):
raise ValueError(
"the 'dataArray' parameter must be of the type xr.DataArray")
return dataArray.isel({index.name: position}, drop=True)
def fill_inf(self, dataArray, value=0):
"""Fills np.inf values with value
Ex.
pp.fill_inf(dataArray, 0)
"""
return self.apply_fn(dataArray, lambda x: value if np.isinf(x) else x)
def fill_all(self, dataArray, value=0):
"""Fills np.inf and np.nan with value
Ex.
pp.fill_all(dataArray, 0)
"""
return self.fill_inf(dataArray.fillna(value), value)
def add_periods(self, start, periods, freq='M', format='%Y.%m'):
"""Adds periods to a date. Allows setting freq and output format
Ex.
pp.add_periods('2016.01', 6)
pp.apply_fn(pp.add_periods, projects_initial_date, projects_duration)
"""
if "." in start:
start = start.replace('.', '-')
if periods < 0:
return pd.period_range(end=start, periods=-periods+1, freq=freq).strftime(format)[0]
else:
return pd.period_range(start=start, periods=periods+1, freq=freq).strftime(format)[-1]
def npv(self, rate, flow, time_index, offset=1):
""""Returns the Net Present Value (NPV) of a cash flow with equally spaced periods. The flow parameter must contain
a series of periodic payments (negative values) and inflows (positive values), indexed by time_index.
The optional offset parameter especifies the offset of the first value relative to the current time period. By
default, offset is set to 1, indicating that the first value is discounted as if it is one step in the future
"""
_number_of_periods = self.get_pos(time_index) + offset
_present_values = flow / (1 + rate) ** _number_of_periods
_npv = _present_values.sum(time_index.name)
return _npv
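# Worked example (a sketch; `flow` is an assumed DataArray over `time` with values [-1000, 500, 600]):
#   pp.npv(0.1, flow, time)   # -1000/1.1 + 500/1.21 + 600/1.331 ≈ -45.08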
def copy_index(self, dataArray, sortValues=True):
"""Generates a pd.Index with the unique values of the dataArray.
"""
np_values = dataArray.values.flatten()
# NumPy's unique function automatically sorts; pandas' unique does not.
if sortValues is True:
return pd.Index(np.unique(np_values))
else:
return pd.Index(np_values).unique()
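# Illustrative usage (a sketch; `assignments` is an assumed DataArray of labels):
#   pp.copy_index(assignments)                    # sorted index of the unique values
#   pp.copy_index(assignments, sortValues=False)  # unique values in order of first appearance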
def sequence_index(self, _start, _end, _step=1):
"""
Returns a pd.Index with the sequence between 'start' and 'end' parameters. Both limits are inclusive. Values are
converted to string.
"""
try:
_start = int(_start)
_end = int(_end) + 1
_step = int(_step)
except (TypeError, ValueError):
raise ValueError(
"Only numbers are allowed as 'start', 'end' and 'step' parameters")
_list = [str(x) for x in range(_start, _end, _step)]
_index = pd.Index(_list)
return _index
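# Illustrative usage (a sketch):
#   pp.sequence_index(2020, 2023)   # pd.Index(['2020', '2021', '2022', '2023'])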
def subindex(self, dataArray, targetValue, targetIndex, method='Last'):
"""Returns a dataArray containing the value of targetIndex for which dataArray (indexed by targetIndex) is equal
to targetValue.
dataArray: Xarray dataArray.
targetValue: Integer, Float or String.
targetIndex: Pandas Index.
method: There are two options: "Last" returns the last occurrence of targetIndex for which dataArray is equal
to targetValue. "First" returns the first occurrence.
"""
# Equals dataArray to targetValue and cumulates it along targetIndex.
_matriz_1_0 = xr.where(dataArray == targetValue, 1, 0)
_matriz_1_0_acum = xr.where(
_matriz_1_0 == 1, _matriz_1_0.cumsum(targetIndex.name), 0)
if method == 'Last':
# Get max cumulated value along targetIndex
_max = _matriz_1_0_acum.max(targetIndex.name)
_max = xr.where(_max == 0, np.nan, _max)
_matriz_max = xr.where(
_matriz_1_0_acum == _max, self.to_dataarray(targetIndex), np.nan)
return _matriz_max.max(targetIndex.name)
elif method == 'First':
# Get min (1) cumulated value along targetIndex
_matriz_min = xr.where(_matriz_1_0_acum == 1,
self.to_dataarray(targetIndex), np.nan)
return _matriz_min.max(targetIndex.name)
else:
raise ValueError("Insert a valid method")
def concat_rows(self, array_param, index_param):
"""Flattens array_param by replacing with a new index that includes all combinatios of values from
index_param
"""
_index = pd.Index([])
for i in index_param.values:
_index = self.concat_index(_index, pd.Index(
array_param.sel({index_param.name: i}, drop=True).values))
return _index
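# Illustrative usage (a sketch; `names_by_group` is an assumed DataArray indexed by `group`):
#   pp.concat_rows(names_by_group, group)   # single flat index with the values of every group element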
def log_task(self, task_state="PROGRESS", task_description=None, task_activity=None, task_info=None):
"""Generates log entry. Used for schedule tasks
task_state: PROGRESS, INFO, WARNING, FAILURE, RETRY, SUCCESS, REVOKED, STARTED, PENDING, RECEIVED
task_description: Shot description of task. example: start process
task_activity: other short description
task_info: json with more info
"""
import json
_params = {
"state": task_state,
"description": task_description,
"activity": task_activity,
"info": json.dumps(task_info)}
res = None
task_log_endpoint = self.model.getNode("task_log_endpoint").result
if task_log_endpoint:
# only used from pyplan_engine
import requests
from os import environ
base_host = environ['PYPLAN_API_HOST'] + task_log_endpoint
res = requests.post(base_host, data=_params)
else:
print(str(_params))
return res
def pandas_from_xlsb_file(self, filepath):
"""Returns a pandas DataFrame from xlsb file
"""
if self._exists_module("pyxlsb"):
from pyxlsb import open_workbook as open_xlsb
_df = []
with open_xlsb(filepath) as wb:
with wb.get_sheet(1) as sheet:
for row in sheet.rows():
_df.append([item.v for item in row])
return pd.DataFrame(_df[1:], columns=_df[0])
else:
raise ValueError("pyxlsb library not found")
def selector(self, options, selected, multiselect=False):
"""Creates UI Pyplan selector for decision nodes
options: List or pandas.Index with values that can be selected
selected: current selected index value
multiselect: True to allow multiple selection
"""
return Selector(options, selected, multiselect)
def send_message(self, message_text, message_title=None, not_level_reverse="info"):
"""Sends message to UI. Only used with Pyplan UI
Ex.
pp.send_message("The process has been completed","Process complete!","success")
"""
if self.model and self.model.ws:
notification_levels = [
NotLevels.INFO, NotLevels.SUCCESS, NotLevels.WARNING, NotLevels.ERROR]
not_level_reverse = NotLevels(not_level_reverse) if NotLevels(
not_level_reverse) in notification_levels else NotLevels.INFO
self.model.ws.ws_notification_message(
message=message_text, title=message_title, not_level=not_level_reverse)
def progressbar(self, progress, message_text="", not_level_reverse="info"):
"""Creates and updates progress bar. Only used with Pyplan UI
Ex.
pp.progressbar(20, "Step 1","info")
pp.progressbar(100, "Complete!","success")
"""
if self.model and self.model.ws:
notification_levels = [
NotLevels.INFO, NotLevels.SUCCESS, NotLevels.WARNING, NotLevels.ERROR]
not_level_reverse = NotLevels(not_level_reverse) if NotLevels(
not_level_reverse) in notification_levels else NotLevels.INFO
self.model.ws.ws_notification_progress_bar(
progress=progress, message=message_text, not_level=not_level_reverse)
def create_report(self, reportItems, reportIndexName="Report index", reportIndex=None):
"""Concatenates the reportItems dic dataArrays along the reportIndex dimension
reportItems: dict or list with datarrays to concat (must have the same structure)
reportIndexName: Name of the new ReportIndex dimension
reportIndex: Overwrite ReportIndex dimension
Ex.
pp.create_report(reportItems={"Demand":demand, "Product Stock":stock}, reportIndexName="New Report")
"""
if isinstance(reportItems, dict):
report_index = list(reportItems)
report_values = list(reportItems.values())
_titles = [str(xx.name) for xx in report_values]
_index = pd.Index(report_index, name=reportIndexName)
return xr.concat(report_values, _index)
else:
_titles = [str(xx.name) for xx in reportItems]
_index = None
if reportIndex is None:
_index = pd.Index(_titles, name=reportIndexName)
else:
_index = reportIndex
return xr.concat(reportItems, _index)
def pandas_from_dataarray(self, dataarray):
"""Create dataframe pandas from datarray with n dimensions
Ex.
pp.pandas_from_dataarray(dataArrayNode)
"""
return dataarray.stack(z=dataarray.dims).to_dataframe("value")
def pandas_from_access(self):
"""Class to manage access databases
"""
return Pandas_from_acc()
def __generate_pkl_from_excel(self, workbook, filepath, targetDir, maxFileSizeMB=None, flagFilename='flag.tmp'):
"""Generates compressed pickle from excel file
workbook: openpyxl workbook
filepath: full file path
targetDir: path where pickles will be stored
maxFileSizeMB: file size limit in megabytes
flagFilename: name of temporary flag file
"""
optimizable_templates = ['.xlsx', '.xlsm', '.xlsb']
_, ext = os.path.splitext(filepath)
# Generate pickle for selected file types if its size is below max limit
if ext in optimizable_templates and (maxFileSizeMB is None or os.stat(filepath).st_size/1024/1024 <= maxFileSizeMB):
if not os.path.isdir(targetDir):
os.mkdir(targetDir)
# When first user runs optimization, creates flag file that gets deleted after whole optimization is done
# If another user wants to read the Excel file while the optimization is running, the flag file will be present
flag_filepath = os.path.join(targetDir, flagFilename)
with open(flag_filepath, 'w'):
pass
try:
for item in workbook.defined_names.definedName:
try:
if not item.is_external and item.type == 'RANGE' and item.attr_text and '!$' in item.attr_text:
target_filepath = os.path.join(
targetDir, f'{item.name}.pkl')
if os.path.isfile(target_filepath):
os.remove(target_filepath)
dests = item.destinations
for title, coord in dests:
if title in workbook:
ws = workbook[title]
rangeToRead = ws[coord]
if not isinstance(rangeToRead, tuple):
rangeToRead = ((rangeToRead,),)
cols = []
values = []
for index, row in enumerate(rangeToRead):
if index == 0:
cols = [str(c.value) for c in row]
else:
values.append(
[c.value for c in row])
nn = 0
_finalCols = []
for _col in cols:
if _col is None:
_finalCols.append(
f'Unnamed{str(nn)}')
nn += 1
else:
_finalCols.append(_col)
df = pd.DataFrame(
values, columns=_finalCols).dropna(how='all')
df.to_pickle(target_filepath,
compression='gzip')
except Exception as e:
print(
f"Could not generate pkl for range '{item.name}'. Error: {e}")
finally:
os.remove(flag_filepath)
def __remove_old_file(self, filepath, maxElapsedMinutes=1):
"""Deletes file if its modified date is older than current date minus maxElapsedMinutes
"""
if os.path.isfile(filepath):
# Dates are expressed in seconds since epoch (floats)
modified_date = os.path.getmtime(filepath)
min_modified_date = time.time() - (maxElapsedMinutes * 60)
if modified_date < min_modified_date:
os.remove(filepath)
def __read_pickle_df(self, filepath, indexes=None):
"""Loads dataframe from pickled file
"""
df = pd.read_pickle(filepath, compression='gzip')
if not indexes is None:
df.set_index(indexes, inplace=True)
return df
def get_nested_lists_shape(self, lst, shape=()):
"""Returns a tuple with the shape of nested lists similarly to numpy's shape.
lst: the nested list
shape: the shape up to the current recursion depth
"""
if not isinstance(lst, list):
# base case
return shape
# peek ahead and assure all lists in the next depth have the same length
if isinstance(lst[0], list):
l = len(lst[0])
if not all(len(item) == l for item in lst):
raise ValueError('Not all lists have the same length')
shape += (len(lst), )
# recurse
shape = self.get_nested_lists_shape(lst[0], shape)
return shape
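# Illustrative usage (a sketch):
#   pp.get_nested_lists_shape([[1, 2, 3], [4, 5, 6]])   # (2, 3)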
def __concat_dataarrays_over_one_dim(self, valuesList, dim):
"""Concatenates Xarray DataArrays along a new dimension, broadcasting by all possible
dimensions
valuesList: list of DataArrays, int, str, float. At least one of them must be DataArray
dim: Pandas Index with same length as valuesList
"""
# Error handling
if not isinstance(valuesList, list):
raise ValueError('valuesList must be a list')
if not any([isinstance(v, xr.DataArray) for v in valuesList]):
raise ValueError(
'At least one of the objects in valuesList must be a Xarray DataArray')
if not isinstance(dim, pd.Index):
raise ValueError('dim must be a pandas Index')
valuesListShape = self.get_nested_lists_shape(valuesList)
dimShape = (len(dim.values),)
if valuesListShape != dimShape:
raise ValueError(
f'Shape of valuesList {valuesListShape} is not equal to shape of dim {dimShape}')
# Get all possible dimensions
all_dims_names, all_dims_indexes = [], {}
for v in valuesList:
if isinstance(v, xr.DataArray):
dims_v = v.dims
indexes_v = v.indexes
for d in dims_v:
if d not in all_dims_names:
all_dims_names.append(d)
all_dims_indexes.update({d: indexes_v[d]})
newValuesList = []
for v in valuesList:
# Add dimensions not present in original DataArray
if isinstance(v, xr.DataArray):
dims_v = v.dims
if not all(d in dims_v for d in all_dims_names):
for d in all_dims_names:
if d not in dims_v:
v = v.expand_dims({d: all_dims_indexes[d].values})
# When value is a scalar (str, int, float, usually)
else:
v = xr.DataArray(v, list(all_dims_indexes.values()))
newValuesList.append(v)
return xr.concat(newValuesList, dim=dim)
def concat_dataarrays(self, valuesList, dim):
"""Concatenates Xarray DataArrays along one or two new dimensions, broadcasting
by all possible dimensions
valuesList: list or list of lists of DataArrays, int, str, float. At least one
of them must be a DataArray object
dim: Pandas Index or list of Pandas Indexes with same shape as valuesList
Ex.
pp.concat_dataarrays(
valuesList=[node1, node2, node3],
dim=three_items_index)
pp.concat_dataarrays(
valuesList=['String Example', node2, 0],
dim=three_items_index)
pp.concat_dataarrays(
valuesList=[[node1, node2, node3], [node4, node5, node6]],
dim=[two_items_index, three_items_index])
"""
valuesListShape = self.get_nested_lists_shape(valuesList)
if isinstance(dim, list):
# Error handling
# Check length
if len(dim) > 2:
raise ValueError("Can only concat along 1 or 2 dimensions")
if len(valuesListShape) == 1:
raise ValueError(
"valuesList must be list of lists if dim is a list")
# Ensure shapes are the same
if valuesListShape[0] == 1:
# to make it comparable to dimShape
valuesListShape = (valuesListShape[1],)
dimShape = tuple(len(d) for d in dim)
if not (valuesListShape == dimShape):
raise ValueError(
f"Shape of valuesList {valuesListShape} is not equal to shape of dim {dimShape}")
if len(dimShape) == 2:
# Broadcast and concat along each dim
das = []
for lst in valuesList:
da = self.__concat_dataarrays_over_one_dim(
valuesList=lst, dim=dim[1])
das.append(da)
return self.__concat_dataarrays_over_one_dim(valuesList=das, dim=dim[0])
else:
return self.__concat_dataarrays_over_one_dim(valuesList=valuesList[0], dim=dim[0])
else:
return self.__concat_dataarrays_over_one_dim(valuesList=valuesList, dim=dim)
class Selector(object):
""" Class to manage UI Pyplan selectors.
"""
SERIALIZABLE_PROPERTIES = ['options', 'selected', 'multiselect']
def __init__(self, options, selected, multiselect=False):
""" Create UI Pyplan selector for desicion nodes
Params:
options: List or pd.Index with available values that can be selected
selected: current selected index value's
multiselect: True to allow multiple selection
"""
self._options = options
self._multiselect = multiselect
self.selected = selected
@property
def value(self):
if self.multiselect:
return [self.options[i] for i in self.selected]
else:
return self.options[self.selected]
@property
def options(self):
return self._options
@property
def multiselect(self):
return self._multiselect
@property
def selected(self):
res = None
if self.multiselect:
res = []
for nn in self._selected:
if nn < len(self._options):
res.append(nn)
if len(res) == 0:
res = list(range(len(self._options)))
else:
res = self._selected if self._selected < len(self._options) else 0
return res
@selected.setter
def selected(self, value):
if self.multiselect:
if value is None:
self._selected = []
elif isinstance(value, list):
self._selected = value
else:
self._selected = [value]
else:
if isinstance(value, list):
self._selected = value[0]
else:
self._selected = value
def toObj(self):
res = dict()
for k in Selector.SERIALIZABLE_PROPERTIES:
if hasattr(self, k):
if k == "options" and isinstance(getattr(self, k), pd.Index):
res[k] = getattr(self, k).tolist()
else:
res[k] = getattr(self, k)
return res
def isSameValue(self, value):
if self.multiselect and isinstance(self.selected, list) and isinstance(value, list):
l1 = self.selected.copy()
l2 = value.copy()
l1.sort()
l2.sort()
return l1 == l2
else:
return self.selected == value
def generateDefinition(self, definition, value):
if self.multiselect:
if not isinstance(value, list):
if value is None:
value = 0
value = [value]  # wrap the scalar in a list; list(value) would fail for ints
elif len(value) == 0:
value = list(range(len(self.options)))
newPos = str(value)
reg = r'(?:[^\]\[,]+|\[[^\]\[]+\])'
groups = re.findall(reg, definition)
if len(groups) > 2:
if not str(groups[-1]) in ["False)", "True)", "multiselect=False)", "multiselect=True)"]:
groups.append("False)")
newDef = ""
for nn in range(len(groups)-2):
newDef += groups[nn]
newDef = f"{newDef},{newPos},{groups[-1]}"
return newDef
return None
class Pandas_from_acc():
"""Class that allows to read access files with pandas
EXAMPLES OF USE:
# Listing the tables.
for tbl in pandas_from_access.list_tables("my.mdb"):
print(tbl)
# Read a small table.
df = pandas_from_access.read_table("my.mdb", "MyTable")
# Read a huge table.
accumulator = []
for chunk in pandas_from_access.read_table("my.mdb", "MyTable", chunksize=10000):
accumulator.append(f(chunk))
"""
TABLE_RE = re.compile(r"CREATE TABLE \[([a-zA-Z_0-9 ]+)\]\s+\((.*?\));",
re.MULTILINE | re.DOTALL)
DEF_RE = re.compile(r"\s*\[(\w+)\]\s*(.*?),")
@classmethod
def list_tables(cls, rdb_file, encoding="latin-1"):
"""
:param rdb_file: The MS Access database file.
:param encoding: The content encoding of the output. I assume `latin-1`
because so many MS files have that encoding. But, MDBTools may
actually be UTF-8.
:return: A list of the tables in a given database.
"""
tables = cls.__get_tables(rdb_file, encoding)
return [table for table, _ in tables]
@classmethod
def read_schema(cls, rdb_file, encoding='utf8'):
"""
:param rdb_file: The MS Access database file.
:param encoding: The schema encoding. I'm almost positive that MDBTools
spits out UTF-8, exclusively.
:return: a dictionary of table -> column -> access_data_type
"""
tables = cls.__get_tables(rdb_file, encoding)
schema = {}
for table, defs in tables:
schema[table] = cls.__extract_defs(defs)
return schema
@classmethod
def to_pandas_schema(cls, schema, implicit_string=True):
"""
:param schema: the output of `read_schema`
:param implicit_string: mark strings and unknown dtypes as `np.str_`.
:return: a dictionary of table -> column -> np.dtype
"""
pd_schema = {}
for tbl, defs in schema.items():
pd_schema[tbl] = None
sub_schema = {}
for column, data_type in defs.items():
dtype = cls.__extract_dtype(data_type)
if dtype is not None:
sub_schema[column] = dtype
elif implicit_string:
sub_schema[column] = np.str_
pd_schema[tbl] = sub_schema
return pd_schema
@classmethod
def read_table(cls, rdb_file, table_name, *args, **kwargs):
"""
Read a MS Access database as a Pandas DataFrame.
Unless you set `converters_from_schema=False`, this function assumes you
want to infer the schema from the Access database's schema. This sets the
`dtype` argument of `read_csv`, which makes things much faster, in most
cases. If you set the `dtype` keyword argument also, it overrides
inferences. The `schema_encoding` keyword argument passes through to
`read_schema`. The `implicit_string` argument passes through to
`to_pandas_schema`.
I recommend setting `chunksize=k`, where k is some reasonable number of
rows. This is a simple interface, that doesn't do basic things like
counting the number of rows ahead of time. You may inadvertently start
reading a 100TB file into memory. (Although, being a MS product, I assume
the Access format breaks after 2^32 bytes -- har, har.)
:param rdb_file: The MS Access database file.
:param table_name: The name of the table to process.
:param args: positional arguments passed to `pd.read_csv`
:param kwargs: keyword arguments passed to `pd.read_csv`
:return: a pandas `DataFrame` (or, `TextFileReader` if you set
`chunksize=k`)
"""
if kwargs.pop('converters_from_schema', True):
specified_dtypes = kwargs.pop('dtype', {})
schema_encoding = kwargs.pop('schema_encoding', 'utf8')
schemas = cls.to_pandas_schema(cls.read_schema(rdb_file, schema_encoding),
kwargs.pop('implicit_string', True))
dtypes = schemas[table_name]
dtypes.update(specified_dtypes)
if dtypes != {}:
kwargs['dtype'] = dtypes
cmd = ['mdb-export', rdb_file, table_name]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
return pd.read_csv(proc.stdout, *args, **kwargs)
# private class methods
@classmethod
def __get_tables(cls, rdb_file, encoding='utf8'):
output = subprocess.check_output(['mdb-schema', rdb_file])
lines = output.decode(encoding).splitlines()
schema_ddl = "\n".join(l for l in lines if l and not l.startswith('-'))
return Pandas_from_acc.TABLE_RE.findall(schema_ddl)
@classmethod
def __extract_dtype(cls, data_type):
# Note, this list is surely incomplete. But, I only had one .mdb file
# at the time of creation. If you see a new data-type, patch-pull or just
# open an issue.
data_type = data_type.lower()
if data_type.startswith('double'):
return np.float64  # np.float_ was removed in NumPy 2.0; float64 is equivalent
elif data_type.startswith('long'):
return np.float64
elif data_type.startswith('bool'):
return np.bool_
elif data_type.startswith('text') or data_type.startswith('memo'):
return np.str_
elif data_type.startswith('ole'):
return np.bytes_
else:
return None
@classmethod
def __extract_defs(cls, defs_str):
defs = {}
lines = defs_str.splitlines()
for line in lines:
m = Pandas_from_acc.DEF_RE.match(line)
if m:
defs[m.group(1)] = m.group(2)
return defs
| 41.933841 | 186 | 0.582313 | 8,088 | 71,623 | 5.010138 | 0.124135 | 0.005306 | 0.006663 | 0.002073 | 0.247693 | 0.188934 | 0.165786 | 0.137826 | 0.115369 | 0.11147 | 0 | 0.006405 | 0.332966 | 71,623 | 1,707 | 187 | 41.958407 | 0.841779 | 0.267582 | 0 | 0.233333 | 0 | 0 | 0.049335 | 0.003373 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074747 | false | 0.00101 | 0.029293 | 0.00202 | 0.220202 | 0.005051 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
109b15c8163fbef0b5c3ccbc97a0ecee7d17c2d0 | 6,813 | py | Python | dimensigon/web/api_1_0/urls/locker.py | dimensigon/dimensigon | 079d7c91a66e10f13510d89844fbadb27e005b40 | [
"Apache-2.0"
] | 2 | 2020-11-20T10:27:14.000Z | 2021-02-21T13:57:56.000Z | dimensigon/web/api_1_0/urls/locker.py | dimensigon/dimensigon | 079d7c91a66e10f13510d89844fbadb27e005b40 | [
"Apache-2.0"
] | null | null | null | dimensigon/web/api_1_0/urls/locker.py | dimensigon/dimensigon | 079d7c91a66e10f13510d89844fbadb27e005b40 | [
"Apache-2.0"
] | null | null | null | import logging
import multiprocessing as mp
import threading
from datetime import datetime
from flask import request, current_app, g, jsonify
from flask_jwt_extended import jwt_required, get_jwt_identity
from sqlalchemy.exc import OperationalError
from dimensigon import defaults
from dimensigon.domain.entities import Catalog
from dimensigon.domain.entities.locker import Scope, State, Locker
from dimensigon.web import db, errors
from dimensigon.web.api_1_0 import api_bp
from dimensigon.web.decorators import securizer, forward_or_dispatch, validate_schema
from dimensigon.web.helpers import transaction
from dimensigon.web.json_schemas import locker_prevent_post, locker_unlock_lock_post
logger = logging.getLogger('dm.lock')
@api_bp.route('/locker', methods=['GET'])
@forward_or_dispatch()
@jwt_required()
@securizer
def locker():
data = []
for l in Locker.query.all():
data.append({l.scope.name: l.state.name})
return jsonify(data), 200
def revert_preventing(app, scope, applicant):
with app.app_context():
try:
l = Locker.query.with_for_update().get(scope)
if l.state == State.PREVENTING and l.applicant == applicant:
l.state = State.UNLOCKED
l.applicant = None
db.session.commit()
except OperationalError:
db.session.rollback()
@api_bp.route('/locker/prevent', methods=['POST'])
@forward_or_dispatch()
@jwt_required()
@securizer
@validate_schema(POST=locker_prevent_post)
def locker_prevent():
json_data = request.get_json()
l: Locker = Locker.query.get(Scope[json_data['scope']])
logger.debug(f"PreventLock requested on {json_data.get('scope')} from {g.source}")
# for the ORCHESTRATION scope, check whether the applicant is the same as the current one
if l.scope == Scope.ORCHESTRATION \
and l.state in (State.PREVENTING, State.LOCKED) \
and l.applicant == json_data.get('applicant'):
return {'message': f"{l.scope.name} already in {l.state.name} state"}, 210
elif l.scope == Scope.UPGRADE and l.state in (State.PREVENTING, State.LOCKED):
return {'message': f"{l.scope.name} already in {l.state.name} state"}, 210
# check status from current scope
if l.state == State.UNLOCKED:
# check priority
prioritary_lockers = Locker.query.filter(Locker.scope != l.scope).all()
prioritary_lockers = [pl for pl in prioritary_lockers if pl.scope < l.scope]
cond = any([pl.state in (State.PREVENTING, State.LOCKED) for pl in prioritary_lockers])
if not cond:
# catalog serialization
if json_data['scope'] != Scope.UPGRADE.name:
datemark = datetime.strptime(json_data['datemark'], defaults.DATEMARK_FORMAT)
catalog_ver = Catalog.max_catalog()
if datemark < catalog_ver:
raise errors.ObsoleteCatalog(catalog_ver, datemark)
with transaction():
l.state = State.PREVENTING
l.applicant = json_data.get('applicant')
th = threading.Timer(defaults.TIMEOUT_PREVENTING_LOCK, revert_preventing,
(current_app._get_current_object(), l.scope, l.applicant))
th.daemon = True
th.start()
return {json_data['scope']: 'PREVENTING'}, 200
else:
raise errors.PriorityLocker(l.scope)
else:
raise errors.StatusLockerError(l.scope, 'P', l.state)
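# Shared counter for the UPGRADE scope: every lock request on an already LOCKED
# upgrade locker increments it, every unlock decrements it, and the locker is
# only released once the counter returns to zero (see locker_lock/locker_unlock).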
counter = mp.Value('i', 0)
@api_bp.route('/locker/lock', methods=['POST'])
@forward_or_dispatch()
@jwt_required()
@securizer
@validate_schema(POST=locker_unlock_lock_post)
def locker_lock():
json_data = request.get_json()
l: Locker = Locker.query.get(Scope[json_data['scope']])
logger.debug(f"Lock requested on {json_data.get('scope')} from {g.source}")
if Scope[json_data['scope']] == Scope.ORCHESTRATION \
and l.state == State.LOCKED \
and l.applicant == json_data.get('applicant'):
return {'message': f"{json_data['scope']} already in {l.state} state"}, 210
elif l.scope == Scope.UPGRADE and l.state == State.LOCKED:
with counter.get_lock():
counter.value += 1
return {'message': f"{l.scope.name} already in {l.state.name} state"}, 210
if l.state == State.PREVENTING:
if l.applicant == json_data['applicant']:
with transaction():
l.state = State.LOCKED
if l.scope == Scope.UPGRADE:
with counter.get_lock():
counter.value += 1
logger.debug(f"Lock from {g.source} on {l.scope.name} acquired")
return {json_data['scope']: 'LOCKED'}, 200
else:
raise errors.ApplicantLockerError(l.scope)
else:
raise errors.StatusLockerError(l.scope, 'L', l.state)
@api_bp.route('/locker/unlock', methods=['POST'])
@forward_or_dispatch()
@jwt_required()
@securizer
@validate_schema(POST=locker_unlock_lock_post)
def locker_unlock():
json_data = request.get_json()
l: Locker = Locker.query.get(Scope[json_data['scope']])
logger.debug(f"Unlock requested on {json_data.get('scope')} from {g.source}")
if 'force' in json_data and json_data['force']:
if get_jwt_identity() != '00000000-0000-0000-0000-000000000001':
raise errors.UserForbiddenError()
else:
with transaction():
l.state = State.UNLOCKED
l.applicant = None
if l.scope == Scope.UPGRADE:
with counter.get_lock():
counter.value = 0
return {json_data['scope']: 'UNLOCKED'}, 200
if l.scope == Scope.ORCHESTRATION and l.state == State.UNLOCKED:
return {'message': f"{json_data['scope']} already in {l.state} state"}, 210
elif l.scope == Scope.UPGRADE and l.state == State.LOCKED:
with counter.get_lock():
counter.value -= 1
if counter.value == 0:
with transaction():
l.state = State.UNLOCKED
l.applicant = None
logger.debug(f"Lock on {l.scope.name} released")
return {json_data['scope']: 'UNLOCKED'}, 200
else:
return {'message': 'Pending upgrades'}, 210
elif l.state == State.PREVENTING or l.state == State.LOCKED:
if l.applicant == json_data['applicant']:
with transaction():
l.state = State.UNLOCKED
l.applicant = None
logger.debug(f"Lock on {l.scope.name} released")
return {json_data['scope']: 'UNLOCKED'}, 200
else:
raise errors.ApplicantLockerError(l.scope)
else:
raise errors.StatusLockerError(l.scope, 'U', l.state)
| 39.155172 | 95 | 0.630853 | 848 | 6,813 | 4.946934 | 0.169811 | 0.037187 | 0.044577 | 0.027175 | 0.547318 | 0.518951 | 0.470083 | 0.453159 | 0.415256 | 0.394756 | 0 | 0.014838 | 0.248202 | 6,813 | 173 | 96 | 39.381503 | 0.804178 | 0.020402 | 0 | 0.455782 | 0 | 0 | 0.126706 | 0.016194 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034014 | false | 0 | 0.102041 | 0 | 0.217687 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
109f4382ccda975a53588f5b96cd942c837a4818 | 3,650 | py | Python | src/azure-cli/azure/cli/command_modules/storage/tests/latest/test_storage_container_legal_hold.py | staer/azure-cli | 93c47df7565a6ff1bca080bd68be2a8252545def | [
"MIT"
] | 3,287 | 2016-07-26T17:34:33.000Z | 2022-03-31T09:52:13.000Z | src/azure-cli/azure/cli/command_modules/storage/tests/latest/test_storage_container_legal_hold.py | staer/azure-cli | 93c47df7565a6ff1bca080bd68be2a8252545def | [
"MIT"
] | 19,206 | 2016-07-26T07:04:42.000Z | 2022-03-31T23:57:09.000Z | src/azure-cli/azure/cli/command_modules/storage/tests/latest/test_storage_container_legal_hold.py | staer/azure-cli | 93c47df7565a6ff1bca080bd68be2a8252545def | [
"MIT"
] | 2,575 | 2016-07-26T06:44:40.000Z | 2022-03-31T22:56:06.000Z | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from azure.cli.testsdk import (ScenarioTest, JMESPathCheck, ResourceGroupPreparer, StorageAccountPreparer, api_version_constraint)
from azure.cli.testsdk.scenario_tests import AllowLargeResponse
from azure.cli.core.profiles import ResourceType
class StorageLegalHold(ScenarioTest):
@AllowLargeResponse()
@ResourceGroupPreparer()
def test_legal_hold(self, resource_group):
storage_account = self.create_random_name('clistorage', 20)
self.cmd('storage account create -g {} -n {} --kind StorageV2'.format(
resource_group, storage_account))
container_name = 'container1'
self.cmd('storage container create --account-name {} -n {} --metadata k1=v1 k2=v2'.format(storage_account, container_name))
self.cmd('storage container legal-hold show --account-name {} -c {} -g {}'.format(
storage_account, container_name, resource_group), checks=[
JMESPathCheck("tags", [])])
result = self.cmd('storage container legal-hold set --account-name {} -c {} -g {} --tags tag1 tag2'.format(
storage_account, container_name, resource_group)).get_output_in_json()
self.assertIn("tag1", result.get("tags"))
self.assertIn("tag2", result.get("tags"))
self.cmd('storage container legal-hold clear --account-name {} -c {} -g {} --tags tag1 tag2'.format(
storage_account, container_name, resource_group), checks=[
JMESPathCheck("tags", [])])
@AllowLargeResponse()
@ResourceGroupPreparer()
@StorageAccountPreparer(kind='StorageV2', name_prefix='clitest', location='eastus2euap')
@api_version_constraint(resource_type=ResourceType.MGMT_STORAGE, min_api='2021-06-01')
def test_legal_hold_with_allow_protected_append_writes_all(self, resource_group, storage_account):
container_name = 'container1'
self.cmd('storage container create --account-name {} -n {} --metadata k1=v1 k2=v2'.format(storage_account,
container_name))
self.cmd('storage container legal-hold show --account-name {} -c {} -g {}'.format(
storage_account, container_name, resource_group), checks=[
JMESPathCheck("tags", []),
JMESPathCheck("allowProtectedAppendWritesAll", None)
])
self.cmd('storage container legal-hold set --account-name {} -c {} -g {} --tags tag1 tag2 --w-all'.format(
storage_account, container_name, resource_group), checks=[
JMESPathCheck("tags", ['tag1', 'tag2']),
JMESPathCheck("allowProtectedAppendWritesAll", True)
])
self.cmd('storage container legal-hold clear --account-name {} -c {} -g {} --tags tag1 tag2'.format(
storage_account, container_name, resource_group), checks=[
JMESPathCheck("tags", []),
JMESPathCheck("allowProtectedAppendWritesAll", None)
])
self.cmd('storage container legal-hold set --account-name {} -c {} -g {} --tags tag3 tag4 --w-all false'.format(
storage_account, container_name, resource_group), checks=[
JMESPathCheck("tags", ['tag3', 'tag4']),
JMESPathCheck("allowProtectedAppendWritesAll", False)
])
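# Roughly the CLI session these scenarios drive (the account and resource
# group names are generated by the preparers; container1 comes from the
# tests above):
#
#   az storage container legal-hold show --account-name <account> -c container1 -g <group>
#   az storage container legal-hold set --account-name <account> -c container1 -g <group> --tags tag1 tag2 --w-all
#   az storage container legal-hold clear --account-name <account> -c container1 -g <group> --tags tag1 tag2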
| 56.153846 | 131 | 0.618356 | 360 | 3,650 | 6.111111 | 0.277778 | 0.082727 | 0.115 | 0.135 | 0.573182 | 0.557273 | 0.557273 | 0.557273 | 0.557273 | 0.557273 | 0 | 0.013528 | 0.210137 | 3,650 | 64 | 132 | 57.03125 | 0.749566 | 0.092055 | 0 | 0.52 | 0 | 0.1 | 0.29586 | 0.035056 | 0 | 0 | 0 | 0 | 0.04 | 1 | 0.04 | false | 0 | 0.06 | 0 | 0.12 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
109f90153fc579b9ffa883b06b29d55875e2d391 | 779 | py | Python | get_options.py | vuanhtuan1012/small_scripts | 4a57a4a0caa459c3aed0d8f44d0a571d1c0ea78d | [
"MIT"
] | null | null | null | get_options.py | vuanhtuan1012/small_scripts | 4a57a4a0caa459c3aed0d8f44d0a571d1c0ea78d | [
"MIT"
] | null | null | null | get_options.py | vuanhtuan1012/small_scripts | 4a57a4a0caa459c3aed0d8f44d0a571d1c0ea78d | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# @Author: VU Anh Tuan
# @Date: 2018-10-12 13:37:25
# @Last Modified by: VU Anh Tuan
# @Last Modified time: 2018-10-12 13:51:55
import getopt
import sys
conf = {
'help': 'get_options.py -r url'
}
def get_options(argv):
url = None
try:
        opts, args = getopt.getopt(argv, "hr:", ["url="])
except getopt.GetoptError:
print(conf['help'])
sys.exit(2)
if not opts:
print(conf['help'])
sys.exit()
for opt, arg in opts:
if opt == '-h':
print(conf['help'])
sys.exit()
elif opt in ("-r", "--url"):
url = arg
return url
def main(argv):
url = get_options(argv)
print('url = %s' % url)
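# Example invocations, matching the "hr:" / ["url="] option spec above:
#   python get_options.py -r https://example.com      -> url = https://example.com
#   python get_options.py --url=https://example.com   -> url = https://example.com
#   python get_options.py -h                          -> prints the help string and exits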
if __name__ == '__main__':
main(sys.argv[1:]) | 19 | 56 | 0.521181 | 110 | 779 | 3.590909 | 0.5 | 0.081013 | 0.098734 | 0.121519 | 0.151899 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057407 | 0.306804 | 779 | 41 | 57 | 19 | 0.674074 | 0.186136 | 0 | 0.192308 | 0 | 0 | 0.109698 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.038462 | 0 | 0.153846 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
109ff1957f67ed2853e5710fd572c170094b4875 | 25,291 | py | Python | corus/sources/meta.py | Ilseyar/corus | 61a4776f5e534469bb9df1e451b6a6d5fc0e991b | [
"MIT"
] | null | null | null | corus/sources/meta.py | Ilseyar/corus | 61a4776f5e534469bb9df1e451b6a6d5fc0e991b | [
"MIT"
] | null | null | null | corus/sources/meta.py | Ilseyar/corus | 61a4776f5e534469bb9df1e451b6a6d5fc0e991b | [
"MIT"
] | null | null | null |
from corus.record import Record
from . import (
load_mokoron,
load_wiki,
load_simlex,
load_omnia,
load_gramru,
load_corpora,
load_ruadrect,
load_factru,
load_gareev,
load_lenta,
load_lenta2,
load_librusec,
load_ne5,
load_wikiner,
load_bsnlp,
load_persons,
load_taiga_arzamas,
load_taiga_fontanka,
load_taiga_interfax,
load_taiga_kp,
load_taiga_lenta,
load_taiga_nplus1,
load_taiga_magazines,
load_taiga_subtitles,
load_taiga_social,
load_taiga_proza,
load_taiga_stihi,
load_buriy_news,
load_buriy_webhose,
load_ods_interfax,
load_ods_gazeta,
load_ods_izvestia,
load_ods_meduza,
load_ods_ria,
load_ods_rt,
load_ods_tass,
load_ria_raw,
load_ria,
load_ud_gsd,
load_ud_taiga,
load_ud_pud,
load_ud_syntag,
load_morphoru_gicrya,
load_morphoru_rnc,
load_morphoru_corpora,
load_russe_hj,
load_russe_rt,
load_russe_ae,
load_toloka_lrwc,
)
class Meta(Record):
__attributes__ = ['title', 'url',
'description', 'stats', 'instruction',
'tags', 'functions']
def __init__(self, title, url=None,
description=None, stats=None, instruction=(),
tags=(), functions=()):
self.title = title
self.url = url
self.description = description
self.stats = stats
self.instruction = instruction
self.tags = tags
self.functions = functions
class Group(Record):
__attributes__ = ['title', 'url', 'description', 'instruction', 'metas']
def __init__(self, title, url=None, description=None, instruction=(), metas=()):
self.title = title
self.url = url
self.description = description
self.instruction = instruction
self.metas = metas
def is_group(item):
return isinstance(item, Group)
class Stats(Record):
__attributes__ = ['bytes', 'count']
def __init__(self, bytes=None, count=None):
self.bytes = bytes
self.count = count
NER = 'ner'
NEWS = 'news'
FICTION = 'fiction'
SOCIAL = 'social'
MORPH = 'morph'
SYNTAX = 'syntax'
EMB = 'emb'
SIM = 'sim'
SENTIMENT = 'sentiment'
WEB = 'web'
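# Registry of every supported dataset: flat Meta entries plus Group entries
# that bundle related corpora (Taiga, Universal Dependencies, RUSSE, ...).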
METAS = [
Group(
title='Lenta.ru',
url='https://github.com/yutkin/Lenta.Ru-News-Dataset',
metas=[
Meta(
title='Lenta.ru v1.0',
stats=Stats(
bytes=1785632079,
count=739351
),
instruction=[
'wget https://github.com/yutkin/Lenta.Ru-News-Dataset/releases/download/v1.0/lenta-ru-news.csv.gz'
],
tags=[NEWS],
functions=[load_lenta]
),
Meta(
title='Lenta.ru v1.1+',
stats=Stats(
bytes=2084746431,
count=800975
),
instruction=[
'wget https://github.com/yutkin/Lenta.Ru-News-Dataset/releases/download/v1.1/lenta-ru-news.csv.bz2'
],
tags=[NEWS],
functions=[load_lenta2]
),
]
),
Meta(
title='Lib.rus.ec',
url='https://russe.nlpub.org/downloads/',
description='Dump of lib.rus.ec prepared for RUSSE workshop',
stats=Stats(
count=301871,
bytes=155611193945
),
instruction=[
'wget http://panchenko.me/data/russe/librusec_fb2.plain.gz'
],
tags=[FICTION],
functions=[load_librusec]
),
Meta(
title='Rossiya Segodnya',
url='https://github.com/RossiyaSegodnya/ria_news_dataset',
stats=Stats(
count=1003869,
bytes=3974121040
),
instruction=[
'wget https://github.com/RossiyaSegodnya/ria_news_dataset/raw/master/ria.json.gz'
],
tags=[NEWS],
functions=[load_ria_raw, load_ria]
),
Meta(
title='Mokoron Russian Twitter Corpus',
url='http://study.mokoron.com/',
description='Russian Twitter sentiment markup',
instruction=[
'Manually download https://www.dropbox.com/s/9egqjszeicki4ho/db.sql'
],
stats=Stats(
count=17633417,
bytes=1998559570
),
tags=[SOCIAL, SENTIMENT],
functions=[load_mokoron],
),
Meta(
title='Wikipedia',
url='https://dumps.wikimedia.org/',
description='Russian Wiki dump',
instruction=[
'wget https://dumps.wikimedia.org/ruwiki/latest/ruwiki-latest-pages-articles.xml.bz2'
],
stats=Stats(
count=1541401,
bytes=13895798340
),
functions=[load_wiki],
),
Meta(
title='GramEval2020',
url='https://github.com/dialogue-evaluation/GramEval2020',
instruction=[
'wget https://github.com/dialogue-evaluation/GramEval2020/archive/master.zip',
'unzip master.zip',
'mv GramEval2020-master/dataTrain train',
'mv GramEval2020-master/dataOpenTest dev',
'rm -r master.zip GramEval2020-master',
'wget https://github.com/AlexeySorokin/GramEval2020/raw/master/data/GramEval_private_test.conllu'
],
stats=Stats(
count=162372,
bytes=31503713
),
functions=[load_gramru],
),
Meta(
title='OpenCorpora',
url='http://opencorpora.org/',
instruction=[
'wget http://opencorpora.org/files/export/annot/annot.opcorpora.xml.zip'
],
stats=Stats(
count=4030,
bytes=21194932
),
tags=[MORPH],
functions=[load_corpora],
),
Meta(
title='RusVectores SimLex-965',
instruction=[
'wget https://rusvectores.org/static/testsets/ru_simlex965_tagged.tsv',
'wget https://rusvectores.org/static/testsets/ru_simlex965.tsv'
],
tags=[EMB, SIM],
functions=[load_simlex],
),
Meta(
title='Omnia Russica',
url='https://omnia-russica.github.io/',
description='Taiga + Wiki + Araneum. Read "Even larger Russian corpus" https://events.spbu.ru/eventsContent/events/2019/corpora/corp_sborn.pdf',
instruction=[
'Manually download http://bit.ly/2ZT4BY9'
],
stats=Stats(
bytes=525728427750
),
tags=[MORPH, WEB, FICTION],
functions=[load_omnia]
),
###########
#
# NER
#
############
Meta(
title='factRuEval-2016',
url='https://github.com/dialogue-evaluation/factRuEval-2016/',
description='Manual PER, LOC, ORG markup prepared for 2016 Dialog competition',
stats=Stats(
count=254,
bytes=992532
),
instruction=[
'wget https://github.com/dialogue-evaluation/factRuEval-2016/archive/master.zip',
'unzip master.zip',
'rm master.zip'
],
tags=[NER, NEWS],
functions=[load_factru]
),
Meta(
title='Gareev',
url='https://www.researchgate.net/publication/262203599_Introducing_Baselines_for_Russian_Named_Entity_Recognition',
description='Manual PER, ORG markup (no LOC)',
stats=Stats(
count=97,
bytes=465938
),
instruction=[
'Email Rinat Gareev (gareev-rm@yandex.ru) ask for dataset',
'tar -xvf rus-ner-news-corpus.iob.tar.gz',
'rm rus-ner-news-corpus.iob.tar.gz'
],
tags=[NER, NEWS],
functions=[load_gareev]
),
Meta(
title='Collection5',
url='http://www.labinform.ru/pub/named_entities/',
description='News articles with manual PER, LOC, ORG markup',
stats=Stats(
count=1000,
bytes=3105146
),
instruction=[
'wget http://www.labinform.ru/pub/named_entities/collection5.zip',
'unzip collection5.zip',
'rm collection5.zip'
],
tags=[NER, NEWS],
functions=[load_ne5]
),
Meta(
title='WiNER',
url='https://www.aclweb.org/anthology/I17-1042',
description='Sentences from Wiki auto annotated with PER, LOC, ORG tags',
stats=Stats(
count=203287,
bytes=37907651
),
instruction=[
'wget https://github.com/dice-group/FOX/raw/master/input/Wikiner/aij-wikiner-ru-wp3.bz2'
],
tags=[NER],
functions=[load_wikiner]
),
Meta(
title='BSNLP-2019',
url='http://bsnlp.cs.helsinki.fi/shared_task.html',
description='Markup prepared for 2019 BSNLP Shared Task',
stats=Stats(
count=464,
bytes=1211300
),
instruction=[
'wget http://bsnlp.cs.helsinki.fi/TRAININGDATA_BSNLP_2019_shared_task.zip',
'wget http://bsnlp.cs.helsinki.fi/TESTDATA_BSNLP_2019_shared_task.zip',
'unzip TRAININGDATA_BSNLP_2019_shared_task.zip',
'unzip TESTDATA_BSNLP_2019_shared_task.zip -d test_pl_cs_ru_bg',
'rm TRAININGDATA_BSNLP_2019_shared_task.zip TESTDATA_BSNLP_2019_shared_task.zip'
],
tags=[NER],
functions=[load_bsnlp]
),
Meta(
title='Persons-1000',
url='http://ai-center.botik.ru/Airec/index.php/ru/collections/28-persons-1000',
description='Same as Collection5, only PER markup + normalized names',
stats=Stats(
count=1000,
bytes=3105146
),
instruction=[
'wget http://ai-center.botik.ru/Airec/ai-resources/Persons-1000.zip'
],
tags=[NER, NEWS],
functions=[load_persons]
),
##########
#
# TAIGA
#
###########
Group(
title='Taiga',
url='https://tatianashavrina.github.io/taiga_site/',
description='Large collection of Russian texts from various sources: news sites, magazines, literacy, social networks',
instruction=[
'wget https://linghub.ru/static/Taiga/retagged_taiga.tar.gz',
'tar -xzvf retagged_taiga.tar.gz'
],
metas=[
Meta(
title='Arzamas',
stats=Stats(
count=311,
bytes=4721604
),
tags=[NEWS],
functions=[load_taiga_arzamas],
),
Meta(
title='Fontanka',
stats=Stats(
count=342683,
bytes=824419630
),
tags=[NEWS],
functions=[load_taiga_fontanka],
),
Meta(
title='Interfax',
stats=Stats(
count=46429,
bytes=81320006
),
tags=[NEWS],
functions=[load_taiga_interfax],
),
Meta(
title='KP',
stats=Stats(
count=45503,
bytes=64789612
),
tags=[NEWS],
functions=[load_taiga_kp],
),
Meta(
title='Lenta',
stats=Stats(
count=36446,
bytes=99772679
),
tags=[NEWS],
functions=[load_taiga_lenta],
),
Meta(
title='Taiga/N+1',
stats=Stats(
count=7696,
bytes=26167631
),
tags=[NEWS],
functions=[load_taiga_nplus1],
),
Meta(
title='Magazines',
stats=Stats(
count=39890,
bytes=2352629006
),
functions=[load_taiga_magazines]
),
Meta(
title='Subtitles',
stats=Stats(
count=19011,
bytes=953237022
),
functions=[load_taiga_subtitles]
),
Meta(
title='Social',
stats=Stats(
count=1876442,
bytes=679670941
),
tags=[SOCIAL],
functions=[load_taiga_social]
),
Meta(
title='Proza',
stats=Stats(
count=1732434,
bytes=41067043857
),
tags=[FICTION],
functions=[load_taiga_proza]
),
Meta(
title='Stihi',
stats=Stats(
count=9157686,
bytes=13745805334
),
functions=[load_taiga_stihi]
),
]
),
#############
#
# BURIY
#
##########
Group(
title='Russian NLP Datasets',
url='https://github.com/buriy/russian-nlp-datasets/releases',
description='Several Russian news datasets from webhose.io, lenta.ru and other news sites.',
metas=[
Meta(
title='News',
description='Dump of top 40 news + 20 fashion news sites.',
instruction=[
'wget https://github.com/buriy/russian-nlp-datasets/releases/download/r4/news-articles-2014.tar.bz2',
'wget https://github.com/buriy/russian-nlp-datasets/releases/download/r4/news-articles-2015-part1.tar.bz2',
'wget https://github.com/buriy/russian-nlp-datasets/releases/download/r4/news-articles-2015-part2.tar.bz2'
],
stats=Stats(
count=2154801,
bytes=7340672169
),
tags=[NEWS],
functions=[load_buriy_news],
),
Meta(
title='Webhose',
description='Dump from webhose.io, 300 sources for one month.',
instruction=[
'wget https://github.com/buriy/russian-nlp-datasets/releases/download/r4/webhose-2016.tar.bz2'
],
stats=Stats(
count=285965,
bytes=901066314
),
tags=[NEWS],
functions=[load_buriy_webhose],
),
]
),
#############
#
# ODS
#
#########
Group(
title='ODS #proj_news_viz',
url='https://github.com/ods-ai-ml4sg/proj_news_viz/releases/tag/data',
description='Several news sites scraped by members of #proj_news_viz ODS project.',
metas=[
Meta(
title='Interfax',
instruction=[
'wget https://github.com/ods-ai-ml4sg/proj_news_viz/releases/download/data/interfax.csv.gz',
],
stats=Stats(
count=543961,
bytes=1314462876,
),
tags=[NEWS],
functions=[load_ods_interfax],
),
Meta(
title='Gazeta',
instruction=[
'wget https://github.com/ods-ai-ml4sg/proj_news_viz/releases/download/data/gazeta.csv.gz',
],
stats=Stats(
count=865847,
bytes=1752712320
),
tags=[NEWS],
functions=[load_ods_gazeta],
),
Meta(
title='Izvestia',
instruction=[
'wget https://github.com/ods-ai-ml4sg/proj_news_viz/releases/download/data/iz.csv.gz',
],
stats=Stats(
count=86601,
bytes=322117124
),
tags=[NEWS],
functions=[load_ods_izvestia],
),
Meta(
title='Meduza',
instruction=[
'wget https://github.com/ods-ai-ml4sg/proj_news_viz/releases/download/data/meduza.csv.gz',
],
stats=Stats(
count=71806,
bytes=283233963
),
tags=[NEWS],
functions=[load_ods_meduza],
),
Meta(
title='RIA',
instruction=[
'wget https://github.com/ods-ai-ml4sg/proj_news_viz/releases/download/data/ria.csv.gz',
],
stats=Stats(
count=101543,
bytes=245236791
),
tags=[NEWS],
functions=[load_ods_ria],
),
Meta(
title='Russia Today',
instruction=[
'wget https://github.com/ods-ai-ml4sg/proj_news_viz/releases/download/data/rt.csv.gz',
],
stats=Stats(
count=106644,
bytes=196212474
),
tags=[NEWS],
functions=[load_ods_rt],
),
Meta(
title='TASS',
instruction=[
'wget https://github.com/ods-ai-ml4sg/proj_news_viz/releases/download/data/tass-001.csv.gz',
],
stats=Stats(
count=1135635,
bytes=3515136716
),
tags=[NEWS],
functions=[load_ods_tass],
),
]
),
#############
#
# UD
#
#########
Group(
title='Universal Dependencies',
url='https://universaldependencies.org/',
metas=[
Meta(
title='GSD',
instruction=[
'wget https://github.com/UniversalDependencies/UD_Russian-GSD/raw/master/ru_gsd-ud-dev.conllu',
'wget https://github.com/UniversalDependencies/UD_Russian-GSD/raw/master/ru_gsd-ud-test.conllu',
'wget https://github.com/UniversalDependencies/UD_Russian-GSD/raw/master/ru_gsd-ud-train.conllu'
],
stats=Stats(
count=5030,
bytes=1059114
),
tags=[MORPH, SYNTAX],
functions=[load_ud_gsd],
),
Meta(
title='Taiga',
instruction=[
'wget https://github.com/UniversalDependencies/UD_Russian-Taiga/raw/master/ru_taiga-ud-dev.conllu',
'wget https://github.com/UniversalDependencies/UD_Russian-Taiga/raw/master/ru_taiga-ud-test.conllu',
'wget https://github.com/UniversalDependencies/UD_Russian-Taiga/raw/master/ru_taiga-ud-train.conllu'
],
stats=Stats(
count=3264,
bytes=362293
),
tags=[MORPH, SYNTAX],
functions=[load_ud_taiga],
),
Meta(
title='PUD',
instruction=[
'wget https://github.com/UniversalDependencies/UD_Russian-PUD/raw/master/ru_pud-ud-test.conllu',
],
stats=Stats(
count=1000,
bytes=212766
),
tags=[MORPH, SYNTAX],
functions=[load_ud_pud],
),
Meta(
title='SynTagRus',
instruction=[
'wget https://github.com/UniversalDependencies/UD_Russian-SynTagRus/raw/master/ru_syntagrus-ud-dev.conllu',
'wget https://github.com/UniversalDependencies/UD_Russian-SynTagRus/raw/master/ru_syntagrus-ud-test.conllu',
'wget https://github.com/UniversalDependencies/UD_Russian-SynTagRus/raw/master/ru_syntagrus-ud-train.conllu',
],
stats=Stats(
count=61889,
bytes=11877258
),
tags=[MORPH, SYNTAX],
functions=[load_ud_syntag],
),
]
),
#############
#
# MORPHORUEVAL
#
#########
Group(
title='morphoRuEval-2017',
url='https://github.com/dialogue-evaluation/morphoRuEval-2017',
metas=[
Meta(
title='General Internet-Corpus',
instruction=[
'wget https://github.com/dialogue-evaluation/morphoRuEval-2017/raw/master/GIKRYA_texts_new.zip',
'unzip GIKRYA_texts_new.zip',
'rm GIKRYA_texts_new.zip'
],
stats=Stats(
count=83148,
bytes=11091464
),
tags=[MORPH],
functions=[load_morphoru_gicrya],
),
Meta(
title='Russian National Corpus',
instruction=[
'wget https://github.com/dialogue-evaluation/morphoRuEval-2017/raw/master/RNC_texts.rar',
'unrar x RNC_texts.rar',
'rm RNC_texts.rar'
],
stats=Stats(
count=98892,
bytes=13330673
),
tags=[MORPH],
functions=[load_morphoru_rnc],
),
Meta(
title='OpenCorpora',
instruction=[
'wget https://github.com/dialogue-evaluation/morphoRuEval-2017/raw/master/OpenCorpora_Texts.rar',
'unrar x OpenCorpora_Texts.rar',
'rm OpenCorpora_Texts.rar'
],
stats=Stats(
count=38510,
bytes=5028255
),
tags=[MORPH],
functions=[load_morphoru_corpora],
),
]
),
#############
#
# RUSSE SEM
#
#########
Group(
title='RUSSE Russian Semantic Relatedness',
url='https://russe.nlpub.org/downloads/',
metas=[
Meta(
title='HJ: Human Judgements of Word Pairs',
instruction=[
'wget https://github.com/nlpub/russe-evaluation/raw/master/russe/evaluation/hj.csv'
],
tags=[EMB, SIM],
functions=[load_russe_hj],
),
Meta(
title='RT: Synonyms and Hypernyms from the Thesaurus RuThes',
instruction=[
'wget https://raw.githubusercontent.com/nlpub/russe-evaluation/master/russe/evaluation/rt.csv'
],
tags=[EMB, SIM],
functions=[load_russe_rt],
),
Meta(
title='AE: Cognitive Associations from the Sociation.org Experiment',
instruction=[
'wget https://github.com/nlpub/russe-evaluation/raw/master/russe/evaluation/ae-train.csv',
'wget https://github.com/nlpub/russe-evaluation/raw/master/russe/evaluation/ae-test.csv',
'wget https://raw.githubusercontent.com/nlpub/russe-evaluation/master/russe/evaluation/ae2.csv'
],
tags=[EMB, SIM],
functions=[load_russe_ae],
),
]
),
#############
#
# TOLOKA
#
#########
Group(
title='Toloka Datasets',
url='https://toloka.yandex.ru/datasets/',
metas=[
Meta(
title='Lexical Relations from the Wisdom of the Crowd (LRWC)',
instruction=[
'wget https://tlk.s3.yandex.net/dataset/LRWC.zip',
'unzip LRWC.zip',
'rm LRWC.zip'
],
tags=[EMB, SIM],
functions=[load_toloka_lrwc],
),
Meta(
title='The Russian Adverse Drug Reaction Corpus of Tweets (RuADReCT)',
url='https://github.com/cimm-kzn/RuDReC',
description='This corpus was developed for the Social Media Mining for Health Applications (#SMM4H) '
'Shared Task 2020',
instruction=[
'wget https://github.com/cimm-kzn/RuDReC/raw/master/data/RuADReCT.zip',
'unzip RuADReCT.zip',
'rm RuADReCT.zip'
],
stats=Stats(
count=9515,
bytes=2190063
),
tags=[SOCIAL],
functions=[load_ruadrect],
),
]
),
]
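# A small sketch (not part of the original module) of flattening the registry
# above and filtering it by tag:
def _metas_with_tag(tag, items=METAS):
    for item in items:
        if is_group(item):
            # descend into a Group's bundled Meta entries
            for meta in _metas_with_tag(tag, item.metas):
                yield meta
        elif tag in item.tags:
            yield item
# e.g. [meta.title for meta in _metas_with_tag(NER)] lists factRuEval-2016,
# Gareev, Collection5, WiNER, BSNLP-2019 and Persons-1000.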
| 30.58162 | 152 | 0.483215 | 2,256 | 25,291 | 5.288121 | 0.192376 | 0.036211 | 0.050461 | 0.052808 | 0.441911 | 0.347443 | 0.279212 | 0.237888 | 0.208801 | 0.200419 | 0 | 0.053643 | 0.401487 | 25,291 | 826 | 153 | 30.618644 | 0.734492 | 0.002768 | 0 | 0.474623 | 0 | 0.050754 | 0.313812 | 0.014112 | 0 | 0 | 0 | 0 | 0 | 1 | 0.005487 | false | 0 | 0.002743 | 0.001372 | 0.017833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10a45e117e2cd62e5493833f8eafa5129faea245 | 4,094 | py | Python | resources/lib/services/library_updater.py | groth-its/plugin.video.netflix | 2d9ef4336924da189526e306b47c63c7fcefabd0 | [
"MIT"
] | 1 | 2020-06-12T15:52:34.000Z | 2020-06-12T15:52:34.000Z | resources/lib/services/library_updater.py | groth-its/plugin.video.netflix | 2d9ef4336924da189526e306b47c63c7fcefabd0 | [
"MIT"
] | null | null | null | resources/lib/services/library_updater.py | groth-its/plugin.video.netflix | 2d9ef4336924da189526e306b47c63c7fcefabd0 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Automatic updates of items exported to the Kodi library"""
from __future__ import unicode_literals
from datetime import datetime, timedelta
import AddonSignals
import xbmc
from resources.lib.globals import g
import resources.lib.common as common
import resources.lib.kodi.library as library
class LibraryUpdateService(xbmc.Monitor):
"""
Checks if a library update is scheduled and triggers it
"""
def __init__(self):
xbmc.Monitor.__init__(self)
self.scan_in_progress = False
self.scan_awaiting = False
self.startidle = 0
self.last_schedule_check = datetime.now()
AddonSignals.registerSlot(
g.ADDON.getAddonInfo('id'), common.Signals.LIBRARY_UPDATE_REQUESTED,
self.update_kodi_library)
def on_tick(self):
"""Check if update is due and trigger it"""
if self.library_update_scheduled() and self.is_idle():
library.update_library()
def is_idle(self):
"""
Check if Kodi has been idle for 5 minutes
"""
if not g.ADDON.getSettingBool('wait_idle'):
return True
lastidle = xbmc.getGlobalIdleTime()
if xbmc.Player().isPlaying():
self.startidle = lastidle
if lastidle < self.startidle:
self.startidle = 0
idletime = lastidle - self.startidle
return idletime >= 300
def library_update_scheduled(self):
"""
Checks if the scheduled time for a library update has been reached
"""
try:
now = datetime.now()
update_frequency = g.ADDON.getSettingInt('auto_update')
interval = g.ADDON.getSettingInt('schedule_check_interval')
next_schedule_check = (self.last_schedule_check +
timedelta(minutes=interval))
if not update_frequency or now <= next_schedule_check:
return False
self.last_schedule_check = now
time = g.ADDON.getSetting('update_time') or '00:00'
lastrun_date = (g.ADDON.getSetting('last_update') or
'1970-01-01')
lastrun = common.strp('{} {}'.format(lastrun_date, time[0:5]),
'%Y-%m-%d %H:%M')
nextrun = lastrun + timedelta(days=[0, 1, 2, 5, 7][update_frequency])
common.log(
'It\'s currently {}, next run is scheduled for {}'
.format(now, nextrun))
return now >= nextrun
except TypeError:
# When there is concurrency between getSettingX and setSettingX at the same time,
# the get settings fails to read
return False
def onScanStarted(self, library):
"""Monitor library scan to avoid multiple calls"""
# Kodi cancels the update if called with JSON RPC twice
# so we monitor events to ensure we're not cancelling a previous scan
if library == 'video':
self.scan_in_progress = True
def onScanFinished(self, library):
"""Monitor library scan to avoid multiple calls"""
# Kodi cancels the update if called with JSON RPC twice
# so we monitor events to ensure we're not cancelling a previous scan
if library == 'video':
self.scan_in_progress = False
if self.scan_awaiting:
self.update_kodi_library()
    def update_kodi_library(self, data=None):  # 'data' kept for the AddonSignals slot signature
# Update only the elements in the addon export folder
# for faster processing with a large library.
# If a scan is already in progress, the scan is delayed until onScanFinished event
common.debug('Library update requested for library updater service')
if not self.scan_in_progress:
self.scan_awaiting = False
common.scan_library(
xbmc.makeLegalFilename(
xbmc.translatePath(
library.library_path())))
else:
self.scan_awaiting = True
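# A minimal service-loop sketch, assuming this monitor is started from the
# add-on's service entry point; waitForAbort() also paces the ticks.
def run():
    updater = LibraryUpdateService()
    while not updater.abortRequested():
        updater.on_tick()
        if updater.waitForAbort(1):
            break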
| 36.230088 | 93 | 0.60552 | 476 | 4,094 | 5.073529 | 0.342437 | 0.026501 | 0.016563 | 0.029814 | 0.180538 | 0.149068 | 0.149068 | 0.149068 | 0.149068 | 0.149068 | 0 | 0.009259 | 0.314118 | 4,094 | 112 | 94 | 36.553571 | 0.850783 | 0.220567 | 0 | 0.142857 | 0 | 0 | 0.053548 | 0.007419 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10a4dcea15f253b47020e5a4d9abc2f5d11c8d34 | 516 | py | Python | src/cogs/error.py | tescomealdealll/TagsPlus | e52e1810936de4ec354345ec1c103b3b0bc92e6a | [
"MIT"
] | 4 | 2022-01-12T18:31:46.000Z | 2022-01-13T09:38:15.000Z | src/cogs/error.py | tescomealdealll/TagsPlus | e52e1810936de4ec354345ec1c103b3b0bc92e6a | [
"MIT"
] | null | null | null | src/cogs/error.py | tescomealdealll/TagsPlus | e52e1810936de4ec354345ec1c103b3b0bc92e6a | [
"MIT"
] | 3 | 2022-01-12T18:04:17.000Z | 2022-03-22T07:13:43.000Z | import discord
from discord.ext import commands
class Error(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.Cog.listener()
async def on_command_error(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
await ctx.send(
'Please try again with the `required` argument(s).',
delete_after=5,
)
else:
raise error
async def setup(bot):
await bot.add_cog(Error(bot))
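# Loaded like any other discord.py 2.x extension, e.g. from the bot's setup
# hook (the dotted path is assumed from the repo layout):
#   await bot.load_extension("cogs.error")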
| 24.571429 | 68 | 0.598837 | 61 | 516 | 4.934426 | 0.606557 | 0.086379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002801 | 0.30814 | 516 | 20 | 69 | 25.8 | 0.840336 | 0 | 0 | 0 | 0 | 0 | 0.094961 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10a631054b6680b1a16ad497688733d8ec8b66a1 | 6,928 | py | Python | src/project/recommender_system.py | a6ln8ka/music-recommender-system | d807295dd06b90ff9d58d514bc830434384a98c9 | [
"MIT"
] | null | null | null | src/project/recommender_system.py | a6ln8ka/music-recommender-system | d807295dd06b90ff9d58d514bc830434384a98c9 | [
"MIT"
] | null | null | null | src/project/recommender_system.py | a6ln8ka/music-recommender-system | d807295dd06b90ff9d58d514bc830434384a98c9 | [
"MIT"
] | 2 | 2021-05-17T18:27:17.000Z | 2021-05-17T23:30:44.000Z | """
recommender_system.py
Creates content-based recommendations
"""
import numpy as np
import pandas as pd
import re
import ast
import os
def col(df, colname = "artists"):
"""
:param df:
:param colname: (Default value = "artists")
"""
return np.array([int(x == colname) for x in df.columns]).argmax()
def query_artists(df, lists=(), full=False, strict=True):
    """Query several artists at once and concatenate the results

    :param df:
    :param lists:  (Default value = ())
    :param full:  (Default value = False)
    :param strict:  (Default value = True)
    """
    return pd.concat([query_artist(df, string=name, full=full, strict=strict) for name in lists], axis=0)
def query_artist(df, string = "--", full = False, strict = True):
"""
:param df:
:param string: (Default value = "--")
:param full: (Default value = False)
:param strict: (Default value = True)
"""
lists = []
for i, artist in enumerate(df["artists"]):
if(len(re.findall(string, "".join(artist))) != 0):
if(strict):
if(string == artist):
if(full):
lists.append(df.iloc[i])
else:
lists.append(df.iloc[i, [col(df, "artists"), col(df, "genres")]])
else:
if(full):
lists.append(df.iloc[i])
else:
lists.append(df.iloc[i, [col(df, "artists"), col(df, "genres")]])
if(full):
return pd.DataFrame(lists, columns = df.columns)
else:
return pd.DataFrame(lists, columns = ["artists", "genres"])
def perfect_eval(string):
    """Safely evaluate a string literal, returning [] when it cannot be parsed

    :param string:
    """
    try:
        return ast.literal_eval(string)
    except (ValueError, SyntaxError):
        return []
def create_random_dict(df_by_artists, length, score):
"""This method is used to test the system. It creates random
dictionary of artists and rates
:param df_by_artists:
:param length:
:param score:
"""
list_of_names = list(set(df_by_artists["artists"]))
random_indices = [round(x) for x in np.random.random(length)*len(list_of_names)]
random_names = pd.Series(list_of_names).iloc[random_indices].values.tolist()
random_rates = [int(round(x)) for x in (score[0] + np.random.random(length)*(score[1]-score[0]))]
name_rate_dict = {}
for index in range(length):
name_rate_dict.update({random_names[index]: random_rates[index]})
return name_rate_dict
def rate_artist(df_by_artists, name_rate_dict):
"""This method selects best-rated genres from the name_rate_dict
:param df_by_artists:
:param name_rate_dict:
"""
    #name_rate_dict arrives here as a pandas Series; wrap it in a dataframe of rates and artist names
name_rate_series = pd.DataFrame({"rate": name_rate_dict.values, "artists": name_rate_dict.index})
#create a new dataframe, only selecting the artists and genres columns of artists selected by user
artists_genres = df_by_artists[df_by_artists["artists"].isin(list(name_rate_dict.keys()))][["artists", "genres"]]
#merge both of these
df_name_rate = pd.merge(name_rate_series, artists_genres, on = "artists", how = "inner")
df_x = df_name_rate.copy()
#create the artist-genre-matrix for artists selected by users
for index, genres in enumerate(df_name_rate["genres"]):
for genre in genres:
#artist includes the genre: 1
df_x.at[index, genre] = 1
#artist does not include the genre: 0
df_x = df_x.fillna(0)
#ratings of artists
df_user = df_x["rate"]
#drop all columns except the genre columns
df_genre_matrix = df_x.drop(["artists", "genres", "rate"], axis = 1).reset_index(drop = True)
#find out the genres' ratings
df_profile = df_genre_matrix.transpose().dot(df_user)
return df_profile
def select_artist(df_by_artists, df_rate):
"""This method selects artists which perform the same genre as
artists were given
:param df_by_artists:
:param df_rate:
"""
# save the indices of artists, which include any of the genres in the genre profile
list_of_id = []
for index, row in df_by_artists.iterrows():
for genre in row["genres"]:
if(genre in df_rate.index):
list_of_id.append(index)
#find the unique indices
list_of_id = list(set(list_of_id))
#select the artists and genres columns of the artists including any of the genres in the genre profile
df_select_columns = df_by_artists.iloc[list_of_id, [col(df_by_artists, "artists"), col(df_by_artists, "genres")]]
df_select = df_select_columns.copy()
#create the artist-genre-matrix of new artists
for index, row in df_select_columns.iterrows():
for genre in row['genres']:
#artist includes genre: 1
df_select.at[index, genre] = 1
#artist does not include genre: 0
df_select = df_select.fillna(0)[df_rate.index]
return df_select
def recommend_artist_by_genre(df_by_artists, name_rate_dict, how_many):
"""This method is used to create recommendations based on dictionary
of artists names and rates
:param df_by_artists:
:param name_rate_dict:
:param how_many:
"""
df_by_artists = df_by_artists.copy()
#make sure that genres are list, not string
df_by_artists["genres"] = [perfect_eval(genre) for genre in df_by_artists["genres"]]
#create a name_rate pandas series
name_rate_series = pd.Series(name_rate_dict)
#find out the genre profile of user
df_rate = rate_artist(df_by_artists, name_rate_series)
#create new artists' matrix
df_select = select_artist(df_by_artists, df_rate)
#calculate similarity scores of those artists
affinity_scores = df_select.dot(df_rate)/df_rate.sum()
#sort it in descending order
affinity_scores_sorted = pd.Series(affinity_scores, name = "genre_affinity").sort_values(ascending = False)
#retrieve the names of artists by their indices
artists_in_df = df_by_artists.iloc[affinity_scores_sorted.index, [col(df_by_artists, "artists")]]
#store the artists' names and their similarity scores in a dataframe
resulted_df = pd.concat([affinity_scores_sorted, artists_in_df], axis = 1)
#drop the artists already selected by user and limit the count of artists to a specified amount
output = resulted_df[~resulted_df["artists"].isin(name_rate_series.index)].iloc[:how_many, :]
#create new indices
return output.reset_index()
def songs_dict(name_rate_dict, how_many):
"""This function is used in main.py. It returns dictionary of recommended
songs, which viewed when user presses "get recommendations" button
:param name_rate_dict:
:param how_many:
"""
    data_dir = os.getcwd()
    df_by_artists = pd.read_csv(os.path.join(data_dir, "data_w_genres.csv"))
df_scores = recommend_artist_by_genre(df_by_artists, name_rate_dict, how_many)
return df_scores.to_dict()
| 34.989899 | 117 | 0.665416 | 994 | 6,928 | 4.427565 | 0.187123 | 0.022722 | 0.062486 | 0.015451 | 0.28948 | 0.242445 | 0.186776 | 0.128607 | 0.084526 | 0.084526 | 0 | 0.002798 | 0.226184 | 6,928 | 197 | 118 | 35.167513 | 0.818131 | 0.324047 | 0 | 0.127907 | 0 | 0 | 0.048682 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104651 | false | 0 | 0.05814 | 0 | 0.290698 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10a7ecc41caf463cd18bed28f5a682d0adc0ba3b | 955 | py | Python | examples/undocumented/python_modular/classifier_mpdsvm_modular.py | srgnuclear/shogun | 33c04f77a642416376521b0cd1eed29b3256ac13 | [
"Ruby",
"MIT"
] | 1 | 2015-11-05T18:31:14.000Z | 2015-11-05T18:31:14.000Z | examples/undocumented/python_modular/classifier_mpdsvm_modular.py | waderly/shogun | 9288b6fa38e001d63c32188f7f847dadea66e2ae | [
"Ruby",
"MIT"
] | null | null | null | examples/undocumented/python_modular/classifier_mpdsvm_modular.py | waderly/shogun | 9288b6fa38e001d63c32188f7f847dadea66e2ae | [
"Ruby",
"MIT"
] | null | null | null | #!/usr/bin/env python
traindat = '../data/fm_train_real.dat'
testdat = '../data/fm_test_real.dat'
label_traindat = '../data/label_train_twoclass.dat'
parameter_list = [[traindat,testdat,label_traindat,1,1e-5],[traindat,testdat,label_traindat,0.9,1e-5]]
def classifier_mpdsvm_modular(train_fname=traindat, test_fname=testdat, label_fname=label_traindat, C=1, epsilon=1e-5):
from modshogun import RealFeatures, BinaryLabels
from modshogun import GaussianKernel
from modshogun import MPDSVM, CSVFile
feats_train=RealFeatures(CSVFile(train_fname))
feats_test=RealFeatures(CSVFile(test_fname))
labels=BinaryLabels(CSVFile(label_fname))
width=2.1
kernel=GaussianKernel(feats_train, feats_train, width)
svm=MPDSVM(C, kernel, labels)
svm.set_epsilon(epsilon)
svm.train()
predictions = svm.apply(feats_test)
return predictions, svm, predictions.get_labels()
if __name__=='__main__':
print('MPDSVM')
classifier_mpdsvm_modular(*parameter_list[0])
| 31.833333 | 116 | 0.795812 | 133 | 955 | 5.43609 | 0.37594 | 0.071923 | 0.078838 | 0.077455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01479 | 0.079581 | 955 | 29 | 117 | 32.931034 | 0.807736 | 0.020942 | 0 | 0 | 0 | 0 | 0.101713 | 0.086724 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.142857 | 0 | 0.238095 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10a9d8e49951075c6ed508727f52de4869c5e54c | 698 | py | Python | setup.py | roee30/flying_desktop | c0ab0dd9c0944ed1ad8b3d096b87bd2382d0b052 | [
"MIT"
] | 1 | 2020-01-03T14:15:37.000Z | 2020-01-03T14:15:37.000Z | setup.py | roee30/flying_desktop | c0ab0dd9c0944ed1ad8b3d096b87bd2382d0b052 | [
"MIT"
] | null | null | null | setup.py | roee30/flying_desktop | c0ab0dd9c0944ed1ad8b3d096b87bd2382d0b052 | [
"MIT"
] | 1 | 2021-04-30T23:38:36.000Z | 2021-04-30T23:38:36.000Z | """
Install the application
"""
from os import path
from setuptools import setup, find_packages
from flying_desktop import __version__
HERE = path.dirname(__file__)
with open(path.join(HERE, "requirements.txt"), "r") as f:
    packages = [line.strip() for line in f if line.strip()]
setup(
name="flying-desktop",
version=__version__,
install_requires=packages,
packages=find_packages(),
url="",
license="",
author="Roee Nizan",
author_email="roeen30@gmail.com",
description="Download wallpapers from your social media accounts",
entry_points={"gui_scripts": ["flydesk = flying_desktop.__main__:main"]},
package_data={"flying_desktop": ["providers/*/credentials.json"]},
)
| 26.846154 | 77 | 0.709169 | 84 | 698 | 5.583333 | 0.690476 | 0.110874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003378 | 0.151862 | 698 | 25 | 78 | 27.92 | 0.788851 | 0.032951 | 0 | 0 | 0 | 0 | 0.29985 | 0.083958 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.157895 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |