hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
347053f4c65de4c7e2c7c13a8144cc85060f99be | 2,367 | py | Python | statsneighborhoods/price.py | joehand/statistics-neighborhoods | 5a9cbbfd46da2d3dc5b715c2a9dec3276cee3fe8 | [
"MIT"
] | 1 | 2018-05-06T19:19:32.000Z | 2018-05-06T19:19:32.000Z | statsneighborhoods/price.py | joehand/statistics-neighborhoods | 5a9cbbfd46da2d3dc5b715c2a9dec3276cee3fe8 | [
"MIT"
] | null | null | null | statsneighborhoods/price.py | joehand/statistics-neighborhoods | 5a9cbbfd46da2d3dc5b715c2a9dec3276cee3fe8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
price calculations
~~~~~~~~~
WARNING: This is specific to US income data.
If this needs to be changed,
you will need to update the BINS values and some of the regex expressions.
:copyright: (c) 2015 by Joe Hand, Santa Fe Institute.
:license: MIT
"""
import numpy as np
from pandas import concat, DataFrame, Series
from scipy import stats
LOW_BINS = [0, 10000, 15000, 20000, 25000, 30000, 35000, 40000, 45000,
50000, 60000, 75000, 100000, 125000, 150000, 200000]
MID_BINS = [5000, 12500, 17500, 22500, 27500, 32500, 37500, 42500, 47500,
55000, 67500, 87500, 112500, 137500, 175000, 225000]
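# LOW_BINS appear to be the lower edges of the ACS household income brackets
# ($0-10k, $10-15k, ..., $200k and up); MID_BINS hold the corresponding bracket
# midpoints, with 225000 standing in for the open-ended top bracket.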
def adjust_rich_bin(df, col_suffix, inc_bin=200000):
col = 'ACSHINC200'
df['new_mid_pt' + col_suffix] = (df['ACSAVGHINC'] * df['Total_Households'] -
df['mean_inc' + col_suffix] * df['Total_Households'] + df[col] * inc_bin)/df[col]
df['new_mid_pt' + col_suffix] = df['new_mid_pt' +
col_suffix].replace([np.inf, -np.inf], inc_bin)
df[col + col_suffix] = df[col] * df['new_mid_pt' + col_suffix]
df['adjusted' + col_suffix] = df.filter(
regex='^ACSHINC([0-9])+' + col_suffix).sum(axis=1)//df['Total_Households']
df['adjusted' + col_suffix] = df['adjusted' + col_suffix].astype(int)
return df
def calculate_mean_income(df, inc_bins, col_suffix):
df = df.copy()
cols = df.filter(regex='^ACSHINC([0-9])+$').columns
for i, col in enumerate(cols):
df[col + col_suffix] = df[col] * inc_bins[i]
df['total_inc' +
col_suffix] = df.filter(regex='^ACSHINC([0-9])+' + col_suffix).sum(axis=1)
df['mean_inc' + col_suffix] = df['total_inc' +
col_suffix]//df['Total_Households']
df = adjust_rich_bin(df, col_suffix, inc_bin=inc_bins[i])
return df
def calculate_a(df, reported_mean='ACSAVGHINC', calc_mean_low='mean_inc_low',
calc_mean_mid='mean_inc_mid'):
df = df.copy()
df['a'] = (df[reported_mean] - df[calc_mean_low]) / \
(df[calc_mean_mid] - df[calc_mean_low])
return df
def calculate_price(df):
df = calculate_mean_income(df, LOW_BINS, '_low')
df = calculate_mean_income(df, MID_BINS, '_mid')
df = calculate_a(df, calc_mean_low='adjusted_low')
return df
| 36.415385 | 118 | 0.621462 | 342 | 2,367 | 4.070175 | 0.359649 | 0.116379 | 0.094828 | 0.028736 | 0.357759 | 0.317529 | 0.232759 | 0.152299 | 0.109195 | 0.071839 | 0 | 0.101816 | 0.232362 | 2,367 | 64 | 119 | 36.984375 | 0.664282 | 0.114068 | 0 | 0.15 | 0 | 0 | 0.138768 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.075 | 0 | 0.275 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
34709103517f1e3b70cf5ad2dd28c6bb65951e42 | 5,945 | py | Python | 2018/day4/guard.py | sthilaid/AdventOfCode | f4a6006167debeb717843db41e7089d10d323d21 | [
"MIT"
] | null | null | null | 2018/day4/guard.py | sthilaid/AdventOfCode | f4a6006167debeb717843db41e7089d10d323d21 | [
"MIT"
] | null | null | null | 2018/day4/guard.py | sthilaid/AdventOfCode | f4a6006167debeb717843db41e7089d10d323d21 | [
"MIT"
] | null | null | null |
#!/usr/bin/python3
import array
import datetime
import functools
import io
import sys
class Guard():
def __init__(self, id):
self.id = id
self.shifts = []
self.sleepIntervals = []
def registerShift(self, shift):
self.shifts += [shift.date]
self.sleepIntervals += shift.sleepIntervals
def mostSleptMinute(self):
def isInRange(minute, start, end):
time = datetime.datetime(start.year, start.month, start.day, 0, minute)
return time >= start and time <= end
def accIfInRange(minute):
def accum(a, x):
if isInRange(minute, x[0], x[1]):
return a+1
else:
return a
return accum
bestMinute = -1
bestCount = 0
        # check every minute of the midnight hour (0-59)
        for m in range(60):
            count = functools.reduce(accIfInRange(m), self.sleepIntervals, 0)
            if count > bestCount:
                bestMinute = m
                bestCount = count
if bestMinute < 0:
if self.sleepIntervals:
raise Exception("Invalid sleep intervals analysis: %s" % self.sleepIntervals)
else:
return False
return bestMinute, bestCount
def totalSleepTime(self):
return functools.reduce(lambda a,x: a + (x[1]-x[0]), self.sleepIntervals, datetime.timedelta())
def __repr__(self):
return "Guard(%d)" % (self.id)
def __str__(self):
mostSleptMinuteResult = self.mostSleptMinute()
if mostSleptMinuteResult:
return ("Guard #%d: shifts: %d bestMinute: %d bestCount: %d snoozeTime: %s"
% (self.id, len(self.shifts), *mostSleptMinuteResult, self.totalSleepTime()))
else:
return "Guard #%d: shifts: %d !no snooze!" % (self.id, len(self.shifts))
class Shift():
def __init__(self, guardId, date, sleepIntervals):
self.guardId = guardId
self.date = date
self.sleepIntervals = sleepIntervals
    @staticmethod
    def parse(lines):
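        # Input lines follow the AoC 2018 day 4 log format, e.g.:
        #   [1518-11-01 00:00] Guard #10 begins shift
        #   [1518-11-01 00:05] falls asleep
        #   [1518-11-01 00:25] wakes up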
def parseDate(line):
if isinstance(line, str):
if len(line) > 18 and line[0] == "[" and line[17] == "]":
dateAndTime = line[1:17].split()
date = [int(x) for x in dateAndTime[0].split('-')]
time = [int(x) for x in dateAndTime[1].split(':')]
return datetime.datetime(*date, *time)
raise Exception("parseDate: invalid line %s" % line)
def parseShiftStart(line):
if isinstance(line, str) and len(line) > 27:
if line[19:24] == "Guard":
text = line[26:]
num = ""
for i in range(1,len(text)):
if not text[:i].isdigit():
break
else:
num = text[:i]
if num.isdigit():
return int(num)
return False
def parseShiftSnoozeStart(line):
return "falls asleep" in line
def parseShiftSnoozeEnd(line):
return "wakes up" in line
if not lines:
return (None, [])
index = 0
line = lines[index]
date = parseDate(line)
guardId = parseShiftStart(line)
if not guardId:
raise Exception("invalid start of shift: %s" % line)
isSnoozing = False
snoozeList = []
while True:
index += 1
if index >= len(lines) or parseShiftStart(lines[index]):
break
if (isSnoozing and not parseShiftSnoozeEnd(lines[index]) or
not isSnoozing and not parseShiftSnoozeStart(lines[index])):
raise Exception("invalid snooze sequence...")
date = parseDate(lines[index])
if not isSnoozing:
isSnoozing = date
else:
snoozeList += [(isSnoozing, date)] # start, end
isSnoozing = False
return (Shift(guardId, date, snoozeList), lines[index:])
def __repr__(self):
return "Shift(%d, %s, %s)" % (self.guardId, self.date, self.sleepIntervals)
def part1(inputfile):
def greatestSnoozer(a, g):
if g.totalSleepTime() > a.totalSleepTime():
return g
else:
return a
def greatestMinuteSleeper(a, g):
res = g.mostSleptMinute()
if res and res[1] > a[0][1]:
return [(*res, g)]
elif res and res[1] == a[0][1]:
return a + [(*res, g)]
else:
return a
lines = []
with open(inputfile, "r") as file:
lines = file.readlines()
lines.sort()
guards = []
while True:
shift, lines = Shift.parse(lines)
shiftRegistered = False
for guard in guards:
if guard.id == shift.guardId:
guard.registerShift(shift)
shiftRegistered = True
break
if not shiftRegistered:
guards += [Guard(shift.guardId)]
guards[len(guards)-1].registerShift(shift)
if not lines:
break
# for g in guards:
# print(g)
snoozer = functools.reduce(greatestSnoozer, guards, Guard(-1))
print("[Part1] Best Snoozer: %s" % snoozer)
minuteSnoozer = functools.reduce(greatestMinuteSleeper, guards, [(None, 0, None)])
print("[Part2] %s" % list(map(lambda x: str(x[2]), minuteSnoozer)))
def main():
if (len(sys.argv) != 2):
print("usage: freq [inputfile]")
return
part1(sys.argv[1])
if __name__ == "__main__":
""" This is executed when run from the command line """
main()
| 31.62234 | 103 | 0.514382 | 606 | 5,945 | 5 | 0.232673 | 0.053465 | 0.021782 | 0.011221 | 0.066667 | 0.026403 | 0.012541 | 0.012541 | 0 | 0 | 0 | 0.013477 | 0.375946 | 5,945 | 187 | 104 | 31.791444 | 0.803235 | 0.009756 | 0 | 0.161074 | 0 | 0 | 0.057158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.134228 | false | 0 | 0.033557 | 0.033557 | 0.342282 | 0.020134 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3472110f997eb86fec5734bbdab064fe7075504b | 10,492 | py | Python | helpdesk/views/api/index.py | dispensable/helpdesk | f1dff07d5f051ea739a5995dbe06fef8a690e4b4 | [
"BSD-3-Clause"
] | null | null | null | helpdesk/views/api/index.py | dispensable/helpdesk | f1dff07d5f051ea739a5995dbe06fef8a690e4b4 | [
"BSD-3-Clause"
] | null | null | null | helpdesk/views/api/index.py | dispensable/helpdesk | f1dff07d5f051ea739a5995dbe06fef8a690e4b4 | [
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
import logging
from typing import Optional
from authlib.jose import jwt, errors as jwterrors
from starlette.responses import RedirectResponse # NOQA
from starlette.authentication import requires, has_required_scope # NOQA
from fastapi import Query, HTTPException, Depends, Request
from helpdesk import config
from helpdesk.libs.db import extract_filter_from_query_params
from helpdesk.models.provider import get_provider
from helpdesk.models.db.ticket import Ticket, TicketPhase
from helpdesk.models.db.param_rule import ParamRule
from helpdesk.models.action_tree import action_tree
from helpdesk.models.user import User
from helpdesk.libs.dependency import get_current_user, require_admin
from . import router
from .schemas import MarkTickets, ParamRule as ParamRuleSchema, OperateTicket
logger = logging.getLogger(__name__)
@router.get('/')
async def index():
return dict(msg='Hello API')
@router.get('/user/me')
async def user(current_user: User = Depends(get_current_user)) -> dict:
return current_user
@router.get('/admin_panel/{target_object}/{config_type}')
async def get_admin_panel_config(target_object: str, config_type: str,
_: User = Depends(require_admin)):
action_tree_leaf = action_tree.find(target_object) if target_object != '' else action_tree.first()
if not action_tree_leaf:
raise HTTPException(status_code=404, detail='Target object not found')
action = action_tree_leaf.action
if config_type not in ('param_rule',):
raise HTTPException(status_code=400, detail='Config type not supported')
param_rules = await ParamRule.get_all_by_provider_object(action.target_object)
return param_rules
@router.post('/admin_panel/{target_object}/{config_type}/{op}')
async def admin_panel(target_object: str, config_type: str, param_rule: ParamRuleSchema,
_: User = Depends(require_admin), op: Optional[str] = None):
if target_object != '':
action_tree_leaf = action_tree.find(target_object)
else:
action_tree_leaf = action_tree.first()
if not action_tree_leaf:
raise HTTPException(status_code=404, detail='Target object not found')
action = action_tree_leaf.action
if config_type not in ('param_rule',):
raise HTTPException(status_code=400, detail='Config type not supported')
if config_type == 'param_rule':
if op not in ('add', 'del'):
raise HTTPException(status_code=400, detail='Operation not supported')
if op == 'add':
new_rule = ParamRule(
id=param_rule.id,
title=param_rule.title,
provider_object=action.target_object,
rule=param_rule.rule,
is_auto_approval=param_rule.is_auto_approval,
approver=param_rule.approver)
id_ = await new_rule.save()
param_rule_added = await ParamRule.get(id_)
return param_rule_added
if op == 'del':
if not param_rule.id:
raise HTTPException(status_code=400, detail='Param rule id is required')
return await ParamRule.delete(param_rule.id) == param_rule.id
@router.get('/action_tree')
async def action_tree_list(_: User = Depends(get_current_user)):
def node_formatter(node, children):
if node.is_leaf:
return node.action
sub_node_info = {
'name': node.name,
'children': children,
}
return [sub_node_info] if node.parent is None else sub_node_info
return action_tree.get_tree_list(node_formatter)
@router.get('/action/{target_object}')
@router.post('/action/{target_object}')
async def action(target_object: str, request: Request, current_user: User = Depends(get_current_user)):
target_object = target_object.strip('/')
# check if action exists
action = action_tree.get_action_by_target_obj(target_object)
if not action:
raise HTTPException(status_code=404, detail='Target object not found')
provider = get_provider(action.provider_type)
if request.method == 'GET':
return action.to_dict(provider, current_user)
if request.method == 'POST':
form = await request.form()
ticket, msg = await action.run(provider, form, current_user)
msg_level = 'success' if bool(ticket) else 'error'
return dict(ticket=ticket, msg=msg, msg_level=msg_level, debug=config.DEBUG)
@router.post('/ticket/{ticket_id}/{op}')
async def ticket_op(ticket_id: int, op: str,
operate_data: OperateTicket, current_user: User = Depends(get_current_user)):
if op not in ('approve', 'reject'):
raise HTTPException(status_code=400, detail='Operation not supported')
ticket = await Ticket.get(ticket_id)
if not ticket:
raise HTTPException(status_code=404, detail='Ticket not found')
if not await ticket.can_admin(current_user):
raise HTTPException(status_code=403, detail='Permission denied')
if op == 'approve':
ret, msg = ticket.approve(by_user=current_user.name)
if not ret:
raise HTTPException(status_code=400, detail=msg)
execution, msg = ticket.execute()
if not execution:
raise HTTPException(status_code=400, detail=msg)
elif op == 'reject':
if operate_data.reason:
            ticket.reason = operate_data.reason
ret, msg = ticket.reject(by_user=current_user.name)
if not ret:
raise HTTPException(status_code=400, detail=msg)
id_ = await ticket.save()
if not id_:
msg = 'ticket executed but failed to save state' if op == 'approve' else 'Failed to save ticket state'
raise HTTPException(status_code=500, detail=msg)
await ticket.notify(TicketPhase.APPROVAL)
return dict(msg='Success')
@router.post('/helpdesk_ticket/mark/{ticket_id}')
async def mark_ticket(ticket_id: int, mark: MarkTickets, token: Optional[str] = None,
_: User = Depends(get_current_user)):
"""call helpdesk_ticket op to handle this handler only make authenticate disappear for provider"""
# verify jwt for callback url
try:
payload = jwt.decode(token, config.SESSION_SECRET_KEY)
logger.debug(f'received callback req: {payload}')
assert payload['ticket_id'] == ticket_id
except (jwterrors.BadSignatureError, AssertionError):
raise HTTPException(status_code=403, detail='Invalid token')
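    # Illustrative counterpart (an assumption, not defined in this module): the
    # provider side would mint the callback token roughly as
    #   jwt.encode({'alg': 'HS256'}, {'ticket_id': ticket_id}, config.SESSION_SECRET_KEY)
    # so that the decode above succeeds with the shared secret.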
helpdesk_ticket = await Ticket.get(ticket_id)
if not helpdesk_ticket:
raise HTTPException(status_code=404, detail='Ticket not found')
try:
helpdesk_ticket.annotate(execution_status=mark.execution_status, final_exec_status=True)
logger.debug(f"helpdesk_ticket annotation: {helpdesk_ticket.annotation}")
# add notification to helpdesk_ticket mark action
await helpdesk_ticket.notify(TicketPhase.MARK)
await helpdesk_ticket.save()
except (RuntimeError, AssertionError) as e:
raise HTTPException(status_code=400, detail=f'decode mark body error: {str(e)}')
return dict(msg='Success')
def extra_dict(d):
id_ = d['id']
return dict(
url=f"/ticket/{id_}",
approve_url=f"/api/ticket/{id_}/approve",
reject_url=f"/api/ticket/{id_}/reject",
api_url=f"/api/ticket/{id_}",
**d)
@router.get('/ticket')
async def list_ticket(page: Optional[str] = None, page_size: Optional[str] = None,
order_by: Optional[str] = None, desc: bool = False, current_user: User = Depends(get_current_user)):
filter_ = extract_filter_from_query_params(query_params={
'page': page,
'page_size': page_size,
'order_by': order_by,
'desc': desc
}, model=Ticket)
if page and page.isdigit():
page = max(1, int(page))
else:
page = 1
if page_size and page_size.isdigit():
page_size = max(1, int(page_size))
page_size = min(page_size, config.TICKETS_PER_PAGE)
else:
page_size = config.TICKETS_PER_PAGE
if desc and str(desc).lower() == 'false':
desc = False
else:
desc = True
kw = dict(filter_=filter_, order_by=order_by, desc=desc, limit=page_size, offset=(page - 1) * page_size)
if current_user.is_admin:
tickets = await Ticket.get_all(**kw)
total = await Ticket.count(filter_=filter_)
else:
# only show self tickets if not admin
tickets = await Ticket.get_all_by_submitter(submitter=current_user.name, **kw)
total = await Ticket.count_by_submitter(submitter=current_user.name, filter_=filter_)
return dict(
tickets=[extra_dict(t.to_dict(show=True)) for t in tickets],
page=page,
page_size=page_size,
total=total,
)
@router.get('/ticket/{ticket_id}')
@router.post('/ticket/{ticket_id}')
async def get_ticket(ticket_id: int, current_user: User = Depends(get_current_user)):
ticket = await Ticket.get(ticket_id)
if not ticket:
raise HTTPException(status_code=404, detail="ticket not found")
if not await ticket.can_view(current_user):
raise HTTPException(status_code=403, detail='Permission denied')
tickets = [ticket]
total = 1
return dict(
tickets=[extra_dict(t.to_dict(show=True)) for t in tickets],
total=total,
)
@router.get('/ticket/{ticket_id}/result')
async def ticket_result(ticket_id: int, exec_output_id: Optional[str] = None, _: User = Depends(get_current_user)):
ticket = await Ticket.get(ticket_id)
if not ticket:
raise HTTPException(status_code=404, detail="ticket not found")
execution, msg = ticket.get_result(execution_output_id=exec_output_id)
if not execution:
raise HTTPException(status_code=404, detail=msg)
# update ticket status by result
if not exec_output_id:
annotation_execution_status = ticket.annotation.get('execution_status')
final_exec_status = ticket.annotation.get('final_exec_status')
try:
exec_status = execution.get('status')
if exec_status and annotation_execution_status != exec_status \
and not final_exec_status:
ticket.annotate(execution_status=exec_status)
await ticket.save()
except AttributeError as e:
logger.warning(f"can not get status from execution, error: {str(e)}")
return execution
| 38.432234 | 122 | 0.681186 | 1,371 | 10,492 | 4.990518 | 0.15682 | 0.038585 | 0.073663 | 0.08594 | 0.401198 | 0.345951 | 0.270535 | 0.224203 | 0.199649 | 0.182695 | 0 | 0.008404 | 0.217499 | 10,492 | 272 | 123 | 38.573529 | 0.82497 | 0.018014 | 0 | 0.226852 | 0 | 0 | 0.108964 | 0.028933 | 0 | 0 | 0 | 0 | 0.013889 | 1 | 0.009259 | false | 0 | 0.074074 | 0 | 0.157407 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3476a86dc9e0ac6fd8216803840ae80b2e93eb7b | 3,785 | py | Python | musicSpide.py | Brucechen13/converweb | 75e53960e0340a3c3a762bfe0b6087e3e7ac8ea0 | [
"MIT"
] | null | null | null | musicSpide.py | Brucechen13/converweb | 75e53960e0340a3c3a762bfe0b6087e3e7ac8ea0 | [
"MIT"
] | null | null | null | musicSpide.py | Brucechen13/converweb | 75e53960e0340a3c3a762bfe0b6087e3e7ac8ea0 | [
"MIT"
] | null | null | null | import requests
import json,os
import base64
from Crypto.Cipher import AES
import codecs
import binascii
from lxml import etree
musicUrl = ['http://music.163.com/discover/toplist?id=3779629']
basicUrl = 'http://music.163.com'
def getMusic():
return get_html(musicUrl[0])
def get_html(url):
headers = {
'Accept': '*/*',
'Accept-Encoding': 'gzip, deflate, sdch, br',
'Accept-Language': 'zh-CN,zh;q=0.8,en;q=0.6',
'Host':'music.163.com',
'Referer':'http://music.163.com',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36'
}
res = requests.get(url, headers=headers)
if res.status_code == 200:
data = []
doc = etree.HTML(res.text)
all_songs = doc.xpath('//ul[@class="f-hide"]/li')
for i in range(0, min(10, len(all_songs))):
music = {}
            music['url'] = all_songs[i].xpath('./a/@href')[0]
music['comments'] = get_comments(music['url'][9:])
music['title'] = all_songs[i].xpath('./a/text()')[0]
res_detail = requests.get(basicUrl+music['url'] )
if res_detail.status_code == 200:
doc_detail = etree.HTML(res_detail.text)
music['picUrl'] = doc_detail.xpath('//div[@class="u-cover u-cover-6 f-fl"]/img/@src')[0]
infos = doc_detail.xpath('//div[@class="cnt"]/p[@class="des s-fc4"]')
music['auth'] = infos[0].xpath('./span/@title')[0]
music['album'] = infos[1].xpath('./a/text()')[0]
else:
pass
music['url'] = basicUrl+music['url']
data.append(music)
#print(music)
return data
else:
pass
# NetEase Cloud Music song comments are loaded via AJAX, so they cannot be scraped from the HTML.
# We have to call the comments API instead; the API encrypts its payload, and the functions below handle that encryption.
def aesEncrypt(text, secKey):
pad = 16 - len(text) % 16
if (type(text)) is str:
text = text.encode('utf-8')
text = text + (pad * chr(pad)).encode('utf-8')
encryptor = AES.new(secKey, 2, '0102030405060708')
ciphertext = encryptor.encrypt(text)
ciphertext = base64.b64encode(ciphertext)
return ciphertext
def rsaEncrypt(text, pubKey, modulus):
text = text[::-1]
rs = int(binascii.b2a_hex(text.encode('utf-8')), 16) ** int(pubKey, 16) % int(modulus, 16)
return format(rs, 'x').zfill(256)
def createSecretKey(size):
return (''.join(map(lambda xx: (hex(xx)[2:]), os.urandom(size))))[0:16]
def get_comments(songId):
comments = []
url = 'http://music.163.com/weapi/v1/resource/comments/R_SO_4_' + str(songId) + '/?csrf_token='
headers = {'Cookie': 'appver=1.5.0.75771;', 'Referer': 'http://music.163.com/'}
text = {'username': '', 'password': '', 'rememberLogin': 'true'}
modulus = '00e0b509f6259df8642dbc35662901477df22677ec152b5ff68ace615bb7b725152b3ab17a876aea8a5aa76d2e417629ec4ee341f56135fccf695280104e0312ecbda92557c93870114af6c9d05c4f7f0c3685b7a46bee255932575cce10b424d813cfe4875d3e82047b97ddef52741d546b8e289dc6935b3ece0462db0a22b8e7'
nonce = '0CoJUm6Qyw8W8jud'
pubKey = '010001'
text = json.dumps(text)
secKey = createSecretKey(16)
encText = aesEncrypt(aesEncrypt(text, nonce), secKey)
encSecKey = rsaEncrypt(secKey, pubKey, modulus)
data = {'params': encText, 'encSecKey': encSecKey}
    req = requests.post(url, headers=headers, data=data)
total = req.json()['hotComments']
for i in range(0, min(2, len(total))):
comm = {}
comment = total[i]
comm['content'] = comment['content']
comm['userName'] = comment['user']['nickname']
comm['userPic'] = comment['user']['avatarUrl']
comments.append(comm)
return comments
#print(getMusic()) | 39.427083 | 274 | 0.614795 | 448 | 3,785 | 5.147321 | 0.419643 | 0.020815 | 0.028621 | 0.032524 | 0.06418 | 0.01301 | 0 | 0 | 0 | 0 | 0 | 0.105724 | 0.215324 | 3,785 | 96 | 275 | 39.427083 | 0.670707 | 0.025363 | 0 | 0.04878 | 0 | 0.012195 | 0.288931 | 0.097396 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0.036585 | 0.085366 | 0.02439 | 0.231707 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3477643ed6a1e9f2e57cd610eb10b9c823a21e5d | 2,891 | py | Python | package/src/snobedo/modis/__main__.py | UofU-Cryosphere/isnoda | 147742a8e589e472f73f9926cef27c2a1fe1d8d5 | [
"MIT"
] | 3 | 2021-12-02T01:55:08.000Z | 2022-01-05T21:40:55.000Z | package/src/snobedo/modis/__main__.py | UofU-Cryosphere/isnoda | 147742a8e589e472f73f9926cef27c2a1fe1d8d5 | [
"MIT"
] | 12 | 2020-09-24T22:32:48.000Z | 2022-02-17T20:29:54.000Z | package/src/snobedo/modis/__main__.py | UofU-Cryosphere/isnoda | 147742a8e589e472f73f9926cef27c2a1fe1d8d5 | [
"MIT"
] | 1 | 2020-08-26T19:45:32.000Z | 2020-08-26T19:45:32.000Z | import argparse
from datetime import datetime, timedelta
from pathlib import Path
from typing import NamedTuple
import dask
import numpy as np
from snobedo.lib import ModisGeoTiff
from snobedo.lib.command_line_helpers import add_dask_options, \
add_water_year_option
from snobedo.lib.dask_utils import run_with_client
from snobedo.modis.geotiff_to_zarr import write_zarr
from snobedo.modis.matlab_to_geotiff import matlab_to_geotiff, warp_to
ONE_DAY = timedelta(days=1)
class ConversionConfig(NamedTuple):
variable: str
source_dir: Path
output_dir: Path
modis_us: ModisGeoTiff
target_srs: str
def argument_parser():
parser = argparse.ArgumentParser(
description='Convert matlab files to zarr',
)
parser.add_argument(
'--source-dir',
required=True,
type=Path,
        help='Base directory. The files to convert are expected to be in a '
             'folder with the water year. Example: 2018. '
             'The other required file expected under this folder is the template '
             f'MODIS file with name: {ModisGeoTiff.WESTERN_US_TEMPLATE}',
)
parser.add_argument(
'--variable',
required=True,
type=str,
help='Variable to extract from the matlab files'
)
parser.add_argument(
'--t-srs',
type=str,
default='EPSG:32613',
help='Target EPSG. Default: EPSG:32613'
)
parser = add_dask_options(parser)
parser = add_water_year_option(parser)
return parser
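# Example invocation (illustrative; 'albedo' is a placeholder variable name, and
# the water-year and dask flags added by the shared helpers are omitted here):
#   python -m snobedo.modis --source-dir /data/modis --variable albedo --t-srs EPSG:32613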
def config_for_arguments(arguments):
output_dir = arguments.source_dir / f'wy{arguments.water_year}-zarr/'
output_dir.mkdir(exist_ok=True)
return ConversionConfig(
variable=arguments.variable,
source_dir=arguments.source_dir,
output_dir=output_dir,
modis_us=ModisGeoTiff(arguments.source_dir),
target_srs=arguments.t_srs
)
def date_range(water_year):
d0 = datetime(water_year - 1, 9, 30)
d1 = datetime(water_year, 10, 1)
return np.arange(d0, d1, ONE_DAY).astype(datetime)
@dask.delayed
def write_date(date, config):
file = matlab_to_geotiff(
config.source_dir,
config.output_dir,
config.modis_us,
date,
config.variable,
)
file = warp_to(file, config.target_srs)
write_zarr(file, date, config.variable, config.output_dir)
def main():
arguments = argument_parser().parse_args()
if not arguments.source_dir.exists():
raise IOError(
f'Given source folder does not exist: {arguments.source_dir}'
)
with run_with_client(arguments.cores, arguments.memory):
config = config_for_arguments(arguments)
files = [
write_date(date, config)
for date in date_range(arguments.water_year)
]
dask.compute(files)
if __name__ == '__main__':
main()
| 26.045045 | 79 | 0.675199 | 368 | 2,891 | 5.078804 | 0.315217 | 0.043339 | 0.048154 | 0.019262 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011834 | 0.240055 | 2,891 | 110 | 80 | 26.281818 | 0.838871 | 0 | 0 | 0.08046 | 0 | 0 | 0.158423 | 0.029747 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057471 | false | 0 | 0.126437 | 0 | 0.287356 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3477dcb657d5c2ddc0c119b91b421d868e92f3de | 2,114 | py | Python | repos/system_upgrade/el8toel9/actors/sssdcheck/actor.py | sm00th/leapp-repository | 1c171ec3a5f9260a3c6f84a9b15cad78a875ac61 | [
"Apache-2.0"
] | null | null | null | repos/system_upgrade/el8toel9/actors/sssdcheck/actor.py | sm00th/leapp-repository | 1c171ec3a5f9260a3c6f84a9b15cad78a875ac61 | [
"Apache-2.0"
] | 1 | 2022-03-07T15:34:11.000Z | 2022-03-07T15:35:15.000Z | repos/system_upgrade/el8toel9/actors/sssdcheck/actor.py | sm00th/leapp-repository | 1c171ec3a5f9260a3c6f84a9b15cad78a875ac61 | [
"Apache-2.0"
] | null | null | null | from leapp.actors import Actor
from leapp.models import SSSDConfig8to9
from leapp import reporting
from leapp.reporting import Report, create_report
from leapp.tags import IPUWorkflowTag, ChecksPhaseTag
COMMON_REPORT_TAGS = [reporting.Tags.AUTHENTICATION, reporting.Tags.SECURITY]
related = [
reporting.RelatedResource('package', 'sssd'),
reporting.RelatedResource('file', '/etc/sssd/sssd.conf')
]
class SSSDCheck8to9(Actor):
"""
Check SSSD configuration for changes in RHEL9 and report them in model.
    The implicit files domain is disabled by default. This may affect local
    smartcard authentication if no explicit files domain is created.
If there is no files domain and smartcard authentication is enabled,
we will notify the administrator.
"""
name = 'sssd_check_8to9'
consumes = (SSSDConfig8to9,)
produces = (Report,)
tags = (IPUWorkflowTag, ChecksPhaseTag)
def process(self):
model = next(self.consume(SSSDConfig8to9), None)
if not model:
return
# enable_files_domain is set explicitly, change of default has no effect
if model.enable_files_domain_set:
return
# there is explicit files domain, implicit files domain has no effect
if model.explicit_files_domain:
return
# smartcard authentication is disabled, implicit files domain has no effect
if not model.pam_cert_auth:
return
create_report([
reporting.Title('SSSD implicit files domain is now disabled by default.'),
reporting.Summary('Default value of [sssd]/enable_files_domain has '
'changed from true to false.'),
reporting.Tags(COMMON_REPORT_TAGS),
reporting.Remediation(
hint='If you use smartcard authentication for local users, '
'set this option to true explicitly and call '
'"authselect enable-feature with-files-domain".'
),
reporting.Severity(reporting.Severity.MEDIUM)
] + related)
| 34.655738 | 86 | 0.668401 | 241 | 2,114 | 5.784232 | 0.410788 | 0.094692 | 0.054519 | 0.027977 | 0.06241 | 0.045911 | 0.045911 | 0 | 0 | 0 | 0 | 0.007079 | 0.264901 | 2,114 | 60 | 87 | 35.233333 | 0.889961 | 0.25071 | 0 | 0.108108 | 0 | 0 | 0.207097 | 0.016774 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.135135 | 0 | 0.405405 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3478bed71b27fbe54d8a0036b43d502dea8a018c | 11,308 | py | Python | karborclient/common/base.py | thuylt2/karborclient | 884d685c3cd1c1075e895791d7027b56c30dd328 | [
"Apache-2.0"
] | null | null | null | karborclient/common/base.py | thuylt2/karborclient | 884d685c3cd1c1075e895791d7027b56c30dd328 | [
"Apache-2.0"
] | null | null | null | karborclient/common/base.py | thuylt2/karborclient | 884d685c3cd1c1075e895791d7027b56c30dd328 | [
"Apache-2.0"
] | null | null | null | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Base utilities to build API operation managers and objects on top of.
"""
import abc
import copy
import six
from six.moves.urllib import parse
from karborclient.common.apiclient import exceptions
from karborclient.common import http
SORT_DIR_VALUES = ('asc', 'desc')
SORT_KEY_VALUES = ('id', 'status', 'name', 'created_at')
SORT_KEY_MAPPINGS = {}
def getid(obj):
"""Abstracts the common pattern of allowing both an object or
an object's ID (UUID) as a parameter when dealing with relationships.
"""
try:
return obj.id
except AttributeError:
return obj
class Manager(object):
"""Managers interact with a particular type of API (servers, flavors,
images, etc.) and provide CRUD operations for them.
"""
resource_class = None
def __init__(self, api):
self.api = api
if isinstance(self.api, http.SessionClient):
self.project_id = self.api.get_project_id()
else:
self.project_id = self.api.project_id
def _list(self, url, response_key=None, obj_class=None,
data=None, headers=None, return_raw=False,):
if headers is None:
headers = {}
resp, body = self.api.json_request('GET', url, headers=headers)
if obj_class is None:
obj_class = self.resource_class
if response_key:
if response_key not in body:
body[response_key] = []
data = body[response_key]
else:
data = body
if return_raw:
return data
return [obj_class(self, res, loaded=True) for res in data if res]
def _delete(self, url, headers=None):
if headers is None:
headers = {}
self.api.raw_request('DELETE', url, headers=headers)
def _update(self, url, data, response_key=None, headers=None):
if headers is None:
headers = {}
resp, body = self.api.json_request('PUT', url, data=data,
headers=headers)
# PUT requests may not return a body
if body:
if response_key:
return self.resource_class(self, body[response_key])
return self.resource_class(self, body)
def _create(self, url, data=None, response_key=None,
return_raw=False, headers=None):
if headers is None:
headers = {}
if data:
resp, body = self.api.json_request('POST', url,
data=data, headers=headers)
else:
resp, body = self.api.json_request('POST', url, headers=headers)
if return_raw:
if response_key:
return body[response_key]
return body
if response_key:
return self.resource_class(self, body[response_key])
return self.resource_class(self, body)
def _get(self, url, response_key=None, return_raw=False, headers=None):
if headers is None:
headers = {}
resp, body = self.api.json_request('GET', url, headers=headers)
if return_raw:
if response_key:
return body[response_key]
return body
if response_key:
return self.resource_class(self, body[response_key])
return self.resource_class(self, body)
def _build_list_url(self, resource_type, detailed=False,
search_opts=None, marker=None, limit=None,
sort_key=None, sort_dir=None, sort=None):
if search_opts is None:
search_opts = {}
query_params = {}
for key, val in search_opts.items():
if val:
query_params[key] = val
if marker:
query_params['marker'] = marker
if limit:
query_params['limit'] = limit
if sort:
query_params['sort'] = self._format_sort_param(sort)
else:
# sort_key and sort_dir deprecated in kilo, prefer sort
if sort_key:
query_params['sort_key'] = self._format_sort_key_param(
sort_key)
if sort_dir:
query_params['sort_dir'] = self._format_sort_dir_param(
sort_dir)
# Transform the dict to a sequence of two-element tuples in fixed
# order, then the encoded string will be consistent in Python 2&3.
query_string = ""
if query_params:
params = sorted(query_params.items(), key=lambda x: x[0])
query_string = "?%s" % parse.urlencode(params)
detail = ""
if detailed:
detail = "/detail"
return ("/%(resource_type)s%(detail)s"
"%(query_string)s" %
{"resource_type": resource_type, "detail": detail,
"query_string": query_string})
def _format_sort_param(self, sort):
'''Formats the sort information into the sort query string parameter.
The input sort information can be any of the following:
- Comma-separated string in the form of <key[:dir]>
- List of strings in the form of <key[:dir]>
- List of either string keys, or tuples of (key, dir)
For example, the following import sort values are valid:
- 'key1:dir1,key2,key3:dir3'
- ['key1:dir1', 'key2', 'key3:dir3']
- [('key1', 'dir1'), 'key2', ('key3', dir3')]
:param sort: Input sort information
:returns: Formatted query string parameter or None
:raise ValueError: If an invalid sort direction or invalid sort key is
given
'''
if not sort:
return None
if isinstance(sort, six.string_types):
# Convert the string into a list for consistent validation
sort = [s for s in sort.split(',') if s]
sort_array = []
for sort_item in sort:
if isinstance(sort_item, tuple):
sort_key = sort_item[0]
sort_dir = sort_item[1]
else:
sort_key, _sep, sort_dir = sort_item.partition(':')
sort_key = sort_key.strip()
if sort_key in SORT_KEY_VALUES:
sort_key = SORT_KEY_MAPPINGS.get(sort_key, sort_key)
else:
raise ValueError('sort_key must be one of the following: %s.'
% ', '.join(SORT_KEY_VALUES))
if sort_dir:
sort_dir = sort_dir.strip()
if sort_dir not in SORT_DIR_VALUES:
msg = ('sort_dir must be one of the following: %s.'
% ', '.join(SORT_DIR_VALUES))
raise ValueError(msg)
sort_array.append('%s:%s' % (sort_key, sort_dir))
else:
sort_array.append(sort_key)
return ','.join(sort_array)
def _format_sort_key_param(self, sort_key):
if sort_key in SORT_KEY_VALUES:
return SORT_KEY_MAPPINGS.get(sort_key, sort_key)
msg = ('sort_key must be one of the following: %s.' %
', '.join(SORT_KEY_VALUES))
raise ValueError(msg)
def _format_sort_dir_param(self, sort_dir):
if sort_dir in SORT_DIR_VALUES:
return sort_dir
msg = ('sort_dir must be one of the following: %s.'
% ', '.join(SORT_DIR_VALUES))
raise ValueError(msg)
@six.add_metaclass(abc.ABCMeta)
class ManagerWithFind(Manager):
"""Manager with additional `find()`/`findall()` methods."""
@abc.abstractmethod
def list(self):
pass
def find(self, **kwargs):
"""Find a single item with attributes matching ``**kwargs``.
This isn't very efficient: it loads the entire list then filters on
the Python side.
"""
rl = self.findall(**kwargs)
num = len(rl)
if num == 0:
msg = "No %s matching %s." % (self.resource_class.__name__, kwargs)
raise exceptions.NotFound(msg)
elif num > 1:
raise exceptions.NoUniqueMatch
else:
return self.get(rl[0].id)
def findall(self, **kwargs):
"""Find all items with attributes matching ``**kwargs``.
This isn't very efficient: it loads the entire list then filters on
the Python side.
"""
found = []
searches = kwargs.items()
for obj in self.list():
try:
if all(getattr(obj, attr) == value
for (attr, value) in searches):
found.append(obj)
except AttributeError:
continue
return found
class Resource(object):
"""A resource represents a particular instance of an object (tenant, user,
etc). This is pretty much just a bag for attributes.
:param manager: Manager object
:param info: dictionary representing resource attributes
:param loaded: prevent lazy-loading if set to True
"""
def __init__(self, manager, info, loaded=False):
self.manager = manager
self._info = info
self._add_details(info)
self._loaded = loaded
def _add_details(self, info):
for k, v in info.items():
setattr(self, k, v)
def __setstate__(self, d):
for k, v in d.items():
setattr(self, k, v)
def __getattr__(self, k):
if k not in self.__dict__:
# NOTE(bcwaldon): disallow lazy-loading if already loaded once
if not self.is_loaded():
self.get()
return self.__getattr__(k)
raise AttributeError(k)
else:
return self.__dict__[k]
def __repr__(self):
reprkeys = sorted(k for k in self.__dict__.keys() if k[0] != '_' and
k != 'manager')
info = ", ".join("%s=%s" % (k, getattr(self, k)) for k in reprkeys)
return "<%s %s>" % (self.__class__.__name__, info)
def get(self):
# set_loaded() first ... so if we have to bail, we know we tried.
self.set_loaded(True)
if not hasattr(self.manager, 'get'):
return
new = self.manager.get(self.id)
if new:
self._add_details(new._info)
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self._info == other._info
def __ne__(self, other):
return not self.__eq__(other)
def is_loaded(self):
return self._loaded
def set_loaded(self, val):
self._loaded = val
def to_dict(self):
return copy.deepcopy(self._info)
| 32.776812 | 79 | 0.57508 | 1,406 | 11,308 | 4.438834 | 0.216216 | 0.035892 | 0.027239 | 0.020189 | 0.255248 | 0.236661 | 0.229931 | 0.216952 | 0.189232 | 0.189232 | 0 | 0.0037 | 0.330828 | 11,308 | 344 | 80 | 32.872093 | 0.821065 | 0.221348 | 0 | 0.257919 | 0 | 0 | 0.046193 | 0.003274 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113122 | false | 0.004525 | 0.027149 | 0.013575 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3478c3f0864fb6087e6bd877632d57facc873d5f | 1,640 | py | Python | src/py_wizard/console_wiz_iface/ConsoleCurrencyQuestion.py | shearern/PyWizard | fac7f859707bc7fe36737caa794c159a66ea0789 | [
"MIT"
] | null | null | null | src/py_wizard/console_wiz_iface/ConsoleCurrencyQuestion.py | shearern/PyWizard | fac7f859707bc7fe36737caa794c159a66ea0789 | [
"MIT"
] | null | null | null | src/py_wizard/console_wiz_iface/ConsoleCurrencyQuestion.py | shearern/PyWizard | fac7f859707bc7fe36737caa794c159a66ea0789 | [
"MIT"
] | null | null | null | '''
Created on Sep 20, 2013
@author: nshearer
'''
from ConsoleSimpleQuestion import ConsoleSimpleQuestion
from ConsoleSimpleQuestion import UserAnswerValidationError
class ConsoleCurrencyQuestion(ConsoleSimpleQuestion):
def __init__(self, question):
super(ConsoleCurrencyQuestion, self).__init__(question)
# def _format_default_displayed(self, default_answer):
# return " [%.02f]" % (default_answer / 100.0)
def encode_answer_to_native(self, user_answer):
'''Return answers formatted to save in answer object'''
if user_answer is None or len(str(user_answer)) == 0:
return None
try:
parts = user_answer.split('.')
if len(parts) > 2:
raise UserAnswerValidationError("More than one decimal point?")
cents = 100 * int(parts[0])
if len(parts) == 2:
if len(parts[1]) > 2:
raise UserAnswerValidationError("Cents must be less than 100")
                # pad so that e.g. '3.5' is read as 3 dollars 50 cents, not 5 cents
                cents += int(parts[1].ljust(2, '0'))
return cents
except ValueError:
raise UserAnswerValidationError("Value not a decimal")
def decode_answer_to_text(self, answer):
'''Given a previous or default answer, convert it to a text value'''
if answer is None or len(str(answer).strip()) == 0:
return None
dollars = answer / 100 # Int division truncates
cents = answer % 100
return '%d.%02d' % (dollars, cents)
| 32.156863 | 83 | 0.568293 | 166 | 1,640 | 5.475904 | 0.445783 | 0.044004 | 0.033003 | 0.030803 | 0.044004 | 0.044004 | 0 | 0 | 0 | 0 | 0 | 0.031716 | 0.346341 | 1,640 | 50 | 84 | 32.8 | 0.816231 | 0.175 | 0 | 0.076923 | 0 | 0 | 0.063913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.076923 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
34791ea44dea16eb3c5f6adbe45ec3f5e3a3b266 | 21,113 | py | Python | peersockets.py | kaykurokawa/pushtx | 8155665e17f494a22f65f32ec75002da846e5135 | [
"MIT"
] | 7 | 2015-07-28T20:35:05.000Z | 2021-03-13T03:30:16.000Z | peersockets.py | kaykurokawa/pushtx | 8155665e17f494a22f65f32ec75002da846e5135 | [
"MIT"
] | 1 | 2015-08-07T05:08:58.000Z | 2015-08-13T16:32:13.000Z | peersockets.py | kaykurokawa/pushtx | 8155665e17f494a22f65f32ec75002da846e5135 | [
"MIT"
] | 4 | 2015-11-08T10:30:58.000Z | 2020-10-31T21:55:40.000Z | import logging
import struct
import socket
import select
import time
from hashlib import sha256
import protocol
import cryptoconfig
USER_AGENT='/Satoshi:0.10.0/' #BIP 14
TCP_RECV_PACKET_SIZE=4096
SOCKET_BLOCK_SECONDS=0 # None means blocking calls, 0 means non blocking calls
ADDRESS_TO_GET_IP='google.com' #connect to this address to retrieve this computer's IP
NONCE = 1
DEFAULT_MAX_PEERS = 64 #max number of peers
DEFAULT_NUM_TX_BROADCASTS = 20 #number of peers to broadcast tx to
LOG_FILENAME='peersockets.log'
def socketrecv(conn,init_buffer_size):
msg=conn.recv(init_buffer_size)
    # an empty read means the socket was closed; signal this by returning None
if len(msg) == 0:
return None
    # length of the message in bytes, including this two-byte length prefix
expected_msg_len=struct.unpack('<H',msg[0:2])[0]
if len(msg) < expected_msg_len:
while 1:
new_msg=conn.recv(expected_msg_len-len(msg))
msg+=new_msg
if len(msg) == expected_msg_len:
break
return msg[2:]
def socketsend(conn,msg):
if len(msg) > 65536:
raise Exception('message length must be less than 16 bits')
msg_len=len(msg)+2
send_msg=struct.pack('<H',msg_len)+msg
conn.sendall(send_msg)
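# Minimal client sketch (illustrative, not part of the original module): push a
# hex-encoded raw transaction to a running PeerSocketsHandler over its local
# messaging socket, using the same length-prefixed framing as socketsend/socketrecv.
# The port is whatever cryptoconfig.MESSAGING_PORT holds for the chosen crypto.
def send_tx_to_handler(tx_hex, port):
    conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    conn.connect(('localhost', port))
    socketsend(conn, 'tx ' + tx_hex)
    # the handler replies 'ack' once the tx is queued for broadcast
    reply = socketrecv(conn, TCP_RECV_PACKET_SIZE)
    conn.close()
    return reply == 'ack'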
# Handle multiple peer sockets
class PeerSocketsHandler(object):
# tx_broadcast_list is a list of transactions to be broadcast
# in hex string (i.e, '03afb8..')
def __init__(self,crypto,tx_broadcast_list=[],peer_list=[],
connect_to_dns_seeds = True,
max_peers = DEFAULT_MAX_PEERS,
num_tx_broadcasts = DEFAULT_NUM_TX_BROADCASTS):
if crypto not in cryptoconfig.SUPPORTED_CRYPTOS:
raise Exception("Unsupported crypto {}".format(crypto))
logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO,
format="%(asctime)s; %(levelname)s; %(message)s")
self.crypto=crypto
        self.max_peers=max_peers
        self.num_tx_broadcasts=num_tx_broadcasts
self.my_ip = self._get_my_ip()
self.poller = select.poll()
self.fileno_to_peer_dict = {}
self.address_to_peer_dict = {}
self.tx_broadcast_list = []
for tx in tx_broadcast_list:
self.tx_broadcast_list.append((tx,0))
# setup messaging socket
self.msg_socket= socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.msg_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.msg_socket.bind(('localhost',cryptoconfig.MESSAGING_PORT[self.crypto]))
self.msg_socket.listen(5)
self.msg_socket.settimeout(0) # non blocking
self.msg_recv_buffer=''
# connect to DNS seeds
if connect_to_dns_seeds:
for address in cryptoconfig.DNS_SEEDS[self.crypto]:
self.create_peer_socket(address)
# connect to specified peers
for address in peer_list:
self.create_peer_socket(address)
# function to get my current ip,
def _get_my_ip(self):
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((ADDRESS_TO_GET_IP,80))
out= s.getsockname()[0]
s.close()
return out
#create new peer socket at address
def create_peer_socket(self,address):
# check if address is domain name and convert it to TCP IP address
if any([ c.isalpha() for c in address]):
try:
address=socket.gethostbyname(address)
except Exception as e:
logging.warn("failed to resolve address "+address)
return False
try:
peer=PeerSocket(self.crypto)
peer.connect(address)
except IOError as e:
logging.warn("I/O error({0}): {1}, could not connect to {}".format(e.errno, e.strerror,address))
return False
self.fileno_to_peer_dict[peer.get_socket().fileno()]=peer
self.address_to_peer_dict[address]=peer
eventmask=select.POLLIN|select.POLLPRI|select.POLLOUT|select.POLLERR|select.POLLHUP|select.POLLNVAL
self.poller.register( peer.get_socket().fileno(),eventmask)
return True
def remove_peer_socket(self,peer):
#unregister and remove from dictionary
fileno=peer.get_socket().fileno()
address=peer.get_address()
self.poller.unregister(fileno)
del self.fileno_to_peer_dict[fileno]
del self.address_to_peer_dict[address]
def get_num_peers(self):
return len(self.fileno_to_peer_dict)
def get_num_active_peers(self):
out=0
for peer in self.fileno_to_peer_dict.values():
if peer.get_is_active():
out+=1
return out
def add_new_broadcast_tx(self,tx):
self.tx_broadcast_list.append((tx,0))
def _recv_msg(self):
try:
conn,addr=self.msg_socket.accept()
msg=socketrecv(conn,TCP_RECV_PACKET_SIZE)
except Exception as e:
msg=None
if msg==None:
return False
msg_list=msg.split()
if msg_list[0] == 'tx':
self.add_new_broadcast_tx(msg_list[1])
socketsend(conn,'ack')
return True
else:
return False
# poll peer sockets and do stuff if there is data
def run(self):
# process any messages
result=self._recv_msg()
#broadcast tx
active_peer_list = [peer for peer in self.fileno_to_peer_dict.values() if peer.get_is_active()]
for current_peer in active_peer_list:
for (i,(tx,num_broadcasts)) in enumerate(self.tx_broadcast_list):
was_broadcast=current_peer.broadcast(tx)# this will not broadcast more than once
if was_broadcast:
logging.info("Tx has been broadcast: {}".format(tx))
self.tx_broadcast_list[i]=(tx,num_broadcasts+1)
# remove tx after we broadcast NUM_TX_BROADCASTS times
self.tx_broadcast_list=[x for x in self.tx_broadcast_list if x[1] < self.num_tx_broadcasts]
# check received packets
events=self.poller.poll()
for event in events:
poll_result=event[1]
fileno=event[0]
current_peer=self.fileno_to_peer_dict[fileno]
if(poll_result & select.POLLOUT): #ready for write (means socket is connected)
#don't check for POLLOUT anymore, since we know it is connected
self.poller.modify(fileno,
select.POLLIN|select.POLLPRI|select.POLLERR|select.POLLHUP|select.POLLNVAL)
# initialize by sending version
if not current_peer.get_is_active():
logging.info("connection established to {}".format(current_peer.address))
current_peer.send_version(self.my_ip)
current_peer.send_getaddr()
current_peer.set_is_active(True)
logging.info("num active peers: {}".format(self.get_num_active_peers()))
if(poll_result & select.POLLIN):#ready for read( packet is available)
current_peer.recv()
#check new addresses we got from the peer and try to connect
while len(current_peer.peer_address_list) > 0 :
address=current_peer.peer_address_list.pop()
if address not in self.address_to_peer_dict:
if self.get_num_peers() < self.max_peers:
self.create_peer_socket(address)
                #check new tx and add to db; currently this does nothing
while len(current_peer.tx_hash_list) > 0 :
tx_hash=current_peer.tx_hash_list.pop()
if(poll_result & select.POLLPRI): #urgent data to read
pass
if(poll_result & select.POLLERR): #Error condition
logging.info("Error condition detected on {}".format(current_peer.get_address()))
if(poll_result & select.POLLHUP): #hung up
logging.info("Hung up detected on {}".format(current_peer.get_address()))
self.remove_peer_socket(current_peer)
if(poll_result & select.POLLNVAL): #invalid request, unopen descriptor
logging.info("Invalid request detected on {}".format(current_peer.get_address()))
self.remove_peer_socket(current_peer)
class PeerSocket(object):
def __init__(self,crypto):
self.crypto=crypto
self.port = cryptoconfig.PORT[crypto]
self.protocol_version=cryptoconfig.PROTOCOL_VERSION[crypto]
self.msg_magic_bytes=cryptoconfig.MSG_MAGIC_BYTES[crypto]
self.is_active=False
self.address=''
self.peer_address_list=[]
#list of received tx hashes
self.tx_hash_list=[]
#dictionary where key is hash of tx we want to broadcast and value is tx
#only contains tx's that have been broacast already using broadcast() function
self.broadcast_tx_dict={}
self.recv_buffer=''
self.expected_msg_size=0
self.version=''
self.total_valid_bytes_received=0
self.total_junk_bytes_received=0
# time of connection
self.connection_time=None
def __del__(self):
self.my_socket.close()
def set_is_active(self,truefalse):
self.is_active=truefalse
if truefalse:
self.connection_time = time.time()
def get_address(self):
return self.address
def connect(self,address):
self.address=address
if('.' in address):
socket_type=socket.AF_INET
elif(':' in address):
socket_type=socket.AF_INET6
else:
raise Exception("Unexpected address: "+address)
self.my_socket=socket.socket(socket_type, socket.SOCK_STREAM)
self.my_socket.settimeout(SOCKET_BLOCK_SECONDS)
try:
if(socket_type==socket.AF_INET):
self.my_socket.connect((address,self.port))
else:
self.my_socket.connect((address,self.port,0,0))
except IOError as e:
# 115 == Operation now in progress EINPROGRESS, this is expected
if e.errno !=115:
return False
return True
def get_socket(self):
return self.my_socket
def get_packet(self):
if( len(self.recv_buffer) >= protocol.MSGHEADER_SIZE):
data_length=protocol.get_length_msgheader(self.recv_buffer)
self.expected_msg_size=data_length+protocol.MSGHEADER_SIZE
#if valid command is not contained, packet will be thrown out
if protocol.is_valid_command(self.recv_buffer,self.crypto)==False:
self.total_junk_bytes_received+=len(self.recv_buffer)
self.expected_msg_size=0
cmd = protocol.get_command_msgheader(self.recv_buffer)
self.recv_buffer=''
logging.error('Invalid command found in buffer: {}'.format(cmd))
return ''
try:
self.recv_buffer+=self.my_socket.recv(TCP_RECV_PACKET_SIZE)
except IOError as e:
logging.warn("Get packet from {0} I/O error({1}): {2}".format(self.address,e.errno, e.strerror))
return ''
#if entire message is assembled exactly, return it
if(len(self.recv_buffer) >= self.expected_msg_size and self.expected_msg_size !=0):
self.total_valid_bytes_received+=self.expected_msg_size
out=self.recv_buffer[0:self.expected_msg_size]
self.recv_buffer=self.recv_buffer[self.expected_msg_size:]
self.expected_msg_size=0
return out
#otherwise output is not ready, return empty string
else:
return ''
def get_is_active(self):
return self.is_active
#send a ping pong to verify connection
def verify_connection(self):
data = struct.pack('<Q',NONCE)
self._send_packet('ping',data)
data=self.my_socket.recv(TCP_RECV_PACKET_SIZE)
return process_pong(data)
def _send_packet(self, command, payload):
lc = len(command)
assert (lc < 12)
cmd = command + ('\x00' * (12 - lc))
h = protocol.dhash (payload)
checksum, = struct.unpack ('<I', h[:4])
packet = struct.pack ('<4s12sII',
self.msg_magic_bytes,cmd,len(payload),checksum) + payload
try:
self.my_socket.send(packet)
except IOError as e:
logging.warn("Send packet to {0} I/O error({1}): {2}".format(self.address,e.errno, e.strerror))
return False
return True
def send_version(self,my_ip):
data = struct.pack ('<IQQ', self.protocol_version, 1, int(time.time()))
data += protocol.pack_net_addr ((1, (my_ip, self.port)))
data += protocol.pack_net_addr ((1, (self.address, self.port)))
data += struct.pack ('<Q',NONCE)
data += protocol.pack_var_str (USER_AGENT)
start_height = 0
#ignore bip37 for now - leave True
data += struct.pack ('<IB', start_height, 1)
self._send_packet ('version', data)
# unused
def send_getaddr(self):
data = struct.pack('0c')
self._send_packet('getaddr',data)
def _send_tx(self,tx_hash):
# send only if we have tx_hash
if tx_hash in self.broadcast_tx_dict:
data=self.broadcast_tx_dict[tx_hash]
self._send_packet('tx',data)
def recv(self):
data=self.get_packet()
if(len(data)!=0):
self.process_data(data)
return True
else:
return False
# Return True if broadcast succeeds, False otherwise (see the usage sketch after this method).
# Unique tx can only be broadcast once, attempts to broadcast
# identical tx will return False.
# tx is expected to be a hex string, i.e. '02aba8...'
def broadcast(self,tx):
tx=tx.decode('hex')
tx_hash=protocol.dhash(tx) #need to hash here
# we only broadcast if tx is new
if tx_hash not in self.broadcast_tx_dict:
self.broadcast_tx_dict[tx_hash]=tx
data = protocol.pack_var_int(1)
data += struct.pack('<I32s',1,tx_hash) #MSG_TX
self._send_packet('inv',data)
return True
# we will receive getdata (make sure we have the hash) and
# process_data() will call _send_tx() when that getdata
# message is received
return False
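# Illustration only (not from the original project): a typical call pattern,
# assuming `peer` is a connected, handshaked PeerSocket and `raw_tx_hex` is a
# fully signed transaction serialized as a hex string.
#
#   if peer.broadcast(raw_tx_hex):
#       # interested peers answer the inv with getdata; keep servicing
#       # peer.recv() so that process_data() can hand the tx back via
#       # _send_tx()
#       while not done:          # schematic poll loop
#           peer.recv()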
def process_data(self,data):
if protocol.compare_command(data,"getaddr"): #get known peers
pass
elif protocol.compare_command(data,"addr"):#in response to getaddr
self._process_addr(data)
elif protocol.compare_command(data,"version"):
pass
elif protocol.compare_command(data,"verack"):
pass
elif protocol.compare_command(data,"inv"): #advertise knowledge of tx or block
self._process_inv(data)
elif protocol.compare_command(data,"getblocks"):#request an inv packet for blocks
pass
elif protocol.compare_command(data,"getheaders"):#request headers
pass
elif protocol.compare_command(data,"headers"):#return headers in response to getheaders
pass
elif protocol.compare_command(data,"getdata"):#get data from peer after broadcasting tx via inv
self._process_get_data(data)
elif protocol.compare_command(data,"notfound"):#not found is sent after getdata received
pass
elif protocol.compare_command(data,"block"):#describe a block in response to getdata
pass
elif protocol.compare_command(data,"tx"):#describe a transaction in response to getdata
pass
elif protocol.compare_command(data,"pong"):#response to ping
pass
elif protocol.compare_command(data,"ping"):#query if tcp ip is alive
pass
elif protocol.compare_command(data,"reject"):
self._process_reject(data)
elif (self.crypto in ['dashpay','dashpay_testnet'] and
protocol.compare_command(data,'dseep')):
pass
else:
logging.warn("unhandled command received: {}".format(
protocol.get_command_msgheader(data)))
def _process_reject(self,data):
payload = protocol.get_payload(data)
varint_tuple = protocol.read_var_int(payload)
size_message = varint_tuple[0]
varint_size = varint_tuple[1]
message_rejected = payload[varint_size:varint_size+size_message]
code = struct.unpack('B',payload[varint_size+size_message:varint_size+size_message+1])[0]
payload = payload[varint_size+size_message+1:]
varint_tuple = protocol.read_var_int(payload)
size_reason = varint_tuple[0]
varint_size=varint_tuple[1]
reason = payload[varint_size:varint_size+size_reason]
data = payload[varint_size+size_reason:]
if message_rejected == 'tx':
pass
msg='{} rejected, reason: {}, code: {}, data: {}'.format(message_rejected,reason,code,data)
logging.error(msg)
def _process_get_data(self,data):
payload = protocol.get_payload(data)
varint_tuple = protocol.read_var_int(payload)
num_invs = varint_tuple[0]
varint_size = varint_tuple[1]
inv_data = payload[varint_size:]
for i in range(0,num_invs):
begin_index=i*36
end_index=begin_index+36
inv_type = struct.unpack('<I',inv_data[begin_index:begin_index+4])[0]
inv_hash = struct.unpack('32c',inv_data[begin_index+4:begin_index+36])
if(inv_type ==0):#error
pass
elif(inv_type==1):#tx
tx_hash=''.join(inv_hash) #convert tuple to string
self._send_tx(tx_hash)
elif(inv_type==2):#block
pass
else:
logging.error("unknown inv {} found".format(inv_type))
def _process_addr(self,data):
payload = protocol.get_payload(data)
varint_tuple = protocol.read_var_int(payload)
num_ips = varint_tuple[0]
varint_size = varint_tuple[1]
ip_data = payload[varint_size:]
for i in range(0,num_ips):
begin_index=i*30
end_index=begin_index+30
timestamp = struct.unpack('<I',ip_data[begin_index:begin_index+4])[0]
services = struct.unpack('<Q',ip_data[begin_index+4:begin_index+12])[0]
ipv6= struct.unpack('16c',ip_data[begin_index+12:begin_index+28])[0]
ipv4= struct.unpack('4c',ip_data[begin_index+24:begin_index+28])[0]
port=struct.unpack('!H',ip_data[begin_index+28:begin_index+30])[0]
self.peer_address_list.append(socket.inet_ntop(socket.AF_INET,ip_data[begin_index+24:begin_index+28]))
def _process_inv(self,data):
payload = protocol.get_payload(data)
varint_tuple = protocol.read_var_int(payload)
num_invs = varint_tuple[0]
varint_size = varint_tuple[1]
inv_data = payload[varint_size:]
for i in range(0,num_invs):
begin_index=i*36
end_index=begin_index+36
inv_type = struct.unpack('<I',inv_data[begin_index:begin_index+4])[0]
inv_hash = struct.unpack('32c',inv_data[begin_index+4:begin_index+36])
if not protocol.is_valid_inv_type(inv_type,self.crypto):
logging.error("unknown inv {} found".format(inv_type))
if inv_type == 1:#tx
self._process_inv_tx(inv_hash)
elif inv_type == 2:#block
self._process_inv_block(inv_hash)
elif inv_type == 3:#filtered block
pass
def _process_inv_tx(self,inv_hash):
self.tx_hash_list.append(inv_hash)
def _process_inv_block(self,inv_hash):
pass
def _process_pong(self, data):
if(protocol.compare_command(data,"pong")):
return True
else:
return False
def _process_version_handshake(self, sock):
data=sock.recv(1024)
if(protocol.compare_command(data,"version")):
logging.info("version message received")
else:
logging.error("unexpected message received in version handshake")
data=sock.recv(1024)
out_tuple=struct.unpack('<I12c',data[0:16])
if(protocol.compare_command(data,"verack")):
logging.info("verack message received")
else:
logging.info("message type: {}".format(out_tuple[1:]))
| 39.911153 | 115 | 0.614929 | 2,704 | 21,113 | 4.576553 | 0.150518 | 0.02101 | 0.033778 | 0.039919 | 0.344242 | 0.257212 | 0.165414 | 0.133495 | 0.089455 | 0.089455 | 0 | 0.01314 | 0.289916 | 21,113 | 528 | 116 | 39.986742 | 0.8123 | 0.119642 | 0 | 0.260241 | 0 | 0 | 0.051756 | 0 | 0 | 0 | 0 | 0 | 0.00241 | 1 | 0.084337 | false | 0.043373 | 0.019277 | 0.009639 | 0.178313 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3479528b4dfe049877b66a81c47c90223d199db2 | 2,330 | py | Python | code/models/textcnn.py | hiyouga/Toxic_Detection | 7545f71ebf8f65043db8e8fb0f5a8072b8b346cf | [
"MIT"
] | 2 | 2022-01-01T06:37:41.000Z | 2022-02-14T05:30:44.000Z | code/models/textcnn.py | hiyouga/Toxic_Detection | 7545f71ebf8f65043db8e8fb0f5a8072b8b346cf | [
"MIT"
] | null | null | null | code/models/textcnn.py | hiyouga/Toxic_Detection | 7545f71ebf8f65043db8e8fb0f5a8072b8b346cf | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
from .env import multienv
class TextCNN(nn.Module):
def __init__(self, kernel_num, kernel_sizes, configs):
super(TextCNN, self).__init__()
WN, WD = configs['embedding_matrix'].shape
KN = kernel_num
KS = kernel_sizes
C = configs['num_classes']
self.embed = nn.Embedding.from_pretrained(torch.tensor(configs['embedding_matrix'], dtype=torch.float))
self.conv = nn.ModuleList([
nn.Sequential(
nn.Conv1d(WD, KN, K, padding=K//2, bias=True),
nn.ReLU(inplace=True),
) for K in KS
])
self.maxpool = nn.AdaptiveMaxPool1d(1)
self.linear = nn.Linear(len(KS) * KN, C)
self.dropout = nn.Dropout(0.1)
self.output_token_hidden = configs['output_token_hidden'] if 'output_token_hidden' in configs else False
if self.output_token_hidden:
raise ValueError("CNN model should not be used for token-level output because sequence lengths are not preserved")
self.use_env = configs['use_env'] if 'use_env' in configs else False
if self.use_env:
accumulator = configs['accumulator']
self.env_model = multienv(WD, accumulator)
def forward(self, text, mask=None, env=None):
if self.use_env and env is None:
raise RuntimeWarning("built an env-enabled model, but got no env input")
if not self.use_env and env is not None:
raise RuntimeError("built an env-free model, but got env input")
word_emb = self.embed(text)
if mask is not None:
word_emb = torch.mul(word_emb, mask.unsqueeze(-1))
if self.use_env and env is not None:
env_embeddings = self.env_model(env)
env_embeddings = env_embeddings.unsqueeze(dim=1).expand_as(word_emb)
word_emb += env_embeddings
word_emb = self.dropout(word_emb)
maxpool_out = list()
for conv in self.conv:
cnn_out_i = conv(word_emb.transpose(1, 2))
maxpool_i = self.maxpool(cnn_out_i).squeeze(-1)
maxpool_out.append(maxpool_i)
maxpool_out = torch.cat(maxpool_out, dim=-1)
output = self.linear(self.dropout(maxpool_out))
return output
def textcnn(configs):
return TextCNN(256, [3,4,5], configs)
| 38.196721 | 112 | 0.626609 | 323 | 2,330 | 4.343653 | 0.321981 | 0.039914 | 0.035638 | 0.025659 | 0.085531 | 0.085531 | 0.051319 | 0.035638 | 0 | 0 | 0 | 0.010619 | 0.272532 | 2,330 | 60 | 113 | 38.833333 | 0.817109 | 0 | 0 | 0 | 0 | 0 | 0.11073 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06 | false | 0 | 0.06 | 0.02 | 0.18 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
347adf97af54d63546ef5ec7a08e7789395b81c9 | 12,026 | py | Python | bsuite_exp/option_critic/option_critic.py | DavidSlayback/rlpyt | 445adbd3917842caae0cae0d06e4b2866c8f1258 | [
"MIT"
] | null | null | null | bsuite_exp/option_critic/option_critic.py | DavidSlayback/rlpyt | 445adbd3917842caae0cae0d06e4b2866c8f1258 | [
"MIT"
] | null | null | null | bsuite_exp/option_critic/option_critic.py | DavidSlayback/rlpyt | 445adbd3917842caae0cae0d06e4b2866c8f1258 | [
"MIT"
] | null | null | null | # python3
# pylint: disable=g-bad-file-header
# Copyright 2019 DeepMind Technologies Limited. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""A simple actor-critic agent implemented in JAX + Haiku."""
from typing import Any, Callable, NamedTuple, Tuple
from bsuite.baselines import base
import dm_env
from dm_env import specs
from bsuite.baselines import base
from bsuite.baselines.utils import sequence
import numpy as np
class OCTrajectory(NamedTuple):
"""A trajectory is a sequence of observations, actions, rewards, discounts.
Note: `observations` should be of length T+1 to make up the final transition.
"""
observations: np.ndarray # [T + 1, ...]
actions: np.ndarray # [T]
prev_o: np.ndarray # [T]
o: np.ndarray # [T]
rewards: np.ndarray # [T]
discounts: np.ndarray # [T]
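# Illustration only: for a rollout of T = 32 steps with 4-dimensional
# observations, the fields above have shapes observations (33, 4),
# actions (32,), prev_o (32,), o (32,), rewards (32,) and discounts (32,).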
class OCBuffer:
"""A simple buffer for accumulating trajectories."""
_observations: np.ndarray
_actions: np.ndarray
_prev_o: np.ndarray
_o: np.ndarray
_rewards: np.ndarray
_discounts: np.ndarray
_max_sequence_length: int
_needs_reset: bool = True
_t: int = 0
def __init__(
self,
obs_spec: specs.Array,
action_spec: specs.Array,
max_sequence_length: int,
):
"""Pre-allocates buffers of numpy arrays to hold the sequences."""
self._observations = np.zeros(
shape=(max_sequence_length + 1, *obs_spec.shape), dtype=obs_spec.dtype)
self._actions = np.zeros(
shape=(max_sequence_length, *action_spec.shape),
dtype=action_spec.dtype)
self._prev_o = np.zeros(max_sequence_length, dtype=action_spec.dtype)
self._o = np.zeros(max_sequence_length, dtype=action_spec.dtype)
self._rewards = np.zeros(max_sequence_length, dtype=np.float32)
self._discounts = np.zeros(max_sequence_length, dtype=np.float32)
self._max_sequence_length = max_sequence_length
def append(
self,
timestep: dm_env.TimeStep,
action: base.Action,
new_timestep: dm_env.TimeStep,
):
"""Appends an observation, action, reward, and discount to the buffer."""
if self.full():
raise ValueError('Cannot append; sequence buffer is full.')
# Start a new sequence with an initial observation, if required.
if self._needs_reset:
self._t = 0
self._observations[self._t] = timestep.observation
self._needs_reset = False
# Append (o, a, r, d) to the sequence buffer.
self._observations[self._t + 1] = new_timestep.observation
self._actions[self._t] = action
self._rewards[self._t] = new_timestep.reward
self._discounts[self._t] = new_timestep.discount
self._t += 1
# Don't accumulate sequences that cross episode boundaries.
# It is up to the caller to drain the buffer in this case.
if new_timestep.last():
self._needs_reset = True
def append_options(self,
prev_o: base.Action,
o: base.Action,
):
if self.full():
raise ValueError('Cannot append; sequence buffer is full.')
self._prev_o[self._t] = prev_o
self._o[self._t] = o
def drain(self) -> OCTrajectory:
"""Empties the buffer and returns the (possibly partial) trajectory."""
if self.empty():
raise ValueError('Cannot drain; sequence buffer is empty.')
trajectory = OCTrajectory(
self._observations[:self._t + 1],
self._actions[:self._t],
self._prev_o[:self._t],
self._o[:self._t],
self._rewards[:self._t],
self._discounts[:self._t],
)
self._t = 0 # Mark sequences as consumed.
self._needs_reset = True
return trajectory
def empty(self) -> bool:
"""Returns whether or not the trajectory buffer is empty."""
return self._t == 0
def full(self) -> bool:
"""Returns whether or not the trajectory buffer is full."""
return self._t == self._max_sequence_length
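# Illustration only (not part of the original file): the intended fill/drain
# cycle, assuming dm_env-style `timestep` objects and an agent exposing
# select_action() -- all names below are placeholders.
#
#   buffer = OCBuffer(obs_spec, action_spec, max_sequence_length=32)
#   while not buffer.full():
#       action = agent.select_action(timestep)
#       new_timestep = env.step(action)
#       buffer.append(timestep, action, new_timestep)
#       buffer.append_options(prev_option, option)
#       timestep = new_timestep
#   trajectory = buffer.drain()   # an OCTrajectory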
import dm_env
from dm_env import specs
import haiku as hk
import jax
import jax.numpy as jnp
from jax.tree_util import Partial
import optax
import rlax
Logits = jnp.ndarray
Value = jnp.ndarray
LSTMState = Any
Logits_Omega = jnp.ndarray
Beta = jnp.ndarray
Q = jnp.ndarray
PolicyValueNet = Callable[[jnp.ndarray], Tuple[Logits, Value]]
OptionCriticNet = Callable[[jnp.ndarray], Tuple[Logits, Beta, Q, Logits_Omega]]
class TrainingState(NamedTuple):
params: hk.Params
o: jnp.ndarray # Selected option
prev_o: jnp.ndarray # Previous option
opt_state: Any
class OptionCritic(base.Agent):
"""Feed-forward actor-critic agent."""
def __init__(
self,
obs_spec: specs.Array,
action_spec: specs.DiscreteArray,
n_options: int,
network: OptionCriticNet,
optimizer: optax.GradientTransformation,
rng: hk.PRNGSequence,
sequence_length: int,
discount: float,
td_lambda: float,
):
# Define loss function.
def loss(trajectory: OCTrajectory) -> jnp.ndarray:
""""Actor-critic loss."""
logits, betas, qs, pi_omegas = network(trajectory.observations)
# NOTE: the option-specific heads (betas, pi_omegas) are not used in this
# loss yet; as a placeholder the state value is taken to be the maximum
# over the per-option values (an assumption, not the original design).
values = jnp.max(qs, axis=-1)
td_errors = rlax.td_lambda(
v_tm1=values[:-1],
r_t=trajectory.rewards,
discount_t=trajectory.discounts * discount,
v_t=values[1:],
lambda_=jnp.array(td_lambda),
)
critic_loss = jnp.mean(td_errors**2)
actor_loss = rlax.policy_gradient_loss(
logits_t=logits[:-1],
a_t=trajectory.actions,
adv_t=td_errors,
w_t=jnp.ones_like(td_errors))
return actor_loss + critic_loss
# Transform the loss into a pure function.
loss_fn = hk.without_apply_rng(hk.transform(loss, apply_rng=True)).apply
# Define update function.
@jax.jit
def sgd_step(state: TrainingState,
trajectory: sequence.Trajectory) -> TrainingState:
"""Does a step of SGD over a trajectory."""
gradients = jax.grad(loss_fn)(state.params, trajectory)
updates, new_opt_state = optimizer.update(gradients, state.opt_state)
new_params = optax.apply_updates(state.params, updates)
return TrainingState(params=new_params, o=state.o, prev_o=state.prev_o, opt_state=new_opt_state)
# Initialize network parameters and optimiser state.
init, forward = hk.without_apply_rng(hk.transform(network, apply_rng=True))
dummy_observation = jnp.zeros((1, *obs_spec.shape), dtype=jnp.float32)
initial_params = init(next(rng), dummy_observation)
initial_opt_state = optimizer.init(initial_params)
initial_option_state = jax.random.randint(next(rng), (1, ), 0, n_options)
# Internalize state.
self._state = TrainingState(initial_params, initial_option_state, initial_option_state, initial_opt_state)
self._forward = jax.jit(forward)
self._buffer = sequence.Buffer(obs_spec, action_spec, sequence_length)
self._sgd_step = sgd_step
self._rng = rng
def select_action(self, timestep: dm_env.TimeStep) -> base.Action:
"""Selects actions according to a softmax policy."""
key = next(self._rng)
observation = timestep.observation[None, ...]
logits, beta, q, pi_omega = self._forward(self._state.params, observation)
prev_o = self._state.prev_o
# logits, _ = self._forward(self._state.params, observation)
action = jax.random.categorical(key, logits).squeeze()
return int(action)
def _select_option(self, pi_omega: jnp.ndarray) -> base.Action:
"""Selects options according to a softmax policy"""
key = next(self._rng)
option = jax.random.categorical(key, pi_omega).squeeze()
return int(option)
def _sample_termination(self, beta: jnp.ndarray) -> bool:
"""Determines termination from termination probabilities"""
key = next(self._rng)
termination = jax.random.bernoulli(key, beta)
return bool(termination)
def update(
self,
timestep: dm_env.TimeStep,
action: base.Action,
new_timestep: dm_env.TimeStep,
):
"""Adds a transition to the trajectory buffer and periodically does SGD."""
self._buffer.append(timestep, action, new_timestep)
if self._buffer.full() or new_timestep.last():
trajectory = self._buffer.drain()
self._state = self._sgd_step(self._state, trajectory)
def default_agent(obs_spec: specs.Array,
action_spec: specs.DiscreteArray,
n_options: int,
use_interest: bool = True,
seed: int = 0) -> base.Agent:
"""Creates an option-critic agent with default hyperparameters."""
def network(inputs: jnp.ndarray) -> Tuple[Logits, Beta, Q, Logits_Omega]:
flat_inputs = hk.Flatten()(inputs) # Inputs flattened
torso = hk.nets.MLP([64, 64]) # Shared state processor, 2x64 with relu after each.
# Option outputs
# hk.Sequential expects a single list of layers
policy_over_options_head = hk.Sequential([hk.Linear(n_options), Partial(jax.nn.softmax, axis=-1)])
beta_head = hk.Sequential([hk.Linear(n_options), jax.nn.sigmoid])
interest_head = hk.Sequential([hk.Linear(n_options), jax.nn.sigmoid])
q_head = hk.Linear(n_options)
# q_ent_head = hk.Linear(n_options)
policy_head = hk.Sequential([hk.Linear(action_spec.num_values * n_options),
Partial(jnp.reshape, newshape=(-1, n_options, action_spec.num_values)),
Partial(jax.nn.softmax, axis=-1)])
embedding = torso(flat_inputs)
logits = policy_head(embedding)
beta = beta_head(embedding)
interest = interest_head(embedding)
pi_omega = policy_over_options_head(embedding)
pi_omega = pi_omega * interest
pi_omega = pi_omega / jnp.sum(pi_omega, axis=-1, keepdims=True) # Normalized interest policy
# q_ent = q_ent_head(embedding)
q = q_head(embedding)
return logits, beta, q, pi_omega
def network_no_interest(inputs: jnp.ndarray) -> Tuple[Logits, Beta, Q, Logits_Omega]:
flat_inputs = hk.Flatten()(inputs) # Inputs flattened
torso = hk.nets.MLP([64, 64]) # Shared state processor, 2x64 with relu after each.
# Option outputs
policy_over_options_head = hk.Sequential([hk.Linear(n_options), Partial(jax.nn.softmax, axis=-1)])
beta_head = hk.Sequential([hk.Linear(n_options), jax.nn.sigmoid])
q_head = hk.Linear(n_options)
# q_ent_head = hk.Linear(n_options)
policy_head = hk.Sequential([hk.Linear(action_spec.num_values * n_options),
Partial(jnp.reshape, newshape=(-1, n_options, action_spec.num_values)),
Partial(jax.nn.softmax, axis=-1)])
embedding = torso(flat_inputs)
logits = policy_head(embedding)
beta = beta_head(embedding)
pi_omega = policy_over_options_head(embedding)
# q_ent = q_ent_head(embedding)
q = q_head(embedding)
return logits, beta, q, pi_omega
return OptionCritic(
obs_spec=obs_spec,
action_spec=action_spec,
n_options=n_options,
network=network if use_interest else network_no_interest,
optimizer=optax.adam(3e-3),
rng=hk.PRNGSequence(seed),
sequence_length=32,
discount=0.99,
td_lambda=0.9,
)
| 37.347826 | 110 | 0.657492 | 1,547 | 12,026 | 4.908209 | 0.201034 | 0.011853 | 0.024628 | 0.018965 | 0.353747 | 0.310022 | 0.285263 | 0.285263 | 0.272224 | 0.239431 | 0 | 0.006953 | 0.234575 | 12,026 | 321 | 111 | 37.464174 | 0.817925 | 0.206885 | 0 | 0.294372 | 0 | 0 | 0.012443 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069264 | false | 0 | 0.064935 | 0 | 0.281385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
347c5bc64ff65cc737eacd82e9e3755436605323 | 22,430 | py | Python | sopel_modules/SpiceBot/Users.py | deathbybandaid/Sopel-SpiceBotSERV | 816dddc88943b9194f3f0aa6558759eedd585343 | [
"EFL-2.0"
] | 2 | 2018-07-24T14:04:36.000Z | 2019-01-11T21:41:50.000Z | sopel_modules/SpiceBot/Users.py | deathbybandaid/Sopel-SpiceBotSERV | 816dddc88943b9194f3f0aa6558759eedd585343 | [
"EFL-2.0"
] | 947 | 2018-07-24T01:50:29.000Z | 2019-04-14T22:40:57.000Z | sopel_modules/SpiceBot/Users.py | deathbybandaid/Sopel-SpiceBotSERV | 816dddc88943b9194f3f0aa6558759eedd585343 | [
"EFL-2.0"
] | 6 | 2019-04-12T17:09:07.000Z | 2019-09-30T05:56:15.000Z | # coding=utf8
from __future__ import unicode_literals, absolute_import, division, print_function
"""A way to track users"""
import sopel
from sopel.tools import Identifier
from .Database import db as botdb
from .Tools import is_number, inlist, similar, array_arrangesort, bot_privs, channel_privs
from sopel_modules.spicemanip import spicemanip
import threading
import re
import time
# TODO timestamp for new .seen
class BotUsers():
def __init__(self):
self.who_reqs = {}
self.lock = threading.Lock()
# TODO AWAY
self.dict = {
"all": botdb.get_bot_value('users') or {},
"online": [],
"offline": [],
"away": [],
"current": {},
"registered": botdb.get_bot_value('regged_users') or [],
"register_check": {},
"identified": [],
}
"""during setup, all users from database are offline until marked online"""
# for user_id in list(self.dict["all"].keys()):
# self.mark_user_offline(user_id)
self.lock.acquire()
self.dict["offline"] = list(self.dict["all"].keys())
self.lock.release()
def __getattr__(self, name):
''' will only get called for undefined attributes '''
"""We will try to find a dict value, or return None"""
if name.lower() in list(self.dict.keys()):
return self.dict[str(name).lower()]
else:
raise Exception('User dict does not contain a function or key ' + str(name.lower()))
def ID(self, nickinput):
if is_number(nickinput):
nick_id = nickinput
self.lock.acquire()
if int(nick_id) in list(self.dict["current"].keys()):
nick = self.dict["current"][int(nick_id)]["nick"]
self.lock.release()
elif int(nick_id) in list(self.dict["all"].keys()) and len(self.dict["all"][int(nick_id)]):
nick = self.dict["all"][int(nick_id)][0]
self.lock.release()
else:
raise Exception('ID ' + str(nickinput) + ' does not appear to be associated with a nick')
return nick
else:
nick_id = self.whois_ident(nickinput)
return int(nick_id)
def get_nick_id(self, nick, usercreate=True):
try:
nick_id = botdb.db.get_nick_id(nick, create=usercreate)
except Exception as e:
nick_id = e
nick_id = None
return None
return nick_id
def whois_ident(self, nick, usercreate=True):
nick = Identifier(nick)
try:
nick_id = self.get_nick_id(nick, usercreate)
except Exception as e:
nick_id = e
nick_id = None
return None
if usercreate and nick_id:
self.add_to_all(nick, nick_id)
return int(nick_id)
def save_user_db(self):
botdb.set_bot_value('users', self.dict["all"])
botdb.set_bot_value('regged_users', self.dict["registered"])
def add_to_all(self, nick, nick_id=None):
self.lock.acquire()
if not nick_id:
nick_id = self.ID(nick)
# add to all if not there
if int(nick_id) not in list(self.dict["all"].keys()):
self.dict["all"][int(nick_id)] = []
# add nick alias as index 0
if nick in self.dict["all"][int(nick_id)]:
self.dict["all"][int(nick_id)].remove(nick)
self.dict["all"][int(nick_id)].insert(0, nick)
self.lock.release()
self.save_user_db()
def add_to_current(self, nick, nick_id=None):
self.lock.acquire()
if not nick_id:
nick_id = self.whois_ident(nick)
# add to current if not there
if int(nick_id) not in list(self.dict["current"].keys()):
self.dict["current"][int(nick_id)] = {"channels": [], "nick": nick}
self.lock.release()
def channel_scan(self, bot):
for channel in list(bot.channels.keys()):
for user in list(bot.channels[channel].privileges.keys()):
# Identify
nick_id = self.whois_ident(user)
# check if nick is registered
self.whois_send(bot, user)
# Verify nick is in the all list
self.add_to_all(user, nick_id)
# Verify nick is in the current dict
self.add_to_current(user, nick_id)
# set current nick
self.mark_current_nick(user, nick_id)
# add joined channel to nick list
self.add_channel(channel, nick_id)
# mark user as online
self.mark_user_online(nick_id)
def mark_current_nick(self, nick, nick_id):
self.lock.acquire()
self.dict["current"][int(nick_id)]["nick"] = nick
self.lock.release()
def add_channel(self, channel, nick_id):
self.lock.acquire()
if str(channel).lower() not in self.dict["current"][int(nick_id)]["channels"]:
self.dict["current"][int(nick_id)]["channels"].append(str(channel).lower())
self.lock.release()
def remove_channel(self, channel, nick_id):
self.lock.acquire()
if str(channel).lower() in self.dict["current"][int(nick_id)]["channels"]:
self.dict["current"][int(nick_id)]["channels"].remove(str(channel).lower())
self.lock.release()
def mark_user_online(self, nick_id):
self.lock.acquire()
if int(nick_id) not in self.dict["online"]:
self.dict["online"].append(int(nick_id))
if int(nick_id) in self.dict["offline"]:
self.dict["offline"].remove(int(nick_id))
self.lock.release()
def mark_user_offline(self, nick_id):
self.lock.acquire()
if int(nick_id) in self.dict["online"]:
self.dict["online"].remove(int(nick_id))
if int(nick_id) not in self.dict["offline"]:
self.dict["offline"].append(int(nick_id))
self.lock.release()
self.whois_identify_forget(nick_id)
def join(self, bot, trigger):
if trigger.nick == bot.nick:
for user in list(bot.channels[trigger.sender].privileges.keys()):
# Identify
nick_id = self.whois_ident(user)
# check if nick is registered
self.whois_send(bot, user)
# Verify nick is in the all list
self.add_to_all(user, nick_id)
# Verify nick is in the current dict
self.add_to_current(user, nick_id)
# set current nick
self.mark_current_nick(user, nick_id)
# add joined channel to nick list
self.add_channel(trigger.sender, nick_id)
# mark user as online
self.mark_user_online(nick_id)
return
# Identify
nick_id = self.whois_ident(trigger.nick)
# check if nick is registered
self.whois_send(bot, trigger.nick)
# Verify nick is in the all list
self.add_to_all(trigger.nick, nick_id)
# Verify nick is in the current dict
self.add_to_current(trigger.nick, nick_id)
# set current nick
self.mark_current_nick(trigger.nick, nick_id)
# add joined channel to nick list
self.add_channel(trigger.sender, nick_id)
# mark user as online
self.mark_user_online(nick_id)
def chat(self, bot, trigger):
if trigger.nick == bot.nick:
return
# Identify
nick_id = self.whois_ident(trigger.nick)
# check if nick is registered
self.whois_send(bot, trigger.nick)
# Verify nick is in the all list
self.add_to_all(trigger.nick, nick_id)
# Verify nick is in the current dict
self.add_to_current(trigger.nick, nick_id)
# set current nick
self.mark_current_nick(trigger.nick, nick_id)
# add joined channel to nick list
self.add_channel(trigger.sender, nick_id)
# mark user as online
self.mark_user_online(nick_id)
def quit(self, bot, trigger):
if trigger.nick == bot.nick:
return
# Identify
nick_id = self.whois_ident(trigger.nick)
# check if nick is registered
self.whois_send(bot, trigger.nick)
# Verify nick is in the all list
self.add_to_all(trigger.nick, nick_id)
# Verify nick is in the current dict
self.add_to_current(trigger.nick, nick_id)
# empty nicks channel list
self.remove_channel(trigger.sender, nick_id)
# mark user as offline
self.mark_user_offline(nick_id)
def part(self, bot, trigger):
if trigger.nick == bot.nick:
for nick_id in list(self.dict["current"].keys()):
self.remove_channel(trigger.sender, nick_id)
# mark offline
self.mark_user_offline(nick_id)
return
# Identify
nick_id = self.whois_ident(trigger.nick)
# check if nick is registered
self.whois_send(bot, trigger.nick)
# Verify nick is in the all list
self.add_to_all(trigger.nick, nick_id)
# Verify nick is in the current dict
self.add_to_current(trigger.nick, nick_id)
# remove channel from nick list
self.remove_channel(trigger.sender, nick_id)
# mark offline
self.mark_user_offline(nick_id)
def kick(self, bot, trigger):
targetnick = Identifier(str(trigger.args[1]))
if targetnick == bot.nick:
for nick_id in list(self.dict["current"].keys()):
self.remove_channel(trigger.sender, nick_id)
# mark offline
self.mark_user_offline(nick_id)
return
# Identify
nick_id = self.whois_ident(targetnick)
# check if nick is registered
self.whois_send(bot, targetnick)
# Verify nick is in the all list
self.add_to_all(targetnick, nick_id)
# Verify nick is in the current dict
self.add_to_current(targetnick, nick_id)
# remove channel from nick list
self.remove_channel(trigger.sender, nick_id)
# mark offline
self.mark_user_offline(nick_id)
def nick(self, bot, trigger):
oldnick = trigger.nick
old_nick_id = self.whois_ident(oldnick)
self.whois_identify_forget(old_nick_id)
newnick = Identifier(trigger)
if oldnick == bot.nick or newnick == bot.nick:
return
# check if nick is registered
self.whois_send(bot, newnick)
# Verify nick is in the all list
self.add_to_all(oldnick, old_nick_id)
# Verify nick is in the current dict
self.add_to_current(oldnick, old_nick_id)
# set current nick
self.mark_current_nick(newnick, old_nick_id)
# add joined channel to nick list
self.add_channel(trigger.sender, old_nick_id)
# mark user as online
self.mark_user_online(old_nick_id)
# alias the nick
# try:
# botdb.alias_nick(oldnick, newnick)
# except Exception as e:
# old_nick_id = e
# return
def mode(self, bot, trigger):
return
def rpl_names(self, bot, trigger):
"""Handle NAMES response, happens when joining to channels."""
names = trigger.split()
# TODO specific to one channel type. See issue 281.
channels = re.search(r'(#\S*)', trigger.raw)
if not channels:
return
channel = Identifier(channels.group(1))
mapping = {'+': sopel.module.VOICE,
'%': sopel.module.HALFOP,
'@': sopel.module.OP,
'&': sopel.module.ADMIN,
'~': sopel.module.OWNER}
for name in names:
nick = Identifier(name.lstrip(''.join(mapping.keys())))
# Identify
nick_id = self.whois_ident(nick)
# Verify nick is in the all list
self.add_to_all(nick, nick_id)
# Verify nick is in the current dict
self.add_to_current(nick, nick_id)
# set current nick
self.mark_current_nick(nick, nick_id)
# add joined channel to nick list
self.add_channel(channel, nick_id)
# mark user as online
self.mark_user_online(nick_id)
# check if nick is registered
self.whois_send(bot, nick)
def rpl_who(self, bot, trigger):
if len(trigger.args) < 2 or trigger.args[1] not in self.who_reqs:
# Ignored, some module probably called WHO
return
if len(trigger.args) != 8:
return
_, _, channel, user, host, nick, status, account = trigger.args
nick = Identifier(nick)
channel = Identifier(channel)
# Identify
nick_id = self.whois_ident(nick)
# Verify nick is in the all list
self.add_to_all(nick, nick_id)
# Verify nick is in the current dict
self.add_to_current(nick, nick_id)
# set current nick
self.mark_current_nick(nick, nick_id)
# add joined channel to nick list
self.add_channel(channel, nick_id)
# mark user as online
self.mark_user_online(nick_id)
# check if nick is registered
self.whois_send(bot, nick)
def rpl_whois(self, bot, trigger):
if not bot.config.SpiceBot_regnick.regnick:
return
nick = trigger.args[1]
self.whois_handle(nick)
def whois_handle(self, nick):
nick_id = self.whois_ident(nick)
self.lock.acquire()
# identified
if int(nick_id) not in self.dict["identified"]:
self.dict["identified"].append(int(nick_id))
# registered
if str(nick).lower() not in [x.lower() for x in self.dict["registered"]]:
self.dict["registered"].append(str(nick))
self.lock.release()
def whois_send(self, bot, nick):
if not bot.config.SpiceBot_regnick.regnick:
return
check_whois = False
nick_id = self.whois_ident(nick)
# No registered check
if int(nick_id) not in list(self.dict["register_check"].keys()):
check_whois = True
elif int(nick_id) in self.dict["identified"]:
timestamp = self.dict["register_check"][int(nick_id)]
if time.time() - timestamp >= 1800:
check_whois = True
else:
timestamp = self.dict["register_check"][int(nick_id)]
if time.time() - timestamp >= 240:
check_whois = True
if check_whois:
bot.write(['WHOIS', str(nick)])
self.lock.acquire()
self.dict["register_check"][int(nick_id)] = time.time()
self.lock.release()
def whois_identify_forget(self, nick_id):
self.lock.acquire()
if int(nick_id) in self.dict["identified"]:
self.dict["identified"].remove(int(nick_id))
if int(nick_id) in list(self.dict["register_check"].keys()):
del self.dict["register_check"][int(nick_id)]
self.lock.release()
def account(self, bot, trigger):
# Identify
nick_id = self.whois_ident(trigger.nick)
# Verify nick is in the all list
self.add_to_all(trigger.nick, nick_id)
# Verify nick is in the current dict
self.add_to_current(trigger.nick, nick_id)
# set current nick
self.mark_current_nick(trigger.nick, nick_id)
# mark user as online
self.mark_user_online(nick_id)
def track_notify(self, bot, trigger):
# Identify
nick_id = self.whois_ident(trigger.nick)
# Verify nick is in the all list
self.add_to_all(trigger.nick, nick_id)
# Verify nick is in the current dict
self.add_to_current(trigger.nick, nick_id)
# set current nick
self.mark_current_nick(trigger.nick, nick_id)
# mark user as online
self.mark_user_online(nick_id)
def nick_actual(self, nick, altlist=None):
nick_id = self.whois_ident(nick)
nick_actual = self.ID(nick_id)
return nick_actual
def target_online(self, nick, nick_id=None):
if not nick_id:
nick_id = self.ID(nick)
if nick_id in self.dict["online"]:
return True
else:
return False
def target_check(self, bot, trigger, target, targetbypass):
targetgood = {"targetgood": True, "error": "None", "reason": None}
if not isinstance(targetbypass, list):
targetbypass = [targetbypass]
if "notarget" not in targetbypass:
if not target or target == '':
return {"targetgood": False, "error": "No target Given.", "reason": "notarget"}
# Optional don't allow self-target
if "self" not in targetbypass:
if inlist(target, trigger.nick):
return {"targetgood": False, "error": "This command does not allow you to target yourself.", "reason": "self"}
# cannot target bots
if "bot" not in targetbypass:
if inlist(target, bot.nick):
return {"targetgood": False, "error": "I am a bot and cannot be targeted.", "reason": "bot"}
if "bots" not in targetbypass:
if inlist(target, bot.nick):
return {"targetgood": False, "error": self.nick_actual(target) + " is a bot and cannot be targeted.", "reason": "bots"}
# Not a valid user
if "unknown" not in targetbypass:
if not botdb.check_nick_id(target):
sim_user, sim_num = [], []
for nick_id in list(self.dict["all"].keys()):
nick_list = self.dict["all"][nick_id]
for nick in nick_list:
similarlevel = similar(str(target).lower(), nick.lower())
if similarlevel >= .75:
sim_user.append(nick)
sim_num.append(similarlevel)
if sim_user != [] and sim_num != []:
sim_num, sim_user = array_arrangesort(sim_num, sim_user)
closestmatch = spicemanip(sim_user, 'reverse', "list")
listnumb, relist = 1, []
for item in closestmatch:
if listnumb <= 3:
relist.append(str(item))
listnumb += 1
closestmatches = spicemanip(relist, "andlist")
targetgooderror = "It looks like you're trying to target someone! Did you mean: " + str(closestmatches) + "?"
else:
targetgooderror = "I am not sure who that is."
return {"targetgood": False, "error": targetgooderror, "reason": "unknown"}
nick_id = self.whois_ident(target, usercreate=False)
# User offline
if "offline" not in targetbypass:
if not self.target_online(target, nick_id):
return {"targetgood": False, "error": "It looks like " + self.nick_actual(target) + " is offline right now!", "reason": "offline"}
# Private Message
if "privmsg" not in targetbypass:
if trigger.is_privmsg and not inlist(target, trigger.nick):
return {"targetgood": False, "error": "Leave " + self.nick_actual(target) + " out of this private conversation!", "reason": "privmsg"}
# not in the same channel
if "diffchannel" not in targetbypass:
if not trigger.is_privmsg and self.target_online(target, nick_id):
if str(trigger.sender).lower() not in self.dict["current"][nick_id]["channels"]:
return {"targetgood": False, "error": "It looks like " + self.nick_actual(target) + " is online right now, but in a different channel.", "reason": "diffchannel"}
# not a registered nick
# TODO
# if "unregged" not in targetbypass:
# if bot.config.SpiceBot_regnick.regnick:
# if str(target).lower() not in [x.lower() for x in self.dict["registered"]]:
# return {"targetgood": False, "error": "It looks like " + self.nick_actual(target) + " is not a registered nick.", "reason": "unregged"}
# TODO identified
return targetgood
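# Illustration only (not part of the original module): how a command handler
# might consume target_check(); the ["self"] bypass value is just an example.
#
#   result = users.target_check(bot, trigger, target, ["self"])
#   if not result["targetgood"]:
#       bot.say(result["error"])
#       return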
def random_valid_target(self, bot, trigger, outputtype):
validtargs = []
if trigger.is_privmsg:
validtargs.extend([str(bot.nick), trigger.nick])
else:
for nick_id in self.dict["online"]:
if str(trigger.sender).lower() in self.dict["current"][nick_id]["channels"]:
nick = self.dict["all"][nick_id][0]
validtargs.append(nick)
if outputtype == 'list':
return validtargs
elif outputtype == 'random':
return spicemanip(validtargs, 'random')
def command_permissions_check(self, bot, trigger, privslist):
nick_id = self.whois_ident(trigger.nick)
if bot.config.SpiceBot_regnick.regnick:
if int(nick_id) not in self.dict["identified"]:
return False
commandrunconsensus = []
for botpriv in ["admins", "owner"]:
if botpriv in privslist:
botpriveval = bot_privs(botpriv)
if not inlist(trigger.nick, botpriveval):
commandrunconsensus.append('False')
else:
commandrunconsensus.append('True')
if not trigger.is_privmsg:
for chanpriv in ['OP', 'HOP', 'VOICE', 'OWNER', 'ADMIN']:
if chanpriv in privslist:
chanpriveval = channel_privs(bot, trigger.sender, chanpriv)
if not inlist(trigger.nick, chanpriveval):
commandrunconsensus.append('False')
else:
commandrunconsensus.append('True')
if not len(privslist):
commandrunconsensus.append('True')
if 'True' not in commandrunconsensus:
return False
return True
users = BotUsers()
| 39.144852 | 181 | 0.575881 | 2,800 | 22,430 | 4.458929 | 0.096786 | 0.076892 | 0.028114 | 0.026912 | 0.607048 | 0.569243 | 0.529996 | 0.465278 | 0.429876 | 0.390629 | 0 | 0.001702 | 0.319126 | 22,430 | 572 | 182 | 39.213287 | 0.815807 | 0.127864 | 0 | 0.428571 | 0 | 0 | 0.07228 | 0 | 0 | 0 | 0 | 0.001748 | 0 | 1 | 0.086735 | false | 0.028061 | 0.022959 | 0.002551 | 0.204082 | 0.002551 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
347c6d0817ffa67e0ca371af20acaf843bc16260 | 444 | py | Python | Lesson-1/uniqueOut.py | dhingratul/Data-Analysis | 8aa9695375b143fbbcb1355e9ade7a57ab68592d | [
"MIT"
] | null | null | null | Lesson-1/uniqueOut.py | dhingratul/Data-Analysis | 8aa9695375b143fbbcb1355e9ade7a57ab68592d | [
"MIT"
] | null | null | null | Lesson-1/uniqueOut.py | dhingratul/Data-Analysis | 8aa9695375b143fbbcb1355e9ade7a57ab68592d | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon May 8 13:48:29 2017
@author: dhingratul
Input: Filename, columnName
Output : Unique number of a column of data
"""
import pandas as pd
def uniqueOut(filename, columnName):
file = pd.read_csv(filename)
return len(file[columnName].unique())
filename = '/home/dhingratul/Documents/Dataset/Data Analysis/\
daily_engagement_full.csv'
print(uniqueOut(filename, 'acct'))
| 21.142857 | 62 | 0.722973 | 62 | 444 | 5.129032 | 0.758065 | 0.113208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034121 | 0.141892 | 444 | 20 | 63 | 22.2 | 0.800525 | 0.385135 | 0 | 0 | 0 | 0 | 0.015152 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3480169d44a71f7ec54dde6653589dc344e1247c | 10,063 | py | Python | oss2/select_response.py | xkdcc/aliyun-oss-python-sdk | c34c10c830b3a0f884af4bd2a1b501b3d4e52038 | [
"MIT"
] | 840 | 2015-12-07T04:00:04.000Z | 2022-03-25T15:04:53.000Z | oss2/select_response.py | xkdcc/aliyun-oss-python-sdk | c34c10c830b3a0f884af4bd2a1b501b3d4e52038 | [
"MIT"
] | 270 | 2016-01-11T01:55:46.000Z | 2022-03-30T10:27:49.000Z | oss2/select_response.py | xkdcc/aliyun-oss-python-sdk | c34c10c830b3a0f884af4bd2a1b501b3d4e52038 | [
"MIT"
] | 424 | 2015-12-05T12:26:04.000Z | 2022-03-23T10:57:16.000Z | import platform
import struct
import requests
from .compat import to_bytes
from .exceptions import RequestError
from .exceptions import SelectOperationFailed
from .exceptions import SelectOperationClientError
from .exceptions import InconsistentError
from . import utils
import logging
logger = logging.getLogger(__name__)
"""
The adapter class for Select object's response.
The response consists of frames. Each frame has the following format:
Type | Payload Length | Header Checksum | Payload | Payload Checksum
|<4-->| <--4 bytes------><---4 bytes-------><-n/a-----><--4 bytes--------->
And we have three main kinds of frames; a minimal decoding sketch appears at the end of this file.
Data Frame:
Type:8388609
Payload: Offset | Data
<-8 bytes>
Continuous Frame
Type:8388612
Payload: Offset (8-bytes)
End Frame
Type:8388613
Payload: Offset | total scanned bytes | http status code | error message
<-- 8bytes--><-----8 bytes--------><---4 bytes-------><---variable--->
"""
class SelectResponseAdapter(object):
_CHUNK_SIZE = 8 * 1024
_CONTINIOUS_FRAME_TYPE=8388612
_DATA_FRAME_TYPE = 8388609
_END_FRAME_TYPE = 8388613
_META_END_FRAME_TYPE = 8388614
_JSON_META_END_FRAME_TYPE = 8388615
_FRAMES_FOR_PROGRESS_UPDATE = 10
def __init__(self, response, progress_callback = None, content_length = None, enable_crc = False):
self.response = response
self.frame_off_set = 0
self.frame_length = 0
self.frame_data = b''
self.check_sum_flag = 0
self.file_offset = 0
self.finished = 0
self.raw_buffer = b''
self.raw_buffer_offset = 0
#self.resp_content_iter = response.__iter__()
self.callback = progress_callback
self.frames_since_last_progress_report = 0
self.content_length = content_length
self.resp_content_iter = response.__iter__()
self.enable_crc = enable_crc
self.payload = b''
self.output_raw_data = response.headers.get("x-oss-select-output-raw", '') == "true"
self.request_id = response.headers.get("x-oss-request-id",'')
self.splits = 0
self.rows = 0
self.columns = 0
def read(self):
if self.finished:
return b''
content=b''
for data in self:
content += data
return content
def __iter__(self):
return self
def __next__(self):
return self.next()
def next(self):
if self.output_raw_data == True:
data = next(self.resp_content_iter)
if len(data) != 0:
return data
else: raise StopIteration
while self.finished == 0:
if self.frame_off_set < self.frame_length:
data = self.frame_data[self.frame_off_set : self.frame_length]
self.frame_length = self.frame_off_set = 0
return data
else:
self.read_next_frame()
self.frames_since_last_progress_report += 1
if (self.frames_since_last_progress_report >= SelectResponseAdapter._FRAMES_FOR_PROGRESS_UPDATE and self.callback is not None):
self.callback(self.file_offset, self.content_length)
self.frames_since_last_progress_report = 0
raise StopIteration
def read_raw(self, amt):
ret = b''
read_count = 0
while amt > 0 and self.finished == 0:
size = len(self.raw_buffer)
if size == 0:
self.raw_buffer = next(self.resp_content_iter)
self.raw_buffer_offset = 0
size = len(self.raw_buffer)
if size == 0:
break
if size - self.raw_buffer_offset >= amt:
data = self.raw_buffer[self.raw_buffer_offset:self.raw_buffer_offset + amt]
data_size = len(data)
self.raw_buffer_offset += data_size
ret += data
read_count += data_size
amt -= data_size
else:
data = self.raw_buffer[self.raw_buffer_offset:]
data_len = len(data)
ret += data
read_count += data_len
amt -= data_len
self.raw_buffer = b''
return ret
def read_next_frame(self):
frame_type = bytearray(self.read_raw(4))
payload_length = bytearray(self.read_raw(4))
utils.change_endianness_if_needed(payload_length) # convert to little endian
payload_length_val = struct.unpack("I", bytes(payload_length))[0]
header_checksum = bytearray(self.read_raw(4))
frame_type[0] = 0 #mask off the version byte
utils.change_endianness_if_needed(frame_type) # convert to little endian
frame_type_val = struct.unpack("I", bytes(frame_type))[0]
if (frame_type_val != SelectResponseAdapter._DATA_FRAME_TYPE and
frame_type_val != SelectResponseAdapter._CONTINIOUS_FRAME_TYPE and
frame_type_val != SelectResponseAdapter._END_FRAME_TYPE and
frame_type_val != SelectResponseAdapter._META_END_FRAME_TYPE and
frame_type_val != SelectResponseAdapter._JSON_META_END_FRAME_TYPE):
logger.warning("Unexpected frame type: {0}. RequestId:{1}. This could be due to the old version of client.".format(frame_type_val, self.request_id))
raise SelectOperationClientError(self.request_id, "Unexpected frame type:" + str(frame_type_val))
self.payload = self.read_raw(payload_length_val)
file_offset_bytes = bytearray(self.payload[0:8])
utils.change_endianness_if_needed(file_offset_bytes)
self.file_offset = struct.unpack("Q", bytes(file_offset_bytes))[0]
if frame_type_val == SelectResponseAdapter._DATA_FRAME_TYPE:
self.frame_length = payload_length_val - 8
self.frame_off_set = 0
self.check_sum_flag=1
self.frame_data = self.payload[8:]
checksum = bytearray(self.read_raw(4)) #read checksum crc32
utils.change_endianness_if_needed(checksum)
checksum_val = struct.unpack("I", bytes(checksum))[0]
if self.enable_crc:
crc32 = utils.Crc32()
crc32.update(self.payload)
checksum_calc = crc32.crc
if checksum_val != checksum_calc:
logger.warning("Incorrect checksum: Actual {0} and calculated {1}. RequestId:{2}".format(checksum_val, checksum_calc, self.request_id))
raise InconsistentError("Incorrect checksum: Actual" + str(checksum_val) + ". Calculated:" + str(checksum_calc), self.request_id)
elif frame_type_val == SelectResponseAdapter._CONTINIOUS_FRAME_TYPE:
self.frame_length = self.frame_off_set = 0
self.check_sum_flag=1
self.read_raw(4)
elif frame_type_val == SelectResponseAdapter._END_FRAME_TYPE:
self.frame_off_set = 0
scanned_size_bytes = bytearray(self.payload[8:16])
status_bytes = bytearray(self.payload[16:20])
utils.change_endianness_if_needed(status_bytes)
status = struct.unpack("I", bytes(status_bytes))[0]
error_msg_size = payload_length_val - 20
error_msg=b''
error_code = b''
if error_msg_size > 0:
error_msg = self.payload[20:error_msg_size + 20]
error_code_index = error_msg.find(b'.')
if error_code_index >= 0 and error_code_index < error_msg_size - 1:
error_code = error_msg[0:error_code_index]
error_msg = error_msg[error_code_index + 1:]
if status // 100 != 2:
raise SelectOperationFailed(status, error_code, error_msg)
self.frame_length = 0
if self.callback is not None:
self.callback(self.file_offset, self.content_length)
self.read_raw(4) # read the payload checksum
self.frame_length = 0
self.finished = 1
elif frame_type_val == SelectResponseAdapter._META_END_FRAME_TYPE or frame_type_val == SelectResponseAdapter._JSON_META_END_FRAME_TYPE:
self.frame_off_set = 0
scanned_size_bytes = bytearray(self.payload[8:16])
status_bytes = bytearray(self.payload[16:20])
utils.change_endianness_if_needed(status_bytes)
status = struct.unpack("I", bytes(status_bytes))[0]
splits_bytes = bytearray(self.payload[20:24])
utils.change_endianness_if_needed(splits_bytes)
self.splits = struct.unpack("I", bytes(splits_bytes))[0]
lines_bytes = bytearray(self.payload[24:32])
utils.change_endianness_if_needed(lines_bytes)
self.rows = struct.unpack("Q", bytes(lines_bytes))[0]
error_index = 36
if frame_type_val == SelectResponseAdapter._META_END_FRAME_TYPE:
column_bytes = bytearray(self.payload[32:36])
utils.change_endianness_if_needed(column_bytes)
self.columns = struct.unpack("I", bytes(column_bytes))[0]
else:
error_index = 32
error_size = payload_length_val - error_index
error_msg = b''
error_code = b''
if (error_size > 0):
error_msg = self.payload[error_index:error_index + error_size]
error_code_index = error_msg.find(b'.')
if error_code_index >= 0 and error_code_index < error_size - 1:
error_code = error_msg[0:error_code_index]
error_msg = error_msg[error_code_index + 1:]
self.read_raw(4) # read the payload checksum
self.final_status = status
self.frame_length = 0
self.finished = 1
if (status // 100 != 2):
raise SelectOperationFailed(status, error_code, error_msg)
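# Illustration only (not part of the SDK): a minimal sketch of decoding one
# frame header by hand, for the frame layout described at the top of this
# file. Byte order is assumed to be network (big-endian) order, matching the
# endianness conversion applied in read_next_frame above.
def _example_parse_frame_header(head):
    """Split the 12 header bytes into (frame type, payload length, header checksum)."""
    frame_type, payload_length, header_checksum = struct.unpack('>III', head[:12])
    frame_type &= 0x00FFFFFF  # drop the leading version byte, as read_next_frame does
    return frame_type, payload_length, header_checksum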
| 41.929167 | 164 | 0.617311 | 1,221 | 10,063 | 4.773956 | 0.141687 | 0.060216 | 0.031223 | 0.062275 | 0.499056 | 0.39029 | 0.349116 | 0.276548 | 0.219077 | 0.164179 | 0 | 0.027426 | 0.293451 | 10,063 | 239 | 165 | 42.104603 | 0.792405 | 0.018384 | 0 | 0.293194 | 0 | 0.005236 | 0.029078 | 0.002486 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036649 | false | 0 | 0.052356 | 0.010471 | 0.167539 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3481af2e70769439b70e72f6601e14cbab1fe00b | 10,622 | py | Python | datagenerationpipeline.py | odeliss/wapetex | 1ad6740f537724e3640175234ea5d120912a0c3c | [
"MIT"
] | null | null | null | datagenerationpipeline.py | odeliss/wapetex | 1ad6740f537724e3640175234ea5d120912a0c3c | [
"MIT"
] | null | null | null | datagenerationpipeline.py | odeliss/wapetex | 1ad6740f537724e3640175234ea5d120912a0c3c | [
"MIT"
] | null | null | null | import os
import cv2
import imutils
import random
from skimage.transform import rotate
#3 next ones used for the skewing
from skimage.transform import warp
from skimage import data
from skimage.transform import ProjectiveTransform
import numpy as np
#this class will rotate, crop, add noise, blur, skew, and flip images vertically/horizontally (a usage sketch is appended at the end of this file)
class dataGenerationPipeline(object):
def __init__(self, imageDirectory, fileType):
#Load the images of type fileType contained in the directory and store them in a list
self.inputDirectory = imageDirectory
self.inputImageList = []
self.inputFileType = fileType
self.DBG = True
if self.DBG:
print("[DEBUG] : Retrieving {} images from directory {} "
.format(self.inputFileType, self.inputDirectory))
inputFileNames = [os.path.join(self.inputDirectory,f)
for f in os.listdir(self.inputDirectory)
if f.endswith(str(fileType))]
self.inputImageList = [cv2.imread(f) for f in inputFileNames] #store the images
self.inputFilenames = [f for f in os.listdir(self.inputDirectory)
if f.endswith(str(fileType))] #store the file names for latter user
if self.DBG:
print("[DEBUG] : Retrieved {} images".format(len(self.inputImageList)))
def rotate(self):
#This will create 2 more images for each input image,
#rotated by 90 and 270 degrees,
#and store them in the /rotated subdirectory of the input directory
outputDirectory= os.path.join(self.inputDirectory,"rotated")
if not os.path.exists(outputDirectory): #if directory does not exist create it
os.makedirs(outputDirectory)
if self.DBG:
print("[DEBUG] : Created new directory {}".format(outputDirectory))
for (fileName,image) in zip(self.inputFilenames, self.inputImageList):
#usage of imutils librairy in oder to avoid the cropping of image when rotated
#ref:http://www.pyimagesearch.com/2017/01/02/rotate-images-correctly-with-opencv-and-python/
for angle in (90,270): #rotate by 90 and 270 degrees
rotated = imutils.rotate_bound(image, int(angle)) #This version leave black background
#rotated = rotate(image, int(angle), resize=True, mode='edge', preserve_range = True)
outputFilename = os.path.join(outputDirectory, "rot"+ str(int(angle)) +"_" + str(fileName))
cv2.imwrite (outputFilename, rotated) #store rotated image
#if self.DBG:
#print("[DEBUG] : Saved Image {}".format(outputFilename))
if self.DBG:
numberImages= len(os.listdir(outputDirectory))
print("[DEBUG] : Saved {} images in {} ".format(numberImages, outputDirectory))
def flip(self, horizontaly = False, verticaly = False):
#flip the images ad save them to the "flipped" sub-directory
outputDirectory=os.path.join(self.inputDirectory,"flipped")
if not os.path.exists(outputDirectory): #if directory does not exist create it
os.makedirs(outputDirectory)
if self.DBG:
print("[DEBUG] : Created new directory {}".format(outputDirectory))
for (fileName,image) in zip(self.inputFilenames, self.inputImageList):
if horizontaly and not verticaly: #flip horizontaly
flipped = cv2.flip(image, 1)
outputFilename = os.path.join(outputDirectory, "horFlip_"+str(fileName))
cv2.imwrite (outputFilename, flipped) #store flipped image
if verticaly and not horizontaly: #flip vertically
flipped = cv2.flip(image, 0)
outputFilename = os.path.join(outputDirectory,"verFlip_"+str(fileName))
cv2.imwrite (outputFilename, flipped) #store flipped image
if verticaly and horizontaly: #flip both axes
flipped = cv2.flip(image, -1)
outputFilename = os.path.join(outputDirectory, "horVerFlip_"+str(fileName))
cv2.imwrite (outputFilename, flipped) #store flipped image
if self.DBG:
numberImages= len(os.listdir(outputDirectory))
print("[DEBUG] : Saved {} images in {} ".format(numberImages, outputDirectory))
def addnoise(self, type='gauss'):
#will create on-disk copies of the images with gaussian (or salt & pepper) noise added
outputDirectory=os.path.join(self.inputDirectory,"noise")
if not os.path.exists(outputDirectory): #if directory does not exist create it
os.makedirs(outputDirectory)
if self.DBG:
print("[DEBUG] : Created new directory {}".format(outputDirectory))
for (fileName,image) in zip(self.inputFilenames, self.inputImageList):
#https://stackoverflow.com/questions/22937589/how-to-add-noise-gaussian-salt-and-pepper-etc-to-image-in-python-with-opencv/30609854
if type == 'gauss':
row,col,ch= image.shape
mean = 0
var = 122
sigma = var**0.5
gauss = np.random.normal(mean,sigma,(row,col,ch))
gauss = gauss.reshape(row,col,ch)
noisy = image + gauss #adding the float noise upcasts the image; values may fall outside 0-255
outputFilename = os.path.join(outputDirectory, "gauss_"+str(fileName))
cv2.imwrite (outputFilename, noisy) #store noisy image
elif type == 's&p':
row,col,ch = image.shape
s_vs_p = 0.5
amount = 0.004
out = np.copy(image)
# Salt mode
num_salt = np.ceil(amount * image.size * s_vs_p)
coords = [np.random.randint(0, i - 1, int(num_salt))
for i in image.shape]
out[coords] = 255 #add some white randomly
# Pepper mode
num_pepper = np.ceil(amount* image.size * (1. - s_vs_p))
coords = [np.random.randint(0, i - 1, int(num_pepper))
for i in image.shape]
out[coords] = 0 #add some black randomly
outputFilename = os.path.join(outputDirectory, "s&p_"+str(fileName))
cv2.imwrite (outputFilename, out) #store s&p image
if self.DBG:
numberImages= len(os.listdir(outputDirectory))
print("[DEBUG] : Saved {} images in {} ".format(numberImages, outputDirectory))
def blur(self, size=(5,5)):
#blur the image using a gaussian filter of size "size"
outputDirectory=os.path.join(self.inputDirectory,"blurred")
if not os.path.exists(outputDirectory): #if directory does not exist create it
os.makedirs(outputDirectory)
if self.DBG:
print("[DEBUG] : Created new directory {}".format(outputDirectory))
for (fileName,image) in zip(self.inputFilenames, self.inputImageList):
blurred = cv2.GaussianBlur(image, size, 0) #3rd param tells cv2 to compute sigma according to filter size
outputFilename = os.path.join(outputDirectory,"blur_"+str(fileName))
cv2.imwrite (outputFilename, blurred) #store blurred image
if self.DBG:
numberImages= len(os.listdir(outputDirectory))
print("[DEBUG] : Saved {} images in {} ".format(numberImages, outputDirectory))
def skew(self):
#return 5 versions of the original image skewed in 5 different random ways,
#http://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.ProjectiveTransform
outputDirectory=self.initializeDirectory("skewed") #prepare the output directory
for (fileName,image) in zip(self.inputFilenames, self.inputImageList):
(imageLength, imageWidth,_) = image.shape
skewExtend = int(max(imageLength, imageWidth) * 0.20) #Will distort the image by max 20% of max length, width
for nbNewImages in range(5): #generate 5 skewed images
d = skewExtend #distortion applied is proportional to the parameter
#define the distortions to apply
topLeftTopShift = random.uniform(-d, d)
topLeftLeftShift = random.uniform(-d, d)
bottomLeftBottomShift = random.uniform(-d, d)
bottomLeftLeftShift = random.uniform(-d, d)
topRightTopShift = random.uniform(-d, d)
topRightRightShift = random.uniform(-d, d)
bottomRightBottomShift = random.uniform(-d, d)
bottomRightRightShift = random.uniform(-d, d)
#enable the projective transform
transform = ProjectiveTransform()
#estimate the projective transform from the randomly shifted corners to the original image corners
transform.estimate(np.array((
(topLeftLeftShift , topLeftTopShift),
(bottomLeftLeftShift, imageWidth- bottomLeftBottomShift),
(imageLength - bottomRightRightShift, imageWidth - bottomRightBottomShift),
(imageLength- topRightRightShift, topRightTopShift))),
np.array((
(0,0),
(0, imageWidth),
(imageLength, imageWidth),
(imageLength,0)))
)
#apply the skew
skewed = warp(image, transform, mode='edge')
skewed = skewed*255
outputFilename = os.path.join(outputDirectory,"skewed"+str(nbNewImages)+"_"+str(fileName))
cv2.imwrite (outputFilename, skewed) #store skewed image
if self.DBG:
numberImages= len(os.listdir(outputDirectory))
print("[DEBUG] : Saved {} images in {} ".format(numberImages, outputDirectory))
def crop(self):
#generate and save 6 cropped versions of each original image
outputDirectory=self.initializeDirectory("cropped") #prepare the output directory
for (fileName,image) in zip(self.inputFilenames, self.inputImageList):
(imageLength, imageWidth,_) = image.shape
width25Percent = int(0.25*imageWidth)
width75Percent = int(0.75*imageWidth)
length25Percent = int(0.25*imageLength)
length75Percent = int(0.75*imageLength)
width33Percent = int(0.33*imageWidth)
width66Percent = int(0.66*imageWidth)
length33Percent = int(0.33*imageLength)
length66Percent = int(0.66*imageLength)
cropped=[]
cropped.append(image[length25Percent:length75Percent , 0:imageWidth]) #vertical layer 0.25 till 0.75 length, full width
cropped.append(image[0:imageLength , width25Percent:width75Percent]) #horizontal layer 0.25 till 0.75 width, full length
cropped.append(image[0:length66Percent , 0:width66Percent]) #window 0 --> 0.66 width, 0 --> 0.66 length
cropped.append(image[0:length66Percent , width33Percent:imageWidth]) #window 0.33 --> 1 width, 0 --> 0.66 length
cropped.append(image[length33Percent:imageLength , 0:width66Percent]) #window 0 --> 0.66 width, 0.33 --> 1 length
cropped.append(image[length33Percent:imageLength , width33Percent:imageWidth]) #window 0.33 --> 1 width, 0.33 --> 1 length
for (i, croppedImage) in enumerate(cropped): #avoid shadowing the outer loop variable "image"
outputFilename = os.path.join(outputDirectory,"cropped"+str(i)+"_"+str(fileName))
cv2.imwrite (outputFilename, croppedImage) #store cropped image
if self.DBG:
numberImages= len(os.listdir(outputDirectory))
print("[DEBUG] : Saved {} images in {} ".format(numberImages, outputDirectory))
def initializeDirectory(self, directoryName):
#will create the output directory if needed and return its name
outputDirectory=os.path.join(self.inputDirectory,str(directoryName)) #save them in the new directory
if not os.path.exists(outputDirectory): #if directory does not exist create it
os.makedirs(outputDirectory)
if self.DBG:
print("[DEBUG] : Created new directory {}".format(outputDirectory))
return outputDirectory
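# --- usage sketch (not part of the original file) ---
# The enclosing class name and constructor are not visible in this excerpt, so
# "ImageAugmenter(...)" below is a hypothetical name used only for illustration;
# the methods themselves (addnoise, blur, skew, crop) are the ones defined above.
#
# augmenter = ImageAugmenter("./dataset/train")   # hypothetical constructor
# augmenter.addnoise(type='gauss')   # writes gauss_* files to ./dataset/train/noise
# augmenter.addnoise(type='s&p')     # writes s&p_* files to ./dataset/train/noise
# augmenter.blur(size=(5, 5))        # writes blur_* files to ./dataset/train/blurred
# augmenter.skew()                   # writes 5 skewed* files per image to ./dataset/train/skewed
# augmenter.crop()                   # writes 6 cropped* files per image to ./dataset/train/cropped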
| 40.234848 | 134 | 0.720015 | 1,341 | 10,622 | 5.683072 | 0.208054 | 0.015746 | 0.019682 | 0.028343 | 0.492193 | 0.392993 | 0.354809 | 0.334602 | 0.308883 | 0.308883 | 0 | 0.023799 | 0.165317 | 10,622 | 263 | 135 | 40.387833 | 0.835777 | 0.238185 | 0 | 0.323864 | 0 | 0 | 0.069408 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.051136 | 0 | 0.107955 | 0.073864 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3481d1c2600c28f0691d9c43bd72bdf90e3ec7b3 | 3,381 | py | Python | esheep_sdk/example/medusa/algorithm1/config.py | skygoo/esheep-sdk | b413bba8f703dbdd3752d0f7c5787bce030c0a67 | [
"Apache-2.0"
] | 3 | 2019-01-15T00:32:34.000Z | 2019-05-14T09:04:38.000Z | esheep-sdk/example/medusa/algorithm1/config.py | seekloud/esheep-sdk | 1c8978e2bf00f9f4838981a71a555c169a97fb21 | [
"Apache-2.0"
] | null | null | null | esheep-sdk/example/medusa/algorithm1/config.py | seekloud/esheep-sdk | 1c8978e2bf00f9f4838981a71a555c169a97fb21 | [
"Apache-2.0"
] | 3 | 2019-01-23T07:44:09.000Z | 2019-03-24T15:11:16.000Z | # Author: Taoz
# Date : 8/29/2018
# Time : 3:08 PM
# FileName: config.py
import time
import sys
import configparser
import os
def load_conf(conf_file):
# Get the absolute path of the current file
current_path = os.path.abspath(__file__)
# Default config file path: the directory containing this file joined with 'dqn_default_conf.ini'
default_conf_file = os.path.join(os.path.abspath(os.path.dirname(current_path)),
'dqn_default_conf.ini')
# print('default conf file:', default_conf_file)
#
# print('customer_conf_file:', conf_file)
# print(os.path.exists(default_conf_file))
# print(os.path.exists(conf_file))
config = configparser.ConfigParser(allow_no_value=True, interpolation=configparser.ExtendedInterpolation())
# config.read(default_conf_file)
config.read(conf_file)
return config['DQN']
customer_conf_file = sys.argv[1]
dqn_conf = load_conf(customer_conf_file)
"""experiment"""
GPU_INDEX = dqn_conf.getint('GPU_INDEX')
PRE_TRAIN_MODEL_FILE = dqn_conf.get('PRE_TRAIN_MODEL_FILE')
EPOCH_NUM = dqn_conf.getint('EPOCH_NUM')
EPOCH_LENGTH = dqn_conf.getint('EPOCH_LENGTH')
RANDOM_SEED = int(time.time() * 1000) % 100000000
"""game env"""
GAME_NAME = dqn_conf.get('GAME_NAME')
ACTION_NUM = dqn_conf.getint('ACTION_NUM')
OBSERVATION_TYPE = dqn_conf.get('OBSERVATION_TYPE')
CHANNEL = dqn_conf.getint('CHANNEL')
WIDTH = dqn_conf.getint('WIDTH')
HEIGHT = dqn_conf.getint('HEIGHT')
FRAME_SKIP = dqn_conf.getint('FRAME_SKIP')
"""player"""
TRAIN_PER_STEP = dqn_conf.getint('TRAIN_PER_STEP')
"""replay buffer"""
PHI_LENGTH = dqn_conf.getint('PHI_LENGTH')
BUFFER_MAX = dqn_conf.getint('BUFFER_MAX')
BEGIN_RANDOM_STEP = dqn_conf.getint('BEGIN_RANDOM_STEP')
"""q-learning"""
DISCOUNT = dqn_conf.getfloat('DISCOUNT')
EPSILON_MIN = dqn_conf.getfloat('EPSILON_MIN')
EPSILON_START = dqn_conf.getfloat('EPSILON_START')
EPSILON_DECAY = dqn_conf.getint('EPSILON_DECAY')
IS_DOUBLE = dqn_conf.getint('IS_DOUBLE')
IS_DUELING = dqn_conf.getint('IS_DUELING')
"""noisy-net option"""
NOISY_SCALE = dqn_conf.getfloat('NOISY_SCALE')
NOISY_ALPHA = dqn_conf.getfloat('NOISY_ALPHA')
UPDATE_TARGET_BY_EPISODE_END = dqn_conf.getint('UPDATE_TARGET_BY_EPISODE_END')
UPDATE_TARGET_BY_EPISODE_BEGIN = dqn_conf.getint('UPDATE_TARGET_BY_EPISODE_BEGIN')
UPDATE_TARGET_DECAY = dqn_conf.getint('UPDATE_TARGET_DECAY')
UPDATE_TARGET_RATE = (UPDATE_TARGET_BY_EPISODE_END - UPDATE_TARGET_BY_EPISODE_BEGIN) / UPDATE_TARGET_DECAY + 0.000001
OPTIMIZER = dqn_conf.get('OPTIMIZER')
LEARNING_RATE = dqn_conf.getfloat('LEARNING_RATE')
WEIGHT_DECAY = dqn_conf.getfloat('WEIGHT_DECAY')
GRAD_CLIPPING_THETA = dqn_conf.getfloat('GRAD_CLIPPING_THETA')
POSITIVE_REWARD = dqn_conf.getfloat('POSITIVE_REWARD')
NEGATIVE_REWARD = dqn_conf.getfloat('NEGATIVE_REWARD')
LIVING_REWARD = dqn_conf.getfloat('LIVING_REWARD')
SPECIAL_PUNISH = dqn_conf.getfloat('SPECIAL_PUNISH')
"""OTHER"""
MODEL_PATH = dqn_conf.get('MODEL_PATH')
MODEL_FILE_MARK = dqn_conf.get('MODEL_FILE_MARK')
BEGIN_TIME = time.strftime("%Y%m%d_%H%M%S")
EDITED_TIME = dqn_conf.get("EDITED_TIME")
print('\n\n\n\n++++++++++++++++ config edited time: %s ++++++++++++++++++' % EDITED_TIME)
print('BEGIN_TIME:', BEGIN_TIME)
print('CONF FILE:', customer_conf_file)
print('GAME_NAME:', GAME_NAME)
print('--------------------------')
print('configuration:')
for k, v in dqn_conf.items():
print('[%s = %s]' % (k, v))
print('--------------------------')
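# --- example configuration (assumption; the real dqn_default_conf.ini is not shown here) ---
# The module reads the config path from sys.argv[1] and expects a [DQN] section that
# defines every key accessed above. A minimal illustrative file could look like:
#
# [DQN]
# GPU_INDEX = 0
# PRE_TRAIN_MODEL_FILE =
# EPOCH_NUM = 100
# EPOCH_LENGTH = 10000
# GAME_NAME = medusa
# ACTION_NUM = 4
# OBSERVATION_TYPE = pixel
# CHANNEL = 3
# WIDTH = 84
# HEIGHT = 84
# FRAME_SKIP = 1
# ...
# MODEL_PATH = ./models
# MODEL_FILE_MARK = dqn
# EDITED_TIME = 2018-08-29
#
# Values above are placeholders chosen for illustration, not taken from the repository.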
| 31.598131 | 117 | 0.736469 | 481 | 3,381 | 4.800416 | 0.274428 | 0.118233 | 0.101343 | 0.054569 | 0.12343 | 0.102209 | 0.080554 | 0.069294 | 0.043309 | 0.043309 | 0 | 0.010238 | 0.104407 | 3,381 | 106 | 118 | 31.896226 | 0.752312 | 0.091689 | 0 | 0.032258 | 0 | 0 | 0.231004 | 0.045455 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016129 | false | 0 | 0.064516 | 0 | 0.096774 | 0.129032 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caa368ee3f86b500c3851b882996cf49e1b7ed27 | 9,657 | py | Python | sqlorm4es/sql.py | floatliang/sqlorm4es | 343d8b350087e0d73784d70c4fa152febc0e0b14 | [
"MIT"
] | 6 | 2019-10-11T17:03:22.000Z | 2019-10-14T02:11:02.000Z | sqlorm4es/sql.py | floatliang/sqlorm4es | 343d8b350087e0d73784d70c4fa152febc0e0b14 | [
"MIT"
] | 1 | 2019-10-14T02:11:23.000Z | 2019-10-14T12:41:25.000Z | sqlorm4es/sql.py | floatliang/sqlorm4es | 343d8b350087e0d73784d70c4fa152febc0e0b14 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# @Time : 2019/10/4 18:22
# @Author : floatsliang
# @File : sql.py
from typing import Union
import re
from copy import deepcopy
from .epool import POOL
from .compiler import QueryCompiler
from .field import Expr, Field, OP_DICT, OP
from .utils import result_wrapper, SearchResult
_WHERE_PATTERN = re.compile(
r'^\s*(?P<lhs>\S+)\s*(?P<op>(=|!=|<>|>=|<=|>|<|in|IN|LIKE|like|MATCH|match|MATCHALL|matchall))\s*(?P<rhs>\S+)\s*$')
_AGG_PATTERN = re.compile(
r'^\s*(?P<aggs>(count|COUNT|sum|SUM|max|MAX|min|MIN|avg|AVG|distinct|DISTINCT))\(\s*(?P<field>\S+)\s*\)\s*$')
_ORDER_BY_PATTERN = re.compile(r'^\s*(?P<field>\S+),?(\s+(?P<order>(asc|ASC|desc|DESC)))?\s*$')
def where_str_to_expr_dict(expr_str: str) -> dict:
expr_dict = {}
op_list = [' = ', ' != ', ' >= ', ' <= ', ' > ', ' < ', ' <> ', ' in ', ' not like ', ' like ', ' not match ',
' match ', ' matchall ']
expr_str_lower = expr_str.lower()
for op in op_list:
op_idx = expr_str_lower.find(op)
if op_idx >= 0:
expr_dict['op'] = op.strip()
expr_dict['lhs'] = expr_str[:op_idx].strip()
expr_dict['rhs'] = expr_str[op_idx + len(op):].strip()
break
return expr_dict
def parse_where(node: str):
match_dict = where_str_to_expr_dict(node)
if match_dict:
if match_dict['op'].lower() not in OP_DICT:
raise NotImplementedError(u'ERROR: {} operation not supported in str type'.format(match_dict['op']))
return Expr(match_dict['lhs'], OP_DICT[match_dict['op'].lower()], match_dict['rhs'])
return None
def parse_where_tuple_expr(node: Union[tuple, list]):
field = node[0].strip()
op = node[1].lower().strip()
values = node[2]
if len(node) > 3:
rel_op = node[3].lower().strip()
else:
rel_op = 'and'
new_expr = None
if op not in OP_DICT:
raise NotImplementedError(u'ERROR: {} operation not supported in str type'.format(op))
if not isinstance(values, (tuple, list)):
values = [values]
for val in values:
if new_expr:
new_expr = Expr(new_expr, OP_DICT[rel_op], Expr(field, OP_DICT[op], val))
else:
new_expr = Expr(field, OP_DICT[op], val)
return new_expr
def parse_aggs(node: str):
match_agg = _AGG_PATTERN.match(node)
if match_agg:
match_dict = match_agg.groupdict()
if match_dict['aggs'].lower() not in OP_DICT:
raise NotImplementedError(u'ERROR: {} aggregation not supported yet'.format(match_dict['aggs']))
return Expr(match_dict['field'], OP_DICT[match_dict['aggs'].lower()], None)
return None
def parse_order_by(order_by: str):
match_order_by = _ORDER_BY_PATTERN.match(order_by)
if match_order_by:
match_dict = match_order_by.groupdict()
agg = parse_aggs(match_dict['field'])
if agg:
match_dict['field'] = agg
return match_dict['field'], match_dict.get('order', 'asc')
return None
class SQL(object):
def __init__(self, model_clazz, *args, **kwargs):
self._model_clazz = model_clazz
self._index = None
self._database = None
self._doc_type = None
if model_clazz:
meta = getattr(model_clazz, '_meta')
if meta:
self._index = getattr(meta, 'index', None)
self._database = getattr(meta, 'database', None)
self._doc_type = getattr(meta, 'doc_type', None)
self._data['where'] = None
self._index = kwargs.get('index', None) or self._index
self._database = kwargs.get('database', None) or self._database
self._doc_type = kwargs.get('doc_type', None) or self._doc_type
self._compiler = QueryCompiler(self._data)
def index(self, index):
self._index = index
return self
def database(self, database):
self._database = database
return self
def doc_type(self, doc_type):
self._doc_type = doc_type
return self
def clone(self):
new_sql = self.__class__(self._model_clazz)
new_sql.index(self._index)
new_sql.database(deepcopy(self._database))
new_sql._data = deepcopy(self._data)
new_sql._doc_type = self._doc_type
new_sql._compiler = QueryCompiler(new_sql._data)
return new_sql
def where(self, *nodes):
for node in nodes:
if not isinstance(node, Expr):
if isinstance(node, str):
node = parse_where(node)
elif isinstance(node, (tuple, list)):
node = parse_where_tuple_expr(node)
else:
raise ValueError(u'ERROR: node in where must be expression or str')
if not node:
raise AttributeError(u'ERROR: node cannot be NoneType, it may caused by parse failure')
if not self._data['where']:
self._data['where'] = node
else:
self._data['where'] &= node
return self
def compile(self):
raise NotImplementedError
def execute(self):
raise NotImplementedError
class InsertSQL(SQL):
def __init__(self, model_clazz, **kwargs):
if not model_clazz:
raise Exception(u'InsertSQL must have index Model')
self._data = {'id': None, 'values': [], 'upsert': True, 'columns': model_clazz._fields}
super(InsertSQL, self).__init__(model_clazz, **kwargs)
def id(self, doc_id):
self._data['id'] = doc_id
return self
def values(self, rows):
pass
def upsert(self, insert_on_conflict=True):
self._data['upsert'] = insert_on_conflict
return self
def compile(self):
pass
def execute(self, ):
pass
class DeleteSQL(SQL):
def __init__(self, model_clazz, **kwargs):
self._data = {}
super(DeleteSQL, self).__init__(model_clazz, **kwargs)
def compile(self):
pass
def execute(self):
pass
class UpdateSQL(SQL):
def __init__(self, model_clazz, **kwargs):
self._data = {}
super(UpdateSQL, self).__init__(model_clazz, **kwargs)
def compile(self):
pass
def execute(self):
pass
class SelectSQL(SQL):
def __init__(self, model_clazz=None, **kwargs):
self._data = {'fields': [], 'join': None, 'group_by': [], 'order_by': {}, 'limit': None, 'offset': None}
super(SelectSQL, self).__init__(model_clazz, **kwargs)
def fields(self, *fields):
for field in fields:
if isinstance(field, str):
agg = parse_aggs(field)
if agg:
field = agg
elif isinstance(field, Field):
field = field.get_name()
self._data['fields'].append(field)
return self
def group_by(self, *fields):
self._data['group_by'] += fields
return self
def order_by(self, *fields):
for field_and_order in fields:
if isinstance(field_and_order, tuple) or isinstance(field_and_order, list):
field, order = field_and_order
field = field.get_name() if isinstance(field, Field) else str(field)
self._data['order_by'][field] = order
continue
field, order = parse_order_by(field_and_order)
if not field:
raise ValueError(u'ERROR: order by field {} cannot be parsed correctly'.format(field_and_order))
self._data['order_by'][field] = order
return self
def limit(self, limit: int = 10):
self._data['limit'] = int(limit)
return self
def offset(self, offset: int = 0):
self._data['offset'] = int(offset)
return self
def join(self, join_type, on=None):
return self
def compile(self):
return self._compiler.compile()
def paginate(self):
"""
Yield pages of search results for the compiled query, paginating with search_after on the order-by fields.
:return:
"""
if not self._data['limit'] or not self._data['order_by']:
raise Exception(u'ERROR: paginate need page length(limit) and order by field')
query = self.order_by(('_id', 'asc')).compile()
database = POOL.connect(**self._database)
res = database.search(index=self._index, body=query, doc_type=self._doc_type)
if 'hits' not in res:
return
res_data = res['hits']['hits']
if not res_data:
return
search_after_val = res_data[-1].get('sort', None)
if search_after_val is None:
return SearchResult(res)
if 'from' in query:
del query['from']
query['search_after'] = search_after_val
while 1:
yield SearchResult(res)
res = database.search(index=self._index, body=query, doc_type=self._doc_type)
if 'hits' not in res:
return
res_data = res['hits']['hits']
if not res_data:
return
query['search_after'] = res_data[-1]['sort']
@result_wrapper
def execute(self):
query = self.compile()
kwargs = {}
if self._doc_type:
kwargs['doc_type'] = self._doc_type
return POOL.connect(**self._database).search(index=self._index, body=query, **kwargs)
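# --- usage sketch (assumption: "LogDoc" is a hypothetical Model subclass whose Meta
# defines index/database/doc_type; only the SelectSQL API shown above is real) ---
#
# query = (SelectSQL(LogDoc)
#          .fields("host", "count(status)")   # "count(...)" is parsed by parse_aggs()
#          .where("status = 200", ("host", "in", ["web-1", "web-2"]))
#          .group_by("host")
#          .order_by("timestamp desc")
#          .limit(20))
# body = query.compile()    # Elasticsearch request body built by QueryCompiler
# result = query.execute()  # SearchResult wrapper around the raw response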
| 33.53125 | 120 | 0.569224 | 1,203 | 9,657 | 4.334165 | 0.144638 | 0.025508 | 0.029919 | 0.01611 | 0.249521 | 0.181435 | 0.138282 | 0.125432 | 0.112006 | 0.094361 | 0 | 0.003582 | 0.306099 | 9,657 | 287 | 121 | 33.648084 | 0.774511 | 0.016051 | 0 | 0.247788 | 0 | 0.013274 | 0.114815 | 0.030065 | 0 | 0 | 0 | 0 | 0 | 1 | 0.154867 | false | 0.030973 | 0.030973 | 0.00885 | 0.331858 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caaa92936e9bdaba1fb9ce03d01d0d8201e0f71d | 702 | py | Python | warp_stl.py | rcpedersen/stl-warper | 2c838795f0527d1159907b477f09ca40ade6d121 | [
"MIT"
] | 3 | 2016-02-23T10:35:57.000Z | 2017-09-11T00:14:03.000Z | warp_stl.py | rcpedersen/stl-warper | 2c838795f0527d1159907b477f09ca40ade6d121 | [
"MIT"
] | null | null | null | warp_stl.py | rcpedersen/stl-warper | 2c838795f0527d1159907b477f09ca40ade6d121 | [
"MIT"
] | null | null | null | from reader import read_stl_verticies
from utils import split_triangles, push2d, top_wider, push3d
from writer import Binary_STL_Writer
if __name__ == '__main__':
faces = []
triangles = read_stl_verticies("./input.stl")
more_triangles = split_triangles(triangles,5)
for triangle in more_triangles:
triangle = list(triangle)
for i in range(3):
#Edit to change how the STL is warped
triangle[i] = push3d(triangle[i],[0,0,10],400)
triangle[i] = top_wider(triangle[i],2)
faces.append(triangle)
with open('output.stl', 'wb') as fp:
writer = Binary_STL_Writer(fp)
writer.add_faces(faces)
writer.close()
| 35.1 | 60 | 0.655271 | 95 | 702 | 4.6 | 0.515789 | 0.08238 | 0.073227 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 0.240741 | 702 | 19 | 61 | 36.947368 | 0.795497 | 0.051282 | 0 | 0 | 0 | 0 | 0.046617 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.176471 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caaae961bfa85f149e075a0231d58bb0c94823f0 | 3,511 | py | Python | dcms/news/tests.py | regiondavid/django-demo | 4e1829c222d5f37ef222818b6f5afccc76a24f39 | [
"MIT"
] | 1 | 2019-08-31T16:54:28.000Z | 2019-08-31T16:54:28.000Z | dcms/news/tests.py | regiondavid/django-demo | 4e1829c222d5f37ef222818b6f5afccc76a24f39 | [
"MIT"
] | 9 | 2020-06-05T17:25:55.000Z | 2022-01-13T00:39:47.000Z | dcms/news/tests.py | regiondavid/django-demo | 4e1829c222d5f37ef222818b6f5afccc76a24f39 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import datetime
from django.utils import timezone
from django.test import TestCase
from django.urls import reverse
from .models import Question
# Create your tests here.
class QuestionMethodTest(TestCase):
def test_was_published_recently_with_future_question(self):
"""
was_published_recently() should return False for questions whose pub_date is in the future.
"""
time = timezone.now() + datetime.timedelta(days=30)
future_question = Question(pub_date=time)
self.assertIs(future_question.was_published_recently(), False)
def test_was_published_recently_with_old_question(self):
"""
was_published_recently() should return False for questions whose pub_date is older than 1 day.
"""
time = timezone.now() - datetime.timedelta(days=30)
old_question = Question(pub_date=time)
self.assertIs(old_question.was_published_recently(), False)
def test_was_published_recently_with_recent_question(self):
"""
was_published_recently() should return True for questions whose pub_date is within the last day.
"""
time = timezone.now() - datetime.timedelta(hours=1)
recent_question = Question(pub_date=time)
self.assertIs(recent_question.was_published_recently(), True)
def create_question(question_text, days):
"""
Create a question with the given question_text, published the given number of days offset to now (negative for past, positive for future).
"""
time = timezone.now() + datetime.timedelta(days=days)
return Question.objects.create(question_text=question_text, pub_date=time)
class QuestionViewTests(TestCase):
def test_index_with_no_question(self):
"""
If no questions exist, an appropriate message should be displayed.
"""
response = self.client.get(reverse('news:index'))
self.assertEqual(response.status_code, 200)
self.assertContains(response, "No news are avalible")
self.assertQuerysetEqual(response.context['latest_question_list'], [])
def test_index_view_with_a_past_question(self):
"""
Questions with a pub_date in the past should be displayed on the index page.
"""
create_question(question_text="Past question.", days=-30)
response = self.client.get(reverse('news:index'))
self.assertQuerysetEqual(
response.context['latest_question_list'],
[u'<Question: Past question.>']
)
def test_index_view_with_a_future_question(self):
"""
Questions with a pub_date in the future should not be displayed on the index page.
"""
create_question(question_text="Future question", days=30)
response = self.client.get(reverse('news:index'))
self.assertContains(response, "No news are avalible")
self.assertQuerysetEqual(response.context['latest_question_list'], [])
def test_index_view_with_future_question_and_past_quesiton(self):
create_question(question_text="Past question.", days=-30)
create_question(question_text="Future question.", days=30)
response = self.client.get(reverse('news:index'))
self.assertQuerysetEqual(
response.context['latest_question_list'],
[u'<Question: Past question.>']
)
def test_index_view_with_two_past_question(self):
create_question(question_text="Past question 1.", days=-30)
create_question(question_text="Past question 2.", days=-10)
response = self.client.get(reverse('news:index'))
self.assertQuerysetEqual(
response.context['latest_question_list'],
[u'<Question: Past question 2.>', u'<Question: Past question 1.>']
) | 39.449438 | 94 | 0.679578 | 411 | 3,511 | 5.552311 | 0.216545 | 0.070114 | 0.070114 | 0.079755 | 0.703769 | 0.689746 | 0.640228 | 0.50745 | 0.457055 | 0.457055 | 0 | 0.009065 | 0.214469 | 3,511 | 89 | 95 | 39.449438 | 0.818347 | 0.104813 | 0 | 0.339286 | 0 | 0 | 0.130144 | 0 | 0 | 0 | 0 | 0 | 0.196429 | 1 | 0.160714 | false | 0 | 0.107143 | 0 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caab037d5577b045c5e211a62efa05a95efb90b3 | 1,227 | py | Python | sample/PutRowsWithDataFrame.py | nguyentientungduong/python_client | 83e43e470f26291edb8ce09a99544c80e187f701 | [
"Apache-2.0"
] | 79 | 2018-03-28T14:13:24.000Z | 2021-12-23T18:29:12.000Z | sample/PutRowsWithDataFrame.py | nguyentientungduong/python_client | 83e43e470f26291edb8ce09a99544c80e187f701 | [
"Apache-2.0"
] | 20 | 2018-07-03T18:12:52.000Z | 2022-01-06T03:11:11.000Z | sample/PutRowsWithDataFrame.py | nguyentientungduong/python_client | 83e43e470f26291edb8ce09a99544c80e187f701 | [
"Apache-2.0"
] | 73 | 2018-08-11T09:45:25.000Z | 2021-12-27T03:24:13.000Z | #!/usr/bin/python
import griddb_python as griddb
import sys
import pandas
factory = griddb.StoreFactory.get_instance()
argv = sys.argv
blob = bytearray([65, 66, 67, 68, 69, 70, 71, 72, 73, 74])
update = False
containerName = "SamplePython_PutRows"
try:
# Get GridStore object
gridstore = factory.get_store(host=argv[1], port=int(argv[2]), cluster_name=argv[3], username=argv[4], password=argv[5])
# Create Collection
conInfo = griddb.ContainerInfo(containerName,
[["name", griddb.Type.STRING],
["status", griddb.Type.BOOL],
["count", griddb.Type.LONG],
["lob", griddb.Type.BLOB]],
griddb.ContainerType.COLLECTION, True)
col = gridstore.put_container(conInfo)
print("Create Collection name=", containerName)
# Put rows
rows = pandas.DataFrame([["name01", False, 1, blob], ["name02", False, 1, blob]])
col.put_rows(rows)
print("Put rows with DataFrame")
print("Success!")
except griddb.GSException as e:
for i in range(e.get_error_stack_size()):
print("[", i, "]")
print(e.get_error_code(i))
print(e.get_location(i))
print(e.get_message(i))
| 29.926829 | 124 | 0.620212 | 154 | 1,227 | 4.850649 | 0.532468 | 0.053548 | 0.028112 | 0.040161 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033049 | 0.235534 | 1,227 | 40 | 125 | 30.675 | 0.763326 | 0.05216 | 0 | 0 | 0 | 0 | 0.091458 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.035714 | 0.107143 | 0 | 0.107143 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caab073dbfab127f4fb02cfb6a42eb41a7a18c2b | 443 | py | Python | Python/MaxSumInSlidingWindow.py | ChanderJindal/Data_Structure_and_Algorithms | 8268a5b8be6bc967af41d1985db1224d6c6afc5e | [
"MIT"
] | 14 | 2019-10-26T11:43:56.000Z | 2021-01-23T00:37:17.000Z | Python/MaxSumInSlidingWindow.py | ChanderJindal/Data_Structure_and_Algorithms | 8268a5b8be6bc967af41d1985db1224d6c6afc5e | [
"MIT"
] | 28 | 2019-10-13T17:49:42.000Z | 2020-11-15T07:08:10.000Z | Python/MaxSumInSlidingWindow.py | ChanderJindal/Data_Structure_and_Algorithms | 8268a5b8be6bc967af41d1985db1224d6c6afc5e | [
"MIT"
] | 73 | 2019-10-11T06:38:10.000Z | 2022-01-26T20:04:24.000Z | import math
# this algorithm calculates the maximum sum present in the sublists of length k
# in the array
nums=list(map(int,input("enter the elements of the list\n").split()))
k=int(input("enter the size of the window :"))
cursum=sum(nums[:k])
maxsum= cursum
for i in range(1,len(nums)-k+1): #windows start at indices 1..len(nums)-k
cursum=cursum-nums[i-1]+nums[i+k-1]
maxsum=max(cursum,maxsum)
print(maxsum)
print('assertion:--- ',end=" ")
print(maxsum==maxsumink(nums,k))
| 27.6875 | 79 | 0.704289 | 77 | 443 | 4.051948 | 0.493506 | 0.048077 | 0.083333 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007792 | 0.130926 | 443 | 15 | 80 | 29.533333 | 0.802597 | 0.20316 | 0 | 0 | 0 | 0 | 0.223496 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.272727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caabe05d542fa1d4213fb15dc48b72856aee1f2d | 748 | py | Python | tests/configs/learning-gem5-p1-two-level.py | samgh1230/gem5 | 01c41ec521352ccd6e857aa2bed808d38557b40d | [
"BSD-3-Clause"
] | 3 | 2017-11-19T06:49:54.000Z | 2018-12-26T18:08:11.000Z | tests/configs/learning-gem5-p1-two-level.py | samgh1230/gem5 | 01c41ec521352ccd6e857aa2bed808d38557b40d | [
"BSD-3-Clause"
] | 1 | 2020-08-20T05:53:30.000Z | 2020-08-20T05:53:30.000Z | tests/configs/learning-gem5-p1-two-level.py | samgh1230/gem5 | 01c41ec521352ccd6e857aa2bed808d38557b40d | [
"BSD-3-Clause"
] | 6 | 2016-07-31T18:48:18.000Z | 2022-03-06T22:41:28.000Z |
# A wrapper around configs/learning_gem5/part1/two_level.py
# For some reason, this is implicitly needed by run.py
root = None
import m5
def run_test(root):
# Called from tests/run.py
# Add paths that we need
m5.util.addToPath('../configs/learning_gem5/part1')
m5.util.addToPath('../configs/common')
# The path to this script is the only parameter. Delete it so we can
# execute the script that we want to execute.
import sys
del sys.argv[1:]
# Note: at this point, we could add options we want to test.
# For instance, sys.argv.append('--l2_size=512kB')
# Execute the script we are wrapping
execfile('configs/learning_gem5/part1/two_level.py')
| 29.92 | 76 | 0.653743 | 112 | 748 | 4.303571 | 0.580357 | 0.093361 | 0.118257 | 0.149378 | 0.141079 | 0.141079 | 0.141079 | 0 | 0 | 0 | 0 | 0.02509 | 0.254011 | 748 | 24 | 77 | 31.166667 | 0.83871 | 0.550802 | 0 | 0 | 0 | 0 | 0.267692 | 0.215385 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caaccb98746ecf642948cea481675b0e4f50f8ca | 13,233 | py | Python | src/stactools/landsat/mtl_metadata.py | pjhartzell/landsat | 56c1cdd4343a9ce44607b5f5b33def5278eca3a4 | [
"Apache-2.0"
] | null | null | null | src/stactools/landsat/mtl_metadata.py | pjhartzell/landsat | 56c1cdd4343a9ce44607b5f5b33def5278eca3a4 | [
"Apache-2.0"
] | null | null | null | src/stactools/landsat/mtl_metadata.py | pjhartzell/landsat | 56c1cdd4343a9ce44607b5f5b33def5278eca3a4 | [
"Apache-2.0"
] | null | null | null | from collections import defaultdict
from datetime import datetime
from typing import Any, Dict, List, Optional
from pyproj import Geod
from pystac.utils import str_to_datetime
from stactools.core.io import ReadHrefModifier
from stactools.core.io.xml import XmlElement
from stactools.core.projection import transform_from_bbox
from stactools.core.utils import map_opt
class MTLError(Exception):
pass
class MtlMetadata:
"""Parses a Collection 2 MTL XML file.
References https://github.com/sat-utils/sat-stac-landsat/blob/f2263485043a827b4153aecc12f45a3d1363e9e2/satstac/landsat/main.py#L157
""" # noqa
def __init__(self,
root: XmlElement,
href: Optional[str] = None,
legacy_l8: bool = True):
self._root = root
self.href = href
self.legacy_l8 = legacy_l8
def _xml_error(self, item: str) -> MTLError:
return MTLError(f"Cannot find {item} in MTL metadata" +
("" if self.href is None else f" at {self.href}"))
def _get_text(self, xpath: str) -> str:
return self._root.find_text_or_throw(xpath, self._xml_error)
def _get_float(self, xpath: str) -> float:
return float(self._get_text(xpath))
def _get_int(self, xpath: str) -> int:
return int(self._get_text(xpath))
@property
def satellite_num(self) -> int:
"""Return the Landsat satellite number."""
return int(self.product_id[2:4])
@property
def product_id(self) -> str:
"""Return the Landsat product ID."""
return self._get_text("PRODUCT_CONTENTS/LANDSAT_PRODUCT_ID")
@property
def item_id(self) -> str:
# Remove the processing date, as products IDs
# that only vary by processing date represent the
# same scene
# See "Section 5 - Product Packaging" at
# https://prd-wret.s3.us-west-2.amazonaws.com/assets/palladium/production/atoms/files/LSDS-1619_Landsat8-C2-L2-ScienceProductGuide-v2.pdf # noqa
# ID format: LXSS_LLLL_PPPRRR_YYYYMMDD_yyyymmdd_CX_TX
# remove yyyymmdd
id_parts = self.product_id.split('_')
id = '_'.join(id_parts[:4] + id_parts[-2:])
return id
@property
def scene_id(self) -> str:
""""Return the Landsat scene ID."""
return self._get_text("LEVEL1_PROCESSING_RECORD/LANDSAT_SCENE_ID")
@property
def processing_level(self) -> str:
"""Processing level. Determines product contents.
Returns either 'L2SP' or 'L2SR', standing for
'Level 2 Science Product' and 'Level 2 Surface Reflectance',
respectively. L2SP has thermal + surface reflectance assets;
L2SR only has surface reflectance.
"""
return self._get_text("PRODUCT_CONTENTS/PROCESSING_LEVEL")
@property
def epsg(self) -> int:
utm_zone = self._root.find_text('PROJECTION_ATTRIBUTES/UTM_ZONE')
if utm_zone:
if self.satellite_num == 8 and self.legacy_l8:
# Keep current STAC Item content consistent for Landsat 8
bbox = self.bbox
utm_zone = self._get_text('PROJECTION_ATTRIBUTES/UTM_ZONE')
center_lat = (bbox[1] + bbox[3]) / 2.0
return int(f"{326 if center_lat > 0 else 327}{utm_zone}")
else:
# The projection transforms in the COGs provided by the USGS are
# always for UTM North zones. The EPSG codes should therefore
# be UTM north zones (326XX, where XX is the UTM zone number).
# See: https://www.usgs.gov/faqs/why-do-landsat-scenes-southern-hemisphere-display-negative-utm-values # noqa
utm_zone = self._get_text('PROJECTION_ATTRIBUTES/UTM_ZONE')
return int(f"326{utm_zone}")
else:
# Polar Stereographic
# Based on Landsat 8-9 OLI/TIRS Collection 2 Level 1 Data Format Control Book,
# should only ever be 71 or -71
lat_ts = self._get_text('PROJECTION_ATTRIBUTES/TRUE_SCALE_LAT')
if lat_ts == "-71.00000":
# Antarctic
return 3031
elif lat_ts == "71.00000":
# Arctic
return 3995
else:
raise MTLError(
f'Unexpected value for PROJECTION_ATTRIBUTES/TRUE_SCALE_LAT: {lat_ts} '
)
@property
def bbox(self) -> List[float]:
# Might be cleaner to just transform the proj bbox to WGS84.
lons = [
self._get_float("PROJECTION_ATTRIBUTES/CORNER_UL_LON_PRODUCT"),
self._get_float("PROJECTION_ATTRIBUTES/CORNER_UR_LON_PRODUCT"),
self._get_float("PROJECTION_ATTRIBUTES/CORNER_LL_LON_PRODUCT"),
self._get_float("PROJECTION_ATTRIBUTES/CORNER_LR_LON_PRODUCT")
]
lats = [
self._get_float("PROJECTION_ATTRIBUTES/CORNER_UL_LAT_PRODUCT"),
self._get_float("PROJECTION_ATTRIBUTES/CORNER_UR_LAT_PRODUCT"),
self._get_float("PROJECTION_ATTRIBUTES/CORNER_LL_LAT_PRODUCT"),
self._get_float("PROJECTION_ATTRIBUTES/CORNER_LR_LAT_PRODUCT")
]
geod = Geod(ellps="WGS84")
offset = self.sr_gsd / 2
_, _, bottom_distance = geod.inv(lons[2], lats[2], lons[3], lats[3])
bottom_offset = offset * (lons[3] - lons[2]) / bottom_distance
_, _, top_distance = geod.inv(lons[0], lats[0], lons[1], lats[1])
top_offset = offset * (lons[1] - lons[0]) / top_distance
_, _, lat_distance = geod.inv(lons[0], lats[0], lons[2], lats[2])
lat_offset = offset * (lats[0] - lats[2]) / lat_distance
return [
min(lons) - bottom_offset,
min(lats) - lat_offset,
max(lons) + top_offset,
max(lats) + lat_offset
]
@property
def proj_bbox(self) -> List[float]:
# USGS metadata provide bounds at the center of the pixel, but
# GDAL/rasterio transforms are to edge of pixel.
# https://github.com/stac-utils/stactools/issues/117
offset = self.sr_gsd / 2
xs = [
self._get_float(
"PROJECTION_ATTRIBUTES/CORNER_UL_PROJECTION_X_PRODUCT") -
offset,
self._get_float(
"PROJECTION_ATTRIBUTES/CORNER_UR_PROJECTION_X_PRODUCT") +
offset,
self._get_float(
"PROJECTION_ATTRIBUTES/CORNER_LL_PROJECTION_X_PRODUCT") -
offset,
self._get_float(
"PROJECTION_ATTRIBUTES/CORNER_LR_PROJECTION_X_PRODUCT") +
offset
]
ys = [
self._get_float(
"PROJECTION_ATTRIBUTES/CORNER_UL_PROJECTION_Y_PRODUCT") +
offset,
self._get_float(
"PROJECTION_ATTRIBUTES/CORNER_UR_PROJECTION_Y_PRODUCT") +
offset,
self._get_float(
"PROJECTION_ATTRIBUTES/CORNER_LL_PROJECTION_Y_PRODUCT") -
offset,
self._get_float(
"PROJECTION_ATTRIBUTES/CORNER_LR_PROJECTION_Y_PRODUCT") -
offset
]
return [min(xs), min(ys), max(xs), max(ys)]
@property
def sr_shape(self) -> List[int]:
"""Shape for surface reflectance assets.
Used for proj:shape. In [row, col] order"""
return [
self._get_int("PROJECTION_ATTRIBUTES/REFLECTIVE_LINES"),
self._get_int("PROJECTION_ATTRIBUTES/REFLECTIVE_SAMPLES")
]
@property
def thermal_shape(self) -> Optional[List[int]]:
"""Shape for thermal bands.
None if thermal bands not present.
Used for proj:shape. In [row, col] order"""
rows = map_opt(
int, self._root.find_text("PROJECTION_ATTRIBUTES/THERMAL_LINES"))
cols = map_opt(
int, self._root.find_text("PROJECTION_ATTRIBUTES/THERMAL_SAMPLES"))
if rows is not None and cols is not None:
return [rows, cols]
else:
return None
@property
def sr_transform(self) -> List[float]:
return transform_from_bbox(self.proj_bbox, self.sr_shape)
@property
def thermal_transform(self) -> Optional[List[float]]:
return map_opt(
lambda shape: transform_from_bbox(self.proj_bbox, shape),
self.thermal_shape)
@property
def sr_gsd(self) -> float:
return self._get_float(
"LEVEL1_PROJECTION_PARAMETERS/GRID_CELL_SIZE_REFLECTIVE")
@property
def thermal_gsd(self) -> Optional[float]:
return map_opt(
float,
self._root.find_text(
'LEVEL1_PROJECTION_PARAMETERS/GRID_CELL_SIZE_THERMAL'))
@property
def scene_datetime(self) -> datetime:
date = self._get_text("IMAGE_ATTRIBUTES/DATE_ACQUIRED")
time = self._get_text("IMAGE_ATTRIBUTES/SCENE_CENTER_TIME")
return str_to_datetime(f"{date} {time}")
@property
def cloud_cover(self) -> float:
return self._get_float("IMAGE_ATTRIBUTES/CLOUD_COVER")
@property
def sun_azimuth(self) -> float:
"""Returns the sun azimuth in STAC form.
Converts from Landsat metadata form (-180 to 180 from north, west being
negative) to STAC form (0 to 360 clockwise from north).
Returns:
float: Sun azimuth, 0 to 360 clockwise from north.
"""
azimuth = self._get_float("IMAGE_ATTRIBUTES/SUN_AZIMUTH")
if azimuth < 0.0:
azimuth += 360
return azimuth
@property
def sun_elevation(self) -> float:
return self._get_float("IMAGE_ATTRIBUTES/SUN_ELEVATION")
@property
def off_nadir(self) -> Optional[float]:
if self.satellite_num == 8 and self.legacy_l8:
# Keep current STAC Item content consistent for Landsat 8
if self._get_text("IMAGE_ATTRIBUTES/NADIR_OFFNADIR") == "NADIR":
return 0
else:
return None
else:
# NADIR_OFFNADIR and ROLL_ANGLE xml entries do not exist prior to
# landsat 8. Therefore, we perform a soft check for NADIR_OFFNADIR.
# If exists and is equal to "OFFNADIR", then a non-zero ROLL_ANGLE
# exists. We force this ROLL_ANGLE to be positive to conform with
# the stac View Geometry extension. We return 0 otherwise since
# off-nadir views are only an option on Landsat 8-9.
if self._root.find_text(
"IMAGE_ATTRIBUTES/NADIR_OFFNADIR") == "OFFNADIR":
return abs(self._get_float("IMAGE_ATTRIBUTES/ROLL_ANGLE"))
else:
return 0
@property
def wrs_path(self) -> str:
return self._get_text("IMAGE_ATTRIBUTES/WRS_PATH").zfill(3)
@property
def wrs_row(self) -> str:
return self._get_text("IMAGE_ATTRIBUTES/WRS_ROW").zfill(3)
@property
def landsat_metadata(self) -> Dict[str, Any]:
landsat_meta = {
"landsat:cloud_cover_land":
self._get_float("IMAGE_ATTRIBUTES/CLOUD_COVER_LAND"),
"landsat:wrs_type":
self._get_text("IMAGE_ATTRIBUTES/WRS_TYPE"),
"landsat:wrs_path":
self.wrs_path,
"landsat:wrs_row":
self.wrs_row,
"landsat:collection_category":
self._get_text("PRODUCT_CONTENTS/COLLECTION_CATEGORY"),
"landsat:collection_number":
self._get_text("PRODUCT_CONTENTS/COLLECTION_NUMBER"),
"landsat:correction":
self.processing_level,
"landsat:scene_id":
self.scene_id
}
if self.satellite_num == 8 and self.legacy_l8:
landsat_meta["landsat:processing_level"] = landsat_meta.pop(
"landsat:correction")
return landsat_meta
@property
def level1_radiance(self) -> Dict[str, Any]:
"""Gets the scale (mult) and offset (add) values for generating TOA
radiance from Level-1 DNs.
This is relevant to MSS data, which is only processed to Level-1.
Returns:
Dict[str, Any]: Dict of scale and offset dicts, keyed by band
number.
"""
node = self._root.find_or_throw("LEVEL1_RADIOMETRIC_RESCALING",
self._xml_error)
mult_add: Dict[str, Any] = defaultdict(dict)
for item in node.element:
if item.tag.startswith("RADIANCE_MULT_BAND"):
band = f'B{item.tag.split("_")[-1]}'
mult_add[band]["mult"] = float(str(item.text))
elif item.tag.startswith("RADIANCE_ADD_BAND"):
band = f'B{item.tag.split("_")[-1]}'
mult_add[band]["add"] = float(str(item.text))
return mult_add
@classmethod
def from_file(cls,
href: str,
read_href_modifier: Optional[ReadHrefModifier] = None,
legacy_l8: bool = True) -> "MtlMetadata":
return cls(XmlElement.from_file(href, read_href_modifier),
href=href,
legacy_l8=legacy_l8)
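# --- usage sketch (the MTL file name below is illustrative, not a file from this repo) ---
#
# mtl = MtlMetadata.from_file("LC08_L2SP_047027_20201204_20210313_02_T1_MTL.xml")
# mtl.item_id            # product id with the processing date removed
# mtl.scene_datetime     # acquisition datetime for the STAC item
# mtl.epsg, mtl.bbox     # projection EPSG code and WGS84 bounding box
# mtl.landsat_metadata   # dict of landsat:* STAC properties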
| 37.808571 | 153 | 0.607194 | 1,594 | 13,233 | 4.793601 | 0.220828 | 0.036644 | 0.03455 | 0.046067 | 0.333857 | 0.291323 | 0.216333 | 0.199712 | 0.123806 | 0.096584 | 0 | 0.019047 | 0.297741 | 13,233 | 349 | 154 | 37.916905 | 0.803185 | 0.202448 | 0 | 0.260331 | 0 | 0 | 0.215144 | 0.182567 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119835 | false | 0.004132 | 0.03719 | 0.053719 | 0.309917 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caad7ba8a2fcd067353d0063eebca93ac75d8669 | 2,191 | py | Python | wrappers.py | cross32768/PlaNet_PyTorch | 022baf724b52bf79e610f9d7a31e4195d6be6455 | [
"MIT"
] | 26 | 2019-12-30T06:45:16.000Z | 2022-03-18T02:00:15.000Z | wrappers.py | cross32768/PlaNet_PyTorch | 022baf724b52bf79e610f9d7a31e4195d6be6455 | [
"MIT"
] | 7 | 2019-12-29T14:46:36.000Z | 2020-01-01T09:11:34.000Z | wrappers.py | cross32768/PlaNet_PyTorch | 022baf724b52bf79e610f9d7a31e4195d6be6455 | [
"MIT"
] | 3 | 2020-05-23T17:23:22.000Z | 2020-10-27T18:03:58.000Z | import gym
import numpy as np
from viewer import OpenCVImageViewer
class GymWrapper(object):
"""
Gym interface wrapper for dm_control env wrapped by pixels.Wrapper
"""
metadata = {'render.modes': ['human', 'rgb_array']}
reward_range = (-np.inf, np.inf)
def __init__(self, env):
self._env = env
self._viewer = None
def __getattr__(self, name):
return getattr(self._env, name)
@property
def observation_space(self):
obs_spec = self._env.observation_spec()
return gym.spaces.Box(0, 255, obs_spec['pixels'].shape, dtype=np.uint8)
@property
def action_space(self):
action_spec = self._env.action_spec()
return gym.spaces.Box(action_spec.minimum, action_spec.maximum, dtype=np.float32)
def step(self, action):
time_step = self._env.step(action)
obs = time_step.observation['pixels']
reward = time_step.reward or 0
done = time_step.last()
info = {'discount': time_step.discount}
return obs, reward, done, info
def reset(self):
time_step = self._env.reset()
obs = time_step.observation['pixels']
return obs
def render(self, mode='human', **kwargs):
if not kwargs:
kwargs = self._env._render_kwargs
img = self._env.physics.render(**kwargs)
if mode == 'rgb_array':
return img
elif mode == 'human':
if self._viewer is None:
self._viewer = OpenCVImageViewer()
self._viewer.imshow(img)
return self._viewer.isopen
else:
raise NotImplementedError
class RepeatAction(gym.Wrapper):
"""
Action repeat wrapper that repeats the same action for several consecutive steps
"""
def __init__(self, env, skip=4):
gym.Wrapper.__init__(self, env)
self._skip = skip
def reset(self):
return self.env.reset()
def step(self, action):
total_reward = 0.0
for _ in range(self._skip):
obs, reward, done, info = self.env.step(action)
total_reward += reward
if done:
break
return obs, total_reward, done, info
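# --- usage sketch (assumes dm_control and MuJoCo are installed; the domain/task
# names are illustrative) ---
#
# from dm_control import suite
# from dm_control.suite.wrappers import pixels
# raw_env = suite.load('cheetah', 'run')
# raw_env = pixels.Wrapper(raw_env, render_kwargs={'height': 64, 'width': 64, 'camera_id': 0})
# env = RepeatAction(GymWrapper(raw_env), skip=4)
# obs = env.reset()
# obs, reward, done, info = env.step(env.action_space.sample())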
| 28.089744 | 89 | 0.600639 | 267 | 2,191 | 4.722846 | 0.314607 | 0.072165 | 0.02617 | 0.022205 | 0.079302 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007101 | 0.293017 | 2,191 | 77 | 90 | 28.454545 | 0.806972 | 0.053857 | 0 | 0.140351 | 0 | 0 | 0.03477 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.175439 | false | 0 | 0.052632 | 0.035088 | 0.45614 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cab187b9189478433c28cb351566cfef54a3c790 | 3,175 | py | Python | project/services/views.py | Aleksey-Hugo/Avtoservis-Ulyanovsk-one- | 073cf02eda5efc7ed7598a39bcf62815f2bfeebb | [
"Apache-2.0"
] | 1 | 2021-06-29T13:28:44.000Z | 2021-06-29T13:28:44.000Z | project/services/views.py | Aleksey-Hugo/Avtoservis-Ulyanovsk-one- | 073cf02eda5efc7ed7598a39bcf62815f2bfeebb | [
"Apache-2.0"
] | null | null | null | project/services/views.py | Aleksey-Hugo/Avtoservis-Ulyanovsk-one- | 073cf02eda5efc7ed7598a39bcf62815f2bfeebb | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render
from . import models
from project import additional_scripts as scripts
from django.http import HttpResponse, JsonResponse
from django.template.loader import render_to_string
from django.views.decorators.csrf import csrf_exempt
import re
# Create your views here.
def get_default_data_for_services(request, service_type=None, car_type=None, page=1, method="GET", all_data=True):
data = {}
current_page= page
records_on_page = 20
current_type_name = scripts.Translite(service_type).translite(lang="ru").normalize()
service_type_objects = models.ServiceType.objects.filter(is_active=True)
if car_type:
car_type_object = models.ServicedCar.objects.get(car_type__iexact=scripts.Translite(car_type).translite(lang="ru").normalize())
service_objects = models.Service.objects.filter(service_type=service_type_objects.get(type_name=current_type_name)).filter(serviced_cars=car_type_object)
else:
service_objects = models.Service.objects.filter(service_type=service_type_objects.get(type_name=current_type_name))
count_pages = (service_objects.count() // records_on_page) + 1 if service_objects.count() % records_on_page > 0 else service_objects.count() // records_on_page
if current_page == 1:
service_objects = service_objects[:records_on_page]
elif current_page == count_pages :
service_objects = service_objects[records_on_page * (current_page-1) :]
else:
service_objects = service_objects[records_on_page * (current_page- 1) : records_on_page * current_page]
if method == "GET":
return {"current_type_name":current_type_name,"service_objects":service_objects,\
"count_pages": count_pages, "current_page": current_page, "service_type_objects":service_type_objects}
elif method == "POST":
return {"html": render_to_string("tbody.html", {"service_objects": service_objects, "request": request}), 'count_pages': count_pages, "current_page": current_page, "hrefTextPrefix": re.sub(r"\d+\/$", "", request.path)}
def price_catalog(request):
data = {"service_type_objects": models.ServiceType.objects.filter(is_active=True)}
return render(request, "price-catalog.html", data)
@csrf_exempt
def all_prices(request, service_type, page=1):
if request.method == "GET":
data = get_default_data_for_services(request, service_type=service_type,page=int(page))
return render(request, "prices.html", data)
elif request.method == "POST":
post_data = get_default_data_for_services(request,service_type=service_type,page=int(page), method="POST")
return JsonResponse(post_data)
@csrf_exempt
def prices_by_type_of_car(request, service_type, car_type=None, page=1):
if request.method == "GET":
data = get_default_data_for_services(request, service_type=service_type, car_type=car_type, page=int(page))
return render(request, "prices.html", data)
elif request.method == "POST":
post_data = get_default_data_for_services(request,service_type=service_type,car_type=car_type,page=int(page), method="POST")
return JsonResponse(post_data)
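# --- hypothetical urls.py sketch (routing is not part of this file; the patterns
# below only mirror the view signatures defined above) ---
#
# from django.urls import path
# from . import views
#
# urlpatterns = [
#     path('price-catalog/', views.price_catalog),
#     path('prices/<slug:service_type>/', views.all_prices),
#     path('prices/<slug:service_type>/<int:page>/', views.all_prices),
#     path('prices/<slug:service_type>/<slug:car_type>/', views.prices_by_type_of_car),
#     path('prices/<slug:service_type>/<slug:car_type>/<int:page>/', views.prices_by_type_of_car),
# ]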
| 50.396825 | 226 | 0.748976 | 433 | 3,175 | 5.163972 | 0.187067 | 0.09839 | 0.046512 | 0.059034 | 0.602862 | 0.569767 | 0.501789 | 0.483453 | 0.426655 | 0.426655 | 0 | 0.003662 | 0.139843 | 3,175 | 62 | 227 | 51.209677 | 0.815086 | 0.007244 | 0 | 0.244898 | 0 | 0 | 0.079365 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0 | 0.142857 | 0 | 0.367347 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cab46f38ed7e33ed8da42d9ab11d078861215107 | 11,952 | py | Python | vispy/app/backends/_ipynb_webgl.py | robmcmullen/vispy | 8d5092fdae4a24fc364ae51c7e34e12d3fd6d0a2 | [
"BSD-3-Clause"
] | null | null | null | vispy/app/backends/_ipynb_webgl.py | robmcmullen/vispy | 8d5092fdae4a24fc364ae51c7e34e12d3fd6d0a2 | [
"BSD-3-Clause"
] | null | null | null | vispy/app/backends/_ipynb_webgl.py | robmcmullen/vispy | 8d5092fdae4a24fc364ae51c7e34e12d3fd6d0a2 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2014, Vispy Development Team.
# Distributed under the (new) BSD License. See LICENSE.txt for more info.
"""
Vispy backend for the IPython notebook (WebGL approach).
"""
from __future__ import division
from ..base import (BaseApplicationBackend, BaseCanvasBackend,
BaseTimerBackend)
from ._ipynb_util import create_glir_message
from ...util import logger, keys
from ...ext import six
from vispy.gloo.glir import BaseGlirParser
# Import for displaying Javascript on notebook
import os.path as op
# -------------------------------------------------------------------- init ---
capability = dict( # things that can be set by the backend
title=True, # But it only applies to the dummy window :P
size=True, # We cannot possibly say we don't, because Canvas always sets it
position=True, # Ditto
show=True,
vsync=False,
resizable=True,
decorate=False,
fullscreen=True,
context=True,
multi_window=False,
scroll=True,
parent=False,
)
# Init dummy objects needed to import this module without errors.
# These are all overwritten with imports from IPython (on success)
DOMWidget = object
Unicode = Int = Float = Bool = lambda *args, **kwargs: None
# Try importing IPython
try:
import tornado
import IPython
if IPython.version_info < (2,):
raise RuntimeError('ipynb_webgl backend need IPython version >= 2.0')
from IPython.html.widgets import DOMWidget
from IPython.utils.traitlets import Unicode, Int
from IPython.display import display, Javascript
from IPython.html.nbextensions import install_nbextension
except Exception as exp:
# raise ImportError("The WebGL backend requires IPython >= 2.0")
available, testable, why_not, which = False, False, str(exp), None
else:
available, testable, why_not, which = True, False, None, None
# ------------------------------------------------------------- application ---
def _prepare_js():
pkgdir = op.dirname(__file__)
jsdir = op.join(pkgdir, '../../html/static/js/')
install_nbextension([op.join(jsdir, 'vispy.min.js'),
op.join(jsdir, 'jquery.mousewheel.min.js')])
backend_path = op.join(jsdir, 'webgl-backend.js')
with open(backend_path, 'r') as f:
script = f.read()
display(Javascript(script))
class ApplicationBackend(BaseApplicationBackend):
def __init__(self):
BaseApplicationBackend.__init__(self)
_prepare_js()
def _vispy_reuse(self):
_prepare_js()
def _vispy_get_backend_name(self):
return 'ipynb_webgl'
def _vispy_process_events(self):
# TODO: may be implemented later.
raise NotImplementedError()
def _vispy_run(self):
pass
def _vispy_quit(self):
pass
def _vispy_get_native_app(self):
return self
# ------------------------------------------------------------------ canvas ---
class WebGLGlirParser(BaseGlirParser):
def __init__(self, widget):
self._widget = widget
def is_remote(self):
return True
def convert_shaders(self):
return 'es2'
def parse(self, commands):
self._widget.send_glir_commands(commands)
class CanvasBackend(BaseCanvasBackend):
# args are for BaseCanvasBackend, kwargs are for us.
def __init__(self, *args, **kwargs):
BaseCanvasBackend.__init__(self, *args)
# Maybe to ensure that exactly all arguments are passed?
title, size, position, show, vsync, resize, dec, fs, parent, context, \
= self._process_backend_kwargs(kwargs)
self._context = context
# TODO: do something with context.config
# Take the context.
context.shared.add_ref('webgl', self)
if context.shared.ref is self:
pass # ok
else:
raise RuntimeError("WebGL doesn't yet support context sharing.")
self._create_widget(size=size)
def _create_widget(self, size=None):
self._widget = VispyWidget(self._gen_event, size=size)
# Set glir parser on context and context.shared
context = self._vispy_canvas.context
context.shared.parser = WebGLGlirParser(self._widget)
def _reinit_widget(self):
self._vispy_canvas.set_current()
self._vispy_canvas.events.initialize()
self._vispy_canvas.events.resize(size=(self._widget.width,
self._widget.height))
self._vispy_canvas.events.draw()
def _vispy_warmup(self):
pass
# Uncommenting these makes the backend crash.
def _vispy_set_current(self):
pass
def _vispy_swap_buffers(self):
pass
def _vispy_set_title(self, title):
raise NotImplementedError()
def _vispy_get_fullscreen(self):
# We don't want error messages to show up when the user presses
# F11 to fullscreen the browser.
pass
def _vispy_set_fullscreen(self, fullscreen):
# We don't want error messages to show up when the user presses
# F11 to fullscreen the browser.
pass
def _vispy_get_size(self):
return (self._widget.width, self._widget.height)
def _vispy_set_size(self, w, h):
self._widget.width = w
self._widget.height = h
def _vispy_get_position(self):
raise NotImplementedError()
def _vispy_set_position(self, x, y):
logger.warning('IPython notebook canvas cannot be repositioned.')
def _vispy_set_visible(self, visible):
if not visible:
logger.warning('IPython notebook canvas cannot be hidden.')
else:
display(self._widget)
self._reinit_widget()
def _vispy_update(self):
ioloop = tornado.ioloop.IOLoop.current()
ioloop.add_callback(self._draw_event)
def _draw_event(self):
self._vispy_canvas.set_current()
self._vispy_canvas.events.draw()
def _vispy_close(self):
raise NotImplementedError()
def _vispy_mouse_release(self, **kwds):
# HACK: override this method from the base canvas in order to
# avoid breaking other backends.
kwds.update(self._vispy_mouse_data)
ev = self._vispy_canvas.events.mouse_release(**kwds)
if ev is None:
return
self._vispy_mouse_data['press_event'] = None
# TODO: this is a bit ugly, need to improve mouse button handling in
# app
ev._button = None
self._vispy_mouse_data['buttons'] = []
self._vispy_mouse_data['last_event'] = ev
return ev
# Generate vispy events according to upcoming JS events
_modifiers_map = {
'ctrl': keys.CONTROL,
'shift': keys.SHIFT,
'alt': keys.ALT,
}
def _gen_event(self, ev):
if self._vispy_canvas is None:
return
event_type = ev['type']
key_code = ev.get('key_code', None)
if key_code is None:
key, key_text = None, None
else:
if hasattr(keys, key_code):
key = getattr(keys, key_code)
else:
key = keys.Key(key_code)
# Generate the key text to pass to the event handler.
if key_code == 'SPACE':
key_text = ' '
else:
key_text = six.text_type(key_code)
# Process modifiers.
modifiers = ev.get('modifiers', None)
if modifiers:
modifiers = tuple([self._modifiers_map[modifier]
for modifier in modifiers
if modifier in self._modifiers_map])
if event_type == "mouse_move":
self._vispy_mouse_move(native=ev,
button=ev["button"],
pos=ev["pos"],
modifiers=modifiers,
)
elif event_type == "mouse_press":
self._vispy_mouse_press(native=ev,
pos=ev["pos"],
button=ev["button"],
modifiers=modifiers,
)
elif event_type == "mouse_release":
self._vispy_mouse_release(native=ev,
pos=ev["pos"],
button=ev["button"],
modifiers=modifiers,
)
elif event_type == "mouse_wheel":
self._vispy_canvas.events.mouse_wheel(native=ev,
delta=ev["delta"],
pos=ev["pos"],
button=ev["button"],
modifiers=modifiers,
)
elif event_type == "key_press":
self._vispy_canvas.events.key_press(native=ev,
key=key,
text=key_text,
modifiers=modifiers,
)
elif event_type == "key_release":
self._vispy_canvas.events.key_release(native=ev,
key=key,
text=key_text,
modifiers=modifiers,
)
elif event_type == "resize":
self._vispy_canvas.events.resize(native=ev,
size=ev["size"])
elif event_type == "paint":
self._vispy_canvas.events.draw()
# ------------------------------------------------------------------- Timer ---
class TimerBackend(BaseTimerBackend):
def __init__(self, *args, **kwargs):
super(TimerBackend, self).__init__(*args, **kwargs)
self._timer = tornado.ioloop.PeriodicCallback(
self._vispy_timer._timeout,
1000)
def _vispy_start(self, interval):
self._timer.callback_time = interval * 1000
self._timer.start()
def _vispy_stop(self):
self._timer.stop()
# ---------------------------------------------------------- IPython Widget ---
class VispyWidget(DOMWidget):
_view_name = Unicode("VispyView", sync=True)
width = Int(sync=True)
height = Int(sync=True)
def __init__(self, gen_event, **kwargs):
super(VispyWidget, self).__init__(**kwargs)
w, h = kwargs.get('size', (500, 200))
self.width = w
self.height = h
self.gen_event = gen_event
self.on_msg(self.events_received)
def events_received(self, _, msg):
if msg['msg_type'] == 'events':
events = msg['contents']
for ev in events:
self.gen_event(ev)
def send_glir_commands(self, commands):
# TODO: check whether binary websocket is available (ipython >= 3)
# Until IPython 3.0 is released, use base64.
array_serialization = 'base64'
# array_serialization = 'binary'
if array_serialization == 'base64':
msg = create_glir_message(commands, 'base64')
msg['array_serialization'] = 'base64'
self.send(msg)
elif array_serialization == 'binary':
msg = create_glir_message(commands, 'binary')
msg['array_serialization'] = 'binary'
# Remove the buffers from the JSON message: they will be sent
# independently via binary WebSocket.
buffers = msg.pop('buffers')
self.comm.send({"method": "custom", "content": msg},
buffers=buffers)
| 34.643478 | 79 | 0.556727 | 1,267 | 11,952 | 5.026835 | 0.269929 | 0.027634 | 0.032972 | 0.032972 | 0.196106 | 0.125922 | 0.109593 | 0.089339 | 0.089339 | 0.089339 | 0 | 0.005235 | 0.328732 | 11,952 | 344 | 80 | 34.744186 | 0.788608 | 0.166332 | 0 | 0.191837 | 0 | 0 | 0.060407 | 0.004538 | 0 | 0 | 0 | 0.002907 | 0 | 1 | 0.15102 | false | 0.032653 | 0.053061 | 0.020408 | 0.273469 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cab5e991d555f4eb25c36074f5ce2e555de8a284 | 635 | py | Python | [functions] Time_related.py | 3n5/functions-python | be42f7b2108f8ed481956ce1c1ddc356c4ce17d6 | [
"MIT"
] | 2 | 2020-12-29T06:32:43.000Z | 2020-12-29T06:32:45.000Z | [functions] Time_related.py | h4r3/functions-python | be42f7b2108f8ed481956ce1c1ddc356c4ce17d6 | [
"MIT"
] | null | null | null | [functions] Time_related.py | h4r3/functions-python | be42f7b2108f8ed481956ce1c1ddc356c4ce17d6 | [
"MIT"
] | null | null | null | #Time-related functions 2020/12/22
"""Get elapsed time"""#[関数] プログラムの計測時間の表示
def time_elapsed():
import time
print(__doc__)
start = time.time()
print('== Replace the program you want to time here ==') and time.sleep(1)
end = time.time()
_time=end-start
hour, minute, sec = _time // 3600, (_time % 3600) // 60, _time % 60
print(f'It takes about {hour:.0f} hours {minute:.0f} min {sec:.0f} sec')
return 0
time_elapsed()
"""Get the current time"""
def time_current():
import datetime
print(__doc__)
current=datetime.datetime.today().strftime('%Y%m%d_%H%M%S')
print(current)
time_current() | 28.863636 | 79 | 0.637795 | 93 | 635 | 4.172043 | 0.516129 | 0.061856 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04175 | 0.207874 | 635 | 22 | 80 | 28.863636 | 0.729622 | 0.108661 | 0 | 0.117647 | 0 | 0.058824 | 0.231969 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.117647 | 0 | 0.294118 | 0.294118 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cab6fe7833f5fec0c5468b98e097935105bdd403 | 6,661 | py | Python | tables/save_global_basin_curves_xls.py | thomasfrederikse/sealevelbudget_20c | eb3155da7255cd7e17cd574464e730b6ba16cb7d | [
"FSFAP"
] | 12 | 2020-08-19T21:32:37.000Z | 2022-02-23T12:35:42.000Z | tables/save_global_basin_curves_xls.py | thomasfrederikse/sealevelbudget_20c | eb3155da7255cd7e17cd574464e730b6ba16cb7d | [
"FSFAP"
] | 1 | 2020-12-09T08:51:00.000Z | 2020-12-09T08:51:00.000Z | tables/save_global_basin_curves_xls.py | thomasfrederikse/sealevelbudget_20c | eb3155da7255cd7e17cd574464e730b6ba16cb7d | [
"FSFAP"
] | null | null | null | # ---------------------------------------------------------
# Save global and basin-mean sea-level curves as Excel file
# Single file per basin
# ---------------------------------------------------------
import numpy as np
import os
import pandas as pd
def main():
settings = {}
settings['dir_data'] = os.getenv('HOME') + '/Data/'
settings['dir_budget'] = settings['dir_data'] + 'Budget_20c/'
settings['fn_stats'] = settings['dir_budget'] + 'results/stats_basin_global.npy'
settings['fn_region_ensembles'] = settings['dir_budget']+'region_data/region_ensembles.npy'
settings['fn_output'] = settings['dir_budget'] + 'data_supplement/global_basin_timeseries.xlsx'
settings['years'] = np.arange(1900,2019)
stats = np.load(settings['fn_stats'],allow_pickle=True).all()
write_array = pd.ExcelWriter(settings['fn_output'])
data_global = pd.DataFrame(data=stats['global']['obs']['tseries'], index=settings['years'], columns=['Observed GMSL [lower]','Observed GMSL [mean]','Observed GMSL [upper]'])
data_global['Sum of contributors [lower]'] = stats['global']['budget']['tseries'][:,0]
data_global['Sum of contributors [mean]'] = stats['global']['budget']['tseries'][:,1]
data_global['Sum of contributors [upper]'] = stats['global']['budget']['tseries'][:,2]
data_global['Steric [lower]'] = stats['global']['steric']['tseries'][:,0]
data_global['Steric [mean]'] = stats['global']['steric']['tseries'][:,1]
data_global['Steric [upper]'] = stats['global']['steric']['tseries'][:,2]
data_global['Glaciers [lower]'] = stats['global']['grd_glac']['tseries'][:,0]
data_global['Glaciers [mean]'] = stats['global']['grd_glac']['tseries'][:,1]
data_global['Glaciers [upper]'] = stats['global']['grd_glac']['tseries'][:,2]
data_global['Greenland Ice Sheet [lower]'] = stats['global']['grd_GrIS']['tseries'][:,0]
data_global['Greenland Ice Sheet [mean]'] = stats['global']['grd_GrIS']['tseries'][:,1]
data_global['Greenland Ice Sheet [upper]'] = stats['global']['grd_GrIS']['tseries'][:,2]
data_global['Antarctic Ice Sheet [lower]'] = stats['global']['grd_AIS']['tseries'][:,0]
data_global['Antarctic Ice Sheet [mean]'] = stats['global']['grd_AIS']['tseries'][:,1]
data_global['Antarctic Ice Sheet [upper]'] = stats['global']['grd_AIS']['tseries'][:,2]
data_global['Terrestrial Water Storage [lower]'] = stats['global']['grd_tws']['tseries'][:,0]
data_global['Terrestrial Water Storage [mean]'] = stats['global']['grd_tws']['tseries'][:,1]
data_global['Terrestrial Water Storage [upper]'] = stats['global']['grd_tws']['tseries'][:,2]
data_global['Reservoir impoundment [lower]'] = stats['global']['grd_tws_dam']['tseries'][:,0]
data_global['Reservoir impoundment [mean]'] = stats['global']['grd_tws_dam']['tseries'][:,1]
data_global['Reservoir impoundment [upper]'] = stats['global']['grd_tws_dam']['tseries'][:,2]
data_global['Groundwater depletion [lower]'] = stats['global']['grd_tws_gwd']['tseries'][:,0]
data_global['Groundwater depletion [mean]'] = stats['global']['grd_tws_gwd']['tseries'][:,1]
data_global['Groundwater depletion [upper]'] = stats['global']['grd_tws_gwd']['tseries'][:,2]
data_global['Natural TWS [lower]'] = stats['global']['grd_tws_natural']['tseries'][:,0]
data_global['Natural TWS [mean]'] = stats['global']['grd_tws_natural']['tseries'][:,1]
data_global['Natural TWS [upper]'] = stats['global']['grd_tws_natural']['tseries'][:,2]
data_global['Altimetry [lower]'] = stats['global']['altimetry']['tseries'][:,0]
data_global['Altimetry [mean]'] = stats['global']['altimetry']['tseries'][:,1]
data_global['Altimetry [upper]'] = stats['global']['altimetry']['tseries'][:,2]
data_global.to_excel(write_array, sheet_name='Global')
basin_name = ['Subpolar North Atlantic', 'Indian Ocean - South Pacific', 'Subtropical North Atlantic', 'East Pacific', 'South Atlantic', 'Northwest Pacific']
for basin in range(len(stats['basin'])):
data_basin = pd.DataFrame(data=stats['basin'][basin]['obs']['tseries'], index=settings['years'], columns=['Observed basin-mean sea level [lower]', 'Observed basin-mean sea level [mean]', 'Observed basin-mean sea level [upper]'])
data_basin['Sum of contributors [lower]'] = stats['basin'][basin]['budget']['tseries'][:, 0]
data_basin['Sum of contributors [mean]'] = stats['basin'][basin]['budget']['tseries'][:, 1]
data_basin['Sum of contributors [upper]'] = stats['basin'][basin]['budget']['tseries'][:, 2]
data_basin['Steric [lower]'] = stats['basin'][basin]['steric']['tseries'][:, 0]
data_basin['Steric [mean]'] = stats['basin'][basin]['steric']['tseries'][:, 1]
data_basin['Steric [upper]'] = stats['basin'][basin]['steric']['tseries'][:, 2]
data_basin['Glaciers [lower]'] = stats['basin'][basin]['grd_glac']['tseries'][:, 0]
data_basin['Glaciers [mean]'] = stats['basin'][basin]['grd_glac']['tseries'][:, 1]
data_basin['Glaciers [upper]'] = stats['basin'][basin]['grd_glac']['tseries'][:, 2]
data_basin['Greenland Ice Sheet [lower]'] = stats['basin'][basin]['grd_GrIS']['tseries'][:, 0]
data_basin['Greenland Ice Sheet [mean]'] = stats['basin'][basin]['grd_GrIS']['tseries'][:, 1]
data_basin['Greenland Ice Sheet [upper]'] = stats['basin'][basin]['grd_GrIS']['tseries'][:, 2]
data_basin['Antarctic Ice Sheet [lower]'] = stats['basin'][basin]['grd_AIS']['tseries'][:, 0]
data_basin['Antarctic Ice Sheet [mean]'] = stats['basin'][basin]['grd_AIS']['tseries'][:, 1]
data_basin['Antarctic Ice Sheet [upper]'] = stats['basin'][basin]['grd_AIS']['tseries'][:, 2]
data_basin['Terrestrial Water Storage [lower]'] = stats['basin'][basin]['grd_tws']['tseries'][:, 0]
data_basin['Terrestrial Water Storage [mean]'] = stats['basin'][basin]['grd_tws']['tseries'][:, 1]
data_basin['Terrestrial Water Storage [upper]'] = stats['basin'][basin]['grd_tws']['tseries'][:, 2]
data_basin['GIA [lower]'] = stats['basin'][basin]['gia']['tseries'][:, 0]
data_basin['GIA [mean]'] = stats['basin'][basin]['gia']['tseries'][:, 1]
data_basin['GIA [upper]'] = stats['basin'][basin]['gia']['tseries'][:, 2]
data_basin['Altimetry [lower]'] = stats['basin'][basin]['altimetry']['tseries'][:, 0]
data_basin['Altimetry [mean]'] = stats['basin'][basin]['altimetry']['tseries'][:, 1]
data_basin['Altimetry [upper]'] = stats['basin'][basin]['altimetry']['tseries'][:, 2]
data_basin.to_excel(write_array, sheet_name=basin_name[basin])
write_array.close()
return
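# Sketch only (not part of the original script): the repeated
# "<component> [lower]/[mean]/[upper]" assignments above could also be generated in a
# loop. The (stats key, display name) pairs passed in are assumptions that mirror the
# dictionaries used above.
def add_component_columns(frame, source, components):
    """Add lower/mean/upper columns for each (stats key, display name) pair."""
    for key, name in components:
        for idx, bound in enumerate(['lower', 'mean', 'upper']):
            frame['{} [{}]'.format(name, bound)] = source[key]['tseries'][:, idx]
    return frame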
| 76.563218 | 236 | 0.629035 | 813 | 6,661 | 4.99262 | 0.130381 | 0.078837 | 0.092387 | 0.050259 | 0.575265 | 0.239221 | 0.066519 | 0 | 0 | 0 | 0 | 0.010959 | 0.123255 | 6,661 | 86 | 237 | 77.453488 | 0.684075 | 0.029275 | 0 | 0 | 0 | 0 | 0.449141 | 0.016411 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013333 | false | 0 | 0.04 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cab7a62a6440c568cac6b6e233a81c664d529376 | 3,053 | py | Python | catalog/views.py | OdidiLavender/FoodWasteManagementProject | a5f339de79c9fb78771197089967caaf2ec1db18 | [
"MIT"
] | null | null | null | catalog/views.py | OdidiLavender/FoodWasteManagementProject | a5f339de79c9fb78771197089967caaf2ec1db18 | [
"MIT"
] | null | null | null | catalog/views.py | OdidiLavender/FoodWasteManagementProject | a5f339de79c9fb78771197089967caaf2ec1db18 | [
"MIT"
] | null | null | null | from multiprocessing import context
from django.shortcuts import render, redirect, get_object_or_404
from django.contrib.auth.decorators import login_required
from .models import Food, Order
from .forms import FoodForm
# Create your views here.
def home_view(request, template='index.html'):
products = Food.objects.all()
context = {
'products': products,
}
return render(request, template, context)
@login_required
def add_item(request, template='additem.html'):
if request.method == 'POST':
form = FoodForm(request.POST, request.FILES)
if form.is_valid():
food = Food()
food.user = request.user
food.product_name = form.cleaned_data['product_name']
food.country = form.cleaned_data['country']
food.county = form.cleaned_data['county']
food.location = form.cleaned_data['location']
food.quantity = form.cleaned_data['quantity']
food.price = form.cleaned_data['price']
food.pimage = request.FILES.get('pimage')
food.save()
return redirect('home')
# Re-render the already-bound (invalid) form so its validation errors are shown,
# instead of rebinding from request.POST and losing the uploaded files.
return render(request, template, {'form': form})
form = FoodForm()
return render(request, template, {'form': form})
@login_required
def view_items(request, template='viewOrderItem.html'):
orders = Order.objects.filter(user=request.user.id)
products = Food.objects.all()
context = {
'orders': orders,
'products': products
}
return render(request, template, context)
def order(request, template='viewOrderItem.html'):
orders = Order.objects.filter(user=request.user.id)
order = Order()
if request.method == 'POST':
pk = request.POST.get('pk')
if not Order.objects.filter(product=pk).exists():
food = Food.objects.get(id=pk)
order.user = request.user
order.product = food
try:
order.save()
return redirect('view')
except Exception as e:
error = str(e) + ' Order Failed!!!'  # an Exception cannot be concatenated to a str directly
context = {
'orders': orders,
'error': error
}
return render(request, template, context)
error = 'You have already placed that order'
context = {
'orders': orders,
'error': error
}
return render(request, template, context)
return redirect('home')
def deleteOrder(request, id):
order = get_object_or_404(Order, id=id)
if request.method == 'POST':
if order.user == request.user:
order.delete()
return redirect('view')
return redirect('view')
return redirect('view')
def my_products(request, template='viewOrderItem.html'):
my_products = True
products = Food.objects.filter(user=request.user.id)
context = {
'my_products': my_products,
'products': products
}
return render(request, template, context) | 34.303371 | 65 | 0.603013 | 333 | 3,053 | 5.453453 | 0.249249 | 0.099119 | 0.073238 | 0.104075 | 0.376652 | 0.317181 | 0.232379 | 0.14978 | 0.14978 | 0.14978 | 0 | 0.002735 | 0.281363 | 3,053 | 89 | 66 | 34.303371 | 0.824977 | 0.007534 | 0 | 0.414634 | 0 | 0 | 0.094421 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.060976 | 0 | 0.292683 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cab9ed4e026d9fabb8a1a1bf7202062d7c63720a | 744 | py | Python | InnovationMaps/urls.py | OnlineS3/5.6.-RIS3-innovation-maps | 118c1460ad13f1d56d5228187a0a570b42944cc6 | [
"MIT"
] | null | null | null | InnovationMaps/urls.py | OnlineS3/5.6.-RIS3-innovation-maps | 118c1460ad13f1d56d5228187a0a570b42944cc6 | [
"MIT"
] | null | null | null | InnovationMaps/urls.py | OnlineS3/5.6.-RIS3-innovation-maps | 118c1460ad13f1d56d5228187a0a570b42944cc6 | [
"MIT"
] | null | null | null | """Innovation Maps URL Configuration
The `urlpatterns` list routes URLs to views.
"""
from django.conf.urls import url
from InnovationMaps.views import *
urlpatterns = [
url(r'^about/$', about, name='innovationmaps_about'),
url(r'^guide/$', guide, name='innovationmaps_guide'),
url(r'^pdf/$', pdf, name='innovationmaps_pdf'),
url(r'^template/$', template, name='innovationmaps_template'),
url(r'^example/$', example, name='innovationmaps_example'),
url(r'^related/$', related, name='innovationmaps_related'),
url(r'^map/$', map, name='innovationmaps_map'),
url(r'^getData/$', data, name='innovationmaps_getData'),
url(r'filter/$', filter, name='innovationmaps_filter'),
url(r'^$', about), # Base URL
]
| 39.157895 | 66 | 0.678763 | 90 | 744 | 5.511111 | 0.322222 | 0.080645 | 0.03629 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13172 | 744 | 18 | 67 | 41.333333 | 0.767802 | 0.11828 | 0 | 0 | 0 | 0 | 0.40832 | 0.169492 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cababfdd4a279ee379d1a2e9140e141a7bd6a78b | 2,892 | py | Python | nc2json_HYCOM_GLBy_forecast.py | cyhsu/leaflet-velocity | 3759902e995b0ebd5047c5558bbbf9887d1ebed0 | [
"MIT"
] | null | null | null | nc2json_HYCOM_GLBy_forecast.py | cyhsu/leaflet-velocity | 3759902e995b0ebd5047c5558bbbf9887d1ebed0 | [
"MIT"
] | null | null | null | nc2json_HYCOM_GLBy_forecast.py | cyhsu/leaflet-velocity | 3759902e995b0ebd5047c5558bbbf9887d1ebed0 | [
"MIT"
] | null | null | null | import os, sys, json
import numpy as np
import xarray as xr
from glob import glob
from datetime import datetime
from netCDF4 import Dataset, num2date, date2num
from scipy.interpolate import griddata
#- HYCOM GLBv0.08/latest (daily-mean) present + forecast
#- Detail info: https://www.hycom.org/dataserver/gofs-3pt1/analysis
#- Horizontal Resolution
slice_index = 5
dx, dy = 0.08 * slice_index, 0.04 * slice_index
#- Pre-processing.
Start_Date, End_Date = np.datetime64('today','h'), np.datetime64('today','h') + np.timedelta64(1,'D')
Start_Date, End_Date = np.datetime64('2020-06-29T00:00:00'), np.datetime64('2020-06-30T00:00:00')
link = 'http://tds.hycom.org/thredds/dodsC/GLBy0.08/latest'
nc = Dataset(link);
tm = nc.variables['time']
tim= num2date(tm[:], tm.units)
nc.close()
tid,= np.where((tim>=Start_Date) & (tim<=End_Date))
link = link + '?'\
+ 'time[{:d}:1:{:d}],'.format(tid[0],tid[-1]+1)\
+ 'lat[2200:{:d}:3001],'.format(slice_index)\
+ 'lon[3200:{:d}:4000],'.format(slice_index)\
+ 'water_u[{:d}:1:{:d}][0][2200:{:d}:3001][3200:{:d}:4000],'.format(tid[0],tid[-1]+1,slice_index,slice_index)\
+ 'water_v[{:d}:1:{:d}][0][2200:{:d}:3001][3200:{:d}:4000]'.format(tid[0],tid[-1]+1,slice_index,slice_index)
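# The constraint expression appended above subsets the OPeNDAP request on the server:
# only the time window found via `tid`, every `slice_index`-th lat/lon grid point, and
# the first depth level ([0], i.e. the surface) of water_u/water_v are downloaded.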
df = xr.open_dataset(link, decode_times=False).squeeze().persist()
df['time'] = num2date(df['time'].data, df['time'].units)
output = open('hycom_GLBy0p08_surface_current.json','r').read()
json_templete = json.loads(output)
json_templete[0]['header']['dx'] = dx #- dx of eastward-current
json_templete[0]['header']['dy'] = dy #- dy of eastward-current
json_templete[1]['header']['dx'] = dx #- dx of northward-current
json_templete[1]['header']['dy'] = dy #- dy of northward-current
u0 = df.fillna(0).water_u.mean('time').data[::-1]
v0 = df.fillna(0).water_v.mean('time').data[::-1]
json_templete[0]['header']['la1']=df.lat.data[-1]
json_templete[0]['header']['la2']=df.lat.data[0]
json_templete[0]['header']['lo1']=df.lon.data[0]
json_templete[0]['header']['lo2']=df.lon.data[-1]
json_templete[0]['header']['nx']=df.lon.size
json_templete[0]['header']['ny']=df.lat.size
json_templete[0]['header']['refTime']=str(df.time.mean('time').data)
json_templete[0]['data'] = u0.flatten().tolist()
json_templete[1]['header']['la1']=df.lat.data[-1]
json_templete[1]['header']['la2']=df.lat.data[0]
json_templete[1]['header']['lo1']=df.lon.data[0]
json_templete[1]['header']['lo2']=df.lon.data[-1]
json_templete[1]['header']['nx']=df.lon.size
json_templete[1]['header']['ny']=df.lat.size
json_templete[1]['header']['refTime']=str(df.time.mean('time').data)
json_templete[1]['data'] = v0.flatten().tolist()
with open('hycom_surface_current.json', 'w') as outfile:
outfile.write('[')
json.dump(json_templete[0], outfile)
outfile.write(',')
json.dump(json_templete[1], outfile)
outfile.write(']')
| 40.166667 | 117 | 0.662863 | 463 | 2,892 | 4.034557 | 0.269978 | 0.147752 | 0.076552 | 0.091542 | 0.485011 | 0.398822 | 0.346895 | 0.243041 | 0.110278 | 0.110278 | 0 | 0.065109 | 0.097165 | 2,892 | 71 | 118 | 40.732394 | 0.650326 | 0.089903 | 0 | 0 | 0 | 0.036364 | 0.205412 | 0.065549 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.127273 | 0 | 0.127273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cac27a4fdd875de8d39aa95f71a8e785bf4a097c | 901 | py | Python | lens/models/ext_models/deep_red/arffImporter.py | pietrobarbiero/logic_explained_networks | 238f2a220ae8fc4f31ab0cf12649603aba0285d5 | [
"Apache-2.0"
] | 18 | 2021-05-24T07:47:57.000Z | 2022-01-05T14:48:39.000Z | lens/models/ext_models/deep_red/arffImporter.py | pietrobarbiero/logic_explained_networks | 238f2a220ae8fc4f31ab0cf12649603aba0285d5 | [
"Apache-2.0"
] | 1 | 2021-08-25T16:33:10.000Z | 2021-08-25T16:33:10.000Z | lens/models/ext_models/deep_red/arffImporter.py | pietrobarbiero/deep-logic | 238f2a220ae8fc4f31ab0cf12649603aba0285d5 | [
"Apache-2.0"
] | 2 | 2021-05-26T08:15:14.000Z | 2021-08-23T18:58:16.000Z | import arff as arff
import numpy as np
datasetRepo = 'BreastCancer'
datasetName = 'breast-cancer-wisconsinBinary'
data_list=[]
for row in arff.load('/home/lukas/Uni/AAThesis/Datasets/'+ datasetRepo +'/'+ datasetName +'.arff'):
data_list.append(row)
data_np=np.array(data_list,dtype=float)
#eliminate missing
#if you need to cut the first feature e.g. it is the instance number uncomment the next line
#data_np = data_np[:,1:]
data_Y=data_np[:,-1]
num_insts=data_np.shape[0]
num_pos_examples=int(np.sum(data_Y))
num_neg_examples=num_insts-num_pos_examples
print('Dataset ' + datasetName + ' is being imported...')
print('Number of instances:',num_insts)
print('Number of positive Examples:',num_pos_examples)
print('Number of negative Examples:',num_neg_examples)
np.savetxt('/home/lukas/Uni/AAThesis/DeepRED/data/'+datasetName+'.csv', data_np, delimiter=",",fmt='%d')
print('Import done!') | 30.033333 | 104 | 0.756937 | 140 | 901 | 4.7 | 0.507143 | 0.054711 | 0.06383 | 0.06079 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003681 | 0.09545 | 901 | 30 | 105 | 30.033333 | 0.803681 | 0.145394 | 0 | 0 | 0 | 0 | 0.319426 | 0.131682 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0.277778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cac2971ae42c2e1eedf0e79789b0fb4607584896 | 1,550 | py | Python | clipl/scripts/hadd.py | thomas-mueller/clipl | 4c8c61dd4a09fee6ad2ec65f3baa6854cf9cce69 | [
"MIT"
] | null | null | null | clipl/scripts/hadd.py | thomas-mueller/clipl | 4c8c61dd4a09fee6ad2ec65f3baa6854cf9cce69 | [
"MIT"
] | null | null | null | clipl/scripts/hadd.py | thomas-mueller/clipl | 4c8c61dd4a09fee6ad2ec65f3baa6854cf9cce69 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import clipl.utility.logger as logger
log = logging.getLogger(__name__)
import argparse
import clipl.utility.jsonTools as jsonTools
import clipl.utility.tools as tools
import glob
import os
import shlex
import sys
from clipl.utility.tools import hadd
def main():
parser = argparse.ArgumentParser(description="Wrapper for hadd avoiding too many bash arguments.", parents=[logger.loggingParser])
parser.add_argument("-t", "--target-file", required=True,
help="Target file.")
parser.add_argument("source_files", nargs="+",
help="Source file. Can be either separate files or lists of files separated by whitespaces in one or more arguments.")
parser.add_argument("-a", "--args", default="",
help="Options for hadd. [Default: %(default)s]")
parser.add_argument("-n", "--max-files", default=500, type=int,
help="Maximum number of source files use per hadd call. [Default: %(default)s]")
args = parser.parse_args()
logger.initLogger(args)
source_files = []
for arg in args.source_files:
for item in shlex.split(arg.replace("\"", "")):
matching_files = glob.glob(item)
if len(matching_files) > 0:
source_files.extend(matching_files)
else:
source_files.append(item)
sys.exit(hadd(target_file=args.target_file,
source_files=source_files,
hadd_args=args.args,
max_files=args.max_files))
if __name__ == "__main__":
main()
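# Example invocation (file names and paths are hypothetical):
#   python hadd.py -t merged.root "outputs/*.root" extra1.root extra2.root -n 300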
| 28.703704 | 139 | 0.672903 | 204 | 1,550 | 4.955882 | 0.45098 | 0.087043 | 0.06726 | 0.035608 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004045 | 0.202581 | 1,550 | 53 | 140 | 29.245283 | 0.813916 | 0.027097 | 0 | 0 | 0 | 0.027027 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.27027 | 0 | 0.297297 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cac588be03ffc7ee32885771e6c662fba728e2e2 | 12,135 | py | Python | process.py | Jacob-Zhou/CWS_Dict-rewrite | ef73c2c05acedef6a8e886286e2cd652676c9879 | [
"MIT"
] | null | null | null | process.py | Jacob-Zhou/CWS_Dict-rewrite | ef73c2c05acedef6a8e886286e2cd652676c9879 | [
"MIT"
] | null | null | null | process.py | Jacob-Zhou/CWS_Dict-rewrite | ef73c2c05acedef6a8e886286e2cd652676c9879 | [
"MIT"
] | 1 | 2019-06-23T07:21:05.000Z | 2019-06-23T07:21:05.000Z | import codecs
import collections
from typing import *
import os
import tensorflow as tf
import csv
import tokenization
import numpy as np
import re
import utils
class InputExample(object):
"""A single training/test example for simple sequence classification."""
def __init__(self, guid, text, labels=None):
"""Constructs a InputExample.
Args:
guid: Unique id for the example.
text_a: string. The untokenized text of the first sequence. For single
sequence tasks, only this sequence must be specified.
text_b: (Optional) string. The untokenized text of the second sequence.
Only must be specified for sequence pair tasks.
label: (Optional) string. The label of the example. This should be
specified for train and dev examples, but not for test examples.
"""
self.guid = guid
self.text = text
self.labels = labels
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self, input_ids, input_dicts, label_ids, seq_length):
self.input_ids = input_ids
self.input_dicts = input_dicts
self.seq_length = seq_length
self.label_ids = label_ids
def convert_single_example(ex_index, example: InputExample,
tokenizer, label_map, dict_builder=None):
"""Converts a single `InputExample` into a single `InputFeatures`."""
# label_map = {"B": 0, "M": 1, "E": 2, "S": 3}
# tokens_raw = tokenizer.tokenize(example.text)
tokens_raw = list(example.text)
labels_raw = example.labels
# Account for [CLS] and [SEP] with "- 2"
# The convention in BERT is:
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
#
# Where "type_ids" are used to indicate whether this is the first
# sequence or the second sequence. The embedding vectors for `type=0` and
# `type=1` were learned during pre-training and are added to the wordpiece
# embedding vector (and position vector). This is not *strictly* necessary
# since the [SEP] token unambiguously separates the sequences, but it makes
# it easier for the model to learn the concept of sequences.
#
# For classification tasks, the first vector (corresponding to [CLS]) is
# used as as the "sentence vector". Note that this only makes sense because
# the entire model is fine-tuned.
tokens = []
label_ids = []
for token, label in zip(tokens_raw, labels_raw):
tokens.append(token)
label_ids.append(label_map[label])
input_ids = tokenizer.convert_tokens_to_ids(tokens)
if dict_builder is None:
input_dicts = np.zeros_like(tokens_raw, dtype=np.int64)
else:
input_dicts = dict_builder.extract(tokens)
seq_length = len(tokens)
assert seq_length == len(input_ids)
assert seq_length == len(input_dicts)
assert seq_length == len(label_ids)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
if ex_index < 1:
tf.logging.info("*** Example ***")
tf.logging.info("guid: %s" % example.guid)
tf.logging.info("tokens: %s" % " ".join(
[utils.printable_text(x) for x in tokens]))
tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_dicts]))
tf.logging.info("labels: %s" % " ".join([str(x) for x in example.labels]))
tf.logging.info("labels_ids: %s" % " ".join([str(x) for x in label_ids]))
feature = InputFeatures(
input_ids=input_ids,
input_dicts=input_dicts,
label_ids=label_ids,
seq_length=seq_length)
return feature
def file_based_convert_examples_to_features(
examples, tokenizer, label_map, output_file, dict_builder=None):
"""Convert a set of `InputExample`s to a TFRecord file."""
writer = tf.python_io.TFRecordWriter(output_file)
for (ex_index, example) in enumerate(examples):
if ex_index % 10000 == 0:
tf.logging.info("Writing example %d of %d" % (ex_index, len(examples)))
feature = convert_single_example(ex_index, example, tokenizer, label_map, dict_builder)
def create_int_feature(values):
f = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))
return f
features = collections.OrderedDict()
plain_input_ids = np.array(feature.input_ids).reshape([-1])
features["input_ids"] = create_int_feature(plain_input_ids)
plain_input_dicts = np.array(feature.input_dicts).reshape([-1])
features["input_dicts"] = create_int_feature(plain_input_dicts)
features["seq_length"] = create_int_feature([feature.seq_length])
features["label_ids"] = create_int_feature(feature.label_ids)
tf_example = tf.train.Example(features=tf.train.Features(feature=features))
writer.write(tf_example.SerializeToString())
def convert_words_to_example(guid: int, sentence: List[str]) -> InputExample:
def word2tag(word):
if len(word) == 1:
return ["S"]
if len(word) == 2:
return ["B", "E"]
tag = ["B"]
for i in range(1, len(word) - 1):
tag.append("M")
tag.append("E")
return tag
labels = []
text = ' '.join(''.join(sentence))
for word in sentence:
labels += word2tag(word)
return InputExample(guid, text, labels)
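# Example of the B/M/E/S word-boundary scheme built above (values are illustrative):
# convert_words_to_example(0, ['abc', 'de', 'f']) yields
# text = 'a b c d e f' (space-joined characters) and labels = ['B','M','E','B','E','S'].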
class DataProcessor(object):
"""Base class for data converters for sequence classification data sets."""
def get_train_examples(self, data_dir):
"""Gets a collection of `InputExample`s for the train set."""
raise NotImplementedError()
def get_dev_examples(self, data_dir):
"""Gets a collection of `InputExample`s for the dev set."""
raise NotImplementedError()
def get_test_examples(self, data_dir):
"""Gets a collection of `InputExample`s for prediction."""
raise NotImplementedError()
def get_labels(self):
"""Gets the map of labels for this data set."""
raise NotImplementedError()
@classmethod
def _read_file(cls, input_file):
"""Reads a tab separated value file."""
with open(input_file, "r", encoding="utf-8") as f:
return f.readlines()
class CWSProcessor(DataProcessor):
"""Processor for the XNLI data set."""
def __init__(self):
self.language = "zh"
def get_train_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_file(os.path.join(data_dir, "train")), "train")
def get_dev_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_file(os.path.join(data_dir, "dev")), "dev")
def get_test_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_file(os.path.join(data_dir, "test")), "test")
def get_labels(self):
"""See base class."""
return {"B": 0, "M": 1, "E": 2, "S": 3}
def get_break_ids(self):
return [2, 3]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(lines):
# Only the test set has a header
guid = "%s-%s" % (set_type, i)
text = utils.convert_to_unicode(line.strip())
labels = self._labels_words(text)
text = re.sub(r'\s+', '', text.strip())  # raw string: '\s' in a plain literal is a deprecated escape
examples.append(
InputExample(guid=guid, text=text, labels=labels))
return examples
@staticmethod
def _labels_words(text):
def word2label(w):
if len(w) == 1:
return ["S"]
if len(w) == 2:
return ["B", "E"]
label = ["B"]
for i in range(1, len(w) - 1):
label.append("M")
label.append("E")
return label
words = text.split()
labels = []
for word in words:
labels += word2label(word)
return "".join(labels)
def evaluate_word_PRF(self, y_pred, y):
import itertools
y_pred = list(itertools.chain.from_iterable(y_pred))
y = list(itertools.chain.from_iterable(y))
assert len(y_pred) == len(y)
cor_num = 0
break_ids = self.get_break_ids()
yp_word_num = 0
yt_word_num = 0
for i in break_ids:
yp_word_num += y_pred.count(i)
yt_word_num += y.count(i)
# yp_word_num = y_pred.count(2) + y_pred.count(3)
# yt_word_num = y.count(2) + y.count(3)
start = 0
for i in range(len(y)):
if y[i] in break_ids:
flag = True
for j in range(start, i + 1):
if y[j] != y_pred[j]:
flag = False
break
if flag:
cor_num += 1
start = i + 1
P = cor_num / float(yp_word_num)
R = cor_num / float(yt_word_num)
F = 2 * P * R / (P + R)
return P, R, F
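# In the B/M/E/S label map above (B=0, M=1, E=2, S=3), ids 2 and 3 mark word-final
# positions, so a gold word counts as correct only when every tag from its start up to
# its closing E/S matches the prediction; P/R/F are then computed from the predicted
# and gold word counts.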
def convert_word_segmentation(self, x, y, output_dir, output_file='result.txt'):
if not os.path.exists(output_dir):
os.mkdir(output_dir)
output_file = os.path.join(output_dir, output_file)
f = codecs.open(output_file, 'w', encoding='utf-8')
break_ids = self.get_break_ids()
for i in range(len(x)):
sentence = []
for j in range(len(x[i])):
if y[i][j] in break_ids:
sentence.append(x[i][j])
sentence.append(" ")
else:
sentence.append(x[i][j])
f.write(''.join(sentence).strip() + '\n')
f.close()
class BiLabelProcessor(CWSProcessor):
def get_labels(self):
"""See base class."""
return {"N": 0, "E": 1}
def get_break_ids(self):
return [1]
@staticmethod
def _labels_words(text):
def word2label(w):
if len(w) == 1:
return ["E"]
label = []
for i in range(len(w) - 1):
label.append("N")
label.append("E")
return label
words = text.split()
labels = []
for word in words:
labels += word2label(word)
return "".join(labels)
def evaluate_word_PRF(self, y_pred, y):
import itertools
y_pred = list(itertools.chain.from_iterable(y_pred))
y = list(itertools.chain.from_iterable(y))
assert len(y_pred) == len(y)
cor_num = 0
break_ids = self.get_break_ids()
yp_word_num = 0
yt_word_num = 0
for i in break_ids:
yp_word_num += y_pred.count(i)
yt_word_num += y.count(i)
# yp_word_num = y_pred.count(2) + y_pred.count(3)
# yt_word_num = y.count(2) + y.count(3)
start = 0
len_y = len(y)
for i in range(len_y - 1):
if y_pred[i] == 1 or y_pred[i] == 3:
if y_pred[i + 1] == 1:
y_pred[i + 1] = 3
else:
y_pred[i + 1] = 2
if y[i] == 1 or y[i] == 3:
if y[i + 1] == 1:
y[i + 1] = 3
else:
y[i + 1] = 2
for i in range(len_y):
if y[i] == 1 or y[i] == 3:
flag = True
for j in range(start, i + 1):
if y[j] != y_pred[j]:
flag = False
break
if flag:
cor_num += 1
start = i + 1
P = cor_num / float(yp_word_num)
R = cor_num / float(yt_word_num)
F = 2 * P * R / (P + R)
return P, R, F
| 33.896648 | 95 | 0.567614 | 1,620 | 12,135 | 4.081481 | 0.162346 | 0.01588 | 0.008167 | 0.011645 | 0.37311 | 0.320327 | 0.287205 | 0.271022 | 0.253176 | 0.247429 | 0 | 0.01218 | 0.316687 | 12,135 | 357 | 96 | 33.991597 | 0.785215 | 0.189617 | 0 | 0.421053 | 0 | 0 | 0.026436 | 0 | 0 | 0 | 0 | 0 | 0.020243 | 1 | 0.11336 | false | 0 | 0.048583 | 0.008097 | 0.279352 | 0.004049 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caca10fc98e62440f55454d66c538a4a9f921292 | 807 | py | Python | 0778 Swim in Rising Water.py | MdAbedin/leetcode | e835f2e716ea5fe87f30b84801ede9bc023749e7 | [
"MIT"
] | 4 | 2020-09-11T02:36:11.000Z | 2021-09-29T20:47:11.000Z | 0778 Swim in Rising Water.py | MdAbedin/leetcode | e835f2e716ea5fe87f30b84801ede9bc023749e7 | [
"MIT"
] | 3 | 2020-09-10T03:51:42.000Z | 2021-09-25T01:41:57.000Z | 0778 Swim in Rising Water.py | MdAbedin/leetcode | e835f2e716ea5fe87f30b84801ede9bc023749e7 | [
"MIT"
] | 6 | 2020-09-10T03:46:15.000Z | 2021-09-25T01:24:48.000Z | class Solution:
def swimInWater(self, grid: List[List[int]]) -> int:
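# Idea: raise the water level t by 1 each round and BFS-expand from (0, 0) through
# every cell whose elevation is <= t; cells that are still too high stay on the
# frontier for later rounds. The answer is the first t at which (n-1, n-1) is reached.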
t = 0
seen = set((0,0))
frontier = [(0,0)]
while True:
new_frontier = []
while frontier:
y,x = frontier.pop()
if grid[y][x] <= t:
if (y,x) == (len(grid)-1,len(grid)-1): return t
for ny,nx in [[y+1,x],[y-1,x],[y,x+1],[y,x-1]]:
if 0<=ny<len(grid) and 0<=nx<len(grid) and (ny,nx) not in seen:
seen.add((ny,nx))
frontier.append((ny,nx))
else:
new_frontier.append((y,x))
frontier = new_frontier
t += 1
| 32.28 | 87 | 0.359356 | 96 | 807 | 2.989583 | 0.333333 | 0.041812 | 0.069686 | 0.027875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.496902 | 807 | 24 | 88 | 33.625 | 0.672414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caca13b9f9987b6f832d8d595f3219235635f8e0 | 5,108 | py | Python | GUI/RPi_data.py | leander-dsouza/URC-2019 | 6773e6b66dfb840bdbb4463441e8a855b42b1123 | [
"MIT"
] | 5 | 2020-05-10T11:03:48.000Z | 2022-01-17T07:00:40.000Z | GUI/RPi_data.py | leander-dsouza/URC-2019 | 6773e6b66dfb840bdbb4463441e8a855b42b1123 | [
"MIT"
] | null | null | null | GUI/RPi_data.py | leander-dsouza/URC-2019 | 6773e6b66dfb840bdbb4463441e8a855b42b1123 | [
"MIT"
] | 3 | 2020-07-13T14:11:12.000Z | 2022-01-07T18:05:05.000Z | from PyQt5 import QtCore, QtWidgets, QtGui
import pyqtgraph as pg
import numpy as np
import socket
import pyproj
from pyqtgraph import functions as fn
from gps3 import gps3
import sys
from PyQt5.QtWidgets import QMainWindow, QApplication, QWidget, QPushButton, QAction, QLineEdit, QMessageBox
from PyQt5.QtGui import QIcon
from PyQt5.QtCore import pyqtSlot
import cv2
from PIL import Image
g = pyproj.Geod(ellps='WGS84')
TCP_IP = '192.168.1.70'
TCP_PORT = 5005
transmit = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
transmit.connect((TCP_IP, TCP_PORT))
# img = Image.open( "logo.tiff" )
# img = img.rotate(-90)
# img = img.transpose(Image.FLIP_LEFT_RIGHT)
# img.load()
# logo = np.asarray( img, dtype="int32" )
x=[]
y=[]
endlat=[13.3500460]
endlon=[74.7916420]
#endlat=[13.3498887]
#endlon=[74.7915768]
#endlat=[13.3503957]
#endlon=[74.7915252]
#print(endlat,endlon)
#endlat=[13.3506374]
#endlon=[74.7917847]
# endlat=[13.3501817]
# endlon=[74.7912211]
# endlat=[13.3496607]
# endlon=[74.7915165]
def get_heading(longitude, latitude):
global endlat, endlon
(az12, az21, dist) = g.inv(longitude, latitude, endlon, endlat)
if az12<0:
az12=az12+360
return az12, dist
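# az12 is the forward azimuth from the rover position to (endlon, endlat), normalised
# to [0, 360) degrees; dist is the geodesic distance in metres on the WGS84 ellipsoid.
# Illustrative call: get_heading(74.7916, 13.3500) -> (bearing_deg, distance_m).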
class CenteredArrowItem(pg.ArrowItem):
def setStyle(self, **opts):
# http://www.pyqtgraph.org/documentation/_modules/pyqtgraph/graphicsItems/ArrowItem.html#ArrowItem.setStyle
self.opts.update(opts)
opt = dict([(k,self.opts[k]) for k in ['headLen', 'tipAngle', 'baseAngle', 'tailLen', 'tailWidth']])
tr = QtGui.QTransform()
path = fn.makeArrowPath(**opt)
tr.rotate(self.opts['angle'])
p = -path.boundingRect().center()
tr.translate(p.x(), p.y())
self.path = tr.map(path)
self.setPath(self.path)
self.setPen(fn.mkPen(self.opts['pen']))
self.setBrush(fn.mkBrush(self.opts['brush']))
if self.opts['pxMode']:
self.setFlags(self.flags() | self.ItemIgnoresTransformations)
else:
self.setFlags(self.flags() & ~self.ItemIgnoresTransformations)
class MyWidget(pg.GraphicsWindow):
def __init__(self, parent=None):
super(MyWidget, self).__init__(parent=parent)
self.mainLayout = QtWidgets.QVBoxLayout()
self.setLayout(self.mainLayout)
self.timer = QtCore.QTimer(self)
self.timer.setInterval(100) # in milliseconds
self.timer.start()
self.timer.timeout.connect(self.onNewData)
self.plotItem = self.addPlot(title="GPS Plotting")
self.plotItem.setAspectLocked(lock=True, ratio=1)
self.plotDataItem = self.plotItem.plot([], size=0, pen=None, symbolPen=None, symbolSize=10, symbol='o', symbolBrush=(255,255,255,10))
self.plotDataItem1 = self.plotItem.plot([0,0], size=10, pen=None,symbol='o', symbolBrush=(255,0,0,255))
#self.plotDataItem1.addLegend()
#legend.setParentItem(plotItem)
self.arrow = CenteredArrowItem(angle=0, tipAngle=40, baseAngle=50, headLen=80, tailLen=None, brush=None)
# self.proxy = QtGui.QGraphicsProxyWidget()
# self.im1 = pg.ImageView()
# self.im1.setImage(logo)
# self.proxy.setWidget(self.im1)
# self.addItem(self.proxy)
# self.im1.ui.histogram.hide()
# self.im1.ui.menuBtn.hide()
# self.im1.ui.roiBtn.hide()
# self.proxy1 = QtGui.QGraphicsProxyWidget()
# self.textbox = QLineEdit(self)
# self.proxy1.setWidget(self.textbox)
# self.addItem(self.proxy1)
def setData(self, x, y):
global endlat,endlon
self.plotDataItem.setData(x[len(x)-1000:], y[len(x)-1000:])
f=open('endlat.txt','r+')
endlat=(f.read())
f.close()
f=open('endlon.txt','r+')
endlon=(f.read())
f.close()
endlon=[float(endlon[1:].partition(",")[0])]
endlat=[float(endlat[1:].partition(",")[0])]
self.plotDataItem1.setData(endlon,endlat, size=10, pen=None,symbol='o', symbolBrush=(255,0,0,255))
def onNewData(self):
global x,y,endlat,endlon
latitude, longitude, angle = transmit.recv(1024).decode().split(',')  # recv() returns bytes on Python 3
transmit.send(b"a")  # socket.send() expects bytes
print(latitude,longitude,angle)
y.append(latitude)
x.append(longitude)
self.setData(x, y)
self.plotItem.removeItem(self.arrow)
self.arrow = CenteredArrowItem(angle=int(angle)/2+45, tipAngle=40, baseAngle=40, headLen=40, tailLen=None, brush=None)
adjusted_angle,distance = get_heading(longitude,latitude)
self.plotItem.setTitle('Distance: '+str(round(distance[0],3))+' Angle: '+str(int(adjusted_angle)-int(angle)))
self.arrow.setPos(float(longitude),float(latitude))
self.plotItem.addItem(self.arrow)
#self.scene.addLine(QLineF(x1, y1, x2, y2))
def main():
app = QtWidgets.QApplication([])
pg.setConfigOptions(antialias=True) # True seems to work as well
win = MyWidget()
win.show()
win.resize(800,600)
win.raise_()
app.exec_()
if __name__ == "__main__":
main()
| 31.530864 | 141 | 0.646045 | 642 | 5,108 | 5.0919 | 0.386293 | 0.017131 | 0.016519 | 0.019272 | 0.055063 | 0.055063 | 0.023861 | 0.023861 | 0.023861 | 0.023861 | 0 | 0.060309 | 0.201449 | 5,108 | 161 | 142 | 31.726708 | 0.741113 | 0.193226 | 0 | 0.042105 | 0 | 0 | 0.035985 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063158 | false | 0 | 0.136842 | 0 | 0.231579 | 0.010526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cacf9a2cc70649787fc954ebbb125da02d448c66 | 3,839 | py | Python | index/DiskIndex.py | pythonlittleboy/python_gentleman_crawler | 751b624d22a5024746c256080ea0815a9986e3d7 | [
"Apache-2.0"
] | 1 | 2017-05-03T12:18:31.000Z | 2017-05-03T12:18:31.000Z | index/DiskIndex.py | pythonlittleboy/python_gentleman_crawler | 751b624d22a5024746c256080ea0815a9986e3d7 | [
"Apache-2.0"
] | null | null | null | index/DiskIndex.py | pythonlittleboy/python_gentleman_crawler | 751b624d22a5024746c256080ea0815a9986e3d7 | [
"Apache-2.0"
] | 1 | 2020-10-29T04:00:04.000Z | 2020-10-29T04:00:04.000Z | import os
import index.MovieDAO as movieDAO
from pprint import pprint
from index import SysConst
import shutil
def getAllMovies(path):
movieTypes = set(["avi", "mp4", "mkv", "rmvb", "wmv", "txt"])
results = []
for fpath, dirs, fs in os.walk(path):
for filename in fs:
fullpath = os.path.join(fpath, filename)
suffix = filename[-3:]
if filename[0:1] != "." and len(filename) > 4 and suffix in movieTypes:
# print(fullpath + " | " + filename)
result = {"fullpath": fullpath, "filename": filename}
results.append(result)
return results
def getAllImages(path):
movieTypes = set(["jpg"])
results = []
for fpath, dirs, fs in os.walk(path):
for filename in fs:
fullpath = os.path.join(fpath, filename)
suffix = filename[-3:]
if filename[0:1] != "." and len(filename) > 4 and suffix in movieTypes:
# print(fullpath + " | " + filename)
result = {"fullpath": fullpath, "filename": filename}
results.append(result)
return results
def getMovies(path):
movieTypes = set(["avi", "mp4", "mkv", "rmvb", "wmv"])
results = []
for fpath, dirs, fs in os.walk(path):
for filename in fs:
fullpath = os.path.join(fpath, filename)
suffix = filename[-3:]
if filename[0:1] != "." and len(filename) > 4 and suffix in movieTypes:
# print(fullpath + " | " + filename)
result = {"fullpath": fullpath, "filename": filename}
results.append(result)
return results
def getTxts(path):
movieTypes = set(["txt"])
results = []
for fpath, dirs, fs in os.walk(path):
for filename in fs:
fullpath = os.path.join(fpath, filename)
suffix = filename[-3:]
if filename[0:1] != "." and len(filename) > 4 and suffix in movieTypes:
# print(fullpath + " | " + filename)
result = {"fullpath": fullpath, "filename": filename}
results.append(result)
return results
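# Sketch only (not in the original module): the four directory walkers above differ
# only in their extension sets, so they could share a single helper like the one
# below. The helper name is an assumption, not existing project code.
def getFilesByTypes(path, extensions):
    results = []
    for fpath, dirs, fs in os.walk(path):
        for filename in fs:
            suffix = filename[-3:]
            if filename[0:1] != "." and len(filename) > 4 and suffix in extensions:
                results.append({"fullpath": os.path.join(fpath, filename), "filename": filename})
    return results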
def findLocalMovies(avList, allFiles):
# allFiles = getAllMovies(path)
for av in avList:
avNumber = av["av_number"].lower()
for file in allFiles:
filename = file["filename"]
if filename.lower().find(avNumber) > -1:
av["local_movie"] = file["fullpath"]
break
return avList
def deleteSmallImages(path):
allImages = getAllImages(path)
for image in allImages:
size = os.path.getsize(image["fullpath"])
if size < 5000:
print("delete " + image["fullpath"])
os.remove(image["fullpath"])
def copyImageToTemp(movieNumbers):
if not os.path.exists(SysConst.getImageTempPath()):
os.mkdir(SysConst.getImageTempPath())
allImages = getAllImages(SysConst.getImageCachePath())
for num in movieNumbers:
for image in allImages:
if num in image["fullpath"]:
shutil.copy(image["fullpath"], SysConst.getImageTempPath() + image["filename"])
break
def copyOneImageToTemp(actor, avNumber):
if not os.path.exists(SysConst.getImageTempPath()):
os.mkdir(SysConst.getImageTempPath())
source = SysConst.getImageCachePath() + actor + "//" + avNumber + ".jpg"
to = SysConst.getImageTempPath() + avNumber + ".jpg"
shutil.copy(source, to)
# avList = [{"av_number": "ABS-072"}]
# pprint(findLocalMovies(avList=avList, path="G://Game//File//"))
#deleteSmallImages(SysConst.getImageCachePath())
#copyImageToTemp(["ABS-072"])
#copyOneImageToTemp("阿部乃美久", "ARMG-274") | 33.675439 | 96 | 0.567856 | 395 | 3,839 | 5.511392 | 0.217722 | 0.058797 | 0.031236 | 0.03491 | 0.532843 | 0.532843 | 0.532843 | 0.532843 | 0.502526 | 0.502526 | 0 | 0.011918 | 0.300599 | 3,839 | 114 | 97 | 33.675439 | 0.798883 | 0.099766 | 0 | 0.55 | 0 | 0 | 0.063006 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.0625 | 0 | 0.225 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cad081edcdd3a2dbd2df62ac950a790986e0574a | 30,555 | py | Python | mmaction/models/recognizers/recognizer3d_semi_appsup_tempsup_simclr_crossclip_ptv.py | lambert-x/video_semisup | 8ff44343bb34485f8ad08d50ca4d8de22e122c1d | [
"Apache-2.0"
] | null | null | null | mmaction/models/recognizers/recognizer3d_semi_appsup_tempsup_simclr_crossclip_ptv.py | lambert-x/video_semisup | 8ff44343bb34485f8ad08d50ca4d8de22e122c1d | [
"Apache-2.0"
] | null | null | null | mmaction/models/recognizers/recognizer3d_semi_appsup_tempsup_simclr_crossclip_ptv.py | lambert-x/video_semisup | 8ff44343bb34485f8ad08d50ca4d8de22e122c1d | [
"Apache-2.0"
] | null | null | null | import torch
import torch.nn.functional as F
import torch.nn as nn
# Custom imports
from .base_semi_apppearance_temporal_simclr import SemiAppTemp_SimCLR_BaseRecognizer
from ..builder import RECOGNIZERS
from ..losses import CosineSimiLoss
from ...utils import GatherLayer
@RECOGNIZERS.register_module()
class Semi_AppSup_TempSup_SimCLR_Crossclip_PTV_Recognizer3D(SemiAppTemp_SimCLR_BaseRecognizer):
"""Semi-supervised 3D recognizer model framework."""
def forward_train(self, imgs, labels, imgs_weak, imgs_strong, labels_unlabeled, imgs_appearance,
imgs_diff_labeled, imgs_diff_weak, imgs_diff_strong, cur_epoch=None):
"""Defines the computation performed at every call when training."""
clips_per_video = imgs.shape[1]
bz_labeled = imgs.shape[0] * imgs.shape[1]
bz_unlabeled = imgs_weak.shape[0] * imgs_weak.shape[1]
img_shape_template = imgs.shape[2:]
imgs = imgs.transpose(0, 1).reshape((-1,) + img_shape_template)
imgs_weak = imgs_weak.transpose(0, 1).reshape((-1,) + img_shape_template)
imgs_strong = imgs_strong.transpose(0, 1).reshape((-1,) + img_shape_template)
imgs_diff_shape_template = imgs_diff_labeled.shape[2:]
imgs_diff_labeled = imgs_diff_labeled.transpose(0, 1).reshape((-1,) + imgs_diff_shape_template)
imgs_diff_weak = imgs_diff_weak.transpose(0, 1).reshape((-1,) + imgs_diff_shape_template)
imgs_diff_strong = imgs_diff_strong.transpose(0, 1).reshape((-1,) + imgs_diff_shape_template)
imgs_all = torch.cat([imgs, imgs_weak, imgs_strong], dim=0)
imgs_diff_all = torch.cat([imgs_diff_labeled, imgs_diff_weak, imgs_diff_strong], dim=0)
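# Batch layout after the concatenations above: [labeled | weak-augmented unlabeled |
# strong-augmented unlabeled] along dim 0. Within each group the clip dimension was
# folded into the batch by the transpose/reshape, so all first clips precede all
# second clips.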
# TODO: If we forwarded imgs_weak with no_grad and then jointly forwarded imgs and
# imgs_strong, we might save enough memory to allow a larger batch size, but it is
# unclear whether this would have a negative impact on the batch-norm statistics.
# imgs_all = imgs_all.reshape((-1,) + imgs_all.shape[2:])
# imgs_diff_all = imgs_diff_all.reshape((-1,) + imgs_diff_all.shape[2:])
labels = labels.transpose(0, 1).reshape((-1, 1))
# print(labels)
if self.temp_align_indices is not None:
vid_features = self.extract_feat(imgs_all)
vid_feature = vid_features[-1]
imgs_diff_features = self.temp_backbone(imgs_diff_all)
imgs_diff_feature = imgs_diff_features[-1]
else:
vid_feature = self.extract_feat(imgs_all)
imgs_diff_feature = self.temp_backbone(imgs_diff_all)
# cls_score = self.cls_head(vid_feature)
cls_score = dict()
# cls_score_rgb, cls_score_diff = torch.split(cls_score_concat, bz_labeled+2*bz_unlabeled, dim=0)
cls_score['rgb'] = self.cls_head(vid_feature)
cls_score['diff'] = self.temp_sup_head(imgs_diff_feature)
batch_total_len = bz_labeled + bz_unlabeled
# NOTE: pre-softmx logit
cls_score_labeled = dict()
cls_score_weak = dict()
cls_score_strong = dict()
for view in ['rgb', 'diff']:
cls_score_labeled[view] = cls_score[view][:bz_labeled, :]
if self.loss_clip_selection == 0:
cls_score_weak[view] = cls_score[view][bz_labeled:(bz_labeled + bz_unlabeled // 2), :]
cls_score_strong[view] = cls_score[view][
(bz_labeled + bz_unlabeled):(bz_labeled + bz_unlabeled + bz_unlabeled // 2), :]
else:
cls_score_weak[view] = cls_score[view][bz_labeled:bz_labeled + bz_unlabeled, :]
cls_score_strong[view] = cls_score[view][bz_labeled + bz_unlabeled:bz_labeled + 2 * bz_unlabeled, :]
loss = dict()
if 'weak' in self.crossclip_contrast_range and 'strong' in self.crossclip_contrast_range:
query_index = torch.cat([torch.arange(bz_labeled // clips_per_video), torch.arange(bz_unlabeled // clips_per_video) + bz_labeled,
torch.arange(bz_unlabeled // clips_per_video) + bz_labeled + bz_unlabeled])
key_mask = torch.ones(bz_labeled + 2 * bz_unlabeled)
key_mask[query_index] = 0
key_index = torch.arange(bz_labeled + 2 * bz_unlabeled)[key_mask.bool()]
elif 'weak' in self.crossclip_contrast_range:
query_index = torch.cat([torch.arange(bz_labeled // clips_per_video), torch.arange(bz_unlabeled // clips_per_video) + bz_labeled])
key_mask = torch.ones(bz_labeled + bz_unlabeled)
key_mask[query_index] = 0
key_index = torch.arange(bz_labeled + bz_unlabeled)[key_mask.bool()]
elif 'strong' in self.crossclip_contrast_range:
query_index = torch.arange(bz_unlabeled // clips_per_video) + bz_labeled + bz_unlabeled
key_index = query_index + (bz_unlabeled // clips_per_video)
else:
pass
# print(query_index, key_index)
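# query_index selects the first clip of each video in the chosen groups; key_index
# selects the remaining clips, so the contrastive heads below compare features coming
# from different clips of the same video (optionally across the RGB and
# temporal-gradient views).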
if 'rgb' in self.crossclip_contrast_loss:
contrast_rgb_feature = self.contrast_head_rgb(vid_feature)
rgb_query = contrast_rgb_feature[query_index]
rgb_key = contrast_rgb_feature[key_index]
loss['loss_rgb_contrast'] = self.contrast_loss_weight * \
self.contrast_head_rgb.loss([rgb_query, rgb_key])
if 'tg' in self.crossclip_contrast_loss:
contrast_tg_feature = self.contrast_head_tg(imgs_diff_feature)
tg_query = contrast_tg_feature[query_index]
tg_key = contrast_tg_feature[key_index]
loss['loss_tg_contrast'] = self.contrast_loss_weight * \
self.contrast_head_tg.loss([tg_query, tg_key])
if 'crossview' in self.crossclip_contrast_loss:
contrast_rgb_feature = self.contrast_head_rgb(vid_feature)
rgb_query = contrast_rgb_feature[query_index]
rgb_key = contrast_rgb_feature[key_index]
contrast_tg_feature = self.contrast_head_tg(imgs_diff_feature)
tg_query = contrast_tg_feature[query_index]
tg_key = contrast_tg_feature[key_index]
embedding_rgb1_tg2 = [rgb_query, tg_key]
embedding_rgb2_tg1 = [rgb_key, tg_query]
loss['loss_crossview_contrast'] = self.contrast_loss_weight * (
self.contrast_head_rgb.loss(embedding_rgb1_tg2) +
self.contrast_head_tg.loss(embedding_rgb2_tg1)) / 2
if 'crossview_sharedhead' in self.crossclip_contrast_loss:
contrast_rgb_feature = self.contrast_head_shared(vid_feature)
rgb_query = contrast_rgb_feature[query_index]
rgb_key = contrast_rgb_feature[key_index]
contrast_tg_feature = self.contrast_head_shared(imgs_diff_feature)
tg_query = contrast_tg_feature[query_index]
tg_key = contrast_tg_feature[key_index]
embedding_rgb1_tg2 = [rgb_query, tg_key]
embedding_rgb2_tg1 = [rgb_key, tg_query]
loss['loss_crossview_contrast'] = self.contrast_loss_weight * (
self.contrast_head_shared.loss(embedding_rgb1_tg2) +
self.contrast_head_shared.loss(embedding_rgb2_tg1)) / 2
if 'crossview_sameclip' in self.crossclip_contrast_loss:
if clips_per_video > 1:
contrast_rgb_feature = self.contrast_head_rgb(vid_feature)
rgb_query = contrast_rgb_feature[query_index]
rgb_key = contrast_rgb_feature[key_index]
contrast_tg_feature = self.contrast_head_tg(imgs_diff_feature)
tg_query = contrast_tg_feature[query_index]
tg_key = contrast_tg_feature[key_index]
embedding_rgb1_tg1 = [rgb_query, tg_query]
embedding_rgb2_tg2 = [rgb_key, tg_key]
loss['loss_crossview_contrast'] = self.contrast_loss_weight * (
self.contrast_head_rgb.loss(embedding_rgb1_tg1) +
self.contrast_head_tg.loss(embedding_rgb2_tg2)) / 2
else:
contrast_rgb_feature = self.contrast_head_rgb(vid_feature)
rgb_query = contrast_rgb_feature[query_index]
contrast_tg_feature = self.contrast_head_tg(imgs_diff_feature)
tg_query = contrast_tg_feature[query_index]
embedding_rgb1_tg1 = [rgb_query, tg_query]
loss['loss_crossview_contrast'] = self.contrast_loss_weight * \
self.contrast_head_rgb.loss(embedding_rgb1_tg1)
if 'crossview_sameclip_sharedhead' in self.crossclip_contrast_loss:
if clips_per_video > 1:
contrast_rgb_feature = self.contrast_head_shared(vid_feature)
rgb_query = contrast_rgb_feature[query_index]
rgb_key = contrast_rgb_feature[key_index]
contrast_tg_feature = self.contrast_head_shared(imgs_diff_feature)
tg_query = contrast_tg_feature[query_index]
tg_key = contrast_tg_feature[key_index]
embedding_rgb1_tg1 = [rgb_query, tg_query]
embedding_rgb2_tg2 = [rgb_key, tg_key]
loss['loss_crossview_contrast'] = self.contrast_loss_weight * (
self.contrast_head_shared.loss(embedding_rgb1_tg1) +
self.contrast_head_shared.loss(embedding_rgb2_tg2)) / 2
else:
contrast_rgb_feature = self.contrast_head_shared(vid_feature)
rgb_query = contrast_rgb_feature[query_index]
contrast_tg_feature = self.contrast_head_shared(imgs_diff_feature)
tg_query = contrast_tg_feature[query_index]
embedding_rgb1_tg1 = [rgb_query, tg_query]
loss['loss_crossview_contrast'] = self.contrast_loss_weight * \
self.contrast_head_shared.loss(embedding_rgb1_tg1)
if 'crossview_crossclip_densecl' in self.crossclip_contrast_loss:
contrast_rgb_feature = self.contrast_densecl_head(vid_feature)
rgb_query_backbone = vid_feature[query_index]
rgb_key_backbone = vid_feature[key_index]
rgb_query = contrast_rgb_feature[query_index]
rgb_key = contrast_rgb_feature[key_index]
contrast_tg_feature = self.contrast_densecl_head(imgs_diff_feature)
tg_query_backbone = imgs_diff_feature[query_index]
tg_key_backbone = imgs_diff_feature[key_index]
tg_query = contrast_tg_feature[query_index]
tg_key = contrast_tg_feature[key_index]
n = rgb_query.size(0)
c = vid_feature.size(1)
loss['loss_crossview_dense_contrast'] = list()
for rgb_grid, tg_grid, rgb_grid_b, tg_grid_b in \
[(rgb_query, tg_key, rgb_query_backbone, tg_key_backbone),
(rgb_key, tg_query, rgb_key_backbone, tg_query_backbone)]:
rgb_grid = rgb_grid.view(n, c, -1)
rgb_grid = nn.functional.normalize(rgb_grid, dim=1)
tg_grid = tg_grid.view(n, c, -1)
tg_grid = nn.functional.normalize(tg_grid, dim=1)
rgb_grid_b = rgb_grid_b.view(rgb_grid_b.size(0), rgb_grid_b.size(1), -1)
tg_grid_b = tg_grid_b.view(tg_grid_b.size(0), tg_grid_b.size(1), -1)
rgb_grid_b = nn.functional.normalize(rgb_grid_b, dim=1)
tg_grid_b = nn.functional.normalize(tg_grid_b, dim=1)
# print(rgb_grid_b.shape)
# print(tg_grid_b.shape)
backbone_sim_matrix = torch.matmul(rgb_grid_b.permute(0, 2, 1), tg_grid_b)
densecl_sim_ind = backbone_sim_matrix.max(dim=2)[1] # NxS^2
indexed_tg_grid = torch.gather(tg_grid, 2,
densecl_sim_ind.unsqueeze(1).expand(-1, rgb_grid.size(1), -1))
rgb_embedding = rgb_grid.view(rgb_grid.size(0), rgb_grid.size(1), -1).permute(0, 2, 1).reshape(-1, c)
tg_embedding = indexed_tg_grid.view(indexed_tg_grid.size(0), indexed_tg_grid.size(1), -1).permute(0, 2, 1).reshape(-1, c)
contrastive_embedding = [rgb_embedding, tg_embedding]
loss['loss_crossview_dense_contrast'].append(self.contrast_densecl_head.loss(contrastive_embedding))
loss['loss_crossview_dense_contrast'] = torch.mean(torch.stack(loss['loss_crossview_dense_contrast']))
if 'sameview_crossclip_densecl' in self.crossclip_contrast_loss:
contrast_rgb_feature = self.contrast_densecl_head(vid_feature)
rgb_query_backbone = vid_feature[query_index]
rgb_key_backbone = vid_feature[key_index]
rgb_query = contrast_rgb_feature[query_index]
rgb_key = contrast_rgb_feature[key_index]
contrast_tg_feature = self.contrast_densecl_head(imgs_diff_feature)
tg_query_backbone = imgs_diff_feature[query_index]
tg_key_backbone = imgs_diff_feature[key_index]
tg_query = contrast_tg_feature[query_index]
tg_key = contrast_tg_feature[key_index]
n = rgb_query.size(0)
c = vid_feature.size(1)
loss['loss_crossview_dense_contrast'] = list()
for q_grid, k_grid, q_grid_b, k_grid_b in \
[(rgb_query, rgb_key, rgb_query_backbone, rgb_key_backbone),
(tg_query, tg_key, tg_query_backbone, tg_key_backbone)]:
q_grid = q_grid.view(n, c, -1)
q_grid = nn.functional.normalize(q_grid, dim=1)
k_grid = k_grid.view(n, c, -1)
k_grid = nn.functional.normalize(k_grid, dim=1)
q_grid_b = q_grid_b.view(q_grid_b.size(0), q_grid_b.size(1), -1)
k_grid_b = k_grid_b.view(k_grid_b.size(0), k_grid_b.size(1), -1)
q_grid_b = nn.functional.normalize(q_grid_b, dim=1)
k_grid_b = nn.functional.normalize(k_grid_b, dim=1)
# print(rgb_grid_b.shape)
# print(tg_grid_b.shape)
backbone_sim_matrix = torch.matmul(q_grid_b.permute(0, 2, 1), k_grid_b)
densecl_sim_ind = backbone_sim_matrix.max(dim=2)[1] # NxS^2
indexed_k_grid = torch.gather(k_grid, 2, densecl_sim_ind.unsqueeze(1).expand(-1, k_grid.size(1), -1))
q_embedding = q_grid.view(q_grid.size(0), q_grid.size(1), -1).permute(0, 2, 1).reshape(-1, c)
k_embedding = indexed_k_grid.view(indexed_k_grid.size(0), indexed_k_grid.size(1), -1).permute(0, 2, 1).reshape(-1, c)
contrastive_embedding = [q_embedding, k_embedding]
loss['loss_crossview_dense_contrast'].append(self.contrast_densecl_head.loss(contrastive_embedding))
loss['loss_crossview_dense_contrast'] = torch.mean(torch.stack(loss['loss_crossview_dense_contrast']))
if self.loss_lambda is not None and 'loss_crossview_dense_contrast' in loss.keys() and 'loss_crossview_contrast' in loss.keys():
loss['loss_crossview_dense_contrast'] = self.loss_lambda * loss['loss_crossview_dense_contrast']
loss['loss_crossview_contrast'] = (1 - self.loss_lambda) * loss['loss_crossview_contrast']
if cur_epoch < self.contrast_warmup_epoch:
for contrast_loss_type in ['loss_rgb_contrast', 'loss_tg_contrast', 'loss_crossview_contrast',
'loss_crossview_dense_contrast']:
if contrast_loss_type in loss.keys():
loss[contrast_loss_type] = loss[contrast_loss_type].detach()
# if self.use_temp_contrast:
# cls_score_temp = self.cls_head_temp(vid_feature)
# vid_temporal = cls_score_temp[:batch_total_len, :]
# imgs_diff_temporal = self.temp_contrast_head(imgs_diff_feature[:batch_total_len, :])
#
# vid_temporal_pred = self.simsiam_temp_pred(vid_temporal)
# imgs_diff_temporal_pred = self.simsiam_temp_pred(imgs_diff_temporal)
# embedding_temp = [vid_temporal_pred, imgs_diff_temporal_pred, vid_temporal.detach(), imgs_diff_temporal.detach()]
# loss['loss_temporal_contrast'] = self.simsiam_temp_pred.loss(embedding_temp)
if self.temp_align_indices is not None:
for i, layer_idx in enumerate(self.temp_align_indices):
if self.align_stop_grad == 'tg':
# if self.align_with_correspondence:
# c = vid_features[layer_idx].size(1)
# q_grid = vid_features[layer_idx][:batch_total_len, ...].view(batch_total_len, c, -1)
# k_grid = imgs_diff_features[layer_idx][:batch_total_len, ...].detach().view(batch_total_len, c,
# -1)
# if self.align_with_correspondence == 'normalized_cosine':
# q_grid = nn.functional.normalize(q_grid, dim=1)
# k_grid = nn.functional.normalize(k_grid, dim=1)
# backbone_sim_matrix = torch.matmul(q_grid.permute(0, 2, 1), k_grid)
# densecl_sim_ind = backbone_sim_matrix.max(dim=2)[1] # NxS^2
# indexed_k_grid = torch.gather(k_grid, 2,
# densecl_sim_ind.unsqueeze(1).expand(-1, k_grid.size(1), -1))
# loss[f'loss_layer{layer_idx}_align'] = self.align_criterion(
# q_grid, indexed_k_grid
# )
if self.loss_clip_selection == 0:
align_selection_index = torch.cat(
[torch.arange(bz_labeled // 2), torch.arange(bz_unlabeled // 2) + bz_labeled])
loss[f'loss_layer{layer_idx}_align'] = self.align_criterion(
vid_features[layer_idx][align_selection_index],
imgs_diff_features[layer_idx][align_selection_index].detach())
else:
loss[f'loss_layer{layer_idx}_align'] = self.align_criterion(
vid_features[layer_idx][:batch_total_len, ...],
imgs_diff_features[layer_idx][:batch_total_len, ...].detach())
elif self.align_stop_grad == 'rgb':
loss[f'loss_layer{layer_idx}_align'] = self.align_criterion(
vid_features[layer_idx][:batch_total_len, ...].detach(),
imgs_diff_features[layer_idx][:batch_total_len, ...])
elif self.align_stop_grad is None:
loss[f'loss_layer{layer_idx}_align'] = self.align_criterion(
vid_features[layer_idx][:batch_total_len, ...],
imgs_diff_features[layer_idx][:batch_total_len, ...])
if self.densecl_indices is not None:
if self.densecl_indices == (3,):
layer_idx = self.densecl_indices[0]
# b = vid_feature.size(0)
c = vid_feature.size(1)
q_grid = vid_feature[:batch_total_len, ...].view(batch_total_len, c, -1)
q_grid = nn.functional.normalize(q_grid, dim=1)
k_grid = imgs_diff_feature[:batch_total_len, ...].view(batch_total_len, c, -1)
k_grid = nn.functional.normalize(k_grid, dim=1)
q_grid_pred = self.simsiam_densecl_pred(vid_feature)
k_grid_pred = self.simsiam_densecl_pred(imgs_diff_feature.detach())
q_grid_pred = q_grid_pred[:batch_total_len, ...].view(batch_total_len, c, -1)
q_grid_pred = nn.functional.normalize(q_grid_pred, dim=1)
k_grid_pred = k_grid_pred[:batch_total_len, ...].view(batch_total_len, c, -1)
k_grid_pred = nn.functional.normalize(k_grid_pred, dim=1)
# backbone_sim_matrix = torch.matmul(q_grid.permute(0, 2, 1), k_grid)
#
# # densecl_sim_ind = backbone_sim_matrix.max(dim=2)[1] # NxS^2
# #
# # indexed_k_grid = torch.gather(k_grid, 2, densecl_sim_ind.unsqueeze(1).expand(-1, k_grid.size(1), -1))
self.cosine_align_criterion = CosineSimiLoss(dim=1)
if self.densecl_strategy == 'shared_pred_stop_tg':
loss[f'loss_layer{layer_idx}_dense'] = self.cosine_align_criterion(q_grid_pred, k_grid_pred)
elif self.densecl_strategy == 'rgb_pred_stop_tg':
loss[f'loss_layer{layer_idx}_dense'] = self.cosine_align_criterion(q_grid_pred, k_grid)
elif self.densecl_strategy == 'tg_pred_stop_tg':
loss[f'loss_layer{layer_idx}_dense'] = self.cosine_align_criterion(q_grid, k_grid_pred)
elif self.densecl_strategy == 'simsiam':
embedding_rgb = [q_grid_pred, k_grid_pred, q_grid.detach(), k_grid.detach()]
loss[f'loss_layer{layer_idx}_dense_simsiam'] = self.simsiam_densecl_pred.loss(embedding_rgb)
with torch.no_grad():
loss[f'layer{layer_idx}_align_L2'] = torch.nn.MSELoss()(
vid_feature[:batch_total_len, ...],
imgs_diff_feature[:batch_total_len, ...]
)
else:
raise NotImplementedError
gt_labels = labels.squeeze()
# if torch.distributed.get_rank() == 0:
# print(gt_labels)
# inter-frame difference modality (temporal) supervised loss
# video supervised loss
# loss_labeled_weight = 0.5
loss_labeled_weight = self.cls_loss_weight['labeled']
loss_labeled = {}
# loss_clip1 = {}
# loss_clip2 = {}
for view in ['rgb', 'diff']:
if self.loss_clip_selection is not None:
if self.loss_clip_selection == 0:
loss_labeled[view] = self.cls_head.loss(cls_score_labeled[view][:bz_labeled // 2],
gt_labels[:bz_labeled // 2])
else:
loss_labeled[view] = self.cls_head.loss(cls_score_labeled[view], gt_labels)
# loss_clip1[view] = self.cls_head.loss(cls_score_labeled[view][bz_labeled//2:bz_labeled],
# gt_labels[bz_labeled//2:bz_labeled])
# loss_clip2[view] = self.cls_head.loss(cls_score_labeled[view][:bz_labeled//2],
# gt_labels[:bz_labeled//2])
# loss_labeled[view] = {}
# for k in loss_clip1[view].keys():
# loss_labeled[view][k] = (loss_clip1[view][k] + loss_clip2[view][k]) / 2
#
#
for k in loss_labeled[view]:
if 'loss' in k:
loss[k + f'_labeled_{view}'] = loss_labeled[view][k] * loss_labeled_weight
else:
loss[k + f'_labeled_{view}'] = loss_labeled[view][k]
## Unlabeled data
with torch.no_grad():
if self.pseudo_label_metric == 'rgb':
cls_prob_weak = cls_score_weak['rgb']
elif self.pseudo_label_metric == 'tg':
cls_prob_weak = cls_score_weak['diff']
elif self.pseudo_label_metric == 'avg':
cls_prob_weak = (cls_score_weak['rgb'] + cls_score_weak['diff']) / 2
elif self.pseudo_label_metric == 'individual':
cls_prob_weak = dict()
for view in ['rgb', 'diff']:
cls_prob_weak[view] = F.softmax(cls_score_weak[view], dim=-1)
else:
raise NotImplementedError
if self.pseudo_label_metric != 'individual':
cls_prob_weak = F.softmax(cls_prob_weak, dim=-1)
# if self.loss_clip_selection == 0:
# cls_prob_weak = cls_prob_weak[:bz_unlabeled // 2]
# for view in ['rgb', 'diff']:
# cls_score_strong[view] = cls_score_strong[view][:bz_unlabeled // 2]
# if torch.distributed.get_rank() == 0:
# print(cls_score_strong[view].shape)
# if self.loss_clip_selection == 0:
# bz_unlabeled /= 2
# cls_prob_weak = F.softmax(cls_score_weak.detach(), dim=-1)
# TODO: Control this threshold with cfg
# if self.framework == 'fixmatch':
# thres = 0.95
# else: # UDA
# thres = 0.8
thres = self.fixmatch_threshold
if self.pseudo_label_metric != 'individual':
select = (torch.max(cls_prob_weak, 1).values >= thres).nonzero(as_tuple=False).squeeze(1)
num_select = select.shape[0]
if num_select > 0 and cur_epoch >= self.warmup_epoch:
all_pseudo_labels = cls_prob_weak.argmax(1).detach()
pseudo_labels = torch.index_select(all_pseudo_labels.view(-1, 1), dim=0, index=select).squeeze(1)
cls_score_unlabeled = dict()
loss_unlabeled = dict()
loss_unlabeled_weight = self.cls_loss_weight['unlabeled']
# loss_unlabeled_weight = 0.5
for view in ['rgb', 'diff']:
cls_score_unlabeled[view] = torch.index_select(cls_score_strong[view], dim=0, index=select)
loss_unlabeled[view] = self.cls_head.loss(cls_score_unlabeled[view], pseudo_labels)
# NOTE: When the loss is reduced with a mean, we should always divide by the batch size
# instead of the number of confident samples, so we compensate for that here.
loss[f'loss_cls_unlabeled_{view}'] = loss_unlabeled_weight * loss_unlabeled[view][
'loss_cls'] * num_select / len(cls_prob_weak)
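# Illustrative numbers (not taken from this code path): with 8 unlabeled clips of
# which 3 pass the confidence threshold, the head returns a mean loss over the 3
# selected clips, so scaling by 3 / 8 recovers a mean over the full unlabeled batch.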
# if torch.distributed.get_rank() == 0:
# print(loss_unlabeled_weight, num_select, len(cls_prob_weak))
else:
for view in ['rgb', 'diff']:
loss[f'loss_cls_unlabeled_{view}'] = torch.zeros(1).to(cls_prob_weak.device)
elif self.pseudo_label_metric == 'individual':
num_select = dict()
for view in ['rgb', 'diff']:
select = (torch.max(cls_prob_weak[view], 1).values >= thres).nonzero(as_tuple=False).squeeze(1)
num_select[view] = select.shape[0]
if num_select[view] > 0 and cur_epoch >= self.warmup_epoch:
all_pseudo_labels = cls_prob_weak[view].argmax(1).detach()
pseudo_labels = torch.index_select(all_pseudo_labels.view(-1, 1), dim=0, index=select).squeeze(1)
cls_score_unlabeled = dict()
loss_unlabeled = dict()
loss_unlabeled_weight = self.cls_loss_weight['unlabeled']
# loss_unlabeled_weight = 0.5
cls_score_unlabeled[view] = torch.index_select(cls_score_strong[view], dim=0, index=select)
loss_unlabeled[view] = self.cls_head.loss(cls_score_unlabeled[view], pseudo_labels)
# NOTE: When the loss is reduced with a mean, we should always divide by the batch size
# instead of the number of confident samples, so we compensate for that here.
loss[f'loss_cls_unlabeled_{view}'] = loss_unlabeled_weight * loss_unlabeled[view][
'loss_cls'] * num_select[view] / len(cls_prob_weak[view])
# if torch.distributed.get_rank() == 0:
# print(loss_unlabeled_weight, num_select, len(cls_prob_weak))
else:
loss[f'loss_cls_unlabeled_{view}'] = torch.zeros(1).to(cls_prob_weak[view].device)
with torch.no_grad():
if self.pseudo_label_metric != 'individual':
loss['num_select'] = (torch.ones(1) * num_select).to(cls_prob_weak.device)
elif self.pseudo_label_metric == 'individual':
num_select_avg = (num_select['rgb'] + num_select['diff']) / 2
loss['num_select'] = (torch.ones(1) * num_select_avg).to(cls_prob_weak[view].device)
return loss
def _do_test(self, imgs):
"""Defines the computation performed at every call when evaluation and
testing."""
num_segs = imgs.shape[1]
imgs = imgs.reshape((-1,) + imgs.shape[2:])
if self.test_modality == 'diff':
imgs = imgs[:, :, 1:, ...] - imgs[:, :, :-1, ...]
if self.temp_align_indices is not None:
x = self.temp_backbone(imgs)[-1]
else:
x = self.temp_backbone(imgs)
cls_score = self.temp_sup_head(x)
elif self.test_modality == 'normalized_tg':
if self.temp_align_indices is not None:
x = self.temp_backbone(imgs)[-1]
else:
x = self.temp_backbone(imgs)
cls_score = self.temp_sup_head(x)
elif self.test_modality == 'video':
if self.temp_align_indices is not None:
x = self.extract_feat(imgs)[-1]
else:
x = self.extract_feat(imgs)
if self.test_return_features:
x = nn.AdaptiveAvgPool3d((1, 1, 1))(x)
x = x.view(x.shape[0], -1)
batch_size = x.shape[0]
features = x.view(batch_size // num_segs, num_segs, -1)
features = features.mean(dim=1)
return features
cls_score = self.cls_head(x)
else:
raise NotImplementedError
cls_score = self.average_clip(cls_score, num_segs)
return cls_score
def forward_test(self, imgs):
"""Defines the computation performed at every call when evaluation and
testing."""
return self._do_test(imgs).cpu().numpy()
def forward_dummy(self, imgs):
raise NotImplementedError
def forward_gradcam(self, imgs):
"""Defines the computation performed at every call when using gradcam
utils."""
return self._do_test(imgs)
| 52.409949 | 142 | 0.602291 | 3,845 | 30,555 | 4.39922 | 0.069961 | 0.026012 | 0.024594 | 0.019036 | 0.759563 | 0.678215 | 0.619509 | 0.57653 | 0.556547 | 0.528466 | 0 | 0.013962 | 0.299133 | 30,555 | 582 | 143 | 52.5 | 0.775905 | 0.146261 | 0 | 0.434555 | 0 | 0 | 0.056434 | 0.038894 | 0 | 0 | 0 | 0.001718 | 0 | 1 | 0.013089 | false | 0.002618 | 0.018325 | 0 | 0.04712 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cad40e90be13cab47bd5268c5b82d265267aefbf | 29,968 | py | Python | experiments/classCali_full_CRF/gpu_crf_eval_with_ece.py | neelabh17/SegmenTron | 69a4d1da858aba9222994847000f9945be3f4cd5 | [
"Apache-2.0"
] | null | null | null | experiments/classCali_full_CRF/gpu_crf_eval_with_ece.py | neelabh17/SegmenTron | 69a4d1da858aba9222994847000f9945be3f4cd5 | [
"Apache-2.0"
] | null | null | null | experiments/classCali_full_CRF/gpu_crf_eval_with_ece.py | neelabh17/SegmenTron | 69a4d1da858aba9222994847000f9945be3f4cd5 | [
"Apache-2.0"
] | null | null | null | from __future__ import print_function
from ast import dump
from numpy.lib.npyio import save
from calibration_library.metrics import ECELoss, CCELoss
import io
from logging import log
from experiments.classCali_full_CRF import convcrf
from experiments.classCali_full_CRF.convcrf import GaussCRF, get_default_conf
import os
import sys
import pickle
cur_path = os.path.abspath(os.path.dirname(__file__))
root_path = os.path.split(cur_path)[0]
sys.path.append(root_path)
import logging
import torch
import torch.nn as nn
import torch.nn.functional as F
import cv2
import numpy as np
import joblib
from torchvision import transforms
from segmentron.utils.visualize import get_color_pallete
from segmentron.data.dataloader import get_segmentation_dataset
from segmentron.utils.score import SegmentationMetric
from segmentron.config import cfg
from segmentron.utils.options import parse_args
from segmentron.utils.default_setup import default_setup
from segmentron.models.model_zoo import get_segmentation_model
from segmentron.utils.distributed import make_data_sampler, make_batch_data_sampler
import torch.utils.data as data
from calibration_library import metrics, visualization
# from crfasrnn.crfrnn import CrfRnn
from PIL import Image
from tqdm import tqdm
import os
def makedirs(dirs):
if not os.path.exists(dirs):
os.makedirs(dirs)
from convcrf import *
import matplotlib.pyplot as plt
class Evaluator(object):
def __init__(self, args):
self.args = args
self.device = torch.device(args.device)
self.n_bins = 15
self.ece_folder = "experiments/classCali_full_CRF/eceData"
# self.postfix = "Conv13_PascalVOC_GPU"
# self.postfix = "Min_Foggy_1_conv13_PascalVOC_GPU"
self.postfix = "MINFoggy_1_conv13_PascalVOC_GPU"
self.temp = 1.7
# self.useCRF=False
self.useCRF = True
self.ece_criterion = metrics.IterativeECELoss()
self.ece_criterion.make_bins(n_bins=self.n_bins)
# image transform
input_transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize(cfg.DATASET.MEAN, cfg.DATASET.STD),
]
)
# dataset and dataloader
val_dataset = get_segmentation_dataset(
cfg.DATASET.NAME, split="val", mode="testval", transform=input_transform
)
val_sampler = make_data_sampler(val_dataset, False, args.distributed)
val_batch_sampler = make_batch_data_sampler(
val_sampler, images_per_batch=cfg.TEST.BATCH_SIZE, drop_last=False
)
self.val_loader = data.DataLoader(
dataset=val_dataset,
batch_sampler=val_batch_sampler,
num_workers=cfg.DATASET.WORKERS,
pin_memory=True,
)
self.dataset = val_dataset
self.classes = val_dataset.classes
self.metric = SegmentationMetric(val_dataset.num_class, args.distributed)
self.model = get_segmentation_model().to(self.device)
if (
hasattr(self.model, "encoder")
and hasattr(self.model.encoder, "named_modules")
and cfg.MODEL.BN_EPS_FOR_ENCODER
):
logging.info(
"set bn custom eps for bn in encoder: {}".format(
cfg.MODEL.BN_EPS_FOR_ENCODER
)
)
self.set_batch_norm_attr(
self.model.encoder.named_modules(), "eps", cfg.MODEL.BN_EPS_FOR_ENCODER
)
if args.distributed:
self.model = nn.parallel.DistributedDataParallel(
self.model,
device_ids=[args.local_rank],
output_device=args.local_rank,
find_unused_parameters=True,
)
self.model.to(self.device)
def set_batch_norm_attr(self, named_modules, attr, value):
for m in named_modules:
if isinstance(m[1], nn.BatchNorm2d) or isinstance(m[1], nn.SyncBatchNorm):
setattr(m[1], attr, value)
def giveComparisionImages_colormaps(self,pre_output, post_output, raw_image, gt_label, classes, outname):
"""
pre_output-> [1,21,h,w] cuda tensor
post_output-> [1,21,h,w] cuda tensor
raw_image->[1,3,h,w] cuda tensor
gt_label->[1,h,w] cuda tensor
"""
metric = SegmentationMetric(nclass=21, distributed=False)
metric.update(pre_output, gt_label)
pre_pixAcc, pre_mIoU = metric.get()
metric = SegmentationMetric(nclass=21, distributed=False)
metric.update(post_output, gt_label)
post_pixAcc, post_mIoU = metric.get()
uncal_labels = np.unique(torch.argmax(pre_output.squeeze(0), dim=0).cpu().numpy())
cal_labels = np.unique(torch.argmax(post_output.squeeze(0), dim=0).cpu().numpy())
pre_label_map = torch.argmax(pre_output.squeeze(0), dim=0).cpu().numpy()
post_label_map = torch.argmax(post_output.squeeze(0), dim=0).cpu().numpy()
# Bringing the shapes to justice
pre_output = pre_output.squeeze(0).cpu().numpy()
post_output = post_output.squeeze(0).cpu().numpy()
raw_image = raw_image.squeeze(0).permute(1, 2, 0).cpu().numpy().astype(np.uint8)
gt_label = gt_label.squeeze(0).cpu().numpy()
if False:
pass
else:
# Show result for each class
cols = int(np.ceil((max(len(uncal_labels), len(cal_labels)) + 1))) + 1
rows = 4
plt.figure(figsize=(20, 20))
# Plotting raw image
ax = plt.subplot(rows, cols, 1)
ax.set_title("Input image")
ax.imshow(raw_image[:, :, ::-1])
ax.axis("off")
# Plotting the difference map between the two predicted label maps
ax = plt.subplot(rows, cols, cols + 1)
ax.set_title("Difference Map")
mask1 = get_color_pallete(pre_label_map, cfg.DATASET.NAME)
mask2 = get_color_pallete(post_label_map, cfg.DATASET.NAME)
# print(raw_image[:, :, ::-1].shape)
ax.imshow(((pre_label_map!=post_label_map).astype(np.uint8)))
ax.axis("off")
# Plotting the uncalibrated (+ CRF) color map
ax = plt.subplot(rows, cols, 2 * cols + 1)
ax.set_title("ColorMap (uncal+crf) pixA={:.4f} mIoU={:.4f}".format(pre_pixAcc, pre_mIoU))
mask = get_color_pallete(pre_label_map, cfg.DATASET.NAME)
ax.imshow(np.array(mask))
ax.axis("off")
# Plotting the calibrated (+ CRF) color map
ax = plt.subplot(rows, cols, 3 * cols + 1)
# metric = SegmentationMetric(nclass=21, distributed=False)
# metric.update(pre_output, gt_label)
# pixAcc, mIoU = metric.get()
ax.set_title("ColorMap (cal T = {} + CRF) pixA={:.4f} mIoU={:.4f}".format(self.temp,post_pixAcc, post_mIoU))
mask = get_color_pallete(post_label_map, cfg.DATASET.NAME)
ax.imshow(np.array(mask))
ax.axis("off")
for i, label in enumerate(uncal_labels):
ax = plt.subplot(rows, cols, i + 3)
ax.set_title("Uncalibrated-" + classes[label])
ax.imshow(pre_output[label], cmap="nipy_spectral")
ax.axis("off")
for i, label in enumerate(cal_labels):
ax = plt.subplot(rows, cols, cols + i + 3)
ax.set_title("Calibrated-" + classes[label])
ax.imshow(post_output[label], cmap="nipy_spectral")
ax.axis("off")
for i, label in enumerate(cal_labels):
ax = plt.subplot(rows, cols, 2 * cols + i + 3)
min_dif = np.min(pre_output[label] - post_output[label])
max_dif = np.max(pre_output[label] - post_output[label])
dif_map = np.where(
(pre_output[label] - post_output[label]) > 0,
(pre_output[label] - post_output[label]),
0,
)
ax.set_title("decrease: " + classes[label] + " max={:0.3f}".format(max_dif))
ax.imshow(
dif_map / max_dif,
cmap="nipy_spectral",
)
ax.axis("off")
for i, label in enumerate(cal_labels):
ax = plt.subplot(rows, cols, 3 * cols + i + 3)
min_dif = np.min(pre_output[label] - post_output[label])
max_dif = np.max(pre_output[label] - post_output[label])
dif_map = np.where(
(pre_output[label] - post_output[label]) < 0,
(pre_output[label] - post_output[label]),
0,
)
ax.set_title(
"increase: " + classes[label] + " max={:0.3f}".format(-min_dif)
)
ax.imshow(
dif_map / min_dif,
cmap="nipy_spectral",
)
ax.axis("off")
plt.tight_layout()
plt.savefig(outname)
def giveComparisionImages_after_crf(self,
pre_output, post_output, raw_image, gt_label, classes, outname
):
"""
pre_output-> [1,21,h,w] cuda tensor
post_output-> [1,21,h,w] cuda tensor
raw_image->[1,3,h,w] cuda tensor
gt_label->[1,h,w] cuda tensor
"""
uncal_labels = np.unique(torch.argmax(pre_output.squeeze(0), dim=0).cpu().numpy())
cal_labels = np.unique(torch.argmax(post_output.squeeze(0), dim=0).cpu().numpy())
# Bringing the shapes to justice
pre_output = pre_output.squeeze(0).cpu().numpy()
post_output = post_output.squeeze(0).cpu().numpy()
raw_image = raw_image.squeeze(0).permute(1, 2, 0).cpu().numpy().astype(np.uint8)
gt_label = gt_label.squeeze(0).cpu().numpy()
# import pdb; pdb.set_trace()
# gt_label=get_gt_with_id(imageName)
# if(np.sum((cal_labelmap!=uncal_labelmap).astype(np.float32))==0):
if False:
pass
else:
# Show result for each class
cols = int(np.ceil((max(len(uncal_labels), len(cal_labels)) + 1))) + 1
rows = 4
plt.figure(figsize=(20, 20))
ax = plt.subplot(rows, cols, 1)
ax.set_title("Input image")
ax.imshow(raw_image[:, :, ::-1])
ax.axis("off")
ax = plt.subplot(rows, cols, cols + 1)
# @ neelabh remove this
loss = 1.999999999999999
ax.set_title("Difference Map")
ax.imshow(raw_image[:, :, ::-1])
ax.axis("off")
# ax = plt.subplot(rows, cols, 2 * cols + 1)
# gradient = np.linspace(0, 1, 256)
# gradient = np.vstack((gradient, gradient))
# ax.imshow(gradient, cmap="nipy_spectral")
# ax.set_title("Acc")
# ax.imshow(raw_image[:, :, ::-1])
# ax.axis("off")
for i, label in enumerate(uncal_labels):
ax = plt.subplot(rows, cols, i + 3)
ax.set_title("Uncalibrated + crf-" + classes[label])
ax.imshow(pre_output[label], cmap="nipy_spectral")
ax.axis("off")
for i, label in enumerate(cal_labels):
ax = plt.subplot(rows, cols, cols + i + 3)
ax.set_title("Calibrated (T={}) + CRF ".format(self.temp) + classes[label])
ax.imshow(post_output[label], cmap="nipy_spectral")
ax.axis("off")
for i, label in enumerate(cal_labels):
ax = plt.subplot(rows, cols, 2 * cols + i + 3)
min_dif = np.min(pre_output[label] - post_output[label])
max_dif = np.max(pre_output[label] - post_output[label])
dif_map = np.where(
(pre_output[label] - post_output[label]) > 0,
(pre_output[label] - post_output[label]),
0,
)
ax.set_title("decrease: " + classes[label] + " max={:0.3f}".format(max_dif))
ax.imshow(
dif_map / max_dif,
cmap="nipy_spectral",
)
ax.axis("off")
for i, label in enumerate(cal_labels):
ax = plt.subplot(rows, cols, 3 * cols + i + 3)
min_dif = np.min(pre_output[label] - post_output[label])
max_dif = np.max(pre_output[label] - post_output[label])
dif_map = np.where(
(pre_output[label] - post_output[label]) < 0,
(pre_output[label] - post_output[label]),
0,
)
ax.set_title(
"increase: " + classes[label] + " max={:0.3f}".format(-min_dif)
)
ax.imshow(
dif_map / min_dif,
cmap="nipy_spectral",
)
ax.axis("off")
plt.tight_layout()
plt.savefig(outname)
def giveComparisionImages_before_crf(self,
pre_output, post_output, raw_image, gt_label, classes, outname
):
"""
pre_output-> [1,21,h,w] cuda tensor
post_output-> [1,21,h,w] cuda tensor
raw_image->[1,3,h,w] cuda tensor
gt_label->[1,h,w] cuda tensor
"""
uncal_labels = np.unique(torch.argmax(pre_output.squeeze(0), dim=0).cpu().numpy())
cal_labels = np.unique(torch.argmax(post_output.squeeze(0), dim=0).cpu().numpy())
# Bringing the shapes to justice
pre_output = pre_output.squeeze(0).cpu().numpy()
post_output = post_output.squeeze(0).cpu().numpy()
raw_image = raw_image.squeeze(0).permute(1, 2, 0).cpu().numpy().astype(np.uint8)
gt_label = gt_label.squeeze(0).cpu().numpy()
# import pdb; pdb.set_trace()
# gt_label=get_gt_with_id(imageName)
# if(np.sum((cal_labelmap!=uncal_labelmap).astype(np.float32))==0):
if False:
pass
else:
# Show result for each class
cols = int(np.ceil((max(len(uncal_labels), len(cal_labels)) + 1))) + 1
rows = 4
plt.figure(figsize=(20, 20))
ax = plt.subplot(rows, cols, 1)
ax.set_title("Input image")
ax.imshow(raw_image[:, :, ::-1])
ax.axis("off")
ax = plt.subplot(rows, cols, cols + 1)
# @ neelabh remove this
loss = 1.999999999999999
ax.set_title("Accuracy dif = {:0.3f}".format(loss))
ax.imshow(raw_image[:, :, ::-1])
ax.axis("off")
# ax = plt.subplot(rows, cols, 2 * cols + 1)
# gradient = np.linspace(0, 1, 256)
# gradient = np.vstack((gradient, gradient))
# ax.imshow(gradient, cmap="nipy_spectral")
# ax.set_title("Acc")
# ax.imshow(raw_image[:, :, ::-1])
# ax.axis("off")
for i, label in enumerate(uncal_labels):
ax = plt.subplot(rows, cols, i + 3)
ax.set_title("Uncalibrated-" + classes[label])
ax.imshow(pre_output[label], cmap="nipy_spectral")
ax.axis("off")
for i, label in enumerate(cal_labels):
ax = plt.subplot(rows, cols, cols + i + 3)
ax.set_title("Calibrated (T = {}) ".format(self.temp) + classes[label])
ax.imshow(post_output[label], cmap="nipy_spectral")
ax.axis("off")
for i, label in enumerate(cal_labels):
ax = plt.subplot(rows, cols, 2 * cols + i + 3)
min_dif = np.min(pre_output[label] - post_output[label])
max_dif = np.max(pre_output[label] - post_output[label])
dif_map = np.where(
(pre_output[label] - post_output[label]) > 0,
(pre_output[label] - post_output[label]),
0,
)
ax.set_title("decrease: " + classes[label] + " max={:0.3f}".format(max_dif))
ax.imshow(
dif_map / max_dif,
cmap="nipy_spectral",
)
ax.axis("off")
for i, label in enumerate(cal_labels):
ax = plt.subplot(rows, cols, 3 * cols + i + 3)
min_dif = np.min(pre_output[label] - post_output[label])
max_dif = np.max(pre_output[label] - post_output[label])
dif_map = np.where(
(pre_output[label] - post_output[label]) < 0,
(pre_output[label] - post_output[label]),
0,
)
ax.set_title(
"increase: " + classes[label] + " max={:0.3f}".format(-min_dif)
)
ax.imshow(
dif_map / min_dif,
cmap="nipy_spectral",
)
ax.axis("off")
plt.tight_layout()
plt.savefig(outname)
def eceOperations(
self, endName, bin_total, bin_total_correct, bin_conf_total, temp=None
):
eceLoss = self.ece_criterion.get_interative_loss(
bin_total, bin_total_correct, bin_conf_total
)
# print('ECE with probabilties %f' % (eceLoss))
if temp is None:
temp = self.temp
saveDir = os.path.join(self.ece_folder, self.postfix + f"_temp={temp}")
makedirs(saveDir)
file = open(os.path.join(saveDir, "Results.txt"), "a")
file.write(f"{endNAme.strip('.npy')}_temp={temp}\t\t\t ECE Loss: {eceLoss}\n")
plot_folder = os.path.join(saveDir, "plots")
makedirs(plot_folder)
rel_diagram = visualization.ReliabilityDiagramIterative()
plt_test_2 = rel_diagram.plot(
bin_total, bin_total_correct, bin_conf_total, title="Reliability Diagram"
)
plt_test_2.savefig(
os.path.join(plot_folder, f'{endName.strip(".npy")}_temp={temp}.png'),
bbox_inches="tight",
)
plt_test_2.close()
return eceLoss
def give_ece_order(self, model):
"""
Performs evaluation over the entire dataset.
Returns a list of [endName, filename, eceLoss] entries sorted by ECE loss (descending).
"""
eceLosses = []
for (image, target, filename) in tqdm(self.val_loader):
bin_total = []
bin_total_correct = []
bin_conf_total = []
image = image.to(self.device)
target = target.to(self.device)
filename = filename[0]
# print(filename)
endName = os.path.basename(filename).replace(".jpg", ".npy")
# print(endName)
npy_target_directory = "datasets/VOC_targets"
npy_file = os.path.join(npy_target_directory, endName)
if os.path.isfile(npy_file):
pass
else:
makedirs(npy_target_directory)
np.save(npy_file, target.cpu().numpy())
# print("Npy files not found | Going for onboard eval")
# print(image.shape)
with torch.no_grad():
# Checking whether a preprocessed npy file already exists
# print(filename)
# npy_output_directory = "npy_outputs/npy_VOC_outputs"
npy_output_directory = "npy_outputs/npy_foggy1_VOC_outputs"
npy_file = os.path.join(npy_output_directory, endName)
# print (npy_file)
if os.path.isfile(npy_file):
output = np.load(npy_file)
output = torch.Tensor(output).cuda()
# print("Reading Numpy Files")
else:
# print("Npy files not found | Going for onboard eval")
makedirs(npy_output_directory)
output = model.evaluate(image)
np.save(npy_file, output.cpu().numpy())
output_before_cali = output.clone()
# ECE Stuff
conf = np.max(output_before_cali.softmax(dim=1).cpu().numpy(), axis=1)
label = torch.argmax(output_before_cali, dim=1).cpu().numpy()
# print(conf.shape,label.shape,target.shape)
(
bin_total_current,
bin_total_correct_current,
bin_conf_total_current,
) = self.ece_criterion.get_collective_bins(
conf, label, target.cpu().numpy()
)
# import pdb; pdb.set_trace()
bin_total.append(bin_total_current)
bin_total_correct.append(bin_total_correct_current)
bin_conf_total.append(bin_conf_total_current)
# ECE stuff
# if(not self.useCRF):
eceLosses.append(
[
endName,
filename,
self.eceOperations(
endName,
bin_total,
bin_total_correct,
bin_conf_total,
temp=1,
),
]
)
eceLosses.sort(key=lambda x: x[2], reverse=True)
return eceLosses
def eval(self):
self.metric.reset()
self.model.eval()
model = self.model
logging.info(
"Start validation, Total sample: {:d}".format(len(self.val_loader))
)
import time
time_start = time.time()
# if(not self.useCRF):
# first loop for finding ece errors
if os.path.isfile("experiments/classCali_full_CRF/sorted_ecefoggy.pickle"):
file = open("experiments/classCali_full_CRF/sorted_ecefoggy.pickle", "rb")
# if os.path.isfile("experiments/classCali_full_CRF/sorted_ece.pickle"):
# file = open("experiments/classCali_full_CRF/sorted_ece.pickle", "rb")
eceLosses = pickle.load(file)
file.close()
else:
assert False
eceLosses = self.give_ece_order(model)
pickle.dump(eceLosses, open("experiments/classCali_full_CRF/sorted_ece.pickle", "wb"))
print("ECE sorting completed....")
top_k = 10
assert top_k > 0
eceLosses.reverse()
# for i, (endName, imageLoc, eceLoss) in enumerate(tqdm(eceLosses[2:3])):
for i, (endName, imageLoc, eceLoss) in enumerate(tqdm(eceLosses[:top_k])):
# Loading outputs
print(endName)
# npy_output_directory = "npy_outputs/npy_VOC_outputs"
npy_output_directory = "npy_outputs/npy_foggy1_VOC_outputs"
npy_file = os.path.join(npy_output_directory, endName)
output = np.load(npy_file)
output = torch.Tensor(output).cuda()
# loading targets
npy_target_directory = "datasets/VOC_targets"
npy_file = os.path.join(npy_target_directory, endName)
target = np.load(npy_file)
target = torch.Tensor(target).cuda()
# print(image.shape)
with torch.no_grad():
output_uncal = output.clone()
output_cal = output / self.temp
# ECE Stuff
bin_total = []
bin_total_correct = []
bin_conf_total = []
conf = np.max(output_uncal.softmax(dim=1).cpu().numpy(), axis=1)
label = torch.argmax(output_uncal, dim=1).cpu().numpy()
# print(conf.shape,label.shape,target.shape)
(
bin_total_current,
bin_total_correct_current,
bin_conf_total_current,
) = self.ece_criterion.get_collective_bins(
conf, label, target.cpu().numpy()
)
# import pdb; pdb.set_trace()
bin_total.append(bin_total_current)
bin_total_correct.append(bin_total_correct_current)
bin_conf_total.append(bin_conf_total_current)
# ECE stuff
# if(not self.useCRF):
self.eceOperations(
endName,
bin_total,
bin_total_correct,
bin_conf_total,
temp=1
)
# ECE Stuff
bin_total = []
bin_total_correct = []
bin_conf_total = []
conf = np.max(output_cal.softmax(dim=1).cpu().numpy(), axis=1)
label = torch.argmax(output_cal, dim=1).cpu().numpy()
# print(conf.shape,label.shape,target.shape)
(
bin_total_current,
bin_total_correct_current,
bin_conf_total_current,
) = self.ece_criterion.get_collective_bins(
conf, label, target.cpu().numpy()
)
# import pdb; pdb.set_trace()
bin_total.append(bin_total_current)
bin_total_correct.append(bin_total_correct_current)
bin_conf_total.append(bin_conf_total_current)
# ECE stuff
# if(not self.useCRF):
self.eceOperations(
endName,
bin_total,
bin_total_correct,
bin_conf_total,
)
# Read the raw image
raw_image = (
cv2.imread(imageLoc, cv2.IMREAD_COLOR)
.astype(np.float32)
.transpose(2, 0, 1)
)
raw_image = torch.from_numpy(raw_image).to(self.device)
raw_image = raw_image.unsqueeze(dim=0)
# Setting up CRF
crf = GaussCRF(
conf=get_default_conf(),
shape=output.shape[2:],
nclasses=len(self.classes),
use_gpu=True,
)
crf = crf.to(self.device)
# Getting CRF outputs
# print(output.shape, raw_image.shape)
assert output.shape[2:] == raw_image.shape[2:]
# import pdb; pdb.set_trace()
# print(":here1:")
output_cal_crf = crf.forward(output_cal, raw_image)
# print(":here2:")
output_uncal_crf = crf.forward(output_uncal, raw_image)
# Comparison before CRF between calibrated and uncalibrated outputs
comparisionFolder = "experiments/classCali_full_CRF/comparisionImages"
saveFolder = os.path.join(
comparisionFolder, "bcrf" + self.postfix + f"_temp={self.temp}"
)
makedirs(saveFolder)
saveName = os.path.join(saveFolder, os.path.basename(imageLoc))
self.giveComparisionImages_before_crf(
output_uncal.softmax(dim=1),
output_cal.softmax(dim=1),
raw_image,
target,
self.classes,
saveName,
)
# Comparison after CRF between calibrated and uncalibrated outputs
comparisionFolder = "experiments/classCali_full_CRF/comparisionImages"
saveFolder = os.path.join(
comparisionFolder, "crf" + self.postfix + f"_temp={self.temp}"
)
makedirs(saveFolder)
saveName = os.path.join(saveFolder, os.path.basename(imageLoc))
self.giveComparisionImages_after_crf(
output_uncal_crf.softmax(dim=1),
output_cal_crf.softmax(dim=1),
raw_image,
target,
self.classes,
saveName,
)
# Color-map comparison: uncalibrated + CRF vs calibrated + CRF
comparisionFolder = "experiments/classCali_full_CRF/comparisionImages"
saveFolder = os.path.join(
comparisionFolder, "cmap_" + self.postfix + f"_temp={self.temp}"
)
makedirs(saveFolder)
saveName = os.path.join(saveFolder, os.path.basename(imageLoc))
self.giveComparisionImages_colormaps(
output_uncal_crf.softmax(dim=1),
output_cal_crf.softmax(dim=1),
raw_image,
target,
self.classes,
saveName,
)
# # Accuracy Stuff
# self.metric.update(output, target)
# pixAcc, mIoU = self.metric.get()
if __name__ == "__main__":
args = parse_args()
cfg.update_from_file(args.config_file)
cfg.update_from_list(args.opts)
cfg.PHASE = "test"
cfg.ROOT_PATH = root_path
cfg.check_and_freeze()
default_setup(args)
evaluator = Evaluator(args)
# temperatures=[0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.70, 0.75, 0.8, 0.85, 0.90, 0.95, 1.0, 1.05, 1.1, 1.15, 1.20, 1.25, 1.3, 1.35, 1.40, 1.45, 1.50,
# 1.55, 1.6, 1.65, 1.70, 1.75, 1.8, 1.85, 1.9, 1.95, 2.0, 2.05, 2.1, 2.15, 2.2, 2.25, 2.3, 2.35, 2.4, 2.45, 2.5]
# for temperature in temperatures:
# evaluator.temp=temperature
# evaluator.eval()
evaluator.eval() | 37.181141 | 189 | 0.531734 | 3,414 | 29,968 | 4.465729 | 0.122437 | 0.038961 | 0.024793 | 0.028335 | 0.658337 | 0.631379 | 0.618195 | 0.611964 | 0.590975 | 0.565263 | 0 | 0.021066 | 0.356881 | 29,968 | 806 | 190 | 37.181141 | 0.769989 | 0.121997 | 0 | 0.521101 | 0 | 0.001835 | 0.05671 | 0.019773 | 0 | 0 | 0 | 0 | 0.005505 | 1 | 0.016514 | false | 0.007339 | 0.06422 | 0 | 0.086239 | 0.005505 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cad510dc56f073e7603af970c006b57ebebf4ae0 | 9,585 | py | Python | leavable_wait_page/pages.py | chkgk/leavable_wait_page | 7bafe7492329d91e6e1df57996bd110fccb92abc | [
"MIT"
] | 2 | 2020-02-01T11:47:03.000Z | 2020-03-27T17:14:34.000Z | leavable_wait_page/pages.py | chkgk/leavable_wait_page | 7bafe7492329d91e6e1df57996bd110fccb92abc | [
"MIT"
] | 1 | 2020-08-13T02:34:56.000Z | 2020-08-13T02:36:22.000Z | leavable_wait_page/pages.py | chkgk/leavable_wait_page | 7bafe7492329d91e6e1df57996bd110fccb92abc | [
"MIT"
] | null | null | null | import time
from django.http import HttpResponseRedirect
from otree.models import Participant
from . import models
from ._builtin import Page, WaitPage
class DecorateIsDisplayMixin(object):
def __init__(self):
super(DecorateIsDisplayMixin, self).__init__()
# We need to edit the is_displayed() method dynamically when creating an instance, since by convention
# it is overridden in the last child class
def decorate_is_displayed(func):
def decorated_is_display(*args, **kwargs):
app_name = self.player._meta.app_label
round_number = self.player.round_number
exiter = self.player.participant.vars.get('go_to_the_end', False) or self.player.participant.vars.get(
'skip_the_end_of_app_{}'.format(app_name), False) or self.player.participant.vars.get(
'skip_the_end_of_app_{}_round_{}'.format(app_name, round_number), False)
game_condition = func(*args, **kwargs)
# we need to first run them both separately to make sure that both conditions are executed
return game_condition and not exiter
return decorated_is_display
setattr(self, "is_displayed", decorate_is_displayed(getattr(self, "is_displayed")))
class SkippablePage(DecorateIsDisplayMixin, Page):
pass
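# Typical use (hypothetical page of an experiment app, not part of this module):
# subclass SkippablePage and override is_displayed() as usual; the mixin then hides
# the page automatically for participants who have chosen to leave early.
# class Results(SkippablePage):
#     def is_displayed(self):
#         return self.round_number == Constants.num_rounds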
class LeavableWaitPage(WaitPage):
# Only for the first, grouping wait page of the app
template_name = 'leavable_wait_page/LeavableWaitPage.html'
# If a player waits for more than allow_leaving_after seconds, he will be offered the option to skip
# pages. By default (skip_until_the_end_of = "experiment"), deciding to skip means skipping all the
# pages until the end of the experiment (provided those pages inherit from SkippablePage or LeavableWaitPage).
# If skip_until_the_end_of = "app", he will only skip the remaining pages of the current app.
# If skip_until_the_end_of = "round", only pages of the current round will be skipped.
allow_leaving_after = 3600
# "experiment" or "app or "round"
skip_until_the_end_of = "experiment"
group_by_arrival_time = True
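# A minimal configuration sketch (hypothetical subclass, values are examples only):
# class GroupingWaitPage(LeavableWaitPage):
#     allow_leaving_after = 600      # offer the exit option after 10 minutes
#     skip_until_the_end_of = "app"  # leavers skip only this app's remaining pages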
def dispatch(self, *args, **kwargs):
curparticipant = Participant.objects.get(code__exact=kwargs['participant_code'])
if self.request.method == 'POST':
app_name = curparticipant._current_app_name
index_in_pages = curparticipant._index_in_pages
now = time.time()
wptimerecord = models.WPTimeRecord.objects.get(app=app_name, page_index=index_in_pages,
augmented_participant_id=curparticipant.id)
time_left = wptimerecord.startwp_time + self.allow_leaving_after - now
if time_left > 0:
url_should_be_on = curparticipant._url_i_should_be_on()
return HttpResponseRedirect(url_should_be_on)
if self.skip_until_the_end_of in ["app", "round"]:
app_name = curparticipant._current_app_name
if self.skip_until_the_end_of == "round":
round_number = curparticipant._round_number
curparticipant.vars['skip_the_end_of_app_{}_round_{}'.format(app_name, round_number)] = True
else:
# "app"
curparticipant.vars['skip_the_end_of_app_{}'.format(app_name)] = True
else:
assert self.skip_until_the_end_of == "experiment", \
"the attribute skip_until_the_end_of should be set to experiment, app or round, not {}".format(
self.skip_until_the_end_of)
curparticipant.vars['go_to_the_end'] = True
curparticipant.save()
return super().dispatch(*args, **kwargs)
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
app_name = self.player._meta.app_label
index_in_pages = self._index_in_pages
now = time.time()
wptimerecord, created = self.participant.augmentedparticipant.wptimerecord_set.get_or_create(app=app_name,
page_index=index_in_pages)
if not wptimerecord.startwp_timer_set:
wptimerecord.startwp_timer_set = True
wptimerecord.startwp_time = time.time()
wptimerecord.save()
time_left = wptimerecord.startwp_time + self.allow_leaving_after - now
time_passed = now - wptimerecord.startwp_time
context.update({
'index_in_pages': index_in_pages,
'time_left': round(time_left),
'time_passed': round(time_passed),
'app_name': app_name,
})
return context
def __init__(self):
super(LeavableWaitPage, self).__init__()
# IS A WAIT PAGE
def decorate_after_all_players_arrive(func):
def decorated_after_all_players_arrive(*args, **kwargs):
self.extra_task_to_decorate_start_of_after_all_players_arrive()
func(*args, **kwargs)
self.extra_task_to_decorate_end_of_after_all_players_arrive()
return decorated_after_all_players_arrive
setattr(self, "after_all_players_arrive",
decorate_after_all_players_arrive(getattr(self, "after_all_players_arrive")))
# We need to edit the is_displayed() method dynamically when creating an instance, since by convention
# it is overridden in the last child class
def decorate_is_displayed(func):
def decorated_is_display(*args, **kwargs):
game_condition = func(*args, **kwargs)
# we need to first run them both separately to make sure that both conditions are executed
self.extra_task_to_execute_with_is_display()
return game_condition
return decorated_is_display
setattr(self, "is_displayed", decorate_is_displayed(getattr(self, "is_displayed")))
def decorate_get_players_for_group(func):
def decorated_get_players_for_group(*args, **kwargs):
grouped = self.extra_task_to_decorate_start_of_get_players_for_group(*args, **kwargs)
if grouped:
# form groups of only one when a player decides to finish the experiment --> otherwise,
# there might be problems later during ordinary wait pages
return grouped[0:1]
grouped = func(*args, **kwargs)
if grouped:
return grouped
grouped = self.extra_task_to_decorate_end_of_get_players_for_group(*args, **kwargs)
if grouped:
return grouped
return decorated_get_players_for_group
setattr(self, "get_players_for_group",
decorate_get_players_for_group(getattr(self, "get_players_for_group")))
def extra_task_to_decorate_start_of_get_players_for_group(self, waiting_players):
app_name = self.subsession._meta.app_label
round_number = self.subsession.round_number
endofgamers = [p for p in waiting_players if (
p.participant.vars.get('go_to_the_end') or p.participant.vars.get(
'skip_the_end_of_app_{}'.format(app_name)) or p.participant.vars.get(
'skip_the_end_of_app_{}_round_{}'.format(app_name, round_number))
)]
if endofgamers:
return endofgamers
def extra_task_to_decorate_end_of_get_players_for_group(self, waiting_players):
pass
def extra_task_to_decorate_start_of_after_all_players_arrive(self):
pass
def extra_task_to_decorate_end_of_after_all_players_arrive(self):
if self.wait_for_all_groups:
players = self.subsession.get_players()
else:
players = self.group.get_players()
# It is theoretically possible to have a participant with "go_to_the_end" who is also inside a "normal" group
# with more than one player... This can happen because "go_to_the_end" is set outside of the
# group-by-arrival-time lock (and the lock varies depending on the version of oTree, so we cannot easily fix
# this), but it should be very rare: only when a participant requests to exit right at the moment he is
# grouped, and only with bad luck...
# To fix this, we use a dirty hack here... we detect the anomaly with the following test
if len(players) > 1:
app_name = players[0]._meta.app_label
round_number = players[0].round_number
for p in players:
exiter = p.participant.vars.get('go_to_the_end', False) or p.participant.vars.get(
'skip_the_end_of_app_{}'.format(app_name), False) or p.participant.vars.get(
'skip_the_end_of_app_{}_round_{}'.format(app_name, round_number), False)
if exiter:
# --> fix the error, remove the exit marker
p.participant.vars.pop('go_to_the_end', None)
p.participant.vars.pop('skip_the_end_of_app_{}'.format(app_name), None)
p.participant.vars.pop('skip_the_end_of_app_{}_round_{}'.format(app_name, round_number), None)
def extra_task_to_execute_with_is_display(self):
self.participant.vars.setdefault('starting_time_stamp_{}'.format(self._index_in_pages), time.time())
| 49.153846 | 127 | 0.649557 | 1,217 | 9,585 | 4.768283 | 0.184059 | 0.027917 | 0.027572 | 0.020851 | 0.521971 | 0.460279 | 0.396347 | 0.351198 | 0.311046 | 0.294847 | 0 | 0.001436 | 0.273552 | 9,585 | 194 | 128 | 49.407216 | 0.831969 | 0.183307 | 0 | 0.244444 | 0 | 0 | 0.090991 | 0.056132 | 0 | 0 | 0 | 0 | 0.007407 | 1 | 0.125926 | false | 0.037037 | 0.037037 | 0 | 0.311111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cad58261e324fbea414f99255ccc6dcba0116684 | 7,424 | py | Python | test/new_tests/test_info_single_node.py | syaiful6/aerospike-client-python | 59fa0d36aa899a164282643fe49b27d12aaf323f | [
"Apache-2.0"
] | 105 | 2015-01-07T09:51:13.000Z | 2022-03-24T04:23:54.000Z | test/new_tests/test_info_single_node.py | syaiful6/aerospike-client-python | 59fa0d36aa899a164282643fe49b27d12aaf323f | [
"Apache-2.0"
] | 180 | 2015-01-01T19:29:50.000Z | 2022-03-19T14:14:06.000Z | test/new_tests/test_info_single_node.py | syaiful6/aerospike-client-python | 59fa0d36aa899a164282643fe49b27d12aaf323f | [
"Apache-2.0"
] | 94 | 2015-01-21T19:17:48.000Z | 2022-01-31T07:17:47.000Z | # -*- coding: utf-8 -*-
import pytest
import time
import sys
import socket
from .test_base_class import TestBaseClass
from .as_status_codes import AerospikeStatus
from aerospike import exception as e
aerospike = pytest.importorskip("aerospike")
try:
import aerospike
except:
print("Please install aerospike python client.")
sys.exit(1)
@pytest.mark.xfail(TestBaseClass.temporary_xfail(), reason="xfail variable set")
@pytest.mark.usefixtures("as_connection", "connection_config")
class TestInfoSingleNode(object):
pytest_skip = False
@pytest.fixture(autouse=True)
def setup(self, request, as_connection, connection_config):
key = ('test', 'demo', 'list_key')
rec = {'names': ['John', 'Marlen', 'Steve']}
self.host_name = list(self.as_connection.info("node").keys())[0]
self.as_connection.put(key, rec)
yield
self.as_connection.remove(key)
def test_info_single_node_positive(self):
"""
Test info with correct arguments.
"""
if self.pytest_skip:
pytest.xfail()
response = self.as_connection.info_single_node(
'bins', self.host_name)
# This should probably make sure that a bin is actually named 'names'
assert 'names' in response
def test_info_single_node_positive_for_namespace(self):
"""
Test info with 'namespaces' as the command.
"""
if self.pytest_skip:
pytest.xfail()
response = self.as_connection.info_single_node(
'namespaces', self.host_name)
assert 'test' in response
def test_info_single_node_positive_for_sets(self):
"""
Test info with 'sets' as the command.
"""
if self.pytest_skip:
pytest.xfail()
response = self.as_connection.info_single_node(
'sets', self.host_name)
assert 'demo' in response
def test_info_single_node_positive_for_sindex_creation(self):
"""
Test creating an index through an info call.
"""
if self.pytest_skip:
pytest.xfail()
try:
self.as_connection.index_remove('test', 'names_test_index')
time.sleep(2)
except:
pass
self.as_connection.info_single_node(
'sindex-create:ns=test;set=demo;indexname=names_test_index;indexdata=names,string',
self.host_name)
time.sleep(2)
response = self.as_connection.info_single_node(
'sindex', self.host_name)
self.as_connection.index_remove('test', 'names_test_index')
assert 'names_test_index' in response
def test_info_single_node_positive_with_correct_policy(self):
"""
Test info call with bins as command and a timeout policy.
"""
if self.pytest_skip:
pytest.xfail()
host = self.host_name
policy = {'timeout': 1000}
response = self.as_connection.info_single_node('bins', host, policy)
assert 'names' in response
def test_info_single_node_positive_with_host(self):
"""
Test info with correct host.
"""
if self.pytest_skip:
pytest.xfail()
host = self.host_name
response = self.as_connection.info_single_node('bins', host)
assert 'names' in response
def test_info_single_node_positive_with_all_parameters(self):
"""
Test info with all parameters.
"""
if self.pytest_skip:
pytest.xfail()
policy = {
'timeout': 1000
}
host = self.host_name
response = self.as_connection.info_single_node('logs', host, policy)
assert response is not None
# Tests for incorrect usage
@pytest.mark.xfail(TestBaseClass.temporary_xfail(), reason="xfail variable set")
@pytest.mark.usefixtures("as_connection", "connection_config")
class TestInfoSingleNodeIncorrectUsage(object):
"""
Tests for invalid usage of the info_single_node method.
"""
def test_info_single_node_no_parameters(self):
"""
Test info with no parameters.
"""
with pytest.raises(TypeError) as typeError:
self.as_connection.info_single_node()
assert "argument 'command' (pos 1)" in str(
typeError.value)
def test_info_single_node_for_incorrect_command(self):
"""
Test info for incorrect command.
"""
with pytest.raises(e.ClientError) as err_info:
self.as_connection.info_single_node(
'abcd', self.connection_config['hosts'][0])
def test_info_single_node_positive_without_connection(self):
"""
Test info with correct arguments without connection.
"""
client1 = aerospike.client(self.connection_config)
with pytest.raises(e.ClusterError) as err_info:
client1.info_single_node('bins', self.connection_config['hosts'][0][:2])
assert err_info.value.code == 11
assert err_info.value.msg == 'No connection to aerospike cluster.'
def test_info_single_node_positive_with_extra_parameters(self):
"""
Test info with extra parameters.
"""
host = self.connection_config['hosts'][0]
policy = {'timeout': 1000}
with pytest.raises(TypeError) as typeError:
self.as_connection.info_single_node('bins', host, policy, "")
assert "info_single_node() takes at most 3 arguments (4 given)" in str(
typeError.value)
def test_info_single_node_positive_with_incorrect_host(self):
"""
Test info with incorrect host.
"""
host = "wrong"
with pytest.raises(e.ParamError) as err_info:
self.as_connection.info_single_node('bins', host)
@pytest.mark.parametrize(
"command",
(None, 5, ["info"], {}, False))
def test_info_single_node_for_invalid_command_type(self, command):
"""
Test info for invalid command types.
"""
with pytest.raises(e.ParamError) as err_info:
self.as_connection.info_single_node(
command, self.connection_config['hosts'][0][:2])
@pytest.mark.parametrize(
"hostname",
(None, 5, ["localhost"], {}, 3000.0)
)
def test_info_single_node_for_invalid_hostname_type(self, hostname):
"""
Test info for invalid hostname types.
"""
with pytest.raises(e.ParamError) as err_info:
self.as_connection.info_single_node(
"info", hostname)
def test_info_single_node_positive_with_incorrect_policy(self):
"""
Test info with incorrect policy.
"""
host = "host"
policy = {
'timeout': 0.5
}
with pytest.raises(e.ParamError) as err_info:
self.as_connection.info_single_node('bins', host, policy)
assert err_info.value.code == -2
assert err_info.value.msg == "timeout is invalid"
@pytest.mark.parametrize("host",
([(3000, 3000)], [], '123.456:1000', 3000, None))
def test_info_single_node_positive_with_incorrect_host_type(self, host):
"""
Test info with incorrect host type.
"""
with pytest.raises(e.ParamError) as err_info:
self.as_connection.info_single_node('bins', host)
| 32.419214 | 95 | 0.626212 | 882 | 7,424 | 5.02381 | 0.183673 | 0.078989 | 0.110585 | 0.076732 | 0.592417 | 0.504401 | 0.451365 | 0.411871 | 0.40149 | 0.305349 | 0 | 0.011465 | 0.271552 | 7,424 | 228 | 96 | 32.561404 | 0.807877 | 0.103718 | 0 | 0.410072 | 0 | 0.007194 | 0.102265 | 0.012763 | 0 | 0 | 0 | 0 | 0.093525 | 1 | 0.122302 | false | 0.007194 | 0.064748 | 0 | 0.208633 | 0.007194 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cada338b322e830c14a3de91f1b4a499e17be809 | 7,494 | py | Python | buildroot/package/lmsmpris/src/mpris.py | mgrosso/hifiberry-os | 47bad0e9abea0b72914cbd8b13b2b3fdbe656c6f | [
"MIT"
] | 637 | 2016-12-05T15:13:49.000Z | 2022-03-25T21:28:09.000Z | buildroot/package/lmsmpris/src/mpris.py | mgrosso/hifiberry-os | 47bad0e9abea0b72914cbd8b13b2b3fdbe656c6f | [
"MIT"
] | 294 | 2018-10-10T15:58:17.000Z | 2022-03-29T09:35:36.000Z | buildroot/package/lmsmpris/src/mpris.py | mgrosso/hifiberry-os | 47bad0e9abea0b72914cbd8b13b2b3fdbe656c6f | [
"MIT"
] | 89 | 2018-06-04T05:18:24.000Z | 2022-03-15T13:46:12.000Z | import dbus
import time
import logging
from metadata import Metadata
PLAYING = "playing"
mpris = None
MPRIS_NEXT = "Next"
MPRIS_PREV = "Previous"
MPRIS_PAUSE = "Pause"
MPRIS_PLAYPAUSE = "PlayPause"
MPRIS_STOP = "Stop"
MPRIS_PLAY = "Play"
mpris_commands = [MPRIS_NEXT, MPRIS_PREV,
MPRIS_PAUSE, MPRIS_PLAYPAUSE,
MPRIS_STOP, MPRIS_PLAY]
def array_to_string(arr):
"""
Converts an array of objects to a comma separated string
"""
res = ""
for part in arr:
res = res + part + ", "
if len(res) > 1:
return res[:-2]
else:
return ""
class PlayerState:
"""
Internal representation of the state of a player
"""
def __init__(self, state="unknown", metadata=None):
self.state = state
if metadata is not None:
self.metadata = metadata
else:
self.metadata = Metadata()
def __str__(self):
return self.state + str(self.metadata)
class MPRISController:
"""
Controller for MPRIS enabled media players
"""
def __init__(self, auto_pause=True):
self.state_table = {}
self.bus = dbus.SystemBus()
self.auto_pause = auto_pause
self.metadata_displays = []
def register_metadata_display(self, mddisplay):
self.metadata_displays.append(mddisplay)
def metadata_notify(self, metadata):
for md in self.metadata_displays:
md.metadata(metadata)
def retrievePlayers(self):
"""
Returns a list of all MPRIS enabled players that are active in
the system
"""
return [name for name in self.bus.list_names()
if name.startswith("org.mpris")]
def retrieveState(self, name):
"""
Returns the playback state for the given player instance
"""
try:
proxy = self.bus.get_object(name, "/org/mpris/MediaPlayer2")
device_prop = dbus.Interface(
proxy, "org.freedesktop.DBus.Properties")
state = device_prop.Get("org.mpris.MediaPlayer2.Player",
"PlaybackStatus")
return state
except:
return None
def retrieveMeta(self, name):
"""
Return the metadata for the given player instance
"""
try:
proxy = self.bus.get_object(name, "/org/mpris/MediaPlayer2")
device_prop = dbus.Interface(
proxy, "org.freedesktop.DBus.Properties")
prop = device_prop.Get(
"org.mpris.MediaPlayer2.Player", "Metadata")
try:
artist = array_to_string(prop.get("xesam:artist"))
except:
artist = None
try:
title = str(prop.get("xesam:title"))
except:
title = None
try:
albumArtist = array_to_string(prop.get("xesam:albumArtist"))
except:
albumArtist = None
try:
albumTitle = str(prop.get("xesam:album"))
except:
albumTitle = None
try:
artURL = str(prop.get("mpris:artUrl"))
except:
artURL = None
try:
discNumber = str(prop.get("xesam:discNumber"))
except:
discNumber = None
try:
trackNumber = str(prop.get("xesam:trackNumber"))
except:
trackNumber = None
md = Metadata(artist, title, albumArtist, albumTitle,
artURL, discNumber, trackNumber)
md.playerName = self.playername(name)
md.fixProblems()
return md
except dbus.exceptions.DBusException as e:
logging.debug(e)
def mpris_command(self, playername, command):
if command in mpris_commands:
proxy = self.bus.get_object(playername,
"/org/mpris/MediaPlayer2")
player = dbus.Interface(
proxy, dbus_interface='org.mpris.MediaPlayer2.Player')
run_command = getattr(player, command,
lambda: "Unknown command")
return run_command()
else:
raise RuntimeError("MPRIS command {} not supported".format(
command))
def pause_inactive(self, active_player):
"""
Automatically pause other players if playback was started
on a new player
"""
for p in self.state_table:
if (p != active_player) and \
(self.state_table[p].state == PLAYING):
logging.info("Pausing " + self.playername(p))
self.mpris_command(p, MPRIS_PAUSE)
def pause_all(self):
for player in self.state_table:
self.mpris_command(player, MPRIS_PAUSE)
def print_players(self):
for p in self.state_table:
print(self.playername(p))
def playername(self, mprisname):
if (mprisname.startswith("org.mpris.MediaPlayer2.")):
return mprisname[23:]
else:
return mprisname
def main_loop(self):
"""
Main loop:
- monitors the state of all players
- pauses other players if a new player starts playback
"""
finished = False
md = Metadata()
active_players = set()
while not(finished):
new_player_started = None
for p in self.retrievePlayers():
if p not in self.state_table:
self.state_table[p] = PlayerState()
try:
state = self.retrieveState(p).lower()
except:
logging.info("Got no state from " + p)
state = "unknown"
self.state_table[p].state = state
# Check if playback started on a player that wasn't
# playing before
if state == PLAYING:
if (p not in active_players):
new_player_started = p
active_players.add(p)
md_old = self.state_table[p].metadata
md = self.retrieveMeta(p)
self.state_table[p].metadata = md
if md is not None:
if not(md.sameSong(md_old)):
self.metadata_notify(md)
else:
if p in active_players:
active_players.remove(p)
if new_player_started is not None:
if self.auto_pause:
logging.info(
"new player started, pausing other active players")
self.pause_inactive(new_player_started)
else:
logging.debug("auto-pause disabled")
time.sleep(0.2)
def __str__(self):
"""
String representation of the current state: all players,
playback state and meta data
"""
res = ""
for p in self.state_table:
res = res + "{:30s} - {:10s}: {}/{}\n".format(
self.playername(p),
self.state_table[p].state,
self.state_table[p].metadata.artist,
self.state_table[p].metadata.title)
return res
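# Minimal usage sketch (illustrative, not part of the original module): assumes a
# system D-Bus is reachable and at least one MPRIS-capable player is running.
if __name__ == "__main__":
    controller = MPRISController(auto_pause=True)
    controller.main_loop()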
| 29.388235 | 76 | 0.52442 | 764 | 7,494 | 5.014398 | 0.217277 | 0.039937 | 0.051162 | 0.031323 | 0.173584 | 0.12947 | 0.087706 | 0.067345 | 0.067345 | 0.067345 | 0 | 0.003704 | 0.38751 | 7,494 | 254 | 77 | 29.503937 | 0.830937 | 0.086603 | 0 | 0.22093 | 0 | 0 | 0.088684 | 0.03641 | 0 | 0 | 0 | 0 | 0 | 1 | 0.093023 | false | 0 | 0.023256 | 0.005814 | 0.19186 | 0.011628 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cadf2c0206812d71cdeb1d4e64b0d2ed452aaeca | 2,587 | py | Python | tests/nlu_core_tests/test_entire_namespace_unittest.py | UPbook-innovations/nlu | 2ae02ce7b6ca163f47271e98b71de109d38adefe | [
"Apache-2.0"
] | 1 | 2021-05-01T01:23:18.000Z | 2021-05-01T01:23:18.000Z | tests/nlu_core_tests/test_entire_namespace_unittest.py | adbmd/nlu | c9b52a4ae001505bdb09c22c9ebd7e5b2ccf3b54 | [
"Apache-2.0"
] | 2 | 2021-09-28T05:55:05.000Z | 2022-02-26T11:16:21.000Z | tests/nlu_core_tests/test_entire_namespace_unittest.py | atdavidpark/nlu | 619d07299e993323d83086c86506db71e2a139a9 | [
"Apache-2.0"
] | 1 | 2021-09-13T10:06:20.000Z | 2021-09-13T10:06:20.000Z | # content of test_expectation.py
import pytest
import nlu
from nose2.tools import params
import unittest
from parameterized import parameterized, parameterized_class
'''
Test every component in the NLU namespace.
This can take a very long time.
'''
all_default_references = []
i=0
for nlu_reference in nlu.NameSpace.component_alias_references.keys():
print('Adding default namespace test ', nlu_reference)
all_default_references.append((nlu_reference,i))
i+=1
all_pipe_references = []
for lang in nlu.NameSpace.pretrained_pipe_references.keys():
for nlu_reference in nlu.NameSpace.pretrained_pipe_references[lang] :
print('Adding pipe namespace test ', nlu_reference, ' and lang', lang)
all_pipe_references.append((nlu_reference,i))
i+=1
all_model_references = []
# because of pytest memory issues with large test suites we have to run the tests in batches
skip_to_test=118 # skips all tests
for lang in nlu.NameSpace.pretrained_models_references.keys():
for nlu_reference in nlu.NameSpace.pretrained_models_references[lang] :
print('Adding model namespace test ', nlu_reference, ' and lang', lang)
all_model_references.append((nlu_reference,i))
i+=1
class Test(unittest.TestCase):
# @params(all_default_references)
# @params(all_default_references)
@parameterized.expand(all_default_references)
def test_every_default_component(self,nlu_reference, id):
import nlu
print('TESTING NLU REFERENCE : ', nlu_reference)
df = nlu.load(nlu_reference).predict('What a wonderful day!')
print(df)
print(df.columns)
print('TESTING DONE FOR NLU REFERENCE : ', nlu_reference)
if __name__ == '__main__':
unittest.main()
# @pytest.mark.parametrize("nlu_ref,id",all_pipe_references)
# def test_every_default_component(nlu_ref,id):
# import nlu
# gc.collect()
# print( 'param =', nlu_ref)
# print('TESTING NLU REFERENCE : ', nlu_ref)
# if id < skip_to_test : return
# df = nlu.load(nlu_ref).predict('What a wonderful day!')
# print(df)
# print(df.columns)
#
# print('TESTING DONE FOR NLU REFERENCE : ', nlu_ref)
#
# @pytest.mark.parametrize("nlu_ref,id",all_model_references)
# def test_every_default_component(nlu_ref,id):
# import nlu
# gc.collect()
# print( 'param =', nlu_ref)
# print('TESTING NLU REFERENCE : ', nlu_ref)
# if id < skip_to_test : return
# df = nlu.load(nlu_ref).predict('What a wonderful day!')
# print(df)
# print(df.columns)
# print('TESTING DONE FOR NLU REFERENCE : ', nlu_ref)
#
| 33.166667 | 85 | 0.707383 | 352 | 2,587 | 4.965909 | 0.235795 | 0.130435 | 0.051487 | 0.05492 | 0.615561 | 0.600114 | 0.503432 | 0.449085 | 0.365561 | 0.30492 | 0 | 0.003788 | 0.18361 | 2,587 | 77 | 86 | 33.597403 | 0.823864 | 0.384615 | 0 | 0.142857 | 0 | 0 | 0.127187 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.171429 | 0 | 0.228571 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cae17f3ada10180582eda453dfd1179f17df81e8 | 2,158 | py | Python | format.py | suut/psychic-happiness | 56c0daef93f40b91e8a859f04a4660dde2b5c5c1 | [
"Unlicense"
] | 1 | 2020-02-25T10:47:00.000Z | 2020-02-25T10:47:00.000Z | format.py | suut/psychic-happiness | 56c0daef93f40b91e8a859f04a4660dde2b5c5c1 | [
"Unlicense"
] | null | null | null | format.py | suut/psychic-happiness | 56c0daef93f40b91e8a859f04a4660dde2b5c5c1 | [
"Unlicense"
] | null | null | null | #!/usr/bin/python3.4
# -*- coding: utf-8 -*-
from forbiddenfruit import curse
controlcodes = {'bold': '\x02',
'underlined': '\x1F',
'italic': '\x1D',
'reset': '\x0F',
'reverse': '\x16'}
colorlist = {'white': '00',
'black': '01',
'blue': '02',
'green': '03',
'red': '04',
'brown': '05',
'purple': '06',
'orange': '07',
'yellow': '08',
'lightgreen': '09',
             'bluegreen': '10',
'cyan': '11',
'lightblue': '12',
'magenta': '13',
'darkgrey': '14',
'grey': '15'}
class Color:
"""use it like that: color.red.blue.bold"""
def __init__(self):
self.values = []
def __getattr__(self, x):
# the magic function!
new = type(self)()
new.values = self.values[:] #make sure to copy the list and not to pass a reference of it
new.values.append(str(x))
return new
def __str__(self):
bg_color = ''
fg_color = ''
c_codes = []
for i in self.values:
if i in colorlist.keys():
if fg_color == '':
fg_color = '\x03'+colorlist[i]
elif bg_color == '':
bg_color = ','+colorlist[i]
else:
raise AttributeError('you can\'t specify more than 2 colors')
elif i in controlcodes.keys():
c_codes.append(controlcodes[i])
return fg_color+bg_color+''.join(c_codes)
def __repr__(self):
return self.__str__()
color = Color()
oldformat = str.format  # keep a reference to the original so the patched format() below can parse the color tags automatically
def newformat(self, *args, **kwargs):
if 'color' not in kwargs.keys(): # not including it twice
return oldformat(self, *args, color=color, **kwargs)
else:
return oldformat(self, *args, **kwargs)
curse(str, 'format', newformat)
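# Illustrative usage sketch (not part of the module above; the module name and the
# tag chains below follow the colorlist/controlcodes keys defined earlier):
#
#   import format            # importing this module patches str.format via curse()
#   print("{color.red.bold}error:{color.reset} connection lost".format())
#   print("{color.white.blue}status:{color.reset} reconnecting".format())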
| 29.972222 | 97 | 0.460612 | 221 | 2,158 | 4.357466 | 0.552036 | 0.029076 | 0.024922 | 0.047767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034589 | 0.397127 | 2,158 | 71 | 98 | 30.394366 | 0.705611 | 0.113068 | 0 | 0.036364 | 0 | 0 | 0.107725 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.018182 | 0.018182 | 0.218182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cae475cf15eaab315664af14b56874ea55231098 | 1,171 | py | Python | plugins/module_utils/ovh.py | PiMath22/infra-ovh-ansible-module | cbd9d1513f61001d1dcce56b1387fa0c9e3721f5 | [
"MIT"
] | 61 | 2017-03-28T09:38:19.000Z | 2022-03-16T00:38:25.000Z | plugins/module_utils/ovh.py | PiMath22/infra-ovh-ansible-module | cbd9d1513f61001d1dcce56b1387fa0c9e3721f5 | [
"MIT"
] | 30 | 2017-08-16T08:57:09.000Z | 2021-12-19T16:04:16.000Z | plugins/module_utils/ovh.py | PiMath22/infra-ovh-ansible-module | cbd9d1513f61001d1dcce56b1387fa0c9e3721f5 | [
"MIT"
] | 25 | 2017-05-30T10:25:32.000Z | 2022-01-24T10:31:16.000Z | from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
try:
import ovh
from ovh.exceptions import APIError
HAS_OVH = True
except ImportError:
HAS_OVH = False
def ovh_api_connect(module):
if not HAS_OVH:
module.fail_json(msg='python-ovh must be installed to use this module')
credentials = ['endpoint', 'application_key',
'application_secret', 'consumer_key']
credentials_in_parameters = [
cred in module.params for cred in credentials]
try:
if all(credentials_in_parameters):
client = ovh.Client(
**{credential: module.params[credential] for credential in credentials})
else:
client = ovh.Client()
except APIError as api_error:
module.fail_json(msg="Failed to call OVH API: {0}".format(api_error))
return client
def ovh_argument_spec():
return dict(
endpoint=dict(required=False, default=None),
application_key=dict(required=False, default=None),
application_secret=dict(required=False, default=None),
consumer_key=dict(required=False, default=None),
)
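# Illustrative usage sketch for an Ansible module built on these helpers (the module
# body and the '/me' call are placeholders, not part of this file):
#
#   from ansible.module_utils.basic import AnsibleModule
#
#   def main():
#       module = AnsibleModule(argument_spec=ovh_argument_spec(), supports_check_mode=True)
#       client = ovh_api_connect(module)
#       module.exit_json(changed=False, me=client.get('/me'))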
| 30.815789 | 88 | 0.671221 | 141 | 1,171 | 5.35461 | 0.439716 | 0.063576 | 0.090066 | 0.127152 | 0.18543 | 0.148344 | 0 | 0 | 0 | 0 | 0 | 0.001121 | 0.238258 | 1,171 | 37 | 89 | 31.648649 | 0.845291 | 0 | 0 | 0.064516 | 0 | 0 | 0.108454 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.129032 | 0.032258 | 0.258065 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cae4ec0d8b6fef60b4e8998414d13025c2d142e9 | 5,962 | py | Python | subs2vec/similarities.py | jvparidon/sub2vec | adb9e72b64dc6dbde3c2060ee0d3964ab623a149 | [
"MIT"
] | 19 | 2019-09-23T20:31:54.000Z | 2022-03-01T22:19:12.000Z | subs2vec/similarities.py | ClarasMind/subs2vec | adb9e72b64dc6dbde3c2060ee0d3964ab623a149 | [
"MIT"
] | 4 | 2019-11-26T21:12:53.000Z | 2022-03-29T02:46:25.000Z | subs2vec/similarities.py | ClarasMind/subs2vec | adb9e72b64dc6dbde3c2060ee0d3964ab623a149 | [
"MIT"
] | 2 | 2020-09-30T15:02:32.000Z | 2022-03-01T22:19:20.000Z | """Compute rank correlations between word vector cosine similarities and human ratings of semantic similarity."""
import numpy as np
import pandas as pd
import argparse
import os
import scipy.spatial.distance
import scipy.stats
from .vecs import Vectors
from .utensils import log_timer
import logging
logging.basicConfig(format='[{levelname}] {message}', style='{', level=logging.INFO)
path = os.path.dirname(__file__)
@log_timer
def compare_similarities(vectors, similarities):
"""Correlate vector similarities to human ratings of semantic similarity.
Computes cosine similarities, and uses rank (Spearman) correlation as a measure of similarity to the specified human ratings.
:param vectors: Vectors object containing word vectors.
:param similarities: pandas DataFrame of similarities, labeled word1, word2, and similarity
:return: dict containing score and predictions in separate pandas DataFrames
"""
vecs_dict = vectors.as_dict()
vecs_dsm = []
similarities_dsm = []
word1 = []
word2 = []
missing = 0
for index, pair in similarities.iterrows():
if all(word in vecs_dict.keys() for word in (pair['word1'], pair['word2'])):
vecs_dsm.append(1.0 - scipy.spatial.distance.cosine(vecs_dict[pair['word1']], vecs_dict[pair['word2']]))
similarities_dsm.append(pair['similarity'])
word1.append(pair['word1'])
word2.append(pair['word2'])
else:
missing += 1
total = len(similarities)
penalty = (total - missing) / total
score = scipy.stats.spearmanr(similarities_dsm, vecs_dsm)[0]
adjusted_score = scipy.stats.spearmanr(similarities_dsm, vecs_dsm)[0] * penalty
score = pd.DataFrame({'rank r': [score], 'adjusted rank r': [adjusted_score]})
predictions = pd.DataFrame({'word1': word1, 'word2': word2, 'similarity': similarities_dsm, 'predicted similarity': vecs_dsm})
return {'scores': score, 'predictions': predictions}
@log_timer
def evaluate_similarities(lang, vecs_fname):
"""Compute similarities for all available ratings datasets for a set of word vectors in a given language.
Writes scores to tab-separated text file but also returns them.
:param lang: language to evaluate word vectors in (uses two-letter ISO codes)
:param vecs_fname: word vectors to evaluate
:return: pandas DataFrame containing the similarities results
"""
similarities_path = os.path.join(path, 'datasets', 'similarities')
if not os.path.exists('results'):
os.mkdir('results')
results_path = os.path.join('results', 'similarities')
if not os.path.exists(results_path):
os.mkdir(results_path)
logging.info(f'evaluating semantic similarities with {vecs_fname}')
vectors = Vectors(vecs_fname, normalize=True, n=1e6, d=300)
scores = []
for similarities_fname in os.listdir(similarities_path):
if similarities_fname.startswith(lang):
logging.info(f'correlating similarities from {similarities_fname}')
similarities = pd.read_csv(os.path.join(similarities_path, similarities_fname), sep='\t', comment='#')
score = compare_similarities(vectors, similarities)['scores']
score['source'] = similarities_fname
scores.append(score)
scores_fname = os.path.split(vecs_fname)[1].replace('.vec', '.tsv')
if len(scores) > 0:
scores = pd.concat(scores)
scores.to_csv(os.path.join(results_path, scores_fname), sep='\t', index=False)
return scores
def novel_similarities(vecs_fname, words_fname):
"""Predict semantic similarities for novel word pairs, using word vectors.
Writes predictions to tab-separated text file.
:param vecs_fname: file containing word vectors to use for prediction.
    :param words_fname: file containing word pairs in tab-separated columns named 'word1' and 'word2'
"""
logging.info(f'predicting novel semantic similarities with {vecs_fname}')
vectors = Vectors(vecs_fname, normalize=True, n=1e6, d=300)
df_words = pd.read_csv(words_fname, sep='\t', comment='#')
df_words = compute_similarities(vectors, df_words)
base_fname = '.'.join(words_fname.split('.')[:-1])
df_words.to_csv(f'{base_fname}.predictions.tsv', sep='\t', index=False)
def compute_similarities(vectors, df_words):
"""Compute semantic similarities for novel word pairs.
:param vectors: Vectors object containing word vectors
:param df_words: pandas DataFrame containing word pairs in columns labeled 'word1' and 'word2'
:return: pandas DataFrame containing word pairs and cosine similarities in a column labeled 'similarity'
"""
vecs_dict = vectors.as_dict()
df_words['similarity'] = df_words.apply(lambda x: 1.0 - scipy.spatial.distance.cosine(vecs_dict.get(x['word1'], np.nan),
vecs_dict.get(x['word2'], np.nan)),
axis=1)
return df_words
if __name__ == '__main__':
argparser = argparse.ArgumentParser(description='compute rank correlations between word vector cosine similarities and human semantic similarity ratings')
argparser.add_argument('lang', help='language to compare simarities in (uses two-letter ISO language codes)')
argparser.add_argument('vecs_fname', help='word vectors to evaluate')
argparser.add_argument('--novel_similarities', help='file containing tab-separated word pairs')
args = argparser.parse_args()
if args.novel_similarities:
        novel_similarities(vecs_fname=args.vecs_fname, words_fname=args.novel_similarities)
else:
print(evaluate_similarities(lang=args.lang, vecs_fname=args.vecs_fname))
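# Example invocations (a sketch; the vector and word-pair file names are placeholders):
#   python -m subs2vec.similarities en wiki.en.vec
#   python -m subs2vec.similarities en wiki.en.vec --novel_similarities pairs.tsv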
| 48.080645 | 159 | 0.679638 | 727 | 5,962 | 5.437414 | 0.233838 | 0.031875 | 0.010119 | 0.015178 | 0.267898 | 0.177081 | 0.158361 | 0.140147 | 0.09613 | 0.07235 | 0 | 0.009424 | 0.216874 | 5,962 | 123 | 160 | 48.471545 | 0.837224 | 0.249748 | 0 | 0.1 | 0 | 0 | 0.167137 | 0.006591 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.1125 | 0 | 0.2 | 0.0125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cae50b168c1643c324a44991b116ba28feaccea9 | 1,772 | py | Python | projectq/setups/ibm16_test.py | gavarela/ProjectQ | f2cec3cc1ea3775edd77c898ed8802af9f802533 | [
"Apache-2.0"
] | 2 | 2019-06-18T11:58:23.000Z | 2020-04-26T20:39:32.000Z | projectq/setups/ibm16_test.py | gavarela/ProjectQ | f2cec3cc1ea3775edd77c898ed8802af9f802533 | [
"Apache-2.0"
] | 8 | 2019-08-13T11:22:36.000Z | 2019-11-19T15:47:05.000Z | projectq/setups/ibm16_test.py | gavarela/ProjectQ | f2cec3cc1ea3775edd77c898ed8802af9f802533 | [
"Apache-2.0"
] | 1 | 2020-08-16T22:40:08.000Z | 2020-08-16T22:40:08.000Z | # Copyright 2017 ProjectQ-Framework (www.projectq.ch)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for projectq.setup.ibm16."""
import projectq
import projectq.setups.ibm16
from projectq import MainEngine
from projectq.cengines import GridMapper, SwapAndCNOTFlipper, DummyEngine
from projectq.libs.math import AddConstant
from projectq.ops import QFT, get_inverse
def test_mappers_in_cengines():
found = 0
for engine in projectq.setups.ibm16.get_engine_list():
if isinstance(engine, GridMapper):
found |= 1
if isinstance(engine, SwapAndCNOTFlipper):
found |= 2
assert found == 3
def test_high_level_gate_set():
mod_list = projectq.setups.ibm16.get_engine_list()
saving_engine = DummyEngine(save_commands=True)
mod_list = mod_list[:6] + [saving_engine] + mod_list[6:]
eng = MainEngine(DummyEngine(),
engine_list=mod_list)
qureg = eng.allocate_qureg(3)
AddConstant(3) | qureg
QFT | qureg
eng.flush()
received_gates = [cmd.gate for cmd in saving_engine.received_commands]
assert sum([1 for g in received_gates if g == QFT]) == 1
assert get_inverse(QFT) not in received_gates
assert AddConstant(3) not in received_gates
| 36.916667 | 76 | 0.719526 | 246 | 1,772 | 5.065041 | 0.455285 | 0.048154 | 0.045746 | 0.025682 | 0.051364 | 0.051364 | 0 | 0 | 0 | 0 | 0 | 0.019041 | 0.199774 | 1,772 | 47 | 77 | 37.702128 | 0.859662 | 0.353273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.071429 | false | 0 | 0.214286 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cae5ebe8c6d1f42aa7dd3dd33da4b132a54dd359 | 8,345 | py | Python | module/tokenizer.py | irebai/wav2vec2 | 461dc98292ad0dc7aa6f76a1d8d923f84a758d4f | [
"Apache-2.0"
] | 3 | 2021-06-02T22:30:57.000Z | 2022-03-02T15:31:04.000Z | module/tokenizer.py | irebai/wav2vec2 | 461dc98292ad0dc7aa6f76a1d8d923f84a758d4f | [
"Apache-2.0"
] | 1 | 2022-03-16T14:21:11.000Z | 2022-03-16T14:21:11.000Z | module/tokenizer.py | irebai/wav2vec2 | 461dc98292ad0dc7aa6f76a1d8d923f84a758d4f | [
"Apache-2.0"
] | null | null | null | import sentencepiece as spm
import logging
import json
import sys
import os
import re
import copy
from typing import Any, Dict, List, Optional, Union
from transformers import Wav2Vec2CTCTokenizer
from itertools import groupby
logger = logging.getLogger(__name__)
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
logger.setLevel(logging.INFO)
class Wav2Vec2CTCTokenizer_SP(Wav2Vec2CTCTokenizer):
def __init__(
self,
vocab_file,
model_file,
bos_token="<s>",
eos_token="</s>",
unk_token="<unk>",
pad_token="<pad>",
do_lower_case=False,
do_normalize=False,
word_delimiter_token='▁',
return_attention_mask=False,
**kwargs
):
super().__init__(
vocab_file,
unk_token=unk_token,
bos_token=bos_token,
eos_token=eos_token,
pad_token=pad_token,
word_delimiter_token=word_delimiter_token,
do_lower_case=do_lower_case,
do_normalize=do_normalize,
return_attention_mask=return_attention_mask,
**kwargs
)
if not os.path.isfile(model_file):
raise ValueError("Tokenizer is not found!")
logger.info("Loading Tokenizer")
self.sp = spm.SentencePieceProcessor()
self.sp.load(model_file)
@property
def vocab_size(self) -> int:
return self.sp.get_piece_size()
    def get_vocab(self) -> Dict:
        # Build the piece-to-id mapping from the sentencepiece model.
        return {self.sp.id_to_piece(i): i for i in range(self.sp.get_piece_size())}
def _tokenize(self, text, **kwargs):
"""
Converts a string in a sequence of tokens (string), using the tokenizer.
"""
if self.do_lower_case:
text = text.upper()
tokens_list = self.sp.encode_as_pieces(text)
return tokens_list
def _convert_token_to_id(self, token: str) -> int:
"""Converts a token (str) in an index (integer) using the vocab."""
return self.sp.piece_to_id(token)
def _convert_id_to_token(self, index: int) -> str:
"""Converts an index (integer) in a token (str) using the vocab."""
return self.sp.id_to_piece(index)
def convert_tokens_to_string(
self, tokens: List[str], group_tokens: bool = True, spaces_between_special_tokens: bool = False
) -> str:
"""
Converts a connectionist-temporal-classification (CTC) output tokens into a single string.
"""
# group same tokens into non-repeating tokens in CTC style decoding
if group_tokens:
tokens = [token_group[0] for token_group in groupby(tokens)]
# filter self.pad_token which is used as CTC-blank token
filtered_tokens = list(filter(lambda token: token != self.pad_token, tokens))
string = self.sp.decode(filtered_tokens).replace(' ⁇ ','<unk>')
if self.do_lower_case:
string = string.lower()
return string
@classmethod
def train_sentencepiece(
cls,
train_file,
model_dir,
vocab_size,
model_type="unigram",
character_coverage=1.0,
user_defined_symbols=None,
max_sentencepiece_length=10,
bos_id=-1,
eos_id=-1,
pad_id=-1,
unk_id=0,
split_by_whitespace=True,
vocab=None,
**kwargs
):
# Special tokens
bos_token='<s>'
eos_token='</s>'
pad_token='<pad>'
unk_token='<unk>'
if model_type not in ["unigram", "bpe", "char"]:
raise ValueError("model_type must be one of : [unigram, bpe, char]")
if not os.path.isdir(model_dir):
os.makedirs(model_dir)
if not isinstance(vocab_size, int):
raise ValueError("vocab_size must be integer.")
prefix_model_file = os.path.join(
model_dir, str(vocab_size) + "_" + model_type
)
if not os.path.isfile(prefix_model_file + ".model"):
logger.info("Train tokenizer with type:" + model_type)
if vocab is not None:
logger.info("Clean train data using the specified vocab: ["+' '.join(vocab)+"]")
vocab_regex = f"[{re.escape(''.join(vocab))}]"
with open(train_file) as f:
corpus = f.readlines()
lines = []
with open(prefix_model_file + '.txt', 'w') as f:
for text in corpus:
line = re.sub(vocab_regex, '', text.strip())
if len(line) == 0:
f.write(text)
train_file = prefix_model_file + '.txt'
vocab_size = str(vocab_size)
character_coverage = str(character_coverage)
max_sentencepiece_length = str(max_sentencepiece_length)
bos_id = str(bos_id)
eos_id = str(eos_id)
pad_id = str(pad_id)
unk_id = str(unk_id)
split_by_whitespace = split_by_whitespace
query = (
"--input="
+ train_file
+ " --model_prefix="
+ prefix_model_file
+ " --model_type="
+ model_type
+ " --bos_id="
+ bos_id
+ " --eos_id="
+ eos_id
+ " --pad_id="
+ pad_id
+ " --unk_id="
+ unk_id
+ " --max_sentencepiece_length="
+ max_sentencepiece_length
+ " --character_coverage="
+ character_coverage
)
if model_type not in ["char"]:
# include vocab_size
query += " --vocab_size=" + str(vocab_size)
if user_defined_symbols is not None:
query += " --user_defined_symbols=" + user_defined_symbols
if not split_by_whitespace:
query += " --split_by_whitespace=false"
# Train tokenizer
spm.SentencePieceTrainer.train(query)
# Save Train query
with open(prefix_model_file + ".query", "w") as f:
f.write(query)
# Save special tokens
vocab_dict = {}
vocab_dict[bos_token]=bos_id
vocab_dict[eos_token]=eos_id
vocab_dict[pad_token]=pad_id
vocab_dict[unk_token]=unk_id
vocab_dict = {k: v for k, v in vocab_dict.items() if v != -1}
with open(prefix_model_file+".tokens.json", "w") as vocab_file:
json.dump(vocab_dict, vocab_file)
else:
logger.info("Tokenizer is already trained.")
return cls(
prefix_model_file+".tokens.json",
prefix_model_file + ".model",
**kwargs
)
class Wav2Vec2CTCTokenizer_CHAR(Wav2Vec2CTCTokenizer):
@classmethod
def set_vocab(
cls,
vocab_file,
vocab,
unk_id,
pad_id,
bos_id=None,
eos_id=None,
do_punctuation=False,
**kwargs
):
word_delimiter_token = '|'
logger.info("Prepare vocabulary")
# Prepare Vocab
vocab_list = copy.deepcopy(vocab)
vocab_list.remove(' ') # remove space and replace it by word_delimiter_token
vocab_list += [word_delimiter_token]
if do_punctuation:
vocab_list += ['.',',','!','?']
# Insert special tokens
vocab_list.insert(unk_id, '<unk>')
vocab_list.insert(pad_id, '<pad>')
if bos_id is not None:
vocab_list.insert(bos_id, '<bos>')
if eos_id is not None:
vocab_list.insert(eos_id, '<eos>')
vocab_dict = {v: k for k, v in enumerate(vocab_list)}
with open(vocab_file, "w") as file:
json.dump(vocab_dict, file)
return cls(
vocab_file,
bos_token="<bos>",
eos_token="<eos>",
unk_token="<unk>",
pad_token="<pad>",
word_delimiter_token=word_delimiter_token,
do_lower_case=False,
**kwargs
)
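# Illustrative usage sketch (the file paths and the character vocabulary below are
# placeholders, not values taken from this repository):
#
#   sp_tok = Wav2Vec2CTCTokenizer_SP.train_sentencepiece(
#       train_file="corpus.txt", model_dir="sp_model",
#       vocab_size=500, model_type="unigram")
#   char_tok = Wav2Vec2CTCTokenizer_CHAR.set_vocab(
#       vocab_file="vocab.json", vocab=list("abcdefghijklmnopqrstuvwxyz' "),
#       unk_id=0, pad_id=1)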
| 31.254682 | 103 | 0.545956 | 947 | 8,345 | 4.542767 | 0.214361 | 0.025105 | 0.031381 | 0.016039 | 0.131102 | 0.064156 | 0.033938 | 0.02185 | 0.02185 | 0 | 0 | 0.003876 | 0.350749 | 8,345 | 266 | 104 | 31.37218 | 0.78959 | 0.069982 | 0 | 0.138756 | 0 | 0 | 0.09072 | 0.016506 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043062 | false | 0 | 0.047847 | 0.004785 | 0.133971 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caee7405f72c31710c4a76b26d18b5e0202497d9 | 2,794 | py | Python | tests/unit/app/test_session.py | cswarth/whylogs | 6805b252f1d07efde84836d3924949f7ec2d97b1 | [
"Apache-2.0"
] | 603 | 2020-07-31T23:26:10.000Z | 2022-03-31T23:05:36.000Z | tests/unit/app/test_session.py | cswarth/whylogs | 6805b252f1d07efde84836d3924949f7ec2d97b1 | [
"Apache-2.0"
] | 284 | 2021-03-02T21:28:03.000Z | 2022-03-31T22:36:08.000Z | tests/unit/app/test_session.py | cswarth/whylogs | 6805b252f1d07efde84836d3924949f7ec2d97b1 | [
"Apache-2.0"
] | 39 | 2020-08-14T21:22:08.000Z | 2022-03-29T20:24:54.000Z | import pytest
from whylogs.app.config import SessionConfig
from whylogs.app.session import (
Session,
get_or_create_session,
get_session,
reset_default_session,
session_from_config,
)
def test_get_global_session():
session = get_or_create_session()
global_session = get_session()
assert session == global_session
def test_reset():
get_or_create_session()
reset_default_session()
global_session = get_session()
assert global_session.project is not None
def test_session_log_dataframe(df):
session = session_from_config(SessionConfig("default-project", "default-pipeline", [], False))
session.log_dataframe(df)
assert session.logger() is not None
assert session.logger("default-project").dataset_name == "default-project"
def test_session_profile(df):
session = session_from_config(SessionConfig("default-project", "default-pipeline", [], False))
profile = session.log_dataframe(df)
assert profile is not None
summary = profile.flat_summary()
flat_summary = summary["summary"]
assert len(flat_summary) == 4
def test_profile_df(df):
import datetime
session = get_or_create_session()
dt = datetime.datetime.now(datetime.timezone.utc)
log_profile = session.log_dataframe(df, dataset_timestamp=dt)
profile = session.profile_dataframe(df, dataset_timestamp=dt)
assert log_profile.name == profile.name
assert log_profile.dataset_timestamp == profile.dataset_timestamp
assert log_profile.session_timestamp == profile.session_timestamp
assert len(profile.columns) == 4
assert len(log_profile.tags) == 1
assert len(profile.tags) == 2
def test_close_session(df):
session = get_or_create_session()
session.close()
assert session.is_active() == False
log_profile = session.log_dataframe(df)
assert log_profile == None
profile = session.profile_dataframe(df)
assert profile == None
profile = session.new_profile(df)
assert profile == None
with pytest.raises(RuntimeError):
session.logger()
def test_session_default():
session = Session()
assert session.is_active() == True, "Newly created default session is expected to be active."
assert session.project == "", "project should be optional and default to empty string."
assert session.pipeline == "", "pipeline should be optional and default to empty string."
def test_logger_cache():
session = get_or_create_session()
with session.logger("cache-test", with_rotation_time="s") as logger:
logger.log({"name": 1})
session.close()
def test_remove_logger():
session = get_or_create_session()
session.logger("default-project")
with pytest.raises(KeyError):
session.remove_logger("test")
| 27.126214 | 98 | 0.721546 | 353 | 2,794 | 5.467422 | 0.195467 | 0.046632 | 0.039896 | 0.065285 | 0.351295 | 0.237306 | 0.11399 | 0.11399 | 0.073575 | 0.073575 | 0 | 0.002179 | 0.178597 | 2,794 | 102 | 99 | 27.392157 | 0.83878 | 0 | 0 | 0.185714 | 0 | 0 | 0.107015 | 0 | 0 | 0 | 0 | 0 | 0.271429 | 1 | 0.128571 | false | 0.014286 | 0.057143 | 0 | 0.185714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caef0d0629c15343e9df7473db70cc7cfee8bd44 | 863 | py | Python | paper/scripts/parse_hg19_Genome_FASTA.py | CEGRcode/GenoPipe | 76c2bd553582a51aa1f83a3b5e916663dd8f1e18 | [
"MIT"
] | null | null | null | paper/scripts/parse_hg19_Genome_FASTA.py | CEGRcode/GenoPipe | 76c2bd553582a51aa1f83a3b5e916663dd8f1e18 | [
"MIT"
] | null | null | null | paper/scripts/parse_hg19_Genome_FASTA.py | CEGRcode/GenoPipe | 76c2bd553582a51aa1f83a3b5e916663dd8f1e18 | [
"MIT"
] | null | null | null | import sys
import argparse
valid_chr = ["chr1", "chr2",
"chr3", "chr4",
"chr5", "chr6",
"chr7", "chr8",
"chr9", "chr10",
"chr11", "chr12",
"chr13", "chr14",
"chr15", "chr16",
"chr17", "chr18",
"chr19", "chr20",
"chr21", "chr22",
"chrM",
"chrX", "chrY"]
def get_params():
parser = argparse.ArgumentParser(description="parse hg19 to strip out unwanted chromosomes (haplotype chr) and write new FSTA to STDOUT")
parser.add_argument(dest="genome_fn", help="input file with two matrices", metavar="GENOME.fa")
return(parser.parse_args())
if __name__=='__main__':
args = get_params()
valid =False
reader = open(args.genome_fn,'r')
line = None
while( line!="" ):
line = reader.readline().strip()
if( line[1:] in valid_chr ):
valid=True
elif( line.find('>')==0 ):
valid=False
if(valid):
sys.stdout.write(line+'\n')
reader.close()
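# Example invocation (hypothetical file names; the script reads the FASTA path from
# the command line and writes the filtered FASTA to stdout):
#   python parse_hg19_Genome_FASTA.py hg19.fa > hg19.primary_chromosomes.fa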
| 23.324324 | 138 | 0.64774 | 117 | 863 | 4.641026 | 0.709402 | 0.029466 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053279 | 0.151796 | 863 | 36 | 139 | 23.972222 | 0.688525 | 0 | 0 | 0.060606 | 0 | 0 | 0.301275 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.060606 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caef9773708a6b2a6663f3f50808e394425a0baf | 1,742 | py | Python | demos/python_demos/image_retrieval_demo/image_retrieval_demo/roi_detector_on_video.py | evgeny-izutov/open_model_zoo | 2cd6145ef342fc9b7ccf32676af73f4a1cb8d9ba | [
"Apache-2.0"
] | 5 | 2020-03-09T07:39:04.000Z | 2021-08-16T07:17:28.000Z | demos/python_demos/image_retrieval_demo/image_retrieval_demo/roi_detector_on_video.py | evgeny-izutov/open_model_zoo | 2cd6145ef342fc9b7ccf32676af73f4a1cb8d9ba | [
"Apache-2.0"
] | 8 | 2020-09-26T00:40:13.000Z | 2022-03-12T00:14:30.000Z | demos/python_demos/image_retrieval_demo/image_retrieval_demo/roi_detector_on_video.py | evgeny-izutov/open_model_zoo | 2cd6145ef342fc9b7ccf32676af73f4a1cb8d9ba | [
"Apache-2.0"
] | 4 | 2020-04-21T17:31:00.000Z | 2021-10-18T07:04:49.000Z | """
Copyright (c) 2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import cv2
from image_retrieval_demo.roi_cv_detector.detect_by_simple_dense_optical_flow import RoiDetector, \
get_rect_tl, get_rect_br
class RoiDetectorOnVideo:
""" This class detects moving ROI on videos. """
def __init__(self, path):
if not os.path.exists(path):
raise Exception('File not found: {}'.format(path))
self.cap = cv2.VideoCapture(path)
self.frame_step = 5
self.roi_detector = RoiDetector(self.frame_step)
def __iter__(self):
return self
def __next__(self):
""" Returns cropped frame (ROI) and original frame with ROI drawn as a rectangle. """
_, frame = self.cap.read()
if frame is None:
raise StopIteration
view_frame = frame.copy()
bbox = self.roi_detector.handle_frame(frame)
if bbox is not None:
tl_x, tl_y = get_rect_tl(bbox)
br_x, br_y = get_rect_br(bbox)
frame = frame[tl_y:br_y, tl_x:br_x]
cv2.rectangle(view_frame, (tl_x, tl_y), (br_x, br_y), (255, 0, 255), 20)
else:
frame = None
return frame, view_frame
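# Illustrative usage sketch ("video.mp4" is a placeholder path):
#
#   for roi_frame, view_frame in RoiDetectorOnVideo("video.mp4"):
#       if roi_frame is not None:
#           cv2.imshow("ROI", roi_frame)
#       cv2.imshow("Detection", view_frame)
#       if cv2.waitKey(1) == 27:  # stop on Esc
#           break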
| 30.034483 | 99 | 0.668197 | 254 | 1,742 | 4.385827 | 0.484252 | 0.05386 | 0.023339 | 0.028725 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01608 | 0.250287 | 1,742 | 57 | 100 | 30.561404 | 0.836907 | 0.392078 | 0 | 0 | 0 | 0 | 0.017493 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0.037037 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caf41dbd85255511abe44215a30940d32474994c | 519 | py | Python | produtos.py | rodrigomoreira777/POO2021.2 | 7c5d45148882337e4d72258f50f14eda8d42d1df | [
"MIT"
] | null | null | null | produtos.py | rodrigomoreira777/POO2021.2 | 7c5d45148882337e4d72258f50f14eda8d42d1df | [
"MIT"
] | null | null | null | produtos.py | rodrigomoreira777/POO2021.2 | 7c5d45148882337e4d72258f50f14eda8d42d1df | [
"MIT"
] | null | null | null | import ferramentas_usuarios as fe
class Produto:
codigo = fe.gera_codigo()
def __init__(self, data_ger, prazo_ent, inicio_pro, final_proc, inicio_cont, final_cont):
"""Cria um banco de dados baseado em um arquivo TXT"""
self.data_geracao = data_ger
self.prazo_entrega = prazo_ent
self.inicio_processo = inicio_pro
self.final_processo = final_proc
self.inicio_controle = inicio_cont
self.final_controle = final_cont
self.codigo = Produto.codigo
| 32.4375 | 93 | 0.695568 | 70 | 519 | 4.814286 | 0.485714 | 0.077151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.236994 | 519 | 15 | 94 | 34.6 | 0.85101 | 0.092486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caf71c9e764e7f9cc0a44834ec06b0816dd53183 | 1,108 | py | Python | data_processing.py | lanlehoang67/Galaxies-Classification | 2ee35a418e1e93529b3c0522a6f0f07be6818a51 | [
"MIT"
] | null | null | null | data_processing.py | lanlehoang67/Galaxies-Classification | 2ee35a418e1e93529b3c0522a6f0f07be6818a51 | [
"MIT"
] | null | null | null | data_processing.py | lanlehoang67/Galaxies-Classification | 2ee35a418e1e93529b3c0522a6f0f07be6818a51 | [
"MIT"
] | null | null | null | import warnings
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from PIL import Image
solutions = pd.read_csv('D:\\code\\galaxy-zoo-the-galaxy-challenge\\training_solutions_rev1.csv', sep = ',')
solutions = solutions.loc[solutions["Class1.3"] < 0.5]
solutions = solutions.loc[:, ["GalaxyID", "Class1.1", "Class1.2"]]
solutions.columns = ["GalaxyID", "Smooth", "Class1.2"]
solutions.loc[solutions.Smooth > 0.5, "Smooth"] = 1
solutions.loc[solutions.Smooth <= 0.5, "Smooth"] = 0
solutions = solutions.loc[:, ["GalaxyID", "Smooth"]]
solutions = solutions.astype({"GalaxyID" : int, "Smooth" : int})
solutions.to_csv("D:\\code\\galaxy-zoo-the-galaxy-challenge\\solutions.csv", sep = ",", encoding = "utf-8")
def cropSave(imgID):
img = Image.open("D:\\code\\galaxy-zoo-the-galaxy-challenge\\images_training_rev1\\images_training_rev1\\" + str(imgID) + ".jpg")
img = img.resize(box=(112, 112, 312, 312), size=(128,128))
img.save("D:\\code\\galaxy-zoo-the-galaxy-challenge\\images_modified\\" + str(imgID) + ".jpg", "jpeg")
for imgID in solutions["GalaxyID"]:
cropSave(imgID) | 52.761905 | 133 | 0.695848 | 156 | 1,108 | 4.884615 | 0.384615 | 0.07874 | 0.057743 | 0.073491 | 0.283465 | 0.283465 | 0.283465 | 0.191601 | 0 | 0 | 0 | 0.038306 | 0.104693 | 1,108 | 21 | 134 | 52.761905 | 0.729839 | 0 | 0 | 0 | 0 | 0 | 0.355275 | 0.246168 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.25 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
caf8c9497a6a18b4424ce269c5bc4d42454d82d8 | 4,155 | py | Python | methods/arope_main.py | aida-ugent/graph-vis-eval | 8e15737542488c3abe1cbdf7da4dbd87011a0d8c | [
"MIT"
] | null | null | null | methods/arope_main.py | aida-ugent/graph-vis-eval | 8e15737542488c3abe1cbdf7da4dbd87011a0d8c | [
"MIT"
] | null | null | null | methods/arope_main.py | aida-ugent/graph-vis-eval | 8e15737542488c3abe1cbdf7da4dbd87011a0d8c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# This file was originally developed for EvalNE by Alexandru Cristian Mara
import argparse
import numpy as np
import networkx as nx
import ast
import time
from utils import AROPE
# Although the method says it requires python 3.5, it should be run with python 2.7 for optimal results!
def parse_args():
""" Parses AROPE arguments. """
parser = argparse.ArgumentParser(description="Run AROPE.")
parser.add_argument('--inputgraph', nargs='?',
default='BlogCatalog.csv',
help='Input graph path')
parser.add_argument('--output', nargs='?',
default='network.emb',
help='Path where the embeddings will be stored.')
parser.add_argument('--tr_e', nargs='?', default=None,
help='Path of the input train edges. Default None (in this case returns embeddings)')
parser.add_argument('--tr_pred', nargs='?', default='tr_pred.csv',
help='Path where the train predictions will be stored.')
parser.add_argument('--te_e', nargs='?', default=None,
help='Path of the input test edges. Default None.')
parser.add_argument('--te_pred', nargs='?', default='te_pred.csv',
help='Path where the test predictions will be stored.')
parser.add_argument('--dimension', type=int, default=2,
                        help='Embedding dimension. Default is 2.')
parser.add_argument('--delimiter', default=',',
help='The delimiter used to separate numbers in input file. Default is `,`')
parser.add_argument('--order', type=int, default=3,
help='Order of the proximity. Default is 3.')
parser.add_argument('--weights', default='[1,0.1,0.01]',
help='The weights for high-order proximity as list of len `order`. Default is `[1,0.1,0.01]`.')
return parser.parse_args()
def main(args):
""" Compute embeddings using AROPE. """
# Load edgelist
oneIndx = False
E = np.loadtxt(args.inputgraph, delimiter=args.delimiter, dtype=int)
if np.min(E) == 1:
oneIndx = True
E -= 1
# Create a graph
G = nx.Graph()
# Make sure the graph is unweighted
G.add_edges_from(E[:, :2])
# Get adj matrix of the graph and symmetrize
tr_A = nx.adjacency_matrix(G, weight=None)
# Compute embeddings
weights = ast.literal_eval(args.weights)
U_list, V_list = AROPE(tr_A, args.dimension, [args.order], [weights])
# AROPE sometimes returns more dimensions than specified. This depends on
# the dataset. Here we make sure the embedding dimension is what we expect.
U_list[0] = U_list[0][:, : args.dimension]
V_list[0] = V_list[0][:, : args.dimension]
start = time.time()
# Read the train edges and compute simmilarity
if args.tr_e is not None:
train_edges = np.loadtxt(args.tr_e, delimiter=args.delimiter, dtype=int)
if oneIndx:
train_edges -= 1
scores = list()
for src, dst in train_edges:
scores.append(U_list[0][src].dot(V_list[0][dst].T))
np.savetxt(args.tr_pred, scores, delimiter=args.delimiter)
# Read the test edges and run predictions
if args.te_e is not None:
test_edges = np.loadtxt(args.te_e, delimiter=args.delimiter, dtype=int)
if oneIndx:
test_edges -= 1
scores = list()
for src, dst in test_edges:
scores.append(U_list[0][src].dot(V_list[0][dst].T))
np.savetxt(args.te_pred, scores, delimiter=args.delimiter)
    # If no test edge list is provided, just store the embeddings
else:
np.savetxt(args.output, U_list[0], delimiter=args.delimiter)
if __name__ == "__main__":
args = parse_args()
main(args)
| 36.447368 | 119 | 0.612996 | 556 | 4,155 | 4.47482 | 0.318345 | 0.036174 | 0.068328 | 0.019293 | 0.229502 | 0.198955 | 0.155949 | 0.123794 | 0.069936 | 0.041801 | 0 | 0.01224 | 0.272443 | 4,155 | 113 | 120 | 36.769912 | 0.810784 | 0.172082 | 0 | 0.088235 | 0 | 0.014706 | 0.192792 | 0.006153 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0 | 0.088235 | 0 | 0.132353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cafa51ff6545b99480852d33b0550b9abe3eb455 | 15,670 | py | Python | mygrid/power_flow/backward_forward_sweep_3p.py | grei-ufc/MyGrid | c42fe9d4e0838f253dd5b0716cbf0c892136d931 | [
"MIT"
] | 3 | 2018-03-02T11:18:51.000Z | 2021-07-30T23:22:18.000Z | mygrid/power_flow/backward_forward_sweep_3p.py | grei-ufc/MyGrid | c42fe9d4e0838f253dd5b0716cbf0c892136d931 | [
"MIT"
] | 8 | 2017-11-06T12:15:15.000Z | 2019-04-29T13:41:27.000Z | mygrid/power_flow/backward_forward_sweep_3p.py | grei-ufc/MyGrid | c42fe9d4e0838f253dd5b0716cbf0c892136d931 | [
"MIT"
] | 4 | 2019-03-20T11:26:04.000Z | 2021-06-03T19:28:05.000Z | # -*- coding: utf-8 -*-
import copy
from mygrid.util import Phasor, R, P
from mygrid.util import r2p, p2r
from mygrid.grid import Section, TransformerModel, Auto_TransformerModel
import numpy as np
# from pycallgraph import PyCallGraph
# from pycallgraph.output import GraphvizOutput
from numba import jit
from functools import wraps
import time
# def calc_power_flow_profiling(dist_grid):
# graphviz = GraphvizOutput()
# graphviz.output_file = 'basic.png'
# with PyCallGraph(output=graphviz):
# calc_power_flow(dist_grid)
def calc_power_flow(dist_grid, max_iterations = 100, converg_crt = 0.001, converg = 1e6) :
# -------------------------
# variables declarations
# -------------------------
iter = 0
#print('============================')
#print('Distribution Grid {dg} Sweep'.format(dg=dist_grid.name))
nodes_converg = dict()
for node in dist_grid.load_nodes.values():
nodes_converg[node.name] = 1e6
    # compute the maximum depth of the network
max_depth = np.max(dist_grid.load_nodes_tree.rnp.transpose()\
[:, 0].astype(int))
nodes_depth_dict = _make_nodes_depth_dictionary(dist_grid)
# -------------------------------------
# main loop for power flow calculation
# -------------------------------------
while iter <=max_iterations and converg > converg_crt:
iter += 1
#print('<<<<-----------BFS------------>>>>')
#print('Iteration: {iter}'.format(iter=iter))
for node in dist_grid.load_nodes.values():
node._calc_currents()
# ----------------------------------
# back-forward sweep implementation
# ----------------------------------
converg=_dist_grid_sweep(dist_grid, max_depth, nodes_depth_dict)
# -------------------------
# convergence verification
# -------------------------
# a=time.time()
# for node in dist_grid.load_nodes.values():
# nodes_converg[node.name] = np.mean(abs(voltage_nodes[node.name] - node.vp))
# converg = max(nodes_converg.values())
# b=time.time()
# time_total +=b-a
# print('Max. diff between load nodes voltage values: {conv}'.format(conv=converg))
# -------------------------
        # check the voltages at the
        # PV buses (if any).
# -------------------------
calc=False
for sections in dist_grid.sections.values():
if isinstance(sections.transformer, Auto_TransformerModel) and not(\
sections.transformer_visited):
calc=True
sections.transformer_visited=True
if sections.transformer.compesator_active:
va,vb,vc=sections.transformer.controler_voltage(\
sections.n2.ip[0],sections.n2.ip[1],sections.n2.ip[2],\
sections.n2.vp0[0],sections.n2.vp0[1],sections.n2.vp0[2])
else:
va=sections.n2.vp[0]
vb=sections.n2.vp[1]
vc=sections.n2.vp[2]
sections.transformer.define_parameters(va,vb,vc)
sections._set_transformer_model()
if calc:
for node in dist_grid.load_nodes.values():
node.config_voltage(voltage=node.voltage)
calc_power_flow(dist_grid)
i=10
while i >0:
DG_unconv_ = _nodes_out_limit(dist_grid)
if DG_unconv_ != []:
for sections in dist_grid.sections.values():
if isinstance(sections.transformer, Auto_TransformerModel):
sections.transformer_visited=False
_define_power_insertion(DG_unconv_, dist_grid)
for node in dist_grid.load_nodes.values():
node.config_voltage(voltage=node.voltage)
node._calc_currents()
calc_power_flow(dist_grid)
else:
for node in dist_grid.load_nodes.values():
if node.generation != None:
if type(node.generation) == type(list()):
for i in node.generation:
if i.type == "PV" and i.limit_PV:
print("{0} exceeded the limit Generation ".format(i.name))
elif node.generation.type == "PV" and node.generation.limit_PV:
print("{0} exceeded the limit Generation ".format(node.generation.name))
return
i-=1
print("Load flow did not converge")
return
def _dist_grid_sweep(dist_grid, max_depth, nodes_depth_dict):
""" Função que varre a dist_grid pelo
método varredura direta/inversa"""
Back_Sweep(max_depth, nodes_depth_dict, dist_grid)
conv=Forward_Sweep(max_depth, nodes_depth_dict, dist_grid)
return conv
# ----------------------------------------------
    # Start of the backward sweep
# ----------------------------------------------
#print('Backward Sweep phase <<<<----------')
    # section that computes the powers, starting from the
    # deepest nodes and moving up to the root node
def Back_Sweep(max_depth, nodes_depth_dict, dist_grid):
depth = max_depth
while depth >= 0:
        # store the nodes at the greatest remaining depth.
nodes = nodes_depth_dict[str(depth)]
        # decrement the depth.
depth -= 1
        # iterate over the nodes whose depth equals the value
        # stored in the variable depth
for node in nodes:
            # reset the through current to the load current so that
            # it does not accumulate over successive power flow
            # iterations.
node.ip=copy.copy(node.i)
downstream_neighbors = _get_downstream_neighbors_nodes(node, dist_grid)
            # if there are no downstream neighbors, the load
            # node under analysis is the last one of its
            # branch.
if downstream_neighbors == []:
continue
else:
                # iterate over the nodes downstream of the current node
                # to compute the through flows
for downstream_node in downstream_neighbors:
                    # call _search_section to find which section
                    # lies between the current node and the downstream node
section = _search_section(node, downstream_node, dist_grid)
# ------------------------------------
                    # Equations: current calculation
# ------------------------------------
# node.ip += np.dot(section.c, downstream_node.vp) + \
# np.dot(section.d, downstream_node.ip)
node.ip += calc_ip(section.c,section.d,downstream_node.vp, downstream_node.ip)
# -------------------------------------
#print(node.name + '<<<---' + downstream_node.name)
#print('Forward Sweep phase ---------->>>>')
def Forward_Sweep(max_depth, nodes_depth_dict, dist_grid):
depth = 1
    # section that updates the node voltages
conv=0
while depth <= max_depth:
        # get the load nodes at the current depth
nodes = nodes_depth_dict[str(depth)]
        # iterate over the nodes, updating each one from its upstream neighbor
for node in nodes:
upstream_node = _get_upstream_neighbor_node(node, dist_grid)
section = _search_section(node, upstream_node, dist_grid)
# ------------------------------------
            # Equations: voltage calculation
# ------------------------------------
# node.vp = np.dot(section.A, upstream_node.vp) - \
# np.dot(section.B, node.ip)
node.vp,vc = calc_vp(section.A, section.B, upstream_node.vp, node.ip, node.vp)
# ------------------------------------
#print(upstream_node.name + '--->>>' + node.name)
if vc>conv:
conv=vc
depth += 1
return conv
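# Note: the two helpers below evaluate what appear to be the generalized
# line-segment matrices (A, B, c, d) of backward/forward sweep power flow:
#   backward sweep:  I_upstream   = c . V_downstream + d . I_downstream
#   forward sweep:   V_downstream = A . V_upstream   - B . I_downstream
# calc_vp also returns the change in the mean per-phase voltage magnitude,
# which the sweep loop uses as its convergence measure.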
def calc_ip(c,d,vp,ip):
return c.dot(vp) + d.dot(ip)
def calc_vp(A, B, vp, ip, vi):
v= A.dot(vp) - B.dot(ip)
a=(abs(vi[0,0]) + abs(vi[1,0]) + abs(vi[2,0]))/3
b=(abs(v[0,0]) + abs(v[1,0]) + abs(v[2,0]))/3
vc=abs(b-a)
return v, vc
def _make_nodes_depth_dictionary(dist_grid):
dist_grid_rnp = dist_grid.load_nodes_tree.rnp
nodes_depth_dict = dict()
for node_depth in dist_grid_rnp.transpose():
depth = node_depth[0]
node = node_depth[1]
if depth in nodes_depth_dict.keys():
nodes_depth_dict[depth] += [dist_grid.load_nodes[node]]
else:
nodes_depth_dict[depth] = [dist_grid.load_nodes[node]]
return nodes_depth_dict
def _get_downstream_neighbors_nodes_cached(f):
cache = dict()
@wraps(f)
def inner_get_downstream_neighbors_nodes(arg1, arg2):
if arg1 not in cache:
cache[arg1] = f(arg1, arg2)
return cache[arg1]
return inner_get_downstream_neighbors_nodes
@_get_downstream_neighbors_nodes_cached
def _get_downstream_neighbors_nodes(node, dist_grid):
load_nodes_tree = dist_grid.load_nodes_tree.tree
dist_grid_rnp_dict = dist_grid.load_nodes_tree.rnp_dict()
neighbors = load_nodes_tree[node.name]
downstream_neighbors = list()
    # iterate over each neighboring load node in the tree
for neighbor in neighbors:
        # check whether the neighbor is deeper (i.e. downstream)
if int(dist_grid_rnp_dict[neighbor]) > int(dist_grid_rnp_dict[node.name]):
downstream_neighbors.append(dist_grid.load_nodes[neighbor])
return downstream_neighbors
def _get_upstream_neighbor_node_cached(f):
cache = dict()
@wraps(f)
def inner_get_upstream_neighbor_node(arg1, arg2):
if arg1 not in cache:
cache[arg1] = f(arg1, arg2)
return cache[arg1]
return inner_get_upstream_neighbor_node
@_get_upstream_neighbor_node_cached
def _get_upstream_neighbor_node(node, dist_grid):
load_nodes_tree = dist_grid.load_nodes_tree.tree
dist_grid_rnp_dict = dist_grid.load_nodes_tree.rnp_dict()
neighbors = load_nodes_tree[node.name]
upstream_neighbors = list()
    # find the neighbors of the desired node.
for neighbor in neighbors:
if int(dist_grid_rnp_dict[neighbor]) < int(dist_grid_rnp_dict[node.name]):
upstream_neighbors.append(dist_grid.load_nodes[neighbor])
    # return the first upstream neighbor
return upstream_neighbors[0]
def _search_section_cached(f):
cache = dict()
@wraps(f)
def inner_search_section(arg1, arg2, arg3):
if (arg1, arg2) not in cache or (arg2, arg1) not in cache:
cache[(arg1, arg2)] = f(arg1, arg2, arg3)
return cache[(arg1, arg2)]
return inner_search_section
@_search_section_cached
def _search_section(n1, n2, dist_grid):
"""Função que busca sections em um alimendador entre os nos
n1 e n2"""
if (n1, n2) in dist_grid.sections_by_nodes.keys():
return dist_grid.sections_by_nodes[(n1, n2)]
elif (n2, n1) in dist_grid.sections_by_nodes.keys():
return dist_grid.sections_by_nodes[(n2, n1)]
def _nodes_out_limit(dist_grid):
root_3=np.sqrt(3)
DG_unconv_ = list()
for node in dist_grid.load_nodes.values():
if (node.generation != None and node.type == 'PV'):
vaa = np.abs(node.vp[0]) / np.abs(node.voltage/root_3)
vbb = np.abs(node.vp[1]) / np.abs(node.voltage/root_3)
vcc = np.abs(node.vp[2]) / np.abs(node.voltage/root_3)
vphase_max = max([vaa,vbb,vcc])
vphase_min = min([vaa,vbb,vcc])
if node.defective_phase != None:
v_defective=np.abs(node.vp[node.defective_phase] / np.abs(node.voltage/root_3))
if np.abs(v_defective-node.Vspecified)>node.DV_presc:
DG_unconv_.append(node)
else:
if (vphase_min) < node.Vmin:
if vaa==vphase_min:
node.defective_phase=0
if vbb==vphase_min:
node.defective_phase=1
if vcc==vphase_min:
node.defective_phase=2
DG_unconv_.append(node)
elif (vphase_max) > node.Vmax:
if vaa==vphase_max:
node.defective_phase=0
if vbb==vphase_max:
node.defective_phase=1
if vcc==vphase_max:
node.defective_phase=2
DG_unconv_.append(node)
return DG_unconv_
def _define_power_insertion(DG_unconv_, dist_grid):
data2=list()
reactan_mat_ = np.ones((len(DG_unconv_), len(DG_unconv_))) * (0.0 + 1.0j)
section_list = list()
for i in range(len(DG_unconv_)):
dg_source=sections_path_to_root(dist_grid, DG_unconv_[i].name)
section_list.append(dg_source)
reactan_mat_[i, i] = sum_imped(dg_source)
for i in range(len(section_list)):
for j in range(i+1,len(section_list)):
comm_path=list()
for x in section_list[i]:
for y in section_list[j]:
if x==y:
comm_path.append(x)
reactan_mat_[i, j] =reactan_mat_[j, i]= sum_imped(comm_path)
inv_X = np.linalg.inv(reactan_mat_)
delta_I = {}
delta_V = np.ones((len(DG_unconv_),1))
for i in range(len(DG_unconv_)):
v=np.abs(DG_unconv_[i].vp0[0])
Vcurrent =np.abs(DG_unconv_[i].vp[DG_unconv_[i].defective_phase][0]/v)
ves=np.abs(DG_unconv_[i].Vspecified)
delta_V[i,0] =ves - (Vcurrent)
delta_I = inv_X.dot(delta_V)
for i in range(len(DG_unconv_)):
Q=delta_I[i][0]*(100e6)
if type(DG_unconv_[i].generation) == type(list()):
a=0
for j in DG_unconv_[i].generation:
if j.type == "PV":
a += 1
for j in DG_unconv_[i].generation:
if j.type == "PV":
j.update_Q(Q/a,Q/a,Q/a)
data2.append(a)
else:
DG_unconv_[i].generation.update_Q(Q,Q,Q)
def sections_path_to_root(dist_grid, n2):
root=list(dist_grid.sectors[dist_grid.root].load_nodes.values())[0].name
if n2 == root:
return list()
for section in dist_grid.sections.values():
name=section.name
if not(dist_grid.sections[name].switch != None and dist_grid.sections[name].switch.state==0):
if dist_grid.sections[name].n1.name == n2:
section_list=sections_path_to_root(dist_grid,dist_grid.sections[name].n2.name)
section_list.append(dist_grid.sections[name])
return section_list
elif dist_grid.sections[name].n2.name == n2:
section_list=sections_path_to_root(dist_grid,dist_grid.sections[name].n1.name)
section_list.append(dist_grid.sections[name])
return section_list
def sum_imped(sections):
z=0
for i in sections:
if i.line_model is not None:
z_base = np.abs((i.n1.vp0[0])**2 / 100e6)
z +=np.imag(i.Z012[1,1])*1j/z_base
elif isinstance(i.transformer, Auto_TransformerModel):
continue
elif isinstance(i.transformer, TransformerModel):
z_base = np.abs((i.n2.vp0[0])**2 / 100e6)
z +=np.imag(i.transformer.zt_012[1,1])*1j/z_base
return z
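# Usage sketch (hypothetical; a dist_grid object must first be assembled with
# mygrid's grid-building classes, which this module does not cover):
#
#   from mygrid.power_flow.backward_forward_sweep_3p import calc_power_flow
#   calc_power_flow(dist_grid, max_iterations=100, converg_crt=0.001)
#   for node in dist_grid.load_nodes.values():
#       print(node.name, np.abs(node.vp.flatten()))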
| 32.04499 | 101 | 0.572942 | 1,984 | 15,670 | 4.28881 | 0.155242 | 0.069573 | 0.026795 | 0.03796 | 0.428135 | 0.353861 | 0.301798 | 0.247503 | 0.231049 | 0.178869 | 0 | 0.016035 | 0.28762 | 15,670 | 488 | 102 | 32.110656 | 0.746215 | 0.200574 | 0 | 0.26616 | 0 | 0 | 0.008368 | 0 | 0 | 0 | 0 | 0.002049 | 0 | 1 | 0.076046 | false | 0 | 0.030418 | 0.003802 | 0.190114 | 0.011407 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cafc4de57835e7699aa3fc23c1d712de48724ce8 | 3,785 | py | Python | test4/test_brownian_motion.py | WINGS-ABC-programming-tutorial/mlflow-demo | 2af7ad439c5feebc5af3ba67086c0077fe142bbe | [
"MIT"
] | null | null | null | test4/test_brownian_motion.py | WINGS-ABC-programming-tutorial/mlflow-demo | 2af7ad439c5feebc5af3ba67086c0077fe142bbe | [
"MIT"
] | null | null | null | test4/test_brownian_motion.py | WINGS-ABC-programming-tutorial/mlflow-demo | 2af7ad439c5feebc5af3ba67086c0077fe142bbe | [
"MIT"
] | null | null | null | import numpy as np
import pytest
from lib4.brownian_motion import BrownianMotion, ParamBrownianMotion
CORRECT_DATASET = [
# (seed, initial_state, sigma, state_trajectory)
(123, 0.0, 10.0, np.array([
0.,
-9.891213503,
-13.56908001,
-0.68982741,
])),
(456, -2.0, 1.0, np.array([
-2.,
-1.103960047,
-3.227908586,
-1.587704777,
]))
]
class TestBrownianMotion:
def test_param_negative_sigma_fail(self):
with pytest.raises(ValueError):
ParamBrownianMotion(
seed=123,
initial_state=0.0,
sigma=-1.0
)
def test_init(self):
BrownianMotion(ParamBrownianMotion(
seed=123,
initial_state=0.0,
sigma=10.0
))
@pytest.mark.parametrize("seed, initial_state, sigma, state_trajectory", CORRECT_DATASET)
def test_step(self, seed, initial_state, sigma, state_trajectory):
bm = BrownianMotion(ParamBrownianMotion(
seed=seed,
initial_state=initial_state,
sigma=sigma
))
assert bm.state == state_trajectory[0]
for i in range(1, state_trajectory.shape[0]):
bm.step()
assert np.isclose(bm.state, state_trajectory[i])
@pytest.mark.parametrize("seed", [123, 456])
def test_linearity_of_state_wrt_initial_state(self, seed, init1=-10., init2=+10.):
"""
        If the seed is the same, two trajectories that differ only in their initial state coincide after a constant shift (translation).
"""
bm1 = BrownianMotion(ParamBrownianMotion(
seed=seed,
initial_state=init1,
sigma=10.0
))
for i in range(100):
bm1.step()
bm2 = BrownianMotion(ParamBrownianMotion(
seed=seed,
initial_state=init2,
sigma=10.0
))
for i in range(100):
bm2.step()
assert np.isclose(bm1.state - bm2.state, init1 - init2)
def test_init_save_trajectory_without_total_step_fail(self):
with pytest.raises(ValueError):
BrownianMotion(
param=ParamBrownianMotion(
seed=123,
initial_state=0.0,
sigma=10.0
),
save_full_trajectory=True,
total_step=None
)
@pytest.mark.parametrize("seed, initial_state, sigma, state_trajectory", CORRECT_DATASET)
def test_step_save_trajectory(self, seed, initial_state, sigma, state_trajectory):
bm = BrownianMotion(
param=ParamBrownianMotion(
seed=seed,
initial_state=initial_state,
sigma=sigma
),
save_full_trajectory=True,
total_step=state_trajectory.shape[0] - 1
)
for i in range(state_trajectory.shape[0] - 1):
bm.step()
assert np.allclose(bm.state_trajectory, state_trajectory)
@pytest.mark.parametrize("seed", [123, 456])
@pytest.mark.parametrize("init1", [-10., +1.])
@pytest.mark.parametrize("init2", [-10., +10.])
@pytest.mark.parametrize("sigma", [0.1, 10.])
def test_linearity_of_trajectory_wrt_initial_state(self, seed, init1, init2, sigma):
"""
        If the seed is the same, two trajectories that differ only in their initial state coincide after a constant shift (translation).
"""
bm1 = BrownianMotion(ParamBrownianMotion(
seed=seed,
initial_state=init1,
sigma=sigma
))
for i in range(100):
bm1.step()
bm2 = BrownianMotion(ParamBrownianMotion(
seed=seed,
initial_state=init2,
sigma=sigma
))
for i in range(100):
bm2.step()
assert np.allclose(bm1.state - bm2.state, init1 - init2)
| 30.039683 | 93 | 0.564333 | 390 | 3,785 | 5.307692 | 0.187179 | 0.104348 | 0.085024 | 0.098551 | 0.668116 | 0.641063 | 0.478744 | 0.473913 | 0.449275 | 0.308213 | 0 | 0.071118 | 0.331308 | 3,785 | 125 | 94 | 30.28 | 0.74674 | 0.031968 | 0 | 0.557692 | 0 | 0 | 0.030688 | 0 | 0 | 0 | 0 | 0 | 0.048077 | 1 | 0.067308 | false | 0 | 0.028846 | 0 | 0.105769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cafc7e9bfd9667324e0c0cf4454da54de8d2d177 | 4,280 | py | Python | src/exec/exec.py | rdoddanavar/hpr-sim | d2bc892865a093939f0a7e7c0713af849e87099a | [
"MIT"
] | 3 | 2021-01-30T21:22:06.000Z | 2022-01-12T06:49:01.000Z | src/exec/exec.py | rdoddanavar/hpr-sim | d2bc892865a093939f0a7e7c0713af849e87099a | [
"MIT"
] | 68 | 2019-04-22T20:06:11.000Z | 2022-03-26T19:03:17.000Z | src/exec/exec.py | rdoddanavar/hpr-sim | d2bc892865a093939f0a7e7c0713af849e87099a | [
"MIT"
] | 2 | 2021-01-30T21:22:08.000Z | 2021-04-27T06:15:33.000Z | # System modules
import sys
import os
import pdb
import pathlib
import copy
import multiprocessing as mp
import numpy as np
import yaml
# Path modifications
paths = ["../../build/src", "../preproc", "../util"]
for item in paths:
addPath = pathlib.Path(__file__).parent / item
sys.path.append(str(addPath.resolve()))
# Project modules
import exec_rand
import util_yaml
import util_unit
import preproc_input
import preproc_engine
import model
#------------------------------------------------------------------------------#
# Module variables
configPathRel = "../../config/config_input.yml"
outputPath2 = None
inputDict = None
#------------------------------------------------------------------------------#
def exec(inputPath, outputPath):
# Pre-processing
util_unit.config()
configPath = pathlib.Path(__file__).parent / configPathRel
configPath = str(configPath.resolve())
configDict = util_yaml.load(configPath)
global inputDict
inputDict = util_yaml.load(inputPath)
util_yaml.process(inputDict)
preproc_input.process(inputDict, configDict)
# Output setup
global outputPath2
inputName = pathlib.Path(inputPath).stem
outputPath2 = pathlib.Path(outputPath) / inputName
if not os.path.exists(outputPath2):
os.mkdir(outputPath2)
# Sim execution
mode = inputDict["exec"]["mode"]["value"]
numProc = inputDict["exec"]["numProc"]["value"]
numMC = inputDict["exec"]["numMC"]["value"]
if mode == "nominal":
#run_flight(np.nan)
run_flight(0)
elif mode == "montecarlo":
pool = mp.Pool(numProc)
iRuns = range(numMC)
pool.map_async(run_flight, iRuns)
pool.close()
pool.join()
# Post-processing
#------------------------------------------------------------------------------#
def run_flight(iRun):
inputDictRun = copy.deepcopy(inputDict)
if not(np.isnan(iRun)):
seedMaster = inputDict["exec"]["seed"]["value"]
seedRun = seedMaster + iRun
inputDictRun["exec"]["seed"]["value"] = seedRun
exec_rand.mc_draw(inputDictRun)
# Initialize model - engine
enginePath = inputDictRun["engine"]["inputPath"]["value"]
timeEng, thrustEng, massEng = preproc_engine.load(enginePath)
engine = model.Engine()
engine.init(timeEng, thrustEng, massEng)
# Initialize model - mass
mass = model.Mass()
massBody = inputDictRun["mass"]["massBody"]["value"]
mass.add_dep(engine)
mass.init(massBody)
# Initialize model - geodetic
geodetic = model.Geodetic()
latitude = inputDictRun["geodetic"]["latitude"]["value"]
geodetic.init(latitude)
# Initialize model - EOM
eom = model.EOM()
eom.add_dep(engine)
eom.add_dep(mass)
eom.add_dep(geodetic)
eom.init()
# Initialize model - flight
flight = model.Flight()
flight.add_dep(eom)
t0 = 0.0
dt = inputDictRun["flight"]["timeStep"]["value"]
tf = inputDictRun["flight"]["timeFlight"]["value"]
flight.init(t0, dt, tf)
flight.update() # Execute flight
write_output(iRun, inputDictRun, flight)
# write_summary
#------------------------------------------------------------------------------#
def write_output(iRun, inputDictRun, flight):
# Setup run output folder
outputPath3 = outputPath2 / f"run{iRun}"
if not os.path.exists(outputPath3):
os.mkdir(outputPath3)
# Write input *.yml
# Archives montecarlo draw for run recreation
inputDictRun["exec"]["mode"]["value"] = "nominal"
outputYml = outputPath3 / "input.yml"
with open(str(outputYml), 'w') as file:
yaml.dump(inputDictRun, file)
# Write telemetry *.csv
outputCsv = outputPath3 / "telem.csv"
flight.write_telem(str(outputCsv))
# Write telemetry *.txt
outputTxt = outputPath3 / "stats.txt"
flight.write_stats(str(outputTxt))
#------------------------------------------------------------------------------#
# def write_summary():
#------------------------------------------------------------------------------#
if __name__ == "__main__":
inputPath = sys.argv[1]
outputPath = sys.argv[2]
exec(inputPath, outputPath) | 24.457143 | 80 | 0.577336 | 426 | 4,280 | 5.692488 | 0.321596 | 0.030928 | 0.011134 | 0.01732 | 0.041237 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005539 | 0.198598 | 4,280 | 175 | 81 | 24.457143 | 0.701458 | 0.214252 | 0 | 0 | 0 | 0 | 0.093121 | 0.008711 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.150538 | 0 | 0.182796 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b02aed7e5de0af16b5239e868db60820eb7d130 | 448 | py | Python | C1-G/generator.py | Matrix53/algo | 7a176dac9ed9c6ad65d1514afb6388f7ee6b912a | [
"MIT"
] | 1 | 2021-12-14T08:54:11.000Z | 2021-12-14T08:54:11.000Z | C1-G/generator.py | Matrix53/algo | 7a176dac9ed9c6ad65d1514afb6388f7ee6b912a | [
"MIT"
] | null | null | null | C1-G/generator.py | Matrix53/algo | 7a176dac9ed9c6ad65d1514afb6388f7ee6b912a | [
"MIT"
] | 1 | 2021-12-13T09:31:40.000Z | 2021-12-13T09:31:40.000Z | from cyaron import *
max_node = 1000000
for i in range(5):
io = IO(str(i + 1) + '.in', str(i + 1) + '.out')
tree = Graph.tree(max_node)
leaf = []
for i in tree.edges:
if len(i) == 1:
leaf.append(randint(0, 10000))
io.input_writeln(max_node)
io.input_writeln(tree.to_str(output=Edge.unweighted_edge))
io.input_write(leaf)
io.output_gen('D:\\Workspace\\algo\\BUAAOJ-X\\Matrix53-1\\standard.exe')
| 28 | 76 | 0.613839 | 72 | 448 | 3.694444 | 0.569444 | 0.078947 | 0.045113 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05698 | 0.216518 | 448 | 15 | 77 | 29.866667 | 0.700855 | 0 | 0 | 0 | 0 | 0 | 0.138393 | 0.122768 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b02b03b1b37281b6a5b8bf7031ec157a8175644 | 19,091 | py | Python | src/model/soccer.py | celsomilne/soccer | b3b164b6610ebd88b13601970d2d4e65423d0a57 | [
"MIT"
] | null | null | null | src/model/soccer.py | celsomilne/soccer | b3b164b6610ebd88b13601970d2d4e65423d0a57 | [
"MIT"
] | 2 | 2021-06-08T20:34:37.000Z | 2022-02-10T01:12:59.000Z | src/model/soccer.py | celsomilne/soccer | b3b164b6610ebd88b13601970d2d4e65423d0a57 | [
"MIT"
] | null | null | null | import numpy as np
import cv2
import pandas as pd
from ._base import ModelBase
def merge_similar_lines(l, lines):
# Put unique lines in an array if there's only one, for looping
if len(lines.shape) == 1:
lines = np.array([lines])
# Translate line to align with each unique line
d = np.column_stack(
((lines[:, 0] + lines[:, 2]) / 2, (lines[:, 1] + lines[:, 3]) / 2)
)
d = np.tile(d, (1, 2))
tl = l - d
# Rotate line to align with each unique line
xd = lines[:, 2] - lines[:, 0]
yd = lines[:, 3] - lines[:, 1]
td = np.sqrt(xd * xd + yd * yd)
cos_theta = xd / td
sin_theta = yd / td
tl = np.column_stack(
(
tl[:, 0] * cos_theta + tl[:, 1] * sin_theta,
tl[:, 1] * cos_theta - tl[:, 0] * sin_theta,
tl[:, 2] * cos_theta + tl[:, 3] * sin_theta,
tl[:, 3] * cos_theta - tl[:, 2] * sin_theta,
)
)
# Bounds for the lines to be considered similar
xb = (
np.sqrt((lines[:, 0] - lines[:, 2]) ** 2 + (lines[:, 1] - lines[:, 3]) ** 2) / 2
+ 10
)
yb = 15
# Check if line is similar to any unique line
similar = np.logical_and(abs(tl[:, 1]) < yb, abs(tl[:, 3]) < yb)
if sum(similar) > 1:
# If multiple similar lines, take the most similar
diffs = np.maximum(abs(tl[:, 1]), abs(tl[:, 3]))
similar[:] = False
similar[np.argmin(diffs)] = True
if any(similar):
# If line is similar, check if it extends beyond current unique line
# Update unique line to new length if it does
xb = xb[similar]
if tl[similar, 0] < -xb or tl[similar, 0] > xb:
lines[similar, 0:2] = l[0:2]
if tl[similar, 2] < -xb or tl[similar, 2] > xb:
lines[similar, 2:4] = l[2:4]
else:
# If the line is sufficiently different from the other unique lines, add it to the unique set
lines = np.concatenate((lines, l.reshape(1, 4)), axis=0)
return lines
def is_edge_line(l, dims):
# Tolerance for line to be considered on the edge
tol = 10
# Check if line is on any 4 borders
bad = (
(l[0] < tol and l[2] < tol)
or (l[1] < tol and l[3] < tol)
or (l[0] > dims[1] - tol and l[2] > dims[1] - tol)
or (l[1] > dims[0] - tol and l[3] > dims[0] - tol)
)
return bad
def find_intersection(l1, l2):
# Calculate intercept and gradient of first line
m1 = (l1[3] - l1[1]) / (l1[2] - l1[0])
b1 = l1[1] - m1 * l1[0]
# If line is vertical, manually derive intersection
if l2[0] == l2[2]:
return np.array([l2[0], m1 * l2[0] + b1])
# Find intercept and gradient of second line
m2 = (l2[3] - l2[1]) / (l2[2] - l2[0])
b2 = l2[1] - m2 * l2[0]
# Calculate intercepts of lines
x = (b2 - b1) / (m1 - m2)
y = m1 * x + b1
return np.array([x, y])
def generate_points(line_dict, right_view, true_dict, rel_dict):
# initialise vector/matrix of coefficients
A = np.array([]).reshape(0, 8)
b = np.array([])
# Loop through each pair of potential line intercepts to check if they've been detected
for n1 in rel_dict.keys():
if n1 in line_dict:
for n2 in rel_dict[n1]:
if n2 in line_dict:
# Calculate line intercepts on field
x = np.array([0.0, 0.0])
x += true_dict[n1]
if right_view:
x += true_dict[n2]
else:
x -= true_dict[n2]
# Calculate line intercepts in frame
u = find_intersection(line_dict[n1], line_dict[n2])
# Add results to coefficient matrix/vector
A = np.vstack(
(
A,
[
[u[0], u[1], 1, 0, 0, 0, -u[0] * x[0], -u[1] * x[0]],
[0, 0, 0, u[0], u[1], 1, -u[0] * x[1], -u[1] * x[1]],
],
)
)
b = np.concatenate((b, x))
# Add additional 'half' points to straighten far side line if in centre field case
if right_view == 1:
u = find_intersection(line_dict["top"], line_dict["left"])
A = np.vstack((A, [[0, 0, 0, u[0], u[1], 1, -70 * u[0], -70 * u[1]]]))
u = find_intersection(line_dict["top"], line_dict["right"])
A = np.vstack((A, [[0, 0, 0, u[0], u[1], 1, -70 * u[0], -70 * u[1]]]))
b = np.concatenate((b, [70, 70]))
# Calculate homographic transform matrix
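# Added note: each (frame point u, field point x) pair contributes two direct
# linear transform rows above; with H = [[c0,c1,c2],[c3,c4,c5],[c6,c7,1]] the
# model is x0*(c6*u0 + c7*u1 + 1) = c0*u0 + c1*u1 + c2 (and analogously for x1),
# so the least-squares solve below recovers the eight unknown entries of H.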
c, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
c = np.concatenate((c, np.array([1.0])))
c = c.reshape(3, 3)
return c
class FieldTransform:
# Initialise the location of the lines on soccer field
true_dict = {
"top": np.array([0.0, 70.0]).astype(np.float32),
"mid": np.array([0.0, 0.0]).astype(np.float32),
"box": np.array([33.5, 0.0]).astype(np.float32),
"tbox": np.array([0.0, 55.15]).astype(np.float32),
"bbox": np.array([0.0, 14.85]).astype(np.float32),
"goal": np.array([50.0, 0.0]).astype(np.float32),
"tcirc": np.array([0.0, 44.15]).astype(np.float32),
"bcirc": np.array([0.0, 25.85]).astype(np.float32),
"rcirc": np.array([9.15, 0.0]).astype(np.float32),
"lcirc": np.array([-9.15, 0.0]).astype(np.float32),
"ccirc": np.array([0.0, 35.0]).astype(np.float32),
}
# Initialise dictionary of line intercepts
rel_dict = {
"top": ["mid", "box", "goal"],
"tbox": ["box", "goal"],
"bbox": ["box", "goal"],
"tcirc": ["mid"],
"bcirc": ["mid"],
"ccirc": ["lcirc", "rcirc"],
}
def __call__(self, im):
# Create a mask to extract the field
mask_field = np.logical_and(
im[:, :, 1] > im[:, :, 0], im[:, :, 1] > im[:, :, 2]
).astype(np.uint8)
# Use morphological operations to clear noise from the mask
k = 40
kernel = np.ones((k, k), np.uint8)
mask = cv2.morphologyEx(mask_field, cv2.MORPH_OPEN, kernel)
kernel = np.ones((2 * k, 2 * k), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
# Apply mask back on to image to get field
field = cv2.bitwise_and(im, im, mask=mask)
# Slightly erode the mask to remove edges caused by the mask boundary
kernel = np.ones((1, 1), np.uint8)
mask = cv2.erode(mask, kernel, iterations=1)
# Use the blue channel for edge extraction, as it maximally exposes the soccer lines
gray = im[:, :, 2]
# Apply Canny edge detection technique
blur_gray = cv2.GaussianBlur(gray, (5, 5), 0)
low_threshold = 30
high_threshold = 200
edges = cv2.Canny(gray, low_threshold, high_threshold)
edges = cv2.bitwise_and(edges, edges, mask=mask)
edges = cv2.dilate(edges, np.ones((3, 3)))
# Find lines in image using probabilistic Hough transform
rho_tol = 1
theta_tol = np.pi / 180
lines = np.array(
cv2.HoughLinesP(
edges, rho_tol, theta_tol, 50, minLineLength=100, maxLineGap=5
)
)
lines.shape = (lines.shape[0], lines.shape[2])
# Remove lines on border and merge remaining multiple detections
good_lines = np.array([l for l in lines if not is_edge_line(l, edges.shape)])
lines = good_lines[0]
for i in range(1, len(good_lines)):
lines = merge_similar_lines(good_lines[i], lines)
# Direction of each line
theta = np.arctan2(lines[:, 3] - lines[:, 1], lines[:, 2] - lines[:, 0]) % np.pi
# Initialise dictionary to store classified lines
line_dict = {
"left": np.array([0, 0, 0, edges.shape[0]]),
"right": np.array([edges.shape[1], 0, edges.shape[1], edges.shape[0]]),
}
right_view = 2
# Check if near vertical line in image (centre field indicator)
if np.any(abs(theta - np.pi / 2) < np.pi / 60) or len(theta) < 2:
# Update view flag for centre field
right_view = 1
# Classify halfway line and far sideline as most and horizontal lines respectively
line_dict["mid"] = lines[np.argmin(abs(theta - np.pi / 2))]
line_dict["top"] = lines[np.pi / 2 - abs(theta - np.pi / 2) < np.pi / 30][0]
midpoint = (line_dict["mid"][0] + line_dict["mid"][2]) // 2
# Remove lines and border from the edge output
line_mask = np.ones(gray.shape, np.uint8)
for x1, y1, x2, y2 in line_dict.values():
cv2.line(line_mask, (x1, y1), (x2, y2), (0, 0, 0), 10)
line_mask[: line_mask.shape[0] // 8, :] = 0
line_mask[
(line_mask.shape[0] - line_mask.shape[0] // 8) : line_mask.shape[0], :
] = 0
line_mask[:, :20] = 0
line_mask[:, line_mask.shape[1] - 20 :] = 0
edges = cv2.bitwise_and(edges, edges, mask=line_mask)
# Find the largest remaining contour in image (centre circle)
conts, _ = cv2.findContours(
edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
)
c = max(conts, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(c)
# Find closest remaining lines to the halfway line
# (these lines form the edges of the ellipse)
line_disp = lines[:, [0, 3]] - midpoint
idx = np.argpartition(
np.mean(line_disp, axis=1) + np.min(line_disp, axis=1), 2
)
lines = lines[np.any(lines != line_dict["top"], axis=1)]
lines = lines[np.any(lines != line_dict["mid"], axis=1)]
lines = lines[idx[0:2]]
# Classify these lines as centre circle intercept with the halfway line
line_dict["tcirc"] = lines[np.argmin(np.mean(lines[:, [1, 3]], axis=1))]
line_dict["bcirc"] = lines[np.argmax(np.mean(lines[:, [1, 3]], axis=1))]
# Find halfway point between this intersection
c_top = find_intersection(line_dict["tcirc"], line_dict["mid"])
c_bot = find_intersection(line_dict["bcirc"], line_dict["mid"])
c_cen = (c_top[1] + c_bot[1]) / 2
line_dict["ccirc"] = np.array([x, c_cen, x + w, c_cen])
# From largest contour, determine whether it lies to left or right of halfway line
# Classify an extreme point of centre circle accordingly
if x < midpoint:
line_dict["lcirc"] = np.array([x, y, x, y + h])
else:
line_dict["rcirc"] = np.array([x + w, y, x + w, y + h])
# No halfway line (end field indicator)
else:
# Classify horizontal lines on field as minimum line orientation
horz_theta = theta[np.argmin(np.pi / 2 - abs(theta - np.pi / 2))]
# Determine if at left or right end of field based on horizontal orientation
if horz_theta < np.pi / 2:
right_view = 0
# Extract all horizontal lines in image
horz_lines = lines[abs(theta - horz_theta) < np.pi / 30]
# Classify the far sideline as the highest horizontal line and remove it from the sets
line_dict["top"] = horz_lines[
np.argmin(np.minimum(horz_lines[:, 1], horz_lines[:, 3]))
]
lines = lines[np.any(lines != line_dict["top"], axis=1)]
horz_lines = horz_lines[np.any(horz_lines != line_dict["top"], axis=1)]
# Check if goal line in image by comparing endpoints with far sideline
if right_view:
goal_line_check = np.logical_and(
lines[:, 0] - line_dict["top"][2] < 100,
lines[:, 1] - line_dict["top"][3] < 20,
)
else:
goal_line_check = np.logical_and(
lines[:, 2] - line_dict["top"][0] < 100,
lines[:, 3] - line_dict["top"][1] < 20,
)
# Classify any positive hits as the goal line
if any(goal_line_check):
line_dict["goal"] = lines[goal_line_check][0]
lines = lines[np.logical_not(goal_line_check)]
# Classify the far penalty box line as the next highest horizontal line and remove it
line_dict["tbox"] = horz_lines[
np.argmin(horz_lines[:, 1] + horz_lines[:, 3])
]
lines = lines[np.any(lines != line_dict["tbox"], axis=1)]
horz_lines = horz_lines[np.any(horz_lines != line_dict["tbox"], axis=1)]
# Check if any horizontal lines remain and classify the next highest as the close penalty box line
if len(horz_lines) != 0 and "tbox" in line_dict:
bbox_temp = horz_lines[np.argmin(horz_lines[:, 1] + horz_lines[:, 3])]
if np.max(bbox_temp[[1, 3]]) - np.max(line_dict["tbox"][[1, 3]]) > 2 * (
np.max(line_dict["tbox"][[1, 3]]) - np.max(line_dict["top"][[1, 3]])
):
line_dict["bbox"] = bbox_temp
lines = lines[np.any(lines != line_dict["bbox"], axis=1)]
horz_lines = horz_lines[
np.any(horz_lines != line_dict["bbox"], axis=1)
]
# Check if the vertical penalty box line is in the image by comparing endpoints
if right_view:
box_line_check = (
np.linalg.norm(lines[:, 0:2] - line_dict["tbox"][0:2], axis=1) < 50
)
else:
box_line_check = (
np.linalg.norm(lines[:, 2:4] - line_dict["tbox"][2:4], axis=1) < 50
)
# Classify any positive hits as the vertical penalty box line
if any(box_line_check):
line_dict["box"] = lines[box_line_check][0]
lines = lines[np.logical_not(box_line_check)]
# Generate transform using the classified lines from the image
c = generate_points(line_dict, right_view, self.true_dict, self.rel_dict)
return c
class SoccerModel:
ft = FieldTransform()
obj_old = pd.DataFrame()
c = np.zeros((3, 3))
count = 6
def __init__(self, detector):
self.detector = detector
self.objects = detector.bb_df
def predict(self, frameIdx, fps=25):
# Pull objects in current dataframe
df = self.detector.bb_df
obj = self.detector.get_df(batchNum=frameIdx).reset_index()
frame = self.get_frame(obj)
if frame is None:
return None
width = obj["width"]
height = obj["height"]
# Get only rows for which the width and height are within 2 standard deviations
# (Extracts players)
obj = obj[np.abs(width - np.mean(width)) / np.std(width) < 2]
obj = obj[np.abs(height - np.mean(height)) / np.std(height) < 2]
# Get the locations of the feet
obj["ufeet"] = obj["left"] + obj["width"] / 2
obj["vfeet"] = obj["top"] + obj["height"]
# Get the right and bottom bounding boxes
obj["right"] = obj["left"] + obj["width"]
obj["bottom"] = obj["top"] + obj["height"]
# Get the feature transform if available and not corrupt
try:
c = self.ft(frame)
if (
np.abs(c[0, 2]) > 0.01 and np.abs(c[1, 2]) > 0.01 and self.count > 5
) or frameIdx == 1:
self.c = c
self.count = 0
except Exception as e:
pass
finally:
self.count += 1
# Stack feet coordinates to be transformed
u = np.vstack(
(obj["ufeet"].values, obj["vfeet"].values, np.ones((1, len(obj))))
)
# Homographic transform
# u - frame coordinates
# xy - field coordinates
xy = self.c.dot(u).T
xy = np.divide(xy.T, xy[:, 2]).T
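# Dividing each row by its third (homogeneous) coordinate converts the
# projective result into Euclidean field coordinates.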
# Initialise the new field positions as 0
obj["xnew"] = 0
obj["ynew"] = 0
# Set the field positions equal to the transformed coordinates
obj[["xnew", "ynew"]] = xy[:, 0:2]
if frameIdx == 1:
# Store data and abort
obj["vel"] = 0.0
self.obj_old = obj
else:
# Extract the old x and y coordinates
xy_old = self.obj_old.loc[:, ["xnew", "ynew"]].values.copy()
# Take the euclidean difference of the old and new positions
diffs = np.stack(
(
xy[:, 0]
- xy_old[:, 0].reshape((-1, 1)), # difference in x position
xy[:, 1]
- xy_old[:, 1].reshape((-1, 1)), # difference in y position
),
axis=0,
)
diffs = np.linalg.norm(diffs, axis=0)
# Initialise 'old' positions
obj["xold"] = 0
obj["yold"] = 0
# Maximum euclidean distance between matching players
tol = 50
# Iterate until tolerance met or no objects in frame
while np.any(diffs < tol) and len(obj) > 0 and len(self.obj_old) > 0:
ind = np.unravel_index(np.argmin(diffs, axis=None), diffs.shape)
obj.loc[ind[1], ["xold", "yold"]] = xy_old[ind[0], 0:2]
diffs[:, ind[1]] = np.inf
diffs[ind[0], :] = np.inf
# Filter the x and y positions
valid_pos = ~np.isnan(obj["xold"])
obj.loc[valid_pos, ["xnew", "ynew"]] = (
0.05 * obj.loc[valid_pos, ["xnew", "ynew"]].values
+ 0.95 * obj.loc[valid_pos, ["xold", "yold"]].values
)
# Find the speed of the players
norm = np.linalg.norm(
obj.loc[:, ["xnew", "ynew"]].values
- obj.loc[:, ["xold", "yold"]].values,
axis=1,
)
obj["vel"] = norm * fps
# Cap speed at 10
obj.loc[obj["vel"] > 10, "vel"] = 10
# Store new objects for next frame
self.obj_old = obj
# Format data for display functions
vals = [
(
i["label"],
(
(i["left"], i["top"]),
(i["left"] + i["width"], i["top"] + i["height"]),
),
i["vel"],
(i["ynew"] / 70),
(i["xnew"] + 50) / 100,
)
for idx, i in obj.iterrows()
if (i["left"] == i["left"])
]
return vals
def get_frame(self, obj):
try:
fName = obj.loc[0, "fileName"][0]
frame = cv2.imread(fName)
return frame
except Exception as e:
return None
| 36.926499 | 104 | 0.510031 | 2,574 | 19,091 | 3.695027 | 0.169775 | 0.040374 | 0.003785 | 0.008516 | 0.180528 | 0.163285 | 0.117338 | 0.069709 | 0.058353 | 0.045001 | 0 | 0.04352 | 0.351265 | 19,091 | 516 | 105 | 36.998062 | 0.724425 | 0.206537 | 0 | 0.087464 | 0 | 0 | 0.033256 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023324 | false | 0.002915 | 0.011662 | 0 | 0.087464 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b03eaa75d1aee551f363fcba0197ce33eaa2183 | 766 | py | Python | osprey/cli/parser_worker.py | rmcgibbo/osprey | 0e0e11cafa3976854c0009be3bc562b8344218c6 | [
"Apache-2.0"
] | 5 | 2015-01-13T20:17:45.000Z | 2016-07-12T15:14:07.000Z | osprey/cli/parser_worker.py | rmcgibbo/osprey | 0e0e11cafa3976854c0009be3bc562b8344218c6 | [
"Apache-2.0"
] | null | null | null | osprey/cli/parser_worker.py | rmcgibbo/osprey | 0e0e11cafa3976854c0009be3bc562b8344218c6 | [
"Apache-2.0"
] | null | null | null | from __future__ import print_function, absolute_import, division
from argparse import ArgumentDefaultsHelpFormatter
def func(args, parser):
# delay import of the rest of the module to improve `osprey -h` performance
from ..execute_worker import execute
execute(args, parser)
def configure_parser(sub_parsers):
help = 'Run a worker process (hyperparameter optimization)'
p = sub_parsers.add_parser('worker', description=help, help=help,
formatter_class=ArgumentDefaultsHelpFormatter)
p.add_argument('config', help='Path to worker config file (yaml)')
p.add_argument('-n', '--n-iters', default=1, type=int, help='Number of '
'trials to run sequentially.')
p.set_defaults(func=func)
| 40.315789 | 79 | 0.703655 | 95 | 766 | 5.515789 | 0.578947 | 0.038168 | 0.045802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001634 | 0.201044 | 766 | 18 | 80 | 42.555556 | 0.854575 | 0.0953 | 0 | 0 | 0 | 0 | 0.206946 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.230769 | 0 | 0.384615 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b08b94f653176a7bb3c4eeecbcb0204092d0a1b | 2,711 | py | Python | configboy/build_config.py | aaronhua123/configboy | ac0a0d65cd6b8a9b7b0cbb9679ed7b5e551d547c | [
"MIT"
] | 1 | 2019-12-06T06:44:39.000Z | 2019-12-06T06:44:39.000Z | configboy/build_config.py | aaronhua123/configboy | ac0a0d65cd6b8a9b7b0cbb9679ed7b5e551d547c | [
"MIT"
] | null | null | null | configboy/build_config.py | aaronhua123/configboy | ac0a0d65cd6b8a9b7b0cbb9679ed7b5e551d547c | [
"MIT"
] | null | null | null | import configparser
from jinja2 import Template
conf = configparser.ConfigParser()
# Read the configuration file with the config object
conf.read("base.config.ini")
sections = conf.sections()
dit = {}
def analyze_ini():
'''
Analyze the data read from the ini config file and store it in dit
:return:
'''
for classname in sections:
print(classname, conf.items(classname))
classnamelist = classname.split('.')
while classnamelist:
subcl = '_'.join(classnamelist)
subcl_pop = classnamelist.pop()
baseclassname = '_'.join(classnamelist)
if baseclassname == '':
baseclassname = 'conf'
key = dit.get(baseclassname)
if key:
if key.get('subclass'):
key['subclass'].update({subcl_pop: subcl})
else:
key['subclass'] = {subcl_pop: subcl}
else:
dit[baseclassname] = {'subclass': {subcl_pop: subcl}}
subkey = dit.get(subcl)
if subkey is None:  # does not exist yet
dit[subcl] = {'subclass': {}, 'args': []}
classnamelast = classname.replace('.', '_')
# print(classnamelast)
dit[classnamelast]['args'] = conf.items(classname)
def analyze_subclass():
'''
Analyze the subclass inheritance relations and update the contents of dit
:return:
'''
for k in dit.keys():
for name, value in dit.items():
args = value.get('args')
if args:
for argtuple in value['args']:
nickname = f'{name}_{argtuple[0]}'
if nickname == k:
dit[k]['string'] = argtuple[1]
value['args'].remove(argtuple)
else:
value['args'] = []
t_demo = '''
class _{{class}}:{% for k,v in subclass.items() %}
{{k}} = _{{v}}(){% endfor %}{% for arg,value in args %}
{{arg}} = "{{value}}"{% endfor %}
{% if string %}
def __str__(self):
return "{{ string }}"{% endif %}
'''
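# Illustrative example (hypothetical ini content): a section "[db.mysql]" holding
# "host = localhost" would be rendered roughly as
#   class _db_mysql:
#       host = "localhost"
#   class _db:
#       mysql = _db_mysql()
#   class _conf:
#       db = _db()
# so the value becomes reachable as _conf.db.mysql.host in the generated config.py.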
html = '# -*- coding:utf-8 -*-'
t = Template(t_demo)
def get_subclass(classname, body):
'''
Fetch the subclass data and append the rendered result to html
:param classname:
:param body:
:return:
'''
global html
subclass = body.get('subclass')
if subclass:
for subname in subclass.values():
get_subclass(subname, dit[subname])
result = {'class': classname, **body}
html = html + t.render(result)
def build(a):
'''
Take the analyzed class relations, then generate the html output
:param a:
:return:
'''
body = a.get('conf')
get_subclass('conf', body)
print(html)
with open('./config.py', 'w', encoding='utf-8') as f:
f.write(html)
if __name__ == '__main__':
analyze_ini()
analyze_subclass()
build(dit)
| 25.575472 | 69 | 0.520472 | 270 | 2,711 | 5.111111 | 0.333333 | 0.039855 | 0.028261 | 0.024638 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002735 | 0.32571 | 2,711 | 105 | 70 | 25.819048 | 0.752188 | 0.073036 | 0 | 0.043478 | 0 | 0 | 0.172172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057971 | false | 0 | 0.028986 | 0 | 0.101449 | 0.028986 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b0944c79f8a702fe2867a9d3d9ea2355da44f81 | 5,633 | py | Python | utils/codes_distance.py | wemozj/Image-Compression-based-GMM-and-Attention-Module | 93f804dbcea8ffc1621456f3d104d0342c75373b | [
"Apache-2.0"
] | null | null | null | utils/codes_distance.py | wemozj/Image-Compression-based-GMM-and-Attention-Module | 93f804dbcea8ffc1621456f3d104d0342c75373b | [
"Apache-2.0"
] | null | null | null | utils/codes_distance.py | wemozj/Image-Compression-based-GMM-and-Attention-Module | 93f804dbcea8ffc1621456f3d104d0342c75373b | [
"Apache-2.0"
] | null | null | null | """
Calculates distance of some (bpp, metric) point (for some metric) to some codec on some dataset.
"""
import os
import numpy as np
import scipy.interpolate
from utils import other_codecs
import constants
from utils import logdir_helpers
from collections import defaultdict
from fjcommon import functools_ext as ft
from utils import val_files
# how much of a bin must be filled
_REQUIRED_BINS = 0.99
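# i.e. a bpp grid point is kept only if more than 99% of the images contribute an
# interpolated value at that point (see interpolator() below).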
DEFAULT_BPP_GRID = np.linspace(0.1, 1.4, 50)
# All these will be rendered into the plot, labelled according to their path using get_label
# Expected to be in constants.OTHER_CODECS_ROOT
# Create with other_codecs.py $IMG_DIR $OTHER_CODECS_ROOT/out_dir $MODE
CODECS = {
'u100': {'jp2k': 'out_jp2k_Urban100_HR_crop',
'bpg': 'out_bpg_Urban100_HR_crop',
'jp': 'out_jp_Urban100_HR_crop'},
'b100': {'jp2k': 'out_jp2k_B100_cropped',
'bpg': 'out_bpg_B100_cropped',
'jp': 'out_jp_B100_cropped'},
'rf100': {'jp2k': 'out_jp2k_rf100',
'bpg': 'out_bpg_rf100',
'jp': 'out_jp_rf100_v3'},
'testset': {'bpg': 'out_bpg_imagenet_256_train_val_128x128__100',
'jp': 'out_jp_imagenet_256_train_val_128x128__100'},
'kodak': {
'bpg': 'out_bpg_kodak',
'jp2k': 'out_jp2k_kodak',
'jp': 'out_jp_kodak',
'webp': 'out_webp_kodak'
},
'cityscapes': {'bpg': 'out_bpg_cityscapes'}
}
class CodecDistanceReadException(Exception):
pass
class CodecDistance(object):
def __init__(self, dataset, codec, metric):
assert metric in other_codecs.SUPPORTED_METRICS, '{} not in {}'.format(metric, other_codecs.SUPPORTED_METRICS)
if dataset not in CODECS.keys():
raise CodecDistanceReadException('Dataset {} not in {}'.format(dataset, CODECS.keys()))
if codec not in CODECS[dataset].keys():
raise CodecDistanceReadException('Codec {} not in {}'.format(codec, CODECS[dataset].keys()))
codec_dir = os.path.join(constants.OTHER_CODECS_ROOT, CODECS[dataset][codec])
try:
bpps, values = get_interpolated_values_bpg_jp2k(codec_dir, DEFAULT_BPP_GRID, metric)
except ValueError as e:
raise CodecDistanceReadException('Failed: {}'.format(e))
self.f_bpp_meta = scipy.interpolate.interp1d(bpps, values, 'linear')
def distance(self, bpp, value):
codec_value = self.f_bpp_meta(bpp) # may raise ValueError
d = value - codec_value # > 0 if we are better
return d
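# Illustrative usage (hypothetical metric name and values; requires the measure
# files of the chosen codec on disk):
#   cd = CodecDistance('kodak', 'bpg', metric)  # metric must be in other_codecs.SUPPORTED_METRICS
#   cd.distance(0.5, value)  # positive result means `value` beats BPG at 0.5 bpp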
def interpolator(measures_per_image_iter, grid, interp_mode='linear'):
accumulated_values = np.zeros_like(grid, np.float64)
# Count values per bin
N = np.zeros_like(grid, np.int64)
num_imgs = 0
num_errors = 0
for img_description, (bpps, values) in measures_per_image_iter:
assert len(bpps) >= 2, 'Missing values for {}'.format(img_description)
assert bpps[0] >= bpps[-1]
num_imgs += 1
# interpolation function
try:
fq = scipy.interpolate.interp1d(bpps, values, interp_mode) # key code
except ValueError as e:
print(bpps, values)
print(e)
exit(1)
for i, bpp in enumerate(grid):
try:
accumulated_values[i] += fq(bpp)
N[i] += 1
except ValueError as e:
num_errors += 1
continue
try:
grid, values = ft.unzip((bpp, m/n) for bpp, m, n in zip(grid, accumulated_values, N)
if n > _REQUIRED_BINS*num_imgs)
except ValueError as e:
raise e
return grid, values
def get_interpolated_values_bpg_jp2k(bpg_or_jp2k_dir, grid, metric):
""" :returns grid, values"""
ps = other_codecs.all_measures_file_ps(bpg_or_jp2k_dir)
if len(ps) == 0:
raise CodecDistanceReadException('No matches in {}'.format(bpg_or_jp2k_dir))
measures_per_image_iter = ((p, ft.unzip(sorted(other_codecs.read_measures(p, metric), reverse=True))) for p in ps)
return interpolator(measures_per_image_iter, grid, interp_mode='linear')
def get_measures_readers(log_dir_root, job_ids, dataset):
if job_ids == 'NA': # TODO
return []
missing = []
measures_readers = []
for job_id, ckpt_dir in zip(job_ids.split(','), logdir_helpers.iter_ckpt_dirs(log_dir_root, job_ids)):
val_dirs = val_files.ValidationDirs(ckpt_dir, log_dir_root, dataset)
try:
measures_reader = val_files.MeasuresReader(val_dirs.out_dir)
measures_readers.append(measures_reader)
except FileNotFoundError:
missing.append(job_id)
if missing:
print('Missing measures files for:\n{}'.format(','.join(missing)))
# uniquify
m = [val_files.MeasuresReader(o) for o in {m.out_dir for m in measures_readers}]
return m
def interpolate_ours(measures_readers, grid, interp_mode, metric):
measures_per_image = defaultdict(list)
for measures_reader in measures_readers:
for img_name, bpp, value in measures_reader.iter_metric(metric):
measures_per_image[img_name].append((bpp, value))
# Make sure every job has a value for every image
for img_name, values in measures_per_image.items():
assert len(values) == len(measures_readers), '{}: {}'.format(img_name, len(values))
return interpolator(
((img_name, ft.unzip(sorted(bpps_values, reverse=True)))
for img_name, bpps_values in measures_per_image.items()),
grid, interp_mode)
if __name__ == '__main__':
pass | 37.304636 | 118 | 0.650985 | 745 | 5,633 | 4.645638 | 0.271141 | 0.028604 | 0.036984 | 0.021959 | 0.141578 | 0.072811 | 0.030049 | 0.030049 | 0.030049 | 0 | 0 | 0.023447 | 0.242855 | 5,633 | 151 | 119 | 37.304636 | 0.788042 | 0.091603 | 0 | 0.097345 | 0 | 0 | 0.113293 | 0.03495 | 0 | 0 | 0 | 0.006623 | 0.035398 | 1 | 0.053097 | false | 0.017699 | 0.079646 | 0 | 0.20354 | 0.026549 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b0a363108dd0c38113cdb177788d3d39fa06984 | 854 | py | Python | statistics/phoneNumber.py | SLIPO-EU/poi-data-exploration | 4d11a14423a5b68c56da4131e67ff36e11dabe16 | [
"Apache-2.0"
] | 1 | 2019-02-25T09:36:36.000Z | 2019-02-25T09:36:36.000Z | statistics/phoneNumber.py | SLIPO-EU/poi-data-exploration | 4d11a14423a5b68c56da4131e67ff36e11dabe16 | [
"Apache-2.0"
] | null | null | null | statistics/phoneNumber.py | SLIPO-EU/poi-data-exploration | 4d11a14423a5b68c56da4131e67ff36e11dabe16 | [
"Apache-2.0"
] | null | null | null | import pandas as pd
import numpy as np
from collections import OrderedDict
from .statistics import Statistics
class PhoneNumber(Statistics):
categories = {
('Only numbers', '^[0-9]+$'),
('Numbers and parentheses', '^(?:[0-9]*[\(][0-9]+[\)][0-9]*)+$'),
('Numbers and + symbol', '^(?:[0-9]*[\+][0-9]+)+$'),
('Numbers and - symbol', '^(?:[0-9]*[\-][0-9]+)+$'),
('Numbers, + and - symbol', '^(?:[0-9]*[+][0-9]+[-][0-9]*)+$'),
('Numbers, parentheses and + symbol', '^[0-9\(\)\+]+$'),
('Numbers, parentheses and - symbol', '^[0-9\(\)\-]+$'),
('Numbers, parentheses, + and - symbol', '^[0-9\(\)\+\-]+$'),
('Only non numerical characters', '^[^0-9]+$')
}
def __init__(self, series, chart_type='pie'):
super().__init__(series, chart_type)
self.categories = OrderedDict(self.categories)
super().checkPatterns(series, self.categories)
| 35.583333 | 67 | 0.569087 | 102 | 854 | 4.666667 | 0.313725 | 0.063025 | 0.132353 | 0.05042 | 0.32563 | 0.32563 | 0.32563 | 0.321429 | 0.321429 | 0.321429 | 0 | 0.040323 | 0.128806 | 854 | 23 | 68 | 37.130435 | 0.599462 | 0 | 0 | 0 | 0 | 0 | 0.471897 | 0.128806 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.2 | 0 | 0.35 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b0a78407a818985d4fb61e0d744f1663cbde1b1 | 2,256 | py | Python | tests/e2e/test_service.py | bleckfisk/audio-transcoder | e66e8c0038c594c12b84dd12fdcbc30d28b2263b | [
"MIT"
] | null | null | null | tests/e2e/test_service.py | bleckfisk/audio-transcoder | e66e8c0038c594c12b84dd12fdcbc30d28b2263b | [
"MIT"
] | 1 | 2022-01-12T21:06:30.000Z | 2022-01-12T21:06:30.000Z | tests/e2e/test_service.py | bleckfisk/audio-transcoder | e66e8c0038c594c12b84dd12fdcbc30d28b2263b | [
"MIT"
] | 1 | 2020-02-20T07:40:57.000Z | 2020-02-20T07:40:57.000Z | from pydub import AudioSegment
import os
from pydub.utils import mediainfo
from unittest import mock
from service.core import process_message
from service.aws_boto3 import (
listen_sqs_queue,
create_s3_resource,
create_sqs_resource,
)
from service.settings import (
REQUEST_QUEUE_NAME
)
"""
These two tests run the process with data created in conftest.py and
assert that the publish_sns function has been called.
That both tests pass is proof that, even if exceptions are raised in
the process, we still call the publish_sns function and therefore
let the initiator know what went wrong.
"""
@mock.patch("service.core.publish_sns")
def test_service(mock_publish_sns, setup_no_exceptions):
listen_sqs_queue(
create_sqs_resource(),
REQUEST_QUEUE_NAME,
process_message,
True
)
wav_directory = setup_no_exceptions[0]
data = setup_no_exceptions[1]
sound = AudioSegment.from_file(wav_directory)
for target in data:
create_s3_resource().meta.client.download_file(
target["bucket"],
target["key"],
target["key"]
)
sound_2 = AudioSegment.from_file(target["key"])
info = mediainfo(target["key"])
wav_info = mediainfo(wav_directory)
assert info["format_name"].upper() == target["format"].upper()
assert info["sample_rate"] == wav_info["sample_rate"]
assert info["channels"] == wav_info["channels"]
assert len(sound) == len(sound_2)
os.remove(target["key"])
assert mock_publish_sns.call_count == 1
@mock.patch("service.core.publish_sns")
def test_service_fails_callback_still_runs(mock_publish_sns, setup_error):
"""
The setup_error fixture sets up a context of trying to transcode a PDF file.
This will not succeed and the code will handle the exception thrown
by pydub accordingly, resulting in calling SNS with an error.
This test proves that the SNS call is executed even if
exceptions are thrown in the middle of the program.
"""
listen_sqs_queue(
create_sqs_resource(),
REQUEST_QUEUE_NAME,
process_message,
True
)
assert mock_publish_sns.call_count == 1
| 27.512195 | 78 | 0.698582 | 306 | 2,256 | 4.928105 | 0.405229 | 0.05305 | 0.037135 | 0.039788 | 0.18435 | 0.18435 | 0.18435 | 0.144562 | 0.144562 | 0.086207 | 0 | 0.005143 | 0.224291 | 2,256 | 81 | 79 | 27.851852 | 0.856571 | 0.138298 | 0 | 0.291667 | 0 | 0 | 0.078086 | 0.030227 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.041667 | false | 0 | 0.145833 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b0ed233a468637761bc287409daba800a6b9ca3 | 17,877 | py | Python | topnum/utils.py | machine-intelligence-laboratory/OptimalNumberOfTopics | 87267223987a4cb54b3f0ec431e87ee684044c7b | [
"MIT"
] | 5 | 2020-05-06T14:13:54.000Z | 2020-09-06T15:54:01.000Z | topnum/utils.py | machine-intelligence-laboratory/OptimalNumberOfTopics | 87267223987a4cb54b3f0ec431e87ee684044c7b | [
"MIT"
] | 54 | 2020-02-10T07:08:31.000Z | 2020-09-08T21:45:39.000Z | topnum/utils.py | machine-intelligence-laboratory/OptimalNumberOfTopics | 87267223987a4cb54b3f0ec431e87ee684044c7b | [
"MIT"
] | 2 | 2021-01-16T08:40:25.000Z | 2021-06-04T05:35:36.000Z | import glob
import numpy as np
import os
import pandas as pd
import strictyaml
import tempfile
import shutil
import warnings
from collections import defaultdict
from inspect import signature
from strictyaml import Map, Str, Optional, Int, CommaSeparated
import topicnet
from topicnet.cooking_machine.dataset import Dataset
from topnum.search_methods.optimize_scores_method import load_models_from_disk
from topnum.scores import (
SpectralDivergenceScore,
CalinskiHarabaszScore,
DiversityScore,
EntropyScore,
HoldoutPerplexityScore,
IntratextCoherenceScore,
LikelihoodBasedScore,
PerplexityScore,
SilhouetteScore,
SparsityPhiScore,
SparsityThetaScore,
MeanLiftScore,
UniformThetaDivergenceScore,
# Unused:
# SimpleTopTokensCoherenceScore,
SophisticatedTopTokensCoherenceScore
)
from enum import (
auto,
IntEnum
)
WC3_COLORS = {
"Red": "ff0303",
"Blue": "0042ff",
"Teal": "1be7ba",
"Purple": "550081",
"Yellow": "fefc00",
"Orange": "fe890d",
"Green": "21bf00",
"Pink": "e45caf",
"Gray": "939596",
"Light Blue": "7ebff1",
"Dark Green": "106247",
"Brown": "4f2b05",
"Maroon": "9c0000",
"Navy": "0000c3",
"Turquoise": "00ebff",
"Violet": "bd00ff",
"Wheat": "ecce87",
"Peach": "f7a58b",
"Mint": "bfff81",
"Lavender": "dbb8eb",
"Coal": "4f5055",
"Snow": "ecf0ff",
"Emerald": "00781e",
"Peanut": "a56f34",
"Black": "2e2d2e",
}
def split_into_train_test(dataset: Dataset, config: dict, save_folder: str = None):
# TODO: no need for `config` here, just `batches_prefix`
documents = list(dataset._data.index)
dn = config['batches_prefix']
random = np.random.RandomState(seed=123)
random.shuffle(documents)
test_size = 0.2
train_documents = documents[:int(1.0 - test_size * len(documents))]
test_documents = documents[len(train_documents):]
assert len(train_documents) + len(test_documents) == len(documents)
# TODO: test with keep_in_memory = False just in case
train_data = dataset._data.loc[train_documents]
test_data = dataset._data.loc[test_documents]
train_data['id'] = train_data.index
test_data['id'] = test_data.index
to_csv_kwargs = dict()
if not dataset._small_data:
to_csv_kwargs['single_file'] = True
if save_folder is None:
save_folder = '.'
elif not os.path.isdir(save_folder):
os.mkdir(save_folder)
train_dataset_path = os.path.join(save_folder, f'{dn}_train.csv')
test_dataset_path = os.path.join(save_folder, f'{dn}_test.csv')
train_data.to_csv(train_dataset_path, index=False, **to_csv_kwargs)
test_data.to_csv(test_dataset_path, index=False, **to_csv_kwargs)
train_dataset = Dataset(
train_dataset_path,
batch_vectorizer_path=f'{dn}_train_internals',
keep_in_memory=dataset._small_data,
)
test_dataset = Dataset(
test_dataset_path,
batch_vectorizer_path=f'{dn}_test_internals',
keep_in_memory=dataset._small_data,
)
# TODO: quick hack, i'm not sure what for
test_dataset._to_dataset = lambda: test_dataset
train_dataset._to_dataset = lambda: train_dataset
return train_dataset, test_dataset
# TODO: it needs a dummy load
# like this:
# _ = build_every_score(dataset, dataset)
def build_every_score(dataset, test_dataset, config):
scores = [
SpectralDivergenceScore("arun", dataset, [config['word']]),
PerplexityScore("perp"),
SparsityPhiScore("sparsity_phi"), SparsityThetaScore("sparsity_theta"),
HoldoutPerplexityScore('holdout_perp', test_dataset=test_dataset)
]
is_dataset_bow = _is_dataset_bow(test_dataset)
if not is_dataset_bow:
coherences = _build_coherence_scores(dataset=test_dataset)
else:
warnings.warn('Dataset seems to be in BOW! Skipping coherence scores')
coherences = list()
likelihoods = [
LikelihoodBasedScore(
f"{mode}_sparsity_{flag}", validation_dataset=dataset, modality=config['word'],
mode=mode, consider_sparsity=flag
)
for mode in ["AIC", "BIC", "MDL"] for flag in [True, False]
]
renyi_variations = [
EntropyScore(f"renyi_{threshold_factor}", threshold_factor=threshold_factor)
for threshold_factor in [0.5, 1, 2]
]
clustering = [
CalinskiHarabaszScore("calhar", dataset), SilhouetteScore("silh", dataset)
]
diversity = [
DiversityScore(f"diversity_{metric}_{is_closest}", metric=metric, closest=is_closest)
for metric in ["euclidean", 'jensenshannon', 'cosine', 'hellinger']
for is_closest in [True, False]
]
return scores + diversity + clustering + renyi_variations + likelihoods + coherences
def _is_dataset_bow(dataset: Dataset, max_num_documents_to_check: int = 100) -> bool:
words_with_colon_threshold = 0.25
is_dataset_bow = False
documents_to_check = list(dataset._data.index)[:max_num_documents_to_check]
for t in dataset._data.loc[documents_to_check, 'vw_text']:
all_vw_words = t.split()
doc_content_vw_words = [w for w in all_vw_words[1:] if not w.startswith('|')]
num_words_with_colon = sum(1 if ':' in w else 0 for w in doc_content_vw_words)
num_words = len(doc_content_vw_words)
if num_words_with_colon >= words_with_colon_threshold * num_words:
is_dataset_bow = True
break
return is_dataset_bow
def _build_coherence_scores(dataset: Dataset) -> list:
max_coherence_text_length = 25000
max_num_coherence_documents = 500
coherence_documents = list(dataset._data.index)[:max_num_coherence_documents]
coherence_text_length = sum(
len(t.split())
for t in dataset._data.loc[coherence_documents, 'vw_text'].values
)
while coherence_text_length > max_coherence_text_length:
d = coherence_documents.pop()
coherence_text_length -= len(dataset._data.loc[d, 'vw_text'].split())
assert len(coherence_documents) > 0
print(
f'Num documents for coherence: {len(coherence_documents)}, {coherence_text_length} words'
)
return [
IntratextCoherenceScore(
'intra', data=dataset, documents=coherence_documents
),
SophisticatedTopTokensCoherenceScore(
'toptok1', data=dataset, documents=coherence_documents
),
# TODO: and this
# SimpleTopTokensCoherenceScore(),
]
def check_if_monotonous(score_result):
signs = np.sign(score_result.diff().iloc[1:, :])
# convert all nans to a single value
different_signs = set(signs.values.flatten().astype(str))
if different_signs == {'nan', '0.0'}:
return True
return len(different_signs) == 1
def monotonity_and_std_analysis(
experiment_directory: str, experiment_name_template: str) -> pd.DataFrame:
informative_df = pd.DataFrame()
all_subexperems_mask = os.path.join(
experiment_directory,
experiment_name_template.format("*", "*")
)
for entry in glob.glob(all_subexperems_mask):
experiment_name = entry.split("/")[-1]
try:
result, detailed_result = load_models_from_disk(
experiment_directory, experiment_name
)
for score in detailed_result.keys():
max_std = detailed_result[score].std().max()
avg_val = detailed_result[score].median().median()
rel_error = max_std / avg_val
if rel_error > 0.01:
print(score, rel_error, detailed_result[score].std().min(), max_std)
is_monotonous = check_if_monotonous(detailed_result[score].T)
informative_df.loc[score, experiment_name] = is_monotonous
except IndexError as e:
print(f"Error reading data from {entry}!\nThe exception raised is\n{e}")
return informative_df
def read_corpus_config(filename='corpus.yml'):
schema = Map({
'dataset_path': Str(),
'batches_prefix': Str(),
'word': Str(),
'name': Str(),
Optional("num_topics_interval"): Int(),
Optional("nums_topics"): CommaSeparated(Int()),
'min_num_topics': Int(),
'max_num_topics': Int(),
'num_fit_iterations': Int(),
'num_restarts': Int(),
})
with open(filename, 'r') as f:
string = f.read()
data = strictyaml.load(string, schema=schema).data
return data
def trim_config(config, method):
return {
elem: config[elem]
for elem in signature(method.__init__).parameters
if elem in config
}
def estimate_num_iterations_for_convergence(tm, score_name="PerplexityScore@all"):
score = tm.scores[score_name]
normalized_score = np.array(score) / np.median(score)
contributions = abs(np.diff(normalized_score))
return (contributions > 2e-3).sum()
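# i.e. the number of fit iterations whose normalized score still changes by more
# than 0.2 percent, a rough estimate of when the curve flattens out.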
SCORES_DIRECTION = {
'PerplexityScore@all': None,
'SparsityThetaScore': max,
'SparsityPhiScore@word': max,
'PerplexityScore@word': None,
'SparsityPhiScore@lemmatized': max,
'PerplexityScore@lemmatized': None,
'TopicKernel@word.average_coherence': None,
'TopicKernel@word.average_contrast': max,
'TopicKernel@word.average_purity': max,
'TopicKernel@word.average_size': None,
'TopicKernel@lemmatized.average_coherence': None,
'TopicKernel@lemmatized.average_contrast': max,
'TopicKernel@lemmatized.average_purity': max,
'TopicKernel@lemmatized.average_size': None,
'perp': min,
'sparsity_phi': None,
'sparsity_theta': None,
'holdout_perp': min,
'arun': min,
'diversity_euclidean_True': max,
'diversity_euclidean_False': max,
'diversity_jensenshannon_True': max,
'diversity_jensenshannon_False': max,
'diversity_cosine_True': max,
'diversity_cosine_False': max,
'diversity_hellinger_True': max,
'diversity_hellinger_False': max,
'calhar': max,
'silh': max,
'renyi_0.5': min,
'renyi_1': min,
'renyi_2': min,
'AIC_sparsity_True': min,
'AIC_sparsity_False': min,
'BIC_sparsity_True': min,
'BIC_sparsity_False': min,
'MDL_sparsity_True': min,
'MDL_sparsity_False': min,
'intra': max,
'toptok1': max,
'lift': max,
'uni_theta_divergence': max,
'new_holdout_perp': min,
'RPC': min,
}
class CurveOptimumType(IntEnum):
INTERVAL = auto()
PEAK = auto()
JUMPING = auto()
OUTSIDE = auto()
JUMP_OUTSIDE = auto()
EMPTY = auto()
CURVETYPE_TO_MARKER = {
CurveOptimumType.JUMPING: "x",
CurveOptimumType.INTERVAL: ".",
CurveOptimumType.PEAK: "*",
CurveOptimumType.OUTSIDE: "^",
CurveOptimumType.JUMP_OUTSIDE: "v",
CurveOptimumType.EMPTY: "-",
}
def classify_curve(my_data, optimum_tolerance, score_direction):
"""
Parameters
----------
my_data: pd.Series
index is number of topics, values are quality measurements
optimum_tolerance: float in [0, 1]
score_direction: min, max or None
Returns
-------
(pd.Series, CurveOptimumType)
pd.Series: the set of points where optimum is located
index is number of topics
values are quality measurements (around the optimum) or NaNs (non-optimal points)
CurveOptimumType: an heuristic estimate of curve type
"""
colored_values = my_data.copy()
midrange = max(colored_values) - min(colored_values)
if score_direction == max:
threshold = max(colored_values) - midrange * optimum_tolerance
optimum = max(colored_values)
colored_values[colored_values < threshold] = np.nan
elif score_direction == min:
threshold = min(colored_values) + midrange * optimum_tolerance
optimum = min(colored_values)
colored_values[colored_values > threshold] = np.nan
intervals = colored_values[colored_values.notna()]
if len(intervals) == 0 or len(intervals) == len(colored_values):
return colored_values, CurveOptimumType.EMPTY
left_bound, right_bound = min(intervals.index), max(intervals.index)
optimum_idx = set(intervals.index)
slice_idx = set(colored_values.loc[left_bound:right_bound].index)
if (optimum_idx == slice_idx):
curve_type = CurveOptimumType.INTERVAL
if len(intervals) == 1:
curve_type = CurveOptimumType.PEAK
if min(colored_values.index) in optimum_idx:
if optimum in colored_values[colored_values.index[:2]]:
curve_type = CurveOptimumType.OUTSIDE
if max(colored_values.index) in optimum_idx:
# and abs(intervals.loc[right_bound] - optimum_val) <= :
if optimum in colored_values[colored_values.index[-2:]]:
curve_type = CurveOptimumType.OUTSIDE
curve_type = CurveOptimumType.OUTSIDE
else:
curve_type = CurveOptimumType.JUMPING
if min(colored_values.index) in optimum_idx:
if optimum in colored_values[colored_values.index[:2]]:
curve_type = CurveOptimumType.OUTSIDE
else:
curve_type = CurveOptimumType.JUMP_OUTSIDE
if max(colored_values.index) in optimum_idx:
if optimum in colored_values[colored_values.index[-2:]]:
curve_type = CurveOptimumType.OUTSIDE
else:
curve_type = CurveOptimumType.JUMP_OUTSIDE
return colored_values, curve_type
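# Illustrative example (synthetic data, score to be maximised):
#   s = pd.Series([0.1, 0.5, 0.9, 0.88, 0.4], index=[10, 20, 30, 40, 50])
#   classify_curve(s, optimum_tolerance=0.07, score_direction=max)
# keeps only the points at 30 and 40 topics (within 7% of the value spread below
# the maximum) and classifies the curve as CurveOptimumType.INTERVAL.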
def plot_everything_informative(
experiment_directory, experiment_name_template,
true_criteria=None, false_criteria=None,
maxval=None, minval=None, optimum_tolerance=0.07
):
"""
Parameters
----------
experiment_directory: str
experiment_name_template: str
true_criteria: list of str or None
the score will be displayed if every element of this list is substring of the score name
false_criteria: list of str or None
the score will be displayed if no element of this list is substring of the score name
maxval: float
trims plot to size (useful for cases when first values are anomalous)
minval: float
trims plot to size (useful for cases when first values are anomalous)
optimum_tolerance: float
used for automatically determining optima
"""
import matplotlib.pyplot as plt
if true_criteria is None:
true_criteria = list()
if false_criteria is None:
false_criteria = list()
details = defaultdict(dict)
all_subexperems_mask = os.path.join(
experiment_directory, experiment_name_template.format("*", "*")
)
for entry in glob.glob(all_subexperems_mask):
experiment_name = entry.split("/")[-1]
result, detailed_result = load_models_from_disk(
experiment_directory, experiment_name
)
for score in detailed_result.keys():
should_plot = (
all(t_criterion in score for t_criterion in true_criteria)
and
all(f_criterion not in score for f_criterion in false_criteria)
and
SCORES_DIRECTION[score] is not None
)
if should_plot:
details[score][experiment_name] = detailed_result[score].T
ticks = detailed_result[score].T.index
for score in details.keys():
fig, axes = plt.subplots(1, 1, figsize=(10, 10))
for experiment_name, data in details[score].items():
# I can make a grid of plots if I do something like this:
# my_ax = axes[index // 3][index % 3]
my_ax = axes
*name_base, param_id, seed = experiment_name.split("_")
seed = int(seed)
style = [':', "-.", "--"][seed % 3]
names = list(WC3_COLORS.keys())
color = "#" + WC3_COLORS[names[int(param_id)]]
my_data = data.T.mean(axis=0)
if maxval is not None:
my_data[my_data > maxval] = np.nan
if minval is not None:
my_data[my_data < minval] = np.nan
label = f"{experiment_name} ({data.shape[0]})" if seed == 0 else None
my_ax.plot(my_data, linestyle=style, label=label, color=color, alpha=0.7)
score_direction = SCORES_DIRECTION[score]
colored_values, curve_type = classify_curve(my_data, optimum_tolerance, score_direction)
marker = CURVETYPE_TO_MARKER[curve_type]
my_ax.plot(colored_values, linestyle=style, color=color, alpha=1.0)
my_ax.plot(colored_values, marker=marker, linestyle='', color='black', alpha=1.0)
my_ax.set_title(f"{score}")
my_ax.legend()
my_ax.set_xticks(ticks)
my_ax.grid(True)
fig.show()
def magic_clutch():
test_dataset = None
try:
# Just some dataset, whatever
test_dataset = Dataset(
data_path=os.path.join(
os.path.dirname(topicnet.__file__),
'tests', 'test_data', 'test_dataset.csv'
),
internals_folder_path=tempfile.mkdtemp(prefix='magic_clutch__')
)
# If we do not initialize a new score at least once in the notebook
# it won't be possible to load it
_ = HoldoutPerplexityScore('', test_dataset,)
_ = MeanLiftScore('', test_dataset, [])
_ = UniformThetaDivergenceScore('', test_dataset, [])
_ = build_every_score(test_dataset, test_dataset, {"word": "@word"})
_ = IntratextCoherenceScore("jbi", test_dataset)
_ = SophisticatedTopTokensCoherenceScore("sds", test_dataset)
finally:
if test_dataset is not None and os.path.isdir(test_dataset._internals_folder_path):
shutil.rmtree(test_dataset._internals_folder_path)
| 31.753108 | 100 | 0.654976 | 2,099 | 17,877 | 5.321582 | 0.216293 | 0.037243 | 0.022381 | 0.020949 | 0.235452 | 0.200806 | 0.184691 | 0.154252 | 0.140018 | 0.112086 | 0 | 0.012005 | 0.240477 | 17,877 | 562 | 101 | 31.809609 | 0.81065 | 0.09213 | 0 | 0.104478 | 0 | 0 | 0.120507 | 0.042097 | 0 | 0 | 0 | 0.001779 | 0.004975 | 1 | 0.029851 | false | 0 | 0.042289 | 0.002488 | 0.119403 | 0.007463 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b11280aec2954ad2cb925dd13d21f509185c325 | 4,757 | py | Python | approaches/networkx-graph/pac_naive_regexp_checker.py | athityakumar/btp | a1bdad0ed6162faa482673347707c09228d6cd9c | [
"MIT"
] | 1 | 2022-01-26T15:51:30.000Z | 2022-01-26T15:51:30.000Z | approaches/networkx-graph/pac_naive_regexp_checker.py | athityakumar/btp | a1bdad0ed6162faa482673347707c09228d6cd9c | [
"MIT"
] | 7 | 2017-08-25T12:36:20.000Z | 2018-04-23T04:08:38.000Z | approaches/networkx-graph/pac_naive_regexp_checker.py | athityakumar/btp | a1bdad0ed6162faa482673347707c09228d6cd9c | [
"MIT"
] | null | null | null | from pac_library import *
from regexp_naive import *
def fetch_training_words(language):
return(read_wordpairs('../daru-dataframe/spec/fixtures/'+language+'-train-high'))
def fetch_testing_words(language):
return(read_wordpairs('../daru-dataframe/spec/fixtures/'+language+'-dev'))
def fetch_common_words(language):
training_words = fetch_training_words(language)
testing_words = fetch_testing_words(language)
common_words = set()
same_words = set()
for word in testing_words:
if word in training_words:
common_words.add(word)
if testing_words[word] == training_words[word]:
same_words.add(word)
return(same_words)
def compute_suffix_regexp(concept, pac):
suffix_regexp_data = []
for (antecedent_attrs, consequent_attrs) in pac:
suffixes = dict()
count = 0
for word in consequent_attrs:
suffix_set = fetch_suffixes(word)
for suffix in suffix_set:
if suffix in suffixes:
suffixes[suffix] += 1
else:
suffixes[suffix] = 1
suffix_regexp = sorted(suffixes.items(), key=operator.itemgetter(1), reverse=True)
for (suffix, count) in suffix_regexp:
prob = count/len(consequent_attrs)
shared_operations = concept.attributes_extent(consequent_attrs)
shared_words = consequent_attrs
suffix_regexp_data.append((suffix, count, prob, shared_operations, shared_words))
return(sorted(suffix_regexp_data, key=operator.itemgetter(2), reverse=True))
def compute_prefix_regexp(concept, pac):
prefix_regexp_data = []
for (antecedent_attrs, consequent_attrs) in pac:
prefixes = dict()
count = 0
for word in consequent_attrs:
prefix_set = fetch_prefixes(word)
for prefix in prefix_set:
if prefix in prefixes:
prefixes[prefix] += 1
else:
prefixes[prefix] = 1
prefix_regexp = sorted(prefixes.items(), key=operator.itemgetter(1), reverse=True)
for (prefix, count) in prefix_regexp:
prob = count/len(consequent_attrs)
shared_operations = concept.attributes_extent(consequent_attrs)
shared_words = consequent_attrs
prefix_regexp_data.append((prefix, count, prob, shared_operations, shared_words))
return(sorted(prefix_regexp_data, key=operator.itemgetter(2), reverse=True))
def guess_cluster_from_pac2(prefix_regexp_data, suffix_regexp_data, given_word):
# Pick implication with least number of consequent attrs,
# and return shared objects (operations)
matching_suffixes = []
matching_prefixes = []
for (suffix, count, prob, shared_operations, shared_words) in suffix_regexp_data:
if given_word.endswith(suffix):
for (prefix, count, prob, shared_operations2, shared_words) in prefix_regexp_data:
if given_word.startswith(prefix):
return(set(shared_operations).union(shared_operations2))
return(None)
def guess_cluster_from_pac(prefix_regexp_data, suffix_regexp_data, given_word):
# Pick implication with least number of consequent attrs,
# and return shared objects (operations)
for (suffix, count, prob, shared_operations, shared_words) in suffix_regexp_data:
if given_word.endswith(suffix):
return(shared_operations)
return(None)
def apply_operations(word, operations):
operations = sorted(operations)
for operation in operations:
operation_type, value = operation.split('_')
if operation_type == 'insert':
word += value
else:
word = word.rstrip(value)
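# Note: str.rstrip treats `value` as a set of characters and strips any trailing
# characters from that set, not the exact suffix string.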
return(word)
language = 'polish'
testing_words = fetch_testing_words(language)
concept = init_dataset(language)
print('Initialized concept')
start1 = time.clock()
pac = concept.pac_basis(concept.is_member, 0.3, 0.4)
end1 = time.clock() - start1
exact_word_match = 0
n_words = 0
valid_operation_match = 0
n_operations = 0
ldist = 0
# common_words = fetch_common_words(language)
common_words = testing_words
prefix_regexp_data = compute_prefix_regexp(concept, pac)
suffix_regexp_data = compute_suffix_regexp(concept, pac)
for word in common_words:
words_cluster_operations = guess_cluster_from_pac(prefix_regexp_data, suffix_regexp_data, word)
if words_cluster_operations is not None:
n_words += 1
actual_word = apply_operations(word, words_cluster_operations)
expected_word = testing_words[word]
ldist += levenshtein(actual_word, expected_word)
print("Source:", word, "Expected:", expected_word, "PAC suggested operations:", words_cluster_operations, "PAC suggested word:", actual_word)
if actual_word == expected_word:
exact_word_match += 1
print("Average ldist / word:", ldist/len(common_words))
print(exact_word_match/len(common_words), "exact word matches.")
print(exact_word_match/n_words, "exact word matches when there are matching clusters.")
| 36.037879 | 145 | 0.741644 | 622 | 4,757 | 5.392283 | 0.186495 | 0.050686 | 0.042934 | 0.029815 | 0.451699 | 0.420394 | 0.38223 | 0.380441 | 0.307096 | 0.251044 | 0 | 0.006787 | 0.163759 | 4,757 | 131 | 146 | 36.312977 | 0.83635 | 0.04898 | 0 | 0.216981 | 0 | 0 | 0.058212 | 0.014166 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075472 | false | 0 | 0.018868 | 0.018868 | 0.09434 | 0.04717 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b11d7196c949a260576fffad7c4fd1f42930580 | 454 | py | Python | product/urls.py | Gandabh/E-commerce-Site | 8bc4ca85c9cd6f3ed1435e5767aef4ab315df559 | [
"MIT"
] | 1 | 2022-01-01T21:46:48.000Z | 2022-01-01T21:46:48.000Z | product/urls.py | Gandabh/E-commerce-Site | 8bc4ca85c9cd6f3ed1435e5767aef4ab315df559 | [
"MIT"
] | null | null | null | product/urls.py | Gandabh/E-commerce-Site | 8bc4ca85c9cd6f3ed1435e5767aef4ab315df559 | [
"MIT"
] | null | null | null | from django.urls import path,include
from product.views import (
ProductDetailView,
ProductView,
shopping_cart,
wishlist
)
app_name='product'
urlpatterns = [
path('product-detail/<int:pk>/', ProductDetailView.as_view(),name='product_detail'),
path('product-list/', ProductView.as_view(),name='product_list'),
path('shopping-cart/', shopping_cart,name='shopping_cart'),
path('wishlist/', wishlist,name='wishlist'),
]
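# Sketch of how these routes are typically mounted in a project's root URLconf
# (project module name and URL prefix are assumptions):
#
#     # <project>/urls.py
#     from django.urls import include, path
#     urlpatterns = [
#         path('product/', include('product.urls')),
#     ]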
| 21.619048 | 88 | 0.702643 | 52 | 454 | 5.980769 | 0.423077 | 0.154341 | 0.064309 | 0.109325 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140969 | 454 | 20 | 89 | 22.7 | 0.797436 | 0 | 0 | 0 | 0 | 0 | 0.251101 | 0.052863 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b11ee2a8c6b97121d035177bf542ec2e234d162 | 1,346 | py | Python | GestureAgentsDemo/Utils.py | chaosct/GestureAgents | 9ec0adb1e59bf995d5808431edd4cb8bf8907728 | [
"MIT"
] | 1 | 2015-01-22T10:42:09.000Z | 2015-01-22T10:42:09.000Z | GestureAgentsDemo/Utils.py | chaosct/GestureAgents | 9ec0adb1e59bf995d5808431edd4cb8bf8907728 | [
"MIT"
] | null | null | null | GestureAgentsDemo/Utils.py | chaosct/GestureAgents | 9ec0adb1e59bf995d5808431edd4cb8bf8907728 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from GestureAgentsDemo.Render import Update, drawBatch
from pyglet.text import Label
from pyglet.clock import schedule_once
class DynamicValue(object):
"""docstring for DynamicValue"""
def __init__(self, value=0):
self.value = value
self.target = value
self.time = 0
def __call__(self, target=None, time=0):
if target is None:
return self.value
if time <= 0:
self.value = target
else:
self.time = time
self.target = target
Update.register(DynamicValue._update_cb, self)
def _update_cb(self, dt):
step = dt * (self.target - self.value) / self.time
self.time -= dt
if self.time <= 0:
Update.unregister(self)
self.value = self.target
else:
self.value += step
class TextAlert(object):
def __init__(self, pos, text, group=None, timeout=5,
color=(255, 100, 100, 255), font_size=8, **kwargs):
x, y = pos
self.text = Label(text=text, x=x, y=y, font_size=font_size,
group=group, batch=drawBatch,
color=color, **kwargs)
schedule_once(self.kill, timeout)
def kill(self, dt=0):
self.text.delete()
self.text = None
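# Usage sketch (illustrative; assumes the demo's pyglet window and Render/Update
# loop are running):
#     opacity = DynamicValue(0.0)
#     opacity(target=1.0, time=0.5)   # animate towards 1.0 over ~0.5 s
#     current = opacity()             # read the interpolated value
#     TextAlert((20, 20), "Saved!", timeout=3)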
| 28.638298 | 67 | 0.559435 | 164 | 1,346 | 4.463415 | 0.329268 | 0.086066 | 0.030055 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023307 | 0.330609 | 1,346 | 46 | 68 | 29.26087 | 0.789123 | 0.036404 | 0 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138889 | false | 0 | 0.083333 | 0 | 0.305556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b14ba6dfc104cd29d8728d7b3da4a46ddc37365 | 1,590 | py | Python | tcfcli/cmds/native/common/runtime/python3.6/pstool/winps.py | alfredhuang211/scfcli | f5e086ff4fcee8d645682e85cd1486b28a224d08 | [
"Apache-2.0"
] | 103 | 2019-06-11T06:09:56.000Z | 2021-12-18T22:48:59.000Z | tcfcli/cmds/native/common/runtime/python3.6/pstool/winps.py | TencentCloud/Serverless-cli | 57f98b24cfd10712770a4806212cfb69d981a11a | [
"Apache-2.0"
] | 8 | 2019-07-12T12:08:40.000Z | 2020-10-20T07:18:17.000Z | tcfcli/cmds/native/common/runtime/python3.6/pstool/winps.py | TencentCloud/Serverless-cli | 57f98b24cfd10712770a4806212cfb69d981a11a | [
"Apache-2.0"
] | 49 | 2019-06-11T06:26:05.000Z | 2020-02-19T08:13:36.000Z | __all__ = ['win_peak_memory']
import ctypes
from ctypes import wintypes
GetCurrentProcess = ctypes.windll.kernel32.GetCurrentProcess
GetCurrentProcess.argtypes = []
GetCurrentProcess.restype = wintypes.HANDLE
SIZE_T = ctypes.c_size_t
class PROCESS_MEMORY_COUNTERS_EX(ctypes.Structure):
_fields_ = [
('cb', wintypes.DWORD),
('PageFaultCount', wintypes.DWORD),
('PeakWorkingSetSize', SIZE_T),
('WorkingSetSize', SIZE_T),
('QuotaPeakPagedPoolUsage', SIZE_T),
('QuotaPagedPoolUsage', SIZE_T),
('QuotaPeakNonPagedPoolUsage', SIZE_T),
('QuotaNonPagedPoolUsage', SIZE_T),
('PagefileUsage', SIZE_T),
('PeakPagefileUsage', SIZE_T),
('PrivateUsage', SIZE_T),
]
GetProcessMemoryInfo = ctypes.windll.psapi.GetProcessMemoryInfo
GetProcessMemoryInfo.argtypes = [
wintypes.HANDLE,
ctypes.POINTER(PROCESS_MEMORY_COUNTERS_EX),
wintypes.DWORD,
]
GetProcessMemoryInfo.restype = wintypes.BOOL
def _get_current_process():
"""Return handle to current process."""
return GetCurrentProcess()
def win_peak_memory(process=None):
"""Return Win32 process memory counters structure as a dict."""
if process is None:
process = _get_current_process()
counters = PROCESS_MEMORY_COUNTERS_EX()
ret = GetProcessMemoryInfo(process, ctypes.byref(counters),
ctypes.sizeof(counters))
if not ret:
return 0
info = dict((name, getattr(counters, name)) for name, _ in counters._fields_)
return int(info["PeakWorkingSetSize"] / (2**20)) | 32.44898 | 81 | 0.692453 | 159 | 1,590 | 6.672956 | 0.402516 | 0.051838 | 0.079171 | 0.065033 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006265 | 0.196855 | 1,590 | 49 | 82 | 32.44898 | 0.824589 | 0.057233 | 0 | 0 | 0 | 0 | 0.143049 | 0.047683 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.05 | 0 | 0.225 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b1629216fd1f26d4a22e42ef8c1b7d156606871 | 3,120 | py | Python | trojai/test/modelgen/test_modelgen_config.py | C0ldstudy/trojai | 10ac6c719f367a569a48b19e7ef18cf90df3a310 | [
"Apache-2.0"
] | 59 | 2019-07-11T02:33:38.000Z | 2022-02-02T01:51:29.000Z | trojai/test/modelgen/test_modelgen_config.py | C0ldstudy/trojai | 10ac6c719f367a569a48b19e7ef18cf90df3a310 | [
"Apache-2.0"
] | 4 | 2020-04-13T03:48:32.000Z | 2021-09-11T05:13:57.000Z | trojai/test/modelgen/test_modelgen_config.py | C0ldstudy/trojai | 10ac6c719f367a569a48b19e7ef18cf90df3a310 | [
"Apache-2.0"
] | 19 | 2019-09-17T21:09:06.000Z | 2022-03-31T17:55:18.000Z | import unittest
from unittest.mock import Mock
import os
import shutil
import tempfile
import torchvision.models as models
from trojai.modelgen.architecture_factory import ArchitectureFactory
from trojai.modelgen.data_manager import DataManager
from trojai.modelgen.config import ModelGeneratorConfig
class MyArchFactory(ArchitectureFactory):
def new_architecture(self):
return models.alexnet()
class TestModelGeneratorConfig(unittest.TestCase):
@classmethod
def setUpClass(cls):
pass
@classmethod
def tearDownClass(cls):
pass
def setUp(self):
self.m_tmp_dir = tempfile.TemporaryDirectory()
self.s_tmp_dir = tempfile.TemporaryDirectory()
self.model_save_dir = self.m_tmp_dir.name
self.stats_save_dir = self.s_tmp_dir.name
self.tdm = Mock(spec=DataManager)
def tearDown(self):
self.m_tmp_dir.cleanup()
self.s_tmp_dir.cleanup()
def test_good_param_configs(self):
mgc = ModelGeneratorConfig(MyArchFactory(), self.tdm, self.model_save_dir, self.stats_save_dir, 10)
self.assertIsInstance(mgc.arch_factory, ArchitectureFactory)
self.assertIsInstance(mgc.data, DataManager)
self.assertEqual(mgc.data, self.tdm)
self.assertEqual(mgc.model_save_dir, self.model_save_dir)
self.assertEqual(mgc.num_models, 10)
mgc = ModelGeneratorConfig(MyArchFactory(), self.tdm, self.model_save_dir, self.stats_save_dir, 15)
self.assertIsInstance(mgc.arch_factory, ArchitectureFactory)
self.assertIsInstance(mgc.data, DataManager)
self.assertEqual(mgc.data, self.tdm)
self.assertEqual(mgc.model_save_dir, self.model_save_dir)
self.assertEqual(mgc.num_models, 15)
def test_arch_and_data_bad_args(self):
self.assertRaises(TypeError, ModelGeneratorConfig, 5, self.tdm, self.model_save_dir, 10)
self.assertRaises(TypeError, ModelGeneratorConfig, MyArchFactory(), '5', self.model_save_dir, 10)
def test_model_save_dir_bad_args(self):
# error is the arg 5
with self.assertRaises(TypeError):
ModelGeneratorConfig(MyArchFactory(), self.tdm, 5, 10)
# error is arg 'object'
with self.assertRaises(TypeError):
ModelGeneratorConfig(MyArchFactory(), self.tdm, object, 10)
def test_stats_save_dir_bad_args(self):
# error is the arg 5
with self.assertRaises(TypeError):
ModelGeneratorConfig(MyArchFactory(), self.tdm, 5, 10)
# error is arg 'object'
with self.assertRaises(TypeError):
ModelGeneratorConfig(MyArchFactory(), self.tdm, object, 10)
def test_num_models_bad_args(self):
# error is arg 'object'
with self.assertRaises(TypeError):
ModelGeneratorConfig(MyArchFactory(), self.tdm, self.model_save_dir, self.stats_save_dir, object)
# error is arg '1'
with self.assertRaises(TypeError):
ModelGeneratorConfig(MyArchFactory(), self.tdm, self.model_save_dir, self.stats_save_dir, '1')
if __name__ == "__main__":
unittest.main()
| 35.454545 | 109 | 0.707692 | 372 | 3,120 | 5.717742 | 0.198925 | 0.059238 | 0.067701 | 0.067701 | 0.651622 | 0.557123 | 0.546309 | 0.546309 | 0.546309 | 0.546309 | 0 | 0.011236 | 0.201282 | 3,120 | 87 | 110 | 35.862069 | 0.842295 | 0.038462 | 0 | 0.360656 | 0 | 0 | 0.003341 | 0 | 0 | 0 | 0 | 0 | 0.295082 | 1 | 0.163934 | false | 0.032787 | 0.147541 | 0.016393 | 0.360656 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b18622eafedd5518564e3002df72bcc43626932 | 1,006 | py | Python | geolucidate/links/yahoo.py | kurtraschke/geolucidate | 827195a90d972fa5efce5a03bdbe53d8395d94ba | [
"MIT"
] | 3 | 2015-09-17T01:01:53.000Z | 2019-09-10T14:30:43.000Z | geolucidate/links/yahoo.py | kurtraschke/geolucidate | 827195a90d972fa5efce5a03bdbe53d8395d94ba | [
"MIT"
] | null | null | null | geolucidate/links/yahoo.py | kurtraschke/geolucidate | 827195a90d972fa5efce5a03bdbe53d8395d94ba | [
"MIT"
] | 5 | 2018-09-11T21:54:36.000Z | 2020-06-25T19:05:45.000Z | from geolucidate.links.tools import default_link
def yahoo_maps_link(type='hybrid', link=default_link):
'''
Returns a function for generating links to Yahoo Maps.
:param type: map type, one of 'map', 'satellite', or 'hybrid'
:param link: Link-generating function; defaults to :func:`~.default_link`
>>> from .tools import MapLink
>>> yahoo_maps_link()(MapLink("CN Tower", "43.6426", "-79.3871"))
'<a href="http://maps.yahoo.com/#lat=43.6426&lon=-79.3871&mvt=h&zoom=10&q1=43.6426%2C-79.3871" title="CN Tower (43.6426, -79.3871)">CN Tower</a>'
'''
types = {'map': 'm', 'satellite': 's', 'hybrid': 'h'}
def func(maplink, link=default_link):
baseurl = "http://maps.yahoo.com/#"
params = {'lat': maplink.lat_str,
'lon': maplink.long_str,
'mvt': types[type],
'zoom': '10',
'q1': maplink.coordinates(",")}
return maplink.make_link(baseurl, params, link)
return func
| 37.259259 | 149 | 0.593439 | 135 | 1,006 | 4.340741 | 0.407407 | 0.075085 | 0.044369 | 0.044369 | 0.064846 | 0.064846 | 0 | 0 | 0 | 0 | 0 | 0.070968 | 0.229622 | 1,006 | 26 | 150 | 38.692308 | 0.685161 | 0.4334 | 0 | 0 | 0 | 0 | 0.127341 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.083333 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b1be0edc4e55b6e8f3880be837ef0d3a77f96c8 | 4,056 | py | Python | config/settings/common.py | cni-iisc/campus-rakshak-simulator-app | ef30a1fb57b72d25534945526fdb77d158ad16c1 | [
"Apache-2.0"
] | 1 | 2021-07-29T10:33:26.000Z | 2021-07-29T10:33:26.000Z | config/settings/common.py | cni-iisc/campus-rakshak-simulator-app | ef30a1fb57b72d25534945526fdb77d158ad16c1 | [
"Apache-2.0"
] | null | null | null | config/settings/common.py | cni-iisc/campus-rakshak-simulator-app | ef30a1fb57b72d25534945526fdb77d158ad16c1 | [
"Apache-2.0"
] | null | null | null | """
Common Django settings for campussim project.
"""
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent.parent
# Application definition
INSTALLED_APPS = [
### Django apps for core functionalities and packages for static file renders
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'whitenoise.runserver_nostatic',
'django.contrib.staticfiles',
### Third-party apps that provide additional functionalities
'rest_framework',
'anymail',
'django_celery_results',
### Application logic of campussim
'interface.apps.InterfaceConfig',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'config.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['templates'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'config.wsgi.application'
# Database
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField' #default primary key field type
AUTH_USER_MODEL = 'interface.userModel' #custom user database table
# Internationalization
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'Asia/Kolkata'
USE_I18N = False
USE_L10N = True
USE_TZ = False
# Static files (CSS, JavaScript, Images)
STATIC_URL = '/static/'
STATIC_ROOT = 'static'
STATICFILES_STORAGE = 'whitenoise.storage.CompressedStaticFilesStorage' #compression for perfomance
# Media files (simulator files that are not directly served to users)
MEDIA_URL = '/media/'
MEDIA_ROOT = Path.joinpath(BASE_DIR, 'media/')
# Celery: handles running asynchronous, background tasks
CELERY_BROKER_URL = 'amqp://localhost'
CELERY_TIMEZONE = 'Asia/Kolkata'
CELERY_TASK_MAX_RETRIES = 1
CELERYD_TASK_SOFT_TIME_LIMIT = 240 # 4 minutes
CELERYD_TASK_TIME_LIMIT = 600 # 10 minutes
CELERY_RESULT_BACKEND = 'django-db'
CELERY_TASK_ROUTES = {
'interface.tasks.send_mail': 'mailQueue',
'interface.tasks.run_instantiate': 'instQueue',
'interface.tasks.run_simulation': 'simQueue',
}
# Anymail: handles sending out email notifications, when configured
ANYMAIL = {
}
EMAIL_BACKEND = ''
DEFAULT_FROM_MAIL = ''
# Logging Template
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'interface_file': {
'level': 'DEBUG',
'class': 'logging.FileHandler',
'filename': f'{ BASE_DIR }/log.txt',
'formatter': 'verbose',
},
},
'formatters': {
'verbose': {
'format': '%(asctime)s | %(levelname)s | module=%(module)s | %(message)s',
}
},
'loggers':{
'interface_log': {
'handlers': ['interface_file'],
'level': 'DEBUG',
'propagate': False #handles duplicates
},
'celery_log': {
'handlers': ['interface_file'],
'level': 'DEBUG',
'propagate': True
},
}
}
# Colours for messages
from django.contrib.messages import constants as messages
MESSAGE_TAGS = {
messages.ERROR: 'danger',
}
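# A deployment-specific settings module (file name assumed, e.g. config/settings/production.py)
# would typically start from these defaults and override only what differs:
#
#     from .common import *  # noqa: F401,F403
#     DEBUG = False
#     ALLOWED_HOSTS = ['campussim.example.org']  # hostname is illustrative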
| 27.221477 | 99 | 0.664201 | 403 | 4,056 | 6.51861 | 0.511166 | 0.059383 | 0.031976 | 0.029692 | 0.044538 | 0.032737 | 0.032737 | 0 | 0 | 0 | 0 | 0.005311 | 0.210799 | 4,056 | 148 | 100 | 27.405405 | 0.81537 | 0.175542 | 0 | 0.046296 | 0 | 0 | 0.483998 | 0.307065 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.018519 | 0 | 0.018519 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b1c2c85a3b57a89a03330a9170df389c8a29620 | 5,562 | py | Python | Mp3Processing.py | yNeshy/voice-change | 2535351bcd8a9f2d58fcbff81a2051c4f6ac6ab4 | [
"MIT"
] | null | null | null | Mp3Processing.py | yNeshy/voice-change | 2535351bcd8a9f2d58fcbff81a2051c4f6ac6ab4 | [
"MIT"
] | null | null | null | Mp3Processing.py | yNeshy/voice-change | 2535351bcd8a9f2d58fcbff81a2051c4f6ac6ab4 | [
"MIT"
] | null | null | null | # Aziz Nechi
# If you have any trouble with submitting to S3, please contact me. This part is on me.
# The conversion, fetching from URL, and sound editing have been checked and work as intended.
# Please make sure the url you provide is correct, and doesn't block downloads from
# unknown sources.
# Your S3 bucket name here:
BUCKET = ''
# To read and write MP3 files
import pydub
# Math
import numpy as np
# Get requests to download mp3 or wav files
import requests
# Used to replace file operations with in memory operations (with ByteIO)
import io
# In memory copy of objects
from copy import copy
# Imports related to the use of S3
import logging
import boto3
from botocore.exceptions import ClientError
class AudioProcessing():
def __init__(self, url=None, bytes_object=None):
"""
In-memory audio editing for MP3 data: simple effects plus MP3 read/write via pydub.
"""
if(url):
# create file from url
r = requests.get(url)
bytes_object = r.content
self.read(bytes_object)
return
def add_echo(self, delay):
'''Applies an echo delayed by `delay` seconds (should be between 0 and the audio duration)'''
output_audio = copy(self.audio_data)
output_delay = delay * self.freq
for count in range(len(self.audio_data)):
e = self.audio_data[count]
output_audio[count] = e + self.audio_data[count - int(output_delay)]
self.audio_data = output_audio
def set_audio_speed(self, speed_factor):
'''Sets the speed of the audio by a floating-point factor'''
sound_index = np.round(np.arange(0, len(self.audio_data), speed_factor))
self.audio_data = self.audio_data[sound_index[sound_index < len(self.audio_data)].astype(int)]
def filter_frequency(self, threshold):
output_audio = copy(self.audio_data)
for count in range(len(self.audio_data)):
e = self.audio_data[count]
if ( e < threshold ):
output_audio[count] = threshold
else :
output_audio[count] = e
self.audio_data = output_audio
def custom_filter(self, threshold):
output_audio = copy(self.audio_data)
for count in range(len(self.audio_data)):
e = self.audio_data[count]
output_audio[count] = (e / threshold)
self.audio_data = output_audio
def set_volume(self, level):
'''Sets the overall volume of the data via floating-point factor'''
output_audio = copy(self.audio_data)
for count in range(len(self.audio_data)):
e = self.audio_data[count]
output_audio[count] = (e * level)
self.audio_data = output_audio
def deepen(self, factor=0.85):
self.freq = int(self.freq * factor)
def pitch(self, factor=0.9):
self.freq = int(self.freq / factor)
def hide_voice (self):
self.set_volume(1.3)
#self.custom_filter(1.3)
self.add_echo(delay=0.02)
self.deepen(factor=0.8)
output = io.BytesIO()
return self.write(output)
def to_bytes(self):
"""
Returns io.BytesIO stream of the edited mp3
"""
output = io.BytesIO()
return self.write(output)
def read(self, f, normalized=False):
"""MP3 to numpy array"""
a = pydub.AudioSegment.from_mp3(io.BytesIO(f))
y = np.array(a.get_array_of_samples())
if a.channels == 2:
y = y.reshape((-1, 2))
if normalized:
self.audio_data = np.float32(y) / 2**15
self.freq = a.frame_rate
return a.frame_rate, np.float32(y) / 2**15
else:
self.audio_data = y
self.freq = a.frame_rate
return a.frame_rate, y
def write(self, f, normalized=False):
"""numpy array to MP3"""
sr = self.freq
x = self.audio_data
channels = 2 if (x.ndim == 2 and x.shape[1] == 2) else 1
if normalized: # normalized array - each item should be a float in [-1, 1)
y = np.int16(x * 2 ** 15)
else:
y = np.int16(x)
song = pydub.AudioSegment(y.tobytes(), frame_rate=sr, sample_width=2, channels=channels)
song.export(f, format="mp3", bitrate="320k")
return f
def upload_file_to_S3(file_name, bucket, object_name=None):
"""Upload a file to an S3 bucket
:param file_name: File to upload
:param bucket: Bucket to upload to
:param object_name: S3 object name. If not specified then file_name is used
:return: True if file was uploaded, else False
"""
# If S3 object_name was not specified, use file_name
if object_name is None:
object_name = file_name
# Upload the file
s3_client = boto3.client('s3')
try:
response = s3_client.upload_file(file_name, bucket, object_name)
return response
except ClientError as e:
print("S3 error occured "+ e)
return None
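# Note: boto3's upload_file() expects a path to a file on disk. For an in-memory
# io.BytesIO (such as the output of AudioProcessing.to_bytes()), use
# s3_client.upload_fileobj(fileobj, bucket, object_name) instead.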
if __name__ == '__main__':
# print("URL from of mp3 file: ", end="")
# YOUR_URL = str(input())
YOUR_URL = None  # optionally set to an mp3 URL and pass url=YOUR_URL to AudioProcessing
x = AudioProcessing(bytes_object=open("who-are-you.mp3", "rb").read())
# x.set_audio_speed(1.1)
# Deepened voice :
x.deepen()
# Pitched voice :
# x.pitch()
result = x.to_bytes()
# upload_file_to_S3(result, BUCKET, "voice_changed.mp3")
with open("slim.mp3", "wb") as f:
f.write(result.getbuffer())
| 29.903226 | 102 | 0.613448 | 783 | 5,562 | 4.218391 | 0.268199 | 0.065395 | 0.09446 | 0.029064 | 0.246443 | 0.222525 | 0.201635 | 0.153497 | 0.129882 | 0.109295 | 0 | 0.018896 | 0.286408 | 5,562 | 185 | 103 | 30.064865 | 0.813303 | 0.265013 | 0 | 0.267327 | 0 | 0 | 0.014876 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.128713 | false | 0 | 0.079208 | 0 | 0.29703 | 0.009901 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b1ea6c3e05799e453822258bb77692c266ee6a2 | 5,343 | py | Python | jsonwspclient/jsonwspresponse.py | ellethee/jsonwspclient | cafd6d5ee27d686329338bd861ae85a69eeaeb4b | [
"MIT"
] | 2 | 2018-03-22T10:29:46.000Z | 2019-09-26T16:58:20.000Z | jsonwspclient/jsonwspresponse.py | ellethee/jsonwspclient | cafd6d5ee27d686329338bd861ae85a69eeaeb4b | [
"MIT"
] | null | null | null | jsonwspclient/jsonwspresponse.py | ellethee/jsonwspclient | cafd6d5ee27d686329338bd861ae85a69eeaeb4b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
====================================================
Jsonwspresponse :mod:`jsonwspclient.jsonwspresponse`
====================================================
"""
import logging
from . import jsonwspexceptions as excs
from . import jsonwsputils as utils
from .jsonwspmultipart import MultiPartReader
log = logging.getLogger('jsonwspclient')
class JsonWspResponse:
"""JsonWspResponse (wrapper for `requests Response object <http://docs
.python-requests.org/en/master/api/#requests.Response>`_) is not meant
to be instantiate manually but only as response from :any:`JsonWspClient`
requests.
"""
def __init__(self, response, trigger):
self._response = response
self.__reader = None
self._boundary = utils.get_boundary(self.headers)
self._multipart = None
self._raise_for_fault = False
self._trigger = trigger
self.attachments = {}
"""(dict): Attachments dictionary, not really useful."""
self.fault = {}
"""(dict): Fault dictionary if response has fault."""
self.fault_code = None
"""(str): Fault code if response has fault."""
self.has_fault = False
"""(bool): True if response has fault."""
self.is_multipart = bool(self._boundary)
"""(bool): True if response is multipart."""
self.length = int(self.headers.get('Content-Length', '0'))
"""(int): response content length"""
self.response_dict = {}
"""(dict): JSON part of the response."""
self.result = {}
"""(dict,list): **data** of the JSON part of the response."""
self._process()
def __getattr__(self, name):
return getattr(self._response, name)
def __bool__(self):
return self.has_fault is False
def _process(self):
"""_process."""
if self._boundary:
self.__reader = self._get_reader()
self.response_dict = next(self)
else:
try:
self.response_dict = self._response.json()
except ValueError as error:
log.debug('error %s', error)
self.response_dict = {}
self._check_fault()
self._get_attchments_id()
def _check_fault(self):
"""Check fault."""
self.has_fault = self.response_dict.get('type') == "jsonwsp/fault"
if self.has_fault:
self.fault = self.response_dict['fault']
self.fault_code = self.response_dict['fault']['code']
else:
self.result = self.response_dict.get('result', {})
def _get_attchments_id(self):
"""get info."""
if self.is_multipart and isinstance(self.result, (dict, list)):
self.attachments.update(utils.check_attachment(self.result))
def _get_reader(self):
"""get all."""
self._multipart = MultiPartReader(
self.headers,
utils.FileWithCallBack(self.raw, self._trigger, size=self.length),
size=self.length,
callback=self._trigger)
return self._multipart.iterator()
@property
def _reader(self):
if not self.is_multipart:
raise TypeError("Is not a multipart response")
elif self.__reader is None:
raise IOError("Reader is None")
return self.__reader
def read_all(self, chunk_size=None):
"""Read all the data and return a Dictionary containig the Attachments.
Args:
chunk_size (int): bytes to read each time.
Returns:
dict: Dictionary with all attachments.
"""
self._multipart.read_all(chunk_size)
return self._multipart.by_id
def __next__(self):
"""If JsonWspResponse is multipart returns the next attachment.
Returns:
JsonWspAttachment: the attachment object.
"""
return next(self._reader)
def save_all(self, path, name='name', overwrite=True):
"""Save all the attachments ad once.
Args:
path (str): Path where to save.
name (str, optional): key with which the file name is specified in the
dictionary (default ``name``).
overwrite (bool, optional): overwrite the file if exists (defautl True).
"""
for attach in self._reader:
if not attach:
break
try:
filename = self.attachments[attach.att_id][name]
except KeyError:
filename = attach.headers.get("x-filename")
attach.save(path, filename=filename, overwrite=overwrite)
def raise_for_fault(self):
"""Reise error if needed else return self."""
if self.fault_code == 'server':
raise excs.ServerFault(response=self)
elif self.fault_code == 'client':
raise excs.ClientFault(response=self)
elif self.fault_code == 'incompatible':
raise excs.IncompatibleFault(response=self)
return self
def __iter__(self):
return self
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.close()
def __del__(self):
del self.__reader
del self._multipart
del self.attachments
del self._response
| 33.186335 | 84 | 0.588059 | 587 | 5,343 | 5.153322 | 0.267462 | 0.05157 | 0.042314 | 0.017851 | 0.057521 | 0.035702 | 0 | 0 | 0 | 0 | 0 | 0.000524 | 0.285607 | 5,343 | 160 | 85 | 33.39375 | 0.791983 | 0.195957 | 0 | 0.094737 | 0 | 0 | 0.040707 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.168421 | false | 0 | 0.042105 | 0.042105 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b26031be43a4955c5bab323a43414cfbaa1a9e0 | 8,986 | py | Python | onir/pipelines/catfog.py | tgeral68/OpenNIR | 225b26185bd67fdc00f24de3ef70d35768e22243 | [
"MIT"
] | 3 | 2021-01-07T15:44:38.000Z | 2021-08-23T03:44:47.000Z | onir/pipelines/catfog.py | tgeral68/OpenNIR | 225b26185bd67fdc00f24de3ef70d35768e22243 | [
"MIT"
] | null | null | null | onir/pipelines/catfog.py | tgeral68/OpenNIR | 225b26185bd67fdc00f24de3ef70d35768e22243 | [
"MIT"
] | 1 | 2021-11-16T09:10:47.000Z | 2021-11-16T09:10:47.000Z | import os
import json
from onir import util, pipelines
import onir
import pickle
@pipelines.register('catfog')
class CatfogPipeline(pipelines.BasePipeline):
name = None
@staticmethod
def default_config():
return {
'max_epoch': 1000,
'early_stop': 20,
'warmup': -1,
'val_metric': 'map',
'purge_weights': True,
'test': False,
'initial_eval': False,
'skip_ds_init': False,
'only_cached': False,
'onlytest': False,
'finetune': False,
'savefile': '_',
}
def __init__(self, config, trainer, valid_pred, test_pred, logger):
super().__init__(config, logger)
self.trainer = trainer
self.valid_pred = valid_pred
self.test_pred = test_pred
def run(self):
validator = self.valid_pred.pred_ctxt()
top_epoch, top_value, top_train_ctxt, top_valid_ctxt = None, None, None, None
prev_train_ctxt = None
file_output = {
'ranker': self.trainer.ranker.path_segment(),
'vocab': self.trainer.vocab.path_segment(),
'trainer': self.trainer.path_segment(),
'dataset': self.trainer.dataset.path_segment(),
'valid_ds': self.valid_pred.dataset.path_segment(),
'validation_metric': self.config['val_metric'],
'logfile': util.path_log()
}
# initialize dataset(s)
if not self.config['skip_ds_init']:
self.trainer.dataset.init(force=False)
self.valid_pred.dataset.init(force=False)
if self.config['test']:
self.test_pred.dataset.init(force=False)
base_path_g = None
for train_ctxt in self.trainer.iter_train(only_cached=self.config['only_cached'], _top_epoch=self.config.get('finetune')):
if self.config.get('onlytest'):
base_path_g = train_ctxt['base_path']
self.logger.debug(f'[catfog] skipping training')
top_train_ctxt=train_ctxt
break
if prev_train_ctxt is not None and top_epoch is not None and prev_train_ctxt is not top_train_ctxt:
self._purge_weights(prev_train_ctxt)
if train_ctxt['epoch'] >= 0 and not self.config['only_cached']:
message = self._build_train_msg(train_ctxt)
if train_ctxt['cached']:
self.logger.debug(f'[train] [cached] {message}')
else:
self.logger.debug(f'[train] {message}')
if train_ctxt['epoch'] == -1 and not self.config['initial_eval']:
continue
valid_ctxt = dict(validator(train_ctxt))
message = self._build_valid_msg(valid_ctxt)
if valid_ctxt['epoch'] >= self.config['warmup']:
if self.config['val_metric'] == '':
top_epoch = valid_ctxt['epoch']
top_train_ctxt = train_ctxt
top_valid_ctxt = valid_ctxt
elif top_value is None or valid_ctxt['metrics'][self.config['val_metric']] > top_value:
message += ' <---'
top_epoch = valid_ctxt['epoch']
top_value = valid_ctxt['metrics'][self.config['val_metric']]
if top_train_ctxt is not None:
self._purge_weights(top_train_ctxt)
top_train_ctxt = train_ctxt
top_valid_ctxt = valid_ctxt
else:
if prev_train_ctxt is not None:
self._purge_weights(prev_train_ctxt)
if not self.config['only_cached']:
if valid_ctxt['cached']:
self.logger.debug(f'[valid] [cached] {message}')
else:
self.logger.info(f'[valid] {message}')
if top_epoch is not None:
epochs_since_imp = valid_ctxt['epoch'] - top_epoch
if self.config['early_stop'] > 0 and epochs_since_imp >= self.config['early_stop']:
self.logger.warn('stopping after epoch {epoch} ({early_stop} epochs with no '
'improvement to {val_metric})'.format(**valid_ctxt, **self.config))
break
if train_ctxt['epoch'] >= self.config['max_epoch']:
self.logger.warn('stopping after epoch {max_epoch} (max_epoch)'.format(**self.config))
break
prev_train_ctxt = train_ctxt
if not self.config.get('onlytest'):
self.logger.info('top validation epoch={} {}={}'.format(top_epoch, self.config['val_metric'], top_value))
self.logger.info(f'[catfog: top_train_ctxt] {top_train_ctxt}')
file_output.update({
'valid_epoch': top_epoch,
'valid_run': top_valid_ctxt['run_path'],
'valid_metrics': top_valid_ctxt['metrics'],
})
# save top train epoch for faster testing without needing the retraining phase
if not self.config.get('onlytest'):
#pickle.dump(top_epoch, open( top_train_ctxt['base_path']+"/top_epoch.pickle", "wb") )
# move best to -2.p
self.trainer.save_best(top_epoch, top_train_ctxt['base_path'])
if self.config.get('onlytest'): # for onlytest use also finetune=true, to load best epoch at first iteration
self.logger.debug(f'[catfog] loading top context')
#top_epoch = pickle.load(open(base_path_g+"/top_epoch.pickle", "rb"))
#self.logger.debug(f'[catfog] loading top context ... {top_epoch} epoch')
#top_train_ctxt = self.trainer.trainCtx(top_epoch)
self.logger.debug(f'[catfog] Top epoch context: {dict(top_train_ctxt)}')
if self.config['test']:
self.logger.info(f'Starting load ranker')
top_train_ctxt['ranker'] = onir.trainers.base._load_ranker(top_train_ctxt['ranker'](), top_train_ctxt['ranker_path'])
self.logger.debug(f'[catfog] test_pred : {self.test_pred}')
self.logger.info(f'Starting test predictor run')
with self.logger.duration('testing'):
test_ctxt = self.test_pred.run(top_train_ctxt)
file_output.update({
'test_ds': self.test_pred.dataset.path_segment(),
'test_run': test_ctxt['run_path'],
'test_metrics': test_ctxt['metrics'],
})
with open(util.path_modelspace() + '/val_test.jsonl', 'at') as f:
json.dump(file_output, f)
f.write('\n')
if not self.config.get('onlytest'):
self.logger.info('valid run at {}'.format(valid_ctxt['run_path']))
if self.config['test']:
self.logger.info('test run at {}'.format(test_ctxt['run_path']))
if not self.config.get('onlytest'):
self.logger.info('valid ' + self._build_valid_msg(top_valid_ctxt))
if self.config['test']:
self.logger.info('test ' + self._build_valid_msg(test_ctxt))
self._write_metrics_file(test_ctxt)
def _build_train_msg(self, ctxt):
delta_acc = ctxt['acc'] - ctxt['unsup_acc']
msg_pt1 = 'epoch={epoch} loss={loss:.4f}'.format(**ctxt)
msg_pt2 = 'acc={acc:.4f} unsup_acc={unsup_acc:.4f} ' \
'delta_acc={delta_acc:.4f}'.format(**ctxt, delta_acc=delta_acc)
losses = ''
if ctxt['losses'] and ({'data'} != ctxt['losses'].keys() or ctxt['losses']['data'] != ctxt['loss']):
losses = []
for lname, lvalue in ctxt['losses'].items():
losses.append(f'{lname}={lvalue:.4f}')
losses = ' '.join(losses)
losses = f' ({losses})'
return f'{msg_pt1}{losses} {msg_pt2}'
def _build_valid_msg(self, ctxt):
message = ['epoch=' + str(ctxt['epoch'])]
for metric, value in sorted(ctxt['metrics'].items()):
message.append('{}={:.4f}'.format(metric, value))
if metric == self.config['val_metric']:
message[-1] = '[' + message[-1] + ']'
return ' '.join(message)
def _purge_weights(self, ctxt):
if self.config['purge_weights']:
if os.path.exists(ctxt['ranker_path']):
os.remove(ctxt['ranker_path'])
if os.path.exists(ctxt['optimizer_path']):
os.remove(ctxt['optimizer_path'])
def _write_metrics_file(self, ctxt):
outputdir = ""
# output file name comes from the 'savefile' config option (intended format: model_traindataset_testdataset)
filename = os.path.join(outputdir, self.config.get('savefile'))
with open(filename, "w") as f:
for metric, value in sorted(ctxt['metrics'].items()):
f.write('{}\t{:.4f}'.format(metric, value))
f.write('\n')
| 41.795349 | 130 | 0.565546 | 1,070 | 8,986 | 4.504673 | 0.158879 | 0.06722 | 0.042324 | 0.026556 | 0.351245 | 0.2361 | 0.146058 | 0.115768 | 0.0639 | 0.055602 | 0 | 0.003829 | 0.302471 | 8,986 | 214 | 131 | 41.990654 | 0.765156 | 0.060316 | 0 | 0.197605 | 0 | 0 | 0.175362 | 0.008537 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041916 | false | 0 | 0.02994 | 0.005988 | 0.101796 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b286add96d725a4355a582d1095f8902eb24f33 | 1,425 | py | Python | launch/no_roof_small_warehouse.launch.py | ros2-gbp/aws_robomaker_small_warehouse_world-release | 6c3f0c113772b0148e1b50dcdd45269869622d5d | [
"MIT-0"
] | 1 | 2022-02-24T21:33:26.000Z | 2022-02-24T21:33:26.000Z | launch/no_roof_small_warehouse.launch.py | ros2-gbp/aws_robomaker_small_warehouse_world-release | 6c3f0c113772b0148e1b50dcdd45269869622d5d | [
"MIT-0"
] | 1 | 2022-03-31T00:36:19.000Z | 2022-03-31T00:36:19.000Z | launch/no_roof_small_warehouse.launch.py | ros2-gbp/aws_robomaker_small_warehouse_world-release | 6c3f0c113772b0148e1b50dcdd45269869622d5d | [
"MIT-0"
] | 1 | 2022-03-11T05:24:38.000Z | 2022-03-11T05:24:38.000Z | # Copyright (c) 2018 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from ament_index_python.packages import get_package_share_directory
import launch
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
def generate_launch_description():
ld = launch.LaunchDescription([
launch.actions.IncludeLaunchDescription(
launch.launch_description_sources.PythonLaunchDescriptionSource(
[get_package_share_directory(
'aws_robomaker_small_warehouse_world'), '/launch/small_warehouse.launch.py']
),
launch_arguments={
'world': os.path.join(get_package_share_directory('aws_robomaker_small_warehouse_world'), 'worlds', 'no_roof_small_warehouse', 'no_roof_small_warehouse.world')
}.items()
)
])
return ld
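# With the workspace built and sourced, this description is typically started with:
#     ros2 launch aws_robomaker_small_warehouse_world no_roof_small_warehouse.launch.py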
| 38.513514 | 175 | 0.739649 | 174 | 1,425 | 5.867816 | 0.568966 | 0.058766 | 0.044074 | 0.070519 | 0.107738 | 0.107738 | 0.107738 | 0.107738 | 0.107738 | 0 | 0 | 0.006932 | 0.190175 | 1,425 | 36 | 176 | 39.583333 | 0.877816 | 0.391579 | 0 | 0 | 0 | 0 | 0.194607 | 0.181712 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.277778 | 0 | 0.388889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b28b59d6ecded6cdbfbf6ca62928fab85bd51f4 | 2,028 | py | Python | analysis/att_weights.py | sagarika1101/twitter_abusive_behavior_detection | edf90364d10f44aa6ace4d54ef9ed1f9bf495c0c | [
"Apache-2.0"
] | 1 | 2021-09-24T08:51:38.000Z | 2021-09-24T08:51:38.000Z | analysis/att_weights.py | sagarika1101/twitter_abusive_behavior_detection | edf90364d10f44aa6ace4d54ef9ed1f9bf495c0c | [
"Apache-2.0"
] | null | null | null | analysis/att_weights.py | sagarika1101/twitter_abusive_behavior_detection | edf90364d10f44aa6ace4d54ef9ed1f9bf495c0c | [
"Apache-2.0"
] | null | null | null | from keras.models import model_from_json
from sklearn.metrics import classification_report
from load_data import read_file, preprocess_doc, \
pad_sequences, build_input_data_rnn, build_vocab
from model import AttentionWithContext, dot_product
# load json and create model
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
# load weights into new model
loaded_model = model_from_json(loaded_model_json, custom_objects={
'AttentionWithContext': AttentionWithContext})
loaded_model.load_weights("rnn_docs_ranking.h5")
print("Loaded model from disk")
print(loaded_model.summary())
from keras import backend as K
# with a Sequential model
get_td_layer_output = K.function([loaded_model.layers[0].input],
[loaded_model.layers[3].output])
data, y = read_file('../hatespeech', with_evaluation=True)
data = preprocess_doc(data, True)
data = [s.split(" ") for s in data]
data = pad_sequences(data)
word_counts, vocabulary, vocabulary_inv = build_vocab(data)
x = build_input_data_rnn(data, vocabulary, len(data[0]))
print('Loaded data')
print("Computing predicted labels")
y_dist = loaded_model.predict(x)
y_pred = y_dist.argmax(axis=1)
print(classification_report(y, y_pred))
print("Computing 3rd layer output")
td_output = get_td_layer_output([x])[0]
W, b, u = loaded_model.layers[4].get_weights()
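# The helper below re-derives the attention weights outside the model by repeating the
# attention layer's forward pass with the extracted W, b, u:
# uit = tanh(x.W + b), ait = uit.u, then a softmax over the time dimension.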
def get_attn_weights(x, W, b, u):
uit = dot_product(x, W)
uit += b
uit = K.tanh(uit)
ait = dot_product(uit, u)
a = K.exp(ait)
a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
a = K.expand_dims(a)
return a
print("Computing attention weights")
td_output = K.variable(td_output)
W, b, u = K.variable(W), K.variable(b), K.variable(u)
att_weights = get_attn_weights(td_output, W, b, u)
att_weights = att_weights.numpy().squeeze(axis=2)
import pickle
with open('/rnn_docs_attn.pkl', 'wb') as f:
pickle.dump((x, y, y_pred, y_dist, att_weights, vocabulary_inv), f)
print("Dumped weights")
| 28.971429 | 74 | 0.727811 | 321 | 2,028 | 4.367601 | 0.342679 | 0.078459 | 0.008559 | 0.024251 | 0.015692 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00576 | 0.143984 | 2,028 | 69 | 75 | 29.391304 | 0.801843 | 0.038462 | 0 | 0 | 0 | 0 | 0.107914 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020833 | false | 0 | 0.125 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b2c3495062849e7ab61a70eaff4509c4559a318 | 879 | py | Python | examples/examples.py | benjaminkrenn/perpetual | b8925636391baf5a96daa20adf8741f4fd965f30 | [
"MIT"
] | null | null | null | examples/examples.py | benjaminkrenn/perpetual | b8925636391baf5a96daa20adf8741f4fd965f30 | [
"MIT"
] | null | null | null | examples/examples.py | benjaminkrenn/perpetual | b8925636391baf5a96daa20adf8741f4fd965f30 | [
"MIT"
] | 1 | 2019-12-20T08:33:54.000Z | 2019-12-20T08:33:54.000Z | # A simple example
# Author: Martin Lackner
from __future__ import print_function
import sys
sys.path.insert(0, '..')
import profiles
import perpetual_rules as perpetual
apprsets1 = {1: [1], 2: [1], 3: [2], 4: [3]}
apprsets2 = {1: [1], 2: [2], 3: [2], 4: [3]}
apprsets3 = {1: [1], 2: [1], 3: [2], 4: [3]}
voters = [1, 2, 3, 4]
cands = [1, 2, 3]
profile1 = profiles.ApprovalProfile(voters, cands, apprsets1)
profile2 = profiles.ApprovalProfile(voters, cands, apprsets2)
profile3 = profiles.ApprovalProfile(voters, cands, apprsets3)
weights = perpetual.init_weights("per_quota", voters)
print("First round:")
print("Perpetual Quota selects", perpetual.per_quota(profile1, weights))
print("Second round:")
print("Perpetual Quota selects", perpetual.per_quota(profile2, weights))
print("Third round:")
print("Perpetual Quota selects", perpetual.per_quota(profile3, weights))
| 28.354839 | 72 | 0.709898 | 121 | 879 | 5.066116 | 0.338843 | 0.016313 | 0.014682 | 0.019576 | 0.261011 | 0.261011 | 0.261011 | 0.261011 | 0 | 0 | 0 | 0.057292 | 0.12628 | 879 | 30 | 73 | 29.3 | 0.740885 | 0.044369 | 0 | 0 | 0 | 0 | 0.139785 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.35 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b2c403fc778412b306d10be87e5d0448be413c4 | 424 | py | Python | database.py | dwang/discord-logger | db476659265ee24fa97c5ba3d6cde65192214ab7 | [
"MIT"
] | null | null | null | database.py | dwang/discord-logger | db476659265ee24fa97c5ba3d6cde65192214ab7 | [
"MIT"
] | null | null | null | database.py | dwang/discord-logger | db476659265ee24fa97c5ba3d6cde65192214ab7 | [
"MIT"
] | null | null | null | import os
from pymongo import MongoClient
client = MongoClient(os.getenv("MONGODB_IP", "127.0.0.1"), 27017)
db = client.discord_logger
def add_message(guild, message_id, time, content, author, channel):
post = {
"message_id": str(message_id),
"time": str(time),
"content": str(content),
"author": str(author),
"channel": str(channel)
}
db[str(guild)].insert_one(post)
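if __name__ == "__main__":
    # Smoke test (requires a reachable MongoDB instance; all values are illustrative).
    add_message("example-guild", 123456789012345678, "2024-01-01 00:00:00",
                "hello world", "user#0001", "general")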
| 23.555556 | 67 | 0.632075 | 55 | 424 | 4.745455 | 0.527273 | 0.103448 | 0.099617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033133 | 0.216981 | 424 | 17 | 68 | 24.941176 | 0.753012 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b2c5562aee4802c9392f1d199a158fa1f1ff402 | 1,337 | py | Python | azure-mgmt-recoveryservicesbackup/azure/mgmt/recoveryservicesbackup/models/sql_data_directory.py | JonathanGailliez/azure-sdk-for-python | f0f051bfd27f8ea512aea6fc0c3212ee9ee0029b | [
"MIT"
] | 1 | 2021-09-07T18:36:04.000Z | 2021-09-07T18:36:04.000Z | azure-mgmt-recoveryservicesbackup/azure/mgmt/recoveryservicesbackup/models/sql_data_directory.py | JonathanGailliez/azure-sdk-for-python | f0f051bfd27f8ea512aea6fc0c3212ee9ee0029b | [
"MIT"
] | 2 | 2019-10-02T23:37:38.000Z | 2020-10-02T01:17:31.000Z | azure-mgmt-recoveryservicesbackup/azure/mgmt/recoveryservicesbackup/models/sql_data_directory.py | JonathanGailliez/azure-sdk-for-python | f0f051bfd27f8ea512aea6fc0c3212ee9ee0029b | [
"MIT"
] | 1 | 2019-06-17T22:18:23.000Z | 2019-06-17T22:18:23.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class SQLDataDirectory(Model):
"""SQLDataDirectory info.
:param type: Type of data directory mapping. Possible values include:
'Invalid', 'Data', 'Log'
:type type: str or
~azure.mgmt.recoveryservicesbackup.models.SQLDataDirectoryType
:param path: File path
:type path: str
:param logical_name: Logical name of the file
:type logical_name: str
"""
_attribute_map = {
'type': {'key': 'type', 'type': 'str'},
'path': {'key': 'path', 'type': 'str'},
'logical_name': {'key': 'logicalName', 'type': 'str'},
}
def __init__(self, **kwargs):
super(SQLDataDirectory, self).__init__(**kwargs)
self.type = kwargs.get('type', None)
self.path = kwargs.get('path', None)
self.logical_name = kwargs.get('logical_name', None)
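# Example (illustrative values): SQLDataDirectory(type='Data', path='F:\\data\\db.mdf',
# logical_name='db_data') maps directly onto the three serialized attributes above.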
| 34.282051 | 76 | 0.586387 | 145 | 1,337 | 5.303448 | 0.558621 | 0.085826 | 0.028609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000922 | 0.188482 | 1,337 | 38 | 77 | 35.184211 | 0.707834 | 0.572177 | 0 | 0 | 0 | 0 | 0.171154 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b30e9bcb03004d191594b655b0f016407530407 | 5,278 | py | Python | functions_ordered.py | mpleung/equil_compute | 13c5d6488f894c1b2e7b539780dfa1c69e6e8133 | [
"MIT"
] | null | null | null | functions_ordered.py | mpleung/equil_compute | 13c5d6488f894c1b2e7b539780dfa1c69e6e8133 | [
"MIT"
] | null | null | null | functions_ordered.py | mpleung/equil_compute | 13c5d6488f894c1b2e7b539780dfa1c69e6e8133 | [
"MIT"
] | null | null | null | import numpy as np, pandas as pd, networkx as nx, itertools, sys, traceback
def assemble_data(data_folder):
"""
We don't include a dummy for missings for data.church because Card and Giuliano do not report its coefficient. We don't include a dummy for missings for data.parent_HS and data.parent_college because Card and Giuliano do not report its coefficient. We only use one measure for the physical development index because the other measures have too much missing data.
"""
network_data = pd.read_csv(data_folder + '/21600-0003-Data.tsv', sep='\t', usecols=['AID','ODGX2'], low_memory=False)
network_data.columns = ['id','outdeg']
wave1 = pd.read_csv(data_folder + '/21600-0001-Data.tsv', sep='\t', usecols=['AID','H1MP4','H1EE14','H1EE15','S1','S2','S6B','S11','S12','S17','S18','PA22','PA23','PA63'], low_memory=False)
wave1.columns = ['id','phys_dev','killed21','hiv','age','sex','black','has_mom','motheredu','has_dad','fatheredu','religious','church','HH_smoke']
wave2 = pd.read_csv(data_folder + '/21600-0005-Data.tsv', sep='\t', usecols=['AID','H2PF28','H2PF35'], low_memory=False)
wave2.columns = ['id','risk','future']
data = pd.merge(wave1,wave2,on='id',how='inner')
data = pd.merge(data,network_data,on='id',how='inner')
data.replace(' ',np.nan,inplace=True)
data.dropna(inplace=True)
data = data.apply(pd.to_numeric)
data = data[data.phys_dev < 6]
data = data[data.risk < 6]
data = data[data.future < 6]
data = data[data.killed21 < 6]
data = data[data.hiv < 6]
data = data[data.HH_smoke <= 1]
data = data[data.has_mom <= 1]
data = data[data.has_dad <= 1]
data = data[data.religious <= 28]
data = data[data.church <= 4]
data = data[data.age != 99]
data['male'] = data.sex == 1
data.phys_dev = (data.phys_dev - data.phys_dev.mean()) / data.phys_dev.std()
data.risk.replace([1,2,3,4,5],[5,4,3,2,1], inplace=True)
data['time_pref'] = (data.killed21 + data.hiv) / 2
data['2parent'] = data.has_mom * data.has_dad
data.religious = 1 - pd.eval('(data.religious == 28) or (data.church == 4)')
data.church.replace([1,2,3,4],[3,2,1,0], inplace=True)
data['parent_HS'] = pd.eval('((data.motheredu >=3) and (data.motheredu <= 8)) or ((data.fatheredu >=3) and (data.fatheredu <= 8))')
data['parent_college'] = pd.eval('((data.motheredu >=7) and (data.motheredu <= 8)) or ((data.fatheredu >=7) and (data.fatheredu <= 8))')
deg_dist = data.outdeg.values
data = data[['age','black','male','phys_dev','risk','future','time_pref','HH_smoke','2parent','religious','church','parent_HS','parent_college']].values.astype(np.int32)
if deg_dist.sum() % 2 != 0: deg_dist[0] += 1 # total degree must be even
return deg_dist, data
def gen_exo(data, theta):
"""
Outputs nx1 vector corresponding to X_i'beta, where X_i is the vector of covariates.
"""
return data.dot(theta[4:theta.size])
def gen_D(U_exo_eps, A, theta):
"""
Outputs the network D.
U_exo_eps = output of gen_exo plus random-utility shock
theta = numpy array; vector of structural parameters
"""
D = nx.empty_graph(U_exo_eps.shape[0],create_using=nx.DiGraph())
nr = (U_exo_eps - theta[0] + max(theta[2],0) > 0) * (U_exo_eps - theta[1] + min(theta[3],0) < 0) * (np.minimum(-U_exo_eps + theta[1] - max(theta[3],0), U_exo_eps - theta[0] + min(theta[2],0)) < 0)
for edge in A.edges():
i,j = edge[0], edge[1]
if nr[j]: D.add_edge(i,j)
if nr[i]: D.add_edge(j,i)
return D
def component_NEs_one_run(component, rob12, nr_ind, A, U_exo_eps, theta, vec):
"""
Outputs vec if it corresponds to a NE, empty list otherwise.
component = list; agents in the component
vec = element in itertools.product iterator; candidate action profile for agents in component with non-robust actions
rob12 = numpy array; indicator for agents robustly choosing action 1
nr_ind = numpy array; indicators for agents with non-robust actions
A = networkx graph; network
U_exo_eps = numpy array; vector of utilities, excluding endogenous part
theta = numpy array; vector of structural parameters
"""
# construct candidate action profile for the whole network, which consists of splicing vec into rob12
nr_count = 0
vec_list = list(vec)
# verify equilibrium conditions for action_profile
NE = True
for i in component:
Yneigh = np.array([vec_list[component.index(j)] for j in A.adj[i] if j in component] + [rob12[j] for j in A.adj[i] if j not in component]) # first list is actions of i's neighbors with non-robust actions (given by vec_list), second is actions of i's neighbors with robust actions (given by rob12)
S1 = 0 if Yneigh.size == 0 else (Yneigh >= 1).sum() / Yneigh.size
S2 = 0 if Yneigh.size == 0 else (Yneigh == 2).sum() / Yneigh.size
c1 = theta[0] - theta[2] * S1
c2 = theta[1] - theta[3] * S2
own_action = vec_list[component.index(i)]
if U_exo_eps[i] > c1 and own_action == 0 or (c1 > U_exo_eps[i] or U_exo_eps[i] > c2) and own_action == 1 or c2 > U_exo_eps[i] and own_action == 2:
NE = False
break
if NE:
return vec_list
else:
return []
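if __name__ == '__main__':
    # Minimal sketch on synthetic inputs (not the Add Health data): draw a small random
    # network and compute the best-response network D for an arbitrary parameter vector.
    rng = np.random.default_rng(0)
    theta = np.concatenate(([0.0, 1.0, 0.5, 0.5], np.full(13, 0.1)))  # 4 structural + 13 covariate params
    fake_data = rng.integers(0, 2, size=(20, 13))
    A = nx.gnp_random_graph(20, 0.2, seed=0)
    U_exo_eps = gen_exo(fake_data, theta) + rng.normal(size=20)
    D = gen_D(U_exo_eps, A, theta)
    print(D.number_of_nodes(), "nodes,", D.number_of_edges(), "directed edges")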
| 49.327103 | 359 | 0.650057 | 852 | 5,278 | 3.916667 | 0.284038 | 0.059934 | 0.02727 | 0.019479 | 0.213965 | 0.172011 | 0.106083 | 0.031166 | 0.031166 | 0.022176 | 0 | 0.042157 | 0.195529 | 5,278 | 106 | 360 | 49.792453 | 0.743759 | 0.268473 | 0 | 0 | 0 | 0.030769 | 0.170479 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061538 | false | 0 | 0.015385 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b32180c77f0ecbaa74ea5dff4762e2a920c7894 | 299 | py | Python | P1365.py | Muntaha-Islam0019/Leetcode-Solutions | 0bc56ce43a6d8ad10461b69078166a2a5b913e7f | [
"MIT"
] | null | null | null | P1365.py | Muntaha-Islam0019/Leetcode-Solutions | 0bc56ce43a6d8ad10461b69078166a2a5b913e7f | [
"MIT"
] | null | null | null | P1365.py | Muntaha-Islam0019/Leetcode-Solutions | 0bc56ce43a6d8ad10461b69078166a2a5b913e7f | [
"MIT"
] | null | null | null | from typing import List
class Solution:
def smallerNumbersThanCurrent(self, nums: List[int]) -> List[int]:
result = {}
for index, number in enumerate(sorted(nums)):
if number not in result:
result[number] = index
return [result[number] for number in nums]
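if __name__ == "__main__":
    # LeetCode 1365 sample input: [8, 1, 2, 2, 3] -> expected output [4, 0, 1, 1, 3]
    print(Solution().smallerNumbersThanCurrent([8, 1, 2, 2, 3]))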
| 27.181818 | 70 | 0.578595 | 33 | 299 | 5.242424 | 0.545455 | 0.080925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.324415 | 299 | 10 | 71 | 29.9 | 0.856436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b3291fd4770bd69c2af0fd2c1b2340f6739c8dc | 1,184 | py | Python | ee/models/dashboard_privilege.py | msnitish/posthog | cb86113f568e72eedcb64b5fd00c313d21e72f90 | [
"MIT"
] | null | null | null | ee/models/dashboard_privilege.py | msnitish/posthog | cb86113f568e72eedcb64b5fd00c313d21e72f90 | [
"MIT"
] | null | null | null | ee/models/dashboard_privilege.py | msnitish/posthog | cb86113f568e72eedcb64b5fd00c313d21e72f90 | [
"MIT"
] | null | null | null | from django.db import models
from posthog.models.dashboard import Dashboard
from posthog.models.utils import UUIDModel, sane_repr
# We call models that grant a user access to some resource (which isn't a grouping of users) a "privilege"
class DashboardPrivilege(UUIDModel):
dashboard: models.ForeignKey = models.ForeignKey(
"posthog.Dashboard", on_delete=models.CASCADE, related_name="privileges", related_query_name="privilege",
)
user: models.ForeignKey = models.ForeignKey(
"posthog.User",
on_delete=models.CASCADE,
related_name="explicit_dashboard_privileges",
related_query_name="explicit_dashboard_privilege",
)
level: models.PositiveSmallIntegerField = models.PositiveSmallIntegerField(
choices=Dashboard.RestrictionLevel.choices
)
added_at: models.DateTimeField = models.DateTimeField(auto_now_add=True)
updated_at: models.DateTimeField = models.DateTimeField(auto_now=True)
class Meta:
constraints = [
models.UniqueConstraint(fields=["dashboard", "user"], name="unique_explicit_dashboard_privilege"),
]
__repr__ = sane_repr("dashboard", "user", "level")
| 39.466667 | 113 | 0.735642 | 129 | 1,184 | 6.550388 | 0.434109 | 0.07574 | 0.040237 | 0.07574 | 0.27929 | 0.186982 | 0.111243 | 0 | 0 | 0 | 0 | 0 | 0.170608 | 1,184 | 29 | 114 | 40.827586 | 0.860489 | 0.087838 | 0 | 0 | 0 | 0 | 0.158627 | 0.085343 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.130435 | 0 | 0.478261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b33c63a9914002051440cf798e5f124574105d1 | 3,496 | py | Python | niphlem/report/html_report.py | CoAxLab/brainhack-physio-project | b87cd6c6db486639f271b786ca1cf4aa27a70fad | [
"BSD-3-Clause"
] | 1 | 2022-03-13T14:33:44.000Z | 2022-03-13T14:33:44.000Z | niphlem/report/html_report.py | CoAxLab/niphlem | a275cfe0b6286fcb8a60a2d999354453804dd5d2 | [
"BSD-3-Clause"
] | 11 | 2021-12-07T03:33:13.000Z | 2022-02-28T20:41:12.000Z | niphlem/report/html_report.py | CoAxLab/brainhack-physio-project | b87cd6c6db486639f271b786ca1cf4aa27a70fad | [
"BSD-3-Clause"
] | null | null | null | import weakref
from html import escape
import warnings
MAX_IMG_VIEWS_BEFORE_WARNING = 10
class HTMLDocument(object):
"""
Embeds a plot in a web page.
If you are running a Jupyter notebook, the plot will be displayed
inline if this object is the output of a cell.
Otherwise, use save_as_html("filename.html") to save it as an HTML file
that can be opened in a browser.
Use str(document) or document.html to get the content of the web page,
and document.get_iframe() to have it wrapped in an iframe.
"""
_all_open_html_repr = weakref.WeakSet()
def __init__(self, html, width=600, height=400):
self.html = html
self.width = width
self.height = height
self._temp_file = None
self._check_n_open()
def _check_n_open(self):
HTMLDocument._all_open_html_repr.add(self)
if MAX_IMG_VIEWS_BEFORE_WARNING is None:
return
if MAX_IMG_VIEWS_BEFORE_WARNING < 0:
return
if len(HTMLDocument._all_open_html_repr
) > MAX_IMG_VIEWS_BEFORE_WARNING - 1:
warnings.warn('It seems you have created more than {} '
'nilearn views. As each view uses dozens '
'of megabytes of RAM, you might want to '
'delete some of them.'.format(
MAX_IMG_VIEWS_BEFORE_WARNING))
def resize(self, width, height):
"""Resize the plot displayed in a Jupyter notebook."""
self.width, self.height = width, height
return self
def get_iframe(self, width=None, height=None):
"""
Get the document wrapped in an inline frame.
For inserting in another HTML page or for display in a Jupyter
notebook.
"""
if width is None:
width = self.width
if height is None:
height = self.height
escaped = escape(self.html, quote=True)
wrapped = ('<iframe srcdoc="{}" width="{}" height="{}" '
'frameBorder="0"></iframe>').format(escaped, width, height)
return wrapped
def get_standalone(self):
""" Get the plot in an HTML page."""
return self.html
def _repr_html_(self):
"""
Used by the Jupyter notebook.
Users normally won't call this method explicitly.
"""
return self.get_iframe()
def __str__(self):
return self.html
def save_as_html(self, file_name):
"""
Save the plot in an HTML file that can later be opened in a browser.
"""
with open(file_name, 'wb') as f:
f.write(self.get_standalone().encode('utf-8'))
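# --- Illustrative usage (added; not part of the original module) ---
# A minimal sketch of how HTMLDocument is meant to be used, following its docstring above.
# The HTML fragment and the output path are invented for the example.
def _example_html_document():
    doc = HTMLDocument("<h1>Physio QC report</h1>", width=400, height=200)
    iframe = doc.get_iframe()      # wrapped for embedding in another page or a notebook
    page = doc.get_standalone()    # the raw HTML, same as str(doc)
    doc.resize(800, 400)           # returns self, so it can be the last expression in a cell
    doc.save_as_html("/tmp/report_example.html")
    return iframe, page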
class HTMLReport(HTMLDocument):
"""A report written as HTML.
Methods such as save_as_html(), open_in_browser()
are inherited from HTMLDocument
"""
def __init__(self, head_tpl, body, head_values={}):
"""The head_tpl is meant for display as a full page, eg writing on
disk. The body is used for embedding in an existing page.
"""
html = head_tpl.safe_substitute(body=body, **head_values)
super(HTMLReport, self).__init__(html)
self.head_tpl = head_tpl
self.body = body
def _repr_html_(self):
"""
Used by the Jupyter notebook.
Users normally won't call this method explicitly.
"""
return self.body
def __str__(self):
return self.body
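# --- Illustrative usage (added; not part of the original module) ---
# HTMLReport expects head_tpl to behave like string.Template, since __init__ calls
# head_tpl.safe_substitute(body=body, **head_values). The template text, body and title
# below are invented for the example.
from string import Template

def _example_html_report():
    head_tpl = Template("<html><head><title>$title</title></head><body>$body</body></html>")
    body = "<p>ECG and respiration plots would go here.</p>"
    report = HTMLReport(head_tpl, body, head_values={"title": "niphlem report"})
    # get_standalone() gives the full page (for saving to disk),
    # while str(report) gives just the body (for embedding in a notebook).
    return report.get_standalone(), str(report)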
| 32.672897 | 78 | 0.606407 | 467 | 3,496 | 4.344754 | 0.316916 | 0.029571 | 0.027107 | 0.041893 | 0.210941 | 0.114342 | 0.088714 | 0.088714 | 0.088714 | 0.088714 | 0 | 0.004965 | 0.308638 | 3,496 | 106 | 79 | 32.981132 | 0.834506 | 0.300915 | 0 | 0.178571 | 0 | 0 | 0.095559 | 0.011216 | 0 | 0 | 0 | 0 | 0 | 1 | 0.196429 | false | 0 | 0.053571 | 0.035714 | 0.464286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b33d7ab4bd7f7cef81cb68e4a2eedff151354aa | 1,244 | py | Python | onegram/utils/validation.py | pauloromeira/onegram | 8294e237e8b8aa70c783347e42ea03d24098c48e | [
"MIT"
] | 150 | 2018-03-17T20:38:43.000Z | 2020-11-18T14:08:06.000Z | onegram/utils/validation.py | pauloromeira/onegram | 8294e237e8b8aa70c783347e42ea03d24098c48e | [
"MIT"
] | 14 | 2018-04-08T11:39:59.000Z | 2019-08-10T14:16:23.000Z | onegram/utils/validation.py | pauloromeira/onegram | 8294e237e8b8aa70c783347e42ea03d24098c48e | [
"MIT"
] | 11 | 2018-03-22T04:34:29.000Z | 2021-06-10T10:55:35.000Z | import json
from ..exceptions import AuthFailed, AuthUserError
from ..exceptions import RequestFailed, RateLimitedError
def validate_response(session, response, auth=False):
try:
try:
js_response = json.loads(response.text)
except:
response.raise_for_status()
if auth:
raise AuthFailed('Authentication failed')
else:
if auth:
_check_auth(js_response)
_check_status(js_response)
response.raise_for_status()
return js_response
except:
if response:
session.logger.error(response.text)
raise
def _check_auth(response):
if not response.get('user', False):
raise AuthUserError('Please check your username')
if not response.get('authenticated', False):
raise AuthFailed('Authentication failed')
def _check_status(response):
message = response.get('message', '').lower()
status = response.get('status', '').lower()
msg = ''
if status:
msg = status
if message:
msg += f': {message}'
if 'rate limit' in message:
raise RateLimitedError(msg)
if 'fail' in status:
raise RequestFailed(msg)
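# --- Illustrative note (added; not part of the original module) ---
# A small sketch of how the status check above maps a failure payload to an exception:
# a 'fail' status raises RequestFailed, and a message containing 'rate limit' raises
# RateLimitedError instead. The payload below is invented for the example.
def _example_status_mapping():
    try:
        _check_status({'status': 'fail', 'message': 'Please wait a few minutes (rate limit)'})
    except RateLimitedError as error:
        return str(error)  # 'fail: please wait a few minutes (rate limit)'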
| 26.468085 | 57 | 0.611736 | 130 | 1,244 | 5.723077 | 0.346154 | 0.053763 | 0.053763 | 0.05914 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.292605 | 1,244 | 46 | 58 | 27.043478 | 0.845455 | 0 | 0 | 0.263158 | 0 | 0 | 0.098875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.078947 | 0 | 0.184211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b352ccb19fe75f43dea62f1c3041514829e96c3 | 5,233 | py | Python | sdk/python/tests/unit/test_proto_json.py | achals/feast | ce3ad0332edd6507e63fb80310c13d24952100d4 | [
"Apache-2.0"
] | 2,258 | 2020-05-17T02:41:07.000Z | 2022-03-31T22:30:57.000Z | sdk/python/tests/unit/test_proto_json.py | achals/feast | ce3ad0332edd6507e63fb80310c13d24952100d4 | [
"Apache-2.0"
] | 1,768 | 2020-05-16T05:37:28.000Z | 2022-03-31T23:30:05.000Z | sdk/python/tests/unit/test_proto_json.py | achals/feast | ce3ad0332edd6507e63fb80310c13d24952100d4 | [
"Apache-2.0"
] | 415 | 2020-05-16T18:21:27.000Z | 2022-03-31T09:59:10.000Z | import assertpy
import pytest
from google.protobuf.json_format import MessageToDict, Parse
from feast import proto_json
from feast.protos.feast.serving.ServingService_pb2 import (
FeatureList,
GetOnlineFeaturesResponse,
)
from feast.protos.feast.types.Value_pb2 import RepeatedValue
FieldValues = GetOnlineFeaturesResponse.FieldValues
@pytest.fixture(scope="module")
def proto_json_patch():
proto_json.patch()
def test_feast_value(proto_json_patch):
# FieldValues contains "map<string, feast.types.Value> fields" proto field.
# We want to test that feast.types.Value can take different types in JSON
# without using additional structure (e.g. 1 instead of {int64_val: 1}).
field_values_str = """{
"fields": {
"a": 1,
"b": 2.0,
"c": true,
"d": "foo",
"e": [1, 2, 3],
"f": [2.0, 3.0, 4.0, null],
"g": [true, false, true],
"h": ["foo", "bar", "foobar"],
"i": null
}
}"""
field_values_proto = FieldValues()
Parse(field_values_str, field_values_proto)
assertpy.assert_that(field_values_proto.fields.keys()).is_equal_to(
{"a", "b", "c", "d", "e", "f", "g", "h", "i"}
)
assertpy.assert_that(field_values_proto.fields["a"].int64_val).is_equal_to(1)
assertpy.assert_that(field_values_proto.fields["b"].double_val).is_equal_to(2.0)
assertpy.assert_that(field_values_proto.fields["c"].bool_val).is_equal_to(True)
assertpy.assert_that(field_values_proto.fields["d"].string_val).is_equal_to("foo")
assertpy.assert_that(field_values_proto.fields["e"].int64_list_val.val).is_equal_to(
[1, 2, 3]
)
# Can't directly check equality to [2.0, 3.0, 4.0, float("nan")], because float("nan") != float("nan")
assertpy.assert_that(
field_values_proto.fields["f"].double_list_val.val[:3]
).is_equal_to([2.0, 3.0, 4.0])
assertpy.assert_that(field_values_proto.fields["f"].double_list_val.val[3]).is_nan()
assertpy.assert_that(field_values_proto.fields["g"].bool_list_val.val).is_equal_to(
[True, False, True]
)
assertpy.assert_that(
field_values_proto.fields["h"].string_list_val.val
).is_equal_to(["foo", "bar", "foobar"])
assertpy.assert_that(field_values_proto.fields["i"].null_val).is_equal_to(0)
# Now convert protobuf back to json and check that
field_values_json = MessageToDict(field_values_proto)
assertpy.assert_that(field_values_json["fields"].keys()).is_equal_to(
{"a", "b", "c", "d", "e", "f", "g", "h", "i"}
)
assertpy.assert_that(field_values_json["fields"]["a"]).is_equal_to(1)
assertpy.assert_that(field_values_json["fields"]["b"]).is_equal_to(2.0)
assertpy.assert_that(field_values_json["fields"]["c"]).is_equal_to(True)
assertpy.assert_that(field_values_json["fields"]["d"]).is_equal_to("foo")
assertpy.assert_that(field_values_json["fields"]["e"]).is_equal_to([1, 2, 3])
# Can't directly check equality to [2.0, 3.0, 4.0, float("nan")], because float("nan") != float("nan")
assertpy.assert_that(field_values_json["fields"]["f"][:3]).is_equal_to(
[2.0, 3.0, 4.0]
)
assertpy.assert_that(field_values_json["fields"]["f"][3]).is_nan()
assertpy.assert_that(field_values_json["fields"]["g"]).is_equal_to(
[True, False, True]
)
assertpy.assert_that(field_values_json["fields"]["h"]).is_equal_to(
["foo", "bar", "foobar"]
)
assertpy.assert_that(field_values_json["fields"]["i"]).is_equal_to(None)
def test_feast_repeated_value(proto_json_patch):
# Make sure that RepeatedValue in JSON does not need the
# additional structure (e.g. [1,2,3] instead of {"val": [1,2,3]})
repeated_value_str = "[1,2,3]"
repeated_value_proto = RepeatedValue()
Parse(repeated_value_str, repeated_value_proto)
assertpy.assert_that(len(repeated_value_proto.val)).is_equal_to(3)
assertpy.assert_that(repeated_value_proto.val[0].int64_val).is_equal_to(1)
assertpy.assert_that(repeated_value_proto.val[1].int64_val).is_equal_to(2)
assertpy.assert_that(repeated_value_proto.val[2].int64_val).is_equal_to(3)
# Now convert protobuf back to json and check that
repeated_value_json = MessageToDict(repeated_value_proto)
assertpy.assert_that(repeated_value_json).is_equal_to([1, 2, 3])
def test_feature_list(proto_json_patch):
# Make sure that FeatureList in JSON does not need the additional structure
# (e.g. ["foo", "bar"] instead of {"val": ["foo", "bar"]})
feature_list_str = '["feature-a", "feature-b", "feature-c"]'
feature_list_proto = FeatureList()
Parse(feature_list_str, feature_list_proto)
assertpy.assert_that(len(feature_list_proto.val)).is_equal_to(3)
assertpy.assert_that(feature_list_proto.val[0]).is_equal_to("feature-a")
assertpy.assert_that(feature_list_proto.val[1]).is_equal_to("feature-b")
assertpy.assert_that(feature_list_proto.val[2]).is_equal_to("feature-c")
# Now convert protobuf back to json and check that
feature_list_json = MessageToDict(feature_list_proto)
assertpy.assert_that(feature_list_json).is_equal_to(
["feature-a", "feature-b", "feature-c"]
)
| 45.112069 | 106 | 0.68928 | 787 | 5,233 | 4.287166 | 0.125794 | 0.13278 | 0.170717 | 0.14997 | 0.679016 | 0.63278 | 0.547421 | 0.468287 | 0.397747 | 0.255483 | 0 | 0.020838 | 0.156316 | 5,233 | 115 | 107 | 45.504348 | 0.743375 | 0.155742 | 0 | 0.065217 | 0 | 0 | 0.121907 | 0 | 0 | 0 | 0 | 0 | 0.358696 | 1 | 0.043478 | false | 0 | 0.065217 | 0 | 0.108696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b38ff047687aae355024d77f7e305c5f8f0b8c3 | 1,584 | py | Python | tests/test_models.py | andleb/rule-vetting | 57330c4e59affc89d6ae0ab5e619adb88e17c51e | [
"MIT"
] | null | null | null | tests/test_models.py | andleb/rule-vetting | 57330c4e59affc89d6ae0ab5e619adb88e17c51e | [
"MIT"
] | null | null | null | tests/test_models.py | andleb/rule-vetting | 57330c4e59affc89d6ae0ab5e619adb88e17c51e | [
"MIT"
] | 1 | 2021-11-16T22:03:58.000Z | 2021-11-16T22:03:58.000Z | from os.path import join as oj
import importlib
import numpy as np
import os
import rulevetting
import rulevetting.api.util
DATA_PATH = oj(os.path.dirname(os.path.abspath(__file__)), '..', 'data')
def test_models(project):
"""Check that each baseline is implemented properly
"""
if not project == 'None':
project_ids = [project]
else:
project_ids = rulevetting.api.util.get_project_ids()
for project_id in project_ids:
# get data
project_dset_module_name = f'rulevetting.projects.{project_id}.dataset'
dset = importlib.import_module(project_dset_module_name).Dataset()
_, df_tune, _ = dset.get_data(data_path=DATA_PATH, load_csvs=True)
assert df_tune.shape[0], 'df_tune should not be empty when loading from csvs'
project_baseline_module_name = f'rulevetting.projects.{project_id}.baseline'
baseline = importlib.import_module(project_baseline_module_name).Baseline()
preds_proba = baseline.predict_proba(df_tune)
assert len(preds_proba.shape) == 2
assert preds_proba.shape[1] == 2
assert preds_proba.shape[0] == df_tune.shape[0]
assert np.max(preds_proba) <= 1, 'predicted probabilities must be <= 1'
assert np.min(preds_proba) >= 0, 'predicted probabilities must be >= 0'
preds = baseline.predict(df_tune)
assert np.array_equal(preds, preds.astype(bool)), 'preds values must only be 0 or 1!'
assert preds.shape[0] == df_tune.shape[0]
s = baseline.print_model(df_tune)
assert isinstance(s, str)
| 36.837209 | 93 | 0.688131 | 221 | 1,584 | 4.701357 | 0.357466 | 0.046198 | 0.031761 | 0.034649 | 0.147257 | 0.109721 | 0.075072 | 0 | 0 | 0 | 0 | 0.011155 | 0.207702 | 1,584 | 42 | 94 | 37.714286 | 0.816733 | 0.039773 | 0 | 0 | 0 | 0 | 0.163696 | 0.054785 | 0 | 0 | 0 | 0 | 0.3 | 1 | 0.033333 | false | 0 | 0.266667 | 0 | 0.3 | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b3a078d3c031de0b5f904e504751cabdb4ea3b8 | 3,295 | py | Python | backend/core/tests.py | fabiomsrs/vinta_teste | c05ab993ae20f4c7a8df9e1fce2ac1ba5ce1f150 | [
"MIT"
] | null | null | null | backend/core/tests.py | fabiomsrs/vinta_teste | c05ab993ae20f4c7a8df9e1fce2ac1ba5ce1f150 | [
"MIT"
] | 3 | 2021-04-08T21:57:14.000Z | 2021-06-10T20:30:28.000Z | backend/core/tests.py | fabiomsrs/vinta_teste | c05ab993ae20f4c7a8df9e1fce2ac1ba5ce1f150 | [
"MIT"
] | null | null | null | from django.test import Client, RequestFactory, TestCase
from django.contrib.sessions.middleware import SessionMiddleware
from core.serializers import RepoSerializer
from core.models import Repo
from .views import *
import datetime
# Create your tests here.
class RepoSerializerTestCase(TestCase):
def setUp(self):
self.repo_attributes = {
"name":"repo",
"owner":"bob",
"date": datetime.datetime.today(),
"user": "bob",
}
self.serializer_data = {
"name":"repo",
"owner":"jake",
"date": datetime.datetime.today(),
"user": "bob",
}
self.repo = Repo.objects.create(**self.repo_attributes)
self.serializer = RepoSerializer(instance=self.repo)
def test_contains_expected_fields(self):
data = self.serializer.data
print(data.keys())
self.assertEqual(set(data.keys()), set(["id",'name',"owner","date","user","commits"]))
def test_name_field_content(self):
data = self.serializer.data
self.assertEqual(data['name'], self.repo_attributes['name'])
def test_owner_field_content(self):
data = self.serializer.data
self.assertEqual(data['owner'], self.repo_attributes['owner'])
def test_date_field_content(self):
data = self.serializer.data
date = self.repo_attributes['date'].strftime("%d/%m/%Y %H:%M:%S")
self.assertEqual(data['date'], date)
def test_user_field_content(self):
data = self.serializer.data
self.assertEqual(data['user'], self.repo_attributes['user'])
def test_repo_exists(self):
serializer = RepoSerializer(data=self.serializer_data)
serializer.is_valid()
serializer.save()
serializer = RepoSerializer(data=self.serializer_data)
self.assertFalse(serializer.is_valid())
def test_repo_to_representation(self):
self.assertEqual(str(self.repo), self.repo.name)
class CoreViewsTestCase(TestCase):
def setUp(self):
# Every test needs access to the request factory.
self.factory = RequestFactory()
def test_get_commitviewset_forbidden(self):
# Create an instance of a GET request.
request = self.factory.get('/commit/')
view = CommitViewSet.as_view({'get':'list'})(request)
self.assertEqual(view.status_code, 403)
def test_get_repoviewset_forbidden(self):
# Create an instance of a GET request.
request = self.factory.get('/repos/')
view = RepoViewSet.as_view({'get':'list'})(request)
self.assertEqual(view.status_code, 403)
def test_post_repoviewset_forbidden(self):
# Create an instance of a POST request.
body = {
"name" : "tcc",
"owner": {"login":"fabiomsrs"},
"created_at": "2017-03-31T05:29:44Z"
}
request = self.factory.post('/repos/?access_token=123&user=fabiomsrs',body, content_type='application/json')
middleware = SessionMiddleware()
middleware.process_request(request)
request.session.save()
view = RepoViewSet.as_view({'post':'create'})(request)
self.assertEqual(view.status_code, 403)
| 33.622449 | 118 | 0.630046 | 368 | 3,295 | 5.505435 | 0.285326 | 0.043435 | 0.071076 | 0.076012 | 0.365252 | 0.350444 | 0.305035 | 0.231491 | 0.204837 | 0.204837 | 0 | 0.010417 | 0.242489 | 3,295 | 98 | 119 | 33.622449 | 0.801282 | 0.055539 | 0 | 0.257143 | 0 | 0 | 0.090763 | 0.012552 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.171429 | false | 0 | 0.085714 | 0 | 0.285714 | 0.014286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b3b3a87bbd82bb7b65edd23db6d4c072bb7d617 | 1,910 | py | Python | examples/sunflower_callback.py | neurophysik/jitcdde | 44d7ed6ad187d3591407155b6eeb063f73e462cd | [
"BSD-3-Clause"
] | 49 | 2016-10-24T10:00:33.000Z | 2022-03-27T11:03:51.000Z | examples/sunflower_callback.py | neurophysik/jitcdde | 44d7ed6ad187d3591407155b6eeb063f73e462cd | [
"BSD-3-Clause"
] | 45 | 2016-11-20T22:05:07.000Z | 2022-03-29T07:13:25.000Z | examples/sunflower_callback.py | neurophysik/jitcdde | 44d7ed6ad187d3591407155b6eeb063f73e462cd | [
"BSD-3-Clause"
] | 11 | 2016-11-14T07:19:16.000Z | 2022-03-16T14:27:06.000Z | # In this example, we implement the sunflower equation. First, we do it regularly, then we repeat the process using callbacks.
from jitcdde import jitcdde, y, t
import numpy as np
a = 4.8
b = 0.186
tau = 40
# Regular implementation
# ----------------------
# To implement the sine function, we use SymEngine’s sine. This is a symbolic function that gets translated to a C implementation of the sine function under the hood.
import symengine
f = [
y(1),
-a/tau*y(1) - b/tau*symengine.sin(y(0,t-tau))
]
DDE_regular = jitcdde(f)
# With Callbacks
# --------------
# Now, let's assume for example's sake that we did not have a sine function for straightforward use like above (i.e., a symbolic SymEngine function that gets translated to a C implementation). Instead we have to use the one from Python's math library.
my_sine = symengine.Function("my_sine")
f_with_callback = [
y(1),
-a/tau*y(1) - b/tau*my_sine(y(0,t-tau))
]
# We need to introduce a wrapper to match the required signature for callbacks. We also add a `print` statement to see when the callback was called. Apart from that print statement, we do not use the first argument, which is a vector containing the entire present state of the system:
import math
def my_sine_callback(y,arg):
print(f"my_sine called with arguments {y} and {arg}")
return math.sin(arg)
DDE_callback = jitcdde(
f_with_callback,
callback_functions = [(my_sine,my_sine_callback,1)],
)
# Integration
# -----------
# Initialise and address initial discontinuities for both implementations:
DDE_regular.constant_past( [1.0,0.0], time=0.0 )
DDE_regular.adjust_diff()
DDE_callback.constant_past( [1.0,0.0], time=0.0 )
DDE_callback.adjust_diff()
assert DDE_regular.t == DDE_callback.t
# Integrate side by side and compare:
times = DDE_regular.t + np.arange(10,100,10)
for time in times:
assert DDE_regular.integrate(time)[0] == DDE_callback.integrate(time)[0]
| 33.508772 | 274 | 0.727749 | 323 | 1,910 | 4.213622 | 0.390093 | 0.03086 | 0.022043 | 0.038207 | 0.11903 | 0.11903 | 0.11903 | 0.11903 | 0.036738 | 0.036738 | 0 | 0.022291 | 0.15445 | 1,910 | 56 | 275 | 34.107143 | 0.820433 | 0.536649 | 0 | 0.0625 | 0 | 0 | 0.057405 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 1 | 0.03125 | false | 0 | 0.125 | 0 | 0.1875 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b3bea91a0d8eade96c2b7f6b821f5c913db2dcb | 3,249 | py | Python | examples/correct_nlg.py | drahnr/nlprule | ae208aa911cf46c96731ed4012aba9b03fa6242e | [
"Apache-2.0",
"MIT"
] | 412 | 2020-12-28T19:00:53.000Z | 2022-03-29T18:11:15.000Z | examples/correct_nlg.py | drahnr/nlprule | ae208aa911cf46c96731ed4012aba9b03fa6242e | [
"Apache-2.0",
"MIT"
] | 65 | 2021-01-01T19:24:39.000Z | 2022-02-04T11:05:42.000Z | examples/correct_nlg.py | drahnr/nlprule | ae208aa911cf46c96731ed4012aba9b03fa6242e | [
"Apache-2.0",
"MIT"
] | 35 | 2021-01-18T17:51:43.000Z | 2022-02-16T04:58:31.000Z | from collections import Counter
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from nlprule import Rules, Tokenizer, SplitOn
from argparse import ArgumentParser
# gets a window from start - offset to end + offset or until a newline character is reached
def window(text, start, end, offset=50):
new_start, new_end = start, end
while new_start > 0 and start - new_start < offset and text[new_start - 1] != "\n":
new_start -= 1
while (
new_end < len(text) - 1 and new_end - end < offset and text[new_end + 1] != "\n"
):
new_end += 1
return text[new_start:new_end]
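# Worked example (added; not part of the original script): window() stops expanding at a
# newline on either side, and the character at new_end itself is excluded from the slice.
assert window("abc\ndefghij\nklm", 6, 8, offset=50) == "defghi"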
def correct(text, rules):
suggestions = list(
filter(
# There are frequently suggestions for fixing e.g. quotation marks;
# I am excluding those here
lambda x: any(c.isalnum() for c in text[x.start : x.end]),
rules.suggest(text),
)
)
counter = Counter()
for x in suggestions:
corrected = rules.apply_suggestions(text, [x])
print(
"Before:", "..." + window(text, x.start, x.end) + "...",
)
print(
"After:", "..." + window(corrected, x.start, x.end) + "...",
)
print("Message:", x.message)
print("Type:", rules.rule(x.source).category_type)
print("---")
print()
counter[rules.rule(x.source).category_type] += 1
return rules.apply_suggestions(text, suggestions), counter
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--rule_lang", choices={"de", "en"}, default="en")
parser.add_argument("--wikipedia_corpus", type=str, default="20200501.en")
parser.add_argument("--model_name", type=str, default="gpt2")
parser.add_argument("--tokens_per_sample", default=100)
parser.add_argument("--n_samples", default=2000)
args = parser.parse_args()
dataset = load_dataset("wikipedia", args.wikipedia_corpus)
rules = Rules.load(
args.rule_lang, Tokenizer.load(args.rule_lang), SplitOn([".", "!", "?"])
)
tokenizer = AutoTokenizer.from_pretrained(args.model_name)
pipe = pipeline(
"text-generation",
model=AutoModelForCausalLM.from_pretrained(args.model_name),
device=0,
tokenizer=tokenizer,
)
suggestion_counter = Counter()
n_tokens = 0
for text in dataset["train"][: args.n_samples]["text"]:
try:
first_sentence = text[: text.index(".") + 1]
except ValueError:
continue
generated = pipe(
first_sentence,
min_length=args.tokens_per_sample,
max_length=args.tokens_per_sample,
pad_token_id=tokenizer.eos_token_id,
)[0]["generated_text"][len(first_sentence) :].strip()
n_tokens += args.tokens_per_sample
_, counter = correct(generated, rules)
suggestion_counter.update(counter)
print(f"Generated {n_tokens} tokens.")
for key, value in suggestion_counter.items():
key = ((key or "none") + ":").ljust(15)
print(
f"{key} {value} suggestions\t({value / n_tokens * 1000:.2f} per 1000 tokens)"
)
| 30.942857 | 91 | 0.615266 | 389 | 3,249 | 4.958869 | 0.344473 | 0.024883 | 0.044064 | 0.015552 | 0.107828 | 0.029031 | 0 | 0 | 0 | 0 | 0 | 0.016488 | 0.253309 | 3,249 | 104 | 92 | 31.240385 | 0.778648 | 0.055709 | 0 | 0.038961 | 0 | 0.012987 | 0.09889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025974 | false | 0 | 0.064935 | 0 | 0.116883 | 0.103896 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b3ce1736d1cfa9705fe8c6171cbc3a878c29378 | 3,541 | py | Python | __main__.py | TatianaPorras/MNIG_to_FMMSP | 1a62884237a4133f1417982d88f8a1421a972a08 | [
"MIT"
] | null | null | null | __main__.py | TatianaPorras/MNIG_to_FMMSP | 1a62884237a4133f1417982d88f8a1421a972a08 | [
"MIT"
] | null | null | null | __main__.py | TatianaPorras/MNIG_to_FMMSP | 1a62884237a4133f1417982d88f8a1421a972a08 | [
"MIT"
] | 2 | 2021-05-24T03:25:26.000Z | 2022-01-27T23:59:03.000Z | from algorithms import DNEH_SMR, destruction_reconstruction, local_search
from API.functions import PT, c_range, makespan
import numpy as np
import random
import math
# Tested solutions:
# Makespan: (36, 44, 52)
# Sequences:
# [7, 8, 6, 10, 1, 9, 5, 4, 2, 3]
# [7, 8, 6, 10, 5, 9, 1, 4, 2, 3]
# [6, 1, 9, 7, 8, 10, 4, 5, 2, 3]
# [1, 6, 9, 10, 7, 8, 5, 4, 2, 3]
# [6, 5, 10, 1, 9, 7, 8, 4, 2, 3]
# [6, 10, 1, 8, 9, 4, 5, 7, 2, 3]
# [6, 5, 7, 10, 8, 1, 4, 9, 2, 3]
# [9, 7, 1, 6, 8, 10, 4, 5, 2, 3]
# [5, 8, 7, 6, 10, 4, 1, 9, 2, 3]
# [7, 1, 6, 10, 5, 9, 8, 4, 2, 3]
# Data for the test instance
Tn = [
[(10, 12, 13), (11, 12, 14), (9, 10, 12), (6, 7, 9), (8, 9, 10)],
[(7, 8, 10), (8, 9, 10), (5, 6, 8), (4, 5, 6), (4, 5, 6)],
[(10, 11, 12), (9, 10, 12), (2, 3, 4), (5, 6, 8), (5, 6, 7)],
[(8, 9, 10), (6, 7, 8), (7, 8, 9), (4, 5, 6), (5, 6, 8)],
[(6, 7, 8), (8, 9, 10), (4, 5, 6), (6, 7, 8), (6, 7, 9)],
[(4, 5, 6), (2, 3, 4), (15, 16, 19), (13, 14, 15), (15, 16, 20)],
[(11, 13, 15), (1, 2, 3), (11, 13, 14), (10, 11, 13), (9, 10, 12)],
[(10, 11, 12), (18, 19, 23), (5, 6, 7), (6, 7, 9), (7, 8, 10)],
[(5, 6, 8), (4, 5, 6), (14, 15, 16), (19, 21, 25), (10, 12, 13)],
[(15, 17, 20), (12, 14, 15), (16, 17, 20), (17, 18, 21), (18, 19, 21)]
]
# Tn has the structure [ job1, job2, jobN ]; in turn, each job has the form [ machine1, machine2, machineM ], and each machine entry has the form (pessimistic_time, average_time, optimistic_time).
# U_s is the set of machines or units of stage s.
U_s = [[0, 1], [2, 3, 4]]
# L is the total number of stages
L = len(U_s)
# Pn is the weighting of the triangular numbers.
Pn = PT(Tn)
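# Illustrative note (added; not in the original script): how the structures above are indexed.
# Tn[i][k] is the triangular-time tuple of job i+1 on machine k+1, and U_s[s] lists the
# machine indices that belong to stage s+1.
assert Tn[0][1] == (11, 12, 14)   # job 1 on machine 2
assert U_s[1] == [2, 3, 4]        # the second stage uses the machines with indices 2, 3 and 4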
# Parameters for the iterations, passed in as arguments to this program
import argparse
parser1 = argparse.ArgumentParser()
parser1.add_argument("N", type = int)
parser1.add_argument("T_0", type = float)
parser1.add_argument("d", type = int)
parser1.add_argument("--debug", action = "store_true")
args1 = parser1.parse_args()
N = args1.N
T_0 = args1.T_0
d = args1.d
debug = args1.debug
# N is a parameter for the number of iterations
# N = 5
# T_0 is a parameter used to introduce variation into the algorithm; it must be different from zero
# T_0 = 1.1
# d is a parameter for the number of jobs to place in pi_d for the destruction_reconstruction algorithm
# d = 4
# Step 1
pi_re3, Ta = DNEH_SMR(Tn, U_s, Pn)
# Step 2
pi_result = pi_re3.copy()
pi_temp = pi_re3.copy()
iter1 = 1
# Step 3
UT = np.sum([len(U_s[s]) for s in c_range(1, L)])
TT = T_0*(np.sum(Ta))/(10 * N * L)
while (iter1 <= N**2 * L * UT):
iter1 += 1
# Step 4
pi_temp = local_search(pi_temp, Tn, U_s, Pn)
# Step 5
if (PT(makespan(pi_temp, Tn, U_s, Pn)) < PT(makespan(pi_re3, Tn, U_s, Pn))):
# Step 6
pi_re3 = pi_temp.copy()
# Step 7
if (PT(makespan(pi_temp, Tn, U_s, Pn)) < PT(makespan(pi_result, Tn, U_s, Pn))):
# Step 8
pi_result = pi_temp.copy()
# Step 9
else:
# Step 10
if ( random.random() < math.exp(-(PT(makespan(pi_temp, Tn, U_s, Pn)) - PT(makespan(pi_re3, Tn, U_s, Pn)))/TT) ):
# Step 11
pi_re3 = pi_temp.copy()
# Step 12
pi_temp = destruction_reconstruction(pi_temp, d, Tn, U_s, Pn)
if (debug == True):
mk = makespan(pi_re3, Tn, U_s, Pn)
iter1O = "%4d" % (iter1 - 1)
print("Iter:", iter1O, " Secuencia:", pi_re3, " Makespan:", mk, " P:", PT(mk))
# Step 13
print("\n", pi_result, makespan(pi_result, Tn, U_s, Pn, debug)) | 24.253425 | 214 | 0.558882 | 678 | 3,541 | 2.823009 | 0.231563 | 0.015674 | 0.025078 | 0.034483 | 0.171891 | 0.134274 | 0.100836 | 0.064786 | 0.064786 | 0.064786 | 0 | 0.150223 | 0.238633 | 3,541 | 146 | 215 | 24.253425 | 0.559718 | 0.333522 | 0 | 0.037736 | 0 | 0 | 0.026701 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.113208 | 0 | 0.113208 | 0.037736 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b443c2b930af2451df3f82a9a3ab6cc53195965 | 10,087 | py | Python | Prescient.py | mpolatcan/prescient | b9fbee679e6c370d14deda5e88cd4f1cc5965d78 | [
"Apache-2.0"
] | 1 | 2018-09-08T15:07:52.000Z | 2018-09-08T15:07:52.000Z | Prescient.py | mpolatcan/prescient | b9fbee679e6c370d14deda5e88cd4f1cc5965d78 | [
"Apache-2.0"
] | null | null | null | Prescient.py | mpolatcan/prescient | b9fbee679e6c370d14deda5e88cd4f1cc5965d78 | [
"Apache-2.0"
] | 1 | 2018-12-10T16:51:30.000Z | 2018-12-10T16:51:30.000Z | '''
_______ _______ _______ _______ ________ ___ _______ __ __ ___________
/ __ / / ___ \ / ____/ / _____/ / ______\ / / / ____/ / \ / / /____ _____/
/ /__/ / / |__| | / /___ / /____ / / / / / /___ / \ / / / /
/ ______/ / __ ___/ / ____/ /______ / / / / / / ____/ / /\ \/ / / /
/ / / / \ \ / /____ _______/ / / /_____ / / / /___ / / \ / / /
/_/ /_/ \_\ /______/ /________/ /________/ /__/ /______/ /__/ \__/ /__/
Automated forecasting tool powered by Facebook Prophet.
Developed by Mutlu Polatcan
01.02.2018
Version 0.1.0
'''
# TODO Get streaming data from Kafka
# TODO Online forecasting architecture
# TODO Model serialization
# TODO Time series database integration
import os
import subprocess
import numpy as np
import pandas as pd
import threading
import signal
from colorama import Fore
from concurrent.futures import ProcessPoolExecutor
from ProphetExecutor import ProphetExecutor
from PrescientConfig import PrescientConfig
from PrescientLogger import PrescientLogger
from collections import deque
import sys
# ------------------------ RESULT VARIABLES -----------------------
best_model = (0, 0, 0)
accuracies = deque()
accuracy_change_rates = deque()
best_accuracies = deque()
# -----------------------------------------------------------------
# ---------------------------------- CONFIGS ---------------------------------------
config = PrescientConfig(sys.argv[1]) # get configuration from file
dataset_filepath = config.get_str("forecastbase.dataset.filepath")
tdp_min = config.get_float("forecastbase.training.data.percent.min")
tdp_max = config.get_float("forecastbase.training.data.percent.max")
tdp_inc_by = config.get_float("forecastbase.training.data.percent.increment.by")
iw_min = config.get_float("forecastbase.interval.width.min")
iw_max = config.get_float("forecastbase.interval.width.max")
iw_inc_by = config.get_float("forecastbase.interval.width.increment.by")
cps_min = config.get_float("forecastbase.changepoint.prior.scale.min")
cps_max = config.get_float("forecastbase.changepoint.prior.scale.max")
cps_inc_by = config.get_float("forecastbase.changepoint.prior.scale.increment.by")
predict_next = config.get_int("forecastbase.predict.next")
predict_freq = config.get_str("forecastbase.predict.freq")
parallelism = config.get_int("forecastbase.paralellism")
measure_number = config.get_int("forecastbase.convergence.detection.measure.number")
average_acr_threshold = config.get_float("forecastbase.convergence.detection.acr.threshold")
holiday_weekends_enabled = config.get_bool("forecastbase.holiday.weekends.enabled")
holiday_special_days = config.get_list("forecastbase.holiday.special.days")
# -----------------------------------------------------------------------------------
# ----------------------------- HOLIDAY WEEKENDS SETTINGS -----------------------------
holiday_weekends = {}
if not holiday_weekends_enabled:
holiday_weekends = None
# -------------------------------------------------------------------------------------
semaphore = threading.BoundedSemaphore(value=1)
def run():
model_index = 1
prophet_executor = ProphetExecutor()
# Create training file and load weekends (if enabled) according to current percent
for training_data_percent_prep in np.arange(tdp_min, tdp_max + tdp_inc_by, tdp_inc_by):
prepare_training_file(training_data_percent_prep)
if holiday_weekends_enabled:
load_holiday_weekends(training_data_percent_prep)
# Submitting jobs
with ProcessPoolExecutor(max_workers=parallelism) as process_pool:
for training_data_percent in np.arange(tdp_min, tdp_max + tdp_inc_by, tdp_inc_by):
for interval_width in np.arange(iw_min, iw_max + iw_inc_by, iw_inc_by):
for changepoint_prior_scale in np.arange(cps_min, cps_max + cps_inc_by, cps_inc_by):
model_future = process_pool.submit(prophet_executor.execute,
model_index,
dataset_filepath,
training_data_percent,
interval_width,
changepoint_prior_scale,
predict_next,
predict_freq,
holiday_weekends,
holiday_special_days)
model_future.add_done_callback(model_training_done_callback)
model_index += 1
def prepare_training_file(training_data_percent):
# Get data count of file
data_count = int(subprocess.Popen(["wc", "-l", dataset_filepath], stdout=subprocess.PIPE).communicate()[0].split()[0])
# Calculate training data count according to percentage
training_data_count = (data_count * training_data_percent) / 100
PrescientLogger.console_log("FORECASTBASE", Fore.YELLOW, "Preparing training file for parameter training_data_percent=%" + str(training_data_percent) +
" Original data count:" + str(data_count) + " Training data count: " + str(training_data_count))
# Create training data file
os.system("head -" + str(int(training_data_count)) + " " + dataset_filepath + " > " + os.path.basename(dataset_filepath).split('.')[0] +
"_training_%" + str(training_data_percent) + ".csv")
def load_holiday_weekends(training_data_percent):
global holiday_weekends
PrescientLogger.console_log("FORECASTBASE", Fore.YELLOW, "Preparing weekends for parameter training_data_percent=%" + str(training_data_percent))
df_training_data = pd.read_csv(os.path.basename(dataset_filepath).split('.')[0] + "_training_%" + str(training_data_percent) + ".csv")
df_training_data['ds'] = pd.to_datetime(df_training_data['ds']) # Convert string to datetime
df_training_data['weekday'] = df_training_data['ds'].dt.weekday # Find number of day
df_training_data['ds'] = df_training_data['ds'].dt.date # Truncate time from datetime
# Selecting rows where day is Saturday or Sunday
df_holiday_weekends = df_training_data[(df_training_data['weekday'] == 5) | (df_training_data['weekday'] == 6)]
df_holiday_weekends = df_holiday_weekends.drop_duplicates(subset=['ds']) # Drop duplicate rows
df_holiday_weekends.drop(['y', 'weekday'], axis=1, inplace=True) # Drop unnecessary columns
holiday_weekends[str(training_data_percent)] = df_holiday_weekends
def show_intermediate_results(average_acr, acr_frame):
PrescientLogger.console_log(
None,
Fore.BLUE,
"########################################################################",
"Last " + str(measure_number) + " model's accuracies and accuracy change rates: \n",
acr_frame.to_string(),
"\nAverage accuracy change rate: " + str(average_acr),
"Best accuracy: " + str(best_model[0]),
"########################################################################\n")
def model_training_done_callback(model_fn):
global best_model
semaphore.acquire()
if model_fn.done():
error = model_fn.exception()
if error:
print(error)
else:
model = model_fn.result()
if accuracy_change_rates.__len__() < measure_number:
accuracy_change_rates.append(model[0] - best_model[0])
accuracies.append(model[0])
best_accuracies.append(best_model[0])
else:
# Remove oldest data and add last data
accuracy_change_rates.popleft()
accuracies.popleft()
best_accuracies.popleft()
accuracy_change_rates.append(model[0] - best_model[0]); accuracies.append(model[0]); best_accuracies.append(best_model[0])
# If trained model's accuracy is better than best model assign as new best model
if model[0] > best_model[0]:
best_model = model
if accuracy_change_rates.__len__() == measure_number:
# Calculate average accuracy change rate and show results
acr_frame = pd.DataFrame({'best_accuracy': best_accuracies, 'last_model_accuracy': accuracies, 'acr': accuracy_change_rates})
average_acr = acr_frame['acr'].mean()
show_intermediate_results(average_acr, acr_frame)
# If average accuracy change rate below threshold stop Forecastbase
if average_acr < average_acr_threshold:
PrescientLogger.console_log("FORECASTBASE", Fore.RED, "Convergence Detected!! Best model is accuracy=" + str(best_model[0]) +
" training_data_percent=" + str(best_model[1]) + " interval_width=" + str(best_model[2]) + " changepoint_prior_scale=" + str(best_model[3]))
semaphore.release() # Release acquired semaphore
# Remove training files
for training_data_percent in np.arange(tdp_min, tdp_max + tdp_inc_by, tdp_inc_by):
os.system("rm " + os.path.basename(dataset_filepath).split('.')[0] + "_training_%" + str(training_data_percent) + ".csv")
# Stop child processes and terminate program
child_pids = subprocess.Popen(["ps", "-o", "pid", "--ppid", str(os.getpid()), "--noheaders"],
stdout=subprocess.PIPE).communicate()[0].decode('ascii')
for child_pid in child_pids.split("\n")[:-1]:
try:
os.kill(int(child_pid), signal.SIGKILL)
except Exception:
continue
sys.exit()
semaphore.release()
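# --- Illustrative sketch (added; not part of the original tool) ---
# The convergence rule used in the callback above, in isolation: stop once the mean accuracy
# change rate over the last `measure_number` trained models drops below the configured threshold.
def _converged(recent_accuracy_change_rates, threshold):
    if not recent_accuracy_change_rates:
        return False
    return sum(recent_accuracy_change_rates) / len(recent_accuracy_change_rates) < threshold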
if __name__ == "__main__":
run()
| 47.805687 | 166 | 0.603747 | 1,042 | 10,087 | 5.380038 | 0.239923 | 0.079201 | 0.071174 | 0.046379 | 0.326971 | 0.26436 | 0.2137 | 0.116661 | 0.116661 | 0.097752 | 0 | 0.006042 | 0.245167 | 10,087 | 210 | 167 | 48.033333 | 0.730234 | 0.195697 | 0 | 0.045802 | 0 | 0 | 0.168505 | 0.106802 | 0 | 0 | 0 | 0.004762 | 0 | 1 | 0.038168 | false | 0 | 0.099237 | 0 | 0.137405 | 0.007634 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b4b3e39f0ad43f972262fa6b24bdff4b514234e | 1,848 | py | Python | python/forgetting_hash_map/forgetting_hash_map.py | cob16/forgetting_hash_map | b3bd608384d7c8c0a0a484fbb6cdedd7ccc37564 | [
"MIT"
] | null | null | null | python/forgetting_hash_map/forgetting_hash_map.py | cob16/forgetting_hash_map | b3bd608384d7c8c0a0a484fbb6cdedd7ccc37564 | [
"MIT"
] | 1 | 2020-09-24T05:41:09.000Z | 2020-09-28T05:47:41.000Z | python/forgetting_hash_map/forgetting_hash_map.py | cob16/forgetting_hash_map | b3bd608384d7c8c0a0a484fbb6cdedd7ccc37564 | [
"MIT"
] | 1 | 2017-03-23T15:56:34.000Z | 2017-03-23T15:56:34.000Z | class ForgettingHash:
"""A fixed size hash that deletes the least used elements when full.
max_len must be more than 0
Uses a second dict with the same keys to record usage
Value is referred to as 'content'
"""
def __init__(self, max_len, items=None): # type: int, list
if max_len <= 0:
            raise ValueError('max_len must be larger than 0 ({} given)'.format(max_len))
self._max_len = max_len
if not items:
items = {}
if len(items) > max_len:
            raise ValueError('There are more items ({}) given than the max_len ({})'.format(len(items), max_len))
self._map = dict(items)
# Create a second dict to record the usage of the keys in the main hash map
self._popularity = dict.fromkeys(items.keys(), 0)
def add(self, key, content=None):
"""Adds a key value pair ot the map.
Will reset usage counter if the content is changed on a existing map
"""
if key in self._map and self._map.get(key) is content:
return # do nothing as this key already exists with the same value
elif len(self._map) >= self._max_len:
self.delete_least_used()
self._map[key] = content
self._popularity[key] = 0
def find(self, key):
"""Gets the value of a key from the map.
Returns None if key does not exist
"""
content = self._map.get(key, None)
if content:
self._popularity[key] += 1
return content
def delete_least_used(self):
"""Removes the lest used key-value pair"""
if self._popularity: # ignore if empty
key = min(self._popularity, key=self._popularity.get) # returns the name of the key with least uses
del self._map[key]
del self._popularity[key]
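# --- Illustrative usage (added; not part of the original module) ---
# A minimal sketch of the eviction behaviour described in the class docstring: once the map
# is full, adding a new key removes the key with the fewest find() hits.
def _example_eviction():
    cache = ForgettingHash(max_len=2)
    cache.add("a", "apple")
    cache.add("b", "banana")
    cache.find("a")              # bump the usage counter of "a"
    cache.add("c", "cherry")     # the map is full, so the least used key "b" is evicted
    assert cache.find("b") is None
    assert cache.find("c") == "cherry"
    return cache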
| 33 | 113 | 0.604978 | 266 | 1,848 | 4.071429 | 0.364662 | 0.060942 | 0.062789 | 0.025854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00468 | 0.306277 | 1,848 | 55 | 114 | 33.6 | 0.840094 | 0.323593 | 0 | 0 | 0 | 0 | 0.079352 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b4d2979efd4bf450ecdffaf76ebf2999530bf76 | 4,281 | py | Python | elasticdl/python/tests/checkpoint_test.py | chunyang-wen/elasticdl | 7b16b44f5314507494a552c11caaf3b2bce6d209 | [
"MIT"
] | null | null | null | elasticdl/python/tests/checkpoint_test.py | chunyang-wen/elasticdl | 7b16b44f5314507494a552c11caaf3b2bce6d209 | [
"MIT"
] | null | null | null | elasticdl/python/tests/checkpoint_test.py | chunyang-wen/elasticdl | 7b16b44f5314507494a552c11caaf3b2bce6d209 | [
"MIT"
] | null | null | null | import os
import tempfile
import unittest
from elasticdl.proto import elasticdl_pb2
from elasticdl.python.common.model_utils import (
get_module_file_path,
load_module,
)
from elasticdl.python.master.checkpoint_service import CheckpointService
from elasticdl.python.master.servicer import MasterServicer
_model_zoo_path = os.path.dirname(os.path.realpath(__file__))
_model_file = get_module_file_path(_model_zoo_path, "test_module.custom_model")
m = load_module(_model_file).__dict__
class CheckpointTest(unittest.TestCase):
def testNeedToCheckpoint(self):
checkpointer = CheckpointService("", 0, 5, False)
self.assertFalse(checkpointer.is_enabled())
checkpointer._steps = 3
self.assertTrue(checkpointer.is_enabled())
self.assertFalse(checkpointer.need_to_checkpoint(1))
self.assertFalse(checkpointer.need_to_checkpoint(2))
self.assertTrue(checkpointer.need_to_checkpoint(3))
self.assertFalse(checkpointer.need_to_checkpoint(4))
self.assertFalse(checkpointer.need_to_checkpoint(5))
self.assertTrue(checkpointer.need_to_checkpoint(6))
def testSaveLoadCheckpoint(self):
init_var = m["custom_model"]().trainable_variables
with tempfile.TemporaryDirectory() as tempdir:
chkp_dir = os.path.join(tempdir, "testSaveLoadCheckpoint")
os.makedirs(chkp_dir)
checkpointer = CheckpointService(chkp_dir, 3, 5, False)
self.assertTrue(checkpointer.is_enabled())
master = MasterServicer(
2,
3,
None,
None,
init_var=init_var,
checkpoint_filename_for_init="",
checkpoint_service=checkpointer,
evaluation_service=None,
)
req = elasticdl_pb2.GetModelRequest()
req.method = elasticdl_pb2.MINIMUM
req.version = 0
model = master.GetModel(req, None)
checkpointer.save(0, model, False)
loaded_model = checkpointer.get_checkpoint_model(0)
self.assertEqual(model.version, loaded_model.version)
for var, loaded_var in zip(model.param, loaded_model.param):
self.assertEqual(var, loaded_var)
def testInitFromCheckpoint(self):
init_var = m["custom_model"]().trainable_variables
with tempfile.TemporaryDirectory() as tempdir:
chkp_dir = os.path.join(tempdir, "testInitFromCheckpoint")
os.makedirs(chkp_dir)
master = MasterServicer(
2,
3,
None,
None,
init_var=init_var,
checkpoint_filename_for_init="",
checkpoint_service=CheckpointService(chkp_dir, 2, 3, False),
evaluation_service=None,
)
req = elasticdl_pb2.GetModelRequest()
req.method = elasticdl_pb2.MINIMUM
req.version = 0
model = master.GetModel(req, None)
master._checkpoint_service.save(master._version, model, False)
chkp_file = master._checkpoint_service.get_checkpoint_path(
master._version
)
# Create variables from init_var, get init value from checkpoint.
master2 = MasterServicer(
2,
3,
None,
None,
init_var=init_var,
checkpoint_filename_for_init=chkp_file,
checkpoint_service=CheckpointService("", 0, 0, False),
evaluation_service=None,
)
model2 = master2.GetModel(req, None)
self.assertEqual(model, model2)
# Create variables from checkpoint.
master3 = MasterServicer(
2,
3,
None,
None,
init_var=[],
checkpoint_filename_for_init=chkp_file,
checkpoint_service=CheckpointService("", 0, 0, False),
evaluation_service=None,
)
model3 = master3.GetModel(req, None)
self.assertEqual(model, model3)
if __name__ == "__main__":
unittest.main()
| 37.226087 | 79 | 0.6057 | 417 | 4,281 | 5.93765 | 0.218225 | 0.028271 | 0.043619 | 0.067851 | 0.511712 | 0.483441 | 0.351777 | 0.342084 | 0.342084 | 0.342084 | 0 | 0.014291 | 0.313478 | 4,281 | 114 | 80 | 37.552632 | 0.828173 | 0.022658 | 0 | 0.47 | 0 | 0 | 0.023918 | 0.016264 | 0 | 0 | 0 | 0 | 0.13 | 1 | 0.03 | false | 0 | 0.07 | 0 | 0.11 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b4d531c59a83982bb5c89cb4e755707e96fcd9f | 3,525 | py | Python | geometry/d2/circle.py | TheBiggerFish/fishpy | 632a7c603a33ad694650aa2bed058931775a4259 | [
"MIT"
] | null | null | null | geometry/d2/circle.py | TheBiggerFish/fishpy | 632a7c603a33ad694650aa2bed058931775a4259 | [
"MIT"
] | null | null | null | geometry/d2/circle.py | TheBiggerFish/fishpy | 632a7c603a33ad694650aa2bed058931775a4259 | [
"MIT"
] | null | null | null | """This module provides a class for storing and evaluating circles"""
from functools import cached_property
from math import pi
from typing import Optional, Tuple
from .point2d import Point2D
from .vector2d import Vector2D
class Circle:
"""This class stores and provides methods for evaluating circles"""
def __init__(self, center: Point2D, radius: float):
self.center = center
self.radius = radius
@cached_property
def diameter(self):
"""This property calculates the radius of the circle"""
return self.radius * 2
@cached_property
def circumference(self):
"""This property calculates the circumference of the circle"""
return self.diameter * pi
@cached_property
def area(self):
"""This property calculates the area of the circle"""
return pi * self.radius ** 2
def __repr__(self) -> str:
return f'{self.__class__.__name__}(center={repr(self.center)}, radius={self._radius})'
def __contains__(self, pt: Point2D) -> bool:
return self.center.euclidean_distance(pt) <= self.radius
def intersects(self, other: 'Circle') -> bool:
"""Predicate function which returns whether two circles have intersections"""
d = (self.center - other.center).magnitude()
if d == 0:
return self.radius == other.radius
if d > self.radius + other.radius:
return False
if d < abs(self.radius - other.radius):
return False
return True
# Find intersection points on two arcs:
# https://stackoverflow.com/questions/47863261/find-point-of-intersection-between-two-arc
def intersecting_points(self, other: 'Circle') -> Optional[Tuple[Point2D]]:
"""Returns the intersecting point(s) of two circles"""
other: Circle = other
if self.center == other.center and self.radius == other.radius:
raise ValueError('Intersecting circles of the same size '
'cannot share the same center')
if not self.intersects(other):
return None
d = Vector2D.from_point(self.center - other.center).magnitude()
a = (self.radius**2-other.radius**2+d**2)/(2*d)
h = (self.radius**2-a**2)**0.5
p2 = self.center + (other.center-self.center) * a/d
p3, p4 = Point2D(0, 0), Point2D(0, 0)
p3.x = round(p2.x + (other.center.y-self.center.y)*h/d, 8)
p3.y = round(p2.y - (other.center.x-self.center.x)*h/d, 8)
p4.x = round(p2.x - (other.center.y-self.center.y)*h/d, 8)
p4.y = round(p2.y + (other.center.x-self.center.x)*h/d, 8)
if p3 == p4:
return p3,
return p3, p4
@property
def radius(self):
"""This property works as a getter for radius"""
return self._radius
@radius.setter
def radius(self, radius: float):
"""
Set the radius property of the circle, invalidate caches
which depend on radius
"""
if radius < 0:
raise ValueError('Circle\'s radius cannot be negative')
# Invalidate diameter cache
try:
del self.diameter
except AttributeError:
pass
# Invalidate circumference cache
try:
del self.circumference
except AttributeError:
pass
# Invalidate area cache
try:
del self.area
except AttributeError:
pass
self._radius = radius
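# --- Illustrative usage (added; not part of the original module) ---
# Two unit circles whose centers are one unit apart intersect in two points; following the
# construction above they come back mirrored about the line joining the centers,
# at x = 0.5 and y = +/- sqrt(3)/2 (rounded to 8 decimal places in intersecting_points).
def _example_intersection():
    first = Circle(Point2D(0, 0), 1.0)
    second = Circle(Point2D(1, 0), 1.0)
    assert first.intersects(second)
    return first.intersecting_points(second)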
| 31.473214 | 94 | 0.604539 | 442 | 3,525 | 4.753394 | 0.255656 | 0.066635 | 0.030462 | 0.039981 | 0.187054 | 0.097097 | 0.066635 | 0.066635 | 0.066635 | 0.066635 | 0 | 0.021523 | 0.288227 | 3,525 | 111 | 95 | 31.756757 | 0.815863 | 0.207376 | 0 | 0.2 | 0 | 0.014286 | 0.059235 | 0.027594 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.042857 | 0.071429 | 0.028571 | 0.414286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b519a77db4799ab628dbd803d76bfec3ce31a64 | 9,134 | py | Python | analyze_model_results.py | awwong1/topic-traceability | 60d6fd8b52855d1e5faeaec3c160f2541e3ec0ca | [
"MIT"
] | 2 | 2019-04-28T09:13:14.000Z | 2019-11-19T02:49:30.000Z | analyze_model_results.py | awwong1/topic-traceability | 60d6fd8b52855d1e5faeaec3c160f2541e3ec0ca | [
"MIT"
] | 1 | 2021-06-01T23:40:18.000Z | 2021-06-01T23:40:18.000Z | analyze_model_results.py | awwong1/topic-traceability | 60d6fd8b52855d1e5faeaec3c160f2541e3ec0ca | [
"MIT"
] | 1 | 2019-04-28T09:13:25.000Z | 2019-04-28T09:13:25.000Z | #!/usr/bin/env python3
"""Calculate distances using the topic models on the course material/discussion
posts feature vectors.
"""
import os
import numpy as np
from datetime import datetime
from numpy import ravel
from pickle import load
from json import dump
from scipy.spatial.distance import cosine, euclidean
from gensim.models import TfidfModel
from collections import Counter
DIR_PATH = os.path.dirname(os.path.realpath(__file__))
def analyze_course_results(course_name, course_results, idf_vec_size):
t_start = datetime.now()
print("ANALYZING {} ({})".format(course_name, t_start))
docid_to_labels = invert_mapping(course_results["mapping"])
material_results = course_results["material_results"]
question_results = course_results["question_results"]
answer_results = course_results["answer_results"]
# question_id: { atm: [(doc, distance)], hdp: [(doc, distance)]}
questions_topic_mapping = {}
answers_topic_mapping = {}
distance_functions = {
"cosine": (cosine, False), # function, reverse
# "euclidean": (euclidean, False)
}
for key in distance_functions.keys():
questions_topic_mapping[key] = {}
answers_topic_mapping[key] = {}
course_unutilized_words = []
discussion_words = set()
for question_id, question_result in question_results.items():
all_words = question_result["all_words"]
unutilized_words = question_result["unutilized_words"]
atm, hdp, lda, llda, tfidf = flatten_gammas(question_result, idf_vec_size)
course_unutilized_words.extend(unutilized_words)
discussion_words = discussion_words.union(all_words)
for dist_func_name, distance_options in distance_functions.items():
question_topic_map = generate_topic_map(
distance_options, material_results, atm, hdp, lda, llda, tfidf, idf_vec_size)
questions_topic_mapping[dist_func_name][question_id] = question_topic_map
print("\rq: {}/{} (e: {})".format(
len(questions_topic_mapping[dist_func_name]),
len(question_results),
datetime.now() - t_start), end="")
# just look at questions for time being
# print()
# for answer_id, answer_result in answer_results.items():
# all_words = question_result["all_words"]
# unutilized_words = question_result["unutilized_words"]
# atm, hdp, lda, llda, tfidf = flatten_gammas(answer_result, idf_vec_size)
# for dist_func_name, distance_options in distance_functions.items():
# answer_topic_map = generate_topic_map(
# distance_options, material_results, atm, hdp, lda, llda, tfidf, idf_vec_size)
# answers_topic_mapping[dist_func_name][answer_id] = answer_topic_map
# print("\ra: {}/{} (e: {})".format(
# len(answers_topic_mapping[dist_func_name]),
# len(answer_results),
# datetime.now() - t_start), end="")
print()
if True:
# update?
model_res_fp = os.path.join(
DIR_PATH, "data", "model_res.{}.json".format(course_name))
with open(model_res_fp, "w") as mf:
dump({
"docid_to_labels": docid_to_labels,
"questions_topic_mapping": questions_topic_mapping,
"answers_topic_mapping": answers_topic_mapping
}, mf)
unutilized_words_count = Counter(course_unutilized_words)
ordered_unutilized_words = []
for key, value in sorted(unutilized_words_count.items(), key=lambda x:x[1], reverse=True):
ordered_unutilized_words.append((key, value))
print(len(ordered_unutilized_words), len(discussion_words))
print(ordered_unutilized_words[:5])
forum_only_vocabulary_fp = os.path.join(
DIR_PATH, "data", "forum_only_vocabulary.{}.json".format(course_name)
)
with open(forum_only_vocabulary_fp, "w") as f:
dump(ordered_unutilized_words, f)
def generate_topic_map(distance_options, material_results, atm, hdp, lda, llda, tfidf, idf_vec_size):
distance_function, sort_reverse = distance_options
topic_map = {
"atm_rank": [],
"hdp_rank": [],
"lda_rank": [],
"llda_rank": [],
"tfidf_rank": [],
# how much better do topic models improve on tf-idf?
"tfidf_with_atm_rank": [],
"tfidf_with_hdp_rank": [],
"tfidf_with_lda_rank": [],
"tfidf_with_llda_rank": [],
}
for doc_id, material_result in material_results.items():
m_atm, m_hdp, m_lda, m_llda, m_tfidf = flatten_gammas(material_result, idf_vec_size)
topic_map["atm_rank"].append((
doc_id,
distance_function(atm, m_atm)
))
topic_map["hdp_rank"].append((
doc_id,
distance_function(hdp, m_hdp)
))
topic_map["lda_rank"].append((
doc_id,
distance_function(lda, m_lda)
))
topic_map["llda_rank"].append((
doc_id,
distance_function(llda, m_llda)
))
topic_map["tfidf_rank"].append((
doc_id,
distance_function(tfidf, m_tfidf)
))
# Do combination of TF-IDF + four other topic models
topic_map["tfidf_with_atm_rank"].append((
doc_id,
distance_function(np.concatenate((tfidf, atm)), np.concatenate((m_tfidf, m_atm)))
))
topic_map["tfidf_with_hdp_rank"].append((
doc_id,
distance_function(np.concatenate((tfidf, hdp)), np.concatenate((m_tfidf, m_hdp)))
))
topic_map["tfidf_with_lda_rank"].append((
doc_id,
distance_function(np.concatenate((tfidf, lda)), np.concatenate((m_tfidf, m_lda)))
))
topic_map["tfidf_with_llda_rank"].append((
doc_id,
distance_function(np.concatenate((tfidf, llda)), np.concatenate((m_tfidf, m_llda)))
))
topic_map["atm_rank"].sort(
key=lambda tup: tup[1], reverse=sort_reverse)
topic_map["hdp_rank"].sort(
key=lambda tup: tup[1], reverse=sort_reverse)
topic_map["lda_rank"].sort(
key=lambda tup: tup[1], reverse=sort_reverse)
topic_map["llda_rank"].sort(
key=lambda tup: tup[1], reverse=sort_reverse)
topic_map["tfidf_rank"].sort(
key=lambda tup: tup[1], reverse=sort_reverse)
topic_map["tfidf_with_atm_rank"].sort(
key=lambda tup: tup[1], reverse=sort_reverse)
topic_map["tfidf_with_hdp_rank"].sort(
key=lambda tup: tup[1], reverse=sort_reverse)
topic_map["tfidf_with_lda_rank"].sort(
key=lambda tup: tup[1], reverse=sort_reverse)
topic_map["tfidf_with_llda_rank"].sort(
key=lambda tup: tup[1], reverse=sort_reverse)
return topic_map
def flatten_gammas(result, idf_vec_size):
atm = ravel(result["atm"])
hdp = ravel(result["hdp"])
lda = ravel(result["lda"])
llda = ravel(result["llda"])
tfidf_ltups = result["tfidf"]
tfidf = np.zeros(idf_vec_size)
for tfidf_idx, val in tfidf_ltups:
tfidf[tfidf_idx] = val
return atm, hdp, lda, llda, tfidf
def invert_mapping(mapping):
"""document to hierarchy labels.
[modules, lessons, items]
"""
inverted_mapping = {}
for coursera_item_type, value_dict in mapping.items():
# coursera_item_type one of [modules, lessons, items]
for item_name, doc_ids in value_dict.items():
for doc_id in doc_ids:
doc_type = inverted_mapping.get(doc_id, [None, None, None])
if coursera_item_type == "items":
doc_type[2] = item_name
elif coursera_item_type == "lessons":
doc_type[1] = item_name
elif coursera_item_type == "modules":
doc_type[0] = item_name
else:
raise NotImplementedError(coursera_item_type)
inverted_mapping[doc_id] = doc_type
return inverted_mapping
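# Illustrative example (added; not part of the original script): a document that belongs to one
# module, one lesson and one item is mapped to a single [module, lesson, item] label list.
def _example_invert_mapping():
    mapping = {
        "modules": {"Week 1": [0]},
        "lessons": {"Intro lesson": [0]},
        "items": {"Welcome video": [0]},
    }
    assert invert_mapping(mapping) == {0: ["Week 1", "Intro lesson", "Welcome video"]}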
def main():
COURSE_NAME_STUBS = [
"agile-planning-for-software-products",
"client-needs-and-software-requirements",
"design-patterns",
"introduction-to-software-product-management",
"object-oriented-design",
"reviews-and-metrics-for-software-improvements",
"service-oriented-architecture",
"software-architecture",
"software-processes-and-agile-practices",
"software-product-management-capstone",
]
for course_name in COURSE_NAME_STUBS:
results_fp = os.path.join(DIR_PATH, "data", "eval.{}.pkl".format(
course_name))
course_results = None
with open(results_fp, "rb") as rf:
course_results = load(rf)
tfidf_fp = os.path.join(
DIR_PATH, "data", "tfidf.{}.pkl".format(course_name))
# with open(tfidf_fp, "rb") as tfidf_f:
tfidf_model = TfidfModel.load(tfidf_fp)
idf_vec_size = len(tfidf_model.idfs)
analyze_course_results(course_name, course_results, idf_vec_size)
if __name__ == "__main__":
main()
| 36.979757 | 101 | 0.639041 | 1,127 | 9,134 | 4.843833 | 0.176575 | 0.039568 | 0.02015 | 0.02473 | 0.412713 | 0.386884 | 0.299139 | 0.254809 | 0.254809 | 0.218905 | 0 | 0.002177 | 0.245676 | 9,134 | 246 | 102 | 37.130081 | 0.790131 | 0.133676 | 0 | 0.146739 | 0 | 0 | 0.126429 | 0.048412 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027174 | false | 0 | 0.048913 | 0 | 0.092391 | 0.027174 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b55c6de90397835acdc52e5f1e0da9fa1f29285 | 7,555 | py | Python | TCCDjango/app/views.py | hkirschke/App-Web-TCC | 7cb8eecff6f5afe4c996c8a424358de099faefd0 | [
"MIT"
] | null | null | null | TCCDjango/app/views.py | hkirschke/App-Web-TCC | 7cb8eecff6f5afe4c996c8a424358de099faefd0 | [
"MIT"
] | null | null | null | TCCDjango/app/views.py | hkirschke/App-Web-TCC | 7cb8eecff6f5afe4c996c8a424358de099faefd0 | [
"MIT"
] | null | null | null | """
Definition of views.
"""
from datetime import datetime
from django.shortcuts import render
from django.http import HttpRequest
from .graficoLinhaTempo import GraficoLT as graphLT
from .graficoBarra import GraficoBarra as graphBar
from .graficoScatter import GraficoScatter as grapScatter
import plotly as pl
def home(request):
"""Renders the home page."""
assert isinstance(request, HttpRequest)
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def contact(request):
"""Renders the contact page."""
assert isinstance(request, HttpRequest)
return render(
request,
'app/contact.html',
{
'title':'Contact',
'message':'Your contact page.',
'year':datetime.now().year,
}
)
def about(request):
"""Renders the about page."""
assert isinstance(request, HttpRequest)
return render(
request,
'app/about.html',
{
'title':'About',
'message':'Your application description page.',
'year':datetime.now().year,
}
)
def GraficoLTBarMensal(request):
assert isinstance(request, HttpRequest)
fig = graphLT.PlotGraphTimeLineBarMensal()
fig.write_html("app/graph.html")
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GraficoLTScatterMensal(request):
assert isinstance(request, HttpRequest)
fig = graphLT.PlotGraphTimeLineScatterMensal()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GraficoLTScatterMensalPorcentagemCasosPopulacao(request):
assert isinstance(request, HttpRequest)
fig = graphLT.PlotGraphTimeLineScatterMensalPorcentagemCasosPopulacao()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GraficoLTBarQuinzenal(request):
assert isinstance(request, HttpRequest)
fig = graphLT.PlotGraphTimeLineBarQuinzenal()
fig.write_html("app/graph.html")
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GraficoLTScatterQuinzenal(request):
assert isinstance(request, HttpRequest)
fig = graphLT.PlotGraphTimeLineScatterQuinzenal()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GrafigoBarraMortePais24h(request):
assert isinstance(request, HttpRequest)
fig = graphBar.PlotGrafigoBarraMortePais24h()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GrafigoBarraCasosPais24h(request):
assert isinstance(request, HttpRequest)
fig = graphBar.PlotGrafigoBarraCasosPais24h()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GrafigoBarraCasosPais(request):
assert isinstance(request, HttpRequest)
fig = graphBar.PlotGrafigoBarraCasosPais()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GrafigoBarraCalor(request):
assert isinstance(request, HttpRequest)
fig = graphBar.PlotGraficoBarraCalor()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GraficoBarraCalorPorcentagem(request):
assert isinstance(request, HttpRequest)
fig = graphBar.PlotGraficoBarraCalorPorcentagem()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GrafigoScatterCasos(request):
assert isinstance(request, HttpRequest)
fig = grapScatter.PlotGraficoScatterCasos()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GraficoScatterPorcentagemCasos(request):
assert isinstance(request, HttpRequest)
fig = grapScatter.PlotGraficoScatterPorcentagemCasos()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GraficoScatterPorcentagemCasosMortes(request):
assert isinstance(request, HttpRequest)
fig = grapScatter.PlotGraficoScatterPorcentagemCasosMortes()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GraficoBarPorcentagemMortosCasos(request):
assert isinstance(request, HttpRequest)
fig = graphBar.PlotGraficoBarPorcentagemMortosCasos()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GraficoBarPorcentagemMortosPopulacao(request):
assert isinstance(request, HttpRequest)
fig = graphBar.PlotGraficoBarPorcentagemMortosPopulacao()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GrafigoBarraPorcentagemCurados(request):
assert isinstance(request, HttpRequest)
fig = graphBar.PlotGraficoBarPorcentagemCurados()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
def GrafigoBarraPorcentagemCuradosPopulacao(request):
assert isinstance(request, HttpRequest)
fig = graphBar.PlotGraficoBarPorcentagemCuradosPopulacao()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
)
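# Illustrative sketch (not part of the original app): every view above repeats the same
# "build figure -> plotly offline plot -> render index" sequence, so a small helper could
# remove the duplication. The helper name below is hypothetical.
def _render_grafico(request, plot_func):
    """Build a figure with plot_func, write it to app/graph.html and render the home page."""
    assert isinstance(request, HttpRequest)
    fig = plot_func()
    pl.offline.plot(fig, filename='app/graph.html')
    return render(
        request,
        'app/index.html',
        {
            'title': 'Home Page',
            'year': datetime.now().year,
        }
    )
# For example, GraficoLTScatterMensal could then reduce to
# `return _render_grafico(request, graphLT.PlotGraphTimeLineScatterMensal)`.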
def GrafigoBarraPorcentagemCasos(request):
assert isinstance(request, HttpRequest)
fig = graphBar.PlotGraficoBarPorcentagemCasos()
pl.offline.plot(fig, filename = 'app/graph.html')
return render(
request,
'app/index.html',
{
'title':'Home Page',
'year':datetime.now().year,
}
) | 26.232639 | 73 | 0.611383 | 688 | 7,555 | 6.710756 | 0.122093 | 0.072775 | 0.104613 | 0.154646 | 0.670782 | 0.670782 | 0.659519 | 0.455924 | 0.455924 | 0.421702 | 0 | 0.001438 | 0.263799 | 7,555 | 288 | 74 | 26.232639 | 0.828659 | 0.01231 | 0 | 0.552529 | 0 | 0 | 0.13629 | 0 | 0 | 0 | 0 | 0 | 0.081712 | 1 | 0.081712 | false | 0 | 0.027237 | 0 | 0.190661 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b57f40012ee74179621d2d5c1a27c6037745bba | 862 | py | Python | setup.py | yacchin1205/Jupyter-LC_nblineage | 1b105540c5ff2b0dfd749fb73f2f6c896d6c422e | [
"BSD-3-Clause"
] | 3 | 2017-06-11T11:07:08.000Z | 2019-03-25T03:21:56.000Z | setup.py | yacchin1205/Jupyter-LC_nblineage | 1b105540c5ff2b0dfd749fb73f2f6c896d6c422e | [
"BSD-3-Clause"
] | 11 | 2017-05-27T09:51:51.000Z | 2020-07-20T04:41:49.000Z | setup.py | yacchin1205/Jupyter-LC_nblineage | 1b105540c5ff2b0dfd749fb73f2f6c896d6c422e | [
"BSD-3-Clause"
] | 3 | 2017-05-27T07:00:18.000Z | 2019-12-22T14:57:20.000Z | #!/usr/bin/env python
from setuptools import setup
import os
import sys
HERE = os.path.abspath(os.path.dirname(__file__))
VERSION_NS = {}
with open(os.path.join(HERE, 'nblineage', '_version.py')) as f:
exec(f.read(), {}, VERSION_NS)
setup_args = dict (name='lc-nblineage',
version=VERSION_NS['__version__'],
description='lineage extension for Jupyter Notebook',
packages=['nblineage'],
package_dir={'nblineage': 'nblineage'},
package_data={'nblineage': ['nbextension/*']},
include_package_data=True,
platforms=['Jupyter Notebook 4.2.x'],
zip_safe=False,
install_requires=[
'notebook>=4.2.0',
],
entry_points={
'console_scripts': [
'jupyter-nblineage = nblineage.extensionapp:main'
]
}
)
if __name__ == '__main__':
setup(**setup_args)
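# Illustrative usage (not part of the file): running `pip install .` in this directory installs
# the nblineage package and registers the `jupyter-nblineage` console script declared above.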
| 26.121212 | 63 | 0.62645 | 98 | 862 | 5.204082 | 0.612245 | 0.035294 | 0.039216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00744 | 0.220418 | 862 | 32 | 64 | 26.9375 | 0.751488 | 0.023202 | 0 | 0 | 0 | 0 | 0.281807 | 0.032105 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b5832d9eb1333c10d47499797f086c5fa200f38 | 1,868 | py | Python | projects/vdk-control-cli/tests/vdk/internal/control/utils/cli_utils_test.py | alod83/versatile-data-kit | 9ca672d3929eb3dc6fe5c677e8c8a75e2a0d2be8 | [
"Apache-2.0"
] | 100 | 2021-10-04T09:32:04.000Z | 2022-03-30T11:23:53.000Z | projects/vdk-control-cli/tests/vdk/internal/control/utils/cli_utils_test.py | alod83/versatile-data-kit | 9ca672d3929eb3dc6fe5c677e8c8a75e2a0d2be8 | [
"Apache-2.0"
] | 208 | 2021-10-04T16:56:40.000Z | 2022-03-31T10:41:44.000Z | projects/vdk-control-cli/tests/vdk/internal/control/utils/cli_utils_test.py | alod83/versatile-data-kit | 9ca672d3929eb3dc6fe5c677e8c8a75e2a0d2be8 | [
"Apache-2.0"
] | 14 | 2021-10-11T14:15:13.000Z | 2022-03-11T13:39:17.000Z | # Copyright 2021 VMware, Inc.
# SPDX-License-Identifier: Apache-2.0
from vdk.internal.control.utils.cli_utils import GqlQueryBuilder
from vdk.internal.control.utils.cli_utils import QueryField
def test_query_builder_simple():
jobs_builder = GqlQueryBuilder()
jobs_builder.start().add_return_new("jobs").add_return_new("content").add("jobName")
assert jobs_builder.build() == "{ jobs { content { jobName } } }"
def test_query_builder():
jobs_builder = GqlQueryBuilder()
jobs_content = (
jobs_builder.start()
.add_return_new("jobs", arguments=dict(pageNumber=1, pageSize=20))
.add_return_new("content")
)
jobs_content.add("jobName")
jobs_config = jobs_content.add_return_new("config")
jobs_config.add("team").add("description")
jobs_config.add_return_new("schedule").add("scheduleCron")
assert jobs_builder.build() == (
"{ jobs(pageNumber: 1, pageSize: 20) { content { jobName config { team "
"description schedule { scheduleCron } } } } }"
)
def test_query_builder_with_alias():
jobs_builder = GqlQueryBuilder()
jobs_content = (
jobs_builder.start()
.add_return_new("jobs", arguments=dict(pageNumber=1, pageSize=20))
.add_return_new("content")
)
starshot_config = jobs_content.add_return_new(
"config", alias="starshotConfig", arguments={"team": "starshot"}
)
eda_config = jobs_content.add_return_new(
"config", alias="edaConfig", arguments={"team": "eda"}
)
starshot_config.add("team").add("description")
eda_config.add("team").add("description")
assert jobs_builder.build() == (
"{ jobs(pageNumber: 1, pageSize: 20) { content { starshotConfig: config(team: "
"starshot) { team description } edaConfig: config(team: eda) { team "
"description } } } }"
)
| 35.923077 | 88 | 0.667024 | 212 | 1,868 | 5.627358 | 0.235849 | 0.07544 | 0.100587 | 0.070411 | 0.576697 | 0.487008 | 0.487008 | 0.430847 | 0.295054 | 0.295054 | 0 | 0.011928 | 0.192184 | 1,868 | 51 | 89 | 36.627451 | 0.778661 | 0.033726 | 0 | 0.317073 | 0 | 0 | 0.27303 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 1 | 0.073171 | false | 0 | 0.04878 | 0 | 0.121951 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1b5f0887853abb3afaca7a9088409bbdada5a133 | 742 | py | Python | src/scripts/copy_bug_classes.py | ucd-plse/eesi | aacbcecbb8c1f1d2e152ee4a0aa417ac92ae74bb | [
"BSD-3-Clause"
] | 5 | 2019-07-01T09:05:32.000Z | 2019-09-03T17:36:26.000Z | src/scripts/copy_bug_classes.py | defreez-ucd/eesi-doi | 49b51f9a01b8c1dbbe110898c27ec714034132aa | [
"CC-BY-4.0"
] | 3 | 2019-08-01T03:14:48.000Z | 2019-09-03T20:40:52.000Z | src/scripts/copy_bug_classes.py | defreez-ucd/eesi-doi | 49b51f9a01b8c1dbbe110898c27ec714034132aa | [
"CC-BY-4.0"
] | 2 | 2019-06-10T23:08:27.000Z | 2020-09-29T01:20:49.000Z | import argparse
parser = argparse.ArgumentParser()
parser.add_argument('fromcsv', help="Path to classified bug csv")
parser.add_argument('tocsv', help="Path to new csv")
args = parser.parse_args()
with open(args.fromcsv) as fromcsv:
    from_lines = [x.strip() for x in fromcsv.readlines()]
with open(args.tocsv) as tocsv:
    to_lines = [x.strip() for x in tocsv.readlines()]
from_classes = dict()
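# Index every previously classified line by its bug location (the first comma-separated field).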
for l in from_lines:
bug_location = l.split(',')[0]
from_classes[bug_location] = l
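# Merge: for each bug in the new csv that already has a classification, keep the first five
# fields of the new line and append the fields from the fifth column onward of the old line.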
for l in to_lines:
bug_location = l.split(',')[0]
if bug_location in from_classes:
keep = ",".join(l.split(',')[:5])
copy = ",".join(from_classes[bug_location].split(',')[4:])
print("{},{}".format(keep, copy))
else:
print(l)
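# Example invocation (file names are illustrative):
#   python copy_bug_classes.py classified_bugs.csv new_bugs.csv > merged.csv
# Lines in the new csv without a prior classification are printed unchanged.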
| 27.481481 | 66 | 0.650943 | 108 | 742 | 4.324074 | 0.37037 | 0.117773 | 0.077088 | 0.059957 | 0.171306 | 0.171306 | 0 | 0 | 0 | 0 | 0 | 0.006536 | 0.175202 | 742 | 26 | 67 | 28.538462 | 0.756536 | 0 | 0 | 0.095238 | 0 | 0 | 0.086253 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047619 | 0 | 0.047619 | 0.095238 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |