hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7ea3e4584a38703703c41bbec629295ed55eff87 | 3,562 | py | Python | _CNDB/_OMP/Net_Interface.py | CNDB/CNDB | 2e3a41111f604cf2f4f22a7c9370bb3f753e3e88 | [
"BSD-3-Clause"
] | null | null | null | _CNDB/_OMP/Net_Interface.py | CNDB/CNDB | 2e3a41111f604cf2f4f22a7c9370bb3f753e3e88 | [
"BSD-3-Clause"
] | null | null | null | _CNDB/_OMP/Net_Interface.py | CNDB/CNDB | 2e3a41111f604cf2f4f22a7c9370bb3f753e3e88 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (C) 2012-2014 Mag. Christian Tanzer All rights reserved
# Glasauergasse 32, A--1130 Wien, Austria. tanzer@swing.co.at
# #*** <License> ************************************************************#
# This module is part of the package CNDB.OMP.
#
# This module is licensed under the terms of the BSD 3-Clause License
# <http://www.c-tanzer.at/license/bsd_3c.html>.
# #*** </License> ***********************************************************#
#
#++
# Name
# CNDB.OMP.Net_Interface
#
# Purpose
# Model network interfaces in CNDB
#
# Revision Dates
# 6-Mar-2012 (CT) Creation
# 10-May-2012 (CT) Change `mac_address` to `Primary_Optional`, add `name`
# 13-Sep-2012 (RS) Set `is_partial`
# 22-Sep-2012 (RS) make `name` `A_DNS_Label`
# 6-Dec-2012 (RS) Add `belongs_to_node`
# 26-Jan-2013 (CT) Set `Net_Interface.is_relevant`
# 7-May-2013 (RS) Add `desc`
# 8-May-2013 (RS) Fix comment for desc
# 30-Sep-2013 (CT) Mixin `Belongs_to_Node_Left`, not `Belongs_to_Node`
# 14-Apr-2014 (CT) Add mixin `Belongs_to_Net_Device_Left`
# 17-Apr-2014 (CT) Fix typo in `Net_Interface.name.description`
# 30-Apr-2014 (CT) Set `left.Kind_Mixins` to `Attr.Init_Only_Mixin`
# 13-Jun-2014 (RS) Add `ui_name` for `desc`
# ««revision-date»»···
#--
from __future__ import absolute_import, division, print_function, unicode_literals
from _MOM.import_MOM import *
from _CNDB import CNDB
import _CNDB._OMP
from _GTW._OMP._DNS.Attr_Type import A_DNS_Label
import _CNDB._OMP.Net_Device
import _CNDB._OMP.Belongs_to_Net_Device
import _CNDB._OMP.Belongs_to_Node
from _GTW._OMP._NET import NET
import _GTW._OMP._NET.Attr_Type
_Ancestor_Essence = CNDB.OMP.Link1
_Mixin_1 = CNDB.OMP.Belongs_to_Node_Left
_Mixin_2 = CNDB.OMP.Belongs_to_Net_Device_Left
class Net_Interface (_Mixin_1, _Mixin_2, _Ancestor_Essence) :
"""Model a network interface of a CNDB device"""
is_partial = True
is_relevant = True
class _Attributes \
( _Mixin_1._Attributes
, _Mixin_2._Attributes
, _Ancestor_Essence._Attributes
) :
_Ancestor = _Ancestor_Essence._Attributes
### Primary attributes
class left (_Ancestor.left) :
"""Network device the interface is connected to."""
role_type = CNDB.OMP.Net_Device
role_name = "device"
Kind_Mixins = (Attr.Init_Only_Mixin, )
show_in_ui_selector= False
# end class left
class mac_address (NET.A_MAC_Address) :
"""MAC address of interface."""
kind = Attr.Primary_Optional
# end class mac_address
class name (A_DNS_Label) :
"""Name of the interface."""
kind = Attr.Primary_Optional
completer = Attr.Completer_Spec (2, Attr.Selector.primary)
# end class name
### Non-primary attributes
class is_active (A_Boolean) :
"""Indicates if this interface is active."""
kind = Attr.Optional
# end class is_active
class desc (A_Text) :
"""Description of interface"""
kind = Attr.Optional
ui_name = "Description"
# end class desc
# end class _Attributes
# end class Net_Interface
if __name__ != "__main__" :
CNDB.OMP._Export ("*")
### __END__ CNDB.OMP.Net_Interface
| 29.932773 | 85 | 0.599663 | 450 | 3,562 | 4.455556 | 0.315556 | 0.041895 | 0.032419 | 0.03192 | 0.100249 | 0.043392 | 0.030923 | 0 | 0 | 0 | 0 | 0.037931 | 0.267266 | 3,562 | 118 | 86 | 30.186441 | 0.727586 | 0.462381 | 0 | 0.102564 | 0 | 0 | 0.014192 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25641 | 0 | 0.487179 | 0.025641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7ea407ef62f48e56c1511352508b2200c1993ab9 | 7,564 | py | Python | patient_state.py | JMathiszig-Lee/Propofol | 7c1087ef56032a2edb02ec34a134d026c84fdc76 | [
"MIT"
] | 3 | 2017-05-27T10:32:46.000Z | 2018-12-22T22:57:51.000Z | patient_state.py | JMathiszig-Lee/Propofol | 7c1087ef56032a2edb02ec34a134d026c84fdc76 | [
"MIT"
] | 2 | 2017-05-29T12:52:18.000Z | 2018-11-26T14:37:21.000Z | patient_state.py | JMathiszig-Lee/Propofol | 7c1087ef56032a2edb02ec34a134d026c84fdc76 | [
"MIT"
] | 3 | 2017-05-27T12:42:07.000Z | 2018-01-23T13:33:16.000Z | class PatientState:
# age: years
# weight: kilos
# height: cm
# sex: 'm' or 'f'
def __init__(self, age, weight, height, sex, params):
self.params = params
lean_body_mass = self.__lean_body_mass(weight, height, sex)
self.v1 = params['v1']
# TODO: Work out why v2 and v3 are not used in the algorithm
v2 = params['k21c'] + params['k21d'] * (age - params['age_offset'])
v3 = params['v3']
# Initial concentration is zero in all components
self.x1 = 0.0
self.x2 = 0.0
self.x3 = 0.0
self.k10 = (params['k10a'] + params['k10b'] * (weight - params['weight_offset']) + params['k10c'] * (lean_body_mass - params['lbm_offset']) + params['k10d'] * (height - params['height_offset'])) / 60
self.k12 = (params['k12a'] + params['k12b'] * (age - params['age_offset'])) / 60
self.k13 = params['k13'] / 60
self.k21 = ((params['k21a'] + params['k21b'] * (age - params['age_offset'])) / v2) / 60
self.k31 = params['k31'] / 60
self.keo = 0.456 / 60
self.xeo = 0.0
def give_drug(self, drug_milligrams):
self.x1 = self.x1 + drug_milligrams / self.v1
def wait_time(self, time_seconds):
x1k10 = self.x1 * self.k10
x1k12 = self.x1 * self.k12
x1k13 = self.x1 * self.k13
x2k21 = self.x2 * self.k21
x3k31 = self.x3 * self.k31
xk1e = self.x1 * self.keo
xke1 = self.xeo * self.keo
self.x1 = self.x1 + (x2k21 - x1k12 + x3k31 - x1k13 - x1k10) * time_seconds
self.x2 = self.x2 + (x1k12 - x2k21) * time_seconds
self.x3 = self.x3 + (x1k13 - x3k31) * time_seconds
self.xeo = self.xeo + (xk1e - xke1) * time_seconds
@staticmethod
def with_schnider_params(age, weight, height, sex):
params = PatientState.schnider_params()
return PatientState(age, weight, height, sex, params)
@staticmethod
def schnider_params():
params = {
'k10a': 0.443,
'k10b': 0.0107,
'k10c': -0.0159,
'k10d': 0.0062,
'k12a': 0.302,
'k12b': -0.0056,
'k13': 0.196,
'k21a': 1.29,
'k21b': -0.024,
'k21c': 18.9,
'k21d': -0.391,
'k31': 0.0035,
'v1': 4.27,
'v3': 238,
'age_offset': 53,
'weight_offset': 77,
'lbm_offset': 59,
'height_offset': 177
}
return params
def __lean_body_mass(self, weight, height, sex):
if sex != "m" and sex != "f":
raise ValueError("Unknown sex '%s'. This algorithm can only handle 'm' and 'f'. :(" % sex)
# TODO: Use better equation to calculate lean body mass
if sex == "m":
return 1.1 * weight - self.params['weight_offset'] * ((weight/height) * (weight/height))
else:
return 1.07 * weight - self.params['weight_offset'] * ((weight/height) * (weight/height))
def __repr__(self):
return "PatientState(x1=%f, x2=%f, x3=%f, xeo=%f)" % (self.x1, self.x2, self.x3, self.xeo)
class PatientState2:
# age: years
# weight: kilos
# height: cm
# sex: 'm' or 'f'
def __init__(self, age, weight, height, sex, params):
self.params = params
lean_body_mass = self.__lean_body_mass(weight, height, sex)
self.v1 = ((params['v1a'] * 50) - params['v1b']*(age - (params['age_offset']) * 100)) * (params['v1c'] * (lean_body_mass - (params['lbm_offset'] * 100)))
self.v2 = params['v2a'] * lean_body_mass * 2
self.v3 = params['v3a'] * weight * 5
# Initial concentration is zero in all components
self.x1 = 0.0
self.x2 = 0.0
self.x3 = 0.0
self.k10 = (params['k10a'] * self.v1) / 60
self.k12 = params['k12'] /60
self.k13 = params['k13'] / 60
self.k21 = (params['k12'] * (self.v1/self.v2)) / 60
self.k31 = (params['k13'] * (self.v1/self.v3)) / 60
self.keo = 0.456 / 60
self.xeo = 0.0
def give_drug(self, drug_milligrams):
self.x1 = self.x1 + drug_milligrams / self.v1
def wait_time(self, time_seconds):
x1k10 = self.x1 * self.k10
x1k12 = self.x1 * self.k12
x1k13 = self.x1 * self.k13
x2k21 = self.x2 * self.k21
x3k31 = self.x3 * self.k31
self.x1 = self.x1 + (x2k21 - x1k12 + x3k31 - x1k13 - x1k10) * time_seconds
self.x2 = self.x2 + (x1k12 - x2k21) * time_seconds
self.x3 = self.x3 + (x1k13 - x3k31) * time_seconds
def __lean_body_mass(self, weight, height, sex):
if sex != "m" and sex != "f":
raise ValueError("Unknown sex '%s'. This algorithm can only handle 'm' and 'f'. :(" % sex)
if sex == "m":
return (0.32819 * weight) + (0.33929 * height) - 29.5336
else:
return (0.29569 * weight) + (0.41813 * height) - 43.2933
def __repr__(self):
return "PatientState(x1=%f, x2=%f, x3=%f, xeo=%f)" % (self.x1, self.x2, self.x3, self.xeo)
class MarshState:
def __init__(self, age, weight, height, sex, params):
self.params = params
self.v1 = params['v1a'] * weight
self.v2 = params['v2a'] * weight
self.v3 = params['v3a'] * weight
# Initial concentration is zero in all components
self.x1 = 0.0
self.x2 = 0.0
self.x3 = 0.0
self.k10 = params['k10a'] / 60
self.k12 = params['k12'] /60
self.k13 = params['k13'] / 60
self.k21 = params['k12'] / 60
self.k31 = params['k13'] / 60
def give_drug(self, drug_milligrams):
self.x1 = self.x1 + drug_milligrams / self.v1
def wait_time(self, time_seconds):
x1k10 = self.x1 * self.k10
x1k12 = self.x1 * self.k12
x1k13 = self.x1 * self.k13
x2k21 = self.x2 * self.k21
x3k31 = self.x3 * self.k31
self.x1 = self.x1 + (x2k21 - x1k12 + x3k31 - x1k13 - x1k10) * time_seconds
self.x2 = self.x2 + (x1k12 - x2k21) * time_seconds
self.x3 = self.x3 + (x1k13 - x3k31) * time_seconds
@staticmethod
def with_marsh_params(age, weight, height, sex):
params = MarshState.marsh_params()
return MarshState(age, weight, height, sex, params)
@staticmethod
def marsh_params():
params = {
'v1a': 0.228,
'v2a': 0.463,
'v3a': 2.893,
'k10a': 0.119,
'k12': 0.112,
'k13': 0.042,
'k21': 0.055,
'k31': 0.0033,
'keo': 0.26,
}
return params
def __repr__(self):
return "PatientState(x1=%f, x2=%f, x3=%f)" % (self.x1, self.x2, self.x3)
if __name__ == '__main__':
patient = PatientState.with_schnider_params(34, 46.3, 157.5, "f")
print ("Initial state: " + str(patient))
patient.give_drug(90)
print ("After giving drug: " + str(patient))
times = [126, 240, 483, 962, 1803, 3583]
patient2 = MarshState.with_marsh_params(34, 46.3, 157.5, "f")
print ("Initial state: " + str(patient2))
patient2.give_drug(90)
print ("After giving drug: " + str(patient2))
times = [126, 240, 483, 962, 1803, 3583]
for t in range(961):
patient.wait_time(1)
patient2.wait_time(1)
#print str(t) + str(patient)
mod = t % 30
if mod == 0:
            print (str(t) + str(patient) + str(patient2))
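# --- Added usage sketch (not part of the original module) ---
# Minimal illustration of the API defined above, with illustrative values
# (40 y, 70 kg, 170 cm, male, 100 mg bolus): build a Schnider-model patient,
# give the drug, then advance the three-compartment model in one-second
# Euler steps and read back the central-compartment concentration x1.
def _example_schnider_bolus():
    p = PatientState.with_schnider_params(40, 70.0, 170.0, "m")
    p.give_drug(100)            # 100 mg bolus into the central compartment
    for _ in range(600):        # simulate 10 minutes
        p.wait_time(1)          # 1-second Euler step
    return p.x1                 # central (plasma) compartment concentration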
| 32.324786 | 207 | 0.535299 | 993 | 7,564 | 3.963746 | 0.175227 | 0.042683 | 0.048272 | 0.032012 | 0.691311 | 0.644563 | 0.6156 | 0.578506 | 0.561738 | 0.526931 | 0 | 0.128235 | 0.315442 | 7,564 | 233 | 208 | 32.463519 | 0.631904 | 0.051163 | 0 | 0.506024 | 0 | 0 | 0.09341 | 0 | 0 | 0 | 0 | 0.004292 | 0 | 1 | 0.108434 | false | 0 | 0 | 0.018072 | 0.192771 | 0.03012 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7ea5ae4ac262218edb181aec0214200e18d8c4ee | 1,996 | py | Python | huffman/huffman.py | nicktimko/huffman | bfad004ce7951750cc4536ae4466f87afa0f5e5d | [
"MIT"
] | 18 | 2017-03-07T20:00:04.000Z | 2022-03-09T00:22:35.000Z | huffman/huffman.py | nicktimko/huffman | bfad004ce7951750cc4536ae4466f87afa0f5e5d | [
"MIT"
] | 1 | 2018-04-20T14:26:29.000Z | 2018-04-20T14:33:30.000Z | huffman/huffman.py | nicktimko/huffman | bfad004ce7951750cc4536ae4466f87afa0f5e5d | [
"MIT"
] | 5 | 2017-03-30T07:23:19.000Z | 2022-02-01T20:10:08.000Z | from __future__ import print_function, absolute_import
from .heapqo import Heap
__all__ = ["Node", "Leaf", "Tree", "codebook"]
class Node(object):
def __init__(self, left, right):
self.parent = None
left.parent = right.parent = self
self.left = left
self.right = right
self.weight = left.weight + right.weight
def __repr__(self):
return "<Node with weight {}>".format(self.weight)
def __lt__(self, other):
return self.weight < other.weight
class Leaf(Node):
def __init__(self, symbol, weight):
self.parent = None
self.symbol = symbol
self.weight = weight
def __repr__(self):
return "<Leaf '{}' with weight {}, code '{}'>".format(
self.symbol, self.weight, self.code
)
@property
def code(self):
code = ""
n = self
while n.parent is not None:
codebit = "0" if n is n.parent.left else "1"
code = codebit + code
n = n.parent
return code
class Tree(object):
def __init__(self, symbolweights):
leaves = [Leaf(*sw) for sw in symbolweights]
heap = Heap(leaves[:])
while len(heap) >= 2:
heap.push(Node(heap.pop(), heap.pop()))
self.root = heap.pop()
self.codebook = {l.symbol: l.code for l in leaves}
def codebook(symbolweights):
"""
Provided an iterable of 2-tuples in (symbol, weight) format, generate a
Huffman codebook, returned as a dictionary in {symbol: code} format.
Examples:
>>> huffman.codebook([('A', 2), ('B', 4), ('C', 1), ('D', 1)])
{'A': '10', 'B': '0', 'C': '110', 'D': '111'}
>>> huffman.codebook(collections.Counter('man the stand banana man').items())
{' ': '111',
'a': '10',
'b': '0100',
'd': '0110',
'e': '11010',
'h': '0101',
'm': '1100',
'n': '00',
's': '11011',
't': '0111'}
"""
return Tree(symbolweights).codebook
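# --- Added usage sketch (not part of the original module) ---
# Shows how a codebook produced above can encode a message and how the
# prefix-free property makes decoding unambiguous; the helper names below are
# illustrative, not part of the library.
def _encode(message, book):
    """Concatenate the code of every symbol in the message."""
    return "".join(book[symbol] for symbol in message)
def _decode(bits, book):
    """Invert the codebook and scan the bit string left to right."""
    inverse = {code: symbol for symbol, code in book.items()}
    decoded, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:          # prefix-freeness: the first match is correct
            decoded.append(inverse[buf])
            buf = ""
    return "".join(decoded)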
| 24.641975 | 81 | 0.545591 | 245 | 1,996 | 4.306122 | 0.371429 | 0.047393 | 0.03128 | 0.032227 | 0.043602 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038271 | 0.293086 | 1,996 | 80 | 82 | 24.95 | 0.709426 | 0.238477 | 0 | 0.095238 | 0 | 0 | 0.055402 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.190476 | false | 0 | 0.047619 | 0.071429 | 0.428571 | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7ea5eec3676adaf65ddd03389580c3142472152f | 3,950 | py | Python | tess.py | judaicadh/ja2-scripts | 7a0c0a1b2cac38eeebab6b6834555cdab0956d40 | [
"MIT"
] | null | null | null | tess.py | judaicadh/ja2-scripts | 7a0c0a1b2cac38eeebab6b6834555cdab0956d40 | [
"MIT"
] | null | null | null | tess.py | judaicadh/ja2-scripts | 7a0c0a1b2cac38eeebab6b6834555cdab0956d40 | [
"MIT"
] | null | null | null | import subprocess
import uuid
import os
import shlex
import argparse
import tempfile
maxproc = 8
debug = False
conv = ('convert -density 300 {doc} -depth 8 '
'-strip -background white -alpha off {tempfile}')
# tess = ('tesseract {infile} -l {lang} '
# '-c preserve_interword_spaces=1 {outfile}')
tess = ('tesseract {infile} {outfile} -l {lang}')
def poll_and_popitem(running_ps):
    # Find a terminated process, using temporary output filenames as ids.
# This is a slow linear search, but that's probably OK since these will
# be heavily IO-bound processes.
pop_id = None
for pid, ps in running_ps.items():
if ps.poll() is not None:
pop_id = pid
break
# If no process has terminated, pick one at random & remember filename.
if pop_id is None:
pop_id, ps = running_ps.popitem()
else:
ps = running_ps.pop(pop_id)
return pop_id, ps
def wait_for_ps(running_ps, infiles, outfiles):
outfile, ps = poll_and_popitem(running_ps)
inf = infiles[outfile]
if ps.wait() == 0:
outfiles[inf] = outfile
msg = 'File\n\t{}\nconverted to\n\t{}\nusing `{}`'
msg = msg.format(inf, outfile, ps.args[0])
else:
msg = 'Error: `{}` failed for file\n\t{}'
msg = msg.format(ps.args[0], inf)
print(msg)
print()
def convert_files(files, tempdir): # instead of tempdir, make_outfile
running_ps = {}
docfiles = {}
outfiles = {}
for doc in files:
tempfile = '' # make_outfile
while not tempfile or os.path.exists(tempfile):
tempname = str(uuid.uuid4()) + '.tiff'
tempfile = os.path.join(tempdir, tempname)
docfiles[tempfile] = doc
args = shlex.split(conv.format(doc=doc, tempfile=tempfile))
print(args)
ps = subprocess.Popen(args)
running_ps[tempfile] = ps
if len(running_ps) > maxproc:
wait_for_ps(running_ps, docfiles, outfiles)
while running_ps:
wait_for_ps(running_ps, docfiles, outfiles)
return outfiles
def tess_files(infiles, language): # add make_outfile
running_ps = {}
stepfiles = {}
outfiles = {}
for infile, stepfile in infiles.items():
outfile, ext = os.path.splitext(infile) # make_outfile
stepfiles[outfile] = stepfile
args = shlex.split(tess.format(infile=stepfile,
lang=language,
outfile=outfile))
print(args)
ps = subprocess.Popen(args)
running_ps[outfile] = ps
if len(running_ps) > maxproc:
wait_for_ps(running_ps, stepfiles, outfiles)
while running_ps:
wait_for_ps(running_ps, stepfiles, outfiles)
return outfiles
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description='OCR workflow for a range of file types.'
)
parser.add_argument(
'--language', '-l', default='eng',
help='The tesseract language ID code for the language you would '
'like to use. Default is `eng` (English).'
)
parser.add_argument(
'files', nargs='+',
help='One or more image files to process.'
)
args = parser.parse_args()
files = args.files
try:
with tempfile.TemporaryDirectory() as tempdir:
outfiles = convert_files(files, tempdir)
outfiles = tess_files(outfiles, args.language)
except OSError as exc:
print('Either the `convert` or the `tesseract` command could not '
'be found.')
print()
print('Make sure that you have installed both ImageMagick and '
'Tesseract, and')
print('that the `convert` and `tesseract` executables are on the '
'system path.')
print()
print('Here is the original exception message:')
print()
print(exc)
print()
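# --- Added usage note (not part of the original script) ---
# Typical invocation, assuming ImageMagick's `convert` and Tesseract are on
# the PATH (file names are illustrative):
#
#     python tess.py --language eng page1.png page2.jpg
#
# Each input is first rasterised to an 8-bit TIFF in a temporary directory by
# convert_files(), then OCR'd by tess_files(); tesseract writes its text
# output next to each input image, appending its own extension to the
# {outfile} base name.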
| 30.152672 | 75 | 0.602532 | 484 | 3,950 | 4.795455 | 0.363636 | 0.069798 | 0.033175 | 0.034468 | 0.15252 | 0.124946 | 0.124946 | 0.103404 | 0.069798 | 0.03533 | 0 | 0.003574 | 0.291646 | 3,950 | 130 | 76 | 30.384615 | 0.825947 | 0.102532 | 0 | 0.262136 | 0 | 0 | 0.182796 | 0.005942 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038835 | false | 0 | 0.058252 | 0 | 0.126214 | 0.126214 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7ea8cd5c9d9f93d3dcea1fafdeca645b70189d52 | 1,756 | py | Python | image_demo.py | bobby20180331/movenet-pytorch | d7262410af2776afe66d3f7a2282b64becb82601 | [
"MIT"
] | 24 | 2021-08-18T03:57:51.000Z | 2022-03-01T08:53:27.000Z | image_demo.py | bobby20180331/movenet-pytorch | d7262410af2776afe66d3f7a2282b64becb82601 | [
"MIT"
] | 6 | 2021-10-12T02:21:20.000Z | 2022-03-22T07:02:09.000Z | image_demo.py | bobby20180331/movenet-pytorch | d7262410af2776afe66d3f7a2282b64becb82601 | [
"MIT"
] | 6 | 2021-09-17T11:18:37.000Z | 2022-02-21T07:46:04.000Z | import cv2
import time
import argparse
import os
import torch
from movenet.models.model_factory import load_model
from movenet.utils import read_imgfile, draw_skel_and_kp
parser = argparse.ArgumentParser()
parser.add_argument('--model', type=str, default="movenet_lightning", choices=["movenet_lightning", "movenet_thunder"])
# parser.add_argument('--size', type=int, default=192)
parser.add_argument('--conf_thres', type=float, default=0.3)
parser.add_argument('--image_dir', type=str, default='./images')
parser.add_argument('--output_dir', type=str, default='./output')
args = parser.parse_args()
if args.model == "movenet_lightning":
args.size = 192
args.ft_size = 48
else:
args.size = 256
args.ft_size = 64
def main():
model = load_model(args.model, ft_size=args.ft_size)
# model = model.cuda()
if args.output_dir:
if not os.path.exists(args.output_dir):
os.makedirs(args.output_dir)
filenames = [
        f.path for f in os.scandir(args.image_dir) if f.is_file() and f.path.endswith(('.png', '.jpg', '.jpeg'))]
start = time.time()
for f in filenames:
input_image, draw_image = read_imgfile(
f, args.size)
with torch.no_grad():
input_image = torch.Tensor(input_image) # .cuda()
kpt_with_conf = model(input_image)[0, 0, :, :]
kpt_with_conf = kpt_with_conf.numpy()
if args.output_dir:
draw_image = draw_skel_and_kp(
draw_image, kpt_with_conf, conf_thres=args.conf_thres)
cv2.imwrite(os.path.join(args.output_dir, os.path.relpath(f, args.image_dir)), draw_image)
print('Average FPS:', len(filenames) / (time.time() - start))
if __name__ == "__main__":
main()
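# --- Added usage note (not part of the original script) ---
# Typical invocation, using the argparse defaults defined above:
#
#     python image_demo.py --model movenet_thunder --conf_thres 0.3 \
#         --image_dir ./images --output_dir ./output
#
# Every .png/.jpg/.jpeg in image_dir is run through MoveNet and a copy with
# the detected keypoints/skeleton drawn on it is written to output_dir.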
| 29.266667 | 119 | 0.665148 | 250 | 1,756 | 4.42 | 0.332 | 0.048869 | 0.076923 | 0.023529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013485 | 0.197608 | 1,756 | 59 | 120 | 29.762712 | 0.770759 | 0.046128 | 0 | 0.04878 | 0 | 0 | 0.093357 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02439 | false | 0 | 0.170732 | 0 | 0.195122 | 0.02439 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7ea8ec1f781ee0130f656017a5cfa1e0ef40bca2 | 925 | py | Python | Scripts/Correlation.py | Albertios/PythonInGIS_EagleOwl | 35cba102b2c93bb1a7b415e9460aa955d4816b32 | [
"MIT"
] | null | null | null | Scripts/Correlation.py | Albertios/PythonInGIS_EagleOwl | 35cba102b2c93bb1a7b415e9460aa955d4816b32 | [
"MIT"
] | null | null | null | Scripts/Correlation.py | Albertios/PythonInGIS_EagleOwl | 35cba102b2c93bb1a7b415e9460aa955d4816b32 | [
"MIT"
] | null | null | null | import numpy as np
import numpy.ma as ma
x = []
y = []
#print(ma.corrcoef(ma.masked_invalid(x), ma.masked_invalid(z)))
autocorrelation = []
# NOTE: `lines` is assumed to be defined earlier (one numeric sequence per
# track); as posted, this script raises a NameError at the loop below.
for i in range(len(lines)):
x = lines[i-1]
for j in range(len(lines)-1):
if i != j:
y = lines[j-1]
correlation = ma.corrcoef(ma.masked_invalid(x), ma.masked_invalid(y))
#print(correlation)
#print(correlation[0])
#print(correlation[1])
autocorrelation.append(correlation)
autocorrelation.append([])
owls = ["Jan","Feb","Mar","Apr","May","June","July","Aug","Sept","Oct","Nov","Dec"]
sum = 0.0
c = 0
for i in autocorrelation:
    if not len(i):
        # skip the empty separator entries appended after each outer iteration
        continue
    val = i[1][0]
    if not ma.is_masked(val) and val > 0.0:
        sum = sum + val
        c += 1
for i in autocorrelation:
print(i)
| 14.453125 | 83 | 0.478919 | 114 | 925 | 3.850877 | 0.368421 | 0.072893 | 0.136674 | 0.082005 | 0.186788 | 0.186788 | 0.186788 | 0.186788 | 0.186788 | 0 | 0 | 0.022298 | 0.36973 | 925 | 64 | 84 | 14.453125 | 0.730703 | 0.131892 | 0 | 0.086957 | 0 | 0 | 0.05125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.086957 | 0 | 0.086957 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7ea9490756cd3418bc4b450d8475a807b11c39ee | 293 | py | Python | scripts/helpers/bump_version.py | hingbong/Roboto | 7c1da0bde3b61a6bb20557e4b8e7b5b981bdff38 | [
"Apache-2.0"
] | null | null | null | scripts/helpers/bump_version.py | hingbong/Roboto | 7c1da0bde3b61a6bb20557e4b8e7b5b981bdff38 | [
"Apache-2.0"
] | null | null | null | scripts/helpers/bump_version.py | hingbong/Roboto | 7c1da0bde3b61a6bb20557e4b8e7b5b981bdff38 | [
"Apache-2.0"
] | null | null | null | import defcon
from glob import glob
VERSION_MAJOR = 3
VERSION_MINOR = 0
sources = glob("sources/*.ufo")
for path in sources:
print(f"Updating {path}")
font = defcon.Font(path)
font.info.versionMajor = VERSION_MAJOR
font.info.versionMinor = VERSION_MINOR
font.save(path)
| 19.533333 | 42 | 0.713311 | 41 | 293 | 5 | 0.536585 | 0.117073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008368 | 0.1843 | 293 | 14 | 43 | 20.928571 | 0.849372 | 0 | 0 | 0 | 0 | 0 | 0.095563 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7ea9bf0e07243a4ce72ca42c66cfee0e1c7cd5a7 | 1,823 | py | Python | aardvark/reaper/strategies/utils.py | NeCTAR-RC/aardvark | 3bf62bf38d034af825ac9fdf1dd61553a748a3e0 | [
"Apache-2.0"
] | null | null | null | aardvark/reaper/strategies/utils.py | NeCTAR-RC/aardvark | 3bf62bf38d034af825ac9fdf1dd61553a748a3e0 | [
"Apache-2.0"
] | null | null | null | aardvark/reaper/strategies/utils.py | NeCTAR-RC/aardvark | 3bf62bf38d034af825ac9fdf1dd61553a748a3e0 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2018 European Organization for Nuclear Research.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
from oslo_log import log as logging
import aardvark.conf
from aardvark.objects import resources as res_obj
LOG = logging.getLogger(__name__)
CONF = aardvark.conf.CONF
Combination = collections.namedtuple(
"Combination", "provider instances leftovers")
def sum_resources(x):
resources = res_obj.Resources()
for y in x:
resources += y.resources
return resources
def sort_combinations(combinations):
"""Sorts the found combinations of servers"""
resources = sorted([
("VCPU", CONF.reaper.vcpu_sorting_priority),
("MEMORY_MB", CONF.reaper.ram_sorting_priority),
("DISK_GB", CONF.reaper.disk_sorting_priority)
], key=lambda x: x[1])
for resource, _ in resources:
combinations = sorted(
combinations, key=lambda x: getattr(x.leftovers, resource, 0))
minimum_value = getattr(combinations[0].leftovers, resource, 0)
combinations = [
combo for combo in combinations
if getattr(combo.leftovers, resource, 0) == minimum_value
]
if len(combinations) == 1:
break
return combinations[0]
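# --- Added illustration (not part of the original module) ---
# How the selection above behaves with hypothetical leftovers, assuming VCPU
# is configured with the smallest priority value (so it is compared first):
#
#   combo A: leftovers VCPU=0, MEMORY_MB=512
#   combo B: leftovers VCPU=1, MEMORY_MB=0
#   combo C: leftovers VCPU=0, MEMORY_MB=128
#
# The VCPU pass keeps only the minimum-VCPU combinations (A and C); the
# MEMORY_MB pass then leaves only C, which is returned. Ties that survive
# every pass are resolved by returning the first remaining combination.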
| 30.898305 | 78 | 0.692814 | 231 | 1,823 | 5.380952 | 0.515152 | 0.04827 | 0.043443 | 0.025744 | 0.04827 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010608 | 0.224355 | 1,823 | 58 | 79 | 31.431034 | 0.868458 | 0.36972 | 0 | 0 | 0 | 0 | 0.052212 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.133333 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7ea9c57738aebfc5fef889f720af9731499c47ff | 7,730 | py | Python | fnet/data/chunkdataprovider.py | HelmholtzAI-Consultants-Munich/pytorch_fnet | 879784bd0f8e76ab8f0ed8de4235180a316e12d8 | [
"Unlicense"
] | 16 | 2021-03-12T01:37:36.000Z | 2022-02-07T22:02:15.000Z | fnet/data/chunkdataprovider.py | HelmholtzAI-Consultants-Munich/pytorch_fnet | 879784bd0f8e76ab8f0ed8de4235180a316e12d8 | [
"Unlicense"
] | 2 | 2021-03-26T04:12:15.000Z | 2022-03-30T07:34:34.000Z | fnet/data/chunkdataprovider.py | HelmholtzAI-Consultants-Munich/pytorch_fnet | 879784bd0f8e76ab8f0ed8de4235180a316e12d8 | [
"Unlicense"
] | 3 | 2020-07-08T09:03:00.000Z | 2021-11-29T07:17:13.000Z | import numpy as np
import warnings
from fnet import get_vol_transformed
import pdb
class ChunkDataProvider(object):
def __init__(self, dataset, buffer_size, batch_size, replace_interval,
dims_chunk=(32, 64, 64), dims_pin=(None, None, None),
transforms=None,
choices_augmentation = None,
):
"""
dataset - DataSet instance
        buffer_size - (int) number of images to generate batches from
...
replace_interval - (int) number of batches between buffer item replacements. Set to -1 for no replacement.
dims_chunk - (tuple) shape of extracted chunks
dims_pin - (tuple) optionally pin the chunk extraction from this coordinate. Use None to indicate no pinning
for any particular dimension.
transforms - list of transforms to apply to each DataSet element
"""
assert transforms is None or isinstance(transforms, (list, tuple))
assert choices_augmentation is None or all(i in range(8) for i in choices_augmentation)
print('DEBUG: augmentation', choices_augmentation)
self._dataset = dataset
self._buffer_size = buffer_size
self._batch_size = batch_size
self._replace_interval = replace_interval
self._transforms = transforms
self.last_sources = '' # str indicating indices of folders in buffer
self._dims_chunk = dims_chunk
self._dims_pin = dims_pin
self._buffer = []
self._n_folders = len(dataset)
self._idx_folder = 0
self._count_iter = 0
self._idx_replace = 0 # next
self._fill_buffer()
self._update_last_sources()
self._shape_batch = [self._batch_size, 1] + list(self._dims_chunk)
self._dims_chunk_options = (self._dims_chunk, (self._dims_chunk[0]//2, *self._dims_chunk[1:]))
self.choices_augmentation = choices_augmentation
def use_test_set(self):
self._dataset.use_test_set()
def use_train_set(self):
self._dataset.use_train_set()
def set_dims_pin(self, dims_pin):
self._dims_pin = dims_pin
def get_dims_chunk(self):
return self._dims_chunk
def _vol_size_okay(self, vol):
return all(vol.shape[i] >= self._dims_chunk[i] for i in range(vol.ndim))
def _replace_buffer_item(self):
"""Replace oldest package in buffer with another package."""
package = self._create_package()
self._buffer[self._idx_replace] = package
self._idx_replace += 1
if self._idx_replace >= self._buffer_size:
self._idx_replace = 0
def _fill_buffer(self):
while len(self._buffer) < self._buffer_size:
package = self._create_package()
self._buffer.append(package)
def _incr_idx_folder(self):
self._idx_folder += 1
if self._idx_folder >= self._n_folders:
self._idx_folder = 0
def _create_package(self):
"""Read signal, target images from current folder and return data package.
Returns:
package - 3-element tuple (idx_folder, vol_signal, vol_target)
"""
tries = 5
volumes = None
while volumes is None and tries > 0:
volumes = self._dataset[self._idx_folder]
if volumes:
if self._vol_size_okay(volumes[0]):
idx_folder = self._idx_folder
else:
warnings.warn('bad size: {}. skipping....'.format(volumes[0].shape))
volumes = None
self._incr_idx_folder()
tries -= 1
        if volumes is None:
            raise RuntimeError('could not load a usable volume from the dataset after several tries')
return (idx_folder, volumes[0], volumes[1])
def _update_last_sources(self):
source_list = [str(package[0]) for package in self._buffer]
self.last_sources = '|'.join(source_list)
def _augment_chunks(self, chunks):
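        # Added note: `choice` is drawn from choices_augmentation (a subset of
        # 0..7) and encodes a joint augmentation: an odd choice flips along
        # axis 1 first, and choice // 2 then selects a 0/90/180/270 degree
        # rotation in the (y, x) plane. The same transform is applied to every
        # chunk in `chunks`, so signal and target stay aligned.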
if self.choices_augmentation is None:
return chunks
chunks_new = []
choice = np.random.choice(self.choices_augmentation)
for chunk in chunks:
chunk_new = chunk
if choice in [1, 3, 5, 7]:
chunk_new = np.flip(chunk_new, axis=1)
if choice in [2, 3]:
chunk_new = np.rot90(chunk_new, 1, axes=(1, 2))
elif choice in [4, 5]:
chunk_new = np.rot90(chunk_new, 2, axes=(1, 2))
elif choice in [6, 7]:
chunk_new = np.rot90(chunk_new, 3, axes=(1, 2))
chunks_new.append(chunk_new)
return chunks_new
def _gen_batch(self):
"""Generate a batch from sources in self._buffer
Returns:
batch_x, batch_y - (2 numpy arrays) each array will have shape (n, 1) + dims_chunk.
"""
batch_x, batch_y = None, None
coords = self._pick_random_chunk_coords()
for i in range(len(coords)):
coord = coords[i]
chunks_tup = self._extract_chunk(coord)
if self._transforms is not None:
chunks_transformed = []
for j, transform in enumerate(self._transforms):
chunks_transformed.append(get_vol_transformed(chunks_tup[j], self._transforms[j]))
else:
chunks_transformed = chunks_tup
if batch_x is None or batch_y is None:
batch_x = np.zeros((self._batch_size, 1, ) + chunks_transformed[0].shape, dtype=np.float32)
batch_y = np.zeros((self._batch_size, 1, ) + chunks_transformed[1].shape, dtype=np.float32)
# pdb.set_trace()
chunks_augmented = self._augment_chunks(chunks_transformed)
batch_x[i, 0, ...] = chunks_augmented[0]
batch_y[i, 0, ...] = chunks_augmented[1]
return batch_x, batch_y
def _pick_random_chunk_coords(self):
"""Returns a random coordinate from random images in buffer.
Returns:
            coords - list of tuples of the form (idx_buffer, (z, y, x))
"""
coord_list = []
for idx_chunk in range(self._batch_size):
idx_rand = np.random.randint(0, self._buffer_size)
shape = self._buffer[idx_rand][1].shape # get shape from trans channel image
coord_3d = [0]*len(self._dims_chunk)
for i in range(len(coord_3d)):
if self._dims_pin[i] is None:
coord_3d[i] = np.random.randint(0, shape[i] - self._dims_chunk[i] + 1) # upper bound of randint is exclusive, so +1
else:
coord_3d[i] = self._dims_pin[i]
coord_list.append((idx_rand, tuple(coord_3d)))
return coord_list
def _extract_chunk(self, coord):
"""Returns arrays extracted from images in buffer.
Parameters:
            coord - tuple of the form (idx_buffer, (z, y, x)) indicating where to extract the chunk from the buffer
"""
idx_buf = coord[0]
coord_img = coord[1]
slices = []
for i in range(len(coord_img)):
slices.append(slice(coord_img[i], coord_img[i] + self._dims_chunk[i]))
return self._buffer[idx_buf][1][slices], self._buffer[idx_buf][2][slices]
def get_batch(self):
"""Get a batch of examples from source data."
Returns:
batch_x, batch_y - (2 numpy arrays) each array will have shape (n, 1) + dims_chunk.
"""
self._count_iter += 1
if (self._replace_interval > 0) and (self._count_iter % self._replace_interval == 0):
self._replace_buffer_item()
self._update_last_sources()
return self._gen_batch()
| 39.438776 | 135 | 0.599612 | 993 | 7,730 | 4.3857 | 0.18429 | 0.035132 | 0.032836 | 0.010103 | 0.150631 | 0.116418 | 0.046843 | 0.046843 | 0.029392 | 0.029392 | 0 | 0.016583 | 0.305692 | 7,730 | 195 | 136 | 39.641026 | 0.794857 | 0.178266 | 0 | 0.110294 | 0 | 0 | 0.007545 | 0 | 0 | 0 | 0 | 0 | 0.014706 | 1 | 0.117647 | false | 0 | 0.029412 | 0.014706 | 0.220588 | 0.007353 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7eabfb7903ba9ed9e4b698fd621c5968de6cd0a8 | 1,000 | py | Python | day14/part2.py | fvdnabee/aoc20 | e4c9d31aaead998bad9f0a53612ab731b66933da | [
"Unlicense"
] | null | null | null | day14/part2.py | fvdnabee/aoc20 | e4c9d31aaead998bad9f0a53612ab731b66933da | [
"Unlicense"
] | null | null | null | day14/part2.py | fvdnabee/aoc20 | e4c9d31aaead998bad9f0a53612ab731b66933da | [
"Unlicense"
] | null | null | null | inp = open('input').read().splitlines()
memory = {}
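# Added note: this implements the "version 2 decoder" of AoC 2020 day 14
# part 2: mask '1' bits force the address bit to 1 (or_mask), 'X' bits are
# floating (cleared via and_mask, then every subset of the X positions is
# added back via floating_positions) and '0' bits leave the address bit
# unchanged; the value is written to every resulting address.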
for l in inp:
if l.startswith('mask'):
m = l.split('=')[1].strip()
print(m)
and_mask, or_mask = (2**36 - 1, 0)
floating_positions = [0]
for idx, b in enumerate(m[::-1]):
if b == '1':
or_mask += 2**idx # flip bit at position idx to one
elif b == 'X':
and_mask -= 2**idx # flip bith at position idx to zero
new_positions = [item + 2**idx for item in floating_positions]
floating_positions.extend(new_positions)
elif l.startswith('mem'):
addr, value = [int(x.strip()) for x in l.replace('mem[', '').replace(']','').split('=')]
masked_addr = addr & and_mask
masked_addr = masked_addr | or_mask
for fp in floating_positions:
write_addr = masked_addr + fp
memory[write_addr] = value
print(write_addr, value)
print("sum =", sum(memory.values()))
| 35.714286 | 96 | 0.535 | 133 | 1,000 | 3.879699 | 0.37594 | 0.131783 | 0.027132 | 0.046512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017595 | 0.318 | 1,000 | 27 | 97 | 37.037037 | 0.739003 | 0.065 | 0 | 0 | 0 | 0 | 0.027897 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7eacfaa261acc69d776732d45b8e43ce4f06435c | 13,679 | py | Python | toontown/minigame/TwoDEnemy.py | SuperM0use24/TT-CL-Edition | fdad8394f0656ae122b687d603f72afafd220c65 | [
"MIT"
] | null | null | null | toontown/minigame/TwoDEnemy.py | SuperM0use24/TT-CL-Edition | fdad8394f0656ae122b687d603f72afafd220c65 | [
"MIT"
] | 1 | 2021-06-08T17:16:48.000Z | 2021-06-08T17:16:48.000Z | toontown/minigame/TwoDEnemy.py | SuperM0use24/TT-CL-Edition | fdad8394f0656ae122b687d603f72afafd220c65 | [
"MIT"
] | 3 | 2021-06-03T05:36:36.000Z | 2021-06-22T15:07:31.000Z | from panda3d.core import *
from direct.directnotify import DirectNotifyGlobal
from direct.showbase.DirectObject import DirectObject
from direct.showbase import PythonUtil
from direct.interval.IntervalGlobal import *
from toontown.minigame import ToonBlitzGlobals
from toontown.toonbase import ToontownGlobals
from toontown.suit import Suit
from toontown.suit import SuitDNA
from toontown.battle.BattleProps import *
from toontown.battle import MovieUtil
from toontown.battle import BattleParticles, BattleProps
from direct.particles import ParticleEffect
import math
COLOR_RED = VBase4(1, 0, 0, 0.3)
class TwoDEnemy(DirectObject):
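    # Added summary comment: a moving Cog obstacle for the 2-D minigame. The
    # suit either walks left/right or hovers up/down on a propeller (chosen in
    # setupEnemy from the start/end positions), carries a collision sphere that
    # sends an 'enemyHit' event when touched, and doShotTrack / doDeathTrack
    # play the hit flash and the gear-explosion death sequence.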
notify = DirectNotifyGlobal.directNotify.newCategory('TwoDEnemy')
def __init__(self, enemyMgr, index, suitAttribs):
self.enemyMgr = enemyMgr
self.game = self.enemyMgr.section.sectionMgr.game
self.index = index
self.moveIval = None
self.propTrack = None
self.animTrack = None
self.shotTrack = None
self.deathTrack = None
self.deathSuit = None
self.suitSound = None
self.deleteMeCallback = None
self.isMovingUpDown = False
self.isMovingLeftRight = False
self.showCollSpheres = False
self.isDestroyed = False
self.isGoingUp = False
self.setupEnemy(suitAttribs)
BattleParticles.loadParticles()
return
def destroy(self):
if self.isDestroyed:
return
self.isDestroyed = True
if hasattr(self.suit, 'prop') and self.suit.prop:
self.suit.prop.stash()
if self.propTrack:
self.propTrack.finish()
self.propTrack = None
if self.suitSound:
self.suitSound.stop()
del self.suitSound
if self.animTrack:
self.animTrack.finish()
self.animTrack = None
if self.shotTrack != None:
self.shotTrack.finish()
self.shotTrack = None
if self.deathTrack != None:
self.deathTrack.finish()
self.deathTrack = None
if self.deathSuit:
self.deathSuit.detachNode()
self.suit.cleanupLoseActor()
self.deathSuit = None
if self.moveIval:
self.moveIval.pause()
del self.moveIval
if self.suit:
self.suit.delete()
self.suit = None
BattleParticles.unloadParticles()
self.ignore(self.game.uniqueName('enter' + self.suitName))
self.game = None
self.enemyMgr = None
return
def setupEnemy(self, suitAttribs):
suitType = suitAttribs[0]
self.suit = Suit.Suit()
suitDNA = SuitDNA.SuitDNA()
suitDNA.newSuit(suitType)
self.suit.setDNA(suitDNA)
self.suit.pose('walk', 0)
self.suitName = 'Enemy-%s' % self.index
self.suit.setName(self.suitName)
suitPosAttribs = suitAttribs[1]
initX, initY, initZ = suitPosAttribs[0]
initPos = Point3(initX, initY, initZ)
if len(suitPosAttribs) == 3:
finalX, finalY, finalZ = suitPosAttribs[1]
finalPos = Point3(finalX, finalY, finalZ)
posIvalDuration = suitPosAttribs[2]
self.clearMoveIval()
def getForwardIval(blendTypeStr, self = self):
forwardIval = LerpPosInterval(self.suit, posIvalDuration, pos=finalPos, startPos=initPos, name='%s-moveFront' % self.suitName, blendType=blendTypeStr, fluid=1)
return forwardIval
def getBackwardIval(blendTypeStr, self = self):
backwardIval = LerpPosInterval(self.suit, posIvalDuration, pos=initPos, startPos=finalPos, name='%s-moveBack' % self.suitName, blendType=blendTypeStr, fluid=1)
return backwardIval
if abs(finalZ - initZ) > 0.0:
def setIsGoingUp(value):
self.isGoingUp = value
self.isMovingUpDown = True
self.suit.setH(90)
self.suit.prop = None
if self.suit.prop == None:
self.suit.prop = BattleProps.globalPropPool.getProp('propeller')
self.suit.prop.setScale(1.1)
self.suit.prop.setColor(1, 1, 0.6, 1)
head = self.suit.find('**/joint_head')
self.suit.prop.reparentTo(head)
self.propTrack = Sequence(ActorInterval(self.suit.prop, 'propeller', startFrame=8, endFrame=25, playRate=2.0))
self.animTrack = Sequence(ActorInterval(self.suit, 'landing', startFrame=8, endFrame=28, playRate=0.5), ActorInterval(self.suit, 'landing', startFrame=8, endFrame=28, playRate=-0.5))
self.moveIval = Sequence(Func(setIsGoingUp, True), getForwardIval('easeInOut'), Func(setIsGoingUp, False), getBackwardIval('easeInOut'))
self.suitSound = base.loader.loadSfx('phase_4/audio/sfx/TB_propeller.ogg')
else:
self.isMovingLeftRight = True
self.moveIval = Sequence(Func(self.setHeading, finalPos, initPos), getForwardIval('noBlend'), Func(self.setHeading, initPos, finalPos), getBackwardIval('noBlend'))
self.suit.setPos(initX, initY, initZ)
self.suit.dropShadow.hide()
self.setupCollision()
return
def setupCollision(self):
collSphere = CollisionSphere(0, 0, 2, 2)
collSphere.setTangible(1)
collNode = CollisionNode(self.game.uniqueName(self.suitName))
collNode.setIntoCollideMask(ToontownGlobals.WallBitmask)
collNode.addSolid(collSphere)
self.collNodePath = self.suit.attachNewNode(collNode)
self.collNodePath.hide()
if self.showCollSpheres:
self.collNodePath.show()
self.accept(self.game.uniqueName('enter' + self.suitName), self.handleEnemyCollision)
def clearMoveIval(self):
if self.moveIval:
self.moveIval.pause()
del self.moveIval
self.moveIval = None
return
def start(self, elapsedTime):
if self.moveIval:
self.moveIval.loop()
self.moveIval.setT(elapsedTime)
if self.isMovingLeftRight:
self.suit.loop('walk')
elif self.isMovingUpDown:
self.propTrack.loop()
self.animTrack.loop()
base.playSfx(self.suitSound, node=self.suit, looping=1)
def enterPause(self):
if hasattr(self, 'moveIval') and self.moveIval:
self.moveIval.pause()
self.suit.loop('neutral')
if self.suitSound:
self.suitSound.stop()
def exitPause(self):
if hasattr(self, 'moveIval') and self.moveIval:
self.moveIval.resume()
if self.isMovingLeftRight:
self.suit.loop('walk')
elif self.isMovingUpDown:
self.propTrack.loop()
self.animTrack.loop()
base.playSfx(self.suitSound, node=self.suit, looping=1, volume=0.1)
def handleEnemyCollision(self, cevent):
messenger.send('enemyHit')
def setHeading(self, finalPos, initPos):
diffX = finalPos.getX() - initPos.getX()
angle = -90 * diffX / math.fabs(diffX)
startAngle = self.suit.getH()
startAngle = PythonUtil.fitSrcAngle2Dest(startAngle, angle)
dur = 0.1 * abs(startAngle - angle) / 90
self.suitTurnIval = LerpHprInterval(self.suit, dur, Point3(angle, 0, 0), startHpr=Point3(startAngle, 0, 0), name='SuitLerpHpr')
self.suitTurnIval.start()
def blinkColor(self, color, duration):
blink = Sequence(LerpColorScaleInterval(self.suit, 0.5, color, startColorScale=VBase4(1, 1, 1, 1)), LerpColorScaleInterval(self.suit, 0.5, VBase4(1, 1, 1, 1), startColorScale=color))
track = Sequence(Func(blink.loop), Wait(duration), Func(blink.finish))
return track
def doShotTrack(self):
blinkRed = self.blinkColor(COLOR_RED, 2)
point = Point3(self.suit.getX(render), self.suit.getY(render), self.suit.getZ(render) + self.suit.height / 2.0)
scale = 0.3
splashHold = 0.1
def prepSplash(splash, point):
if callable(point):
point = point()
splash.reparentTo(render)
splash.setPos(point)
scale = splash.getScale()
splash.setBillboardPointWorld()
splash.setScale(scale)
splash = globalPropPool.getProp('splash-from-splat')
splash.setScale(scale)
splashTrack = Sequence(Func(prepSplash, splash, point), ActorInterval(splash, 'splash-from-splat'), Wait(splashHold), Func(MovieUtil.removeProp, splash))
self.shotTrack = Parallel(Func(self.game.assetMgr.playSplashSound), blinkRed, splashTrack)
self.shotTrack.start()
def doDeathTrack(self):
def removeDeathSuit(suit, deathSuit):
if not deathSuit.isEmpty():
deathSuit.detachNode()
suit.cleanupLoseActor()
if self.suitSound:
self.suitSound.stop()
self.deathSuit = self.suit.getLoseActor()
self.deathSuit.reparentTo(self.enemyMgr.enemiesNP)
self.deathSuit.setPos(render, self.suit.getPos(render))
self.deathSuit.setHpr(render, self.suit.getHpr(render))
self.suit.hide()
self.collNodePath.reparentTo(self.deathSuit)
treasureSpawnPoint = Point3(self.suit.getX(), self.suit.getY(), self.suit.getZ() + self.suit.height / 2.0)
gearPoint = Point3(0, 0, self.suit.height / 2.0 + 2.0)
spinningSound = base.loader.loadSfx('phase_3.5/audio/sfx/Cog_Death.ogg')
deathSound = base.loader.loadSfx('phase_3.5/audio/sfx/ENC_cogfall_apart.ogg')
smallGears = BattleParticles.createParticleEffect(file='gearExplosionSmall')
singleGear = BattleParticles.createParticleEffect('GearExplosion', numParticles=1)
smallGearExplosion = BattleParticles.createParticleEffect('GearExplosion', numParticles=10)
bigGearExplosion = BattleParticles.createParticleEffect('BigGearExplosion', numParticles=30)
smallGears.setPos(gearPoint)
singleGear.setPos(gearPoint)
smallGearExplosion.setPos(gearPoint)
bigGearExplosion.setPos(gearPoint)
smallGears.setDepthWrite(False)
singleGear.setDepthWrite(False)
smallGearExplosion.setDepthWrite(False)
bigGearExplosion.setDepthWrite(False)
if self.isMovingLeftRight:
self.enterPause()
suitTrack = Sequence(Func(self.collNodePath.stash), ActorInterval(self.deathSuit, 'lose', startFrame=80, endFrame=140), Func(removeDeathSuit, self.suit, self.deathSuit, name='remove-death-suit'))
explosionTrack = Sequence(Wait(1.5), MovieUtil.createKapowExplosionTrack(self.deathSuit, explosionPoint=gearPoint))
soundTrack = Sequence(SoundInterval(spinningSound, duration=1.6, startTime=0.6, volume=0.8, node=self.deathSuit), SoundInterval(deathSound, volume=0.32, node=self.deathSuit))
gears1Track = Sequence(ParticleInterval(smallGears, self.deathSuit, worldRelative=0, duration=4.3, cleanup=True), name='gears1Track')
gears2MTrack = Track((0.0, explosionTrack), (0.7, ParticleInterval(singleGear, self.deathSuit, worldRelative=0, duration=5.7, cleanup=True)), (5.2, ParticleInterval(smallGearExplosion, self.deathSuit, worldRelative=0, duration=1.2, cleanup=True)), (5.4, ParticleInterval(bigGearExplosion, self.deathSuit, worldRelative=0, duration=1.0, cleanup=True)), name='gears2MTrack')
elif self.isMovingUpDown:
def getFinalPos():
if self.isGoingUp:
direction = 1.0
else:
direction = -1.0
pos = Point3(self.deathSuit.getX(), self.deathSuit.getY(), self.deathSuit.getZ() + 2.0 * direction)
return pos
deathMoveIval = LerpPosInterval(self.deathSuit, 1.5, pos=getFinalPos(), name='%s-deathSuitMove' % self.suitName, blendType='easeInOut', fluid=1)
suitTrack = Sequence(Func(self.collNodePath.stash), Parallel(ActorInterval(self.deathSuit, 'lose', startFrame=80, endFrame=140), deathMoveIval), Func(removeDeathSuit, self.suit, self.deathSuit, name='remove-death-suit'))
explosionTrack = Sequence(Wait(1.5), MovieUtil.createKapowExplosionTrack(self.deathSuit, explosionPoint=gearPoint))
soundTrack = Sequence(SoundInterval(spinningSound, duration=1.6, startTime=0.6, volume=0.8, node=self.deathSuit), SoundInterval(deathSound, volume=0.32, node=self.deathSuit))
gears1Track = Sequence(ParticleInterval(smallGears, self.deathSuit, worldRelative=0, duration=4.3, cleanup=True), name='gears1Track')
gears2MTrack = Track((0.0, explosionTrack), (0.0, ParticleInterval(singleGear, self.deathSuit, worldRelative=0, duration=5.7, cleanup=True)), (2.7, ParticleInterval(smallGearExplosion, self.deathSuit, worldRelative=0, duration=1.2, cleanup=True)), (2.9, ParticleInterval(bigGearExplosion, self.deathSuit, worldRelative=0, duration=1.0, cleanup=True)), name='gears2MTrack')
def removeParticle(particle):
if particle and hasattr(particle, 'renderParent'):
particle.cleanup()
del particle
removeParticles = Parallel(Func(removeParticle, smallGears), Func(removeParticle, singleGear), Func(removeParticle, smallGearExplosion), Func(removeParticle, bigGearExplosion))
self.deathTrack = Sequence(Parallel(suitTrack, gears2MTrack, gears1Track, soundTrack), removeParticles, Func(self.destroy))
self.deathTrack.start()
| 49.205036 | 384 | 0.652899 | 1,404 | 13,679 | 6.35114 | 0.203704 | 0.045755 | 0.013457 | 0.024223 | 0.306718 | 0.279242 | 0.259056 | 0.240215 | 0.221151 | 0.210833 | 0 | 0.019174 | 0.237444 | 13,679 | 277 | 385 | 49.382671 | 0.835682 | 0 | 0 | 0.22 | 0 | 0 | 0.03743 | 0.007895 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.056 | 0 | 0.18 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7ead2c9fa5dae65596722dadcde985032b23d388 | 6,201 | py | Python | cashews/key.py | AIGeneratedUsername/cashews | c4af2807053956f75662966a23e0af024c1a64a9 | [
"MIT"
] | null | null | null | cashews/key.py | AIGeneratedUsername/cashews | c4af2807053956f75662966a23e0af024c1a64a9 | [
"MIT"
] | null | null | null | cashews/key.py | AIGeneratedUsername/cashews | c4af2807053956f75662966a23e0af024c1a64a9 | [
"MIT"
] | null | null | null | import inspect
from datetime import timedelta
from functools import lru_cache
from typing import Any, Callable, Container, Dict, Optional, Tuple, Union
from ._typing import TTL
from .formatter import _ReplaceFormatter, default_formatter, template_to_pattern
_KWARGS = "__kwargs__"
_ARGS = "__args__"
_ARGS_KWARGS = (_ARGS, _KWARGS)
class WrongKeyException(ValueError):
"""Raised If key template have wrong parameter"""
def ttl_to_seconds(ttl: Union[float, None, TTL]) -> Union[int, None, float]:
timeout = ttl() if callable(ttl) else ttl
if isinstance(timeout, timedelta):
return timeout.total_seconds()
if isinstance(timeout, str):
return _ttl_from_str(timeout)
return timeout
_STR_TO_DELTA = {
"h": timedelta(hours=1),
"m": timedelta(minutes=1),
"s": timedelta(seconds=1),
"d": timedelta(days=1),
}
def _ttl_from_str(ttl: str) -> int:
result = 0
mul = ""
for char in ttl.strip().lower():
if char.isdigit():
mul += char
elif char in _STR_TO_DELTA:
result += int(mul) * _STR_TO_DELTA[char].total_seconds()
mul = ""
else:
raise ValueError(f"ttl '{ttl}' has wrong string representation")
if mul != "" and not result:
return int(mul)
return result
def get_cache_key(
func: Callable,
template: Optional[str] = None,
args: Tuple[Any] = (),
kwargs: Optional[Dict] = None,
) -> str:
"""
Get cache key name for function (:param func) called with args and kwargs
if func_args is passed key build with parameters are included in func_args dict or tuple otherwise use all of them
Used function module and name as prefix if key parameter not passed
:param func: Target function
:param args: call positional arguments
:param kwargs: call keyword arguments
:return: cache key for call
"""
kwargs = kwargs or {}
key_values = _get_call_values(func, args, kwargs)
_key_template = template or get_cache_key_template(func)
return template_to_pattern(_key_template, _formatter=default_formatter, **key_values)
def get_func_params(func):
signature = _get_func_signature(func)
for param_name, param in signature.parameters.items():
if param.kind == inspect.Parameter.VAR_KEYWORD:
yield _KWARGS
elif param.kind == inspect.Parameter.VAR_POSITIONAL:
yield _ARGS
else:
yield param_name
def get_cache_key_template(
func: Callable, key: Optional[str] = None, prefix: str = "", exclude_parameters: Container = ()
) -> str:
"""
    Get the cache key template for :param func:.
    If the key parameter is not passed, a template is generated from the
    function's signature, using the function's module and name as the prefix.
:param func: Target function
:param key: template for key, may contain alias to args or kwargs passed to a call
:param exclude_parameters: array of args and kwargs names to exclude from key template (if key parameter not passed)
:return: cache key template
"""
if key is None:
key = generate_key_template(func, exclude_parameters)
else:
_check_key_params(key, list(get_func_params(func)))
if prefix:
key = f"{prefix}:{key}"
return key
def generate_key_template(func: Callable, exclude_parameters: Container = ()):
"""
    Generate a key template from the signature of :param func:, using the
    function's module and name as the prefix and one placeholder per parameter.
:param func: Target function
:param exclude_parameters: array of args and kwargs names to exclude from key template (if key parameter not passed)
:return: cache key template
"""
func_params = list(get_func_params(func))
key_template = f"{func.__module__}:{func.__name__}"
if func_params and func_params[0] == "self":
key_template = f"{func.__module__}:{func.__qualname__}"
for param_name in func_params:
if param_name in exclude_parameters:
continue
if param_name in _ARGS_KWARGS:
key_template += f":{{{param_name}}}"
else:
key_template += f":{param_name}:{{{param_name}}}"
return key_template
class _Star:
_STAR = "*"
def __getattr__(self, item):
return self._STAR
def __getitem__(self, item):
return self._STAR
def _check_key_params(key, func_params):
func_params = {param: _Star() for param in func_params}
errors = []
def _default(name):
errors.append(name)
return "*"
check = _ReplaceFormatter(default=_default)
check.format(key, **func_params)
if errors:
raise WrongKeyException(f"Wrong parameter placeholder '{errors}' in the key ")
def get_call_values(func: Callable, args, kwargs) -> Dict:
"""
Return dict with arguments and their values for function call with given positional and keywords arguments
:param func: Target function
:param args: call positional arguments
:param kwargs: call keyword arguments
"""
key_values = {}
for _key, _value in _get_call_values(func, args, kwargs).items():
if _key not in _ARGS_KWARGS:
key_values[_key] = _value
return key_values
@lru_cache(maxsize=100)
def _get_func_signature(func):
return inspect.signature(func)
def _get_call_values(func, args, kwargs):
signature = _get_func_signature(func).bind(*args, **kwargs)
signature.apply_defaults()
result = {}
for _name, _value in signature.arguments.items():
parameter: inspect.Parameter = signature.signature.parameters[_name]
if parameter.kind == inspect.Parameter.VAR_KEYWORD:
result[_KWARGS] = _value
result.update(_value)
elif parameter.kind == inspect.Parameter.VAR_POSITIONAL:
result[_ARGS] = _value
else:
result[_name] = _value
return result
def noself(decor_func):
def _decor(*args, **kwargs):
def outer(method):
if "key" not in kwargs:
kwargs["key"] = get_cache_key_template(method, exclude_parameters={"self"})
return decor_func(*args, **kwargs)(method)
return outer
return _decor
| 31.8 | 120 | 0.670537 | 797 | 6,201 | 4.976161 | 0.178168 | 0.049924 | 0.016641 | 0.021432 | 0.336863 | 0.244831 | 0.198689 | 0.198689 | 0.198689 | 0.198689 | 0 | 0.001901 | 0.236413 | 6,201 | 194 | 121 | 31.963918 | 0.835692 | 0.219481 | 0 | 0.103175 | 0 | 0 | 0.055721 | 0.021268 | 0 | 0 | 0 | 0 | 0 | 1 | 0.126984 | false | 0 | 0.047619 | 0.02381 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7ead609ce2b53c27af33efd9dab1034b5b14ab4e | 7,813 | py | Python | lightnn/examples/insuranceqa.py | tongluocq/lightnn | 602b0742d1141efc73a7146c930c5ea9eb994d37 | [
"Apache-2.0"
] | 131 | 2017-04-05T06:03:25.000Z | 2021-05-20T03:05:36.000Z | ch4/lightnn/lightnn/examples/insuranceqa.py | helloqorld/book-of-qna-code | 54950478fb28d15cd73dae4dc39f3cd783721e08 | [
"Apache-2.0"
] | 27 | 2018-11-26T07:39:25.000Z | 2022-02-09T23:44:53.000Z | ch4/lightnn/lightnn/examples/insuranceqa.py | helloqorld/book-of-qna-code | 54950478fb28d15cd73dae4dc39f3cd783721e08 | [
"Apache-2.0"
] | 62 | 2018-11-26T07:44:02.000Z | 2022-01-13T08:31:00.000Z | # -*- coding: utf-8 -*-
#===============================================================================
#
# Copyright (c) 2017 Hai Liang Wang<hailiang.hl.wang@gmail.com> All Rights Reserved
#
#
# File: /Users/hain/ai/InsuranceQA-Machine-Learning/deep_qa_1/network.py
# Author: Hai Liang Wang
# Date: 2017-08-08:18:32:05
#
#===============================================================================
"""
A Simple Network to learning QA.
"""
from __future__ import print_function
from __future__ import division
__copyright__ = "Copyright (c) 2017 Hai Liang Wang. All Rights Reserved"
__author__ = "Hai Liang Wang"
__date__ = "2017-08-08:18:32:05"
import os
import sys
curdir = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.dirname(curdir))
if sys.version_info[0] < 3:
reload(sys)
sys.setdefaultencoding("utf-8")
# raise "Must be using Python 3"
import random
import insuranceqa_data as insuranceqa
data_dir = './insurance_data'
_train_data = insuranceqa.load_pairs_train(data_dir)
_test_data = insuranceqa.load_pairs_test(data_dir)
_valid_data = insuranceqa.load_pairs_valid(data_dir)
'''
build vocab data with additional placeholder tokens
'''
vocab_data = insuranceqa.load_pairs_vocab(data_dir)
print("keys", vocab_data.keys())
vocab_size = len(vocab_data['word2id'].keys())
VOCAB_PAD_ID = vocab_size+1
VOCAB_GO_ID = vocab_size+2
vocab_data['word2id']['<PAD>'] = VOCAB_PAD_ID
vocab_data['word2id']['<GO>'] = VOCAB_GO_ID
vocab_data['id2word'][VOCAB_PAD_ID] = '<PAD>'
vocab_data['id2word'][VOCAB_GO_ID] = '<GO>'
def _get_corpus_metrics():
    '''
    report max and average lengths of questions and utterances per data split
    '''
for cat, data in zip(["valid", "test", "train"], [_valid_data, _test_data, _train_data]):
max_len_question = 0
total_len_question = 0
max_len_utterance = 0
total_len_utterance = 0
for x in data:
total_len_question += len(x['question'])
total_len_utterance += len(x['utterance'])
if len(x['question']) > max_len_question:
max_len_question = len(x['question'])
if len(x['utterance']) > max_len_utterance:
max_len_utterance = len(x['utterance'])
print('max len of %s question : %d, average: %d' % (cat, max_len_question, total_len_question/len(data)))
print('max len of %s utterance: %d, average: %d' % (cat, max_len_utterance, total_len_utterance/len(data)))
# max length of answers
class BatchIter():
'''
Load data with mini-batch
'''
def __init__(self, data = None, batch_size = 100):
assert data is not None, "data should not be None."
self.batch_size = batch_size
self.data = data
def next(self):
random.shuffle(self.data)
index = 0
total_num = len(self.data)
while index <= total_num:
yield self.data[index:index + self.batch_size]
index += self.batch_size
def padding(lis, pad, size):
    '''
    right-pad a list with `pad` up to `size`, or truncate it to `size`
    '''
if size > len(lis):
lis += [pad] * (size - len(lis))
else:
lis = lis[0:size]
return lis
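# Example: padding([1, 2], 0, 4) -> [1, 2, 0, 0]; padding([1, 2, 3, 4, 5], 0, 4) -> [1, 2, 3, 4]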
def pack_question_n_utterance(q, u, q_length = 20, u_length = 99):
'''
combine question and utterance as input data for feed-forward network
'''
assert len(q) > 0 and len(u) > 0, "question and utterance must not be empty"
q = padding(q, VOCAB_PAD_ID, q_length)
u = padding(u, VOCAB_PAD_ID, u_length)
assert len(q) == q_length, "question should be pad to q_length"
assert len(u) == u_length, "utterance should be pad to u_length"
return q + [VOCAB_GO_ID] + u
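# With the default lengths the packed sample is 20 + 1 + 99 = 120 ids long:
# the padded question, the <GO> separator, then the padded utterance.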
def __resolve_input_data(data, batch_size, question_max_length = 20, utterance_max_length = 99):
'''
resolve input data
'''
batch_iter = BatchIter(data = data, batch_size = batch_size)
for mini_batch in batch_iter.next():
result = []
for o in mini_batch:
x = pack_question_n_utterance(o['question'], o['utterance'], question_max_length, utterance_max_length)
y_ = o['label']
            assert len(x) == utterance_max_length + question_max_length + 1, "Wrong length after padding"
assert VOCAB_GO_ID in x, "<GO> must be in input x"
assert len(y_) == 2, "desired output."
result.append([x, y_])
if len(result) > 0:
# print('data in batch:%d' % len(mini_batch))
yield result
        else:
            return  # PEP 479: returning simply ends the generator
# export data
def load_train(batch_size = 100, question_max_length = 20, utterance_max_length = 99):
'''
load train data
'''
result = []
for o in _train_data:
x = pack_question_n_utterance(o['question'], o['utterance'], question_max_length, utterance_max_length)
y_ = o['label']
        assert len(x) == utterance_max_length + question_max_length + 1, "Wrong length after padding"
assert VOCAB_GO_ID in x, "<GO> must be in input x"
assert len(y_) == 2, "desired output."
result.append((x, y_))
return result
# return __resolve_input_data(_train_data, batch_size, question_max_length, utterance_max_length)
def load_test(question_max_length = 20, utterance_max_length = 99):
'''
load test data
'''
result = []
for o in _test_data:
x = pack_question_n_utterance(o['question'], o['utterance'], question_max_length, utterance_max_length)
y_ = o['label']
        assert len(x) == utterance_max_length + question_max_length + 1, "Wrong length after padding"
assert VOCAB_GO_ID in x, "<GO> must be in input x"
assert len(y_) == 2, "desired output."
result.append((x, y_))
return result
def load_valid(batch_size = 100, question_max_length = 20, utterance_max_length = 99):
'''
load valid data
'''
result = []
for o in _valid_data:
x = pack_question_n_utterance(o['question'], o['utterance'], question_max_length, utterance_max_length)
y_ = o['label']
        assert len(x) == utterance_max_length + question_max_length + 1, "Wrong length after padding"
assert VOCAB_GO_ID in x, "<GO> must be in input x"
assert len(y_) == 2, "desired output."
result.append((x, y_))
return result
# return __resolve_input_data(_valid_data, batch_size, question_max_length, utterance_max_length)
def test_batch():
'''
retrieve data with mini batch
'''
for mini_batch in load_test():
x, y_ = mini_batch
print("length", len(x))
assert len(y_) == 2, "data size should be 2"
print("VOCAB_PAD_ID", VOCAB_PAD_ID)
print("VOCAB_GO_ID", VOCAB_GO_ID)
def main():
import lightnn
from lightnn.models import Model
from lightnn.layers import Dense, Input
from lightnn.base import optimizers
import numpy as np
from sklearn.metrics import confusion_matrix
batch_size = 50
lr = 1e-4
question_max_length = 20
utterance_max_length = 99
train_X, train_y = zip(*load_train())
valid_X, valid_y = zip(*load_valid())
test_X, test_y = zip(*load_test())
input = Input(input_shape=question_max_length + utterance_max_length + 1)
d1 = Dense(100, activator='sigmoid')(input)
d2 = Dense(50, activator='sigmoid')(d1)
out = Dense(2, activator='softmax')(d2)
model = Model(input, out)
optimizer = optimizers.SGD(lr=lr)
model.compile('CCE', optimizer=optimizer)
model.fit(train_X, train_y, verbose=2, batch_size=batch_size, epochs=10,
validation_data=[valid_X, valid_y])
test_pred = model.predict(test_X)
print(confusion_matrix(np.argmax(test_y, axis=-1), np.argmax(test_pred, axis=-1)))
print(np.mean(np.equal(np.argmax(test_pred, axis=-1), np.argmax(test_y, axis=-1))))
if __name__ == '__main__':
# test_batch()
main() | 34.117904 | 115 | 0.639319 | 1,101 | 7,813 | 4.247956 | 0.179837 | 0.065427 | 0.058157 | 0.038914 | 0.382938 | 0.343596 | 0.297841 | 0.297841 | 0.281163 | 0.271969 | 0 | 0.020398 | 0.221938 | 7,813 | 229 | 116 | 34.117904 | 0.748972 | 0.124408 | 0 | 0.217687 | 0 | 0 | 0.130448 | 0 | 0 | 0 | 0 | 0 | 0.115646 | 1 | 0.07483 | false | 0 | 0.081633 | 0 | 0.197279 | 0.061224 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7eadc12b795916af41fccd01cdc01b58a5e8daf6 | 1,350 | py | Python | src/utilfuntions.py | Rishikesh-kumar-7258/Block_breaker | 7183f5c8732f5a60e909a5d30436614046fb76b2 | [
"MIT"
] | 20 | 2021-08-30T10:55:34.000Z | 2021-08-30T10:57:51.000Z | src/utilfuntions.py | Rishikesh-kumar-7258/Block_breaker | 7183f5c8732f5a60e909a5d30436614046fb76b2 | [
"MIT"
] | 7 | 2021-08-22T09:00:39.000Z | 2021-11-05T09:33:46.000Z | src/utilfuntions.py | Rishikesh-kumar-7258/Block_breaker | 7183f5c8732f5a60e909a5d30436614046fb76b2 | [
"MIT"
] | 2 | 2021-08-25T07:22:24.000Z | 2021-09-03T02:42:01.000Z | import pygame
from src.objects import PARTICLE
# displaying text on the screen
def Write(screen, text, x, y, size, color, center=False) -> None:
""" This function will render text on screen"""
font = pygame.font.SysFont(None, size)
text = font.render(text, False, color)
rect = text.get_rect()
if (center):
rect.center = (x, y)
else :
rect.x = x
rect.y = y
screen.blit(text, rect)
def update_score(player_name, player_score):
    with open("scores.csv", 'a') as file:
        file.write(player_name + ',' + str(player_score) + '\n')
def get_score():
file = open("scores.csv", 'r')
scores = {}
i = 0
for line in file:
i += 1
if i == 1:
continue
line = line.strip()
player, score = line.split(',')
scores[player] = int(score)
file.close()
return sorted(scores.items(), key=lambda x: x[1], reverse=True)
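# Note: scores.csv is expected to start with a header line (skipped above) followed
# by 'name,score' rows; the result is a list of (name, score) pairs, highest score first.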
def collision(obj1 : dict, obj2 : dict) -> bool:
""" detects collision between 2 objects and returns some properties in list """
if (obj1['x'] + obj1['width'] /2 < obj2['x'] - obj2['width'] / 2 or
obj1['x'] - obj1['width'] / 2 > obj2['x'] + obj2['width'] / 2 or
obj1['y'] + obj1['height'] < obj2['y'] or
obj1['y'] > obj2['y'] + obj2['height']):
return False
return True | 27.55102 | 83 | 0.560741 | 186 | 1,350 | 4.032258 | 0.413978 | 0.032 | 0.034667 | 0.050667 | 0.149333 | 0.090667 | 0.090667 | 0.090667 | 0.090667 | 0.090667 | 0 | 0.025562 | 0.275556 | 1,350 | 49 | 84 | 27.55102 | 0.741309 | 0.106667 | 0 | 0 | 0 | 0 | 0.055276 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114286 | false | 0 | 0.057143 | 0 | 0.257143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7eae06802f11a910f0e6509a21d3467b1fe56cb4 | 2,557 | py | Python | oy/models/mixins/classifiable.py | mush42/oy-cms | 66f2490be7eab9a692a68bb635099ba21d5944ae | [
"MIT"
] | 5 | 2019-02-12T08:54:46.000Z | 2021-03-15T09:22:44.000Z | oy/models/mixins/classifiable.py | mush42/oy-cms | 66f2490be7eab9a692a68bb635099ba21d5944ae | [
"MIT"
] | 2 | 2020-04-30T01:27:08.000Z | 2020-07-16T18:04:16.000Z | oy/models/mixins/classifiable.py | mush42/oy-cms | 66f2490be7eab9a692a68bb635099ba21d5944ae | [
"MIT"
] | 3 | 2019-10-16T05:53:31.000Z | 2021-10-11T09:37:16.000Z | # -*- coding: utf-8 -*-
"""
oy.models.mixins.classifiable
~~~~~~~~~~
Provides mixin classes for classifying content,
namely for adding tags and categories.
:copyright: (c) 2018 by Musharraf Omer.
:license: MIT, see LICENSE for more details.
"""
from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy.ext.associationproxy import association_proxy
from oy.boot.sqla import db
from oy.helpers import _prepare_association_table
from .slugged import Titled, Slugged
class ClassifierMixin(Titled, Slugged):
"""Provides basic fields for a tag or a category item."""
class Tagged:
"""Provides a generic many-to-many relationship
to a dynamically generated tags table using
the `table-per-related` pattern.
    .. note:: the dynamically generated table is shared by this model
       class and all its subclasses.
"""
@declared_attr
def tags(cls):
if not hasattr(cls, "Tag"):
# Create the Tag model
tag_attrs = {
"id": db.Column(db.Integer, primary_key=True),
"objects": db.relationship(
cls,
secondary=lambda: cls.__tags_association_table__,
backref="related_tags",
),
}
cls.Tag = type(f"{cls.__name__}Tag", (ClassifierMixin, db.Model), tag_attrs)
# The many-to-many association table
cls.__tags_association_table__ = _prepare_association_table(
table_name=f"{cls.__tablename__}s_tags",
remote1=cls.__tablename__,
remote2=cls.Tag.__tablename__,
)
return association_proxy(
"related_tags", "title", creator=lambda t: cls.Tag.get_or_create(title=t)
)
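        # Illustrative effect (assuming a model such as `class Post(Tagged, db.Model)`):
        # post.tags behaves like a list of plain title strings backed by the
        # related_tags relationship, with unknown titles created via Tag.get_or_create.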
class Categorized:
"""Provides a generic Foreign Key relationship to a
dynamically generated category table using
the `table-per-related` pattern.
    .. note:: the dynamically generated table is shared by this model
       class and all its subclasses.
"""
@declared_attr
def category_id(cls):
if not hasattr(cls, "Category"):
category_attrs = {
"id": db.Column(db.Integer, primary_key=True),
"objects": db.relationship(cls, backref="category"),
}
cls.Category = type(
f"{cls.__name__}Category", (ClassifierMixin, db.Model), category_attrs
)
return db.Column(db.Integer, db.ForeignKey(cls.Category.id))
| 33.207792 | 88 | 0.620649 | 292 | 2,557 | 5.243151 | 0.369863 | 0.052253 | 0.019595 | 0.033312 | 0.321359 | 0.252123 | 0.252123 | 0.252123 | 0.252123 | 0.252123 | 0 | 0.003813 | 0.281971 | 2,557 | 76 | 89 | 33.644737 | 0.830065 | 0.315995 | 0 | 0.102564 | 0 | 0 | 0.078645 | 0.028433 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0 | 0.128205 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7eae7b70016f08a7d7f96af5d0e058b6d79d03d3 | 866 | py | Python | graylog/__init__.py | Drinkey/graylog | b0a6442256f85b853a80162b19cf71aef3128fb6 | [
"MIT"
] | null | null | null | graylog/__init__.py | Drinkey/graylog | b0a6442256f85b853a80162b19cf71aef3128fb6 | [
"MIT"
] | null | null | null | graylog/__init__.py | Drinkey/graylog | b0a6442256f85b853a80162b19cf71aef3128fb6 | [
"MIT"
] | null | null | null |
import json
import arrow
from graylog.graylog import Graylog, GraylogQueryParser, GraylogExtractor
from graylog.auth import GraylogAuthenticator, GraylogBasicAuthenticator
from graylog.search import GraylogAbsoluteSearch, GraylogRelativeSearch
from .types import DotDict
class TimeRange:
__fmt__ = 'YYYY-MM-DDTHH:mm:ss'
def _range(self, start, end):
return f"{start.format(self.__fmt__)}.000Z", f"{end.format(self.__fmt__)}.999Z"
def ndays_before(self, days=-1):
s, e = arrow.utcnow().shift(days=days).span('day')
return self._range(s, e)
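    # Example: ndays_before(-1) returns yesterday's UTC day as a pair of strings
    # like ('2024-01-01T00:00:00.000Z', '2024-01-01T23:59:59.999Z') (dates illustrative).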
class Configuration(DotDict):
def __init__(self, file: str):
self._file = file
super().__init__({})
def parse(self):
_data = {}
with open(self._file) as fp:
_data = json.load(fp)
self.init(_data)
return self
| 27.935484 | 87 | 0.667436 | 105 | 866 | 5.238095 | 0.514286 | 0.06 | 0.047273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010355 | 0.2194 | 866 | 30 | 88 | 28.866667 | 0.803254 | 0 | 0 | 0 | 0 | 0 | 0.099422 | 0.073988 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.26087 | 0.043478 | 0.695652 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7eb04a6614bae924abfdb24fce188e346993b5be | 1,094 | py | Python | old/test.py | HoweLab/Treadmill2D | 366eca12dba0949ec26344c3ad1cc2b7687bbffe | [
"MIT"
] | null | null | null | old/test.py | HoweLab/Treadmill2D | 366eca12dba0949ec26344c3ad1cc2b7687bbffe | [
"MIT"
] | null | null | null | old/test.py | HoweLab/Treadmill2D | 366eca12dba0949ec26344c3ad1cc2b7687bbffe | [
"MIT"
] | null | null | null | #!/usr/bin/python
# adapted from mouse_relay_voltage.py on https://github.com/HanLabBU/movement_recording/blob/master/mouse_relay_voltage.py
# note: run this using python 3
import time
from threading import Thread, Lock
import Adafruit_MCP4725 # using the deprecated Adafruit Python MCP4725 library
# initialize I2C buses (X: SDA 2, SCL 3; Y: SDA 17, SCL 27)
dacX = Adafruit_MCP4725.MCP4725(address=0x60,busnum=1)
dacY = Adafruit_MCP4725.MCP4725(address=0x60,busnum=3)
print('running')
t = .01
t1 = time.time()
while 1:
while time.time()-t1<t:
dxout = int(4096*0)
dyout = int(4096*1)
dacX.set_voltage(dxout)
dacY.set_voltage(dyout)
t1 = time.time()
while time.time()-t1<t:
dxout = int(4096*.5)
dyout = int(4096*.5)
dacX.set_voltage(dxout)
dacY.set_voltage(dyout)
t1 = time.time()
while time.time()-t1<t:
dxout = int(4096*1)
dyout = int(4096*0)
dacX.set_voltage(dxout)
dacY.set_voltage(dyout)
t1 = time.time()
while time.time()-t1<t:
dxout = int(4096*.5)
dyout = int(4096*.5)
dacX.set_voltage(dxout)
dacY.set_voltage(dyout)
t1 = time.time()
| 24.311111 | 122 | 0.706581 | 179 | 1,094 | 4.22905 | 0.351955 | 0.095112 | 0.06605 | 0.07926 | 0.541612 | 0.541612 | 0.438573 | 0.438573 | 0.401585 | 0.401585 | 0 | 0.099352 | 0.153565 | 1,094 | 44 | 123 | 24.863636 | 0.718143 | 0.252285 | 0 | 0.636364 | 0 | 0 | 0.008621 | 0 | 0 | 0 | 0.009852 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7eb285fca6a3ee35b8013e356e6c5c28958bdfba | 7,943 | py | Python | dpmeans/dpmeans.py | michaelhabeck/DP-means | e9f5ebabcadb48b4aeae7e3002acbaf374da3346 | [
"MIT"
] | 2 | 2019-10-20T14:54:56.000Z | 2021-04-20T07:48:52.000Z | dpmeans/dpmeans.py | michaelhabeck/DP-means | e9f5ebabcadb48b4aeae7e3002acbaf374da3346 | [
"MIT"
] | null | null | null | dpmeans/dpmeans.py | michaelhabeck/DP-means | e9f5ebabcadb48b4aeae7e3002acbaf374da3346 | [
"MIT"
] | null | null | null | """
DP-means clustering
"""
import numpy as np
from .clustering import Clustering
from scipy import ndimage
from scipy.spatial import cKDTree
from sklearn.neighbors import BallTree
class DPMeans(object):
batch_size = 1000
eps = 1e-100
@property
def cutoff(self):
"""Distance cutoff. """
return self._cutoff
@cutoff.setter
def cutoff(self, value):
"""Set distance cutoff. """
if not value > 0.:
msg = 'Distance cutoff (aka cluster penalty) must be positive'
raise ValueError(msg)
self._cutoff = float(value)
self._penalty = self._cutoff**2
@property
def penalty(self):
"""Lambda parameter. """
return self._penalty
@property
def centers(self):
"""Cluster centers. """
return self._centers[:self.clusters.k]
def __init__(self, data, cutoff, weights=None):
"""
Arguments
---------
data : rank-2 numpy array
Data matrix where each row corresponds to a multi-dimensional data
vector.
cutoff : positive float
Distance cutoff or cluster penalty parameter.
weights : rank-1 numpy array
Optional weights associated with each data point.
"""
if data.ndim != 2:
msg = 'Data matrix must be a rank-2 array'
raise ValueError(msg)
weights = np.ones(len(data)) if weights is None else weights
if weights.ndim != 1 or len(weights) != len(data):
msg = 'Data weights must be a rank-1 array of length {0}'
raise ValueError(msg.format(len(data)))
self.data = data
self.weights = weights
self.cutoff = cutoff
self.clusters = Clustering(len(self.data))
self._centers = None
self._add_new_batch()
self._centers[0] = np.mean(self.data, 0)
def __str__(self):
return '{0}(n_clusters={1})'.format(
self.__class__.__name__, self.clusters.k)
def __iter__(self):
return self
def _add_new_batch(self):
"""Private method that augments the array holding the cluster centers
by a certain number of rows. """
batch = np.zeros((self.batch_size, self.data.shape[1]))
if self._centers is None:
self._centers = batch
else:
self._centers = np.vstack([self._centers, batch])
def get_unassigned(self):
return np.random.permutation(len(self.data))
def assign_point(self, index):
"""Assigns a data point (row in data matrix) to the closest cluster. If
distance exceeds the cutoff, a new cluster is spawned. """
point = self.data[index]
dist = np.sum((point - self.centers)**2, 1)
if dist.min() <= self.penalty:
self.clusters.labels[index] = dist.argmin()
return
# create new cluster
k = self.clusters.k
if k >= len(self._centers):
self._add_new_batch()
self._centers[k] = point
self.clusters.labels[index] = k
def remove_empty(self):
"""Remove empty clusters. """
nonempty, labels = np.unique(self.clusters.labels, return_inverse=True)
self._centers[:len(nonempty)] = self.centers[nonempty]
self.clusters.labels[...] = labels
def update_centers(self):
"""Update cluster centers by averaging positions of members. """
clusters = np.arange(self.clusters.k)
labels = self.clusters.labels
weights = ndimage.sum(self.weights, labels, index=clusters)
centers = np.zeros((self.data.shape[1], len(clusters)))
for d in range(self.data.shape[1]):
values = self.weights * self.data[:,d]
centers[d,...] = ndimage.sum(values, labels, index=clusters)
centers /= (weights + self.eps)
self._centers[:len(clusters)] = centers.T
def next(self):
"""A single DP-means iteration that sweeps over all data points in some
random order. """
unassigned = self.get_unassigned()
for i in unassigned:
self.assign_point(i)
self.update_centers()
## py2/3 compatibility
__next__ = next
def fitness(self):
"""Evaluates how well the clusters approximate the data. """
d = np.sum((self.data-self.centers[self.clusters.labels])**2, 1)
f = np.dot(self.weights, d) / (self.weights.sum() + self.eps)
return len(self.data) * f
def loss(self):
"""Evaluate loss function optimized by DP-means. """
return self.clusters.k * self.penalty + self.fitness()
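    # i.e. the DP-means objective: (number of clusters) * lambda (= cutoff**2)
    # plus n times the weighted mean squared distance to the assigned centers.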
def stop(self, loss, tol):
"""Stop critertion : no significant change in loss function. """
if tol is None:
return False
if len(loss) >= 2:
x, y = loss[-2:]
return abs(x-y) / abs(x+y) < tol
return False
def run(self, n_iter=100, tol=1e-5, verbose=0):
"""Run DP-means iterations until loss function does no longer change
significantly (specified by tolerance) or maximum number of iterations
is reached.
Arguments
---------
n_iter : positive integer
Maximum number of iterations.
tol : float or None
Tolerance for checking local convergence.
verbose : non-negative integer
Specifies frequency with which progress of DP-means is reported
(verbose=0 means no messages are shown, default).
"""
loss = []
i = 0
output = 'iter={0}, cutoff={1:.1f}, #clusters={2}, loss={3:.3e}'
dpmeans = self.__iter__() #iter(self)
while i < n_iter:
next(dpmeans)
loss.append(self.loss())
if verbose and not i % verbose:
print(output.format(i, self.cutoff, self.clusters.k, loss[-1]))
if self.stop(loss, tol):
break
i += 1
return loss
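# Illustrative use (data and settings made up):
#   dp = DPMeans(np.random.randn(500, 2), cutoff=1.5)
#   history = dp.run(n_iter=50, tol=1e-5, verbose=10)
#   labels, centers = dp.clusters.labels, dp.centers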
class FastDPMeans(DPMeans):
"""
Faster implementation using a BallTree
"""
def __init__(self, data, cutoff, weights=None,
tree_type = ('balltree', 'kdtree')[1]):
super(FastDPMeans, self).__init__(data, cutoff, weights)
self.tree_type = tree_type
assert self.tree_type in ('balltree', 'kdtree')
def get_unassigned(self):
if self.tree_type == 'balltree':
return self._get_unassigned_balltree()
elif self.tree_type == 'kdtree':
return self._get_unassigned_kdtree()
else:
return super(FastDPMeans, self).get_unassigned()
def _get_unassigned_balltree(self):
"""
Use BallTree to find nearest clusters
"""
k = self.clusters.k
if k == 1:
return super(FastDPMeans, self).get_unassigned()
tree = BallTree(self.centers, leaf_size=k+1)
neigh, _ = tree.query_radius(
self.data, self.cutoff, sort_results=True, return_distance=True
)
n_neigh = np.array(list(map(len, neigh)))
assigned = np.nonzero(n_neigh>0)[0]
unassigned = np.nonzero(n_neigh==0)[0]
self.clusters.labels[assigned] = [neigh[i][0] for i in assigned]
return unassigned
def _get_unassigned_kdtree(self):
"""
Use KDTree to find nearest clusters
"""
k = self.clusters.k
if k == 1:
return super(FastDPMeans, self).get_unassigned()
tree = cKDTree(self.centers)
dist, ind = tree.query(self.data, k=1)
dist = dist.flatten()
ind = ind.flatten()
assigned = dist < self.cutoff
self.clusters.labels[assigned] = ind[assigned]
return np.nonzero(~assigned)[0]
| 28.571942 | 79 | 0.574216 | 950 | 7,943 | 4.690526 | 0.242105 | 0.045781 | 0.023339 | 0.010099 | 0.083034 | 0.083034 | 0.051167 | 0.036804 | 0.036804 | 0.036804 | 0 | 0.01085 | 0.315372 | 7,943 | 277 | 80 | 28.67509 | 0.808569 | 0.196903 | 0 | 0.142857 | 0 | 0.006803 | 0.041674 | 0 | 0 | 0 | 0 | 0 | 0.006803 | 1 | 0.142857 | false | 0 | 0.034014 | 0.020408 | 0.346939 | 0.006803 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7eb298952053945ce9666271c803b51f5d2e4a4f | 4,751 | py | Python | mess_io/writer/rxnchan.py | mobergd/interfaces | 82705d2173b9d213684da80913ec0593d30cdbe1 | [
"Apache-2.0"
] | null | null | null | mess_io/writer/rxnchan.py | mobergd/interfaces | 82705d2173b9d213684da80913ec0593d30cdbe1 | [
"Apache-2.0"
] | null | null | null | mess_io/writer/rxnchan.py | mobergd/interfaces | 82705d2173b9d213684da80913ec0593d30cdbe1 | [
"Apache-2.0"
] | null | null | null | """
Writes MESS input for a molecule
"""
import os
from mako.template import Template
from mess_io.writer import util
# OBTAIN THE PATH TO THE DIRECTORY CONTAINING THE TEMPLATES #
SRC_PATH = os.path.dirname(os.path.realpath(__file__))
TEMPLATE_PATH = os.path.join(SRC_PATH, 'templates')
SECTION_PATH = os.path.join(TEMPLATE_PATH, 'sections')
RXNCHAN_PATH = os.path.join(SECTION_PATH, 'reaction_channel')
def species(species_label, species_data, zero_energy):
""" Writes a species section.
"""
# Indent the string containing all of data for the well
species_data = util.indent(species_data, 2)
# Create dictionary to fill template
species_keys = {
'species_label': species_label,
'species_data': species_data,
'zero_energy': zero_energy
}
# Set template name and path for a species
template_file_name = 'species.mako'
template_file_path = os.path.join(RXNCHAN_PATH, template_file_name)
# Build species section string
species_str = Template(filename=template_file_path).render(**species_keys)
return species_str
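# The writer functions in this module all follow the same pattern: indent the body
# text, collect the template variables in a dict, and render the matching .mako
# template from templates/sections/reaction_channel.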
def well(well_label, well_data, zero_energy):
""" Writes a well section.
"""
# Indent the string containing all of data for the well
well_data = util.indent(well_data, 4)
# Create dictionary to fill template
well_keys = {
'well_label': well_label,
'well_data': well_data,
'zero_energy': zero_energy
}
# Set template name and path for a well
template_file_name = 'well.mako'
template_file_path = os.path.join(RXNCHAN_PATH, template_file_name)
# Build well section string
well_str = Template(filename=template_file_path).render(**well_keys)
return well_str
def bimolecular(bimol_label,
species1_label, species1_data,
species2_label, species2_data,
ground_energy):
""" Writes a Bimolecular section.
"""
# Indent the string containing all of data for each species
species1_data = util.indent(species1_data, 4)
species2_data = util.indent(species2_data, 4)
# Determine if species is an atom
isatom1 = util.is_atom_in_str(species1_data)
isatom2 = util.is_atom_in_str(species2_data)
# Create dictionary to fill template
bimol_keys = {
'bimolec_label': bimol_label,
'species1_label': species1_label,
'species1_data': species1_data,
'isatom1': isatom1,
'species2_label': species2_label,
'species2_data': species2_data,
'isatom2': isatom2,
'ground_energy': ground_energy
}
# Set template name and path for a bimolecular set
template_file_name = 'bimolecular.mako'
template_file_path = os.path.join(RXNCHAN_PATH, template_file_name)
# Build bimolecular section string
bimol_str = Template(filename=template_file_path).render(**bimol_keys)
return bimol_str
def ts_sadpt(ts_label, reac_label, prod_label, ts_data, zero_energy,
tunnel=''):
""" Writes a TS section containing only a saddle point
"""
# Indent the string containing all of data for the saddle point
ts_data = util.indent(ts_data, 2)
if tunnel != '':
tunnel = util.indent(tunnel, 4)
# Create dictionary to fill template
ts_sadpt_keys = {
'ts_label': ts_label,
'reac_label': reac_label,
'prod_label': prod_label,
'ts_data': ts_data,
'zero_energy': zero_energy,
'tunnel': tunnel
}
# Set template name and path for a TS with only a single saddle point
template_file_name = 'ts_sadpt.mako'
template_file_path = os.path.join(RXNCHAN_PATH, template_file_name)
# Build saddle point string
sadpt_str = Template(filename=template_file_path).render(**ts_sadpt_keys)
return sadpt_str
def ts_irc(ts_label, reac_label, prod_label, irc_pt_strs, zero_energy,
tunnel=''):
""" Writes a TS section containing IRC information
"""
# Concatenate all of the IRC point strings
ts_data = '\n'.join(irc_pt_strs)
# Indent the TS IRC string
ts_data = util.indent(ts_data, 4)
if tunnel != '':
tunnel = util.indent(tunnel, 4)
# Create dictionary to fill template
irc_keys = {
'ts_label': ts_label,
'reac_label': reac_label,
'prod_label': prod_label,
'ts_data': ts_data,
'zero_energy': zero_energy,
'tunnel': tunnel
}
# Set template name and path for a TS with an IRC
template_file_name = 'ts_irc.mako'
template_file_path = os.path.join(RXNCHAN_PATH, template_file_name)
# Build transition state with IRC string
irc_str = Template(filename=template_file_path).render(**irc_keys)
return irc_str
| 29.147239 | 78 | 0.682172 | 639 | 4,751 | 4.799687 | 0.140845 | 0.078252 | 0.052168 | 0.036518 | 0.536355 | 0.464297 | 0.423541 | 0.3567 | 0.318878 | 0.29149 | 0 | 0.00876 | 0.231109 | 4,751 | 162 | 79 | 29.32716 | 0.830824 | 0.250895 | 0 | 0.290698 | 0 | 0 | 0.102916 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05814 | false | 0 | 0.034884 | 0 | 0.151163 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7eb60bcda12b52373251619ae529c1a9a9e9109f | 3,570 | py | Python | ansible/roles/win_template-fail/files/toCopy/test/utils/shippable/tools/run.py | rdepke/ansible-35613-win_template-example | 4b017a3638665b69c0342431d8fa2bb17f433dbc | [
"MIT"
] | 37 | 2017-08-15T15:02:43.000Z | 2021-07-23T03:44:31.000Z | ansible/roles/win_template-fail/files/toCopy/test/utils/shippable/tools/run.py | rdepke/ansible-35613-win_template-example | 4b017a3638665b69c0342431d8fa2bb17f433dbc | [
"MIT"
] | 12 | 2018-01-10T05:25:25.000Z | 2021-11-28T06:55:48.000Z | ansible/roles/win_template-fail/files/toCopy/test/utils/shippable/tools/run.py | rdepke/ansible-35613-win_template-example | 4b017a3638665b69c0342431d8fa2bb17f433dbc | [
"MIT"
] | 49 | 2017-08-15T09:52:13.000Z | 2022-03-21T17:11:54.000Z | #!/usr/bin/env python
# PYTHON_ARGCOMPLETE_OK
# (c) 2016 Red Hat, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
"""CLI tool for starting new Shippable CI runs."""
from __future__ import print_function
# noinspection PyCompatibility
import argparse
import json
import os
import requests
try:
import argcomplete
except ImportError:
argcomplete = None
def main():
"""Main program body."""
api_key = get_api_key()
parser = argparse.ArgumentParser(description='Start a new Shippable run.')
parser.add_argument('project',
metavar='account/project',
help='Shippable account/project')
target = parser.add_mutually_exclusive_group()
target.add_argument('--branch',
help='branch name')
target.add_argument('--run',
metavar='ID',
help='Shippable run ID')
parser.add_argument('--key',
metavar='KEY',
default=api_key,
required=not api_key,
help='Shippable API key')
parser.add_argument('--env',
nargs=2,
metavar=('KEY', 'VALUE'),
action='append',
help='environment variable to pass')
if argcomplete:
argcomplete.autocomplete(parser)
args = parser.parse_args()
headers = dict(
Authorization='apiToken %s' % args.key,
)
# get project ID
data = dict(
projectFullNames=args.project,
)
url = 'https://api.shippable.com/projects'
response = requests.get(url, data, headers=headers)
if response.status_code != 200:
raise Exception(response.content)
result = response.json()
if len(result) != 1:
raise Exception(
'Received %d items instead of 1 looking for %s in:\n%s' % (
len(result),
args.project,
json.dumps(result, indent=4, sort_keys=True)))
project_id = response.json()[0]['id']
# new build
data = dict(
globalEnv=['%s=%s' % (kp[0], kp[1]) for kp in args.env or []]
)
if args.branch:
data['branch'] = args.branch
elif args.run:
data['runId'] = args.run
url = 'https://api.shippable.com/projects/%s/newBuild' % project_id
response = requests.post(url, data, headers=headers)
if response.status_code != 200:
raise Exception(response.content)
print(json.dumps(response.json(), indent=4, sort_keys=True))
def get_api_key():
"""
rtype: str
"""
key = os.environ.get('SHIPPABLE_KEY', None)
if key:
return key
path = os.path.join(os.environ['HOME'], '.shippable.key')
try:
with open(path, 'r') as key_fd:
return key_fd.read().strip()
except IOError:
return None
if __name__ == '__main__':
main()
| 25.683453 | 78 | 0.598599 | 434 | 3,570 | 4.831797 | 0.442396 | 0.017167 | 0.018598 | 0.027182 | 0.156414 | 0.125894 | 0.069623 | 0.069623 | 0.069623 | 0.069623 | 0 | 0.007504 | 0.290756 | 3,570 | 138 | 79 | 25.869565 | 0.820695 | 0.229132 | 0 | 0.105263 | 0 | 0 | 0.143755 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026316 | false | 0.013158 | 0.092105 | 0 | 0.157895 | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7eb793115fe556b68432beb68ad435a1bdebe52a | 1,024 | py | Python | src/pydb/logging/json_logger.py | SpielerNogard/PyDB | ea89a90b3870d91dbcb011cb88fd08c83466d6a8 | [
"MIT"
] | null | null | null | src/pydb/logging/json_logger.py | SpielerNogard/PyDB | ea89a90b3870d91dbcb011cb88fd08c83466d6a8 | [
"MIT"
] | null | null | null | src/pydb/logging/json_logger.py | SpielerNogard/PyDB | ea89a90b3870d91dbcb011cb88fd08c83466d6a8 | [
"MIT"
] | null | null | null | import logging
import json
class FormatterJSON(logging.Formatter):
def format(self, record):
record.message = record.getMessage()
extra_data = record.__dict__.get('data')
if self.usesTime():
record.asctime = self.formatTime(record, self.datefmt)
j = {
'levelname': record.levelname,
'time': '%(asctime)s.%(msecs)dZ' % dict(asctime=record.asctime, msecs=record.msecs),
'message': record.message,
'module': record.module,
}
if extra_data is not None:
j['extra_data'] = extra_data
return json.dumps(j)
def get_logger(log_level):
    logging.basicConfig(level=log_level)
    logger = logging.getLogger()
    logger.setLevel(log_level)
formatter = FormatterJSON(
'[%(levelname)s]\t%(asctime)s.%(msecs)dZ\t%(levelno)s\t%(message)s\n',
'%Y-%m-%dT%H:%M:%S'
)
# Replace the LambdaLoggerHandler formatter :
logger.handlers[0].setFormatter(formatter)
return logger | 33.032258 | 96 | 0.615234 | 118 | 1,024 | 5.254237 | 0.449153 | 0.058065 | 0.041935 | 0.048387 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001292 | 0.244141 | 1,024 | 31 | 97 | 33.032258 | 0.799742 | 0.041992 | 0 | 0 | 0 | 0.037037 | 0.153061 | 0.090816 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.074074 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e171d377a74fc61a015fa99db4011dcf20fe378 | 2,220 | py | Python | plugins/hg4idea/testData/bin/hgext/pager.py | dmarcotte/intellij-community | 74ed654c3f9ed99f9cc84fa227846b2c38d683c0 | [
"Apache-2.0"
] | null | null | null | plugins/hg4idea/testData/bin/hgext/pager.py | dmarcotte/intellij-community | 74ed654c3f9ed99f9cc84fa227846b2c38d683c0 | [
"Apache-2.0"
] | null | null | null | plugins/hg4idea/testData/bin/hgext/pager.py | dmarcotte/intellij-community | 74ed654c3f9ed99f9cc84fa227846b2c38d683c0 | [
"Apache-2.0"
] | 1 | 2019-03-14T10:35:19.000Z | 2019-03-14T10:35:19.000Z | # pager.py - display output using a pager
#
# Copyright 2008 David Soria Parra <dsp@php.net>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
#
# To load the extension, add it to your .hgrc file:
#
# [extension]
# pager =
#
# Run "hg help pager" to get info on configuration.
'''browse command output with an external pager
To set the pager that should be used, set the application variable::
[pager]
pager = LESS='FSRX' less
If no pager is set, the pager extensions uses the environment variable
$PAGER. If neither pager.pager, nor $PAGER is set, no pager is used.
If you notice "BROKEN PIPE" error messages, you can disable them by
setting::
[pager]
quiet = True
You can disable the pager for certain commands by adding them to the
pager.ignore list::
[pager]
ignore = version, help, update
You can also enable the pager only for certain commands using
pager.attend. Below is the default list of commands to be paged::
[pager]
attend = annotate, cat, diff, export, glog, log, qdiff
Setting pager.attend to an empty value will cause all commands to be
paged.
If pager.attend is present, pager.ignore will be ignored.
To ignore global commands like "hg version" or "hg help", you have to
specify them in the global .hgrc
'''
import sys, os, signal
from mercurial import dispatch, util, extensions
def uisetup(ui):
def pagecmd(orig, ui, options, cmd, cmdfunc):
p = ui.config("pager", "pager", os.environ.get("PAGER"))
if p and sys.stdout.isatty() and '--debugger' not in sys.argv:
attend = ui.configlist('pager', 'attend', attended)
if (cmd in attend or
(cmd not in ui.configlist('pager', 'ignore') and not attend)):
ui.setconfig('ui', 'interactive', False)
sys.stderr = sys.stdout = util.popen(p, "wb")
if ui.configbool('pager', 'quiet'):
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
return orig(ui, options, cmd, cmdfunc)
extensions.wrapfunction(dispatch, '_runcommand', pagecmd)
attended = ['annotate', 'cat', 'diff', 'export', 'glog', 'log', 'qdiff']
| 31.267606 | 78 | 0.678829 | 326 | 2,220 | 4.616564 | 0.472393 | 0.026578 | 0.014618 | 0.022591 | 0.074419 | 0.043854 | 0.043854 | 0 | 0 | 0 | 0 | 0.002887 | 0.21982 | 2,220 | 70 | 79 | 31.714286 | 0.866051 | 0.590991 | 0 | 0 | 0 | 0 | 0.130484 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e1b13087396eb7f886cbf85d990c5779bc18152 | 3,014 | py | Python | backend/jobs/workers/vertexai/vertex_ai_to_bq_predictor.py | vertex-ai-now/crmint | dc6b66a0b24b98c295fe22c04dbd3d7119c1fd46 | [
"Apache-2.0"
] | null | null | null | backend/jobs/workers/vertexai/vertex_ai_to_bq_predictor.py | vertex-ai-now/crmint | dc6b66a0b24b98c295fe22c04dbd3d7119c1fd46 | [
"Apache-2.0"
] | null | null | null | backend/jobs/workers/vertexai/vertex_ai_to_bq_predictor.py | vertex-ai-now/crmint | dc6b66a0b24b98c295fe22c04dbd3d7119c1fd46 | [
"Apache-2.0"
] | 1 | 2022-02-15T04:24:17.000Z | 2022-02-15T04:24:17.000Z | # Copyright 2021 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from google.cloud import aiplatform
from jobs.workers.worker import Worker, WorkerException
from jobs.workers.vertexai.vertex_ai_worker import VertexAIWorker
class VertexAIToBQPredictor(VertexAIWorker):
"""Worker to train a Vertex AI AutoML model using a Vertex dataset."""
PARAMS = [
('vertexai_model_name', 'string', True, '', 'Vertex AI Model Name'),
('vertexai_batch_prediction_name', 'string', False, '',
'Vertex AI Batch Prediction Name'),
('region', 'string', True, '', 'Region'),
('bq_project_id', 'string', True, '', 'BQ Project ID'),
('bq_dataset_id', 'string', True, '', 'BQ Dataset ID'),
('bq_table_id', 'string', True, '', 'BQ Table ID'),
('clean_up', 'boolean', True, True, 'Clean Up'),
]
def _get_model(self, display_name):
models = aiplatform.Model.list(
filter = f'display_name="{display_name}"',
order_by = "create_time desc")
for m in models:
return m
return None
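  # _get_model returns the most recently created model with the given display name
  # (the listing is ordered by create_time desc), or None when no match exists.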
def _execute_batch_prediction(self):
aiplatform.init()
model = self._get_model(self._params['vertexai_model_name'])
if model is None:
self.log_info('No model found. Please try again.')
return
project_id = self._params['bq_project_id']
dataset_id = self._params['bq_dataset_id']
table_id = self._params['bq_table_id']
region = self._params['region']
vertexai_region = region if region[-1].isdigit() else f'{region}1'
job_client = self._get_vertexai_job_client(vertexai_region)
batch_prediction_name = self._params['vertexai_batch_prediction_name']
if batch_prediction_name is None:
batch_prediction_name = f'{project_id}.{dataset_id}.{table_id}'
if self._params['clean_up']:
self._clean_up_batch_predictions(job_client, project_id, vertexai_region)
job = model.batch_predict(
job_display_name = f'{batch_prediction_name}',
instances_format = 'bigquery',
predictions_format = 'bigquery',
bigquery_source = f'bq://{project_id}.{dataset_id}.{table_id}',
bigquery_destination_prefix = f'bq://{project_id}.{dataset_id}',
sync = False,
)
job.wait_for_resource_creation()
batch_prediction_name = job.resource_name
batch_prediction_job = self._get_batch_prediction_job(
job_client, batch_prediction_name)
self._wait_for_job(batch_prediction_job)
def _execute(self):
self._execute_batch_prediction()
| 40.186667 | 79 | 0.708693 | 406 | 3,014 | 4.987685 | 0.349754 | 0.103704 | 0.084444 | 0.035556 | 0.04642 | 0.036543 | 0 | 0 | 0 | 0 | 0 | 0.004037 | 0.178169 | 3,014 | 74 | 80 | 40.72973 | 0.813484 | 0.211015 | 0 | 0 | 0 | 0 | 0.244915 | 0.092797 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056604 | false | 0 | 0.056604 | 0 | 0.207547 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e1c0b602f941bfc12eb99e597d09356dcf32cf9 | 604 | py | Python | labs/quick_sort.py | ZaytsevNS/python_practice | 109e14923a2ddeacc5360fd72947275afd2159e3 | [
"MIT"
] | null | null | null | labs/quick_sort.py | ZaytsevNS/python_practice | 109e14923a2ddeacc5360fd72947275afd2159e3 | [
"MIT"
] | null | null | null | labs/quick_sort.py | ZaytsevNS/python_practice | 109e14923a2ddeacc5360fd72947275afd2159e3 | [
"MIT"
] | null | null | null | import random
from datetime import datetime
start_time = datetime.now()
def quicksortBetter(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quicksortBetter(left) + middle + quicksortBetter(right)
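# Example: quicksortBetter([3, 1, 2, 3]) -> [1, 2, 3, 3]; elements equal to the
# pivot stay in `middle`, so duplicates need no further recursion.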
arr = [random.randint(0, 100000) for i in range(100000)]
print ('Исходный массив: \n' ,arr)
print ('\nОтсортированный массив: ', quicksortBetter(arr))
end_time = datetime.now()
print('\n\nПрошло времени: {}'.format(end_time - start_time)) | 33.555556 | 66 | 0.668874 | 90 | 604 | 4.444444 | 0.4 | 0.05 | 0.0375 | 0.0525 | 0.135 | 0.135 | 0.135 | 0.135 | 0 | 0 | 0 | 0.030992 | 0.198676 | 604 | 18 | 67 | 33.555556 | 0.795455 | 0 | 0 | 0 | 0 | 0 | 0.110744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.125 | 0 | 0.3125 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e1d8979796434fec6803abf366ff204854a1441 | 2,148 | py | Python | models/embedder.py | hanq0212/Few_Shot-Neural_Talking_Head | a5d2b4e745531e39d4d4c31a94557c0bd5e6072e | [
"MIT"
] | null | null | null | models/embedder.py | hanq0212/Few_Shot-Neural_Talking_Head | a5d2b4e745531e39d4d4c31a94557c0bd5e6072e | [
"MIT"
] | null | null | null | models/embedder.py | hanq0212/Few_Shot-Neural_Talking_Head | a5d2b4e745531e39d4d4c31a94557c0bd5e6072e | [
"MIT"
] | null | null | null | import torch
import torch.nn
from models.blocks import *
class Embedder(nn.Module):
def __init__(self,curr_size=224, ideal_size=256, pool_mode = "sum"):
super(Embedder, self).__init__()
self.init_padding= Padding(curr_size=curr_size, ideal=ideal_size)
# downsample
# input: B, 6, 256, 256
if pool_mode != "sum":
self.emb = nn.Sequential(
Residual_Downsample(6, 64), #output: B, 64, 128, 128
Residual_Downsample(64, 128), #output: B, 128, 64, 64
Residual_Downsample(128, 256), #output: B, 256, 32, 32
Self_Attention(256), #output: B, 256, 32, 32
Residual_Downsample(256, 512), #output: B, 512, 16, 16
Residual_Downsample(512, 512), #output: B, 512, 8, 8
Residual_Downsample(512, 512), #output: B, 512, 4, 4
nn.AdaptiveMaxPool2d((1,1)),
nn.ReLU()
)
else:
self.emb = nn.Sequential(
Residual_Downsample(6, 64), #output: B, 64, 128, 128
Residual_Downsample(64, 128), #output: B, 128, 64, 64
Residual_Downsample(128, 256), #output: B, 256, 32, 32
Self_Attention(256), #output: B, 256, 32, 32
Residual_Downsample(256, 512), #output: B, 512, 16, 16
Residual_Downsample(512, 512), #output: B, 512, 8, 8
Residual_Downsample(512, 512), #output: B, 512, 4, 4
nn.LPPool2d(norm_type=1, kernel_size = 4),
nn.ReLU()
)
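        # The two branches differ only in the final pooling step: pool_mode == "sum"
        # uses an L1 (sum-like) pool over the 4x4 feature map, anything else uses
        # adaptive max pooling; both reduce the map to B x 512 x 1 x 1.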
def forward(self, img, landmark):
new_input = torch.cat((img, landmark), dim = 1)
new_input = self.init_padding(new_input)
# print(new_input.size())
out = self.emb(new_input)
out = out.view(out.size(0), 512, 1)
return out | 45.702128 | 88 | 0.474395 | 238 | 2,148 | 4.121849 | 0.243697 | 0.099898 | 0.061162 | 0.079511 | 0.554536 | 0.554536 | 0.554536 | 0.554536 | 0.554536 | 0.554536 | 0 | 0.149242 | 0.416667 | 2,148 | 47 | 89 | 45.702128 | 0.633679 | 0.166667 | 0 | 0.486486 | 0 | 0 | 0.003384 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054054 | false | 0 | 0.081081 | 0 | 0.189189 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e1fd108550e7b99157b9f92ae2a594b904d3d80 | 23,817 | py | Python | afrift/findr_files.py | albertvisser/filefindr | e6c7d7bbbcd34fe43e5f75c410cdd359b362e79e | [
"MIT"
] | null | null | null | afrift/findr_files.py | albertvisser/filefindr | e6c7d7bbbcd34fe43e5f75c410cdd359b362e79e | [
"MIT"
] | null | null | null | afrift/findr_files.py | albertvisser/filefindr | e6c7d7bbbcd34fe43e5f75c410cdd359b362e79e | [
"MIT"
] | null | null | null | """Uitvoeren van de find/replace actie
de uitvoering wordt gestuurd door in een dictionary verzamelde parameters"""
import os
import pathlib
import re
import shutil
import collections
import subprocess
contains_default = 'module level code'
special_chars = ('.', '^', '$', '*', '+', '?', '{', '}', '[', ']', '(', ')', '|', '\\')
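# `contains_default` is the context label the Python-aware scan assigns to module
# level code; `special_chars` lists the regex metacharacters that are escaped when
# a search term is to be matched literally.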
def determine_split_none(words):
"""determine word number to split on for regular search
argument is provided for compatibility with other variants
"""
return 3
def determine_split_py(words):
"""determine word number to split on for python context sensitive search
"""
if words[3] == 'class':
end = 5
if words[5] == 'method':
end = 7
elif words[3] == 'function':
end = 5
    else:  # words[3] == 'module'
end = 6
return end
determine_split = {None: determine_split_none, # dispatch table
"py": determine_split_py}
def format_result(lines, context_type=None):
"""reformat search results
"""
start_splitting = False
old_program_file = old_context = ''
lines_out = []
for line in lines:
line = line.strip()
if not start_splitting:
if line:
lines_out.append(line)
else:
start_splitting = True
continue
words = line.split()
program_file = words[0]
location = ' '.join(words[1:3])
end = determine_split[context_type](words)
context = ' '.join(words[3:end]) if end > 3 else ''
split_on = words[end - 1]
statement = line.split(split_on, 1)[1]
source_changed = False
if program_file != old_program_file:
source_changed = True
if old_program_file:
lines_out.append('')
old_program_file = program_file
lines_out.append(program_file)
if context and context != old_context or source_changed:
old_context = context
lines_out.append(context)
lines_out.append(': '.join((location, statement)))
return lines_out
def check_single_string(inp):
"""inspect single (doc)string
"""
test = inp[0]
while test in ('"', "'"):
try:
ix = inp.index(test, 1)
        except ValueError:  # the quote is never closed in the rest of the line
return False
inp = inp[ix + 1:].strip()
if not inp:
return True
test = inp[0]
return False
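# A few illustrative calls (behaviour follows directly from the loop above):
#   check_single_string('"abc"')         -> True   (a single complete string)
#   check_single_string('"abc" "def"')   -> True   (nothing but string literals)
#   check_single_string('"abc" + name')  -> False  (code follows the string)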
def determine_filetype(entry):
"""Try to discover what kind of file this is and return which context-sensitive search to use
"""
if entry.suffix in ('.py', '.pyw'):
return 'py'
result = subprocess.run(['file', entry], stdout=subprocess.PIPE)
if 'python' in str(result.stdout.lower()):
return 'py'
return ''
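# Note: the fallback above shells out to the external `file` utility, so it assumes
# a Unix-like environment where that command is available on the PATH.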
def read_input_file(file, fallback_encoding):
"""return contents of the specified file as a list or None if it can't be read
"""
try:
with file.open() as f_in:
return f_in.readlines()
except UnicodeDecodeError:
try:
with file.open("r", encoding=fallback_encoding) as f_in:
return f_in.readlines()
except UnicodeDecodeError:
return None
def pyread(file, fallback_encoding='latin-1', negeer_docs=False):
"""context-aware search in Python files
this routine procduces a list of contexts
"""
def pop_construct(last_line):
"""if needed, add construct(s) to list
"""
while in_construct and indentpos <= in_construct[-1][0]:
construct = list(in_construct.pop())
construct.append(last_line)
constructs.append(in_construct + [construct])
lines = read_input_file(file, fallback_encoding)
if lines is None:
return lines
itemlist = []
modlevel_start = 1
constructs = []
in_construct = []
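    # bookkeeping used below (structure follows from how the variables are filled):
    # - itemlist collects ((start_line, start_col), (end_line, end_col), context)
    #   tuples for comments, docstrings and module level code
    # - in_construct is a stack of (indent, 'class'|'def', name, start_line) entries
    #   for the class/def headers we are currently nested in
    # - constructs gathers every closed construct together with its enclosing stack,
    #   so context strings like 'class X method y' can be built afterwards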
docstring = ''
docstring_start = 0
indentpos = prev_lineno = 0
start_of_code = False
for ix, line in enumerate(lines):
if line.strip() == "":
continue
lineno = ix + 1
test = line.lstrip()
if test.startswith('#') and negeer_docs:
itemlist.append(((lineno, 0), (lineno, -1), 'comment'))
if not start_of_code:
modlevel_start = lineno + 1
continue
elif test.startswith('#') and line.index(test) < indentpos:
pass
else:
indentpos = line.index(test)
if negeer_docs:
if docstring and line.rstrip().endswith(docstring):
docstring = ''
itemlist.append(((docstring_start, indentpos), (lineno, -1), "docstring"))
if not start_of_code:
modlevel_start = lineno + 1
continue
if test.startswith('"""') or test.startswith("'''"):
docstring = test[:3]
docstring_start = lineno
if line.rstrip().endswith(docstring):
docstring = ''
itemlist.append(((docstring_start, indentpos), (lineno, -1), "docstring"))
if not start_of_code:
modlevel_start = lineno + 1
continue
if test.startswith('"') or test.startswith("'"):
if check_single_string(test.rstrip()):
itemlist.append(((lineno, indentpos), (lineno, -1), 'docstring'))
if not start_of_code:
modlevel_start = lineno + 1
continue
if not start_of_code:
start_of_code = True
itemlist.append(((modlevel_start, 0), (len(lines), -1), contains_default))
pop_construct(prev_lineno)
if test.startswith('def ') or test.startswith('class '):
words = test.split()
construct = (indentpos, words[0], words[1].split(':')[0].split('(')[0], lineno)
in_construct.append(construct)
if '#' in test and negeer_docs:
pos = test.index('#')
itemlist.append(((lineno, pos), (lineno, -1), 'comment'))
prev_lineno = lineno
indentpos = 0
pop_construct(prev_lineno - 1)
for item in constructs:
_, _, _, start, end = item[-1]
construct = []
for part in item:
type_, name = part[1:3]
if type_ == "def":
if construct and construct[-2] == "class":
type_ = "method"
else:
type_ = "function"
construct.extend([type_, name])
itemlist.append(((start, 0), (end, -1), " ".join(construct)))
return sorted(itemlist)
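# Sketch of the result: for a file defining a class Spam with a method eggs, the
# returned list contains (besides docstring/comment entries) a line range whose
# context string reads 'class Spam method eggs', so hits inside that range can be
# reported together with their context.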
class Finder():
"""interpreteren van de parameters en aansturen van de zoek/vervang routine
"""
def __init__(self, **parms):
## print parms
self.p = {
'zoek': '',
'vervang': '',
'pad': '',
'extlist': [],
'filelist': [],
'subdirs': False,
"case": False,
"woord": False,
"regexp": False,
"backup": False,
"follow_symlinks": False,
"maxdepth": 5,
"fallback_encoding": 'ascii',
"context": False,
"negeer": False, }
for x in parms:
if x in self.p:
self.p[x] = parms[x]
else:
raise TypeError('Onbekende optie ' + x)
## print('On creating Finder instance:', self.p)
self.ok = True
self.errors = []
        self.rpt = []  # originally: a report of what has happened
self.use_complex = True
self.rgx, self.ignore = '', ''
if not self.p['filelist'] and not self.p['pad']:
self.rpt.append("Fout: geen lijst bestanden en geen directory opgegeven")
elif self.p['filelist'] and self.p['pad']:
self.rpt.append("Fout: lijst bestanden én directory opgegeven")
elif not self.p['zoek']:
self.rpt.append('Fout: geen zoekstring opgegeven')
if self.rpt:
self.ok = False
return
        self.p['wijzig'] = self.p['vervang'] is not None
self.extlist_upper = []
for x in self.p['extlist']:
if not x.startswith("."):
x = "." + x
self.extlist_upper.append(x.upper())
# self.setup_search()
def setup_search(self):
"""instellen variabelen t.b.v. zoekactie en output
"""
# moet hier nog iets mee doen m.h.o.o. woorddelen of niet
if self.p['wijzig'] or self.p['regexp']:
self.use_complex = False
self.rgx = self.build_regexp_simple()
else:
self.rgx, self.ignore = self.build_regexes()
specs = ["Gezocht naar '{}'".format(self.p['zoek'])]
if self.p['wijzig']:
specs.append(" en dit vervangen door '{}'".format(self.p['vervang']))
if self.p['extlist']:
if len(self.p['extlist']) > 1:
typs = " en ".join((", ".join(self.p['extlist'][:-1]),
self.p['extlist'][-1]))
else:
typs = self.p['extlist'][0]
specs.append(" in bestanden van type {}".format(typs))
self.filenames = []
self.dirnames = set()
if self.p['pad']:
specs.append(" in {}".format(self.p['pad']))
self.subdirs(self.p['pad'])
else:
if len(self.p['filelist']) == 1:
specs.append(" in {}".format(self.p['filelist'][0]))
else:
specs.append(" in opgegeven bestanden/directories")
for entry in self.p['filelist']:
## self.subdirs(entry, is_list=False)
mld = self.subdirs(entry)
if mld:
self.errors.append(mld)
if self.p['subdirs']:
specs.append(" en evt. onderliggende directories")
self.rpt.insert(0, "".join(specs))
self.specs = specs
if self.errors:
self.rpt.append("Zoekactie niet mogelijk")
self.ok = False
## def subdirs(self, pad, is_list=True, level=0):
def subdirs(self, pad, level=0):
"""recursieve routine voor zoek/vervang in subdirectories
samenstellen lijst met te verwerken bestanden
als is_list = False dan wordt van de doorgegeven naam eerst een list
gemaakt. Daardoor hebben we altijd een iterable met directorynamen.
Deze parameter lijkt een probleem te veroorzaken als in multi mode een
lijst wordt opgegeven (via self.p['filelist']) met directorynamen erin
(misschien is dat recent veranderd) daarom is deze verwijderd en is de
except NotADirectoryError toegevoegd
"""
path = pad
try:
test = path.name
except AttributeError:
path = pathlib.Path(pad)
else:
pad = str(path)
if path.is_dir():
self.dirnames.add(pad)
if self.p["maxdepth"] != -1:
level += 1
if level > self.p["maxdepth"]:
return ''
## if is_list:
try:
_list = (fname for fname in os.scandir(pad))
except NotADirectoryError:
_list = [path]
except PermissionError:
_list = []
except FileNotFoundError:
return 'File not found: {}'.format(path.resolve())
## else:
## _list = (pad,)
for entry in _list:
if entry.is_dir():
if self.p['subdirs']:
self.subdirs(entry.path, level=level)
elif entry.is_symlink() and not self.p['follow_symlinks']:
pass
else:
try:
ext = entry.suffix
except AttributeError:
entry = pathlib.Path(entry.path)
ext = entry.suffix
if self.p['extlist'] == [] or ext.upper() in self.extlist_upper:
self.filenames.append(entry)
return ''
def build_regexp_simple(self):
"""build the search regexp(s)
this original version returns one compiled expression
"""
zoek = ''
for char in self.p['zoek']:
if not self.p['regexp']:
if char in special_chars:
zoek += "\\"
zoek += char
flags = re.MULTILINE
if not self.p['case']:
flags |= re.IGNORECASE
return re.compile(str(zoek), flags)
def build_regexes(self):
"""build the search regexp(s)
this version makes a complex search possible by looking for special
separators
returns a list of re's to look for and a re of strings to ignore"""
def escape(data): # only for strings
"""escape special characters when they are not to be interpreted
"""
zoek = ''
for char in data:
if char in special_chars:
zoek += "\\"
zoek += char
return zoek
negeer = ''
flags = re.MULTILINE
if not self.p['case']:
flags |= re.IGNORECASE
if self.p['regexp'] or self.p['wijzig']: # in these cases: always take literally
zoek = [re.compile(escape(self.p['zoek']), flags)]
else:
zoek_naar, zoek_ook, zoek_niet = self.parse_zoek()
zoek = [re.compile('|'.join([escape(x) for x in zoek_naar]), flags)]
zoek += [re.compile(escape(x), flags) for x in zoek_ook]
if zoek_niet:
negeer = re.compile('|'.join([escape(x) for x in zoek_niet]), flags)
return zoek, negeer
def parse_zoek(self):
"""
returns three lists:
- a list of phrases of which at least one must occur
- a list of phrases that, in addition, must also occur
- a list of phrases that must not occur
"""
def add_to_matches():
"""add phrase to the correct phrase list
"""
nonlocal zoekitem, also_required, forbidden
## print('add_to_matches called with zoekitem', zoekitem, also_required)
zoekitem = zoekitem.strip()
if not zoekitem:
return
if also_required:
required_matches.append(zoekitem)
elif forbidden:
forbidden_matches.append(zoekitem)
else:
possible_matches.append(zoekitem)
zoekitem = ''
also_required = forbidden = False
in_quotes = also_required = forbidden = False
zoekitem = ''
possible_matches = []
required_matches = []
forbidden_matches = []
for char in self.p['zoek']:
if char == '"':
if in_quotes:
add_to_matches()
in_quotes = not in_quotes
elif char == ',' and not in_quotes:
add_to_matches()
elif char == '+' and not in_quotes:
add_to_matches()
also_required = True
elif char == '-' and not in_quotes:
add_to_matches()
forbidden = True
else:
zoekitem += char
add_to_matches()
return possible_matches, required_matches, forbidden_matches
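# A minimal illustrative sketch (not part of the original class) of how the
# syntax handled above is split up: a search string such as
# 'foo, "bar baz" +qux -quux' yields
# possible_matches = ['foo', 'bar baz'], required_matches = ['qux'] and
# forbidden_matches = ['quux'].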
def go(self): # do_action(self) # , search_python=False):
"""start the search
"""
for entry in self.filenames:
self.zoek(entry)
if not self.p['context']: # search_python:
return
results, self.rpt = self.rpt, []
locations = {}
for entry in self.filenames:
ftype = determine_filetype(entry)
if ftype == 'py':
locations[str(entry)] = pyread(entry, self.p['fallback_encoding'], self.p['negeer'])
else:
locations[str(entry)] = []
for item in results:
test = item.split(' r. ', 1)
if len(test) == 1:
self.rpt.append(item)
continue
best = test[0]
if not locations[best]:
continue
test = test[1].split(': ', 1)
if len(test) == 1:
self.rpt.append(item)
continue
lineno, text = test
contains = contains_default
for loc in locations[best]:
lineno = int(lineno)
if loc[0][0] < lineno <= loc[1][0]:
old_contains = contains
contains = loc[2]
if contains == 'comment':
where = text.upper().find(self.p['zoek'].upper())
if where < loc[0][1]:
contains = old_contains
if loc[0][0] > lineno:
break
if self.p['negeer'] and contains in ('comment', 'docstring'):
continue
if contains != 'ignore':
self.rpt.append('{} r. {} ({}): {}'.format(best, lineno, contains, text))
def zoek(self, best):
"het daadwerkelijk uitvoeren van de zoek/vervang actie op een bepaald bestand"
pos, lines, regels = 0, [], []
msg = ""
try_again = False
with best.open("r") as f_in:
try:
for x in f_in:
lines.append(pos)
x = x.rstrip() + os.linesep
regels.append(x)
pos += len(x)
except UnicodeDecodeError:
try_again = True
if try_again:
pos, lines, regels = 0, [], []
with best.open("r", encoding=self.p['fallback_encoding']) as f_in:
try:
for x in f_in:
lines.append(pos)
x = x.rstrip() + os.linesep
regels.append(x)
pos += len(x)
except UnicodeDecodeError:
msg = best + ": overgeslagen, waarschijnlijk geen tekstbestand"
if msg:
self.rpt.append(msg)
return
lines.append(pos)
data = "".join(regels)
found = False
from_line = 0
last_in_line = 0
if self.use_complex:
result_list = self.complex_search(data, lines)
for lineno in result_list:
found = True
self.rpt.append("{} r. {}: {}".format(
best, lineno, regels[lineno - 1].rstrip()))
return
# use the old way of searching when replacing or with a regexp search string
result_list = self.rgx.finditer(data)
for vind in result_list:
found = True
## print(vind, vind.span(), sep = " ")
for lineno, linestart in enumerate(lines[from_line:]):
## print(from_line,lineno,linestart)
if vind.start() < linestart:
if not self.p['wijzig']:
in_line = lineno + from_line
if in_line != last_in_line:
self.rpt.append("{0} r. {1}: {2}".format(
best, in_line, regels[in_line - 1].rstrip()))
last_in_line = in_line
from_line = lineno
break
if found and self.p['wijzig']:
ndata, aant = self.rgx.subn(self.p["vervang"], data)
best_s = str(best)
self.rpt.append("%s: %s keer" % (best_s, aant))
self.backup_if_needed(best_s)
with best.open("w") as f_out:
f_out.write(ndata)
def complex_search(self, data, lines):
"""extended serch using phrases we want to find and phrases we don't want to find
"""
# build a list of all locations where a search string was found
# (plus the index of the regexp it belongs to)
found_in_lines = []
for ix, rgx in enumerate(self.rgx):
new_lines = [(x.start(), ix) for x in rgx.finditer(data)]
found_in_lines += new_lines
# extend the list with all locations where something we do not want to find was found
# (marked with index -1)
if self.ignore:
donotwant = [(x.start(), -1) for x in self.ignore.finditer(data)]
found_in_lines += donotwant
# walk through the list to build a dictionary keyed on line number
# with all regexp indexes that matched on that line
# so that we can see which lines we do and do not want
lines_found = collections.defaultdict(set)
from_line = 0  # keeps track of the line from which it makes sense to check the contents
for itemstart, number in sorted(found_in_lines):
for lineno, linestart in enumerate(lines[from_line:]):
if itemstart < linestart:
in_line = lineno + from_line  # compute the actual line number
from_line = lineno
break
lines_found[in_line].add(number)
# filter out lines that do not occur in all searches, or that must be left out altogether
all_searches = set(range(len(self.rgx)))
lines_left_over = []
for line, values in lines_found.items():
if -1 in values:
continue
if values == all_searches:
lines_left_over.append(line)
lines_left_over.sort()
return lines_left_over
def replace_selected(self, text, lines_to_replace):
"achteraf vervangen in geselecteerde regels"
replaced = 0
single_mode = len(lines_to_replace[0]) == 1
file_to_replace, lines = '', []
for line in sorted(lines_to_replace):
if single_mode:
filename, lineno = str(self.p['filelist'][0]), line[0]
else:
filename, lineno = line
lineno = int(lineno) - 1
if filename != file_to_replace:
if file_to_replace:
self.backup_if_needed(file_to_replace)
with open(file_to_replace, 'w') as out:
out.writelines(lines)
file_to_replace = filename
with open(file_to_replace) as in_:
lines = in_.readlines()
oldline = lines[lineno]
lines[lineno] = lines[lineno].replace(self.p['zoek'], text)
if lines[lineno] != oldline:
replaced += 1
self.backup_if_needed(file_to_replace)
with open(file_to_replace, 'w') as out:
out.writelines(lines)
return replaced
def backup_if_needed(self, fname):
"make backup if required"
if self.p['backup']:
bestnw = fname + ".bak"
shutil.copyfile(fname, bestnw)
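# Minimal usage sketch (hypothetical values, not part of the original module):
# find = Finder(zoek='TODO', vervang=None, pad='/some/project',
#               extlist=['py'], subdirs=True)   # vervang=None means search only
# if find.ok:
#     find.setup_search()
#     find.go()
#     print('\n'.join(find.rpt))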
| 38.168269 | 101 | 0.511399 | 2,603 | 23,817 | 4.562812 | 0.182866 | 0.023154 | 0.007072 | 0.005052 | 0.1902 | 0.159552 | 0.137324 | 0.119811 | 0.101288 | 0.079734 | 0 | 0.006746 | 0.383843 | 23,817 | 623 | 102 | 38.229535 | 0.802589 | 0.14053 | 0 | 0.280079 | 0 | 0 | 0.064149 | 0.001067 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04142 | false | 0.003945 | 0.011834 | 0 | 0.110454 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e2226e371230c1beb5b13beeed9d53f91d650e2 | 1,400 | py | Python | data/soda_data/interp_data.py | Skye777/DL_ENSO-MC | be0353548179b8b721cac4f6e395063e55042119 | [
"Apache-2.0"
] | null | null | null | data/soda_data/interp_data.py | Skye777/DL_ENSO-MC | be0353548179b8b721cac4f6e395063e55042119 | [
"Apache-2.0"
] | null | null | null | data/soda_data/interp_data.py | Skye777/DL_ENSO-MC | be0353548179b8b721cac4f6e395063e55042119 | [
"Apache-2.0"
] | null | null | null | """
@author: Skye Cui
@file: interp_data.py
@time: 2021/6/8 9:44
@description:
"""
import xarray
import matplotlib.pyplot as plt
import numpy as np
from hparams import Hparams
hparams = Hparams()
parser = hparams.parser
hp = parser.parse_args()
# dataset_s = self.config.dataset_s
# dataset_e = self.config.dataset_e
llat, rlat, llon, rlon = [-40, 40, 120, 280]
dic = [
[f'{hp.soda_dataset_dir}/meta-data/soda_ssh.nc', f'{hp.soda_dataset_dir}/interp-data/ssh.nc'],
[f'{hp.soda_dataset_dir}/meta-data/soda_taux.nc', f'{hp.soda_dataset_dir}/interp-data/taux.nc'],
[f'{hp.soda_dataset_dir}/meta-data/soda_tauy.nc', f'{hp.soda_dataset_dir}/interp-data/tauy.nc']
]
def intercept(base_file):
dataset = xarray.open_dataset(base_file, cache=True, decode_times=False)
print(dataset)
lc = dataset.coords['lon']
la = dataset.coords['lat']
data = dataset.loc[dict(lon=lc[(lc >= llon) & (lc <= rlon)], lat=la[(la >= llat) & (la <= rlat)])]
return data
def interpolation(data):
lat_len, lon_len = 80, 160
lon = np.linspace(120.25, 279.75, lon_len)
lat = np.linspace(-39.75, 39.75, lat_len)
interp_data = data.interp(lat=lat, lon=lon)
print(interp_data)
return interp_data
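# Note: with llon/rlon = 120/280 and llat/rlat = -40/40, the target grid built
# above (160 x 80 points spanning 120.25-279.75 E and -39.75-39.75 N) has a
# spacing of roughly one degree in both directions ((279.75 - 120.25) / 159 ~ 1.0).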
if __name__ == '__main__':
for dataset in dic:
data = intercept(dataset[0])
interp_data = interpolation(data)
interp_data.to_netcdf(dataset[1])
| 28 | 102 | 0.677857 | 221 | 1,400 | 4.099548 | 0.384615 | 0.099338 | 0.046358 | 0.092715 | 0.211921 | 0.211921 | 0.211921 | 0.196468 | 0.068433 | 0 | 0 | 0.037607 | 0.164286 | 1,400 | 49 | 103 | 28.571429 | 0.736752 | 0.102857 | 0 | 0 | 0 | 0 | 0.214114 | 0.202887 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.125 | 0 | 0.25 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e25ba8903e8b069df77eb31648fb47ba7913747 | 813 | py | Python | setup.py | bool3max/p3elf | 3981452d2a38b18416fb688ca28b315af484ff9f | [
"MIT"
] | null | null | null | setup.py | bool3max/p3elf | 3981452d2a38b18416fb688ca28b315af484ff9f | [
"MIT"
] | null | null | null | setup.py | bool3max/p3elf | 3981452d2a38b18416fb688ca28b315af484ff9f | [
"MIT"
] | null | null | null | # build the package using setuptools
import setuptools, io
from p3elf import __version__ as version
with io.open("./README.md", "r") as f:
ld = f.read()
args = {
"name": "p3elf",
"version": version,
"author": "Bogdan Mitrović",
"author_email": "bokisa.mitrovic2@gmail.com",
"description": "A tiny python3 package for parsing ELF files",
"long_description": ld,
"long_description_content_type": "text/markdown",
"url": "https://github.com/bool3max/p3elf/",
"packages": setuptools.find_packages(),
"classifiers": ["Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"],
"python_requires": ">=3.5"
}
print(setuptools.find_packages())
setuptools.setup(**args)
| 29.035714 | 66 | 0.638376 | 92 | 813 | 5.51087 | 0.706522 | 0.059172 | 0.086785 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013975 | 0.207872 | 813 | 27 | 67 | 30.111111 | 0.773292 | 0.04182 | 0 | 0 | 0 | 0 | 0.492921 | 0.070785 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.095238 | 0 | 0.095238 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e265f07772f86550e080ed69b3544f93921b653 | 545 | py | Python | home-controll.py | CarliWasTaken/Frontend | f9ac1c3b1cc77e638730c9afcdeb8872ff115ce8 | [
"MIT"
] | 1 | 2021-09-22T04:47:33.000Z | 2021-09-22T04:47:33.000Z | home-controll.py | CarliWasTaken/Frontend | f9ac1c3b1cc77e638730c9afcdeb8872ff115ce8 | [
"MIT"
] | null | null | null | home-controll.py | CarliWasTaken/Frontend | f9ac1c3b1cc77e638730c9afcdeb8872ff115ce8 | [
"MIT"
] | null | null | null | import socket
from turtle import back
import pygame
# init Socket
UDPClientSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM)
#serverAddressPort = ("127.0.0.1", 20001)
serverAddressPort = ("192.168.43.103", 20001)
bufferSize = 32
# init pygame
pygame.init()
done = False
clock = pygame.time.Clock()
while not done:
variables = {'speed': 123, 'steer': 0}
print(variables)
sendBytes = str.encode(str(variables))
UDPClientSocket.sendto(sendBytes, serverAddressPort)
clock.tick(50)
pygame.quit() | 24.772727 | 78 | 0.730275 | 70 | 545 | 5.657143 | 0.614286 | 0.050505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074786 | 0.141284 | 545 | 22 | 79 | 24.772727 | 0.771368 | 0.115596 | 0 | 0 | 0 | 0 | 0.050104 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1875 | 0 | 0.1875 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e271c7923d53f777ddb0232f966b5d3aef892b6 | 1,917 | py | Python | Scripts/Miscellaneous/sudokuSolver/sudokuSolver.py | fuzzmz/Python_and_the_Web | b47aa86161d1fe6dc364df822168a6e0ffef455d | [
"MIT"
] | 3 | 2020-10-13T17:41:33.000Z | 2021-06-02T15:01:58.000Z | Scripts/Miscellaneous/sudokuSolver/sudokuSolver.py | fuzzmz/Python_and_the_Web | b47aa86161d1fe6dc364df822168a6e0ffef455d | [
"MIT"
] | null | null | null | Scripts/Miscellaneous/sudokuSolver/sudokuSolver.py | fuzzmz/Python_and_the_Web | b47aa86161d1fe6dc364df822168a6e0ffef455d | [
"MIT"
] | null | null | null | # check the validity of the value at particular position
def isvalid(board,num,pos):
for i in range(len(board[0])):
if board[pos[0]][i]==num and pos[1]!=i:
return False
for i in range(len(board[0])):
if board[i][pos[1]]==num and pos[0]!=i:
return False
x=pos[1]//3
y=pos[0]//3
for i in range(y*3,y*3+3):
for j in range(x*3,x*3+3):
if board[i][j]==num and (i,j)!=pos:
return False
return True
# print the board
def printb(board):
for i in range(len(board)):
if i%3==0 and i!=0:
print('-----------------------')
for j in range(len(board[0])):
if j%3==0 and j!=0:
print(' | ',end="")
if j==8:
print(board[i][j])
else:
print(str(board[i][j])+" ",end="")
# find the next empty cell; returns None when the board is full (solved)
def empty(board):
for i in range(len(board)):
for j in range(len(board)):
if board[i][j]==0:
return (i,j)
return None
# recursive function to solve the board
def solve(board):
find=empty(board)
if not find:
return True
row,col=find
for i in range(1,10):
if isvalid(board,i,(row,col)):
board[row][col]=i
if solve(board):
return True
board[row][col]=0
return False
if __name__=="__main__":
board=[]
print("="*5,"Enter Values separated by space and use 0 for empty values","="*5)
for i in range(9):
row=input(f"Enter row {i+1} values: ")
row=row.split(" ")
row=list(map(int, row)) #converting every element from the row from string to int
board.append(row) #appending row to the board
print("="*5,"Unsolved State","="*5)
printb(board)
solve(board)
print("="*5,"Solved State","="*5)
printb(board)
| 26.625 | 89 | 0.510172 | 292 | 1,917 | 3.321918 | 0.253425 | 0.072165 | 0.043299 | 0.079381 | 0.151546 | 0.147423 | 0.105155 | 0.05567 | 0.05567 | 0 | 0 | 0.029389 | 0.325509 | 1,917 | 71 | 90 | 27 | 0.720804 | 0.116328 | 0 | 0.232143 | 0 | 0 | 0.088915 | 0.013634 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.232143 | 0.178571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e2792fdef66f880d7c52f6d222b6e625250d8f8 | 3,774 | py | Python | dashboard/services/email.py | robertsimmons514/isthislegit | aa8f2b6cb2ac3de2b0fe03bb93dbceccc4c1f495 | [
"BSD-3-Clause"
] | 282 | 2017-07-01T03:47:54.000Z | 2022-02-25T00:58:40.000Z | dashboard/services/email.py | robertsimmons514/isthislegit | aa8f2b6cb2ac3de2b0fe03bb93dbceccc4c1f495 | [
"BSD-3-Clause"
] | 46 | 2017-07-26T22:54:13.000Z | 2022-02-14T21:39:52.000Z | dashboard/services/email.py | robertsimmons514/isthislegit | aa8f2b6cb2ac3de2b0fe03bb93dbceccc4c1f495 | [
"BSD-3-Clause"
] | 53 | 2017-07-22T15:04:16.000Z | 2022-03-16T03:36:28.000Z | from config import config
'''
The email service is responsible for fetching, sending, and (potentially) deleting email on behalf of IsThisLegit.
'''
class EmailFetchError(Exception):
''' A generic error when fetching emails from the external service.'''
def __init__(self, message):
''' Creates a new instance of the error '''
self.message = message
super(EmailFetchError, self).__init__(message)
def __str__(self):
''' Returns a string representation of the error '''
return self.message
class EmailProvider(object):
def send(self, **kwargs):
''' Sends an email using the provider's client
It is expected that the provider can handle the following kwargs:
sender - the email sender
to - list of recipients in the "To" field
bcc - list of recipients in the "Bcc" field
cc - list of recipients in the "Cc" field
headers - optional headers
html - html body
body - plaintext body
subject - email subject
'''
raise NotImplementedError
def fetch(self, **kwargs):
''' Fetches an email using the provider's client.
For the initial release, this will only include fetching emails
from the Gmail API.
'''
raise NotImplementedError
class SMTPProvider(EmailProvider):
''' An email provider leveraging basic SMTP connections '''
def send(self, **kwargs):
raise NotImplementedError
def fetch(self, **kwargs):
raise NotImplementedError
class AppEngineProvider(EmailProvider):
''' An email provider leveraging the App Engine Mail API.
Every `sender` must be listed as an authorized sender in the GAE Project console:
https://cloud.google.com/appengine/docs/standard/python/mail/#who_can_send_mail
'''
def send(self, **kwargs):
''' Sends an email using the App Engine Mail API.
Raises an InvalidEmailError if an invalid email address was specified.
'''
message = mail.EmailMessage(**kwargs)
message.send()
def fetch(self, **kwargs):
''' Fetches an email using the Gmail API users.messages.get()
method. It leverages the IsThisLegit service account to impersonate
the user in order to retrieve the email by message ID. This prevents
users from having to manually accept the OAuth permission dialog before
reporting phishing emails.
Expected kwargs:
userId - The userID who reported the email
messageId - The Gmail message ID to fetch
'''
userId = kwargs.get('userId')
messageId = kwargs.get('messageId')
scopes = ['https://www.googleapis.com/auth/gmail.readonly']
credentials = ServiceAccountCredentials.from_json_keyfile_name(
config['gae']['service_account_key'], scopes=scopes)
delegated_credentials = credentials.create_delegated(userId)
http_auth = delegated_credentials.authorize(Http())
service = build('gmail', 'v1', http=http_auth)
response = service.users().messages().get(
userId=userId, id=messageId, format='raw').execute()
if not response or 'raw' not in response:
raise EmailFetchError('Error fetching email: User {}, thread {}'.
format(userId, messageId))
message = base64.urlsafe_b64decode(str(response['raw']))
return message
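# Minimal usage sketch (hypothetical addresses and ids, not part of the original
# module); it assumes the 'gae' provider below is configured with a valid
# service account key:
# provider = AppEngineProvider()
# raw_message = provider.fetch(userId='reporter@example.com', messageId='17abc0de')
# provider.send(sender='noreply@example.com', to=['analyst@example.com'],
#               subject='New phishing report received', body='See attached report.')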
if config['email']['provider'] == 'gae':
import base64
from google.appengine.api import mail
from oauth2client.service_account import ServiceAccountCredentials
from httplib2 import Http
from googleapiclient.discovery import build
email_provider = AppEngineProvider()
| 34.623853 | 114 | 0.659777 | 435 | 3,774 | 5.65977 | 0.397701 | 0.02437 | 0.019496 | 0.02437 | 0.172218 | 0.101543 | 0.074736 | 0.062551 | 0.062551 | 0 | 0 | 0.003211 | 0.257287 | 3,774 | 108 | 115 | 34.944444 | 0.875134 | 0.375464 | 0 | 0.227273 | 0 | 0 | 0.0788 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.136364 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e287b7943e33f69334303c10ebdf209fd561965 | 524 | py | Python | src/app_functions/menu/change_countdown_statusbar.py | DanielNoord/DuolingoPomodoro | 307b386daf3216fb9ba86f983f0e39f6647ffd64 | [
"MIT"
] | null | null | null | src/app_functions/menu/change_countdown_statusbar.py | DanielNoord/DuolingoPomodoro | 307b386daf3216fb9ba86f983f0e39f6647ffd64 | [
"MIT"
] | 4 | 2021-04-25T15:39:32.000Z | 2022-02-18T20:58:00.000Z | src/app_functions/menu/change_countdown_statusbar.py | DanielNoord/DuolingoPomodoro | 307b386daf3216fb9ba86f983f0e39f6647ffd64 | [
"MIT"
] | null | null | null | from src.app_functions.settings.save_settings import save_settings
from src.app_functions.update_menu import update_menu
def change_countdown_statusbar(app):
"""Change setting whether a timer till next notification should be displayed in the menu bar
Args:
app (rumps.App): The App object of the main app
"""
if app.settings["time_in_menu_bar"]:
app.settings["time_in_menu_bar"] = False
else:
app.settings["time_in_menu_bar"] = True
update_menu(app)
save_settings(app)
| 30.823529 | 96 | 0.725191 | 78 | 524 | 4.628205 | 0.461538 | 0.077562 | 0.124654 | 0.141274 | 0.199446 | 0.199446 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194656 | 524 | 16 | 97 | 32.75 | 0.85545 | 0.282443 | 0 | 0 | 0 | 0 | 0.134454 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e33bfbc058d44c49d28bc0605dc3424cc4557d4 | 4,083 | py | Python | integrate.py | itslight/Assistant | 60a0475883b34290990a27ba6cb3de3fa54a85f5 | [
"MIT"
] | null | null | null | integrate.py | itslight/Assistant | 60a0475883b34290990a27ba6cb3de3fa54a85f5 | [
"MIT"
] | null | null | null | integrate.py | itslight/Assistant | 60a0475883b34290990a27ba6cb3de3fa54a85f5 | [
"MIT"
] | null | null | null | from tkinter import *
import write
import speak
import recognize
BG_GRAY = "#ABB2B9"
BG_COLOR = "#17202A"
TEXT_COLOR = "Pink"
FONT = "Helvetica 14"
FONT_BOLD = "Helvetica 13 bold"
class ChatApplication:
def __init__(self):
self.window = Tk()
self._setup_main_window()
# text = Text(self.window)
# text.tag_config("Assistant", background="yellow", foreground="yellow")
# text.tag_config("start", background="black", foreground="green")
def run(self):
self.window.mainloop()
def _setup_main_window(self):
self.window.title("Chat")
self.window.resizable(width=False, height=False)
self.window.configure(width=670, height=650, bg=BG_COLOR)
# head label
head_label = Label(self.window, bg=BG_COLOR, fg=TEXT_COLOR,
text="Welcome", font=FONT_BOLD, pady=10)
head_label.place(relwidth=1)
# tiny divider
line = Label(self.window, width=450, bg=BG_GRAY)
line.place(relwidth=1, rely=0.07, relheight=0.012)
# text widget
self.text_widget = Text(self.window, width=20, height=3, bg=BG_COLOR, fg=TEXT_COLOR,
font=FONT, padx=5, pady=5)
self.text_widget.place(relheight=0.745, relwidth=1, rely=0.08)
self.text_widget.configure(cursor="arrow", state=DISABLED)
# scroll bar
scrollbar = Scrollbar(self.text_widget)
scrollbar.place(relheight=1, relx=0.974)
scrollbar.configure(command=self.text_widget.yview)
# bottom label
bottom_label = Label(self.window, bg=BG_GRAY, height=80)
bottom_label.place(relwidth=1, rely=0.825)
# message entry box
self.msg_entry = Entry(bottom_label, bg="#2C3E50", fg=TEXT_COLOR, font=FONT)
self.msg_entry.place(relwidth=0.74, relheight=0.08, rely=0.008, relx=0.011)
self.msg_entry.focus()
self.msg_entry.bind("<Return>", self._on_enter_pressedW)
# send button
send_button = Button(bottom_label, text="Send", font=FONT_BOLD, width=20, bg=BG_GRAY,
command=lambda: self._on_enter_pressedW(None))
send_button.place(relx=0.77, rely=0.008, relheight=0.0268, relwidth=0.22)
# voice button
speak_button = Button(bottom_label, text="Speak", font=FONT_BOLD, width=20, bg=BG_GRAY,
command=lambda: self._on_enter_pressedS(None))
speak_button.place(relx=0.77, rely=0.035, relheight=0.0268, relwidth=0.22)
# gesture button
speak_button = Button(bottom_label, text="Gesture", font=FONT_BOLD, width=20, bg=BG_GRAY,
command=lambda: self._on_enter_pressedG(None))
speak_button.place(relx=0.77, rely=0.0614, relheight=0.0268, relwidth=0.22)
def _on_enter_pressedW(self, event):
msg = self.msg_entry.get()
write.start(self,msg)
# self._insert_message(msg, "You")
# self._insert_message(result, "Assistant")
def _on_enter_pressedS(self, event):
# msg = self.msg_entry.get()
result=speak.start(self)
# self._insert_message(msg, "You")
self._insert_message(result, "Assistant")
def _on_enter_pressedG(self, event):
# msg = self.msg_entry.get()
recognize.run()
# self._insert_message(msg, "You")
# self._insert_message(result, "Assistant")
def _insert_message(self, msg, sender):
if not msg:
return
self.msg_entry.delete(0, END)
msg1 = f"{sender}: {msg}\n\n"
self.text_widget.configure(state=NORMAL)
self.text_widget.insert(END, msg1)
self.text_widget.configure(state=DISABLED)
self.text_widget.see(END)
if __name__ == "__main__":
app = ChatApplication()
app.run() | 37.805556 | 98 | 0.591477 | 508 | 4,083 | 4.547244 | 0.255906 | 0.04329 | 0.054545 | 0.018182 | 0.386147 | 0.322078 | 0.253247 | 0.175325 | 0.175325 | 0.147619 | 0 | 0.044483 | 0.289738 | 4,083 | 108 | 99 | 37.805556 | 0.752069 | 0.126133 | 0 | 0 | 0 | 0 | 0.037868 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106061 | false | 0 | 0.060606 | 0 | 0.19697 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e34246422e8d4557f8e45ace316310921ec075b | 451 | py | Python | build_tools/fetch_eigen.py | ltl-uva/logdecomp | 167f9f8c8edd396b4f45f65eb8995175cbe3dd36 | [
"BSD-2-Clause"
] | 3 | 2021-11-15T12:26:39.000Z | 2022-03-10T01:37:55.000Z | build_tools/fetch_eigen.py | ltl-uva/logdecomp | 167f9f8c8edd396b4f45f65eb8995175cbe3dd36 | [
"BSD-2-Clause"
] | null | null | null | build_tools/fetch_eigen.py | ltl-uva/logdecomp | 167f9f8c8edd396b4f45f65eb8995175cbe3dd36 | [
"BSD-2-Clause"
] | null | null | null | from pathlib import Path
import shutil
import urllib.request
def main():
path = Path("eigen.zip")
url = "https://gitlab.com/libeigen/eigen/-/archive/3.3.9/eigen-3.3.9.zip"
if not path.exists():
with urllib.request.urlopen(url) as request:
path.write_bytes(request.read())
dest = path.stem
shutil.rmtree(dest, ignore_errors=True)
shutil.unpack_archive(path, dest)
if __name__ == '__main__':
main()
| 21.47619 | 77 | 0.660754 | 64 | 451 | 4.484375 | 0.5625 | 0.090592 | 0.020906 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016621 | 0.199557 | 451 | 20 | 78 | 22.55 | 0.778393 | 0 | 0 | 0 | 0 | 0.071429 | 0.182222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.214286 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e34a09bfe9f575921096d373bb00e558b346ff7 | 937 | py | Python | src/ipython_media.py | tomstark99/play-fair | 5b4ad20ebb96d1162f3bd696aba0a6b57006ab0a | [
"Apache-2.0"
] | 18 | 2020-11-26T17:14:09.000Z | 2021-12-13T13:25:06.000Z | src/ipython_media.py | tomstark99/play-fair | 5b4ad20ebb96d1162f3bd696aba0a6b57006ab0a | [
"Apache-2.0"
] | null | null | null | src/ipython_media.py | tomstark99/play-fair | 5b4ad20ebb96d1162f3bd696aba0a6b57006ab0a | [
"Apache-2.0"
] | 2 | 2020-12-15T04:24:55.000Z | 2021-03-01T17:59:51.000Z | from typing import List, Union
import numpy as np
import torch
from moviepy.video.io.ImageSequenceClip import ImageSequenceClip
def display_video(video: Union[torch.Tensor, np.ndarray, List[np.ndarray]],
format="THWC", fps=12):
"""
Args:
video: Video array or tensor with values ranging between 0--255.
format: the letters T, C, H, W in any order, describing the data layout of the video array
"""
if isinstance(video, list):
video = np.stack(video)
if isinstance(video, torch.Tensor):
video = video.numpy()
video = np.einsum(f"{format} -> THWC", video.astype(np.uint8))
_, height, width, _ = video.shape
if height % 2 != 0:
video = video[:, :-1, :, :]
if width % 2 != 0:
video = video[:, :, :-1, :]
print(video.shape)
frames: List[np.ndarray] = list(video)
clip = ImageSequenceClip(frames, fps=fps)
return clip.ipython_display()
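# Minimal usage sketch (hypothetical tensor, not part of the original module),
# intended for a notebook cell where moviepy/IPython display is available:
# frames = torch.randint(0, 256, (16, 3, 64, 64))  # 16 RGB frames of 64x64
# clip = display_video(frames, format="TCHW", fps=8)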
| 28.393939 | 80 | 0.623266 | 121 | 937 | 4.793388 | 0.46281 | 0.086207 | 0.044828 | 0.041379 | 0.044828 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01844 | 0.247599 | 937 | 32 | 81 | 29.28125 | 0.804255 | 0.161153 | 0 | 0 | 0 | 0 | 0.02635 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.2 | 0 | 0.3 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e38b02de0b92bc8b0eea03e39f85d15673826c0 | 2,859 | py | Python | migrations/versions/02ed71a97129_de_novo_de_novo.py | bcc-dw-20212/ifbook-backend | 87d9f1ec6f9e8c90ea79298b0a36b5c1a7bf8661 | [
"Apache-2.0"
] | null | null | null | migrations/versions/02ed71a97129_de_novo_de_novo.py | bcc-dw-20212/ifbook-backend | 87d9f1ec6f9e8c90ea79298b0a36b5c1a7bf8661 | [
"Apache-2.0"
] | 9 | 2022-03-09T19:31:43.000Z | 2022-03-28T16:37:57.000Z | migrations/versions/02ed71a97129_de_novo_de_novo.py | bcc-dw-20212/ifbook-backend | 87d9f1ec6f9e8c90ea79298b0a36b5c1a7bf8661 | [
"Apache-2.0"
] | null | null | null | """de novo de novo
Revision ID: 02ed71a97129
Revises:
Create Date: 2022-04-01 19:35:55.700583
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '02ed71a97129'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('curso',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('nome', sa.String(length=30), nullable=False),
sa.Column('sigla', sa.String(length=4), nullable=True),
sa.PrimaryKeyConstraint('id', name=op.f('pk_curso')),
sa.UniqueConstraint('nome', name=op.f('uq_curso_nome')),
sa.UniqueConstraint('sigla', name=op.f('uq_curso_sigla'))
)
op.create_table('user',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('username', sa.String(length=30), nullable=False),
sa.Column('senha', sa.String(length=100), nullable=False),
sa.Column('descricao', sa.String(length=1000), nullable=True),
sa.Column('firstname', sa.String(length=30), nullable=True),
sa.Column('lastname', sa.String(length=120), nullable=True),
sa.Column('email', sa.String(length=120), nullable=True),
sa.Column('matricula', sa.String(length=30), nullable=True),
sa.Column('telefone', sa.String(length=30), nullable=True),
sa.Column('naonulanova', sa.String(length=30), nullable=False),
sa.PrimaryKeyConstraint('id', name=op.f('pk_user')),
sa.UniqueConstraint('email', name=op.f('uq_user_email')),
sa.UniqueConstraint('matricula', name=op.f('uq_user_matricula')),
sa.UniqueConstraint('telefone', name=op.f('uq_user_telefone')),
sa.UniqueConstraint('username', name=op.f('uq_user_username'))
)
op.create_table('amizade',
sa.Column('primario', sa.Integer(), nullable=False),
sa.Column('secundario', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['primario'], ['user.id'], name=op.f('fk_amizade_primario_user')),
sa.ForeignKeyConstraint(['secundario'], ['user.id'], name=op.f('fk_amizade_secundario_user')),
sa.PrimaryKeyConstraint('primario', 'secundario', name=op.f('pk_amizade'))
)
op.create_table('cursos_alunos',
sa.Column('aluno', sa.Integer(), nullable=False),
sa.Column('curso', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['aluno'], ['user.id'], name=op.f('fk_cursos_alunos_aluno_user')),
sa.ForeignKeyConstraint(['curso'], ['curso.id'], name=op.f('fk_cursos_alunos_curso_curso')),
sa.PrimaryKeyConstraint('aluno', 'curso', name=op.f('pk_cursos_alunos'))
)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('cursos_alunos')
op.drop_table('amizade')
op.drop_table('user')
op.drop_table('curso')
# ### end Alembic commands ###
| 40.842857 | 98 | 0.682756 | 379 | 2,859 | 5.034301 | 0.21372 | 0.071279 | 0.051363 | 0.077044 | 0.440776 | 0.398847 | 0.319182 | 0.222222 | 0.041929 | 0 | 0 | 0.025723 | 0.129766 | 2,859 | 69 | 99 | 41.434783 | 0.741158 | 0.099685 | 0 | 0.039216 | 0 | 0 | 0.215215 | 0.041387 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039216 | false | 0 | 0.039216 | 0 | 0.078431 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e3a459aa2e13234c68873f36a7b7ce645e73b6b | 26,895 | py | Python | datmo/core/controller/environment/environment.py | awesome-archive/datmo | 72ea51c28a9947e24a464395bb0136b39eb6001a | [
"Apache-2.0"
] | 331 | 2018-03-30T14:33:59.000Z | 2022-01-10T19:43:32.000Z | datmo/core/controller/environment/environment.py | KIMS-Github/datmo | a456d196006b67ce56af96cb4900682eab747bef | [
"MIT"
] | 274 | 2018-04-08T17:12:44.000Z | 2020-07-29T02:45:22.000Z | datmo/core/controller/environment/environment.py | KIMS-Github/datmo | a456d196006b67ce56af96cb4900682eab747bef | [
"MIT"
] | 28 | 2018-05-03T21:57:22.000Z | 2020-12-31T04:18:42.000Z | import os
import shutil
from datmo.core.controller.base import BaseController
from datmo.core.controller.file.file_collection import FileCollectionController
from datmo.core.entity.environment import Environment
from datmo.core.util.i18n import get as __
from datmo.core.util.validation import validate
from datmo.core.util.spinner import Spinner
from datmo.core.util.json_store import JSONStore
from datmo.core.util.misc_functions import get_datmo_temp_path, list_all_filepaths
from datmo.core.util.exceptions import PathDoesNotExist, RequiredArgumentMissing, TooManyArgumentsFound,\
EnvironmentNotInitialized, UnstagedChanges, ArgumentError, EnvironmentDoesNotExist, ProjectNotInitialized
class EnvironmentController(BaseController):
"""EnvironmentController inherits from BaseController and manages business logic related to the
environment.
Methods
-------
create(dictionary)
Create an environment within the project
build(id)
Build the environment for use within the project
list()
List all environments within the project
delete(id)
Delete the specified environment from the project
"""
def __init__(self):
super(EnvironmentController, self).__init__()
self.file_collection = FileCollectionController()
self.spinner = Spinner()
if not self.is_initialized:
raise ProjectNotInitialized(
__("error", "controller.environment.__init__"))
def get_environment_types(self):
"""Get the environment types
Returns
-------
list
List of supported environment type
"""
return self.environment_driver.get_environment_types()
def get_supported_frameworks(self, environment_type):
"""Get all the supported frameworks
Parameters
----------
environment_type : str
the type of environment
Returns
-------
list
List of available frameworks and their info
"""
return self.environment_driver.get_supported_frameworks(
environment_type)
def get_supported_languages(self, environment_type, environment_framework):
"""Get all the supported languages for the environment
Parameters
----------
environment_type : str
the type of environment
environment_framework : str
the framework for the environment
Returns
-------
list
List of available languages for the environments
"""
return self.environment_driver.get_supported_languages(
environment_type, environment_framework)
def setup(self, options, save_hardware_file=True):
"""Create a pre-defined supported environment and add it to the project environment directory
The user can build on top of the pre-defined environment and create new ones of their own
Parameters
----------
options : dict
can include the following values:
name : str
the name to be used to specify a supported environment
save_hardware_file : bool, optional
boolean to save hardware file along with other files
(default is True to save the file and create distinct hashes based on software and hardware)
Returns
-------
Environment
returns an object representing the environment created
Raises
------
UnstagedChanges
if unstaged changes exist in the environment it should fail
"""
# Check unstaged changes before trying to setup
try:
self.check_unstaged_changes()
except UnstagedChanges:
raise UnstagedChanges(
__("error", "controller.environment.setup.unstaged",
self.environment_driver.environment_directory_path))
try:
_ = self.environment_driver.setup(
options,
definition_path=self.environment_driver.
environment_directory_path)
except Exception:
raise
name = options.get('name', None)
if name is None:
environment_framework = options['environment_framework']
environment_type = options['environment_type']
environment_language = options['environment_language']
if environment_language:
name = "%s:%s-%s" % (environment_framework, environment_type,
environment_language)
else:
name = "%s:%s" % (environment_framework, environment_type)
create_dict = {
"name": name,
"description": "supported environment created by datmo"
}
return self.create(create_dict, save_hardware_file=save_hardware_file)
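# Minimal usage sketch (hypothetical option values, not part of the original
# module); assumes an initialized datmo project:
# controller = EnvironmentController()
# env_obj = controller.setup({"environment_framework": "keras",
#                             "environment_type": "cpu",
#                             "environment_language": "py3"})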
def current_environment(self):
"""Get the current environment object
Returns
-------
Environment
returns an object representing the current environment state
Raises
------
UnstagedChanges
if there are unstaged changes, error out because there is no well-defined current environment
"""
self.check_unstaged_changes()
return self.create({})
def create(self, dictionary, save_hardware_file=True):
"""Create an environment
Parameters
----------
dictionary : dict
optional values to populate required environment entity args
paths : list, optional
list of absolute or relative filepaths and/or dirpaths to collect with destination names
(e.g. "/path/to/file>hello", "/path/to/file2", "/path/to/dir>newdir")
(default if none provided is to pull from project environment folder and project root. If none found create default definition)
name : str, optional
name of the environment
(default is None)
description : str, optional
description of the environment
(default is None)
save_hardware_file : bool
boolean to save hardware file along with other files
(default is True to save the file and create distinct hashes based on software and hardware)
Returns
-------
Environment
returns an object representing the environment created
Raises
------
EnvironmentDoesNotExist
if there is no environment found after given parameters and defaults are checked
PathDoesNotExist
if any source paths provided do not exist
"""
# Validate Inputs
create_dict = {"model_id": self.model.id}
create_dict["driver_type"] = self.environment_driver.type
validate("create_environment", dictionary)
# Create temp environment folder
_temp_env_dir = get_datmo_temp_path(self.home)
# Step 1: Populate a path list from the user inputs in a format compatible
# with the input of the File Collection create function
paths = []
# a. add in user given paths as is if they exist
if "paths" in dictionary and dictionary['paths']:
paths.extend(dictionary['paths'])
# b. if there exists project environment directory AND no paths exist, add in absolute paths
if not paths and os.path.isdir(
self.environment_driver.environment_directory_path):
paths.extend([
os.path.join(
self.environment_driver.environment_directory_path,
filepath) for filepath in list_all_filepaths(
self.environment_driver.environment_directory_path)
])
# c. add in default environment definition filepath as specified by the environment driver
# if path exists and NO OTHER PATHS exist
src_environment_filename = self.environment_driver.get_default_definition_filename(
)
src_environment_filepath = os.path.join(self.home,
src_environment_filename)
_, environment_filename = os.path.split(src_environment_filepath)
create_dict['definition_filename'] = environment_filename
if not paths and os.path.exists(src_environment_filepath):
paths.append(src_environment_filepath)
# Step 2: Check existing paths and create files as needed to populate the
# full environment within the temporary directory
paths = self._setup_compatible_environment(
create_dict,
paths,
_temp_env_dir,
save_hardware_file=save_hardware_file)
# Step 3: Pass in all paths for the environment to the file collection create
# If PathDoesNotExist is found for any source paths, then error
if not paths:
raise EnvironmentDoesNotExist()
try:
file_collection_obj = self.file_collection.create(paths)
except PathDoesNotExist as e:
raise PathDoesNotExist(
__("error", "controller.environment.create.filepath.dne",
str(e)))
# Step 4: Add file collection information to create dict and check unique hash
create_dict['file_collection_id'] = file_collection_obj.id
create_dict['unique_hash'] = file_collection_obj.filehash
# Check if unique hash is unique or not.
# If not, DO NOT CREATE Environment and return existing Environment object
results = self.dal.environment.query({
"unique_hash": file_collection_obj.filehash
})
if results: return results[0]
# Step 5: Delete the temporary directory
shutil.rmtree(_temp_env_dir)
# Step 6: Add optional arguments to the Environment entity
for optional_arg in ["name", "description"]:
if optional_arg in dictionary:
create_dict[optional_arg] = dictionary[optional_arg]
# Step 7: Create environment and return
return self.dal.environment.create(Environment(create_dict))
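# Minimal usage sketch (hypothetical paths, not part of the original module),
# using the "source>destination" path syntax described in the docstring above:
# env_obj = controller.create({
#     "paths": ["/path/to/Dockerfile>Dockerfile", "/path/to/requirements.txt"],
#     "name": "my-environment",
#     "description": "environment with pinned requirements"})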
def build(self, environment_id, workspace=None):
"""Build environment from definition file
Parameters
----------
environment_id : str
environment object id to build
workspace : str
workspace to be used
Returns
-------
bool
returns True if success
Raises
------
EnvironmentDoesNotExist
if the specified Environment does not exist.
"""
self.environment_driver.init()
if not self.exists(environment_id):
raise EnvironmentDoesNotExist(
__("error", "controller.environment.build", environment_id))
environment_obj = self.dal.environment.get_by_id(environment_id)
file_collection_obj = self.dal.file_collection.\
get_by_id(environment_obj.file_collection_id)
# TODO: Check hardware info here if different from creation time
# Add in files for that environment id
environment_definition_path = os.path.join(self.home,
file_collection_obj.path)
# Copy to temp folder and remove files that are datmo specific
_temp_env_dir = get_datmo_temp_path(self.home)
self.file_driver.copytree(environment_definition_path, _temp_env_dir)
# get definition filepath for the temp folder
environment_definition_filepath = os.path.join(
_temp_env_dir, environment_obj.definition_filename)
try:
# Build the Environment with the driver
self.spinner.start()
result = self.environment_driver.build(
environment_id,
path=environment_definition_filepath,
workspace=workspace)
finally:
self.spinner.stop()
# Remove both temporary directories
shutil.rmtree(_temp_env_dir)
return result
def extract_workspace_url(self, name, workspace=None):
"""Extract workspace url from the environment
Parameters
----------
name : str
name of the environment being run
workspace : str
workspace being used for the run
Returns
-------
str
web url for the workspace being run, None if it doesn't exist
"""
return self.environment_driver.extract_workspace_url(name, workspace)
def run(self, environment_id, options, log_filepath):
"""Run and log an instance of the environment with the options given
Parameters
----------
environment_id : str
options : dict
can include the following values:
command : list, optional
ports : list, optional
Here are some example ports used for common applications.
* 'jupyter notebook' - 8888
* flask API - 5000
* tensorboard - 6006
An example input for the above would be ["8888:8888", "5000:5000", "6006:6006"]
which maps the running host port (right) to that of the environment (left)
name : str, optional
volumes : dict, optional
mem_limit : str, optional
gpu : bool, default False
detach : bool, optional
stdin_open : bool, optional
tty : bool, optional
log_filepath : str
filepath to the log file
Returns
-------
return_code : int
system return code for container and logs
run_id : str
identification for run of the environment
logs : str
string version of output logs for the container
"""
self.environment_driver.init()
# TODO: Check hardware info here if different from creation time
final_return_code, run_id, logs = \
self.environment_driver.run(environment_id, options, log_filepath)
return final_return_code, run_id, logs
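# Minimal usage sketch (hypothetical values, not part of the original module);
# the options dict follows the keys described in the docstring above:
# options = {"command": ["jupyter", "notebook"], "ports": ["8888:8888"], "detach": True}
# return_code, run_id, logs = controller.run(env_obj.id, options, "/tmp/run.log")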
def list(self):
# TODO: Add time filters
return self.dal.environment.query({})
def update(self, environment_id, name=None, description=None):
"""Update the environment metadata"""
if not self.exists(environment_id):
raise EnvironmentDoesNotExist()
update_environment_input_dict = {"id": environment_id}
if name:
update_environment_input_dict['name'] = name
if description:
update_environment_input_dict['description'] = description
return self.dal.environment.update(update_environment_input_dict)
def delete(self, environment_id):
"""Delete all traces of an environment
Parameters
----------
environment_id : str
environment object id to remove
Returns
-------
bool
True if success
Raises
------
EnvironmentDoesNotExist
if the specified Environment does not exist.
"""
self.environment_driver.init()
if not self.exists(environment_id):
raise EnvironmentDoesNotExist(
__("error", "controller.environment.delete", environment_id))
# Remove file collection
environment_obj = self.dal.environment.get_by_id(environment_id)
file_collection_deleted = self.file_collection.delete(
environment_obj.file_collection_id)
# Remove artifacts associated with the environment_driver
environment_artifacts_removed = self.environment_driver.remove(
environment_id, force=True)
# Delete environment object
delete_success = self.dal.environment.delete(environment_obj.id)
return file_collection_deleted and environment_artifacts_removed and \
delete_success
def stop(self, run_id=None, match_string=None, all=False):
"""Stop the trace of running environment
Parameters
----------
run_id : str, optional
stop environment with specific run id
(default is None, which means it is not used)
match_string : str, optional
stop environment with a string to match the environment name
(default is None, which means it is not used)
all : bool, optional
stop all environments
Notes
-----
The user must provide exactly one of the above; if multiple are given
or none are given, the function will error
Returns
-------
bool
True if success
Raises
------
RequiredArgumentMissing
TooManyArguments
"""
self.environment_driver.init()
if not (run_id or match_string or all):
raise RequiredArgumentMissing()
if sum(map(bool, [run_id, match_string, all])) > 1:
raise TooManyArgumentsFound()
stop_success = False
if run_id:
# Stop the instance(e.g. container) running using environment driver(e.g. docker)
stop_success = self.environment_driver.stop(run_id, force=True)
if match_string:
# Stop all tasks matching the string given
stop_success = self.environment_driver.stop_remove_containers_by_term(
term=match_string, force=True)
if all:
# Stop all tasks associated within the enclosed project
all_match_string = "datmo-task-" + self.model.id
stop_success = self.environment_driver.stop_remove_containers_by_term(
term=all_match_string, force=True)
return stop_success
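# Minimal usage sketch (not part of the original module): exactly one selector
# must be supplied, for example
# controller.stop(run_id=some_run_id)          # one specific run (some_run_id is a placeholder)
# controller.stop(match_string="datmo-task")   # all runs whose name matches
# controller.stop(all=True)                    # every run of this project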
def exists(self, environment_id=None, environment_unique_hash=None):
"""Returns a boolean if the environment exists
Parameters
----------
environment_id : str
environment id to check for
environment_unique_hash : str
unique hash for the environment to check for
Returns
-------
bool
True if exists else False
"""
if environment_id:
environment_objs = self.dal.environment.query({
"id": environment_id
})
elif environment_unique_hash:
environment_objs = self.dal.environment.query({
"unique_hash": environment_unique_hash
})
else:
raise ArgumentError()
env_exists = False
if environment_objs:
env_exists = True
return env_exists
def check_unstaged_changes(self):
"""Checks if there exists any unstaged changes for the environment in project environment directory.
Returns
-------
bool
False if it's already staged else error
Raises
------
EnvironmentNotInitialized
if the environment driver is not initialized properly, will fail
UnstagedChanges
error if not there exists unstaged changes in environment
"""
if not self.environment_driver.is_initialized:
raise EnvironmentNotInitialized()
# Check if unstaged changes exist
if self._has_unstaged_changes():
raise UnstagedChanges()
return False
def checkout(self, environment_id):
"""Checkout to specific environment id
Parameters
----------
environment_id : str
environment id to checkout to
Returns
-------
bool
True if success
Raises
------
EnvironmentNotInitialized
error if not initialized (must initialize first)
PathDoesNotExist
if environment id does not exist
UnstagedChanges
if unstaged changes exist in the environment
"""
if not self.is_initialized:
raise EnvironmentNotInitialized()
if not self.exists(environment_id):
raise EnvironmentDoesNotExist(
__("error", "controller.environment.checkout_env",
environment_id))
# Check if unstaged changes exist
if self._has_unstaged_changes():
raise UnstagedChanges()
# Check if environment has is same as current
results = self.dal.environment.query({"id": environment_id})
environment_obj = results[0]
environment_hash = environment_obj.unique_hash
if self._calculate_project_environment_hash() == environment_hash:
return True
# Remove all content from project environment directory
for file in os.listdir(
self.environment_driver.environment_directory_path):
file_path = os.path.join(
self.environment_driver.environment_directory_path, file)
try:
if os.path.isfile(file_path):
os.remove(file_path)
elif os.path.isdir(file_path):
shutil.rmtree(file_path)
except Exception as e:
print(e)
# Add in files for that environment id
file_collection_obj = self.dal.file_collection.\
get_by_id(environment_obj.file_collection_id)
environment_definition_path = os.path.join(self.home,
file_collection_obj.path)
# Copy to temp folder and remove files that are datmo specific
_temp_env_dir = get_datmo_temp_path(self.home)
self.file_driver.copytree(environment_definition_path, _temp_env_dir)
for filename in self.environment_driver.get_datmo_definition_filenames(
):
os.remove(os.path.join(_temp_env_dir, filename))
# Copy from temp folder to project environment directory
self.file_driver.copytree(
_temp_env_dir, self.environment_driver.environment_directory_path)
shutil.rmtree(_temp_env_dir)
return True
def _setup_compatible_environment(self,
create_dict,
paths,
directory,
save_hardware_file=True):
"""Setup compatible environment from user paths. Creates the necessary datmo files if
they are not already present
Parameters
----------
create_dict : dict
dictionary for entity creation, this is mutated in the function (not returned)
paths : list
list of absolute or relative filepaths and/or dirpaths to collect with destination names
(e.g. "/path/to/file>hello", "/path/to/file2", "/path/to/dir>newdir")
directory : str
path of directory to save additional files to
save_hardware_file : bool
boolean to save hardware file along with other files
(default is True to save the file and create distinct hashes based on software and hardware)
Returns
-------
paths : list
returns the input paths with the paths of the new files created appended
"""
# a. look for the default definition, if not present add it to the directory, and add it to paths
if all(create_dict['definition_filename'] not in path
for path in paths):
self.environment_driver.create_default_definition(directory)
original_definition_filepath = os.path.join(
directory, create_dict['definition_filename'])
paths.append(original_definition_filepath)
# b. get the hardware info and save it to the entity, if save_hardware_file is True
# then save it to file and add it to the paths
create_dict[
'hardware_info'] = self.environment_driver.get_hardware_info()
if save_hardware_file:
hardware_info_filepath = os.path.join(directory, "hardware_info")
_ = JSONStore(
hardware_info_filepath,
initial_dict=create_dict['hardware_info'])
paths.append(hardware_info_filepath)
return paths
def _calculate_project_environment_hash(self, save_hardware_file=True):
"""Return the environment hash from contents in project environment directory.
If environment_directory not present then will assume it is empty
Parameters
----------
save_hardware_file : bool
include the hardware info file within the hash
Returns
-------
str
unique hash of the project environment directory
"""
# Populate paths from the project environment directory
paths = []
if os.path.isdir(self.environment_driver.environment_directory_path):
paths.extend([
os.path.join(
self.environment_driver.environment_directory_path,
filepath) for filepath in list_all_filepaths(
self.environment_driver.environment_directory_path)
])
# Create a temp dir to save any additional files necessary
_temp_dir = get_datmo_temp_path(self.home)
# Setup compatible environment and create add paths
paths = self._setup_compatible_environment(
{
"definition_filename":
self.environment_driver.get_default_definition_filename()
},
paths,
_temp_dir,
save_hardware_file=save_hardware_file)
# Create new temp directory
_temp_dir_2 = get_datmo_temp_path(self.home)
# Hash the paths of the environment with a different temp dir
dirhash = self.file_driver.calculate_hash_paths(paths, _temp_dir_2)
# Remove both temporary directories
shutil.rmtree(_temp_dir)
shutil.rmtree(_temp_dir_2)
return dirhash
def _has_unstaged_changes(self):
"""Return whether there are unstaged changes"""
env_hash = self._calculate_project_environment_hash()
env_hash_no_hardware = self._calculate_project_environment_hash(
save_hardware_file=False)
environment_files = list_all_filepaths(
self.environment_driver.environment_directory_path)
if self.exists(environment_unique_hash=env_hash) or self.exists(
environment_unique_hash=env_hash_no_hardware
) or not environment_files:
return False
return True
| 38.257468 | 147 | 0.620636 | 2,892 | 26,895 | 5.58195 | 0.12621 | 0.039026 | 0.04423 | 0.023787 | 0.349625 | 0.312334 | 0.252741 | 0.212662 | 0.190361 | 0.176857 | 0 | 0.002883 | 0.316416 | 26,895 | 702 | 148 | 38.311966 | 0.87517 | 0.374122 | 0 | 0.293515 | 0 | 0 | 0.042463 | 0.015175 | 0 | 0 | 0 | 0.002849 | 0 | 1 | 0.068259 | false | 0 | 0.037543 | 0.003413 | 0.180887 | 0.003413 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e3ee7f4b73a9c1c635a243272af01f2980eca4d | 1,373 | py | Python | .github/release_log.py | aaliddell/asyncpg | 2c99beb18425758d9ecb3e921a7aba26bb28976f | [
"Apache-2.0"
] | 5,714 | 2016-07-19T06:42:22.000Z | 2022-03-31T13:26:46.000Z | .github/release_log.py | DalavanCloud/asyncpg | 43a7b213438378e416bff462150b06c267d1b720 | [
"Apache-2.0"
] | 715 | 2016-07-20T11:28:45.000Z | 2022-03-27T22:14:47.000Z | .github/release_log.py | DalavanCloud/asyncpg | 43a7b213438378e416bff462150b06c267d1b720 | [
"Apache-2.0"
] | 405 | 2016-07-20T00:30:22.000Z | 2022-03-31T02:39:58.000Z | #!/usr/bin/env python3
#
# Copyright (C) 2016-present the asyncpg authors and contributors
# <see AUTHORS file>
#
# This module is part of asyncpg and is released under
# the Apache 2.0 License: http://www.apache.org/licenses/LICENSE-2.0
import json
import requests
import re
import sys
BASE_URL = 'https://api.github.com/repos/magicstack/asyncpg/compare'
def main():
if len(sys.argv) < 2:
print('pass a sha1 hash as a first argument')
sys.exit(1)
from_hash = sys.argv[1]
if len(sys.argv) > 2:
to_hash = sys.argv[2]
else:
to_hash = 'master'  # assumed default so to_hash is always defined when no second hash is given
r = requests.get(f'{BASE_URL}/{from_hash}...{to_hash}')
data = json.loads(r.text)
for commit in data['commits']:
message = commit['commit']['message']
first_line = message.partition('\n\n')[0]
if commit.get('author'):
username = '@{}'.format(commit['author']['login'])
else:
username = commit['commit']['author']['name']
sha = commit["sha"][:8]
m = re.search(r'\#(?P<num>\d+)\b', message)
if m:
issue_num = m.group('num')
else:
issue_num = None
print(f'* {first_line}')
print(f' (by {username} in {sha}', end='')
if issue_num:
print(f' for #{issue_num})')
else:
print(')')
print()
if __name__ == '__main__':
main()
| 24.087719 | 68 | 0.562272 | 187 | 1,373 | 4.02139 | 0.502674 | 0.037234 | 0.031915 | 0.031915 | 0.034574 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017017 | 0.272396 | 1,373 | 56 | 69 | 24.517857 | 0.735736 | 0.163146 | 0 | 0.083333 | 0 | 0 | 0.239054 | 0.029772 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0.027778 | 0.111111 | 0 | 0.138889 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e3f029a461c02dfded1cbf33a05d95229d1bce7 | 9,684 | py | Python | PyREMOT/docs/gasTransPor.py | sinagilassi/rmt-app | bbd5bb496f36116ecec15d75b4133a43a9233aaa | [
"MIT"
] | null | null | null | PyREMOT/docs/gasTransPor.py | sinagilassi/rmt-app | bbd5bb496f36116ecec15d75b4133a43a9233aaa | [
"MIT"
] | null | null | null | PyREMOT/docs/gasTransPor.py | sinagilassi/rmt-app | bbd5bb496f36116ecec15d75b4133a43a9233aaa | [
"MIT"
] | null | null | null | # TRANSPORT PROPERTIES OF GASES
# ------------------------------
# import packages/modules
from math import sqrt
import numpy as np
import re
# internals
from PyREMOT.docs.rmtUtility import rmtUtilityClass as rmtUtil
# core
from PyREMOT.core import Tref, R_CONST
from PyREMOT.core import roundNum
from PyREMOT.core import CONST_EQ_GAS_DIFFUSIVITY, CONST_EQ_GAS_VISCOSITY
# data
from PyREMOT.data import viscosityEqList
from PyREMOT.data import viscosityList
from PyREMOT.data.dataGasThermalConductivity import TherConductivityList
from PyREMOT.data.componentData import thermalConductivityEqList
def main():
pass
# NOTE
### diffusivity ###
def calGasDiffusivity(equation, compList, params):
"""
calculate gas diffusivity [m2/s]
args:
params: changes with equation
eq1: Chapman-Enskog
"""
# choose equation
if equation == 1:
return calGaDiEq1(compList, params)
else:
return -1
def calGaDiEq1(compList, params):
"""
calculate based on Chapman-Enskog
args:
params:
compList: component name list
MoFri: mole fraction list
T: temperature [K]
P: pressure [Pa]
MWi: molecular weight list [g/mol]
CrTei: critical temperature [K]
CrPri: critical pressure [bar]
"""
# input
MoFri = params['MoFri']
T = params['T']
P = params['P']
MWi = params['MWi']
CrTei = params['CrTei']
CrPri = params['CrPri']
# component no.
compNo = len(compList)
# e/K
eK_Ratio = np.array([0.75*item for item in CrTei])
# sigma - characteristic length of the intermolecular force law
sigma = np.zeros(compNo)
for i in range(compNo):
_loop = (CrTei[i]/CrPri[i])**(1/3)
sigma[i] = 2.44*_loop
# e[i,j]
eij = np.zeros((compNo, compNo))
for i in range(compNo):
for j in range(i, compNo):
if i == j:
eij[i][j] = 0
else:
eij[i][j] = sqrt(eK_Ratio[i]*eK_Ratio[j])
# sigma[i,j]
sigmaij = np.zeros((compNo, compNo))
for i in range(compNo):
for j in range(i, compNo):
if i == j:
sigmaij[i][j] = 0
else:
sigmaij[i][j] = 0.5*(sigma[i] + sigma[j])
# omega[i,j]
omegaij = np.zeros((compNo, compNo))
for i in range(compNo):
for j in range(i, compNo):
if i == j:
omegaij[i][j] = 0
else:
_Ts = T/eij[i][j]
_omegaLoop = 44.54*(_Ts**-4.909) + 1.911*(_Ts**-1.575)
omegaij[i][j] = _omegaLoop**0.10
# diffusivity coefficient D[i,j]
Dij = np.zeros((compNo, compNo))
for i in range(compNo):
for j in range(i, compNo):
if i == j:
Dij[i][j] = 0
else:
Dij[i][j] = (1e-4)*(0.0018583)*sqrt((T**3)*((1/MWi[i]) + (1/MWi[j]))) \
* (1/((P*9.86923e-6)*(sigmaij[i][j]**2)*omegaij[i][j]))
# based on Blanc's law
Dij_Cal = np.zeros((compNo, compNo))
# diagonal matrix
Dij_Transpose = np.transpose(Dij)
Dij_New = Dij + Dij_Transpose
for i in range(compNo):
for j in range(compNo):
if i == j:
Dij_Cal[i][j] = 0
else:
Dij_Cal[i][j] = MoFri[j]/Dij_New[i][j]
# mixture diffusivity coefficient D[i]
Di = np.zeros(compNo)
for k in range(compNo):
Di[k] = np.sum(Dij_Cal[k, :])**(-1)
# res
return Di
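# Illustrative only (not part of the original module): a minimal call sketch for
# calGasDiffusivity assuming a binary H2/CO mixture; the values below are rough
# placeholders, not validated data.
#   Di = calGasDiffusivity(1, ["H2", "CO"], {
#       "MoFri": [0.5, 0.5], "T": 523.0, "P": 101325.0,
#       "MWi": [2.016, 28.01], "CrTei": [33.2, 132.9], "CrPri": [13.0, 34.9]})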
# NOTE
### viscosity ###
def calGasVisEq1(params, T):
"""
gas viscosity equation 1 - Pa.s
args:
params:
equation parameters list [A,B,C,D]
T: temperature [K]
"""
# try/except
try:
A = params[0]
B = params[1]
C = params[2]
D = params[3]
_res = A*1e-6*(T**B)/(1+C*(1/T)+D*(T**-2))
return _res
except Exception as e:
raise
def calGasVisEq2(eqExpr, T):
"""
gas viscosity equation - Pa.s
args:
eqExpr: equation expression
T: temperature [K]
"""
# try/except
try:
return eval(eqExpr)
except Exception as e:
raise
def calGasViscosity(comList, T):
"""
cal: gas viscosity at low pressure
unit: [Pa.s]
args:
comList: component name list
T: temperature [K]
"""
# try/except
try:
# heat capacity
_Vii = []
# load data
loadEqData = viscosityEqList
loadData = viscosityList
for i in comList:
# get id
eqIdData = [item['id']
for item in loadEqData if i == item['symbol']]
# get eq parameters
eqData = [{"eqParams": item['eqParams'], "eqExpr": item['eqExpr']}
for item in loadData if i == item['symbol']]
# check
_eqLen = len(eqIdData) + len(eqData)
if _eqLen > 0:
_eqIdSet = eqIdData[0]
_eqData = eqData[0]
if _eqIdSet == 1:
_eqParams = _eqData.get('eqParams')
_res = calGasVisEq1(_eqParams, T)
_Vii.append(_res)
elif _eqIdSet == 2:
_eqExpr = _eqData.get('eqExpr')
# build fun
_res = calGasVisEq2(_eqExpr, T)
_Vii.append(_res)
else:
print('viscosity data not found, update app database!')
raise
else:
print("component not found, update the app database!")
raise
# convert to numpy array
Vii = np.array(_Vii)
# res
return Vii
except Exception as e:
print(e)
# NOTE
### mixture property ###
def calMixturePropertyM1(compNo, Xi, MoFri, MWi):
'''
calculate mixture property M1
Method of Wilke
args:
compNo: component number
Xi: property name []
MoFri: mole fraction [-]
MWi: molecular weight [g/mol]
'''
try:
# wilke res
wilkeCo = np.zeros((compNo, compNo))
for i in range(compNo):
for j in range(compNo):
if i == j:
wilkeCo[i, j] = 1
else:
if i < j:
# wilke coefficient mix
A = 1 + sqrt(Xi[i]/Xi[j])*((MWi[j]/MWi[i])**(1/4))
AA = A**2
B = 8*(1+(MWi[i]/MWi[j]))
BB = sqrt(B)
wilkeCo[i, j] = AA/BB
else:
C = (Xi[i]/Xi[j])*(MWi[j]/MWi[i]) * wilkeCo[j, i]
wilkeCo[i, j] = C
# vars
A = np.zeros(compNo)
B = np.zeros((compNo, compNo))
# mixture property
mixProp = np.zeros(compNo)
for i in range(compNo):
A[i] = Xi[i]*MoFri[i]
for j in range(compNo):
B[i, j] = MoFri[j]*wilkeCo[i, j]
# set
mixProp[i] = A[i]/np.sum(B[i, :])
mixPropVal = np.sum(mixProp)
# res
return mixPropVal
except Exception as e:
print(e)
# NOTE
### thermal conductivity ###
def calGasThermalConductivity(comList, T):
"""
cal: gas thermal conductivity at low pressure
unit: [W/m.K]
args:
comList: component name list
T: temperature [K]
"""
# try/except
try:
# thermal conductivity list
_ThCoi = []
# load data
loadEqData = thermalConductivityEqList
loadData = TherConductivityList
for i in comList:
# get id
eqIdData = [item['id']
for item in loadEqData if i == item['symbol']]
# get eq parameters
eqData = [{"eqParams": item['eqParams'], "eqExpr": item['eqExpr']}
for item in loadData if i == item['symbol']]
# check
_eqLen = len(eqIdData) + len(eqData)
if _eqLen > 0:
_eqIdSet = eqIdData[0]
_eqData = eqData[0]
if _eqIdSet == 1:
_eqParams = _eqData.get('eqParams')
_res = calGasTherCondEq1(_eqParams, T)
_ThCoi.append(_res)
elif _eqIdSet == 2:
_eqExpr = _eqData.get('eqExpr')
# build fun
_res = calGasVisEq2(_eqExpr, T)
_ThCoi.append(_res)
else:
                    print('thermal conductivity data not found, update app database!')
raise
else:
print("component not found, update the app database!")
raise
# convert to numpy array
ThCoi = np.array(_ThCoi)
# res
return ThCoi
except Exception as e:
print(e)
def calGasTherCondEq1(params, T):
"""
gas thermal conductivity equation 1 - W/m.K
args:
params:
equation parameters list [C1, C2, C3, C4]
T: temperature [K]
"""
# try/except
try:
C1 = params[0]
C2 = params[1]
C3 = params[2]
C4 = params[3]
_var1 = C1*(T**C2)
_var2 = 1 + (C3/T) + C4/(T**2)
_res = _var1/_var2
return _res
except Exception as e:
raise
def calGasTherCondEq2(eqExpr, T):
pass
def calTest():
return 1
if __name__ == "__main__":
main()
| 26.386921 | 87 | 0.491945 | 1,108 | 9,684 | 4.226534 | 0.195848 | 0.012812 | 0.033312 | 0.018791 | 0.383728 | 0.36088 | 0.334187 | 0.322229 | 0.28849 | 0.28849 | 0 | 0.021021 | 0.390851 | 9,684 | 366 | 88 | 26.459016 | 0.772843 | 0.195374 | 0 | 0.472637 | 0 | 0 | 0.043484 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054726 | false | 0.00995 | 0.054726 | 0.004975 | 0.159204 | 0.034826 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e3fa0731b9cb564034d63392706a95c4dbcb6a2 | 3,635 | py | Python | spotify.py | hilbix/currentlyplays | 0c9c3da249ab31d68eac3a7c8babd35b1b929ea0 | [
"BSD-3-Clause-No-Nuclear-Warranty"
] | null | null | null | spotify.py | hilbix/currentlyplays | 0c9c3da249ab31d68eac3a7c8babd35b1b929ea0 | [
"BSD-3-Clause-No-Nuclear-Warranty"
] | null | null | null | spotify.py | hilbix/currentlyplays | 0c9c3da249ab31d68eac3a7c8babd35b1b929ea0 | [
"BSD-3-Clause-No-Nuclear-Warranty"
] | null | null | null | #!/usr/bin/env python3
import sys
import json
import codecs
import urllib.request
import itertools
import time
import logging
logging.basicConfig(format='%(asctime)-15s %(message)s', level=logging.DEBUG)
logging = logging.getLogger('spotify')
def log(*args, **kw):
logging.info(*args, **kw)
class OopsException(Exception):
def throw(self):
raise self
def OOPS(s):
OopsException(s).throw()
def prepend(s, iter):
for a in iter:
yield s+str(a)
utf8reader = codecs.getreader('utf-8')
class MissingValueException(Exception):
def __init__(self, id, help):
    super().__init__('missing argument: ' + id)
self.id = id
self.help = help
class Arg(object):
def __init__(self, name, help, default=None):
self.name = name
self.help = help
self.value = default
def set(self, val):
self.value = val
def get(self):
    if self.value is None: raise MissingValueException(self.name, self.help)
return self.value
class ArgumentException(Exception):
pass
class MalformedArgumentException(ArgumentException):
def __init__(self, message):
super().__init__(message)
class UnknownArgumentException(ArgumentException):
def __init__(self, id):
super().__init__('unknown argument: '+id)
class Spotify(object):
setup = Arg('setup', 'Setup file', '~/.spotify.json')
  auth_url = Arg('auth', 'URL used for authentication; use {id}, {uri} and {nonce}', 'https://accounts.spotify.com/authorize?response_type=code&scope=user-read-currently-playing&client_id={id}&redirect_uri={uri}&state={nonce}')
token_url = Arg('token', 'POST-URL to get tokens', 'https://accounts.spotify.com/api/token')
validate_nonce = Arg('nonce', 'Use nonce?', 1)
client_id = Arg('id', 'Client ID from https://developer.spotify.com/my-applications/')
client_secret = Arg('key', 'Client Secret from https://developer.spotify.com/my-applications/')
redirect_uri = Arg('uri', 'Redirect URI from https://developer.spotify.com/my-applications/')
def allargs(self):
for a in dir(self):
b = getattr(self, a)
if isinstance(b, Arg):
yield b
def __init__(self):
self.__allargs__ = { x.name:x for x in self.allargs() }
def usage(self):
for a in self.__allargs__.values():
yield "{}={}".format(a.name, a.value)
yield "\t{}".format(a.help)
def arg(self, s):
"str looks like arg=value, sets Arg to given value"
try:
k,v = s.split('=', 1)
except ValueError:
raise MalformedArgumentException("Arguments must have the form key=value")
for a in self.allargs():
if a.name == k:
a.set(v)
return
raise UnknownArgumentException(k)
def args(self, a):
for arg in a:
self.arg(arg)
def args_or_usage(self, a):
try:
self.args(a)
except ArgumentException as e:
yield '\n'.join(itertools.chain(['Exception: '+str(e),'', "List of arguments:"], prepend('\t', self.usage())))
def next(self):
while True:
try:
self.init()
except MissingValueException as e:
yield Ask(e.id, e.help)
def _get(self, rest, parm={}, retry=0):
    url = self.auth_url.get().format(**parm)
OOPS(url)
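    # NOTE: OOPS() raises immediately, so the request loop below is unreachable as written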
for _ in range(retry+1):
try:
log("request {}", url)
req = urllib.request.Request(url)
req.add_header('Authorization', 'Bearer ' + self.__key)
res = urllib.request.urlopen(req)
log("got {}", len(res))
return json.load(utf8reader(res))
except Exception as err:
log('URL failed: {} ({})'.format(url, err))
time.sleep(2)
OOPS(url)
def main(*args):
api = Spotify()
#api.arg('setup=~/.spotify.json')
yield from api.args_or_usage(args)
for act in api.next():
log(act)
if __name__ == '__main__':
for usage in main(*sys.argv[1:]):
print(usage, file=sys.stderr)
sys.exit(23)
| 24.560811 | 226 | 0.68033 | 526 | 3,635 | 4.577947 | 0.323194 | 0.014535 | 0.022841 | 0.031146 | 0.066445 | 0.052326 | 0.052326 | 0 | 0 | 0 | 0 | 0.004575 | 0.158184 | 3,635 | 147 | 227 | 24.727891 | 0.782353 | 0.028336 | 0 | 0.072727 | 0 | 0.009091 | 0.216262 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.172727 | false | 0.009091 | 0.054545 | 0 | 0.381818 | 0.009091 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e3fc6358f25a75fdabe818ccf66c952ae43b7e4 | 3,940 | py | Python | dataset/dataloader.py | kosuke55/DeepDepthDenoising | b1e7b95ca2e03384005b0540f865ddc7066a3e93 | [
"MIT"
] | 1 | 2020-04-15T16:46:56.000Z | 2020-04-15T16:46:56.000Z | dataset/dataloader.py | panluDreamer/DeepDepthDenoising | a994f495cd90193c78c1824367ce1462ae52752f | [
"MIT"
] | null | null | null | dataset/dataloader.py | panluDreamer/DeepDepthDenoising | a994f495cd90193c78c1824367ce1462ae52752f | [
"MIT"
] | null | null | null | import os
import sys
import torch
from torch.utils.data.dataset import Dataset
from torch.utils.data import DataLoader
import importers
import warnings
#testing
# sys.path.append('E:\\Projects\\vsc\\deep_depth_denoising\\denoise')
# import importers
'''
Dataset importer. We assume that data follows the below structure.
root_path
device_repository.json
|
|-----recording_i
| |-----Data
| |-----Calibration
|
|-----recording_i+1
| |-----Data
| |-----Calibration
|
'''
class DataLoaderParams:
def __init__(self
,root_path
,device_list
,decimation_scale = 2
,device_repository_path = "."
,depth_scale = 0.001
,depth_threshold = 5):
self.root_path = root_path
self.device_list = device_list
self.device_repository_path = device_repository_path
self.depth_scale = depth_scale
self.decimation_scale = decimation_scale
self.depth_threshold = depth_threshold
class DataLoad(Dataset):
def __init__(self, params):
super(DataLoad,self).__init__()
self.params = params
device_repo_path = os.path.join(self.params.device_repository_path,"device_repository.json")
if not os.path.exists(device_repo_path):
raise ValueError("{} does not exist, exiting.".format(device_repo_path))
self.device_repository = importers.intrinsics.load_intrinsics_repository(device_repo_path)
root_path = self.params.root_path
if not os.path.exists(root_path):
raise ValueError("{} does not exist, exiting.".format(root_path))
self.data = {}
# iterate over each recorded folder
for recording in os.listdir(root_path):
abs_recording_path = os.path.join(root_path,recording)
if not os.path.isdir(abs_recording_path):
continue
# path where data supposed to be stored
data_path = os.path.join(abs_recording_path,"Data")
if not os.path.exists(data_path):
warnings.warn("Folder {} does not containt \"Data\" folder".format(abs_recording_path))
continue
# path to the calibration of that particular recording
calibration_path = os.path.join(abs_recording_path,"Calibration")
if not os.path.exists(calibration_path):
warnings.warn("Folder {} does not containt \"Calibration\" folder".format(calibration_path))
continue
# data iteration
for file in os.listdir(data_path):
full_filename = os.path.join(data_path,file)
_, ext = os.path.splitext(full_filename)
if ext != ".png":
continue
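                # filenames are expected to look like <frame id>_<device name>_<color|depth>_<suffix>.png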
_id,_name,_type,_ = file.split("_")
unique_name = recording + "-" + str(_id)
# skip names that we do not want to load
if _name not in self.params.device_list:
continue
if unique_name not in self.data:
self.data[unique_name] = {}
self.data[unique_name]["calibration"] = calibration_path
if _name not in self.data[unique_name]:
self.data[unique_name][_name] = {}
self.data[unique_name][_name][_type] = full_filename
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
#get an entry
key = list(self.data.keys())[idx]
datum = self.data[key]
datum_out = {}
for device in self.params.device_list:
color_img = importers.image.load_image(datum[device]["color"])
depth_range_mask = (importers.image.load_depth(datum[device]["depth"]) < self.params.depth_threshold).float()
depth_img = importers.image.load_depth(datum[device]["depth"]) * depth_range_mask
intrinsics, intrinsics_inv = importers.intrinsics.get_intrinsics(\
device, self.device_repository, self.params.decimation_scale)
extrinsics, extrinsics_inv = importers.extrinsics.load_extrinsics(\
os.path.join(datum["calibration"], device + ".extrinsics"))
datum_out.update({
device : {
"color" : color_img.squeeze(0),
"depth" : depth_img.squeeze(0),
"intrinsics" : intrinsics,
"intrinsics_inv" : intrinsics_inv,
"extrinsics" : extrinsics,
"extrinsics_inv" : extrinsics_inv
}})
return datum_out
def get_data(self):
return self.data | 28.759124 | 112 | 0.717005 | 525 | 3,940 | 5.12381 | 0.23619 | 0.026766 | 0.022305 | 0.020446 | 0.250929 | 0.147955 | 0.13829 | 0.05948 | 0 | 0 | 0 | 0.002728 | 0.16269 | 3,940 | 137 | 113 | 28.759124 | 0.812671 | 0.071827 | 0 | 0.05814 | 0 | 0 | 0.080725 | 0.006435 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05814 | false | 0 | 0.151163 | 0.023256 | 0.267442 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e433479b71756a2c40b1abbb67481f86fb0cb80 | 1,328 | py | Python | nuitka/tools/release/msi_upload/__main__.py | Mortal/Nuitka | 5150eeff7ff845ed4993c773449cd81b7f127c6b | [
"Apache-2.0"
] | null | null | null | nuitka/tools/release/msi_upload/__main__.py | Mortal/Nuitka | 5150eeff7ff845ed4993c773449cd81b7f127c6b | [
"Apache-2.0"
] | null | null | null | nuitka/tools/release/msi_upload/__main__.py | Mortal/Nuitka | 5150eeff7ff845ed4993c773449cd81b7f127c6b | [
"Apache-2.0"
] | 1 | 2018-12-16T23:51:18.000Z | 2018-12-16T23:51:18.000Z | # Copyright 2018, Kay Hayen, mailto:kay.hayen@gmail.com
#
# Part of "Nuitka", an optimizing Python compiler that is compatible and
# integrates with CPython, but also works on its own.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
""" Release: Create and upload Windows MSI files for Nuitka
"""
from __future__ import print_function
import os
import subprocess
from nuitka.tools.release.MSI import createMSIPackage
def main():
msi_filename = createMSIPackage()
assert subprocess.call(
(
"scp",
msi_filename,
"git@nuitka.net:/var/www/releases/" + os.path.basename(msi_filename)
),
shell = True # scan scp in PATH.
) == 0
print("OK, uploaded", msi_filename)
if __name__ == "__main__":
main()
| 28.869565 | 80 | 0.679217 | 178 | 1,328 | 4.97191 | 0.640449 | 0.067797 | 0.029379 | 0.036158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008893 | 0.237952 | 1,328 | 45 | 81 | 29.511111 | 0.865613 | 0.61747 | 0 | 0 | 0 | 0 | 0.116183 | 0.068465 | 0 | 0 | 0 | 0 | 0.058824 | 1 | 0.058824 | false | 0 | 0.235294 | 0 | 0.294118 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e4334e2e8ddcb4593ccdd71cdabef09935e9dd9 | 2,878 | py | Python | docker/test_kafka.py | baoGia1404/mongo-kafka | 227534f2906e2ee3c1c562260143d6cb4e279f4f | [
"Apache-2.0"
] | null | null | null | docker/test_kafka.py | baoGia1404/mongo-kafka | 227534f2906e2ee3c1c562260143d6cb4e279f4f | [
"Apache-2.0"
] | null | null | null | docker/test_kafka.py | baoGia1404/mongo-kafka | 227534f2906e2ee3c1c562260143d6cb4e279f4f | [
"Apache-2.0"
] | null | null | null | import names
import pymongo
from bloom_filter import BloomFilter
import random
from datetime import datetime
import logging
logger = logging.getLogger('dev')
logger.setLevel(logging.INFO)
print('Begin to populate data.')
bloom_a = BloomFilter(max_elements=10000, error_rate=0.001)
bloom_b = BloomFilter(max_elements=10000, error_rate=0.001)
set_name = set()
set_company = set()
dict_gender = dict()
client = pymongo.MongoClient("mongodb://localhost:27017")
db = client["competition_monday_night"]
rank_a_coll = db["user_rank_a"]
rank_b_coll = db['user_rank_b']
range_name = 5000
print('Start generating name.')
while len(set_name) <= range_name:
female = names.get_first_name(gender='female')
male = names.get_first_name(gender='male')
set_name.add(male)
set_name.add(female)
dict_gender[male] = 'male'
dict_gender[female] = 'female'
print('{} - {}'.format(male, female))
list_name = list(set_name)
print('Finish generating name.')
print('Start inserting data.')
count = 0
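# simulate ~2 million score updates: the Bloom filters track which player names have
# already been inserted, so each name is inserted once per server and updated afterwards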
while count <= 2000000:
index = random.randrange(range_name - 1, 0, -1)
amount_a = random.randrange(10000, 10, -1)
amount_b = random.randrange(10000, 10, -1)
name = list_name[index]
gender = dict_gender[name]
doc_a = {
'player_id': count + 1,
'name': name,
'gender': gender,
'amount': amount_a,
'update_count': 1,
'server': 'a'
}
doc_b = {
'player_id': count + 1,
'name': name,
'gender': gender,
'amount': amount_b,
'update_count': 1,
'server': 'b'
}
now = datetime.now()
if name not in bloom_a:
doc_a['created_time'] = now
doc_a['updated_time'] = now
a_id = rank_a_coll.insert_one(doc_a).inserted_id
bloom_a.add(name)
else:
a_id = rank_a_coll.update_one(
{
'name': name,
'gender': gender
},
{
'$inc': {
'update_count': 1
},
'$set': {
'updated_time': now,
'amount': amount_a
}
}).upserted_id
if name not in bloom_b:
doc_b['created_time'] = now
doc_b['updated_time'] = now
b_id = rank_b_coll.insert_one(doc_b).inserted_id
bloom_b.add(name)
else:
b_id = rank_b_coll.update_one(
{
'name': name,
'gender': gender
},
{
'$inc': {
'update_count': 1
},
'$set': {
'updated_time': now,
'amount': amount_b
}
}).upserted_id
print('{} - {} - {} - {} - {}'.format(count, a_id, b_id, amount_a, amount_b))
count = count + 1
client.close()
| 27.150943 | 81 | 0.544128 | 346 | 2,878 | 4.265896 | 0.251445 | 0.028455 | 0.03794 | 0.054201 | 0.331978 | 0.220867 | 0.220867 | 0.220867 | 0.166667 | 0.166667 | 0 | 0.031443 | 0.325921 | 2,878 | 105 | 82 | 27.409524 | 0.729381 | 0 | 0 | 0.244898 | 0 | 0 | 0.154327 | 0.017032 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.061224 | 0 | 0.061224 | 0.061224 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e43951d082ca770b82b0ce1dd79d154c0e2e3a3 | 13,466 | py | Python | util/myutils.py | rameshnair007/SR-cycleGAN | a111131fadb2fca45c6655410107e040d6973351 | [
"BSD-3-Clause"
] | 11 | 2020-05-03T08:38:36.000Z | 2021-07-21T09:27:19.000Z | util/myutils.py | rameshnair007/SR-cycleGAN | a111131fadb2fca45c6655410107e040d6973351 | [
"BSD-3-Clause"
] | 2 | 2020-06-08T19:29:11.000Z | 2021-07-01T05:21:36.000Z | util/myutils.py | rameshnair007/SR-cycleGAN | a111131fadb2fca45c6655410107e040d6973351 | [
"BSD-3-Clause"
] | 5 | 2019-07-18T15:25:32.000Z | 2021-10-30T23:34:39.000Z | # -*- coding: utf8 -*-
import nibabel as nib
import os
import random
import math
from skimage.measure import block_reduce
import scipy
from scipy.ndimage.interpolation import zoom
from scipy.ndimage.filters import gaussian_filter
import numpy as np
import cv2
import tensorflow as tf  # needed by mean_squared_error3d below
#import path
win_min=3000
win_max=12000
def get_imgs_fn(file_name, path):
""" Input an image path and name, return an image array """
# return scipy.misc.imread(path + file_name).astype(np.float)
return scipy.misc.imread(path + file_name, mode='RGB')
def crop_sub_imgs_fn(x, size, is_random=False):
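    # NOTE: `crop` is not defined or imported in this module; it is presumably
    # TensorLayer's prepro.crop (an assumption based on the call signature)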
x = crop(x, wrg=64, hrg=64, is_random=is_random)
return x
def crop_sub_imgs_fn3D(img, cropsize, is_random=False):
imgshape = img.shape
if is_random:
x = random.randint(0, imgshape[0]-cropsize)
y = random.randint(0, imgshape[1]-cropsize)
z = random.randint(0, imgshape[2]-cropsize)
else:
x = math.ceil((imgshape[0] - cropsize)/2)
y = math.ceil((imgshape[1] - cropsize)/2)
z = math.ceil((imgshape[2] - cropsize)/2)
img = img[x:x+cropsize, y:y+cropsize, z:z+cropsize]
return img
def train_crop_sub_imgs_fn_andsmall3D(img, batchsize, cropsize, small_size, is_random=False):
imgshape = img.shape
imgbig = np.arange(batchsize*cropsize*cropsize*cropsize, dtype = 'float32').reshape(batchsize, cropsize, cropsize, cropsize, 1)
imgsmall= np.arange(batchsize*small_size*small_size*small_size, dtype = 'float32').reshape(batchsize, small_size, small_size, small_size, 1)
if is_random:
for i in range(0, batchsize):
x = random.randint(0, imgshape[0]-cropsize)
y = random.randint(0, imgshape[1]-cropsize)
z = random.randint(0, imgshape[2]-cropsize)
imgbig[i,:,:,:,0] = img[x:x+cropsize, y:y+cropsize, z:z+cropsize]
else:
for i in range(0, batchsize):
x = math.ceil((imgshape[0] - cropsize)/2)
y = math.ceil((imgshape[1] - cropsize)/2)
z = math.ceil((imgshape[2] - cropsize)/2)
imgbig[i,:,:,:,0] = img[x:x+cropsize, y:y+cropsize, z:z+cropsize]
imgsmall = block_reduce(imgbig, block_size = (1,8,8,8,1), func=np.mean)
imgsmall = zoom(imgsmall, (1,8.,8.,8.,1))
return imgbig, imgsmall
def train_crop_both_imgs_fn_andsmall3D(imgbig, imgsmall, cropsize, is_random=False):
imgshape = imgbig.shape
if is_random:
x = random.randint(0, imgshape[0]-cropsize)
y = random.randint(0, imgshape[1]-cropsize)
z = random.randint(0, imgshape[2]-cropsize)
imgpatchbig = imgbig[x:x+cropsize, y:y+cropsize, z:z+cropsize]
imgpatchsmall = imgsmall[x:x+cropsize, y:y+cropsize, z:z+cropsize]
else:
x = math.ceil((imgshape[0] - cropsize)/2)
y = math.ceil((imgshape[1] - cropsize)/2)
z = math.ceil((imgshape[2] - cropsize)/2)
imgpatchbig = imgbig[x:x+cropsize, y:y+cropsize, z:z+cropsize]
imgpatchsmall = imgsmall[x:x+cropsize, y:y+cropsize, z:z+cropsize]
return imgpatchbig, imgpatchsmall
def train_crop_both_imgs_fn_andsmall(imgbig, imgsmall, cropsize, is_random=False):
imgshape = imgbig.shape
if is_random:
x = random.randint(0, imgshape[0]-cropsize)
y = random.randint(0, imgshape[1]-cropsize)
z = random.randint(0, imgshape[2]-cropsize)
imgpatchbig = imgbig[x:x+cropsize, y:y+cropsize, z]
imgpatchsmall = imgsmall[x:x+cropsize, y:y+cropsize, z]
else:
x = math.ceil((imgshape[0] - cropsize)/2)
y = math.ceil((imgshape[1] - cropsize)/2)
z = math.ceil((imgshape[2] - cropsize)/2)
imgpatchbig = imgbig[x:x+cropsize, y:y+cropsize, z]
imgpatchsmall = imgsmall[x:x+cropsize, y:y+cropsize, z]
return imgpatchbig, imgpatchsmall
def valid_crop_sub_imgs_fn_andsmall3D(img, xaxis, yaxis, zaxis, batchsize, cropsize, small_size, is_random=False):
imgshape = img.shape #(1024, 1024, 64)
imgbig = np.arange(batchsize*cropsize*cropsize*cropsize, dtype = 'float32').reshape(batchsize, cropsize, cropsize, cropsize, 1)
imgsmall= np.arange(batchsize*small_size*small_size*small_size, dtype = 'float32').reshape(batchsize, small_size, small_size, small_size, 1)
if is_random:
for i in range(0, batchsize):
x = random.randint(0, imgshape[0]-cropsize)
y = random.randint(0, imgshape[1]-cropsize)
z = random.randint(0, imgshape[2]-cropsize)
imgbig[i,:,:,:,0] = img[x:x+cropsize, y:y+cropsize, z:z+cropsize]
else:
for i in range(0, batchsize):
x = xaxis
y = yaxis
z = zaxis
imgbig[i,:,:,:,0] = img[x:x+cropsize, y:y+cropsize, z:z+cropsize]
imgsmall = block_reduce(imgbig, block_size = (1,8,8,8,1), func=np.mean)
imgsmall = zoom(imgsmall, (1,8.,8.,8.,1))
return imgsmall
def downsample_fn(x):
# We obtained the LR images by downsampling the HR images using bicubic kernel with downsampling factor r = 4.
#print("before downsample:")
#print(x.shape)
#x = imresize(x, size=[96, 96], interp='bicubic', mode=None)
#print(x.shape)
#gaussian blurring
#x = gaussian_filter(x, 2, order=0, output=None, mode='reflect')
x = zoom(x, (0.125,0.125,1.0)) #8timesdownsampling
#print(x.shape) #(96,96,3)
return x
def downsample_zoom_fn(x):
x = block_reduce(x, block_size = (8, 8, 1), func=np.mean)
x = zoom(x, (8, 8, 1))
return x
def downsample_fn2(x):
x = zoom(x, (1,0.25,0.25))
return x
def normalizationminmax1threhold(data):
print('min/max data: {}/{} => {}/{}'.format(np.min(data),np.max(data),win_min,win_max))
data = np.float32(data)
data[data<win_min] = win_min
data[data>win_max] = win_max
data = data-np.min(data)
max = np.max(data)
data = data - (max / 2.)
data = data / max
return data
def normalizationminmax1(data):
print('min/max data: {}/{}'.format(np.min(data),np.max(data)))
data = np.float32(data)
max = np.max(data)
min = np.min(data)
data = data-min
newmax = np.max(data)
data = (data-(newmax/2)) / (newmax/2.)
#print('this is the minmax of normalization')
#print(np.max(data))
#print(np.min(data))
return data
def normalizationclinicalminmax1(data):
print('min/max data: {}/{}'.format(np.min(data),np.max(data)))
data = np.float32(data)
max = np.max(data)
min = np.min(data)
data = data-min
newmax = np.max(data)
data = (data-(newmax/2)) / (newmax/2.)
#print('this is the minmax of normalization')
#print(np.max(data))
#print(np.min(data))
return data
def normalizationmicrominmax1(data):
print('min/max data: {}/{}'.format(np.min(data),np.max(data)))
data = np.float32(data)
data[data<0.] = 0.
data[data>15000.] = 15000.
max = np.max(data)
min = np.min(data)
data = data-min
newmax = np.max(data)
data = (data-(newmax/2)) / (newmax/2.)
#print('this is the minmax of normalization')
#print(np.max(data))
#print(np.min(data))
return data
def normalizationmin0max1(data):
print('min/max data: {}/{}'.format(np.min(data),np.max(data)))
data = np.float32(data)
data[data<0.] = 0.
data[data>12000] = 12000
#data[data<2000] = 2000
#data = (data-(newmax/2)) / (newmax/2.)
data = data / 12000.
print('this is the minmax of normalization')
print(np.max(data))
print(np.min(data))
return data
def normalizationtominmax(data):
data[data<win_min] = win_min
data[data>win_max] = win_max
data = data-np.min(data)
return data
def normalizationtoimg(data):
print('min/max data: {}/{} => {}/{}'.format(np.min(data),np.max(data),win_min,win_max))
data = data-np.min(data)
data = data * (255.0/np.max(data))
return data
def my_psnr(im1,im2):
mse = ((im1 - im2) ** 2.).mean(axis=None)
rmse = np.sqrt(mse)
psnr = 20.*np.log10(1./rmse)
return psnr
def my_ssim(im1,im2):
mu1 = np.mean(im1)
mu2 = np.mean(im2)
c1 = 1e-4
c2 = 1e-4
sigma1 = np.std(im1)
sigma2 = np.std(im2)
im1 = im1 - mu1
im2 = im2 - mu2
cov12 = np.mean(np.multiply(im1,im2))
ssim = (2*mu1*mu2+c1) * (2*cov12+c2) / (mu1**2+mu2**2+c1) / (sigma1**2 + sigma2**2 + c2)
return ssim
def readnii(path):
dpath = path
img = nib.load(dpath)
#print("this is the shape of img:{}".format(img.shape))
#print(type(img)) #<class 'nibabel.nifti1.Nifti1Image'>
#print("this is the shape of img.affine.shape:{}")
#print("this is the header of img{}".format(img.header))
data = img.get_fdata()
#print(data.shape) #1024*1024*549
#print(type(data)) #<class 'numpy.ndarray'>
return data, img.header
def backtoitensity(path):
#get the header
correspondingimg = nib.load('/homes/tzheng/CTdata/CTMicroNUrespsurg/converted/DICOM_nulung026_cb_003_zf_ringRem.nii.gz')
correspondingheader = correspondingimg.header
empty_header = nib.Nifti1Header()
empty_header = correspondingheader
#print(empty_header)
#print(correspondingimg.affine)
    # normalization causes neuves to stop rendering this properly
thisimg = correspondingimg.get_fdata()
valid_hr_slices = thisimg.shape[2]
dpath = path
img = nib.load(dpath)
data = img.get_fdata()
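    # undo the [0, 1] scaling applied by normalizationmin0max1 (values were divided by 12000)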
data = data * 12000.
thisimg[160:810,160:810,int(valid_hr_slices*0.1/8)*8+10:int(valid_hr_slices*0.1/8)*8+410] = data[10:660,10:660,10:410]
#saveimg = nib.Nifti1Image(data, correspondingimg.affine, empty_header)
saveimg = nib.Nifti1Image(thisimg, correspondingimg.affine, empty_header)
nib.save(saveimg, '/homes/tzheng/Mypythonfiles/densunetdiscirminator/samples/medicaltest/SRbacktoitensity.nii.gz')
def mean_squared_error3d(output, target, is_mean=False, name="mean_squared_error"):
if output.get_shape().ndims == 5: # [batch_size, l, w, h, c]
if is_mean:
mse = tf.reduce_mean(tf.reduce_mean(tf.squared_difference(output, target), [1, 2, 3]), name=name)
else:
mse = tf.reduce_mean(tf.reduce_sum(tf.squared_difference(output, target), [1, 2, 3]), name=name)
else:
raise Exception("Unknow dimension")
return mse
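# fz reverses a sequence; FZ reverses both axes of a 2D array, i.e. a 180-degree rotation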
def fz(a):
return a[::-1]
def FZ(mat):
return np.array(fz(list(map(fz, mat))))
def rotatenii_180():
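    # NOTE: relies on a project-level `config` object (config.VALID.Clinicalmedical_path)
    # that is not imported in this module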
img = nib.load(config.VALID.Clinicalmedical_path)
data = img.get_fdata()
empty_header = nib.Nifti1Header()
empty_header = img.header
for i in range(0, data.shape[2]):
tempimg = FZ(data[:,:,i])
data[:,:,i] = tempimg
data = nib.Nifti1Image(data, img.affine, empty_header)
nib.save(data, '/homes/tzheng/Mypythonfiles/densunetdiscirminator/samples/medicaltest3D/rotatedclinical.nii.gz')
def crop_nifti_2D(img, cropsize, is_random=False):
imgshape = img.shape
if is_random:
x = random.randint(0, imgshape[0]-cropsize)
y = random.randint(0, imgshape[1]-cropsize)
z = random.randint(0, imgshape[2]-1)
imgpatch = img[x:x+cropsize, y:y+cropsize, z]
else:
x = math.ceil((imgshape[0] - cropsize)/2)
y = math.ceil((imgshape[1] - cropsize)/2)
z = math.ceil(imgshape[2] / 2)
imgpatch = img[x:x+cropsize, y:y+cropsize, z]
return imgpatch
def crop_nifti_withpos_2D(img, cropsize, is_random=False):
imgshape = img.shape
if is_random:
x = random.randint(0, imgshape[0]-cropsize)
y = random.randint(0, imgshape[1]-cropsize)
z = random.randint(0, imgshape[2]-1)
imgpatch = img[x:x+cropsize, y:y+cropsize, z]
else:
x = math.ceil((imgshape[0] - cropsize)/2)
y = math.ceil((imgshape[1] - cropsize)/2)
z = math.ceil(imgshape[2] / 2)
imgpatch = img[x:x+cropsize, y:y+cropsize, z]
return imgpatch, (x,y,z)
def dilatenifitimask():
mask = nib.load('/homes/tzheng/CTdata/CTMicroNUrespsurg/Mask/nulung036mask.nii.gz')
maskdata = mask.get_fdata()
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (20,20))
for i in range(maskdata.shape[2]):
maskdata[:,:,i] = cv2.dilate(maskdata[:,:,i], kernel)
dilated = nib.Nifti1Image(maskdata, mask.affine, mask.header)
nib.save(dilated, '/homes/tzheng/CTdata/CTMicroNUrespsurg/Mask/nulung036diatedmask.nii.gz')
if __name__ == '__main__':
    # next experiment: extract the lung region from the clinical CT, normalize it,
    # and compare it with the normalized micro CT
clinical_path = '/homes/tzheng/CTdata/CTMicroNUrespsurg/cct/nulung030.nii.gz'
micro_path = '/homes/tzheng/CTdata/CTMicroNUrespsurg/nii/nulung050/nulung050_053_000.nii.gz'
clinical_mask_path = '/homes/tzheng/CTdata/CTMicroNUrespsurg/Mask/nulung030diatedmask.nii.gz'
clinical = nib.load(clinical_path)
micro = nib.load(micro_path)
clinical_mask = nib.load(clinical_mask_path)
clinical_data = clinical.get_fdata()
micro_data = micro.get_fdata()
clinical_mask_data = clinical_mask.get_fdata()
maxdata = np.max(clinical_data[clinical_mask_data>0])
mindata = np.min(clinical_data[clinical_mask_data>0])
print(maxdata)
print(mindata)
| 37.614525 | 145 | 0.627135 | 1,880 | 13,466 | 4.401596 | 0.144681 | 0.038671 | 0.035529 | 0.055831 | 0.611964 | 0.569426 | 0.513716 | 0.498369 | 0.49281 | 0.49281 | 0 | 0.041091 | 0.226496 | 13,466 | 358 | 146 | 37.614525 | 0.75336 | 0.098025 | 0 | 0.542125 | 0 | 0 | 0.072894 | 0.052457 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102564 | false | 0 | 0.03663 | 0.007326 | 0.230769 | 0.040293 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e4872411e09c87786904c52bfe1ce6a7ab7ae9d | 1,325 | py | Python | archimedes_whiteboard/test/detect_rectangles.py | Whillikers/archimedes-whiteboard | 2b493bff665412507c04b80d200b6dc4bfed2ce2 | [
"MIT"
] | 2 | 2017-12-22T13:32:11.000Z | 2018-08-28T00:46:38.000Z | archimedes_whiteboard/test/detect_rectangles.py | Whillikers/archimedes-whiteboard | 2b493bff665412507c04b80d200b6dc4bfed2ce2 | [
"MIT"
] | 16 | 2017-12-22T12:55:40.000Z | 2019-01-15T05:47:49.000Z | archimedes_whiteboard/test/detect_rectangles.py | Whillikers/archimedes-whiteboard | 2b493bff665412507c04b80d200b6dc4bfed2ce2 | [
"MIT"
] | null | null | null | '''
Test rectangle detection.
Useful for tuning command region detection for your setup.
'''
import cv2
import numpy as np
from matplotlib import pyplot as plt
from archimedes_whiteboard.commands import region_extraction
from archimedes_whiteboard.board_region import get_whiteboard_region_normal
img = cv2.imread('../sample_images/sideangle_highres.jpg')
normalized = get_whiteboard_region_normal(img)
filtered = region_extraction.filter_to_color(normalized, 180,
tol_hue=20,
min_saturation=30,
min_value=150)
rectangles = region_extraction.get_rectangular_boxes(filtered,
blur_size=21,
dilate_size=5)
blank = np.zeros((len(filtered), len(filtered[0])), np.uint8)
plt.figure(1)
plt.tight_layout()
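# 2x2 panel: original image, whiteboard-normalized view, colour-filtered mask,
# and the detected rectangle outlines drawn on a blank canvas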
plt.subplot(221)
plt.imshow(img, cmap='gray', interpolation='none')
plt.subplot(222)
plt.imshow(normalized, cmap='gray', interpolation='none')
plt.subplot(223)
plt.imshow(filtered, cmap='gray', interpolation='none')
plt.subplot(224)
for rect in rectangles:
cv2.polylines(blank, rect, True, (255, 255, 255), 10)
plt.imshow(blank, cmap='gray', interpolation='none')
plt.show()
| 33.125 | 75 | 0.642264 | 156 | 1,325 | 5.301282 | 0.519231 | 0.048368 | 0.101572 | 0.120919 | 0.228537 | 0.126965 | 0 | 0 | 0 | 0 | 0 | 0.042596 | 0.255849 | 1,325 | 39 | 76 | 33.974359 | 0.796146 | 0.064151 | 0 | 0 | 0 | 0 | 0.056818 | 0.030844 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.178571 | 0 | 0.178571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e4c0d4d069e94eccd26149d403b060814bc712a | 5,895 | py | Python | synapse/handlers/events.py | iot-factory/synapse | d3ac8fd87d85bd40d40b475d7a6f12f74ea0ddb0 | [
"Apache-2.0"
] | null | null | null | synapse/handlers/events.py | iot-factory/synapse | d3ac8fd87d85bd40d40b475d7a6f12f74ea0ddb0 | [
"Apache-2.0"
] | null | null | null | synapse/handlers/events.py | iot-factory/synapse | d3ac8fd87d85bd40d40b475d7a6f12f74ea0ddb0 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright 2014, 2015 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from synapse.util.logutils import log_function
from synapse.types import UserID
from synapse.events.utils import serialize_event
from ._base import BaseHandler
import logging
import random
logger = logging.getLogger(__name__)
class EventStreamHandler(BaseHandler):
def __init__(self, hs):
super(EventStreamHandler, self).__init__(hs)
# Count of active streams per user
self._streams_per_user = {}
# Grace timers per user to delay the "stopped" signal
self._stop_timer_per_user = {}
self.distributor = hs.get_distributor()
self.distributor.declare("started_user_eventstream")
self.distributor.declare("stopped_user_eventstream")
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
@defer.inlineCallbacks
def started_stream(self, user):
"""Tells the presence handler that we have started an eventstream for
the user:
Args:
user (User): The user who started a stream.
Returns:
A deferred that completes once their presence has been updated.
"""
if user not in self._streams_per_user:
self._streams_per_user[user] = 0
if user in self._stop_timer_per_user:
try:
self.clock.cancel_call_later(
self._stop_timer_per_user.pop(user)
)
except:
logger.exception("Failed to cancel event timer")
else:
yield self.distributor.fire("started_user_eventstream", user)
self._streams_per_user[user] += 1
def stopped_stream(self, user):
"""If there are no streams for a user this starts a timer that will
notify the presence handler that we haven't got an event stream for
the user unless the user starts a new stream in 30 seconds.
Args:
user (User): The user who stopped a stream.
"""
self._streams_per_user[user] -= 1
if not self._streams_per_user[user]:
del self._streams_per_user[user]
# 30 seconds of grace to allow the client to reconnect again
# before we think they're gone
def _later():
logger.debug("_later stopped_user_eventstream %s", user)
self._stop_timer_per_user.pop(user, None)
return self.distributor.fire("stopped_user_eventstream", user)
logger.debug("Scheduling _later: for %s", user)
self._stop_timer_per_user[user] = (
self.clock.call_later(30, _later)
)
@defer.inlineCallbacks
@log_function
def get_stream(self, auth_user_id, pagin_config, timeout=0,
as_client_event=True, affect_presence=True,
only_room_events=False, room_id=None, is_guest=False):
"""Fetches the events stream for a given user.
If `only_room_events` is `True` only room events will be returned.
"""
auth_user = UserID.from_string(auth_user_id)
try:
if affect_presence:
yield self.started_stream(auth_user)
if timeout:
# If they've set a timeout set a minimum limit.
timeout = max(timeout, 500)
# Add some randomness to this value to try and mitigate against
# thundering herds on restart.
timeout = random.randint(int(timeout*0.9), int(timeout*1.1))
if is_guest:
yield self.distributor.fire(
"user_joined_room", user=auth_user, room_id=room_id
)
events, tokens = yield self.notifier.get_events_for(
auth_user, pagin_config, timeout,
only_room_events=only_room_events,
is_guest=is_guest, guest_room_id=room_id
)
time_now = self.clock.time_msec()
chunks = [
serialize_event(e, time_now, as_client_event) for e in events
]
chunk = {
"chunk": chunks,
"start": tokens[0].to_string(),
"end": tokens[1].to_string(),
}
defer.returnValue(chunk)
finally:
if affect_presence:
self.stopped_stream(auth_user)
class EventHandler(BaseHandler):
@defer.inlineCallbacks
def get_event(self, user, event_id):
"""Retrieve a single specified event.
Args:
user (synapse.types.UserID): The user requesting the event
event_id (str): The event ID to obtain.
Returns:
dict: An event, or None if there is no event matching this ID.
Raises:
SynapseError if there was a problem retrieving this event, or
AuthError if the user does not have the rights to inspect this
event.
"""
event = yield self.store.get_event(event_id)
if not event:
defer.returnValue(None)
return
if hasattr(event, "room_id"):
yield self.auth.check_joined_room(event.room_id, user.to_string())
defer.returnValue(event)
| 33.117978 | 79 | 0.611535 | 734 | 5,895 | 4.722071 | 0.325613 | 0.028275 | 0.032314 | 0.036353 | 0.109059 | 0.070975 | 0.042701 | 0 | 0 | 0 | 0 | 0.007921 | 0.314673 | 5,895 | 177 | 80 | 33.305085 | 0.85 | 0.31179 | 0 | 0.079545 | 0 | 0 | 0.05721 | 0.031348 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068182 | false | 0 | 0.079545 | 0 | 0.193182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e4ec41b420d4b2055ce83967fb7bc8b3ab69604 | 1,513 | py | Python | knee/multi_knee.py | Yifei-Liu/knee | 7c2c7092a2c2dc4c4dac5ebc3b623c5725e0339b | [
"MIT"
] | 2 | 2021-09-03T02:59:10.000Z | 2021-12-28T16:32:28.000Z | knee/multi_knee.py | Yifei-Liu/knee | 7c2c7092a2c2dc4c4dac5ebc3b623c5725e0339b | [
"MIT"
] | 9 | 2021-06-05T08:10:30.000Z | 2022-01-05T20:50:32.000Z | knee/multi_knee.py | Yifei-Liu/knee | 7c2c7092a2c2dc4c4dac5ebc3b623c5725e0339b | [
"MIT"
] | 4 | 2020-12-04T07:04:34.000Z | 2021-09-03T02:59:19.000Z | # coding: utf-8
__author__ = 'Mário Antunes'
__version__ = '0.1'
__email__ = 'mariolpantunes@gmail.com'
__status__ = 'Development'
import typing
import logging
import numpy as np
from knee.linear_fit import linear_fit_points, linear_r2_points
logger = logging.getLogger(__name__)
def multi_knee(get_knee: typing.Callable, points: np.ndarray, t1: float = 0.99, t2: int = 3) -> np.ndarray:
"""
Wrapper that convert a single knee point detection into a multi knee point detector.
It uses recursion on the left and right parts of the curve after detecting the current knee.
Args:
get_knee (typing.Callable): method that returns a single knee point
points (np.ndarray): numpy array with the points (x, y)
t1 (float): the coefficient of determination used as a threshold (default 0.99)
t2 (int): the mininum number of points used as a threshold (default 3)
Returns:
np.ndarray: knee points on the curve
"""
stack = [(0, len(points))]
knees = []
while stack:
left, right = stack.pop()
pt = points[left:right]
if len(pt) > t2:
coef = linear_fit_points(pt)
if linear_r2_points(pt, coef) < t1:
rv = get_knee(pt)
if rv is not None:
idx = rv + left
knees.append(idx)
stack.append((left, idx+1))
stack.append((idx+1, right))
knees.sort()
return np.array(knees)
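# Illustrative only (not part of the original module): multi_knee expects a single-knee
# detector with signature get_knee(points) -> index or None, e.g. (hypothetical name)
#   knees = multi_knee(single_knee_detector, points, t1=0.99, t2=3)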
| 29.096154 | 107 | 0.614673 | 206 | 1,513 | 4.354369 | 0.456311 | 0.040134 | 0.033445 | 0.046823 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020599 | 0.294118 | 1,513 | 51 | 108 | 29.666667 | 0.819288 | 0.358229 | 0 | 0 | 0 | 0 | 0.055255 | 0.026002 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.153846 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e4ec487f567b1e43c2016504da0d132a6856103 | 1,110 | py | Python | docs/source/conf.py | thorium-cloud/boto3-assistant | 480551afbb28b5348aa54e6dee987f2448544e33 | [
"MIT"
] | null | null | null | docs/source/conf.py | thorium-cloud/boto3-assistant | 480551afbb28b5348aa54e6dee987f2448544e33 | [
"MIT"
] | null | null | null | docs/source/conf.py | thorium-cloud/boto3-assistant | 480551afbb28b5348aa54e6dee987f2448544e33 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import sys
sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('..'))
sys.path.insert(0, os.path.abspath(os.path.join('..', '..')))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
extensions = ['sphinx.ext.doctest',
'sphinx.ext.mathjax',
'sphinx.ext.autodoc',
'sphinx.ext.napoleon']
napoleon_google_docstring = False
napoleon_use_param = False
napoleon_use_ivar = True
templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
project = 'Boto3 Assistant'
author = 'Connor Bray'
version = '0.0.1'
release = '0.0.1'
language = None
exclude_patterns = []
pygments_style = 'sphinx'
todo_include_todos = True
html_theme = "sphinx_rtd_theme"
html_sidebars = {
'**': [
'about.html',
'navigation.html',
'relations.html',
'searchbox.html',
'donate.html',
]
}
htmlhelp_basename = 'boto3_assistantdoc'
| 23.617021 | 75 | 0.61982 | 132 | 1,110 | 5.05303 | 0.583333 | 0.035982 | 0.058471 | 0.062969 | 0.121439 | 0.121439 | 0.121439 | 0.121439 | 0.121439 | 0.121439 | 0 | 0.016556 | 0.183784 | 1,110 | 46 | 76 | 24.130435 | 0.719647 | 0.186486 | 0 | 0 | 0 | 0 | 0.268673 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e5011d7f59d04eaaabb4d9d85c4236073aabd2e | 607 | py | Python | stucc_project_backend/stucc_app/urls.py | COdingaorg/stucc_project | cfbd492fab35cbc53806c09345dd463dfdaa5b59 | [
"MIT"
] | null | null | null | stucc_project_backend/stucc_app/urls.py | COdingaorg/stucc_project | cfbd492fab35cbc53806c09345dd463dfdaa5b59 | [
"MIT"
] | null | null | null | stucc_project_backend/stucc_app/urls.py | COdingaorg/stucc_project | cfbd492fab35cbc53806c09345dd463dfdaa5b59 | [
"MIT"
] | null | null | null | from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.login_user, name = 'login'),
url(r'home', views.index, name= 'home'),
url(r'^register_user/$', views.register_user, name = 'register_user'),
url(r'^login_user/$', views.login_user, name = 'login_user'),
url(r'^logout_user/$', views.logout_user, name = 'logout_user'),
url(r'^profile/$', views.add_user_profile, name = 'user_profile'),
url(r'^community/$', views.community, name = 'community'),
url(r'^forums/$', views.forums, name = 'forums'),
url(r'^projects/$', views.projects, name = 'projects'),
] | 40.466667 | 72 | 0.665568 | 85 | 607 | 4.6 | 0.247059 | 0.092072 | 0.061381 | 0.092072 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123558 | 607 | 15 | 73 | 40.466667 | 0.734962 | 0 | 0 | 0 | 0 | 0 | 0.277961 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e505067d5652355b98b246c143a2ce56a8f8eea | 1,633 | py | Python | data_loader/data_loaders.py | ne-bo/protein | eae8e73b1c5c4b6d886fc3cc362b7568b0914796 | [
"MIT"
] | 1 | 2020-01-07T14:52:04.000Z | 2020-01-07T14:52:04.000Z | data_loader/data_loaders.py | ne-bo/protein | eae8e73b1c5c4b6d886fc3cc362b7568b0914796 | [
"MIT"
] | null | null | null | data_loader/data_loaders.py | ne-bo/protein | eae8e73b1c5c4b6d886fc3cc362b7568b0914796 | [
"MIT"
] | null | null | null | import torch
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, SequentialSampler
from base import BaseDataLoader
from data_loader.sampling import UniformSampler
from datasets import protein_channels
import numpy as np
class ProteinDataLoader(DataLoader):
"""
Protein data loading
"""
def __init__(self, config, name, shuffle=False, evaluation=True):
super(ProteinDataLoader, self).__init__(
dataset=protein_channels.ProteinChannelsDataset(config=config, name=name),
batch_size=config['data_loader']['batch_size_%s' % name],
drop_last=False,
shuffle=shuffle
)
if config['sampling'] == 'uniform' and name == 'train' and not evaluation:
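            # draw each training batch from a fixed, small number of classes so every
            # batch contains several samples of the same class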
number_of_different_classes_in_batch = 2
batches_number = self.dataset.__len__() * number_of_different_classes_in_batch // self.batch_size
uniform_sampler = UniformSampler(
data_source=self.dataset,
batches_number=batches_number,
number_of_different_classes_in_batch=number_of_different_classes_in_batch,
batch_size=self.batch_size
)
self.batch_sampler = BatchSampler(
uniform_sampler,
batch_size=self.batch_size,
drop_last=self.drop_last
)
else:
self.batch_sampler = BatchSampler(
SequentialSampler(self.dataset),
batch_size=self.batch_size,
drop_last=self.drop_last
)
self.config = config
| 34.744681 | 109 | 0.645438 | 172 | 1,633 | 5.784884 | 0.319767 | 0.081407 | 0.068342 | 0.096482 | 0.231156 | 0.209045 | 0.084422 | 0.084422 | 0.084422 | 0.084422 | 0 | 0.000855 | 0.28414 | 1,633 | 46 | 110 | 35.5 | 0.850299 | 0.012247 | 0 | 0.166667 | 0 | 0 | 0.027552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.194444 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e50f0f2338f8b6d16287e0a663498c8dda18e18 | 1,231 | py | Python | nlpaug/augmenter/word/glove.py | booltime/nlpaug | d21e51bacd170dcd3dddfc34a401f0215f91dbf1 | [
"MIT"
] | 1 | 2021-09-08T09:18:02.000Z | 2021-09-08T09:18:02.000Z | nlpaug/augmenter/word/glove.py | booltime/nlpaug | d21e51bacd170dcd3dddfc34a401f0215f91dbf1 | [
"MIT"
] | null | null | null | nlpaug/augmenter/word/glove.py | booltime/nlpaug | d21e51bacd170dcd3dddfc34a401f0215f91dbf1 | [
"MIT"
] | null | null | null | import re, os
import numpy as np
from random import randint
from nlpaug.augmenter.word import WordEmbsAugmenter
from nlpaug.util import Action
import nlpaug.model.word_embs as nmw
GLOVE_MODEL = {}
def init_glove_model(model_path, force_reload=False):
"""
Load model once at runtime
"""
global GLOVE_MODEL
if GLOVE_MODEL and not force_reload:
return GLOVE_MODEL
glove = nmw.GloVe()
glove.read(model_path)
GLOVE_MODEL = glove
return GLOVE_MODEL
class GloVeAug(WordEmbsAugmenter):
def __init__(self, model_path='.', model=None, action=Action.SUBSTITUTE,
name='GloVe_Aug', aug_min=1, aug_p=0.3, aug_n=5, tokenizer=None, stopwords=[], force_reload=False,
verbose=0):
super(GloVeAug, self).__init__(
model_path=model_path, aug_n=aug_n,
action=action, name=name, aug_p=aug_p, aug_min=aug_min, tokenizer=tokenizer, stopwords=stopwords,
verbose=verbose)
if model is None:
self.model = self.get_model(force_reload=force_reload)
else:
self.model = model
def get_model(self, force_reload=False):
return init_glove_model(self.model_path, force_reload)
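# Illustrative only (not part of the original source): typical usage, assuming a local
# GloVe vector file (the path below is a placeholder):
#   aug = GloVeAug(model_path='./glove.6B.300d.txt', action=Action.SUBSTITUTE, aug_p=0.3)
#   augmented = aug.augment('The quick brown fox jumps over the lazy dog')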
| 28.627907 | 115 | 0.677498 | 168 | 1,231 | 4.702381 | 0.333333 | 0.101266 | 0.060759 | 0.050633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005308 | 0.234768 | 1,231 | 42 | 116 | 29.309524 | 0.833333 | 0.021121 | 0 | 0.068966 | 0 | 0 | 0.008439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0 | 0.206897 | 0.034483 | 0.448276 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e515c7636285b85cfc314449486588bc9d6dc22 | 316 | py | Python | app/app.py | Emmiwinks/watttron-example | c80535c6be24c07b3e3315b07b70c25f529ce628 | [
"MIT"
] | null | null | null | app/app.py | Emmiwinks/watttron-example | c80535c6be24c07b3e3315b07b70c25f529ce628 | [
"MIT"
] | null | null | null | app/app.py | Emmiwinks/watttron-example | c80535c6be24c07b3e3315b07b70c25f529ce628 | [
"MIT"
] | null | null | null | from flask import Flask
from get_comic import get_comic
app = Flask(__name__)
@app.route("/")
def hello_world():
src = get_comic()
return '<H1>Hello Watttron!</H1> ' \
'<img src="%s" alt="No image could be loaded :/">' % src
if __name__ == '__main__':
app.run(debug=True, host='0.0.0.0')
| 19.75 | 67 | 0.617089 | 48 | 316 | 3.729167 | 0.625 | 0.134078 | 0.03352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024 | 0.208861 | 316 | 15 | 68 | 21.066667 | 0.692 | 0 | 0 | 0 | 0 | 0 | 0.281646 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e518b132a612d57f01788b0aa6d23eed85bb9ec | 1,712 | py | Python | tests/test_3_atn_tools.py | TimKrash/airport-viz | 9a56ae2fd6c7dcc508ca722120e8d5860860d816 | [
"Unlicense"
] | null | null | null | tests/test_3_atn_tools.py | TimKrash/airport-viz | 9a56ae2fd6c7dcc508ca722120e8d5860860d816 | [
"Unlicense"
] | null | null | null | tests/test_3_atn_tools.py | TimKrash/airport-viz | 9a56ae2fd6c7dcc508ca722120e8d5860860d816 | [
"Unlicense"
] | null | null | null | from atnresilience import atn_tools as tools
db_path = 'tests/processed/test_db.sqlite'
processed_direc = 'tests/processed/'
#import os
#dir_path = os.path.dirname(os.getcwd())
#script_dir = dir_path + '/atnresilience'
#os.chdir(script_dir)
#
#import atn_tools as tools
#
#processed_direc = 'tests/processed/'
#db_path = dir_path + '/tests/processed/test_db.sqlite'
def test_weighted_edge():
origin_top_expected = ['ABQ', 'ATL', 'AUS', 'AUS', 'BDL', 'BNA', 'BOS', 'BWI', 'BWI', 'CAK',]
weight_top_expected = [2, 2, 2, 2, 2, 2, 2, 3, 2, 2]
origin_bottom_expected = ['SJU', 'SLC', 'SMF', 'STL', 'STL', 'TPA', 'TUS',]
weight_bottom_expected = [1, 2, 3, 2, 2, 3, 2]
weight_df = tools.weighted_edge(db_path, 2015, 'WN')
origin_top = (weight_df.head(10))['Origin'].tolist()
weight_top = (weight_df.head(10))['Weight'].tolist()
origin_bottom = (weight_df.tail(7))['Origin'].tolist()
weight_bottom = (weight_df.tail(7))['Weight'].tolist()
# return(weight_df.tail(7))
assert origin_top == origin_top_expected and\
weight_top == weight_top_expected and\
origin_bottom == origin_bottom_expected and\
weight_bottom == weight_bottom_expected
def remove_frequency():
    remove_dict = tools.remove_frequency(db_path, 2015, 'WN', 'ADM', 0.1, 0.95, processed_direc)
assert remove_dict['BOX'] == 0 and\
remove_dict['CLT'] == 31 and\
remove_dict['DEN'] == 189 and\
remove_dict['DFW'] == 132 and\
remove_dict['ELP'] == 28 and\
remove_dict['JFK'] == 0 and\
remove_dict['LAX'] == 33 and\
remove_dict['MSP'] == 31 and\
remove_dict['OFF'] == 29 and\
remove_dict['OMA'] == 21 and\
remove_dict['PSP'] == 12 | 34.24 | 97 | 0.642523 | 247 | 1,712 | 4.198381 | 0.323887 | 0.115718 | 0.125362 | 0.015429 | 0.174542 | 0.064609 | 0 | 0 | 0 | 0 | 0 | 0.042113 | 0.181659 | 1,712 | 50 | 98 | 34.24 | 0.698073 | 0.147196 | 0 | 0 | 0 | 0 | 0.110958 | 0.020675 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.066667 | false | 0 | 0.033333 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e51a72fa819afd0c89f12f000c59b2f2fdb02bc | 7,326 | py | Python | pages/predictions.py | coopwilliams/camera_prices | 44a2a4257b8f6cb1cc2ef0dcd0ffd54df4f64bef | [
"MIT"
] | null | null | null | pages/predictions.py | coopwilliams/camera_prices | 44a2a4257b8f6cb1cc2ef0dcd0ffd54df4f64bef | [
"MIT"
] | null | null | null | pages/predictions.py | coopwilliams/camera_prices | 44a2a4257b8f6cb1cc2ef0dcd0ffd54df4f64bef | [
"MIT"
] | null | null | null | import dash
import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
from app import app
import pandas as pd
from joblib import load
pipeline = load('assets/pipeline.joblib')
url = "https://raw.githubusercontent.com/strangelycutlemon/camera_prices/master/modified_camera_prices.csv"
df = pd.read_csv(url)
style = {'padding': '1.5em'}
column1 = dbc.Col(
[
dcc.Markdown('#### Brand'),
dcc.Dropdown(
id='Brand',
options = [
{'label': 'Agfa','value': 'Agfa'},
{'label': 'Canon','value': 'Canon'},
{'label': 'Casio','value': 'Casio'},
{'label': 'Contax','value': 'Contax'},
{'label': 'Epson','value': 'Epson'},
{'label': 'Fujifilm','value': 'Fujifilm'},
{'label': 'HP','value': 'HP'},
{'label': 'JVC','value': 'JVC'},
{'label': 'Kodak','value': 'Kodak'},
{'label': 'Kyocera','value': 'Kyocera'},
{'label': 'Leica','value': 'Leica'},
{'label': 'Nikon','value': 'Nikon'},
{'label': 'Olympus','value': 'Olympus'},
{'label': 'Panasonic','value': 'Panasonic'},
{'label': 'Pentax','value': 'Pentax'},
{'label': 'Ricoh','value': 'Ricoh'},
{'label': 'Samsung','value': 'Samsung'},
{'label': 'Sanyo','value': 'Sanyo'},
{'label': 'Sigma','value': 'Sigma'},
{'label': 'Sony','value': 'Sony'},
{'label': 'Toshiba','value': 'Toshiba'}
],
value = 'Canon',
className='mb-5',
),
dcc.Markdown('#### Release Year'),
dcc.Slider(
id='Release_date',
min=1994,
max=2007,
step=1,
value=2003,
marks={n: str(n) for n in range(1994, 2007,2)},
className='mb-5',
),
dcc.Markdown('#### Maximum Resolution'),
dcc.Slider(
id='Max_resolution',
min=0,
max=5616,
step=16,
value=2400,
marks={n: str(n) for n in range(0,5616,1024)},
className='mb-5',
),
dcc.Markdown('#### Lowest Resolution'),
dcc.Slider(
id='Low_resolution',
min=0,
max=5616,
step=16,
value=1770,
marks={n: str(n) for n in range(0,5616,1024)},
className='mb-5',
),
dcc.Markdown('#### Zoom Wide'),
dcc.Slider(
id='Zoom_wide',
min=0,
max=52,
step=1,
value=33,
marks={n: str(n) for n in range(0,52,4)},
className='mb-5',
),
dcc.Markdown('#### Zoom Tele'),
dcc.Slider(
id='Zoom_tele',
min=0,
max=518,
step=20,
value=121,
marks={n: str(n) for n in range(0,518,50)},
className='mb-5',
),
],
md=4,
)
@app.callback(
Output('out', 'children'),
[Input('Release_date', 'value'), Input('Max_resolution', 'value'),
Input('Low_resolution', 'value'), Input('Effective_pixels', 'value'),
Input('Zoom_wide', 'value'), Input('Zoom_tele', 'value'),
Input('Normal_focus_range', 'value'), Input('Macro_focus_range', 'value'),
Input('Storage_included', 'value'), Input('Weight', 'value'),
Input('Dimensions', 'value'), Input('Brand', 'value'),
]
)
def predict(Release_date, Max_resolution, Low_resolution, Effective_pixels, Zoom_wide, Zoom_tele, Normal_focus_range, Macro_focus_range, Storage_included, Weight, Dimensions, Brand):
df = pd.DataFrame(
columns=['Release_date',
'Max_resolution',
'Low_resolution',
'Effective_pixels',
'Zoom_wide',
'Zoom_tele',
'Normal_focus_range',
'Macro_focus_range',
'Storage_included',
'Weight',
'Dimensions',
'Brand'],
data=[[Release_date,
Max_resolution,
Low_resolution,
Effective_pixels,
Zoom_wide,
Zoom_tele,
Normal_focus_range,
Macro_focus_range,
Storage_included,
Weight,
Dimensions,
Brand]]
)
y_pred = pipeline.predict(df)[0]
return("The price is ${}".format(y_pred))
column2 = dbc.Col(
[
html.Div([
dcc.Markdown("""
"""),
]),
dcc.Markdown('#### Effective MegaPixels'),
dcc.Slider(
id='Effective_pixels',
min=0,
max=21,
step=0.5,
value=5,
marks={n: str(n) for n in range(0,21,3)},
className='mb-5',
),
dcc.Markdown('#### Normal Focus Range'),
dcc.Slider(
id='Normal_focus_range',
min=0,
max=120,
step=5,
value=44,
marks={n: str(n) for n in range(0,120,10)},
className='mb-5',
),
dcc.Markdown('#### Macro Focus Range'),
dcc.Slider(
id='Macro_focus_range',
min=0,
max=85,
step=5,
value=7,
marks={n: str(n) for n in range(0,85,10)},
className='mb-5',
),
dcc.Markdown('#### Storage Included (Gb)'),
dcc.Slider(
id='Storage_included',
min=0,
max=450,
step=10,
value=17,
marks={n: str(n) for n in range(0,450,50)},
className='mb-5',
),
dcc.Markdown('#### Weight (Batteries inc.)'),
dcc.Slider(
id='Weight',
min=0,
max=1860,
step=1,
value=320,
marks={n: str(n) for n in range(0,1860,200)},
className='mb-5',
),
dcc.Markdown('#### Dimensions'),
dcc.Slider(
id='Dimensions',
min=0,
max=240,
step=1,
value=105,
marks={n: str(n) for n in range(0,240,24)},
className='mb-5',
),
]
)
column3 = dbc.Col(
[
html.H2('Expected Price', className='mb-5'),
html.Div(id='out', className='lead'),
# dcc.Input(id='out', value='initial value', type='text')
# html.Td(test_text)
html.Div("This model's predictions are off by $220 on average."),
html.Div("Given these data, the model is able to predict about 60% of the variation in price. ")
]
)
toprow = dbc.Row(
[
html.Div([
dcc.Markdown("""
## Camera Features
Use the controls below to predict the price of a camera from a set of features.
"""),
], style=style),
]
)
row2 = dbc.Row(
[
# dcc.Markdown('## Camera Features'),
column1,
column2,
column3
]
)
layout = dbc.Row([toprow, row2])
| 29.304 | 182 | 0.462599 | 761 | 7,326 | 4.363995 | 0.249671 | 0.049684 | 0.046974 | 0.033123 | 0.314363 | 0.25685 | 0.23246 | 0.23246 | 0.206865 | 0.153869 | 0 | 0.042973 | 0.377423 | 7,326 | 249 | 183 | 29.421687 | 0.685157 | 0.014742 | 0 | 0.283843 | 0 | 0 | 0.236412 | 0.00305 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004367 | false | 0 | 0.034935 | 0 | 0.039301 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e54807bd2660b08164b66f588e9657b51e4fe7f | 10,127 | py | Python | tests/test_bulk_insert.py | VeranosTech/alembic | 7c352124d089c138bb063c9241eb55f929a06742 | [
"MIT"
] | 1 | 2021-03-08T08:40:18.000Z | 2021-03-08T08:40:18.000Z | tests/test_bulk_insert.py | VeranosTech/alembic | 7c352124d089c138bb063c9241eb55f929a06742 | [
"MIT"
] | null | null | null | tests/test_bulk_insert.py | VeranosTech/alembic | 7c352124d089c138bb063c9241eb55f929a06742 | [
"MIT"
] | 6 | 2018-05-10T01:19:33.000Z | 2019-10-07T02:01:01.000Z | from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import MetaData
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy.sql import column
from sqlalchemy.sql import table
from sqlalchemy.types import TypeEngine
from alembic import op
from alembic.migration import MigrationContext
from alembic.testing import assert_raises_message
from alembic.testing import config
from alembic.testing import eq_
from alembic.testing.fixtures import op_fixture
from alembic.testing.fixtures import TestBase
class BulkInsertTest(TestBase):
def _table_fixture(self, dialect, as_sql):
context = op_fixture(dialect, as_sql)
t1 = table(
"ins_table",
column("id", Integer),
column("v1", String()),
column("v2", String()),
)
return context, t1
def _big_t_table_fixture(self, dialect, as_sql):
context = op_fixture(dialect, as_sql)
t1 = Table(
"ins_table",
MetaData(),
Column("id", Integer, primary_key=True),
Column("v1", String()),
Column("v2", String()),
)
return context, t1
def _test_bulk_insert(self, dialect, as_sql):
context, t1 = self._table_fixture(dialect, as_sql)
op.bulk_insert(
t1,
[
{"id": 1, "v1": "row v1", "v2": "row v5"},
{"id": 2, "v1": "row v2", "v2": "row v6"},
{"id": 3, "v1": "row v3", "v2": "row v7"},
{"id": 4, "v1": "row v4", "v2": "row v8"},
],
)
return context
def _test_bulk_insert_single(self, dialect, as_sql):
context, t1 = self._table_fixture(dialect, as_sql)
op.bulk_insert(t1, [{"id": 1, "v1": "row v1", "v2": "row v5"}])
return context
def _test_bulk_insert_single_bigt(self, dialect, as_sql):
context, t1 = self._big_t_table_fixture(dialect, as_sql)
op.bulk_insert(t1, [{"id": 1, "v1": "row v1", "v2": "row v5"}])
return context
def test_bulk_insert(self):
context = self._test_bulk_insert("default", False)
context.assert_(
"INSERT INTO ins_table (id, v1, v2) VALUES (:id, :v1, :v2)"
)
def test_bulk_insert_wrong_cols(self):
context = op_fixture("postgresql")
t1 = table(
"ins_table",
column("id", Integer),
column("v1", String()),
column("v2", String()),
)
op.bulk_insert(t1, [{"v1": "row v1"}])
context.assert_(
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (%(id)s, %(v1)s, %(v2)s)"
)
def test_bulk_insert_no_rows(self):
context, t1 = self._table_fixture("default", False)
op.bulk_insert(t1, [])
context.assert_()
def test_bulk_insert_pg(self):
context = self._test_bulk_insert("postgresql", False)
context.assert_(
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (%(id)s, %(v1)s, %(v2)s)"
)
def test_bulk_insert_pg_single(self):
context = self._test_bulk_insert_single("postgresql", False)
context.assert_(
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (%(id)s, %(v1)s, %(v2)s)"
)
def test_bulk_insert_pg_single_as_sql(self):
context = self._test_bulk_insert_single("postgresql", True)
context.assert_(
"INSERT INTO ins_table (id, v1, v2) VALUES (1, 'row v1', 'row v5')"
)
def test_bulk_insert_pg_single_big_t_as_sql(self):
context = self._test_bulk_insert_single_bigt("postgresql", True)
context.assert_(
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (1, 'row v1', 'row v5')"
)
def test_bulk_insert_mssql(self):
context = self._test_bulk_insert("mssql", False)
context.assert_(
"INSERT INTO ins_table (id, v1, v2) VALUES (:id, :v1, :v2)"
)
def test_bulk_insert_inline_literal_as_sql(self):
context = op_fixture("postgresql", True)
class MyType(TypeEngine):
pass
t1 = table("t", column("id", Integer), column("data", MyType()))
op.bulk_insert(
t1,
[
{"id": 1, "data": op.inline_literal("d1")},
{"id": 2, "data": op.inline_literal("d2")},
],
)
context.assert_(
"INSERT INTO t (id, data) VALUES (1, 'd1')",
"INSERT INTO t (id, data) VALUES (2, 'd2')",
)
def test_bulk_insert_as_sql(self):
context = self._test_bulk_insert("default", True)
context.assert_(
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (1, 'row v1', 'row v5')",
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (2, 'row v2', 'row v6')",
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (3, 'row v3', 'row v7')",
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (4, 'row v4', 'row v8')",
)
def test_bulk_insert_as_sql_pg(self):
context = self._test_bulk_insert("postgresql", True)
context.assert_(
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (1, 'row v1', 'row v5')",
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (2, 'row v2', 'row v6')",
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (3, 'row v3', 'row v7')",
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (4, 'row v4', 'row v8')",
)
def test_bulk_insert_as_sql_mssql(self):
context = self._test_bulk_insert("mssql", True)
# SQL server requires IDENTITY_INSERT
# TODO: figure out if this is safe to enable for a table that
# doesn't have an IDENTITY column
context.assert_(
"SET IDENTITY_INSERT ins_table ON",
"GO",
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (1, 'row v1', 'row v5')",
"GO",
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (2, 'row v2', 'row v6')",
"GO",
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (3, 'row v3', 'row v7')",
"GO",
"INSERT INTO ins_table (id, v1, v2) "
"VALUES (4, 'row v4', 'row v8')",
"GO",
"SET IDENTITY_INSERT ins_table OFF",
"GO",
)
def test_bulk_insert_from_new_table(self):
context = op_fixture("postgresql", True)
t1 = op.create_table(
"ins_table",
Column("id", Integer),
Column("v1", String()),
Column("v2", String()),
)
op.bulk_insert(
t1,
[
{"id": 1, "v1": "row v1", "v2": "row v5"},
{"id": 2, "v1": "row v2", "v2": "row v6"},
],
)
context.assert_(
"CREATE TABLE ins_table (id INTEGER, v1 VARCHAR, v2 VARCHAR)",
"INSERT INTO ins_table (id, v1, v2) VALUES "
"(1, 'row v1', 'row v5')",
"INSERT INTO ins_table (id, v1, v2) VALUES "
"(2, 'row v2', 'row v6')",
)
def test_invalid_format(self):
context, t1 = self._table_fixture("sqlite", False)
assert_raises_message(
TypeError, "List expected", op.bulk_insert, t1, {"id": 5}
)
assert_raises_message(
TypeError,
"List of dictionaries expected",
op.bulk_insert,
t1,
[(5,)],
)
class RoundTripTest(TestBase):
__only_on__ = "sqlite"
def setUp(self):
self.conn = config.db.connect()
self.conn.execute(
"""
create table foo(
id integer primary key,
data varchar(50),
x integer
)
"""
)
context = MigrationContext.configure(self.conn)
self.op = op.Operations(context)
self.t1 = table("foo", column("id"), column("data"), column("x"))
def tearDown(self):
self.conn.execute("drop table foo")
self.conn.close()
def test_single_insert_round_trip(self):
self.op.bulk_insert(self.t1, [{"data": "d1", "x": "x1"}])
eq_(
self.conn.execute("select id, data, x from foo").fetchall(),
[(1, "d1", "x1")],
)
def test_bulk_insert_round_trip(self):
self.op.bulk_insert(
self.t1,
[
{"data": "d1", "x": "x1"},
{"data": "d2", "x": "x2"},
{"data": "d3", "x": "x3"},
],
)
eq_(
self.conn.execute("select id, data, x from foo").fetchall(),
[(1, "d1", "x1"), (2, "d2", "x2"), (3, "d3", "x3")],
)
def test_bulk_insert_inline_literal(self):
class MyType(TypeEngine):
pass
t1 = table("foo", column("id", Integer), column("data", MyType()))
self.op.bulk_insert(
t1,
[
{"id": 1, "data": self.op.inline_literal("d1")},
{"id": 2, "data": self.op.inline_literal("d2")},
],
multiinsert=False,
)
eq_(
self.conn.execute("select id, data from foo").fetchall(),
[(1, "d1"), (2, "d2")],
)
def test_bulk_insert_from_new_table(self):
t1 = self.op.create_table(
"ins_table",
Column("id", Integer),
Column("v1", String()),
Column("v2", String()),
)
self.op.bulk_insert(
t1,
[
{"id": 1, "v1": "row v1", "v2": "row v5"},
{"id": 2, "v1": "row v2", "v2": "row v6"},
],
)
eq_(
self.conn.execute(
"select id, v1, v2 from ins_table order by id"
).fetchall(),
[(1, u"row v1", u"row v5"), (2, u"row v2", u"row v6")],
)
| 31.746082 | 79 | 0.506962 | 1,207 | 10,127 | 4.057167 | 0.114333 | 0.083725 | 0.080049 | 0.07719 | 0.740045 | 0.667552 | 0.587298 | 0.538085 | 0.481928 | 0.463958 | 0 | 0.036298 | 0.347092 | 10,127 | 318 | 80 | 31.845912 | 0.704325 | 0.012541 | 0 | 0.477444 | 0 | 0.003759 | 0.231168 | 0 | 0 | 0 | 0 | 0.003145 | 0.06015 | 1 | 0.093985 | false | 0.007519 | 0.056391 | 0 | 0.18797 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e58460534c347eda95c7a6ad92b660db3c1900d | 4,932 | py | Python | sympy/polys/tests/test_ring_series.py | shipci/sympy | 4b59927bed992b980c9b3faac01becb36feef26b | [
"BSD-3-Clause"
] | 319 | 2016-09-22T15:54:48.000Z | 2022-03-18T02:36:58.000Z | sympy/polys/tests/test_ring_series.py | curzel-it/KiPyCalc | 909c783d5e6967ea58ca93f875106d8a8e3ca5db | [
"MIT"
] | 9 | 2016-11-03T21:56:41.000Z | 2020-08-09T19:27:37.000Z | sympy/polys/tests/test_ring_series.py | curzel-it/KiPyCalc | 909c783d5e6967ea58ca93f875106d8a8e3ca5db | [
"MIT"
] | 27 | 2016-10-06T16:05:32.000Z | 2022-03-18T02:37:00.000Z | from sympy import Rational
from sympy.polys.domains import ZZ, QQ
from sympy.polys.rings import ring
from sympy.polys.ring_series import (_invert_monoms, rs_integrate,
rs_trunc, rs_mul, rs_square, rs_pow, _has_constant_term,
rs_series_inversion, rs_series_from_list, rs_exp, rs_log, rs_newton,
rs_hadamard_exp, rs_compose_add)
from sympy.utilities.pytest import raises
def test_ring_series1():
R, x = ring('x', QQ)
p = x**4 + 2*x**3 + 3*x + 4
assert _invert_monoms(p) == 4*x**4 + 3*x**3 + 2*x + 1
assert rs_hadamard_exp(p) == x**4/24 + x**3/3 + 3*x + 4
R, x = ring('x', QQ)
p = x**4 + 2*x**3 + 3*x + 4
assert rs_integrate(p, x) == x**5/5 + x**4/2 + 3*x**2/2 + 4*x
R, x, y = ring('x, y', QQ)
p = x**2*y**2 + x + 1
assert rs_integrate(p, x) == x**3*y**2/3 + x**2/2 + x
assert rs_integrate(p, y) == x**2*y**3/3 + x*y + y
def test_trunc():
R, x, y, t = ring('x, y, t', QQ)
p = (y + t*x)**4
p1 = rs_trunc(p, x, 3)
assert p1 == y**4 + 4*y**3*t*x + 6*y**2*t**2*x**2
def test_mul_trunc():
R, x, y, t = ring('x, y, t', QQ)
p = 1 + t*x + t*y
for i in range(2):
p = rs_mul(p, p, t, 3)
assert p == 6*x**2*t**2 + 12*x*y*t**2 + 6*y**2*t**2 + 4*x*t + 4*y*t + 1
p = 1 + t*x + t*y + t**2*x*y
p1 = rs_mul(p, p, t, 2)
assert p1 == 1 + 2*t*x + 2*t*y
R1, z = ring('z', QQ)
def test1(p):
p2 = rs_mul(p, z, x, 2)
raises(ValueError, lambda: test1(p))
p1 = 2 + 2*x + 3*x**2
p2 = 3 + x**2
assert rs_mul(p1, p2, x, 4) == 2*x**3 + 11*x**2 + 6*x + 6
def test_square_trunc():
R, x, y, t = ring('x, y, t', QQ)
p = (1 + t*x + t*y)*2
p1 = rs_mul(p, p, x, 3)
p2 = rs_square(p, x, 3)
assert p1 == p2
p = 1 + x + x**2 + x**3
assert rs_square(p, x, 4) == 4*x**3 + 3*x**2 + 2*x + 1
def test_pow_trunc():
R, x, y, z = ring('x, y, z', QQ)
p0 = y + x*z
p = p0**16
for xx in (x, y, z):
p1 = rs_trunc(p, xx, 8)
p2 = rs_pow(p0, 16, xx, 8)
assert p1 == p2
p = 1 + x
p1 = rs_pow(p, 3, x, 2)
assert p1 == 1 + 3*x
assert rs_pow(p, 0, x, 2) == 1
assert rs_pow(p, -2, x, 2) == 1 - 2*x
p = x + y
assert rs_pow(p, 3, y, 3) == x**3 + 3*x**2*y + 3*x*y**2
def test_has_constant_term():
R, x, y, z = ring('x, y, z', QQ)
p = y + x*z
assert _has_constant_term(p, x)
p = x + x**4
assert not _has_constant_term(p, x)
p = 1 + x + x**4
assert _has_constant_term(p, x)
p = x + y + x*z
def test_inversion():
R, x = ring('x', QQ)
p = 2 + x + 2*x**2
n = 5
p1 = rs_series_inversion(p, x, n)
assert rs_trunc(p*p1, x, n) == 1
R, x, y = ring('x, y', QQ)
p = 2 + x + 2*x**2 + y*x + x**2*y
p1 = rs_series_inversion(p, x, n)
assert rs_trunc(p*p1, x, n) == 1
R, x, y = ring('x, y', QQ)
p = 1 + x + y
def test2(p):
p1 = rs_series_inversion(p, x, 4)
raises(NotImplementedError, lambda: test2(p))
def test_series_from_list():
R, x = ring('x', QQ)
p = 1 + 2*x + x**2 + 3*x**3
c = [1, 2, 0, 4, 4]
r = rs_series_from_list(p, c, x, 5)
pc = R.from_list(list(reversed(c)))
r1 = rs_trunc(pc.compose(x, p), x, 5)
assert r == r1
R, x, y = ring('x, y', QQ)
c = [1, 3, 5, 7]
p1 = rs_series_from_list(x + y, c, x, 3, concur=0)
p2 = rs_trunc((1 + 3*(x+y) + 5*(x+y)**2 + 7*(x+y)**3), x, 3)
assert p1 == p2
R, x = ring('x', QQ)
h = 25
p = rs_exp(x, x, h) - 1
p1 = rs_series_from_list(p, c, x, h)
p2 = 0
for i, cx in enumerate(c):
p2 += cx*rs_pow(p, i, x, h)
assert p1 == p2
def test_log():
R, x = ring('x', QQ)
p = 1 + x
p1 = rs_log(p, x, 4)
assert p1 == x - x**2/2 + x**3/3
    p = 1 + x + 2*x**2/3
p1 = rs_log(p, x, 9)
assert p1 == -17*x**8/648 + 13*x**7/189 - 11*x**6/162 - x**5/45 + \
7*x**4/36 - x**3/3 + x**2/6 + x
p2 = rs_series_inversion(p, x, 9)
p3 = rs_log(p2, x, 9)
assert p3 == -p1
R, x, y = ring('x, y', QQ)
p = 1 + x + 2*y*x**2
p1 = rs_log(p, x, 6)
assert p1 == (4*x**5*y**2 - 2*x**5*y - 2*x**4*y**2 + x**5/5 + 2*x**4*y -
x**4/4 - 2*x**3*y + x**3/3 + 2*x**2*y - x**2/2 + x)
def test_exp():
R, x = ring('x', QQ)
p = x + x**4
for h in [10, 30]:
q = rs_series_inversion(1 + p, x, h) - 1
p1 = rs_exp(q, x, h)
q1 = rs_log(p1, x, h)
assert q1 == q
p1 = rs_exp(p, x, 30)
assert p1.coeff(x**29) == QQ(74274246775059676726972369, 353670479749588078181744640000)
prec = 21
p = rs_log(1 + x, x, prec)
p1 = rs_exp(p, x, prec)
assert p1 == x + 1
def test_newton():
R, x = ring('x', QQ)
p = x**2 - 2
r = rs_newton(p, x, 4)
f = [1, 0, -2]
assert r == 8*x**4 + 4*x**2 + 2
def test_compose_add():
R, x = ring('x', QQ)
p1 = x**3 - 1
p2 = x**2 - 2
assert rs_compose_add(p1, p2) == x**6 - 6*x**4 - 2*x**3 + 12*x**2 - 12*x - 7
| 28.842105 | 92 | 0.482563 | 1,066 | 4,932 | 2.12758 | 0.093809 | 0.030864 | 0.013228 | 0.027778 | 0.347002 | 0.239418 | 0.190035 | 0.143298 | 0.115961 | 0.104497 | 0 | 0.117235 | 0.304745 | 4,932 | 170 | 93 | 29.011765 | 0.544182 | 0 | 0 | 0.231788 | 0 | 0 | 0.013179 | 0 | 0 | 0 | 0 | 0 | 0.218543 | 1 | 0.092715 | false | 0 | 0.033113 | 0 | 0.125828 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e5ab3bc02b9c05f13e23c33d22191c75c4e068d | 1,902 | py | Python | dig.py | baltpeter/albert-extensions | 616adac2e23b878695d027bd0fb2253c0e7c8cd1 | [
"MIT"
] | 5 | 2020-10-24T11:45:55.000Z | 2022-03-04T20:53:03.000Z | dig.py | baltpeter/albert-extensions | 616adac2e23b878695d027bd0fb2253c0e7c8cd1 | [
"MIT"
] | null | null | null | dig.py | baltpeter/albert-extensions | 616adac2e23b878695d027bd0fb2253c0e7c8cd1 | [
"MIT"
] | 1 | 2020-08-05T23:54:46.000Z | 2020-08-05T23:54:46.000Z | # -*- coding: utf-8 -*-
"""Issue DNS requests through Albert."""
from albert import *
import subprocess
import ipaddress
__title__ = "Dig"
__version__ = "0.4.1"
__triggers__ = "dig"
__authors__ = "Benjamin Altpeter"
defaultIcon = iconLookup("network-server")
def handleQuery(query):
if not query.isTriggered:
return
results = []
args = query.string.split()
if len(args) > 0:
try:
cmd = ["dig", "+noall", "+noclass", "+answer"]
if isIp(args[0]): cmd.append("-x")
cmd.append(args[0])
if len(args) > 1: cmd.append(args[1])
sp = subprocess.run(cmd, stdout=subprocess.PIPE)
output = list(map(lambda line: line.split(), sp.stdout.decode("utf-8").split("\n")))
for line in output:
if len(line) > 3:
value = " ".join(line[3:])
results.append(
Item(
id=__title__,
icon=defaultIcon,
text=value + " :: " + line[2],
subtext=line[0] + " :: TTL = " + line[1],
completion=value,
actions=[
ClipAction("Copy value", value)
]
)
)
except:
pass
else:
results.append(
Item(
id=__title__,
icon=defaultIcon,
text="Enter your query to lookup DNS records.",
subtext="First argument is the domain or IP, (optional) second argument is the record type.",
completion=query.rawString
)
)
return results
def isIp(ip):
try:
ipaddress.ip_address(ip)
return True
    except ValueError:
        return False
| 27.565217 | 109 | 0.460042 | 180 | 1,902 | 4.722222 | 0.533333 | 0.017647 | 0.021176 | 0.044706 | 0.101176 | 0.101176 | 0.101176 | 0.101176 | 0 | 0 | 0 | 0.013736 | 0.425868 | 1,902 | 68 | 110 | 27.970588 | 0.764652 | 0.029968 | 0 | 0.185185 | 0 | 0 | 0.120174 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0.018519 | 0.055556 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e5b415fad131da99ecb5c75c74bf92faafd85f0 | 22,268 | py | Python | python/dgl/batched_heterograph.py | pioy/dgl | bc2fff15910f4bad8d533b47ad7ba4956cd28cfb | [
"Apache-2.0"
] | null | null | null | python/dgl/batched_heterograph.py | pioy/dgl | bc2fff15910f4bad8d533b47ad7ba4956cd28cfb | [
"Apache-2.0"
] | null | null | null | python/dgl/batched_heterograph.py | pioy/dgl | bc2fff15910f4bad8d533b47ad7ba4956cd28cfb | [
"Apache-2.0"
] | null | null | null | """Classes and functions for batching multiple heterographs together."""
from collections.abc import Iterable
from . import backend as F
from . import heterograph_index
from .base import ALL, is_all
from .frame import FrameRef, Frame
from .heterograph import DGLHeteroGraph
__all__ = ['BatchedDGLHeteroGraph', 'unbatch_hetero', 'batch_hetero']
class BatchedDGLHeteroGraph(DGLHeteroGraph):
"""Class for batched DGLHeteroGraphs.
A :class:`BatchedDGLHeteroGraph` basically merges a list of small graphs into a giant
graph so that one can perform message passing and readout over a batch of graphs
simultaneously.
For a given node/edge type, the nodes/edges are re-indexed with a new id in the
batched graph with the rule below:
====== ========== ======================== === ==========================
item Graph 1 Graph 2 ... Graph k
====== ========== ======================== === ==========================
raw id 0, ..., N1 0, ..., N2 ... ..., Nk
new id 0, ..., N1 N1 + 1, ..., N1 + N2 + 1 ... ..., N1 + ... + Nk + k - 1
====== ========== ======================== === ==========================
    Modifying the features of a :class:`BatchedDGLHeteroGraph` has no effect on the original
    graphs. See the examples below for how to work around this.
Parameters
----------
gidx : HeteroGraphIndex
Graph index object.
ntypes : list of str, pair of list of str
Node type list. ``ntypes[i]`` stores the name of node type i.
If a pair is given, the graph created is a uni-directional bipartite graph,
and its SRC node types and DST node types are given as in the pair.
etypes : list of str
Edge type list. ``etypes[i]`` stores the name of edge type i.
node_frames : list of FrameRef, optional
Node feature storage. If None, empty frame is created.
Otherwise, ``node_frames[i]`` stores the node features
of node type i. (default: None)
edge_frames : list of FrameRef, optional
Edge feature storage. If None, empty frame is created.
Otherwise, ``edge_frames[i]`` stores the edge features
of edge type i. (default: None)
batch_size : int
Number of heterogeneous graphs in the batch.
batch_num_nodes : list of list
batch_num_nodes[i][j] gives the number of nodes of the i-th type
in the j-th graph.
batch_num_edges : list of list
batch_num_edges[i][j] gives the number of edges of the i-th type
in the j-th graph.
Examples
--------
>>> import dgl
>>> import torch as th
**Example 1**
We start with a simple example.
>>> # Create the first graph and set features for nodes of type 'user'
>>> g1 = dgl.heterograph({('user', 'plays', 'game'): [(0, 0), (1, 0)]})
>>> g1.nodes['user'].data['h1'] = th.tensor([[0.], [1.]])
>>> # Create the second graph and set features for nodes of type 'user'
>>> g2 = dgl.heterograph({('user', 'plays', 'game'): [(0, 0)]})
>>> g2.nodes['user'].data['h1'] = th.tensor([[0.]])
>>> # Batch the graphs
>>> bg = dgl.batch_hetero([g1, g2])
With the batching operation, the nodes and edges are re-indexed.
>>> bg.nodes('user')
tensor([0, 1, 2])
By default, we also copy and concatenate all the node and edge features.
>>> bg.nodes['user'].data['h1']
tensor([[0.],
[1.],
[0.]])
**Example 2**
    We will now see a more complex example and the
    various operations one can perform on a batched graph.
>>> g1 = dgl.heterograph({
... ('user', 'follows', 'user'): [(0, 1), (1, 2)],
... ('user', 'plays', 'game'): [(0, 0), (1, 0)]
... })
>>> g1.nodes['user'].data['h1'] = th.tensor([[0.], [1.], [2.]])
>>> g1.nodes['user'].data['h2'] = th.tensor([[3.], [4.], [5.]])
>>> g1.nodes['game'].data['h1'] = th.tensor([[0.]])
>>> g1.edges['plays'].data['h1'] = th.tensor([[0.], [1.]])
>>> g2 = dgl.heterograph({
... ('user', 'follows', 'user'): [(0, 1), (1, 2)],
... ('user', 'plays', 'game'): [(0, 0), (1, 0)]
... })
>>> g2.nodes['user'].data['h1'] = th.tensor([[0.], [1.], [2.]])
>>> g2.nodes['user'].data['h2'] = th.tensor([[3.], [4.], [5.]])
>>> g2.nodes['game'].data['h1'] = th.tensor([[0.]])
>>> g2.edges['plays'].data['h1'] = th.tensor([[0.], [1.]])
Merge two :class:`~dgl.DGLHeteroGraph` objects into one :class:`BatchedDGLHeteroGraph` object.
When merging a list of graphs, we can choose to include only a subset of the attributes.
>>> # For edge types, only canonical edge types are allowed to avoid ambiguity.
>>> bg = dgl.batch_hetero([g1, g2], node_attrs={'user': ['h1', 'h2'], 'game': None},
... edge_attrs={('user', 'plays', 'game'): 'h1'})
>>> list(bg.nodes['user'].data.keys())
['h1', 'h2']
>>> list(bg.nodes['game'].data.keys())
[]
>>> list(bg.edges['follows'].data.keys())
[]
>>> list(bg.edges['plays'].data.keys())
['h1']
We can get a brief summary of the graphs that constitute the batched graph.
>>> bg.batch_size
2
>>> bg.batch_num_nodes('user')
[3, 3]
>>> bg.batch_num_edges(('user', 'plays', 'game'))
[2, 2]
Updating the attributes of the batched graph has no effect on the original graphs.
>>> bg.nodes['game'].data['h1'] = th.tensor([[1.], [1.]])
>>> g2.nodes['game'].data['h1']
tensor([[0.]])
Instead, we can decompose the batched graph back into a list of graphs and use them
to replace the original graphs.
>>> g3, g4 = dgl.unbatch_hetero(bg) # returns a list of DGLHeteroGraph objects
>>> g4.nodes['game'].data['h1']
tensor([[1.]])
"""
def __init__(self,
gidx,
ntypes,
etypes,
node_frames,
edge_frames,
batch_size,
batch_num_nodes,
batch_num_edges):
super(BatchedDGLHeteroGraph, self).__init__(gidx=gidx,
ntypes=ntypes,
etypes=etypes,
node_frames=node_frames,
edge_frames=edge_frames)
# extra members
self._batch_size = batch_size
self._batch_num_nodes = batch_num_nodes
self._batch_num_edges = batch_num_edges
@property
def batch_size(self):
"""Number of graphs in this batch.
Returns
-------
int
Number of graphs in this batch."""
return self._batch_size
def batch_num_nodes(self, ntype=None):
"""Return the numbers of nodes of the given type for all heterographs in the batch.
Parameters
----------
ntype : str, optional
The node type. Can be omitted if there is only one node type
in the graph. (Default: None)
Returns
-------
list of int
The ith element gives the number of nodes of the specified type in the ith graph.
Examples
--------
>>> g1 = dgl.heterograph({
... ('user', 'follows', 'user'): [(0, 1), (1, 2)],
... ('user', 'plays', 'game'): [(0, 0), (1, 0), (2, 1), (3, 1)]
... })
>>> g2 = dgl.heterograph({
... ('user', 'follows', 'user'): [(0, 1), (1, 2)],
... ('user', 'plays', 'game'): [(0, 0), (1, 0), (2, 1)]
... })
>>> bg = dgl.batch_hetero([g1, g2])
>>> bg.batch_num_nodes('user')
[4, 3]
>>> bg.batch_num_nodes('game')
[2, 2]
"""
return self._batch_num_nodes[self.get_ntype_id(ntype)]
def batch_num_edges(self, etype=None):
"""Return the numbers of edges of the given type for all heterographs in the batch.
Parameters
----------
etype : str or tuple of str, optional
The edge type. Can be omitted if there is only one edge type
in the graph.
Returns
-------
list of int
The ith element gives the number of edges of the specified type in the ith graph.
Examples
--------
>>> g1 = dgl.heterograph({
... ('user', 'follows', 'user'): [(0, 1), (1, 2)],
... ('user', 'follows', 'developer'): [(0, 1), (1, 2)],
... ('user', 'plays', 'game'): [(0, 0), (1, 0), (2, 1), (3, 1)]
... })
>>> g2 = dgl.heterograph({
... ('user', 'follows', 'user'): [(0, 1), (1, 2)],
... ('user', 'follows', 'developer'): [(0, 1), (1, 2)],
... ('user', 'plays', 'game'): [(0, 0), (1, 0), (2, 1)]
... })
>>> bg = dgl.batch_hetero([g1, g2])
>>> bg.batch_num_edges('plays')
[4, 3]
>>> # 'follows' is ambiguous and we use ('user', 'follows', 'user') instead.
>>> bg.batch_num_edges(('user', 'follows', 'user'))
[2, 2]
"""
return self._batch_num_edges[self.get_etype_id(etype)]
def to(self, ctx, **kwargs): # pylint: disable=invalid-name
"""Move ndata, edata and graph structure to the targeted device context (cpu/gpu).
Parameters
----------
ctx : Framework-specific device context object
The context to move data to.
        kwargs : Keyword arguments.
            Keyword arguments fed to the framework copy function.
Returns
-------
g : BatchedDGLHeteroGraph
Moved BatchedDGLHeteroGraph of the targeted mode.
Examples
--------
The following example uses PyTorch backend.
>>> # Create the first graph and set features for nodes of type 'user'
>>> g1 = dgl.heterograph({('user', 'plays', 'game'): [(0, 0), (1, 0)]})
>>> g1.nodes['user'].data['h1'] = th.tensor([[0.], [1.]])
>>> # Create the second graph and set features for nodes of type 'user'
>>> g2 = dgl.heterograph({('user', 'plays', 'game'): [(0, 0)]})
>>> g2.nodes['user'].data['h1'] = th.tensor([[0.]])
>>> # Batch the graphs
>>> bg = dgl.batch_hetero([g1, g2])
>>> # Move the graph topology and features to GPU
>>> bg1 = bg.to(torch.device('cuda:0'))
>>> print(bg1.device)
device(type='cuda', index=0)
>>> print(bg.device)
device(type='cpu')
"""
new_nframes = []
for nframe in self._node_frames:
new_feats = {k : F.copy_to(feat, ctx) for k, feat in nframe.items()}
new_nframes.append(FrameRef(Frame(new_feats)))
new_eframes = []
for eframe in self._edge_frames:
new_feats = {k : F.copy_to(feat, ctx) for k, feat in eframe.items()}
new_eframes.append(FrameRef(Frame(new_feats)))
# TODO(minjie): replace the following line with the commented one to enable GPU graph.
new_gidx = self._graph
#new_gidx = self._graph.copy_to(utils.to_dgl_context(ctx))
return BatchedDGLHeteroGraph(gidx=new_gidx,
ntypes=self.ntypes,
etypes=self.etypes,
node_frames=new_nframes,
edge_frames=new_eframes,
batch_size=self.batch_size,
batch_num_nodes=self._batch_num_nodes,
batch_num_edges=self._batch_num_edges)
def unbatch_hetero(graph):
"""Return the list of heterographs in this batch.
Parameters
----------
graph : BatchedDGLHeteroGraph
The batched heterograph.
Returns
-------
list
        A list of :class:`~dgl.DGLHeteroGraph` objects whose attributes are
obtained by partitioning the attributes of the :attr:`graph`. The length of
the list is the same as the batch size of :attr:`graph`.
Notes
-----
Unbatching will break each field tensor of the batched graph into smaller
partitions.
For simpler tasks such as node/edge state aggregation, try to slice graphs along
edge types and use readout functions.
See Also
--------
batch_hetero
"""
assert isinstance(graph, BatchedDGLHeteroGraph), \
'Expect the input to be of type BatchedDGLHeteroGraph, got type {}'.format(type(graph))
bsize = graph.batch_size
bnn_all_types = graph._batch_num_nodes
bne_all_types = graph._batch_num_edges
ntypes = graph._ntypes
etypes = graph._etypes
node_frames = [[FrameRef(Frame(num_rows=bnn_all_types[tid][i])) for tid in range(len(ntypes))]
for i in range(bsize)]
edge_frames = [[FrameRef(Frame(num_rows=bne_all_types[tid][i])) for tid in range(len(etypes))]
for i in range(bsize)]
for tid in range(len(ntypes)):
for attr, col in graph._node_frames[tid].items():
col_splits = F.split(col, bnn_all_types[tid], dim=0)
for i in range(bsize):
node_frames[i][tid][attr] = col_splits[i]
for tid in range(len(etypes)):
for attr, col in graph._edge_frames[tid].items():
col_splits = F.split(col, bne_all_types[tid], dim=0)
for i in range(bsize):
edge_frames[i][tid][attr] = col_splits[i]
unbatched_graph_indices = heterograph_index.disjoint_partition(
graph._graph, bnn_all_types, bne_all_types)
return [DGLHeteroGraph(gidx=unbatched_graph_indices[i],
ntypes=ntypes,
etypes=etypes,
node_frames=node_frames[i],
edge_frames=edge_frames[i]) for i in range(bsize)]
def batch_hetero(graph_list, node_attrs=ALL, edge_attrs=ALL):
"""Batch a collection of :class:`~dgl.DGLHeteroGraph` and return a
:class:`BatchedDGLHeteroGraph` object that is independent of the :attr:`graph_list`.
Parameters
----------
graph_list : iterable
A collection of :class:`~dgl.DGLHeteroGraph` to be batched.
node_attrs : None or dict
The node attributes to be batched. If ``None``, the resulted graph will not have
features. If ``dict``, it maps str to str or iterable. The keys represent names of
node types and the values represent the node features to be batched for the
corresponding type. By default, we use all features for all types of nodes.
edge_attrs : None or dict
Same as for the case of :attr:`node_attrs`.
Returns
-------
BatchedDGLHeteroGraph
One single batched heterograph
See Also
--------
BatchedDGLHeteroGraph
unbatch_hetero
"""
# Sanity check. Make sure all graphs have the same node/edge types, in the same order.
ref_canonical_etypes = graph_list[0].canonical_etypes
ref_ntypes = graph_list[0].ntypes
ref_etypes = graph_list[0].etypes
for i in range(1, len(graph_list)):
g_i = graph_list[i]
assert g_i.ntypes == ref_ntypes, \
'The node types of graph {:d} and {:d} should be the same.'.format(0, i)
assert g_i.canonical_etypes == ref_canonical_etypes, \
'The canonical edge types of graph {:d} and {:d} should be the same.'.format(0, i)
    # Sanity check. Make sure all graphs have the same node/edge features in terms of name, size
    # and dtype if the number of nodes is nonzero.
ref_node_feats = dict()
for nty in ref_ntypes:
for i, graph in enumerate(graph_list):
# No nodes, skip it
if graph.number_of_nodes(nty) == 0:
continue
# Use this for reference of feature names, shape and dtype
if nty not in ref_node_feats:
ref_node_feats[nty] = (i, graph.node_attr_schemes(nty))
continue
# Name check
assert set(ref_node_feats[nty][1].keys()) == \
set(graph.node_attr_schemes(nty).keys()), \
'The node features of graph {:d} and {:d} for node type {} should be the ' \
'same.'.format(ref_node_feats[nty][0], i, nty)
# Size and dtype check
for nfeats in ref_node_feats[nty][1].keys():
assert ref_node_feats[nty][1][nfeats] == \
graph.node_attr_schemes(nty)[nfeats], \
'For graph {:d} and {:d}, the size and dtype for feature {} of ' \
'{}-typed nodes should be the same.'.format(
ref_node_feats[nty][0], i, nfeats, nty)
ref_edge_feats = dict()
for ety in ref_canonical_etypes:
for i, graph in enumerate(graph_list):
# No edges, skip it
if graph.number_of_edges(ety) == 0:
continue
# Use this for reference of feature names, shape and dtype
if ety not in ref_edge_feats:
ref_edge_feats[ety] = (i, graph.edge_attr_schemes(ety))
continue
# Name check
assert set(ref_edge_feats[ety][1].keys()) == \
set(graph.edge_attr_schemes(ety).keys()), \
'The edge features of graph {:d} and {:d} for edge type {} should be the ' \
'same.'.format(ref_edge_feats[ety][0], i, ety)
# Size and dtype check
for efeats in ref_edge_feats[ety][1].keys():
assert ref_edge_feats[ety][1][efeats] == \
graph.edge_attr_schemes(ety)[efeats], \
'For graph {:d} and {:d}, the size and dtype for feature {} of ' \
'{}-typed edges should be the same.'.format(
ref_edge_feats[ety][0], i, efeats, ety)
def _init_attrs(types, attrs, mode):
formatted_attrs = {t: [] for t in types}
if is_all(attrs):
for typ in types:
if mode == 'node':
# Handle the case where the nodes of a type have no features
formatted_attrs[typ] = list(ref_node_feats.get(
typ, (None, dict()))[1].keys())
elif mode == 'edge':
# Handle the case where the edges of a type have no features
formatted_attrs[typ] = list(ref_edge_feats.get(
typ, (None, dict()))[1].keys())
elif isinstance(attrs, dict):
for typ, v in attrs.items():
if isinstance(v, str):
formatted_attrs[typ] = [v]
elif isinstance(v, Iterable):
formatted_attrs[typ] = list(v)
elif v is not None:
raise ValueError('Expected {} attrs for type {} to be str '
'or iterable, got {}'.format(mode, typ, type(v)))
elif attrs is not None:
raise ValueError('Expected {} attrs to be of type None or dict,'
'got type {}'.format(mode, type(attrs)))
return formatted_attrs
node_attrs = _init_attrs(ref_ntypes, node_attrs, 'node')
edge_attrs = _init_attrs(ref_canonical_etypes, edge_attrs, 'edge')
node_frames = []
for tid, typ in enumerate(ref_ntypes):
if len(node_attrs[typ]) == 0:
            # Empty frames will be created when we instantiate a DGLHeteroGraph.
node_frames.append(None)
else:
# NOTE: following code will materialize the columns of the input graphs.
cols = {key: F.cat([gr._node_frames[tid][key] for gr in graph_list
if gr.number_of_nodes(typ) > 0], dim=0)
for key in node_attrs[typ]}
node_frames.append(FrameRef(Frame(cols)))
edge_frames = []
for tid, typ in enumerate(ref_canonical_etypes):
if len(edge_attrs[typ]) == 0:
            # Empty frames will be created when we instantiate a DGLHeteroGraph.
edge_frames.append(None)
else:
# NOTE: following code will materialize the columns of the input graphs.
cols = {key: F.cat([gr._edge_frames[tid][key] for gr in graph_list
if gr.number_of_edges(typ) > 0], dim=0)
for key in edge_attrs[typ]}
edge_frames.append(FrameRef(Frame(cols)))
# Create graph index for the batched graph
metagraph = graph_list[0]._graph.metagraph
batched_index = heterograph_index.disjoint_union(
metagraph, [g._graph for g in graph_list])
batch_size = 0
    # Store the number of nodes/edges based on the id of node/edge types as we need
    # to handle both edge type and canonical edge type.
batch_num_nodes = [[] for _ in range(len(ref_ntypes))]
batch_num_edges = [[] for _ in range(len(ref_etypes))]
for grh in graph_list:
if isinstance(grh, BatchedDGLHeteroGraph):
# Handle input graphs that are already batched
batch_size += grh._batch_size
for ntype_id in range(len(ref_ntypes)):
batch_num_nodes[ntype_id].extend(grh._batch_num_nodes[ntype_id])
for etype_id in range(len(ref_etypes)):
batch_num_edges[etype_id].extend(grh._batch_num_edges[etype_id])
else:
batch_size += 1
for ntype_id in range(len(ref_ntypes)):
batch_num_nodes[ntype_id].append(grh._graph.number_of_nodes(ntype_id))
for etype_id in range(len(ref_etypes)):
batch_num_edges[etype_id].append(grh._graph.number_of_edges(etype_id))
return BatchedDGLHeteroGraph(gidx=batched_index,
ntypes=ref_ntypes,
etypes=ref_etypes,
node_frames=node_frames,
edge_frames=edge_frames,
batch_size=batch_size,
batch_num_nodes=batch_num_nodes,
batch_num_edges=batch_num_edges)
| 41.778612 | 98 | 0.55106 | 2,834 | 22,268 | 4.195131 | 0.129146 | 0.02557 | 0.020776 | 0.012953 | 0.460426 | 0.379595 | 0.330053 | 0.276474 | 0.248717 | 0.215998 | 0 | 0.015483 | 0.312601 | 22,268 | 532 | 99 | 41.857143 | 0.76122 | 0.477591 | 0 | 0.138462 | 0 | 0 | 0.069371 | 0.004184 | 0 | 0 | 0 | 0.00188 | 0.035897 | 1 | 0.041026 | false | 0 | 0.030769 | 0 | 0.112821 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e5e78d791411f9e5edc70fab6be1cd213daec5c | 4,308 | py | Python | yat-master/pymodule/yat/cli/command/check.py | opengauss-mirror/Yat | aef107a8304b94e5d99b4f1f36eb46755eb8919e | [
"MulanPSL-1.0"
] | null | null | null | yat-master/pymodule/yat/cli/command/check.py | opengauss-mirror/Yat | aef107a8304b94e5d99b4f1f36eb46755eb8919e | [
"MulanPSL-1.0"
] | null | null | null | yat-master/pymodule/yat/cli/command/check.py | opengauss-mirror/Yat | aef107a8304b94e5d99b4f1f36eb46755eb8919e | [
"MulanPSL-1.0"
] | null | null | null | #!/usr/bin/env python
# encoding=utf-8
"""
Copyright (c) 2021 Huawei Technologies Co.,Ltd.
openGauss is licensed under Mulan PSL v2.
You can use this software according to the terms and conditions of the Mulan PSL v2.
You may obtain a copy of Mulan PSL v2 at:
http://license.coscl.org.cn/MulanPSL2
THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
See the Mulan PSL v2 for more details.
"""
import re
import click
from yat.cli.entry import cli
from yat.errors import YatError
from yat.guard.checker import PathChecker, ListChecker
from yat.guard.checker import all_checkers
from yat.guard.errors import TestGuardError
from yat.guard.report import get_reporter, CombineReporter
from yat.guard.setting import load
@cli.command(name='check', short_help='Checking test cases')
@click.option('-c', '--configure', help='Set the checker configure file')
@click.option('-s', '--src', help='Set the source directory or file, default is .')
@click.option('-l', '--check-list', help='Set the checking list file')
@click.option('-d', '--dest', multiple=True, help='Set the export information, default is report.txt',
default=["file:text:report.txt"])
@click.option('--level', help='Set the level of checking', type=click.Choice(['info', 'warn', 'error']),
default='warn')
@click.option('-v', '--verbose', is_flag=True, help='Verbose output the checking detail')
@click.option('--not-yat', is_flag=True, help='the src is not a Yat suite directory')
@click.option('-e', '--filters', multiple=True, help='Set the filter to filter file to check')
@click.option('-r', '--checkers', help='Set the checker list to use')
@click.option('--list-checkers', help='List all checkers', is_flag=True)
def check(configure, src, check_list, dest, level, verbose, not_yat, filters, checkers, list_checkers):
"""
    Check test cases and report the results to dest
\b
valid dest url format:
* file:text:/path/to/report.txt
* file:excel:/path/to/report.xlsx
* file:json:/path/to/report.json
* file:html:/path/to/report.html
* db:zenith:username/password@host:port
\b
    the check list file is a plain file containing a list of file paths, one per line (split with '\\n')
    a valid list file can be a txt file or -, which means standard input
\b
yat check [-c /path/to/configure/path ] [-s /path/to/suite/dir] -d <dest_url>
yat check [-c /path/to/configure/path ] [-s /path/to/suite/dir] -d <dest_url> [--not-yat]
yat check [-c /path/to/configure/path ] [-s /path/to/suite/dir] -d <dest_url> [--not-yat] [-e '^.*/testcase/.*\\.(sh|sql|py|go)' ...]
"""
try:
if list_checkers:
for name in all_checkers().keys():
print("\t{}".format(name))
return
reporters = []
for single_dest in dest:
reporter = get_reporter(single_dest)
if reporter is None:
raise YatError("can not found reporter with given url: %s" % single_dest)
reporters.append(reporter)
reporter = CombineReporter(*reporters)
if src is None and check_list is None:
src = '.'
if src is not None and check_list is not None:
            raise YatError('-s/--src and -l/--check-list are mutually exclusive; choose one to begin the check')
filters = _make_filters(filters)
if configure:
load(configure)
if checkers:
checkers = checkers.split(',')
if src:
success = PathChecker(
src, reporter,
verbose=verbose, yat=not not_yat,
filters=filters, checkers=checkers).check()
else:
success = ListChecker(
check_list, reporter,
verbose=verbose, filters=filters,
checkers=checkers).check()
if success:
print('Checking Passed')
else:
raise YatError('Checking Not Passed')
except TestGuardError as e:
raise YatError(str(e))
def _make_filters(filters):
if len(filters) == 0:
return None
else:
return [re.compile(it) for it in filters]
| 37.137931 | 137 | 0.639044 | 597 | 4,308 | 4.567839 | 0.328308 | 0.040337 | 0.025669 | 0.014301 | 0.133847 | 0.060506 | 0.060506 | 0.060506 | 0.060506 | 0.060506 | 0 | 0.003342 | 0.236072 | 4,308 | 115 | 138 | 37.46087 | 0.825281 | 0.290622 | 0 | 0.046154 | 0 | 0.015385 | 0.219791 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030769 | false | 0.030769 | 0.138462 | 0 | 0.215385 | 0.030769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e5e7ac35950be8b558860d29d15a13a99ad9d61 | 1,409 | py | Python | ansible_tests/testinfra_launcher.py | TanguyPatte/ansible-tests | e4657c73a572232d23f7fffdbbaa8abc2ba3e895 | [
"MIT"
] | 3 | 2018-11-16T16:14:05.000Z | 2018-11-30T15:44:35.000Z | ansible_tests/testinfra_launcher.py | TanguyPatte/ansible-tests | e4657c73a572232d23f7fffdbbaa8abc2ba3e895 | [
"MIT"
] | null | null | null | ansible_tests/testinfra_launcher.py | TanguyPatte/ansible-tests | e4657c73a572232d23f7fffdbbaa8abc2ba3e895 | [
"MIT"
] | null | null | null | import subprocess
import re
import os
class TestinfraLauncher:
def __init__(self, yaml_loader):
self.yaml_loader = yaml_loader
def run_test(self, file_name, inventory, current_path):
tests = self.yaml_loader.load(file_name)
for test in tests:
command = build_command(test, inventory, current_path)
subprocess.run(command.split(' '))
def build_tests_path(roles, current_path):
result = ''
if isinstance(roles, str):
return ' {}/roles/{}/tests'.format(current_path, roles)
for role in roles:
if isinstance(role, dict):
if os.path.exists('{}/roles/{}/tests'.format(current_path, role['role'])):
result += ' {}/roles/{}/tests'.format(current_path, role['role'])
else:
if os.path.exists('{}/roles/{}/tests'.format(current_path, role)):
result += ' {}/roles/{}/tests'.format(current_path, role)
return result
def build_command(test_content, inventory, current_path):
hosts = build_hosts_list(test_content['hosts'])
roles = test_content['roles']
tests_path = build_tests_path(roles, current_path)
return 'pytest --connection=ansible --ansible-inventory=' + inventory + ' --hosts=ansible://' + hosts + tests_path
def build_hosts_list(hosts):
if isinstance(hosts, list):
return ','.join(hosts)
return re.sub(':', ',', hosts)
| 33.547619 | 118 | 0.64159 | 170 | 1,409 | 5.105882 | 0.258824 | 0.126728 | 0.092166 | 0.132488 | 0.298387 | 0.267281 | 0.198157 | 0.193548 | 0.103687 | 0.103687 | 0 | 0 | 0.215046 | 1,409 | 41 | 119 | 34.365854 | 0.78481 | 0 | 0 | 0 | 0 | 0 | 0.125621 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15625 | false | 0 | 0.09375 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e5ecec120d588653e1ab06cc9bf1785b3679a24 | 6,134 | py | Python | main.py | ayanbag/Image_Captioning_using_RNN_and_CNN | 7304c6a135798d7aa5b34ef6e7fbbc2039edfd4b | [
"MIT"
] | null | null | null | main.py | ayanbag/Image_Captioning_using_RNN_and_CNN | 7304c6a135798d7aa5b34ef6e7fbbc2039edfd4b | [
"MIT"
] | null | null | null | main.py | ayanbag/Image_Captioning_using_RNN_and_CNN | 7304c6a135798d7aa5b34ef6e7fbbc2039edfd4b | [
"MIT"
] | null | null | null | import tensorflow as tf
from PIL import Image
import numpy as np
import time
from tqdm import tqdm
from nltk.translate import bleu
from nltk.translate.bleu_score import sentence_bleu
from evaluate import evaluate, bleu_score
from model import Attention, CNN_Encoder, RNN_Decoder
#from loss import loss_function
from preprocessing import datalimit, train_test_split
from utils import load_image, standardize, map_func, plot_attention
from data_download import data_download
from train import train_step
BATCH_SIZE = 64
BUFFER_SIZE = 1000
embedding_dim = 256
units = 512
features_shape = 2048
attention_features_shape = 64
limit = 1000 # number of images to use for training
def main(start=True):
annotation_file, PATH = data_download(data=True) # download data
train_captions, img_name_vector = datalimit(limit, annotation_file, PATH)
image_model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet') # download model
new_input = image_model.input
hidden_layer = image_model.layers[-1].output
image_features_extract_model = tf.keras.Model(new_input, hidden_layer)
encode_train = sorted(set(img_name_vector))
image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)
image_dataset = image_dataset.map(load_image, num_parallel_calls=tf.data.AUTOTUNE).batch(16)
for img, path in tqdm(image_dataset):
batch_features = image_features_extract_model(img)
batch_features = tf.reshape(batch_features,(batch_features.shape[0], -1, batch_features.shape[3]))
for bf, p in zip(batch_features, path):
path_of_feature = p.numpy().decode("utf-8")
np.save(path_of_feature, bf.numpy())
caption_dataset = tf.data.Dataset.from_tensor_slices(train_captions)
max_length = 50
vocabulary_size = 5000
tokenizer = tf.keras.layers.TextVectorization(max_tokens=vocabulary_size,standardize=standardize,output_sequence_length=max_length)
tokenizer.adapt(caption_dataset)
cap_vector = caption_dataset.map(lambda x: tokenizer(x))
word_to_index = tf.keras.layers.StringLookup(mask_token="",vocabulary=tokenizer.get_vocabulary())
index_to_word = tf.keras.layers.StringLookup(mask_token="",vocabulary=tokenizer.get_vocabulary(),invert=True)
img_name_train, cap_train, img_name_val, cap_val = train_test_split(img_name_vector, cap_vector)
num_steps = len(img_name_train) // BATCH_SIZE
dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))
dataset = dataset.map(lambda item1, item2: tf.numpy_function(map_func, [item1, item2], [tf.float32, tf.int64]),num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
dataset = dataset.prefetch(buffer_size=tf.data.AUTOTUNE)
encoder = CNN_Encoder(embedding_dim)
decoder = RNN_Decoder(embedding_dim, units, tokenizer.vocabulary_size())
optimizer = tf.keras.optimizers.Adam()
#loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none')
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(encoder=encoder,decoder=decoder,optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
start_epoch = 0 # start from epoch 0 or last checkpoint epoch
if ckpt_manager.latest_checkpoint:
start_epoch = int(ckpt_manager.latest_checkpoint.split('-')[-1])
ckpt.restore(ckpt_manager.latest_checkpoint)
EPOCHS = 20 # number of epochs to train the model
loss_plot=[] # list to store the loss value for each epoch
for epoch in range(start_epoch, EPOCHS):
start = time.time()
total_loss = 0
for (batch, (img_tensor, target)) in enumerate(dataset):
batch_loss, t_loss = train_step(img_tensor, target)
total_loss += t_loss
if batch % 100 == 0:
average_batch_loss = batch_loss.numpy()/int(target.shape[1])
print(f'Epoch {epoch+1} Batch {batch} Loss {average_batch_loss:.4f}')
loss_plot.append(total_loss / num_steps)
if epoch % 5 == 0:
ckpt_manager.save()
print(f'Epoch {epoch+1} Loss {total_loss/num_steps:.6f}')
print(f'Time taken for 1 epoch {time.time()-start:.2f} sec\n')
# Validation Dataset testing
rid = np.random.randint(0, len(img_name_val))
image = img_name_val[rid]
real_caption = ' '.join([tf.compat.as_text(index_to_word(i).numpy()) for i in cap_val[rid] if i not in [0]])
result, attention_plot = evaluate(image,encoder,decoder,word_to_index,index_to_word,max_length, image_features_extract_model,attention_features_shape)
print('Real Caption:', real_caption)
print('Prediction Caption:', ' '.join(result))
plot_attention(image, result, attention_plot)
# Unseen Data testing
for i in range(1,3):
        image_path = 'data/unseen_image/' + str(i) + '.jpg' # User path to image
        result, attention_plot = evaluate(image_path,encoder,decoder,word_to_index,index_to_word,max_length, image_features_extract_model,attention_features_shape)
print('Prediction Caption:', ' '.join(result))
plot_attention(image_path, result, attention_plot)
# opening the image
Image.open(image_path)
    # Calculate the average bleu score
bleu_score = 0 # initialize the bleu score
image_limit=1000 # number of images to test
for i in tqdm(range(image_limit)):
rid1 = np.random.randint(0, len(img_name_val))
image1 = img_name_val[rid1]
        real_caption1 = ' '.join([tf.compat.as_text(index_to_word(j).numpy()) for j in cap_val[rid1] if j not in [0]])
result1, attention_plot1 = evaluate(image1,encoder,decoder,word_to_index,index_to_word,max_length, image_features_extract_model,attention_features_shape)
        # Score the tokenized reference caption against the predicted tokens
        caption_bleu_score = sentence_bleu([real_caption1.split()], result1) # calculate the bleu score
bleu_score+=caption_bleu_score
print("The average bleu score : ", bleu_score/image_limit)
if __name__ == "__main__":
start=True
main(start) | 43.814286 | 161 | 0.731823 | 867 | 6,134 | 4.906574 | 0.244521 | 0.025388 | 0.015515 | 0.029384 | 0.245651 | 0.228256 | 0.202398 | 0.17701 | 0.141044 | 0.141044 | 0 | 0.01669 | 0.16971 | 6,134 | 140 | 162 | 43.814286 | 0.818575 | 0.082817 | 0 | 0.038835 | 0 | 0 | 0.053645 | 0.01301 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009709 | false | 0 | 0.126214 | 0 | 0.135922 | 0.067961 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e63eb223b0a6bf222f1e2e4bbdcbbe5c8bc2e76 | 2,489 | py | Python | stockpick/stockpickworker.py | Expert68/expQuant | 7f0a6d41d26c02a0d10387dc87bb14e3bc234a2a | [
"MIT"
] | null | null | null | stockpick/stockpickworker.py | Expert68/expQuant | 7f0a6d41d26c02a0d10387dc87bb14e3bc234a2a | [
"MIT"
] | null | null | null | stockpick/stockpickworker.py | Expert68/expQuant | 7f0a6d41d26c02a0d10387dc87bb14e3bc234a2a | [
"MIT"
] | null | null | null | from .pickstockbases import Stockpickbases
import copy
# 选股工人类
class pickstockwoker(Stockpickbases):
def __init__(self, capital, benchmark, choice_symbols, timeseries_manager, stockpickers):
"""
选股要考虑:
资金
基准收益
选股对象
金融时间序列
选股因子序列
"""
self.capital = capital
self.benchmark = benchmark
self.choic_symbols = choice_symbols
self.timeseries = timeseries_manager
self.stockpicker = stockpickers
# 在初始化选股工人的时候同时初始化选股因子
self.init_stock_picker(stockpickers)
# 实例化时打印选股因子序列和选股交易对象
def __str__(self):
return 'stock_pickers:{}\nchoice_symbols:{}'.format(self.stockpicker, self.choic_symbols)
#初始化选股因子
def init_stock_picker(self, stockpickers):
#stockpicker是一个由字典组成的列表
#stockpickers=[{'factor':xxx,'class':xxxclass}]
"""初始化选股因子从选股因子列表中获得各个选股因子"""
if stockpickers:
for picker in stockpickers:
if picker is None:
continue
if 'class' not in picker:
raise ValueError('选股因子中必须有class这个key')
#获得picker_cp
picker_cp = copy.deepcopy(picker)
#将picker_cp中的类剔除掉
fac = picker_cp.pop('class')
#整合capital,benchmark等选股因素
picker = class_fac(self.capital,self.benchmark,**picker_cp)
def fit(self):
"""
选股开始工作
:return:
"""
def _batch_fit():
# if there are no picking factors, return all candidate symbols
if self.stockpickers is None:
return self.choice_symbols
inner_choice_symbols = []
for epoch,target_symbol in enumerate(self.choice_symbols):
add = True
for picker in self.stockpickers:
timeseries = self.timeseries_manager.get_pick_stock_series(target_symbol,picker.xd,picker.min_xd)
# if the symbol's time series is too short, drop the symbol
if timeseries is None:
add = False
break
sub_add = picker.fit_pick(timeseries,target_symbol)
if sub_add is False:
# a single factor voting against the symbol is enough to drop it
add = False
break
if add:
inner_choice_symbols.append(target_symbol)
return inner_choice_symbols
self.choice_symbols = _batch_fit()
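# Illustrative usage sketch (MomentumPicker, ts_manager and the parameter values below are
# hypothetical, shown only to indicate the expected shape of the arguments):
# pickers = [{'class': MomentumPicker, 'xd': 20, 'min_xd': 10}]
# worker = pickstockwoker(capital=100000, benchmark='000300',
#                         choice_symbols=['600036', '000651'],
#                         timeseries_manager=ts_manager, stockpickers=pickers)
# worker.fit()
# print(worker.choice_symbols)  # symbols that passed every factor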
| 28.609195 | 117 | 0.552431 | 220 | 2,489 | 6.018182 | 0.386364 | 0.068731 | 0.036254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.378867 | 2,489 | 86 | 118 | 28.94186 | 0.856404 | 0.120129 | 0 | 0.095238 | 0 | 0 | 0.030435 | 0.016908 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119048 | false | 0 | 0.047619 | 0.02381 | 0.261905 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e649e2d5d19bb6517219553188f6b62f078376d | 1,850 | py | Python | sample-client/cloudfunctions/scheduler/main.py | viperan/storage-sdrs | 403b3f15dd87fed654add6d7eff1dd0816a5864a | [
"Apache-2.0"
] | 10 | 2019-03-08T01:02:41.000Z | 2021-10-09T03:11:44.000Z | sample-client/cloudfunctions/scheduler/main.py | viperan/storage-sdrs | 403b3f15dd87fed654add6d7eff1dd0816a5864a | [
"Apache-2.0"
] | 19 | 2019-04-25T19:58:38.000Z | 2021-12-18T18:12:31.000Z | sample-client/cloudfunctions/scheduler/main.py | viperan/storage-sdrs | 403b3f15dd87fed654add6d7eff1dd0816a5864a | [
"Apache-2.0"
] | 21 | 2019-03-21T19:57:49.000Z | 2021-10-09T03:11:31.000Z | # Copyright 2019 Google LLC. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
#
# Any software provided by Google hereunder is distributed "AS IS", WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, and is not intended for production use.
import base64
import logging
import os
import requests
from common_lib import utils
from common_lib.utils import EVENTS_EXECUTION_ENDPOINT
from common_lib.utils import EVENTS_VALIDATION_ENDPOINT
LOGGER = logging.getLogger('sdrs_cf_scheduler')
LOGGER.setLevel(os.getenv('logLevel'))
def handler(event, context):
"""Takes in a pubsub message and invokes a POST based on the message"""
pub_sub_message = base64.b64decode(event['data']).decode('utf-8')
if pub_sub_message == 'executor':
LOGGER.debug('POST: %s', EVENTS_EXECUTION_ENDPOINT)
response = requests.post(EVENTS_EXECUTION_ENDPOINT, json={'type': 'POLICY'},
headers=utils.get_auth_header())
LOGGER.debug('Response: %s', response.text)
elif pub_sub_message == 'validator':
LOGGER.debug('POST: %s', EVENTS_VALIDATION_ENDPOINT)
response = requests.post(EVENTS_VALIDATION_ENDPOINT,
headers=utils.get_auth_header())
LOGGER.debug('Response: %s', response.text)
else:
LOGGER.warning('Unexpected message from PubSub: %s', pub_sub_message)
return | 37.755102 | 80 | 0.738378 | 257 | 1,850 | 5.202335 | 0.494163 | 0.044877 | 0.038893 | 0.023934 | 0.270755 | 0.186986 | 0.142109 | 0.085266 | 0.085266 | 0.085266 | 0 | 0.009785 | 0.171351 | 1,850 | 49 | 81 | 37.755102 | 0.862361 | 0.427027 | 0 | 0.166667 | 0 | 0 | 0.129808 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.291667 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e66532e2ed89a2252ecb163434d823ba9b98370 | 209,059 | py | Python | qswatplus/QSWATTopology.py | jphuart/swatplus-automatic-workflow | dd2eeb7f882eb2d4ab7e1e5265c10b9beb93ddc4 | [
"MIT"
] | 8 | 2020-06-28T07:50:29.000Z | 2022-01-05T16:29:48.000Z | qswatplus/QSWATTopology.py | jphuart/swatplus-automatic-workflow | dd2eeb7f882eb2d4ab7e1e5265c10b9beb93ddc4 | [
"MIT"
] | null | null | null | qswatplus/QSWATTopology.py | jphuart/swatplus-automatic-workflow | dd2eeb7f882eb2d4ab7e1e5265c10b9beb93ddc4 | [
"MIT"
] | 5 | 2020-06-28T07:50:31.000Z | 2021-08-16T07:09:59.000Z | # -*- coding: utf-8 -*-
"""
/***************************************************************************
QSWAT
A QGIS plugin
Create SWAT inputs
-------------------
begin : 2014-07-18
copyright : (C) 2014 by Chris George
email : cgeorge@mcmaster.ca
***************************************************************************/
/***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************/
"""
# Import the PyQt and QGIS libraries
from PyQt5.QtCore import * # @UnusedWildImport
from PyQt5.QtGui import * # @UnusedWildImport
from qgis.core import * # @UnusedWildImport
from osgeo import gdal
from numpy import * # @UnusedWildImport
import os.path
import time
import csv
import traceback
try:
from QSWATUtils import QSWATUtils, FileTypes, ListFuns
from DBUtils import DBUtils
from parameters import Parameters
from raster import Raster
from dataInC import ReachData, MergedChannelData, LakeData # @UnresolvedImport
except:
# used by convertFromArc
from QSWATUtils import QSWATUtils, FileTypes, ListFuns
from DBUtils import DBUtils
from parameters import Parameters
from raster import Raster
from dataInC import ReachData, MergedChannelData, LakeData # @UnresolvedImport
class QSWATTopology:
"""Module for creating and storing topological data
derived from watershed delineation.
Nomenclature: From TauDEM we have channels and also streams, each with link and wsno values.
We translate as follows:
channel link: channel
channel wsno: chBasin
stream link: stream
stream wsno: [sub]basin
These all have numbers from zero. To avoid zero ids in the output, a SWATchannel etc has an id
of at least one.
Unlike QSWAT, we do not guarantee SWAT identifiers will form a continuous range 1..n.
"""
## Value used to indicate no basin since channel is zero length and has no points.
## Must be negative (since TauDEM basin (WSNO) numbers go from 0 up)
## and must not be -1 since this indicates a 'not found' in most gets, or a main outlet
_NOBASIN = -2
_RESTYPE = 1
_PONDTYPE = 2
_LINKNO = 'LINKNO'
_DSLINKNO = 'DSLINKNO'
_USLINKNO1 = 'USLINKNO1'
_USLINKNO2 = 'USLINKNO2'
_DSNODEID = 'DSNODEID'
_DRAINAREA = 'DS_Cont_Ar' if Parameters._ISWIN else 'DSContArea'
_DRAINAGE = 'Drainage'
_ORDER = 'Order' if Parameters._ISWIN else 'strmOrder'
_LENGTH = 'Length'
_MAGNITUDE = 'Magnitude'
_DROP = 'Drop' if Parameters._ISWIN else 'strmDrop'
_SLOPE = 'Slope'
_STRAIGHTL = 'Straight_L' if Parameters._ISWIN else 'StraightL'
_USCONTAR = 'US_Cont_Ar' if Parameters._ISWIN else 'USContArea'
_WSNO = 'WSNO'
_DOUTEND = 'DOUT_END' if Parameters._ISWIN else 'DOUTEND'
_DOUTSTART = 'DOUT_START' if Parameters._ISWIN else 'DOUTSTART'
_DOUTMID = 'DOUT_MID' if Parameters._ISWIN else 'DOUTMID'
_BASINNO = 'BasinNo'
_ID = 'ID'
_INLET = 'INLET'
_RES = 'RES'
_PTSOURCE = 'PTSOURCE'
_POLYGONID = 'PolygonId'
_DOWNID = 'DownId'
_STREAMLINK = 'StreamLink'
_STREAMLEN = 'StreamLen'
_DSNODEIDW = 'DSNodeID'
_DSWSID = 'DSWSID'
_US1WSID = 'US1WSID'
_US2WSID = 'US2WSID'
_SUBBASIN = 'Subbasin'
_CHANNEL = 'Channel'
_CHANNELR = 'ChannelR'
_LANDSCAPE = 'Landscape'
_AQUIFER = 'Aquifer'
_LSU = 'LSU'
_LSUID = 'LSUID'
_PENWIDTH = 'PenWidth'
_HRUS = 'HRUS'
_HRUGIS = 'HRUGIS'
_LAKEID = 'LakeId'
_RESERVOIR = 'Reservoir'
_POND = 'Pond'
_AREAC = 'AreaC'
_LEN2 = 'Len2'
_SLO2 = 'Slo2'
_WID2 = 'Wid2'
_DEP2 = 'Dep2'
_MINEL = 'MinEl'
_MAXEL = 'MaxEl'
_LAKEIN = 'LakeIn'
_LAKEOUT = 'LakeOut'
_LAKEWITHIN = 'LakeWithin'
_LAKEMAIN = 'LakeMain'
def __init__(self, isBatch):
"""Initialise class variables."""
## Link to project database
self.db = None
## True if outlet end of reach is its first point, i.e. index zero.
self.outletAtStart = True
## index to LINKNO in channel shapefile
self.channelIndex = -1
## index to DSLINKNO in channel shapefile
self.dsChannelIndex = -1
## relation between channel basins and subbasins
# not used with grid models (since would be 1-1)
self.chBasinToSubbasin = dict()
## WSNO does not obey SWAT rules for element numbers (> 0)
# so we invent and store SWATBasin numbers
# also SWAT basins may not be empty
# not complete: zero areas/stream lengths excluded
self.subbasinToSWATBasin = dict()
##inverse map to make it easy to output in numeric order of SWAT Basins
self.SWATBasinToSubbasin = dict()
## original channel links may have zero numbers and may have zero lengths
## so we generate SWATChannel ids
# not complete
self.channelToSWATChannel = dict()
## inverse map
self.SWATChannelToChannel = dict()
## subbasin to stream mapping (wsno to link fields in streams shapefile)
# complete
self.subbasinToStream = dict()
## stream to downstream (link to dslink in streams shapefile)
# complete
self.downStreams = dict()
## stream lengths (link to length in streams shapefile). Lengths are in metres
# complete
self.streamLengths = dict()
## LINKNO to DSLINKNO in channel shapefile
# complete = all channel links defined
self.downChannels = dict()
## zero length channels
self.zeroChannels = set()
## subbasin to downstream subbasin
# incomplete: no empty basins (zero length streams or missing wsno value in subbasins layer)
self.downSubbasins = dict()
## map from channel to chBasin
# incomplete - no zero length channels
self.chLinkToChBasin = dict()
## reverse of chLinkToChBasin
self.chBasinToChLink = dict()
## centroids of basins as (x, y) pairs in projected units
self.basinCentroids = dict()
## channel link to channel length in metres:
# complete
self.channelLengths = dict()
## channel slopes in m/m
# complete
self.channelSlopes = dict()
## numpy array of total area draining to downstream end of channel in square metres
self.drainAreas = None
## map of lake id to ids of points added to split channels entering lakes
self.lakeInlets = dict()
## map of lake id to ids of points added to split channels leaving lakes
self.lakeOutlets = dict()
## map of channel to ReachData: points and elevations at ends of channels, plus basin
# not complete: zero areas/channel lengths excluded
self.channelsData = dict()
## map of lake id to LakeData for lakes defined by shapefile
self.lakesData = dict()
## map of channel links to lake ids: channel flowing into lake
self.chLinkIntoLake = dict()
## map of channel links to lake ids: channel completely inside lake
self.chLinkInsideLake = dict()
## map of channel links to lake ids: channel flowing out of lake
self.chLinkFromLake = dict()
## map of subbasin to lake id for subbasins with their outlet inside a lake (non-grid models only)
self.outletsInLake = dict()
## channel basin to area in square metres. Not used with grid model.
self.chBasinAreas = dict()
## current point id (for outlets, inlets and point sources)
self.pointId = 0
## current water body id (for lakes, reservoirs and ponds)
self.waterBodyId = 0
## channel links to reservoir or pond point ids plus water type: reservoir or pond discharges into channel
self.chLinkToWater = dict()
## channel links with point sources flowing into them (defined by inlets/outlets file)
self.chLinkToPtSrc = dict()
## channel links to watershed inlets (for grid models only)
self.chLinkToInlet = dict()
## basins draining to inlets
self.upstreamFromInlets = set()
## width of DEM cell in metres
self.dx = 0
## depth of DEM cell in metres
self.dy = 0
## x direction threshold for points to be considered coincident
self.xThreshold = 0
## y direction threshold for points to be considered coincident
self.yThreshold = 0
## multiplier to turn DEM elevations to metres
self.verticalFactor = 1
## DEM nodata value
self.demNodata = 0
## DEM extent
self.demExtent = None
## map from subbasin to outlet pointId, point, and channel draining to it
self.outlets = dict()
## map from subbasin to inlet pointId and point (not used with grid models)
self.inlets = dict()
## map from channel links to point sources
self.chPointSources = dict()
## reservoirs found by converting water HRUs
self.foundReservoirs = dict()
## project projection (set from DEM)
self.crsProject = None
## lat-long coordinate reference system
self.crsLatLong = QgsCoordinateReferenceSystem()
if not self.crsLatLong.createFromId(4326, QgsCoordinateReferenceSystem.EpsgCrsId):
QSWATUtils.error('Failed to create lat-long coordinate reference system', isBatch)
## transform from project coordinates to lat-long
self.transformToLatLong = None
## Flag to show if batch run
self.isBatch = isBatch
## table for memorizing distances from basin to join in flowpath with other basin:
# basin -> otherBasin -> join distance in metres
self.distancesToJoins = dict()
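# (illustrative: distancesToJoins[3][7] == 250.0 would record a 250 m distance from basin 3
#  to the point where its flow path joins that of basin 7)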
## table for use in existing non-grid models of maximum channel flow lengths in metres to subbasin outlets
# complete
self.maxFlowLengths = dict()
## number of chunks to use for rasters and their arrays; increased when memory fails
self.chunkCount = 1
## dsNodeIds that cannot be retained when making grids as they would be in same grid cell as another point
self.lostDsNodeIds = set()
def setUp0(self, demLayer, channelLayer, outletLayer, ad8Layer, verticalFactor, useGridModel):
"""Set DEM size parameters and stream orientation, and store source and outlet points for stream reaches."""
# can fail if demLayer is None or not projected
try:
self.setCrs(demLayer)
units = self.crsProject.mapUnits()
except Exception:
QSWATUtils.loginfo('Failure to read DEM units: {0}'.format(traceback.format_exc()))
return False
if units == QgsUnitTypes.DistanceMeters:
factor = 1
elif units == QgsUnitTypes.DistanceFeet:
factor = Parameters._FEETTOMETRES
else:
# unknown or degrees - will be reported in delineation - just quietly fail here
QSWATUtils.loginfo('Failure to read DEM units: {0}'.format(str(units)))
return False
self.dx = demLayer.rasterUnitsPerPixelX() * factor
self.dy = demLayer.rasterUnitsPerPixelY() * factor
self.xThreshold = self.dx * Parameters._NEARNESSTHRESHOLD
self.yThreshold = self.dy * Parameters._NEARNESSTHRESHOLD
QSWATUtils.loginfo('Factor is {0}, cell width is {1}, cell depth is {2}'.format(factor, self.dx, self.dy))
self.demExtent = demLayer.extent()
self.verticalFactor = verticalFactor
self.outletAtStart = self.hasOutletAtStart(channelLayer, ad8Layer)
QSWATUtils.loginfo('Outlet at start is {0!s}'.format(self.outletAtStart))
return self.saveOutletsAndSources(channelLayer, outletLayer, useGridModel)
def setCrs(self, demLayer):
"""Set crsProject and transformToLatLong if necessary."""
if self.crsProject is None:
self.crsProject = demLayer.crs()
self.transformToLatLong = QgsCoordinateTransform(self.crsProject, self.crsLatLong, QgsProject.instance())
QgsProject.instance().setCrs(self.crsProject)
settings = QSettings()
settings.setValue('Projections/defaultBehaviour', 'useProject')
def setUp1(self, streamLayer):
"""Establish subbasinToStream, downStreams and streamLengths dictionaries.
Used when calculating ridges by branch length method and setUp has not been run yet."""
self.subbasinToStream.clear()
self.downStreams.clear()
self.streamLengths.clear()
streamIndex = self.getIndex(streamLayer, QSWATTopology._LINKNO)
if streamIndex < 0:
QSWATUtils.loginfo('No LINKNO field in stream layer')
return False
dsStreamIndex = self.getIndex(streamLayer, QSWATTopology._DSLINKNO)
if dsStreamIndex < 0:
QSWATUtils.loginfo('No DSLINKNO field in stream layer')
return False
lengthIndex = self.getIndex(streamLayer, QSWATTopology._LENGTH, True)
wsnoIndex = self.getIndex(streamLayer, QSWATTopology._WSNO)
if wsnoIndex < 0:
QSWATUtils.loginfo('No WSNO field in stream layer')
return False
for reach in streamLayer.getFeatures():
link = reach[streamIndex]
dsLink = reach[dsStreamIndex]
basin = reach[wsnoIndex]
if lengthIndex < 0:
length = reach.geometry().length()
else:
length = reach[lengthIndex]
self.subbasinToStream[basin] = link
self.downStreams[link] = dsLink
self.streamLengths[link] = length
return True
def setUp(self, demLayer, channelLayer, subbasinsLayer, outletLayer, lakesLayer, gv, existing,
recalculate, useGridModel, streamDrainage, reportErrors):
"""Create topological data from layers."""
#QSWATUtils.loginfo('Channel layer {0}'.format(channelLayer.dataProvider().dataSourceUri()))
#QSWATUtils.loginfo('Subbasins layer {0}'.format(subbasinsLayer.dataProvider().dataSourceUri()))
self.db = gv.db
self.chLinkToChBasin.clear()
self.chBasinToChLink.clear()
self.subbasinToSWATBasin.clear()
self.SWATBasinToSubbasin.clear()
self.channelToSWATChannel.clear()
self.SWATChannelToChannel.clear()
self.downChannels.clear()
self.zeroChannels.clear()
# do not clear centroids unless existing and not using grid model:
if existing and not useGridModel:
self.basinCentroids.clear()
self.channelLengths.clear()
self.channelSlopes.clear()
self.channelsData.clear()
self.chLinkToWater.clear()
self.chLinkToPtSrc.clear()
self.chLinkToInlet.clear()
self.distancesToJoins.clear()
self.maxFlowLengths.clear()
dsNodeToLink = dict()
ignoreError = not reportErrors
ignoreWithExisting = existing or not reportErrors
ignoreWithGrid = useGridModel or not reportErrors
ignoreWithGridOrExisting = ignoreWithGrid or ignoreWithExisting
self.channelIndex = self.getIndex(channelLayer, QSWATTopology._LINKNO, ignoreMissing=ignoreError)
if self.channelIndex < 0:
QSWATUtils.loginfo('No LINKNO field in channels layer')
return False
self.dsChannelIndex = self.getIndex(channelLayer, QSWATTopology._DSLINKNO, ignoreMissing=ignoreError)
if self.dsChannelIndex < 0:
QSWATUtils.loginfo('No DSLINKNO field in channels layer')
return False
dsNodeIndex = self.getIndex(channelLayer, QSWATTopology._DSNODEID, ignoreMissing=ignoreWithExisting)
wsnoIndex = self.getIndex(channelLayer, QSWATTopology._WSNO, ignoreMissing=ignoreError)
if wsnoIndex < 0:
QSWATUtils.loginfo('No WSNO field in channels layer')
return False
drainAreaIndex = self.getIndex(channelLayer, QSWATTopology._DRAINAREA, ignoreMissing=ignoreWithGridOrExisting)
lengthIndex = self.getIndex(channelLayer, QSWATTopology._LENGTH, ignoreMissing=ignoreWithGridOrExisting)
dropIndex = self.getIndex(channelLayer, QSWATTopology._DROP, ignoreMissing=ignoreWithGridOrExisting)
polyIndex = self.getIndex(subbasinsLayer, QSWATTopology._POLYGONID, ignoreMissing=ignoreError)
if polyIndex < 0:
QSWATUtils.loginfo('No POLYGONID field in subbasins layer')
return False
subbasinIndex = self.getIndex(subbasinsLayer, QSWATTopology._SUBBASIN, ignoreMissing=ignoreWithGridOrExisting)
if outletLayer is not None:
if dsNodeIndex < 0:
QSWATUtils.information('Warning: streams layer has no {0} field, so points in inlets/outlets file will be ignored'
.format(QSWATTopology._DSNODEID), gv.isBatch)
idIndex = self.getIndex(outletLayer, QSWATTopology._ID, ignoreMissing=ignoreError)
if idIndex < 0:
QSWATUtils.loginfo('No ID field in outlets layer')
return False
inletIndex = self.getIndex(outletLayer, QSWATTopology._INLET, ignoreMissing=ignoreError)
if inletIndex < 0:
QSWATUtils.loginfo('No INLET field in outlets layer')
return False
ptSourceIndex = self.getIndex(outletLayer, QSWATTopology._PTSOURCE, ignoreMissing=ignoreError)
if ptSourceIndex < 0:
QSWATUtils.loginfo('No PTSOURCE field in outlets layer')
return False
resIndex = self.getIndex(outletLayer, QSWATTopology._RES, ignoreMissing=ignoreError)
if resIndex < 0:
QSWATUtils.loginfo('No RES field in outlets layer')
return False
self.demNodata = demLayer.dataProvider().sourceNoDataValue(1)
if not useGridModel:
# upstream array will get very big for grid
us = dict()
time1 = time.process_time()
maxChLink = 0
SWATChannel = 0
for channel in channelLayer.getFeatures():
chLink = channel[self.channelIndex]
dsChLink = channel[self.dsChannelIndex]
chBasin = channel[wsnoIndex]
geom = channel.geometry()
if lengthIndex < 0 or recalculate:
length = geom.length()
else:
length = channel[lengthIndex]
data = self.getReachData(geom, demLayer)
self.channelsData[chLink] = data
if data and (dropIndex < 0 or recalculate):
drop = data.upperZ - data.lowerZ
elif dropIndex >= 0:
drop = channel[dropIndex]
else:
drop = 0
slope = 0 if length <= 0 else float(drop) / length
dsNode = channel[dsNodeIndex] if dsNodeIndex >= 0 else -1
if useGridModel and chBasin < 0:
# it is the downstream channel link from an inlet, and has no basin
pass
else:
# exit channels in grid model can have zero length
if length > 0 or useGridModel:
self.chLinkToChBasin[chLink] = chBasin
self.chBasinToChLink[chBasin] = chLink
SWATChannel += 1
self.channelToSWATChannel[chLink] = SWATChannel
self.SWATChannelToChannel[SWATChannel] = chLink
else:
self.zeroChannels.add(chLink)
maxChLink = max(maxChLink, chLink)
self.downChannels[chLink] = dsChLink
self.channelLengths[chLink] = length
self.channelSlopes[chLink] = slope
if dsNode >= 0:
dsNodeToLink[dsNode] = chLink
#QSWATUtils.loginfo('DSNode {0} mapped to channel link {1}'.format(dsNode, chLink))
if dsChLink >= 0:
if not useGridModel:
ups = us.get(dsChLink, None)
if ups is None:
us[dsChLink] = [chLink]
else:
ups.append(chLink)
# check we haven't just made the us relation circular
if QSWATTopology.reachable(dsChLink, [chLink], us):
QSWATUtils.error('Circular drainage network from channel link {0}'.format(dsChLink), self.isBatch)
return False
time2 = time.process_time()
QSWATUtils.loginfo('Topology setup for channels took {0} seconds'.format(int(time2 - time1)))
if not useGridModel:
self.setChannelBasinAreas(gv)
if existing:
# need to set centroids
for polygon in subbasinsLayer.getFeatures():
basin = polygon[polyIndex]
centroid = polygon.geometry().centroid().asPoint()
self.basinCentroids[basin] = (centroid.x(), centroid.y())
# find maximum channel flow length for each subbasin
self.setMaxFlowLengths()
time3 = time.process_time()
QSWATUtils.loginfo('Topology setup of subbasin areas and centroids took {0} seconds'.format(int(time3 - time2)))
if outletLayer is not None:
features = outletLayer.getFeatures()
else:
features = []
if dsNodeIndex >= 0:
doneNodes = set()
for point in features:
dsNode = point[idIndex]
if dsNode in doneNodes:
if reportErrors:
QSWATUtils.error('ID value {0} is used more than once in inlets/outlets file {1}. Occurrences after the first are ignored'
.format(dsNode, QSWATUtils.layerFilename(outletLayer)), self.isBatch)
chLink = -1
elif dsNode in self.lostDsNodeIds:
chLink = -1
elif dsNode not in dsNodeToLink:
if reportErrors:
QSWATUtils.error('ID value {0} from inlets/outlets file {1} not found as DSNODEID in channels file {2}. Will be ignored.'
.format(dsNode, QSWATUtils.layerFilename(outletLayer),
QSWATUtils.layerFileInfo(channelLayer).filePath()), self.isBatch)
chLink = -1
else:
chLink = dsNodeToLink[dsNode]
doneNodes.add(dsNode)
if chLink >= 0:
isInlet = point[inletIndex] == 1
isPtSource = point[ptSourceIndex] == 1
isReservoir = point[resIndex] == 1
isPond = point[resIndex] == 2
if lakesLayer is not None:
# check if point is inside lake
inLake = False
for lake in lakesLayer.getFeatures():
lakeGeom = lake.geometry()
lakeRect = lakeGeom.boundingBox()
if QSWATTopology.polyContains(point.geometry().asPoint(), lakeGeom, lakeRect):
inLake = True
if isInlet:
typ = 'Inlet'
elif isPtSource:
typ = 'Point source'
elif isReservoir:
typ = 'Reservoir'
elif isPond:
typ = 'Pond'
else:
# main outlets allowed within lakes
break
lakeIdIndex = lakesLayer.fieldNameIndex(QSWATTopology._LAKEID)
QSWATUtils.information('{0} {1} is inside lake {2}. Will be ignored.'.format(typ, point.id(), lake[lakeIdIndex]), self.isBatch)
break
if inLake:
continue
if isInlet:
if isPtSource:
pt = point.geometry().asPoint()
self.chLinkToPtSrc[chLink] = (self.nonzeroPointId(dsNode), pt)
elif useGridModel: # inlets collected in setUp0 for non-grids
pt = point.geometry().asPoint()
self.chLinkToInlet[chLink] = (self.nonzeroPointId(dsNode), pt)
elif isReservoir:
pt = point.geometry().asPoint()
self.chLinkToWater[chLink] = (self.nonzeroPointId(dsNode), pt, QSWATTopology._RESTYPE)
elif isPond:
pt = point.geometry().asPoint()
self.chLinkToWater[chLink] = (self.nonzeroPointId(dsNode), pt, QSWATTopology._PONDTYPE)
# else an outlet: nothing to do
# check for user-defined outlets coincident with stream junctions
if chLink in self.zeroChannels and chLink not in self.chLinkIntoLake:
if isInlet: typ = 'Inlet'
elif isPtSource: typ = 'Point source'
elif isReservoir: typ = 'Reservoir'
elif isPond: typ = 'Pond'
else: typ = 'Outlet'
msg = '{0} with id {1} has a zero length channel leading to it: please remove or move downstream'.format(typ, dsNode)
if reportErrors:
QSWATUtils.error(msg, self.isBatch)
else:
QSWATUtils.loginfo(msg)
return False
time4 = time.process_time()
QSWATUtils.loginfo('Topology setup for inlets/outlets took {0} seconds'.format(int(time4 - time3)))
# add any extra reservoirs and point sources
# set drainage
# drainAreas is a mapping from channelLink number (used as index to the array) to channel basin or grid cell drainage areas in sq m
self.drainAreas = zeros((maxChLink + 1), dtype=float)
if useGridModel:
gridCellArea = self.dx * self.dy * gv.gridSize * gv.gridSize
# try to use Drainage field from grid channels shapefile
if streamDrainage:
ok = self.setGridDrainageFromChannels(channelLayer)
else:
ok = False
if not ok:
self.setGridDrainageAreas(maxChLink, gridCellArea)
else:
# can use drain areas from TauDEM if we have them
if drainAreaIndex >= 0:
self.setDrainageFromChannels(channelLayer, drainAreaIndex)
else:
self.setDrainageAreas(us)
time5 = time.process_time()
QSWATUtils.loginfo('Topology drainage took {0} seconds'.format(int(time5 - time4)))
#try existing subbasin numbers as SWAT basin numbers
ok = polyIndex >= 0 and subbasinIndex >= 0 and self.tryBasinAsSWATBasin(subbasinsLayer, polyIndex, subbasinIndex)
if not ok:
# failed attempt may have put data in these, so clear them
self.subbasinToSWATBasin.clear()
self.SWATBasinToSubbasin.clear()
if useGridModel:
# lower limit on drainage area for outlets to be included
# 1.5 multiplier guards against rounding errors:
# ensures that any cell with drainage area exceeding this cannot be a singleton
minDrainArea = gridCellArea * 1.5
# Create SWAT basin numbers for grid
# we ignore single cell outlets, by checking that outlets have a drainage area greater than a single cell
SWATBasin = 0
# for grid models, streams and channels are the same, so chBasin is the same as basin
# we ignore edge basins which are outlets with nothing upstream, ie they are single cell outlets,
# by counting only those which have a downstream link or have an upstream link
for chLink, chBasin in self.chLinkToChBasin.items():
dsChLink = self.downChannels[chLink] if useGridModel else self.getDownChannel(chLink)
if dsChLink >= 0 or self.drainAreas[chLink] > minDrainArea:
SWATBasin += 1
self.subbasinToSWATBasin[chBasin] = SWATBasin
self.SWATBasinToSubbasin[SWATBasin] = chBasin
else:
# create SWAT basin numbers
SWATBasin = 0
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([polyIndex])
for feature in subbasinsLayer.getFeatures(request):
subbasin = feature[polyIndex]
if subbasin not in self.upstreamFromInlets:
SWATBasin += 1
self.subbasinToSWATBasin[subbasin] = SWATBasin
self.SWATBasinToSubbasin[SWATBasin] = subbasin
# put SWAT Basin numbers in subbasin field of subbasins shapefile
subbasinsLayer.startEditing()
if subbasinIndex < 0:
# need to add subbasin field
subbasinsLayer.dataProvider().addAttributes([QgsField(QSWATTopology._SUBBASIN, QVariant.Int)])
subbasinsLayer.updateFields()
subbasinIndex = subbasinsLayer.fields().indexOf(QSWATTopology._SUBBASIN)
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([polyIndex])
for feature in subbasinsLayer.getFeatures(request):
subbasin = feature[polyIndex]
SWATBasin = self.subbasinToSWATBasin.get(subbasin, 0)
subbasinsLayer.changeAttributeValue(feature.id(), subbasinIndex, SWATBasin)
subbasinsLayer.commitChanges()
time6 = time.process_time()
QSWATUtils.loginfo('Topology setting SWATBasin numbers took {0} seconds'.format(int(time6 - time5)))
subbasinsLayer.setLabelsEnabled(True)
subbasinsLayer.triggerRepaint()
if not useGridModel:
# add SWAT channel numbers to watershed shapefile
# in case loaded
root = QgsProject.instance().layerTreeRoot()
wshedLayer, _ = QSWATUtils.getLayerByFilename(root.findLayers(), gv.wshedFile, FileTypes._WATERSHED,
None, None, None)
if wshedLayer is None:
wshedLayer = QgsVectorLayer(gv.wshedFile, FileTypes.legend(FileTypes._WATERSHED), 'ogr')
wshedPolyIndex = self.getIndex(wshedLayer, QSWATTopology._POLYGONID, ignoreMissing=ignoreError)
wshedChannelIndex = self.getIndex(wshedLayer, QSWATTopology._CHANNEL, ignoreMissing=ignoreWithGridOrExisting)
wshedLayer.startEditing()
if wshedChannelIndex < 0:
wshedLayer.dataProvider().addAttributes([QgsField(QSWATTopology._CHANNEL, QVariant.Int)])
wshedLayer.updateFields()
wshedChannelIndex = wshedLayer.fields().indexOf(QSWATTopology._CHANNEL)
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([wshedPolyIndex])
for feature in wshedLayer.getFeatures(request):
chBasin = feature.attributes()[wshedPolyIndex]
channel = self.chBasinToChLink.get(chBasin, -1)
SWATChannel = self.channelToSWATChannel.get(channel, 0)
wshedLayer.changeAttributeValue(feature.id(), wshedChannelIndex, SWATChannel)
wshedLayer.commitChanges()
drainageFile = QSWATUtils.join(gv.shapesDir, gv.projName + Parameters._DRAINAGECSV)
self.writeDrainageFile(drainageFile)
return useGridModel or lakesLayer is not None or self.checkAreas(subbasinsLayer, gv)
def addLakes(self, lakesLayer, subbasinsLayer, chBasinsLayer, streamsLayer, channelsLayer,
demLayer, snapThreshold, gv, reportErrors=True):
"""Add lakes from lakes shapefile layer.
Not used with grid models."""
lakesProvider = lakesLayer.dataProvider()
lakeIdIndex = lakesProvider.fieldNameIndex(QSWATTopology._LAKEID)
lakeResIndex = lakesProvider.fieldNameIndex(QSWATTopology._RES)
if lakeResIndex < 0:
QSWATUtils.information('No RES field in lakes shapefile {0}: assuming lakes are reservoirs'.
format(QSWATUtils.layerFilename(lakesLayer)), self.isBatch)
subsProvider = subbasinsLayer.dataProvider()
subsAreaIndex = subsProvider.fieldNameIndex(Parameters._AREA)
if subsAreaIndex < 0:
QSWATUtils.error('Cannot find {0} field in {1}'.format(Parameters._AREA, gv.subbasinsFile), self.isBatch, reportErrors=reportErrors)
return False
chBasinsProvider = chBasinsLayer.dataProvider()
chBasinsPolyIndex = chBasinsProvider.fieldNameIndex(QSWATTopology._POLYGONID)
chBasinsAreaIndex = chBasinsProvider.fieldNameIndex(Parameters._AREA)
channelsProvider = channelsLayer.dataProvider()
channelLinkIndex = channelsProvider.fieldNameIndex(QSWATTopology._LINKNO)
channelDsLinkIndex = channelsProvider.fieldNameIndex(QSWATTopology._DSLINKNO)
channelDsNodeIndex = channelsProvider.fieldNameIndex(QSWATTopology._DSNODEID)
channelDrainAreaIndex = channelsProvider.fieldNameIndex(QSWATTopology._DRAINAREA)
channelWSNOIndex = channelsProvider.fieldNameIndex(QSWATTopology._WSNO)
channelLakeInIndex = channelsProvider.fieldNameIndex(QSWATTopology._LAKEIN)
channelLakeOutIndex = channelsProvider.fieldNameIndex(QSWATTopology._LAKEOUT)
channelLakeWithinIndex = channelsProvider.fieldNameIndex(QSWATTopology._LAKEWITHIN)
channelLakeMainIndex = channelsProvider.fieldNameIndex(QSWATTopology._LAKEMAIN)
fields = []
if channelLakeInIndex < 0:
fields.append(QgsField(QSWATTopology._LAKEIN, QVariant.Int))
if channelLakeOutIndex < 0:
fields.append(QgsField(QSWATTopology._LAKEOUT, QVariant.Int))
if channelLakeWithinIndex < 0:
fields.append(QgsField(QSWATTopology._LAKEWITHIN, QVariant.Int))
if channelLakeMainIndex < 0:
fields.append(QgsField(QSWATTopology._LAKEMAIN, QVariant.Int))
if len(fields) > 0:
if not channelsProvider.addAttributes(fields):
QSWATUtils.error('Cannot add lake fields to channels shapefile', self.isBatch)
return False
channelsLayer.updateFields()
channelLakeInIndex = channelsProvider.fieldNameIndex(QSWATTopology._LAKEIN)
channelLakeOutIndex = channelsProvider.fieldNameIndex(QSWATTopology._LAKEOUT)
channelLakeWithinIndex = channelsProvider.fieldNameIndex(QSWATTopology._LAKEWITHIN)
channelLakeMainIndex = self.getIndex(channelsLayer, QSWATTopology._LAKEMAIN)
self.chLinkIntoLake = dict()
self.chLinkInsideLake = dict()
self.chLinkFromLake = dict()
self.outletsInLake = dict()
lakeAttMap = dict()
for lake in lakesProvider.getFeatures():
lakeGeom = lake.geometry()
lakeRect = lakeGeom.boundingBox()
lakeId = lake[lakeIdIndex]
if lakeResIndex < 0:
waterRole = QSWATTopology._RESTYPE
else:
waterRole = lake[lakeResIndex]
lakeData = LakeData(lakeGeom.area(), lakeGeom.centroid().asPoint(), waterRole)
totalElevation = 0
# the area removed from channel basins that intersect with the lake
chBasinWaterArea = 0
attMap = dict()
geomMap = dict()
for sub in subsProvider.getFeatures():
subGeom = sub.geometry()
if QSWATTopology.intersectsPoly(subGeom, lakeGeom, lakeRect):
# TODO: sub inside lake
subId = sub.id()
area1 = subGeom.area()
newGeom = subGeom.difference(lakeGeom)
area2 = newGeom.area()
if area2 < area1:
QSWATUtils.loginfo('Lake {0} overlaps subbasin {1}: area reduced from {2} to {3}'.format(lakeId, subId, area1, area2))
geomMap[subId] = newGeom
attMap[subId] = {subsAreaIndex: newGeom.area() / 1E4}
if not subsProvider.changeAttributeValues(attMap):
QSWATUtils.error('Failed to update subbasins attributes in {0}'.format(gv.subbasinsFile), self.isBatch, reportErrors=reportErrors)
for err in subsProvider.errors():
QSWATUtils.loginfo(err)
return False
if not subsProvider.changeGeometryValues(geomMap):
QSWATUtils.error('Failed to update subbasin geometries in {0}'.format(gv.subbasinsFile), self.isBatch, reportErrors=reportErrors)
for err in subsProvider.errors():
QSWATUtils.loginfo(err)
return False
# for some reason doing both changes at once fails
# if not subsProvider.changeFeatures(attMap, geomMap):
# QSWATUtils.error(u'Failed to update {0}'.format(gv.subbasinsFile), self.isBatch)
# for err in subsProvider.errors():
# QSWATUtils.loginfo(err)
# return
attMap = dict()
geomMap = dict()
# map of polygon id to area that is part of the lake
channelAreaChange = dict()
for chBasin in chBasinsProvider.getFeatures():
chBasinGeom = chBasin.geometry()
polyId = chBasin[chBasinsPolyIndex]
# if area reduced to zero because inside another lake, geometry is None
if chBasinGeom is not None and not chBasinGeom.disjoint(lakeGeom):
chBasinId = chBasin.id()
area1 = chBasinGeom.area()
newGeom = chBasinGeom.difference(lakeGeom)
area2 = newGeom.area()
if area2 < area1:
QSWATUtils.loginfo('Lake {0} overlaps channel basin {1}: area reduced from {2} to {3}'.format(lakeId, polyId, area1, area2))
chBasinWaterArea += area1 - area2
geomMap[chBasinId] = newGeom
attMap[chBasinId] = {chBasinsAreaIndex: newGeom.area() / 1E4}
channelAreaChange[polyId] = area1 - area2
if not chBasinsProvider.changeAttributeValues(attMap):
QSWATUtils.error('Failed to update channel basin attributes in {0}'.format(gv.wshedFile), self.isBatch, reportErrors=reportErrors)
for err in chBasinsProvider.errors():
QSWATUtils.loginfo(err)
return False
if not chBasinsProvider.changeGeometryValues(geomMap):
QSWATUtils.error('Failed to update channel basin geometries in {0}'.format(gv.wshedFile), self.isBatch, reportErrors=reportErrors)
for err in chBasinsProvider.errors():
QSWATUtils.loginfo(err)
return False
attMap = dict()
currentDrainArea = 0
# first pass through channels: collect inflowing and outflowing channels from DsNodes in lakeInlets and lakeOutlets
for channel in channelsProvider.getFeatures():
link = channel[channelLinkIndex]
dsLink = channel[channelDsLinkIndex]
dsNode = channel[channelDsNodeIndex]
if dsNode > 0:
if dsNode in self.lakeInlets[lakeId]:
inflowData = self.getReachData(channel.geometry(), demLayer)
lakeData.inChLinks[link] = (dsNode, QgsPointXY(inflowData.lowerX, inflowData.lowerY), inflowData.lowerZ)
if dsLink >= 0:
lakeData.lakeChLinks.add(dsLink)
self.chLinkInsideLake[dsLink] = lakeId
self.chLinkIntoLake[link] = lakeId
totalElevation += inflowData.lowerZ
channelId = channel.id()
chBasin = channel[channelWSNOIndex]
areaChange = channelAreaChange.get(chBasin, 0)
drainArea = channel[channelDrainAreaIndex] - areaChange
attMap[channelId] = {channelDrainAreaIndex: drainArea}
elif dsNode in self.lakeOutlets[lakeId]:
outflowData = self.getReachData(channel.geometry(), demLayer)
outlet = QgsPointXY(outflowData.lowerX, outflowData.lowerY)
replace = True
if dsLink >= 0:
if lakeData.outPoint[2] is not None:
# choose point with larger drain area
newDrainArea = channel[channelDrainAreaIndex]
if newDrainArea > currentDrainArea:
currentDrainArea = newDrainArea
if lakeData.outChLink >= 0:
lakeData.otherOutChLinks.add(lakeData.outChLink)
else:
replace = False
if replace:
chBasin = channel[channelWSNOIndex]
subbasin = self.chBasinToSubbasin[chBasin]
lakeData.outPoint = (subbasin, dsNode, outlet, outflowData.lowerZ)
lakeData.outChLink = dsLink
else:
lakeData.otherOutChLinks.add(dsLink)
self.chLinkFromLake[dsLink] = lakeId
lakeData.lakeChLinks.add(link)
self.chLinkInsideLake[link] = lakeId
# check to see if a watershed outlet was marked inside the lake
# and if so try to move it to the lake perimeter. Else leave it as an internal outlet.
# we don't need to exclude outlets created to split channels flowing into and out of lake
# because the outlets map is made from the streams before lake inlets and outlets are added to the snap file
# and the augmented snapfile is only used to make channels
for subbasin, (pointId, pt, ch) in self.outlets.items():
if QSWATTopology.polyContains(pt, lakeGeom, lakeRect) and \
QSWATTopology.isWatershedOutlet(pointId, channelsProvider, channelDsLinkIndex, channelDsNodeIndex):
if not os.path.exists(gv.pFile):
QSWATUtils.error('Cannot find D8 flow directions file {0}'.format(gv.pFile), self.isBatch, reportErrors=reportErrors)
break
# need to give different id to outPoint, since this is used to make the reservoir point
# which will then route to the subbasin outlet
# can use outlet point id if already created
if lakeData.outPoint[1] >= 0:
newPointId = lakeData.outPoint[1]
else:
self.pointId += 1
newPointId = self.pointId
elev = QSWATTopology.valueAtPoint(pt, demLayer)
lakeData.outPoint = (subbasin, newPointId, pt, elev)
# maximum number of steps approximates to the threshold for snapping points expressed as number of DEM cells
maxSteps = 5 if self.dx == 0 else int(snapThreshold / self.dx + 0.5)
lakeOutlet, found = QSWATTopology.movePointToPerimeter(pt, lakeGeom, gv.pFile, maxSteps)
if found:
if lakeData.outPoint[2] is not None:
QSWATUtils.information('User marked outlet {0} chosen as main outlet for lake {1}'.
format(pointId, lakeId), gv.isBatch)
if lakeData.outChLink >= 0:
lakeData.otherOutChLinks.add(lakeData.outChLink)
elev = QSWATTopology.valueAtPoint(lakeOutlet, demLayer)
lakeData.outPoint = (subbasin, newPointId, lakeOutlet, elev)
QSWATUtils.loginfo('Outlet of lake {0} set to ({1}, {2})'.
format(lakeId, int(lakeOutlet.x()), int(lakeOutlet.y())))
# update outlets map
self.outlets[subbasin] = (pointId, lakeOutlet, ch)
else:
QSWATUtils.loginfo('Outlet of lake {0} set to internal point ({1}, {2})'.
format(lakeId, int(lakeOutlet.x()), int(lakeOutlet.y())))
lakeData.outChLink = -1
break
# second pass through channels: collect channels within lake: i.e. both ends in lake
# and set LakeIn, LakeOut, LakeWithin fields
for channel in channelsProvider.getFeatures():
link = channel[channelLinkIndex]
channelId = channel.id()
channelData = None
channelGeom = None
lakeIn = self.chLinkIntoLake.get(link, 0)
lakeOut = self.chLinkFromLake.get(link, 0)
lakeWithin = self.chLinkInsideLake.get(link, 0)
if link not in self.chLinkIntoLake and link not in self.chLinkFromLake and link not in self.chLinkInsideLake:
channelGeom = channel.geometry()
channelData = self.getReachData(channelGeom, None)
pt1 = QgsPointXY(channelData.lowerX, channelData.lowerY)
pt2 = QgsPointXY(channelData.upperX, channelData.upperY)
if QSWATTopology.polyContains(pt1, lakeGeom, lakeRect) and QSWATTopology.polyContains(pt2, lakeGeom, lakeRect):
lakeData.lakeChLinks.add(link)
self.chLinkInsideLake[link] = lakeId
lakeWithin = lakeId
lakeAttMap[channelId] = {channelLakeInIndex: lakeIn, channelLakeOutIndex: lakeOut,
channelLakeWithinIndex: lakeWithin}
if link in lakeData.lakeChLinks:
# remove the channel's point source
del self.chPointSources[link]
# if the lake has an outlet channel with a drain area less than LAKEOUTLETCHANNELAREA percent of the lake area
# make its channel internal
outLinkId = None
outLink = lakeData.outChLink
outBasin = -1
dsOutLink = -1
if outLink >= 0:
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([channelLinkIndex,
channelWSNOIndex,
channelDsLinkIndex])
for channel in channelsProvider.getFeatures(request):
if channel[channelLinkIndex] == outLink:
outLinkId = channel.id()
outBasin = channel[channelWSNOIndex]
dsOutLink = channel[channelDsLinkIndex]
break
if outBasin >= 0:
# threshold in ha: LAKEOUTLETCHANNELAREA of lake area
threshold = (lakeData.area / 1E6) * Parameters._LAKEOUTLETCHANNELAREA
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([chBasinsPolyIndex, chBasinsAreaIndex])
for chBasin in chBasinsProvider.getFeatures():
if chBasin[chBasinsPolyIndex] == outBasin:
areaHa = chBasin[chBasinsAreaIndex]
if areaHa < threshold:
# move outlet channel inside lake
lakeData.lakeChLinks.add(outLink)
lakeData.outChLink = dsOutLink
del self.chLinkFromLake[outLink]
self.chLinkInsideLake[outLink] = lakeId
# mark it as within as well as being the outlet (already set)
lakeAttMap[outLinkId][channelLakeWithinIndex] = lakeId
# check if this point now inside the lake is a subbasin outlet
subbasin = self.chBasinToSubbasin[outBasin]
(_, _, outChannel) = self.outlets[subbasin]
if outChannel == outLink:
# subbasin outlet has moved inside the lake
self.outletsInLake[subbasin] = lakeId
QSWATUtils.loginfo('Channel link {0} channel basin {1} moved inside lake {2}'.
format(outLink, outBasin, lakeId))
# remove the channel's point source
del self.chPointSources[outLink]
if dsOutLink >= 0:
self.chLinkFromLake[dsOutLink] = lakeId
break
if lakeData.outPoint[2] is None:
QSWATUtils.error('Failed to find outlet for lake {0}'.format(lakeId), self.isBatch, reportErrors=reportErrors)
return False
if lakeData.outChLink >= 0:
chId = -1
# find the channel's id
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([channelLinkIndex])
for channel in channelsProvider.getFeatures(request):
if channel[channelLinkIndex] == lakeData.outChLink:
chId = channel.id()
break
if chId >= 0:
lakeAttMap[chId][channelLakeMainIndex] = lakeId
else:
QSWATUtils.error('Internal error: unable to find main outlet channel {0}'.
format(lakeData.outChLink), self.isBatch, reportErrors=reportErrors)
return False
numInflows = len(lakeData.inChLinks)
meanElevation = totalElevation / numInflows if numInflows > 0 else lakeData.outPoint[3]
lakeData.elevation = meanElevation
QSWATUtils.loginfo('Lake {0} has outlet on channel {1}, other outlets on channels {2}, inlets on channels {3} and contains channels {4}'
.format(lakeId, lakeData.outChLink, lakeData.otherOutChLinks,
list(lakeData.inChLinks.keys()), lakeData.lakeChLinks))
OK = channelsProvider.changeAttributeValues(attMap)
OK = OK and channelsProvider.changeAttributeValues(lakeAttMap)
if not OK:
QSWATUtils.error('Failed to update channel attributes in {0}'.format(gv.channelFile), self.isBatch, reportErrors=reportErrors)
for err in channelsProvider.errors():
QSWATUtils.loginfo(err)
return False
self.lakesData[lakeId] = lakeData
lakeArea = lakeData.area
percentChBasinWater = chBasinWaterArea / lakeArea * 100
QSWATUtils.loginfo('Lake {0} has area {1} and channel basin water area {2}: {3}%'.format(lakeId, lakeArea, chBasinWaterArea, percentChBasinWater))
# intPercent = int(percentChBasinWater + 0.5)
# if percentChBasinWater < 99:
# QSWATUtils.information(u'WARNING: Only {0}% of the area of lake {1} is accounted for in your watershed. There may be other channels flowing into it'
# .format(intPercent, lakeId), self.isBatch)
if len(self.lakesData) == 0:
QSWATUtils.error('No lakes found in {0}'.format(QSWATUtils.layerFilename(lakesLayer)), self.isBatch, reportErrors=reportErrors)
return False
chBasinsLayer.triggerRepaint()
streamsLayer.triggerRepaint()
channelsLayer.triggerRepaint()
return True
@staticmethod
def isWatershedOutlet(pointId, channelsProvider, dsLinkIndex, dsNodeIndex):
"""Return true if there is a channel with dsNode equal to pointId and with dsLink -1."""
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([dsLinkIndex, dsNodeIndex])
for link in channelsProvider.getFeatures(request):
if link[dsNodeIndex] == pointId and link[dsLinkIndex] == -1:
return True
return False
def isOutlet(self, pointId, outletsLayer):
"""Return true if outletsLayer contains an outlet point with id pointId."""
idIndex = self.getIndex(outletsLayer, QSWATTopology._ID, ignoreMissing=True)
inletIndex = self.getIndex(outletsLayer, QSWATTopology._INLET, ignoreMissing=True)
resIndex = self.getIndex(outletsLayer, QSWATTopology._RES, ignoreMissing=True)
if idIndex < 0 or inletIndex < 0 or resIndex < 0:
return False
for point in outletsLayer.getFeatures():
if point[idIndex] == pointId and point[inletIndex] == 0 and point[resIndex] == 0:
return True
return False
def addGridLakes(self, gridLayer, channelsLayer, demLayer, gv, reportErrors=True):
"""Add lakes when using grid model. Return number of lakes (which may be zero) or -1 if error."""
gridProvider = gridLayer.dataProvider()
gridPolyIndex = gridProvider.fieldNameIndex(QSWATTopology._POLYGONID)
gridDownIndex = gridProvider.fieldNameIndex(QSWATTopology._DOWNID)
gridAreaIndex = gridProvider.fieldNameIndex(Parameters._AREA)
gridLakeIdIndex = gridProvider.fieldNameIndex(QSWATTopology._LAKEID)
if gridLakeIdIndex < 0:
# can be no lakes
return 0
gridResIndex = gridProvider.fieldNameIndex(QSWATTopology._RES)
channelsProvider = channelsLayer.dataProvider()
channelLinkIndex = channelsProvider.fieldNameIndex(QSWATTopology._LINKNO)
channelDsLinkIndex = channelsProvider.fieldNameIndex(QSWATTopology._DSLINKNO)
channelWSNOIndex = channelsProvider.fieldNameIndex(QSWATTopology._WSNO)
# the drainage field may not exist if we are using grid or table drainage: deal with this later
streamDrainageIndex = channelsProvider.fieldNameIndex(QSWATTopology._DRAINAGE)
polysIntoLake = dict()
polysInsidelake = dict()
polysFromLake = dict()
self.chLinkIntoLake = dict()
self.chLinkInsideLake = dict()
self.chLinkFromLake = dict()
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([gridPolyIndex, gridLakeIdIndex])
# first make map poly -> lake id
polyToLake = dict()
for cell in gridProvider.getFeatures(request):
lakeId = cell[gridLakeIdIndex]
if lakeId != NULL:
polyToLake[cell[gridPolyIndex]] = lakeId
# make sure waterbody id is set to maximum lake id in case using existing grid
self.waterBodyId = max(self.waterBodyId, lakeId)
if len(polyToLake) == 0:
# no lakes
return 0
# data for calculating centroid
# map of lake id to (area, x moment of area, y moment)
lakeAreaData = dict()
for cell in gridProvider.getFeatures():
waterRole = cell[gridResIndex]
poly = cell[gridPolyIndex]
downPoly = cell[gridDownIndex]
sourceLake = cell[gridLakeIdIndex]
targetLake = polyToLake.get(downPoly, None)
if sourceLake != NULL:
if sourceLake not in lakeAreaData:
lakeAreaData[sourceLake] = (waterRole, 0, 0, 0)
area = cell[gridAreaIndex] * 1E4 # convert ha to m^2
centre, _, _ = QSWATUtils.centreGridCell(cell)
_, totalArea, xMoment, yMoment = lakeAreaData[sourceLake]
lakeAreaData[sourceLake] = (waterRole, totalArea + area, xMoment + area * centre.x(), yMoment + area * centre.y())
if targetLake == sourceLake:
# channel links two lake cells within lake
polysInsidelake[poly] = sourceLake
else:
# exit channel
polysFromLake[poly] = sourceLake
elif targetLake is not None:
polysIntoLake[poly] = targetLake
totalElevation = dict()
# map of lake id to possible exit channels
# will choose one with largest drainage
exitData = dict()
for lakeId, (waterRole, area, xMoment, yMoment) in lakeAreaData.items():
centroid = QgsPointXY(float(xMoment) / area, float(yMoment) / area)
self.lakesData[lakeId] = LakeData(area, centroid, waterRole)
totalElevation[lakeId] = 0
exitData[lakeId] = dict()
# convert wsnos to links and complete LakesData
# get maximum chLink and create downChannels map in case drainage needs calculating
self.downChannels = dict()
maxChLink = 0
for channel in channelsProvider.getFeatures():
chLink = channel[channelLinkIndex]
maxChLink = max(maxChLink, chLink)
dsChLink = channel[channelDsLinkIndex]
self.downChannels[chLink] = dsChLink
wsno = channel[channelWSNOIndex]
lakeIdInto = polysIntoLake.get(wsno, 0)
if lakeIdInto > 0:
self.chLinkIntoLake[chLink] = lakeIdInto
# since this is a grid model the grid cells form different subbasins and there will be a suitable outlet
# point already stored in the outlets map
pointId, point, _ = self.outlets[wsno]
elev = QSWATTopology.valueAtPoint(point, demLayer)
self.lakesData[lakeIdInto].inChLinks[chLink] = (pointId, point, elev)
totalElevation[lakeIdInto] += elev
continue
lakeIdFrom = polysFromLake.get(wsno, 0)
if lakeIdFrom > 0:
# allow for no drainage field
drainage = -1 if streamDrainageIndex < 0 else channel[streamDrainageIndex]
data = self.getReachData(channel.geometry(), demLayer)
exitData[lakeIdFrom][chLink] = (wsno, drainage, QgsPointXY(data.upperX, data.upperY), data.upperZ)
continue
lakeIdInside = polysInsidelake.get(wsno, 0)
if lakeIdInside > 0:
self.chLinkInsideLake[chLink] = lakeIdInside
self.lakesData[lakeIdInside].lakeChLinks.add(chLink)
continue
# check if we need to calculate drainage: no drainage field and more than one exit for at least one lake
needDrainage = False
if streamDrainageIndex < 0:
for data in exitData.values():
if len(data) > 1:
needDrainage = True
break
if needDrainage:
self.drainAreas = zeros((maxChLink + 1), dtype=float)
gridCellArea = self.dx * self.dy * gv.gridSize * gv.gridSize
self.setGridDrainageAreas(maxChLink, gridCellArea)
# find outlet with largest drainage and mark as THE outlet
for lakeId, data in exitData.items():
# set maxDrainage below the -1 value used for missing drainage so that the first exit link always registers;
# if there is only one exit for each lake, needDrainage will be false
maxDrainage = -2
exLink = -1
exWsno = -1
exPoint = None
exElev = 0
for chLink, (wsno, drainage, pt, elev) in data.items():
if needDrainage:
drainage = float(self.drainAreas[chLink]) # use float to convert from numpy float
if drainage > maxDrainage:
maxDrainage = drainage
exLink = chLink
exWsno = wsno
exPoint = pt
exElev = elev
if exLink < 0:
QSWATUtils.error('There seems to be no outflow stream for lake {0}'.format(lakeId), gv.isBatch, reportErrors=reportErrors)
return -1
else:
others = list(data.keys())
others.remove(exLink)
if others != []:
QSWATUtils.information(
"""Warning: Stream link {0} chosen as main outlet for all of lake {1}.
Other possible outlet stream links are {2}.
""".format(exLink, lakeId, str([int(link) for link in others])), gv.isBatch, reportErrors=reportErrors)
self.chLinkFromLake[exLink] = lakeId
self.lakesData[lakeId].outChLink = exLink
for chLink in others:
self.chLinkFromLake[chLink] = lakeId
self.lakesData[lakeId].otherOutChLinks.add(chLink)
self.pointId += 1
self.lakesData[lakeId].outPoint = (exWsno, self.pointId, exPoint, exElev)
for lakeId, totalElev in totalElevation.items():
numInLinks = len(self.lakesData[lakeId].inChLinks)
if numInLinks > 0:
self.lakesData[lakeId].elevation = float(totalElev) / numInLinks
else:
self.lakesData[lakeId].elevation = self.lakesData[lakeId].outPoint[3]
return len(self.lakesData)
def addExistingLakes(self, lakesLayer, channelsLayer, demLayer, gv, reportErrors=True):
"""Add lakes data to existing non-grid model.
We ignore DsNodeIds for inflowing and outflowing channels since these were
probably only added previously to the snapped inlets/outlets file
and inlets/outlets are little use in any case with existing watersheds."""
lakeIdIndex = self.getIndex(lakesLayer, QSWATTopology._LAKEID)
lakeResIndex = self.getIndex(lakesLayer, QSWATTopology._RES)
channelLinkIndex = self.getIndex(channelsLayer, QSWATTopology._LINKNO)
channelDsLinkIndex = self.getIndex(channelsLayer, QSWATTopology._DSLINKNO)
channelBasinIndex = self.getIndex(channelsLayer, QSWATTopology._BASINNO)
channelLakeInIndex = self.getIndex(channelsLayer, QSWATTopology._LAKEIN)
channelLakeOutIndex = self.getIndex(channelsLayer, QSWATTopology._LAKEOUT)
channelLakeWithinIndex = self.getIndex(channelsLayer, QSWATTopology._LAKEWITHIN)
channelLakeMainIndex = self.getIndex(channelsLayer, QSWATTopology._LAKEMAIN)
if lakeIdIndex < 0 or channelLinkIndex < 0 or channelDsLinkIndex < 0 or channelBasinIndex < 0 or \
channelLakeInIndex < 0 or channelLakeOutIndex < 0 or channelLakeWithinIndex < 0 or channelLakeMainIndex < 0:
return False
self.lakesData = dict()
for lake in lakesLayer.getFeatures():
lakeId = lake[lakeIdIndex]
waterRole = lake[lakeResIndex]
if lakeId in self.lakesData:
QSWATUtils.error('Lake identifier {0} occurs twice in {1}. Lakes not added.'.format(lakeId, QSWATUtils.layerFilename(lakesLayer)),
gv.isBatch, reportErrors=reportErrors)
self.lakesData = dict()
return False
# to stop reuse of the same water body id
self.waterBodyId = max(self.waterBodyId, lakeId)
geom = lake.geometry()
area = geom.area()
centroid = geom.centroid().asPoint()
self.lakesData[lakeId] = LakeData(area, centroid, waterRole)
self.chLinkIntoLake = dict()
self.chLinkInsideLake = dict()
self.chLinkFromLake = dict()
self.outletsInLake = dict()
for channel in channelsLayer.getFeatures():
chLink = channel[channelLinkIndex]
dsLink = channel[channelDsLinkIndex]
lakeIn = channel[channelLakeInIndex]
lakeOut = channel[channelLakeOutIndex]
lakeWithin = channel[channelLakeWithinIndex]
lakeMain = channel[channelLakeMainIndex]
reachData = None
geom = None
if lakeIn != NULL and lakeIn > 0:
data = self.lakesData.get(lakeIn, None)
if data is None:
QSWATUtils.error('Channel with LINKNO {0} flows into lake {1} not defined in {2}. Lakes not added.'.
format(chLink, lakeIn, QSWATUtils.layerFilename(lakesLayer)),
gv.isBatch, reportErrors=reportErrors)
self.lakesData = dict()
return False
geom = channel.geometry()
reachData = self.getReachData(geom, demLayer)
point = QgsPointXY(reachData.lowerX, reachData.lowerY)
elev = reachData.lowerZ
data.elevation += elev
self.pointId += 1
data.inChLinks[chLink] = (self.pointId, point, elev)
self.chLinkIntoLake[chLink] = lakeIn
elif lakeWithin != NULL and lakeWithin > 0:
data = self.lakesData.get(lakeWithin, None)
if data is None:
QSWATUtils.error('Channel with LINKNO {0} inside lake {1} not defined in {2}. Lakes not added.'.
format(chLink, lakeWithin, QSWATUtils.layerFilename(lakesLayer)),
gv.isBatch, reportErrors=reportErrors)
self.lakesData = dict()
return False
data.lakeChLinks.add(chLink)
self.chLinkInsideLake[chLink] = lakeWithin
if dsLink < 0:
# watershed outlet
geom = channel.geometry()
reachData = self.getReachData(geom, demLayer)
subbasin = channel[channelBasinIndex]
data.outChLink = -1
point = QgsPointXY(reachData.lowerX, reachData.lowerY)
elev = reachData.lowerZ
self.pointId += 1
data.outPoint = (subbasin, self.pointId, point, elev)
self.outletsInLake[subbasin] = lakeWithin
if lakeOut != NULL and lakeOut > 0:
data = self.lakesData.get(lakeOut, None)
if data is None:
QSWATUtils.error('Channel with LINKNO {0} flows out of lake {1} not defined in {2}. Lakes not added.'.
format(chLink, lakeOut, QSWATUtils.layerFilename(lakesLayer)),
gv.isBatch, reportErrors=reportErrors)
self.lakesData = dict()
return False
if lakeMain != NULL and lakeMain == lakeOut:
# lake's main outlet
# channel leaves lake at upper end
geom = channel.geometry()
reachData = self.getReachData(geom, demLayer)
subbasin = channel[channelBasinIndex]
data.outChLink = chLink
point = QgsPointXY(reachData.upperX, reachData.upperY)
elev = reachData.upperZ
self.pointId += 1
data.outPoint = (subbasin, self.pointId, point, elev)
self.chLinkFromLake[chLink] = lakeOut
else:
# other outlet
data.otherOutChLinks.add(chLink)
# define lake elevation
for data in self.lakesData.values():
numInflows = len(data.inChLinks)
data.elevation = data.outPoint[3] if numInflows == 0 else float(data.elevation) / numInflows
return True
@staticmethod
def intersectsPoly(geom, polyGeom, polyRect):
"""Returns true if any part of geom intersects any part of polyGeom, which has associated rectangle polyRect."""
geoRect = geom.boundingBox()
if QSWATTopology.disjointBoxes(geoRect, polyRect):
return False
else:
return geom.intersects(polyGeom)
@staticmethod
def disjointBoxes(box1, box2):
"""Return True if the boxes are disjoint."""
return box1.xMinimum() > box2.xMaximum() or \
box1.xMaximum() < box2.xMinimum() or \
box1.yMinimum() > box2.yMaximum() or \
box1.yMaximum() < box2.yMinimum()
@staticmethod
def polyContains(point, polyGeom, polyRect):
"""Return true if point within polyGeom, which has associated rectangle polyRect."""
if polyRect.xMinimum() < point.x() < polyRect.xMaximum() and \
polyRect.yMinimum() < point.y() < polyRect.yMaximum():
return polyGeom.contains(point)
else:
return False
def saveLakesData(self, db):
"""Save lakes data in project database."""
with db.conn as conn:
if not conn:
return
curs = conn.cursor()
lakesTable = 'LAKESDATA'
clearSQL = 'DROP TABLE IF EXISTS ' + lakesTable
curs.execute(clearSQL)
curs.execute(db._CREATELAKESDATA)
linksTable = 'LAKELINKS'
clearSQL = 'DROP TABLE IF EXISTS ' + linksTable
curs.execute(clearSQL)
curs.execute(db._CREATELAKELINKS)
basinsTable = 'LAKEBASINS'
clearSQL = 'DROP TABLE IF EXISTS ' + basinsTable
curs.execute(clearSQL)
curs.execute(db._CREATELAKEBASINS)
for lakeId, lakeData in self.lakesData.items():
curs.execute(db._INSERTLAKESDATA, (lakeId, lakeData.outPoint[0], lakeData.waterRole, lakeData.area, lakeData.elevation, lakeData.outChLink,
lakeData.outPoint[1], lakeData.outPoint[2].x(), lakeData.outPoint[2].y(),
lakeData.outPoint[3], lakeData.centroid.x(), lakeData.centroid.y()))
# QSWATUtils.loginfo(str(lakeData.inChLinks.keys()))
# QSWATUtils.loginfo(str(lakeData.lakeChLinks))
for chLink, (pointId, pt, elev) in lakeData.inChLinks.items():
try:
curs.execute(db._INSERTLAKELINKS, (chLink, lakeId, True, False, pointId, pt.x(), pt.y(), elev))
except:
QSWATUtils.error('Failed to add in channel link {0}'.format(chLink), self.isBatch)
for chLink in lakeData.lakeChLinks:
try:
curs.execute(db._INSERTLAKELINKS, (chLink, lakeId, False, True, None, None, None, None))
except:
QSWATUtils.error('Failed to add inside channel link {0}'.format(chLink), self.isBatch)
for chLink in lakeData.otherOutChLinks:
try:
curs.execute(db._INSERTLAKELINKS, (chLink, lakeId, False, False, None, None, None, None))
except:
QSWATUtils.error('Failed to add other out channel link {0}'.format(chLink), self.isBatch)
for subbasin, lakeId in self.outletsInLake.items():
curs.execute(db._INSERTLAKEBASINS, (subbasin, lakeId))
db.hashDbTable(conn, lakesTable)
db.hashDbTable(conn, linksTable)
db.hashDbTable(conn, basinsTable)
def readLakesData(self, db):
"""Read lakes data from project database. Return true if data read OK, false if no data or error."""
with db.conn as conn:
if not conn:
return False
self.lakesData.clear()
self.chLinkIntoLake.clear()
self.chLinkInsideLake.clear()
self.chLinkFromLake.clear()
self.outletsInLake.clear()
curs = conn.cursor()
lakesTable = 'LAKESDATA'
linksTable = 'LAKELINKS'
basinsTable = 'LAKEBASINS'
lakeSql = db.sqlSelect(lakesTable, '*', '', '')
linksSql = db.sqlSelect(linksTable, '*', '', 'lakeid=?')
basinsSql = db.sqlSelect(basinsTable, '*', '', '')
try: # in case old database without these tables
# without fetchall this only reads first row. Strange
for lakeRow in curs.execute(lakeSql).fetchall():
lakeId = lakeRow['id']
self.waterBodyId = max(self.waterBodyId, lakeId)
self.lakesData[lakeId] = LakeData(lakeRow['area'], QgsPointXY(lakeRow['centroidx'], lakeRow['centroidy']), lakeRow['role'])
outChLink = lakeRow['outlink']
self.lakesData[lakeId].outChLink = outChLink
self.chLinkFromLake[outChLink] = lakeId
self.lakesData[lakeId].outPoint = (lakeRow['subbasin'], lakeRow['outletid'],
QgsPointXY(lakeRow['outletx'], lakeRow['outlety']), lakeRow['outletelev'])
self.lakesData[lakeId].centroid = QgsPointXY(lakeRow['centroidx'], lakeRow['centroidy'])
self.lakesData[lakeId].elevation = lakeRow['meanelev']
for linkRow in curs.execute(linksSql, (lakeId,)):
chLink = linkRow['linkno']
if linkRow['inside']:
self.lakesData[lakeId].lakeChLinks.add(chLink)
self.chLinkInsideLake[chLink] = lakeId
elif linkRow['inlet']:
self.lakesData[lakeId].inChLinks[chLink] = (linkRow['inletid'],
QgsPointXY(linkRow['inletx'], linkRow['inlety']), linkRow['inletelev'])
self.chLinkIntoLake[chLink] = lakeId
else:
self.lakesData[lakeId].otherOutChLinks.add(chLink)
self.chLinkFromLake[chLink] = lakeId
for basinRow in curs.execute(basinsSql).fetchall():
self.outletsInLake[basinRow['subbasin']] = basinRow['lakeid']
return len(self.lakesData) > 0
except:
QSWATUtils.loginfo('Reading lakes data failed: {0}'.format(traceback.format_exc()))
return False
def getDownChannel(self, channel):
"""Get downstream channel, skipping zero-length channels.
Returns -1 if the channel flows into a lake."""
if channel in self.chLinkIntoLake:
return -1
while True:
dsChannel = self.downChannels[channel]
if dsChannel in self.zeroChannels:
channel = dsChannel
else:
return dsChannel
def setChannelBasinAreas(self, gv):
"""
Define map chBasinAreas from channel basin to basin area in sq m.
Done by counting pixels in the wChannel file (as an alternative to creating a shapefile from it).
Not used with grid models.
"""
self.chBasinAreas.clear()
unitArea = self.dx * self.dy # area of one DEM pixel in sq m
completed = False
raster = Raster(gv.channelBasinFile, gv)
while not completed:
try:
# safer to mark complete immediately to avoid danger of an endless loop
# the only way to loop again is then the MemoryError exception below being raised
completed = True
if not raster.open(self.chunkCount):
QSWATUtils.error('Failed to open channel basins raster {0}'.format(gv.channelBasinFile), gv.isBatch)
return
for row in range(raster.numRows):
for col in range(raster.numCols):
val = int(raster.read(row, col))
if val == raster.noData:
continue
elif val in self.chBasinAreas:
self.chBasinAreas[val] += unitArea
else:
self.chBasinAreas[val] = unitArea
raster.close()
except MemoryError:
QSWATUtils.loginfo('Out of memory calculating channel basin areas with chunk count {0}'.format(self.chunkCount))
try:
raster.close()
except Exception:
pass
completed = False
self.chunkCount += 1
def checkAreas(self, subbasinsLayer, gv):
"""
Check that the total channel basin area in each subbasin tallies with the subbasin area,
and that the two totals for the whole watershed tally, i.e. are the same to within the area of one DEM pixel.
This is only done in testing ('test' in project name) and is mostly a check that
channels are correctly assigned to subbasins.
Not used with grid models (since channel-subbasin is one-one for grid models).
"""
# TODO: make work with lakes
if 'test' in gv.projName:
unitArea = self.dx * self.dy # area of one DEM pixel in sq m
polyIndex = self.getIndex(subbasinsLayer, QSWATTopology._POLYGONID)
if polyIndex < 0:
return False
areaIndex = self.getIndex(subbasinsLayer, Parameters._AREA, ignoreMissing=True)
totalBasinsArea = 0
totalChannelBasinsArea = 0
# one percent test: using 1 pixel test instead
# def compare(x, y): # return true if both zero or difference < 1% of x
# if x == 0:
# return y == 0
# else:
# return abs(x - y) < 0.01 * x
for poly in subbasinsLayer.getFeatures():
if areaIndex < 0:
basinArea = poly.geometry().area()
else:
basinArea = poly[areaIndex] * 1E4 # areas in subbasins shapefile are in hectares
# need to count areas of basins upstream from inlets because comparison for whole watershed
# by using all channels will not exclude them
totalBasinsArea += basinArea
basin = poly[polyIndex]
chBasins = set()
chLinks = set()
for chLink, chBasin in self.chLinkToChBasin.items():
if basin == self.chBasinToSubbasin.get(chBasin, -1):
chBasins.add(chBasin)
chLinks.add(chLink)
area = 0
for chBasin, chArea in self.chBasinAreas.items():
if chBasin in chBasins:
area += chArea
if abs(basinArea - area) >= unitArea: # not using compare(basinArea, area):
SWATChannels = {self.channelToSWATChannel[chLink] for chLink in chLinks}
SWATBasin = self.subbasinToSWATBasin[basin]
QSWATUtils.error('Basin {0} with area {1} has channels {2} with total area {3}'.
format(SWATBasin, basinArea, SWATChannels, area), True)
return False
# now compare areas for whole watershed
for _, chArea in self.chBasinAreas.items():
totalChannelBasinsArea += chArea
if abs(totalBasinsArea - totalChannelBasinsArea) >= unitArea: # not using compare(totalBasinsArea, totalChannelBasinsArea):
QSWATUtils.error('Watershed area is {0} by adding subbasin areas and {1} by adding channel basin areas'.
format(totalBasinsArea, totalChannelBasinsArea), True)
return False
QSWATUtils.loginfo('Total watershed area is {0}'.format(totalBasinsArea))
return True
@staticmethod
def reachable(chLink, chLinks, us):
"""Return true if chLink is in chLinks or reachable from an item in chLinks via the one-many relation us."""
if chLink in chLinks:
return True
for nxt in chLinks:
if QSWATTopology.reachable(chLink, us.get(nxt, []), us):
return True
return False
#===========================================================================
# def addUpstreamLinks(self, link, us):
# """Add to upstreamFromInlets the links upstream from link."""
# ups = us.get(link, None)
# if ups is not None:
# for up in ups:
# self.upstreamFromInlets.add(up)
# self.addUpstreamLinks(up, us)
#===========================================================================
def setDrainageFromChannels(self, channelLayer, drainAreaIndex):
"""Get drain areas from channelLayer file's DS_Cont_Ar attribute."""
inds = [self.channelIndex, drainAreaIndex]
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes(inds)
for reach in channelLayer.getFeatures(request):
channelLink = reach[self.channelIndex]
self.drainAreas[channelLink] = reach[drainAreaIndex]
def setGridDrainageFromChannels(self, channelLayer):
"""Get drain areas from channelLayer file's Drainage attribute. Return True if successful."""
channelIndex = self.getIndex(channelLayer, QSWATTopology._LINKNO, ignoreMissing=True)
drainageIndex = self.getIndex(channelLayer, QSWATTopology._DRAINAGE, ignoreMissing=True)
if channelIndex < 0 or drainageIndex < 0:
return False
inds = [channelIndex, drainageIndex]
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes(inds)
for reach in channelLayer.getFeatures(request):
channel = reach[channelIndex]
self.drainAreas[channel] = reach[drainageIndex] * 1E6 # drainage attribute is in sq km
return True
def setGridDrainageAreas(self, maxChLink, gridCellArea):
"""Calculate and save grid drain areas in sq km."""
self.drainAreas.fill(gridCellArea)
# number of incoming links for each link
incount = zeros((maxChLink + 1), dtype=int)
for dsLink in self.downChannels.values():
if dsLink >= 0:
incount[dsLink] += 1
# queue contains all links whose drainage areas have been calculated
# i.e. will not increase and can be propagated
queue = [link for link in range(maxChLink + 1) if incount[link] == 0]
while queue:
link = queue.pop(0)
dsLink = self.downChannels.get(link, -1)
if dsLink >= 0:
self.drainAreas[dsLink] += self.drainAreas[link]
incount[dsLink] -= 1
if incount[dsLink] == 0:
queue.append(dsLink)
# incount values should now all be zero
remainder = [link for link in range(maxChLink + 1) if incount[link] > 0]
if remainder:
QSWATUtils.error('Drainage areas incomplete. There is a circularity in links {0!s}'.format(remainder), self.isBatch)
def setDrainageAreas(self, us):
"""
Calculate and save drainAreas.
Not used with grid models.
"""
for chLink, chBasin in self.chLinkToChBasin.items():
self.setLinkDrainageArea(chLink, chBasin, us)
def setLinkDrainageArea(self, chLink, chBasin, us):
"""
Calculate and save drainArea for chLink.
Not used with grid models.
"""
if self.drainAreas[chLink] > 0:
# already done in calculating one further downstream
return
ownArea = self.chBasinAreas.get(chBasin, 0)
upsArea = 0
ups = us.get(chLink, [])
for up in ups:
self.setLinkDrainageArea(up, self.chLinkToChBasin[up], us)
upsArea += self.drainAreas[up]
self.drainAreas[chLink] = ownArea + upsArea
def getDistanceToJoin(self, basin, otherBasin):
"""Get distance in metres to join with otherBasin from outlet of basin. Add to distancesToJoins if necessary."""
link = self.subbasinToStream[basin]
otherLink = self.subbasinToStream[otherBasin]
distances = self.distancesToJoins.get(link, dict())
distance = distances.get(otherLink, -1)
if distance < 0:
distance = self.distanceToJoin(link, otherLink)
distances[otherLink] = distance
self.distancesToJoins[link] = distances
return distance
def distanceToJoin(self, start, otherLink):
"""
Return distance in metres from outlet of link start to point of confluence with
flow from otherLink, or to Outlet if no confluence.
"""
return sum([self.streamLengths[link] for link in self.pathFromJoin(start, otherLink)])
def pathFromJoin(self, start, otherLink):
"""
Return list of downstream links starting with confluence with downstream path from otherLink,
and finishing with link immediately downstream from start.
If otherLink is immediately downstream from start, list will be [otherLink].
If start and otherLink both flow immediately into x, list will be empty.
If there is no confluence, list will be path from outlet to immediately downstream from start.
"""
startPath = self.pathFromOutlet(start)
otherPath = self.pathFromOutlet(otherLink)
return self.removeCommonPrefix(startPath, otherPath)
def pathFromOutlet(self, start):
"""List of links downstream of start, in upstream order."""
result = []
nxt = start
while True:
nxt = self.downStreams.get(nxt, -1)
if nxt == -1:
break
result = [nxt] + result
return result
def removeCommonPrefix(self, path1, path2):
"""Remove from the beginning of path1 the longest sequence that starts path2."""
i = 0
while i < len(path1) and i < len(path2):
if path1[i] == path2[i]:
i += 1
else:
break
return path1[i:]
def addBasinsToChannelFile(self, channelLayer, wStreamFile):
"""
Add basinno field (if necessary) to channels shapefile and populate with values from wStreamFile.
Not done with grid models.
"""
provider = channelLayer.dataProvider()
bsnIdx = self.getIndex(channelLayer, QSWATTopology._BASINNO, ignoreMissing=True)
if bsnIdx < 0:
field = QgsField(QSWATTopology._BASINNO, QVariant.Int)
OK = provider.addAttributes([field])
if not OK:
QSWATUtils.error('Cannot add {0} field to channels shapefile'.format(QSWATTopology._BASINNO), self.isBatch)
return
channelLayer.updateFields()
bsnIdx = self.getIndex(channelLayer, QSWATTopology._BASINNO)
wLayer = QgsRasterLayer(wStreamFile, 'Basins')
lenIdx = self.getIndex(channelLayer, QSWATTopology._LENGTH, ignoreMissing=True)
chsMap = dict()
for feature in provider.getFeatures():
# find a point well into the channel to ensure we are not just outside the basin
geometry = feature.geometry()
if lenIdx < 0:
length = geometry.length()
else:
length = feature[lenIdx]
if length <= 0:
basin = QSWATTopology._NOBASIN # value to indicate a zero-length channel
else:
if geometry.isMultipart():
lines = geometry.asMultiPolyline()
numLines = len(lines)
if numLines == 0:
QSWATUtils.error('Link in channel with id {0} consists of 0 lines'.format(feature.id()), self.isBatch)
return
line = lines[numLines // 2]
else:
line = geometry.asPolyline()
numPoints = len(line)
if numPoints < 2:
QSWATUtils.error('Link in channel with id {0} has fewer than two points'.format(feature.id()), self.isBatch)
return
point = line[numPoints // 2]
basin = QSWATTopology.valueAtPoint(point, wLayer)
fid = feature.id()
chsMap[fid] = dict()
chsMap[fid][bsnIdx] = basin
OK = provider.changeAttributeValues(chsMap)
if not OK:
QSWATUtils.error('Cannot add basin values to channels shapefile', self.isBatch)
return
def writeDrainageFile(self, drainageFile):
"""Write drainage csv file."""
if os.path.exists(drainageFile):
os.remove(drainageFile)
with open(drainageFile, 'w', newline='') as connFile:
writer = csv.writer(connFile)
writer.writerow(['Subbasin', 'DownSubbasin'])
for subbasin, downSubbasin in self.downSubbasins.items():
SWATBasin = self.subbasinToSWATBasin.get(subbasin, -1)
if SWATBasin > 0:
downSWATBasin = self.subbasinToSWATBasin.get(downSubbasin, -1)
writer.writerow([str(SWATBasin),str(downSWATBasin)])
def getReachData(self, geom, demLayer):
"""
Generate ReachData record for reach geometry. demLayer may be none, in which case elevations are set zero.
"""
firstLine = QSWATTopology.reachFirstLine(geom, self.xThreshold, self.yThreshold)
if firstLine is None or len(firstLine) < 1:
QSWATUtils.error('It looks like your stream shapefile does not obey the single direction rule, that all reaches are either upstream or downstream.', self.isBatch)
return None
lastLine = QSWATTopology.reachLastLine(geom, self.xThreshold, self.yThreshold)
if lastLine is None or len(lastLine) < 1:
QSWATUtils.error('It looks like your stream shapefile does not obey the single direction rule, that all reaches are either upstream or downstream.', self.isBatch)
return None
pStart = firstLine[0]
pFinish = lastLine[-1]
if demLayer is None:
startVal = 0
finishVal = 0
else:
startVal = QSWATTopology.valueAtPoint(pStart, demLayer)
finishVal = QSWATTopology.valueAtPoint(pFinish, demLayer)
if startVal is None or startVal == self.demNodata:
if finishVal is None or finishVal == self.demNodata:
# QSWATUtils.loginfo(u'({0!s},{1!s}) elevation {4} to ({2!s},{3!s}) elevation {5}'
# .format(pStart.x(), pStart.y(), pFinish.x(), pFinish.y(), str(startVal), str(finishVal)))
return None
else:
startVal = finishVal
elif finishVal is None or finishVal == self.demNodata:
finishVal = startVal
if self.outletAtStart:
maxElev = finishVal * self.verticalFactor
minElev = startVal * self.verticalFactor
return ReachData(pFinish.x(), pFinish.y(), maxElev, pStart.x(), pStart.y(), minElev)
else:
minElev = finishVal * self.verticalFactor
maxElev = startVal * self.verticalFactor
return ReachData(pStart.x(), pStart.y(), maxElev, pFinish.x(), pFinish.y(), minElev)
@staticmethod
def gridReachLength(data):
"""Length of reach assuming it is a single straight line."""
dx = data.upperX - data.lowerX
dy = data.upperY - data.lowerY
return math.sqrt(dx * dx + dy * dy)
def tryBasinAsSWATBasin(self, subbasinsLayer, polyIndex, subbasinIndex):
"""Return true if the subbasin field values can be used as SWAT basin numbers.
The basin numbers, if any, can be used if they
are all positive and different.
Also populate subbasinToSWATBasin and SWATBasinToSubbasin if successful, else these are undetermined.
"""
assert polyIndex >= 0 and subbasinIndex >= 0 and len(self.subbasinToSWATBasin) == 0 and len(self.SWATBasinToSubbasin) == 0
SWATBasins = set()
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([polyIndex, subbasinIndex])
for polygon in subbasinsLayer.getFeatures(request):
subbasin = polygon[polyIndex]
if subbasin in self.upstreamFromInlets:
continue
SWATBasin = polygon[subbasinIndex]
if SWATBasin <= 0:
return False
if SWATBasin in SWATBasins:
return False
self.subbasinToSWATBasin[subbasin] = SWATBasin
self.SWATBasinToSubbasin[SWATBasin] = subbasin
SWATBasins.add(SWATBasin)
return True
@staticmethod
def snapPointToReach(streamLayer, point, threshold, transform, isBatch):
"""Return the nearest point on a stream segment to the input point."""
line, pointIndex = QSWATTopology.nearestVertex(streamLayer, point)
if pointIndex < 0:
QSWATUtils.error('Cannot snap point ({0:.0f}, {1:.0f}) to stream network'.format(point.x(), point.y()), isBatch)
return None
p1, p2 = QSWATTopology.intercepts(line, pointIndex, point)
p = QSWATTopology.nearer(p1, p2, point)
if p is None:
p = line[pointIndex]
# check p is sufficiently near point
if QSWATTopology.distanceMeasure(p, point) <= threshold * threshold:
# before returning p, move it along the stream a little if it is on or close to a '4 corners' position
# since TauDEM can fail to make a boundary or use its id as a DSNODEID if is so positioned
if p1 == p2:
# a point on the line was chosen, which is safe (points on the line are centres of DEM cells)
return p
else:
floatCol = float(p.x() - transform[0]) / transform[1]
intCol = int(floatCol + 0.5)
floatRow = float(p.y() - transform[3]) / transform[5]
intRow = int(floatRow + 0.5)
if abs(floatCol - intCol) < 0.1 and abs(floatRow - intRow) < 0.1:
# move the point towards line[pointIndex] by about half a cell
p3 = QSWATTopology.shiftedPoint(p, line[pointIndex], transform, 0.4)
QSWATUtils.loginfo('({0:.0f},{1:.0f}) shifted to ({2:.0f},{3:.0f})'.format(p.x(), p.y(), p3.x(), p3.y()))
return p3
else:
return p
else:
QSWATUtils.error('Cannot snap point ({0:.0f}, {1:.0f}) to stream network within threshold {2!s}'.format(point.x(), point.y(), threshold), isBatch)
return None
@staticmethod
def separatePoints(p1, p2, transform):
"""If p2 is in same cell as p1 return a point in the next cell in the direction of p1 to p2.
Else return p2."""
# p1 is the end of a channel, so will be in the centre of a cell. So enough
# to move one coordinate of p2 by one cell from p1, and the other proportionately but less
col1, row1 = QSWATTopology.projToCell(p1.x(), p1.y(), transform)
col2, row2 = QSWATTopology.projToCell(p2.x(), p2.y(), transform)
if col1 == col2 and row1 == row2:
return QSWATTopology.shiftedPoint(p1, p2, transform, 1.0)
else:
return p2
@staticmethod
def shiftedPoint(p1, p2, transform, frac):
"""Return point at least frac of a cell away from p1 in direction p1 to p2."""
x1 = p1.x()
y1 = p1.y()
x2 = p2.x()
y2 = p2.y()
dirx = 1 if x2 > x1 else -1
diry = 1 if y2 > y1 else -1
stepx = transform[1] * frac
stepy = abs(transform[5]) * frac
if x1 == x2: # vertical
shiftx = 0
shifty = stepy * diry
else:
slope = abs(float(y1 - y2) / (x1 - x2))
if slope < 1:
shiftx = stepx * dirx
shifty = stepy * diry * slope
else:
shiftx = stepx * dirx / slope
shifty = stepy * diry
return QgsPointXY(x1 + shiftx, y1 + shifty)
@staticmethod
def nearestVertex(streamLayer, point):
"""Find nearest vertex in streamLayer to point and
return the line (list of points) in the reach and
index of the vertex within the line.
"""
bestPointIndex = -1
bestLine = None
minMeasure = float('inf')
for reach in streamLayer.getFeatures():
geometry = reach.geometry()
if geometry.isMultipart():
parts = geometry.asMultiPolyline()
else:
parts = [geometry.asPolyline()]
for line in parts:
for j in range(len(line)):
measure = QSWATTopology.distanceMeasure(line[j], point)
if measure < minMeasure:
minMeasure = measure
bestPointIndex = j
bestLine = line
# distance = math.sqrt(minMeasure)
# QSWATUtils.information(u'Nearest point at ({0:.2F}, {1:.2F}), distance {2:.2F}'.format(bestReach[bestPointIndex].x(), bestReach[bestPointIndex].y(), distance), False)
return (bestLine, bestPointIndex)
@staticmethod
def intercepts(line, pointIndex, point):
"""Get points on segments on either side of pointIndex where
vertical from point meets the segment.
"""
assert pointIndex in range(len(line))
# first try above pointIndex
if pointIndex == len(line) - 1:
# We are at the upper end - no upper segment.
# Return just this point to avoid a tiny subbasin.
return (line[pointIndex], line[pointIndex])
else:
upper = QSWATTopology.getIntercept(line[pointIndex], line[pointIndex+1], point)
if pointIndex == 0:
# We are at the lower end - no lower segment.
# Return just this point to avoid a tiny subbasin.
return (line[0], line[0])
else:
lower = QSWATTopology.getIntercept(line[pointIndex], line[pointIndex-1], point)
return (lower, upper)
@staticmethod
def getIntercept(p1, p2, p):
"""Return point on line from p1 to p2 where
vertical from p intercepts it, or None if there is no intercept.
"""
x1 = p1.x()
x2 = p2.x()
xp = p.x()
y1 = p1.y()
y2 = p2.y()
yp = p.y()
X = x1 - x2
Y = y1 - y2
assert not (X == 0 and Y == 0)
prop = (X * (x1 - xp) + Y * (y1 - yp)) / (X * X + Y * Y)
if prop < 0:
# intercept is off the line beyond p1
# technically we should check for prop > 1, which means
# intercept is off the line beyond p2, but we can assume p is nearer to p1
return None
else:
assert 0 <= prop < 1
return QgsPointXY(x1 - prop * X, y1 - prop * Y)
@staticmethod
def nearer(p1, p2, p):
"""Return the nearer of p1 and p2 to p."""
if p1 is None:
return p2
if p2 is None:
return p1
if QSWATTopology.distanceMeasure(p1, p) < QSWATTopology.distanceMeasure(p2, p):
return p1
else:
return p2
@staticmethod
def distanceMeasure(p1, p2):
"""Return square of distance between p1 and p2."""
dx = p1.x() - p2.x()
dy = p1.y() - p2.y()
return dx * dx + dy * dy
def setMaxFlowLengths(self):
"""
Write table of subbasin to maximum flow length along channels within the basin.
Used for maximum flow path for existing non-grid models, and only defined for these.
"""
channelFlowLengths = dict()
for chLink, length in self.channelLengths.items():
self.setChannelFlowLength(chLink, length, channelFlowLengths)
def setChannelFlowLength(self, chLink, length, channelFlowLengths):
"""Add eentry for chLink to channelFlowLengths map. Also update maxFlowLengths for chLink's subbasin.
post: chLink in channelFlowLengths
"""
if chLink in channelFlowLengths:
return # nothing to do: set on previous recursive call
if chLink in self.zeroChannels:
return
chBasin = self.chLinkToChBasin[chLink]
subbasin = self.chBasinToSubbasin[chBasin]
dsLink = self.downChannels[chLink]
dsChBasin = self.chLinkToChBasin.get(dsLink, -1)
dsBasin = self.chBasinToSubbasin.get(dsChBasin, -1)
if dsBasin == subbasin:
# still in same subbasin:
# add this channel's length to downstream flow length
dsFlowLength = channelFlowLengths.get(dsLink, -1)
if dsFlowLength < 0:
self.setChannelFlowLength(dsLink, self.channelLengths[dsLink], channelFlowLengths)
dsFlowLength = channelFlowLengths[dsLink]
flowLength = dsFlowLength + length
else:
# outlet channel for subbasin
flowLength = length
channelFlowLengths[chLink] = flowLength
maxFlowLength = self.maxFlowLengths.get(subbasin, 0)
if flowLength > maxFlowLength:
self.maxFlowLengths[subbasin] = flowLength
def writePointsTable(self, demLayer, mergees, useGridModel):
"""Write the gis_points table in the project database."""
with self.db.conn as conn:
if not conn:
return
curs = conn.cursor()
table = 'gis_points'
clearSQL = 'DROP TABLE IF EXISTS ' + table
curs.execute(clearSQL)
curs.execute(self.db._POINTSCREATESQL)
waterAdded = []
# Add outlets from streams
for subbasin, (pointId, pt, chLink) in self.outlets.items():
if subbasin in self.upstreamFromInlets or subbasin in self.outletsInLake or \
chLink in self.chLinkInsideLake:
continue # excluded
elev = QSWATTopology.valueAtPoint(pt, demLayer)
self.addPoint(curs, subbasin, pointId, pt, elev, 'O')
# Add inlets
if useGridModel:
for chLink, (pointId, pt) in self.chLinkToInlet.items():
if chLink in self.chLinkInsideLake or chLink in self.chLinkFromLake: # shouldn't happen
continue
subbasin = self.chLinkToChBasin[chLink]
elev = QSWATTopology.valueAtPoint(pt, demLayer)
self.addPoint(curs, subbasin, pointId, pt, elev, 'I')
else:
for subbasin, (pointId, pt) in self.inlets.items():
if subbasin in self.upstreamFromInlets:
# shouldn't happen, but users can be stupid
continue
elev = QSWATTopology.valueAtPoint(pt, demLayer)
self.addPoint(curs, subbasin, pointId, pt, elev, 'I')
# Add point sources at heads of channels
for chLink, (pointId, pt) in self.chLinkToPtSrc.items():
if chLink in self.chLinkInsideLake:
continue
if useGridModel:
if chLink in self.chLinkFromLake:
continue
subbasin = self.chLinkToChBasin[chLink]
else:
chBasin = self.chLinkToChBasin.get(chLink, -1)
subbasin = self.chBasinToSubbasin.get(chBasin, -1)
if subbasin < 0 or subbasin in self.upstreamFromInlets:
continue
elev = QSWATTopology.valueAtPoint(pt, demLayer)
self.addPoint(curs, subbasin, pointId, pt, elev, 'P')
for chLink, (pointId, pt) in self.chPointSources.items():
if chLink in self.chLinkToPtSrc or chLink in mergees or chLink in self.chLinkInsideLake:
continue # link has user-defined point source flowing into it or has been merged or is inside lake
if useGridModel:
if chLink in self.chLinkFromLake:
continue # channel is inside lake
subbasin = self.chLinkToChBasin[chLink]
else:
chBasin = self.chLinkToChBasin.get(chLink, -1)
subbasin = self.chBasinToSubbasin.get(chBasin, -1)
if subbasin < 0 or subbasin in self.upstreamFromInlets:
continue
elev = QSWATTopology.valueAtPoint(pt, demLayer)
self.addPoint(curs, subbasin, pointId, pt, elev, 'P')
# Add lakes
for lake in self.lakesData.values():
# outlet from lake
subbasin, pointId, pt, elev = lake.outPoint
chLink = lake.outChLink
if useGridModel:
# subbasin for outlet will be inside lake and addPoint will fail
# since there will be no SWAT basin. Use one downstream if there is one
downChLink = self.downChannels[chLink]
if downChLink >= 0:
subbasin = self.chLinkToChBasin[downChLink]
elif chLink == -1:
# main outlet was moved inside lake, but reservoir point will still be routed to it
# so add its definition
(outletId, outPt, _) = self.outlets[subbasin]
self.addPoint(curs, subbasin, outletId, outPt, elev, 'O')
self.addPoint(curs, subbasin, pointId, pt, elev, 'W')
waterAdded.append(pointId)
# inlets to lake. These are outlets from streams in grid models, so not necessary
if not useGridModel:
for chLink, (pointId, pt, elev) in lake.inChLinks.items():
chBasin = self.chLinkToChBasin[chLink]
subbasin = self.chBasinToSubbasin[chBasin]
self.addPoint(curs, subbasin, pointId, pt, elev, 'O')
for chLink, (pointId, pt, _) in self.chLinkToWater.items():
# reservoir points at lake outlets can appear here
# but already added from lakesdata
if pointId in waterAdded:
continue
if useGridModel:
subbasin = self.chLinkToChBasin[chLink]
else:
chBasin = self.chLinkToChBasin.get(chLink, -1)
subbasin = self.chBasinToSubbasin.get(chBasin, -1)
if subbasin in self.upstreamFromInlets:
continue
elev = QSWATTopology.valueAtPoint(pt, demLayer)
self.addPoint(curs, subbasin, pointId, pt, elev, 'W')
for channel, (_, pointId, pt) in self.foundReservoirs.items():
if useGridModel:
subbasin = self.chLinkToChBasin[channel]
else:
chBasin = self.chLinkToChBasin.get(channel, -1)
subbasin = self.chBasinToSubbasin.get(chBasin, -1)
if subbasin in self.upstreamFromInlets:
continue
elev = QSWATTopology.valueAtPoint(pt, demLayer)
self.addPoint(curs, subbasin, pointId, pt, elev, 'W')
# for subbasin, (pointId, pt) in self.extraReservoirs.iteritems():
# if subbasin in self.upstreamFromInlets:
# # shouldn't happen, but users can be stupid
# continue
# elev = QSWATTopology.valueAtPoint(pt, demLayer)
# self.addPoint(curs, subbasin, pointId, pt, elev, 'R')
conn.commit()
def addExtraPointsToPointsTable(self, extraPoints, useGridModel):
"""Add extra points needed to mark where channels drain into reservoirs."""
with self.db.conn as conn:
if not conn:
return
curs = conn.cursor()
for channel, pointId in extraPoints:
if useGridModel:
subbasin = self.chLinkToChBasin[channel]
else:
chBasin = self.chLinkToChBasin[channel]
subbasin = self.chBasinToSubbasin[chBasin]
data = self.channelsData[channel]
pt = QgsPointXY(data.lowerX, data.lowerY)
self.addPoint(curs, subbasin, pointId, pt, data.lowerZ, 'O')
conn.commit()
def addPoint(self, cursor, subbasin, pointId, pt, elev, typ):
"""Add point to gis_points table."""
table = 'gis_points'
SWATBasin = self.subbasinToSWATBasin.get(subbasin, 0)
if SWATBasin == 0:
return
ptll = self.pointToLatLong(pt)
sql = "INSERT INTO " + table + " VALUES(?,?,?,?,?,?,?,?)"
try:
cursor.execute(sql, (pointId, SWATBasin, typ,
pt.x(), pt.y(), ptll.y(), ptll.x(), elev))
except:
QSWATUtils.exceptionError('Internal error: unable to add point {0} type {1}'.format(pointId, typ), self.isBatch)
#===========================================================================
# def addPoint(self, cursor, link, data, pointId, typ):
# """Add a point to the points table."""
# table = 'gis_points'
# # inlets will be located at the upstream ends of their links
# # since they are attached to their downstream basins
# if not data:
# return
# SWATBasin = self.subbasinToSWATBasin.get(data.ru, 0)
# if SWATBasin == 0:
# return
# lsu = 0
# if typ == 'I': # inlet
# pt = QgsPointXY(data.upperX, data.upperY)
# elev = data.upperZ
# drainId = SWATBasin
# drainCat = 'R'
# else:
# pt = QgsPointXY(data.lowerX, data.lowerY)
# elev = data.lowerZ
# if typ == 'P': # point source
# resId = self.linkToReservoir.get(link, 0)
# if resId > 0:
# # point source drains to reservoir
# drainId = resId
# drainCat = 'P'
# else:
# # point source drains to link outlet
# drainId = self.linkToOutlet[link]
# drainCat = 'P'
# elif typ == 'R': # reservoir: drains to link outlet
# drainId = self.linkToOutlet[link]
# drainCat = 'P'
# else:
# assert typ == 'O', u'Unknown point type: ' + typ
# # outlet: drains to reach of downstream basin (if any)
# dsLink = self.downLinks[link]
# dsSWATBasin = 0
# while dsLink >= 0 and dsSWATBasin == 0:
# dsBasin = self.linkToBasin[dsLink]
# dsSWATBasin = self.subbasinToSWATBasin.get(dsBasin, 0)
# if dsSWATBasin == 0:
# dsLink = self.downLinks[dsLink]
# if dsSWATBasin > 0:
# drainId = dsSWATBasin
# drainCat = 'R'
# else:
# drainId = -1
# drainCat = 'X'
# ptll = self.pointToLatLong(pt)
# sql = "INSERT INTO " + table + " VALUES(?,?,?,?,?,?,?,?,?)"
# cursor.execute(sql, (pointId, SWATBasin, lsu, typ, \
# pt.x(), pt.y(), ptll.y(), ptll.x(), elev))
#===========================================================================
def writeChannelsTable(self, mergeChannels, basins, gv):
"""
Write the channels table in the project database, make rivs1.shp in shapes directory, and copy as results template to TablesOut directory.
Changes the channel layer, so if successful, returns the new one.
"""
root = QgsProject.instance().layerTreeRoot()
if gv.useGridModel:
# use streams as channels
channelFile = gv.streamFile
strng = 'streams'
else:
channelFile = gv.channelFile
strng = 'channel'
if not os.path.exists(channelFile):
QSWATUtils.error('Cannot find {0} file {1}'.format(strng, channelFile), gv.isBatch)
return
channelLayer = QSWATUtils.getLayerByFilename(root.findLayers(), channelFile, FileTypes._CHANNELS,
None, None, None)[0]
if channelLayer is None: # perhaps removed by user
channelLayer = QgsVectorLayer(channelFile, 'Channels', 'ogr')
QSWATUtils.copyShapefile(channelFile, Parameters._RIVS1, gv.shapesDir)
rivs1File = QSWATUtils.join(gv.shapesDir, Parameters._RIVS1 + '.shp')
QSWATUtils.removeLayer(rivs1File, root)
rivs1Layer = QgsVectorLayer(rivs1File, 'Channels ({0})'.format(Parameters._RIVS1), 'ogr')
provider1 = rivs1Layer.dataProvider()
# add Channel, ChannelR, and Subbasin fields unless already has them
chIdx = self.getIndex(rivs1Layer, QSWATTopology._CHANNEL, ignoreMissing=True)
chRIdx = self.getIndex(rivs1Layer, QSWATTopology._CHANNELR, ignoreMissing=True)
subIdx = self.getIndex(rivs1Layer, QSWATTopology._SUBBASIN, ignoreMissing=True)
if chIdx < 0:
OK = provider1.addAttributes([QgsField(QSWATTopology._CHANNEL, QVariant.Int)])
if not OK:
QSWATUtils.error('Cannot add {0} field to channels shapefile {1}'.format(QSWATTopology._CHANNEL, rivs1File), self.isBatch)
return None
if chRIdx < 0:
OK = provider1.addAttributes([QgsField(QSWATTopology._CHANNELR, QVariant.Int)])
if not OK:
QSWATUtils.error('Cannot add {0} field to channels shapefile {1}'.format(QSWATTopology._CHANNELR, rivs1File), self.isBatch)
return None
if subIdx < 0:
OK = provider1.addAttributes([QgsField(QSWATTopology._SUBBASIN, QVariant.Int)])
if not OK:
QSWATUtils.error('Cannot add {0} field to channels shapefile {1}'.format(QSWATTopology._SUBBASIN, rivs1File), self.isBatch)
return None
rivs1Layer.updateFields()
chIdx = self.getIndex(rivs1Layer, QSWATTopology._CHANNEL)
chRIdx = self.getIndex(rivs1Layer, QSWATTopology._CHANNELR)
subIdx = self.getIndex(rivs1Layer, QSWATTopology._SUBBASIN)
chLinkIdx = self.getIndex(rivs1Layer, QSWATTopology._LINKNO)
request = QgsFeatureRequest().setSubsetOfAttributes([chLinkIdx])
if not gv.useGridModel:
basinMerge = self.mergeChannelData(mergeChannels)
# make map channel -> feature it is merged with for merged channels
merges = dict()
targets = []
for reach in provider1.getFeatures(request):
for channel in mergeChannels.keys():
target = self.finalTarget(channel, mergeChannels)
if target not in targets:
targets.append(target)
if reach[chLinkIdx] == target:
merges[channel] = reach
#QSWATUtils.loginfo('Channel {0} merging to target {1} with length {2}'.format(channel, target, reach.geometry().length()))
# create geometries for merged reaches
merged = []
for reach in provider1.getFeatures(request):
rid = reach.id()
channel = reach[chLinkIdx]
if channel in targets and rid not in merged:
merged.append(rid)
mergeReach = merges.get(channel, None)
if mergeReach is not None:
# add its geometry to its merger target
#length1 = mergeReach.geometry().length()
#length2 = reach.geometry().length()
mergeReach.setGeometry(mergeReach.geometry().combine(reach.geometry()))
#length3 = mergeReach.geometry().length()
#QSWATUtils.loginfo('Channel {0} merged to target with length {1} ({2} + {3})' \
# .format(channel, length3, length1, length2))
if rid not in merged:
merged.append(rid)
# remove channels and targets involved in mergers
provider1.deleteFeatures(merged)
# add mergers
mergers = []
for channel, reach in merges.items():
if reach not in mergers:
mergers.append(reach)
provider1.addFeatures(mergers)
chsMap = dict()
zeroRids = []
for reach in provider1.getFeatures(request):
channel = reach[chLinkIdx]
if gv.useGridModel:
# subbasin and chBasin are the same
subbasin = self.chLinkToChBasin.get(channel, -1)
downChannel = self.downChannels[channel]
else:
chBasin = self.chLinkToChBasin.get(channel, -1)
subbasin = self.chBasinToSubbasin.get(chBasin, -1)
downChannel = self.finalDownstream(channel, mergeChannels)
SWATBasin = self.subbasinToSWATBasin.get(subbasin, 0)
SWATChannel = 0 if SWATBasin == 0 else self.channelToSWATChannel.get(channel, 0)
downSWATChannel = self.channelToSWATChannel.get(downChannel, 0)
rid = reach.id()
if SWATChannel == 0:
zeroRids.append(rid)
chsMap[rid] = dict()
chsMap[rid][chIdx] = SWATChannel
chsMap[rid][chRIdx] = downSWATChannel
chsMap[rid][subIdx] = SWATBasin
OK = provider1.changeAttributeValues(chsMap)
if not OK:
QSWATUtils.error('Cannot add channel and subbasin values to channels shapefile {0}'.format(rivs1File), self.isBatch)
return None
if len(zeroRids) > 0:
OK = provider1.deleteFeatures(zeroRids)
if not OK:
QSWATUtils.error('Cannot remove merged, zero length, or above inlet channels from channels shapefile {0}'.format(rivs1File), self.isBatch)
return None
# Add fields from channels table to rivs1File if no more than Parameters._RIVS1SUBS1MAX features; otherwise it takes too long.
addToRiv1 = rivs1Layer.featureCount() <= Parameters._RIVS1SUBS1MAX
# remove fields apart from Channel, ChannelR and Subbasin
if addToRiv1:
self.removeFields(provider1, [QSWATTopology._LINKNO, QSWATTopology._CHANNEL, QSWATTopology._CHANNELR, QSWATTopology._SUBBASIN], rivs1File, self.isBatch)
if addToRiv1:
fields = []
fields.append(QgsField(QSWATTopology._AREAC, QVariant.Double, len=20, prec=0))
fields.append(QgsField(QSWATTopology._LEN2, QVariant.Double))
fields.append(QgsField(QSWATTopology._SLO2, QVariant.Double))
fields.append(QgsField(QSWATTopology._WID2, QVariant.Double))
fields.append(QgsField(QSWATTopology._DEP2, QVariant.Double))
fields.append(QgsField(QSWATTopology._MINEL, QVariant.Double))
fields.append(QgsField(QSWATTopology._MAXEL, QVariant.Double))
fields.append(QgsField(QSWATTopology._RESERVOIR, QVariant.Int))
fields.append(QgsField(QSWATTopology._POND, QVariant.Int))
fields.append(QgsField(QSWATTopology._LAKEIN, QVariant.Int))
fields.append(QgsField(QSWATTopology._LAKEOUT, QVariant.Int))
provider1.addAttributes(fields)
rivs1Layer.updateFields()
linkIdx = self.getIndex(rivs1Layer, QSWATTopology._LINKNO)
chIdx = self.getIndex(rivs1Layer, QSWATTopology._CHANNEL)
areaCIdx = self.getIndex(rivs1Layer, QSWATTopology._AREAC)
len2Idx = self.getIndex(rivs1Layer, QSWATTopology._LEN2)
slo2Idx = self.getIndex(rivs1Layer, QSWATTopology._SLO2)
wid2Idx = self.getIndex(rivs1Layer, QSWATTopology._WID2)
dep2Idx = self.getIndex(rivs1Layer, QSWATTopology._DEP2)
minElIdx = self.getIndex(rivs1Layer, QSWATTopology._MINEL)
maxElIdx = self.getIndex(rivs1Layer, QSWATTopology._MAXEL)
resIdx = self.getIndex(rivs1Layer, QSWATTopology._RESERVOIR)
pndIdx = self.getIndex(rivs1Layer, QSWATTopology._POND)
lakeInIdx = self.getIndex(rivs1Layer, QSWATTopology._LAKEIN)
lakeOutIdx = self.getIndex(rivs1Layer, QSWATTopology._LAKEOUT)
mmap = dict()
with self.db.conn as conn:
if not conn:
return None
curs = conn.cursor()
table = 'gis_channels'
clearSQL = 'DROP TABLE IF EXISTS ' + table
curs.execute(clearSQL)
curs.execute(self.db._CHANNELSCREATESQL)
time1 = time.process_time()
wid2Data = dict()
floodscape = QSWATUtils._FLOODPLAIN if gv.useLandscapes else QSWATUtils._NOLANDSCAPE
sql = "INSERT INTO " + table + " VALUES(?,?,?,?,?,?,?,?,?)"
if addToRiv1:
# iterate over channels in rivs1 shapefile
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([linkIdx, chIdx])
generator = self.generateChannelsFromShapefile(request, provider1, linkIdx, chIdx)
else:
generator = self.generateChannelsFromTable()
toDelete = []
for fid, channel, SWATChannel in generator:
if gv.useGridModel:
# basin and chBasin are the same
subbasin = self.chLinkToChBasin[channel]
else:
chBasin = self.chLinkToChBasin.get(channel, -1)
subbasin = self.chBasinToSubbasin.get(chBasin, -1)
SWATBasin = 0 if channel in self.chLinkInsideLake else self.subbasinToSWATBasin.get(subbasin, 0)
lakeOutId = self.chLinkFromLake.get(channel, 0)
if SWATBasin == 0 and (lakeOutId == 0 or self.downChannels.get(channel, -1) < 0):
toDelete.append(fid)
continue
if gv.useGridModel:
channelData = self.channelsData[channel]
# drain area is a numpy float, so need to coerce, or won't get written to attributes of rivs1
drainAreaHa = float(self.drainAreas[channel]) / 1E4
length = float(self.channelLengths[channel] * gv.mainLengthMultiplier)
slopePercent = float(self.channelSlopes[channel] * 100 * gv.reachSlopeMultiplier / gv.mainLengthMultiplier)
minEl = float(channelData.lowerZ)
maxEl = float(channelData.upperZ)
else:
mergeData = basinMerge.get(channel, None)
if mergeData is None:
continue
drainAreaHa = float(mergeData.areaC / 1E4)
length = float(mergeData.length * gv.mainLengthMultiplier)
slopePercent = float(mergeData.slope * 100 * gv.reachSlopeMultiplier) / gv.mainLengthMultiplier
minEl = float(mergeData.minEl)
maxEl = float(mergeData.maxEl)
# possible for channel to be so short it has no pixels draining to it
# also no LSU data when channel is outlet from lake in grid model
basinData = basins.get(subbasin, None)
lsuData = None if basinData is None else basinData.getLsus().get(channel, None)
drainAreaKm = float(drainAreaHa) / 100
channelWidth = float(gv.channelWidthMultiplier * (drainAreaKm ** gv.channelWidthExponent))
wid2Data[SWATChannel] = channelWidth
channelDepth = float(gv.channelDepthMultiplier * (drainAreaKm ** gv.channelDepthExponent))
rid = 0 if lsuData is None else self.getReservoirId(lsuData, floodscape)
pid = 0 if lsuData is None else self.getPondId(lsuData, floodscape)
if rid == 0 and pid == 0:
# omit from gis_channels channels which have become reservoirs or ponds
curs.execute(sql, (SWATChannel, SWATBasin, drainAreaHa, length, slopePercent,
channelWidth, channelDepth, minEl, maxEl))
if addToRiv1:
lakeInId = self.chLinkIntoLake.get(channel, 0)
mmap[fid] = dict()
mmap[fid][areaCIdx] = drainAreaHa
mmap[fid][len2Idx] = length
mmap[fid][slo2Idx] = slopePercent
mmap[fid][wid2Idx] = channelWidth
mmap[fid][dep2Idx] = channelDepth
mmap[fid][minElIdx] = minEl
mmap[fid][maxElIdx] = maxEl
mmap[fid][resIdx] = rid
mmap[fid][pndIdx] = pid
mmap[fid][lakeInIdx] = lakeInId
mmap[fid][lakeOutIdx] = lakeOutId
time2 = time.process_time()
QSWATUtils.loginfo('Writing gis_channels table took {0} seconds'.format(int(time2 - time1)))
conn.commit()
self.db.hashDbTable(conn, table)
if addToRiv1:
if not provider1.changeAttributeValues(mmap):
QSWATUtils.error('Cannot edit values in channels shapefile {0}'.format(rivs1File), self.isBatch)
return None
if len(toDelete) > 0:
OK = provider1.deleteFeatures(toDelete)
if not OK:
QSWATUtils.error('Cannot remove channels in lakes from channels shapefile {0}'.format(rivs1File), self.isBatch)
return None
# make copy as template for stream results
QSWATUtils.copyShapefile(rivs1File, Parameters._RIVS, gv.resultsDir)
rivFile = QSWATUtils.join(gv.resultsDir, Parameters._RIVS + '.shp')
rivLayer = QgsVectorLayer(rivFile, 'Channels', 'ogr')
provider = rivLayer.dataProvider()
# leave only the Channel, ChannelR and Subbasin attributes
self.removeFields(provider, [QSWATTopology._CHANNEL, QSWATTopology._CHANNELR, QSWATTopology._SUBBASIN], rivFile, self.isBatch)
# add PenWidth field to stream results template
OK = provider.addAttributes([QgsField(QSWATTopology._PENWIDTH, QVariant.Double)])
if not OK:
QSWATUtils.error('Cannot add {0} field to streams results template {1}'.format(QSWATTopology._PENWIDTH, rivFile), self.isBatch)
return None
self.setPenWidth(wid2Data, provider)
if gv.useGridModel:
return channelLayer
else:
layers = root.findLayers()
subLayer = root.findLayer(channelLayer.id())
rivs1Layer = QSWATUtils.getLayerByFilename(layers, rivs1File, FileTypes._CHANNELREACHES,
gv, subLayer, QSWATUtils._WATERSHED_GROUP_NAME)[0]
# hide channel layer
if channelLayer is not None:
QSWATUtils.setLayerVisibility(channelLayer, False, root)
if len(self.upstreamFromInlets) > 0:
self.replaceStreamLayer(root, layers, gv)
return rivs1Layer
def generateChannelsFromShapefile(self, request, provider, linkIdx, chIdx):
"""Yield (feature id, channel, swatChammel) tupless from rivs1.shp."""
for feature in provider.getFeatures(request):
yield feature.id(), feature[linkIdx], feature[chIdx]
def generateChannelsFromTable(self):
"""Yield (feature id, channel, swatChammel) tuples from tables."""
for channel, SWATChannel in self.channelToSWATChannel.items():
yield 0, channel, SWATChannel
def replaceStreamLayer(self, root, layers, gv):
"""Copy stream layer, remove streams upstream from inlets, and replace stream layer."""
streamLayer = QSWATUtils.getLayerByFilename(layers, gv.streamFile, FileTypes._STREAMREACHES, gv, None, None)[0]
if streamLayer is not None:
base, _ = os.path.splitext(os.path.split(gv.streamFile)[1])
QSWATUtils.copyShapefile(gv.streamFile, base + 'act', gv.shapesDir)
actStreamFile = QSWATUtils.join(gv.shapesDir, base + 'act.shp')
actstreamLayer = QgsVectorLayer(actStreamFile, FileTypes.legend(FileTypes._STREAMREACHES), 'ogr')
basinIdx = self.getIndex(actstreamLayer, QSWATTopology._WSNO)
if basinIdx < 0:
return
provider = actstreamLayer.dataProvider()
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([basinIdx])
toDelete = []
for feature in provider.getFeatures(request):
if feature[basinIdx] in self.upstreamFromInlets:
toDelete.append(feature.id())
if provider.deleteFeatures(toDelete):
subLayer = root.findLayer(streamLayer.id())
actstreamLayer = QSWATUtils.getLayerByFilename(layers, actStreamFile, FileTypes._STREAMREACHES, gv, subLayer,
QSWATUtils._WATERSHED_GROUP_NAME, True)[0]
QSWATUtils.setLayerVisibility(streamLayer, False, root)
def getReservoirId(self, channelData, floodscape):
"""Return reservoir id, if any, else 0."""
lsuData = channelData.get(floodscape, None)
if lsuData is not None and lsuData.waterBody is not None and lsuData.waterBody.isReservoir():
return lsuData.waterBody.id
return 0
def getPondId(self, channelData, floodscape):
"""Return pond id, if any, else 0."""
lsuData = channelData.get(floodscape, None)
if lsuData is not None and lsuData.waterBody is not None and lsuData.waterBody.isPond():
return lsuData.waterBody.id
return 0
def mergeChannelData(self, mergeChannels):
"""Generate and return map of channel to MergedChannelData."""
# first pass: collect data for unmerged channels
mergedChannelData = dict()
for channel in self.channelToSWATChannel.keys():
if channel not in mergeChannels:
channelData = self.channelsData[channel]
mergedChannelData[channel] = MergedChannelData(self.drainAreas[channel],
self.channelLengths[channel],
self.channelSlopes[channel],
channelData.lowerZ,
channelData.upperZ)
# second pass: add data for merged channels
for source, target in mergeChannels.items():
channelData = self.channelsData[source]
final = self.finalTarget(target, mergeChannels)
mergedChannelData[final].add(self.drainAreas[source],
self.channelLengths[source],
self.channelSlopes[source],
channelData.lowerZ,
channelData.upperZ)
return mergedChannelData
def finalTarget(self, target, mergeChannels):
"""Find final target of merges."""
nxt = mergeChannels.get(target, -1)
if nxt < 0:
return target
else:
return self.finalTarget(nxt, mergeChannels)
def finalDownstream(self, start, mergeChannels):
"""Find downstream channel from start, skipping merged channels, and return it."""
chLink1 = self.finalTarget(start, mergeChannels)
return self.finalTarget(self.getDownChannel(chLink1), mergeChannels)
def routeChannelsOutletsAndBasins(self, basins, mergedChannels, mergees, extraPoints, gv):
"""Add channels, lakes, basins, point sources, reservoirs, inlets and outlets to main gis_routing table."""
chCat = 'CH'
subbasinCat = 'SUB'
ptCat = 'PT'
resCat = 'RES'
pondCat = 'PND'
xCat = 'X'
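# Each row inserted via DBUtils._ROUTINGINSERTSQL appears to be (source id, source category, sink id, sink category, percent):
# e.g. a point source routed 100% to a channel, or a lake routed 0% to a secondary outlet channel
# so the connection is recorded without sending flow down it.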
# first associate any inlets, point sources and reservoirs with appropriate channels
if gv.useGridModel:
# no merging
channelToInlet = self.chLinkToInlet
channelToPtSrc = self.chLinkToPtSrc
else:
channelToInlet = dict()
for subbasin, inlet in self.inlets.items():
# find an inlet channel for this subbasin
found = False
for channel, data in self.channelsData.items():
chBasin = self.chLinkToChBasin.get(channel, -1)
if subbasin == self.chBasinToSubbasin.get(chBasin, -1) and \
QSWATTopology.coincidentPoints(QgsPointXY(data.upperX, data.upperY),
inlet[1], self.xThreshold, self.yThreshold):
channelToInlet[self.finalTarget(channel, mergedChannels)] = inlet
found = True
break
if not found:
QSWATUtils.error('Failed to find channel for inlet to subbasin {0}'.format(subbasin), gv.isBatch)
# map point sources to the unmerged channels they drain into
channelToPtSrc = dict()
for channel, ptsrc in self.chLinkToPtSrc.items():
channelToPtSrc[channel] = ptsrc
#QSWATUtils.loginfo('Channel {0} merged to {1} has point source {2}'.format(channel, self.finalTarget(channel, mergedChannels), ptsrc[0]))
# add point sources at stream sources
for channel, ptsrc in self.chPointSources.items():
if channel not in channelToPtSrc and channel not in mergees and \
channel not in self.chLinkInsideLake and \
not (gv.useGridModel and channel in self.chLinkFromLake): # does not already have a point source, is not merged, and is not inside a lake
channelToPtSrc[channel] = ptsrc
# map channels to water bodies that replace them as drainage targets
# and water bodies to channels they drain into
floodscape = QSWATUtils._FLOODPLAIN if gv.useLandscapes else QSWATUtils._NOLANDSCAPE
channelToWater = dict()
for basinData in basins.values():
for channel, channelData in basinData.getLsus().items():
lsuData = channelData.get(floodscape, None)
if lsuData is not None and lsuData.waterBody is not None and not lsuData.waterBody.isUnknown():
channelToWater[channel] = (lsuData.waterBody.id, lsuData.waterBody.waterRole)
try:
with self.db.conn as conn:
curs = conn.cursor()
routedPoints = []
routedWater = []
routedChannels = []
# routedSubbasins = []
for channel, SWATChannel in self.channelToSWATChannel.items():
if channel in mergedChannels:
# all that is needed is to map its point source to the merge target
ptsrc = channelToPtSrc.get(channel, None)
if ptsrc is not None:
ptsrcId = ptsrc[0]
if ptsrcId not in routedPoints:
finalChannel = self.finalTarget(channel, mergedChannels)
wid, role = channelToWater.get(finalChannel, (-1, -1))
if wid >= 0:
wCat = resCat if role == 1 else pondCat
curs.execute(DBUtils._ROUTINGINSERTSQL,
(ptsrcId, ptCat, wid, wCat, 100))
else:
finalSWATChannel = self.channelToSWATChannel[finalChannel]
curs.execute(DBUtils._ROUTINGINSERTSQL,
(ptsrcId, ptCat, finalSWATChannel, chCat, 100))
routedPoints.append(ptsrcId)
continue
# if channel is lake outflow
# if main outflow, route lake to outlet and outlet to channel
# else route 0% of lake to channel
outLakeId = self.chLinkFromLake.get(channel, -1)
if outLakeId >= 0:
lakeData = self.lakesData[outLakeId]
wCat = resCat if lakeData.waterRole == 1 else pondCat
if channel == lakeData.outChLink:
# main outlet
outletId = lakeData.outPoint[1]
curs.execute(DBUtils._ROUTINGINSERTSQL, (outLakeId, wCat, outletId, ptCat, 100))
if outletId not in routedPoints:
if gv.useGridModel and self.downChannels.get(channel, -1) < 0:
# we have an internal lake exit: route outlet id to watershed exit
curs.execute(DBUtils._ROUTINGINSERTSQL, (outletId, ptCat, 0, xCat, 100))
else:
curs.execute(DBUtils._ROUTINGINSERTSQL, (outletId, ptCat, SWATChannel, chCat, 100))
routedPoints.append(outletId)
else:
# other outlet
curs.execute(DBUtils._ROUTINGINSERTSQL, (outLakeId, wCat, SWATChannel, chCat, 0))
# check if channel routes into lake
inLakeId = self.chLinkIntoLake.get(channel, -1)
if inLakeId >= 0:
# route its point source to the channel
ptsrc = channelToPtSrc.get(channel, None)
if ptsrc is not None:
ptsrcId = ptsrc[0]
if ptsrcId not in routedPoints:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(ptsrcId, ptCat, SWATChannel, chCat, 100))
routedPoints.append(ptsrcId)
# route the channel into its outlet, and the outlet into the lake
lakeData = self.lakesData[inLakeId]
outletId = lakeData.inChLinks[channel][0]
wCat = resCat if lakeData.waterRole == 1 else pondCat
if SWATChannel not in routedChannels:
curs.execute(DBUtils._ROUTINGINSERTSQL, (SWATChannel, chCat, outletId, ptCat, 100))
routedChannels.append(SWATChannel)
if outletId not in routedPoints:
curs.execute(DBUtils._ROUTINGINSERTSQL, (outletId, ptCat, inLakeId, wCat, 100))
routedPoints.append(outletId)
if not gv.useGridModel:
continue # since we know it is into the lake and so cannot have a downstream channel or be a subbasin outlet
if gv.useGridModel:
subbasin = self.chLinkToChBasin[channel]
else:
chBasin = self.chLinkToChBasin.get(channel, -1)
subbasin = self.chBasinToSubbasin.get(chBasin, -1)
SWATBasin = self.subbasinToSWATBasin.get(subbasin, 0)
if SWATBasin == 0:
continue
# if channel is inside lake ignore it unless a lake outflow
if channel in self.chLinkInsideLake and outLakeId < 0:
continue
dsChannel = self.finalDownstream(channel, mergedChannels)
dsSWATChannel = self.channelToSWATChannel.get(dsChannel, 0)
wid, role = channelToWater.get(channel, (-1, -1))
wCat = resCat if role == 1 else pondCat
inlet = channelToInlet.get(channel, None)
if inlet is not None:
# route inlet to channel or water
if wid >= 0:
curs.execute(DBUtils._ROUTINGINSERTSQL, (inlet[0], ptCat, wid, wCat, 100))
else:
curs.execute(DBUtils._ROUTINGINSERTSQL, (inlet[0], ptCat, SWATChannel, chCat, 100))
(pointId, _, outletChannel) = self.outlets[subbasin]
if channel == outletChannel or gv.useGridModel:
# subbasin outlet: channel routes to outlet point of subbasin; outlet routes to downstream channel
# but with some exceptions:
# - if the channel is replaced by a reservoir, this is routed to the outlet instead
# - if subbasin has an extra point source, this is added to its outlet channel or reservoir
if gv.useGridModel:
ptsrc = channelToPtSrc.get(channel, None)
else:
ptsrc = None
# if ptsrc is None:
# ptsrc = self.extraPtSrcs.get(subbasin, None)
if ptsrc is not None:
# route it to the outlet channel, unless already routed
ptsrcId = ptsrc[0]
if ptsrcId not in routedPoints:
if wid < 0:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(ptsrcId, ptCat, SWATChannel, chCat, 100))
else:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(ptsrcId, ptCat, wid, wCat, 100))
routedPoints.append(ptsrcId)
if wid >= 0:
# need to check if this is a reservoir that is continued in the downstream subbasin
# to make sure we only route it once, and at its final downstream end
widDown, _ = channelToWater.get(dsChannel, (-1, -1))
if wid not in routedWater and wid != widDown:
# route water to water point and water point to outlet
(waterId, _, _) = self.chLinkToWater.get(channel, (-1, None, -1))
if waterId < 0:
(waterId, ptId, _) = self.foundReservoirs.get(channel, (-1, -1, None))
else:
# it is safe to use same id for reservoir and reservoir outlet point when
# using DSNODEID from inlets/outlets file
ptId = waterId
if waterId < 0:
QSWATUtils.error('Cannot find water point for channel {0}'
.format(SWATChannel), gv.isBatch)
else:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(wid, wCat, ptId, ptCat, 100))
if ptId not in routedPoints:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(ptId, ptCat, pointId, ptCat, 100))
routedPoints.append(ptId)
routedWater.append(wid)
elif SWATChannel not in routedChannels:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(SWATChannel, chCat, pointId, ptCat, 100))
routedChannels.append(SWATChannel)
if pointId not in routedPoints:
if dsSWATChannel > 0:
widDown, roleDown = channelToWater.get(dsChannel, (-1, -1))
if widDown >= 0:
wCat = resCat if roleDown == 1 else pondCat
curs.execute(DBUtils._ROUTINGINSERTSQL,
(pointId, ptCat, widDown, wCat, 100))
else:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(pointId, ptCat, dsSWATChannel, chCat, 100))
else:
# watershed outlet: mark point as category X
curs.execute(DBUtils._ROUTINGINSERTSQL,
(pointId, ptCat, 0, xCat, 100))
routedPoints.append(pointId)
else:
# channel and downstream channel within a subbasin:
# channel routes to downstream channel unless it is replaced by water
# and if it has a point source this is added
assert dsSWATChannel > 0, 'Channel {0} has no downstream channel'.format(channel)
widDown, roleDown = channelToWater.get(dsChannel, (-1, -1))
if wid >= 0:
if wid not in routedWater:
if widDown >= 0:
# if wid == widDown wid is only a part water body
# and we will eventually route the one downstream
if wid != widDown:
# route water to water point and water point to widDown
(waterId, _, _) = self.chLinkToWater.get(channel, (-1, None, -1))
if waterId < 0:
(waterId, ptId, _) = self.foundReservoirs.get(channel, (-1, -1, None))
else:
# it is safe to use same id for reservoir and reservoir outlet point when
# using DSNODEID from inlets/outlets file
ptId = waterId
if waterId < 0:
QSWATUtils.error('Cannot find water point for channel {0}'
.format(SWATChannel), gv.isBatch)
else:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(wid, wCat, ptId, ptCat, 100))
if ptId not in routedPoints:
wCat = resCat if roleDown == 1 else pondCat
curs.execute(DBUtils._ROUTINGINSERTSQL,
(ptId, ptCat, widDown, wCat, 100))
routedPoints.append(ptId)
routedWater.append(wid)
else:
# route water to water point and water point to downstream channel
(waterId, _, _) = self.chLinkToWater.get(channel, (-1, None, -1))
if waterId < 0:
(waterId, ptId, _) = self.foundReservoirs.get(channel, (-1, -1, None))
else:
# it is safe to use same id for water and water outlet point when
# using DSNODEID from inlets/outlets file
ptId = waterId
if waterId < 0:
QSWATUtils.error('Cannot find water point for channel {0}'
.format(SWATChannel), gv.isBatch)
else:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(wid, wCat, ptId, ptCat, 100))
if ptId not in routedPoints:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(ptId, ptCat, dsSWATChannel, chCat, 100))
routedPoints.append(ptId)
routedWater.append(wid)
elif SWATChannel not in routedChannels:
if widDown >= 0:
# insert an outlet point so that channel's contribution to reservoir
# is included in outputs
self.pointId += 1
extraPoints.append((channel, self.pointId))
curs.execute(DBUtils._ROUTINGINSERTSQL,
(SWATChannel, chCat, self.pointId, ptCat, 100))
wCat = resCat if roleDown == 1 else pondCat
curs.execute(DBUtils._ROUTINGINSERTSQL,
(self.pointId, ptCat, widDown, wCat, 100))
routedPoints.append(self.pointId)
else:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(SWATChannel, chCat, dsSWATChannel, chCat, 100))
routedChannels.append(SWATChannel)
# also route point source, if any, to channel or water
ptsrc = channelToPtSrc.get(channel, None)
if ptsrc is not None:
ptsrcId = ptsrc[0]
if ptsrcId not in routedPoints:
if wid > 0:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(ptsrcId, ptCat, wid, wCat, 100))
else:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(ptsrcId, ptCat, SWATChannel, chCat, 100))
routedPoints.append(ptsrcId)
# route lakes without outlet channels to main outlet points
for lakeId, lakeData in self.lakesData.items():
if lakeData.outChLink == -1:
(subbasin, lakeOutletId, _, _) = lakeData.outPoint
(outletId, _, _) = self.outlets[subbasin]
wCat = resCat if lakeData.waterRole == 1 else pondCat
# route the lake to its lake outlet, the lake outlet to the main outlet, and mark main outlet as category X
curs.execute(DBUtils._ROUTINGINSERTSQL, (lakeId, wCat, lakeOutletId, ptCat, 100))
if lakeOutletId not in routedPoints:
curs.execute(DBUtils._ROUTINGINSERTSQL, (lakeOutletId, ptCat, outletId, ptCat, 100))
routedPoints.append(lakeOutletId)
if outletId not in routedPoints:
curs.execute(DBUtils._ROUTINGINSERTSQL, (outletId, ptCat, 0, xCat, 100))
routedPoints.append(outletId)
# route subbasin to outlet points
# or to lake if outlet in lake
for subbasin, (pointId, _, chLink) in self.outlets.items():
SWATBasin = self.subbasinToSWATBasin.get(subbasin, 0)
if SWATBasin == 0:
continue
if gv.useGridModel:
if chLink in self.chLinkInsideLake or chLink in self.chLinkFromLake:
continue
lakeId = self.outletsInLake.get(subbasin, None)
if lakeId is None:
curs.execute(DBUtils._ROUTINGINSERTSQL,
(SWATBasin, subbasinCat, pointId, ptCat, 100))
else:
lakeData = self.lakesData[lakeId]
wCat = resCat if lakeData.waterRole == 1 else pondCat
curs.execute(DBUtils._ROUTINGINSERTSQL,
(SWATBasin, subbasinCat, lakeId, wCat, 100))
return True
except Exception:
QSWATUtils.loginfo('Routing channels, outlets and subbasins failed: {0}'.format(traceback.format_exc()))
return False
@staticmethod
def removeFields(provider, keepFieldNames, fileName, isBatch):
"""Remove fields other than keepFieldNames from shapefile fileName with provider."""
toDelete = []
fields = provider.fields()
for idx in range(fields.count()):
name = fields.field(idx).name()
if name not in keepFieldNames:
toDelete.append(idx)
if len(toDelete) > 0:
OK = provider.deleteAttributes(toDelete)
if not OK:
QSWATUtils.error('Cannot remove fields from shapefile {0}'.format(fileName), isBatch)
def setPenWidth(self, data, provider):
"""Scale wid2 data to 1 .. 4 and write to layer."""
minW = float('inf')
maxW = 0
for val in data.values():
minW = min(minW, val)
maxW = max(maxW, val)
if maxW > minW: # guard against division by zero
rng = maxW - minW
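# linear map sending minW to 1.0 and maxW to 4.0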
fun = lambda x: (x - minW) * 3 / rng + 1.0
else:
fun = lambda _: 1.0
chIdx = provider.fieldNameIndex(QSWATTopology._CHANNEL)
if chIdx < 0:
QSWATUtils.error('Cannot find {0} field in channels results template'.format(QSWATTopology._CHANNEL))
return
penIdx = provider.fieldNameIndex(QSWATTopology._PENWIDTH)
if penIdx < 0:
QSWATUtils.error('Cannot find {0} field in channels results template'.format(QSWATTopology._PENWIDTH))
return
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry).setSubsetOfAttributes([chIdx, penIdx])
mmap = dict()
for f in provider.getFeatures(request):
ch = f[chIdx]
width = data.get(ch, minW)
mmap[f.id()] = {penIdx: fun(width)}
OK = provider.changeAttributeValues(mmap)
if not OK:
QSWATUtils.error('Cannot edit channels results template', self.isBatch)
def makeOutletThresholds(self, gv, root):
"""
Make file like D8 contributing area but with heightened values at subbasin outlets.
Return -1 if cannot make the file.
"""
assert os.path.exists(gv.demFile)
demBase = os.path.splitext(gv.demFile)[0]
if not QSWATUtils.isUpToDate(gv.demFile, gv.ad8File):
# Probably using existing watershed but switched tabs in delineation form
# At any rate, cannot calculate flow paths
QSWATUtils.loginfo('ad8 file not found or out of date')
return -1
assert len(self.outlets) > 0
gv.hd8File = demBase + 'hd8.tif'
QSWATUtils.removeLayerAndFiles(gv.hd8File, root)
assert not os.path.exists(gv.hd8File)
ad8Layer = QgsRasterLayer(gv.ad8File, 'D8 contributing area')
# calculate maximum contributing area at an outlet point
maxContrib = 0
for (_, pt, _) in self.outlets.values():
contrib = QSWATTopology.valueAtPoint(pt, ad8Layer)
# assume ad8nodata is negative
if not (contrib is None or contrib < 0):
maxContrib = max(maxContrib, contrib)
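# doubling the largest observed contributing area guarantees the value written at each
# outlet cell exceeds every genuine accumulation value in the raster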
threshold = int(2 * maxContrib)
ad8Layer = None
# copy ad8 to hd8 and then set outlet point values to threshold
ad8Ds = gdal.Open(gv.ad8File, gdal.GA_ReadOnly)
driver = gdal.GetDriverByName('GTiff')
hd8Ds = driver.CreateCopy(gv.hd8File, ad8Ds, 0)
if not hd8Ds:
QSWATUtils.error('Failed to create hd8 file {0}'.format(gv.hd8File), self.isBatch)
return -1
ad8Ds = None
QSWATUtils.copyPrj(gv.ad8File, gv.hd8File)
band = hd8Ds.GetRasterBand(1)
transform = hd8Ds.GetGeoTransform()
arr = array([[threshold]])
for (_, pt, _) in self.outlets.values():
x, y = QSWATTopology.projToCell(pt.x(), pt.y(), transform)
band.WriteArray(arr, x, y)
hd8Ds = None
return threshold
def runCalc1(self, file1, func, outFile, gv, isInt=False, fun1=None):
"""Use func as a function to calulate outFile from file1.
Valid input data values have fun1 applied if it is not None"""
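# the rasters are processed through the Raster helper in chunks; on MemoryError the chunk
# count is increased and the whole calculation is retried (see the loop below)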
if os.path.exists(outFile):
QSWATUtils.removeLayerAndFiles(outFile, gv.iface.legendInterface())
r1 = Raster(file1, gv)
rout = Raster(outFile, gv, canWrite=True, isInt=isInt)
completed = False
while not completed:
try:
# safer to mark complete immediately to avoid danger of endless loop
# only way to loop is then the memory error exception being raised
completed = True
r1.open(self.chunkCount)
noData = -99999 if isInt else r1.noData
rout.open(self.chunkCount, numRows=r1.numRows, numCols=r1.numCols,
transform=r1.ds.GetGeoTransform(), projection=r1.ds.GetProjection(), noData=noData)
for row in range(r1.numRows):
for col in range(r1.numCols):
v1 = r1.read(row, col)
if fun1 is not None and v1 != r1.noData:
v1 = fun1(v1)
vout = func(v1, r1.noData, noData)
rout.write(row, col, vout)
r1.close()
rout.close()
except MemoryError:
QSWATUtils.loginfo('runCalc1 out of memory with chunk count {0}'.format(self.chunkCount))
try:
r1.close()
rout.close()
except Exception:
pass
self.chunkCount += 1
completed = False
if os.path.exists(outFile):
QSWATUtils.copyPrj(file1, outFile)
return True
else:
# QSWATUtils.error(u'Calculator failed', self._gv.isBatch)
return False
def runCalc2(self, file1, file2, func, outFile, gv, isInt=False, fun1=None, fun2=None):
"""Use func as a function to calulate outFile from file1 and file2.
Assumes file1 and file2 have same origina and pixel size.
If file1/2 values are not nodata and fun1/2 are not None, they are applied before func is applied."""
if os.path.exists(outFile):
QSWATUtils.removeLayerAndFiles(outFile, gv.iface.legendInterface())
r1 = Raster(file1, gv)
r2 = Raster(file2, gv)
rout = Raster(outFile, gv, canWrite=True, isInt=isInt)
completed = False
while not completed:
try:
# safer to mark complete immediately to avoid danger of endless loop
# only way to loop is then the memory error exception being raised
completed = True
r1.open(self.chunkCount)
r2.open(self.chunkCount)
noData = -1 if isInt else r1.noData
rout.open(self.chunkCount, numRows=r1.numRows, numCols=r1.numCols,
transform=r1.ds.GetGeoTransform(), projection=r1.ds.GetProjection(), noData=noData)
for row in range(r1.numRows):
for col in range(r1.numCols):
v1 = r1.read(row, col)
if fun1 is not None and v1 != r1.noData:
v1 = fun1(v1)
v2 = r2.read(row, col)
if fun2 is not None and v2 != r2.noData:
v2 = fun2(v2)
vout = func(v1, r1.noData, v2, r2.noData, noData)
rout.write(row, col, vout)
r1.close()
r2.close()
rout.close()
except MemoryError:
QSWATUtils.loginfo('runCalc2 out of memory with chunk count {0}'.format(self.chunkCount))
try:
r1.close()
r2.close()
rout.close()
except Exception:
pass
self.chunkCount += 1
completed = False
if os.path.exists(outFile):
QSWATUtils.copyPrj(file1, outFile)
return True
else:
# QSWATUtils.error(u'Calculator failed', self._gv.isBatch)
return False
def runCalc2Trans(self, file1, file2, func, outFile, baseFile, gv, isInt=False, fun1=None, fun2=None):
"""Use func as a function to calulate outFile from file1 and file2, using rows, columns and extent of baseFile.
If file1/2 values are not nodata and fun1/2 are not None, they are applied before func is applied."""
if os.path.exists(outFile):
QSWATUtils.removeLayerAndFiles(outFile, gv.iface.legendInterface())
r1 = Raster(file1, gv)
r2 = Raster(file2, gv)
rout = Raster(outFile, gv, canWrite=True, isInt=isInt)
ds = gdal.Open(baseFile, gdal.GA_ReadOnly)
transform = ds.GetGeoTransform()
numRows = ds.RasterYSize
numCols = ds.RasterXSize
projection = ds.GetProjection()
ds = None
completed = False
while not completed:
try:
# safer to mark complete immediately to avoid danger of endless loop
# only way to loop is then the memory error exception being raised
completed = True
r1.open(self.chunkCount)
r2.open(self.chunkCount)
transform1 = r1.ds.GetGeoTransform()
transform2 = r2.ds.GetGeoTransform()
rowFun1, colFun1 = QSWATTopology.translateCoords(transform, transform1, numRows, numCols)
rowFun2, colFun2 = QSWATTopology.translateCoords(transform, transform2, numRows, numCols)
noData = -1 if isInt else r1.noData
rout.open(self.chunkCount, numRows=numRows, numCols=numCols,
transform=transform, projection=projection, noData=noData)
for row in range(numRows):
y = QSWATTopology.rowToY(row, transform)
row1 = rowFun1(row, y)
row2 = rowFun2(row, y)
for col in range(numCols):
x = QSWATTopology.colToX(col, transform)
col1 = colFun1(col, x)
col2 = colFun2(col, x)
v1 = r1.read(row1, col1)
if fun1 is not None and v1 != r1.noData:
v1 = fun1(v1)
v2 = r2.read(row2, col2)
if fun2 is not None and v2 != r2.noData:
v2 = fun2(v2)
vout = func(v1, r1.noData, v2, r2.noData, noData)
rout.write(row, col, vout)
r1.close()
r2.close()
rout.close()
except MemoryError:
QSWATUtils.loginfo('runCalc2Trans out of memory with chunk count {0}'.format(self.chunkCount))
try:
r1.close()
r2.close()
rout.close()
except Exception:
pass
self.chunkCount += 1
completed = False
if os.path.exists(outFile):
QSWATUtils.copyPrj(baseFile, outFile)
return True
else:
# QSWATUtils.error(u'Calculator failed', self._gv.isBatch)
return False
@staticmethod
def burnStream(streamFile, demFile, burnFile, depth, verticalFactor, isBatch):
"""Create as burnFile a copy of demFile with points on lines streamFile reduced in height by depth metres."""
# use vertical factor to convert from metres to vertical units of DEM
demReduction = float(depth) / verticalFactor
assert not os.path.exists(burnFile)
demDs = gdal.Open(demFile, gdal.GA_ReadOnly)
driver = gdal.GetDriverByName('GTiff')
burnDs = driver.CreateCopy(burnFile, demDs, 0)
if burnDs is None:
QSWATUtils.error('Failed to create burned-in DEM {0}'.format(burnFile), isBatch)
return
demDs = None
QSWATUtils.copyPrj(demFile, burnFile)
band = burnDs.GetRasterBand(1)
nodata = band.GetNoDataValue()
burnTransform = burnDs.GetGeoTransform()
streamLayer = QgsVectorLayer(streamFile, 'Burn in streams', 'ogr')
start = time.process_time()
countHits = 0
countPoints = 0
countChanges = 0
changed = dict()
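# changed records cells already lowered, so each DEM cell is burned at most once even where
# stream segments share vertices; repeat visits are counted as hits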
for reach in streamLayer.getFeatures():
geometry = reach.geometry()
if geometry.isMultipart():
lines = geometry.asMultiPolyline()
else:
lines = [geometry.asPolyline()]
for line in lines:
for i in range(len(line) - 1):
countPoints += 1
p0 = line[i]
px0 = p0.x()
py0 = p0.y()
x0, y0 = QSWATTopology.projToCell(px0, py0, burnTransform)
p1 = line[i+1]
px1 = p1.x()
py1 = p1.y()
x1, y1 = QSWATTopology.projToCell(px1, py1, burnTransform)
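# the loop below walks the raster cells between the two endpoints using a Bresenham-style
# integer line algorithm, swapping axes for steep segments so it always steps along the longer axis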
steep = abs(y1 - y0) > abs(x1 - x0)
if steep:
x0, y0 = y0, x0
x1, y1 = y1, x1
if x0 > x1:
x0, x1 = x1, x0
y0, y1 = y1, y0
deltax = x1 - x0
deltay = abs(y1 - y0)
err = 0
deltaerr = deltay
y = y0
ystep = 1 if y0 < y1 else -1
arr = array([[0.0]])
for x in range(x0, x1+1):
if steep:
if QSWATTopology.addPointToChanged(changed, y, x):
arr = band.ReadAsArray(y, x, 1, 1)
# arr may be None if the stream map extends outside the DEM extent
if arr is not None and arr[0,0] != nodata:
arr[0,0] = arr[0,0] - demReduction
band.WriteArray(arr, y, x)
countChanges += 1
else:
countHits += 1
else:
if QSWATTopology.addPointToChanged(changed, x, y):
arr = band.ReadAsArray(x, y, 1, 1)
# arr may be None if the stream map extends outside the DEM extent
if arr is not None and arr[0,0] != nodata:
arr[0,0] = arr[0,0] - demReduction
band.WriteArray(arr, x, y)
countChanges += 1
else:
countHits += 1
err += deltaerr
if 2 * err < deltax:
continue
y += ystep
err -= deltax
finish = time.process_time()
QSWATUtils.loginfo('Created burned-in DEM {0} in {1!s} milliseconds; {2!s} points; {3!s} hits; {4!s} changes'.format(burnFile, int((finish - start)*1000), countPoints, countHits, countChanges))
@staticmethod
def addPointToChanged(changed, col, row):
"""Changed points held in dictionary column -> row-sortedlist, since it is like a sparse matrix.
Add a point unless ready there. Return true if added.
"""
rows = changed.get(col, [])
inserted = ListFuns.insertIntoSortedList(row, rows, True)
if inserted:
changed[col] = rows
return True
else:
return False
@staticmethod
def valueAtPoint(point, layer):
"""
Get the band 1 value at point in a grid layer.
"""
val, ok = layer.dataProvider().sample(point, 1)
if not ok:
return layer.dataProvider().sourceNoDataValue(1)
else:
return val
def isUpstreamSubbasin(self, subbasin):
"""Return true if a subbasin is upstream from an inlet."""
return subbasin in self.upstreamFromInlets
def pointToLatLong(self, point):
"""Convert a QgsPointXY to latlong coordinates and return it."""
geom = QgsGeometry.fromPointXY(point)
geom.transform(self.transformToLatLong)
return geom.asPoint()
def getIndex(self, layer, name, ignoreMissing=False):
"""Get the index of a shapefile layer attribute name,
reporting error if not found, unless ignoreMissing is true.
"""
# field names are truncated to 10 characters when created, so only search for up to 10 characters
# also allow any case, since using lookupField rather than indexOf
index = layer.fields().lookupField(name[:10])
if not ignoreMissing and index < 0:
QSWATUtils.error('Cannot find field {0} in {1}'.format(name, QSWATUtils.layerFileInfo(layer).filePath()), self.isBatch)
return index
def getProviderIndex(self, provider, name, ignoreMissing=False):
"""Get the index of a shapefile provider attribute name,
reporting error if not found, unless ignoreMissing is true.
"""
# field names are truncated to 10 characters when created, so only search for up to 10 characters
index = provider.fieldNameIndex(name[:10])
if not ignoreMissing and index < 0:
QSWATUtils.error('Cannot find field {0} in provider'.format(name), self.isBatch)
return index
def makePointInLine(self, reach, percent):
"""Return a point percent along line from outlet end to next point."""
if self.outletAtStart:
line = QSWATTopology.reachFirstLine(reach.geometry(), self.xThreshold, self.yThreshold)
pt1 = line[0]
pt2 = line[1]
else:
line = QSWATTopology.reachLastLine(reach.geometry(), self.xThreshold, self.yThreshold)
length = len(line)
pt1 = line[length-1]
pt2 = line[length-2]
x = (pt1.x() * (100 - percent) + pt2.x() * percent) / 100.0
y = (pt1.y() * (100 - percent) + pt2.y() * percent) / 100.0
return QgsPointXY(x, y)
def hasOutletAtStart(self, streamLayer, ad8Layer):
"""Returns true iff streamLayer lines have their outlet points at their start points.
If ad8Layer is not None, we are not in an existing watershed, and can rely on accumulations.
Accumulation will be higher at the outlet end.
Finds shapes with a downstream connection, and
determines the orientation by seeing how such a shape is connected to the downstream shape.
If they don't seem to be connected (as may happen after merging subbasins)
tries other shapes with downstream connections, up to 10.
A line is connected to another if their ends are less than dx and dy apart horizontally and vertically.
Assumes the orientation found for this shape can be used generally for the layer.
"""
streamIndex = self.getIndex(streamLayer, QSWATTopology._LINKNO, ignoreMissing=False)
if streamIndex < 0:
QSWATUtils.error('No LINKNO field in stream layer', self.isBatch)
return True # default as true for TauDEM
dsStreamIndex = self.getIndex(streamLayer, QSWATTopology._DSLINKNO, ignoreMissing=False)
if dsStreamIndex < 0:
QSWATUtils.error('No DSLINKNO field in stream layer', self.isBatch)
return True # default as true for TauDEM
if ad8Layer is not None: # only set to non-None if not an existing watershed
# use accumulation difference at ends of reach (or line in reach) to decide
for reach in streamLayer.getFeatures():
geometry = reach.geometry()
if geometry.isMultipart():
lines = geometry.asMultiPolyline()
else:
lines = [geometry.asPolyline()]
for line in lines:
if len(line) > 1: # make sure we haven't picked on an empty line
p1 = line[0]
p2 = line[-1]
acc1 = QSWATTopology.valueAtPoint(p1, ad8Layer)
acc2 = QSWATTopology.valueAtPoint(p2, ad8Layer)
if acc1 != acc2: # degenerate single point line
return acc1 > acc2
# find candidates: links with a down connection
candidates = [] # reach, downReach pairs
for reach in streamLayer.getFeatures():
downLink = reach[dsStreamIndex]
if downLink >= 0:
# find the down reach
downReach = QSWATUtils.getFeatureByValue(streamLayer, streamIndex, downLink)
if downReach is not None:
candidates.append((reach, downReach))
if len(candidates) < 10:
continue
else:
break
else:
QSWATUtils.error('Cannot find link {0!s} in {1}'.format(downLink, QSWATUtils.layerFileInfo(streamLayer).filePath()), self.isBatch)
return True
if candidates == []:
QSWATUtils.error('Cannot find link with a downstream link in {0}. Do you only have one stream?'.format(QSWATUtils.layerFileInfo(streamLayer).filePath()), self.isBatch)
return True
for (upReach, downReach) in candidates:
downGeom = downReach.geometry()
downStart = QSWATTopology.reachFirstLine(downGeom, self.xThreshold, self.yThreshold)
if downStart is None:
continue
downFinish = QSWATTopology.reachLastLine(downGeom, self.xThreshold, self.yThreshold)
if downFinish is None:
continue
upGeom = upReach.geometry()
upStart = QSWATTopology.reachFirstLine(upGeom, self.xThreshold, self.yThreshold)
if upStart is None:
continue
upFinish = QSWATTopology.reachLastLine(upGeom, self.xThreshold, self.yThreshold)
if upFinish is None:
continue
if QSWATTopology.pointOnLine(upStart[0], downFinish, self.xThreshold, self.yThreshold):
return True
if QSWATTopology.pointOnLine(upFinish[-1], downStart, self.xThreshold, self.yThreshold):
return False
QSWATUtils.error('Cannot find physically connected reaches in streams shapefile {0}. Try increasing nearness threshold'.format(QSWATUtils.layerFileInfo(streamLayer).filePath()), self.isBatch)
return True
def saveOutletsAndSources(self, channelLayer, outletLayer, useGridModel):
"""Write outlets, downSubbasins, and (unless useGridModel)
inlets, upstreamFromInlets, and outletChannels tables."""
# in case called twice
self.pointId = 0
self.waterBodyId = 0
self.outlets.clear()
self.inlets.clear()
self.chPointSources.clear()
self.upstreamFromInlets.clear()
self.downSubbasins.clear()
self.chBasinToSubbasin.clear()
chLinkToSubbasin = dict()
downChannels = dict()
chInlets = dict()
chOutlets = dict()
chLinkIndex = self.getIndex(channelLayer, QSWATTopology._LINKNO)
dsChLinkIndex = self.getIndex(channelLayer, QSWATTopology._DSLINKNO)
wsnoIndex = self.getIndex(channelLayer, QSWATTopology._WSNO, ignoreMissing=not useGridModel)
if chLinkIndex < 0 or dsChLinkIndex < 0:
return False
# ignoreMissing for subbasinIndex necessary when useGridModel, since channelLayer is then a streams layer
subbasinIndex = self.getIndex(channelLayer, QSWATTopology._BASINNO, ignoreMissing=useGridModel)
if useGridModel:
if wsnoIndex < 0:
return False
else:
if subbasinIndex < 0:
return False
dsNodeIndex = self.getIndex(channelLayer, QSWATTopology._DSNODEID, ignoreMissing=True)
if outletLayer is not None:
idIndex = self.getIndex(outletLayer, QSWATTopology._ID, ignoreMissing=False)
inletIndex = self.getIndex(outletLayer, QSWATTopology._INLET, ignoreMissing=False)
srcIndex = self.getIndex(outletLayer, QSWATTopology._PTSOURCE, ignoreMissing=False)
resIndex = self.getIndex(outletLayer, QSWATTopology._RES, ignoreMissing=False)
# set pointId to max id value in outletLayer
# and waterBodyId to max reservoir or pond id
request = QgsFeatureRequest().setFlags(QgsFeatureRequest.NoGeometry)
for point in outletLayer.getFeatures(request):
self.pointId = max(self.pointId, point[idIndex])
if point[inletIndex] == 0 and point[resIndex] > 0:
self.waterBodyId = max(self.waterBodyId, point[idIndex])
else:
dsNodeIndex = -1
for reach in channelLayer.getFeatures():
chLink = reach[chLinkIndex]
dsChLink = reach[dsChLinkIndex]
chBasin = reach[wsnoIndex]
geom = reach.geometry()
# for grids, channel basins and subbasins are the same
subbasin = chBasin if useGridModel else reach[subbasinIndex]
chLinkToSubbasin[chLink] = subbasin
if not useGridModel:
self.chBasinToSubbasin[chBasin] = subbasin
downChannels[chLink] = dsChLink
dsNode = reach[dsNodeIndex] if dsNodeIndex >= 0 else -1
if dsNode >= 0 and idIndex >= 0 and inletIndex >= 0 and srcIndex >= 0 and resIndex >= 0:
outletPoint = None
inletPoint = None
for f in outletLayer.getFeatures():
if f[idIndex] == dsNode:
if f[inletIndex] == 0:
if f[resIndex] == 0:
outletPoint = f
break
elif f[srcIndex] == 0:
inletPoint = f
break
if outletPoint is not None:
pt = outletPoint.geometry().asPoint()
chOutlets[chLink] = (self.nonzeroPointId(dsNode), pt)
elif inletPoint is not None:
pt = inletPoint.geometry().asPoint()
chInlets[chLink] = (self.nonzeroPointId(dsNode), pt)
first = QSWATTopology.reachFirstLine(geom, self.xThreshold, self.yThreshold)
if first is None or len(first) < 2:
QSWATUtils.error('It looks like your channels shapefile does not obey the single direction rule, that all channels are either upstream or downstream.', self.isBatch)
return False
last = QSWATTopology.reachLastLine(geom, self.xThreshold, self.yThreshold)
if last is None or len(last) < 2:
QSWATUtils.error('It looks like your channels shapefile does not obey the single direction rule, that all channels are either upstream or downstream.', self.isBatch)
return False
outId, pt = chOutlets.get(chLink, (-1, None))
if pt is None:
self.pointId += 1
outId = self.pointId
self.pointId += 1
srcId = self.pointId
if self.outletAtStart:
if not useGridModel and pt is not None and not QSWATTopology.coincidentPoints(first[0], pt, self.xThreshold, self.yThreshold):
QSWATUtils.error('Outlet point {0} at ({1}, {2}) not coincident with start of channel link {3}'
.format(outId, pt.x(), pt.y(), chLink), self.isBatch)
chOutlets[chLink] = (outId, first[0])
self.chPointSources[chLink] = (srcId, last[-1])
else:
if not useGridModel and pt is not None and not QSWATTopology.coincidentPoints(last[-1], pt, self.xThreshold, self.yThreshold):
QSWATUtils.error('Outlet point {0} at ({1}, {2}) not coincident with end of channel link {3}'
.format(outId, pt.x(), pt.y(), chLink), self.isBatch)
chOutlets[chLink] = (outId, last[-1])
self.chPointSources[chLink] = (srcId, first[0])
# now find the channels which are on subbasin boundaries,
# i.e. their downstream channels are in different basins
hasInlet = False
for chLink, dsChLink in downChannels.items():
subbasin = chLinkToSubbasin[chLink]
if subbasin == QSWATTopology._NOBASIN: # from a zero-length channel
continue
dsSubbasin = chLinkToSubbasin[dsChLink] if dsChLink >= 0 else -1
while dsSubbasin == QSWATTopology._NOBASIN:
# skip past zero-length channels
dsChLink = downChannels.get(dsChLink, -1)
dsSubbasin = chLinkToSubbasin.get(dsChLink, -1)
if subbasin != dsSubbasin:
self.downSubbasins[subbasin] = dsSubbasin
# collect the basin's outlet location:
outletId, outletPt = chOutlets[chLink]
self.outlets[subbasin] = (outletId, outletPt, chLink)
if not useGridModel:
# self.extraResPoints[subbasin] = chResPoints[chLink]
# self.extraPtSrcPoints[subbasin] = chSources[chLink]
inletId, inletPt = chInlets.get(chLink, (-1, None))
if inletPt is not None and dsSubbasin >= 0:
# inlets are associated with downstream basin
self.inlets[dsSubbasin] = (inletId, inletPt)
hasInlet = True
# collect subbasins upstream from inlets
# this looks inefficient, repeatedly going through all basins, but probably few projects have inlets:
if not useGridModel and hasInlet:
for subbasin in self.inlets.keys():
self.addUpstreamSubbasins(subbasin)
return True
def nonzeroPointId(self, dsNode):
"""Return dsNode, or next pointId if dsNode is zero. Used to prevent a zero point id."""
if dsNode == 0:
self.pointId += 1
return self.pointId
return dsNode
def addUpstreamSubbasins(self, start):
"""Add basins upstream from start to upstreamFromInlets."""
for subbasin, downSubbasin in self.downSubbasins.items():
if downSubbasin == start:
self.upstreamFromInlets.add(subbasin)
self.addUpstreamSubbasins(subbasin)
def surroundingLake(self, SWATChannel, useGridModel):
"""Return id of lake containing channel, if any, else -1."""
chLink = self.SWATChannelToChannel[SWATChannel]
lake1 = self.chLinkInsideLake.get(chLink, -1)
if useGridModel and lake1 < 0:
return self.chLinkFromLake.get(chLink, -1)
else:
return lake1
@staticmethod
def maskFun(val, valNoData, mask, maskNoData, resNoData):
"""Result is val unless mask is nodata."""
if val == valNoData or mask == maskNoData:
return resNoData
else:
return val
@staticmethod
def reachFirstLine(geometry, xThreshold, yThreshold):
"""Returns the line of a single polyline,
or a line in a multipolyline whose first point is not adjacent to a point
of another line in the multipolyline.
"""
if not geometry.isMultipart():
return geometry.asPolyline()
mpl = geometry.asMultiPolyline()
numLines = len(mpl)
for i in range(numLines):
linei = mpl[i]
connected = False
if linei is None or len(linei) == 0:
continue
else:
start = linei[0]
for j in range(numLines):
if i != j:
linej = mpl[j]
if QSWATTopology.pointOnLine(start, linej, xThreshold, yThreshold):
connected = True
break
if not connected:
return linei
# should not get here
return None
@staticmethod
def reachLastLine(geometry, xThreshold, yThreshold):
"""Returns the line of a single polyline,
or a line in a multipolyline whose last point is not adjacent to a point
of another line in the multipolyline.
"""
if not geometry.isMultipart():
return geometry.asPolyline()
mpl = geometry.asMultiPolyline()
numLines = len(mpl)
for i in range(numLines):
linei = mpl[i]
connected = False
if linei is None or len(linei) == 0:
continue
else:
finish = linei[-1]
for j in range(numLines):
if i != j:
linej = mpl[j]
if QSWATTopology.pointOnLine(finish, linej, xThreshold, yThreshold):
connected = True
break
if not connected:
return linei
# should not get here
return None
@staticmethod
def pointOnLine(point, line, xThreshold, yThreshold):
"""Return true if point is coincident with a point on the line.
Note this only checks if the point is close to a vertex."""
if line is None or len(line) == 0:
return False
for pt in line:
if QSWATTopology.coincidentPoints(point, pt, xThreshold, yThreshold):
return True
return False
@staticmethod
def coincidentPoints(pt1, pt2, xThreshold, yThreshold):
"""Return true if points are within xThreshold and yThreshold
horizontally and vertically."""
return abs(pt1.x() - pt2.x()) < xThreshold and \
abs(pt1.y() - pt2.y()) < yThreshold
@staticmethod
def colToX(col, transform):
"""Convert column number to X-coordinate."""
return (col + 0.5) * transform[1] + transform[0]
@staticmethod
def rowToY(row, transform):
"""Convert row number to Y-coordinate."""
return (row + 0.5) * transform[5] + transform[3]
#=========currently not used==================================================================
# @staticmethod
# def xToCol(x, transform):
# """Convert X-coordinate to column number."""
# return int((x - transform[0]) / transform[1])
#===========================================================================
#=========currently not used==================================================================
# @staticmethod
# def yToRow(y, transform):
# """Convert Y-coordinate to row number."""
# return int((y - transform[3]) / transform[5])
#===========================================================================
@staticmethod
def cellToProj(col, row, transform):
"""Convert column and row numbers to (X,Y)-coordinates."""
x = (col + 0.5) * transform[1] + transform[0]
y = (row + 0.5) * transform[5] + transform[3]
return (x,y)
@staticmethod
def projToCell(x, y, transform):
"""Convert (X,Y)-coordinates to column and row numbers."""
col = int((x - transform[0]) / transform[1])
row = int((y - transform[3]) / transform[5])
return (col, row)
#==========not currently used=================================================================
# @staticmethod
# def haveSameCoords(band1, transform1, transform2):
# """
# Return true if raster transform1 and transform2 are the same or sufficiently
# close for row/col coordinates of first to be used without reprojection
# as row/col coordinates of the second.
#
# Assumes second raster has sufficient extent.
# We could demand this, but in practice often read rasters well within their extents,
# because only looking at points within a watershed.
# """
# # may work, though may also fail - we are comparing float values
# if transform1 == transform2:
# return True
# # accept the origins as the same if they are within a tenth of the cell size
# # otherwise return false
# if (abs(transform1[0] - transform2[0]) > transform2[1] * 0.1 or \
# abs(transform1[3] - transform2[3]) > abs(transform2[5]) * 0.1):
# return False
# # then check if the vertical/horizontal difference in cell size times the number of rows/columns
# # in the first is less than half the depth/width of a cell in the second
# return abs(transform1[1] - transform2[1]) * band1.XSize < transform2[1] * 0.5 and \
# abs(transform1[5] - transform2[5]) * band1.YSize < abs(transform2[5]) * 0.5
#===========================================================================
@staticmethod
def translateCoords(transform1, transform2, numRows1, numCols1):
"""
Return a pair of functions:
row, latitude -> row and column, longitude -> column
for transforming positions in raster1 to row and column of raster2.
The functions are:
identities on the first argument if the rasters have (sufficiently)
the same origins and cell sizes;
a simple shift on the first argument if the rasters have
the same cell sizes but different origins;
otherwise a full transformation on the second argument.
It is assumed that the first and second arguments are consistent,
ie they identify the same cell in raster1.
"""
# may work, though we are comparing real values
if transform1 == transform2:
return (lambda row, _: row), (lambda col, _: col)
xOrigin1, xSize1, _, yOrigin1, _, ySize1 = transform1
xOrigin2, xSize2, _, yOrigin2, _, ySize2 = transform2
# accept the origins as the same if they are within a tenth of the cell size
sameXOrigin = abs(xOrigin1 - xOrigin2) < xSize2 * 0.1
sameYOrigin = abs(yOrigin1 - yOrigin2) < abs(ySize2) * 0.1
# accept cell sizes as equivalent if vertical/horizontal difference
# in cell size times the number of rows/columns
# in the first is less than half the depth/width of a cell in the second
sameXSize = abs(xSize1 - xSize2) * numCols1 < xSize2 * 0.5
sameYSize = abs(ySize1 - ySize2) * numRows1 < abs(ySize2) * 0.5
if sameXSize:
if sameXOrigin:
xFun = (lambda col, _: col)
else:
# just needs origin shift
# note that int truncates, i.e. rounds towards zero
if xOrigin1 > xOrigin2:
colShift = int((xOrigin1 - xOrigin2) / xSize1 + 0.5)
xFun = lambda col, _: col + colShift
else:
colShift = int((xOrigin2 - xOrigin1) / xSize1 + 0.5)
xFun = lambda col, _: col - colShift
else:
# full transformation
xFun = lambda _, x: int((x - xOrigin2) / xSize2)
if sameYSize:
if sameYOrigin:
yFun = (lambda row, _: row)
else:
# just needs origin shift
# note that int truncates, i.e. rounds towards zero, and y size will be negative
if yOrigin1 > yOrigin2:
rowShift = int((yOrigin2 - yOrigin1) / ySize1 + 0.5)
yFun = lambda row, _: row - rowShift
else:
rowShift = int((yOrigin1 - yOrigin2) / ySize1 + 0.5)
yFun = lambda row, _: row + rowShift
else:
# full transformation
yFun = lambda _, y: int((y - yOrigin2) / ySize2)
# note row, column order of return (same as order of reading rasters)
return yFun, xFun
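# Example use (a sketch mirroring runCalc2Trans above): read the cell of raster2 that
# corresponds to cell (row, col) of a base raster:
#   rowFun, colFun = QSWATTopology.translateCoords(baseTransform, transform2, baseNumRows, baseNumCols)
#   y = QSWATTopology.rowToY(row, baseTransform)
#   x = QSWATTopology.colToX(col, baseTransform)
#   val2 = raster2.read(rowFun(row, y), colFun(col, x))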
@staticmethod
def sameTransform(transform1, transform2, numRows1, numCols1):
"""Return true if transforms are sufficiently close to be regarded as the same,
i.e. row and column numbers for the first can be used without transformation to read the second.
Avoids relying on equality between real numbers."""
# may work, though we are comparing real values
if transform1 == transform2:
return True
xOrigin1, xSize1, _, yOrigin1, _, ySize1 = transform1
xOrigin2, xSize2, _, yOrigin2, _, ySize2 = transform2
# accept the origins as the same if they are within a tenth of the cell size
sameXOrigin = abs(xOrigin1 - xOrigin2) < xSize2 * 0.1
if sameXOrigin:
sameYOrigin = abs(yOrigin1 - yOrigin2) < abs(ySize2) * 0.1
if sameYOrigin:
# accept cell sizes as equivalent if vertical/horizontal difference
# in cell size times the number of rows/columns
# in the first is less than half the depth/width of a cell in the second
sameXSize = abs(xSize1 - xSize2) * numCols1 < xSize2 * 0.5
if sameXSize:
sameYSize = abs(ySize1 - ySize2) * numRows1 < abs(ySize2) * 0.5
return sameYSize
return False
def splitReachByLake(self, lakeGeom, reachGeom, reachData):
"""lakeGeom is a polygon representing a lake. reach is known to intersect wil the lake..
Returns a pair of inflowing and outflowing reaches, either or both of which may be None."""
sourcePt = QgsPointXY(reachData.upperX, reachData.upperY)
sourceToLake = QSWATTopology.toIntersection(reachGeom, lakeGeom, sourcePt, not self.outletAtStart, self.xThreshold, self.yThreshold)
outletPt = QgsPointXY(reachData.lowerX, reachData.lowerY)
outletToLake = QSWATTopology.toIntersection(reachGeom, lakeGeom, outletPt, self.outletAtStart, self.xThreshold, self.yThreshold)
return sourceToLake, outletToLake
@staticmethod
def toIntersection(reachGeom, lakeGeom, start, isUp, xThreshold, yThreshold):
"""Return geometry for sequence of points from start to one before first one that intersects with lakeGeom,
or None if this is empty or start is within the lake; a single point is returned as a zero-length line.
If isUp the search starts from index 0 of the reach, otherwise from the last index."""
if lakeGeom.contains(start):
return None
if reachGeom.isMultipart():
mpl = reachGeom.asMultiPolyline()
else:
mpl = [reachGeom.asPolyline()]
result = []
done = set()
while True:
progress = False
for i in range(len(mpl)):
if i not in done:
line = mpl[i]
if len(line) <= 1:
continue
if isUp:
if QSWATTopology.coincidentPoints(start, line[0], xThreshold, yThreshold):
for pt in line:
if lakeGeom.contains(pt):
length = len(result)
if length < 1:
return None
elif length == 1:
# create zero-length line at result[0]
return QgsGeometry.fromPolylineXY([result[0], result[0]])
return QgsGeometry.fromPolylineXY(result)
result.append(pt)
start = line[-1]
done.add(i)
progress = True
else:
if QSWATTopology.coincidentPoints(start, line[-1], xThreshold, yThreshold):
for pt in reversed(line):
if lakeGeom.contains(pt):
length = len(result)
if length < 1:
return None
elif length == 1:
# create zero-length line at result[0]
return QgsGeometry.fromPolylineXY([result[0], result[0]])
return QgsGeometry.fromPolylineXY(result)
result.insert(0, pt)
start = line[0]
done.add(i)
progress = True
if not progress:
raise Exception('Looping trying to calculate reach')
# @staticmethod
# def splitReach(resGeom, reachGeom, source, outlet, outletAtStart, xThreshold, yThreshold):
# """Split reachGeom into two parts, one from source to reservoir and one from reservoir to outlet.
#
# Assumes the reach has been split into at least two disjoint parts, one flowing from source, the other flowing to outlet.
# Algorithm checks each line in reach geometry, moving up from source or down from outlet until reservoir is reached
# in both cases."""
# sourcePart = []
# outletPart = []
# mpl = reachGeom.asMultiPolyline()
# done = set()
# outletToLakeDone = False
# sourceToLakeDone = False
# while True:
# reduced = False
# for i in xrange(len(mpl)):
# if i not in done:
# line = mpl[i]
# start = line[0]
# finish = line[-1]
# if outletAtStart:
# if not outletToLakeDone and QSWATTopology.coincidentPoints(outlet, start, xThreshold, yThreshold):
# newLine = []
# for pt in line:
# newLine.append(pt)
# if resGeom.intersects(QgsGeometry.fromPointXY(pt)):
# outletToLakeDone = True
# break
# outletPart.append(newLine)
# outlet = finish
# reduced = True
# done.add(i)
# elif not sourceToLakeDone and QSWATTopology.coincidentPoints(source, finish, xThreshold, yThreshold):
# newLine = []
# for pt in reversed(line):
# newLine.insert(0, pt)
# if resGeom.intersects(QgsGeometry.fromPointXY(pt)):
# sourceToLakeDone = True
# break
# sourcePart.append(newLine)
# source = start
# done.add(i)
# reduced = True
# else:
# if not outletToLakeDone and QSWATTopology.coincidentPoints(outlet, finish, xThreshold, yThreshold):
# newLine = []
# for pt in reversed(line):
# newLine.insert(0, pt)
# if resGeom.intersects(QgsGeometry.fromPointXY(pt)):
# outletToLakeDone = True
# break
# outletPart.append(line)
# outlet = start
# done.add(i)
# reduced = True
# elif QSWATTopology.coincidentPoints(source, start, xThreshold, yThreshold):
# newLine = []
# for pt in line:
# newLine.append(pt)
# if resGeom.intersects(QgsGeometry.fromPointXY(pt)):
# sourceToLakeDone = True
# break
# sourcePart.append(line)
# source = finish
# done.add(i)
# reduced = True
# if outletToLakeDone and sourceToLakeDone:
# break
# if not reduced:
# raise Exception('Looping trying to split reach')
# sourceGeom = QgsGeometry.fromPolyline(sourcePart[0]) if len(sourcePart) == 1 else QgsGeometry.fromMultiPolyline(sourcePart)
# outletGeom = QgsGeometry.fromPolyline(outletPart[0]) if len(outletPart) == 1 else QgsGeometry.fromMultiPolyline(outletPart)
# return sourceGeom, outletGeom
@staticmethod
def movePointToPerimeter(pt, lakeGeom, pFile, maxSteps):
"""Point pt is contained in lake. Move it downstream at most maxSteps
using D8 flow direction raster pFile until it is not inside the lake,
returning new point and true.
Return original point and false if failed to find perimeter."""
pLayer = QgsRasterLayer(pFile, 'FlowDir')
ds = gdal.Open(pFile, gdal.GA_ReadOnly)
pNodata = ds.GetRasterBand(1).GetNoDataValue()
transform = ds.GetGeoTransform()
stepCount = 0
pt1 = pt
while stepCount < maxSteps:
if not lakeGeom.contains(pt1):
return pt1, True
dir1 = QSWATTopology.valueAtPoint(pt1, pLayer)
if dir1 is None or dir1 == pNodata:
QSWATUtils.loginfo('Failed to reach lake perimeter: no flow direction available.')
return pt, False
# dir1 is read as a float. Also subtract 1 to get range 0..7
dir0 = int(dir1) - 1
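# assumption: QSWATUtils._dX and _dY hold the column/row offsets for the eight D8 flow
# directions (TauDEM numbers them 1..8 anticlockwise from east), so this steps one cell downstream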
col, row = QSWATTopology.projToCell(pt1.x(), pt1.y(), transform)
col1, row1 = col + QSWATUtils._dX[dir0], row + QSWATUtils._dY[dir0]
x1, y1 = QSWATTopology.cellToProj(col1, row1, transform)
pt1 = QgsPointXY(x1, y1)
stepCount += 1
QSWATUtils.loginfo('Failed to reach lake perimeter in {0} steps.'.format(maxSteps))
return pt, False
| 53.646138 | 379 | 0.555853 | 19,873 | 209,059 | 5.82836 | 0.092437 | 0.003833 | 0.002953 | 0.009687 | 0.351965 | 0.305965 | 0.246152 | 0.209381 | 0.18683 | 0.165212 | 0 | 0.012871 | 0.362644 | 209,059 | 3,896 | 380 | 53.659908 | 0.856407 | 0.200202 | 0 | 0.382778 | 0 | 0.005382 | 0.042987 | 0.000461 | 0 | 0 | 0 | 0.000513 | 0.003027 | 1 | 0.030272 | false | 0.001682 | 0.006391 | 0 | 0.12479 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e6a67d7be173d84fc383903999420aeb0fe6bba | 2,079 | py | Python | examples/measures_openml_sklearn.py | openml/openml-python-contrib | a3480b05483bd66e0fe42347b1f93261d7373ec9 | [
"Apache-2.0"
] | 1 | 2018-09-19T09:45:25.000Z | 2018-09-19T09:45:25.000Z | examples/measures_openml_sklearn.py | openml/openml-python-contrib | a3480b05483bd66e0fe42347b1f93261d7373ec9 | [
"Apache-2.0"
] | 2 | 2018-10-09T23:14:32.000Z | 2019-07-12T14:57:54.000Z | examples/measures_openml_sklearn.py | openml/openml-python-contrib | a3480b05483bd66e0fe42347b1f93261d7373ec9 | [
"Apache-2.0"
] | null | null | null | # This script searches for a given OpenML (Weka) measure which sklearn measure
# is the same
import argparse
import openml
import sklearn
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--task_id', type=int, default=6) # Arbitrary, in this case Letter
parser.add_argument('--flow_id', type=int, default=8353) # Arbitrary, in this case Pipeline with SVC
parser.add_argument('--num_runs', type=int, default=100) # Number of runs to inspect
parser.add_argument('--openml_measure', type=str, default='f_measure')
parser.add_argument('--sklearn_metric', type=str, default='f1_score')
args_ = parser.parse_args()
return args_
SKLEARN_AVERAGE_FUNCTIONS = ['micro', 'macro', 'weighted']
def run_script():
args = parse_args()
runs = openml.runs.list_runs(size=args.num_runs,
flow=[args.flow_id],
task=[args.task_id])
if len(runs) != args.num_runs:
raise ValueError('Obtained too few runs: %d' % len(runs))
# ideally, we would like to use the openml scores per fold (to remove bias from averaging),
# but these are not always calculated for all measures
differences = {avg_fn: list() for avg_fn in SKLEARN_AVERAGE_FUNCTIONS}
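# for each run, compare the stored OpenML (Weka) score with the sklearn metric recomputed
# from the run's predictions under each averaging scheme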
for run_id in runs:
run = openml.runs.get_run(run_id)
openml_score = run.evaluations[args.openml_measure]
for avg_fn in differences:
kwargs = {'average': avg_fn}
score = run.get_metric_fn(getattr(sklearn.metrics, args.sklearn_metric), kwargs)
score_avg = sum(score) / len(score)
difference = openml_score - score_avg
differences[avg_fn].append(difference)
differences_squared = {avg_fn: sum(map(lambda x: x*x, diffs)) for avg_fn, diffs in differences.items()}
differences_max = {avg_fn: max(diffs) for avg_fn, diffs in differences.items()}
# lower is better in both cases
print('squared', differences_squared)
print('max', differences_max)
if __name__ == '__main__':
run_script()
| 37.8 | 107 | 0.671477 | 283 | 2,079 | 4.720848 | 0.39576 | 0.033683 | 0.063623 | 0.023952 | 0.053892 | 0.053892 | 0.053892 | 0.053892 | 0 | 0 | 0 | 0.005556 | 0.220779 | 2,079 | 54 | 108 | 38.5 | 0.819136 | 0.17316 | 0 | 0 | 0 | 0 | 0.084795 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.083333 | 0 | 0.166667 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e6b034faca93125528bf3599a36ef5610e77fa5 | 9,052 | py | Python | ibis/backends/tests/test_client.py | EdAbati/ibis | 23a5a80164cc276b4da1815f26bf387acc9025ad | [
"Apache-2.0"
] | null | null | null | ibis/backends/tests/test_client.py | EdAbati/ibis | 23a5a80164cc276b4da1815f26bf387acc9025ad | [
"Apache-2.0"
] | null | null | null | ibis/backends/tests/test_client.py | EdAbati/ibis | 23a5a80164cc276b4da1815f26bf387acc9025ad | [
"Apache-2.0"
] | null | null | null | import pandas as pd
import pandas.testing as tm
import pytest
from pytest import mark
import ibis
import ibis.expr.datatypes as dt
from ibis.util import guid
@pytest.fixture
def new_schema():
return ibis.schema([('a', 'string'), ('b', 'bool'), ('c', 'int32')])
def _create_temp_table_with_schema(con, temp_table_name, schema, data=None):
con.drop_table(temp_table_name, force=True)
con.create_table(temp_table_name, schema=schema)
temporary = con.table(temp_table_name)
assert len(temporary.execute()) == 0
if data is not None and isinstance(data, pd.DataFrame):
con.load_data(temp_table_name, data, if_exists='append')
assert len(temporary.execute()) == len(data.index)
tm.assert_frame_equal(temporary.execute(), data)
return temporary
def test_load_data_sqlalchemy(
alchemy_backend, alchemy_con, alchemy_temp_table
):
sch = ibis.schema(
[
('first_name', 'string'),
('last_name', 'string'),
('department_name', 'string'),
('salary', 'float64'),
]
)
df = pd.DataFrame(
{
'first_name': ['A', 'B', 'C'],
'last_name': ['D', 'E', 'F'],
'department_name': ['AA', 'BB', 'CC'],
'salary': [100.0, 200.0, 300.0],
}
)
alchemy_con.create_table(alchemy_temp_table, schema=sch)
alchemy_con.load_data(alchemy_temp_table, df, if_exists='append')
result = alchemy_con.table(alchemy_temp_table).execute()
alchemy_backend.assert_frame_equal(df, result)
@mark.parametrize(
('expr_fn', 'expected'),
[
(lambda t: t.string_col, [('string_col', dt.String)]),
(
lambda t: t[t.string_col, t.bigint_col],
[('string_col', dt.String), ('bigint_col', dt.Int64)],
),
],
)
@mark.notimpl(["datafusion"])
def test_query_schema(ddl_backend, ddl_con, expr_fn, expected):
expr = expr_fn(ddl_backend.functional_alltypes)
# we might need a public API for it
ast = ddl_con.compiler.to_ast(expr, ddl_backend.make_context())
schema = ddl_con.ast_schema(ast)
# clickhouse columns has been defined as non-nullable
# whereas other backends don't support non-nullable columns yet
expected = ibis.schema(
[
(name, dtype(nullable=schema[name].nullable))
for name, dtype in expected
]
)
assert schema.equals(expected)
@mark.parametrize(
'sql',
[
'select * from functional_alltypes limit 10',
'select * from functional_alltypes \nlimit 10\n',
],
)
@mark.notimpl(["postgres", "mysql", "datafusion", "sqlite"])
def test_sql(ddl_backend, ddl_con, sql):
# execute the expression using SQL query
ddl_con.sql(sql).execute()
@mark.notimpl(["datafusion", "clickhouse"])
def test_create_table_from_schema(rw_con, rw_backend, new_schema, temp_table):
rw_con.create_table(temp_table, schema=new_schema)
t = rw_con.table(temp_table)
for k, i_type in t.schema().items():
assert new_schema[k] == i_type
@mark.notimpl(
[
"postgres",
"sqlite",
"mysql",
"pandas",
"dask",
"datafusion",
"clickhouse",
]
)
def test_rename_table(rw_con, temp_table, new_schema):
temp_table_original = f'{temp_table}_original'
rw_con.create_table(temp_table_original, schema=new_schema)
t = rw_con.table(temp_table_original)
t.rename(temp_table)
assert rw_con.table(temp_table) is not None
assert temp_table in rw_con.list_tables()
@mark.notimpl(["datafusion", "clickhouse"])
@mark.never(["impala", "pyspark"], reason="No non-nullable datatypes")
def test_nullable_input_output(rw_con, temp_table):
sch = ibis.schema(
[
('foo', 'int64'),
('bar', ibis.expr.datatypes.int64(nullable=False)),
('baz', 'boolean'),
]
)
rw_con.create_table(temp_table, schema=sch)
t = rw_con.table(temp_table)
assert t.schema().types[0].nullable
assert not t.schema().types[1].nullable
assert t.schema().types[2].nullable
# view tests
@mark.only_on_backends(["impala"])
def test_create_drop_view(ddl_con, ddl_backend, temp_view):
# setup
table_name = 'functional_alltypes'
expr = ddl_con.table(table_name).limit(1)
# create a new view
ddl_con.create_view(temp_view, expr)
# check if the view was created
assert temp_view in ddl_con.list_tables()
t_expr = ddl_con.table(table_name)
v_expr = ddl_con.table(temp_view)
# check if the view and the table has the same fields
assert set(t_expr.schema().names) == set(v_expr.schema().names)
@mark.notimpl(["postgres", "mysql", "clickhouse", "datafusion"])
def test_separate_database(
ddl_con, alternate_current_database, current_data_db
):
# using alternate_current_database switches "con" current
# database to a temporary one until a test is over
tmp_db = ddl_con.database(alternate_current_database)
# verifying we can open another db which isn't equal to current
db = ddl_con.database(current_data_db)
assert db.name == current_data_db
assert tmp_db.name == alternate_current_database
def test_insert_no_overwrite_from_dataframe(
alchemy_backend, alchemy_con, test_employee_schema, test_employee_data_2
):
temp_table = f'temp_to_table_{guid()}'
temporary = _create_temp_table_with_schema(
alchemy_con,
temp_table,
test_employee_schema,
)
alchemy_con.insert(temp_table, obj=test_employee_data_2, overwrite=False)
assert len(temporary.execute()) == 3
tm.assert_frame_equal(temporary.execute(), test_employee_data_2)
def test_insert_overwrite_from_dataframe(
alchemy_backend,
alchemy_con,
test_employee_schema,
test_employee_data_1,
test_employee_data_2,
):
temp_table = f'temp_to_table_{guid()}'
temporary = _create_temp_table_with_schema(
alchemy_con,
temp_table,
test_employee_schema,
data=test_employee_data_1,
)
alchemy_con.insert(temp_table, obj=test_employee_data_2, overwrite=True)
assert len(temporary.execute()) == 3
tm.assert_frame_equal(temporary.execute(), test_employee_data_2)
def test_insert_no_overwite_from_expr(
alchemy_backend, alchemy_con, test_employee_schema, test_employee_data_2
):
temp_table = f'temp_to_table_{guid()}'
temporary = _create_temp_table_with_schema(
alchemy_con,
temp_table,
test_employee_schema,
)
from_table_name = f'temp_from_table_{guid()}'
from_table = _create_temp_table_with_schema(
alchemy_con,
from_table_name,
test_employee_schema,
data=test_employee_data_2,
)
alchemy_con.insert(temp_table, obj=from_table, overwrite=False)
assert len(temporary.execute()) == 3
tm.assert_frame_equal(temporary.execute(), from_table.execute())
def test_insert_overwrite_from_expr(
alchemy_backend,
alchemy_con,
test_employee_schema,
test_employee_data_1,
test_employee_data_2,
):
temp_table = f'temp_to_table_{guid()}'
temporary = _create_temp_table_with_schema(
alchemy_con,
temp_table,
test_employee_schema,
data=test_employee_data_1,
)
from_table_name = f'temp_from_table_{guid()}'
from_table = _create_temp_table_with_schema(
alchemy_con,
from_table_name,
test_employee_schema,
data=test_employee_data_2,
)
alchemy_con.insert(temp_table, obj=from_table, overwrite=True)
assert len(temporary.execute()) == 3
tm.assert_frame_equal(temporary.execute(), from_table.execute())
def test_list_databases(alchemy_backend, alchemy_con):
# Every backend has its own databases
TEST_DATABASES = {
'sqlite': ['main', 'base'],
'postgres': ['postgres', 'ibis_testing'],
'mysql': ['ibis_testing', 'information_schema'],
}
assert alchemy_con.list_databases() == TEST_DATABASES[alchemy_con.name]
@mark.never(
["postgres"], reason="schemas and databases are different in postgres"
)
def test_list_schemas(alchemy_backend, alchemy_con):
with pytest.warns(FutureWarning):
schemas = alchemy_con.list_schemas()
assert schemas == alchemy_con.list_databases()
def test_verify(ddl_backend, ddl_con):
expr = ddl_con.table('functional_alltypes').double_col.sum()
with pytest.warns(FutureWarning):
assert expr.verify()
with pytest.warns(FutureWarning):
assert ddl_backend.api.verify(expr)
@mark.only_on_backends(["postgres"])
def test_not_verify(alchemy_con, alchemy_backend):
# There is no expression that can't be compiled to any backend
# Testing `not verify()` only for an expression not supported in postgres
expr = alchemy_con.table('functional_alltypes').double_col.approx_median()
with pytest.warns(FutureWarning):
assert not expr.verify()
with pytest.warns(FutureWarning):
assert not alchemy_backend.api.verify(expr)
| 29.106109 | 78 | 0.682722 | 1,201 | 9,052 | 4.819317 | 0.177352 | 0.065308 | 0.038701 | 0.029371 | 0.391327 | 0.349862 | 0.308915 | 0.284036 | 0.284036 | 0.271942 | 0 | 0.006803 | 0.204264 | 9,052 | 310 | 79 | 29.2 | 0.796751 | 0.070703 | 0 | 0.326087 | 0 | 0 | 0.102787 | 0.018699 | 0 | 0 | 0 | 0 | 0.126087 | 1 | 0.078261 | false | 0 | 0.030435 | 0.004348 | 0.117391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e6b21340c88c8fdd716f0092f7671a2ff1a9c95 | 3,716 | py | Python | chrome/test/functional/pdf.py | Scopetta197/chromium | b7bf8e39baadfd9089de2ebdc0c5d982de4a9820 | [
"BSD-3-Clause"
] | 212 | 2015-01-31T11:55:58.000Z | 2022-02-22T06:35:11.000Z | chrome/test/functional/pdf.py | 1065672644894730302/Chromium | 239dd49e906be4909e293d8991e998c9816eaa35 | [
"BSD-3-Clause"
] | 5 | 2015-03-27T14:29:23.000Z | 2019-09-25T13:23:12.000Z | chrome/test/functional/pdf.py | 1065672644894730302/Chromium | 239dd49e906be4909e293d8991e998c9816eaa35 | [
"BSD-3-Clause"
] | 221 | 2015-01-07T06:21:24.000Z | 2022-02-11T02:51:12.000Z | #!/usr/bin/env python
# Copyright (c) 2011 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import os
import glob
import pyauto_functional # Must be imported before pyauto
import pyauto
from pyauto_errors import JSONInterfaceError
class PDFTest(pyauto.PyUITest):
"""PDF related tests
This test runs only on Google Chrome build, not on Chromium.
"""
unloadable_pdfs = []
def _PerformPDFAction(self, action, tab_index=0, windex=0):
"""Perform an action on a PDF tab.
Args:
action: one of "fitToHeight", "fitToWidth", "ZoomIn", "ZoomOut"
tab_index: tab index. Defaults to 0
windex: window index. Defaults to 0
"""
# Sometimes the zoom/fit bar is not fully loaded. We need to wait for it to
# load before we can perform actions.
js = """if (document.getElementsByName("plugin") &&
document.getElementsByName("plugin")[0])
{ window.domAutomationController.send("true"); }
else {window.domAutomationController.send("false"); }"""
try:
self.assertTrue(self.WaitUntil(lambda: self.ExecuteJavascript(js,
tab_index=tab_index, windex=windex), expect_retval="true"),
msg='Could not find zoom/fit to page/width bar so we will not be able '
'to perform the requested action')
except JSONInterfaceError as e:
# The PDF did not load, add it to the list and move on, we don't want the
# test to abort so we can check all of the PDFs.
PDFTest.unloadable_pdfs.append(self.GetActiveTabTitle())
return
assert action in ('fitToHeight', 'fitToWidth', 'ZoomIn', 'ZoomOut')
js = 'document.getElementsByName("plugin")[0].%s()' % action
# Add an empty string so that there's something to return back
# (or else it hangs)
return self.GetDOMValue('%s + ""' % js, tab_index)
def testPDFRunner(self):
"""Navigate to pdf files and verify that browser doesn't crash"""
# bail out if not a branded build
properties = self.GetBrowserInfo()['properties']
if properties['branding'] != 'Google Chrome':
return
breakpad_folder = properties['DIR_CRASH_DUMPS']
old_dmp_files = glob.glob(os.path.join(breakpad_folder, '*.dmp'))
pdf_files_path = os.path.join(self.DataDir(), 'pyauto_private', 'pdf')
pdf_files = map(self.GetFileURLForPath,
glob.glob(os.path.join(pdf_files_path, '*.pdf')))
# Add a pdf file over http:// to the list of pdf files.
# crbug.com/70454
pdf_files += ['http://www.irs.gov/pub/irs-pdf/fw4.pdf']
# Some pdfs cause known crashes. Exclude them. crbug.com/63549
exclude_list = ('nullip.pdf', 'sample.pdf')
pdf_files = [x for x in pdf_files if
os.path.basename(x) not in exclude_list]
PDFTest.unloadable_pdfs = []
for url in pdf_files:
self.AppendTab(pyauto.GURL(url))
for tab_index in range(1, len(pdf_files) + 1):
self.ActivateTab(tab_index)
self._PerformPDFAction('fitToHeight', tab_index=tab_index)
self._PerformPDFAction('fitToWidth', tab_index=tab_index)
# Assert that there is at least 1 browser window.
self.assertTrue(self.GetBrowserWindowCount(),
'Browser crashed, no window is open')
# Verify there're no crash dump files
for dmp_file in glob.glob(os.path.join(breakpad_folder, '*.dmp')):
self.assertTrue(dmp_file in old_dmp_files,
msg='Crash dump %s found' % dmp_file)
self.assertEqual(len(PDFTest.unloadable_pdfs), 0, msg='The following PDFs '
'did not load: %s' % PDFTest.unloadable_pdfs)
if __name__ == '__main__':
pyauto_functional.Main()
| 39.956989 | 80 | 0.676803 | 515 | 3,716 | 4.774757 | 0.403884 | 0.03904 | 0.017893 | 0.026027 | 0.035787 | 0.028467 | 0.028467 | 0.028467 | 0 | 0 | 0 | 0.008544 | 0.212594 | 3,716 | 92 | 81 | 40.391304 | 0.831852 | 0.293326 | 0 | 0.038462 | 0 | 0 | 0.25 | 0.082165 | 0 | 0 | 0 | 0 | 0.096154 | 1 | 0.038462 | false | 0 | 0.096154 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e6c9fbc711681cc619d80c8284c4bd573eeb695 | 3,319 | py | Python | shield/adv_predict.py | yfor1008/jpeg-defense | 40e5c310f5f517cd654a402d13f1b3d6497e2254 | [
"MIT"
] | null | null | null | shield/adv_predict.py | yfor1008/jpeg-defense | 40e5c310f5f517cd654a402d13f1b3d6497e2254 | [
"MIT"
] | null | null | null | shield/adv_predict.py | yfor1008/jpeg-defense | 40e5c310f5f517cd654a402d13f1b3d6497e2254 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
# @File : adv_predict.py
# @Author : yuanwenjin
# @Mail : xxxx@mail.com
# @Date : 2019/12/10 17:41:55
# @Docs : Run inference on the trained model
'''
import os, sys
import numpy as np
import time
import logging
# logging.basicConfig(level=logging.INFO, filename='run_log.log', filemode='w', datefmt='%Y-%m-%d, %a, %H:%M:%S', format='%(message)s')
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'models'))
print(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'models'))
from PIL import Image
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
import tensorflow as tf
from nets import nets_factory
tf.flags.DEFINE_string(
'model_name', '', 'model name for mobile network.')
tf.flags.DEFINE_string(
'checkpoint_path', '', 'Path to checkpoint for mobile network.')
tf.flags.DEFINE_string(
'image_folder', '', 'Path to images.')
tf.flags.DEFINE_string(
'batch_size', '64', 'Path to images.')
FLAGS = tf.flags.FLAGS
model_name = FLAGS.model_name
def preprocessing(img, mask, reshaped_size=(256, 256)):
"""切圆
### Args:
- img: H*W*C, PIL image, rgb
    - mask: H*W*1, array, same shape as the reshaped image
- reshaped_size, 2*1, (height, width), tuple
- rgb_means: C*1, (meanR, meanG, meanB), tuple
### Returns:
image.
"""
img = np.array(img.resize(reshaped_size, Image.ANTIALIAS), dtype='float16') / 255.0
img[:, :, 0] = img[:, :, 0] * mask
img[:, :, 1] = img[:, :, 1] * mask
img[:, :, 2] = img[:, :, 2] * mask
return img
def main(_):
network_fn = nets_factory.get_network_fn(model_name, num_classes=(15 - 0), is_training=False)
with tf.Graph().as_default():
# Prepare graph
x_input = tf.placeholder(tf.float32, shape=[None, 256, 256, 3])
logits, _ = network_fn(x_input)
gailv = tf.reduce_max(tf.nn.softmax(logits), 1)
pred = tf.argmax(logits, 1)
# images
images = os.listdir(FLAGS.image_folder)
# mask
MASK = Image.open(os.path.join(os.path.dirname(__file__), 'mask.png'))
MASK = MASK.resize((256, 256))
MASK = np.array(MASK, 'float16') > 0
# Run computation
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, FLAGS.checkpoint_path)
# log
logging.basicConfig(level=logging.INFO, filename='predict_%s.log' % model_name, filemode='w', datefmt='%Y-%m-%d, %a, %H:%M:%S', format='%(message)s')
logging.info('image, gailv, predict, label')
image_len = len(images)
# image_len = 100
step = int(FLAGS.batch_size)
for idx in range(0, image_len, step):
if (step + idx) > image_len:
step = image_len - idx
ims = [Image.open(os.path.join(FLAGS.image_folder, img)) for img in images[idx:step+idx]]
ims = [preprocessing(im, MASK) for im in ims]
val, classed = sess.run([gailv, pred], feed_dict={x_input: ims})
for name, v, c in zip(images[idx:step+idx], val, classed):
# print(name, v, c)
logging.info('%s, %f, %d' % (name, v, c))
if __name__ == '__main__':
tf.app.run()
| 32.861386 | 161 | 0.592648 | 459 | 3,319 | 4.141612 | 0.366013 | 0.028406 | 0.021042 | 0.039979 | 0.19516 | 0.180431 | 0.124145 | 0.087322 | 0.087322 | 0.087322 | 0 | 0.026358 | 0.245556 | 3,319 | 100 | 162 | 33.19 | 0.732827 | 0.18379 | 0 | 0.072727 | 0 | 0 | 0.103617 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036364 | false | 0 | 0.145455 | 0 | 0.2 | 0.018182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e70bb5d8f6dd99555cf36b4234172538f4a7ba0 | 851 | py | Python | houston_bot/handlers.py | 77stm77/Houston-bot | 07aff4c8038dfd2ad529c1235593d99c8a9f902d | [
"MIT"
] | 2 | 2020-01-07T20:01:53.000Z | 2020-01-21T19:52:51.000Z | houston_bot/handlers.py | 77stm77/Houston-bot | 07aff4c8038dfd2ad529c1235593d99c8a9f902d | [
"MIT"
] | 9 | 2019-12-31T17:49:33.000Z | 2022-02-10T00:46:47.000Z | houston_bot/handlers.py | 77stm77/Houston-bot | 07aff4c8038dfd2ad529c1235593d99c8a9f902d | [
"MIT"
] | 1 | 2020-01-06T17:14:10.000Z | 2020-01-06T17:14:10.000Z | import json
import apiai
from config import BaseConfig
class BotHandlers(BaseConfig):
"""Handler bot. """
def start(self, bot, update):
"""Starting message. """
bot.send_message(chat_id=update.message.chat_id, text=self.welcome_text)
def message(self, bot, update):
"""Dialog message. """
request = apiai.ApiAI(self.dialog_token).text_request()
request.lang = self.lang
request.session_id = self.name
request.query = update.message.text
response_json = json.loads(request.getresponse().read().decode('utf-8'))
response = response_json['result']['fulfillment']['speech']
if response:
bot.send_message(chat_id=update.message.chat_id, text=response)
else:
bot.send_message(chat_id=update.message.chat_id, text=self.answer)
| 30.392857 | 80 | 0.654524 | 104 | 851 | 5.211538 | 0.394231 | 0.121771 | 0.143911 | 0.099631 | 0.252768 | 0.252768 | 0.252768 | 0.252768 | 0.252768 | 0.252768 | 0 | 0.001502 | 0.217391 | 851 | 27 | 81 | 31.518519 | 0.812312 | 0.056404 | 0 | 0 | 0 | 0 | 0.035623 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.176471 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e71d943f5bf1d4ad0f756a18fbdc81fced61f07 | 5,601 | py | Python | tools/wmsXMLView.py | ddbox/glideinwms | 1d0efbc1186ff9bd4cc3010fde6681b4cbe7cd54 | [
"Apache-2.0"
] | null | null | null | tools/wmsXMLView.py | ddbox/glideinwms | 1d0efbc1186ff9bd4cc3010fde6681b4cbe7cd54 | [
"Apache-2.0"
] | null | null | null | tools/wmsXMLView.py | ddbox/glideinwms | 1d0efbc1186ff9bd4cc3010fde6681b4cbe7cd54 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# SPDX-FileCopyrightText: 2009 Fermi Research Alliance, LLC
# SPDX-License-Identifier: Apache-2.0
# Description:
# This tool displays the status of the glideinWMS pool
# in a XML format
#
# Arguments:
# [-pool collector_node] [-condor-stats 1|0] [-internals 1|0]
import os.path
import sys
from glideinwms.factory import glideFactoryConfig, glideFactoryInterface
from glideinwms.frontend import glideinFrontendInterface
from glideinwms.lib import xmlFormat
sys.path.append(os.path.join(sys.path[0], "../.."))
pool_name = None
factory_name = None
frontend_name = None
remove_condor_stats = True
remove_internals = True
key_obj = None
# parse arguments
alen = len(sys.argv)
i = 1
while i < alen:
ael = sys.argv[i]
if ael == "-pool":
i = i + 1
pool_name = sys.argv[i]
elif ael == "-factory":
i = i + 1
factory_name = sys.argv[i]
elif ael == "-frontend":
i = i + 1
frontend_name = sys.argv[i]
elif ael == "-condor-stats":
i = i + 1
remove_condor_stats = not int(sys.argv[i])
elif ael == "-internals":
i = i + 1
remove_internals = not int(sys.argv[i])
elif ael == "-rsa_key":
i = i + 1
key_obj = glideFactoryConfig.GlideinKey("RSA", sys.argv[i])
elif ael == "-help":
print("Usage:")
print(
"wmsXMLView.py [-pool <node>[:<port>]] [-factory <factory>] [-frontend <frontend>] [-condor-stats 0|1] [-internals 0|1] [-rsa_key <fname>] [-help]"
)
sys.exit(1)
else:
raise RuntimeError("Unknown option '%s', try -help" % ael)
i = i + 1
# get data
factory_constraints = None
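# factory_name may be given as "factory", "glidein@factory" or "entry@glidein@factory"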
if factory_name is not None:
farr = factory_name.split("@")
if len(farr) == 1:
# just the generic factory name
factory_constraints = 'FactoryName=?="%s"' % factory_name
elif len(farr) == 2:
factory_constraints = f'(FactoryName=?="{farr[1]}")&&(GlideinName=?="{farr[0]}")'
elif len(farr) == 3:
factory_constraints = '(FactoryName=?="{}")&&(GlideinName=?="{}")&&(EntryName=?="{}")'.format(
farr[2],
farr[1],
farr[0],
)
else:
raise RuntimeError("Invalid factory name; more than 2 @'s found")
glideins_obj = glideinFrontendInterface.findGlideins(pool_name, None, None, factory_constraints)
factoryclient_constraints = None
if factory_name is not None:
farr = factory_name.split("@")
if len(farr) == 1:
# just the generic factory name
factoryclient_constraints = 'ReqFactoryName=?="%s"' % factory_name
elif len(farr) == 2:
factoryclient_constraints = f'(ReqFactoryName=?="{farr[1]}")&&(ReqGlideinName=?="{farr[0]}")'
elif len(farr) == 3:
factoryclient_constraints = '(ReqFactoryName=?="{}")&&(ReqGlideinName=?="{}")&&(ReqEntryName=?="{}")'.format(
farr[2],
farr[1],
farr[0],
)
else:
raise RuntimeError("Invalid factory name; more than 2 @'s found")
clientsmon_obj = glideinFrontendInterface.findGlideinClientMonitoring(pool_name, None, factoryclient_constraints)
# extract data
glideins = list(glideins_obj.keys())
for glidein in glideins:
glidein_el = glideins_obj[glidein]
# Remove diagnostics attributes, if needed
if remove_condor_stats:
del glidein_el["attrs"]["LastHeardFrom"]
# rename params into default_params
glidein_el["default_params"] = glidein_el["params"]
del glidein_el["params"]
if remove_internals:
for attr in ("EntryName", "GlideinName", "FactoryName"):
del glidein_el["attrs"][attr]
entry_name, glidein_name, factory_name = glidein.split("@")
frontend_constraints = None
if frontend_name is not None:
farr = frontend_name.split(".")
if len(farr) == 1:
# just the generic frontend name
frontend_constraints = 'FrontendName=?="%s"' % frontend_name
elif len(farr) == 2:
frontend_constraints = f'(FrontendName=?="{farr[0]}")&&(GroupName=?="{farr[1]}")'
else:
raise RuntimeError("Invalid frontend name; more than one dot found")
clients_obj = glideFactoryInterface.findWork(
factory_name, glidein_name, entry_name, None, key_obj, additional_constraints=frontend_constraints
)
glidein_el["clients"] = clients_obj
clients = list(clients_obj.keys())
if (frontend_name is not None) and (len(clients) == 0):
# if user requested to see only one frontend
# and this factory is not serving that frontend
# do not show the frontend at all
del glideins_obj[glidein]
continue
for client in clients:
if remove_internals:
del clients_obj[client]["internals"]
# rename monitor into client_monitor
clients_obj[client]["client_monitor"] = clients_obj[client]["monitor"]
del clients_obj[client]["monitor"]
# add factory monitor
if client in clientsmon_obj:
clients_obj[client]["factory_monitor"] = clientsmon_obj[client]["monitor"]
for pd_key in list(clients_obj[client]["params_decrypted"].keys()):
if clients_obj[client]["params_decrypted"][pd_key] is None:
clients_obj[client]["params_decrypted"][pd_key] = "ENCRYPTED"
# print data
sub_dict = {"clients": {"dict_name": "clients", "el_name": "client", "subtypes_params": {"class": {}}}}
print(
xmlFormat.dict2string(glideins_obj, "glideinWMS", "factory", subtypes_params={"class": {"dicts_params": sub_dict}})
)
| 33.339286 | 159 | 0.632744 | 670 | 5,601 | 5.141791 | 0.237313 | 0.044702 | 0.018578 | 0.0209 | 0.229318 | 0.193324 | 0.153556 | 0.106531 | 0.106531 | 0.096952 | 0 | 0.011372 | 0.230673 | 5,601 | 167 | 160 | 33.538922 | 0.788118 | 0.118729 | 0 | 0.299145 | 0 | 0.008547 | 0.211683 | 0.066558 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.042735 | 0 | 0.042735 | 0.025641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e732babe271d94db51fc05fa50ce189a8ace78f | 603 | py | Python | Tkinter/menu.py | a22057916w/python_advance | c964ad3237b503f5ef83e1add12d8007113690b1 | [
"MIT"
] | null | null | null | Tkinter/menu.py | a22057916w/python_advance | c964ad3237b503f5ef83e1add12d8007113690b1 | [
"MIT"
] | null | null | null | Tkinter/menu.py | a22057916w/python_advance | c964ad3237b503f5ef83e1add12d8007113690b1 | [
"MIT"
] | null | null | null | import tkinter as tk
window = tk.Tk()
window.title("MenuBar")
window.geometry("1600x900")
menuBar = tk.Menu(window)
window.config(menu = menuBar)
menuFile = tk.Menu(menuBar)
menuHelp = tk.Menu(menuBar)
menuBar.add_cascade(label="File", menu=menuFile)
menuBar.add_cascade(label="Help", menu=menuHelp)
menuFile.add_command(label="New")
menuFile.add_command(label="Open...")
menuFile.add_separator()
menuFile.add_command(label="Exit", command=window.quit)
menuHelp.add_command(label="about")
# Notice that: menuBar is hooked to window, menuFile and menuHelp are hooked to menuBar
window.mainloop()
| 23.192308 | 87 | 0.767828 | 85 | 603 | 5.364706 | 0.388235 | 0.096491 | 0.131579 | 0.151316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012774 | 0.091211 | 603 | 25 | 88 | 24.12 | 0.819343 | 0.140962 | 0 | 0 | 0 | 0 | 0.081395 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e786ab7cf0b54b5a67fc3cdcb9ed2bc81fe91a8 | 10,937 | py | Python | tklog.py | xinlin-z/tklog | 3ccbea9dd2bb65c07049bd3680f900f138fc852b | [
"MIT"
] | 15 | 2019-09-24T14:06:21.000Z | 2022-03-24T02:28:50.000Z | tklog.py | xinlin-z/tklog | 3ccbea9dd2bb65c07049bd3680f900f138fc852b | [
"MIT"
] | 12 | 2019-09-24T13:36:53.000Z | 2020-10-04T07:26:20.000Z | tklog.py | xinlin-z/tklog | 3ccbea9dd2bb65c07049bd3680f900f138fc852b | [
"MIT"
] | 7 | 2020-01-04T11:10:07.000Z | 2022-03-24T02:28:59.000Z | import tkinter as tk
from tkinter import Toplevel, PhotoImage
from tkinter.scrolledtext import ScrolledText
from tkinter.filedialog import asksaveasfilename
import logging
import threading
import queue
__version__ = 'V0.12'
"""
About the sync argument for log interfaces (added in V0.12):
Be VERY careful when deciding to set sync=True, since it can very easily
cause a deadlock. Normally it should only be set in a background thread
that needs to log to the GUI text window.
You must not set sync=True inside the GUI event loop!
"""
# When too many threads put entries into the queue and only one thread gets
# and consumes them, a lot of entries may pile up, and it takes a long time
# to get and consume them all. So the queue is given a maximum length to
# slow down all the producing threads.
# But there is another risk: if your GUI event handler puts entries directly,
# not from within a thread, your program may deadlock if the queue is not
# long enough.
# So, as a balance between the two, here we go:
QUEUE_LEN = 2048
# The put method is also called with block=False.
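# Illustrative sketch (not from the original docs): a background worker thread
# can pass sync=True safely, because the internal writer thread keeps draining
# the queue, e.g.:
#
#     def worker(log):
#         log.info('job finished', sync=True)  # blocks until the entry is consumed
#     threading.Thread(target=worker, args=(mylog,), daemon=True).start()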
class tklog(ScrolledText):
"""readonly scrolled text log class"""
def __init__(self, **kw):
super().__init__(**kw, state=tk.DISABLED, cursor='plus',
wrap=tk.WORD, font=('monospace',12))
self.tag_config('TITLE', foreground='blue')
self.tag_config('INFO', foreground='black')
self.tag_config('DEBUG', foreground='gray')
self.tag_config('WARNING', foreground='hotpink')
self.tag_config('ERROR', foreground='red')
self.tag_config('CRITICAL', foreground='red', underline=1)
self.rpop = tk.Menu(self, tearoff=0)
self.rpop.add_command(label='Export', command=self._copyas)
self.rpop.add_command(label='Copy', command=self._copyto)
self.rpop.add_command(label='Clean', command=self.clean)
self.autoscroll = tk.IntVar(value=1)
self.rpop.add_checkbutton(label='Autoscrolling',
command=None,
variable=self.autoscroll)
self.editable = tk.IntVar(value=0)
self.rpop.add_checkbutton(label='Editable',
command=self._editable,
variable=self.editable)
self.bind('<Button-3>', self._popup)
self.bind('<Button-1>', self._popdown)
self.bind('<Up>', self._lineUp)
self.bind('<Down>', self._lineDown)
self.pList = []
self.q = queue.Queue(QUEUE_LEN)
self.stop = 0
self.wt = threading.Thread(target=self._writer,
args=(), daemon=True)
self.wt.start()
def destroy(self):
self.stop = 1
self.q.put(None) # q.get is blocked, so we need put sth.
def _popup(self, event):
self.rpop.post(event.x_root, event.y_root)
def _popdown(self, event):
self.rpop.unpost()
self.focus_set()
def _copyas(self):
saveTo = asksaveasfilename()
if not isinstance(saveTo, str): return
if saveTo == '': return
with open(saveTo, 'w') as f:
f.write(self.get('1.0', tk.END))
def _copyto(self):
self.clipboard_clear()
try:
selection = self.get(tk.SEL_FIRST, tk.SEL_LAST)
except tk.TclError:
pass # skip TclError while no selection
else: self.clipboard_append(selection)
def _editable(self):
if self.editable.get():
self.config(state=tk.NORMAL)
else:
self.config(state=tk.DISABLED)
def _chState(self, state):
if self.editable.get():
return
if state == 'on':
self.config(state=tk.NORMAL)
if state == 'off':
self.config(state=tk.DISABLED)
def _writer(self):
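        # Queue entries are either a threading.Event (sync barrier) or a string
        # of the form 'TAG@text', where TAG is a log level name, CLEAN, PNG or GIF.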
while True:
info = self.q.get()
if self.stop: break
try:
if isinstance(info, threading.Event):
info.set()
continue
pos = info[:9].find('@')
if pos == -1:
self._chState('on')
self.insert(tk.END, '[undefined format]: '+info)
self._chState('off')
else:
if info[:pos] == 'CLEAN':
self._chState('on')
self.delete('1.0', tk.END)
self._chState('off')
elif info[:pos] == 'PNG' or info[:pos] == 'GIF':
try:
self.pList.append(PhotoImage(file=info[pos+1:]))
self._chState('on')
self.image_create(
tk.END,
image=self.pList[len(self.pList)-1])
self.insert(tk.END, '\n', 'DEBUG')
self._chState('off')
except Exception as e:
self._chState('on')
self.insert(tk.END, repr(e)+'\n', 'DEBUG')
self._chState('off')
else:
self._chState('on')
self.insert(tk.END, info[pos+1:], info[:pos])
self._chState('off')
if self.autoscroll.get() == 1:
self.see(tk.END)
except tk.TclError:
break
def _log(self, level, content, end, sync):
self.q.put(level+'@'+content+end, block=False)
if sync:
self._syn_log()
def _syn_log(self):
wait2go = threading.Event()
self.q.put(wait2go, block=False)
wait2go.wait()
def title(self, content, end='\n', *, sync=False):
self._log('TITLE', content, end, sync)
def info(self, content, end='\n', *, sync=False):
self._log('INFO', content, end, sync)
# directly call info will raise, why?
log = info
def debug(self, content, end='\n', *, sync=False):
self._log('DEBUG', content, end, sync)
def warning(self, content, end='\n', *, sync=False):
self._log('WARNING', content, end, sync)
def error(self, content, end='\n', *, sync=False):
self._log('ERROR', content, end, sync)
def critical(self, content, end='\n', *, sync=False):
self._log('CRITICAL', content, end, sync)
def png(self, pngFile, *, sync=False):
self._log('PNG', pngFile, '', sync)
def gif(self, gifFile, *, sync=False):
self._log('GIF', gifFile, '', sync)
def _lineUp(self, event):
self.yview('scroll', -1, 'units')
def _lineDown(self, event):
self.yview('scroll', 1, 'units')
def clean(self):
self.q.put('CLEAN@', block=False)
class tklogHandler(logging.Handler):
"""tklog handler inherited from logging.Handler"""
def __init__(self, **kw):
logging.Handler.__init__(self)
self.tklog = tklog(**kw)
def emit(self, record):
if record.levelno== logging.DEBUG:
self.tklog.debug(self.format(record))
if record.levelno== logging.INFO:
self.tklog.log(self.format(record))
if record.levelno== logging.WARNING:
self.tklog.warning(self.format(record))
if record.levelno== logging.ERROR:
self.tklog.error(self.format(record))
if record.levelno== logging.CRITICAL:
self.tklog.critical(self.format(record))
def title(self, msg):
self.tklog.title(msg)
def png(self, pngFile):
self.tklog.png(pngFile)
def gif(self, gifFile):
self.tklog.gif(gifFile)
def pack(self, **kw):
self.tklog.pack(**kw)
def grid(self, **kw):
self.tklog.grid(**kw)
class winlog():
"""readonly modaless Toplevel log window class"""
def __init__(self, root, title='Log Window', withdrawRoot=False,
destroyRoot=False):
self.root = root
if withdrawRoot:
self.root.withdraw()
self.win = Toplevel(root)
self.win.title(title)
self.win.geometry('600x800')
self.frame_0 = tk.Frame(self.win)
self.frame_0.pack(fill='both', expand=True)
self.st = tklog(master=self.frame_0, height=0)
self.st.pack(fill='both', expand=True)
self.frame_1 = tk.Frame(self.win)
self.frame_1.pack(fill=tk.X)
self.top = tk.Button(self.frame_1, text='Pin', command=self._pin)
self.top.pack(side=tk.LEFT, padx=2, pady=2)
self.win.bind('<FocusIn>', self._focusIn)
self.win.bind('<FocusOut>', self._focusOut)
self.pin = 0 # default is unpinned
self.win.protocol('WM_DELETE_WINDOW', self.destroy)
self.destroyRoot = destroyRoot
def _focusIn(self, event):
self.win.attributes('-alpha', 1.0)
def _focusOut(self, event):
self.win.attributes('-alpha', 0.7)
def _pin(self):
if self.pin == 0:
self.win.attributes('-topmost', True)
self.pin = 1
self.top['text'] = 'Unpin'
elif self.pin == 1:
self.win.attributes('-topmost', False)
self.pin = 0
self.top['text'] = 'Pin'
def title(self, content, end='\n'):
self.st.title(content, end)
def info(self, content, end='\n'):
self.st.log(content, end)
log = info
def debug(self, content, end='\n'):
self.st.debug(content, end)
def warning(self, content, end='\n'):
self.st.warning(content, end)
def error(self, content, end='\n'):
self.st.error(content, end)
def critical(self, content, end='\n'):
self.st.critical(content, end)
def png(self, pngFile):
self.st.png(pngFile)
def gif(self, gifFile):
self.st.gif(gifFile)
def destroy(self):
self.win.destroy()
if self.destroyRoot:
self.root.destroy()
class winlogHandler(logging.Handler):
"""winlog handler inherited from logging.Handler"""
def __init__(self, **kw):
logging.Handler.__init__(self)
self.winlog = winlog(**kw)
def emit(self, record):
if record.levelno== logging.DEBUG:
self.winlog.debug(self.format(record))
if record.levelno== logging.INFO:
self.winlog.log(self.format(record))
if record.levelno== logging.WARNING:
self.winlog.warning(self.format(record))
if record.levelno== logging.ERROR:
self.winlog.error(self.format(record))
if record.levelno== logging.CRITICAL:
self.winlog.critical(self.format(record))
def title(self, msg):
self.winlog.title(msg)
def png(self, pngFile):
self.winlog.png(pngFile)
def gif(self, gifFile):
self.winlog.gif(gifFile)
| 32.745509 | 76 | 0.560574 | 1,349 | 10,937 | 4.467013 | 0.214233 | 0.043146 | 0.027879 | 0.029871 | 0.324925 | 0.286757 | 0.21374 | 0.174743 | 0.125456 | 0.111185 | 0 | 0.007777 | 0.306391 | 10,937 | 333 | 77 | 32.843844 | 0.786581 | 0.073329 | 0 | 0.223577 | 0 | 0 | 0.043629 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.191057 | false | 0.004065 | 0.028455 | 0 | 0.247967 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e791a5327e4593106e9eb7fd4029bc940737513 | 945 | py | Python | training.py | ummatias/detector-expressao-facial | 5d426d75182598553753a416285ecfcb2ab1dcda | [
"MIT"
] | 17 | 2021-01-30T21:49:22.000Z | 2022-03-25T13:46:34.000Z | training.py | ummatias/detector-expressao-facial | 5d426d75182598553753a416285ecfcb2ab1dcda | [
"MIT"
] | null | null | null | training.py | ummatias/detector-expressao-facial | 5d426d75182598553753a416285ecfcb2ab1dcda | [
"MIT"
] | 2 | 2021-02-02T18:37:57.000Z | 2021-02-03T16:00:56.000Z | from data_preprocess import load_data
from models.Fac_Model import Fac_Model
from tensorflow import keras
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
X_train, X_test, X_val, y_train, y_test, y_val = load_data()
model = Fac_Model(7)
model.build(input_shape=(None, 48, 48, 1))
model.summary()
model.compile(
loss='categorical_crossentropy',
optimizer='adam' ,
metrics=['accuracy']
)
reducePlateau = ReduceLROnPlateau(monitor='val_accuracy', factor=0.1, min_delta=0.0001, patience=1, verbose=1)
history = model.fit(X_train, y_train, epochs=14, batch_size=128, steps_per_epoch=250, validation_data=(X_val, y_val), verbose=1, callbacks=[ reducePlateau ])
result = model.evaluate(X_test, y_test, verbose=1)
print('Final result: ', result)
model.save_weights('./saved_weights.h5') | 39.375 | 158 | 0.785185 | 135 | 945 | 5.288889 | 0.503704 | 0.078431 | 0.079832 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030624 | 0.101587 | 945 | 24 | 159 | 39.375 | 0.810365 | 0 | 0 | 0 | 0 | 0 | 0.087738 | 0.02537 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3 | 0 | 0.3 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e79e5599051fef2d9e1caab20d5d8e659061a86 | 4,968 | py | Python | neurosynchro/train.py | pkgw/neurosynchro | f21d198e01146988944728231417ff601b706379 | [
"MIT"
] | null | null | null | neurosynchro/train.py | pkgw/neurosynchro | f21d198e01146988944728231417ff601b706379 | [
"MIT"
] | 2 | 2020-09-26T16:09:24.000Z | 2021-05-21T16:34:51.000Z | neurosynchro/train.py | pkgw/neurosynchro | f21d198e01146988944728231417ff601b706379 | [
"MIT"
] | null | null | null | # -*- mode: python; coding: utf-8 -*-
# Copyright 2017-2018 Peter Williams and collaborators.
# Licensed under the MIT License.
"""Train one of the neural networks.
Meant to be run as a program in production, but you can import it to
experiment with training regimens.
"""
from __future__ import absolute_import, division, print_function
import argparse, sys, time
from pwkit.cli import die
from pwkit.io import Path
from . import DomainRange
def generic_trainer(m):
from keras.layers import Dense
m.add(Dense(
units = 300,
input_dim = m.domain_range.n_params,
activation = 'relu',
kernel_initializer = 'normal',
))
m.add(Dense(
units = 1,
activation = 'linear',
kernel_initializer = 'normal',
))
m.compile('adam', 'mse')
hist = m.ns_fit(
epochs = 30,
batch_size = 2000,
verbose = 0,
)
print('Intermediate MSE:', hist.history['loss'][-1])
m.ns_sigma_clip(7)
hist = m.ns_fit(
epochs = 30,
batch_size = 2000,
verbose = 0,
)
m.final_mse = hist.history['loss'][-1]
return m
def twolayer_trainer(m):
from keras.layers import Dense
m.add(Dense(
units = 120,
input_dim = m.domain_range.n_params,
activation = 'relu',
kernel_initializer = 'normal',
))
m.add(Dense(
units = 60,
activation = 'relu',
kernel_initializer = 'normal',
))
m.add(Dense(
units = 1,
activation = 'linear',
kernel_initializer = 'normal',
))
m.compile('adam', 'mse')
hist = m.ns_fit(
epochs = 30,
batch_size = 2000,
verbose = 0,
)
print('Intermediate MSE:', hist.history['loss'][-1])
m.ns_sigma_clip(7)
hist = m.ns_fit(
epochs = 30,
batch_size = 2000,
verbose = 0,
)
m.final_mse = hist.history['loss'][-1]
return m
def binary_trainer(m):
from keras.layers import Dense
m.add(Dense(
units = 120,
input_dim = m.domain_range.n_params,
activation = 'relu',
kernel_initializer = 'normal',
))
m.add(Dense(
units = 60,
activation = 'relu',
kernel_initializer = 'normal',
))
m.add(Dense(
units = 1,
activation = 'sigmoid',
kernel_initializer = 'normal',
))
m.compile(optimizer='adam', loss='binary_crossentropy')
hist = m.ns_fit(
epochs = 30,
batch_size = 2000,
verbose = 0,
)
print('Intermediate MSE:', hist.history['loss'][-1])
# Note: no sigma-clipping
hist = m.ns_fit(
epochs = 30,
batch_size = 2000,
verbose = 0,
)
m.final_mse = hist.history['loss'][-1]
return m
def load_data_and_train(datadir, nndir, result_name):
from .impl import NSModel
cfg_path = Path(nndir) / 'nn_config.toml'
dr, rinfo = DomainRange.from_serialized(cfg_path, result_to_extract=result_name)
if rinfo is None:
die('no known result named %r', result_name)
sd = dr.load_and_normalize(datadir)
trainer_name = rinfo['trainer']
trainer_func = globals().get(trainer_name + '_trainer')
if trainer_func is None:
die('unknown trainer function %r', trainer_name)
print('Training with scheme \"%s\"' % trainer_name)
m = NSModel()
m.ns_setup(rinfo['_index'], sd)
t0 = time.time()
trainer_func(m)
m.training_wall_clock = time.time() - t0
return m
def page_results(m, residuals=False, thin=500):
import omega as om
pg = om.makeDisplayPager()
for i in range(m.domain_range.n_params):
pg.send(m.ns_plot(i, plot_err=residuals, thin=thin))
pg.done()
def make_parser(ap=None):
if ap is None:
ap = argparse.ArgumentParser()
ap.add_argument('-p', '--plot', action='store_true',
help='Compare the NN and Symphony after training.')
ap.add_argument('-r', '--residuals', action='store_true',
help='If plotting, plot residuals rather than absolute values (requires `omegaplot` package).')
ap.add_argument('datadir', type=str, metavar='<datadir>',
help='The path to the sample data directory.')
ap.add_argument('nndir', type=str, metavar='<nndir>',
help='The path to the neural-net directory.')
ap.add_argument('result_name', type=str, metavar='<result-name>',
help='The name of the simulation result to train on.')
return ap
def train_cli(settings):
m = load_data_and_train(settings.datadir, settings.nndir, settings.result_name)
print('Achieved MSE of %g in %.1f seconds for %s.' %
(m.final_mse, m.training_wall_clock, settings.result_name))
if settings.plot:
page_results(m, residuals=settings.residuals)
outpath = str(Path(settings.nndir) / ('%s.h5' % settings.result_name))
m.save(outpath, overwrite=True)
| 26.854054 | 115 | 0.606079 | 640 | 4,968 | 4.553125 | 0.307813 | 0.010295 | 0.024708 | 0.038435 | 0.400137 | 0.371997 | 0.371997 | 0.371997 | 0.371997 | 0.371997 | 0 | 0.022571 | 0.26872 | 4,968 | 184 | 116 | 27 | 0.779521 | 0.057367 | 0 | 0.569444 | 0 | 0 | 0.150471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048611 | false | 0 | 0.069444 | 0 | 0.152778 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e7a521a2cf3a65def45b1a22707bb5a3093de13 | 537 | py | Python | from_book_data_structures/ch01/03.py | alexandrejr45/exercicios_algoritmo | b6af8275ff44e9dacea31e4d0121efd655ba34ca | [
"MIT"
] | null | null | null | from_book_data_structures/ch01/03.py | alexandrejr45/exercicios_algoritmo | b6af8275ff44e9dacea31e4d0121efd655ba34ca | [
"MIT"
] | null | null | null | from_book_data_structures/ch01/03.py | alexandrejr45/exercicios_algoritmo | b6af8275ff44e9dacea31e4d0121efd655ba34ca | [
"MIT"
] | null | null | null | # Write a short Python function, minmax(data), that takes a sequence of
# one or more numbers, and returns the smallest and largest numbers, in the
# form of a tuple of length two. Do not use the built-in functions min or
# max in implementing your solution.
import random
data = []
def minmax(numbers):
numbers.sort()
min_max = (numbers[0], numbers[-1])
return min_max
# Generate 50 randoms numbers to populate the list
for x in range(1, 50):
n = random.randrange(1, 10000)
data.append(n)
print(minmax(data))
| 23.347826 | 75 | 0.707635 | 88 | 537 | 4.295455 | 0.636364 | 0.05291 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030374 | 0.20298 | 537 | 22 | 76 | 24.409091 | 0.852804 | 0.556797 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.3 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e7a5341cb08a38a2ccf8a929fff8f726c083b27 | 3,992 | py | Python | addons/onedrive/client.py | gaybro8777/osf.io | 30408511510a40bc393565817b343ef5fd76ab14 | [
"Apache-2.0"
] | 628 | 2015-01-15T04:33:22.000Z | 2022-03-30T06:40:10.000Z | addons/onedrive/client.py | gaybro8777/osf.io | 30408511510a40bc393565817b343ef5fd76ab14 | [
"Apache-2.0"
] | 4,712 | 2015-01-02T01:41:53.000Z | 2022-03-30T14:18:40.000Z | addons/onedrive/client.py | gaybro8777/osf.io | 30408511510a40bc393565817b343ef5fd76ab14 | [
"Apache-2.0"
] | 371 | 2015-01-12T16:14:08.000Z | 2022-03-31T18:58:29.000Z | # -*- coding: utf-8 -*-
from framework.exceptions import HTTPError
from website.util.client import BaseClient
from addons.onedrive import settings
import logging
logger = logging.getLogger(__name__)
class OneDriveClient(BaseClient):
def __init__(self, access_token=None):
self.access_token = access_token
@property
def _default_headers(self):
if self.access_token:
return {'Authorization': 'Bearer {}'.format(self.access_token)}
return {}
def folders(self, drive_id=None, folder_id=None):
"""Get list of subfolders of the folder with id ``folder_id``
API Docs: https://dev.onedrive.com/items/list.htm
:param str folder_id: the id of the parent folder. defaults to ``None``
:rtype: list
:return: a list of metadata objects representing the child folders of ``folder_id``
"""
if drive_id is None:
raise Exception('drive_id is undefined, cannot proceed')
folder_path_part = 'root' if folder_id is None else folder_id
list_folder_url = self._build_url(settings.MSGRAPH_API_URL, 'drives', drive_id,
'items', folder_path_part, 'children')
resp = self._make_request(
'GET',
list_folder_url,
headers=self._default_headers,
params={'filter': 'folder ne null'},
expects=(200, ),
throws=HTTPError(401)
)
folder_list = resp.json()
logger.debug('folder_list:({})'.format(folder_list))
return folder_list['value']
def user_info(self):
"""Given an access token, return information about the token's owner.
API Docs::
https://msdn.microsoft.com/en-us/library/hh826533.aspx#requesting_info_using_rest
https://msdn.microsoft.com/en-us/library/hh243648.aspx#user
:param str access_token: a valid Microsoft Live access token
:rtype: dict
:return: a dict containing metadata about the token's owner.
"""
# get user properties from /me endpoint
me_url = self._build_url(settings.MSGRAPH_API_URL, 'me')
me_resp = self._make_request(
'GET',
me_url,
headers=self._default_headers,
expects=(200, ),
throws=HTTPError(401)
)
me_data = me_resp.json()
logger.debug('me_data:({})'.format(me_data))
retval = {
'id': me_data['id'],
'name': me_data['displayName'],
'link': '{}/users/{}'.format(settings.MSGRAPH_API_URL, me_data['id']),
'mail': me_data['userPrincipalName'],
}
# get drive properties from /users/$user_id/drive endpoint
drive_url = self._build_url(settings.MSGRAPH_API_URL, 'users', retval['id'], 'drive')
drive_resp = self._make_request(
'GET',
drive_url,
headers=self._default_headers,
expects=(200, ),
throws=HTTPError(401)
)
drive_data = drive_resp.json()
logger.debug('drive_data:({})'.format(drive_data))
retval['drive_id'] = drive_data['id']
if drive_data['driveType'] == 'personal':
retval['name'] = '{} - OneDrive Personal'.format(retval['mail'])
else:
# get site properties from /sites endpoint
# site_url = self._build_url(settings.MSGRAPH_API_URL, 'sites', 'root')
# site_resp = self._make_request(
# 'GET',
# site_url,
# headers=self._default_headers,
# expects=(200, ),
# throws=HTTPError(401)
# )
# site_data = site_resp.json()
# logger.debug('site_data:({})'.format(site_data))
retval['name'] = '{} - {}'.format(retval['mail'], 'OneDrive for School or Business')
logger.debug('retval:({})'.format(retval))
return retval
| 35.327434 | 96 | 0.584168 | 456 | 3,992 | 4.883772 | 0.291667 | 0.039515 | 0.040413 | 0.047149 | 0.261787 | 0.168837 | 0.168837 | 0.140099 | 0.075438 | 0.075438 | 0 | 0.013121 | 0.293587 | 3,992 | 112 | 97 | 35.642857 | 0.776596 | 0.272044 | 0 | 0.1875 | 0 | 0 | 0.123563 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.203125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e7a86f6aa19d6ae4f1c45bee0ea1b397d3ba682 | 691 | py | Python | examples/irq-multi-touch/irq-multi-touch.py | Tonino-RB/micropython-mpr121 | 31732dc76f2e851d2e38ef63c1aad1295dcb6eeb | [
"MIT"
] | null | null | null | examples/irq-multi-touch/irq-multi-touch.py | Tonino-RB/micropython-mpr121 | 31732dc76f2e851d2e38ef63c1aad1295dcb6eeb | [
"MIT"
] | null | null | null | examples/irq-multi-touch/irq-multi-touch.py | Tonino-RB/micropython-mpr121 | 31732dc76f2e851d2e38ef63c1aad1295dcb6eeb | [
"MIT"
] | null | null | null | """
Prints "Key n pressed" on key down.
Prints "Key n released" on key up.
The interrupt fires when any key is pressed or released.
Detects changes from previous state and supports multiple concurrent key downs.
"""
import mpr121
from machine import Pin, I2C
i2c = I2C(3)
mpr = mpr121.MPR121(i2c)
last = 0
def check(pin):
global last
touched = mpr.touched()
diff = last ^ touched
for i in range(12):
if diff & (1 << i):
if touched & (1<<i):
print('Key {} pressed'.format(i))
else:
print('Key {} released'.format(i))
last = touched
d3 = Pin('D3', Pin.IN, Pin.PULL_UP)
d3.irq(check, Pin.IRQ_FALLING)
| 22.290323 | 79 | 0.615051 | 102 | 691 | 4.147059 | 0.519608 | 0.078014 | 0.047281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041502 | 0.267728 | 691 | 30 | 80 | 23.033333 | 0.794466 | 0.301013 | 0 | 0 | 0 | 0 | 0.065263 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.111111 | 0 | 0.166667 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e7defec21c59b6fd0c74c184c53571907c930c5 | 1,337 | py | Python | src/data_standardization/Archive/standardize_volume.py | pankace/MelbourneCrashModel | 62d1b763db28201429b0115fe89857b178a0f984 | [
"MIT"
] | null | null | null | src/data_standardization/Archive/standardize_volume.py | pankace/MelbourneCrashModel | 62d1b763db28201429b0115fe89857b178a0f984 | [
"MIT"
] | null | null | null | src/data_standardization/Archive/standardize_volume.py | pankace/MelbourneCrashModel | 62d1b763db28201429b0115fe89857b178a0f984 | [
"MIT"
] | 1 | 2021-01-23T17:29:46.000Z | 2021-01-23T17:29:46.000Z | import argparse
import os
from jsonschema import validate
import json
from .boston_volume import BostonVolumeParser
BASE_FP = None
PROCESSED_DATA_FP = None
CURR_FP = os.path.dirname(
os.path.abspath(__file__))
def write_volume(volume_counts):
schema_path = os.path.join(os.path.dirname(os.path.dirname(
CURR_FP)), "standards", "volumes-schema.json")
with open(schema_path) as volume_schema:
validate(volume_counts, json.load(volume_schema))
volume_output = os.path.join(BASE_FP, "standardized", "volume.json")
with open(volume_output, "w") as f:
json.dump(volume_counts, f)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("-c", "--city", type=str, required=True,
help="city short name, e.g. boston")
parser.add_argument("-d", "--datadir", type=str,
help="data directory")
parser.add_argument("--volume", type=str, help="volume YES or NO")
args = parser.parse_args()
BASE_FP = os.path.join(args.datadir)
    if args.volume == "YES":
print('Volume calculations being carried out.')
volume_counts = BostonVolumeParser(args.datadir).get_volume()
write_volume(volume_counts)
else:
print("No volume data given for {}".format(args.city))
| 31.833333 | 76 | 0.667165 | 175 | 1,337 | 4.88 | 0.411429 | 0.04918 | 0.045667 | 0.035129 | 0.044496 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209424 | 1,337 | 41 | 77 | 32.609756 | 0.807947 | 0 | 0 | 0 | 0 | 0 | 0.157068 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.15625 | 0 | 0.1875 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e7e1cb9d14c8c8397bf6ef96be01f1f1e89e068 | 1,179 | py | Python | tests/test_cf_model.py | donatoaz/pycfmodel | 1586e290b67d2347493dd4a77d2b0c8ee6c0936b | [
"Apache-2.0"
] | null | null | null | tests/test_cf_model.py | donatoaz/pycfmodel | 1586e290b67d2347493dd4a77d2b0c8ee6c0936b | [
"Apache-2.0"
] | null | null | null | tests/test_cf_model.py | donatoaz/pycfmodel | 1586e290b67d2347493dd4a77d2b0c8ee6c0936b | [
"Apache-2.0"
] | null | null | null | import pytest
from pycfmodel.model.cf_model import CFModel
from pycfmodel.model.resources.iam_user import IAMUser
@pytest.fixture()
def model():
return CFModel(
**{
"AWSTemplateFormatVersion": "2012-12-12",
"Description": "JSON string",
"Metadata": {},
"Parameters": {},
"Mappings": {},
"Conditions": {},
"Transform": [],
"Resources": {"Logical ID": {"Type": "Resource type", "Properties": {"foo": "bar"}}},
"Rules": {},
"Outputs": {},
}
)
def test_basic_json(model):
assert type(model).__name__ == "CFModel"
assert len(model.Resources) == 1
def test_resources_filtered_by_type():
generic_resource = {"Logical ID": {"Type": "Resource type", "Properties": {"foo": "bar"}}}
user = {"User": IAMUser()}
model = CFModel(Resources={**generic_resource, **user})
assert model.resources_filtered_by_type(("Resource type",)) == generic_resource
assert model.resources_filtered_by_type(("Resource type", IAMUser)) == {**generic_resource, **user}
assert model.resources_filtered_by_type((IAMUser,)) == user
| 31.026316 | 103 | 0.598813 | 116 | 1,179 | 5.87069 | 0.37069 | 0.10279 | 0.093979 | 0.135095 | 0.361233 | 0.361233 | 0.361233 | 0.361233 | 0.155653 | 0 | 0 | 0.009967 | 0.234097 | 1,179 | 37 | 104 | 31.864865 | 0.744186 | 0 | 0 | 0 | 0 | 0 | 0.207803 | 0.020356 | 0 | 0 | 0 | 0 | 0.172414 | 1 | 0.103448 | false | 0 | 0.103448 | 0.034483 | 0.241379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e7e7db46c4352ba5d5d5ed03a5f8071ef9d23a6 | 17,118 | py | Python | agent.py | oskarij/rlproject | d0b4757ab994546952a0db0a74c413d013c77457 | [
"MIT"
] | null | null | null | agent.py | oskarij/rlproject | d0b4757ab994546952a0db0a74c413d013c77457 | [
"MIT"
] | null | null | null | agent.py | oskarij/rlproject | d0b4757ab994546952a0db0a74c413d013c77457 | [
"MIT"
] | null | null | null | """
CNN-DQN based RL agent for Pong
Requirements (model training):
- pytorch
- gym
- matplotlib
- numpy
- pillow
- opencv-python
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import math
import random
import os
import sys
from collections import namedtuple
class Agent:
NAME = 'AgentPongai'
MODEL_PATH = 'models/'
TRAINED_MODEL = 'model.mdl'
N_ACTIONS = 3
FRAME_WIDTH = 200
FRAME_HEIGHT = 200
EPS_START = 0.99
EPS_END = 0.05
EPS_DECAY = 150000
LR = 1e-4
# Common interface for both training and testing
# supports 0 (frames are summed to each other rather than stacked), 2 or 4 frame stacking
def __init__( self, training=False, batch_size=64, gamma=0.99, memory_size=15000, \
stack_frames=4, downscale=True, priority_memory=True):
self.training = training
self.batch_size = batch_size
self.gamma = gamma
self.stack_frames = stack_frames if stack_frames in (0, 2, 4) else 0
self.downscale = downscale
self.priority_memory = priority_memory
self.w = Agent.FRAME_WIDTH // 2 if downscale else Agent.FRAME_WIDTH
self.h = Agent.FRAME_HEIGHT // 2 if downscale else Agent.FRAME_HEIGHT
self.device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
in_channels = stack_frames if stack_frames in (2, 4) else 1
self.policy_net = DQN(Agent.N_ACTIONS, in_channels, self.h, self.w).to(self.device)
self.target_net = DQN(Agent.N_ACTIONS, in_channels, self.h, self.w).to(self.device)
self.target_net.load_state_dict(self.policy_net.state_dict())
self.target_net.eval()
self.memory = PriorityMemory(memory_size) if priority_memory else ReplayMemory(memory_size)
self.optimizer = optim.Adam(self.policy_net.parameters(), lr=Agent.LR)
self.epsilon = 1.0
self.prev_state = [None] if self.stack_frames in (0, 2) else [None, None, None]
self.steps_done = 0
"""
Load the trained model for evaluation
"""
def load_model(self):
self.policy_net.load_state_dict(torch.load(Agent.TRAINED_MODEL, map_location=self.device))
self.target_net.load_state_dict(self.policy_net.state_dict())
self.target_net.eval()
self.policy_net.eval()
"""
This needs to correspond to save
"""
def load_trained_model(self):
dir = os.path.join(os.getcwd(), Agent.MODEL_PATH, self.get_name())
if not os.path.isdir(dir):
sys.exit(f'Model loading failed: {dir} does not exist')
model_files = os.listdir(dir)
model_files.sort()
model_path = os.path.join(dir, model_files[-1]) # Select the latest model
self.policy_net.load_state_dict(torch.load(model_path, map_location=self.device))
self.target_net.load_state_dict(self.policy_net.state_dict())
self.target_net.eval()
self.optimizer = optim.Adam(self.policy_net.parameters(), lr=Agent.LR)
"""
Load a trained model from a specific path
"""
def load_model_file(self, model_path):
self.policy_net.load_state_dict(torch.load(model_path, map_location=self.device))
self.target_net.load_state_dict(self.policy_net.state_dict())
self.target_net.eval()
self.optimizer = optim.Adam(self.policy_net.parameters(), lr=Agent.LR)
"""
Reset Agent when an episode has finished
"""
def reset(self):
self.prev_state = [None] if self.stack_frames in (0, 2) else [None, None, None]
"""
frame: numpy array (200, 200, 3) -> convert to pytorch tensor
"""
def get_action(self, frame=None):
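        # Epsilon-greedy schedule: epsilon decays exponentially from EPS_START
        # towards EPS_END with time constant EPS_DECAY steps.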
self.epsilon = Agent.EPS_END + (Agent.EPS_START - Agent.EPS_END) * math.exp(-1. * self.steps_done / Agent.EPS_DECAY)
self.steps_done += 1
action = torch.randint(0, Agent.N_ACTIONS, (1,)).item()
if frame is None:
return action
state = self.__preprocess_frame(frame)
if self.prev_state[0] != None and (not self.training or np.random.uniform(size=1)[0] > self.epsilon):
# Select the 'best' action
with torch.no_grad():
state_diff = self.__process_states(state, self.prev_state)
action = torch.argmax(self.policy_net(state_diff)).item()
if len(self.prev_state) == 3:
# 4 frames stacking enabled
self.prev_state[0] = self.prev_state[1]
self.prev_state[1] = self.prev_state[2]
self.prev_state[-1] = state
return action
"""
Get Agent name, used by the environment
"""
def get_name(self):
return Agent.NAME
"""
Save a Pytorch model to MODEL_PATH
"""
def save(self):
dir = os.path.join(os.getcwd(), Agent.MODEL_PATH, self.get_name())
if os.path.isdir(dir):
# This does not work for more than 100 models
content = os.listdir(dir)
content.sort()
model_id = content[-1][-6:-4] if len(content) > 0 else -1 # Get the latest model number
model_id = '%02d' % (int(model_id) + 1, ) # Increase it by one
else:
os.makedirs(dir)
model_id = '00'
model_path = os.path.join(dir, self.get_name() + model_id + '.pth')
torch.save(self.policy_net.state_dict(), model_path)
"""
Convert a rbg image frame given as a numpy array to a grayscale tensor
frame: np.array with shape(200, 200, 3) containing int values 0 - 255
"""
def __preprocess_frame(self, frame):
frame = frame.astype(float)
frame = np.true_divide(frame, 255.0)
frame = 1.0 - frame # invert colors so that the background is zeros
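        # ITU-R BT.601 luma weights are used for the grayscale conversion below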
grayscale = 0.299 * frame[:,:,0] + 0.587 * frame[:,:,1] + 0.114 * frame[:,:,2]
if self.downscale:
grayscale = grayscale.reshape((1, Agent.FRAME_HEIGHT//2, 2, Agent.FRAME_WIDTH//2, 2)).max(4).max(2) # downscale by factor of 2
return torch.from_numpy(grayscale).to(torch.float32).to(self.device)
"""
Combine two consequtive states to create a combined state that shows movement.
This is the state information that is feed on the DQN and stored.
state: torch float32 tensor with values in the range of 0.0 and 1.0
prev_state: torch float32 tensor with values in the range of 0.0 and 1.0
"""
def __state_diff(self, state, prev_state):
combined = torch.add(state, 0.5 * prev_state).view(1, 1, self.h, self.w) # multiply prev_state to distinguish differences
combined -= torch.min(combined)
combined /= torch.max(combined)
return combined
"""
Stack two consequtive states to create a combined state that shows movement.
This is the state information that is feed on the DQN and stored.
states: torch float32 tensor with values in the range of 0.0 and 1.0 as a list
"""
def __stack_states(self, states):
stacked = torch.stack(states).view(1, self.stack_frames, self.h, self.w)
return stacked
"""
This is just a wrapper for processing state and the previous state
"""
def __process_states(self, state, prev_states):
return self.__stack_states([state] + prev_states) if self.stack_frames > 0 else self.__state_diff(state, prev_states[0])
# Training interface
"""
Store actions, states, rewards for replay
Note: state and next_state need to be image frame differences as tensors
observations: 3 latest observations (frames)
"""
def update_memory(self, observations, action, reward, done):
action = torch.tensor([[action]], dtype=torch.long, device=self.device)
reward = torch.tensor([reward], dtype=torch.float32, device=self.device)
states = [self.__preprocess_frame(observation) for observation in observations]
state = self.__process_states(states[-2], states[:-2])
next_state = self.__process_states(states[-1], states[1:-1])
done = torch.tensor([done], dtype=torch.bool, device=self.device)
self.memory.push(state, action, next_state, reward, done)
"""
Do a single batch update on the DQN network
"""
def update_dqn(self):
if len(self.memory) < self.batch_size:
return
if self.priority_memory:
samples, indices, weights = self.memory.sample(self.batch_size)
weights = torch.tensor(weights, dtype=torch.float32, device=self.device)
else:
samples = self.memory.sample(self.batch_size)
batch = Transition(*zip(*samples))
state_batch = torch.cat(batch.state)
next_state_batch = torch.cat(batch.next_state)
action_batch = torch.cat(batch.action)
reward_batch = torch.cat(batch.reward)
done_batch = torch.cat(batch.done)
non_final_mask = [done_batch == False]
non_final_next_states = next_state_batch[non_final_mask]
# Compute Q(s, a) by selecting Q values matching selected actions
state_action_values = self.policy_net(state_batch).gather(1, action_batch).squeeze(1)
# Compute V(s_t+1)
next_state_values = torch.zeros(self.batch_size, device=self.device)
next_state_values[non_final_mask] = self.target_net(non_final_next_states).max(1)[0].detach()
# Compute target Q values
target_state_action_values = reward_batch + self.gamma * next_state_values
# Compute loss and optimize the network
loss = F.smooth_l1_loss(state_action_values, target_state_action_values, reduction='none')
if self.priority_memory:
priorities = torch.abs(loss).detach() + 1e-5
loss = loss * weights
self.memory.update_priorities(indices, priorities.cpu().numpy())
self.optimizer.zero_grad()
loss = loss.mean()
loss.backward()
for param in self.policy_net.parameters():
param.grad.data.clamp_(-10, 10) # to make stable updates
self.optimizer.step()
"""
Double DQN: https://arxiv.org/pdf/1509.06461.pdf
"""
def update_ddqn(self):
if len(self.memory) < self.batch_size:
return
if self.priority_memory:
samples, indices, weights = self.memory.sample(self.batch_size)
weights = torch.tensor(weights, dtype=torch.float32, device=self.device)
else:
samples = self.memory.sample(self.batch_size)
batch = Transition(*zip(*samples))
state_batch = torch.cat(batch.state)
next_state_batch = torch.cat(batch.next_state)
action_batch = torch.cat(batch.action)
reward_batch = torch.cat(batch.reward)
done_batch = torch.cat(batch.done)
non_final_mask = [done_batch == False]
non_final_next_states = next_state_batch[non_final_mask]
# Compute Q(s, a) by selecting Q values matching selected actions
state_action_values = self.policy_net(state_batch).gather(1, action_batch).squeeze(1)
# Compute V(s_t+1)
next_state_values = torch.zeros(self.batch_size, device=self.device)
greedy_actions = self.policy_net(non_final_next_states).max(1)[1].unsqueeze(1).detach()
next_state_values[non_final_mask] = self.target_net(non_final_next_states).gather(1, greedy_actions).squeeze(1).detach()
# Compute target Q values
target_state_action_values = reward_batch + self.gamma * next_state_values
# Compute loss and optimize the network
loss = F.smooth_l1_loss(state_action_values, target_state_action_values, reduction='none')
if self.priority_memory:
priorities = torch.abs(loss).detach() + 1e-5
loss = loss * weights
self.memory.update_priorities(indices, priorities.cpu().numpy())
self.optimizer.zero_grad()
loss = loss.mean()
loss.backward()
for param in self.policy_net.parameters():
param.grad.data.clamp_(-10, 10) # to make stable updates
self.optimizer.step()
"""
Update the target network to match the policy network
Call this periodically from the training loop
"""
def update_target_network(self):
self.target_net.load_state_dict(self.policy_net.state_dict())
"""
Set agent learning rate
"""
def set_lr(self, lr):
for param_group in self.optimizer.param_groups:
param_group['lr'] = lr
"""
CNN-based neural network which is updated according to DQN update rules
"""
class DQN(nn.Module):
def __init__(self, n_actions, in_channels, h=Agent.FRAME_HEIGHT, w=Agent.FRAME_WIDTH):
super(DQN, self).__init__()
# Conv2d: https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html#torch.nn.Conv2d
padding, dilation = 0, 1
strides = [4, 2, 1]
kernels = [(8, 8), (4, 4), (3, 3)]
channels = [in_channels, 32, 64, 64]
def out_size(in_size, kernel_size, stride):
return math.floor((in_size + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)
self.conv1 = nn.Conv2d(channels[0], channels[1], kernels[0], padding=padding, dilation=dilation, stride=strides[0])
self.conv2 = nn.Conv2d(channels[1], channels[2], kernels[1], padding=padding, dilation=dilation, stride=strides[1])
self.conv3 = nn.Conv2d(channels[2], channels[3], kernels[2], padding=padding, dilation=dilation, stride=strides[2])
h_out = out_size(out_size(out_size(h, kernels[0][0], strides[0]), kernels[1][0], strides[1]), kernels[2][0], strides[2])
w_out = out_size(out_size(out_size(w, kernels[0][1], strides[0]), kernels[1][1], strides[1]), kernels[2][1], strides[2])
flattened_size = h_out * w_out * channels[-1]
self.fc1 = nn.Linear(flattened_size, 512)
self.fc2 = nn.Linear(512, n_actions)
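        # Worked example (assuming the default downscaled 100x100 input):
        # conv1 -> 24x24, conv2 -> 11x11, conv3 -> 9x9, so flattened_size = 9*9*64 = 5184.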
"""
x : (batch_size, in_channels, image_width, image_height) = a batch of in_channels images as a tensor
"""
def forward(self, x):
y = F.relu(self.conv1(x))
y = F.relu(self.conv2(y))
y = F.relu(self.conv3(y))
y = torch.flatten(y, start_dim=1)
y = F.relu(self.fc1(y))
y = self.fc2(y)
return y
"""
Directly from Pytorch example implementation for replay memory
https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
"""
Transition = namedtuple('Transition',
('state', 'action', 'next_state', 'reward', 'done'))
class ReplayMemory():
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition(*args)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def update_beta(self):
pass
def __len__(self):
return len(self.memory)
"""
https://arxiv.org/pdf/1511.05952.pdf
"""
class PriorityMemory():
def __init__(self, capacity, alpha=0.6, beta0=0.4):
self.capacity = capacity
self.memory = []
self.priorities = np.zeros((capacity,), dtype=np.float32)
self.position = 0
self.alpha = alpha
self.beta0 = beta0
self.beta = beta0
def push(self, *args):
if len(self.memory) < self.capacity:
self.memory.append(None)
# store transition with maximal priority
self.memory[self.position] = Transition(*args)
self.priorities[self.position] = 1.0 if len(self.memory) == 1 else np.max(self.priorities)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
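        # Schaul et al. (2016): sample index i with P(i) = p_i^alpha / sum_k p_k^alpha,
        # then correct the bias with weights w_i = (N * P(i))^(-beta) / max_j w_j.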
N = len(self.memory)
priorities = self.priorities[:N]
P = np.power(priorities, self.alpha)
P /= np.sum(P)
indices = np.random.choice(N, size=batch_size, p=P, replace=False)
samples = [self.memory[i] for i in indices]
w = np.power(N * P[indices], -self.beta)
w /= np.max(w)
return samples, indices, w
def update_priorities(self, indices, priorities):
for idx, priority in zip(indices, priorities):
self.priorities[idx] = priority
def update_beta(self, interval=8000):
self.beta = min(1.0, self.beta + (1.0 - self.beta0) / interval)
def __len__(self):
return len(self.memory)
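
# --- Added usage sketch (not part of the original implementation) ---
# How PriorityMemory is typically driven from a training loop: sample a batch with
# importance-sampling weights, feed the |TD-error| of each sample back as its new
# priority, and anneal beta towards 1.0. The TD errors below are random placeholders.
def _priority_memory_usage_example():
    memory = PriorityMemory(capacity=1000)
    # memory.push(state, action, next_state, reward, done) would be called once per step;
    # new transitions get the current maximal priority so they are replayed at least once.
    samples, indices, weights = memory.sample(batch_size=32)
    # ... compute the loss, scaling each sample's term by its weight in `weights` ...
    td_errors = np.abs(np.random.randn(32))              # placeholder for real TD errors
    memory.update_priorities(indices, td_errors + 1e-6)  # small constant keeps priorities > 0
    memory.update_beta()                                  # anneal beta towards 1.0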
## Testing ##
def test_DQN():
    n_actions = 3
    dqn = DQN(n_actions, 1, h=200, w=200)
    test = torch.ones((1, 1, 200, 200))
    assert dqn(test).size() == torch.Size([1, n_actions])
    test2 = torch.zeros((13, 1, 200, 200))
    assert dqn(test2).size() == torch.Size([13, n_actions])


def test_Agent():
    agent = Agent()
    frame1 = np.ones((200, 200, 3))
    frame2 = np.ones((200, 200, 3))
    assert type(agent.get_action(frame1)) == int
    assert agent.get_action(frame2) in [i for i in range(Agent.N_ACTIONS)]


if __name__ == '__main__':
    test_DQN()
    test_Agent()
| 39.351724 | 138 | 0.644526 | 2,396 | 17,118 | 4.439065 | 0.161102 | 0.022565 | 0.024445 | 0.016924 | 0.443118 | 0.428074 | 0.386894 | 0.3632 | 0.35991 | 0.35991 | 0 | 0.027571 | 0.237236 | 17,118 | 434 | 139 | 39.442396 | 0.787011 | 0.060112 | 0 | 0.360714 | 0 | 0 | 0.010493 | 0 | 0 | 0 | 0 | 0 | 0.014286 | 1 | 0.117857 | false | 0.003571 | 0.035714 | 0.021429 | 0.257143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e7e9bc99f7ecc96f25c1c825d1420f48c3bf7f5 | 2,006 | py | Python | infobip_channels/whatsapp/models/body/buttons_message.py | infobip-community/infobip-api-python-sdk | 5ffc5ab877ee1748aa29391f991c8c5324387487 | [
"MIT"
] | null | null | null | infobip_channels/whatsapp/models/body/buttons_message.py | infobip-community/infobip-api-python-sdk | 5ffc5ab877ee1748aa29391f991c8c5324387487 | [
"MIT"
] | null | null | null | infobip_channels/whatsapp/models/body/buttons_message.py | infobip-community/infobip-api-python-sdk | 5ffc5ab877ee1748aa29391f991c8c5324387487 | [
"MIT"
] | null | null | null | from enum import Enum
from typing import Optional, Union

try:
    from typing import Literal
except ImportError:
    from typing_extensions import Literal

from pydantic import AnyHttpUrl, conlist, constr, validator

from infobip_channels.core.models import CamelCaseModel, UrlLengthValidatorMixin
from infobip_channels.whatsapp.models.body.core import MessageBody


class ButtonTypeEnum(str, Enum):
    REPLY = "REPLY"


class Footer(CamelCaseModel):
    text: constr(min_length=1, max_length=60)


class HeaderDocument(UrlLengthValidatorMixin, CamelCaseModel):
    type: Literal["DOCUMENT"]
    media_url: AnyHttpUrl
    filename: Optional[constr(max_length=240)] = None

    @validator("media_url", pre=True)
    def validate_url_length(cls, value: str) -> str:
        return super().validate_url_length(value)


class HeaderVideo(UrlLengthValidatorMixin, CamelCaseModel):
    type: Literal["VIDEO"]
    media_url: AnyHttpUrl

    @validator("media_url", pre=True)
    def validate_url_length(cls, value: str) -> str:
        return super().validate_url_length(value)


class HeaderImage(UrlLengthValidatorMixin, CamelCaseModel):
    type: Literal["IMAGE"]
    media_url: AnyHttpUrl

    @validator("media_url", pre=True)
    def validate_url_length(cls, value: str) -> str:
        return super().validate_url_length(value)


class HeaderText(CamelCaseModel):
    type: Literal["TEXT"]
    text: constr(min_length=1, max_length=60)


class Button(CamelCaseModel):
    type: ButtonTypeEnum
    id: constr(min_length=1, max_length=256)
    title: constr(min_length=1, max_length=20)


class Action(CamelCaseModel):
    buttons: conlist(Button, min_items=1, max_items=3)


class Body(CamelCaseModel):
    text: constr(min_length=1, max_length=1024)


class Content(CamelCaseModel):
    body: Body
    action: Action
    header: Optional[Union[HeaderText, HeaderImage, HeaderDocument, HeaderVideo]] = None
    footer: Optional[Footer] = None


class ButtonsMessageBody(MessageBody):
    content: Content
| 25.392405 | 88 | 0.744267 | 238 | 2,006 | 6.130252 | 0.294118 | 0.01645 | 0.069911 | 0.054832 | 0.344757 | 0.344757 | 0.310487 | 0.310487 | 0.271419 | 0.22207 | 0 | 0.013634 | 0.159023 | 2,006 | 78 | 89 | 25.717949 | 0.851215 | 0 | 0 | 0.28 | 0 | 0 | 0.026919 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06 | false | 0 | 0.16 | 0.06 | 0.92 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
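A minimal construction sketch for the models above (illustrative only; the field values are made up, and ButtonsMessageBody additionally requires whatever common fields MessageBody defines, which are not shown in this file):

content = Content(
    body=Body(text="Do you want to proceed?"),
    action=Action(buttons=[Button(type="REPLY", id="btn-yes", title="Yes")]),
    header=HeaderText(type="TEXT", text="Confirmation"),
    footer=Footer(text="Reply within 24 hours"),
)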
0e843638054fdefd4d7143406cd9bfbe91485f7b | 5,985 | py | Python | udf/ext_target.py | metazool/pegamatites-xDD | 0d8aba77904c6dd0e2fc227cba84dcb9ecf728a4 | [
"CC-BY-4.0"
] | null | null | null | udf/ext_target.py | metazool/pegamatites-xDD | 0d8aba77904c6dd0e2fc227cba84dcb9ecf728a4 | [
"CC-BY-4.0"
] | null | null | null | udf/ext_target.py | metazool/pegamatites-xDD | 0d8aba77904c6dd0e2fc227cba84dcb9ecf728a4 | [
"CC-BY-4.0"
] | null | null | null | # ==============================================================================
# TARGET NAME EXTRACTOR
# ==============================================================================
# import relevant modules and data
# ==============================================================================
import time
import random
import re
import yaml
import psycopg2
from psycopg2.extensions import AsIs
from yaml import Loader
start_time = time.time()
# Connect to Postgres
with open('./credentials', 'r') as credential_yaml:
credentials = yaml.load(credential_yaml, Loader=Loader)
with open('./config', 'r') as config_yaml:
config = yaml.load(config_yaml, Loader=Loader)
# Connect to Postgres
connection = psycopg2.connect(
password=credentials['postgres']['password'],
dbname=credentials['postgres']['database'],
user=credentials['postgres']['user'],
host=credentials['postgres']['host'],
port=credentials['postgres']['port'])
cursor = connection.cursor()
# initalize the target_instances table
cursor.execute("""
DELETE FROM target_instances;
""")
# IMPORT THE SENTENCES DUMP
cursor.execute("""
SELECT docid, sentid, words, poses, dep_paths, dep_parents FROM %(my_app)s_sentences_%(my_product)s;
""", {
"my_app": AsIs(config['app_name']),
"my_product": AsIs(config['product'].lower())
})
# push drop/create to the database
connection.commit()
# initalize list of target occurences
target_list = []
# TARGET DEFINITIONS
with open('./var/target_variables.txt') as fid:
target_variables = fid.readlines()
for i in target_variables:
exec(i)
# loop through all sentences.
to_write = []
for line in cursor:
# collect individual elements from the psql sentences dump
docid, sentid, words, poses, dep_paths, dep_parents = line
# initialize list of local target occurences
targets = []
# sentence string
sent = ' '.join(words)
# loop through all the target names
for name in target_names:
# starting index of all matches for a target_name in the joined
# sentence
matches = [m.start() for m in re.finditer(name, sent.lower())]
if matches:
# if at least one match is found, count number of spaces backward
# to arrive at word index
indices = [sent[0:m].count(' ') for m in matches]
# remove double hits (i.e. stromatolitic-thrombolitic)
indices = list(set(indices))
# target_name spans its starting word index to the number of words
# in the phrase
target_word_idx = [[i, i + len(name.split(' '))] for i in indices]
# initialize other data about a found target_name
target_pose = []
target_path = []
target_parent = []
for span in target_word_idx:
# poses, paths and parents can be found at same indices of a
# target_name find
target_word = ' '.join(words[span[0]:span[1]])
if target_word.lower() not in bad_words:
target_children = []
target_pose = poses[span[0]:span[1]]
target_path = dep_paths[span[0]:span[1]]
target_parent = dep_parents[span[0]:span[1]]
# children of each component of a target_name
for span_idx in range(span[0], span[1]):
children = [j for j, i in enumerate(
dep_parents) if i == span_idx + 1]
target_children.append(children)
# convert parent_ids to Pythonic ids
target_parent = [i - 1 for i in target_parent]
# add finds to a local variable
target_list.append([docid,
sentid,
target_word,
span,
target_pose,
target_path,
target_parent,
target_children,
sent])
# for easier storage, convert list of target_children lists
# to a string
str_target_children = str(target_children)
# write to PSQL table
to_write.append(
(docid, sentid, target_word, span, target_pose,
target_path, target_parent, str_target_children, sent)
)
cursor.executemany("""
INSERT INTO target_instances( docid,
sentid,
target_word,
target_word_idx,
target_pose,
target_path,
target_parent,
target_children,
sentence)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s);""",
to_write
)
# push insertions to the database
connection.commit()
# restart the primary key
cursor.execute("""
ALTER TABLE target_instances DROP target_id;
""")
# push drop/create to the database
connection.commit()
# add primary key
cursor.execute(""" ALTER TABLE target_instances ADD COLUMN target_id SERIAL PRIMARY KEY;
""")
connection.commit()
# do some magic
connection.set_isolation_level(0)
cursor.execute(""" VACUUM ANALYZE target_instances;
""")
connection.commit()
# close the connection
connection.close()
# summary statistic
success = 'number of target instances: %s' % len(target_list)
# summary of performance time
elapsed_time = time.time() - start_time
print(
'\n ###########\n\n %s \n elapsed time: %d seconds\n\n ###########\n\n' %
(success, elapsed_time))
# USEFUL BIT OF CODE FOR LOOKING AT RANDOM RESULTS
r = random.randint(0, len(target_list) - 1)
print("=========================\n")
print(("\n".join(str(target) for target in target_list[r])))
print("\n=========================")
| 31.666667 | 104 | 0.557059 | 673 | 5,985 | 4.818722 | 0.28529 | 0.024669 | 0.006475 | 0.007401 | 0.18008 | 0.15017 | 0.140302 | 0.140302 | 0.038853 | 0.038853 | 0 | 0.004537 | 0.300251 | 5,985 | 188 | 105 | 31.835106 | 0.769819 | 0.25213 | 0 | 0.220183 | 0 | 0.027523 | 0.186555 | 0.026167 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.009174 | 0.06422 | 0 | 0.06422 | 0.036697 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
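The core indexing trick in the extraction loop above — locating a multi-word target name in the joined sentence and converting its character offset into word indices by counting spaces — can be seen in isolation (the sentence and target name below are made-up illustrations):

import re

sent = "the altered dolomite target name appears here"
name = "target name"
matches = [m.start() for m in re.finditer(name, sent.lower())]
indices = [sent[0:m].count(' ') for m in matches]           # word index where each match starts
spans = [[i, i + len(name.split(' '))] for i in indices]    # [start_word, end_word) spans
print(spans)  # [[3, 5]] -> words 3 and 4, i.e. "target name"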
0e85046c1045ae72b8ffce90d767897930c2dd01 | 11,568 | py | Python | mamo_CAD.py | jyotidabas/mammogram-mass-CAD | 84ae282af6fa963bb54429e00bcc623a9db8448b | [
"MIT"
] | 1 | 2020-10-05T09:59:28.000Z | 2020-10-05T09:59:28.000Z | mamo_CAD.py | jyotidabas/mammogram-mass-CAD | 84ae282af6fa963bb54429e00bcc623a9db8448b | [
"MIT"
] | null | null | null | mamo_CAD.py | jyotidabas/mammogram-mass-CAD | 84ae282af6fa963bb54429e00bcc623a9db8448b | [
"MIT"
] | null | null | null | """
Original code:
Mask R-CNN
Train on the toy Balloon dataset and implement color splash effect.
Copyright (c) 2018 Matterport, Inc.
Licensed under the MIT License (see LICENSE for details)
Written by Waleed Abdulla
------------------------------------------------------------
Adapted by Sam Kelly, Hang Min and Devin Wilson for mammographic mass detection and segmentation.
"""
import os
import sys
import json
import datetime
import numpy as np
import skimage.draw
import matplotlib.image
import glob
import scipy.misc
from PIL import Image
#import imgaug
from imgaug import augmenters as iaa
# Root directory of the project
ROOT_DIR = os.getcwd()
ROOT_DIR = ROOT_DIR+"/Mask_r_cnn/"
MAMOGRAM_IMAGE_DIR = "/scans/pseudo_color_image/" #Path of the mammograms
MAMOGRAM_MASK_DIR = "/scans/mask/"# Path of the ground truth masks
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn.config import Config
from mrcnn import model as modellib, utils
# Path to trained weights file
COCO_WEIGHTS_PATH = os.path.join(ROOT_DIR, "mask_rcnn_balloon.h5")
# Directory to save logs and model checkpoints, if not provided
# through the command line argument --logs
DEFAULT_LOGS_DIR = os.path.join("/your_path/", "logs")#Log directory for saving the weights
DEMO_SAVE_DIR = "scans/seg_mask/"# path to save the segmentation masks
if not os.path.exists(DEMO_SAVE_DIR):
os.mkdir(DEMO_SAVE_DIR)
############################################################
# Configurations
############################################################
class MamogramConfig(Config):
"""Configuration for training on the toy dataset.
Derives from the base Config class and overrides some values.
"""
# Give the configuration a recognizable name
NAME = "mamogram"
# We use a GPU with 12GB memory, which can fit two images.
# Adjust down if you use a smaller GPU.
IMAGES_PER_GPU = 1
# Number of classes (including background)
NUM_CLASSES = 1 + 1 # Background + lesion
# Number of training steps per epoch,set to the number of training data here
STEPS_PER_EPOCH = 100
# Number of validation steps after each round of training
VALIDATION_STEPS = 10
# Resize mode: "none" or "square"
IMAGE_RESIZE_MODE = "square"
IMAGE_MIN_DIM = 1024
IMAGE_MAX_DIM = 1024
# Skip detections with < DETECTION_MIN_CONFIDENCE
DETECTION_MIN_CONFIDENCE = 0.965 # alter this during testing to generate different TPR at different FPI
# 0.7 0.75 0.8 0.85 0.9
############################################################
# Dataset
############################################################
class MamogramDataset(utils.Dataset):
def load_mamogram(self, subset):
"""This method loads the actual image
subset is either "train" or "val" depending on whether the image is part of the training or validation datasets
"""
# Add classes. We have only one class to add.
# These are the things that will be segmented
self.add_class("mamogram", 1, "lesion")
# Train or validation dataset?
#list all the files in the directory with the mamogram images
files = os.listdir(ROOT_DIR + MAMOGRAM_IMAGE_DIR + subset + "/")
for fname in files:
self.add_image("mamogram", image_id=fname, path=ROOT_DIR + MAMOGRAM_IMAGE_DIR + subset +"/"+ fname, subset=subset, fname=fname)
def load_mask(self, image_id):
"""load the instance masks for an image.
Returns:
a tuple containing:
masks: A bool array of shape [height, width, instance count] with
one mask per instance.
class_ids: a 1D array of class IDs of the instance masks.
use dtype=np.int32
"""
info = self.image_info[image_id]
fname = info['fname']
files = glob.glob(ROOT_DIR + MAMOGRAM_MASK_DIR + info['subset'] + fname[0:-4] + "*")
masks = []
for i in range(0, len(files)):
data = skimage.io.imread(files[i])
if data.ndim != 2:  # convert multi-channel masks to single-channel grayscale
data = skimage.color.rgb2gray(data)
singleMask = data
if i == 0:
masks = np.zeros((singleMask.shape[0], singleMask.shape[1], len(files)))
masks[:,:,i] = singleMask
instanceMaskMap = np.array(np.ones([masks.shape[-1]], dtype=np.int32)) #this is VERY important: array of class ids in the order that they appear in bigdata
# Return mask, and array of class IDs of each instance. Since we have
# one class ID only, we return an array of 1s
return (masks.astype(np.bool), instanceMaskMap)
def load_image(self, image_id):
"""Load the specified image and return a [H,W,3] Numpy array.
Taken from utils.py, any refinements we need can be done here
"""
# Load image
image = skimage.io.imread(self.image_info[image_id]['path'])
# If grayscale. Convert to RGB for consistency.
if image.ndim != 3:
image = skimage.color.gray2rgb(image)
# If has an alpha channel, remove it for consistency
if image.shape[-1] == 4:
image = image[..., :3]
return image
def image_reference(self, image_id):
"""Return the path of the image."""
info = self.image_info[image_id]
return info["path"]
def train(model):
"""Train the model."""
# Training dataset.
dataset_train = MamogramDataset()
dataset_train.load_mamogram("/train/")
dataset_train.prepare()
# Validation dataset
dataset_val = MamogramDataset()
dataset_val.load_mamogram("/val/")
dataset_val.prepare()
# Image augmentation
# http://imgaug.readthedocs.io/en/latest/source/augmenters.html
aug = iaa.Sequential([
iaa.OneOf([iaa.Fliplr(0.5),
iaa.Flipud(0.5),
iaa.Affine(rotate=90),
iaa.Affine(rotate=180),
iaa.Affine(rotate=270)]),
])
# *** This training schedule is an example. Update to your needs ***
# Since we're using a very small dataset, and starting from
# COCO trained weights, we don't need to train too long. Also,
# no need to train all layers, just the heads should do it.
print("Training network heads")
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=10,augmentation=aug,
layers='all')
def segment(model, imPath):
image = skimage.io.imread(imPath)
fname = imPath.split('/')[-1]
mrcnnData = model.detect([image], verbose=1)
# documentation for model.detect:
# """Runs the detection pipeline.
# images: List of images, potentially of different sizes.
# Returns a list of dicts, one dict per image. The dict contains:
# rois: [N, (y1, x1, y2, x2)] detection bounding boxes
# class_ids: [N] int class IDs
# scores: [N] float probability scores for the class IDs
# masks: [H, W, N] instance binary masks
# """
mrcnnData = mrcnnData[0] #model.detect takes a list of images, but here we only provide one image so the output is a list with just one element
masks = mrcnnData['masks']
for i in range(0, masks.shape[2]):
#iterate through the masks
maskSingle = np.squeeze(masks[:, :, i])
file_name = DEMO_SAVE_DIR + "demo_mask_" + str(i) + "_" + fname + "_{:%Y%m%dT%H%M%S}.png".format(datetime.datetime.now())
scipy.misc.imsave(file_name, maskSingle.astype(np.int64))
print(mrcnnData)
print("&&&&&&&&&&&: "+str(mrcnnData['rois']))
print("&&&&&&&&&&&: "+str(mrcnnData['class_ids']))
print("&&&&&&&&&&&: "+str(mrcnnData['scores']))
return
def segmentWrapper(model, directory):
"""wrapper function for segment to take many images as an input, calls segment() on everything in the directory"""
files = os.listdir(directory)
for f in files:
segment(model, directory + '/' + f)
def overlayResult(image, mask):
"""Function to overlay segmentation mask on an image.
usage: image_var = overlayResult(image, dict['masks'] || masks_var)
image: RGB or grayscale image [height, width, 1 || 3].
mask: segmentation mask [height, width, instance_count]
returns resulting image.
"""
# Image is already in grayscale so we skip converting it
# May need to create 3 dimensions if single dimension image though so
# will add this as a placeholder
gray = skimage.color.gray2rgb(skimage.color.rgb2gray(image)) * 255
# Copy color pixels from the original color image where mask is set
if mask.shape[-1] > 0:
#collapse masks into one layer
mask = (np.sum(mask, -1, keepdims=True) >= 1)
overlay = np.where(mask, image, gray).astype(np.uint8)
else:
overlay = gray.astype(np.uint8)
return overlay
############################################################
# Training
############################################################
if __name__ == '__main__':
import argparse
# Parse command line arguments
parser = argparse.ArgumentParser(
description='Train Mask R-CNN to detect breast lesions.')
parser.add_argument("command",
metavar="<command>",
help="'train' or 'segment'")
parser.add_argument('--weights', required=True,
metavar="/path/to/weights.h5",
help="Path to weights .h5 file or 'coco'")
parser.add_argument('--image', required=False,
metavar='/path/to/image',
help="Path to image file for segmentation")
args = parser.parse_args()
# Configurations
if args.command == "train":
config = MamogramConfig()
else:
class InferenceConfig(MamogramConfig):
# Set batch size to 1 since we'll be running inference on
# one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
# Create model
if args.command == "train":
model = modellib.MaskRCNN(mode="training", config=config,
model_dir=DEFAULT_LOGS_DIR)
else:
model = modellib.MaskRCNN(mode="inference", config=config,
model_dir=DEFAULT_LOGS_DIR)
# Select weights file to load
if args.weights.lower() == "coco":
weights_path = COCO_WEIGHTS_PATH
# Download weights file
if not os.path.exists(weights_path):
utils.download_trained_weights(weights_path)
else:
weights_path = args.weights
# Load weights
print("Loading weights ", weights_path)
if args.weights.lower() == "coco":
# Exclude the last layers because they require a matching
# number of classes
model.load_weights(weights_path, by_name=True, exclude=[
"mrcnn_class_logits", "mrcnn_bbox_fc",
"mrcnn_bbox", "mrcnn_mask"])
else:
model.load_weights(weights_path, by_name=True)
# Train or evaluate
if args.command == "train":
train(model)
elif args.command == "segment":
if os.path.isdir(args.image):
segmentWrapper(model, args.image)
else:
segment(model, args.image)
else:
print("'{}' is not recognized. "
"Use 'train' or 'segment'".format(args.command))
| 34.123894 | 163 | 0.615232 | 1,476 | 11,568 | 4.727642 | 0.301491 | 0.014187 | 0.006306 | 0.006449 | 0.064488 | 0.040413 | 0.02035 | 0.010605 | 0 | 0 | 0 | 0.012111 | 0.250519 | 11,568 | 338 | 164 | 34.224852 | 0.792734 | 0.365145 | 0 | 0.111801 | 0 | 0 | 0.097294 | 0.00695 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049689 | false | 0 | 0.086957 | 0 | 0.254658 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
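The mask-collapsing step inside overlayResult() above can be reproduced with plain NumPy; the arrays here are synthetic stand-ins for a detection result:

import numpy as np

image = np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8)      # fake RGB crop
gray = np.repeat(image.mean(axis=-1, keepdims=True), 3, axis=-1)   # grayscale copy, 3 channels
masks = np.zeros((4, 4, 2), dtype=bool)                            # two fake instance masks
masks[1:3, 1:3, 0] = True
masks[0, 0, 1] = True

collapsed = (np.sum(masks, -1, keepdims=True) >= 1)          # (4, 4, 1): any instance present?
overlay = np.where(collapsed, image, gray).astype(np.uint8)  # colour where masked, gray elsewhere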
0e8653a92683cdf549d9efd3f8428708ccb95e03 | 890 | py | Python | umm/server/__main__.py | zachcoleman/umm-cli-bot | 13817970fb03a646b15921508eee4cae79b9a156 | [
"Apache-2.0"
] | null | null | null | umm/server/__main__.py | zachcoleman/umm-cli-bot | 13817970fb03a646b15921508eee4cae79b9a156 | [
"Apache-2.0"
] | 1 | 2021-09-08T18:57:54.000Z | 2021-09-08T18:57:54.000Z | umm/server/__main__.py | zachcoleman/umm-cli-helper | 13817970fb03a646b15921508eee4cae79b9a156 | [
"Apache-2.0"
] | null | null | null | from functools import partial
from aiohttp import web

from umm.server.command_set import CommandSet
from umm.server.routes import (
    add_command,
    available_commands,
    confirm_command,
    request_command,
)
from umm.server.utils import setup_folder
from umm.utils.config import parse_config


def setup_routes(app: web.Application, commands: CommandSet):
    app.router.add_get("/commands", partial(available_commands, commands=commands))
    app.router.add_get("/add", partial(add_command, commands=commands))
    app.router.add_get("/umm", partial(request_command, commands=commands))
    app.router.add_get("/confirm", partial(confirm_command, commands=commands))


def main():
    commands = setup_folder()
    config = parse_config()
    app = web.Application()
    setup_routes(app, commands)
    web.run_app(app, port=config.port)


if __name__ == "__main__":
    main()
| 26.969697 | 83 | 0.74382 | 116 | 890 | 5.465517 | 0.275862 | 0.126183 | 0.07571 | 0.094637 | 0.16877 | 0.16877 | 0.119874 | 0 | 0 | 0 | 0 | 0 | 0.146067 | 890 | 32 | 84 | 27.8125 | 0.834211 | 0 | 0 | 0 | 0 | 0 | 0.037079 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.25 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
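The setup_routes() pattern above — pre-binding extra arguments into aiohttp handlers with functools.partial — looks like this with a self-contained, hypothetical handler:

from functools import partial
from aiohttp import web

async def example_handler(request: web.Request, commands) -> web.Response:
    # aiohttp supplies only `request`; `commands` is baked in by partial() below.
    return web.json_response({"count": len(commands)})

app = web.Application()
app.router.add_get("/example", partial(example_handler, commands=["ls", "pwd"]))
# web.run_app(app, port=8080)  # would start the server, as main() does above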
0e884c2a615c888dd2ad2db31e95b3e71b92d49a | 584 | py | Python | Lib/site-packages/braintree/sub_merchant_account/contact_details.py | shashank7991/eBuy | 2e65572967b33e7205b38c048b7be2d9943173b6 | [
"MIT"
] | null | null | null | Lib/site-packages/braintree/sub_merchant_account/contact_details.py | shashank7991/eBuy | 2e65572967b33e7205b38c048b7be2d9943173b6 | [
"MIT"
] | null | null | null | Lib/site-packages/braintree/sub_merchant_account/contact_details.py | shashank7991/eBuy | 2e65572967b33e7205b38c048b7be2d9943173b6 | [
"MIT"
] | null | null | null | from braintree.attribute_getter import AttributeGetter
from braintree.sub_merchant_account.address_details import AddressDetails


class ContactDetails(AttributeGetter):
    detail_list = [
        "address_details",
        "date_of_birth",
        "email",
        "first_name",
        "last_name",
        "phone",
    ]

    def __init__(self, attributes):
        AttributeGetter.__init__(self, attributes)
        self.address_details = AddressDetails(attributes.get("address", {}))

    def __repr__(self):
        return super(ContactDetails, self).__repr__(self.detail_list)
| 29.2 | 76 | 0.688356 | 57 | 584 | 6.561404 | 0.561404 | 0.112299 | 0.096257 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214041 | 584 | 19 | 77 | 30.736842 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0.109589 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0.0625 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e88cba520cdccf81efe1f48c5d10b08f44dc8a0 | 2,805 | py | Python | app/stats/views.py | uva-slp/pico | 3a4f20ea5e9359e2e4b770442fa59ae8e0bf30ed | [
"MIT"
] | 1 | 2017-09-20T23:29:59.000Z | 2017-09-20T23:29:59.000Z | app/stats/views.py | uva-slp/pico | 3a4f20ea5e9359e2e4b770442fa59ae8e0bf30ed | [
"MIT"
] | null | null | null | app/stats/views.py | uva-slp/pico | 3a4f20ea5e9359e2e4b770442fa59ae8e0bf30ed | [
"MIT"
] | null | null | null | from django.shortcuts import render
from contests.models import Participant, Submission
from teams.models import Team
from django.contrib.auth.decorators import login_required


@login_required
def index(request):
    participation = Participant.objects.filter(team__members__username=request.user.username).order_by('contest__date_created')
    contest_count = Participant.objects.filter(team__members__username=request.user.username).count()
    teams = Team.objects.filter(members__username=request.user.username).order_by('name')
    submissions = Submission.objects.filter(team__members__username=request.user.username).order_by('result')
    submissions_count = Submission.objects.filter(team__members__username=request.user.username).count()

    correct_submissions = 0
    for s in submissions:
        if s.result == 'YES':
            correct_submissions += 1

    teams_count = teams.count()
    teammates_count = 0
    best_team = None
    best_team_count = 0
    teammate_freq = {}
    favorite_teammates = []
    for t in teams:
        members = t.members.all()
        for m in members:
            if m.username != request.user.username:
                teammates_count += 1
                if m not in teammate_freq:
                    teammate_freq[m] = 1
                else:
                    teammate_freq[m] += 1
        team_correct_num = Submission.objects.filter(team__members__username=request.user.username).filter(result='YES').count()
        if team_correct_num > best_team_count:
            best_team_count = team_correct_num
            best_team = t

    for i in range(3):
        favorite_teammates.append(keywithmaxval(teammate_freq))
        if favorite_teammates[i] != None:
            del teammate_freq[favorite_teammates[i]]

    return render(request, 'stats/index.html', { 'participation' : participation,
                                                 'contest_count' : contest_count,
                                                 'teams' : teams,
                                                 'teams_count' : teams_count,
                                                 'teammates_count' : teammates_count,
                                                 'correct_submissions' : correct_submissions,
                                                 'best_team' : best_team,
                                                 'best_team_count' : best_team_count,
                                                 'favorite_teammates' : favorite_teammates})


def keywithmaxval(dic):
    """ a) create a list of the dict's keys and values;
        b) return the key with the max value"""
    v = list(dic.values())
    k = list(dic.keys())
    if len(v) > 0:
        return k[v.index(max(v))]
    else:
        return None
| 41.865672 | 128 | 0.585027 | 298 | 2,805 | 5.248322 | 0.271812 | 0.046036 | 0.085038 | 0.120844 | 0.337596 | 0.314578 | 0.237852 | 0.211637 | 0.211637 | 0.074169 | 0 | 0.004787 | 0.329768 | 2,805 | 66 | 129 | 42.5 | 0.827128 | 0.030303 | 0 | 0.037037 | 0 | 0 | 0.063216 | 0.007763 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.074074 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
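A quick illustration of how keywithmaxval() above behaves (the dictionary values are made up):

teammate_freq = {'alice': 3, 'bob': 5, 'carol': 1}
keywithmaxval(teammate_freq)   # -> 'bob'
keywithmaxval({})              # -> None, since the empty dict falls through to the else branch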
0e88f09070a65457c70eb23e51b91b102da3e2fe | 4,663 | py | Python | qiskit_metal/qlibrary/qubits/JJ_Manhattan.py | yjiangaj/qiskit-metal | 1ee7b39eeeffe9b8ad1119b5d607c91a896696c1 | [
"Apache-2.0"
] | 1 | 2021-08-28T20:35:43.000Z | 2021-08-28T20:35:43.000Z | qiskit_metal/qlibrary/qubits/JJ_Manhattan.py | yjiangaj/qiskit-metal | 1ee7b39eeeffe9b8ad1119b5d607c91a896696c1 | [
"Apache-2.0"
] | 1 | 2021-04-03T00:10:19.000Z | 2021-04-03T00:10:19.000Z | qiskit_metal/qlibrary/qubits/JJ_Manhattan.py | yjiangaj/qiskit-metal | 1ee7b39eeeffe9b8ad1119b5d607c91a896696c1 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# This code is part of Qiskit.
#
# (C) Copyright IBM 2017, 2021.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""
Josephson Junction (Manhattan-Style)
REFERENCE: I.M. Pop et al., Nature 508, 369-372 (2014)
"""
from qiskit_metal import draw, Dict
from qiskit_metal.qlibrary.core.base import QComponent
class jj_manhattan(QComponent):
"""
The base "JJ_Manhattan" inherits the "QComponent" class.
NOTE TO USER: Please be aware that when designing with this
qcomponent, one must take care in accounting for the junction
qgeometry when exporting to to GDS and/or to a simulator. This
qcomponent should not be rendered for EM simulation.
This creates a "Manhattan"-style Josephson Junction consisting
of two overlapping thin metal strips, each connected to a
larger metallic pad region.
.. image::
JJManhattan.png
Default Options:
* JJ_pad_lower_width: '4um' -- width of lower JJ metal region
* JJ_pad_lower_height: '2um' -- height of lower JJ metal region
* JJ_pad_lower_pos_x: '0' -- the initial x-coord of the lower rectangle
* JJ_pad_lower_pos_y: '0' -- the initial y-coord of the lower rectangle
* finger_lower_width: '1um' -- the width of the overlapping rectangular finger(s)
* finger_lower_height: '20um' -- the length of the overlapping rectangular finger(s)
* extension: '1um' -- the length of the fingers extending beyond the cross-point
* x_pos: '0um' -- the x-coordinate of the lower left point of the final design
* y_pos: '0um' -- the y-coordinate of the lower left point of the final design
"""
# Default drawing options
default_options = Dict(JJ_pad_lower_width='25um',
JJ_pad_lower_height='10um',
JJ_pad_lower_pos_x='0',
JJ_pad_lower_pos_y='0',
finger_lower_width='1um',
finger_lower_height='20um',
extension='1um',
x_pos='0um',
y_pos='0um',
layer='1')
"""Default drawing options"""
# Name prefix of component, if user doesn't provide name
component_metadata = Dict(short_name='component')
"""Component metadata"""
def make(self):
"""Convert self.options into QGeometry."""
p = self.parse_options() # Parse the string options into numbers
# draw the lower pad as a rectangle
JJ_pad_lower = draw.rectangle(p.JJ_pad_lower_width,
p.JJ_pad_lower_height,
p.JJ_pad_lower_pos_x,
p.JJ_pad_lower_pos_y)
finger_lower = draw.rectangle(
p.finger_lower_width, p.finger_lower_height, p.JJ_pad_lower_pos_x,
0.5 * (p.JJ_pad_lower_height + p.finger_lower_height))
# fudge factor to merge the two options
finger_lower = draw.translate(finger_lower, 0.0, -0.0001)
# merge the lower pad and the finger into a single object
design = draw.union(JJ_pad_lower, finger_lower)
# copy the pad/finger and rotate it by 90 degrees
design2 = draw.rotate(design, 90.0)
# translate the second pad/finger to achieve the desired extension
design2 = draw.translate(
design2, 0.5 * (p.JJ_pad_lower_height + p.finger_lower_height) -
0.5 * p.finger_lower_width - p.extension,
0.5 * (p.JJ_pad_lower_height + p.finger_lower_height) -
0.5 * p.finger_lower_width - p.extension)
final_design = draw.union(design, design2)
# translate the final design so that the bottom left
# corner of the lower pad is at the origin
final_design = draw.translate(final_design, 0.5 * p.JJ_pad_lower_width,
0.5 * p.JJ_pad_lower_height)
# now translate so that the design is centered on the
# user-defined coordinates (x_pos, y_pos)
final_design = draw.translate(final_design, p.x_pos, p.y_pos)
geom = {'design': final_design}
self.add_qgeometry('poly', geom, layer=p.layer, subtract=False)
| 42.390909 | 92 | 0.633069 | 650 | 4,663 | 4.366154 | 0.323077 | 0.035236 | 0.070472 | 0.03876 | 0.2463 | 0.210359 | 0.137421 | 0.130726 | 0.091261 | 0.091261 | 0 | 0.023759 | 0.28694 | 4,663 | 109 | 93 | 42.779817 | 0.829774 | 0.506755 | 0 | 0 | 0 | 0 | 0.021739 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.054054 | 0 | 0.162162 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
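A minimal usage sketch under the usual qiskit-metal workflow (assumptions: a planar design session and arbitrary option values; neither appears in the file above):

from qiskit_metal import designs

design = designs.DesignPlanar()
jj = jj_manhattan(design, 'JJ_1', options=dict(x_pos='10um', y_pos='10um', extension='2um'))
# design.rebuild() would regenerate the component's qgeometry after option changes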
0e892f14a90b7f3626534a7c42c08b1aa4229096 | 7,238 | py | Python | createValidationVideoWithNewZZversion.py | oliviermirat/ZZDeepRollover | 99cd0c334c26b1cfcaf241e970415950443102ab | [
"MIT"
] | 1 | 2020-10-14T12:54:38.000Z | 2020-10-14T12:54:38.000Z | createValidationVideoWithNewZZversion.py | oliviermirat/ZZDeepRollover | 99cd0c334c26b1cfcaf241e970415950443102ab | [
"MIT"
] | null | null | null | createValidationVideoWithNewZZversion.py | oliviermirat/ZZDeepRollover | 99cd0c334c26b1cfcaf241e970415950443102ab | [
"MIT"
] | null | null | null | from __future__ import absolute_import, division, print_function
import matplotlib.pylab as plt
import numpy as np
import os
import cv2
import sys
import json
import pandas as pd
def createValidationVideoWithNewZZversion(videoName, path, rolloversMedFiltAllWells, resultsPercentages, pathToInitialVideo):
### Loading the images and applying the classifier on them
videoPath = os.path.join(os.path.join(path, videoName), 'results_' + videoName + '.txt')
resizeScale = 3
frame_width = 60
frame_height = 60
ext1 = 20
ext2 = 20
font = cv2.FONT_HERSHEY_SIMPLEX
bottomLeftCornerOfText = (1,10)
fontScale = 0.4
fontColor = (255,255,255)
lineType = 1
y_offset = 0
x_offset = 0
rolloverFrameFile = os.path.join(os.path.join(path, videoName), 'rolloverManualClassification.json')
exists = os.path.isfile(rolloverFrameFile)
if exists:
videoPath2 = pathToInitialVideo
cap = cv2.VideoCapture(videoPath2)
videoLength = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fileRollover = open(rolloverFrameFile, 'r')
rolloverFrame = json.loads(fileRollover.read())
trueRolloverAllWells = []
for well in rolloverFrame:
trueRollovers = np.zeros((videoLength))
for boundaries in rolloverFrame[well]['rollover']:
left = boundaries[0]
right = boundaries[1]
for i in range(left,right+1):
trueRollovers[i] = 1
trueRolloverAllWells.append(trueRollovers)
trueRolloverAllWells = np.array(trueRolloverAllWells)
# Loading "true" maybes
trueMaybeAllWells = []
for well in rolloverFrame:
trueMaybe = np.zeros((videoLength))
for boundaries in rolloverFrame[well]['inBetween']:
left = boundaries[0]
right = boundaries[1]
for i in range(left,right+1):
trueMaybe[i] = 1
trueMaybeAllWells.append(trueMaybe)
trueMaybeAllWells = np.array(trueMaybeAllWells)
else:
trueRolloverAllWells = []
trueMaybeAllWells = []
out2 = cv2.VideoWriter(os.path.join(os.path.join(path, videoName), 'rolloverValidationAllFrames.avi'), cv2.VideoWriter_fourcc('M','J','P','G'), 10, (resizeScale*frame_width + ext1, resizeScale*frame_height + ext2))
if len(rolloversMedFiltAllWells):
out = cv2.VideoWriter(os.path.join(os.path.join(path, videoName), 'validationOnlyFramesDetectedAsRollover.avi') ,cv2.VideoWriter_fourcc('M','J','P','G'), 10, (resizeScale*frame_width + ext1, resizeScale*frame_height + ext2))
out3 = cv2.VideoWriter(os.path.join(os.path.join(path, videoName), 'validationOnlyFramesDetectedAsNormal.avi') ,cv2.VideoWriter_fourcc('M','J','P','G'), 10, (resizeScale*frame_width + ext1, resizeScale*frame_height + ext2))
out4 = cv2.VideoWriter(os.path.join(os.path.join(path, videoName), 'validationOnlyErrors.avi'), cv2.VideoWriter_fourcc('M','J','P','G'), 10, (resizeScale*frame_width + ext1, resizeScale*frame_height + ext2))
if (os.path.isfile(videoPath)):
# Applying rollover classifier to each frame and saving the results in a txt file
file = open(videoPath, 'r')
j = json.loads(file.read())
wellPoissMouv = j['wellPoissMouv']
wellPositions = j['wellPositions']
nbWell = len(wellPositions)
# going through each well in super structure
for i in range(0,nbWell):
xwell = wellPositions[i]['topLeftX']
ywell = wellPositions[i]['topLeftY']
if xwell < 0:
xwell = 0
if ywell < 0:
ywell = 0
videoPath2 = pathToInitialVideo
if (len(wellPoissMouv[i])):
if (len(wellPoissMouv[i][0])):
cap = cv2.VideoCapture(videoPath2)
nbMouv = len(wellPoissMouv[i][0])
for j in range(0,nbMouv):
if (len(wellPoissMouv[i][0][j])):
item = wellPoissMouv[i][0][j]
BoutStart = item['BoutStart']
BoutEnd = item['BoutEnd']
k = BoutStart
cap.set(cv2.CAP_PROP_POS_FRAMES,BoutStart)
while (k <= BoutEnd):
ret, frame = cap.read()
yStart = int(ywell+item['HeadY'][k-BoutStart]-30)
yEnd = int(ywell+item['HeadY'][k-BoutStart]+30)
xStart = int(xwell+item['HeadX'][k-BoutStart]-30)
xEnd = int(xwell+item['HeadX'][k-BoutStart]+30)
frame = frame[yStart:yEnd, xStart:xEnd]
if ret == True:
frame2 = cv2.resize(frame, (resizeScale*frame_width, resizeScale*frame_height))
frame3 = np.zeros((frame2.shape[0] + ext1, frame2.shape[1] + ext2, 3), np.uint8)
frame3[0:frame2.shape[0],0:frame2.shape[1]] = frame2
if len(rolloversMedFiltAllWells):
if rolloversMedFiltAllWells[i][k]:
cv2.circle(frame3,(10,10),10,(0,0,255),-1)
if len(trueRolloverAllWells):
if trueRolloverAllWells[i][k]:
cv2.circle(frame3,(10,29),10,(0,255,0),-1)
if len(trueMaybeAllWells):
if trueMaybeAllWells[i][k]:
cv2.circle(frame3,(10,25),10,(0,165,255),-1)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(frame3, str((int(resultsPercentages[i][k]*100)))+"%", (10, resizeScale*frame_height+ext2-7), font, 0.5, (0, 0, 255), 2)
cv2.putText(frame3, str(i+1)+";"+str(k), (resizeScale*frame_width-40, resizeScale*frame_height+ext2-7), font, 0.4, (0, 0, 255), 1)
out2.write(frame3)
if rolloversMedFiltAllWells[i][k]:
out.write(frame3)
else:
out3.write(frame3)
if len(trueMaybeAllWells) and trueMaybeAllWells[i][k] == 0:
if rolloversMedFiltAllWells[i][k] != trueRolloverAllWells[i][k]:
out4.write(frame3)
else:
if len(trueRolloverAllWells):
if trueRolloverAllWells[i][k]:
cv2.circle(frame3,(10,12),10,(0,0,255),-1)
if trueMaybeAllWells[i][k]:
cv2.circle(frame3,(10,15),10,(0,165,255),-1)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(frame3, str(i+1)+";"+str(k), (resizeScale*frame_width-40, resizeScale*frame_height+ext2-7), font, 0.4, (0, 0, 255), 1)
out2.write(frame3)
else:
break
k = k + 1
out2.release()
if len(rolloversMedFiltAllWells):
out.release()
out3.release()
out4.release()
if __name__ == '__main__':
__spec__ = "ModuleSpec(name='builtins', loader=<class '_frozen_importlib.BuiltinImporter'>)"
videoName = sys.argv[1]
path = sys.argv[2]
pathToInitialVideo = sys.argv[3]
createValidationVideoWithNewZZversion(videoName,path,[],[], pathToInitialVideo)
| 42.576471 | 228 | 0.582896 | 780 | 7,238 | 5.337179 | 0.233333 | 0.057651 | 0.028825 | 0.043718 | 0.367764 | 0.35215 | 0.343742 | 0.308191 | 0.25006 | 0.25006 | 0 | 0.048847 | 0.292899 | 7,238 | 170 | 229 | 42.576471 | 0.764556 | 0.027494 | 0 | 0.277372 | 0 | 0 | 0.054742 | 0.033272 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007299 | false | 0 | 0.065693 | 0 | 0.072993 | 0.007299 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e89ddfb94cea3f4d5d6b53127e2d70d3a33d0ae | 3,531 | py | Python | utilities/sniffer.py | ondrejholecek/fortimonitor | 79b0377f97b084f04396a323a84bb3e8a93d6f2d | [
"BSD-3-Clause"
] | 9 | 2018-10-19T08:47:42.000Z | 2020-08-19T01:58:27.000Z | utilities/sniffer.py | ondrejholecek/fortimonitor | 79b0377f97b084f04396a323a84bb3e8a93d6f2d | [
"BSD-3-Clause"
] | 1 | 2019-11-07T11:24:43.000Z | 2019-11-07T11:52:51.000Z | utilities/sniffer.py | ondrejholecek/fortimonitor | 79b0377f97b084f04396a323a84bb3e8a93d6f2d | [
"BSD-3-Clause"
] | 3 | 2020-04-24T03:51:15.000Z | 2021-12-19T16:08:24.000Z | #!/usr/bin/env python2.7
import os
# to be able to import our modules from the directory above
os.sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from _common import ssh, prepend_timestamp
import re
import datetime
import pytz
import struct
import sys
sshc, args = ssh([
{ 'name':'--filter', 'default':'', 'help':'Tcpdump filter, default ""' },
{ 'name':'--interface', 'default':'any', 'help':'Interface name, default "any"' },
{ 'name':'--direction', 'default':'both', 'choices':['in','out','both'], 'help':'Which direction to save, default "both"' },
{ 'name':'--simulate', 'help':'File name to simulate the SSH output' },
], """
Run the sniffer of FortiGate and dump packets in libpcap (old) format on standard input.
Can be processed by Wireshark with:
$ wireshark -k -i <(./sniffer.py --host 10.109.250.102 --port 10003 --filter 'proto 89')
""")
#
# 2018-11-12 13:17:24.757883 port4 in 10.1.4.5 -> 224.0.0.5: ip-proto-89 48
# 0x0000 0100 0000 0000 0050 5694 5986 0800 45c0 .......PV.Y...E.
# 0x0010 0044 d1a8 4000 0159 b8ed 0a01 0405 e000 .D..@..Y........
# 0x0020 0005 0201 0030 0000 0005 0000 0000 e088 .....0..........
# 0x0030 0000 0000 0000 0000 0000 ffff ff00 000a ................
# 0x0040 0201 0000 0028 0a01 0403 0a01 0405 0000 .....(..........
# 0x0050 0003 ..
#
re_packet = re.compile('^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}).(\d{6}) (\S+) (\S+) (\S+) -> (\S+):.*?[\r\n]+(.*?)\r?\n\r?\n', re.M | re.DOTALL)
unix_epoch_start = pytz.UTC.localize(datetime.datetime(1970, 1, 1, 0, 0, 0))
def divide(data, info):
# returns (result, data)
packets = []
while True:
g = re_packet.search(data)
if g == None: break
packets.append(data[g.start():g.end()])
data = data[g.end():]
return packets, data
def result(data, info):
g = re_packet.search(data)
if g == None: return # this should not happen
ts = pytz.UTC.localize(datetime.datetime(int(g.group(1)), int(g.group(2)), int(g.group(3)), int(g.group(4)), int(g.group(5)), int(g.group(6)), int(g.group(7))))
td = ts-unix_epoch_start
us = td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6
iface = g.group(8)
direction = g.group(9)
src = g.group(10)
dst = g.group(11)
pkt = g.group(12)
hstr = ''
for pktl in pkt.split("\n"):
hstr += pktl.split("\t")[1].replace(' ', '')
bpkt = hstr.decode('hex')
if info['save_direction'] == 'in' and direction != 'in': return
if info['save_direction'] == 'out' and direction != 'out': return
# save packet
pcap_packet(bpkt, us)
def finished(info):
if 'no_new_data' in info and info['no_new_data'] == True: return ''
return None
def pcap_packet(pkt, us):
packet_header = struct.pack('>IIII',
us / 1000000,
us % 1000000,
len(pkt),
len(pkt))
sys.stdout.write(packet_header)
sys.stdout.write(pkt)
sys.stdout.flush()
def pcap_header():
global_header = struct.pack('>IHHIIII',
0xa1b2c3d4,
2,
4,
0,
0,
65535,
1)
sys.stdout.write(global_header)
sys.stdout.flush()
def do(sshc, interface, filter_string):
if args.simulate:
simulated_file = open(args.simulate, 'rb')
else:
simulated_file = None
pcap_header()
info = { 'info': {
'save_direction': args.direction,
}}
sshc.continuous_exec("diagnose sniffer packet %s '%s' 6 0 a" % (interface, filter_string,), divide, result, finished, info, simulate=simulated_file)
if __name__ == '__main__':
try:
do(sshc, args.interface, args.filter)
except (KeyboardInterrupt, IOError):
sshc.destroy()
| 27.372093 | 161 | 0.632682 | 543 | 3,531 | 4.036832 | 0.403315 | 0.032847 | 0.028741 | 0.007299 | 0.071624 | 0.028741 | 0.028741 | 0.028741 | 0.005018 | 0 | 0 | 0.103601 | 0.166242 | 3,531 | 128 | 162 | 27.585938 | 0.640965 | 0.167658 | 0 | 0.071429 | 0 | 0.02381 | 0.245554 | 0.029412 | 0 | 0 | 0.00342 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.083333 | 0 | 0.178571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e8a8f6597be1772db94f9575d4830cd13069f96 | 6,193 | py | Python | gym_manytrading/envs/trading_env.py | bukk530/gym-anytrading | 84709448309b3f462ee15e7790a05faf4cc504c2 | [
"MIT"
] | null | null | null | gym_manytrading/envs/trading_env.py | bukk530/gym-anytrading | 84709448309b3f462ee15e7790a05faf4cc504c2 | [
"MIT"
] | null | null | null | gym_manytrading/envs/trading_env.py | bukk530/gym-anytrading | 84709448309b3f462ee15e7790a05faf4cc504c2 | [
"MIT"
] | null | null | null | import gym
from gym import spaces
import numpy as np
from enum import Enum
import matplotlib.pyplot as plt
class Actions(Enum):
Noop = 0
Liquidate = 1
Sell = 2
Buy = 3
class Order:
def __init__(self, price, action, tick):
self.price = price
self.action = action
self.tick = tick
class TradingEnv(gym.Env):
metadata = {'render.modes': ['human']}
def __init__(self, df, window_size, frame_bound, initial_capital=1000):
assert df.ndim == 2
self.seed()
self.df = df
self.frame_bound = frame_bound
self.window_size = window_size
self.prices, self.signal_features = self._process_data()
self.shape = (window_size, self.signal_features.shape[1])
# porfolio
self.equity = initial_capital
self.equity_history = (len(self.prices) - 1) * [None]
# spaces
self.action_space = spaces.Discrete(len(Actions))
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=self.shape, dtype=np.float32)
# orders
self._actions = (len(self.prices) - 1) * [None]
self._last_order = None
# episode
self._start_tick = self.window_size
self._end_tick = len(self.prices) - 1
self._done = None
self._current_tick = None
self._last_trade_tick = None
self._first_rendering = None
self.history = None
def reset(self):
self._done = False
self._current_tick = self._start_tick
self._last_trade_tick = self._current_tick - 1
self._first_rendering = True
self.history = {}
return self._get_observation()
def step(self, action):
if self._done:
return
action = Actions(action)
net_position_profit = None
self._actions[self._current_tick] = action
if action == Actions.Buy:
net_position_profit = self._liquidate()
self._buy()
elif action == Actions.Sell:
net_position_profit = self._liquidate()
self._sell()
elif action == Actions.Liquidate:
net_position_profit = self._liquidate()
self._calculate_portfolio_equity(net_position_profit)
observation = self._get_observation()
info = dict(
total_reward=0
)
self._current_tick += 1
if self._current_tick == self._end_tick:
self._done = True
return observation, net_position_profit, self._done, info
def _liquidate(self):
if self._last_order is None\
or self._last_order.action == Actions.Liquidate\
or self._last_order.action == Actions.Noop:
return
order = Order(self.prices[self._current_tick], Actions.Liquidate, self._current_tick)
step_reward = self._calculate_position_profit(self._last_order, order)
self._last_order = order
return step_reward
def _sell(self):
self._last_order = Order(self.prices[self._current_tick], Actions.Sell, self._current_tick)
def _buy(self):
self._last_order = Order(self.prices[self._current_tick], Actions.Buy, self._current_tick)
def _calculate_portfolio_equity(self, net_position_profit):
if net_position_profit is None and self._last_order is not None:
net_position_profit = self._calculate_position_profit(self._last_order,
Order(self.prices[self._current_tick],
Actions.Liquidate,
self._current_tick))
if net_position_profit is None:
return
self.equity += net_position_profit
self.equity_history[self._current_tick] = self.equity
def _calculate_position_profit(self, open_position, close_position):
gross_profit = 0
if open_position.action == Actions.Sell:
gross_profit = (close_position.price - open_position.price) * self.equity
elif open_position.action == Actions.Buy:
gross_profit = (open_position.price - close_position.price) * self.equity
return gross_profit
def _get_observation(self):
return self.signal_features[(self._current_tick - self.window_size):self._current_tick]
def _update_history(self, info):
if not self.history:
self.history = {key: [] for key in info.keys()}
for key, value in info.items():
self.history[key].append(value)
def _process_data(self):
prices = self.df.loc[:, 'Close'].to_numpy()
prices[self.frame_bound[0] - self.window_size] # validate index (TODO: Improve validation)
prices = prices[self.frame_bound[0]-self.window_size:self.frame_bound[1]]
diff = np.insert(np.diff(prices), 0, 0)
signal_features = np.column_stack((prices, diff))
return prices, signal_features
def render(self, mode="human"):
pass
def render_all(self, mode='human'):
window_ticks = np.arange(len(self._actions))
plt.plot(self.prices)
buy_ticks = []
sell_ticks = []
liquidate_ticks = []
noop_ticks = []
for i, tick in enumerate(window_ticks):
if self._actions[i] == Actions.Buy:
buy_ticks.append(tick)
elif self._actions[i] == Actions.Sell:
sell_ticks.append(tick)
elif self._actions[i] == Actions.Liquidate:
liquidate_ticks.append(tick)
elif self._actions[i] == Actions.Noop:
noop_ticks.append(tick)
plt.plot(buy_ticks, self.prices[buy_ticks], 'ro')
plt.plot(sell_ticks, self.prices[sell_ticks], 'go')
plt.plot(liquidate_ticks, self.prices[liquidate_ticks], 'bo')
plt.suptitle(
"Total Reward: %.6f" % self.equity
)
def close(self):
plt.close()
def save_rendering(self, filepath):
plt.savefig(filepath)
def pause_rendering(self):
plt.show()
| 30.658416 | 105 | 0.605199 | 733 | 6,193 | 4.826739 | 0.185539 | 0.052855 | 0.072075 | 0.035613 | 0.224986 | 0.202374 | 0.143584 | 0.143584 | 0.091577 | 0.070096 | 0 | 0.005748 | 0.297756 | 6,193 | 201 | 106 | 30.810945 | 0.807772 | 0.011626 | 0 | 0.042254 | 0 | 0 | 0.009159 | 0 | 0 | 0 | 0 | 0.004975 | 0.007042 | 1 | 0.119718 | false | 0.007042 | 0.035211 | 0.007042 | 0.274648 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
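A minimal usage sketch for the environment above (the DataFrame is synthetic; only the 'Close' column that _process_data() reads is required):

import numpy as np
import pandas as pd

df = pd.DataFrame({'Close': np.linspace(100.0, 110.0, 200)})
env = TradingEnv(df, window_size=10, frame_bound=(10, 200), initial_capital=1000)

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()         # random policy, purely for demonstration
    obs, reward, done, info = env.step(action)
env.render_all()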
0e8aa17ce59ec8f69d9a9500e7ce2b80459e9286 | 5,031 | py | Python | main.py | ByronDev121/DQN-Hind-sight-reward-shaping | 3032883b397f4ed5f601fcfe6d6ad75e9981f858 | [
"MIT"
] | null | null | null | main.py | ByronDev121/DQN-Hind-sight-reward-shaping | 3032883b397f4ed5f601fcfe6d6ad75e9981f858 | [
"MIT"
] | 10 | 2020-01-28T23:13:46.000Z | 2022-03-12T00:10:39.000Z | main.py | ByronDev121/DQN-Hind-Sight-Reward-Shaping | 3032883b397f4ed5f601fcfe6d6ad75e9981f858 | [
"MIT"
] | null | null | null | import os
import random
import time
import numpy as np
import tensorflow as tf
import keras.backend.tensorflow_backend as backend
from threading import Thread
from tqdm import tqdm
from AirSim_Gym import Gym
from DQNAgent import DQNAgent
IM_WIDTH = 256
IM_HEIGHT = 90
MEMORY_FRACTION = 0.4
MIN_REWARD = 10
EPISODES = 10_000
epsilon = 1
EPSILON_DECAY = 0.9995
MIN_EPSILON = 0.001
AGGREGATE_STATS_EVERY = 10
MODEL_NAME = "custom-model"
if __name__ == '__main__':
FPS = 8
# For stats
ep_rewards = [-200]
ep_qs = [-50]
# For more repetitive results
random.seed(1)
np.random.seed(1)
tf.set_random_seed(1)
# Memory fraction, used mostly when training multiple agents
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = MEMORY_FRACTION
backend.set_session(tf.Session(config=config))
# Create models folder
if not os.path.isdir('models'):
os.makedirs('models')
# Create agent and environment
agent = DQNAgent()
env = Gym()
# Start training thread and wait for training to be initialized
trainer_thread = Thread(target=agent.train_in_loop, daemon=True)
trainer_thread.start()
while not agent.training_initialized:
time.sleep(0.01)
# Initialize predictions - the first prediction takes longer because of the initialization that has to be done,
# so it's better to make one prediction before we start iterating over episode steps
agent.get_qs(np.ones((IM_HEIGHT, IM_WIDTH, 3)))
# Iterate over episodes
for episode in tqdm(range(1, EPISODES + 1), ascii=True, unit='episodes'):
#try:
# Update tensorboard step every episode
agent.tensorboard.step = episode
# Restarting episode - reset episode reward and step number
episode_reward = 0
episode_qs = 0
step = 1
# Reset environment and get initial state
current_state = env.reset()
if current_state is None:
continue
# Reset flag and start iterating until episode ends
done = False
episode_start = time.time()
# Play for given number of seconds only
while True:
# This part stays mostly the same, the change is to query a model for Q values
if np.random.random() > epsilon:
# Get action from Q table
q = agent.get_qs(current_state)
episode_qs += max(q)
action = np.argmax(q)
else:
# Get random action
action = np.random.randint(0, 3)
# This takes no time, so we add a delay matching 60 FPS (prediction above takes longer)
time.sleep(1/FPS)
new_state, reward, done, _ = env.step(action)
if new_state is None:
break
# Transform new continous state to new discrete state and count reward
episode_reward += reward
# Every step we update replay memory
agent.update_replay_memory((current_state, action, reward, new_state, done))
current_state = new_state
step += 1
if done:
break
# Append episode reward to a list and log stats (every given number of episodes)
ep_rewards.append(episode_reward)
ep_qs.append(episode_qs)
if not episode % AGGREGATE_STATS_EVERY or episode == 1:
average_reward = sum(ep_rewards[-AGGREGATE_STATS_EVERY:])/len(ep_rewards[-AGGREGATE_STATS_EVERY:])
min_reward = min(ep_rewards[-AGGREGATE_STATS_EVERY:])
max_reward = max(ep_rewards[-AGGREGATE_STATS_EVERY:])
average_qs = sum(ep_qs[-AGGREGATE_STATS_EVERY:])/len(ep_qs[-AGGREGATE_STATS_EVERY:])
agent.tensorboard.update_stats(reward_avg=average_reward, reward_min=min_reward, reward_max=max_reward, avg_qs=average_qs, epsilon=epsilon)
if not episode % 1000:
agent.model.save(f'models/{MODEL_NAME}__{max_reward:_>7.2f}max_{average_reward:_>7.2f}avg_{min_reward:_>7.2f}min__{episode}.model')
# Save model, but only when min reward is greater or equal a set value
if min_reward >= MIN_REWARD:
agent.model.save(f'models/{MODEL_NAME}__{max_reward:_>7.2f}max_{average_reward:_>7.2f}avg_{min_reward:_>7.2f}min__{int(time.time())}.model')
# Decay epsilon
if epsilon > MIN_EPSILON:
epsilon *= EPSILON_DECAY
epsilon = max(MIN_EPSILON, epsilon)
# Set termination flag for training thread and wait for it to finish
agent.terminate = True
trainer_thread.join()
agent.model.save(f'models/{MODEL_NAME}__{max_reward:_>7.2f}max_{average_reward:_>7.2f}avg_{min_reward:_>7.2f}min__{int(time.time())}.model') | 35.935714 | 160 | 0.625124 | 654 | 5,031 | 4.59633 | 0.30581 | 0.026946 | 0.026946 | 0.030605 | 0.160013 | 0.089488 | 0.089488 | 0.089488 | 0.089488 | 0.089488 | 0 | 0.020633 | 0.29676 | 5,031 | 140 | 161 | 35.935714 | 0.828999 | 0.234347 | 0 | 0.047619 | 0 | 0.035714 | 0.101385 | 0.090933 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.119048 | 0 | 0.119048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e8c6329a572c5bd87e7b4f562f781c3c2d2d514 | 4,292 | py | Python | src/rhml_client/rhml_client/UI/helpers/add_new_model.py | PycT/RhythmicML | abf3eea273dcaa97b9308772c8054cfc60b77a4f | [
"Apache-2.0"
] | null | null | null | src/rhml_client/rhml_client/UI/helpers/add_new_model.py | PycT/RhythmicML | abf3eea273dcaa97b9308772c8054cfc60b77a4f | [
"Apache-2.0"
] | null | null | null | src/rhml_client/rhml_client/UI/helpers/add_new_model.py | PycT/RhythmicML | abf3eea273dcaa97b9308772c8054cfc60b77a4f | [
"Apache-2.0"
] | null | null | null | from rhythmic import rhythmicDB, faultReturnHandler;
from . import configuration, scanModelFolder;
from .packer import packFiles;
from os.path import exists, isdir as isDir, expanduser as expandUser;
from os import mkdir as makeDir;
from shutil import copyfile as copyFile;
from datetime import datetime;
from zipfile import ZipFile, ZIP_DEFLATED;
@faultReturnHandler
def addNewModel(model_name = None, model_dir = None):
"""
addNewModel(model_name = None, model_dir = None)
0. The path is checked to be already present in the database. If true, the according status returned, workflow stops.
1. Model wrapper script template copied to the model's directory.
2. The record containing model name, model path is added to `models_table`.
3. The record of version 0 is added to `versions_table`.
4. The model folder is scanned recursiveliy, adding all the files found to the `files_table` (absolute paths).
5. The `.rhml_storage` folder created within specified model directory.
6. The initial, 0-version archive is created.
"""
if model_dir == "~":
model_path = expandUser(model_dir);
else:
model_path = model_dir;
if ( (not model_name) or (not model_path) ):
return "Can't add the model: model name or model path is missing."
timestamp = str(datetime.now());
#####################################################33##########3#####
templateSource = configuration.model_wrapper_class_file_name;
templateDestination = "{}/{}".format(model_path, configuration.model_wrapper_class_file_name);
if not exists(templateDestination):
copyFile(templateSource, templateDestination);
#######################################################################33
#=================starting DB work =====================================
with rhythmicDB(configuration.db_name, configuration.db_file_name) as db:
probe = db.execute("SELECT model_name FROM models_table WHERE model_path = '{}'".format(model_path));
if len(probe) > 0:
return "The model [ {} ] stored in [ {} ] is already in the base.".format(probe[0][0], model_path);
new_model_id = db.execute(
"""
INSERT INTO models_table
(
model_name,
model_path,
last_version_timestamp
)
VALUES
(
'{}', '{}', '{}'
);
""".format(model_name, model_path, timestamp));
new_model_version_id = db.execute(
"""
INSERT INTO versions_table
(
model_id,
created_timestamp
)
VALUES
(
'{}', '{}'
);
""".format(new_model_id, timestamp));
files_record_request = \
"""
INSERT INTO files_table
(
model_version_id,
absolute_path,
last_modified_time
)
VALUES
""";
new_model_files = scanModelFolder(model_path);
if len(new_model_files) > 0:
files_record_values = "";
for item_path in new_model_files:
item = new_model_files[item_path];
files_record_values += "('{}', '{}', '{}'), \n".format(new_model_version_id, item_path, item["last_modified_time"]);
files_record_request += files_record_values[:len(files_record_values) -3] + ";"; #truncating `, \n` from the end of request and adding `;`.
db.execute(files_record_request);
#================= finished DB work =====================================
model_storage_path = model_path + "/{}".format(configuration.storage_folder_name);
if ( (not exists(model_storage_path)) or (not isDir(model_storage_path)) ):
makeDir(model_storage_path);
#================= Starting building ver0 .zip in storage =====================================
archive_name = model_storage_path + "/model_{}_ver0.zip".format(new_model_id);
packFiles(model_path, archive_name, new_model_files);
#================= Finished building ver0 .zip in storage =====================================
return "Success"; | 37.321739 | 151 | 0.573858 | 463 | 4,292 | 5.077754 | 0.280778 | 0.053594 | 0.027648 | 0.022969 | 0.101234 | 0.062952 | 0.030625 | 0 | 0 | 0 | 0 | 0.00719 | 0.25466 | 4,292 | 115 | 152 | 37.321739 | 0.727727 | 0.239049 | 0 | 0 | 0 | 0 | 0.10231 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022222 | false | 0 | 0.177778 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e8e46f2db92e108f88e268c875a0e70cb9b07a7 | 12,314 | py | Python | src/view/TICWidget.py | jpfeuffer/pyopenms-extra | b10c64356ccef147234f0adb08c40ddb043f2376 | [
"Zlib",
"Apache-2.0"
] | 26 | 2015-09-16T13:58:36.000Z | 2022-03-29T18:13:36.000Z | src/view/TICWidget.py | jpfeuffer/pyopenms-extra | b10c64356ccef147234f0adb08c40ddb043f2376 | [
"Zlib",
"Apache-2.0"
] | 118 | 2019-04-12T06:50:15.000Z | 2022-03-17T17:26:33.000Z | src/view/TICWidget.py | jpfeuffer/pyopenms-extra | b10c64356ccef147234f0adb08c40ddb043f2376 | [
"Zlib",
"Apache-2.0"
] | 30 | 2015-09-14T14:31:50.000Z | 2021-11-19T16:59:37.000Z | import numpy as np
import pyqtgraph as pg
from PyQt5.QtCore import pyqtSignal
from PyQt5.QtGui import QKeySequence
from PyQt5.QtWidgets import QShortcut
from pyqtgraph import PlotWidget
pg.setConfigOption("background", "w") # white background
pg.setConfigOption("foreground", "k") # black peaks
class TICWidget(PlotWidget):
"""
Used for creating a TIC plot
with dynamic zooming to avoid label collisions.
=============================== =========================================
**Signals:**
sigRTClicked                    Emitted when the user has clicked on the TIC
plot; returns the clicked RT value.
sigSeleRTRegionChangeFinished   Emitted when the user double-clicks
on a region in the TIC plot and creates a
region by dragging a horizontal line.
The signal returns the start and end
RT values within the region.
=============================== =========================================
"""
sigRTClicked = pyqtSignal(float, name="sigRTClicked")
sigSeleRTRegionChangeFinished = pyqtSignal(
float, float, name="sigRTRegionChangeFinished"
)
def __init__(self, parent=None, dpi=100):
PlotWidget.__init__(self)
self.setLimits(yMin=0, xMin=0)
self.setMouseEnabled(y=False)
self.setLabel("bottom", "RT (min)")
self.setLabel("left", "relative intensity (%)")
self._peak_labels = {}
self._existTIC = True
# numpy arrays for fast look-up
self._rts = np.array([])
self._ints = np.array([])
self._peak_indices = np.array([])
self._currentIntensitiesInRange = np.array([])
self._region = None
self.getViewBox().sigXRangeChanged.connect(self._autoscaleYAxis)
self.scene().sigMouseClicked.connect(self._clicked) # emits rt_clicked
# shortcut to init region
self.shortcut1 = QShortcut(QKeySequence("Ctrl+r"), self)
self.shortcut1.activated.connect(self._rgn_shortcut)
# in cases only MS2 spectra are given
def checkExistTIC(self):
if self._rts.size == 0:
self._existTIC = False
def setTIC(self, chromatogram):
"""
Used to set a new TIC with the given information (rts, ints)
:param chromatogram: data from the MSExperiment
"""
if self._peak_labels != {}:
self._clear_labels()
self._peak_labels = {}
self._chrom = chromatogram
self._rts, self._ints = self._chrom.get_peaks()
self.checkExistTIC()
if self._existTIC:
self._rts_in_min()
self._relative_ints()
self._peak_indices = self._find_Peak()
self._autoscaleYAxis()
self.redrawPlot()
def _rts_in_min(self):
self._rts = np.array([x / 60 for x in self._rts])
def _relative_ints(self):
maxInt = np.amax(self._ints)
self._ints = np.array([((x / maxInt) * 100) for x in self._ints])
def redrawPlot(self):
self.plot(clear=True)
self._plot_tic()
self._draw_peak_label()
def _autoscaleYAxis(self):
"""
Used to adjust the y axis to the maximal y value
within the current RT range. Also redraws peak labels
depending on the currently displayed RT values.
"""
x_range = self.getAxis("bottom").range
if x_range == [0, 1]: # workaround for axis sometimes not being set
x_range = [np.amin(self._rts), np.amax(self._rts)]
self.currMaxY = self._getMaxIntensityInRange(x_range)
if self.currMaxY:
self.setYRange(0, self.currMaxY, update=False)
self._redrawLabels()
def _getMaxIntensityInRange(self, xrange):
"""
:param xrange: A list of [min, max] bounding RT values.
:return: A float value representing the maximal
intensity in the current x range.
"""
left = np.searchsorted(self._rts, xrange[0], side="left")
right = np.searchsorted(self._rts, xrange[1], side="right")
self._currentIntensitiesInRange = self._ints[left:right]
return np.amax(self._ints[left:right], initial=1)
def _plot_tic(self):
plotgraph = pg.PlotDataItem(self._rts, self._ints)
self.addItem(plotgraph)
def _find_Peak(self):
"""
Calculates all indices of the intensity values that correspond to peaks.
This function operates on the principle that it compares intensity values
against each other until it finds a maximal turning point.
:return: A numpy array containing all peak indices,
sorted descending (max first -> min last).
"""
data = self._ints
maxIndices = np.zeros_like(data)
peakValue = -np.inf
for indx in range(0, len(data), 1):
if peakValue < data[indx]:
peakValue = data[indx]
for j in range(indx, len(data)):
if peakValue < data[j]:
break
elif peakValue == data[j]:
continue
elif peakValue > data[j]:
peakIndex = indx + np.floor(abs(indx - j) / 2)
# marking found index
maxIndices[peakIndex.astype(int)] = 1
indx = j
break
peakValue = data[indx]
maxIndices = np.where(maxIndices)[0]
# sort indices of high points from largest intensity to smallest
maxIndices = sorted(maxIndices, key=lambda x: data[x], reverse=True)
return maxIndices
def _add_label(self, label_id, label_text, pos_x, pos_y):
label = pg.TextItem(anchor=(0.5, 1))
label.setText(text="{0:.2f}".format(label_text), color=(0, 0, 0))
label.setPos(pos_x, pos_y)
self._peak_labels[label_id] = {"label": label}
self.addItem(label, ignoreBounds=True)
if self._label_clashes(label_id):
self._remove_label(label_id)
def _remove_label(self, label_id):
self.removeItem(self._peak_labels[label_id]["label"])
del self._peak_labels[label_id]
def _clear_labels(self):
for label_id in self._peak_labels.keys():
self.removeItem(self._peak_labels[label_id]["label"])
self._peak_labels = {}
def _label_clashes(self, label_id):
"""
Calculates a possible clash of the newly added label with other existing labels.
The clash is measured by the
collision of the label boundingRects,
which represent the labels' displayed scene positions.
:param label_id: Represents index of peak position in peak_indices.
:return: A boolean indicating if there is a clash or not.
"""
new_label = label_id
clash = False
# scaling the distance with the correct pixel size
pixel_width = self.getViewBox().viewPixelSize()[0]
limit_distance = 20.0 * pixel_width
if self._peak_labels == {}:
return False
for exist_label in list(self._peak_labels):
if exist_label != new_label:
new_label_rect =\
self._peak_labels[new_label]["label"].mapRectToDevice(
self._peak_labels[new_label]["label"].boundingRect()
)
exist_label_rect = self._peak_labels[exist_label][
"label"
].mapRectToDevice(
self._peak_labels[exist_label]["label"].boundingRect()
)
if not new_label_rect.intersects(exist_label_rect):
exist_label_X = self._peak_labels[exist_label]["label"].x()
new_label_X = self._peak_labels[new_label]["label"].x()
distance = abs(new_label_X - exist_label_X)
if distance < limit_distance:
clash = True
break
else:
clash = False
elif new_label_rect.intersects(exist_label_rect):
clash = True
break
else:
if len(self._peak_labels) == 1 and exist_label == new_label:
clash = False
return clash
def _draw_peak_label(self):
"""
Draws peak labels,
starting with the maximal peak and ending with the minimal peak.
On each addition possible label clashes are calculated;
if a clash occurs, the label is removed.
"""
if self._peak_labels == {}:
for index in self._peak_indices:
if self._ints[index] in self._currentIntensitiesInRange:
self._add_label(
index, self._rts[index],
self._rts[index],
self._ints[index]
)
def _redrawLabels(self):
self._clear_labels()
self._draw_peak_label()
def _clicked(self, event):
if self._existTIC:
pos = event.scenePos()
if self.sceneBoundingRect().contains(pos):
mouse_point = self.getViewBox().mapSceneToView(pos)
closest_datapoint_idx = self._calculate_closest_datapoint(
mouse_point.x()
)
self.sigRTClicked.emit(
self._rts[closest_datapoint_idx]
) # notify observers
# check the selected rt region and return the bounds
if self._region is not None:
self._region.sigRegionChangeFinished.connect(
self._rtRegionBounds)
def mouseDoubleClickEvent(self, event):
super(TICWidget, self).mouseDoubleClickEvent(event)
try:
mouse_point = self.getViewBox().mapSceneToView(event.pos())
closest_datapoint_idx = self._calculate_closest_datapoint(
mouse_point.x())
rgn_start = self._rts[closest_datapoint_idx]
if self._region is None:
region = pg.LinearRegionItem()
region.setRegion((rgn_start, rgn_start))
self._region = region
self.addItem(region, ignoreBounds=True)
# delete the region when double-clicking while hovering over it
self._delete_region()
except ValueError:
print("No TIC values to click on")
def _calculate_closest_datapoint(self, point_x):
"""
:param point_x: RT position of the mouse click
:return: index of the closest data point near a peak
"""
larger_idx = np.searchsorted(self._rts, point_x, side="left")
smaller_idx = 0
if larger_idx >= self._rts.size: # to avoid array out of bounds
larger_idx -= 1
if larger_idx > 0:
smaller_idx = larger_idx - 1
if abs(self._rts[larger_idx] - point_x) < \
abs(self._rts[smaller_idx] - point_x):
closest_datapoint_idx = larger_idx
else:
closest_datapoint_idx = smaller_idx
return closest_datapoint_idx
def _rtRegionBounds(self):
region_bounds = self._region.getRegion()
start_rg = region_bounds[0]
stop_rg_idx = self._calculate_closest_datapoint(region_bounds[1])
stop_rg = self._rts[stop_rg_idx]
# set the new region of interest
self._region.setRegion((start_rg, stop_rg))
self.sigSeleRTRegionChangeFinished.emit(
start_rg, stop_rg) # notify observers
def _delete_region(self):
if self._region.mouseHovering:
self.removeItem(self._region)
self._region = None
def _rgn_shortcut(self):
# position the mouse, then press the shortcut (Ctrl+r) -> create a region there
rgn_start = self.getViewBox().mapSceneToView(self.lastMousePos)
if self._region is None:
region = pg.LinearRegionItem()
region.setRegion((rgn_start, rgn_start))
self._region = region
self.addItem(region, ignoreBounds=True)
| 36.868263 | 79 | 0.576498 | 1,356 | 12,314 | 5.018437 | 0.244838 | 0.025863 | 0.039089 | 0.011168 | 0.164585 | 0.109331 | 0.076414 | 0.065834 | 0.054078 | 0.054078 | 0 | 0.005673 | 0.327189 | 12,314 | 333 | 80 | 36.978979 | 0.815691 | 0.207569 | 0 | 0.191589 | 0 | 0 | 0.02144 | 0.002667 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102804 | false | 0 | 0.028037 | 0 | 0.168224 | 0.004673 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e8fd7bc2fb00b4fc1aa8f81dd85b4cdf19abc5d | 2,517 | py | Python | tests.py | wpajunior/peakon-api | b5561fc706974c9ddc47850b1c974093d38e1684 | [
"MIT"
] | null | null | null | tests.py | wpajunior/peakon-api | b5561fc706974c9ddc47850b1c974093d38e1684 | [
"MIT"
] | null | null | null | tests.py | wpajunior/peakon-api | b5561fc706974c9ddc47850b1c974093d38e1684 | [
"MIT"
] | null | null | null | import unittest
from unittest.mock import call, patch
import peakon.client
class BaseTestCase(unittest.TestCase):
def setUp(self):
self.test_token = 'test_token'
self.client = peakon.client.Client(self.test_token)
@patch('requests.get')
class TestClient(BaseTestCase):
def test_get(self, requests_get):
test_path = 'engagement/drivers'
requests_get.return_value.json.return_value = {}
self.client.get(test_path)
self.assertEqual(requests_get.called, True)
(uri,), kwargs = requests_get.call_args
self.assertEqual(uri, self.client.api_base_url + test_path)
self.assertEqual(kwargs['stream'], False)
self.assertEqual(kwargs['headers']['Authorization'], 'Bearer ' + self.test_token)
def test_get_http_error(self, requests_get):
test_path = 'engagement/drivers'
requests_get.return_value.json.return_value = {'status': 500, 'message': 'Application Error' }
self.client.get(test_path)
def create_object_with_json_function(self):
# plain object() instances do not accept new attributes, so build a throwaway type
obj = type('Obj', (), {})()
obj.json = lambda: '{}'
return obj
@patch.object(peakon.client.Client, 'get')
class TestDrivers(BaseTestCase):
def test_find_by_context_no_optional_parameters(self, client_get):
test_segment = 'segment_123'
self.client.drivers.find_by_context(test_segment)
self.assertEqual(client_get.called, True)
(path,), _ = client_get.call_args
self.assertEqual(path, 'engagement/contexts/{}/drivers'.format(test_segment))
def test_find_by_context_all_optional_parameters(self, client_get):
test_segment = 'segment_123'
filter = 'filter[question.driver]=autonomy'
self.client.drivers.find_by_context(test_segment, filter, observations=True, participation=True, interval='all')
self.assertEqual(client_get.called, True)
(path,), _ = client_get.call_args
self.assertEqual(path, 'engagement/contexts/{}/drivers?{}&observations=true&participation=true&interval=all'.format(test_segment, filter))
@patch.object(peakon.client.Client, 'get')
class TestSegments(BaseTestCase):
def test_find_by_type_no_optional_parameters(self, client_get):
test_type = 'employee'
self.client.segments.find_by_type(test_type)
self.assertEqual(client_get.called, True)
(type,), _ = client_get.call_args
self.assertEqual(type, 'segments?filter[type]={}'.format(test_type))
if __name__ == '__main__':
unittest.main() | 35.450704 | 146 | 0.701629 | 306 | 2,517 | 5.48366 | 0.251634 | 0.069726 | 0.038737 | 0.050656 | 0.56615 | 0.479738 | 0.387962 | 0.32062 | 0.271752 | 0.209774 | 0 | 0.004356 | 0.179182 | 2,517 | 71 | 147 | 35.450704 | 0.807841 | 0 | 0 | 0.26 | 0 | 0 | 0.134631 | 0.067117 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.14 | false | 0 | 0.06 | 0 | 0.28 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e8fd98a4058d90598fa6b42ad3eed5acac3ffa5 | 32,140 | py | Python | src/icff.py | rangell/icff | a24b7b289219caef26d24e8cdbf37e24de49447a | [
"MIT"
] | null | null | null | src/icff.py | rangell/icff | a24b7b289219caef26d24e8cdbf37e24de49447a | [
"MIT"
] | null | null | null | src/icff.py | rangell/icff | a24b7b289219caef26d24e8cdbf37e24de49447a | [
"MIT"
] | null | null | null | import os
import copy
import time
import pickle
import random
import logging
import argparse
from collections import deque, defaultdict
from heapq import heappop, heappush, heapify
from functools import reduce
from itertools import product
from tqdm import tqdm, trange
import numpy as np
import scipy.sparse as sp
from scipy.sparse import csr_matrix, coo_matrix
from scipy.special import softmax
from sklearn.metrics import adjusted_rand_score as adj_rand
from sklearn.metrics import adjusted_mutual_info_score as adj_mi
from sklearn.preprocessing import normalize
import higra as hg
from ortools.sat.python import cp_model
from sparse_dot_mkl import dot_product_mkl
from assign import greedy_assign
from compat_func import raw_overlap, transformed_overlap
from data import gen_data
from sim_func import dot_prod, jaccard_sim, cos_sim
from tree_ops import constraint_compatible_nodes
from tree_node import TreeNode
from utils import (MIN_FLOAT,
MAX_FLOAT,
InvalidAgglomError,
initialize_exp,
sparse_agglom_rep,
get_nil_rep)
from IPython import embed
logger = logging.getLogger(__name__)
def greedy_level_set_assign(viable_placements, incompat_mx):
# Setup:
# - a queue for every constraint (based on its sorted list of viable placements)
# - a min-heap of proposed assignments for the not-yet-assigned constraints
# - arrays of the chosen constraints, their assigned nodes, and their scores
num_constraints = len(viable_placements)
running_vp = copy.deepcopy(viable_placements)
# note this is a min-heap so we negate the score
to_pick_heap = [(i, *d.popleft()) for i, d in enumerate(running_vp)]
to_pick_heap = [(-s, (c, t)) for c, s, t in to_pick_heap]
heapify(to_pick_heap)
picked_cidxs = np.array([], dtype=int)
picked_nidxs = np.array([], dtype=int)
picked_scores = np.array([], dtype=int)
valid_soln = True
while len(picked_cidxs) < num_constraints:
s, (c, n) = heappop(to_pick_heap)
if len(picked_cidxs) == 0:
picked_cidxs = np.append(picked_cidxs, int(c))
picked_nidxs = np.append(picked_nidxs, int(n))
picked_scores = np.append(picked_scores, -int(s))
continue
c_incompat_mask = incompat_mx[c, picked_cidxs]
if np.any(picked_nidxs[c_incompat_mask] == n):
# if incompatible do the following
try:
_s, _n = running_vp[c].popleft()
except IndexError: # constraint c has no remaining viable placements
valid_soln = False
break
c_next_best = (-_s, (c, _n))
heappush(to_pick_heap, c_next_best)
else:
picked_cidxs = np.append(picked_cidxs, int(c))
picked_nidxs = np.append(picked_nidxs, int(n))
picked_scores = np.append(picked_scores, -int(s))
return np.sum(picked_scores), valid_soln
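# Toy illustration (hypothetical numbers; each deque is sorted best-score-first,
# matching how custom_hac builds `viable_placements` below):
#   vp = [deque([(3.0, 0), (1.0, 2)]), deque([(2.0, 0)])]
#   incompat = np.array([[False, True], [True, False]])
#   score, ok = greedy_level_set_assign(vp, incompat)
#   # constraint 0 takes node 0; constraint 1 clashes with it there and has no
#   # fallback node, so `ok` comes back False.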
def custom_hac(opt, points, raw_points, constraints, incompat_mx, compat_func):
level_set = points.astype(float)
raw_level_set = raw_points.astype(float)
Xi = sp.vstack(constraints) if len(constraints) > 0 else None
uids = np.arange(level_set.shape[0])
num_leaves = np.ones_like(uids)
Z = []
num_points = points.shape[0]
num_constraints = Xi.shape[0] if Xi is not None else 0
# for computing best cut
best_cut_score = MIN_FLOAT
best_cut = copy.deepcopy(uids)
cluster_ids = np.arange(num_points)
intra_cluster_energies = np.ones_like(cluster_ids) * (1 / num_points)
raw_points_normd = normalize((raw_points > 0).astype(int), norm='l2', axis=1)
if num_constraints > 0:
constraint_scores = compat_func(
(raw_level_set > 0).astype(int),
Xi,
num_points if opt.super_compat_score else 1
)
prev_solns = {}
#level_set_normd = normalize(level_set, norm='l2', axis=1)
level_set_normd = normalize((level_set > 0).astype(int), norm='l2', axis=1)
sim_mx = dot_product_mkl(level_set_normd, level_set_normd.T, dense=True)
total_time_zone0 = 0
total_time_zone1 = 0
total_time_zone2 = 0
total_time_zone3 = 0
failed_aggloms = 0
invalid_cut = False
for _ in trange(level_set.shape[0]-1):
sim_mx[tuple([np.arange(level_set.shape[0])]*2)] = -np.inf # ignore diag
# get next agglomeration
while True:
agglom_coord = np.where(sim_mx == sim_mx.max())
agglom_coord = tuple(map(lambda x : x[0:1], agglom_coord))
agglom_ind = np.array(list(map(lambda x : x[0], agglom_coord)))
agglom_mask = np.zeros_like(uids, dtype=bool)
agglom_mask[agglom_ind] = True
if sim_mx[agglom_coord].item() > MIN_FLOAT:
try:
agglom_rep = sparse_agglom_rep(level_set[agglom_mask])
raw_agglom_rep = sparse_agglom_rep(
raw_level_set[agglom_mask]
)
except InvalidAgglomError:
failed_aggloms += 1
sim_mx[agglom_coord] = MIN_FLOAT
continue
else:
agglom_rep = get_nil_rep(rep_dim=level_set.shape[1])
raw_agglom_rep = get_nil_rep(rep_dim=raw_level_set.shape[1])
break
assert np.sum(agglom_mask) == 2
time0 = time.time()
# update data structures
linkage_score = sim_mx[agglom_coord]
not_agglom_mask = ~agglom_mask
agglom_num_leaves = sum([num_leaves[x] for x in agglom_ind])
invalid_cut = invalid_cut or (linkage_score <= MIN_FLOAT)
time1 = time.time()
# update linkage matrix
Z.append(
np.array(
[float(uids[agglom_ind[0]]),
float(uids[agglom_ind[1]]),
float(linkage_score),
float(agglom_num_leaves)]
)
)
# update level set
level_set = sp.vstack(
(level_set[not_agglom_mask], agglom_rep)
)
raw_level_set = sp.vstack(
(raw_level_set[not_agglom_mask], raw_agglom_rep)
)
time2 = time.time()
# update sim_mx
num_untouched = np.sum(not_agglom_mask)
sim_mx = sim_mx[not_agglom_mask[:,None] & not_agglom_mask[None,:]]
sim_mx = sim_mx.reshape(num_untouched, num_untouched)
sim_mx = np.concatenate(
(sim_mx, np.ones((1, num_untouched)) * -np.inf), axis=0
)
time3 = time.time()
#agglom_rep_normd = normalize(agglom_rep, norm='l2', axis=1)
agglom_rep_normd = normalize(
(agglom_rep > 0).astype(int), norm='l2', axis=1
)
raw_agglom_rep_normd = normalize(
(raw_agglom_rep > 0).astype(int), norm='l2', axis=1
)
level_set_normd = sp.vstack(
(level_set_normd[not_agglom_mask], agglom_rep_normd)
)
new_sims = dot_product_mkl(
level_set_normd, agglom_rep_normd.T, dense=True
)
sim_mx = np.concatenate((sim_mx, new_sims), axis=1)
time4 = time.time()
# update cluster_ids
next_uid = np.max(uids) + 1
new_cluster_mask = np.isin(cluster_ids, uids[agglom_mask])
cluster_ids[new_cluster_mask] = next_uid
# update uids list
uids = np.concatenate(
(uids[not_agglom_mask], np.array([next_uid]))
)
# update num_leaves list
num_leaves = np.concatenate(
(num_leaves[not_agglom_mask], np.array([agglom_num_leaves]))
)
# don't need to evaluate cut because constraints cannot be satisfied
if invalid_cut:
continue
# update intra cluster energies
agglom_energy = np.sum(
dot_product_mkl(
raw_points_normd[new_cluster_mask],
raw_agglom_rep_normd.T,
dense=True
)
)
agglom_energy /= num_points
intra_cluster_energies = np.concatenate(
(intra_cluster_energies[not_agglom_mask],
np.array([agglom_energy]))
)
# compute best assignment of constraints to level_set
assign_score = 0
if num_constraints > 0:
new_constraint_scores = compat_func(
(raw_level_set[-1] > 0).astype(int),
Xi,
num_points if opt.super_compat_score else 1
)
constraint_scores = np.concatenate(
(constraint_scores[not_agglom_mask], new_constraint_scores),
axis=0
)
node_idxs, constraint_idxs = np.where(constraint_scores > 0)
scores = constraint_scores[(node_idxs, constraint_idxs)]
viable_placements = [
sorted([(scores[i], node_idxs[i])
for i in np.where(constraint_idxs == cuid)[0]],
key=lambda x : x[0], reverse=True)
for cuid in range(num_constraints)
]
viable_placements = [deque(l) for l in viable_placements
if len(l) > 0]
if len(viable_placements) < num_constraints:
invalid_cut = True
continue
assign_score, valid_soln = greedy_level_set_assign(
viable_placements, incompat_mx
)
if not valid_soln:
invalid_cut = True
continue
if opt.compat_agg == 'avg':
assign_score /= num_constraints
cut_score = np.sum(intra_cluster_energies)\
- (opt.cost_per_cluster * intra_cluster_energies.size)\
+ assign_score
if cut_score >= best_cut_score:
#logger.debug((np.sum(intra_cluster_energies),
# opt.cost_per_cluster * intra_cluster_energies.size,
# assign_score))
best_cut_score = cut_score
best_cut = copy.deepcopy(uids)
total_time_zone0 += time1 - time0
total_time_zone1 += time2 - time1
total_time_zone2 += time3 - time2
total_time_zone3 += time4 - time3
# sanity check
assert level_set.shape[0] == 1
# return the linkage matrix
Z = np.vstack(Z)
logger.debug('Total time zone0: {}'.format(total_time_zone0))
logger.debug('Total time zone1: {}'.format(total_time_zone1))
logger.debug('Total time zone2: {}'.format(total_time_zone2))
logger.debug('Total time zone3: {}'.format(total_time_zone3))
logger.debug('Failed aggloms: {}'.format(failed_aggloms))
return Z, best_cut, best_cut_score
def cluster_points(opt,
leaf_nodes,
labels,
sim_func,
compat_func,
constraints,
cost_per_cluster):
# pull out all of the points
points = sp.vstack([x.transformed_rep for x in leaf_nodes])
raw_points = sp.vstack([x.raw_rep for x in leaf_nodes])
num_points = points.shape[0]
num_constraints = len(constraints)
# compute constraint incompatibility matrix
incompat_mx = None
if len(constraints) > 0:
Xi = sp.vstack(constraints)
extreme_constraints = copy.deepcopy(Xi)
extreme_constraints.data *= np.inf
incompat_mx = dot_product_mkl(
extreme_constraints, extreme_constraints.T, dense=True
)
incompat_mx = (incompat_mx == -np.inf) | np.isnan(incompat_mx)
# run clustering and produce the linkage matrix
Z, best_cut, cut_obj_score = custom_hac(
opt, points, raw_points, constraints, incompat_mx, compat_func
)
# build the tree
logger.debug('Constructing the tree')
pred_tree_nodes = copy.copy(leaf_nodes) # shallow copy on purpose!
new_node_id = num_points
struct_node_list = np.arange(2*num_points - 1) # used for higra's dp
assert new_node_id == len(pred_tree_nodes)
for merge_idx, merger in enumerate(Z):
lchild, rchild, score = int(merger[0]), int(merger[1]), merger[2]
struct_node_list[lchild] = new_node_id
struct_node_list[rchild] = new_node_id
lc_rr = pred_tree_nodes[lchild].raw_rep
rc_rr = pred_tree_nodes[rchild].raw_rep
lc_tr = pred_tree_nodes[lchild].transformed_rep
rc_tr = pred_tree_nodes[rchild].transformed_rep
agglom_raw_rep = sparse_agglom_rep(sp.vstack((lc_rr, rc_rr)))
if score > MIN_FLOAT:
agglom_transformed_rep = sparse_agglom_rep(
sp.vstack((lc_tr, rc_tr))
)
else:
agglom_transformed_rep = get_nil_rep(rep_dim=lc_rr.shape[1])
pred_tree_nodes.append(
TreeNode(
new_node_id,
agglom_raw_rep,
transformed_rep=agglom_transformed_rep,
children=[pred_tree_nodes[lchild], pred_tree_nodes[rchild]]
)
)
new_node_id += 1
# maximally pure mergers
maximally_pure_mergers = [n for n in pred_tree_nodes
if n.label is not None and n.parent.label is None]
# extract cut frontier
cut_frontier_nodes = [pred_tree_nodes[i] for i in best_cut]
# the predicted entities canonicalization
pred_canon_ents = sp.vstack(
[n.raw_rep for n in cut_frontier_nodes]
)
# produce the predicted labels for leaves
pred_labels = np.zeros_like(labels)
for i, n in enumerate(cut_frontier_nodes):
for x in n.get_leaves():
pred_labels[x.uid] = i
# compute metrics
fits = int(np.sum([xi.size for xi in constraints]))
hg_tree = hg.Tree(struct_node_list)
dp = hg.dendrogram_purity(hg_tree, labels)
adj_rand_idx = adj_rand(pred_labels, labels)
adj_mut_info = adj_mi(pred_labels, labels)
metrics = {
'# constraints': len(constraints),
'fits' : fits,
'pred_k' : len(cut_frontier_nodes),
'dp' : round(dp, 4),
'# maximally_pure_mergers' : len(maximally_pure_mergers),
'adj_rand_idx' : round(adj_rand_idx, 4),
'adj_mut_info' : round(adj_mut_info, 4),
'cut_obj_score' : round(cut_obj_score, 4)
}
return pred_canon_ents, pred_labels, pred_tree_nodes, metrics
def gen_constraint_cheat(opt,
gold_entities,
pred_canon_ents,
pred_tree_nodes,
feat_freq,
constraints,
sim_func,
num_to_generate=1):
# maximally pure mergers
maximally_pure_mergers = [n for n in pred_tree_nodes
if n.label is not None and n.parent.label is None]
random.shuffle(maximally_pure_mergers)
pure_merger_iter = iter(maximally_pure_mergers)
# create constraints using maximally pure mergers
num_gen_constraints = 0
while num_gen_constraints < num_to_generate:
pure_merger = next(pure_merger_iter)
gold_ent_rep = gold_entities[pure_merger.label]
pm_transformed_rep = pure_merger.transformed_rep.astype(bool).astype(float)
par_transformed_rep = pure_merger.parent.transformed_rep.astype(bool).astype(float)
neg_feats = ((par_transformed_rep - gold_ent_rep) > 0).astype(float)
pos_feats = ((gold_ent_rep - pm_transformed_rep) > 0).astype(float)
if pos_feats.tocoo().col.size == 0:
continue
logger.debug('Generating constraint for node: {}'.format(pure_merger.uid))
in_idxs = pm_transformed_rep.tocoo().col
pos_idxs = np.random.choice(pos_feats.tocoo().col, size=(1,))
#pos_idxs = pos_feats.tocoo().col
#neg_idxs = np.random.choice(neg_feats.tocoo().col, size=(1,))
neg_idxs = neg_feats.tocoo().col
constraint_cols = np.concatenate((pos_idxs, in_idxs, neg_idxs), axis=0)
constraint_data = [1] * (pos_idxs.size + in_idxs.size)\
+ [-1] * neg_idxs.size
constraint_rows = np.zeros_like(constraint_cols)
new_constraint = csr_matrix(
(constraint_data, (constraint_rows, constraint_cols)),
shape=gold_ent_rep.shape,
dtype=float
)
constraints.append(new_constraint)
num_gen_constraints += 1
return constraints
def gen_constraint(opt,
gold_entities,
pred_canon_ents,
pred_tree_nodes,
feat_freq,
constraints,
sim_func,
num_to_generate=1):
totals = np.sum(feat_freq, axis=0)
feat_freq_normd = feat_freq / totals
for _ in trange(num_to_generate):
# oracle feedback generation in the form of there-exists constraints
pred_ent_idx = random.randint(0, pred_canon_ents.shape[0]-1)
ff_pred_ent = pred_canon_ents[pred_ent_idx]
feat_intersect = dot_product_mkl(
ff_pred_ent.multiply(gold_entities), ff_pred_ent.T, dense=True
).reshape(-1,)
is_ent_subsets = feat_intersect == np.sum(ff_pred_ent)
num_ent_subsets = np.sum(is_ent_subsets)
assert num_ent_subsets < 2
# get the tgt entity for this constraint
tgt_ent_idx = np.argmax(feat_intersect)
tgt_gold_ent = gold_entities[tgt_ent_idx]
while True:
# indices of features in predicted ent domain
ff_pred_ent_domain = ff_pred_ent.tocoo().col
# sample "pos"-features
pos_pred_domain = np.array(
list(
set(tgt_gold_ent.tocoo().col).difference(
set(ff_pred_ent_domain)
)
)
)
pos_idxs = np.array([])
if pos_pred_domain.size > 0:
pos_feat_dist = softmax(
feat_freq_normd[tgt_ent_idx, pos_pred_domain]
)
pos_idxs = np.random.choice(
pos_pred_domain,
size=min(opt.constraint_strength, pos_pred_domain.size),
replace=False,
p=pos_feat_dist
)
# sample "in"-features
in_pred_domain = ff_pred_ent.multiply(tgt_gold_ent).tocoo().col
in_feat_dist = softmax(
feat_freq_normd[tgt_ent_idx][in_pred_domain]
)
in_idxs = np.random.choice(
in_pred_domain,
size=min(2*opt.constraint_strength - pos_idxs.size,
in_pred_domain.size),
replace=False,
p=in_feat_dist
)
# sample "neg"-features
if num_ent_subsets == 0:
neg_pred_domain = np.array(
list(
set(ff_pred_ent_domain).difference(
set(in_pred_domain)
)
)
)
neg_feat_dist = softmax(
np.sum(feat_freq[:, neg_pred_domain], axis=0)
)
else:
not_gold_mask = ~tgt_gold_ent.toarray().reshape(-1,).astype(bool)
neg_pred_domain = np.where(not_gold_mask)[0]
corr_weights = np.sum(
feat_freq_normd[:, in_pred_domain], axis=1
)
neg_feat_dist = softmax(np.sum(
feat_freq_normd[:, not_gold_mask] * corr_weights.reshape(-1, 1),
axis=0
))
neg_idxs = np.random.choice(
neg_pred_domain,
size=min(opt.constraint_strength, neg_pred_domain.size),
replace=False,
p=neg_feat_dist
)
# create constraint
ff_constraint_cols = np.concatenate((pos_idxs, in_idxs, neg_idxs), axis=0)
ff_constraint_data = [1] * (pos_idxs.size + in_idxs.size)\
+ [-1] * neg_idxs.size
ff_constraint_rows = np.zeros_like(ff_constraint_cols)
ff_constraint = csr_matrix(
(ff_constraint_data, (ff_constraint_rows, ff_constraint_cols)),
shape=tgt_gold_ent.shape,
dtype=float
)
# check to make sure this constraint doesn't exist yet
already_exists = np.any([
(ff_constraint != xi).nnz == 0 for xi in constraints
])
if not already_exists:
break
# add constraint to running list
constraints.append(ff_constraint)
return constraints
def get_feat_freq(mentions, mention_labels):
counts_rows = []
for lbl_id in np.unique(mention_labels):
counts_rows.append(np.sum(mentions[mention_labels == lbl_id], axis=0))
counts = np.asarray(np.concatenate(counts_rows))
return counts
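# The result is a dense (num_labels x num_features) matrix: entry (i, j) is the
# summed value of feature j over all mentions carrying label i; gen_constraint()
# uses it to bias which features get sampled into oracle constraints.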
def get_assign_metrics(leaf2constraints, labels, gold_entities, constraints):
constraint2num_leaves = defaultdict(int)
num_wrong_assignments = 0
num_assignments = 0
for muids, cuids in leaf2constraints.items():
ent_id = labels[muids]
for cuid in cuids:
constraint2num_leaves[cuid] += 1
num_assignments += 1
if np.sum(np.abs(constraints[cuid]-gold_entities[ent_id]) == 2) > 0:
num_wrong_assignments += 1
num_lvs = list(constraint2num_leaves.values())
ret_str = "# lvs / constraint -- min: {} ; max: {} ; mean: {}\n"\
"frac wrong node assign : {} / {}".format(
np.min(num_lvs),
np.max(num_lvs),
np.mean(num_lvs),
num_wrong_assignments,
num_assignments
)
return ret_str
def run_mock_icff(opt,
gold_entities,
mentions,
mention_labels,
sim_func,
compat_func):
constraints = []
num_points = mentions.shape[0]
feat_freq = get_feat_freq(mentions, mention_labels)
# construct tree node objects for leaves
leaves = [TreeNode(i, m_rep, label=lbl)
for i, (m_rep, lbl) in enumerate(zip(mentions, mention_labels))]
## NOTE: JUST FOR TESTING
#constraints = [csr_matrix(2*ent - 1, dtype=float)
# for ent in gold_entities.toarray()]
#for i, xi in enumerate(constraints):
# first_compat_idx = np.where(mention_labels == i)[0][0]
# first_compat_mention = mentions[first_compat_idx]
# transformed_rep = sparse_agglom_rep(
# sp.vstack((first_compat_mention, xi))
# )
# leaves[i].transformed_rep = transformed_rep
for r in range(opt.max_rounds+1):
logger.debug('*** START - Clustering Points ***')
# cluster the points
out = cluster_points(
opt,
leaves,
mention_labels,
sim_func,
compat_func,
constraints,
opt.cost_per_cluster
)
pred_canon_ents, pred_labels, pred_tree_nodes, metrics = out
logger.debug('*** END - Clustering Points ***')
logger.info("round: {} - metrics: {}".format(r, metrics))
#if r == 2:
# embed()
# exit()
if metrics['adj_rand_idx'] == 1.0:
logger.info("perfect clustering reached in {} rounds".format(r))
break
# generate constraints every `iters` round
iters = 1
if r % iters == 0:
logger.debug('*** START - Generating Constraints ***')
# generate constraints and viable places given predictions
#constraints = gen_constraint(
# opt,
# gold_entities,
# pred_canon_ents,
# pred_tree_nodes,
# feat_freq,
# constraints,
# sim_func,
# num_to_generate=opt.num_constraints_per_round
#)
constraints = gen_constraint_cheat(
opt,
gold_entities,
pred_canon_ents,
pred_tree_nodes,
feat_freq,
constraints,
sim_func,
num_to_generate=opt.num_constraints_per_round
)
logger.debug('*** END - Generating Constraints ***')
## NOTE: JUST FOR TESTING
#constraints = [csr_matrix(2*ent - 1, dtype=float)
# for ent in gold_entities.toarray()]
logger.debug('*** START - Computing Viable Placements ***')
viable_placements = constraint_compatible_nodes(
opt, pred_tree_nodes, constraints, compat_func, num_points
)
logger.debug('*** END - Computing Viable Placements ***')
logger.debug('*** START - Assigning Constraints ***')
# solve structured prediction problem of jointly placing the constraints
placements_out = greedy_assign(
pred_tree_nodes,
constraints,
viable_placements,
)
logger.debug('*** END - Assigning Constraints ***')
logger.debug('*** START - Projecting Assigned Constraints ***')
# reset all leaf transformed_rep's
logger.debug('Reseting leaves')
for node in leaves:
node.transformed_rep = copy.deepcopy(node.raw_rep)
# transform the placements out to leaf2constraints
logger.debug('Expanding placements to leaves')
nuid2luids = {n.uid : [x.uid for x in n.get_leaves()]
for n in pred_tree_nodes}
leaf2constraints = defaultdict(set)
for nuid, cuids in placements_out.items():
for luid in nuid2luids[nuid]:
leaf2constraints[luid].update(cuids)
# get and display assignment metrics
assignment_metrics = get_assign_metrics(
leaf2constraints, mention_labels, gold_entities, constraints
)
logger.info('assign metrics : {}'.format(assignment_metrics))
# project resolved constraint placements to leaves
logger.debug('Projecting constraints to expanded placements')
for nuid, cuids in leaf2constraints.items():
reps = [pred_tree_nodes[nuid].transformed_rep]\
+ [constraints[cuid] for cuid in cuids]
pred_tree_nodes[nuid].transformed_rep = sparse_agglom_rep(
sp.vstack(reps)
)
logger.debug('*** END - Projecting Assigned Constraints ***')
embed()
exit()
def get_opt():
# TODO: add conditional opts (e.g. diff opts for synthetic vs. real data)
parser = argparse.ArgumentParser()
parser.add_argument('--seed', type=int, default=27,
help="random seed for initialization")
parser.add_argument('--debug', action='store_true',
help="Enables and disables certain opts for debugging")
parser.add_argument("--output_dir", default=None, type=str,
help="The output directory for this run.")
parser.add_argument("--data_dir", default=None, type=str,
help="The directory where data is stored.")
# real dataset
parser.add_argument("--data_file", default=None, type=str,
help="preprocessed data pickle file in data_dir")
# opts for building synthetic data
parser.add_argument('--num_entities', type=int, default=2,
help="number of entities to generate when generating"\
"synthetic data")
parser.add_argument('--num_mentions', type=int, default=10,
help="number of mentions to generate when generating"\
"synthetic data")
parser.add_argument('--data_dim', type=int, default=16,
help="number of possible features (i.e. dimension of"\
"vector representation of points")
parser.add_argument('--entity_noise_prob', type=float, default=0.2,
help="probability of noise features added to entity")
parser.add_argument('--mention_sample_prob', type=float, default=0.7,
help="proportion of entity features added to mention")
parser.add_argument('--cost_per_cluster', type=float, default=0.5,
help="proportion of entity features added to mention")
parser.add_argument('--cluster_obj_reps', type=str,
choices=['raw', 'transformed'], default='raw',
help="which reps to use in tree cutting")
parser.add_argument('--compat_agg', type=str,
choices=['avg', 'sum'], default='avg',
help="how to aggregate constraint compatibility in obj")
parser.add_argument('--sim_func', type=str,
choices=['cosine', 'jaccard'], default='cosine',
help="similarity function for clustering")
parser.add_argument('--compat_func', type=str,
choices=['raw', 'transformed'], default='raw',
help="compatibility function constraint satisfaction")
parser.add_argument('--constraint_strength', type=int, default=1,
help="1/2 the max number features in gen constraint")
parser.add_argument('--super_compat_score', action='store_true',
help="Enables super compatibility score when perfect")
parser.add_argument('--max_rounds', type=int, default=100,
help="number of rounds to generate feedback for")
parser.add_argument('--num_constraints_per_round', type=int, default=1,
help="number of constraints to generate per round")
opt = parser.parse_args()
# check to make sure there are no issues with the specified opts
check_opt(opt)
return opt
def check_opt(opt):
# TODO: do a bunch of checks on the options
pass
def set_sim_func(opt):
sim_func = None
if opt.sim_func == 'cosine':
sim_func = cos_sim
elif opt.sim_func == 'jaccard':
sim_func = jaccard_sim
assert sim_func is not None
return sim_func
def set_compat_func(opt):
compat_func = None
if opt.compat_func == 'raw':
compat_func = raw_overlap
elif opt.compat_func == 'transformed':
compat_func = transformed_overlap
assert compat_func is not None
return compat_func
def main():
# get command line options
opt = get_opt()
# initialize the experiment
initialize_exp(opt)
if opt.data_file is not None:
# get real data
with open('{}/{}'.format(opt.data_dir, opt.data_file), 'rb') as f:
gold_entities, mentions, mention_labels = pickle.load(f)
else:
# get or create the synthetic data
data_fname = '{}/synth_data-{}_{}_{}_{}_{}-{}.pkl'.format(
opt.data_dir,
opt.num_entities,
opt.num_mentions,
opt.data_dim,
opt.entity_noise_prob,
opt.mention_sample_prob,
opt.seed
)
if not os.path.exists(data_fname):
with open(data_fname, 'wb') as f:
gold_entities, mentions, mention_labels = gen_data(opt)
gold_entities = csr_matrix(gold_entities)
mentions = csr_matrix(mentions)
pickle.dump((gold_entities, mentions, mention_labels), f)
else:
with open(data_fname, 'rb') as f:
gold_entities, mentions, mention_labels = pickle.load(f)
# declare similarity and compatibility functions with function pointers
assert opt.sim_func == 'cosine' # TODO: support more sim funcs
sim_func = set_sim_func(opt)
compat_func = set_compat_func(opt)
# run the core function
run_mock_icff(
opt, gold_entities, mentions, mention_labels, sim_func, compat_func
)
if __name__ == '__main__':
main()
| 35.631929 | 91 | 0.592999 | 3,874 | 32,140 | 4.635261 | 0.134228 | 0.01292 | 0.016651 | 0.006627 | 0.280726 | 0.228156 | 0.173136 | 0.137941 | 0.124687 | 0.106421 | 0 | 0.008519 | 0.317019 | 32,140 | 901 | 92 | 35.671476 | 0.80953 | 0.102334 | 0 | 0.148773 | 0 | 0 | 0.076693 | 0.004382 | 0 | 0 | 0 | 0.00111 | 0.010736 | 1 | 0.019939 | false | 0.001534 | 0.046012 | 0 | 0.081288 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e97eb7e1b3ec742c986110b05607516ab6e085d | 475 | py | Python | libs/src/main/python/dlpx/virtualization/libs/_logging.py | SumoSourabh/virtualization-sdk | d1c06e7aeb8adf48243599871423922d642d2c10 | [
"Apache-2.0"
] | null | null | null | libs/src/main/python/dlpx/virtualization/libs/_logging.py | SumoSourabh/virtualization-sdk | d1c06e7aeb8adf48243599871423922d642d2c10 | [
"Apache-2.0"
] | null | null | null | libs/src/main/python/dlpx/virtualization/libs/_logging.py | SumoSourabh/virtualization-sdk | d1c06e7aeb8adf48243599871423922d642d2c10 | [
"Apache-2.0"
] | null | null | null | #
# Copyright (c) 2019, 2021 by Delphix. All rights reserved.
#
from logging import Handler
from dlpx.virtualization.libs import libs
from dlpx.virtualization.common.util import to_str
__all__ = [
"PlatformHandler"
]
class PlatformHandler(Handler):
"""
A logging handler that calls into the Virtualization Library.
"""
def emit(self, record):
msg = self.format(record)
msg = to_str(msg)
libs._log_request(msg, record.levelno)
| 20.652174 | 65 | 0.692632 | 59 | 475 | 5.440678 | 0.627119 | 0.049844 | 0.137072 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021448 | 0.214737 | 475 | 22 | 66 | 21.590909 | 0.839142 | 0.252632 | 0 | 0 | 0 | 0 | 0.04451 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e9e0a6a6f2012a14ed1273c3cdb8aceafe4caf8 | 15,457 | py | Python | utils/cuckoo_customized/utils/api.py | uvasrg/AutomaticallyEvadeClassifiers | 961e0ddb9b5e13033ea5692b369b6008e271c597 | [
"MIT"
] | 104 | 2016-01-21T13:38:23.000Z | 2021-06-25T02:15:17.000Z | utils/cuckoo_customized/utils/api.py | uvasrg/AutomaticallyEvadeClassifiers | 961e0ddb9b5e13033ea5692b369b6008e271c597 | [
"MIT"
] | 8 | 2015-12-30T15:33:57.000Z | 2020-09-25T04:50:58.000Z | utils/cuckoo_customized/utils/api.py | uvasrg/AutomaticallyEvadeClassifiers | 961e0ddb9b5e13033ea5692b369b6008e271c597 | [
"MIT"
] | 45 | 2016-02-14T03:15:16.000Z | 2021-09-26T03:02:07.000Z | #!/usr/bin/env python
# Copyright (C) 2010-2015 Cuckoo Foundation.
# This file is part of Cuckoo Sandbox - http://www.cuckoosandbox.org
# See the file 'docs/LICENSE' for copying permission.
import argparse
import json
import os
import socket
import sys
import tarfile
from datetime import datetime
from StringIO import StringIO
from zipfile import ZipFile, ZIP_STORED
try:
from bottle import route, run, request, hook, response, HTTPError
from bottle import default_app
except ImportError:
sys.exit("ERROR: Bottle.py library is missing")
sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__)), ".."))
from lib.cuckoo.common.constants import CUCKOO_VERSION, CUCKOO_ROOT
from lib.cuckoo.common.utils import store_temp_file, delete_folder
from lib.cuckoo.core.database import Database, TASK_RUNNING
# Global MongoDB collection pointer.
mongo_report = False
from lib.cuckoo.common.config import Config
cfg = Config("reporting")
if cfg.mongodb and cfg.mongodb.enabled:
from pymongo import MongoClient, ASCENDING
uri = cfg.mongodb.get("uri", "mongodb://localhost:27017/cuckoo")
mdb = cfg.mongodb.get("db", "cuckoo")
collection = MongoClient(uri)[mdb]['analysis']
mongo_report = True
# Global DB pointer.
db = Database()
def jsonize(data):
"""Converts data dict to JSON.
@param data: data dict
@return: JSON formatted data
"""
response.content_type = "application/json; charset=UTF-8"
return json.dumps(data, sort_keys=False, indent=4)
@hook("after_request")
def custom_headers():
"""Set some custom headers across all HTTP responses."""
response.headers["Server"] = "Machete Server"
response.headers["X-Content-Type-Options"] = "nosniff"
response.headers["X-Frame-Options"] = "DENY"
response.headers["X-XSS-Protection"] = "1; mode=block"
response.headers["Pragma"] = "no-cache"
response.headers["Cache-Control"] = "no-cache"
response.headers["Expires"] = "0"
@route("/tasks/check_reported/<sha1>", method="GET")
def check_reported(sha1):
doc = collection.find_one({'target.file.sha1': sha1})
if not doc:
return None
return jsonize(doc['info']['id'])
@route("/tasks/create/file", method="POST")
@route("/v1/tasks/create/file", method="POST")
def tasks_create_file():
response = {}
data = request.files.file
package = request.forms.get("package", "")
timeout = request.forms.get("timeout", "")
priority = request.forms.get("priority", 1)
options = request.forms.get("options", "")
machine = request.forms.get("machine", "")
platform = request.forms.get("platform", "")
tags = request.forms.get("tags", None)
custom = request.forms.get("custom", "")
memory = request.forms.get("memory", False)
clock = request.forms.get("clock", None)
if memory:
memory = True
enforce_timeout = request.forms.get("enforce_timeout", False)
if enforce_timeout:
enforce_timeout = True
temp_file_path = store_temp_file(data.file.read(), data.filename)
task_id = db.add_path(
file_path=temp_file_path,
package=package,
timeout=timeout,
priority=priority,
options=options,
machine=machine,
platform=platform,
tags=tags,
custom=custom,
memory=memory,
enforce_timeout=enforce_timeout,
clock=clock
)
response["task_id"] = task_id
return jsonize(response)
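# Example request (hypothetical file name; host/port come from the startup options below):
# curl -F file=@sample.exe http://localhost:8090/tasks/create/file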
@route("/tasks/create/url", method="POST")
@route("/v1/tasks/create/url", method="POST")
def tasks_create_url():
response = {}
url = request.forms.get("url")
package = request.forms.get("package", "")
timeout = request.forms.get("timeout", "")
priority = request.forms.get("priority", 1)
options = request.forms.get("options", "")
machine = request.forms.get("machine", "")
platform = request.forms.get("platform", "")
tags = request.forms.get("tags", None)
custom = request.forms.get("custom", "")
memory = request.forms.get("memory", False)
if memory:
memory = True
enforce_timeout = request.forms.get("enforce_timeout", False)
if enforce_timeout:
enforce_timeout = True
clock = request.forms.get("clock", None)
task_id = db.add_url(
url=url,
package=package,
timeout=timeout,
options=options,
priority=priority,
machine=machine,
platform=platform,
tags=tags,
custom=custom,
memory=memory,
enforce_timeout=enforce_timeout,
clock=clock
)
response["task_id"] = task_id
return jsonize(response)
@route("/tasks/list", method="GET")
@route("/v1/tasks/list", method="GET")
@route("/tasks/list/<limit:int>", method="GET")
@route("/v1/tasks/list/<limit:int>", method="GET")
@route("/tasks/list/<limit:int>/<offset:int>", method="GET")
@route("/v1/tasks/list/<limit:int>/<offset:int>", method="GET")
def tasks_list(limit=None, offset=None):
response = {}
response["tasks"] = []
completed_after = request.GET.get("completed_after")
if completed_after:
completed_after = datetime.fromtimestamp(int(completed_after))
status = request.GET.get("status")
for row in db.list_tasks(limit=limit, details=True, offset=offset,
completed_after=completed_after,
status=status, order_by="completed_on asc"):
task = row.to_dict()
task["guest"] = {}
if row.guest:
task["guest"] = row.guest.to_dict()
task["errors"] = []
for error in row.errors:
task["errors"].append(error.message)
task["sample"] = {}
if row.sample_id:
sample = db.view_sample(row.sample_id)
task["sample"] = sample.to_dict()
response["tasks"].append(task)
return jsonize(response)
@route("/tasks/view/<task_id:int>", method="GET")
@route("/v1/tasks/view/<task_id:int>", method="GET")
def tasks_view(task_id):
response = {}
task = db.view_task(task_id, details=True)
if task:
entry = task.to_dict()
entry["guest"] = {}
if task.guest:
entry["guest"] = task.guest.to_dict()
entry["errors"] = []
for error in task.errors:
entry["errors"].append(error.message)
entry["sample"] = {}
if task.sample_id:
sample = db.view_sample(task.sample_id)
entry["sample"] = sample.to_dict()
response["task"] = entry
else:
return HTTPError(404, "Task not found")
return jsonize(response)
@route("/tasks/reschedule/<task_id:int>", method="GET")
@route("/v1/tasks/reschedule/<task_id:int>", method="GET")
def tasks_reschedule(task_id):
response = {}
if not db.view_task(task_id):
return HTTPError(404, "There is no analysis with the specified ID")
new_task_id = db.reschedule(task_id)
if new_task_id:
response["status"] = "OK"
response["new_task_id"] = new_task_id
else:
return HTTPError(500, "An error occurred while trying to "
"reschedule the task")
return jsonize(response)
@route("/tasks/delete/<task_id:int>", method="GET")
@route("/v1/tasks/delete/<task_id:int>", method="GET")
def tasks_delete(task_id):
response = {}
task = db.view_task(task_id)
if task:
if task.status == TASK_RUNNING:
return HTTPError(500, "The task is currently being "
"processed, cannot delete")
if db.delete_task(task_id):
delete_folder(os.path.join(CUCKOO_ROOT, "storage",
"analyses", "%d" % task_id))
if mongo_report:
collection.delete_one({'info.id': task_id})
# TODO: Delete the elements of a report not in cuckoo.analysis.
response["status"] = "OK"
db.unlock_machine(task.machine)
else:
return HTTPError(500, "An error occurred while trying to "
"delete the task")
else:
return HTTPError(404, "Task not found")
return jsonize(response)
@route("/tasks/report/<task_id:int>", method="GET")
@route("/v1/tasks/report/<task_id:int>", method="GET")
@route("/tasks/report/<task_id:int>/<report_format>", method="GET")
@route("/v1/tasks/report/<task_id:int>/<report_format>", method="GET")
def tasks_report(task_id, report_format="json"):
formats = {
"json": "report.json",
"html": "report.html",
"maec": "report.maec-1.1.xml",
"metadata": "report.metadata.xml",
}
bz_formats = {
"all": {"type": "-", "files": ["memory.dmp"]},
"dropped": {"type": "+", "files": ["files"]},
}
tar_formats = {
"bz2": "w:bz2",
"gz": "w:gz",
"tar": "w",
}
if report_format.lower() in formats:
report_path = os.path.join(CUCKOO_ROOT, "storage", "analyses",
"%d" % task_id, "reports",
formats[report_format.lower()])
elif report_format.lower() in bz_formats:
bzf = bz_formats[report_format.lower()]
srcdir = os.path.join(CUCKOO_ROOT, "storage",
"analyses", "%d" % task_id)
s = StringIO()
# By default go for bz2 encoded tar files (for legacy reasons).
tarmode = tar_formats.get(request.get("tar"), "w:bz2")
tar = tarfile.open(fileobj=s, mode=tarmode)
for filedir in os.listdir(srcdir):
if bzf["type"] == "-" and filedir not in bzf["files"]:
tar.add(os.path.join(srcdir, filedir), arcname=filedir)
if bzf["type"] == "+" and filedir in bzf["files"]:
tar.add(os.path.join(srcdir, filedir), arcname=filedir)
tar.close()
response.content_type = "application/x-tar; charset=UTF-8"
return s.getvalue()
else:
return HTTPError(400, "Invalid report format")
if os.path.exists(report_path):
return open(report_path, "rb").read()
else:
return HTTPError(404, "Report not found")
@route("/files/view/md5/<md5>", method="GET")
@route("/v1/files/view/md5/<md5>", method="GET")
@route("/files/view/sha256/<sha256>", method="GET")
@route("/v1/files/view/sha256/<sha256>", method="GET")
@route("/files/view/id/<sample_id:int>", method="GET")
@route("/v1/files/view/id/<sample_id:int>", method="GET")
def files_view(md5=None, sha256=None, sample_id=None):
response = {}
if md5:
sample = db.find_sample(md5=md5)
elif sha256:
sample = db.find_sample(sha256=sha256)
elif sample_id:
sample = db.view_sample(sample_id)
else:
return HTTPError(400, "Invalid lookup term")
if sample:
response["sample"] = sample.to_dict()
else:
return HTTPError(404, "File not found")
return jsonize(response)
@route("/files/get/<sha256>", method="GET")
@route("/v1/files/get/<sha256>", method="GET")
def files_get(sha256):
file_path = os.path.join(CUCKOO_ROOT, "storage", "binaries", sha256)
if os.path.exists(file_path):
response.content_type = "application/octet-stream; charset=UTF-8"
return open(file_path, "rb").read()
else:
return HTTPError(404, "File not found")
@route("/pcap/get/<task_id:int>", method="GET")
@route("/v1/pcap/get/<task_id:int>", method="GET")
def pcap_get(task_id):
file_path = os.path.join(CUCKOO_ROOT, "storage", "analyses",
"%d" % task_id, "dump.pcap")
if os.path.exists(file_path):
response.content_type = "application/octet-stream; charset=UTF-8"
try:
return open(file_path, "rb").read()
except:
return HTTPError(500, "An error occurred while reading PCAP")
else:
return HTTPError(404, "File not found")
@route("/machines/list", method="GET")
@route("/v1/machines/list", method="GET")
def machines_list():
response = {}
machines = db.list_machines()
response["machines"] = []
for row in machines:
response["machines"].append(row.to_dict())
return jsonize(response)
@route("/cuckoo/status", method="GET")
@route("/v1/cuckoo/status", method="GET")
def cuckoo_status():
response = dict(
version=CUCKOO_VERSION,
hostname=socket.gethostname(),
machines=dict(
total=len(db.list_machines()),
available=db.count_machines_available()
),
tasks=dict(
total=db.count_tasks(),
pending=db.count_tasks("pending"),
running=db.count_tasks("running"),
completed=db.count_tasks("completed"),
reported=db.count_tasks("reported")
),
)
return jsonize(response)
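# Example (hypothetical; assumes the default bind address from the options below):
# curl http://localhost:8090/cuckoo/status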
@route("/machines/view/<name>", method="GET")
@route("/v1/machines/view/<name>", method="GET")
def machines_view(name=None):
response = {}
machine = db.view_machine(name=name)
if machine:
response["machine"] = machine.to_dict()
else:
return HTTPError(404, "Machine not found")
return jsonize(response)
@route("/tasks/screenshots/<task:int>", method="GET")
@route("/v1/tasks/screenshots/<task:int>", method="GET")
@route("/tasks/screenshots/<task:int>/<screenshot>", method="GET")
@route("/v1/tasks/screenshots/<task:int>/<screenshot>", method="GET")
def task_screenshots(task=0, screenshot=None):
folder_path = os.path.join(CUCKOO_ROOT, "storage", "analyses", str(task), "shots")
if os.path.exists(folder_path):
if screenshot:
screenshot_name = "{0}.jpg".format(screenshot)
screenshot_path = os.path.join(folder_path, screenshot_name)
if os.path.exists(screenshot_path):
# TODO: Add content disposition.
response.content_type = "image/jpeg"
return open(screenshot_path, "rb").read()
else:
return HTTPError(404, screenshot_path)
else:
zip_data = StringIO()
with ZipFile(zip_data, "w", ZIP_STORED) as zip_file:
for shot_name in os.listdir(folder_path):
zip_file.write(os.path.join(folder_path, shot_name), shot_name)
# TODO: Add content disposition.
response.content_type = "application/zip"
return zip_data.getvalue()
else:
return HTTPError(404, folder_path)
@route("/tasks/view_signatures/<task_id:int>", method="GET")
def view_signatures(task_id):
doc = collection.find_one({'info.id': task_id})
if not doc:
return None
sigs = doc['signatures']
return jsonize(sigs)
def ensure_analysis_id_index(col):
if 'info.id' not in col.index_information():
col.create_index([("info.id", ASCENDING)], name="info.id", unique=True)
application = default_app()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("-H", "--host", help="Host to bind the API server on", default="localhost", action="store", required=False)
parser.add_argument("-p", "--port", help="Port to bind the API server on", default=8090, action="store", required=False)
args = parser.parse_args()
# Ensure indexes on db.analysis
ensure_analysis_id_index(collection)
run(host=args.host, port=args.port) | 33.602174 | 131 | 0.618878 | 1,915 | 15,457 | 4.874674 | 0.162402 | 0.02571 | 0.035994 | 0.030852 | 0.438243 | 0.387359 | 0.340546 | 0.25849 | 0.209641 | 0.181253 | 0 | 0.012608 | 0.230316 | 15,457 | 460 | 132 | 33.602174 | 0.772043 | 0.037718 | 0 | 0.30563 | 0 | 0 | 0.195674 | 0.071626 | 0 | 0 | 0 | 0.002174 | 0 | 1 | 0.050938 | false | 0 | 0.045576 | 0 | 0.19571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0e9efacad3b0ff51bec1249b715d4eda1c1e68af | 5,962 | py | Python | pvtool/routes/_measurement.py | schmocker/pv-FHNW | 5066e0bc7ce76be5d1a930b50034c746b232a9f8 | [
"MIT"
] | 1 | 2019-10-31T13:34:12.000Z | 2019-10-31T13:34:12.000Z | pvtool/routes/_measurement.py | schmocker/pv-FHNW | 5066e0bc7ce76be5d1a930b50034c746b232a9f8 | [
"MIT"
] | 1 | 2019-05-27T13:03:25.000Z | 2019-05-27T13:03:25.000Z | pvtool/routes/_measurement.py | schmocker/pv-FHNW | 5066e0bc7ce76be5d1a930b50034c746b232a9f8 | [
"MIT"
] | null | null | null | """Overview of all measurements and linked functions such as uploading, removing, and single view of a measurement"""
import os
from werkzeug.utils import secure_filename
from flask import Blueprint, render_template, request, redirect, flash, g, current_app, url_for
from flask_login import current_user, login_required
from ..db import db, Measurement, PvModule, MeasurementValues
from ..forms import MeasurementForm
from ..file_upload import UPLOAD_FOLDER, allowed_file, process_data_file, InvalidFileType,\
process_multiple_measurements_file
from ._users import add_timestamp, requires_access_level
measurement_routes = Blueprint('measurement', __name__, template_folder='templates')
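# This blueprint is registered by the application factory elsewhere in the package; a minimal
# sketch of that wiring (the bare Flask() setup here is an assumption, not code from this file):
#
#   from flask import Flask
#   app = Flask(__name__)
#   app.register_blueprint(measurement_routes)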
@measurement_routes.route('/measurements')
def measurements():
"""Display all measurements as table with clickable individual measurements"""
measurements_for_displaying = db.session.query(Measurement).all()
return render_template('measurement/measurements.html', measurements=measurements_for_displaying)
@measurement_routes.route('/measurement')
def measurement():
"""Display a single measurement with link to removal, plot and returning to all measurements"""
try:
meas_id = request.args.get('id', type=int)
if meas_id is None:
            raise Exception('no valid id for measurement')
meas = db.session.query(Measurement).get(meas_id)
meas_values = db.session.query(MeasurementValues).filter(MeasurementValues.measurement_id == meas_id).all()
print(meas_values)
if meas is None:
raise Exception(f'no measurement with id {meas_id} exists')
return render_template('measurement/measurement.html', measurement=meas, measurement_values=meas_values)
except Exception as e:
flash(str(e), category='danger')
return redirect('measurements')
@measurement_routes.route('/measurement/remove')
@requires_access_level('Admin')
def remove_measurement():
"""Remove the individual measurement and its corresponding measurement values, does not affect the user"""
meas_id = request.args.get('id', type=int)
if meas_id is not None:
db.session.query(Measurement).filter(Measurement.id == meas_id).delete()
db.session.commit()
return redirect('/measurements')
@measurement_routes.route('/add_measurement', methods=['GET', 'POST'])
@login_required
def add_measurement():
"""Form to add measurement with populated pvmodules field"""
form = MeasurementForm()
modules = db.session.query(PvModule).all()
current_user_data = current_user.__dict__
user = {'students': current_user_data['student1'] + ', ' +
current_user_data['student2'] + ', ' +
current_user_data['student3'],
'meas_series': current_user_data['user_name']}
form.pv_modul.choices = []
# Every user can only insert one measurement
if db.session.query(Measurement).filter(Measurement.measurement_series == user['meas_series']).first() is not None\
and user['meas_series'] != 'admin':
print(db.session.query(Measurement).filter(Measurement.measurement_series == user['meas_series']).first())
flash('Sie haben bereits eine Messung hinzugefügt.', category='danger')
return redirect(url_for('measurement.measurements'))
# populate select field with available distinct modules
for module in modules:
if (module.model, str(module.manufacturer) + ' ' + str(module.model)) not in form.pv_modul.choices:
form.pv_modul.choices.append((module.model, str(module.manufacturer) + ' ' + str(module.model)))
if request.method == 'POST':
chosen_module = db.session.query(PvModule).filter(PvModule.model == form.pv_modul.data).first()
# noinspection PyArgumentList
new_measurement = Measurement(date=form.mess_datum.data,
measurement_series=user['meas_series'],
producer=user['students'],
)
# save file that was uploaded
# if form.validate_on_submit():
f = form.messungen.data
filename = secure_filename(f.filename)
if not allowed_file(filename):
flash('Ungültiges Dateiformat.', category='danger')
return redirect(url_for('measurement.measurements'))
f.save(os.path.join(UPLOAD_FOLDER, filename))
chosen_module.measurements.append(new_measurement)
try:
process_data_file(filename, new_measurement)
except InvalidFileType:
flash('Messung hochladen fehlgeschlagen!', category='danger')
return redirect(url_for('measurement.measurements'))
db.session.add(chosen_module)
db.session.commit()
add_timestamp()
flash('Messung erfolgreich hinzugefügt.', category='success')
return redirect(url_for('measurement.measurements'))
# flash current user
flash('Angemeldet als:', )
flash(current_user_data['user_name'], category='primary')
return render_template('measurement/add_measurement.html', form=form, user=user)
@measurement_routes.route('/add_measurements', methods=['GET', 'POST'])
@requires_access_level('Admin')
def add_measurements():
"""Form to add measurement from excel, multiple measurements possible"""
form = MeasurementForm()
if request.method == 'POST':
f = form.messungen.data
filename = secure_filename(f.filename)
if not allowed_file(filename):
flash('Ungültiges Dateiformat.', category='danger')
return redirect(url_for('measurement.measurements'))
path_to_file = os.path.join(UPLOAD_FOLDER, filename)
f.save(path_to_file)
process_multiple_measurements_file(filename)
return redirect(url_for('measurement.measurements'))
return render_template('measurement/add_measurements.html', form=form)
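# Rough sketch of exercising the admin upload route via Flask's test client. The `create_app`
# factory, the login step and the .xlsx filename are assumptions; the `messungen` field name is
# taken from MeasurementForm as used above, and CSRF would have to be disabled or supplied:
#
#   with create_app().test_client() as client:
#       # ... log in as an Admin user first ...
#       client.post('/add_measurements',
#                   data={'messungen': (open('messungen.xlsx', 'rb'), 'messungen.xlsx')},
#                   content_type='multipart/form-data')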
| 43.202899 | 119 | 0.69423 | 680 | 5,962 | 5.902941 | 0.260294 | 0.024664 | 0.027902 | 0.029895 | 0.318635 | 0.269058 | 0.187344 | 0.187344 | 0.136024 | 0.136024 | 0 | 0.000627 | 0.197249 | 5,962 | 137 | 120 | 43.518248 | 0.838069 | 0.116739 | 0 | 0.268041 | 0 | 0 | 0.157744 | 0.05086 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051546 | false | 0 | 0.082474 | 0 | 0.257732 | 0.041237 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0ea017270444e6fb1ac30020d4282d6fc8b2ffb8 | 5,484 | py | Python | hoopa/commands/create.py | fishtn/hoopa | 1742097c76b4ad4880bd22b87ee89be8490e2b24 | [
"Apache-2.0"
] | 9 | 2021-04-12T03:21:11.000Z | 2022-01-06T07:51:11.000Z | hoopa/commands/create.py | fishtn/hoopa | 1742097c76b4ad4880bd22b87ee89be8490e2b24 | [
"Apache-2.0"
] | 3 | 2021-04-14T06:58:00.000Z | 2021-06-17T03:25:34.000Z | hoopa/commands/create.py | fishtn/hoopa | 1742097c76b4ad4880bd22b87ee89be8490e2b24 | [
"Apache-2.0"
] | 3 | 2021-04-20T09:03:51.000Z | 2022-01-06T07:51:19.000Z | import argparse
import os
import re
import shutil
class FullAction(argparse.Action):
def __init__(self, option_strings, dest, default=None, required=False, help=None, metavar=None):
super(FullAction, self).__init__(option_strings=option_strings, dest=dest, nargs=0, const=True,
default=default, required=required, help=help)
def __call__(self, parser, namespace, values, option_string=None):
        # namespace.spider is None when --spider was not supplied; a truthiness check avoids len(None).
        if namespace.spider:
setattr(namespace, self.dest, self.const)
else:
            parser.error('--full must be used together with --spider')
class CreateCommand:
parser = None
def add_arguments(self):
        parser = argparse.ArgumentParser(description="create a hoopa spider or project")
# subparsers = parser.add_subparsers(help='sub-command help')
# parser_create = subparsers.add_parser('create', help='add help')
group = parser.add_mutually_exclusive_group()
group.add_argument(
"-s", "--spider", nargs=1, help="创建爬虫 如 hoopa create -s <spider_name>"
)
group.add_argument(
"-p", "--project", nargs=1, help="创建项目 如 hoopa create -p <project_name>"
)
parser.add_argument(
"-f", "--full", help="full spider 如 hoopa create -s <spider_name> -f", default=False, action=FullAction
)
self.parser = parser
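    # Illustrative invocations for the argument set defined above (assuming a `hoopa create`
    # entry point dispatches here, as the help strings suggest):
    #
    #   hoopa create -s my_spider        # plain spider template
    #   hoopa create -s my_spider -f     # full spider template
    #   hoopa create -p my_project       # project skeleton plus a project spider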
def create_spider(self, template_path, spider_name, project=None):
spider_class_name = spider_name
if spider_class_name.islower():
spider_class_name = spider_class_name.title().replace("_", "")
with open(template_path, "r", encoding="utf-8") as file:
spider_template = file.read()
spider_template = spider_template.replace("${spider_name}", spider_name)
spider_template = spider_template.replace("${spider_class_name}", spider_class_name)
result = self._save_spider_to_file(spider_template, spider_name, project)
return result
@staticmethod
def copy_callback(src, dst, *, follow_symlinks=True):
if src.endswith(".py"):
with open(src, "r", encoding="utf-8") as src_file, open(
dst, "w", encoding="utf8"
) as dst_file:
content = src_file.read()
dst_file.write(content)
else:
shutil.copy2(src, dst, follow_symlinks=follow_symlinks)
def create_project(self, template_path, project_name):
if os.path.exists(project_name):
print("%s 项目已经存在" % project_name)
return
shutil.copytree(template_path, project_name, copy_function=self.copy_callback)
template_path = self._find_template("project_spider")
getattr(self, f"create_spider")(template_path, project_name, project_name)
return True
def run_cmd(self):
args = self.parser.parse_args()
full = args.full
if args.spider:
spider_name = args.spider[0]
create_type = "spider"
elif args.project:
spider_name = args.project[0]
create_type = "project"
        else:
            # A bare ``raise`` with no active exception would crash here; report a usage error instead.
            self.parser.error("either --spider or --project is required")
        # Validate spider_name: it must start with a letter and contain only letters, digits and underscores.
if not re.search("^[a-zA-Z][a-zA-Z0-9_]*$", spider_name):
raise Exception("命名不规范,请用下划线命名或驼峰命名方式")
template_path = self._find_template(create_type, full)
result = getattr(self, f"create_{create_type}")(template_path, spider_name)
if result:
print(f"{create_type} {spider_name} create success")
@staticmethod
def _find_template(template, full=False):
if template == "spider":
if full:
template_file = "spider_full_template.tmpl"
else:
template_file = "spider_template.tmpl"
else:
if template == "project_spider":
template_file = "project_spider_template.tmpl"
else:
template_file = "project_template"
template_path = os.path.abspath(
os.path.join(__file__, "../../templates", template_file)
)
if os.path.exists(template_path):
return template_path
print("Unable to find template: %s\n" % template)
def _save_spider_to_file(self, spider, spider_name, project_name):
spider_underline = self._cover_to_underline(spider_name)
if project_name:
spider_file = f"{project_name}/{spider_underline}.py"
else:
spider_file = f"{spider_underline}.py"
if os.path.exists(spider_file):
print("文件已存在")
return
with open(spider_file, "w", encoding="utf-8") as file:
file.write(spider)
return True
@staticmethod
def _cover_to_underline(key):
regex = "[A-Z]*"
capitals = re.findall(regex, key)
if capitals:
for pos, capital in enumerate(capitals):
if not capital:
continue
if pos == 0:
if len(capital) > 1:
key = key.replace(capital, capital.lower() + "_", 1)
else:
key = key.replace(capital, capital.lower(), 1)
else:
if len(capital) > 1:
key = key.replace(capital, "_" + capital.lower() + "_", 1)
else:
key = key.replace(capital, "_" + capital.lower(), 1)
return key
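# Worked examples for _cover_to_underline above, traced by hand from the code rather than taken
# from the project's tests: "MySpider" -> "my_spider", "MySpiderName" -> "my_spider_name"; a run
# of capitals is treated as one block, so "ABCSpider" becomes "abcs_pider".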
| 34.929936 | 115 | 0.585522 | 619 | 5,484 | 4.943457 | 0.227787 | 0.045752 | 0.029412 | 0.026144 | 0.168627 | 0.115033 | 0.055556 | 0.055556 | 0.054248 | 0.054248 | 0 | 0.005253 | 0.305799 | 5,484 | 156 | 116 | 35.153846 | 0.798529 | 0.025164 | 0 | 0.172131 | 0 | 0 | 0.115313 | 0.024897 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081967 | false | 0 | 0.032787 | 0 | 0.196721 | 0.032787 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0ea33c68ed30e43fd72509aa19f7a1dcf6695aa7 | 4,599 | py | Python | main.py | 050644zf/sccl | 29ad95717201e5b4fad1746372e03eff643994a8 | [
"MIT-0"
] | null | null | null | main.py | 050644zf/sccl | 29ad95717201e5b4fad1746372e03eff643994a8 | [
"MIT-0"
] | null | null | null | main.py | 050644zf/sccl | 29ad95717201e5b4fad1746372e03eff643994a8 | [
"MIT-0"
] | null | null | null | """
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved
Author: Dejiao Zhang (dejiaoz@amazon.com)
Date: 02/26/2021
"""
import sys
sys.path.append('./')
import torch
import argparse
from sentence_transformers import SentenceTransformer
from models.Transformers import SCCLBert
from learners.cluster import ClusterLearner
from dataloader.dataloader import augment_loader
from training import training
from utils.kmeans import get_kmeans_centers
from utils.logger import setup_path
from utils.randomness import set_global_random_seed
import os
os.environ['TOKENIZERS_PARALLELISM'] = 'false'
MODEL_CLASS = {
"distil": 'distilbert-base-nli-stsb-mean-tokens',
"robertabase": 'roberta-base-nli-stsb-mean-tokens',
"robertalarge": 'roberta-large-nli-stsb-mean-tokens',
"msmarco": 'distilroberta-base-msmarco-v2',
"xlm": "xlm-r-distilroberta-base-paraphrase-v1",
"bertlarge": 'bert-large-nli-stsb-mean-tokens',
"bertbase": 'bert-base-nli-stsb-mean-tokens',
"cn": 'data/distiluse-base-multilingual-cased-v1',
"cndl": 'distiluse-base-multilingual-cased-v1',
"cn2": 'data/bert-base-chinese',
"cn2dl": 'bert-base-chinese',
"cn3": 'data/distiluse-base-multilingual-cased-v2',
"cn3dl": 'distiluse-base-multilingual-cased-v2',
"cn4": 'data/paraphrase-multilingual-MiniLM-L12-v2',
"cn4dl": 'paraphrase-multilingual-MiniLM-L12-v2'
}
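# The --bert flag defined in get_args() below picks one of these keys; the "cn*" entries appear
# to point at local copies under data/, while the bare model names (e.g. the "*dl" variants)
# would be fetched by sentence-transformers. Illustrative use:
#
#   python main.py --bert distil ...   # loads 'distilbert-base-nli-stsb-mean-tokens'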
def run(args):
resPath, tensorboard = setup_path(args)
args.resPath, args.tensorboard = resPath, tensorboard
set_global_random_seed(args.seed)
# dataset loader
train_loader = augment_loader(args)
# model
torch.cuda.set_device(args.gpuid[0])
    # Configure the SentenceTransformer encoder selected via --bert
sbert = SentenceTransformer(MODEL_CLASS[args.bert])
    # Get the initial center of each cluster (k-means over SBERT embeddings of the training set)
cluster_centers = get_kmeans_centers(sbert, train_loader, args.num_classes)
model = SCCLBert(sbert, cluster_centers=cluster_centers, alpha=args.alpha)
model = model.cuda()
# optimizer
optimizer = torch.optim.Adam([
{'params':model.sentbert.parameters()},
{'params':model.head.parameters(), 'lr': args.lr*args.lr_scale},
{'params':model.cluster_centers, 'lr': args.lr*args.lr_scale}], lr=args.lr)
print(optimizer)
# set up the trainer
learner = ClusterLearner(model, optimizer, args.temperature, args.base_temperature)
training(train_loader, learner, args)
return None
def get_args(argv):
parser = argparse.ArgumentParser()
parser.add_argument('--gpuid', nargs="+", type=int, default=[0], help="The list of gpuid, ex:--gpuid 3 1. Negative value means cpu-only")
parser.add_argument('--seed', type=int, default=0, help="")
parser.add_argument('--print_freq', type=float, default=250, help="")
parser.add_argument('--result_path', type=str, default='./results/')
parser.add_argument('--bert', type=str, default='cn4', help="")
# Dataset
parser.add_argument('--dataset', type=str, default='bili', help="")
parser.add_argument('--datalen', type=int, default=100, help="")
parser.add_argument('--data_path', type=str, default='./data/')
parser.add_argument('--aug_path', type=str, default='augdata/p0.5/')
parser.add_argument('--dataname', type=str, default='searchsnippets.csv', help="")
parser.add_argument('--num_classes', type=int, default=8, help="")
parser.add_argument('--max_length', type=int, default=32)
# Learning parameters
parser.add_argument('--lr', type=float, default=1e-5, help="")
parser.add_argument('--lr_scale', type=int, default=100, help="")
parser.add_argument('--max_iter', type=int, default=10)
# contrastive learning
parser.add_argument('--batch_size', type=int, default=10)
parser.add_argument('--temperature', type=float, default=0.5, help="temperature required by contrastive loss")
parser.add_argument('--base_temperature', type=float, default=0.07, help="temperature required by contrastive loss")
# Clustering
parser.add_argument('--use_perturbation', action='store_true', help="")
parser.add_argument('--alpha', type=float, default=1.0)
args = parser.parse_args(argv)
#args = parser.parse_args('--result_path ./restest/searchsnippets/ --num_classes 8 --dataset bili --bert cn --alpha 1 --lr 1e-05 --lr_scale 100 --batch_size 10 --temperature 0.5 --base_temperature 0.07 --max_iter 10 --print_freq 250 --seed 0 --gpuid 0 '.split(' '))
args.use_gpu = args.gpuid[0] >= 0
args.resPath = None
args.tensorboard = None
return args
if __name__ == '__main__':
run(get_args(sys.argv[1:]))
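# Example launch matching the defaults above; the data/ and augdata/ paths are assumptions about
# the local setup, not guaranteed by this file:
#
#   python main.py --bert cn4 --dataset bili --dataname searchsnippets.csv \
#       --num_classes 8 --batch_size 10 --max_iter 10 --gpuid 0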
| 41.0625 | 269 | 0.701457 | 602 | 4,599 | 5.222591 | 0.317276 | 0.057252 | 0.108142 | 0.060115 | 0.198473 | 0.061705 | 0.024173 | 0.024173 | 0 | 0 | 0 | 0.021325 | 0.143509 | 4,599 | 111 | 270 | 41.432432 | 0.776847 | 0.116765 | 0 | 0 | 0 | 0 | 0.264095 | 0.125618 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025641 | false | 0 | 0.153846 | 0 | 0.205128 | 0.025641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |