# === pororo/tasks/collocation.py (repo: jayten42/pororo, license: Apache-2.0) ===
"""Collocation related modeling class"""
from typing import Optional
from pororo.tasks.utils.base import PororoFactoryBase, PororoSimpleBase
from pororo.tasks.utils.download_utils import download_or_load
class PororoCollocationFactory(PororoFactoryBase):
"""
Conduct collocation search using index file
English (`collocate.en`)
- dataset: enwiki-20180420
- metric: N/A
Korean (`kollocate`)
- dataset: kowiki-20200720
- metric: N/A
Chinese (`collocate.zh`)
- dataset: zhwiki-20180420
- metric: N/A
    Japanese (`collocate.ja`)
- dataset: jawiki-20180420
- metric: N/A
Args:
        text (str): input text for the collocation search
Returns:
        dict: collocation search results, split by part of speech
Examples:
>>> col = Pororo(task="col", lang="ko")
>>> col("먹")
먹 as verb
noun 것(39), 수(29), 음식(23), 등(16), 고기(14), ..
verb 하(33), 않(21), 살(17), 즐기(11), 굽(9), ..
adverb 많이(10), 주로(7), 다(5), 같이(4), 잘(4), ...
determiner 다른(5), 그(2), 여러(1), 세(1), 몇몇(1), 새(1)
adjective 싶(5), 어리(1), 편하(1), 작(1), 좋(1), 손쉽(1), 못하(1)
먹 as noun
noun 붓(3), 종이(2), 묘선(1), 청자(1), 은장도(1), 제조(1), ..
verb 의하(1), 그리(1), 찍(1), 차(1), 늘어놓(1)
adverb 하지만(1)
>>> col = Pororo(task="collocation", lang="ja")
>>> col("東京")
{'noun': {'noun': [('都', 137), ('家', 21), ('年', 18), ('府', 17), ('市', 12), ('式', 12), ('デザイナー', 10), ('日', 10), ('都立', 9), ('県', 9), ('出身', 8), ('証券', 8), ('後', 6)]}}
>>> col = Pororo(task="col", lang="en")
>>> col("george")
{'noun': {'noun': [('washington', 13), ('gen.', 7)]}}
>>> col = Pororo(task="col", lang="zh")
>>> col("世界杯")
{'noun': {'noun': [('2002年', 72), ('足球赛', 71), ('冠军', 53), ('2006年', 39), ('決賽', 35), ('决赛', 30), ('1998年', 26), ('外圍賽', 25), ('2010年', 23), ('2018年', 22), ('冠軍', 21), ...}}
"""
def __init__(self, task: str, lang: str, model: Optional[str]):
super().__init__(task, lang, model)
@staticmethod
def get_available_langs():
return ["ko", "en", "ja", "zh"]
@staticmethod
def get_available_models():
return {
"ko": ["kollocate"],
"en": ["collocate.en"],
"ja": ["collocate.ja"],
"zh": ["collocate.zh"],
}
def load(self, device: str):
"""
Load user-selected task-specific model
Args:
device (str): device information
Returns:
object: User-selected task-specific model
"""
if self.config.n_model == "kollocate":
try:
from kollocate import Kollocate
except ModuleNotFoundError as error:
raise error.__class__(
"Please install kollocate with: `pip install kollocate`")
model = Kollocate()
return PororoCollocate(model, self.config)
if "collocate" in self.config.n_model:
from pororo.models.collocate import Collocate
index_path = download_or_load(
f"misc/collocate.{self.config.lang}.zip",
self.config.lang,
)
model = Collocate(index_path)
return PororoCollocate(model, self.config)
class PororoCollocate(PororoSimpleBase):
def __init__(self, model, config):
super().__init__(config)
self._model = model
def predict(self, text: str, **kwargs) -> str:
"""
Conduct collocation search using index file
Args:
            text (str): input text for the collocation search
Returns:
            dict: collocation search results, split by part of speech
"""
return self._model(text)
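
# Usage sketch (not part of the original module; assumes `pip install pororo`
# and network access for the model/index downloads). It mirrors the docstring
# examples above.
if __name__ == "__main__":
    from pororo import Pororo

    col = Pororo(task="collocation", lang="en")
    print(col("george"))  # e.g. {'noun': {'noun': [('washington', 13), ('gen.', 7)]}}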

# === libretto/api/rest/serializers.py (repo: avorio/dezede, license: BSD-3-Clause) ===
from collections import OrderedDict
from rest_framework.fields import ReadOnlyField, Field
from rest_framework.relations import (
HyperlinkedIdentityField, StringRelatedField)
from rest_framework.reverse import reverse
from rest_framework.serializers import HyperlinkedModelSerializer
from ...models import *
class AncrageSpatioTemporelSerializer(Field):
def to_representation(self, obj):
d = OrderedDict()
for fieldname in sorted(obj.fields):
d[fieldname] = getattr(obj, fieldname)
lieu = d.get('lieu')
if lieu is not None:
d['lieu'] = reverse('lieu-detail', (lieu.pk,),
request=self.context.get('request'))
return d
class IndividuSerializer(HyperlinkedModelSerializer):
str = ReadOnlyField(source='__str__')
naissance = AncrageSpatioTemporelSerializer()
deces = AncrageSpatioTemporelSerializer()
professions = StringRelatedField(many=True)
front_url = HyperlinkedIdentityField(view_name='individu_detail',
lookup_field='slug')
class Meta(object):
model = Individu
fields = (
'id', 'str', 'nom', 'prenoms',
'naissance', 'deces',
'professions', 'parents',
'front_url', 'url'
)
class EnsembleSerializer(HyperlinkedModelSerializer):
str = ReadOnlyField(source='__str__')
type = StringRelatedField()
front_url = HyperlinkedIdentityField(view_name='ensemble_detail',
lookup_field='slug')
class Meta(object):
model = Ensemble
fields = (
'id', 'str', 'type', 'front_url', 'url'
)
class LieuSerializer(HyperlinkedModelSerializer):
str = ReadOnlyField(source='__str__')
nature = StringRelatedField()
front_url = HyperlinkedIdentityField(view_name='lieu_detail',
lookup_field='slug')
class Meta(object):
model = Lieu
fields = (
'id', 'str', 'nom', 'nature', 'parent', 'front_url', 'url'
)
class AuteurSerializer(HyperlinkedModelSerializer):
profession = StringRelatedField()
class Meta(object):
model = Auteur
fields = ('individu', 'profession')
class OeuvreSerializer(HyperlinkedModelSerializer):
str = ReadOnlyField(source='__str__')
titre_significatif = ReadOnlyField(source='get_titre_significatif')
titre_non_significatif = ReadOnlyField(source='get_titre_non_significatif')
description = ReadOnlyField(source='get_description')
genre = StringRelatedField()
auteurs = AuteurSerializer(many=True, read_only=True)
creation = AncrageSpatioTemporelSerializer()
front_url = HyperlinkedIdentityField(view_name='oeuvre_detail',
lookup_field='slug')
class Meta(object):
model = Oeuvre
fields = (
'id', 'str', 'extrait_de',
'titre_significatif', 'titre_non_significatif', 'description',
'genre', 'auteurs', 'creation',
'front_url', 'url'
)
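
# A minimal, self-contained sketch (not Django/DRF; `FakeAncrage` and its
# fields are invented for illustration) of the field-iteration pattern used
# by AncrageSpatioTemporelSerializer.to_representation above.
if __name__ == "__main__":
    class FakeAncrage(object):
        fields = ('date_approx', 'lieu')
        date_approx = 'c. 1830'
        lieu = None

    d = OrderedDict()
    for fieldname in sorted(FakeAncrage.fields):
        d[fieldname] = getattr(FakeAncrage, fieldname)
    print(d)  # OrderedDict([('date_approx', 'c. 1830'), ('lieu', None)])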

# === SMPyBandits/very_simple_configuration.py (repo: balbok0/SMPyBandits, license: MIT) ===
# -*- coding: utf-8 -*-
"""
An very simple configuration file to run some basic simulations about stationary multi-armed bandits.
"""
from Arms import *
from Environment import MAB
from Policies import *
# --- Parameters of the experiments
HORIZON = 30
REPETITIONS = 1
NB_ARMS = 5
ARM_TYPE = Bernoulli
# Like http://localhost/publis/tiny-d3-bandit-animation.git/index.html?T=30&MU=0.1,0.2,0.3,0.4,0.9
MEANS = [0.1, 0.2, 0.3, 0.4, 0.9]
#: This dictionary configures the experiments
configuration = {
# --- Duration of the experiment
"horizon": HORIZON,
# --- Number of repetition of the experiment (to have an average)
"repetitions": REPETITIONS,
# --- Parameters for the use of joblib.Parallel
"n_jobs": 1, # = nb of CPU cores
"verbosity": 6, # Max joblib verbosity
# --- Other parameters for the Evaluator
"finalRanksOnAverage": True, # Use an average instead of the last value for the final ranking of the tested players
"averageOn": 1e-3, # Average the final rank on the 1.% last time steps
# --- Should we plot the lower-bounds or not?
"plot_lowerbounds": False, # XXX Default
# --- Arms
"environment": [
{ # Use vector from command line
"arm_type": ARM_TYPE,
"params": MEANS
},
],
}
configuration.update({
"policies": [
# --- Full or partial knowledge algorithms
{ "archtype": TakeFixedArm, "params": { "armIndex": 0 }}, # Take worse arm!
{ "archtype": TakeFixedArm, "params": { "armIndex": 1 }}, # Take second worse arm!
{ "archtype": TakeFixedArm, "params": { "armIndex": 2 }}, # Take third worse arm!
{ "archtype": TakeFixedArm, "params": { "armIndex": 3 }}, # Take forth worse arm!
{ "archtype": TakeFixedArm, "params": { "armIndex": 4 }}, # Take fifth worse arm!
# --- Stupid algorithms
{
"archtype": Uniform, # The stupidest policy, fully uniform
"params": {}
},
# --- UCB algorithm
{
"archtype": UCB, # UCB with alpha=1 parameter
"params": {}
},
# --- Thompson algorithm
{
"archtype": Thompson,
"params": {}
},
# --- KL UCB algorithm
{
"archtype": klUCB,
"params": {}
},
# --- BESA algorithm
{
"archtype": BESA,
"params": {
"horizon": HORIZON,
}
},
# --- MOSS algorithm
{
"archtype": MOSS,
"params": {}
},
# --- Exp3++ algorithm
{
"archtype": Exp3PlusPlus,
"params": {}
},
]}
)
# DONE
print("Loaded experiments configuration from 'very_simple_configuration.py':")
print("configuration =", configuration) # DEBUG
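
# Hypothetical consumption sketch (not part of the original file): SMPyBandits
# instantiates each "policies" entry from its "archtype" class. Assuming every
# policy class accepts the number of arms as its first positional argument,
# the dispatch reduces to:
if __name__ == "__main__":
    for d in configuration["policies"]:
        policy = d["archtype"](NB_ARMS, **d["params"])
        print(policy)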

# === perfkitbenchmarker/linux_packages/glibc.py (repo: Nowasky/PerfKitBenchmarker, license: Apache-2.0) ===
# Copyright 2018 PerfKitBenchmarker Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Module containing Glibc Benchmark installation and cleanup functions."""
import posixpath
import re
from perfkitbenchmarker import errors
from perfkitbenchmarker import linux_packages
PACKAGE_NAME = 'glibc'
GLIBC_DIR = '%s/glibc' % linux_packages.INSTALL_DIR
GLIBC_VERSION = '2.31'
GLIBC_TAR = 'glibc-{}.tar.xz'.format(GLIBC_VERSION)
BINUTILS_DIR = '%s/binutils' % linux_packages.INSTALL_DIR
BINUTILS_VERSION = '2.34'
BINUTILS_TAR = 'binutils-{}.tar.gz'.format(BINUTILS_VERSION)
PREPROVISIONED_DATA = {
BINUTILS_TAR:
'53537d334820be13eeb8acb326d01c7c81418772d626715c7ae927a7d401cab3',
GLIBC_TAR:
'9246fe44f68feeec8c666bb87973d590ce0137cca145df014c72ec95be9ffd17'
}
PACKAGE_DATA_URL = {
BINUTILS_TAR: posixpath.join(
'https://ftp.gnu.org/gnu/binutils', BINUTILS_TAR),
GLIBC_TAR: posixpath.join(
'https://ftp.gnu.org/gnu/libc', GLIBC_TAR)
}
_GCC_VERSION_RE = re.compile(r'gcc\ version\ (.*?)\ ')
def GetGccVersion(vm):
"""Get the currently installed gcc version."""
_, stderr = vm.RemoteCommand('gcc -v')
match = _GCC_VERSION_RE.search(stderr)
if not match:
raise errors.Benchmarks.RunError('Invalid gcc version %s' % stderr)
return match.group(1)
def _Install(vm):
"""Installs the Glibc Benchmark package on the VM."""
# The included version of gcc-7.4 in Ubuntu 1804 does not work out of the box
# without gcc-snapshot.
vm.InstallPackages('gcc-snapshot')
GetGccVersion(vm)
vm.Install('build_tools')
# bison and texinfo are required for compiling newer versions of glibc > 2.27.
vm.InstallPackages('bison texinfo')
vm.RemoteCommand('cd {0} && mkdir binutils'.format(
linux_packages.INSTALL_DIR))
vm.InstallPreprovisionedPackageData(
PACKAGE_NAME, [BINUTILS_TAR], BINUTILS_DIR)
vm.RemoteCommand('cd {0} && tar xvf {1}'.format(BINUTILS_DIR, BINUTILS_TAR))
vm.RemoteCommand('cd {0} && mkdir binutils-build && '
'cd binutils-build/ && '
'../binutils-{1}/configure --prefix=/opt/binutils && '
'make -j 4 && sudo make install'.format(
BINUTILS_DIR, BINUTILS_VERSION))
vm.RemoteCommand('cd {0} && mkdir glibc'.format(linux_packages.INSTALL_DIR))
vm.InstallPreprovisionedPackageData(
PACKAGE_NAME, [GLIBC_TAR], GLIBC_DIR)
vm.RemoteCommand('cd {0} && tar xvf {1}'.format(GLIBC_DIR, GLIBC_TAR))
vm.RemoteCommand(
'cd {0} && mkdir glibc-build && cd glibc-build && '
'../glibc-{1}/configure --prefix=/usr/local/glibc --disable-profile '
'--enable-add-ons --with-headers=/usr/include '
'--with-binutils=/opt/binutils/bin && make && sudo make install'.format(
GLIBC_DIR, GLIBC_VERSION))
def YumInstall(vm):
"""Installs the Glibc Benchmark package on the VM."""
_Install(vm)
def AptInstall(vm):
"""Installs the Glibc Benchmark package on the VM."""
_Install(vm)
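
# A minimal, self-contained sketch of the version parsing done by
# GetGccVersion above; the sample stderr line is illustrative, not captured
# from a real VM.
if __name__ == '__main__':
    sample_stderr = 'gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04) \n'
    print(_GCC_VERSION_RE.search(sample_stderr).group(1))  # -> 7.5.0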

# === tests/test_common.py (repo: LaudateCorpus1/service-identity, license: MIT) ===
from __future__ import absolute_import, division, print_function
import ipaddress
import pickle
import pytest
import six
import service_identity._common
from service_identity._common import (
DNS_ID,
SRV_ID,
URI_ID,
DNSPattern,
IPAddress_ID,
IPAddressPattern,
ServiceMatch,
SRVPattern,
URIPattern,
_contains_instance_of,
_find_matches,
_hostname_matches,
_is_ip_address,
_validate_pattern,
verify_service_identity,
)
from service_identity.cryptography import extract_ids
from service_identity.exceptions import (
CertificateError,
DNSMismatch,
SRVMismatch,
VerificationError,
)
from .test_cryptography import CERT_EVERYTHING
from .util import DNS_IDS
try:
import idna
except ImportError:
idna = None
class TestVerifyServiceIdentity(object):
"""
Simple integration tests for verify_service_identity.
"""
def test_dns_id_success(self):
"""
Return pairs of certificate ids and service ids on matches.
"""
rv = verify_service_identity(
DNS_IDS, [DNS_ID(u"twistedmatrix.com")], []
)
assert [
ServiceMatch(
cert_pattern=DNSPattern(b"twistedmatrix.com"),
service_id=DNS_ID(u"twistedmatrix.com"),
)
] == rv
def test_integration_dns_id_fail(self):
"""
Raise VerificationError if no certificate id matches the supplied
service ids.
"""
i = DNS_ID(u"wrong.host")
with pytest.raises(VerificationError) as e:
verify_service_identity(
DNS_IDS, obligatory_ids=[i], optional_ids=[]
)
assert [DNSMismatch(mismatched_id=i)] == e.value.errors
def test_ip_address_success(self):
"""
IP addresses patterns are matched against IP address IDs.
"""
ip4 = ipaddress.ip_address(u"2.2.2.2")
ip6 = ipaddress.ip_address(u"2a00:1c38::53")
id4 = IPAddress_ID(six.text_type(ip4))
id6 = IPAddress_ID(six.text_type(ip6))
rv = verify_service_identity(
extract_ids(CERT_EVERYTHING), [id4, id6], []
)
assert [
ServiceMatch(id4, IPAddressPattern(ip4)),
ServiceMatch(id6, IPAddressPattern(ip6)),
] == rv
def test_obligatory_missing(self):
"""
Raise if everything matches but one of the obligatory IDs is missing.
"""
i = DNS_ID(u"example.net")
with pytest.raises(VerificationError) as e:
verify_service_identity(
[SRVPattern(b"_mail.example.net")],
obligatory_ids=[SRV_ID(u"_mail.example.net"), i],
optional_ids=[],
)
assert [DNSMismatch(mismatched_id=i)] == e.value.errors
def test_obligatory_mismatch(self):
"""
Raise if one of the obligatory IDs doesn't match.
"""
i = DNS_ID(u"example.net")
with pytest.raises(VerificationError) as e:
verify_service_identity(
[SRVPattern(b"_mail.example.net"), DNSPattern(b"example.com")],
obligatory_ids=[SRV_ID(u"_mail.example.net"), i],
optional_ids=[],
)
assert [DNSMismatch(mismatched_id=i)] == e.value.errors
def test_optional_missing(self):
"""
Optional IDs may miss as long as they don't conflict with an existing
pattern.
"""
p = DNSPattern(b"mail.foo.com")
i = DNS_ID(u"mail.foo.com")
rv = verify_service_identity(
[p], obligatory_ids=[i], optional_ids=[SRV_ID(u"_mail.foo.com")]
)
assert [ServiceMatch(cert_pattern=p, service_id=i)] == rv
def test_optional_mismatch(self):
"""
Raise VerificationError if an ID from optional_ids does not match
a pattern of respective type even if obligatory IDs match.
"""
i = SRV_ID(u"_xmpp.example.com")
with pytest.raises(VerificationError) as e:
verify_service_identity(
[DNSPattern(b"example.net"), SRVPattern(b"_mail.example.com")],
obligatory_ids=[DNS_ID(u"example.net")],
optional_ids=[i],
)
assert [SRVMismatch(mismatched_id=i)] == e.value.errors
def test_contains_optional_and_matches(self):
"""
If an optional ID is found, return the match within the returned
list and don't raise an error.
"""
p = SRVPattern(b"_mail.example.net")
i = SRV_ID(u"_mail.example.net")
rv = verify_service_identity(
[DNSPattern(b"example.net"), p],
obligatory_ids=[DNS_ID(u"example.net")],
optional_ids=[i],
)
assert ServiceMatch(cert_pattern=p, service_id=i) == rv[1]
class TestContainsInstance(object):
def test_positive(self):
"""
If the list contains an object of the type, return True.
"""
assert _contains_instance_of([object(), tuple(), object()], tuple)
def test_negative(self):
"""
If the list does not contain an object of the type, return False.
"""
assert not _contains_instance_of([object(), list(), {}], tuple)
class TestDNS_ID(object):
def test_enforces_unicode(self):
"""
        Raise TypeError if the passed DNS-ID is not unicode.
"""
with pytest.raises(TypeError):
DNS_ID(b"foo.com")
def test_handles_missing_idna(self, monkeypatch):
"""
Raise ImportError if idna is missing and a non-ASCII DNS-ID is passed.
"""
monkeypatch.setattr(service_identity._common, "idna", None)
with pytest.raises(ImportError):
DNS_ID(u"f\xf8\xf8.com")
def test_ascii_works_without_idna(self, monkeypatch):
"""
7bit-ASCII DNS-IDs work no matter whether idna is present or not.
"""
monkeypatch.setattr(service_identity._common, "idna", None)
dns = DNS_ID(u"foo.com")
assert b"foo.com" == dns.hostname
@pytest.mark.skipif(idna is None, reason="idna not installed")
def test_idna_used_if_available_on_non_ascii(self):
"""
If idna is installed and a non-ASCII DNS-ID is passed, encode it to
ASCII.
"""
dns = DNS_ID(u"f\xf8\xf8.com")
assert b"xn--f-5gaa.com" == dns.hostname
@pytest.mark.parametrize(
"invalid_id",
[
u" ",
u"", # empty strings
u"host,name", # invalid chars
u"192.168.0.0",
u"::1",
u"1234", # IP addresses
],
)
def test_catches_invalid_dns_ids(self, invalid_id):
"""
Raise ValueError on invalid DNS-IDs.
"""
with pytest.raises(ValueError):
DNS_ID(invalid_id)
def test_lowercases(self):
"""
The hostname is lowercased so it can be compared case-insensitively.
"""
dns_id = DNS_ID(u"hOsTnAmE")
assert b"hostname" == dns_id.hostname
def test_verifies_only_dns(self):
"""
If anything else than DNSPattern is passed to verify, return False.
"""
assert not DNS_ID(u"foo.com").verify(object())
def test_simple_match(self):
"""
Simple integration test with _hostname_matches with a match.
"""
assert DNS_ID(u"foo.com").verify(DNSPattern(b"foo.com"))
def test_simple_mismatch(self):
"""
Simple integration test with _hostname_matches with a mismatch.
"""
assert not DNS_ID(u"foo.com").verify(DNSPattern(b"bar.com"))
def test_matches(self):
"""
Valid matches return `True`.
"""
for cert, actual in [
(b"www.example.com", b"www.example.com"),
(b"*.example.com", b"www.example.com"),
]:
assert _hostname_matches(cert, actual)
def test_mismatches(self):
"""
Invalid matches return `False`.
"""
for cert, actual in [
(b"xxx.example.com", b"www.example.com"),
(b"*.example.com", b"baa.foo.example.com"),
(b"f*.example.com", b"baa.example.com"),
(b"*.bar.com", b"foo.baz.com"),
(b"*.bar.com", b"bar.com"),
(b"x*.example.com", b"xn--gtter-jua.example.com"),
(b"xxx*.example.com", b"xxxwww.example.com"),
(b"f*.example.com", b"foo.example.com"),
(b"*oo.bar.com", b"foo.bar.com"),
(b"fo*oo.bar.com", b"fooooo.bar.com"),
]:
assert not _hostname_matches(cert, actual)
class TestURI_ID(object):
def test_enforces_unicode(self):
"""
        Raise TypeError if the passed URI-ID is not unicode.
"""
with pytest.raises(TypeError):
URI_ID(b"sip:foo.com")
def test_create_DNS_ID(self):
"""
The hostname is converted into a DNS_ID object.
"""
uri_id = URI_ID(u"sip:foo.com")
assert DNS_ID(u"foo.com") == uri_id.dns_id
assert b"sip" == uri_id.protocol
def test_lowercases(self):
"""
The protocol is lowercased so it can be compared case-insensitively.
"""
uri_id = URI_ID(u"sIp:foo.com")
assert b"sip" == uri_id.protocol
def test_catches_missing_colon(self):
"""
Raise ValueError if there's no colon within a URI-ID.
"""
with pytest.raises(ValueError):
URI_ID(u"sip;foo.com")
def test_is_only_valid_for_uri(self):
"""
If anything else than an URIPattern is passed to verify, return
False.
"""
assert not URI_ID(u"sip:foo.com").verify(object())
def test_protocol_mismatch(self):
"""
If protocol doesn't match, verify returns False.
"""
assert not URI_ID(u"sip:foo.com").verify(URIPattern(b"xmpp:foo.com"))
def test_dns_mismatch(self):
"""
If the hostname doesn't match, verify returns False.
"""
assert not URI_ID(u"sip:bar.com").verify(URIPattern(b"sip:foo.com"))
def test_match(self):
"""
Accept legal matches.
"""
assert URI_ID(u"sip:foo.com").verify(URIPattern(b"sip:foo.com"))
class TestSRV_ID(object):
def test_enforces_unicode(self):
"""
        Raise TypeError if the passed SRV-ID is not unicode.
"""
with pytest.raises(TypeError):
SRV_ID(b"_mail.example.com")
def test_create_DNS_ID(self):
"""
The hostname is converted into a DNS_ID object.
"""
srv_id = SRV_ID(u"_mail.example.com")
assert DNS_ID(u"example.com") == srv_id.dns_id
def test_lowercases(self):
"""
The service name is lowercased so it can be compared
case-insensitively.
"""
srv_id = SRV_ID(u"_MaIl.foo.com")
assert b"mail" == srv_id.name
def test_catches_missing_dot(self):
"""
Raise ValueError if there's no dot within a SRV-ID.
"""
with pytest.raises(ValueError):
SRV_ID(u"_imapsfoocom")
def test_catches_missing_underscore(self):
"""
        Raise ValueError if the service name doesn't start with an underscore.
"""
with pytest.raises(ValueError):
SRV_ID(u"imaps.foo.com")
def test_is_only_valid_for_SRV(self):
"""
If anything else than an SRVPattern is passed to verify, return False.
"""
assert not SRV_ID(u"_mail.foo.com").verify(object())
def test_match(self):
"""
Accept legal matches.
"""
assert SRV_ID(u"_mail.foo.com").verify(SRVPattern(b"_mail.foo.com"))
@pytest.mark.skipif(idna is None, reason="idna not installed")
def test_match_idna(self):
"""
IDNAs are handled properly.
"""
assert SRV_ID(u"_mail.f\xf8\xf8.com").verify(
SRVPattern(b"_mail.xn--f-5gaa.com")
)
def test_mismatch_service_name(self):
"""
If the service name doesn't match, verify returns False.
"""
assert not (
SRV_ID(u"_mail.foo.com").verify(SRVPattern(b"_xmpp.foo.com"))
)
def test_mismatch_dns(self):
"""
If the dns_id doesn't match, verify returns False.
"""
assert not (
SRV_ID(u"_mail.foo.com").verify(SRVPattern(b"_mail.bar.com"))
)
class TestDNSPattern(object):
def test_enforces_bytes(self):
"""
Raise TypeError if unicode is passed.
"""
with pytest.raises(TypeError):
DNSPattern(u"foo.com")
def test_catches_empty(self):
"""
Empty DNS-IDs raise a :class:`CertificateError`.
"""
with pytest.raises(CertificateError):
DNSPattern(b" ")
def test_catches_NULL_bytes(self):
"""
Raise :class:`CertificateError` if a NULL byte is in the hostname.
"""
with pytest.raises(CertificateError):
DNSPattern(b"www.google.com\0nasty.h4x0r.com")
def test_catches_ip_address(self):
"""
IP addresses are invalid and raise a :class:`CertificateError`.
"""
with pytest.raises(CertificateError):
DNSPattern(b"192.168.0.0")
def test_invalid_wildcard(self):
"""
        Integration test with _validate_pattern: it catches double wildcards
        and is thus invoked whenever a wildcard is present.
"""
with pytest.raises(CertificateError):
DNSPattern(b"*.foo.*")
class TestURIPattern(object):
def test_enforces_bytes(self):
"""
Raise TypeError if unicode is passed.
"""
with pytest.raises(TypeError):
URIPattern(u"sip:foo.com")
def test_catches_missing_colon(self):
"""
Raise CertificateError if URI doesn't contain a `:`.
"""
with pytest.raises(CertificateError):
URIPattern(b"sip;foo.com")
def test_catches_wildcards(self):
"""
Raise CertificateError if URI contains a *.
"""
with pytest.raises(CertificateError):
URIPattern(b"sip:*.foo.com")
class TestSRVPattern(object):
def test_enforces_bytes(self):
"""
Raise TypeError if unicode is passed.
"""
with pytest.raises(TypeError):
SRVPattern(u"_mail.example.com")
def test_catches_missing_underscore(self):
"""
Raise CertificateError if SRV doesn't start with a `_`.
"""
with pytest.raises(CertificateError):
SRVPattern(b"foo.com")
def test_catches_wildcards(self):
"""
Raise CertificateError if SRV contains a *.
"""
with pytest.raises(CertificateError):
SRVPattern(b"sip:*.foo.com")
class TestValidateDNSWildcardPattern(object):
def test_allows_only_one_wildcard(self):
"""
Raise CertificateError on multiple wildcards.
"""
with pytest.raises(CertificateError):
_validate_pattern(b"*.*.com")
def test_wildcard_must_be_left_most(self):
"""
Raise CertificateError if wildcard is not in the left-most part.
"""
for hn in [b"foo.b*r.com", b"foo.bar.c*m", b"foo.*", b"foo.*.com"]:
with pytest.raises(CertificateError):
_validate_pattern(hn)
def test_must_have_at_least_three_parts(self):
"""
        Raise CertificateError if the host consists of fewer than three parts.
"""
for hn in [
b"*",
b"*.com",
b"*fail.com",
b"*foo",
b"foo*",
b"f*o",
b"*.example.",
]:
with pytest.raises(CertificateError):
_validate_pattern(hn)
def test_valid_patterns(self):
"""
Does not throw CertificateError on valid patterns.
"""
for pattern in [
b"*.bar.com",
b"*oo.bar.com",
b"f*.bar.com",
b"f*o.bar.com",
]:
_validate_pattern(pattern)
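
# Summary of the wildcard rules exercised above (as encoded by these tests):
# exactly one '*', only in the left-most label, and the pattern must span at
# least three dot-separated parts.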
class TestIPAddressPattern(object):
def test_invalid_ip(self):
"""
Raises CertificateError on invalid IP addresses.
"""
with pytest.raises(CertificateError):
IPAddressPattern.from_bytes(b"127.o.o.1")
@pytest.mark.parametrize("ip_s", [u"1.1.1.1", u"::1"])
def test_verify_equal(self, ip_s):
"""
Return True if IP addresses are identical.
"""
ip = ipaddress.ip_address(ip_s)
assert IPAddress_ID(ip).verify(IPAddressPattern(ip)) is True
class FakeCertID(object):
pass
class Fake_ID(object):
"""
    An ID that accepts exactly one object as its pattern.
"""
def __init__(self, pattern):
self._pattern = pattern
def verify(self, other):
"""
True iff other is the same object as pattern.
"""
return other is self._pattern
class TestFindMatches(object):
def test_one_match(self):
"""
If there's a match, return a tuple of the certificate id and the
service id.
"""
valid_cert_id = FakeCertID()
valid_id = Fake_ID(valid_cert_id)
rv = _find_matches(
[FakeCertID(), valid_cert_id, FakeCertID()], [valid_id]
)
assert [
ServiceMatch(cert_pattern=valid_cert_id, service_id=valid_id)
] == rv
def test_no_match(self):
"""
If no valid certificate ids are found, return an empty list.
"""
rv = _find_matches(
[FakeCertID(), FakeCertID(), FakeCertID()], [Fake_ID(object())]
)
assert [] == rv
def test_multiple_matches(self):
"""
Return all matches.
"""
valid_cert_id_1 = FakeCertID()
valid_cert_id_2 = FakeCertID()
valid_cert_id_3 = FakeCertID()
valid_id_1 = Fake_ID(valid_cert_id_1)
valid_id_2 = Fake_ID(valid_cert_id_2)
valid_id_3 = Fake_ID(valid_cert_id_3)
rv = _find_matches(
[
FakeCertID(),
valid_cert_id_1,
FakeCertID(),
valid_cert_id_3,
FakeCertID(),
valid_cert_id_2,
],
[valid_id_1, valid_id_2, valid_id_3],
)
assert [
ServiceMatch(cert_pattern=valid_cert_id_1, service_id=valid_id_1),
ServiceMatch(cert_pattern=valid_cert_id_2, service_id=valid_id_2),
ServiceMatch(cert_pattern=valid_cert_id_3, service_id=valid_id_3),
] == rv
class TestIsIPAddress(object):
@pytest.mark.parametrize(
"ip",
[
b"127.0.0.1",
u"127.0.0.1",
"172.16.254.12",
"*.0.0.1",
"::1",
"*::1",
"2001:0db8:0000:0000:0000:ff00:0042:8329",
"2001:0db8::ff00:0042:8329",
],
)
def test_ips(self, ip):
"""
Returns True for patterns and hosts that could match IP addresses.
"""
assert _is_ip_address(ip) is True
@pytest.mark.parametrize(
"not_ip",
[
b"*.twistedmatrix.com",
b"twistedmatrix.com",
b"mail.google.com",
b"omega7.de",
b"omega7",
b"127.\xff.0.1",
],
)
def test_not_ips(self, not_ip):
"""
Return False for patterns and hosts that aren't IP addresses.
"""
assert _is_ip_address(not_ip) is False
class TestVerificationError(object):
def test_repr_str(self):
"""
The __str__ and __repr__ methods return something helpful.
"""
try:
raise VerificationError(errors=["foo"])
except VerificationError as e:
assert repr(e) == str(e)
assert str(e) != ""
@pytest.mark.parametrize("proto", range(0, pickle.HIGHEST_PROTOCOL + 1))
@pytest.mark.parametrize(
"exc",
[
VerificationError(errors=[]),
VerificationError(errors=[DNSMismatch("example.com")]),
VerificationError([]),
VerificationError([DNSMismatch("example.com")]),
],
)
def test_pickle(self, exc, proto):
"""
Exceptions can be pickled and unpickled.
"""
new_exc = pickle.loads(pickle.dumps(exc, proto))
# Exceptions can't be compared.
assert exc.__class__ == new_exc.__class__
assert exc.__dict__ == new_exc.__dict__
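
# A minimal usage sketch (not one of the original tests; the pattern and ID
# values are illustrative) tying the pieces together the way
# TestVerifyServiceIdentity does above.
if __name__ == "__main__":
    matches = verify_service_identity(
        [DNSPattern(b"example.com")],
        obligatory_ids=[DNS_ID(u"example.com")],
        optional_ids=[],
    )
    print(matches)  # [ServiceMatch(cert_pattern=DNSPattern(...), service_id=DNS_ID(...))]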

# === models/retina_net.py (repo: jinhyun95/RegRCNN, license: Apache-2.0) ===
#!/usr/bin/env python
# Copyright 2019 Division of Medical Image Computing, German Cancer Research Center (DKFZ).
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Retina Net. According to https://arxiv.org/abs/1708.02002"""
import utils.model_utils as mutils
import utils.exp_utils as utils
import sys
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils
sys.path.append('..')
from custom_extensions.nms import nms
class Classifier(nn.Module):
def __init__(self, cf, conv):
"""
Builds the classifier sub-network.
"""
super(Classifier, self).__init__()
self.dim = conv.dim
self.n_classes = cf.head_classes
n_input_channels = cf.end_filts
n_features = cf.n_rpn_features
n_output_channels = cf.n_anchors_per_pos * cf.head_classes
anchor_stride = cf.rpn_anchor_stride
self.conv_1 = conv(n_input_channels, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_2 = conv(n_features, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_3 = conv(n_features, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_4 = conv(n_features, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_final = conv(n_features, n_output_channels, ks=3, stride=anchor_stride, pad=1, relu=None)
def forward(self, x):
"""
:param x: input feature map (b, in_c, y, x, (z))
:return: class_logits (b, n_anchors, n_classes)
"""
x = self.conv_1(x)
x = self.conv_2(x)
x = self.conv_3(x)
x = self.conv_4(x)
class_logits = self.conv_final(x)
axes = (0, 2, 3, 1) if self.dim == 2 else (0, 2, 3, 4, 1)
class_logits = class_logits.permute(*axes)
class_logits = class_logits.contiguous()
class_logits = class_logits.view(x.shape[0], -1, self.n_classes)
return [class_logits]
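
# Shape note (illustrative): for a 2D feature map of spatial size (y, x), the
# classifier head above emits (b, n_anchors_per_pos * n_classes, y, x); the
# permute/view turns this into (b, y * x * n_anchors_per_pos, n_classes),
# i.e. one row of class logits per anchor.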
class BBRegressor(nn.Module):
def __init__(self, cf, conv):
"""
Builds the bb-regression sub-network.
"""
super(BBRegressor, self).__init__()
self.dim = conv.dim
n_input_channels = cf.end_filts
n_features = cf.n_rpn_features
n_output_channels = cf.n_anchors_per_pos * self.dim * 2
anchor_stride = cf.rpn_anchor_stride
self.conv_1 = conv(n_input_channels, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_2 = conv(n_features, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_3 = conv(n_features, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_4 = conv(n_features, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_final = conv(n_features, n_output_channels, ks=3, stride=anchor_stride, pad=1, relu=None)
def forward(self, x):
"""
:param x: input feature map (b, in_c, y, x, (z))
:return: bb_logits (b, n_anchors, dim * 2)
"""
x = self.conv_1(x)
x = self.conv_2(x)
x = self.conv_3(x)
x = self.conv_4(x)
bb_logits = self.conv_final(x)
axes = (0, 2, 3, 1) if self.dim == 2 else (0, 2, 3, 4, 1)
bb_logits = bb_logits.permute(*axes)
bb_logits = bb_logits.contiguous()
bb_logits = bb_logits.view(x.shape[0], -1, self.dim * 2)
return [bb_logits]
class RoIRegressor(nn.Module):
def __init__(self, cf, conv, rg_feats):
"""
Builds the RoI-item-regression sub-network. Regression items can be, e.g., malignancy scores of tumors.
"""
super(RoIRegressor, self).__init__()
self.dim = conv.dim
n_input_channels = cf.end_filts
n_features = cf.n_rpn_features
self.rg_feats = rg_feats
n_output_channels = cf.n_anchors_per_pos * self.rg_feats
anchor_stride = cf.rpn_anchor_stride
self.conv_1 = conv(n_input_channels, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_2 = conv(n_features, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_3 = conv(n_features, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_4 = conv(n_features, n_features, ks=3, stride=anchor_stride, pad=1, relu=cf.relu, norm=cf.norm)
self.conv_final = conv(n_features, n_output_channels, ks=3, stride=anchor_stride,
pad=1, relu=None)
def forward(self, x):
"""
:param x: input feature map (b, in_c, y, x, (z))
:return: bb_logits (b, n_anchors, dim * 2)
"""
x = self.conv_1(x)
x = self.conv_2(x)
x = self.conv_3(x)
x = self.conv_4(x)
x = self.conv_final(x)
axes = (0, 2, 3, 1) if self.dim == 2 else (0, 2, 3, 4, 1)
x = x.permute(*axes)
x = x.contiguous()
x = x.view(x.shape[0], -1, self.rg_feats)
return [x]
############################################################
# Loss Functions
############################################################
#
def compute_class_loss(anchor_matches, class_pred_logits, shem_poolsize=20):
"""
:param anchor_matches: (n_anchors). [-1, 0, 1] for negative, neutral, and positive matched anchors.
:param class_pred_logits: (n_anchors, n_classes). logits from classifier sub-network.
:param shem_poolsize: int. factor of top-k candidates to draw from per negative sample (online-hard-example-mining).
:return: loss: torch tensor
:return: np_neg_ix: 1D array containing indices of the neg_roi_logits, which have been sampled for training.
"""
# Positive and Negative anchors contribute to the loss,
# but neutral anchors (match value = 0) don't.
pos_indices = torch.nonzero(anchor_matches > 0)
neg_indices = torch.nonzero(anchor_matches == -1)
    # get positive samples and calculate loss.
if not 0 in pos_indices.size():
pos_indices = pos_indices.squeeze(1)
roi_logits_pos = class_pred_logits[pos_indices]
targets_pos = anchor_matches[pos_indices].detach()
pos_loss = F.cross_entropy(roi_logits_pos, targets_pos.long())
else:
pos_loss = torch.FloatTensor([0]).cuda()
# get negative samples, such that the amount matches the number of positive samples, but at least 1.
# get high scoring negatives by applying online-hard-example-mining.
if not 0 in neg_indices.size():
neg_indices = neg_indices.squeeze(1)
roi_logits_neg = class_pred_logits[neg_indices]
negative_count = np.max((1, pos_indices.cpu().data.numpy().size))
roi_probs_neg = F.softmax(roi_logits_neg, dim=1)
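        # SHEM note (an assumption about mutils.shem's behavior, hedged): with
        # shem_poolsize=20 and, e.g., 3 positive anchors, the 3 negatives kept
        # for the loss are drawn from the 3 * 20 = 60 highest-scoring negative
        # candidates (stochastic/online hard example mining).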
neg_ix = mutils.shem(roi_probs_neg, negative_count, shem_poolsize)
neg_loss = F.cross_entropy(roi_logits_neg[neg_ix], torch.LongTensor([0] * neg_ix.shape[0]).cuda())
# return the indices of negative samples, who contributed to the loss for monitoring plots.
np_neg_ix = neg_ix.cpu().data.numpy()
else:
neg_loss = torch.FloatTensor([0]).cuda()
np_neg_ix = np.array([]).astype('int32')
loss = (pos_loss + neg_loss) / 2
return loss, np_neg_ix
def compute_bbox_loss(target_deltas, pred_deltas, anchor_matches):
"""
:param target_deltas: (b, n_positive_anchors, (dy, dx, (dz), log(dh), log(dw), (log(dd)))).
Uses 0 padding to fill in unused bbox deltas.
:param pred_deltas: predicted deltas from bbox regression head. (b, n_anchors, (dy, dx, (dz), log(dh), log(dw), (log(dd))))
:param anchor_matches: tensor (n_anchors). value in [-1, 0, class_ids] for negative, neutral, and positive matched anchors.
i.e., positively matched anchors are marked by class_id >0
:return: loss: torch 1D tensor.
"""
if not 0 in torch.nonzero(anchor_matches>0).shape:
indices = torch.nonzero(anchor_matches>0).squeeze(1)
# Pick bbox deltas that contribute to the loss
pred_deltas = pred_deltas[indices]
# Trim target bounding box deltas to the same length as pred_deltas.
target_deltas = target_deltas[:pred_deltas.shape[0], :].detach()
# Smooth L1 loss
loss = F.smooth_l1_loss(pred_deltas, target_deltas)
else:
loss = torch.FloatTensor([0]).cuda()
return loss
def compute_rg_loss(tasks, target, pred, anchor_matches):
"""
    :param target: regression targets for positively matched anchors (or bin
        labels in the 'regression_bin' case). Uses 0 padding to fill in unused entries.
    :param pred: predicted regression values from the RoI regressor head. (b, n_anchors, n_rg_feats)
    :param anchor_matches: (n_anchors). [-1, 0, 1] for negative, neutral, and positive matched anchors.
    :return: loss: torch 1D tensor.
"""
if not 0 in target.shape and not 0 in torch.nonzero(anchor_matches>0).shape:
indices = torch.nonzero(anchor_matches>0).squeeze(1)
# Pick rgs that contribute to the loss
pred = pred[indices]
# Trim target
target = target[:pred.shape[0]].detach()
if 'regression_bin' in tasks:
loss = F.cross_entropy(pred, target.long())
else:
loss = F.smooth_l1_loss(pred, target)
else:
loss = torch.FloatTensor([0]).cuda()
return loss
def compute_focal_class_loss(anchor_matches, class_pred_logits, gamma=2.):
""" Focal Loss FL = -(1-q)^g log(q) with q = pred class probability.
:param anchor_matches: (n_anchors). [-1, 0, class] for negative, neutral, and positive matched anchors.
:param class_pred_logits: (n_anchors, n_classes). logits from classifier sub-network.
:param gamma: g in above formula, good results with g=2 in original paper.
:return: loss: torch tensor
:return: focal loss
"""
# Positive and Negative anchors contribute to the loss,
# but neutral anchors (match value = 0) don't.
pos_indices = torch.nonzero(anchor_matches > 0).squeeze(-1) # dim=-1 instead of 1 or 0 to cover empty matches.
neg_indices = torch.nonzero(anchor_matches == -1).squeeze(-1)
target_classes = torch.cat( (anchor_matches[pos_indices].long(), torch.LongTensor([0] * neg_indices.shape[0]).cuda()) )
non_neutral_indices = torch.cat( (pos_indices, neg_indices) )
q = F.softmax(class_pred_logits[non_neutral_indices], dim=1) # q shape: (n_non_neutral_anchors, n_classes)
    # one-hot encoded target classes: keep only the predicted probs of the
    # correct class; it will receive incentive to be maximized.
# log(q_i) where i = target class --> FL shape (n_anchors,)
# need to transform to indices into flattened tensor to use torch.take
target_locs_flat = q.shape[1] * torch.arange(q.shape[0]).cuda() + target_classes
q = torch.take(q, target_locs_flat)
FL = torch.log(q) # element-wise log
FL *= -(1-q)**gamma
# take mean over all considered anchors
FL = FL.sum() / FL.shape[0]
return FL
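
# Worked example (illustrative): with gamma=2, an easy anchor classified at
# q = 0.9 contributes FL = -(1 - 0.9)**2 * log(0.9) ~= 0.001, while a hard
# anchor at q = 0.1 contributes -(1 - 0.1)**2 * log(0.1) ~= 1.865; the
# (1 - q)**gamma factor down-weights well-classified anchors.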
def refine_detections(anchors, probs, deltas, regressions, batch_ixs, cf):
"""Refine classified proposals, filter overlaps and return final
detections. n_proposals here is typically a very large number: batch_size * n_anchors.
This function is hence optimized on trimming down n_proposals.
:param anchors: (n_anchors, 2 * dim)
:param probs: (n_proposals, n_classes) softmax probabilities for all rois as predicted by classifier head.
:param deltas: (n_proposals, n_classes, 2 * dim) box refinement deltas as predicted by bbox regressor head.
:param regressions: (n_proposals, n_classes, n_rg_feats)
    :param batch_ixs: (n_proposals) batch element assignment info for re-allocation.
:return: result: (n_final_detections, (y1, x1, y2, x2, (z1), (z2), batch_ix, pred_class_id, pred_score, pred_regr))
"""
anchors = anchors.repeat(batch_ixs.unique().shape[0], 1)
    # flatten foreground probabilities, sort and trim down to highest confidences by the pre_nms limit.
fg_probs = probs[:, 1:].contiguous()
flat_probs, flat_probs_order = fg_probs.view(-1).sort(descending=True)
keep_ix = flat_probs_order[:cf.pre_nms_limit]
# reshape indices to 2D index array with shape like fg_probs.
keep_arr = torch.cat(((keep_ix / fg_probs.shape[1]).unsqueeze(1), (keep_ix % fg_probs.shape[1]).unsqueeze(1)), 1)
pre_nms_scores = flat_probs[:cf.pre_nms_limit]
pre_nms_class_ids = keep_arr[:, 1] + 1 # add background again.
pre_nms_batch_ixs = batch_ixs[keep_arr[:, 0]]
pre_nms_anchors = anchors[keep_arr[:, 0]]
pre_nms_deltas = deltas[keep_arr[:, 0]]
pre_nms_regressions = regressions[keep_arr[:, 0]]
keep = torch.arange(pre_nms_scores.size()[0]).long().cuda()
# apply bounding box deltas. re-scale to image coordinates.
std_dev = torch.from_numpy(np.reshape(cf.rpn_bbox_std_dev, [1, cf.dim * 2])).float().cuda()
scale = torch.from_numpy(cf.scale).float().cuda()
refined_rois = mutils.apply_box_deltas_2D(pre_nms_anchors / scale, pre_nms_deltas * std_dev) * scale \
if cf.dim == 2 else mutils.apply_box_deltas_3D(pre_nms_anchors / scale, pre_nms_deltas * std_dev) * scale
    # round and cast to int since we're dealing with pixels now
refined_rois = mutils.clip_to_window(cf.window, refined_rois)
pre_nms_rois = torch.round(refined_rois)
for j, b in enumerate(mutils.unique1d(pre_nms_batch_ixs)):
bixs = torch.nonzero(pre_nms_batch_ixs == b)[:, 0]
bix_class_ids = pre_nms_class_ids[bixs]
bix_rois = pre_nms_rois[bixs]
bix_scores = pre_nms_scores[bixs]
for i, class_id in enumerate(mutils.unique1d(bix_class_ids)):
ixs = torch.nonzero(bix_class_ids == class_id)[:, 0]
# nms expects boxes sorted by score.
ix_rois = bix_rois[ixs]
ix_scores = bix_scores[ixs]
ix_scores, order = ix_scores.sort(descending=True)
ix_rois = ix_rois[order, :]
ix_scores = ix_scores
class_keep = nms.nms(ix_rois, ix_scores, cf.detection_nms_threshold)
# map indices back.
class_keep = keep[bixs[ixs[order[class_keep]]]]
# merge indices over classes for current batch element
b_keep = class_keep if i == 0 else mutils.unique1d(torch.cat((b_keep, class_keep)))
# only keep top-k boxes of current batch-element.
top_ids = pre_nms_scores[b_keep].sort(descending=True)[1][:cf.model_max_instances_per_batch_element]
b_keep = b_keep[top_ids]
# merge indices over batch elements.
batch_keep = b_keep if j == 0 else mutils.unique1d(torch.cat((batch_keep, b_keep)))
keep = batch_keep
# arrange output.
result = torch.cat((pre_nms_rois[keep],
pre_nms_batch_ixs[keep].unsqueeze(1).float(),
pre_nms_class_ids[keep].unsqueeze(1).float(),
pre_nms_scores[keep].unsqueeze(1),
pre_nms_regressions[keep]), dim=1)
return result
def gt_anchor_matching(cf, anchors, gt_boxes, gt_class_ids=None, gt_regressions=None):
"""Given the anchors and GT boxes, compute overlaps and identify positive
anchors and deltas to refine them to match their corresponding GT boxes.
anchors: [num_anchors, (y1, x1, y2, x2, (z1), (z2))]
gt_boxes: [num_gt_boxes, (y1, x1, y2, x2, (z1), (z2))]
gt_class_ids (optional): [num_gt_boxes] Integer class IDs for one stage detectors. in RPN case of Mask R-CNN,
set all positive matches to 1 (foreground)
gt_regressions: [num_gt_rgs, n_rg_feats], if None empty rg_targets are returned
Returns:
anchor_class_matches: [N] (int32) matches between anchors and GT boxes. class_id = positive anchor,
-1 = negative anchor, 0 = neutral. i.e., positively matched anchors are marked by class_id (which is >0).
anchor_delta_targets: [N, (dy, dx, (dz), log(dh), log(dw), (log(dd)))] Anchor bbox deltas.
anchor_rg_targets: [n_anchors, n_rg_feats]
"""
anchor_class_matches = np.zeros([anchors.shape[0]], dtype=np.int32)
anchor_delta_targets = np.zeros((cf.rpn_train_anchors_per_image, 2*cf.dim))
if gt_regressions is not None:
if 'regression_bin' in cf.prediction_tasks:
anchor_rg_targets = np.zeros((cf.rpn_train_anchors_per_image,))
else:
anchor_rg_targets = np.zeros((cf.rpn_train_anchors_per_image, cf.regression_n_features))
else:
anchor_rg_targets = np.array([])
anchor_matching_iou = cf.anchor_matching_iou
if gt_boxes is None:
anchor_class_matches = np.full(anchor_class_matches.shape, fill_value=-1)
return anchor_class_matches, anchor_delta_targets, anchor_rg_targets
# for mrcnn: anchor matching is done for RPN loss, so positive labels are all 1 (foreground)
if gt_class_ids is None:
gt_class_ids = np.array([1] * len(gt_boxes))
# Compute overlaps [num_anchors, num_gt_boxes]
overlaps = mutils.compute_overlaps(anchors, gt_boxes)
# Match anchors to GT Boxes
# If an anchor overlaps a GT box with IoU >= anchor_matching_iou then it's positive.
# If an anchor overlaps a GT box with IoU < 0.1 then it's negative.
# Neutral anchors are those that don't match the conditions above,
# and they don't influence the loss function.
# However, don't keep any GT box unmatched (rare, but happens). Instead,
# match it to the closest anchor (even if its max IoU is < 0.1).
# 1. Set negative anchors first. They get overwritten below if a GT box is
# matched to them. Skip boxes in crowd areas.
anchor_iou_argmax = np.argmax(overlaps, axis=1)
anchor_iou_max = overlaps[np.arange(overlaps.shape[0]), anchor_iou_argmax]
if anchors.shape[1] == 4:
anchor_class_matches[(anchor_iou_max < 0.1)] = -1
elif anchors.shape[1] == 6:
anchor_class_matches[(anchor_iou_max < 0.01)] = -1
else:
raise ValueError('anchor shape wrong {}'.format(anchors.shape))
# 2. Set an anchor for each GT box (regardless of IoU value).
gt_iou_argmax = np.argmax(overlaps, axis=0)
for ix, ii in enumerate(gt_iou_argmax):
anchor_class_matches[ii] = gt_class_ids[ix]
# 3. Set anchors with high overlap as positive.
above_thresh_ixs = np.argwhere(anchor_iou_max >= anchor_matching_iou)
anchor_class_matches[above_thresh_ixs] = gt_class_ids[anchor_iou_argmax[above_thresh_ixs]]
# Subsample to balance positive anchors.
ids = np.where(anchor_class_matches > 0)[0]
extra = len(ids) - (cf.rpn_train_anchors_per_image // 2)
if extra > 0:
# Reset the extra ones to neutral
ids = np.random.choice(ids, extra, replace=False)
anchor_class_matches[ids] = 0
# Leave all negative proposals negative for now and sample from them later in online hard example mining.
# For positive anchors, compute shift and scale needed to transform them to match the corresponding GT boxes.
ids = np.where(anchor_class_matches > 0)[0]
ix = 0 # index into anchor_delta_targets
for i, a in zip(ids, anchors[ids]):
# closest gt box (it might have IoU < anchor_matching_iou)
gt = gt_boxes[anchor_iou_argmax[i]]
# convert coordinates to center plus width/height.
gt_h = gt[2] - gt[0]
gt_w = gt[3] - gt[1]
gt_center_y = gt[0] + 0.5 * gt_h
gt_center_x = gt[1] + 0.5 * gt_w
# Anchor
a_h = a[2] - a[0]
a_w = a[3] - a[1]
a_center_y = a[0] + 0.5 * a_h
a_center_x = a[1] + 0.5 * a_w
if cf.dim == 2:
anchor_delta_targets[ix] = [
(gt_center_y - a_center_y) / a_h,
(gt_center_x - a_center_x) / a_w,
np.log(gt_h / a_h),
np.log(gt_w / a_w)]
else:
gt_d = gt[5] - gt[4]
gt_center_z = gt[4] + 0.5 * gt_d
a_d = a[5] - a[4]
a_center_z = a[4] + 0.5 * a_d
anchor_delta_targets[ix] = [
(gt_center_y - a_center_y) / a_h,
(gt_center_x - a_center_x) / a_w,
(gt_center_z - a_center_z) / a_d,
np.log(gt_h / a_h),
np.log(gt_w / a_w),
np.log(gt_d / a_d)]
# normalize.
anchor_delta_targets[ix] /= cf.rpn_bbox_std_dev
if gt_regressions is not None:
anchor_rg_targets[ix] = gt_regressions[anchor_iou_argmax[i]]
ix += 1
return anchor_class_matches, anchor_delta_targets, anchor_rg_targets
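
# Worked 2D example (illustrative): an anchor a = (0, 0, 10, 10) matched to a
# GT box gt = (2, 2, 12, 12) has a_h = a_w = gt_h = gt_w = 10 and centers
# (5, 5) vs. (7, 7), so its raw delta target is
# [(7 - 5) / 10, (7 - 5) / 10, log(10 / 10), log(10 / 10)] = [0.2, 0.2, 0.0, 0.0],
# which is then divided element-wise by cf.rpn_bbox_std_dev.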
############################################################
# RetinaNet Class
############################################################
class net(nn.Module):
"""Encapsulates the RetinaNet model functionality.
"""
def __init__(self, cf, logger):
"""
        cf: configuration instance holding the model and training settings
        logger: logger used to report build and initialization info
"""
super(net, self).__init__()
self.cf = cf
self.logger = logger
self.build()
if self.cf.weight_init is not None:
logger.info("using pytorch weight init of type {}".format(self.cf.weight_init))
mutils.initialize_weights(self)
else:
logger.info("using default pytorch weight init")
self.debug_acm = []
def build(self):
"""Build Retina Net architecture."""
        # Image size must be divisible by 2 multiple times.
h, w = self.cf.patch_size[:2]
if h / 2 ** 5 != int(h / 2 ** 5) or w / 2 ** 5 != int(w / 2 ** 5):
raise Exception("Image size must be divisible by 2 at least 5 times "
"to avoid fractions when downscaling and upscaling."
"For example, use 256, 320, 384, 448, 512, ... etc. ")
backbone = utils.import_module('bbone', self.cf.backbone_path)
self.logger.info("loaded backbone from {}".format(self.cf.backbone_path))
conv = backbone.ConvGenerator(self.cf.dim)
# build Anchors, FPN, Classifier / Bbox-Regressor -head
self.np_anchors = mutils.generate_pyramid_anchors(self.logger, self.cf)
self.anchors = torch.from_numpy(self.np_anchors).float().cuda()
self.fpn = backbone.FPN(self.cf, conv, operate_stride1=self.cf.operate_stride1).cuda()
self.classifier = Classifier(self.cf, conv).cuda()
self.bb_regressor = BBRegressor(self.cf, conv).cuda()
if 'regression' in self.cf.prediction_tasks:
self.roi_regressor = RoIRegressor(self.cf, conv, self.cf.regression_n_features).cuda()
elif 'regression_bin' in self.cf.prediction_tasks:
# classify into bins of regression values
self.roi_regressor = RoIRegressor(self.cf, conv, len(self.cf.bin_labels)).cuda()
else:
self.roi_regressor = lambda x: [torch.tensor([]).cuda()]
if self.cf.model == 'retina_unet':
self.final_conv = conv(self.cf.end_filts, self.cf.num_seg_classes, ks=1, pad=0, norm=None, relu=None)
def forward(self, img):
"""
:param img: input img (b, c, y, x, (z)).
"""
# Feature extraction
fpn_outs = self.fpn(img)
if self.cf.model == 'retina_unet':
seg_logits = self.final_conv(fpn_outs[0])
selected_fmaps = [fpn_outs[i + 1] for i in self.cf.pyramid_levels]
else:
seg_logits = None
selected_fmaps = [fpn_outs[i] for i in self.cf.pyramid_levels]
# Loop through pyramid layers
class_layer_outputs, bb_reg_layer_outputs, roi_reg_layer_outputs = [], [], [] # list of lists
for p in selected_fmaps:
class_layer_outputs.append(self.classifier(p))
bb_reg_layer_outputs.append(self.bb_regressor(p))
roi_reg_layer_outputs.append(self.roi_regressor(p))
# Concatenate layer outputs
# Convert from list of lists of level outputs to list of lists
# of outputs across levels.
# e.g. [[a1, b1, c1], [a2, b2, c2]] => [[a1, a2], [b1, b2], [c1, c2]]
class_logits = list(zip(*class_layer_outputs))
class_logits = [torch.cat(list(o), dim=1) for o in class_logits][0]
bb_outputs = list(zip(*bb_reg_layer_outputs))
bb_outputs = [torch.cat(list(o), dim=1) for o in bb_outputs][0]
        if roi_reg_layer_outputs[0][0].shape[0] != 0:
rg_outputs = list(zip(*roi_reg_layer_outputs))
rg_outputs = [torch.cat(list(o), dim=1) for o in rg_outputs][0]
else:
if self.cf.dim == 2:
n_feats = np.array([p.shape[-2] * p.shape[-1] * self.cf.n_anchors_per_pos for p in selected_fmaps]).sum()
else:
n_feats = np.array([p.shape[-3]*p.shape[-2]*p.shape[-1]*self.cf.n_anchors_per_pos for p in selected_fmaps]).sum()
rg_outputs = torch.zeros((selected_fmaps[0].shape[0], n_feats, self.cf.regression_n_features),
dtype=torch.float32).fill_(float('NaN')).cuda()
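            # (Descriptive note, added) The NaN-filled placeholder keeps rg_outputs
            # shape-compatible with the other heads when no ROI-regression task is
            # configured, so the downstream code can treat all tasks uniformly.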
# merge batch_dimension and store info in batch_ixs for re-allocation.
batch_ixs = torch.arange(class_logits.shape[0]).unsqueeze(1).repeat(1, class_logits.shape[1]).view(-1).cuda()
flat_class_softmax = F.softmax(class_logits.view(-1, class_logits.shape[-1]), 1)
flat_bb_outputs = bb_outputs.view(-1, bb_outputs.shape[-1])
flat_rg_outputs = rg_outputs.view(-1, rg_outputs.shape[-1])
detections = refine_detections(self.anchors, flat_class_softmax, flat_bb_outputs, flat_rg_outputs, batch_ixs,
self.cf)
return detections, class_logits, bb_outputs, rg_outputs, seg_logits
def get_results(self, img_shape, detections, seg_logits, box_results_list=None):
"""
Restores batch dimension of merged detections, unmolds detections, creates and fills results dict.
:param img_shape:
        :param detections: (n_final_detections, (y1, x1, y2, x2, (z1), (z2), batch_ix, pred_class_id, pred_score,
            pred_regression))
:param box_results_list: None or list of output boxes for monitoring/plotting.
each element is a list of boxes per batch element.
:return: results_dict: dictionary with keys:
'boxes': list over batch elements. each batch element is a list of boxes. each box is a dictionary:
[[{box_0}, ... {box_n}], [{box_0}, ... {box_n}], ...]
            'seg_preds': pixel-wise class predictions (b, 1, y, x, (z)) with values in [0, 1]; only fg vs. bg for now.
class-specific return of masks will come with implementation of instance segmentation evaluation.
"""
detections = detections.cpu().data.numpy()
batch_ixs = detections[:, self.cf.dim*2]
detections = [detections[batch_ixs == ix] for ix in range(img_shape[0])]
        if box_results_list is None:  # for test_forward, where no previous list exists.
box_results_list = [[] for _ in range(img_shape[0])]
for ix in range(img_shape[0]):
            if 0 not in detections[ix].shape:
boxes = detections[ix][:, :2 * self.cf.dim].astype(np.int32)
class_ids = detections[ix][:, 2 * self.cf.dim + 1].astype(np.int32)
scores = detections[ix][:, 2 * self.cf.dim + 2]
regressions = detections[ix][:, 2 * self.cf.dim + 3:]
# Filter out detections with zero area. Often only happens in early
# stages of training when the network weights are still a bit random.
if self.cf.dim == 2:
exclude_ix = np.where((boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1]) <= 0)[0]
else:
exclude_ix = np.where(
(boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 5] - boxes[:, 4]) <= 0)[0]
if exclude_ix.shape[0] > 0:
boxes = np.delete(boxes, exclude_ix, axis=0)
class_ids = np.delete(class_ids, exclude_ix, axis=0)
scores = np.delete(scores, exclude_ix, axis=0)
regressions = np.delete(regressions, exclude_ix, axis=0)
                if 0 not in boxes.shape:
for ix2, score in enumerate(scores):
if score >= self.cf.model_min_confidence:
box = {'box_type': 'det', 'box_coords': boxes[ix2], 'box_score': score,
'box_pred_class_id': class_ids[ix2]}
if "regression_bin" in self.cf.prediction_tasks:
# in this case, regression preds are actually the rg_bin_ids --> map to rg value the bin stands for
box['rg_bin'] = regressions[ix2].argmax()
box['regression'] = self.cf.bin_id2rg_val[box['rg_bin']]
else:
box['regression'] = regressions[ix2]
if hasattr(self.cf, "rg_val_to_bin_id") and \
any(['regression' in task for task in self.cf.prediction_tasks]):
box['rg_bin'] = self.cf.rg_val_to_bin_id(regressions[ix2])
box_results_list[ix].append(box)
results_dict = {}
results_dict['boxes'] = box_results_list
if seg_logits is None:
# output dummy segmentation for retina_net.
out_logits_shape = list(img_shape)
out_logits_shape[1] = self.cf.num_seg_classes
results_dict['seg_preds'] = np.zeros(out_logits_shape, dtype=np.float16)
            #todo: try with seg_preds=None? so as not to carry heavy dummy preds.
else:
# output label maps for retina_unet.
results_dict['seg_preds'] = F.softmax(seg_logits, 1).cpu().data.numpy()
return results_dict
def train_forward(self, batch, is_validation=False):
"""
train method (also used for validation monitoring). wrapper around forward pass of network. prepares input data
for processing, computes losses, and stores outputs in a dictionary.
:param batch: dictionary containing 'data', 'seg', etc.
:return: results_dict: dictionary with keys:
'boxes': list over batch elements. each batch element is a list of boxes. each box is a dictionary:
[[{box_0}, ... {box_n}], [{box_0}, ... {box_n}], ...]
'seg_preds': pixelwise segmentation output (b, c, y, x, (z)) with values [0, .., n_classes].
'torch_loss': 1D torch tensor for backprop.
'class_loss': classification loss for monitoring.
"""
img = batch['data']
gt_class_ids = batch['class_targets']
gt_boxes = batch['bb_target']
if 'regression' in self.cf.prediction_tasks:
gt_regressions = batch["regression_targets"]
elif 'regression_bin' in self.cf.prediction_tasks:
gt_regressions = batch["rg_bin_targets"]
else:
gt_regressions = None
if self.cf.model == 'retina_unet':
var_seg_ohe = torch.FloatTensor(mutils.get_one_hot_encoding(batch['seg'], self.cf.num_seg_classes)).cuda()
var_seg = torch.LongTensor(batch['seg']).cuda()
img = torch.from_numpy(img).float().cuda()
torch_loss = torch.FloatTensor([0]).cuda()
# list of output boxes for monitoring/plotting. each element is a list of boxes per batch element.
box_results_list = [[] for _ in range(img.shape[0])]
detections, class_logits, pred_deltas, pred_rgs, seg_logits = self.forward(img)
# loop over batch
for b in range(img.shape[0]):
# add gt boxes to results dict for monitoring.
if len(gt_boxes[b]) > 0:
for tix in range(len(gt_boxes[b])):
gt_box = {'box_type': 'gt', 'box_coords': batch['bb_target'][b][tix]}
for name in self.cf.roi_items:
gt_box.update({name: batch[name][b][tix]})
box_results_list[b].append(gt_box)
# match gt boxes with anchors to generate targets.
anchor_class_match, anchor_target_deltas, anchor_target_rgs = gt_anchor_matching(
self.cf, self.np_anchors, gt_boxes[b], gt_class_ids[b], gt_regressions[b] if gt_regressions is not None else None)
# add positive anchors used for loss to results_dict for monitoring.
pos_anchors = mutils.clip_boxes_numpy(
self.np_anchors[np.argwhere(anchor_class_match > 0)][:, 0], img.shape[2:])
for p in pos_anchors:
box_results_list[b].append({'box_coords': p, 'box_type': 'pos_anchor'})
else:
anchor_class_match = np.array([-1]*self.np_anchors.shape[0])
anchor_target_deltas = np.array([])
anchor_target_rgs = np.array([])
anchor_class_match = torch.from_numpy(anchor_class_match).cuda()
anchor_target_deltas = torch.from_numpy(anchor_target_deltas).float().cuda()
anchor_target_rgs = torch.from_numpy(anchor_target_rgs).float().cuda()
if self.cf.focal_loss:
# compute class loss as focal loss as suggested in original publication, but multi-class.
class_loss = compute_focal_class_loss(anchor_class_match, class_logits[b], gamma=self.cf.focal_loss_gamma)
            # negative anchors are not added to the monitoring output here, as they are not really relevant
else:
# compute class loss with SHEM.
class_loss, neg_anchor_ix = compute_class_loss(anchor_class_match, class_logits[b])
# add negative anchors used for loss to results_dict for monitoring.
neg_anchors = mutils.clip_boxes_numpy(
self.np_anchors[np.argwhere(anchor_class_match.cpu().numpy() == -1)][neg_anchor_ix, 0],
img.shape[2:])
for n in neg_anchors:
box_results_list[b].append({'box_coords': n, 'box_type': 'neg_anchor'})
rg_loss = compute_rg_loss(self.cf.prediction_tasks, anchor_target_rgs, pred_rgs[b], anchor_class_match)
bbox_loss = compute_bbox_loss(anchor_target_deltas, pred_deltas[b], anchor_class_match)
torch_loss += (class_loss + bbox_loss + rg_loss) / img.shape[0]
results_dict = self.get_results(img.shape, detections, seg_logits, box_results_list)
results_dict['seg_preds'] = results_dict['seg_preds'].argmax(axis=1).astype('uint8')[:, np.newaxis]
if self.cf.model == 'retina_unet':
            seg_loss_dice = 1 - mutils.batch_dice(F.softmax(seg_logits, dim=1), var_seg_ohe)
seg_loss_ce = F.cross_entropy(seg_logits, var_seg[:, 0])
torch_loss += (seg_loss_dice + seg_loss_ce) / 2
#self.logger.info("loss: {0:.2f}, class: {1:.2f}, bbox: {2:.2f}, seg dice: {3:.3f}, seg ce: {4:.3f}, "
# "mean pixel preds: {5:.5f}".format(torch_loss.item(), batch_class_loss.item(), batch_bbox_loss.item(),
# seg_loss_dice.item(), seg_loss_ce.item(), np.mean(results_dict['seg_preds'])))
if 'dice' in self.cf.metrics:
results_dict['batch_dices'] = mutils.dice_per_batch_and_class(
results_dict['seg_preds'], batch["seg"], self.cf.num_seg_classes, convert_to_ohe=True)
#else:
#self.logger.info("loss: {0:.2f}, class: {1:.2f}, bbox: {2:.2f}".format(
# torch_loss.item(), class_loss.item(), bbox_loss.item()))
results_dict['torch_loss'] = torch_loss
results_dict['class_loss'] = class_loss.item()
return results_dict
def test_forward(self, batch, **kwargs):
"""
test method. wrapper around forward pass of network without usage of any ground truth information.
prepares input data for processing and stores outputs in a dictionary.
:param batch: dictionary containing 'data'
:return: results_dict: dictionary with keys:
'boxes': list over batch elements. each batch element is a list of boxes. each box is a dictionary:
[[{box_0}, ... {box_n}], [{box_0}, ... {box_n}], ...]
            'seg_preds': contains segmentation probabilities; these are reduced to predictions (via argmax) in the predictor,
                or dummy seg logits for plain RetinaNet (detection only).
"""
img = torch.from_numpy(batch['data']).float().cuda()
detections, _, _, _, seg_logits = self.forward(img)
results_dict = self.get_results(img.shape, detections, seg_logits)
return results_dict | 48.527599 | 142 | 0.626458 | 5,417 | 37,803 | 4.151745 | 0.126269 | 0.015474 | 0.006003 | 0.010004 | 0.37092 | 0.331036 | 0.3008 | 0.267274 | 0.24442 | 0.223877 | 0 | 0.017315 | 0.252943 | 37,803 | 779 | 143 | 48.527599 | 0.779045 | 0.304605 | 0 | 0.228111 | 0 | 0 | 0.03057 | 0 | 0 | 0 | 0 | 0.001284 | 0 | 1 | 0.041475 | false | 0 | 0.023041 | 0 | 0.105991 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa29cbe9046a7757494a078dc6a9da8756da21f2 | 2,996 | py | Python | synth/devices/energy.py | pervasivesolutions/synth | 7b00f14dffc2630acd2743d0d5cf9f7c1627b067 | [
"MIT"
] | null | null | null | synth/devices/energy.py | pervasivesolutions/synth | 7b00f14dffc2630acd2743d0d5cf9f7c1627b067 | [
"MIT"
] | null | null | null | synth/devices/energy.py | pervasivesolutions/synth | 7b00f14dffc2630acd2743d0d5cf9f7c1627b067 | [
"MIT"
] | null | null | null | """
energy
=====
Simulates energy meter
Configurable parameters::
{
"opening_times" : (optional) "name of an opening-times pattern"
"max_power" : (optional) maximum power level
"baseload_power" : (optional) baseload power level (e.g. night-time)
"power_variation" : (optional) how much "noise" on the reading
}
Device properties created::
{
"kWh" : odometer
"kW" : instantaneous
}
"""
import random
import logging
import time
from .device import Device
from .helpers import opening_times as opening_times
ENERGY_READING_INTERVAL_S = 60 * 30
DEFAULT_OPENING_TIMES = "nine_to_five"
DEFAULT_MAX_POWER_KW = 10.0
DEFAULT_BASELOAD_POWER_KW = 2.0
DEFAULT_POWER_VARIATION_KW = 1.0
class Energy(Device):
def __init__(self, instance_name, time, engine, update_callback, context, params):
super(Energy,self).__init__(instance_name, time, engine, update_callback, context, params)
self.opening_times = params["energy"].get("opening_times", DEFAULT_OPENING_TIMES)
self.max_power_kW = params["energy"].get("max_power", DEFAULT_MAX_POWER_KW)
self.baseload_power_kW = params["energy"].get("baseload_power", DEFAULT_BASELOAD_POWER_KW)
self.power_variation_kW = params["energy"].get("power_variation", DEFAULT_POWER_VARIATION_KW)
if not self.property_exists("device_type"):
self.set_property("device_type", "energy")
self.set_property("kWh", int(random.random() * 100000))
self.occupied_bodge = params["energy"].get("occupied_bodge", False)
if self.occupied_bodge:
self.set_property("occupied", False) # !!!!!!!!!!! TEMP BODGE TO OVERCOME CLUSTERING PROBLEM
self.engine.register_event_in(ENERGY_READING_INTERVAL_S, self.tick_reading, self, self)
def comms_ok(self):
return super(Energy,self).comms_ok()
    def external_event(self, event_name, arg):
        super(Energy,self).external_event(event_name, arg)
def close(self):
super(Energy,self).close()
# Private methods
def tick_reading(self, _):
open_chance = opening_times.chance_of_occupied(self.engine.get_now(), self.opening_times)
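        # (Descriptive note, added) open_chance in [0, 1] interpolates the mean
        # load between roughly baseload_power_kW and max_power_kW; the random
        # term below adds up to power_variation_kW of noise on top.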
kW = self.baseload_power_kW + open_chance * (self.max_power_kW - self.baseload_power_kW - self.power_variation_kW/2.0)
kW += random.random() * self.power_variation_kW
kWh = self.get_property("kWh")
kWh += kW * ENERGY_READING_INTERVAL_S / (60 * 60.0)
kW = int(100 * kW) / 100.0 # Round
kWh = int(100 * kWh) / 100.0
self.start_property_group() # -->
self.set_property("kW", kW)
self.set_property("kWh", kWh)
if self.occupied_bodge:
self.set_property("occupied", not self.get_property("occupied")) # !!!!!!!!!!! TEMP BODGE TO OVERCOME CLUSTERING PROBLEM
self.end_property_group() # <--
self.engine.register_event_in(ENERGY_READING_INTERVAL_S, self.tick_reading, self, self)
| 37.924051 | 135 | 0.679573 | 389 | 2,996 | 4.938303 | 0.25964 | 0.062467 | 0.046851 | 0.045809 | 0.321187 | 0.266007 | 0.266007 | 0.167621 | 0.072879 | 0.072879 | 0 | 0.015886 | 0.201602 | 2,996 | 78 | 136 | 38.410256 | 0.787207 | 0.194593 | 0 | 0.086957 | 0 | 0 | 0.070863 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108696 | false | 0.021739 | 0.108696 | 0.021739 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa2b234e671e311a5cf9def2ea0d0a7e0a41dda3 | 3,165 | py | Python | mtil/domain_transfer.py | qxcv/mtil | 62608046efb570b53f8107b8de9a7a1f28aee28a | [
"0BSD"
] | 1 | 2021-01-18T23:57:07.000Z | 2021-01-18T23:57:07.000Z | mtil/domain_transfer.py | qxcv/mtil | 62608046efb570b53f8107b8de9a7a1f28aee28a | [
"0BSD"
] | null | null | null | mtil/domain_transfer.py | qxcv/mtil | 62608046efb570b53f8107b8de9a7a1f28aee28a | [
"0BSD"
] | null | null | null | """Tools for domain transfer/domain invariance losses."""
import torch
from torch import nn
import torch.nn.functional as F
class ReverseGrad(torch.autograd.Function):
"""This layer acts like the identity function on the forward pass, but
reverses gradients on the backwards pass."""
@staticmethod
def forward(ctx, x):
# return a copy of x to avoid whatever spooky optimisations Torch might
# be doing (this is purely defensive---I haven't needed it, but on the
# Torch forums there are people claiming issues with an older verion
# https://discuss.pytorch.org/t/solved-reverse-gradients-in-backward-pass/3589/7)
return x.clone()
@staticmethod
def backward(ctx, dydx):
return -dydx
reverse_grad = ReverseGrad.apply
def make_binary_domain_classifier(in_chans,
hidden_chans=256,
ActivationCls=nn.ReLU):
"""Simple MLP for domain classification (no gradient reversal)."""
return nn.Sequential(
nn.Linear(in_chans, hidden_chans),
ActivationCls(),
nn.Linear(hidden_chans, 1),
)
class BinaryDomainLossModule(nn.Module):
"""Combines gradient reversal -> domain classifier -> (xent loss, acc.)."""
def __init__(self, in_chans, **kwargs):
super().__init__()
self.classifier = make_binary_domain_classifier(in_chans, **kwargs)
def forward(self, x, binary_is_demo_labels, reduce_loss=True):
assert binary_is_demo_labels.shape == (x.shape[0], )
assert ((binary_is_demo_labels == 0) |
(binary_is_demo_labels == 1)).all()
rev_x = reverse_grad(x)
logits = self.classifier(rev_x)
logits = logits.squeeze(1)
loss = F.binary_cross_entropy_with_logits(
logits,
binary_is_demo_labels,
reduction='mean' if reduce_loss else 'none')
pred_labels = (logits >= 0).to(torch.long)
acc = torch.mean(
(pred_labels == binary_is_demo_labels).to(torch.float))
return loss, acc
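# Minimal usage sketch (added for clarity; tensor values and the helper name are
# hypothetical). Features from both domains pass through the loss module; the
# gradient reversal inside pushes the upstream feature extractor towards
# domain-invariant representations.
def _example_domain_loss_step():
    module = BinaryDomainLossModule(in_chans=8)
    feats = torch.randn(4, 8)  # e.g. encoder output for a batch of 4
    is_demo = torch.tensor([1.0, 0.0, 1.0, 0.0])  # 1 = demo domain, 0 = novice
    loss, acc = module(feats, is_demo)
    return loss, acc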
def test_grad_rev():
def fn(u):
return 0.5 * u**2 - u
def fn_deriv(u):
return u - 1
for val in [-1.0, 0.0, 4.0]:
# test normal forward pass
x = torch.tensor(val, requires_grad=True)
y = fn(x)
y.backward()
real_grad = fn_deriv(x.detach())
assert torch.allclose(x.grad, real_grad), (x.grad, real_grad)
# test reversed forward pass
x_rev = torch.tensor(val, requires_grad=True)
y_rev = fn(reverse_grad(x_rev))
y_rev.backward()
assert torch.allclose(x_rev.grad, -real_grad), (x_rev.grad, -real_grad)
# also sanity check
assert torch.allclose(x_rev.grad, -x.grad), (x_rev.grad, x.grad)
# test adding the two
x_joint = torch.tensor(val, requires_grad=True)
y_joint = fn(x_joint)
y_joint_rev = fn(reverse_grad(x_joint))
(y_joint + y_joint_rev).backward()
assert torch.allclose(x_joint.grad, 0.0), x_joint.grad
print("Done, tests pass!")
if __name__ == '__main__':
test_grad_rev()
| 32.628866 | 89 | 0.62812 | 428 | 3,165 | 4.425234 | 0.357477 | 0.021119 | 0.038015 | 0.057022 | 0.200106 | 0.134636 | 0.049102 | 0 | 0 | 0 | 0 | 0.011144 | 0.262875 | 3,165 | 96 | 90 | 32.96875 | 0.800686 | 0.211374 | 0 | 0.032258 | 0 | 0 | 0.013393 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 1 | 0.129032 | false | 0.016129 | 0.048387 | 0.064516 | 0.306452 | 0.016129 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa2df5d8a6806ad5b845cdc95e03c222696b8243 | 783 | py | Python | utils/mean_std.py | DLLXW/DIGIX-ImageRetrieval | 4aaf39f8e9b7b8d4f26271f2cc7e80565f71d35e | [
"MIT"
] | 10 | 2020-09-19T06:46:22.000Z | 2022-01-04T01:42:28.000Z | utils/mean_std.py | DLLXW/DIGIX-ImageRetrieval | 4aaf39f8e9b7b8d4f26271f2cc7e80565f71d35e | [
"MIT"
] | 1 | 2020-09-24T06:07:02.000Z | 2020-09-29T02:01:24.000Z | utils/mean_std.py | DLLXW/DIGIX-ImageRetrieval | 4aaf39f8e9b7b8d4f26271f2cc7e80565f71d35e | [
"MIT"
] | 4 | 2020-09-20T02:38:14.000Z | 2022-01-21T15:03:16.000Z | import numpy as np
import cv2
import os
import glob
# img_h, img_w = 32, 32
img_h, img_w = 32, 48  # adjust to suit your own dataset; the exact size has little effect
means, stdevs = [], []
img_list = []
imgs_path_list = glob.glob('datasets/*/*')
len_ = len(imgs_path_list)
i = 0
for item in imgs_path_list:
img = cv2.imread(item)
img = cv2.resize(img, (img_w, img_h))
img = img[:, :, :, np.newaxis]
img_list.append(img)
i += 1
print(i, '/', len_)
imgs = np.concatenate(img_list, axis=3)
imgs = imgs.astype(np.float32) / 255.
for i in range(3):
    pixels = imgs[:, :, i, :].ravel()  # flatten to a 1-D array
means.append(np.mean(pixels))
stdevs.append(np.std(pixels))
# BGR --> RGB: conversion is needed for images read with OpenCV, but not for PIL-read images
means.reverse()
stdevs.reverse()
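# Usage sketch (added; assumes torchvision is available -- it is not imported above):
# from torchvision import transforms
# normalize = transforms.Normalize(mean=means, std=stdevs)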
print("normMean = {}".format(means))
print("normStd = {}".format(stdevs)) | 22.371429 | 45 | 0.634738 | 121 | 783 | 3.966942 | 0.429752 | 0.025 | 0.04375 | 0.033333 | 0.041667 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031348 | 0.185185 | 783 | 35 | 46 | 22.371429 | 0.721003 | 0.099617 | 0 | 0 | 0 | 0 | 0.054208 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.148148 | 0 | 0.148148 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa2e589a43586cdcb5dc369948992ae3c41e11b0 | 1,669 | py | Python | b2/account_info/test_upload_url_concurrency.py | SergeyBurma/B2_Command_Line_Tool | 65d8adf080e1502cd51c78f9bc9ce3b0bc787147 | [
"MIT"
] | 1 | 2020-09-06T09:32:44.000Z | 2020-09-06T09:32:44.000Z | b2/account_info/test_upload_url_concurrency.py | SergeyBurma/B2_Command_Line_Tool | 65d8adf080e1502cd51c78f9bc9ce3b0bc787147 | [
"MIT"
] | null | null | null | b2/account_info/test_upload_url_concurrency.py | SergeyBurma/B2_Command_Line_Tool | 65d8adf080e1502cd51c78f9bc9ce3b0bc787147 | [
"MIT"
] | null | null | null | ######################################################################
#
# File: b2/account_info/test_upload_url_concurrency.py
#
# Copyright 2018 Backblaze Inc. All Rights Reserved.
#
# License https://www.backblaze.com/using_b2_code.html
#
######################################################################
import os
import threading
import six
from .sqlite_account_info import SqliteAccountInfo
def test_upload_url_concurrency():
# Clean up from previous tests
    file_name = '/tmp/test_upload_concurrency.db'
try:
os.unlink(file_name)
except OSError:
pass
# Make an account info with a bunch of upload URLs in it.
account_info = SqliteAccountInfo(file_name)
available_urls = set()
for i in six.moves.range(3000):
url = 'url_%d' % i
account_info.put_bucket_upload_url('bucket-id', url, 'auth-token-%d' % i)
available_urls.add(url)
# Pull them all from the account info, from multiple threads
lock = threading.Lock()
def run_thread():
while True:
(url, _) = account_info.take_bucket_upload_url('bucket-id')
if url is None:
break
with lock:
if url in available_urls:
available_urls.remove(url)
else:
print('DOUBLE:', url)
threads = []
for i in six.moves.range(5):
thread = threading.Thread(target=run_thread)
thread.start()
threads.append(thread)
for t in threads:
t.join()
# Check
if len(available_urls) != 0:
print('LEAK:', available_urls)
# Clean up
os.unlink(file_name)
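# Added for convenience: allow running this check directly as a script.
if __name__ == '__main__':
    test_upload_url_concurrency()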
| 26.492063 | 81 | 0.566207 | 197 | 1,669 | 4.619289 | 0.48731 | 0.084615 | 0.048352 | 0.035165 | 0.092308 | 0.041758 | 0 | 0 | 0 | 0 | 0 | 0.009772 | 0.26423 | 1,669 | 62 | 82 | 26.919355 | 0.73127 | 0.186938 | 0 | 0.054054 | 0 | 0 | 0.067276 | 0.026578 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054054 | false | 0.027027 | 0.108108 | 0 | 0.162162 | 0.054054 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa32a4ae0026141354ecf77c20e93c609285dd6e | 1,163 | py | Python | Project/src/Modules/House/Irrigation/__init__.py | DBrianKimmel/PyHouse | a100fc67761a22ae47ed6f21f3c9464e2de5d54f | [
"MIT"
] | 3 | 2016-11-16T00:37:58.000Z | 2019-11-10T13:10:19.000Z | Project/src/Modules/House/Irrigation/__init__.py | DBrianKimmel/PyHouse | a100fc67761a22ae47ed6f21f3c9464e2de5d54f | [
"MIT"
] | null | null | null | Project/src/Modules/House/Irrigation/__init__.py | DBrianKimmel/PyHouse | a100fc67761a22ae47ed6f21f3c9464e2de5d54f | [
"MIT"
] | 1 | 2020-07-19T22:06:52.000Z | 2020-07-19T22:06:52.000Z | """
The irrigation system.
Each irrigation system has one source of water. It could be a faucet or a dedicated line or a well with a pump.
The system can be always on, seasonally on (above freezing?) or use a pump relay and/or master valve.
A system may be divided into zones. Each zone can take a part or all of the water being used.
Within a system, only one zone may be active at a time.
System Types:
Multi Zoned. This takes a pump-start relay or a master valve and then individual zone valves where the zones
run in sequence.
Single Zone. This has a valve to turn the system on or off.
"""
__updated__ = '2020-01-26'
__version_info__ = (20, 1, 25)
__version__ = '.'.join(map(str, __version_info__))
VALID_IRRIGATION_TYPE = ['Multi', 'Single']
MODULES = [ # All modules for the House must be listed here. They will be loaded if configured.
'Rainbird'
]
class IrrigationInformation:
""" Info about any/all irrigation systems for a house.
==> PyHouse.House.Irrigation.xxx as in the def below
"""
def __init__(self):
self.Name = None
self.Systems = {} # IrrigationSystemData()
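        # Example shape of the Systems dict (illustrative; the key is hypothetical):
        #   self.Systems = {'FrontLawn': IrrigationSystemData()}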
# ## END DBK
| 29.075 | 113 | 0.700774 | 183 | 1,163 | 4.322404 | 0.579235 | 0.011378 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014349 | 0.22098 | 1,163 | 39 | 114 | 29.820513 | 0.85872 | 0.711092 | 0 | 0 | 0 | 0 | 0.097087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa3830ff6466c939d6f5a6068ea5ffe5a1e5e751 | 7,422 | py | Python | SAC/SAC_rla.py | RickyMexx/SAC-tf2 | 68599e45e92844c07d8c4f8561e31d9a2478559b | [
"Apache-2.0"
] | 6 | 2021-02-07T04:57:07.000Z | 2022-03-01T06:45:28.000Z | SAC/SAC_rla.py | RickyMexx/SAC-tf2 | 68599e45e92844c07d8c4f8561e31d9a2478559b | [
"Apache-2.0"
] | null | null | null | SAC/SAC_rla.py | RickyMexx/SAC-tf2 | 68599e45e92844c07d8c4f8561e31d9a2478559b | [
"Apache-2.0"
] | 1 | 2021-02-07T04:57:11.000Z | 2021-02-07T04:57:11.000Z | import tensorflow_addons as tfa
from typing import Sequence
from common.utils import *
def soft_update(source_vars: Sequence[tf.Variable], target_vars: Sequence[tf.Variable], tau: float) -> None:
"""Move each source variable by a factor of tau towards the corresponding target variable.
Arguments:
source_vars {Sequence[tf.Variable]} -- Source variables to copy from
target_vars {Sequence[tf.Variable]} -- Variables to copy data to
tau {float} -- How much to change to source var, between 0 and 1.
"""
if len(source_vars) != len(target_vars):
raise ValueError("source_vars and target_vars must have the same length.")
for source, target in zip(source_vars, target_vars):
target.assign((1.0 - tau) * target + tau * source)
def hard_update(source_vars: Sequence[tf.Variable], target_vars: Sequence[tf.Variable]) -> None:
"""Copy source variables to target variables.
Arguments:
source_vars {Sequence[tf.Variable]} -- Source variables to copy from
target_vars {Sequence[tf.Variable]} -- Variables to copy data to
"""
# Tau of 1, so get everything from source and keep nothing from target
soft_update(source_vars, target_vars, 1.0)
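# Quick illustration (added, not in the original): with tau = 0.1, a target
# variable at 0.0 tracking a source at 1.0 moves to 0.1 after one soft_update
# call, 0.19 after two -- an exponential moving average of the source.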
class SAC:
def __init__(self, obs_dim, n_actions, act_lim, seed, discount, temperature, polyak_coef, lr,
hidden_layers, n_hidden_units, save_dir, env):
self.obs_dim = obs_dim
self.n_actions = n_actions
self.act_lim = act_lim
self.seed = seed
self.discount = discount
self.temperature = temperature
self.polyak_coef = polyak_coef
self.lr = lr
self.save_dir = save_dir
self.env = env
        self.gamma = discount
### Creating networks and optimizers ###
# Policy network
        # action_output holds the squashed actions; action_original holds those drawn straight from the normal distribution
logprob_epsilon = 1e-6 # For numerical stability when computing tf.log
self.actor_network = ActorNetwork(hidden_layers, n_hidden_units, n_actions, logprob_epsilon)
# 2 Soft q-functions networks + targets
self.softq_network = SoftQNetwork(hidden_layers, n_hidden_units)
self.softq_target_network = SoftQNetwork(hidden_layers, n_hidden_units)
self.softq_network2 = SoftQNetwork(hidden_layers, n_hidden_units)
self.softq_target_network2 = SoftQNetwork(hidden_layers, n_hidden_units)
        # Building up the 2 soft q-functions with their respective targets
input1 = tf.keras.Input(shape=(obs_dim), dtype=tf.float32)
input2 = tf.keras.Input(shape=(n_actions), dtype=tf.float32)
self.softq_network(input1, input2)
self.softq_target_network(input1, input2)
hard_update(self.softq_network.variables, self.softq_target_network.variables)
self.softq_network2(input1, input2)
self.softq_target_network2(input1, input2)
hard_update(self.softq_network2.variables, self.softq_target_network2.variables)
# Optimizers for the networks
self.softq_optimizer = tfa.optimizers.RectifiedAdam(learning_rate=lr)
self.softq_optimizer2 = tfa.optimizers.RectifiedAdam(learning_rate=lr)
self.actor_optimizer = tfa.optimizers.RectifiedAdam(learning_rate=lr)
def softq_value(self, states: np.ndarray, actions: np.ndarray):
return self.softq_network(states, actions)
def softq_value2(self, states: np.ndarray, actions: np.ndarray):
return self.softq_network2(states, actions)
def actions(self, states: np.ndarray) -> np.ndarray:
"""Get the actions for a batch of states."""
return self.actor_network(states)[0]
def action(self, state: np.ndarray) -> np.ndarray:
"""Get the action for a single state."""
return self.actor_network(state[None, :])[0][0]
def step(self, obs):
return self.actor_network(obs)[0]
@tf.function
def train(self, sample, action_batch, batch_size):
state0_batch = sample["states0"]
reward_batch = sample["rewards"]
state1_batch = sample["states1"]
terminal1_batch = sample["terminals1"]
# Computing action and a_tilde
action, action_logprob2 = self.actor_network(state1_batch)
value_target1 = self.softq_target_network(state1_batch, action)
value_target2 = self.softq_target_network2(state1_batch, action)
# Taking the minimum of the q-functions values
next_value_batch = tf.math.minimum(value_target1, value_target2) - self.temperature * action_logprob2
# Computing target for q-functions
softq_targets = reward_batch + self.gamma * (1 - terminal1_batch) * tf.reshape(next_value_batch, [-1])
softq_targets = tf.reshape(softq_targets, [batch_size, 1])
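        # (Descriptive note, added) This is the standard soft Bellman target:
        #   y = r + gamma * (1 - done) * (min(Q1', Q2')(s', a') - alpha * log pi(a'|s'))
        # with a' freshly sampled from the current policy at s' and alpha being
        # self.temperature.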
# Gradient descent for the first q-function
with tf.GradientTape() as softq_tape:
softq = self.softq_network(state0_batch, action_batch)
softq_loss = tf.reduce_mean(tf.square(softq - softq_targets))
# Gradient descent for the second q-function
with tf.GradientTape() as softq_tape2:
softq2 = self.softq_network2(state0_batch, action_batch)
softq_loss2 = tf.reduce_mean(tf.square(softq2 - softq_targets))
# Gradient ascent for the policy (actor)
with tf.GradientTape() as actor_tape:
actions, action_logprob = self.actor_network(state0_batch)
new_softq = tf.math.minimum(self.softq_network(state0_batch, actions), self.softq_network2(state0_batch, actions))
# Loss implementation from the pseudocode -> works worse
#actor_loss = tf.reduce_mean(action_logprob - new_softq)
# New actor_loss -> works better
advantage = tf.stop_gradient(action_logprob - new_softq)
actor_loss = tf.reduce_mean(action_logprob * advantage)
# Computing the gradients with the tapes and applying them
actor_gradients = actor_tape.gradient(actor_loss, self.actor_network.trainable_weights)
softq_gradients = softq_tape.gradient(softq_loss, self.softq_network.trainable_weights)
softq_gradients2 = softq_tape2.gradient(softq_loss2, self.softq_network2.trainable_weights)
# Minimize gradients wrt weights
self.actor_optimizer.apply_gradients(zip(actor_gradients, self.actor_network.trainable_weights))
self.softq_optimizer.apply_gradients(zip(softq_gradients, self.softq_network.trainable_weights))
self.softq_optimizer2.apply_gradients(zip(softq_gradients2, self.softq_network2.trainable_weights))
# Update the weights of the soft q-function target networks
soft_update(self.softq_network.variables, self.softq_target_network.variables, self.polyak_coef)
soft_update(self.softq_network2.variables, self.softq_target_network2.variables, self.polyak_coef)
# Computing mean and variance of soft-q function
softq_mean, softq_variance = tf.nn.moments(softq, axes=[0])
return softq_mean[0], tf.sqrt(softq_variance[0]), softq_loss, actor_loss, tf.reduce_mean(action_logprob)
def save(self):
self.actor_network.save_weights(self.save_dir+"/actor.ckpt")
print("Model saved!")
def load(self, filepath):
self.actor_network.load_weights(filepath+"/actor.ckpt")
print("Model loaded!")
| 44.178571 | 126 | 0.703449 | 961 | 7,422 | 5.208117 | 0.217482 | 0.057542 | 0.031968 | 0.035165 | 0.363037 | 0.263936 | 0.247952 | 0.174426 | 0.174426 | 0.142258 | 0 | 0.01379 | 0.208569 | 7,422 | 167 | 127 | 44.443114 | 0.83827 | 0.207087 | 0 | 0 | 0 | 0 | 0.022766 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.122222 | false | 0 | 0.033333 | 0.033333 | 0.233333 | 0.022222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa3b3f40dc899e70bf6647c2e9cda4288572204b | 19,238 | py | Python | apteco_api/models/communication_statistics.py | Apteco/apteco-api | 7440c98ab10ea6d8a5997187f6fc739ce1c75d2b | [
"Apache-2.0"
] | 2 | 2020-05-21T14:24:16.000Z | 2020-12-03T19:56:34.000Z | apteco_api/models/communication_statistics.py | Apteco/apteco-api | 7440c98ab10ea6d8a5997187f6fc739ce1c75d2b | [
"Apache-2.0"
] | null | null | null | apteco_api/models/communication_statistics.py | Apteco/apteco-api | 7440c98ab10ea6d8a5997187f6fc739ce1c75d2b | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
Apteco API
An API to allow access to Apteco Marketing Suite resources # noqa: E501
The version of the OpenAPI document: v2
Contact: support@apteco.com
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
class CommunicationStatistics(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'days': 'list[str]',
'communications_counts': 'list[int]',
'total_communications_count': 'int',
'deliveries_counts': 'list[int]',
'total_deliveries_count': 'int',
'messages_counts': 'list[int]',
'total_messages_count': 'int',
'campaigns_counts': 'list[int]',
'total_campaigns_count': 'int',
'people_counts': 'list[int]',
'communication_statistics_timestamp': 'datetime',
'campaign_statistics_timestamp': 'datetime',
'people_statistics_timestamp': 'datetime'
}
attribute_map = {
'days': 'days',
'communications_counts': 'communicationsCounts',
'total_communications_count': 'totalCommunicationsCount',
'deliveries_counts': 'deliveriesCounts',
'total_deliveries_count': 'totalDeliveriesCount',
'messages_counts': 'messagesCounts',
'total_messages_count': 'totalMessagesCount',
'campaigns_counts': 'campaignsCounts',
'total_campaigns_count': 'totalCampaignsCount',
'people_counts': 'peopleCounts',
'communication_statistics_timestamp': 'communicationStatisticsTimestamp',
'campaign_statistics_timestamp': 'campaignStatisticsTimestamp',
'people_statistics_timestamp': 'peopleStatisticsTimestamp'
}
def __init__(self, days=None, communications_counts=None, total_communications_count=None, deliveries_counts=None, total_deliveries_count=None, messages_counts=None, total_messages_count=None, campaigns_counts=None, total_campaigns_count=None, people_counts=None, communication_statistics_timestamp=None, campaign_statistics_timestamp=None, people_statistics_timestamp=None): # noqa: E501
"""CommunicationStatistics - a model defined in OpenAPI""" # noqa: E501
self._days = None
self._communications_counts = None
self._total_communications_count = None
self._deliveries_counts = None
self._total_deliveries_count = None
self._messages_counts = None
self._total_messages_count = None
self._campaigns_counts = None
self._total_campaigns_count = None
self._people_counts = None
self._communication_statistics_timestamp = None
self._campaign_statistics_timestamp = None
self._people_statistics_timestamp = None
self.discriminator = None
self.days = days
self.communications_counts = communications_counts
self.total_communications_count = total_communications_count
self.deliveries_counts = deliveries_counts
self.total_deliveries_count = total_deliveries_count
self.messages_counts = messages_counts
self.total_messages_count = total_messages_count
self.campaigns_counts = campaigns_counts
self.total_campaigns_count = total_campaigns_count
self.people_counts = people_counts
if communication_statistics_timestamp is not None:
self.communication_statistics_timestamp = communication_statistics_timestamp
if campaign_statistics_timestamp is not None:
self.campaign_statistics_timestamp = campaign_statistics_timestamp
if people_statistics_timestamp is not None:
self.people_statistics_timestamp = people_statistics_timestamp
@property
def days(self):
"""Gets the days of this CommunicationStatistics. # noqa: E501
The set of days where communication information is available # noqa: E501
:return: The days of this CommunicationStatistics. # noqa: E501
:rtype: list[str]
"""
return self._days
@days.setter
def days(self, days):
"""Sets the days of this CommunicationStatistics.
The set of days where communication information is available # noqa: E501
:param days: The days of this CommunicationStatistics. # noqa: E501
:type: list[str]
"""
if days is None:
raise ValueError("Invalid value for `days`, must not be `None`") # noqa: E501
self._days = days
@property
def communications_counts(self):
"""Gets the communications_counts of this CommunicationStatistics. # noqa: E501
The set of counts representing the number of communications on the corresponding day. The first figure is data for the first day in the Days list, and so on. # noqa: E501
:return: The communications_counts of this CommunicationStatistics. # noqa: E501
:rtype: list[int]
"""
return self._communications_counts
@communications_counts.setter
def communications_counts(self, communications_counts):
"""Sets the communications_counts of this CommunicationStatistics.
The set of counts representing the number of communications on the corresponding day. The first figure is data for the first day in the Days list, and so on. # noqa: E501
:param communications_counts: The communications_counts of this CommunicationStatistics. # noqa: E501
:type: list[int]
"""
if communications_counts is None:
raise ValueError("Invalid value for `communications_counts`, must not be `None`") # noqa: E501
self._communications_counts = communications_counts
@property
def total_communications_count(self):
"""Gets the total_communications_count of this CommunicationStatistics. # noqa: E501
The total number of communications across all days # noqa: E501
:return: The total_communications_count of this CommunicationStatistics. # noqa: E501
:rtype: int
"""
return self._total_communications_count
@total_communications_count.setter
def total_communications_count(self, total_communications_count):
"""Sets the total_communications_count of this CommunicationStatistics.
The total number of communications across all days # noqa: E501
:param total_communications_count: The total_communications_count of this CommunicationStatistics. # noqa: E501
:type: int
"""
if total_communications_count is None:
raise ValueError("Invalid value for `total_communications_count`, must not be `None`") # noqa: E501
self._total_communications_count = total_communications_count
@property
def deliveries_counts(self):
"""Gets the deliveries_counts of this CommunicationStatistics. # noqa: E501
The set of counts representing the number of deliveries that have run on the corresponding day. The first figure is data for the first day in the Days list, and so on. # noqa: E501
:return: The deliveries_counts of this CommunicationStatistics. # noqa: E501
:rtype: list[int]
"""
return self._deliveries_counts
@deliveries_counts.setter
def deliveries_counts(self, deliveries_counts):
"""Sets the deliveries_counts of this CommunicationStatistics.
The set of counts representing the number of deliveries that have run on the corresponding day. The first figure is data for the first day in the Days list, and so on. # noqa: E501
:param deliveries_counts: The deliveries_counts of this CommunicationStatistics. # noqa: E501
:type: list[int]
"""
if deliveries_counts is None:
raise ValueError("Invalid value for `deliveries_counts`, must not be `None`") # noqa: E501
self._deliveries_counts = deliveries_counts
@property
def total_deliveries_count(self):
"""Gets the total_deliveries_count of this CommunicationStatistics. # noqa: E501
The total number of deliveries that have run across all days # noqa: E501
:return: The total_deliveries_count of this CommunicationStatistics. # noqa: E501
:rtype: int
"""
return self._total_deliveries_count
@total_deliveries_count.setter
def total_deliveries_count(self, total_deliveries_count):
"""Sets the total_deliveries_count of this CommunicationStatistics.
The total number of deliveries that have run across all days # noqa: E501
:param total_deliveries_count: The total_deliveries_count of this CommunicationStatistics. # noqa: E501
:type: int
"""
if total_deliveries_count is None:
raise ValueError("Invalid value for `total_deliveries_count`, must not be `None`") # noqa: E501
self._total_deliveries_count = total_deliveries_count
@property
def messages_counts(self):
"""Gets the messages_counts of this CommunicationStatistics. # noqa: E501
The set of counts representing the number of messages that have had at least one delivery run on the corresponding day. The first figure is data for the first day in the Days list, and so on. # noqa: E501
:return: The messages_counts of this CommunicationStatistics. # noqa: E501
:rtype: list[int]
"""
return self._messages_counts
@messages_counts.setter
def messages_counts(self, messages_counts):
"""Sets the messages_counts of this CommunicationStatistics.
The set of counts representing the number of messages that have had at least one delivery run on the corresponding day. The first figure is data for the first day in the Days list, and so on. # noqa: E501
:param messages_counts: The messages_counts of this CommunicationStatistics. # noqa: E501
:type: list[int]
"""
if messages_counts is None:
raise ValueError("Invalid value for `messages_counts`, must not be `None`") # noqa: E501
self._messages_counts = messages_counts
@property
def total_messages_count(self):
"""Gets the total_messages_count of this CommunicationStatistics. # noqa: E501
The total number of messages that have had at least one delivery run across all days # noqa: E501
:return: The total_messages_count of this CommunicationStatistics. # noqa: E501
:rtype: int
"""
return self._total_messages_count
@total_messages_count.setter
def total_messages_count(self, total_messages_count):
"""Sets the total_messages_count of this CommunicationStatistics.
The total number of messages that have had at least one delivery run across all days # noqa: E501
:param total_messages_count: The total_messages_count of this CommunicationStatistics. # noqa: E501
:type: int
"""
if total_messages_count is None:
raise ValueError("Invalid value for `total_messages_count`, must not be `None`") # noqa: E501
self._total_messages_count = total_messages_count
@property
def campaigns_counts(self):
"""Gets the campaigns_counts of this CommunicationStatistics. # noqa: E501
The set of counts representing the number of campaigns that have had at least one delivery run on the corresponding day. The first figure is data for the first day in the Days list, and so on. # noqa: E501
:return: The campaigns_counts of this CommunicationStatistics. # noqa: E501
:rtype: list[int]
"""
return self._campaigns_counts
@campaigns_counts.setter
def campaigns_counts(self, campaigns_counts):
"""Sets the campaigns_counts of this CommunicationStatistics.
The set of counts representing the number of campaigns that have had at least one delivery run on the corresponding day. The first figure is data for the first day in the Days list, and so on. # noqa: E501
:param campaigns_counts: The campaigns_counts of this CommunicationStatistics. # noqa: E501
:type: list[int]
"""
if campaigns_counts is None:
raise ValueError("Invalid value for `campaigns_counts`, must not be `None`") # noqa: E501
self._campaigns_counts = campaigns_counts
@property
def total_campaigns_count(self):
"""Gets the total_campaigns_count of this CommunicationStatistics. # noqa: E501
The total number of campaigns that have had at least one delivery run across all days # noqa: E501
:return: The total_campaigns_count of this CommunicationStatistics. # noqa: E501
:rtype: int
"""
return self._total_campaigns_count
@total_campaigns_count.setter
def total_campaigns_count(self, total_campaigns_count):
"""Sets the total_campaigns_count of this CommunicationStatistics.
The total number of campaigns that have had at least one delivery run across all days # noqa: E501
:param total_campaigns_count: The total_campaigns_count of this CommunicationStatistics. # noqa: E501
:type: int
"""
if total_campaigns_count is None:
raise ValueError("Invalid value for `total_campaigns_count`, must not be `None`") # noqa: E501
self._total_campaigns_count = total_campaigns_count
@property
def people_counts(self):
"""Gets the people_counts of this CommunicationStatistics. # noqa: E501
The set of counts representing the number of unique people processed on the corresponding day. The first figure is data for the first day in the Days list, and so on. # noqa: E501
:return: The people_counts of this CommunicationStatistics. # noqa: E501
:rtype: list[int]
"""
return self._people_counts
@people_counts.setter
def people_counts(self, people_counts):
"""Sets the people_counts of this CommunicationStatistics.
The set of counts representing the number of unique people processed on the corresponding day. The first figure is data for the first day in the Days list, and so on. # noqa: E501
:param people_counts: The people_counts of this CommunicationStatistics. # noqa: E501
:type: list[int]
"""
if people_counts is None:
raise ValueError("Invalid value for `people_counts`, must not be `None`") # noqa: E501
self._people_counts = people_counts
@property
def communication_statistics_timestamp(self):
"""Gets the communication_statistics_timestamp of this CommunicationStatistics. # noqa: E501
The date and time that the communication statistics were calculated # noqa: E501
:return: The communication_statistics_timestamp of this CommunicationStatistics. # noqa: E501
:rtype: datetime
"""
return self._communication_statistics_timestamp
@communication_statistics_timestamp.setter
def communication_statistics_timestamp(self, communication_statistics_timestamp):
"""Sets the communication_statistics_timestamp of this CommunicationStatistics.
The date and time that the communication statistics were calculated # noqa: E501
:param communication_statistics_timestamp: The communication_statistics_timestamp of this CommunicationStatistics. # noqa: E501
:type: datetime
"""
self._communication_statistics_timestamp = communication_statistics_timestamp
@property
def campaign_statistics_timestamp(self):
"""Gets the campaign_statistics_timestamp of this CommunicationStatistics. # noqa: E501
The date and time that the delivery, message and campaign statistics were calculated # noqa: E501
:return: The campaign_statistics_timestamp of this CommunicationStatistics. # noqa: E501
:rtype: datetime
"""
return self._campaign_statistics_timestamp
@campaign_statistics_timestamp.setter
def campaign_statistics_timestamp(self, campaign_statistics_timestamp):
"""Sets the campaign_statistics_timestamp of this CommunicationStatistics.
The date and time that the delivery, message and campaign statistics were calculated # noqa: E501
:param campaign_statistics_timestamp: The campaign_statistics_timestamp of this CommunicationStatistics. # noqa: E501
:type: datetime
"""
self._campaign_statistics_timestamp = campaign_statistics_timestamp
@property
def people_statistics_timestamp(self):
"""Gets the people_statistics_timestamp of this CommunicationStatistics. # noqa: E501
The date and time that the people statistics were calculated # noqa: E501
:return: The people_statistics_timestamp of this CommunicationStatistics. # noqa: E501
:rtype: datetime
"""
return self._people_statistics_timestamp
@people_statistics_timestamp.setter
def people_statistics_timestamp(self, people_statistics_timestamp):
"""Sets the people_statistics_timestamp of this CommunicationStatistics.
The date and time that the people statistics were calculated # noqa: E501
:param people_statistics_timestamp: The people_statistics_timestamp of this CommunicationStatistics. # noqa: E501
:type: datetime
"""
self._people_statistics_timestamp = people_statistics_timestamp
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, CommunicationStatistics):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
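# Usage sketch (added for clarity; field values are illustrative):
#   stats = CommunicationStatistics(
#       days=["2020-01-01"], communications_counts=[5], total_communications_count=5,
#       deliveries_counts=[2], total_deliveries_count=2, messages_counts=[1],
#       total_messages_count=1, campaigns_counts=[1], total_campaigns_count=1,
#       people_counts=[4])
#   print(stats.to_str())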
| 41.640693 | 393 | 0.69082 | 2,226 | 19,238 | 5.766846 | 0.07637 | 0.048609 | 0.117473 | 0.100257 | 0.709278 | 0.627249 | 0.612448 | 0.512113 | 0.450027 | 0.400093 | 0 | 0.016627 | 0.243424 | 19,238 | 461 | 394 | 41.73102 | 0.865338 | 0.449007 | 0 | 0.07772 | 0 | 0 | 0.157917 | 0.064053 | 0 | 0 | 0 | 0 | 0 | 1 | 0.165803 | false | 0 | 0.015544 | 0 | 0.295337 | 0.010363 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa3bd02bcdff5c14d421931c497de918fe2a8ff5 | 446 | py | Python | corefacility/authorizations/google/entity/account/account_set.py | serik1987/corefacility | 78d84e19403361e83ef562e738473849f9133bef | [
"RSA-MD"
] | null | null | null | corefacility/authorizations/google/entity/account/account_set.py | serik1987/corefacility | 78d84e19403361e83ef562e738473849f9133bef | [
"RSA-MD"
] | null | null | null | corefacility/authorizations/google/entity/account/account_set.py | serik1987/corefacility | 78d84e19403361e83ef562e738473849f9133bef | [
"RSA-MD"
] | null | null | null | from core.entity.entity_sets.external_authorization_account_set import ExternalAuthorizationAccountSet
from .account_reader import AccountReader
class AccountSet(ExternalAuthorizationAccountSet):
"""
Declares basic ways to find a proper Google account
"""
_entity_name = "Google Account"
_entity_class = "authorizations.google.entity.account.Account"
_entity_reader_class = AccountReader
_alias_kwarg = "email"
| 24.777778 | 102 | 0.784753 | 46 | 446 | 7.304348 | 0.586957 | 0.116071 | 0.113095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154709 | 446 | 17 | 103 | 26.235294 | 0.891247 | 0.11435 | 0 | 0 | 0 | 0 | 0.166227 | 0.116095 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa3e69b9fb7647e55adf8701acf325badb2eb4f5 | 2,470 | py | Python | src/lin/log.py | Faronsince2016/lin-cms-flask-core | 2ba3e9a5924adce22a3531f52c65b19cb3b09d84 | [
"MIT"
] | null | null | null | src/lin/log.py | Faronsince2016/lin-cms-flask-core | 2ba3e9a5924adce22a3531f52c65b19cb3b09d84 | [
"MIT"
] | null | null | null | src/lin/log.py | Faronsince2016/lin-cms-flask-core | 2ba3e9a5924adce22a3531f52c65b19cb3b09d84 | [
"MIT"
] | null | null | null | """
log of Lin
~~~~~~~~~
log 模块,用户日志记录,只对管理员开放查询接口
:copyright: © 2018 by the Lin team.
:license: MIT, see LICENSE for more details.
"""
from functools import wraps
import re
from flask import Response, request
from flask_jwt_extended import get_current_user
from .core import find_info_by_ep, Log
REG_XP = r'[{](.*?)[}]'
OBJECTS = ['user', 'response', 'request']
class Logger(object):
# message template
template = None
def __init__(self, template=None):
if template:
self.template: str = template
elif self.template is None:
raise Exception('template must not be None!')
self.message = ''
self.response = None
self.user = None
def __call__(self, func):
@wraps(func)
def wrap(*args, **kwargs):
response: Response = func(*args, **kwargs)
self.response = response
self.user = get_current_user()
if not self.user:
raise Exception('Logger must be used in the login state')
self.message = self._parse_template()
self.write_log()
return response
return wrap
def write_log(self):
info = find_info_by_ep(request.endpoint)
authority = info.auth if info is not None else ''
status_code = getattr(self.response, 'status_code', None)
if status_code is None:
status_code = getattr(self.response, 'code', None)
if status_code is None:
status_code = 0
Log.create_log(message=self.message, user_id=self.user.id, user_name=self.user.username,
status_code=status_code, method=request.method,
path=request.path, authority=authority, commit=True)
    # Parse the custom message template
def _parse_template(self):
message = self.template
total = re.findall(REG_XP, message)
for it in total:
            assert '.' in it, '%s must contain exactly one "."' % it
i = it.rindex('.')
obj = it[:i]
            assert obj in OBJECTS, '%s must be one of user, response or request' % obj
prop = it[i + 1:]
if obj == 'user':
item = getattr(self.user, prop, '')
elif obj == 'response':
item = getattr(self.response, prop, '')
else:
item = getattr(request, prop, '')
message = message.replace('{%s}' % it, str(item))
return message
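# Usage sketch (added for clarity; the view function and template are illustrative):
#   @Logger(template='{user.username} accessed {request.path}')
#   def get_books():
#       ...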
| 31.666667 | 96 | 0.568421 | 292 | 2,470 | 4.678082 | 0.349315 | 0.058565 | 0.032943 | 0.01757 | 0.087848 | 0.052709 | 0.052709 | 0.052709 | 0.052709 | 0 | 0 | 0.003582 | 0.321862 | 2,470 | 77 | 97 | 32.077922 | 0.811343 | 0.062753 | 0 | 0.035088 | 0 | 0 | 0.075241 | 0.013123 | 0 | 0 | 0 | 0 | 0.035088 | 1 | 0.087719 | false | 0 | 0.087719 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa40e69dee6cf34ee0379d4695dc0af2ca9943bf | 1,197 | py | Python | python/DL_for_HTT/common/NN_settings.py | dzuolo/DL_for_HTT_mass | 79d56f3fa5b44642c9c64ffdadf5d87325ad032f | [
"MIT"
] | 1 | 2021-09-22T09:45:49.000Z | 2021-09-22T09:45:49.000Z | python/DL_for_HTT/common/NN_settings.py | dzuolo/DL_for_HTT_mass | 79d56f3fa5b44642c9c64ffdadf5d87325ad032f | [
"MIT"
] | null | null | null | python/DL_for_HTT/common/NN_settings.py | dzuolo/DL_for_HTT_mass | 79d56f3fa5b44642c9c64ffdadf5d87325ad032f | [
"MIT"
] | null | null | null | # NN settings
# Higgs mass range in GeV
min_mass = 50
max_mass = 250
# channels to process
channels = "inclusive"
# NN structure
Nlayers = 10
Nneurons = 100
# NN training
loss = "mae"
optimizer = "Adam"
w_init_mode = "glorot_uniform"
activation = "relu"
regularizer = None
epochs = 200
# Dataset splitting
train_frac = 0.7
valid_frac = 0.2
random_seed = 2020
# Target and inputs
target = "tauH_SVFIT_mass"
default_model_inputs_file = "PuppiMET_with_METcov_j1j2jr_Nnu_Npu"
model_inputs_file = __import__('DL_for_HTT.common.model_inputs.{}'.format(default_model_inputs_file), fromlist=[''])
inputs = model_inputs_file.inputs
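# Maps the NN input feature names used in this package to the corresponding
# branch names in the Heppy ntuples.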
inputs_from_Heppy = {
"tau1_pt_reco" : "l1_pt",
"tau1_eta_reco" : "l1_eta",
"tau1_phi_reco" : "l1_phi",
"tau2_pt_reco" : "l2_pt",
"tau2_eta_reco" : "l2_eta",
"tau2_phi_reco" : "l2_phi",
"jet1_pt_reco" : "j1_pt",
"jet1_eta_reco" : "j1_eta",
"jet1_phi_reco" : "j1_phi",
"jet2_pt_reco" : "j2_pt",
"jet2_eta_reco" : "j2_eta",
"jet2_phi_reco" : "j2_phi",
"MET_pt_reco" : "met",
"MET_phi_reco" : "metphi",
"mT1_reco" : "l1_mt",
"mT2_reco" : "l2_mt",
"mTtt_reco" : "mt_tt",
"mTtot_reco" : "mt_tot",
}
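# Minimal sketch (illustrative; not used elsewhere in this package) showing
# how these settings could drive a Keras model build. `n_inputs` and the
# tensorflow.keras import path are assumptions, not part of this module.
def build_model_sketch(n_inputs):
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    model = Sequential()
    # First hidden layer declares the input width; the remaining layers repeat it.
    model.add(Dense(Nneurons, activation=activation, input_dim=n_inputs,
                    kernel_initializer=w_init_mode, kernel_regularizer=regularizer))
    for _ in range(Nlayers - 1):
        model.add(Dense(Nneurons, activation=activation,
                        kernel_initializer=w_init_mode, kernel_regularizer=regularizer))
    model.add(Dense(1))  # single regression output: the Higgs mass target
    model.compile(loss=loss, optimizer=optimizer)
    return model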
| 22.166667 | 116 | 0.674185 | 179 | 1,197 | 4.050279 | 0.50838 | 0.075862 | 0.082759 | 0.06069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052147 | 0.182957 | 1,197 | 53 | 117 | 22.584906 | 0.689162 | 0.096909 | 0 | 0 | 0 | 0 | 0.396086 | 0.063374 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.026316 | 0 | 0.026316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa426b96756871283c3639d141ad5e05677a028f | 6,871 | py | Python | wharf_management/wharf_management/doctype/enquire_cargo_fees/enquire_cargo_fees.py | SOLOSOFTMA/wharf_management | 244458cf033351f80b5148edb05495d80612589a | [
"MIT"
] | null | null | null | wharf_management/wharf_management/doctype/enquire_cargo_fees/enquire_cargo_fees.py | SOLOSOFTMA/wharf_management | 244458cf033351f80b5148edb05495d80612589a | [
"MIT"
] | null | null | null | wharf_management/wharf_management/doctype/enquire_cargo_fees/enquire_cargo_fees.py | SOLOSOFTMA/wharf_management | 244458cf033351f80b5148edb05495d80612589a | [
"MIT"
] | 1 | 2021-04-29T16:02:52.000Z | 2021-04-29T16:02:52.000Z | # -*- coding: utf-8 -*-
# Copyright (c) 2020, Sione Taumoepeau and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.core.doctype.user_permission.user_permission import get_permitted_documents
from frappe import _
from frappe.model.document import Document
from frappe.utils import cstr, today, flt
from wharf_management.wharf_management.doctype.wharf_payment_entry.wharf_payment_entry import get_storage_days
class EnquireCargoFees(Document):
def get_storage(self):
charged_days, storage_fee, wharfage, wharfage_fee, storage_days, grace_days, qty = 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
currency = frappe.get_value('Company', "Ports Authority Tonga", "default_currency")
cargo_ref = self.cargo_ref
eta_date_cargo = frappe.db.get_value("Cargo", cargo_ref, "eta_date")
storage_days = get_storage_days(eta_date_cargo, today())
        # was `or`, which made this check always true; require a real reference
        if cargo_ref and cargo_ref != " ":
container_size, container_content, cargo_type, volume, net_weight, litre = frappe.db.get_value('Cargo', cargo_ref, ['container_size',
'container_content', 'cargo_type', 'volume', 'net_weight','litre'])
if cargo_type == 'Vehicles':
grace_days, fee_amount, storage_item_code, storage_description = frappe.db.get_value("Wharf Fees", {"wharf_fee_category":"Storage Fee","cargo_type":cargo_type}, ["grace_days", "fee_amount", "item_code", "description"])
wharfage_fee, wharf_item_code, wharfage_description = frappe.db.get_value("Wharf Fees", {"wharf_fee_category":"Wharfage Fee", "cargo_type":cargo_type}, ["fee_amount", "item_code", "description"])
if storage_days > flt(grace_days):
charged_days = storage_days - flt(grace_days)
storage_fee = ((storage_days - flt(grace_days)) * fee_amount)
if volume > net_weight:
wharfage = volume * wharfage_fee
qty = volume
                else:  # net_weight >= volume: charge wharfage on net weight
wharfage = net_weight * wharfage_fee
qty = net_weight
if cargo_type in ('Container', 'Tank Tainers', 'Flatrack'):
grace_days, fee_amount, storage_item_code, storage_description = frappe.db.get_value("Wharf Fees", {"wharf_fee_category":"Storage Fee","cargo_type":cargo_type,
"container_size":container_size, "container_content":container_content}, ["grace_days", "fee_amount", "item_code", "description"])
if storage_days > flt(grace_days):
charged_days = storage_days - flt(grace_days)
storage_fee = ((storage_days - flt(grace_days)) * fee_amount)
if cargo_type in ('Container', 'Flatrack'):
wharfage_fee, wharf_item_code, wharfage_description = frappe.db.get_value("Wharf Fees", {"wharf_fee_category":"Wharfage Fee","cargo_type":cargo_type,
"container_size":container_size}, ["fee_amount", "item_code", "description"])
qty = 1
if cargo_type == 'Tank Tainers':
wharfage_fee, wharf_item_code, wharfage_description = frappe.db.get_value("Wharf Fees", {"wharf_fee_category":"Wharfage Fee","cargo_type":cargo_type,
"container_size":container_size}, ["fee_amount", "item_code", "description"])
wharfage = flt(litre/1000) * wharfage_fee
qty = 1
if cargo_type in ('Heavy Vehicles', 'Break Bulk', 'Loose Cargo'):
grace_days, fee_amount, storage_item_code, storage_description = frappe.db.get_value("Wharf Fees", {"wharf_fee_category":"Storage Fee","cargo_type":cargo_type}, ["grace_days", "fee_amount", "item_code", "description"])
wharfage_fee, wharf_item_code, wharfage_description = frappe.db.get_value("Wharf Fees", {"wharf_fee_category":"Wharfage Fee","cargo_type":cargo_type}, ["fee_amount", "item_code", "description"])
if volume > net_weight:
if storage_days > flt(grace_days):
charged_days = storage_days - flt(grace_days)
storage_fee = ((storage_days - flt(grace_days)) * fee_amount * flt(volume))
wharfage = volume * wharfage_fee
qty = volume
                else:  # net_weight >= volume: charge on net weight
if storage_days > flt(grace_days):
charged_days = storage_days - flt(grace_days)
storage_fee = ((storage_days - flt(grace_days)) * fee_amount * flt(net_weight))
wharfage = net_weight * wharfage_fee
qty = net_weight
if storage_days <= flt(grace_days):
storage_fee = 0.0
charged_days = 0.0
self.append("wharf_fee_item_check", {
"item": storage_item_code,
"description": storage_description,
"price": fee_amount,
"qty": charged_days,
"total": float(storage_fee)
})
self.append("wharf_fee_item_check", {
"item": wharf_item_code,
"description": wharfage_description,
"price": wharfage_fee,
"qty": qty,
"total": float(wharfage_fee * qty)
})
self.total_fee_to_paid = float(storage_fee + (wharfage_fee * qty))
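        # Worked example with illustrative figures: 10 storage days against a
        # 3-day grace period at a 5.00 daily fee gives
        # storage_fee = (10 - 3) * 5.00 = 35.00, and wharfage_fee * qty is
        # then added on top to form total_fee_to_paid.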
# return storage_days, grace_days, charged_days, fee_amount, storage_fee, wharfage, flt(storage_fee + wharfage)
@frappe.whitelist()
def get_storage_fees(docname):
return frappe.db.sql("""select docB.item_code, docA.description,
Sum(docB.charged_storage_days) as qty,
Sum(docB.storage_fee_price) as price,
Sum(docB.storage_fee) as total
from `tabCargo References Check` as docB, `tabWharf Fees` as docA
WHERE docB.wharfage_item_code = docA.item_name AND docB.parent = %s group by docB.item_code""", (docname), as_dict=1)
@frappe.whitelist()
def get_wharfage_fees(docname):
return frappe.db.sql("""select docB.wharfage_item_code, docA.description, docB.wharfage_fee_price as price,
CASE
WHEN docB.cargo_type IN ("Heavy Vehicles", "Break Bulk", "Loose Cargo", "Vehicles", "Split Ports")
THEN
CASE
WHEN Sum(docB.volume) < Sum(docB.net_weight)
THEN Sum(docB.net_weight) ELSE Sum(docB.volume) END
WHEN docB.cargo_type IN ("Container", "Flatrack") THEN Count(docA.item_name)
WHEN docB.cargo_type IN ("Tank Tainers") THEN Sum(docB.litre/1000)
END AS qty,
Sum(docB.wharfage_fee) as total
from `tabCargo References Check` as docB, `tabWharf Fees` as docA
where docB.wharfage_item_code = docA.item_name and docB.parent = %s group by docB.wharfage_item_code""", (docname), as_dict=1)
@frappe.whitelist()
def clear_table():
frappe.db.sql(""" DELETE from `tabCargo References Check`""", as_dict=1)
frappe.db.sql(""" DELETE from `tabWharf Fee Item Check`""", as_dict=1) | 52.853846 | 230 | 0.657983 | 874 | 6,871 | 4.871854 | 0.144165 | 0.050728 | 0.042743 | 0.058008 | 0.648192 | 0.593236 | 0.591357 | 0.560592 | 0.537341 | 0.49225 | 0 | 0.006979 | 0.228351 | 6,871 | 130 | 231 | 52.853846 | 0.796115 | 0.034347 | 0 | 0.413462 | 0 | 0.009615 | 0.323028 | 0.035741 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.067308 | 0.019231 | 0.134615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa44e05e04e4c5b44ab2ad5733cf2c278329bff4 | 1,941 | py | Python | leetcode/python/binary_search_trees/balance_binary_search_tree.py | ajeet1308/code_problems | 5d99839b6319295c6d81dd86775c46a536e7a1ca | [
"MIT"
] | 61 | 2020-09-26T19:57:44.000Z | 2022-03-09T18:51:44.000Z | leetcode/python/binary_search_trees/balance_binary_search_tree.py | ajeet1308/code_problems | 5d99839b6319295c6d81dd86775c46a536e7a1ca | [
"MIT"
] | 88 | 2020-09-19T20:00:27.000Z | 2021-10-31T09:41:57.000Z | leetcode/python/binary_search_trees/balance_binary_search_tree.py | ajeet1308/code_problems | 5d99839b6319295c6d81dd86775c46a536e7a1ca | [
"MIT"
] | 218 | 2020-09-20T08:18:03.000Z | 2022-01-30T23:13:16.000Z | # Given a binary search tree, return a balanced binary search tree with the same node values.
# A binary search tree is balanced if and only if the depth of the two subtrees of every node never differ by more than 1.
# If there is more than one answer, return any of them.
# Definition for a binary tree node.
class TreeNode:
def __init__(self, val=0, left=None, right=None):
self.val = val
self.left = left
self.right = right
def insert(self, data):
if self.val:
if data < self.val:
if self.left is None:
                    self.left = TreeNode(data)  # was Node, which is undefined
return
else:
self.left.insert(data)
            elif data > self.val:  # the attribute is val, not data
if self.right is None:
                    self.right = TreeNode(data)
return
else:
self.right.insert(data)
else:
self.val = data
def tree2list(sorted_list, node):
    # In-order traversal: values are appended in ascending (sorted) order.
    if node is None:
        return sorted_list
    tree2list(sorted_list, node.left)
    sorted_list.append(node.val)
    tree2list(sorted_list, node.right)
    return sorted_list
def insert_balance_tree(sorted_list, tree):
    # Insert the middle element first so each subtree stays height-balanced.
    if not sorted_list:
        return
    mid = len(sorted_list) // 2
    tree.insert(sorted_list[mid])
    insert_balance_tree(sorted_list[:mid], tree)
    insert_balance_tree(sorted_list[mid + 1:], tree)
class Solution2:
def balanceBST(self, root: TreeNode) -> TreeNode:
list_tree = []
list_tree = tree2list(list_tree, root)
new_root = TreeNode()
new_root.val = None
insert_balance_tree(list_tree, new_root)
return new_root
# Right Solution
class Solution:
def balanceBST(self, root: TreeNode) -> TreeNode:
v = []
def dfs(node):
if node:
dfs(node.left)
v.append(node.val)
dfs(node.right)
dfs(root)
def bst(v):
if not v:
return None
mid = len(v)//2
root = TreeNode(v[mid])
root.left = bst(v[:mid])
root.right = bst(v[mid+1:])
return root
return bst(v)
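# Quick sanity check (illustrative): build a fully right-skewed BST from
# ascending inserts, then balance it with the accepted solution above.
if __name__ == '__main__':
    root = TreeNode(1)
    for v in range(2, 8):
        root.insert(v)  # degenerate, linked-list shaped tree of height 7
    balanced = Solution().balanceBST(root)
    print(tree2list([], balanced))  # -> [1, 2, 3, 4, 5, 6, 7]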
| 23.385542 | 122 | 0.68779 | 310 | 1,941 | 4.196774 | 0.216129 | 0.10761 | 0.052267 | 0.053036 | 0.234435 | 0.116833 | 0.059954 | 0 | 0 | 0 | 0 | 0.007777 | 0.205049 | 1,941 | 83 | 123 | 23.385542 | 0.835386 | 0.162803 | 0 | 0.126984 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.126984 | false | 0 | 0 | 0 | 0.301587 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa454a42e4545b23661aaeb2a10be8c19c6a0a6b | 809 | py | Python | Module3/andrews_curves.py | KBtooeasy/python-for-data-science | a1539070d5a5c45dbd4c80ed3a0d09c81854f37e | [
"MIT"
] | 1 | 2018-05-29T07:57:34.000Z | 2018-05-29T07:57:34.000Z | Module3/andrews_curves.py | KBtooeasy/python-for-data-science | a1539070d5a5c45dbd4c80ed3a0d09c81854f37e | [
"MIT"
] | null | null | null | Module3/andrews_curves.py | KBtooeasy/python-for-data-science | a1539070d5a5c45dbd4c80ed3a0d09c81854f37e | [
"MIT"
] | null | null | null | from sklearn.datasets import load_iris
from pandas.plotting import andrews_curves  # pandas.tools.plotting was removed in pandas 0.20
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
# Look pretty...
matplotlib.style.use('ggplot')
# If the above line throws an error, use plt.style.use('ggplot') instead
# Load up SKLearn's Iris Dataset into a Pandas Dataframe
data = load_iris()
df = pd.DataFrame(data.data, columns=data.feature_names)
df['target_names'] = [data.target_names[i] for i in data.target]
# Andrews Curves Start Here:
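# Each sample x = (x1, x2, x3, ...) is plotted as the curve
#   f_x(t) = x1 / sqrt(2) + x2 * sin(t) + x3 * cos(t) + x4 * sin(2t) + ...
# over -pi < t < pi, so samples from the same class trace similar curves.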
plt.figure()
andrews_curves(df, 'target_names')
plt.show()
# Imshow
plt.imshow(df.corr(), cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
tick_marks = [i for i in range(len(df.columns))]
plt.xticks(tick_marks, df.columns, rotation='vertical')
plt.yticks(tick_marks, df.columns)
plt.show() | 29.962963 | 72 | 0.765142 | 129 | 809 | 4.713178 | 0.496124 | 0.064145 | 0.046053 | 0.023026 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108776 | 809 | 27 | 73 | 29.962963 | 0.843273 | 0.21508 | 0 | 0.111111 | 0 | 0 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.277778 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa45e46b6e01c1295ce52282eddfbcfb5d786933 | 1,624 | py | Python | nodeconductor/cost_tracking/management/commands/generatepriceestimates.py | p-p-m/nodeconductor | bc702302ef65c89793452f0fd6ca9a6bec79782f | [
"Apache-2.0"
] | null | null | null | nodeconductor/cost_tracking/management/commands/generatepriceestimates.py | p-p-m/nodeconductor | bc702302ef65c89793452f0fd6ca9a6bec79782f | [
"Apache-2.0"
] | null | null | null | nodeconductor/cost_tracking/management/commands/generatepriceestimates.py | p-p-m/nodeconductor | bc702302ef65c89793452f0fd6ca9a6bec79782f | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8
from __future__ import unicode_literals
import random
from django.core.management.base import BaseCommand
from django.utils import timezone
from nodeconductor.cost_tracking import models
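# Usage: invoked like any Django management command, e.g.
#   python manage.py generatepriceestimates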
class Command(BaseCommand):
def handle(self, *args, **options):
current_month = timezone.now().month
current_year = timezone.now().year
for model in models.PriceEstimate.get_estimated_models():
self.stdout.write('Creating price estimates for all instance of model: {} ...'.format(model.__name__))
estimates = []
for obj in model.objects.all():
self.stdout.write(' - price estimates for object: {}'.format(obj))
for i in range(6):
year = current_year
month = current_month - i
if month < 1:
year = current_year - 1
month += 12
estimates.append(
models.PriceEstimate(
scope=obj,
total=random.randint(100, 2000),
details={
'ram': random.randint(50, 200),
'disk': random.randint(50, 200),
'cpu': random.randint(50, 200),
},
year=year,
month=month,
)
)
models.PriceEstimate.objects.bulk_create(estimates)
self.stdout.write('... Done')
| 36.909091 | 114 | 0.481527 | 146 | 1,624 | 5.232877 | 0.486301 | 0.068063 | 0.058901 | 0.070681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030172 | 0.428571 | 1,624 | 43 | 115 | 37.767442 | 0.793103 | 0.010468 | 0 | 0 | 0 | 0 | 0.067913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.142857 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa46828ed04982c08c0b878f96d1b300a3fa9173 | 828 | py | Python | examples/vhdl/com/run.py | nordicneurolab/vunit | c69c99512d7cebe915f77653b473853b495886d2 | [
"Artistic-2.0"
] | 3 | 2021-03-09T15:01:40.000Z | 2022-02-05T12:11:55.000Z | examples/vhdl/com/run.py | noasic/vunit | 4d65f1283ede458f8941910cd40787edb7a9271c | [
"Artistic-2.0",
"Apache-2.0"
] | 4 | 2022-02-04T08:24:33.000Z | 2022-03-14T17:06:48.000Z | examples/vhdl/com/run.py | noasic/vunit | 4d65f1283ede458f8941910cd40787edb7a9271c | [
"Artistic-2.0",
"Apache-2.0"
] | 4 | 2020-03-25T16:49:30.000Z | 2022-02-01T11:18:12.000Z | # This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
#
# Copyright (c) 2014-2020, Lars Asplund lars.anders.asplund@gmail.com
"""
Communication library
---------------------
Demonstrates the ``com`` message passing package which can be used
to communicate arbitrary objects between processes. Further reading
can be found in the :ref:`com user guide <com_user_guide>`.
"""
from pathlib import Path
from vunit import VUnit
VU = VUnit.from_argv()
VU.add_com()
VU.add_verification_components()
VU.add_osvvm()
ROOT = Path(__file__).parent
VU.add_library("lib").add_source_files(ROOT / "src" / "*.vhd")
VU.add_library("tb_lib").add_source_files(ROOT / "test" / "*.vhd")
VU.main()
| 27.6 | 75 | 0.721014 | 132 | 828 | 4.386364 | 0.621212 | 0.043178 | 0.041451 | 0.058722 | 0.072539 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016783 | 0.136473 | 828 | 29 | 76 | 28.551724 | 0.793007 | 0.60628 | 0 | 0 | 0 | 0 | 0.082803 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa4799e17117bd1b832c35ad6797d7d3584d7a9e | 446 | py | Python | network/process_value_array.py | sdyz5210/python | 78f9999f94d92d9ca7fde6f18acec7d3abd422ef | [
"BSD-3-Clause"
] | null | null | null | network/process_value_array.py | sdyz5210/python | 78f9999f94d92d9ca7fde6f18acec7d3abd422ef | [
"BSD-3-Clause"
] | null | null | null | network/process_value_array.py | sdyz5210/python | 78f9999f94d92d9ca7fde6f18acec7d3abd422ef | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
from multiprocessing import Process, Value, Array
def f(n, a):
n.value = n.value + 1
for i in range(len(a)):
a[i] = a[i] * 10
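# Value('i', 1) and Array('i', range(10)) allocate shared memory ('i' is the
# C int typecode), so the child processes mutate the parent's data in place.
# Expected output: 2, then [0, 10, ..., 90]; after the second run 3, then
# [0, 100, ..., 900].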
if __name__ == '__main__':
num = Value('i', 1)
arr = Array('i', range(10))
p = Process(target=f, args=(num, arr))
p.start()
p.join()
print(num.value)
print(arr[:])
p2 = Process(target=f, args=(num, arr))
p2.start()
p2.join()
print(num.value)
print(arr[:]) | 16.518519 | 49 | 0.59417 | 76 | 446 | 3.381579 | 0.460526 | 0.093385 | 0.108949 | 0.140078 | 0.381323 | 0.381323 | 0 | 0 | 0 | 0 | 0 | 0.027397 | 0.181614 | 446 | 27 | 50 | 16.518519 | 0.676712 | 0.085202 | 0 | 0.222222 | 0 | 0 | 0.02457 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.055556 | 0 | 0.111111 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa48446f2f2e12a1d3284134162576d24baa892d | 4,178 | py | Python | discriminate_agkistrodon/capetian_modifier.py | kwierman/discriminate_agkistrodon | f6eac7d3f4898ad5362ac840de95647282d24f23 | [
"MIT"
] | null | null | null | discriminate_agkistrodon/capetian_modifier.py | kwierman/discriminate_agkistrodon | f6eac7d3f4898ad5362ac840de95647282d24f23 | [
"MIT"
] | null | null | null | discriminate_agkistrodon/capetian_modifier.py | kwierman/discriminate_agkistrodon | f6eac7d3f4898ad5362ac840de95647282d24f23 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
""" Model definitions for `discriminate_agkistrodon`
TODO: repeat for Conv3DTranspose
"""
from keras.layers import Input
from keras.layers.convolutional import (MaxPooling3D, Conv3D,
UpSampling3D)
from keras.models import Model
import logging
class capetian_modifier(Model):
""" Capetian modifier is the protomodel for FCN using LArNet inspired
modules with a fully convolutional backend.
For this prototype, the FC backend was created using 3D upsampling.
Though it's memory intensive, the ``Attributes`` include links to
each of the network's layers.
Usage:
Instantiation of the network functions as follows:
.. code-block:: python
model = capetian_modifier()
model.compile()
"""
logger = logging.getLogger('capetian_modifier')
"""Logger for this class"""
def __init__(self):
self.logger.info("Assembling Model")
self._input = Input(shape=(1, 3, 9600, 3600))
"""keras.layers.Input: keras style input layer"""
conv1 = Conv3D(32, (1, 2, 2), strides=(1, 2, 2),
activation='relu', padding='same',
data_format='channels_first',
name='block1_conv1')(self._input)
"""keras.layers.convolutional.Conv3D: First block convolution"""
pool1 = MaxPooling3D((1, 2, 2), strides=(1, 2, 2),
data_format='channels_first',
name='block1_pool')(conv1)
"""keras.layers.convolutional.MaxPooling3D: First block pooling"""
# Block 2
conv2 = Conv3D(64, (1, 2, 2), strides=(1, 2, 2),
activation='relu', padding='same',
data_format='channels_first',
name='block2_conv1')(pool1)
"""keras.layers.convolutional.Conv3D: Second block convolution"""
pool2 = MaxPooling3D((1, 2, 2), strides=(1, 2, 2),
data_format='channels_first',
name='block2_pool')(conv2)
"""keras.layers.convolutional.MaxPooling3D: Second block pooling"""
# Block 3
conv3 = Conv3D(128, (3, 2, 2), strides=(3, 2, 2),
activation='relu', padding='same',
data_format='channels_first',
name='block3_conv1')(pool2)
"""keras.layers.convolutional.Conv3D: Third block convolution"""
pool3 = MaxPooling3D((1, 2, 2), strides=(1, 2, 2),
data_format='channels_first',
name='block3_pool')(conv3)
"""keras.layers.convolutional.Conv3D: Third block convolution"""
# Block 4
conv4 = Conv3D(256, (1, 2, 2), strides=(1, 2, 2),
activation='relu', padding='same',
data_format='channels_first',
name='block4_conv1')(pool3)
"""keras.layers.convolutional.Conv3D: Fourth block convolution"""
pool4 = MaxPooling3D((1, 2, 2), strides=(1, 2, 2),
name='block4_pool',
data_format='channels_first')(conv4)
# Block 5
conv5 = Conv3D(512, (1, 2, 2), strides=(1, 2, 2), activation='relu',
padding='same', data_format='channels_first')(pool4)
"""keras.layers.convolutional.Conv3D: Fifth block convolution"""
pool5 = MaxPooling3D((1, 2, 2), strides=(1, 2, 2), name='block5_pool',
data_format='channels_first')(conv5)
# Block 6, Ze Fully Convolution Bloch
conv6 = Conv3D(1024, (1, 2, 2), strides=(1, 2, 2), activation='relu',
padding='same', data_format='channels_first')(pool5)
conv7 = Conv3D(10, (1, 2, 2), strides=(1, 2, 2), activation='relu',
padding='same', data_format='channels_first')(conv6)
up7 = UpSampling3D(size=(3, 9600 / 3, 3600),
data_format='channels_first')(conv7)
"""str: Docstring *after* attribute, with type specified."""
super(capetian_modifier, self).__init__(self._input, up7)
def compile(self):
"""Calls default compile
"""
self.logger.info("Compiling Model")
        # Call the Keras Model compile; `self.compile(...)` here would recurse forever.
        super(capetian_modifier, self).compile(loss='categorical_crossentropy',
                                               optimizer='sgd', metrics=['accuracy'])
| 38.330275 | 74 | 0.594303 | 475 | 4,178 | 5.117895 | 0.305263 | 0.019745 | 0.027149 | 0.122995 | 0.344714 | 0.322501 | 0.307692 | 0.265734 | 0.265734 | 0.241876 | 0 | 0.059458 | 0.267353 | 4,178 | 108 | 75 | 38.685185 | 0.734727 | 0.146003 | 0 | 0.207547 | 0 | 0 | 0.14789 | 0.008371 | 0 | 0 | 0 | 0.009259 | 0 | 1 | 0.037736 | false | 0 | 0.075472 | 0 | 0.150943 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4a9b9556c5f78ebd0428b54c68009b8746726bc | 5,166 | py | Python | gde/eval_results.py | MIPT-Oulu/greedy_ensembles_training | de72d8f84f151a0398c49aaf56c1cc9c709f79b7 | [
"Apache-2.0"
] | 10 | 2021-06-01T05:15:18.000Z | 2021-12-26T03:59:53.000Z | gde/eval_results.py | Oulu-IMEDS/greedy_ensembles_training | de72d8f84f151a0398c49aaf56c1cc9c709f79b7 | [
"Apache-2.0"
] | null | null | null | gde/eval_results.py | Oulu-IMEDS/greedy_ensembles_training | de72d8f84f151a0398c49aaf56c1cc9c709f79b7 | [
"Apache-2.0"
] | 1 | 2021-06-06T07:08:43.000Z | 2021-06-06T07:08:43.000Z | import gc
import logging
import pathlib
import sys
import pandas as pd
import torch
from omegaconf import OmegaConf
from gde.data.data_provider import DataProvider
from gde.eval.utils import init_loader, predict_from_loader, init_model, init_args
logging.basicConfig(format='%(asctime)s %(message)s',
datefmt='%d/%m/%Y %I:%M:%S %p',
level=logging.INFO)
logger = logging.getLogger(__name__)
if __name__ == "__main__":
args = init_args()
assert args.corruption == ''
results = []
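    # Per-(dataset, seed) cache of (test_loader, ood_loader1, ood_loader2)
    # so the data loaders are only built once across all experiments.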
datasets_dict = {'cifar10': {5: None, 10: None, 21: None, 42: None, 84: None},
'cifar100': {5: None, 10: None, 21: None, 42: None, 84: None},
'mnist': {5: None, 10: None, 21: None, 42: None, 84: None}}
arch = args.arch
dataset = args.dataset
ens_size = args.ens_size
seed = args.seed
if pathlib.Path(f'results-{args.arch}-{args.dataset}-{args.ens_size}-{seed}.pkl').is_file():
logger.info('Results have already been computed. Exiting...')
sys.exit(0)
counter = 0
exps = list(args.workdir.glob(f'*_{dataset}_{ens_size}_{arch}_*{seed}_{ens_size}/*/*'))
total = len(exps)
data_dir = pathlib.Path('datasets')
logger.info(f'Architecture: {arch}')
logger.info(f'Dataset: {dataset}')
    logger.info(f'Ens size: {ens_size}')
logger.info(f'Seed: {seed}')
logger.info(f'Total of {total} experiments to run')
logger.info(f'Data dir: {data_dir.absolute()}')
if dataset == 'cifar10' or dataset == 'cifar100':
ds1_label = 'svhn'
ds2_label = 'lsun'
elif dataset == 'mnist':
ds1_label = 'fashion_mnist'
ds2_label = 'omniglot'
else:
raise ValueError('Dataset is NOT supported')
for exp in exps:
snapshots = list(exp.glob('deep_ensemble_cache*/*.pth'))
if len(snapshots) != ens_size:
print(exp)
continue
cfg = OmegaConf.load(exp / 'config.yaml')
method = 'greedy' if cfg.ensemble.greedy else 'deepens'
logger.info(f'Doing experiment {counter + 1} / '
f'{total} [{arch}-{dataset}-{ens_size}-{method}-{seed}]')
if datasets_dict[dataset][seed] is None:
provider = DataProvider(cfg.data.dataset, data_dir, cfg.seed, cfg.data.augs,
cfg.data.val_amount)
# For testing
test_df, _ = getattr(provider, f"init_{dataset}")(train=False)
# For ood detection (two datasets always)
# In the case of CIFAR, df1 is svhn and df2 is lsun
ood_df1, _ = getattr(provider, f'init_{ds1_label}')(data_dir)
ood_df2, _ = getattr(provider, f'init_{ds2_label}')(data_dir)
ood_df1['target'] = 1
id_df = test_df.copy()
id_df['target'] = 0
ood_df1 = pd.concat((ood_df1, id_df), axis=0)
ood_df2['target'] = 1
id_df = test_df.copy()
id_df['target'] = 0
ood_df2 = pd.concat((ood_df2, id_df), axis=0)
test_loader = init_loader(cfg, test_df, 4, 128)
ood_loader1 = init_loader(cfg, ood_df1, 4, 128)
ood_loader2 = init_loader(cfg, ood_df2, 4, 128)
datasets_dict[dataset][seed] = (test_loader, ood_loader1, ood_loader2)
else:
test_loader, ood_loader1, ood_loader2 = datasets_dict[dataset][seed]
with torch.no_grad():
preds_ens = []
preds_ood_ens_ds1 = []
preds_ood_ens_ds2 = []
for model_snapshot in snapshots:
model = init_model(cfg, model_snapshot)
preds, gt = predict_from_loader(model, test_loader)
preds_ood_ds1, gt_ood_ds1 = predict_from_loader(model, ood_loader1)
preds_ood_ds2, gt_ood_ds2 = predict_from_loader(model, ood_loader2)
preds_ood_ens_ds1.append(preds_ood_ds1)
preds_ood_ens_ds2.append(preds_ood_ds2)
preds_ens.append(preds)
preds_ens = torch.stack(preds_ens, 0).numpy()
preds_ood_ens_ds1 = torch.stack(preds_ood_ens_ds1, 0).numpy()
preds_ood_ens_ds2 = torch.stack(preds_ood_ens_ds2, 0).numpy()
gt = gt.numpy()
if cfg.ensemble.greedy:
div_lambda = cfg.ensemble.diversity_lambda
else:
div_lambda = 'N/A'
prior_lambda = cfg.train.prior_lambda
results.append([arch, dataset, seed, prior_lambda, ens_size, method, div_lambda,
preds_ens, preds_ood_ens_ds2, preds_ood_ens_ds1,
gt, gt_ood_ds2, gt_ood_ds1])
gc.collect()
torch.cuda.empty_cache()
counter += 1
results = pd.DataFrame(
columns=['Architecture', 'Dataset', 'Seed', 'Prior',
'Size', 'Method', 'Diversity',
'Preds_ens', f'Preds_ood_{ds2_label}', f'Preds_ood_{ds1_label}',
'Gt_ens', f'Gt_ood_{ds2_label}', f'Gt_ood_{ds1_label}'],
data=results)
results.to_pickle(f'results-{args.arch}-{args.dataset}-{args.ens_size}-{seed}.pkl')
| 36.9 | 96 | 0.590399 | 673 | 5,166 | 4.267459 | 0.249629 | 0.044568 | 0.038301 | 0.024373 | 0.175836 | 0.106894 | 0.086003 | 0.086003 | 0.086003 | 0.086003 | 0 | 0.02969 | 0.282811 | 5,166 | 139 | 97 | 37.165468 | 0.745479 | 0.019551 | 0 | 0.064815 | 0 | 0.009259 | 0.169137 | 0.060858 | 0 | 0 | 0 | 0 | 0.009259 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0.009259 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4abd428bf614463c5980aa59028561efad0b370 | 1,510 | py | Python | src/web/notifications/notifications.py | rodekruis/shelter-database | 99f96bf06a7287e925b7385dbf7cc363caf4a2bd | [
"MIT"
] | 9 | 2016-07-12T06:41:48.000Z | 2022-02-03T05:55:17.000Z | src/web/notifications/notifications.py | rodekruis/shelter-database | 99f96bf06a7287e925b7385dbf7cc363caf4a2bd | [
"MIT"
] | 22 | 2016-09-06T05:36:37.000Z | 2021-09-07T23:41:26.000Z | src/web/notifications/notifications.py | rodekruis/shelter-database | 99f96bf06a7287e925b7385dbf7cc363caf4a2bd | [
"MIT"
] | 3 | 2016-08-19T05:37:08.000Z | 2017-02-20T06:58:03.000Z | #! /usr/bin/env python
# -*- coding: utf-8 -*-
# ***** BEGIN LICENSE BLOCK *****
# This file is part of Shelter Database.
# Copyright (c) 2016 Luxembourg Institute of Science and Technology.
# All rights reserved.
#
#
#
# ***** END LICENSE BLOCK *****
__author__ = "Cedric Bonhomme"
__version__ = "$Revision: 0.1 $"
__date__ = "$Date: 2016/07/11 $"
__revision__ = "$Date: 2016/07/11 $"
__copyright__ = "Copyright (c) "
__license__ = ""
import datetime
from flask import render_template
import conf
from web.notifications import emails
def new_account_creation(user):
"""
Account creation notification.
"""
plaintext = render_template('emails/account_creation.txt',
user=user, platform_url=conf.PLATFORM_URL)
emails.send(to=user.email,
bcc=conf.NOTIFICATION_EMAIL,
subject="[Shelter Database] Account creation",
plaintext=plaintext)
def new_shelter_creation(shelter, user):
"""
Shelter creation notification.
"""
from web.models import User
admins = User.query.filter(User.is_admin==True).all()
    # 'admin' avoids clobbering the 'user' argument under Python 2 scoping
    recipients = ", ".join([admin.email for admin in admins])
plaintext = render_template('emails/shelter_creation.txt',
shelter=shelter, user=user, platform_url=conf.PLATFORM_URL)
emails.send(to=recipients,
bcc=conf.NOTIFICATION_EMAIL,
subject="[Shelter Database] Shelter creation",
plaintext=plaintext)
| 30.816327 | 91 | 0.643709 | 167 | 1,510 | 5.580838 | 0.443114 | 0.064378 | 0.021459 | 0.025751 | 0.197425 | 0.197425 | 0.197425 | 0.098712 | 0.098712 | 0.098712 | 0 | 0.019931 | 0.235762 | 1,510 | 48 | 92 | 31.458333 | 0.787695 | 0.194702 | 0 | 0.148148 | 0 | 0 | 0.177721 | 0.045918 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.185185 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4aea5506d3e4febfe7535c11f8367e32b25aece | 16,705 | py | Python | atst/routes/applications/settings.py | robgil/atst | 2af99da9cf46e99f5f6a4155602d4bb2db98300e | [
"MIT"
] | null | null | null | atst/routes/applications/settings.py | robgil/atst | 2af99da9cf46e99f5f6a4155602d4bb2db98300e | [
"MIT"
] | null | null | null | atst/routes/applications/settings.py | robgil/atst | 2af99da9cf46e99f5f6a4155602d4bb2db98300e | [
"MIT"
] | null | null | null | from flask import redirect, render_template, request as http_request, url_for, g
from .blueprint import applications_bp
from atst.domain.exceptions import AlreadyExistsError
from atst.domain.environments import Environments
from atst.domain.applications import Applications
from atst.domain.application_roles import ApplicationRoles
from atst.domain.audit_log import AuditLog
from atst.domain.csp.cloud import GeneralCSPException
from atst.domain.common import Paginator
from atst.domain.environment_roles import EnvironmentRoles
from atst.domain.invitations import ApplicationInvitations
from atst.forms.application_member import NewForm as NewMemberForm, UpdateMemberForm
from atst.forms.application import NameAndDescriptionForm, EditEnvironmentForm
from atst.forms.data import ENV_ROLE_NO_ACCESS as NO_ACCESS
from atst.forms.member import NewForm as MemberForm
from atst.domain.authz.decorator import user_can_access_decorator as user_can
from atst.models.permissions import Permissions
from atst.domain.permission_sets import PermissionSets
from atst.utils.flash import formatted_flash as flash
from atst.utils.localization import translate
from atst.jobs import send_mail
def get_environments_obj_for_app(application):
return sorted(
[
{
"id": env.id,
"name": env.name,
"pending": env.is_pending,
"edit_form": EditEnvironmentForm(obj=env),
"member_count": len(env.roles),
"members": sorted(
[
{
"user_name": env_role.application_role.user_name,
"status": env_role.status.value,
}
for env_role in env.roles
],
key=lambda env_role: env_role["user_name"],
),
}
for env in application.environments
],
key=lambda env: env["name"],
)
def filter_perm_sets_data(member):
perm_sets_data = {
"perms_team_mgmt": bool(
member.has_permission_set(PermissionSets.EDIT_APPLICATION_TEAM)
),
"perms_env_mgmt": bool(
member.has_permission_set(PermissionSets.EDIT_APPLICATION_ENVIRONMENTS)
),
"perms_del_env": bool(
member.has_permission_set(PermissionSets.DELETE_APPLICATION_ENVIRONMENTS)
),
}
return perm_sets_data
def filter_env_roles_data(roles):
return sorted(
[
{
"environment_id": str(role.environment.id),
"environment_name": role.environment.name,
"role": role.role,
}
for role in roles
],
key=lambda env: env["environment_name"],
)
def filter_env_roles_form_data(member, environments):
env_roles_form_data = []
for env in environments:
env_data = {
"environment_id": str(env.id),
"environment_name": env.name,
"role": NO_ACCESS,
"disabled": False,
}
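        # A member holds at most one role per environment, so intersecting the
        # environment's roles with the member's roles yields zero or one items.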
env_roles_set = set(env.roles).intersection(set(member.environment_roles))
if len(env_roles_set) == 1:
(env_role,) = env_roles_set
env_data["role"] = env_role.role
env_data["disabled"] = env_role.disabled
env_roles_form_data.append(env_data)
return env_roles_form_data
def get_members_data(application):
members_data = []
for member in application.members:
permission_sets = filter_perm_sets_data(member)
roles = EnvironmentRoles.get_for_application_member(member.id)
environment_roles = filter_env_roles_data(roles)
env_roles_form_data = filter_env_roles_form_data(
member, application.environments
)
form = UpdateMemberForm(
environment_roles=env_roles_form_data, **permission_sets
)
update_invite_form = (
MemberForm(obj=member.latest_invitation)
if member.latest_invitation and member.latest_invitation.can_resend
else MemberForm()
)
members_data.append(
{
"role_id": member.id,
"user_name": member.user_name,
"permission_sets": permission_sets,
"environment_roles": environment_roles,
"role_status": member.display_status,
"form": form,
"update_invite_form": update_invite_form,
}
)
return sorted(members_data, key=lambda member: member["user_name"])
def get_new_member_form(application):
env_roles = sorted(
[
{"environment_id": e.id, "environment_name": e.name}
for e in application.environments
],
key=lambda role: role["environment_name"],
)
return NewMemberForm(data={"environment_roles": env_roles})
def render_settings_page(application, **kwargs):
environments_obj = get_environments_obj_for_app(application=application)
new_env_form = EditEnvironmentForm()
pagination_opts = Paginator.get_pagination_opts(http_request)
audit_events = AuditLog.get_application_events(application, pagination_opts)
new_member_form = get_new_member_form(application)
members = get_members_data(application)
if "application_form" not in kwargs:
kwargs["application_form"] = NameAndDescriptionForm(
name=application.name, description=application.description
)
return render_template(
"applications/settings.html",
application=application,
environments_obj=environments_obj,
new_env_form=new_env_form,
audit_events=audit_events,
new_member_form=new_member_form,
members=members,
**kwargs,
)
def send_application_invitation(invitee_email, inviter_name, token):
body = render_template(
"emails/application/invitation.txt", owner=inviter_name, token=token
)
send_mail.delay(
[invitee_email],
translate("email.application_invite", {"inviter_name": inviter_name}),
body,
)
def handle_create_member(application_id, form_data):
application = Applications.get(application_id)
form = NewMemberForm(form_data)
if form.validate():
try:
invite = Applications.invite(
application=application,
inviter=g.current_user,
user_data=form.user_data.data,
permission_sets_names=form.data["permission_sets"],
environment_roles_data=form.environment_roles.data,
)
send_application_invitation(
invitee_email=invite.email,
inviter_name=g.current_user.full_name,
token=invite.token,
)
flash("new_application_member", user_name=invite.first_name)
except AlreadyExistsError:
return render_template(
"error.html", message="There was an error processing your request."
)
else:
pass
# TODO: flash error message
def handle_update_member(application_id, application_role_id, form_data):
app_role = ApplicationRoles.get_by_id(application_role_id)
application = Applications.get(application_id)
existing_env_roles_data = filter_env_roles_form_data(
app_role, application.environments
)
form = UpdateMemberForm(
formdata=form_data, environment_roles=existing_env_roles_data
)
if form.validate():
try:
ApplicationRoles.update_permission_sets(
app_role, form.data["permission_sets"]
)
for env_role in form.environment_roles:
environment = Environments.get(env_role.environment_id.data)
new_role = None if env_role.disabled.data else env_role.data["role"]
Environments.update_env_role(environment, app_role, new_role)
flash("application_member_updated", user_name=app_role.user_name)
except GeneralCSPException:
flash(
"application_member_update_error", user_name=app_role.user_name,
)
else:
pass
# TODO: flash error message
@applications_bp.route("/applications/<application_id>/settings")
@user_can(Permissions.VIEW_APPLICATION, message="view application edit form")
def settings(application_id):
application = Applications.get(application_id)
return render_settings_page(
application=application,
active_toggler=http_request.args.get("active_toggler"),
active_toggler_section=http_request.args.get("active_toggler_section"),
)
@applications_bp.route("/environments/<environment_id>/edit", methods=["POST"])
@user_can(Permissions.EDIT_ENVIRONMENT, message="edit application environments")
def update_environment(environment_id):
environment = Environments.get(environment_id)
application = environment.application
env_form = EditEnvironmentForm(obj=environment, formdata=http_request.form)
if env_form.validate():
Environments.update(environment=environment, name=env_form.name.data)
flash("application_environments_updated")
return redirect(
url_for(
"applications.settings",
application_id=application.id,
fragment="application-environments",
_anchor="application-environments",
active_toggler=environment.id,
active_toggler_section="edit",
)
)
else:
return (
render_settings_page(
application=application,
active_toggler=environment.id,
active_toggler_section="edit",
),
400,
)
@applications_bp.route(
"/applications/<application_id>/environments/new", methods=["POST"]
)
@user_can(Permissions.CREATE_ENVIRONMENT, message="create application environment")
def new_environment(application_id):
application = Applications.get(application_id)
env_form = EditEnvironmentForm(formdata=http_request.form)
if env_form.validate():
Environments.create(
g.current_user, application=application, name=env_form.name.data
)
flash("environment_added", environment_name=env_form.data["name"])
return redirect(
url_for(
"applications.settings",
application_id=application.id,
fragment="application-environments",
_anchor="application-environments",
)
)
else:
return (render_settings_page(application=application), 400)
@applications_bp.route("/applications/<application_id>/edit", methods=["POST"])
@user_can(Permissions.EDIT_APPLICATION, message="update application")
def update(application_id):
application = Applications.get(application_id)
form = NameAndDescriptionForm(http_request.form)
if form.validate():
application_data = form.data
Applications.update(application, application_data)
return redirect(
url_for(
"applications.portfolio_applications",
portfolio_id=application.portfolio_id,
)
)
else:
return render_settings_page(application=application, application_form=form)
@applications_bp.route("/applications/<application_id>/delete", methods=["POST"])
@user_can(Permissions.DELETE_APPLICATION, message="delete application")
def delete(application_id):
application = Applications.get(application_id)
Applications.delete(application)
flash("application_deleted", application_name=application.name)
return redirect(
url_for(
"applications.portfolio_applications", portfolio_id=application.portfolio_id
)
)
@applications_bp.route("/environments/<environment_id>/delete", methods=["POST"])
@user_can(Permissions.DELETE_ENVIRONMENT, message="delete environment")
def delete_environment(environment_id):
environment = Environments.get(environment_id)
Environments.delete(environment=environment, commit=True)
flash("environment_deleted", environment_name=environment.name)
return redirect(
url_for(
"applications.settings",
application_id=environment.application_id,
_anchor="application-environments",
fragment="application-environments",
)
)
@applications_bp.route("/application/<application_id>/members/new", methods=["POST"])
@user_can(
Permissions.CREATE_APPLICATION_MEMBER, message="create new application member"
)
def create_member(application_id):
handle_create_member(application_id, http_request.form)
return redirect(
url_for(
"applications.settings",
application_id=application_id,
fragment="application-members",
_anchor="application-members",
)
)
@applications_bp.route(
"/applications/<application_id>/members/<application_role_id>/delete",
methods=["POST"],
)
@user_can(Permissions.DELETE_APPLICATION_MEMBER, message="remove application member")
def remove_member(application_id, application_role_id):
application_role = ApplicationRoles.get_by_id(application_role_id)
ApplicationRoles.disable(application_role)
flash(
"application_member_removed",
user_name=application_role.user_name,
application_name=g.application.name,
)
return redirect(
url_for(
"applications.settings",
_anchor="application-members",
application_id=g.application.id,
fragment="application-members",
)
)
@applications_bp.route(
"/applications/<application_id>/members/<application_role_id>/update",
methods=["POST"],
)
@user_can(Permissions.EDIT_APPLICATION_MEMBER, message="update application member")
def update_member(application_id, application_role_id):
handle_update_member(application_id, application_role_id, http_request.form)
return redirect(
url_for(
"applications.settings",
application_id=application_id,
fragment="application-members",
_anchor="application-members",
)
)
@applications_bp.route(
"/applications/<application_id>/members/<application_role_id>/revoke_invite",
methods=["POST"],
)
@user_can(
Permissions.DELETE_APPLICATION_MEMBER, message="revoke application invitation"
)
def revoke_invite(application_id, application_role_id):
app_role = ApplicationRoles.get_by_id(application_role_id)
invite = app_role.latest_invitation
if invite.is_pending:
ApplicationInvitations.revoke(invite.token)
flash(
"application_invite_revoked",
user_name=app_role.user_name,
application_name=g.application.name,
)
else:
flash(
"application_invite_error",
user_name=app_role.user_name,
application_name=g.application.name,
)
return redirect(
url_for(
"applications.settings",
application_id=application_id,
fragment="application-members",
_anchor="application-members",
)
)
@applications_bp.route(
"/applications/<application_id>/members/<application_role_id>/resend_invite",
methods=["POST"],
)
@user_can(Permissions.EDIT_APPLICATION_MEMBER, message="resend application invitation")
def resend_invite(application_id, application_role_id):
app_role = ApplicationRoles.get_by_id(application_role_id)
invite = app_role.latest_invitation
form = MemberForm(http_request.form)
if form.validate():
new_invite = ApplicationInvitations.resend(
g.current_user, invite.token, form.data
)
send_application_invitation(
invitee_email=new_invite.email,
inviter_name=g.current_user.full_name,
token=new_invite.token,
)
flash(
"application_invite_resent",
user_name=new_invite.user_name,
application_name=app_role.application.name,
)
else:
flash(
"application_invite_error",
user_name=app_role.user_name,
application_name=g.application.name,
)
return redirect(
url_for(
"applications.settings",
application_id=application_id,
fragment="application-members",
_anchor="application-members",
)
)
| 32.948718 | 88 | 0.663813 | 1,717 | 16,705 | 6.152009 | 0.101922 | 0.054151 | 0.036353 | 0.017987 | 0.46928 | 0.42327 | 0.344599 | 0.290069 | 0.234214 | 0.175708 | 0 | 0.000558 | 0.249506 | 16,705 | 506 | 89 | 33.013834 | 0.841988 | 0.003053 | 0 | 0.315914 | 0 | 0 | 0.141853 | 0.076632 | 0 | 0 | 0 | 0.001976 | 0 | 1 | 0.049881 | false | 0.004751 | 0.049881 | 0.004751 | 0.152019 | 0.002375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4afdf88a71da4c4f5fc69215e399f995f7b8b46 | 7,181 | py | Python | rlgraph/components/layers/preprocessing/image_resize.py | samialabed/rlgraph | f5fa632a385e67295a2939f54cbaa4c47a007728 | [
"Apache-2.0"
] | null | null | null | rlgraph/components/layers/preprocessing/image_resize.py | samialabed/rlgraph | f5fa632a385e67295a2939f54cbaa4c47a007728 | [
"Apache-2.0"
] | null | null | null | rlgraph/components/layers/preprocessing/image_resize.py | samialabed/rlgraph | f5fa632a385e67295a2939f54cbaa4c47a007728 | [
"Apache-2.0"
] | null | null | null | # Copyright 2018/2019 The RLgraph authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import cv2
import numpy as np
from six.moves import xrange as range_
from rlgraph import get_backend
from rlgraph.components.layers.preprocessing import PreprocessLayer
from rlgraph.utils.decorators import rlgraph_api
from rlgraph.utils.ops import unflatten_op
from rlgraph.utils.rlgraph_errors import RLGraphError
cv2.ocl.setUseOpenCL(False)
if get_backend() == "tf":
import tensorflow as tf
from tensorflow.python.ops.image_ops_impl import ResizeMethod
elif get_backend() == "pytorch":
import torch
class ImageResize(PreprocessLayer):
"""
Resizes one or more images to a new size without touching the color channel.
"""
def __init__(self, width, height, interpolation="area", scope="image-resize", **kwargs):
"""
Args:
width (int): The new width.
height (int): The new height.
interpolation (str): One of "bilinear", "area". Default: "bilinear" (which is also the default for both
cv2 and tf).
"""
super(ImageResize, self).__init__(scope=scope, **kwargs)
self.width = width
self.height = height
if interpolation == "bilinear":
if get_backend() == "tf":
self.tf_interpolation = ResizeMethod.BILINEAR
# All other backends use cv2 currently.
# Sometimes we mix python preprocessor stack with tf backend -> always need this.
self.cv2_interpolation = cv2.INTER_LINEAR
elif interpolation == "area":
if get_backend() == "tf":
self.tf_interpolation = ResizeMethod.AREA
self.cv2_interpolation = cv2.INTER_AREA
else:
raise RLGraphError("Invalid interpolation algorithm {}!. Allowed are 'bilinear' and "
"'area'.".format(interpolation))
# The output spaces after preprocessing (per flat-key).
self.output_spaces = None
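        # Illustrative instantiation: ImageResize(width=84, height=84,
        # interpolation="area") for Atari-style 84x84 frame preprocessing.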
def get_preprocessed_space(self, space):
## Test sending np samples to get number of return values and output spaces without having to call
## the tf graph_fn.
#backend = self.backend
#self.backend = "python"
#sample = space.sample(size=1)
#out = self._graph_fn_apply(sample)
#new_space = get_space_from_op(out)
#self.backend = backend
#return new_space
ret = dict()
for key, value in space.flatten().items():
# Do some sanity checking.
rank = value.rank
if get_backend() == "tf":
assert rank == 2 or rank == 3, \
"ERROR: Given image's rank (which is {}{}, not counting batch rank) must be either 2 or 3!".\
format(rank, ("" if key == "" else " for key '{}'".format(key)))
# Determine the output shape.
shape = list(value.shape)
shape[0] = self.width
shape[1] = self.height
elif get_backend() == "pytorch":
shape = list(value.shape)
# Determine the output shape.
if rank == 3:
shape[0] = self.width
shape[1] = self.height
elif rank == 4:
# TODO PyTorch shape inference issue.
shape[1] = self.width
shape[2] = self.height
ret[key] = value.__class__(shape=tuple(shape), add_batch_rank=value.has_batch_rank)
return unflatten_op(ret)
def create_variables(self, input_spaces, action_space=None):
in_space = input_spaces["preprocessing_inputs"]
self.output_spaces = self.get_preprocessed_space(in_space)
@rlgraph_api
def _graph_fn_apply(self, preprocessing_inputs):
"""
Images come in with either a batch dimension or not.
"""
if self.backend == "python" or get_backend() == "python":
if isinstance(preprocessing_inputs, list):
preprocessing_inputs = np.asarray(preprocessing_inputs)
had_single_color_dim = (preprocessing_inputs.shape[-1] == 1)
# Batch of samples.
if preprocessing_inputs.ndim == 4:
resized = []
for i in range_(len(preprocessing_inputs)):
resized.append(cv2.resize(
preprocessing_inputs[i], dsize=(self.width, self.height), interpolation=self.cv2_interpolation)
)
resized = np.asarray(resized)
# Single sample.
else:
resized = cv2.resize(
preprocessing_inputs, dsize=(self.width, self.height), interpolation=self.cv2_interpolation
)
# cv2.resize removes the color rank, if its dimension is 1 (e.g. grayscale), add it back here.
if had_single_color_dim is True:
resized = np.expand_dims(resized, axis=-1)
return resized
elif get_backend() == "pytorch":
if isinstance(preprocessing_inputs, list):
preprocessing_inputs = torch.tensor(preprocessing_inputs)
had_single_color_dim = (preprocessing_inputs.shape[-1] == 1)
# Batch of samples.
if len(preprocessing_inputs.shape) == 4:
resized = []
for i in range_(len(preprocessing_inputs)):
# Get numpy array.
resized.append(cv2.resize(
preprocessing_inputs[i].numpy(), dsize=(self.width, self.height),
interpolation=self.cv2_interpolation)
)
resized = torch.tensor(resized)
# Single sample.
else:
resized = cv2.resize(
preprocessing_inputs.numpy(), dsize=(self.width, self.height), interpolation=self.cv2_interpolation
)
# cv2.resize removes the color rank, if its dimension is 1 (e.g. grayscale), add it back here.
if had_single_color_dim is True:
resized = torch.unsqueeze(resized, dim=-1)
return resized
elif get_backend() == "tf":
return tf.image.resize_images(
images=preprocessing_inputs, size=(self.width, self.height), method=self.tf_interpolation
)
| 41.75 | 119 | 0.591979 | 808 | 7,181 | 5.115099 | 0.300743 | 0.087346 | 0.021776 | 0.022986 | 0.307767 | 0.294217 | 0.280668 | 0.234212 | 0.212436 | 0.147593 | 0 | 0.010694 | 0.309845 | 7,181 | 171 | 120 | 41.994152 | 0.823245 | 0.256928 | 0 | 0.316832 | 0 | 0.009901 | 0.05074 | 0 | 0 | 0 | 0 | 0.005848 | 0.009901 | 1 | 0.039604 | false | 0 | 0.138614 | 0 | 0.227723 | 0.009901 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4b02d09092fa876dafdded9d6fe1ae131a86f46 | 7,928 | py | Python | c7n/credentials.py | staxio/cloud-custodian | 24ed5d8f09bc37ff76184aae97a1ef577a69a41b | [
"Apache-2.0"
] | 2 | 2022-02-16T07:45:20.000Z | 2022-02-19T10:25:30.000Z | c7n/credentials.py | staxio/cloud-custodian | 24ed5d8f09bc37ff76184aae97a1ef577a69a41b | [
"Apache-2.0"
] | 7 | 2020-12-16T21:46:49.000Z | 2021-06-08T00:20:25.000Z | c7n/credentials.py | staxio/cloud-custodian | 24ed5d8f09bc37ff76184aae97a1ef577a69a41b | [
"Apache-2.0"
] | null | null | null | # Copyright The Cloud Custodian Authors.
# SPDX-License-Identifier: Apache-2.0
"""
Authentication utilities
"""
import os
import json
import subprocess
from botocore.credentials import RefreshableCredentials
from botocore.session import get_session
from boto3 import Session
from botocore.exceptions import ClientError
import logging
from c7n.version import version
from c7n.utils import get_retry
log = logging.getLogger('custodian.credentials')
class UnableToAssumeRole(Exception):
pass
# we still have some issues (see #5023) to work through to switch to
# default regional endpoints, for now its opt-in.
USE_STS_REGIONAL = os.environ.get(
'C7N_USE_STS_REGIONAL', '').lower() in ('yes', 'true')
class SessionFactory:
def __init__(self, region, profile=None, assume_role=None, external_id=None, credential_helper=None):
self.region = region
self.profile = profile
self.assume_role = assume_role
self.external_id = external_id
self.credential_helper = credential_helper
self.user_agent_name = "CloudCustodian"
self.session_name = "CloudCustodian"
if 'C7N_SESSION_SUFFIX' in os.environ:
self.session_name = "%s@%s" % (
self.session_name, os.environ['C7N_SESSION_SUFFIX'])
self._subscribers = []
def _set_policy_name(self, name):
self.user_agent_name = ("CloudCustodian(%s)" % name).strip()
policy_name = property(None, _set_policy_name)
def __call__(self, assume=True, region=None):
if self.credential_helper:
session = credential_helper_session(
self.credential_helper, self.session_name, region or self.region)
elif self.assume_role and assume:
session = Session(profile_name=self.profile)
session = assumed_session(
self.assume_role, self.session_name, session,
region or self.region, self.external_id)
else:
session = Session(
region_name=region or self.region, profile_name=self.profile)
return self.update(session)
def update(self, session):
session._session.user_agent_name = self.user_agent_name
session._session.user_agent_version = version
for s in self._subscribers:
s(session)
return session
def set_subscribers(self, subscribers):
self._subscribers = subscribers
class CredentialHelperError(Exception):
pass
class UnableToParseCredentialOutput(CredentialHelperError):
pass
class CredentialHelperExited(CredentialHelperError):
pass
def credential_helper_session(command, session_name, region=None):
"""Role assumption / fetching, using a credential helper.
The helper program is expected to return the same JSON payload
as a call to STS would, just making it possible to use shell scripts.
The command is run using a shell.
"""
def fetch_credentials():
env = os.environ.copy()
env['CUSTODIAN_SESSION_NAME'] = session_name
if region:
env['CUSTODIAN_CREDENTIALS_REGION'] = region
try:
raw_response = subprocess.check_output(command, shell=True, env=env).strip()
parseable_string = raw_response.decode('utf-8')
parsed = json.loads(parseable_string)
credentials = parsed['Credentials']
return dict(
access_key=credentials['AccessKeyId'],
secret_key=credentials['SecretAccessKey'],
token=credentials['SessionToken'],
expiry_time=credentials['Expiration'])
except subprocess.CalledProcessError as e:
raise CredentialHelperExited("Credential Helper Exited Abnormally with status code %d" % e.returncode)
except (AttributeError, TypeError, ValueError, json.decoder.JSONDecodeError):
raise UnableToParseCredentialOutput("Unable to parse input:\n%s" % parseable_string)
session_credentials = RefreshableCredentials.create_from_metadata(
metadata=fetch_credentials(),
refresh_using=fetch_credentials,
method='credential-helper')
# so dirty.. it hurts, no clean way to set this outside of the
# internals poke. There's some work upstream on making this nicer
# but its pretty baroque as well with upstream support.
# https://github.com/boto/boto3/issues/443
# https://github.com/boto/botocore/issues/761
s = get_session()
s._credentials = session_credentials
if region is None:
region = s.get_config_variable('region') or 'us-east-1'
s.set_config_variable('region', region)
return Session(botocore_session=s)
current_cached_credentials = None
def assumed_session(role_arn, session_name, session=None, region=None, external_id=None):
"""STS Role assume a boto3.Session
With automatic credential renewal.
Args:
role_arn: iam role arn to assume
session_name: client session identifier
        session: an optional extant session; note the session is captured
            in a function closure for renewing the sts assumed role.
    Returns:
        a boto3 session using the sts assumed role credentials
Notes: We have to poke at botocore internals a few times
"""
if session is None:
session = Session()
retry = get_retry(('Throttling',))
def refresh(allow_cache=False):
global current_cached_credentials
if allow_cache and current_cached_credentials is not None:
log.info("Using in-memory assumed credentials cache")
return current_cached_credentials
log.debug("Fetching fresh credentials from STS")
parameters = {"RoleArn": role_arn, "RoleSessionName": session_name}
if external_id is not None:
parameters['ExternalId'] = external_id
try:
credentials = retry(
get_sts_client(
session, region).assume_role, **parameters)['Credentials']
except ClientError as e:
if e.response['Error']['Code'] == 'AccessDenied':
raise UnableToAssumeRole("Unable to assume the specified role.")
else:
raise
normalised_credentials = dict(
access_key=credentials['AccessKeyId'],
secret_key=credentials['SecretAccessKey'],
token=credentials['SessionToken'],
# Silly that we basically stringify so it can be parsed again
expiry_time=credentials['Expiration'].isoformat())
log.debug("Updating memory credentials cache.")
current_cached_credentials = normalised_credentials
return normalised_credentials
session_credentials = RefreshableCredentials.create_from_metadata(
metadata=refresh(True),
refresh_using=refresh,
method='sts-assume-role')
# so dirty.. it hurts, no clean way to set this outside of the
# internals poke. There's some work upstream on making this nicer
# but it's pretty baroque as well with upstream support.
# https://github.com/boto/boto3/issues/443
# https://github.com/boto/botocore/issues/761
s = get_session()
s._credentials = session_credentials
if region is None:
region = s.get_config_variable('region') or 'us-east-1'
s.set_config_variable('region', region)
return Session(botocore_session=s)
def get_sts_client(session, region):
"""Get the AWS STS endpoint specific for the given region.
Returns the global endpoint if region is not specified.
For the list of regional endpoints, see https://amzn.to/2ohJgtR
"""
if region and USE_STS_REGIONAL:
endpoint_url = "https://sts.{}.amazonaws.com".format(region)
region_name = region
else:
endpoint_url = None
region_name = None
return session.client(
'sts', endpoint_url=endpoint_url, region_name=region_name)
| 35.079646 | 114 | 0.681887 | 935 | 7,928 | 5.616043 | 0.273797 | 0.023043 | 0.014283 | 0.013712 | 0.234432 | 0.199962 | 0.199962 | 0.174824 | 0.174824 | 0.174824 | 0 | 0.005284 | 0.236125 | 7,928 | 225 | 115 | 35.235556 | 0.86179 | 0.202069 | 0 | 0.210145 | 0 | 0 | 0.113004 | 0.011413 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072464 | false | 0.028986 | 0.072464 | 0 | 0.246377 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4b0edd77fe0cc33728332e7aa23eb13cad95fa7 | 2,354 | py | Python | grr/server/grr_response_server/events.py | khanhgithead/grr | 8ad8a4d2c5a93c92729206b7771af19d92d4f915 | [
"Apache-2.0"
] | 4,238 | 2015-01-01T15:34:50.000Z | 2022-03-31T08:18:05.000Z | grr/server/grr_response_server/events.py | khanhgithead/grr | 8ad8a4d2c5a93c92729206b7771af19d92d4f915 | [
"Apache-2.0"
] | 787 | 2015-01-02T21:34:24.000Z | 2022-03-02T13:26:38.000Z | grr/server/grr_response_server/events.py | khanhgithead/grr | 8ad8a4d2c5a93c92729206b7771af19d92d4f915 | [
"Apache-2.0"
] | 856 | 2015-01-02T02:50:11.000Z | 2022-03-31T11:11:53.000Z | #!/usr/bin/env python
"""The GRR event publishing classes."""
from grr_response_core.lib import rdfvalue
from grr_response_core.lib.registry import EventRegistry
class EventListener(metaclass=EventRegistry):
"""Base Class for all Event Listeners.
Event listeners can register for an event by specifying the event
name in the EVENTS constant.
"""
EVENTS = []
def ProcessEvents(self, msgs=None, publisher_username=None):
"""Processes a message for the event."""
def ProcessEvent(self, event=None, publisher_username=None):
return self.ProcessEvents([event], publisher_username=publisher_username)
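# Illustrative sketch (added; not part of GRR itself — names are placeholders):
#   class MyListener(EventListener):
#     EVENTS = ["MyEvent"]
#     def ProcessEvents(self, msgs=None, publisher_username=None):
#       for msg in msgs:
#         ...  # each msg is an rdfvalue.RDFValue
#   Events.PublishEvent("MyEvent", some_rdf_value, username="admin")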
class Events(object):
"""A class that provides event publishing methods."""
@classmethod
def PublishEvent(cls, event_name, event, username=None):
"""Publish the message into all listeners of the event.
    We send the message to all event handlers which contain this
    string in their EVENTS static member. This allows the event to be
sent to multiple interested listeners.
Args:
event_name: An event name.
event: The message to send to the event handler.
username: Username of the publisher of the message.
Raises:
ValueError: If the message is invalid. The message must be a Semantic
Value (instance of RDFValue) or a full GrrMessage.
"""
cls.PublishMultipleEvents({event_name: [event]}, username=username)
@classmethod
def PublishMultipleEvents(cls, events, username=None):
"""Publishes multiple messages at once.
Args:
events: A dict with keys being event names and values being lists of
messages.
username: Username of the publisher of the messages.
Raises:
ValueError: If the message is invalid. The message must be a Semantic
Value (instance of RDFValue) or a full GrrMessage.
"""
event_name_map = EventRegistry.EVENT_NAME_MAP
for event_name, messages in events.items():
if not isinstance(event_name, str):
raise ValueError(
"Event names should be string, got: %s" % type(event_name))
for msg in messages:
if not isinstance(msg, rdfvalue.RDFValue):
raise ValueError("Can only publish RDFValue instances.")
for event_cls in event_name_map.get(event_name, []):
event_cls().ProcessEvents(messages, publisher_username=username)
| 33.628571 | 77 | 0.714528 | 310 | 2,354 | 5.348387 | 0.36129 | 0.065139 | 0.033776 | 0.022919 | 0.191797 | 0.165259 | 0.165259 | 0.12304 | 0.12304 | 0.12304 | 0 | 0 | 0.209431 | 2,354 | 69 | 78 | 34.115942 | 0.890919 | 0.460918 | 0 | 0.086957 | 0 | 0 | 0.063313 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.086957 | 0.043478 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4b155d638bd33ff941989b002ca1a0220b0f145 | 960 | py | Python | example_01.py | istellartech/OpenVerne | f6d94341ded48724d0ab310a7759e3f994ce86bc | [
"MIT"
] | 8 | 2018-06-21T09:14:21.000Z | 2021-03-15T05:23:09.000Z | example_01.py | istellartech/OpenVerne | f6d94341ded48724d0ab310a7759e3f994ce86bc | [
"MIT"
] | null | null | null | example_01.py | istellartech/OpenVerne | f6d94341ded48724d0ab310a7759e3f994ce86bc | [
"MIT"
] | 2 | 2019-07-11T07:09:52.000Z | 2020-12-02T04:34:21.000Z | # -*- coding: utf-8 -*-
""" Google Earth file output example
Requires the simplekml module.
If simplekml is not installed, execute the following command:
> pip install simplekml
"""
from OpenVerne import IIP
import numpy as np
import pandas as pd
import simplekml
import warnings
warnings.filterwarnings('ignore')
if __name__ == '__main__':
posLLH_ = np.array([35.0, 140.0, 100])
velNED_ = np.array([10, 0, 0])
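    # Assumed interpretation (added; not stated in this example): posLLH_ is
    # [latitude_deg, longitude_deg, altitude_m] and velNED_ is the velocity in
    # the local North/East/Down frame, so IIP computes the instantaneous
    # impact point for that state.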
_IIP = IIP(posLLH_, velNED_)
# print(_IIP)
_IIP.disp()
kml_points = [["satrt", posLLH_[0], posLLH_[1], posLLH_[2]],
["IIP", _IIP.posLLH_IIP_deg[0], _IIP.posLLH_IIP_deg[1], 0]]
kml = simplekml.Kml()
for point in kml_points:
p = kml.newpoint(name=point[0], coords=[(point[2], point[1], point[3])])
p.altitudemode = simplekml.AltitudeMode.absolute
p.lookat.latitude = point[1]
p.lookat.longitude = point[2]
kml.save("test.kml")
print("kml file outputted.")
| 26.666667 | 80 | 0.647917 | 133 | 960 | 4.481203 | 0.496241 | 0.030201 | 0.040268 | 0.050336 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03548 | 0.207292 | 960 | 35 | 81 | 27.428571 | 0.7477 | 0.190625 | 0 | 0 | 0 | 0 | 0.063802 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.238095 | 0 | 0.238095 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4b16b9a02b8e34f41e02a71006951401e22f714 | 1,009 | py | Python | Packs/PaloAltoNetworks_IoT3rdParty/Scripts/GeneratePANWIoTDeviceTableQueryForServiceNow/GeneratePANWIoTDeviceTableQueryForServiceNow.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 799 | 2016-08-02T06:43:14.000Z | 2022-03-31T11:10:11.000Z | Packs/PaloAltoNetworks_IoT3rdParty/Scripts/GeneratePANWIoTDeviceTableQueryForServiceNow/GeneratePANWIoTDeviceTableQueryForServiceNow.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 9,317 | 2016-08-07T19:00:51.000Z | 2022-03-31T21:56:04.000Z | Packs/PaloAltoNetworks_IoT3rdParty/Scripts/GeneratePANWIoTDeviceTableQueryForServiceNow/GeneratePANWIoTDeviceTableQueryForServiceNow.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 1,297 | 2016-08-04T13:59:00.000Z | 2022-03-31T23:43:06.000Z | import demistomock as demisto # noqa: F401
from CommonServerPython import * # noqa: F401
def main():
device_list = demisto.args().get('devices')
query_strs = []
query_str = 'mac_addressIN'
DEFAULT_VALUE_SIZE = 100 # each query contains 100 deviceid
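    # Illustrative note (added): each resulting query string has the
    # ServiceNow-style form "mac_addressIN<id1>,<id2>,...", holding at most
    # DEFAULT_VALUE_SIZE device ids.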
res = {}
output_description = f'Total data length is {len(device_list)}'
for i, entry in enumerate(device_list):
query_str += entry['deviceid'] + ','
if ((i + 1) % DEFAULT_VALUE_SIZE == 0 or i == (len(device_list) - 1)):
            query_strs.append(query_str[:-1])  # drop the trailing comma
query_str = 'mac_addressIN'
res['query'] = query_strs
    output_description = f'{output_description} total number of queries is {len(query_strs)}'
results = CommandResults(
readable_output=output_description,
outputs_prefix="PanwIot3rdParty.Query",
outputs=res
)
return results
if __name__ in ['__main__', 'builtin', 'builtins']:
res = main()
return_results(res)
| 30.575758 | 91 | 0.650149 | 125 | 1,009 | 4.976 | 0.464 | 0.064309 | 0.03537 | 0.064309 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023256 | 0.232904 | 1,009 | 32 | 92 | 31.53125 | 0.780362 | 0.053518 | 0 | 0.08 | 0 | 0 | 0.202944 | 0.022082 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.08 | 0 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4b29571402c087279f876f388d804e3ad1abedd | 1,609 | py | Python | src/services/message_management/api.py | b1team/trada | 22ceaf4d50fe3a38ff402315c029e574773ca9e0 | [
"MIT"
] | null | null | null | src/services/message_management/api.py | b1team/trada | 22ceaf4d50fe3a38ff402315c029e574773ca9e0 | [
"MIT"
] | 1 | 2021-03-12T15:16:03.000Z | 2021-03-12T15:16:03.000Z | src/services/message_management/api.py | b1team/trada | 22ceaf4d50fe3a38ff402315c029e574773ca9e0 | [
"MIT"
] | null | null | null | from typing import Optional
from src.api.exceptions import internal_errors, room_errors, user_errors
from src.config import settings
from src.services.crud.users.logic import check_user_exist
from src.services.crud.room.logic import check_room_exists, room_members
from . import logic
from .publish import publish_event
def send_message(content: str,
sender_id: Optional[str] = None,
room_id: Optional[str] = None,
username: Optional[str] = None):
try:
is_room = check_room_exists(room_id)
    except Exception:
raise internal_errors.InternalError(detail="Incorrect room id format")
if not is_room:
raise room_errors.NotFoundError(obj=f"Room {room_id}")
try:
sender_exist = check_user_exist(sender_id)
    except Exception:
raise internal_errors.InternalError(detail="Incorrect user id format")
if not sender_exist:
raise user_errors.NotFoundError(obj=f"Sender {sender_id}")
if is_room:
try:
message = logic.send_to_user(content, sender_id, room_id)
        except Exception:
raise internal_errors.InternalError(detail="Check id format")
_message = message.to_dict()
_message["username"] = username
event = {"event_type": "new_message", "payload": _message}
members = room_members(room_id)
for member in members:
            if member != sender_id:
publish_event(redis_uri=settings.REDIS_URI,
channel=member,
event=event)
return message
return False
| 32.836735 | 78 | 0.649472 | 196 | 1,609 | 5.096939 | 0.295918 | 0.036036 | 0.045045 | 0.063063 | 0.164164 | 0.164164 | 0.164164 | 0.164164 | 0 | 0 | 0 | 0 | 0.270976 | 1,609 | 48 | 79 | 33.520833 | 0.851662 | 0 | 0 | 0.153846 | 0 | 0 | 0.081417 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025641 | false | 0 | 0.179487 | 0 | 0.25641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4b6e019d20d6f1bf2b9083b06f3e2813544ca43 | 788 | py | Python | tests/example/app/schema.py | robtucker/pyspark-tooling | 946773975b4069c448dca1590eff3ae77a25be98 | [
"MIT"
] | null | null | null | tests/example/app/schema.py | robtucker/pyspark-tooling | 946773975b4069c448dca1590eff3ae77a25be98 | [
"MIT"
] | null | null | null | tests/example/app/schema.py | robtucker/pyspark-tooling | 946773975b4069c448dca1590eff3ae77a25be98 | [
"MIT"
] | null | null | null | import random
import uuid
from pyspark.sql import SQLContext
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
ID = "id"
GROUP = "group"
VALUE = "value"
GROUP_COUNT = "group_count"
def create_schema():
return StructType(
[
StructField(ID, StringType()),
StructField(GROUP, StringType()),
StructField(VALUE, IntegerType()),
]
)
def create_fake_dataframe(spark: SQLContext, count=1000):
"""Create a dataframe containing some fake data"""
rows = [
(
str(uuid.uuid4()),
random.choice(["a", "b", "c", "d"]),
random.randint(10000000, 99999999),
)
for i in range(count)
]
return spark.createDataFrame(rows, create_schema())
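# Illustrative usage sketch (added; assumes an active SparkSession/SQLContext
# named `spark`, which is not defined in this module):
#   df = create_fake_dataframe(spark, count=100)
#   df.groupBy(GROUP).count().show()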
| 23.176471 | 78 | 0.607868 | 82 | 788 | 5.768293 | 0.52439 | 0.046512 | 0.059197 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036522 | 0.270305 | 788 | 33 | 79 | 23.878788 | 0.786087 | 0.055838 | 0 | 0 | 0 | 0 | 0.036585 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.153846 | 0.038462 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4c2c14227719d7047360deb5a49c0783d20fb80 | 1,423 | py | Python | server/launch/sample/main_machine/main_instance/experiment_demo_pld/server_config.py | romainrossi/weblabdeusto | 494f1cd291d03dcf1d2e8f3e36d3dbe2348b167f | [
"BSD-2-Clause"
] | 15 | 2015-03-12T12:15:41.000Z | 2021-12-20T17:53:24.000Z | server/launch/sample/main_machine/main_instance/experiment_demo_pld/server_config.py | romainrossi/weblabdeusto | 494f1cd291d03dcf1d2e8f3e36d3dbe2348b167f | [
"BSD-2-Clause"
] | 44 | 2015-01-07T09:22:05.000Z | 2017-01-31T22:44:21.000Z | server/launch/sample/main_machine/main_instance/experiment_demo_pld/server_config.py | romainrossi/weblabdeusto | 494f1cd291d03dcf1d2e8f3e36d3dbe2348b167f | [
"BSD-2-Clause"
] | 22 | 2015-01-13T13:55:48.000Z | 2021-12-16T17:07:00.000Z | #!/usr/bin/env python
#-*-*- encoding: utf-8 -*-*-
xilinx_board_type = 'PLD'
weblab_xilinx_experiment_port_number = 1
# This should be something like this:
# import os as _os
# xilinx_home = _os.getenv('XILINX_HOME')
# if xilinx_home == None:
# if _os.name == 'nt':
# xilinx_home = r'C:\Program Files\Xilinx'
# elif _os.name == 'posix':
# xilinx_home = r"/home/nctrun/Xilinx"
#
# if _os.name == 'nt':
# xilinx_impact_full_path = [xilinx_home + r'\bin\nt\impact']
# elif _os.name == 'posix':
# xilinx_impact_full_path = [xilinx_home + r'/bin/lin/impact']
# But for testing we are going to fake it:
xilinx_home = "."
xilinx_impact_full_path = ["python","./test/unit/weblab/experiment/devices/xilinx_impact/fake_impact.py" ]
xilinx_programmer_type = 'XilinxImpact' # 'DigilentAdept', 'JTagBlazer'
xilinx_device_to_send_commands = 'SerialPort' # 'HttpDevice'
xilinx_jtag_blazer_jbmanager_svf2jsvf_full_path = []
xilinx_jtag_blazer_jbmanager_target_full_path = []
xilinx_jtag_blazer_device_ip_PLD = "192.168.50.137"
xilinx_http_device_ip_PLD = "192.168.50.138"
xilinx_http_device_port_PLD = 80
xilinx_http_device_app_PLD = ""
xilinx_batch_content_PLD = """setMode -bs
setMode -bs
setCable -port auto
Identify
identifyMPM
assignFile -p 1 -file $FILE
Program -p 1 -e -defaultVersion 0
quit
"""
pld_webcam_url = '''https://www.weblab.deusto.es/webcam/pld0/image.jpg'''
| 29.040816 | 106 | 0.717498 | 206 | 1,423 | 4.61165 | 0.490291 | 0.084211 | 0.046316 | 0.063158 | 0.227368 | 0.111579 | 0.071579 | 0.071579 | 0 | 0 | 0 | 0.025556 | 0.147576 | 1,423 | 48 | 107 | 29.645833 | 0.757626 | 0.394238 | 0 | 0 | 0 | 0 | 0.364929 | 0.078199 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4c3509dae1858bb10698cb2cad3ecaf3e7212a4 | 7,909 | py | Python | retrievers/bert_doc_ranker.py | WuDiDaBinGe/TAKG | 83e608e677a4ee74722d18cb5ef430f4f6c6ad31 | [
"MIT"
] | null | null | null | retrievers/bert_doc_ranker.py | WuDiDaBinGe/TAKG | 83e608e677a4ee74722d18cb5ef430f4f6c6ad31 | [
"MIT"
] | null | null | null | retrievers/bert_doc_ranker.py | WuDiDaBinGe/TAKG | 83e608e677a4ee74722d18cb5ef430f4f6c6ad31 | [
"MIT"
] | null | null | null | """
This example uses Approximate Nearest Neighbor Search (ANN) with FAISS (https://github.com/facebookresearch/faiss).
Searching a large corpus with Millions of embeddings can be time-consuming. To speed this up,
ANN can index the existent vectors. For a new query vector, this index can be used to find the nearest neighbors.
This nearest neighbor search is not perfect, i.e., it might not perfectly find all top-k nearest neighbors.
In this example, we use FAISS with an inverse flat index (IndexIVFFlat). It learns to partition the corpus embeddings
into different cluster (number is defined by n_clusters). At search time, the matching cluster for query is found and only vectors
in this cluster must be search for nearest neighbors.
This script will compare the result from ANN with exact nearest neighbor search and output a Recall@k value
as well as the missing results in the top-k hits list.
See the FAISS repository, how to install FAISS.
As dataset, we use the Quora Duplicate queries dataset, which contains about 500k queries (only 100k are used):
https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-query-Pairs.
As embeddings model, we use the SBERT model 'distilbert-multilingual-nli-stsb-quora-ranking',
which is aligned for 100 languages. I.e., you can type in a query in various languages and it will
return the closest queries in the corpus (queries in the corpus are mainly in English).
"""
from sentence_transformers import SentenceTransformer
import os
import pickle
import faiss
from faiss import normalize_L2
import numpy as np
from retrievers.utils import read_tokenized_src_file
from sklearn.feature_extraction.text import TfidfVectorizer
import string
stoplist = list(string.punctuation)
class SBERTDocRanker(object):
"""Loads a pre-weighted inverted index of token/document terms.
Scores new queries by taking sparse dot products.
"""
def __init__(self, opt, word2idx, train_opera='train'):
self.opt = opt
self.train_opera = train_opera
self.word2idx = word2idx
# model_name = 'bert-base-nli-mean-tokens'
        # Load roberta-large-nli-stsb-mean-tokens. For Chinese, one can use
        # paraphrase-multilingual-mpnet-base-v2 (better but slower) or
        # paraphrase-multilingual-MiniLM-L12-v2 (faster but slightly worse).
# model_name = "allenai-specter"
model_name = opt.dense_model_name
# self.model = SentenceTransformer('/home/yxb/setence_trans_model/sentence_transformers/' + model_name)
self.model = SentenceTransformer('/home/ubuntu/setence_trans_model/sentence_transformers/'+model_name)
embed_cache_path = opt.data_dir + '/embeddings-{}.pkl'.format(model_name.replace('/', '_'))
self.index, self.tfidf_vectorizer = self.build_index(embed_cache_path)
def build_index(self, embed_cache_path):
embedding_size = 768 # Size of embeddings
# Defining our FAISS index
# Number of clusters used for faiss. Select a value 4*sqrt(N) to 16*sqrt(N)
# - https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index
# n_clusters = 5600 # N=500000
n_clusters = 500 # N=50000
# n_clusters = 20 # N=50000
# We use Inner Product (dot-product) as Index. We will normalize our vectors
# to unit length, then is Inner Product equal to cosine similarity
quantizer = faiss.IndexFlatIP(embedding_size)
index = faiss.IndexIVFFlat(quantizer, embedding_size, n_clusters, faiss.METRIC_INNER_PRODUCT)
# index = faiss.IndexFlatIP(embedding_size)
# Number of clusters to explorer at search time. We will search for nearest neighbors in 3 clusters.
index.nprobe = 200
# index.nprobe = 3
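        # Illustrative note (added): a larger nprobe searches more clusters at
        # query time, trading speed for recall; probing 200 of the 500
        # clusters here is already close to exhaustive search.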
# Check if embedding cache path exists
if not os.path.exists(embed_cache_path):
if self.train_opera == 'train':
ref_doc_path = self.opt.train_src
elif self.train_opera == 'valid':
ref_doc_path = self.opt.valid_src
else:
ref_doc_path = self.opt.test_src
ref_docs = read_tokenized_src_file(ref_doc_path, self.opt.max_src_len)
corpus_embeddings = self.model.encode(ref_docs, show_progress_bar=True, convert_to_numpy=True, batch_size=128)
# Create the FAITS index
print("Start creating FAISS index")
# First, we need to normalize vectors to unit length
normalize_L2(corpus_embeddings)
# corpus_embeddings = corpus_embeddings / np.linalg.norm(corpus_embeddings, axis=1, keepdims=True)
# warn: default token_pattern will ignore singe token
print("The tf-idf bow len: {}".format(len(self.word2idx)))
tfidf_vectorizer = TfidfVectorizer(tokenizer=str.split,
vocabulary={w: i for w, i in self.word2idx.items()
if i < self.opt.vocab_size})
tfidf_vectorizer = tfidf_vectorizer.fit(ref_docs)
id2word = {}
for w, id in tfidf_vectorizer.vocabulary_.items():
id2word[id] = w
tfidf_vectorizer.id2word = id2word
print("Store corpus_embeddings on disc")
with open(embed_cache_path, "wb") as fOut:
pickle.dump({'corpus_embeddings': corpus_embeddings,
'tfidf_vectorizer': tfidf_vectorizer}, fOut)
else:
print("Load pre-computed embeddings from disc")
with open(embed_cache_path, "rb") as fIn:
cache_data = pickle.load(fIn)
corpus_embeddings = cache_data['corpus_embeddings']
tfidf_vectorizer = cache_data['tfidf_vectorizer']
# Then we train the index to find a suitable clustering
index.train(corpus_embeddings)
# Finally we add all embeddings to the index
index.add(corpus_embeddings)
return index, tfidf_vectorizer
def batch_closest_docs(self, queries, k=1):
query_embedding = self.model.encode(queries, show_progress_bar=True, convert_to_numpy=True, batch_size=128)
# FAISS works with inner product (dot product). When we normalize vectors to unit length,
# inner product is equal to cosine similarity
normalize_L2(query_embedding)
# Search in FAISS. It returns a matrix with distances and corpus ids.
distances, corpus_ids = self.index.search(query_embedding, k)
# normalize score to integer between [0, 9]
distances = np.round(distances * 9)
distances[distances < 0] = 0
return corpus_ids, distances
def batch_words_tfidf(self, queries, k=3, word2idx=None):
batch_tfidf = self.tfidf_vectorizer.transform(queries)
# batch_sorted_idx, batch_tfidf = row_topk_csr(batch_tfidf.data, batch_tfidf.indices, batch_tfidf.indptr, k)
batch_tfidf, batch_sorted_idx = top_k(batch_tfidf, k)
# normalize score to integer between [0, 9]
words2tfidf = [{word2idx[self.tfidf_vectorizer.id2word[id]]: np.round(tfidf * 9)
for id, tfidf in zip(topic_words_id, words_tfidf)
if id != -1 and self.tfidf_vectorizer.id2word[id] not in stoplist}
for topic_words_id, words_tfidf in zip(batch_sorted_idx, batch_tfidf)]
return words2tfidf
def _top_k(d, r, k):
tmp = sorted(zip(d, r), reverse=True)[:k]
return zip(*tmp)
def top_k(m, k):
"""
Keep only the top k elements of each row in a csr_matrix
"""
ml = m.tolil()
# print("ml's shape is{}".format(ml.shape))
ms = []
    cnt = 0  # counts rows with no nonzero tf-idf entries
for d, r in zip(ml.data, ml.rows):
if len(d) == 0:
cnt += 1
else:
ms.append(_top_k(d, r, k))
# print("cnt empty is {}".format(cnt))
# ms = [_top_k(d, r, k) for d, r in zip(ml.data, ml.rows)]
return zip(*ms)
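# Illustrative usage sketch (added; `opt` and `word2idx` are assumed to come
# from the surrounding training pipeline and are not defined here):
#   ranker = SBERTDocRanker(opt, word2idx, train_opera='train')
#   corpus_ids, scores = ranker.batch_closest_docs(["some source text"], k=5)
#   # scores are similarity values rounded onto the 0..9 integer scale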
| 47.933333 | 142 | 0.671766 | 1,082 | 7,909 | 4.763401 | 0.318854 | 0.040745 | 0.016298 | 0.010865 | 0.152891 | 0.08246 | 0.058207 | 0.040357 | 0.027551 | 0.019014 | 0 | 0.015235 | 0.244784 | 7,909 | 164 | 143 | 48.22561 | 0.847648 | 0.421039 | 0 | 0.034884 | 0 | 0 | 0.061446 | 0.012201 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0 | 0.104651 | 0 | 0.244186 | 0.046512 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4c75050296018fd87fae0c5ed44792583079945 | 4,605 | py | Python | tests/mock_constructor_testslide.py | xush6528/TestSlide | 52095828e2ef5c993fe611b25ccfbb0579f62663 | [
"MIT"
] | null | null | null | tests/mock_constructor_testslide.py | xush6528/TestSlide | 52095828e2ef5c993fe611b25ccfbb0579f62663 | [
"MIT"
] | null | null | null | tests/mock_constructor_testslide.py | xush6528/TestSlide | 52095828e2ef5c993fe611b25ccfbb0579f62663 | [
"MIT"
] | 1 | 2020-10-22T09:27:52.000Z | 2020-10-22T09:27:52.000Z | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import sys
import contextlib
from testslide.dsl import context, xcontext, fcontext, Skip # noqa: F401
class Target(object): # noqa
def __init__(self, *args, **kwargs):
self.args = args
self.kwargs = kwargs
original_target_class = Target
target_class_name = original_target_class.__name__
def dummy():
pass
@context("mock_constructor()")
def mock_constructor(context):
context.memoize("target_module", lambda self: sys.modules[__name__])
context.memoize(
"target_class", lambda self: getattr(self.target_module, target_class_name)
)
@context.function
@contextlib.contextmanager
def assertRaisesWithMessage(self, exception, msg):
with self.assertRaises(exception) as cm:
yield
ex_msg = str(cm.exception)
self.assertEqual(
ex_msg,
msg,
"Expected exception {}.{} message "
"to be\n{}\nbut got\n{}.".format(
exception.__module__, exception.__name__, repr(msg), repr(ex_msg)
),
)
@context.before
def assert_unpatched(self):
self.assertTrue(
original_target_class is self.target_class, "Unpatching didn't work."
)
args = (1, 2)
kwargs = {"3": 4, "5": 6}
t = Target(*args, **kwargs)
self.assertEqual(type(t), original_target_class)
self.assertEqual(t.args, args)
self.assertEqual(t.kwargs, kwargs)
@context.sub_context
def argument_validation(context):
@context.example
def rejects_non_string_class_name(self):
with self.assertRaisesWithMessage(
ValueError,
"Second argument must be a string with the name of the class.",
):
self.mock_constructor(self.target_module, original_target_class)
@context.example
def rejects_non_class_targets(self):
with self.assertRaisesWithMessage(ValueError, "Target must be a class."):
self.mock_constructor(self.target_module, "dummy")
@context.example
def it_uses_mock_callable_dsl(self):
self.assertEqual(
type(self.mock_constructor(self.target_module, target_class_name)),
type(self.mock_callable(self.target_module, "dummy")),
)
@context.example
def mocking_works(self):
# Allow all other calls
self.mock_constructor(self.target_module, target_class_name).to_call_original()
        # Mock specific call
mock_args = (6, 7)
mock_kwargs = {"8": 9, "10": 11}
# We use a wrapper here to validate that the first argument of __new__ is not
# passed along
def wrapper(original_callable, *args, **kwargs):
self.assertEqual(args, mock_args)
self.assertEqual(kwargs, mock_kwargs)
return "mocked"
self.mock_constructor(self.target_module, target_class_name).for_call(
*mock_args, **mock_kwargs
).with_wrapper(wrapper)
        # And get a reference to the patched class
target_class = getattr(self.target_module, target_class_name)
# Generic calls works (to_call_original)
with self.sub_example():
original_args = ("a", "b")
original_kwargs = {"c": "d"}
original_instance = target_class(*original_args, **original_kwargs)
self.assertTrue(issubclass(type(original_instance), original_target_class))
self.assertEqual(original_instance.args, original_args)
self.assertEqual(original_instance.kwargs, original_kwargs)
# for_call registered calls works
with self.sub_example():
mocked_instance = target_class(*mock_args, **mock_kwargs)
self.assertEqual(mocked_instance, "mocked")
@context.example
def accepts_module_as_string(self):
args = (6, 7)
kwargs = {"8": 9, "10": 11}
self.mock_constructor(self.target_module.__name__, target_class_name).for_call(
*args, **kwargs
).to_return_value("mocked")
target_class = getattr(self.target_module, target_class_name)
mocked_instance = target_class(*args, **kwargs)
self.assertEqual(mocked_instance, "mocked")
| 34.111111 | 87 | 0.650814 | 537 | 4,605 | 5.286778 | 0.27933 | 0.081367 | 0.056358 | 0.046495 | 0.284607 | 0.191265 | 0.150053 | 0.087355 | 0.087355 | 0 | 0 | 0.007289 | 0.255157 | 4,605 | 134 | 88 | 34.365672 | 0.820408 | 0.092291 | 0 | 0.132653 | 0 | 0 | 0.06025 | 0 | 0 | 0 | 0 | 0 | 0.183673 | 1 | 0.122449 | false | 0.010204 | 0.071429 | 0 | 0.214286 | 0.010204 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4c99f4348c4d415ef87c9b501e74698a5f4a3f4 | 3,418 | py | Python | full-problems/nQueen.py | vikas-t/DS-Algo | ea654d1cad5374c824c52da9d3815a9546eb43fa | [
"Apache-2.0"
] | null | null | null | full-problems/nQueen.py | vikas-t/DS-Algo | ea654d1cad5374c824c52da9d3815a9546eb43fa | [
"Apache-2.0"
] | null | null | null | full-problems/nQueen.py | vikas-t/DS-Algo | ea654d1cad5374c824c52da9d3815a9546eb43fa | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python3
#https://practice.geeksforgeeks.org/problems/n-queen-problem/0
queens = []
results = []
def isValidNaive(grid, N, x, y):
for i in range(N):
if grid[x][i] == True:
#print('x', i)
return False
for i in range(N):
if grid[i][y] == True:
#print(i, 'y')
return False
tx, ty = x+1, y+1
while tx < N and ty < N:
if grid[tx][ty] == True:
#print(+1, tx, ty)
return False
tx+=1
ty+=1
tx, ty = x-1, y-1
while tx >= 0 and ty >= 0:
if grid[tx][ty] == True:
#print(-1, tx, ty)
return False
tx-=1
ty-=1
tx, ty = x-1, y+1
while tx >= 0 and ty < N:
if grid[tx][ty] == True:
return False
tx-=1
ty+=1
tx, ty = x+1, y-1
while tx < N and ty >= 0:
if grid[tx][ty] == True:
return False
tx+=1
ty-=1
return True
def nQueenNaive(grid, N, n):
#print(N, n, x, y)
if n == 0:
return True
for i in range(N):
for j in range(N):
            if not isValidNaive(grid, N, i, j):
continue
grid[i][j] = True
            if nQueenNaive(grid, N, n-1):
return True
grid[i][j] = False
return False
def solNaive():
# The naive way
# Prints all possible solutions
N=4
res = []
for i in range(N):
for j in range(N):
grid = [[False for _ in range(N)] for _ in range(N)]
# The functions above modify the orginal grid, so we make a new
# one again and again
grid[i][j] = True
if nQueenNaive(grid, N, N-1):
if grid not in res:
res.append(grid)
for r in res:
print(r)
##************************************************************************
def isValid(grid, N, row, col):
for i in range(N):
# No horizontal attacks
if grid[row][i]:
return False
# We do not check the right side here as it is not yet filled
r, c = row-1, col-1
while r >= 0 and c >= 0:
# No left up diagonal attacks
if grid[r][c]:
return False
r-=1
c-=1
r, c = row+1, col-1
while r < N and c >= 0:
# No left down diagonal attacks
if grid[r][c]:
return False
r+=1
c-=1
return True
def nQueen(grid, N, n, col):
if n >= N:
results.append(queens[:])
return
for i in range(N):
if isValid(grid, N, i, col):
queens.append(i+1)
grid[i][col] = True
nQueen(grid, N, n+1, col+1)
# For every valid arrangement, recurse till N is exhausted and
# keep appending the results
            # Note that it's not returning at any point, making sure that
# every combination is tried at every level
grid[i][col] = False
queens.remove(i+1)
            # Undo the changes once sub-results are appended
def sol():
N=4
grid = [[False for _ in range(N)] for _ in range(N)]
nQueen(grid, N, 0, 0)
if results:
for r in results:
print("["+" ".join(str(x) for x in r)+" ]", end=" ")
# Print as per required in the question
else:
print(-1)
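# Illustrative note (added): for N=4 the solver prints the queen position for
# each row/column of every solution; the two classic 4-queens answers come out
# as [2 4 1 3 ] and [3 1 4 2 ].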
sol() | 25.318519 | 76 | 0.460503 | 497 | 3,418 | 3.158954 | 0.2334 | 0.053503 | 0.061147 | 0.042038 | 0.371975 | 0.318471 | 0.309554 | 0.286624 | 0.263694 | 0.263694 | 0 | 0.022983 | 0.401697 | 3,418 | 135 | 77 | 25.318519 | 0.744743 | 0.2244 | 0 | 0.458333 | 0 | 0 | 0.001902 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.21875 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4cf230dd1528592b7009b2d4e3ead035e04f470 | 8,459 | py | Python | spix_rim.py | DensoITLab/ss-with-RIM | 9a3a1d41a7b16a74919633a2ba0c1452b79c4ec4 | [
"BSD-2-Clause"
] | 10 | 2020-06-22T06:05:05.000Z | 2021-09-24T07:52:32.000Z | spix_rim.py | DensoITLab/ss-with-RIM | 9a3a1d41a7b16a74919633a2ba0c1452b79c4ec4 | [
"BSD-2-Clause"
] | null | null | null | spix_rim.py | DensoITLab/ss-with-RIM | 9a3a1d41a7b16a74919633a2ba0c1452b79c4ec4 | [
"BSD-2-Clause"
] | 5 | 2020-06-22T04:27:45.000Z | 2021-12-15T13:29:00.000Z | # Copyright (C) 2020 Denso IT Laboratory, Inc.
# All Rights Reserved
# Denso IT Laboratory, Inc. retains sole and exclusive ownership of all
# intellectual property rights including copyrights and patents related to this
# Software.
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from skimage.segmentation._slic import _enforce_label_connectivity_cython
def conv_in_relu(in_c, out_c):
return nn.Sequential(
nn.ReflectionPad2d(1),
nn.Conv2d(in_c, out_c, 3, bias=False),
nn.InstanceNorm2d(out_c, affine=True),
nn.ReLU()
)
class CNNRIM(nn.Module):
"""
code for
T.Suzuki, ICASSP2020
Superpixel Segmentation via Convolutional Neural Networks with Regularized Information Maximization
https://arxiv.org/abs/2002.06765
Args:
in_c: int
number of input channels. (5 indicates RGB+XY)
n_spix: int
number of superpixels
n_filters: int
number of filters in convolution filters.
At i-th layer, output channels are n_filters * 2^{i+1}
n_layers: int
number of convolution layers
use_recons: bool
if True, use reconstruction loss for optimization
use_last_inorm: bool
if True, use instance normalization layer for output
"""
def __init__(self, in_c=5, n_spix=100, n_filters=32, n_layers=5, use_recons=True, use_last_inorm=True):
super().__init__()
self.n_spix = n_spix
self.use_last_inorm = use_last_inorm
self.use_recons = use_recons
out_c = n_spix
if use_recons:
out_c += 3
layers = []
for i in range(n_layers-1):
layers.append(conv_in_relu(in_c, n_filters << i))
in_c = n_filters << i
layers.append(nn.Conv2d(in_c, out_c, 1))
self.layers = nn.Sequential(*layers)
if use_last_inorm:
self.norm = nn.InstanceNorm2d(n_spix, affine=True)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.xavier_normal_(m.weight)
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.InstanceNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def forward(self, x):
spix = self.layers(x)
if self.use_recons:
recons, spix = spix[:, :3], spix[:, 3:]
else:
recons = None
if self.use_last_inorm:
spix = self.norm(spix)
return spix, recons
def mutual_information(self, logits, coeff):
"""
Mutual information defined in eq. (2)
Args:
logits: torch.Tensor
A Tensor of shape (b, n, h, w)
coeff: float
corresponding to lambda in eq. (2)
"""
prob = logits.softmax(1)
pixel_wise_ent = - (prob * F.log_softmax(logits, 1)).sum(1).mean()
marginal_prob = prob.mean((2, 3))
marginal_ent = - (marginal_prob * torch.log(marginal_prob + 1e-16)).sum(1).mean()
return pixel_wise_ent - coeff * marginal_ent
def smoothness(self, logits, image):
"""
Smoothness loss defined in eq. (3)
Args:
logits: torch.Tensor
A Tensor of shape (b, n, h, w)
            image: torch.Tensor
A Tensor of shape (b, c, h, w)
"""
prob = logits.softmax(1)
dp_dx = prob[..., :-1] - prob[..., 1:]
dp_dy = prob[..., :-1, :] - prob[..., 1:, :]
di_dx = image[..., :-1] - image[..., 1:]
di_dy = image[..., :-1, :] - image[..., 1:, :]
return (dp_dx.abs().sum(1) * (-di_dx.pow(2).sum(1)/8).exp()).mean() + \
(dp_dy.abs().sum(1) * (-di_dy.pow(2).sum(1)/8).exp()).mean()
def reconstruction(self, recons, image):
"""
Reconstruction loss defined in eq. (4)
Args:
recons: torch.Tensor
A Tensor of shape (b, c, h, w)
            image: torch.Tensor
A Tensor of shape (b, c, h, w)
"""
return F.mse_loss(recons, image)
def __preprocess(self, image, device="cuda"):
image = torch.from_numpy(image).permute(2, 0, 1).float()[None]
h, w = image.shape[-2:]
coord = torch.stack(torch.meshgrid(torch.arange(h), torch.arange(w))).float()[None]
input = torch.cat([image, coord], 1).to(device)
input = (input - input.mean((2, 3), keepdim=True)) / input.std((2, 3), keepdim=True)
return input
def optimize(self, image, n_iter=500, lr=1e-2, lam=2, alpha=2, beta=10, device="cuda"):
"""
        optimize the model and generate superpixels
Args:
image: numpy.ndarray
An array of shape (h, w, c)
n_iter: int
number of iterations for SGD
lr: float
learning rate
lam: float
used in eq. (2)
alpha: float
used in eq. (1)
beta: float
used in eq. (1)
device: ["cpu", "cuda"]
Return:
spix: numpy.ndarray
An array of shape (h, w)
"""
input = self.__preprocess(image, device)
optimizer = optim.Adam(self.parameters(), lr)
for i in range(n_iter):
spix, recons = self.forward(input)
loss_mi = self.mutual_information(spix, lam)
loss_smooth = self.smoothness(spix, input)
loss = loss_mi + alpha * loss_smooth
if recons is not None:
loss = loss + beta * self.reconstruction(recons, input[:, :3])
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"[{i+1}/{n_iter}] loss {loss.item()}")
return self.calc_spixel(image, device)
def calc_spixel(self, image, device="cuda"):
"""
generate superpixels
Args:
image: numpy.ndarray
An array of shape (h, w, c)
device: ["cpu", "cuda"]
Return:
spix: numpy.ndarray
An array of shape (h, w)
"""
input = self.__preprocess(image, device)
spix, recons = self.forward(input)
spix = spix.argmax(1).squeeze().to("cpu").detach().numpy()
segment_size = spix.size / self.n_spix
min_size = int(0.06 * segment_size)
max_size = int(3.0 * segment_size)
spix = _enforce_label_connectivity_cython(
spix[None], min_size, max_size)[0]
return spix
if __name__ == "__main__":
import os
import argparse
import numpy as np
import matplotlib.pyplot as plt
from skimage.segmentation import mark_boundaries
parser = argparse.ArgumentParser()
parser.add_argument("--image", default=None, type=str, help="/path/to/image")
parser.add_argument("--n_spix", default=100, type=int, help="number of superpixels")
parser.add_argument("--n_filters", default=32, type=int, help="number of convolution filters")
parser.add_argument("--n_layers", default=5, type=int, help="number of convolution layers")
parser.add_argument("--lam", default=2, type=float, help="coefficient of marginal entropy")
parser.add_argument("--alpha", default=2, type=float, help="coefficient of smoothness loss")
parser.add_argument("--beta", default=2, type=float, help="coefficient of reconstruction loss")
parser.add_argument("--lr", default=1e-2, type=float, help="learning rate")
parser.add_argument("--n_iter", default=500, type=int, help="number of iterations")
parser.add_argument("--out_dir", default="./", type=str, help="output directory")
args = parser.parse_args()
device = "cuda" if torch.cuda.is_available() else "cpu"
model = CNNRIM(5, args.n_spix, args.n_filters, args.n_layers).to(device)
if args.image is None: # load sample image from scipy
import scipy.misc
img = scipy.misc.face()
else:
img = plt.imread(args.image)
spix = model.optimize(img, args.n_iter, args.lr, args.lam, args.alpha, args.beta, device)
plt.imsave(os.path.join(args.out_dir, "boundary.png"), mark_boundaries(img, spix))
np.save("spixel", spix) # save generated superpixel as .npy file
| 33.30315 | 107 | 0.576073 | 1,107 | 8,459 | 4.266486 | 0.237579 | 0.004235 | 0.035994 | 0.019056 | 0.191404 | 0.151387 | 0.123862 | 0.09549 | 0.09549 | 0.09549 | 0 | 0.020118 | 0.300745 | 8,459 | 253 | 108 | 33.434783 | 0.77836 | 0.239745 | 0 | 0.081967 | 0 | 0 | 0.067153 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07377 | false | 0 | 0.090164 | 0.008197 | 0.237705 | 0.008197 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4cf7ccc89c25b4c5ff1ca23b7964aa90f9fddbc | 1,659 | py | Python | tests/test_search.py | cangkevin/asain-show-rss-website-wrapper | ca7953e2b764705d1a78c691e1920b2f632e6bc9 | [
"MIT"
] | null | null | null | tests/test_search.py | cangkevin/asain-show-rss-website-wrapper | ca7953e2b764705d1a78c691e1920b2f632e6bc9 | [
"MIT"
] | 46 | 2018-12-16T02:38:34.000Z | 2021-06-01T22:44:45.000Z | tests/test_search.py | cangkevin/asain-show-rss-website-wrapper | ca7953e2b764705d1a78c691e1920b2f632e6bc9 | [
"MIT"
] | 1 | 2019-01-06T04:13:51.000Z | 2019-01-06T04:13:51.000Z | from elasticsearch import exceptions as es_exceptions
from website.search import client as search_client
def test_query_index(client, monkeypatch):
def mockreturn(*args, **kwargs):
return [
{"_id": 21344, "image": "fakeImageUrl1", "title": "ShowTitle1"},
{"_id": 21345, "image": "fakeImageUrl2", "title": "ShowTitle2"},
]
monkeypatch.setattr("website.search.routes.query_index", mockreturn)
response = client.get("/search?q=test")
assert response.status_code == 200
assert len(response.json) == 2
def test_no_elasticsearch_config(monkeypatch):
monkeypatch.setattr("website.search.client.current_app.elasticsearch", None)
response = search_client.query_index("test")
assert not response
def test_query_index_with_results(monkeypatch):
def mockreturn(*args, **kwargs):
source = {
"id": 23231,
"image": "imageUrl",
"title": "Running Man (VI) (Cantonese)",
}
return {"hits": {"hits": [{"_source": source}]}}
monkeypatch.setattr(
"website.search.client.current_app.elasticsearch.search", mockreturn
)
response = search_client.query_index("test")
assert response
assert len(response) == 1
def test_search_service_down(client, monkeypatch):
def mockreturn(*args, **kwargs):
raise es_exceptions.ConnectionError
monkeypatch.setattr(
"website.search.client.current_app.elasticsearch.search", mockreturn
)
response = client.get("/search?q=test")
assert response.status_code == 500
assert "Search service is temporarily down" == response.data.decode()
| 30.163636 | 80 | 0.670283 | 179 | 1,659 | 6.055866 | 0.363128 | 0.066421 | 0.092251 | 0.114391 | 0.479705 | 0.448339 | 0.374539 | 0.308118 | 0.252768 | 0.252768 | 0 | 0.020439 | 0.203737 | 1,659 | 54 | 81 | 30.722222 | 0.800151 | 0 | 0 | 0.282051 | 0 | 0 | 0.23689 | 0.113321 | 0 | 0 | 0 | 0 | 0.179487 | 1 | 0.179487 | false | 0 | 0.051282 | 0.025641 | 0.282051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4cfcb3b4c0f990eab014979791a926236fa15c9 | 1,387 | py | Python | day8.py | jjayala1/adventofCode2020 | d5587fd812368d2ff24f215d904ddf258dd0a4a8 | [
"MIT"
] | null | null | null | day8.py | jjayala1/adventofCode2020 | d5587fd812368d2ff24f215d904ddf258dd0a4a8 | [
"MIT"
] | null | null | null | day8.py | jjayala1/adventofCode2020 | d5587fd812368d2ff24f215d904ddf258dd0a4a8 | [
"MIT"
] | null | null | null | def clean_data():
f = open('day8.txt','r')
data = f.readlines()
for i,d in enumerate(data):
for r in (('\n',''),('bags',''),('bag',''),('.','')):
d = d.replace(*r)
data[i] = d
return data
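# Context (added): this is Advent of Code 2020 day 8; each line of day8.txt is
# an instruction like "acc +3", "jmp -2" or "nop +0". Part 1 reports the
# accumulator value when an instruction is about to run a second time; part 2
# swaps one jmp/nop so the program terminates.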
def get_acum(data):
acum = 0
loop = []
j = 0
for i,d in enumerate(data):
op, value = data[j].split()
loop.append(j)
acum += int(value) if op == 'acc' else 0
j += int(value) if op == 'jmp' else 1
if j in loop:
return acum,j
if j == len(data):
return acum,j
return acum,j
changed = []
def change_op(data):
global changed
datan = data.copy()
change = 0
for i,d in enumerate(data):
op, value = d.split()
if op == 'jmp' and change == 0 and i not in changed:
datan[i] = 'nop ' + value
changed.append(i)
change = 1
#print(f'changing {i}')
if op == 'nop' and change == 0 and i not in changed:
datan[i] = 'jmp ' + value
changed.append(i)
change = 1
if change == 0:
exit()
acum,j = get_acum(datan)
if j != len(data):
return change_op(data)
else:
return(acum,j)
data = clean_data()
acum = get_acum(data)
acum2 = change_op(data)
print(f'Part 1: {acum[0]}')
print(f'Part 2: {acum2[0]}')
| 19.263889 | 61 | 0.480894 | 198 | 1,387 | 3.328283 | 0.247475 | 0.037936 | 0.066768 | 0.031866 | 0.339909 | 0.291351 | 0.182094 | 0.182094 | 0.182094 | 0.097117 | 0 | 0.019187 | 0.361211 | 1,387 | 71 | 62 | 19.535211 | 0.724605 | 0.015862 | 0 | 0.204082 | 0 | 0 | 0.054252 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061224 | false | 0 | 0 | 0 | 0.163265 | 0.040816 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4d851f2fe934796a8ba3145b143b522073f3fdd | 1,105 | py | Python | agent/windows/src/agent_cli.py | liaojiacan/dyanmic-host | 0b47d8fa5b596e3e3d82d75992a00a97a9d4f457 | [
"MIT"
] | 4 | 2018-02-11T09:53:22.000Z | 2022-03-06T06:35:41.000Z | agent/windows/src/agent_cli.py | liaojiacan/dyanmic-host | 0b47d8fa5b596e3e3d82d75992a00a97a9d4f457 | [
"MIT"
] | null | null | null | agent/windows/src/agent_cli.py | liaojiacan/dyanmic-host | 0b47d8fa5b596e3e3d82d75992a00a97a9d4f457 | [
"MIT"
] | 1 | 2020-12-11T07:03:38.000Z | 2020-12-11T07:03:38.000Z | import argparse
import agent
parser = argparse.ArgumentParser()
parser.add_argument(
"--eth",
type=str,
required=True,
help="The network interface for monitor.")
parser.add_argument(
"--hostname",
type=str,
help="Custom hostname.")
parser.add_argument(
"--scheme",
type=str,
default="http",
help="server protocal.")
parser.add_argument(
"--server",
type=str,
required=True,
help="The server domain or ip[:port].")
parser.add_argument(
"--agent-key",
type=str,
required=True,
help="The agent-key for this machine.")
parser.add_argument(
'command',
help="run ->Start reporting host info to the cloud [server]. sync ->Start sync host file from cloud"
)
argv = parser.parse_args()
ag = agent.Agent(argv.agent_key,argv.server,argv.scheme)
if 'run'==argv.command:
ag.report(argv.hostname,ag.getLocalIp())
elif 'sync'==argv.command:
pass
else:
print("Ivalid command [%s]: \nUsage: \nrun ->Start reporting host info to the cloud [server].\nsync ->Start sync host file from cloud"%(argv.command))
| 27.625 | 154 | 0.660633 | 146 | 1,105 | 4.945205 | 0.40411 | 0.074792 | 0.141274 | 0.078947 | 0.296399 | 0.296399 | 0.188366 | 0.105263 | 0 | 0 | 0 | 0 | 0.19276 | 1,105 | 39 | 155 | 28.333333 | 0.809417 | 0 | 0 | 0.358974 | 0 | 0.025641 | 0.372851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.025641 | 0.051282 | 0 | 0.051282 | 0.025641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4da9c8055f7b1a4f4a746fcf61e20ddfe317f07 | 1,604 | py | Python | analysis/groups.py | weishi/praat | 132f6a1e152541780c97cebf95609375d506bff0 | [
"MIT"
] | 1 | 2020-05-21T15:26:07.000Z | 2020-05-21T15:26:07.000Z | analysis/groups.py | weishi/praat | 132f6a1e152541780c97cebf95609375d506bff0 | [
"MIT"
] | null | null | null | analysis/groups.py | weishi/praat | 132f6a1e152541780c97cebf95609375d506bff0 | [
"MIT"
] | null | null | null | import filter as ft
import analyzer as az
GROUP_A = ([
[ft.IsShanghainese(), ft.IsMandarin()],
[ft.IsMale(), ft.IsFemale()],
[ft.IsChild(), ft.IsYouth(), ft.IsAdult(), ft.IsSenior()],
[ft.IsVariant('a1'), ft.IsVariant('a2')],
], [
az.FormantQuantiles(),
az.FormantRegression(),
])
GROUP_C = ([
[ft.IsShanghainese(), ft.IsMandarin()],
[ft.IsMale(), ft.IsFemale()],
[ft.IsChild(), ft.IsYouth(), ft.IsAdult(), ft.IsSenior()],
[ft.IsVariant('c1'),
ft.IsVariant('c2'),
ft.IsVariant('c2vs'),
ft.IsVariant('c2h'),
ft.IsVariant('c4')],
[ft.IsWordNum([1, 2]),
ft.IsWordNum([3, 5, 6]),
ft.IsWordNum([7, 8, 9]),
ft.IsWordNum([10]),
ft.IsWordNum([11, 12, 13, 14, 15])],
], [
az.FormantQuantiles(),
az.FormantRegression(),
az.HnrRegression(),
# az.HnrQuantilesMean(),
])
GROUP_D1 = ([
[ft.IsShanghainese(), ft.IsMandarin()],
[ft.IsMale(), ft.IsFemale()],
[ft.IsChild(), ft.IsYouth(), ft.IsAdult(), ft.IsSenior()],
[ft.IsVariant('d1'), ft.IsVariant('d2')],
[ft.IsWordNum([1, 3]),
ft.IsWordNum([4]),
],
], [
az.FormantQuantiles(),
az.FormantRegression(),
az.HnrRegression(),
# az.HnrQuantilesMean()
])
GROUP_D2 = ([
[ft.IsShanghainese(), ft.IsMandarin()],
[ft.IsMale(), ft.IsFemale()],
[ft.IsChild(), ft.IsYouth(), ft.IsAdult(), ft.IsSenior()],
[ft.IsVariant('d2n'), ft.IsVariant('d2h')],
[ft.IsWordNum([5, 6, 7, 11, 12, 13, 14]), ],
], [
az.FormantQuantiles(),
az.FormantRegression(),
az.HnrRegression(),
# az.HnrQuantilesMean()
])
| 25.870968 | 62 | 0.569202 | 181 | 1,604 | 5.022099 | 0.270718 | 0.133113 | 0.079208 | 0.123212 | 0.655666 | 0.655666 | 0.655666 | 0.655666 | 0.578658 | 0.413641 | 0 | 0.036154 | 0.189526 | 1,604 | 61 | 63 | 26.295082 | 0.663077 | 0.041147 | 0 | 0.574074 | 0 | 0 | 0.017601 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037037 | 0 | 0.037037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4ddbcced9869b2e87bddea13e35866d2ef3696e | 7,560 | py | Python | cifar_net_search/syclop_cifar_convlstm_102.py | sashkarivkind/imagewalker | 999e1ae78cfe1512e1be894d9e7891a7d0c41233 | [
"Apache-2.0"
] | 2 | 2021-04-28T13:33:45.000Z | 2021-11-09T14:31:09.000Z | cifar_net_search/syclop_cifar_convlstm_102.py | sashkarivkind/imagewalker | 999e1ae78cfe1512e1be894d9e7891a7d0c41233 | [
"Apache-2.0"
] | null | null | null | cifar_net_search/syclop_cifar_convlstm_102.py | sashkarivkind/imagewalker | 999e1ae78cfe1512e1be894d9e7891a7d0c41233 | [
"Apache-2.0"
] | 1 | 2021-03-07T13:25:59.000Z | 2021-03-07T13:25:59.000Z | '''
This network architecture performed perfectly while training with a teacher.
Let's see how it performs without one.
'''
from __future__ import division, print_function, absolute_import
print('Starting..................................')
import sys
sys.path.insert(1, '/home/labs/ahissarlab/orra/imagewalker')
sys.path.insert(1, '/home/orram/Documents/GitHub/imagewalker')
import numpy as np
import cv2
import misc
import pandas as pd
import matplotlib.pyplot as plt
import pickle
from keras_utils import dataset_update, write_to_file, create_cifar_dataset
import tensorflow.keras as keras
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
# load dataset
(trainX, trainy), (testX, testy) = cifar10.load_data()
images, labels = trainX, trainy
#Define function for low resolution lens on syclop
def bad_res101(img,res):
sh=np.shape(img)
dwnsmp=cv2.resize(img,res, interpolation = cv2.INTER_CUBIC)
upsmp = cv2.resize(dwnsmp,sh[:2], interpolation = cv2.INTER_CUBIC)
return upsmp
def bad_res102(img,res):
sh=np.shape(img)
dwnsmp=cv2.resize(img,res, interpolation = cv2.INTER_AREA)
return dwnsmp
import importlib
importlib.reload(misc)
from misc import Logger
import os
def deploy_logs():
if not os.path.exists(hp.save_path):
os.makedirs(hp.save_path)
dir_success = False
for sfx in range(1): # todo legacy
candidate_path = hp.save_path + '/' + hp.this_run_name + '_' + str(os.getpid()) + '/'
if not os.path.exists(candidate_path):
hp.this_run_path = candidate_path
os.makedirs(hp.this_run_path)
            dir_success = True
break
if not dir_success:
        raise RuntimeError('run name already exists!')
sys.stdout = Logger(hp.this_run_path+'log.log')
print('results are in:', hp.this_run_path)
print('description: ', hp.description)
#print('hyper-parameters (partial):', hp.dict)
kernel_regularizer_list = [None, keras.regularizers.l1(),keras.regularizers.l2(),keras.regularizers.l1_l2()]
optimizer_list = [tf.keras.optimizers.Adam, tf.keras.optimizers.Nadam, tf.keras.optimizers.RMSprop]
if len(sys.argv) > 1:
paramaters = {
'epochs' : int(sys.argv[1]),
'sample' : int(sys.argv[2]),
'res' : int(sys.argv[3]),
'hidden_size' : int(sys.argv[4]),
'concat' : int(sys.argv[5]),
'regularizer' : kernel_regularizer_list[int(sys.argv[6])],
'optimizer' : optimizer_list[int(sys.argv[7])],
'cnn_dropout' : 0.4,
'rnn_dropout' : 0.2,
'lr' : 5e-4,
'run_id' : np.random.randint(1000,9000)
}
else:
paramaters = {
'epochs' : 1,
'sample' : 5,
'res' : 8,
'hidden_size' : 128,
'concat' : 1,
'regularizer' : None,
'optimizer' : optimizer_list[0],
'cnn_dropout' : 0.4,
'rnn_dropout' : 0.2,
'lr' : 5e-4,
'run_id' : np.random.randint(1000,9000)
}
print(paramaters)
for key,val in paramaters.items():
exec(key + '=val')
epochs = epochs
sample = sample
res = res
hidden_size =hidden_size
concat = concat
regularizer = regularizer
optimizer = optimizer
cnn_dropout = cnn_dropout
rnn_dropout = rnn_dropout
lr = lr
run_id = run_id
n_timesteps = sample
def split_dataset_xy(dataset):
dataset_x1 = [uu[0] for uu in dataset]
dataset_x2 = [uu[1] for uu in dataset]
dataset_y = [uu[-1] for uu in dataset]
return (np.array(dataset_x1),np.array(dataset_x2)[:,:n_timesteps,:]),np.array(dataset_y)
def convgru_cnn(n_timesteps = 5, cell_size = 128, input_size = 28,input_dim = 3, concat = False,
optimizer = tf.keras.optimizers.Adam):
inputA = keras.layers.Input(shape=(n_timesteps,input_size,input_size,input_dim))
inputB = keras.layers.Input(shape=(n_timesteps,2))
num_feature = 64
# define LSTM model
x = keras.layers.ConvLSTM2D(32,(3,3), padding = 'same', return_sequences=True,
dropout = cnn_dropout,recurrent_dropout=rnn_dropout,
name = 'convLSTM1')(inputA)
x = keras.layers.ConvLSTM2D(64,(3,3), padding = 'same', return_sequences=True,
name = 'convLSTM2',
dropout = cnn_dropout,recurrent_dropout=rnn_dropout,)(x)
x = keras.layers.ConvLSTM2D(num_feature,(3,3), padding = 'same',
name = 'convLSTM3', activation='relu',
dropout = cnn_dropout,recurrent_dropout=rnn_dropout,)(x)
print(x.shape)
x = keras.layers.Conv2D(128,(3,3),activation='relu', padding = 'same',
name = 'cnn3')(x)
x = keras.layers.Conv2D(128,(3,3),activation='relu', padding = 'same',
name = 'cnn32')(x)
x = keras.layers.MaxPooling2D((2, 2),
name = 'max_pool3')(x)
x = keras.layers.Dropout(cnn_dropout)(x)
#Flatten and add linear layer and softmax
x = keras.layers.Flatten()(x)
x = keras.layers.Dense(128,activation="relu",
name = 'fc1')(x)
x = keras.layers.Dense(10,activation="softmax",
name = 'final')(x)
model = keras.models.Model(inputs=[inputA,inputB],outputs=x, name = 'convgru_cnn_{}'.format(concat))
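    # Added note: the learning rate is hard-coded to 3e-3 below, so the `lr`
    # hyper-parameter parsed above (default 5e-4) is not actually used here.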
opt=optimizer(lr=3e-3)
model.compile(
optimizer=opt,
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
return model
rnn_net = convgru_cnn(n_timesteps = sample, cell_size = hidden_size,input_size = res , concat = concat)
#%%
train_dataset, test_dataset = create_cifar_dataset(images, labels,res = res,
sample = sample, return_datasets=True,
mixed_state = False, add_seed = 0,
)#bad_res_func = bad_res101, up_sample = True)
train_dataset_x, train_dataset_y = split_dataset_xy(train_dataset)
test_dataset_x, test_dataset_y = split_dataset_xy(test_dataset)
#%%
print("##################### Fit {} and trajectories model on training data res = {} ##################".format(rnn_net.name,res))
rnn_history = rnn_net.fit(
train_dataset_x,
train_dataset_y,
batch_size=64,
epochs=epochs,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(test_dataset_x, test_dataset_y),
verbose = 2)
print('################# {} Validation Accuracy = '.format(rnn_net.name),rnn_history.history['val_sparse_categorical_accuracy'])
print('################# {} Training Accuracy = '.format(rnn_net.name),rnn_history.history['sparse_categorical_accuracy'])
plt.figure()
plt.plot(rnn_history.history['sparse_categorical_accuracy'], label = 'train')
plt.plot(rnn_history.history['val_sparse_categorical_accuracy'], label = 'val')
plt.legend()
plt.title('{} on cifar res = {} hs = {} dropout = {}'.format(rnn_net.name, res, hidden_size,cnn_dropout))
plt.savefig('{} on Cifar res = {} val accur = {} hs = {} dropout = {}.png'.format(rnn_net.name,res,rnn_history.history['val_sparse_categorical_accuracy'][-1], hidden_size,cnn_dropout))
with open('/home/labs/ahissarlab/orra/imagewalker/cifar_net_search/{}_{}'.format(rnn_net.name, run_id), 'wb') as file_pi:
    pickle.dump(rnn_history.history, file_pi)
dataset_update(rnn_history, rnn_net, paramaters)  # 'paramaters' [sic] is defined earlier in the script
write_to_file(rnn_history, rnn_net,paramaters) | 33.6 | 184 | 0.64246 | 993 | 7,560 | 4.694864 | 0.280967 | 0.028314 | 0.02574 | 0.016731 | 0.29837 | 0.223938 | 0.167525 | 0.108966 | 0.072501 | 0.072501 | 0 | 0.023633 | 0.216402 | 7,560 | 225 | 185 | 33.6 | 0.763336 | 0.057143 | 0 | 0.101266 | 0 | 0 | 0.143058 | 0.057251 | 0 | 0 | 0 | 0.004444 | 0 | 1 | 0.031646 | false | 0 | 0.101266 | 0 | 0.158228 | 0.056962 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4e049be21b483469b697920ab978fe3ed18d99c | 7,802 | py | Python | spyderlib/utils/introspection/utils.py | SylvainCorlay/spyder | b87bfa08abd53e1c97b59feeb51f665f6a632415 | [
"MIT"
] | 1 | 2016-05-04T23:12:27.000Z | 2016-05-04T23:12:27.000Z | spyderlib/utils/introspection/utils.py | SylvainCorlay/spyder | b87bfa08abd53e1c97b59feeb51f665f6a632415 | [
"MIT"
] | null | null | null | spyderlib/utils/introspection/utils.py | SylvainCorlay/spyder | b87bfa08abd53e1c97b59feeb51f665f6a632415 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Copyright © 2013 The Spyder Development Team
# Licensed under the terms of the MIT License
# (see spyderlib/__init__.py for details)
"""
Introspection utilities used by Spyder
"""
import imp
import os
import pickle
import os.path as osp
import re

from spyderlib.utils.misc import memoize
from spyderlib.utils.syntaxhighlighters import (
    custom_extension_lexer_mapping
)

from pygments.lexer import words
from pygments.lexers import (
    get_lexer_for_filename, get_lexer_by_name, TextLexer
)
from pygments.util import ClassNotFound
from pygments.token import Token


class CodeInfo(object):

    id_regex = re.compile(r'[^\d\W][\w\.]*', re.UNICODE)
    func_call_regex = re.compile(r'([^\d\W][\w\.]*)\([^\)\()]*\Z',
                                 re.UNICODE)

    def __init__(self, name, source_code, position, filename=None,
                 is_python_like=False, in_comment_or_string=False, **kwargs):
        self.__dict__.update(kwargs)
        self.name = name
        self.filename = filename
        self.source_code = source_code
        self.is_python_like = is_python_like
        self.in_comment_or_string = in_comment_or_string

        self.position = position

        # if in a comment, look for the previous definition
        if in_comment_or_string:
            # if this is a docstring, find it and store it
            self.docstring = self._get_docstring()
            # backtrack and look for a line that starts with def or class
            if name != 'completions':
                while position:
                    base = self.source_code[position: position + 6]
                    if base.startswith('def ') or base.startswith('class '):
                        position += base.index(' ') + 1
                        break
                    position -= 1
        else:
            self.docstring = ''

        self.position = position

        if position == 0:
            self.lines = []
            self.column = 0
            self.line_num = 0
            self.line = ''
            self.obj = ''
            self.full_obj = ''
        else:
            self._get_info()

    def _get_info(self):
        self.lines = self.source_code[:self.position].splitlines()
        self.line_num = len(self.lines)

        self.line = self.lines[-1]
        self.column = len(self.lines[-1])

        full_line = self.source_code.splitlines()[self.line_num - 1]

        lexer = find_lexer_for_filename(self.filename)

        # check for a text-based lexer that doesn't split tokens
        if len(list(lexer.get_tokens('a b'))) == 1:
            # Use regex to get the information
            tokens = re.findall(self.id_regex, self.line)
            if tokens and self.line.endswith(tokens[-1]):
                self.obj = tokens[-1]
            else:
                self.obj = None

            self.full_obj = self.obj

            if self.obj:
                full_line = self.source_code.splitlines()[self.line_num - 1]
                rest = full_line[self.column:]
                match = re.match(self.id_regex, rest)
                if match:
                    self.full_obj = self.obj + match.group()

            self.context = None
        else:
            # Use the lexer to get the information
            pos = 0
            line_tokens = lexer.get_tokens(full_line)
            for (context, token) in line_tokens:
                pos += len(token)
                if pos >= self.column:
                    self.obj = token[:len(token) - (pos - self.column)]
                    self.full_obj = token
                    if context in Token.Literal.String:
                        context = Token.Literal.String
                    self.context = context
                    break

        if (self.name in ['info', 'definition']
                and self.context not in Token.Name
                and self.is_python_like):
            func_call = re.findall(self.func_call_regex, self.line)
            if func_call:
                self.obj = func_call[-1]
                self.column = self.line.index(self.obj) + len(self.obj)
                self.position = self.position - len(self.line) + self.column

    def _get_docstring(self):
        """Find the docstring we are currently in"""
        left = self.position
        while left:
            if self.source_code[left: left + 3] in ['"""', "'''"]:
                left += 3
                break
            left -= 1
        right = self.position
        while right < len(self.source_code):
            if self.source_code[right - 3: right] in ['"""', "'''"]:
                right -= 3
                break
            right += 1
        if left and right < len(self.source_code):
            return self.source_code[left: right]
        return ''

    def __eq__(self, other):
        try:
            return self.serialize() == other.serialize()
        except Exception:
            return False

    def __getitem__(self, item):
        """Allow dictionary-like access"""
        return getattr(self, item)

    def serialize(self):
        state = {}
        for (key, value) in self.__dict__.items():
            try:
                pickle.dumps(value)
                state[key] = value
            except Exception:
                pass
        state['id_regex'] = self.id_regex
        state['func_call_regex'] = self.func_call_regex
        return state


def find_lexer_for_filename(filename):
    """Get a Pygments Lexer given a filename."""
    filename = filename or ''
    root, ext = os.path.splitext(filename)
    if ext in custom_extension_lexer_mapping:
        lexer = get_lexer_by_name(custom_extension_lexer_mapping[ext])
    else:
        try:
            lexer = get_lexer_for_filename(filename)
        except ClassNotFound:
            return TextLexer()
    return lexer
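

# Illustrative check (editorial addition, not part of the original module):
# unknown extensions fall back to a plain TextLexer, while known ones resolve
# through Pygments. The file names here are arbitrary examples.
def _demo_find_lexer_for_filename():
    assert isinstance(find_lexer_for_filename('notes.unknown-ext'), TextLexer)
    assert find_lexer_for_filename('script.py').name == 'Python'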


def get_keywords(lexer):
    """Get the keywords for a given lexer."""
    if not hasattr(lexer, 'tokens'):
        return []
    if 'keywords' in lexer.tokens:
        try:
            return lexer.tokens['keywords'][0][0].words
        except Exception:
            pass
    keywords = []
    for vals in lexer.tokens.values():
        for val in vals:
            try:
                if isinstance(val[0], words):
                    keywords.extend(val[0].words)
                else:
                    ini_val = val[0]
                    if ')\\b' in val[0] or ')(\\s+)' in val[0]:
                        val = re.sub(r'\\.', '', val[0])
                        val = re.sub('[^0-9a-zA-Z|]+', '', val)
                        if '|' in ini_val:
                            keywords.extend(val.split('|'))
                        else:
                            keywords.append(val)
            except Exception:
                continue
    return keywords


@memoize
def get_parent_until(path):
    """
    Given a file path, determine the full module path.

    e.g. '/usr/lib/python2.7/dist-packages/numpy/core/__init__.pyc' yields
    'numpy.core'
    """
    dirname = osp.dirname(path)
    try:
        mod = osp.basename(path)
        mod = osp.splitext(mod)[0]
        imp.find_module(mod, [dirname])
    except ImportError:
        return
    items = [mod]
    while True:
        items.append(osp.basename(dirname))
        try:
            dirname = osp.dirname(dirname)
            imp.find_module('__init__', [dirname + os.sep])
        except ImportError:
            break
    return '.'.join(reversed(items))


if __name__ == '__main__':
    code = 'import numpy'
    test = CodeInfo('test', code, len(code) - 2)
    assert test.obj == 'num'
    assert test.full_obj == 'numpy'
    test2 = CodeInfo('test', code, len(code) - 2)
    assert test == test2
    test3 = pickle.loads(pickle.dumps(test2.__dict__))
    assert test3['full_obj'] == 'numpy'
| 31.333333 | 84 | 0.542297 | 914 | 7,802 | 4.463895 | 0.233042 | 0.029412 | 0.034314 | 0.016667 | 0.072549 | 0.047059 | 0.047059 | 0.038235 | 0.021569 | 0.021569 | 0 | 0.009245 | 0.348372 | 7,802 | 248 | 85 | 31.459677 | 0.793076 | 0.096514 | 0 | 0.160428 | 0 | 0 | 0.032484 | 0.00415 | 0 | 0 | 0 | 0 | 0.02139 | 1 | 0.048128 | false | 0.010695 | 0.074866 | 0 | 0.208556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4e10d8d09e31f46182e9e1e9d9bcb42f1933304 | 2,641 | py | Python | Website/Members/parser.py | sdeusch/django_member_management | ff649ce2845ac6774d6a4187d716349e7eb4a7b8 | [
"Apache-2.0"
] | null | null | null | Website/Members/parser.py | sdeusch/django_member_management | ff649ce2845ac6774d6a4187d716349e7eb4a7b8 | [
"Apache-2.0"
] | null | null | null | Website/Members/parser.py | sdeusch/django_member_management | ff649ce2845ac6774d6a4187d716349e7eb4a7b8 | [
"Apache-2.0"
] | null | null | null | import csv
import os
from pathlib import Path
from .models import Member, Account
class Row:
'''Local Class used by Parser below'''
def __init__(self, first_name, last_name, phone_number, client_member_id, account_id):
self.first_name = first_name
self.last_name = last_name
self.phone_number = phone_number
self.client_member_id = client_member_id
self.account_id = account_id
class Parser:
'''Parser for Member Files in CSV format
It is a best-effort parser, i.e. it won't stop if a line is formatted incorrectly
A line should be a tuple of ( id, firstname, lastname, phone number, member ID, account ID )
'''
def __init__(self, file_name):
self.file_name = file_name
def format_row(self, row):
'''Attempts to parse a single row'''
try:
if len(row) != 6:
raise ValueError(f"Expecting 6 comma separated input columns: {row}")
# the ID is ignored as we want to find duplicates
fname = row[1]
lname = row[2]
phone = int(row[3])
cmember_id = int(row[4])
acc_id = int(row[5])
return Row(fname, lname, phone, cmember_id, acc_id)
except Exception as exc:
print(f"A parsing error {exc} was caused by \"{row}\", moving on...")
return None
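
    # Illustrative behaviour (editorial example with made-up rows, mirroring
    # the sample line quoted in process_csv below):
    #   format_row(["893", "Ollie", "Capstack", "9050093979", "3807423", "18"])
    #       -> Row("Ollie", "Capstack", 9050093979, 3807423, 18)
    #   format_row(["too", "short"])
    #       -> prints the error and returns None, so parsing continues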

    def process_csv(self, fname):
        '''Processes a local CSV file.'''
        BASE_DIR = Path(__file__).resolve().parent.parent
        fname = os.path.join(BASE_DIR, 'media', fname)
        with open(fname) as csvfile:
            reader = csv.reader(csvfile, delimiter=',', quotechar='"')
            # 893, Ollie, Capstack, 9050093979, 3807423, 18
            for row in reader:
                data = self.format_row(row)
                if not data:
                    continue
                print(data)
                members = Member.objects.filter(first_name=data.first_name, last_name=data.last_name,
                                                phone_number=data.phone_number,
                                                client_member_id=data.client_member_id)
                if not members:
                    member = Member(first_name=data.first_name, last_name=data.last_name,
                                    phone_number=data.phone_number,
                                    client_member_id=data.client_member_id)
                    member.save()
                    # create the member-to-account mapping
                    memberAccount = Account(member=member, account_id=data.account_id)
                    memberAccount.save()
print(f"Saved new member {member}") | 38.838235 | 133 | 0.579326 | 331 | 2,641 | 4.429003 | 0.383686 | 0.060027 | 0.066849 | 0.034789 | 0.153479 | 0.136426 | 0.136426 | 0.136426 | 0.136426 | 0.136426 | 0 | 0.016487 | 0.333964 | 2,641 | 68 | 134 | 38.838235 | 0.816941 | 0.162438 | 0 | 0.043478 | 0 | 0 | 0.060341 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.086957 | 0 | 0.26087 | 0.065217 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4e14c0f30bfaf19eddb7ae25726c34a94d2e7f6 | 43,870 | py | Python | train_stage3.py | chirain1206/Improvement-on-OTT-QA | 694efa208aa914ee4d83305778abf4bef479b575 | [
"MIT"
] | 94 | 2020-10-08T23:56:26.000Z | 2022-03-24T03:17:22.000Z | train_stage3.py | chirain1206/Improvement-on-OTT-QA | 694efa208aa914ee4d83305778abf4bef479b575 | [
"MIT"
] | 8 | 2021-03-22T07:38:03.000Z | 2021-11-18T15:08:55.000Z | train_stage3.py | chirain1206/Improvement-on-OTT-QA | 694efa208aa914ee4d83305778abf4bef479b575 | [
"MIT"
] | 20 | 2020-10-09T05:11:33.000Z | 2022-03-21T08:37:22.000Z | # coding=utf-8
# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Finetuning the library models for question-answering on SQuAD (DistilBERT, Bert, XLM, XLNet)."""
import argparse
import glob
import logging
import os
import re
import collections
import random
import timeit
import json
import numpy as np
from datetime import datetime

import torch
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
from torch.utils.data.distributed import DistributedSampler
from tqdm import tqdm, trange
from multiprocessing import Pool, cpu_count
from functools import partial
from torch.utils.data import TensorDataset

from transformers import (WEIGHTS_NAME, AdamW, BertConfig, BertTokenizer,
                          BertForQuestionAnswering, get_linear_schedule_with_warmup)
# compute_exact, compute_f1 and normalize_answer are required by
# get_raw_scores() below (they were missing from the original import list).
from transformers.data.metrics.squad_metrics import (
    compute_predictions_log_probs,
    compute_predictions_logits,
    compute_exact,
    compute_f1,
    normalize_answer,
)
import string
from transformers.data.processors.utils import DataProcessor

from utils import readGZip

try:
    from torch.utils.tensorboard import SummaryWriter
except ImportError:
    from tensorboardX import SummaryWriter

logger = logging.getLogger(__name__)

ALL_MODELS = sum(
    (
        tuple(conf.pretrained_config_archive_map.keys())
        for conf in (BertConfig, )
    ),
    (),
)

MODEL_CLASSES = {"bert": (BertConfig, BertForQuestionAnswering, BertTokenizer)}


def squad_convert_example_to_features_init(tokenizer_for_convert):
    global tokenizer
    tokenizer = tokenizer_for_convert


def _improve_answer_span(doc_tokens, input_start, input_end, tokenizer, orig_answer_text):
    """Returns tokenized answer spans that better match the annotated answer."""
    tok_answer_text = " ".join(tokenizer.tokenize(orig_answer_text))

    for new_start in range(input_start, input_end + 1):
        for new_end in range(input_end, new_start - 1, -1):
            text_span = " ".join(doc_tokens[new_start : (new_end + 1)])
            if text_span == tok_answer_text:
                return (new_start, new_end)

    return (input_start, input_end)


def _new_check_is_max_context(doc_spans, cur_span_index, position):
    """Check if this is the 'max context' doc span for the token."""
    # if len(doc_spans) == 1:
    #     return True
    best_score = None
    best_span_index = None
    for (span_index, doc_span) in enumerate(doc_spans):
        end = doc_span["start"] + doc_span["length"] - 1
        if position < doc_span["start"]:
            continue
        if position > end:
            continue
        num_left_context = position - doc_span["start"]
        num_right_context = end - position
        score = min(num_left_context, num_right_context) + 0.01 * doc_span["length"]
        if best_score is None or score > best_score:
            best_score = score
            best_span_index = span_index

    return cur_span_index == best_span_index
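

# Worked example (editorial addition, not in the original file): a token that
# sits near the edge of one overlapping span but near the middle of another is
# assigned to the latter. The span dicts below are made up for illustration.
def _demo_max_context():
    spans = [{"start": 0, "length": 16}, {"start": 8, "length": 16}]
    # token 14 is 1 token from the end of span 0, but 6 and 9 tokens from the
    # edges of span 1, so span 1 is its "max context" home
    assert not _new_check_is_max_context(spans, 0, 14)
    assert _new_check_is_max_context(spans, 1, 14)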


class SquadFeatures(object):
    def __init__(
        self,
        input_ids,
        attention_mask,
        token_type_ids,
        cls_index,
        p_mask,
        example_index,
        unique_id,
        paragraph_len,
        token_is_max_context,
        tokens,
        token_to_orig_map,
        start_position,
        end_position,
        is_impossible,
    ):
        self.input_ids = input_ids
        self.attention_mask = attention_mask
        self.token_type_ids = token_type_ids
        self.cls_index = cls_index
        self.p_mask = p_mask

        self.example_index = example_index
        self.unique_id = unique_id
        self.paragraph_len = paragraph_len
        self.token_is_max_context = token_is_max_context
        self.tokens = tokens
        self.token_to_orig_map = token_to_orig_map

        self.start_position = start_position
        self.end_position = end_position
        self.is_impossible = is_impossible


class SquadResult(object):
    """
    Constructs a SquadResult which can be used to evaluate a model's output on the SQuAD dataset.

    Args:
        unique_id: The unique identifier corresponding to that example.
        start_logits: The logits corresponding to the start of the answer.
        end_logits: The logits corresponding to the end of the answer.
    """

    def __init__(self, unique_id, start_logits, end_logits, start_top_index=None, end_top_index=None, cls_logits=None):
        self.start_logits = start_logits
        self.end_logits = end_logits
        self.unique_id = unique_id

        if start_top_index:
            self.start_top_index = start_top_index
            self.end_top_index = end_top_index
            self.cls_logits = cls_logits


def squad_convert_example_to_features(example, max_seq_length, doc_stride, max_query_length, is_training):
    features = []
    if is_training and not example.is_impossible:
        # Get start and end position
        start_position = example.start_position
        end_position = example.end_position

        # If the answer cannot be found in the text, then skip this example.
        actual_text = " ".join(example.doc_tokens[start_position : (end_position + 1)])
        if actual_text.find(example.answer_text) == -1:
            logger.warning("Could not find answer: '%s' vs. '%s' in '%s'", actual_text, example.answer_text, example.qas_id)
            return []

    tok_to_orig_index = []
    orig_to_tok_index = []
    all_doc_tokens = []
    for (i, token) in enumerate(example.doc_tokens):
        orig_to_tok_index.append(len(all_doc_tokens))
        sub_tokens = tokenizer.tokenize(token)
        for sub_token in sub_tokens:
            tok_to_orig_index.append(i)
            all_doc_tokens.append(sub_token)

    if is_training and not example.is_impossible:
        tok_start_position = orig_to_tok_index[example.start_position]
        if example.end_position < len(example.doc_tokens) - 1:
            tok_end_position = orig_to_tok_index[example.end_position + 1] - 1
        else:
            tok_end_position = len(all_doc_tokens) - 1

        (tok_start_position, tok_end_position) = _improve_answer_span(
            all_doc_tokens, tok_start_position, tok_end_position, tokenizer, example.answer_text
        )

    spans = []

    truncated_query = tokenizer.encode(example.question_text, add_special_tokens=False, max_length=max_query_length)
    sequence_added_tokens = (
        tokenizer.max_len - tokenizer.max_len_single_sentence + 1
        if "roberta" in str(type(tokenizer)) or "camembert" in str(type(tokenizer))
        else tokenizer.max_len - tokenizer.max_len_single_sentence
    )
    sequence_pair_added_tokens = tokenizer.max_len - tokenizer.max_len_sentences_pair

    span_doc_tokens = all_doc_tokens
    while len(spans) * doc_stride < len(all_doc_tokens):

        encoded_dict = tokenizer.encode_plus(
            truncated_query if tokenizer.padding_side == "right" else span_doc_tokens,
            span_doc_tokens if tokenizer.padding_side == "right" else truncated_query,
            max_length=max_seq_length,
            return_overflowing_tokens=True,
            pad_to_max_length=True,
            stride=max_seq_length - doc_stride - len(truncated_query) - sequence_pair_added_tokens,
            truncation_strategy="only_second" if tokenizer.padding_side == "right" else "only_first",
        )

        paragraph_len = min(
            len(all_doc_tokens) - len(spans) * doc_stride,
            max_seq_length - len(truncated_query) - sequence_pair_added_tokens,
        )

        if tokenizer.pad_token_id in encoded_dict["input_ids"]:
            if tokenizer.padding_side == "right":
                non_padded_ids = encoded_dict["input_ids"][: encoded_dict["input_ids"].index(tokenizer.pad_token_id)]
            else:
                last_padding_id_position = (
                    len(encoded_dict["input_ids"]) - 1 - encoded_dict["input_ids"][::-1].index(tokenizer.pad_token_id)
                )
                non_padded_ids = encoded_dict["input_ids"][last_padding_id_position + 1 :]
        else:
            non_padded_ids = encoded_dict["input_ids"]

        tokens = tokenizer.convert_ids_to_tokens(non_padded_ids)

        token_to_orig_map = {}
        for i in range(paragraph_len):
            index = len(truncated_query) + sequence_added_tokens + i if tokenizer.padding_side == "right" else i
            token_to_orig_map[index] = tok_to_orig_index[len(spans) * doc_stride + i]

        encoded_dict["paragraph_len"] = paragraph_len
        encoded_dict["tokens"] = tokens
        encoded_dict["token_to_orig_map"] = token_to_orig_map
        encoded_dict["truncated_query_with_special_tokens_length"] = len(truncated_query) + sequence_added_tokens
        encoded_dict["token_is_max_context"] = {}
        encoded_dict["start"] = len(spans) * doc_stride
        encoded_dict["length"] = paragraph_len

        spans.append(encoded_dict)

        if "overflowing_tokens" not in encoded_dict:
            break
        span_doc_tokens = encoded_dict["overflowing_tokens"]

    for doc_span_index in range(len(spans)):
        for j in range(spans[doc_span_index]["paragraph_len"]):
            is_max_context = _new_check_is_max_context(spans, doc_span_index, doc_span_index * doc_stride + j)
            index = (
                j
                if tokenizer.padding_side == "left"
                else spans[doc_span_index]["truncated_query_with_special_tokens_length"] + j
            )
            spans[doc_span_index]["token_is_max_context"][index] = is_max_context

    for span in spans:
        # Identify the position of the CLS token
        cls_index = span["input_ids"].index(tokenizer.cls_token_id)

        # p_mask: mask with 1 for tokens that cannot be in the answer (0 for tokens which can be in an answer)
        # Original TF implem also keeps the classification token (set to 0) (not sure why...)
        p_mask = np.array(span["token_type_ids"])
        p_mask = np.minimum(p_mask, 1)

        if tokenizer.padding_side == "right":
            # Limit positive values to one
            p_mask = 1 - p_mask

        p_mask[np.where(np.array(span["input_ids"]) == tokenizer.sep_token_id)[0]] = 1

        # Set the CLS index to '0'
        p_mask[cls_index] = 0

        span_is_impossible = example.is_impossible
        start_position = 0
        end_position = 0
        if is_training and not span_is_impossible:
            # For training, if our document chunk does not contain an annotation
            # we throw it out, since there is nothing to predict.
            doc_start = span["start"]
            doc_end = span["start"] + span["length"] - 1
            out_of_span = False

            if not (tok_start_position >= doc_start and tok_end_position <= doc_end):
                out_of_span = True

            if out_of_span:
                start_position = cls_index
                end_position = cls_index
                span_is_impossible = True
            else:
                if tokenizer.padding_side == "left":
                    doc_offset = 0
                else:
                    doc_offset = len(truncated_query) + sequence_added_tokens

                start_position = tok_start_position - doc_start + doc_offset
                end_position = tok_end_position - doc_start + doc_offset

        features.append(
            SquadFeatures(
                span["input_ids"],
                span["attention_mask"],
                span["token_type_ids"],
                cls_index,
                p_mask.tolist(),
                example_index=0,  # Can not set unique_id and example_index here. They will be set after multiple processing.
                unique_id=0,
                paragraph_len=span["paragraph_len"],
                token_is_max_context=span["token_is_max_context"],
                tokens=span["tokens"],
                token_to_orig_map=span["token_to_orig_map"],
                start_position=start_position,
                end_position=end_position,
                is_impossible=span_is_impossible,
            )
        )
    return features


def squad_convert_examples_to_features(examples, tokenizer, max_seq_length, doc_stride, max_query_length, is_training, threads=1):
    # Defining helper methods
    features = []
    threads = min(threads, cpu_count())
    with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p:
        annotate_ = partial(
            squad_convert_example_to_features,
            max_seq_length=max_seq_length,
            doc_stride=doc_stride,
            max_query_length=max_query_length,
            is_training=is_training,
        )
        features = list(
            tqdm(
                p.imap(annotate_, examples, chunksize=32),
                total=len(examples),
                desc="convert squad examples to features",
            )
        )
    new_features = []
    unique_id = 1000000000
    example_index = 0
    for example_features in tqdm(features, total=len(features), desc="add example index and unique id"):
        if not example_features:
            continue
        for example_feature in example_features:
            example_feature.example_index = example_index
            example_feature.unique_id = unique_id
            new_features.append(example_feature)
            unique_id += 1
        example_index += 1
    features = new_features
    del new_features

    # Convert to Tensors and build dataset
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
    all_attention_masks = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
    all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
    all_cls_index = torch.tensor([f.cls_index for f in features], dtype=torch.long)
    all_p_mask = torch.tensor([f.p_mask for f in features], dtype=torch.float)
    all_is_impossible = torch.tensor([f.is_impossible for f in features], dtype=torch.float)

    if not is_training:
        all_example_index = torch.arange(all_input_ids.size(0), dtype=torch.long)
        dataset = TensorDataset(
            all_input_ids, all_attention_masks, all_token_type_ids, all_example_index, all_cls_index, all_p_mask
        )
    else:
        all_start_positions = torch.tensor([f.start_position for f in features], dtype=torch.long)
        all_end_positions = torch.tensor([f.end_position for f in features], dtype=torch.long)
        dataset = TensorDataset(
            all_input_ids,
            all_attention_masks,
            all_token_type_ids,
            all_start_positions,
            all_end_positions,
            all_cls_index,
            all_p_mask,
            all_is_impossible,
        )

    return features, dataset


def set_seed(args):
    random.seed(args.seed)
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    if args.n_gpu > 0:
        torch.cuda.manual_seed_all(args.seed)


def to_list(tensor):
    return tensor.detach().cpu().tolist()


def train(args, train_dataset, model, tokenizer):
    """ Train the model """
    if args.local_rank in [-1, 0]:
        tb_writer = SummaryWriter(log_dir=args.output_dir)

    args.train_batch_size = args.per_gpu_train_batch_size * max(1, args.n_gpu)
    train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset)
    train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=args.train_batch_size)

    t_total = len(train_dataloader) // args.gradient_accumulation_steps * args.num_train_epochs

    # Prepare optimizer and schedule (linear warmup and decay)
    no_decay = ["bias", "LayerNorm.weight"]
    optimizer_grouped_parameters = [
        {
            "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
            "weight_decay": args.weight_decay,
        },
        {"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0},
    ]
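    # Editorial note: excluding bias and LayerNorm weights from weight decay
    # is the standard BERT fine-tuning convention; decay is applied only to
    # the remaining (matrix-shaped) parameters.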
    optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon)
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=args.warmup_steps, num_training_steps=t_total
    )

    # Check if saved optimizer or scheduler states exist
    if os.path.isfile(os.path.join(args.model_name_or_path, "optimizer.pt")) and os.path.isfile(
        os.path.join(args.model_name_or_path, "scheduler.pt")
    ):
        # Load in optimizer and scheduler states
        optimizer.load_state_dict(torch.load(os.path.join(args.model_name_or_path, "optimizer.pt")))
        scheduler.load_state_dict(torch.load(os.path.join(args.model_name_or_path, "scheduler.pt")))

    if args.fp16:
        try:
            from apex import amp
        except ImportError:
            raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")

        model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level)

    # multi-gpu training (should be after apex fp16 initialization)
    if args.n_gpu > 1:
        model = torch.nn.DataParallel(model)

    # Distributed training (should be after apex fp16 initialization)
    if args.local_rank != -1:
        model = torch.nn.parallel.DistributedDataParallel(
            model, device_ids=[args.local_rank], output_device=args.local_rank, find_unused_parameters=True
        )

    # Train!
    logger.info("***** Running training *****")
    logger.info("  Num examples = %d", len(train_dataset))
    logger.info("  Num Epochs = %d", args.num_train_epochs)
    logger.info("  Instantaneous batch size per GPU = %d", args.per_gpu_train_batch_size)
    logger.info(
        "  Total train batch size (w. parallel, distributed & accumulation) = %d",
        args.train_batch_size
        * args.gradient_accumulation_steps
        * (torch.distributed.get_world_size() if args.local_rank != -1 else 1),
    )
    logger.info("  Gradient Accumulation steps = %d", args.gradient_accumulation_steps)
    logger.info("  Total optimization steps = %d", t_total)

    global_step = 1
    epochs_trained = 0
    steps_trained_in_current_epoch = 0
    # Check if continuing training from a checkpoint
    if os.path.exists(args.model_name_or_path):
        try:
            # set global_step to the global_step of the last saved checkpoint from the model path
            checkpoint_suffix = args.model_name_or_path.split("-")[-1].split("/")[0]
            global_step = int(checkpoint_suffix)
            epochs_trained = global_step // (len(train_dataloader) // args.gradient_accumulation_steps)
            steps_trained_in_current_epoch = global_step % (len(train_dataloader) // args.gradient_accumulation_steps)

            logger.info("  Continuing training from checkpoint, will skip to saved global_step")
            logger.info("  Continuing training from epoch %d", epochs_trained)
            logger.info("  Continuing training from global step %d", global_step)
            logger.info("  Will skip the first %d steps in the first epoch", steps_trained_in_current_epoch)
        except ValueError:
            logger.info("  Starting fine-tuning.")

    tr_loss, logging_loss = 0.0, 0.0
    model.zero_grad()
    train_iterator = trange(
        epochs_trained, int(args.num_train_epochs), desc="Epoch", disable=args.local_rank not in [-1, 0]
    )
    # Added here for reproducibility
    set_seed(args)

    for epoch in train_iterator:
        epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0])
        for step, batch in enumerate(epoch_iterator):

            # Skip past any already trained steps if resuming training
            if steps_trained_in_current_epoch > 0:
                steps_trained_in_current_epoch -= 1
                continue

            model.train()
            batch = tuple(t.to(args.device) for t in batch)

            inputs = {
                "input_ids": batch[0],
                "attention_mask": batch[1],
                "token_type_ids": batch[2],
                "start_positions": batch[3],
                "end_positions": batch[4],
            }

            outputs = model(**inputs)
            # model outputs are always tuple in transformers (see doc)
            loss = outputs[0]

            if args.n_gpu > 1:
                loss = loss.mean()  # mean() to average on multi-gpu parallel (not distributed) training
            if args.gradient_accumulation_steps > 1:
                loss = loss / args.gradient_accumulation_steps

            if args.fp16:
                with amp.scale_loss(loss, optimizer) as scaled_loss:
                    scaled_loss.backward()
            else:
                loss.backward()

            tr_loss += loss.item()
            if (step + 1) % args.gradient_accumulation_steps == 0:
                if args.fp16:
                    torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm)
                else:
                    torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)

                optimizer.step()
                scheduler.step()  # Update learning rate schedule
                model.zero_grad()
                global_step += 1

                # Log metrics
                if args.local_rank in [-1, 0] and args.logging_steps > 0 and global_step % args.logging_steps == 0:
                    tb_writer.add_scalar("stage3_lr", scheduler.get_last_lr()[0], global_step)
                    tb_writer.add_scalar("stage3_loss", (tr_loss - logging_loss) / args.logging_steps, global_step)
                    logging_loss = tr_loss

        # Save a model checkpoint at the end of each epoch
        if args.local_rank in [-1, 0]:
            output_dir = os.path.join(args.output_dir, "checkpoint-epoch{}".format(epoch))
            if not os.path.exists(output_dir):
                os.makedirs(output_dir)
            # Take care of distributed/parallel training
            model_to_save = model.module if hasattr(model, "module") else model
            model_to_save.save_pretrained(output_dir)
            tokenizer.save_pretrained(output_dir)

            torch.save(args, os.path.join(output_dir, "training_args.bin"))
            logger.info("Saving model checkpoint to %s", output_dir)

            torch.save(optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
            torch.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))
            logger.info("Saving optimizer and scheduler states to %s", output_dir)

    if args.local_rank in [-1, 0]:
        tb_writer.close()

    return global_step, tr_loss / global_step


def evaluate_simplified(inputs, args, model, tokenizer, prefix=""):
    processor = SquadProcessor()
    examples = processor._create_examples(inputs, 'dev')
    # logger.info("Preprocessing {} examples".format(len(examples)))
    features, dataset = squad_convert_examples_to_features(
        examples=examples,
        tokenizer=tokenizer,
        max_seq_length=args.max_seq_length,
        doc_stride=args.doc_stride,
        max_query_length=args.max_query_length,
        is_training=False,
        threads=args.threads,
    )

    args.eval_batch_size = args.per_gpu_eval_batch_size * max(1, args.n_gpu)

    # Note that DistributedSampler samples randomly
    eval_sampler = SequentialSampler(dataset)
    eval_dataloader = DataLoader(dataset, sampler=eval_sampler, batch_size=args.eval_batch_size)

    # multi-gpu evaluate
    if args.n_gpu > 1 and not isinstance(model, torch.nn.DataParallel):
        model = torch.nn.DataParallel(model)

    # Eval!
    logger.info("***** Running evaluation {} *****".format(prefix))
    logger.info("  Num examples = %d", len(dataset))
    logger.info("  Batch size = %d", args.eval_batch_size)

    all_results = []
    start_time = timeit.default_timer()

    for batch in tqdm(eval_dataloader, desc="Evaluating"):
        model.eval()
        batch = tuple(t.to(args.device) for t in batch)

        with torch.no_grad():
            inputs = {
                "input_ids": batch[0],
                "attention_mask": batch[1],
                "token_type_ids": batch[2],
            }
            example_indices = batch[3]
            outputs = model(**inputs)

        for i, example_index in enumerate(example_indices):
            eval_feature = features[example_index.item()]
            unique_id = int(eval_feature.unique_id)

            output = [to_list(output[i]) for output in outputs]

            start_logits, end_logits = output
            result = SquadResult(unique_id, start_logits, end_logits)
            all_results.append(result)

    evalTime = timeit.default_timer() - start_time
    # logger.info("  Evaluation done in total %f secs (%f sec per example) for %d examples", evalTime, evalTime / len(dataset), len(all_results))

    # Compute predictions
    output_prediction_file = os.path.join('/tmp/', "predictions_{}.json".format(prefix))
    output_nbest_file = os.path.join('/tmp/', "nbest_predictions_{}.json".format(prefix))

    if args.version_2_with_negative:
        output_null_log_odds_file = os.path.join('/tmp/', "null_odds_{}.json".format(prefix))
    else:
        output_null_log_odds_file = None

    predictions = compute_predictions_logits(
        examples,
        features,
        all_results,
        args.n_best_size,
        args.max_answer_length,
        args.do_lower_case,
        output_prediction_file,
        output_nbest_file,
        output_null_log_odds_file,
        args.verbose_logging,
        args.version_2_with_negative,
        args.null_score_diff_threshold,
        tokenizer,
    )

    return predictions


def _is_whitespace(c):
    if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
        return True
    return False
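

# Tiny illustrative check (editorial addition): besides the ASCII whitespace
# characters, U+202F (narrow no-break space) also counts as whitespace here.
def _demo_is_whitespace():
    assert _is_whitespace("\t")
    assert _is_whitespace("\u202f")
    assert not _is_whitespace("x")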


class SquadExample(object):
    def __init__(
        self,
        qas_id,
        question_text,
        context_text,
        answer_text,
        start_position_character,
        title,
        answers=[],
        is_impossible=False,
    ):
        self.qas_id = qas_id
        self.question_text = question_text
        self.context_text = context_text
        self.answer_text = answer_text
        self.title = title
        self.is_impossible = is_impossible
        self.answers = answers

        self.start_position, self.end_position = 0, 0

        doc_tokens = []
        char_to_word_offset = []
        prev_is_whitespace = True

        # Split on whitespace so that different tokens may be attributed to their original position.
        for c in self.context_text:
            if _is_whitespace(c):
                prev_is_whitespace = True
            else:
                if prev_is_whitespace:
                    doc_tokens.append(c)
                else:
                    doc_tokens[-1] += c
                prev_is_whitespace = False
            char_to_word_offset.append(len(doc_tokens) - 1)

        self.doc_tokens = doc_tokens
        self.char_to_word_offset = char_to_word_offset

        # Start and end positions only have a value during evaluation.
        if start_position_character is not None and not is_impossible:
            self.start_position = char_to_word_offset[start_position_character]
            self.end_position = char_to_word_offset[
                min(start_position_character + len(answer_text) - 1, len(char_to_word_offset) - 1)
            ]
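

# Worked example (editorial addition with a made-up record): the whitespace
# split maps each character offset to the word containing it, which is how an
# answer_start character offset becomes word-level start/end positions.
def _demo_char_to_word_offset():
    ex = SquadExample(qas_id="demo", question_text="who?",
                      context_text="a bb ccc", answer_text="bb",
                      start_position_character=2, title="t")
    assert ex.doc_tokens == ["a", "bb", "ccc"]
    # character 2 is inside word 1 ("bb"), and the answer ends in word 1 too
    assert (ex.start_position, ex.end_position) == (1, 1)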


class SquadProcessor(DataProcessor):
    def get_train_examples(self, filename=None):
        # with open(os.path.join(filename), "r", encoding="utf-8") as reader:
        #     input_data = json.load(reader)
        input_data = readGZip(filename)
        return self._create_examples(input_data, "train")

    def get_dev_examples(self, filename=None):
        input_data = readGZip(filename)
        return self._create_examples(input_data, "dev")

    def _create_examples(self, input_data, set_type):
        is_training = set_type == "train"
        examples = []
        for entry in tqdm(input_data):
            title = entry["title"]
            context_text = entry["context"]
            # for qa in paragraph["qas"]:
            qas_id = entry["question_id"]
            question_text = entry["question"]
            start_position_character = None
            answer_text = None
            answers = []

            if "is_impossible" in entry:
                is_impossible = entry["is_impossible"]
            else:
                is_impossible = False

            if not is_impossible:
                if is_training:
                    answer = entry["answers"][0]
                    answer_text = answer["text"]
                    start_position_character = answer["answer_start"]
                else:
                    answers = entry["answers"]

            example = SquadExample(
                qas_id=qas_id,
                question_text=question_text,
                context_text=context_text,
                answer_text=answer_text,
                start_position_character=start_position_character,
                title=title,
                is_impossible=is_impossible,
                answers=answers,
            )
            examples.append(example)
        return examples
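

# Usage sketch (editorial addition with a made-up record): in "dev" mode the
# processor keeps the raw answers list and leaves start/end positions at 0.
def _demo_create_examples():
    rec = {"title": "t", "context": "a bb", "question_id": "q1",
           "question": "?", "answers": [{"text": "bb", "answer_start": 2}]}
    ex = SquadProcessor()._create_examples([rec], "dev")[0]
    assert ex.answers == rec["answers"]
    assert (ex.start_position, ex.end_position) == (0, 0)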


def get_raw_scores(examples, preds):
    """
    Computes the exact and f1 scores from the examples and the model predictions.
    (normalize_answer, compute_exact and compute_f1 come from
    transformers.data.metrics.squad_metrics, imported at the top.)
    """
    exact_scores = {}
    f1_scores = {}

    for example in examples:
        qas_id = example.qas_id
        gold_answers = [answer["text"] for answer in example.answers if normalize_answer(answer["text"])]

        if not gold_answers:
            # For unanswerable questions, the only correct answer is the empty string
            gold_answers = [""]

        if qas_id not in preds:
            print("Missing prediction for %s" % qas_id)
            continue

        prediction = preds[qas_id]
        exact_scores[qas_id] = max(compute_exact(a, prediction) for a in gold_answers)
        f1_scores[qas_id] = max(compute_f1(a, prediction) for a in gold_answers)

    qid_list = [k for k in exact_scores]
    total = len(qid_list)

    return collections.OrderedDict(
        [
            ("exact", 100.0 * sum(exact_scores[k] for k in qid_list) / total),
            ("f1", 100.0 * sum(f1_scores[k] for k in qid_list) / total),
            ("total", total),
        ]
    )
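

# Minimal scoring sketch (editorial addition, hypothetical data): an exact
# prediction for the single gold answer scores 100 exact / 100 f1.
def _demo_get_raw_scores():
    ex = SquadExample(qas_id="q1", question_text="?", context_text="a bb",
                      answer_text=None, start_position_character=None,
                      title="t", answers=[{"text": "bb", "answer_start": 2}])
    scores = get_raw_scores([ex], {"q1": "bb"})
    assert scores["exact"] == 100.0 and scores["f1"] == 100.0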


def load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False, cache=True):
    # Load data features from the dataset file (the `cache` flag is accepted
    # but unused in this simplified version).
    processor = SquadProcessor()
    if evaluate:
        examples = processor.get_dev_examples(args.predict_file)
    else:
        examples = processor.get_train_examples(args.train_file)

    features, dataset = squad_convert_examples_to_features(
        examples=examples,
        tokenizer=tokenizer,
        max_seq_length=args.max_seq_length,
        doc_stride=args.doc_stride,
        max_query_length=args.max_query_length,
        is_training=not evaluate,
        threads=args.threads,
    )

    if output_examples:
        return dataset, examples, features
    return dataset


def main():
    parser = argparse.ArgumentParser()

    # Required parameters
    parser.add_argument(
        "--model_type",
        default='bert',
        type=str,
        help="Model type selected in the list: " + ", ".join(MODEL_CLASSES.keys()),
    )
    parser.add_argument(
        "--model_name_or_path",
        default="bert-base-uncased",
        type=str,
        help="Path to pre-trained model or shortcut name selected in the list: " + ", ".join(ALL_MODELS),
    )
    parser.add_argument(
        "--output_dir",
        default='stage3',
        type=str,
        help="The output directory where the model checkpoints and predictions will be written.",
    )
    parser.add_argument(
        "--train_file",
        default=None,
        type=str,
        help="The input training file. If a data dir is specified, will look for the file there. "
        "If no data dir or train/predict files are specified, will run with tensorflow_datasets.",
    )
    parser.add_argument(
        "--resource_dir",
        type=str,
        default='data/',
        help="Directory holding shared data resources.",
    )
    parser.add_argument(
        "--predict_file",
        default=None,
        type=str,
        help="The input evaluation file. If a data dir is specified, will look for the file there. "
        "If no data dir or train/predict files are specified, will run with tensorflow_datasets.",
    )
    parser.add_argument(
        "--config_name", default="", type=str, help="Pretrained config name or path if not the same as model_name"
    )
    parser.add_argument(
        "--tokenizer_name",
        default="",
        type=str,
        help="Pretrained tokenizer name or path if not the same as model_name",
    )
    parser.add_argument(
        "--cache_dir",
        default="/tmp/",
        type=str,
        help="Where do you want to store the pre-trained models downloaded from s3",
    )
    parser.add_argument(
        "--version_2_with_negative",
        action="store_true",
        help="If true, the SQuAD examples contain some that do not have an answer.",
    )
    parser.add_argument(
        "--null_score_diff_threshold",
        type=float,
        default=0.0,
        help="If null_score - best_non_null is greater than the threshold predict null.",
    )
    parser.add_argument(
        "--max_seq_length",
        default=384,
        type=int,
        help="The maximum total input sequence length after WordPiece tokenization. Sequences "
        "longer than this will be truncated, and sequences shorter than this will be padded.",
    )
    parser.add_argument(
        "--doc_stride",
        default=128,
        type=int,
        help="When splitting up a long document into chunks, how much stride to take between chunks.",
    )
    parser.add_argument(
        "--max_query_length",
        default=64,
        type=int,
        help="The maximum number of tokens for the question. Questions longer than this will "
        "be truncated to this length.",
    )
    parser.add_argument("--do_train", action="store_true", help="Whether to run training.")
    parser.add_argument("--do_eval", action="store_true", help="Whether to run eval on the dev set.")
    parser.add_argument("--do_stage3", action="store_true",
                        help="Whether to run stage-3 span prediction over retrieved evidence blocks.")
    parser.add_argument(
        "--do_lower_case", action="store_true", help="Set this flag if you are using an uncased model."
    )
    parser.add_argument("--per_gpu_train_batch_size", default=8, type=int, help="Batch size per GPU/CPU for training.")
    parser.add_argument(
        "--per_gpu_eval_batch_size", default=8, type=int, help="Batch size per GPU/CPU for evaluation."
    )
    parser.add_argument("--learning_rate", default=5e-5, type=float, help="The initial learning rate for Adam.")
    parser.add_argument(
        "--gradient_accumulation_steps",
        type=int,
        default=1,
        help="Number of updates steps to accumulate before performing a backward/update pass.",
    )
    parser.add_argument("--weight_decay", default=0.0, type=float, help="Weight decay if we apply some.")
    parser.add_argument("--adam_epsilon", default=1e-8, type=float, help="Epsilon for Adam optimizer.")
    parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
    parser.add_argument(
        "--num_train_epochs", default=3.0, type=float, help="Total number of training epochs to perform."
    )
    parser.add_argument("--warmup_steps", default=0, type=int, help="Linear warmup over warmup_steps.")
    parser.add_argument(
        "--n_best_size",
        default=20,
        type=int,
        help="The total number of n-best predictions to generate in the nbest_predictions.json output file.",
    )
    parser.add_argument(
        "--max_answer_length",
        default=30,
        type=int,
        help="The maximum length of an answer that can be generated. This is needed because the start "
        "and end predictions are not conditioned on one another.",
    )
    parser.add_argument(
        "--verbose_logging",
        action="store_true",
        help="If true, all of the warnings related to data processing will be printed. "
        "A number of warnings are expected for a normal SQuAD evaluation.",
    )
    parser.add_argument(
        "--lang_id",
        default=0,
        type=int,
        help="language id of input for language-specific xlm models (see tokenization_xlm.PRETRAINED_INIT_CONFIGURATION)",
    )
    parser.add_argument("--request_path", type=str, default='request_tok', help="Request directory.")
    parser.add_argument("--logging_steps", type=int, default=500, help="Log every X updates steps.")
    parser.add_argument("--save_steps", type=int, default=500, help="Save checkpoint every X updates steps.")
    parser.add_argument(
        "--overwrite_cache", action="store_true", help="Overwrite the cached training and evaluation sets"
    )
    parser.add_argument("--seed", type=int, default=42, help="random seed for initialization")
    parser.add_argument("--local_rank", type=int, default=-1, help="local_rank for distributed training on gpus")
    parser.add_argument(
        "--fp16",
        action="store_true",
        help="Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit",
    )
    parser.add_argument(
        "--fp16_opt_level",
        type=str,
        default="O1",
        help="For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']. "
        "See details at https://nvidia.github.io/apex/amp.html",
    )
    parser.add_argument("--threads", type=int, default=1, help="multiple threads for converting example to features")
    args = parser.parse_args()

    if args.doc_stride >= args.max_seq_length - args.max_query_length:
        logger.warning(
            "WARNING - You've set a doc stride which may be superior to the document length in some "
            "examples. This could result in errors when building features from the examples. Please reduce the doc "
            "stride or increase the maximum length to ensure the features are correctly built."
        )

    assert args.local_rank == -1

    # Setup CUDA, GPU & distributed training
    device = torch.device("cuda")
    args.n_gpu = torch.cuda.device_count()
    args.device = device
    args.output_dir = os.path.join(args.output_dir, datetime.now().strftime('%Y_%m_%d_%H_%M_%S'))

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        level=logging.INFO if args.local_rank in [-1, 0] else logging.WARN,
    )

    # Set seed
    set_seed(args)

    args.model_type = args.model_type.lower()
    config_class, model_class, tokenizer_class = MODEL_CLASSES[args.model_type]
    config = config_class.from_pretrained(
        args.config_name if args.config_name else args.model_name_or_path,
        cache_dir=args.cache_dir if args.cache_dir else None,
    )
    tokenizer = tokenizer_class.from_pretrained(
        args.tokenizer_name if args.tokenizer_name else args.model_name_or_path,
        do_lower_case=args.do_lower_case,
        cache_dir=args.cache_dir if args.cache_dir else None,
    )
    model = model_class.from_pretrained(
        args.model_name_or_path,
        from_tf=bool(".ckpt" in args.model_name_or_path),
        config=config,
        cache_dir=args.cache_dir if args.cache_dir else None,
    )

    model.to(args.device)

    logger.info("Training/evaluation parameters %s", args)

    # Before we do anything with models, we want to ensure that we get fp16 execution of torch.einsum if args.fp16 is set.
    # Otherwise it'll default to "promote" mode, and we'll get fp32 operations. Note that running `--fp16_opt_level="O2"` will
    # remove the need for this code, but it is still valid.
    if args.fp16:
        try:
            import apex

            apex.amp.register_half_function(torch, "einsum")
        except ImportError:
            raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")

    # Training
    if args.do_train:
        train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)
        global_step, tr_loss = train(args, train_dataset, model, tokenizer)
        logger.info(" global_step = %s, average loss = %s", global_step, tr_loss)

    # Evaluation - we can ask to evaluate all the checkpoints (sub-directories) in a directory
    if args.do_eval and args.local_rank in [-1, 0]:
        results = {}
        logger.info("Loading checkpoint %s for evaluation", args.model_name_or_path)
        checkpoints = []
        logger.info("Evaluate the following checkpoints: %s", args.model_name_or_path)

        model = model_class.from_pretrained(args.model_name_or_path)
        model.to(args.device)

        # Evaluate
        with open(args.predict_file, 'r') as f:
            full_split = json.load(f)

        key2idx = {}
        for step, d in enumerate(full_split):
            key2idx[d['question_id']] = step

        prediction = evaluate_simplified(full_split, args, model, tokenizer)
        for k, step in key2idx.items():
            full_split[step]['pred'] = prediction.get(k, 'None')

        with open('passage_only_predictions.json', 'w') as f:
            json.dump(full_split, f, indent=2)

    if args.do_stage3 and args.local_rank in [-1, 0]:
        logger.info("Loading checkpoint %s for evaluation", args.model_name_or_path)
        model = model_class.from_pretrained(args.model_name_or_path)
        model.to(args.device)

        with open(args.request_path, 'r') as f:
            requests = json.load(f)

        # evaluate(args, model, tokenizer, prefix=global_step)
        with open(args.predict_file, 'r') as f:
            data = json.load(f)

        # Build stage-3 reading inputs from the retrieved (table, passage) pairs
        full_split = []
        key2idx = {}
        for step, d in enumerate(data):
            # entries whose prediction is already a string were finished earlier
            if isinstance(d['pred'], str):
                continue
            table_id = d['table_id']
            node = d['pred']
            context = 'Title : {} . {}'.format(node[0], requests[node[1]])
            full_split.append({'context': context, 'title': table_id,
                               'question': d['question'], 'question_id': d['question_id'],
                               'answers': [{'answer_start': None, 'text': None}]})
            key2idx[d['question_id']] = step

        prediction = evaluate_simplified(full_split, args, model, tokenizer)
        for k, step in key2idx.items():
            data[step]['pred'] = prediction.get(k, 'None')

        # for _ in data:
        #     assert isinstance(_['pred'], str), "there are some unprocessed stage3 examples"

        with open('predictions.json', 'w') as f:
            json.dump(data, f, indent=2)


if __name__ == "__main__":
    main()
| 38.926353 | 144 | 0.644062 | 5,633 | 43,870 | 4.759986 | 0.127641 | 0.013426 | 0.025361 | 0.008951 | 0.310446 | 0.241189 | 0.181367 | 0.148547 | 0.119643 | 0.107187 | 0 | 0.007728 | 0.26264 | 43,870 | 1,126 | 145 | 38.960924 | 0.821164 | 0.091384 | 0 | 0.193064 | 0 | 0.001156 | 0.154725 | 0.008534 | 0.002312 | 0 | 0.000151 | 0 | 0.001156 | 1 | 0.021965 | false | 0.003468 | 0.036994 | 0.001156 | 0.083237 | 0.002312 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4e201363382d6ca681c22324964e6c414654e74 | 4,014 | py | Python | tests/common.py | pcmoritz/credis | 3a80c3aef5586dba0c0c8487bbceb14e240b0785 | [
"Apache-2.0"
] | 1 | 2017-12-15T04:32:09.000Z | 2017-12-15T04:32:09.000Z | tests/common.py | pcmoritz/credis | 3a80c3aef5586dba0c0c8487bbceb14e240b0785 | [
"Apache-2.0"
] | null | null | null | tests/common.py | pcmoritz/credis | 3a80c3aef5586dba0c0c8487bbceb14e240b0785 | [
"Apache-2.0"
] | 1 | 2017-12-08T01:58:36.000Z | 2017-12-08T01:58:36.000Z | import subprocess
import time

import pytest
import redis

# Ports, in the order of [master; replicas in chain].
# (The last assignment wins; the earlier lists are kept from previous
# experiments in the original file.)
INIT_PORTS = [6369, 6370]
INIT_PORTS = [6369, 6370, 6371, 6372]
INIT_PORTS = [6369, 6370, 6371]

PORTS = list(INIT_PORTS)
MAX_USED_PORT = max(PORTS)  # For picking the next port.


def MakeChain(num_nodes=2):
    global PORTS
    assert num_nodes >= 1
    # 6369 reserved for the master.
    chain = [6369 + i for i in range(num_nodes + 1)]
    PORTS = list(chain)
    return chain


def KillNode(index=None, port=None):
    global PORTS
    assert index is not None or port is not None
    if port is None:
        assert index >= 0 and index < len(
            PORTS) - 1, "index %d num_chain_nodes %d" % (index, len(PORTS) - 1)
        assert index == 0 or index == len(
            PORTS
        ) - 2, "middle node failure is not handled, index %d, len %d" % (
            index, len(PORTS))
        port_to_kill = PORTS[index + 1]
    else:
        port_to_kill = port
    print('killing port %d' % port_to_kill)
    subprocess.check_output(
        ["pkill", "-9", "redis-server.*:%s" % port_to_kill])
    if port is None:
        del PORTS[index + 1]
    else:
        del PORTS[PORTS.index(port)]


def AddNode(master_client, port=None):
    global PORTS
    global MAX_USED_PORT
    if port is not None:
        MAX_USED_PORT = port if MAX_USED_PORT is None else max(
            MAX_USED_PORT, port)
        new_port = port
    else:
        new_port = MAX_USED_PORT + 1
        MAX_USED_PORT += 1
    print('launching redis-server --port %d' % new_port)
    member = subprocess.Popen(
        [
            "redis/src/redis-server",
            "--loadmodule",
            "build/src/libmember.so",
            "--port",
            str(new_port)
        ],)
    time.sleep(0.1)
    print('calling master add, new_port %s' % new_port)
    master_client.execute_command("MASTER.ADD", "127.0.0.1", str(new_port))
    if port is None:
        PORTS.append(new_port)
    return member, new_port


def Start(request=None, chain=INIT_PORTS):
    global PORTS
    global MAX_USED_PORT
    PORTS = list(chain)
    MAX_USED_PORT = max(PORTS)  # For picking the next port.
    assert len(PORTS) > 1, "At least 1 master and 1 chain node"
    print('Setting up initial chain: %s' % PORTS)
    subprocess.Popen(["pkill", "-9", "redis-server.*"]).wait()
    master = subprocess.Popen([
        "redis/src/redis-server", "--loadmodule", "build/src/libmaster.so",
        "--port",
        str(PORTS[0])
    ])
    if request is not None:
        request.addfinalizer(master.kill)
    master_client = redis.StrictRedis("127.0.0.1", PORTS[0])
    for port in PORTS[1:]:
        member, _ = AddNode(master_client, port)
        if request is not None:
            request.addfinalizer(member.kill)


@pytest.fixture(autouse=True)
def startcredis(request):
    Start(request)


def AckClient():
    return redis.StrictRedis("127.0.0.1", PORTS[-1])


def AckClientAndPubsub(client=None):
    if client is None:
        client = AckClient()
    ack_pubsub = client.pubsub(ignore_subscribe_messages=True)
    ack_pubsub.subscribe("answers")
    return client, ack_pubsub


def MasterClient():
    return redis.StrictRedis("127.0.0.1", PORTS[0])


# Note: this client is bound to the initial head port at import time; use
# RefreshHeadFromMaster() after membership changes.
head_client = redis.StrictRedis("127.0.0.1", PORTS[1])


def GetHeadFromMaster(master_client):
    return head_client


def RefreshHeadFromMaster(master_client):
    print('calling MASTER.REFRESH_HEAD')
    head_addr_port = master_client.execute_command("MASTER.REFRESH_HEAD")
    print('head_addr_port: %s' % head_addr_port)
    splits = head_addr_port.split(b':')
    return redis.StrictRedis(splits[0], int(splits[1]))


def RefreshTailFromMaster(master_client):
    print('calling MASTER.REFRESH_TAIL')
    tail_addr_port = master_client.execute_command("MASTER.REFRESH_TAIL")
    print('tail_addr_port: %s' % tail_addr_port)
    splits = tail_addr_port.split(b':')
    c = redis.StrictRedis(splits[0], int(splits[1]))
    return AckClientAndPubsub(c)
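

# Illustrative failover flow (editorial sketch, not used by the tests; it
# assumes the redis binaries and modules referenced above are built locally):
def DemoHeadFailover():
    Start(chain=MakeChain(3))
    master_client = MasterClient()
    KillNode(index=0)  # kill the current head
    head = RefreshHeadFromMaster(master_client)
    tail_client, tail_pubsub = RefreshTailFromMaster(master_client)
    return head, tail_client, tail_pubsub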
| 28.267606 | 79 | 0.644245 | 556 | 4,014 | 4.494604 | 0.214029 | 0.02521 | 0.039616 | 0.012005 | 0.307323 | 0.290516 | 0.22409 | 0.168067 | 0.098439 | 0.032013 | 0 | 0.03354 | 0.234928 | 4,014 | 141 | 80 | 28.468085 | 0.780202 | 0.033632 | 0 | 0.183486 | 0 | 0 | 0.149716 | 0.022716 | 0 | 0 | 0 | 0 | 0.045872 | 1 | 0.100917 | false | 0 | 0.036697 | 0.027523 | 0.211009 | 0.073395 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4e4f30424f84454f6247bc8785e3e86b7897962 | 501 | py | Python | HofstadterButterfly/main.py | leovp/graphics | 5518879b5710aa845ce84ff3e071d4601f9dfa07 | [
"MIT"
] | null | null | null | HofstadterButterfly/main.py | leovp/graphics | 5518879b5710aa845ce84ff3e071d4601f9dfa07 | [
"MIT"
] | null | null | null | HofstadterButterfly/main.py | leovp/graphics | 5518879b5710aa845ce84ff3e071d4601f9dfa07 | [
"MIT"
] | null | null | null | # Hofstadter Butterfly Fractal
# http://en.wikipedia.org/wiki/Hofstadter%27s_butterfly
# Wolfgang Kinzel/Georg Reents,"Physics by Computer" Springer Press (1998)
# FB36 - 20130922
from math import pi, cos
from PIL import Image
from cyhof import butterfly
def main():
img_size = 256
raw_data = butterfly(img_size)
data = raw_data.tostring()
img = Image.frombytes('L', (img_size, img_size), data, decoder_name='raw')
img.save('result.png')
if __name__ == '__main__':
main()
| 23.857143 | 78 | 0.712575 | 70 | 501 | 4.871429 | 0.642857 | 0.082111 | 0.064516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045673 | 0.169661 | 501 | 20 | 79 | 25.05 | 0.774038 | 0.341317 | 0 | 0 | 0 | 0 | 0.067692 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4e6630f789d62746c8955d34fb59563f51cda15 | 11,552 | py | Python | src/frr/tests/topotests/pim_acl/test_pim_acl.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | src/frr/tests/topotests/pim_acl/test_pim_acl.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | src/frr/tests/topotests/pim_acl/test_pim_acl.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
#
# test_pim_acl.py
# Part of NetDEF Topology Tests
#
# Copyright (c) 2020 by
# Network Device Education Foundation, Inc. ("NetDEF")
#
# Permission to use, copy, modify, and/or distribute this software
# for any purpose with or without fee is hereby granted, provided
# that the above copyright notice and this permission notice appear
# in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND NETDEF DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NETDEF BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
#
"""
test_pim_acl.py: Test PIM with RP selection using ACLs
"""
# Test PIM RP selection with ACLs
#
# Testing RP selection with ACLs. R1 uses multiple ACLs
# to select desired RPs (R11 to R15)
#
# Test steps:
# - setup_module()
# Create topology. Hosts are only using zebra/staticd,
# no PIM, no OSPF (using IGMPv2 for multicast)
# - test_ospf_convergence()
# Wait for OSPF convergence in each VRF. OSPF is run on
# R1 and R11 - R15.
# - test_pim_convergence()
# Wait for PIM convergence on all routers. PIM is run on
# R1 and R11 - R15.
# - test_mcast_acl_1():
# Test 1st ACL entry 239.100.0.0/28 with 239.100.0.1 which
# should use R11 as RP
# Stop multicast after verification
# - test_mcast_acl_2():
# Test 2nd ACL entry 239.100.0.17/32 with 239.100.0.17 which
# should use R12 as RP
# Stop multicast after verification
# - test_mcast_acl_3():
# Test 3rd ACL entry 239.100.0.32/27 with 239.100.0.32 which
# should use R13 as RP
# Stop multicast after verification
# - test_mcast_acl_4():
# Test 4th ACL entry 239.100.0.128/25 with 239.100.0.255 which
# should use R14 as RP
# Stop multicast after verification
# - test_mcast_acl_5():
# Test 5th ACL entry 239.100.0.96/28 with 239.100.0.97 which
# should use R14 as RP
# Stop multicast after verification
# - test_mcast_acl_6():
# Test 6th ACL entry 239.100.0.64/28 with 239.100.0.70 which
# should use R15 as RP
# Stop multicast after verification
# - teardown_module()
# shutdown topology
#
# XXX clean up in later commit to avoid conflict on rebase
# pylint: disable=C0413
TOPOLOGY = """
+----------+
| Host H2 |
| Source |
+----------+
.2 |
+-----------+ | +----------+
| | .1 | .11 | Host R11 |
+---------+ | R1 |---------+--------| PIM RP |
| Host H1 | 192.168.100.0/24 | | 192.168.101.0/24 +----------+
| receive |------------------| uses ACLs | | +----------+
|IGMP JOIN| .10 .1 | to pick | | .12 | Host R12 |
+---------+ | RP | +--------| PIM RP |
| | | +----------+
+-----------+ | +----------+
| .13 | Host R13 |
+--------| PIM RP |
| +----------+
| +----------+
| .14 | Host R14 |
+--------| PIM RP |
| +----------+
| +----------+
| .15 | Host R15 |
+--------| PIM RP |
+----------+
"""
import json
import functools
import os
import sys
import pytest
# Save the Current Working Directory to find configuration files.
CWD = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.join(CWD, "../"))
# pylint: disable=C0413
# Import topogen and topotest helpers
from lib import topotest
from lib.topogen import Topogen, TopoRouter, get_topogen
from lib.topolog import logger
# Required to instantiate the topology builder class.
from lib.pim import McastTesterHelper
pytestmark = [pytest.mark.pimd, pytest.mark.ospfd]
def build_topo(tgen):
for hostNum in range(1, 3):
tgen.add_router("h{}".format(hostNum))
# Create the main router
tgen.add_router("r1")
# Create the PIM RP routers
for rtrNum in range(11, 16):
tgen.add_router("r{}".format(rtrNum))
# Setup Switches and connections
for swNum in range(1, 3):
tgen.add_switch("sw{}".format(swNum))
# Add connections H1 to R1 switch sw1
tgen.gears["h1"].add_link(tgen.gears["sw1"])
tgen.gears["r1"].add_link(tgen.gears["sw1"])
# Add connections R1 to R1x switch sw2
tgen.gears["r1"].add_link(tgen.gears["sw2"])
tgen.gears["h2"].add_link(tgen.gears["sw2"])
tgen.gears["r11"].add_link(tgen.gears["sw2"])
tgen.gears["r12"].add_link(tgen.gears["sw2"])
tgen.gears["r13"].add_link(tgen.gears["sw2"])
tgen.gears["r14"].add_link(tgen.gears["sw2"])
tgen.gears["r15"].add_link(tgen.gears["sw2"])
#####################################################
#
# Tests starting
#
#####################################################
def setup_module(module):
logger.info("PIM RP ACL Topology: \n {}".format(TOPOLOGY))
tgen = Topogen(build_topo, module.__name__)
tgen.start_topology()
# Starting Routers
router_list = tgen.routers()
for rname, router in router_list.items():
logger.info("Loading router %s" % rname)
router.load_config(
TopoRouter.RD_ZEBRA, os.path.join(CWD, "{}/zebra.conf".format(rname))
)
if rname[0] != "h":
# Only load ospf on routers, not on end hosts
router.load_config(
TopoRouter.RD_OSPF, os.path.join(CWD, "{}/ospfd.conf".format(rname))
)
router.load_config(
TopoRouter.RD_PIM, os.path.join(CWD, "{}/pimd.conf".format(rname))
)
tgen.start_router()
def teardown_module(module):
tgen = get_topogen()
tgen.stop_topology()
def test_ospf_convergence():
"Test for OSPFv2 convergence"
tgen = get_topogen()
# Skip if previous fatal error condition is raised
if tgen.routers_have_failure():
pytest.skip(tgen.errors)
logger.info("Checking OSPFv2 convergence on router r1")
router = tgen.gears["r1"]
reffile = os.path.join(CWD, "r1/ospf_neighbor.json")
expected = json.loads(open(reffile).read())
test_func = functools.partial(
topotest.router_json_cmp, router, "show ip ospf neighbor json", expected
)
_, res = topotest.run_and_expect(test_func, None, count=60, wait=2)
assertmsg = "OSPF router R1 did not converge"
assert res is None, assertmsg
def test_pim_convergence():
"Test for PIM convergence"
tgen = get_topogen()
# Skip if previous fatal error condition is raised
if tgen.routers_have_failure():
pytest.skip(tgen.errors)
logger.info("Checking PIM convergence on router r1")
router = tgen.gears["r1"]
reffile = os.path.join(CWD, "r1/pim_neighbor.json")
expected = json.loads(open(reffile).read())
test_func = functools.partial(
topotest.router_json_cmp, router, "show ip pim neighbor json", expected
)
_, res = topotest.run_and_expect(test_func, None, count=60, wait=2)
assertmsg = "PIM router R1 did not converge"
assert res is None, assertmsg
def check_mcast_entry(entry, mcastaddr, pimrp):
"Helper function to check RP"
tgen = get_topogen()
logger.info(
"Testing PIM RP selection for ACL {} entry using {}".format(entry, mcastaddr)
)
with McastTesterHelper(tgen) as helper:
helper.run("h2", ["--send=0.7", mcastaddr, "h2-eth0"])
helper.run("h1", [mcastaddr, "h1-eth0"])
logger.info("mcast join and source for {} started".format(mcastaddr))
# tgen.mininet_cli()
router = tgen.gears["r1"]
reffile = os.path.join(CWD, "r1/acl_{}_pim_join.json".format(entry))
expected = json.loads(open(reffile).read())
logger.info("verifying pim join on r1 for {}".format(mcastaddr))
test_func = functools.partial(
topotest.router_json_cmp, router, "show ip pim join json", expected
)
_, res = topotest.run_and_expect(test_func, None, count=60, wait=2)
assertmsg = "PIM router r1 did not show join status"
assert res is None, assertmsg
logger.info("verifying pim join on PIM RP {} for {}".format(pimrp, mcastaddr))
router = tgen.gears[pimrp]
reffile = os.path.join(CWD, "{}/acl_{}_pim_join.json".format(pimrp, entry))
expected = json.loads(open(reffile).read())
test_func = functools.partial(
topotest.router_json_cmp, router, "show ip pim join json", expected
)
_, res = topotest.run_and_expect(test_func, None, count=60, wait=2)
assertmsg = "PIM router {} did not get selected as the PIM RP".format(pimrp)
assert res is None, assertmsg
return
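# Note (illustrative): topotest.router_json_cmp returns None once the live
# "show ... json" output matches the reference JSON, so run_and_expect
# polls it (up to count=60 attempts, wait=2s apart) until the topology
# converges or the timeout expires.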
def test_mcast_acl_1():
"Test 1st ACL entry 239.100.0.0/28 with 239.100.0.1"
tgen = get_topogen()
# Skip if previous fatal error condition is raised
if tgen.routers_have_failure():
pytest.skip(tgen.errors)
check_mcast_entry(1, "239.100.0.1", "r11")
def test_mcast_acl_2():
"Test 2nd ACL entry 239.100.0.17/32 with 239.100.0.17"
tgen = get_topogen()
# Skip if previous fatal error condition is raised
if tgen.routers_have_failure():
pytest.skip(tgen.errors)
check_mcast_entry(2, "239.100.0.17", "r12")
def test_mcast_acl_3():
"Test 3rd ACL entry 239.100.0.32/27 with 239.100.0.32"
tgen = get_topogen()
# Skip if previous fatal error condition is raised
if tgen.routers_have_failure():
pytest.skip(tgen.errors)
check_mcast_entry(3, "239.100.0.32", "r13")
def test_mcast_acl_4():
"Test 4th ACL entry 239.100.0.128/25 with 239.100.0.255"
tgen = get_topogen()
# Skip if previous fatal error condition is raised
if tgen.routers_have_failure():
pytest.skip(tgen.errors)
check_mcast_entry(4, "239.100.0.255", "r14")
def test_mcast_acl_5():
"Test 5th ACL entry 239.100.0.96/28 with 239.100.0.97"
tgen = get_topogen()
# Skip if previous fatal error condition is raised
if tgen.routers_have_failure():
pytest.skip(tgen.errors)
check_mcast_entry(5, "239.100.0.97", "r14")
def test_mcast_acl_6():
"Test 6th ACL entry 239.100.0.64/28 with 239.100.0.70"
tgen = get_topogen()
# Skip if previous fatal error condition is raised
if tgen.routers_have_failure():
pytest.skip(tgen.errors)
check_mcast_entry(6, "239.100.0.70", "r15")
if __name__ == "__main__":
args = ["-s"] + sys.argv[1:]
sys.exit(pytest.main(args))
| 33.387283 | 86 | 0.576524 | 1,497 | 11,552 | 4.334001 | 0.207081 | 0.019112 | 0.032367 | 0.025894 | 0.519729 | 0.490598 | 0.461621 | 0.423397 | 0.417232 | 0.40151 | 0 | 0.060989 | 0.287483 | 11,552 | 345 | 87 | 33.484058 | 0.727251 | 0.309557 | 0 | 0.350282 | 0 | 0.039548 | 0.364126 | 0.008198 | 0 | 0 | 0 | 0 | 0.045198 | 1 | 0.067797 | false | 0 | 0.050847 | 0 | 0.124294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4e7162c56f87e424c23aa8e5b5d60093271088c | 1,371 | py | Python | model/SRMDNF.py | wangjia0602/DRAN | d1ff8353794a09f0eb5514eae3279d71e01ea8a6 | [
"Apache-2.0"
] | null | null | null | model/SRMDNF.py | wangjia0602/DRAN | d1ff8353794a09f0eb5514eae3279d71e01ea8a6 | [
"Apache-2.0"
] | null | null | null | model/SRMDNF.py | wangjia0602/DRAN | d1ff8353794a09f0eb5514eae3279d71e01ea8a6 | [
"Apache-2.0"
] | null | null | null | import torch
import torch.nn as nn
# torch.set_default_tensor_type(torch.DoubleTensor)
class SRMD(nn.Module):
def __init__(self, args, num_blocks=11, num_channels=18, conv_dim=128, scale_factor=4):
super(SRMD, self).__init__()
self.num_channels = num_channels
self.conv_dim = conv_dim
self.sf = scale_factor
# num_blocks = args.n_resblocks
self.conv2d = nn.Conv2d(3, self.num_channels, kernel_size=3, padding=1)
self.nonlinear_mapping = self.make_layers(num_blocks)
self.conv_last = nn.Sequential(
nn.Conv2d(self.conv_dim, 3*self.sf**2, kernel_size=3, padding=1),
nn.PixelShuffle(self.sf),
nn.Sigmoid()
)
def forward(self, x):
        x = self.conv2d(x)               # 3 -> num_channels feature maps
        x = self.nonlinear_mapping(x)    # stacked Conv-BN-ReLU blocks
        x = self.conv_last(x)            # project, PixelShuffle upscale, Sigmoid
return x
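    # Hypothetical smoke test (illustration only; the constructor's unused
    # `args` parameter can simply be passed as None):
    #     model = SRMD(None)
    #     y = model(torch.rand(1, 3, 32, 32))
    #     assert y.shape == (1, 3, 128, 128)   # PixelShuffle(4): 32 -> 128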
def make_layers(self, num_blocks):
layers = []
in_channels = self.num_channels
for i in range(num_blocks):
conv2d = nn.Conv2d(in_channels, self.conv_dim, kernel_size=3, padding=1)
layers += [conv2d, nn.BatchNorm2d(self.conv_dim), nn.ReLU(inplace=True)]
in_channels = self.conv_dim
return nn.Sequential(*layers) | 34.275 | 93 | 0.592268 | 185 | 1,371 | 4.151351 | 0.340541 | 0.063802 | 0.071615 | 0.074219 | 0.128906 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028067 | 0.298322 | 1,371 | 40 | 94 | 34.275 | 0.77027 | 0.057622 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.066667 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4e9b0f448a97871d6844b3ed49fc6cadcfa3f03 | 4,219 | py | Python | out_of_context/outofcontext.py | kiiada/fish | d0f3d5b22ef0849f71f31e521de0f1007624b1e7 | [
"Apache-2.0"
] | null | null | null | out_of_context/outofcontext.py | kiiada/fish | d0f3d5b22ef0849f71f31e521de0f1007624b1e7 | [
"Apache-2.0"
] | null | null | null | out_of_context/outofcontext.py | kiiada/fish | d0f3d5b22ef0849f71f31e521de0f1007624b1e7 | [
"Apache-2.0"
] | null | null | null | import os
from time import sleep
import random
import re
import pathlib
import discord
from redbot.core import commands
from functools import reduce
BaseCog = getattr(commands, "Cog", object)
# todo -> archive specified channel in db
# db format
# | message_id | contents |
# todo -> export
OOCFILE = pathlib.Path(os.path.join(str(pathlib.Path.home()), "ooc.txt"))
quotes = [
_
for _ in pathlib.Path(OOCFILE).read_text().split("\n")
if len(_) != 0
]
class OutOfContext(BaseCog):
def __init__(self, bot):
self.bot = bot
self.message_log = {}
self.quote_hash = dict()
for quote in quotes:
quote_words = [_ for _ in quote.lower().split() if len(_) > 3]
for word in quote_words:
if word not in self.quote_hash:
self.quote_hash[word] = []
self.quote_hash[word].append(quote)
async def out_of_context_handler(message):
# Prevent acting on DM's
if message.guild is None or message.guild.name.lower() != "cortex":
return
clean_message = message.clean_content.lower()
# BLACKLIST CHANNELS
blacklist = [
"news",
"rpg",
"the-tavern",
"events",
"recommends",
"politisophy",
"eyebleach",
"weeb-lyfe",
"out-of-context",
"jokes",
"anime-club",
]
message_channel = message.channel.name.lower()
regex = r"http|www"
if re.search(regex, clean_message) is not None:
return
            # Ignore a specific hardcoded bot account and command messages;
            # the generic self-message check follows in the block below.
if "195663495189102593" == str(
message.author.id
) or message.content.startswith("."):
return
if (
# DO NOT RESPOND TO SELF MESSAGES
(bot.user.id == message.author.id or message.content.startswith("."))
or (message.channel.name is None)
or (
reduce(
lambda acc, n: acc or (n == message_channel), blacklist, False
)
)
or ("thank" in clean_message)
or ("http" in clean_message)
):
return
# channel-specific logs for last 5 messages
chan_id = message.channel.id
if chan_id not in self.message_log:
self.message_log[chan_id] = [clean_message]
else:
self.message_log[chan_id].append(clean_message)
if len(self.message_log[chan_id]) > 5:
self.message_log[chan_id].pop(0)
ctx = await bot.get_context(message)
if random.random() <= 0.99: # 1% chance of activation
return
reply = self.get_quote(chan_id)
async with ctx.typing():
sleep(1)
await message.channel.send(reply)
self.bot.add_listener(out_of_context_handler, "on_message")
@commands.command()
async def penny(self, ctx):
reply = self.get_quote(ctx.channel.id, most_recent=False)
async with ctx.typing():
sleep(1)
await ctx.send(reply)
def get_quote(self, channel_id, most_recent=True):
reply = random.choice(quotes)
if channel_id not in self.message_log:
return reply # just random if no logs
split_msgs = [s.split(" ") for s in self.message_log[channel_id]]
if most_recent:
split_message = split_msgs[-1] # just grab the last
else:
split_message = reduce(lambda a, b: a + b, split_msgs)
random.shuffle(split_message)
split_message = [s for s in split_message if len(s) > 3]
for word in split_message:
if word in self.quote_hash:
reply = random.choice(self.quote_hash[word])
break
return reply
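# --- Illustration of the inverted index built in __init__ (hypothetical
# --- data; not executed by the cog) ---
#     quotes = ["the cake is a lie", "never gonna give you up"]
#     quote_hash == {"cake": ["the cake is a lie"],
#                    "never": ["never gonna give you up"], ...}
# get_quote() then prefers quotes sharing a >3-letter word with recent chat.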
| 30.79562 | 86 | 0.532591 | 484 | 4,219 | 4.491736 | 0.31405 | 0.040478 | 0.051518 | 0.033119 | 0.144434 | 0.107636 | 0.064397 | 0 | 0 | 0 | 0 | 0.011729 | 0.373548 | 4,219 | 136 | 87 | 31.022059 | 0.810821 | 0.080588 | 0 | 0.128713 | 0 | 0 | 0.04088 | 0 | 0 | 0 | 0 | 0.007353 | 0 | 1 | 0.019802 | false | 0 | 0.079208 | 0 | 0.178218 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4eb0973e3ac6c9900355c8c09015bfba11b1d37 | 5,562 | py | Python | modeling/vit.py | branislavhesko/DVIS | 09d1e5e0a9bc1a79b7a5046bf8bdc05e9e4e933c | [
"MIT"
] | 3 | 2021-05-19T08:45:26.000Z | 2021-06-20T13:58:56.000Z | modeling/vit.py | branislavhesko/DVIS | 09d1e5e0a9bc1a79b7a5046bf8bdc05e9e4e933c | [
"MIT"
] | null | null | null | modeling/vit.py | branislavhesko/DVIS | 09d1e5e0a9bc1a79b7a5046bf8bdc05e9e4e933c | [
"MIT"
] | null | null | null | from math import sqrt
import einops
from einops.layers.torch import Rearrange, Reduce
import torch
import torch.nn as nn
# TODO: this may be refactored
class PatchEmbedding(torch.nn.Module):
def __init__(self, in_channels, embed_size, patch_size=16):
super().__init__()
# noinspection PyTypeChecker
self._embedding = nn.Sequential(*[
nn.Conv2d(in_channels, embed_size, kernel_size=patch_size, stride=patch_size),
Rearrange('b e (h) (w) -> b (h w) e'),
])
self._cls_token = torch.nn.Parameter(torch.randn(1, embed_size))
def forward(self, x):
embedded = self._embedding(x)
token = einops.repeat(self._cls_token, "n e -> b n e", b=x.shape[0])
return torch.cat([token, embedded], dim=1)
class PositionEmbedding(torch.nn.Module):
def __init__(self, image_shape, patch_size, embed_size):
super().__init__()
num_patches = (image_shape[0] // patch_size) * (image_shape[1] // patch_size) + 1
self._position = nn.Parameter(torch.randn(int(num_patches), embed_size))
def forward(self, x):
position = einops.repeat(self._position, "n e -> b n e", b=x.shape[0])
return x + position
class ResidualAdd(torch.nn.Module):
def __init__(self, block):
super().__init__()
self.block = block
def forward(self, x):
return x + self.block(x)
class MultiHeadAttention(torch.nn.Module):
def __init__(self, embed_size, num_heads, attention_store=None):
super().__init__()
self.queries_projection = nn.Linear(embed_size, embed_size)
self.values_projection = nn.Linear(embed_size, embed_size)
self.keys_projection = nn.Linear(embed_size, embed_size)
self.final_projection = nn.Linear(embed_size, embed_size)
self.embed_size = embed_size
self.num_heads = num_heads
self.attention_store = attention_store
def forward(self, x):
assert len(x.shape) == 3
keys = self.keys_projection(x)
values = self.values_projection(x)
queries = self.queries_projection(x)
keys = einops.rearrange(keys, "b n (h e) -> b n h e", h=self.num_heads)
queries = einops.rearrange(queries, "b n (h e) -> b n h e", h=self.num_heads)
values = einops.rearrange(values, "b n (h e) -> b n h e", h=self.num_heads)
        energy_term = torch.einsum("bqhe, bkhe -> bqhk", queries, keys)
        # Scale the logits *before* the softmax (standard scaled dot-product
        # attention); the original divided after the softmax, which only
        # rescales the output instead of sharpening the attention weights.
        # (sqrt(embed_size) is kept to stay close to the original; the
        # textbook choice is the per-head dimension.)
        energy_term = energy_term / sqrt(self.embed_size)
        mh_out = torch.softmax(energy_term, -1)
        if self.attention_store is not None:
            self.attention_store.append(mh_out.detach().cpu())
        out = torch.einsum('bihv, bvhd -> bihd ', mh_out, values)
out = einops.rearrange(out, "b n h e -> b n (h e)")
return self.final_projection(out)
class MLP(torch.nn.Sequential):
def __init__(self, embed_size=768, expansion=4):
super().__init__(*[
nn.Linear(embed_size, embed_size * expansion),
nn.GELU(),
nn.Linear(embed_size * expansion, embed_size)
])
class TransformerEncoderLayer(torch.nn.Sequential):
def __init__(self, embed_size=768, expansion=4, num_heads=8, attention_store=None):
super(TransformerEncoderLayer, self).__init__(
*[
ResidualAdd(nn.Sequential(*[
nn.LayerNorm(embed_size),
MultiHeadAttention(embed_size, num_heads, attention_store=attention_store)
])),
ResidualAdd(nn.Sequential(*[
nn.LayerNorm(embed_size),
MLP(embed_size, expansion)
]))
]
)
class TransformerEncoder(torch.nn.Sequential):
def __init__(self, num_layers=6, **kwargs):
super(TransformerEncoder, self).__init__(*[
TransformerEncoderLayer(**kwargs) for _ in range(num_layers)
])
class ClassificationHead(torch.nn.Sequential):
def __init__(self, embed_size, num_classes):
super().__init__(*[
Reduce("b n e-> b e", reduction="mean"),
nn.LayerNorm(embed_size),
nn.Linear(embed_size, num_classes)
])
class VIT(torch.nn.Module):
def __init__(self, in_channels, embed_size, num_classes, num_layers,
num_heads, image_shape, patch_size, store_attention):
super().__init__()
self.attention_store = [] if store_attention else None
self.patch_embed = PatchEmbedding(in_channels=in_channels, embed_size=embed_size, patch_size=patch_size)
self.position_embed = PositionEmbedding(embed_size=embed_size, image_shape=image_shape, patch_size=patch_size)
self.encoder = TransformerEncoder(num_layers=num_layers, embed_size=embed_size,
num_heads=num_heads, attention_store=self.attention_store)
self.classifier = ClassificationHead(embed_size=embed_size, num_classes=num_classes)
self.store_attention = store_attention
def forward(self, x):
patches = self.patch_embed(x)
positions = self.position_embed(patches)
encoder = self.encoder(positions)
return self.classifier(encoder)
def reset(self):
if self.attention_store is not None and len(self.attention_store) > 0:
[self.attention_store.pop(0) for _ in range(len(self.attention_store))]
if __name__ == "__main__":
model = VIT(3, 768, 2, 6, 8, (224, 224), 16, store_attention=False)
out = model(torch.rand(2, 3, 224, 224))
print(out.shape)
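    # Hypothetical extra check (illustration): with store_attention=True,
    # each of the 6 encoder layers appends one attention map per forward.
    model_attn = VIT(3, 768, 2, 6, 8, (224, 224), 16, store_attention=True)
    _ = model_attn(torch.rand(1, 3, 224, 224))
    print(len(model_attn.attention_store))  # expected: 6
    model_attn.reset()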
| 38.358621 | 118 | 0.640777 | 713 | 5,562 | 4.701262 | 0.180926 | 0.104714 | 0.042661 | 0.053699 | 0.300119 | 0.261038 | 0.189737 | 0.146778 | 0.085621 | 0.085621 | 0 | 0.011175 | 0.243797 | 5,562 | 144 | 119 | 38.625 | 0.785782 | 0.009889 | 0 | 0.185841 | 0 | 0 | 0.034157 | 0 | 0 | 0 | 0 | 0.006944 | 0.00885 | 1 | 0.132743 | false | 0 | 0.044248 | 0.00885 | 0.300885 | 0.00885 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4eb823a8dc457010727eaaa479fb841019974cd | 1,297 | py | Python | scripts/model_predict.py | SitwalaM/nlp-topic-modelling | 1521418f97177e3fdffef17a890b8635592a381e | [
"MIT"
] | null | null | null | scripts/model_predict.py | SitwalaM/nlp-topic-modelling | 1521418f97177e3fdffef17a890b8635592a381e | [
"MIT"
] | null | null | null | scripts/model_predict.py | SitwalaM/nlp-topic-modelling | 1521418f97177e3fdffef17a890b8635592a381e | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
import pickle
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import os
abspath = os.path.abspath(__file__)
dname = os.path.dirname(abspath)
os.chdir(dname)
# load model
with open("lda_model.pk","rb") as f:
lda_model = pickle.load(f)
# vectorizer
vectorizer = pickle.load(open("vectorizer.pickle", 'rb'))
# topics
topics = list(np.arange(0,10))
def get_inference(model, vectorizer, topics, text, threshold):
"""
runs inference on text input
    parameters
----------
model: loaded model to use to transform the input
vectorizer: instance of the vectorizer e.g TfidfVectorizer(ngram_range=(2, 3))
topics: the list of topics in the model
text: input string to be classified
threshold: float of threshold to use to output a topic
returns
-------
    str or int
        the top-scoring topic, or 'None' when no topic clears the threshold
"""
v_text = vectorizer.transform([text])
score = model.transform(v_text)
labels = set()
for i in range(len(score[0])):
if score[0][i] > threshold:
labels.add(topics[i])
    if not labels:
        # No topic cleared the threshold; return a consistent sentinel
        # instead of the earlier mixed tuple/scalar return types.
        return 'None'
    return topics[np.argmax(score)]
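# --- Hypothetical usage (assumes the pickled artifacts loaded above) ---
#     text = "stocks rallied after the central bank cut interest rates"
#     topic = get_inference(lda_model, vectorizer, topics, text, threshold=0.3)
#     print("predicted topic:", topic)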
| 23.581818 | 82 | 0.685428 | 177 | 1,297 | 4.954802 | 0.468927 | 0.037628 | 0.041049 | 0.063854 | 0.086659 | 0.086659 | 0 | 0 | 0 | 0 | 0 | 0.007767 | 0.20586 | 1,297 | 54 | 83 | 24.018519 | 0.843689 | 0.292213 | 0 | 0 | 0 | 0 | 0.043376 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.291667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4ebb62861b9be5c8eb7f606117802ace713978c | 1,613 | py | Python | test.py | jingkunchen/patch_gan_alocc | 3aef3e636552a86926ef11a5caaa83da11224d8d | [
"Apache-2.0"
] | null | null | null | test.py | jingkunchen/patch_gan_alocc | 3aef3e636552a86926ef11a5caaa83da11224d8d | [
"Apache-2.0"
] | null | null | null | test.py | jingkunchen/patch_gan_alocc | 3aef3e636552a86926ef11a5caaa83da11224d8d | [
"Apache-2.0"
] | null | null | null | import numpy as np
from models import ALOCC_Model
from keras.datasets import mnist
import matplotlib.pyplot as plt
from keras import backend as K
import os
from keras.losses import binary_crossentropy
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
self =ALOCC_Model(dataset_name='mnist', input_height=32,input_width=32)
self.adversarial_model.load_weights('./checkpoint/ALOCC_Model_30.h5')
X_train = np.load("lesion_test_25_100.npy")
X_train = X_train[:,:,:,np.newaxis]
def test_reconstruction(label, data_index = 0):
    # `label` is kept for interface compatibility; the commented line shows
    # how a single labelled sample was selected originally, while the loop
    # below now scores every test patch.
    # specific_idx = np.where(y_train == label)[0]
if data_index >= len(X_train):
data_index = 0
datalist = X_train
for i in range(len(datalist)):
data = X_train[i:i+1]
model_predicts = self.adversarial_model.predict(data)
# print(model_predicts_latentspace[0])
#fig= plt.figure(figsize=(8, 8))
#columns = 1
#rows = 2
#fig.add_subplot(rows, columns, 1)
input_image = data.reshape((32, 32))
reconstructed_image = model_predicts[0].reshape((32, 32))
#plt.title('Input')
#plt.imshow(input_image, label='Input')
#fig.add_subplot(rows, columns, 2)
#plt.title('Reconstruction')
#plt.imshow(reconstructed_image, label='Reconstructed')
#plt.show()
# Compute the mean binary_crossentropy loss of reconstructed image.
y_true = K.variable(reconstructed_image)
y_pred = K.variable(input_image)
error = K.eval(binary_crossentropy(y_true, y_pred)).mean()
print(error+1-model_predicts[1][0][0])
test_reconstruction(4)
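# --- Note on the anomaly score (illustrative) ---
# error + 1 - model_predicts[1][0][0] combines the per-pixel reconstruction
# loss with the discriminator output: anomalous patches should reconstruct
# poorly (high error) and look fake to D (low probability), so larger
# printed values suggest anomalies. A hypothetical thresholding sketch:
#     score = error + 1 - model_predicts[1][0][0]
#     is_anomaly = score > 1.2   # threshold tuned per dataset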
| 32.26 | 75 | 0.676999 | 225 | 1,613 | 4.64 | 0.413333 | 0.034483 | 0.038314 | 0.032567 | 0.045977 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029549 | 0.202728 | 1,613 | 49 | 76 | 32.918367 | 0.782271 | 0.254185 | 0 | 0 | 0 | 0 | 0.065601 | 0.043734 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.269231 | 0 | 0.307692 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4ecaea9116f3c6cd50a959154ae11db305b4871 | 420 | py | Python | v02PyGameWindow/PyGameWindow.py | ccurtis7/gamejam2022 | 13833f162447a5125e4f3a1e5934bfe58148cdd0 | [
"MIT"
] | null | null | null | v02PyGameWindow/PyGameWindow.py | ccurtis7/gamejam2022 | 13833f162447a5125e4f3a1e5934bfe58148cdd0 | [
"MIT"
] | null | null | null | v02PyGameWindow/PyGameWindow.py | ccurtis7/gamejam2022 | 13833f162447a5125e4f3a1e5934bfe58148cdd0 | [
"MIT"
] | null | null | null | ##### INITIALIZATION
# Import Modules
import pygame
# Initialize PyGame
pygame.init()
# Initialize game window
screen = pygame.display.set_mode((1280, 960))
##### MAIN PROGRAM
# Loop until the window is closed
window_open = True
while window_open:
for event in pygame.event.get():
if event.type == pygame.QUIT: # Request to close window
window_open = False
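    # --- Optional extension (illustrative) ---
    # A typical game also clears and redraws the frame on each pass through
    # the loop above, e.g.:
    #     screen.fill((0, 0, 0))       # clear the frame to black
    #     # ... draw sprites/shapes here ...
    #     pygame.display.flip()        # present the completed frame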
# Close the window
pygame.quit() | 18.26087 | 63 | 0.695238 | 56 | 420 | 5.142857 | 0.625 | 0.104167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020896 | 0.202381 | 420 | 23 | 64 | 18.26087 | 0.838806 | 0.371429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4ed729a6053b5a30399417f1ba24a15f00221e7 | 2,634 | py | Python | ieee_1584/additional_test_cases/verify_spreadsheet_tests.py | LiaungYip/arcflash | 0ebe75024b652b1410197c59e2eae522e9386ca7 | [
"MIT"
] | 1 | 2022-03-08T04:07:33.000Z | 2022-03-08T04:07:33.000Z | ieee_1584/additional_test_cases/verify_spreadsheet_tests.py | LiaungYip/arcflash | 0ebe75024b652b1410197c59e2eae522e9386ca7 | [
"MIT"
] | null | null | null | ieee_1584/additional_test_cases/verify_spreadsheet_tests.py | LiaungYip/arcflash | 0ebe75024b652b1410197c59e2eae522e9386ca7 | [
"MIT"
] | null | null | null | # Copyright 2022, Li-aung Yip - https://www.penwatch.net
# Licensed under the MIT License. Refer LICENSE.txt.
# This compares the results from the official IEEE spreadsheet(s) to the results of the `ieee_1584` module.
#
# Some pre-generated test data is shipped with this code - see `ieee_1584_spreadsheet_results.csv`.
#
# Should you want to generate your own test cases, you might try editing `test_case_generator.py` and then running
# excel_test.py.
import csv
from ieee_1584.calculation import Calculation
from ieee_1584.cubicle import Cubicle
infile = "ieee_1584_spreadsheet_results.csv"
outfile = "comparison.csv"
with open(infile) as fh_in:
with open(outfile, mode="w", newline="") as fh_out:
reader = csv.DictReader(fh_in)
writer = csv.writer(fh_out)
header = (
"V_oc", "EC", "G", "D", "height", "width", "depth", "I_bf", "T",
"I_arc_max(ss)", "I_arc_max(py)", "I_arc_max(%diff)",
"E_joules_max(ss)", "E_joules_max(py)", "E_joules_max(%diff)",
"AFB_max(ss)", "AFB_max(py)", "AFB_max(%diff)",
"I_arc_min(ss)", "I_arc_min(py)", "I_arc_min(%diff)",
"E_joules_min(ss)", "E_joules_min(py)", "E_joules_min(%diff)",
"AFB_min(ss)", "AFB_min(py)", "AFB_min(%diff)",
)
writer.writerow(header)
for r in reader:
# Convert numerical columns from strings to floats
for k, v in r.items():
if k != "EC":
r[k] = float(v)
# Discard cases which are invalid due to busbar gap vs. enclosure width
if r["width"] < 4 * r["G"]:
continue
# Do calculations
cubicle_params = (r["V_oc"], r["EC"], r["G"], r["D"], r["height"], r["width"], r["depth"],)
cubicle = Cubicle(*cubicle_params)
calc = Calculation(cubicle, r["I_bf"])
calc.calculate_I_arc()
calc.calculate_E_AFB(r["T"], r["T"])
# Check our calcs to official calcs
ss_results = (
r["I_arc_max"], r["E_joules_max"], r["AFB_max"], r["I_arc_min"], r["E_joules_min"], r["AFB_min"],)
py_results = (calc.I_arc_max, calc.E_max, calc.AFB_max, calc.I_arc_min, calc.E_min, calc.AFB_min,)
results = list()
for ss, py in zip(ss_results, py_results):
results.append(f"{ss:.5g}")
results.append(f"{py:.5g}")
results.append(f"{abs(1 - (ss / py)):.1%}")
out_row = cubicle_params + (r["I_bf"], r["T"],) + tuple(results)
writer.writerow(out_row)
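# --- Reading the output (illustrative, hypothetical numbers) ---
# Each comparison.csv row interleaves spreadsheet (ss) and python (py)
# values with their percent difference, e.g. for I_arc_max:
#     "45.069", "45.069", "0.0%"
# Any non-zero %diff column flags a disagreement worth investigating.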
| 39.909091 | 114 | 0.575171 | 383 | 2,634 | 3.741514 | 0.347258 | 0.030705 | 0.024424 | 0.036288 | 0.040475 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015112 | 0.27145 | 2,634 | 65 | 115 | 40.523077 | 0.731631 | 0.230068 | 0 | 0 | 0 | 0 | 0.235236 | 0.016377 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.075 | 0 | 0.075 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4eeb76bf09c053b08a116f9fe7a6e91910bd52b | 830 | py | Python | Q61RotateList.py | ChenliangLi205/LeetCode | 6c547c338eb05042cb68f57f737dce483964e2fd | [
"MIT"
] | null | null | null | Q61RotateList.py | ChenliangLi205/LeetCode | 6c547c338eb05042cb68f57f737dce483964e2fd | [
"MIT"
] | null | null | null | Q61RotateList.py | ChenliangLi205/LeetCode | 6c547c338eb05042cb68f57f737dce483964e2fd | [
"MIT"
] | null | null | null | # Definition for singly-linked list.
# class ListNode:
# def __init__(self, x):
# self.val = x
# self.next = None
class Solution:
def rotateRight(self, head, k):
"""
:type head: ListNode
:type k: int
:rtype: ListNode
"""
if head is None:
return head
        # First pass: find the list length and the tail node.
        length, cur, tail = 0, head, None
        while cur is not None:
            length += 1
            tail = cur
            cur = cur.next
        if k % length == 0:
            return head
        k %= length
        # The node at 1-indexed position `target` becomes the new tail.
        target = length - k
        oldHead = head
        cur, cnt = head, 1
        while cnt != target:
            cnt += 1
            cur = cur.next
        newHead = cur.next
        cur.next = None        # cut the list after the new tail
        head = newHead
        tail.next = oldHead    # splice the old tail onto the old head
return head | 24.411765 | 41 | 0.466265 | 94 | 830 | 4.074468 | 0.37234 | 0.073107 | 0.052219 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010893 | 0.446988 | 830 | 34 | 42 | 24.411765 | 0.823529 | 0.210843 | 0 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4eecb34ab98e58ac82db35ad11553b17fdf186b | 8,255 | py | Python | DigitalMeLib/servers/digitalme/Package.py | jdelrue/digital_me | e5c92c405c0cea419ce18d25863f35d1bfe5a428 | [
"Apache-2.0"
] | null | null | null | DigitalMeLib/servers/digitalme/Package.py | jdelrue/digital_me | e5c92c405c0cea419ce18d25863f35d1bfe5a428 | [
"Apache-2.0"
] | 72 | 2018-08-01T06:13:46.000Z | 2019-02-01T15:50:20.000Z | DigitalMeLib/servers/digitalme/Package.py | jdelrue/digital_me | e5c92c405c0cea419ce18d25863f35d1bfe5a428 | [
"Apache-2.0"
] | 2 | 2018-08-05T08:09:13.000Z | 2018-11-21T13:11:28.000Z | from jumpscale import j
JSBASE = j.application.jsbase_get_class()
import sys
from importlib import import_module
model = """
@url = jumpscale.digitalme.package
enabled = false (B)
start = 0 (D)
path = "" (S)
docsites = (LO) !jumpscale.digitalme.package.docsite
blueprints = (LO) !jumpscale.digitalme.package.blueprints
actors = (LO) !jumpscale.digitalme.package.actors
chatflows = (LO) !jumpscale.digitalme.package.chatflow
recipes = (LO) !jumpscale.digitalme.package.recipes
docmacros = (LO) !jumpscale.digitalme.package.docmacros
zrbotrepos = (LO) !jumpscale.digitalme.package.zrbotrepos
models = (LO) !jumpscale.digitalme.package.models
@url = jumpscale.digitalme.package.docsite
name = "" (S)
url = "" (S)
path = "" (S)
publish = "" (S)
enabled = false (B)
@url = jumpscale.digitalme.package.blueprints
name = "" (S)
url = "" (S)
path = "" (S)
publish = (B)
enabled = false (B)
links = (LO) !jumpscale.digitalme.package.bp.link
@url = jumpscale.digitalme.package.bp.link
name = "" (S)
url = "" (S)
dest = "" (S)
enabled = false (B)
@url = jumpscale.digitalme.package.actors
name = "" (S)
url = "" (S)
path = "" (S)
enabled = false (B)
@url = jumpscale.digitalme.package.chatflow
name = "" (S)
url = "" (S)
path = "" (S)
enabled = false (B)
@url = jumpscale.digitalme.package.recipes
name = "" (S)
url = "" (S)
path = "" (S)
enabled = false (B)
@url = jumpscale.digitalme.package.docmacros
name = "" (S)
url = "" (S)
path = "" (S)
enabled = false (B)
@url = jumpscale.digitalme.package.zrbotrepos
name = "" (S)
url = "" (S)
path = "" (S)
enabled = false (B)
@url = jumpscale.digitalme.package.models
name = "" (S)
url = "" (S)
path = "" (S)
enabled = false (B)
"""
class Package(JSBASE):
def __init__(self,path):
JSBASE.__init__(self)
self.path = j.sal.fs.getDirName(path)
db_client = j.clients.redis_config.get().redis
self.bcdb = j.data.bcdb.get(db_client)
schema_model = j.data.bcdb.MODEL_CLASS(bcdb=self.bcdb, schema=model)
self.bcdb.model_add(schema_model)
self._model = self.bcdb.model_get(url="jumpscale.digitalme.package")
self.data = self._model.new()
data = j.data.serializer.toml.load(path)
        # Be flexible about which key name the toml uses; default is False.
if "enable" in data:
self.data.enabled =data["enable"]
elif "enabled" in data:
self.data.enabled =data["enabled"]
elif "active" in data:
self.data.enabled =data["active"]
self.data.name = j.sal.fs.getBaseName(self.path)
dir_items = j.sal.fs.listDirsInDir(self.path,False,True)
if "actors" in dir_items:
name = "%s_internal"%(self.name)
if name not in self.actors:
obj = self.data.actors.new({"name":name, "enabled":True,
"path":"%s/actors"%(self.path)})
if "blueprints" in dir_items:
name = "%s_internal"%(self.name)
if name not in self.blueprints:
obj = self.data.blueprints.new({"name":name, "enabled":True,
"path":"%s/blueprints"%(self.path)})
if "models" in dir_items:
name = "%s_internal"%(self.name)
if name not in self.models:
obj = self.data.models.new({"name":name, "enabled":True,
"path":"%s/models"%(self.path)})
if "chatflows" in dir_items:
name = "%s_internal"%(self.name)
if name not in self.chatflows:
obj = self.data.chatflows.new({"name":name, "enabled":True,
"path":"%s/chatflows"%(self.path)})
if "recipes" in dir_items:
name = "%s_internal"%(self.name)
if name not in self.recipes:
obj = self.data.recipes.new({"name":name, "enabled":True,
"path":"%s/recipes"%(self.path)})
if "doc_macros" in dir_items:
name = "%s_internal"%(self.name)
            if name not in self.doc_macros:
                # the schema field is named 'docmacros' (see model above)
                obj = self.data.docmacros.new({"name":name, "enabled":True,
                                            "path":"%s/doc_macros"%(self.path)})
if "docs" in dir_items:
docs_dir = j.sal.fs.joinPaths(self.path, "docs")
dir_items = j.sal.fs.listDirsInDir(docs_dir,
recursive=False, dirNameOnly=True)
for dir_name in dir_items:
self.data.docsites.new({"name": dir_name, "enabled": True,
"path": j.sal.fs.joinPaths(docs_dir, dir_name)})
#TODO: *1 finish & test
if "docsite" in data:
for item in data["docsite"]:
if item["name"] not in self.docsites:
obj=self.data.docsites.new(item)
obj.path = j.clients.git.getContentPathFromURLorPath(obj.url)
if "blueprint" in data:
for item in data["blueprint"]:
if item["name"] not in self.blueprints:
obj = self.data.blueprints.new(item)
obj.path = j.clients.git.getContentPathFromURLorPath(obj.url)
if "chatflows" in data:
for item in data["chatflows"]:
if item["name"] not in self.chatflows:
obj = self.data.chatflows.new(item)
obj.path = j.clients.git.getContentPathFromURLorPath(obj.url)
if "actors" in data:
for item in data["actors"]:
if item["name"] not in self.actors:
obj = self.data.actors.new(item)
obj.path = j.clients.git.getContentPathFromURLorPath(obj.url)
if "models" in data:
for item in data["models"]:
if item["name"] not in self.models:
obj = self.data.models.new(item)
obj.path = j.clients.git.getContentPathFromURLorPath(obj.url)
if "recipes" in data:
for item in data["recipes"]:
if item["name"] not in self.recipes:
obj = self.data.recipes.new(item)
obj.path = j.clients.git.getContentPathFromURLorPath(obj.url)
if "doc_macros" in data:
for item in data["doc_macros"]:
if item["name"] not in self.doc_macros:
                obj = self.data.docmacros.new(item)  # schema field: docmacros
obj.path = j.clients.git.getContentPathFromURLorPath(obj.url)
#TODO:need to check and make sure we have all see ...threefoldtech/digital_me/packages/readme.md
self.load()
@property
def name(self):
return self.data.name
@property
def docsites(self):
return [item.name for item in self.data.docsites]
@property
def blueprints(self):
return [item.name for item in self.data.blueprints]
@property
def chatflows(self):
return [item.name for item in self.data.chatflows]
    @property
    def doc_macros(self):
        return [item.name for item in self.data.docmacros]
    @property
    def zrobot_repos(self):
        return [item.name for item in self.data.zrbotrepos]
@property
def actors(self):
return [item.name for item in self.data.actors]
@property
def models(self):
return [item.name for item in self.data.models]
def load(self):
"""
load package into memory
"""
        # need to load the blueprints, docsites, actors, ...
self.chatflows_load()
self.blueprints_load()
self.docsites_load()
def chatflows_load(self):
for item in self.data.chatflows:
j.servers.gedis.latest.chatbot.chatflows_load(item.path)
return
def blueprints_load(self):
for blueprint in self.data.blueprints:
if blueprint.enabled:
j.servers.web.latest.loader.paths.append(blueprint.path)
def docsites_load(self):
for doc_site in self.data.docsites:
j.tools.docsites.load(doc_site.path, doc_site.name)
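# --- Illustrative package layout (hypothetical) ---
# A package is a directory holding the toml file parsed in __init__, e.g.:
#     enabled = true
#     [[docsite]]
#     name = "wiki"
#     url  = "https://github.com/threefoldtech/somerepo"
# plus optional subdirectories (actors/, blueprints/, models/, chatflows/,
# recipes/, doc_macros/, docs/) that __init__ auto-registers when present.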
| 32.246094 | 104 | 0.571169 | 1,011 | 8,255 | 4.598417 | 0.125618 | 0.051624 | 0.10755 | 0.036352 | 0.503549 | 0.490213 | 0.414498 | 0.374489 | 0.356421 | 0.297914 | 0 | 0.000346 | 0.298849 | 8,255 | 255 | 105 | 32.372549 | 0.802868 | 0.026772 | 0 | 0.291457 | 0 | 0 | 0.255184 | 0.088184 | 0 | 0 | 0 | 0.003922 | 0 | 1 | 0.065327 | false | 0 | 0.015075 | 0.040201 | 0.130653 | 0.085427 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4f0438971592499728c0b83dbe182757f278aee | 5,284 | py | Python | examples/scripts/tsp/tsp_byedge.py | ghackebeil/pybnb | 1f69b0684cfbe83d69ca1be00641ce438cbc3d7b | [
"MIT"
] | 41 | 2018-11-13T03:03:22.000Z | 2022-03-07T13:22:56.000Z | examples/scripts/tsp/tsp_byedge.py | ghackebeil/pybnb | 1f69b0684cfbe83d69ca1be00641ce438cbc3d7b | [
"MIT"
] | 16 | 2019-01-15T04:38:27.000Z | 2020-12-08T16:14:43.000Z | examples/scripts/tsp/tsp_byedge.py | ghackebeil/pybnb | 1f69b0684cfbe83d69ca1be00641ce438cbc3d7b | [
"MIT"
] | 9 | 2019-06-07T04:21:27.000Z | 2022-03-07T13:22:57.000Z | #
# This example defines a script that solves the Traveling
# Salesperson Problem using a combination of
# branch-and-bound (by edge) and local heuristics. It
# highlights a number of advanced pybnb features, including:
# (1) the use of pybnb.futures.NestedSolver
# (2) re-continuing a solve after early termination
#
# This example can be executed in serial as
#
# $ python tsp_byedge.py <data_file>
#
# or in parallel as
#
# $ mpiexec -n <n> python tsp_byedge.py <data_file>
#
# The following data files are available:
# (source: https://people.sc.fsu.edu/~jburkardt/datasets/tsp/tsp.html)
# - p01_d.txt: 15 cities, minimal tour length 291
# - p01_d_inf.txt: same as above, but with random paths
# removed to make the problem infeasible
# - fri26_d.txt: 26 cities, minimal tour length 937
#
import pybnb
try:
import numpy
except ImportError: # pragma:nocover
raise ImportError("This example requires numpy")
class TSP_ByEdge(pybnb.Problem):
def __init__(self, dist):
self._N = len(dist)
# state that changes during the solve
self._dist = numpy.array(dist, dtype=float)
numpy.fill_diagonal(self._dist, numpy.inf)
self._path = [0]
self._partial_cost = 0
self._cost = None
def _row_reduction(self):
row_mins = self._dist.min(axis=1)
mask = row_mins != numpy.inf
tmp = row_mins[mask, numpy.newaxis]
self._dist[mask, :] -= tmp
return tmp.sum()
def _col_reduction(self):
col_mins = self._dist.min(axis=0)
mask = col_mins != numpy.inf
tmp = col_mins[mask]
self._dist[:, mask] -= tmp
return tmp.sum()
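    # Illustration of the reduction bound (hypothetical 3-city matrix):
    #     dist = [[inf, 4, 7],    row mins 4, 3, 5 -> subtract (sum 12);
    #             [ 3, inf, 6],   col mins of the reduced matrix are
    #             [ 5,  8, inf]]  0, 0, 3 (sum 3), so the bound is 12 + 3 = 15,
    # a valid lower bound on the cost of completing any tour.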
#
# Implement Problem abstract methods
#
def sense(self):
return pybnb.minimize
def objective(self):
if len(self._path) == self._N:
assert self._cost is not None
return self._cost
else:
return self.infeasible_objective()
def bound(self):
if self._cost is None:
assert len(self._path) >= 1
if len(self._path) > 1:
u = self._path[-2]
v = self._path[-1]
self._dist[u, :] = numpy.inf
self._dist[:, v] = numpy.inf
self._dist[v][self._path[0]] = numpy.inf
row_sum = self._row_reduction()
col_sum = self._col_reduction()
self._cost = self._partial_cost
self._cost += row_sum
self._cost += col_sum
return self._cost
def save_state(self, node):
node.state = (self._path, self._dist, self._partial_cost, self._cost)
def load_state(self, node):
(self._path, self._dist, self._partial_cost, self._cost) = node.state
assert len(self._path) <= self._N
def branch(self):
# note that the branch method should never be called
# with a path of length N as the objective and bound
# converge exactly in that case.
assert len(self._path) < self._N
assert self._cost is not None
u = self._path[-1]
candidates = numpy.flatnonzero(self._dist[u, :] != numpy.inf).tolist()
if len(candidates) == 0:
# this path is infeasible, so return a dummy
# child to indicate that
child = pybnb.Node()
child.bound = pybnb.inf
child.objective = pybnb.inf
yield child
else:
for v in candidates:
child = pybnb.Node()
child.state = (
self._path + [v],
self._dist.copy(),
self._cost + self._dist[u][v],
None,
)
yield child
def notify_solve_finished(self, comm, worker_comm, results):
tour = None
if (results.best_node is not None) and (results.best_node.state is not None):
path = results.best_node.state[0]
route = path + [path[0]]
tour = {"cost": results.best_node.objective, "route": route}
results.tour = tour
if __name__ == "__main__":
import argparse
from tsp_util import parse_dense_distance_matrix, run_solve_loop
parser = argparse.ArgumentParser(
description=("Run parallel branch and bound to solve an instance of TSP.")
)
parser.add_argument(
"data_filename",
type=str,
help=("The name of a file that stores a dense distance matrix."),
)
parser.add_argument(
"--results-filename",
type=str,
default=None,
help=(
"When set, saves the solver results "
"into a YAML-formatted file with the "
"given name."
),
)
args = parser.parse_args()
dist = parse_dense_distance_matrix(args.data_filename)
problem = TSP_ByEdge(dist)
solver = pybnb.Solver()
results = run_solve_loop(dist, problem, solver)
stats = solver.collect_worker_statistics()
if solver.is_dispatcher:
pybnb.solver.summarize_worker_statistics(stats)
# save results to a file
# (mainly used for testing this example)
if args.results_filename is not None:
results.write(args.results_filename)
| 32.024242 | 85 | 0.595572 | 674 | 5,284 | 4.48368 | 0.316024 | 0.039709 | 0.0182 | 0.014891 | 0.135341 | 0.095301 | 0.06949 | 0.051621 | 0.051621 | 0.025811 | 0 | 0.008465 | 0.306964 | 5,284 | 164 | 86 | 32.219512 | 0.816767 | 0.211204 | 0 | 0.144144 | 0 | 0 | 0.065344 | 0 | 0 | 0 | 0 | 0 | 0.045045 | 1 | 0.09009 | false | 0 | 0.054054 | 0.009009 | 0.207207 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4f20806210c0e06a98b29d4e7ebe5ee90ea05b1 | 2,054 | py | Python | examples/programmatical/detrends_setup_example.py | franpoz/SHERLOCK | 6c9e79405aa84e86cd1d6c41fa1cc45d5dbcfb46 | [
"MIT"
] | 20 | 2020-09-25T13:18:46.000Z | 2022-03-09T14:01:03.000Z | examples/programmatical/detrends_setup_example.py | anthuil/SHERLOCK | e768a6375ded6c3ba1d07784afc2682e95228d3a | [
"MIT"
] | 74 | 2020-09-22T12:19:28.000Z | 2022-01-12T13:53:35.000Z | examples/programmatical/detrends_setup_example.py | anthuil/SHERLOCK | e768a6375ded6c3ba1d07784afc2682e95228d3a | [
"MIT"
] | 5 | 2020-10-19T10:01:05.000Z | 2021-12-16T10:23:24.000Z | from contextlib import contextmanager
from timeit import default_timer
from sherlockpipe.sherlock import Sherlock
from lcbuilder.objectinfo.MissionObjectInfo import MissionObjectInfo
from sherlockpipe.sherlock_target import SherlockTarget
@contextmanager
def elapsed_timer():
start = default_timer()
elapser = lambda: str(default_timer() - start)
yield lambda: elapser()
end = default_timer()
elapser = lambda: str(end - start)
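# Usage sketch (illustrative): the yielded callable can be sampled at any
# point inside the block, e.g.
#     with elapsed_timer() as elapsed:
#         run_step()                        # hypothetical work
#         print("so far:", elapsed(), "s")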
with elapsed_timer() as elapsed:
# We will use only one object id so we can explain better the detrend configs that the coder can select
# We will:
# 1 Enable the initial smooth function, which reduces local noise in the signal.
# 2 Enable the initial High RMS areas masking. This procedure will mask all the lightcurve time binned ranges by
# the 'initial_rms_bin_hours' value with a threshold of the 'initial_rms_threshold' value * RMS_median.
# 3 Set the number of detrends to be done from PDCSAP_FLUX for each run.
# 4 Set the SHERLOCK PDCSAP_FLUX detrends method to Gaussian Processes.
# 5 Set the number of CPU cores to be used by the detrending procedure.
# 6 Enable the Auto-Detrend detection, which will search for strong periodicities in the light curve and do an
# initial detrend for it based on the selected 'auto_detrend_method' method and the value of the
# 'auto_detrend_ratio' value, which ensures that we are detrending the light curve at 'auto_detrend_ratio' times
# the stronger period.
sherlock = Sherlock([SherlockTarget(object_info=MissionObjectInfo("TIC 181804752", 'all'),
smooth_enabled=True, high_rms_enabled=True, high_rms_threshold=2.5, high_rms_bin_hours=3,
detrends_number=12, detrend_method="gp", cpu_cores=2,
auto_detrend_enabled=True, auto_detrend_ratio=0.33,
auto_detrend_method="cosine")]) \
.run()
print("Analysis took " + elapsed() + "s")
| 54.052632 | 129 | 0.703505 | 277 | 2,054 | 5.075812 | 0.469314 | 0.054765 | 0.034139 | 0.035562 | 0.039829 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015316 | 0.237098 | 2,054 | 37 | 130 | 55.513514 | 0.88194 | 0.462025 | 0 | 0 | 0 | 0 | 0.035714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.25 | 0 | 0.3 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4f56fa0e96d5125e51b53d65b1d8a1a0fd42d6c | 1,898 | py | Python | infer.py | frankgu968/learning-to-see-in-the-dark-pytorch | 6a59fc64d1f152a2410b9128a6a51687a9b179d1 | [
"MIT"
] | null | null | null | infer.py | frankgu968/learning-to-see-in-the-dark-pytorch | 6a59fc64d1f152a2410b9128a6a51687a9b179d1 | [
"MIT"
] | null | null | null | infer.py | frankgu968/learning-to-see-in-the-dark-pytorch | 6a59fc64d1f152a2410b9128a6a51687a9b179d1 | [
"MIT"
] | null | null | null | import torch
from dataset import pack_raw
from PIL import Image
from model.model import UNet
import numpy as np
import rawpy
import math
import cv2
import sys
checkpoint_path = './checkpoint/checkpoint.t7'
if __name__ == '__main__':
try:
image_path = sys.argv[1]
output_path = sys.argv[2]
ratio = int(sys.argv[3])
except:
print("Error in inference input, use command:\n $ python infer.py RAW_IMAGE_PATH OUTPUT_IMAGE EXPOSURE_RATIO")
sys.exit()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# load model
model = UNet().to(device)
model.load_state_dict(torch.load(checkpoint_path)["state_dict"])
#set model to evaluate mode
model.eval()
# image import
raw = rawpy.imread(image_path)
im = pack_raw(raw) * ratio
# scaling image down to a max dimension of 1024, maintaining aspect ratio
if max(im.shape) > 1024:
scale_factor = 1024 / max(im.shape)
H = int(im.shape[0] * scale_factor)
W = int(im.shape[1] * scale_factor)
im = cv2.resize(im, (W,H), cv2.INTER_AREA)
# cropping image to nearest 16, to allow torch to compute
H = math.floor(im.shape[0]/16.0)*16
W = math.floor(im.shape[1]/16.0)*16
im = im[:H, :W, :]
# transpose and add dummy sample dimension
tensor = torch.from_numpy(im).transpose(0, 2).unsqueeze(0)
tensor = tensor.to(device)
with torch.no_grad():
output = model(tensor)
output = output.to('cpu').numpy() * 255
output = output.squeeze()
output = np.transpose(output, (2, 1, 0)).astype('uint8')
output = Image.fromarray(output).convert("RGB")
output.show()
output.save(output_path)
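# Example invocation (illustrative):
#     $ python infer.py input.ARW output.png 100
# The exposure ratio (100 here) amplifies the packed raw data before
# inference; the See-in-the-Dark paper typically uses ratios of 100-300.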
| 28.757576 | 118 | 0.653319 | 275 | 1,898 | 4.4 | 0.407273 | 0.034711 | 0.018182 | 0.026446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030864 | 0.231823 | 1,898 | 65 | 119 | 29.2 | 0.79904 | 0.114858 | 0 | 0 | 0 | 0 | 0.09743 | 0.015541 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.297872 | 0 | 0.297872 | 0.021277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4f6f46e4098dc29651562a7f52a3e267ea50c7e | 18,814 | py | Python | bluesky/plan_patterns.py | AbbyGi/bluesky | 759f9c55dce97dc47513cca749a69dd861bdf58d | [
"BSD-3-Clause"
] | 43 | 2015-08-04T20:13:41.000Z | 2019-04-12T17:21:36.000Z | bluesky/plan_patterns.py | AbbyGi/bluesky | 759f9c55dce97dc47513cca749a69dd861bdf58d | [
"BSD-3-Clause"
] | 966 | 2015-07-29T16:43:21.000Z | 2019-05-09T21:02:28.000Z | bluesky/plan_patterns.py | AbbyGi/bluesky | 759f9c55dce97dc47513cca749a69dd861bdf58d | [
"BSD-3-Clause"
] | 48 | 2019-05-15T18:01:06.000Z | 2022-03-03T18:53:43.000Z | import functools
import operator
import collections
from enum import Enum
import numpy as np
from cycler import cycler
try:
# cytools is a drop-in replacement for toolz, implemented in Cython
from cytools import partition
except ImportError:
from toolz import partition
from .utils import snake_cyclers, is_movable
def spiral(x_motor, y_motor, x_start, y_start, x_range, y_range, dr, nth, *,
dr_y=None, tilt=0.0):
'''Spiral scan, centered around (x_start, y_start)
Parameters
----------
    x_motor : object, optional
        any 'settable' object (motor, temp controller, etc.)
    y_motor : object, optional
        any 'settable' object (motor, temp controller, etc.)
x_start : float
x center
y_start : float
y center
x_range : float
x width of spiral
y_range : float
y width of spiral
dr : float
Delta radius along the minor axis of the ellipse.
    dr_y : float, optional
        Delta radius along the major axis of the ellipse; if not specified,
        defaults to dr
nth : float
Number of theta steps
tilt : float, optional
Tilt angle in radians, default 0.0
Returns
-------
cyc : cycler
'''
if dr_y is None:
dr_aspect = 1
else:
dr_aspect = dr_y / dr
half_x = x_range / 2
half_y = y_range / (2 * dr_aspect)
r_max = np.sqrt(half_x ** 2 + half_y ** 2)
num_ring = 1 + int(r_max / dr)
tilt_tan = np.tan(tilt + np.pi / 2.)
x_points, y_points = [], []
for i_ring in range(1, num_ring + 2):
radius = i_ring * dr
angle_step = 2. * np.pi / (i_ring * nth)
for i_angle in range(int(i_ring * nth)):
angle = i_angle * angle_step
x = radius * np.cos(angle)
y = radius * np.sin(angle) * dr_aspect
if ((abs(x - (y / dr_aspect) / tilt_tan) <= half_x) and
(abs(y / dr_aspect) <= half_y)):
x_points.append(x_start + x)
y_points.append(y_start + y)
cyc = cycler(x_motor, x_points)
cyc += cycler(y_motor, y_points)
return cyc
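# --- Hypothetical usage (strings stand in for real settable objects, since
# --- cycler only requires hashable keys) ---
#     traj = spiral("x", "y", x_start=0.0, y_start=0.0,
#                   x_range=2.0, y_range=2.0, dr=0.5, nth=8)
#     for point in traj:            # each point is a {motor: position} dict
#         ...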
def spiral_square_pattern(x_motor, y_motor, x_center, y_center,
x_range, y_range, x_num, y_num):
'''
    Square spiral scan, centered around (x_center, y_center)
Parameters
----------
    x_motor : object, optional
        any 'settable' object (motor, temp controller, etc.)
    y_motor : object, optional
        any 'settable' object (motor, temp controller, etc.)
x_center : float
x center
y_center : float
y center
x_range : float
x width of spiral
y_range : float
y width of spiral
    x_num : int
        number of x axis points
    y_num : int
        number of y axis points
Returns
-------
cyc : cycler
'''
x_points, y_points = [], []
# checks if x_num/y_num is even or odd and sets the required offset
# parameter for the start point from the centre point.
if x_num % 2 == 0:
x_offset = 0.5
else:
x_offset = 0
if y_num % 2 == 0:
y_offset = -0.5
else:
y_offset = 0
num_ring = max(x_num, y_num)
x_delta = x_range / (x_num - 1)
y_delta = y_range / (y_num - 1)
# include the first point, as it is the first 'ring' to include.
x_points.append(x_center - x_delta * x_offset)
y_points.append(y_center - y_delta * y_offset)
# set the number of found points to 0
num_pnts_fnd = 1
# step through each of the rings required to map out the entire area.
for i_ring in range(2, num_ring+1, 1):
# step through each of the 'sides' of the ring if the constant value
# for each side is within the range to plot and
# that we have not already found all the required points.
# SIDE 1
if (abs(i_ring - 1 - x_offset) <= x_num / 2) and \
(num_pnts_fnd < x_num * y_num):
for n in range(i_ring-2, -i_ring, -1):
# Ensure that the variable value for this side is within the
# range to plot and that we have not already
# found all the required points.
if (abs(n - y_offset) < y_num / 2) and \
(num_pnts_fnd < y_num * x_num):
x = x_center - x_delta * x_offset + x_delta * (i_ring - 1)
y = y_center - y_delta * y_offset + y_delta * n
num_pnts_fnd += 1
x_points.append(x)
y_points.append(y)
# SIDE 2
if (abs(-i_ring + 1 - y_offset) < y_num / 2) and \
(num_pnts_fnd < x_num * y_num):
for n in range(i_ring - 2, -i_ring, -1):
# Ensure that the variable value for this side is within the
# range to plot and that we have not already
# found all the required points.
if (abs(n - x_offset) < x_num / 2) and \
(num_pnts_fnd < y_num * x_num):
x = x_center - x_delta * x_offset + x_delta * n
y = y_center - y_delta * y_offset + y_delta * (-i_ring + 1)
num_pnts_fnd += 1
x_points.append(x)
y_points.append(y)
# SIDE 3
if (abs(-i_ring + 1 - x_offset) < x_num / 2) and \
(num_pnts_fnd < x_num * y_num):
for n in range(-i_ring + 2, i_ring, 1):
# Ensure that the variable value for this side is within the
# range to plot and that we have not already
# found all the required points.
if (abs(n - y_offset) < y_num / 2) and \
(num_pnts_fnd < y_num * x_num):
x = x_center - x_delta * x_offset + x_delta * (-i_ring + 1)
y = y_center - y_delta * y_offset + y_delta * n
num_pnts_fnd += 1
x_points.append(x)
y_points.append(y)
# SIDE 4
if (abs(i_ring - 1 - y_offset) < y_num / 2) and \
(num_pnts_fnd < x_num * y_num):
for n in range(-i_ring + 2, i_ring, 1):
# Ensure that the variable value for this side is within the
# range to plot and that we have not already
# found all the required points.
if (abs(n - x_offset) < x_num / 2) and \
(num_pnts_fnd < y_num * x_num):
x = x_center - x_delta * x_offset + x_delta * n
y = y_center - y_delta * y_offset + y_delta * (i_ring - 1)
num_pnts_fnd += 1
x_points.append(x)
y_points.append(y)
cyc = cycler(x_motor, x_points)
cyc += cycler(y_motor, y_points)
return cyc
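# --- Illustration (hypothetical 3x3 grid, strings as motors) ---
#     spiral_square_pattern("x", "y", 0, 0, x_range=2, y_range=2,
#                           x_num=3, y_num=3)
# starts at the centre point and walks outward ring by ring, visiting all
# nine grid points exactly once.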
def spiral_fermat(x_motor, y_motor, x_start, y_start, x_range, y_range, dr,
factor, *, dr_y=None, tilt=0.0):
'''Absolute fermat spiral scan, centered around (x_start, y_start)
Parameters
----------
    x_motor : object, optional
        any 'settable' object (motor, temp controller, etc.)
    y_motor : object, optional
        any 'settable' object (motor, temp controller, etc.)
x_start : float
x center
y_start : float
y center
x_range : float
x width of spiral
y_range : float
y width of spiral
    dr : float
        Delta radius along the minor axis of the ellipse.
    dr_y : float, optional
        Delta radius along the major axis of the ellipse; if not specified,
        defaults to dr
factor : float
radius gets divided by this
tilt : float, optional
Tilt angle in radians, default 0.0
Returns
-------
cyc : cycler
'''
if dr_y is None:
dr_aspect = 1
else:
dr_aspect = dr_y / dr
phi = 137.508 * np.pi / 180.
half_x = x_range / 2
half_y = y_range / (2 * dr_aspect)
tilt_tan = np.tan(tilt + np.pi / 2.)
x_points, y_points = [], []
diag = np.sqrt(half_x ** 2 + half_y ** 2)
num_rings = int((1.5 * diag / (dr / factor)) ** 2)
for i_ring in range(1, num_rings):
radius = np.sqrt(i_ring) * dr / factor
angle = phi * i_ring
x = radius * np.cos(angle)
y = radius * np.sin(angle) * dr_aspect
if ((abs(x - (y / dr_aspect) / tilt_tan) <= half_x) and (abs(y) <= half_y)):
x_points.append(x_start + x)
y_points.append(y_start + y)
cyc = cycler(x_motor, x_points)
cyc += cycler(y_motor, y_points)
return cyc
def inner_list_product(args):
'''Scan over one multi-motor trajectory.
Parameters
----------
args : list
patterned like (``motor1, position_list1,``
``...,``
``motorN, position_listN``)
Motors can be any 'settable' object (motor, temp controller, etc.)
``position_list``'s are lists of positions, all lists must have the
same length.
Returns
-------
cyc : cycler
'''
if len(args) % 2 != 0:
raise ValueError("Wrong number of positional arguments for "
"'inner_list_product'")
cyclers = []
for motor, pos_list, in partition(2, args):
c = cycler(motor, pos_list)
cyclers.append(c)
return functools.reduce(operator.add, cyclers)
def outer_list_product(args, snake_axes):
'''Scan over a mesh; each motor is on an independent trajectory.
Parameters
----------
args
patterned like (``motor1, position_list1,``
``motor2, position_list2,``
``motor3, position_list3,``
``...,``
``motorN, position_listN``)
The first motor is the "slowest", the outer loop. ``position_list``'s
are lists of positions, all lists must have the same length.
snake_axes
which axes should be snaked, either ``False`` (do not snake any axes),
``True`` (snake all axes) or a list of axes to snake. "Snaking" an axis
is defined as following snake-like, winding trajectory instead of a
simple left-to-right trajectory.
See Also
--------
:func:`bluesky.plan_patterns.inner_list_product`
Returns
-------
cyc : cycler
'''
snaking = []
cyclers = []
for motor, pos_list in partition(2, args):
if not snake_axes:
snaking.append(False)
elif isinstance(snake_axes, collections.abc.Iterable):
if motor in snake_axes:
snaking.append(True)
else:
snaking.append(False)
elif snake_axes:
if not snaking:
snaking.append(False)
else:
snaking.append(True)
else:
raise ValueError('The snake_axes arg to ``outer_list_product`` '
'must be either "False" (do not snake any axes), '
'"True" (snake all axes) or a list of axes to '
'snake. Instead it is {}.'.format(snake_axes))
c = cycler(motor, pos_list)
cyclers.append(c)
return snake_cyclers(cyclers, snaking)
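# Hedged usage sketch (string motors are hypothetical stand-ins): a 2x3 mesh
# in which only the fast axis snakes, reversing direction on every other
# pass of 'slow':
#   outer_list_product(['slow', [0, 1], 'fast', [10, 11, 12]],
#                      snake_axes=['fast'])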
def inner_product(num, args):
'''Scan over one multi-motor trajectory.
Parameters
----------
num : integer
number of steps
args : list of {Positioner, float, float}
patterned like (``motor1, start1, stop1, ..., motorN, startN, stopN``)
Motors can be any 'settable' object (motor, temp controller, etc.)
Returns
-------
cyc : cycler
'''
if len(args) % 3 != 0:
raise ValueError("Wrong number of positional arguments for "
"'inner_product'")
cyclers = []
for motor, start, stop, in partition(3, args):
steps = np.linspace(start, stop, num=num, endpoint=True)
c = cycler(motor, steps)
cyclers.append(c)
return functools.reduce(operator.add, cyclers)
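# Hedged usage sketch (hypothetical string motors): three steps along one
# joint trajectory, pairing x = [0.0, 0.5, 1.0] with y = [10.0, 15.0, 20.0]:
#   inner_product(3, ['x', 0.0, 1.0, 'y', 10.0, 20.0])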
class OuterProductArgsPattern(Enum):
PATTERN_1 = 1
PATTERN_2 = 2
def classify_outer_product_args_pattern(args):
"""
Classifies the pattern of grid scan arguments in the list `args`.
Checks the argument list for consistency; in particular, checks
the location of movable objects (motors) in the list.
Should be used together with the function `chunk_outer_product_args`.
Parameters
----------
args: iterable
The list of grid scan arguments. Two patterns of arguments
are supported. See the description of the identically named
parameter of `chunk_outer_product_args`.
Returns
-------
pattern: OuterProductArgsPattern
Detected pattern
Raises
------
ValueError is raised if the pattern cannot be identified or the list
is inconsistent.
"""
args = list(args)
pattern = None
def _verify_motor_locations(args, pattern):
# Verify the motors are present only at the correct positions in the list
if pattern == OuterProductArgsPattern.PATTERN_1:
# Positions of the movable objects (motors)
pos_movable = list(range(0, len(args), 4))
elif pattern == OuterProductArgsPattern.PATTERN_2:
# Positions of the movable objects (motors)
pos_movable = [0] + list(range(4, len(args), 5))
else:
raise ValueError(f"Unknown pattern '{pattern}'")
for n, element in enumerate(args):
# Check if the element is the motor
flag = is_movable(element)
# If the element is expected to be the motor, then flip the flag
if n in pos_movable:
flag = not flag
# Now the flag is True if the motor is out of place in the list of arguments
if flag:
return False
return True
# div_4 - the correct number of elements for pattern 1, div_5 - for pattern 2
div_4, div_5 = not(len(args) % 4), (len(args) > 4) and not((len(args) - 4) % 5)
# Check the number of elements in 'args'
if not div_4 and not div_5:
raise ValueError(f"Wrong number of elements in 'args': len(args) = {len(args)}")
args_valid = False
if div_4 and not div_5:
pattern = OuterProductArgsPattern.PATTERN_1
args_valid = _verify_motor_locations(args, pattern)
elif not div_4 and div_5:
pattern = OuterProductArgsPattern.PATTERN_2
args_valid = _verify_motor_locations(args, pattern)
else:
for p in OuterProductArgsPattern:
if _verify_motor_locations(args, p):
pattern = p
args_valid = True
break
if not args_valid:
raise ValueError(f"Incorrect order of elements in the argument list 'args': "
f"some of the movable objects (motors) are out of place "
f"(args = {args})")
return pattern
def chunk_outer_product_args(args, pattern=None):
'''Scan over a mesh; each motor is on an independent trajectory.
Parameters
----------
args: iterable
Two patterns are supported:
Pattern 1: (``motor1, start1, stop1, num1,``
``motor2, start2, stop2, num2,``
``motor3, start3, stop3, num3,`` ...
``motorN, startN, stopN, numN``)
Pattern 2: (``motor1, start1, stop1, num1,``
``motor2, start2, stop2, num2, snake2,``
``motor3, start3, stop3, num3, snake3,`` ...
``motorN, startN, stopN, numN, snakeN``)
All elements 'motorX' must be movable objects. There must be no
movable objects in the other positions in the list.
In Pattern 2, the first motor is the "slowest", the outer loop. For all motors
except the first motor, there is a "snakeX" argument: a boolean
indicating whether to follow a snake-like, winding trajectory or a
simple left-to-right trajectory.
pattern: OuterProductArgsPattern
If the pattern of 'args' is known, it can be explicitly specified.
In this case the automated recognition of the pattern is not performed
and consistency of the argument list is not verified. Use
`classify_outer_product_args_pattern` to verify consistency of `args`.
See Also
--------
`bluesky.plan_patterns.outer_product`
`bluesky.plan_patterns.classify_outer_product_args_pattern`
Yields
------
(motor, start, stop, num, snake)
The `snake` value is always `False` for Pattern 1
'''
if pattern is None:
pattern = classify_outer_product_args_pattern(args)
else:
if not isinstance(pattern, OuterProductArgsPattern):
raise ValueError("The parameter 'pattern' must have type OuterProductArgsPattern: "
f"{type(pattern)} ")
args = list(args)
if pattern == OuterProductArgsPattern.PATTERN_1:
# Set 'snaked' to False for every motor
for n in range(1, int(len(args) / 4) + 1):
args.insert(5 * n - 1, False)
elif pattern == OuterProductArgsPattern.PATTERN_2:
# The first (slowest) axis is never "snaked." Insert False to
# make it easy to iterate over the chunks of args.
args.insert(4, False)
else:
raise RuntimeError(f"Unsupported pattern: {pattern}. This is a bug. "
f"You shouldn't have ended up on this branch.")
yield from partition(5, args)
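# Hedged sketch of the Pattern 1 expansion above (strings are hypothetical
# motors; passing ``pattern`` explicitly skips the is_movable() checks):
#   list(chunk_outer_product_args(['x', 0, 1, 3, 'y', 0, 2, 5],
#                                 pattern=OuterProductArgsPattern.PATTERN_1))
#   -> [('x', 0, 1, 3, False), ('y', 0, 2, 5, False)]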
def outer_product(args):
'''Scan over a mesh; each motor is on an independent trajectory.
Parameters
----------
args
patterned like (``motor1, start1, stop1, num1,``
``motor2, start2, stop2, num2, snake2,``
``motor3, start3, stop3, num3, snake3,`` ...
``motorN, startN, stopN, numN, snakeN``)
The first motor is the "slowest", the outer loop. For all motors
except the first motor, there is a "snake" argument: a boolean
indicating whether to follow a snake-like, winding trajectory or a
simple left-to-right trajectory.
See Also
--------
`bluesky.plan_patterns.inner_product`
Returns
-------
cyc : cycler
'''
shape = []
extents = []
snaking = []
cyclers = []
for motor, start, stop, num, snake in chunk_outer_product_args(args):
shape.append(num)
extents.append([start, stop])
snaking.append(snake)
steps = np.linspace(start, stop, num=num, endpoint=True)
c = cycler(motor, steps)
cyclers.append(c)
return snake_cyclers(cyclers, snaking)
| 32.834206 | 95 | 0.573669 | 2,507 | 18,814 | 4.152772 | 0.134025 | 0.011526 | 0.012487 | 0.01921 | 0.5961 | 0.524733 | 0.500144 | 0.48516 | 0.455 | 0.42628 | 0 | 0.014804 | 0.332199 | 18,814 | 572 | 96 | 32.891608 | 0.813833 | 0.421282 | 0 | 0.466102 | 0 | 0 | 0.06531 | 0.004545 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042373 | false | 0 | 0.042373 | 0 | 0.139831 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4f77df6af46df2d8500e52443b3e9ede46f93a3 | 6,364 | py | Python | cirq/linalg/decompositions_test.py | shuuji3/Cirq | a9c1f6411431fa69aee58ea49c14df7974f59e7b | [
"Apache-2.0"
] | null | null | null | cirq/linalg/decompositions_test.py | shuuji3/Cirq | a9c1f6411431fa69aee58ea49c14df7974f59e7b | [
"Apache-2.0"
] | null | null | null | cirq/linalg/decompositions_test.py | shuuji3/Cirq | a9c1f6411431fa69aee58ea49c14df7974f59e7b | [
"Apache-2.0"
] | null | null | null | # Copyright 2018 The Cirq Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import random
import numpy as np
import pytest
from cirq import linalg
from cirq import testing
from cirq.linalg import combinators
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) * np.sqrt(0.5)
SQRT_X = np.array([[1, 1j], [1j, 1]])
c = np.exp(1j * np.pi / 4)
SQRT_SQRT_X = np.array([[1 + c, 1 - c], [1 - c, 1 + c]]) / 2
SWAP = np.array([[1, 0, 0, 0],
[0, 0, 1, 0],
[0, 1, 0, 0],
[0, 0, 0, 1]])
CNOT = np.array([[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 0, 1],
[0, 0, 1, 0]])
CZ = np.diag([1, 1, 1, -1])
@pytest.mark.parametrize('matrix', [
X,
linalg.kron(X, X),
linalg.kron(X, Y),
linalg.kron(X, np.eye(2))
])
def test_map_eigenvalues_identity(matrix):
identity_mapped = linalg.map_eigenvalues(matrix, lambda e: e)
assert np.allclose(matrix, identity_mapped)
@pytest.mark.parametrize('matrix,exponent,desired', [
[X, 2, np.eye(2)],
[X, 3, X],
[Z, 2, np.eye(2)],
[H, 2, np.eye(2)],
[Z, 0.5, np.diag([1, 1j])],
[X, 0.5, np.array([[1j, 1], [1, 1j]]) * (1 - 1j) / 2],
])
def test_map_eigenvalues_raise(matrix, exponent, desired):
exp_mapped = linalg.map_eigenvalues(matrix, lambda e: complex(e)**exponent)
assert np.allclose(desired, exp_mapped)
@pytest.mark.parametrize('f1,f2', [
(H, X),
(H * 1j, X),
(H, SQRT_X),
(H, SQRT_SQRT_X),
(H, H),
(SQRT_SQRT_X, H),
(X, np.eye(2)),
(1j * X, np.eye(2)),
(X, 1j * np.eye(2)),
(-X, 1j * np.eye(2)),
(X, X),
] + [
(testing.random_unitary(2), testing.random_unitary(2))
for _ in range(10)
])
def test_kron_factor(f1, f2):
p = linalg.kron(f1, f2)
g, g1, g2 = linalg.kron_factor_4x4_to_2x2s(p)
assert abs(np.linalg.det(g1) - 1) < 0.00001
assert abs(np.linalg.det(g2) - 1) < 0.00001
assert np.allclose(g * linalg.kron(g1, g2), p)
@pytest.mark.parametrize('f1,f2', [
(testing.random_special_unitary(2), testing.random_special_unitary(2))
for _ in range(10)
])
def test_kron_factor_special_unitaries(f1, f2):
p = linalg.kron(f1, f2)
g, g1, g2 = linalg.kron_factor_4x4_to_2x2s(p)
assert np.allclose(linalg.kron(g1, g2), p)
assert abs(g - 1) < 0.000001
assert linalg.is_special_unitary(g1)
assert linalg.is_special_unitary(g2)
def test_kron_factor_fail():
with pytest.raises(ValueError):
_ = linalg.kron_factor_4x4_to_2x2s(
linalg.kron_with_controls(linalg.CONTROL_TAG, X))
with pytest.raises(ValueError):
_ = linalg.kron_factor_4x4_to_2x2s(np.diag([1, 1, 1, 1j]))
def recompose_so4(a: np.ndarray, b: np.ndarray) -> np.ndarray:
assert a.shape == (2, 2)
assert b.shape == (2, 2)
assert linalg.is_special_unitary(a)
assert linalg.is_special_unitary(b)
magic = np.array([[1, 0, 0, 1j],
[0, 1j, 1, 0],
[0, 1j, -1, 0],
[1, 0, 0, -1j]]) * np.sqrt(0.5)
result = np.real(combinators.dot(np.conj(magic.T),
linalg.kron(a, b),
magic))
assert linalg.is_orthogonal(result)
return result
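# Hedged sanity check: identity SU(2) factors recompose to the 4x4 identity,
# because the magic basis is unitary and kron(I, I) is the identity:
#   np.allclose(recompose_so4(np.eye(2), np.eye(2)), np.eye(4))  # True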
@pytest.mark.parametrize('m', [
testing.random_special_orthogonal(4)
for _ in range(10)
])
def test_so4_to_magic_su2s(m):
a, b = linalg.so4_to_magic_su2s(m)
m2 = recompose_so4(a, b)
assert np.allclose(m, m2)
@pytest.mark.parametrize('a,b', [
(testing.random_special_unitary(2), testing.random_special_unitary(2))
for _ in range(10)
])
def test_so4_to_magic_su2s_known_factors(a, b):
m = recompose_so4(a, b)
a2, b2 = linalg.so4_to_magic_su2s(m)
m2 = recompose_so4(a2, b2)
assert np.allclose(m2, m)
# Account for kron(A, B) = kron(-A, -B).
if np.linalg.norm(a + a2) > np.linalg.norm(a - a2):
assert np.allclose(a2, a)
assert np.allclose(b2, b)
else:
assert np.allclose(a2, -a)
assert np.allclose(b2, -b)
@pytest.mark.parametrize('mat', [
np.diag([0, 1, 1, 1]),
np.diag([0.5, 2, 1, 1]),
np.diag([1, 1j, 1, 1]),
np.diag([1, 1, 1, -1]),
])
def test_so4_to_magic_su2s_fail(mat):
with pytest.raises(ValueError):
linalg.so4_to_magic_su2s(mat)
def recompose_kak(g, a, v, b) -> np.ndarray:
a1, a0 = a
x, y, z = v
b1, b0 = b
xx = linalg.kron(X, X)
yy = linalg.kron(Y, Y)
zz = linalg.kron(Z, Z)
a = linalg.kron(a1, a0)
m = linalg.map_eigenvalues(xx * x + yy * y + zz * z,
lambda e: np.exp(1j * e))
b = linalg.kron(b1, b0)
return linalg.dot(a, m, b) * g
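# Hedged sanity check: a zero interaction vector with identity locals and
# global phase 1 recomposes to the 4x4 identity:
#   i2 = np.eye(2)
#   np.allclose(recompose_kak(1, (i2, i2), (0, 0, 0), (i2, i2)), np.eye(4))  # True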
@pytest.mark.parametrize('x,y,z', [
[(random.random() * 2 - 1) * np.pi * 2 for _ in range(3)]
for _ in range(10)
])
def test_kak_canonicalize_vector(x, y, z):
i = np.eye(2)
m = recompose_kak(1, (i, i), (x, y, z), (i, i))
g, (a1, a0), (x2, y2, z2), (b1, b0) = linalg.kak_canonicalize_vector(
x, y, z)
m2 = recompose_kak(g, (a1, a0), (x2, y2, z2), (b1, b0))
assert 0.0 <= x2 <= np.pi / 4
assert 0.0 <= y2 <= np.pi / 4
assert -np.pi / 4 <= z2 <= np.pi / 4
assert abs(x2) >= abs(y2) >= abs(z2)
assert linalg.is_special_unitary(a1)
assert linalg.is_special_unitary(a0)
assert linalg.is_special_unitary(b1)
assert linalg.is_special_unitary(b0)
assert np.allclose(m, m2)
@pytest.mark.parametrize('m', [
np.eye(4),
SWAP,
SWAP * 1j,
CZ,
CNOT,
SWAP.dot(CZ),
] + [
testing.random_unitary(4)
for _ in range(10)
])
def test_kak_decomposition(m):
g, (a1, a0), (x, y, z), (b1, b0) = linalg.kak_decomposition(m)
m2 = recompose_kak(g, (a1, a0), (x, y, z), (b1, b0))
assert np.allclose(m, m2)
| 28.410714 | 79 | 0.585009 | 1,042 | 6,364 | 3.450096 | 0.166987 | 0.012796 | 0.053408 | 0.046732 | 0.451182 | 0.299305 | 0.259527 | 0.220306 | 0.183032 | 0.154659 | 0 | 0.06614 | 0.2445 | 6,364 | 223 | 80 | 28.538117 | 0.681572 | 0.093652 | 0 | 0.202312 | 0 | 0 | 0.009039 | 0.003998 | 0 | 0 | 0 | 0 | 0.17341 | 1 | 0.069364 | false | 0 | 0.034682 | 0 | 0.115607 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4f7c4b183c9ec8f408fa5667e59397a49f0ef3c | 3,569 | py | Python | deployment/hrdlbackend/test.py | aogunwoolu/HRDL | 61bee1945d302a6b83cddb28d040a5499ba8023a | [
"MIT",
"Unlicense"
] | null | null | null | deployment/hrdlbackend/test.py | aogunwoolu/HRDL | 61bee1945d302a6b83cddb28d040a5499ba8023a | [
"MIT",
"Unlicense"
] | null | null | null | deployment/hrdlbackend/test.py | aogunwoolu/HRDL | 61bee1945d302a6b83cddb28d040a5499ba8023a | [
"MIT",
"Unlicense"
] | null | null | null | import ccxt
import torch
import torch.nn as nn
from torch.autograd import Variable
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
seq_length = 5
class LSTM(nn.Module):
def __init__(self, num_classes, input_size, hidden_size, num_layers):
super(LSTM, self).__init__()
self.num_classes = num_classes
self.num_layers = num_layers
self.input_size = input_size
self.hidden_size = hidden_size
self.seq_length = seq_length
self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
num_layers=num_layers, batch_first=True)
self.fc = nn.Linear(hidden_size, num_classes)
def forward(self, x):
h_0 = Variable(torch.zeros(
self.num_layers, x.size(0), self.hidden_size))
c_0 = Variable(torch.zeros(
self.num_layers, x.size(0), self.hidden_size))
# Propagate input through LSTM
ula, (h_out, _) = self.lstm(x, (h_0, c_0))
h_out = h_out.view(-1, self.hidden_size)
out = self.fc(h_out)
return out
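# Hedged shape check (sizes are illustrative; input is [batch, seq_len,
# features] because batch_first=True):
#   net = LSTM(num_classes=5, input_size=5, hidden_size=2, num_layers=1)
#   net(torch.zeros(1, seq_length, 5)).shape  # -> torch.Size([1, 5])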
exchange_class = getattr(ccxt, 'binance')
exchange = exchange_class({
'apiKey': 'GxE5pFRZ6wKei541KAKNUNqQLC43TwMD9aw952IUkewTMSDbXjTe65Av8JmyBTaB',
'secret': 'GL3hpjbGkPH7K6m8dXFVUCNhuNbC9hcKEsqlYWiwDci8WaYkLh7lBtNZVDs72eL8',
# 'apiKey': 'J0eCN1nkfjoCPVI3fHzBqU3yzscfgpoBYozm3sJpmunvhtxhKYUoh7IffisZ6yHk',
# 'secret': 'fqYiy16MWUAi61khQb6UGHH6M8XH6kFHPd6578aDZsGsHcDWBGnKf9tyOLH1EBEB',
})
exchange.set_sandbox_mode(True)
symbol = 'SOL/USDT'
tf = '1h'
tf_multi = 60 * 60 * 1000
from datetime import datetime, timedelta
from_timestamp = exchange.parse8601(f'{datetime.now() - timedelta(days=3)}')
end = exchange.parse8601(f'{datetime.now().isoformat()}')
while from_timestamp < end:
ohlcvs = exchange.fetch_ohlcv(symbol, tf, from_timestamp)
print(ohlcvs)
from_timestamp += len(ohlcvs) * tf_multi
# print(exchange_class.timeframes)
# exchange.set_sandbox_mode(True)
# input_size = 5; hidden_size = 2;num_layers = 1;num_classes = 5
# lstm = LSTM(num_classes,input_size,hidden_size,num_layers)
# lstm.load_state_dict(torch.load("./HRDL/static/models/ADAUSDT_SD.pth"))
# lstm.eval()
# klines = exchange.fetch_ohlcv('BTC/USDT', '1d')
# #print(klines)
# data = pd.DataFrame(klines).iloc[-1:,:]
# print(data)
# recent = pd.DataFrame(data)
# df = pd.DataFrame({
# "volume": recent.iloc[:, 5],
# "open": recent.iloc[:, 1],
# "high": recent.iloc[:, 2],
# "low": recent.iloc[:, 3],
# "close": recent.iloc[:, 4]
# })
# arr = df.values.tolist()
# #print(df)
# sc = MinMaxScaler()
# x = sc.fit_transform(arr)
# dataX = Variable(torch.Tensor(np.array([x])))
# print(dataX)
# predict = lstm(dataX)
# data_predict = sc.inverse_transform(predict.data.numpy())
# print(data_predict[0])
#print(exchange.fetch_balance()['USDT'])
#print(exchange.fetch_ticker('BTC/USDT'))
#print(exchange.id, exchange.create_limit_buy_order('BTC/USDT', 0.001, float(exchange.fetch_ticker('BTC/USDT')['close'])))
# import Coinpaprika
# from datetime import datetime, timedelta
# api_client = Coinpaprika.Client()
# twitter_timeline = api_client.coins.twitter("dai-dai")
# # print(twitter_timeline)
# # bitcoin = api_client.coins.with_id("dai-dai")
# OHLC = api_client.coins.historical_OHLC(
# coin_id="dai-dai",
# start=datetime.now() - timedelta(weeks=1),
# end=datetime.now(),
# limit=7,
# )
#bitcoin = api_client.coins
#print(OHLC) | 28.325397 | 122 | 0.677781 | 449 | 3,569 | 5.182628 | 0.325167 | 0.047271 | 0.030082 | 0.024495 | 0.186936 | 0.077353 | 0.077353 | 0.077353 | 0.044693 | 0.044693 | 0 | 0.030511 | 0.182684 | 3,569 | 126 | 123 | 28.325397 | 0.767227 | 0.45587 | 0 | 0.045455 | 0 | 0 | 0.116869 | 0.082496 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.181818 | 0 | 0.272727 | 0.022727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4f859f50a16815bafa505ba716360ea89f37e04 | 318 | py | Python | bibleProc.py | Narcolapser/Little-News-Processor | e408ebd05f8e36f139bb413c91c4b831cd1213c7 | [
"Apache-2.0"
] | 1 | 2016-01-14T15:06:03.000Z | 2016-01-14T15:06:03.000Z | bibleProc.py | Narcolapser/Little-News-Processor | e408ebd05f8e36f139bb413c91c4b831cd1213c7 | [
"Apache-2.0"
] | null | null | null | bibleProc.py | Narcolapser/Little-News-Processor | e408ebd05f8e36f139bb413c91c4b831cd1213c7 | [
"Apache-2.0"
] | null | null | null | import bibleIn
def consume(verses):
ret = ""
for i in verses:
out = ""
for v in i:
add = str(v[3])
if v[0] == u'Proverbs':
add = add.replace("\n\n","\n")
add += "\n"
out += add
ret += out+"\n\n\n"
return ret[:-3]
if __name__ == "__main__":
verses = bibleIn.fetch()
print(consume(verses))
| 16.736842 | 34 | 0.54717 | 51 | 318 | 3.254902 | 0.490196 | 0.048193 | 0.036145 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012448 | 0.242138 | 318 | 18 | 35 | 17.666667 | 0.676349 | 0 | 0 | 0 | 0 | 0 | 0.09434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.1875 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4f9352e728c01bf65c860292d60ed3659212959 | 1,217 | py | Python | var/spack/repos/builtin/packages/libbeagle/package.py | HaochengLIU/spack | 26e51ff1705a4d6234e2a0cf734f93f7f95df5cb | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 2 | 2018-11-27T03:39:44.000Z | 2021-09-06T15:50:35.000Z | var/spack/repos/builtin/packages/libbeagle/package.py | HaochengLIU/spack | 26e51ff1705a4d6234e2a0cf734f93f7f95df5cb | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 1 | 2019-01-11T20:11:52.000Z | 2019-01-11T20:11:52.000Z | var/spack/repos/builtin/packages/libbeagle/package.py | HaochengLIU/spack | 26e51ff1705a4d6234e2a0cf734f93f7f95df5cb | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 1 | 2020-10-14T14:20:17.000Z | 2020-10-14T14:20:17.000Z | # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Libbeagle(AutotoolsPackage):
"""Beagle performs genotype calling, genotype phasing, imputation of
ungenotyped markers, and identity-by-descent segment detection."""
homepage = "https://github.com/beagle-dev/beagle-lib"
url = "https://github.com/beagle-dev/beagle-lib/archive/beagle_release_2_1_2.tar.gz"
version('2.1.2', '1107614e86f652f8ee45c1c92f2af3d4')
depends_on('autoconf', type='build')
depends_on('automake', type='build')
depends_on('libtool', type='build')
depends_on('m4', type='build')
depends_on('subversion', type='build')
depends_on('pkgconfig', type='build')
depends_on('java', type='build')
def url_for_version(self, version):
url = "https://github.com/beagle-dev/beagle-lib/archive/beagle_release_{0}.tar.gz"
return url.format(version.underscored)
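# e.g. version 2.1.2 -> ".../archive/beagle_release_2_1_2.tar.gz"
# (Version.underscored renders the dots as underscores)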
def setup_environment(self, spack_env, run_env):
prefix = self.prefix
run_env.prepend_path('BEAST_LIB', prefix.lib)
| 35.794118 | 93 | 0.704191 | 159 | 1,217 | 5.257862 | 0.534591 | 0.075359 | 0.114833 | 0.129187 | 0.169856 | 0.169856 | 0.169856 | 0.131579 | 0.131579 | 0.131579 | 0 | 0.038273 | 0.162695 | 1,217 | 33 | 94 | 36.878788 | 0.782139 | 0.26212 | 0 | 0 | 0 | 0.111111 | 0.361678 | 0.036281 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.055556 | 0 | 0.388889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4f9d775cae6fb88cf9ac9dee2e3caa35c3e5752 | 5,223 | py | Python | metrics/CreateDBFromArchive/multithread_controller.py | shashvatshukla/visualising-political-shills | 782c341adef71fbc96c7bbdcd54cdc4b191ed58b | [
"MIT"
] | 1 | 2019-05-21T10:45:16.000Z | 2019-05-21T10:45:16.000Z | metrics/CreateDBFromArchive/multithread_controller.py | shashvatshukla/visualising-political-shills | 782c341adef71fbc96c7bbdcd54cdc4b191ed58b | [
"MIT"
] | 5 | 2019-03-15T10:31:33.000Z | 2019-04-19T09:53:26.000Z | metrics/CreateDBFromArchive/multithread_controller.py | shashvatshukla/visualising-political-shills | 782c341adef71fbc96c7bbdcd54cdc4b191ed58b | [
"MIT"
] | null | null | null | import argparse
import multiprocessing
import os
import time
import psycopg2
import worker
import consts
lock = multiprocessing.Lock()
class Controller:
def __init__(self, archive_path, words):
self._archive_path = archive_path
self._words = words
self._task_size = 4
self._n_files_done = multiprocessing.Value('i', 0)
self._total_files = 0
for _, _, files in os.walk(self._archive_path):
for file in files:
if file.endswith("bz2"):
self._total_files += 1
def _worker(self, task_queue, lock):
DBworker = worker.Worker(self._words, lock)
while not task_queue.empty():
crt_task = task_queue.get()
DBworker.json_to_db(crt_task)
with self._n_files_done.get_lock():
self._n_files_done.value += self._task_size
if self._n_files_done.value < self._total_files:
print("Files done: " + str(self._n_files_done.value) +
" out of " + str(self._total_files),
end='\r')
else:
print("Files done: " + str(self._n_files_done.value) +
" out of " + str(self._total_files),
end='\n')
DBworker.finalise()
def _populate_tasks(self, task_queue):
all_files = []
for root, dirs, files in os.walk(self._archive_path):
for file in files:
if file.endswith("bz2"):
all_files.append(os.path.join(root, file))
for idx in range(0, len(all_files), self._task_size):
task_queue.put(all_files[idx:idx + self._task_size])
return task_queue
@staticmethod
def _setup_database():
try:
# Establish connection
connection = psycopg2.connect(**consts.db_creds)
# Create tweets table
drop_first = '''DROP TABLE IF EXISTS tweets'''
cursor = connection.cursor()
cursor.execute(drop_first)
create_table_query = ''' CREATE TABLE tweets
(ID SERIAL PRIMARY KEY,
created_at TIMESTAMP NOT NULL,
text TEXT NOT NULL,
usr VARCHAR (255) NOT NULL,
twid VARCHAR (255) NOT NULL,
md5_hash VARCHAR (255) NOT NULL,
rt_status BOOLEAN NOT NULL,
screen_name VARCHAR(55) NOT NULL,
retweet_text TEXT);
'''
index_query = "CREATE INDEX create_at_index ON tweets (created_at);"
cursor.execute(create_table_query)
cursor.execute(index_query)
except psycopg2.Error as error:
print("Error while connecting to PostgreSQL", error)
finally:
# Batch commit all changes
connection.commit()
# Close connection
if connection:
cursor.close()
connection.close()
@staticmethod
def _create_indexes():
try:
# Establish connection
connection = psycopg2.connect(**consts.db_creds)
cursor = connection.cursor()
text_index = ''' CREATE INDEX text_index ON tweets
USING hash (text); '''
cursor.execute(text_index)
except psycopg2.Error as error:
print("Error while connecting to PostgreSQL", error)
finally:
# Batch commit all changes
connection.commit()
# Close connection
if connection:
cursor.close()
connection.close()
def run(self):
self._setup_database()
empty_task_queue = multiprocessing.Queue()
full_task_queue = self._populate_tasks(empty_task_queue)
processes = []
n_processes = multiprocessing.cpu_count()
print(f'Running with {n_processes} processes!')
start = time.time()
for w in range(n_processes):
p = multiprocessing.Process(
target=self._worker, args=(full_task_queue, lock))
processes.append(p)
p.start()
for p in processes:
p.join()
print('Creating indexes')
self._create_indexes()
print(f'Time taken = {time.time() - start:.10f}')
for p in processes:
p.close()
def main():
parser = argparse.ArgumentParser(description='Create DB from tweets')
parser.add_argument('-a', help='Path to the archive')
parser.add_argument('words', metavar='W', type=str, nargs='*',
help='Words used for filtering')
args = parser.parse_args()
path = args.a
words = args.words
if words is None:
words = []
runner = Controller(path, words)
print("Started job")
runner.run()
print("\nFinished job")
if __name__ == "__main__":
main()
| 31.275449 | 80 | 0.532453 | 548 | 5,223 | 4.846715 | 0.286496 | 0.033886 | 0.02259 | 0.031627 | 0.289157 | 0.277108 | 0.259789 | 0.259789 | 0.259789 | 0.214608 | 0 | 0.008012 | 0.37871 | 5,223 | 166 | 81 | 31.463855 | 0.810478 | 0.027762 | 0 | 0.260163 | 0 | 0 | 0.222135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056911 | false | 0 | 0.056911 | 0 | 0.130081 | 0.073171 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4fb970cc3750249815cf8ca26aa1c1cec5fcc51 | 2,975 | py | Python | bcs-ui/backend/tests/container_service/clusters/tools/conftest.py | masanqi/bk-bcs | 70d97b674fbd5beacde21d6ca8be914d7eb56865 | [
"Apache-2.0"
] | 599 | 2019-06-25T03:20:46.000Z | 2022-03-31T12:14:33.000Z | bcs-ui/backend/tests/container_service/clusters/tools/conftest.py | masanqi/bk-bcs | 70d97b674fbd5beacde21d6ca8be914d7eb56865 | [
"Apache-2.0"
] | 537 | 2019-06-27T06:03:44.000Z | 2022-03-31T12:10:01.000Z | bcs-ui/backend/tests/container_service/clusters/tools/conftest.py | masanqi/bk-bcs | 70d97b674fbd5beacde21d6ca8be914d7eb56865 | [
"Apache-2.0"
] | 214 | 2019-06-25T03:26:05.000Z | 2022-03-31T07:52:03.000Z | # -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making 蓝鲸智云PaaS平台社区版 (BlueKing PaaS Community
Edition) available.
Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://opensource.org/licenses/MIT
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
from unittest.mock import patch
import pytest
from backend.resources.node.client import Node
from backend.tests.resources.conftest import FakeBcsKubeConfigurationService
fake_inner_ip = "127.0.0.1"
fake_node_name = "bcs-test-node"
fake_labels = {"bcs-test": "test"}
fake_taints = {"key": "test", "value": "tet", "effect": "NoSchedule"}
@pytest.fixture
def node_name():
return fake_node_name
@pytest.fixture
def bcs_cc_nodes():
return {
"127.0.0.1": {"inner_ip": "127.0.0.1", "status": "initializing"},
"127.0.0.2": {"inner_ip": "127.0.0.2", "status": "normal"},
"127.0.0.3": {"inner_ip": "127.0.0.3", "status": "initial_failed"},
}
@pytest.fixture
def cluster_nodes():
return {
"127.0.0.2": {"inner_ip": "127.0.0.2", "status": "Ready", "unschedulable": False, "node_name": "127.0.0.2"},
"127.0.0.4": {"inner_ip": "127.0.0.3", "status": "Ready", "unschedulable": False, "node_name": "127.0.0.4"},
}
@pytest.fixture(autouse=True)
def use_faked_configuration():
with patch(
'backend.resources.utils.kube_client.BcsKubeConfigurationService',
new=FakeBcsKubeConfigurationService,
):
yield
@pytest.fixture
def client(ctx_cluster):
return Node(ctx_cluster)
@pytest.fixture
def create_and_delete_node(client):
client.update_or_create(
body={
"apiVersion": "v1",
"kind": "Node",
"metadata": {"name": fake_node_name, "labels": fake_labels},
"spec": {"taints": [fake_taints]},
"status": {
"addresses": [
{"address": fake_inner_ip, "type": "InternalIP"},
],
"conditions": [
{
"lastHeartbeatTime": "2021-07-07T04:13:48Z",
"lastTransitionTime": "2020-09-16T05:24:53Z",
"message": "kubelet is posting ready status",
"reason": "KubeletReady",
"status": "True",
"type": "Ready",
}
],
},
},
name=fake_node_name,
)
yield
client.delete_wait_finished(fake_node_name)
| 32.336957 | 116 | 0.608739 | 362 | 2,975 | 4.88674 | 0.464088 | 0.029395 | 0.036744 | 0.037309 | 0.120407 | 0.105144 | 0.090447 | 0.072357 | 0.072357 | 0.028265 | 0 | 0.053225 | 0.25479 | 2,975 | 91 | 117 | 32.692308 | 0.7447 | 0.244706 | 0 | 0.177419 | 0 | 0 | 0.284055 | 0.028138 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096774 | false | 0 | 0.064516 | 0.064516 | 0.225806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4fc24c83ee30aafff9009610ae1dad007c5028d | 1,254 | py | Python | nlp_gym/data_pools/registry.py | lipucky/nlp-gym | d6d0175f038a777f0ed07586053e89fa9b1b6d2d | [
"MIT"
] | 120 | 2020-11-12T15:40:41.000Z | 2022-03-31T06:51:08.000Z | nlp_gym/data_pools/registry.py | lipucky/nlp-gym | d6d0175f038a777f0ed07586053e89fa9b1b6d2d | [
"MIT"
] | 3 | 2021-09-09T02:49:12.000Z | 2022-03-08T11:20:41.000Z | nlp_gym/data_pools/registry.py | lipucky/nlp-gym | d6d0175f038a777f0ed07586053e89fa9b1b6d2d | [
"MIT"
] | 13 | 2020-11-19T01:12:30.000Z | 2022-03-07T05:41:08.000Z | from nlp_gym.data_pools.custom_multi_label_pools import ReutersDataPool, AAPDDataPool
from nlp_gym.data_pools.custom_seq_tagging_pools import UDPosTagggingPool, CONLLNerTaggingPool
from nlp_gym.data_pools.custom_question_answering_pools import AIRC, QASC
from nlp_gym.data_pools.base import DataPool
from typing import Any, Dict, List
class DataPoolRegistry:
_registry_mapping = {
# Multi-label
"ReutersPool": ReutersDataPool,
"AAPDPool": AAPDDataPool,
# Sequence Tagging
"UDPosTagPool": UDPosTagggingPool,
"CONLLNerTaggingPool": CONLLNerTaggingPool,
# Question answering
"AIRC": AIRC,
"QASC": QASC
}
@classmethod
def get_pool_splits(cls, pool_name: str, pool_params: Dict[str, Any] = {}) -> List[DataPool]:
"""
Returns train, val and test splits
"""
pool_cls = cls._registry_mapping[pool_name]
train_pool_instance = pool_cls.prepare(**{**pool_params, **{"split": "train"}})
val_pool_instance = pool_cls.prepare(**{**pool_params, **{"split": "val"}})
test_pool_instance = pool_cls.prepare(**{**pool_params, **{"split": "test"}})
return [train_pool_instance, val_pool_instance, test_pool_instance] | 38 | 97 | 0.691388 | 141 | 1,254 | 5.829787 | 0.361702 | 0.087591 | 0.048662 | 0.068127 | 0.26399 | 0.240876 | 0.149635 | 0.149635 | 0 | 0 | 0 | 0 | 0.199362 | 1,254 | 33 | 98 | 38 | 0.818725 | 0.066188 | 0 | 0 | 0 | 0 | 0.074171 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.238095 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4fc8fcf5dbba2521c1c4d083520ee4a78b22fa2 | 16,330 | py | Python | Scripts/simulation/curfew/curfew_service.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | Scripts/simulation/curfew/curfew_service.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | Scripts/simulation/curfew/curfew_service.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | # uncompyle6 version 3.7.4
# Python bytecode 3.7 (3394)
# Decompiled from: Python 3.7.9 (tags/v3.7.9:13c94747c7, Aug 17 2020, 18:58:18) [MSC v.1900 64 bit (AMD64)]
# Embedded file name: T:\InGame\Gameplay\Scripts\Server\curfew\curfew_service.py
# Compiled at: 2020-06-08 22:31:54
# Size of source mod 2**32: 22657 bytes
from date_and_time import TimeSpan
from event_testing.resolver import SingleSimResolver, DoubleSimResolver
from event_testing.tests import TunableTestSet
from sims4.callback_utils import CallableList
from sims4.localization import TunableLocalizedStringFactory
from sims4.service_manager import Service
from sims4.tuning.tunable import TunableList, TunableRange, TunableSimMinute, TunablePackSafeReference, TunableReference, TunableSet, TunableEnumEntry
from sims4.utils import classproperty
from tag import Tag
from ui.ui_dialog import UiDialogOk
import alarms, build_buy, date_and_time, persistence_error_types, services, sims4
class CurfewService(Service):
ALLOWED_CURFEW_TIMES = TunableList(description='\n A list of times (in military time) that are allowed to be set as curfew\n times.\n \n NOTE: Many objects will have curfew components and will only support\n a visual of certain values. Changing these values without making sure\n the art supports the value will not work properly. Please only change\n these values if you know for sure they need to be changed and are \n getting support from modelling to make the change.\n ',
tunable=TunableRange(description='\n The hour for which the curfew will be set to.\n ',
tunable_type=int,
default=0,
minimum=0,
maximum=23))
CURFEW_END_TIME = TunableRange(description='\n The time when the curfew is considered to be over and the Sims are \n no longer subject to it.\n \n This should probably be set to some time in the morning. 6am perhaps.\n ',
tunable_type=int,
default=0,
minimum=0,
maximum=23)
MINUTES_BEFORE_CURFEW_WARNING = TunableSimMinute(description='\n The minutes before the curfew starts that a Sim should receive a \n warning about the curfew being about to start.\n ',
default=30)
BREAK_CURFEW_WARNING = TunableLocalizedStringFactory(description='\n The string that is used to warn the player that a pie menu action will\n cause the Sim to break curfew. This will wrap around the name of the \n interaction so should be tuned to something like [Warning] {0.String}.\n ')
CURFEW_WARNING_TEXT_MESSAGE_DIALOG = UiDialogOk.TunableFactory(description='\n The dialog to display as a text message when warning a Sim that their\n curfew is about to expire.\n ')
CURFEW_WARNING_SIM_TESTS = TunableTestSet(description='\n Tests to run on each of the Sims to determine if they should receive\n the curfew warning text message or not.\n ')
BREAK_CURFEW_BUFF = TunablePackSafeReference(description="\n The buff that gets added to a Sim that breaks curfew. This buff will\n enable the Sim to be disciplined for their behavior.\n ",
manager=(services.buff_manager()))
INTERACTION_BLACKLIST_TAGS = TunableSet(description='\n A list of all the tags that blacklist interactions from causing Sims to\n break curfew.\n ',
tunable=TunableEnumEntry(description='\n A tag that when tagged on the interaction will allow the Sim to run\n the interaction and not break curfew.\n ',
tunable_type=Tag,
default=(Tag.INVALID),
pack_safe=True))
CURFEW_BEGIN_LOOT = TunablePackSafeReference(description='\n The loot to apply to all Sims in the family when curfew begins. This\n will allow us to give buffs that affect the behavior of the Sims if\n they pass certain tests.\n ',
manager=(services.get_instance_manager(sims4.resources.Types.ACTION)))
CURFEW_END_LOOT = TunablePackSafeReference(description='\n The loot to apply to all Sims in the family when curfew ends. This will\n allow us to remove buffs that affect the behavior of the Sims.\n ',
manager=(services.get_instance_manager(sims4.resources.Types.ACTION)))
UNSET = -1
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._zone_curfew_data = {}
self._curfew_warning_alarm_handle = None
self._curfew_started_alarm_handle = None
self._curfew_ended_alarm_handle = None
self._curfew_message_alarm_handle = None
self._curfew_warning_callback = CallableList()
self._curfew_started_callback = CallableList()
self._curfew_ended_callback = CallableList()
self._time_set_callback = CallableList()
def get_zone_curfew(self, zone_id):
curfew_setting = self._zone_curfew_data.get(zone_id, self.UNSET)
return curfew_setting
def set_zone_curfew(self, zone_id, curfew_setting):
if self._zone_curfew_data.get(zone_id, None) == curfew_setting:
return
if curfew_setting not in CurfewService.ALLOWED_CURFEW_TIMES:
if curfew_setting != CurfewService.UNSET:
return
self._zone_curfew_data[zone_id] = curfew_setting
self._update_curfew_settings(zone_id, curfew_setting)
self._setup_curfew_text_message()
def _update_curfew_settings(self, current_zone_id, current_setting):
self._create_alarm_handles(current_zone_id)
self._time_set_callback(current_setting)
def _create_alarm_handles(self, zone_id):
for alarm in (self._curfew_warning_alarm_handle,
self._curfew_started_alarm_handle,
self._curfew_ended_alarm_handle):
if alarm is not None:
alarms.cancel_alarm(alarm)
time = self._zone_curfew_data.get(zone_id, self.UNSET)
now = services.time_service().sim_now
self._create_warning_callback(now, time)
self._create_curfew_callback(now, time)
self._create_curfew_ended_callback(now, time)
def _create_warning_callback(self, now, time):
if time is not CurfewService.UNSET:
alarm_time = date_and_time.create_date_and_time(hours=(time - 1))
warning_span = now.time_till_next_day_time(alarm_time)
if warning_span.in_ticks() == 0:
warning_span += TimeSpan(date_and_time.sim_ticks_per_day())
self._curfew_warning_alarm_handle = alarms.add_alarm(self, warning_span, self._handle_warning_callback, False)
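# A zero-tick span means the target time is exactly "now"; bumping it by a
# full day pushes the alarm to the next occurrence instead of firing
# immediately. The same guard appears in the two alarm builders below.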
def _handle_warning_callback(self, handle):
self._curfew_warning_callback()
now = services.time_service().sim_now
time = self._zone_curfew_data.get(services.current_zone_id(), CurfewService.UNSET)
self._create_warning_callback(now, time)
def _create_curfew_callback(self, now, time):
if time is not self.UNSET:
alarm_time = date_and_time.create_date_and_time(hours=time)
curfew_span = now.time_till_next_day_time(alarm_time)
if curfew_span.in_ticks() == 0:
curfew_span += TimeSpan(date_and_time.sim_ticks_per_day())
self._curfew_started_alarm_handle = alarms.add_alarm(self, curfew_span, self._handle_curfew_callback, False)
def _handle_curfew_callback(self, handle):
self._curfew_started_callback()
now = services.time_service().sim_now
time = self._zone_curfew_data.get(services.current_zone_id(), CurfewService.UNSET)
self.apply_curfew_loots()
self._create_curfew_callback(now, time)
def _create_curfew_ended_callback(self, now, time):
if time is not CurfewService.UNSET:
alarm_time = date_and_time.create_date_and_time(hours=(CurfewService.CURFEW_END_TIME))
curfew_span = now.time_till_next_day_time(alarm_time)
if curfew_span.in_ticks() == 0:
curfew_span += TimeSpan(date_and_time.sim_ticks_per_day())
self._curfew_ended_alarm_handle = alarms.add_alarm(self, curfew_span, self._handle_curfew_ended_callback, False)
def _handle_curfew_ended_callback(self, handle):
self._curfew_ended_callback()
now = services.time_service().sim_now
time = CurfewService.CURFEW_END_TIME
self.remove_curfew_loots()
self._create_curfew_ended_callback(now, time)
def register_for_alarm_callbacks(self, warning_callback, curfew_callback, curfew_over_callback, time_set_callback):
self._curfew_warning_callback.append(warning_callback)
self._curfew_started_callback.append(curfew_callback)
self._curfew_ended_callback.append(curfew_over_callback)
self._time_set_callback.append(time_set_callback)
def unregister_for_alarm_callbacks(self, warning_callback, curfew_callback, curfew_over_callback, time_set_callback):
if warning_callback in self._curfew_warning_callback:
self._curfew_warning_callback.remove(warning_callback)
if curfew_callback in self._curfew_started_callback:
self._curfew_started_callback.remove(curfew_callback)
if curfew_over_callback in self._curfew_ended_callback:
self._curfew_ended_callback.remove(curfew_over_callback)
if time_set_callback in self._time_set_callback:
self._time_set_callback.remove(time_set_callback)
def sim_breaking_curfew(self, sim, target, interaction=None):
if interaction is not None:
if self.interaction_blacklisted(interaction):
return False
elif sim.sim_info.is_in_travel_group():
return False
situation_manager = services.get_zone_situation_manager()
sim_situations = situation_manager.get_situations_sim_is_in(sim)
if any((situation.disallows_curfew_violation for situation in sim_situations)):
return False
active_household = services.active_household()
if active_household is None:
return False
home_zone_id = active_household.home_zone_id
curfew_setting = self._zone_curfew_data.get(home_zone_id, CurfewService.UNSET)
if sim.sim_info not in active_household:
return False
elif curfew_setting is not CurfewService.UNSET:
if sim.sim_info.is_young_adult_or_older:
return False
if self.past_curfew(curfew_setting) and not services.current_zone_id() == home_zone_id:
ensemble_service = services.ensemble_service()
ensemble = ensemble_service.get_visible_ensemble_for_sim(sim)
if ensemble is not None:
if any((sim.sim_info.is_young_adult_or_older and sim.sim_info in active_household for sim in ensemble)):
return False
return True
if target is not None and not target.is_in_inventory():
if not services.active_lot().is_position_on_lot(target.position):
return True
if target is None:
if not services.active_lot().is_position_on_lot(sim.position):
return True
return False
def interaction_blacklisted(self, interaction):
interaction_tags = interaction.get_category_tags()
for tag in CurfewService.INTERACTION_BLACKLIST_TAGS:
if tag in interaction_tags:
return True
return False
def past_curfew(self, curfew_setting):
now = services.time_service().sim_now
if now.hour() >= curfew_setting or now.hour() < CurfewService.CURFEW_END_TIME:
return True
return False
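# e.g. with curfew_setting = 22 and CURFEW_END_TIME = 6, hours 22-23 and
# 0-5 count as past curfew -- the window wraps around midnight, hence the
# two-sided comparison above.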
def _setup_curfew_text_message(self):
if self._curfew_message_alarm_handle is not None:
self._curfew_message_alarm_handle.cancel()
self._curfew_message_alarm_handle = None
current_household = services.active_household()
if current_household is None:
return
home_zone_id = current_household.home_zone_id
curfew_setting = self._zone_curfew_data.get(home_zone_id, CurfewService.UNSET)
if curfew_setting is CurfewService.UNSET:
return
now = services.time_service().sim_now
alarm_time = date_and_time.create_date_and_time(hours=curfew_setting)
time_till_alarm = now.time_till_next_day_time(alarm_time)
span = date_and_time.create_time_span(minutes=(CurfewService.MINUTES_BEFORE_CURFEW_WARNING))
time_till_alarm -= span
self._curfew_message_alarm_handle = alarms.add_alarm(self, time_till_alarm, self._handle_curfew_message_callback, False)
def _handle_curfew_message_callback(self, handle):
active_lot = services.active_lot()
if active_lot.lot_id != services.active_household_lot_id():
from_sim = None
for sim_info in services.active_household():
if sim_info.is_young_adult_or_older:
from_sim = sim_info.get_sim_instance() or sim_info  # prefer the instanced Sim; fall back to its info
break
if from_sim is None:
return
for sim_info in services.active_household():
if sim_info.get_sim_instance() is None:
continue
resolver = DoubleSimResolver(sim_info, from_sim)
if not CurfewService.CURFEW_WARNING_SIM_TESTS.run_tests(resolver):
continue
dialog = self.CURFEW_WARNING_TEXT_MESSAGE_DIALOG(sim_info, target_sim_id=(from_sim.id), resolver=resolver)
dialog.show_dialog()
def add_broke_curfew_buff(self, sim):
if not sim.has_buff(CurfewService.BREAK_CURFEW_BUFF):
sim.add_buff(CurfewService.BREAK_CURFEW_BUFF)
def remove_broke_curfew_buff(self, sim):
if sim.has_buff(CurfewService.BREAK_CURFEW_BUFF):
sim.remove_buff_by_type(CurfewService.BREAK_CURFEW_BUFF)
def is_curfew_active_on_lot_id(self, lot_id):
curfew_setting = self._zone_curfew_data.get(lot_id, CurfewService.UNSET)
if curfew_setting == CurfewService.UNSET:
return False
return self.past_curfew(curfew_setting)
def apply_curfew_loots(self):
for sim_info in services.active_household():
resolver = SingleSimResolver(sim_info)
CurfewService.CURFEW_BEGIN_LOOT.apply_to_resolver(resolver)
def remove_curfew_loots(self):
for sim_info in services.active_household():
resolver = SingleSimResolver(sim_info)
CurfewService.CURFEW_END_LOOT.apply_to_resolver(resolver)
@classproperty
def save_error_code(cls):
return persistence_error_types.ErrorCodes.SERVICE_SAVE_FAILED_CURFEW_SERVICE
def save(self, object_list=None, zone_data=None, open_street_data=None, save_slot_data=None):
persistence_service = services.get_persistence_service()
for save_zone_data in persistence_service.zone_proto_buffs_gen():
setting = self._zone_curfew_data.get(save_zone_data.zone_id, CurfewService.UNSET)
save_zone_data.gameplay_zone_data.curfew_setting = setting
def load(self, zone_data=None):
persistence_service = services.get_persistence_service()
for zone_data in persistence_service.zone_proto_buffs_gen():
self._zone_curfew_data[zone_data.zone_id] = zone_data.gameplay_zone_data.curfew_setting
def on_zone_load(self):
current_zone_id = services.current_zone_id()
self._setup_curfew_text_message()
self._create_alarm_handles(current_zone_id)
venue_manager = services.get_instance_manager(sims4.resources.Types.VENUE)
current_venue_tuning = venue_manager.get(build_buy.get_current_venue(current_zone_id))
if current_venue_tuning.is_residential or current_venue_tuning.is_university_housing:
current_setting = self._zone_curfew_data.get(current_zone_id, CurfewService.UNSET)
self._update_curfew_settings(current_zone_id, current_setting)
else:
self._update_curfew_settings(current_zone_id, CurfewService.UNSET) | 56.701389 | 543 | 0.702266 | 2,138 | 16,330 | 5.01637 | 0.152011 | 0.030769 | 0.014359 | 0.021818 | 0.474126 | 0.347972 | 0.285128 | 0.256131 | 0.211655 | 0.177063 | 0 | 0.007327 | 0.231108 | 16,330 | 288 | 544 | 56.701389 | 0.846846 | 0.0188 | 0 | 0.2749 | 0 | 0.043825 | 0.137167 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.10757 | false | 0.003984 | 0.043825 | 0.003984 | 0.294821 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4ff29840efc7a4c06f6a4fdf581373f72bb8042 | 6,556 | py | Python | util/trampballfont.py | tjol/trampball | 3cf10c70485c9f66231f0e75da55e933ec0a5df9 | [
"MIT"
] | null | null | null | util/trampballfont.py | tjol/trampball | 3cf10c70485c9f66231f0e75da55e933ec0a5df9 | [
"MIT"
] | null | null | null | util/trampballfont.py | tjol/trampball | 3cf10c70485c9f66231f0e75da55e933ec0a5df9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import argparse
import struct
import os.path
import freetype
import numpy as np
asciichars = ''.join(chr(i) for i in range(ord(' '), ord('~')+1))
def mkbuf(face):
tops = [(face.load_char(c), face.glyph.bitmap_top)[1] for c in asciichars]
heights = [(face.load_char(c), face.glyph.bitmap.rows)[1] for c in asciichars]
widths = [(face.load_char(c), face.glyph.bitmap.width)[1] for c in asciichars]
lefts = [(face.load_char(c), face.glyph.bitmap_left)[1] for c in asciichars]
bottoms = [t-h for t, h in zip(tops, heights)]
highest_height = max(tops)
chr_w = max(widths)
chr_h = max(tops) - min(bottoms)
buf = np.zeros([len(asciichars), chr_h, chr_w], dtype='uint8')
for i, c in enumerate(asciichars):
face.load_char(c)
y = highest_height - face.glyph.bitmap_top
x = face.glyph.bitmap_left
h = face.glyph.bitmap.rows
w = face.glyph.bitmap.width
buf[i, y:y+h, x:x+w].flat = face.glyph.bitmap.buffer
return buf, chr_w, chr_h
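# The returned buffer is indexed [glyph, row, col]: glyphs follow asciichars
# (' ' through '~'), so buf[ord(c) - ord(' ')] is the cell for character c,
# with every glyph padded into a common chr_w x chr_h cell.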
def demo_buf(buf, chr_w, chr_h):
img = np.zeros([10, 10, chr_h, chr_w], dtype='uint8')
img.flat[:buf.size] = buf.flat
return np.hstack(np.hstack(img))
def demo_xpm(fn, buf, chr_w, chr_h):
img = demo_buf(buf, chr_w, chr_h)
colours = np.unique(img)
abc = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
abc += abc.lower()
abc += '0123456789'
abc = ' .,:~/?!#' + abc
abc1 = abc
while len(colours) > len(abc):
abc = [a + b for a in abc for b in abc1]
cmap = dict(zip(colours, abc))
with open(fn, 'w') as fp:
print('/* XPM */', file=fp)
print('static char * font_demo_xpm[] = {', file=fp)
print('"%d %d %d %d"' % (img.shape[1], img.shape[0], len(cmap), len(abc[0])), file=fp)
for colourval, code in cmap.items():
print('"%s\tc #%s",' % (code, 3*('%02x'%colourval)), file=fp)
for line in img[:-1]:
print('"' + ''.join(map(cmap.__getitem__, line)) + '",', file=fp)
print('"' + ''.join(map(cmap.__getitem__, img[-1])) + '"};', file=fp)
def demo_bmp(fn, buf, chr_w, chr_h):
img = demo_buf(buf, chr_w, chr_h)
colours = np.unique(img)
if len(colours) <= 2:
bits = 1
px_per_byte = 8
elif len(colours) <= 0x10:
bits = 4
px_per_byte = 2
else:
bits = 8
px_per_byte = 1
colourtable = dict(zip(colours, range(len(colours))))
print(bits, len(colours))
# construct the file in memory first
h = img.shape[0]
w = img.shape[1]
row_len_dwords = int(np.ceil(w / px_per_byte / 4))
img_buf = np.zeros([h, row_len_dwords*4], dtype='u1')
padded_original_row_len = row_len_dwords * 4 * px_per_byte
img_padded = np.zeros([h, padded_original_row_len], dtype='u1')
img_padded[:,:w] = img
img_padded = img_padded.reshape([h, row_len_dwords*4, px_per_byte])
for i in range(img_buf.shape[0]):
for j in range(img_buf.shape[1]):
for k in range(px_per_byte):
clr = colourtable[img_padded[i,j,k]]
img_buf[h-i-1,j] |= clr << ((px_per_byte-1)-k)*bits
bitmapinfoheader = struct.pack('<LLLHHLLLLLL',
40, # header size
w, # width
h, # height
1, # number of colour planes
bits, # bits per pixel
0, # uncompressed BI_RGB
img_buf.size, # image size in bytes
1417, # horiz res
1417, # vert res
len(colours), # length of palette
0)
bmp_colourtable = b''.join(struct.pack('BBBB', c, c, c, 0) for c in colours)
dataoffset = 14 + len(bitmapinfoheader) + len(bmp_colourtable)
filesize = dataoffset + img_buf.size
fileheader = b'BM' + struct.pack('<LHHL', filesize, 0, 0, dataoffset)
with open(fn, 'wb') as fp:
fp.write(fileheader)
fp.write(bitmapinfoheader)
fp.write(bmp_colourtable)
fp.write(img_buf.ravel('C'))
def write_font_bin(fn, buf, size, w, h):
bitmap = np.all((buf == 0) | (buf == 255))
with open(fn, 'wb') as fp:
fp.write(b'TRAMPBALLFONT 1')
if bitmap:
fp.write(b'b')
else:
fp.write(b'\x00')
fp.write(struct.pack('!LLLL', size, w, h, buf.size//(w*h)))
if bitmap:
octets = int(np.ceil(buf.size/8))
buf2 = np.zeros([octets, 8], dtype='u1')
buf2.flat[:buf.size] = buf.flat
for bits in buf2:
b = (((bits[0] & 1) << 7) |
((bits[1] & 1) << 6) |
((bits[2] & 1) << 5) |
((bits[3] & 1) << 4) |
((bits[4] & 1) << 3) |
((bits[5] & 1) << 2) |
((bits[6] & 1) << 1) |
((bits[7] & 1) << 0))
fp.write(bytes([b]))
else:
fp.write(buf.ravel('C'))
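# Hedged reader sketch for the header written above (it mirrors exactly what
# write_font_bin emits; nothing beyond that is assumed):
def _example_read_font_header(fp):
    magic = fp.read(16)  # b'TRAMPBALLFONT 1' plus one flag byte
    is_bitmap = magic.endswith(b'b')
    size, w, h, n_glyphs = struct.unpack('!LLLL', fp.read(16))
    return is_bitmap, size, w, h, n_glyphs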
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Convert TTF to trampball bitmap font format')
parser.add_argument('ttf_file', metavar='ttf-file', help='input file name')
parser.add_argument('-s', '--size', default=16, type=int,
help='nominal size in pixels')
parser.add_argument('-o', '--output', help='output file name')
parser.add_argument('-x', '--create-xpm', action='store_true',
help='create an XPM image file showing the font')
parser.add_argument('-b', '--create-bmp', action='store_true',
help='create a BMP image file showing the font')
args = parser.parse_args()
face = freetype.Face(args.ttf_file)
size = args.size
face.set_pixel_sizes(size, size)
buf, chr_w, chr_h = mkbuf(face)
out_fn = args.output
if not out_fn:
if args.ttf_file.lower().endswith('.ttf'):
out_fn = args.ttf_file[:-4] + '.tbf'
else:
out_fn = args.ttf_file + '.tbf'
if args.create_xpm:
xpm_fn = out_fn + '.xpm'
demo_xpm(xpm_fn, buf, chr_w, chr_h)
if args.create_bmp:
xpm_fn = out_fn + '.bmp'
demo_bmp(xpm_fn, buf, chr_w, chr_h)
write_font_bin(out_fn, buf, size, chr_w, chr_h)
| 36.220994 | 95 | 0.526388 | 916 | 6,556 | 3.614629 | 0.220524 | 0.015705 | 0.021142 | 0.024162 | 0.241317 | 0.120205 | 0.108426 | 0.045304 | 0.03141 | 0.03141 | 0 | 0.025237 | 0.323063 | 6,556 | 180 | 96 | 36.422222 | 0.72082 | 0.029896 | 0 | 0.09396 | 0 | 0 | 0.076087 | 0.004096 | 0 | 0 | 0.00063 | 0 | 0 | 1 | 0.033557 | false | 0 | 0.033557 | 0 | 0.080537 | 0.04698 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a4ff8de93dba70de2f7f6a62269957491814063c | 1,743 | py | Python | app/nlp/bulk_predict.py | s2t2/tweet-analyzer-py | 0a398fc47101a2d602d8c4116c970f1076a58f27 | [
"MIT"
] | 5 | 2020-04-02T12:03:57.000Z | 2020-10-18T19:29:15.000Z | app/nlp/bulk_predict.py | s2t2/tweet-analyzer-py | 0a398fc47101a2d602d8c4116c970f1076a58f27 | [
"MIT"
] | 22 | 2020-03-31T02:00:34.000Z | 2021-06-30T17:59:01.000Z | app/nlp/bulk_predict.py | s2t2/tweet-analyzer-py | 0a398fc47101a2d602d8c4116c970f1076a58f27 | [
"MIT"
] | 3 | 2020-04-04T16:08:08.000Z | 2020-10-20T01:32:46.000Z | import os
from app import seek_confirmation
from app.job import Job
from app.bq_service import BigQueryService
from app.nlp.model_storage import ModelStorage, MODELS_DIRPATH
MODEL_NAME = os.getenv("MODEL_NAME", default="current_best")
LIMIT = os.getenv("LIMIT")
BATCH_SIZE = int(os.getenv("BATCH_SIZE", default="100000"))
if __name__ == "__main__":
storage = ModelStorage(dirpath=f"{MODELS_DIRPATH}/{MODEL_NAME}")
tv = storage.load_vectorizer()
clf = storage.load_model()
bq_service = BigQueryService()
print("DESTROYING PREDICTIONS TABLE???")
seek_confirmation()
print("DESTROYING PREDICTIONS TABLE...")
bq_service.destructively_migrate_2_community_predictions_table()
job = Job()
job.start()
ids_batch = []
statuses_batch = []
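# Stream unlabeled statuses, buffering ids and texts until BATCH_SIZE rows
# are queued, then vectorize and predict the whole batch in one call.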
for row in bq_service.fetch_unlabeled_statuses_in_batches(limit=LIMIT):
ids_batch.append(row["status_id"])
statuses_batch.append(row["status_text"])
job.counter += 1
if job.counter % BATCH_SIZE == 0:
results = clf.predict(tv.transform(statuses_batch))
batch = [{"status_id": s, "predicted_community_id": int(i)} for s, i in zip(ids_batch, results)]
bq_service.upload_predictions_in_batches(batch)
job.progress_report()
ids_batch = []
statuses_batch = []
batch = []
if len(statuses_batch) > 0:
results = clf.predict(tv.transform(statuses_batch))
batch = [{"status_id": s, "predicted_community_id": int(i)} for s, i in zip(ids_batch, results)]
bq_service.upload_predictions_in_batches(batch)
job.progress_report()
ids_batch = []
statuses_batch = []
batch = []
job.end()
| 30.578947 | 108 | 0.663224 | 215 | 1,743 | 5.069767 | 0.32093 | 0.083486 | 0.066055 | 0.057798 | 0.344954 | 0.344954 | 0.344954 | 0.344954 | 0.344954 | 0.344954 | 0 | 0.007348 | 0.219162 | 1,743 | 56 | 109 | 31.125 | 0.793534 | 0 | 0 | 0.380952 | 0 | 0 | 0.128514 | 0.041882 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.119048 | 0 | 0.119048 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
350036bbe71df8bb1814b798525412437eca2231 | 3,221 | py | Python | AI-City-Vehicle-Reid/test/submit.py | he010103/Traffic-Brain | abaeafba9e31b2ae9d4ec3447255146888dbb279 | [
"MIT"
] | 15 | 2019-07-29T03:20:05.000Z | 2022-01-27T01:23:29.000Z | AI-City-Vehicle-Reid/test/submit.py | he010103/Traffic-Brain | abaeafba9e31b2ae9d4ec3447255146888dbb279 | [
"MIT"
] | 13 | 2019-07-24T06:28:45.000Z | 2020-06-29T04:09:16.000Z | AI-City-Vehicle-Reid/test/submit.py | he010103/Traffic-Brain | abaeafba9e31b2ae9d4ec3447255146888dbb279 | [
"MIT"
] | 11 | 2019-06-16T18:39:20.000Z | 2021-11-04T04:08:04.000Z | # srun --mpi=pmi2 -p VI_UC_TITANXP -n1 --gres=gpu:4 python test.py
import os
from os.path import join as opj
import numpy as np
from scipy.spatial.distance import cdist
from tqdm import tqdm
import sys
import re
from sklearn import preprocessing
import multiprocessing
import torch
from torch.optim import lr_scheduler
from torch import nn
from torchvision import transforms
from torchvision.datasets.folder import default_loader
sys.path.append('../train/')
from modeling.baseline import Baseline
from metrics import track_reranking_weight_feat
import pickle
test_transform = transforms.Compose([
transforms.Resize((320, 320)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
def extract_feature(model, inputs):
with torch.no_grad():
features = torch.FloatTensor()
ff = torch.FloatTensor(inputs.size(0), 2048).zero_()
for i in range(2):
if i == 1:
# flip
inputs = inputs.index_select(3, torch.arange(inputs.size(3) - 1, -1, -1).long())
input_img = inputs.to('cuda')
outputs = model(input_img)
f = outputs.data.cpu()
ff = ff + f
features = torch.cat((features, ff), 0)
return features.cpu().data.numpy()
def list_pictures(directory):
imgs = sorted([opj(directory, img) for img in os.listdir(directory)], key=lambda x: int(x.split('/')[-1].split('.')[0]))
return imgs
def read_image(img_path):
img = default_loader(img_path)
img = test_transform(img)
img = img.unsqueeze(0)
return img
if __name__ == '__main__':
model_path = 'your model path'
model = Baseline(10000, 1, model_path, 'bnneck', 'before', 'resnet50', 'self')
model = model.to('cuda')
model = nn.DataParallel(model)
resume = torch.load(model_path)
model.load_state_dict(resume)
model.eval()
print('create multiprocessing...')
pool = multiprocessing.Pool(processes=32)
print('after create multiprocessing...')
query_dir = './image_query/'
test_dir = './image_test/'
    query_img_paths = list_pictures(query_dir)
    test_img_paths = list_pictures(test_dir)
batch_size = 256
qf = np.zeros((len(query_img_paths), 2048))
for i in tqdm(range( int(np.ceil(len(query_img_paths)/batch_size)) )):
cur_query_img = pool.map(read_image, query_img_paths[i*batch_size:(i+1)*batch_size])
        if len(cur_query_img) == 0:
            break
        cur_query_img = torch.cat(cur_query_img, 0)
cur_qf = extract_feature(model, cur_query_img)
qf[i*batch_size:(i+1)*batch_size, :] = cur_qf
gf = np.zeros((len(test_img_paths), 2048))
for i in tqdm(range( int(np.ceil(len(test_img_paths)/batch_size)) )):
cur_test_img = pool.map(read_image, test_img_paths[i*batch_size:(i+1)*batch_size])
        if len(cur_test_img) == 0:
            break
        cur_test_img = torch.cat(cur_test_img, 0)
cur_gf = extract_feature(model, cur_test_img)
gf[i*batch_size:(i+1)*batch_size, :] = cur_gf
pickle.dump({'qf': qf}, open('qf.data', 'wb'))
pickle.dump({'gf': gf}, open('gf.data', 'wb')) | 32.867347 | 124 | 0.658802 | 475 | 3,221 | 4.265263 | 0.328421 | 0.048865 | 0.035538 | 0.021718 | 0.188549 | 0.126357 | 0.126357 | 0.126357 | 0.070089 | 0.070089 | 0 | 0.030505 | 0.206147 | 3,221 | 98 | 125 | 32.867347 | 0.76183 | 0.021422 | 0 | 0 | 0 | 0 | 0.054286 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039474 | false | 0 | 0.223684 | 0 | 0.302632 | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
35011ecc71808d97e13c85625b4d23c5c5f11948 | 612 | py | Python | temboardui/model/alembic/versions/20211112_2d55ab971bde17_metrics_archive_deadlock.py | tilkow/temboard | 99a75c894561ee77e78a2dc568d273c1a02e0b0d | [
"PostgreSQL"
] | null | null | null | temboardui/model/alembic/versions/20211112_2d55ab971bde17_metrics_archive_deadlock.py | tilkow/temboard | 99a75c894561ee77e78a2dc568d273c1a02e0b0d | [
"PostgreSQL"
] | null | null | null | temboardui/model/alembic/versions/20211112_2d55ab971bde17_metrics_archive_deadlock.py | tilkow/temboard | 99a75c894561ee77e78a2dc568d273c1a02e0b0d | [
"PostgreSQL"
] | null | null | null | """metrics-archive-deadlock
Revision ID: 55ab971bde17
Revises: 67f52879da15
Create Date: 2021-11-12 10:24:47.899243
"""
import os.path
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '55ab971bde17'
down_revision = '67f52879da15'
branch_labels = None
depends_on = None
def sqlfile(name):
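    """Load the .sql file that sits next to this module as a SQLAlchemy text clause."""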
directory = os.path.dirname(__file__)
with open(os.path.join(directory, name + '.sql')) as fo:
return sa.text(fo.read())
def upgrade():
name = os.path.basename(__file__)
name, _ = name.rsplit('.', 1)
op.get_bind().execute(sqlfile(name))
| 19.741935 | 60 | 0.705882 | 84 | 612 | 4.988095 | 0.666667 | 0.057279 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103718 | 0.165033 | 612 | 30 | 61 | 20.4 | 0.716243 | 0.25 | 0 | 0 | 0 | 0 | 0.064302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
350195df62defe068877f9ba13b38a816a1ebc0a | 12,835 | py | Python | custom_components/cisco_imc/__init__.py | ecoen66/cisco_imc | 88d2b832d13fa7f52d5cf9b781fa5034fafa197b | [
"MIT"
] | 5 | 2022-02-22T17:57:25.000Z | 2022-02-28T14:30:20.000Z | custom_components/cisco_imc/__init__.py | ecoen66/cisco_imc | 88d2b832d13fa7f52d5cf9b781fa5034fafa197b | [
"MIT"
] | null | null | null | custom_components/cisco_imc/__init__.py | ecoen66/cisco_imc | 88d2b832d13fa7f52d5cf9b781fa5034fafa197b | [
"MIT"
] | null | null | null | """
Custom integration to integrate Cisco UCS IMC with Home Assistant.
For more details about this integration, please refer to
https://github.com/ecoen66/cisco_imc
"""
from __future__ import annotations
import asyncio
import logging
from datetime import timedelta
from collections import defaultdict
#from typing import List
from imcsdk.imchandle import ImcHandle
from imcsdk.imcexception import ImcLoginError, ImcException, ImcConnectionError
from urllib.error import URLError
from homeassistant.helpers.update_coordinator import DataUpdateCoordinator, UpdateFailed
from homeassistant.exceptions import ConfigEntryAuthFailed, ConfigEntryNotReady
from homeassistant.config_entries import ConfigEntry, SOURCE_REAUTH, SOURCE_IMPORT
from homeassistant.core import Config, HomeAssistant, callback
import homeassistant.helpers.config_validation as cv
import homeassistant.helpers.entity_registry as er
from .services import async_setup_services, async_unload_services
from .switch import ImcPollingSwitch
from .binary_sensor import CiscoImcBinarySensor
from .sensor import CiscoImcRackUnitSensor
from .imc_device import CiscoImcDevice
from homeassistant.const import (
CONF_SCAN_INTERVAL,
CONF_IP_ADDRESS,
CONF_USERNAME,
CONF_PASSWORD,
EVENT_HOMEASSISTANT_CLOSE,
)
from .const import (
DOMAIN,
PLATFORMS,
STARTUP_MESSAGE,
DATA_API_CLIENT,
DATA_LISTENER,
RACK_UNIT_UPDATE_DELAY,
RACK_UNIT_SENSORS,
SWITCH,
BINARY_SENSOR,
SENSOR_TYPES,
SWITCH_TYPE,
BINARY_SENSOR_TYPE,
DEFAULT_SCAN_INTERVAL,
MIN_SCAN_INTERVAL,
)
CONFIG_SCHEMA = cv.removed(DOMAIN, raise_if_present=False)
_LOGGER = logging.getLogger(__name__)
@callback
def _async_configured_ips(hass):
"""Return a set of configured IMCs."""
_LOGGER.debug("Creating a list of configured IMCs")
return {
entry.data[CONF_IP_ADDRESS]
for entry in hass.config_entries.async_entries(DOMAIN)
if CONF_IP_ADDRESS in entry.data
}
async def async_setup(_hass: HomeAssistant, _config: Config):
"""Set up this integration using YAML is not supported."""
return True
async def async_setup_entry(hass: HomeAssistant, config_entry: ConfigEntry) -> bool:
"""Set up this integration using UI."""
hass.data.setdefault(DOMAIN, {})
_LOGGER.info(STARTUP_MESSAGE)
imc = config_entry.title
if not hass.data[DOMAIN]:
_LOGGER.debug(f"{imc} Calling setup services")
await async_setup_services(hass)
_LOGGER.debug(f"{imc} Returned from setup services")
if imc in hass.data[DOMAIN] and CONF_SCAN_INTERVAL in hass.data[DOMAIN][imc]:
_LOGGER.debug(f"{imc} imc found in hass.data[DOMAIN]")
scan_interval = hass.data[DOMAIN][imc][CONF_SCAN_INTERVAL]
hass.config_entries.async_update_entry(
config_entry, options={CONF_SCAN_INTERVAL: scan_interval}
)
hass.data[DOMAIN].pop(imc)
try:
_LOGGER.debug(f"{imc} Setting up coordinator")
coordinator = CiscoImcDataService(hass, config_entry)
_LOGGER.debug(f"{imc} Asking coordinator to login")
await coordinator.async_login()
_LOGGER.debug("Logged in to imc %s in __init__.py", imc)
except URLError as ex:
raise ConfigEntryAuthFailed(ex) from ex
except Exception as ex:
raise ConfigEntryNotReady(ex) from ex
async def _async_close_client(*_):
await coordinator.async_close()
@callback
def _async_create_close_task():
asyncio.create_task(_async_close_client())
config_entry.async_on_unload(
hass.bus.async_listen_once(EVENT_HOMEASSISTANT_CLOSE, _async_close_client)
)
config_entry.async_on_unload(_async_create_close_task)
# Fetch initial data so we have data when entities subscribe
entry_data = hass.data[DOMAIN][config_entry.entry_id] = {
"coordinator": coordinator,
"devices": dict[
str,
dict[
str,
type [CiscoImcDevice]
]
],
DATA_LISTENER: [config_entry.add_update_listener(update_listener)],
}
_LOGGER.debug(f"{imc} await coordinator.async_config_entry_first_refresh()")
await coordinator.async_config_entry_first_refresh()
    all_devices: dict[str, dict[str, type[CiscoImcDevice]]] = await get_homeassistant_components(hass, config_entry)
if not all_devices:
return False
entry_data["devices"] = all_devices.copy()
_LOGGER.debug(f"{imc} devices = {entry_data['devices']}")
hass.config_entries.async_setup_platforms(config_entry, PLATFORMS)
# entity_registry = er.async_get(hass)
# attrs: Dict[str, Any] = {ATTR_RESTORED: True}
# states.async_set(entry.entity_id, STATE_UNAVAILABLE, attrs)
# print(f'{}')
return True
async def get_homeassistant_components(hass, config_entry) -> dict[str, dict[str, type[CiscoImcDevice]]]:
"""Return a list of home assistant components for the IMC."""
platform_name = config_entry.title
    services: dict[str, dict[str, type[CiscoImcDevice]]] = {}
services.setdefault("sensor", {})
services.setdefault("switch", {})
services.setdefault("binary_sensor", {})
for key in RACK_UNIT_SENSORS:
for sensor_type in SENSOR_TYPES:
if sensor_type.key == key:
services["sensor"][key] = sensor_type
services["switch"][SWITCH] = SWITCH_TYPE
services["binary_sensor"][BINARY_SENSOR] = BINARY_SENSOR_TYPE
return services
async def async_unload_entry(hass, config_entry) -> bool:
"""Unload a config entry."""
unload_ok = await hass.config_entries.async_unload_platforms(
config_entry, PLATFORMS
)
await hass.data[DOMAIN].get(config_entry.entry_id)[
"coordinator"
].async_close()
for listener in hass.data[DOMAIN][config_entry.entry_id][DATA_LISTENER]:
listener()
imc = config_entry.title
if unload_ok:
hass.data[DOMAIN].pop(config_entry.entry_id)
_LOGGER.debug("Unloaded entry for %s", imc)
if not hass.data[DOMAIN]:
async_unload_services(hass)
return True
return False
async def update_listener(hass, config_entry):
"""Update when config_entry options update."""
imc = config_entry.title
coordinator = hass.data[DOMAIN][config_entry.entry_id]["coordinator"]
old_update_interval = coordinator.update_interval
coordinator.update_interval = config_entry.options.get(CONF_SCAN_INTERVAL)
if old_update_interval != coordinator.update_interval:
_LOGGER.debug(
"Changing scan_interval for %s from %s to %s",
imc,
old_update_interval,
coordinator.update_interval,
)
class CiscoImcDataService(DataUpdateCoordinator):
"""This class handle communication and stores the data."""
def __init__(self, hass, config_entry):
"""Initialize the class."""
self.hass = hass
self.config_entry = config_entry
self.imc = config_entry.data.get(CONF_IP_ADDRESS)[0]
self.username = self.config_entry.data.get(CONF_USERNAME)[0]
self.password = self.config_entry.data.get(CONF_PASSWORD)
_LOGGER.debug("about to setdefault custom_attributes for %s", self.imc)
if not hasattr(self.hass, 'custom_attributes'):
self.hass.custom_attributes = {}
_LOGGER.debug("about to setdefault custom_attributes.imc for %s", self.imc)
self.hass.custom_attributes.setdefault(self.imc, {})
_LOGGER.debug("about to set custom_attributes for %s", self.imc)
self.hass.custom_attributes[self.imc] = {}
_LOGGER.debug("about to set polling_switch for %s", self.imc)
self.hass.custom_attributes[self.imc]['polling_switch'] = True
# self.hass.data[DOMAIN][config_entry.entry_id]['devices']['switch'][SWITCH]._is_on = True
self.hass.custom_attributes[self.imc]['reachable'] = False
self.hass.custom_attributes[self.imc]['unreachable_counter'] = 0
self.update_interval = timedelta(seconds=MIN_SCAN_INTERVAL)
self.client = ImcHandle(
self.imc,
self.username,
self.password,
secure=True,
auto_refresh=True,
force=True,
timeout=60,
)
super().__init__(hass, _LOGGER, name=DOMAIN, update_interval=self.update_interval)
async def async_login(self):
response = False
self.hass.custom_attributes[self.imc]['reachable'] = False
try:
_LOGGER.debug(f"{self.imc} Logging in from CiscoImcDataService")
response = await self.hass.async_add_executor_job(self.client.login)
except URLError as ex:
self.hass.custom_attributes[self.imc]['reachable'] = False
self.hass.custom_attributes[self.imc]['unreachable_counter'] += 1
raise UpdateFailed("Unable to contact the IMC, skipping update") from ex
# _LOGGER.debug(f"{self.imc} Unable to contact the IMC, skipping update")
# return False
except ImcLoginError as ex:
_LOGGER.error("Could not login to the IMC %s", self.imc)
raise ConfigEntryAuthFailed from ex
except ImcException as ex:
_LOGGER.error("Exception logging in to the IMC %s", self.imc)
raise ConfigEntryNotReady from ex
_LOGGER.debug(f"{self.imc} Login from CiscoImcDataService = {response}")
self.hass.custom_attributes[self.imc]['reachable'] = response
_LOGGER.debug(f"{self.imc} Reachable set to {self.hass.custom_attributes[self.imc]['reachable']}")
return response
async def async_close(self):
response = await self.hass.async_add_executor_job(self.client.logout)
self.hass.custom_attributes[self.imc]['reachable'] = False
_LOGGER.debug(f"{self.imc} Logout from CiscoImcDataService = {response}")
return response
async def _async_update_data(self):
"""Update data."""
_LOGGER.debug(f"{self.imc} polling_switch = {self.hass.custom_attributes[self.imc]['polling_switch']}")
if self.hass.custom_attributes[self.imc]['polling_switch']:
_LOGGER.debug(f"{self.imc} reachable = {self.hass.custom_attributes[self.imc]['reachable']}")
if not self.hass.custom_attributes[self.imc]['reachable']:
result = await self.async_login()
if not result:
self.hass.custom_attributes[self.imc]['unreachable_counter'] += 1
return False
await self.hass.async_add_executor_job(self.update)
def update(self):
"""Update the data from the Cisco IMC API."""
try:
rack_unit = self.client.query_dn("sys/rack-unit-1")
except URLError as ex:
self.hass.custom_attributes[self.imc]['reachable'] = False
self.hass.custom_attributes[self.imc]['unreachable_counter'] += 1
raise UpdateFailed("Unable to contact the IMC, skipping update") from ex
except Exception as ex:
self.hass.custom_attributes[self.imc]['reachable'] = False
self.hass.custom_attributes[self.imc]['unreachable_counter'] += 1
raise UpdateFailed("Unable to contact the IMC, skipping update") from ex
self.hass.custom_attributes[self.imc].clear()
self.hass.custom_attributes[self.imc]['reachable'] = True
self.hass.custom_attributes[self.imc]['polling_switch'] = True
self.hass.custom_attributes[self.imc]['unreachable_counter'] = 0
for key, value in rack_unit.__dict__.items():
if key in RACK_UNIT_SENSORS:
self.hass.custom_attributes[self.imc][key] = value
_LOGGER.debug(f"Updated Cisco IMC Rack Unit {self.imc}: {self.hass.custom_attributes[self.imc]}")
def set_polling_state(self, new_state):
"""Update the polling status the Cisco IMC API."""
self.hass.custom_attributes[self.imc]['polling_switch'] = new_state
_LOGGER.debug(f"Updated Cisco IMC Polling {self.imc}: %s", self.hass.custom_attributes[self.imc]['polling_switch'])
def is_polling(self):
"""Return the polling status the Cisco IMC API."""
is_polling = self.hass.custom_attributes[self.imc]['polling_switch'] == True
_LOGGER.debug(f"Cisco IMC Polling {self.imc}: %s", is_polling)
return is_polling
def sensor_state(self, key):
"""Return the state of a Cisco IMC sensor."""
return self.hass.custom_attributes[self.imc][key]
| 38.543544 | 123 | 0.673471 | 1,558 | 12,835 | 5.324134 | 0.150193 | 0.040506 | 0.054008 | 0.092586 | 0.406269 | 0.353828 | 0.288849 | 0.185654 | 0.120434 | 0.103436 | 0 | 0.001312 | 0.227815 | 12,835 | 332 | 124 | 38.659639 | 0.835637 | 0.07051 | 0 | 0.230769 | 0 | 0.007692 | 0.14567 | 0.024938 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026923 | false | 0.011538 | 0.080769 | 0 | 0.157692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
350609a56f7bdc4d4bdaf1c7cad5025b3755e119 | 13,399 | py | Python | rogal/console/ui.py | kosciak/ecs-rogal | d553104e0ea350d11272d274a900419620b9389e | [
"MIT"
] | 4 | 2021-01-23T13:25:46.000Z | 2021-03-19T03:08:05.000Z | rogal/console/ui.py | kosciak/ecs-rogal | d553104e0ea350d11272d274a900419620b9389e | [
"MIT"
] | null | null | null | rogal/console/ui.py | kosciak/ecs-rogal | d553104e0ea350d11272d274a900419620b9389e | [
"MIT"
] | null | null | null | import string
from .. import components
from ..data import Decorations
from ..geometry import Size
from ..tiles import Colors
from ..events import handlers
from .. import render
from .core import Align, Padding, ZOrder
from . import basic
from . import containers
from . import decorations
from . import renderers
from . import widgets
# TODO: frames overlapping, and fixing overlapped/overdrawn characters to merge borders/decorations
# TODO: Scrollable Panels?
class WidgetsBuilder:
def __init__(self, ecs):
self.ecs = ecs
self.palette = self.ecs.resources.wrapper.palette
self.default_colors = Colors(self.palette.fg, self.palette.bg)
self.window_frame = decorations.Frame(
*Decorations.DSLINE,
colors=None,
)
self.title_frame = decorations.Frame(
*Decorations.MINIMAL_DSLINE,
colors=None,
)
self.title_align = Align.TOP_CENTER
self.button_frame = decorations.Frame(
*Decorations.LINE,
colors=None,
)
self.button_width = 8
self.buttons_align = Align.BOTTOM_CENTER
def create_window_title(self, title):
if title is None:
return
title = decorations.Padded(
content=decorations.Framed(
content=basic.Text(
title,
colors=self.default_colors,
align=self.title_align,
# width=10,
),
frame=self.title_frame,
align=self.title_align,
),
padding=Padding(0, 1),
)
return title
def create_window(self, title=None, on_key_press=None):
window = widgets.Window(
frame=self.window_frame,
default_colors=self.default_colors,
title=self.create_window_title(title),
on_key_press=on_key_press,
)
return window
def create_modal_window(self, align, size, title=None, on_key_press=None):
        window = widgets.ModalWindow(
size=size,
align=align,
default_colors=self.default_colors,
frame=self.window_frame,
title=self.create_window_title(title),
on_key_press=on_key_press,
)
return window
def create_button(self, text, callback, value):
button = widgets.Button(
value, callback,
default_colors=self.default_colors,
text=basic.Text(
text,
width=self.button_width,
align=Align.CENTER,
),
frame=self.button_frame,
# selected_colors=self.default_colors.invert(),
press_colors=Colors(
bg=self.palette.bg,
fg=self.palette.BRIGHT_WHITE
),
selected_renderers=[
renderers.InvertColors(),
],
)
return button
def create_buttons_row(self, callback, buttons):
buttons_row = containers.Row(
align=self.buttons_align,
)
for text, value in buttons:
buttons_row.append(self.create_button(text, callback, value))
return buttons_row
def create_text_input(self, width, text=None):
text_input = widgets.TextInput(
self.ecs,
width=width,
default_text=text,
# default_colors=self.default_colors,
default_colors=Colors(fg=self.palette.fg, bg=self.palette.BRIGHT_BLACK),
)
return text_input
def create_list_item(self, width, item, key_binding, callback, index):
index_text = decorations.Padded(
content=basic.Text(
f'{key_binding})',
),
padding=Padding(0, 1),
)
item_text = basic.Text(
item,
colors=Colors(
fg=self.palette.BLUE,
),
width=0,
)
list_item = widgets.ListItem(
self.ecs,
key_binding=key_binding,
callback=callback, value=index,
index=index_text, item=item_text,
default_colors=self.default_colors,
selected_renderers=[
renderers.InvertColors(),
renderers.PaintPanel(Colors(bg=item_text.colors.fg)),
],
)
return list_item
def create_list_separator(self):
separator = basic.Text(
'-'*5,
width=0,
align=Align.CENTER,
)
return separator
def create_yes_no_prompt(self, context):
title = context['title']
msg = context['msg']
callback = context['callback']
window = self.create_modal_window(
size=Size(40, 8),
align=Align.TOP_CENTER,
title=title,
on_key_press={
handlers.YesNoKeyPress(self.ecs): callback,
handlers.DiscardKeyPress(self.ecs): callback,
},
)
msg = basic.Text(
msg,
align=Align.TOP_CENTER,
)
buttons = self.create_buttons_row(
callback=callback,
buttons=[
['No', False],
['Yes', True],
],
)
window.extend([
decorations.Padded(
content=msg,
padding=Padding(1, 0),
),
decorations.Padded(
content=buttons,
padding=Padding(1, 0, 0, 0),
),
])
widgets_layout = decorations.Padded(
content=window,
padding=Padding(12, 0),
)
return widgets_layout
def create_text_input_prompt(self, context):
title = context['title']
msg = context['msg']
callback = context['callback']
prompt = basic.Text(
"Text: ",
)
text_input = self.create_text_input(
width=26,
)
input_row = containers.Row(
align=Align.TOP_CENTER,
)
input_row.extend([
prompt,
text_input,
])
buttons = self.create_buttons_row(
callback=callback,
buttons=[
['Cancel', False],
['OK', text_input],
],
)
window = self.create_modal_window(
align=Align.TOP_CENTER,
size=Size(40, 8),
title=title,
on_key_press={
handlers.OnKeyPress(self.ecs, 'common.SUBMIT', text_input): callback,
handlers.DiscardKeyPress(self.ecs): callback,
},
)
window.extend([
decorations.Padded(
content=input_row,
padding=Padding(1, 0),
),
decorations.Padded(
content=buttons,
padding=Padding(1, 0, 0, 0),
),
])
widgets_layout = decorations.Padded(
content=window,
padding=Padding(12, 0),
)
return widgets_layout
def create_alphabetic_select_prompt(self, context):
title = context['title']
msg = 'Select something'
callback = context['callback']
items = [f'index: {i}' for i in range(10)]
msg = basic.Text(
msg,
align=Align.TOP_CENTER,
)
buttons = self.create_buttons_row(
callback=callback,
buttons=[
['Cancel', False],
],
)
window = self.create_modal_window(
align=Align.TOP_CENTER,
size=Size(
20,
self.window_frame.extents.height+msg.height+buttons.height+len(items)+4
),
title=title,
on_key_press={
handlers.DiscardKeyPress(self.ecs): callback,
},
)
items_list = widgets.ListBox(
self.ecs,
align=Align.TOP_LEFT,
)
# TODO: Move to create_list()
for index, item in enumerate(items):
            if index == len(items) // 2:
items_list.append_separator(
self.create_list_separator()
)
key_binding = string.ascii_lowercase[index]
items_list.append_item(
self.create_list_item(18, item, key_binding, callback, index)
)
window.extend([
decorations.Padded(
content=msg,
padding=Padding(1, 0),
),
decorations.Padded(
content=items_list,
padding=Padding(3, 0, 0, 0),
),
decorations.Padded(
content=buttons,
padding=Padding(1, 0, 0, 0),
),
])
widgets_layout = decorations.Padded(
content=window,
padding=Padding(8, 0),
)
return widgets_layout
def create(self, widget_type, context):
# TODO: Move layout definitions to data/ui.yaml ?
if widget_type == 'YES_NO_PROMPT':
widgets_layout = self.create_yes_no_prompt(context)
if widget_type == 'TEXT_INPUT_PROMPT':
widgets_layout = self.create_text_input_prompt(context)
if widget_type == 'ALPHABETIC_SELECT_PROMPT':
widgets_layout = self.create_alphabetic_select_prompt(context)
if widget_type == 'IN_GAME':
widgets_layout = containers.Split(bottom=12)
camera = self.create_window(title='mapcam')
camera.content.append(render.Camera(self.ecs))
msg_log = self.create_window(title='logs')
msg_log.content.append(render.MessageLog())
widgets_layout.extend([camera, msg_log])
return widgets_layout
class UIManager:
def __init__(self, ecs):
self.ecs = ecs
self.widgets_builder = WidgetsBuilder(self.ecs)
def create(self, widget_type, context):
widget = self.ecs.create(
components.CreateUIWidget(
widget_type=widget_type,
context=context,
),
)
return widget
def destroy(self, widget):
self.ecs.manage(components.DestroyUIWidget).insert(widget)
def create_child(self, parent):
widget = self.ecs.create(
components.ParentUIWidget(parent),
)
return widget
def redraw(self, widget):
self.ecs.manage(components.DestroyUIWidgetChildren).insert(widget)
content = self.ecs.manage(components.UIWidget).get(widget)
if content:
content.invalidate()
def insert(self, widget, *,
ui_widget=None,
panel=None,
z_order=None,
renderer=None,
):
if ui_widget:
self.ecs.manage(components.UIWidget).insert(
widget, ui_widget, needs_update=False,
)
if panel:
self.ecs.manage(components.Console).insert(
widget, panel, z_order or ZOrder.BASE,
)
if renderer:
self.ecs.manage(components.PanelRenderer).insert(
widget, renderer,
)
def bind(self, widget, *,
on_text_input=None,
on_key_press=None,
on_mouse_click=None, on_mouse_press=None, on_mouse_up=None,
on_mouse_in=None, on_mouse_over=None, on_mouse_out=None,
on_mouse_wheel=None,
):
if on_text_input:
self.ecs.manage(components.OnTextInput).insert(
widget, on_text_input,
)
if on_key_press:
self.ecs.manage(components.OnKeyPress).insert(
widget, on_key_press,
)
if on_mouse_click:
self.ecs.manage(components.OnMouseClick).insert(
widget, on_mouse_click,
)
if on_mouse_press:
self.ecs.manage(components.OnMousePress).insert(
widget, on_mouse_press,
)
if on_mouse_up:
self.ecs.manage(components.OnMouseUp).insert(
widget, on_mouse_up,
)
if on_mouse_in:
self.ecs.manage(components.OnMouseIn).insert(
widget, on_mouse_in,
)
if on_mouse_over:
self.ecs.manage(components.OnMouseOver).insert(
widget, on_mouse_over,
)
if on_mouse_out:
self.ecs.manage(components.OnMouseOut).insert(
widget, on_mouse_out,
)
if on_mouse_wheel:
self.ecs.manage(components.OnMouseWheel).insert(
widget, on_mouse_wheel,
)
def grab_focus(self, widget):
self.ecs.manage(components.GrabInputFocus).insert(widget)
# TODO: get_focus -> just set current InputFocus value, not higher one!
def release_focus(self, widget):
self.ecs.manage(components.InputFocus).remove(widget)
def create_widget(self, widget_type, context):
widget = self.widgets_builder.create(
widget_type, context,
)
return widget
| 28.877155 | 99 | 0.5381 | 1,326 | 13,399 | 5.240573 | 0.153092 | 0.03425 | 0.031803 | 0.056267 | 0.361203 | 0.262052 | 0.200029 | 0.175421 | 0.167362 | 0.167362 | 0 | 0.006886 | 0.371371 | 13,399 | 463 | 100 | 28.939525 | 0.818117 | 0.026868 | 0 | 0.373711 | 0 | 0 | 0.014967 | 0.001842 | 0 | 0 | 0 | 0.00216 | 0 | 1 | 0.059278 | false | 0 | 0.033505 | 0 | 0.139175 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3508f24332d6b901219f1252812a2dba6160ada9 | 4,905 | py | Python | lamination.py | DannyStoll1/polymath-fractal-geometry | 363c0c795ad19839afc6435c10c1eda37a62ebae | [
"MIT"
] | 1 | 2021-08-05T02:44:20.000Z | 2021-08-05T02:44:20.000Z | lamination.py | DannyStoll1/polymath-fractal-geometry | 363c0c795ad19839afc6435c10c1eda37a62ebae | [
"MIT"
] | null | null | null | lamination.py | DannyStoll1/polymath-fractal-geometry | 363c0c795ad19839afc6435c10c1eda37a62ebae | [
"MIT"
] | 1 | 2021-08-05T02:44:30.000Z | 2021-08-05T02:44:30.000Z | #! /usr/bin/env python
from math import ceil, log10, cos, sin, tan, sqrt, pi
from fractions import Fraction
import matplotlib.pyplot as plt
from matplotlib.patches import Arc
from matplotlib import cm
from matplotlib.lines import Line2D
class Lamination:
def __init__(self, period=1, degree=2):
self.degree = degree
self.max_period = 1
self.size = 1
# Maintain list of arcs of form (id, angle1, angle2)
self.arcs = [ (0, Fraction(0,1), Fraction(0,1)) ]
self.endpoints = {Fraction(0,1)}
self.period_cutoffs = [0,1]
self.extend_to_period(period)
# Add in arcs of next period
def extend(self):
self.max_period = p = self.max_period + 1
n = self.degree**p - 1
counters = [0] * n
for k in range(n):
# Ignore arcs that have already been drawn
if Fraction(k,n) in self.endpoints:
counters[k] = -1
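        # Each existing arc toggles one bit per angle lying under it, so two
        # angles share a bitmask exactly when they sit under the same set of
        # arcs; consecutive equal-bitmask angles are paired into new arcs.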
for (id, a, b) in self.arcs:
for k in range(ceil(n*a), ceil(n*b)):
if counters[k] != -1:
counters[k] ^= (1<<id)
angles = {}
it = enumerate(counters)
next(it)
for k, counter in it:
if counter == -1:
continue
if counter in angles.keys():
id = self.size
self.arcs.append((id, angles[counter], Fraction(k,n)))
self.size += 1
self.endpoints.add(angles[counter])
self.endpoints.add(Fraction(k, n))
del angles[counter]
else:
angles[counter] = Fraction(k,n)
self.period_cutoffs.append(self.size)
def __len__(self):
return self.size
def extend_to_period(self, per):
for _ in range(self.max_period, per):
self.extend()
def arcs_of_period(self, per=1, sort=False):
i = self.period_cutoffs[per-1]
j = self.period_cutoffs[per]
out = [(a,b) for _,a,b in self.arcs[i:j]]
if sort:
out.sort()
return out
def arc_lengths_of_period(self, per=1):
i = self.period_cutoffs[per-1]
j = self.period_cutoffs[per]
return [b-a for _,a,b in self.arcs[i:j]]
def arc_lengths_cumulative(self, max_per=-1):
j = self.period_cutoffs[max_per]
return [b-a for _,a,b in self.arcs[:j]]
def arc_lengths_cumulative_set(self, max_per=-1):
j = self.period_cutoffs[max_per]
return {b-a for _,a,b in self.arcs[:j]}
def draw(self, max_period = None):
if max_period is None or max_period >= self.max_period:
max_period = self.max_period
fig, ax = plt.subplots()
ax.set_xlim(-1.4,1.4)
ax.set_ylim(-1.1,1.1)
ax.set_aspect(1)
ax.axis('off')
def draw_arc(a, b, color='blue', lw=1):
u = 2*pi*a
v = 2*pi*b
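            # Render the chord (a, b) as the circular arc orthogonal to the
            # unit circle through the two boundary points (the hyperbolic
            # geodesic); (x, y) is its center and r its radius.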
# center of arc
x = (sin(u)-sin(v))/sin(u-v)
y = (cos(v)-cos(u))/sin(u-v)
r = tan((v-u)/2)
start = v*180/pi + 90
end = u*180/pi + 270
arc = Arc(
[x,y],
2*r,2*r,
angle = 0,
theta1 = start,
theta2 = end,
color = color,
lw = lw)
ax.add_patch(arc)
disk = plt.Circle((0,0), 1, fill=False)
ax.add_artist(disk)
cmap = cm.get_cmap("Set1", max_period)
for p in reversed(range(1, max_period)):
i = self.period_cutoffs[p]
j = self.period_cutoffs[p+1]
color = cmap((p-1)/max_period)
for _,a,b in self.arcs[i:j]:
draw_arc(a,b,
color=color,
lw = max_period / (2*p+3))
p = max_period
i = self.period_cutoffs[p]
color = cmap((p-1)/max_period)
for _,a,b in self.arcs[i:]:
draw_arc(a,b,
color=color,
lw = max_period / (2*p+3))
legend_icons = [
Line2D([0], [0], color=cmap((p-2)/max_period), lw=4)
for p in range(2, max_period+1)]
legend_labels = [
f"Period {i}"
for i in range(2,max_period+1)]
ax.legend(legend_icons, legend_labels, loc='upper right',
fontsize='x-small')
plt.title(f"Mandelbrot lamination up to period {max_period}")
plt.savefig(f"lamination_{max_period}.png", dpi=300)
plt.show()
if __name__ == '__main__':
lam = Lamination(14)
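    # Leading significant digit of each distinct arc length, as a quick
    # Benford's-law style check on the distribution of gap sizes.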
arc_length_first_digits = [x//10**int(log10(x)-1)
for x in lam.arc_lengths_cumulative_set() if x != 0]
plt.hist(arc_length_first_digits, bins=9)
plt.show()
lam.draw(9)
| 28.684211 | 70 | 0.507849 | 683 | 4,905 | 3.505124 | 0.228404 | 0.082707 | 0.078112 | 0.023392 | 0.284879 | 0.224311 | 0.186717 | 0.163325 | 0.154971 | 0.154971 | 0 | 0.030655 | 0.368196 | 4,905 | 170 | 71 | 28.852941 | 0.741852 | 0.031397 | 0 | 0.152 | 0 | 0 | 0.025495 | 0.005689 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.048 | 0.008 | 0.176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
35090affe3acbefe412b188fa97fa107eac09458 | 4,663 | py | Python | xeputils/mail.py | xsf/xsf-tools | 7628fe8610e3d7f2247e5e7634bbf6c754c2d033 | [
"MIT"
] | 12 | 2015-01-29T12:59:16.000Z | 2019-01-27T22:06:12.000Z | xeputils/mail.py | xsf/xsf-tools | 7628fe8610e3d7f2247e5e7634bbf6c754c2d033 | [
"MIT"
] | 12 | 2015-07-09T00:14:22.000Z | 2017-02-02T17:22:53.000Z | xeputils/mail.py | xsf/xsf-tools | 7628fe8610e3d7f2247e5e7634bbf6c754c2d033 | [
"MIT"
] | 8 | 2015-05-29T15:31:56.000Z | 2021-11-02T04:06:25.000Z | # File: mail.py
# Version: 0.3
# Description: utility functions for handling XEPs
# Last Modified: 2014-09-23
# Based on scripts by:
# Tobias Markmann (tm@ayena.de)
# Peter Saint-Andre (stpeter@jabber.org)
# Authors:
# Winfried Tilanus (winfried@tilanus.com)
## LICENSE ##
#
# Copyright (c) 1999 - 2014 XMPP Standards Foundation
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
## END LICENSE ##
import smtplib
import sys
import datetime
class BaseMessage(object):
"""Class to be subclassed for sending messages. When sublassing the variable
"MESSAGETEXT" has to be overwritten. Optionally the method "makeSubject"
can be overwritten.
Class properties:
MESSAGETEXT (str): String containing the message, including the
headers. Python string formatting as defined in:
https://docs.python.org/2/library/string.html#format-string-syntax
can be used in it. The 'config' and 'xep' objects
will be available when formatting as well as the
string 'subject' (as set by the 'makeSubject'
method').
xep (xep object): The XEP (object) to send the message about.
config (config object: The config (object) when composing and sending
the message."""
MESSAGETEXT = """From: {config.mailfrom}
To: {config.mailto}
Subject: {subject}
This is a test message about XEP-{xep.nrFormatted} ({xep.title}), please
ignore.
"""
def __init__(self, config, xep):
"""Create a new message.
Arguments:
config (config object): The config (object) to use.
xep (xep object): The XEP (object) this message is about."""
self.config = config
self.xep = xep
def makeSubject(self):
"""Class to be overwritten, must return a string containing the
subject of the message. Can be kept at its default when the string
'subject' is not used in MESSAGETEXT"""
return "Testmessage about XEP-{xep.nrFormatted}, ignore.".format(xep=self.xep)
def send(self, subject=""):
"""Formats and sends the message"""
subject = self.makeSubject()
msg = self.MESSAGETEXT.format(
config=self.config,
xep=self.xep,
subject=subject)
server = smtplib.SMTP(self.config.mailserver)
server.sendmail(self.config.mailfrom, self.config.mailto, msg)
server.quit()
class LogMail():
MESSAGETEXT = """From: {config.mailfrom}
To: {config.logtomail}
Subject: There were errors during the run of {script} on {date}
{logs}
"""
def __init__(self, config, logs):
self.config = config
self.logs = logs
def send(self):
msg = self.MESSAGETEXT.format(
config=self.config,
script=sys.argv[0],
date=datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
logs=self.logs)
server = smtplib.SMTP(self.config.mailserver)
server.sendmail(self.config.mailfrom, self.config.logtomail, msg)
server.quit()
class Deferred(BaseMessage):
MESSAGETEXT = """From: XMPP Extensions Editor <{config.mailfrom}>
To: {config.mailto}
Subject: DEFERRED: XEP-{xep.nrFormatted} ({xep.title})
XEP-{xep.nrFormatted} ({xep.title}) has been Deferred because of inactivity.
Abstract: {xep.abstract}
URL: http://xmpp.org/extensions/xep-{xep.nrFormatted}.html
If and when a new revision of this XEP is published, its status will be changed back to Experimental.
"""
| 36.716535 | 101 | 0.660305 | 599 | 4,663 | 5.126878 | 0.404007 | 0.039075 | 0.027678 | 0.021491 | 0.194725 | 0.154347 | 0.077499 | 0.051449 | 0.051449 | 0.051449 | 0 | 0.005688 | 0.245979 | 4,663 | 126 | 102 | 37.007937 | 0.867747 | 0.541497 | 0 | 0.333333 | 0 | 0.019608 | 0.372478 | 0.043973 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098039 | false | 0 | 0.058824 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
35125afe0ef557ad2b51230e5f38ad37600cc6ca | 2,375 | py | Python | list_games.py | clach04/tts_save | ccd598ccc93a9ed62d32c62fff2b43e010b5648c | [
"MIT"
] | null | null | null | list_games.py | clach04/tts_save | ccd598ccc93a9ed62d32c62fff2b43e010b5648c | [
"MIT"
] | null | null | null | list_games.py | clach04/tts_save | ccd598ccc93a9ed62d32c62fff2b43e010b5648c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: ascii -*-
# vim:ts=4:sw=4:softtabstop=4:smarttab:expandtab
#
# wip TableTop Simulator json parser
# Copyright (C) 2020 Chris Clark
import glob
import logging
import os
import sys
try:
#raise ImportError()
# Python 2.6+
import json
except ImportError:
#raise ImportError()
# from http://code.google.com/p/simplejson
import simplejson as json
logging.basicConfig() ## NO debug, no info. But logs warnings
log = logging.getLogger("mylogger")
log.setLevel(logging.DEBUG)
def main(argv=None):
if argv is None:
argv = sys.argv
try:
game_dirname = argv[1]
except IndexError:
game_dirname = os.path.join(os.environ['USERPROFILE'], 'Documents', 'My Games', 'Tabletop Simulator', 'Mods', 'Workshop')
log.info('game_dirname %r', game_dirname)
filename = os.path.join(game_dirname, 'WorkshopFileInfos.json')
f = open(filename, 'rb')
workshop = json.load(f)
log.info('read complete, workshop len %r', len(workshop))
f.close()
for counter, filename in enumerate(glob.glob(os.path.join(game_dirname, '*.json'))):
log.info('game_dirname %r', filename)
if filename.endswith('WorkshopFileInfos.json'):
continue
f = open(filename, 'rb')
data = json.load(f)
#data = f.read()
log.info('read complete, len %r', len(data))
f.close()
log.info('game %r', data['SaveName'])
log.info('game %r', data['GameMode'])
        game_id = os.path.splitext(os.path.basename(filename))[0]
game_id = int(game_id) # assume no leading zeroes
log.info('game_id %r', game_id)
log.info('https://steamcommunity.com/sharedfiles/filedetails/?id=%d', game_id)
tmp_names = glob.glob(os.path.join(game_dirname, str(game_id)+'.*'))
#log.info('tmp_names %r', tmp_names)
tmp_names = [os.path.basename(x) for x in tmp_names]
#log.info('tmp_names %r', tmp_names)
tmp_names.remove('%d.json' % game_id)
assert len(tmp_names) == 1
log.info('game image filename %r', tmp_names[0])
log.info('counter %r', counter)
    log.info('workshop len %r', len(workshop))  # NOTE: seeing dupes in here, not sure if this explains all the count differences though
return 0
if __name__ == "__main__":
sys.exit(main())
| 32.094595 | 129 | 0.635368 | 328 | 2,375 | 4.496951 | 0.396341 | 0.061695 | 0.044746 | 0.028475 | 0.175593 | 0.082712 | 0.082712 | 0.04339 | 0.04339 | 0 | 0 | 0.006997 | 0.217684 | 2,375 | 73 | 130 | 32.534247 | 0.786868 | 0.206316 | 0 | 0.125 | 0 | 0 | 0.196572 | 0.023567 | 0 | 0 | 0 | 0 | 0.020833 | 1 | 0.020833 | false | 0 | 0.145833 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3512fdbd95859b20640e0bf38d111f5328aee1fc | 3,003 | py | Python | vsmomi/virtual_machine_disk_layout.py | dahuebi/vsmomi | 8e86b8624cb6b203385842503b75059dde1c803f | [
"Apache-2.0"
] | null | null | null | vsmomi/virtual_machine_disk_layout.py | dahuebi/vsmomi | 8e86b8624cb6b203385842503b75059dde1c803f | [
"Apache-2.0"
] | null | null | null | vsmomi/virtual_machine_disk_layout.py | dahuebi/vsmomi | 8e86b8624cb6b203385842503b75059dde1c803f | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import (absolute_import, division,
print_function, unicode_literals)
from builtins import *
from future.builtins.disabled import *
from pyVmomi import vim
class VirtualMachineDiskLayout(object):
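    """Map a VM's disk layout as {controller number: {"ctrl": device, "slots": {slot number: disk}}}."""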
def __init__(self, vm):
self.layout = {}
self.__layout(vm)
def delSlot(self, ctrlNr, slotNr):
layout = self.layout
try:
del layout[ctrlNr]["slots"][slotNr]
except KeyError:
pass
def getController(self, ctrlNr):
layout = self.layout
ctrl = None
try:
ctrl = layout[ctrlNr]["ctrl"]
except KeyError:
pass
return ctrl
def addController(self, ctrlNr, ctrl):
layout = self.layout
assert ctrlNr not in layout, "%d" % (ctrlNr)
layout[ctrlNr] = {"ctrl": ctrl, "slots": {}}
def addDisk(self, ctrlNr, slotNr, disk):
layout = self.layout
assert ctrlNr in layout, \
"controller '{}' must exist".format(ctrlNr)
assert slotNr not in layout[ctrlNr]["slots"], \
"slot '{}' must not exist".format((ctrlNr, slotNr))
layout[ctrlNr]["slots"][slotNr] = disk
def getDisk(self, ctrlNr, slotNr):
layout = self.layout
disk = None
try:
disk = layout[ctrlNr]["slots"][slotNr]
except KeyError:
pass
return disk
def getFreeSlot(self):
layout = self.layout
for ctrlNr in sorted(layout.keys()):
slotList = layout[ctrlNr]["slots"]
for slotNr in range(0, 16): # slots per controller 16
if slotNr not in slotList.keys():
return ctrlNr, slotNr
assert False, "%d-%d" % (ctrlNr, slotNr)
def __iter__(self):
layout = self.layout
for ctrlNr in sorted(layout.keys()):
slotList = layout[ctrlNr]["slots"]
for slotNr in sorted(slotList.keys()):
disk = layout[ctrlNr]["slots"][slotNr]
yield (ctrlNr, slotNr, disk)
def __layout(self, vm):
# layout:
# {controllerNumber (0...)}{'slots'}{slotNumber (0..16)} = disk
res = {}
ctrls = []
disks = []
for dev in vm.config.hardware.device:
# vim.vm.device.VirtualController
# vim.vm.device.VirtualIDEController
# vim.vm.device.VirtualSCSIController
if isinstance(dev, (vim.vm.device.VirtualSCSIController, vim.vm.device.VirtualIDEController)):
ctrls.append(dev)
if isinstance(dev, (vim.vm.device.VirtualDisk, vim.vm.device.VirtualCdrom)):
disks.append(dev)
for ctrlNr in range(0, len(ctrls)):
ctrl = ctrls[ctrlNr]
ctrlDisks = [x for x in disks if x.key in ctrl.device]
self.addController(ctrlNr, ctrl)
for disk in ctrlDisks:
self.addDisk(ctrlNr, disk.unitNumber, disk)
| 33 | 106 | 0.564103 | 321 | 3,003 | 5.218069 | 0.261682 | 0.065672 | 0.076418 | 0.054925 | 0.269254 | 0.217313 | 0.14806 | 0.099104 | 0.099104 | 0.099104 | 0 | 0.005403 | 0.322011 | 3,003 | 90 | 107 | 33.366667 | 0.817289 | 0.072594 | 0 | 0.305556 | 0 | 0 | 0.037824 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 1 | 0.125 | false | 0.041667 | 0.055556 | 0 | 0.236111 | 0.013889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
35137da8d9337fb37d12ad5ac0b81ac810ed79aa | 1,271 | py | Python | applewatch_dataprocessing/source/preprocessing/psg/psg_label_service.py | bzhai/Ubi-SleepNet | 27837827dec608d06659421d073872fb1f68453e | [
"MIT"
] | 3 | 2022-01-22T15:55:31.000Z | 2022-01-28T16:09:02.000Z | applewatch_dataprocessing/source/preprocessing/psg/psg_label_service.py | bzhai/Ubi-SleepNet | 27837827dec608d06659421d073872fb1f68453e | [
"MIT"
] | null | null | null | applewatch_dataprocessing/source/preprocessing/psg/psg_label_service.py | bzhai/Ubi-SleepNet | 27837827dec608d06659421d073872fb1f68453e | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
from source.constants import Constants
from source.preprocessing.psg.psg_service import PSGService
class PSGLabelService(object):
@staticmethod
def load(subject_id):
psg_label_path = PSGLabelService.get_path(subject_id)
        feature = pd.read_csv(str(psg_label_path), header=None).values  # savetxt output has no header row
return feature
@staticmethod
def get_path(subject_id):
return Constants.FEATURE_FILE_PATH.joinpath(subject_id + '_psg_labels.out')
@staticmethod
def build(subject_id, valid_epochs):
psg_array = PSGService.load_cropped_array(subject_id)
labels = []
idx = psg_array[:, 0]
original_labels = []
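        # Interpolate the PSG label at every epoch timestamp; the exact-match
        # lookups collected in original_labels verify below that interpolation
        # reproduces the recorded stages exactly.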
for epoch in valid_epochs:
value = np.interp(epoch.timestamp, psg_array[:, 0], psg_array[:, 1])
labels.append(value)
            original_labels.append(psg_array[np.where(idx == epoch.timestamp)[0], 1][0])
        assert np.abs(np.asarray(labels) - np.asarray(original_labels)).sum() == 0, \
            "Label interpolation error"
return np.array(labels)
@staticmethod
def write(subject_id, labels):
psg_labels_path = PSGLabelService.get_path(subject_id)
np.savetxt(psg_labels_path, labels, fmt='%f')
| 33.447368 | 93 | 0.672699 | 161 | 1,271 | 5.080745 | 0.385093 | 0.08802 | 0.051345 | 0.05868 | 0.085575 | 0.085575 | 0 | 0 | 0 | 0 | 0 | 0.007064 | 0.220299 | 1,271 | 37 | 94 | 34.351351 | 0.818365 | 0 | 0 | 0.133333 | 0 | 0 | 0.033045 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 1 | 0.133333 | false | 0 | 0.133333 | 0.033333 | 0.4 | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3516a541b3c782940cfd27f1c823223d5984df01 | 1,200 | py | Python | plugins/active_directory_ldap/unit_test/test_action_move_object.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 46 | 2019-06-05T20:47:58.000Z | 2022-03-29T10:18:01.000Z | plugins/active_directory_ldap/unit_test/test_action_move_object.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 386 | 2019-06-07T20:20:39.000Z | 2022-03-30T17:35:01.000Z | plugins/active_directory_ldap/unit_test/test_action_move_object.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 43 | 2019-07-09T14:13:58.000Z | 2022-03-28T12:04:46.000Z | from unittest import TestCase, mock
from komand_active_directory_ldap.actions.move_object import MoveObject
from komand_active_directory_ldap.actions.move_object.schema import Input, Output
from unit_test.common import MockServer
from unit_test.common import MockConnection
from unit_test.common import default_connector
class TestActionMoveObject(TestCase):
@mock.patch("ldap3.Server", mock.MagicMock(return_value=MockServer))
@mock.patch("ldap3.Connection", mock.MagicMock(return_value=MockConnection()))
@default_connector(action=MoveObject())
def test_move_object(self, action):
actual = action.run({Input.DISTINGUISHED_NAME: "CN=Users,DC=example,DC=com"})
expected = {Output.SUCCESS: True}
self.assertEqual(actual, expected)
@mock.patch("ldap3.Server", mock.MagicMock(return_value=MockServer))
@mock.patch("ldap3.Connection", mock.MagicMock(return_value=MockConnection()))
@default_connector(action=MoveObject())
def test_move_object_false(self, action):
actual = action.run({Input.DISTINGUISHED_NAME: "CN=wrong_result,DC=example,DC=com"})
expected = {Output.SUCCESS: False}
self.assertEqual(actual, expected)
| 44.444444 | 92 | 0.765 | 147 | 1,200 | 6.068027 | 0.346939 | 0.044843 | 0.06278 | 0.107623 | 0.742152 | 0.661435 | 0.661435 | 0.58296 | 0.479821 | 0.369955 | 0 | 0.003806 | 0.124167 | 1,200 | 26 | 93 | 46.153846 | 0.84491 | 0 | 0 | 0.380952 | 0 | 0 | 0.095833 | 0.049167 | 0 | 0 | 0 | 0 | 0.095238 | 1 | 0.095238 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
35189997329c006fdc801e2d7e871b43c3b400c1 | 3,671 | py | Python | src/plugins/weibo/data_source.py | AmemachiF/Hairpin | 82040ab45f062787da759cddfa4a24913ee49f07 | [
"MIT"
] | 2 | 2021-10-21T00:01:16.000Z | 2021-12-16T14:13:55.000Z | src/plugins/weibo/data_source.py | AmemachiF/Hairpin | 82040ab45f062787da759cddfa4a24913ee49f07 | [
"MIT"
] | null | null | null | src/plugins/weibo/data_source.py | AmemachiF/Hairpin | 82040ab45f062787da759cddfa4a24913ee49f07 | [
"MIT"
] | 1 | 2021-12-09T14:49:28.000Z | 2021-12-09T14:49:28.000Z | # @Author: South
# @Date: 2021-08-14 10:56
import asyncio
import base64
from typing import Optional
import httpx
from playwright.async_api import Browser, async_playwright
weibo_url = "https://m.weibo.cn/detail/%s"
space_history = "https://m.weibo.cn/api/container/getIndex?type=uid&value=%s&containerid=1076037713357552"
space_headers = {"Origin": "https://weibo.com",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
"Connection": "close",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_0_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36",
"Referer": "https://weibo.com/", "Sec-Fetch-Site": "none", "Sec-Fetch-Dest": "document",
"DNT": "1", "Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "zh-TW,zh-CN;q=0.9,zh;q=0.8,en;q=0.7",
"Sec-Fetch-Mode": "navigate"}
_browser: Optional[Browser] = None
async def init(**kwargs) -> Browser:
global _browser
browser = await async_playwright().start()
_browser = await browser.chromium.launch(**kwargs)
return _browser
async def get_browser(**kwargs) -> Browser:
return _browser or await init(**kwargs)
async def get_weibo_list(uid, offset_weibo_id=0):
"""
    Fetch one page of a user's Weibo timeline.

    Args:
        offset_weibo_id (int, optional): per-page post-id offset, default 0
            (currently unused by this function).

    Returns:
        dict. "weibo_list" holds the post ids of the page; "next_offset" is
        the index of the next page, or -1 when there is no next page.
"""
async with httpx.AsyncClient() as client:
result = {}
try:
data = (await client.get(headers=space_headers, url=space_history % uid)).json()[
"data"]
print(space_history % uid)
weibo_list = []
for card in data["cards"]:
weibo_list.append(card["mblog"]["id"])
result["weibo_list"] = weibo_list
# if data["has_more"] == 1:
# result["next_offset"] = data["next_offset"]
# else:
result["next_offset"] = -1
except httpx.ConnectTimeout:
await client.aclose()
return {"weibo": [], "next_offset": -1}
return result
async def get_weibo_screenshot(weibo_id, retry=3):
"""
    Screenshot the main content of a Weibo post.

    Args:
        weibo_id (int): post id
        retry (int, optional): number of retries, default 3

    Returns:
        str. Base64-encoded PNG screenshot of the post body.
"""
browser = await get_browser()
page = None
for i in range(retry + 1):
try:
page = await browser.new_page()
await page.goto(weibo_url % weibo_id, wait_until='networkidle', timeout=10000)
await page.set_viewport_size({"width": 1980, "height": 2160})
main = await page.query_selector(".main")
assert main is not None
main_clip = await main.bounding_box()
assert main_clip is not None
wrap = await page.query_selector(".wrap")
assert wrap is not None
wrap_clip = await wrap.bounding_box()
assert wrap_clip is not None
main_clip['height'] = wrap_clip['y'] - main_clip['y']
image = await page.screenshot(clip=main_clip)
# image = await page.screenshot(clip=main_clip, path="./test.png")
await page.close()
return base64.b64encode(image).decode()
except Exception:
            if page:
                await page.close()
            if i == retry:
                raise
#
# asyncio.run(get_weibo_list(7713357552))
# asyncio.run(get_weibo_screenshot(4692282445660371))
| 36.346535 | 165 | 0.588668 | 450 | 3,671 | 4.668889 | 0.424444 | 0.034269 | 0.017135 | 0.012375 | 0.050452 | 0.034269 | 0.034269 | 0 | 0 | 0 | 0 | 0.045712 | 0.278943 | 3,671 | 100 | 166 | 36.71 | 0.748017 | 0.074911 | 0 | 0.063492 | 0 | 0.063492 | 0.227948 | 0.056161 | 0 | 0 | 0 | 0 | 0.063492 | 1 | 0 | false | 0 | 0.079365 | 0 | 0.15873 | 0.015873 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3518f8ecfbc7ba396e74e645e87405055e8be328 | 395 | py | Python | lang/Python/ordered-partitions-1.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | lang/Python/ordered-partitions-1.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | lang/Python/ordered-partitions-1.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | from itertools import combinations
def partitions(*args):
def p(s, *args):
if not args: return [[]]
res = []
for c in combinations(s, args[0]):
s0 = [x for x in s if x not in c]
for r in p(s0, *args[1:]):
res.append([c] + r)
return res
s = list(range(sum(args)))
return p(s, *args)
print(partitions(2, 0, 2))
| 24.6875 | 45 | 0.503797 | 61 | 395 | 3.262295 | 0.442623 | 0.075377 | 0.060302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027344 | 0.351899 | 395 | 15 | 46 | 26.333333 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.384615 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
351cea7a09dbc76a4de23aae8b4f7a2bcbae0170 | 692 | py | Python | tool.py | song9446/crypto-waifu | c5d442df9bcc1945270f81521d3e7fe6d05c7e7d | [
"MIT"
] | null | null | null | tool.py | song9446/crypto-waifu | c5d442df9bcc1945270f81521d3e7fe6d05c7e7d | [
"MIT"
] | null | null | null | tool.py | song9446/crypto-waifu | c5d442df9bcc1945270f81521d3e7fe6d05c7e7d | [
"MIT"
] | null | null | null | THROUGHPUT=20
TIME_UNIT=0.01
import asyncio
async def wrapper(coru, semaphore, sec):
async with semaphore:
r = await coru
await asyncio.sleep(sec)
return r
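# Rate limiter: at most `n` coroutines hold the semaphore at once, and each
# keeps its slot for an extra `sec` seconds after finishing, which caps
# throughput at roughly `n` calls per `sec` seconds.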
def limited_api(corus, n=THROUGHPUT, sec=TIME_UNIT):
semaphore = asyncio.Semaphore(n)
return asyncio.gather(*[wrapper(coru, semaphore, sec) for coru in corus])
import logging
logger = logging.getLogger(__name__)
def log(retry_state):
if retry_state.attempt_number < 1:
loglevel = logging.INFO
else:
loglevel = logging.WARNING
logger.log(
loglevel, 'Retrying %s: attempt %s ended with: %s',
retry_state.fn, retry_state.attempt_number, retry_state.outcome)
| 30.086957 | 77 | 0.693642 | 93 | 692 | 5.010753 | 0.494624 | 0.107296 | 0.085837 | 0.098712 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010989 | 0.210983 | 692 | 22 | 78 | 31.454545 | 0.842491 | 0 | 0 | 0 | 0 | 0 | 0.054993 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.095238 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
351da6e9112c7be2db44d69f514ef8cb01bab626 | 11,896 | py | Python | minml/minann.py | jacorbal/minml | 4b018ad5cd7ceaa222bee1acf5721cdf80efdb74 | [
"MIT"
] | null | null | null | minml/minann.py | jacorbal/minml | 4b018ad5cd7ceaa222bee1acf5721cdf80efdb74 | [
"MIT"
] | null | null | null | minml/minann.py | jacorbal/minml | 4b018ad5cd7ceaa222bee1acf5721cdf80efdb74 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# vim: set ft=python fenc=utf-8 tw=72:
# MINML :: Minimal machine learning algorithms
# Copyright (c) 2019-2020, J. A. Corbal
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# “Software”), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""Minimal artificial neural network with backpropagation.
In machine learning and cognitive science, artificial neural networks
(ANNs) are a family of models inspired by biological neural networks
(the central nervous systems of animals, in particular the brain) and
are used to estimate or approximate functions that can depend on a
large number of inputs and are generally unknown. Artificial neural
networks are generally presented as systems of interconnected
"neurons" which exchange messages between each other. The connections
have numeric weights that can be tuned based on experience, making
neural nets adaptive to inputs and capable of learning.
Backpropagation, an abbreviation for "backward propagation of errors",
is a common method of training artificial neural networks used in
conjunction with an optimization method such as gradient descent. The
method calculates the gradient of a loss function with respect to all
the weights in the network. The gradient is fed to the optimization
method which in turn uses it to update the weights, in an attempt to
minimize the loss function.
.. note:: This module is based on Toby Segaran's model published in
*Programming Collective Intelligence*, O'Reilly, 2007.
"""
import math
import random
__all__ = ['sigmoid', 'dsigmoid', 'MinAnn']
class MinAnn(object):
"""Instantiates an neural network object ready to be trained.
Example for a NN that solves an ``AND`` gate::
>>> topology = [2, 4, 1]
>>> p = [ [[0,0],[0]], [[0,1],[0]], [[1,0],[0]], [[1,1],[1]]]
>>> net = MinAnn(topology)
>>> net.train(p, max_iterations=4000, verbose=False, test=True)
Inputs: [0,0] -> [-7.255498785979596e-05] Target: [0]
Inputs: [0,1] -> [-6.731854924635882e-05] Target: [0]
Inputs: [1,0] -> [1.1761479715612379e-05] Target: [0]
Inputs: [1,1] -> [0.9941687241240065] Target: [1]
>>> test = [1,0]
>>> print("Testing", test, "->", net.feed_forward(test))
Testing [1,0] -> [1.1761479715612379e-05]
"""
def __init__(self, topology):
"""Default constructor. Sets up the network.
:param topology: Topology of the net in the form of a list:
                         [input_nodes, hidden_nodes, output_nodes]
:type topology: list
"""
# Number of nodes in layers
self._ni = topology[0] + 1 # "+1" for bias
self._nh = topology[1]
self._no = topology[2]
# Initialize node-activations
self._ai, self._ah, self._ao = [], [], []
self._ai = [1.0] * self._ni
self._ah = [1.0] * self._nh
self._ao = [1.0] * self._no
# Create node weight matrices
self._wi = make_matrix(self._ni, self._nh)
self._wo = make_matrix(self._nh, self._no)
# Initialize node weights to random values
randomize_matrix(self._wi, -0.2, 0.2)
randomize_matrix(self._wo, -2.0, 2.0)
# Create last change in weights matrices for momentum
self._ci = make_matrix(self._ni, self._nh)
self._co = make_matrix(self._nh, self._no)
def feed_forward(self, inputs):
"""Takes a list of inputs, pushes them through the network,
and returns the output of all nodes in the output layer.
:param inputs: Feeding forward values
:type inputs: list
:return: Output values
:rtype: list
"""
if (len(inputs) != self._ni - 1):
raise ValueError("Incorrect number of inputs.")
for i in range(self._ni - 1):
self._ai[i] = inputs[i]
        for j in range(self._nh):
            total = 0.0
            for i in range(self._ni):
                total += (self._ai[i] * self._wi[i][j])
            self._ah[j] = sigmoid(total)
        for k in range(self._no):
            total = 0.0
            for j in range(self._nh):
                total += (self._ah[j] * self._wo[j][k])
            self._ao[k] = sigmoid(total)
return self._ao
def back_propagate(self, targets, N, M):
"""Executes the backpropagation algorithm updating the weights
using target values.
Calculates the error in advance and then adjusts the weights,
because all the calculations rely on knowing the current
weights rather than the updated weights.
:param targets: List of target values
:type targets: list
:param N: Overall learning rate (*eta*)
:type N: float
:param M: Momentum, multiplier of last weight change (*alpha*)
:type M: float
"""
        # Calculate output deltas.
        # We want the instantaneous rate of change of the error with
        # respect to the weight from node j to node k. output_delta is
        # defined as an attribute of each output node; it is not the
        # final rate we need.
        # To get the final rate we must multiply the delta by the
        # activation of the hidden layer node in question. This
        # multiplication is done according to the chain rule, as we are
        # taking the derivative of the activation function of the
        # output node:
        # dE/dw[j][k] = (t[k] - ao[k]) * s'(SUM(w[j][k]*ah[j])) * ah[j]
output_deltas = [0.0] * self._no
for k in range(self._no):
error = targets[k] - self._ao[k]
output_deltas[k] = error * dsigmoid(self._ao[k])
# Update output weights
for j in range(self._nh):
for k in range(self._no):
# Output_deltas[k] * self._ah[j]
# is the full derivative of dError/dweight[j][k]
change = output_deltas[k] * self._ah[j]
self._wo[j][k] += N * change + M * self._co[j][k]
self._co[j][k] = change
# Calculate hidden deltas
hidden_deltas = [0.0] * self._nh
for j in range(self._nh):
error = 0.0
for k in range(self._no):
error += output_deltas[k] * self._wo[j][k]
hidden_deltas[j] = error * dsigmoid(self._ah[j])
# Update input weights
for i in range(self._ni):
for j in range(self._nh):
change = hidden_deltas[j] * self._ai[i]
# print('Activation:',self._ai[i],'Synapse:',i,j,'Change:',change)
self._wi[i][j] += N * change + M * self._ci[i][j]
self._ci[i][j] = change
# Calculate combined error
# 1/2 for differential convenience & **2 for modulus
error = 0.0
for k in range(len(targets)):
error += 0.5 * (targets[k] - self._ao[k])**2
return error
def test(self, patterns):
"""Test the net with a new set of inputs and target values and
outputs the result values.
:param patterns: List of inputs and target values
:type patterns: list
"""
for p in patterns:
inputs = p[0]
print("Inputs:", p[0], "->", self.feed_forward(inputs), " \t",
"Target:", p[1])
def train(self, patterns, max_iterations=1000, N=0.5, M=0.1,
verbose=False, test=False):
"""Train the net with a new set of inputs and target values.
:param patterns: List of inputs and target values
:type patterns: list
:param max_iterations: Iterations for the same pattern
:type max_iterations: int
:param N: Overall learning rate (*eta*)
:type N: float
:param M: Momentum, multiplier of last weight change (*alpha*)
:type M: float
:param verbose: If ``True`` outputs the combined error on each
iteration
:type verbose: bool
:param test: If ``True`` outputs a test after training
:type test: bool
"""
for i in range(max_iterations):
for p in patterns:
inputs = p[0]
targets = p[1]
self.feed_forward(inputs)
error = self.back_propagate(targets, N, M)
if verbose and i % 50 == 0:
print("Combined error", error)
if test:
self.test(patterns)
def get_weights(self):
"""Return weights in the format: [Input weights, Output
weights] where both Input weights and Output weights are
lists.
"""
weights = [[], []]
for i in range(self._ni):
weights[0].append(self._wi[i])
for j in range(self._nh):
weights[1].append(self._wo[j][0])
return weights
# Properties
weights = property(get_weights, None)
def sigmoid(x):
"""A sigmoid function is a bounded differentiable real function
that is defined for all real input values and has a positive
derivative at each point.
A sigmoid function is a mathematical function having an "S" shape
(sigmoid curve). Often, sigmoid function refers to the special
case of the logistic function shown in the first figure and
defined by the formula: :math:`s(x) = 1 / (1 + exp(-x))`, but also
:math:`tanh(x)=(exp(x) - exp(-x)) / (exp(x) + exp(-x))` is a
sigmoid.
    Generally speaking, :math:`s(x)` is used when outputs in the
    non-negative range [0,1] are wanted, while :math:`tanh(x)` maps to
    the range [-1,1].
"""
return math.tanh(x)
def dsigmoid(y):
"""Derivative function of the function represented in
:py:func:`sigmoid`.
* If :math:`y = tanh(x)`, then :math:`Dy = 1 - y^2`,
* if :math:`y = s(x)`, then :math:`Ds = y - y^2`.
    * There are infinitely many sigmoid functions; this returns the
      derivative of the one used in :py:func:`sigmoid`.
"""
return 1 - y**2
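

# Illustrative numeric check (added sketch, not part of the original
# module): dsigmoid expects the *activation* y = sigmoid(x), not the raw
# input x. With the tanh-based sigmoid above:
#
#   >>> round(sigmoid(0.5), 6)
#   0.462117
#   >>> round(dsigmoid(sigmoid(0.5)), 6)
#   0.786448   # equals 1 - tanh(0.5)**2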
def make_matrix(rows, cols, fill=0.0):
"""Returns a matrix (list of list of floats) using a default
value.
:param rows: Number of rows
:type rows: int
:param cols: Number of columns
:type cols: int
:param fill: Default value for each element in the matrix
:type fill: float
"""
m = []
for i in range(rows):
m.append([fill] * cols)
return m
def randomize_matrix(matrix, a, b):
"""Randomizes the values of a matrix in the range [a,b].
:param matrix: Matrix to randomize
:type matrix: list
:param a: Start-point value
:type a: float
:param b: End-point value
:type b: float
.. note:: The end-point value ``b`` may or may not be included in
the range depending on floating-point rounding in the
equation ``a + (b-a) * random()``.
"""
for i in range(len(matrix)):
for j in range(len(matrix[0])):
matrix[i][j] = random.uniform(a, b)
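

# Illustrative usage (added sketch; mirrors the AND-gate example in the
# MinAnn class docstring above).
if __name__ == '__main__':
    topology = [2, 4, 1]
    patterns = [[[0, 0], [0]], [[0, 1], [0]], [[1, 0], [0]], [[1, 1], [1]]]
    net = MinAnn(topology)
    # Train on the AND truth table and print a small test afterwards
    net.train(patterns, max_iterations=4000, verbose=False, test=True)
    print("Testing [1, 1] ->", net.feed_forward([1, 1]))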
| 37.64557 | 81 | 0.609364 | 1,686 | 11,896 | 4.234282 | 0.262752 | 0.01863 | 0.021572 | 0.010786 | 0.146239 | 0.126488 | 0.099033 | 0.054349 | 0.054349 | 0.054349 | 0 | 0.025182 | 0.285642 | 11,896 | 315 | 82 | 37.765079 | 0.814898 | 0.605666 | 0 | 0.216495 | 0 | 0 | 0.020059 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103093 | false | 0 | 0.020619 | 0 | 0.206186 | 0.020619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
351f5fc2c75a54411f1d3da370bd804a7d202ad6 | 1,494 | py | Python | setup.py | Teagum/chainsaddiction | b57995e60361134cfd4f6f748a92457e8ef14c81 | [
"BSD-3-Clause"
] | null | null | null | setup.py | Teagum/chainsaddiction | b57995e60361134cfd4f6f748a92457e8ef14c81 | [
"BSD-3-Clause"
] | 7 | 2021-06-18T09:37:58.000Z | 2021-09-22T12:10:20.000Z | setup.py | Teagum/chainsaddiction | b57995e60361134cfd4f6f748a92457e8ef14c81 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
"""ChainsAddiction setup"""
import itertools
from pathlib import Path
from typing import Generator, Tuple
from setuptools import setup, Extension
import numpy as np
def cglob(path: str) -> Generator:
"""Generate all .c files in ``path``."""
return (f'{src!s}' for src in Path(path).glob('*.c'))
def list_source_files(paths: Tuple[str, ...]) -> list:
    """Return a list of .c files found in all of ``paths``."""
return list(itertools.chain.from_iterable(cglob(path) for path in paths))
utils_src = (
'src/vmath',
'src/chainsaddiction/utils',
)
poishmm_src = (
'src/vmath',
'src/chainsaddiction',
'src/chainsaddiction/utils',
'src/chainsaddiction/poishmm',
)
utils = Extension('chainsaddiction.utils',
sources = list_source_files(utils_src),
include_dirs = [
'include',
'src/vmath',
'src/chainsaddiction',
'src/chainsaddiction/utils',
np.get_include(),
],
extra_compile_args = ['-Wall', '-Wextra'],
language = 'c')
poishmm = Extension('chainsaddiction.poishmm',
sources = list_source_files(poishmm_src),
include_dirs = [
'include',
'src/vmath',
'src/chainsaddiction',
'src/chainsaddiction/poishmm',
np.get_include()
],
extra_compile_args = ['-Wall', '-Wextra'],
language = 'c')
setup(ext_modules = [utils, poishmm])
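# Typical local build invocation (illustrative note, added; the exact
# command is an assumption, not part of the original file):
#
#   python setup.py build_ext --inplace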
| 24.9 | 77 | 0.601071 | 164 | 1,494 | 5.353659 | 0.341463 | 0.164009 | 0.050114 | 0.118451 | 0.353075 | 0.316629 | 0.316629 | 0.255125 | 0.255125 | 0.255125 | 0 | 0.000899 | 0.255689 | 1,494 | 59 | 78 | 25.322034 | 0.788669 | 0.090361 | 0 | 0.52381 | 0 | 0 | 0.235294 | 0.128816 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.119048 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
351fc23a1ac5f5b18d5ecb6496d289212af6ad1f | 216 | py | Python | Python/Find the Runner-Up Score!.py | MonwarAdeeb/HackerRank-Solutions | 571327e9688061745000ae81c5fd74ff7a2976d4 | [
"MIT"
] | null | null | null | Python/Find the Runner-Up Score!.py | MonwarAdeeb/HackerRank-Solutions | 571327e9688061745000ae81c5fd74ff7a2976d4 | [
"MIT"
] | null | null | null | Python/Find the Runner-Up Score!.py | MonwarAdeeb/HackerRank-Solutions | 571327e9688061745000ae81c5fd74ff7a2976d4 | [
"MIT"
] | null | null | null | if __name__ == '__main__':
n = int(input())
arr = map(int, input().split())
newlist = []
for i in arr:
if i not in newlist:
newlist.append(i)
newlist.sort(reverse=True)
print(newlist[1])
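    # Equivalent idiomatic variant (illustrative, added): the dedupe-and-sort
    # above could also be written in one expression, e.g.
    # print(sorted(set(scores))[-2]), where `scores` is a hypothetical name
    # for the list of integers read above.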
| 14.4 | 35 | 0.583333 | 31 | 216 | 3.806452 | 0.645161 | 0.135593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006211 | 0.25463 | 216 | 14 | 36 | 15.428571 | 0.726708 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3520460b526d231b833e154670eefe3a32e73774 | 1,892 | py | Python | servo/drv/cr50_unittest.py | neverware-mirrors/hdctools | dd7f911bb9051e615af7fcb71d921bd481f934fb | [
"BSD-3-Clause"
] | null | null | null | servo/drv/cr50_unittest.py | neverware-mirrors/hdctools | dd7f911bb9051e615af7fcb71d921bd481f934fb | [
"BSD-3-Clause"
] | null | null | null | servo/drv/cr50_unittest.py | neverware-mirrors/hdctools | dd7f911bb9051e615af7fcb71d921bd481f934fb | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2020 The Chromium OS Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import mock
import re
import unittest
import cr50
import pty_driver
@mock.patch('servo.drv.pty_driver.ptyDriver._issue_cmd_get_results')
class TestPromptDetection(unittest.TestCase):
class cr50(cr50.cr50):
def __init__(self):
pass
_logger = mock.MagicMock()
def test_normal_prompt(self, issueCmdMock):
issueCmdMock.return_value = "value"
uut = self.cr50()
    self.assertEqual('value', uut._issue_cmd_get_results('cmd\n', []))
issueCmdMock.assert_called_with('cmd\n', [], flush=None,
timeout=pty_driver.DEFAULT_UART_TIMEOUT)
def test_spurious_prompt(self, issueCmdMock):
def fakeIssueCmd(*args, **kwargs):
if issueCmdMock.call_count >= cr50.cr50.PROMPT_DETECTION_TRIES:
return 'value'
else:
raise pty_driver.ptyError('error')
issueCmdMock.side_effect = fakeIssueCmd
uut = self.cr50()
    self.assertEqual('value', uut._issue_cmd_get_results('cmd\n', []))
issueCmdMock.assert_called_with('cmd\n', [], flush=None,
timeout=pty_driver.DEFAULT_UART_TIMEOUT)
# Prompt detection tries + issue of actual command
    self.assertEqual(cr50.cr50.PROMPT_DETECTION_TRIES + 1,
                     issueCmdMock.call_count)
def test_no_prompt(self, issueCmdMock):
issueCmdMock.side_effect = pty_driver.ptyError('error')
uut = self.cr50()
with self.assertRaises(pty_driver.ptyError):
uut._issue_cmd_get_results('cmd\n', [])
    self.assertEqual(cr50.cr50.PROMPT_DETECTION_TRIES,
                     issueCmdMock.call_count)
| 39.416667 | 80 | 0.643763 | 219 | 1,892 | 5.315068 | 0.406393 | 0.054124 | 0.037801 | 0.061856 | 0.347938 | 0.323883 | 0.323883 | 0.226804 | 0.226804 | 0.226804 | 0 | 0.022159 | 0.260571 | 1,892 | 47 | 81 | 40.255319 | 0.809864 | 0.111522 | 0 | 0.297297 | 0 | 0 | 0.064439 | 0.031623 | 0 | 0 | 0 | 0 | 0.189189 | 1 | 0.135135 | false | 0.027027 | 0.135135 | 0 | 0.351351 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
35261ae03f029216dfc2d799f6a76b6a4600c26b | 612 | py | Python | plugin-sim/rdf-age-query.py | albertcrowley/coinstac-search | 731d38644e1c7dc4c9b65fb986c824ed11a33e8c | [
"MIT"
] | null | null | null | plugin-sim/rdf-age-query.py | albertcrowley/coinstac-search | 731d38644e1c7dc4c9b65fb986c824ed11a33e8c | [
"MIT"
] | null | null | null | plugin-sim/rdf-age-query.py | albertcrowley/coinstac-search | 731d38644e1c7dc4c9b65fb986c824ed11a33e8c | [
"MIT"
] | null | null | null | import rdflib
from argparse import ArgumentParser
parser = ArgumentParser()
# parse command line arguments
parser.add_argument('-nidm', dest='nidm_file', required=True, help="NIDM-Exp RDF File to import")
args = parser.parse_args()
g = rdflib.Graph()
g.parse(args.nidm_file, format='ttl')
qres = g.query(
"""SELECT DISTINCT ?id ?age ?assessment
WHERE {
?assessment prov:wasGeneratedBy ?acq .
?acq prov:wasAssociatedWith ?person .
?assessment ncicb:Age ?age .
?person ndar:src_subject_id ?id
}""")
for row in qres:
print("%s - %s - %s" % row)
| 23.538462 | 97 | 0.648693 | 78 | 612 | 5.012821 | 0.615385 | 0.040921 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223856 | 612 | 25 | 98 | 24.48 | 0.823158 | 0.045752 | 0 | 0 | 0 | 0 | 0.167164 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.272727 | 0 | 0.272727 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
352692fc2dc3c012236ee0f24a3bdf7ab8a867ef | 7,081 | py | Python | cogs/voteBan.py | Simpleton-Yogy/GLOB | c68a6002c87e754ffa9c3b45073e211691a548a2 | [
"MIT"
] | 2 | 2021-05-01T08:06:33.000Z | 2021-11-29T11:13:58.000Z | cogs/voteBan.py | Simpleton-Yogy/GLOB | c68a6002c87e754ffa9c3b45073e211691a548a2 | [
"MIT"
] | 1 | 2021-01-05T08:21:45.000Z | 2021-01-05T08:21:45.000Z | cogs/voteBan.py | Simpleton-Yogy/GLOB | c68a6002c87e754ffa9c3b45073e211691a548a2 | [
"MIT"
] | 8 | 2020-12-11T07:30:18.000Z | 2021-05-01T08:06:44.000Z | import discord
from discord.ext import commands
from discord.ext.commands import has_permissions
import sqlite3 as sqlite
import random as random
class voteBan(commands.Cog):
def __init__(self, bot):
self.bot = bot
self.doubleVoteResponses = ["Don't try to fool me! You voted already.", "This is not how democracy works..", "Hold on, don't cast your vote twice.", "You really tried that? You voted already!", "No, you don't have more votes than others.."]
self.banKickReasonResponses = ["Democracy has spoken", "You weren't wanted anymore..", "Everybody has to go..", "Farewell, banana.", "They are the senate!", "Personally.. I would do that too.", "Don't anger them again!"]
@commands.command(name = "voteBan", aliases = ["vB", "vb", "voteB", "vBan"], pass_context = True)
@has_permissions(ban_members = True)
async def voteBan(self, ctx, name = None, count = 5):
conn = sqlite.connect("data/internal.db")
guild = ctx.guild.name.replace("'", "").replace(" ", "_") + "_voteban"
if len(ctx.message.mentions) == 0:
await ctx.send("***You didn't specify whome to start kick vote against!***")
else:
if len(conn.execute(f"SELECT name FROM sqlite_master WHERE type='table' AND name='{guild}'").fetchall()) == 0:
conn.execute(f"CREATE TABLE IF NOT EXISTS {guild}('id' INTEGER PRIMARY KEY, 'ban_user' TEXT NOT NULL, 'ban_voters' TEXT NOT NULL, 'votes_current' INTEGER NOT NULL, 'votes_max' INTEGER NOT NULL)")
conn.commit()
print(f"Created voteban table in database for {guild[:len(guild) - 8]}")
conn.execute(f"INSERT INTO {guild} VALUES(NULL, ?, ?, ?, ?)", (ctx.message.mentions[0].id, ctx.message.author.id, 1, count))
conn.commit()
ban_field = discord.Embed(colour = discord.Colour(0xFDED32))
ban_field.add_field(name = f"Ban vote in progress!", value = f"{ctx.message.mentions[0].mention} `1/{count}`", inline = True)
await ctx.send(embed = ban_field)
else:
data = conn.execute(f"SELECT * FROM {guild} WHERE ban_user = '{ctx.message.mentions[0].id}'").fetchall()
if len(data) == 0:
conn.execute(f"INSERT INTO {guild} VALUES(NULL, ?, ?, ?, ?)", (ctx.message.mentions[0].id, ctx.message.author.id, 1, count))
conn.commit()
ban_field = discord.Embed(colour = discord.Colour(0xFDED32))
ban_field.add_field(name = f"Ban vote in progress!", value = f"{ctx.message.mentions[0].mention} `1/{count}`", inline = True)
await ctx.send(embed = ban_field)
else:
data = data[0]
ban_user = data[1]
ban_voters = data[2].split("_")
votes_current = int(data[3])
votes_max = int(data[4])
if str(ctx.message.author.id) in ban_voters:
await ctx.send(f"***{self.doubleVoteResponses[random.randrange(len(self.doubleVoteResponses))]}***")
else:
if votes_current >= (votes_max - 1):
await ctx.guild.get_member(int(ban_user)).kick(reason = f"{self.banKickReasonResponses[random.randrange(len(self.banKickReasonResponses))]}")
ban_field = discord.Embed(colour = discord.Colour(0xFDED32))
ban_field.add_field(name = f"You did it!", value = f"{ctx.message.mentions[0].mention} is no more.", inline = True)
await ctx.send(embed = ban_field)
conn.execute(f"DELETE FROM {guild} WHERE kick_user = '{ban_user}'")
conn.commit()
else:
votes_current += 1
ban_voters.append(ctx.author.id)
ban_voters = "_".join(map(str, ban_voters))
conn.execute(f"UPDATE {guild} SET votes_current = '{votes_current}', kick_voters = '{ban_voters}' WHERE kick_user = '{ban_user}'")
conn.commit()
ban_field = discord.Embed(colour = discord.Colour(0xFDED32))
ban_field.add_field(name = f"Ban vote in progress!", value = f"{ctx.message.mentions[0].mention} `{votes_current}/{votes_max}`", inline = True)
await ctx.send(embed = ban_field)
conn.close()
@commands.command(name = "resetVoteBan", aliases = ["rVB", "rvb", "resetvB", "rBan", "rb", "rB"])
async def resetVoteBan(self, ctx):
guild = ctx.guild.name.replace("'", "").replace(" ", "_") + "_voteban"
conn = sqlite.connect("data/internal.db")
if len(conn.execute(f"SELECT name FROM sqlite_master WHERE type='table' AND name='{guild}'").fetchall()) == 0:
await ctx.send("***There is no ban vote going on right now***")
else:
if len(ctx.message.mentions) == 0:
if len(conn.execute(f"SELECT * FROM {guild}").fetchall()) > 0:
conn.execute(f"DELETE FROM {guild}")
conn.commit()
cancel_ban_field = discord.Embed(colour = discord.Colour(0xFDED32))
cancel_ban_field.add_field(name = f"Canceled all ban votes!", value = f"They don't have to fear the sword anymore!")
await ctx.send(embed = cancel_ban_field)
else:
await ctx.send("***There is no ban vote against anybody on this server***")
else:
if len(conn.execute(f"SELECT * FROM {guild} WHERE ban_user = '{ctx.message.mentions[0].id}'").fetchall()) == 0:
await ctx.send("***There is no ban vote against this user***")
else:
conn.execute(f"DELETE FROM {guild} WHERE ban_user = '{ctx.message.mentions[0].id}'")
cancel_ban_field = discord.Embed(colour = discord.Colour(0xFDED32))
cancel_ban_field.add_field(name = f"Canceled ban voting", value = f"{ctx.message.mentions[0].mention} don't have to fear.. For now anyways.")
await ctx.send(embed = cancel_ban_field)
conn.close()
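
    # Illustrative chat usage (added note; assumes the default "." prefix
    # set up in get_prefix below):
    #   .voteBan @user 5     -> start or advance a ban vote needing 5 votes
    #   .resetVoteBan @user  -> cancel the ban vote against that user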
@voteBan.error
async def voteBanError(self, ctx, error):
await ctx.message.channel.send("***I don't have permission to do that!***")
print(error)
def setup(bot):
bot.add_cog(voteBan(bot))
def get_prefix(bot, message):
conn = sqlite.connect("data/internal.db")
    try:
        return conn.execute(f"SELECT * FROM prefixes WHERE guild_id = {message.guild.id}").fetchall()[0][2]
    except Exception:
        conn.execute("INSERT INTO prefixes VALUES(NULL, ?, ?)", (message.guild.id, "."))
        conn.commit()
        return conn.execute(f"SELECT * FROM prefixes WHERE guild_id = {message.guild.id}").fetchall()[0][2]
    finally:
        conn.close() | 57.569106 | 248 | 0.567575 | 870 | 7,081 | 4.527586 | 0.224138 | 0.036558 | 0.045697 | 0.057883 | 0.537954 | 0.52526 | 0.477279 | 0.400863 | 0.39147 | 0.361005 | 0 | 0.010602 | 0.294026 | 7,081 | 123 | 249 | 57.569106 | 0.777355 | 0 | 0 | 0.455446 | 0 | 0.029703 | 0.330698 | 0.062977 | 0 | 0 | 0.006778 | 0 | 0 | 1 | 0.029703 | false | 0.009901 | 0.049505 | 0 | 0.108911 | 0.019802 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
35275936d610f0af6241487e73f0ca7f9ff70942 | 4,981 | py | Python | tzdealer/tzdealer/doctype/website_connector/website_connector.py | Lewinta/tzdealer | 3e6a8f39e16029b217ae84bed806cb79bbc89dbf | [
"MIT"
] | null | null | null | tzdealer/tzdealer/doctype/website_connector/website_connector.py | Lewinta/tzdealer | 3e6a8f39e16029b217ae84bed806cb79bbc89dbf | [
"MIT"
] | null | null | null | tzdealer/tzdealer/doctype/website_connector/website_connector.py | Lewinta/tzdealer | 3e6a8f39e16029b217ae84bed806cb79bbc89dbf | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2020, TZCODE SRL and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
import requests
import json
import traceback
from tzdealer.hook.item import cast_to_post, cast_image
from frappe.utils import nowdate
class WebsiteConnector(Document):
def validate(self):
self.check_token()
def check_token(self):
if not self.token:
self.get_token()
else:
url = "{}/{}".format(self.website_url, "wp-json/jwt-auth/v1/token/validate")
headers = self.headers({'Authorization': "Bearer {}".format(self.token)})
r = requests.request("POST", url, headers=headers)
text = frappe._dict(json.loads(r.text))
frappe.errprint("Text: {}\n\n".format(text))
if r.status_code == 200:
if text.code == "jwt_auth_valid_token":
frappe.errprint("Valid Token")
return
else:
frappe.errprint("# Token Invalid, Lets get a new one")
self.get_token()
elif r.status_code == 403:
# Token Expired, Let's get a new one
frappe.errprint("# Token Expired, Lets get a new one")
self.get_token()
else:
frappe.throw("<b>Error {}</b> {} <br>{}<br>{}".format(r.status_code, url, headers, r.text))
def get_token(self):
data = json.dumps({
"username": self.username,
"password": self.get_password()
})
headers = {
'Content-Type': "application/json",
'cache-control': "no-cache"
}
url = "{}/{}".format(self.website_url, "wp-json/jwt-auth/v1/token")
r = requests.request("POST", url, data=data, headers=headers)
text = frappe._dict(json.loads(r.text))
if r.status_code == 200 :
self.token = text.token
frappe.errprint("# New Token Generated")
self.last_update = nowdate()
else:
frappe.throw("<b>Error {}</b> {} <br>{}<br>{}".format(r.status_code, url, headers, r.text))
def send(self, sufix, headers, data=None):
self.check_token()
url = "{}/{}".format(self.website_url, sufix)
frappe.errprint("POST {}\ndata:{}\nheaders:{}".format(url, data, headers))
r = requests.request("POST", url, data=data, headers=headers)
# Expired Token
frappe.errprint(r.status_code)
if r.status_code == 403:
self.get_token()
if r.status_code in [200, 201]:
return frappe._dict(json.loads(r.text))
else:
frappe.throw("<b>Error {}</b> {} <br>{}<br>{}<br> {}".format(r.status_code, url, self.headers(), data, r.text))
def headers(self, args=None):
h = {
'Content-Type': "application/json",
'cache-control': "no-cache"
}
if args:
h.update(args)
return h
def sync(self, item_code):
url = "/wp-json/wp/v2/vehicles"
doc = frappe.get_doc("Item", item_code)
# self.sync_images(doc)
if not self.last_update or self.last_update < nowdate():
self.get_token()
if doc.website_id:
# looks like exists let's update
url +="/{}".format(doc.website_id)
try:
r = self.send(url, self.headers({'Authorization': "Bearer {}".format(self.token)}), cast_to_post(doc))
if not doc.website_id:
doc.website_id = r.id
doc.db_update()
frappe.db.commit()
except Exception as e:
error = frappe.new_doc("Error Log")
error.update({
'error': "msg:{}\n\nTraceback:{}".format(
e.args[0],
traceback.format_exc()
),
'method': "sync",
})
error.save(ignore_permissions=True)
def sync_images(self, item):
url = "/wp-json/wp/v2/media"
		if isinstance(item, str) and frappe.db.exists("Item", item):
item = frappe.get_doc("Item", item)
for img in item.website_images:
if img.post_id:
# looks like exists let's update
url +="/{}".format(img.post_id)
try:
				r = self.send(url, self.headers({'Authorization': "Bearer {}".format(self.token)}), cast_image(img))
img.post_id = r.id
img.db_update()
except Exception as e:
error = frappe.new_doc("Error Log")
error.update({
'error': "msg:{}\n\nTraceback:{}".format(
e.args[0],
traceback.format_exc()
),
'method': "sync",
})
error.save(ignore_permissions=True)
def sync_single_image(self, img_name):
url = "/wp-json/wp/v2/media"
if not frappe.db.exists("Website Image", img_name):
frappe.throw("Website Image {} not found".format(img_name))
web_img = frappe.get_doc("Website Image", img_name)
if web_img.post_id:
# looks like exists let's update
url +="/{}".format(web_img.post_id)
frappe.errprint("Now let's send it")
try:
r = self.send(url, self.headers({'Authorization': "Bearer {}".format(self.token)}), cast_image(web_img))
web_img.post_id = r.id
web_img.db_update()
except Exception as e:
error = frappe.new_doc("Error Log")
error.update({
'error': "msg:{}\n\nTraceback:{}".format(
e.args[0],
traceback.format_exc()
),
'method': "sync",
})
error.save(ignore_permissions=True)
def update_vehicle(self, doc):
url = "/wp-json/wp/v2/vehicles/{}".format()
self.check_token()
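

# Illustrative call site (added sketch; "My Site" is a hypothetical
# Website Connector document name):
#
#   connector = frappe.get_doc("Website Connector", "My Site")
#   connector.sync(item_code)   # pushes the Item to the WordPress site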
| 28.626437 | 115 | 0.650472 | 720 | 4,981 | 4.377778 | 0.195833 | 0.019987 | 0.031409 | 0.038071 | 0.526332 | 0.498731 | 0.459074 | 0.425127 | 0.371827 | 0.309645 | 0 | 0.007801 | 0.176471 | 4,981 | 173 | 116 | 28.791908 | 0.760605 | 0.056414 | 0 | 0.453237 | 0 | 0 | 0.184901 | 0.042013 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064748 | false | 0.007194 | 0.057554 | 0 | 0.151079 | 0.057554 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
35285f639f6c60e4132437817567fdc4bbfc6b45 | 499 | py | Python | Code Accdemy/List_Practice_1.py | JackVoice/Testing | 11e1b83c7b2e51fa0a8cdde67c5c1eab650e018a | [
"Unlicense"
] | null | null | null | Code Accdemy/List_Practice_1.py | JackVoice/Testing | 11e1b83c7b2e51fa0a8cdde67c5c1eab650e018a | [
"Unlicense"
] | null | null | null | Code Accdemy/List_Practice_1.py | JackVoice/Testing | 11e1b83c7b2e51fa0a8cdde67c5c1eab650e018a | [
"Unlicense"
] | null | null | null | toppings = ["pepperoni", "pineapple", "cheese", "sausage", "olives", "anchovies", "mushrooms"]
prices = [2,6,1,3,2,7,2]
num_pizzas = len(toppings)
print("We Sell " + str(num_pizzas) + " different kinds of Pizza!")
pizzas = list(zip(prices,toppings))
print(pizzas)
pizzas.sort()
cheapest_pizza = pizzas[0]
print(cheapest_pizza)
priciest_pizza = pizzas[-1]
print(priciest_pizza)
three_cheapest = pizzas[0:3]
print(three_cheapest)
num_two_dollar_slices = prices.count(2)
print(num_two_dollar_slices)
| 27.722222 | 94 | 0.741483 | 73 | 499 | 4.876712 | 0.493151 | 0.092697 | 0.067416 | 0.101124 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026667 | 0.098196 | 499 | 17 | 95 | 29.352941 | 0.764444 | 0 | 0 | 0 | 0 | 0 | 0.178357 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3528e9bd98bcca8ec43737dc82d123ac8dfa85fb | 5,762 | py | Python | src/pytorch/ami_model.py | njuaplusplus/AmI | f10ca9b0001cc7276ff98abebcf2222c59e65f1a | [
"MIT"
] | null | null | null | src/pytorch/ami_model.py | njuaplusplus/AmI | f10ca9b0001cc7276ff98abebcf2222c59e65f1a | [
"MIT"
] | null | null | null | src/pytorch/ami_model.py | njuaplusplus/AmI | f10ca9b0001cc7276ff98abebcf2222c59e65f1a | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding=utf-8
import torch
import torch.nn as nn
from collections import OrderedDict
from torch.nn.functional import interpolate
class AmIModel(nn.Module):
def __init__(self, base, ami_weaken_parameter=0, ami_strengthen_parameter0=0, ami_strengthen_parameter1=0):
super(AmIModel, self).__init__()
self.base = base
self.my_hooks = {}
self.weaken_param = ami_weaken_parameter
self.strengthen_param0 = ami_strengthen_parameter0
self.strengthen_param1 = ami_strengthen_parameter1
def set_ami_params(self, ami_weaken_parameter, ami_strengthen_parameter0, ami_strengthen_parameter1):
self.weaken_param = ami_weaken_parameter
self.strengthen_param0 = ami_strengthen_parameter0
self.strengthen_param1 = ami_strengthen_parameter1
def register_my_hook(self, skip_layers=[], ami_data=None, return_tensor=False):
self.my_hooks = OrderedDict()
for name, module in self.base.named_modules():
if name not in skip_layers:
if len(list(module.children())) == 0:
print(f'register hook for {name}')
self.my_hooks[name] = Hook(name, module, self.weaken_param,
self.strengthen_param0, self.strengthen_param1,
ami_data, return_tensor)
def remove_my_hook(self):
for name, hook in self.my_hooks.items():
print(f'remove hook for {name}')
hook.close()
self.my_hooks = {}
def show_layers(self):
for name, module in self.base.named_modules():
if len(list(module.children())) == 0:
print(f'{name}:\t{module}')
def get_activation_values(self):
if len(self.my_hooks) > 0 and list(self.my_hooks.values())[0].ami_data is None:
res = OrderedDict()
for n, h in self.my_hooks.items():
if h.output is None:
print(f'{n}.output is None')
res[n] = h.output
return res
def forward(self, x0):
return self.base(x0)
class Hook():
def __init__(self, name, module, weaken_param, strengthen_param0, strengthen_param1, ami_data=None, return_tensor=False):
self.hook = module.register_forward_hook(self.hook_fn)
self.name = name
self.output = None
self.ami_data = ami_data
self.weaken_neurons = None
self.strengthen_neurons = []
self.attri_neurons = []
self.processed = False
self.return_tensor = return_tensor
self.weaken_param = weaken_param
self.strengthen_param0 = strengthen_param0
self.strengthen_param1 = strengthen_param1
def hook_fn(self, module, input, output):
# self.input = input.detach().clone().cpu().numpy()
# output shape: batch x neurons x ...
if self.ami_data is not None:
if (not self.processed) and (not self.strengthen_neurons):
self.weaken_neurons = torch.ones(output.shape[1], device=output.device)
for n_l in self.ami_data:
n_l = n_l[self.name]
self.attri_neurons.extend(n_l)
if not n_l: continue
tmp_n = torch.zeros(output.shape[1], device=output.device)
tmp_n[n_l] = 1
self.strengthen_neurons.append(tmp_n)
self.weaken_neurons *= (1-tmp_n)
self.processed = True
if not self.attri_neurons or not self.strengthen_neurons:
return output
if 'pool3' in self.name:
t_h, t_w = output.shape[2:]
                tmp = output.clone()[..., 2:t_h - 2, 2:t_w - 2]
tmp = interpolate(tmp, size=(t_h, t_w), mode='bilinear', align_corners=False)
output = tmp
new_output = neuron_AmI(output, self.weaken_neurons, self.strengthen_neurons,
self.attri_neurons, self.name, self.weaken_param,
self.strengthen_param0, self.strengthen_param1)
return new_output
else:
if self.return_tensor:
self.output = output.detach().clone()
else:
self.output = output.detach().clone().cpu().squeeze(0).numpy()
def close(self):
self.hook.remove()
def neuron_AmI(x, weaken_neurons, strengthen_neurons, attri_neurons, layer_i, weaken_param, strengthen_param0, strengthen_param1):
x = x.squeeze(0)
if len(x.shape) == 1:
data = x
else:
data = torch.sum(x, (1,2))
attri_data = data[attri_neurons]
attri_mean = attri_data.mean()
attri_std = attri_data.std(unbiased=False)
attri_min = attri_data.min()
deviation = torch.zeros(x.shape[0], device=weaken_neurons.device)
if attri_std != 0:
deviation = torch.max(deviation, (data-attri_mean)/attri_std)
wkn = weaken(deviation, weaken_param) * weaken_neurons
stn = 1.
for st_n in strengthen_neurons:
deviation = torch.zeros(x.shape[0], device=weaken_neurons.device)
if attri_std != 0:
deviation = torch.abs(data-attri_min) / attri_std
t_stn = strengthen(deviation, strengthen_param0, strengthen_param1) * st_n + (1 - st_n)
stn *= t_stn
stn *= (1.-weaken_neurons)
stn += wkn
if len(x.shape) == 3:
stn.unsqueeze_(-1)
stn.unsqueeze_(-1)
return (x*stn).unsqueeze(0)
def strengthen(x, strengthen_param0, strengthen_param1):
return strengthen_param0 - torch.exp(-x/strengthen_param1)
def weaken(x, weaken_param):
return torch.exp(-x/weaken_param)
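

# Illustrative wiring (added sketch; the torchvision backbone, layer name
# and `witness_data` placeholder are assumptions, not part of this module):
#
#   import torchvision.models as models
#   base = models.vgg16(pretrained=True)
#   model = AmIModel(base, ami_weaken_parameter=1.0,
#                    ami_strengthen_parameter0=2.0,
#                    ami_strengthen_parameter1=1.0)
#   # witness_data: iterable of {layer_name: [neuron indices]} dicts, one
#   # per attribute, matching what Hook.hook_fn reads from self.ami_data
#   model.register_my_hook(skip_layers=['classifier.6'], ami_data=witness_data)
#   output = model(torch.randn(1, 3, 224, 224))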
| 38.413333 | 130 | 0.608296 | 730 | 5,762 | 4.565753 | 0.172603 | 0.063006 | 0.026403 | 0.038404 | 0.326133 | 0.279028 | 0.235224 | 0.191419 | 0.174017 | 0.118812 | 0 | 0.016137 | 0.290177 | 5,762 | 149 | 131 | 38.671141 | 0.798778 | 0.020653 | 0 | 0.175 | 0 | 0 | 0.016676 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108333 | false | 0 | 0.033333 | 0.025 | 0.216667 | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3529dc52ba49ecf8e3acc2f0ee96759ff2d4fa5a | 4,690 | py | Python | absa/run_base.py | huminghao16/SpanABSA | 66369af840599df2a886d4ad1db1ceec53c0649f | [
"Apache-2.0"
] | 111 | 2019-06-17T11:10:41.000Z | 2022-03-23T02:34:27.000Z | absa/run_base.py | snath99920/SpanABSA | 66369af840599df2a886d4ad1db1ceec53c0649f | [
"Apache-2.0"
] | 13 | 2019-07-10T04:46:08.000Z | 2021-10-29T23:33:38.000Z | absa/run_base.py | snath99920/SpanABSA | 66369af840599df2a886d4ad1db1ceec53c0649f | [
"Apache-2.0"
] | 19 | 2019-08-12T09:22:12.000Z | 2021-10-04T12:24:14.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import torch
from bert.optimization import BERTAdam
try:
import xml.etree.ElementTree as ET, getopt, logging, sys, random, re, copy
from xml.sax.saxutils import escape
except:
sys.exit('Some package is missing... Perhaps <re>?')
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.INFO)
logger = logging.getLogger(__name__)
def copy_optimizer_params_to_model(named_params_model, named_params_optimizer):
""" Utility function for optimize_on_cpu and 16-bits training.
Copy the parameters optimized on CPU/RAM back to the model on GPU
"""
for (name_opti, param_opti), (name_model, param_model) in zip(named_params_optimizer, named_params_model):
if name_opti != name_model:
logger.error("name_opti != name_model: {} {}".format(name_opti, name_model))
raise ValueError
param_model.data.copy_(param_opti.data)
def set_optimizer_params_grad(named_params_optimizer, named_params_model, test_nan=False):
""" Utility function for optimize_on_cpu and 16-bits training.
Copy the gradient of the GPU parameters to the CPU/RAMM copy of the model
"""
is_nan = False
for (name_opti, param_opti), (name_model, param_model) in zip(named_params_optimizer, named_params_model):
if name_opti != name_model:
logger.error("name_opti != name_model: {} {}".format(name_opti, name_model))
raise ValueError
if test_nan and torch.isnan(param_model.grad).sum() > 0:
is_nan = True
if param_opti.grad is None:
param_opti.grad = torch.nn.Parameter(param_opti.data.new().resize_(*param_opti.data.size()))
param_opti.grad.data.copy_(param_model.grad.data)
return is_nan
def prepare_optimizer(args, model, num_train_steps):
if args.fp16:
param_optimizer = [(n, param.clone().detach().to('cpu').float().requires_grad_()) \
for n, param in model.named_parameters()]
elif args.optimize_on_cpu:
param_optimizer = [(n, param.clone().detach().to('cpu').requires_grad_()) \
for n, param in model.named_parameters()]
else:
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}]
optimizer = BERTAdam(optimizer_grouped_parameters,
lr=args.learning_rate,
warmup=args.warmup_proportion,
t_total=num_train_steps)
return optimizer, param_optimizer
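# Illustrative call (added note; `args` is assumed to be an argparse
# Namespace carrying fp16, optimize_on_cpu, learning_rate and
# warmup_proportion):
#
#   optimizer, param_optimizer = prepare_optimizer(args, model, num_train_steps)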
def post_process_loss(args, n_gpu, loss):
if n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu.
if args.fp16 and args.loss_scale != 1.0:
# rescale loss for fp16 training
# see https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html
loss = loss * args.loss_scale
if args.gradient_accumulation_steps > 1:
loss = loss / args.gradient_accumulation_steps
return loss
def bert_load_state_dict(model, state_dict):
missing_keys = []
unexpected_keys = []
error_msgs = []
# copy state_dict so _load_from_state_dict can modify it
metadata = getattr(state_dict, '_metadata', None)
state_dict = state_dict.copy()
if metadata is not None:
state_dict._metadata = metadata
def load(module, prefix=''):
local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
module._load_from_state_dict(
state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
for name, child in module._modules.items():
if child is not None:
load(child, prefix + name + '.')
load(model, prefix='' if hasattr(model, 'bert') else 'bert.')
if len(missing_keys) > 0:
logger.info("Weights of {} not initialized from pretrained model: {}".format(
model.__class__.__name__, missing_keys))
if len(unexpected_keys) > 0:
logger.info("Weights from pretrained model not used in {}: {}".format(
model.__class__.__name__, unexpected_keys))
return model | 45.533981 | 114 | 0.646482 | 619 | 4,690 | 4.61874 | 0.285945 | 0.03148 | 0.036376 | 0.035677 | 0.305701 | 0.290311 | 0.253935 | 0.253935 | 0.228751 | 0.177685 | 0 | 0.006529 | 0.248827 | 4,690 | 103 | 115 | 45.533981 | 0.804996 | 0.097441 | 0 | 0.121951 | 0 | 0 | 0.085086 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.097561 | 0 | 0.219512 | 0.012195 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
352aa0950aedaa7f34ccaf411b0899ff8810e489 | 18,189 | py | Python | problem/views/admin.py | prayjourney/OnlineJudge | 41f248091ef80fda1b165d0b6504ec58cfea76ca | [
"MIT"
] | 1 | 2018-01-28T07:48:13.000Z | 2018-01-28T07:48:13.000Z | problem/views/admin.py | OnlineJudgeNextGeneration/qduoj2 | c4889d70850bd91ae7f662c02524d0555b6a3ce7 | [
"MIT"
] | null | null | null | problem/views/admin.py | OnlineJudgeNextGeneration/qduoj2 | c4889d70850bd91ae7f662c02524d0555b6a3ce7 | [
"MIT"
] | 1 | 2020-09-29T14:21:27.000Z | 2020-09-29T14:21:27.000Z | import hashlib
import json
import os
import shutil
import zipfile
from wsgiref.util import FileWrapper
from django.conf import settings
from django.http import StreamingHttpResponse, HttpResponse
from account.decorators import problem_permission_required
from judge.dispatcher import SPJCompiler
from contest.models import Contest, ContestStatus
from submission.models import Submission
from utils.api import APIView, CSRFExemptAPIView, validate_serializer
from utils.shortcuts import rand_str, natural_sort_key
from ..models import Problem, ProblemRuleType, ProblemTag
from ..serializers import (CreateContestProblemSerializer, CompileSPJSerializer,
CreateProblemSerializer, EditProblemSerializer, EditContestProblemSerializer,
ProblemAdminSerializer, TestCaseUploadForm, ContestProblemMakePublicSerializer,
AddContestProblemSerializer)
class TestCaseAPI(CSRFExemptAPIView):
request_parsers = ()
def filter_name_list(self, name_list, spj):
ret = []
prefix = 1
if spj:
while True:
in_name = str(prefix) + ".in"
if in_name in name_list:
ret.append(in_name)
prefix += 1
continue
else:
return sorted(ret, key=natural_sort_key)
else:
while True:
in_name = str(prefix) + ".in"
out_name = str(prefix) + ".out"
if in_name in name_list and out_name in name_list:
ret.append(in_name)
ret.append(out_name)
prefix += 1
continue
else:
return sorted(ret, key=natural_sort_key)
@problem_permission_required
def get(self, request):
problem_id = request.GET.get("problem_id")
if not problem_id:
return self.error("Parameter error, problem_id is required")
try:
problem = Problem.objects.get(id=problem_id)
except Problem.DoesNotExist:
return self.error("Problem does not exists")
test_case_dir = os.path.join(settings.TEST_CASE_DIR, problem.test_case_id)
if not os.path.isdir(test_case_dir):
return self.error("Test case does not exists")
name_list = self.filter_name_list(os.listdir(test_case_dir), problem.spj)
name_list.append("info")
file_name = os.path.join(test_case_dir, problem.test_case_id + ".zip")
with zipfile.ZipFile(file_name, "w") as file:
for test_case in name_list:
file.write(f"{test_case_dir}/{test_case}", test_case)
if os.environ.get("OJ_ENV") == "production":
response = HttpResponse()
response["X-Accel-Redirect"] = file_name
else:
response = StreamingHttpResponse(FileWrapper(open(file_name, "rb")),
content_type="application/octet-stream")
response["Content-Disposition"] = f"attachment; filename=problem_{problem.id}_test_cases.zip"
response["Content-Length"] = os.path.getsize(file_name)
return response
@problem_permission_required
def post(self, request):
form = TestCaseUploadForm(request.POST, request.FILES)
if form.is_valid():
spj = form.cleaned_data["spj"] == "true"
file = form.cleaned_data["file"]
else:
return self.error("Upload failed")
tmp_file = os.path.join("/tmp", rand_str() + ".zip")
with open(tmp_file, "wb") as f:
for chunk in file:
f.write(chunk)
try:
zip_file = zipfile.ZipFile(tmp_file)
except zipfile.BadZipFile:
return self.error("Bad zip file")
name_list = zip_file.namelist()
test_case_list = self.filter_name_list(name_list, spj=spj)
if not test_case_list:
return self.error("Empty file")
test_case_id = rand_str()
test_case_dir = os.path.join(settings.TEST_CASE_DIR, test_case_id)
os.mkdir(test_case_dir)
size_cache = {}
md5_cache = {}
for item in test_case_list:
with open(os.path.join(test_case_dir, item), "wb") as f:
content = zip_file.read(item).replace(b"\r\n", b"\n")
size_cache[item] = len(content)
if item.endswith(".out"):
md5_cache[item] = hashlib.md5(content.rstrip()).hexdigest()
f.write(content)
test_case_info = {"spj": spj, "test_cases": {}}
hint = None
diff = set(name_list).difference(set(test_case_list))
if diff:
hint = ", ".join(diff) + " are ignored"
ret = []
if spj:
for index, item in enumerate(test_case_list):
data = {"input_name": item, "input_size": size_cache[item]}
ret.append(data)
test_case_info["test_cases"][str(index + 1)] = data
else:
# ["1.in", "1.out", "2.in", "2.out"] => [("1.in", "1.out"), ("2.in", "2.out")]
test_case_list = zip(*[test_case_list[i::2] for i in range(2)])
for index, item in enumerate(test_case_list):
data = {"stripped_output_md5": md5_cache[item[1]],
"input_size": size_cache[item[0]],
"output_size": size_cache[item[1]],
"input_name": item[0],
"output_name": item[1]}
ret.append(data)
test_case_info["test_cases"][str(index + 1)] = data
with open(os.path.join(test_case_dir, "info"), "w", encoding="utf-8") as f:
f.write(json.dumps(test_case_info, indent=4))
return self.success({"id": test_case_id, "info": ret, "hint": hint, "spj": spj})
class CompileSPJAPI(APIView):
@validate_serializer(CompileSPJSerializer)
@problem_permission_required
def post(self, request):
data = request.data
spj_version = rand_str(8)
error = SPJCompiler(data["spj_code"], spj_version, data["spj_language"]).compile_spj()
if error:
return self.error(error)
else:
return self.success()
class ProblemBase(APIView):
def common_checks(self, request):
data = request.data
if data["spj"]:
if not data["spj_language"] or not data["spj_code"]:
return "Invalid spj"
if not data["spj_compile_ok"]:
return "SPJ code must be compiled successfully"
data["spj_version"] = hashlib.md5(
(data["spj_language"] + ":" + data["spj_code"]).encode("utf-8")).hexdigest()
else:
data["spj_language"] = None
data["spj_code"] = None
if data["rule_type"] == ProblemRuleType.OI:
total_score = 0
for item in data["test_case_score"]:
if item["score"] <= 0:
return "Invalid score"
else:
total_score += item["score"]
data["total_score"] = total_score
data["created_by"] = request.user
data["languages"] = list(data["languages"])
@problem_permission_required
def delete(self, request):
id = request.GET.get("id")
if not id:
return self.error("Invalid parameter, id is requred")
try:
problem = Problem.objects.get(id=id)
except Problem.DoesNotExist:
return self.error("Problem does not exists")
if Submission.objects.filter(problem=problem).exists():
return self.error("Can't delete the problem as it has submissions")
d = os.path.join(settings.TEST_CASE_DIR, problem.test_case_id)
if os.path.isdir(d):
shutil.rmtree(d, ignore_errors=True)
problem.delete()
return self.success()
class ProblemAPI(ProblemBase):
@validate_serializer(CreateProblemSerializer)
@problem_permission_required
def post(self, request):
data = request.data
_id = data["_id"]
if not _id:
return self.error("Display ID is required")
if Problem.objects.filter(_id=_id, contest_id__isnull=True).exists():
return self.error("Display ID already exists")
error_info = self.common_checks(request)
if error_info:
return self.error(error_info)
# todo check filename and score info
tags = data.pop("tags")
problem = Problem.objects.create(**data)
for item in tags:
try:
tag = ProblemTag.objects.get(name=item)
except ProblemTag.DoesNotExist:
tag = ProblemTag.objects.create(name=item)
problem.tags.add(tag)
return self.success(ProblemAdminSerializer(problem).data)
@problem_permission_required
def get(self, request):
problem_id = request.GET.get("id")
rule_type = request.GET.get("rule_type")
user = request.user
if problem_id:
try:
problem = Problem.objects.get(id=problem_id)
if not user.can_mgmt_all_problem() and problem.created_by != user:
return self.error("Problem does not exist")
return self.success(ProblemAdminSerializer(problem).data)
except Problem.DoesNotExist:
return self.error("Problem does not exist")
problems = Problem.objects.filter(contest_id__isnull=True).order_by("-create_time")
if rule_type:
if rule_type not in ProblemRuleType.choices():
return self.error("Invalid rule_type")
else:
problems = problems.filter(rule_type=rule_type)
if not user.can_mgmt_all_problem():
problems = problems.filter(created_by=user)
keyword = request.GET.get("keyword")
if keyword:
problems = problems.filter(title__contains=keyword)
return self.success(self.paginate_data(request, problems, ProblemAdminSerializer))
@validate_serializer(EditProblemSerializer)
@problem_permission_required
def put(self, request):
data = request.data
problem_id = data.pop("id")
user = request.user
try:
problem = Problem.objects.get(id=problem_id)
if not user.can_mgmt_all_problem() and problem.created_by != user:
return self.error("Problem does not exist")
except Problem.DoesNotExist:
return self.error("Problem does not exist")
_id = data["_id"]
if not _id:
return self.error("Display ID is required")
if Problem.objects.exclude(id=problem_id).filter(_id=_id, contest_id__isnull=True).exists():
return self.error("Display ID already exists")
error_info = self.common_checks(request)
if error_info:
return self.error(error_info)
# todo check filename and score info
tags = data.pop("tags")
data["languages"] = list(data["languages"])
for k, v in data.items():
setattr(problem, k, v)
problem.save()
problem.tags.remove(*problem.tags.all())
for tag in tags:
try:
tag = ProblemTag.objects.get(name=tag)
except ProblemTag.DoesNotExist:
tag = ProblemTag.objects.create(name=tag)
problem.tags.add(tag)
return self.success()
class ContestProblemAPI(ProblemBase):
@validate_serializer(CreateContestProblemSerializer)
@problem_permission_required
def post(self, request):
data = request.data
try:
contest = Contest.objects.get(id=data.pop("contest_id"))
if request.user.is_admin() and contest.created_by != request.user:
return self.error("Contest does not exist")
except Contest.DoesNotExist:
return self.error("Contest does not exist")
if data["rule_type"] != contest.rule_type:
return self.error("Invalid rule type")
_id = data["_id"]
if not _id:
return self.error("Display ID is required")
if Problem.objects.filter(_id=_id, contest=contest).exists():
return self.error("Duplicate Display id")
error_info = self.common_checks(request)
if error_info:
return self.error(error_info)
# todo check filename and score info
data["contest"] = contest
tags = data.pop("tags")
problem = Problem.objects.create(**data)
for item in tags:
try:
tag = ProblemTag.objects.get(name=item)
except ProblemTag.DoesNotExist:
tag = ProblemTag.objects.create(name=item)
problem.tags.add(tag)
return self.success(ProblemAdminSerializer(problem).data)
@problem_permission_required
def get(self, request):
problem_id = request.GET.get("id")
contest_id = request.GET.get("contest_id")
user = request.user
if problem_id:
try:
problem = Problem.objects.get(id=problem_id)
if user.is_admin() and problem.contest.created_by != user:
return self.error("Problem does not exist")
except Problem.DoesNotExist:
return self.error("Problem does not exist")
return self.success(ProblemAdminSerializer(problem).data)
if not contest_id:
return self.error("Contest id is required")
problems = Problem.objects.filter(contest_id=contest_id).order_by("-create_time")
if user.is_admin():
problems = problems.filter(contest__created_by=user)
keyword = request.GET.get("keyword")
if keyword:
problems = problems.filter(title__contains=keyword)
return self.success(self.paginate_data(request, problems, ProblemAdminSerializer))
@validate_serializer(EditContestProblemSerializer)
@problem_permission_required
def put(self, request):
data = request.data
try:
contest = Contest.objects.get(id=data.pop("contest_id"))
if request.user.is_admin() and contest.created_by != request.user:
return self.error("Contest does not exist")
except Contest.DoesNotExist:
return self.error("Contest does not exist")
if data["rule_type"] != contest.rule_type:
return self.error("Invalid rule type")
problem_id = data.pop("id")
user = request.user
try:
problem = Problem.objects.get(id=problem_id)
if not user.can_mgmt_all_problem() and problem.created_by != user:
return self.error("Problem does not exist")
except Problem.DoesNotExist:
return self.error("Problem does not exist")
_id = data["_id"]
if not _id:
return self.error("Display ID is required")
if Problem.objects.exclude(id=problem_id).filter(_id=_id, contest=contest).exists():
return self.error("Display ID already exists")
error_info = self.common_checks(request)
if error_info:
return self.error(error_info)
# todo check filename and score info
tags = data.pop("tags")
data["languages"] = list(data["languages"])
for k, v in data.items():
setattr(problem, k, v)
problem.save()
problem.tags.remove(*problem.tags.all())
for tag in tags:
try:
tag = ProblemTag.objects.get(name=tag)
except ProblemTag.DoesNotExist:
tag = ProblemTag.objects.create(name=tag)
problem.tags.add(tag)
return self.success()
class MakeContestProblemPublicAPIView(APIView):
@validate_serializer(ContestProblemMakePublicSerializer)
@problem_permission_required
def post(self, request):
data = request.data
display_id = data.get("display_id")
if Problem.objects.filter(_id=display_id, contest_id__isnull=True).exists():
return self.error("Duplicate display ID")
try:
problem = Problem.objects.get(id=data["id"])
except Problem.DoesNotExist:
return self.error("Problem does not exist")
if not problem.contest or problem.is_public:
return self.error("Alreay be a public problem")
problem.is_public = True
problem.save()
# https://docs.djangoproject.com/en/1.11/topics/db/queries/#copying-model-instances
tags = problem.tags.all()
problem.pk = None
problem.contest = None
problem._id = display_id
problem.visible = False
problem.submission_number = problem.accepted_number = 0
problem.statistic_info = {}
problem.save()
problem.tags.set(tags)
return self.success()
class AddContestProblemAPI(APIView):
@validate_serializer(AddContestProblemSerializer)
def post(self, request):
data = request.data
try:
contest = Contest.objects.get(id=data["contest_id"])
problem = Problem.objects.get(id=data["problem_id"])
except (Contest.DoesNotExist, Problem.DoesNotExist):
return self.error("Contest or Problem does not exist")
if contest.status == ContestStatus.CONTEST_ENDED:
return self.error("Contest has ended")
if Problem.objects.filter(contest=contest, _id=data["display_id"]).exists():
return self.error("Duplicate display id in this contest")
tags = problem.tags.all()
problem.pk = None
problem.contest = contest
problem.is_public = True
problem.visible = True
problem._id = request.data["display_id"]
problem.submission_number = problem.accepted_number = 0
problem.statistic_info = {}
problem.save()
problem.tags.set(tags)
return self.success()
| 38.454545 | 106 | 0.602562 | 2,094 | 18,189 | 5.074499 | 0.129895 | 0.053642 | 0.062112 | 0.028986 | 0.591568 | 0.554489 | 0.535761 | 0.509693 | 0.49313 | 0.470073 | 0 | 0.002882 | 0.294244 | 18,189 | 472 | 107 | 38.536017 | 0.824881 | 0.016329 | 0 | 0.558603 | 0 | 0 | 0.098468 | 0.005312 | 0 | 0 | 0 | 0.002119 | 0 | 1 | 0.034913 | false | 0 | 0.0399 | 0 | 0.25187 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
352aefd694ac55314cda9c06b8eb52a346c67466 | 5,390 | py | Python | classy/scripts/cli/evaluate.py | sunglasses-ai/classy | c166490a30d8ba6d7c25f70ce707b7a2ddcfb53f | [
"Apache-2.0"
] | 26 | 2021-10-17T08:32:53.000Z | 2022-03-30T10:57:13.000Z | classy/scripts/cli/evaluate.py | sunglasses-ai/classy | c166490a30d8ba6d7c25f70ce707b7a2ddcfb53f | [
"Apache-2.0"
] | 8 | 2021-11-02T20:57:44.000Z | 2022-03-13T09:42:29.000Z | classy/scripts/cli/evaluate.py | sunglasses-ai/classy | c166490a30d8ba6d7c25f70ce707b7a2ddcfb53f | [
"Apache-2.0"
] | null | null | null | from argparse import ArgumentParser
from pathlib import Path
from typing import Dict, Union
from argcomplete import FilesCompleter
from omegaconf import OmegaConf
from classy.data.data_drivers import DataDriver
from classy.scripts.cli.utils import (
autocomplete_model_path,
checkpoint_path_from_user_input,
get_device,
)
from classy.utils.help_cli import (
HELP_EVALUATE,
HELP_FILE_PATH,
HELP_MODEL_PATH,
HELP_PREDICTION_PARAMS,
HELP_TOKEN_BATCH_SIZE,
)
from classy.utils.train_coordinates import load_bundle
def populate_parser(parser: ArgumentParser):
parser.add_argument(
"model_path",
type=checkpoint_path_from_user_input,
help=HELP_MODEL_PATH,
).completer = autocomplete_model_path
parser.add_argument(
"file_path",
nargs="?",
default=None,
help=HELP_FILE_PATH,
).completer = FilesCompleter()
parser.add_argument(
"-d",
"--device",
default=None,
help="The device you will use for the evaluation. If not provided, classy will try to infer the desired behavior from the available environment.",
)
parser.add_argument(
"-o",
"--output-path",
default=None,
required=False,
help="""
Optional. If specified, the predictions will be stored in the "output_path" along with the gold labels and the
original sample.
""",
).completer = FilesCompleter()
parser.add_argument(
"--token-batch-size", type=int, default=1024, help=HELP_TOKEN_BATCH_SIZE
)
parser.add_argument(
"--evaluation-config", type=str, default=None, help=HELP_EVALUATE
)
parser.add_argument(
"--prediction-params", type=str, default=None, help=HELP_PREDICTION_PARAMS
)
parser.add_argument(
"-t",
"--output-type",
type=str,
default="tree",
choices=("tree", "json", "list"),
)
def get_parser(subparser=None) -> ArgumentParser:
    # subparser: Optional[argparse._SubParsersAction]
    parser_kwargs = dict(
        name="evaluate",
        description="evaluate a model trained using classy",
        help="Evaluate a model trained using classy.",
    )
    if subparser is not None:
        parser = subparser.add_parser(**parser_kwargs)
    else:
        # A bare ArgumentParser does not accept the "name"/"help" kwargs that
        # add_parser takes, so drop them when building a standalone parser.
        parser_kwargs.pop("name")
        parser_kwargs.pop("help")
        parser = ArgumentParser(**parser_kwargs)
    populate_parser(parser)
    return parser
def parse_args():
return get_parser().parse_args()
def automatically_infer_test_path(model_path: str) -> Union[str, Dict[str, DataDriver]]:
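    """Infer a test set to evaluate on from the model's training artifacts.

    Looks first for a "test" file in the experiment's own data/ split folder,
    then falls back to the dataset coordinates recorded at training time
    (preferring the test set, then the validation set). Returns either a path
    string or an already-loaded data bundle; raises ValueError if nothing can
    be inferred.
    """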
from classy.utils.lightning import load_training_conf_from_checkpoint
checkpoint_path = Path(model_path)
exp_split_data_folder = checkpoint_path.parent.parent.joinpath("data")
    # check whether a test split was created at training time (classy stores
    # such splits in the experiment's data/ folder)
if exp_split_data_folder.exists():
possible_test_files = [
fp for fp in exp_split_data_folder.iterdir() if fp.stem == "test"
]
if len(possible_test_files) == 1:
return str(possible_test_files[0])
    # otherwise inspect the dataset_path provided at training time: it may be
    # a coordinates file or a folder containing a test set
training_conf = load_training_conf_from_checkpoint(model_path)
dataset_path = Path(training_conf.data.datamodule.dataset_path)
if dataset_path.exists():
if dataset_path.is_file():
coordinates_dict = OmegaConf.load(dataset_path)
if "test_dataset" in coordinates_dict:
test_bundle = load_bundle(
coordinates_dict["test_dataset"], training_conf.data.datamodule.task
)
return test_bundle
elif "validation_dataset" in coordinates_dict:
validation_bundle = load_bundle(
coordinates_dict["validation_dataset"],
training_conf.data.datamodule.task,
)
return validation_bundle
if dataset_path.is_dir():
possible_test_files = [
fp for fp in dataset_path.iterdir() if fp.stem == "test"
]
if len(possible_test_files) == 1:
return str(possible_test_files[0])
    raise ValueError("could not automatically infer a test path")
def main(args):
# import here to avoid importing torch before it's actually needed
import torch
from classy.scripts.model.evaluate import evaluate
    # input_path: if explicitly provided, use it;
    # otherwise, try to infer its position
if args.file_path is not None:
input_path = args.file_path
else:
# try to infer path
try:
input_path = automatically_infer_test_path(args.model_path)
print(f"Test path automatically inferred to {input_path}")
except ValueError:
print("Failed to automatically infer test path")
input_path = input("Please, explicitly enter test path: ").strip()
# read device
device = args.device
if device is None and torch.cuda.is_available():
device = 0
device = get_device(device)
evaluate(
args.model_path,
device,
args.token_batch_size,
input_path,
output_type=args.output_type,
output_path=args.output_path,
evaluate_config_path=args.evaluation_config,
prediction_params=args.prediction_params,
metrics_fn=None,
)
if __name__ == "__main__":
main(parse_args())
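

# A hedged usage sketch, assuming the top-level "classy" console entry point
# registers this module as its "evaluate" subcommand; the paths and flag
# values below are illustrative placeholders, not real artifacts.
#
#   classy evaluate experiments/my-run/checkpoints/best.ckpt data/test.tsv \
#       --device cuda --token-batch-size 800 -o predictions.tsv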
| 31.156069 | 154 | 0.656586 | 644 | 5,390 | 5.246894 | 0.26087 | 0.026635 | 0.040249 | 0.016869 | 0.191773 | 0.116011 | 0.081681 | 0.04084 | 0.04084 | 0.04084 | 0 | 0.00226 | 0.261039 | 5,390 | 172 | 155 | 31.337209 | 0.846096 | 0.06679 | 0 | 0.137681 | 0 | 0.014493 | 0.141207 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036232 | false | 0 | 0.086957 | 0.007246 | 0.166667 | 0.014493 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
352b68bcf671b4810225b88ad6428d3134ef7df5 | 1,444 | py | Python | ponyhug/views/factions_view.py | marcsello/ponyhug-backend | 8ddb32531c93f0fbc202966f99186d5288e1df70 | [
"MIT"
] | 1 | 2021-02-20T23:12:10.000Z | 2021-02-20T23:12:10.000Z | ponyhug/views/factions_view.py | marcsello/ponyhug-backend | 8ddb32531c93f0fbc202966f99186d5288e1df70 | [
"MIT"
] | 1 | 2020-09-11T19:27:13.000Z | 2020-09-11T19:27:13.000Z | ponyhug/views/factions_view.py | marcsello/ponyhug-backend | 8ddb32531c93f0fbc202966f99186d5288e1df70 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from flask import abort, jsonify, request
from flask_classful import FlaskView
from marshmallow import ValidationError
from utils import ponytoken_required, this_player, anyadmin_required, json_required
from model import db, Faction
from schemas import FactionSchema
class FactionsView(FlaskView):
faction_schema = FactionSchema(many=False, exclude=["players"])
factions_schema = FactionSchema(many=True, exclude=["players"])
@ponytoken_required
def index(self):
factions = Faction.query.all()
return jsonify(self.factions_schema.dump(factions)), 200
@ponytoken_required
def get(self, id_: int):
faction = Faction.query.filter_by(id=id_).first_or_404("No such faction")
return jsonify(self.faction_schema.dump(faction)), 200
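    # flask_classful serves extra view methods such as the one below at
    # <route_base>/my/ (GET by default).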
@ponytoken_required
def my(self):
        return jsonify(self.faction_schema.dump(this_player().faction)), 200
@anyadmin_required
@json_required
def post(self):
try:
faction = self.faction_schema.load(request.get_json(), session=db.session)
except ValidationError as e:
return abort(422, str(e))
db.session.add(faction)
db.session.commit()
return jsonify(self.faction_schema.dump(faction)), 201
@anyadmin_required
def delete(self, id_: int):
Faction.query.filter_by(id=id_).delete()
db.session.commit()
return '', 204
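

# A minimal wiring sketch, assuming a plain Flask app; the route base below is
# an illustrative choice, not taken from this project's actual setup.
if __name__ == "__main__":
    from flask import Flask

    app = Flask(__name__)
    FactionsView.register(app, route_base="/factions")
    app.run(debug=True)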
| 30.083333 | 86 | 0.697368 | 177 | 1,444 | 5.531073 | 0.384181 | 0.066394 | 0.069459 | 0.073544 | 0.167518 | 0.167518 | 0.083759 | 0 | 0 | 0 | 0 | 0.016479 | 0.201524 | 1,444 | 47 | 87 | 30.723404 | 0.832611 | 0.014543 | 0 | 0.2 | 0 | 0 | 0.020394 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.171429 | 0.028571 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
352bb62cdb42fc77feea713fabd8c74f160b76e5 | 2,547 | py | Python | sharebears/filters.py | mgp/sharebears | aabb2c568707cea1107498a05d7f56bd772ef4fb | [
"Apache-2.0"
] | 1 | 2015-01-17T20:02:14.000Z | 2015-01-17T20:02:14.000Z | sharebears/filters.py | mgp/sharebears | aabb2c568707cea1107498a05d7f56bd772ef4fb | [
"Apache-2.0"
] | null | null | null | sharebears/filters.py | mgp/sharebears | aabb2c568707cea1107498a05d7f56bd772ef4fb | [
"Apache-2.0"
] | null | null | null | import album_item
import paragraph_item
from renderable_item import RenderableItem
from url_decoder_github import GitHubRepositoryUrlDecoder
from url_decoder_image import ImageUrlDecoder
from url_decoder_twitter import TwitterTweetUrlDecoder
from url_decoder_youtube import YouTubeUrlDecoder
def _youtube_player_id(video_id, post_id=0, item_id=0):
return "ytplayer-%s-%s-%s" % (video_id, post_id, item_id)
def _has_renderer(renderer_name):
"""Returns a function that returns whether a RendererItem has the given renderer name."""
return lambda item: item.get_renderer_name() == renderer_name
_SECONDS_PER_HOUR = 60 * 60
_SECONDS_PER_MINUTE = 60
def _get_day_components(created_timedelta):
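    """Split created_timedelta.seconds into whole (hours, minutes, seconds)."""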
seconds = created_timedelta.seconds
    # Get the number of whole hours (integer division: under Python 3, "/"
    # would yield a float and break the "%sh ago" formatting below).
    hours = seconds // _SECONDS_PER_HOUR
    seconds %= _SECONDS_PER_HOUR
    # Get the number of whole minutes.
    minutes = seconds // _SECONDS_PER_MINUTE
    seconds %= _SECONDS_PER_MINUTE
return (hours, minutes, seconds)
_JUST_NOW_STRING = "just now"
def _get_time_ago_string(created_datetime, now):
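    """Return a short human-readable "time ago" string, e.g. "3h ago" or "Jan 5"."""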
if created_datetime > now:
return _JUST_NOW_STRING
created_timedelta = now - created_datetime
if created_timedelta.days > 0:
# Avoid using strftime with %d because it returns a leading zero.
month_name = created_datetime.strftime("%b")
if created_datetime.year == now.year:
return "%s %s" % (month_name, created_datetime.day)
else:
return "%s %s, %s" % (month_name, created_datetime.day, created_datetime.year)
hours, minutes, seconds = _get_day_components(created_timedelta)
if hours > 0:
return "%sh ago" % hours
elif minutes > 0:
return "%sm ago" % minutes
elif seconds > 0:
return "%ss ago" % seconds
else:
return _JUST_NOW_STRING
def add_to_environment(environment):
env_filters = environment.filters
env_filters["timeagostring"] = _get_time_ago_string
env_filters["youtubeplayerid"] = _youtube_player_id
env_filters["isparagraph"] = _has_renderer(paragraph_item._PARAGRAPH_ITEM_TYPE)
env_filters["istext"] = lambda item: item.type == RenderableItem.TEXT_TYPE
env_filters["isurl"] = lambda item: item.type == RenderableItem.URL_TYPE
env_filters["isimage"] = _has_renderer(ImageUrlDecoder.name())
env_filters["istweet"] = _has_renderer(TwitterTweetUrlDecoder.name())
env_filters["isyoutubevideo"] = _has_renderer(YouTubeUrlDecoder.name())
env_filters["isalbum"] = _has_renderer(album_item._ALBUM_ITEM_TYPE)
env_filters["isgithubrepo"] = _has_renderer(GitHubRepositoryUrlDecoder.name())
| 33.077922 | 91 | 0.768748 | 335 | 2,547 | 5.483582 | 0.280597 | 0.05988 | 0.030484 | 0.039194 | 0.101252 | 0.031573 | 0.031573 | 0 | 0 | 0 | 0 | 0.005492 | 0.142128 | 2,547 | 76 | 92 | 33.513158 | 0.83524 | 0.078524 | 0 | 0.076923 | 0 | 0 | 0.068007 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096154 | false | 0 | 0.134615 | 0.019231 | 0.423077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
352ec195e7ffed9c660d204252fb1131a05d4022 | 592 | py | Python | easy/string/unique_email_addresses.py | vladimirf7/leetcode | 83faa201f188bcd0367ea95a252d9166735827cd | [
"MIT"
] | null | null | null | easy/string/unique_email_addresses.py | vladimirf7/leetcode | 83faa201f188bcd0367ea95a252d9166735827cd | [
"MIT"
] | null | null | null | easy/string/unique_email_addresses.py | vladimirf7/leetcode | 83faa201f188bcd0367ea95a252d9166735827cd | [
"MIT"
] | null | null | null | """solved, easy"""
class Solution:
def numUniqueEmails(self, emails):
"""
:type emails: List[str]
:rtype: int
"""
unique = set()
for email in emails:
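            # Normalize the local part (before '@'): drop everything after '+'
            # and remove dots; the domain is kept verbatim.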
index = email.find("@")
unique.add(
email[:index].split("+")[0].replace(".", "") + email[index:])
return len(unique)
if __name__ == "__main__":
assert Solution().numUniqueEmails([
"test.email+alex@leetcode.com",
"test.e.mail+bob.cathy@leetcode.com",
"testemail+david@lee.tcode.com"]
) == 2 | 24.666667 | 77 | 0.501689 | 59 | 592 | 4.898305 | 0.728814 | 0.069204 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005038 | 0.329392 | 592 | 24 | 78 | 24.666667 | 0.722922 | 0.081081 | 0 | 0 | 0 | 0 | 0.200787 | 0.179134 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |