hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6b2b728930bad3115a85ebba51da0679afe260aa | 1,410 | py | Python | tools/keystorer.py | c-castillo/pyethereum | 6e0e9a937c16d67faed16c5fce9a21bdef5a58ee | [
"MIT"
] | null | null | null | tools/keystorer.py | c-castillo/pyethereum | 6e0e9a937c16d67faed16c5fce9a21bdef5a58ee | [
"MIT"
] | null | null | null | tools/keystorer.py | c-castillo/pyethereum | 6e0e9a937c16d67faed16c5fce9a21bdef5a58ee | [
"MIT"
] | null | null | null | #!/usr/bin/python2.7
import sys, json, os
import getpass

try:
    import keys
except ImportError:
    try:
        import ethereum.keys as keys
    except ImportError:
        raise Exception("keys module not found")

# Help
if len(sys.argv) < 2:
    print("Use `keystorer.py create <optional privkey>` to create a key store file, and `keystorer.py getprivkey <filename>` or `keystorer.py getaddress <filename>` to get a privkey/address from a key store file, respectively")
# Create a json
elif sys.argv[1] == 'create':
    if len(sys.argv) < 3:
        key = os.urandom(32)
    else:
        key = keys.decode_hex(sys.argv[2])
    pw = getpass.getpass()
    pw2 = getpass.getpass()
    assert pw == pw2, "Password mismatch"
    print("Applying hard key derivation function. Wait a little")
    j = keys.make_keystore_json(key, pw)
    print(j)
    open(j["id"] + '.json', 'w').write(json.dumps(j, indent=4))
    print("Wallet creation successful, file saved at: " + j["id"] + ".json")
# Decode a json
elif sys.argv[1] in ('getprivkey', 'getaddress'):
    if len(sys.argv) < 3:
        raise Exception("Need filename")
    keystore = json.loads(open(sys.argv[2]).read())
    pw = getpass.getpass()
    print("Applying hard key derivation function. Wait a little")
    k = keys.decode_keystore_json(keystore, pw)
    if sys.argv[1] == 'getprivkey':
        print(keys.encode_hex(k))
    else:
        print(keys.encode_hex(keys.privtoaddr(k)))
| 32.790698 | 226 | 0.646809 | 206 | 1,410 | 4.393204 | 0.417476 | 0.061878 | 0.026519 | 0.039779 | 0.174586 | 0.145856 | 0.108287 | 0.108287 | 0.108287 | 0 | 0 | 0.013514 | 0.212766 | 1,410 | 42 | 227 | 33.571429 | 0.801802 | 0.036879 | 0 | 0.342857 | 0 | 0.028571 | 0.341211 | 0 | 0 | 0 | 0 | 0 | 0.028571 | 0 | null | null | 0.142857 | 0.114286 | null | null | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
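When no private key is supplied, `keystorer.py` generates one with `os.urandom(32)`. A standalone sketch of just that key-generation step, using only the standard library (`to_hex` is a hypothetical stand-in for `keys.encode_hex`, which lives in pyethereum and is not reproduced here):

```python
import os
import binascii

def generate_privkey():
    """Generate a random 32-byte private key, as keystorer.py does."""
    return os.urandom(32)

def to_hex(key):
    """Hex-encode a key for display (stand-in for keys.encode_hex)."""
    return binascii.hexlify(key).decode("ascii")

key = generate_privkey()
hexkey = to_hex(key)
print(len(hexkey))  # 64 hex characters for a 32-byte key
```

The real script then feeds such a key through scrypt-based key derivation (`make_keystore_json`) before writing the keystore to disk.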
6b364017b8bc40efa3da770f403003f450e8a8e3 | 581 | py | Python | dataactcore/migrations/versions/9f7136e38e21_.py | brianherman/data-act-broker-backend | 80eb055b9d245046192f7ad4fd0be7d0e11d2dec | [
"CC0-1.0"
] | 1 | 2019-06-22T21:53:16.000Z | 2019-06-22T21:53:16.000Z | dataactcore/migrations/versions/9f7136e38e21_.py | brianherman/data-act-broker-backend | 80eb055b9d245046192f7ad4fd0be7d0e11d2dec | [
"CC0-1.0"
] | 3 | 2021-08-22T11:47:45.000Z | 2022-03-29T22:06:49.000Z | dataactcore/migrations/versions/9f7136e38e21_.py | brianherman/data-act-broker-backend | 80eb055b9d245046192f7ad4fd0be7d0e11d2dec | [
"CC0-1.0"
] | 1 | 2020-07-17T23:50:56.000Z | 2020-07-17T23:50:56.000Z | """Merge a62138efd429 and fc6be41471a3
Revision ID: 9f7136e38e21
Revises: a62138efd429, fc6be41471a3
Create Date: 2016-11-29 13:54:49.342595
"""
# revision identifiers, used by Alembic.
revision = '9f7136e38e21'
down_revision = ('a62138efd429', 'fc6be41471a3')
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade(engine_name):
    globals()["upgrade_%s" % engine_name]()


def downgrade(engine_name):
    globals()["downgrade_%s" % engine_name]()


def upgrade_data_broker():
    pass


def downgrade_data_broker():
    pass
| 15.702703 | 48 | 0.736661 | 72 | 581 | 5.763889 | 0.597222 | 0.096386 | 0.081928 | 0.06747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169043 | 0.154905 | 581 | 36 | 49 | 16.138889 | 0.676171 | 0.306368 | 0 | 0.142857 | 0 | 0 | 0.147208 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.142857 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
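The stub migration above routes `upgrade`/`downgrade` to a per-engine function via a `globals()` name lookup. A self-contained sketch of that dispatch pattern (the `calls` list is added here purely as instrumentation):

```python
calls = []

def upgrade_data_broker():
    calls.append("upgrade_data_broker")

def downgrade_data_broker():
    calls.append("downgrade_data_broker")

def upgrade(engine_name):
    # look up upgrade_<engine_name> in the module namespace and call it
    globals()["upgrade_%s" % engine_name]()

def downgrade(engine_name):
    globals()["downgrade_%s" % engine_name]()

upgrade("data_broker")
downgrade("data_broker")
print(calls)  # ['upgrade_data_broker', 'downgrade_data_broker']
```

Alembic's multidb template uses exactly this convention so one migration file can carry separate bodies per database engine.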
6b376e6ef1f1420a0a95e58f07c99c3fb5fa8fa0 | 5,252 | py | Python | montecarlo/distributions/dist.py | dhaystead/MonteCarlo | 557a15455e9a5eaabf2d36e430ba9328331a1941 | [
"MIT"
] | null | null | null | montecarlo/distributions/dist.py | dhaystead/MonteCarlo | 557a15455e9a5eaabf2d36e430ba9328331a1941 | [
"MIT"
] | null | null | null | montecarlo/distributions/dist.py | dhaystead/MonteCarlo | 557a15455e9a5eaabf2d36e430ba9328331a1941 | [
"MIT"
] | null | null | null | import numpy as np
from numpy import pi, exp
from scipy.special import gamma as gamma_func
from scipy.spatial import distance


class Distribution:
    """Base probability distribution class"""

    _parameter_names = ''
    multi = False

    def __init__(self, **kwargs):
        pass

    @property
    def rvs(self):
        """Returns single random sample"""
        return self.sample(1)[0]

    def sample(self, n, seed=None):
        # draw samples from distribution
        raise NotImplementedError()

    def pdf(self, x):
        # compute p(x)
        raise NotImplementedError()

    @property
    def _param_values(self):
        return ",".join(f"{value}" for value in self.__dict__.values())

    @property
    def _get_params_str(self):
        """Return name and value for each parameter"""
        d = dict(zip(self._parameter_names, self.__dict__.values()))
        return ",".join(f"{param}={value}" for param, value in d.items())

    @property
    def parameters(self):
        return ",".join(f"{key}={value}" for key, value in self.__dict__.items())

    def __repr__(self):
        return f'{self.__class__.__name__}({self.parameters})'

    def __str__(self):
        return f'{self.__class__.__name__} distribution with {self._get_params_str}'

    # @classmethod
    # def from_params(cls, params):
    #     """ Create a distribution from parameter tuple
    #     --------
    #     >>> x = (0,1)
    #     >>> Gaussian(x)
    #     """
    #     kwargs = dict(zip(cls.parameters(), params))
    #     return cls(**kwargs)


class Gaussian(Distribution):
    """Gaussian distribution"""

    _parameter_names = "mean", "standard deviation"

    def __init__(self, mu=0, sd=1):
        self.mu = mu
        self.sd = sd

    @property
    def var(self):
        return self.sd**2

    def __add__(self, other):
        """Function to add together two Gaussian distributions

        Args:
            other (Gaussian): Gaussian instance

        Returns:
            Gaussian: Gaussian distribution
        """
        mu_new = self.mu + other.mu
        sd_new = np.sqrt(self.var + other.var)
        return Gaussian(mu_new, sd_new)

    def pdf(self, x):
        # the exponent divides by the variance (2*sd**2), not 2*sd
        return exp(-(x - self.mu)**2 / (2 * self.var)) / (self.sd * np.sqrt(2.0 * pi))

    def sample(self, n, seed=None):
        rng = np.random.default_rng(seed)
        return rng.normal(self.mu, self.sd, size=n)


class Uniform(Distribution):
    _parameter_names = "lower bound", "upper bound"

    def __init__(self, a=0, b=1):
        self.a = a
        self.b = b

    def pdf(self, x):
        if (x >= self.a) and (x <= self.b):
            return 1 / (self.b - self.a)
        else:
            return 0

    def sample(self, n, seed=None):
        rng = np.random.default_rng(seed)
        return rng.uniform(self.a, self.b, n)


class Exponential(Distribution):
    _parameter_names = "scale"

    def __init__(self, l=1):
        self.l = l

    def pdf(self, x):
        if x < 0:
            return 0
        return self.l * np.exp(-self.l * x)

    def sample(self, n, seed=None):
        rng = np.random.default_rng(seed)
        beta = 1 / self.l
        return rng.exponential(scale=beta, size=n)


class Gamma(Distribution):
    _parameter_names = "shape", "scale"

    def __init__(self, k, t):
        self.k = k
        self.t = t

    def pdf(self, x):
        # normalisation constant is t**k * Gamma(k)
        return x**(self.k - 1) * exp(-x / self.t) / (self.t**self.k * gamma_func(self.k))

    def sample(self, n, seed=None):
        rng = np.random.default_rng(seed)
        return rng.gamma(self.k, self.t, size=n)


class Logistic(Distribution):
    _parameter_names = "mean", "standard deviation"

    def __init__(self, mu, sd):
        self.mu = mu
        self.sd = sd

    def pdf(self, x):
        e1 = np.exp((x - self.mu) / (2 * self.sd))
        e2 = np.exp(-(x - self.mu) / (2 * self.sd))
        return 1 / (self.sd * (e1 + e2)**2)

    def sample(self, n, seed=None):
        rng = np.random.default_rng(seed)
        return rng.logistic(self.mu, self.sd, size=n)


class Lognormal(Distribution):
    _parameter_names = "mean", "standard deviation"

    def __init__(self, mu=0, sd=1):
        self.mu = mu
        self.sd = sd

    def pdf(self, x):
        d = x * self.sd * np.sqrt(2.0 * pi)
        return 1 / d * exp(-(np.log(x) - self.mu)**2 / (2 * self.sd**2))

    def sample(self, n, seed=None):
        rng = np.random.default_rng(seed)
        return rng.lognormal(self.mu, self.sd, size=n)


class MultivariateGaussian(Distribution):
    _parameter_names = "mean", "covariance"
    multi = True

    def __init__(self, mu, cov, dim=2):
        self.mu = mu
        self.cov = cov
        self.dim = dim

    def mahalanobis(self, x):
        return distance.mahalanobis(x, self.mu, self.inv_cov)

    @property
    def inv_cov(self):
        return np.linalg.inv(self.cov)

    def pdf(self, x):
        # the determinant belongs under the square root along with (2*pi)**dim
        det = np.linalg.det(self.cov)
        return exp(-(self.mahalanobis(x)**2) / 2) / np.sqrt((2 * pi)**self.dim * det)

    def sample(self, n, seed=None):
        rng = np.random.default_rng(seed)
        return rng.multivariate_normal(self.mu, self.cov, size=n)


mvg = MultivariateGaussian(mu=np.array([1, 1]), cov=np.matrix('1, 0; 0, 1'))
| 27.642105 | 85 | 0.573686 | 718 | 5,252 | 4.036212 | 0.17688 | 0.037267 | 0.071774 | 0.030366 | 0.31815 | 0.305383 | 0.279158 | 0.23499 | 0.221877 | 0.221877 | 0 | 0.012198 | 0.281988 | 5,252 | 189 | 86 | 27.78836 | 0.756298 | 0.097677 | 0 | 0.354839 | 0 | 0 | 0.060196 | 0.020065 | 0 | 0 | 0 | 0 | 0 | 1 | 0.274194 | false | 0.008065 | 0.032258 | 0.072581 | 0.66129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
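A brief usage sketch for the distribution classes above: draw seeded samples and sanity-check the empirical mean against the true mean, which is the basic Monte Carlo workflow these classes support. Only a minimal `Gaussian` is reproduced here (with the variance, not the standard deviation, in the pdf exponent); numpy is assumed.

```python
import numpy as np
from numpy import pi, exp

class Gaussian:
    """Minimal copy of the Gaussian class above."""
    def __init__(self, mu=0, sd=1):
        self.mu = mu
        self.sd = sd

    def pdf(self, x):
        # density of N(mu, sd**2)
        return exp(-(x - self.mu)**2 / (2 * self.sd**2)) / (self.sd * np.sqrt(2.0 * pi))

    def sample(self, n, seed=None):
        rng = np.random.default_rng(seed)
        return rng.normal(self.mu, self.sd, size=n)

g = Gaussian(mu=1.0, sd=2.0)
xs = g.sample(100_000, seed=42)
# for large n the sample mean should be close to mu = 1.0
print(round(float(xs.mean()), 2))
```

The same pattern (seeded `sample`, compare a sample statistic to the analytic value) applies to every distribution in the module.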
6b3def15b55a406227ceccf43e07a08924d9e5c9 | 229 | py | Python | app/app/schemas/assignment.py | gooocho/fastapi_todo | b88177e651f1c6984a636262a4d686935b67ed6f | [
"MIT"
] | null | null | null | app/app/schemas/assignment.py | gooocho/fastapi_todo | b88177e651f1c6984a636262a4d686935b67ed6f | [
"MIT"
] | null | null | null | app/app/schemas/assignment.py | gooocho/fastapi_todo | b88177e651f1c6984a636262a4d686935b67ed6f | [
"MIT"
] | null | null | null | from pydantic import BaseModel
class AssignmentBase(BaseModel):
    user_id: int
    task_id: int


class AssignmentCreate(AssignmentBase):
    pass


class Assignment(AssignmentBase):
    class Config:
        orm_mode = True
| 14.3125 | 39 | 0.724891 | 25 | 229 | 6.52 | 0.68 | 0.06135 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.218341 | 229 | 15 | 40 | 15.266667 | 0.910615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.111111 | 0.111111 | 0 | 0.777778 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
6b41544a38410e0eb0f2567be0559d90ba00833c | 22,137 | py | Python | tests/components/ssdp/test_init.py | grayjeremy/core | 0294a6399896aabe97e6fb5d1e784ebb71dea22b | [
"Apache-2.0"
] | 2 | 2020-03-29T05:32:57.000Z | 2021-06-13T06:55:05.000Z | tests/components/ssdp/test_init.py | grayjeremy/core | 0294a6399896aabe97e6fb5d1e784ebb71dea22b | [
"Apache-2.0"
] | 45 | 2021-04-19T09:52:10.000Z | 2022-03-31T06:09:38.000Z | tests/components/ssdp/test_init.py | grayjeremy/core | 0294a6399896aabe97e6fb5d1e784ebb71dea22b | [
"Apache-2.0"
] | null | null | null | """Test the SSDP integration."""
import asyncio
from datetime import timedelta
from ipaddress import IPv4Address, IPv6Address
from unittest.mock import patch
import aiohttp
from async_upnp_client.search import SSDPListener
from async_upnp_client.utils import CaseInsensitiveDict
import pytest
from homeassistant import config_entries
from homeassistant.components import ssdp
from homeassistant.const import (
    EVENT_HOMEASSISTANT_STARTED,
    EVENT_HOMEASSISTANT_STOP,
    MATCH_ALL,
)
from homeassistant.core import CoreState, callback
from homeassistant.setup import async_setup_component
import homeassistant.util.dt as dt_util
from tests.common import async_fire_time_changed, mock_coro
def _patched_ssdp_listener(info, *args, **kwargs):
    listener = SSDPListener(*args, **kwargs)

    async def _async_callback(*_):
        await listener.async_callback(info)

    listener.async_start = _async_callback
    return listener


async def _async_run_mocked_scan(hass, mock_ssdp_response, mock_get_ssdp):
    def _generate_fake_ssdp_listener(*args, **kwargs):
        return _patched_ssdp_listener(
            mock_ssdp_response,
            *args,
            **kwargs,
        )

    with patch(
        "homeassistant.components.ssdp.async_get_ssdp",
        return_value=mock_get_ssdp,
    ), patch(
        "homeassistant.components.ssdp.SSDPListener",
        new=_generate_fake_ssdp_listener,
    ), patch.object(
        hass.config_entries.flow, "async_init", return_value=mock_coro()
    ) as mock_init:
        assert await async_setup_component(hass, ssdp.DOMAIN, {ssdp.DOMAIN: {}})
        hass.bus.async_fire(EVENT_HOMEASSISTANT_STARTED)
        await hass.async_block_till_done()
        await hass.async_block_till_done()
    return mock_init
async def test_scan_match_st(hass, caplog):
    """Test matching based on ST."""
    mock_ssdp_response = {
        "st": "mock-st",
        "location": None,
        "usn": "mock-usn",
        "server": "mock-server",
        "ext": "",
    }
    mock_get_ssdp = {"mock-domain": [{"st": "mock-st"}]}
    mock_init = await _async_run_mocked_scan(hass, mock_ssdp_response, mock_get_ssdp)

    assert len(mock_init.mock_calls) == 1
    assert mock_init.mock_calls[0][1][0] == "mock-domain"
    assert mock_init.mock_calls[0][2]["context"] == {
        "source": config_entries.SOURCE_SSDP
    }
    assert mock_init.mock_calls[0][2]["data"] == {
        ssdp.ATTR_SSDP_ST: "mock-st",
        ssdp.ATTR_SSDP_LOCATION: None,
        ssdp.ATTR_SSDP_USN: "mock-usn",
        ssdp.ATTR_SSDP_SERVER: "mock-server",
        ssdp.ATTR_SSDP_EXT: "",
    }
    assert "Failed to fetch ssdp data" not in caplog.text


async def test_partial_response(hass, caplog):
    """Test location and st missing."""
    mock_ssdp_response = {
        "usn": "mock-usn",
        "server": "mock-server",
        "ext": "",
    }
    mock_get_ssdp = {"mock-domain": [{"st": "mock-st"}]}
    mock_init = await _async_run_mocked_scan(hass, mock_ssdp_response, mock_get_ssdp)

    assert len(mock_init.mock_calls) == 0
@pytest.mark.parametrize(
    "key", (ssdp.ATTR_UPNP_MANUFACTURER, ssdp.ATTR_UPNP_DEVICE_TYPE)
)
async def test_scan_match_upnp_devicedesc(hass, aioclient_mock, key):
    """Test matching based on UPnP device description data."""
    aioclient_mock.get(
        "http://1.1.1.1",
        text=f"""
<root>
  <device>
    <{key}>Paulus</{key}>
  </device>
</root>
    """,
    )
    mock_get_ssdp = {"mock-domain": [{key: "Paulus"}]}
    mock_ssdp_response = {
        "st": "mock-st",
        "location": "http://1.1.1.1",
    }
    mock_init = await _async_run_mocked_scan(hass, mock_ssdp_response, mock_get_ssdp)

    # If we get duplicate responses, ensure we only look it up once
    assert len(aioclient_mock.mock_calls) == 1
    assert len(mock_init.mock_calls) == 1
    assert mock_init.mock_calls[0][1][0] == "mock-domain"
    assert mock_init.mock_calls[0][2]["context"] == {
        "source": config_entries.SOURCE_SSDP
    }
async def test_scan_not_all_present(hass, aioclient_mock):
    """Test match fails if some specified attributes are not present."""
    aioclient_mock.get(
        "http://1.1.1.1",
        text="""
<root>
  <device>
    <deviceType>Paulus</deviceType>
  </device>
</root>
    """,
    )
    mock_ssdp_response = {
        "st": "mock-st",
        "location": "http://1.1.1.1",
    }
    mock_get_ssdp = {
        "mock-domain": [
            {
                ssdp.ATTR_UPNP_DEVICE_TYPE: "Paulus",
                ssdp.ATTR_UPNP_MANUFACTURER: "Paulus",
            }
        ]
    }
    mock_init = await _async_run_mocked_scan(hass, mock_ssdp_response, mock_get_ssdp)

    assert not mock_init.mock_calls


async def test_scan_not_all_match(hass, aioclient_mock):
    """Test match fails if some specified attribute values differ."""
    aioclient_mock.get(
        "http://1.1.1.1",
        text="""
<root>
  <device>
    <deviceType>Paulus</deviceType>
    <manufacturer>Paulus</manufacturer>
  </device>
</root>
    """,
    )
    mock_ssdp_response = {
        "st": "mock-st",
        "location": "http://1.1.1.1",
    }
    mock_get_ssdp = {
        "mock-domain": [
            {
                ssdp.ATTR_UPNP_DEVICE_TYPE: "Paulus",
                ssdp.ATTR_UPNP_MANUFACTURER: "Not-Paulus",
            }
        ]
    }
    mock_init = await _async_run_mocked_scan(hass, mock_ssdp_response, mock_get_ssdp)

    assert not mock_init.mock_calls
@pytest.mark.parametrize("exc", [asyncio.TimeoutError, aiohttp.ClientError])
async def test_scan_description_fetch_fail(hass, aioclient_mock, exc):
    """Test failing to fetch description."""
    aioclient_mock.get("http://1.1.1.1", exc=exc)
    mock_ssdp_response = {
        "st": "mock-st",
        "usn": "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL::urn:mdx-netflix-com:service:target:3",
        "location": "http://1.1.1.1",
    }
    mock_get_ssdp = {
        "mock-domain": [
            {
                ssdp.ATTR_UPNP_DEVICE_TYPE: "Paulus",
                ssdp.ATTR_UPNP_MANUFACTURER: "Paulus",
            }
        ]
    }
    mock_init = await _async_run_mocked_scan(hass, mock_ssdp_response, mock_get_ssdp)

    assert not mock_init.mock_calls
    assert ssdp.async_get_discovery_info_by_st(hass, "mock-st") == [
        {
            "UDN": "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL",
            "ssdp_location": "http://1.1.1.1",
            "ssdp_st": "mock-st",
            "ssdp_usn": "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL::urn:mdx-netflix-com:service:target:3",
        }
    ]


async def test_scan_description_parse_fail(hass, aioclient_mock):
    """Test invalid XML."""
    aioclient_mock.get(
        "http://1.1.1.1",
        text="""
<root>INVALIDXML
    """,
    )
    mock_ssdp_response = {
        "st": "mock-st",
        "location": "http://1.1.1.1",
    }
    mock_get_ssdp = {
        "mock-domain": [
            {
                ssdp.ATTR_UPNP_DEVICE_TYPE: "Paulus",
                ssdp.ATTR_UPNP_MANUFACTURER: "Paulus",
            }
        ]
    }
    mock_init = await _async_run_mocked_scan(hass, mock_ssdp_response, mock_get_ssdp)

    assert not mock_init.mock_calls
async def test_invalid_characters(hass, aioclient_mock):
    """Test that we replace bad characters with placeholders."""
    aioclient_mock.get(
        "http://1.1.1.1",
        text="""
<root>
  <device>
    <deviceType>ABC</deviceType>
    <serialNumber>\xff\xff\xff\xff</serialNumber>
  </device>
</root>
    """,
    )
    mock_ssdp_response = {
        "st": "mock-st",
        "location": "http://1.1.1.1",
    }
    mock_get_ssdp = {
        "mock-domain": [
            {
                ssdp.ATTR_UPNP_DEVICE_TYPE: "ABC",
            }
        ]
    }
    mock_init = await _async_run_mocked_scan(hass, mock_ssdp_response, mock_get_ssdp)

    assert len(mock_init.mock_calls) == 1
    assert mock_init.mock_calls[0][1][0] == "mock-domain"
    assert mock_init.mock_calls[0][2]["context"] == {
        "source": config_entries.SOURCE_SSDP
    }
    assert mock_init.mock_calls[0][2]["data"] == {
        "ssdp_location": "http://1.1.1.1",
        "ssdp_st": "mock-st",
        "deviceType": "ABC",
        "serialNumber": "ÿÿÿÿ",
    }


@patch("homeassistant.components.ssdp.SSDPListener.async_start")
@patch("homeassistant.components.ssdp.SSDPListener.async_search")
async def test_start_stop_scanner(async_start_mock, async_search_mock, hass):
    """Test we start and stop the scanner."""
    assert await async_setup_component(hass, ssdp.DOMAIN, {ssdp.DOMAIN: {}})

    hass.bus.async_fire(EVENT_HOMEASSISTANT_STARTED)
    await hass.async_block_till_done()
    async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=200))
    await hass.async_block_till_done()
    assert async_start_mock.call_count == 1
    assert async_search_mock.call_count == 1

    hass.bus.async_fire(EVENT_HOMEASSISTANT_STOP)
    await hass.async_block_till_done()
    async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=200))
    await hass.async_block_till_done()
    assert async_start_mock.call_count == 1
    assert async_search_mock.call_count == 1
async def test_unexpected_exception_while_fetching(hass, aioclient_mock, caplog):
    """Test unexpected exception while fetching."""
    aioclient_mock.get(
        "http://1.1.1.1",
        text="""
<root>
  <device>
    <deviceType>ABC</deviceType>
    <serialNumber>\xff\xff\xff\xff</serialNumber>
  </device>
</root>
    """,
    )
    mock_ssdp_response = {
        "st": "mock-st",
        "location": "http://1.1.1.1",
    }
    mock_get_ssdp = {
        "mock-domain": [
            {
                ssdp.ATTR_UPNP_DEVICE_TYPE: "ABC",
            }
        ]
    }
    with patch(
        "homeassistant.components.ssdp.descriptions.ElementTree.fromstring",
        side_effect=ValueError,
    ):
        mock_init = await _async_run_mocked_scan(
            hass, mock_ssdp_response, mock_get_ssdp
        )

    assert len(mock_init.mock_calls) == 0
    assert "Failed to fetch ssdp data from: http://1.1.1.1" in caplog.text
async def test_scan_with_registered_callback(hass, aioclient_mock, caplog):
    """Test matching based on callback."""
    aioclient_mock.get(
        "http://1.1.1.1",
        text="""
<root>
  <device>
    <deviceType>Paulus</deviceType>
  </device>
</root>
    """,
    )
    mock_ssdp_response = {
        "st": "mock-st",
        "location": "http://1.1.1.1",
        "usn": "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL::urn:mdx-netflix-com:service:target:3",
        "server": "mock-server",
        "x-rincon-bootseq": "55",
        "ext": "",
    }
    not_matching_intergration_callbacks = []
    intergration_match_all_callbacks = []
    intergration_match_all_not_present_callbacks = []
    intergration_callbacks = []
    intergration_callbacks_from_cache = []
    match_any_callbacks = []

    @callback
    def _async_exception_callbacks(info):
        raise ValueError

    @callback
    def _async_intergration_callbacks(info):
        intergration_callbacks.append(info)

    @callback
    def _async_intergration_match_all_callbacks(info):
        intergration_match_all_callbacks.append(info)

    @callback
    def _async_intergration_match_all_not_present_callbacks(info):
        intergration_match_all_not_present_callbacks.append(info)

    @callback
    def _async_intergration_callbacks_from_cache(info):
        intergration_callbacks_from_cache.append(info)

    @callback
    def _async_not_matching_intergration_callbacks(info):
        not_matching_intergration_callbacks.append(info)

    @callback
    def _async_match_any_callbacks(info):
        match_any_callbacks.append(info)

    def _generate_fake_ssdp_listener(*args, **kwargs):
        listener = SSDPListener(*args, **kwargs)

        async def _async_callback(*_):
            await listener.async_callback(mock_ssdp_response)

        @callback
        def _callback(*_):
            hass.async_create_task(listener.async_callback(mock_ssdp_response))

        listener.async_start = _async_callback
        listener.async_search = _callback
        return listener

    with patch(
        "homeassistant.components.ssdp.SSDPListener",
        new=_generate_fake_ssdp_listener,
    ):
        hass.state = CoreState.stopped
        assert await async_setup_component(hass, ssdp.DOMAIN, {ssdp.DOMAIN: {}})
        await hass.async_block_till_done()
        ssdp.async_register_callback(hass, _async_exception_callbacks, {})
        ssdp.async_register_callback(
            hass,
            _async_intergration_callbacks,
            {"st": "mock-st"},
        )
        ssdp.async_register_callback(
            hass,
            _async_intergration_match_all_callbacks,
            {"x-rincon-bootseq": MATCH_ALL},
        )
        ssdp.async_register_callback(
            hass,
            _async_intergration_match_all_not_present_callbacks,
            {"x-not-there": MATCH_ALL},
        )
        ssdp.async_register_callback(
            hass,
            _async_not_matching_intergration_callbacks,
            {"st": "not-match-mock-st"},
        )
        ssdp.async_register_callback(
            hass,
            _async_match_any_callbacks,
        )
        await hass.async_block_till_done()
        async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=200))
        ssdp.async_register_callback(
            hass,
            _async_intergration_callbacks_from_cache,
            {"st": "mock-st"},
        )
        await hass.async_block_till_done()
        hass.bus.async_fire(EVENT_HOMEASSISTANT_STARTED)
        hass.state = CoreState.running
        await hass.async_block_till_done()
        async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=200))
        await hass.async_block_till_done()
        assert hass.state == CoreState.running

    assert len(intergration_callbacks) == 3
    assert len(intergration_callbacks_from_cache) == 3
    assert len(intergration_match_all_callbacks) == 3
    assert len(intergration_match_all_not_present_callbacks) == 0
    assert len(match_any_callbacks) == 3
    assert len(not_matching_intergration_callbacks) == 0
    assert intergration_callbacks[0] == {
        ssdp.ATTR_UPNP_DEVICE_TYPE: "Paulus",
        ssdp.ATTR_SSDP_EXT: "",
        ssdp.ATTR_SSDP_LOCATION: "http://1.1.1.1",
        ssdp.ATTR_SSDP_SERVER: "mock-server",
        ssdp.ATTR_SSDP_ST: "mock-st",
        ssdp.ATTR_SSDP_USN: "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL::urn:mdx-netflix-com:service:target:3",
        ssdp.ATTR_UPNP_UDN: "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL",
        "x-rincon-bootseq": "55",
    }
    assert "Failed to callback info" in caplog.text
async def test_scan_second_hit(hass, aioclient_mock, caplog):
    """Test matching on second scan."""
    aioclient_mock.get(
        "http://1.1.1.1",
        text="""
<root>
  <device>
    <deviceType>Paulus</deviceType>
  </device>
</root>
    """,
    )
    mock_ssdp_response = CaseInsensitiveDict(
        **{
            "ST": "mock-st",
            "LOCATION": "http://1.1.1.1",
            "USN": "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL::urn:mdx-netflix-com:service:target:3",
            "SERVER": "mock-server",
            "EXT": "",
        }
    )
    mock_get_ssdp = {"mock-domain": [{"st": "mock-st"}]}
    intergration_callbacks = []

    @callback
    def _async_intergration_callbacks(info):
        intergration_callbacks.append(info)

    def _generate_fake_ssdp_listener(*args, **kwargs):
        listener = SSDPListener(*args, **kwargs)

        async def _async_callback(*_):
            pass

        @callback
        def _callback(*_):
            hass.async_create_task(listener.async_callback(mock_ssdp_response))

        listener.async_start = _async_callback
        listener.async_search = _callback
        return listener

    with patch(
        "homeassistant.components.ssdp.async_get_ssdp",
        return_value=mock_get_ssdp,
    ), patch(
        "homeassistant.components.ssdp.SSDPListener",
        new=_generate_fake_ssdp_listener,
    ), patch.object(
        hass.config_entries.flow, "async_init", return_value=mock_coro()
    ) as mock_init:
        assert await async_setup_component(hass, ssdp.DOMAIN, {ssdp.DOMAIN: {}})
        await hass.async_block_till_done()
        remove = ssdp.async_register_callback(
            hass,
            _async_intergration_callbacks,
            {"st": "mock-st"},
        )
        hass.bus.async_fire(EVENT_HOMEASSISTANT_STARTED)
        await hass.async_block_till_done()
        async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=200))
        await hass.async_block_till_done()
        async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=200))
        await hass.async_block_till_done()
        remove()
        async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=200))
        await hass.async_block_till_done()

    assert len(intergration_callbacks) == 2
    assert intergration_callbacks[0] == {
        ssdp.ATTR_UPNP_DEVICE_TYPE: "Paulus",
        ssdp.ATTR_SSDP_EXT: "",
        ssdp.ATTR_SSDP_LOCATION: "http://1.1.1.1",
        ssdp.ATTR_SSDP_SERVER: "mock-server",
        ssdp.ATTR_SSDP_ST: "mock-st",
        ssdp.ATTR_SSDP_USN: "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL::urn:mdx-netflix-com:service:target:3",
        ssdp.ATTR_UPNP_UDN: "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL",
    }
    assert len(mock_init.mock_calls) == 1
    assert mock_init.mock_calls[0][1][0] == "mock-domain"
    assert mock_init.mock_calls[0][2]["context"] == {
        "source": config_entries.SOURCE_SSDP
    }
    assert mock_init.mock_calls[0][2]["data"] == {
        ssdp.ATTR_UPNP_DEVICE_TYPE: "Paulus",
        ssdp.ATTR_SSDP_ST: "mock-st",
        ssdp.ATTR_SSDP_LOCATION: "http://1.1.1.1",
        ssdp.ATTR_SSDP_SERVER: "mock-server",
        ssdp.ATTR_SSDP_EXT: "",
        ssdp.ATTR_SSDP_USN: "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL::urn:mdx-netflix-com:service:target:3",
        ssdp.ATTR_UPNP_UDN: "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL",
    }
    assert "Failed to fetch ssdp data" not in caplog.text

    udn_discovery_info = ssdp.async_get_discovery_info_by_st(hass, "mock-st")
    discovery_info = udn_discovery_info[0]
    assert discovery_info[ssdp.ATTR_SSDP_LOCATION] == "http://1.1.1.1"
    assert discovery_info[ssdp.ATTR_SSDP_ST] == "mock-st"
    assert (
        discovery_info[ssdp.ATTR_UPNP_UDN]
        == "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL"
    )
    assert (
        discovery_info[ssdp.ATTR_SSDP_USN]
        == "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL::urn:mdx-netflix-com:service:target:3"
    )

    st_discovery_info = ssdp.async_get_discovery_info_by_udn(
        hass, "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL"
    )
    discovery_info = st_discovery_info[0]
    assert discovery_info[ssdp.ATTR_SSDP_LOCATION] == "http://1.1.1.1"
    assert discovery_info[ssdp.ATTR_SSDP_ST] == "mock-st"
    assert (
        discovery_info[ssdp.ATTR_UPNP_UDN]
        == "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL"
    )
    assert (
        discovery_info[ssdp.ATTR_SSDP_USN]
        == "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL::urn:mdx-netflix-com:service:target:3"
    )

    discovery_info = ssdp.async_get_discovery_info_by_udn_st(
        hass, "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL", "mock-st"
    )
    assert discovery_info[ssdp.ATTR_SSDP_LOCATION] == "http://1.1.1.1"
    assert discovery_info[ssdp.ATTR_SSDP_ST] == "mock-st"
    assert (
        discovery_info[ssdp.ATTR_UPNP_UDN]
        == "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL"
    )
    assert (
        discovery_info[ssdp.ATTR_SSDP_USN]
        == "uuid:TIVRTLSR7ANF-D6E-1557809135086-RETAIL::urn:mdx-netflix-com:service:target:3"
    )

    assert ssdp.async_get_discovery_info_by_udn_st(hass, "wrong", "mock-st") is None
_ADAPTERS_WITH_MANUAL_CONFIG = [
    {
        "auto": True,
        "default": False,
        "enabled": True,
        "ipv4": [],
        "ipv6": [
            {
                "address": "2001:db8::",
                "network_prefix": 8,
                "flowinfo": 1,
                "scope_id": 1,
            }
        ],
        "name": "eth0",
    },
    {
        "auto": True,
        "default": False,
        "enabled": True,
        "ipv4": [{"address": "192.168.1.5", "network_prefix": 23}],
        "ipv6": [],
        "name": "eth1",
    },
    {
        "auto": False,
        "default": False,
        "enabled": False,
        "ipv4": [{"address": "169.254.3.2", "network_prefix": 16}],
        "ipv6": [],
        "name": "vtun0",
    },
]
async def test_async_detect_interfaces_setting_empty_route(hass):
    """Test without default interface config and the route returns nothing."""
    mock_get_ssdp = {
        "mock-domain": [
            {
                ssdp.ATTR_UPNP_DEVICE_TYPE: "ABC",
            }
        ]
    }
    create_args = []

    def _generate_fake_ssdp_listener(*args, **kwargs):
        create_args.append([args, kwargs])
        listener = SSDPListener(*args, **kwargs)

        async def _async_callback(*_):
            pass

        @callback
        def _callback(*_):
            pass

        listener.async_start = _async_callback
        listener.async_search = _callback
        return listener

    with patch(
        "homeassistant.components.ssdp.async_get_ssdp",
        return_value=mock_get_ssdp,
    ), patch(
        "homeassistant.components.ssdp.SSDPListener",
        new=_generate_fake_ssdp_listener,
    ), patch(
        "homeassistant.components.ssdp.network.async_get_adapters",
        return_value=_ADAPTERS_WITH_MANUAL_CONFIG,
    ):
        assert await async_setup_component(hass, ssdp.DOMAIN, {ssdp.DOMAIN: {}})
        await hass.async_block_till_done()
        hass.bus.async_fire(EVENT_HOMEASSISTANT_STARTED)
        await hass.async_block_till_done()

    assert {create_args[0][1]["source_ip"], create_args[1][1]["source_ip"]} == {
        IPv4Address("192.168.1.5"),
        IPv6Address("2001:db8::"),
    }
| 31.624286 | 111 | 0.63884 | 2,657 | 22,137 | 5.004516 | 0.094091 | 0.012334 | 0.012183 | 0.014214 | 0.786192 | 0.745281 | 0.696924 | 0.666842 | 0.653155 | 0.618034 | 0 | 0.031607 | 0.236798 | 22,137 | 699 | 112 | 31.669528 | 0.755431 | 0.003975 | 0 | 0.613333 | 0 | 0.016667 | 0.192654 | 0.094927 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.026667 | false | 0.005 | 0.025 | 0.001667 | 0.061667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6b4321b71e025727969edbd03c6de2879bf645f0 | 400 | py | Python | server/ImageProcessor.py | Thukor/MazeSolver | c953e193ce27a7348e8ec9c5592144426dfce193 | [
"MIT"
] | 5 | 2018-02-06T22:48:34.000Z | 2020-01-07T20:19:05.000Z | server/ImageProcessor.py | Thukor/MazeSolver | c953e193ce27a7348e8ec9c5592144426dfce193 | [
"MIT"
] | 11 | 2018-01-31T21:47:49.000Z | 2018-04-21T16:42:52.000Z | server/ImageProcessor.py | Thukor/MazeSolver | c953e193ce27a7348e8ec9c5592144426dfce193 | [
"MIT"
] | 2 | 2020-06-18T05:40:03.000Z | 2022-02-02T03:46:30.000Z | from image_processing import *
"""
Processor Class for images
"""
class ImageProcessor:
#initialize strategies
def __init__(self,strategies):
self.strategies = [birdseye_correction, image_segmentation]
#We interpret each set of processing functions as strategies.
def process_image(image_name, number):
birdseye_correction(image_name, number)
image_segmentation("warped.png", number)
| 22.222222 | 62 | 0.7875 | 47 | 400 | 6.446809 | 0.617021 | 0.085809 | 0.151815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13 | 400 | 17 | 63 | 23.529412 | 0.87069 | 0.2025 | 0 | 0 | 0 | 0 | 0.035336 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6b43a84b4a033e1f5fdbee3e48dbefc07a5cfd19 | 1,844 | py | Python | botenv/lib/python3.9/site-packages/apscheduler/schedulers/twisted.py | 0xtuytuy/unit-crypto-ski-week-poap-bot | 9bab0a6013a29db9ce76311d4f6fa1d0922ac5c1 | [
"MIT"
] | 3 | 2021-04-28T04:52:27.000Z | 2021-08-30T10:10:10.000Z | newenv/lib/python3.8/site-packages/apscheduler/schedulers/twisted.py | palakshivlani-11/cryptorium | eebb78c061007519e527b3d18b8df6bc13679c46 | [
"Apache-2.0"
] | 21 | 2021-02-04T01:37:44.000Z | 2022-03-12T01:00:55.000Z | newenv/lib/python3.8/site-packages/apscheduler/schedulers/twisted.py | palakshivlani-11/cryptorium | eebb78c061007519e527b3d18b8df6bc13679c46 | [
"Apache-2.0"
] | 8 | 2020-10-30T18:44:03.000Z | 2022-02-24T22:15:47.000Z | from __future__ import absolute_import
from functools import wraps

from apscheduler.schedulers.base import BaseScheduler
from apscheduler.util import maybe_ref

try:
    from twisted.internet import reactor as default_reactor
except ImportError:  # pragma: nocover
    raise ImportError('TwistedScheduler requires Twisted installed')


def run_in_reactor(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        self._reactor.callFromThread(func, self, *args, **kwargs)

    return wrapper


class TwistedScheduler(BaseScheduler):
    """
    A scheduler that runs on a Twisted reactor.

    Extra options:

    =========== ========================================================
    ``reactor`` Reactor instance to use (defaults to the global reactor)
    =========== ========================================================
    """

    _reactor = None
    _delayedcall = None

    def _configure(self, config):
        self._reactor = maybe_ref(config.pop('reactor', default_reactor))
        super(TwistedScheduler, self)._configure(config)

    @run_in_reactor
    def shutdown(self, wait=True):
        super(TwistedScheduler, self).shutdown(wait)
        self._stop_timer()

    def _start_timer(self, wait_seconds):
        self._stop_timer()
        if wait_seconds is not None:
            self._delayedcall = self._reactor.callLater(wait_seconds, self.wakeup)

    def _stop_timer(self):
        if self._delayedcall and self._delayedcall.active():
            self._delayedcall.cancel()
            del self._delayedcall

    @run_in_reactor
    def wakeup(self):
        self._stop_timer()
        wait_seconds = self._process_jobs()
        self._start_timer(wait_seconds)

    def _create_default_executor(self):
        from apscheduler.executors.twisted import TwistedExecutor
        return TwistedExecutor()
| 29.269841 | 82 | 0.645336 | 195 | 1,844 | 5.851282 | 0.405128 | 0.048203 | 0.031551 | 0.026293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205531 | 1,844 | 62 | 83 | 29.741935 | 0.77884 | 0.154013 | 0 | 0.128205 | 0 | 0 | 0.032723 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205128 | false | 0 | 0.205128 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8608d9b47872018a9d7678b06a0fb32bd5e009b2 | 50 | py | Python | bot/config.py | maxsaltonstall/letters-with-strangers | 80eabd75c7d08ab51a47fb7f4680312c1d1830b0 | [
"Apache-2.0"
] | 3 | 2021-08-20T00:01:51.000Z | 2021-09-28T12:04:28.000Z | bot/config.py | maxsaltonstall/letters-with-strangers | 80eabd75c7d08ab51a47fb7f4680312c1d1830b0 | [
"Apache-2.0"
] | 59 | 2021-07-30T15:31:04.000Z | 2022-01-10T23:14:41.000Z | bot/config.py | maxsaltonstall/letters-with-strangers | 80eabd75c7d08ab51a47fb7f4680312c1d1830b0 | [
"Apache-2.0"
] | null | null | null | # default values
cloud_monitoring_enabled = False
| 16.666667 | 32 | 0.84 | 6 | 50 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 50 | 2 | 33 | 25 | 0.909091 | 0.28 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
860af3e708bb7b69059ff160e65393f98282513b | 766 | py | Python | util.py | haby0/ghas-jira-integration | 09ab22d88611a7842f98eb2db29b35ed30640b42 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | util.py | haby0/ghas-jira-integration | 09ab22d88611a7842f98eb2db29b35ed30640b42 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | util.py | haby0/ghas-jira-integration | 09ab22d88611a7842f98eb2db29b35ed30640b42 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | import hashlib
import os.path
import json

REQUEST_TIMEOUT = 10


def state_from_json(s):
    j = json.loads(s)
    if "version" not in j:
        return {}
    return j["states"]


def state_to_json(state):
    final = {"version": 2, "states": state}
    return json.dumps(final, indent=2, sort_keys=True)


def state_from_file(fpath):
    if os.path.isfile(fpath):
        with open(fpath, "r") as f:
            return state_from_json(f.read())
    return {}


def state_to_file(fpath, state):
    with open(fpath, "w") as f:
        f.write(state_to_json(state))


def make_key(s):
    sha_3 = hashlib.sha3_256()
    sha_3.update(s.encode("utf-8"))
    return sha_3.hexdigest()


def json_accept_header():
    return {"Accept": "application/vnd.github.v3+json"}
| 19.15 | 55 | 0.642298 | 119 | 766 | 3.957983 | 0.470588 | 0.067941 | 0.050955 | 0.067941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021703 | 0.218016 | 766 | 39 | 56 | 19.641026 | 0.764608 | 0 | 0 | 0.076923 | 0 | 0 | 0.090078 | 0.039164 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.115385 | 0.038462 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8611867bef4474ae28411d607b4ba203528d979a | 1,406 | py | Python | section-04-research-and-development/preprocessors.py | karanvijaygit/DMLM | aaeb3e65d0a58ad583289aaa39b089f11d06a4eb | [
"BSD-3-Clause"
] | 477 | 2019-02-14T11:24:29.000Z | 2022-03-31T08:43:50.000Z | section-04-research-and-development/preprocessors.py | karanvijaygit/DMLM | aaeb3e65d0a58ad583289aaa39b089f11d06a4eb | [
"BSD-3-Clause"
] | 51 | 2019-05-11T11:00:48.000Z | 2021-12-08T14:50:33.000Z | section-04-research-and-development/preprocessors.py | karanvijaygit/DMLM | aaeb3e65d0a58ad583289aaa39b089f11d06a4eb | [
"BSD-3-Clause"
] | 4,870 | 2019-01-20T11:04:50.000Z | 2022-03-31T12:37:17.000Z | import numpy as np
import pandas as pd

from sklearn.base import BaseEstimator, TransformerMixin


class TemporalVariableTransformer(BaseEstimator, TransformerMixin):
    # Temporal elapsed time transformer

    def __init__(self, variables, reference_variable):
        if not isinstance(variables, list):
            raise ValueError('variables should be a list')

        self.variables = variables
        self.reference_variable = reference_variable

    def fit(self, X, y=None):
        # we need this step to fit the sklearn pipeline
        return self

    def transform(self, X):
        # so that we do not over-write the original dataframe
        X = X.copy()

        for feature in self.variables:
            X[feature] = X[self.reference_variable] - X[feature]

        return X


# categorical missing value imputer
class Mapper(BaseEstimator, TransformerMixin):

    def __init__(self, variables, mappings):
        if not isinstance(variables, list):
            raise ValueError('variables should be a list')

        self.variables = variables
        self.mappings = mappings

    def fit(self, X, y=None):
        # we need the fit statement to accommodate the sklearn pipeline
        return self

    def transform(self, X):
        X = X.copy()
        for feature in self.variables:
            X[feature] = X[feature].map(self.mappings)

        return X
return X | 25.563636 | 70 | 0.648649 | 167 | 1,406 | 5.389222 | 0.389222 | 0.086667 | 0.024444 | 0.044444 | 0.44 | 0.44 | 0.44 | 0.44 | 0.391111 | 0.291111 | 0 | 0 | 0.280228 | 1,406 | 55 | 71 | 25.563636 | 0.889328 | 0.16074 | 0 | 0.62069 | 0 | 0 | 0.044255 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.206897 | false | 0 | 0.103448 | 0.068966 | 0.517241 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8631de2360e48efc03de18a508b746b6dc0a84e8 | 4,851 | py | Python | src/dns-resolver/azext_dnsresolver/vendored_sdks/dnsresolver/models/__init__.py | Caoxuyang/azure-cli-extensions | d2011261f29033cb31a1064256727d87049ab423 | [
"MIT"
] | null | null | null | src/dns-resolver/azext_dnsresolver/vendored_sdks/dnsresolver/models/__init__.py | Caoxuyang/azure-cli-extensions | d2011261f29033cb31a1064256727d87049ab423 | [
"MIT"
] | 9 | 2022-03-25T19:35:49.000Z | 2022-03-31T06:09:47.000Z | src/dns-resolver/azext_dnsresolver/vendored_sdks/dnsresolver/models/__init__.py | Caoxuyang/azure-cli-extensions | d2011261f29033cb31a1064256727d87049ab423 | [
"MIT"
] | 1 | 2022-02-14T21:43:29.000Z | 2022-02-14T21:43:29.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
try:
from ._models_py3 import CloudErrorBody
from ._models_py3 import DnsForwardingRuleset
from ._models_py3 import DnsForwardingRulesetListResult
from ._models_py3 import DnsForwardingRulesetPatch
from ._models_py3 import DnsResolver
from ._models_py3 import DnsResolverListResult
from ._models_py3 import DnsResolverPatch
from ._models_py3 import ForwardingRule
from ._models_py3 import ForwardingRuleListResult
from ._models_py3 import ForwardingRulePatch
from ._models_py3 import InboundEndpoint
from ._models_py3 import InboundEndpointListResult
from ._models_py3 import InboundEndpointPatch
from ._models_py3 import IpConfiguration
from ._models_py3 import OutboundEndpoint
from ._models_py3 import OutboundEndpointListResult
from ._models_py3 import OutboundEndpointPatch
from ._models_py3 import ProxyResource
from ._models_py3 import Resource
from ._models_py3 import SubResource
from ._models_py3 import SubResourceListResult
from ._models_py3 import SystemData
from ._models_py3 import TargetDnsServer
from ._models_py3 import TrackedResource
from ._models_py3 import VirtualNetworkDnsForwardingRuleset
from ._models_py3 import VirtualNetworkDnsForwardingRulesetListResult
from ._models_py3 import VirtualNetworkLink
from ._models_py3 import VirtualNetworkLinkListResult
from ._models_py3 import VirtualNetworkLinkPatch
except (SyntaxError, ImportError):
from ._models import CloudErrorBody # type: ignore
from ._models import DnsForwardingRuleset # type: ignore
from ._models import DnsForwardingRulesetListResult # type: ignore
from ._models import DnsForwardingRulesetPatch # type: ignore
from ._models import DnsResolver # type: ignore
from ._models import DnsResolverListResult # type: ignore
from ._models import DnsResolverPatch # type: ignore
from ._models import ForwardingRule # type: ignore
from ._models import ForwardingRuleListResult # type: ignore
from ._models import ForwardingRulePatch # type: ignore
from ._models import InboundEndpoint # type: ignore
from ._models import InboundEndpointListResult # type: ignore
from ._models import InboundEndpointPatch # type: ignore
from ._models import IpConfiguration # type: ignore
from ._models import OutboundEndpoint # type: ignore
from ._models import OutboundEndpointListResult # type: ignore
from ._models import OutboundEndpointPatch # type: ignore
from ._models import ProxyResource # type: ignore
from ._models import Resource # type: ignore
from ._models import SubResource # type: ignore
from ._models import SubResourceListResult # type: ignore
from ._models import SystemData # type: ignore
from ._models import TargetDnsServer # type: ignore
from ._models import TrackedResource # type: ignore
from ._models import VirtualNetworkDnsForwardingRuleset # type: ignore
from ._models import VirtualNetworkDnsForwardingRulesetListResult # type: ignore
from ._models import VirtualNetworkLink # type: ignore
from ._models import VirtualNetworkLinkListResult # type: ignore
from ._models import VirtualNetworkLinkPatch # type: ignore
from ._dns_resolver_management_client_enums import (
CreatedByType,
DnsResolverState,
ForwardingRuleState,
IpAllocationMethod,
ProvisioningState,
)
__all__ = [
'CloudErrorBody',
'DnsForwardingRuleset',
'DnsForwardingRulesetListResult',
'DnsForwardingRulesetPatch',
'DnsResolver',
'DnsResolverListResult',
'DnsResolverPatch',
'ForwardingRule',
'ForwardingRuleListResult',
'ForwardingRulePatch',
'InboundEndpoint',
'InboundEndpointListResult',
'InboundEndpointPatch',
'IpConfiguration',
'OutboundEndpoint',
'OutboundEndpointListResult',
'OutboundEndpointPatch',
'ProxyResource',
'Resource',
'SubResource',
'SubResourceListResult',
'SystemData',
'TargetDnsServer',
'TrackedResource',
'VirtualNetworkDnsForwardingRuleset',
'VirtualNetworkDnsForwardingRulesetListResult',
'VirtualNetworkLink',
'VirtualNetworkLinkListResult',
'VirtualNetworkLinkPatch',
'CreatedByType',
'DnsResolverState',
'ForwardingRuleState',
'IpAllocationMethod',
'ProvisioningState',
]
| 42.552632 | 94 | 0.743146 | 417 | 4,851 | 8.414868 | 0.213429 | 0.165289 | 0.107438 | 0.157025 | 0.254773 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007487 | 0.173985 | 4,851 | 113 | 95 | 42.929204 | 0.868231 | 0.170893 | 0 | 0 | 0 | 0 | 0.164366 | 0.080803 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.582524 | 0 | 0.582524 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
863578b06cea0cbd905008acf1c406fcb3d20ee9 | 52 | py | Python | tests/pyconverter-test/cases/array_index_type.py | jaydeetay/pxt | aad1beaf15edc46e1327806367298cbc942dcbc1 | [
"MIT"
] | 977 | 2019-05-06T23:12:55.000Z | 2022-03-29T19:11:44.000Z | tests/pyconverter-test/cases/array_index_type.py | jaydeetay/pxt | aad1beaf15edc46e1327806367298cbc942dcbc1 | [
"MIT"
] | 3,980 | 2019-05-09T20:48:14.000Z | 2022-03-28T20:33:07.000Z | tests/pyconverter-test/cases/array_index_type.py | jaydeetay/pxt | aad1beaf15edc46e1327806367298cbc942dcbc1 | [
"MIT"
] | 306 | 2016-04-09T05:28:07.000Z | 2019-05-02T14:23:29.000Z | foo = [7]
i = foo[0]
testNamespace.numberArgument(i) | 17.333333 | 31 | 0.711538 | 8 | 52 | 4.625 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.115385 | 52 | 3 | 31 | 17.333333 | 0.76087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8637b415094ab24ad0e88bbbd0f0876265e3a02b | 6,364 | py | Python | infoblox_netmri/api/broker/v3_8_0/access_objects_broker.py | infobloxopen/infoblox_netmri | aa1c744df7e439dbe163bb9edd165e4e85a9771b | [
"Apache-2.0"
] | 12 | 2016-02-19T12:37:54.000Z | 2022-03-04T20:11:08.000Z | infoblox_netmri/api/broker/v3_8_0/access_objects_broker.py | azinfoblox/infoblox-netmri | 02372c5231e2677ab6299cb659a73c9a41b4b0f4 | [
"Apache-2.0"
] | 18 | 2015-11-12T18:37:00.000Z | 2021-05-19T07:59:55.000Z | infoblox_netmri/api/broker/v3_8_0/access_objects_broker.py | azinfoblox/infoblox-netmri | 02372c5231e2677ab6299cb659a73c9a41b4b0f4 | [
"Apache-2.0"
] | 18 | 2016-01-07T12:04:34.000Z | 2022-03-31T11:05:41.000Z | from ..broker import Broker
class AccessObjectsBroker(Broker):
    controller = "access_objects"

    def children(self, **kwargs):
        """Returns tree-format information about the children of a node, in a tree view.

        **Inputs**

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` DeviceObject

        :param tree_type: types of objects in the tree, one of ["DeviceObject", "DeviceService"]
        :type tree_type: String

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` normal

        :param search_mode: kind of search within the objects of the database
        :type search_mode: String

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` None

        :param search_key: Search identifier for contextual search process. Used only for contextual search.
        :type search_key: String

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:``

        :param search: String to search in the objects' names or direct values
        :type search: String

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` none

        :param group_mode: organization of the tree-view result
        :type group_mode: String

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` None

        :param parent_node_id: identifier of an existing node of the tree, for which we want the children nodes; if omitted, the parent is the absolute root node of the tree
        :type parent_node_id: String

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` 0

        :param start: Position of the first node of the returned array, in the brotherhood
        :type start: Integer

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` 50

        :param limit: Maximum number of nodes expected to return
        :type limit: Integer

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` tree

        :param formatting: output format; only one value, 'tree'
        :type formatting: String

        **Outputs**

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` None

        :return children: list of nodes. Each one is a hash [id, node_type, label, tooltip, icon, has_details, leaf, children] representing a child node
        :rtype children: Array
        """

        return self.api_request(self._get_method_fullname("children"), kwargs)

    def cancel_search(self, **kwargs):
        """Cancels background search processes. Currently only works for contextual search.

        **Inputs**

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` True
        |  ``default:`` None

        :param search_key: Search identifier for contextual search process. Used only for contextual search.
        :type search_key: String

        **Outputs**
        """

        return self.api_request(self._get_method_fullname("cancel_search"), kwargs)

    def parents(self, **kwargs):
        """Returns the parents of this node.

        **Inputs**

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` True
        |  ``default:`` None

        :param object_id: The tree node id for which we want the parents
        :type object_id: String

        **Outputs**

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` None

        :return parents: list of parents.
        :rtype parents: Array
        """

        return self.api_request(self._get_method_fullname("parents"), kwargs)

    def to_detail(self, **kwargs):
        """Returns the detail for an object.

        **Inputs**

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` True
        |  ``default:`` None

        :param object_id: None
        :type object_id: String

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` 0

        :param view: 0=tostring, 1=tooltip, 2=popup
        :type view: Integer

        **Outputs**

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` None

        :return detail: None
        :rtype detail: String
        """

        return self.api_request(self._get_method_fullname("to_detail"), kwargs)

    def object_picker(self, **kwargs):
        """Returns a tree-format of the full sub-tree for this node. Expected to be displayed as a separate tree in a tree-popup window.

        **Inputs**

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` False
        |  ``default:`` DeviceObject

        :param main_type: types of objects searched, one of ["DeviceObject", "DeviceService"]
        :type main_type: String

        |  ``api version min:`` None
        |  ``api version max:`` None
        |  ``required:`` True
        |  ``default:`` None

        :param query: query string for matching fields
        :type query: String

        **Outputs**
        """

        return self.api_request(self._get_method_fullname("object_picker"), kwargs)
| 31.043902 | 174 | 0.511471 | 646 | 6,364 | 4.962848 | 0.210526 | 0.112289 | 0.072988 | 0.095446 | 0.559264 | 0.527449 | 0.527449 | 0.527449 | 0.51466 | 0.485964 | 0 | 0.001764 | 0.376493 | 6,364 | 204 | 175 | 31.196078 | 0.8062 | 0.646292 | 0 | 0 | 0 | 0 | 0.086604 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.384615 | false | 0 | 0.076923 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8649ca33ac0b7e2d14eb8e51717ad6d167626749 | 2,609 | py | Python | src/generated-spec/synthetics.py | wheerd/cloudformation-to-terraform | 5411b33293e1f7d7673bb5d4cb52ff0537240db3 | [
"MIT"
] | null | null | null | src/generated-spec/synthetics.py | wheerd/cloudformation-to-terraform | 5411b33293e1f7d7673bb5d4cb52ff0537240db3 | [
"MIT"
] | null | null | null | src/generated-spec/synthetics.py | wheerd/cloudformation-to-terraform | 5411b33293e1f7d7673bb5d4cb52ff0537240db3 | [
"MIT"
] | null | null | null | from . import *
class AWS_Synthetics_Canary_Code(CloudFormationProperty):
def write(self, w):
with w.block("code"):
self.property(w, "S3Bucket", "s3_bucket", StringValueConverter())
self.property(w, "S3Key", "s3_key", StringValueConverter())
self.property(w, "S3ObjectVersion", "s3_object_version", StringValueConverter())
self.property(w, "Script", "script", StringValueConverter())
self.property(w, "Handler", "handler", StringValueConverter())
class AWS_Synthetics_Canary_VPCConfig(CloudFormationProperty):
def write(self, w):
with w.block("vpc_config"):
self.property(w, "VpcId", "vpc_id", StringValueConverter())
self.property(w, "SubnetIds", "subnet_ids", ListValueConverter(StringValueConverter()))
self.property(w, "SecurityGroupIds", "security_group_ids", ListValueConverter(StringValueConverter()))
class AWS_Synthetics_Canary_RunConfig(CloudFormationProperty):
def write(self, w):
with w.block("run_config"):
self.property(w, "TimeoutInSeconds", "timeout_in_seconds", BasicValueConverter())
class AWS_Synthetics_Canary_Schedule(CloudFormationProperty):
def write(self, w):
with w.block("schedule"):
self.property(w, "Expression", "expression", StringValueConverter())
self.property(w, "DurationInSeconds", "duration_in_seconds", StringValueConverter())
class AWS_Synthetics_Canary(CloudFormationResource):
cfn_type = "AWS::Synthetics::Canary"
tf_type = "aws_synthetics_canary" # TODO: Most likely not working
ref = "arn"
attrs = {
"Id": "id",
"State": "state",
}
def write(self, w):
with self.resource_block(w):
self.property(w, "Name", "name", StringValueConverter())
self.block(w, "Code", AWS_Synthetics_Canary_Code)
self.property(w, "ArtifactS3Location", "artifact_s3_location", StringValueConverter())
self.block(w, "Schedule", AWS_Synthetics_Canary_Schedule)
self.property(w, "ExecutionRoleArn", "execution_role_arn", StringValueConverter())
self.property(w, "RuntimeVersion", "runtime_version", StringValueConverter())
self.property(w, "SuccessRetentionPeriod", "success_retention_period", BasicValueConverter())
self.property(w, "FailureRetentionPeriod", "failure_retention_period", BasicValueConverter())
self.property(w, "Tags", "tags", ListValueConverter(ResourceTag()))
self.block(w, "VPCConfig", AWS_Synthetics_Canary_VPCConfig)
self.block(w, "RunConfig", AWS_Synthetics_Canary_RunConfig)
self.property(w, "StartCanaryAfterCreation", "start_canary_after_creation", BasicValueConverter())
| 44.220339 | 108 | 0.729782 | 269 | 2,609 | 6.866171 | 0.312268 | 0.123443 | 0.13373 | 0.160801 | 0.272334 | 0.148349 | 0.097455 | 0.097455 | 0 | 0 | 0 | 0.003538 | 0.133384 | 2,609 | 58 | 109 | 44.982759 | 0.813357 | 0.011115 | 0 | 0.108696 | 0 | 0 | 0.241848 | 0.072593 | 0 | 0 | 0 | 0.017241 | 0 | 1 | 0.108696 | false | 0 | 0.021739 | 0 | 0.326087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
865742b34349f60b7b18554331dcee356dab5506 | 328 | py | Python | WEEKS/CD_Sata-Structures/_RESOURCES/CODESIGNAL/arithmetic_expression.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/_RESOURCES/CODESIGNAL/arithmetic_expression.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/_RESOURCES/CODESIGNAL/arithmetic_expression.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | def arithmeticExpression(a, b, c):
"""
Consider an arithmetic expression of the form a#b=c.
Check whether it is possible to replace # with one of
the four signs: +, -, * or / to obtain a correct
"""
return (
True if (a + b == c) or (a - b == c) or (a * b == c) or (a / b == c) else False
)
| 32.8 | 87 | 0.545732 | 53 | 328 | 3.377358 | 0.584906 | 0.067039 | 0.100559 | 0.083799 | 0.100559 | 0.100559 | 0.100559 | 0.100559 | 0.100559 | 0.100559 | 0 | 0 | 0.320122 | 328 | 9 | 88 | 36.444444 | 0.802691 | 0.478659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
866c4a59c7a0eb40e5a0d0390e04f748fc26e2eb | 229 | py | Python | list_comprehension2.py | gustavosoarz/100-dias-de-Python | 3c44575cb1af04e4e1eec5ba9df202c6a72f6a6f | [
"MIT"
] | null | null | null | list_comprehension2.py | gustavosoarz/100-dias-de-Python | 3c44575cb1af04e4e1eec5ba9df202c6a72f6a6f | [
"MIT"
] | null | null | null | list_comprehension2.py | gustavosoarz/100-dias-de-Python | 3c44575cb1af04e4e1eec5ba9df202c6a72f6a6f | [
"MIT"
] | null | null | null | # Somar o preço dos produtos utilizando List Comprehension
carrinho = []
carrinho.append(("Camisa 1", 35))
carrinho.append(("Bone 1", 20))
carrinho.append(("Calça 1", 80))
total = sum([int(y) for x, y in carrinho])
print(total) | 25.444444 | 58 | 0.69869 | 35 | 229 | 4.571429 | 0.714286 | 0.2625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.135371 | 229 | 9 | 59 | 25.444444 | 0.762626 | 0.244541 | 0 | 0 | 0 | 0 | 0.122093 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8670b961cf66d047e420e1367d2fb0a2798087b4 | 297 | py | Python | datalad/runner/tests/utils.py | soichih/datalad | 797dde3ab7497be170e2c4ea8824f33a4b38e5d8 | [
"MIT"
] | null | null | null | datalad/runner/tests/utils.py | soichih/datalad | 797dde3ab7497be170e2c4ea8824f33a4b38e5d8 | [
"MIT"
] | 1 | 2020-12-01T20:13:51.000Z | 2020-12-01T20:13:51.000Z | datalad/runner/tests/utils.py | jwodder/datalad | 2b92a764fdc64b750dad68eb51c817218a1ec153 | [
"MIT"
] | null | null | null | import sys
def py2cmd(code: str,
*additional_arguments):
"""Helper to invoke some Python code through a cmdline invocation of
the Python interpreter.
This should be more portable in some cases.
"""
return [sys.executable, '-c', code] + list(additional_arguments)
| 24.75 | 72 | 0.683502 | 38 | 297 | 5.289474 | 0.815789 | 0.189055 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004386 | 0.232323 | 297 | 11 | 73 | 27 | 0.877193 | 0.451178 | 0 | 0 | 0 | 0 | 0.013889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
86711013129b918e450405068a23a96efe7fe239 | 542 | py | Python | force_gromacs/data_sources/molecule/molecule_factory.py | force-h2020/force-bdss-plugin-gromacs | 1518185e4cdab824d57570bc5df6c719f1f11bea | [
"MIT"
] | null | null | null | force_gromacs/data_sources/molecule/molecule_factory.py | force-h2020/force-bdss-plugin-gromacs | 1518185e4cdab824d57570bc5df6c719f1f11bea | [
"MIT"
] | 28 | 2019-09-05T09:05:52.000Z | 2020-11-11T13:32:46.000Z | force_gromacs/data_sources/molecule/molecule_factory.py | force-h2020/force-bdss-plugin-gromacs | 1518185e4cdab824d57570bc5df6c719f1f11bea | [
"MIT"
] | 1 | 2020-01-15T20:53:17.000Z | 2020-01-15T20:53:17.000Z | # (C) Copyright 2010-2020 Enthought, Inc., Austin, TX
# All rights reserved.
from force_bdss.api import BaseDataSourceFactory
from .molecule_model import MoleculeDataSourceModel
from .molecule_data_source import MoleculeDataSource
class MoleculeFactory(BaseDataSourceFactory):
def get_identifier(self):
return "molecule"
def get_name(self):
return "Gromacs Molecule"
def get_model_class(self):
return MoleculeDataSourceModel
def get_data_source_class(self):
return MoleculeDataSource
| 23.565217 | 54 | 0.754613 | 59 | 542 | 6.745763 | 0.542373 | 0.060302 | 0.070352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018141 | 0.186347 | 542 | 22 | 55 | 24.636364 | 0.884354 | 0.134686 | 0 | 0 | 0 | 0 | 0.051613 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.25 | 0.333333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
8671b20f7708f4f14b2955db9a68938bd6bca6d9 | 4,181 | py | Python | app/shop/cart.py | deeprave/yourwishesfuneraladvocacy | 49a66191aba79e57f89732031119910453f5e20c | [
"MIT"
] | null | null | null | app/shop/cart.py | deeprave/yourwishesfuneraladvocacy | 49a66191aba79e57f89732031119910453f5e20c | [
"MIT"
] | 2 | 2020-08-28T00:06:22.000Z | 2020-09-05T04:00:05.000Z | app/shop/cart.py | deeprave/yourwishesfuneraladvocacy | 49a66191aba79e57f89732031119910453f5e20c | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from django.conf import settings
from wagtail.core.models import Site
from .models import Product, ShopSettings, Order, OrderStatus
class Cart:
"""
Shopping cart container
"""
def __init__(self, request):
self.session = request.session
self.session['shipping'] = self.shop_settings
self._cart = None
@property
def shop_settings(self):
shop_settings = ShopSettings.for_site(Site.objects.get(is_default_site=True))
return {
'charge': shop_settings.shipping_charge,
'bulk_charge': shop_settings.bulk_shipping_charge,
'quantity': shop_settings.bulk_quantity,
'tax_rate': shop_settings.tax_rate,
}
@property
def cart(self):
if self._cart is None:
self._cart = self.session.get(settings.CART_SESSION_ID, {})
if not self._cart:
self.session[settings.CART_SESSION_ID] = self._cart
return self._cart
def add(self, product: Product, quantity=1, update_quantity=False):
if quantity >= 0:
if product.code not in self.cart:
self.cart[product.code] = dict(quantity=0, price=product.price)
if update_quantity:
self.cart[product.code]['quantity'] = quantity
else:
self.cart[product.code]['quantity'] += quantity
self.save()
def remove(self, product, quantity=None):
if product.code in self.cart:
if quantity is None or quantity >= self.cart[product.code]['quantity']:
del self.cart[product.code]
else:
self.cart[product.code]['quantity'] -= quantity
self.save()
def clear(self):
if settings.CART_SESSION_ID in self.session:
del self.session[settings.CART_SESSION_ID]
self._cart = None
self.save()
@property
def total_price(self):
return sum(item['price'] * item['quantity'] for item in self.cart.values()) + self.shipping_price
@property
def total_quantity(self):
return sum(item['quantity'] for item in self.cart.values())
@property
def shipping(self):
for item in self:
if item['product'].shipping:
return True
return False
@property
def tax_rate(self):
return self.session['shipping']['tax_rate']
@property
def shipping_price(self):
"""wagtail specific basesetting"""
total_quantity = self.total_quantity
shipping = self.session['shipping']
return shipping['charge'] if total_quantity < shipping['quantity'] else shipping['bulk_charge']
@property
def modified(self):
return self.session.modified
@property
def length(self):
return len(self.cart.keys())
def save(self):
self.session.modified = True
def __iter__(self):
cart = self.cart.copy()
for product in Product.objects.filter(code__in=list(cart.keys())):
cart[product.code]['product'] = product
for item in cart.values():
item['total_price'] = item['price'] * item['quantity']
yield item
def __len__(self):
return len(self.cart.keys())
class CartOrder:
def __init__(self, request, order_id=None):
self.session = request.session
if order_id:
self.session['order_id'] = order_id
self.session.modified = True
self._order = None
@property
def order_id(self) -> int:
return self.session['order_id']
@property
def order(self) -> Order:
if self._order is None:
self._order = Order.objects.get(pk=self.order_id)
return self._order
@property
def order_status(self) -> OrderStatus:
return OrderStatus.value_Of(self.order.order_status)
@property
def paid_status(self) -> bool:
return self.order.paid_status
@property
def total_price(self) -> float:
return self.order.total_price
@property
def total_items(self) -> float:
return self.order.total_items
| 29.237762 | 105 | 0.607989 | 494 | 4,181 | 4.977733 | 0.17004 | 0.06832 | 0.0427 | 0.04636 | 0.196015 | 0.177308 | 0.101667 | 0.101667 | 0.040667 | 0.040667 | 0 | 0.001336 | 0.283664 | 4,181 | 142 | 106 | 29.443662 | 0.8197 | 0.017938 | 0 | 0.256881 | 0 | 0 | 0.048225 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.211009 | false | 0 | 0.027523 | 0.100917 | 0.412844 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
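The quantity bookkeeping in `Cart.add`/`Cart.remove` above does not actually depend on Django sessions; a dependency-free sketch of the same logic (the `MiniCart` name and plain-dict storage are illustrative, not from the source):

```python
class MiniCart:
    """Session-free sketch of the add/remove quantity bookkeeping in Cart."""
    def __init__(self):
        self.items = {}  # code -> {'quantity': int, 'price': float}

    def add(self, code, price, quantity=1, update_quantity=False):
        if quantity >= 0:
            entry = self.items.setdefault(code, {'quantity': 0, 'price': price})
            if update_quantity:
                entry['quantity'] = quantity
            else:
                entry['quantity'] += quantity

    def remove(self, code, quantity=None):
        # Removing None (or more than is present) drops the line item entirely.
        if code in self.items:
            if quantity is None or quantity >= self.items[code]['quantity']:
                del self.items[code]
            else:
                self.items[code]['quantity'] -= quantity

    @property
    def total_price(self):
        return sum(i['price'] * i['quantity'] for i in self.items.values())

cart = MiniCart()
cart.add('mug', 9.5, 2)
cart.add('pen', 1.0)
cart.remove('mug', 1)
print(cart.total_price)  # → 10.5
```

The real class layers session persistence and shipping/tax on top of exactly this structure.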
8677a925b8494ad1567eed2b120a18fae7b52fb6 | 133 | py | Python | python_aulas/aula014_while_b.py | gilsonaureliano/Python-aulas | 64269872acd482bcf297941ba28d30f13f29c752 | [
"MIT"
] | 1 | 2021-08-05T13:52:12.000Z | 2021-08-05T13:52:12.000Z | python_aulas/aula014_while_b.py | gilsonaureliano/Python-aulas | 64269872acd482bcf297941ba28d30f13f29c752 | [
"MIT"
] | null | null | null | python_aulas/aula014_while_b.py | gilsonaureliano/Python-aulas | 64269872acd482bcf297941ba28d30f13f29c752 | [
"MIT"
] | null | null | null | r = 'S'
while r == "S":
n = int(input('Digite um numero: '))
r = str(input('Quer continuar [S/N]: ')).upper()
print('Acabou') | 26.6 | 52 | 0.541353 | 21 | 133 | 3.428571 | 0.714286 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203008 | 133 | 5 | 53 | 26.6 | 0.679245 | 0 | 0 | 0 | 0 | 0 | 0.358209 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8684c6a6c0fc1f3a46b56ee6f88a13afdde8cba3 | 331 | py | Python | python/1929.py | zheedong/BaekJoon | 7f9e00085276a337d18ee3bb90c98126f7af4d3a | [
"MIT"
] | null | null | null | python/1929.py | zheedong/BaekJoon | 7f9e00085276a337d18ee3bb90c98126f7af4d3a | [
"MIT"
] | null | null | null | python/1929.py | zheedong/BaekJoon | 7f9e00085276a337d18ee3bb90c98126f7af4d3a | [
"MIT"
] | null | null | null | import math
m, n = map(int, input().split())
prime_list = [2]
for i in range(3, n + 1):
flag = True
for prime in prime_list:
if i % prime == 0:
flag = False
break
if flag:
prime_list.append(i)
for prime in prime_list:
if prime >= m and prime <= n:
print(prime) | 19.470588 | 33 | 0.522659 | 50 | 331 | 3.38 | 0.5 | 0.213018 | 0.118343 | 0.177515 | 0.248521 | 0.248521 | 0 | 0 | 0 | 0 | 0 | 0.014354 | 0.36858 | 331 | 17 | 34 | 19.470588 | 0.794258 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0.071429 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
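The trial-division loop above re-tests every candidate against all smaller primes; for a prime-range problem like Baekjoon 1929 the usual faster approach is a sieve of Eratosthenes (a sketch, not the submitted solution):

```python
def primes_between(m, n):
    """Sieve of Eratosthenes over [0, n], then keep primes in [m, n]."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):  # strike out multiples of i
                is_prime[j] = False
    return [i for i in range(m, n + 1) if is_prime[i]]

print(primes_between(3, 16))  # → [3, 5, 7, 11, 13]
```

The sieve runs in O(n log log n), versus roughly O(n · π(n)) divisions for the trial-division version.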
869304ab8b49c90d36c8a436e220396f2dd16a9d | 577 | py | Python | python/solutii/iulian_bute/search.py | broascaiulian/labs | 068c7f440c7a29cb6a3e1dbb8e4bb7dfaff5a050 | [
"MIT"
] | null | null | null | python/solutii/iulian_bute/search.py | broascaiulian/labs | 068c7f440c7a29cb6a3e1dbb8e4bb7dfaff5a050 | [
"MIT"
] | null | null | null | python/solutii/iulian_bute/search.py | broascaiulian/labs | 068c7f440c7a29cb6a3e1dbb8e4bb7dfaff5a050 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
''' search a given directory for files whose names contain a key '''
import os
def cauta_fisiere(path, key):
    ''' search a given directory for files whose names contain a key '''
for root, _, files in os.walk(path):
for nume_fisier in files:
if key in nume_fisier:
                print(os.path.join(root, nume_fisier))
def main():
    ''' search a given directory for files whose names contain a key '''
key = ".py"
path = "../../exercitii"
cauta_fisiere(path, key)
if __name__ == "__main__":
main()
| 25.086957 | 74 | 0.623917 | 85 | 577 | 4.070588 | 0.4 | 0.069364 | 0.095376 | 0.16474 | 0.442197 | 0.442197 | 0.442197 | 0.442197 | 0.442197 | 0.442197 | 0 | 0 | 0.256499 | 577 | 22 | 75 | 26.227273 | 0.806527 | 0.034662 | 0 | 0 | 0 | 0 | 0.075145 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.083333 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
869711495e388a8f9d59b1a32a50f81bf0876cfa | 6,586 | py | Python | python/pygimli/core/matrix.py | rolandhill/gimli | 38efa680ce807f789687836b90017655d13cec02 | [
"Apache-2.0"
] | null | null | null | python/pygimli/core/matrix.py | rolandhill/gimli | 38efa680ce807f789687836b90017655d13cec02 | [
"Apache-2.0"
] | null | null | null | python/pygimli/core/matrix.py | rolandhill/gimli | 38efa680ce807f789687836b90017655d13cec02 | [
"Apache-2.0"
] | 1 | 2019-11-09T03:03:09.000Z | 2019-11-09T03:03:09.000Z | # -*- coding: utf-8 -*-
"""Some matrix specialization."""
import time
from pygimli.core import _pygimli_ as pg
import numpy as np
# make core matrices (now in pg, later pg.core) known here for tab-completion
# BlockMatrix = pg.BlockMatrix
# IdentityMatrix = pg.IdentityMatrix
class MultLeftMatrix(pg.MatrixBase):
"""Matrix consisting of actual RMatrix and lef-side vector."""
def __init__(self, A, left, verbose=False):
"""Constructor saving matrix and vector."""
if A.rows() != len(left):
raise Exception("Matrix columns do not fit vector length!")
self.A = A
self.left = left
super().__init__(verbose) # only in Python 3
# pg.MatrixBase.__init__(self) # the Python 2 variant
def rows(self):
"""Return number of rows (using underlying matrix)."""
return self.A.rows()
def cols(self):
"""Return number of columns (using underlying matrix)."""
return self.A.cols()
def mult(self, x):
"""Multiplication from right-hand-side (dot product A*x)."""
return self.A.mult(x) * self.left
def transMult(self, x):
"""Multiplication from right-hand-side (dot product A.T * x)"""
return self.A.transMult(x * self.left)
LMultRMatrix = MultLeftMatrix # alias for backward compatibility
class MultRightMatrix(pg.MatrixBase):
"""Some Matrix, multiplied with a right hand side vector r."""
def __init__(self, A, r=None):
super().__init__()
self.A = A
if r is None:
self.r = pg.RVector(A.cols(), 1.0)
else:
self.r = r
def mult(self, x):
"""Return M*x = A*(r*x)"""
return self.A.mult(x * self.r)
def transMult(self, x):
"""Return M.T*x=(A.T*x)*r"""
return self.A.transMult(x) * self.r
def cols(self):
"""Number of columns."""
return self.A.cols()
def rows(self):
"""Number of rows."""
return self.A.rows()
RMultRMatrix = MultRightMatrix # alias for backward compatibility
class MultLeftRightMatrix(pg.MatrixBase):
"""Matrix consisting of actual RMatrix and left-hand-side vector."""
def __init__(self, A, left, right, verbose=False):
"""Constructor saving matrix and vector."""
if A.cols() != len(right):
raise Exception("Matrix columns do not fit right vector length!")
if A.rows() != len(left):
raise Exception("Matrix rows do not fit left vector length!")
self.A = A
self.right = right
self.left = left
super().__init__(verbose) # only in Python 3
# pg.MatrixBase.__init__(self) # the Python 2 variant
def rows(self):
"""Number of rows (using the underlying matrix)."""
return self.A.rows()
def cols(self):
"""Number of columns (using the underlying matrix)."""
return self.A.cols()
def mult(self, x):
"""Multiplication from right-hand-side (dot product A*x)."""
return self.A.mult(x * self.right) * self.left
def transMult(self, x):
"""Multiplication from right-hand-side (dot product A.T*x)."""
return self.A.transMult(x * self.left) * self.right
LRMultRMatrix = MultLeftRightMatrix # alias for backward compatibility
class Add2Matrix(pg.MatrixBase):
"""Matrix by adding two matrices."""
def __init__(self, A, B):
super().__init__()
self.A = A
self.B = B
assert A.rows() == B.rows()
assert A.cols() == B.cols()
def mult(self, x):
"""Return M*x = A*(r*x)"""
return self.A.mult(x) + self.B.mult(x)
def transMult(self, x):
"""Return M.T*x=(A.T*x)*r"""
return self.A.transMult(x) + self.B.transMult(x)
def cols(self):
"""Number of columns."""
return self.A.cols()
def rows(self):
"""Number of rows."""
return self.A.rows()
class Mult2Matrix(pg.MatrixBase):
"""Matrix by multiplying two matrices."""
def __init__(self, A, B):
super().__init__()
self.A = A
self.B = B
assert A.cols() == B.rows()
def mult(self, x):
"""Return M*x = A*(r*x)"""
return self.A.mult(self.B.mult(x))
def transMult(self, x):
"""Return M.T*x=(A.T*x)*r"""
return self.B.transMult(self.A.transMult(x))
def cols(self):
"""Number of columns."""
return self.B.cols()
def rows(self):
"""Number of rows."""
return self.A.rows()
class DiagonalMatrix(pg.MatrixBase):
"""Square matrix with a vector on the main diagonal."""
def __init__(self, d):
super().__init__()
self.d = d
def mult(self, x):
"""Return M*x = r*x (element-wise)"""
return x * self.d
def transMult(self, x):
"""Return M.T*x=(A.T*x)*r"""
return x * self.d
def cols(self):
"""Number of columns (length of diagonal)."""
return len(self.d)
def rows(self):
"""Number of rows (length of diagonal)."""
return len(self.d)
class Cm05Matrix(pg.MatrixBase):
"""Matrix implicitly representing the inverse square-root."""
def __init__(self, A, verbose=False):
"""Constructor saving matrix and vector.
Parameters
----------
A : ndarray
numpy type (full) matrix
"""
from scipy.linalg import eigh # , get_blas_funcs
if A.shape[0] != A.shape[1]: # rows/cols for pg matrix
raise Exception("Matrix must by square (and symmetric)!")
self.size = A.shape[0]
t = time.time()
self.ew, self.EV = eigh(A)
self.mul = np.sqrt(1./self.ew)
if verbose:
pg.info('(C) Time for eigenvalue decomposition:{:.1f} s'.format(
time.time() - t))
self.A = A
super().__init__(verbose) # only in Python 3
def rows(self):
"""Return number of rows (using underlying matrix)."""
return self.size
def cols(self):
"""Return number of columns (using underlying matrix)."""
return self.size
def mult(self, x):
"""Multiplication from right-hand side (dot product)."""
part1 = (np.dot(np.transpose(x), self.EV).T*self.mul).reshape(-1, 1)
return self.EV.dot(part1).reshape(-1,)
# return self.EV.dot((x.T.dot(self.EV)*self.mul).T)
def transMult(self, x):
"""Multiplication from right-hand side (dot product)."""
return self.mult(x) # matrix is symmetric by definition
| 27.90678 | 77 | 0.57607 | 886 | 6,586 | 4.205418 | 0.164786 | 0.0416 | 0.05314 | 0.025765 | 0.660762 | 0.614868 | 0.583736 | 0.508857 | 0.46672 | 0.428878 | 0 | 0.004623 | 0.277407 | 6,586 | 235 | 78 | 28.025532 | 0.778315 | 0.32068 | 0 | 0.537815 | 0 | 0 | 0.050237 | 0 | 0 | 0 | 0 | 0 | 0.02521 | 1 | 0.294118 | false | 0 | 0.033613 | 0 | 0.621849 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
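The implicit-matrix pattern used by `MultLeftRightMatrix` above — represent M = diag(left)·A·diag(right) purely through mat-vec products, never forming M — can be sketched with plain NumPy (class and variable names here are illustrative, not the pygimli API):

```python
import numpy as np

class MultLeftRight:
    """Implicit M = diag(left) @ A @ diag(right); exposes only products with M."""
    def __init__(self, A, left, right):
        assert A.shape == (len(left), len(right))
        self.A, self.left, self.right = A, left, right

    def mult(self, x):
        """M @ x = left * (A @ (right * x))"""
        return self.A.dot(x * self.right) * self.left

    def transMult(self, x):
        """M.T @ x = right * (A.T @ (left * x))"""
        return self.A.T.dot(x * self.left) * self.right

A = np.arange(6.0).reshape(2, 3)
left, right = np.array([2.0, 3.0]), np.array([1.0, 0.5, 2.0])
M = np.diag(left) @ A @ np.diag(right)   # dense reference, for checking only
m = MultLeftRight(A, left, right)
x = np.array([1.0, -1.0, 2.0])
assert np.allclose(M @ x, m.mult(x))
assert np.allclose(M.T @ np.array([1.0, 2.0]), m.transMult(np.array([1.0, 2.0])))
```

Avoiding the dense product keeps the cost at two element-wise scalings plus one A-product per application, which is what makes these wrappers cheap inside iterative solvers.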
869870ef7c7244c44bf21575213912af0d50d57a | 363 | py | Python | python/Text_Wrap.py | avishshah11/Hackerrank_python | 7a7b8005ba2c8b03fb63d727496deb4175f860f5 | [
"MIT"
] | null | null | null | python/Text_Wrap.py | avishshah11/Hackerrank_python | 7a7b8005ba2c8b03fb63d727496deb4175f860f5 | [
"MIT"
] | null | null | null | python/Text_Wrap.py | avishshah11/Hackerrank_python | 7a7b8005ba2c8b03fb63d727496deb4175f860f5 | [
"MIT"
] | null | null | null | import textwrap
def wrap(string, max_width):
text_wrapped = str()
for i in range(len(string)):
text_wrapped += string[i]
if (i+1) % max_width == 0:
text_wrapped += '\n'
return text_wrapped
if __name__ == '__main__':
string, max_width = input(), int(input())
result = wrap(string, max_width)
print(result)
| 19.105263 | 45 | 0.597796 | 48 | 363 | 4.1875 | 0.541667 | 0.159204 | 0.208955 | 0.179104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007547 | 0.269972 | 363 | 18 | 46 | 20.166667 | 0.750943 | 0 | 0 | 0 | 0 | 0 | 0.027548 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.25 | 0.083333 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
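The file above imports `textwrap` but never uses it; for whitespace-free input like the HackerRank sample, `textwrap.wrap` (which breaks long words by default) produces the same fixed-width chunks as the manual loop:

```python
import textwrap

def wrap(string, max_width):
    """Split `string` into lines of at most `max_width` characters."""
    return "\n".join(textwrap.wrap(string, max_width))

# Prints ABCD / EFGH / IJKL / IMNO / QRST / UVWX / YZ, one chunk per line.
print(wrap("ABCDEFGHIJKLIMNOQRSTUVWXYZ", 4))
```

For input containing spaces, `textwrap.wrap` wraps on word boundaries instead, so it is only a drop-in replacement for the whitespace-free strings this exercise uses.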
869cc912b15e57a6d74e09ab1c31fdb3d4dce343 | 346 | py | Python | Python/test_fatorial_parametrize.py | robertogoes/exercicios-coursera-python | 5093cc31ccda764b6131987fa81601b7179a9bfc | [
"MIT"
] | null | null | null | Python/test_fatorial_parametrize.py | robertogoes/exercicios-coursera-python | 5093cc31ccda764b6131987fa81601b7179a9bfc | [
"MIT"
] | null | null | null | Python/test_fatorial_parametrize.py | robertogoes/exercicios-coursera-python | 5093cc31ccda764b6131987fa81601b7179a9bfc | [
"MIT"
] | null | null | null | def fatorial(n):
if n < 0:
return 0
i = fat = 1
while i <= n:
fat *= i
i += 1
return fat
import pytest
@pytest.mark.parametrize("entrada, esperado", [
(0, 1),
(1, 1),
(-10, 0),
(4, 24),
(5, 120)
])
def test_fatorial(entrada, esperado):
assert fatorial(entrada) == esperado
| 15.043478 | 47 | 0.49711 | 46 | 346 | 3.717391 | 0.5 | 0.263158 | 0.269006 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 0.349711 | 346 | 22 | 48 | 15.727273 | 0.68 | 0 | 0 | 0 | 0 | 0 | 0.049275 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 1 | 0.111111 | false | 0 | 0.055556 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
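The iterative `fatorial` above (0 for negative input, the factorial otherwise) can be cross-checked against the stdlib — a sketch mirroring the parametrized cases:

```python
import math

def fatorial(n):
    """Iterative factorial; returns 0 for negative n, as the tests above expect."""
    if n < 0:
        return 0
    fat = 1
    for i in range(2, n + 1):
        fat *= i
    return fat

# Same (input, expected) pairs as the @pytest.mark.parametrize list.
for n, expected in [(0, 1), (1, 1), (-10, 0), (4, 24), (5, 120)]:
    assert fatorial(n) == expected
# For non-negative n the result matches math.factorial.
for n in (0, 1, 4, 5):
    assert fatorial(n) == math.factorial(n)
print(fatorial(5))  # → 120
```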
86a1e3e2f06f4a1f24b3879fb5dadee1ee79b073 | 306 | py | Python | 16727 ICPC.py | jangThang/Baekjoon-problem | f34c7d9977ad95fbe2a59c0096bf8ff1e885c01f | [
"MIT"
] | null | null | null | 16727 ICPC.py | jangThang/Baekjoon-problem | f34c7d9977ad95fbe2a59c0096bf8ff1e885c01f | [
"MIT"
] | null | null | null | 16727 ICPC.py | jangThang/Baekjoon-problem | f34c7d9977ad95fbe2a59c0096bf8ff1e885c01f | [
"MIT"
] | null | null | null | # input
p1, s1 = map(int, input().split())
s2, p2 = map(int, input().split())
# output
if p1+p2 > s1+s2:
print("Persepolis")
elif p1+p2 < s1+s2:
print("Esteghlal")
else: # tie on points
if s1 > p2:
print("Esteghlal")
elif s1 < p2:
print("Persepolis")
else:
print("Penalty")
| 18 | 34 | 0.53268 | 44 | 306 | 3.704545 | 0.431818 | 0.07362 | 0.134969 | 0.196319 | 0.159509 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072727 | 0.281046 | 306 | 16 | 35 | 19.125 | 0.668182 | 0.039216 | 0 | 0.461538 | 0 | 0 | 0.155172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.384615 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
86a36b57e3aeeef88e583ac0dbfe857f15224076 | 228 | py | Python | 15 Distance and units.py | L0ganhowlett/Python_workbook-Ben_Stephenson | ab711257bd2da9b34c6001a8e09d20bfc0114a3f | [
"MIT"
] | null | null | null | 15 Distance and units.py | L0ganhowlett/Python_workbook-Ben_Stephenson | ab711257bd2da9b34c6001a8e09d20bfc0114a3f | [
"MIT"
] | null | null | null | 15 Distance and units.py | L0ganhowlett/Python_workbook-Ben_Stephenson | ab711257bd2da9b34c6001a8e09d20bfc0114a3f | [
"MIT"
] | null | null | null | #15 Distance units
# Asking for the distance in feet.
x = float(input("Enter the distance in feet = "))
inches = x * 12
yards = x / 3  # 3 feet per yard
miles = x / 5280  # 5280 feet per mile
print("Inches = ",inches," Yards = ",yards," Miles = ",miles)
| 28.5 | 64 | 0.618421 | 34 | 228 | 4.147059 | 0.588235 | 0.141844 | 0.198582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096591 | 0.22807 | 228 | 7 | 65 | 32.571429 | 0.704545 | 0.201754 | 0 | 0 | 0 | 0 | 0.33526 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
86a6e48bef98b40f71afdd3a8ed46b129779061a | 257 | py | Python | Old Scripts/main.py | MarritS/smart_bike | 11168a7ae9d4a78424bedb6df4aafcb487259367 | [
"MIT"
] | null | null | null | Old Scripts/main.py | MarritS/smart_bike | 11168a7ae9d4a78424bedb6df4aafcb487259367 | [
"MIT"
] | null | null | null | Old Scripts/main.py | MarritS/smart_bike | 11168a7ae9d4a78424bedb6df4aafcb487259367 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sat Mar 27 12:09:08 2021
@author: marri
"""
import yolo_opencv as yolo
import yolo_video as yolo_video
yolo.find_vehicles("tiny_test3.png", tiny=False)
yolo_video.find_vehicles("00108.MTS", "demo.mp4", tiny=True)
| 17.133333 | 60 | 0.715953 | 43 | 257 | 4.116279 | 0.697674 | 0.152542 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089686 | 0.132296 | 257 | 14 | 61 | 18.357143 | 0.704036 | 0.287938 | 0 | 0 | 0 | 0 | 0.177143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
86b417fea3f08f57314453d543b14b1e1dab4a59 | 539 | bzl | Python | js/packages/go-bridge/config.bzl | n0izn0iz/berty | b6d1dce49268bb7e3f237fad12b882d2d8a48092 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 1 | 2021-01-28T06:24:46.000Z | 2021-01-28T06:24:46.000Z | js/packages/go-bridge/config.bzl | n0izn0iz/berty | b6d1dce49268bb7e3f237fad12b882d2d8a48092 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null | js/packages/go-bridge/config.bzl | n0izn0iz/berty | b6d1dce49268bb7e3f237fad12b882d2d8a48092 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null | load("@berty_go//:config.bzl", "berty_go_config")
load("@co_znly_rules_gomobile//:repositories.bzl", "gomobile_repositories")
load("@build_bazel_apple_support//lib:repositories.bzl", "apple_support_dependencies")
load("@build_bazel_rules_swift//swift:repositories.bzl", "swift_rules_dependencies")
def berty_bridge_config():
# fetch and config berty go dependencies
berty_go_config()
# config gomobile repositories
gomobile_repositories()
# config ios
apple_support_dependencies()
swift_rules_dependencies()
| 33.6875 | 86 | 0.77551 | 65 | 539 | 6.030769 | 0.338462 | 0.071429 | 0.09949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109462 | 539 | 15 | 87 | 35.933333 | 0.816667 | 0.144712 | 0 | 0 | 0 | 0 | 0.538293 | 0.50547 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | true | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
86b493131bf69e9c180953cf248944c435702f97 | 2,457 | py | Python | user/admin.py | EncryptEx/myhackupc | 3b7c8bce8528e61aab65c976a3c9b4a700210c09 | [
"MIT"
] | 8 | 2017-11-20T09:11:37.000Z | 2020-01-26T19:23:33.000Z | user/admin.py | EncryptEx/myhackupc | 3b7c8bce8528e61aab65c976a3c9b4a700210c09 | [
"MIT"
] | 38 | 2018-07-11T08:03:43.000Z | 2019-10-22T09:26:36.000Z | user/admin.py | EncryptEx/myhackupc | 3b7c8bce8528e61aab65c976a3c9b4a700210c09 | [
"MIT"
] | 6 | 2019-01-21T18:19:17.000Z | 2020-03-09T17:42:36.000Z | from django.conf import settings
from django.contrib import admin
# Register your models here.
from django.contrib.admin.forms import AdminPasswordChangeForm
from django.contrib.auth.models import Group
from user import models
from user.forms import UserChangeForm
class UserAdmin(admin.ModelAdmin):
form = UserChangeForm
change_password_form = AdminPasswordChangeForm
display_fields = ['email', 'name', 'type', 'admin_is_organizer', 'admin_is_volunteer_accepted',
'is_director', 'have_application', 'is_judge']
filter_fields = ['is_director', 'is_admin', 'email_verified', 'type', 'is_judge']
permission_fields = ['is_director', 'is_admin', 'email_verified', 'can_review_dubious', 'can_review_blacklist',
'can_review_volunteers', 'can_review_mentors', 'can_review_sponsors', 'email_subscribed']
if settings.HARDWARE_ENABLED:
display_fields.append('is_hardware_admin')
filter_fields.append('is_hardware_admin')
permission_fields.insert(4, 'is_hardware_admin')
# The fields to be used in displaying the User model.
# These override the definitions on the base UserAdmin
# that reference specific fields on auth.User.
list_display = tuple(display_fields)
list_filter = tuple(filter_fields)
permission_fields = tuple(permission_fields)
fieldsets = (
(None, {'fields': ('email', 'type', 'password')}),
('Personal info', {'fields': ('name',)}),
('Permissions', {'fields': permission_fields}),
('Important dates', {'fields': ('last_login',)}),
)
add_fieldsets = (
(None, {
'classes': ('wide',),
'fields': ('email', 'name', 'password1', 'password2',)}
),
)
search_fields = ('email',)
ordering = ('created_time',)
date_hierarchy = 'created_time'
filter_horizontal = ()
def get_fieldsets(self, request, obj=None):
if not obj:
return self.add_fieldsets
return super(UserAdmin, self).get_fieldsets(request, obj)
class BlacklistUserAdmin(admin.ModelAdmin):
list_display = ('email', 'name', 'date_of_ban')
list_per_page = 20
list_filter = ('email', 'name')
search_fields = ('email', 'name')
actions = ['delete_selected', ]
admin.site.register(models.User, admin_class=UserAdmin)
admin.site.register(models.BlacklistUser, admin_class=BlacklistUserAdmin)
admin.site.unregister(Group)
| 36.671642 | 115 | 0.677656 | 273 | 2,457 | 5.849817 | 0.406593 | 0.03444 | 0.031935 | 0.022542 | 0.078898 | 0.045085 | 0.045085 | 0 | 0 | 0 | 0 | 0.00253 | 0.195767 | 2,457 | 66 | 116 | 37.227273 | 0.805668 | 0.071632 | 0 | 0 | 0 | 0 | 0.243409 | 0.02109 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02 | false | 0.08 | 0.14 | 0 | 0.62 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
86caf0a41f91644c1b1c7b344019f4351b5f8ce5 | 798 | py | Python | build/lib/sbse/simple_genetics.py | mahdasjs/CodART | 853456abc6dcb11bef0bc902f70a82c58354ea79 | [
"MIT"
] | null | null | null | build/lib/sbse/simple_genetics.py | mahdasjs/CodART | 853456abc6dcb11bef0bc902f70a82c58354ea79 | [
"MIT"
] | null | null | null | build/lib/sbse/simple_genetics.py | mahdasjs/CodART | 853456abc6dcb11bef0bc902f70a82c58354ea79 | [
"MIT"
] | null | null | null | from pymoo.algorithms.so_genetic_algorithm import GA
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from initialize import make_field_non_static
from initialize import make_field_static
from initialize import make_method_static_2
from initialize import make_method_non_static_2
problem = get_problem(make_method_non_static_2, make_field_non_static, make_field_static, make_method_static_2)
def genetic_algorithm():
algorithm = GA(
population_size=100,
individual_size=2,
eliminate_duplicates=True)
res = minimize(problem,
algorithm,
seed=1,
verbose=False)
print("Best solution found: \nX = %s\nF = %s" % (res.X, res.F))
if __name__ == '__main__':
genetic_algorithm()
| 28.5 | 111 | 0.723058 | 105 | 798 | 5.104762 | 0.428571 | 0.104478 | 0.149254 | 0.179104 | 0.298507 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014286 | 0.210526 | 798 | 27 | 112 | 29.555556 | 0.836508 | 0 | 0 | 0 | 0 | 0 | 0.056391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.35 | 0 | 0.4 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
86cceccfa7ed2000190ddfe985151868fda22419 | 337 | py | Python | src/smolcalc/values.py | andre4k14/smolcalc | 0789215767cb1c22eb86b86a011094d9a65ecc34 | [
"MIT"
] | null | null | null | src/smolcalc/values.py | andre4k14/smolcalc | 0789215767cb1c22eb86b86a011094d9a65ecc34 | [
"MIT"
] | null | null | null | src/smolcalc/values.py | andre4k14/smolcalc | 0789215767cb1c22eb86b86a011094d9a65ecc34 | [
"MIT"
] | null | null | null | from dataclasses import dataclass
@dataclass
class Number:
value: float
def __repr__(self):
return f"{self.value}"
def __str__(self):
if not isinstance(self.value, int) and self.value.is_integer() and str(self.value).find("e") == -1:
return f"{int(self.value)}"
return f"{self.value}"
| 22.466667 | 107 | 0.623145 | 46 | 337 | 4.369565 | 0.521739 | 0.268657 | 0.109453 | 0.159204 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003906 | 0.240356 | 337 | 14 | 108 | 24.071429 | 0.78125 | 0 | 0 | 0.2 | 0 | 0 | 0.124629 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.1 | 0.1 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
86e51ff59174b3c0b2e0678ab599f2dedf8e0141 | 583 | py | Python | test/test_exception.py | jokki/python_unit_test_examples | 206b04d4f736a4b861f7f61fa32240e1285eec75 | [
"Unlicense"
] | null | null | null | test/test_exception.py | jokki/python_unit_test_examples | 206b04d4f736a4b861f7f61fa32240e1285eec75 | [
"Unlicense"
] | null | null | null | test/test_exception.py | jokki/python_unit_test_examples | 206b04d4f736a4b861f7f61fa32240e1285eec75 | [
"Unlicense"
] | null | null | null | import unittest
from unittest.mock import Mock, patch, MagicMock
import runme
class TestException(unittest.TestCase):
def setUp(self):
## Whatever goes here gets run before every test case
pass
def tearDown(self):
## Whatever goes here gets run after every test case
pass
def test_exception(self):
with self.assertRaises(runme.RunMeException) as context:
runme.functionThatThrowsException()
self.assertTrue('This is an exception' in str(context.exception))
if __name__ == '__main__':
unittest.main()
| 24.291667 | 73 | 0.686106 | 69 | 583 | 5.666667 | 0.594203 | 0.061381 | 0.081841 | 0.102302 | 0.240409 | 0.138107 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238422 | 583 | 23 | 74 | 25.347826 | 0.880631 | 0.171527 | 0 | 0.142857 | 0 | 0 | 0.0587 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.214286 | false | 0.142857 | 0.214286 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
86e6fee1daf361d7bdc961a7e36b7c9ea5ac657d | 258 | py | Python | something.py | Mrthomas20121/metal_creator | 5569989531b13d2178ad2134fa86274209596f1a | [
"MIT"
] | null | null | null | something.py | Mrthomas20121/metal_creator | 5569989531b13d2178ad2134fa86274209596f1a | [
"MIT"
] | null | null | null | something.py | Mrthomas20121/metal_creator | 5569989531b13d2178ad2134fa86274209596f1a | [
"MIT"
] | null | null | null | from PIL import Image
template = Image.new('P', (16, 16), '#AF7CAA')
template = template.convert('PA')
oldimage = Image.open("./template/block.png").convert('PA')
oldimage.putpalette(template.getpalette())
oldimage = oldimage.convert('RGBA')
oldimage.show() | 32.25 | 59 | 0.724806 | 33 | 258 | 5.666667 | 0.575758 | 0.096257 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021097 | 0.081395 | 258 | 8 | 60 | 32.25 | 0.767932 | 0 | 0 | 0 | 0 | 0 | 0.138996 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
# --- VMEncryption/main/handle.py (shaktidhara/azure-linux-extensions2, Apache-2.0) ---
#!/usr/bin/env python
#
# VMEncryption extension
#
# Copyright 2015 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import array
import base64
import filecmp
import httplib
import imp
import json
import os
import os.path
import re
import shlex
import shutil
import string
import subprocess
import sys
import datetime
import time
import tempfile
import traceback
import urllib2
import urlparse
import uuid
from Utils import HandlerUtil
from Common import *
from ExtensionParameter import ExtensionParameter
from DiskUtil import DiskUtil
from BackupLogger import BackupLogger
from KeyVaultUtil import KeyVaultUtil
from EncryptionConfig import *
from patch import *
from BekUtil import *
from DecryptionMarkConfig import DecryptionMarkConfig
from EncryptionMarkConfig import EncryptionMarkConfig
from EncryptionEnvironment import EncryptionEnvironment
from MachineIdentity import MachineIdentity
from OnGoingItemConfig import OnGoingItemConfig
from ProcessLock import ProcessLock
from CommandExecutor import *
from __builtin__ import int
def install():
    hutil.do_parse_context('Install')
    hutil.restore_old_configs()
    hutil.do_status_report(operation='Install', status=CommonVariables.extension_success_status, status_code=str(CommonVariables.success), message='Installing pre-requisites')
    logger.log("Installing pre-requisites")
    DistroPatcher.install_extras()
    hutil.do_exit(0, 'Install', CommonVariables.extension_success_status, str(CommonVariables.success), 'Install Succeeded')
def disable():
    hutil.do_parse_context('Disable')
    hutil.do_exit(0, 'Disable', CommonVariables.extension_success_status, '0', 'Disable succeeded')

def uninstall():
    hutil.do_parse_context('Uninstall')
    hutil.archive_old_configs()
    hutil.do_exit(0, 'Uninstall', CommonVariables.extension_success_status, '0', 'Uninstall succeeded')
def disable_encryption():
    hutil.do_parse_context('DisableEncryption')
    logger.log('Disabling encryption')
    decryption_marker = DecryptionMarkConfig(logger, encryption_environment)
    if decryption_marker.config_file_exists():
        logger.log(msg="decryption is marked, starting daemon.", level=CommonVariables.InfoLevel)
        start_daemon('DisableEncryption')
        hutil.do_exit(exit_code=0,
                      operation='DisableEncryption',
                      status=CommonVariables.extension_success_status,
                      code=str(CommonVariables.success),
                      message='Decryption started')
    exit_status = {
        'operation': 'DisableEncryption',
        'status': CommonVariables.extension_success_status,
        'status_code': str(CommonVariables.success),
        'message': 'Decryption completed'
    }
    hutil.exit_if_same_seq(exit_status)
    hutil.save_seq()
    try:
        protected_settings_str = hutil._context._config['runtimeSettings'][0]['handlerSettings'].get('protectedSettings')
        public_settings_str = hutil._context._config['runtimeSettings'][0]['handlerSettings'].get('publicSettings')
        if isinstance(public_settings_str, basestring):
            public_settings = json.loads(public_settings_str)
        else:
            public_settings = public_settings_str
        if isinstance(protected_settings_str, basestring):
            protected_settings = json.loads(protected_settings_str)
        else:
            protected_settings = protected_settings_str
        extension_parameter = ExtensionParameter(hutil, logger, DistroPatcher, encryption_environment, protected_settings, public_settings)
        disk_util = DiskUtil(hutil=hutil, patching=DistroPatcher, logger=logger, encryption_environment=encryption_environment)
        encryption_status = json.loads(disk_util.get_encryption_status())
        if encryption_status["os"] != "NotEncrypted":
            raise Exception("Disabling encryption is not supported when OS volume is encrypted")
        bek_util = BekUtil(disk_util, logger)
        encryption_config = EncryptionConfig(encryption_environment, logger)
        bek_passphrase_file = bek_util.get_bek_passphrase_file(encryption_config)
        crypt_items = disk_util.get_crypt_items()
        logger.log('Found {0} items to decrypt'.format(len(crypt_items)))
        for crypt_item in crypt_items:
            disk_util.create_cleartext_key(crypt_item.mapper_name)
            add_result = disk_util.luks_add_cleartext_key(bek_passphrase_file,
                                                          crypt_item.dev_path,
                                                          crypt_item.mapper_name,
                                                          crypt_item.luks_header_path)
            if add_result != CommonVariables.process_success:
                raise Exception("luksAdd failed with return code {0}".format(add_result))
            if crypt_item.dev_path.startswith("/dev/sd"):
                logger.log('Updating crypt item entry to use mapper name')
                logger.log('Device name before update: {0}'.format(crypt_item.dev_path))
                crypt_item.dev_path = disk_util.query_dev_id_path_by_sdx_path(crypt_item.dev_path)
                logger.log('Device name after update: {0}'.format(crypt_item.dev_path))
            crypt_item.uses_cleartext_key = True
            disk_util.update_crypt_item(crypt_item)
            logger.log('Added cleartext key for {0}'.format(crypt_item))
        decryption_marker.command = extension_parameter.command
        decryption_marker.volume_type = extension_parameter.VolumeType
        decryption_marker.commit()
        hutil.do_exit(exit_code=0,
                      operation='DisableEncryption',
                      status=CommonVariables.extension_success_status,
                      code=str(CommonVariables.success),
                      message='Decryption started')
    except Exception as e:
        message = "Failed to disable the extension with error: {0}, stack trace: {1}".format(e, traceback.format_exc())
        logger.log(msg=message, level=CommonVariables.ErrorLevel)
        hutil.do_exit(exit_code=0,
                      operation='DisableEncryption',
                      status=CommonVariables.extension_error_status,
                      code=str(CommonVariables.unknown_error),
                      message=message)
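disable_encryption rewrites `/dev/sdX` crypt-item entries to persistent paths because sdX names can change across reboots. The decision itself is small; this sketch mocks the lookup (the real code uses `DiskUtil.query_dev_id_path_by_sdx_path`, and the by-id path below is a made-up example):

```python
def stabilize_dev_path(dev_path, query_by_id):
    """Replace an unstable /dev/sdX path with a persistent by-id path."""
    if dev_path.startswith("/dev/sd"):
        return query_by_id(dev_path)
    # mapper and by-uuid paths are already stable, keep them
    return dev_path

# Hypothetical lookup standing in for query_dev_id_path_by_sdx_path
fake_lookup = {"/dev/sdc1": "/dev/disk/by-id/scsi-36002248-part1"}.get
```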
def update_encryption_settings():
    hutil.do_parse_context('UpdateEncryptionSettings')
    logger.log('Updating encryption settings')
    encryption_config = EncryptionConfig(encryption_environment, logger)
    config_secret_seq = encryption_config.get_secret_seq_num()
    current_secret_seq_num = int(config_secret_seq if config_secret_seq else -1)
    update_call_seq_num = hutil.get_current_seq()
    logger.log("Current secret was created in operation #{0}".format(current_secret_seq_num))
    logger.log("The update call is operation #{0}".format(update_call_seq_num))
    executor = CommandExecutor(logger)
    executor.Execute("mount /boot")
    try:
        protected_settings_str = hutil._context._config['runtimeSettings'][0]['handlerSettings'].get('protectedSettings')
        public_settings_str = hutil._context._config['runtimeSettings'][0]['handlerSettings'].get('publicSettings')
        if isinstance(public_settings_str, basestring):
            public_settings = json.loads(public_settings_str)
        else:
            public_settings = public_settings_str
        if isinstance(protected_settings_str, basestring):
            protected_settings = json.loads(protected_settings_str)
        else:
            protected_settings = protected_settings_str
        disk_util = DiskUtil(hutil=hutil, patching=DistroPatcher, logger=logger, encryption_environment=encryption_environment)
        bek_util = BekUtil(disk_util, logger)
        extension_parameter = ExtensionParameter(hutil, logger, DistroPatcher, encryption_environment, protected_settings, public_settings)
        existing_passphrase_file = bek_util.get_bek_passphrase_file(encryption_config)
        if current_secret_seq_num < update_call_seq_num:
            if extension_parameter.passphrase is None or extension_parameter.passphrase == "":
                extension_parameter.passphrase = bek_util.generate_passphrase(extension_parameter.KeyEncryptionAlgorithm)
            logger.log('Recreating secret to store in the KeyVault')
            keyVaultUtil = KeyVaultUtil(logger)
            temp_keyfile = tempfile.NamedTemporaryFile(delete=False)
            temp_keyfile.write(extension_parameter.passphrase)
            temp_keyfile.close()
            for crypt_item in disk_util.get_crypt_items():
                if not crypt_item:
                    continue
                before_keyslots = disk_util.luks_dump_keyslots(crypt_item.dev_path, crypt_item.luks_header_path)
                logger.log("Before key addition, keyslots for {0}: {1}".format(crypt_item.dev_path, before_keyslots))
                logger.log("Adding new key for {0}".format(crypt_item.dev_path))
                luks_add_result = disk_util.luks_add_key(passphrase_file=existing_passphrase_file,
                                                         dev_path=crypt_item.dev_path,
                                                         mapper_name=crypt_item.mapper_name,
                                                         header_file=crypt_item.luks_header_path,
                                                         new_key_path=temp_keyfile.name)
                logger.log("luks add result is {0}".format(luks_add_result))
                after_keyslots = disk_util.luks_dump_keyslots(crypt_item.dev_path, crypt_item.luks_header_path)
                logger.log("After key addition, keyslots for {0}: {1}".format(crypt_item.dev_path, after_keyslots))
                new_keyslot = list(map(lambda x: x[0] != x[1], zip(before_keyslots, after_keyslots))).index(True)
                logger.log("New key was added in keyslot {0}".format(new_keyslot))
                crypt_item.current_luks_slot = new_keyslot
                disk_util.update_crypt_item(crypt_item)
            logger.log("New key successfully added to all encrypted devices")
            if DistroPatcher.distro_info[0] == "Ubuntu":
                executor.Execute("update-initramfs -u -k all", True)
            os.unlink(temp_keyfile.name)
            kek_secret_id_created = keyVaultUtil.create_kek_secret(Passphrase=extension_parameter.passphrase,
                                                                  KeyVaultURL=extension_parameter.KeyVaultURL,
                                                                  KeyEncryptionKeyURL=extension_parameter.KeyEncryptionKeyURL,
                                                                  AADClientID=extension_parameter.AADClientID,
                                                                  AADClientCertThumbprint=extension_parameter.AADClientCertThumbprint,
                                                                  KeyEncryptionAlgorithm=extension_parameter.KeyEncryptionAlgorithm,
                                                                  AADClientSecret=extension_parameter.AADClientSecret,
                                                                  DiskEncryptionKeyFileName=extension_parameter.DiskEncryptionKeyFileName)
            if kek_secret_id_created is None:
                hutil.do_exit(exit_code=0,
                              operation='UpdateEncryptionSettings',
                              status=CommonVariables.extension_error_status,
                              code=str(CommonVariables.create_encryption_secret_failed),
                              message='UpdateEncryptionSettings failed.')
            else:
                encryption_config.passphrase_file_name = extension_parameter.DiskEncryptionKeyFileName
                encryption_config.secret_id = kek_secret_id_created
                encryption_config.secret_seq_num = hutil.get_current_seq()
                encryption_config.commit()
                shutil.copy(existing_passphrase_file, encryption_environment.bek_backup_path)
                logger.log("Backed up BEK at {0}".format(encryption_environment.bek_backup_path))
                hutil.do_exit(exit_code=0,
                              operation='UpdateEncryptionSettings',
                              status=CommonVariables.extension_success_status,
                              code=str(CommonVariables.success),
                              message=str(kek_secret_id_created))
        else:
            logger.log('Secret has already been updated')
            mount_encrypted_disks(disk_util, bek_util, existing_passphrase_file, encryption_config)
            hutil.exit_if_same_seq()
            # remount bek volume
            existing_passphrase_file = bek_util.get_bek_passphrase_file(encryption_config)
            if extension_parameter.passphrase and extension_parameter.passphrase != file(existing_passphrase_file).read():
                logger.log("The new passphrase has not been placed in BEK volume yet")
                logger.log("Skipping removal of old passphrase")
                exit_without_status_report()
            logger.log('Removing old passphrase')
            for crypt_item in disk_util.get_crypt_items():
                if not crypt_item:
                    continue
                if filecmp.cmp(existing_passphrase_file, encryption_environment.bek_backup_path):
                    logger.log('Current BEK and backup are the same, skipping removal')
                    continue
                logger.log('Removing old passphrase from {0}'.format(crypt_item.dev_path))
                keyslots = disk_util.luks_dump_keyslots(crypt_item.dev_path, crypt_item.luks_header_path)
                logger.log("Keyslots before removal: {0}".format(keyslots))
                luks_remove_result = disk_util.luks_remove_key(passphrase_file=encryption_environment.bek_backup_path,
                                                               dev_path=crypt_item.dev_path,
                                                               header_file=crypt_item.luks_header_path)
                logger.log("luks remove result is {0}".format(luks_remove_result))
                keyslots = disk_util.luks_dump_keyslots(crypt_item.dev_path, crypt_item.luks_header_path)
                logger.log("Keyslots after removal: {0}".format(keyslots))
            logger.log("Old key successfully removed from all encrypted devices")
            hutil.save_seq()
            extension_parameter.commit()
            os.unlink(encryption_environment.bek_backup_path)
            hutil.do_exit(exit_code=0,
                          operation='UpdateEncryptionSettings',
                          status=CommonVariables.extension_success_status,
                          code=str(CommonVariables.success),
                          message='Encryption settings updated')
    except Exception as e:
        message = "Failed to update encryption settings with error: {0}, stack trace: {1}".format(e, traceback.format_exc())
        logger.log(msg=message, level=CommonVariables.ErrorLevel)
        hutil.do_exit(exit_code=0,
                      operation='UpdateEncryptionSettings',
                      status=CommonVariables.extension_error_status,
                      code=str(CommonVariables.unknown_error),
                      message=message)
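update_encryption_settings locates the keyslot that luksAddKey populated by diffing the slot map before and after the addition. Stripped of the LUKS plumbing, the diff reduces to:

```python
def new_keyslot_index(before, after):
    """Return the index of the first keyslot whose in-use flag changed."""
    # Mirrors: list(map(lambda x: x[0] != x[1], zip(before, after))).index(True)
    return [b != a for b, a in zip(before, after)].index(True)

# Example keyslot maps (True = slot in use), as luks_dump_keyslots might report
before = [True, False, False, False]
after = [True, False, True, False]
```

`.index(True)` raises ValueError when nothing changed, which would surface as the function's generic failure path.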
def update():
    hutil.do_parse_context('Update')
    hutil.do_exit(0, 'Update', CommonVariables.extension_success_status, '0', 'Update Succeeded')

def exit_without_status_report():
    sys.exit(0)
def not_support_header_option_distro(patching):
    if patching.distro_info[0].lower() == "centos" and patching.distro_info[1].startswith('6.'):
        return True
    if patching.distro_info[0].lower() == "redhat" and patching.distro_info[1].startswith('6.'):
        return True
    if patching.distro_info[0].lower() == "suse" and patching.distro_info[1].startswith('11'):
        return True
    return False
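not_support_header_option_distro keys off (distro name, version prefix) pairs; the same check can be written table-driven, which makes the supported matrix easier to scan (a sketch, not the extension's API):

```python
# Distros whose cryptsetup lacks the detached-header option, per the checks above
UNSUPPORTED_PREFIXES = [("centos", "6."), ("redhat", "6."), ("suse", "11")]

def lacks_header_option(distro_info):
    """True when the distro matches an unsupported (name, version-prefix) pair."""
    name, version = distro_info[0].lower(), distro_info[1]
    return any(name == n and version.startswith(p) for n, p in UNSUPPORTED_PREFIXES)
```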
def none_or_empty(obj):
    return obj is None or obj == ""
def toggle_se_linux_for_centos7(disable):
    if DistroPatcher.distro_info[0].lower() == 'centos' and DistroPatcher.distro_info[1].startswith('7.0'):
        if disable:
            se_linux_status = encryption_environment.get_se_linux()
            if se_linux_status.lower() == 'enforcing':
                encryption_environment.disable_se_linux()
                return True
        else:
            encryption_environment.enable_se_linux()
    return False
def mount_encrypted_disks(disk_util, bek_util, passphrase_file, encryption_config):
    # make sure the azure disk config path exists.
    for crypt_item in disk_util.get_crypt_items():
        if not crypt_item:
            continue
        # workaround for CentOS 7.0
        se_linux_status = None
        if DistroPatcher.distro_info[0].lower() == 'centos' and DistroPatcher.distro_info[1].startswith('7.0'):
            se_linux_status = encryption_environment.get_se_linux()
            if se_linux_status.lower() == 'enforcing':
                encryption_environment.disable_se_linux()
        luks_open_result = disk_util.luks_open(passphrase_file=passphrase_file,
                                               dev_path=crypt_item.dev_path,
                                               mapper_name=crypt_item.mapper_name,
                                               header_file=crypt_item.luks_header_path,
                                               uses_cleartext_key=crypt_item.uses_cleartext_key)
        logger.log("luks open result is {0}".format(luks_open_result))
        if DistroPatcher.distro_info[0].lower() == 'centos' and DistroPatcher.distro_info[1].startswith('7.0'):
            if se_linux_status is not None and se_linux_status.lower() == 'enforcing':
                encryption_environment.enable_se_linux()
        if crypt_item.mount_point != 'None':
            disk_util.mount_crypt_item(crypt_item, passphrase_file)
        else:
            logger.log(msg=('mount_point is None so skipping mount for the item {0}'.format(crypt_item)), level=CommonVariables.WarningLevel)
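The disable/re-enable SELinux dance around luksOpen recurs in several functions here; it could be captured once as a context manager. This is a hypothetical refactoring sketch, with `FakeEnvironment` standing in for the real EncryptionEnvironment:

```python
from contextlib import contextmanager

@contextmanager
def selinux_disabled(env, distro_info):
    """Temporarily lift SELinux enforcement on CentOS 7.0, then restore it."""
    needs_toggle = (distro_info[0].lower() == 'centos'
                    and distro_info[1].startswith('7.0')
                    and env.get_se_linux().lower() == 'enforcing')
    if needs_toggle:
        env.disable_se_linux()
    try:
        yield
    finally:
        # the finally clause guarantees enforcement comes back even on error
        if needs_toggle:
            env.enable_se_linux()

class FakeEnvironment(object):
    """Hypothetical stand-in recording the calls the real environment would get."""
    def __init__(self):
        self.calls = []
    def get_se_linux(self):
        return 'Enforcing'
    def disable_se_linux(self):
        self.calls.append('disable')
    def enable_se_linux(self):
        self.calls.append('enable')
```

The `finally` block also fixes a subtlety the inline copies share: if luksOpen raised, enforcement would stay disabled.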
def main():
    global hutil, DistroPatcher, logger, encryption_environment
    HandlerUtil.LoggerInit('/var/log/waagent.log', '/dev/stdout')
    HandlerUtil.waagent.Log("{0} started to handle.".format(CommonVariables.extension_name))
    hutil = HandlerUtil.HandlerUtility(HandlerUtil.waagent.Log, HandlerUtil.waagent.Error, CommonVariables.extension_name)
    logger = BackupLogger(hutil)
    DistroPatcher = GetDistroPatcher(logger)
    hutil.patching = DistroPatcher
    encryption_environment = EncryptionEnvironment(patching=DistroPatcher, logger=logger)
    disk_util = DiskUtil(hutil=hutil, patching=DistroPatcher, logger=logger, encryption_environment=encryption_environment)
    hutil.disk_util = disk_util
    if DistroPatcher is None:
        hutil.do_exit(exit_code=0,
                      operation='Enable',
                      status=CommonVariables.extension_error_status,
                      code=(CommonVariables.os_not_supported),
                      message='Enable failed: the os is not supported')
    for a in sys.argv[1:]:
        if re.match("^([-/]*)(disable)", a):
            disable()
        elif re.match("^([-/]*)(uninstall)", a):
            uninstall()
        elif re.match("^([-/]*)(install)", a):
            install()
        elif re.match("^([-/]*)(enable)", a):
            enable()
        elif re.match("^([-/]*)(update)", a):
            update()
        elif re.match("^([-/]*)(daemon)", a):
            daemon()
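main dispatches on argv with the pattern `^([-/]*)(verb)`, which tolerates `-`, `--`, `/` prefixes or a bare verb. The match behavior can be checked in isolation:

```python
import re

def matches_verb(arg, verb):
    """True when arg is the verb with any run of leading '-' or '/' characters."""
    return re.match(r"^([-/]*)(%s)" % verb, arg) is not None
```

Note the dispatcher tests `uninstall` before `install`: because the prefix class can match empty, checking order matters for verbs that share a suffix.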
def mark_encryption(command, volume_type, disk_format_query):
    encryption_marker = EncryptionMarkConfig(logger, encryption_environment)
    encryption_marker.command = command
    encryption_marker.volume_type = volume_type
    encryption_marker.diskFormatQuery = disk_format_query
    encryption_marker.commit()
    return encryption_marker

def is_daemon_running():
    handler_path = os.path.join(os.getcwd(), __file__)
    daemon_arg = "-daemon"
    psproc = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE)
    pslist, _ = psproc.communicate()
    for line in pslist.split("\n"):
        if handler_path in line and daemon_arg in line:
            return True
    return False
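is_daemon_running decides by scanning `ps aux` output for a line containing both the handler script path and the `-daemon` flag. The line filter, separated from the subprocess call so it can be exercised on canned output:

```python
def daemon_line_present(pslist, handler_path, daemon_arg="-daemon"):
    """True when some ps line mentions both the handler script and the daemon flag."""
    return any(handler_path in line and daemon_arg in line
               for line in pslist.split("\n"))

# Canned ps output for illustration
sample_ps = ("root     1  0.0  /sbin/init\n"
             "root    99  0.1  python /var/lib/waagent/handle.py -daemon\n")
```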
def enable():
    while True:
        hutil.do_parse_context('Enable')
        logger.log('Enabling extension')
        protected_settings_str = hutil._context._config['runtimeSettings'][0]['handlerSettings'].get('protectedSettings')
        public_settings_str = hutil._context._config['runtimeSettings'][0]['handlerSettings'].get('publicSettings')
        if isinstance(public_settings_str, basestring):
            public_settings = json.loads(public_settings_str)
        else:
            public_settings = public_settings_str
        if isinstance(protected_settings_str, basestring):
            protected_settings = json.loads(protected_settings_str)
        else:
            protected_settings = protected_settings_str
        logger.log('Public settings:\n{0}'.format(json.dumps(public_settings, sort_keys=True, indent=4)))
        encryption_operation = public_settings.get(CommonVariables.EncryptionEncryptionOperationKey)
        if encryption_operation == CommonVariables.EnableEncryption or encryption_operation == CommonVariables.EnableEncryptionFormat:
            logger.log("handle.py found enable encryption operation")
            logger.log("Installing pre-requisites")
            DistroPatcher.install_extras()
            extension_parameter = ExtensionParameter(hutil, logger, DistroPatcher, encryption_environment, protected_settings, public_settings)
            if os.path.exists(encryption_environment.bek_backup_path) or (extension_parameter.config_file_exists() and extension_parameter.config_changed()):
                logger.log("Config has changed, updating encryption settings")
                update_encryption_settings()
                extension_parameter.commit()
            else:
                logger.log("Config did not change or first call, enabling encryption")
                enable_encryption()
        elif encryption_operation == CommonVariables.DisableEncryption:
            logger.log("handle.py found disable encryption operation")
            logger.log("Installing pre-requisites")
            DistroPatcher.install_extras()
            disable_encryption()
        elif encryption_operation == CommonVariables.QueryEncryptionStatus:
            logger.log("handle.py found query operation")
            if is_daemon_running():
                logger.log("A daemon is already running, exiting without status report")
                hutil.redo_last_status()
                exit_without_status_report()
            else:
                logger.log("No daemon found, trying to find the last non-query operation")
                hutil.find_last_nonquery_operation = True
        else:
            msg = "Encryption operation {0} is not supported".format(encryption_operation)
            logger.log(msg)
            hutil.do_exit(exit_code=0,
                          operation='Enable',
                          status=CommonVariables.extension_error_status,
                          code=(CommonVariables.unknown_error),
                          message=msg)
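The protected/public settings blocks arrive from the handler either as a JSON string or as an already-parsed dict, which is why the same isinstance-then-loads idiom is repeated in enable, disable_encryption, update_encryption_settings, and enable_encryption. Collapsed into one helper (a sketch, not the extension's API; the original tests Python 2's `basestring` where this uses `str`):

```python
import json

def normalize_settings(value):
    """Return a dict whether the handler passed a JSON string or a dict."""
    if isinstance(value, str):  # basestring in the Python 2 original
        return json.loads(value)
    return value
```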
def enable_encryption():
    hutil.do_parse_context('EnableEncryption')
    # we need to start another subprocess to do it, because the initial process
    # would be killed by the wala in 5 minutes.
    logger.log('Enabling encryption')
    """
    trying to mount the crypted items.
    """
    disk_util = DiskUtil(hutil=hutil, patching=DistroPatcher, logger=logger, encryption_environment=encryption_environment)
    bek_util = BekUtil(disk_util, logger)
    existing_passphrase_file = None
    encryption_config = EncryptionConfig(encryption_environment=encryption_environment, logger=logger)
    config_path_result = disk_util.make_sure_path_exists(encryption_environment.encryption_config_path)
    if config_path_result != CommonVariables.process_success:
        logger.log(msg="azure encryption path creation failed.",
                   level=CommonVariables.ErrorLevel)
    if encryption_config.config_file_exists():
        existing_passphrase_file = bek_util.get_bek_passphrase_file(encryption_config)
        if existing_passphrase_file is not None:
            mount_encrypted_disks(disk_util=disk_util,
                                  bek_util=bek_util,
                                  encryption_config=encryption_config,
                                  passphrase_file=existing_passphrase_file)
        else:
            logger.log(msg="EncryptionConfig is present, but could not get the BEK file.",
                       level=CommonVariables.WarningLevel)
            hutil.redo_last_status()
            exit_without_status_report()
    ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    ps_stdout, ps_stderr = ps.communicate()
    if re.search(r"dd.*of=/dev/mapper/osencrypt", ps_stdout):
        logger.log(msg="OS disk encryption already in progress, exiting",
                   level=CommonVariables.WarningLevel)
        exit_without_status_report()
    # handle the re-call scenario. the re-call would resume
    # if there's one tag for the next reboot.
    encryption_marker = EncryptionMarkConfig(logger, encryption_environment)
    try:
        protected_settings_str = hutil._context._config['runtimeSettings'][0]['handlerSettings'].get('protectedSettings')
        public_settings_str = hutil._context._config['runtimeSettings'][0]['handlerSettings'].get('publicSettings')
        if isinstance(public_settings_str, basestring):
            public_settings = json.loads(public_settings_str)
        else:
            public_settings = public_settings_str
        if isinstance(protected_settings_str, basestring):
            protected_settings = json.loads(protected_settings_str)
        else:
            protected_settings = protected_settings_str
        extension_parameter = ExtensionParameter(hutil, logger, DistroPatcher, encryption_environment, protected_settings, public_settings)
        kek_secret_id_created = None
        encryption_marker = EncryptionMarkConfig(logger, encryption_environment)
        if encryption_marker.config_file_exists():
            # verify the encryption mark
            logger.log(msg="encryption mark is there, starting daemon.", level=CommonVariables.InfoLevel)
            start_daemon('EnableEncryption')
        else:
            encryption_config = EncryptionConfig(encryption_environment, logger)
            exit_status = None
            if encryption_config.config_file_exists():
                exit_status = {
                    'operation': 'EnableEncryption',
                    'status': CommonVariables.extension_success_status,
                    'status_code': str(CommonVariables.success),
                    'message': encryption_config.get_secret_id()
                }
            hutil.exit_if_same_seq(exit_status)
            hutil.save_seq()
            encryption_config.volume_type = extension_parameter.VolumeType
            encryption_config.commit()
            if encryption_config.config_file_exists() and existing_passphrase_file is not None:
                logger.log(msg="config file exists and passphrase file exists.", level=CommonVariables.WarningLevel)
                encryption_marker = mark_encryption(command=extension_parameter.command,
                                                    volume_type=extension_parameter.VolumeType,
                                                    disk_format_query=extension_parameter.DiskFormatQuery)
                start_daemon('EnableEncryption')
            else:
                """
                creating the secret, the secret would be transferred to a bek volume after the updatevm called in powershell.
                """
                # store the luks passphrase in the secret.
                keyVaultUtil = KeyVaultUtil(logger)
                """
                validate the parameters
                """
                if (extension_parameter.VolumeType is None or
                        not any([extension_parameter.VolumeType.lower() == vt.lower() for vt in CommonVariables.SupportedVolumeTypes])):
                    if encryption_config.config_file_exists():
                        existing_passphrase_file = bek_util.get_bek_passphrase_file(encryption_config)
                        if existing_passphrase_file is None:
                            logger.log("Unsupported volume type specified and BEK volume does not exist, clearing encryption config")
                            encryption_config.clear_config()
                    hutil.do_exit(exit_code=0,
                                  operation='EnableEncryption',
                                  status=CommonVariables.extension_error_status,
                                  code=str(CommonVariables.volue_type_not_support),
                                  message='VolumeType "{0}" is not supported'.format(extension_parameter.VolumeType))
                if extension_parameter.command not in [CommonVariables.EnableEncryption, CommonVariables.EnableEncryptionFormat]:
                    hutil.do_exit(exit_code=0,
                                  operation='EnableEncryption',
                                  status=CommonVariables.extension_error_status,
                                  code=str(CommonVariables.command_not_support),
                                  message='Command "{0}" is not supported'.format(extension_parameter.command))
                """
                this is the fresh call case
                """
                # handle the passphrase related
                if existing_passphrase_file is None:
                    if extension_parameter.passphrase is None or extension_parameter.passphrase == "":
                        extension_parameter.passphrase = bek_util.generate_passphrase(extension_parameter.KeyEncryptionAlgorithm)
                else:
                    logger.log(msg="the extension_parameter.passphrase is already defined")
                kek_secret_id_created = keyVaultUtil.create_kek_secret(Passphrase=extension_parameter.passphrase,
                                                                      KeyVaultURL=extension_parameter.KeyVaultURL,
                                                                      KeyEncryptionKeyURL=extension_parameter.KeyEncryptionKeyURL,
                                                                      AADClientID=extension_parameter.AADClientID,
                                                                      AADClientCertThumbprint=extension_parameter.AADClientCertThumbprint,
                                                                      KeyEncryptionAlgorithm=extension_parameter.KeyEncryptionAlgorithm,
                                                                      AADClientSecret=extension_parameter.AADClientSecret,
                                                                      DiskEncryptionKeyFileName=extension_parameter.DiskEncryptionKeyFileName)
                if kek_secret_id_created is None:
                    encryption_config.clear_config()
                    hutil.do_exit(exit_code=0,
                                  operation='EnableEncryption',
                                  status=CommonVariables.extension_error_status,
                                  code=str(CommonVariables.create_encryption_secret_failed),
                                  message='Enable failed.')
                else:
                    encryption_config.passphrase_file_name = extension_parameter.DiskEncryptionKeyFileName
                    encryption_config.volume_type = extension_parameter.VolumeType
                    encryption_config.secret_id = kek_secret_id_created
                    encryption_config.secret_seq_num = hutil.get_current_seq()
                    encryption_config.commit()
                extension_parameter.commit()
                encryption_marker = mark_encryption(command=extension_parameter.command,
                                                    volume_type=extension_parameter.VolumeType,
                                                    disk_format_query=extension_parameter.DiskFormatQuery)
                if kek_secret_id_created:
                    hutil.do_exit(exit_code=0,
                                  operation='EnableEncryption',
                                  status=CommonVariables.extension_success_status,
                                  code=str(CommonVariables.success),
                                  message=str(kek_secret_id_created))
                else:
                    """
                    the enabling called again. the passphrase would be re-used.
                    """
                    hutil.do_exit(exit_code=0,
                                  operation='EnableEncryption',
                                  status=CommonVariables.extension_success_status,
                                  code=str(CommonVariables.encrypttion_already_enabled),
                                  message=str(kek_secret_id_created))
    except Exception as e:
        message = "Failed to enable the extension with error: {0}, stack trace: {1}".format(e, traceback.format_exc())
        logger.log(msg=message, level=CommonVariables.ErrorLevel)
        hutil.do_exit(exit_code=0,
                      operation='EnableEncryption',
                      status=CommonVariables.extension_error_status,
                      code=str(CommonVariables.unknown_error),
                      message=message)
def enable_encryption_format(passphrase, disk_format_query, disk_util, force=False):
    logger.log('enable_encryption_format')
    logger.log('disk format query is {0}'.format(disk_format_query))
    json_parsed = json.loads(disk_format_query)
    if type(json_parsed) is dict:
        encryption_format_items = [json_parsed, ]
    elif type(json_parsed) is list:
        encryption_format_items = json_parsed
    else:
        raise Exception("JSON parse error. Input: {0}".format(disk_format_query))
    for encryption_item in encryption_format_items:
        dev_path_in_query = None
        if encryption_item.has_key("scsi") and encryption_item["scsi"] != '':
            dev_path_in_query = disk_util.query_dev_sdx_path_by_scsi_id(encryption_item["scsi"])
        if encryption_item.has_key("dev_path") and encryption_item["dev_path"] != '':
            dev_path_in_query = encryption_item["dev_path"]
        if not dev_path_in_query:
            raise Exception("Could not parse diskFormatQuery: {0}".format(disk_format_query))
        devices = disk_util.get_device_items(dev_path_in_query)
        if len(devices) != 1:
            logger.log(msg=("the device with path {0} has more than one sub-device, skipping it".format(dev_path_in_query)), level=CommonVariables.WarningLevel)
            continue
        else:
            device_item = devices[0]
            if device_item.file_system is None or device_item.file_system == "" or force:
                if device_item.mount_point:
                    disk_util.swapoff()
                    disk_util.umount(device_item.mount_point)
                mapper_name = str(uuid.uuid4())
                logger.log("encrypting " + str(device_item))
                if device_item.uuid is not None and device_item.uuid != "" and not force:
                    device_to_encrypt_uuid_path = os.path.join("/dev/disk/by-uuid", device_item.uuid)
                else:
                    device_to_encrypt_uuid_path = dev_path_in_query
                encrypted_device_path = os.path.join(CommonVariables.dev_mapper_root, mapper_name)
                try:
                    se_linux_status = None
                    if DistroPatcher.distro_info[0].lower() == 'centos' and DistroPatcher.distro_info[1].startswith('7.0'):
                        se_linux_status = encryption_environment.get_se_linux()
                        if se_linux_status.lower() == 'enforcing':
                            encryption_environment.disable_se_linux()
                    encrypt_result = disk_util.encrypt_disk(dev_path=device_to_encrypt_uuid_path, passphrase_file=passphrase, mapper_name=mapper_name, header_file=None)
                finally:
                    if DistroPatcher.distro_info[0].lower() == 'centos' and DistroPatcher.distro_info[1].startswith('7.0'):
                        if se_linux_status is not None and se_linux_status.lower() == 'enforcing':
                            encryption_environment.enable_se_linux()
                if encrypt_result == CommonVariables.process_success:
                    # TODO: let the customer specify the default file system in the parameter
                    file_system = None
                    if encryption_item.has_key("file_system") and encryption_item["file_system"] != "":
                        file_system = encryption_item["file_system"]
                    else:
                        file_system = CommonVariables.default_file_system
                    format_disk_result = disk_util.format_disk(dev_path=encrypted_device_path, file_system=file_system)
                    if format_disk_result != CommonVariables.process_success:
                        logger.log(msg=("format disk {0} failed with result {1}".format(encrypted_device_path, format_disk_result)), level=CommonVariables.ErrorLevel)
                    crypt_item_to_update = CryptItem()
                    crypt_item_to_update.mapper_name = mapper_name
                    crypt_item_to_update.dev_path = device_to_encrypt_uuid_path
                    crypt_item_to_update.luks_header_path = "None"
                    crypt_item_to_update.file_system = file_system
                    crypt_item_to_update.uses_cleartext_key = False
                    crypt_item_to_update.current_luks_slot = 0
                    if encryption_item.has_key("name") and encryption_item["name"] != "":
                        crypt_item_to_update.mount_point = os.path.join("/mnt/", str(encryption_item["name"]))
                    else:
                        crypt_item_to_update.mount_point = os.path.join("/mnt/", mapper_name)
                    logger.log(msg="removing entry for unencrypted drive from fstab", level=CommonVariables.InfoLevel)
                    disk_util.remove_mount_info(crypt_item_to_update.mount_point)
                    disk_util.make_sure_path_exists(crypt_item_to_update.mount_point)
                    update_crypt_item_result = disk_util.add_crypt_item(crypt_item_to_update)
                    if not update_crypt_item_result:
                        logger.log(msg="update crypt item failed", level=CommonVariables.ErrorLevel)
                    mount_result = disk_util.mount_filesystem(dev_path=encrypted_device_path, mount_point=crypt_item_to_update.mount_point)
                    logger.log(msg=("mount result is {0}".format(mount_result)))
                else:
                    logger.log(msg="encryption failed with code {0}".format(encrypt_result), level=CommonVariables.ErrorLevel)
            else:
                logger.log(msg=("the item fstype is not empty: {0}".format(device_item.file_system)))
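enable_encryption_format accepts diskFormatQuery as either a single JSON object or a JSON array of them. The normalization step on its own, written Python 3-style for illustration:

```python
import json

def parse_format_query(disk_format_query):
    """Return a list of format items from a JSON object or JSON array."""
    parsed = json.loads(disk_format_query)
    if isinstance(parsed, dict):
        # a lone object is wrapped so callers always iterate a list
        return [parsed]
    if isinstance(parsed, list):
        return parsed
    raise ValueError("JSON parse error. Input: {0}".format(disk_format_query))
```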
def encrypt_inplace_without_seperate_header_file(passphrase_file,
device_item,
disk_util,
bek_util,
status_prefix='',
ongoing_item_config=None):
"""
if ongoing_item_config is not None, then this is a resume case.
this function will return the phase
"""
logger.log("encrypt_inplace_without_seperate_header_file")
current_phase = CommonVariables.EncryptionPhaseBackupHeader
if ongoing_item_config is None:
ongoing_item_config = OnGoingItemConfig(encryption_environment = encryption_environment, logger = logger)
ongoing_item_config.current_block_size = CommonVariables.default_block_size
ongoing_item_config.current_slice_index = 0
ongoing_item_config.device_size = device_item.size
ongoing_item_config.file_system = device_item.file_system
ongoing_item_config.luks_header_file_path = None
ongoing_item_config.mapper_name = str(uuid.uuid4())
ongoing_item_config.mount_point = device_item.mount_point
if os.path.exists(os.path.join('/dev/', device_item.name)):
ongoing_item_config.original_dev_name_path = os.path.join('/dev/', device_item.name)
ongoing_item_config.original_dev_path = os.path.join('/dev/', device_item.name)
else:
ongoing_item_config.original_dev_name_path = os.path.join('/dev/mapper/', device_item.name)
ongoing_item_config.original_dev_path = os.path.join('/dev/mapper/', device_item.name)
ongoing_item_config.phase = CommonVariables.EncryptionPhaseBackupHeader
ongoing_item_config.commit()
else:
logger.log(msg="ongoing item config is not None, this is resuming, info: {0}".format(ongoing_item_config),
level=CommonVariables.WarningLevel)
logger.log(msg=("encrypting device item: {0}".format(ongoing_item_config.get_original_dev_path())))
# we only support ext file systems.
current_phase = ongoing_item_config.get_phase()
original_dev_path = ongoing_item_config.get_original_dev_path()
mapper_name = ongoing_item_config.get_mapper_name()
device_size = ongoing_item_config.get_device_size()
luks_header_size = CommonVariables.luks_header_size
size_shrink_to = (device_size - luks_header_size) / CommonVariables.sector_size
while current_phase != CommonVariables.EncryptionPhaseDone:
if current_phase == CommonVariables.EncryptionPhaseBackupHeader:
logger.log(msg="the current phase is " + str(CommonVariables.EncryptionPhaseBackupHeader),
level=CommonVariables.InfoLevel)
if not ongoing_item_config.get_file_system().lower() in ["ext2", "ext3", "ext4"]:
logger.log(msg="we only support ext file systems for centos 6.5/6.6/6.7 and redhat 6.7",
level=CommonVariables.WarningLevel)
ongoing_item_config.clear_config()
return current_phase
chk_shrink_result = disk_util.check_shrink_fs(dev_path=original_dev_path, size_shrink_to=size_shrink_to)
if chk_shrink_result != CommonVariables.process_success:
logger.log(msg="check shrink fs failed with code {0} for {1}".format(chk_shrink_result, original_dev_path),
level=CommonVariables.ErrorLevel)
logger.log(msg="your file system may not have enough space to do the encryption.")
# remove the ongoing item config.
ongoing_item_config.clear_config()
return current_phase
else:
ongoing_item_config.current_slice_index = 0
ongoing_item_config.current_source_path = original_dev_path
ongoing_item_config.current_destination = encryption_environment.copy_header_slice_file_path
ongoing_item_config.current_total_copy_size = CommonVariables.default_block_size
ongoing_item_config.from_end = False
ongoing_item_config.header_slice_file_path = encryption_environment.copy_header_slice_file_path
ongoing_item_config.original_dev_path = original_dev_path
ongoing_item_config.commit()
if os.path.exists(encryption_environment.copy_header_slice_file_path):
logger.log(msg="the header slice file already exists, removing it.", level=CommonVariables.WarningLevel)
os.remove(encryption_environment.copy_header_slice_file_path)
copy_result = disk_util.copy(ongoing_item_config=ongoing_item_config, status_prefix=status_prefix)
if copy_result != CommonVariables.process_success:
logger.log(msg="copy the header block failed, return code is: {0}".format(copy_result),
level=CommonVariables.ErrorLevel)
return current_phase
else:
ongoing_item_config.current_slice_index = 0
ongoing_item_config.phase = CommonVariables.EncryptionPhaseEncryptDevice
ongoing_item_config.commit()
current_phase = CommonVariables.EncryptionPhaseEncryptDevice
elif current_phase == CommonVariables.EncryptionPhaseEncryptDevice:
logger.log(msg="the current phase is {0}".format(CommonVariables.EncryptionPhaseEncryptDevice),
level=CommonVariables.InfoLevel)
encrypt_result = disk_util.encrypt_disk(dev_path=original_dev_path,
passphrase_file=passphrase_file,
mapper_name=mapper_name,
header_file=None)
# after encrypt_disk without a separate header, the device uuid
# will change.
if encrypt_result != CommonVariables.process_success:
logger.log(msg="encrypt file system failed.", level=CommonVariables.ErrorLevel)
return current_phase
else:
ongoing_item_config.current_slice_index = 0
ongoing_item_config.phase = CommonVariables.EncryptionPhaseCopyData
ongoing_item_config.commit()
current_phase = CommonVariables.EncryptionPhaseCopyData
elif current_phase == CommonVariables.EncryptionPhaseCopyData:
logger.log(msg="the current phase is {0}".format(CommonVariables.EncryptionPhaseCopyData),
level=CommonVariables.InfoLevel)
device_mapper_path = os.path.join(CommonVariables.dev_mapper_root, mapper_name)
ongoing_item_config.current_destination = device_mapper_path
ongoing_item_config.current_source_path = original_dev_path
ongoing_item_config.current_total_copy_size = (device_size - luks_header_size)
ongoing_item_config.from_end = True
ongoing_item_config.phase = CommonVariables.EncryptionPhaseCopyData
ongoing_item_config.commit()
copy_result = disk_util.copy(ongoing_item_config=ongoing_item_config, status_prefix=status_prefix)
if copy_result != CommonVariables.process_success:
logger.log(msg="copy the main content block failed, return code is: {0}".format(copy_result),
level=CommonVariables.ErrorLevel)
return current_phase
else:
ongoing_item_config.phase = CommonVariables.EncryptionPhaseRecoverHeader
ongoing_item_config.commit()
current_phase = CommonVariables.EncryptionPhaseRecoverHeader
elif current_phase == CommonVariables.EncryptionPhaseRecoverHeader:
logger.log(msg="the current phase is " + str(CommonVariables.EncryptionPhaseRecoverHeader),
level=CommonVariables.InfoLevel)
ongoing_item_config.from_end = False
backed_up_header_slice_file_path = ongoing_item_config.get_header_slice_file_path()
ongoing_item_config.current_slice_index = 0
ongoing_item_config.current_source_path = backed_up_header_slice_file_path
device_mapper_path = os.path.join(CommonVariables.dev_mapper_root, mapper_name)
ongoing_item_config.current_destination = device_mapper_path
ongoing_item_config.current_total_copy_size = CommonVariables.default_block_size
ongoing_item_config.commit()
copy_result = disk_util.copy(ongoing_item_config=ongoing_item_config, status_prefix=status_prefix)
if copy_result == CommonVariables.process_success:
crypt_item_to_update = CryptItem()
crypt_item_to_update.mapper_name = mapper_name
original_dev_name_path = ongoing_item_config.get_original_dev_name_path()
crypt_item_to_update.dev_path = disk_util.query_dev_id_path_by_sdx_path(original_dev_name_path)
crypt_item_to_update.luks_header_path = "None"
crypt_item_to_update.file_system = ongoing_item_config.get_file_system()
crypt_item_to_update.uses_cleartext_key = False
crypt_item_to_update.current_luks_slot = 0
# if the original mountpoint is empty, then leave
# it as None
mount_point = ongoing_item_config.get_mount_point()
if mount_point == "" or mount_point is None:
crypt_item_to_update.mount_point = "None"
else:
crypt_item_to_update.mount_point = mount_point
update_crypt_item_result = disk_util.add_crypt_item(crypt_item_to_update)
if not update_crypt_item_result:
logger.log(msg="update crypt item failed", level=CommonVariables.ErrorLevel)
if mount_point:
logger.log(msg="removing entry for unencrypted drive from fstab",
level=CommonVariables.InfoLevel)
disk_util.remove_mount_info(mount_point)
else:
logger.log(msg=original_dev_name_path + " is not defined in fstab, no need to update",
level=CommonVariables.InfoLevel)
if os.path.exists(encryption_environment.copy_header_slice_file_path):
os.remove(encryption_environment.copy_header_slice_file_path)
current_phase = CommonVariables.EncryptionPhaseDone
ongoing_item_config.phase = current_phase
ongoing_item_config.commit()
expand_fs_result = disk_util.expand_fs(dev_path=device_mapper_path)
if crypt_item_to_update.mount_point != "None":
disk_util.mount_filesystem(device_mapper_path, ongoing_item_config.get_mount_point())
else:
logger.log("the crypt_item_to_update.mount_point is None, so we do not mount it.")
ongoing_item_config.clear_config()
if expand_fs_result != CommonVariables.process_success:
logger.log(msg="expand fs result is: {0}".format(expand_fs_result),
level=CommonVariables.ErrorLevel)
return current_phase
else:
logger.log(msg="recover header failed result is: {0}".format(copy_result),
level=CommonVariables.ErrorLevel)
return current_phase
def encrypt_inplace_with_seperate_header_file(passphrase_file,
device_item,
disk_util,
bek_util,
status_prefix='',
ongoing_item_config=None):
"""
if ongoing_item_config is not None, then this is a resume case.
"""
logger.log("encrypt_inplace_with_seperate_header_file")
current_phase = CommonVariables.EncryptionPhaseEncryptDevice
if ongoing_item_config is None:
ongoing_item_config = OnGoingItemConfig(encryption_environment=encryption_environment,
logger=logger)
mapper_name = str(uuid.uuid4())
ongoing_item_config.current_block_size = CommonVariables.default_block_size
ongoing_item_config.current_slice_index = 0
ongoing_item_config.device_size = device_item.size
ongoing_item_config.file_system = device_item.file_system
ongoing_item_config.mapper_name = mapper_name
ongoing_item_config.mount_point = device_item.mount_point
#TODO improve this.
if os.path.exists(os.path.join('/dev/', device_item.name)):
ongoing_item_config.original_dev_name_path = os.path.join('/dev/', device_item.name)
else:
ongoing_item_config.original_dev_name_path = os.path.join('/dev/mapper/', device_item.name)
ongoing_item_config.original_dev_path = os.path.join('/dev/disk/by-uuid', device_item.uuid)
luks_header_file_path = disk_util.create_luks_header(mapper_name=mapper_name)
if luks_header_file_path is None:
logger.log(msg="create header file failed", level=CommonVariables.ErrorLevel)
return current_phase
else:
ongoing_item_config.luks_header_file_path = luks_header_file_path
ongoing_item_config.phase = CommonVariables.EncryptionPhaseEncryptDevice
ongoing_item_config.commit()
else:
logger.log(msg="ongoing item config is not None, this is resuming: {0}".format(ongoing_item_config),
level=CommonVariables.WarningLevel)
current_phase = ongoing_item_config.get_phase()
while current_phase != CommonVariables.EncryptionPhaseDone:
if current_phase == CommonVariables.EncryptionPhaseEncryptDevice:
disabled = False
try:
mapper_name = ongoing_item_config.get_mapper_name()
original_dev_path = ongoing_item_config.get_original_dev_path()
luks_header_file_path = ongoing_item_config.get_header_file_path()
disabled = toggle_se_linux_for_centos7(True)
encrypt_result = disk_util.encrypt_disk(dev_path=original_dev_path,
passphrase_file=passphrase_file,
mapper_name=mapper_name,
header_file=luks_header_file_path)
if encrypt_result != CommonVariables.process_success:
logger.log(msg="the encryption for {0} failed".format(original_dev_path),
level=CommonVariables.ErrorLevel)
return current_phase
else:
ongoing_item_config.phase = CommonVariables.EncryptionPhaseCopyData
ongoing_item_config.commit()
current_phase = CommonVariables.EncryptionPhaseCopyData
finally:
toggle_se_linux_for_centos7(False)
elif current_phase == CommonVariables.EncryptionPhaseCopyData:
disabled = False
try:
mapper_name = ongoing_item_config.get_mapper_name()
original_dev_path = ongoing_item_config.get_original_dev_path()
luks_header_file_path = ongoing_item_config.get_header_file_path()
disabled = toggle_se_linux_for_centos7(True)
device_mapper_path = os.path.join("/dev/mapper", mapper_name)
if not os.path.exists(device_mapper_path):
open_result = disk_util.luks_open(passphrase_file=passphrase_file,
dev_path=original_dev_path,
mapper_name=mapper_name,
header_file=luks_header_file_path,
uses_cleartext_key=False)
if open_result != CommonVariables.process_success:
logger.log(msg="the luks open for {0} failed.".format(original_dev_path),
level=CommonVariables.ErrorLevel)
return current_phase
else:
logger.log(msg="the device mapper path already exists, so skipping luks open.",
level=CommonVariables.InfoLevel)
device_size = ongoing_item_config.get_device_size()
current_slice_index = ongoing_item_config.get_current_slice_index()
if current_slice_index is None:
ongoing_item_config.current_slice_index = 0
ongoing_item_config.current_source_path = original_dev_path
ongoing_item_config.current_destination = device_mapper_path
ongoing_item_config.current_total_copy_size = device_size
ongoing_item_config.from_end = True
ongoing_item_config.commit()
copy_result = disk_util.copy(ongoing_item_config=ongoing_item_config, status_prefix=status_prefix)
if copy_result != CommonVariables.process_success:
error_message = "the copy result is {0}, so skipping the mounting".format(copy_result)
logger.log(msg=error_message, level=CommonVariables.ErrorLevel)
return current_phase
else:
crypt_item_to_update = CryptItem()
crypt_item_to_update.mapper_name = mapper_name
original_dev_name_path = ongoing_item_config.get_original_dev_name_path()
crypt_item_to_update.dev_path = disk_util.query_dev_id_path_by_sdx_path(original_dev_name_path)
crypt_item_to_update.luks_header_path = luks_header_file_path
crypt_item_to_update.file_system = ongoing_item_config.get_file_system()
crypt_item_to_update.uses_cleartext_key = False
crypt_item_to_update.current_luks_slot = 0
# if the original mountpoint is empty, then leave
# it as None
mount_point = ongoing_item_config.get_mount_point()
if mount_point is None or mount_point == "":
crypt_item_to_update.mount_point = "None"
else:
crypt_item_to_update.mount_point = mount_point
update_crypt_item_result = disk_util.add_crypt_item(crypt_item_to_update)
if not update_crypt_item_result:
logger.log(msg="update crypt item failed", level=CommonVariables.ErrorLevel)
if crypt_item_to_update.mount_point != "None":
disk_util.mount_filesystem(device_mapper_path, mount_point)
else:
logger.log("the crypt_item_to_update.mount_point is None, so we do not mount it.")
if mount_point:
logger.log(msg="removing entry for unencrypted drive from fstab",
level=CommonVariables.InfoLevel)
disk_util.remove_mount_info(mount_point)
else:
logger.log(msg=original_dev_name_path + " is not defined in fstab, no need to update",
level=CommonVariables.InfoLevel)
current_phase = CommonVariables.EncryptionPhaseDone
ongoing_item_config.phase = current_phase
ongoing_item_config.commit()
ongoing_item_config.clear_config()
return current_phase
finally:
toggle_se_linux_for_centos7(False)
def decrypt_inplace_copy_data(passphrase_file,
crypt_item,
raw_device_item,
mapper_device_item,
disk_util,
status_prefix='',
ongoing_item_config=None):
logger.log(msg="decrypt_inplace_copy_data")
if ongoing_item_config:
logger.log(msg="ongoing item config is not None, resuming decryption, info: {0}".format(ongoing_item_config),
level=CommonVariables.WarningLevel)
else:
logger.log(msg="starting decryption of {0}".format(crypt_item))
ongoing_item_config = OnGoingItemConfig(encryption_environment=encryption_environment, logger=logger)
ongoing_item_config.current_destination = crypt_item.dev_path
ongoing_item_config.current_source_path = os.path.join(CommonVariables.dev_mapper_root,
crypt_item.mapper_name)
ongoing_item_config.current_total_copy_size = mapper_device_item.size
ongoing_item_config.from_end = True
ongoing_item_config.phase = CommonVariables.DecryptionPhaseCopyData
ongoing_item_config.current_slice_index = 0
ongoing_item_config.current_block_size = CommonVariables.default_block_size
ongoing_item_config.mount_point = crypt_item.mount_point
ongoing_item_config.commit()
current_phase = ongoing_item_config.get_phase()
while current_phase != CommonVariables.DecryptionPhaseDone:
logger.log(msg=("the current phase is {0}".format(current_phase)),
level=CommonVariables.InfoLevel)
if current_phase == CommonVariables.DecryptionPhaseCopyData:
copy_result = disk_util.copy(ongoing_item_config=ongoing_item_config, status_prefix=status_prefix)
if copy_result == CommonVariables.process_success:
mount_point = ongoing_item_config.get_mount_point()
if mount_point and mount_point != "None":
logger.log(msg="restoring entry for unencrypted drive from fstab", level=CommonVariables.InfoLevel)
disk_util.restore_mount_info(ongoing_item_config.get_mount_point())
else:
logger.log(msg=crypt_item.dev_path + " was not in fstab when encryption was enabled, no need to restore",
level=CommonVariables.InfoLevel)
ongoing_item_config.phase = CommonVariables.DecryptionPhaseDone
ongoing_item_config.commit()
current_phase = CommonVariables.DecryptionPhaseDone
else:
logger.log(msg="decryption: block copy failed, result: {0}".format(copy_result),
level=CommonVariables.ErrorLevel)
return current_phase
ongoing_item_config.clear_config()
return current_phase
def decrypt_inplace_without_separate_header_file(passphrase_file,
crypt_item,
raw_device_item,
mapper_device_item,
disk_util,
status_prefix='',
ongoing_item_config=None):
logger.log(msg="decrypt_inplace_without_separate_header_file")
proc_comm = ProcessCommunicator()
executor = CommandExecutor(logger)
executor.Execute(DistroPatcher.cryptsetup_path + " luksDump " + crypt_item.dev_path, communicator=proc_comm)
luks_header_size = int(re.findall(r"Payload.*?(\d+)", proc_comm.stdout)[0]) * CommonVariables.sector_size
if raw_device_item.size - mapper_device_item.size != luks_header_size:
logger.log(msg="mismatch between raw and mapper device found for crypt_item {0}".format(crypt_item),
level=CommonVariables.ErrorLevel)
logger.log(msg="raw_device_item: {0}".format(raw_device_item),
level=CommonVariables.ErrorLevel)
logger.log(msg="mapper_device_item {0}".format(mapper_device_item),
level=CommonVariables.ErrorLevel)
return None
return decrypt_inplace_copy_data(passphrase_file,
crypt_item,
raw_device_item,
mapper_device_item,
disk_util,
status_prefix,
ongoing_item_config)
def decrypt_inplace_with_separate_header_file(passphrase_file,
crypt_item,
raw_device_item,
mapper_device_item,
disk_util,
status_prefix='',
ongoing_item_config=None):
logger.log(msg="decrypt_inplace_with_separate_header_file")
if raw_device_item.size != mapper_device_item.size:
logger.log(msg="mismatch between raw and mapper device found for crypt_item {0}".format(crypt_item),
level=CommonVariables.ErrorLevel)
logger.log(msg="raw_device_item: {0}".format(raw_device_item),
level=CommonVariables.ErrorLevel)
logger.log(msg="mapper_device_item {0}".format(mapper_device_item),
level=CommonVariables.ErrorLevel)
return None
return decrypt_inplace_copy_data(passphrase_file,
crypt_item,
raw_device_item,
mapper_device_item,
disk_util,
status_prefix,
ongoing_item_config)
def enable_encryption_all_in_place(passphrase_file, encryption_marker, disk_util, bek_util):
"""
Return None on success, or return the device item for which encryption failed.
"""
logger.log(msg="executing the enable_encryption_all_in_place command.")
device_items = disk_util.get_device_items(None)
device_items_to_encrypt = []
encrypted_items = []
error_message = ""
for device_item in device_items:
logger.log("device_item == " + str(device_item))
should_skip = disk_util.should_skip_for_inplace_encryption(device_item, encryption_marker.get_volume_type())
if not should_skip:
if device_item.name == bek_util.passphrase_device:
logger.log("skipping the passphrase disk {0}".format(device_item))
should_skip = True
if device_item.uuid in encrypted_items:
logger.log("already performed an operation on {0}, skipping it".format(device_item))
should_skip = True
if not should_skip and \
not any(di.name == device_item.name for di in device_items_to_encrypt):
device_items_to_encrypt.append(device_item)
msg = 'Encrypting {0} data volumes'.format(len(device_items_to_encrypt))
logger.log(msg)
hutil.do_status_report(operation='EnableEncryption',
status=CommonVariables.extension_success_status,
status_code=str(CommonVariables.success),
message=msg)
for device_num, device_item in enumerate(device_items_to_encrypt):
umount_status_code = CommonVariables.success
if device_item.mount_point is not None and device_item.mount_point != "":
umount_status_code = disk_util.umount(device_item.mount_point)
if umount_status_code != CommonVariables.success:
logger.log("error occurred during umount of {0}, code: {1}".format(device_item.mount_point, umount_status_code))
else:
encrypted_items.append(device_item.uuid)
logger.log(msg=("encrypting: {0}".format(device_item)))
no_header_file_support = not_support_header_option_distro(DistroPatcher)
status_prefix = "Encrypting data volume {0}/{1}".format(device_num + 1,
len(device_items_to_encrypt))
#TODO check the file system before encrypting it.
if no_header_file_support:
logger.log(msg="this is a centos 6, redhat 6, or sles 11 series distro; the data drive needs to be resized",
level=CommonVariables.WarningLevel)
encryption_result_phase = encrypt_inplace_without_seperate_header_file(passphrase_file=passphrase_file,
device_item=device_item,
disk_util=disk_util,
bek_util=bek_util,
status_prefix=status_prefix)
else:
encryption_result_phase = encrypt_inplace_with_seperate_header_file(passphrase_file=passphrase_file,
device_item=device_item,
disk_util=disk_util,
bek_util=bek_util,
status_prefix=status_prefix)
if encryption_result_phase == CommonVariables.EncryptionPhaseDone:
continue
else:
# exit this round and return the failed device item
return device_item
return None
def disable_encryption_all_in_place(passphrase_file, decryption_marker, disk_util):
"""
On success, returns None. Otherwise returns the crypt item for which decryption failed.
"""
logger.log(msg="executing disable_encryption_all_in_place")
device_items = disk_util.get_device_items(None)
crypt_items = disk_util.get_crypt_items()
msg = 'Decrypting {0} data volumes'.format(len(crypt_items))
logger.log(msg)
hutil.do_status_report(operation='DisableEncryption',
status=CommonVariables.extension_success_status,
status_code=str(CommonVariables.success),
message=msg)
for crypt_item_num, crypt_item in enumerate(crypt_items):
logger.log("processing crypt_item: " + str(crypt_item))
def raw_device_item_match(device_item):
sdx_device_name = "/dev/" + device_item.name
if crypt_item.dev_path.startswith(CommonVariables.disk_by_id_root):
return crypt_item.dev_path == disk_util.query_dev_id_path_by_sdx_path(sdx_device_name)
else:
return crypt_item.dev_path == sdx_device_name
def mapped_device_item_match(device_item):
return crypt_item.mapper_name == device_item.name
raw_device_item = next((d for d in device_items if raw_device_item_match(d)), None)
mapper_device_item = next((d for d in device_items if mapped_device_item_match(d)), None)
if not raw_device_item:
logger.log("raw device not found for crypt_item {0}".format(crypt_item))
return crypt_item
if not mapper_device_item:
logger.log("mapper device not found for crypt_item {0}".format(crypt_item))
return crypt_item
decryption_result_phase = None
status_prefix = "Decrypting data volume {0}/{1}".format(crypt_item_num + 1,
len(crypt_items))
if crypt_item.luks_header_path:
decryption_result_phase = decrypt_inplace_with_separate_header_file(passphrase_file=passphrase_file,
crypt_item=crypt_item,
raw_device_item=raw_device_item,
mapper_device_item=mapper_device_item,
disk_util=disk_util,
status_prefix=status_prefix)
else:
decryption_result_phase = decrypt_inplace_without_separate_header_file(passphrase_file=passphrase_file,
crypt_item=crypt_item,
raw_device_item=raw_device_item,
mapper_device_item=mapper_device_item,
disk_util=disk_util,
status_prefix=status_prefix)
if decryption_result_phase == CommonVariables.DecryptionPhaseDone:
disk_util.luks_close(crypt_item.mapper_name)
disk_util.remove_crypt_item(crypt_item)
disk_util.mount_all()
continue
else:
# decryption failed for a crypt_item, return the failed item to caller
return crypt_item
return None
def daemon_encrypt():
# Ensure the same configuration is executed only once
# If the previous enable failed, we do not have retry logic here.
# TODO Remount all
encryption_marker = EncryptionMarkConfig(logger, encryption_environment)
if encryption_marker.config_file_exists():
logger.log("encryption is marked.")
"""
search for the BEK volume, then mount it
"""
disk_util = DiskUtil(hutil, DistroPatcher, logger, encryption_environment)
encryption_config = EncryptionConfig(encryption_environment, logger)
bek_passphrase_file = None
"""
try to find the attached BEK volume and use its passphrase file to mount
the encrypted volumes; if the passphrase file is found, re-use it later.
"""
bek_util = BekUtil(disk_util, logger)
if encryption_config.config_file_exists():
bek_passphrase_file = bek_util.get_bek_passphrase_file(encryption_config)
if bek_passphrase_file is None:
hutil.do_exit(exit_code=0,
operation='EnableEncryption',
status=CommonVariables.extension_error_status,
code=CommonVariables.passphrase_file_not_found,
message='Passphrase file not found.')
executor = CommandExecutor(logger)
is_not_in_stripped_os = bool(executor.Execute("mountpoint /oldroot"))
volume_type = encryption_config.get_volume_type().lower()
if (volume_type == CommonVariables.VolumeTypeData.lower() or volume_type == CommonVariables.VolumeTypeAll.lower()) and \
is_not_in_stripped_os:
try:
while not daemon_encrypt_data_volumes(encryption_marker=encryption_marker,
encryption_config=encryption_config,
disk_util=disk_util,
bek_util=bek_util,
bek_passphrase_file=bek_passphrase_file):
logger.log("Calling daemon_encrypt_data_volumes again")
except Exception as e:
message = "Failed to encrypt data volumes with error: {0}, stack trace: {1}".format(e, traceback.format_exc())
logger.log(msg=message, level=CommonVariables.ErrorLevel)
hutil.do_exit(exit_code=0,
operation='EnableEncryptionDataVolumes',
status=CommonVariables.extension_error_status,
code=CommonVariables.encryption_failed,
message=message)
else:
hutil.do_status_report(operation='EnableEncryptionDataVolumes',
status=CommonVariables.extension_success_status,
status_code=str(CommonVariables.success),
message='Encryption succeeded for data volumes')
if volume_type == CommonVariables.VolumeTypeOS.lower() or \
volume_type == CommonVariables.VolumeTypeAll.lower():
# import OSEncryption here instead of at the top because it relies
# on pre-req packages being installed (specifically, python-six on Ubuntu)
distro_name = DistroPatcher.distro_info[0]
distro_version = DistroPatcher.distro_info[1]
os_encryption = None
if ((distro_name == 'redhat' and distro_version == '7.3') and
(disk_util.is_os_disk_lvm() or os.path.exists('/volumes.lvm'))):
from oscrypto.rhel_72_lvm import RHEL72LVMEncryptionStateMachine
os_encryption = RHEL72LVMEncryptionStateMachine(hutil=hutil,
distro_patcher=DistroPatcher,
logger=logger,
encryption_environment=encryption_environment)
elif ((distro_name == 'centos' and distro_version == '7.3.1611') and
(disk_util.is_os_disk_lvm() or os.path.exists('/volumes.lvm'))):
from oscrypto.rhel_72_lvm import RHEL72LVMEncryptionStateMachine
os_encryption = RHEL72LVMEncryptionStateMachine(hutil=hutil,
distro_patcher=DistroPatcher,
logger=logger,
encryption_environment=encryption_environment)
elif ((distro_name == 'redhat' and distro_version == '7.2') or
(distro_name == 'redhat' and distro_version == '7.3') or
(distro_name == 'centos' and distro_version == '7.3.1611') or
(distro_name == 'centos' and distro_version == '7.2.1511')):
from oscrypto.rhel_72 import RHEL72EncryptionStateMachine
os_encryption = RHEL72EncryptionStateMachine(hutil=hutil,
distro_patcher=DistroPatcher,
logger=logger,
encryption_environment=encryption_environment)
elif distro_name == 'redhat' and distro_version == '6.8':
from oscrypto.rhel_68 import RHEL68EncryptionStateMachine
os_encryption = RHEL68EncryptionStateMachine(hutil=hutil,
distro_patcher=DistroPatcher,
logger=logger,
encryption_environment=encryption_environment)
elif distro_name == 'centos' and (distro_version == '6.8' or distro_version == '6.9'):
from oscrypto.centos_68 import CentOS68EncryptionStateMachine
os_encryption = CentOS68EncryptionStateMachine(hutil=hutil,
distro_patcher=DistroPatcher,
logger=logger,
encryption_environment=encryption_environment)
elif distro_name == 'Ubuntu' and distro_version == '16.04':
from oscrypto.ubuntu_1604 import Ubuntu1604EncryptionStateMachine
os_encryption = Ubuntu1604EncryptionStateMachine(hutil=hutil,
distro_patcher=DistroPatcher,
logger=logger,
encryption_environment=encryption_environment)
elif distro_name == 'Ubuntu' and distro_version == '14.04':
from oscrypto.ubuntu_1404 import Ubuntu1404EncryptionStateMachine
os_encryption = Ubuntu1404EncryptionStateMachine(hutil=hutil,
distro_patcher=DistroPatcher,
logger=logger,
encryption_environment=encryption_environment)
else:
message = "OS volume encryption is not supported on {0} {1}".format(distro_name,
distro_version)
logger.log(msg=message, level=CommonVariables.ErrorLevel)
hutil.do_exit(exit_code=0,
operation='EnableEncryptionOSVolume',
status=CommonVariables.extension_error_status,
code=CommonVariables.encryption_failed,
message=message)
try:
os_encryption.start_encryption()
if os_encryption.state != 'completed':
raise Exception("did not reach completed state")
else:
encryption_marker.clear_config()
except Exception as e:
message = "Failed to encrypt OS volume with error: {0}, stack trace: {1}, machine state: {2}".format(e,
traceback.format_exc(),
os_encryption.state)
logger.log(msg=message, level=CommonVariables.ErrorLevel)
hutil.do_exit(exit_code=0,
operation='EnableEncryptionOSVolume',
status=CommonVariables.extension_error_status,
code=CommonVariables.encryption_failed,
message=message)
message = ''
if volume_type == CommonVariables.VolumeTypeAll.lower():
message = 'Encryption succeeded for all volumes'
else:
message = 'Encryption succeeded for OS volume'
logger.log(msg=message)
hutil.do_status_report(operation='EnableEncryptionOSVolume',
status=CommonVariables.extension_success_status,
status_code=str(CommonVariables.success),
message=message)
def daemon_encrypt_data_volumes(encryption_marker, encryption_config, disk_util, bek_util, bek_passphrase_file):
try:
"""
check whether there's a scheduled encryption task
"""
mount_all_result = disk_util.mount_all()
if mount_all_result != CommonVariables.process_success:
logger.log(msg="mount all failed with code:{0}".format(mount_all_result),
level=CommonVariables.ErrorLevel)
"""
TODO: resume encryption after a sudden reboot.
Special handling is needed because a half-done device can be in an
error state (e.g. its file system header is missing), which is how
it can be identified.
"""
ongoing_item_config = OnGoingItemConfig(encryption_environment=encryption_environment, logger=logger)
if ongoing_item_config.config_file_exists():
logger.log("OngoingItemConfig exists.")
ongoing_item_config.load_value_from_file()
header_file_path = ongoing_item_config.get_header_file_path()
mount_point = ongoing_item_config.get_mount_point()
status_prefix = "Resuming encryption after reboot"
if not none_or_empty(mount_point):
logger.log("mount point is not empty {0}, trying to unmount it first.".format(mount_point))
umount_status_code = disk_util.umount(mount_point)
logger.log("unmount return code is {0}".format(umount_status_code))
if none_or_empty(header_file_path):
encryption_result_phase = encrypt_inplace_without_seperate_header_file(passphrase_file=bek_passphrase_file,
device_item=None,
disk_util=disk_util,
bek_util=bek_util,
status_prefix=status_prefix,
ongoing_item_config=ongoing_item_config)
#TODO mount it back when shrink failed
else:
encryption_result_phase = encrypt_inplace_with_seperate_header_file(passphrase_file=bek_passphrase_file,
device_item=None,
disk_util=disk_util,
bek_util=bek_util,
status_prefix=status_prefix,
ongoing_item_config=ongoing_item_config)
"""
if the resuming failed, we should fail.
"""
if encryption_result_phase != CommonVariables.EncryptionPhaseDone:
original_dev_path = ongoing_item_config.get_original_dev_path()
message = 'EnableEncryption: resuming encryption for {0} failed'.format(original_dev_path)
raise Exception(message)
else:
ongoing_item_config.clear_config()
else:
logger.log("OngoingItemConfig does not exist")
failed_item = None
if not encryption_marker.config_file_exists():
logger.log("Data volumes are not marked for encryption")
bek_util.umount_azure_passhprase(encryption_config)
return True
if encryption_marker.get_current_command() == CommonVariables.EnableEncryption:
failed_item = enable_encryption_all_in_place(passphrase_file=bek_passphrase_file,
encryption_marker=encryption_marker,
disk_util=disk_util,
bek_util=bek_util)
elif encryption_marker.get_current_command() == CommonVariables.EnableEncryptionFormat:
disk_format_query = encryption_marker.get_encryption_disk_format_query()
failed_item = enable_encryption_format(passphrase=bek_passphrase_file,
disk_format_query=disk_format_query,
disk_util=disk_util)
else:
message = "Command {0} not supported.".format(encryption_marker.get_current_command())
logger.log(msg=message, level=CommonVariables.ErrorLevel)
raise Exception(message)
for tmpvol in filter(lambda x: 'resource-part' in x.azure_name, disk_util.get_device_items(None)):
if not tmpvol.mount_point:
continue
proc_comm = ProcessCommunicator()
executor = CommandExecutor(logger)
command = 'find {0} -type f -print | grep -v swapfile | grep -v DATALOSS_WARNING_README.txt | wc -l'.format(tmpvol.mount_point)
executor.ExecuteInBash(command, communicator=proc_comm)
if int(proc_comm.stdout) != 0:
logger.log("Resource disk mounted at {0} is not empty".format(tmpvol.mount_point))
continue
disk_format_query = '{"dev_path":"/dev/DEVNAME","name":"MOUNTPOINT","file_system":"FILESYSTEM"}'
disk_format_query = disk_format_query.replace('DEVNAME', tmpvol.name)
disk_format_query = disk_format_query.replace('MOUNTPOINT', tmpvol.mount_point)
disk_format_query = disk_format_query.replace('FILESYSTEM', tmpvol.file_system)
logger.log("Encrypting resource disk {0}".format(tmpvol.azure_name))
failed_item = enable_encryption_format(passphrase=bek_passphrase_file,
disk_format_query=disk_format_query,
disk_util=disk_util,
force=True)
if failed_item:
message = 'Encryption failed for {0}'.format(failed_item)
raise Exception(message)
else:
bek_util.umount_azure_passhprase(encryption_config)
return True
except Exception:
raise
def daemon_decrypt():
decryption_marker = DecryptionMarkConfig(logger, encryption_environment)
if not decryption_marker.config_file_exists():
logger.log("decryption is not marked.")
return
logger.log("decryption is marked.")
# mount and then unmount all the encrypted items
# in order to set-up all the mapper devices
# we don't need the BEK since all the drives that need decryption were made cleartext-key unlockable by first call to disable
disk_util = DiskUtil(hutil, DistroPatcher, logger, encryption_environment)
encryption_config = EncryptionConfig(encryption_environment, logger)
mount_encrypted_disks(disk_util=disk_util,
bek_util=None,
encryption_config=encryption_config,
passphrase_file=None)
disk_util.umount_all_crypt_items()
# at this point all the /dev/mapper/* crypt devices should be open
ongoing_item_config = OnGoingItemConfig(encryption_environment=encryption_environment, logger=logger)
if ongoing_item_config.config_file_exists():
logger.log("ongoing item config exists.")
else:
logger.log("ongoing item config does not exist.")
failed_item = None
if decryption_marker.get_current_command() == CommonVariables.DisableEncryption:
failed_item = disable_encryption_all_in_place(passphrase_file=None,
decryption_marker=decryption_marker,
disk_util=disk_util)
else:
raise Exception("command {0} not supported.".format(decryption_marker.get_current_command()))
if failed_item is not None:
hutil.do_exit(exit_code=0,
operation='Disable',
status=CommonVariables.extension_error_status,
code=CommonVariables.encryption_failed,
message='Decryption failed for {0}'.format(failed_item))
else:
encryption_config.clear_config()
logger.log("clearing the decryption mark after successful decryption")
decryption_marker.clear_config()
hutil.do_exit(exit_code=0,
operation='Disable',
status=CommonVariables.extension_success_status,
code=str(CommonVariables.success),
message='Decryption succeeded')
def daemon():
hutil.find_last_nonquery_operation = True
hutil.do_parse_context('Executing')
lock = ProcessLock(logger, encryption_environment.daemon_lock_file_path)
if not lock.try_lock():
logger.log("there's another daemon running, please wait it to exit.", level = CommonVariables.WarningLevel)
return
logger.log("daemon lock acquired sucessfully.")
logger.log("waiting for 2 minutes before continuing the daemon")
time.sleep(120)
decryption_marker = DecryptionMarkConfig(logger, encryption_environment)
if decryption_marker.config_file_exists():
try:
daemon_decrypt()
except Exception as e:
error_msg = ("Failed to disable the extension with error: {0}, stack trace: {1}".format(e, traceback.format_exc()))
logger.log(msg=error_msg,
level=CommonVariables.ErrorLevel)
hutil.do_exit(exit_code=0,
operation='Disable',
status=CommonVariables.extension_error_status,
code=str(CommonVariables.encryption_failed),
message=error_msg)
finally:
lock.release_lock()
logger.log("returned to daemon")
logger.log("exiting daemon")
return
try:
daemon_encrypt()
except Exception as e:
# mount the file systems back.
error_msg = ("Failed to enable the extension with error: {0}, stack trace: {1}".format(e, traceback.format_exc()))
logger.log(msg=error_msg,
level=CommonVariables.ErrorLevel)
hutil.do_exit(exit_code=0,
operation='Enable',
status=CommonVariables.extension_error_status,
code=str(CommonVariables.encryption_failed),
message=error_msg)
else:
encryption_marker = EncryptionMarkConfig(logger, encryption_environment)
# TODO: do not remove it; back it up instead.
logger.log("returned to daemon successfully after encryption")
logger.log("clearing the encryption mark.")
encryption_marker.clear_config()
hutil.redo_current_status()
finally:
lock.release_lock()
logger.log("exiting daemon")
def start_daemon(operation):
args = [os.path.join(os.getcwd(), __file__), "-daemon"]
logger.log("start_daemon with args: {0}".format(args))
# This process starts a new background process by calling
#   handle.py -daemon
# to run the script, and then exits immediately.
# Redirect stdout and stderr to /dev/null; otherwise the daemon process
# throws a broken pipe exception when the parent process exits.
devnull = open(os.devnull, 'w')
child = subprocess.Popen(args, stdout=devnull, stderr=devnull)
encryption_config = EncryptionConfig(encryption_environment, logger)
if encryption_config.config_file_exists():
hutil.do_exit(exit_code=0,
operation=operation,
status=CommonVariables.extension_success_status,
code=str(CommonVariables.success),
message=encryption_config.get_secret_id())
else:
hutil.do_exit(exit_code=0,
operation=operation,
status=CommonVariables.extension_error_status,
code=str(CommonVariables.encryption_failed),
message='Encryption config not found.')
if __name__ == '__main__':
main()
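`start_daemon` above detaches its child by redirecting stdout and stderr to `/dev/null`, so the parent can exit without the child hitting broken-pipe errors. A self-contained sketch of that spawn pattern (the `-c` payload is a stand-in for `handle.py -daemon`):

```python
import os
import subprocess
import sys

# Spawn a child whose output goes to /dev/null; the parent could now
# exit immediately without the child raising BrokenPipeError on write.
with open(os.devnull, "w") as devnull:
    child = subprocess.Popen(
        [sys.executable, "-c", "print('background work')"],
        stdout=devnull, stderr=devnull)
child.wait()
print(child.returncode)  # 0 on success
```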
# Source: src/config/s3_cloudfront/env.py (eshiji/troposphere-templates, Apache-2.0)
import os
def get_env(var, default=None):
return os.environ.get(var) or default
# Check if environment variable PYTHON_ENV is set
if "PYTHON_ENV" in os.environ:
pass
config = {
'env': get_env('PYTHON_ENV')
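`os.environ.get` makes the lookup safe when the variable is unset (direct indexing raises `KeyError`); a minimal sketch of the lookup-with-default pattern:

```python
import os

def get_env(var, default=None):
    # .get returns None when the variable is unset, so the fallback applies
    return os.environ.get(var) or default

os.environ.pop("PYTHON_ENV", None)
print(get_env("PYTHON_ENV", "development"))  # development
os.environ["PYTHON_ENV"] = "production"
print(get_env("PYTHON_ENV", "development"))  # production
```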
}

# Source: pocean/dsg/__init__.py (lucmehl/pocean-core, MIT)
#!python
# coding=utf-8
# Profile
from .profile.im import IncompleteMultidimensionalProfile
from .profile.om import OrthogonalMultidimensionalProfile
# Trajectory
from .trajectory.cr import ContiguousRaggedTrajectory
from .trajectory.ir import IndexedRaggedTrajectory
from .trajectory.im import IncompleteMultidimensionalTrajectory
# TrajectoryProfile
from .trajectoryProfile.cr import ContiguousRaggedTrajectoryProfile
# Timeseries
from .timeseries.cr import ContiguousRaggedTimeseries
from .timeseries.ir import IndexedRaggedTimeseries
from .timeseries.im import IncompleteMultidimensionalTimeseries
from .timeseries.om import OrthogonalMultidimensionalTimeseries
# TimeseriesProfile
from .timeseriesProfile.r import RaggedTimeseriesProfile
from .timeseriesProfile.im import IncompleteMultidimensionalTimeseriesProfile
from .timeseriesProfile.om import OrthogonalMultidimensionalTimeseriesProfile
__all__ = [
'IncompleteMultidimensionalProfile',
'OrthogonalMultidimensionalProfile',
'ContiguousRaggedTrajectory',
'IndexedRaggedTrajectory',
'IncompleteMultidimensionalTrajectory',
'ContiguousRaggedTrajectoryProfile',
'ContiguousRaggedTimeseries',
'IndexedRaggedTimeseries',
'IncompleteMultidimensionalTimeseries',
'OrthogonalMultidimensionalTimeseries',
'RaggedTimeseriesProfile',
'IncompleteMultidimensionalTimeseriesProfile',
'OrthogonalMultidimensionalTimeseriesProfile',
]
# Source: auto_derby/jobs/champions_meeting.py (Splendent/auto-derby, MIT)
# -*- coding=UTF-8 -*-
# pyright: strict
import time
from .. import action, templates
def champions_meeting():
while True:
tmpl, pos = action.wait_image(
templates.CONNECTING,
templates.RETRY_BUTTON,
templates.GREEN_NEXT_BUTTON,
templates.SKIP_BUTTON,
templates.RACE_START_BUTTON,
templates.RACE_CONFIRM_BUTTON,
templates.CHAMPIONS_MEETING_ENTRY_BUTTON_DISABLED,
templates.CHAMPIONS_MEETING_ENTRY_BUTTON,
templates.CHAMPIONS_MEETING_CONFIRM_TITLE,
templates.CHAMPIONS_MEETING_REGISTER_BUTTON,
templates.CHAMPIONS_MEETING_RACE_BUTTON,
templates.CHAMPIONS_MEETING_REWARD_BUTTON,
templates.LEGEND_RACE_START_BUTTON,
)
name = tmpl.name
if name == templates.CONNECTING:
time.sleep(1)
elif name == templates.CHAMPIONS_MEETING_ENTRY_BUTTON_DISABLED:
exit(0)
elif name == templates.CHAMPIONS_MEETING_CONFIRM_TITLE:
time.sleep(1)
if not action.count_image(templates.CHAMPIONS_MEETING_USING_TICKET):
exit(0)
action.wait_click_image(templates.GREEN_OK_BUTTON)
else:
action.click(pos)
# Source: pandas_funcs/get_group_members.py (SecTraversl/Toolbox_Python_3.8, MIT)
# %%
#######################################
def get_group_members(group_name=None):
"""Returns a DataFrame containing the local computer's groups, GID, and group members.
Examples:
>>> get_group_members()\n
0 1 2 3\n
0 root x 0 []\n
1 daemon x 1 []\n
2 bin x 2 []\n
3 sys x 3 []\n
4 adm x 4 [syslog, pengwin]\n
.. ... .. ... ...\n
71 systemd-coredump x 999 []\n
72 pengwin x 1000 []\n
73 lightdm x 133 []\n
74 nopasswdlogin x 134 []\n
75 testuser x 1001 []\n
Args:
group_name (str, optional): Reference the name of a particular group. Defaults to None.
Returns:
pandas.core.frame.DataFrame: Returns a pandas DataFrame with the group information.
"""
import grp
import pandas as pd
all_groups = grp.getgrall()
if group_name:
    all_groups = [g for g in all_groups if g.gr_name == group_name]
df = pd.DataFrame(all_groups)
return df
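For context, the underlying `grp.getgrall()` call (Unix-only) already returns named tuples, which is why `pd.DataFrame` can tabulate them directly; a pandas-free sketch:

```python
import grp

groups = grp.getgrall()
# each entry behaves like the tuple (gr_name, gr_passwd, gr_gid, gr_mem)
first = groups[0]
print(first.gr_name, first.gr_gid, first.gr_mem)
```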
# Source: backend/scoreboard/models.py (progteam/parrot, BSD-2-Clause)
"""
backend/scoreboard/models.py
Scoreboard data models
"""
from django.contrib import admin
from django.db import models
class KattisHandle(models.Model):
"""Kattis handle can be subscribed or unsubscribed
- it has many Kattis scores
- TODO: it (may) belongs to a user
"""
handle = models.CharField(max_length=50)
subscribed = models.BooleanField(default=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def add_score(self, score):
"""Add a score snapshot for the current Kattis handle
To actually perform the query, save() needs to be called. Read more at
https://docs.djangoproject.com/en/2.1/ref/models/querysets/
We do not execute a query every when this function is called, or it
will be too slow (when there're many add_score calls). Instead, we
should create the objects in memory, and use bulk-create:
https://docs.djangoproject.com/en/2.1/ref/models/querysets/#bulk-create
"""
return KattisScore(kattis_handle=self, score=score)
def scores(self):
"""Return all the score snapshots for the current kattis handle
"""
return KattisScore.objects.filter(kattis_handle=self)
def __str__(self):
return '%s' % self.handle
class KattisScore(models.Model):
""" A Kattis score snapshot, obtained by scrapping Kattis Website
- it belongs to a Kattis handle
- it shall not be modified once created
"""
kattis_handle = models.ForeignKey(KattisHandle, on_delete=models.CASCADE)
score = models.FloatField(editable=False)
created_at = models.DateTimeField(auto_now_add=True)
def __str__(self):
return '[%s] %s: %s' % (self.created_at, self.kattis_handle, self.score)
class Meta:
indexes = [
models.Index(fields=['created_at']),
]
# Allow admins to add KattisHandle
admin.site.register(KattisHandle)
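The `add_score` docstring above describes creating objects in memory and persisting them with a single `bulk_create` call. A framework-free sketch of that batching idea (`FakeTable` is a hypothetical stand-in for a Django manager):

```python
class FakeTable:
    """Stand-in for a database table that counts write round-trips."""
    def __init__(self):
        self.rows, self.write_calls = [], 0

    def bulk_create(self, objs):
        # one round-trip persists the whole batch
        self.rows.extend(objs)
        self.write_calls += 1

table = FakeTable()
pending = [{"handle": "alice", "score": s} for s in (10.0, 12.5, 13.0)]
table.bulk_create(pending)
print(table.write_calls, len(table.rows))  # 1 3
```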
# Source: shared_libs/message/message/enc.py (ux00c6/message-security-workshop, BSD-3-Clause)
"""Encryption functions."""
import os
import secrets
from cryptography import x509
from jwcrypto.jwe import JWE
from jwcrypto.jwk import JWK
from .util import Util
class Enc:
@classmethod
def encrypt(cls, path_to_certificate, message):
"""Return JWE.
Args:
path_to_certificate (str): Path to certificate PEM file.
message (str): Message contents.
Returns:
str: Encrypted message, as a serialized JWE.
"""
pubkey_pyca = x509.load_pem_x509_certificate(Util.get_file_contents(path_to_certificate)).public_key()
public_key = JWK()
public_key.import_from_pyca(pubkey_pyca)
protected_header = {"alg": "RSA-OAEP-256",
"enc": "A256CBC-HS512",
"typ": "JWE"}
jwetoken = JWE(message.encode("utf-8"), recipient=public_key, protected=protected_header)
return jwetoken.serialize()
@classmethod
def decrypt(cls, path_to_private_key, encrypted_message):
"""Extract and return a message from an encrypted object.
Args:
path_to_private_key (str): Absolute path to the private key PEM file.
encrypted_message (str): Encrypted message, as a serialized JWE.
Returns:
str: Decrypted message.
"""
privkey_pem = Util.get_file_contents(path_to_private_key)
private_key = JWK()
private_key.import_from_pem(privkey_pem)
token = JWE()
token.deserialize(encrypted_message, key=private_key)
return token.payload.decode()
# Source: var/spack/repos/builtin/packages/perl-module-runtime/package.py (xiki-tempula/spack, ECL-2.0/Apache-2.0/MIT)
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class PerlModuleRuntime(PerlPackage):
"""Runtime module handling"""
homepage = "http://search.cpan.org/~zefram/Module-Runtime/lib/Module/Runtime.pm"
url = "http://search.cpan.org/CPAN/authors/id/Z/ZE/ZEFRAM/Module-Runtime-0.016.tar.gz"
version('0.016', sha256='68302ec646833547d410be28e09676db75006f4aa58a11f3bdb44ffe99f0f024')
depends_on('perl-module-build', type='build')
# Source: AAAI/Interpretability/attention_sparsity_comparison/codes_initial_comparison/scripts/zeroth_layer_softmax.py (lnpandey/DL_explore_synth_data, MIT)
# -*- coding: utf-8 -*-
"""zeroth_layer_softmax.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1FrUY7XuTFmC3I7jJXNJ9pI-Z8tdyk03r
"""
# path = '.../' # change to save directory
# !pip install sparsemax # uncomment if sparsemax is not installed
# Commented out IPython magic to ensure Python compatibility.
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from sparsemax import Sparsemax
from matplotlib import pyplot as plt
# %matplotlib inline
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
"""# Generate dataset"""
np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [4,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [5.5,6],cov=[[0.01,0],[0,0.01]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [4.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [3,3.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [2.5,5.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [3.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [5.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [7,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [6.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [5,3],cov=[[0.01,0],[0,0.01]],size=sum(idx[9]))
color = ['#1F77B4','orange', 'g','brown']
name = [1,2,3,0]
for i in range(10):
if i==3:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[3],label="D_"+str(name[i]))
elif i>=4:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[3])
else:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[i],label="D_"+str(name[i]))
plt.legend()
desired_num = 6000
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(a)
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
"""# load mosaic data"""
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label,fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
batch = 250
msd1 = MosaicDataset(mosaic_list_of_images[0:3000], mosaic_label[0:3000] , fore_idx[0:3000])
train_loader = DataLoader( msd1 ,batch_size= batch ,shuffle=True)
batch = 250
msd2 = MosaicDataset(mosaic_list_of_images[3000:6000], mosaic_label[3000:6000] , fore_idx[3000:6000])
test_loader = DataLoader( msd2 ,batch_size= batch ,shuffle=True)
"""# models"""
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,50, bias=False) #,self.output)
self.linear2 = nn.Linear(50,50 , bias=False)
self.linear3 = nn.Linear(50,self.output, bias=False)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.xavier_normal_(self.linear3.weight)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,50], dtype=torch.float64) # number of features of output
features = torch.zeros([batch,self.K,50],dtype=torch.float64)
x,y = x.to(device),y.to(device)
features = features.to(device)
for i in range(self.K):
alp,ftrs = self.helper(z[:,i] ) # self.d*i:self.d*i+self.d
x[:,i] = alp[:,0]
features[:,i] = ftrs
x = F.softmax(x,dim=1) # alphas
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],features[:,i]) # self.d*i:self.d*i+self.d
return y , x
def helper(self,x):
x1 = x
x = self.linear1(x)
x = F.relu(x)
x = self.linear2(x)
x = F.relu(x)
x = self.linear3(x)
return x,x1
fc = Focus_deep(2,1,9,2).double()
fc = fc.to(device)
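The forward pass above scores each of the K patches, softmaxes the scores, and averages the patch features with the resulting weights. A numpy sketch of that weighting (K=3 and the feature vectors are assumptions for illustration):

```python
import numpy as np

scores = np.array([2.0, 0.5, -1.0])       # one focus score per patch (K=3)
alphas = np.exp(scores - scores.max())    # numerically stable softmax
alphas /= alphas.sum()
features = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0]])         # K x feature_dim patch features
avg = (alphas[:, None] * features).sum(axis=0)
print(alphas.round(3), avg.round(3))
```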
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,50)
#self.linear2 = nn.Linear(6,12)
self.linear2 = nn.Linear(50,self.output)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear2(x)
return x
criterion = nn.CrossEntropyLoss()
def calculate_attn_loss(dataloader,what,where,criter):
what.eval()
where.eval()
r_loss = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to(device),labels.to(device)
avg,alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
loss = criter(outputs,labels)
r_loss += loss.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/(i+1),analysis
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
# value>0.01
sparsity_val = np.sum(np.sum(alphas>0.01,axis=1))
# simplex distance
argmax_index = np.argmax(alphas,axis=1)
simplex_pt = np.zeros(alphas.shape)
simplex_pt[np.arange(argmax_index.size),argmax_index] = 1
shortest_distance_simplex = np.sum(np.sqrt(np.sum((alphas-simplex_pt)**2,axis=1)))
# entropy
#entropy = np.nansum((-alphas*np.log2(alphas)).sum(axis=1))
entropy = np.sum(np.nansum(-alphas*np.log2(alphas),axis=1))
return [ftpt,ffpt,ftpf,ffpf,sparsity_val,shortest_distance_simplex,entropy]#,amth,alth]
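The three attention metrics computed above — count of weights over 0.01, distance to the nearest simplex vertex, and entropy — can be checked on a tiny example (the two alpha rows are illustrative):

```python
import numpy as np

alphas = np.array([[0.90, 0.05, 0.05],    # peaked attention
                   [0.34, 0.33, 0.33]])   # near-uniform attention
sparsity_val = np.sum(np.sum(alphas > 0.01, axis=1))
argmax_index = np.argmax(alphas, axis=1)
simplex_pt = np.zeros(alphas.shape)
simplex_pt[np.arange(argmax_index.size), argmax_index] = 1
distance = np.sum(np.sqrt(np.sum((alphas - simplex_pt) ** 2, axis=1)))
entropy = np.sum(np.nansum(-alphas * np.log2(alphas), axis=1))
print(sparsity_val)  # 6: every weight clears the 0.01 threshold
```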
"""# training"""
number_runs = 5
full_analysis =[]
FTPT_analysis = pd.DataFrame(columns = ["FTPT","FFPT", "FTPF","FFPF","sparsity_value","shortest distance smplx","Entropy"])
for n in range(number_runs):
print("--"*40)
# instantiate focus and classification Model
torch.manual_seed(n)
where = Focus_deep(2,1,9,2).double()
#where = where.double().to("cuda")
what = Classification_deep(50,3).double()
where = where.to(device)
what = what.to(device)
# instantiate optimizer
optimizer_where = optim.Adam(where.parameters(),lr =0.001)
optimizer_what = optim.Adam(what.parameters(), lr=0.001)
#criterion = nn.CrossEntropyLoss()
acti = []
analysis_data = []
loss_curi = []
epochs = 2500
# calculate zeroth epoch loss and FTPT values
running_loss ,anlys_data= calculate_attn_loss(train_loader,what,where,criterion)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to(device),labels.to(device)
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha = where(inputs)
outputs = what(avg)
loss = criterion(outputs,labels)
# print statistics
loss.backward()
optimizer_where.step()
optimizer_what.step()
running_loss += loss.item()
#break
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion)
analysis_data.append(anls_data)
if(epoch % 200==0):
print('epoch: [%d] loss: %.3f ' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
print('breaking in epoch: ', epoch)
break
print('Finished Training run ' +str(n))
#break
analysis_data = np.array(analysis_data)
FTPT_analysis.loc[n] = analysis_data[-1, :7] / 3000
full_analysis.append((epoch, analysis_data))
correct = 0
total = 0
with torch.no_grad():
    for data in test_loader:
        images, labels, _ = data
        images = images.double()
        images, labels = images.to(device), labels.to(device)
        avg, alpha = where(images)
        outputs = what(avg)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 test images: %f %%' % (100 * correct / total))
print(np.mean(np.array(FTPT_analysis),axis=0))
FTPT_analysis
FTPT_analysis[FTPT_analysis['FTPT']+FTPT_analysis['FFPT'] > 0.9 ]
print(np.mean(np.array(FTPT_analysis[FTPT_analysis['FTPT']+FTPT_analysis['FFPT'] > 0.9 ]),axis=0))
86.83+12.84
cnt = 1
for epoch, analysis_data in full_analysis:
    analysis_data = np.array(analysis_data)
    # print("="*20+"run ", cnt, "="*20)
    plt.figure(figsize=(6, 5))
    plt.plot(np.arange(0, epoch + 2, 1), analysis_data[:, 0] / 30, label="FTPT")
    plt.plot(np.arange(0, epoch + 2, 1), analysis_data[:, 1] / 30, label="FFPT")
    plt.plot(np.arange(0, epoch + 2, 1), analysis_data[:, 2] / 30, label="FTPF")
    plt.plot(np.arange(0, epoch + 2, 1), analysis_data[:, 3] / 30, label="FFPF")
    plt.title("Training trends for run " + str(cnt))
    plt.grid()
    # plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
    plt.legend()
    plt.xlabel("epochs", fontsize=14, fontweight='bold')
    plt.ylabel("percentage train data", fontsize=14, fontweight='bold')
    #plt.savefig(path + "run"+str(cnt)+".png", bbox_inches="tight")
    #plt.savefig(path + "run"+str(cnt)+".pdf", bbox_inches="tight")
    cnt += 1
#FTPT_analysis.to_csv(path+"synthetic_zeroth.csv",index=False)
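The training loop above breaks out of the epoch loop as soon as the per-epoch loss drops to 0.01 or below. That early-stopping pattern, shown in isolation with made-up loss values:

```python
# Toy loss curve standing in for loss_curi; the values are illustrative.
loss_curve = [2.3, 1.1, 0.4, 0.05, 0.009, 0.008]

stopped_at = None
for epoch, loss in enumerate(loss_curve):
    if loss <= 0.01:
        stopped_at = epoch   # first epoch whose loss clears the threshold
        break

print(stopped_at)  # → 4
```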
# === src/141/a.py | teitei-tk/AtCoder | Apache-2.0 ===
def main():
s = input()
weathers = ['Sunny', 'Cloudy', 'Rainy']
    length = len(weathers)
index = weathers.index(s) + 1
if index >= length:
index = 0
print(weathers[index])
if __name__ == "__main__":
main()
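The wrap-around in `main()` above (advance the index, reset to 0 when it runs off the end) can be written more directly with modulo arithmetic. A minimal sketch; the helper name `next_weather` is illustrative:

```python
def next_weather(s):
    # Cycle Sunny -> Cloudy -> Rainy -> Sunny using modulo indexing,
    # replacing the explicit if/reset above.
    weathers = ['Sunny', 'Cloudy', 'Rainy']
    return weathers[(weathers.index(s) + 1) % len(weathers)]

print(next_weather('Sunny'))  # → Cloudy
print(next_weather('Rainy'))  # → Sunny
```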
# === setup.py | thefreshuk/aws-python-sns-message-validator | MIT ===
#!/usr/bin/env python
import os.path
from setuptools import setup
ROOT = os.path.dirname(__file__)
setup(
version="0.1",
url="https://github.com/thefreshuk/aws-python-sns-message-validator",
name="aws_sns_validator",
    description=("Validate the integrity of Amazon SNS messages with support "
        "for use with Fake_SNS"),
long_description=open("README.md").read(),
author="Neil Hickman",
author_email="neil@thefreshuk.com",
packages=["aws_sns_validator"],
package_dir={"":"src"},
install_requires=["m2crypto"],
classifiers=[
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.3",
        "Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
]
)
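Adjacent string literals in Python concatenate implicitly, so a comma dropped between two entries in a list like `classifiers` silently merges them into one invalid entry instead of raising an error:

```python
classifiers = [
    "Programming Language :: Python :: 3.4"   # missing comma here...
    "Programming Language :: Python :: 3.5",  # ...fuses with this literal
]
print(len(classifiers))  # → 1
print(classifiers[0])
# → Programming Language :: Python :: 3.4Programming Language :: Python :: 3.5
```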
# === backend/users/urls.py | huanglianqi/ricite | MIT ===
from django.urls import path, include
from .views import (
UserRetrieveUpdateAPIView,
UserPasswordResetAPIView,
SubscribeEmailRetrieveUpdateAPIView,
)
urlpatterns = [
path(
'account/<str:username>/',
UserRetrieveUpdateAPIView.as_view()
),
path(
'subscribeEmail/<str:name>/',
SubscribeEmailRetrieveUpdateAPIView.as_view()
),
]
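The `<str:username>` and `<str:name>` converters in the routes above capture one URL segment and pass it to the view as a keyword argument; Django's `str` converter matches any non-empty text without a slash. Its matching behavior can be sketched with a plain regular expression, no Django required (shown for the `account/` route):

```python
import re

# Rough equivalent of path('account/<str:username>/', ...):
# Django's str converter matches [^/]+.
pattern = re.compile(r'^account/(?P<username>[^/]+)/$')

m = pattern.match('account/alice/')
print(m.group('username'))            # → alice
print(pattern.match('account/a/b/'))  # → None (str rejects slashes)
```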
# === Janus/python-base-unit_05/files/rmdir.test.py | voodoopeople42/Vproject | MIT ===
# -*- coding: utf-8 -*-
# rmdir.test.py
# To delete a directory, use the rmdir() function,
# which takes the path of the directory to be deleted:
import os
# path relative to the current script
os.rmdir("hello")
# absolute path
os.rmdir("c://somedir/hello")
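`os.rmdir` removes only empty directories and raises `OSError` otherwise. A self-contained sketch using a temporary directory:

```python
import os
import tempfile

base = tempfile.mkdtemp()
sub = os.path.join(base, "hello")

os.mkdir(sub)
os.rmdir(sub)                # empty directory: removed fine
print(os.path.exists(sub))   # → False

os.mkdir(sub)
open(os.path.join(sub, "f.txt"), "w").close()
try:
    os.rmdir(sub)            # non-empty directory: raises OSError
except OSError as e:
    print("refused:", type(e).__name__)
```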
# === Vault7/Lost-in-Translation/windows/Resources/Tasking/PyScripts/Lib/tasking/netstat.py | dendisuhubdy/grokmachine | BSD-3-Clause ===
import dsz
import os
import re
from task import *
class Netstat(Task):
    def __init__(self, file):
        Task.__init__(self, file, 'Netstat')

    def CreateCommandLine(self):
        return ['netstat -list']
TaskingOptions['_netstatTasking'] = Netstat

# === oauth_provider/admin.py | ovidioreyna/django-oauth-plus | BSD-3-Clause ===
from __future__ import absolute_import
from django.contrib import admin
from .models import Consumer, Scope, Token
class ScopeAdmin(admin.ModelAdmin):
pass
class ConsumerAdmin(admin.ModelAdmin):
raw_id_fields = ['user']
class TokenAdmin(admin.ModelAdmin):
raw_id_fields = ['user', 'consumer', 'scope']
admin.site.register(Scope, ScopeAdmin)
admin.site.register(Consumer, ConsumerAdmin)
admin.site.register(Token, TokenAdmin)
# === tf_summary_reader/preprocess/process_record_clean_empty_summary.py | howl-anderson/tf_summary_reader | MIT ===
from typing import List, Dict
def process_record_clean_empty_summary(record_list: List[Dict]) -> List[Dict]:
"""
remove empty summary record from list
:param record_list: list of record
:return: list of record
"""
processed_list = []
for record in record_list:
if not record["summary"]:
continue
processed_list.append(record)
return processed_list
if __name__ == "__main__":
data = [
{"step": 0, "wall_time": 1565338566.0, "summary": {}},
{"step": 0, "wall_time": 1565338566.6084208, "summary": {}},
{"step": 0, "wall_time": 1565338567.1908352, "summary": {}},
{"step": 0, "wall_time": 1565338617.0762944, "summary": {}},
{"step": 0, "wall_time": 1565338617.420546, "summary": {}},
{"step": 0, "wall_time": 1565338625.4887302, "summary": {}},
{"step": 1, "wall_time": 1565338664.3173296, "summary": {}},
{
"step": 1,
"wall_time": 1565338664.317573,
"summary": {
"acc": 0.003722084453329444,
"precision": 0.003722084453329444,
"recall": 0.007317073177546263,
"f1": 0.004934210795909166,
"correct_rate": 0.0,
"loss": 71.3306655883789,
},
},
{
"step": 101,
"wall_time": 1565338673.0802393,
"summary": {"global_step/sec": 11.410505294799805},
},
{
"step": 101,
"wall_time": 1565338673.080884,
"summary": {
"acc": 0.3448275923728943,
"precision": 0.14387211203575134,
"recall": 0.19194312393665314,
"f1": 0.164466992020607,
"correct_rate": 0.0078125,
"loss": 19.875246047973633,
},
},
]
result = process_record_clean_empty_summary(data)
print(result)
expected = [
{
"step": 1,
"wall_time": 1565338664.317573,
"summary": {
"acc": 0.003722084453329444,
"precision": 0.003722084453329444,
"recall": 0.007317073177546263,
"f1": 0.004934210795909166,
"correct_rate": 0.0,
"loss": 71.3306655883789,
},
},
{
"step": 101,
"wall_time": 1565338673.0802393,
"summary": {"global_step/sec": 11.410505294799805},
},
{
"step": 101,
"wall_time": 1565338673.080884,
"summary": {
"acc": 0.3448275923728943,
"precision": 0.14387211203575134,
"recall": 0.19194312393665314,
"f1": 0.164466992020607,
"correct_rate": 0.0078125,
"loss": 19.875246047973633,
},
},
]
    assert result == expected
# === utils/swift_build_support/swift_build_support/products/pythonkit.py | belkadan/swift | Apache-2.0 ===
# swift_build_support/products/pythonkit.py ---------------------*- python -*-
#
# This source file is part of the Swift.org open source project
#
# Copyright (c) 2014 - 2017 Apple Inc. and the Swift project authors
# Licensed under Apache License v2.0 with Runtime Library Exception
#
# See https://swift.org/LICENSE.txt for license information
# See https://swift.org/CONTRIBUTORS.txt for the list of Swift project authors
#
# ----------------------------------------------------------------------------
import os
from . import product
from .. import shell
from .. import targets
class PythonKit(product.Product):
@classmethod
def product_source_name(cls):
return "PythonKit"
@classmethod
def is_build_script_impl_product(cls):
return False
def should_build(self, host_target):
return True
def build(self, host_target):
toolchain_path = targets.toolchain_path(self.args.install_destdir,
self.args.install_prefix)
swiftc = os.path.join(toolchain_path, 'usr', 'bin', 'swiftc')
# FIXME: this is a workaround for CMake <3.16 which does not correctly
# generate the build rules if you are not in the build directory. As a
# result, we need to create the build tree before we can use it and
# change into it.
#
# NOTE: unfortunately, we do not know if the build is using Python
# 2.7 or Python 3.2+. In the latter, the `exist_ok` named parameter
# would alleviate some of this issue.
try:
os.makedirs(self.build_dir)
except OSError:
pass
with shell.pushd(self.build_dir):
shell.call([
self.toolchain.cmake,
'-G', 'Ninja',
'-D', 'BUILD_SHARED_LIBS=YES',
'-D', 'CMAKE_INSTALL_PREFIX={}/usr'.format(
self.install_toolchain_path()),
'-D', 'CMAKE_MAKE_PROGRAM={}'.format(self.toolchain.ninja),
'-D', 'CMAKE_Swift_COMPILER={}'.format(swiftc),
'-B', self.build_dir,
'-S', self.source_dir,
])
shell.call([
self.toolchain.cmake,
'--build', self.build_dir,
])
def should_test(self, host_target):
return self.args.test_pythonkit
def test(self, host_target):
pass
def should_install(self, host_target):
return self.args.install_pythonkit
def install(self, host_target):
shell.call([
self.toolchain.cmake,
'--build', self.build_dir,
'--target', 'install',
])
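The try/except around `os.makedirs` in `build()` above exists because, as its NOTE says, the interpreter may be Python 2.7; on Python 3.2+ the `exist_ok` keyword the comment mentions replaces the whole dance:

```python
import os
import tempfile

build_dir = os.path.join(tempfile.mkdtemp(), "build")

# exist_ok=True makes repeated creation a no-op instead of an OSError.
os.makedirs(build_dir, exist_ok=True)
os.makedirs(build_dir, exist_ok=True)  # second call does not raise

print(os.path.isdir(build_dir))  # → True
```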
# === books/PRML/PRML-master-Python/test/nn/nonlinear/tanh.py | iamfaith/DeepLearning | Apache-2.0 ===
import unittest
from prml import nn
class TestTanh(unittest.TestCase):
def test_tanh(self):
self.assertEqual(nn.tanh(0).value, 0)
if __name__ == '__main__':
unittest.main()
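The test above only checks `tanh(0) == 0`. The same function's basic identities can be verified numerically with the standard library, independent of the `prml` package:

```python
import math

# tanh(0) == 0, as the unittest above asserts
print(math.tanh(0.0))  # → 0.0

# tanh is odd (tanh(-x) == -tanh(x)) and bounded in (-1, 1)
for x in (0.5, 1.0, 2.0):
    assert math.isclose(math.tanh(-x), -math.tanh(x))
assert -1.0 < math.tanh(10.0) < 1.0
print("identities hold")
```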
# === InterviewDemo/Generic.py | janaka1984/PythonDemo | Unlicense ===
#list
list = [1, 2, 3, 4]
for i in list:
    print(i)
print([i * i for i in list])
print("------------")

#tuple
tuple = (1, 2, 3, 4)
for i in tuple:
    print(i)
print("------------")

#dictionary
dict = {"a": 1, "b": 2, "c": 3}
for key, val in dict.items():
    print("key {} val {}".format(key, val))
print("------------")

#set
set = {1, 2, 3}
for i in set:
    print(i)
# === safe_grid_agents/parsing/__init__.py | jvmancuso/safe-grid-agents | Apache-2.0 ===
"""Top-level module imports."""
core_config = "safe_grid_agents/parsing/core_parser_configs.yaml"
agent_config = "safe_grid_agents/parsing/agent_parser_configs.yaml"
env_config = "safe_grid_agents/parsing/env_parser_configs.yaml"
from safe_grid_agents.parsing.parse import prepare_parser, ENV_MAP, AGENT_MAP
__all__ = [
"core_config",
"agent_config",
"env_config",
"prepare_parser",
"ENV_MAP",
"AGENT_MAP",
]
# === tests/hikari/test_embeds.py | sabidib/hikari | MIT ===
# -*- coding: utf-8 -*-
# Copyright (c) 2020 Nekokatt
# Copyright (c) 2021 davfsa
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import mock
import pytest
from hikari import embeds
class TestEmbedResource:
@pytest.fixture()
def resource(self):
return embeds.EmbedResource(resource=mock.Mock())
def test_url(self, resource):
assert resource.url is resource.resource.url
def test_filename(self, resource):
assert resource.filename is resource.resource.filename
def test_stream(self, resource):
mock_executor = object()
assert resource.stream(executor=mock_executor, head_only=True) is resource.resource.stream.return_value
resource.resource.stream.assert_called_once_with(executor=mock_executor, head_only=True)
class TestEmbedResourceWithProxy:
@pytest.fixture()
def resource_with_proxy(self):
return embeds.EmbedResourceWithProxy(resource=mock.Mock(), proxy_resource=mock.Mock())
def test_proxy_url(self, resource_with_proxy):
assert resource_with_proxy.proxy_url is resource_with_proxy.proxy_resource.url
def test_proxy_url_when_resource_is_none(self, resource_with_proxy):
resource_with_proxy.proxy_resource = None
assert resource_with_proxy.proxy_url is None
def test_proxy_filename(self, resource_with_proxy):
assert resource_with_proxy.proxy_filename is resource_with_proxy.proxy_resource.filename
def test_proxy_filename_when_resource_is_none(self, resource_with_proxy):
resource_with_proxy.proxy_resource = None
assert resource_with_proxy.proxy_filename is None
# === commit.py | denizcandas/autodp | Apache-2.0 ===
class Commit:
def __init__(self):
""" A simple feature container for each git commit """
self.features = {}
def add(self, key, value):
if key in self.features:
raise Exception("Do not overwrite features")
self.features[key] = value
def remove(self, key):
if key in self.features:
return self.features.pop(key)
raise Exception(f"The commit object does not have the feature: {key}")
def get(self, key):
if key in self.features:
return self.features[key]
raise Exception(f"The commit object does not have the feature: {key}")
def get_all(self):
return self.features.copy()
def to_list(self):
tmp = self.features.copy()
tmp.pop("message", None)
tmp.pop("files", None)
return list(tmp.values())
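A short usage sketch of the container above; the class is re-declared in condensed form so the snippet runs standalone, and the feature names are illustrative:

```python
class Commit:
    """Condensed copy of the Commit class above, so this sketch runs standalone."""
    def __init__(self):
        self.features = {}

    def add(self, key, value):
        if key in self.features:
            raise Exception("Do not overwrite features")
        self.features[key] = value

    def to_list(self):
        tmp = self.features.copy()
        tmp.pop("message", None)   # "message" and "files" are excluded
        tmp.pop("files", None)     # from the feature vector
        return list(tmp.values())

c = Commit()
c.add("message", "fix: handle empty input")
c.add("files", ["a.py", "b.py"])
c.add("lines_added", 42)
print(c.to_list())  # → [42]
```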
# === monogusatools.py | Jim-Dev/BlenderAddons | MIT ===
####################################
# MonogusaTools
# v.1.0
# (c)isidourou 2013
####################################
#!BPY
import bpy
import random
from bpy.types import Menu, Panel
bl_info = {
"name": "Monogusa Tools",
"author": "isidourou",
"version": (1, 0),
"blender": (2, 65, 0),
"location": "View3D > Toolbar",
"description": "MonogusaTools",
"warning": "",
"wiki_url": "",
"tracker_url": "",
"category": 'CTNAME'}
atobj = None
def mode_interpret(emode):
if emode == 'PAINT_TEXTURE':
return 'TEXTURE_PAINT'
if emode == 'SCULPT':
return 'SCULPT'
if emode == 'PAINT_VERTEX':
return 'VERTEX_PAINT'
if emode == 'PAINT_WEIGHT':
return 'WEIGHT_PAINT'
if emode == 'OBJECT':
return 'OBJECT'
if emode == 'POSE':
return 'POSE'
	if emode in ('EDIT_MESH', 'EDIT_ARMATURE', 'EDIT_CURVE', 'EDIT_TEXT', 'EDIT_METABALL', 'EDIT_SURFACE'):
		return 'EDIT'
def check_active():
count = 0
slist = bpy.context.selected_objects
for i in slist:
count += 1
return count
def check_mode():
emode = bpy.context.mode
if emode != 'OBJECT':
bpy.ops.object.mode_set(mode='OBJECT')
return emode
# Menu in tools region
class MonogusaToolsPanel(bpy.types.Panel):
bl_label = "Monogusa Tools"
bl_space_type = "VIEW_3D"
bl_region_type = "TOOLS"
def draw(self, context):
layout = self.layout
#3D Cursor
col = layout.column(align=True)
col.label(text="3d cursor:")
row = col.row(align=True)
row.operator("to.selected", text="to Selected")
row.operator("to.cursor", text="to Cursor")
#select
col = layout.column(align=True)
col.label(text="Select:")
row = col.row(align=True)
row.operator("select.type", text="Type")
row.operator("select.group", text="Group")
row.operator("select.obdata", text="OBData")
row.operator("select.mat", text="Mat")
row = col.row(align=True)
row.operator("select.invert", text="Invert")
row.operator("select.all", text=" All")
row.operator("deselect.all", text="Deselect")
#execute
#col = layout.column(align=True)
col.label(text="Execute:")
row = col.row(align=True)
row.operator("hide.selected", text="Hide")
row.operator("unhide.all", text="Unhide")
row.operator("execute.delete", text="Delete")
#sendlayer layer
col.label(text="Move to Layer:")
row = col.row(align=True)
row.operator("sendlayer.l00",text=' ')
row.operator("sendlayer.l01",text=' ')
row.operator("sendlayer.l02",text=' ')
row.operator("sendlayer.l03",text=' ')
row.operator("sendlayer.l04",text=' ')
row.operator("sendlayer.l05",text=' ')
row.operator("sendlayer.l06",text=' ')
row.operator("sendlayer.l07",text=' ')
row.operator("sendlayer.l08",text=' ')
row.operator("sendlayer.l09",text=' ')
row = col.row(align=True)
row.operator("sendlayer.l10",text=' ')
row.operator("sendlayer.l11",text=' ')
row.operator("sendlayer.l12",text=' ')
row.operator("sendlayer.l13",text=' ')
row.operator("sendlayer.l14",text=' ')
row.operator("sendlayer.l15",text=' ')
row.operator("sendlayer.l16",text=' ')
row.operator("sendlayer.l17",text=' ')
row.operator("sendlayer.l18",text=' ')
row.operator("sendlayer.l19",text=' ')
#convert
col = layout.column(align=True)
col.label(text="Convert:")
row = col.row(align=True)
row.operator("convert.tomesh", text="to Mesh")
row.operator("convert.tocurve", text="to Curve")
#subdivide
col = layout.column(align=True)
col.label(text="Sub Divide:")
row = col.row(align=True)
row.operator("div.simple", text="Simple Divide")
row = col.row(align=True)
row.operator("div.smooth", text="Smooth Div")
row.operator("div.rand", text="Random Div")
row = col.row(align=False)
row.operator("ver.smooth", text="Smoothing Vertex / Points")
#add mirror modifire
col = layout.column(align=True)
col = layout.column(align=True)
col.label(text="Add Mirror Modifier:")
row = col.row(align=True)
row.operator("add.mmx", text="X")
row.operator("add.mmy", text="Y")
row.operator("add.mmz", text="Z")
row = col.row(align=True)
row.operator("add.mmmx", text="-X")
row.operator("add.mmmy", text="-Y")
row.operator("add.mmmz", text="-Z")
#add mirror modifire
col = layout.column(align=True)
col.label(text="Set Template Empty:")
row = col.row(align=True)
row.operator("temp.single", text="Single")
row.operator("temp.separate", text="3D Separate")
row.operator("temp.contact", text="3D Contact")
#---- main ------
#select
class SelectType(bpy.types.Operator):
bl_idname = "select.type"
bl_label = "SelectType"
def execute(self, context):
check_mode()
if check_active() == 0:
return{'FINISHED'}
bpy.ops.object.select_grouped(type='TYPE')
return{'FINISHED'}
class SelectGroup(bpy.types.Operator):
bl_idname = "select.group"
bl_label = "SelectGroup"
def execute(self, context):
check_mode()
if check_active() == 0:
return{'FINISHED'}
bpy.ops.object.select_grouped(type='GROUP')
return{'FINISHED'}
class SelectObjdata(bpy.types.Operator):
bl_idname = "select.obdata"
bl_label = "SelectObjdata"
def execute(self, context):
check_mode()
if check_active() == 0:
return{'FINISHED'}
bpy.ops.object.select_linked(type='OBDATA')
return{'FINISHED'}
class SelectMat(bpy.types.Operator):
bl_idname = "select.mat"
bl_label = "SelectMat"
def execute(self, context):
check_mode()
if check_active() == 0:
return{'FINISHED'}
bpy.ops.object.select_linked(type='MATERIAL')
return{'FINISHED'}
class SelectInvert(bpy.types.Operator):
bl_idname = "select.invert"
bl_label = "SelectInvert"
def execute(self, context):
cobj = bpy.context.object
if cobj == None:
return{'FINISHED'}
objtype = cobj.type
emode = bpy.context.mode
emode = mode_interpret(emode)
if objtype == 'MESH':
if emode == 'EDIT':
bpy.ops.mesh.select_all(action='INVERT')
if objtype == 'CURVE' or objtype == 'SURFACE':
if emode == 'EDIT':
bpy.ops.curve.select_all(action='INVERT')
if objtype == 'ARMATURE':
if emode == 'POSE':
bpy.ops.pose.select_all(action='INVERT')
if emode == 'EDIT':
bpy.ops.armature.select_all(action='INVERT')
if objtype == 'META':
if emode == 'EDIT':
bpy.ops.mball.select_all(action='INVERT')
if emode == 'OBJECT':
bpy.ops.object.select_all(action='INVERT')
return{'FINISHED'}
class SelectAll(bpy.types.Operator):
bl_idname = "select.all"
bl_label = "SelectAll"
def execute(self, context):
cobj = bpy.context.object
if cobj is None:
return{'FINISHED'}
objtype = cobj.type
emode = bpy.context.mode
emode = mode_interpret(emode)
if objtype == 'MESH':
if emode == 'EDIT':
bpy.ops.mesh.select_all(action='SELECT')
if objtype == 'CURVE' or objtype == 'SURFACE':
if emode == 'EDIT':
bpy.ops.curve.select_all(action='SELECT')
if objtype == 'ARMATURE':
if emode == 'POSE':
bpy.ops.pose.select_all(action='SELECT')
if emode == 'EDIT':
bpy.ops.armature.select_all(action='SELECT')
if objtype == 'META':
if emode == 'EDIT':
bpy.ops.mball.select_all(action='SELECT')
if emode == 'OBJECT':
bpy.ops.object.select_all(action='SELECT')
return{'FINISHED'}
class DeselectAll(bpy.types.Operator):
bl_idname = "deselect.all"
bl_label = "DeselectAll"
def execute(self, context):
cobj = bpy.context.object
if cobj is None:
return{'FINISHED'}
objtype = cobj.type
emode = bpy.context.mode
emode = mode_interpret(emode)
if objtype == 'MESH':
if emode == 'EDIT':
bpy.ops.mesh.select_all(action='DESELECT')
if objtype == 'CURVE' or objtype == 'SURFACE':
if emode == 'EDIT':
bpy.ops.curve.select_all(action='DESELECT')
if objtype == 'ARMATURE':
if emode == 'POSE':
bpy.ops.pose.select_all(action='DESELECT')
if emode == 'EDIT':
bpy.ops.armature.select_all(action='DESELECT')
if objtype == 'META':
if emode == 'EDIT':
bpy.ops.mball.select_all(action='DESELECT')
if emode == 'OBJECT':
bpy.ops.object.select_all(action='DESELECT')
return{'FINISHED'}
#execute
class HideSelected(bpy.types.Operator):
bl_idname = "hide.selected"
bl_label = "HideSelected"
def execute(self, context):
global atobj
cobj = bpy.context.object
if cobj is None:
return{'FINISHED'}
objtype = cobj.type
emode = bpy.context.mode
emode = mode_interpret(emode)
if objtype == 'MESH':
if emode == 'EDIT':
bpy.ops.mesh.hide(unselected=False)
if objtype == 'CURVE' or objtype == 'SURFACE':
if emode == 'EDIT':
bpy.ops.curve.hide(unselected=False)
if objtype == 'ARMATURE':
if emode == 'POSE':
bpy.ops.pose.hide(unselected=False)
if emode == 'EDIT':
bpy.ops.armature.hide(unselected=False)
if objtype == 'META':
if emode == 'EDIT':
bpy.ops.mball.hide_metaelems(unselected=False)
if emode == 'OBJECT':
bpy.ops.object.hide_view_set(unselected=False)
atobj = cobj
return{'FINISHED'}
class UnhideAll(bpy.types.Operator):
bl_idname = "unhide.all"
bl_label = "UnhideAll"
def execute(self, context):
global atobj
cobj = bpy.context.object
if cobj is None:
bpy.context.scene.objects.active = atobj
obj=bpy.context.object
obj.select = True
emode = bpy.context.mode
emode = mode_interpret(emode)
if emode == 'OBJECT':
#bpy.ops.object.select_all(action='DESELECT')
bpy.ops.object.hide_view_clear()
return{'FINISHED'}
objtype = bpy.context.object.type
if objtype == 'MESH':
if emode == 'EDIT':
bpy.ops.mesh.reveal()
if objtype == 'CURVE' or objtype == 'SURFACE':
if emode == 'EDIT':
bpy.ops.curve.reveal()
if objtype == 'ARMATURE':
if emode == 'POSE':
bpy.ops.pose.reveal()
if emode == 'EDIT':
bpy.ops.armature.reveal()
if objtype == 'META':
if emode == 'EDIT':
bpy.ops.mball.reveal_metaelems()
return{'FINISHED'}
class ExecuteDelete(bpy.types.Operator):
bl_idname = "execute.delete"
bl_label = "ExecuteDelete"
def execute(self, context):
emode = bpy.context.mode
emode = mode_interpret(emode)
if emode == 'OBJECT':
bpy.ops.object.delete(use_global=False)
return{'FINISHED'}
objtype = bpy.context.object.type
if objtype == 'MESH':
if emode == 'EDIT':
bpy.ops.mesh.delete()
if objtype == 'CURVE' or objtype == 'SURFACE':
if emode == 'EDIT':
bpy.ops.curve.delete()
if objtype == 'ARMATURE':
if emode == 'POSE':
bpy.ops.object.editmode_toggle()
bpy.ops.armature.delete()
bpy.ops.object.posemode_toggle()
if emode == 'EDIT':
bpy.ops.armature.delete()
if objtype == 'META':
if emode == 'EDIT':
bpy.ops.mball.delete_metaelems()
return{'FINISHED'}
#move to Layer
class Send00(bpy.types.Operator):
bl_idname = "sendlayer.l00"
bl_label = "Send00"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(True,False,False,False,False,False,False,False,False,False,
False,False,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send01(bpy.types.Operator):
bl_idname = "sendlayer.l01"
bl_label = "Send01"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,True,False,False,False,False,False,False,False,False,
False,False,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send02(bpy.types.Operator):
bl_idname = "sendlayer.l02"
bl_label = "Send02"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,True,False,False,False,False,False,False,False,
False,False,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send03(bpy.types.Operator):
bl_idname = "sendlayer.l03"
bl_label = "Send03"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,True,False,False,False,False,False,False,
False,False,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send04(bpy.types.Operator):
bl_idname = "sendlayer.l04"
bl_label = "Send04"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,True,False,False,False,False,False,
False,False,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send05(bpy.types.Operator):
bl_idname = "sendlayer.l05"
bl_label = "Send05"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,True,False,False,False,False,
False,False,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send06(bpy.types.Operator):
bl_idname = "sendlayer.l06"
bl_label = "Send06"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,True,False,False,False,
False,False,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send07(bpy.types.Operator):
bl_idname = "sendlayer.l07"
bl_label = "Send07"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,True,False,False,
False,False,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send08(bpy.types.Operator):
bl_idname = "sendlayer.l08"
bl_label = "Send08"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,True,False,
False,False,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send09(bpy.types.Operator):
bl_idname = "sendlayer.l09"
bl_label = "Send09"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,False,True,
False,False,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send10(bpy.types.Operator):
bl_idname = "sendlayer.l10"
bl_label = "Send10"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,False,False,
True,False,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send11(bpy.types.Operator):
bl_idname = "sendlayer.l11"
bl_label = "Send11"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,False,False,
False,True,False,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send12(bpy.types.Operator):
bl_idname = "sendlayer.l12"
bl_label = "Send12"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,False,False,
False,False,True,False,False,False,False,False,False,False))
return{'FINISHED'}
class Send13(bpy.types.Operator):
bl_idname = "sendlayer.l13"
bl_label = "Send13"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,False,False,
False,False,False,True,False,False,False,False,False,False))
return{'FINISHED'}
class Send14(bpy.types.Operator):
bl_idname = "sendlayer.l14"
bl_label = "Send14"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,False,False,
False,False,False,False,True,False,False,False,False,False))
return{'FINISHED'}
class Send15(bpy.types.Operator):
bl_idname = "sendlayer.l15"
bl_label = "Send15"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,False,False,
False,False,False,False,False,True,False,False,False,False))
return{'FINISHED'}
class Send16(bpy.types.Operator):
bl_idname = "sendlayer.l16"
bl_label = "Send16"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,False,False,
False,False,False,False,False,False,True,False,False,False))
return{'FINISHED'}
class Send17(bpy.types.Operator):
bl_idname = "sendlayer.l17"
bl_label = "Send17"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,False,False,
False,False,False,False,False,False,False,True,False,False))
return{'FINISHED'}
class Send18(bpy.types.Operator):
bl_idname = "sendlayer.l18"
bl_label = "Send18"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,False,False,
False,False,False,False,False,False,False,False,True,False))
return{'FINISHED'}
class Send19(bpy.types.Operator):
bl_idname = "sendlayer.l19"
bl_label = "Send19"
def execute(self, context):
check_mode()
bpy.ops.object.move_to_layer(
layers=(False,False,False,False,False,False,False,False,False,False,
False,False,False,False,False,False,False,False,False,True))
return{'FINISHED'}
#3D cursor
class ToSelected(bpy.types.Operator):
bl_idname = "to.selected"
bl_label = "ToSelected"
def execute(self, context):
bpy.ops.view3d.snap_cursor_to_selected()
return{'FINISHED'}
class ToCursor(bpy.types.Operator):
bl_idname = "to.cursor"
bl_label = "ToCursor"
def execute(self, context):
bpy.ops.view3d.snap_selected_to_cursor()
return{'FINISHED'}
#subdivide
class DivSimple(bpy.types.Operator):
bl_idname = "div.simple"
bl_label = "DivSimple"
def execute(self, context):
objtype = bpy.context.object.type
emode = bpy.context.mode
emode = mode_interpret(emode)
if objtype == 'MESH':
if emode != 'EDIT':
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.subdivide(smoothness=0)
if emode != 'EDIT':
bpy.ops.object.mode_set(mode=emode)
if objtype == 'ARMATURE':
if emode != 'EDIT':
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.armature.subdivide()
if emode != 'EDIT':
bpy.ops.object.mode_set(mode=emode)
if objtype == 'CURVE':
if emode != 'EDIT':
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.curve.subdivide()
if emode != 'EDIT':
bpy.ops.object.mode_set(mode=emode)
return{'FINISHED'}
class DivSmooth(bpy.types.Operator):
bl_idname = "div.smooth"
bl_label = "DivSmooth"
def execute(self, context):
objtype = bpy.context.object.type
emode = bpy.context.mode
emode = mode_interpret(emode)
if objtype == 'MESH':
if emode != 'EDIT':
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.subdivide(smoothness=1)
if emode != 'EDIT':
bpy.ops.object.mode_set(mode=emode)
return{'FINISHED'}
class DivRand(bpy.types.Operator):
bl_idname = "div.rand"
bl_label = "DivRand"
def execute(self, context):
objtype = bpy.context.object.type
emode = bpy.context.mode
emode = mode_interpret(emode)
if objtype == 'MESH':
if emode != 'EDIT':
bpy.ops.object.mode_set(mode='EDIT')
frc = random.random()*6
sed = int(random.random()*10)
bpy.ops.mesh.subdivide(smoothness=0, fractal=frc, seed=sed)
if emode != 'EDIT':
bpy.ops.object.mode_set(mode=emode)
return{'FINISHED'}
class VerSmooth(bpy.types.Operator):
bl_idname = "ver.smooth"
bl_label = "VerSmooth"
def execute(self, context):
objtype = bpy.context.object.type
emode = bpy.context.mode
emode = mode_interpret(emode)
if objtype == 'MESH':
if emode != 'EDIT':
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.vertices_smooth()
if emode != 'EDIT':
bpy.ops.object.mode_set(mode=emode)
if objtype == 'CURVE':
if emode != 'EDIT':
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.curve.smooth()
if emode != 'EDIT':
bpy.ops.object.mode_set(mode=emode)
return{'FINISHED'}
#convert
class ConverttoMesh(bpy.types.Operator):
bl_idname = "convert.tomesh"
bl_label = "ConverttoMesh"
def execute(self, context):
objtype = bpy.context.object.type
emode = bpy.context.mode
if emode == 'SCULPT' or 'PAINT' in emode:
return{'FINISHED'}
emode = mode_interpret(emode)
if objtype == 'CURVE' or objtype == 'FONT' or objtype == 'META' or objtype == 'SURFACE':
if emode != 'OBJECT':
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.convert(target='MESH')
bpy.ops.object.editmode_toggle()
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.object.editmode_toggle()
if emode != 'OBJECT':
bpy.ops.object.mode_set(mode=emode)
return{'FINISHED'}
class ConverttoCurve(bpy.types.Operator):
bl_idname = "convert.tocurve"
bl_label = "ConverttoCurve"
def execute(self, context):
objtype = bpy.context.object.type
emode = bpy.context.mode
if emode == 'SCULPT' or 'PAINT' in emode:
return{'FINISHED'}
emode = mode_interpret(emode)
if objtype == 'MESH' or objtype == 'FONT':
if emode != 'OBJECT':
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.convert(target='CURVE')
if emode != 'OBJECT':
bpy.ops.object.mode_set(mode=emode)
return{'FINISHED'}
#add mirror modifier
def add_mm(direction):
emode = bpy.context.mode
emode = mode_interpret(emode)
obj = bpy.ops.object
cobj = bpy.context.object
mesh = cobj.data
obj.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
obj.mode_set(mode='OBJECT')
#check whether a Mirror modifier already exists
#(the old loop read modifiers[ct] with ct never incremented,
#so it only ever inspected the first modifier)
exist = False
for mod in cobj.modifiers:
if 'Mirror' in mod.name:
exist = True
break
if exist == False:
obj.modifier_add(type='MIRROR')
if direction == 'X':
for vertex in mesh.vertices:
if (vertex.co.x < -0.000001):
vertex.select = True
cobj.modifiers["Mirror"].use_x = True
if exist == False:
cobj.modifiers["Mirror"].use_y = False
cobj.modifiers["Mirror"].use_z = False
if direction == '-X':
for vertex in mesh.vertices:
if (vertex.co.x > 0.000001):
vertex.select = True
cobj.modifiers["Mirror"].use_x = True
if exist == False:
cobj.modifiers["Mirror"].use_y = False
cobj.modifiers["Mirror"].use_z = False
if direction == 'Y':
for vertex in mesh.vertices:
if (vertex.co.y < -0.000001):
vertex.select = True
cobj.modifiers["Mirror"].use_y = True
if exist == False:
cobj.modifiers["Mirror"].use_x = False
cobj.modifiers["Mirror"].use_z = False
if direction == '-Y':
for vertex in mesh.vertices:
if (vertex.co.y > 0.000001):
vertex.select = True
cobj.modifiers["Mirror"].use_y = True
if exist == False:
cobj.modifiers["Mirror"].use_x = False
cobj.modifiers["Mirror"].use_z = False
if direction == 'Z':
for vertex in mesh.vertices:
if (vertex.co.z < -0.000001):
vertex.select = True
cobj.modifiers["Mirror"].use_z = True
if exist == False:
cobj.modifiers["Mirror"].use_x = False
cobj.modifiers["Mirror"].use_y = False
if direction == '-Z':
for vertex in mesh.vertices:
if (vertex.co.z > 0.000001):
vertex.select = True
cobj.modifiers["Mirror"].use_z = True
if exist == False:
cobj.modifiers["Mirror"].use_x = False
cobj.modifiers["Mirror"].use_y = False
cobj.modifiers["Mirror"].use_clip = True
obj.mode_set(mode='EDIT')
bpy.ops.mesh.delete(type='VERT')
bpy.ops.mesh.select_all(action='SELECT')
obj.mode_set(mode='OBJECT')
if emode != 'OBJECT':
bpy.ops.object.mode_set(mode=emode)
class AddMmx(bpy.types.Operator):
bl_idname = "add.mmx"
bl_label = "AddMmx"
def execute(self, context):
if bpy.context.object.type == 'MESH':
add_mm('X')
return{'FINISHED'}
class AddMm_x(bpy.types.Operator):
bl_idname = "add.mmmx"
bl_label = "AddMm_x"
def execute(self, context):
if bpy.context.object.type == 'MESH':
add_mm('-X')
return{'FINISHED'}
class AddMmy(bpy.types.Operator):
bl_idname = "add.mmy"
bl_label = "AddMmy"
def execute(self, context):
if bpy.context.object.type == 'MESH':
add_mm('Y')
return{'FINISHED'}
class AddMm_y(bpy.types.Operator):
bl_idname = "add.mmmy"
bl_label = "AddMm_y"
def execute(self, context):
if bpy.context.object.type == 'MESH':
add_mm('-Y')
return{'FINISHED'}
class AddMmz(bpy.types.Operator):
bl_idname = "add.mmz"
bl_label = "AddMmz"
def execute(self, context):
if bpy.context.object.type == 'MESH':
add_mm('Z')
return{'FINISHED'}
class AddMm_z(bpy.types.Operator):
bl_idname = "add.mmmz"
bl_label = "AddMm_z"
def execute(self, context):
if bpy.context.object.type == 'MESH':
add_mm('-Z')
return{'FINISHED'}
#set template empty
def objselect(objct,selection):
if (selection == 'ONLY'):
bpy.ops.object.select_all(action='DESELECT')
bpy.context.scene.objects.active = objct
objct.select = True
def makecenterempty():
bpy.ops.object.empty_add(type='PLAIN_AXES',
view_align=False,
location=(0, 0, 0))
centerempty = bpy.context.object
centerempty.name = 'CenterEmpty'
return centerempty
def makeempty(loc,rot):
bpy.ops.object.empty_add(type='PLAIN_AXES',
view_align=False,
location= loc,
rotation= rot
)
empty = bpy.context.object
empty.empty_draw_type = 'IMAGE'
empty.empty_draw_size = 10
empty.name = 'Template Empty'
empty.color[3] = 0.3 #Transparency
empty.show_x_ray = True
return empty
class TempSingle(bpy.types.Operator):
bl_idname = "temp.single"
bl_label = "TempSingle"
def execute(self, context):
pi = 3.14159265
pq = pi/2
#sn = bpy.context.scene
erot = [(pq, 0, 0),(pq, 0, pq),(0, 0, 0)]
eloc = [(-5, 0, -5),(0, -5, -5),(-5, -5, 0)]
cempty = makecenterempty()
bpy.ops.group.create(name="TemplateEmpty")
empty = makeempty(eloc[0],erot[0])
bpy.ops.object.group_link(group='TemplateEmpty')
objselect(cempty,'ADD')
bpy.ops.object.parent_set(type='OBJECT')
objselect(cempty,'ONLY')
bpy.ops.view3d.snap_selected_to_cursor()
return{'FINISHED'}
class TempSeparate(bpy.types.Operator):
bl_idname = "temp.separate"
bl_label = "TempSeparate"
def execute(self, context):
pi = 3.14159265
pq = pi/2
#sn = bpy.context.scene
erot = [(pq, 0, 0),(pq, 0, pq),(0, 0, 0)]
eloc = [(-5, 5, -5),(-5, -5, -5),(-5, -5, -5)]
cempty = makecenterempty()
bpy.ops.group.create(name="TemplateEmpty")
for i in range(3):
empty = makeempty(eloc[i],erot[i])
bpy.ops.object.group_link(group='TemplateEmpty')
objselect(cempty,'ADD')
bpy.ops.object.parent_set(type='OBJECT')
objselect(cempty,'ONLY')
bpy.ops.view3d.snap_selected_to_cursor()
return{'FINISHED'}
class TempContact(bpy.types.Operator):
bl_idname = "temp.contact"
bl_label = "TempContact"
def execute(self, context):
pi = 3.14159265
pq = pi/2
#sn = bpy.context.scene
erot = [(pq, 0, 0),(pq, 0, pq),(0, 0, 0)]
eloc = [(-5, 0, -5),(0, -5, -5),(-5, -5, 0)]
cempty = makecenterempty()
bpy.ops.group.create(name="TemplateEmpty")
for i in range(3):
empty = makeempty(eloc[i],erot[i])
bpy.ops.object.group_link(group='TemplateEmpty')
objselect(cempty,'ADD')
bpy.ops.object.parent_set(type='OBJECT')
objselect(cempty,'ONLY')
bpy.ops.view3d.snap_selected_to_cursor()
return{'FINISHED'}
# Registration
def register():
bpy.utils.register_class(MonogusaToolsPanel)
#select
bpy.utils.register_class(SelectType)
bpy.utils.register_class(SelectGroup)
bpy.utils.register_class(SelectObjdata)
bpy.utils.register_class(SelectMat)
bpy.utils.register_class(SelectInvert)
bpy.utils.register_class(SelectAll)
bpy.utils.register_class(DeselectAll)
#execute
bpy.utils.register_class(HideSelected)
bpy.utils.register_class(UnhideAll)
bpy.utils.register_class(ExecuteDelete)
#move to layer
bpy.utils.register_class(Send00)
bpy.utils.register_class(Send01)
bpy.utils.register_class(Send02)
bpy.utils.register_class(Send03)
bpy.utils.register_class(Send04)
bpy.utils.register_class(Send05)
bpy.utils.register_class(Send06)
bpy.utils.register_class(Send07)
bpy.utils.register_class(Send08)
bpy.utils.register_class(Send09)
bpy.utils.register_class(Send10)
bpy.utils.register_class(Send11)
bpy.utils.register_class(Send12)
bpy.utils.register_class(Send13)
bpy.utils.register_class(Send14)
bpy.utils.register_class(Send15)
bpy.utils.register_class(Send16)
bpy.utils.register_class(Send17)
bpy.utils.register_class(Send18)
bpy.utils.register_class(Send19)
#3d cursor
bpy.utils.register_class(ToSelected)
bpy.utils.register_class(ToCursor)
#subdivide
bpy.utils.register_class(DivSimple)
bpy.utils.register_class(DivSmooth)
bpy.utils.register_class(DivRand)
bpy.utils.register_class(VerSmooth)
bpy.utils.register_class(ConverttoMesh)
bpy.utils.register_class(ConverttoCurve)
bpy.utils.register_class(AddMmx)
bpy.utils.register_class(AddMm_x)
bpy.utils.register_class(AddMmy)
bpy.utils.register_class(AddMm_y)
bpy.utils.register_class(AddMmz)
bpy.utils.register_class(AddMm_z)
#set template empty
bpy.utils.register_class(TempSingle)
bpy.utils.register_class(TempSeparate)
bpy.utils.register_class(TempContact)
def unregister():
bpy.utils.unregister_class(MonogusaToolsPanel)
#select
bpy.utils.unregister_class(SelectType)
bpy.utils.unregister_class(SelectGroup)
bpy.utils.unregister_class(SelectObjdata)
bpy.utils.unregister_class(SelectMat)
bpy.utils.unregister_class(SelectInvert)
bpy.utils.unregister_class(SelectAll)
bpy.utils.unregister_class(DeselectAll)
#execute
bpy.utils.unregister_class(HideSelected)
bpy.utils.unregister_class(UnhideAll)
bpy.utils.unregister_class(ExecuteDelete)
#move to layer
bpy.utils.unregister_class(Send00)
bpy.utils.unregister_class(Send01)
bpy.utils.unregister_class(Send02)
bpy.utils.unregister_class(Send03)
bpy.utils.unregister_class(Send04)
bpy.utils.unregister_class(Send05)
bpy.utils.unregister_class(Send06)
bpy.utils.unregister_class(Send07)
bpy.utils.unregister_class(Send08)
bpy.utils.unregister_class(Send09)
bpy.utils.unregister_class(Send10)
bpy.utils.unregister_class(Send11)
bpy.utils.unregister_class(Send12)
bpy.utils.unregister_class(Send13)
bpy.utils.unregister_class(Send14)
bpy.utils.unregister_class(Send15)
bpy.utils.unregister_class(Send16)
bpy.utils.unregister_class(Send17)
bpy.utils.unregister_class(Send18)
bpy.utils.unregister_class(Send19)
#3d cursor
bpy.utils.unregister_class(ToSelected)
bpy.utils.unregister_class(ToCursor)
#subdivide
bpy.utils.unregister_class(DivSimple)
bpy.utils.unregister_class(DivSmooth)
bpy.utils.unregister_class(DivRand)
bpy.utils.unregister_class(VerSmooth)
bpy.utils.unregister_class(ConverttoMesh)
bpy.utils.unregister_class(ConverttoCurve)
bpy.utils.unregister_class(AddMmx)
bpy.utils.unregister_class(AddMm_x)
bpy.utils.unregister_class(AddMmy)
bpy.utils.unregister_class(AddMm_y)
bpy.utils.unregister_class(AddMmz)
bpy.utils.unregister_class(AddMm_z)
#set template empty
bpy.utils.unregister_class(TempSingle)
bpy.utils.unregister_class(TempSeparate)
bpy.utils.unregister_class(TempContact)
if __name__ == "__main__":
register()
d4d8ad8b746e8c91c357a1e252ef8c65ca2155d5 | 719 | py | Python | multiplex/migrations/0010_auto_20201010_2143.py | semani01/OnlineTicketBooking | f45e2ac2c843bebbd9f915b61c522e29907b4d5e | [
"MIT"
] | null | null | null | multiplex/migrations/0010_auto_20201010_2143.py | semani01/OnlineTicketBooking | f45e2ac2c843bebbd9f915b61c522e29907b4d5e | [
"MIT"
] | null | null | null | multiplex/migrations/0010_auto_20201010_2143.py | semani01/OnlineTicketBooking | f45e2ac2c843bebbd9f915b61c522e29907b4d5e | [
"MIT"
] | 1 | 2022-01-05T10:36:51.000Z | 2022-01-05T10:36:51.000Z | # Generated by Django 3.0.5 on 2020-10-10 16:13
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('multiplex', '0009_auto_20201010_1918'),
]
operations = [
migrations.AddField(
model_name='movie',
name='actor',
field=models.CharField(max_length=50, null=True),
),
migrations.AddField(
model_name='movie',
name='director',
field=models.CharField(max_length=50, null=True),
),
migrations.AddField(
model_name='movie',
name='video',
field=models.CharField(max_length=200, null=True),
),
]
| 24.793103 | 62 | 0.563282 | 74 | 719 | 5.351351 | 0.540541 | 0.136364 | 0.174242 | 0.204545 | 0.542929 | 0.469697 | 0.378788 | 0.378788 | 0.378788 | 0.378788 | 0 | 0.077393 | 0.317107 | 719 | 28 | 63 | 25.678571 | 0.729124 | 0.062587 | 0 | 0.5 | 1 | 0 | 0.096726 | 0.034226 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d4d988dea056ceacc773bc414f23acb6065203ac | 1,884 | py | Python | src/script/gen_cluster_size.py | guangyu-zhou/scRNA-Clustering | 814360bdc596596f84dcf1b585d323dfcd7b97a6 | [
"MIT"
] | 2 | 2019-07-07T05:30:13.000Z | 2019-07-19T04:46:53.000Z | src/script/gen_cluster_size.py | guangyu-zhou/scRNA-Clustering | 814360bdc596596f84dcf1b585d323dfcd7b97a6 | [
"MIT"
] | null | null | null | src/script/gen_cluster_size.py | guangyu-zhou/scRNA-Clustering | 814360bdc596596f84dcf1b585d323dfcd7b97a6 | [
"MIT"
] | 1 | 2018-08-31T11:32:44.000Z | 2018-08-31T11:32:44.000Z | import matplotlib as mpl
mpl.use('Agg')
# import matplotlib.pyplot as plt
import numpy as np
import math
import pylab as plt
from matplotlib.ticker import ScalarFormatter
# path = '/home/zgy_ucla_cs/data/dropSeq/'
path = '/home/zgy_ucla_cs/Research/DropSeq/data/reads_barcode/'
read_cluster_size = []
# with open(path + "E31_REP1_HKT73BGX2_cluster_other.fastq") as infile:
# with open(path + "E31_REP2_HHN7NBGX3_cluster_other.fastq") as infile:
# with open(path + "E31_REP3_HHNKFBGX3_cluster_other.fastq") as infile:
# l = 10000
# for line in infile:
# # l-=1
# # print line.count('@')
# size = line.count('@')
# if size <= 100:
# read_cluster_size.append(size)
# np.save(path + 'arr2_100', np.array(read_cluster_size))
# np.save(path + 'All_cluster_size', np.array(read_cluster_size))
read_cluster_size = np.load(path + 'E31_REP3_cell_group_size.npy')
read_cluster_size = read_cluster_size[np.where( read_cluster_size == 50)]
# read_cluster_size = read_cluster_size[np.where( read_cluster_size <= 60)]
print("total clusters:", len(read_cluster_size))
# print(min(read_cluster_size), max(read_cluster_size))
# bins = np.linspace(math.ceil(min(read_cluster_size)),
# math.floor(max(read_cluster_size)),
# 200) # fixed number of bins
# plt.hist(read_cluster_size, bins = bins) # arguments are passed to np.histogram
# plt.title("Cluster size distritbuion, y log scale")
# plt.xlabel("reads counts")
# # plt.gca().set_xscale("log")
# # tk = np.linspace(math.ceil(min(read_cluster_size)),
# # math.floor(max(read_cluster_size)),
# # 20)
# # plt.gca().set_xticks(tk)
# plt.gca().set_yscale("log")
# plt.ylabel("number of cells")
# # for axis in [plt.gca().xaxis, plt.gca().yaxis]:
# # axis.set_major_formatter(ScalarFormatter())
# plt.savefig(path + "E31_REP3_cell_group_size_2_more_100_less.png")
| 35.54717 | 82 | 0.707006 | 283 | 1,884 | 4.441696 | 0.378092 | 0.183771 | 0.22673 | 0.054097 | 0.38027 | 0.310263 | 0.272076 | 0.246619 | 0.246619 | 0.182975 | 0 | 0.029265 | 0.147558 | 1,884 | 52 | 83 | 36.230769 | 0.753425 | 0.734607 | 0 | 0 | 0 | 0 | 0.220264 | 0.180617 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.454545 | 0 | 0.454545 | 0.090909 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
d4dc4d4a9fd54de2548f8f0ebb46a0e706ede4e3 | 615 | py | Python | roadpin_backend/app/cron_data/cron_chiayi_city.py | g0v/roadpin | c2919552dd3ce0e3614a35466bae6d6a740e9368 | [
"CC-BY-3.0",
"Apache-2.0"
] | 5 | 2015-04-20T17:16:56.000Z | 2018-12-25T11:14:22.000Z | roadpin_backend/app/cron_data/cron_chiayi_city.py | g0v/roadpin | c2919552dd3ce0e3614a35466bae6d6a740e9368 | [
"CC-BY-3.0",
"Apache-2.0"
] | 1 | 2016-06-20T02:55:27.000Z | 2016-06-21T15:37:19.000Z | roadpin_backend/app/cron_data/cron_chiayi_city.py | g0v/roadpin | c2919552dd3ce0e3614a35466bae6d6a740e9368 | [
"CC-BY-3.0",
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from app.constants import S_OK, S_ERR
import random
import math
import base64
import time
import ujson as json
import sys
import argparse
from app import cfg
from app import util
def cron_chiayi_city():
pass
def parse_args():
''' '''
parser = argparse.ArgumentParser(description='roadpin_backend')
parser.add_argument('-i', '--ini', type=str, required=True, help="ini filename")
args = parser.parse_args()
return (S_OK, args)
if __name__ == '__main__':
(error_code, args) = parse_args()
cfg.init({"ini_filename": args.ini})
cron_chiayi_city()
| 17.083333 | 84 | 0.686179 | 86 | 615 | 4.651163 | 0.581395 | 0.0525 | 0.065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006 | 0.186992 | 615 | 35 | 85 | 17.571429 | 0.794 | 0.034146 | 0 | 0 | 0 | 0 | 0.092308 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0.047619 | 0.47619 | 0 | 0.619048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
d4e97cdb676222b44bb1bc3a2210e1b38631b566 | 624 | py | Python | django/blog_app/blog/models.py | bogdan-veliscu/react-stack-compare | e7d2020ec8177e12d1b1c41bfaed96c2ee87490c | [
"Apache-2.0"
] | null | null | null | django/blog_app/blog/models.py | bogdan-veliscu/react-stack-compare | e7d2020ec8177e12d1b1c41bfaed96c2ee87490c | [
"Apache-2.0"
] | 7 | 2021-10-06T14:03:21.000Z | 2022-02-27T02:40:18.000Z | django/blog_app/blog/models.py | bogdan-veliscu/react-stack-compare | e7d2020ec8177e12d1b1c41bfaed96c2ee87490c | [
"Apache-2.0"
] | null | null | null | from django.db import models
# Create your models here.
class Topic (models.Model):
top_name = models.CharField(max_length=256, unique=True)
def __str__(self):
return self.top_name
class Webpage(models.Model):
topic = models.ForeignKey(Topic, on_delete=models.CASCADE)
name = models.CharField(max_length=256, unique=True)
url = models.URLField(unique=True)
def __str__(self):
return self.name
class AccessRecord(models.Model):
name = models.ForeignKey(Webpage, on_delete=models.CASCADE)
date = models.DateField()
def __str__(self):
return str(self.date)
| 22.285714 | 63 | 0.701923 | 82 | 624 | 5.121951 | 0.402439 | 0.066667 | 0.071429 | 0.114286 | 0.314286 | 0.314286 | 0.314286 | 0.195238 | 0 | 0 | 0 | 0.011905 | 0.192308 | 624 | 27 | 64 | 23.111111 | 0.821429 | 0.038462 | 0 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.0625 | 0.1875 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
d4e9a0fd42e773597d396b7d106cf20adc80ded8 | 1,013 | py | Python | m2cgen/interpreters/__init__.py | yarix/m2cgen | f1aa01e4c70a6d1a8893e27bfbe3c36fcb1e8546 | [
"MIT"
] | 1 | 2021-01-25T09:55:29.000Z | 2021-01-25T09:55:29.000Z | m2cgen/interpreters/__init__.py | yarix/m2cgen | f1aa01e4c70a6d1a8893e27bfbe3c36fcb1e8546 | [
"MIT"
] | null | null | null | m2cgen/interpreters/__init__.py | yarix/m2cgen | f1aa01e4c70a6d1a8893e27bfbe3c36fcb1e8546 | [
"MIT"
] | null | null | null | from .java.interpreter import JavaInterpreter
from .python.interpreter import PythonInterpreter
from .c.interpreter import CInterpreter
from .go.interpreter import GoInterpreter
from .javascript.interpreter import JavascriptInterpreter
from .visual_basic.interpreter import VisualBasicInterpreter
from .c_sharp.interpreter import CSharpInterpreter
from .powershell.interpreter import PowershellInterpreter
from .r.interpreter import RInterpreter
from .php.interpreter import PhpInterpreter
from .dart.interpreter import DartInterpreter
from .haskell.interpreter import HaskellInterpreter
from .ruby.interpreter import RubyInterpreter
from .f_sharp.interpreter import FSharpInterpreter
# ``__all__`` must list names as strings; listing the classes themselves
# makes ``from m2cgen.interpreters import *`` raise a TypeError.
__all__ = [
    "JavaInterpreter",
    "PythonInterpreter",
    "CInterpreter",
    "GoInterpreter",
    "JavascriptInterpreter",
    "VisualBasicInterpreter",
    "CSharpInterpreter",
    "PowershellInterpreter",
    "RInterpreter",
    "PhpInterpreter",
    "DartInterpreter",
    "HaskellInterpreter",
    "RubyInterpreter",
    "FSharpInterpreter",
]
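A package exporting one interpreter class per target language typically pairs with a small name-to-class registry so a CLI flag can select the backend. This is an illustrative sketch only, not m2cgen's actual API — the stub classes stand in for the real interpreters:

```python
# Hypothetical registry pattern; the class names mirror the imports above,
# but these stubs are placeholders for the real interpreter classes.
class JavaInterpreter:
    pass


class PythonInterpreter:
    pass


INTERPRETERS = {
    "java": JavaInterpreter,
    "python": PythonInterpreter,
}


def get_interpreter(language):
    """Instantiate the interpreter registered for ``language``."""
    try:
        return INTERPRETERS[language.lower()]()
    except KeyError:
        raise ValueError("Unsupported language: %s" % language)
```

Keeping `__all__` as strings makes it easy to build such a registry dynamically (e.g. via `getattr` on the module) without importing every class by hand.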
| 31.65625 | 60 | 0.826259 | 88 | 1,013 | 9.431818 | 0.375 | 0.286747 | 0.053012 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129319 | 1,013 | 31 | 61 | 32.677419 | 0.941043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.466667 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
d4ef10f97f76f957b73c11cf8c2b6f7ed6b391ab | 694 | py | Python | seqgentools/algorithms/extendables.py | NCAR/seqgentools | c9276b13876b24a99ba218f7abbd1daee115fed5 | [
"MIT"
] | 3 | 2019-06-12T09:19:25.000Z | 2022-02-09T15:47:16.000Z | seqgentools/algorithms/extendables.py | NCAR/seqgentools | c9276b13876b24a99ba218f7abbd1daee115fed5 | [
"MIT"
] | null | null | null | seqgentools/algorithms/extendables.py | NCAR/seqgentools | c9276b13876b24a99ba218f7abbd1daee115fed5 | [
"MIT"
] | 1 | 2021-11-22T18:46:50.000Z | 2021-11-22T18:46:50.000Z | # coding: utf-8
from __future__ import (unicode_literals, print_function,
                        division)
from seqgentools.sequence import Sequence, Chain, INF
# IDEA
# - index of an element is an extendable tuple
# - When new dimension is discovered, add an additional index in the tuple
# - Each index has an attribute of length, which is also changeable
# - When a range is discovered to be null, do not include it in the space
# - A previously searched index is preserved even though a new dimension is
#   added, by assuming all new dimensions are indexed as zero
# - If a dimension is not valid in ranges, it is assumed that there is only one element
# - ?? support a tuple as an index of another search tuple?
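The comments above describe an index that grows as new dimensions are discovered while keeping earlier search positions valid. A minimal, hypothetical sketch of that idea (none of these names come from seqgentools itself):

```python
# Illustrative only: an index tuple that can gain dimensions over time.
# Newly discovered dimensions start at zero, so an index recorded before
# the dimension existed still points at the same element afterwards.
class ExtendableIndex:
    def __init__(self, index=()):
        self.index = tuple(index)

    def add_dimension(self):
        # Per the comments above: new dimensions are indexed as zero.
        self.index = self.index + (0,)

    def __len__(self):
        return len(self.index)


idx = ExtendableIndex((3, 1))
idx.add_dimension()   # a third dimension was discovered
```

After `add_dimension()`, the original `(3, 1)` position is preserved as `(3, 1, 0)`.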
| 40.823529 | 87 | 0.755043 | 110 | 694 | 4.709091 | 0.654545 | 0.063707 | 0.054054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001795 | 0.197406 | 694 | 16 | 88 | 43.375 | 0.928187 | 0.778098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
d4f1e3ac9df5a42a14c94f4a5a15d1aed2216134 | 26,498 | py | Python | tensorflow/python/estimator/warm_starting_util_test.py | shengfuintel/tensorflow | e67f3af48c94c9456c3ff376dc30c82a4bf982cd | [
"Apache-2.0"
] | 522 | 2016-06-08T02:15:50.000Z | 2022-03-02T05:30:36.000Z | tensorflow/python/estimator/warm_starting_util_test.py | shengfuintel/tensorflow | e67f3af48c94c9456c3ff376dc30c82a4bf982cd | [
"Apache-2.0"
] | 48 | 2016-07-26T00:11:55.000Z | 2022-02-23T13:36:33.000Z | tensorflow/python/estimator/warm_starting_util_test.py | shengfuintel/tensorflow | e67f3af48c94c9456c3ff376dc30c82a4bf982cd | [
"Apache-2.0"
] | 108 | 2016-06-16T15:34:05.000Z | 2022-03-12T13:23:11.000Z | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for warm_starting_util."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import numpy as np
import six
from tensorflow.python.estimator import warm_starting_util as ws_util
from tensorflow.python.feature_column import feature_column as fc
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import variable_scope
from tensorflow.python.ops import variables
from tensorflow.python.platform import test
from tensorflow.python.training import saver as saver_lib
ones = init_ops.ones_initializer
norms = init_ops.truncated_normal_initializer
rand = init_ops.random_uniform_initializer
class WarmStartingUtilTest(test.TestCase):
def _write_vocab(self, string_values, file_name):
vocab_file = os.path.join(self.get_temp_dir(), file_name)
with open(vocab_file, "w") as f:
f.write("\n".join(string_values))
return vocab_file
def _write_checkpoint(self, sess):
sess.run(variables.global_variables_initializer())
saver = saver_lib.Saver()
ckpt_prefix = os.path.join(self.get_temp_dir(), "model")
ckpt_state_name = "checkpoint"
saver.save(
sess, ckpt_prefix, global_step=0, latest_filename=ckpt_state_name)
def _create_prev_run_var(self,
var_name,
shape=None,
initializer=None,
partitioner=None):
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
var = variable_scope.get_variable(
var_name,
shape=shape,
initializer=initializer,
partitioner=partitioner)
self._write_checkpoint(sess)
if partitioner:
self.assertTrue(isinstance(var, variables.PartitionedVariable))
var = var._get_variable_list()
return var, sess.run(var)
def _create_dummy_inputs(self):
return {
"sc_int": array_ops.sparse_placeholder(dtypes.int32),
"sc_hash": array_ops.sparse_placeholder(dtypes.string),
"sc_keys": array_ops.sparse_placeholder(dtypes.string),
"sc_vocab": array_ops.sparse_placeholder(dtypes.string),
"real": array_ops.placeholder(dtypes.float32)
}
def _create_linear_model(self, feature_cols, partitioner):
cols_to_vars = {}
with variable_scope.variable_scope("", partitioner=partitioner):
# Create the variables.
fc.linear_model(
features=self._create_dummy_inputs(),
feature_columns=feature_cols,
units=1,
cols_to_vars=cols_to_vars)
# Return a dictionary mapping each column to its variable, dropping the
# 'bias' key that's also filled.
cols_to_vars.pop("bias")
return cols_to_vars
def _assert_cols_to_vars(self, cols_to_vars, cols_to_expected_values, sess):
for col, expected_values in six.iteritems(cols_to_expected_values):
for i, var in enumerate(cols_to_vars[col]):
self.assertAllEqual(expected_values[i], var.eval(sess))
def testWarmStartVar(self):
_, prev_val = self._create_prev_run_var(
"fruit_weights", initializer=[[0.5], [1.], [1.5], [2.]])
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
fruit_weights = variable_scope.get_variable(
"fruit_weights", initializer=[[0.], [0.], [0.], [0.]])
ws_util._warmstart_var(fruit_weights, self.get_temp_dir())
sess.run(variables.global_variables_initializer())
self.assertAllEqual(prev_val, fruit_weights.eval(sess))
def testWarmStartVarPrevVarPartitioned(self):
_, weights = self._create_prev_run_var(
"fruit_weights",
shape=[4, 1],
initializer=[[0.5], [1.], [1.5], [2.]],
partitioner=lambda shape, dtype: [2, 1])
prev_val = np.concatenate([weights[0], weights[1]], axis=0)
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
fruit_weights = variable_scope.get_variable(
"fruit_weights", initializer=[[0.], [0.], [0.], [0.]])
ws_util._warmstart_var(fruit_weights, self.get_temp_dir())
sess.run(variables.global_variables_initializer())
self.assertAllEqual(prev_val, fruit_weights.eval(sess))
def testWarmStartVarCurrentVarPartitioned(self):
_, prev_val = self._create_prev_run_var(
"fruit_weights", initializer=[[0.5], [1.], [1.5], [2.]])
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
fruit_weights = variable_scope.get_variable(
"fruit_weights",
shape=[4, 1],
initializer=[[0.], [0.], [0.], [0.]],
partitioner=lambda shape, dtype: [2, 1])
self.assertTrue(
isinstance(fruit_weights, variables.PartitionedVariable))
ws_util._warmstart_var(fruit_weights, self.get_temp_dir())
sess.run(variables.global_variables_initializer())
fruit_weights = fruit_weights._get_variable_list()
new_val = np.concatenate(
[fruit_weights[0].eval(sess), fruit_weights[1].eval(sess)], axis=0)
self.assertAllEqual(prev_val, new_val)
def testWarmStartVarBothVarsPartitioned(self):
_, weights = self._create_prev_run_var(
"old_scope/fruit_weights",
shape=[4, 1],
initializer=[[0.5], [1.], [1.5], [2.]],
partitioner=lambda shape, dtype: [2, 1])
prev_val = np.concatenate([weights[0], weights[1]], axis=0)
# New session and new graph.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
fruit_weights = variable_scope.get_variable(
"new_scope/fruit_weights",
shape=[4, 1],
initializer=[[0.], [0.], [0.], [0.]],
partitioner=lambda shape, dtype: [2, 1])
self.assertTrue(
isinstance(fruit_weights, variables.PartitionedVariable))
ws_util._warmstart_var(
fruit_weights,
self.get_temp_dir(),
prev_tensor_name="old_scope/fruit_weights")
sess.run(variables.global_variables_initializer())
fruit_weights = fruit_weights._get_variable_list()
new_val = np.concatenate(
[fruit_weights[0].eval(sess), fruit_weights[1].eval(sess)], axis=0)
self.assertAllEqual(prev_val, new_val)
def testWarmStartVarWithVocab(self):
prev_vocab_path = self._write_vocab(["apple", "banana", "guava", "orange"],
"old_vocab")
_, _ = self._create_prev_run_var(
"fruit_weights", initializer=[[0.5], [1.], [1.5], [2.]])
# New vocab with elements in reverse order and one new element.
new_vocab_path = self._write_vocab(
["orange", "guava", "banana", "apple", "raspberry"], "new_vocab")
# New session and new graph.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
fruit_weights = variable_scope.get_variable(
"fruit_weights", initializer=[[0.], [0.], [0.], [0.], [0.]])
ws_util._warmstart_var_with_vocab(fruit_weights, new_vocab_path, 5,
self.get_temp_dir(), prev_vocab_path)
sess.run(variables.global_variables_initializer())
self.assertAllEqual([[2.], [1.5], [1.], [0.5], [0.]],
fruit_weights.eval(sess))
def testWarmStartVarWithVocabPrevVarPartitioned(self):
prev_vocab_path = self._write_vocab(["apple", "banana", "guava", "orange"],
"old_vocab")
_, _ = self._create_prev_run_var(
"fruit_weights",
shape=[4, 1],
initializer=[[0.5], [1.], [1.5], [2.]],
partitioner=lambda shape, dtype: [2, 1])
# New vocab with elements in reverse order and one new element.
new_vocab_path = self._write_vocab(
["orange", "guava", "banana", "apple", "raspberry"], "new_vocab")
# New session and new graph.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
fruit_weights = variable_scope.get_variable(
"fruit_weights", initializer=[[0.], [0.], [0.], [0.], [0.]])
ws_util._warmstart_var_with_vocab(fruit_weights, new_vocab_path, 5,
self.get_temp_dir(), prev_vocab_path)
sess.run(variables.global_variables_initializer())
self.assertAllEqual([[2.], [1.5], [1.], [0.5], [0.]],
fruit_weights.eval(sess))
def testWarmStartVarWithVocabCurrentVarPartitioned(self):
prev_vocab_path = self._write_vocab(["apple", "banana", "guava", "orange"],
"old_vocab")
_, _ = self._create_prev_run_var(
"fruit_weights", initializer=[[0.5], [1.], [1.5], [2.]])
# New vocab with elements in reverse order and one new element.
new_vocab_path = self._write_vocab(
["orange", "guava", "banana", "apple", "raspberry"], "new_vocab")
# New session and new graph.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
fruit_weights = variable_scope.get_variable(
"fruit_weights",
shape=[6, 1],
initializer=[[0.], [0.], [0.], [0.], [0.], [0.]],
partitioner=lambda shape, dtype: [2, 1])
ws_util._warmstart_var_with_vocab(
fruit_weights,
new_vocab_path,
5,
self.get_temp_dir(),
prev_vocab_path,
current_oov_buckets=1)
sess.run(variables.global_variables_initializer())
self.assertTrue(
isinstance(fruit_weights, variables.PartitionedVariable))
fruit_weights_vars = fruit_weights._get_variable_list()
self.assertAllEqual([[2.], [1.5], [1.]],
fruit_weights_vars[0].eval(sess))
self.assertAllEqual([[0.5], [0.], [0.]],
fruit_weights_vars[1].eval(sess))
def testWarmStartVarWithVocabBothVarsPartitioned(self):
prev_vocab_path = self._write_vocab(["apple", "banana", "guava", "orange"],
"old_vocab")
_, _ = self._create_prev_run_var(
"fruit_weights",
shape=[4, 1],
initializer=[[0.5], [1.], [1.5], [2.]],
partitioner=lambda shape, dtype: [2, 1])
# New vocab with elements in reverse order and two new elements.
new_vocab_path = self._write_vocab(
["orange", "guava", "banana", "apple", "raspberry",
"blueberry"], "new_vocab")
# New session and new graph.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
fruit_weights = variable_scope.get_variable(
"fruit_weights",
shape=[6, 1],
initializer=[[0.], [0.], [0.], [0.], [0.], [0.]],
partitioner=lambda shape, dtype: [2, 1])
ws_util._warmstart_var_with_vocab(fruit_weights, new_vocab_path, 6,
self.get_temp_dir(), prev_vocab_path)
sess.run(variables.global_variables_initializer())
self.assertTrue(
isinstance(fruit_weights, variables.PartitionedVariable))
fruit_weights_vars = fruit_weights._get_variable_list()
self.assertAllEqual([[2.], [1.5], [1.]],
fruit_weights_vars[0].eval(sess))
self.assertAllEqual([[0.5], [0.], [0.]],
fruit_weights_vars[1].eval(sess))
def testWarmStartInputLayer_SparseColumnIntegerized(self):
# Create feature column.
sc_int = fc.categorical_column_with_identity("sc_int", num_buckets=10)
# Save checkpoint from which to warm-start.
_, prev_int_val = self._create_prev_run_var(
"linear_model/sc_int/weights", shape=[10, 1], initializer=ones())
# Verify we initialized the values correctly.
self.assertAllEqual(np.ones([10, 1]), prev_int_val)
partitioner = lambda shape, dtype: [1] * len(shape)
# New graph, new session WITHOUT warmstarting.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
cols_to_vars = self._create_linear_model([sc_int], partitioner)
sess.run(variables.global_variables_initializer())
# Without warmstarting, the weights should be initialized using default
# initializer (which is init_ops.zeros_initializer).
self._assert_cols_to_vars(cols_to_vars, {sc_int: [np.zeros([10, 1])]},
sess)
# New graph, new session with warmstarting.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
cols_to_vars = self._create_linear_model([sc_int], partitioner)
ws_util._warmstart_input_layer(cols_to_vars,
ws_util._WarmStartSettings(
self.get_temp_dir()))
sess.run(variables.global_variables_initializer())
# Verify weights were correctly warmstarted.
self._assert_cols_to_vars(cols_to_vars, {sc_int: [prev_int_val]}, sess)
def testWarmStartInputLayer_SparseColumnHashed(self):
# Create feature column.
sc_hash = fc.categorical_column_with_hash_bucket(
"sc_hash", hash_bucket_size=15)
# Save checkpoint from which to warm-start.
_, prev_hash_val = self._create_prev_run_var(
"linear_model/sc_hash/weights", shape=[15, 1], initializer=norms())
partitioner = lambda shape, dtype: [1] * len(shape)
# New graph, new session WITHOUT warmstarting.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
cols_to_vars = self._create_linear_model([sc_hash], partitioner)
sess.run(variables.global_variables_initializer())
# Without warmstarting, the weights should be initialized using default
# initializer (which is init_ops.zeros_initializer).
self._assert_cols_to_vars(cols_to_vars, {sc_hash: [np.zeros([15, 1])]},
sess)
# New graph, new session with warmstarting.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
cols_to_vars = self._create_linear_model([sc_hash], partitioner)
ws_util._warmstart_input_layer(cols_to_vars,
ws_util._WarmStartSettings(
self.get_temp_dir()))
sess.run(variables.global_variables_initializer())
# Verify weights were correctly warmstarted.
self._assert_cols_to_vars(cols_to_vars, {sc_hash: [prev_hash_val]},
sess)
def testWarmStartInputLayer_SparseColumnVocabulary(self):
# Create vocab for sparse column "sc_vocab".
vocab_path = self._write_vocab(["apple", "banana", "guava", "orange"],
"vocab")
# Create feature column.
sc_vocab = fc.categorical_column_with_vocabulary_file(
"sc_vocab", vocabulary_file=vocab_path, vocabulary_size=4)
# Save checkpoint from which to warm-start.
_, prev_vocab_val = self._create_prev_run_var(
"linear_model/sc_vocab/weights", shape=[4, 1], initializer=ones())
partitioner = lambda shape, dtype: [1] * len(shape)
# New graph, new session WITHOUT warmstarting.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
cols_to_vars = self._create_linear_model([sc_vocab], partitioner)
sess.run(variables.global_variables_initializer())
# Without warmstarting, the weights should be initialized using default
# initializer (which is init_ops.zeros_initializer).
self._assert_cols_to_vars(cols_to_vars, {sc_vocab: [np.zeros([4, 1])]},
sess)
# New graph, new session with warmstarting.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
cols_to_vars = self._create_linear_model([sc_vocab], partitioner)
# Since old vocab is not explicitly set in WarmStartSettings, the old
# vocab is assumed to be same as new vocab.
ws_util._warmstart_input_layer(cols_to_vars,
ws_util._WarmStartSettings(
self.get_temp_dir()))
sess.run(variables.global_variables_initializer())
# Verify weights were correctly warmstarted.
self._assert_cols_to_vars(cols_to_vars, {sc_vocab: [prev_vocab_val]},
sess)
def testWarmStartInputLayer_BucketizedColumn(self):
# Create feature column.
real = fc.numeric_column("real")
real_bucket = fc.bucketized_column(real, boundaries=[0., 1., 2., 3.])
# Save checkpoint from which to warm-start.
_, prev_bucket_val = self._create_prev_run_var(
"linear_model/real_bucketized/weights",
shape=[5, 1],
initializer=norms())
partitioner = lambda shape, dtype: [1] * len(shape)
# New graph, new session WITHOUT warmstarting.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
cols_to_vars = self._create_linear_model([real_bucket], partitioner)
sess.run(variables.global_variables_initializer())
# Without warmstarting, the weights should be initialized using default
# initializer (which is init_ops.zeros_initializer).
self._assert_cols_to_vars(cols_to_vars,
{real_bucket: [np.zeros([5, 1])]}, sess)
# New graph, new session with warmstarting.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
cols_to_vars = self._create_linear_model([real_bucket], partitioner)
ws_util._warmstart_input_layer(cols_to_vars,
ws_util._WarmStartSettings(
self.get_temp_dir()))
sess.run(variables.global_variables_initializer())
# Verify weights were correctly warmstarted.
self._assert_cols_to_vars(cols_to_vars,
{real_bucket: [prev_bucket_val]}, sess)
def testWarmStartInputLayer_MultipleCols(self):
# Create vocab for sparse column "sc_vocab".
vocab_path = self._write_vocab(["apple", "banana", "guava", "orange"],
"vocab")
# Create feature columns.
sc_int = fc.categorical_column_with_identity("sc_int", num_buckets=10)
sc_hash = fc.categorical_column_with_hash_bucket(
"sc_hash", hash_bucket_size=15)
sc_keys = fc.categorical_column_with_vocabulary_list(
"sc_keys", vocabulary_list=["a", "b", "c", "e"])
sc_vocab = fc.categorical_column_with_vocabulary_file(
"sc_vocab", vocabulary_file=vocab_path, vocabulary_size=4)
real = fc.numeric_column("real")
real_bucket = fc.bucketized_column(real, boundaries=[0., 1., 2., 3.])
cross = fc.crossed_column([sc_keys, sc_vocab], hash_bucket_size=20)
all_linear_cols = [sc_int, sc_hash, sc_keys, sc_vocab, real_bucket, cross]
# Save checkpoint from which to warm-start.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
sc_int_weights = variable_scope.get_variable(
"linear_model/sc_int/weights", shape=[10, 1], initializer=ones())
sc_hash_weights = variable_scope.get_variable(
"linear_model/sc_hash/weights", shape=[15, 1], initializer=norms())
sc_keys_weights = variable_scope.get_variable(
"linear_model/sc_keys/weights", shape=[4, 1], initializer=rand())
sc_vocab_weights = variable_scope.get_variable(
"linear_model/sc_vocab/weights", shape=[4, 1], initializer=ones())
real_bucket_weights = variable_scope.get_variable(
"linear_model/real_bucketized/weights",
shape=[5, 1],
initializer=norms())
cross_weights = variable_scope.get_variable(
"linear_model/sc_keys_X_sc_vocab/weights",
shape=[20, 1],
initializer=rand())
self._write_checkpoint(sess)
(prev_int_val, prev_hash_val, prev_keys_val, prev_vocab_val,
prev_bucket_val, prev_cross_val) = sess.run([
sc_int_weights, sc_hash_weights, sc_keys_weights, sc_vocab_weights,
real_bucket_weights, cross_weights
])
# Verify we initialized the values correctly.
self.assertAllEqual(np.ones([10, 1]), prev_int_val)
partitioner = lambda shape, dtype: [1] * len(shape)
# New graph, new session WITHOUT warmstarting.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
cols_to_vars = self._create_linear_model(all_linear_cols, partitioner)
sess.run(variables.global_variables_initializer())
# Without warmstarting, all weights should be initialized using default
# initializer (which is init_ops.zeros_initializer).
self._assert_cols_to_vars(cols_to_vars, {
sc_int: [np.zeros([10, 1])],
sc_hash: [np.zeros([15, 1])],
sc_keys: [np.zeros([4, 1])],
sc_vocab: [np.zeros([4, 1])],
real_bucket: [np.zeros([5, 1])],
cross: [np.zeros([20, 1])],
}, sess)
# New graph, new session with warmstarting.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
cols_to_vars = self._create_linear_model(all_linear_cols, partitioner)
ws_util._warmstart_input_layer(cols_to_vars,
ws_util._WarmStartSettings(
self.get_temp_dir()))
sess.run(variables.global_variables_initializer())
# Verify weights were correctly warmstarted.
self._assert_cols_to_vars(cols_to_vars, {
sc_int: [prev_int_val],
sc_hash: [prev_hash_val],
sc_keys: [prev_keys_val],
sc_vocab: [prev_vocab_val],
real_bucket: [prev_bucket_val],
cross: [prev_cross_val],
}, sess)
def testWarmStartInputLayerMoreSettings(self):
# Create old and new vocabs for sparse column "sc_vocab".
prev_vocab_path = self._write_vocab(["apple", "banana", "guava", "orange"],
"old_vocab")
new_vocab_path = self._write_vocab(
["orange", "guava", "banana", "apple", "raspberry",
"blueberry"], "new_vocab")
# Create feature columns.
sc_hash = fc.categorical_column_with_hash_bucket(
"sc_hash", hash_bucket_size=15)
sc_keys = fc.categorical_column_with_vocabulary_list(
"sc_keys", vocabulary_list=["a", "b", "c", "e"])
sc_vocab = fc.categorical_column_with_vocabulary_file(
"sc_vocab", vocabulary_file=new_vocab_path, vocabulary_size=6)
all_linear_cols = [sc_hash, sc_keys, sc_vocab]
# Save checkpoint from which to warm-start.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
_ = variable_scope.get_variable(
"linear_model/sc_hash/weights", shape=[15, 1], initializer=norms())
sc_keys_weights = variable_scope.get_variable(
"some_other_name", shape=[4, 1], initializer=rand())
_ = variable_scope.get_variable(
"linear_model/sc_vocab/weights",
initializer=[[0.5], [1.], [2.], [3.]])
self._write_checkpoint(sess)
prev_keys_val = sess.run(sc_keys_weights)
def _partitioner(shape, dtype): # pylint:disable=unused-argument
# Partition each var into 2 equal slices.
partitions = [1] * len(shape)
partitions[0] = min(2, shape[0].value)
return partitions
# New graph, new session with warmstarting.
with ops.Graph().as_default() as g:
with self.test_session(graph=g) as sess:
cols_to_vars = self._create_linear_model(all_linear_cols, _partitioner)
ws_settings = ws_util._WarmStartSettings(
self.get_temp_dir(),
col_to_prev_vocab={sc_vocab: prev_vocab_path},
col_to_prev_tensor={sc_keys: "some_other_name"},
exclude_columns=[sc_hash])
ws_util._warmstart_input_layer(cols_to_vars, ws_settings)
sess.run(variables.global_variables_initializer())
# Verify weights were correctly warmstarted. Var corresponding to
# sc_hash should not be warm-started. Var corresponding to sc_vocab
# should be correctly warmstarted after vocab remapping.
self._assert_cols_to_vars(cols_to_vars, {
sc_keys:
np.split(prev_keys_val, 2),
sc_hash: [np.zeros([8, 1]), np.zeros([7, 1])],
sc_vocab: [
np.array([[3.], [2.], [1.]]),
np.array([[0.5], [0.], [0.]])
]
}, sess)
def testErrorConditions(self):
self.assertRaises(ValueError, ws_util._WarmStartSettings, None)
x = variable_scope.get_variable(
"x",
shape=[4, 1],
initializer=ones(),
partitioner=lambda shape, dtype: [2, 1])
# List of PartitionedVariable is invalid type.
self.assertRaises(TypeError, ws_util._warmstart_var, [x], prev_ckpt="/tmp")
self.assertRaises(TypeError, ws_util._warmstart_var_with_vocab, [x], "/tmp",
5, "/tmp", "/tmp")
# Keys of type other than FeatureColumn.
self.assertRaises(TypeError, ws_util._warmstart_input_layer,
{"StringType": x}, ws_util._WarmStartSettings("/tmp"))
if __name__ == "__main__":
test.main()
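The vocab-based tests above (e.g. `testWarmStartVarWithVocab`) all exercise the same core behavior: rows of the previous checkpoint's weight matrix are reordered to match a new vocabulary, and words missing from the old vocabulary fall back to zeros. A framework-free sketch of that remapping, using the same fruit vocabularies as the tests:

```python
# Plain-Python sketch of vocab remapping (not TensorFlow's implementation):
# each row of old_weights corresponds to a word in old_vocab; the result has
# one row per word in new_vocab, zero-filled for previously unseen words.
def remap_weights(old_vocab, new_vocab, old_weights):
    old_index = {word: i for i, word in enumerate(old_vocab)}
    width = len(old_weights[0])
    return [
        list(old_weights[old_index[word]]) if word in old_index
        else [0.0] * width
        for word in new_vocab
    ]


old_vocab = ["apple", "banana", "guava", "orange"]
new_vocab = ["orange", "guava", "banana", "apple", "raspberry"]
remapped = remap_weights(old_vocab, new_vocab, [[0.5], [1.0], [1.5], [2.0]])
```

With the old vocab in forward order and the new vocab reversed plus one new word, the remapped weights come out as `[[2.0], [1.5], [1.0], [0.5], [0.0]]` — the same values `testWarmStartVarWithVocab` asserts.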
| 45.686207 | 80 | 0.63752 | 3,281 | 26,498 | 4.840597 | 0.095093 | 0.043068 | 0.029593 | 0.019393 | 0.747072 | 0.716283 | 0.693615 | 0.673341 | 0.658859 | 0.634555 | 0 | 0.014942 | 0.244811 | 26,498 | 579 | 81 | 45.765112 | 0.778722 | 0.134689 | 0 | 0.631461 | 0 | 0 | 0.054802 | 0.018953 | 0 | 0 | 0 | 0 | 0.076404 | 1 | 0.049438 | false | 0 | 0.035955 | 0.002247 | 0.098876 | 0.002247 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d4f2c2dad7a465026df74ac45d98935ef41ed2a8 | 10,445 | py | Python | scripts/end_date_volumes_invariant_13a.py | simpsonw/atmosphere | 3a5203ef0b563de3a0e8c8c8715df88186532d7a | [
"BSD-3-Clause"
] | 197 | 2016-12-08T02:33:32.000Z | 2022-03-23T14:27:47.000Z | scripts/end_date_volumes_invariant_13a.py | simpsonw/atmosphere | 3a5203ef0b563de3a0e8c8c8715df88186532d7a | [
"BSD-3-Clause"
] | 385 | 2017-01-03T22:51:46.000Z | 2020-12-16T16:20:42.000Z | scripts/end_date_volumes_invariant_13a.py | simpsonw/atmosphere | 3a5203ef0b563de3a0e8c8c8715df88186532d7a | [
"BSD-3-Clause"
] | 50 | 2016-12-08T08:32:25.000Z | 2021-12-10T00:21:39.000Z | #!/usr/bin/env python
import argparse
import django
django.setup()
from core.models import Volume
from django.db import connection
from django.db.models.functions import Now
def main():
    '''
    This script will end date volumes that come up for Invariant #13a on
    https://tasmo.atmo.cloud and will be run via cron every ___________.
    '''
    # Dry run option
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--dry-run",
        action="store_true",
        help="Do not actually end-date any volumes"
    )
    args = parser.parse_args()

    if args.dry_run:
        print 'DRY RUN -- No Volumes will be end-dated'

    volumes_from_invariant_13a = []

    # This query comes from here: https://tasmo.atmo.cloud/queries/64/source#87
    query = '''WITH volumes_users_allocations AS
( SELECT volume.id AS volume_id, volume.name AS volume_name, volume.description AS volume_description,
proj.name AS atmo_project_name, proj.description AS atmo_project_description, au.id AS user_id,
au.username, au.is_staff, au.is_superuser,
CASE
WHEN ins_src.provider_id = 4 THEN 'IU'
WHEN ins_src.provider_id = 5 THEN 'TACC'
ELSE 'UNKNOWN'
END AS src_provider, ins_src.identifier AS openstack_identifier, ins_src.start_date, ins_src.end_date,
string_agg(current_als.name, ',') AS current_allocation_sources
FROM volume
LEFT OUTER JOIN instance_source ins_src ON volume.instance_source_id = ins_src.id
LEFT OUTER JOIN project proj ON volume.project_id = proj.id
LEFT OUTER JOIN atmosphere_user au ON ins_src.created_by_id = au.id
LEFT OUTER JOIN user_allocation_source current_uals ON au.id = current_uals.user_id
LEFT OUTER JOIN allocation_source current_als ON current_uals.allocation_source_id = current_als.id
GROUP BY volume.id, proj.id, au.id, ins_src.id), user_allocation_source_deleted_events AS
( SELECT DISTINCT event_table.name AS event_name, event_table.entity_id AS username,
event_table.payload :: json ->> 'allocation_source_name' AS allocation_source_name,
max(TIMESTAMP) AS last_event, min(TIMESTAMP) AS first_event
FROM event_table
WHERE event_table.name = 'user_allocation_source_deleted'
GROUP BY event_table.name, event_table.entity_id, event_table.payload :: json ->> 'allocation_source_name' ),
user_allocation_source_deleted_events_grouped AS
( SELECT DISTINCT event_name, username, string_agg(DISTINCT allocation_source_name, ',') AS historic_allocation_sources,
max(last_event) AS last_event, min(first_event) AS first_event
FROM user_allocation_source_deleted_events
GROUP BY event_name, username ), users_with_no_allocation_sources AS
( SELECT au.id AS user_id, au.username, au.is_staff, au.is_superuser
FROM atmosphere_user au
LEFT OUTER JOIN user_allocation_source uas ON au.id = uas.user_id
WHERE uas.id IS NULL ),
users_with_no_allocation_source_over_six_months AS
( SELECT uwnas.user_id, uwnas.username, uwnas.is_staff, uwnas.is_superuser, uasdeg.last_event, uasdeg.historic_allocation_sources
FROM users_with_no_allocation_sources uwnas
LEFT OUTER JOIN user_allocation_source_deleted_events_grouped uasdeg ON uasdeg.username = uwnas.username
WHERE uasdeg.last_event IS NULL OR uasdeg.last_event < NOW() - INTERVAL '6 months' ),
active_volumes_for_users_with_no_allocation_source_over_six_months AS
( SELECT * FROM volumes_users_allocations vua
LEFT JOIN users_with_no_allocation_source_over_six_months uwnasosm ON vua.user_id = uwnasosm.user_id
WHERE uwnasosm.user_id IS NOT NULL AND vua.end_date IS NULL AND vua.username <> 'atmoadmin' ),
instancesources_appversions_apps AS
( SELECT DISTINCT isrc.identifier AS openstack_image_identifier, isrc.start_date AS isrc_start_date,
isrc.end_date AS isrc_end_date,
CASE
WHEN isrc.provider_id = 4 THEN 'IU'
WHEN isrc.provider_id = 5 THEN 'TACC'
ELSE 'UNKNOWN'
END AS isrc_provider, appv.created_by_id AS appv_created_by_id, appv.start_date AS appv_start_date,
appv.end_date AS appv_end_date, appv.name AS appv_name, app.created_by_id AS app_created_by_id,
app.name AS app_name, app.description AS app_description, app.start_date AS app_start_date, app.end_date AS app_end_date
FROM application_version appv
LEFT OUTER JOIN provider_machine pm ON appv.id = pm.application_version_id
LEFT OUTER JOIN application app ON app.id = appv.application_id
LEFT OUTER JOIN instance_source isrc ON pm.instance_source_id = isrc.id ),
instancesources_appversions_apps_instances AS
( SELECT DISTINCT isrc.identifier AS openstack_image_identifier, isrc.start_date AS isrc_start_date,
isrc.end_date AS isrc_end_date, appv.created_by_id AS appv_created_by_id, appv.start_date AS appv_start_date,
appv.end_date AS appv_end_date, app.created_by_id AS app_created_by_id, app.start_date AS app_start_date,
app.end_date AS app_end_date, ins.id AS instance_id, ins.created_by_id AS instance_created_by_id,
ins.start_date AS instance_start_date, ins.end_date AS instance_end_date
FROM application_version appv
LEFT OUTER JOIN provider_machine pm ON appv.id = pm.application_version_id
LEFT OUTER JOIN application app ON app.id = appv.application_id
LEFT OUTER JOIN instance_source isrc ON pm.instance_source_id = isrc.id
LEFT OUTER JOIN instance ins ON isrc.id = ins.source_id ),
images_users_allocations_agg AS
( SELECT DISTINCT isrc.identifier AS openstack_identifier, jsonb_agg(DISTINCT isrc.*) AS instance_sources,
jsonb_agg(DISTINCT pm.*) AS provider_machine, jsonb_agg(DISTINCT app.*) AS applications,
jsonb_agg(DISTINCT appv.*) AS application_versions, jsonb_agg(DISTINCT ins.*) AS instances
FROM application_version appv
LEFT OUTER JOIN provider_machine pm ON appv.id = pm.application_version_id
LEFT OUTER JOIN application app ON app.id = appv.application_id
LEFT OUTER JOIN instance_source isrc ON pm.instance_source_id = isrc.id
LEFT OUTER JOIN instance ins ON isrc.id = ins.source_id
GROUP BY isrc.identifier ), active_instancesources_and_appversions_for_users_with_no_allocation_source_over_six_months AS
( SELECT iaa.*, uwnasosm.username AS created_by_user_username, uwnasosm.is_staff AS created_by_user_is_staff,
uwnasosm.is_superuser AS created_by_user_is_superuser, uwnasosm.last_event AS created_by_user_last_allocation_end_date,
uwnasosm.historic_allocation_sources AS created_by_user_historic_allocation_sources
FROM instancesources_appversions_apps iaa
LEFT JOIN users_with_no_allocation_source_over_six_months uwnasosm ON iaa.appv_created_by_id = uwnasosm.user_id
WHERE uwnasosm.user_id IS NOT NULL AND (isrc_end_date IS NULL OR appv_end_date IS NULL OR app_end_date IS NULL)
AND uwnasosm.username NOT IN ('admin', 'atmoadmin')), aiaafuwnasosm_with_current_allocation_sources AS
( SELECT aiaafuwnasosm.openstack_image_identifier, aiaafuwnasosm.isrc_provider, aiaafuwnasosm.isrc_end_date,
aiaafuwnasosm.isrc_start_date, aiaafuwnasosm.appv_name, aiaafuwnasosm.appv_start_date, aiaafuwnasosm.appv_end_date,
aiaafuwnasosm.appv_created_by_id, aiaafuwnasosm.app_end_date, aiaafuwnasosm.app_start_date, aiaafuwnasosm.app_description,
aiaafuwnasosm.app_name, aiaafuwnasosm.app_created_by_id, aiaafuwnasosm.created_by_user_username,
aiaafuwnasosm.created_by_user_is_staff, aiaafuwnasosm.created_by_user_is_superuser,
aiaafuwnasosm.created_by_user_last_allocation_end_date, aiaafuwnasosm.created_by_user_historic_allocation_sources,
string_agg(DISTINCT current_als.name, ',') AS current_allocation_sources
FROM active_instancesources_and_appversions_for_users_with_no_allocation_source_over_six_months aiaafuwnasosm
LEFT OUTER JOIN user_allocation_source current_uals ON aiaafuwnasosm.appv_created_by_id = current_uals.user_id
LEFT OUTER JOIN allocation_source current_als ON current_uals.allocation_source_id = current_als.id
GROUP BY aiaafuwnasosm.openstack_image_identifier, aiaafuwnasosm.isrc_provider, aiaafuwnasosm.isrc_end_date,
aiaafuwnasosm.isrc_start_date, aiaafuwnasosm.appv_name, aiaafuwnasosm.appv_start_date, aiaafuwnasosm.appv_end_date,
aiaafuwnasosm.appv_created_by_id, aiaafuwnasosm.app_end_date, aiaafuwnasosm.app_start_date,
aiaafuwnasosm.app_description, aiaafuwnasosm.app_name, aiaafuwnasosm.app_created_by_id,
aiaafuwnasosm.created_by_user_username, aiaafuwnasosm.created_by_user_is_staff, aiaafuwnasosm.created_by_user_is_superuser,
aiaafuwnasosm.created_by_user_last_allocation_end_date, aiaafuwnasosm.created_by_user_historic_allocation_sources
ORDER BY aiaafuwnasosm.created_by_user_last_allocation_end_date ASC )
SELECT * FROM active_volumes_for_users_with_no_allocation_source_over_six_months avfuwnasosm ORDER BY last_event ASC;'''
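The `CASE` expression near the top of the query maps `provider_id` values to provider labels. The same lookup expressed as plain Python, as a sketch (`PROVIDER_BY_ID` and `provider_label` are illustrative names, not part of this script):

```python
# Mirrors the SQL CASE on isrc.provider_id: known ids map to a provider
# name, anything else falls through to 'UNKNOWN'.
PROVIDER_BY_ID = {4: "IU", 5: "TACC"}

def provider_label(provider_id):
    return PROVIDER_BY_ID.get(provider_id, "UNKNOWN")
```

Keeping the mapping in one dict makes it easy to extend if another provider id appears.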
# Use the query above to get volumes listed for Invariant #13a
with connection.cursor() as cursor:
cursor.execute(query)
# Get the results as a dictionary
rows = dictfetchall(cursor)
# If there are any results from the query
if rows:
volumes = Volume.objects.all()
# Get the Volume object and put it into our list
for row in rows:
volume = volumes.get(pk=row['volume_id'])
volumes_from_invariant_13a.append(volume)
    print('Here are volumes from invariant 13a:')
    for ctr, vol in enumerate(volumes_from_invariant_13a, start=1):
        print(ctr)
        print(vol.name)
        print(vol)
        if not args.dry_run:
            vol.end_date = Now()
            vol.save()
            print('End-dated %s' % vol)
        print('----')
# Helper function to get query results as a dictionary
def dictfetchall(cursor):
columns = [col[0] for col in cursor.description]
return [dict(zip(columns, row)) for row in cursor.fetchall()]
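`dictfetchall` pairs the column names in `cursor.description` with each fetched row. A self-contained sketch of its behavior (`FakeCursor` is a hypothetical stand-in for a DB-API cursor, which exposes `description` and `fetchall()` the same way):

```python
class FakeCursor:
    # DB-API cursors describe each column as a tuple whose first item
    # is the column name; only that first item is used below.
    description = [("volume_id",), ("name",)]

    def fetchall(self):
        return [(1, "scratch"), (2, "data")]

def dictfetchall(cursor):
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

rows = dictfetchall(FakeCursor())
# rows == [{"volume_id": 1, "name": "scratch"}, {"volume_id": 2, "name": "data"}]
```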
if __name__ == "__main__":
main()
# conditions/views.py (surajsjain/smart-aquaponics-backend, MIT)
from django.shortcuts import render
from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response
from .models import *
from .searializers import *
@api_view(['GET', 'POST'])
def plant_list(request):
if request.method == 'GET':
conditions = PlantConditions.objects.all()
serializer = PlantConditionsSerializer(conditions, many=True)
return Response(serializer.data)
elif request.method == 'POST':
serializer = PlantConditionsSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
@api_view(['GET', 'POST'])
def pond_list(request):
if request.method == 'GET':
conditions = PondConditions.objects.all()
serializer = PondConditionsSerializer(conditions, many=True)
return Response(serializer.data)
elif request.method == 'POST':
serializer = PondConditionsSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
@api_view(['GET', 'POST'])
def watering_list(request):
if request.method == 'GET':
conditions = Watering.objects.all()
serializer = WateringConditionsSerializer(conditions, many=True)
return Response(serializer.data)
elif request.method == 'POST':
serializer = WateringConditionsSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
@api_view(['GET', 'POST'])
def fishfeeding_list(request):
if request.method == 'GET':
conditions = FishFeeding.objects.all()
serializer = FishFeedingSerializer(conditions, many=True)
return Response(serializer.data)
elif request.method == 'POST':
serializer = FishFeedingSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
@api_view(['GET', 'POST'])
def focus(request, pid):
    if request.method == 'GET':
        conditions = InFocus.objects.get(id=pid)
        serializer = InFocusSerializer(conditions, many=False)
        return Response(serializer.data)
    elif request.method == 'POST':  ### WORK ON THIS
        data = request.data
        ob = InFocus.objects.get(id=pid)
        ob.temperature = data["temperature"]
        ob.humidity = data["humidity"]
        ob.soilMoisture = data["soilMoisture"]
        ob.diseased = data["diseased"]
        ob.save()
        return Response(data, status=status.HTTP_201_CREATED)
@api_view(['GET', 'POST'])
def actuate(request, pid):
    if request.method == 'GET':
        conditions = ActuatorOverride.objects.get(id=pid)
        serializer = ActuatorOverrideSerializer(conditions, many=False)
        return Response(serializer.data)
    elif request.method == 'POST':  ### WORK ON THIS
        data = request.data
        ob = ActuatorOverride.objects.get(id=pid)
        mod_type = data["modType"]  # renamed: `type` would shadow the built-in
        if mod_type == "water":
            ob.manual = True
            ob.water = data["value"]
        elif mod_type == "light":
            ob.manual = True
            ob.light = data["value"]
        elif mod_type == "manual":
            ob.manual = data["value"]
        ob.save()
        return Response(data, status=status.HTTP_201_CREATED)
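The water/light/manual branching in the `actuate` POST handler can also be written as a small dispatch table. A sketch under stated assumptions (`apply_override`, `FIELD_FOR_MOD_TYPE`, and `DummyOverride` are hypothetical names, not part of the app):

```python
# Maps each modType to the model field it drives, replacing the chained elif.
FIELD_FOR_MOD_TYPE = {"water": "water", "light": "light"}

def apply_override(ob, mod_type, value):
    if mod_type == "manual":
        ob.manual = value
    elif mod_type in FIELD_FOR_MOD_TYPE:
        ob.manual = True  # any actuator override also engages manual mode
        setattr(ob, FIELD_FOR_MOD_TYPE[mod_type], value)
    return ob

class DummyOverride:
    manual = False
    water = 0
    light = 0

ob = apply_override(DummyOverride(), "water", 5)
# ob.manual is True and ob.water == 5
```

Adding a new actuator then only means adding one dict entry rather than another branch.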
# app/tests/test_api.py (j33n/ShoppingListApi, MIT)
import unittest
import os
import json
import time
from flask import Flask
from app.app import create_app, db
from app.models import Users
class ApiTestCase(unittest.TestCase):
"""User test case"""
def setUp(self):
"""Initialise app and define test variables"""
self.app = create_app(config_name="testing")
self.client = self.app.test_client
self.user = {
'username': 'Stallion',
'email': 'rocky@test.com',
'password': 'secret',
'confirm_password': 'secret'
}
self.shoppinglist = {
'owner_id': '1',
'title': "My favorite meal",
'description': 'Items to cook my favorite meal'
}
self.shoppinglistitem = {
'owner_id': '1',
'shoppinglist_id': '1',
'item_title': "Vegetables",
'item_description': 'Carrots and Cabbages'
}
with self.app.app_context():
# create all tables
db.create_all()
def register_user(self):
return self.client().post('/auth/register', data=self.user)
def login_user(self, email="rocky@test.com", password="secret"):
user_data = {
'email': email,
'password': password
}
return self.client().post('/auth/login', data=user_data)
def access_token(self):
res = self.login_user()
access_token = json.loads(res.data.decode())
token = "Bearer " + access_token['token']
return token
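The `access_token` helper above builds the Authorization header from the login response: the login view returns JSON with a `token` key, which the tests send back as a Bearer credential. The same assembly in isolation (`bearer_header` is a hypothetical helper, not part of this test class):

```python
import json

def bearer_header(login_response_body):
    # Extract the 'token' key from the login JSON and wrap it in the
    # "Bearer <token>" form the middleware expects.
    token = json.loads(login_response_body)["token"]
    return {"Authorization": "Bearer " + token}

hdr = bearer_header('{"token": "abc123"}')
# hdr == {"Authorization": "Bearer abc123"}
```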
def test_user_creation(self):
"""Test we can create a user"""
response = self.register_user()
self.assertEqual(response.status_code, 200)
self.assertIn("User account created successfuly", str(response.data))
def test_password_mismatch(self):
"""Test user creates an account when he confirms password"""
response = self.client().post('/auth/register', data={
'username': 'Stallion',
'email': 'rocky@test.com',
'password': 'secret',
'confirm_password': 'secreto'
})
self.assertEqual(response.status_code, 202)
self.assertIn("Password does not match", str(response.data))
def test_missing_registration_data(self):
"""Test a user is not missing out a password"""
response = self.client().post('/auth/register', data={
'username': 'Stallion',
'email': 'rocky@test.com'
})
self.assertEqual(response.status_code, 500)
self.assertIn("Password must be non-empty.", str(response.data))
def test_account_duplicate(self):
"""Test users can't create similar accounts"""
response = self.register_user()
self.assertEqual(response.status_code, 200)
self.assertIn("User account created successfuly", str(response.data))
response1 = self.register_user()
self.assertEqual(response1.status_code, 202)
self.assertIn("User account already exists.", str(response1.data))
def test_empty_values(self):
"""Test a user is not missing out an email or password"""
response = self.client().post('/auth/register', data={
'username': 'Stallion',
'email': '',
'password': 'secret',
'confirm_password': 'secret'
})
self.assertEqual(response.status_code, 500)
self.assertIn(b"Email or Username can\'t be empty.", response.data)
def test_user_login(self):
"""Test user can login"""
self.register_user()
response = self.login_user()
self.assertEqual(response.status_code, 200)
self.assertIn("token", str(response.data))
# Test invalid credentials
wrong_cred_login = self.client().post('/auth/login', data={
'email': 'Stallion',
'password': 'secret'
})
self.assertEqual(wrong_cred_login.status_code, 202)
self.assertIn("Invalid credentials", str(wrong_cred_login.data))
def test_middleware(self):
"""Test a our middleware can't be broken"""
self.register_user()
mess_up_token = self.access_token() + "Mess up token"
response1 = self.client().get('/shoppinglists',
headers=dict(Authorization=mess_up_token))
self.assertEqual(response1.status_code, 403)
self.assertIn(b"Invalid token. Please log in again.", response1.data)
response2 = self.client().get('/shoppinglists',
headers=dict(Authorization=""))
self.assertEqual(response2.status_code, 500)
self.assertIn(b"Authorization is not provided", response2.data)
def test_create_shoppinglist(self):
"""Test user can create a shoppinglist"""
self.register_user()
# Test empty shoppinglists
response1 = self.client().get(
'/shoppinglists',
headers=dict(Authorization=self.access_token())
)
self.assertTrue("You don't have any shoppinglists for now.", response1.data)
self.assertEqual(200, response1.status_code)
# Test shoppinglists without authorization
response2 = self.client().post(
'/shoppinglists',
headers=dict(Authorization=self.access_token() + '_'),
data=self.shoppinglist
)
self.assertTrue('fail' in str(response2.data))
self.assertEqual(403, response2.status_code)
response3 = self.client().post(
'/shoppinglists',
headers=dict(Authorization=self.access_token()),
data=self.shoppinglist
)
self.assertTrue('My favorite meal' in str(response3.data))
self.assertEqual(201, response3.status_code)
def test_invalid_shoppinglists(self):
"""Test user can't create mal formatted shoppinglists"""
self.register_user()
response1 = self.client().post(
'/shoppinglists',
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'title': 666666,
'description': 'Items to cook my favorite meal'
}
)
response2 = self.client().post(
'/shoppinglists',
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'title': "Nyamameat",
'description': 'Items to cook my favorite meal'
}
)
response3 = self.client().post(
'/shoppinglists',
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'title': "",
'description': 'Items to cook my favorite meal'
}
)
self.assertIn(b'Value can\'t be numbers', response1.data)
self.assertEqual(202, response1.status_code)
self.assertIn(b'Value should be more than 10 characters', response2.data)
self.assertEqual(202, response2.status_code)
self.assertIn(b'Value can\'t be empty', response3.data)
self.assertEqual(202, response3.status_code)
def test_duplicate_shoppinglist(self):
"""Test user can't create two similar shoppinglists"""
self.register_user()
access_token = self.access_token()
self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertIn(b'Shopping List My favorite meal already exists', response.data)
self.assertEqual(response.status_code, 202)
def test_fetch_all_shoppinglists(self):
"""Test user is able to display all shopping lists"""
self.register_user()
access_token = self.access_token()
response1 = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertEqual(response1.status_code, 201)
response = self.client().get(
'/shoppinglists',
headers=dict(Authorization=access_token)
)
self.assertEqual(response.status_code, 202)
self.assertIn('My favorite meal', str(response.data))
def test_fetch_single_shoppinglist(self):
"""Test user is able to display a single shopping lists"""
self.register_user()
access_token = self.access_token()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertEqual(response.status_code, 201)
results = json.loads(response.data.decode())
get_single_sl = self.client().get(
'/shoppinglist/{0}'.format(results['id']),
headers=dict(Authorization=access_token)
)
self.assertIn(b"My favorite meal", get_single_sl.data)
self.assertEqual(get_single_sl.status_code, 201)
def test_non_existent_shoppinglist(self):
"""Test user can't access non existent shoppinglist"""
self.register_user()
access_token = self.access_token()
response = self.client().get(
'/shoppinglist/1',
headers=dict(Authorization=access_token)
)
self.assertIn(b"Requested value '1' was not found", response.data)
self.assertEqual(response.status_code, 202)
def test_update_shoppinglist(self):
"""Test a user can update a shopping list"""
self.register_user()
access_token = self.access_token()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertEqual(201, response.status_code)
results = json.loads(response.data.decode())
update_resp = self.client().put(
'/shoppinglist/{0}'.format(results['id']),
headers=dict(Authorization=access_token),
data={
'owner_id': '1',
'title': "My favorite shoes",
'description': 'Converse and Jordan 2015'
}
)
self.assertTrue(b"Converse and Jordan 2015" in update_resp.data)
self.assertTrue(b"Shopping List updated successfuly" in update_resp.data)
self.assertEqual(update_resp.status_code, 200)
# Test shoppinglist can't take an existing name on POST
response1 = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data={
'owner_id': '1',
'title': "My favorite shoes",
'description': 'Converse and Jordan 2016'
}
)
self.assertTrue(b"Shopping List My favorite shoes already exists" in response1.data)
self.assertEqual(202, response1.status_code)
# Test shoppinglist can't take an existing name on PUT
check_update = self.client().put(
'/shoppinglist/1',
headers=dict(Authorization=access_token),
data={
'title': "My favorite shoes",
'description': 'Converse and Jordan 2016'
}
)
self.assertTrue(b"Shopping List My favorite shoes already exists" in check_update.data)
self.assertEqual(202, check_update.status_code)
# Test invalid use of data on PUT
check_wrong_update_1 = self.client().put(
'/shoppinglist/1',
headers=dict(Authorization=access_token),
data={
'title': "666",
'description': 'Converse and Jordan 2016'
}
)
check_wrong_update_2 = self.client().put(
'/shoppinglist/1',
headers=dict(Authorization=access_token),
data={
'title': 'Fish',
'description': 'Converse and Jordan 2016'
}
)
check_wrong_update_3 = self.client().put(
'/shoppinglist/1',
headers=dict(Authorization=access_token),
data={
'title': '56',
'description': 'aaaa'
}
)
check_wrong_update_4 = self.client().put(
'/shoppinglist/1',
headers=dict(Authorization=access_token),
data={
'title': 'Converse and Jordan 2016',
'description': 'Shoe'
}
)
self.assertIn(b"Value can\'t be numbers", check_wrong_update_1.data)
self.assertEqual(202, check_wrong_update_1.status_code)
self.assertIn(b"Value should be more than 10 characters", check_wrong_update_2.data)
self.assertEqual(202, check_wrong_update_2.status_code)
self.assertIn(b"Value should be more than 10 characters", check_wrong_update_3.data)
self.assertIn(b"Value can\'t be numbers", check_wrong_update_3.data)
self.assertEqual(202, check_wrong_update_3.status_code)
self.assertIn(b"Value should be more than 10 characters", check_wrong_update_4.data)
self.assertEqual(202, check_wrong_update_4.status_code)
def test_invalid_data_update(self):
"""Test user can't update with invalid format title or description"""
self.register_user()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=self.access_token()),
data=self.shoppinglist
)
        self.assertEqual(response.status_code, 201)
response1 = self.client().put(
'/shoppinglist/1',
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'title': 666666,
'description': 'Items to cook my favorite meal'
}
)
response2 = self.client().put(
'/shoppinglist/1',
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'title': "Nyamameat",
'description': 'Items to cook my favorite meal'
}
)
response3 = self.client().put(
'/shoppinglist/1',
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'title': "",
'description': 'Items to cook my favorite meal'
}
)
response4 = self.client().put(
'/shoppinglist/1',
headers=dict(Authorization=self.access_token()),
data={
'owner_id': True,
'title': "Nyamameat",
'description': 'Items to cook my favorite meal'
}
)
response5 = self.client().put(
'/shoppinglist/1',
headers=dict(Authorization=self.access_token()),
data={
'owner_id': "",
'title': "",
'description': 'Items to cook my favorite meal'
}
)
self.assertIn(b'Value can\'t be numbers', response1.data)
self.assertEqual(202, response1.status_code)
self.assertIn(b'Value should be more than 10 characters', response2.data)
self.assertEqual(202, response2.status_code)
self.assertIn(b'Value can\'t be empty', response3.data)
self.assertEqual(202, response3.status_code)
self.assertIn(b'fail', response4.data)
self.assertEqual(202, response4.status_code)
self.assertIn(b'fail', response5.data)
self.assertEqual(202, response5.status_code)
def test_delete_shoppinglist(self):
"""Test a user can delete a shopping list"""
self.register_user()
access_token = self.access_token()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertEqual(201, response.status_code)
results = json.loads(response.data.decode())
update_resp = self.client().delete(
'/shoppinglist/{0}'.format(results['id']),
headers=dict(Authorization=access_token)
)
# Test delete non existing value
invalid_deletion = self.client().delete(
            '/shoppinglist/2',
headers=dict(Authorization=access_token)
)
self.assertIn(b"Requested value \'2\' was not found", invalid_deletion.data)
self.assertEqual(202, invalid_deletion.status_code)
self.assertFalse(b"Items to cook my favorite meal" in update_resp.data)
self.assertIn(b"Shopping List \'My favorite meal\' deleted successfuly", update_resp.data)
self.assertEqual(update_resp.status_code, 201)
def test_add_shoppinglistitem(self):
"""Test a user can add an item to shoppinglist"""
self.register_user()
access_token = self.access_token()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertEqual(response.status_code, 201)
results = json.loads(response.data.decode())
"""Test our shopping list is empty"""
check_emptyness = self.client().get(
'/shoppinglist/1/items',
headers=dict(Authorization=access_token)
)
self.assertTrue(b"You don\'t have any items for now" in check_emptyness.data)
self.assertEqual(202, check_emptyness.status_code)
# Test invalid values are not saved
create_invalid_item = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=access_token),
data={
'owner_id': '1',
'shoppinglist_id': '1',
'item_title': "",
'item_description': 'Carrots and Cabbages'
}
)
self.assertIn(b'Value can\'t be empty', create_invalid_item.data)
self.assertEqual(202, create_invalid_item.status_code)
create_invalid_item_2 = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=access_token),
data={
'owner_id': '1',
'shoppinglist_id': '1',
'item_title': "Carrots and Cabbages",
'item_description': '66666'
}
)
self.assertIn(b'Value can\'t be numbers', create_invalid_item_2.data)
self.assertEqual(202, create_invalid_item_2.status_code)
create_item = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=access_token),
data=self.shoppinglistitem
)
self.assertIn(b'Vegetables', create_item.data)
self.assertEqual(201, create_item.status_code)
create_invalid_item_2 = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=access_token),
data={
'owner_id': '1',
'shoppinglist_id': '1',
'item_title': "Carrots",
'item_description': '66666'
}
)
self.assertIn(b'Value should be more than 10 characters', create_invalid_item_2.data)
self.assertIn(b'Value can\'t be numbers', create_invalid_item_2.data)
self.assertEqual(202, create_invalid_item_2.status_code)
def test_invalid_item_value(self):
"""Test user can't update with invalid format title or description"""
self.register_user()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=self.access_token()),
data=self.shoppinglist
)
        self.assertEqual(response.status_code, 201)
results = json.loads(response.data.decode())
response1 = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'shoppinglist_id': results['id'],
'item_title': 666666,
'item_description': 'Carrots and Tomatoes'
}
)
response2 = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'shoppinglist_id': results['id'],
'item_title': "Meat",
'item_description': 'Carrots and Tomatoes'
}
)
response3 = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'shoppinglist_id': results['id'],
'item_title': "",
'item_description': 'Carrots and Tomatoes'
}
)
self.assertIn(b'Value can\'t be numbers', response1.data)
self.assertEqual(202, response1.status_code)
self.assertIn(b'Value should be more than 10 characters', response2.data)
self.assertEqual(202, response2.status_code)
self.assertIn(b'Value can\'t be empty', response3.data)
self.assertEqual(202, response3.status_code)
def test_get_all_shoppinglist_items(self):
"""Test that a user can get all items on a shoppiglist"""
self.register_user()
access_token = self.access_token()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertEqual(response.status_code, 201)
results = json.loads(response.data.decode())
create_item = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=access_token),
data=self.shoppinglistitem)
self.assertEqual(201, create_item.status_code)
get_items = self.client().get(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=access_token)
)
self.assertEqual(get_items.status_code, 202)
self.assertIn('Carrots and Cabbages', str(get_items.data))
def test_fetch_shoppinglist_item(self):
"""Test user can get a single item on the shoppiglist"""
self.register_user()
access_token = self.access_token()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertEqual(response.status_code, 201)
results = json.loads(response.data.decode())
create_item = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=access_token),
data=self.shoppinglistitem)
self.assertEqual(201, create_item.status_code)
results1 = json.loads(create_item.data.decode())
# Test one can be fetched
get_single_item = self.client().get(
'/shoppinglist/{0}/item/{1}'.format(results['id'], results1['item_id']),
headers=dict(Authorization=access_token)
)
self.assertEqual(get_single_item.status_code, 201)
self.assertIn('Carrots and Cabbages', str(get_single_item.data))
# Test item title can't be duplicated
create_duplicate_item = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=access_token),
data=self.shoppinglistitem)
self.assertEqual(create_duplicate_item.status_code, 202)
self.assertIn(b'Shopping List item Vegetables already exists', create_duplicate_item.data)
# Test non existing items
get_wrong_item = self.client().get(
            '/shoppinglist/{0}/item/3'.format(results['id']),
headers=dict(Authorization=access_token)
)
self.assertIn('Requested value \\\'3\\\' was not found', str(get_wrong_item.data))
self.assertEqual(get_wrong_item.status_code, 202)
def test_update_shoppinglistitem(self):
"""Test a user can update an item on shoppinglist"""
self.register_user()
access_token = self.access_token()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertEqual(201, response.status_code)
results = json.loads(response.data.decode())
create_item = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=access_token),
data=self.shoppinglistitem)
self.assertEqual(201, create_item.status_code)
results1 = json.loads(create_item.data.decode())
update_resp = self.client().put(
'/shoppinglist/{0}/item/{1}'.format(results['id'], results1['item_id']),
headers=dict(Authorization=access_token),
data={
'owner_id': '1',
'shoppinglist_id': '1',
'item_title': "Sausages and stuff",
'item_description': 'Carrots and Waffles'
}
)
self.assertTrue(b"Carrots and Waffles" in update_resp.data)
self.assertTrue(b"Shopping list item updated successfuly" in update_resp.data)
self.assertEqual(update_resp.status_code, 200)
# Test updating with existing name
non_existing_updates = self.client().put(
'/shoppinglist/{}/item/4'.format(results['id']),
headers=dict(Authorization=access_token),
data={
'owner_id': '1',
'shoppinglist_id': '1',
'item_title': "Sausages and stuff",
'item_description': 'Carrots and Waffles'
}
)
self.assertTrue(b"Shopping list item Sausages and stuff already exists" in non_existing_updates.data)
self.assertEqual(non_existing_updates.status_code, 202)
def test_invalid_item_update(self):
"""Test user can't update item with invalid title or description"""
self.register_user()
access_token = self.access_token()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=self.access_token()),
data=self.shoppinglist
)
        self.assertEqual(response.status_code, 201)
results = json.loads(response.data.decode())
create_item = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=access_token),
data=self.shoppinglistitem)
self.assertEqual(201, create_item.status_code)
results1 = json.loads(create_item.data.decode())
update_resp = self.client().put(
'/shoppinglist/{0}/item/{1}'.format(results['id'], results1['item_id']),
headers=dict(Authorization=access_token),
data={
'owner_id': '1',
'shoppinglist_id': '1',
'item_title': "Sausages and stuff",
'item_description': 'Carrots and Waffles'
}
)
response1 = self.client().put(
'/shoppinglist/{0}/item/{1}'.format(results['id'], results1['item_id']),
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'shoppinglist_id': results['id'],
'item_title': 666666,
'item_description': 'Carrots and Tomatoes'
}
)
response2 = self.client().put(
'/shoppinglist/{0}/item/{1}'.format(results['id'], results1['item_id']),
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'shoppinglist_id': results['id'],
'item_title': "Meat",
'item_description': 'Carrots and Tomatoes'
}
)
response3 = self.client().put(
'/shoppinglist/{0}/item/{1}'.format(results['id'], results1['item_id']),
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'shoppinglist_id': results['id'],
'item_title': "Carrots and Tomatoes",
'item_description': ''
}
)
response4 = self.client().put(
'/shoppinglist/{0}/item/{1}'.format(results['id'], results1['item_id']),
headers=dict(Authorization=self.access_token()),
data={
'owner_id': '1',
'shoppinglist_id': results['id'],
'item_title': "9999",
'item_description': "Fish"
}
)
self.assertIn(b'Value can\'t be numbers', response1.data)
self.assertEqual(202, response1.status_code)
self.assertIn(b'Value should be more than 10 characters', response2.data)
self.assertEqual(202, response2.status_code)
self.assertIn(b'Value can\'t be empty', response3.data)
self.assertEqual(202, response3.status_code)
self.assertIn(b'Value can\'t be numbers', response4.data)
self.assertIn(b'Value should be more than 10 characters', response4.data)
self.assertEqual(202, response4.status_code)
def test_delete_shoppinglistitem(self):
"""Test a user can delete an item on a shopping list"""
self.register_user()
access_token = self.access_token()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertEqual(201, response.status_code)
results = json.loads(response.data.decode())
create_item = self.client().post(
'/shoppinglist/{0}/items'.format(results['id']),
headers=dict(Authorization=access_token),
data=self.shoppinglistitem)
self.assertEqual(201, create_item.status_code)
results1 = json.loads(create_item.data.decode())
delete_item = self.client().delete(
'/shoppinglist/{0}/item/{1}'.format(results['id'], results1['item_id']),
headers=dict(Authorization=access_token)
)
self.assertFalse(b"Carrots and Cabbages" in delete_item.data)
self.assertIn(b"Shopping list item \'Vegetables\' deleted successfuly", delete_item.data)
self.assertEqual(delete_item.status_code, 201)
def test_delete_wrong_shoppinglistitem(self):
"""Test a user can delete an item on a shopping list"""
self.register_user()
access_token = self.access_token()
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertEqual(201, response.status_code)
results = json.loads(response.data.decode())
delete_item = self.client().delete(
'/shoppinglist/{0}/item/7'.format(results['id']),
headers=dict(Authorization=access_token)
)
self.assertTrue(b"Requested value '7' was not found" in delete_item.data)
self.assertEqual(delete_item.status_code, 202)
def test_token_unprovided(self):
"""Test a token is always provided on login"""
self.register_user()
url_list_get = ['/shoppinglists', '/shoppinglist/1', '/shoppinglist/1/items', '/shoppinglist/1/item/1']
for url in url_list_get:
response = self.client().get(
url
)
self.assertIn(b"Authorization is not provided", response.data)
self.assertEqual(response.status_code, 500)
url_list_post = ['/shoppinglists', '/shoppinglist/1/items', 'auth/logout']
for post_url in url_list_post:
# Test post requests without Authorization
response2 = self.client().post(
post_url
)
self.assertIn(b"Authorization is not provided", response2.data)
self.assertEqual(response2.status_code, 500)
url_list_put = ['/shoppinglist/1', '/shoppinglist/1/item/1']
for put_url in url_list_put:
# Test put requests without Authorization
response3 = self.client().put(
put_url
)
self.assertIn(b"Authorization is not provided", response3.data)
self.assertEqual(response3.status_code, 500)
url_list_delete = ['/shoppinglist/1', '/shoppinglist/2/item/2']
for delete_url in url_list_delete:
# Test delete requests without Authorization
response4 = self.client().delete(
delete_url
)
self.assertIn(b"Authorization is not provided", response4.data)
self.assertEqual(response4.status_code, 500)
def test_welcome_page(self):
"""Test welcome page"""
response = self.client().get('/')
self.assertTrue(b"Welcome to Shopping List API" in response.data)
self.assertEqual(response.status_code, 200)
def test_valid_logout(self):
"""Test a user can logout smoothly"""
self.register_user()
access_token = self.access_token()
logout_response = self.client().post(
'/auth/logout',
headers=dict(Authorization=access_token)
)
self.assertIn(b"Successfully logged out.", logout_response.data)
self.assertEqual(200, logout_response.status_code)
def test_token_duplicate(self):
"""Test that a token can't be used twice ever"""
self.register_user()
access_token = self.access_token()
logout_response = self.client().post(
'/auth/logout',
headers=dict(Authorization=access_token)
)
self.assertIn(b"Successfully logged out.", logout_response.data)
self.assertEqual(200, logout_response.status_code)
response = self.client().get(
'/shoppinglists',
headers=dict(Authorization=access_token)
)
self.assertIn(b"Token created. Please log in again.", response.data)
self.assertEqual(403, response.status_code)
def test_logout_invalid_token(self):
"""Test a logout requires a valid token"""
self.register_user()
mess_up_token = self.access_token() + "Mess up token"
logout_response = self.client().post(
'/auth/logout',
headers=dict(Authorization=mess_up_token)
)
self.assertEqual(logout_response.status_code, 403)
self.assertIn(b"Invalid token. Please log in again.", logout_response.data)
def test_token_expiration(self):
""" Test if a token has expired after a certain time"""
self.register_user()
access_token = self.access_token()
time.sleep(61)
response = self.client().post(
'/shoppinglists',
headers=dict(Authorization=access_token),
data=self.shoppinglist
)
self.assertIn(b"Signature expired. Please log in again.", response.data)
def tearDown(self):
"""teardown all initialized variables."""
with self.app.app_context():
# drop all tables
db.session.remove()
db.drop_all()
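The invalid-update assertions above imply a small field validator on the server side. Below is a minimal sketch of what such a check might look like; the function name and the exact ordering of rules are assumptions inferred from the expected error messages, not the project's actual code.

```python
# Hypothetical validator inferred from the error messages asserted in
# test_invalid_item_update; the real API's implementation may differ.
def validate_item_field(value):
    errors = []
    stripped = (value or "").strip()
    if not stripped:
        # Empty input short-circuits: no other rules apply.
        errors.append("Value can't be empty")
        return errors
    if stripped.isdigit():
        errors.append("Value can't be numbers")
    if len(stripped) < 10:
        errors.append("Value should be more than 10 characters")
    return errors
```

Note how a purely numeric short value like `"9999"` collects two errors at once, which matches the double assertion on `response4` above.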
0777addbc2c2eb967c9af2ff4b4e530173ea459d | 13,856 | py | Python | sahara/tests/unit/conductor/manager/test_templates.py | redhat-openstack/sahara | Apache-2.0
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import testtools
from sahara.conductor import manager
from sahara import context
from sahara import exceptions as ex
import sahara.tests.unit.conductor.base as test_base
import sahara.tests.unit.conductor.manager.test_clusters as cluster_tests
SAMPLE_NGT = {
"plugin_name": "test_plugin",
"flavor_id": "42",
"tenant_id": "tenant_1",
"hadoop_version": "test_version",
"name": "ngt_test",
"node_processes": ["p1", "p2"],
"floating_ip_pool": None,
"availability_zone": None,
"node_configs": {
"service_1": {
"config_1": "value_1"
},
"service_2": {
"config_1": "value_1"
}
}
}
SAMPLE_CLT = {
"plugin_name": "test_plugin",
"tenant_id": "tenant_1",
"hadoop_version": "test_version",
"name": "clt_test",
"cluster_configs": {
"service_1": {
"config_1": "value_1"
},
"service_2": {
"config_1": "value_1"
}
},
"node_groups": [
{
"name": "ng_1",
"flavor_id": "42",
"node_processes": ["p1", "p2"],
"count": 1,
"floating_ip_pool": None,
"security_groups": None,
"availability_zone": None,
},
{
"name": "ng_2",
"flavor_id": "42",
"node_processes": ["p3", "p4"],
"count": 3,
"floating_ip_pool": None,
"security_groups": ["group1", "group2"],
"availability_zone": None,
}
]
}
class NodeGroupTemplates(test_base.ConductorManagerTestCase):
def __init__(self, *args, **kwargs):
super(NodeGroupTemplates, self).__init__(
checks=[
lambda: SAMPLE_CLT,
lambda: SAMPLE_NGT,
lambda: manager.CLUSTER_DEFAULTS,
lambda: manager.NODE_GROUP_DEFAULTS,
lambda: manager.INSTANCE_DEFAULTS,
], *args, **kwargs)
def test_minimal_ngt_create_list_delete(self):
ctx = context.ctx()
self.api.node_group_template_create(ctx, SAMPLE_NGT)
lst = self.api.node_group_template_get_all(ctx)
self.assertEqual(1, len(lst))
ngt_id = lst[0]['id']
self.api.node_group_template_destroy(ctx, ngt_id)
lst = self.api.node_group_template_get_all(ctx)
self.assertEqual(0, len(lst))
def test_duplicate_ngt_create(self):
ctx = context.ctx()
self.api.node_group_template_create(ctx, SAMPLE_NGT)
with testtools.ExpectedException(ex.DBDuplicateEntry):
self.api.node_group_template_create(ctx, SAMPLE_NGT)
def test_ngt_fields(self):
ctx = context.ctx()
ngt_db_obj_id = self.api.node_group_template_create(
ctx, SAMPLE_NGT)['id']
ngt_db_obj = self.api.node_group_template_get(ctx, ngt_db_obj_id)
self.assertIsInstance(ngt_db_obj, dict)
for key, val in SAMPLE_NGT.items():
self.assertEqual(val, ngt_db_obj.get(key),
"Key not found %s" % key)
def test_ngt_delete(self):
ctx = context.ctx()
db_obj_ngt = self.api.node_group_template_create(ctx, SAMPLE_NGT)
_id = db_obj_ngt['id']
self.api.node_group_template_destroy(ctx, _id)
with testtools.ExpectedException(ex.NotFoundException):
self.api.node_group_template_destroy(ctx, _id)
def test_ngt_delete_default(self):
ctx = context.ctx()
vals = copy.copy(SAMPLE_NGT)
vals["is_default"] = True
db_obj_ngt = self.api.node_group_template_create(ctx, vals)
_id = db_obj_ngt['id']
with testtools.ExpectedException(ex.DeletionFailed):
self.api.node_group_template_destroy(ctx, _id)
self.api.node_group_template_destroy(ctx, _id, ignore_default=True)
with testtools.ExpectedException(ex.NotFoundException):
self.api.node_group_template_destroy(ctx, _id)
def test_ngt_search(self):
ctx = context.ctx()
self.api.node_group_template_create(ctx, SAMPLE_NGT)
lst = self.api.node_group_template_get_all(ctx)
self.assertEqual(1, len(lst))
kwargs = {'name': SAMPLE_NGT['name'],
'plugin_name': SAMPLE_NGT['plugin_name']}
lst = self.api.node_group_template_get_all(ctx, **kwargs)
self.assertEqual(1, len(lst))
# Valid field but no matching value
kwargs = {'name': SAMPLE_NGT['name']+"foo"}
lst = self.api.node_group_template_get_all(ctx, **kwargs)
self.assertEqual(0, len(lst))
# Invalid field
lst = self.api.node_group_template_get_all(ctx, **{'badfield': 'junk'})
self.assertEqual(0, len(lst))
def test_ngt_update(self):
ctx = context.ctx()
ngt = self.api.node_group_template_create(ctx, SAMPLE_NGT)
ngt_id = ngt["id"]
UPDATE_NAME = "UpdatedSampleNGTName"
update_values = {"name": UPDATE_NAME}
updated_ngt = self.api.node_group_template_update(ctx,
ngt_id,
update_values)
self.assertEqual(UPDATE_NAME, updated_ngt["name"])
updated_ngt = self.api.node_group_template_get(ctx, ngt_id)
self.assertEqual(UPDATE_NAME, updated_ngt["name"])
with testtools.ExpectedException(ex.NotFoundException):
self.api.node_group_template_update(ctx, -1, update_values)
ngt = self.api.node_group_template_create(ctx, SAMPLE_NGT)
ngt_id = ngt['id']
with testtools.ExpectedException(ex.DBDuplicateEntry):
self.api.node_group_template_update(ctx, ngt_id, update_values)
def test_ngt_update_default(self):
ctx = context.ctx()
vals = copy.copy(SAMPLE_NGT)
vals["is_default"] = True
ngt = self.api.node_group_template_create(ctx, vals)
ngt_id = ngt["id"]
UPDATE_NAME = "UpdatedSampleNGTName"
update_values = {"name": UPDATE_NAME}
with testtools.ExpectedException(ex.UpdateFailedException):
self.api.node_group_template_update(ctx,
ngt_id,
update_values)
updated_ngt = self.api.node_group_template_update(ctx,
ngt_id,
update_values,
ignore_default=True)
self.assertEqual(UPDATE_NAME, updated_ngt["name"])
class ClusterTemplates(test_base.ConductorManagerTestCase):
def __init__(self, *args, **kwargs):
super(ClusterTemplates, self).__init__(
checks=[
lambda: SAMPLE_CLT,
lambda: SAMPLE_NGT,
lambda: manager.CLUSTER_DEFAULTS,
lambda: manager.NODE_GROUP_DEFAULTS,
lambda: manager.INSTANCE_DEFAULTS,
], *args, **kwargs)
def test_minimal_clt_create_list_delete(self):
ctx = context.ctx()
self.api.cluster_template_create(ctx, SAMPLE_CLT)
lst = self.api.cluster_template_get_all(ctx)
self.assertEqual(1, len(lst))
clt_id = lst[0]['id']
self.api.cluster_template_destroy(ctx, clt_id)
lst = self.api.cluster_template_get_all(ctx)
self.assertEqual(0, len(lst))
with testtools.ExpectedException(ex.NotFoundException):
self.api.cluster_template_destroy(ctx, clt_id)
def test_duplicate_clt_create(self):
ctx = context.ctx()
self.api.cluster_template_create(ctx, SAMPLE_CLT)
with testtools.ExpectedException(ex.DBDuplicateEntry):
self.api.cluster_template_create(ctx, SAMPLE_CLT)
def test_clt_fields(self):
ctx = context.ctx()
clt_db_obj_id = self.api.cluster_template_create(ctx, SAMPLE_CLT)['id']
clt_db_obj = self.api.cluster_template_get(ctx, clt_db_obj_id)
self.assertIsInstance(clt_db_obj, dict)
for key, val in SAMPLE_CLT.items():
if key == 'node_groups':
# this will be checked separately
continue
self.assertEqual(val, clt_db_obj.get(key),
"Key not found %s" % key)
for ng in clt_db_obj["node_groups"]:
ng.pop("created_at")
ng.pop("updated_at")
ng.pop("id")
ng.pop("tenant_id")
self.assertEqual(clt_db_obj_id, ng.pop("cluster_template_id"))
ng.pop("image_id")
ng.pop("node_configs")
ng.pop("node_group_template_id")
ng.pop("volume_mount_prefix")
ng.pop("volumes_size")
ng.pop("volumes_per_node")
ng.pop("volumes_availability_zone")
ng.pop("volume_type")
ng.pop("auto_security_group")
ng.pop("is_proxy_gateway")
ng.pop('volume_local_to_instance')
self.assertEqual(SAMPLE_CLT["node_groups"],
clt_db_obj["node_groups"])
def test_clt_delete(self):
ctx = context.ctx()
db_obj_clt = self.api.cluster_template_create(ctx, SAMPLE_CLT)
_id = db_obj_clt['id']
self.api.cluster_template_destroy(ctx, _id)
with testtools.ExpectedException(ex.NotFoundException):
self.api.cluster_template_destroy(ctx, _id)
def test_clt_delete_default(self):
ctx = context.ctx()
vals = copy.copy(SAMPLE_CLT)
vals["is_default"] = True
db_obj_clt = self.api.cluster_template_create(ctx, vals)
_id = db_obj_clt['id']
with testtools.ExpectedException(ex.DeletionFailed):
self.api.cluster_template_destroy(ctx, _id)
self.api.cluster_template_destroy(ctx, _id, ignore_default=True)
with testtools.ExpectedException(ex.NotFoundException):
self.api.cluster_template_destroy(ctx, _id)
def test_clt_search(self):
ctx = context.ctx()
self.api.cluster_template_create(ctx, SAMPLE_CLT)
lst = self.api.cluster_template_get_all(ctx)
self.assertEqual(1, len(lst))
kwargs = {'name': SAMPLE_CLT['name'],
'plugin_name': SAMPLE_CLT['plugin_name']}
lst = self.api.cluster_template_get_all(ctx, **kwargs)
self.assertEqual(1, len(lst))
# Valid field but no matching value
kwargs = {'name': SAMPLE_CLT['name']+"foo"}
lst = self.api.cluster_template_get_all(ctx, **kwargs)
self.assertEqual(0, len(lst))
# Invalid field
lst = self.api.cluster_template_get_all(ctx, **{'badfield': 'junk'})
self.assertEqual(0, len(lst))
def test_clt_update(self):
ctx = context.ctx()
clt = self.api.cluster_template_create(ctx, SAMPLE_CLT)
clt_id = clt["id"]
UPDATE_NAME = "UpdatedClusterTemplate"
update_values = {"name": UPDATE_NAME}
updated_clt = self.api.cluster_template_update(ctx,
clt_id,
update_values)
self.assertEqual(UPDATE_NAME, updated_clt["name"])
updated_clt = self.api.cluster_template_get(ctx, clt_id)
self.assertEqual(UPDATE_NAME, updated_clt["name"])
# check duplicate name handling
clt = self.api.cluster_template_create(ctx, SAMPLE_CLT)
clt_id = clt["id"]
with testtools.ExpectedException(ex.DBDuplicateEntry):
self.api.cluster_template_update(ctx, clt_id, update_values)
with testtools.ExpectedException(ex.NotFoundException):
self.api.cluster_template_update(ctx, -1, update_values)
# create a cluster and try updating the referenced cluster template
cluster_val = copy.deepcopy(cluster_tests.SAMPLE_CLUSTER)
cluster_val['name'] = "ClusterTemplateUpdateTestCluster"
cluster_val['cluster_template_id'] = clt['id']
self.api.cluster_create(ctx, cluster_val)
update_values = {"name": "noUpdateInUseName"}
with testtools.ExpectedException(ex.UpdateFailedException):
self.api.cluster_template_update(ctx, clt['id'], update_values)
def test_clt_update_default(self):
ctx = context.ctx()
vals = copy.copy(SAMPLE_CLT)
vals["is_default"] = True
clt = self.api.cluster_template_create(ctx, vals)
clt_id = clt["id"]
UPDATE_NAME = "UpdatedClusterTemplate"
update_values = {"name": UPDATE_NAME}
with testtools.ExpectedException(ex.UpdateFailedException):
self.api.cluster_template_update(ctx,
clt_id,
update_values)
updated_clt = self.api.cluster_template_update(ctx,
clt_id,
update_values,
ignore_default=True)
self.assertEqual(UPDATE_NAME, updated_clt["name"])
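Both template classes above exercise the same guard: a template flagged `is_default` refuses `destroy` and `update` unless the caller passes `ignore_default=True`. A minimal sketch of that pattern follows; the names are illustrative, not sahara's actual internals.

```python
class DeletionFailed(Exception):
    """Raised when a protected (default) template is deleted without opt-out."""
    pass

def destroy_template(template, ignore_default=False):
    # Default templates are protected unless the caller opts out explicitly.
    if template.get("is_default") and not ignore_default:
        raise DeletionFailed("Template is a default template")
    # ... the actual deletion of the database row would happen here ...
    return True
```

The same two-step shape (raise by default, succeed with `ignore_default=True`) is what `test_ngt_delete_default` and `test_clt_delete_default` assert.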
0779bc8c22db22f4f35e50c66b2b51f9b0d1b987 | 404 | py | Python | HackerEarth - DSA July/Forming a regular polygon/Forming a regular polygon.py | prasadbhatane/Contests | MIT
# Write your code here
N = int(input())
arr = list(map(int, input().split()))
def give_soln(n):
# Count subsets of size >= 3: 2^n minus the empty set, singletons and pairs.
# Integer division keeps the result exact (the original /2 produced a float).
return (2**n) - (1 + n + n*(n-1)//2)
my_dict = dict()
for i in arr:
if i in my_dict:
my_dict[i] += 1
else:
my_dict[i] = 1
count = 0
for side in my_dict:
if my_dict[side] >=3:
count += give_soln(my_dict[side])
print(int(count)%(10**9 + 7))
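`give_soln(n)` counts the subsets of size three or more among `n` equal sides: all `2**n` subsets minus the empty set, the `n` singletons, and the `n*(n-1)/2` pairs. A quick brute-force check of that identity for small `n` (illustrative only, not part of the submission):

```python
from itertools import combinations

def subsets_of_three_or_more(n):
    # Closed form: 2^n - C(n,0) - C(n,1) - C(n,2)
    return 2**n - (1 + n + n * (n - 1) // 2)

def brute_force(n):
    # Explicitly enumerate every subset of size >= 3.
    return sum(1 for k in range(3, n + 1) for _ in combinations(range(n), k))

for n in range(3, 12):
    assert subsets_of_three_or_more(n) == brute_force(n)
```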
0790daeaf6218b91bf44144b325141316a32e389 | 1,823 | py | Python | tests/test_spatial_analysis.py | emilydolson/avida-spatial-tools | MIT
from avidaspatial import *
"""
===============================================================================
Bulk structure generator (:mod:`sknano.generators._bulk_structure_generator`)
===============================================================================
.. currentmodule:: sknano.generators._bulk_structure_generator
.. todo::
Add methods to perform fractional translation and cartesian translation
before structure generation.
.. todo::
Handle different units in output coordinates.
"""
from __future__ import absolute_import, division, print_function, \
unicode_literals
__docformat__ = 'restructuredtext en'
# import numpy as np
# from sknano.core import pluralize
# from sknano.core.math import Point, Vector
from sknano.core.crystallography import AlphaQuartz, DiamondStructure, \
Iron, Gold, Copper, BCCStructure, FCCStructure, CaesiumChlorideStructure, \
RocksaltStructure, ZincblendeStructure, MoS2
from ._base import BulkGeneratorBase
__all__ = ['DiamondStructureGenerator',
'BCCStructureGenerator', 'FCCStructureGenerator',
'CaesiumChlorideStructureGenerator',
'RocksaltStructureGenerator', 'ZincblendeStructureGenerator',
'AlphaQuartzGenerator', 'IronGenerator',
'GoldGenerator', 'CopperGenerator', 'MoS2Generator']
class AlphaQuartzGenerator(BulkGeneratorBase, AlphaQuartz):
""":class:`AlphaQuartz` generator class.
Parameters
----------
a, c : :class:`~python:float`
scaling_matrix : {None, :class:`~python:float`, :class:`~python:list`}
"""
def save(self, fname='alpha_quartz', **kwargs):
super().save(fname=fname, scaling_matrix=self.scaling_matrix, **kwargs)
class DiamondStructureGenerator(BulkGeneratorBase, DiamondStructure):
""":class:`DiamondStructure` generator class.
Parameters
----------
a : class:`~python:float`
"""
def save(self, fname='diamond', **kwargs):
super().save(fname=fname, scaling_matrix=self.scaling_matrix, **kwargs)
class CaesiumChlorideStructureGenerator(BulkGeneratorBase,
CaesiumChlorideStructure):
""":class:`CaesiumChlorideStructure` generator class."""
def save(self, fname='caesium_chloride', **kwargs):
super().save(fname=fname, scaling_matrix=self.scaling_matrix, **kwargs)
class RocksaltStructureGenerator(BulkGeneratorBase, RocksaltStructure):
""":class:`RocksaltStructure` generator class."""
def save(self, fname='rock_salt', **kwargs):
super().save(fname=fname, scaling_matrix=self.scaling_matrix, **kwargs)
class ZincblendeStructureGenerator(BulkGeneratorBase, ZincblendeStructure):
""":class:`ZincblendeStructure` generator class."""
def save(self, fname='zincblende', **kwargs):
super().save(fname=fname, scaling_matrix=self.scaling_matrix, **kwargs)
class BCCStructureGenerator(BulkGeneratorBase, BCCStructure):
""":class:`BCCStructure` generator class."""
def save(self, fname='bcc_structure', **kwargs):
super().save(fname=fname, scaling_matrix=self.scaling_matrix, **kwargs)
class FCCStructureGenerator(BulkGeneratorBase, FCCStructure):
""":class:`FCCStructure` generator class."""
def save(self, fname='fcc_structure', **kwargs):
super().save(fname=fname, scaling_matrix=self.scaling_matrix, **kwargs)
class IronGenerator(BulkGeneratorBase, Iron):
""":class:`Iron` generator class."""
def save(self, fname='iron', **kwargs):
super().save(fname=fname, scaling_matrix=self.scaling_matrix, **kwargs)
class GoldGenerator(BulkGeneratorBase, Gold):
""":class:`Gold` generator class."""
def save(self, fname='gold', **kwargs):
super().save(fname=fname, scaling_matrix=self.scaling_matrix, **kwargs)
class CopperGenerator(BulkGeneratorBase, Copper):
""":class:`Copper` generator class."""
def save(self, fname='copper', **kwargs):
super().save(fname=fname, scaling_matrix=self.scaling_matrix, **kwargs)
class MoS2Generator(BulkGeneratorBase, MoS2):
""":class:`MoS2` generator class."""
def save(self, fname='MoS2', **kwargs):
super().save(fname=fname, scaling_matrix=self.scaling_matrix, **kwargs)
| 35.838983 | 79 | 0.682667 | 388 | 4,229 | 7.311856 | 0.255155 | 0.105393 | 0.042651 | 0.062037 | 0.382446 | 0.355657 | 0.260486 | 0.260486 | 0.260486 | 0.260486 | 0 | 0.001941 | 0.147316 | 4,229 | 117 | 80 | 36.145299 | 0.784803 | 0.283519 | 0 | 0.234043 | 0 | 0 | 0.117989 | 0.052668 | 0 | 0 | 0 | 0.017094 | 0 | 1 | 0.234043 | false | 0 | 0.06383 | 0 | 0.531915 | 0.021277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
077b3dde939ef88f2bf9ea108b6e64e362737c9d | 652 | py | Python | openresty/setup.py | oberhamsi/FrameworkBenchmarks | 660a66d51a9aad10b43c0660208fb13c098121af | [
"BSD-3-Clause"
] | 1 | 2017-11-02T13:25:06.000Z | 2017-11-02T13:25:06.000Z | openresty/setup.py | lightyeare/FrameworkBenchmarks | 40489856a0480c85227993d91de7d66e9224f8b4 | [
"BSD-3-Clause"
] | null | null | null | openresty/setup.py | lightyeare/FrameworkBenchmarks | 40489856a0480c85227993d91de7d66e9224f8b4 | [
"BSD-3-Clause"
] | null | null | null | import subprocess
import sys
import setup_util
import os
def start(args, logfile, errfile):
setup_util.replace_text("openresty/nginx.conf", "CWD", os.getcwd())
setup_util.replace_text("openresty/app.lua", "DBHOSTNAME", args.database_host)
subprocess.Popen('sudo /usr/local/openresty/nginx/sbin/nginx -c `pwd`/nginx.conf -g "worker_processes ' + str((args.max_threads)) + ';"', shell=True, cwd="openresty", stderr=errfile, stdout=logfile)
return 0
def stop(logfile, errfile):
subprocess.Popen('sudo /usr/local/openresty/nginx/sbin/nginx -c `pwd`/nginx.conf -s stop', shell=True, cwd="openresty", stderr=errfile, stdout=logfile)
return 0
| 38.352941 | 200 | 0.743865 | 94 | 652 | 5.074468 | 0.468085 | 0.056604 | 0.067086 | 0.083857 | 0.612159 | 0.490566 | 0.490566 | 0.490566 | 0.490566 | 0.490566 | 0 | 0.003413 | 0.101227 | 652 | 16 | 201 | 40.75 | 0.81058 | 0 | 0 | 0.166667 | 0 | 0.166667 | 0.343558 | 0.113497 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
077bf18e268f3d68b6efd8ead95c52db31d7004a | 249 | py | Python | configuration/constant.py | smtr42/P5_openfoodfact | bb7cba6eea356b98bb3a556914a45c4e33193a8f | [
"Unlicense",
"MIT"
] | null | null | null | configuration/constant.py | smtr42/P5_openfoodfact | bb7cba6eea356b98bb3a556914a45c4e33193a8f | [
"Unlicense",
"MIT"
] | null | null | null | configuration/constant.py | smtr42/P5_openfoodfact | bb7cba6eea356b98bb3a556914a45c4e33193a8f | [
"Unlicense",
"MIT"
] | null | null | null | """This is where you modify constants"""
USER = "user"
PASSWORD = "pwd"
DATABASE_NAME = "mydb"
CATEGORIES = ["Fromages",
"Desserts",
"Viandes",
"Chocolats",
"Snacks", ]
PRODUCT_NUMBER = 1000
| 19.153846 | 40 | 0.526104 | 22 | 249 | 5.863636 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024242 | 0.337349 | 249 | 12 | 41 | 20.75 | 0.757576 | 0.136546 | 0 | 0 | 0 | 0 | 0.23445 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.111111 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
0788c34e5039cb8b8ff6997ef5251792092e2b7c | 602 | py | Python | test/test_15_lims_test_case_2_5.py | SazonovPavel/lims-tst-web-portal | 75f6538d7e16ce1fc0c96ea6f499b95a7eab1cfd | [
"Apache-2.0"
] | null | null | null | test/test_15_lims_test_case_2_5.py | SazonovPavel/lims-tst-web-portal | 75f6538d7e16ce1fc0c96ea6f499b95a7eab1cfd | [
"Apache-2.0"
] | null | null | null | test/test_15_lims_test_case_2_5.py | SazonovPavel/lims-tst-web-portal | 75f6538d7e16ce1fc0c96ea6f499b95a7eab1cfd | [
"Apache-2.0"
] | 1 | 2019-08-11T18:53:18.000Z | 2019-08-11T18:53:18.000Z | # Заява про звуження переліку лікарських форм (Додаток 16)
def test_15_lims_test_case_2_5(app):
app.session.login(password='111',
path_to_key='C:/98745612_7878789898_DU180323123055.ZS2')
app.first_application.create_fifth_application()
app.first_application.change_mpd_fifth()
app.first_application.notifications_and_license_terms_fourth(comment='Коментар тест')
app.first_application.submit_application_fourth(path_to_key='C:/98745612_7878789898_DU180323123055.ZS2',
password='111')
app.session.logout()
| 35.411765 | 108 | 0.717608 | 72 | 602 | 5.611111 | 0.611111 | 0.079208 | 0.188119 | 0.049505 | 0.222772 | 0.222772 | 0.222772 | 0.222772 | 0 | 0 | 0 | 0.152577 | 0.194352 | 602 | 16 | 109 | 37.625 | 0.680412 | 0.093023 | 0 | 0 | 0 | 0 | 0.185662 | 0.150735 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.222222 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
0790daeaf6218b91bf44144b325141316a32e389 | 1,823 | py | Python | tests/test_spatial_analysis.py | emilydolson/avida-spatial-tools | 7beb0166ccefad5fa722215b030ac2a53d62b59e | [
"MIT"
] | 1 | 2018-06-12T18:31:40.000Z | 2018-06-12T18:31:40.000Z | tests/test_spatial_analysis.py | emilydolson/avida-spatial-tools | 7beb0166ccefad5fa722215b030ac2a53d62b59e | [
"MIT"
] | 1 | 2016-02-03T23:37:09.000Z | 2016-02-03T23:37:09.000Z | tests/test_spatial_analysis.py | emilydolson/avida-spatial-tools | 7beb0166ccefad5fa722215b030ac2a53d62b59e | [
"MIT"
] | null | null | null | from avidaspatial import *
import numpy as np
expected = [[1.4466166676282082, 0.9864267287308424, 0.9864267287308424,
1.224394445405986, 1.224394445405986, 0.5032583347756456, 0.0,
0.0, 0.0, 0.5032583347756456, 1.4466166676282082],
[1.4466166676282082, 0.9864267287308424, 0.9864267287308424,
1.3516441151533924, 1.3516441151533924, 0.7642045065086203,
0.0, 0.0, 0.0, 0.5032583347756456, 1.4466166676282082],
[1.224394445405986, 0.9864267287308424, 1.224394445405986,
1.5304930567574826, 1.5304930567574826, 0.7642045065086203,
0.0, 0.0, 0.0, 0.5032583347756456, 1.224394445405986],
[0.5032583347756456, 0.5032583347756456, 0.9182958340544896,
1.3516441151533924, 1.3516441151533924, 0.5032583347756456, 0.0,
0.0, 0.0, 0.0, 0.5032583347756456],
[0.9864267287308424, 0.9864267287308424, 0.7642045065086203,
0.7642045065086203, 0.7642045065086203, 0.0, 0.0, 0.0, 0.0,
0.0, 0.9864267287308424]]
def test_diversity_map_spatial_weights():
world = load_grid_data("tests/grid_task.10000.dat")
world = agg_grid(world, mode)
w = make_toroidal_weights(len(world), len(world[0]), rook=False)
data = diversity_map_spatial_weights(world, w)
for i, row in enumerate(data):
for j, num in enumerate(row):
assert(np.isclose(num, expected[i][j]))
def test_diversity_map():
world = load_grid_data("tests/grid_task.10000.dat")
world = agg_grid(world, mode)
data = diversity_map(world)
for i, row in enumerate(data):
for j, num in enumerate(row):
assert(np.isclose(num, expected[i][j]))
if __name__ == "__main__":
test_diversity_map()
test_diversity_map_spatial_weights()
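The expected values in this file look like per-cell Shannon entropies of each cell's neighborhood (note the `0.0` entries where a neighborhood is uniform). A minimal sketch of that measure is below; avidaspatial's actual `diversity_map` may define neighborhoods and weights differently.

```python
import math
from collections import Counter

def shannon_entropy(values):
    # H = -sum(p * log2(p)) over the frequency p of each distinct value.
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A uniform neighborhood gives 0 bits, a 50/50 split gives 1 bit, and four equally frequent values give 2 bits.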
079168d8efc1a7672059884ce7e32da6e7a8736c | 972 | py | Python | test/unit/test_payment_data_factory.py | tonchik-tm/yandex-checkout-sdk-python | MIT
import unittest
from yandex_checkout.domain.common.data_context import DataContext
from yandex_checkout.domain.common.payment_method_type import PaymentMethodType
from yandex_checkout.domain.models.payment_data.payment_data_factory import PaymentDataFactory
from yandex_checkout.domain.models.payment_data.request.payment_data_yandex_wallet import PaymentDataYandexWallet
from yandex_checkout.domain.models.payment_data.response.payment_data_webmoney import PaymentDataWebmoney
class TestPaymentDataFactory(unittest.TestCase):
def test_factory_method(self):
factory = PaymentDataFactory()
request_payment_data = factory.create({'type': PaymentMethodType.YANDEX_MONEY}, DataContext.REQUEST)
self.assertIsInstance(request_payment_data, PaymentDataYandexWallet)
response_payment_data = factory.create({'type': PaymentMethodType.WEBMONEY}, DataContext.RESPONSE)
self.assertIsInstance(response_payment_data, PaymentDataWebmoney)
| 54 | 113 | 0.842593 | 104 | 972 | 7.596154 | 0.307692 | 0.139241 | 0.113924 | 0.151899 | 0.34557 | 0.26962 | 0.155696 | 0 | 0 | 0 | 0 | 0 | 0.093621 | 972 | 17 | 114 | 57.176471 | 0.896708 | 0 | 0 | 0 | 0 | 0 | 0.00823 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.076923 | false | 0 | 0.461538 | 0 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0797a7bbabc3b56be63b12947025d9b48c508e76 | 1,104 | py | Python | config.py | Pechsopha/KITPoint | 076890838ca7f57b76f7c9a9a4101c9e90b13d8b | [
"MIT"
] | 1 | 2019-10-16T14:27:29.000Z | 2019-10-16T14:27:29.000Z | config.py | Pechsopha/KITPoint | 076890838ca7f57b76f7c9a9a4101c9e90b13d8b | [
"MIT"
] | null | null | null | config.py | Pechsopha/KITPoint | 076890838ca7f57b76f7c9a9a4101c9e90b13d8b | [
"MIT"
] | null | null | null | from os import environ
class Config(object):
SECRET_KEY = 'key'
SQLALCHEMY_DATABASE_URI = 'sqlite:///database.db'
SQLALCHEMY_TRACK_MODIFICATIONS = False
# THEME SUPPORT
# if set then url_for('static', filename='', theme='')
# will add the theme name to the static URL:
# /static/<DEFAULT_THEME>/filename
# DEFAULT_THEME = "themes/dark"
DEFAULT_THEME = None
class ProductionConfig(Config):
DEBUG = False
# Security
SESSION_COOKIE_HTTPONLY = True
REMEMBER_COOKIE_HTTPONLY = True
REMEMBER_COOKIE_DURATION = 3600
# PostgreSQL database
SQLALCHEMY_DATABASE_URI = 'postgresql://{}:{}@{}:{}/{}'.format(
environ.get('GENTELELLA_DATABASE_USER', 'postgres'),
environ.get('GENTELELLA_DATABASE_PASSWORD', 'postgres'),
environ.get('GENTELELLA_DATABASE_HOST', 'localhost'),
environ.get('GENTELELLA_DATABASE_PORT', 5432),
environ.get('GENTELELLA_DATABASE_NAME', 'flask')
)
class DebugConfig(Config):
DEBUG = True
config_dict = {
'Production': ProductionConfig,
'Debug': DebugConfig
}
| 25.674419 | 67 | 0.67663 | 116 | 1,104 | 6.198276 | 0.517241 | 0.069541 | 0.139082 | 0.194715 | 0.180807 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009112 | 0.20471 | 1,104 | 42 | 68 | 26.285714 | 0.809795 | 0.186594 | 0 | 0 | 0 | 0 | 0.247191 | 0.193258 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.041667 | 0.041667 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
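The `config_dict` at the end of `config.py` is a name-to-class lookup, typically used to pick a config at startup and feed it to `app.config.from_object(...)`. A minimal standalone sketch of that selection step — the classes are re-declared in trimmed form so the block runs on its own, and `APP_CONFIG_MODE` is a hypothetical environment variable name, not one used by the original file:

```python
import os

class Config:
    DEBUG = False
    SECRET_KEY = "key"

class ProductionConfig(Config):
    SESSION_COOKIE_HTTPONLY = True

class DebugConfig(Config):
    DEBUG = True

config_dict = {"Production": ProductionConfig, "Debug": DebugConfig}

# Pick the config class from an environment variable, defaulting to Debug.
mode = os.environ.get("APP_CONFIG_MODE", "Debug")
cfg = config_dict[mode]
print(cfg.__name__, cfg.DEBUG)
```

In a Flask app the chosen class would then be passed to `app.config.from_object(cfg)`.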
07991c5cd5981b2264807ddec5f7e8ad27374455 | 4,706 | py | Python | sample_data/Set-PD-Ix-100/3_Analyses/DOE_Ix-PD-100/Input_point1/Imperfection_point1/DoE_point93/script_DoE93_write_riks_inputs.py | hanklu2020/mabessa_F3DAS | 57b1bd1cb85d96567ad1044c216535ab3df88db3 | [
"BSD-3-Clause"
] | null | null | null | sample_data/Set-PD-Ix-100/3_Analyses/DOE_Ix-PD-100/Input_point1/Imperfection_point1/DoE_point93/script_DoE93_write_riks_inputs.py | hanklu2020/mabessa_F3DAS | 57b1bd1cb85d96567ad1044c216535ab3df88db3 | [
"BSD-3-Clause"
] | null | null | null | sample_data/Set-PD-Ix-100/3_Analyses/DOE_Ix-PD-100/Input_point1/Imperfection_point1/DoE_point93/script_DoE93_write_riks_inputs.py | hanklu2020/mabessa_F3DAS | 57b1bd1cb85d96567ad1044c216535ab3df88db3 | [
"BSD-3-Clause"
] | null | null | null | #=====================================================================#
#
# Created by M.A. Bessa on 12-Nov-2019 06:42:20
#=====================================================================#
from abaqusConstants import *
from odbAccess import *
import os
import sys  # needed for the sys.exit calls below
import numpy
import collections
from copy import deepcopy
try:
import cPickle as pickle # Improve speed
except ImportError:
import pickle
def dict_merge(a, b):
# Recursively merges dicts. Not just a simple a['key'] = b['key']: if
# both a and b have a key whose value is a dict, then dict_merge is called
# on both values and the result is stored in the returned dictionary.
if not isinstance(b, dict):
return b
result = deepcopy(a)
for k, v in b.iteritems():
if k in result and isinstance(result[k], dict):
result[k] = dict_merge(result[k], v)
else:
result[k] = deepcopy(v)
return result
#
os.chdir(r'/home/gkus/F3DAS-master/3_Analyses/DOE_Ix-PD-100/Input_point1/Imperfection_point1/DoE_point93')
# Set directory where the post-processing file is
postproc_dir='/home/gkus/F3DAS-master/4_Postprocessing/DOE_Ix-PD-100'
#
file_postproc_path = postproc_dir+'/'+'STRUCTURES_postprocessing_variables_CPU1.p'
# Flag saying post-processing file exists:
try:
# Try to load a previous post-processing file with all information:
readfile_postproc = open(file_postproc_path, 'rb')
STRUCTURES_data = pickle.load(readfile_postproc)
readfile_postproc.close()
postproc_exists = 1
except Exception as e:
postproc_exists = 0 # Flag saying that there is no post-processing file saved previously
sys.exit(1) # Exit the code because there is nothing left to do!
#
with open('DoE93_riks.inp','wb') as File:
if STRUCTURES_data['Input1']['Imperfection1']['DoE93']['coilable'][0] == 0:
sys.exit() # do not bother running the RIKS analysis because the material will not coil...
#
File.write('** Include file with mesh of structure:\n')
File.write('*INCLUDE, INPUT=include_mesh_DoE93.inp\n')
File.write('** \n')
File.write('** INTERACTION PROPERTIES\n')
File.write('** \n')
File.write('*SURFACE INTERACTION,NAME=IMP_TARG\n')
File.write('1.,\n')
File.write('*Surface Behavior, no separation, pressure-overclosure=HARD\n')
File.write('***Surface Interaction, name=IntProp-1\n')
File.write('** \n')
File.write('** INTERACTIONS\n')
File.write('** \n')
File.write('***CONTACT PAIR,INTERACTION=IMP_TARG\n')
File.write('**longerons-1-1.all_longerons_surface, AnalyticSurf-1-1.rigid_support\n')
File.write('** Interaction: Int-1\n')
File.write('*Contact Pair, interaction=IMP_TARG, type=SURFACE TO SURFACE, no thickness\n')
File.write('longerons-1-1.all_longerons_surface, AnalyticSurf-1-1.rigid_support\n')
File.write('**\n')
File.write('** Seed an imperfection:\n')
File.write('*IMPERFECTION, FILE=DoE93_linear_buckle, STEP=1\n')
mode_amplitude = 8.28831e-02/STRUCTURES_data['Input1']['Imperfection1']['DoE93']['maxDisp_p3'][1]
File.write('1, ' + str(mode_amplitude) + '\n')
File.write('** \n')
File.write('** STEP: Step-1\n')
File.write('** \n')
File.write('*Step, name=Step-RIKS, nlgeom=YES, inc=400\n')
File.write('*Static, riks\n')
File.write('5.0e-2,1.0,,0.5\n')
File.write('** \n')
File.write('** BOUNDARY CONDITIONS\n')
File.write('** \n')
File.write('** Name: BC_Zminus Type: Displacement/Rotation\n')
File.write('*Boundary\n')
File.write('RP_ZmYmXm, 1, 6\n')
File.write('** Name: BC_Zplus Type: Displacement/Rotation\n')
File.write('*Boundary, type=displacement\n')
File.write('RP_ZpYmXm, 3, 3, -4.53801e+01\n')
File.write('** \n')
File.write('** \n')
File.write('** OUTPUT REQUESTS\n')
File.write('** \n')
File.write('** FIELD OUTPUT: F-Output-1\n')
File.write('** \n')
File.write('*Output, field, variable=PRESELECT, frequency=1\n')
File.write('** \n')
File.write('** HISTORY OUTPUT: H-Output-2\n')
File.write('** \n')
File.write('*Output, history, frequency=1\n')
File.write('*Node Output, nset=RP_ZmYmXm\n')
File.write('RF1, RF2, RF3, RM1, RM2, RM3, U1, U2\n')
File.write('U3, UR1, UR2, UR3\n')
File.write('** \n')
File.write('** HISTORY OUTPUT: H-Output-3\n')
File.write('** \n')
File.write('*Node Output, nset=RP_ZpYmXm\n')
File.write('RF1, RF2, RF3, RM1, RM2, RM3, U1, U2\n')
File.write('U3, UR1, UR2, UR3\n')
File.write('** \n')
File.write('** HISTORY OUTPUT: H-Output-1\n')
File.write('** \n')
File.write('*Output, history, variable=PRESELECT, frequency=1\n')
File.write('*End Step\n')
#
| 40.568966 | 106 | 0.635784 | 679 | 4,706 | 4.332842 | 0.328424 | 0.186608 | 0.200544 | 0.07104 | 0.398029 | 0.35554 | 0.289599 | 0.188987 | 0.148878 | 0.128484 | 0 | 0.031378 | 0.167021 | 4,706 | 115 | 107 | 40.921739 | 0.719133 | 0.159371 | 0 | 0.260417 | 0 | 0.010417 | 0.444981 | 0.134689 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.083333 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
07ad6ed440d514898031958d1b03691c0b55f115 | 2,033 | py | Python | rest_client/CogRest.py | diogo1790/inphinity | 862eb85746e8a3a9bbc0d6aef9abbd5eebe9765f | [
"MIT"
] | 1 | 2019-03-11T12:59:37.000Z | 2019-03-11T12:59:37.000Z | rest_client/CogRest.py | diogo1790/inphinity | 862eb85746e8a3a9bbc0d6aef9abbd5eebe9765f | [
"MIT"
] | 21 | 2018-10-17T14:52:30.000Z | 2019-06-03T12:43:58.000Z | rest_client/CogRest.py | diogo1790/inphinity | 862eb85746e8a3a9bbc0d6aef9abbd5eebe9765f | [
"MIT"
] | 6 | 2019-02-28T07:40:14.000Z | 2019-09-23T13:31:54.000Z | import json
from rest_client.GetRest import GetRest
from rest_client.PostRest import PostRest
class CogAPI(object):
"""
This class manages the requests for the cog objects in the REST API
:param function: the name of the function to access in the rest API
:type function: string
"""
def __init__(self, function='cog/'):
"""
Initialization of the class
:param function: name of the function
:type function: string (url)
"""
self.function = function
def getAll(self):
"""
get all the cogs on the database
:return: json file with all the data
:rtype: string (json format)
"""
result_get = GetRest(function = self.function).performRequest()
return result_get
def setCog(self, jsonData):
"""
set new cogs in the database
:return: json file with the last cog created
:rtype: string (json format)
"""
jsonData = json.dumps(jsonData)
result_post = PostRest(function = self.function, dataDict = jsonData).performRequest()
return result_post
def getById(self, id_cog:int):
"""
get a cog given it id
:param id_cog: id of the cog
:type id_cog: int
:return: json file with all the data
:rtype: string (json format)
"""
self.function += str(id_cog) + '/'
result_get = GetRest(function = self.function).performRequest()
return result_get
def getCogsByParameters(self, url_parameters:str):
"""
return a list of cogs according to the parameters you send
:param url_parameters: string that contains the parameters values (that design the fields)
:type url_parameters: str
:return: json file with all the data
:rtype: string (json format)
"""
self.function += '?' + url_parameters
result_get = GetRest(function = self.function).performRequest()
return result_get | 26.064103 | 98 | 0.615839 | 243 | 2,033 | 5.061728 | 0.312757 | 0.078049 | 0.045528 | 0.058537 | 0.334959 | 0.334959 | 0.302439 | 0.302439 | 0.302439 | 0.302439 | 0 | 0 | 0.302509 | 2,033 | 78 | 99 | 26.064103 | 0.867419 | 0.414166 | 0 | 0.285714 | 0 | 0 | 0.006383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.238095 | false | 0 | 0.142857 | 0 | 0.619048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
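`getCogsByParameters` above builds its URL by concatenating `'?' + url_parameters` as a raw string. A hedged sketch of a sturdier alternative using the standard library's `urlencode` — the helper name and the filter fields are hypothetical; the real API's parameter names are not shown in the file:

```python
from urllib.parse import urlencode

def build_query(function, params):
    """Append a percent-encoded query string to a REST endpoint path."""
    return function + "?" + urlencode(params)

# Hypothetical filter fields for illustration only.
print(build_query("cog/", {"name": "COG0001", "page": 2}))  # cog/?name=COG0001&page=2
```

`urlencode` handles escaping of reserved characters, which the raw concatenation in the class leaves to the caller.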
07bc027d5cbe75e5f933381aac51b5ff0f2f19e9 | 1,251 | py | Python | app/grandchallenge/serving/urls.py | kant/grand-challenge.org | 608266ae3376448fc56c3bb4e34138ab81a45e2a | [
"Apache-2.0"
] | null | null | null | app/grandchallenge/serving/urls.py | kant/grand-challenge.org | 608266ae3376448fc56c3bb4e34138ab81a45e2a | [
"Apache-2.0"
] | null | null | null | app/grandchallenge/serving/urls.py | kant/grand-challenge.org | 608266ae3376448fc56c3bb4e34138ab81a45e2a | [
"Apache-2.0"
] | null | null | null | from django.conf import settings
from django.urls import path
from grandchallenge.serving.views import (
serve_folder,
serve_images,
serve_submissions,
)
app_name = "serving"
urlpatterns = [
path(
f"{settings.IMAGE_FILES_SUBDIRECTORY}/<uuid:pk>/<path:path>",
serve_images,
),
path(
(
f"{settings.EVALUATION_FILES_SUBDIRECTORY}/"
f"<int:challenge_pk>/"
f"submissions/"
f"<int:creator_pk>/"
f"<uuid:submission_pk>/"
f"<path:path>"
),
serve_submissions,
),
path("logos/<path:path>", serve_folder, {"folder": "logos"}),
path("banners/<path:path>", serve_folder, {"folder": "banners"}),
path("mugshots/<path:path>", serve_folder, {"folder": "mugshots"}),
path("favicon/<path:path>", serve_folder, {"folder": "favicon"}),
path("i/<path:path>", serve_folder, {"folder": "i"}),
path("cache/<path:path>", serve_folder, {"folder": "cache"}),
path(
"evaluation-supplementary/<path:path>",
serve_folder,
{"folder": "evaluation-supplementary"},
),
path(
"<slug:challenge_name>/<path:path>",
serve_folder,
name="challenge-file",
),
]
| 27.8 | 71 | 0.581135 | 131 | 1,251 | 5.381679 | 0.282443 | 0.113475 | 0.184397 | 0.215603 | 0.248227 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.243006 | 1,251 | 44 | 72 | 28.431818 | 0.744456 | 0 | 0 | 0.390244 | 0 | 0 | 0.377298 | 0.169464 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.073171 | 0 | 0.073171 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
07bc54f6fa70b2beb009c9e694b154ab713fcab6 | 294 | py | Python | shortly/__init__.py | fengsp/shortly | 29532523c2db297c995a7e94c84df6d884ce240e | [
"BSD-3-Clause"
] | 11 | 2015-01-01T03:22:09.000Z | 2021-02-12T14:08:06.000Z | shortly/__init__.py | fengsp/shortly | 29532523c2db297c995a7e94c84df6d884ce240e | [
"BSD-3-Clause"
] | null | null | null | shortly/__init__.py | fengsp/shortly | 29532523c2db297c995a7e94c84df6d884ce240e | [
"BSD-3-Clause"
] | null | null | null | """
shortly.app
~~~~~~~~~~~
Shortly app inspired by http://werkzeug.pocoo.org/docs/tutorial/.
:copyright: (c) 2014 by fsp.
:license: BSD.
"""
from flask import Flask
app = Flask(__name__)
from shortly import settings
app.config.from_object(settings)
import shortly.views
| 18.375 | 69 | 0.673469 | 38 | 294 | 5.078947 | 0.631579 | 0.103627 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016598 | 0.180272 | 294 | 15 | 70 | 19.6 | 0.784232 | 0.459184 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
07be6f662061ed7a38698e62c2a89a3a283f217c | 131 | py | Python | Section 02/sec02-imgshow-lena.py | PacktPublishing/Python-Data-Visualization-Solutions | 4d279912d779cdce8ad6ea27895aa23d0599fc72 | [
"MIT"
] | null | null | null | Section 02/sec02-imgshow-lena.py | PacktPublishing/Python-Data-Visualization-Solutions | 4d279912d779cdce8ad6ea27895aa23d0599fc72 | [
"MIT"
] | null | null | null | Section 02/sec02-imgshow-lena.py | PacktPublishing/Python-Data-Visualization-Solutions | 4d279912d779cdce8ad6ea27895aa23d0599fc72 | [
"MIT"
] | 2 | 2020-09-22T18:37:46.000Z | 2021-09-02T11:02:59.000Z | import scipy.misc
import matplotlib.pyplot as plt
lena = scipy.misc.ascent()
plt.gray()
plt.imshow(lena)
plt.colorbar()
plt.show()
| 16.375 | 31 | 0.755725 | 21 | 131 | 4.714286 | 0.619048 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099237 | 131 | 7 | 32 | 18.714286 | 0.838983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
07bfb14b12aa9b7f49cdabb818b850ef93c8c964 | 4,255 | py | Python | rsna_retro/self_supervised.py | bearpelican/rsna_retro | 1475da3224403261c48f0425b4a24e060d07556c | [
"Apache-2.0"
] | 3 | 2020-01-27T09:49:37.000Z | 2020-09-15T06:55:38.000Z | rsna_retro/self_supervised.py | bearpelican/rsna_retro | 1475da3224403261c48f0425b4a24e060d07556c | [
"Apache-2.0"
] | 1 | 2021-05-20T12:44:34.000Z | 2021-05-20T12:44:34.000Z | rsna_retro/self_supervised.py | bearpelican/rsna_retro | 1475da3224403261c48f0425b4a24e060d07556c | [
"Apache-2.0"
] | 1 | 2020-09-15T06:55:40.000Z | 2020-09-15T06:55:40.000Z | # AUTOGENERATED! DO NOT EDIT! File to edit: 08_train_self_supervised.ipynb (unless otherwise specified).
__all__ = ['get_ss_gen', 'get_ss_data', 'pipe_update_size', 'get_aug_pipe', 'SSModel', 'CombinedSSLoss', 'wrap_metric',
'SSCallback']
# Cell
from .imports import *
from .metadata import *
from .preprocess import *
from .train import *
from .contrastive_loss import *
# Cell
def get_ss_gen(fns, bs, img_tfm, splits, nw=8, test=False):
tfms = [[img_tfm, ToTensor], [fn2label,EncodedMultiCategorize(htypes)]]
if test: tfms = [tfms[0]]
dsets = Datasets(fns, tfms, splits=splits)
batch_tfms = L(IntToFloatTensor)
return dsets.dataloaders(bs=bs, num_workers=nw, after_batch=batch_tfms)
# Cell
def get_ss_data(bs, splits, img_dir=path_jpg256, **kwargs):
return get_ss_gen(L(list(Meta.df_comb.index)), bs=bs, img_tfm=get_pil_fn(path/img_dir),
splits=splits, **kwargs)
# Cell
def pipe_update_size(pipe, size):
for tf in pipe.fs:
if isinstance(tf, RandomResizedCropGPU):
tf.size = size
# Cell
def get_aug_pipe(size, min_scale=0.4, stats=(mean,std), **kwargs):
tfms = [Normalize.from_stats(*stats)] + aug_transforms(size=size, min_scale=min_scale, **kwargs)
return Pipeline(tfms)
# Cell
class SSModel(nn.Sequential):
def __init__(self, model:nn.Sequential): super().__init__(*model)
def features(self, x):
return super().forward(x)
# return self[1][:2](self[0](x))
def logits(self, x):
return x
# return self[1][2:](x)
def forward(self, *args):
feats = [self.features(x) for x in args]
logits = [self.logits(x) for x in feats]
return tuple(feats), tuple(logits)
# Cell
class CombinedSSLoss(nn.Module):
def __init__(self, ss_loss_func, orig_loss_func, multi_loss=False):
super().__init__()
store_attr(self, 'ss_loss_func,orig_loss_func,multi_loss')
def ss_loss(self, preds, labels):
(anchor, positive), _ = preds
return self.ss_loss_func(anchor, positive)
def orig_loss(self, preds, labels):
_, (logits_targ, _) = preds
return self.orig_loss_func(logits_targ, labels)
def forward(self, preds, labels):
if not self.multi_loss: return self.ss_loss(preds, labels)
return self.ss_loss(preds, labels) + self.orig_loss(preds, labels)
# Cell
from functools import wraps
def wrap_metric(f):
@wraps(f)
def wrapped_f(preds, targ):
_, (inp, _) = preds
return f(inp, targ)
return wrapped_f
# Cell
class SSCallback(Callback):
run_before=Recorder
def __init__(self, loss_func, size=256, aug_targ=None, aug_pos=None, multi_loss=False):
self.aug_targ = ifnone(aug_targ, get_aug_pipe(size, min_scale=0.7))
self.aug_pos = ifnone(aug_pos, get_aug_pipe(size, min_scale=0.4))
self.ss_loss_func = loss_func
self.multi_loss = multi_loss
self.orig_metrics = None
def update_size(self, size):
pipe_update_size(self.aug_targ, size)
pipe_update_size(self.aug_pos, size)
def begin_fit(self):
self.learn.model = SSModel(self.learn.model)
lf = CombinedSSLoss(self.ss_loss_func, self.learn.loss_func, self.multi_loss)
self.learn.loss_func = lf
self.orig_metrics = self.learn.metrics
self.learn.metrics = [wrap_metric(f.func) for f in self.orig_metrics]
self.learn.dls.valid.shuffle = True # prevents high loss if images ordered by class
if self.multi_loss:
self.learn.metrics += [lf.ss_loss, lf.orig_loss]
def after_fit(self):
self.learn.model = nn.Sequential(*self.learn.model)
self.learn.loss_func = self.learn.loss_func.orig_loss_func
self.learn.metrics = self.orig_metrics
self.learn.dls.valid.shuffle = False
def set_split(self, split_idx):
self.aug_targ.split_idx = split_idx
# self.aug_pos.split_idx = split_idx # always keep augmentation
def begin_validate(self): self.set_split(1)
def begin_train(self): self.set_split(0)
def begin_batch(self):
xb, = self.learn.xb
xb_targ = self.aug_targ(xb)
xb_pos = self.aug_pos(xb)
self.learn.xb = xb_targ, xb_pos
| 34.04 | 119 | 0.6698 | 624 | 4,255 | 4.309295 | 0.241987 | 0.053552 | 0.026032 | 0.026032 | 0.219041 | 0.152845 | 0.081443 | 0.07289 | 0.026032 | 0 | 0 | 0.007149 | 0.211046 | 4,255 | 124 | 120 | 34.314516 | 0.793864 | 0.077791 | 0 | 0 | 1 | 0 | 0.033009 | 0.009724 | 0 | 0 | 0 | 0 | 0 | 1 | 0.252874 | false | 0 | 0.068966 | 0.034483 | 0.494253 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
07d413b0134965a22e85e900dcfad1d58a08e13c | 1,842 | py | Python | test/test_msgzero.py | SX-Aurora/mpi4py-ve | aa6b1f97933196f8a485d5d808e89d5a29b58b1c | [
"BSD-2-Clause"
] | null | null | null | test/test_msgzero.py | SX-Aurora/mpi4py-ve | aa6b1f97933196f8a485d5d808e89d5a29b58b1c | [
"BSD-2-Clause"
] | null | null | null | test/test_msgzero.py | SX-Aurora/mpi4py-ve | aa6b1f97933196f8a485d5d808e89d5a29b58b1c | [
"BSD-2-Clause"
] | null | null | null | from mpi4pyve import MPI
import mpiunittest as unittest
class BaseTestMessageZero(object):
null_b = [None, MPI.INT]
null_v = [None, (0, None), MPI.INT]
def testPointToPoint(self):
comm = self.COMM
comm.Sendrecv(sendbuf=self.null_b, dest=comm.rank,
recvbuf=self.null_b, source=comm.rank)
r2 = comm.Irecv(self.null_b, comm.rank)
r1 = comm.Isend(self.null_b, comm.rank)
MPI.Request.Waitall([r1, r2])
def testCollectivesBlock(self):
comm = self.COMM
comm.Bcast(self.null_b)
comm.Gather(self.null_b, self.null_b)
comm.Scatter(self.null_b, self.null_b)
comm.Allgather(self.null_b, self.null_b)
comm.Alltoall(self.null_b, self.null_b)
def testCollectivesVector(self):
comm = self.COMM
comm.Gatherv(self.null_b, self.null_v)
comm.Scatterv(self.null_v, self.null_b)
comm.Allgatherv(self.null_b, self.null_v)
comm.Alltoallv(self.null_v, self.null_v)
@unittest.skip('necmpi')
@unittest.skipMPI('openmpi')
@unittest.skipMPI('SpectrumMPI')
def testReductions(self):
comm = self.COMM
comm.Reduce(self.null_b, self.null_b)
comm.Allreduce(self.null_b, self.null_b)
comm.Reduce_scatter_block(self.null_b, self.null_b)
rcnt = [0]*comm.Get_size()
comm.Reduce_scatter(self.null_b, self.null_b, rcnt)
try: comm.Scan(self.null_b, self.null_b)
except NotImplementedError: pass
try: comm.Exscan(self.null_b, self.null_b)
except NotImplementedError: pass
class TestMessageZeroSelf(BaseTestMessageZero, unittest.TestCase):
COMM = MPI.COMM_SELF
class TestMessageZeroWorld(BaseTestMessageZero, unittest.TestCase):
COMM = MPI.COMM_WORLD
if __name__ == '__main__':
unittest.main()
| 32.315789 | 67 | 0.664495 | 250 | 1,842 | 4.7 | 0.272 | 0.224681 | 0.214468 | 0.132766 | 0.48 | 0.354043 | 0.260426 | 0.08 | 0.08 | 0 | 0 | 0.004875 | 0.220413 | 1,842 | 56 | 68 | 32.892857 | 0.81337 | 0 | 0 | 0.133333 | 0 | 0 | 0.017372 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088889 | false | 0.044444 | 0.044444 | 0 | 0.288889 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
07eddd38c6f47aa6a472c92b7481d1d86a4a3b6e | 369 | py | Python | oscomp/src/oscomp/waitpid_test.py | wei-huan/MyOS | 0592dd73fd9768a9a69975f7d18c758999195774 | [
"MIT"
] | 2 | 2022-02-21T08:37:46.000Z | 2022-03-03T06:01:47.000Z | oscomp/src/oscomp/waitpid_test.py | wei-huan/MyOS | 0592dd73fd9768a9a69975f7d18c758999195774 | [
"MIT"
] | null | null | null | oscomp/src/oscomp/waitpid_test.py | wei-huan/MyOS | 0592dd73fd9768a9a69975f7d18c758999195774 | [
"MIT"
] | null | null | null | from test_base import TestBase
import re
class waitpid_test(TestBase):
def __init__(self):
super().__init__("waitpid", 4)
def test(self, data):
self.assert_ge(len(data), 3)
self.assert_equal(data[0], "This is child process")
self.assert_equal(data[1], "waitpid successfully.")
self.assert_equal(data[2], "wstatus: 3")
| 26.357143 | 59 | 0.650407 | 51 | 369 | 4.431373 | 0.54902 | 0.176991 | 0.199115 | 0.252212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02069 | 0.214092 | 369 | 13 | 60 | 28.384615 | 0.758621 | 0 | 0 | 0 | 0 | 0 | 0.159892 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
07f020d98c9c52f4c1494e2c9562744e5e6caf49 | 457 | py | Python | ameritrade/settings.py | vcokltfre/ameritrade-python | d3e2cb0b31caeea6874c2088449f43000b4d2b5a | [
"MIT"
] | 4 | 2021-02-12T07:32:27.000Z | 2022-03-30T22:55:44.000Z | ameritrade/settings.py | vcokltfre/ameritrade-python | d3e2cb0b31caeea6874c2088449f43000b4d2b5a | [
"MIT"
] | 5 | 2021-03-10T20:36:27.000Z | 2021-11-13T22:18:44.000Z | ameritrade/settings.py | vcokltfre/ameritrade-python | d3e2cb0b31caeea6874c2088449f43000b4d2b5a | [
"MIT"
] | 2 | 2021-11-13T21:02:05.000Z | 2022-02-09T18:42:01.000Z | # URL to manually retrieve the authorization code
AUTH_URL = "https://auth.tdameritrade.com/auth"
# API endpoint for token and authorization code authentication
OAUTH_URL = "https://api.tdameritrade.com/v1/oauth2/token"
# Quote endpoints
GET_QUOTES_URL = "https://api.tdameritrade.com/v1/marketdata/quotes"
GET_QUOTE_URL = "https://api.tdameritrade.com/v1/marketdata/"
# History endpoint
GET_PRICE_HISTORY_URL = "https://api.tdameritrade.com/v1/marketdata/"
| 35.153846 | 69 | 0.787746 | 63 | 457 | 5.571429 | 0.412698 | 0.11396 | 0.125356 | 0.262108 | 0.404558 | 0.404558 | 0.324786 | 0 | 0 | 0 | 0 | 0.01199 | 0.087527 | 457 | 12 | 70 | 38.083333 | 0.829736 | 0.299781 | 0 | 0 | 0 | 0 | 0.67619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
07f60644e1278035ff1189bf77c13f4e30a1d395 | 13,406 | py | Python | openGaussBase/testcase/SQL/DDL/partition/Opengauss_Function_DDL_Partition_Case0064.py | opengauss-mirror/Yat | aef107a8304b94e5d99b4f1f36eb46755eb8919e | [
"MulanPSL-1.0"
] | null | null | null | openGaussBase/testcase/SQL/DDL/partition/Opengauss_Function_DDL_Partition_Case0064.py | opengauss-mirror/Yat | aef107a8304b94e5d99b4f1f36eb46755eb8919e | [
"MulanPSL-1.0"
] | null | null | null | openGaussBase/testcase/SQL/DDL/partition/Opengauss_Function_DDL_Partition_Case0064.py | opengauss-mirror/Yat | aef107a8304b94e5d99b4f1f36eb46755eb8919e | [
"MulanPSL-1.0"
] | null | null | null | """
Copyright (c) 2022 Huawei Technologies Co.,Ltd.
openGauss is licensed under Mulan PSL v2.
You can use this software according to the terms and conditions of the Mulan PSL v2.
You may obtain a copy of Mulan PSL v2 at:
http://license.coscl.org.cn/MulanPSL2
THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
See the Mulan PSL v2 for more details.
"""
"""
Case Type : Function test
Case Name : Check whether foreign key operations are supported under different constraint levels: errors are reported properly
Description :
1.Create a database compatible with MySQL. Expect: creation succeeds
2.Create tables with primary key relations and insert data. Expect: creation and insertion succeed
3.Add foreign key constraints. Expect: addition succeeds
4.Test foreign key operations under different constraint levels. Expect: operations succeed
5.Update the foreign keys. Expect: update succeeds
6.Test foreign key operations under different constraint levels. Expect: operations succeed
7.Drop the tables. Expect: drop succeeds
Expect :
History :
"""
import sys
import unittest
from yat.test import Node
from yat.test import macro
sys.path.append(sys.path[0] + "/../")
from testcase.utils.CommonSH import *
from testcase.utils.Constant import Constant
from testcase.utils.Logger import Logger
logger = Logger()
class IndexFileDamaged(unittest.TestCase):
def setUp(self):
logger.info(
'----------------------------Opengauss_Function_DDL_Partition_Case0064 starts-----------------------------')
self.userNode = Node('dbuser')
self.DB_ENV_PATH = macro.DB_ENV_PATH
self.Constant = Constant()
def test_Index_file_damaged(self):
logger.info('----------------------------Create a MySQL-compatible database; expect: creation succeeds-----------------------------')
sql_cmd = '''
drop table if exists pstudent_table_02 cascade;
drop table if exists pclass_table_02 cascade;
drop table if exists pteacher_table_02 cascade;
drop database if exists pguser;
CREATE DATABASE pguser DBCOMPATIBILITY 'B';
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d {self.userNode.db_name} -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
logger.info(msg)
self.assertIn(self.Constant.DROP_DATABASE_SUCCESS, msg)
self.assertIn(self.Constant.CREATE_DATABASE_SUCCESS, msg)
logger.info('----------------------Create tables with primary key relations and insert data; expect: creation and insertion succeed---------------------')
sql_cmd = '''
drop table if exists pstudent_table_02 cascade;
drop table if exists pclass_table_02 cascade;
drop table if exists pteacher_table_02 cascade;
create table pclass_table_02
(
c_date TIMESTAMP primary key,
c_name varchar not null
)partition by range(c_date) interval ('10 day') (
partition part1 values less than ('1990-02-02 00:00:00'));
create table pteacher_table_02
(
t_date TIMESTAMP primary key,
t_name varchar not null
)partition by range(t_date) interval ('10 day') (
partition part1 values less than ('1990-02-02 00:00:00'));
create table pstudent_table_02
(
s_date TIMESTAMP primary key,
s_name varchar not null,
c_date TIMESTAMP,
t_date TIMESTAMP,
foreign key(c_date) references pclass_table_02(c_date)
)partition by range(s_date) interval ('10 day') (
partition part1 values less than ('1990-02-02 00:00:00'));
--Insert data
insert into pclass_table_02 values (date '2020-09-01', '1年1班');
insert into pclass_table_02 values (date '2020-09-02', '1年2班');
insert into pclass_table_02 values (date '2020-09-03', '1年3班');
insert into pclass_table_02 values (date '2020-09-04', '1年4班');
insert into pteacher_table_02 values (date '2020-09-01', '李老师');
insert into pteacher_table_02 values (date '2020-09-02', '张老师');
insert into pteacher_table_02 values (date '2020-09-03', '陈老师');
insert into pteacher_table_02 values (date '2020-09-04', '杨老师');
insert into pstudent_table_02 values (date '2020-09-01', '张三', date '2020-09-01', date '2020-09-01');
insert into pstudent_table_02 values (date '2020-09-02', '李四', date '2020-09-02', date '2020-09-02');
insert into pstudent_table_02 values (date '2020-09-03', '王二', date '2020-09-03', date '2020-09-03');
insert into pstudent_table_02 values (date '2020-09-04', '李明', date '2020-09-04', date '2020-09-04');
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d pguser -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
logger.info(msg)
self.assertIn(self.Constant.DROP_TABLE_SUCCESS, msg)
self.assertIn(self.Constant.TABLE_CREATE_SUCCESS, msg)
self.assertIn(self.Constant.INSERT_SUCCESS_MSG, msg)
self.assertNotIn(self.Constant.SQL_WRONG_MSG[1], msg)
logger.info('----------------------------Add foreign key constraints; expect: addition succeeds-----------------------------')
sql_cmd = '''
alter table pstudent_table_02 add constraint fk_student_tid foreign key (t_date)
references pteacher_table_02(t_date) on delete set null on update no action;
alter table pstudent_table_02 add constraint fk_student_cid foreign key (c_date)
references pclass_table_02(c_date) on delete cascade on update restrict;
select conname, convalidated, confupdtype, confdeltype, confmatchtype
from PG_CONSTRAINT where conname in ('fk_student_tid', 'fk_student_cid');
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d pguser -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
self.assertIn(self.Constant.ALTER_TABLE_MSG, msg)
logger.info(msg)
logger.info('----------------------------Test foreign key operations under different constraint levels; expect: success-----------------------------')
sql_cmd = '''
delete from pteacher_table_02 where t_date = date '2020-09-04';
select t_date from pstudent_table_02 where s_name='李明';
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d pguser -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
self.assertIn(self.Constant.DELETE_SUCCESS_MSG, msg)
self.assertEqual(msg.split('\n')[-2], ' ')  # t_date was set to NULL by ON DELETE SET NULL
logger.info(msg)
sql_cmd = '''
delete from pclass_table_02 where c_date = date '2020-09-04';
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d pguser -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
self.assertIn(self.Constant.SQL_WRONG_MSG[1], msg)
self.assertIn("violates foreign key constraint", msg)
logger.info(msg)
sql_cmd = '''
update pteacher_table_02 set t_date = date '2020-09-09' where t_date = date '2020-09-03';
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d pguser -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
self.assertIn(self.Constant.SQL_WRONG_MSG[1], msg)
self.assertIn("violates foreign key constraint", msg)
logger.info(msg)
sql_cmd = '''
update pclass_table_02 set c_date = date '2020-09-09' where c_date = date '2020-09-04';
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d pguser -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
self.assertIn(self.Constant.SQL_WRONG_MSG[1], msg)
self.assertIn("violates foreign key constraint", msg)
logger.info(msg)
logger.info('----------------------------Update the foreign keys; expect: success-----------------------------')
sql_cmd = '''
alter table pstudent_table_02 drop constraint fk_student_cid;
alter table pstudent_table_02 drop constraint fk_student_tid;
alter table pstudent_table_02 add constraint fk_pstudent_table_02_tdate foreign key (t_date) references pteacher_table_02(t_date) on delete no action on update cascade;
alter table pstudent_table_02 add constraint fk_pstudent_table_02_cdate foreign key (c_date) references pclass_table_02(c_date) on delete restrict on update set null;
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d pguser -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
self.assertIn(self.Constant.ALTER_TABLE_MSG, msg)
logger.info(msg)
logger.info('----------------------------Test foreign key operations under different constraint levels; expect: success-----------------------------')
sql_cmd = '''
delete from pteacher_table_02 where t_date = date '2020-09-01';
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d pguser -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
self.assertIn(self.Constant.SQL_WRONG_MSG[1], msg)
self.assertIn("violates foreign key constraint", msg)
logger.info(msg)
sql_cmd = '''
delete from pclass_table_02 where c_date = date '2020-09-04';
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d pguser -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
self.assertIn(self.Constant.SQL_WRONG_MSG[1], msg)
self.assertIn("violates foreign key constraint", msg)
logger.info(msg)
sql_cmd = '''
update pteacher_table_02 set t_date = date '2020-09-08' where t_date = date '2020-09-01';
select t_date from pstudent_table_02 where s_name='张三';
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d pguser -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
self.assertIn("2020-09-08", msg)
logger.info(msg)
sql_cmd = '''
update pclass_table_02 set c_date = date '2020-09-09' where c_date = date '2020-09-02';
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d pguser -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
self.assertIn(self.Constant.SQL_WRONG_MSG[1], msg)
self.assertIn("violates foreign key constraint", msg)
logger.info(msg)
def tearDown(self):
logger.info('----------------------------Drop tables and database-----------------------------')
sql_cmd = '''
drop table pclass_table_02 cascade;
drop table pteacher_table_02 cascade;
drop table pstudent_table_02 cascade;
drop database if exists pguser;
'''
excute_cmd = f'''
source {self.DB_ENV_PATH} ;
gsql -d {self.userNode.db_name} -p {self.userNode.db_port} -c "{sql_cmd}"
'''
logger.info(excute_cmd)
msg = self.userNode.sh(excute_cmd).result()
logger.info(msg)
logger.info('----------------------------Opengauss_Function_DDL_Parttion_Case0064 execution completed-----------------------------') | 46.710801 | 188 | 0.526331 | 1,543 | 13,406 | 4.379132 | 0.149708 | 0.04869 | 0.047358 | 0.026935 | 0.743229 | 0.724434 | 0.689063 | 0.666124 | 0.656356 | 0.556608 | 0 | 0.051142 | 0.343652 | 13,406 | 287 | 189 | 46.710801 | 0.716786 | 0.037893 | 0 | 0.62931 | 0 | 0.051724 | 0.679676 | 0.096621 | 0 | 0 | 0 | 0 | 0.099138 | 1 | 0.012931 | false | 0 | 0.030172 | 0 | 0.047414 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
07f76c25a65e7eb00610dabf2a5fbecdf6aefb8c | 88 | py | Python | evostream/conf/defaults.py | tomi77/django-evostream | 5702aa7031a2330778b8c7c9e19ec527768ddf44 | [
"MIT"
] | 6 | 2016-04-27T00:42:51.000Z | 2020-10-28T03:40:26.000Z | evostream/conf/defaults.py | tomi77/django-evostream | 5702aa7031a2330778b8c7c9e19ec527768ddf44 | [
"MIT"
] | null | null | null | evostream/conf/defaults.py | tomi77/django-evostream | 5702aa7031a2330778b8c7c9e19ec527768ddf44 | [
"MIT"
] | 1 | 2019-10-11T00:37:38.000Z | 2019-10-11T00:37:38.000Z | """
Default django-evostream configuration
"""
EVOSTREAM_URI = 'http://127.0.0.1:7777'
| 14.666667 | 39 | 0.704545 | 12 | 88 | 5.083333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126582 | 0.102273 | 88 | 5 | 40 | 17.6 | 0.64557 | 0.431818 | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
07f78209cbc51ef755cef47f7790318bf786d4d7 | 4,306 | py | Python | test/test_fillgaps_lowpass_2d.py | JanisGailis/gridtools | 793df6b4699dc7bdcbd68c569b41e325bb421a9b | [
"MIT"
] | null | null | null | test/test_fillgaps_lowpass_2d.py | JanisGailis/gridtools | 793df6b4699dc7bdcbd68c569b41e325bb421a9b | [
"MIT"
] | null | null | null | test/test_fillgaps_lowpass_2d.py | JanisGailis/gridtools | 793df6b4699dc7bdcbd68c569b41e325bb421a9b | [
"MIT"
] | null | null | null | import unittest
import numpy as np
from numpy.testing import assert_almost_equal
import gridtools.gapfilling as gtg
GAP = np.nan
KERNEL = np.ones((3, 3), dtype=np.float64)
class FillgapsLowpass2d(unittest.TestCase):
def _test_fillgaps(self, src, desired_out, desired_gaps_filled):
src = np.array(src)
gc1 = gtg.count_gaps(src)
actual_out = gtg.fillgaps_lowpass_2d(src, kernel=KERNEL)
gc2 = gtg.count_gaps(actual_out)
actual_gaps_filled = gc1 - gc2
assert_almost_equal(actual_out, np.array(desired_out))
self.assertEqual(actual_gaps_filled, desired_gaps_filled)
def test_0_missing(self):
self._test_fillgaps([[1.0, 2.0],
[3.0, 4.0]],
[[1.0, 2.0],
[3.0, 4.0]], 0)
def test_1_missing(self):
self._test_fillgaps([[GAP]],
[[GAP]], 0)
_F_ = (2 + 3 + 4) / 3.
self._test_fillgaps([[GAP, 2.0],
[3.0, 4.0]],
[[_F_, 2.0],
[3.0, 4.0]], 1)
_F_ = (1 + 2 + 3) / 3.
self._test_fillgaps([[1.0, 2.0],
[3.0, GAP]],
[[1.0, 2.0],
[3.0, _F_]], 1)
_F_ = (1 + 2 + 3 + 4 + 6 + 7 + 8 + 9) / 8.
self._test_fillgaps([[1.0, 2.0, 3.0],
[4.0, GAP, 6.0],
[7.0, 8.0, 9.0]],
[[1.0, 2.0, 3.0],
[4.0, _F_, 6.0],
[7.0, 8.0, 9.0]], 1)
def test_2_missing(self):
self._test_fillgaps([[GAP, GAP]],
[[GAP, GAP]], 0)
F1_ = (2 + 3) / 2.
F2_ = (2 + 3) / 2.
self._test_fillgaps([[GAP, 2.0],
[3.0, GAP]],
[[F1_, 2.0],
[3.0, F2_]], 2)
F1_ = (2 + 4) / 2.
F2_ = (2 + 3 + 4 + 6 + 7 + 8 + 9) / 7.
self._test_fillgaps([[GAP, 2.0, 3.0],
[4.0, GAP, 6.0],
[7.0, 8.0, 9.0]],
[[F1_, 2.0, 3.0],
[4.0, F2_, 6.0],
[7.0, 8.0, 9.0]], 2)
def test_3_missing(self):
self._test_fillgaps([[GAP, GAP],
[GAP, 4.0]],
[[4.0, 4.0],
[4.0, 4.0]], 3)
F1_ = 2.
F2_ = (2 + 7 + 8) / 3.
F3_ = (2 + 3 + 6 + 7 + 8 + 9) / 6.
self._test_fillgaps([[GAP, 2.0, 3.0],
[GAP, GAP, 6.0],
[7.0, 8.0, 9.0]],
[[F1_, 2.0, 3.0],
[F2_, F3_, 6.0],
[7.0, 8.0, 9.0]], 3)
def test_4_missing(self):
self._test_fillgaps([[GAP, GAP],
[GAP, GAP]],
[[GAP, GAP],
[GAP, GAP]], 0)
F1_ = (3 + 6) / 2.
F2_ = (7 + 8) / 2.
F3_ = (3 + 6 + 7 + 8 + 9) / 5.
F4_ = (F1_ + F2_ + F3_) / 3.
self._test_fillgaps([[GAP, GAP, 3.0],
[GAP, GAP, 6.0],
[7.0, 8.0, 9.0]],
[[F4_, F1_, 3.0],
[F2_, F3_, 6.0],
[7.0, 8.0, 9.0]], 4)
def test_9_missing(self):
F1_ = (1 + 2 + 3 + 4 + 5 + 9) / 6.
F2_ = (2 + 3 + 4) / 3.
F3_ = (3 + 4) / 2.
F4_ = (5 + 9 + 13) / 3.
F5_ = (9 + 13) / 2.
F6_ = (F1_ + F2_ + F3_ + F4_ + F5_) / 5.
F7_ = (F2_ + F3_) / 2.
F8_ = (F4_ + F5_) / 2.
F9_ = (F6_ + F7_ + F8_) / 3.
self._test_fillgaps([[1.0, 2.0, 3.0, 4.0],
[5.0, GAP, GAP, GAP],
[9.0, GAP, GAP, GAP],
[13., GAP, GAP, GAP]],
[[1.0, 2.0, 3.0, 4.0],
[5.0, F1_, F2_, F3_],
[9.0, F4_, F6_, F7_],
[13., F5_, F8_, F9_]], 9)
| 34.725806 | 68 | 0.326986 | 544 | 4,306 | 2.352941 | 0.112132 | 0.103125 | 0.0375 | 0.05 | 0.470313 | 0.429688 | 0.417188 | 0.354688 | 0.3125 | 0.195313 | 0 | 0.170616 | 0.509986 | 4,306 | 123 | 69 | 35.00813 | 0.436019 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028846 | 1 | 0.067308 | false | 0.019231 | 0.038462 | 0 | 0.115385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
07f99c6b250607cb1f91e5e3d7c9dbc64d1d48f3 | 2,853 | py | Python | hpcrocket/ui.py | SvenMarcus/ssh-slurm-runner | 91ea1a052a0362b5b8676b6e429aa3c890359e73 | [
"MIT"
] | 2 | 2021-04-16T15:53:38.000Z | 2021-04-16T20:36:05.000Z | hpcrocket/ui.py | SvenMarcus/ssh-slurm-runner | 91ea1a052a0362b5b8676b6e429aa3c890359e73 | [
"MIT"
] | 18 | 2021-04-16T15:53:55.000Z | 2021-09-13T17:38:44.000Z | hpcrocket/ui.py | SvenMarcus/ssh-slurm-runner | 91ea1a052a0362b5b8676b6e429aa3c890359e73 | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
from typing import Any
from rich import box
from rich.console import RenderableType
from rich.live import Live
from rich.spinner import Spinner
from rich.table import Table
from hpcrocket.core.slurmbatchjob import SlurmJobStatus
class UI(ABC):
@abstractmethod
def update(self, job: SlurmJobStatus) -> None:
pass
@abstractmethod
def error(self, text: str) -> None:
pass
@abstractmethod
def info(self, text: str) -> None:
pass
@abstractmethod
def success(self, text: str) -> None:
pass
@abstractmethod
def launch(self, text: str) -> None:
pass
class NullUI(UI):
def update(self, job: SlurmJobStatus) -> None:
pass
def error(self, text: str) -> None:
pass
def info(self, text: str) -> None:
pass
def success(self, text: str) -> None:
pass
def launch(self, text: str) -> None:
pass
class RichUI(UI):
def __init__(self) -> None:
self._rich_live: Live
def __enter__(self) -> 'RichUI':
self._rich_live = Live(
Spinner("bouncingBar", ""),
refresh_per_second=16)
self._rich_live.start()
return self
def __exit__(self, *args: Any, **kwargs: Any) -> None:
self._rich_live.stop()
def update(self, job: SlurmJobStatus) -> None:
self._rich_live.update(self._make_table(job))
def error(self, text: str) -> None:
self._rich_live.console.print(
":cross_mark:", text, style="bold red", emoji=True)
def info(self, text: str) -> None:
self._rich_live.console.print(
":information_source:", text, style="bold blue", emoji=True)
def success(self, text: str) -> None:
self._rich_live.console.print(
":heavy_check_mark: ", text, style="bold green", emoji=True)
def launch(self, text: str) -> None:
self._rich_live.console.print(":rocket: ", text, style="bold yellow", emoji=True)
def _make_table(self, job: SlurmJobStatus) -> Table:
table = Table(style="bold", box=box.MINIMAL)
table.add_column("ID")
table.add_column("Name")
table.add_column("State")
for task in job.tasks:
last_column: RenderableType = task.state
color = "grey42"
if task.state == "RUNNING":
color = "blue"
last_column = Spinner("arc", task.state)
elif task.state == "COMPLETED":
color = "green"
last_column = f":heavy_check_mark: {task.state}"
elif task.state == "FAILED":
color = "red"
last_column = f":cross_mark: {task.state}"
table.add_row(str(task.id), task.name, last_column, style=color)
return table
| 25.936364 | 89 | 0.590606 | 344 | 2,853 | 4.747093 | 0.244186 | 0.058788 | 0.080833 | 0.110227 | 0.37722 | 0.350276 | 0.308634 | 0.135946 | 0.09553 | 0 | 0 | 0.001971 | 0.288819 | 2,853 | 109 | 90 | 26.174312 | 0.802859 | 0 | 0 | 0.423077 | 0 | 0 | 0.080266 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.24359 | false | 0.128205 | 0.102564 | 0 | 0.410256 | 0.051282 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
6af895be58baaae90428104f1b0b67212597bf71 | 3,152 | py | Python | nncf/torch/quantization/quantizer_id.py | xiao1228/nncf | 307262119ee3f50eec2fa4022b2ef96693fd8448 | [
"Apache-2.0"
] | null | null | null | nncf/torch/quantization/quantizer_id.py | xiao1228/nncf | 307262119ee3f50eec2fa4022b2ef96693fd8448 | [
"Apache-2.0"
] | null | null | null | nncf/torch/quantization/quantizer_id.py | xiao1228/nncf | 307262119ee3f50eec2fa4022b2ef96693fd8448 | [
"Apache-2.0"
] | null | null | null | """
Copyright (c) 2020 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from nncf.torch.dynamic_graph.context import Scope
from nncf.torch.graph.graph import InputAgnosticOperationExecutionContext
class QuantizerId:
""" Unique identifier of a quantizer. It's used to store and search all quantizers in a single
structure. Also it provides the scope, where the quantizer was inserted. """
def get_base(self):
raise NotImplementedError
def get_suffix(self) -> str:
raise NotImplementedError
def get_scope(self) -> Scope:
raise NotImplementedError
def __str__(self):
return str(self.get_base()) + self.get_suffix()
def __hash__(self):
return hash((self.get_base(), self.get_suffix()))
def __eq__(self, other: 'QuantizerId'):
return (self.get_base() == other.get_base()) and (self.get_suffix() == other.get_suffix())
class WeightQuantizerId(QuantizerId):
""" Unique identifier of a quantizer for weights."""
def __init__(self, scope: 'Scope'):
self.scope = scope
def get_base(self) -> 'Scope':
return self.scope
def get_suffix(self) -> str:
return 'module_weight'
def get_scope(self) -> Scope:
return self.get_base()
class NonWeightQuantizerId(QuantizerId):
""" Unique identifier of a quantizer, which corresponds to non-weight operations, such as
ordinary activation, function and input"""
def __init__(self, ia_op_exec_context: InputAgnosticOperationExecutionContext,
input_port_id=None):
self.ia_op_exec_context = ia_op_exec_context
self.input_port_id = input_port_id
def get_base(self) -> 'InputAgnosticOperationExecutionContext':
return self.ia_op_exec_context
def get_suffix(self) -> str:
return '|OUTPUT' if self.input_port_id is None else '|INPUT{}'.format(self.input_port_id)
def get_scope(self) -> Scope:
return self.ia_op_exec_context.scope_in_model
class InputQuantizerId(NonWeightQuantizerId):
""" Unique identifier of a quantizer for model's input"""
def get_base(self) -> 'Scope':
return self.ia_op_exec_context.scope_in_model
def get_suffix(self) -> str:
return 'module_input'
class FunctionQuantizerId(NonWeightQuantizerId):
""" Unique identifier of a quantizer for a function call"""
def __init__(self, ia_op_exec_context: InputAgnosticOperationExecutionContext, input_arg_idx: int):
super().__init__(ia_op_exec_context)
self.input_arg_idx = input_arg_idx
def get_suffix(self) -> str:
return "_input" + str(self.input_arg_idx)
| 33.531915 | 103 | 0.71415 | 418 | 3,152 | 5.148325 | 0.327751 | 0.033457 | 0.02974 | 0.055762 | 0.398234 | 0.358271 | 0.245353 | 0.106877 | 0.106877 | 0.042751 | 0 | 0.003167 | 0.198604 | 3,152 | 93 | 104 | 33.892473 | 0.848773 | 0.317259 | 0 | 0.326087 | 0 | 0 | 0.052481 | 0.01813 | 0 | 0 | 0 | 0 | 0 | 1 | 0.391304 | false | 0 | 0.043478 | 0.26087 | 0.804348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
6afa472316303d213d34e9406c9d5ff20cb11605 | 313 | py | Python | ChefExtensionHandler/bin/parse_env_variables.py | gsreynolds/azure-chef-extension | 407194655ae1690b3c8e6f8f9ee8ea300927846f | [
"Apache-2.0"
] | null | null | null | ChefExtensionHandler/bin/parse_env_variables.py | gsreynolds/azure-chef-extension | 407194655ae1690b3c8e6f8f9ee8ea300927846f | [
"Apache-2.0"
] | null | null | null | ChefExtensionHandler/bin/parse_env_variables.py | gsreynolds/azure-chef-extension | 407194655ae1690b3c8e6f8f9ee8ea300927846f | [
"Apache-2.0"
] | null | null | null | import json
import sys
with open(sys.argv[1]) as data_file:
data = json.load(data_file, strict=False)
data = data['runtimeSettings'][0]['handlerSettings']['publicSettings']['environment_variables']
commands=""
for key, value in data.items():
commands=commands+'export '+key+'="'+value+'";'
print(commands)
| 31.3 | 95 | 0.71885 | 41 | 313 | 5.414634 | 0.658537 | 0.072072 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007143 | 0.105431 | 313 | 9 | 96 | 34.777778 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0.242812 | 0.067093 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.222222 | null | null | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6affeec1ef7848d973497754e0dce22d43890638 | 1,647 | py | Python | season01/scripting_for_artists_python/sfa_python04.py | alexanderrichtertd/SfA | ffba23dba0df4968edf5c30772d8ce295e2bf25f | [
"MIT"
] | 17 | 2018-06-01T09:41:01.000Z | 2021-11-11T12:07:11.000Z | season01/scripting_for_artists_python/sfa_python04.py | alexanderrichtertd/SfA | ffba23dba0df4968edf5c30772d8ce295e2bf25f | [
"MIT"
] | null | null | null | season01/scripting_for_artists_python/sfa_python04.py | alexanderrichtertd/SfA | ffba23dba0df4968edf5c30772d8ce295e2bf25f | [
"MIT"
] | 4 | 2018-07-28T13:49:49.000Z | 2022-02-26T05:54:33.000Z | #*********************************************************************
# content = Python basics
# date = 2017-09-07
#
# author = Alexander Richter
# email = alexanderrichtertd@gmail.com
#*********************************************************************
# IMPORT external files
# Import is like copying the code from another film.
# IMPORT works for
# a) the current folder
# b) folder which are in PATH/PYTHONPATH environment
import sfa_python04_print
sfa_python04_print.print_new('Alex')
# NAVIGATE through folders with "from"
# NEED: __init__.py files
from extern import print_extern
print_extern.print_filename()
from extern.another import print_extern
print_extern.print_filename()
# GET & SET additional import paths
import sys
print(sys.path)
sys.path.append(r'C:/project/pipeline')
print(sys.path)
# DEBUG & ERROR HANDLING
shot_path = 'D:/project/shots/s010'
task_list = ['RIG', 'ANIM', 'LIGHT']
for item in task_list:
result_path = shot_path + '/' + item
print(result_path)
"Frame: " + 1001
""" ERROR MESSAGE
Traceback (most recent call last):
File "D:\Dropbox\promotion\software\scripting_for_artists\code\scripting_for_artists_python\sfa_python04.py", line 32, in <module>
"Frame: " + 1001
TypeError: cannot concatenate 'str' and 'int' objects
"""
# EXPLAIN:
# [EXCEPTION] Traceback which stops the execution
# [WHERE] file path & line 32
# [WHAT] TypeError: cannot concatenate 'str' and 'int' objects
# TRY & EXCEPT
# try catches breaks if it breaks and except will be executes instead
try:
"Frame: " + 1001
except Exception as exp:
print('EXCEPTION adding frame: ' + str(exp))
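# FIX EXAMPLE (a sketch, not part of the original lesson): the TypeError
# above comes from concatenating str and int - converting the int with
# str() first makes the concatenation valid
print("Frame: " + str(1001))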
| 24.954545 | 132 | 0.666667 | 211 | 1,647 | 5.075829 | 0.559242 | 0.041083 | 0.059757 | 0.041083 | 0.154995 | 0.154995 | 0.154995 | 0 | 0 | 0 | 0 | 0.023758 | 0.156648 | 1,647 | 65 | 133 | 25.338462 | 0.7473 | 0.465695 | 0 | 0.3 | 0 | 0 | 0.159933 | 0.035354 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
ed04ad540e3dbc400df475954f6fb5d79ee8d8e9 | 30,563 | py | Python | modules/pymol/keywords.py | kingdavid72/Pymol | 91ddc53199f40f12d186dee2a3745cd777a57877 | [
"CNRI-Python"
] | null | null | null | modules/pymol/keywords.py | kingdavid72/Pymol | 91ddc53199f40f12d186dee2a3745cd777a57877 | [
"CNRI-Python"
] | null | null | null | modules/pymol/keywords.py | kingdavid72/Pymol | 91ddc53199f40f12d186dee2a3745cd777a57877 | [
"CNRI-Python"
] | null | null | null |
import parsing
import cmd
def get_command_keywords(self_cmd=cmd):
return {
# keyword : [ command, # min_arg, max_arg, separator, mode ]
# NOTE: min_arg, max_arg, and separator, are hold-overs from the
# original PyMOL parser which will eventually be removed.
# all new commands should use NO_CHECK or STRICT modes
# which make much better use of built-in python features.
'abort' : [ self_cmd.abort , 0 , 0 , '' , parsing.ABORT ],
'accept' : [ self_cmd.accept , 0 , 0 , '' , parsing.STRICT ],
'alias' : [ self_cmd.alias , 0 , 0 , '' , parsing.LITERAL1 ], # insecure
'align' : [ self_cmd.align , 0 , 0 , '' , parsing.STRICT ],
'alignto' : [ self_cmd.alignto , 0 , 0 , '' , parsing.STRICT ],
'alter' : [ self_cmd.alter , 0 , 0 , '' , parsing.LITERAL1 ], # insecure
'_alt' : [ self_cmd._alt , 0 , 0 , '' , parsing.STRICT ],
'alter_state' : [ self_cmd.alter_state , 0 , 0 , '' , parsing.LITERAL2 ], # insecure
'angle' : [ self_cmd.angle , 0 , 0 , '' , parsing.STRICT ],
'as' : [ self_cmd.show_as , 0 , 0 , '' , parsing.STRICT ],
'assert' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'attach' : [ self_cmd.attach , 0 , 0 , '' , parsing.STRICT ],
'backward' : [ self_cmd.backward , 0 , 0 , '' , parsing.STRICT ],
'bg_color' : [ self_cmd.bg_color , 0 , 0 , '' , parsing.STRICT ],
'bond' : [ self_cmd.bond , 0 , 0 , '' , parsing.STRICT ],
'break' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'button' : [ self_cmd.button , 0 , 0 , '' , parsing.STRICT ],
'cache' : [ self_cmd.cache , 0 , 0 , '' , parsing.STRICT ],
'cartoon' : [ self_cmd.cartoon , 0 , 0 , '' , parsing.STRICT ],
'capture' : [ self_cmd.capture , 0 , 0 , '' , parsing.STRICT ],
'cealign' : [ self_cmd.cealign , 0 , 0 , '' , parsing.STRICT ],
'cd' : [ self_cmd.cd , 0 , 0 , '' , parsing.STRICT ],
'center' : [ self_cmd.center , 0 , 0 , '' , parsing.STRICT ],
'check' : [ self_cmd.check , 0 , 0 , '' , parsing.STRICT ],
'clean' : [ self_cmd.clean , 0 , 0 , '' , parsing.STRICT ],
'class' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'clip' : [ self_cmd.clip , 0 , 0 , '' , parsing.STRICT ],
'cls' : [ self_cmd.cls , 0 , 0 , '' , parsing.STRICT ],
'_ctrl' : [ self_cmd._ctrl , 0 , 0 , '' , parsing.STRICT ],
'_ctsh' : [ self_cmd._ctsh , 0 , 0 , '' , parsing.STRICT ],
'color' : [ self_cmd.color , 0 , 0 , '' , parsing.STRICT ],
'commands' : [ self_cmd.helping.commands , 0 , 0 , '' , parsing.STRICT ],
'config_mouse' : [ self_cmd.config_mouse , 0 , 0 , '' , parsing.STRICT ],
'continue' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'copy' : [ self_cmd.copy , 0 , 0 , '' , parsing.LEGACY ],
'count_atoms' : [ self_cmd.count_atoms , 0 , 0 , '' , parsing.STRICT ],
'count_frames' : [ self_cmd.count_frames , 0 , 0 , '' , parsing.STRICT ],
'count_states' : [ self_cmd.count_states , 0 , 0 , '' , parsing.STRICT ],
'cycle_valence' : [ self_cmd.cycle_valence , 0 , 0 , '' , parsing.STRICT ],
'create' : [ self_cmd.create , 0 , 0 , '' , parsing.LEGACY ],
'decline' : [ self_cmd.decline , 0 , 0 , '' , parsing.STRICT ],
'delete' : [ self_cmd.delete , 0 , 0 , '' , parsing.STRICT ],
'def' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'del' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'deprotect' : [ self_cmd.deprotect , 0 , 0 , '' , parsing.STRICT ],
'deselect' : [ self_cmd.deselect , 0 , 0 , '' , parsing.STRICT ],
'dihedral' : [ self_cmd.dihedral , 0 , 0 , '' , parsing.STRICT ],
'dir' : [ self_cmd.ls , 0 , 0 , '' , parsing.STRICT ],
'disable' : [ self_cmd.disable , 0 , 0 , '' , parsing.STRICT ],
'distance' : [ self_cmd.distance , 0 , 0 , '' , parsing.LEGACY ],
'drag' : [ self_cmd.drag , 0 , 0 , '' , parsing.STRICT ],
'draw' : [ self_cmd.draw , 0 , 0 , '' , parsing.STRICT ],
'dss' : [ self_cmd.dss , 0 , 0 , '' , parsing.STRICT ],
'dump' : [ self_cmd.dump , 0 , 0 , '' , parsing.STRICT ],
'dummy' : [ self_cmd.dummy , 0 , 0 , '' , parsing.STRICT ],
'edit' : [ self_cmd.edit , 0 , 0 , '' , parsing.STRICT ],
'edit_mode' : [ self_cmd.edit_mode , 0 , 0 , '' , parsing.STRICT ],
'elif' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'else' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'embed' : [ self_cmd.helping.embed , 0 , 3 , ',' , parsing.EMBED ],
'enable' : [ self_cmd.enable , 0 , 0 , '' , parsing.STRICT ],
'ending' : [ self_cmd.ending , 0 , 0 , '' , parsing.STRICT ],
'except' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'extract' : [ self_cmd.extract , 0 , 0 , '' , parsing.STRICT ],
'exec' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'export_dots' : [ self_cmd.export_dots , 0 , 0 , '' , parsing.STRICT ],
'extend' : [ self_cmd.extend , 0 , 0 , '' , parsing.STRICT ],
'fab' : [ self_cmd.fab , 0 , 0 , '' , parsing.STRICT ],
'fast_minimize' : [ self_cmd.fast_minimize , 1, 4 , ',' , parsing.SIMPLE ], # TO REMOVE
'feedback' : [ self_cmd.feedback , 0, 0 , '' , parsing.STRICT ],
'fetch' : [ self_cmd.fetch , 0, 0 , '' , parsing.STRICT ],
'fit' : [ self_cmd.fit , 0 , 0 , '' , parsing.STRICT ],
'finally' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'fix_chemistry' : [ self_cmd.fix_chemistry , 0 , 0 , '' , parsing.STRICT ],
'flag' : [ self_cmd.flag , 0 , 0 , '' , parsing.LEGACY ],
'for' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'fork' : [ self_cmd.helping.spawn , 1 , 2 , ',' , parsing.SPAWN ],
'forward' : [ self_cmd.forward , 0 , 0 , '' , parsing.STRICT ],
'fragment' : [ self_cmd.fragment , 0 , 0 , '' , parsing.STRICT ],
'full_screen' : [ self_cmd.full_screen , 0 , 0 , '' , parsing.STRICT ],
'fuse' : [ self_cmd.fuse , 0 , 0 , '' , parsing.STRICT ],
'frame' : [ self_cmd.frame , 0 , 0 , '' , parsing.STRICT ],
'get' : [ self_cmd.get , 0 , 0 , '' , parsing.STRICT ],
'get_angle' : [ self_cmd.get_angle , 0 , 0 , '' , parsing.STRICT ],
'get_area' : [ self_cmd.get_area , 0 , 0 , '' , parsing.STRICT ],
'get_chains' : [ self_cmd.get_chains , 0 , 0 , '' , parsing.STRICT ],
'get_dihedral' : [ self_cmd.get_dihedral , 0 , 0 , '' , parsing.STRICT ],
'get_distance' : [ self_cmd.get_distance , 0 , 0 , '' , parsing.STRICT ],
'get_extent' : [ self_cmd.get_extent , 0 , 0 , '' , parsing.STRICT ],
'get_position' : [ self_cmd.get_position , 0 , 0 , '' , parsing.STRICT ],
'get_symmetry' : [ self_cmd.get_symmetry , 0 , 0 , '' , parsing.STRICT ],
'get_title' : [ self_cmd.get_title , 0 , 0 , '' , parsing.STRICT ],
'get_type' : [ self_cmd.get_type , 0 , 0 , '' , parsing.STRICT ],
'get_version' : [ self_cmd.get_version , 0 , 0 , '' , parsing.STRICT ],
'get_view' : [ self_cmd.get_view , 0 , 0 , '' , parsing.STRICT ],
'get_viewport' : [ self_cmd.get_viewport , 0 , 0 , '' , parsing.STRICT ],
'global' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'gradient' : [ self_cmd.gradient , 0 , 0 , '' , parsing.STRICT ],
'group' : [ self_cmd.group , 0 , 0 , '' , parsing.STRICT ],
'h_add' : [ self_cmd.h_add , 0 , 0 , '' , parsing.STRICT ],
'h_fill' : [ self_cmd.h_fill , 0 , 0 , '' , parsing.STRICT ],
'h_fix' : [ self_cmd.h_fix , 0 , 0 , '' , parsing.STRICT ],
'help' : [ self_cmd.help , 0 , 0 , '' , parsing.STRICT ],
'hide' : [ self_cmd.hide , 0 , 0 , '' , parsing.STRICT ],
'id_atom' : [ self_cmd.id_atom , 0 , 0 , '' , parsing.STRICT ],
'identify' : [ self_cmd.identify , 0 , 0 , '' , parsing.STRICT ],
'if' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'import' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'index' : [ self_cmd.index , 0 , 0 , '' , parsing.STRICT ],
'indicate' : [ self_cmd.indicate , 0 , 0 , '' , parsing.STRICT ],
'intra_fit' : [ self_cmd.intra_fit , 0 , 0 , '' , parsing.STRICT ],
'intra_rms' : [ self_cmd.intra_rms , 0 , 0 , '' , parsing.STRICT ],
'intra_rms_cur' : [ self_cmd.intra_rms_cur , 0 , 0 , '' , parsing.STRICT ],
'invert' : [ self_cmd.invert , 0 , 0 , '' , parsing.STRICT ],
'isodot' : [ self_cmd.isodot , 0 , 0 , '' , parsing.LEGACY ],
'isolevel' : [ self_cmd.isolevel , 0 , 0 , '' , parsing.STRICT ],
'isomesh' : [ self_cmd.isomesh , 0 , 0 , '' , parsing.LEGACY ],
'isosurface' : [ self_cmd.isosurface , 0 , 0 , '' , parsing.LEGACY ],
'iterate' : [ self_cmd.iterate , 0 , 0 , '' , parsing.LITERAL1 ], # insecure
'iterate_state' : [ self_cmd.iterate_state , 0 , 0 , '' , parsing.LITERAL2 ], # insecure
'label' : [ self_cmd.label , 0 , 0 , '' , parsing.LITERAL1 ], # insecure
'load' : [ self_cmd.load , 0 , 0 , '' , parsing.STRICT ],
'space' : [ self_cmd.space , 0 , 0 , '' , parsing.STRICT ],
'load_embedded' : [ self_cmd.load_embedded , 0 , 0 , '' , parsing.STRICT ],
'load_png' : [ self_cmd.load_png , 0 , 0 , '' , parsing.STRICT ],
'load_traj' : [ self_cmd.load_traj , 0 , 0 , '' , parsing.STRICT ],
'log' : [ self_cmd.log , 0 , 0 , '' , parsing.STRICT ],
'log_close' : [ self_cmd.log_close , 0 , 0 , '' , parsing.STRICT ],
'log_open' : [ self_cmd.log_open , 0 , 0 , '' , parsing.STRICT ],
'ls' : [ self_cmd.ls , 0 , 0 , '' , parsing.STRICT ],
'madd' : [ self_cmd.madd , 0 , 0 , '' , parsing.STRICT ],
'mask' : [ self_cmd.mask , 0 , 0 , '' , parsing.STRICT ],
'map_set' : [ self_cmd.map_set , 0 , 0 , '' , parsing.STRICT ],
'map_set_border': [ self_cmd.map_set_border , 0 , 0 , '' , parsing.STRICT ],
'map_double' : [ self_cmd.map_double , 0 , 0 , '' , parsing.STRICT ],
'map_generate' : [ self_cmd.map_generate , 0 , 0 , '' , parsing.STRICT ],
'map_halve' : [ self_cmd.map_halve , 0 , 0 , '' , parsing.STRICT ],
'map_new' : [ self_cmd.map_new , 0 , 0 , '' , parsing.STRICT ],
'map_trim' : [ self_cmd.map_trim , 0 , 0 , '' , parsing.STRICT ],
'mappend' : [ self_cmd.mappend , 2 , 2 , ':' , parsing.MOVIE ],
'matrix_reset' : [ self_cmd.matrix_reset , 0 , 0 , '' , parsing.STRICT ],
'matrix_copy' : [ self_cmd.matrix_copy , 0 , 0 , '' , parsing.STRICT ],
'matrix_transfer': [ self_cmd.matrix_copy , 0 , 0 , '' , parsing.STRICT ], # LEGACY
'mcopy' : [ self_cmd.mcopy , 0 , 0 , '' , parsing.STRICT ],
'mdelete' : [ self_cmd.mdelete , 0 , 0 , '' , parsing.STRICT ],
'mem' : [ self_cmd.mem , 0 , 0 , '' , parsing.STRICT ],
'meter_reset' : [ self_cmd.meter_reset , 0 , 0 , '' , parsing.STRICT ],
'minsert' : [ self_cmd.minsert , 0 , 0 , '' , parsing.STRICT ],
'mmove' : [ self_cmd.mmove , 0 , 0 , '' , parsing.STRICT ],
'move' : [ self_cmd.move , 0 , 0 , '' , parsing.STRICT ],
'mset' : [ self_cmd.mset , 0 , 0 , '' , parsing.STRICT ],
'mdo' : [ self_cmd.mdo , 2 , 2 , ':' , parsing.MOVIE ],
'mdump' : [ self_cmd.mdump , 0 , 0 , '' , parsing.STRICT ],
'mpng' : [ self_cmd.mpng , 0 , 0 , '' , parsing.SECURE ],
'mplay' : [ self_cmd.mplay , 0 , 0 , '' , parsing.STRICT ],
'mtoggle' : [ self_cmd.mtoggle , 0 , 0 , '' , parsing.STRICT ],
'mray' : [ self_cmd.mray , 0 , 0 , '' , parsing.STRICT ],
'mstop' : [ self_cmd.mstop , 0 , 0 , '' , parsing.STRICT ],
'mclear' : [ self_cmd.mclear , 0 , 0 , '' , parsing.STRICT ],
'middle' : [ self_cmd.middle , 0 , 0 , '' , parsing.STRICT ],
'minimize' : [ self_cmd.minimize , 0 , 4 , ',' , parsing.SIMPLE ], # TO REMOVE
'mmatrix' : [ self_cmd.mmatrix , 0 , 0 , '' , parsing.STRICT ],
'mouse' : [ self_cmd.mouse , 0 , 0 , '' , parsing.STRICT ],
'multisave' : [ self_cmd.multisave , 0 , 0 , '' , parsing.STRICT ],
'mview' : [ self_cmd.mview , 0 , 0 , '' , parsing.STRICT ],
'order' : [ self_cmd.order , 0 , 0 , '' , parsing.STRICT ],
'origin' : [ self_cmd.origin , 0 , 0 , '' , parsing.STRICT ],
'orient' : [ self_cmd.orient , 0 , 0 , '' , parsing.STRICT ],
'overlap' : [ self_cmd.overlap , 0 , 0 , '' , parsing.STRICT ],
'pair_fit' : [ self_cmd.pair_fit , 2 ,98 , ',' , parsing.SIMPLE ],
'pass' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'phi_psi' : [ self_cmd.phi_psi , 0 , 0 , '' , parsing.STRICT ],
'pop' : [ self_cmd.pop , 0 , 0 , '' , parsing.STRICT ],
'protect' : [ self_cmd.protect , 0 , 0 , '' , parsing.STRICT ],
'pseudoatom' : [ self_cmd.pseudoatom , 0 , 0 , '' , parsing.STRICT ],
'push_undo' : [ self_cmd.push_undo , 0 , 0 , '' , parsing.STRICT ],
'pwd' : [ self_cmd.pwd , 0 , 0 , '' , parsing.STRICT ],
'python' : [ self_cmd.helping.python , 0 , 2 , ',' , parsing.PYTHON_BLOCK ],
'skip' : [ self_cmd.helping.skip , 0 , 1 , ',' , parsing.SKIP ],
'raise' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'ramp_new' : [ self_cmd.ramp_new , 0 , 0 , '' , parsing.STRICT ],
'ray' : [ self_cmd.ray , 0 , 0 , '' , parsing.STRICT ],
'rebuild' : [ self_cmd.rebuild , 0 , 0 , '' , parsing.STRICT ],
'recolor' : [ self_cmd.recolor , 0 , 0 , '' , parsing.STRICT ],
'redo' : [ self_cmd.redo , 0 , 0 , '' , parsing.STRICT ],
'reference' : [ self_cmd.reference , 0 , 0 , '' , parsing.STRICT ],
'reinitialize' : [ self_cmd.reinitialize , 0 , 0 , '' , parsing.STRICT ],
'refresh' : [ self_cmd.refresh , 0 , 0 , '' , parsing.STRICT ],
'refresh_wizard': [ self_cmd.refresh_wizard , 0 , 0 , '' , parsing.STRICT ],
'remove' : [ self_cmd.remove , 0 , 0 , '' , parsing.STRICT ],
'remove_picked' : [ self_cmd.remove_picked , 0 , 0 , '' , parsing.STRICT ],
'rename' : [ self_cmd.rename , 0 , 0 , '' , parsing.STRICT ],
'replace' : [ self_cmd.replace , 0 , 0 , '' , parsing.STRICT ],
'replace_wizard': [ self_cmd.replace_wizard , 0 , 0 , '' , parsing.STRICT ],
'reset' : [ self_cmd.reset , 0 , 0 , '' , parsing.STRICT ],
'resume' : [ self_cmd.resume , 0 , 0 , '' , parsing.STRICT ],
'return' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'rewind' : [ self_cmd.rewind , 0 , 0 , '' , parsing.STRICT ],
# 'rgbfunction' : [ self_cmd.rgbfunction , 0 , 0 , '' , parsing.LEGACY ],
'rock' : [ self_cmd.rock , 0 , 0 , '' , parsing.STRICT ],
'rotate' : [ self_cmd.rotate , 0 , 0 , '' , parsing.STRICT ],
'run' : [ self_cmd.helping.run , 1 , 2 , ',' , parsing.RUN ], # insecure
'rms' : [ self_cmd.rms , 0 , 0 , '' , parsing.STRICT ],
'rms_cur' : [ self_cmd.rms_cur , 0 , 0 , '' , parsing.STRICT ],
'save' : [ self_cmd.save , 0 , 0 , '' , parsing.SECURE ],
'scene' : [ self_cmd.scene , 0 , 0 , '' , parsing.STRICT ],
'scene_order' : [ self_cmd.scene_order , 0 , 0 , '' , parsing.STRICT ],
'sculpt_purge' : [ self_cmd.sculpt_purge , 0 , 0 , '' , parsing.STRICT ],
'sculpt_deactivate': [ self_cmd.sculpt_deactivate,0, 0 , '' , parsing.STRICT ],
'sculpt_activate': [ self_cmd.sculpt_activate , 0 , 0 , '' , parsing.STRICT ],
'sculpt_iterate': [ self_cmd.sculpt_iterate , 0 , 0 , '' , parsing.STRICT ],
'spectrum' : [ self_cmd.spectrum , 0 , 0 , '' , parsing.STRICT ],
'select' : [ self_cmd.select , 0 , 0 , '' , parsing.LEGACY ],
'set' : [ self_cmd.set , 0 , 0 , '' , parsing.LEGACY ],
'set_bond' : [ self_cmd.set_bond , 0 , 0 , '' , parsing.STRICT ],
'set_color' : [ self_cmd.set_color , 0 , 0 , '' , parsing.LEGACY ],
'set_dihedral' : [ self_cmd.set_dihedral , 0 , 0 , '' , parsing.STRICT ],
'set_name' : [ self_cmd.set_name , 0 , 0 , '' , parsing.STRICT ],
'set_geometry' : [ self_cmd.set_geometry , 0 , 0 , '' , parsing.STRICT ],
'set_symmetry' : [ self_cmd.set_symmetry , 0 , 0 , '' , parsing.STRICT ],
'set_title' : [ self_cmd.set_title , 0 , 0 , '' , parsing.STRICT ],
'set_key' : [ self_cmd.set_key , 0 , 0 , '' , parsing.STRICT ], # API only
'set_view' : [ self_cmd.set_view , 0 , 0 , '' , parsing.STRICT ],
'show' : [ self_cmd.show , 0 , 0 , '' , parsing.STRICT ],
'slice_new' : [ self_cmd.slice_new , 0 , 0 , '' , parsing.STRICT ],
# 'slice_lock' : [ self_cmd.slice_lock , 0 , 0 , '' , parsing.LEGACY ],
# 'slice_unlock' : [ self_cmd.slice_unlock , 0 , 0 , '' , parsing.LEGACY ],
'smooth' : [ self_cmd.smooth , 0 , 0 , '' , parsing.STRICT ],
'sort' : [ self_cmd.sort , 0 , 0 , '' , parsing.STRICT ],
'spawn' : [ self_cmd.helping.spawn , 1 , 2 , ',' , parsing.SPAWN ], # insecure
'spheroid' : [ self_cmd.spheroid , 0 , 0 , '' , parsing.STRICT ],
'splash' : [ self_cmd.splash , 0 , 0 , '' , parsing.STRICT ],
'split_states' : [ self_cmd.split_states , 0 , 0 , '' , parsing.STRICT ],
'_special' : [ self_cmd._special , 0 , 0 , '' , parsing.STRICT ],
'stereo' : [ self_cmd.stereo , 0 , 0 , '' , parsing.STRICT ],
'super' : [ self_cmd.super , 0 , 0 , '' , parsing.STRICT ],
'symexp' : [ self_cmd.symexp , 0 , 0 , '' , parsing.LEGACY ],
'symmetry_copy' : [ self_cmd.symmetry_copy , 0 , 0 , '' , parsing.STRICT ],
'system' : [ self_cmd.system , 0 , 0 , '' , parsing.LITERAL ],
'test' : [ self_cmd.test , 0 , 0 , '' , parsing.STRICT ],
'toggle' : [ self_cmd.toggle , 0 , 0 , '' , parsing.STRICT ],
'torsion' : [ self_cmd.torsion , 0 , 0 , '' , parsing.STRICT ], # vs toggle_object
'translate' : [ self_cmd.translate , 0 , 0 , '' , parsing.STRICT ],
'try' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'turn' : [ self_cmd.turn , 0 , 0 , '' , parsing.STRICT ],
'quit' : [ self_cmd.quit , 0 , 0 , '' , parsing.STRICT ],
'_quit' : [ self_cmd._quit , 0 , 0 , '' , parsing.STRICT ],
'png' : [ self_cmd.png , 0 , 0 , '' , parsing.SECURE ],
'unbond' : [ self_cmd.unbond , 0 , 0 , '' , parsing.STRICT ],
'unpick' : [ self_cmd.unpick , 0 , 0 , '' , parsing.STRICT ],
'undo' : [ self_cmd.undo , 0 , 0 , '' , parsing.STRICT ],
'ungroup' : [ self_cmd.ungroup , 0 , 0 , '' , parsing.STRICT ],
'unmask' : [ self_cmd.unmask , 0 , 0 , '' , parsing.STRICT ],
'unset' : [ self_cmd.unset , 0 , 0 , '' , parsing.STRICT ],
'unset_bond' : [ self_cmd.unset_bond , 0 , 0 , '' , parsing.STRICT ],
'update' : [ self_cmd.update , 0 , 0 , '' , parsing.STRICT ],
'valence' : [ self_cmd.valence , 0 , 0 , '' , parsing.STRICT ],
'vdw_fit' : [ self_cmd.vdw_fit , 0 , 0 , '' , parsing.STRICT ],
'view' : [ self_cmd.view , 0 , 0 , '' , parsing.STRICT ],
'viewport' : [ self_cmd.viewport , 0 , 0 , '' , parsing.STRICT ],
'volume' : [ self_cmd.volume , 0 , 0 , '' , parsing.STRICT ],
'volume_color' : [ self_cmd.volume_color , 0 , 0 , '' , parsing.STRICT ],
'window' : [ self_cmd.window , 0 , 0 , '' , parsing.STRICT ],
'while' : [ self_cmd.python_help , 0 , 0 , '' , parsing.PYTHON ],
'wizard' : [ self_cmd.wizard , 0 , 0 , '' , parsing.STRICT ],
'zoom' : [ self_cmd.zoom , 0 , 0 , '' , parsing.STRICT ],
# utility programs
'util.cbag' : [ self_cmd.util.cbag , 0 , 0 , '' , parsing.STRICT ],
'util.cbac' : [ self_cmd.util.cbac , 0 , 0 , '' , parsing.STRICT ],
'util.cbay' : [ self_cmd.util.cbay , 0 , 0 , '' , parsing.STRICT ],
'util.cbas' : [ self_cmd.util.cbas , 0 , 0 , '' , parsing.STRICT ],
'util.cbap' : [ self_cmd.util.cbap , 0 , 0 , '' , parsing.STRICT ],
'util.cbak' : [ self_cmd.util.cbak , 0 , 0 , '' , parsing.STRICT ],
'util.cbaw' : [ self_cmd.util.cbaw , 0 , 0 , '' , parsing.STRICT ],
'util.cbab' : [ self_cmd.util.cbab , 0 , 0 , '' , parsing.STRICT ],
'util.cbao' : [ self_cmd.util.cbao , 0 , 0 , '' , parsing.STRICT ],
'util.cbam' : [ self_cmd.util.cbam , 0 , 0 , '' , parsing.STRICT ],
'util.cbc' : [ self_cmd.util.cbc , 0 , 0 , '' , parsing.STRICT ],
'util.chainbow' : [ self_cmd.util.chainbow , 0 , 0 , '' , parsing.STRICT ],
'util.cnc' : [ self_cmd.util.cnc , 0 , 0 , '' , parsing.STRICT ],
'util.colors' : [ self_cmd.util.colors , 0 , 0 , '' , parsing.STRICT ],
'util.mrock' : [ self_cmd.util.mrock , 0 , 0 , '' , parsing.STRICT ], # LEGACY
'util.mroll' : [ self_cmd.util.mroll , 0 , 0 , '' , parsing.STRICT ], # LEGACY
'util.ss' : [ self_cmd.util.ss , 0 , 0 , '' , parsing.STRICT ],# secondary structure
'util.rainbow' : [ self_cmd.util.rainbow , 0 , 0 , '' , parsing.STRICT ],
# movie programs
'movie.load' : [ self_cmd.movie.load , 0 , 0 , '' , parsing.STRICT ],
'movie.nutate' : [ self_cmd.movie.nutate , 0 , 0 , '' , parsing.STRICT ],
'movie.pause' : [ self_cmd.movie.pause , 0 , 0 , '' , parsing.STRICT ],
'movie.produce' : [ self_cmd.movie.produce , 0 , 0 , '' , parsing.STRICT ],
'movie.rock' : [ self_cmd.movie.rock , 0 , 0 , '' , parsing.STRICT ],
'movie.roll' : [ self_cmd.movie.roll , 0 , 0 , '' , parsing.STRICT ],
'movie.screw' : [ self_cmd.movie.screw , 0 , 0 , '' , parsing.STRICT ],
'movie.sweep' : [ self_cmd.movie.sweep , 0 , 0 , '' , parsing.STRICT ],
'movie.tdroll' : [ self_cmd.movie.tdroll , 0 , 0 , '' , parsing.STRICT ],
'movie.zoom' : [ self_cmd.movie.zoom , 0 , 0 , '' , parsing.STRICT ],
# activate metaphorics extensions
# 'metaphorics' : [ self_cmd.metaphorics , 0 , 0 , '' , parsing.STRICT ],
}
def fix_keyword_list(kw_list):
# remove legacy commands from the shortcut list
kw_list.remove('matrix_transfer')
kw_list.remove('util.mroll')
kw_list.remove('util.mrock')
def fix_list(kw_list):
# remove legacy commands from the shortcut list
kw_list.remove('matrix_transfer')
kw_list.remove('util.mroll')
kw_list.remove('util.mrock')
def fix_dict(keyword):
# Prepare for Python 2.6 (not hashed)
keyword['show_as'] = keyword['as']
# Aliases for Mother England (not hashed)
keyword['colour'] = keyword['color']
keyword['set_colour'] = keyword['set_color']
keyword['recolour'] = keyword['recolor']
keyword['bg_colour'] = keyword['bg_color']
def get_help_only_keywords(self_cmd=cmd):
return {
'api' : [ self_cmd.helping.api ],
'editing' : [ self_cmd.helping.editing ],
'edit_keys' : [ self_cmd.helping.edit_keys ],
'examples' : [ self_cmd.helping.examples ],
'faster' : [ self_cmd.helping.faster ],
'get_area' : [ self_cmd.get_area ],
'get_movie_playing' : [ self_cmd.get_movie_playing ],
'get_model' : [ self_cmd.get_model ],
'get_mtl_obj' : [ self_cmd.get_mtl_obj ],
'get_names' : [ self_cmd.get_names ],
'get_object_list' : [ self_cmd.get_object_list ],
'get_object_matrix' : [ self_cmd.get_object_matrix ],
'get_povray' : [ self_cmd.get_povray ],
'get_pdbstr' : [ self_cmd.get_pdbstr ],
'get_symmetry' : [ self_cmd.get_symmetry ],
'get_title' : [ self_cmd.get_title ],
'get_type' : [ self_cmd.get_type ],
'get_version' : [ self_cmd.get_version ],
'keyboard' : [ self_cmd.helping.keyboard ],
'launching' : [ self_cmd.helping.launching ],
'load_model' : [ self_cmd.load_model ],
'mouse' : [ self_cmd.helping.mouse ],
'movies' : [ self_cmd.helping.movies ],
'python' : [ self_cmd.helping.python ],
'python_help' : [ self_cmd.python_help ],
'povray' : [ self_cmd.helping.povray ],
'read_molstr' : [ self_cmd.read_molstr ],
'read_pdbstr' : [ self_cmd.read_pdbstr ],
'read_sdfstr' : [ self_cmd.read_sdfstr ],
'release' : [ self_cmd.helping.release ],
'selections' : [ self_cmd.helping.selections ],
'skip' : [ self_cmd.helping.skip ],
'sync' : [ self_cmd.sync ],
'stereochemistry' : [ self_cmd.helping.stereochemistry ],
'text_type' : [ self_cmd.helping.text_type ],
'transparency' : [ self_cmd.helping.transparency ],
'@' : [ self_cmd.helping.at_sign ],
}
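A minimal sketch (not PyMOL's actual dispatcher) of how a keyword table like the ones above can drive command dispatch. The entry layout `[function, min_args, max_args, separator, parsing_mode]` is assumed from the table's shape; the `zoom` target and the mode constants here are placeholders, not PyMOL's real implementations.

```python
# Placeholder parsing-mode constants, standing in for parsing.STRICT etc.
STRICT, MOVIE = "STRICT", "MOVIE"

def zoom(*args):
    # Stub command target; a real entry would point at a cmd method.
    return ("zoom", args)

# Same five-column shape as the table above:
# [function, min_args, max_args, separator, parsing_mode]
keyword = {
    'zoom':    [zoom, 0, 0, '',  STRICT],
    'mappend': [zoom, 2, 2, ':', MOVIE],  # placeholder target for illustration
}

def dispatch(name, *args):
    # Look up the entry and call its function; a real parser would also
    # enforce the argument counts, separator, and parsing mode.
    func, min_args, max_args, sep, mode = keyword[name]
    return func(*args)
```

The extra columns exist so a single parser can validate and tokenize each command line before calling the target function.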
| 80.217848 | 109 | 0.434807 | 3,016 | 30,563 | 4.204907 | 0.125663 | 0.184356 | 0.201546 | 0.282684 | 0.338038 | 0.127819 | 0.089812 | 0.086027 | 0.025311 | 0.025311 | 0 | 0.033419 | 0.413539 | 30,563 | 380 | 110 | 80.428947 | 0.674124 | 0.034159 | 0 | 0.022857 | 0 | 0 | 0.087437 | 0 | 0 | 0 | 0 | 0 | 0.002857 | 1 | 0.014286 | false | 0.002857 | 0.008571 | 0.005714 | 0.028571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ed34548ab08bf15e9be39a81f7b2f07f675bd1ae | 497 | py | Python | {{ cookiecutter.repo_name }}/tests/test_visualization.py | AnHo4ng/cookiecutter-data-science | 6ecdc9904e5ba2f96301a346a3e3bc80f1187e2e | [
"MIT"
] | 2 | 2021-05-07T13:51:46.000Z | 2021-12-23T14:14:16.000Z | {{ cookiecutter.repo_name }}/tests/test_visualization.py | AnHo4ng/cookiecutter-data-science | 6ecdc9904e5ba2f96301a346a3e3bc80f1187e2e | [
"MIT"
] | null | null | null | {{ cookiecutter.repo_name }}/tests/test_visualization.py | AnHo4ng/cookiecutter-data-science | 6ecdc9904e5ba2f96301a346a3e3bc80f1187e2e | [
"MIT"
] | 1 | 2022-03-15T16:50:28.000Z | 2022-03-15T16:50:28.000Z | # More infos under https://docs.pytest.org
import pytest
from src import visualization
@pytest.fixture
def my_fixture() -> str:
"""Creating a sample fixture.
Returns:
Sample Fixture
"""
return "Fixture"
@pytest.mark.parametrize("input", range(3))
def test_visualization(input: int, my_fixture: str) -> None:
"""A Test for the visulization module.
Args:
input: Sample Input
my_fixture: Sample Fixture
"""
print(input)
assert True
| 17.75 | 60 | 0.651911 | 61 | 497 | 5.245902 | 0.590164 | 0.084375 | 0.075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002653 | 0.241449 | 497 | 27 | 61 | 18.407407 | 0.846154 | 0.392354 | 0 | 0 | 0 | 0 | 0.045977 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.555556 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
ed3b8c6b67f03820f7a588a7590e5887f0519ac1 | 232 | py | Python | pygoat/docker_settings.py | umaranit/newdemopython | 4b05f2c3183d90a367e7b3023d815ad5449512a6 | [
"MIT"
] | null | null | null | pygoat/docker_settings.py | umaranit/newdemopython | 4b05f2c3183d90a367e7b3023d815ad5449512a6 | [
"MIT"
] | null | null | null | pygoat/docker_settings.py | umaranit/newdemopython | 4b05f2c3183d90a367e7b3023d815ad5449512a6 | [
"MIT"
] | null | null | null | from settings import *
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'testDb',
'USER': 'root',
'PASSWORD': 'password',
'HOST': 'db',
'PORT': '',
}
} | 19.333333 | 45 | 0.452586 | 19 | 232 | 5.526316 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.340517 | 232 | 12 | 46 | 19.333333 | 0.686275 | 0 | 0 | 0 | 0 | 0 | 0.347639 | 0.103004 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.090909 | 0.090909 | 0 | 0.090909 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
ed3e1a14350114d150e808d47ab93ea4cabc0185 | 1,178 | py | Python | workflow/forms.py | gustavohenrique/wms | ec3632626d63d1662c0aa1a4693dd091ba55eb39 | [
"CC-BY-3.0"
] | 1 | 2015-08-06T20:58:05.000Z | 2015-08-06T20:58:05.000Z | workflow/forms.py | gustavohenrique/wms | ec3632626d63d1662c0aa1a4693dd091ba55eb39 | [
"CC-BY-3.0"
] | null | null | null | workflow/forms.py | gustavohenrique/wms | ec3632626d63d1662c0aa1a4693dd091ba55eb39 | [
"CC-BY-3.0"
] | 1 | 2020-01-26T20:48:06.000Z | 2020-01-26T20:48:06.000Z | # -*- coding: utf-8 -*-
from django.forms.models import ModelForm
from django.utils import simplejson
from django.forms import ModelChoiceField, Field
from django.forms.fields import CharField, IntegerField
from django.forms.forms import ValidationError
from django.utils.safestring import mark_safe
from django.contrib.auth.models import User
from workflow.models import *
from wms.utils.extjs import ExtJSONEncoder
class StepAdminForm(ModelForm):
#time_limit = DecimalField(max_digits=10, decimal_places=2, required=False, label='Tempo Limite')
class Meta:
model = Step
class WorkForm(ModelForm):
class Meta:
model = Work
#exclude = ('id','previous_step','current_step','owner','reject','reason_reject','datetime_add','datetime_change','status')
fields = ['name','desc','client']
def as_ext(self):
return mark_safe(simplejson.dumps(self,cls=ExtJSONEncoder))
class AttachForm(ModelForm):
user = ModelChoiceField(User.objects.all(), required=False)
work = ModelChoiceField(Work.objects.all(), required=False)
class Meta:
model = Attachment
fields = ['filepath','desc','user','work']
| 30.205128 | 131 | 0.72326 | 142 | 1,178 | 5.922535 | 0.521127 | 0.083234 | 0.071344 | 0.054697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004036 | 0.158744 | 1,178 | 38 | 132 | 31 | 0.844601 | 0.202886 | 0 | 0.125 | 0 | 0 | 0.036364 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.375 | 0.041667 | 0.791667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
ed411e5ab18edc9b57365804f2e2e2494c285d97 | 326 | py | Python | gunicorn-config.py | dmpayton/proxit | 32e7f0c98866157b14acad69ca7a4e40ec65e643 | [
"MIT"
] | null | null | null | gunicorn-config.py | dmpayton/proxit | 32e7f0c98866157b14acad69ca7a4e40ec65e643 | [
"MIT"
] | null | null | null | gunicorn-config.py | dmpayton/proxit | 32e7f0c98866157b14acad69ca7a4e40ec65e643 | [
"MIT"
] | null | null | null | import multiprocessing
import os
def env(key, default=None):
return os.environ.get(key, default)
bind = f"0:{env('PORT', '8000')}"
max_requests = env("GUNICORN_MAX_REQUESTS", 1000)
max_requests_jitter = env("GUNICORN_MAX_REQUESTS_JITTER", 25)
workers = env("GUNICORN_WORKERS", (multiprocessing.cpu_count() * 2) + 1)
| 21.733333 | 72 | 0.730061 | 46 | 326 | 4.956522 | 0.586957 | 0.192982 | 0.122807 | 0.192982 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.122699 | 326 | 14 | 73 | 23.285714 | 0.751748 | 0 | 0 | 0 | 0 | 0 | 0.269939 | 0.150307 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0.125 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
ed424b37cce1f1237d44ccd20011a3d2d30c324c | 663 | py | Python | projects/learning-journal/brain-bit-ingestor/app/harvesters/twitter_harvester.py | DEV3L/archive | 652e37bf949cfcb2174b97ed5b7dbb6285a8dbe8 | [
"Beerware"
] | null | null | null | projects/learning-journal/brain-bit-ingestor/app/harvesters/twitter_harvester.py | DEV3L/archive | 652e37bf949cfcb2174b97ed5b7dbb6285a8dbe8 | [
"Beerware"
] | null | null | null | projects/learning-journal/brain-bit-ingestor/app/harvesters/twitter_harvester.py | DEV3L/archive | 652e37bf949cfcb2174b97ed5b7dbb6285a8dbe8 | [
"Beerware"
] | null | null | null | from tweepy import API
from app.harvesters.twitter.tweets import Tweets
class TwitterHarvester():
def __init__(self, api: API, screen_name: str, last_tweet_id: str):
self.api = api
self.screen_name = screen_name
self.last_tweet_id = last_tweet_id
def fetch(self):
# retrieved_tweets = []
# retrieved_tweets.extend(self._fetch('tweet'))
# retrieved_tweets.extend(self._fetch('favorite'))
tweet = self._fetch('tweet')
return tweet
def _fetch(self, tweet_type):
tweets = Tweets(self.api, self.screen_name, self.last_tweet_id, tweet_type=tweet_type)
return tweets.get()
| 30.136364 | 94 | 0.669683 | 86 | 663 | 4.860465 | 0.302326 | 0.095694 | 0.105263 | 0.08134 | 0.263158 | 0.119617 | 0 | 0 | 0 | 0 | 0 | 0 | 0.226244 | 663 | 21 | 95 | 31.571429 | 0.814815 | 0.174962 | 0 | 0 | 0 | 0 | 0.009208 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.153846 | 0 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
ed4cbffe3b4023f56b05dad0d724a278016b9545 | 1,036 | py | Python | alfred_code/todos_files/todos_helper.py | ecmadao/Alfred-TodoList | f515d9742287acbcba805d10c989514fc8a6e317 | [
"MIT"
] | 9 | 2016-07-03T06:04:55.000Z | 2021-11-02T04:53:09.000Z | alfred_code/todos_files/todos_helper.py | dishantpandya777/Alfred-TodoList | f515d9742287acbcba805d10c989514fc8a6e317 | [
"MIT"
] | null | null | null | alfred_code/todos_files/todos_helper.py | dishantpandya777/Alfred-TodoList | f515d9742287acbcba805d10c989514fc8a6e317 | [
"MIT"
] | 3 | 2020-01-31T10:45:49.000Z | 2022-03-26T19:19:32.000Z | """
todo helpers
"""
# -*- coding: UTF-8 -*-
import os
import re
from random import randrange
from .const_value import TODO_HEADER, ACTIONS
def filter_complete_todos(todo):
re_result = re.search(r'^{}~~(.*)~~\n$'.format(TODO_HEADER), todo)
result = True if re_result else False
return result
def filter_target_todo(argument):
def filter_todo(todo):
target_result = re.search(r'^{todo_header}.*{argument}.*\n$'.format(todo_header=TODO_HEADER, argument=argument), todo)
complete_result = filter_complete_todos(todo)
result = True if target_result and not complete_result else False
return result
return filter_todo
def get_todo_actions():
all_actions = {}
for action in ACTIONS:
icon_path = 'icons/todo_{}.png'.format(action)
action_icon = icon_path if os.path.isfile(icon_path) else get_default_icon()
all_actions[action] = action_icon
return all_actions
def get_default_icon():
default_icons = os.listdir('icons/default')
return 'icons/default/{}'.format(default_icons[randrange(0, len(default_icons))])
| 27.263158 | 120 | 0.750965 | 152 | 1,036 | 4.868421 | 0.309211 | 0.067568 | 0.051351 | 0.062162 | 0.12973 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0022 | 0.122587 | 1,036 | 37 | 121 | 28 | 0.811881 | 0.033784 | 0 | 0.08 | 0 | 0 | 0.091641 | 0.031219 | 0 | 0 | 0 | 0.027027 | 0 | 1 | 0.2 | false | 0 | 0.16 | 0 | 0.56 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
ed51112c9ed9f4302a2ef22ef84622462ff9a1a6 | 874 | py | Python | Python/behavioral_patterns/interpreter/command_list.py | ploukareas/Design-Patterns | 8effde38d73ae9058c3028c97ef395644a90d55b | [
"BSD-3-Clause",
"MIT"
] | 28 | 2018-09-28T07:45:35.000Z | 2022-02-12T12:25:05.000Z | Python/behavioral_patterns/interpreter/command_list.py | ploukareas/Design-Patterns | 8effde38d73ae9058c3028c97ef395644a90d55b | [
"BSD-3-Clause",
"MIT"
] | null | null | null | Python/behavioral_patterns/interpreter/command_list.py | ploukareas/Design-Patterns | 8effde38d73ae9058c3028c97ef395644a90d55b | [
"BSD-3-Clause",
"MIT"
] | 5 | 2021-05-10T23:19:55.000Z | 2022-03-04T20:26:35.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# ˅
from behavioral_patterns.interpreter.node import Node
from behavioral_patterns.interpreter.command import Command
# ˄
class CommandList(Node):
# ˅
# ˄
def __init__(self):
self.__nodes = []
# ˅
pass
# ˄
def parse(self, context):
# ˅
while True:
if context.get_token() is None:
exit('Missing \'end\'')
elif context.get_token() == 'end':
context.slide_token('end')
break
else:
_node = Command()
_node.parse(context)
self.__nodes.append(_node.to_string()) # Hold the parsed node
# ˄
def to_string(self):
# ˅
return '[' + ', '.join(self.__nodes) + ']'
# ˄
# ˅
# ˄
# ˅
# ˄
| 17.137255 | 79 | 0.469108 | 93 | 874 | 4.344086 | 0.505376 | 0.014851 | 0.108911 | 0.163366 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00189 | 0.394737 | 874 | 50 | 80 | 17.48 | 0.73535 | 0.104119 | 0 | 0 | 0 | 0 | 0.024804 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0.052632 | 0.105263 | 0.052632 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |