hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
049ebdfcd73ada2523419ff21494ae5ce7ca37d8 | 46 | py | Python | fortlab/kernel/__init__.py | grnydawn/fortlab | 524daa6dd7c99c1ca4bf6088a8ba3e1bcd096d5d | [
"MIT"
] | null | null | null | fortlab/kernel/__init__.py | grnydawn/fortlab | 524daa6dd7c99c1ca4bf6088a8ba3e1bcd096d5d | [
"MIT"
] | 1 | 2021-03-29T14:54:22.000Z | 2021-03-29T14:54:51.000Z | fortlab/kernel/__init__.py | grnydawn/fortlab | 524daa6dd7c99c1ca4bf6088a8ba3e1bcd096d5d | [
"MIT"
] | null | null | null | from .kernelgen import FortranKernelGenerator
| 23 | 45 | 0.891304 | 4 | 46 | 10.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 1 | 46 | 46 | 0.97619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
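The header row above defines the schema shared by every record that follows. A dependency-free sketch of how the quality-signal columns might be used to screen records — column names are taken verbatim from the header, but the thresholds and the helper name `passes_quality_filter` are illustrative assumptions, not part of the dataset:

```python
def passes_quality_filter(row):
    """Illustrative screen over one record (a dict keyed by the
    header's column names). Thresholds here are assumptions."""
    return (
        row["lang"] == "Python"
        and row["alphanum_fraction"] > 0.25
        and row["max_line_length"] < 1000
        # 0 in the autogen category signal = not flagged as auto-generated
        and row["qsc_code_cate_autogen_quality_signal"] == 0
    )

# Values copied from the first record above (fortlab/kernel/__init__.py).
sample = {
    "lang": "Python",
    "alphanum_fraction": 0.891304,
    "max_line_length": 45,
    "qsc_code_cate_autogen_quality_signal": 0,
}
```

In a real pipeline the same predicate would be applied per row after loading the table (e.g. with a dataframe library), but the logic is the same.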
04a5597245fb4935d13d367b907d7ac91e534a74 | 3,175 | py | Python | stable_baselines/common/identity_env.py | jfsantos/stable-baselines | 5bd4ffa98e364b9e8e8b4e64bc2d1be9b6e4897a | [
"MIT"
] | null | null | null | stable_baselines/common/identity_env.py | jfsantos/stable-baselines | 5bd4ffa98e364b9e8e8b4e64bc2d1be9b6e4897a | [
"MIT"
] | null | null | null | stable_baselines/common/identity_env.py | jfsantos/stable-baselines | 5bd4ffa98e364b9e8e8b4e64bc2d1be9b6e4897a | [
"MIT"
] | 1 | 2019-12-25T16:45:54.000Z | 2019-12-25T16:45:54.000Z | import numpy as np
from gym import Env
from gym.spaces import Discrete, MultiDiscrete, MultiBinary, Box
class IdentityEnv(Env):
def __init__(self, dim, ep_length=100):
"""
Identity environment for testing purposes
:param dim: (int) the size of the dimensions you want to learn
:param ep_length: (int) the length of each episodes in timesteps
"""
self.action_space = Discrete(dim)
self.ep_length = ep_length
self.current_step = 0
self.reset()
def reset(self):
self.current_step = 0
self._choose_next_state()
self.observation_space = self.action_space
return self.state
def step(self, action):
reward = self._get_reward(action)
self._choose_next_state()
self.current_step += 1
done = self.current_step >= self.ep_length
return self.state, reward, done, {}
def _choose_next_state(self):
self.state = self.action_space.sample()
def _get_reward(self, action):
return 1 if self.state == action else 0
def render(self, mode='human'):
pass
class IdentityEnvMultiDiscrete(Env):
def __init__(self, dim, ep_length=100):
"""
Identity environment for testing purposes
:param dim: (int) the size of the dimensions you want to learn
:param ep_length: (int) the length of each episodes in timesteps
"""
self.action_space = MultiDiscrete([dim, dim])
self.dim = dim
self.observation_space = Box(low=0, high=1, shape=(dim * 2,), dtype=int)
self.ep_length = ep_length
self.reset()
def reset(self):
self._choose_next_state()
return self.state
def step(self, action):
reward = self._get_reward(action)
self._choose_next_state()
return self.state, reward, False, {}
def _choose_next_state(self):
state = np.zeros(self.dim*2, dtype=int)
mask = self.action_space.sample()
state[mask[0]] = 1
state[mask[1] + self.dim] = 1
self.state = state
def _get_reward(self, action):
return 1 if np.all(self.state == action) else 0
def render(self, mode='human'):
pass
class IdentityEnvMultiBinary(Env):
def __init__(self, dim, ep_length=100):
"""
Identity environment for testing purposes
:param dim: (int) the size of the dimensions you want to learn
:param ep_length: (int) the length of each episodes in timesteps
"""
self.action_space = MultiBinary(dim)
self.observation_space = Box(low=0, high=1, shape=(dim,), dtype=int)
self.ep_length = ep_length
self.reset()
def reset(self):
self._choose_next_state()
return self.state
def step(self, action):
reward = self._get_reward(action)
self._choose_next_state()
return self.state, reward, False, {}
def _choose_next_state(self):
self.state = self.action_space.sample()
def _get_reward(self, action):
return 1 if np.all(self.state == action) else 0
def render(self, mode='human'):
pass
| 28.863636 | 80 | 0.623307 | 418 | 3,175 | 4.547847 | 0.174641 | 0.054708 | 0.071015 | 0.059968 | 0.794319 | 0.767491 | 0.741715 | 0.741715 | 0.741715 | 0.741715 | 0 | 0.012195 | 0.27685 | 3,175 | 109 | 81 | 29.12844 | 0.815767 | 0.16126 | 0 | 0.681159 | 0 | 0 | 0.005894 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.26087 | false | 0.043478 | 0.043478 | 0.043478 | 0.478261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
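The `identity_env.py` record above uses the old four-tuple `gym` step API, where `step` returns `(observation, reward, done, info)` and the reward is 1 exactly when the action matches the current state. A gym-free sketch of just that reward/termination logic — the class name `MiniIdentityEnv` and the use of `random` in place of `action_space.sample()` are illustrative assumptions, not from the source:

```python
import random


class MiniIdentityEnv:
    """Dependency-free sketch of IdentityEnv's reward/termination logic."""

    def __init__(self, dim, ep_length=100):
        self.dim = dim
        self.ep_length = ep_length
        self.reset()

    def reset(self):
        self.current_step = 0
        self.state = random.randrange(self.dim)
        return self.state

    def step(self, action):
        # Reward is judged against the state the agent observed,
        # *before* the next state is sampled.
        reward = 1 if self.state == action else 0
        self.state = random.randrange(self.dim)
        self.current_step += 1
        done = self.current_step >= self.ep_length
        return self.state, reward, done, {}
```

Echoing the observed state back as the action always earns reward 1, which is what makes the environment a convenient smoke test for RL algorithms.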
04aa051f60d6b77d5ad5f5158a5efb9262542e79 | 76 | py | Python | backend/service/__init__.py | willrp/willorders-ws | de0757d8888dab41095c93500a6a88c813755530 | [
"MIT"
] | null | null | null | backend/service/__init__.py | willrp/willorders-ws | de0757d8888dab41095c93500a6a88c813755530 | [
"MIT"
] | null | null | null | backend/service/__init__.py | willrp/willorders-ws | de0757d8888dab41095c93500a6a88c813755530 | [
"MIT"
] | null | null | null | from .jwt_service import JWTService
from .order_service import OrderService
| 25.333333 | 39 | 0.868421 | 10 | 76 | 6.4 | 0.7 | 0.40625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 76 | 2 | 40 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
04d9a4c3df4ef1fb0b2fd4ae3252e0fc38361839 | 61 | py | Python | code/cheat/cheat/__init__.py | cankai/cankai.github.io | e09a5b13adc475cb695cae03b5573cb446cca096 | [
"Apache-2.0"
] | null | null | null | code/cheat/cheat/__init__.py | cankai/cankai.github.io | e09a5b13adc475cb695cae03b5573cb446cca096 | [
"Apache-2.0"
] | null | null | null | code/cheat/cheat/__init__.py | cankai/cankai.github.io | e09a5b13adc475cb695cae03b5573cb446cca096 | [
"Apache-2.0"
] | null | null | null | from . import sheet
from . import sheets
from . import utils
| 15.25 | 20 | 0.754098 | 9 | 61 | 5.111111 | 0.555556 | 0.652174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196721 | 61 | 3 | 21 | 20.333333 | 0.938776 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b6d1ac03df7184a9d6d9894e0c8d62f04694e027 | 1,407 | py | Python | TeamFight Helper.py | Marchosiae/TeamFIght-Helper | f4ded4e406a6400fbbafe71efe43698eeec95a5a | [
"MIT"
] | 1 | 2020-07-01T14:26:52.000Z | 2020-07-01T14:26:52.000Z | TeamFight Helper.py | Marchosiae/TeamFIght-Helper | f4ded4e406a6400fbbafe71efe43698eeec95a5a | [
"MIT"
] | null | null | null | TeamFight Helper.py | Marchosiae/TeamFIght-Helper | f4ded4e406a6400fbbafe71efe43698eeec95a5a | [
"MIT"
] | null | null | null | import pyautogui
import time
import infi.systray
from infi.systray import SysTrayIcon
systray = SysTrayIcon("icon.ico", "icon",)
systray.start()
#def acceptGame():
# while True:
# time.sleep(2)#UPDATE SEARCH EVERY 2SECOND
# if pyautogui.locateOnScreen('images\Accept\Accept.png'):#IF PICTURE IS ON SCREEN -> CLICK
# #Get the cursor position b4 moving the click
# x, y = pyautogui.position()
# pyautogui.click(pyautogui.locateOnScreen('images\Accept\Accept.png'))
# #Return the cursor to the original position
# pyautogui.moveTo(x, y)
# break
# #IF NOT -> Tell me it is not found yet.
# else:
# print('Not found yet...')
# while True:
# acceptGame()
def acceptGame():
#While Image Not Found LOOP.
while True:
time.sleep(2)#UPDATE SEARCH EVERY 2SECOND
if pyautogui.locateOnScreen('images\Champions\Leona.png'):#IF PICTURE IS ON SCREEN -> CLICK
#Get the cursor position b4 moving the click
x, y = pyautogui.position()
pyautogui.click(pyautogui.locateOnScreen('images\Champions\Leona.png'))
#Return the cursor to the original position
pyautogui.moveTo(x, y)
break
#IF NOT -> Tell me it is not found yet.
else:
print('Not found yet...')
while True:
acceptGame() | 33.5 | 99 | 0.614783 | 170 | 1,407 | 5.088235 | 0.323529 | 0.046243 | 0.134104 | 0.041619 | 0.811561 | 0.811561 | 0.751445 | 0.751445 | 0.751445 | 0.751445 | 0 | 0.005935 | 0.28145 | 1,407 | 42 | 100 | 33.5 | 0.849654 | 0.565743 | 0 | 0.111111 | 0 | 0 | 0.136519 | 0.088737 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.222222 | 0 | 0.277778 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8e1dd4e92fa6588547e84fefe7e8e8b2ec258559 | 66 | py | Python | awx/sso/tests/unit/test_pipeline.py | gitEdouble/awx | 5885654405ccaf465f08df4db998a6dafebd9b4d | [
"Apache-2.0"
] | 11,396 | 2017-09-07T04:56:02.000Z | 2022-03-31T13:56:17.000Z | awx/sso/tests/unit/test_pipeline.py | gitEdouble/awx | 5885654405ccaf465f08df4db998a6dafebd9b4d | [
"Apache-2.0"
] | 11,046 | 2017-09-07T09:30:46.000Z | 2022-03-31T20:28:01.000Z | awx/sso/tests/unit/test_pipeline.py | gitEdouble/awx | 5885654405ccaf465f08df4db998a6dafebd9b4d | [
"Apache-2.0"
] | 3,592 | 2017-09-07T04:14:31.000Z | 2022-03-31T23:53:09.000Z | def test_module_loads():
from awx.sso import pipeline # noqa
| 22 | 40 | 0.727273 | 10 | 66 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19697 | 66 | 2 | 41 | 33 | 0.867925 | 0.060606 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f3f6b653d0ac350ce026d5add1e413f1c2047146 | 254 | py | Python | django/contrib/messages/tests/__init__.py | kix/django | 5262a288df07daa050a0e17669c3f103f47a8640 | [
"BSD-3-Clause"
] | 790 | 2015-01-03T02:13:39.000Z | 2020-05-10T19:53:57.000Z | django/contrib/messages/tests/__init__.py | mradziej/django | 5d38965743a369981c9a738a298f467f854a2919 | [
"BSD-3-Clause"
] | 1,361 | 2015-01-08T23:09:40.000Z | 2020-04-14T00:03:04.000Z | django/contrib/messages/tests/__init__.py | mradziej/django | 5d38965743a369981c9a738a298f467f854a2919 | [
"BSD-3-Clause"
] | 155 | 2015-01-08T22:59:31.000Z | 2020-04-08T08:01:53.000Z | from django.contrib.messages.tests.cookie import CookieTest
from django.contrib.messages.tests.fallback import FallbackTest
from django.contrib.messages.tests.middleware import MiddlewareTest
from django.contrib.messages.tests.session import SessionTest
| 50.8 | 67 | 0.874016 | 32 | 254 | 6.9375 | 0.4375 | 0.18018 | 0.306306 | 0.45045 | 0.540541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.062992 | 254 | 4 | 68 | 63.5 | 0.932773 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6d12fedeb1c73ee638fd8f24f33a0ec2a1c6342b | 142 | py | Python | views.py | caesarbonicillo/pythonclub2 | 414d64d9057a7a05219c356bd06403fd600358fd | [
"MIT"
] | null | null | null | views.py | caesarbonicillo/pythonclub2 | 414d64d9057a7a05219c356bd06403fd600358fd | [
"MIT"
] | null | null | null | views.py | caesarbonicillo/pythonclub2 | 414d64d9057a7a05219c356bd06403fd600358fd | [
"MIT"
] | null | null | null | from django.shortcuts import render
# Create your views here.
def index (request):
return render(request, 'pythonclubapp/index.html') | 28.4 | 54 | 0.746479 | 18 | 142 | 5.888889 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161972 | 142 | 5 | 54 | 28.4 | 0.890756 | 0.161972 | 0 | 0 | 0 | 0 | 0.210526 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
6d53bf7379dbbd3b0a5ac3c3d260663cedcd8bee | 146 | py | Python | python/src/orders/__init__.py | KrishanBhalla/MatchingEngines | f085c0eb2c1aa85267b942bcb1dab09b0fc66406 | [
"MIT"
] | null | null | null | python/src/orders/__init__.py | KrishanBhalla/MatchingEngines | f085c0eb2c1aa85267b942bcb1dab09b0fc66406 | [
"MIT"
] | null | null | null | python/src/orders/__init__.py | KrishanBhalla/MatchingEngines | f085c0eb2c1aa85267b942bcb1dab09b0fc66406 | [
"MIT"
] | null | null | null | from .limit_order import LimitOrder
from .market_order import MarketOrder
from .base_order import BaseOrder
from .cancel_order import CancelOrder
| 29.2 | 37 | 0.863014 | 20 | 146 | 6.1 | 0.55 | 0.360656 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109589 | 146 | 4 | 38 | 36.5 | 0.938462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6d56358f590c54d2169c9bc587364160f1198ef8 | 164 | py | Python | geometric-shapes/Square.py | GenaBitu/ISDe-exercises | 948209e2a6f292217933cc4228615c9270d5fd4a | [
"MIT"
] | null | null | null | geometric-shapes/Square.py | GenaBitu/ISDe-exercises | 948209e2a6f292217933cc4228615c9270d5fd4a | [
"MIT"
] | null | null | null | geometric-shapes/Square.py | GenaBitu/ISDe-exercises | 948209e2a6f292217933cc4228615c9270d5fd4a | [
"MIT"
] | null | null | null | from Polygon import Polygon;
class Square(Polygon):
def __init__(self, side):
self.side = side;
def perimeter(self):
return 4 * self.side;
| 20.5 | 29 | 0.634146 | 21 | 164 | 4.761905 | 0.571429 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008264 | 0.262195 | 164 | 7 | 30 | 23.428571 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
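The `Square.py` record above imports `Polygon` from a module not included in this dump, so the snippet is not runnable on its own. A self-contained sketch with a minimal stand-in base class — the `Polygon` definition here is an assumption about the unshown module, inferred from how `Square` uses it:

```python
class Polygon:
    """Hypothetical minimal base class; the original imports Polygon
    from a separate module that is not part of this record."""

    def __init__(self, side):
        self.side = side


class Square(Polygon):
    def perimeter(self):
        # A square has four sides of equal length.
        return 4 * self.side
```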
edba8d9eb1af1829b67d0a42906a9a5eb3c94dd5 | 138 | py | Python | app/sample_nature/__init__.py | kid-kodi/BioBank | 27c7cb7286dcae737fa53c245456d60857fe949f | [
"MIT"
] | null | null | null | app/sample_nature/__init__.py | kid-kodi/BioBank | 27c7cb7286dcae737fa53c245456d60857fe949f | [
"MIT"
] | null | null | null | app/sample_nature/__init__.py | kid-kodi/BioBank | 27c7cb7286dcae737fa53c245456d60857fe949f | [
"MIT"
] | null | null | null | from flask import Blueprint
bp = Blueprint('sample_nature', __name__, template_folder='templates')
from app.sample_nature import routes
| 23 | 70 | 0.811594 | 18 | 138 | 5.833333 | 0.722222 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 138 | 5 | 71 | 27.6 | 0.853659 | 0 | 0 | 0 | 0 | 0 | 0.15942 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
b630da173986dcb80e7a0c3e08fc5af910fc7416 | 24,792 | py | Python | src/tests/test_pagure_flask_ui_remote_pr.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | src/tests/test_pagure_flask_ui_remote_pr.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | src/tests/test_pagure_flask_ui_remote_pr.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
"""
(c) 2018 - Copyright Red Hat Inc
Authors:
Pierre-Yves Chibon <pingou@pingoured.fr>
"""
from __future__ import unicode_literals, absolute_import
import json
import os
import re
import shutil
import sys
import tempfile
import time
import unittest
import pygit2
import wtforms
from mock import patch, MagicMock
from bs4 import BeautifulSoup
sys.path.insert(
0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "..")
)
import pagure.lib.query
import tests
from pagure.lib.repo import PagureRepo
from pagure.lib.git import _make_signature
class PagureRemotePRtests(tests.Modeltests):
""" Tests for remote PRs in pagure """
def setUp(self):
""" Set up the environment. """
super(PagureRemotePRtests, self).setUp()
self.newpath = tempfile.mkdtemp(prefix="pagure-fork-test")
self.old_value = pagure.config.config["REMOTE_GIT_FOLDER"]
pagure.config.config["REMOTE_GIT_FOLDER"] = os.path.join(
self.path, "remotes"
)
def tearDown(self):
""" Clear things up. """
super(PagureRemotePRtests, self).tearDown()
pagure.config.config["REMOTE_GIT_FOLDER"] = self.old_value
shutil.rmtree(self.newpath)
def set_up_git_repo(self, new_project=None, branch_from="feature"):
"""Set up the git repo and create the corresponding PullRequest
object.
"""
# Create a git repo to play with
gitrepo = os.path.join(self.path, "repos", "test.git")
repo = pygit2.init_repository(gitrepo, bare=True)
repopath = os.path.join(self.newpath, "test")
clone_repo = pygit2.clone_repository(gitrepo, repopath)
# Create a file in that git repo
with open(os.path.join(repopath, "sources"), "w") as stream:
stream.write("foo\n bar")
clone_repo.index.add("sources")
clone_repo.index.write()
try:
com = repo.revparse_single("HEAD")
prev_commit = [com.oid.hex]
except:
prev_commit = []
# Commits the files added
tree = clone_repo.index.write_tree()
author = _make_signature("Alice Author", "alice@authors.tld")
committer = _make_signature("Cecil Committer", "cecil@committers.tld")
clone_repo.create_commit(
"refs/heads/master", # the name of the reference to update
author,
committer,
"Add sources file for testing",
# binary string representing the tree object ID
tree,
# list of binary strings representing parents of the new commit
prev_commit,
)
# time.sleep(1)
refname = "refs/heads/master:refs/heads/master"
ori_remote = clone_repo.remotes[0]
PagureRepo.push(ori_remote, refname)
first_commit = repo.revparse_single("HEAD")
with open(os.path.join(repopath, ".gitignore"), "w") as stream:
stream.write("*~")
clone_repo.index.add(".gitignore")
clone_repo.index.write()
# Commits the files added
tree = clone_repo.index.write_tree()
author = _make_signature("Alice Äuthòr", "alice@äuthòrs.tld")
committer = _make_signature("Cecil Cõmmîttër", "cecil@cõmmîttërs.tld")
clone_repo.create_commit(
"refs/heads/master",
author,
committer,
"Add .gitignore file for testing",
# binary string representing the tree object ID
tree,
# list of binary strings representing parents of the new commit
[first_commit.oid.hex],
)
refname = "refs/heads/master:refs/heads/master"
ori_remote = clone_repo.remotes[0]
PagureRepo.push(ori_remote, refname)
# Set the second repo
new_gitrepo = repopath
if new_project:
# Create a new git repo to play with
new_gitrepo = os.path.join(self.newpath, new_project.fullname)
if not os.path.exists(new_gitrepo):
os.makedirs(new_gitrepo)
new_repo = pygit2.clone_repository(gitrepo, new_gitrepo)
repo = pygit2.Repository(new_gitrepo)
# Edit the sources file again
with open(os.path.join(new_gitrepo, "sources"), "w") as stream:
stream.write("foo\n bar\nbaz\n boose")
repo.index.add("sources")
repo.index.write()
# Commits the files added
tree = repo.index.write_tree()
author = _make_signature("Alice Author", "alice@authors.tld")
committer = _make_signature("Cecil Committer", "cecil@committers.tld")
repo.create_commit(
"refs/heads/%s" % branch_from,
author,
committer,
"A commit on branch %s" % branch_from,
tree,
[first_commit.oid.hex],
)
refname = "refs/heads/%s" % (branch_from)
ori_remote = repo.remotes[0]
PagureRepo.push(ori_remote, refname)
@patch("pagure.lib.notify.send_email", MagicMock(return_value=True))
def test_new_remote_pr_unauth(self):
""" Test creating a new remote PR un-authenticated. """
tests.create_projects(self.session)
tests.create_projects_git(
os.path.join(self.path, "requests"), bare=True
)
self.set_up_git_repo()
# Before
project = pagure.lib.query.get_authorized_project(self.session, "test")
self.assertEqual(len(project.requests), 0)
# Try creating a remote PR
output = self.app.get("/test/diff/remote")
self.assertEqual(output.status_code, 302)
self.assertIn(
"You should be redirected automatically to target URL: "
'<a href="/login/?',
output.get_data(as_text=True),
)
@patch("pagure.lib.notify.send_email", MagicMock(return_value=True))
def test_new_remote_pr_auth(self):
""" Test creating a new remote PR authenticated. """
tests.create_projects(self.session)
tests.create_projects_git(
os.path.join(self.path, "requests"), bare=True
)
self.set_up_git_repo()
# Before
self.session = pagure.lib.query.create_session(self.dbpath)
project = pagure.lib.query.get_authorized_project(self.session, "test")
self.assertEqual(len(project.requests), 0)
# Try creating a remote PR
user = tests.FakeUser(username="foo")
with tests.user_set(self.app.application, user):
output = self.app.get("/test/diff/remote")
self.assertEqual(output.status_code, 200)
self.assertIn(
"<h2>New remote pull-request</h2>",
output.get_data(as_text=True),
)
csrf_token = self.get_csrf(output=output)
with patch(
"pagure.forms.RemoteRequestPullForm.git_repo.args",
MagicMock(
return_value=(
"Git Repo address",
[wtforms.validators.DataRequired()],
)
),
):
data = {
"csrf_token": csrf_token,
"title": "Remote PR title",
"branch_from": "feature",
"branch_to": "master",
"git_repo": os.path.join(self.newpath, "test"),
}
output = self.app.post("/test/diff/remote", data=data)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn("Create Pull Request\n </div>\n", output_text)
self.assertIn('<div class="card mb-3" id="_1">\n', output_text)
self.assertIn('<div class="card mb-3" id="_2">\n', output_text)
self.assertNotIn(
'<div class="card mb-3" id="_3">\n', output_text
)
# Not saved yet
self.session = pagure.lib.query.create_session(self.dbpath)
project = pagure.lib.query.get_authorized_project(
self.session, "test"
)
self.assertEqual(len(project.requests), 0)
data = {
"csrf_token": csrf_token,
"title": "Remote PR title",
"branch_from": "feature",
"branch_to": "master",
"git_repo": os.path.join(self.newpath, "test"),
"confirm": 1,
}
self.old_value = pagure.config.config["DISABLE_REMOTE_PR"]
pagure.config.config["DISABLE_REMOTE_PR"] = True
output = self.app.post(
"/test/diff/remote", data=data, follow_redirects=True
)
self.assertEqual(output.status_code, 404)
pagure.config.config["DISABLE_REMOTE_PR"] = self.old_value
output = self.app.post(
"/test/diff/remote", data=data, follow_redirects=True
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
'<span class="text-success font-weight-bold">#1',
output_text,
)
self.assertIn('<div class="card mb-3" id="_1">\n', output_text)
self.assertIn('<div class="card mb-3" id="_2">\n', output_text)
self.assertNotIn(
'<div class="card mb-3" id="_3">\n', output_text
)
# Show the filename in the Changes summary
self.assertIn(
'<a href="#_1" class="list-group-item', output_text
)
self.assertIn(
'<div class="ellipsis pr-changes-description">'
"\n <small>.gitignore</small>",
output_text,
)
self.assertIn(
'<a href="#_2" class="list-group-item', output_text
)
self.assertIn(
'<div class="ellipsis pr-changes-description">'
"\n <small>sources</small>",
output_text,
)
# Remote PR Created
self.session = pagure.lib.query.create_session(self.dbpath)
project = pagure.lib.query.get_authorized_project(self.session, "test")
self.assertEqual(len(project.requests), 1)
@patch("pagure.lib.notify.send_email", MagicMock(return_value=True))
def test_new_remote_no_title(self):
"""Test creating a new remote PR authenticated when no title is
specified."""
tests.create_projects(self.session)
tests.create_projects_git(
os.path.join(self.path, "requests"), bare=True
)
self.set_up_git_repo()
# Before
self.session = pagure.lib.query.create_session(self.dbpath)
project = pagure.lib.query.get_authorized_project(self.session, "test")
self.assertEqual(len(project.requests), 0)
# Try creating a remote PR
user = tests.FakeUser(username="foo")
with tests.user_set(self.app.application, user):
output = self.app.get("/test/diff/remote")
self.assertEqual(output.status_code, 200)
self.assertIn(
"<h2>New remote pull-request</h2>",
output.get_data(as_text=True),
)
csrf_token = self.get_csrf(output=output)
with patch(
"pagure.forms.RemoteRequestPullForm.git_repo.args",
MagicMock(
return_value=(
"Git Repo address",
[wtforms.validators.DataRequired()],
)
),
):
data = {
"csrf_token": csrf_token,
"branch_from": "master",
"branch_to": "feature",
"git_repo": os.path.join(self.newpath, "test"),
}
output = self.app.post("/test/diff/remote", data=data)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn("<h2>New remote pull-request</h2>", output_text)
self.assertIn("<option selected>feature</option>", output_text)
@patch("pagure.lib.notify.send_email", MagicMock(return_value=True))
def test_new_remote_pr_empty_target(self):
"""Test creating a new remote PR authenticated against an empty
git repo."""
tests.create_projects(self.session)
tests.create_projects_git(
os.path.join(self.path, "requests"), bare=True
)
# Create empty target git repo
gitrepo = os.path.join(self.path, "repos", "test.git")
pygit2.init_repository(gitrepo, bare=True)
# Create git repo we'll pull from
gitrepo = os.path.join(self.path, "repos", "test_origin.git")
repo = pygit2.init_repository(gitrepo)
# Create a file in that git repo
with open(os.path.join(gitrepo, "sources"), "w") as stream:
stream.write("foo\n bar")
repo.index.add("sources")
repo.index.write()
prev_commit = []
# Commits the files added
tree = repo.index.write_tree()
author = _make_signature("Alice Author", "alice@authors.tld")
committer = _make_signature("Cecil Committer", "cecil@committers.tld")
repo.create_commit(
"refs/heads/feature", # the name of the reference to update
author,
committer,
"Add sources file for testing",
# binary string representing the tree object ID
tree,
# list of binary strings representing parents of the new commit
prev_commit,
)
# Before
self.session = pagure.lib.query.create_session(self.dbpath)
project = pagure.lib.query.get_authorized_project(self.session, "test")
self.assertEqual(len(project.requests), 0)
# Try creating a remote PR
user = tests.FakeUser(username="foo")
with tests.user_set(self.app.application, user):
output = self.app.get("/test/diff/remote")
self.assertEqual(output.status_code, 200)
self.assertIn(
"<h2>New remote pull-request</h2>",
output.get_data(as_text=True),
)
csrf_token = self.get_csrf(output=output)
with patch(
"pagure.forms.RemoteRequestPullForm.git_repo.args",
MagicMock(
return_value=(
"Git Repo address",
[wtforms.validators.DataRequired()],
)
),
):
data = {
"csrf_token": csrf_token,
"title": "Remote PR title",
"branch_from": "feature",
"branch_to": "master",
"git_repo": gitrepo,
}
output = self.app.post("/test/diff/remote", data=data)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn("Create Pull Request\n </div>\n", output_text)
self.assertIn('<div class="card mb-3" id="_1">\n', output_text)
self.assertNotIn(
'<div class="card mb-3" id="_2">\n', output_text
)
# Not saved yet
self.session = pagure.lib.query.create_session(self.dbpath)
project = pagure.lib.query.get_authorized_project(
self.session, "test"
)
self.assertEqual(len(project.requests), 0)
data = {
"csrf_token": csrf_token,
"title": "Remote PR title",
"branch_from": "feature",
"branch_to": "master",
"git_repo": gitrepo,
"confirm": 1,
}
output = self.app.post(
"/test/diff/remote", data=data, follow_redirects=True
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
"<title>PR#1: Remote PR title - test\n - Pagure</title>",
output_text,
)
self.assertIn('<div class="card mb-3" id="_1">\n', output_text)
self.assertNotIn(
'<div class="card mb-3" id="_2">\n', output_text
)
# Show the filename in the Changes summary
self.assertIn(
'<a href="#_1" class="list-group-item', output_text
)
self.assertIn(
'<div class="ellipsis pr-changes-description">'
"\n <small>sources</small>",
output_text,
)
# Remote PR Created
self.session = pagure.lib.query.create_session(self.dbpath)
project = pagure.lib.query.get_authorized_project(self.session, "test")
self.assertEqual(len(project.requests), 1)
# Check the merge state of the PR
data = {"csrf_token": csrf_token, "requestid": project.requests[0].uid}
output = self.app.post("/pv/pull-request/merge", data=data)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
data = json.loads(output_text)
self.assertEqual(
data,
{
"code": "FFORWARD",
"message": "The pull-request can be merged and fast-forwarded",
"short_code": "Ok",
},
)
user = tests.FakeUser(username="pingou")
with tests.user_set(self.app.application, user):
# Merge the PR
data = {"csrf_token": csrf_token}
output = self.app.post(
"/test/pull-request/1/merge", data=data, follow_redirects=True
)
output_text = output.get_data(as_text=True)
self.assertEqual(output.status_code, 200)
self.assertIn(
"<title>PR#1: Remote PR title - test\n - Pagure</title>",
output_text,
)
@patch("pagure.lib.notify.send_email", MagicMock(return_value=True))
@patch("pagure.lib.tasks_services.trigger_ci_build")
def test_new_remote_pr_ci_off(self, trigger_ci):
""" Test creating a new remote PR when CI is not configured. """
tests.create_projects(self.session)
tests.create_projects_git(
os.path.join(self.path, "requests"), bare=True
)
self.set_up_git_repo()
# Before
self.session = pagure.lib.query.create_session(self.dbpath)
project = pagure.lib.query.get_authorized_project(self.session, "test")
self.assertEqual(len(project.requests), 0)
# Create a remote PR
user = tests.FakeUser(username="foo")
with tests.user_set(self.app.application, user):
csrf_token = self.get_csrf()
data = {
"csrf_token": csrf_token,
"title": "Remote PR title",
"branch_from": "feature",
"branch_to": "master",
"git_repo": os.path.join(self.newpath, "test"),
}
with patch(
"pagure.forms.RemoteRequestPullForm.git_repo.args",
MagicMock(
return_value=(
"Git Repo address",
[wtforms.validators.DataRequired()],
)
),
):
output = self.app.post(
"/test/diff/remote", data=data, follow_redirects=True
)
self.assertEqual(output.status_code, 200)
data["confirm"] = 1
output = self.app.post(
"/test/diff/remote", data=data, follow_redirects=True
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
'<span class="text-success font-weight-bold">#1',
output_text,
)
self.assertIn('<div class="card mb-3" id="_1">\n', output_text)
self.assertIn('<div class="card mb-3" id="_2">\n', output_text)
self.assertNotIn(
'<div class="card mb-3" id="_3">\n', output_text
)
# Remote PR Created
self.session = pagure.lib.query.create_session(self.dbpath)
project = pagure.lib.query.get_authorized_project(self.session, "test")
self.assertEqual(len(project.requests), 1)
trigger_ci.assert_not_called()
@patch("pagure.lib.notify.send_email", MagicMock(return_value=True))
@patch("pagure.lib.tasks_services.trigger_ci_build")
def test_new_remote_pr_ci_on(self, trigger_ci):
""" Test creating a new remote PR when CI is configured. """
tests.create_projects(self.session)
tests.create_projects_git(
os.path.join(self.path, "requests"), bare=True
)
self.set_up_git_repo()
# Before
self.session = pagure.lib.query.create_session(self.dbpath)
project = pagure.lib.query.get_authorized_project(self.session, "test")
self.assertEqual(len(project.requests), 0)
# Create a remote PR
user = tests.FakeUser(username="pingou")
with tests.user_set(self.app.application, user):
csrf_token = self.get_csrf()
# Activate CI hook
data = {
"active_pr": "y",
"ci_url": "https://jenkins.fedoraproject.org",
"ci_job": "test/job",
"ci_type": "jenkins",
"csrf_token": csrf_token,
}
output = self.app.post(
"/test/settings/Pagure CI", data=data, follow_redirects=True
)
self.assertEqual(output.status_code, 200)
user = tests.FakeUser(username="foo")
with tests.user_set(self.app.application, user):
data = {
"csrf_token": csrf_token,
"title": "Remote PR title",
"branch_from": "feature",
"branch_to": "master",
"git_repo": os.path.join(self.newpath, "test"),
}
# Disables checking the URL pattern for git_repo
with patch(
"pagure.forms.RemoteRequestPullForm.git_repo.args",
MagicMock(
return_value=(
"Git Repo address",
[wtforms.validators.DataRequired()],
)
),
):
# Do the preview, triggers the cache & all
output = self.app.post(
"/test/diff/remote", data=data, follow_redirects=True
)
self.assertEqual(output.status_code, 200)
# Confirm the PR creation
data["confirm"] = 1
output = self.app.post(
"/test/diff/remote", data=data, follow_redirects=True
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
'<span class="text-success font-weight-bold">#1',
output_text,
)
self.assertIn('<div class="card mb-3" id="_1">\n', output_text)
self.assertIn('<div class="card mb-3" id="_2">\n', output_text)
self.assertNotIn(
'<div class="card mb-3" id="_3">\n', output_text
)
# Remote PR Created
self.session = pagure.lib.query.create_session(self.dbpath)
project = pagure.lib.query.get_authorized_project(self.session, "test")
self.assertEqual(len(project.requests), 1)
trigger_ci.assert_not_called()
if __name__ == "__main__":
unittest.main(verbosity=2)

# File: tests/test_create_clutter_flag.py (josephhardinee/rca, MIT)

# import pytest
# import rca
# import numpy as np
# from rca.modules import create_clutter_flag
# from rca.modules import create_masks
# testdata = np.load('/Users/hunz743/projects/github/rca/testdata/sample_var_arrays_ppi.npy').item()
# polarization = 'dual'
# range_limit = 5000
# z_thresh = 40.
# @pytest.mark.parametrize("testdict", testdata)
# def test_timedistance_v0(a, b, expected):
# def test_create_clutter_flag_ppi_returns_array():
# ''' Tests whether create_clutter_flag_ppi returns a string and 2 np array objects
# '''
# testdata = np.load('/Users/hunz743/projects/github/rca/testdata/sample_var_arrays_ppi.npy').item()
# polarization = 'dual'
# range_limit = 5000
# z_thresh = 40.
# ret_value = create_clutter_flag.create_clutter_flag_ppi(testdata,polarization,range_limit,z_thresh)
# #print(ret_value)
# assert type(ret_value[0]) == str
# assert type(ret_value[1]) == np.ndarray
# assert type(ret_value[2]) == np.ndarray
# def test_create_clutter_flag_ppi_returns_binary():
# ''' Tests whether create_clutter_flag_ppi returns only 0 or 1 in arrays
# '''
# testdata = np.load('/Users/hunz743/projects/github/rca/testdata/sample_var_arrays_ppi.npy').item()
# polarization = 'dual'
# range_limit = 5000
# z_thresh = 40.
# ret_value = create_clutter_flag.create_clutter_flag_ppi(testdata,polarization,range_limit,z_thresh)
# #print(ret_value)
# assert ret_value[1][0,0] == 0. or ret_value[1][0,0] == 1., 'Improper gate flagging'
# assert ret_value[2][0,0] == 0. or ret_value[2][0,0] == 1., 'Improper gate flagging'
# def test_create_clutter_flag_rhi_returns_array():
# ''' Tests whether create_clutter_flag_hsrhi returns a string and 2 np array objects
# '''
# testdata = np.load('/Users/hunz743/projects/github/rca/testdata/sample_var_arrays_rhi.npy').item()
# polarization = 'horizontal'
# range_limit = 5000
# z_thresh = 40.
# ret_value = create_clutter_flag.create_clutter_flag_hsrhi(testdata,polarization,range_limit,z_thresh)
# #print(ret_value)
# assert type(ret_value[0]) == str
# assert type(ret_value[1]) == np.ndarray
# def test_create_clutter_flag_rhi_returns_binary():
# ''' Tests whether create_clutter_flag_hsrhi returns only 0 or 1 in arrays
# '''
# testdata = np.load('/Users/hunz743/projects/github/rca/testdata/sample_var_arrays_rhi.npy').item()
# polarization = 'horizontal'
# range_limit = 5000
# z_thresh = 40.
# ret_value = create_clutter_flag.create_clutter_flag_hsrhi(testdata,polarization,range_limit,z_thresh)
# print(ret_value[1].shape)
# assert ret_value[1][0,0,0] == 0. or ret_value[1][0,0,0] == 1., 'Improper gate flagging'
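The commented-out assertions above repeatedly check that every clutter-flag gate is exactly 0.0 or 1.0. That check can be factored into a small stdlib helper; the `is_binary` name below is illustrative and not part of `create_clutter_flag`:

```python
def is_binary(values):
    """Return True when every flag value is exactly 0.0 or 1.0."""
    return all(v in (0.0, 1.0) for v in values)

print(is_binary([0.0, 1.0, 1.0, 0.0]))  # True: valid gate flags
print(is_binary([0.0, 0.5]))            # False: 0.5 is improper gate flagging
```

Applied to `ret_value[1].ravel()` this would replace the per-element assertions in the tests above.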

# File: src/libs/init/__greet.py (sguzman/py-goodreads-data-cleaner, Unlicense)

import logging
def exec() -> None:
logging.debug('hi')

# File: py_tdlib/constructors/connection_state_ready.py (Mr-TelegramBot/python-tdlib, MIT)

from ..factory import Type
class connectionStateReady(Type):
pass

# File: account/views.py (shayan72/Courseware, MIT)

from django.shortcuts import render, redirect
from course.models import CourseInstance
# Create your views here.
def home(request, username):
return render( request, 'account/student_home.html' )

# File: model.py (giorgosdrainakis/dml, CC0-1.0)

import torch.nn as nn
import torch.nn.functional as F
import Tools
import Global
class MNIST_Model(nn.Module):
def __init__(self):
super(MNIST_Model, self).__init__()
self.fc1 = nn.Linear(784, 500) #28x28 images
self.fc2 = nn.Linear(500, 10)
self.size=float(1.55) #Mb
def forward(self, x):
x = x.view(-1, 784)
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
def get_size(self):
return self.size
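The hard-coded `self.size = 1.55` of `MNIST_Model` can be sanity-checked by counting the parameters of its two `nn.Linear` layers; a plain-Python sketch (the `linear_params` helper is ours, for illustration only):

```python
def linear_params(in_features, out_features):
    # weight matrix entries plus one bias per output unit
    return in_features * out_features + out_features

params = linear_params(784, 500) + linear_params(500, 10)
size_mib = params * 4 / 2**20  # float32 weights, 4 bytes each
print(params, round(size_mib, 2))  # 397510 parameters, ~1.52 MiB
```

which lands close to the 1.55 Mb figure stored on the model.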
class INFIMNIST_Model_big(nn.Module):
def __init__(self):
super(INFIMNIST_Model_big, self).__init__()
self.fc1 = nn.Linear(784, 512) #28x28 images
self.fc2 = nn.Linear(512, 256)
self.fc3 = nn.Linear(256, 128)
self.fc4 = nn.Linear(128, 64)
self.fc5 = nn.Linear(64, 10)
self.size=float(1.55) #Mb
def forward(self, xb):
out = xb.view(xb.size(0), -1)
out = self.fc1(out)
out = F.relu(out)
out = self.fc2(out)
out = F.relu(out)
out = self.fc3(out)
out = F.relu(out)
out = self.fc4(out)
out = F.relu(out)
out = self.fc5(out)
return F.log_softmax(out, dim=1)
def get_size(self):
return self.size
class INFIMNIST_Model(nn.Module):
def __init__(self):
super(INFIMNIST_Model, self).__init__()
self.fc1 = nn.Linear(784, 256) #28x28 images
self.fc2 = nn.Linear(256, 10)
self.size=float(0.8) #Mb
def forward(self, xb):
out = xb.view(-1,784)
out = self.fc1(out)
out = F.relu(out)
out = self.fc2(out)
return F.log_softmax(out, dim=1)
def get_size(self):
return self.size
class SVHN_Model_big(nn.Module):
def __init__(self):
super(SVHN_Model_big, self).__init__()
self.fc1 = nn.Linear(3072, 2048) #32x32 images
self.fc2 = nn.Linear(2048, 1024)
self.fc3 = nn.Linear(1024, 512)
self.fc4 = nn.Linear(512, 256)
self.fc5 = nn.Linear(256, 128)
self.fc6 = nn.Linear(128, 64)
self.fc7 = nn.Linear(64, 10)
self.size=float(35.5) #Mb todo
def forward(self, xb):
out = xb.view(-1, 3072)
out = self.fc1(out)
out = F.relu(out)
out = self.fc2(out)
out = F.relu(out)
out = self.fc3(out)
out = F.relu(out)
out = self.fc4(out)
out = F.relu(out)
out = self.fc5(out)
out = F.relu(out)
out = self.fc6(out)
out = F.relu(out)
out = self.fc7(out)
return F.log_softmax(out, dim=1)
def get_size(self):
return self.size
class SVHN_Model(nn.Module):
def __init__(self):
super(SVHN_Model, self).__init__()
self.fc3 = nn.Linear(3072, 512) #32x32x3
self.fc5 = nn.Linear(512, 10)
self.size=float(6.1) #Mb todo
def forward(self, xb):
out = xb.view(-1, 3072)
out = self.fc3(out)
out = F.relu(out)
out = self.fc5(out)
return F.log_softmax(out, dim=1)
def get_size(self):
return self.size
def get_train_time_mobile_with_epochs(self,samples,epochs):
return float(epochs*(samples/125))
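`get_train_time_mobile_with_epochs` above appears to assume a mobile training throughput of 125 samples per second; a plain-Python restatement of that formula (the `throughput` parameter name and the samples-per-second reading are our interpretation of the magic constant):

```python
def mobile_train_time_s(samples, epochs, throughput=125.0):
    # epochs * samples / throughput, mirroring SVHN_Model above
    return float(epochs * (samples / throughput))

print(mobile_train_time_s(samples=2500, epochs=3))  # 60.0 seconds
```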
class SVHN_Model_6(nn.Module):
def __init__(self):
super(SVHN_Model_6, self).__init__()
self.fc3 = nn.Linear(3072, 512)
self.fc4 = nn.Linear(512, 256)
self.fc5 = nn.Linear(256, 128)
self.fc6 = nn.Linear(128, 64)
self.fc7 = nn.Linear(64, 10)
self.size=float(6.8) #Mb todo
def forward(self, xb):
out = xb.view(-1, 3072)
out = self.fc3(out)
out = F.relu(out)
out = self.fc4(out)
out = F.relu(out)
out = self.fc5(out)
out = F.relu(out)
out = self.fc6(out)
out = F.relu(out)
out = self.fc7(out)
return F.log_softmax(out, dim=1)
def get_size(self):
return self.size
class CIFAR10_Model(nn.Module):
def __init__(self):
super(CIFAR10_Model, self).__init__()
self.fc1 = nn.Linear(3072, 512) #32x32 images
self.fc2 = nn.Linear(512, 256)
self.fc3 = nn.Linear(256, 128)
self.fc4 = nn.Linear(128, 64)
self.size=float(6.8) #Mb
def forward(self, xb):
out = xb.view(-1, 3072)
#out = xb.view(xb.size(0), -1)
out = self.fc1(out)
out = F.relu(out)
out = self.fc2(out)
out = F.relu(out)
out = self.fc3(out)
out = F.relu(out)
out = self.fc4(out)
return F.log_softmax(out, dim=1)
def get_size(self):
        return self.size

# File: chaiwat/myname.py (chaiwatamorn/chaiwata, MIT)

def fullname():
print('My name is Chaiwat')
    print('If you want to learn Python, please contact me')

# File: test/rpc/test_seed.py (vogt4nick/dbt, Apache-2.0)

import pytest
from .util import (
assert_has_threads,
get_querier,
ProjectDefinition,
)
@pytest.mark.supported('postgres')
def test_rpc_seed_threads(
project_root, profiles_root, dbt_profile, unique_schema
):
project = ProjectDefinition(
project_data={'seeds': {'config': {'quote_columns': False}}},
seeds={'data.csv': 'a,b\n1,hello\n2,goodbye'},
)
querier_ctx = get_querier(
project_def=project,
project_dir=project_root,
profiles_dir=profiles_root,
schema=unique_schema,
test_kwargs={},
)
with querier_ctx as querier:
results = querier.async_wait_for_result(querier.seed(threads=5))
assert_has_threads(results, 5)
results = querier.async_wait_for_result(
querier.cli_args('seed --threads=7')
)
assert_has_threads(results, 7)
@pytest.mark.supported('postgres')
def test_rpc_seed_include_exclude(
project_root, profiles_root, dbt_profile, unique_schema
):
project = ProjectDefinition(
project_data={'seeds': {'config': {'quote_columns': False}}},
seeds={
'data_1.csv': 'a,b\n1,hello\n2,goodbye',
'data_2.csv': 'a,b\n1,data',
},
)
querier_ctx = get_querier(
project_def=project,
project_dir=project_root,
profiles_dir=profiles_root,
schema=unique_schema,
test_kwargs={},
)
with querier_ctx as querier:
results = querier.async_wait_for_result(querier.seed(select=['data_1']))
assert len(results['results']) == 1
results = querier.async_wait_for_result(querier.seed(select='data_1'))
assert len(results['results']) == 1
results = querier.async_wait_for_result(querier.cli_args('seed --select=data_1'))
assert len(results['results']) == 1
results = querier.async_wait_for_result(querier.seed(exclude=['data_2']))
assert len(results['results']) == 1
results = querier.async_wait_for_result(querier.seed(exclude='data_2'))
assert len(results['results']) == 1
results = querier.async_wait_for_result(querier.cli_args('seed --exclude=data_2'))
assert len(results['results']) == 1
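The inline seed strings used above (e.g. `'a,b\n1,hello\n2,goodbye'`) are ordinary CSV with a header row; parsed with the stdlib they become one dict per data row:

```python
import csv
import io

seed = 'a,b\n1,hello\n2,goodbye'
rows = list(csv.DictReader(io.StringIO(seed)))
print(rows)  # [{'a': '1', 'b': 'hello'}, {'a': '2', 'b': 'goodbye'}]
```

which is why `data_1.csv` seeds two rows into the `a`/`b` columns.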

# File: src/ui/__init__.py (gregflynn/taskqm, MIT)

from .board import Board  # noqa
from .status import StatusLine # noqa
from .selector import Selector # noqa
from .stdout import TaskQMStdOut # noqa

# File: gaopt/__init__.py (macky168/gaopt, MIT)

from gaopt.gaopt import *

# File: m5stack/freeze/g_sign.py (kcfkwok2003/m5_star_navigator, MIT)

g_width=20
g_sign={}
g_sign['ari']=bytearray([
0x00, 0x00, 0x00, 0x00, 0x18, 0x0c, 0x24, 0x12, 0x42, 0x21, 0x42, 0x21,
0x84, 0x10, 0x80, 0x00, 0x80, 0x00, 0x80, 0x00, 0x80, 0x00, 0x80, 0x00,
0x80, 0x00, 0x80, 0x00, 0x80, 0x00, 0x00, 0x00])
g_sign['tau']=bytearray([
0x00, 0x00, 0x00, 0x00, 0x06, 0x30, 0x08, 0x08, 0x10, 0x04, 0x20, 0x02,
0xc0, 0x01, 0x30, 0x06, 0x08, 0x08, 0x08, 0x08, 0x04, 0x10, 0x04, 0x10,
0x08, 0x08, 0x08, 0x08, 0x30, 0x06, 0xc0, 0x01 ])
g_sign['gem']=bytearray([
0x01, 0x40, 0x06, 0x30, 0xf8, 0x0f, 0x10, 0x04, 0x10, 0x04, 0x10, 0x04,
0x10, 0x04, 0x10, 0x04, 0x10, 0x04, 0x10, 0x04, 0x10, 0x04, 0xf8, 0x0f,
0x06, 0x30, 0x01, 0x40, 0x00, 0x00, 0x00, 0x00])
g_sign['can']=bytearray([
0x80, 0x03, 0x60, 0x0c, 0x10, 0x30, 0x08, 0x40, 0x0e, 0x80, 0x11, 0x00,
0x11, 0x00, 0x11, 0x70, 0x0e, 0x88, 0x00, 0x88, 0x00, 0x88, 0x01, 0x70,
0x02, 0x10, 0x0c, 0x08, 0x30, 0x06, 0xc0, 0x01])
g_sign['leo']=bytearray([
0x80, 0x01, 0x60, 0x06, 0x10, 0x08, 0x08, 0x10, 0x08, 0x10, 0x1c, 0x08,
0x22, 0x08, 0x41, 0x04, 0x41, 0x04, 0x41, 0x02, 0x22, 0x02, 0x1c, 0x44,
0x00, 0x24, 0x00, 0x18, 0x00, 0x00, 0x00, 0x00])
g_sign['vir']=bytearray([
0x18, 0x03, 0xa5, 0x04, 0x63, 0x04, 0x21, 0x04, 0x21, 0x14, 0x21, 0x2c,
0x21, 0x44, 0x21, 0x44, 0x21, 0x24, 0x21, 0x14, 0x21, 0x08, 0x21, 0x34,
0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
g_sign['lib']=bytearray([
0x80, 0x01, 0x60, 0x06, 0x10, 0x08, 0x10, 0x08, 0x08, 0x10, 0x08, 0x10,
0x10, 0x08, 0x10, 0x08, 0x60, 0x06, 0x40, 0x02, 0x7e, 0x7e, 0x00, 0x00,
0x00, 0x00, 0xfe, 0x7f, 0x00, 0x00, 0x00, 0x00])
g_sign['sco']=bytearray([
0x18, 0x03, 0xa5, 0x04, 0x63, 0x04, 0x21, 0x04, 0x21, 0x04, 0x21, 0x04,
0x21, 0x04, 0x21, 0x04, 0x21, 0x04, 0x21, 0x04, 0x21, 0x48, 0x21, 0xf0,
0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
g_sign['sag']=bytearray([
0x00, 0x3e, 0x00, 0x30, 0x00, 0x28, 0x00, 0x24, 0x08, 0x22, 0x10, 0x01,
0xa0, 0x00, 0x40, 0x00, 0xa0, 0x00, 0x10, 0x01, 0x08, 0x02, 0x04, 0x00,
0x02, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00])
g_sign['cap']=bytearray([
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x60, 0x00, 0x51, 0x00, 0x51, 0x00,
0x51, 0x00, 0x4a, 0x00, 0x84, 0x38, 0x84, 0x44, 0x80, 0x82, 0x00, 0x81,
0x80, 0x82, 0x40, 0x44, 0x00, 0x38, 0x00, 0x00])
g_sign['aqu']=bytearray([
0x00, 0x00, 0x10, 0x42, 0x18, 0x63, 0x94, 0x52, 0xa4, 0x94, 0x62, 0x8c,
0x21, 0x84, 0x00, 0x00, 0x00, 0x00, 0x10, 0x42, 0x18, 0x63, 0x94, 0x52,
0xa4, 0x94, 0x62, 0x8c, 0x21, 0x84, 0x00, 0x00])
g_sign['pis']=bytearray([
0x00, 0x00, 0x02, 0x10, 0x04, 0x08, 0x08, 0x04, 0x08, 0x04, 0x10, 0x02,
0x10, 0x02, 0xfe, 0x1f, 0x10, 0x02, 0x10, 0x02, 0x08, 0x04, 0x08, 0x04,
0x04, 0x08, 0x02, 0x10, 0x00, 0x00, 0x00, 0x00])
| 51.62963 | 78 | 0.633788 | 437 | 2,788 | 4.011442 | 0.16476 | 0.223617 | 0.219053 | 0.173417 | 0.472333 | 0.410724 | 0.338277 | 0.257844 | 0.224758 | 0.224758 | 0 | 0.497557 | 0.192611 | 2,788 | 53 | 79 | 52.603774 | 0.281208 | 0 | 0 | 0 | 0 | 0 | 0.012926 | 0 | 0 | 0 | 0.551526 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |

# File: models/Models.py (dsp6414/Siam-NestedUNet, MIT)

import torch.nn as nn
import torch
class conv_block_nested(nn.Module):
def __init__(self, in_ch, mid_ch, out_ch):
super(conv_block_nested, self).__init__()
self.activation = nn.ReLU(inplace=True)
self.conv1 = nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1, bias=True)
self.bn1 = nn.BatchNorm2d(mid_ch)
self.conv2 = nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1, bias=True)
self.bn2 = nn.BatchNorm2d(out_ch)
def forward(self, x):
x = self.conv1(x)
identity = x
x = self.bn1(x)
x = self.activation(x)
x = self.conv2(x)
x = self.bn2(x)
        output = self.activation(x + identity)  # residual add: requires out_ch == mid_ch
return output
class up(nn.Module):
def __init__(self, in_ch, bilinear=False):
super(up, self).__init__()
if bilinear:
self.up = nn.Upsample(scale_factor=2,
mode='bilinear',
align_corners=True)
else:
self.up = nn.ConvTranspose2d(in_ch, in_ch, 2, stride=2)
def forward(self, x):
x = self.up(x)
return x
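With `kernel_size=2, stride=2` the `ConvTranspose2d` branch of `up` doubles the spatial size, matching the `scale_factor=2` bilinear path. The standard output-size formula (assuming zero padding and output padding, dilation 1) can be checked in plain Python:

```python
def convtranspose2d_out(n, kernel=2, stride=2, padding=0):
    # H_out = (H_in - 1) * stride - 2 * padding + kernel
    return (n - 1) * stride - 2 * padding + kernel

print(convtranspose2d_out(16))  # 32: spatial size doubled
```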
class NestedUNet_Diff(nn.Module):
def __init__(self, in_ch=3, out_ch=2):
super(NestedUNet_Diff, self).__init__()
torch.nn.Module.dump_patches = True
n1 = 64
filters = [n1, n1 * 2, n1 * 4, n1 * 8, n1 * 16]
self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv0_0 = conv_block_nested(in_ch, filters[0], filters[0])
self.conv1_0 = conv_block_nested(filters[0], filters[1], filters[1])
self.Up1_0 = up(filters[1])
self.conv2_0 = conv_block_nested(filters[1], filters[2], filters[2])
self.Up2_0 = up(filters[2])
self.conv3_0 = conv_block_nested(filters[2], filters[3], filters[3])
self.Up3_0 = up(filters[3])
self.conv4_0 = conv_block_nested(filters[3], filters[4], filters[4])
self.Up4_0 = up(filters[4])
self.conv0_1 = conv_block_nested(filters[0] + filters[1], filters[0], filters[0])
        self.conv1_1 = conv_block_nested(filters[1] + filters[2], filters[1], filters[1])
        self.Up1_1 = up(filters[1])
        self.conv2_1 = conv_block_nested(filters[2] + filters[3], filters[2], filters[2])
        self.Up2_1 = up(filters[2])
        self.conv3_1 = conv_block_nested(filters[3] + filters[4], filters[3], filters[3])
        self.Up3_1 = up(filters[3])

        self.conv0_2 = conv_block_nested(filters[0] * 2 + filters[1], filters[0], filters[0])
        self.conv1_2 = conv_block_nested(filters[1] * 2 + filters[2], filters[1], filters[1])
        self.Up1_2 = up(filters[1])
        self.conv2_2 = conv_block_nested(filters[2] * 2 + filters[3], filters[2], filters[2])
        self.Up2_2 = up(filters[2])

        self.conv0_3 = conv_block_nested(filters[0] * 3 + filters[1], filters[0], filters[0])
        self.conv1_3 = conv_block_nested(filters[1] * 3 + filters[2], filters[1], filters[1])
        self.Up1_3 = up(filters[1])

        self.conv0_4 = conv_block_nested(filters[0] * 4 + filters[1], filters[0], filters[0])

        # Deep-supervision heads: one 1x1 prediction per nested depth, fused by conv_final
        self.final1 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final2 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final3 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final4 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.conv_final = nn.Conv2d(out_ch * 4, out_ch, kernel_size=1)

        # Kaiming init for convolutions; unit weight / zero bias for norm layers
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def forward(self, xA, xB):
        # Encoder pass over image A
        x0_0A = self.conv0_0(xA)
        x1_0A = self.conv1_0(self.pool(x0_0A))
        x2_0A = self.conv2_0(self.pool(x1_0A))
        x3_0A = self.conv3_0(self.pool(x2_0A))
        # x4_0A = self.conv4_0(self.pool(x3_0A))  # unused: only the B-branch deepest feature feeds the decoder

        # Encoder pass over image B
        x0_0B = self.conv0_0(xB)
        x1_0B = self.conv1_0(self.pool(x0_0B))
        x2_0B = self.conv2_0(self.pool(x1_0B))
        x3_0B = self.conv3_0(self.pool(x2_0B))
        x4_0B = self.conv4_0(self.pool(x3_0B))

        # Nested decoder: each node concatenates the |A - B| feature difference,
        # earlier nodes at the same level, and an upsampled feature from the level below
        x0_1 = self.conv0_1(torch.cat([torch.abs(x0_0A - x0_0B), self.Up1_0(x1_0B)], 1))
        x1_1 = self.conv1_1(torch.cat([torch.abs(x1_0A - x1_0B), self.Up2_0(x2_0B)], 1))
        x0_2 = self.conv0_2(torch.cat([torch.abs(x0_0A - x0_0B), x0_1, self.Up1_1(x1_1)], 1))
        x2_1 = self.conv2_1(torch.cat([torch.abs(x2_0A - x2_0B), self.Up3_0(x3_0B)], 1))
        x1_2 = self.conv1_2(torch.cat([torch.abs(x1_0A - x1_0B), x1_1, self.Up2_1(x2_1)], 1))
        x0_3 = self.conv0_3(torch.cat([torch.abs(x0_0A - x0_0B), x0_1, x0_2, self.Up1_2(x1_2)], 1))
        x3_1 = self.conv3_1(torch.cat([torch.abs(x3_0A - x3_0B), self.Up4_0(x4_0B)], 1))
        x2_2 = self.conv2_2(torch.cat([torch.abs(x2_0A - x2_0B), x2_1, self.Up3_1(x3_1)], 1))
        x1_3 = self.conv1_3(torch.cat([torch.abs(x1_0A - x1_0B), x1_1, x1_2, self.Up2_2(x2_2)], 1))
        x0_4 = self.conv0_4(torch.cat([torch.abs(x0_0A - x0_0B), x0_1, x0_2, x0_3, self.Up1_3(x1_3)], 1))

        output1 = self.final1(x0_1)
        output2 = self.final2(x0_2)
        output3 = self.final3(x0_3)
        output4 = self.final4(x0_4)
        output = self.conv_final(torch.cat([output1, output2, output3, output4], 1))
        return (output1, output2, output3, output4, output)


class NestedUNet_Dif_Conc(nn.Module):
    def __init__(self, in_ch=3, out_ch=2):
        super(NestedUNet_Dif_Conc, self).__init__()
        torch.nn.Module.dump_patches = True
        n1 = 64
        filters = [n1, n1 * 2, n1 * 4, n1 * 8, n1 * 16]

        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

        # Encoder blocks (shared between both input images)
        self.conv0_0 = conv_block_nested(in_ch, filters[0], filters[0])
        self.conv1_0 = conv_block_nested(filters[0], filters[1], filters[1])
        self.Up1_0 = up(filters[1])
        self.conv2_0 = conv_block_nested(filters[1], filters[2], filters[2])
        self.Up2_0 = up(filters[2])
        self.conv3_0 = conv_block_nested(filters[2], filters[3], filters[3])
        self.Up3_0 = up(filters[3])
        self.conv4_0 = conv_block_nested(filters[3], filters[4], filters[4])
        self.Up4_0 = up(filters[4])

        # Nested decoder blocks: each node takes |A - B| plus the raw A and B
        # features (hence the extra "filters[i] * 3" term at depth 1)
        self.conv0_1 = conv_block_nested(filters[0] * 3 + filters[1], filters[0], filters[0])
        self.conv1_1 = conv_block_nested(filters[1] * 3 + filters[2], filters[1], filters[1])
        self.Up1_1 = up(filters[1])
        self.conv2_1 = conv_block_nested(filters[2] * 3 + filters[3], filters[2], filters[2])
        self.Up2_1 = up(filters[2])
        self.conv3_1 = conv_block_nested(filters[3] * 3 + filters[4], filters[3], filters[3])
        self.Up3_1 = up(filters[3])

        self.conv0_2 = conv_block_nested(filters[0] * 4 + filters[1], filters[0], filters[0])
        self.conv1_2 = conv_block_nested(filters[1] * 4 + filters[2], filters[1], filters[1])
        self.Up1_2 = up(filters[1])
        self.conv2_2 = conv_block_nested(filters[2] * 4 + filters[3], filters[2], filters[2])
        self.Up2_2 = up(filters[2])

        self.conv0_3 = conv_block_nested(filters[0] * 5 + filters[1], filters[0], filters[0])
        self.conv1_3 = conv_block_nested(filters[1] * 5 + filters[2], filters[1], filters[1])
        self.Up1_3 = up(filters[1])

        self.conv0_4 = conv_block_nested(filters[0] * 6 + filters[1], filters[0], filters[0])

        # Deep-supervision heads fused by a final 1x1 conv
        self.final1 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final2 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final3 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final4 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.conv_final = nn.Conv2d(out_ch * 4, out_ch, kernel_size=1)

        # Kaiming init for convolutions; unit weight / zero bias for norm layers
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def forward(self, xA, xB):
        # Encoder pass over image A
        x0_0A = self.conv0_0(xA)
        x1_0A = self.conv1_0(self.pool(x0_0A))
        x2_0A = self.conv2_0(self.pool(x1_0A))
        x3_0A = self.conv3_0(self.pool(x2_0A))
        # x4_0A = self.conv4_0(self.pool(x3_0A))  # unused: only the B-branch deepest feature feeds the decoder

        # Encoder pass over image B
        x0_0B = self.conv0_0(xB)
        x1_0B = self.conv1_0(self.pool(x0_0B))
        x2_0B = self.conv2_0(self.pool(x1_0B))
        x3_0B = self.conv3_0(self.pool(x2_0B))
        x4_0B = self.conv4_0(self.pool(x3_0B))

        # Nested decoder: concatenate |A - B|, the raw A and B features,
        # earlier nodes at the same level, and an upsampled lower-level feature
        x0_1 = self.conv0_1(torch.cat([torch.abs(x0_0A - x0_0B), x0_0A, x0_0B, self.Up1_0(x1_0B)], 1))
        x1_1 = self.conv1_1(torch.cat([torch.abs(x1_0A - x1_0B), x1_0A, x1_0B, self.Up2_0(x2_0B)], 1))
        x0_2 = self.conv0_2(torch.cat([torch.abs(x0_0A - x0_0B), x0_0A, x0_0B, x0_1, self.Up1_1(x1_1)], 1))
        x2_1 = self.conv2_1(torch.cat([torch.abs(x2_0A - x2_0B), x2_0A, x2_0B, self.Up3_0(x3_0B)], 1))
        x1_2 = self.conv1_2(torch.cat([torch.abs(x1_0A - x1_0B), x1_0A, x1_0B, x1_1, self.Up2_1(x2_1)], 1))
        x0_3 = self.conv0_3(torch.cat([torch.abs(x0_0A - x0_0B), x0_0A, x0_0B, x0_1, x0_2, self.Up1_2(x1_2)], 1))
        x3_1 = self.conv3_1(torch.cat([torch.abs(x3_0A - x3_0B), x3_0A, x3_0B, self.Up4_0(x4_0B)], 1))
        x2_2 = self.conv2_2(torch.cat([torch.abs(x2_0A - x2_0B), x2_0A, x2_0B, x2_1, self.Up3_1(x3_1)], 1))
        x1_3 = self.conv1_3(torch.cat([torch.abs(x1_0A - x1_0B), x1_0A, x1_0B, x1_1, x1_2, self.Up2_2(x2_2)], 1))
        x0_4 = self.conv0_4(torch.cat([torch.abs(x0_0A - x0_0B), x0_0A, x0_0B, x0_1, x0_2, x0_3, self.Up1_3(x1_3)], 1))

        output1 = self.final1(x0_1)
        output2 = self.final2(x0_2)
        output3 = self.final3(x0_3)
        output4 = self.final4(x0_4)
        output = self.conv_final(torch.cat([output1, output2, output3, output4], 1))
        return (output1, output2, output3, output4, output)


class NestedUNet_Conc(nn.Module):
    def __init__(self, in_ch=3, out_ch=2):
        super(NestedUNet_Conc, self).__init__()
        torch.nn.Module.dump_patches = True
        n1 = 64
        filters = [n1, n1 * 2, n1 * 4, n1 * 8, n1 * 16]

        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

        # Encoder blocks (shared between both input images)
        self.conv0_0 = conv_block_nested(in_ch, filters[0], filters[0])
        self.conv1_0 = conv_block_nested(filters[0], filters[1], filters[1])
        self.Up1_0 = up(filters[1])
        self.conv2_0 = conv_block_nested(filters[1], filters[2], filters[2])
        self.Up2_0 = up(filters[2])
        self.conv3_0 = conv_block_nested(filters[2], filters[3], filters[3])
        self.Up3_0 = up(filters[3])
        self.conv4_0 = conv_block_nested(filters[3], filters[4], filters[4])
        self.Up4_0 = up(filters[4])

        # Nested decoder blocks: each node takes the raw A and B features
        # (no difference term, hence "filters[i] * 2" at depth 1)
        self.conv0_1 = conv_block_nested(filters[0] * 2 + filters[1], filters[0], filters[0])
        self.conv1_1 = conv_block_nested(filters[1] * 2 + filters[2], filters[1], filters[1])
        self.Up1_1 = up(filters[1])
        self.conv2_1 = conv_block_nested(filters[2] * 2 + filters[3], filters[2], filters[2])
        self.Up2_1 = up(filters[2])
        self.conv3_1 = conv_block_nested(filters[3] * 2 + filters[4], filters[3], filters[3])
        self.Up3_1 = up(filters[3])

        self.conv0_2 = conv_block_nested(filters[0] * 3 + filters[1], filters[0], filters[0])
        self.conv1_2 = conv_block_nested(filters[1] * 3 + filters[2], filters[1], filters[1])
        self.Up1_2 = up(filters[1])
        self.conv2_2 = conv_block_nested(filters[2] * 3 + filters[3], filters[2], filters[2])
        self.Up2_2 = up(filters[2])

        self.conv0_3 = conv_block_nested(filters[0] * 4 + filters[1], filters[0], filters[0])
        self.conv1_3 = conv_block_nested(filters[1] * 4 + filters[2], filters[1], filters[1])
        self.Up1_3 = up(filters[1])

        self.conv0_4 = conv_block_nested(filters[0] * 5 + filters[1], filters[0], filters[0])

        # Deep-supervision heads fused by a final 1x1 conv
        self.final1 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final2 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final3 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final4 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.conv_final = nn.Conv2d(out_ch * 4, out_ch, kernel_size=1)

        # Kaiming init for convolutions; unit weight / zero bias for norm layers
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def forward(self, xA, xB):
        # Encoder pass over image A
        x0_0A = self.conv0_0(xA)
        x1_0A = self.conv1_0(self.pool(x0_0A))
        x2_0A = self.conv2_0(self.pool(x1_0A))
        x3_0A = self.conv3_0(self.pool(x2_0A))
        # x4_0A = self.conv4_0(self.pool(x3_0A))  # unused: only the B-branch deepest feature feeds the decoder

        # Encoder pass over image B
        x0_0B = self.conv0_0(xB)
        x1_0B = self.conv1_0(self.pool(x0_0B))
        x2_0B = self.conv2_0(self.pool(x1_0B))
        x3_0B = self.conv3_0(self.pool(x2_0B))
        x4_0B = self.conv4_0(self.pool(x3_0B))

        # Nested decoder: concatenate the raw A and B features, earlier nodes
        # at the same level, and an upsampled lower-level feature
        x0_1 = self.conv0_1(torch.cat([x0_0A, x0_0B, self.Up1_0(x1_0B)], 1))
        x1_1 = self.conv1_1(torch.cat([x1_0A, x1_0B, self.Up2_0(x2_0B)], 1))
        x0_2 = self.conv0_2(torch.cat([x0_0A, x0_0B, x0_1, self.Up1_1(x1_1)], 1))
        x2_1 = self.conv2_1(torch.cat([x2_0A, x2_0B, self.Up3_0(x3_0B)], 1))
        x1_2 = self.conv1_2(torch.cat([x1_0A, x1_0B, x1_1, self.Up2_1(x2_1)], 1))
        x0_3 = self.conv0_3(torch.cat([x0_0A, x0_0B, x0_1, x0_2, self.Up1_2(x1_2)], 1))
        x3_1 = self.conv3_1(torch.cat([x3_0A, x3_0B, self.Up4_0(x4_0B)], 1))
        x2_2 = self.conv2_2(torch.cat([x2_0A, x2_0B, x2_1, self.Up3_1(x3_1)], 1))
        x1_3 = self.conv1_3(torch.cat([x1_0A, x1_0B, x1_1, x1_2, self.Up2_2(x2_2)], 1))
        x0_4 = self.conv0_4(torch.cat([x0_0A, x0_0B, x0_1, x0_2, x0_3, self.Up1_3(x1_3)], 1))

        output1 = self.final1(x0_1)
        output2 = self.final2(x0_2)
        output3 = self.final3(x0_3)
        output4 = self.final4(x0_4)
        output = self.conv_final(torch.cat([output1, output2, output3, output4], 1))
        return (output1, output2, output3, output4, output)
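The three variants differ only in which raw encoder features each nested node concatenates: the difference |A - B| alone, A and B alone, or all three. That makes the `in_channels` arguments of the `conv_block_nested` calls follow one formula, which can be sanity-checked with plain Python (no torch required). The helper below is an illustrative sketch, not part of the original module:

```python
# Sanity-check the decoder channel arithmetic of the variants above.
# `extra` = number of raw encoder features concatenated at each node:
#   1 -> |A-B| only (Dif), 2 -> A and B (Conc), 3 -> both (Dif_Conc).
n1 = 64
filters = [n1, n1 * 2, n1 * 4, n1 * 8, n1 * 16]

def in_channels(level, depth, extra):
    """Input width of conv{level}_{depth}: `extra` encoder features, plus
    depth-1 earlier nested features at this level, plus one upsampled
    feature from the level below."""
    return filters[level] * (extra + depth - 1) + filters[level + 1]

assert in_channels(0, 2, extra=1) == filters[0] * 2 + filters[1]  # Dif: conv0_2
assert in_channels(0, 4, extra=2) == filters[0] * 5 + filters[1]  # Conc: conv0_4
assert in_channels(1, 3, extra=3) == filters[1] * 5 + filters[2]  # Dif_Conc: conv1_3
print("channel bookkeeping OK")
```

The same formula reproduces every `filters[i] * k + filters[i + 1]` expression in the three `__init__` methods, which is a quick way to catch a mismatched concat when editing the decoder.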
| 46.131833 | 119 | 0.612044 | 2,431 | 14,347 | 3.353764 | 0.044426 | 0.058874 | 0.086471 | 0.113333 | 0.939164 | 0.939164 | 0.934257 | 0.928615 | 0.928615 | 0.92052 | 0 | 0.112528 | 0.229456 | 14,347 | 310 | 120 | 46.280645 | 0.624966 | 0.008782 | 0 | 0.60084 | 0 | 0 | 0.002892 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042017 | false | 0 | 0.008403 | 0 | 0.092437 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1ec0cb0d437de460f8534686e0d28bac46b9e6a9 | 23 | py | Python | aiovk_new/__init__.py | jDan735/aiovk_new | c24b211821f1d9d795e7f5f9118aa623d9a6b79e | [
"MIT"
] | null | null | null | aiovk_new/__init__.py | jDan735/aiovk_new | c24b211821f1d9d795e7f5f9118aa623d9a6b79e | [
"MIT"
] | null | null | null | aiovk_new/__init__.py | jDan735/aiovk_new | c24b211821f1d9d795e7f5f9118aa623d9a6b79e | [
"MIT"
] | null | null | null | from .api import AioVK
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
94a54ad551216e15f7b4b1e2129753f9d37929c9 | 23 | py | Python | bot/database/__init__.py | fortrax-br/rss-services-bot | 5e19057e10e90bc06982e2e961e4fec3273e482a | [
"MIT"
] | 1 | 2021-09-26T01:44:27.000Z | 2021-09-26T01:44:27.000Z | bot/database/__init__.py | fortrax-br/rss-services-bot | 5e19057e10e90bc06982e2e961e4fec3273e482a | [
"MIT"
] | 1 | 2021-06-20T07:34:07.000Z | 2021-07-01T23:23:20.000Z | bot/database/__init__.py | fortrax-br/rss-services-bot | 5e19057e10e90bc06982e2e961e4fec3273e482a | [
"MIT"
] | 1 | 2021-07-20T11:57:56.000Z | 2021-07-20T11:57:56.000Z | from .crub import crub
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
94ab0e1924b166d871ee1df01580875caa2eab19 | 538 | py | Python | cgnp_patchy/lib/patterns/__init__.py | cjspindel/cgnp_patchy | 12d401c90795ecddb9c4ea0433dc26c4d31d80b6 | [
"MIT"
] | null | null | null | cgnp_patchy/lib/patterns/__init__.py | cjspindel/cgnp_patchy | 12d401c90795ecddb9c4ea0433dc26c4d31d80b6 | [
"MIT"
] | null | null | null | cgnp_patchy/lib/patterns/__init__.py | cjspindel/cgnp_patchy | 12d401c90795ecddb9c4ea0433dc26c4d31d80b6 | [
"MIT"
] | null | null | null | from cgnp_patchy.lib.patterns.bipolar_pattern import BipolarPattern
from cgnp_patchy.lib.patterns.polar_pattern import PolarPattern
from cgnp_patchy.lib.patterns.random_pattern import RandomPattern
from cgnp_patchy.lib.patterns.equatorial_pattern import EquatorialPattern
from cgnp_patchy.lib.patterns.square_pattern import SquarePattern
from cgnp_patchy.lib.patterns.cube_pattern import CubePattern
from cgnp_patchy.lib.patterns.tetrahedral_pattern import TetrahedralPattern
from cgnp_patchy.lib.patterns.ring_pattern import RingPattern
| 59.777778 | 75 | 0.895911 | 72 | 538 | 6.472222 | 0.319444 | 0.137339 | 0.240343 | 0.291845 | 0.429185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05948 | 538 | 8 | 76 | 67.25 | 0.920949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
94eebe3b4852ca8e4bce06b39eb14d7409ba21be | 3,994 | py | Python | tests/test_fio_cat.py | mwtoews/fiona | 885dbd11c5dcb2c61862faefc28bfbbff99954de | [
"BSD-3-Clause"
] | null | null | null | tests/test_fio_cat.py | mwtoews/fiona | 885dbd11c5dcb2c61862faefc28bfbbff99954de | [
"BSD-3-Clause"
] | null | null | null | tests/test_fio_cat.py | mwtoews/fiona | 885dbd11c5dcb2c61862faefc28bfbbff99954de | [
"BSD-3-Clause"
] | null | null | null | """Tests for `$ fio cat`."""
import os
import pytest
from click.testing import CliRunner
from fiona.fio.main import main_group
from fiona.fio import cat
def test_one(path_coutwildrnp_shp):
runner = CliRunner()
result = runner.invoke(main_group, ['cat', path_coutwildrnp_shp])
assert result.exit_code == 0
assert result.output.count('"Feature"') == 67
def test_two(path_coutwildrnp_shp):
runner = CliRunner()
result = runner.invoke(main_group, ['cat', path_coutwildrnp_shp, path_coutwildrnp_shp])
assert result.exit_code == 0
assert result.output.count('"Feature"') == 134
def test_bbox_no(path_coutwildrnp_shp):
runner = CliRunner()
result = runner.invoke(
main_group,
['cat', path_coutwildrnp_shp, '--bbox', '0,10,80,20'],
catch_exceptions=False)
assert result.exit_code == 0
assert result.output == ""
def test_bbox_yes(path_coutwildrnp_shp):
runner = CliRunner()
result = runner.invoke(
main_group,
['cat', path_coutwildrnp_shp, '--bbox', '-109,37,-107,39'],
catch_exceptions=False)
assert result.exit_code == 0
assert result.output.count('"Feature"') == 19
def test_bbox_yes_two_files(path_coutwildrnp_shp):
runner = CliRunner()
result = runner.invoke(
main_group,
['cat', path_coutwildrnp_shp, path_coutwildrnp_shp, '--bbox', '-109,37,-107,39'],
catch_exceptions=False)
assert result.exit_code == 0
assert result.output.count('"Feature"') == 38
def test_bbox_json_yes(path_coutwildrnp_shp):
runner = CliRunner()
result = runner.invoke(
main_group,
['cat', path_coutwildrnp_shp, '--bbox', '[-109,37,-107,39]'],
catch_exceptions=False)
assert result.exit_code == 0
assert result.output.count('"Feature"') == 19
def test_bbox_where(path_coutwildrnp_shp):
runner = CliRunner()
result = runner.invoke(
main_group,
['cat', path_coutwildrnp_shp, '--bbox', '-120,40,-100,50',
'--where', "NAME LIKE 'Mount%'"],
catch_exceptions=False)
assert result.exit_code == 0
assert result.output.count('"Feature"') == 4
def test_where_no(path_coutwildrnp_shp):
runner = CliRunner()
result = runner.invoke(
main_group,
['cat', path_coutwildrnp_shp, '--where', "STATE LIKE '%foo%'"],
catch_exceptions=False)
assert result.exit_code == 0
assert result.output == ""
def test_where_yes(path_coutwildrnp_shp):
runner = CliRunner()
result = runner.invoke(
main_group,
['cat', path_coutwildrnp_shp, '--where', "NAME LIKE 'Mount%'"],
catch_exceptions=False)
assert result.exit_code == 0
assert result.output.count('"Feature"') == 9
def test_where_yes_two_files(path_coutwildrnp_shp):
runner = CliRunner()
result = runner.invoke(
main_group,
['cat', path_coutwildrnp_shp, path_coutwildrnp_shp,
'--where', "NAME LIKE 'Mount%'"],
catch_exceptions=False)
assert result.exit_code == 0
assert result.output.count('"Feature"') == 18
def test_where_fail(data_dir):
runner = CliRunner()
result = runner.invoke(main_group, ['cat', '--where', "NAME=3",
data_dir])
assert result.exit_code != 0
def test_multi_layer(data_dir):
layerdef = "1:coutwildrnp,1:coutwildrnp"
runner = CliRunner()
result = runner.invoke(
main_group, ['cat', '--layer', layerdef, data_dir])
assert result.output.count('"Feature"') == 134
def test_multi_layer_fail(data_dir):
runner = CliRunner()
result = runner.invoke(main_group, ['cat', '--layer', '200000:coutlildrnp',
data_dir])
assert result.exit_code != 0
def test_vfs(path_coutwildrnp_zip):
runner = CliRunner()
result = runner.invoke(main_group, [
'cat', 'zip://{}'.format(path_coutwildrnp_zip)])
assert result.exit_code == 0
assert result.output.count('"Feature"') == 67
| 29.367647 | 91 | 0.649474 | 494 | 3,994 | 5.002024 | 0.153846 | 0.15176 | 0.167544 | 0.152975 | 0.842169 | 0.842169 | 0.842169 | 0.842169 | 0.803723 | 0.753136 | 0 | 0.028254 | 0.211317 | 3,994 | 135 | 92 | 29.585185 | 0.75619 | 0.005508 | 0 | 0.621359 | 0 | 0 | 0.104387 | 0.006808 | 0 | 0 | 0 | 0 | 0.242718 | 1 | 0.135922 | false | 0 | 0.048544 | 0 | 0.184466 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bf50848ddc2649711c2fe05ae974a36f32a4e532 | 347 | py | Python | email/email_exceptions.py | sebanie15/pwd_secure | 3e2e6592b55697c97b26291cc0d7c05869fb7b20 | [
"MIT"
] | null | null | null | email/email_exceptions.py | sebanie15/pwd_secure | 3e2e6592b55697c97b26291cc0d7c05869fb7b20 | [
"MIT"
] | null | null | null | email/email_exceptions.py | sebanie15/pwd_secure | 3e2e6592b55697c97b26291cc0d7c05869fb7b20 | [
"MIT"
] | null | null | null | """set of Exceptions of email validators"""
from validator_base.base_exceptions import ValidatorException
class AtCharacterInMailException(ValidatorException):
    pass


class EmailLengthException(ValidatorException):
    pass


class EmailUserException(ValidatorException):
    pass


class EmailDomainException(ValidatorException):
    pass
| 17.35 | 61 | 0.809798 | 29 | 347 | 9.62069 | 0.551724 | 0.315412 | 0.290323 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135447 | 347 | 19 | 62 | 18.263158 | 0.93 | 0.106628 | 0 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.444444 | 0.111111 | 0 | 0.555556 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
bf55b7dd6745f96163df4df1d2e46c469b1f7511 | 18 | py | Python | pydeep/__init__.py | jytan17/deep_learning_framework | c0a55c0d9d201aacfe03e4d49b9f0d1b75278eb5 | [
"MIT"
] | null | null | null | pydeep/__init__.py | jytan17/deep_learning_framework | c0a55c0d9d201aacfe03e4d49b9f0d1b75278eb5 | [
"MIT"
] | null | null | null | pydeep/__init__.py | jytan17/deep_learning_framework | c0a55c0d9d201aacfe03e4d49b9f0d1b75278eb5 | [
"MIT"
] | null | null | null | # TODO: fill this
| 9 | 17 | 0.666667 | 3 | 18 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 18 | 1 | 18 | 18 | 0.857143 | 0.833333 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bf69141faa1174677e2a2bd28b13c2e332259b15 | 38 | py | Python | sabnzbd_copy/__init__.py | xabgesagtx/sabnzbd-copy | f346a9f2958c51dc01f401ec1582cece1f70d6e8 | [
"MIT"
] | 1 | 2016-01-10T18:05:09.000Z | 2016-01-10T18:05:09.000Z | sabnzbd_copy/__init__.py | xabgesagtx/sabnzbd-copy | f346a9f2958c51dc01f401ec1582cece1f70d6e8 | [
"MIT"
] | null | null | null | sabnzbd_copy/__init__.py | xabgesagtx/sabnzbd-copy | f346a9f2958c51dc01f401ec1582cece1f70d6e8 | [
"MIT"
] | 1 | 2022-03-21T07:21:57.000Z | 2022-03-21T07:21:57.000Z | from .sabnzbd_copy import SabnzbdCopy
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bf6e1af3f65af4f3ab5550204e827980a780a1ef | 45 | py | Python | sympy/print_example.py | lindsayad/python | 4b63a8b02de6a7c0caa7bb770f3f22366e066a7f | [
"MIT"
] | null | null | null | sympy/print_example.py | lindsayad/python | 4b63a8b02de6a7c0caa7bb770f3f22366e066a7f | [
"MIT"
] | null | null | null | sympy/print_example.py | lindsayad/python | 4b63a8b02de6a7c0caa7bb770f3f22366e066a7f | [
"MIT"
] | null | null | null | def print_it():
    print("Shut your mouth")
44d40745ac75471764dd89a74755678e3a9d180b | 4,273 | py | Python | api/src/routers/endpoints/spotify_chart.py | JoaoGustavoRogel/spider-music-api | 6eb4fc66e6595611bbbc98f1e43f70fb96f1d6f6 | [
"MIT"
] | 2 | 2020-12-24T03:11:33.000Z | 2021-01-05T15:10:22.000Z | api/src/routers/endpoints/spotify_chart.py | JoaoGustavoRogel/spider-music-api | 6eb4fc66e6595611bbbc98f1e43f70fb96f1d6f6 | [
"MIT"
] | null | null | null | api/src/routers/endpoints/spotify_chart.py | JoaoGustavoRogel/spider-music-api | 6eb4fc66e6595611bbbc98f1e43f70fb96f1d6f6 | [
"MIT"
] | null | null | null | import os
import shutil

from fastapi import APIRouter, HTTPException
from datetime import datetime

from src.models.SpotifyCrawler import ConcreteFactorySpotifyChartsCrawler
from src.models.MySql import MySql

router = APIRouter()


@router.get("/crawler_query")
def get_data_chart(start_date: str, end_date: str):
    try:
        start_date = datetime.strptime(start_date, "%Y-%m-%d")
        end_date = datetime.strptime(end_date, "%Y-%m-%d")
    except Exception:
        raise HTTPException(status_code=201, detail="Invalid date format. Must be: YYYY-MM-DD")

    factory = ConcreteFactorySpotifyChartsCrawler()
    crawler = factory.create_crawler()

    path = "outputs/"
    data_to_extract = {
        "start_date": start_date,
        "end_date": end_date,
        "path": path,
    }

    # Recreate a clean scratch directory for the crawler output
    try:
        shutil.rmtree(path)
    except Exception:
        pass
    os.mkdir(path)

    collected_data = crawler.get_data(data_to_extract)
    shutil.rmtree(path)

    return {"data": collected_data}


@router.get("/insert_db")
def insert_data_db(start_date: str, end_date: str):
    mysql_db = MySql.instance()
    try:
        start_date = datetime.strptime(start_date, "%Y-%m-%d")
        end_date = datetime.strptime(end_date, "%Y-%m-%d")
    except Exception:
        raise HTTPException(status_code=201, detail="Invalid date format. Must be: YYYY-MM-DD")

    factory = ConcreteFactorySpotifyChartsCrawler()
    crawler = factory.create_crawler()

    path = "outputs/"
    data_to_extract = {
        "start_date": start_date,
        "end_date": end_date,
        "path": path,
    }

    try:
        shutil.rmtree(path)
    except Exception:
        pass
    os.mkdir(path)

    collected_data = crawler.get_data(data_to_extract)
    shutil.rmtree(path)

    try:
        mysql_db.insert_data_list("src/sql/insert_spotify_chart.sql", collected_data)
    except Exception as e:
        print(e)
        raise HTTPException(status_code=500, detail="Internal error!")

    return {"message": "Success in insert, welcome data!"}


@router.get("/query_db")
def query_data_db(start_date: str, end_date: str):
    mysql_db = MySql.instance()
    try:
        datetime.strptime(start_date, "%Y-%m-%d")
        datetime.strptime(end_date, "%Y-%m-%d")
    except Exception:
        raise HTTPException(status_code=201, detail="Invalid date format. Must be: YYYY-MM-DD")

    parameters = [start_date, end_date]
    res_query = mysql_db.query_data("src/sql/query_spotify_chart.sql", parameters)
    fields = ["position", "track_name", "artist_name", "streams", "url",
              "track_id", "chart_type", "date", "period", "region"]

    return {"fields": fields, "count_data": len(res_query), "data": res_query}


@router.get("/delete_db")
def delete_data_db(start_date: str, end_date: str):
    mysql_db = MySql.instance()
    try:
        datetime.strptime(start_date, "%Y-%m-%d")
        datetime.strptime(end_date, "%Y-%m-%d")
    except Exception:
        raise HTTPException(status_code=201, detail="Invalid date format. Must be: YYYY-MM-DD")

    parameters = [start_date, end_date]
    mysql_db.delete_data("src/sql/delete_spotify_chart.sql", parameters)

    return {"message": "Success, good bye data!"}


@router.get("/update_db")
def update_data_db(start_date: str, end_date: str):
    mysql_db = MySql.instance()
    try:
        datetime.strptime(start_date, "%Y-%m-%d")
        datetime.strptime(end_date, "%Y-%m-%d")
    except Exception:
        raise HTTPException(status_code=201, detail="Invalid date format. Must be: YYYY-MM-DD")

    # Update = delete the requested window, then re-crawl and re-insert it
    parameters = [start_date, end_date]
    mysql_db.delete_data("src/sql/delete_spotify_chart.sql", parameters)

    factory = ConcreteFactorySpotifyChartsCrawler()
    crawler = factory.create_crawler()

    path = "outputs/"
    data_to_extract = {
        "start_date": datetime.strptime(start_date, "%Y-%m-%d"),
        "end_date": datetime.strptime(end_date, "%Y-%m-%d"),
        "path": path,
    }

    try:
        shutil.rmtree(path)
    except Exception:
        pass
    os.mkdir(path)

    collected_data = crawler.get_data(data_to_extract)
    shutil.rmtree(path)
    mysql_db.insert_data_list("src/sql/insert_spotify_chart.sql", collected_data)

    return {"message": "Success, welcome new data!"}
| 29.881119 | 117 | 0.666043 | 552 | 4,273 | 4.936594 | 0.163043 | 0.069358 | 0.026422 | 0.030826 | 0.736147 | 0.736147 | 0.728073 | 0.728073 | 0.728073 | 0.728073 | 0 | 0.005282 | 0.202434 | 4,273 | 142 | 118 | 30.091549 | 0.794308 | 0 | 0 | 0.706422 | 0 | 0 | 0.18886 | 0.03721 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045872 | false | 0.027523 | 0.055046 | 0 | 0.146789 | 0.009174 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
44e91d4f645a77b0c29c3610e1a540d2b4242563 | 27 | py | Python | explib/explib/optim/adasls/__init__.py | jacqueschen1/adam_sgd_heavy_tails | d4ecab6d460fb44ac3fd2b865641b8e47f3848ee | [
"Apache-2.0"
] | 1 | 2021-12-02T21:47:46.000Z | 2021-12-02T21:47:46.000Z | explib/explib/optim/adasls/__init__.py | jacqueschen1/adam_sgd_heavy_tails | d4ecab6d460fb44ac3fd2b865641b8e47f3848ee | [
"Apache-2.0"
] | null | null | null | explib/explib/optim/adasls/__init__.py | jacqueschen1/adam_sgd_heavy_tails | d4ecab6d460fb44ac3fd2b865641b8e47f3848ee | [
"Apache-2.0"
] | null | null | null | from .adasls import AdaSLS
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
789b1342e3b0df1063c1d63f0e244810f03bab09 | 39 | py | Python | MyFunc.py | Seek/LaTechNumeric | dabef2040e84bf25cabab07fe20a6434ce52197b | [
"MIT"
] | null | null | null | MyFunc.py | Seek/LaTechNumeric | dabef2040e84bf25cabab07fe20a6434ce52197b | [
"MIT"
] | null | null | null | MyFunc.py | Seek/LaTechNumeric | dabef2040e84bf25cabab07fe20a6434ce52197b | [
"MIT"
] | null | null | null | import numpy as np
import scipy as sp
| 9.75 | 18 | 0.769231 | 8 | 39 | 3.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 39 | 3 | 19 | 13 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
15b9daeadebbf14a0d0674bd50e45b06c9f0c272 | 22 | py | Python | api/src/cruds/__init__.py | re3turn/twitter-crawling | a5b4075cda9d2bdca2cd9891c8d609627feb83e4 | [
"MIT"
] | 2 | 2019-02-25T12:13:22.000Z | 2020-07-06T14:22:57.000Z | api/src/cruds/__init__.py | re3turn/twitter-crawling | a5b4075cda9d2bdca2cd9891c8d609627feb83e4 | [
"MIT"
] | 5 | 2020-02-06T01:01:43.000Z | 2022-02-09T23:28:40.000Z | api/src/cruds/__init__.py | re3turn/twitter-crawling | a5b4075cda9d2bdca2cd9891c8d609627feb83e4 | [
"MIT"
] | 4 | 2019-02-15T10:17:32.000Z | 2021-07-26T15:13:23.000Z | from . import twitter
| 11 | 21 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ec8a0d29147605c837a20c935755d420be2d1bc8 | 28 | py | Python | amocrm_api_client/models/unsorted/__init__.py | iqtek/amocrm_api_client | 910ea42482698f5eb47d6b6e12d52ec09af77a3e | [
"MIT"
] | null | null | null | amocrm_api_client/models/unsorted/__init__.py | iqtek/amocrm_api_client | 910ea42482698f5eb47d6b6e12d52ec09af77a3e | [
"MIT"
] | null | null | null | amocrm_api_client/models/unsorted/__init__.py | iqtek/amocrm_api_client | 910ea42482698f5eb47d6b6e12d52ec09af77a3e | [
"MIT"
] | null | null | null | from .UnsortedCall import *
| 14 | 27 | 0.785714 | 3 | 28 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ecb177c232168bf83bd27d3d16428074a80ddfc8 | 137 | py | Python | molecules/ml/unsupervised/vae/symmetric/__init__.py | hengma1001/molecules | c6694cc77ef1eb246f3fdab1f201481d1bcaa07c | [
"MIT"
] | 4 | 2020-08-06T20:08:25.000Z | 2021-01-25T00:13:57.000Z | molecules/ml/unsupervised/vae/symmetric/__init__.py | braceal/molecules | 6c6c7efc2b968aa42b957be4afd418da190b43dd | [
"MIT"
] | 43 | 2020-05-06T04:33:19.000Z | 2021-03-17T14:47:36.000Z | molecules/ml/unsupervised/vae/symmetric/__init__.py | hengma1001/molecules | c6694cc77ef1eb246f3fdab1f201481d1bcaa07c | [
"MIT"
] | 2 | 2020-06-08T15:17:39.000Z | 2020-07-29T16:40:34.000Z | from .hyperparams import SymmetricVAEHyperparams
from .encoder import SymmetricEncoderConv2d
from .decoder import SymmetricDecoderConv2d
| 34.25 | 48 | 0.890511 | 12 | 137 | 10.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016 | 0.087591 | 137 | 3 | 49 | 45.666667 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ecdc706fcdb2e3d133c32163294b705b48d264a4 | 71 | py | Python | pyrossgeo/mft/__init__.py | hidekb/PyRossGeo | 0d245a547add212f27be00bf234235cbd1db65f9 | [
"MIT"
] | 12 | 2020-05-12T09:18:48.000Z | 2020-10-23T13:29:24.000Z | pyrossgeo/mft/__init__.py | hidekb/PyRossGeo | 0d245a547add212f27be00bf234235cbd1db65f9 | [
"MIT"
] | null | null | null | pyrossgeo/mft/__init__.py | hidekb/PyRossGeo | 0d245a547add212f27be00bf234235cbd1db65f9 | [
"MIT"
] | 5 | 2020-05-15T15:53:08.000Z | 2020-07-20T23:31:38.000Z | from pyrossgeo.mft.deterministic import *
#from deterministic import *
| 23.666667 | 41 | 0.816901 | 8 | 71 | 7.25 | 0.625 | 0.655172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112676 | 71 | 2 | 42 | 35.5 | 0.920635 | 0.380282 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ece58ed849a5544ae0429118f241d05e0ac62429 | 41 | py | Python | pipeline/functions/add_comments.py | jamesonl/pulltasks | 4f9dbd86a40bd64cff37c9136eeb941dc39a47d2 | [
"BSD-3-Clause"
] | null | null | null | pipeline/functions/add_comments.py | jamesonl/pulltasks | 4f9dbd86a40bd64cff37c9136eeb941dc39a47d2 | [
"BSD-3-Clause"
] | null | null | null | pipeline/functions/add_comments.py | jamesonl/pulltasks | 4f9dbd86a40bd64cff37c9136eeb941dc39a47d2 | [
"BSD-3-Clause"
] | null | null | null |
def add_comment(task):
    return None
| 8.2 | 22 | 0.682927 | 6 | 41 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.243902 | 41 | 4 | 23 | 10.25 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
171e08bb86dcf9b6d49f47ac5aacf00166f69cc3 | 35 | py | Python | mecabpr/__init__.py | kzinmr/mecabpr | 8f6f840e105b88b57524015d26ff4c9ce72f460d | [
"MIT"
] | 6 | 2019-04-16T01:11:33.000Z | 2020-11-09T05:59:55.000Z | mecabpr/__init__.py | kzinmr/mecabpr | 8f6f840e105b88b57524015d26ff4c9ce72f460d | [
"MIT"
] | null | null | null | mecabpr/__init__.py | kzinmr/mecabpr | 8f6f840e105b88b57524015d26ff4c9ce72f460d | [
"MIT"
] | 2 | 2020-03-04T12:46:48.000Z | 2020-11-06T16:28:25.000Z | from .mecabpr import MeCabPosRegex
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1732e5bfa1411ff3bfe427de9c97712a4638c197 | 162 | py | Python | takler/core/_logger.py | perillaroc/takler | 607a64ff22b33d507f90acca4640963e69626879 | [
"Apache-2.0"
] | null | null | null | takler/core/_logger.py | perillaroc/takler | 607a64ff22b33d507f90acca4640963e69626879 | [
"Apache-2.0"
] | null | null | null | takler/core/_logger.py | perillaroc/takler | 607a64ff22b33d507f90acca4640963e69626879 | [
"Apache-2.0"
] | null | null | null | from typing import TYPE_CHECKING
from takler.logging import get_logger
if TYPE_CHECKING:
from logging import Logger
logger: "Logger" = get_logger("core")
| 16.2 | 37 | 0.777778 | 23 | 162 | 5.304348 | 0.478261 | 0.196721 | 0.262295 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160494 | 162 | 9 | 38 | 18 | 0.897059 | 0 | 0 | 0 | 0 | 0 | 0.061728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1743516e57b5ba7f38ce1b14bc4728264c0d23d8 | 1,696 | py | Python | Simple/test_formy_modal.py | tim-corley/Selenium-Starter-Kit | e74beae52c97464f40c034996c0645fe3f8cc235 | [
"Unlicense",
"MIT"
] | null | null | null | Simple/test_formy_modal.py | tim-corley/Selenium-Starter-Kit | e74beae52c97464f40c034996c0645fe3f8cc235 | [
"Unlicense",
"MIT"
] | 1 | 2021-06-02T00:54:01.000Z | 2021-06-02T00:54:01.000Z | Simple/test_formy_modal.py | tim-corley/Selenium-Starter-Kit | e74beae52c97464f40c034996c0645fe3f8cc235 | [
"Unlicense",
"MIT"
] | null | null | null | from selenium import webdriver
from pathlib import Path
from time import sleep
import pytest
import os
global driver_path
parent_path = str(Path(os.getcwd()).parent)
driver_path = parent_path + '/Drivers/'
class TestModalChrome():
@pytest.fixture()
def test_setup(self):
global driver
driver = webdriver.Chrome(executable_path=driver_path+'chromedriver')
driver.implicitly_wait(10)
driver.maximize_window()
driver.get('https://formy-project.herokuapp.com/')
yield
driver.close()
driver.quit()
print('Test Completed')
def test_modal_click(self, test_setup):
driver.find_element_by_link_text('Modal').click()
modal_btn = driver.find_element_by_id('modal-button')
modal_btn.click()
sleep(1)
close_btn = driver.find_element_by_id('close-button')
driver.execute_script('arguments[0].click()', close_btn)
sleep(1)
class TestModalFirefox():
@pytest.fixture()
def test_setup(self):
global driver
driver = webdriver.Firefox(executable_path=driver_path+'geckodriver')
driver.implicitly_wait(10)
driver.maximize_window()
driver.get('https://formy-project.herokuapp.com/')
yield
driver.close()
driver.quit()
print('Test Completed')
def test_modal_click(self, test_setup):
driver.find_element_by_link_text('Modal').click()
modal_btn = driver.find_element_by_id('modal-button')
modal_btn.click()
sleep(1)
close_btn = driver.find_element_by_id('close-button')
driver.execute_script('arguments[0].click()', close_btn)
sleep(1)
| 31.407407 | 77 | 0.662736 | 207 | 1,696 | 5.188406 | 0.289855 | 0.055866 | 0.094972 | 0.106145 | 0.733706 | 0.733706 | 0.733706 | 0.733706 | 0.733706 | 0.733706 | 0 | 0.007582 | 0.222288 | 1,696 | 53 | 78 | 32 | 0.806672 | 0 | 0 | 0.75 | 0 | 0 | 0.135613 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.104167 | 0 | 0.229167 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
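The two test classes in `test_formy_modal.py` above are copy-pastes of one another and share state through a `global driver`. The mechanism they rely on — a pytest fixture that yields mid-function so teardown runs after the test body — can be sketched without Selenium; `FakeDriver` and `driver_fixture` below are hypothetical stand-ins for the WebDriver and the fixture:

```python
class FakeDriver:
    """Stand-in for a Selenium WebDriver; illustrates the pattern only."""
    def __init__(self, name):
        self.name = name
        self.closed = False

    def quit(self):
        self.closed = True


def driver_fixture(name):
    """Yield-based setup/teardown, the same shape as the pytest fixtures above."""
    d = FakeDriver(name)  # setup: launch the browser here
    try:
        yield d           # the test body runs while the generator is suspended
    finally:
        d.quit()          # teardown runs even if the test failed


# drive the generator the way a test runner would
gen = driver_fixture("chromedriver")
d = next(gen)
assert not d.closed
gen.close()               # raises GeneratorExit at the yield, firing the finally
assert d.closed
```

Parametrizing one such fixture over browser names (e.g. `@pytest.fixture(params=["chromedriver", "geckodriver"])`) would also remove the duplicated Chrome/Firefox classes.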
bd58af6ea0c1e78f403188d8aec8f921266e2f02 | 195 | py | Python | tests/python/test_ct_python.py | dtmoodie/ct | 21dc0092d9d2615e5c4510371c63d9233118de5e | [
"MIT"
] | 5 | 2019-07-28T01:43:08.000Z | 2020-06-09T09:39:09.000Z | tests/python/test_ct_python.py | dtmoodie/ct | 21dc0092d9d2615e5c4510371c63d9233118de5e | [
"MIT"
] | 1 | 2019-12-21T00:09:07.000Z | 2019-12-26T22:00:45.000Z | tests/python/test_ct_python.py | dtmoodie/ct | 21dc0092d9d2615e5c4510371c63d9233118de5e | [
"MIT"
] | null | null | null | import imp
import os
if os.path.exists('libtest_ct_python.so'):
imp.load_dynamic('test_ct_python','libtest_ct_python.so')
else:
imp.load_dynamic('test_ct_python','libtest_ct_pythond.so')
| 27.857143 | 62 | 0.774359 | 33 | 195 | 4.212121 | 0.454545 | 0.230216 | 0.215827 | 0.244604 | 0.503597 | 0.503597 | 0.503597 | 0.503597 | 0 | 0 | 0 | 0 | 0.087179 | 195 | 6 | 63 | 32.5 | 0.780899 | 0 | 0 | 0 | 0 | 0 | 0.45641 | 0.107692 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
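`imp.load_dynamic` in the snippet above is long deprecated, and the `imp` module was removed entirely in Python 3.12. The `importlib` replacement takes a module name and an explicit path; below is a sketch of the pure-Python analogue (`load_from_path` is a hypothetical helper — the compiled-extension case would use `importlib.machinery.ExtensionFileLoader` in the same shape):

```python
import importlib.util
import os
import tempfile


def load_from_path(mod_name, path):
    """Load a module from an explicit file path, the importlib way
    (replacement for the removed imp.load_dynamic / imp.load_source)."""
    spec = importlib.util.spec_from_file_location(mod_name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod


# demonstrate with a throwaway pure-Python source file
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "demo.py")
    with open(src, "w") as f:
        f.write("VALUE = 42\n")
    demo = load_from_path("demo", src)

print(demo.VALUE)  # -> 42
```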
bdb1bcbafdcf6fe759a6255f20d949864b50f323 | 101 | py | Python | primus/impute/pandas/__init__.py | taohu88/primus | b30f7a41dfb3417c848aa2ac682dc504c411a071 | [
"MIT"
] | null | null | null | primus/impute/pandas/__init__.py | taohu88/primus | b30f7a41dfb3417c848aa2ac682dc504c411a071 | [
"MIT"
] | null | null | null | primus/impute/pandas/__init__.py | taohu88/primus | b30f7a41dfb3417c848aa2ac682dc504c411a071 | [
"MIT"
] | null | null | null | from .util import empty_to_none, strs_to_none
__all__ = [
'empty_to_none',
'strs_to_none'
] | 14.428571 | 45 | 0.70297 | 16 | 101 | 3.6875 | 0.5 | 0.40678 | 0.372881 | 0.508475 | 0.711864 | 0.711864 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19802 | 101 | 7 | 46 | 14.428571 | 0.728395 | 0 | 0 | 0 | 0 | 0 | 0.245098 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
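The `primus` `__init__.py` above only re-exports `empty_to_none` and `strs_to_none`; their bodies are not shown, so the implementations below are assumptions sketched from the names alone:

```python
def empty_to_none(value):
    """Map empty strings/collections to None; pass everything else through.
    (Assumed behavior -- the real body in primus is not shown above.)"""
    if value == "" or value == [] or value == {}:
        return None
    return value


def strs_to_none(values):
    """Apply empty_to_none across an iterable of strings (assumed behavior)."""
    return [empty_to_none(v) for v in values]


__all__ = ['empty_to_none', 'strs_to_none']
```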
da7fb465967b876646774b8960926ebb2164cef7 | 94 | py | Python | yatfs/backend/noop.py | AllSeeingEyeTolledEweSew/yatfs | 55bcd486f3d5df22eb8f2a806c3f2b4a85e35e81 | [
"Unlicense"
] | 1 | 2018-06-02T23:09:29.000Z | 2018-06-02T23:09:29.000Z | yatfs/backend/noop.py | AllSeeingEyeTolledEweSew/yatfs | 55bcd486f3d5df22eb8f2a806c3f2b4a85e35e81 | [
"Unlicense"
] | null | null | null | yatfs/backend/noop.py | AllSeeingEyeTolledEweSew/yatfs | 55bcd486f3d5df22eb8f2a806c3f2b4a85e35e81 | [
"Unlicense"
] | null | null | null | class Backend(object):
def init(self):
pass
def destroy(self):
pass
| 11.75 | 22 | 0.542553 | 11 | 94 | 4.636364 | 0.727273 | 0.313725 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.361702 | 94 | 7 | 23 | 13.428571 | 0.85 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.4 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
e52700d44cc32953cb5942b851751534c3be7476 | 257 | py | Python | test/run/t215.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | test/run/t215.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | test/run/t215.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z | wee = lambda waa, woo=False, wii=True: ("OK", waa, woo, wii)
print wee("stuff")
print wee("stuff", "dog")
print wee("stuff", "dog", "cat")
print wee("stuff", wii="lamma")
print wee(wii="lamma", waa="pocky")
print wee(wii="lamma", waa="pocky", woo="blorp")
| 28.555556 | 60 | 0.634241 | 42 | 257 | 3.880952 | 0.357143 | 0.294479 | 0.319018 | 0.196319 | 0.294479 | 0.294479 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116732 | 257 | 8 | 61 | 32.125 | 0.718062 | 0 | 0 | 0 | 0 | 0 | 0.237354 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.857143 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
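The skulpt test above exercises Python 2 lambdas with default and keyword arguments. The same semantics carry over to Python 3 unchanged; only the `print` syntax differs:

```python
wee = lambda waa, woo=False, wii=True: ("OK", waa, woo, wii)

print(wee("stuff"))                   # -> ('OK', 'stuff', False, True)
print(wee("stuff", "dog", "cat"))     # positional arguments override both defaults
print(wee(wii="lamma", waa="pocky"))  # keyword arguments may come in any order
```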
e5722c60480cc81db18c8aaefc806d6c2200be13 | 14,281 | py | Python | tests/test_memory_core/test_pond.py | mfkiwl/garnet | 89b2f907d72c5bb7f86a71bf8fea307f040dc194 | [
"BSD-3-Clause"
] | 56 | 2018-12-15T02:47:57.000Z | 2022-03-25T23:50:40.000Z | tests/test_memory_core/test_pond.py | mfkiwl/garnet | 89b2f907d72c5bb7f86a71bf8fea307f040dc194 | [
"BSD-3-Clause"
] | 525 | 2018-07-27T20:35:54.000Z | 2022-03-28T23:52:20.000Z | tests/test_memory_core/test_pond.py | mfkiwl/garnet | 89b2f907d72c5bb7f86a71bf8fea307f040dc194 | [
"BSD-3-Clause"
] | 11 | 2019-01-26T06:41:10.000Z | 2021-03-28T08:02:26.000Z | from lake.utils.util import transform_strides_and_ranges, trim_config
import random
from gemstone.common.testers import BasicTester
from cgra.util import create_cgra, compress_config_data
from canal.util import IOSide
from archipelago import pnr
from _kratos import create_wrapper_flatten
import lassen.asm as asm
def io_sides():
return IOSide.North | IOSide.East | IOSide.South | IOSide.West
def generate_pond_api(interconnect, pondcore, ctrl_rd, ctrl_wr, pe_x, pe_y, config_data):
flattened = create_wrapper_flatten(pondcore.dut.internal_generator.clone(),
pondcore.dut.name)
(tform_ranges_rd, tform_strides_rd) = transform_strides_and_ranges(ctrl_rd[0], ctrl_rd[1], ctrl_rd[2])
(tform_ranges_wr, tform_strides_wr) = transform_strides_and_ranges(ctrl_wr[0], ctrl_wr[1], ctrl_wr[2])
(tform_ranges_rd_sched, tform_strides_rd_sched) = transform_strides_and_ranges(ctrl_rd[0], ctrl_rd[5], ctrl_rd[2])
(tform_ranges_wr_sched, tform_strides_wr_sched) = transform_strides_and_ranges(ctrl_wr[0], ctrl_wr[5], ctrl_wr[2])
name_out, val_out = trim_config(flattened, "tile_en", 1)
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_read_iter_0_dimensionality", ctrl_rd[2])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_read_addr_0_starting_addr", ctrl_rd[3])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_read_addr_0_strides_0", tform_strides_rd[0])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_read_addr_0_strides_1", tform_strides_rd[1])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_read_iter_0_ranges_0", tform_ranges_rd[0])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_read_iter_0_ranges_1", tform_ranges_rd[1])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_read_sched_0_sched_addr_gen_starting_addr", ctrl_rd[4])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_read_sched_0_sched_addr_gen_strides_0", tform_strides_rd_sched[0])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_read_sched_0_sched_addr_gen_strides_1", tform_strides_rd_sched[1])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_write_iter_0_dimensionality", ctrl_wr[2])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_write_addr_0_starting_addr", ctrl_wr[3])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_write_addr_0_strides_0", tform_strides_wr[0])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_write_addr_0_strides_1", tform_strides_wr[1])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_write_iter_0_ranges_0", tform_ranges_wr[0])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_write_iter_0_ranges_1", tform_ranges_wr[1])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_write_sched_0_sched_addr_gen_starting_addr", ctrl_wr[4])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_write_sched_0_sched_addr_gen_strides_0", tform_strides_wr_sched[0])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_write_sched_0_sched_addr_gen_strides_1", tform_strides_wr_sched[1])
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_write_sched_0_enable", 1)
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
name_out, val_out = trim_config(flattened, "rf_read_sched_0_enable", 1)
idx, value = pondcore.get_config_data(name_out, val_out)
config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), value))
def test_pond_rd_wr(run_tb):
chip_size = 2
interconnect = create_cgra(chip_size, chip_size, io_sides(),
num_tracks=3,
add_pd=True,
add_pond=True,
mem_ratio=(1, 2))
netlist = {
"e0": [("I0", "io2f_16"), ("p0", "data_in_pond")],
"e1": [("I1", "io2f_16"), ("p0", "data1")],
"e2": [("p0", "data_out_pond"), ("I2", "f2io_16")]
}
bus = {"e0": 16, "e1": 16, "e2": 16}
placement, routing = pnr(interconnect, (netlist, bus))
config_data = interconnect.get_route_bitstream(routing)
pe_x, pe_y = placement["p0"]
petile = interconnect.tile_circuits[(pe_x, pe_y)]
pondcore = petile.additional_cores[0]
# Ranges, Strides, Dimensionality, Starting Addr, Starting Addr - Schedule
ctrl_rd = [[16, 1], [1, 1], 2, 0, 16, [1, 1]]
ctrl_wr = [[16, 1], [1, 1], 2, 0, 0, [1, 1]]
generate_pond_api(interconnect, pondcore, ctrl_rd, ctrl_wr, pe_x, pe_y, config_data)
config_data = compress_config_data(config_data)
circuit = interconnect.circuit()
tester = BasicTester(circuit, circuit.clk, circuit.reset)
tester.zero_inputs()
tester.reset()
tester.poke(circuit.interface["stall"], 1)
for addr, index in config_data:
tester.configure(addr, index)
tester.config_read(addr)
tester.eval()
tester.expect(circuit.read_config_data, index)
tester.done_config()
tester.poke(circuit.interface["stall"], 0)
tester.eval()
src_x0, src_y0 = placement["I0"]
src_x1, src_y1 = placement["I1"]
src_name0 = f"glb2io_16_X{src_x0:02X}_Y{src_y0:02X}"
src_name1 = f"glb2io_16_X{src_x1:02X}_Y{src_y1:02X}"
dst_x, dst_y = placement["I2"]
dst_name = f"io2glb_16_X{dst_x:02X}_Y{dst_y:02X}"
random.seed(0)
for i in range(32):
tester.poke(circuit.interface[src_name0], i)
tester.poke(circuit.interface[src_name1], i + 1)
tester.eval()
if i >= 16:
tester.expect(circuit.interface[dst_name], i - 16)
tester.step(2)
tester.eval()
run_tb(tester)
def test_pond_pe(run_tb):
chip_size = 2
interconnect = create_cgra(chip_size, chip_size, io_sides(),
num_tracks=3,
add_pd=True,
add_pond=True,
mem_ratio=(1, 2))
netlist = {
"e0": [("I0", "io2f_16"), ("p0", "data_in_pond")],
"e1": [("I1", "io2f_16"), ("p0", "data1")],
"e2": [("p0", "alu_res"), ("I2", "f2io_16")],
"e3": [("p0", "data_out_pond"), ("p0", "data0")]
}
bus = {"e0": 16, "e1": 16, "e2": 16, "e3": 16}
placement, routing = pnr(interconnect, (netlist, bus))
config_data = interconnect.get_route_bitstream(routing)
pe_x, pe_y = placement["p0"]
petile = interconnect.tile_circuits[(pe_x, pe_y)]
pondcore = petile.additional_cores[0]
add_bs = petile.core.get_config_bitstream(asm.umult0())
for addr, data in add_bs:
config_data.append((interconnect.get_config_addr(addr, 0, pe_x, pe_y), data))
# Ranges, Strides, Dimensionality, Starting Addr, Starting Addr - Schedule
ctrl_rd = [[16, 1], [1, 1], 2, 0, 16, [1, 1]]
ctrl_wr = [[16, 1], [1, 1], 2, 0, 0, [1, 1]]
generate_pond_api(interconnect, pondcore, ctrl_rd, ctrl_wr, pe_x, pe_y, config_data)
config_data = compress_config_data(config_data)
circuit = interconnect.circuit()
tester = BasicTester(circuit, circuit.clk, circuit.reset)
tester.zero_inputs()
tester.reset()
tester.poke(circuit.interface["stall"], 1)
for addr, index in config_data:
tester.configure(addr, index)
tester.config_read(addr)
tester.eval()
tester.expect(circuit.read_config_data, index)
tester.done_config()
tester.poke(circuit.interface["stall"], 0)
tester.eval()
src_x0, src_y0 = placement["I0"]
src_x1, src_y1 = placement["I1"]
src_name0 = f"glb2io_16_X{src_x0:02X}_Y{src_y0:02X}"
src_name1 = f"glb2io_16_X{src_x1:02X}_Y{src_y1:02X}"
dst_x, dst_y = placement["I2"]
dst_name = f"io2glb_16_X{dst_x:02X}_Y{dst_y:02X}"
random.seed(0)
for i in range(32):
if i < 16:
tester.poke(circuit.interface[src_name0], i)
tester.eval()
if i >= 16:
num = random.randrange(0, 256)
tester.poke(circuit.interface[src_name1], num)
tester.eval()
tester.expect(circuit.interface[dst_name], (i - 16) * num)
tester.step(2)
tester.eval()
run_tb(tester)
def test_pond_pe_acc(run_tb):
chip_size = 2
interconnect = create_cgra(chip_size, chip_size, io_sides(),
num_tracks=3,
add_pd=True,
add_pond=True,
mem_ratio=(1, 2))
netlist = {
"e0": [("I0", "io2f_16"), ("p0", "data0")],
"e1": [("p0", "data_out_pond"), ("p0", "data1")],
"e2": [("p0", "alu_res"), ("p0", "data_in_pond")],
"e3": [("p0", "data_out_pond"), ("I1", "f2io_16")]
}
bus = {"e0": 16, "e1": 16, "e2": 16, "e3": 16}
placement, routing = pnr(interconnect, (netlist, bus))
config_data = interconnect.get_route_bitstream(routing)
pe_x, pe_y = placement["p0"]
petile = interconnect.tile_circuits[(pe_x, pe_y)]
pondcore = petile.additional_cores[0]
add_bs = petile.core.get_config_bitstream(asm.add())
for addr, data in add_bs:
config_data.append((interconnect.get_config_addr(addr, 0, pe_x, pe_y), data))
# Ranges, Strides, Dimensionality, Starting Addr, Starting Addr - Schedule
ctrl_rd = [[16, 1], [0, 0], 2, 8, 0, [1, 0]]
ctrl_wr = [[16, 1], [0, 0], 2, 8, 0, [1, 0]]
generate_pond_api(interconnect, pondcore, ctrl_rd, ctrl_wr, pe_x, pe_y, config_data)
config_data = compress_config_data(config_data)
circuit = interconnect.circuit()
tester = BasicTester(circuit, circuit.clk, circuit.reset)
tester.zero_inputs()
tester.reset()
tester.poke(circuit.interface["stall"], 1)
for addr, index in config_data:
tester.configure(addr, index)
tester.config_read(addr)
tester.eval()
tester.expect(circuit.read_config_data, index)
tester.done_config()
tester.poke(circuit.interface["stall"], 0)
tester.eval()
src_x0, src_y0 = placement["I0"]
src_name0 = f"glb2io_16_X{src_x0:02X}_Y{src_y0:02X}"
dst_x, dst_y = placement["I1"]
dst_name = f"io2glb_16_X{dst_x:02X}_Y{dst_y:02X}"
random.seed(0)
total = 0
for i in range(16):
tester.poke(circuit.interface[src_name0], i + 1)
total = total + i
tester.eval()
tester.expect(circuit.interface[dst_name], total)
tester.step(2)
tester.eval()
run_tb(tester)
def test_pond_config(run_tb):
# 1x1 interconnect with only PE tile
interconnect = create_cgra(1, 1, IOSide.None_, standalone=True,
mem_ratio=(0, 1),
add_pond=True)
# get pond core
pe_tile = interconnect.tile_circuits[0, 0]
pond_core = pe_tile.additional_cores[0]
pond_feat = pe_tile.features().index(pond_core)
sram_feat = pond_feat + pond_core.num_sram_features
circuit = interconnect.circuit()
tester = BasicTester(circuit, circuit.clk, circuit.reset)
tester.zero_inputs()
tester.reset()
config_data = []
# tile enable
reg_addr, value = pond_core.get_config_data("tile_en", 1)
config_data.append((interconnect.get_config_addr(reg_addr, pond_feat, 0, 0), value))
for i in range(32):
addr = interconnect.get_config_addr(i, sram_feat, 0, 0)
config_data.append((addr, i + 1))
for addr, data in config_data:
tester.configure(addr, data)
# read back
for addr, data in config_data:
tester.config_read(addr)
tester.expect(circuit.read_config_data, data)
run_tb(tester)
| 38.807065 | 118 | 0.669701 | 2,152 | 14,281 | 4.097119 | 0.078996 | 0.083929 | 0.047635 | 0.061926 | 0.871952 | 0.855166 | 0.821368 | 0.80651 | 0.7743 | 0.746399 | 0 | 0.035899 | 0.200266 | 14,281 | 367 | 119 | 38.912807 | 0.7361 | 0.020237 | 0 | 0.68797 | 0 | 0 | 0.088029 | 0.06393 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022556 | false | 0 | 0.030075 | 0.003759 | 0.056391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
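`generate_pond_api` above repeats the same `trim_config` / `get_config_data` / `append` triple for every register. A small closure collapses that pattern; `make_setter` and the stub classes below are hypothetical, standing in for the real lake/gemstone APIs so the sketch runs on its own:

```python
def make_setter(trim_config, flattened, pondcore, interconnect,
                config_data, pe_x, pe_y):
    """Return a function that configures one named register, replacing the
    repeated trim_config / get_config_data / append triple."""
    def set_reg(name, value):
        name_out, val_out = trim_config(flattened, name, value)
        idx, val = pondcore.get_config_data(name_out, val_out)
        config_data.append((interconnect.get_config_addr(idx, 1, pe_x, pe_y), val))
    return set_reg


# --- stubs so the sketch runs without the real lake/gemstone packages ---
class StubCore:
    def get_config_data(self, name, val):
        return len(name) % 256, val   # fake register index

class StubInterconnect:
    def get_config_addr(self, idx, feat, x, y):
        return (idx, feat, x, y)      # fake packed address


config_data = []
set_reg = make_setter(lambda fl, n, v: (n, v), None, StubCore(),
                      StubInterconnect(), config_data, 1, 1)
set_reg("tile_en", 1)
set_reg("rf_read_iter_0_dimensionality", 2)
```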
e5bc1873ed4af6add25da5c314218941fd0dddef | 39 | py | Python | contents/Deep_Q_Network_5/test.py | hbyzg/Reinforcement-learning-with-tensorflow | 5914f194e07113c823d02c75f801ae578caab14c | [
"MIT"
] | null | null | null | contents/Deep_Q_Network_5/test.py | hbyzg/Reinforcement-learning-with-tensorflow | 5914f194e07113c823d02c75f801ae578caab14c | [
"MIT"
] | null | null | null | contents/Deep_Q_Network_5/test.py | hbyzg/Reinforcement-learning-with-tensorflow | 5914f194e07113c823d02c75f801ae578caab14c | [
"MIT"
] | null | null | null | import tensorflow as ts
print(type(ts)) | 19.5 | 23 | 0.794872 | 7 | 39 | 4.428571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 2 | 24 | 19.5 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
e5c4b142dbfb33eb394861b94ecfaab380150203 | 37 | py | Python | simplermock/__init__.py | Azdacha/SimplerMock | 1a4dde9d13250f3dae8fed055b488bf9f8351935 | [
"MIT"
] | null | null | null | simplermock/__init__.py | Azdacha/SimplerMock | 1a4dde9d13250f3dae8fed055b488bf9f8351935 | [
"MIT"
] | null | null | null | simplermock/__init__.py | Azdacha/SimplerMock | 1a4dde9d13250f3dae8fed055b488bf9f8351935 | [
"MIT"
] | null | null | null | from .simplermock import SimplerMock
| 18.5 | 36 | 0.864865 | 4 | 37 | 8 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e5ca5a2b23c4b41c1ed2b6f33fc96b96db8c5e5c | 17 | py | Python | src/engine/io/__init__.py | miladlink/Streamlit_Flask | 23340eeab192f0ccae9a6cc03f7eb9b7b8985f6a | [
"MIT"
] | 1 | 2021-12-28T07:57:56.000Z | 2021-12-28T07:57:56.000Z | src/engine/io/__init__.py | miladlink/Streamlit_Flask | 23340eeab192f0ccae9a6cc03f7eb9b7b8985f6a | [
"MIT"
] | null | null | null | src/engine/io/__init__.py | miladlink/Streamlit_Flask | 23340eeab192f0ccae9a6cc03f7eb9b7b8985f6a | [
"MIT"
] | null | null | null | from . import b64 | 17 | 17 | 0.764706 | 3 | 17 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0.176471 | 17 | 1 | 17 | 17 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e5d46a6af9b77885223bd9f7baec9240e8782c52 | 129 | py | Python | app/api_1_0/__init__.py | Ryconler/mybatcave | 062fd9c731a182545a9c578703af1d796e0c102a | [
"MIT"
] | null | null | null | app/api_1_0/__init__.py | Ryconler/mybatcave | 062fd9c731a182545a9c578703af1d796e0c102a | [
"MIT"
] | null | null | null | app/api_1_0/__init__.py | Ryconler/mybatcave | 062fd9c731a182545a9c578703af1d796e0c102a | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: UTF-8 -*-
from flask import Blueprint
api = Blueprint('api', __name__)
from . import resources,users | 25.8 | 29 | 0.72093 | 18 | 129 | 4.944444 | 0.777778 | 0.269663 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008696 | 0.108527 | 129 | 5 | 30 | 25.8 | 0.765217 | 0.294574 | 0 | 0 | 0 | 0 | 0.033333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
e5f38a919bae7a1c95e5e5cc45e674fec4d9d741 | 185 | py | Python | funcion-con-arg.py | josaphatsv/EjercicioPython | 269bf5552bc926917ba3e54477e735af4f9c1830 | [
"MIT"
] | null | null | null | funcion-con-arg.py | josaphatsv/EjercicioPython | 269bf5552bc926917ba3e54477e735af4f9c1830 | [
"MIT"
] | null | null | null | funcion-con-arg.py | josaphatsv/EjercicioPython | 269bf5552bc926917ba3e54477e735af4f9c1830 | [
"MIT"
] | null | null | null | #funcion con parametros
def funcion_arg(nombre,apellido):
print("El nombre recibido es:", nombre)
print("El nombre recibido es:", apellido)
funcion_arg("Josaphat","Lopez") | 26.428571 | 45 | 0.713514 | 24 | 185 | 5.416667 | 0.541667 | 0.153846 | 0.2 | 0.323077 | 0.353846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156757 | 185 | 7 | 46 | 26.428571 | 0.833333 | 0.118919 | 0 | 0 | 0 | 0 | 0.349693 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
e5fa418031880a56a7e0210cc5cf2a9ecd17c680 | 52,148 | py | Python | python/fdfault/interface.py | egdaub/fdfault | ec066f032ba109843164429aa7d9e7352485d735 | [
"MIT"
] | 12 | 2017-10-05T22:04:40.000Z | 2020-08-31T08:32:17.000Z | python/fdfault/interface.py | jhsa26/fdfault | ec066f032ba109843164429aa7d9e7352485d735 | [
"MIT"
] | 3 | 2020-05-06T16:48:32.000Z | 2020-09-18T11:41:41.000Z | python/fdfault/interface.py | jhsa26/fdfault | ec066f032ba109843164429aa7d9e7352485d735 | [
"MIT"
] | 12 | 2017-03-24T19:15:27.000Z | 2020-08-31T08:32:18.000Z | """
The ``interface`` class and its derived classes describe interfaces that link neighboring blocks
together. The code includes several types of interfaces: the standard ``interface`` class is for
a locked interface where no relative slip is allowed between the neighboring blocks. Other
interface types allow for frictional slip following several possible constitutive friction laws. The other
types are derived from the main ``interface`` class and thus inherit much of their functionality.
The ``interface`` class will not usually be invoked directly. This is because interfaces are
created automatically based on the number of blocks in the simulation. When the user
changes the number of blocks in the simulation, locked interfaces are automatically created
between all neighboring blocks. To modify the type of interface, it is preferred to use the
``set_iftype`` method of a problem to ensure that only the correct interfaces remain in the simulation.
Other interface types include: ``friction``, which describes frictionless interfaces; ``paramfric``, which
is a generic class for interfaces with parameters describing their behavior; ``statefric``, which is
a generic class for friction laws with a state variable; ``slipweak``, which describes slip weakening
and kinematically forced rupture interfaces; and ``stz``, which describes friction laws governed by
Shear Transformation Zone Theory. As with basic interfaces, none of these will be invoked directly,
and ``paramfric`` and ``statefric`` only create template methods for the generic behavior of
the corresponding type of interfaces and thus are not used in setting up a problem.
"""
from __future__ import division, print_function
from os.path import join
from .pert import load, swparam, stzparam, loadfile, swparamfile, stzparamfile, statefile
from .surface import surface, curve
class interface(object):
"""
Class representing a locked interface between blocks
This is the parent class of all other interfaces. The ``interface`` class describes locked interfaces,
while other interfaces require additional information to describe how relative slip can occur
between the blocks.
Interfaces have the following attributes:
:ivar ndim: Number of dimensions in problem (2 or 3)
:type ndim: int
:ivar iftype: Type of interface ('locked' for all standard interfaces)
:type iftype: str
:ivar index: index of interface (used for identification purposes only, order is irrelevant in simulation)
:type index: int
:ivar bm: Indices of block in the "minus" direction (tuple of 3 integers)
:type bm: tuple
:ivar bp: Indices of block in the "plus" direction (tuple of 3 integers)
:type bp: tuple
:ivar direction: Normal direction in computational space ("x", "y", or "z")
:type direction: str
"""
def __init__(self, ndim, index, direction, bm, bp):
"""
Initializes an instance of the ``interface`` class
Create a new ``interface`` given an index, direction, and block coordinates.
:param ndim: Number of spatial dimensions (must be 2 or 3)
:type ndim: int
:param index: Interface index, used for bookkeeping purposes, must be nonnegative
:type index: int
        :param direction: String indicating normal direction of interface in computational space
                               (must be ``'x'``, ``'y'``, or ``'z'``; ``'z'`` is only allowed for 3D problems)
:type direction: str
:param bm: Coordinates of block in minus direction (tuple of length 3 of integers)
:type bm: tuple
        :param bp: Coordinates of block in plus direction (tuple of length 3 of integers, must
differ from ``bm`` by 1 only along the given direction to ensure blocks
are neighboring one another)
:type bp: tuple
:returns: New instance of interface class
:rtype: interface
"""
assert int(ndim) == 2 or int(ndim) == 3, "number of dimensions must be 2 or 3"
assert int(index) >= 0, "interface index must be nonnegative"
assert (direction == "x" or direction == "y" or direction == "z"), "Direction must be x, y, or z"
assert len(bm) == 3, "must provide 3 integers for block indices"
assert len(bp) == 3, "must provide 3 integers for block indices"
for i in range(3):
assert bm[i] >= 0, "block indices must be nonnegative"
assert bp[i] >= 0, "block indices must be nonnegative"
if direction == "x":
assert int(bp[0])-int(bm[0]) == 1, "blocks must be neighboring to be coupled via an interface"
assert int(bp[1]) == int(bm[1]), "blocks must be neighboring to be coupled via an interface"
assert int(bp[2]) == int(bm[2]), "blocks must be neighboring to be coupled via an interface"
elif direction == "y":
assert int(bp[1])-int(bm[1]) == 1, "blocks must be neighboring to be coupled via an interface"
assert int(bp[0]) == int(bm[0]), "blocks must be neighboring to be coupled via an interface"
assert int(bp[2]) == int(bm[2]), "blocks must be neighboring to be coupled via an interface"
else:
assert int(bp[2])-int(bm[2]) == 1, "blocks must be neighboring to be coupled via an interface"
assert int(bp[0]) == int(bm[0]), "blocks must be neighboring to be coupled via an interface"
assert int(bp[1]) == int(bm[1]), "blocks must be neighboring to be coupled via an interface"
self.ndim = int(ndim)
self.iftype = "locked"
self.index = int(index)
self.bm = (int(bm[0]), int(bm[1]), int(bm[2]))
self.bp = (int(bp[0]), int(bp[1]), int(bp[2]))
self.direction = direction
def get_direction(self):
"""
Returns interface orientation
Returns orientation (string indicating normal direction in computational space).
:returns: Interface orientation in computational space ('x', 'y', or 'z')
:rtype: str
"""
return self.direction
def get_index(self):
"""
Returns index
Returns the numerical index corresponding to the interface in question. Note that this is just
for bookkeeping purposes; the interfaces may be arranged in any order as long as no
index is repeated. The code will automatically handle the indices, so this is typically not
modified in any way.
:returns: Interface index
:rtype: int
"""
return self.index
def set_index(self,index):
"""
Sets interface index
Changes value of interface index. New index must be a nonnegative integer.
:param index: New value of index (nonnegative integer)
:type index: int
:returns: None
"""
assert index >= 0, "interface index must be nonnegative"
self.index = int(index)
def get_type(self):
"""
Returns string of interface type
Returns the type of the given interface ("locked", "frictionless", "slipweak", or "stz")
:returns: Interface type
:rtype: str
"""
return self.iftype
def get_bm(self):
"""
Returns block on negative side
Returns tuple of block indices on negative side
:returns: Block indices on negative side (tuple of integers)
:rtype: tuple
"""
return self.bm
def get_bp(self):
"""
Returns block on positive side
Returns tuple of block indices on positive side
:returns: Block indices on positive side (tuple of integers)
:rtype: tuple
"""
return self.bp
def get_nloads(self):
"""
Returns number of load perturbations on the interface
Method returns the number of load perturbations presently in the list of loads.
:returns: Number of load perturbations
:rtype: int
"""
raise NotImplementedError("Interfaces do not support load perturbations")
def add_load(self, newload):
"""
Adds a load to list of load perturbations
Method adds the load provided to the list of load perturbations. If the ``newload``
parameter is not a load perturbation, this will result in an error.
:param newload: New load to be added to the interface (must have type ``load``)
:type newload: ~fdfault.load
:returns: None
"""
raise NotImplementedError("Interfaces do not support load perturbations")
def delete_load(self, index = -1):
"""
Deletes load at position index from the list of loads
Method deletes the load from the list of loads at position ``index``. Default is most
recently added load if an index is not provided. ``index`` must be a valid index into
the list of loads.
:param index: Position within load list to remove (optional, default is -1)
:type index: int
:returns: None
"""
raise NotImplementedError("Interfaces do not support load perturbations")
def get_load(self, index = None):
"""
Returns load at position index
Returns a load from the list of load perturbations at position ``index``.
If no index is provided (or ``None`` is given), the method returns the entire list.
``index`` must be a valid list index given the number of loads.
:param index: Index within load list (optional, default is ``None`` to return full list)
:type index: int or None
:returns: load or list
"""
raise NotImplementedError("Interfaces do not support load perturbations")
def get_nperts(self):
"""
Returns number of friction parameter perturbations on interface
Method returns the number of parameter perturbations presently in the list of perturbations.
:returns: Number of parameter perturbations
:rtype: int
"""
raise NotImplementedError("Interfaces do not support parameter perturbations")
def add_pert(self,newpert):
"""
Add new friction parameter perturbation to an interface
Method adds a frictional parameter perturbation to an interface. ``newpert`` must
be a parameter perturbation of the correct kind for the given interface type (i.e. if
the interface is of type ``slipweak``, then ``newpert`` must have type ``swparam``).
:param newpert: New perturbation to be added. Must have a type that matches
the interface(s) in question.
:type newpert: pert (more precisely, one of the derived classes of friction parameter perturbations)
:returns: None
"""
raise NotImplementedError("Interfaces do not support parameter perturbations")
def delete_pert(self, index = -1):
"""
Deletes frictional parameter perturbation from interface
``index`` is an integer that indicates the position within the list of perturbations. Default is most
recently added (-1).
:param index: Index within perturbation list of the given interface to remove. Default is
last item (-1, or most recently added)
:type index: int
:returns: None
"""
raise NotImplementedError("Interfaces do not support parameter perturbations")
def get_pert(self, index = None):
"""
Returns perturbation at position index
Method returns a perturbation from the interface. ``index`` is the index into the perturbation
list for the interface in question. If ``index`` is not provided or is ``None``, the method returns the
entire list.
:param index: Index into the perturbation list (optional; if not provided or ``None``,
returns the entire list)
:type index: int or None
:returns: pert or list
"""
raise NotImplementedError("Interfaces do not support parameter perturbations")
def get_loadfile(self):
"""
Returns loadfile for interface
Loadfile sets any surface tractions set for the interface.
Note that these tractions are added to any tractions set by the constant initial stress tensor,
initial heterogeneous stress, or interface traction perturbations.
:returns: Current loadfile for the interface (if the interface does not have a loadfile, returns None)
:rtype: loadfile or None
"""
raise NotImplementedError("Interfaces do not support load files")
def set_loadfile(self, newloadfile):
"""
Sets loadfile for interface
``newloadfile`` is the new loadfile (must have type ``loadfile``).
If the loadfile type is not correct, the code will raise an error.
Errors can also result if the shape of the loadfile does not match with the interface.
:param newloadfile: New loadfile to be used for the given interface
:type newloadfile: loadfile
:returns: None
"""
raise NotImplementedError("Interfaces do not support load files")
def delete_loadfile(self):
"""
Deletes the loadfile for the interface.
:returns: None
"""
raise NotImplementedError("Interfaces do not support load files")
def get_paramfile(self):
"""
Returns paramfile (holds arrays of heterogeneous friction parameters) for interface.
Can return a subtype of paramfile corresponding to any of the specific friction
law types.
:returns: Paramfile for the interface
:rtype: paramfile
"""
raise NotImplementedError("Interfaces do not support parameter files")
def set_paramfile(self, newparamfile):
"""
Sets paramfile for the interface
Method sets the file holding frictional parameters for the interface.
``newparamfile`` must be a parameter perturbation file of the correct type for the given
interface type (i.e. if the interface is of type ``slipweak``, then ``newparamfile`` must have type
``swparamfile``). Errors can also result if the shape of the paramfile does not match
with the interface.
:param newparamfile: New frictional parameter file (type depends on interface in question)
:type newparamfile: paramfile (actual type must be the appropriate subclass for the
friction law of the particular interface and have the right shape)
:returns: None
"""
raise NotImplementedError("Interfaces do not support parameter files")
def delete_paramfile(self):
"""
Deletes friction parameter file for the interface
Removes the friction parameter file for the interface. The interface
must be a frictional interface that can accept parameter files.
:returns: None
"""
raise NotImplementedError("Interfaces do not support parameter files")
def check(self, nx):
"""
Checks if interface size is consistent with simulation. Only needed for interfaces using
files for load, state, or parameter values.
:param nx: Number of grid points of neighboring block (tuple of two integers)
:type nx: tuple
:returns: None
"""
pass
def write_input(self, f, probname, directory, endian = '='):
"""
Writes interface details to input file
This routine is called for every interface when writing problem data to file. It writes
the appropriate section for the interface in the input file. It also writes any necessary
binary files holding interface loads, parameters, or state variables.
:param f: File handle for input file
:type f: file
:param probname: problem name (used for naming binary files)
:type probname: str
:param directory: Directory for output
:type directory: str
:param endian: Byte ordering for binary files (``'<'`` little endian, ``'>'`` big endian, ``'='`` native,
default is native)
:type endian: str
:returns: None
"""
f.write("[fdfault.interface"+str(self.index)+"]\n")
f.write(self.direction+"\n")
f.write(str(self.bm[0])+" "+str(self.bm[1])+" "+str(self.bm[2])+"\n")
f.write(str(self.bp[0])+" "+str(self.bp[1])+" "+str(self.bp[2])+"\n")
f.write("\n")
def __str__(self):
return ('Interface '+str(self.index)+":\ndirection = "+self.direction+
"\nbm = "+str(self.bm)+"\nbp = "+str(self.bp))
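The adjacency rule that ``interface.__init__`` enforces with its chain of asserts can be condensed into a single predicate. A minimal standalone sketch (the helper name is illustrative, not part of the package):

```python
def blocks_are_neighbors(direction, bm, bp):
    """Sketch of the neighbor check in interface.__init__: bp must differ from
    bm by exactly 1 along the normal axis and match on the other two axes."""
    axis = {"x": 0, "y": 1, "z": 2}[direction]
    diffs = [int(bp[i]) - int(bm[i]) for i in range(3)]
    return diffs[axis] == 1 and all(d == 0 for i, d in enumerate(diffs) if i != axis)
```

For example, ``blocks_are_neighbors("x", (0, 0, 0), (1, 0, 0))`` holds, while blocks offset along any other axis fail the check.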
class friction(interface):
"""
Class representing a frictionless interface between blocks
This is the parent class of all other frictional interfaces. The ``friction`` class describes frictionless
interfaces. While this interface type does not require any parameter specifications, it does
calculate slip from traction and thus the interface tractions are relevant. Therefore, it allows
the user to specify interface tractions that are added to the stress changes calculated by the
code. These tractions can be set either as "perturbations" (tractions following some pre-specified
mathematical form), or "load files" where the tractions are set point-by-point and thus can be
arbitrarily complex.
Frictionless interfaces have the following attributes:
:ivar ndim: Number of dimensions in problem (2 or 3)
:type ndim: int
:ivar iftype: Type of interface ('frictionless' for this class)
:type iftype: str
:ivar index: index of interface (used for identification purposes only, order is irrelevant in simulation)
:type index: int
:ivar bm: Indices of block in the "minus" direction (tuple of 3 integers)
:type bm: tuple
:ivar bp: Indices of block in the "plus" direction (tuple of 3 integers)
:type bp: tuple
:ivar direction: Normal direction in computational space ("x", "y", or "z")
:type direction: str
:ivar nloads: Number of load perturbations (length of ``loads`` list)
:type nloads: int
:ivar loads: List of load perturbations
:type loads: list
:ivar lf: Loadfile holding traction at each point
:type lf: loadfile
"""
def __init__(self, ndim, index, direction, bm, bp):
"""
Initializes an instance of the ``friction`` class
Create a new ``friction`` given an index, direction, and block coordinates.
:param ndim: Number of spatial dimensions (must be 2 or 3)
:type ndim: int
:param index: Interface index, used for bookkeeping purposes, must be nonnegative
:type index: int
:param direction: String indicating normal direction of interface in computational space,
must be ``'x'``, ``'y'``, or ``'z'``, with ``'z'`` only allowed for 3D problems
:type direction: str
:param bm: Coordinates of block in minus direction (tuple of length 3 of integers)
:type bm: tuple
:param bp: Coordinates of block in plus direction (tuple of length 3 of integers, must
differ from ``bm`` by 1 only along the given direction to ensure blocks
are neighboring one another)
:type bp: tuple
:returns: New instance of friction class
:rtype: friction
"""
interface.__init__(self, ndim, index, direction, bm, bp)
self.iftype = "frictionless"
self.nloads = 0
self.loads = []
self.lf = None
def get_nloads(self):
"""
Returns number of load perturbations on the interface
Method returns the number of load perturbations presently in the list of loads.
:returns: Number of load perturbations
:rtype: int
"""
return self.nloads
def add_load(self,newload):
"""
Adds a load to list of load perturbations
Method adds the load provided to the list of load perturbations. If the ``newload``
parameter is not a load perturbation, this will result in an error.
:param newload: New load to be added to the interface (must have type ``load``)
:type newload: fdfault.load
:returns: None
"""
assert type(newload) is load, "Cannot add types other than loads to load list"
self.loads.append(newload)
self.nloads = len(self.loads)
def delete_load(self, index = -1):
"""
Deletes load at position index from the list of loads
Method deletes the load from the list of loads at position ``index``. Default is most
recently added load if an index is not provided. ``index`` must be a valid index into
the list of loads.
:param index: Position within load list to remove (optional, default is -1)
:type index: int
:returns: None
"""
self.loads.pop(index)
self.nloads = len(self.loads)
def get_load(self, index = None):
"""
Returns load at position index
Returns a load from the list of load perturbations at position ``index``.
If no index is provided (or ``None`` is given), the method returns the entire list.
``index`` must be a valid list index given the number of loads.
:param index: Index within load list (optional, default is ``None`` to return full list)
:type index: int or None
:returns: load or list
"""
if index is None:
return self.loads
else:
assert isinstance(index, int), "must provide integer index to load list"
return self.loads[index]
def get_loadfile(self):
"""
Returns loadfile for interface
Loadfile sets any surface tractions set for the interface.
Note that these tractions are added to any tractions set by the constant initial stress tensor,
initial heterogeneous stress, or interface traction perturbations.
:returns: Current loadfile for the interface (if the interface does not have a loadfile, returns None)
:rtype: loadfile or None
"""
return self.lf
def set_loadfile(self, newloadfile):
"""
Sets loadfile for interface
``newloadfile`` is the new loadfile (must have type ``loadfile``).
If the loadfile type is not correct, the code will raise an error.
Errors can also result if the shape of the loadfile does not match with the interface.
:param newloadfile: New loadfile to be used for the given interface
:type newloadfile: loadfile
:returns: None
"""
assert type(newloadfile) is loadfile, "load file must have appropriate type"
self.lf = newloadfile
def delete_loadfile(self):
"""
Deletes the loadfile for the interface.
:returns: None
"""
self.lf = None
def check(self, nx):
"""
Checks if interface size is consistent with simulation. Only needed for interfaces using
files for load, state, or parameter values.
:param nx: Number of grid points of neighboring block (tuple of two integers)
:type nx: tuple
:returns: None
"""
if self.lf is not None:
assert (self.lf.get_n1() == nx[0] and self.lf.get_n2() == nx[1]), "loadfile size not consistent with neighboring blocks"
def write_input(self, f, probname, directory, endian = '='):
"""
Writes interface details to input file
This routine is called for every interface when writing problem data to file. It writes
the appropriate section for the interface in the input file. It also writes any necessary
binary files holding interface loads, parameters, or state variables.
:param f: File handle for input file
:type f: file
:param probname: problem name (used for naming binary files)
:type probname: str
:param directory: Directory for output
:type directory: str
:param endian: Byte ordering for binary files (``'<'`` little endian, ``'>'`` big endian, ``'='`` native,
default is native)
:type endian: str
:returns: None
"""
interface.write_input(self, f, probname, directory, endian)
if directory == "":
inputfiledir = 'problems/'
else:
inputfiledir = directory
f.write("[fdfault.friction]\n")
f.write(str(self.nloads)+'\n')
for l in self.loads:
l.write_input(f)
if self.lf is None:
f.write("none\n")
else:
f.write(join(inputfiledir, probname)+"_interface"+str(self.index)+".load\n")
self.lf.write(join(directory, probname+"_interface"+str(self.index)+".load"), endian)
f.write("\n")
def __str__(self):
"Returns string representation of interface"
loadstring = ""
for load in self.loads:
loadstring += "\n"+str(load)
return ('Frictional interface '+str(self.index)+":\ndirection = "+self.direction+
"\nbm = "+str(self.bm)+"\nbp = "+str(self.bp)+"\nnloads = "
+str(self.nloads)+"\nLoads:"+loadstring+"\nLoad File:\n"+str(self.lf))
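``interface.write_input`` above emits a short plain-text section per interface. A standalone sketch of that output written to an in-memory buffer (the index, direction, and block tuples are hypothetical examples):

```python
from io import StringIO

def write_locked_section(f, index, direction, bm, bp):
    # Field order copied from interface.write_input: section header, normal
    # direction, then the minus- and plus-side block indices
    f.write("[fdfault.interface"+str(index)+"]\n")
    f.write(direction+"\n")
    f.write(" ".join(str(b) for b in bm)+"\n")
    f.write(" ".join(str(b) for b in bp)+"\n")
    f.write("\n")

buf = StringIO()
write_locked_section(buf, 0, "x", (0, 0, 0), (1, 0, 0))
# buf.getvalue() == "[fdfault.interface0]\nx\n0 0 0\n1 0 0\n\n"
```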
class paramfric(friction):
"""
Class representing a generic frictional interface requiring parameters
This is the parent class of all frictional interfaces that require parameter specification.
The ``paramfric`` class contains methods common to all parameter friction laws.
This includes a list of parameter perturbations and a parameter file, which behave in the
same manner as loads.
Parameter Frictional interfaces have the following attributes:
:ivar ndim: Number of dimensions in problem (2 or 3)
:type ndim: int
:ivar iftype: Type of interface (set by the specific friction law of the subclass)
:type iftype: str
:ivar index: index of interface (used for identification purposes only, order is irrelevant in simulation)
:type index: int
:ivar bm: Indices of block in the "minus" direction (tuple of 3 integers)
:type bm: tuple
:ivar bp: Indices of block in the "plus" direction (tuple of 3 integers)
:type bp: tuple
:ivar direction: Normal direction in computational space ("x", "y", or "z")
:type direction: str
:ivar nloads: Number of load perturbations (length of ``loads`` list)
:type nloads: int
:ivar loads: List of load perturbations
:type loads: list
:ivar lf: Loadfile holding traction at each point
:type lf: loadfile
:ivar nperts: Number of parameter perturbations (length of ``perts`` list)
:type nperts: int
:ivar perts: List of parameter perturbations (type of perturbation must match the interface type)
:type perts: list
:ivar pf: Paramfile holding friction parameter values at each point (type must match the interface type)
:type pf: paramfile
"""
def __init__(self, ndim, index, direction, bm, bp):
"""
Initializes an instance of the ``paramfric`` class
Create a new ``paramfric`` given an index, direction, and block coordinates.
:param ndim: Number of spatial dimensions (must be 2 or 3)
:type ndim: int
:param index: Interface index, used for bookkeeping purposes, must be nonnegative
:type index: int
:param direction: String indicating normal direction of interface in computational space,
must be ``'x'``, ``'y'``, or ``'z'``, with ``'z'`` only allowed for 3D problems
:type direction: str
:param bm: Coordinates of block in minus direction (tuple of length 3 of integers)
:type bm: tuple
:param bp: Coordinates of block in plus direction (tuple of length 3 of integers, must
differ from ``bm`` by 1 only along the given direction to ensure blocks
are neighboring one another)
:type bp: tuple
:returns: New instance of paramfric class
:rtype: paramfric
"""
friction.__init__(self, ndim, index, direction, bm, bp)
self.nperts = 0
self.perts = []
self.pf = None
def get_nperts(self):
"""
Returns number of friction parameter perturbations on interface
Method returns the number of parameter perturbations presently in the list of perturbations.
:returns: Number of parameter perturbations
:rtype: int
"""
return self.nperts
def add_pert(self,newpert):
"""
Add new friction parameter perturbation to an interface
Method adds a frictional parameter perturbation to an interface. ``newpert`` must
be a parameter perturbation of the correct kind for the given interface type (i.e. if
the interface is of type ``slipweak``, then ``newpert`` must have type ``swparam``).
:param newpert: New perturbation to be added. Must have a type that matches
the interface(s) in question.
:type newpert: pert (more precisely, one of the derived classes of friction parameter perturbations)
:returns: None
"""
self.perts.append(newpert)
self.nperts = len(self.perts)
def delete_pert(self, index = -1):
"""
Deletes frictional parameter perturbation from interface
``index`` is an integer that indicates the position within the list of perturbations. Default is most
recently added (-1).
:param index: Index within perturbation list of the given interface to remove. Default is
last item (-1, or most recently added)
:type index: int
:returns: None
"""
self.perts.pop(index)
self.nperts = len(self.perts)
def get_pert(self, index = None):
"""
Returns perturbation at position index
Method returns a perturbation from the interface. ``index`` is the index into the perturbation
list for the interface in question. If ``index`` is not provided or is ``None``, the method returns the
entire list.
:param index: Index into the perturbation list (optional; if not provided or ``None``,
returns the entire list)
:type index: int or None
:returns: pert or list
"""
if index is None:
return self.perts
else:
assert isinstance(index, int), "index must be an integer"
return self.perts[index]
def get_paramfile(self):
"""
Returns paramfile (holds arrays of heterogeneous friction parameters) for interface.
Can return a subtype of paramfile corresponding to any of the specific friction
law types.
:returns: Paramfile for this interface
:rtype: paramfile
"""
return self.pf
def set_paramfile(self, newparamfile):
"""
Sets paramfile for the interface
Method sets the file holding frictional parameters for the interface.
``newparamfile`` must be a parameter perturbation file of the correct type for the given
interface type (i.e. if the interface is of type ``slipweak``, then ``newparamfile`` must have type
``swparamfile``). Errors can also result if the shape of the paramfile does not match
with the interface.
:param newparamfile: New frictional parameter file (type depends on interface in question)
:type newparamfile: paramfile (actual type must be the appropriate subclass for the
friction law of the particular interface and have the right shape)
:returns: None
"""
self.pf = newparamfile
def delete_paramfile(self):
"""
Deletes friction parameter file for the interface
Removes the friction parameter file for the interface. The interface
must be a frictional interface that can accept parameter files.
:returns: None
"""
self.pf = None
def check(self, nx):
"""
Checks if interface size is consistent with simulation. Only needed for interfaces using
files for load, state, or parameter values.
:param nx: Number of grid points of neighboring block (tuple of two integers)
:type nx: tuple
:returns: None
"""
friction.check(self, nx)
if self.pf is not None:
assert (self.pf.get_n1() == nx[0] and self.pf.get_n2() == nx[1]), "paramfile size not consistent with neighboring blocks"
def write_input(self, f, probname, directory, endian = '='):
"""
Writes interface details to input file
This routine is called for every interface when writing problem data to file. It writes
the appropriate section for the interface in the input file. It also writes any necessary
binary files holding interface loads, parameters, or state variables.
:param f: File handle for input file
:type f: file
:param probname: problem name (used for naming binary files)
:type probname: str
:param directory: Directory for output
:type directory: str
:param endian: Byte ordering for binary files (``'<'`` little endian, ``'>'`` big endian, ``'='`` native,
default is native)
:type endian: str
:returns: None
"""
friction.write_input(self, f, probname, directory, endian)
if directory == "":
inputfiledir = 'problems/'
else:
inputfiledir = directory
f.write("[fdfault."+self.iftype+"]\n")
f.write(str(self.nperts)+"\n")
for p in self.perts:
p.write_input(f)
if self.pf is None:
f.write("none\n")
else:
f.write(join(inputfiledir, probname)+"_interface"+str(self.index)+"."+self.suffix+"\n")
self.pf.write(join(directory, probname+"_interface"+str(self.index)+"."+self.suffix), endian)
f.write("\n")
def __str__(self):
"Returns string representation of generic friction law"
loadstring = ""
for load in self.loads:
loadstring += "\n"+str(load)
return ('Parameter frictional interface '+str(self.index)+":\ndirection = "+self.direction+
"\nbm = "+str(self.bm)+"\nbp = "+str(self.bp)
+"\nnloads = "+str(self.nloads)+"\nLoads:"+loadstring+"\nParameter File:\n"+str(self.pf))
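The binary files referenced from ``paramfric.write_input`` follow a fixed naming scheme built from the problem name, the interface index, and a suffix set by each friction-law subclass. A minimal sketch of that path construction (the ``"sw"`` suffix used below is a hypothetical example):

```python
from os.path import join

def interface_filename(directory, probname, index, suffix):
    # Mirrors the path construction in paramfric.write_input; the suffix
    # value comes from the friction-law subclass and is assumed here
    return join(directory, probname+"_interface"+str(index)+"."+suffix)
```

For example, ``interface_filename("problems", "test", 1, "sw")`` gives ``problems/test_interface1.sw`` on POSIX systems.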
class statefric(paramfric):
"""
Class representing a generic frictional interface with a state variable
This is the parent class of all frictional interfaces that require a state variable.
The ``statefric`` class contains methods common to all state variable friction laws.
This includes the uniform initial state variable and a file holding a heterogeneous initial
state variable.
State Variable Frictional interfaces have the following attributes:
:ivar ndim: Number of dimensions in problem (2 or 3)
:type ndim: int
:ivar iftype: Type of interface (set by the specific friction law of the subclass)
:type iftype: str
:ivar index: index of interface (used for identification purposes only, order is irrelevant in simulation)
:type index: int
:ivar bm: Indices of block in the "minus" direction (tuple of 3 integers)
:type bm: tuple
:ivar bp: Indices of block in the "plus" direction (tuple of 3 integers)
:type bp: tuple
:ivar direction: Normal direction in computational space ("x", "y", or "z")
:type direction: str
:ivar nloads: Number of load perturbations (length of ``loads`` list)
:type nloads: int
:ivar loads: List of load perturbations
:type loads: list
:ivar lf: Loadfile holding traction at each point
:type lf: loadfile
:ivar nperts: Number of parameter perturbations (length of ``perts`` list)
:type nperts: int
:ivar perts: List of parameter perturbations (type of perturbation must match the interface type)
:type perts: list
:ivar pf: Paramfile holding friction parameter values at each point (type must match the interface type)
:type pf: paramfile
:ivar state: Initial value of state variable
:type state: float
:ivar sf: Statefile holding heterogeneous initial state variable values
:type sf: statefile
"""
def __init__(self, ndim, index, direction, bm, bp):
"""
Initializes an instance of the ``statefric`` class
Create a new ``statefric`` given an index, direction, and block coordinates.
:param ndim: Number of spatial dimensions (must be 2 or 3)
:type ndim: int
:param index: Interface index, used for bookkeeping purposes, must be nonnegative
:type index: int
:param direction: String indicating normal direction of interface in computational space,
must be ``'x'``, ``'y'``, or ``'z'``, with ``'z'`` only allowed for 3D problems
:type direction: str
:param bm: Coordinates of block in minus direction (tuple of length 3 of integers)
:type bm: tuple
:param bp: Coordinates of block in plus direction (tuple of length 3 of integers, must
differ from ``bm`` by 1 only along the given direction to ensure blocks
are neighboring one another)
:type bp: tuple
:returns: New instance of statefric class
:rtype: statefric
"""
paramfric.__init__(self, ndim, index, direction, bm, bp)
self.state = 0.
self.sf = None
def get_state(self):
"""
Returns initial state variable value for interface
:returns: Initial state variable
:rtype: float
"""
return self.state
def set_state(self, newstate):
"""
Sets initial state variable for interface
Set the initial value for the state variable. ``newstate`` is the new state variable (must
be a float or some other valid number).
:param newstate: New value of state variable
:type newstate: float
:returns: None
"""
self.state = float(newstate)
def get_statefile(self):
"""
Returns state file of interface
If the interface does not have a statefile, returns None.
:returns: Statefile for the interface
:rtype: statefile or None
"""
return self.sf
def set_statefile(self, newstatefile):
"""
Sets state file for interface
Set the statefile for the interface. ``newstatefile`` must have type ``statefile``.
Errors can also result if the shape of the statefile does not match with the interface.
:param newstatefile: New statefile for the interface in question.
:type newstatefile: statefile
:returns: None
"""
assert type(newstatefile) is statefile, "new state file must be of type statefile"
self.sf = newstatefile
def delete_statefile(self):
"""
Deletes statefile for the interface
Delete the statefile for the interface. Will set the statefile attribute for the interface to None.
:returns: None
"""
self.sf = None
def check(self, nx):
"""
Checks if interface size is consistent with simulation. Only needed for interfaces using
files for load, state, or parameter values.
:param nx: Number of grid points of neighboring block (tuple of two integers)
:type nx: tuple
:returns: None
"""
paramfric.check(self, nx)
if self.sf is not None:
assert (self.sf.get_n1() == nx[0] and self.sf.get_n2() == nx[1]), "statefile size not consistent with neighboring blocks"
def write_input(self, f, probname, directory, endian = '='):
"""
Writes interface details to input file
This routine is called for every interface when writing problem data to file. It writes
the appropriate section for the interface in the input file. It also writes any necessary
binary files holding interface loads, parameters, or state variables.
:param f: File handle for input file
:type f: file
:param probname: problem name (used for naming binary files)
:type probname: str
:param directory: Directory for output
:type directory: str
:param endian: Byte ordering for binary files (``'<'`` little endian, ``'>'`` big endian, ``'='`` native,
default is native)
:type endian: str
:returns: None
"""
friction.write_input(self, f, probname, directory, endian)
if directory == "":
inputfiledir = 'problems/'
else:
inputfiledir = directory
f.write("[fdfault."+self.iftype+"]\n")
f.write(str(self.state)+"\n")
if self.sf is None:
f.write("none\n")
else:
f.write(join(inputfiledir, probname)+"_interface"+str(self.index)+".state\n")
self.sf.write(join(directory, probname+"_interface"+str(self.index)+".state"), endian)
f.write(str(self.nperts)+"\n")
for p in self.perts:
p.write_input(f)
if self.pf is None:
f.write("none\n")
else:
f.write(join(inputfiledir, probname)+"_interface"+str(self.index)+"."+self.suffix+"\n")
self.pf.write(join(directory, probname+"_interface"+str(self.index)+"."+self.suffix), endian)
f.write("\n")
def __str__(self):
"Returns string representation of generic state variable friction law"
loadstring = ""
for load in self.loads:
loadstring += "\n"+str(load)
return ('State variable frictional interface '+str(self.index)+":\ndirection = "+self.direction+
"\nbm = "+str(self.bm)+"\nbp = "+str(self.bp)
+"\nstate = "+str(self.state)+"\nstatefile = "+str(self.sf)
+"\nnloads = "+str(self.nloads)+"\nLoads:"+loadstring+"\nLoad File:\n"+str(self.lf)
+"\nParameter File:\n"+str(self.pf))
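The section ``statefric.write_input`` emits places the uniform state value and optional state file ahead of the perturbation count and parameter file. A standalone sketch of that field order using an in-memory buffer (the ``"stz"`` law name and the values are hypothetical; individual perturbation lines are omitted):

```python
from io import StringIO

def write_state_section(f, iftype, state, nperts, statefile_path=None, paramfile_path=None):
    # Field order copied from statefric.write_input: header, state value,
    # state-file path or "none", perturbation count, parameter-file path or "none"
    f.write("[fdfault."+iftype+"]\n")
    f.write(str(state)+"\n")
    f.write((statefile_path or "none")+"\n")
    f.write(str(nperts)+"\n")
    f.write((paramfile_path or "none")+"\n")
    f.write("\n")

buf = StringIO()
write_state_section(buf, "stz", 0.0, 0)
# buf.getvalue() == "[fdfault.stz]\n0.0\nnone\n0\nnone\n\n"
```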
class slipweak(paramfric):
"""
Class representing a slip weakening frictional interface
This class describes slip weakening friction laws. This is a frictional interface with parameter values.
Tractions on the interface are set using load perturbations and load files. Parameter values
are set using parameter perturbations (the ``swparam`` class) and parameter files (the
``swparamfile`` class). Parameters that can be specified include:
* The slip weakening distance :math:`{d_c}`, ``dc``
* The static friction value :math:`{\mu_s}`, ``mus``
* The dynamic friction value :math:`{\mu_d}`, ``mud``
* The frictional cohesion :math:`{c_0}`, ``c0``
* The forced rupture time :math:`{t_{rup}}`, ``trup``
* The characteristic weakening time :math:`{t_c}`, ``tc``
Slip weakening Frictional interfaces have the following attributes:
:ivar ndim: Number of dimensions in problem (2 or 3)
:type ndim: int
:ivar iftype: Type of interface ('slipweak' for slip weakening interfaces)
:type iftype: str
:ivar index: index of interface (used for identification purposes only, order is irrelevant in simulation)
:type index: int
:ivar bm: Indices of block in the "minus" direction (tuple of 3 integers)
:type bm: tuple
:ivar bp: Indices of block in the "plus" direction (tuple of 3 integers)
:type bp: tuple
:ivar direction: Normal direction in computational space ("x", "y", or "z")
:type direction: str
:ivar nloads: Number of load perturbations (length of ``loads`` list)
:type nloads: int
:ivar loads: List of load perturbations
:type loads: list
:ivar lf: Loadfile holding traction at each point
:type lf: loadfile
:ivar nperts: Number of parameter perturbations (length of ``perts`` list)
:type nperts: int
:ivar perts: List of parameter perturbations (perturbations must be ``swparam`` type)
:type perts: list
:ivar pf: Paramfile holding frictional parameter values at each point
:type pf: swparamfile
"""
def __init__(self, ndim, index, direction, bm, bp):
"""
Initializes an instance of the ``slipweak`` class
Create a new ``slipweak`` given an index, direction, and block coordinates.
:param ndim: Number of spatial dimensions (must be 2 or 3)
:type ndim: int
:param index: Interface index, used for bookkeeping purposes, must be nonnegative
:type index: int
:param direction: String indicating normal direction of interface in computational space,
must be ``'x'``, ``'y'``, or ``'z'`` (``'z'`` only allowed for 3D problems)
:type direction: str
:param bm: Coordinates of block in minus direction (tuple of length 3 of integers)
:type bm: tuple
:param bp: Coordinates of block in plus direction (tuple of length 3 of integers, must
differ from ``bm`` by 1 only along the given direction to ensure blocks
are neighboring one another)
:type bp: tuple
:returns: New instance of slipweak class
:rtype: slipweak
"""
paramfric.__init__(self, ndim, index, direction, bm, bp)
self.iftype = "slipweak"
self.suffix = 'sw'
def add_pert(self,newpert):
"""
Add new friction parameter perturbation to an interface
Method adds a frictional parameter perturbation to an interface. ``newpert`` must
have type ``swparam``.
:param newpert: New perturbation to be added
:type newpert: swparam
:returns: None
"""
assert type(newpert) is swparam, "Cannot add types other than swparam to parameter list"
paramfric.add_pert(self, newpert)
def set_paramfile(self, newparamfile):
"""
Sets paramfile for the interface
Method sets the file holding frictional parameters for the interface.
``newparamfile`` must be a parameter perturbation file of type ``swparamfile``.
Errors can also result if the shape of the paramfile does not match the interface.
:param newparamfile: New frictional parameter file
:type newparamfile: swparamfile
:returns: None
"""
assert type(newparamfile) is swparamfile, "parameter file must have appropriate type"
paramfric.set_paramfile(self, newparamfile)
def __str__(self):
"Returns string representation of slip weakening friction law"
return ('Slip weakening'+paramfric.__str__(self))
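# The ``dc``, ``mus``, and ``mud`` parameters listed in the class docstring combine
# in the standard linear slip weakening law. The class itself only stores parameter
# values (the solver applies the law), but for reference, a minimal sketch that
# ignores cohesion ``c0`` and the time weakening terms ``trup``/``tc``:

```python
def linear_slip_weakening(slip, mus, mud, dc):
    """Friction coefficient for the standard linear slip weakening law.

    The coefficient drops linearly from the static value mus to the
    dynamic value mud over the slip weakening distance dc, then stays
    at mud for any further slip.
    """
    if slip >= dc:
        return mud
    return mus - (mus - mud) * slip / dc
```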
class stz(statefric):
"""
Class representing a Shear Transformation Zone (STZ) Theory Frictional Interface
STZ Frictional Interfaces are an interface with a state variable, in this case an
effective temperature. The interface also requires setting the interface tractions and parameter
values in addition to the initial value of the state variable. All of these can be set
using some combination of perturbations and files. Parameters include:
* Reference velocity :math:`{V_0}` , ``v0``
* Reference activation barrier :math:`{f_0}`, ``f0``
* Frictional direct effect :math:`{a}`, ``a``
* Frictional yield coefficient :math:`{\mu_y}`, ``muy``
* Effective temperature specific heat :math:`{c_0}`, ``c0``
* Effective temperature relaxation rate :math:`{R}`, ``R``
* Effective temperature relaxation barrier :math:`{\\beta}`, ``beta``
* Effective temperature activation barrier :math:`{\chi_w}`, ``chiw``
* Reference velocity for effective temperature activation :math:`{V_1}`, ``v1``
STZ Frictional interfaces have the following attributes:
:ivar ndim: Number of dimensions in problem (2 or 3)
:type ndim: int
:ivar iftype: Type of interface ('stz' for STZ interfaces)
:type iftype: str
:ivar index: index of interface (used for identification purposes only, order is irrelevant in simulation)
:type index: int
:ivar bm: Indices of block in the "minus" direction (tuple of 3 integers)
:type bm: tuple
:ivar bp: Indices of block in the "plus" direction (tuple of 3 integers)
:type bp: tuple
:ivar direction: Normal direction in computational space ("x", "y", or "z")
:type direction: str
:ivar nloads: Number of load perturbations (length of ``loads`` list)
:type nloads: int
:ivar loads: List of load perturbations
:type loads: list
:ivar lf: Loadfile holding traction at each point
:type lf: loadfile
:ivar nperts: Number of parameter perturbations (length of ``perts`` list)
:type nperts: int
:ivar perts: List of parameter perturbations (each must be ``stzparam``)
:type perts: list
:ivar pf: Paramfile holding frictional parameter values at each point
:type pf: stzparamfile
:ivar state: Initial value of state variable
:type state: float
:ivar sf: Statefile holding heterogeneous initial state variable values
:type sf: statefile
"""
def __init__(self, ndim, index, direction, bm, bp):
"""
Initializes an instance of the ``stz`` class
Create a new ``stz`` given an index, direction, and block coordinates.
:param ndim: Number of spatial dimensions (must be 2 or 3)
:type ndim: int
:param index: Interface index, used for bookkeeping purposes, must be nonnegative
:type index: int
:param direction: String indicating normal direction of interface in computational space,
must be ``'x'``, ``'y'``, or ``'z'`` (``'z'`` only allowed for 3D problems)
:type direction: str
:param bm: Coordinates of block in minus direction (tuple of length 3 of integers)
:type bm: tuple
:param bp: Coordinates of block in plus direction (tuple of length 3 of integers, must
differ from ``bm`` by 1 only along the given direction to ensure blocks
are neighboring one another)
:type bp: tuple
:returns: New instance of stz class
:rtype: stz
"""
statefric.__init__(self, ndim, index, direction, bm, bp)
self.iftype = "stz"
self.suffix = "stz"
def add_pert(self,newpert):
"""
Add new friction parameter perturbation to an interface
Method adds a frictional parameter perturbation to an interface. ``newpert`` must
have type ``stzparam``.
:param newpert: New perturbation to be added
:type newpert: stzparam
:returns: None
"""
assert type(newpert) is stzparam, "Cannot add types other than stzparam to parameter list"
paramfric.add_pert(self, newpert)
def set_paramfile(self, newparamfile):
"""
Sets paramfile for the interface
Method sets the file holding frictional parameters for the interface.
``newparamfile`` must be a parameter perturbation file of type ``stzparamfile``.
Errors can also result if the shape of the paramfile does not match the interface.
:param newparamfile: New frictional parameter file
:type newparamfile: stzparamfile
:returns: None
"""
assert type(newparamfile) is stzparamfile, "parameter file must have appropriate type"
paramfric.set_paramfile(self, newparamfile)
def __str__(self):
"Returns string representation of stz friction law"
return ('STZ'+paramfric.__str__(self))
# --- tests/test_outliers.py (scarlqq/py_outliers_utils, MIT) ---
from outliers import outliers
# --- main_start/core/helpers.py (aviskumar/speedo, BSD-3-Clause) ---
from main_start.config_var import Config
from main_start.helper_func.basic_helpers import edit_or_reply, is_admin_or_owner
# --- src/data/make_dataset_utils.py (gnocchiflette/NTU-RGB-D, MIT) ---
r"""
This module creates different h5 files that contain the data provided by NTU RGB+D in numpy ready format.
The following functions are provided.
- *create_h5_2d_ir_skeleton*: Creates h5 with 2D IR skeleton data
- *create_h5_skeleton_dataset:* Creates h5 with 3D skeleton data
- *create_h5_ir_dataset*: Creates h5 with raw IR sequences
- *create_h5_ir_cropped_dataset_from_h5*: Creates h5 containing cropped IR sequences around the subjects with a
fixed bounding box. Requires *create_h5_ir_dataset* and *create_h5_2d_ir_skeleton* to be run first.
- *create_h5_ir_cropped_moving_dataset_from_h5*: Creates h5 containing cropped IR sequences around the subjects
with a moving bounding box. Requires *create_h5_ir_dataset* and *create_h5_2d_ir_skeleton* to be run first.
"""
import cv2
import h5py
import os
import skvideo.io
import skvideo.datasets
from click import progressbar
# Custom modules
from src.data.read_NTU_RGB_D_skeleton import *
from src.utils.joints import *
def create_h5_2d_ir_skeleton(input_path, output_path, compression="", compression_opts=9):
r"""Creates an h5 dataset of the 2D skeleton projected on the IR frames.
For each sequence, a new group with the name of the sequence, **SsssCcccPpppRrrrAaaa**, is created.
In each group, a new dataset is created containing the 2D skeleton data.
The skeleton data is of shape `(2 {x, y}, max_frame, num_joint, 2 {n_subjects})`
The h5 may be used as a standalone but is necessary to create the processed IR h5 files (see below).
The method creates the file "ir_skeleton.h5". **Warning:** The file should not be renamed!
Inputs:
- **input_path** (str): Path containing the raw NTU files (default: *./data/raw/.*
See **Project Organization** in *README.md*)
- **output_path** (str): Path containing the processed h5 files (default: *./data/processed/.*
See **Project Organization** in *README.md*)
- **compression** (str): Compression type for h5. May take values in ["", "lzf", "gzip"]
- **compression_opts** (int): Compression opts. For "gzip" compression only.
May take values in the [0; 9] range.
"""
# Folder containing raw skeleton files (input_path + skeleton_folder)
skeleton_folder = "nturgb+d_skeletons/"
# Create a log file to track and debug progress
open_type = "w"
file = open(output_path + 'log.txt', 'w')
file.close()
# Create h5 file
with h5py.File(output_path + 'ir_skeleton.h5', open_type) as hdf:
# Progress bar
progress_bar = progressbar(iterable=None,
length=len(next(os.walk(input_path + skeleton_folder))[2])
)
# Loop through skeleton files
for filename in os.listdir(input_path + skeleton_folder):
# Sequence name (ie. S001C002P003R004A005)
sequence_name = os.path.splitext(filename)[0]
# Retrieve skeleton data of shape (2, max_frame, num_joint, n_subjects)
skeleton = read_xy_ir(input_path + skeleton_folder + filename)
# Log current sequence
f = open(output_path + "log.txt", "a+")
f.write(sequence_name)
f.write("\r\n")
f.close()
# Create a group for the current sequence
sample = hdf.create_group(sequence_name)
# Create a dataset with the skeleton data
if compression == "":
sample.create_dataset("ir_skeleton", data=skeleton)
elif compression == "lzf":
sample.create_dataset("ir_skeleton", data=skeleton, compression=compression)
elif compression == "gzip":
sample.create_dataset("ir_skeleton", data=skeleton,
compression=compression,
compression_opts=compression_opts)
else:
print("Compression type not recognized ... Exiting")
return
progress_bar.update(1)
def create_h5_skeleton_dataset(input_path, output_path, compression="", compression_opts=9):
r"""Creates an h5 dataset of the 3D skeleton data.
For each sequence, a new group with the name of the sequence, **SsssCcccPpppRrrrAaaa**, is created.
In each group, a new dataset is created containing the 3D skeleton data.
The skeleton data is of shape `(3 {x, y, z}, max_frame, num_joint, 2 {n_subjects})`
The method creates the file "skeleton.h5". **Warning:** The file should not be renamed!
Inputs:
- **input_path** (str): Path containing the raw NTU files (default: *./data/raw/*.
See **Project Organization** in *README.md*)
- **output_path** (str): Path containing the processed h5 files (default: *./data/processed/*.
See **Project Organization** in *README.md*)
- **compression** (str): Compression type for h5. May take values in ["", "lzf", "gzip"]
- **compression_opts** (int): Compression opts. For "gzip" compression only.
May take values in the [0; 9] range.
"""
# Folder containing raw skeleton files (input_path + skeleton_folder)
skeleton_folder = "nturgb+d_skeletons/"
# Create a log file to track and debug progress
open_type = "w"
file = open(output_path + 'log.txt', 'w')
file.close()
# Create h5 file
with h5py.File(output_path + 'skeleton.h5', open_type) as hdf:
# Progress bar
progress_bar = progressbar(iterable=None,
length=len(next(os.walk(input_path + skeleton_folder))[2])
)
# Loop through skeleton files
for filename in os.listdir(input_path + skeleton_folder):
# Sequence name (ie. S001C002P003R004A005)
sequence_name = os.path.splitext(filename)[0]
# Retrieve skeleton data of shape (2, max_frame, num_joint, n_subjects)
skeleton = read_xyz(input_path + skeleton_folder + filename)
# Log current sequence
f = open(output_path + "log.txt", "a+")
f.write(sequence_name)
f.write("\r\n")
f.close()
# Create a group for the current sequence
sample = hdf.create_group(sequence_name)
# Create a dataset with the skeleton data
if compression == "":
sample.create_dataset("skeleton", data=skeleton)
elif compression == "lzf":
sample.create_dataset("skeleton", data=skeleton, compression=compression)
elif compression == "gzip":
sample.create_dataset("skeleton", data=skeleton,
compression=compression,
compression_opts=compression_opts)
else:
print("Compression type not recognized ... Exiting")
return
progress_bar.update(1)
def create_h5_ir_dataset(input_path, output_path, compression="", compression_opts=9):
r"""Creates an h5 dataset of the unprocessed IR sequences.
For each sequence, a new group with the name of the sequence, **SsssCcccPpppRrrrAaaa**, is created.
In each group, a new dataset is created containing the unprocessed IR sequence.
The IR video data is of shape `(n_frames, H, W)`.
The h5 may be used as a standalone but is necessary to create the processed IR h5 files (see below).
The method creates the file "ir.h5". **Warning:** The file should not be renamed!
Inputs:
- **input_path** (str): Path containing the raw NTU files (default: *./data/raw/*.
See **Project Organization** in *README.md*)
- **output_path** (str): Path containing the processed h5 files (default: *./data/processed/*.
See **Project Organization** in *README.md*)
- **compression** (str): Compression type for h5. May take values in ["", "lzf", "gzip"]
- **compression_opts** (int): Compression opts. For "gzip" compression only.
May take values in the [0; 9] range.
"""
# Folder containing raw IR files (input_path + ir_folder)
ir_folder = "nturgb+d_ir/"
# Create a log file to track and debug progress
open_type = "w"
file = open(output_path + 'log.txt', 'w')
file.close()
# Create h5 file
with h5py.File(output_path + 'ir.h5', open_type) as hdf:
# Progress bar
progress_bar = progressbar(iterable=None,
length=len(next(os.walk(input_path + ir_folder))[2])
)
# Loop through skeleton files
for filename in os.listdir(input_path + ir_folder):
# Sequence name (ie. S001C002P003R004A005)
sequence_name = os.path.splitext(filename)[0][:-3]
# Log current sequence
f = open(output_path + "log.txt", "a+")
f.write(sequence_name)
f.write("\r\n")
f.close()
# Read corresponding video
video_data = skvideo.io.vread(
input_path + ir_folder + filename)[:, :, :, 0] # shape (n_frames, H, W)
# Get video dimensions
_, H, W = video_data.shape
# Create a group for the current sequence
sample = hdf.create_group(sequence_name)
# Create a dataset with the skeleton data
if compression == "":
sample.create_dataset("ir", data=video_data, chunks=(1, H, W))
elif compression == "lzf":
sample.create_dataset("ir", data=video_data, compression=compression, chunks=(1, H, W))
elif compression == "gzip":
sample.create_dataset("ir", data=video_data,
compression=compression,
compression_opts=compression_opts,
chunks=(1, H, W))
else:
print("Compression type not recognized ... Exiting")
return
progress_bar.update(1)
def create_h5_ir_cropped_dataset_from_h5(input_path, output_path, compression="", compression_opts=9):
r"""Creates an h5 dataset with processed IR sequences.
The frames are cropped with a bounding box provided by the 2D IR skeleton.
The bounding box is fixed across all frames.
For each sequence, a new group with the name of the sequence, **SsssCcccPpppRrrrAaaa**, is created.
In each group, a new dataset is created containing the unprocessed IR sequence.
The IR video data is of shape `(n_frames, H, W)`.
This method depends on the h5 datasets (ir.h5, ir_skeleton.h5) created by the corresponding methods.
The method creates the file "ir_cropped.h5". **Warning:** The file should not be renamed!
Inputs:
- **input_path** (str): Path containing the processed h5 files (default: *./data/processed/*.
See **Project Organization** in *README.md*)
- **output_path** (str): Path containing the processed h5 files (default: *./data/processed/*.
See **Project Organization** in *README.md*)
- **compression** (str): Compression type for h5. May take values in ["", "lzf", "gzip"]
- **compression_opts** (int): Compression opts. For "gzip" compression only.
May take values in the [0; 9] range.
"""
# Get samples list
samples_names_list = [line.rstrip('\n') for line in open(input_path + "samples_names.txt")]
# Existing h5 files
ir_skeleton_dataset_file_name = "ir_skeleton.h5"
ir_dataset_file_name = "ir.h5"
# Offset around bounding box
offset = 20
# Create a log file to track and debug progress
open_type = "w"
file = open(output_path + 'log.txt', 'w')
file.close()
# Open existing h5 files
ir_skeleton_dataset = h5py.File(input_path + ir_skeleton_dataset_file_name, 'r')
ir_dataset = h5py.File(input_path + ir_dataset_file_name, 'r')
# Create h5 file
with h5py.File(output_path + 'ir_cropped.h5', open_type) as hdf:
# Progress bar
progress_bar = progressbar(iterable=None, length=len(samples_names_list))
# Loop through skeleton files
for sequence_name in samples_names_list:
# Log current sequence
f = open(output_path + "log.txt", "a+")
f.write(sequence_name)
f.write("\r\n")
f.close()
# Fetch corresponding ir raw sequence
video_data = ir_dataset[sequence_name]["ir"][:]
# Pad video to compensate for offset
cropped_ir_sample = np.pad(video_data, ((0, 0), (offset, offset), (offset, offset)), mode='constant')
# Get corresponding ir skeleton shape(2 : {y, x}, seq_len, n_joints, n_subjects)
ir_skeleton = ir_skeleton_dataset[sequence_name]["ir_skeleton"][:].clip(min=0)
# Check if there is another subject if there exists non zero coordinates for subject 2
has_2_subjects = np.any(ir_skeleton[:, :, :, 1])
# Calculate boundaries
if not has_2_subjects:
y_min = min(np.uint16(np.amin(ir_skeleton[0, :, :, 0])), video_data.shape[2])
y_max = min(np.uint16(np.amax(ir_skeleton[0, :, :, 0])), video_data.shape[2])
x_min = min(np.uint16(np.amin(ir_skeleton[1, :, :, 0])), video_data.shape[1])
x_max = min(np.uint16(np.amax(ir_skeleton[1, :, :, 0])), video_data.shape[1])
else:
y_min = min(np.uint16(np.amin(ir_skeleton[0, :, :, :])), video_data.shape[2])
y_max = min(np.uint16(np.amax(ir_skeleton[0, :, :, :])), video_data.shape[2])
x_min = min(np.uint16(np.amin(ir_skeleton[1, :, :, :])), video_data.shape[1])
x_max = min(np.uint16(np.amax(ir_skeleton[1, :, :, :])), video_data.shape[1])
# Crop video
cropped_ir_sample = cropped_ir_sample[:, x_min:x_max + 2 * offset, y_min:y_max + 2 * offset]
# Get video dimensions
_, H, W = cropped_ir_sample.shape
# Create a group for the current sequence
sample = hdf.create_group(sequence_name)
# Create a dataset with the skeleton data
if compression == "":
sample.create_dataset("ir", data=cropped_ir_sample, chunks=(1, H, W))
elif compression == "lzf":
sample.create_dataset("ir", data=cropped_ir_sample, compression=compression, chunks=(1, H, W))
elif compression == "gzip":
sample.create_dataset("ir", data=cropped_ir_sample,
compression=compression,
compression_opts=compression_opts,
chunks=(1, H, W))
else:
print("Compression type not recognized ... Exiting")
return
progress_bar.update(1)
ir_skeleton_dataset.close()
ir_dataset.close()
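# The fixed bounding box logic above (pad every frame by ``offset``, then crop
# between the global skeleton extremes) can be exercised on toy arrays. The
# shapes below are invented for the example; only the indexing convention
# (2 {y, x}, seq_len, n_joints, n_subjects) comes from the code above:

```python
import numpy as np

offset = 20
# Toy IR sequence: 3 frames of 40x60, and a fake 2D skeleton whose joints
# stay inside rows 10..20 and columns 25..35.
video = np.zeros((3, 40, 60), dtype=np.uint8)
ir_skeleton = np.zeros((2, 3, 4, 2))
ir_skeleton[0, :, :, 0] = np.linspace(25, 35, 4)   # y (column) coordinates
ir_skeleton[1, :, :, 0] = np.linspace(10, 20, 4)   # x (row) coordinates

# Pad so that the offset never indexes outside the frame
padded = np.pad(video, ((0, 0), (offset, offset), (offset, offset)), mode="constant")
# Global extremes over all frames, capped at the frame size
y_min = min(np.uint16(np.amin(ir_skeleton[0, :, :, 0])), video.shape[2])
y_max = min(np.uint16(np.amax(ir_skeleton[0, :, :, 0])), video.shape[2])
x_min = min(np.uint16(np.amin(ir_skeleton[1, :, :, 0])), video.shape[1])
x_max = min(np.uint16(np.amax(ir_skeleton[1, :, :, 0])), video.shape[1])
# One fixed crop applied to every frame
cropped = padded[:, x_min:x_max + 2 * offset, y_min:y_max + 2 * offset]
```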
def create_h5_ir_cropped_moving_dataset_from_h5(input_path, output_path, compression="", compression_opts=9):
r"""Creates an h5 dataset with processed IR sequences.
The frames are cropped with a bounding box provided by the 2D IR skeleton.
The bounding box is updated at every frame.
For each sequence, a new group with the name of the sequence, **SsssCcccPpppRrrrAaaa**, is created.
In each group, a new dataset is created containing the unprocessed IR sequence.
The IR video data is of shape `(n_frames, H, W)`.
This method depends on the h5 datasets (ir.h5, ir_skeleton.h5) created by the corresponding methods.
The method creates the file "ir_cropped_moving.h5". **Warning:** The file should not be renamed!
Inputs:
- **input_path** (str): Path containing the processed h5 files (default: *./data/processed/.*
See **Project Organization** in *README.md*)
- **output_path** (str): Path containing the processed h5 files (default: *./data/processed/.*
See **Project Organization** in *README.md*)
- **compression** (str): Compression type for h5. May take values in ["", "lzf", "gzip"]
- **compression_opts** (int): Compression opts. For "gzip" compression only.
May take values in the [0; 9] range.
"""
# Get samples list
samples_names_list = [line.rstrip('\n') for line in open(input_path + "samples_names.txt")]
# Existing h5 files
ir_skeleton_dataset_file_name = "ir_skeleton.h5"
ir_dataset_file_name = "ir.h5"
# Offset around bounding box
offset = 20
# Create a log file to track and debug progress
open_type = "w"
file = open(output_path + 'log.txt', 'w')
file.close()
# Open existing h5 files
ir_skeleton_dataset = h5py.File(input_path + ir_skeleton_dataset_file_name, 'r')
ir_dataset = h5py.File(input_path + ir_dataset_file_name, 'r')
# Create h5 file
with h5py.File(output_path + 'ir_cropped_moving.h5', open_type) as hdf:
# Progress bar
progress_bar = progressbar(iterable=None, length=len(samples_names_list))
# Loop through skeleton files
for sequence_name in samples_names_list:
# Log current sequence
f = open(output_path + "log.txt", "a+")
f.write(sequence_name)
f.write("\r\n")
f.close()
# Fetch corresponding ir raw sequence shape (n_frames, H, W)
video_data = ir_dataset[sequence_name]["ir"][:]
# Get corresponding ir skeleton shape(2 : {y, x}, seq_len, n_joints, n_subjects)
ir_skeleton = ir_skeleton_dataset[sequence_name]["ir_skeleton"][:].clip(min=0)
# Check if there is another subject if there exists non zero coordinates for subject 2
has_2_subjects = np.any(ir_skeleton[:, :, :, 1])
# Calculate boundaries for each frame
y_min = np.uint16(np.amin(ir_skeleton[0, :, :, 0], axis=1))
y_max = np.uint16(np.amax(ir_skeleton[0, :, :, 0], axis=1))
x_min = np.uint16(np.amin(ir_skeleton[1, :, :, 0], axis=1))
x_max = np.uint16(np.amax(ir_skeleton[1, :, :, 0], axis=1))
if has_2_subjects:
y_min = np.minimum(y_min, np.uint16(np.amin(ir_skeleton[0, :, :, 1], axis=1)))
y_max = np.maximum(y_max, np.uint16(np.amax(ir_skeleton[0, :, :, 1], axis=1)))
x_min = np.minimum(x_min, np.uint16(np.amin(ir_skeleton[1, :, :, 1], axis=1)))
x_max = np.maximum(x_max, np.uint16(np.amax(ir_skeleton[1, :, :, 1], axis=1)))
# Clip to avoid pixels out of frame (ndarray.clip returns a new
# array, so the result must be assigned back)
x_min = x_min.clip(max=video_data.shape[1])
x_max = x_max.clip(max=video_data.shape[1])
y_min = y_min.clip(max=video_data.shape[2])
y_max = y_max.clip(max=video_data.shape[2])
# Crop and scale ir video
new_sequence = []
for t in range(video_data.shape[0]):
# Fetch individual frame
frame = video_data[t] # shape (H, W)
# Pad frame with zeros (to compensate for offset)
frame = np.pad(frame, ((offset, offset), (offset, offset)), mode='constant')
# Crop frame
frame = frame[x_min[t]:x_max[t] + 2 * offset,
y_min[t]:y_max[t] + 2 * offset]
# Rescale frame
ir_frame = cv2.resize(frame, dsize=(112, 112))
new_sequence.append(ir_frame)
new_sequence = np.stack(new_sequence, axis=0) # shape (n_frames, 112, 112)
# Get video dimensions
_, H, W = new_sequence.shape
# Create a group for the current sequence
sample = hdf.create_group(sequence_name)
# Create a dataset with the skeleton data
if compression == "":
sample.create_dataset("ir", data=new_sequence, chunks=(1, H, W))
elif compression == "lzf":
sample.create_dataset("ir", data=new_sequence, compression=compression, chunks=(1, H, W))
elif compression == "gzip":
sample.create_dataset("ir", data=new_sequence,
compression=compression,
compression_opts=compression_opts,
chunks=(1, H, W))
else:
print("Compression type not recognized ... Exiting")
return
progress_bar.update(1)
ir_skeleton_dataset.close()
ir_dataset.close()
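# The moving-crop variant above differs from the fixed one in that
# ``np.amin``/``np.amax`` run over the joint axis (``axis=1``), yielding one
# bounding box per frame, and boxes of the two subjects are merged with
# ``np.minimum``/``np.maximum``. A toy illustration (shapes invented):

```python
import numpy as np

# (2 {y, x}, 2 frames, 3 joints, 2 subjects), matching the layout used above
ir_skeleton = np.zeros((2, 2, 3, 2))
ir_skeleton[0, 0, :, 0] = [30.0, 32.0, 34.0]   # frame 0, subject 0, y coords
ir_skeleton[0, 1, :, 0] = [40.0, 42.0, 44.0]   # frame 1, subject 0, y coords

# Per-frame extremes for subject 0: reduce over the joint axis only
y_min = np.uint16(np.amin(ir_skeleton[0, :, :, 0], axis=1))
y_max = np.uint16(np.amax(ir_skeleton[0, :, :, 0], axis=1))

# Subject 1 sits further left in y, so the merged box must widen
ir_skeleton[0, :, :, 1] = 28.0
y_min = np.minimum(y_min, np.uint16(np.amin(ir_skeleton[0, :, :, 1], axis=1)))
y_max = np.maximum(y_max, np.uint16(np.amax(ir_skeleton[0, :, :, 1], axis=1)))
```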
# --- acq4/modules/Patch/__init__.py (aleonlein/acq4, MIT) ---
from __future__ import print_function
from .Patch import *
# --- python/testData/psi/NestedMultilineFStringsWithMultilineExpressions.py (truthiswill/intellij-community, Apache-2.0) ---
s = f"""{f'''
{"bar"
}
'''
}""" | 6.2 | 13 | 0.193548 | 4 | 31 | 1.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193548 | 31 | 5 | 14 | 6.2 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dad23d57bbcadc807f92461c0be4560efd5b61c3 | 29,436 | py | Python | aquascope/tests/aquascope/webserver/api/test_items.py | MicroscopeIT/aquascope_backend | 6b8c13ca3d6bd0a96f750fae809b6cf5a0062f24 | [
"MIT"
] | null | null | null | aquascope/tests/aquascope/webserver/api/test_items.py | MicroscopeIT/aquascope_backend | 6b8c13ca3d6bd0a96f750fae809b6cf5a0062f24 | [
"MIT"
] | 3 | 2019-04-03T13:22:47.000Z | 2019-12-02T15:49:31.000Z | aquascope/tests/aquascope/webserver/api/test_items.py | MicroscopeIT/aquascope_backend | 6b8c13ca3d6bd0a96f750fae809b6cf5a0062f24 | [
"MIT"
] | 2 | 2019-05-15T13:30:42.000Z | 2020-06-12T02:42:49.000Z | import copy
import math
import unittest
from unittest import mock
from flask import json
from aquascope.tests.aquascope.webserver.data_access.db.dummy_items import DUMMY_ITEMS, \
DUMMY_ITEMS_WITH_DEFAULT_PROJECTION
from aquascope.tests.flask_app_test_case import FlaskAppTestCase
from aquascope.webserver.data_access.db import Item
MONGO_CONNECTION_STRING = 'mongodb://example.server.com/aquascopedb'
class TestGetPagedItems(FlaskAppTestCase):
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_single_page(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
self.app.config['page_size'] = 2
request_data = {}
res = self.client().get('/items/paged', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[1]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
self.assertTrue('continuation_token' in response)
self.assertEqual(2, response['continuation_token'])
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_all_items_in_one_page(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
self.app.config['page_size'] = 500
request_data = {}
res = self.client().get('/items/paged', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = DUMMY_ITEMS_WITH_DEFAULT_PROJECTION
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
self.assertFalse('continuation_token' in response)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_requested_page(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
self.app.config['page_size'] = 2
request_data = {
'continuation_token': 2
}
res = self.client().get('/items/paged', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[2], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[3]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
self.assertTrue('continuation_token' in response)
self.assertEqual(3, response['continuation_token'])
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_last_page(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
self.app.config['page_size'] = 2
last_page_idx = math.ceil(len(DUMMY_ITEMS_WITH_DEFAULT_PROJECTION) / self.app.config['page_size'])
request_data = {
'continuation_token': last_page_idx
}
res = self.client().get('/items/paged', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items_start = (last_page_idx - 1) * self.app.config['page_size']
expected_items = DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[
expected_items_start:]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
self.assertFalse('continuation_token' in response)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_cant_get_negative_page(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
self.app.config['page_size'] = 2
request_data = {
'continuation_token': -10
}
res = self.client().get('/items/paged', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 400)
wrong_parameters = ['continuation_token']
res_wrong_parameters = [item['parameter'] for item in json.loads(res.data)["messages"]]
self.assertCountEqual(wrong_parameters, res_wrong_parameters)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_cant_get_zero_page(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
self.app.config['page_size'] = 2
request_data = {
'continuation_token': 0
}
res = self.client().get('/items/paged', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 400)
wrong_parameters = ['continuation_token']
res_wrong_parameters = [item['parameter'] for item in json.loads(res.data)["messages"]]
self.assertCountEqual(wrong_parameters, res_wrong_parameters)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_zero_items_because_page_is_too_far(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
self.app.config['page_size'] = 2
request_data = {
'continuation_token': 10
}
res = self.client().get('/items/paged', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = []
self.assertCountEqual(response['items'], expected_items)
self.assertFalse('continuation_token' in response)
class TestGetItems(FlaskAppTestCase):
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_by_eating(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'eating': True
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[1]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_by_taxonomy(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'empire': 'prokaryota'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [item.serializable() for item in DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[:4]]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_by_eating_list(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'eating': [True, '']
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[1],
DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[3], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[4]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_by_empty_species(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'species': ''
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[1], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[3]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
def test_api_cant_get_items_with_bad_argument(self):
with self.app.app_context():
request_data = {
'invalid_key': [True, '']
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 400)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_with_empty_request(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = DUMMY_ITEMS_WITH_DEFAULT_PROJECTION
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_with_acquisition_time_range(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'acquisition_time_start': '2019-01-07T18:06:34.151Z',
'acquisition_time_end': '2019-01-12T18:06:34.151Z'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[2]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
def test_api_get_emits_errors_for_all_wrong_parameters(self):
with self.app.app_context():
res = self.client().get('/items', query_string="eating=bar&multiple_species=foobar&eating=foo",
headers=self.headers)
wrong_parameters = ['eating.0', 'eating.1', 'multiple_species.0']
res_wrong_parameters = [item['parameter'] for item in json.loads(res.data)["messages"]]
self.assertCountEqual(wrong_parameters, res_wrong_parameters)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_with_given_field_last_modified_by_given_user(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'eating': [True, ''],
'modified_by': 'user1'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[3]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_with_given_fields_last_modified_by_given_user(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'eating': [True, ''],
'empire': 'prokaryota',
'modified_by': 'user1'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[3]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_with_given_fields_last_modified_by_given_user_that_have_single_match(self,
mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'eating': True,
'empire': 'prokaryota',
'modified_by': 'user1'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_zero_items_with_given_fields_last_modified_by_given_user_that_have_no_match(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'allman': False,
'eating': [True, ''],
'empire': 'prokaryota',
'modified_by': 'user1'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = []
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_with_any_field_last_modified_by_given_user(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
request_data = {
'modified_by': 'user2'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[1], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[3]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_with_any_field_last_modified_by_given_user_and_other_nonannotable_criteria(self,
mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
request_data = {
'acquisition_time_start': '2019-01-20T02:00:00.001Z',
'acquisition_time_end': '2019-01-20T12:06:34.151Z',
'modified_by': 'user2'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[1]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_get_zero_items_with_given_fields_last_modified_by_nonexisting_user(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'eating': [True, ''],
'empire': 'prokaryota',
'modified_by': 'nosuchuser1'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = []
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_get_zero_items_with_any_field_last_modified_by_nonexisting_user(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'modified_by': 'nosuchuser1'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = []
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_get_items_with_any_field_last_modified_by_null_user(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'modified_by': ''
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[2], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[4]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_with_given_field_and_modification_time_range(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'eating': [True, ''],
'modification_time_start': '2019-01-18T18:00:00.000Z',
'modification_time_end': '2019-01-25T18:00:00.000Z'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[1]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_with_given_user_and_modification_time_range(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'modified_by': 'user1',
'modification_time_start': '2019-01-18T18:00:00.000Z',
'modification_time_end': '2019-01-25T18:00:00.000Z'
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_by_tag(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'tags': ['dummy_tag_1']
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[1], DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[3]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_by_tag_and_regular_field(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'tags': ['sth'],
'eating': True
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_by_tags(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'tags': ['dummy_tag_1', 'dummy_tag_2']
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[1]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_items_by_empty_tags_list(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'tags': ''
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = [DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[4]]
expected_items = [item.serializable() for item in expected_items]
self.assertCountEqual(response['items'], expected_items)
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_get_no_items_by_nonexisting_tags(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
request_data = {
'tags': ['dummy_tag_1', 'dummy_tag_4']
}
res = self.client().get('/items', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
expected_items = []
self.assertCountEqual(response['items'], expected_items)
class TestPostItems(FlaskAppTestCase):
def test_api_can_post_update_pairs(self):
with self.app.app_context():
item0 = copy.deepcopy(DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0])
replace_item0 = copy.deepcopy(item0)
replace_item0.dead = True
item1 = copy.deepcopy(DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[1])
replace_item1 = copy.deepcopy(item1)
replace_item1.broken = True
request_data = json.dumps([
{
'current': item0.serializable(),
'update': replace_item0.serializable()
},
{
'current': item1.serializable(),
'update': replace_item1.serializable()
}
])
res = self.client().post('/items', data=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
self.assertEqual(response['matched'], 2)
self.assertEqual(response['modified'], 2)
    def test_api_cant_post_with_bad_argument(self):
item0 = copy.deepcopy(DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0])
replace_item0 = copy.deepcopy(item0)
replace_item0.dead = True
request_data = json.dumps([
{
'current': item0.serializable(),
'update': replace_item0.serializable(),
'dummy': 'value'
}
])
res = self.client().post('/items', data=request_data, headers=self.headers)
self.assertEqual(res.status_code, 400)
    def test_api_cant_post_with_bad_argument_type(self):
item0 = copy.deepcopy(DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0])
replace_item0 = copy.deepcopy(item0)
replace_item0.dead = 56
request_data = json.dumps([
{
'current': item0.serializable(),
'update': replace_item0.serializable()
}
])
res = self.client().post('/items', data=request_data, headers=self.headers)
self.assertEqual(res.status_code, 400)
def test_api_cant_post_with_empty_list(self):
request_data = json.dumps([])
res = self.client().post('/items', data=request_data, headers=self.headers)
self.assertEqual(res.status_code, 400)
def test_api_cant_post_with_empty_dict(self):
request_data = json.dumps({})
res = self.client().post('/items', data=request_data, headers=self.headers)
self.assertEqual(res.status_code, 400)
def test_api_post_emits_errors_for_all_wrong_parameters(self):
with self.app.app_context():
item0 = copy.deepcopy(DUMMY_ITEMS_WITH_DEFAULT_PROJECTION[0])
replace_item0 = copy.deepcopy(item0)
replace_item0.dead = 56
replace_item1 = copy.deepcopy(item0)
replace_item1.foo = "bar"
request_data = json.dumps([
{
'current': item0.serializable(),
'update': replace_item0.serializable()
},
{
'current': item0.serializable(),
'update': replace_item1.serializable()
},
])
res = self.client().post('/items', data=request_data, headers=self.headers)
expected_errors = [{'parameter': '0.update.dead', 'errors': ['Not a valid boolean.']},
{'parameter': '1.update.foo', 'errors': ['Unknown field.']}]
response_data = json.loads(res.data)["messages"]
self.assertCountEqual(expected_errors, response_data)
class TestItemsAnnotationFlow(FlaskAppTestCase):
@mock.patch('aquascope.webserver.data_access.storage.blob.make_blob_url')
def test_api_can_annotate_single_item(self, mock_make_blob_url):
mock_make_blob_url.return_value = 'mockedurl'
with self.app.app_context():
self.app.config['page_size'] = 5
request_data = {
'eating': True,
'tags': ['with_broken_records_field']
}
res = self.client().get('/items/paged', query_string=request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
item = response['items'][0]
changed_item = copy.deepcopy(item)
changed_item['eating'] = False
post_request_data = json.dumps([
{
'current': item,
'update': changed_item
},
])
res = self.client().post('/items', data=post_request_data, headers=self.headers)
self.assertEqual(res.status_code, 200)
response = res.json
self.assertEqual(response['matched'], 1)
self.assertEqual(response['modified'], 1)
if __name__ == '__main__':
unittest.main()
| 43.934328 | 127 | 0.642886 | 3,414 | 29,436 | 5.191564 | 0.056532 | 0.07188 | 0.055856 | 0.050779 | 0.922986 | 0.909614 | 0.903916 | 0.8906 | 0.885805 | 0.881968 | 0 | 0.016642 | 0.252854 | 29,436 | 669 | 128 | 44 | 0.789251 | 0 | 0 | 0.697936 | 0 | 0 | 0.135582 | 0.073855 | 0 | 0 | 0 | 0 | 0.148218 | 1 | 0.071295 | false | 0 | 0.015009 | 0 | 0.093809 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dade7097ac9af0ace9ecbe9dd4127f0fb3eb0099 | 71 | py | Python | dataset/__init__.py | sx14/ST-HOID-helper | f0822307fe03548c92dc1e2ef80bb738ed0bd3f5 | [
"MIT"
] | null | null | null | dataset/__init__.py | sx14/ST-HOID-helper | f0822307fe03548c92dc1e2ef80bb738ed0bd3f5 | [
"MIT"
] | null | null | null | dataset/__init__.py | sx14/ST-HOID-helper | f0822307fe03548c92dc1e2ef80bb738ed0bd3f5 | [
"MIT"
] | null | null | null | from .vidvrd_hoid import VidVRD_HOID
from .vidor_hoid import VidOR_HOID | 35.5 | 36 | 0.873239 | 12 | 71 | 4.833333 | 0.416667 | 0.344828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098592 | 71 | 2 | 37 | 35.5 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
daebf7b8200872fcc1d1c0f89dd70172fe4fb29b | 259 | py | Python | dcodex_bible/views.py | rbturnbull/dcodex_bible | 7745726867bdc556b3de5505601bbb881d420477 | [
"Apache-2.0"
] | null | null | null | dcodex_bible/views.py | rbturnbull/dcodex_bible | 7745726867bdc556b3de5505601bbb881d420477 | [
"Apache-2.0"
] | 9 | 2021-04-08T20:32:39.000Z | 2022-03-12T01:06:09.000Z | dcodex_bible/views.py | rbturnbull/dcodex_bible | 7745726867bdc556b3de5505601bbb881d420477 | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render
from django.contrib.auth.decorators import login_required
from django.shortcuts import get_object_or_404, render
from django.template import loader
from django.http import HttpResponse
from dcodex.models import Manuscript
| 32.375 | 57 | 0.864865 | 37 | 259 | 5.945946 | 0.567568 | 0.227273 | 0.172727 | 0.227273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012876 | 0.100386 | 259 | 7 | 58 | 37 | 0.93133 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9738fc2f344767e5ce41b476e1b541718d6589ae | 32,804 | py | Python | teospy/liqair4b.py | jarethholt/teospy | 3bb23e67bbb765c0842aa8d4a73c1d55ea395d2f | [
"MIT"
] | null | null | null | teospy/liqair4b.py | jarethholt/teospy | 3bb23e67bbb765c0842aa8d4a73c1d55ea395d2f | [
"MIT"
] | null | null | null | teospy/liqair4b.py | jarethholt/teospy | 3bb23e67bbb765c0842aa8d4a73c1d55ea395d2f | [
"MIT"
] | null | null | null | """Wet air Gibbs energy and related properties.
This module provides the Gibbs function for liquid water-saturated (wet)
air and related thermodynamic quantities. The primary variables are the
total dry air fraction, temperature, and pressure. The 'total' fraction
here is the mass fraction of dry air in the total parcel (including
liquid) and uses the variable ``wair``. The dry air mass fraction in
humid air uses the variable ``airf``.
:Examples:
>>> liqair_g(0,0,0,0.5,300.,1e5)
-5396.77820137
>>> liqair_g(0,0,1,0.5,300.,1e5)
0.446729465555
>>> liqair_g(0,1,1,0.5,300.,1e5)
2.45335972867e-03
>>> cp(0.5,300.,1e5)
4267.95671050
>>> expansion(0.5,300.,1e5)
5.49182428703e-03
>>> lapserate(0.5,300.,1e5)
1.72449715057e-04
:Functions:
* :func:`liqair_g`: Wet air Gibbs free energy with derivatives.
* :func:`cp`: Wet air isobaric heat capacity.
* :func:`density`: Wet air density.
* :func:`enthalpy`: Wet air enthalpy.
* :func:`entropy`: Wet air entropy.
* :func:`expansion`: Wet air thermal expansion coefficient.
* :func:`kappa_t`: Wet air isothermal compressibility.
* :func:`lapserate`: Wet air adiabatic lapse rate.
* :func:`liquidfraction`: Total mass fraction of liquid water in wet
air.
* :func:`vapourfraction`: Total mass fraction of water vapour in wet
air.
"""
__all__ = ['liqair_g','cp','density','enthalpy','entropy','expansion','kappa_t',
'lapserate','liquidfraction','vapourfraction']
import warnings
import numpy
from teospy import constants0
from teospy import flu1
from teospy import air2
from teospy import flu2
from teospy import maths3
from teospy import air3a
from teospy import liqair4a
_CHKTOL = constants0.CHKTOL
_chkhumbnds = constants0.chkhumbnds
_chkflubnds = constants0.chkflubnds
_flu_f = flu1.flu_f
_air_f = air2.air_f
_air_eq_pressure = air2.eq_pressure
_air_eq_vappot = air2.eq_vappot
_flu_eq_pressure = flu2.eq_pressure
_flu_eq_chempot = flu2.eq_chempot
_newton = maths3.newton
_eq_atpe = liqair4a.eq_atpe
## Gibbs function
def liqair_g(drvw,drvt,drvp,wair,temp,pres,airf=None,dhum=None,
dliq=None,chkvals=False,chktol=_CHKTOL,airf0=None,dhum0=None,
dliq0=None,chkbnd=False,mathargs=None):
"""Calculate wet air Gibbs free energy with derivatives.
Calculate the specific Gibbs free energy of wet air or its
derivatives with respect to total dry air fraction, temperature,
and pressure.
:arg int drvw: Number of dry fraction derivatives.
:arg int drvt: Number of temperature derivatives.
:arg int drvp: Number of pressure derivatives.
:arg float wair: Total dry air fraction in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg airf: Dry air fraction in humid air in kg/kg.
:type airf: float or None
:arg dhum: Humid air density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dhum: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg airf0: Initial guess for the dry fraction in kg/kg. If None
        (default) then `liqair4a._approx_tp` is used.
:type airf0: float or None
:arg dhum0: Initial guess for the humid air density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dhum0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Gibbs free energy in units of
(J/kg) / (kg/kg)^drvw / K^drvt / Pa^drvp.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If air with the given parameters would be
unsaturated.
:Examples:
>>> liqair_g(0,0,0,0.5,300.,1e5)
-5396.77820137
>>> liqair_g(1,0,0,0.5,300.,1e5)
-263.4554912
>>> liqair_g(0,1,0,0.5,300.,1e5)
-343.783393872
>>> liqair_g(0,0,1,0.5,300.,1e5)
0.446729465555
>>> liqair_g(2,0,0,0.5,300.,1e5)
0.
>>> liqair_g(1,1,0,0.5,300.,1e5)
98.5580798842
>>> liqair_g(1,0,1,0.5,300.,1e5)
0.891452019991
>>> liqair_g(0,2,0,0.5,300.,1e5)
-14.2265223683
>>> liqair_g(0,1,1,0.5,300.,1e5)
2.45335972867e-03
>>> liqair_g(0,0,2,0.5,300.,1e5)
-4.62725155875e-06
"""
airf, __, __, dhum, dliq = _eq_atpe(temp=temp,pres=pres,airf=airf,
dhum=dhum,dliq=dliq,chkvals=chkvals,chktol=chktol,airf0=airf0,
dhum0=dhum0,dliq0=dliq0,chkbnd=chkbnd,mathargs=mathargs)
if airf <= wair:
warnmsg = 'Air with the given parameters is unsaturated'
warnings.warn(warnmsg,RuntimeWarning)
g = air3a.air_g(drvw,drvt,drvp,wair,temp,pres,dhum=dhum)
return g
w = wair / airf
# Simple derivative cases
if (drvw,drvt,drvp) == (0,0,0):
fh = _air_f(0,0,0,airf,temp,dhum)
fh_d = _air_f(0,0,1,airf,temp,dhum)
fl = _flu_f(0,0,temp,dliq)
fl_d = _flu_f(0,1,temp,dliq)
g = w*(fh + dhum*fh_d) + (1-w)*(fl + dliq*fl_d)
return g
elif (drvw,drvt,drvp) == (1,0,0):
fh_a = _air_f(1,0,0,airf,temp,dhum)
g_w = fh_a
return g_w
elif (drvw,drvt,drvp) == (0,1,0):
fh_t = _air_f(0,1,0,airf,temp,dhum)
fl_t = _flu_f(1,0,temp,dliq)
g_t = w*fh_t + (1-w)*fl_t
return g_t
elif (drvw,drvt,drvp) == (0,0,1):
g_p = w/dhum + (1-w)/dliq
return g_p
elif (drvw,drvt,drvp) == (2,0,0):
g_ww = 0.
return g_ww
elif (drvw,drvt,drvp) == (1,1,0):
fh_t = _air_f(0,1,0,airf,temp,dhum)
fl_t = _flu_f(1,0,temp,dliq)
g_wt = (fh_t - fl_t) / airf
return g_wt
elif (drvw,drvt,drvp) == (1,0,1):
g_wp = (dhum**(-1) - dliq**(-1)) / airf
return g_wp
# Higher-order derivatives require inversion
__, __, dlhs, drhs = liqair4a._diff_tp(airf,dhum,dliq,temp,pres)
ppg_x = drhs - dlhs
if (drvw,drvt,drvp) == (0,2,0):
ph_t = _air_eq_pressure(0,1,0,airf,temp,dhum)
pl_t = _flu_eq_pressure(1,0,temp,dliq)
muv_t = _air_eq_vappot(0,1,0,airf,temp,dhum)
gl_t = _flu_eq_chempot(1,0,temp,dliq)
ppg_t = numpy.array([ph_t,pl_t,muv_t-gl_t])
x_t = numpy.linalg.solve(ppg_x,-ppg_t)
fh_t = _air_f(0,1,0,airf,temp,dhum)
fh_at = _air_f(1,1,0,airf,temp,dhum)
fh_tt = _air_f(0,2,0,airf,temp,dhum)
fh_td = _air_f(0,1,1,airf,temp,dhum)
fl_t = _flu_f(1,0,temp,dliq)
fl_tt = _flu_f(2,0,temp,dliq)
fl_td = _flu_f(1,1,temp,dliq)
g_ta = -w/airf*(fh_t - airf*fh_at - fl_t)
g_th = w*fh_td
g_tl = (1-w)*fl_td
g_tx = numpy.array([g_ta,g_th,g_tl])
g_tt = w*fh_tt + (1-w)*fl_tt + g_tx.dot(x_t)
return g_tt
elif (drvw,drvt,drvp) == (0,1,1):
ppg_p = numpy.array([1.,1.,0.])
x_p = numpy.linalg.solve(ppg_x,ppg_p)
fh_t = _air_f(0,1,0,airf,temp,dhum)
fh_at = _air_f(1,1,0,airf,temp,dhum)
fh_td = _air_f(0,1,1,airf,temp,dhum)
fl_t = _flu_f(1,0,temp,dliq)
fl_td = _flu_f(1,1,temp,dliq)
g_ta = -w/airf*(fh_t - airf*fh_at - fl_t)
g_th = w*fh_td
g_tl = (1-w)*fl_td
g_tx = numpy.array([g_ta,g_th,g_tl])
g_tp = g_tx.dot(x_p)
return g_tp
elif (drvw,drvt,drvp) == (0,0,2):
ppg_p = numpy.array([1.,1.,0.])
x_p = numpy.linalg.solve(ppg_x,ppg_p)
g_pa = -w/airf*(dhum**(-1) - dliq**(-1))
g_ph = -w/dhum**2
g_pl = -(1-w)/dliq**2
g_px = numpy.array([g_pa,g_ph,g_pl])
g_pp = g_px.dot(x_p)
return g_pp
# Should not have made it this far!
errmsg = 'Derivatives {0} not recognized'.format((drvw,drvt,drvp))
raise ValueError(errmsg)
## Thermodynamic properties
def cp(wair,temp,pres,airf=None,dhum=None,dliq=None,chkvals=False,
chktol=_CHKTOL,airf0=None,dhum0=None,dliq0=None,chkbnd=False,
mathargs=None):
"""Calculate wet air isobaric heat capacity.
Calculate the isobaric heat capacity of wet air.
:arg float wair: Total dry air fraction in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg airf: Dry air fraction in humid air in kg/kg.
:type airf: float or None
:arg dhum: Humid air density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dhum: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg airf0: Initial guess for the dry fraction in kg/kg. If None
(default) then `liqair4a._approx_tp` is used.
:type airf0: float or None
:arg dhum0: Initial guess for the humid air density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dhum0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Heat capacity in J/kg/K.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If air with the given parameters would be
unsaturated.
:Examples:
>>> cp(0.5,300.,1e5)
4267.95671050
"""
g_tt = liqair_g(0,2,0,wair,temp,pres,airf=airf,dhum=dhum,dliq=dliq,
chkvals=chkvals,chktol=chktol,airf0=airf0,dhum0=dhum0,dliq0=dliq0,
chkbnd=chkbnd,mathargs=mathargs)
cp = -temp * g_tt
return cp
def density(wair,temp,pres,airf=None,dhum=None,dliq=None,chkvals=False,
chktol=_CHKTOL,airf0=None,dhum0=None,dliq0=None,chkbnd=False,
mathargs=None):
"""Calculate wet air density.
Calculate the density of wet air.
:arg float wair: Total dry air fraction in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg airf: Dry air fraction in humid air in kg/kg.
:type airf: float or None
:arg dhum: Humid air density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dhum: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg airf0: Initial guess for the dry fraction in kg/kg. If None
(default) then `liqair4a._approx_tp` is used.
:type airf0: float or None
:arg dhum0: Initial guess for the humid air density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dhum0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Density in kg/m3.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If air with the given parameters would be
unsaturated.
:Examples:
>>> density(0.5,300.,1e5)
2.23849125053
"""
g_p = liqair_g(0,0,1,wair,temp,pres,airf=airf,dhum=dhum,dliq=dliq,
chkvals=chkvals,chktol=chktol,airf0=airf0,dhum0=dhum0,dliq0=dliq0,
chkbnd=chkbnd,mathargs=mathargs)
dtot = g_p**(-1)
return dtot
def enthalpy(wair,temp,pres,airf=None,dhum=None,dliq=None,chkvals=False,
chktol=_CHKTOL,airf0=None,dhum0=None,dliq0=None,chkbnd=False,
mathargs=None):
"""Calculate wet air enthalpy.
Calculate the specific enthalpy of wet air.
:arg float wair: Total dry air fraction in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg airf: Dry air fraction in humid air in kg/kg.
:type airf: float or None
:arg dhum: Humid air density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dhum: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg airf0: Initial guess for the dry fraction in kg/kg. If None
(default) then `liqair4a._approx_tp` is used.
:type airf0: float or None
:arg dhum0: Initial guess for the humid air density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dhum0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Enthalpy in J/kg.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If air with the given parameters would be
unsaturated.
:Examples:
>>> enthalpy(0.5,300.,1e5)
97738.2399604
"""
airf, __, __, dhum, dliq = _eq_atpe(temp=temp,pres=pres,airf=airf,
dhum=dhum,dliq=dliq,chkvals=chkvals,chktol=chktol,airf0=airf0,
dhum0=dhum0,dliq0=dliq0,chkbnd=chkbnd,mathargs=mathargs)
if airf <= wair:
warnmsg = 'Air with the given parameters is unsaturated'
warnings.warn(warnmsg,RuntimeWarning)
h = air3b.enthalpy(wair,temp,pres,dhum0=dhum0,mathargs=mathargs)
return h
g = liqair_g(0,0,0,wair,temp,pres,airf=airf,dhum=dhum,dliq=dliq)
g_t = liqair_g(0,1,0,wair,temp,pres,airf=airf,dhum=dhum,dliq=dliq)
h = g - temp*g_t
return h
def entropy(wair,temp,pres,airf=None,dhum=None,dliq=None,chkvals=False,
chktol=_CHKTOL,airf0=None,dhum0=None,dliq0=None,chkbnd=False,
mathargs=None):
"""Calculate wet air entropy.
Calculate the specific entropy of wet air.
:arg float wair: Total dry air fraction in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg airf: Dry air fraction in humid air in kg/kg.
:type airf: float or None
:arg dhum: Humid air density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dhum: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg airf0: Initial guess for the dry fraction in kg/kg. If None
(default) then `liqair4a._approx_tp` is used.
:type airf0: float or None
:arg dhum0: Initial guess for the humid air density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dhum0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Entropy in J/kg/K.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If air with the given parameters would be
unsaturated.
:Examples:
>>> entropy(0.5,300.,1e5)
343.783393872
"""
g_t = liqair_g(0,1,0,wair,temp,pres,airf=airf,dhum=dhum,dliq=dliq,
chkvals=chkvals,chktol=chktol,airf0=airf0,dhum0=dhum0,dliq0=dliq0,
chkbnd=chkbnd,mathargs=mathargs)
s = -g_t
return s
def expansion(wair,temp,pres,airf=None,dhum=None,dliq=None,
chkvals=False,chktol=_CHKTOL,airf0=None,dhum0=None,dliq0=None,
chkbnd=False,mathargs=None):
"""Calculate wet air thermal expansion coefficient.
Calculate the thermal expansion coefficient of wet air.
:arg float wair: Total dry air fraction in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg airf: Dry air fraction in humid air in kg/kg.
:type airf: float or None
:arg dhum: Humid air density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dhum: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg airf0: Initial guess for the dry fraction in kg/kg. If None
(default) then `liqair4a._approx_tp` is used.
:type airf0: float or None
:arg dhum0: Initial guess for the humid air density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dhum0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Expansion coefficient in 1/K.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If air with the given parameters would be
unsaturated.
:Examples:
>>> expansion(0.5,300.,1e5)
5.49182428703e-03
"""
airf, __, __, dhum, dliq = _eq_atpe(temp=temp,pres=pres,airf=airf,
dhum=dhum,dliq=dliq,chkvals=chkvals,chktol=chktol,airf0=airf0,
dhum0=dhum0,dliq0=dliq0,chkbnd=chkbnd,mathargs=mathargs)
if airf <= wair:
warnmsg = 'Air with the given parameters is unsaturated'
warnings.warn(warnmsg,RuntimeWarning)
alpha = air3b.expansion(wair,temp,pres,dhum0=dhum0,mathargs=mathargs)
return alpha
g_p = liqair_g(0,0,1,wair,temp,pres,airf=airf,dhum=dhum,dliq=dliq)
g_tp = liqair_g(0,1,1,wair,temp,pres,airf=airf,dhum=dhum,dliq=dliq)
alpha = g_tp / g_p
return alpha
def kappa_t(wair,temp,pres,airf=None,dhum=None,dliq=None,chkvals=False,
chktol=_CHKTOL,airf0=None,dhum0=None,dliq0=None,chkbnd=False,
mathargs=None):
"""Calculate wet air isothermal compressibility.
Calculate the isothermal compressibility of wet air.
:arg float wair: Total dry air fraction in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg airf: Dry air fraction in humid air in kg/kg.
:type airf: float or None
:arg dhum: Humid air density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dhum: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg airf0: Initial guess for the dry fraction in kg/kg. If None
(default) then `liqair4a._approx_tp` is used.
:type airf0: float or None
:arg dhum0: Initial guess for the humid air density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dhum0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Compressibility in 1/Pa.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If air with the given parameters would be
unsaturated.
:Examples:
>>> kappa_t(0.5,300.,1e5)
1.03580621283e-05
"""
airf, __, __, dhum, dliq = _eq_atpe(temp=temp,pres=pres,airf=airf,
dhum=dhum,dliq=dliq,chkvals=chkvals,chktol=chktol,airf0=airf0,
dhum0=dhum0,dliq0=dliq0,chkbnd=chkbnd,mathargs=mathargs)
if airf <= wair:
warnmsg = 'Air with the given parameters is unsaturated'
warnings.warn(warnmsg,RuntimeWarning)
kappa = air3b.kappa_t(wair,temp,pres,dhum0=dhum0,mathargs=mathargs)
return kappa
g_p = liqair_g(0,0,1,wair,temp,pres,airf=airf,dhum=dhum,dliq=dliq)
g_pp = liqair_g(0,0,2,wair,temp,pres,airf=airf,dhum=dhum,dliq=dliq)
kappa = -g_pp / g_p
return kappa
def lapserate(wair,temp,pres,airf=None,dhum=None,dliq=None,
chkvals=False,chktol=_CHKTOL,airf0=None,dhum0=None,dliq0=None,
chkbnd=False,mathargs=None):
"""Calculate wet air adiabatic lapse rate.
Calculate the adiabatic lapse rate of wet air.
:arg float wair: Total dry air fraction in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg airf: Dry air fraction in humid air in kg/kg.
:type airf: float or None
:arg dhum: Humid air density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dhum: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg airf0: Initial guess for the dry fraction in kg/kg. If None
(default) then `liqair4a._approx_tp` is used.
:type airf0: float or None
:arg dhum0: Initial guess for the humid air density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dhum0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Lapse rate in K/Pa.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If air with the given parameters would be
unsaturated.
:Examples:
>>> lapserate(0.5,300.,1e5)
1.72449715057e-04
"""
airf, __, __, dhum, dliq = _eq_atpe(temp=temp,pres=pres,airf=airf,
dhum=dhum,dliq=dliq,chkvals=chkvals,chktol=chktol,airf0=airf0,
dhum0=dhum0,dliq0=dliq0,chkbnd=chkbnd,mathargs=mathargs)
if airf <= wair:
warnmsg = 'Air with the given parameters is unsaturated'
warnings.warn(warnmsg,RuntimeWarning)
gamma = air3b.lapserate(wair,temp,pres,dhum0=dhum0,mathargs=mathargs)
return gamma
g_tt = liqair_g(0,2,0,wair,temp,pres,airf=airf,dhum=dhum,dliq=dliq)
g_tp = liqair_g(0,1,1,wair,temp,pres,airf=airf,dhum=dhum,dliq=dliq)
gamma = -g_tp / g_tt
return gamma
def liquidfraction(wair,temp,pres,airf=None,dhum=None,dliq=None,
chkvals=False,chktol=_CHKTOL,airf0=None,dhum0=None,dliq0=None,
chkbnd=False,mathargs=None):
"""Calculate wet air liquid water fraction.
Calculate the mass fraction of liquid water in wet air.
:arg float wair: Total dry air fraction in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg airf: Dry air fraction in humid air in kg/kg.
:type airf: float or None
:arg dhum: Humid air density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dhum: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg airf0: Initial guess for the dry fraction in kg/kg. If None
(default) then `liqair4a._approx_tp` is used.
:type airf0: float or None
:arg dhum0: Initial guess for the humid air density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dhum0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Liquid water mass fraction in kg/kg.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If air with the given parameters would be
unsaturated.
:Examples:
>>> liquidfraction(0.5,300.,1e5)
0.488546404734
"""
airf, __, __, dhum, dliq = _eq_atpe(temp=temp,pres=pres,airf=airf,
dhum=dhum,dliq=dliq,chkvals=chkvals,chktol=chktol,airf0=airf0,
dhum0=dhum0,dliq0=dliq0,chkbnd=chkbnd,mathargs=mathargs)
if airf <= wair:
warnmsg = 'Air with the given parameters is unsaturated'
warnings.warn(warnmsg,RuntimeWarning)
wliq = max(1 - wair/airf, 0.)
return wliq
def vapourfraction(wair,temp,pres,airf=None,dhum=None,dliq=None,
chkvals=False,chktol=_CHKTOL,airf0=None,dhum0=None,dliq0=None,
chkbnd=False,mathargs=None):
"""Calculate wet air vapour fraction.
Calculate the mass fraction of water vapour in wet air.
:arg float wair: Total dry air fraction in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg airf: Dry air fraction in humid air in kg/kg.
:type airf: float or None
:arg dhum: Humid air density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dhum: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg airf0: Initial guess for the dry fraction in kg/kg. If None
(default) then `liqair4a._approx_tp` is used.
:type airf0: float or None
:arg dhum0: Initial guess for the humid air density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dhum0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `liqair4a._approx_tp` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Water vapour mass fraction in kg/kg.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If air with the given parameters would be
unsaturated.
:Examples:
>>> vapourfraction(0.5,300.,1e5)
1.14535952655e-2
"""
airf, __, __, dhum, dliq = _eq_atpe(temp=temp,pres=pres,airf=airf,
dhum=dhum,dliq=dliq,chkvals=chkvals,chktol=chktol,airf0=airf0,
dhum0=dhum0,dliq0=dliq0,chkbnd=chkbnd,mathargs=mathargs)
if airf <= wair:
warnmsg = 'Air with the given parameters is unsaturated'
warnings.warn(warnmsg,RuntimeWarning)
wvap = min(wair * (1-airf)/airf, 1-wair)
return wvap
# --- __init__.py (sean512/pytoolkit, MIT license) ---
] | null | null | null | # pylint: skip-file
from .pytoolkit import *
# --- tests/test_knapsack.py (zheng-gao/ez_code, MIT license) ---
from fixture.utils import equal_list
def test_knapsack_with_limited_items():
capacity, sizes, values = 4, [3, 1, 4], [20, 15, 30]
# Limited, Not fill to capacity
benchmark_dp_table = [
[0, 0, 0, 20, 20],
[0, 15, 15, 20, 35],
[0, 15, 15, 20, 35]
]
benchmark_item_list = [
[[], [], [], [0], [0]],
[[], [1], [1], [0], [0, 1]],
[[], [1], [1], [0], [0, 1]]
]
dp_table, item_list = Knapsack.best_value_with_limited_items_2d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=False, iterate_sizes_first=True, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
dp_table, item_list = Knapsack.best_value_with_limited_items_2d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=False, iterate_sizes_first=False, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
dp_table, item_list = Knapsack.best_value_with_limited_items_1d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=False, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table[-1], dp_table) and equal_list(benchmark_item_list[-1], item_list)
# Limited, Fill to capacity
benchmark_dp_table = [
[0, float("-inf"), float("-inf"), 20, float("-inf")],
[0, 15, float("-inf"), 20, 35],
[0, 15, float("-inf"), 20, 35]
]
benchmark_item_list = [
[[], [], [], [0], []],
[[], [1], [], [0], [0, 1]],
[[], [1], [], [0], [0, 1]]
]
dp_table, item_list = Knapsack.best_value_with_limited_items_2d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=True, iterate_sizes_first=True, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
dp_table, item_list = Knapsack.best_value_with_limited_items_2d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=True, iterate_sizes_first=False, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
dp_table, item_list = Knapsack.best_value_with_limited_items_1d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=True, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table[-1], dp_table) and equal_list(benchmark_item_list[-1], item_list)
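The limited-items tables above can be cross-checked against a plain 0/1 knapsack DP. The following is an independent reference sketch, not part of ezcode; `knapsack_01_reference` is a hypothetical helper that reproduces the last row of the max-value benchmark (capacity 4, sizes [3, 1, 4], values [20, 15, 30]).

```python
def knapsack_01_reference(capacity, sizes, values):
    # 1D 0/1 knapsack: iterate capacities descending so each item is
    # counted at most once per update pass.
    dp = [0] * (capacity + 1)
    for size, value in zip(sizes, values):
        for c in range(capacity, size - 1, -1):
            dp[c] = max(dp[c], dp[c - size] + value)
    return dp

def test_knapsack_01_reference_sketch():
    # Matches the final row of benchmark_dp_table in the limited-items test.
    assert knapsack_01_reference(4, [3, 1, 4], [20, 15, 30]) == [0, 15, 15, 20, 35]
```

The descending capacity order is what distinguishes the limited (0/1) variant from the unbounded one, which iterates capacities ascending so an item may be reused.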
def test_knapsack_with_unlimited_items():
capacity, sizes, values = 4, [3, 1, 4], [20, 15, 30]
# Unlimited, Not fill to capacity
benchmark_dp_table = [
[ 0, 0, 0, 20, 20],
[ 0, 15, 30, 45, 60],
[ 0, 15, 30, 45, 60],
]
benchmark_item_list = [
[[], [], [], [0], [0]],
[[], [1], [1, 1], [1, 1, 1], [1, 1, 1, 1]],
[[], [1], [1, 1], [1, 1, 1], [1, 1, 1, 1]]
]
dp_table, item_list = Knapsack.best_value_with_unlimited_items_2d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=False, iterate_sizes_first=True, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
dp_table, item_list = Knapsack.best_value_with_unlimited_items_2d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=False, iterate_sizes_first=False, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
dp_table, item_list = Knapsack.best_value_with_unlimited_items_1d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=False, iterate_sizes_first=True, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table[-1], dp_table) and equal_list(benchmark_item_list[-1], item_list)
dp_table, item_list = Knapsack.best_value_with_unlimited_items_1d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=False, iterate_sizes_first=False, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table[-1], dp_table) and equal_list(benchmark_item_list[-1], item_list)
# Unlimited, Fill to capacity
benchmark_dp_table = [
[0, float("-inf"), float("-inf"), 20, float("-inf")],
[0, 15, 30, 45, 60],
[0, 15, 30, 45, 60]
]
benchmark_item_list = [
[[], [], [], [0], []],
[[], [1], [1, 1], [1, 1, 1], [1, 1, 1, 1]],
[[], [1], [1, 1], [1, 1, 1], [1, 1, 1, 1]]
]
dp_table, item_list = Knapsack.best_value_with_unlimited_items_2d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=True, iterate_sizes_first=True, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
dp_table, item_list = Knapsack.best_value_with_unlimited_items_2d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=True, iterate_sizes_first=False, output_dp_table=True, output_item_list=True
)
assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
dp_table, item_list = Knapsack.best_value_with_unlimited_items_1d(
capacity=capacity, sizes=sizes, values=values, min_max=max,
fill_to_capacity=True, iterate_sizes_first=True, output_dp_table=True, output_item_list=True
)
    assert equal_list(benchmark_dp_table[-1], dp_table) and equal_list(benchmark_item_list[-1], item_list)
    dp_table, item_list = Knapsack.best_value_with_unlimited_items_1d(
        capacity=capacity, sizes=sizes, values=values, min_max=max,
        fill_to_capacity=True, iterate_sizes_first=False, output_dp_table=True, output_item_list=True
    )
    assert equal_list(benchmark_dp_table[-1], dp_table) and equal_list(benchmark_item_list[-1], item_list)
    # Test min function
    capacity, sizes, values = 11, [5, 7], [1, 1]
    benchmark_dp_table = [
        [0, float("inf"), float("inf"), float("inf"), float("inf"), 1, float("inf"), float("inf"), float("inf"), float("inf"), 2, float("inf")],
        [0, float("inf"), float("inf"), float("inf"), float("inf"), 1, float("inf"), 1, float("inf"), float("inf"), 2, float("inf")],
    ]
    benchmark_item_list = [
        [[], [], [], [], [], [0], [], [], [], [], [0, 0], []],
        [[], [], [], [], [], [0], [], [1], [], [], [0, 0], []]
    ]
    dp_table, item_list = Knapsack.best_value_with_unlimited_items_2d(
        capacity=capacity, sizes=sizes, values=values, min_max=min,
        fill_to_capacity=True, iterate_sizes_first=True, output_dp_table=True, output_item_list=True
    )
    assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
    dp_table, item_list = Knapsack.best_value_with_unlimited_items_2d(
        capacity=capacity, sizes=sizes, values=values, min_max=min,
        fill_to_capacity=True, iterate_sizes_first=False, output_dp_table=True, output_item_list=True
    )
    assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
    dp_table, item_list = Knapsack.best_value_with_unlimited_items_1d(
        capacity=capacity, sizes=sizes, values=values, min_max=min,
        fill_to_capacity=True, iterate_sizes_first=True, output_dp_table=True, output_item_list=True
    )
    assert equal_list(benchmark_dp_table[-1], dp_table) and equal_list(benchmark_item_list[-1], item_list)
    dp_table, item_list = Knapsack.best_value_with_unlimited_items_1d(
        capacity=capacity, sizes=sizes, values=values, min_max=min,
        fill_to_capacity=True, iterate_sizes_first=False, output_dp_table=True, output_item_list=True
    )
    assert equal_list(benchmark_dp_table[-1], dp_table) and equal_list(benchmark_item_list[-1], item_list)


def test_number_of_ways_to_fill_to_capacity():
    assert 497097 == Knapsack.number_of_ways_to_fill_to_capacity_with_unlimited_items_2d(
        256, [1, 2, 4, 8, 16, 32], output_dp_table=False, output_item_list=False)
    C = 7
    S = [2, 3, 6, 7]
    # Unlimited
    benchmark_dp_table = [
        [1, 0, 1, 0, 1, 0, 1, 0],
        [1, 0, 1, 1, 1, 1, 2, 1],
        [1, 0, 1, 1, 1, 1, 3, 1],
        [1, 0, 1, 1, 1, 1, 3, 2],
    ]
    benchmark_item_list = [
        [ [[]], None, [[0]], None, [[0, 0]], None, [[0, 0, 0]], None ],
        [ [[]], None, [[0]], [[1]], [[0, 0]], [[0, 1]], [[0, 0, 0], [1, 1]], [[0, 0, 1]] ],
        [ [[]], None, [[0]], [[1]], [[0, 0]], [[0, 1]], [[0, 0, 0], [1, 1], [2]], [[0, 0, 1]] ],
        [ [[]], None, [[0]], [[1]], [[0, 0]], [[0, 1]], [[0, 0, 0], [1, 1], [2]], [[0, 0, 1], [3]] ]
    ]
    dp_table, item_list = Knapsack.number_of_ways_to_fill_to_capacity_with_unlimited_items_2d(
        capacity=C, sizes=S, output_dp_table=True, output_item_list=True)
    assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
    dp_table, item_list = Knapsack.number_of_ways_to_fill_to_capacity_with_unlimited_items_1d(
        capacity=C, sizes=S, output_dp_table=True, output_item_list=True)
    assert equal_list(benchmark_dp_table[-1], dp_table) and equal_list(benchmark_item_list[-1], item_list)
    # Limited
    benchmark_dp_table = [
        [1, 0, 1, 0, 0, 0, 0, 0],
        [1, 0, 1, 1, 0, 1, 0, 0],
        [1, 0, 1, 1, 0, 1, 1, 0],
        [1, 0, 1, 1, 0, 1, 1, 1]
    ]
    benchmark_item_list = [
        [ [[]], None, [[0]], None, None, None, None, None ],
        [ [[]], None, [[0]], [[1]], None, [[0, 1]], None, None ],
        [ [[]], None, [[0]], [[1]], None, [[0, 1]], [[2]], None ],
        [ [[]], None, [[0]], [[1]], None, [[0, 1]], [[2]], [[3]] ],
    ]
    dp_table, item_list = Knapsack.number_of_ways_to_fill_to_capacity_with_limited_items_2d(
        capacity=C, sizes=S, output_dp_table=True, output_item_list=True)
    assert equal_list(benchmark_dp_table, dp_table) and equal_list(benchmark_item_list, item_list)
    dp_table, item_list = Knapsack.number_of_ways_to_fill_to_capacity_with_limited_items_1d(
        capacity=C, sizes=S, output_dp_table=True, output_item_list=True)
    assert equal_list(benchmark_dp_table[-1], dp_table) and equal_list(benchmark_item_list[-1], item_list)
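# The 1D recurrences these assertions exercise can be sketched in a few lines.
# Iterating sizes in the outer loop counts unordered combinations (the
# unlimited case), while scanning the capacity downward restricts every item
# to a single use (the limited case). The helper names below are illustrative
# and are not part of the `Knapsack` class:

```python
def count_fills_unlimited(capacity, sizes):
    # dp[t] = number of unordered multisets of sizes summing exactly to t
    dp = [1] + [0] * capacity
    for size in sizes:                      # sizes outermost -> combinations
        for t in range(size, capacity + 1):
            dp[t] += dp[t - size]
    return dp


def count_fills_limited(capacity, sizes):
    # 0/1 variant: scan the capacity downward so each item is used at most once
    dp = [1] + [0] * capacity
    for size in sizes:
        for t in range(capacity, size - 1, -1):
            dp[t] += dp[t - size]
    return dp


print(count_fills_unlimited(7, [2, 3, 6, 7]))  # [1, 0, 1, 1, 1, 1, 3, 2]
print(count_fills_limited(7, [2, 3, 6, 7]))    # [1, 0, 1, 1, 0, 1, 1, 1]
```

# Each pass over one more size reproduces the next row of the 2D benchmark
# tables above; the final rows match the 1D results.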
def test_best_value():
    C = 10
    S = [2, 3, 5, 7]
    V = [1, 5, 2, 4]
    Q = [1, 1, 1, 1]
    assert equal_list(
        list(Knapsack.best_value(capacity=C, sizes=S, values=V, quantities=Q, min_max=max, fill_to_capacity=False)),
        [9, [1, 3]]
    )
    C = 62
    S = [4, 20, 8, 3, 9, 1, 13, 15, 6, 12, 2, 8, 5, 11, 13, 14, 6, 15, 2, 5, 14, 13, 14, 4, 3, 13, 4, 9, 14, 3]
    V = [14, 79, 43, 115, 94, 128, 140, 95, 112, 167, 57, 106, 20, 109, 194, 176, 41, 51, 178, 80, 86, 169, 157, 131, 33, 15, 110, 184, 64, 84]
    Q = [16, 1, 19, 13, 1, 6, 16, 15, 19, 15, 4, 1, 4, 8, 14, 9, 1, 3, 18, 17, 17, 15, 7, 15, 14, 16, 15, 18, 17, 14]
    assert equal_list(
        list(Knapsack.best_value(capacity=C, sizes=S, values=V, quantities=Q, min_max=max, fill_to_capacity=False)),
        [4719, [3, 3, 3, 3, 3, 3, 5, 5, 5, 5, 5, 5, 10, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18]]
    )
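# The first expected optimum above (value 9 from items 1 and 3: sizes 3 + 7 = 10,
# values 5 + 4 = 9) can be reproduced with the textbook 0/1 max-value recurrence.
# `best_value_01` is a minimal sketch, not the `Knapsack.best_value`
# implementation (which also recovers the chosen items and supports quantities):

```python
def best_value_01(capacity, sizes, values):
    # dp[t] = best value achievable with total size <= t, each item used once
    dp = [0] * (capacity + 1)
    for s, v in zip(sizes, values):
        for t in range(capacity, s - 1, -1):  # downward scan keeps 0/1 semantics
            dp[t] = max(dp[t], dp[t - s] + v)
    return dp[capacity]


print(best_value_01(10, [2, 3, 5, 7], [1, 5, 2, 4]))  # 9
```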
# File: pyUTM/common.py | repo: umd-lhcb/pyUTM | license: BSD-2-Clause
#!/usr/bin/env python
#
# License: BSD 2-clause
# Last Change: Thu Dec 17, 2020 at 02:52 AM +0100
from collections import defaultdict
#############
# Constants #
#############
# Table for swapping JD# from Proto (right side) to True (left side)
#
# NOTE: The right and left sides are defined correctly. These should be interpreted as:
#       the key (left) being the connector name on the True, and the value (right)
#       being the connector name on the Proto.
#
# In other words: The <key> on True is the <value> on Proto
jd_swapping_true = {
'JD0': 'JD0',
'JD1': 'JD4',
'JD2': 'JD2',
'JD3': 'JD3',
'JD4': 'JD1',
'JD5': 'JD5',
'JD6': 'JD6',
'JD7': 'JD8',
'JD8': 'JD7',
'JD9': 'JD9',
'JD10': 'JD10',
'JD11': 'JD11'
}
# Table for swapping JD# from Proto (right side) to Mirror (left side)
jd_swapping_mirror = {
'JD0': 'JD4',
'JD1': 'JD0',
'JD2': 'JD3',
'JD3': 'JD2',
'JD4': 'JD5',
'JD5': 'JD1',
'JD6': 'JD8',
'JD7': 'JD6',
'JD8': 'JD9',
'JD9': 'JD7',
'JD10': 'JD11',
'JD11': 'JD10'
}
# Table for translating Proto JP# to DataFlex identifier on any True/Mirror BP
jp_flex_type_proto = {
'JP0': 'X-0-M',
'JP1': 'X-0-S',
'JP2': 'S-0-S',
'JP3': 'S-0-M',
'JP4': 'X-1-M',
'JP5': 'X-1-S',
'JP6': 'S-1-S',
'JP7': 'S-1-M',
'JP8': 'X-2-M',
'JP9': 'X-2-S',
'JP10': 'S-2-S',
'JP11': 'S-2-M',
}
# NOTE: This comment is correct
# Table for translating JP# from Proto (RIGHT side) to Mirror (LEFT side)
# This may seem weird at first, but this mapping allows us to do something like
# this:
# mirror_mapping = {jp: true_mapping[jp_swapping_mirror[jp]]
# for jp in true_mapping}
jp_swapping_mirror = {
'JP0': 'JP2',
'JP1': 'JP3',
'JP2': 'JP0',
'JP3': 'JP1',
'JP4': 'JP6',
'JP5': 'JP7',
'JP6': 'JP4',
'JP7': 'JP5',
'JP8': 'JP10',
'JP9': 'JP11',
'JP10': 'JP8',
'JP11': 'JP9',
}
# 'False' -> no depopulation
# 'True' -> depopulation in P/D type BPs
# 'None' -> doesn't exist in all variants
# Straight from:
# https://github.com/ZishuoYang/UT-Backplane-mapping/issues/59
jp_depop_true = {
'JP0': {'P1W': False, 'P1E': True, 'P2W': None, 'P2E': False, 'P3': False, 'P4': False},
'JP1': {'P1W': False, 'P1E': True, 'P2W': True, 'P2E': False, 'P3': False, 'P4': True},
'JP2': {'P1W': False, 'P1E': True, 'P2W': True, 'P2E': False, 'P3': False, 'P4': True},
'JP3': {'P1W': False, 'P1E': True, 'P2W': None, 'P2E': False, 'P3': False, 'P4': False},
'JP4': {'P1W': False, 'P1E': True, 'P2W': None, 'P2E': False, 'P3': False, 'P4': False},
'JP5': {'P1W': False, 'P1E': True, 'P2W': None, 'P2E': False, 'P3': False, 'P4': None},
'JP6': {'P1W': False, 'P1E': True, 'P2W': None, 'P2E': False, 'P3': False, 'P4': None},
'JP7': {'P1W': False, 'P1E': True, 'P2W': None, 'P2E': False, 'P3': False, 'P4': False},
'JP8': {'P1W': False, 'P1E': None, 'P2W': None, 'P2E': False, 'P3': False, 'P4': False},
'JP9': {'P1W': False, 'P1E': None, 'P2W': None, 'P2E': False, 'P3': False, 'P4': None},
'JP10': {'P1W': False, 'P1E': None, 'P2W': None, 'P2E': False, 'P3': False, 'P4': None},
'JP11': {'P1W': False, 'P1E': None, 'P2W': None, 'P2E': False, 'P3': False, 'P4': False}
}
jp_depop_mirror = {jp: jp_depop_true[jp_swapping_mirror[jp]]
                   for jp in jp_depop_true.keys()}
# 'False' -> no depopulation
# 'True' -> depopulation
jd_depop = {
'JD0': {'F': False, 'P': False, 'D': False},
'JD1': {'F': False, 'P': False, 'D': False},
'JD2': {'F': False, 'P': True, 'D': True},
'JD3': {'F': False, 'P': True, 'D': True},
'JD4': {'F': False, 'P': False, 'D': False},
'JD5': {'F': False, 'P': False, 'D': False},
'JD6': {'F': False, 'P': False, 'D': False},
'JD7': {'F': False, 'P': False, 'D': False},
'JD8': {'F': False, 'P': False, 'D': False},
'JD9': {'F': False, 'P': False, 'D': False},
'JD10': {'F': False, 'P': False, 'D': True},
'JD11': {'F': False, 'P': False, 'D': True},
}
all_pepis = {
# For true-type PEPIs
'Magnet-Top-C': [
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_1C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_1C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_2C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_2C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_3C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_3C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_4C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_4C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_5C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_5C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_6C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_6C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_7C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_7C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_8C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_8C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_9C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_9C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
],
'Magnet-Bottom-A': [
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_1A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_1A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_2A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_2A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_3A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_3A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_4A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_4A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_5A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_5A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_6A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_6A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_7A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_7A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_8A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_8A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_9A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_9A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 't'},
],
'IP-Top-A': [
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_1A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_1A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_2A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_2A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-2', 'stv_ut': 'UTaX_3A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-2', 'stv_ut': 'UTaU_3A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_4A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_4A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_5A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_5A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-2', 'stv_ut': 'UTaX_6A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-2', 'stv_ut': 'UTaU_6A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_7A', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_7A', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_8A', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_8A', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 't'},
],
'IP-Bottom-C': [
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_1C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_1C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_2C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_2C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-2', 'stv_ut': 'UTaX_3C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'S-2', 'stv_ut': 'UTaU_3C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 't'},
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_4C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_4C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_5C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_5C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-2', 'stv_ut': 'UTaX_6C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'S-2', 'stv_ut': 'UTaU_6C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 't'},
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_7C', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_7C', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_8C', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 't'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_8C', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 't'},
],
    # Now for mirror-type PEPIs
'Magnet-Bottom-C': [
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_1C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_1C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_2C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_2C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_3C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_3C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_4C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_4C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_5C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_5C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_6C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_6C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_7C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_7C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_8C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_8C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_9C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_9C', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
],
'Magnet-Top-A': [
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_1A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_1A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_2A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_2A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_3A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_3A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_4A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_4A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_5A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_5A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_6A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_6A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-0', 'stv_ut': 'UTbX_7A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTbV_7A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTbX_8A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTbV_8A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'X-2', 'stv_ut': 'UTbX_9A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'S-2', 'stv_ut': 'UTbV_9A', 'bp_var': 'beta', 'bp_idx': 'outer', 'bp_type': 'm'},
],
'IP-Bottom-A': [
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_1A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_1A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_2A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_2A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-2', 'stv_ut': 'UTaX_3A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-2', 'stv_ut': 'UTaU_3A', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_4A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_4A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_5A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_5A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-2', 'stv_ut': 'UTaX_6A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-2', 'stv_ut': 'UTaU_6A', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_7A', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_7A', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_8A', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_8A', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 'm'},
],
'IP-Top-C': [
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_1C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_1C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_2C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_2C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-2', 'stv_ut': 'UTaX_3C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'S-2', 'stv_ut': 'UTaU_3C', 'bp_var': 'alpha', 'bp_idx': 'inner', 'bp_type': 'm'},
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_4C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_4C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_5C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_5C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-2', 'stv_ut': 'UTaX_6C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'S-2', 'stv_ut': 'UTaU_6C', 'bp_var': 'beta', 'bp_idx': 'middle', 'bp_type': 'm'},
{'stv_bp': 'X-0', 'stv_ut': 'UTaX_7C', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'S-0', 'stv_ut': 'UTaU_7C', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'X-1', 'stv_ut': 'UTaX_8C', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 'm'},
{'stv_bp': 'S-1', 'stv_ut': 'UTaU_8C', 'bp_var': 'gamma', 'bp_idx': 'outer', 'bp_type': 'm'},
]
}
#############################
# For YAML/Excel conversion #
#############################
# Transpose a list of dictionaries (all sharing the same keys) into a
# dictionary of lists, column-wise
def transpose(lst):
    result = defaultdict(list)
    for d in lst:
        for k in d.keys():
            result[k].append(d[k])
    return dict(result)


def unpack_one_elem_dict(d):
    """Return the single (key, value) pair of a one-element dictionary."""
    return tuple(d.items())[0]


def flatten(lst, header='PlaceHolder'):
    """Fold the key of each one-element dict into its value dict under 'header'."""
    result = []
    for d in lst:
        key, value = unpack_one_elem_dict(d)
        value[header] = key
        result.append(value)
    return result


def flatten_more(d, header='PlaceHolder'):
    """Flatten a dict of lists of dicts, tagging each entry with its key."""
    result = []
    for k, items in d.items():
        for i in items:
            i[header] = k
            result.append(i)
    return result


def unflatten(lst, header):
    """Inverse of flatten: pop 'header' from each dict and use it as the key."""
    result = []
    for d in lst:
        key = d[header]
        del d[header]
        result.append({key: d})
    return result


def unflatten_all(d, header):
    result = defaultdict(dict)
    for k, items in d.items():
        for i in unflatten(items, header):
            pin, prop = unpack_one_elem_dict(i)
            result[k][pin] = prop
    return result


def collect_terms(d, filter_function):
    return {k: d[k] for k in filter_function(d)}


###########
# Helpers #
###########
def split_netname(netname, num_of_split=2):
    conn1, conn2, signal_id = netname.split('_', num_of_split)
    return [conn1, conn2, signal_id]
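# A quick usage sketch of the conversion helpers (definitions are repeated so
# the snippet stands alone; the sample rows and netname are made up):

```python
from collections import defaultdict


def transpose(lst):
    # fold a list of same-keyed dicts into one dict of lists, column-wise
    result = defaultdict(list)
    for d in lst:
        for k in d.keys():
            result[k].append(d[k])
    return dict(result)


def split_netname(netname, num_of_split=2):
    conn1, conn2, signal_id = netname.split('_', num_of_split)
    return [conn1, conn2, signal_id]


rows = [{'stv_bp': 'X-0', 'stv_ut': 'UTbX_1C'},
        {'stv_bp': 'S-0', 'stv_ut': 'UTbV_1C'}]
print(transpose(rows))
# {'stv_bp': ['X-0', 'S-0'], 'stv_ut': ['UTbX_1C', 'UTbV_1C']}
print(split_netname('JD0_JP1_DC1_DATA'))
# ['JD0', 'JP1', 'DC1_DATA']  (everything past the second '_' stays together)
```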
# File: tests/test_models_acv.py | repo: aistats2022exp/AccurateShapleyValues | license: MIT
from acv_explainers import ACVTree
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
import random
import numpy as np
import pytest
import sklearn
import sklearn.pipeline
import shap
random.seed(2021)
def test_xgboost_binary():
xgboost = pytest.importorskip('xgboost')
X_train, X_test, Y_train, _ = sklearn.model_selection.train_test_split(*shap.datasets.adult(),
test_size=0.2,
random_state=0)
models = [
xgboost.XGBClassifier()
]
for model in models:
model.fit(X_train.values, Y_train)
acvtree = ACVTree(model, X_train.values)
x = X_train.values[:10]
shap_values = acvtree.shap_values(x, C=[[]])
odd_means = np.mean(acvtree.predict(X_train.values), axis=0)
odd_pred = acvtree.predict(x)
assert np.allclose(np.sum(shap_values, axis=1).reshape(-1), odd_pred - odd_means, atol=1e-5)
def test_lightgbm_binary():
lightgbm = pytest.importorskip("lightgbm")
# train lightgbm model
X_train, X_test, Y_train, _ = sklearn.model_selection.train_test_split(*shap.datasets.adult(),
test_size=0.2,
random_state=0)
model = lightgbm.sklearn.LGBMClassifier(max_depth=6)
model.fit(X_train.values, Y_train)
acvtree = ACVTree(model, X_train.values)
x = X_train.values[:10]
shap_values = acvtree.shap_values(x, C=[[]])
odd_means = np.mean(acvtree.predict(X_train.values), axis=0)
odd_pred = acvtree.predict(x)
assert np.allclose(np.sum(shap_values, axis=1).reshape(-1), odd_pred - odd_means, atol=1e-5)
def test_catboost_binary():
catboost = pytest.importorskip("catboost")
max_features = 15
X, y = sklearn.datasets.load_breast_cancer(return_X_y=True)
model = catboost.CatBoostClassifier(iterations=10, learning_rate=0.5, random_seed=12, max_depth=6)
model.fit(
X[:, :max_features],
y,
verbose=False,
plot=False
)
X = X[:, :max_features]
acvtree = ACVTree(model, X)
x = X[:10]
shap_values = acvtree.shap_values(x, C=[[]])
y_pred = acvtree.predict(x)
exp = np.mean(acvtree.predict(X))
assert np.allclose(np.sum(shap_values, axis=1).reshape(-1), y_pred - exp)
def test_xgboost_multiclass():
xgboost = pytest.importorskip('xgboost')
np.random.seed(2021)
X, y = shap.datasets.iris()
X = X.values
model = xgboost.XGBClassifier()
model.fit(X, y)
acvtree = ACVTree(model, X)
x = X[:10]
shap_values = acvtree.shap_values_nopa(x, C=[[]])
y_pred = acvtree.predict(x)
exp = np.mean(acvtree.predict(X), axis=0)
assert np.allclose(np.sum(shap_values, axis=1), y_pred - exp)
def test_xgboost_regressor():
xgboost = pytest.importorskip('xgboost')
np.random.seed(2021)
X, y = shap.datasets.boston()
X = X.values
model = xgboost.XGBRegressor()
model.fit(X, y)
acvtree = ACVTree(model, X)
x = X[:10]
shap_values = acvtree.shap_values_nopa(x, C=[[]])
y_pred = acvtree.predict(x)
exp = np.mean(acvtree.predict(X))
assert np.allclose(np.sum(shap_values, axis=1).reshape(-1), y_pred - exp)
#
def test_catboost_regressor_multiclass():
catboost = pytest.importorskip("catboost")
# train catboost model
# X, y = shap.datasets.boston()
# X.drop(["RAD"], axis=1, inplace=True)
# # X["RAD"] = X["RAD"].astype(np.double)
# X = X.values
# model = catboost.CatBoostRegressor(iterations=30, learning_rate=0.1, random_seed=123)
# p = catboost.Pool(X, y)
# model.fit(p, verbose=False, plot=False)
#
# acvtree = ACVTree(model, X)
# y_pred = acvtree.predict(X)
# exp = np.mean(acvtree.predict(X), axis=0)
#
# shap_values = acvtree.shap_values(X, C=[[]])
#
# assert np.allclose(np.sum(shap_values, axis=1).reshape(-1), y_pred - exp)
# explain the model's predictions using SHAP values
X, y = sklearn.datasets.load_breast_cancer(return_X_y=True)
model = catboost.CatBoostClassifier(iterations=10, learning_rate=0.5, random_seed=12)
model.fit(
X,
y,
verbose=False,
plot=False
)
acvtree = ACVTree(model, X)
y_pred = acvtree.predict(X)
exp = np.mean(acvtree.predict(X), axis=0)
shap_values = acvtree.shap_values(X, C=[[]])
assert np.allclose(np.sum(shap_values, axis=1).reshape(-1), y_pred - exp)
def test_lightgbm_regressor():
np.random.seed(2021)
X, y = shap.datasets.boston()
X = X.values
lightgbm = pytest.importorskip("lightgbm")
model = lightgbm.sklearn.LGBMRegressor(n_estimators=10)
model.fit(X, y)
acvtree = ACVTree(model, X)
x = X[:10]
shap_values = acvtree.shap_values(x, C=[[]])
y_pred = acvtree.predict(x)
exp = np.mean(acvtree.predict(X))
assert np.allclose(np.sum(shap_values, axis=1).reshape(-1), y_pred - exp)
#
#
def test_lightgbm_multiclass():
lightgbm = pytest.importorskip("lightgbm")
np.random.seed(2021)
X, y = shap.datasets.iris()
X = X.values
model = lightgbm.sklearn.LGBMClassifier(num_classes=3, objective="multiclass")
model.fit(X, y)
acvtree = ACVTree(model, X)
x = X[:10]
y_pred = acvtree.predict(x)
exp = np.mean(acvtree.predict(X), axis=0)
shap_values = acvtree.shap_values(x, C=[[]])
assert np.allclose(np.sum(shap_values, axis=1), y_pred - exp)
#
def test_sklearn_random_forest_multiclass():
np.random.seed(2021)
X, y = shap.datasets.iris()
X = X.values
model = sklearn.ensemble.RandomForestClassifier(n_estimators=10, max_depth=5,
min_samples_split=2,
random_state=0)
model.fit(X, y)
acvtree = ACVTree(model, X)
x = X[:10]
y_pred = acvtree.predict(x)
exp = np.mean(acvtree.predict(X), axis=0)
shap_values = acvtree.shap_values(x, C=[[]])
assert np.allclose(np.sum(shap_values, axis=1), y_pred - exp)
def test_sklearn_regressor():
np.random.seed(2021)
X, y = shap.datasets.boston()
X = X.values
models = [
sklearn.ensemble.RandomForestRegressor(n_estimators=10, max_depth=5),
sklearn.ensemble.ExtraTreesRegressor(n_estimators=10, max_depth=5),
]
for model in models:
model.fit(X, y)
acvtree = ACVTree(model, X)
x = X[:10]
shap_values = acvtree.shap_values(x, C=[[]])
y_pred = acvtree.predict(x)
exp = np.mean(acvtree.predict(X))
assert np.allclose(np.sum(shap_values, axis=1).reshape(-1), y_pred - exp)
def test_sklearn_binary():
X_train, X_test, Y_train, _ = sklearn.model_selection.train_test_split(*shap.datasets.adult(),
test_size=0.2,
random_state=0)
models = [
sklearn.ensemble.RandomForestClassifier(n_estimators=10, max_depth=5),
sklearn.ensemble.ExtraTreesClassifier(n_estimators=10, max_depth=5),
]
for model in models:
model.fit(X_train, Y_train)
acvtree = ACVTree(model, X_train.values)
x = X_train.values[:10]
shap_values = acvtree.shap_values(x, C=[[]])
y_pred = acvtree.predict(x)
exp = np.mean(acvtree.predict(X_train.values), axis=0)
assert np.allclose(np.sum(shap_values, axis=1), y_pred - exp)
| 29.208333 | 102 | 0.611075 | 1,023 | 7,711 | 4.441838 | 0.109482 | 0.081426 | 0.079225 | 0.052817 | 0.763644 | 0.75044 | 0.739877 | 0.735255 | 0.71919 | 0.690801 | 0 | 0.023264 | 0.258592 | 7,711 | 263 | 103 | 29.319392 | 0.771559 | 0.075088 | 0 | 0.721893 | 0 | 0 | 0.00999 | 0 | 0 | 0 | 0 | 0 | 0.065089 | 1 | 0.065089 | false | 0 | 0.094675 | 0 | 0.159763 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
97bc1919f90d2d96567fb9ca6552c232a576738f | 163 | py | Python | restfulpy/cli/__init__.py | mehrdad1373pedramfar/restfulpy | 19757dc485f5477cdbf2a7033cd1c7c79ef97647 | [
"MIT"
] | null | null | null | restfulpy/cli/__init__.py | mehrdad1373pedramfar/restfulpy | 19757dc485f5477cdbf2a7033cd1c7c79ef97647 | [
"MIT"
] | null | null | null | restfulpy/cli/__init__.py | mehrdad1373pedramfar/restfulpy | 19757dc485f5477cdbf2a7033cd1c7c79ef97647 | [
"MIT"
] | null | null | null |
from .launchers import Launcher, RequireSubCommand
from .progressbar import ProgressBar, LineReaderProgressBar
from .autocompletion import AutoCompletionLauncher
| 32.6 | 59 | 0.877301 | 14 | 163 | 10.214286 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092025 | 163 | 4 | 60 | 40.75 | 0.966216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# File: scripts/achived/modify and save batch scripts.py | repo: nmningmei/metacognition | license: MIT
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Wed Sep 19 13:38:04 2018
@author: nmei
"""
import pandas as pd
import os
working_dir = ''
batch_dir = 'batch'
if not os.path.exists(batch_dir):
    os.mkdir(batch_dir)
content = '''
#!/bin/bash
# This is a script to qsub jobs
#$ -cwd
#$ -o test_run/out_q.txt
#$ -e test_run/err_q.txt
#$ -m be
#$ -M nmei@bcbl.eu
#$ -N "qjob"
#$ -S /bin/bash
'''
with open(os.path.join(batch_dir,'qsub_jobs'),'w') as f:
    f.write(content)
df = pd.read_csv(os.path.join(working_dir,'../data/PoSdata.csv'))
df = df[df.columns[1:]]
df.columns = ['participant',
'blocks',
'trials',
'firstgabor',
'success',
'tilted',
'correct',
'RT_correct',
'awareness',
'RT_awareness',
'confidence',
'RT_confidence']
participants = pd.unique(df['participant'])
# estimate the experimental score
for participant in participants:
with open(os.path.join(batch_dir,'classifcation_pos_n_trials_back (experiment score) ({}).py'.format(participant)),'wb') as new_file:
with open('classifcation_pos_n_trials_back (experiment score).py','rb') as old_file:
for line in old_file:
new_file.write(line.replace("participant = 'AC'","participant = '{}'".format(participant)))
# estimator chance level score
for participant in participants:
with open(os.path.join(batch_dir,'classifcation_pos_n_trials_back (empirical chance level) ({}).py'.format(participant)),'wb') as new_file:
with open('classifcation_pos_n_trials_back (empirical chance level).py','rb') as old_file:
for line in old_file:
new_file.write(line.replace("participant = 'AC'","participant = '{}'".format(participant)))
content = """
#!/bin/bash
# This is a script to send classifcation_pos_n_trials_back (empirical chance level) ({}).py as a batch job.
# it works on dataset {}
#$ -cwd
#$ -o test_run/out_{}.txt
#$ -e test_run/err_{}.txt
#$ -m be
#$ -M nmei@bcbl.eu
#$ -N "pos_{}"
#$ -S /bin/bash
module load rocks-python-2.7
python "classifcation_pos_n_trials_back (experiment score) ({}).py"
python "classifcation_pos_n_trials_back (empirical chance level) ({}).py"
"""
for participant in participants:
with open(os.path.join(batch_dir,'model_comparison_pos_{}'.format(participant)),'w') as f:
f.write(content.format(participant,participant,participant,participant,participant,participant,participant))
with open(os.path.join(batch_dir,'qsub_jobs'),'a') as f:
for participant in participants:
f.write('qsub model_comparison_pos_{}\n'.format(participant))
df = pd.read_csv(os.path.join(working_dir,'../data/ATTfoc.csv'))
df = df[df.columns[1:]]
df.columns = ['participant',
'blocks',
'trials',
'firstgabor',
'attention',
'tilted',
'correct',
'RT_correct',
'awareness',
'RT_awareness',
'confidence',
'RT_confidence']
participants = pd.unique(df['participant'])
batch_dir = 'batch'
if not os.path.exists(batch_dir):
os.mkdir(batch_dir)
# estimate the experimental score
for participant in participants:
with open(os.path.join(batch_dir,'classifcation_att_n_trials_back (experiment score) ({}).py'.format(participant)),'wb') as new_file:
with open('classifcation_att_n_trials_back (experiment score).py','rb') as old_file:
for line in old_file:
new_file.write(line.replace("participant = 'AS'","participant = '{}'".format(participant)))
# estimator chance level score
for participant in participants:
with open(os.path.join(batch_dir,'classifcation_att_n_trials_back (empirical chance level) ({}).py'.format(participant)),'wb') as new_file:
with open('classifcation_att_n_trials_back (empirical chance level).py','rb') as old_file:
for line in old_file:
new_file.write(line.replace("participant = 'AS'","participant = '{}'".format(participant)))
content = """
#!/bin/bash
# This is a script to send classifcation_att_n_trials_back (empirical chance level) ({}).py as a batch job.
# it works on dataset {}
#$ -cwd
#$ -o test_run/out_{}.txt
#$ -e test_run/err_{}.txt
#$ -m be
#$ -M nmei@bcbl.eu
#$ -N "att_{}"
#$ -S /bin/bash
module load rocks-python-2.7
python "classifcation_att_n_trials_back (experiment score) ({}).py"
python "classifcation_att_n_trials_back (empirical chance level) ({}).py"
"""
for participant in participants:
with open(os.path.join(batch_dir,'model_comparison_att_{}'.format(participant)),'w') as f:
f.write(content.format(participant,participant,participant,participant,participant,participant,participant))
with open(os.path.join(batch_dir,'qsub_jobs'),'a') as f:
for participant in participants:
f.write('qsub model_comparison_att_{}\n'.format(participant))
| 27.416216 | 143 | 0.646885 | 673 | 5,072 | 4.686478 | 0.179792 | 0.038047 | 0.048827 | 0.039949 | 0.935954 | 0.927077 | 0.921687 | 0.921687 | 0.883323 | 0.860495 | 0 | 0.004952 | 0.203667 | 5,072 | 184 | 144 | 27.565217 | 0.775935 | 0.042587 | 0 | 0.692982 | 0 | 0 | 0.43537 | 0.110973 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.017544 | 0 | 0.017544 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
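The batch generator above copies each analysis script once per participant by string-replacing a hardcoded `participant = 'AC'` (or `'AS'`) assignment line. The core substitution pattern can be sketched as follows; the template text and helper name here are illustrative, not the real scripts on disk:

```python
def render_for_participant(template_text, source_id, target_id):
    """Rewrite a hardcoded participant assignment for another participant,
    mirroring the line.replace(...) loop in the batch generator above."""
    return template_text.replace(
        "participant = '{}'".format(source_id),
        "participant = '{}'".format(target_id),
    )

# Toy stand-in for one of the copied analysis scripts.
template = "participant = 'AC'\nprint(participant)\n"
rendered = render_for_participant(template, 'AC', 'XY')
```

Doing the replacement per line (as the original does) or on the whole file text (as here) is equivalent, since the assignment never spans lines.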
8af382396e2f4696babba2b9d8923e425be948f3 | 3,194 | py | Python | Documentation/References/sharptest.py | RogueProeliator/IndigoPlugins-Sharp-TV-Network-Remote | 68ba834f68dfa665dabd96a5d466f7ca8350ced7 | [
"MIT"
] | null | null | null | Documentation/References/sharptest.py | RogueProeliator/IndigoPlugins-Sharp-TV-Network-Remote | 68ba834f68dfa665dabd96a5d466f7ca8350ced7 | [
"MIT"
] | 1 | 2022-01-19T01:53:10.000Z | 2022-01-19T01:53:10.000Z | Documentation/References/sharptest.py | RogueProeliator/IndigoPlugins-Sharp-TV-Network-Remote | 68ba834f68dfa665dabd96a5d466f7ca8350ced7 | [
"MIT"
] | null | null | null | #! /usr/bin/env python
# -*- coding: utf-8 -*-
import functools
import httplib
import Queue
import os
import re
import string
import socket
import sys
import threading
import telnetlib
import time
import urllib
if __name__ == '__main__':
try:
ipConnection = telnetlib.Telnet("172.16.1.136", 10002, 3)
inData = ipConnection.read_until("Login:")
print inData
ipConnection.write("username\r")
inData = ipConnection.read_until("Password:")
print inData
ipConnection.write("password\r")
inData = ipConnection.read_until("\r", 1.5)
print inData
# issue command for "POWER ON COMMAND SETTINGS"
#print "Issuing Power On Command to IP ON"
#ipConnection.write("RSPW2 \r")
#inData = ipConnection.read_until("\r", 1.5)
#print inData
#print "Name: "
ipConnection.write("TVNM1 \r")
inData = ipConnection.read_until("\r", 3)
print inData
print "Model: "
ipConnection.write("MNRD1 \r")
inData = ipConnection.read_until("\r", 3)
print inData
#print "Software Version: "
#ipConnection.write("SWVN1 \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "IP Protocol Version: "
#ipConnection.write("IPPV1 \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Sending Ch Request - Analog "
#ipConnection.write("DCCH??? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Sending Ch Request - Digital "
#ipConnection.write("DC2U??? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Current Volume: "
#ipConnection.write("VOLM?? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Mute: "
#ipConnection.write("MUTE? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Input: "
#ipConnection.write("IAVD? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Read Power: "
#ipConnection.write("POWR? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "A/V Mode: "
#ipConnection.write("AVMD? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "View Mode: "
#ipConnection.write("WIDE? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Surround: "
#ipConnection.write("ACSU? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Sleep Timer: "
#ipConnection.write("OFTM? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Closed Captioning: "
#ipConnection.write("CLCP? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Input: "
#ipConnection.write("ITGD? \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Sending MENU "
#ipConnection.write("RCKY38 \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Sending Return "
#ipConnection.write("RCKY45 \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
#print "Sending Ch + "
#ipConnection.write("RCKY34 \r")
#inData = ipConnection.read_until("\r", 3)
#print inData
except Exception as e:
print "Exception: " + str(e)
| 24.569231 | 59 | 0.653726 | 396 | 3,194 | 5.194444 | 0.25 | 0.218765 | 0.245989 | 0.301896 | 0.525036 | 0.511424 | 0.511424 | 0.511424 | 0.511424 | 0.491492 | 0 | 0.019668 | 0.188165 | 3,194 | 130 | 60 | 24.569231 | 0.773621 | 0.644959 | 0 | 0.21875 | 0 | 0 | 0.093484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.0625 | 0.375 | null | null | 0.21875 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
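Every query in the script above is the same `write` / `read_until` pair over the telnet connection, so the pair could be factored into one helper. A sketch, exercised against a fake connection object since no Sharp TV is reachable here — the command name mirrors the script, but the helper and the canned reply are hypothetical:

```python
class FakeTelnet(object):
    """Offline stand-in for telnetlib.Telnet with canned replies."""
    def __init__(self, replies):
        self.replies = list(replies)
        self.sent = []

    def write(self, data):
        self.sent.append(data)

    def read_until(self, expected, timeout=None):
        return self.replies.pop(0)


def query(conn, command, timeout=3):
    """Send one AQUOS-style command terminated by CR, return the reply line."""
    conn.write(command + " \r")
    return conn.read_until("\r", timeout)


conn = FakeTelnet(["SOME-MODEL\r"])
model = query(conn, "MNRD1")  # model-name request, as in the script above
```

With a real `telnetlib.Telnet` instance the same `query` helper would replace each commented-out block in the script.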
c166fd83a288956f29f8926e46b44acaeb996403 | 51 | py | Python | requests_oauth2client/flask/__init__.py | guillp/requests_oauth2client | c6202abafd846e1d61803ec7f357a2ec98a2f3b1 | [
"Apache-2.0"
] | 2 | 2021-06-06T15:00:25.000Z | 2021-06-24T14:38:47.000Z | requests_oauth2client/flask/__init__.py | guillp/requests_oauth2client | c6202abafd846e1d61803ec7f357a2ec98a2f3b1 | [
"Apache-2.0"
] | 5 | 2021-02-23T14:15:43.000Z | 2021-12-01T08:23:29.000Z | requests_oauth2client/flask/__init__.py | guillp/requests_oauth2client | c6202abafd846e1d61803ec7f357a2ec98a2f3b1 | [
"Apache-2.0"
] | 1 | 2021-08-22T11:10:02.000Z | 2021-08-22T11:10:02.000Z | from .auth import FlaskOAuth2ClientCredentialsAuth
| 25.5 | 50 | 0.901961 | 4 | 51 | 11.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.078431 | 51 | 1 | 51 | 51 | 0.957447 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c1747bbaca6900bf98987b0dfa6391dd342dbddd | 28 | py | Python | 05.gbdt/runner/__init__.py | predora005/wheather-forecasting | deb3592ac52751ccaf81d7aa8bbb00a14d232f9f | [
"MIT"
] | null | null | null | 05.gbdt/runner/__init__.py | predora005/wheather-forecasting | deb3592ac52751ccaf81d7aa8bbb00a14d232f9f | [
"MIT"
] | null | null | null | 05.gbdt/runner/__init__.py | predora005/wheather-forecasting | deb3592ac52751ccaf81d7aa8bbb00a14d232f9f | [
"MIT"
] | null | null | null | from runner.runner import *
| 14 | 27 | 0.785714 | 4 | 28 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c184475bbde70ab95f78b3b82455a28344e42334 | 76 | py | Python | wystia/utils/parse/__init__.py | rnag/wystia | dfe21764a54d2737814157072c3262aa7b1cec7d | [
"MIT"
] | 1 | 2022-02-02T21:22:20.000Z | 2022-02-02T21:22:20.000Z | wystia/utils/parse/__init__.py | rnag/wystia | dfe21764a54d2737814157072c3262aa7b1cec7d | [
"MIT"
] | 9 | 2021-06-17T15:11:31.000Z | 2021-12-01T18:49:13.000Z | wystia/utils/parse/__init__.py | rnag/wystia | dfe21764a54d2737814157072c3262aa7b1cec7d | [
"MIT"
] | null | null | null | # flake8: noqa
from .file import *
from .srt import *
from .types import *
| 12.666667 | 20 | 0.684211 | 11 | 76 | 4.727273 | 0.636364 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0.210526 | 76 | 5 | 21 | 15.2 | 0.85 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c18fec3c940a7c4a9bf9747e268ad801ca1831d9 | 90 | py | Python | tests/larcv/core/dataformat/test_dataformat.py | zhulcher/larcv3 | 26d1ad33f0c27ddf6bb2c56bc0238aeaddcb772b | [
"MIT"
] | 8 | 2019-05-14T21:53:42.000Z | 2021-12-10T13:09:33.000Z | tests/larcv/core/dataformat/test_dataformat.py | zhulcher/larcv3 | 26d1ad33f0c27ddf6bb2c56bc0238aeaddcb772b | [
"MIT"
] | 34 | 2019-05-15T13:33:10.000Z | 2022-03-22T17:54:49.000Z | tests/larcv/core/dataformat/test_dataformat.py | zhulcher/larcv3 | 26d1ad33f0c27ddf6bb2c56bc0238aeaddcb772b | [
"MIT"
] | 6 | 2019-10-24T16:11:50.000Z | 2021-11-26T14:06:30.000Z | import unittest
# def test_import_dataformat_top():
# from larcv import dataformat
| 12.857143 | 35 | 0.755556 | 11 | 90 | 5.909091 | 0.727273 | 0.492308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188889 | 90 | 6 | 36 | 15 | 0.890411 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c1bc05d6dd834a7075dcf1e09d92e3cef307dd1a | 2,372 | py | Python | pineapples-generation/generate_pfp.py | silvereh/PineapplesDayOut | 2a2dd82aab90d0d7c87d86748d32bf610ab03e9d | [
"MIT"
] | null | null | null | pineapples-generation/generate_pfp.py | silvereh/PineapplesDayOut | 2a2dd82aab90d0d7c87d86748d32bf610ab03e9d | [
"MIT"
] | null | null | null | pineapples-generation/generate_pfp.py | silvereh/PineapplesDayOut | 2a2dd82aab90d0d7c87d86748d32bf610ab03e9d | [
"MIT"
] | 1 | 2021-09-16T20:45:39.000Z | 2021-09-16T20:45:39.000Z | from PIL import Image, ImageOps
import sys
import random
import json
# ALINA
img1 = Image.open(f'./_assets/bg/bg9.png').convert('RGBA')
img2 = Image.open(f'./_assets/sk/sk13.png').convert('RGBA')
img3 = Image.open(f'./_assets/mo/mo15.png').convert('RGBA')
img4 = Image.open(f'./_assets/ey/ey16.png').convert('RGBA')
img5 = Image.open(f'./_assets/cr/cr12.png').convert('RGBA')
img6 = Image.open(f'./_assets/fw/fw8.png').convert('RGBA')
img7 = Image.open(f'./_assets/ac/ac4.png').convert('RGBA')
# Mash images
com1 = Image.alpha_composite(img1, img2)
com2 = Image.alpha_composite(com1, img3)
com3 = Image.alpha_composite(com2, img4)
com4 = Image.alpha_composite(com3, img5)
com5 = Image.alpha_composite(com4, img6)
com6 = Image.alpha_composite(com5, img7)
# Convert to RGB
result = com6.convert('RGB')
# Save file
filename = "alina.jpg"
result.save("./_output/images/" + filename, quality=95)
# VANDEMLAU
img1 = Image.open(f'./_assets/bg/bg1.png').convert('RGBA')
img2 = Image.open(f'./_assets/sk/sk15.png').convert('RGBA')
img3 = Image.open(f'./_assets/mo/mo3.png').convert('RGBA')
img4 = Image.open(f'./_assets/ey/ey15.png').convert('RGBA')
img5 = Image.open(f'./_assets/cr/cr11.png').convert('RGBA')
# Mash images
com1 = Image.alpha_composite(img1, img2)
com2 = Image.alpha_composite(com1, img3)
com3 = Image.alpha_composite(com2, img4)
com4 = Image.alpha_composite(com3, img5)
# Convert to RGB
result = com4.convert('RGB')
# Save file
filename = "vandemlau.jpg"
result.save("./_output/images/" + filename, quality=95)
# PINEAPPLE HEAD
img1 = Image.open(f'./_assets/bg/bg6.png').convert('RGBA')
img2 = Image.open(f'./_assets/sk/sk10.png').convert('RGBA')
img3 = Image.open(f'./_assets/mo/mo1.png').convert('RGBA')
img4 = Image.open(f'./_assets/ey/ey1.png').convert('RGBA')
img5 = Image.open(f'./_assets/cr/cr12.png').convert('RGBA')
img6 = Image.open(f'./_assets/fw/fw2.png').convert('RGBA')
img7 = Image.open(f'./_assets/ac/ac7.png').convert('RGBA')
# Mash images
com1 = Image.alpha_composite(img1, img2)
com2 = Image.alpha_composite(com1, img3)
com3 = Image.alpha_composite(com2, img4)
com4 = Image.alpha_composite(com3, img5)
com5 = Image.alpha_composite(com4, img6)
com6 = Image.alpha_composite(com5, img7)
# Convert to RGB
result = com6.convert('RGB')
# Save file
filename = "pineapplehead.jpg"
result.save("./_output/images/" + filename, quality=95)
| 31.210526 | 59 | 0.714587 | 364 | 2,372 | 4.552198 | 0.195055 | 0.103199 | 0.114665 | 0.183464 | 0.884731 | 0.86904 | 0.829209 | 0.829209 | 0.753168 | 0.492456 | 0 | 0.048193 | 0.090219 | 2,372 | 75 | 60 | 31.626667 | 0.719648 | 0.059444 | 0 | 0.479167 | 0 | 0 | 0.254398 | 0.08525 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
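Each profile picture above is built by folding layers left to right through `Image.alpha_composite` (`com1` through `com6`), which is exactly a reduction over an ordered layer list. A sketch of that pattern, using plain strings as stand-ins for images so it runs without Pillow; with Pillow installed, `Image.alpha_composite` would be the fold function:

```python
from functools import reduce


def compose_layers(layers, composite):
    """Fold an ordered list of layers into one result, left to right,
    the way the com1..com6 chain does above."""
    return reduce(composite, layers)


# Strings stand in for RGBA images; concatenation stands in for compositing.
flat = compose_layers(["bg", "+sk", "+mo", "+ey"], lambda a, b: a + b)
```

This would collapse each repeated block above into one call per character, with only the layer file list varying.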
c1c0aab944a1c7bc70bda43d3b619026e800dc3c | 140 | py | Python | 210201_hw/step_5.py | Inclementia/python_adv | c928648b1cc742083154a49fa40633b694e9b1c7 | [
"MIT"
] | null | null | null | 210201_hw/step_5.py | Inclementia/python_adv | c928648b1cc742083154a49fa40633b694e9b1c7 | [
"MIT"
] | null | null | null | 210201_hw/step_5.py | Inclementia/python_adv | c928648b1cc742083154a49fa40633b694e9b1c7 | [
"MIT"
] | null | null | null | nums_squared_lc = [num**2 for num in range(5)]
nums_squared_gc = (num**2 for num in range(5))
print(nums_squared_lc)
print(nums_squared_gc) | 28 | 46 | 0.757143 | 28 | 140 | 3.5 | 0.392857 | 0.44898 | 0.265306 | 0.204082 | 0.367347 | 0.367347 | 0.367347 | 0 | 0 | 0 | 0 | 0.032258 | 0.114286 | 140 | 5 | 47 | 28 | 0.758065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
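The two prints above show a list versus a generator object; the difference only appears once the generator is consumed, and a generator is exhausted after a single pass:

```python
nums_squared_lc = [num ** 2 for num in range(5)]  # list: evaluated eagerly
nums_squared_gc = (num ** 2 for num in range(5))  # generator: evaluated lazily

materialized = list(nums_squared_gc)  # consuming the generator yields values
drained = list(nums_squared_gc)       # a second pass yields nothing
```

So `print(nums_squared_lc)` shows the values directly, while `print(nums_squared_gc)` shows only a `<generator object ...>` repr.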
a9b9d8a07f75fbca13cfe40211f675462e1eb5ee | 207 | py | Python | algebra_utilities/tests/examples_of_use/__init__.py | computational-group-the-golden-ticket/AlgebraUtilities | d5c7c2806b6bd394564ae4146a2c5164f4ebe882 | [
"MIT"
] | null | null | null | algebra_utilities/tests/examples_of_use/__init__.py | computational-group-the-golden-ticket/AlgebraUtilities | d5c7c2806b6bd394564ae4146a2c5164f4ebe882 | [
"MIT"
] | null | null | null | algebra_utilities/tests/examples_of_use/__init__.py | computational-group-the-golden-ticket/AlgebraUtilities | d5c7c2806b6bd394564ae4146a2c5164f4ebe882 | [
"MIT"
] | null | null | null | import sys
import os
__from_actual_to_dir__ = "../../.."
sys.path.append(
os.path.abspath(os.path.join(os.path.dirname(__from_actual_to_dir__),
os.path.pardir)))
| 25.875 | 74 | 0.589372 | 27 | 207 | 4 | 0.481481 | 0.222222 | 0.222222 | 0.277778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2657 | 207 | 7 | 75 | 29.571429 | 0.710526 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e78d5edaf443584ec493838ede8cd1d9dcac2bfe | 162 | py | Python | demonet/__init__.py | zhiqwang/demonet | 8370fc41d56d28939403b883f4b4014172895781 | [
"Apache-2.0"
] | 11 | 2020-08-28T09:29:42.000Z | 2021-10-03T09:08:11.000Z | demonet/__init__.py | zhiqwang/demonet | 8370fc41d56d28939403b883f4b4014172895781 | [
"Apache-2.0"
] | 1 | 2021-11-15T03:58:37.000Z | 2021-11-15T04:23:22.000Z | demonet/__init__.py | zhiqwang/demonet | 8370fc41d56d28939403b883f4b4014172895781 | [
"Apache-2.0"
] | 3 | 2020-04-15T07:53:13.000Z | 2020-05-18T18:51:31.000Z | # Copyright (c) 2021, Zhiqiang Wang. All Rights Reserved.
from demonet import models
from demonet import data
from demonet import util
__version__ = "0.2.0a0"
| 18 | 57 | 0.765432 | 24 | 162 | 5 | 0.75 | 0.275 | 0.425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059259 | 0.166667 | 162 | 8 | 58 | 20.25 | 0.82963 | 0.339506 | 0 | 0 | 0 | 0 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
99d713975b9ff7cac84d07396920ec9e81da5839 | 1,309 | py | Python | myproductivitytool/project/urls.py | jhahitesh/myroductivitytool | 40d2409bae408ab6b57136922d5d5fba47e6d9c5 | [
"MIT"
] | null | null | null | myproductivitytool/project/urls.py | jhahitesh/myroductivitytool | 40d2409bae408ab6b57136922d5d5fba47e6d9c5 | [
"MIT"
] | 5 | 2020-06-05T21:43:28.000Z | 2021-06-10T18:22:52.000Z | myproductivitytool/project/urls.py | jhahitesh/myproductivitytool | 40d2409bae408ab6b57136922d5d5fba47e6d9c5 | [
"MIT"
] | null | null | null | from django.conf.urls import url
import myproductivitytool.project.views as project_views
app_name = 'project'
urlpatterns = [
url(r'^statistics/$', project_views.Statistics.as_view(), name='statistics'),
url(r'^projects/(?P<pid>\d+)/(?P<action>[-\w]+)/$',
project_views.Projects.as_view(),
name='projects'),
url(r'^projects/(?P<action>[-\w]+)/$',
project_views.Projects.as_view(),
name='projects'),
url(r'^tasks/(?P<tid>\d+)/(?P<action>[-\w]+)/$',
project_views.Tasks.as_view(),
name='tasks'),
url(r'^tasks/(?P<action>[-\w]+)/$',
project_views.Tasks.as_view(),
name='tasks'),
url(r'^projects/(?P<pid>\d+)/tasks/(?P<tid>\d+)/(?P<action>[-\w]+)/$',
project_views.Tasks.as_view(),
name='tasks'),
url(r'^projects/(?P<pid>\d+)/tasks/(?P<action>[-\w]+)/$',
project_views.Tasks.as_view(),
name='tasks'),
url(r'^comments/(?P<tcid>\d+)/(?P<action>[-\w]+)/$',
project_views.TaskComments.as_view(),
name='comments'),
url(r'^comments/(?P<action>[-\w]+)/$',
project_views.TaskComments.as_view(),
name='comments'),
url(r'^tasks/(?P<tid>\d+)/comments/(?P<tcid>\d+)/(?P<action>[-\w]+)/$',
project_views.TaskComments.as_view(),
name='comments'),
url(r'^tasks/(?P<tid>\d+)/comments/(?P<action>[-\w]+)/$',
project_views.TaskComments.as_view(),
name='comments')
] | 26.714286 | 78 | 0.622613 | 188 | 1,309 | 4.207447 | 0.143617 | 0.197219 | 0.139064 | 0.189633 | 0.790139 | 0.790139 | 0.768647 | 0.768647 | 0.768647 | 0.768647 | 0 | 0 | 0.085562 | 1,309 | 49 | 79 | 26.714286 | 0.660819 | 0 | 0 | 0.527778 | 0 | 0.055556 | 0.408397 | 0.333588 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.055556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
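Each resource in the urlpatterns above registers the same pair of regexes — one with a numeric id capture group, one without. A small helper could generate both from the prefix and group name; a sketch over plain regex strings, no Django required (the helper name is hypothetical):

```python
def action_patterns(prefix, id_group):
    """Build the paired 'with id' / 'without id' regexes used above."""
    with_id = r'^%s/(?P<%s>\d+)/(?P<action>[-\w]+)/$' % (prefix, id_group)
    without_id = r'^%s/(?P<action>[-\w]+)/$' % prefix
    return with_id, without_id


with_id, without_id = action_patterns('tasks', 'tid')
```

Each returned pattern could then be wrapped in `url(...)` with the shared view and name, halving the repetition in the module.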
99e0908cca364e23dbe12ff7dadedf94111fe11b | 172 | py | Python | anima/env/fusion/libdll/kitap.py | tws0002/anima | 73c256d1f7716a2db7933d6d8519a51333c7e5b4 | [
"BSD-2-Clause"
] | 7 | 2016-03-30T14:43:33.000Z | 2020-11-12T17:56:40.000Z | anima/env/fusion/libdll/kitap.py | tws0002/anima | 73c256d1f7716a2db7933d6d8519a51333c7e5b4 | [
"BSD-2-Clause"
] | null | null | null | anima/env/fusion/libdll/kitap.py | tws0002/anima | 73c256d1f7716a2db7933d6d8519a51333c7e5b4 | [
"BSD-2-Clause"
] | 3 | 2017-04-13T04:29:04.000Z | 2019-05-08T00:28:44.000Z |
class kitaplar(object):
def __init__(self):
pass
def __kitapEkle(self,ad,yazar):
pass
def __kitapSil(self,kid):
pass
def __kitapGetir(self,kid):
pass
| 10.75 | 32 | 0.697674 | 24 | 172 | 4.583333 | 0.541667 | 0.254545 | 0.2 | 0.254545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19186 | 172 | 16 | 33 | 10.75 | 0.791367 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.4 | 0 | null | null | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
8228138e0b3c5b1e04f11617e1e4c04e72165a7c | 179 | py | Python | fire/__init__.py | fire717/fire | a84c50f6934361dab41dccfb6c6a768448d93a8e | [
"MIT"
] | 5 | 2020-11-26T09:30:39.000Z | 2021-12-31T02:39:37.000Z | fire/__init__.py | fire717/fire | a84c50f6934361dab41dccfb6c6a768448d93a8e | [
"MIT"
] | 1 | 2022-03-04T02:06:35.000Z | 2022-03-04T02:22:39.000Z | fire/__init__.py | fire717/fire | a84c50f6934361dab41dccfb6c6a768448d93a8e | [
"MIT"
] | 1 | 2021-08-19T14:58:24.000Z | 2021-08-19T14:58:24.000Z |
from fire._version import __version__
from fire.init import initFire
from fire.model import FireModel
from fire.runner import FireRunner
from fire.data import FireData
| 19.888889 | 38 | 0.798883 | 25 | 179 | 5.52 | 0.48 | 0.289855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173184 | 179 | 8 | 39 | 22.375 | 0.932432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
417097d6185aac0cbeb4d8b21854d261753dee09 | 24,113 | py | Python | scripts/incremental/incremental_experiments.py | fleur101/predict-python | d40c876d919232bbb77904e050b182c875bc36fa | [
"MIT"
] | 12 | 2018-06-27T08:09:18.000Z | 2021-10-10T22:19:04.000Z | scripts/incremental/incremental_experiments.py | fleur101/predict-python | d40c876d919232bbb77904e050b182c875bc36fa | [
"MIT"
] | 17 | 2018-06-12T17:36:11.000Z | 2020-11-16T21:23:22.000Z | scripts/incremental/incremental_experiments.py | fleur101/predict-python | d40c876d919232bbb77904e050b182c875bc36fa | [
"MIT"
] | 16 | 2018-08-02T14:40:17.000Z | 2021-11-12T12:28:46.000Z | import django
django.setup()
import json
import time
from enum import Enum
from src.encoding.models import ValueEncodings, TaskGenerationTypes
from src.hyperparameter_optimization.models import HyperOptLosses, HyperOptAlgorithms, HyperparameterOptimizationMethods
from src.labelling.models import LabelTypes
from src.predictive_model.classification.models import ClassificationMethods
from src.clustering.models import ClusteringMethods
from src.jobs.models import JobStatuses, JobTypes
from src.utils.experiments_utils import upload_split, send_job_request, create_classification_payload, retrieve_job
def retrieve_predictive_model_configuration(config):
if len(config) == 1:
config = config[0]['config']
elif len(config) > 1:
print('duplicate config')
config = config[0]['config']
else:
print('missing conf')
return {}
predictive_model_config = config['predictive_model']
del predictive_model_config['model_path']
predictive_model = predictive_model_config['predictive_model']
del predictive_model_config['predictive_model']
prediction_method = predictive_model_config['prediction_method']
del predictive_model_config['prediction_method']
return {predictive_model + '.' + prediction_method: predictive_model_config}
def init_database(experimentation_type, splits, dataset, base_folder):
if dataset not in splits:
splits[dataset] = {}
if experimentation_type == ExperimentationType.STD.value:
splits[dataset]['0-40_80-100'] = upload_split(train=base_folder + dataset + '0-40.xes',
test=base_folder + dataset + '80-100.xes', server_name='ashkin', server_port='50401')
splits[dataset]['0-80_80-100'] = upload_split(train=base_folder + dataset + '0-80.xes',
test=base_folder + dataset + '80-100.xes', server_name='ashkin', server_port='50401')
elif experimentation_type == ExperimentationType.INCREMENTAL.value:
splits[dataset]['40-80_80-100'] = upload_split(train=base_folder + dataset + '40-80.xes',
test=base_folder + dataset + '80-100.xes', server_name='ashkin', server_port='50401')
elif experimentation_type == ExperimentationType.DRIFT_SIZE.value:
splits[dataset]['40-55_80-100'] = upload_split(train=base_folder + dataset + '40-55.xes',
test=base_folder + dataset + '80-100.xes', server_name='ashkin', server_port='50401')
splits[dataset]['0-55_80-100'] = upload_split(train=base_folder + dataset + '0-55.xes',
test=base_folder + dataset + '80-100.xes', server_name='ashkin', server_port='50401')
def get_pretrained_model_id(config):
if len(config) == 1:
model_id = config[0]['id']
elif len(config) > 1:
print('duplicate model')
model_id = config[0]['id']
else:
print('missing model')
return {}
return model_id
def std_experiments(dataset, prefix_length, models, splits, classification_method, encoding_method):
models[dataset]['0-40_80-100'] = send_job_request(
payload=create_classification_payload(
split=splits[dataset]['0-40_80-100'],
encodings=[encoding_method],
encoding={"padding": "zero_padding",
"generation_type": TaskGenerationTypes.ALL_IN_ONE.value,
"prefix_length": prefix_length,
"features": []},
labeling={"type": LabelTypes.ATTRIBUTE_STRING.value,
"attribute_name": "label",
"add_remaining_time": False,
"add_elapsed_time": False,
"add_executed_events": False,
"add_resources_used": False,
"add_new_traces": False},
hyperparameter_optimization={"type": HyperparameterOptimizationMethods.HYPEROPT.value,
"max_evaluations": 1000,
"performance_metric": HyperOptLosses.AUC.value,
"algorithm_type": HyperOptAlgorithms.TPE.value},
classification=[classification_method]
), server_port='50401', server_name='ashkin'
)[0]['id']
models[dataset]['0-80_80-100'] = send_job_request(
payload=create_classification_payload(
split=splits[dataset]['0-80_80-100'],
encodings=[encoding_method],
encoding={"padding": "zero_padding",
"generation_type": TaskGenerationTypes.ALL_IN_ONE.value,
"prefix_length": prefix_length,
"features": []},
labeling={"type": LabelTypes.ATTRIBUTE_STRING.value,
"attribute_name": "label",
"add_remaining_time": False,
"add_elapsed_time": False,
"add_executed_events": False,
"add_resources_used": False,
"add_new_traces": False},
hyperparameter_optimization={"type": HyperparameterOptimizationMethods.HYPEROPT.value,
"max_evaluations": 1000,
"performance_metric": HyperOptLosses.AUC.value,
"algorithm_type": HyperOptAlgorithms.TPE.value},
classification=[classification_method]
), server_port='50401', server_name='ashkin'
)[0]['id']
def incremental_experiments(dataset, prefix_length, models, splits, classification_method, encoding_method):
pretrained_model_parameters = retrieve_predictive_model_configuration(
retrieve_job(config={
'type': JobTypes.PREDICTION.value,
# 'status': JobStatuses.COMPLETED.value, # TODO sometimes some jobs hang in running while they are actually finished
'create_models': True,
'split': splits[dataset]['0-40_80-100'],
'encoding': {"value_encoding": encoding_method,
"padding": True,
"task_generation_type": TaskGenerationTypes.ALL_IN_ONE.value,
"prefix_length": prefix_length},
'labelling': {"type": LabelTypes.ATTRIBUTE_STRING.value,
"attribute_name": "label",
"add_remaining_time": False,
"add_elapsed_time": False,
"add_executed_events": False,
"add_resources_used": False,
"add_new_traces": False},
'hyperparameter_optimization': {"optimization_method": HyperparameterOptimizationMethods.HYPEROPT.value},
# "max_evaluations": 1000, #TODO not yet supported
# "performance_metric": HyperOptLosses.AUC.value,
# "algorithm_type": HyperOptAlgorithms.TPE.value},
'predictive_model': {'predictive_model': 'classification',
'prediction_method': classification_method},
'clustering': {'clustering_method': ClusteringMethods.NO_CLUSTER.value}
}, server_name='ashkin', server_port='50401')
)
payload = create_classification_payload(
split=splits[dataset]['0-80_80-100'],
encodings=[encoding_method],
encoding={"padding": "zero_padding",
"generation_type": TaskGenerationTypes.ALL_IN_ONE.value,
"prefix_length": prefix_length,
"features": []},
labeling={"type": LabelTypes.ATTRIBUTE_STRING.value,
"attribute_name": "label",
"add_remaining_time": False,
"add_elapsed_time": False,
"add_executed_events": False,
"add_resources_used": False,
"add_new_traces": False},
hyperparameter_optimization={"type": HyperparameterOptimizationMethods.NONE.value},
classification=[classification_method]
)
payload.update(pretrained_model_parameters)
models[dataset]['0-80_80-100'] = send_job_request(payload=payload, server_port='50401', server_name='ashkin')[0]['id']
if classification_method != ClassificationMethods.RANDOM_FOREST.value:
payload = create_classification_payload(
split=splits[dataset]['40-80_80-100'],
encodings=[encoding_method],
encoding={"padding": "zero_padding",
"generation_type": TaskGenerationTypes.ALL_IN_ONE.value,
"prefix_length": prefix_length,
"features": []},
labeling={"type": LabelTypes.ATTRIBUTE_STRING.value,
"attribute_name": "label",
"add_remaining_time": False,
"add_elapsed_time": False,
"add_executed_events": False,
"add_resources_used": False,
"add_new_traces": False},
classification=[classification_method],
hyperparameter_optimization={"type": HyperparameterOptimizationMethods.NONE.value},
incremental_train=[
get_pretrained_model_id(
config=retrieve_job(config={
'type': JobTypes.PREDICTION.value,
# 'status': JobStatuses.COMPLETED.value, # TODO sometimes some jobs hang in running while they are actually finished
'create_models': True,
'split': splits[dataset]['0-40_80-100'],
'encoding': {"value_encoding": encoding_method,
"padding": True,
"task_generation_type": TaskGenerationTypes.ALL_IN_ONE.value,
"prefix_length": prefix_length},
'labelling': {"type": LabelTypes.ATTRIBUTE_STRING.value,
"attribute_name": "label",
"add_remaining_time": False,
"add_elapsed_time": False,
"add_executed_events": False,
"add_resources_used": False,
"add_new_traces": False},
'hyperparameter_optimization': {
"optimization_method": HyperparameterOptimizationMethods.HYPEROPT.value},
# "max_evaluations": 1000, #TODO not yet supported
# "performance_metric": HyperOptLosses.AUC.value,
# "algorithm_type": HyperOptAlgorithms.TPE.value},
'predictive_model': {'predictive_model': 'classification',
'prediction_method': classification_method},
'clustering': {'clustering_method': ClusteringMethods.NO_CLUSTER.value}
}, server_name='ashkin', server_port='50401')
)
]
)
payload.update(pretrained_model_parameters)
models[dataset]['40-80_80-100'] = send_job_request(payload=payload, server_port='50401', server_name='ashkin')[0]['id']
def drift_size_experimentation(dataset, prefix_length, models, splits, classification_method, encoding_method):
    if classification_method != ClassificationMethods.RANDOM_FOREST.value:
models[dataset]['40-55_80-100'] = send_job_request(
payload=create_classification_payload(
split=splits[dataset]['40-55_80-100'],
encodings=[encoding_method],
encoding={"padding": "zero_padding",
"generation_type": TaskGenerationTypes.ALL_IN_ONE.value,
"prefix_length": prefix_length,
"features": []},
labeling={"type": LabelTypes.ATTRIBUTE_STRING.value,
"attribute_name": "label",
"add_remaining_time": False,
"add_elapsed_time": False,
"add_executed_events": False,
"add_resources_used": False,
"add_new_traces": False},
classification=[classification_method],
hyperparameter_optimization={"type": HyperparameterOptimizationMethods.NONE.value},
incremental_train=[
get_pretrained_model_id(
config=retrieve_job(config={
'type': JobTypes.PREDICTION.value,
# 'status': JobStatuses.COMPLETED.value, # TODO sometimes some jobs hang in running while they are actually finished
'create_models': True,
'split': splits[dataset]['0-40_80-100'],
'encoding': {"value_encoding": encoding_method,
"padding": True,
"task_generation_type": TaskGenerationTypes.ALL_IN_ONE.value,
"prefix_length": prefix_length},
'labelling': {"type": LabelTypes.ATTRIBUTE_STRING.value,
"attribute_name": "label",
"add_remaining_time": False,
"add_elapsed_time": False,
"add_executed_events": False,
"add_resources_used": False,
"add_new_traces": False},
'hyperparameter_optimization': {"optimization_method": HyperparameterOptimizationMethods.HYPEROPT.value},
# "max_evaluations": 1000, #TODO not yet supported
# "performance_metric": HyperOptLosses.AUC.value,
# "algorithm_type": HyperOptAlgorithms.TPE.value},
'predictive_model': {'predictive_model': 'classification',
'prediction_method': classification_method},
'clustering': {'clustering_method': ClusteringMethods.NO_CLUSTER.value}
}, server_name='ashkin', server_port='50401')
)
]
), server_port='50401', server_name='ashkin'
)[0]['id']
models[dataset]['0-55_80-100'] = send_job_request(
payload=create_classification_payload(
split=splits[dataset]['0-55_80-100'],
encodings=[encoding_method],
encoding={"padding": "zero_padding",
"generation_type": TaskGenerationTypes.ALL_IN_ONE.value,
"prefix_length": prefix_length,
"features": []},
labeling={"type": LabelTypes.ATTRIBUTE_STRING.value,
"attribute_name": "label",
"add_remaining_time": False,
"add_elapsed_time": False,
"add_executed_events": False,
"add_resources_used": False,
"add_new_traces": False},
classification=[classification_method],
hyperparameter_optimization={"type": HyperparameterOptimizationMethods.HYPEROPT.value,
"max_evaluations": 1000,
"performance_metric": HyperOptLosses.AUC.value,
"algorithm_type": HyperOptAlgorithms.TPE.value},
), server_port='50401', server_name='ashkin'
)[0]['id']
class ExperimentationType(Enum):
STD = 'std'
INCREMENTAL = 'incremental'
DRIFT_SIZE = 'drift_size'
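The enum values above are what launch_experimentation dispatches on with an if/elif chain; the same routing can be sketched as a dict lookup (the lambda handlers below are stand-ins, not the real std_experiments / incremental_experiments / drift_size_experimentation functions):

```python
from enum import Enum

class ExperimentationType(Enum):
    STD = 'std'
    INCREMENTAL = 'incremental'
    DRIFT_SIZE = 'drift_size'

# Map each experimentation type to its handler. Stand-in lambdas here;
# the real script calls std_experiments, incremental_experiments and
# drift_size_experimentation with the full argument list.
HANDLERS = {
    ExperimentationType.STD.value: lambda: 'std_experiments',
    ExperimentationType.INCREMENTAL.value: lambda: 'incremental_experiments',
    ExperimentationType.DRIFT_SIZE.value: lambda: 'drift_size_experimentation',
}

def dispatch(experimentation_type):
    # KeyError on an unknown type, mirroring the silent no-op risk
    # of a missing elif branch.
    return HANDLERS[experimentation_type]()
```

A dict dispatch also makes it harder to forget a branch when a new ExperimentationType member is added.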
def launch_experimentation(experimentation_type, datasets, splits, base_folder, models, prefixes=[10, 30, 50, 70],
classification_methods=[ClassificationMethods.MULTINOMIAL_NAIVE_BAYES.value],
encodings=[ValueEncodings.SIMPLE_INDEX.value]):
for dataset in datasets:
init_database(experimentation_type, splits, dataset, base_folder)
print(dataset, '[:::] Batch of logs uploaded')
if dataset not in models:
models[dataset] = {}
for prefix_length in prefixes: # NB: if you add something the splits and models are overwritten
for classification_method in classification_methods: # NB: if you add something the models are overwritten
for encoding_method in encodings: # NB: if you add something the models are overwritten
if experimentation_type == ExperimentationType.STD.value:
std_experiments(dataset, prefix_length, models, splits, classification_method, encoding_method)
elif experimentation_type == ExperimentationType.INCREMENTAL.value:
incremental_experiments(dataset, prefix_length, models, splits, classification_method,
encoding_method)
elif experimentation_type == ExperimentationType.DRIFT_SIZE.value:
drift_size_experimentation(dataset, prefix_length, models, splits, classification_method,
encoding_method)
print(dataset, '[:::] Batch of tasks created')
time.sleep(180)
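The triple-nested loop above enumerates the Cartesian product of prefixes, classification methods and encodings; for reference, the same grid can be expressed with itertools.product (the method/encoding strings below are stand-in values, not the actual enum values):

```python
from itertools import product

prefixes = [30, 50, 70]
classification_methods = ['multinomialNB', 'SGDClassifier', 'perceptron', 'randomForest']
encodings = ['simpleIndex', 'complex']

# Every (prefix_length, classification_method, encoding_method) combination,
# in the same order the nested for-loops visit them.
grid = list(product(prefixes, classification_methods, encodings))
print(len(grid))  # 3 * 4 * 2 = 24 combinations
```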
if __name__ == '__main__':
print("Starting experiments")
base_folder = '/home/wrizzi/Documents/datasets/'
# base_folder = '/Users/Brisingr/Desktop/TEMP/dataset/prom_labeled_data/CAiSE18/'
experimentation = ExperimentationType.DRIFT_SIZE.value
datasets1 = [
'BPI11/f1/',
'BPI11/f2/',
'BPI11/f3/',
'BPI11/f4/',
'BPI15/f1/',
'BPI15/f2/',
'BPI15/f3/'
]
datasets2 = [
'Drift1/f1/',
'Drift2/f1/'
]
split_sizes = [
'0-40.xes',
'0-60.xes',
'0-55.xes',
'0-80.xes',
'40-80.xes',
'40-60.xes',
'40-55.xes',
'80-100.xes'
]
# TODO load from memory
splits = {
'BPI11/f1/': {
'0-40_80-100': 55,
'0-80_80-100': 56,
'40-80_80-100': 38,
},
'BPI11/f2/': {
'0-40_80-100': 57,
'0-80_80-100': 58,
'40-80_80-100': 39,
},
'BPI11/f3/': {
'0-40_80-100': 59,
'0-80_80-100': 60,
'40-80_80-100': 40,
},
'BPI11/f4/': {
'0-40_80-100': 61,
'0-80_80-100': 62,
'40-80_80-100': 41,
},
'BPI15/f1/': {
'0-40_80-100': 63,
'0-80_80-100': 64,
'40-80_80-100': 42,
},
'BPI15/f2/': {
'0-40_80-100': 65,
'0-80_80-100': 66,
'40-80_80-100': 43,
},
'BPI15/f3/': {
'0-40_80-100': 67,
'0-80_80-100': 68,
'40-80_80-100': 44,
},
'Drift1/f1/': {
'0-40_80-100': 69,
'0-80_80-100': 70,
'40-80_80-100': 45,
'40-60_80-100': 1111,
'0-60_80-100': 1111,
        '40-55_80-100': 36,  # ID is much higher because one run was botched and had to be redone
'0-55_80-100': 1111
},
'Drift2/f1/': {
'0-40_80-100': 71,
'0-80_80-100': 72,
'40-80_80-100': 46,
'40-60_80-100': 1111,
'0-60_80-100': 1111,
'40-55_80-100': 1111,
'0-55_80-100': 1111
}
}
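The split IDs above are hard-coded, while the script serializes them to splits_*.json at the end of each run; a minimal sketch of the "load from memory" TODO, falling back to the hard-coded table when no saved file exists (the file name splits_1.json is an assumption):

```python
import json
import os

def load_splits(path, fallback):
    """Return previously saved split IDs from `path` if it exists,
    otherwise the hard-coded fallback table."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return fallback

# Abridged example of the hard-coded table used as fallback.
hardcoded = {'Drift1/f1/': {'0-40_80-100': 69, '0-80_80-100': 70}}
splits = load_splits('splits_1.json', hardcoded)
```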
models = {}
if experimentation == ExperimentationType.STD.value:
launch_experimentation(
ExperimentationType.STD.value,
datasets1,
splits,
base_folder,
models,
prefixes=[30, 50, 70],
classification_methods=[
ClassificationMethods.MULTINOMIAL_NAIVE_BAYES.value,
ClassificationMethods.SGDCLASSIFIER.value,
ClassificationMethods.PERCEPTRON.value,
ClassificationMethods.RANDOM_FOREST.value],
encodings=[
ValueEncodings.SIMPLE_INDEX.value,
ValueEncodings.COMPLEX.value]
)
launch_experimentation(
ExperimentationType.STD.value,
datasets2,
splits,
base_folder,
models,
prefixes=[3, 5, 7],
classification_methods=[
ClassificationMethods.MULTINOMIAL_NAIVE_BAYES.value,
ClassificationMethods.SGDCLASSIFIER.value,
ClassificationMethods.PERCEPTRON.value,
ClassificationMethods.RANDOM_FOREST.value],
encodings=[
ValueEncodings.SIMPLE_INDEX.value,
ValueEncodings.COMPLEX.value]
)
        with open("splits_1.json", 'w') as f:
            json.dump(splits, f)
        with open("models_1.json", 'w') as f:
            json.dump(models, f)
elif experimentation == ExperimentationType.DRIFT_SIZE.value:
launch_experimentation(
ExperimentationType.DRIFT_SIZE.value,
datasets2,
splits,
base_folder,
models,
prefixes=[3, 5, 7],
classification_methods=[
ClassificationMethods.MULTINOMIAL_NAIVE_BAYES.value,
ClassificationMethods.SGDCLASSIFIER.value,
ClassificationMethods.PERCEPTRON.value,
ClassificationMethods.RANDOM_FOREST.value],
encodings=[
ValueEncodings.SIMPLE_INDEX.value,
ValueEncodings.COMPLEX.value]
)
        with open("splits_2.json", 'w') as f:
            json.dump(splits, f)
        with open("models_2.json", 'w') as f:
            json.dump(models, f)
elif experimentation == ExperimentationType.INCREMENTAL.value:
# splits = json.load(open("../splits.json", 'r'))
# models = json.load(open("../models.json", 'r'))
launch_experimentation(
ExperimentationType.INCREMENTAL.value,
datasets1,
splits,
base_folder,
models,
prefixes=[30, 50, 70],
classification_methods=[
ClassificationMethods.MULTINOMIAL_NAIVE_BAYES.value,
ClassificationMethods.SGDCLASSIFIER.value,
ClassificationMethods.PERCEPTRON.value,
ClassificationMethods.RANDOM_FOREST.value],
encodings=[
ValueEncodings.SIMPLE_INDEX.value,
ValueEncodings.COMPLEX.value]
)
launch_experimentation(
ExperimentationType.INCREMENTAL.value,
datasets2,
splits,
base_folder,
models,
prefixes=[3, 5, 7],
classification_methods=[
ClassificationMethods.MULTINOMIAL_NAIVE_BAYES.value,
ClassificationMethods.SGDCLASSIFIER.value,
ClassificationMethods.PERCEPTRON.value,
ClassificationMethods.RANDOM_FOREST.value],
encodings=[
ValueEncodings.SIMPLE_INDEX.value,
ValueEncodings.COMPLEX.value]
)
        with open("splits_3.json", 'w') as f:
            json.dump(splits, f)
        with open("models_3.json", 'w') as f:
            json.dump(models, f)
print("End of the experiments")
# File: hello.py | repo: AyushiS-Manit/profile-rest-api | license: MIT
print("Ayushi this side")
# File: step 293.py | repo: blulady/python | license: bzip2-1.0.6
import time
for counter in range(1, 11):
print(counter)
time.sleep(.5)
for counter in range(10, 0, -1):
print(counter)
time.sleep(.5)
# File: twkbpy/__init__.py | repo: atlefren/twkbpy | license: MIT
# -*- coding: utf-8 -*-
from .decode import Decoder
def decode(*args):
return Decoder().decode(*args)
def to_geojson(*args):
return Decoder().to_geojson(*args)
# File: examples/modern_data_stack_assets/modern_data_stack_assets/__init__.py | repo: kstennettlull/dagster | license: Apache-2.0
from .assets import analytics_assets
# File: src/py/dl/nn/gan_512_nn.py | repo: hina-shah/US-famli | license: Apache-2.0
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
import json
import os
import glob
from . import base_nn
import sys
class NN(base_nn.BaseNN):
def set_data_description(self, json_filename=None, data_description=None):
super(NN, self).set_data_description(json_filename=json_filename, data_description=data_description)
self.num_channels = 1
self.out_channels = 1
if "data_keys" in self.data_description:
data_keys = self.data_description["data_keys"]
if data_keys[0] in self.data_description and "shape" in self.data_description[data_keys[0]]:
self.num_channels = self.data_description[data_keys[0]]["shape"][-1]
self.out_channels = self.num_channels
self.value_range = [self.data_description[data_keys[0]]["min"], self.data_description[data_keys[0]]["max"]]
def up_conv_block(self, x0, in_filters, out_filters, cross_block, block='a', is_training=False, ps_device="/cpu:0", w_device="/gpu:0"):
x = self.convolution2d(x0, name= block + "_conv1_op", filter_shape=[1,1,in_filters,out_filters[0]], strides=[1,1,1,1], padding="SAME", use_bias=False, activation=None, initializer=tf.random_normal_initializer(mean=0,stddev=0.01), ps_device=ps_device, w_device=w_device)
x = tf.layers.batch_normalization(x, training=is_training)
x = tf.nn.leaky_relu(x)
out_shape=tf.shape(cross_block)
x = self.up_convolution2d(x, name=block + "_up_conv1_op", filter_shape=[3,3,out_filters[1],out_filters[0]], output_shape=out_shape, strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, initializer=tf.random_normal_initializer(mean=0,stddev=0.01), ps_device=ps_device, w_device=w_device)
x = tf.layers.batch_normalization(x, training=is_training)
x = tf.nn.leaky_relu(x)
x = tf.concat([cross_block, x], -1)
x = self.convolution2d(x, name=block + "_conv2_op", filter_shape=[1,1,x.get_shape().as_list()[-1],out_filters[2]], strides=[1,1,1,1], padding="SAME", use_bias=False, activation=None, initializer=tf.random_normal_initializer(mean=0,stddev=0.01), ps_device=ps_device, w_device=w_device)
x = tf.layers.batch_normalization(x, training=is_training)
x = tf.nn.leaky_relu(x)
shortcut = self.up_convolution2d(x0, name=block + "_up_conv2_op", filter_shape=[3,3,out_filters[2],in_filters], output_shape=out_shape, strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, initializer=tf.random_normal_initializer(mean=0,stddev=0.01), ps_device=ps_device, w_device=w_device)
shortcut = tf.layers.batch_normalization(shortcut, training=is_training)
x = tf.math.add(x, shortcut)
x = tf.nn.leaky_relu(x)
return x
def upblock(self, x0, out_filters, stride=2, ps_device="/cpu:0", w_device="/gpu:0"):
shape = tf.shape(x0)
batch_size = shape[0]
output_shape = x0.get_shape().as_list()
in_filters = output_shape[-1]
output_shape = [batch_size,output_shape[1]*stride,output_shape[2]*stride,out_filters]
x = self.up_convolution2d(x0, name="up_conv1", filter_shape=[1 + stride, 1 + stride,out_filters,in_filters], output_shape=output_shape, strides=[1,stride,stride,1], padding="SAME", use_bias=False, activation=None, initializer=tf.random_normal_initializer(mean=0,stddev=0.01), ps_device=ps_device, w_device=w_device)
x = tf.layers.batch_normalization(x, training=True)
x = tf.nn.leaky_relu(x)
return x
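upblock above builds the transposed convolution's output_shape by multiplying the spatial dimensions by the stride and swapping the channel count; that arithmetic can be checked in isolation with a pure-Python sketch of the same computation:

```python
def upblock_output_shape(input_shape, stride, out_filters):
    """Shape produced by a stride-`stride` transposed conv with SAME padding:
    batch unchanged, height and width multiplied by stride, channels replaced."""
    batch, height, width, _ = input_shape
    return [batch, height * stride, width * stride, out_filters]

# A batch of 8 feature maps of 4x4x1024 upsampled with stride 2 into 512 channels:
print(upblock_output_shape([8, 4, 4, 1024], 2, 512))  # [8, 8, 8, 512]
```

With stride 8 (as in higher_v2's start block) the spatial dimensions grow eightfold in a single upblock call.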
def higher_v2(self, x, reuse=False, is_training=False, keep_prob=1, ps_device="/cpu:0", w_device="/gpu:0"):
with tf.variable_scope("ued_resnet") as scope:
if(reuse):
scope.reuse_variables()
with tf.variable_scope("start"):
x = tf.layers.batch_normalization(x, training=is_training)
x = self.upblock(x, self.num_channels, stride=8, ps_device=ps_device, w_device=w_device)
x0 = x
with tf.variable_scope("block0"):
x = self.convolution2d(x0, name="conv0_0_op", filter_shape=[3,3,self.num_channels,64], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
x = tf.layers.batch_normalization(x, training=is_training)
x = tf.nn.leaky_relu(x)
x = tf.nn.dropout(x, keep_prob)
block0 = x
# block0_0_shape = tf.shape(x)
# x = self.convolution2d(x, name="conv1_0_op", filter_shape=[3,3,32,64], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
# x = tf.layers.batch_normalization(x, training=is_training)
# x = tf.nn.leaky_relu(x)
with tf.variable_scope("block1"):
x = self.conv_block(x, 64, [64,64,128], block='a', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 64, [32,32,64], block='b', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 128, [64,64,128], block='c', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# block1 = tf.nn.dropout(x, keep_prob)
# x = self.convolution2d(x, name="conv1_0_op", filter_shape=[3,3,64,128], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
# x = tf.layers.batch_normalization(x, training=is_training)
# x = tf.nn.leaky_relu(x)
# block0_0_shape = tf.shape(x)
# x = self.convolution2d(x, name="conv1_0_op", filter_shape=[3,3,32,64], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
# x = tf.layers.batch_normalization(x, training=is_training)
# x = tf.nn.leaky_relu(x)
x = tf.nn.dropout(x, keep_prob)
block1 = x
with tf.variable_scope("block2"):
x = self.conv_block(x, 128, [128,128,256], block='a', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 128, [64,64,128], block='b', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 128, [64,64,128], block='c', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 256, [128,128,256], block='d', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
x = tf.nn.dropout(x, keep_prob)
block2 = x
with tf.variable_scope("block3"):
x = self.conv_block(x, 256, [256,256,512], block='a', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 256, [128,128,256], block='b', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 256, [128,256,256], block='c', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 256, [128,128,256], block='d', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 512, [256,256,512], block='e', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# block3 = tf.nn.dropout(x, keep_prob)
x = tf.nn.dropout(x, keep_prob)
# with tf.variable_scope("block4"):
# x = self.conv_block(x, 512, [512,512,1024], block='a', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 1024, [512,512,1024], block='b', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# # x = self.identity_block(x, 1024, [512,512,1024], block='c', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# # x = self.identity_block(x, 1024, [512,512,1024], block='d', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# # x = self.identity_block(x, 1024, [512,512,1024], block='e', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# # x = self.identity_block(x, 1024, [512,512,1024], block='f', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = tf.nn.dropout(x, keep_prob)
# with tf.variable_scope("up_block4"):
# x = self.identity_block(x, 1024, [512,512,1024], block='a', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# # x = self.identity_block(x, 1024, [512,512,1024], block='b', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# # x = self.identity_block(x, 1024, [512,512,1024], block='c', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# # x = self.identity_block(x, 1024, [512,512,1024], block='d', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# # x = self.identity_block(x, 1024, [512,512,1024], block='e', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.up_conv_block(x, 1024, [512,512,512], block='f', cross_block=block3, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = tf.nn.dropout(x, keep_prob)
with tf.variable_scope("up_block3"):
# x = self.identity_block(x, 256, [128,128,256], block='a', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 256, [128,128,256], block='b', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 256, [128,128,256], block='c', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 512, [256,256,512], block='d', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
x = self.up_conv_block(x, 512, [256,256,256], block='e', cross_block=block2, is_training=is_training, ps_device=ps_device, w_device=w_device)
x = tf.nn.dropout(x, keep_prob)
with tf.variable_scope("up_block2"):
# x = self.identity_block(x, 128, [64,64,128], block='a', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 128, [64,64,128], block='b', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 256, [128,128,256], block='c', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
x = self.up_conv_block(x, 256, [128,128,128], block='d', cross_block=block1, is_training=is_training, ps_device=ps_device, w_device=w_device)
x = tf.nn.dropout(x, keep_prob)
with tf.variable_scope("up_block1"):
# x = self.identity_block(x, 64, [32,32,64], block='a', activation=tf.nn.leaky_relu, is_training=is_training, ps_device=ps_device, w_device=w_device)
# x = self.identity_block(x, 128, [64,64,128], block='b', activation=tf.nn.leaky_relu,is_training=is_training, ps_device=ps_device, w_device=w_device)
x = self.up_conv_block(x, 128, [64,64,64], block='c', cross_block=block0, is_training=is_training, ps_device=ps_device, w_device=w_device)
x = tf.nn.dropout(x, keep_prob)
with tf.variable_scope("up_block_final"):
x = self.up_convolution2d(x, name="up_conv1_op", filter_shape=[3,3,self.out_channels,64], output_shape=tf.shape(x0), strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
# x = tf.layers.batch_normalization(x, training=is_training)
# x = tf.nn.leaky_relu(x)
# x = self.up_convolution2d(x, name="up_conv2_op", filter_shape=[3,3,1,32], output_shape=tf.shape(x0), strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
x = tf.nn.tanh(x)
x = tf.math.multiply(tf.math.add(x, 1), 127.5)
return x
def higher(self, x, reuse=False, is_training=False, keep_prob=1, ps_device="/cpu:0", w_device="/gpu:0"):
with tf.variable_scope("higher") as scope:
if(reuse):
scope.reuse_variables()
with tf.variable_scope("block0"):
x = self.upblock(x, 256, ps_device=ps_device, w_device=w_device)
x = tf.nn.dropout(x, keep_prob)
with tf.variable_scope("block1"):
x = self.upblock(x, 128, ps_device=ps_device, w_device=w_device)
x = tf.nn.dropout(x, keep_prob)
with tf.variable_scope("block2"):
shape = tf.shape(x)
batch_size = shape[0]
output_shape = x.get_shape().as_list()
output_shape = [batch_size,output_shape[1]*2,output_shape[2]*2,self.out_channels]
x = self.up_convolution2d(x, name="up_conv1_op", filter_shape=[3,3,self.out_channels,128], output_shape=output_shape, strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
x = tf.nn.tanh(x)
x = tf.math.multiply(tf.math.add(x, 1), 127.5)
return x
def inference(self, data_tuple=None, images=None, keep_prob=1, is_training=False, ps_device="/cpu:0", w_device="/gpu:0"):
with tf.variable_scope("generator"):
batch_size = 1
if(is_training):
if(data_tuple):
shape = tf.shape(data_tuple[0])
batch_size = shape[0]
x = tf.random.normal([batch_size,128])
elif(data_tuple):
x = tf.reshape(data_tuple[0], [batch_size, 128])
else:
x = tf.reshape(images, [batch_size, 128])
self.print_tensor_shape(x, "input_x")
with tf.variable_scope("block0"):
x = self.matmul(x, 4*4*1024, name='project', activation=None, initializer=tf.random_normal_initializer(mean=0,stddev=0.01), ps_device=ps_device, w_device=w_device)
x = tf.reshape(x, [batch_size, 4, 4, 1024])
x = tf.layers.batch_normalization(x, training=True)
x = tf.nn.leaky_relu(x)
with tf.variable_scope("block1"):
x = self.upblock(x, 512, ps_device=ps_device, w_device=w_device)
with tf.variable_scope("block2"):
x = self.upblock(x, 256, ps_device=ps_device, w_device=w_device)
with tf.variable_scope("block3"):
x = self.upblock(x, 128, ps_device=ps_device, w_device=w_device)
with tf.variable_scope("block4"):
x = self.up_convolution2d(x, name="up_conv1", filter_shape=[3,3,self.out_channels,128], output_shape=[batch_size,64,64,self.out_channels], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, initializer=tf.random_normal_initializer(mean=0,stddev=0.01), ps_device=ps_device, w_device=w_device)
x = tf.nn.tanh(x)
x = tf.math.multiply(tf.math.add(x, 1), 127.5)
return self.higher(x, is_training=True, keep_prob=keep_prob, ps_device=ps_device, w_device=w_device)
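The generator's final activation maps the tanh output from [-1, 1] into the 8-bit pixel range via (x + 1) * 127.5; the endpoints of that mapping can be verified directly with a scalar mirror of the graph ops:

```python
import math

def to_pixel_range(x):
    """Scalar mirror of tf.math.multiply(tf.math.add(tanh(x), 1), 127.5)."""
    return (math.tanh(x) + 1.0) * 127.5

print(to_pixel_range(-20.0))  # ~0.0
print(to_pixel_range(0.0))    # 127.5
print(to_pixel_range(20.0))   # ~255.0
```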
def discriminator(self, images=None, data_tuple=None, num_labels=2, keep_prob=1, is_training=False, reuse=False, ps_device="/cpu:0", w_device="/gpu:0"):
# input: tensor of images
# output: tensor of computed logits
if(data_tuple):
images = data_tuple[1]
self.print_tensor_shape(images, "images")
shape = tf.shape(images)
batch_size = shape[0]
with tf.variable_scope("discriminator_512") as scope:
if(reuse):
scope.reuse_variables()
x = tf.layers.batch_normalization(images, training=is_training)
with tf.variable_scope("block0"):
x = self.convolution2d(x, name="conv1", filter_shape=[3,3,self.num_channels,16], strides=[1,1,1,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
x = tf.layers.batch_normalization(x, training=is_training)
x = tf.nn.leaky_relu(x)
x = self.avg_pool(x, name="avg_pool_op", kernel=[1,3,3,1], strides=[1,2,2,1], ps_device=ps_device, w_device=w_device)
# x = self.convolution2d(x, name="conv2", filter_shape=[5,5,16,16], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
# x = tf.layers.batch_normalization(x, training=is_training)
# x = tf.nn.leaky_relu(x)
with tf.variable_scope("block1"):
x = self.convolution2d(x, name="conv1", filter_shape=[3,3,16,32], strides=[1,1,1,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            x = tf.layers.batch_normalization(x, training=is_training)
            x = tf.nn.leaky_relu(x)
            x = self.avg_pool(x, name="avg_pool_op", kernel=[1,3,3,1], strides=[1,2,2,1], ps_device=ps_device, w_device=w_device)
            # x = self.convolution2d(x, name="conv2", filter_shape=[5,5,32,32], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            # x = tf.layers.batch_normalization(x, training=is_training)
            # x = tf.nn.leaky_relu(x)
        with tf.variable_scope("block2"):
            x = self.convolution2d(x, name="conv1", filter_shape=[3,3,32,64], strides=[1,1,1,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            x = tf.layers.batch_normalization(x, training=is_training)
            x = tf.nn.leaky_relu(x)
            x = self.avg_pool(x, name="avg_pool_op", kernel=[1,3,3,1], strides=[1,2,2,1], ps_device=ps_device, w_device=w_device)
            # x = self.convolution2d(x, name="conv2", filter_shape=[5,5,64,64], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            # x = tf.layers.batch_normalization(x, training=is_training)
            # x = tf.nn.leaky_relu(x)
        with tf.variable_scope("block3"):
            x = self.convolution2d(x, name="conv1", filter_shape=[3,3,64,128], strides=[1,1,1,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            x = tf.layers.batch_normalization(x, training=is_training)
            x = tf.nn.leaky_relu(x)
            x = self.avg_pool(x, name="avg_pool_op", kernel=[1,3,3,1], strides=[1,2,2,1], ps_device=ps_device, w_device=w_device)
            # x = self.convolution2d(x, name="conv2", filter_shape=[3,3,128,128], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            # x = tf.layers.batch_normalization(x, training=is_training)
            # x = tf.nn.leaky_relu(x)
        with tf.variable_scope("block4"):
            x = self.convolution2d(x, name="conv1", filter_shape=[3,3,128,256], strides=[1,1,1,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            x = tf.layers.batch_normalization(x, training=is_training)
            x = tf.nn.leaky_relu(x)
            x = self.avg_pool(x, name="avg_pool_op", kernel=[1,3,3,1], strides=[1,2,2,1], ps_device=ps_device, w_device=w_device)
            # x = self.convolution2d(x, name="conv2", filter_shape=[3,3,256,256], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            # x = tf.layers.batch_normalization(x, training=is_training)
            # x = tf.nn.leaky_relu(x)
        with tf.variable_scope("block5"):
            x = self.convolution2d(x, name="conv1", filter_shape=[3,3,256,512], strides=[1,1,1,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            x = tf.layers.batch_normalization(x, training=is_training)
            x = tf.nn.leaky_relu(x)
            x = self.avg_pool(x, name="avg_pool_op", kernel=[1,3,3,1], strides=[1,2,2,1], ps_device=ps_device, w_device=w_device)
            # x = self.convolution2d(x, name="conv2", filter_shape=[3,3,512,512], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            # x = tf.layers.batch_normalization(x, training=is_training)
            # x = tf.nn.leaky_relu(x)
        with tf.variable_scope("block6"):
            x = self.convolution2d(x, name="conv1", filter_shape=[3,3,512,1024], strides=[1,1,1,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            x = tf.layers.batch_normalization(x, training=is_training)
            x = tf.nn.leaky_relu(x)
            x = self.avg_pool(x, name="avg_pool_op", kernel=[1,3,3,1], strides=[1,2,2,1], ps_device=ps_device, w_device=w_device)
            # x = self.convolution2d(x, name="conv2", filter_shape=[3,3,1024,1024], strides=[1,2,2,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
            # x = tf.layers.batch_normalization(x, training=is_training)
            # x = tf.nn.leaky_relu(x)
        with tf.variable_scope("fc"):
            # kernel_size = x.get_shape().as_list()
            # kernel_size[0] = 1
            # kernel_size[-1] = 1
            # x = self.avg_pool(x, name="avg_pool_op", kernel=kernel_size, strides=kernel_size, ps_device=ps_device, w_device=w_device)
            x = tf.reshape(x, (batch_size, 4*4*1024))
            x = self.matmul(x, 2, name='final_op', use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
        return x

    def metrics(self, logits, labels, name='collection_metrics'):
        with tf.variable_scope(name):
            weight_map = None
            metrics_obj = {}
            metrics_obj["ACCURACY"] = tf.metrics.accuracy(predictions=logits, labels=labels, weights=weight_map, name='accuracy')
            for key in metrics_obj:
                tf.summary.scalar(key, metrics_obj[key][0])
            return metrics_obj

    def training(self, loss, learning_rate, decay_steps, decay_rate, staircase, var_list=tf.GraphKeys.TRAINABLE_VARIABLES):
        global_step = tf.Variable(self.global_step, name='global_step', trainable=False)
        # create the learning rate decay schedule
        lr = tf.train.exponential_decay(learning_rate,
                                        global_step,
                                        decay_steps,
                                        decay_rate, staircase=staircase)
        tf.summary.scalar('2learning_rate', lr)
        # Create the Adam optimizer with the given (decayed) learning rate.
        optimizer = tf.train.AdamOptimizer(lr)
        train_op = optimizer.minimize(loss, global_step=global_step, var_list=var_list)
        return train_op

    def loss(self, logits, labels, class_weights=None):
        self.print_tensor_shape(logits, 'logits shape')
        self.print_tensor_shape(labels, 'labels shape')
        # return tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))
        return tf.compat.v1.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)
        # return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))

    def discriminator_lr(self, images=None, data_tuple=None, num_labels=2, keep_prob=1, is_training=False, reuse=False, ps_device="/cpu:0", w_device="/gpu:0"):
        # input: tensor of images
        # output: tensor of computed logits
        if(data_tuple):
            images = data_tuple[0]
        self.print_tensor_shape(images, "images")
        shape = tf.shape(images)
        batch_size = shape[0]
        with tf.variable_scope("discriminator") as scope:
            if(reuse):
                scope.reuse_variables()
            x = tf.layers.batch_normalization(images, training=is_training)
            with tf.variable_scope("block0"):
                x = self.convolution2d(x, name="conv1", filter_shape=[3,3,self.num_channels,128], strides=[1,1,1,1], padding="SAME", use_bias=False, activation=None, ps_device=ps_device, w_device=w_device)
                x = tf.layers.batch_normalization(x, training=is_training)
                x = tf.nn.leaky_relu(x)
                x = self.avg_pool(x, name="avg_pool_op", kernel=[1,3,3,1], strides=[1,2,2,1], ps_device=ps_device, w_device=w_device)
            return x

    def loss_high(self, logits, labels):
        shape = tf.shape(logits)
        batch_size = shape[0]
        labels_conv = self.discriminator_lr(images=labels)
        # labels_conv = tf.reshape(labels_conv, [batch_size, -1])
        logits_conv = self.discriminator_lr(images=logits, reuse=True)
        # logits_conv = tf.reshape(logits_conv, [batch_size, -1])
        # labels = tf.math.subtract(tf.math.multiply(tf.math.divide(tf.math.subtract(labels, self.data_description["image"]["min"]), self.data_description["image"]["max"] - self.data_description["image"]["min"]), 2.0), 1.0)
        # labels = tf.layers.batch_normalization(labels, training=True)
        # logits_flat = tf.reshape(logits, [batch_size, -1])
        # labels_flat = tf.reshape(labels, [batch_size, -1])
        # return self.emd(labels_conv, logits_conv)
        return self.emd(logits_conv, labels_conv)

    def restore_variables(self, restore_all=True):
        vars_train = tf.trainable_variables()
        if(restore_all):
            return vars_train
        return [var for var in vars_train if 'generator' in var.name and 'generator_512' not in var.name or 'discriminator' in var.name and 'discriminator_512' not in var.name]

    def prediction_type(self):
        return "image"

    def get_discriminator_vars(self):
        vars_train = tf.trainable_variables()
        vars_dis = [var for var in vars_train if 'discriminator_512' in var.name]
        for var in vars_dis:
            print('dis', var.name)
        return vars_dis

    def get_generator_vars(self):
        vars_train = tf.trainable_variables()
        vars_gen = [var for var in vars_train if 'higher' in var.name]
        for var in vars_gen:
            print('gen', var.name)
        return vars_gen
# extras/client.py (network2030/NewIP-Linux, Apache-2.0)
# SPDX-License-Identifier: Apache-2.0-only
# Copyright (c) 2019-2022 @bhaskar792
from New_IP.setup import Setup
from New_IP.sender import Sender

setup_obj = Setup()
setup_obj.setup_topology()
setup_obj.start_receiver()

with setup_obj.h1:
    sender_obj = Sender()
    delay = 500

    # IPv4 to IPv6
    sender_obj.make_packet(
        src_addr_type="ipv4",
        src_addr="10.0.1.2",
        dst_addr_type="ipv6",
        dst_addr="10::2:2",
        content="ipv4 to ipv6 from h1 to h2 more latency",
    )
    sender_obj.insert_contract(
        contract_type="latency_based_forwarding", params=[0, 800, 300, 3]
    )  # min_delay, max_delay, fib_todelay, fib_tohops
    sender_obj.send_packet(iface="h1_r1", show_pkt=True)

    sender_obj.make_packet(
        src_addr_type="ipv4",
        src_addr="10.0.1.2",
        dst_addr_type="ipv6",
        dst_addr="10::2:2",
        content="ipv4 to ipv6 from h1 to h2 more latency",
    )
    sender_obj.insert_contract(
        contract_type="latency_based_forwarding", params=[500, 800, 300, 3]
    )  # min_delay, max_delay, fib_todelay, fib_tohops
    sender_obj.send_packet(iface="h1_r1", show_pkt=True)

    sender_obj.make_packet(
        src_addr_type="ipv4",
        src_addr="10.0.1.2",
        dst_addr_type="ipv6",
        dst_addr="10::2:2",
        content="ipv4 to ipv6 from h1 to h2 less latency",
    )
    sender_obj.insert_contract(
        contract_type="latency_based_forwarding", params=[350, 380, 300, 3]
    )  # min_delay, max_delay, fib_todelay, fib_tohops
    sender_obj.send_packet(iface="h1_r1", show_pkt=True)

    sender_obj.make_packet(
        src_addr_type="ipv4",
        src_addr="10.0.1.2",
        dst_addr_type="ipv6",
        dst_addr="10::2:2",
        content="ipv4 to ipv6 from h1 to h2 much more latency",
    )
    sender_obj.insert_contract(
        contract_type="latency_based_forwarding", params=[2000, 5000, 300, 3]
    )  # min_delay, max_delay, fib_todelay, fib_tohops
    sender_obj.send_packet(iface="h1_r1", show_pkt=True)

    # 8bit to 8bit
    sender_obj.make_packet(
        src_addr_type="8bit",
        src_addr=0b1,
        dst_addr_type="8bit",
        dst_addr=0b10,
        content="8bit to 8bit from h1 to h2",
    )
    sender_obj.insert_contract(contract_type="max_delay_forwarding", params=[delay])
    sender_obj.send_packet(iface="h1_r1")

    # 8bit to IPv4
    sender_obj.make_packet(
        src_addr_type="8bit",
        src_addr=0b1,
        dst_addr_type="ipv4",
        dst_addr="10.0.3.2",
        content="8bit to ipv4 from h1 to h3",
    )
    sender_obj.insert_contract(
        contract_type="latency_based_forwarding", params=[500, 800, 300, 3]
    )  # min_delay, max_delay, fib_todelay, fib_tohops
    sender_obj.send_packet(iface="h1_r1")

    sender_obj.make_packet(
        src_addr_type="8bit",
        src_addr=0b1,
        dst_addr_type="ipv4",
        dst_addr="10.0.3.2",
        content="8bit to ipv4 from h1 to h3",
    )
    sender_obj.send_packet(iface="h1_r1")

    # IPv4 to IPv4
    sender_obj.make_packet(
        src_addr_type="ipv4",
        src_addr="10.0.1.2",
        dst_addr_type="ipv4",
        dst_addr="10.0.2.2",
        content="ipv4 to ipv4 from h1 to h2",
    )
    sender_obj.insert_contract(contract_type="max_delay_forwarding", params=[delay])
    sender_obj.send_packet(iface="h1_r1", show_pkt=True)

setup_obj.show_stats()
# model/utils/trajs2map.py (zekunhao1995/ControllableVideoGen, Apache-2.0)
import numpy as np
import torch
from torch.autograd import Variable


def trajs2map(trajs, height, width):  # traj: [N, S/E, X/Y]
    # kpmap_seq = np.zeros([num_frames, 6,self.height,self.width], dtype=np.float32)
    # height = kpmap_seq.size(2)
    # width = kpmap_seq.size(3)
    kpmap_seq = Variable(torch.zeros(1, 6, height, width).cuda())
    for traj_no in range(len(trajs)):
        kp_start_x = trajs[traj_no][0][0]
        kp_start_y = trajs[traj_no][0][1]
        kp_end_x = trajs[traj_no][1][0]
        kp_end_y = trajs[traj_no][1][1]
        kp_start_x_int = int(max(min(kp_start_x, width), 0))
        kp_start_y_int = int(max(min(kp_start_y, height), 0))
        kp_dx = kp_end_x - kp_start_x
        kp_dy = kp_end_y - kp_start_y
        kpmap_seq[0, 0, kp_start_y_int, kp_start_x_int] = 1.0
        kpmap_seq[0, 1, kp_start_y_int, kp_start_x_int] = kp_dy/16.
        kpmap_seq[0, 2, kp_start_y_int, kp_start_x_int] = kp_dx/16.
        # vid_seq[0,1,kp_start_y,kp_start_x] = 0.5
        kp_end_x_int = int(max(min(kp_end_x, width), 0))
        kp_end_y_int = int(max(min(kp_end_y, height), 0))
        kp_dx2 = kp_start_x - kp_end_x
        kp_dy2 = kp_start_y - kp_end_y
        kpmap_seq[0, 3, kp_end_y_int, kp_end_x_int] = 1.0
        kpmap_seq[0, 4, kp_end_y_int, kp_end_x_int] = kp_dy2/16.
        kpmap_seq[0, 5, kp_end_y_int, kp_end_x_int] = kp_dx2/16.
    return kpmap_seq


def trajs2map2(trajs, height, width):  # traj: [N, S/E, X/Y]
    # kpmap_seq = np.zeros([num_frames, 6,self.height,self.width], dtype=np.float32)
    # height = kpmap_seq.size(2)
    # width = kpmap_seq.size(3)
    kpmap_seq = Variable(torch.zeros(1, 6, height, width).cuda())
    for traj_no in range(trajs.shape[0]):
        kp_start_x = trajs[traj_no, 0, 0]
        kp_start_y = trajs[traj_no, 0, 1]
        kp_end_x = trajs[traj_no, 1, 0]
        kp_end_y = trajs[traj_no, 1, 1]
        kp_start_x_int = int(max(min(kp_start_x, width), 0))
        kp_start_y_int = int(max(min(kp_start_y, height), 0))
        kp_dx = kp_end_x - kp_start_x
        kp_dy = kp_end_y - kp_start_y
        kpmap_seq[0, 0, kp_start_y_int, kp_start_x_int] = 1.0
        kpmap_seq[0, 1, kp_start_y_int, kp_start_x_int] = kp_dy/16.
        kpmap_seq[0, 2, kp_start_y_int, kp_start_x_int] = kp_dx/16.
        # vid_seq[0,1,kp_start_y,kp_start_x] = 0.5
        kp_end_x_int = int(max(min(kp_end_x, width), 0))
        kp_end_y_int = int(max(min(kp_end_y, height), 0))
        kp_dx2 = kp_start_x - kp_end_x
        kp_dy2 = kp_start_y - kp_end_y
        kpmap_seq[0, 3, kp_end_y_int, kp_end_x_int] = 1.0
        kpmap_seq[0, 4, kp_end_y_int, kp_end_x_int] = kp_dy2/16.
        kpmap_seq[0, 5, kp_end_y_int, kp_end_x_int] = kp_dx2/16.
    return kpmap_seq
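
The two functions above encode each trajectory's start and end points into a 6-channel keypoint/displacement map. A minimal NumPy sketch of the same encoding (a hypothetical helper, not part of the original file; note it clamps indices to `width-1`/`height-1`, whereas the original clamps to `width`/`height` and can index one past the edge):

```python
import numpy as np

def trajs_to_map_np(trajs, height, width):
    # Channels: [start mask, start dy/16, start dx/16, end mask, end dy/16, end dx/16]
    kpmap = np.zeros((6, height, width), dtype=np.float32)
    for (sx, sy), (ex, ey) in trajs:
        sxi = int(max(min(sx, width - 1), 0))
        syi = int(max(min(sy, height - 1), 0))
        exi = int(max(min(ex, width - 1), 0))
        eyi = int(max(min(ey, height - 1), 0))
        kpmap[0, syi, sxi] = 1.0
        kpmap[1, syi, sxi] = (ey - sy) / 16.0  # forward displacement
        kpmap[2, syi, sxi] = (ex - sx) / 16.0
        kpmap[3, eyi, exi] = 1.0
        kpmap[4, eyi, exi] = (sy - ey) / 16.0  # backward displacement
        kpmap[5, eyi, exi] = (sx - ex) / 16.0
    return kpmap

m = trajs_to_map_np([((2.0, 3.0), (6.0, 3.0))], 8, 8)
print(m[2, 3, 2], m[5, 3, 6])  # 0.25 -0.25
```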
# fergus/models/common/__init__.py (braingineer/neural_tree_grammar, MIT)
from __future__ import absolute_import
from .embeddings import *
from .loggers import *
# cards/serializers/__init__.py (atalaydev/cardify, MIT)
from .card import CardSerializer
# utils/payload/__init__.py (sub-ninja/tivan, MIT)
from payload_pb2 import *
# upwork/routers/finreport.py (alexandru-grajdeanu/python-upwork, Apache-2.0 / BSD-3-Clause)
# Python bindings to Upwork API
# python-upwork version 0.5
# (C) 2010-2015 Upwork

from upwork.namespaces import GdsNamespace


class Finreports(GdsNamespace):

    api_url = 'finreports/'
    version = 2

    def get_provider_billings(self, provider_id, query):
        """
        Generate Billing Reports for a Specific Provider.

        *Parameters:*
          :provider_id:  Provider ID
          :query:        The GDS query string
        """
        url = 'providers/{0}/billings'.format(provider_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result

    def get_provider_teams_billings(self, provider_team_id, query):
        """
        Generate Billing Reports for a Specific Provider's Team.
        The authenticated user must be an admin or
        a staffing manager of the team.

        *Parameters:*
          :provider_team_id:  Provider's Team ID
          :query:             The GDS query string
        """
        url = 'provider_teams/{0}/billings'.format(provider_team_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result

    def get_provider_companies_billings(self, provider_company_id, query):
        """
        Generate Billing Reports for a Specific Provider's Company.
        The authenticated user must be the company owner.

        *Parameters:*
          :provider_company_id:  Provider's Company ID
          :query:                The GDS query string
        """
        url = 'provider_companies/{0}/billings'.format(provider_company_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result

    def get_provider_earnings(self, provider_id, query):
        """
        Generate Earning Reports for a Specific Provider.

        *Parameters:*
          :provider_id:  Provider ID
          :query:        The GDS query string
        """
        url = 'providers/{0}/earnings'.format(provider_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result

    def get_provider_teams_earnings(self, provider_team_id, query):
        """
        Generate Earning Reports for a Specific Provider's Team.

        *Parameters:*
          :provider_team_id:  Provider's Team ID
          :query:             The GDS query string
        """
        url = 'provider_teams/{0}/earnings'.format(provider_team_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result

    def get_provider_companies_earnings(self, provider_company_id, query):
        """
        Generate Earning Reports for a Specific Provider's Company.

        *Parameters:*
          :provider_company_id:  Provider's Company ID
          :query:                The GDS query string
        """
        url = 'provider_companies/{0}/earnings'.format(provider_company_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result

    def get_buyer_teams_billings(self, buyer_team_id, query):
        """
        Generate Billing Reports for a Specific Buyer's Team.
        The authenticated user must be an admin or
        a staffing manager of the team.

        *Parameters:*
          :buyer_team_id:  Buyer's Team ID
          :query:          The GDS query string
        """
        url = 'buyer_teams/{0}/billings'.format(buyer_team_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result

    def get_buyer_companies_billings(self, buyer_company_id, query):
        """
        Generate Billing Reports for a Specific Buyer's Company.
        The authenticated user must be the company owner.

        *Parameters:*
          :buyer_company_id:  Buyer's Company ID
          :query:             The GDS query string
        """
        url = 'buyer_companies/{0}/billings'.format(buyer_company_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result

    def get_buyer_teams_earnings(self, buyer_team_id, query):
        """
        Generate Earning Reports for a Specific Buyer's Team.

        *Parameters:*
          :buyer_team_id:  Buyer's Team ID
          :query:          The GDS query string
        """
        url = 'buyer_teams/{0}/earnings'.format(buyer_team_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result

    def get_buyer_companies_earnings(self, buyer_company_id, query):
        """
        Generate Earning Reports for a Specific Buyer's Company.

        *Parameters:*
          :buyer_company_id:  Buyer's Company ID
          :query:             The GDS query string
        """
        url = 'buyer_companies/{0}/earnings'.format(buyer_company_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result

    def get_financial_entities(self, accounting_id, query):
        """
        Generate Financial Reports for a Specific Account.

        *Parameters:*
          :accounting_id:  ID of an Accounting entity
          :query:          The GDS query string
        """
        url = 'financial_accounts/{0}'.format(accounting_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result

    def get_financial_entities_provider(self, provider_id, query):
        """
        Generate Financial Reports for an owned Account.

        *Parameters:*
          :provider_id:  Provider ID
          :query:        The GDS query string
        """
        url = 'financial_account_owner/{0}'.format(provider_id)
        tq = str(query)
        result = self.get(url, data={'tq': tq})
        return result
# torch2cmsis/__init__.py (BCJuan/torch2cmsis, Apache-2.0)
from torch2cmsis.converter import CMSISConverter
# auv_control_pi/tests/test_ahrs.py (adrienemery/auv-control-pi, MIT)
from ..components.ahrs import AHRS
# src/utils/dict2namedtuple.py (georgezywang/BFT-RLForensics, Apache-2.0)
"""
Code adapted from https://github.com/TonghanWang/ROMA
"""
from collections import namedtuple
def convert(dictionary):
    return namedtuple('GenericDict', dictionary.keys())(**dictionary)
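
A minimal usage sketch of `convert()`: the namedtuple's fields are taken from the dict keys, so the values become attribute accesses (the dict contents here are illustrative only):

```python
from collections import namedtuple

def convert(dictionary):
    return namedtuple('GenericDict', dictionary.keys())(**dictionary)

args = convert({"lr": 0.1, "epochs": 3})
print(args.lr, args.epochs)  # 0.1 3
```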
# pq1.py (nkukadiya89/learn-python, MIT)
def foo(x):
    def too(y):
        return x * y
    return too


foo(3)(5)  # evaluates to 15: too closes over x=3, but the result is not printed
# gigasecond/gigasecond.py (KrishanBhasin/exercism, MIT)
from datetime import timedelta
def add_gigasecond(date_in):
    return date_in + timedelta(seconds=10**9)
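
For reference, a gigasecond is 10**9 seconds, roughly 31.7 years. A quick self-contained check with a concrete date:

```python
from datetime import datetime, timedelta

def add_gigasecond(date_in):
    return date_in + timedelta(seconds=10**9)

print(add_gigasecond(datetime(2011, 4, 25)))  # 2043-01-01 01:46:40
```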
# tests/functional/test_geo_distance.py (timgates42/pyeqs, MIT)
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from sure import scenario

from pyeqs import QuerySet
from pyeqs.dsl import GeoDistance
from tests.helpers import prepare_data, cleanup_data, add_document


@scenario(prepare_data, cleanup_data)
def test_geo_distance_search_dict(context):
    """
    Search with geo distance filter with dictionary
    """
    # When I create a queryset
    t = QuerySet("localhost", index="foo")

    # And there are records
    add_document("foo", {"location": {"lat": 1.1, "lon": 2.1}})
    add_document("foo", {"location": {"lat": 40.1, "lon": 80.1}})

    # And I filter for distance
    t.filter(GeoDistance({"lat": 1.0, "lon": 2.0}, "20mi"))
    results = t[0:10]

    # Then I get the expected results
    len(results).should.equal(1)


@scenario(prepare_data, cleanup_data)
def test_geo_distance_search_string(context):
    """
    Search with geo distance filter with string
    """
    # When I create a queryset
    t = QuerySet("localhost", index="foo")

    # And there are records
    add_document("foo", {"location": {"lat": 1.1, "lon": 2.1}})
    add_document("foo", {"location": {"lat": 40.1, "lon": 80.1}})

    # And I filter for distance
    t.filter(GeoDistance("1.0,2.0", "20mi"))
    results = t[0:10]

    # Then I get the expected results
    len(results).should.equal(1)


@scenario(prepare_data, cleanup_data)
def test_geo_distance_search_array(context):
    """
    Search with geo distance filter with array
    """
    # When I create a queryset
    t = QuerySet("localhost", index="foo")

    # And there are records
    add_document("foo", {"location": {"lat": 1.1, "lon": 2.1}})
    add_document("foo", {"location": {"lat": 40.1, "lon": 80.1}})

    # And I filter for distance
    t.filter(GeoDistance([2.0, 1.0], "20mi"))
    results = t[0:10]

    # Then I get the expected results
    len(results).should.equal(1)


@scenario(prepare_data, cleanup_data)
def test_geo_distance_search_with_field_name(context):
    """
    Search with geo distance filter with field_name
    """
    # When I create a queryset
    t = QuerySet("localhost", index="foo")

    # And there are records
    add_document("foo", {"foo_loc": {"lat": 1.1, "lon": 2.1}})
    add_document("foo", {"foo_loc": {"lat": 40.1, "lon": 80.1}})

    # And I filter for distance
    t.filter(GeoDistance({"lat": 1.0, "lon": 2.0}, "20mi", field_name="foo_loc"))
    results = t[0:10]

    # Then I get the expected results
    len(results).should.equal(1)
# algorithms/longest_common_subsequence.py (rodrigoadfaria/playground, MIT)
import numpy as np
import quicksort as qs
def lcs_length(X, Y):
'''
Computes the longest common subsequence of two vectors X and Y, keeping
the size of those subsequences in a matrix c and the path to the longest
subsequence in another matrix of same size b.
We sum up 1 to m and n due to indexing of the original algorithm.
The characters are:
D - diagonal
U - upper
L - left
Based on CLRS 2ed p353.
'''
m = len(X) + 1
n = len(Y) + 1
c = [[None for j in range(n)] for i in range(m)] # initializing the c matrix of size mxn
b = [[None for j in range(n)] for i in range(m)] # initializing the b matrix of size mxn
for i in range(1, m):
c[i][0] = 0
for j in range(0, n):
c[0][j] = 0
for i in range(1, m):
for j in range(1, n):
if X[i-1] == Y[j-1]: # once we use the the length m and n, we have to subtract 1 to get all values in X, Y vectors
c[i][j] = c[i-1][j-1] + 1
b[i][j] = 'D'
elif c[i-1][j] >= c[i][j-1]:
c[i][j] = c[i-1][j]
b[i][j] = 'U'
else:
c[i][j] = c[i][j-1]
b[i][j] = 'L'
return c, b
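The recurrence above can be sanity-checked with a minimal self-contained version that only keeps the length table (the helper name lcs_table and the CLRS example strings are illustrative, not part of this module):

```python
def lcs_table(X, Y):
    # c[i][j] holds the LCS length of X[:i] and Y[:j]; row/column 0 stay 0.
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

# Classic CLRS pair: an LCS of ABCBDAB and BDCABA has length 4 (e.g. BCBA).
print(lcs_table("ABCBDAB", "BDCABA")[-1][-1])  # → 4
```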
def lcs_max_sum_length(X, Y):
    '''
    Variant of lcs_length that maximizes the sum of the matched elements
    instead of counting them: on a match, c[i][j] accumulates X[i-1] rather
    than adding 1. The path is kept in a matrix b of the same size.
    We add 1 to m and n due to the 1-based indexing of the original algorithm.
    The direction characters are:
    D - diagonal
    U - upper
    L - left
    Based on CLRS 2ed p353.
    '''
m = len(X) + 1
n = len(Y) + 1
c = [[None for j in range(n)] for i in range(m)] # initializing the c matrix of size mxn
b = [[None for j in range(n)] for i in range(m)] # initializing the b matrix of size mxn
for i in range(1, m):
c[i][0] = 0
for j in range(0, n):
c[0][j] = 0
for i in range(1, m):
for j in range(1, n):
            if X[i-1] == Y[j-1]: # m and n are the true lengths plus 1, so subtract 1 to index into X and Y
c[i][j] = c[i-1][j-1] + X[i-1]
b[i][j] = 'D'
elif c[i-1][j] >= c[i][j-1]:
c[i][j] = c[i-1][j]
b[i][j] = 'U'
else:
c[i][j] = c[i][j-1]
b[i][j] = 'L'
return c, b
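The weighted variant differs from lcs_length only in the match case, where it adds X[i-1] instead of 1. A minimal self-contained sketch of that recurrence (the name weighted_lcs is illustrative), traced on the same [2, 4, 7, 5, 9] input that main() feeds it:

```python
def weighted_lcs(X, Y):
    # Same DP as lcs_max_sum_length: a match contributes the element's value.
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + X[i - 1]
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

# Against the sorted copy, [2, 4, 7, 9] (sum 22) beats [2, 4, 5, 9] (sum 20).
print(weighted_lcs([2, 4, 7, 5, 9], [2, 4, 5, 7, 9]))  # → 22
```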
def print_lcs(b, X, i, j):
    '''
    Prints the longest common subsequence of X and Y in the proper, forward
    order, recursively.
    Note the 'print(X[i-1], ...)' line, where we subtract 1 due to the
    algorithm's 1-based indexing.
    Based on CLRS 2ed p355.
    '''
    if i == 0 or j == 0:
        return
    if b[i][j] == 'D':
        print_lcs(b, X, i-1, j-1)
        print(X[i-1], end=' ')
    elif b[i][j] == 'U':
        print_lcs(b, X, i-1, j)
    else:
        print_lcs(b, X, i, j-1)
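The same traversal can collect the subsequence into a list instead of printing it. This sketch rebuilds both tables so it runs on its own; the names lcs_tables and collect_lcs are illustrative:

```python
def lcs_tables(X, Y):
    # Rebuild c and b with the same recurrence and tie-breaking as lcs_length.
    m, n = len(X) + 1, len(Y) + 1
    c = [[0] * n for _ in range(m)]
    b = [[None] * n for _ in range(m)]
    for i in range(1, m):
        for j in range(1, n):
            if X[i - 1] == Y[j - 1]:
                c[i][j], b[i][j] = c[i - 1][j - 1] + 1, 'D'
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j], b[i][j] = c[i - 1][j], 'U'
            else:
                c[i][j], b[i][j] = c[i][j - 1], 'L'
    return c, b

def collect_lcs(b, X, i, j, out):
    # Same traversal as print_lcs, appending to a list instead of printing.
    if i == 0 or j == 0:
        return
    if b[i][j] == 'D':
        collect_lcs(b, X, i - 1, j - 1, out)
        out.append(X[i - 1])
    elif b[i][j] == 'U':
        collect_lcs(b, X, i - 1, j, out)
    else:
        collect_lcs(b, X, i, j - 1, out)

c, b = lcs_tables("ABCBDAB", "BDCABA")
out = []
collect_lcs(b, "ABCBDAB", 7, 6, out)
print(out)  # → ['B', 'C', 'B', 'A']
```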
def build_longest_increasing_subsequence(v, n):
    '''
    Given an array v of integers, copies it into an auxiliary array u and
    sorts the copy with a comparison sort (n lg n).
    It then uses the LCS algorithm to print the longest common subsequence
    of v and u, which is the longest increasing subsequence of v.
    '''
    u = [None] * n
    for i in range(n):
        u[i] = v[i]
    qs.quicksort(u, 0, len(u)-1)
    c, b = lcs_length(v, u)
    print_lcs(b, v, len(v), len(u))
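The idea behind this function can be sketched in a few self-contained lines, using Python's built-in sorted in place of the quicksort module (the name lis_length is illustrative; the reduction assumes distinct elements):

```python
def lis_length(v):
    # LIS(v) equals LCS(v, sorted(v)) when the elements are distinct.
    u = sorted(v)
    m, n = len(v), len(u)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if v[i - 1] == u[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

# Longest increasing subsequence is 2, 4, 5, 7, 8, 15 (length 6).
print(lis_length([2, 4, 5, 7, 1, 3, 8, 6, 15]))  # → 6
```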
def get_lcs_max_sum(X, m, n, b, max_length):
    '''
    Retrieves the maximum-sum common subsequence of X and Y in the proper,
    forward order, iteratively.
    Z is filled from the back, so only its last l entries are used; we
    subtract 1 when reading X due to the algorithm's 1-based indexing.
    Based on CLRS 2ed p355.
    '''
    k = max_length
    i = m
    j = n
    Z = [None] * k
    l = 0
    while i > 0 and j > 0:
        if b[i][j] == 'D':
            Z[k-1] = X[i-1]
            k = k-1
            i = i-1
            j = j-1
            l = l+1
        elif b[i][j] == 'U': # 'U' means the value came from the row above, so move up
            i = i-1
        else: # 'L': move left
            j = j-1
    return Z[k:], l # drop the unfilled leading slots
def build_max_sum_lcs(X, n):
    '''
    Given an array X of integers, copies it into an auxiliary array Y and
    sorts the copy with a comparison sort (n lg n).
    It then uses the LCS strategy to print the common subsequence of X and Y
    (an increasing subsequence of X) whose element sum is maximal.
    '''
    Y = [None] * n
    for i in range(n):
        Y[i] = X[i]
    qs.quicksort(Y, 0, len(Y)-1)
    c, b = lcs_max_sum_length(X, Y)
    print(np.array(c))
    print(np.array(b))
    print('LCS with max sum and its length:')
    print(get_lcs_max_sum(X, len(X), len(Y), b, len(X)))
def main():
    X = ['A', 'B', 'C', 'B', 'D', 'A', 'B']
    Y = ['B', 'D', 'C', 'A', 'B', 'A']
    c, b = lcs_length(X, Y)
    print('===============================================')
    print('Longest Common Subsequence')
    print('===============================================')
    print('X = ', X)
    print('Y = ', Y)
    print('c = '); print(np.array(c))
    print('b = '); print(np.array(b))
    print_lcs(b, X, len(X), len(Y))
    print()
    X = [2,4,5,7,1,3,8,6,15]
    print()
    print('===============================================')
    print('Longest Increasing Subsequence')
    print('===============================================')
    print('X = ', X)
    build_longest_increasing_subsequence(X, len(X))
    print()
    #X = [42,4,5,7,1]
    X = [2,4,7,5,9]
    print()
    print('===============================================')
    print('Max Sum Longest Common Subsequence')
    print('===============================================')
    print('X = ', X)
    build_max_sum_lcs(X, len(X))
if __name__=="__main__":
main() | 25.356784 | 117 | 0.573325 | 985 | 5,046 | 2.893401 | 0.129949 | 0.015439 | 0.021053 | 0.038596 | 0.827719 | 0.764561 | 0.728421 | 0.707368 | 0.691228 | 0.658947 | 0 | 0.027228 | 0.228498 | 5,046 | 199 | 118 | 25.356784 | 0.704855 | 0.06956 | 0 | 0.508333 | 0 | 0 | 0.152255 | 0.095626 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.016667 | null | null | 0.258333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2b4638194427a4958699a7e8b7cda5c6906dcdbc | 28 | py | Python | collection/clibs/__init__.py | WilkinsonK/python-collections | 2b2307a7f3f560be2a095eb59e28d51344db1772 | [
"MIT"
] | null | null | null | collection/clibs/__init__.py | WilkinsonK/python-collections | 2b2307a7f3f560be2a095eb59e28d51344db1772 | [
"MIT"
] | null | null | null | collection/clibs/__init__.py | WilkinsonK/python-collections | 2b2307a7f3f560be2a095eb59e28d51344db1772 | [
"MIT"
] | null | null | null | from clibs.api import _cmath | 28 | 28 | 0.857143 | 5 | 28 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |