hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2541a12a12b71082c6ac8d4714f94e5033661e93 | 5,977 | py | Python | {{cookiecutter.project_name}}/TensorFlow_imagenet/tensorflow_imagenet.py | Bhaskers-Blu-Org2/DistributedDeepLearning | 2f407881b49415188ca2e38e5331781962939251 | [
"MIT"
] | 45 | 2019-06-13T17:38:11.000Z | 2022-03-24T00:32:38.000Z | {{cookiecutter.project_name}}/TensorFlow_imagenet/tensorflow_imagenet.py | Hrashid789/DistributedDeepLearning | 2f407881b49415188ca2e38e5331781962939251 | [
"MIT"
] | 11 | 2019-06-06T15:50:18.000Z | 2019-10-21T08:45:26.000Z | {{cookiecutter.project_name}}/TensorFlow_imagenet/tensorflow_imagenet.py | Hrashid789/DistributedDeepLearning | 2f407881b49415188ca2e38e5331781962939251 | [
"MIT"
] | 10 | 2019-07-01T04:57:37.000Z | 2020-09-29T07:04:05.000Z | """Module for running TensorFlow training on Imagenet data
"""
from invoke import task, Collection
import os
from config import load_config
_BASE_PATH = os.path.dirname(os.path.abspath(__file__))
env_values = load_config()
@task
def submit_synthetic(c, node_count=int(env_values["CLUSTER_MAX_NODES"]), epochs=1):
"""Submit TensorFlow training job using synthetic imagenet data to remote cluster
Args:
node_count (int, optional): The number of nodes to use in cluster. Defaults to env_values['CLUSTER_MAX_NODES'].
epochs (int, optional): Number of epochs to run training for. Defaults to 1.
"""
from aml_compute import TFExperimentCLI
exp = TFExperimentCLI("synthetic_images_remote")
run = exp.submit(
os.path.join(_BASE_PATH, "src"),
"resnet_main.py",
{"--epochs": epochs},
node_count=node_count,
dependencies_file=os.path.join(_BASE_PATH, "environment_gpu.yml"),
wait_for_completion=True,
)
print(run)
@task
def submit_synthetic_local(c, epochs=1):
"""Submit TensorFlow training job using synthetic imagenet data for local execution
Args:
epochs (int, optional): Number of epochs to run training for. Defaults to 1.
"""
from aml_compute import TFExperimentCLI
exp = TFExperimentCLI("synthetic_images_local")
run = exp.submit_local(
os.path.join(_BASE_PATH, "src"),
"resnet_main.py",
{"--epochs": epochs},
dependencies_file=os.path.join(_BASE_PATH, "environment_gpu.yml"),
wait_for_completion=True,
)
print(run)
@task
def submit_images(c, node_count=int(env_values["CLUSTER_MAX_NODES"]), epochs=1):
"""Submit TensorFlow training job using real imagenet data to remote cluster
Args:
node_count (int, optional): The number of nodes to use in cluster. Defaults to env_values['CLUSTER_MAX_NODES'].
epochs (int, optional): Number of epochs to run training for. Defaults to 1.
"""
from aml_compute import TFExperimentCLI
exp = TFExperimentCLI("real_images_remote")
run = exp.submit(
os.path.join(_BASE_PATH, "src"),
"resnet_main.py",
{
"--training_data_path": "{datastore}/train",
"--validation_data_path": "{datastore}/validation",
"--epochs": epochs,
"--data_type": "images",
"--data-format": "channels_first",
},
node_count=node_count,
dependencies_file=os.path.join(_BASE_PATH, "environment_gpu.yml"),
wait_for_completion=True,
)
print(run)
@task
def submit_images_local(c, epochs=1):
"""Submit TensorFlow training job using real imagenet data for local execution
Args:
epochs (int, optional): Number of epochs to run training for. Defaults to 1.
"""
from aml_compute import TFExperimentCLI
exp = TFExperimentCLI("real_images_local")
run = exp.submit_local(
os.path.join(_BASE_PATH, "src"),
"resnet_main.py",
{
"--training_data_path": "/data/train",
"--validation_data_path": "/data/validation",
"--epochs": epochs,
"--data_type": "images",
"--data-format": "channels_first",
},
dependencies_file=os.path.join(_BASE_PATH, "environment_gpu.yml"),
docker_args=["-v", f"{env_values['DATA']}:/data"],
wait_for_completion=True,
)
print(run)
@task
def submit_tfrecords(c, node_count=int(env_values["CLUSTER_MAX_NODES"]), epochs=1):
"""Submit TensorFlow training job using real imagenet data as tfrecords to remote cluster
Args:
node_count (int, optional): The number of nodes to use in cluster. Defaults to env_values['CLUSTER_MAX_NODES'].
epochs (int, optional): Number of epochs to run training for. Defaults to 1.
"""
from aml_compute import TFExperimentCLI
exp = TFExperimentCLI("real_tfrecords_remote")
run = exp.submit(
os.path.join(_BASE_PATH, "src"),
"resnet_main.py",
{
"--training_data_path": "{datastore}/tfrecords/train",
"--validation_data_path": "{datastore}/tfrecords/validation",
"--epochs": epochs,
"--data_type": "tfrecords",
"--data-format": "channels_first",
},
node_count=node_count,
dependencies_file=os.path.join(_BASE_PATH, "environment_gpu.yml"),
wait_for_completion=True,
)
print(run)
@task
def submit_tfrecords_local(c, epochs=1):
"""Submit TensorFlow training job using real imagenet data as tfrecords for local execution
Args:
epochs (int, optional): Number of epochs to run training for. Defaults to 1.
"""
from aml_compute import TFExperimentCLI
exp = TFExperimentCLI("real_tfrecords_local")
run = exp.submit_local(
os.path.join(_BASE_PATH, "src"),
"resnet_main.py",
{
"--training_data_path": "/data/tfrecords/train",
"--validation_data_path": "/data/tfrecords/validation",
"--epochs": epochs,
"--data_type": "tfrecords",
"--data-format": "channels_first",
},
dependencies_file=os.path.join(_BASE_PATH, "environment_gpu.yml"),
docker_args=["-v", f"{env_values['DATA']}:/data"],
wait_for_completion=True,
)
print(run)
remote_collection = Collection("remote")
remote_collection.add_task(submit_images, "images")
remote_collection.add_task(submit_tfrecords, "tfrecords")
remote_collection.add_task(submit_synthetic, "synthetic")
local_collection = Collection("local")
local_collection.add_task(submit_images_local, "images")
local_collection.add_task(submit_tfrecords_local, "tfrecords")
local_collection.add_task(submit_synthetic_local, "synthetic")
submit_collection = Collection("submit", local_collection, remote_collection)
namespace = Collection("tf_imagenet", submit_collection)
| 33.768362 | 119 | 0.661034 | 723 | 5,977 | 5.217151 | 0.116183 | 0.022269 | 0.031813 | 0.044539 | 0.885207 | 0.791092 | 0.791092 | 0.791092 | 0.791092 | 0.789502 | 0 | 0.002577 | 0.220847 | 5,977 | 176 | 120 | 33.960227 | 0.807387 | 0.237075 | 0 | 0.577586 | 0 | 0 | 0.247278 | 0.075771 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051724 | false | 0 | 0.077586 | 0 | 0.12931 | 0.051724 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c2a0ee99b15d496c33bf063ce58d2a62ede25145 | 30 | py | Python | micropython.py | PaulskPt/micropython-mcp7940 | f01582214d06a582eacde2db84bd53fead86a850 | [
"MIT"
] | null | null | null | micropython.py | PaulskPt/micropython-mcp7940 | f01582214d06a582eacde2db84bd53fead86a850 | [
"MIT"
] | 10 | 2019-06-20T22:32:48.000Z | 2022-03-01T00:51:31.000Z | micropython.py | PaulskPt/micropython-mcp7940 | f01582214d06a582eacde2db84bd53fead86a850 | [
"MIT"
] | 2 | 2019-07-16T09:38:51.000Z | 2020-01-29T22:33:31.000Z | def const(val):
return val | 15 | 15 | 0.666667 | 5 | 30 | 4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233333 | 30 | 2 | 16 | 15 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c2c667e7a14b7fac157603c78ca5081db3f4c499 | 43 | py | Python | Python/chinese2digits/__init__.py | lai-bluejay/chinese2digits | 3da9c030a8a9ca5f82426e5719aff861109f51c1 | [
"Apache-1.1"
] | 271 | 2018-07-11T11:02:52.000Z | 2022-03-31T01:12:08.000Z | Python/chinese2digits/__init__.py | Geekzhangwei/chinese2digits | 921ac76f051e91768f42e68d77da040305d53cf0 | [
"Apache-1.1"
] | 22 | 2018-11-29T08:34:19.000Z | 2022-03-16T08:20:06.000Z | Python/chinese2digits/__init__.py | Geekzhangwei/chinese2digits | 921ac76f051e91768f42e68d77da040305d53cf0 | [
"Apache-1.1"
] | 52 | 2019-02-22T06:36:03.000Z | 2022-03-10T07:05:08.000Z | from chinese2digits.chinese2digits import * | 43 | 43 | 0.883721 | 4 | 43 | 9.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0.069767 | 43 | 1 | 43 | 43 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c2ca79fe5738a1745a937d67ebd5e853ec6cec9e | 111 | py | Python | pyxmeans/__init__.py | araceli252/pyxmeans | 11c69f88939f44f73b49376c92e932d7c7e6f858 | [
"MIT"
] | 84 | 2015-01-22T22:50:27.000Z | 2021-12-30T07:32:38.000Z | pyxmeans/__init__.py | araceli252/pyxmeans | 11c69f88939f44f73b49376c92e932d7c7e6f858 | [
"MIT"
] | 13 | 2015-01-19T11:47:34.000Z | 2017-12-03T20:24:55.000Z | pyxmeans/__init__.py | araceli252/pyxmeans | 11c69f88939f44f73b49376c92e932d7c7e6f858 | [
"MIT"
] | 39 | 2015-01-13T07:10:01.000Z | 2022-03-21T07:31:43.000Z | from . import _minibatch
from . import benchmark
from .mini_batch import MiniBatch
from .xmeans import XMeans
| 18.5 | 33 | 0.810811 | 15 | 111 | 5.866667 | 0.466667 | 0.227273 | 0.431818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153153 | 111 | 5 | 34 | 22.2 | 0.93617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
66a943c2bc1f4aecbfad9f25af6229384f949d24 | 135 | py | Python | pyrobolearn/tools/interfaces/audio/__init__.py | Pandinosaurus/pyrobolearn | 9cd7c060723fda7d2779fa255ac998c2c82b8436 | [
"Apache-2.0"
] | 2 | 2021-01-21T21:08:30.000Z | 2022-03-29T16:45:49.000Z | pyrobolearn/tools/interfaces/audio/__init__.py | Pandinosaurus/pyrobolearn | 9cd7c060723fda7d2779fa255ac998c2c82b8436 | [
"Apache-2.0"
] | null | null | null | pyrobolearn/tools/interfaces/audio/__init__.py | Pandinosaurus/pyrobolearn | 9cd7c060723fda7d2779fa255ac998c2c82b8436 | [
"Apache-2.0"
] | 1 | 2020-09-29T21:25:39.000Z | 2020-09-29T21:25:39.000Z | # -*- coding: utf-8 -*-
# import audio interfaces
# from .audio import *
# from . import audio
from .speaker import SpeakerInterface
| 16.875 | 37 | 0.696296 | 16 | 135 | 5.875 | 0.5625 | 0.234043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009091 | 0.185185 | 135 | 7 | 38 | 19.285714 | 0.845455 | 0.637037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dd0e23ac376765ebeebea98070e54299bb129f37 | 25 | py | Python | python_modules/dagster/dagster/generate/new_repo/new_repo/solids/__init__.py | chasleslr/dagster | 88907f9473fb8e7a9b1af9a0a8b349d42f4b8153 | [
"Apache-2.0"
] | null | null | null | python_modules/dagster/dagster/generate/new_repo/new_repo/solids/__init__.py | chasleslr/dagster | 88907f9473fb8e7a9b1af9a0a8b349d42f4b8153 | [
"Apache-2.0"
] | null | null | null | python_modules/dagster/dagster/generate/new_repo/new_repo/solids/__init__.py | chasleslr/dagster | 88907f9473fb8e7a9b1af9a0a8b349d42f4b8153 | [
"Apache-2.0"
] | null | null | null | from .hello import hello
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
06d3c56d8f83d304750b8f3bcc0b0da70e8aa16e | 16,431 | py | Python | src/core/test/test_query_rewriter.py | RogerTangos/datahub-stub | 8c3e89c792e45ccc9ad067fcf085ddd52f7ecd89 | [
"MIT"
] | null | null | null | src/core/test/test_query_rewriter.py | RogerTangos/datahub-stub | 8c3e89c792e45ccc9ad067fcf085ddd52f7ecd89 | [
"MIT"
] | null | null | null | src/core/test/test_query_rewriter.py | RogerTangos/datahub-stub | 8c3e89c792e45ccc9ad067fcf085ddd52f7ecd89 | [
"MIT"
] | null | null | null | from core.db.query_rewriter import SQLQueryRewriter
from django.db.models import signals
from django.test import TestCase
import factory
import sqlparse
from mock import patch
class QueryRewriter(TestCase):
"""Tests all the query rewriter operations in query_rewriter.py."""
@factory.django.mute_signals(signals.pre_save)
def setUp(self):
self.repo_base = "test_repobase"
self.user = "test_user"
self.query_rewriter = SQLQueryRewriter(self.repo_base, self.user)
self.mock_connection = self.create_patch(
'core.db.manager.DataHubConnection')
def create_patch(self, name):
# helper method for creating patches
patcher = patch(name)
thing = patcher.start()
self.addCleanup(patcher.stop)
return thing
def test_extract_table_info(self):
valid_table_token = "repo.table"
expected_result = ["repo", "table", None]
self.assertEqual(
self.query_rewriter.extract_table_info(valid_table_token),
expected_result)
valid_table_token = "repobase.repo.table"
expected_result = ["repo", "table", "repobase"]
self.assertEqual(
self.query_rewriter.extract_table_info(valid_table_token),
expected_result)
invalid_table_token = "testtable"
exception_raised = False
try:
self.query_rewriter.extract_table_info(invalid_table_token)
except Exception:
exception_raised = True
self.assertEquals(exception_raised, True)
invalid_table_token = "table1.table2.table3.table4"
exception_raised = False
try:
self.query_rewriter.extract_table_info(invalid_table_token)
except Exception:
exception_raised = True
self.assertEquals(exception_raised, True)
def test_extract_table_token(self):
query = "SELECT * from repo1.table1 as tbl1"
token = sqlparse.parse(query)[0].tokens[6]
expected_result = [(["repo1", "table1", None], "as tbl1")]
self.assertEqual(
self.query_rewriter.extract_table_token(token), expected_result)
query = ("SELECT * from repo1.table1 as tbl1, repo2.table2 as tbl2 "
"where ... ")
token = sqlparse.parse(query)[0].tokens[6]
expected_result = [(["repo1", "table1", None], "as tbl1"),
(["repo2", "table2", None], "as tbl2")]
self.assertEqual(
self.query_rewriter.extract_table_token(token), expected_result)
query = "SELECT * from repo1.table1 tbl1, repo2.table2 tbl2 where ... "
token = sqlparse.parse(query)[0].tokens[6]
expected_result = [(["repo1", "table1", None], "tbl1"),
(["repo2", "table2", None], "tbl2")]
self.assertEqual(
self.query_rewriter.extract_table_token(token), expected_result)
def test_extract_table_string(self):
valid_table_string = "repo.table"
expected_result = (["repo", "table", None], '')
self.assertEqual(
self.query_rewriter.extract_table_string(valid_table_string),
expected_result)
valid_table_string = "repo.table test"
expected_result = (["repo", "table", None], 'test')
self.assertEqual(
self.query_rewriter.extract_table_string(valid_table_string),
expected_result)
valid_table_string = "repo.table as test"
expected_result = (["repo", "table", None], 'as test')
self.assertEqual(
self.query_rewriter.extract_table_string(valid_table_string),
expected_result)
valid_table_string = "repobase.repo.table test "
expected_result = (["repo", "table", "repobase"], 'test')
self.assertEqual(
self.query_rewriter.extract_table_string(valid_table_string),
expected_result)
valid_table_string = "repobase.repo.table as test "
expected_result = (["repo", "table", "repobase"], 'as test')
self.assertEqual(
self.query_rewriter.extract_table_string(valid_table_string),
expected_result)
invalid_table_string = "invalidtable"
exception_raised = False
try:
self.query_rewriter.extract_table_string(invalid_table_string)
except Exception:
exception_raised = True
self.assertEquals(exception_raised, True)
def test_contains_subquery(self):
query = ("select * from (select * from repo.table where "
"repo.table.test = 'True')")
subquery_token = sqlparse.parse(query)[0].tokens[6]
no_subquery_token = sqlparse.parse(query)[0].tokens[0]
self.assertEqual(
self.query_rewriter.contains_subquery(subquery_token), True)
self.assertEqual(
self.query_rewriter.contains_subquery(no_subquery_token), False)
def test_extract_subquery(self):
query = ("select * from (select * from repo.table where "
"repo.table.test='True')")
subquery_token = sqlparse.parse(query)[0].tokens[6]
expected_result = ('(', ('select * from repo.table where '
'repo.table.test=\'True\''), ')')
self.assertEqual(
self.query_rewriter.extract_subquery(subquery_token),
expected_result)
def test_process_subquery(self):
query = "select * from (select * from repo.table)"
subquery_token = sqlparse.parse(query)[0].tokens[6]
mock_table_policies = self.create_patch(
'core.db.query_rewriter.SQLQueryRewriter.find_table_policies')
mock_table_policies.return_value = ["tester='Alice"]
expected_result = ("(select * from (SELECT * FROM repo.table WHERE "
"tester='Alice) AS repotable)")
self.assertEqual(
self.query_rewriter.process_subquery(subquery_token),
expected_result)
def test_apply_row_level_security_base(self):
mock_find_table_policies = self.create_patch(
'core.db.query_rewriter.SQLQueryRewriter.find_table_policies')
mock_find_table_policies.return_value = ["tester='Alice'"]
query = "select * from repo.table"
expected_result = ("select * from (SELECT * FROM repo.table WHERE "
"tester='Alice') AS repotable")
self.assertEqual(
self.query_rewriter.apply_row_level_security_base(query),
expected_result)
query = "select * from hola.orders limit 3"
expected_result = ("select * from (SELECT * FROM hola.orders WHERE "
"tester='Alice') AS holaorders limit 3")
self.assertEqual(
self.query_rewriter.apply_row_level_security_base(query),
expected_result)
mock_find_table_policies = self.create_patch(
'core.db.query_rewriter.SQLQueryRewriter.find_table_policies')
mock_find_table_policies.return_value = ["tester='Alice'",
"tester='Bob'"]
query = ("select * from hola.orders o, hola.customer t where "
"o.customerid=t.customerid order by customer")
expected_result = ("select * from (SELECT * FROM hola.orders "
"WHERE tester='Alice' OR tester='Bob') o, "
"(SELECT * FROM hola.customer WHERE tester='Alice' "
"OR tester='Bob') t where o.customerid=t.customerid"
" order by customer")
self.assertEqual(
self.query_rewriter.apply_row_level_security_base(query),
expected_result)
query = ("select * from test.orders right join test.customer "
"on test.orders.customerid=test.customer.customerid")
expected_result = ("select * from (SELECT * FROM test.orders WHERE "
"tester='Alice' OR tester='Bob') AS testorders "
"right join (SELECT * FROM test.customer WHERE "
"tester='Alice' OR tester='Bob') AS testcustomer "
"on testorders.customerid=testcustomer.customerid")
self.assertEqual(
self.query_rewriter.apply_row_level_security_base(query),
expected_result)
query = "select * from test.orders where test.orders.customerid='1'"
expected_result = ("select * from (SELECT * FROM test.orders WHERE "
"tester='Alice' OR tester='Bob') AS testorders "
"where testorders.customerid='1'")
self.assertEqual(
self.query_rewriter.apply_row_level_security_base(query),
expected_result)
query = ("select count(*), visible from hola.grades_file "
"group by visible")
expected_result = ("select count(*), visible from (SELECT * FROM "
"hola.grades_file WHERE tester='Alice' OR "
"tester='Bob') AS holagrades_file group by visible")
self.assertEqual(
self.query_rewriter.apply_row_level_security_base(query),
expected_result)
query = ("select * from (select * from "
"(select * from hola.orders) as i) as o")
expected_result = ("select * from (select * from (select * from "
"(SELECT * FROM hola.orders WHERE tester='Alice' "
"OR tester='Bob') AS holaorders) as i) as o")
self.assertEqual(
self.query_rewriter.apply_row_level_security_base(query),
expected_result)
query = ("select * from hola.orders where customerid = "
"(select customerid from hola.customer where customerid='3')")
expected_result = ("select * from (SELECT * FROM hola.orders WHERE "
"tester='Alice' OR tester='Bob') AS holaorders "
"where customerid = (select customerid from "
"(SELECT * FROM hola.customer WHERE tester='Alice' "
"OR tester='Bob') AS holacustomer where "
"customerid='3')")
self.assertEqual(
self.query_rewriter.apply_row_level_security_base(query),
expected_result)
query = ("select * from hola.orders as t, hola.orders_2, "
"hola.customer where t.customerid=hola.orders_2.customerid "
"and hola.orders_2.customerid=hola.customer.customerid")
expected_result = ("select * from (SELECT * FROM hola.orders WHERE "
"tester='Alice' OR tester='Bob') as t, "
"(SELECT * FROM hola.orders_2 WHERE tester='Alice' "
"OR tester='Bob') AS holaorders_2, "
"(SELECT * FROM hola.customer WHERE tester='Alice' "
"OR tester='Bob') AS holacustomer where "
"t.customerid=holaorders_2.customerid and "
"holaorders_2.customerid=holacustomer.customerid")
self.assertEqual(
self.query_rewriter.apply_row_level_security_base(query),
expected_result)
def test_apply_row_level_security_update(self):
mock_find_table_policies = self.create_patch(
'core.db.query_rewriter.SQLQueryRewriter.find_table_policies')
mock_find_table_policies.return_value = ["count > 10"]
query = ("update hola.grades_file set firstname='Alice' "
"where lastname='Abby'")
expected_result = ("update hola.grades_file set firstname='Alice' "
"where lastname='Abby' AND count > 10")
self.assertEquals(
self.query_rewriter.apply_row_level_security_update(query),
expected_result)
mock_find_table_policies = self.create_patch(
'core.db.query_rewriter.SQLQueryRewriter.find_table_policies')
mock_find_table_policies.return_value = []
query = ("update hola.grades_file set firstname='Alice' "
"where lastname='Abby'")
expected_result = ("update hola.grades_file set firstname='Alice' "
"where lastname='Abby'")
self.assertEquals(
self.query_rewriter.apply_row_level_security_update(query),
expected_result)
def test_apply_row_level_security_insert(self):
query = "insert into repo.table values (a,b,c)"
mock_find_table_policies = self.create_patch(
'core.db.query_rewriter.SQLQueryRewriter.find_table_policies')
mock_find_table_policies.return_value = ["INSERT='True'"]
expected_result = "insert into repo.table values (a,b,c)"
self.assertEquals(
self.query_rewriter.apply_row_level_security_insert(query),
expected_result)
mock_find_table_policies = self.create_patch(
'core.db.query_rewriter.SQLQueryRewriter.find_table_policies')
mock_find_table_policies.return_value = ["INSERT='False'"]
exception_raised = False
try:
self.query_rewriter.apply_row_level_security_insert(query)
except Exception:
exception_raised = True
self.assertEquals(exception_raised, True)
mock_find_table_policies = self.create_patch(
'core.db.query_rewriter.SQLQueryRewriter.find_table_policies')
mock_find_table_policies.return_value = []
self.assertEquals(
self.query_rewriter.apply_row_level_security_insert(query),
expected_result)
def test_apply_row_level_security(self):
mock_apply_rls_base = self.create_patch(
'core.db.query_rewriter.SQLQueryRewriter.'
'apply_row_level_security_base')
mock_apply_rls_base.return_value = "RLS for select called"
mock_apply_rls_insert = self.create_patch(
'core.db.query_rewriter.SQLQueryRewriter.'
'apply_row_level_security_insert')
mock_apply_rls_insert.return_value = "RLS for insert called"
mock_apply_rls_update = self.create_patch(
'core.db.query_rewriter.SQLQueryRewriter.'
'apply_row_level_security_update')
mock_apply_rls_update.return_value = "RLS for update called"
select_query = "select * from repo.table"
self.assertEquals(
self.query_rewriter.apply_row_level_security(select_query),
"RLS for select called")
insert_query = "insert into repo.table values (a,b,c)"
self.assertEquals(
self.query_rewriter.apply_row_level_security(insert_query),
"RLS for insert called")
update_query = ("update repo.table set firstname='Alice' "
"where lastname='Abby'")
self.assertEquals(
self.query_rewriter.apply_row_level_security(update_query),
"RLS for update called")
| 46.154494 | 79 | 0.612805 | 1,762 | 16,431 | 5.461975 | 0.08059 | 0.057149 | 0.065357 | 0.056733 | 0.824501 | 0.799979 | 0.782938 | 0.747506 | 0.722984 | 0.674771 | 0 | 0.005577 | 0.29067 | 16,431 | 355 | 80 | 46.284507 | 0.820163 | 0.005903 | 0 | 0.559211 | 0 | 0 | 0.319206 | 0.080353 | 0 | 0 | 0 | 0 | 0.118421 | 1 | 0.039474 | false | 0 | 0.019737 | 0 | 0.065789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
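The test file in the row above exercises `SQLQueryRewriter`'s row-level-security rewrites, which wrap each table reference in a policy-filtered subquery. The expected strings follow a simple pattern; here is a toy, string-based illustration of that pattern only — the real rewriter parses statements with sqlparse, and this helper name is invented:

```python
def apply_rls(query, table, policies):
    """Wrap the first reference to `table` in a subquery that ORs together
    the row-level-security policies, aliased as the table name minus dots."""
    if not policies:
        return query  # no policies: query passes through unchanged
    predicate = " OR ".join(policies)
    alias = table.replace(".", "")
    wrapped = f"(SELECT * FROM {table} WHERE {predicate}) AS {alias}"
    return query.replace(table, wrapped, 1)
```

This reproduces the shape of the expected strings in the tests above (e.g. `repo.table` becomes `(SELECT * FROM repo.table WHERE tester='Alice') AS repotable`) but none of the subquery, join, or multi-table handling the real class performs.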
06dbbb053f3bfbd491b0175d66b52fe6c1c0c826 | 1,262 | py | Python | test/test_skipint.py | hlatkydavid/vnmrjpy | 48707a1000dc87e646e37c8bd686e695bd31a61e | [
"MIT"
] | null | null | null | test/test_skipint.py | hlatkydavid/vnmrjpy | 48707a1000dc87e646e37c8bd686e695bd31a61e | [
"MIT"
] | null | null | null | test/test_skipint.py | hlatkydavid/vnmrjpy | 48707a1000dc87e646e37c8bd686e695bd31a61e | [
"MIT"
] | null | null | null | import unittest
import vnmrjpy as vj
import glob
import nibabel as nib
class Test_SkipintGenerator(unittest.TestCase):
def test_generate_gems(self):
        reduction = 4  # intended undersampling factor; not passed to SkipintGenerator here
gemsdir = sorted(glob.glob(vj.fids+'/gems*.fid'))[0]
procpar = gemsdir+'/procpar'
gen = vj.util.SkipintGenerator(procpar=procpar)
kmask = gen.generate_kspace_mask()
self.assertEqual(len(kmask.shape),4)
#nib.viewers.OrthoSlicer3D(kmask).show()
def test_generate_ge3d(self):
reduction = 4
gemsdir = sorted(glob.glob(vj.fids+'/ge3d_s*.fid'))[0]
procpar = gemsdir+'/procpar'
gen = vj.util.SkipintGenerator(procpar=procpar)
kmask = gen.generate_kspace_mask()
self.assertEqual(len(kmask.shape),4)
#nib.viewers.OrthoSlicer3D(kmask).show()
def test_generate_mge3d(self):
reduction = 4
gemsdir = sorted(glob.glob(vj.fids+'/mge3d*.fid'))[0]
procpar = gemsdir+'/procpar'
gen = vj.util.SkipintGenerator(procpar=procpar)
kmask = gen.generate_kspace_mask()
self.assertEqual(len(kmask.shape),4)
        #nib.viewers.OrthoSlicer3D(kmask).show()
def test_skiptab_ge3d(self):
""" Generate skiptab"""
pass
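Each test above asserts that the generated k-space mask is 4-dimensional. A self-contained sketch of that check, with a zero-filled numpy array standing in for `gen.generate_kspace_mask()` (vnmrjpy and its `.fid` fixtures are assumed unavailable here, and the axis meanings are an assumption):

```python
import numpy as np

# stand-in for the SkipintGenerator output; axis order is assumed
kmask = np.zeros((64, 64, 1, 1))

# the same check the test methods perform
assert len(kmask.shape) == 4
print(kmask.shape)
```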
5 kyu/A Chain adding function.py | mwk0408/codewars_solutions | MIT
class add(int):
def __call__(self, func):
        return add(self+func)
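Because `add` subclasses `int` and `__call__` returns another `add`, every call result is simultaneously an integer and callable again, which is what makes the chain work:

```python
class add(int):
    def __call__(self, func):
        return add(self + func)

print(add(1)(2)(3))    # 6
print(add(1)(2) + 10)  # 13 -- intermediate results still behave as plain ints
```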
pkgs/ops-pkg/src/genie/libs/ops/mld/ios/mld.py | miott/genielibs | Apache-2.0
'''
Mld Genie Ops Object for IOS - CLI.
'''
from ..iosxe.mld import Mld as MldXE
class Mld(MldXE):
    pass
half/python/mutilthread.py | kong5664546498/half_a_wheel | MIT
from threading import Thread
from hello_world import hello
t = Thread(target=hello, args=("kitty",))
c = Thread(target=hello, args=("kitty",))
t.start()
c.start()
# wait for both threads to finish before exiting
t.join()
c.join()
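The script above depends on a local `hello_world` module that is not shown; a self-contained sketch of the same pattern with an inline worker and explicit joins (the worker body is an assumption):

```python
from threading import Thread

results = []

def hello(name):
    # stand-in for hello_world.hello
    results.append('hello, ' + name)

t = Thread(target=hello, args=("kitty",))
c = Thread(target=hello, args=("doggy",))
t.start()
c.start()
t.join()  # wait for both workers before reading results
c.join()
print(sorted(results))  # ['hello, doggy', 'hello, kitty']
```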
app.py | yahya09/liga-badr | MIT
import coba
print(coba.addGlobal(5))
print(coba.powerGlobal(2))
print(coba.addGlobal(12345))
print(coba.powerGlobal(-1))
user_interface/__init__.py | DrunkBearEKB/console-hex | Apache-2.0
from user_interface.win import Window
weibo_api/weibo_login/__init__.py | wdwind/weibo_api | MIT
from .weibo_login import WeiboLoginApi
plpred/models/__init__.py | Christiankun/plpred | MIT
from .plpred_rf import PlpredRF
from .plpred_gb import PlpredGB
from .base_model import BaseModel
from .plpred_nn import PlpredNN
from .plpred_svm import PlpredSVM
tests/cerami/datatype/__init__.py | gummybuns/dorm | MIT
from .dynamo_data_type_test import *
from .translator import *
from .expression import *
tests/test_integration.py | Yobmod/srimpy | MIT
""" Integration Testing for pysrim
"""

import pytest

from srim.core.target import Target
from srim.core.layer import Layer
from srim.core.element import Element
AutoEncoder/ae_model.py | CsekM8/LVH-THESIS | MIT
import torch.nn as nn
class ConvAE(nn.Module):
    """Convolutional autoencoder with three variants: 'A' (shallow, single
    pooling stage), 'B' (LeakyReLU with max pooling, three stages), and a
    default (ReLU with average pooling, three stages)."""
    def __init__(self, variant='A'):
        super(ConvAE, self).__init__()
        if variant == 'A':
self.encoder = nn.Sequential(
nn.Conv2d(1, 12, 5, padding=2),
nn.ReLU(inplace=True),
nn.Conv2d(12, 6, 3, padding=1),
nn.ReLU(inplace=True),
nn.AvgPool2d(2),
nn.Conv2d(6, 3, 3, padding=1)
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(3, 6, 3, padding=1),
nn.ReLU(inplace=True),
nn.UpsamplingBilinear2d(scale_factor=2),
nn.ConvTranspose2d(6, 12, 3, padding=1),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(12, 1, 5, padding=2)
)
elif variant == 'B':
self.encoder = nn.Sequential(
nn.Conv2d(1, 24, 5, padding=2),
nn.LeakyReLU(0.05, inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(24, 12, 3, padding=1),
nn.LeakyReLU(0.05, inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(12, 6, 3, padding=1),
nn.LeakyReLU(0.05, inplace=True),
nn.MaxPool2d(2)
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(6, 6, 3, padding=1),
nn.LeakyReLU(0.05, inplace=True),
nn.UpsamplingBilinear2d(scale_factor=2),
nn.ConvTranspose2d(6, 12, 3, padding=1),
nn.LeakyReLU(0.05, inplace=True),
nn.UpsamplingBilinear2d(scale_factor=2),
nn.ConvTranspose2d(12, 24, 3, padding=1),
nn.LeakyReLU(0.05, inplace=True),
nn.UpsamplingBilinear2d(scale_factor=2),
nn.ConvTranspose2d(24, 1, 5, padding=2),
)
else:
self.encoder = nn.Sequential(
nn.Conv2d(1, 24, 7, padding=3),
nn.ReLU(inplace=True),
nn.AvgPool2d(2),
nn.Conv2d(24, 12, 3, padding=1),
nn.ReLU(inplace=True),
nn.AvgPool2d(2),
nn.Conv2d(12, 6, 3, padding=1),
nn.ReLU(inplace=True),
nn.AvgPool2d(2)
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(6, 6, 3, padding=1),
nn.ReLU(inplace=True),
nn.UpsamplingBilinear2d(scale_factor=2),
nn.ConvTranspose2d(6, 12, 3, padding=1),
nn.ReLU(inplace=True),
nn.UpsamplingBilinear2d(scale_factor=2),
nn.ConvTranspose2d(12, 24, 3, padding=1),
nn.ReLU(inplace=True),
nn.UpsamplingBilinear2d(scale_factor=2),
nn.ConvTranspose2d(24, 1, 7, padding=3),
)
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
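Variants 'B' and the default each halve the spatial size three times before upsampling three times, so inputs whose height and width are divisible by 8 round-trip to their original size. A quick sketch of that arithmetic (pure Python, no torch required; `restored_size` is a hypothetical helper, not part of the model):

```python
def restored_size(size, n_pool=3):
    # stride-2 pooling floors odd sizes; x2 bilinear upsampling doubles them back
    for _ in range(n_pool):
        size //= 2
    return size * 2 ** n_pool

print(restored_size(128))  # 128 -> divisible by 8, round-trips exactly
print(restored_size(100))  # 96  -> decoder output is smaller than the input
```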
requirements.py | Kromey/err-nanobot | MIT
pynano==0.1.1
cytoself/components/layers/norm_mse.py | royerlab/cytoself | BSD-3-Clause
from tensorflow.compat.v1.keras.losses import MSE
def normalized_mse(var):
def loss(y_true, y_pred):
return MSE(y_true, y_pred) / var
return loss
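`normalized_mse` is a closure that divides the ordinary mean-squared error by a fixed variance term. A dependency-free sketch of the same idea on plain lists (the `mse` helper stands in for the Keras import):

```python
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def normalized_mse(var):
    def loss(y_true, y_pred):
        return mse(y_true, y_pred) / var
    return loss

loss = normalized_mse(var=2.0)
print(loss([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # (0 + 0 + 4) / 3 / 2 = 2/3
```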
tests.py | Tauag/SPass | MIT
import unittest
import string
from spass.generators import generate_random_password, generate_passphrase
from spass.exceptions import ParameterError
class TestGenerators(unittest.TestCase):
def test_length(self):
pass_set = generate_random_password()
self.assertEqual(9, len(pass_set['password']), 'Generated password not of correct length: %s' % pass_set)
pass_set = generate_random_password(length=40)
self.assertEqual(40, len(pass_set['password']), 'Generated password not of correct length: %s' % pass_set)
pass_set = generate_random_password(length=150)
self.assertEqual(150, len(pass_set['password']), 'Generated password not of correct length: %s' % pass_set)
def test_characters(self):
target_chars = '<>,.?\\\'\"{}[]()=+-_^`~'
pass_set = generate_random_password(length=150, ignored_chars=target_chars)
for char in pass_set['password']:
self.assertTrue(char not in target_chars, '<%s> was found and not expected' % char)
pass_set = generate_random_password(length=150, letters=False)
for char in pass_set['password']:
self.assertTrue(char not in string.ascii_letters, '<%s> was found and not expected' % char)
pass_set = generate_random_password(length=150, digits=False)
for char in pass_set['password']:
self.assertTrue(char not in string.digits, '<%s> was found and not expected' % char)
pass_set = generate_random_password(length=150, punctuation=False)
for char in pass_set['password']:
self.assertTrue(char not in string.punctuation, '<%s> was found and not expected' % char)
pass_set = generate_random_password(length=150, letters=False, punctuation=False)
for char in pass_set['password']:
self.assertTrue(char not in string.ascii_letters + string.punctuation, '<%s> was found and not expected' % char)
pass_set = generate_random_password(length=150, letters=False, digits=False)
for char in pass_set['password']:
self.assertTrue(char not in string.ascii_letters + string.digits, '<%s> was found and not expected' % char)
def test_padding_characters(self):
pass_set = generate_passphrase(word_count=10, pad_length=10)
pad_count, bank = 0, string.digits + string.punctuation
for char in pass_set['password']:
if char in bank:
pad_count += 1
self.assertEqual(10, pad_count, 'Incorrect number of padding characters')
pass_set = generate_passphrase(word_count=10, pad_length=10, punctuation=False)
pad_count, bank = 0, string.digits
for char in pass_set['password']:
if char in bank:
pad_count += 1
self.assertEqual(10, pad_count, 'Incorrect number of padding characters')
pass_set = generate_passphrase(word_count=10, pad_length=10, digits=False)
pad_count, bank = 0, string.punctuation
for char in pass_set['password']:
if char in bank:
pad_count += 1
self.assertEqual(10, pad_count, 'Incorrect number of padding characters')
def test_exception(self):
with self.assertRaises(ParameterError):
generate_random_password(letters=False, punctuation=False, digits=False)
with self.assertRaises(ParameterError):
generate_passphrase(pad_length=5, punctuation=False, digits=False)
with self.assertRaises(ParameterError):
generate_passphrase(pad_length=10, punctuation=False, digits=False)
def test_entropy_random(self):
pass_set = generate_random_password()
self.assertEqual(58.99129966509874, pass_set['entropy'], 'Unexpected entropy value')
pass_set = generate_random_password(letters=False, digits=False)
self.assertEqual(45, pass_set['entropy'], 'Unexpected entropy value')
pass_set = generate_random_password(length=15)
self.assertEqual(98.31883277516458, pass_set['entropy'], 'Unexpected entropy value')
pass_set = generate_random_password(length=150, letters=False, digits=False)
self.assertEqual(750.0000000000001, pass_set['entropy'], 'Unexpected entropy value')
pass_set = generate_random_password(length=20)
self.assertEqual(131.09177703355275, pass_set['entropy'], 'Unexpected entropy value')
pass_set = generate_random_password(length=20, ignored_chars='\'\":;<>,./?[]{}\\()')
self.assertEqual(125.33573081389804, pass_set['entropy'], 'Unexpected entropy value')
def test_entropy_passphrase(self):
pass_set = generate_passphrase()
self.assertEqual(69.62406251802891, pass_set['entropy'], 'Unexpected entropy value')
pass_set = generate_passphrase(word_count=15)
self.assertEqual(208.8721875540867, pass_set['entropy'], 'Unexpected entropy value')
def test_entropy_deviation(self):
pass_set = generate_passphrase(pad_length=3)
self.assertEqual(16.176952268336283, pass_set['deviation'], 'Unexpected deviation value')
pass_set = generate_passphrase(pad_length=10)
self.assertEqual(53.923174227787605, pass_set['deviation'], 'Unexpected deviation value')
pass_set = generate_passphrase(pad_length=10, punctuation=False)
self.assertEqual(33.219280948873624, pass_set['deviation'], 'Unexpected deviation value')
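The expected values in `test_entropy_random` follow the standard formula `length * log2(pool_size)`: the default 9-character password draws from 94 printable characters (52 letters + 10 digits + 32 punctuation), and with letters and digits disabled the pool shrinks to 32 punctuation characters. A quick check of the first two expected values:

```python
import math
import string

full_pool = len(string.ascii_letters + string.digits + string.punctuation)
print(full_pool)                               # 94
print(9 * math.log2(full_pool))                # ~58.9913, as asserted above
print(9 * math.log2(len(string.punctuation)))  # 45.0
```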
tvae/models/__init__.py | khucnam/Efflux_TransVAE | MIT
from .transformer_vae import TransformerVAE
gallery/locale.py | lwairore/gallery | MIT
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
print('Location of static', os.path.join(BASE_DIR, 'static'))
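The nested `dirname` calls climb two directory levels from the current file. A deterministic sketch using `posixpath` so the result does not depend on the host OS (the path is made up):

```python
import posixpath

path = '/project/gallery/locale.py'
base = posixpath.dirname(posixpath.dirname(path))
print(base)                             # /project
print(posixpath.join(base, 'static'))   # /project/static
```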
models/unet/utils/__init__.py | ustb-ai3d/automatic_inpainting | MIT
from .load import *
from .utils import *
from .data_vis import *
from .predict import *
acq4/modules/TaskRunner/__init__.py | aleonlein/acq4 | MIT
from __future__ import print_function
from .TaskRunner import *
vedro/_context.py | iri6e4k0/vedro | Apache-2.0
from typing import Any
def context(fn: Any) -> Any:
return fn
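`context` is an identity decorator: it returns the function unchanged and serves purely as a marker. A small usage sketch (the decorated function is hypothetical):

```python
from typing import Any

def context(fn: Any) -> Any:
    return fn  # identity: tags fn as a context without altering it

@context
def logged_in_user():
    return {'user': 'alice'}

print(logged_in_user())  # {'user': 'alice'} -- behaviour is unchanged
```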
main.py | GuiCardosooo/yolo-gun-detection | MIT
import cfg.config as cfg
import lib.prepare as lpp
# 1 - Download the dataset
# 2 - Split the dataset
lpp.divide_dataset()
tests/modules/contrib/test_amixer.py | alexsr/bumblebee-status | MIT
import pytest
def test_load_module():
__import__("modules.contrib.amixer")
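`__import__("modules.contrib.amixer")` is a smoke test: the module merely has to import without raising. The same pattern with the documented `importlib` API, using a stdlib module as a stand-in since `modules.contrib.amixer` is not available here:

```python
import importlib

def test_load_module():
    # importing must not raise; 'json' stands in for modules.contrib.amixer
    importlib.import_module('json')

test_load_module()
print('import smoke test passed')
```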
Solutions/7kyu/7kyu_insert_dashes.py | citrok25/Codewars-1 | MIT
import re
def insert_dash(num):
return re.sub('[13579]+', lambda s: '-'.join(list(s.group(0))), str(num))
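The pattern `[13579]+` matches each maximal run of odd digits, and the lambda rejoins that run with dashes between consecutive digits, so a dash appears exactly between every pair of adjacent odd digits:

```python
import re

def insert_dash(num):
    return re.sub('[13579]+', lambda s: '-'.join(list(s.group(0))), str(num))

print(insert_dash(454793))  # 4547-9-3
print(insert_dash(13579))   # 1-3-5-7-9
print(insert_dash(24680))   # 24680 -- no odd digits, unchanged
```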
libvirt_ebs/handlers/__init__.py | elprans/libvirt-ebs | Apache-2.0
from ._routing import handle_request as handle_request  # NoQA
from . import az # NoQA
from . import instances # NoQA
from . import volumes  # NoQA
Text_to_Image/StyleGAN2_ada/__init__.py | talha-khalid-qureshi/Image-Captioning | Apache-2.0
from StyleGAN2_ada import *
emojex/main.py | 360macky/emojex | MIT
import openai
def set_api_key(key):
openai.api_key = key
datasets/__init__.py | EagleMIT/m-i-d | MIT
from .midataset import SliceDataset
59a25cb3b22c47830c8e6d709ef537e3d40bcc90 | 45 | py | Python | enthought/traits/ui/helper.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 3 | 2016-12-09T06:05:18.000Z | 2018-03-01T13:00:29.000Z | enthought/traits/ui/helper.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 1 | 2020-12-02T00:51:32.000Z | 2020-12-02T08:48:55.000Z | enthought/traits/ui/helper.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | null | null | null | # proxy module
from traitsui.helper import *
| 15 | 29 | 0.777778 | 6 | 45 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 45 | 2 | 30 | 22.5 | 0.921053 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
abbc23bd75d072c3314df0446c6e069b6faa861b | 171 | py | Python | apps/certificate/admin.py | LizanLycan/CertsGen | 2e18d8ddea6adf90805face16cbb8f8fa06989c3 | [
"MIT"
] | null | null | null | apps/certificate/admin.py | LizanLycan/CertsGen | 2e18d8ddea6adf90805face16cbb8f8fa06989c3 | [
"MIT"
] | 1 | 2020-02-04T01:56:42.000Z | 2020-02-04T01:56:42.000Z | apps/certificate/admin.py | LizanLycan/CertsGen | 2e18d8ddea6adf90805face16cbb8f8fa06989c3 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Certificate
class CertificateAdmin(admin.ModelAdmin):
pass
admin.site.register(Certificate, CertificateAdmin)
| 17.1 | 50 | 0.812865 | 19 | 171 | 7.315789 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122807 | 171 | 9 | 51 | 19 | 0.926667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e61f03351c81978fb25204707fa849c17e42478c | 5,773 | py | Python | Ionburst/ionburst.py | ionburstcloud/ionburst-sdk-python | c5544bc26558aa2916186dcd0f0ea389234b9e28 | [
"Apache-2.0"
] | 3 | 2021-06-23T10:58:59.000Z | 2021-07-01T18:27:51.000Z | Ionburst/ionburst.py | ionburstcloud/ionburst-sdk-python | c5544bc26558aa2916186dcd0f0ea389234b9e28 | [
"Apache-2.0"
] | null | null | null | Ionburst/ionburst.py | ionburstcloud/ionburst-sdk-python | c5544bc26558aa2916186dcd0f0ea389234b9e28 | [
"Apache-2.0"
] | 1 | 2021-07-15T04:42:08.000Z | 2021-07-15T04:42:08.000Z | from .settings import Settings
from .apiHandler import APIHandler
class Ionburst:
def __init__(self, server_url = None):
self.settings = Settings(server_url)
self.__apihandler = APIHandler(self.settings)
def __check_token(self):
if not self.settings.ionburst_id:
raise ValueError('ionburst_id is not specified!')
if not self.settings.ionburst_key:
raise ValueError('ionburst_key is not specified!')
if not self.settings.ionburst_uri:
raise ValueError('ionburst_uri is not specified!')
if not self.__apihandler.idToken:
res = self.__apihandler.GetJWT()
            if res.status_code != 200:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
def get(self, id = None):
self.__check_token()
res = self.__apihandler.downloadData(id)
        if res.status_code == 200:
return res.content
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
def getSecrets(self, id = None):
self.__check_token()
res = self.__apihandler.downloadSecrets(id)
        if res.status_code == 200:
return res.content
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
def put(self, request = {}):
self.__check_token()
res = self.__apihandler.uploadData(request)
        if res.status_code == 200:
return res.text
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
def putSecrets(self, request = {}):
self.__check_token()
res = self.__apihandler.uploadSecrets(request)
        if res.status_code == 200:
return res.text
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
def delete(self, id = None):
self.__check_token()
res = self.__apihandler.deleteData(id)
        if res.status_code == 200:
return res.text
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
def deleteSecrets(self, id = None):
self.__check_token()
res = self.__apihandler.deleteSecrets(id)
        if res.status_code == 200:
return res.text
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
def getClassifications(self):
if not self.settings.ionburst_uri:
raise ValueError('ionburst_uri is not specified!')
self.__check_token()
res = self.__apihandler.classifications()
        if res.status_code == 200:
return res.text
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
def startDeferred(self, request = {}):
self.__check_token()
if 'action' not in request:
raise ValueError('action must be specified in the parameter!')
if 'id' not in request:
raise ValueError('id must be specified in the parameter!')
        if request['action'] == 'GET':
res = self.__apihandler.downloadData(request['id'], True)
        elif request['action'] == 'PUT':
res = self.__apihandler.uploadData(request, True)
else:
raise ValueError('Deferred action is only available for PUT or GET')
        if res.status_code == 200:
return res.text
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
def startDeferredSecrets(self, request = {}):
self.__check_token()
if 'action' not in request:
raise ValueError('action must be specified in the parameter!')
if 'id' not in request:
raise ValueError('id must be specified in the parameter!')
        if request['action'] == 'GET':
res = self.__apihandler.downloadSecrets(request['id'], True)
        elif request['action'] == 'PUT':
res = self.__apihandler.uploadSecrets(request, True)
else:
raise ValueError('Deferred action is only available for PUT or GET')
        if res.status_code == 200:
return res.text
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
def checkDeferred(self, token = None):
self.__check_token()
res = self.__apihandler.checkDeferred(token)
        if res.status_code in (200, 202):
return res.text
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
def checkDeferredSecrets(self, token = None):
self.__check_token()
res = self.__apihandler.checkDeferredSecrets(token)
        if res.status_code in (200, 202):
return res.text
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
    def fetch(self, token = None):
self.__check_token()
res = self.__apihandler.fetch(token)
        if res.status_code == 200:
return res.content
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text))
    def fetchSecrets(self, token = None):
self.__check_token()
res = self.__apihandler.fetchSecrets(token)
        if res.status_code == 200:
return res.content
else:
raise SyntaxError('{}, status: {}. {}'.format(res.reason, res.status_code, res.text)) | 40.090278 | 101 | 0.602113 | 672 | 5,773 | 4.991071 | 0.104167 | 0.080501 | 0.116279 | 0.071556 | 0.832737 | 0.813953 | 0.792188 | 0.792188 | 0.751342 | 0.64997 | 0 | 0.011558 | 0.280617 | 5,773 | 144 | 102 | 40.090278 | 0.796051 | 0 | 0 | 0.671875 | 0 | 0 | 0.118289 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117188 | false | 0 | 0.015625 | 0 | 0.242188 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e62ceeafa9f3a192093612680d0c268104d29a1d | 25 | py | Python | test_files_copy/file_2.py | A-Wei/multiprocess_copy_folder | de62c616bd5ac48a64aef2fea360951c443839ac | [
"MIT"
] | null | null | null | test_files_copy/file_2.py | A-Wei/multiprocess_copy_folder | de62c616bd5ac48a64aef2fea360951c443839ac | [
"MIT"
] | null | null | null | test_files_copy/file_2.py | A-Wei/multiprocess_copy_folder | de62c616bd5ac48a64aef2fea360951c443839ac | [
"MIT"
] | null | null | null | # Some comments
import os | 12.5 | 15 | 0.8 | 4 | 25 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 2 | 16 | 12.5 | 0.952381 | 0.52 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e63e5f8fb1afd9440a0969d0c412dc63659cb387 | 53 | py | Python | learn/__init__.py | starrysky9959/digital-recognition | a81c3fab5415cd037d362354116202d76006b755 | [
"MIT"
] | null | null | null | learn/__init__.py | starrysky9959/digital-recognition | a81c3fab5415cd037d362354116202d76006b755 | [
"MIT"
] | null | null | null | learn/__init__.py | starrysky9959/digital-recognition | a81c3fab5415cd037d362354116202d76006b755 | [
"MIT"
] | null | null | null | import learn.mymodel
from learn.mymodel import trans | 26.5 | 31 | 0.849057 | 8 | 53 | 5.625 | 0.625 | 0.533333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113208 | 53 | 2 | 31 | 26.5 | 0.957447 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
050fe0e2db0d86deb4eeb2760ae579b4c0b6b006 | 11,675 | py | Python | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/viper/calculators/calc_global.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 69 | 2021-12-16T01:34:09.000Z | 2022-03-31T08:27:39.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/viper/calculators/calc_global.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 6 | 2022-01-12T18:22:08.000Z | 2022-03-25T10:19:27.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/viper/calculators/calc_global.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 21 | 2021-12-20T09:05:45.000Z | 2022-03-28T02:52:28.000Z | from pyradioconfig.parts.bobcat.calculators.calc_global import Calc_Global_Bobcat
from pycalcmodel.core.variable import ModelVariableFormat
from py_2_and_3_compatibility import *
class Calc_Global_Viper(Calc_Global_Bobcat):
def buildVariables(self, model):
# Build variables from the Ocelot calculations
super().buildVariables(model)
self._addModelRegister(model, 'RAC.RX.SYPFDCHPLPENRX', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.CTRL5.DEC2', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CFG.DEC1' , int, ModelVariableFormat.HEX )
self._addModelRegister(model, 'FEFILT0.CFG.CHFGAINREDUCTION' , int, ModelVariableFormat.HEX )
self._addModelRegister(model, 'FEFILT0.GAINCTRL.DEC1GAIN' , int, ModelVariableFormat.HEX )
self._addModelRegister(model, 'FEFILT0.GAINCTRL.BBSS' , int, ModelVariableFormat.HEX )
self._addModelRegister(model, 'FEFILT0.SRC2.SRC2RATIO', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.SRC2.SRC2ENABLE', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.SRC2.UPGAPS', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.GAINCTRL.DEC0GAIN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE00.SET0CSDCOEFF0', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE00.SET0CSDCOEFF1', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE00.SET0CSDCOEFF2', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE00.SET0CSDCOEFF3', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE01.SET0CSDCOEFF4', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE01.SET0CSDCOEFF5', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE01.SET0CSDCOEFF6', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE02.SET0CSDCOEFF7', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE02.SET0CSDCOEFF8', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE02.SET0CSDCOEFF9', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE03.SET0CSDCOEFF10', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE03.SET0CSDCOEFF11', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE10.SET1CSDCOEFF0', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE10.SET1CSDCOEFF1', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE10.SET1CSDCOEFF2', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE10.SET1CSDCOEFF3', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE11.SET1CSDCOEFF4', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE11.SET1CSDCOEFF5', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE11.SET1CSDCOEFF6', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE12.SET1CSDCOEFF7', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE12.SET1CSDCOEFF8', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE12.SET1CSDCOEFF9', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE13.SET1CSDCOEFF10', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE13.SET1CSDCOEFF11', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE00S.SET0CSDCOEFF0S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE00S.SET0CSDCOEFF1S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE00S.SET0CSDCOEFF2S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE00S.SET0CSDCOEFF3S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE00S.SET0CSDCOEFF4S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE00S.SET0CSDCOEFF5S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE00S.SET0CSDCOEFF6S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE01S.SET0CSDCOEFF7S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE01S.SET0CSDCOEFF8S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE01S.SET0CSDCOEFF9S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE01S.SET0CSDCOEFF10S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE01S.SET0CSDCOEFF11S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE10S.SET1CSDCOEFF0S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE10S.SET1CSDCOEFF1S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE10S.SET1CSDCOEFF2S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE10S.SET1CSDCOEFF3S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE10S.SET1CSDCOEFF4S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE10S.SET1CSDCOEFF5S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE10S.SET1CSDCOEFF6S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE11S.SET1CSDCOEFF7S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE11S.SET1CSDCOEFF8S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE11S.SET1CSDCOEFF9S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE11S.SET1CSDCOEFF10S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CHFCSDCOE11S.SET1CSDCOEFF11S', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CFG.CHFCOEFFFWSWEN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CFG.CHFCOEFFFWSWSEL', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CFG.CHFCOEFFSWEN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.CFG.CHFCOEFFSWSEL', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DIGMIXCTRL.DIGIQSWAPEN' , int, ModelVariableFormat.HEX )
self._addModelRegister(model, 'FEFILT0.DIGMIXCTRL.DIGMIXFREQ', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DIGMIXCTRL.MIXERCONJ', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DIGMIXCTRL.DIGMIXFBENABLE', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMP.DCGAINGEAREN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMP.DCGAINGEAR', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMP.DCGAINGEARSMPS', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMP.DCESTIEN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMP.DCCOMPEN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMP.DCRSTEN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMP.DCCOMPFREEZE', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMP.DCCOMPGEAR', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMP.DCLIMIT', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMPFILTINIT.DCCOMPINITVALI', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMPFILTINIT.DCCOMPINITVALQ', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'FEFILT0.DCCOMPFILTINIT.DCCOMPINIT', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXCORR.TXDGAIN6DB', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXCORR.TXDGAIN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXCORR.TXGAINIMB', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXCORR.TXPHSIMB', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXCORR.TXFREQCORR', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXMISC.FORCECLKEN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXMISC.TXIQIMBEN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXMISC.TXINTPEN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXMISC.TXDSEN', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXMISC.TXIQSWAP', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXMISC.TXDACFORMAT', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXMISC.TXDACFORCE', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXMISC.TXDCI', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXMISC.TXDCQ', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.TXMISC.BR2M', int, ModelVariableFormat.HEX)
self._addModelVariable(model, 'br2m', int, ModelVariableFormat.DECIMAL)
self._addModelActual(model, 'shaping_filter_gain_iqmod', float, ModelVariableFormat.DECIMAL)
def _add_SHAPING_regs(self, model):
self._addModelRegister(model, 'MODEM.SHAPING2.COEFF9', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.SHAPING2.COEFF10' , int, ModelVariableFormat.HEX )
self._addModelRegister(model, 'MODEM.SHAPING2.COEFF11' , int, ModelVariableFormat.HEX )
self._addModelRegister(model, 'MODEM.SHAPING3.COEFF12' , int, ModelVariableFormat.HEX )
self._addModelRegister(model, 'MODEM.SHAPING3.COEFF13' , int, ModelVariableFormat.HEX )
self._addModelRegister(model, 'MODEM.SHAPING3.COEFF14' , int, ModelVariableFormat.HEX )
self._addModelRegister(model, 'MODEM.SHAPING3.COEFF15' , int, ModelVariableFormat.HEX )
def _add_MODEM_RXRESTART(self, model):
self._addModelRegister(model, 'MODEM.RXRESTART.RXRESTARTB4PREDET', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.RXRESTART.RXRESTARTMATAP', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.RXRESTART.RXRESTARTMALATCHSEL', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.RXRESTART.RXRESTARTMACOMPENSEL', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.RXRESTART.RXRESTARTMATHRESHOLD', int, ModelVariableFormat.HEX)
self._addModelRegister(model, 'MODEM.RXRESTART.RXRESTARTUPONMARSSI', int, ModelVariableFormat.HEX)
def _add_TXBR_regs(self, model):
self._addModelRegister(model, 'MODEM.TXBR.TXBRNUM', int, ModelVariableFormat.HEX)
| 88.44697 | 111 | 0.75863 | 1,042 | 11,675 | 8.373321 | 0.170825 | 0.272321 | 0.30659 | 0.345673 | 0.796103 | 0.787049 | 0.782579 | 0.766418 | 0 | 0 | 0 | 0.03094 | 0.136274 | 11,675 | 131 | 112 | 89.122137 | 0.834292 | 0.003769 | 0 | 0 | 0 | 0 | 0.268983 | 0.254794 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033898 | false | 0 | 0.025424 | 0 | 0.067797 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
058b5ca0c374af1e36cfbbaa49ba541aca6fde3d | 96 | py | Python | venv/lib/python3.8/site-packages/setuptools/command/bdist_egg.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/setuptools/command/bdist_egg.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/setuptools/command/bdist_egg.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/fa/ea/62/07a7c5b66f1c412423d4b4435691b5f93d78dc3b170af5747e1d37bbb5 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 96 | 1 | 96 | 96 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
554a76df40e86a905ec7beb050e88706b55abee9 | 29 | py | Python | main.py | frimik/slack_status | 2bf713cc69e227dad2e835b5c7d85ea2da9d6d92 | [
"MIT"
] | null | null | null | main.py | frimik/slack_status | 2bf713cc69e227dad2e835b5c7d85ea2da9d6d92 | [
"MIT"
] | null | null | null | main.py | frimik/slack_status | 2bf713cc69e227dad2e835b5c7d85ea2da9d6d92 | [
"MIT"
] | null | null | null | from slack_status import app
| 14.5 | 28 | 0.862069 | 5 | 29 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
55506f33557db0709d5ab33afde8c76d3c5f3913 | 208 | py | Python | run_services.py | ojlangnes/digital_impersonator | cf2fa9cb9cfd78e1f2978ec7cfcebde3ef804d8b | [
"MIT"
] | null | null | null | run_services.py | ojlangnes/digital_impersonator | cf2fa9cb9cfd78e1f2978ec7cfcebde3ef804d8b | [
"MIT"
] | null | null | null | run_services.py | ojlangnes/digital_impersonator | cf2fa9cb9cfd78e1f2978ec7cfcebde3ef804d8b | [
"MIT"
] | null | null | null | from sys import executable
from subprocess import Popen
Popen([executable, "front_end_worker.py"])
Popen([executable, "back_end_worker.py"])
Popen([executable, "front_end.py"])
input("Press ENTER to exit.") | 26 | 42 | 0.769231 | 30 | 208 | 5.166667 | 0.533333 | 0.290323 | 0.258065 | 0.296774 | 0.335484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091346 | 208 | 8 | 43 | 26 | 0.820106 | 0 | 0 | 0 | 0 | 0 | 0.330144 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
55aced861b5b51526e1b1f28379879d8801d5061 | 165 | py | Python | teme/admin.py | MDS-PBSCB/teme | f750713801246bda523d372d3c953b3c2bed2e6c | [
"MIT"
] | null | null | null | teme/admin.py | MDS-PBSCB/teme | f750713801246bda523d372d3c953b3c2bed2e6c | [
"MIT"
] | null | null | null | teme/admin.py | MDS-PBSCB/teme | f750713801246bda523d372d3c953b3c2bed2e6c | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Teacher, Course, Rating
admin.site.register(Teacher)
admin.site.register(Course)
admin.site.register(Rating)
| 18.333333 | 43 | 0.806061 | 23 | 165 | 5.782609 | 0.478261 | 0.203008 | 0.383459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09697 | 165 | 8 | 44 | 20.625 | 0.892617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e94fa3b4c36b58eef80105ba4526cf6de42b78a3 | 49 | py | Python | sharetempus/__init__.py | ShareTempus/sharetempus-python | 3e6285d013c00f0f466a03f5d2b8be45946d731a | [
"MIT"
] | 1 | 2020-05-12T18:08:54.000Z | 2020-05-12T18:08:54.000Z | sharetempus/__init__.py | ShareTempus/sharetempus-python | 3e6285d013c00f0f466a03f5d2b8be45946d731a | [
"MIT"
] | null | null | null | sharetempus/__init__.py | ShareTempus/sharetempus-python | 3e6285d013c00f0f466a03f5d2b8be45946d731a | [
"MIT"
] | null | null | null | from sharetempus.ShareTempus import ShareTempus;
| 24.5 | 48 | 0.877551 | 5 | 49 | 8.6 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 1 | 49 | 49 | 0.955556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e97ca67456f764501fac6fcb37278561294ca11b | 6,191 | py | Python | imcsdk/mometa/memory/MemoryPersistentMemoryLogicalConfiguration.py | ecoen66/imcsdk | b10eaa926a5ee57cea7182ae0adc8dd1c818b0ab | [
"Apache-2.0"
] | 31 | 2016-06-14T07:23:59.000Z | 2021-09-12T17:17:26.000Z | imcsdk/mometa/memory/MemoryPersistentMemoryLogicalConfiguration.py | sthagen/imcsdk | 1831eaecb5960ca03a8624b1579521749762b932 | [
"Apache-2.0"
] | 109 | 2016-05-25T03:56:56.000Z | 2021-10-18T02:58:12.000Z | imcsdk/mometa/memory/MemoryPersistentMemoryLogicalConfiguration.py | sthagen/imcsdk | 1831eaecb5960ca03a8624b1579521749762b932 | [
"Apache-2.0"
] | 67 | 2016-05-17T05:53:56.000Z | 2022-03-24T15:52:53.000Z | """This module contains the general information for MemoryPersistentMemoryLogicalConfiguration ManagedObject."""
from ...imcmo import ManagedObject
from ...imccoremeta import MoPropertyMeta, MoMeta
from ...imcmeta import VersionMeta
class MemoryPersistentMemoryLogicalConfigurationConsts:
ADMIN_ACTION_DISABLE_SECURITY = "disable-security"
ADMIN_ACTION_ENABLE_SECURITY = "enable-security"
ADMIN_ACTION_MODIFY_PASSPHRASE = "modify-passphrase"
ADMIN_ACTION_RESET_FACTORY_DEFAULT = "reset-factory-default"
ADMIN_ACTION_SECURE_ERASE = "secure-erase"
ADMIN_ACTION_UNLOCK_DIMMS = "unlock-dimms"
FORCE_CONFIG_FALSE = "false"
FORCE_CONFIG_NO = "no"
FORCE_CONFIG_TRUE = "true"
FORCE_CONFIG_YES = "yes"
MGMT_MODE_HOST_MANAGED = "host-managed"
MGMT_MODE_IMC_MANAGED = "imc-managed"
REBOOT_ON_UPDATE_FALSE = "false"
REBOOT_ON_UPDATE_NO = "no"
REBOOT_ON_UPDATE_TRUE = "true"
REBOOT_ON_UPDATE_YES = "yes"
class MemoryPersistentMemoryLogicalConfiguration(ManagedObject):
"""This is MemoryPersistentMemoryLogicalConfiguration class."""
consts = MemoryPersistentMemoryLogicalConfigurationConsts()
naming_props = set([])
mo_meta = {
"classic": MoMeta("MemoryPersistentMemoryLogicalConfiguration", "memoryPersistentMemoryLogicalConfiguration", "pmemory-lconfig", VersionMeta.Version404b, "InputOutput", 0xff, [], ["admin", "read-only", "user"], ['computeBoard'], ['memoryPersistentMemoryDimms', 'memoryPersistentMemoryGoal', 'memoryPersistentMemoryLogicalNamespace', 'memoryPersistentMemorySecurity'], [None]),
"modular": MoMeta("MemoryPersistentMemoryLogicalConfiguration", "memoryPersistentMemoryLogicalConfiguration", "pmemory-lconfig", VersionMeta.Version404b, "InputOutput", 0xff, [], ["admin", "read-only", "user"], ['computeBoard'], ['memoryPersistentMemoryDimms', 'memoryPersistentMemoryGoal', 'memoryPersistentMemoryLogicalNamespace', 'memoryPersistentMemorySecurity'], [None])
}
prop_meta = {
"classic": {
"admin_action": MoPropertyMeta("admin_action", "adminAction", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x2, 0, 510, None, ["disable-security", "enable-security", "modify-passphrase", "reset-factory-default", "secure-erase", "unlock-dimms"], []),
"dn": MoPropertyMeta("dn", "dn", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x4, 0, 255, None, [], []),
"force_config": MoPropertyMeta("force_config", "forceConfig", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x8, None, None, None, ["No", "Yes", "false", "no", "true", "yes"], []),
"mgmt_mode": MoPropertyMeta("mgmt_mode", "mgmtMode", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x10, None, None, None, ["host-managed", "imc-managed"], []),
"reboot_on_update": MoPropertyMeta("reboot_on_update", "rebootOnUpdate", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x20, None, None, None, ["No", "Yes", "false", "no", "true", "yes"], []),
"rn": MoPropertyMeta("rn", "rn", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x40, 0, 255, None, [], []),
"status": MoPropertyMeta("status", "status", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x80, None, None, None, ["", "created", "deleted", "modified", "removed"], []),
"child_action": MoPropertyMeta("child_action", "childAction", "string", VersionMeta.Version404b, MoPropertyMeta.INTERNAL, None, None, None, None, [], []),
},
"modular": {
"admin_action": MoPropertyMeta("admin_action", "adminAction", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x2, 0, 510, None, ["disable-security", "enable-security", "modify-passphrase", "reset-factory-default", "secure-erase", "unlock-dimms"], []),
"dn": MoPropertyMeta("dn", "dn", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x4, 0, 255, None, [], []),
"force_config": MoPropertyMeta("force_config", "forceConfig", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x8, None, None, None, ["No", "Yes", "no", "yes"], []),
"mgmt_mode": MoPropertyMeta("mgmt_mode", "mgmtMode", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x10, None, None, None, ["host-managed", "imc-managed"], []),
"reboot_on_update": MoPropertyMeta("reboot_on_update", "rebootOnUpdate", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x20, None, None, None, ["No", "Yes", "no", "yes"], []),
"rn": MoPropertyMeta("rn", "rn", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x40, 0, 255, None, [], []),
"status": MoPropertyMeta("status", "status", "string", VersionMeta.Version404b, MoPropertyMeta.READ_WRITE, 0x80, None, None, None, ["", "created", "deleted", "modified", "removed"], []),
"child_action": MoPropertyMeta("child_action", "childAction", "string", VersionMeta.Version404b, MoPropertyMeta.INTERNAL, None, None, None, None, [], []),
},
}
prop_map = {
"classic": {
"adminAction": "admin_action",
"dn": "dn",
"forceConfig": "force_config",
"mgmtMode": "mgmt_mode",
"rebootOnUpdate": "reboot_on_update",
"rn": "rn",
"status": "status",
"childAction": "child_action",
},
"modular": {
"adminAction": "admin_action",
"dn": "dn",
"forceConfig": "force_config",
"mgmtMode": "mgmt_mode",
"rebootOnUpdate": "reboot_on_update",
"rn": "rn",
"status": "status",
"childAction": "child_action",
},
}
def __init__(self, parent_mo_or_dn, **kwargs):
self._dirty_mask = 0
self.admin_action = None
self.force_config = None
self.mgmt_mode = None
self.reboot_on_update = None
self.status = None
self.child_action = None
ManagedObject.__init__(self, "MemoryPersistentMemoryLogicalConfiguration", parent_mo_or_dn, **kwargs)
"""
Semi-Quantum Conference Key Agreement (SQCKA)
Author:
- Ruben Andre Barreiro (r.barreiro@campus.fct.unl.pt)
Supervisors:
- Andre Nuno Souto (ansouto@fc.ul.pt)
- Antonio Maria Ravara (aravara@fct.unl.pt)
Acknowledgments:
- Paulo Alexandre Mateus (pmat@math.ist.utl.pt)
"""
# Import Packages and Libraries
# Import Unittest for Python's Unitary Tests
import unittest
# Import N-Dimensional Arrays and Squared Roots from NumPy
from numpy import array, sqrt
# Import assert_allclose from NumPy.Testing
from numpy.testing import assert_allclose
# Import Aer and execute from Qiskit
from qiskit import Aer, execute
# Import QiskitQuantumCircuit from IBM_Qiskit.Circuit
from src.ibm_qiskit.circuit import QiskitQuantumCircuit
# Import QiskitQuantumRegister from IBM_Qiskit.Circuit.Registers.Quantum
from src.ibm_qiskit.circuit.registers.quantum import QiskitQuantumRegister
# Import QiskitClassicalRegister from IBM_Qiskit.Circuit.Registers.Classical
from src.ibm_qiskit.circuit.registers.classical import QiskitClassicalRegister
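The X-Basis tests below all reduce to one linear-algebra fact: preparing/measuring in the X-Basis (Diagonal Basis) amounts to applying a Hadamard Gate before a standard Z-Basis measurement. A minimal NumPy-only sketch of that rotation (illustrative names, independent of the project's wrapper classes):

```python
from numpy import array, sqrt

# Hadamard Gate: maps |0⟩ ↦ |+⟩ and |1⟩ ↦ |-⟩ (and back, since H is its own inverse)
HADAMARD = (1. / sqrt(2.)) * array([[1., 1.],
                                    [1., -1.]])

def rotate_x_to_z_basis(state_vector):
    """Rotate a state so that an X-Basis measurement becomes a Z-Basis one."""
    return HADAMARD @ state_vector

# |0⟩ rotates to |+⟩ = (1/sqrt(2))(|0⟩ + |1⟩), the vector asserted in Test #1 below
plus_state = rotate_x_to_z_basis(array([1. + 0.j, 0. + 0.j]))
```

The same matrix accounts for Tests #2 to #4: H|1⟩ = |-⟩, H|+⟩ = |0⟩, and H|-⟩ = |1⟩.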
# Test Cases for the Prepare/Measure in the X-Basis (Diagonal Basis)
class PrepareMeasureXBasisTests(unittest.TestCase):
# Test #1 for the Prepare/Measure in the X-Basis (Diagonal Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Qubit is prepared/measured in the X-Basis (Diagonal Basis);
def test_prepare_measure_x_basis_1(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_x_basis_1 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeasxbasis1", num_qubits)
qiskit_classical_register_prepare_measure_x_basis_1 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeasxbasis1", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_x_basis_1 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeasxbasis1",
qiskit_quantum_register_prepare_measure_x_basis_1,
qiskit_classical_register_prepare_measure_x_basis_1,
global_phase=0)
# Prepare/Measure the Qubit in the X-Basis (Diagonal Basis)
qiskit_quantum_circuit_prepare_measure_x_basis_1 \
.prepare_measure_single_qubit_in_x_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_x_basis_1.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the X-Basis (Diagonal Basis) is performed
assert_allclose(final_state_vector, array([((1. / sqrt(2.)) + 0.j), ((1. / sqrt(2.)) + 0.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #2 for the Prepare/Measure in the X-Basis (Diagonal Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-X Gate is applied to the 1st Qubit, then |0⟩ ↦ |1⟩;
# 3) The Qubit is prepared/measured in the X-Basis (Diagonal Basis);
def test_prepare_measure_x_basis_2(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_x_basis_2 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeasxbasis2", num_qubits)
qiskit_classical_register_prepare_measure_x_basis_2 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeasxbasis2", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_x_basis_2 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeasxbasis2",
qiskit_quantum_register_prepare_measure_x_basis_2,
qiskit_classical_register_prepare_measure_x_basis_2,
global_phase=0)
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |1⟩)
qiskit_quantum_circuit_prepare_measure_x_basis_2 \
.apply_pauli_x(qiskit_quantum_register_prepare_measure_x_basis_2.quantum_register[0])
# Prepare/Measure the Qubit in the X-Basis (Diagonal Basis)
qiskit_quantum_circuit_prepare_measure_x_basis_2 \
.prepare_measure_single_qubit_in_x_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_x_basis_2.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the X-Basis (Diagonal Basis) is performed
assert_allclose(final_state_vector, array([((1. / sqrt(2.)) + 0.j), (-(1. / sqrt(2.)) + 0.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #3 for the Prepare/Measure in the X-Basis (Diagonal Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Hadamard Gate is applied to the 1st Qubit, then |0⟩ ↦ |+⟩;
# 3) The Qubit is prepared/measured in the X-Basis (Diagonal Basis);
def test_prepare_measure_x_basis_3(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_x_basis_3 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeasxbasis3", num_qubits)
qiskit_classical_register_prepare_measure_x_basis_3 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeasxbasis3", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_x_basis_3 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeasxbasis3",
qiskit_quantum_register_prepare_measure_x_basis_3,
qiskit_classical_register_prepare_measure_x_basis_3,
global_phase=0)
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |+⟩)
qiskit_quantum_circuit_prepare_measure_x_basis_3 \
.apply_hadamard(qiskit_quantum_register_prepare_measure_x_basis_3.quantum_register[0])
# Prepare/Measure the Qubit in the X-Basis (Diagonal Basis)
qiskit_quantum_circuit_prepare_measure_x_basis_3 \
.prepare_measure_single_qubit_in_x_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_x_basis_3.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the X-Basis (Diagonal Basis) is performed
assert_allclose(final_state_vector, array([(1. + 0.j), (0. + 0.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #4 for the Prepare/Measure in the X-Basis (Diagonal Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-X Gate is applied to the 1st Qubit, then |0⟩ ↦ |1⟩;
# 3) The Hadamard Gate is applied to the 1st Qubit, then |1⟩ ↦ |-⟩;
# 4) The Qubit is prepared/measured in the X-Basis (Diagonal Basis);
def test_prepare_measure_x_basis_4(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_x_basis_4 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeasxbasis4", num_qubits)
qiskit_classical_register_prepare_measure_x_basis_4 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeasxbasis4", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_x_basis_4 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeasxbasis4",
qiskit_quantum_register_prepare_measure_x_basis_4,
qiskit_classical_register_prepare_measure_x_basis_4,
global_phase=0)
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |1⟩)
qiskit_quantum_circuit_prepare_measure_x_basis_4 \
.apply_pauli_x(qiskit_quantum_register_prepare_measure_x_basis_4.quantum_register[0])
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit (|1⟩ ↦ |-⟩)
qiskit_quantum_circuit_prepare_measure_x_basis_4 \
.apply_hadamard(qiskit_quantum_register_prepare_measure_x_basis_4.quantum_register[0])
# Prepare/Measure the Qubit in the X-Basis (Diagonal Basis)
qiskit_quantum_circuit_prepare_measure_x_basis_4 \
.prepare_measure_single_qubit_in_x_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_x_basis_4.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the X-Basis (Diagonal Basis) is performed
assert_allclose(final_state_vector, array([(0. + 0.j), (1. + 0.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test Cases for the Prepare/Measure in the Y-Basis (Diagonal Basis)
class PrepareMeasureYBasisTests(unittest.TestCase):
# Test #1 for the Prepare/Measure in the Y-Basis (Diagonal Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Qubit is prepared/measured in the Y-Basis (Diagonal Basis);
def test_prepare_measure_y_basis_1(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_y_basis_1 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeasybasis1", num_qubits)
qiskit_classical_register_prepare_measure_y_basis_1 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeasybasis1", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_y_basis_1 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeasybasis1",
qiskit_quantum_register_prepare_measure_y_basis_1,
qiskit_classical_register_prepare_measure_y_basis_1,
global_phase=0)
# Prepare/Measure the Qubit in the Y-Basis (Diagonal Basis)
qiskit_quantum_circuit_prepare_measure_y_basis_1 \
.prepare_measure_single_qubit_in_y_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_y_basis_1.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the Y-Basis (Diagonal Basis) is performed
assert_allclose(final_state_vector, array([((1. / sqrt(2.)) + 0.j), (1. / sqrt(2.)) * (0. + 1.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #2 for the Prepare/Measure in the Y-Basis (Diagonal Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-X Gate is applied to the 1st Qubit, then |0⟩ ↦ |1⟩;
# 3) The Qubit is prepared/measured in the Y-Basis (Diagonal Basis);
def test_prepare_measure_y_basis_2(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_y_basis_2 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeasybasis2", num_qubits)
qiskit_classical_register_prepare_measure_y_basis_2 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeasybasis2", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_y_basis_2 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeasybasis2",
qiskit_quantum_register_prepare_measure_y_basis_2,
qiskit_classical_register_prepare_measure_y_basis_2,
global_phase=0)
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |1⟩)
qiskit_quantum_circuit_prepare_measure_y_basis_2 \
.apply_pauli_x(qiskit_quantum_register_prepare_measure_y_basis_2.quantum_register[0])
# Prepare/Measure the Qubit in the Y-Basis (Diagonal Basis)
qiskit_quantum_circuit_prepare_measure_y_basis_2 \
.prepare_measure_single_qubit_in_y_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_y_basis_2.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the Y-Basis (Diagonal Basis) is performed
assert_allclose(final_state_vector, array([((1. / sqrt(2.)) + 0.j), -(1. / sqrt(2.)) * (0. + 1.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #3 for the Prepare/Measure in the Y-Basis (Diagonal Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Hadamard Gate is applied to the 1st Qubit, then |0⟩ ↦ |+⟩;
# 3) The Qubit is prepared/measured in the Y-Basis (Diagonal Basis);
def test_prepare_measure_y_basis_3(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_y_basis_3 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeasybasis3", num_qubits)
qiskit_classical_register_prepare_measure_y_basis_3 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeasybasis3", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_y_basis_3 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeasybasis3",
qiskit_quantum_register_prepare_measure_y_basis_3,
qiskit_classical_register_prepare_measure_y_basis_3,
global_phase=0)
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |+⟩)
qiskit_quantum_circuit_prepare_measure_y_basis_3 \
.apply_hadamard(qiskit_quantum_register_prepare_measure_y_basis_3.quantum_register[0])
# Prepare/Measure the Qubit in the Y-Basis (Diagonal Basis)
qiskit_quantum_circuit_prepare_measure_y_basis_3 \
.prepare_measure_single_qubit_in_y_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_y_basis_3.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the Y-Basis (Diagonal Basis) is performed
assert_allclose(final_state_vector, array([(1. + 0.j), (0. + 0.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #4 for the Prepare/Measure in the Y-Basis (Diagonal Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-X Gate is applied to the 1st Qubit, then |0⟩ ↦ |1⟩;
# 3) The Hadamard Gate is applied to the 1st Qubit, then |1⟩ ↦ |-⟩;
# 4) The Qubit is prepared/measured in the Y-Basis (Diagonal Basis);
def test_prepare_measure_y_basis_4(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_y_basis_4 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeasybasis4", num_qubits)
qiskit_classical_register_prepare_measure_y_basis_4 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeasybasis4", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_y_basis_4 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeasybasis4",
qiskit_quantum_register_prepare_measure_y_basis_4,
qiskit_classical_register_prepare_measure_y_basis_4,
global_phase=0)
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |1⟩)
qiskit_quantum_circuit_prepare_measure_y_basis_4 \
.apply_pauli_x(qiskit_quantum_register_prepare_measure_y_basis_4.quantum_register[0])
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit (|1⟩ ↦ |-⟩)
qiskit_quantum_circuit_prepare_measure_y_basis_4 \
.apply_hadamard(qiskit_quantum_register_prepare_measure_y_basis_4.quantum_register[0])
# Prepare/Measure the Qubit in the Y-Basis (Diagonal Basis)
qiskit_quantum_circuit_prepare_measure_y_basis_4 \
.prepare_measure_single_qubit_in_y_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_y_basis_4.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the Y-Basis (Diagonal Basis) is performed
assert_allclose(final_state_vector, array([(0. + 0.j), (0. + 1.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
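The expected Y-Basis state vectors in the four tests above are exactly those produced by the composition S·H (a Hadamard Gate followed by the Phase Gate S): |0⟩ ↦ (1/√2)(|0⟩ + i|1⟩), |1⟩ ↦ (1/√2)(|0⟩ - i|1⟩), |+⟩ ↦ |0⟩, and |-⟩ ↦ i|1⟩. A NumPy-only sketch of that composition (an illustration consistent with the assertions above, not necessarily the project's actual implementation):

```python
from numpy import array, sqrt

# Hadamard Gate and Phase Gate S, written out explicitly
HADAMARD = (1. / sqrt(2.)) * array([[1., 1.],
                                    [1., -1.]])
PHASE_S = array([[1., 0.],
                 [0., 1.j]])

# S·H sends |0⟩ to (1/sqrt(2))(|0⟩ + i|1⟩), the vector asserted in Test #1 above
Y_BASIS_ROTATION = PHASE_S @ HADAMARD
state_from_zero = Y_BASIS_ROTATION @ array([1. + 0.j, 0. + 0.j])
```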
# Test Cases for the Prepare/Measure in the Z-Basis (Computational Basis)
class PrepareMeasureZBasisTests(unittest.TestCase):
# Test #1 for the Prepare/Measure in the Z-Basis (Computational Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Qubit is prepared/measured in the Z-Basis (Computational Basis);
def test_prepare_measure_z_basis_1(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_z_basis_1 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeaszbasis1", num_qubits)
qiskit_classical_register_prepare_measure_z_basis_1 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeaszbasis1", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_z_basis_1 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeaszbasis1",
qiskit_quantum_register_prepare_measure_z_basis_1,
qiskit_classical_register_prepare_measure_z_basis_1,
global_phase=0)
# Prepare/Measure the Qubit in the Z-Basis (Computational Basis)
qiskit_quantum_circuit_prepare_measure_z_basis_1 \
.prepare_measure_single_qubit_in_z_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_z_basis_1.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the Z-Basis (Computational Basis) is performed
assert_allclose(final_state_vector, array([(1. + 0.j), (0. + 0.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #2 for the Prepare/Measure in the Z-Basis (Computational Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-X Gate is applied to the 1st Qubit, then |0⟩ ↦ |1⟩;
# 3) The Qubit is prepared/measured in the Z-Basis (Computational Basis);
def test_prepare_measure_z_basis_2(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_z_basis_2 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeaszbasis2", num_qubits)
qiskit_classical_register_prepare_measure_z_basis_2 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeaszbasis2", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_z_basis_2 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeaszbasis2",
qiskit_quantum_register_prepare_measure_z_basis_2,
qiskit_classical_register_prepare_measure_z_basis_2,
global_phase=0)
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |1⟩)
qiskit_quantum_circuit_prepare_measure_z_basis_2 \
.apply_pauli_x(qiskit_quantum_register_prepare_measure_z_basis_2.quantum_register[0])
# Prepare/Measure the Qubit in the Z-Basis (Computational Basis)
qiskit_quantum_circuit_prepare_measure_z_basis_2 \
.prepare_measure_single_qubit_in_z_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_z_basis_2.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the Z-Basis (Computational Basis) is performed
assert_allclose(final_state_vector, array([(0. + 0.j), (1. + 0.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #3 for the Prepare/Measure in the Z-Basis (Computational Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Hadamard Gate is applied to the 1st Qubit, then |0⟩ ↦ |+⟩;
# 3) The Qubit is prepared/measured in the Z-Basis (Computational Basis);
def test_prepare_measure_z_basis_3(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_z_basis_3 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeaszbasis3", num_qubits)
qiskit_classical_register_prepare_measure_z_basis_3 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeaszbasis3", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_z_basis_3 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeaszbasis3",
qiskit_quantum_register_prepare_measure_z_basis_3,
qiskit_classical_register_prepare_measure_z_basis_3,
global_phase=0)
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |+⟩)
qiskit_quantum_circuit_prepare_measure_z_basis_3 \
.apply_hadamard(qiskit_quantum_register_prepare_measure_z_basis_3.quantum_register[0])
# Prepare/Measure the Qubit in the Z-Basis (Computational Basis)
qiskit_quantum_circuit_prepare_measure_z_basis_3 \
.prepare_measure_single_qubit_in_z_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_z_basis_3.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the Z-Basis (Computational Basis) is performed
assert_allclose(final_state_vector, array([((1. / sqrt(2.)) + 0.j), ((1. / sqrt(2.)) + 0.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #4 for the Prepare/Measure in the Z-Basis (Computational Basis)
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-X Gate is applied to the 1st Qubit, then |0⟩ ↦ |1⟩;
# 3) The Hadamard Gate is applied to the 1st Qubit, then |1⟩ ↦ |-⟩;
# 4) The Qubit is prepared/measured in the Z-Basis (Computational Basis);
def test_prepare_measure_z_basis_4(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_prepare_measure_z_basis_4 = \
QiskitQuantumRegister.QiskitQuantumRegister("qrmeaszbasis4", num_qubits)
qiskit_classical_register_prepare_measure_z_basis_4 = \
QiskitClassicalRegister.QiskitClassicalRegister("crmeaszbasis4", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_prepare_measure_z_basis_4 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcmeaszbasis4",
qiskit_quantum_register_prepare_measure_z_basis_4,
qiskit_classical_register_prepare_measure_z_basis_4,
global_phase=0)
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |1⟩)
qiskit_quantum_circuit_prepare_measure_z_basis_4 \
.apply_pauli_x(qiskit_quantum_register_prepare_measure_z_basis_4.quantum_register[0])
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit (|1⟩ ↦ |-⟩)
qiskit_quantum_circuit_prepare_measure_z_basis_4 \
.apply_hadamard(qiskit_quantum_register_prepare_measure_z_basis_4.quantum_register[0])
# Prepare/Measure the Qubit in the Z-Basis (Computational Basis)
qiskit_quantum_circuit_prepare_measure_z_basis_4 \
.prepare_measure_single_qubit_in_z_basis(0, 0, 0, 0, is_final_measurement=False)
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = execute(qiskit_quantum_circuit_prepare_measure_z_basis_4.quantum_circuit,
state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Prepare/Measure in the Z-Basis (Computational Basis) is performed
assert_allclose(final_state_vector, array([((1. / sqrt(2.)) + 0.j), -((1. / sqrt(2.)) + 0.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test Cases for the Pauli-I Gate
class PauliIGateTests(unittest.TestCase):
# Test #1 for the Pauli-I Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-I Gate is applied to the 1st Qubit, then |0⟩ ↦ |0⟩;
def test_apply_pauli_i_1(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_i_1 = QiskitQuantumRegister.QiskitQuantumRegister("qrpaulii1", num_qubits)
qiskit_classical_register_pauli_i_1 = QiskitClassicalRegister.QiskitClassicalRegister("crpaulii1", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum Register and one Classical Register
qiskit_quantum_circuit_pauli_i_1 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpaulii1",
qiskit_quantum_register_pauli_i_1,
qiskit_classical_register_pauli_i_1,
global_phase=0)
# Apply the Pauli-I Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |0⟩)
qiskit_quantum_circuit_pauli_i_1.apply_pauli_i(qiskit_quantum_register_pauli_i_1.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_i_1.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Pauli-I Gate is applied
assert_allclose(final_state_vector, array([(1. + 0.j), (0. + 0.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #2 for the Pauli-I Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-I Gate is applied to the 1st Qubit, then |0⟩ ↦ |0⟩;
# 3) The Pauli-I Gate is applied again to the 1st Qubit, then |0⟩ ↦ |0⟩;
def test_apply_pauli_i_2(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_i_2 = QiskitQuantumRegister.QiskitQuantumRegister("qrpaulii2", num_qubits)
qiskit_classical_register_pauli_i_2 = QiskitClassicalRegister.QiskitClassicalRegister("crpaulii2", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_pauli_i_2 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpaulii2",
qiskit_quantum_register_pauli_i_2,
qiskit_classical_register_pauli_i_2,
global_phase=0)
# Apply the Pauli-I Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |0⟩)
qiskit_quantum_circuit_pauli_i_2.apply_pauli_i(qiskit_quantum_register_pauli_i_2.quantum_register[0])
# Apply the Pauli-I Gate to the 1st Qubit of the Quantum Circuit, again (|0⟩ ↦ |0⟩)
qiskit_quantum_circuit_pauli_i_2.apply_pauli_i(qiskit_quantum_register_pauli_i_2.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_i_2.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the two Pauli-I Gates are applied
assert_allclose(final_state_vector, array([(1. + 0.j), (0. + 0.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test Cases for the Pauli-X Gate
class PauliXGateTests(unittest.TestCase):
# Test #1 for the Pauli-X Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-X Gate is applied to the 1st Qubit, then |0⟩ ↦ |1⟩;
def test_apply_pauli_x_1(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_x_1 = QiskitQuantumRegister.QiskitQuantumRegister("qrpaulix1", num_qubits)
qiskit_classical_register_pauli_x_1 = QiskitClassicalRegister.QiskitClassicalRegister("crpaulix1", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_pauli_x_1 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpaulix1",
qiskit_quantum_register_pauli_x_1,
qiskit_classical_register_pauli_x_1,
global_phase=0)
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |1⟩)
qiskit_quantum_circuit_pauli_x_1.apply_pauli_x(qiskit_quantum_register_pauli_x_1.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_x_1.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Pauli-X Gate is applied
assert_allclose(final_state_vector, array([(0. + 0.j), (1. + 0.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #2 for the Pauli-X Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-X Gate is applied to the 1st Qubit, then |0⟩ ↦ |1⟩;
# 3) The Pauli-X Gate is applied again to the 1st Qubit, then |1⟩ ↦ |0⟩;
def test_apply_pauli_x_2(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_x_2 = QiskitQuantumRegister.QiskitQuantumRegister("qrpaulix2", num_qubits)
qiskit_classical_register_pauli_x_2 = QiskitClassicalRegister.QiskitClassicalRegister("crpaulix2", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_pauli_x_2 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpaulix2",
qiskit_quantum_register_pauli_x_2,
qiskit_classical_register_pauli_x_2,
global_phase=0)
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |1⟩)
qiskit_quantum_circuit_pauli_x_2.apply_pauli_x(qiskit_quantum_register_pauli_x_2.quantum_register[0])
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit, again (|1⟩ ↦ |0⟩)
qiskit_quantum_circuit_pauli_x_2.apply_pauli_x(qiskit_quantum_register_pauli_x_2.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_x_2.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the two Pauli-X Gates are applied
assert_allclose(final_state_vector, array([(1. + 0.j), (0. + 0.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #3 for the Pauli-X Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Hadamard Gate is applied to the 1st Qubit, then |0⟩ ↦ |+⟩;
# 3) The Pauli-X Gate is applied to the 1st Qubit, then |+⟩ ↦ |+⟩;
def test_apply_pauli_x_3(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_x_3 = QiskitQuantumRegister.QiskitQuantumRegister("qrpaulix3", num_qubits)
qiskit_classical_register_pauli_x_3 = QiskitClassicalRegister.QiskitClassicalRegister("crpaulix3", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_pauli_x_3 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpaulix3",
qiskit_quantum_register_pauli_x_3,
qiskit_classical_register_pauli_x_3,
global_phase=0)
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |+⟩)
qiskit_quantum_circuit_pauli_x_3.apply_hadamard(qiskit_quantum_register_pauli_x_3.quantum_register[0])
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit (|+⟩ ↦ |+⟩)
qiskit_quantum_circuit_pauli_x_3.apply_pauli_x(qiskit_quantum_register_pauli_x_3.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_x_3.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Single-Qubit Gates (Hadamard and Pauli-X) are applied
assert_allclose(final_state_vector, array([((1. / sqrt(2.)) + 0.j), ((1. / sqrt(2.)) + 0.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #4 for the Pauli-X Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Hadamard Gate is applied to the 1st Qubit, then |0⟩ ↦ |+⟩;
# 3) The Pauli-X Gate is applied to the 1st Qubit, then |+⟩ ↦ |+⟩;
# 4) The Hadamard Gate is applied again to the 1st Qubit, then |+⟩ ↦ |0⟩;
def test_apply_pauli_x_4(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_x_4 = QiskitQuantumRegister.QiskitQuantumRegister("qrpaulix4", num_qubits)
qiskit_classical_register_pauli_x_4 = QiskitClassicalRegister.QiskitClassicalRegister("crpaulix4", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_pauli_x_4 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpaulix4",
qiskit_quantum_register_pauli_x_4,
qiskit_classical_register_pauli_x_4,
global_phase=0)
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |+⟩)
qiskit_quantum_circuit_pauli_x_4.apply_hadamard(qiskit_quantum_register_pauli_x_4.quantum_register[0])
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit (|+⟩ ↦ |+⟩)
qiskit_quantum_circuit_pauli_x_4.apply_pauli_x(qiskit_quantum_register_pauli_x_4.quantum_register[0])
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit, again (|+⟩ ↦ |0⟩)
qiskit_quantum_circuit_pauli_x_4.apply_hadamard(qiskit_quantum_register_pauli_x_4.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_x_4.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Single-Qubit Gates (Hadamard and Pauli-X) are applied
assert_allclose(final_state_vector, array([(1. + 0.j), (0. + 0.j)]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
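# As a standalone cross-check of the Pauli-X expectations asserted above, the same
# state vectors can be reproduced with plain NumPy matrix algebra, with no Qiskit
# backend involved. This is only a sketch: the matrices below are the standard
# textbook definitions and are not part of the wrapper classes under test.

```python
import numpy as np

# Standard single-qubit matrices and the |0> basis state
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)

# X|0> = |1> (Test #1) and X.X = I, i.e. X is self-inverse (Test #2)
np.testing.assert_allclose(X @ ket0, np.array([0, 1], dtype=complex), atol=1e-7)
np.testing.assert_allclose(X @ X, np.eye(2), atol=1e-7)

# |+> = H|0> is a +1 eigenstate of X (Test #3), and H X H|0> = |0> (Test #4)
ket_plus = H @ ket0
np.testing.assert_allclose(X @ ket_plus, ket_plus, atol=1e-7)
np.testing.assert_allclose(H @ X @ H @ ket0, ket0, atol=1e-7)
```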
# Test Cases for the Pauli-Y Gate
class PauliYGateTests(unittest.TestCase):
# Test #1 for the Pauli-Y Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-Y Gate is applied to the 1st Qubit, then |0⟩ ↦ i|1⟩;
def test_apply_pauli_y_1(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_y_1 = QiskitQuantumRegister.QiskitQuantumRegister("qrpauliy1", num_qubits)
qiskit_classical_register_pauli_y_1 = QiskitClassicalRegister.QiskitClassicalRegister("crpauliy1", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_pauli_y_1 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpauliy1",
qiskit_quantum_register_pauli_y_1,
qiskit_classical_register_pauli_y_1,
global_phase=0)
# Apply the Pauli-Y Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ i|1⟩)
qiskit_quantum_circuit_pauli_y_1.apply_pauli_y(qiskit_quantum_register_pauli_y_1.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_y_1.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Pauli-Y Gate is applied
assert_allclose(final_state_vector, array([(0. + 0.j), (0. + 1.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #2 for the Pauli-Y Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-Y Gate is applied to the 1st Qubit, then |0⟩ ↦ i|1⟩;
# 3) The Pauli-Y Gate is applied again to the 1st Qubit, then i|1⟩ ↦ |0⟩;
def test_apply_pauli_y_2(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_y_2 = QiskitQuantumRegister.QiskitQuantumRegister("qrpauliy2", num_qubits)
qiskit_classical_register_pauli_y_2 = QiskitClassicalRegister.QiskitClassicalRegister("crpauliy2", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_pauli_y_2 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpauliy2",
qiskit_quantum_register_pauli_y_2,
qiskit_classical_register_pauli_y_2,
global_phase=0)
# Apply the Pauli-Y Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ i|1⟩)
qiskit_quantum_circuit_pauli_y_2.apply_pauli_y(qiskit_quantum_register_pauli_y_2.quantum_register[0])
# Apply the Pauli-Y Gate to the 1st Qubit of the Quantum Circuit, again (i|1⟩ ↦ |0⟩)
qiskit_quantum_circuit_pauli_y_2.apply_pauli_y(qiskit_quantum_register_pauli_y_2.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_y_2.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the two Pauli-Y Gates are applied
assert_allclose(final_state_vector, array([(1. + 0.j), (0. + 0.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #3 for the Pauli-Y Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-X Gate is applied to the 1st Qubit, then |0⟩ ↦ |1⟩;
# 3) The Pauli-Y Gate is applied to the 1st Qubit, then |1⟩ ↦ -i|0⟩;
def test_apply_pauli_y_3(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_y_3 = QiskitQuantumRegister.QiskitQuantumRegister("qrpauliy3", num_qubits)
qiskit_classical_register_pauli_y_3 = QiskitClassicalRegister.QiskitClassicalRegister("crpauliy3", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_pauli_y_3 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpauliy3",
qiskit_quantum_register_pauli_y_3,
qiskit_classical_register_pauli_y_3,
global_phase=0)
# Apply the Pauli-X Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |1⟩)
qiskit_quantum_circuit_pauli_y_3.apply_pauli_x(qiskit_quantum_register_pauli_y_3.quantum_register[0])
# Apply the Pauli-Y Gate to the 1st Qubit of the Quantum Circuit (|1⟩ ↦ -i|0⟩)
qiskit_quantum_circuit_pauli_y_3.apply_pauli_y(qiskit_quantum_register_pauli_y_3.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_y_3.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Single-Qubit Gates (Pauli-X and Pauli-Y) are applied
assert_allclose(final_state_vector, array([(0. - 1.j), (0. + 0.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #4 for the Pauli-Y Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Hadamard Gate is applied to the 1st Qubit, then |0⟩ ↦ |+⟩;
# 3) The Pauli-Y Gate is applied to the 1st Qubit, then |+⟩ ↦ (i/sqrt(2)) x (-|0⟩ + |1⟩);
def test_apply_pauli_y_4(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_y_4 = QiskitQuantumRegister.QiskitQuantumRegister("qrpauliy4", num_qubits)
qiskit_classical_register_pauli_y_4 = QiskitClassicalRegister.QiskitClassicalRegister("crpauliy4", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_pauli_y_4 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpauliy4",
qiskit_quantum_register_pauli_y_4,
qiskit_classical_register_pauli_y_4,
global_phase=0)
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |+⟩)
qiskit_quantum_circuit_pauli_y_4.apply_hadamard(qiskit_quantum_register_pauli_y_4.quantum_register[0])
# Apply the Pauli-Y Gate to the 1st Qubit of the Quantum Circuit (|+⟩ ↦ (i/sqrt(2)) x (-|0⟩ + |1⟩))
qiskit_quantum_circuit_pauli_y_4.apply_pauli_y(qiskit_quantum_register_pauli_y_4.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_y_4.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Single-Qubit Gates (Hadamard and Pauli-Y) are applied
assert_allclose(final_state_vector, array([(0. - ((1. / sqrt(2.)) * 1.j)), (0. + ((1. / sqrt(2.)) * 1.j))]),
rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
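# The Pauli-Y phases asserted above can likewise be verified independently with
# NumPy. This is only a numerical sanity check of the expected state vectors,
# assuming the standard matrix convention Y = [[0, -i], [i, 0]], which is not
# defined anywhere in this file.

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ket_plus = (ket0 + ket1) / np.sqrt(2)

np.testing.assert_allclose(Y @ ket0, 1j * ket1, atol=1e-7)    # Test #1: |0> -> i|1>
np.testing.assert_allclose(Y @ Y @ ket0, ket0, atol=1e-7)     # Test #2: Y is self-inverse
np.testing.assert_allclose(Y @ ket1, -1j * ket0, atol=1e-7)   # Test #3: |1> -> -i|0>
np.testing.assert_allclose(Y @ ket_plus,                      # Test #4: |+> -> (i/sqrt(2))(-|0> + |1>)
                           np.array([-1j, 1j]) / np.sqrt(2), atol=1e-7)
```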
# Test Cases for the Pauli-Z Gate
class PauliZGateTests(unittest.TestCase):
# Test #1 for the Pauli-Z Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-Z Gate is applied to the 1st Qubit, then |0⟩ ↦ |0⟩;
def test_apply_pauli_z_1(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_z_1 = QiskitQuantumRegister.QiskitQuantumRegister("qrpauliz1", num_qubits)
qiskit_classical_register_pauli_z_1 = QiskitClassicalRegister.QiskitClassicalRegister("crpauliz1", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_pauli_z_1 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpauliz1",
qiskit_quantum_register_pauli_z_1,
qiskit_classical_register_pauli_z_1,
global_phase=0)
# Apply the Pauli-Z Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |0⟩)
qiskit_quantum_circuit_pauli_z_1.apply_pauli_z(qiskit_quantum_register_pauli_z_1.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_z_1.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Pauli-Z Gate is applied
assert_allclose(final_state_vector, array([(1. + 0.j), (0. + 0.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #2 for the Pauli-Z Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Pauli-Z Gate is applied to the 1st Qubit, then |0⟩ ↦ |0⟩;
# 3) The Pauli-Z Gate is applied again to the 1st Qubit, then |0⟩ ↦ |0⟩;
def test_apply_pauli_z_2(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_pauli_z_2 = QiskitQuantumRegister.QiskitQuantumRegister("qrpauliz2", num_qubits)
qiskit_classical_register_pauli_z_2 = QiskitClassicalRegister.QiskitClassicalRegister("crpauliz2", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_pauli_z_2 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qcpauliz2",
qiskit_quantum_register_pauli_z_2,
qiskit_classical_register_pauli_z_2,
global_phase=0)
# Apply the Pauli-Z Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |0⟩)
qiskit_quantum_circuit_pauli_z_2.apply_pauli_z(qiskit_quantum_register_pauli_z_2.quantum_register[0])
# Apply the Pauli-Z Gate to the 1st Qubit of the Quantum Circuit, again (|0⟩ ↦ |0⟩)
qiskit_quantum_circuit_pauli_z_2.apply_pauli_z(qiskit_quantum_register_pauli_z_2.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_pauli_z_2.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the two Pauli-Z Gates are applied
assert_allclose(final_state_vector, array([(1. + 0.j), (0. + 0.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
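# A plain-NumPy sketch of the Pauli-Z behaviour asserted above. Both tests start
# from |0⟩, so the Z phase flip is invisible there; the |1⟩ case below, which
# actually picks up the -1 phase, is an extra check that the test cases above do
# not exercise.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

np.testing.assert_allclose(Z @ ket0, ket0, atol=1e-7)    # Test #1: |0> is unchanged
np.testing.assert_allclose(Z @ Z, np.eye(2), atol=1e-7)  # Test #2: Z is self-inverse
np.testing.assert_allclose(Z @ ket1, -ket1, atol=1e-7)   # |1> picks up a -1 phase (not covered above)
```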
# Test Cases for the Hadamard Gate
class HadamardGateTests(unittest.TestCase):
# Test #1 for the Hadamard Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Hadamard Gate is applied to the 1st Qubit, then |0⟩ ↦ |+⟩;
def test_apply_hadamard_1(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_hadamard_1 = QiskitQuantumRegister.QiskitQuantumRegister("qrhadamard1", num_qubits)
qiskit_classical_register_hadamard_1 = QiskitClassicalRegister.QiskitClassicalRegister("crhadamard1", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_hadamard_1 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qchadamard1",
qiskit_quantum_register_hadamard_1,
qiskit_classical_register_hadamard_1,
global_phase=0)
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |+⟩)
qiskit_quantum_circuit_hadamard_1.apply_hadamard(qiskit_quantum_register_hadamard_1.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_hadamard_1.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the Hadamard Gate is applied
assert_allclose(final_state_vector, array([((1. / sqrt(2.)) + 0.j), ((1. / sqrt(2.)) + 0.j)]), rtol=1e-7,
atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
# Test #2 for the Hadamard Gate
# Description of the Test Case:
# 1) The Quantum Circuit is created with a Quantum Register,
# with 1 Qubit initialized in the state |0⟩;
# 2) The Hadamard Gate is applied to the 1st Qubit, then |0⟩ ↦ |+⟩;
# 3) The Hadamard Gate is applied again to the 1st Qubit, then |+⟩ ↦ |0⟩;
def test_apply_hadamard_2(self):
# The number of Qubits and Bits, for Quantum and Classical Registers, respectively
num_qubits = num_bits = 1
# Creation of the IBM Qiskit's Quantum and Classical Registers
qiskit_quantum_register_hadamard_2 = QiskitQuantumRegister.QiskitQuantumRegister("qrhadamard2", num_qubits)
qiskit_classical_register_hadamard_2 = QiskitClassicalRegister.QiskitClassicalRegister("crhadamard2", num_bits)
# Creation of the IBM Qiskit's Quantum Circuit with one Quantum and Classical Registers
qiskit_quantum_circuit_hadamard_2 = \
QiskitQuantumCircuit.QiskitQuantumCircuit("qchadamard2",
qiskit_quantum_register_hadamard_2,
qiskit_classical_register_hadamard_2,
global_phase=0)
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit (|0⟩ ↦ |+⟩)
qiskit_quantum_circuit_hadamard_2.apply_hadamard(qiskit_quantum_register_hadamard_2.quantum_register[0])
# Apply the Hadamard Gate to the 1st Qubit of the Quantum Circuit, again (|+⟩ ↦ |0⟩)
qiskit_quantum_circuit_hadamard_2.apply_hadamard(qiskit_quantum_register_hadamard_2.quantum_register[0])
# Getting the Backend for the State Vector Representation
# (i.e., the Quantum State represented as State Vector)
state_vector_backend = Aer.get_backend('statevector_simulator')
# Execute the Quantum Circuit and store the Quantum State in a final state vector
final_state_vector = \
execute(qiskit_quantum_circuit_hadamard_2.quantum_circuit, state_vector_backend).result().get_statevector()
# Assert All Close, from NumPy's Testing, for the State Vector of the Qubit,
# after the two Hadamard Gates are applied
assert_allclose(final_state_vector, array([(1. + 0.j), (0. + 0.j)]), rtol=1e-7, atol=1e-7)
# Dummy Assert Equal for Unittest
self.assertEqual(True, True)
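# The two Hadamard expectations above (|0⟩ ↦ |+⟩, and H being its own inverse)
# can be confirmed with a short NumPy calculation. This is a standalone sketch
# using the standard Hadamard matrix, not the wrapper class under test.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)

np.testing.assert_allclose(H @ ket0, np.array([1, 1]) / np.sqrt(2), atol=1e-7)  # Test #1: |0> -> |+>
np.testing.assert_allclose(H @ H, np.eye(2), atol=1e-7)                          # Test #2: H is self-inverse
np.testing.assert_allclose(H @ H.conj().T, np.eye(2), atol=1e-7)                 # H is unitary
```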
# Configuration of the Test Suites
if __name__ == '__main__':
# Test Cases for the Measurements in the X-, Y- and Z-Basis
prepare_measure_x_basis_tests_suite = unittest.TestLoader().loadTestsFromTestCase(PrepareMeasureXBasisTests)
prepare_measure_y_basis_tests_suite = unittest.TestLoader().loadTestsFromTestCase(PrepareMeasureYBasisTests)
prepare_measure_z_basis_tests_suite = unittest.TestLoader().loadTestsFromTestCase(PrepareMeasureZBasisTests)
# Test Cases for the Pauli Gates
pauli_i_gate_tests_suite = unittest.TestLoader().loadTestsFromTestCase(PauliIGateTests)
pauli_x_gate_tests_suite = unittest.TestLoader().loadTestsFromTestCase(PauliXGateTests)
pauli_y_gate_tests_suite = unittest.TestLoader().loadTestsFromTestCase(PauliYGateTests)
pauli_z_gate_tests_suite = unittest.TestLoader().loadTestsFromTestCase(PauliZGateTests)
# Test Cases for the Hadamard Gate
hadamard_gate_tests_suite = unittest.TestLoader().loadTestsFromTestCase(HadamardGateTests)
# Create a global Test Suite containing all the Test Cases established
all_test_cases = unittest.TestSuite([prepare_measure_x_basis_tests_suite,
prepare_measure_y_basis_tests_suite,
prepare_measure_z_basis_tests_suite,
pauli_i_gate_tests_suite, pauli_x_gate_tests_suite,
pauli_y_gate_tests_suite, pauli_z_gate_tests_suite,
hadamard_gate_tests_suite])
# Run the global Test Suite with a Text Test Runner
unittest.TextTestRunner(verbosity=2).run(all_test_cases)
# File: blog/app/controller/admin/__init__.py (repo: henrY2Young/flask-jwt, license: MIT)
from flask import Flask
from flask import Blueprint
app = Flask(__name__)
admin = Blueprint('admin', __name__)
user = Blueprint('user', __name__)
from .admin import *
from .user import *
# File: p2pfs/__init__.py (repo: yxwangcs/p2pfs, license: MIT)
from p2pfs.core import *
from p2pfs.ui import PeerTerminal, TrackerTerminal
# File: terrascript/packet/__init__.py (repo: amlodzianowski/python-terrascript, license: BSD-2-Clause)
import terrascript
class packet(terrascript.Provider):
pass
# File: gym_DC/envs/__init__.py (repo: Saeid-Rezaei-projects/Gym-DC, license: MIT)
from gym_DC.envs.DCgym_Env import DCGymEnv
| 21.5 | 42 | 0.860465 | 8 | 43 | 4.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
34b9e84cfe0060986816a172e20a835bbc5dd258 | 215 | py | Python | jsonresume/exceptions.py | kelvintaywl/jsonresume-validator | 73ac162cb30ca70699c942def629188f7dfd4d3c | [
"MIT"
] | 42 | 2016-06-03T18:17:24.000Z | 2021-12-09T04:13:14.000Z | jsonresume/exceptions.py | kelvintaywl/jsonresume-validator | 73ac162cb30ca70699c942def629188f7dfd4d3c | [
"MIT"
] | 3 | 2016-04-27T12:32:41.000Z | 2020-09-29T16:43:35.000Z | jsonresume/exceptions.py | kelvintaywl/jsonresume-validator | 73ac162cb30ca70699c942def629188f7dfd4d3c | [
"MIT"
] | 9 | 2016-05-08T15:31:53.000Z | 2021-04-28T09:17:47.000Z | # -*- coding: utf-8 -*-
import colander
class InvalidResumeError(colander.Invalid):
"""Exception when a JSON resume (as python object) has invalid schema.
Subclass of colander.Invalid.
"""
pass
| 17.916667 | 74 | 0.674419 | 25 | 215 | 5.8 | 0.84 | 0.206897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005917 | 0.213953 | 215 | 11 | 75 | 19.545455 | 0.852071 | 0.572093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
9b37db811e6bb60c829230acc03bbdeb31510d4d | 135 | py | Python | 30-webpage/apptest/vue_app/views.py | AppTestBot/AppTestBot | 035e93e662753e50d7dcc38d6fd362933186983b | [
"Apache-2.0"
] | null | null | null | 30-webpage/apptest/vue_app/views.py | AppTestBot/AppTestBot | 035e93e662753e50d7dcc38d6fd362933186983b | [
"Apache-2.0"
] | null | null | null | 30-webpage/apptest/vue_app/views.py | AppTestBot/AppTestBot | 035e93e662753e50d7dcc38d6fd362933186983b | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render
# Create your views here.
def test_vue(request):
return render(request, 'vue_app/index.html')
| 22.5 | 48 | 0.762963 | 20 | 135 | 5.05 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140741 | 135 | 5 | 49 | 27 | 0.87069 | 0.17037 | 0 | 0 | 0 | 0 | 0.163636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
9b3a8a0ec45d224e54fcf75398f8b8b1308c34db | 22 | py | Python | whatsapp_api_service/__init__.py | em230418/whatsapp-api-service | 6792cea86e1f76bfa68b526582391b8a685fa2c7 | [
"MIT"
] | null | null | null | whatsapp_api_service/__init__.py | em230418/whatsapp-api-service | 6792cea86e1f76bfa68b526582391b8a685fa2c7 | [
"MIT"
] | null | null | null | whatsapp_api_service/__init__.py | em230418/whatsapp-api-service | 6792cea86e1f76bfa68b526582391b8a685fa2c7 | [
"MIT"
] | 1 | 2022-01-21T13:13:27.000Z | 2022-01-21T13:13:27.000Z | from .base import app
| 11 | 21 | 0.772727 | 4 | 22 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
32ca356344d3b9131316284a815367005ce12b0a | 279 | py | Python | tests/test_cat.py | mloskot/python-package | 4b24c22811052492d3af9d2f7d1ffa8f6ae8b412 | [
"Unlicense"
] | null | null | null | tests/test_cat.py | mloskot/python-package | 4b24c22811052492d3af9d2f7d1ffa8f6ae8b412 | [
"Unlicense"
] | null | null | null | tests/test_cat.py | mloskot/python-package | 4b24c22811052492d3af9d2f7d1ffa8f6ae8b412 | [
"Unlicense"
] | null | null | null | def test_noise():
import pets.cat.noise
assert pets.cat.noise.make() == 'meow!'
def test_noise_from_cat():
from pets import cat
assert cat.noise.make() == 'meow!'
def test_noise_from_pets_cat():
from pets.cat import noise
assert noise.make() == 'meow!'
| 23.25 | 43 | 0.666667 | 42 | 279 | 4.238095 | 0.238095 | 0.157303 | 0.202247 | 0.179775 | 0.359551 | 0.359551 | 0.359551 | 0.359551 | 0 | 0 | 0 | 0 | 0.193548 | 279 | 11 | 44 | 25.363636 | 0.791111 | 0 | 0 | 0 | 0 | 0 | 0.053763 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fd107a5e926be2f15c8dbd8610cd275f5b315b19 | 48,154 | py | Python | deploy/adapters/ansible/roles/moon/files/controllers.py | wtwde/compass-docker-osa-ocata | af14c185b70125740ea4801981085c740bf98ae0 | [
"Apache-2.0"
] | null | null | null | deploy/adapters/ansible/roles/moon/files/controllers.py | wtwde/compass-docker-osa-ocata | af14c185b70125740ea4801981085c740bf98ae0 | [
"Apache-2.0"
] | null | null | null | deploy/adapters/ansible/roles/moon/files/controllers.py | wtwde/compass-docker-osa-ocata | af14c185b70125740ea4801981085c740bf98ae0 | [
"Apache-2.0"
] | null | null | null | # Copyright 2015 Open Platform for NFV Project, Inc. and its contributors
# This software is distributed under the terms and conditions of the
# 'Apache-2.0' license which can be found in the file 'LICENSE' in this
# package distribution or at 'http://www.apache.org/licenses/LICENSE-2.0'.
from keystone.common import controller
from keystone.common import dependency
from keystone import config
from keystone import exception
from keystone.models import token_model
from keystone.contrib.moon.exception import * # noqa: F403
from oslo_log import log
from uuid import uuid4
import requests
CONF = config.CONF
LOG = log.getLogger(__name__)
@dependency.requires('configuration_api') # noqa: F405
class Configuration(controller.V3Controller):
collection_name = 'configurations'
member_name = 'configuration'
def __init__(self):
super(Configuration, self).__init__()
def _get_user_id_from_token(self, token_id):
response = self.token_provider_api.validate_token(token_id)
token_ref = token_model.KeystoneToken(
token_id=token_id, token_data=response)
return token_ref.get('user')
@controller.protected()
def get_policy_templates(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
return self.configuration_api.get_policy_templates_dict(user_id)
@controller.protected()
def get_aggregation_algorithms(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
return self.configuration_api.get_aggregation_algorithms_dict(user_id)
@controller.protected()
def get_sub_meta_rule_algorithms(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
return self.configuration_api.get_sub_meta_rule_algorithms_dict(
user_id)
@dependency.requires('tenant_api', 'resource_api') # noqa: F405
class Tenants(controller.V3Controller):
def __init__(self):
super(Tenants, self).__init__()
def _get_user_id_from_token(self, token_id):
response = self.token_provider_api.validate_token(token_id)
token_ref = token_model.KeystoneToken(
token_id=token_id, token_data=response)
return token_ref.get('user')
@controller.protected()
def get_tenants(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
return self.tenant_api.get_tenants_dict(user_id)
def __get_keystone_tenant_dict(
self, tenant_id="", tenant_name="", tenant_description="", domain="default"): # noqa
tenants = self.resource_api.list_projects()
for tenant in tenants:
if tenant_id and tenant_id == tenant['id']:
return tenant
if tenant_name and tenant_name == tenant['name']:
return tenant
if not tenant_id:
tenant_id = uuid4().hex
if not tenant_name:
tenant_name = tenant_id
tenant = {
"id": tenant_id,
"name": tenant_name,
"description": tenant_description,
"enabled": True,
"domain_id": domain
}
keystone_tenant = self.resource_api.create_project(
tenant["id"], tenant)
return keystone_tenant
@controller.protected()
def add_tenant(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
k_tenant_dict = self.__get_keystone_tenant_dict(
tenant_name=kw.get('tenant_name'),
tenant_description=kw.get(
'tenant_description', kw.get('tenant_name')),
domain=kw.get('tenant_domain', "default"),
)
tenant_dict = dict()
tenant_dict['id'] = k_tenant_dict['id']
tenant_dict['name'] = kw.get('tenant_name', None)
tenant_dict['description'] = kw.get('tenant_description', None)
tenant_dict['intra_authz_extension_id'] = kw.get(
'tenant_intra_authz_extension_id', None)
tenant_dict['intra_admin_extension_id'] = kw.get(
'tenant_intra_admin_extension_id', None)
return self.tenant_api.add_tenant_dict(
user_id, tenant_dict['id'], tenant_dict)
@controller.protected()
def get_tenant(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
tenant_id = kw.get('tenant_id', None)
return self.tenant_api.get_tenant_dict(user_id, tenant_id)
@controller.protected()
def del_tenant(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
tenant_id = kw.get('tenant_id', None)
return self.tenant_api.del_tenant(user_id, tenant_id)
@controller.protected()
def set_tenant(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
# Next line will raise an error if tenant doesn't exist
k_tenant_dict = self.resource_api.get_project(
kw.get('tenant_id', None))
tenant_id = kw.get('tenant_id', None)
tenant_dict = dict()
tenant_dict['name'] = k_tenant_dict.get('name', None)
if 'tenant_description' in kw:
tenant_dict['description'] = kw.get('tenant_description', None)
if 'tenant_intra_authz_extension_id' in kw:
tenant_dict['intra_authz_extension_id'] = kw.get(
'tenant_intra_authz_extension_id', None)
if 'tenant_intra_admin_extension_id' in kw:
tenant_dict['intra_admin_extension_id'] = kw.get(
'tenant_intra_admin_extension_id', None)
self.tenant_api.set_tenant_dict(user_id, tenant_id, tenant_dict)
def callback(self, context, prep_info, *args, **kwargs):
token_ref = ""
if context.get('token_id') is not None:
token_ref = token_model.KeystoneToken(
token_id=context['token_id'],
token_data=self.token_provider_api.validate_token(
context['token_id']))
if not token_ref:
raise exception.Unauthorized
@dependency.requires('authz_api') # noqa: F405
class Authz_v3(controller.V3Controller):
def __init__(self):
super(Authz_v3, self).__init__()
@controller.protected(callback)
def get_authz(self, context, tenant_id, subject_k_id,
object_name, action_name):
try:
return self.authz_api.authz(
tenant_id, subject_k_id, object_name, action_name)
except Exception as e:
            return {'authz': False, 'comment': str(e)}
@dependency.requires('admin_api', 'root_api') # noqa: F405
class IntraExtensions(controller.V3Controller):
collection_name = 'intra_extensions'
member_name = 'intra_extension'
def __init__(self):
super(IntraExtensions, self).__init__()
def _get_user_id_from_token(self, token_id):
response = self.token_provider_api.validate_token(token_id)
token_ref = token_model.KeystoneToken(
token_id=token_id, token_data=response)
return token_ref.get('user')['id']
# IntraExtension functions
@controller.protected()
def get_intra_extensions(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
return self.admin_api.get_intra_extensions_dict(user_id)
@controller.protected()
def add_intra_extension(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_dict = dict()
intra_extension_dict['name'] = kw.get('intra_extension_name', None)
intra_extension_dict['model'] = kw.get('intra_extension_model', None)
intra_extension_dict['genre'] = kw.get('intra_extension_genre', None)
intra_extension_dict['description'] = kw.get(
'intra_extension_description', None)
intra_extension_dict['subject_categories'] = kw.get(
'intra_extension_subject_categories', dict())
intra_extension_dict['object_categories'] = kw.get(
'intra_extension_object_categories', dict())
intra_extension_dict['action_categories'] = kw.get(
'intra_extension_action_categories', dict())
intra_extension_dict['subjects'] = kw.get(
'intra_extension_subjects', dict())
intra_extension_dict['objects'] = kw.get(
'intra_extension_objects', dict())
intra_extension_dict['actions'] = kw.get(
'intra_extension_actions', dict())
intra_extension_dict['subject_scopes'] = kw.get(
'intra_extension_subject_scopes', dict())
intra_extension_dict['object_scopes'] = kw.get(
'intra_extension_object_scopes', dict())
intra_extension_dict['action_scopes'] = kw.get(
'intra_extension_action_scopes', dict())
intra_extension_dict['subject_assignments'] = kw.get(
'intra_extension_subject_assignments', dict())
intra_extension_dict['object_assignments'] = kw.get(
'intra_extension_object_assignments', dict())
intra_extension_dict['action_assignments'] = kw.get(
'intra_extension_action_assignments', dict())
intra_extension_dict['aggregation_algorithm'] = kw.get(
'intra_extension_aggregation_algorithm', dict())
intra_extension_dict['sub_meta_rules'] = kw.get(
'intra_extension_sub_meta_rules', dict())
intra_extension_dict['rules'] = kw.get('intra_extension_rules', dict())
ref = self.admin_api.load_intra_extension_dict(
user_id, intra_extension_dict=intra_extension_dict)
return self.admin_api.populate_default_data(ref)
@controller.protected()
def get_intra_extension(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
return self.admin_api.get_intra_extension_dict(
user_id, intra_extension_id)
@controller.protected()
def del_intra_extension(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
self.admin_api.del_intra_extension(user_id, intra_extension_id)
@controller.protected()
def set_intra_extension(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
intra_extension_dict = dict()
intra_extension_dict['name'] = kw.get('intra_extension_name', None)
intra_extension_dict['model'] = kw.get('intra_extension_model', None)
intra_extension_dict['genre'] = kw.get('intra_extension_genre', None)
intra_extension_dict['description'] = kw.get(
'intra_extension_description', None)
return self.admin_api.set_intra_extension_dict(
user_id, intra_extension_id, intra_extension_dict)
@controller.protected()
def load_root_intra_extension(self, context, **kw):
self.root_api.load_root_intra_extension_dict()
# Metadata functions
@controller.protected()
def get_subject_categories(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
return self.admin_api.get_subject_categories_dict(
user_id, intra_extension_id)
@controller.protected()
def add_subject_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_category_dict = dict()
subject_category_dict['name'] = kw.get('subject_category_name', None)
subject_category_dict['description'] = kw.get(
'subject_category_description', None)
return self.admin_api.add_subject_category_dict(
user_id, intra_extension_id, subject_category_dict)
@controller.protected()
def get_subject_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_category_id = kw.get('subject_category_id', None)
return self.admin_api.get_subject_category_dict(
user_id, intra_extension_id, subject_category_id)
@controller.protected()
def del_subject_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_category_id = kw.get('subject_category_id', None)
self.admin_api.del_subject_category(
user_id, intra_extension_id, subject_category_id)
@controller.protected()
def set_subject_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_category_id = kw.get('subject_category_id', None)
subject_category_dict = dict()
subject_category_dict['name'] = kw.get('subject_category_name', None)
subject_category_dict['description'] = kw.get(
'subject_category_description', None)
return self.admin_api.set_subject_category_dict(
user_id, intra_extension_id, subject_category_id, subject_category_dict) # noqa
@controller.protected()
def get_object_categories(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
return self.admin_api.get_object_categories_dict(
user_id, intra_extension_id)
@controller.protected()
def add_object_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
object_category_dict = dict()
object_category_dict['name'] = kw.get('object_category_name', None)
object_category_dict['description'] = kw.get(
'object_category_description', None)
return self.admin_api.add_object_category_dict(
user_id, intra_extension_id, object_category_dict)
@controller.protected()
def get_object_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
object_category_id = kw.get('object_category_id', None)
        return self.admin_api.get_object_category_dict(
            user_id, intra_extension_id, object_category_id)
@controller.protected()
def del_object_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
object_category_id = kw.get('object_category_id', None)
self.admin_api.del_object_category(
user_id, intra_extension_id, object_category_id)
@controller.protected()
def set_object_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
object_category_id = kw.get('object_category_id', None)
object_category_dict = dict()
object_category_dict['name'] = kw.get('object_category_name', None)
object_category_dict['description'] = kw.get(
'object_category_description', None)
return self.admin_api.set_object_category_dict(
user_id, intra_extension_id, object_category_id, object_category_dict) # noqa
@controller.protected()
def get_action_categories(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
return self.admin_api.get_action_categories_dict(
user_id, intra_extension_id)
@controller.protected()
def add_action_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
action_category_dict = dict()
action_category_dict['name'] = kw.get('action_category_name', None)
action_category_dict['description'] = kw.get(
'action_category_description', None)
return self.admin_api.add_action_category_dict(
user_id, intra_extension_id, action_category_dict)
@controller.protected()
def get_action_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
action_category_id = kw.get('action_category_id', None)
        return self.admin_api.get_action_category_dict(
            user_id, intra_extension_id, action_category_id)
@controller.protected()
def del_action_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
action_category_id = kw.get('action_category_id', None)
self.admin_api.del_action_category(
user_id, intra_extension_id, action_category_id)
@controller.protected()
def set_action_category(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
action_category_id = kw.get('action_category_id', None)
action_category_dict = dict()
action_category_dict['name'] = kw.get('action_category_name', None)
action_category_dict['description'] = kw.get(
'action_category_description', None)
return self.admin_api.set_action_category_dict(
user_id, intra_extension_id, action_category_id, action_category_dict) # noqa
# Perimeter functions
@controller.protected()
def get_subjects(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
return self.admin_api.get_subjects_dict(user_id, intra_extension_id)
@controller.protected()
def add_subject(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_dict = dict()
subject_dict['name'] = kw.get('subject_name', None)
subject_dict['description'] = kw.get('subject_description', None)
subject_dict['password'] = kw.get('subject_password', None)
subject_dict['email'] = kw.get('subject_email', None)
return self.admin_api.add_subject_dict(
user_id, intra_extension_id, subject_dict)
@controller.protected()
def get_subject(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_id = kw.get('subject_id', None)
return self.admin_api.get_subject_dict(
user_id, intra_extension_id, subject_id)
@controller.protected()
def del_subject(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_id = kw.get('subject_id', None)
self.admin_api.del_subject(user_id, intra_extension_id, subject_id)
@controller.protected()
def set_subject(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_id = kw.get('subject_id', None)
subject_dict = dict()
subject_dict['name'] = kw.get('subject_name', None)
subject_dict['description'] = kw.get('subject_description', None)
return self.admin_api.set_subject_dict(
user_id, intra_extension_id, subject_id, subject_dict)
@controller.protected()
def get_objects(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
return self.admin_api.get_objects_dict(user_id, intra_extension_id)
@controller.protected()
def add_object(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
object_dict = dict()
object_dict['name'] = kw.get('object_name', None)
object_dict['description'] = kw.get('object_description', None)
return self.admin_api.add_object_dict(
user_id, intra_extension_id, object_dict)
@controller.protected()
def get_object(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
object_id = kw.get('object_id', None)
return self.admin_api.get_object_dict(
user_id, intra_extension_id, object_id)
@controller.protected()
def del_object(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
object_id = kw.get('object_id', None)
self.admin_api.del_object(user_id, intra_extension_id, object_id)
@controller.protected()
def set_object(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
object_id = kw.get('object_id', None)
object_dict = dict()
object_dict['name'] = kw.get('object_name', None)
object_dict['description'] = kw.get('object_description', None)
return self.admin_api.set_object_dict(
user_id, intra_extension_id, object_id, object_dict)
@controller.protected()
def get_actions(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
return self.admin_api.get_actions_dict(user_id, intra_extension_id)
@controller.protected()
def add_action(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
action_dict = dict()
action_dict['name'] = kw.get('action_name', None)
action_dict['description'] = kw.get('action_description', None)
return self.admin_api.add_action_dict(
user_id, intra_extension_id, action_dict)
@controller.protected()
def get_action(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
action_id = kw.get('action_id', None)
return self.admin_api.get_action_dict(
user_id, intra_extension_id, action_id)
@controller.protected()
def del_action(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
action_id = kw.get('action_id', None)
self.admin_api.del_action(user_id, intra_extension_id, action_id)
@controller.protected()
def set_action(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
action_id = kw.get('action_id', None)
action_dict = dict()
action_dict['name'] = kw.get('action_name', None)
action_dict['description'] = kw.get('action_description', None)
return self.admin_api.set_action_dict(
user_id, intra_extension_id, action_id, action_dict)
# Scope functions
@controller.protected()
def get_subject_scopes(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_category_id = kw.get('subject_category_id', None)
return self.admin_api.get_subject_scopes_dict(
user_id, intra_extension_id, subject_category_id)
@controller.protected()
def add_subject_scope(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_category_id = kw.get('subject_category_id', None)
subject_scope_dict = dict()
subject_scope_dict['name'] = kw.get('subject_scope_name', None)
subject_scope_dict['description'] = kw.get(
'subject_scope_description', None)
return self.admin_api.add_subject_scope_dict(
user_id, intra_extension_id, subject_category_id, subject_scope_dict) # noqa
@controller.protected()
def get_subject_scope(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_category_id = kw.get('subject_category_id', None)
subject_scope_id = kw.get('subject_scope_id', None)
return self.admin_api.get_subject_scope_dict(
user_id, intra_extension_id, subject_category_id, subject_scope_id)
@controller.protected()
def del_subject_scope(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_category_id = kw.get('subject_category_id', None)
subject_scope_id = kw.get('subject_scope_id', None)
self.admin_api.del_subject_scope(
user_id,
intra_extension_id,
subject_category_id,
subject_scope_id)
@controller.protected()
def set_subject_scope(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
subject_category_id = kw.get('subject_category_id', None)
subject_scope_id = kw.get('subject_scope_id', None)
subject_scope_dict = dict()
subject_scope_dict['name'] = kw.get('subject_scope_name', None)
subject_scope_dict['description'] = kw.get(
'subject_scope_description', None)
return self.admin_api.set_subject_scope_dict(
user_id, intra_extension_id, subject_category_id, subject_scope_id, subject_scope_dict) # noqa
@controller.protected()
def get_object_scopes(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
object_category_id = kw.get('object_category_id', None)
return self.admin_api.get_object_scopes_dict(
user_id, intra_extension_id, object_category_id)
@controller.protected()
def add_object_scope(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
object_category_id = kw.get('object_category_id', None)
object_scope_dict = dict()
object_scope_dict['name'] = kw.get('object_scope_name', None)
object_scope_dict['description'] = kw.get(
'object_scope_description', None)
return self.admin_api.add_object_scope_dict(
user_id, intra_extension_id, object_category_id, object_scope_dict)
@controller.protected()
def get_object_scope(self, context, **kw):
user_id = self._get_user_id_from_token(context.get('token_id'))
intra_extension_id = kw.get('intra_extension_id', None)
object_category_id = kw.get('object_category_id', None)
object_scope_id = kw.get('object_scope_id', None)
return self.admin_api.get_object_scope_dict(
user_id, intra_extension_id, object_category_id, object_scope_id)
@controller.protected()
def del_object_scope(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        object_category_id = kw.get('object_category_id', None)
        object_scope_id = kw.get('object_scope_id', None)
        self.admin_api.del_object_scope(
            user_id,
            intra_extension_id,
            object_category_id,
            object_scope_id)

    @controller.protected()
    def set_object_scope(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        object_category_id = kw.get('object_category_id', None)
        object_scope_id = kw.get('object_scope_id', None)
        object_scope_dict = dict()
        object_scope_dict['name'] = kw.get('object_scope_name', None)
        object_scope_dict['description'] = kw.get(
            'object_scope_description', None)
        return self.admin_api.set_object_scope_dict(
            user_id, intra_extension_id, object_category_id, object_scope_id, object_scope_dict)  # noqa

    @controller.protected()
    def get_action_scopes(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        action_category_id = kw.get('action_category_id', None)
        return self.admin_api.get_action_scopes_dict(
            user_id, intra_extension_id, action_category_id)

    @controller.protected()
    def add_action_scope(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        action_category_id = kw.get('action_category_id', None)
        action_scope_dict = dict()
        action_scope_dict['name'] = kw.get('action_scope_name', None)
        action_scope_dict['description'] = kw.get(
            'action_scope_description', None)
        return self.admin_api.add_action_scope_dict(
            user_id, intra_extension_id, action_category_id, action_scope_dict)

    @controller.protected()
    def get_action_scope(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        action_category_id = kw.get('action_category_id', None)
        action_scope_id = kw.get('action_scope_id', None)
        return self.admin_api.get_action_scope_dict(
            user_id, intra_extension_id, action_category_id, action_scope_id)

    @controller.protected()
    def del_action_scope(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        action_category_id = kw.get('action_category_id', None)
        action_scope_id = kw.get('action_scope_id', None)
        self.admin_api.del_action_scope(
            user_id,
            intra_extension_id,
            action_category_id,
            action_scope_id)

    @controller.protected()
    def set_action_scope(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        action_category_id = kw.get('action_category_id', None)
        action_scope_id = kw.get('action_scope_id', None)
        action_scope_dict = dict()
        action_scope_dict['name'] = kw.get('action_scope_name', None)
        action_scope_dict['description'] = kw.get(
            'action_scope_description', None)
        return self.admin_api.set_action_scope_dict(
            user_id, intra_extension_id, action_category_id, action_scope_id, action_scope_dict)  # noqa
    # Assignment functions

    @controller.protected()
    def add_subject_assignment(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        subject_id = kw.get('subject_id', None)
        subject_category_id = kw.get('subject_category_id', None)
        subject_scope_id = kw.get('subject_scope_id', None)
        return self.admin_api.add_subject_assignment_list(
            user_id, intra_extension_id, subject_id, subject_category_id, subject_scope_id)  # noqa

    @controller.protected()
    def get_subject_assignment(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        subject_id = kw.get('subject_id', None)
        subject_category_id = kw.get('subject_category_id', None)
        return self.admin_api.get_subject_assignment_list(
            user_id, intra_extension_id, subject_id, subject_category_id)

    @controller.protected()
    def del_subject_assignment(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        subject_id = kw.get('subject_id', None)
        subject_category_id = kw.get('subject_category_id', None)
        subject_scope_id = kw.get('subject_scope_id', None)
        self.admin_api.del_subject_assignment(
            user_id,
            intra_extension_id,
            subject_id,
            subject_category_id,
            subject_scope_id)

    @controller.protected()
    def add_object_assignment(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        object_id = kw.get('object_id', None)
        object_category_id = kw.get('object_category_id', None)
        object_scope_id = kw.get('object_scope_id', None)
        return self.admin_api.add_object_assignment_list(
            user_id, intra_extension_id, object_id, object_category_id, object_scope_id)  # noqa

    @controller.protected()
    def get_object_assignment(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        object_id = kw.get('object_id', None)
        object_category_id = kw.get('object_category_id', None)
        return self.admin_api.get_object_assignment_list(
            user_id, intra_extension_id, object_id, object_category_id)

    @controller.protected()
    def del_object_assignment(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        object_id = kw.get('object_id', None)
        object_category_id = kw.get('object_category_id', None)
        object_scope_id = kw.get('object_scope_id', None)
        self.admin_api.del_object_assignment(
            user_id,
            intra_extension_id,
            object_id,
            object_category_id,
            object_scope_id)

    @controller.protected()
    def add_action_assignment(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        action_id = kw.get('action_id', None)
        action_category_id = kw.get('action_category_id', None)
        action_scope_id = kw.get('action_scope_id', None)
        return self.admin_api.add_action_assignment_list(
            user_id, intra_extension_id, action_id, action_category_id, action_scope_id)  # noqa

    @controller.protected()
    def get_action_assignment(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        action_id = kw.get('action_id', None)
        action_category_id = kw.get('action_category_id', None)
        return self.admin_api.get_action_assignment_list(
            user_id, intra_extension_id, action_id, action_category_id)

    @controller.protected()
    def del_action_assignment(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        action_id = kw.get('action_id', None)
        action_category_id = kw.get('action_category_id', None)
        action_scope_id = kw.get('action_scope_id', None)
        self.admin_api.del_action_assignment(
            user_id,
            intra_extension_id,
            action_id,
            action_category_id,
            action_scope_id)
    # Metarule functions

    @controller.protected()
    def get_aggregation_algorithm(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        return self.admin_api.get_aggregation_algorithm_id(
            user_id, intra_extension_id)

    @controller.protected()
    def set_aggregation_algorithm(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        aggregation_algorithm_id = kw.get('aggregation_algorithm_id', None)
        return self.admin_api.set_aggregation_algorithm_id(
            user_id, intra_extension_id, aggregation_algorithm_id)

    @controller.protected()
    def get_sub_meta_rules(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        return self.admin_api.get_sub_meta_rules_dict(
            user_id, intra_extension_id)

    @controller.protected()
    def add_sub_meta_rule(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        sub_meta_rule_dict = dict()
        sub_meta_rule_dict['name'] = kw.get('sub_meta_rule_name', None)
        sub_meta_rule_dict['algorithm'] = kw.get(
            'sub_meta_rule_algorithm', None)
        sub_meta_rule_dict['subject_categories'] = kw.get(
            'sub_meta_rule_subject_categories', None)
        sub_meta_rule_dict['object_categories'] = kw.get(
            'sub_meta_rule_object_categories', None)
        sub_meta_rule_dict['action_categories'] = kw.get(
            'sub_meta_rule_action_categories', None)
        return self.admin_api.add_sub_meta_rule_dict(
            user_id, intra_extension_id, sub_meta_rule_dict)

    @controller.protected()
    def get_sub_meta_rule(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        sub_meta_rule_id = kw.get('sub_meta_rule_id', None)
        return self.admin_api.get_sub_meta_rule_dict(
            user_id, intra_extension_id, sub_meta_rule_id)

    @controller.protected()
    def del_sub_meta_rule(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        sub_meta_rule_id = kw.get('sub_meta_rule_id', None)
        self.admin_api.del_sub_meta_rule(
            user_id, intra_extension_id, sub_meta_rule_id)

    @controller.protected()
    def set_sub_meta_rule(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        sub_meta_rule_id = kw.get('sub_meta_rule_id', None)
        sub_meta_rule_dict = dict()
        sub_meta_rule_dict['name'] = kw.get('sub_meta_rule_name', None)
        sub_meta_rule_dict['algorithm'] = kw.get(
            'sub_meta_rule_algorithm', None)
        sub_meta_rule_dict['subject_categories'] = kw.get(
            'sub_meta_rule_subject_categories', None)
        sub_meta_rule_dict['object_categories'] = kw.get(
            'sub_meta_rule_object_categories', None)
        sub_meta_rule_dict['action_categories'] = kw.get(
            'sub_meta_rule_action_categories', None)
        return self.admin_api.set_sub_meta_rule_dict(
            user_id, intra_extension_id, sub_meta_rule_id, sub_meta_rule_dict)
    # Rules functions

    @controller.protected()
    def get_rules(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        sub_meta_rule_id = kw.get('sub_meta_rule_id', None)
        return self.admin_api.get_rules_dict(
            user_id, intra_extension_id, sub_meta_rule_id)

    @controller.protected()
    def add_rule(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        sub_meta_rule_id = kw.get('sub_meta_rule_id', None)
        subject_category_list = kw.get('subject_categories', [])
        object_category_list = kw.get('object_categories', [])
        action_category_list = kw.get('action_categories', [])
        enabled_bool = kw.get('enabled', True)
        rule_list = subject_category_list + action_category_list + \
            object_category_list + [enabled_bool, ]
        return self.admin_api.add_rule_dict(
            user_id, intra_extension_id, sub_meta_rule_id, rule_list)

    @controller.protected()
    def get_rule(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        sub_meta_rule_id = kw.get('sub_meta_rule_id', None)
        rule_id = kw.get('rule_id', None)
        return self.admin_api.get_rule_dict(
            user_id, intra_extension_id, sub_meta_rule_id, rule_id)

    @controller.protected()
    def del_rule(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        sub_meta_rule_id = kw.get('sub_meta_rule_id', None)
        rule_id = kw.get('rule_id', None)
        self.admin_api.del_rule(
            user_id,
            intra_extension_id,
            sub_meta_rule_id,
            rule_id)

    @controller.protected()
    def set_rule(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        intra_extension_id = kw.get('intra_extension_id', None)
        sub_meta_rule_id = kw.get('sub_meta_rule_id', None)
        rule_id = kw.get('rule_id', None)
        rule_list = list()
        subject_category_list = kw.get('subject_categories', [])
        object_category_list = kw.get('object_categories', [])
        action_category_list = kw.get('action_categories', [])
        rule_list = subject_category_list + action_category_list + object_category_list  # noqa
        return self.admin_api.set_rule_dict(
            user_id, intra_extension_id, sub_meta_rule_id, rule_id, rule_list)

@dependency.requires('authz_api')  # noqa: 405
class InterExtensions(controller.V3Controller):

    def __init__(self):
        super(InterExtensions, self).__init__()

    def _get_user_from_token(self, token_id):
        response = self.token_provider_api.validate_token(token_id)
        token_ref = token_model.KeystoneToken(
            token_id=token_id, token_data=response)
        return token_ref['user']

    # @controller.protected()
    # def get_inter_extensions(self, context, **kw):
    #     user = self._get_user_from_token(context.get('token_id'))
    #     return {
    #         'inter_extensions':
    #             self.interextension_api.get_inter_extensions()
    #     }

    # @controller.protected()
    # def get_inter_extension(self, context, **kw):
    #     user = self._get_user_from_token(context.get('token_id'))
    #     return {
    #         'inter_extensions':
    #             self.interextension_api.get_inter_extension(uuid=kw['inter_extension_id'])
    #     }

    # @controller.protected()
    # def create_inter_extension(self, context, **kw):
    #     user = self._get_user_from_token(context.get('token_id'))
    #     return self.interextension_api.create_inter_extension(kw)

    # @controller.protected()
    # def delete_inter_extension(self, context, **kw):
    #     user = self._get_user_from_token(context.get('token_id'))
    #     if 'inter_extension_id' not in kw:
    #         raise exception.Error
    #     return \
    #         self.interextension_api.delete_inter_extension(kw['inter_extension_id'])

@dependency.requires('moonlog_api', 'authz_api')  # noqa: 405
class Logs(controller.V3Controller):

    def __init__(self):
        super(Logs, self).__init__()

    def _get_user_id_from_token(self, token_id):
        response = self.token_provider_api.validate_token(token_id)
        token_ref = token_model.KeystoneToken(
            token_id=token_id, token_data=response)
        return token_ref['user']

    @controller.protected()
    def get_logs(self, context, **kw):
        user_id = self._get_user_id_from_token(context.get('token_id'))
        options = kw.get('options', '')
        return self.moonlog_api.get_logs(user_id, options)

@dependency.requires('identity_api', "token_provider_api", "resource_api")  # noqa: 405
class MoonAuth(controller.V3Controller):

    def __init__(self):
        super(MoonAuth, self).__init__()

    def _get_project(self, uuid="", name=""):
        projects = self.resource_api.list_projects()
        for project in projects:
            if uuid and uuid == project['id']:
                return project
            elif name and name == project['name']:
                return project

    def get_token(self, context, **kw):
        data_auth = {
            "auth": {
                "identity": {
                    "methods": [
                        "password"
                    ],
                    "password": {
                        "user": {
                            "domain": {
                                "id": "Default"
                            },
                            "name": kw['username'],
                            "password": kw['password']
                        }
                    }
                }
            }
        }
        message = {}
        if "project" in kw:
            project = self._get_project(name=kw['project'])
            if project:
                data_auth["auth"]["scope"] = dict()
                data_auth["auth"]["scope"]['project'] = dict()
                data_auth["auth"]["scope"]['project']['id'] = project['id']
            else:
                message = {
                    "error": {
                        "message": "Unable to find project {}".format(kw['project']),  # noqa
                        "code": 200,
                        "title": "UnScopedToken"
                    }}
        # req = requests.post("http://localhost:5000/v3/auth/tokens",
        #                     json=data_auth,
        #                     headers={"Content-Type": "application/json"}
        #                     )
        req = requests.post("http://172.16.1.222:5000/v3/auth/tokens",
                            json=data_auth,
                            headers={"Content-Type": "application/json"}
                            )
        if req.status_code not in (200, 201):
            LOG.error(req.text)
        else:
            _token = req.headers['X-Subject-Token']
            _data = req.json()
            _result = {
                "token": _token,
                'message': message
            }
            try:
                _result["roles"] = map(
                    lambda x: x['name'], _data["token"]["roles"])
            except KeyError:
                pass
            return _result
        return {"token": None, 'message': req.json()}
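For context, the request body that `get_token` assembles above follows the Keystone v3 password-authentication schema. Building it can be factored into a small helper; this is a hedged sketch, not part of the original controller, and the field values shown are placeholders (the endpoint IP in the code above is deployment-specific):

```python
def build_auth_payload(username, password, project_id=None, domain_id="Default"):
    """Build a Keystone v3 password-auth request body, optionally project-scoped."""
    payload = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "domain": {"id": domain_id},
                        "name": username,
                        "password": password,
                    }
                },
            }
        }
    }
    # A scope block is only attached when the caller resolved a project id,
    # mirroring the "project" branch in get_token above.
    if project_id is not None:
        payload["auth"]["scope"] = {"project": {"id": project_id}}
    return payload

body = build_auth_payload("demo", "secret", project_id="p123")
```

The unscoped case simply omits `auth.scope`, which is why the controller reports an `UnScopedToken` message rather than failing outright when the project lookup misses.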
| 45.300094 | 107 | 0.669809 | 6,274 | 48,154 | 4.707683 | 0.034428 | 0.127505 | 0.112134 | 0.084101 | 0.86833 | 0.819982 | 0.78748 | 0.75369 | 0.735712 | 0.71743 | 0 | 0.001893 | 0.221311 | 48,154 | 1,062 | 108 | 45.34275 | 0.785796 | 0.037671 | 0 | 0.574777 | 0 | 0 | 0.142814 | 0.032414 | 0 | 0 | 0 | 0 | 0 | 1 | 0.109375 | false | 0.00558 | 0.008929 | 0 | 0.217634 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fd17fbfa4c9bc601605c850fd05b64b4b7b00bd8 | 20,316 | py | Python | processing/calssification-fusion.py | gurkirt/actNet-inAct | 1930bcb41553e50ddd83985a497a9d5ce4f1fcf2 | [
"MIT"
] | 27 | 2016-05-04T07:13:05.000Z | 2021-12-05T04:45:45.000Z | processing/calssification-fusion.py | gurkirt/actNet-inAct | 1930bcb41553e50ddd83985a497a9d5ce4f1fcf2 | [
"MIT"
] | 1 | 2017-12-28T08:29:00.000Z | 2017-12-28T08:29:00.000Z | processing/calssification-fusion.py | gurkirt/actNet-inAct | 1930bcb41553e50ddd83985a497a9d5ce4f1fcf2 | [
"MIT"
] | 12 | 2016-05-15T21:40:06.000Z | 2019-11-27T09:43:55.000Z | '''
Autor: Gurkirt Singh
Start data: 15th May 2016
purpose: of this file is read frame level predictions and process them to produce a label per video
'''
from sklearn.svm import LinearSVC,SVC
from sklearn.ensemble import RandomForestClassifier
import numpy as np
import pickle
import os,h5py
import time,json
#import pylab as plt
#######baseDir = "/mnt/sun-alpha/actnet/";
baseDir = "/data/shared/solar-machines/actnet/";
########imgDir = "/mnt/sun-alpha/actnet/rgb-images/";
######## imgDir = "/mnt/DATADISK2/ss-workspace/actnet/rgb-images/";
annotPklFile = "../Evaluation/data/actNet200-V1-3.pkl"
def readannos():
with open(annotPklFile,'rb') as f:
actNetDB = pickle.load(f)
actionIDs = actNetDB['actionIDs']; taxonomy=actNetDB['taxonomy']; database = actNetDB['database'];
return actionIDs,taxonomy,database
def getnames():
fname = baseDir+'data/lists/gtnames.list'
with open(fname,'rb') as f:
lines = f.readlines()
names = []
for name in lines:
name = name.rstrip('\n')
names.append(name)
# print names
return names

def gettopklabel(preds, k, classtopk):
    scores = np.zeros(200)
    topk = min(classtopk, np.shape(preds)[1]);
    for i in range(200):
        values = preds[i, :];
        values = np.sort(values);
        values = values[::-1]
        scores[i] = np.mean(values[:topk])
    # print scores
    sortedlabel = np.argsort(scores)[::-1]
    # print sortedlabel
    sortedscores = scores[sortedlabel]
    # print sortedlabel[:k],sortedscores[:k]
    return sortedlabel[:k], sortedscores[:k]
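For reference, the per-class "top-k mean" pooling that `gettopklabel` applies to a (num_classes x num_frames) score matrix can be sketched without NumPy as follows. This is a minimal illustration with toy scores, not part of the original pipeline:

```python
def topk_mean_scores(preds, classtopk):
    """Pool frame scores per class: mean of the classtopk largest values.

    preds: list of per-class lists of frame scores (one row per class).
    Returns (ranked_class_indices, ranked_scores), best class first.
    """
    scores = []
    for row in preds:
        topk = min(classtopk, len(row))
        top_values = sorted(row, reverse=True)[:topk]
        scores.append(sum(top_values) / float(topk))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked, [scores[i] for i in ranked]

# Two classes, four frames each: class 1 has the higher top-2 mean.
labels, vals = topk_mean_scores([[0.1, 0.9, 0.2, 0.1], [0.8, 0.7, 0.1, 0.0]], 2)
```

Averaging only the strongest frames makes the video-level score robust to the many background frames in an untrimmed video, which is the rationale behind `classtopk` in the functions above.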

def gettopklabel4mp(scores, k):
    scores = scores - np.min(scores);
    scores = scores/np.sum(scores);
    sortedlabel = np.argsort(scores)[::-1]
    # print sortedlabel
    sortedscores = scores[sortedlabel]
    # print sortedlabel[:k],sortedscores[:k]
    ss = sortedscores[:20]
    ss = ss/np.sum(ss)
    ss = ss[:5]
    ss = ss/np.sum(ss)
    return sortedlabel[:k], ss[:k]


def sumfuse(mbh, ims, k):
    mbh = mbh - np.min(mbh) + 1.0;
    ims = ims - np.min(ims) + 1.0;
    # mbh = mbh/np.sum(mbh)
    # ims = ims/np.sum(ims)
    scores = mbh*ims;
    scores = scores/np.sum(scores);
    sortedlabel = np.argsort(scores)[::-1]
    # print sortedlabel
    sortedscores = scores[sortedlabel]
    # print sortedlabel[:k],sortedscores[:k]
    ss = sortedscores[:5]
    ss = ss/np.sum(ss)
    return sortedlabel[:k], ss[:k]
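The shift-then-multiply late fusion used in `sumfuse` (each modality's scores are made strictly positive, multiplied element-wise, then renormalised) can be illustrated in plain Python. Hypothetical toy scores, no NumPy:

```python
def product_fuse(scores_a, scores_b):
    """Element-wise product fusion of two per-class score lists."""
    # Shift each modality so every score is strictly positive,
    # mirroring the "- np.min(...) + 1.0" step in sumfuse.
    a = [s - min(scores_a) + 1.0 for s in scores_a]
    b = [s - min(scores_b) + 1.0 for s in scores_b]
    fused = [x * y for x, y in zip(a, b)]
    total = sum(fused)
    return [f / total for f in fused]  # normalise to a distribution

fused = product_fuse([0.2, -0.1, 0.5], [1.0, 0.3, 2.0])
```

The product form rewards classes on which both modalities agree; the positive shift is needed because raw SVM margins can be negative, which would flip signs under multiplication.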

def wAPfuse(mbh, ims, wmbh, wims, k):
    for i in range(200):
        mbh[i] = (1+wmbh[i])*mbh[i]
        ims[i] = (1+wims[i])*ims[i]
    mbh = mbh - np.min(mbh)+1;
    ims = ims - np.min(ims)+1;
    # mbh = mbh/np.sum(mbh)
    # ims = ims/np.sum(ims)
    scores = mbh + ims;
    # scores = np.mean(wmbh)*mbh+np.mean(wims)*ims;
    # scores = np.zeros(200)
    # for i in range(200):
    #     scores[i] = (mbh[i]*wmbh[i]+wims[i]*ims[i])/(wmbh[i]+wims[i]+1);
    scores = scores/np.sum(scores);
    sortedlabel = np.argsort(scores)[::-1]
    # print sortedlabel
    sortedscores = scores[sortedlabel]
    # print sortedlabel[:k],sortedscores[:k]
    ss = sortedscores[:5]
    ss = ss/np.sum(ss)
    return sortedlabel[:k], ss[:k]


def fuseThree(mbh, ims, c3d, k):
    mbh = mbh - np.min(mbh)+1;
    ims = ims - np.min(ims)+1;
    # c3d = c3d - np.min(c3d)+1;
    # mbh = mbh/np.sum(mbh)
    # ims = ims/np.sum(ims)
    # print 'we are here in fuse three'
    scores = np.zeros_like(mbh);  # *ims*c3d;
    for i in range(200):
        scores[i] = (mbh[i]*c3d[i]*ims[i])*(mbh[i]+ims[i]+c3d[i])
    # scores = mbh*ims;
    # scores = np.mean(wmbh)*mbh+np.mean(wims)*ims;
    # scores = np.zeros(200)
    # for i in range(200):
    #     scores[i] = (mbh[i]*wmbh[i]+wims[i]*ims[i])/(wmbh[i]+wims[i]+1);
    scores = scores/np.sum(scores);
    sortedlabel = np.argsort(scores)[::-1]
    # print sortedlabel
    sortedscores = scores[sortedlabel]
    # print sortedlabel[:k],sortedscores[:k]
    ss = sortedscores[:k]
    ss = ss/np.sum(ss[:5])
    return sortedlabel[:k], ss[:k]


def fuseCLF(clf, mbh, ims, c3d, k):
    mbh = mbh - np.min(mbh)+1;
    ims = ims - np.min(ims)+1;
    scores1 = mbh*ims*c3d
    scores2 = mbh+ims+c3d
    mbh = mbh/np.mean(mbh)
    ims = ims/np.mean(ims)
    c3d = c3d/np.mean(c3d)
    scores = scores1/np.mean(scores1)
    X = np.zeros((1, 800))
    count = 0;
    X[count, :200] = c3d;
    X[count, 200:400] = mbh;
    X[count, 400:600] = ims;
    X[count, 600:] = scores;
    # print np.shape(X)
    clfScore = clf.decision_function(X);
    clfScore = clfScore - np.min(clfScore) + 1;
    # print np.shape(clfScore)
    clfScore = scores2*scores1*clfScore[0]
    scores = clfScore/np.sum(clfScore);
    sortedlabel = np.argsort(scores)[::-1]
    # print scores
    sortedscores = scores[sortedlabel]
    # print sortedlabel[:k],sortedscores[:k]
    ss = sortedscores/np.sum(sortedscores[:3])
    return sortedlabel[:k], ss[:k]


def fuseCLFnEXT(clf, mbh, ims, c3d, ext, k):
    mbh = mbh - np.min(mbh)+0.9;
    ims = ims - np.min(ims)+1.4;
    scores1 = mbh*ims*c3d
    scores2 = mbh+ims+c3d+ext+1
    mbh = mbh/np.mean(mbh)
    ims = ims/np.mean(ims)
    c3d = c3d/np.mean(c3d)
    scores = scores1/np.mean(scores1)
    X = np.zeros((1, 800))
    count = 0;
    X[count, :200] = c3d;
    X[count, 200:400] = mbh;
    X[count, 400:600] = ims;
    X[count, 600:] = scores;
    # print np.shape(X)
    clfScore = clf.decision_function(X);
    clfScore = clfScore - np.min(clfScore) + 1;
    # print np.shape(clfScore)
    clfScore = scores2*scores1*clfScore[0]
    scores = clfScore/np.sum(clfScore);
    sortedlabel = np.argsort(scores)[::-1]
    # print scores
    sortedscores = scores[sortedlabel]
    # print sortedlabel[:k],sortedscores[:k]
    ss = sortedscores/np.sum(sortedscores[:3])
    return sortedlabel[:k], ss[:k]


def fuseFour(mbh, ims, c3d, ext, k):
    mbh = mbh - np.min(mbh)+1;
    ims = ims - np.min(ims)+1;
    # c3d = c3d - np.min(c3d)+1;
    # mbh = mbh/np.sum(mbh)
    # ims = ims/np.sum(ims)
    scores = mbh*ims*c3d;
    scores = scores - min(scores);
    # scores = np.mean(wmbh)*mbh+np.mean(wims)*ims;
    # scores = np.zeros(200)
    # for i in range(200):
    #     scores[i] = (mbh[i]*wmbh[i]+wims[i]*ims[i])/(wmbh[i]+wims[i]+1);
    scores = scores/np.sum(scores);
    sortedlabel = np.argsort(scores)[::-1]
    # print sortedlabel
    sortedscores = scores[sortedlabel]
    # print sortedlabel[:k],sortedscores[:k]
    ss = sortedscores
    ss = ss/np.sum(ss[:5])
    return sortedlabel[:k], ss[:k]


def getC3dMeanPreds(preds, classtopk=80):
    preds = preds - np.min(preds) + 0.9;
    scores = np.zeros(200)
    topk = min(classtopk, np.shape(preds)[0]);
    # for i in range(np.shape(preds)[0]):
    #     preds[i,:] = preds[i,:] - np.min(preds[i,:])+1;
    #     preds[i,:] = preds[i,:]/np.sum(preds[i,:]);
    for i in range(200):
        values = preds[:, i];
        values = np.sort(values);
        values = values[::-1]
        scores[i] = np.mean(values[:topk])
    return scores


def getEXTMeanPreds(preds, classtopk=250):
    # preds = preds - np.min(preds) + 1;
    scores = np.zeros(200)
    topk = min(classtopk, np.shape(preds)[0]);
    # for i in range(np.shape(preds)[0]):
    #     preds[i,:] = preds[i,:] - np.min(preds[i,:])+1;
    #     preds[i,:] = preds[i,:]/np.sum(preds[i,:]);
    for i in range(200):
        values = preds[:, i];
        values = np.sort(values);
        values = values[::-1]
        scores[i] = np.mean(values[:topk])
    return scores


def readpkl(filename):
    with open(filename) as f:
        data = pickle.load(f)
    return data

def processOnePredictions():
    #########################################
    #########################################
    names = getnames()
    gtlabels = readpkl('{}data/labels.pkl'.format(baseDir))
    indexs = readpkl('{}data/indexs.pkl'.format(baseDir))
    actionIDs, taxonomy, database = readannos()
    ########################################
    ########################################
    K = 5;
    subset = 'validation';  # ,'testing']:
    featType = 'MBH'
    savename = '{}data/predictions-{}-{}.pkl'.format(baseDir, subset, featType)
    with open(savename, 'r') as f:
        data = pickle.load(f)
    outfilename = '{}results/classification/{}-{}-{}.json'.format(baseDir, subset, featType, str(K).zfill(3))
    if True:  # not os.path.isfile(outfilename):
        vcount = 0;
        vdata = {};
        vdata['external_data'] = {'used': True, 'details': "We use extraction Net model with its weights pretrained on imageNet dataset and fine tuned on ActivityNet. Plus ImagenetShuffle, MBH features, C3D features provided on challenge page"}
        vdata['version'] = "VERSION 1.3"
        results = {}
        for videoId in database.keys():
            videoInfo = database[videoId]
            if videoInfo['subset'] == subset:
                if vcount > -1:
                    vidresults = []
                    vcount += 1
                    vidname = 'v_'+videoId
                    print 'processing ', vidname, ' vcount ', vcount
                    ind = data['vIndexs'][videoId]
                    preds = data['scores'][ind, :]
                    print 'shape of preds', np.shape(preds)
                    labels, scores = gettopklabel4mp(preds, K)
                    print labels
                    print scores
                    for idx in range(K):
                        score = scores[idx]
                        # if score>0.05:
                        label = labels[idx]
                        name = names[label]
                        tempdict = {'label': name, 'score': score}
                        vidresults.append(tempdict)
                    results[videoId] = vidresults
        vdata['results'] = results
        # print vdata
        print 'results saved in ', outfilename
        with open(outfilename, 'wb') as f:
            json.dump(vdata, f)

def getDATA(gtlabels, dataIMS, dataMBH, infileC3D, database, subset):
    X = np.zeros((11000, 800))
    Y = np.zeros(11000)
    count = 0;
    for videoId in database.keys():
        videoInfo = database[videoId]
        if videoInfo['subset'] == subset:
            # if vcount >-1:
            vidresults = []
            # vcount+=1
            vidname = 'v_'+videoId
            # print 'processing ', vidname, ' vcount ', vcount
            ind = dataMBH['vIndexs'][videoId]
            predsMBH = dataMBH['scores'][ind, :]
            ind = dataIMS['vIndexs'][videoId]
            predsIMS = dataIMS['scores'][ind, :]
            preds = infileC3D[videoId]['scores']
            predS3D = getC3dMeanPreds(preds)
            predsMBH = predsMBH - np.min(predsMBH)+1;
            predsIMS = predsIMS - np.min(predsIMS)+1;
            scores = predS3D*predsMBH*predsIMS
            predS3D = predS3D/np.mean(predS3D)
            predsMBH = predsMBH/np.mean(predsMBH)
            predsIMS = predsIMS/np.mean(predsIMS)
            scores = scores/np.mean(scores)
            X[count, :200] = predS3D;
            X[count, 200:400] = predsMBH;
            X[count, 400:600] = predsIMS;
            X[count, 600:] = scores;
            Y[count] = gtlabels[videoId];
            count += 1
            # labels,scores = fuseThree(predsMBH,predsIMS,predS3D,K)
    return X[:count], Y[:count]

def trainPreds():
    #########################################
    #########################################
    names = getnames()
    gtlabels = readpkl('{}data/labels.pkl'.format(baseDir))
    indexs = readpkl('{}data/indexs.pkl'.format(baseDir))
    actionIDs, taxonomy, database = readannos()
    ########################################
    ########################################
    K = 5;
    subset = 'validation';  # ,'testing']:
    featType = 'IMS'
    savename = '{}data/ALLpredictions-{}.pkl'.format(baseDir, featType)
    with open(savename, 'r') as f:
        dataIMS = pickle.load(f)
    featType = 'MBH'
    savename = '{}data/ALLpredictions-{}.pkl'.format(baseDir, featType)
    with open(savename, 'r') as f:
        dataMBH = pickle.load(f)
    featType = 'C3D'
    savename = '{}data/ALLpredictions-SVM-{}.hdf5'.format(baseDir, featType)
    infileC3D = h5py.File(savename, 'r');
    xtrain, ytrain = getDATA(gtlabels, dataIMS, dataMBH, infileC3D, database, 'training')
    print 'got training and shape is ', np.shape(xtrain)
    xval, yval = getDATA(gtlabels, dataIMS, dataMBH, infileC3D, database, 'validation')
    print 'got validation and shape is ', np.shape(xval)
    numSamples = np.shape(xval)[0]
    bestclf = {};
    bestscore = 0;
    Cs = [0.001, 0.01, 0.1, 1, 10, 100];
    for cc in Cs:
        clf = LinearSVC(C=cc)  # ,probability=True)
        clf = clf.fit(xtrain, ytrain)
        preds = clf.predict(xval)
        correctPreds = preds == yval;
        score = 100*float(np.sum(correctPreds))/numSamples
        print 'Overall Accuracy is ', score, '% ', ' C = ', str(cc), ' features = ', featType
        if score > bestscore:
            bestclf = clf
            bestscore = score
    saveName = '{}data/LinearfusiontrainingSVM-{}.pkl'.format(baseDir, featType)
    with open(saveName, 'w') as f:
        pickle.dump(bestclf, f)
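The C-grid model selection in `trainPreds` follows a generic pattern: fit one classifier per hyperparameter value, keep the one with the best validation accuracy. The pattern can be sketched without scikit-learn using toy fit/predict stand-ins (the threshold classifiers below are purely illustrative):

```python
def select_best(candidates, train, val):
    """Return the candidate with the highest validation accuracy (percent)."""
    best_model, best_acc = None, -1.0
    for make_model in candidates:
        model = make_model(train)            # "fit" on the training data
        correct = sum(1 for x, y in val if model(x) == y)
        acc = 100.0 * correct / len(val)
        if acc > best_acc:                   # keep strictly better models only
            best_model, best_acc = model, acc
    return best_model, best_acc

# Toy candidates: threshold classifiers predicting 1 when x exceeds t,
# standing in for LinearSVC instances with different C values.
candidates = [lambda train, t=t: (lambda x: int(x > t)) for t in (0.3, 0.5, 0.7)]
val = [(0.2, 0), (0.4, 1), (0.6, 1), (0.8, 1)]
model, acc = select_best(candidates, train=None, val=val)
```

As in `trainPreds`, selection is done on a held-out validation split rather than on the training data, which is what keeps the chosen C from simply rewarding overfitting.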

def processThreePredictions():
    #########################################
    #########################################
    names = getnames()
    gtlabels = readpkl('{}data/labels.pkl'.format(baseDir))
    indexs = readpkl('{}data/indexs.pkl'.format(baseDir))
    actionIDs, taxonomy, database = readannos()
    ########################################
    ########################################
    K = 196;
    subset = 'testing'  # 'validation';#,'testing']:
    featType = 'IMS'
    savename = '{}data/ALLpredictions-{}.pkl'.format(baseDir, featType)
    with open(savename, 'r') as f:
        dataIMS = pickle.load(f)
    featType = 'MBH'
    savename = '{}data/ALLpredictions-{}.pkl'.format(baseDir, featType)
    with open(savename, 'r') as f:
        dataMBH = pickle.load(f)
    featType = 'C3D'
    savename = '{}data/ALLpredictions-SVM-{}.hdf5'.format(baseDir, featType)
    infileC3D = h5py.File(savename, 'r');
    featType = 'EXT'
    savename = '{}data/predictions-{}-{}.hdf5'.format(baseDir, subset, featType)
    infileEXT = h5py.File(savename, 'r');
    featType = 'C3D'
    savename = '{}data/LinearfusiontrainingSVM-{}.pkl'.format(baseDir, featType)
    with open(savename, 'r') as f:
        clf = pickle.load(f)
    outfilename = '{}results/classification/{}-{}-{}.json'.format(baseDir, subset, 'IMS-MBH-C3D-SUBMIT-OLD', str(K).zfill(3))
    if True:  # not os.path.isfile(outfilename):
        vcount = 0;
        vdata = {};
        vdata['external_data'] = {'used': True, 'details': "We use ImagenetShuffle features, MBH features and C3D features provided on challenge page."}
        vdata['version'] = "VERSION 1.3"
        results = {}
        for videoId in database.keys():
            videoInfo = database[videoId]
            if videoInfo['subset'] == subset:
                # if vcount >-1:
                vidresults = []
                vcount += 1
                vidname = 'v_'+videoId
                print 'processing ', vidname, ' vcount ', vcount
                ind = dataMBH['vIndexs'][videoId]
                predsMBH = dataMBH['scores'][ind, :]
                ind = dataIMS['vIndexs'][videoId]
                predsIMS = dataIMS['scores'][ind, :]
                preds = infileC3D[videoId]['scores']
                predS3D = getC3dMeanPreds(preds, 10)
                preds = np.transpose(infileEXT[videoId]['scores'])
                predEXT = getEXTMeanPreds(preds, 20)
                # print 'shape of preds', np.shape(preds)
                # labels,scores = fuseThree(predsMBH,predsIMS,predS3D,K)
                # labels,scores = fuseFour(predsMBH,predsIMS,predS3D,predEXT,K)
                # labels,scores = fuseCLF(clf,predsMBH,predsIMS,predS3D,K)
                labels, scores = fuseCLFnEXT(clf, predsMBH, predsIMS, predS3D, predEXT, K)
                print labels, scores
                for idx in range(K):
                    score = scores[idx]
                    # if score>0.05:
                    label = labels[idx]
                    name = names[label]
                    tempdict = {'label': name, 'score': score}
                    vidresults.append(tempdict)
                results[videoId] = vidresults
        vdata['results'] = results
        # print vdata
        print 'process three results saved in ', outfilename
        with open(outfilename, 'wb') as f:
            json.dump(vdata, f)

def fuse2withAP():
    #########################################
    #########################################
    names = getnames()
    gtlabels = readpkl('{}data/labels.pkl'.format(baseDir))
    indexs = readpkl('{}data/indexs.pkl'.format(baseDir))
    actionIDs, taxonomy, database = readannos()
    ########################################
    ########################################
    K = 5;
    subset = 'validation';  # ,'testing']:
    featType = 'IMS'
    savename = '{}data/predictions-{}-{}.pkl'.format(baseDir, subset, featType)
    with open(savename, 'r') as f:
        dataIMS = pickle.load(f)
    savename = '{}data/weightAP-{}.pkl'.format(baseDir, featType)
    with open(savename, 'r') as f:
        wIMS = pickle.load(f)
    featType = 'MBH'
    savename = '{}data/predictions-{}-{}.pkl'.format(baseDir, subset, featType)
    with open(savename, 'r') as f:
        dataMBH = pickle.load(f)
    savename = '{}data/weightAP-{}.pkl'.format(baseDir, featType)
    with open(savename, 'r') as f:
        wMBH = pickle.load(f)
    outfilename = '{}results/classification/{}-{}-{}.json'.format(baseDir, subset, 'ap-fused-IMS-MBH', str(K).zfill(3))
    if True:  # not os.path.isfile(outfilename):
        vcount = 0;
        vdata = {};
        vdata['external_data'] = {'used': True, 'details': "We use extraction Net model with its weights pretrained on imageNet dataset and fine tuned on ActivityNet. Plus ImagenetShuffle, MBH features, C3D features provided on challenge page"}
        vdata['version'] = "VERSION 1.3"
        results = {}
        for videoId in database.keys():
            videoInfo = database[videoId]
            if videoInfo['subset'] == subset:
                if vcount > -1:
                    vidresults = []
                    vcount += 1
                    vidname = 'v_'+videoId
                    print 'processing ', vidname, ' vcount ', vcount
                    ind = dataMBH['vIndexs'][videoId]
                    predsMBH = dataMBH['scores'][ind, :]
                    ind = dataIMS['vIndexs'][videoId]
                    predsIMS = dataIMS['scores'][ind, :]
                    # print 'shape of preds', np.shape(preds)
                    # labels,scores = sumfuse(predsMBH[:201],predsIMS[:201],K)
                    labels, scores = wAPfuse(predsMBH, predsIMS, wMBH, wIMS, K)
                    print labels
                    print scores
                    for idx in range(K):
                        score = scores[idx]
                        # if score>0.05:
                        label = labels[idx]
                        name = names[label]
                        tempdict = {'label': name, 'score': score}
                        vidresults.append(tempdict)
                    results[videoId] = vidresults
        vdata['results'] = results
        # print vdata
        print 'Result saved in ', outfilename
        with open(outfilename, 'wb') as f:
            json.dump(vdata, f)


if __name__ == "__main__":
    # processOnePredictions()
    # processTwoPredictions()
    # fuse2withAP()
    processThreePredictions()
    # trainPreds()
| 35.767606 | 243 | 0.532191 | 2,256 | 20,316 | 4.784574 | 0.118351 | 0.012044 | 0.028164 | 0.024458 | 0.764314 | 0.744673 | 0.722624 | 0.712896 | 0.706967 | 0.700852 | 0 | 0.023727 | 0.292577 | 20,316 | 567 | 244 | 35.830688 | 0.727317 | 0.12596 | 0 | 0.665803 | 0 | 0.005181 | 0.116229 | 0.037991 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.015544 | null | null | 0.041451 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fd401a9f6126a85fdc3acd2797b86bfde3d81a7e | 111 | py | Python | orb_simulator/orbsim_language/orbsim_ast/stop_sim_node.py | dmguezjaviersnet/IA-Sim-Comp-Project | 8165b9546efc45f98091a3774e2dae4f45942048 | [
"MIT"
] | 1 | 2022-01-19T22:49:09.000Z | 2022-01-19T22:49:09.000Z | orb_simulator/orbsim_language/orbsim_ast/stop_sim_node.py | dmguezjaviersnet/IA-Sim-Comp-Project | 8165b9546efc45f98091a3774e2dae4f45942048 | [
"MIT"
] | 15 | 2021-11-10T14:25:02.000Z | 2022-02-12T19:17:11.000Z | orb_simulator/orbsim_language/orbsim_ast/stop_sim_node.py | dmguezjaviersnet/IA-Sim-Comp-Project | 8165b9546efc45f98091a3774e2dae4f45942048 | [
"MIT"
] | null | null | null | from orbsim_language.orbsim_ast.statement_node import StatementNode
class StopSimNode(StatementNode):
    pass | 27.75 | 67 | 0.855856 | 13 | 111 | 7.076923 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099099 | 111 | 4 | 68 | 27.75 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
fd5ffd243fed4a807727d79e8870e953082b8338 | 171 | py | Python | ovm/utils/compat23/__init__.py | lightcode/OVM | 3c6c3528ef851f65d4bd75cafb8738c54fba7b6f | [
"MIT"
] | 1 | 2018-03-20T14:54:10.000Z | 2018-03-20T14:54:10.000Z | ovm/utils/compat23/__init__.py | lightcode/OVM | 3c6c3528ef851f65d4bd75cafb8738c54fba7b6f | [
"MIT"
] | null | null | null | ovm/utils/compat23/__init__.py | lightcode/OVM | 3c6c3528ef851f65d4bd75cafb8738c54fba7b6f | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
from ovm.utils.compat23.with_popen import Popen
from ovm.utils.compat23.etree import etree
__all__ = ['Popen', 'etree']
| 17.1 | 47 | 0.701754 | 25 | 171 | 4.6 | 0.64 | 0.121739 | 0.208696 | 0.347826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040541 | 0.134503 | 171 | 9 | 48 | 19 | 0.736486 | 0.251462 | 0 | 0 | 0 | 0 | 0.079365 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b5e1cb419bda79854213d848aeaff1048d037cac | 140 | py | Python | tests/basics/bytearray_decode.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 13,648 | 2015-01-01T01:34:51.000Z | 2022-03-31T16:19:53.000Z | tests/basics/bytearray_decode.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 7,092 | 2015-01-01T07:59:11.000Z | 2022-03-31T23:52:18.000Z | tests/basics/bytearray_decode.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 4,942 | 2015-01-02T11:48:50.000Z | 2022-03-31T19:57:10.000Z | try:
    print(bytearray(b'').decode())
    print(bytearray(b'abc').decode())
except AttributeError:
    print("SKIP")
    raise SystemExit
| 20 | 37 | 0.657143 | 16 | 140 | 5.75 | 0.6875 | 0.304348 | 0.326087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171429 | 140 | 6 | 38 | 23.333333 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
bd0551b1483aa6e1a5d31afcd1893e239690d37c | 48 | py | Python | platform/bq/third_party/inflection/__init__.py | google-cloud-sdk-unofficial/google-cloud-sdk | 2a48a04df14be46c8745050f98768e30474a1aac | [
"Apache-2.0"
] | 2 | 2019-11-10T09:17:07.000Z | 2019-12-18T13:44:08.000Z | platform/bq/third_party/inflection/__init__.py | google-cloud-sdk-unofficial/google-cloud-sdk | 2a48a04df14be46c8745050f98768e30474a1aac | [
"Apache-2.0"
] | null | null | null | platform/bq/third_party/inflection/__init__.py | google-cloud-sdk-unofficial/google-cloud-sdk | 2a48a04df14be46c8745050f98768e30474a1aac | [
"Apache-2.0"
] | 1 | 2020-07-25T01:40:19.000Z | 2020-07-25T01:40:19.000Z | #!/usr/bin/env python
from .inflection import *
| 16 | 25 | 0.729167 | 7 | 48 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 48 | 2 | 26 | 24 | 0.833333 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1fefc92fec9274173e1160ecb53131a0d3ef84a5 | 32 | py | Python | filament/locking.py | comstud/filament | be6dbd6bf76dbcb0655c7fae239333d64ee8bb5f | [
"MIT"
] | 2 | 2017-03-08T20:29:52.000Z | 2019-05-15T20:15:42.000Z | filament/locking.py | comstud/filament | be6dbd6bf76dbcb0655c7fae239333d64ee8bb5f | [
"MIT"
] | null | null | null | filament/locking.py | comstud/filament | be6dbd6bf76dbcb0655c7fae239333d64ee8bb5f | [
"MIT"
] | null | null | null | from _filament.locking import *
| 16 | 31 | 0.8125 | 4 | 32 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9527f61be2d1e19cb598533e897794282480fa9b | 9,081 | py | Python | tests/brainload/test_braindescriptors.py | dfsp-spirit/cogload | ff9d19803c2e0c9aea248a45380959c2758ba83a | [
"MIT"
] | 8 | 2018-11-11T11:41:19.000Z | 2022-02-09T10:50:34.000Z | tests/brainload/test_braindescriptors.py | dfsp-spirit/cogload | ff9d19803c2e0c9aea248a45380959c2758ba83a | [
"MIT"
] | 8 | 2018-11-05T10:11:09.000Z | 2019-11-05T20:34:19.000Z | tests/brainload/test_braindescriptors.py | dfsp-spirit/cogload | ff9d19803c2e0c9aea248a45380959c2758ba83a | [
"MIT"
] | 1 | 2020-07-20T06:43:57.000Z | 2020-07-20T06:43:57.000Z | import os
import pytest
import numpy as np
from numpy.testing import assert_array_equal, assert_allclose
import brainload as bl
import brainload.freesurferdata as fsd
import brainload.braindescriptors as bd
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
TEST_DATA_DIR = os.path.join(THIS_DIR, os.pardir, 'test_data')
# Respect the environment variable BRAINLOAD_TEST_DATA_DIR if it is set. If not, fall back to default.
TEST_DATA_DIR = os.getenv('BRAINLOAD_TEST_DATA_DIR', TEST_DATA_DIR)
def test_braindescriptors_init_nonempty():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list)
    assert len(bdi.subjects_list) == 2
    assert len(bdi.descriptor_names) == 0
    assert len(bdi.hemis) == 2
    assert bdi.descriptor_values.shape == (2, 0)


def test_braindescriptors_init_with_hemi():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list, hemi='lh')
    bdi.report_descriptors()
    bdi.check_for_hemi_dependent_file([])
    assert len(bdi.subjects_list) == 2
    assert len(bdi.descriptor_names) == 0
    assert bdi.descriptor_values.shape == (2, 0)
    assert len(bdi.hemis) == 1


def test_check_for_NaNs_no_descriptors_yet():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list, hemi='lh')
    subjects_over_threshold, descriptors_over_threshold, nan_share_per_subject, nan_share_per_descriptor = bdi.check_for_NaNs()
    assert len(subjects_over_threshold) == 0
    assert len(descriptors_over_threshold) == 0


def test_check_for_NaNs_with_curv_descriptors():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list, hemi='lh')
    bdi.add_curv_stats()
    subjects_over_threshold, descriptors_over_threshold, nan_share_per_subject, nan_share_per_descriptor = bdi.check_for_NaNs()
    assert len(subjects_over_threshold) == 0
    assert len(descriptors_over_threshold) == 0


def test_check_for_custom_measure_stats_files_invalid_format():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list, hemi='rh')
    with pytest.raises(ValueError) as exc_info:
        bdi.check_for_custom_measure_stats_files(["aparc"], ["area"], morph_file_format="nosuchformat")
    assert "nosuchformat" in str(exc_info.value)
    assert "morph_file_format must be one of" in str(exc_info.value)


def test_check_for_custom_measure_stats_files_curv_format():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list, hemi='rh')
    bdi.check_for_custom_measure_stats_files(["aparc"], ["area"], morph_file_format="curv")


def test_check_for_custom_measure_stats_files_mgh_format():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list, hemi='rh')
    bdi.check_for_custom_measure_stats_files(["aparc"], ["area"], morph_file_format="mgh")


def test_braindescriptors_init_with_invalid_hemi():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    with pytest.raises(ValueError) as exc_info:
        bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list, hemi='nosuchhemi')
    assert "hemi must be one of {'lh', 'rh', 'both'} but is" in str(exc_info.value)
    assert "nosuchhemi" in str(exc_info.value)


def test_braindescriptors_parcellation_stats():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list)
    bdi.add_parcellation_stats(['aparc', 'aparc.a2009s'])
    bdi.add_segmentation_stats(['aseg'])
    bdi.add_custom_measure_stats(['aparc'], ['area'])
    bdi.add_curv_stats()
    assert len(bdi.descriptor_names) == 3089
    assert bdi.descriptor_values.shape == (2, 3089)


def test_braindescriptors_add_standard_stats():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list)
    bdi.add_standard_stats()
    assert len(bdi.descriptor_names) == 3426
    assert bdi.descriptor_values.shape == (2, 3426)


def test_braindescriptors_standard_stats_have_unique_names():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list)
    bdi.add_standard_stats()
    assert len(bdi.descriptor_names) == 3426
    assert bdi.descriptor_values.shape == (2, 3426)
    assert len(bdi.descriptor_names) == len(list(set(bdi.descriptor_names)))
    dup_list = bdi._check_for_duplicate_descriptor_names()
    assert not dup_list


def test_braindescriptors_file_checks():
    expected_subject2_testdata_dir = os.path.join(TEST_DATA_DIR, 'subject2')
    if not os.path.isdir(expected_subject2_testdata_dir):
        pytest.skip("Test data missing: e.g., directory '%s' does not exist. You can get all test data by running './develop/get_test_data_all.bash' in the repo root." % expected_subject2_testdata_dir)
    subjects_list = ['subject1', 'subject2']
    bdi = bd.BrainDescriptors(TEST_DATA_DIR, subjects_list)
    bdi.check_for_parcellation_stats_files(['aparc', 'aparc.a2009s'])
    bdi.check_for_segmentation_stats_files(['aseg', 'wmparc'])
    bdi.check_for_custom_measure_stats_files(['aparc'], ['area'])
    bdi.check_for_curv_stats_files()
    assert len(bdi.subjects_list) == 2
    assert len(bdi.descriptor_names) == 0
    assert bdi.descriptor_values.shape == (2, 0)
| 57.474684 | 201 | 0.755203 | 1,337 | 9,081 | 4.812266 | 0.103216 | 0.082064 | 0.134287 | 0.151072 | 0.827479 | 0.80572 | 0.788778 | 0.777432 | 0.751166 | 0.737022 | 0 | 0.015954 | 0.13721 | 9,081 | 157 | 202 | 57.840764 | 0.805233 | 0.011012 | 0 | 0.646154 | 0 | 0.092308 | 0.255596 | 0.048001 | 0 | 0 | 0 | 0 | 0.215385 | 1 | 0.092308 | false | 0 | 0.053846 | 0 | 0.146154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
95310012f871b8978aa95e6de4f314ac1a3cf95d | 73 | py | Python | jacdac/gyroscope/__init__.py | microsoft/jacdac-python | 712ad5559e29065f5eccb5dbfe029c039132df5a | [
"MIT"
] | 1 | 2022-02-15T21:30:36.000Z | 2022-02-15T21:30:36.000Z | jacdac/gyroscope/__init__.py | microsoft/jacdac-python | 712ad5559e29065f5eccb5dbfe029c039132df5a | [
"MIT"
] | null | null | null | jacdac/gyroscope/__init__.py | microsoft/jacdac-python | 712ad5559e29065f5eccb5dbfe029c039132df5a | [
"MIT"
] | 1 | 2022-02-08T19:32:45.000Z | 2022-02-08T19:32:45.000Z | # Autogenerated file.
from .client import GyroscopeClient # type: ignore
| 24.333333 | 50 | 0.794521 | 8 | 73 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136986 | 73 | 2 | 51 | 36.5 | 0.920635 | 0.438356 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
20fd5e0318d05107cbfe2ceb26e741c7fd70d53e | 33 | py | Python | deanslist/__init__.py | upeducationnetwork/deanslist-python | 226eda2580055427119397bc28e7976f019d7301 | [
"MIT"
] | null | null | null | deanslist/__init__.py | upeducationnetwork/deanslist-python | 226eda2580055427119397bc28e7976f019d7301 | [
"MIT"
] | 2 | 2016-05-16T19:54:26.000Z | 2016-05-20T12:02:20.000Z | deanslist/__init__.py | upeducationnetwork/deanslist-python | 226eda2580055427119397bc28e7976f019d7301 | [
"MIT"
] | null | null | null | from .deanslist import dl, dlall
| 16.5 | 32 | 0.787879 | 5 | 33 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 33 | 1 | 33 | 33 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1f1c111dfb17991d6534f57dfea9c5be22c90642 | 17,955 | py | Python | tests/test_extract_fragments.py | phenylazide/MolecularSimilarity | 429f64c3c18daa5d341110380f761aa003ad290b | [
"MIT"
] | 1 | 2020-09-14T16:01:50.000Z | 2020-09-14T16:01:50.000Z | tests/test_extract_fragments.py | phenylazide/MolecularSimilarity | 429f64c3c18daa5d341110380f761aa003ad290b | [
"MIT"
] | 5 | 2019-04-20T06:23:01.000Z | 2019-07-25T17:28:05.000Z | tests/test_extract_fragments.py | phenylazide/MolecularSimilarity | 429f64c3c18daa5d341110380f761aa003ad290b | [
"MIT"
] | 1 | 2020-07-07T14:55:14.000Z | 2020-07-07T14:55:14.000Z | #!/usr/bin/env python3
import unittest
import rdkit
import rdkit.Chem
import rdkit.Chem.AtomPairs.Utils
import extract_fragments
class TestCalc(unittest.TestCase):

    def test_atom_pairs(self):
        molecule = rdkit.Chem.MolFromSmiles("c1ccccn1")
        result = [{"smiles": "CC", "index": 2509202, "type": "AP", "size": 2},
                  {"smiles": "CC", "index": 2509202, "type": "AP", "size": 2},
                  {"smiles": "CC", "index": 2509202, "type": "AP", "size": 2},
                  {"smiles": "CC", "index": 2509202, "type": "AP", "size": 2},
                  {"smiles": "CC", "index": 3557778, "type": "AP", "size": 3},
                  {"smiles": "CC", "index": 3557778, "type": "AP", "size": 3},
                  {"smiles": "CC", "index": 3557778, "type": "AP", "size": 3},
                  {"smiles": "CC", "index": 3557778, "type": "AP", "size": 3},
                  {"smiles": "CC", "index": 4606354, "type": "AP", "size": 4},
                  {"smiles": "CC", "index": 4606354, "type": "AP", "size": 4},
                  {"smiles": "CN", "index": 2574738, "type": "AP", "size": 2},
                  {"smiles": "CN", "index": 2574738, "type": "AP", "size": 2},
                  {"smiles": "CN", "index": 3623314, "type": "AP", "size": 3},
                  {"smiles": "CN", "index": 3623314, "type": "AP", "size": 3},
                  {"smiles": "CN", "index": 4671890, "type": "AP", "size": 4}]
        self.assertEqual(extract_fragments.extract_atompair_fragments(molecule), result)
        molecule = rdkit.Chem.MolFromSmiles("c1nccc2n1ccc2")
        result = [{"smiles": "CC", "index": 2509202, "type": "AP", "size": 2},
                  {"smiles": "CC", "index": 2509202, "type": "AP", "size": 2},
                  {"smiles": "CC", "index": 2509202, "type": "AP", "size": 2},
                  {"smiles": "CC", "index": 3557778, "type": "AP", "size": 3},
                  {"smiles": "CC", "index": 3557778, "type": "AP", "size": 3},
                  {"smiles": "CC", "index": 3557778, "type": "AP", "size": 3},
                  {"smiles": "CC", "index": 3557778, "type": "AP", "size": 3},
                  {"smiles": "CC", "index": 4606354, "type": "AP", "size": 4},
                  {"smiles": "CC", "index": 4606354, "type": "AP", "size": 4},
                  {"smiles": "CC", "index": 4606354, "type": "AP", "size": 4},
                  {"smiles": "CC", "index": 4606354, "type": "AP", "size": 4},
                  {"smiles": "CC", "index": 4606354, "type": "AP", "size": 4},
                  {"smiles": "CC", "index": 4606354, "type": "AP", "size": 4},
                  {"smiles": "CC", "index": 5654930, "type": "AP", "size": 5},
                  {"smiles": "CC", "index": 5654930, "type": "AP", "size": 5},
                  {"smiles": "CC", "index": 2510226, "type": "AP", "size": 2},
                  {"smiles": "CC", "index": 2510226, "type": "AP", "size": 2},
                  {"smiles": "CC", "index": 3558802, "type": "AP", "size": 3},
                  {"smiles": "CC", "index": 3558802, "type": "AP", "size": 3},
                  {"smiles": "CC", "index": 3558802, "type": "AP", "size": 3},
                  {"smiles": "CC", "index": 3558802, "type": "AP", "size": 3},
                  {"smiles": "CN", "index": 2574738, "type": "AP", "size": 2},
                  {"smiles": "CN", "index": 2574738, "type": "AP", "size": 2},
                  {"smiles": "CN", "index": 3623314, "type": "AP", "size": 3},
                  {"smiles": "CN", "index": 4671890, "type": "AP", "size": 4},
                  {"smiles": "CN", "index": 5720466, "type": "AP", "size": 5},
                  {"smiles": "CN", "index": 5720466, "type": "AP", "size": 5},
                  {"smiles": "CN", "index": 4671891, "type": "AP", "size": 4},
                  {"smiles": "CN", "index": 2575762, "type": "AP", "size": 2},
                  {"smiles": "CN", "index": 2575762, "type": "AP", "size": 2},
                  {"smiles": "CN", "index": 3624338, "type": "AP", "size": 3},
                  {"smiles": "CN", "index": 3624338, "type": "AP", "size": 3},
                  {"smiles": "CN", "index": 3624338, "type": "AP", "size": 3},
                  {"smiles": "CN", "index": 4672914, "type": "AP", "size": 4},
                  {"smiles": "CN", "index": 2575763, "type": "AP", "size": 2},
                  {"smiles": "NN", "index": 3624402, "type": "AP", "size": 3}]
        self.assertEqual(extract_fragments.extract_atompair_fragments(molecule), result)
        molecule = rdkit.Chem.MolFromSmiles("CCO")
        result = [{"smiles": "CC", "index": 2492801, "type": "AP", "size": 2},
                  {"smiles": "CO", "index": 3671425, "type": "AP", "size": 3},
                  {"smiles": "CO", "index": 2622850, "type": "AP", "size": 2}]
        self.assertEqual(extract_fragments.extract_atompair_fragments(molecule), result)

    def test_neighbourhood_fragments(self):
        # ECFP
        molecule = rdkit.Chem.MolFromSmiles("c1ccccn1")
        options = {
            "kekule": True,
            "isomeric": True
        }
        size = 6
        result = [{"smiles": "N1:C:C:C:C:C:1", "index": 755035130, "type": "ECFP", "size": 3}]
        self.assertEqual(extract_fragments.extract_neighbourhood_fragments(molecule, size, options, True), result)
        size = 4
        result = [{"smiles": "C(:C:C):C:N", "index": 1207774339, "type": "ECFP", "size": 2},
                  {"smiles": "C(:C:C):C:N", "index": 1207774339, "type": "ECFP", "size": 2},
                  {"smiles": "N(:C:C):C:C", "index": 1343371647, "type": "ECFP", "size": 2},
                  {"smiles": "C(:C:C):N:C", "index": 1821698485, "type": "ECFP", "size": 2},
                  {"smiles": "C(:C:C):N:C", "index": 1821698485, "type": "ECFP", "size": 2},
                  {"smiles": "C(:C:C):C:C", "index": 2763854213, "type": "ECFP", "size": 2}]
        self.assertEqual(extract_fragments.extract_neighbourhood_fragments(molecule, size, options, True), result)
        options = {
            "kekule": False,
            "isomeric": True
        }
        result = [{"smiles": "c(cc)cn", "index": 1207774339, "type": "ECFP", "size": 2},
                  {"smiles": "c(cc)cn", "index": 1207774339, "type": "ECFP", "size": 2},
                  {"smiles": "n(cc)cc", "index": 1343371647, "type": "ECFP", "size": 2},
                  {"smiles": "c(cc)nc", "index": 1821698485, "type": "ECFP", "size": 2},
                  {"smiles": "c(cc)nc", "index": 1821698485, "type": "ECFP", "size": 2},
                  {"smiles": "c(cc)cc", "index": 2763854213, "type": "ECFP", "size": 2}]
        self.assertEqual(extract_fragments.extract_neighbourhood_fragments(molecule, size, options, True), result)
        molecule = rdkit.Chem.MolFromSmiles("c1nccc2n1ccc2")
        options = {
            "kekule": False,
            "isomeric": False
        }
        result = [{"smiles": "c(cn)c(c)n", "index": 201245292, "type": "ECFP", "size": 2},
                  {"smiles": "c(cc)n(c)c", "index": 405194198, "type": "ECFP", "size": 2},
                  {"smiles": "n(cc)(cn)c(c)c", "index": 924977737, "type": "ECFP", "size": 2},
                  {"smiles": "c(cc)nc", "index": 1717044408, "type": "ECFP", "size": 2},
                  {"smiles": "c(cc)(cc)n(c)c", "index": 2345490282, "type": "ECFP", "size": 2},
                  {"smiles": "c(nc)n(c)c", "index": 2558786292, "type": "ECFP", "size": 2},
                  {"smiles": "n(cc)cn", "index": 2910395211, "type": "ECFP", "size": 2},
                  {"smiles": "c(cc)cn", "index": 3428161631, "type": "ECFP", "size": 2},
                  {"smiles": "c(cc)c(c)n", "index": 3896685563, "type": "ECFP", "size": 2}]
        self.assertEqual(extract_fragments.extract_neighbourhood_fragments(molecule, size, options, True), result)
        size = 6
        molecule = rdkit.Chem.MolFromSmiles("C[C@H](O)c1ccccc1")
        options = {
            "kekule": False,
            "isomeric": True
        }
        result = [{"smiles": "c1ccccc1", "index": 742000539, "type": "ECFP", "size": 3},
                  {"smiles": "c1cccc(C)c1", "index": 997097697, "type": "ECFP", "size": 3},
                  {"smiles": "c1ccccc1[C@H](C)O", "index": 1566387358, "type": "ECFP", "size": 3}]
        self.assertEqual(extract_fragments.extract_neighbourhood_fragments(molecule, size, options, True), result)
        options = {
            "kekule": False,
            "isomeric": False
        }
        result = [{"smiles": "c1ccccc1", "index": 742000539, "type": "ECFP", "size": 3},
                  {"smiles": "c1cccc(C)c1", "index": 997097697, "type": "ECFP", "size": 3},
                  {"smiles": "c1ccccc1C(C)O", "index": 1566387358, "type": "ECFP", "size": 3}]
        self.assertEqual(extract_fragments.extract_neighbourhood_fragments(molecule, size, options, True), result)
        options = {
            "kekule": True,
            "isomeric": True
        }
        result = [{"smiles": "C1:C:C:C:C:C:1", "index": 742000539, "type": "ECFP", "size": 3},
                  {"smiles": "C1:C:C:C:C(C):C:1", "index": 997097697, "type": "ECFP", "size": 3},
                  {"smiles": "C1:C:C:C:C:C:1[C@H](C)O", "index": 1566387358, "type": "ECFP", "size": 3}]
        self.assertEqual(extract_fragments.extract_neighbourhood_fragments(molecule, size, options, True), result)
        options = {
            "kekule": True,
            "isomeric": False
        }
        result = [{"smiles": "C1:C:C:C:C:C:1", "index": 742000539, "type": "ECFP", "size": 3},
                  {"smiles": "C1:C:C:C:C(C):C:1", "index": 997097697, "type": "ECFP", "size": 3},
                  {"smiles": "C1:C:C:C:C:C:1C(C)O", "index": 1566387358, "type": "ECFP", "size": 3}]
        self.assertEqual(extract_fragments.extract_neighbourhood_fragments(molecule, size, options, True), result)
        # FCFP
        molecule = rdkit.Chem.MolFromSmiles("c1ccccn1")
        options = {
            "kekule": True,
            "isomeric": True
        }
        size = 6
        result = [{"smiles": "C1:C:C:C:C:N:1", "index": 1067478186, "type": "FCFP", "size": 3}]
        self.assertEqual(extract_fragments.extract_neighbourhood_fragments(molecule, size, options, False), result)
        molecule = rdkit.Chem.MolFromSmiles("c1nccc2n1ccc2")
        options = {
            "kekule": False,
            "isomeric": False
        }
        size = 4
        result = [{"smiles": "c(cc)(cc)n(c)c", "index": 435849959, "type": "FCFP", "size": 2},
                  {"smiles": "n(cc)cn", "index": 1127424909, "type": "FCFP", "size": 2},
                  {"smiles": "c(cc)cn", "index": 1230564256, "type": "FCFP", "size": 2},
                  {"smiles": "c(cc)nc", "index": 1251070542, "type": "FCFP", "size": 2},
                  {"smiles": "n(cc)(cn)c(c)c", "index": 1476508118, "type": "FCFP", "size": 2},
                  {"smiles": "c(nc)n(c)c", "index": 2154510652, "type": "FCFP", "size": 2},
                  {"smiles": "c(cc)n(c)c", "index": 2226952373, "type": "FCFP", "size": 2},
                  {"smiles": "c(cc)c(c)n", "index": 2460461453, "type": "FCFP", "size": 2},
                  {"smiles": "c(cn)c(c)n", "index": 2460461555, "type": "FCFP", "size": 2}]
        self.assertEqual(extract_fragments.extract_neighbourhood_fragments(molecule, size, options, False), result)
        molecule = rdkit.Chem.MolFromSmiles("CCO")
        size = 2
        result = [{"smiles": "CC", "index": 3205495869, "type": "FCFP", "size": 1},
                  {"smiles": "OC", "index": 3205496825, "type": "FCFP", "size": 1},
                  {"smiles": "C(C)O", "index": 3766532901, "type": "FCFP", "size": 1}]
        self.assertEqual(extract_fragments.extract_neighbourhood_fragments(molecule, size, options, False), result)

    def test_path_fragments(self):
        molecule = rdkit.Chem.MolFromSmiles("c1ccccn1")
        options = {
            "kekule": False,
            "isomeric": False
        }
        size = 2
        result = [{"smiles": "cc", "index": 83025, "type": "TT", "size": 2},
                  {"smiles": "cn", "index": 148561, "type": "TT", "size": 2},
                  {"smiles": "cc", "index": 83025, "type": "TT", "size": 2},
                  {"smiles": "cc", "index": 83025, "type": "TT", "size": 2},
                  {"smiles": "cc", "index": 83025, "type": "TT", "size": 2},
                  {"smiles": "cn", "index": 148561, "type": "TT", "size": 2}]
        self.assertEqual(extract_fragments.extract_path_fragments(molecule, size, options), result)
        size = 3
        result = [{"smiles": "ccc", "index": 85016657, "type": "TT", "size": 3},
                  {"smiles": "cnc", "index": 85082193, "type": "TT", "size": 3},
                  {"smiles": "ccn", "index": 152125521, "type": "TT", "size": 3},
                  {"smiles": "ccc", "index": 85016657, "type": "TT", "size": 3},
                  {"smiles": "ccc", "index": 85016657, "type": "TT", "size": 3},
                  {"smiles": "ccn", "index": 152125521, "type": "TT", "size": 3}]
        self.assertEqual(extract_fragments.extract_path_fragments(molecule, size, options), result)
        molecule = rdkit.Chem.MolFromSmiles("c1nccc2n1ccc2")
        options = {
            "kekule": False,
            "isomeric": True
        }
        size = 5
        result = [{"smiles": "cccnc", "index": 90245936857169, "type": "TT", "size": 5},
                  {"smiles": "cccnc", "index": 89216219430993, "type": "TT", "size": 5},
                  {"smiles": "cccnc", "index": 89216219430993, "type": "TT", "size": 5},
                  {"smiles": "cccnc", "index": 89216218382417, "type": "TT", "size": 5},
                  {"smiles": "ccncn", "index": 159515237499985, "type": "TT", "size": 5},
                  {"smiles": "ccncn", "index": 159515237499985, "type": "TT", "size": 5},
                  {"smiles": "ccncn", "index": 159515237498961, "type": "TT", "size": 5},
                  {"smiles": "ncccn", "index": 160615754711185, "type": "TT", "size": 5},
                  {"smiles": "ccccn", "index": 159515169342545, "type": "TT", "size": 5},
                  {"smiles": "cncnc", "index": 90315730075729, "type": "TT", "size": 5},
                  {"smiles": "cncnc", "index": 89216218447953, "type": "TT", "size": 5},
                  {"smiles": "cccnc", "index": 89216219430993, "type": "TT", "size": 5},
                  {"smiles": "ccccc", "index": 89146426212433, "type": "TT", "size": 5},
                  {"smiles": "ccncn", "index": 160614748078161, "type": "TT", "size": 5},
                  {"smiles": "ccncc", "index": 89147567063121, "type": "TT", "size": 5},
                  {"smiles": "ccccc", "index": 89147498905681, "type": "TT", "size": 5},
                  {"smiles": "c1ccnc1", "index": 90315730010193, "type": "TT", "size": 5}]
        self.assertEqual(extract_fragments.extract_path_fragments(molecule, size, options), result)
        options = {
            "kekule": True,
            "isomeric": False
        }
        size = 3
        result = [{"smiles": "C:N:C", "index": 85082193, "type": "TT", "size": 3},
                  {"smiles": "C:N:C", "index": 86131793, "type": "TT", "size": 3},
                  {"smiles": "C:N:C", "index": 85083217, "type": "TT", "size": 3},
                  {"smiles": "N:C:N", "index": 153174161, "type": "TT", "size": 3},
                  {"smiles": "C:C:N", "index": 152125521, "type": "TT", "size": 3},
                  {"smiles": "C:C:C", "index": 86065233, "type": "TT", "size": 3},
                  {"smiles": "C:C:N", "index": 153175121, "type": "TT", "size": 3},
                  {"smiles": "C:C:C", "index": 85017681, "type": "TT", "size": 3},
                  {"smiles": "C:N:C", "index": 86131793, "type": "TT", "size": 3},
                  {"smiles": "C:C:C", "index": 86065233, "type": "TT", "size": 3},
                  {"smiles": "C:C:N", "index": 153175121, "type": "TT", "size": 3},
                  {"smiles": "C:C:N", "index": 153174097, "type": "TT", "size": 3},
                  {"smiles": "C:C:C", "index": 85016657, "type": "TT", "size": 3}]
        self.assertEqual(extract_fragments.extract_path_fragments(molecule, size, options), result)
        options = {
            "kekule": True,
            "isomeric": True
        }
        size = 8
        result = [{"smiles": "C:C:N1:C:C:C:N:C:1", "index": 95794032134814304387153, "type": "TT", "size": 8},
                  {"smiles": "C:C:C:C:C:C:N:C", "index": 95794032134814236229713, "type": "TT", "size": 8},
                  {"smiles": "C:C:C1:C:C:C:N:1:C", "index": 95795185056317770383441, "type": "TT", "size": 8},
                  {"smiles": "C:C1:C:C:C:N:1:C:N", "index": 171278182067926592472145, "type": "TT", "size": 8},
                  {"smiles": "N:C:C:C1:C:C:C:N:1", "index": 171278108885601952546897, "type": "TT", "size": 8},
                  {"smiles": "C:N:C:N1:C:C:C:C:1", "index": 95794032206282492035153, "type": "TT", "size": 8},
                  {"smiles": "C:C:C1:C:C:N:C:N:1", "index": 95720317216182156476497, "type": "TT", "size": 8},
                  {"smiles": "C:C:C:N:C:N:C:C", "index": 95720317216182155427921, "type": "TT", "size": 8},
                  {"smiles": "C:C1:C:C:N:C:N:1:C", "index": 95795185126686513513553, "type": "TT", "size": 8}]
        self.assertEqual(extract_fragments.extract_path_fragments(molecule, size, options), result)
        molecule = rdkit.Chem.MolFromSmiles("CCO")
        options = {
            "kekule": False,
            "isomeric": True
        }
        size = 2
        result = [{"smiles": "CC", "index": 66624, "type": "TT", "size": 2},
                  {"smiles": "CO", "index": 196673, "type": "TT", "size": 2}]
        self.assertEqual(extract_fragments.extract_path_fragments(molecule, size, options), result)


if __name__ == "__main__":
    unittest.main()
| 57 | 115 | 0.482373 | 1,944 | 17,955 | 4.416667 | 0.085391 | 0.023993 | 0.062893 | 0.012113 | 0.854647 | 0.822735 | 0.787911 | 0.73515 | 0.710109 | 0.662124 | 0 | 0.134575 | 0.285269 | 17,955 | 314 | 116 | 57.181529 | 0.534481 | 0.001671 | 0 | 0.588679 | 0 | 0.003774 | 0.255957 | 0.001283 | 0 | 0 | 0 | 0 | 0.075472 | 1 | 0.011321 | false | 0 | 0.018868 | 0 | 0.033962 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1f215fc68a5e08bb449a230f4e6f0374bb2d8e4e | 1,413 | py | Python | sane/dataset/__init__.py | AndreyBuyanov/Neuro-Evolutionary-Calculations | af6c06890a869a768ab6929f7d2ba6f12fb1b81a | [
"MIT"
] | null | null | null | sane/dataset/__init__.py | AndreyBuyanov/Neuro-Evolutionary-Calculations | af6c06890a869a768ab6929f7d2ba6f12fb1b81a | [
"MIT"
] | null | null | null | sane/dataset/__init__.py | AndreyBuyanov/Neuro-Evolutionary-Calculations | af6c06890a869a768ab6929f7d2ba6f12fb1b81a | [
"MIT"
] | null | null | null | from .dataset_loader import Cancer1Dataset
from .dataset_loader import Cancer2Dataset
from .dataset_loader import Cancer3Dataset
from .dataset_loader import Diabetes1Dataset
from .dataset_loader import Diabetes2Dataset
from .dataset_loader import Diabetes3Dataset
from .dataset_loader import Glass1Dataset
from .dataset_loader import Glass2Dataset
from .dataset_loader import Glass3Dataset
from .dataset_loader import Card1Dataset
from .dataset_loader import Card2Dataset
from .dataset_loader import Card3Dataset
from .dataset_loader import Flare1Dataset
from .dataset_loader import Flare2Dataset
from .dataset_loader import Flare3Dataset
from .dataset_loader import Gene1Dataset
from .dataset_loader import Gene2Dataset
from .dataset_loader import Gene3Dataset
from .dataset_loader import Heart1Dataset
from .dataset_loader import Heart2Dataset
from .dataset_loader import Heart3Dataset
from .dataset_loader import Horse1Dataset
from .dataset_loader import Horse2Dataset
from .dataset_loader import Horse3Dataset
from .dataset_loader import Mushroom1Dataset
from .dataset_loader import Mushroom2Dataset
from .dataset_loader import Mushroom3Dataset
from .dataset_loader import Soybean1Dataset
from .dataset_loader import Soybean2Dataset
from .dataset_loader import Soybean3Dataset
from .dataset_loader import Thyroid1Dataset
from .dataset_loader import Thyroid2Dataset
from .dataset_loader import Thyroid3Dataset
| 41.558824 | 44 | 0.883227 | 165 | 1,413 | 7.363636 | 0.224242 | 0.298765 | 0.461728 | 0.624691 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025761 | 0.093418 | 1,413 | 33 | 45 | 42.818182 | 0.922717 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2f3acf1de589e82fbac9f59cd6bffd9bf4046433 | 42 | py | Python | ptb/monitor_utils/.ipynb_checkpoints/__init__-checkpoint.py | minhtannguyen/MomentumRNN | f185c5432a52533bb1625e2162ec044651e07d9a | [
"CC0-1.0"
] | 13 | 2020-06-16T16:15:55.000Z | 2021-11-24T05:24:48.000Z | mnist-timit/utils/__init__.py | minhtannguyen/MomentumRNN | f185c5432a52533bb1625e2162ec044651e07d9a | [
"CC0-1.0"
] | 2 | 2020-12-07T08:26:01.000Z | 2020-12-26T13:01:53.000Z | mnist-timit/utils/__init__.py | minhtannguyen/MomentumRNN | f185c5432a52533bb1625e2162ec044651e07d9a | [
"CC0-1.0"
] | 5 | 2020-06-11T16:13:23.000Z | 2022-03-07T14:28:46.000Z | """Useful utils
"""
from .logger import *
| 10.5 | 21 | 0.642857 | 5 | 42 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 42 | 3 | 22 | 14 | 0.771429 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2f4567b146067ec96c0feb23f68a7bf99f74f299 | 194 | py | Python | custom_app/outpatient/doctype/outpatient_record/outpatient_record.py | benson-tseng/frappe_custom | c7830b1610c7c5a71e81d75f790410b919b2fbf2 | [
"MIT"
] | 1 | 2021-09-02T12:44:22.000Z | 2021-09-02T12:44:22.000Z | custom_app/outpatient/doctype/outpatient_record/outpatient_record.py | benson-tseng/frappe_custom | c7830b1610c7c5a71e81d75f790410b919b2fbf2 | [
"MIT"
] | null | null | null | custom_app/outpatient/doctype/outpatient_record/outpatient_record.py | benson-tseng/frappe_custom | c7830b1610c7c5a71e81d75f790410b919b2fbf2 | [
"MIT"
] | null | null | null | # Copyright (c) 2021, aaa and contributors
# For license information, please see license.txt
# import frappe
from frappe.model.document import Document
class OutpatientRecord(Document):
pass
| 21.555556 | 49 | 0.793814 | 25 | 194 | 6.16 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023952 | 0.139175 | 194 | 8 | 50 | 24.25 | 0.898204 | 0.525773 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
85eeafc7076dad3bc43cfc2e3e89c5d0cee7f864 | 369 | py | Python | app.py | RomanGorelsky/project_mid | e08fd3f0feea30ee3618cbc4ce99ff141f484ca9 | [
"MIT"
] | null | null | null | app.py | RomanGorelsky/project_mid | e08fd3f0feea30ee3618cbc4ce99ff141f484ca9 | [
"MIT"
] | null | null | null | app.py | RomanGorelsky/project_mid | e08fd3f0feea30ee3618cbc4ce99ff141f484ca9 | [
"MIT"
] | null | null | null | from flask import Flask, render_template
app = Flask(__name__)
@app.route('/')
def hello_world():
return render_template('tables.html')
@app.route('/charts')
def render_charts():
return render_template('charts.html')
@app.route('/tables')
def render_tables():
return render_template('tables.html')
if __name__ == '__main__':
app.run(debug=True) | 16.772727 | 41 | 0.701897 | 48 | 369 | 5 | 0.416667 | 0.233333 | 0.25 | 0.216667 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140921 | 369 | 22 | 42 | 16.772727 | 0.757098 | 0 | 0 | 0.153846 | 0 | 0 | 0.151351 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.076923 | 0.230769 | 0.538462 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c83c73d3759f3357a637d8f1c262a00f0201b30b | 174 | py | Python | pysid/identification/__init__.py | lima-84/pysid | 6038b9437e6f4bd23280c3541cb06c1cdf292d2a | [
"MIT"
] | 5 | 2019-09-08T17:22:04.000Z | 2022-01-08T18:09:56.000Z | pysid/identification/__init__.py | lima-84/pysid | 6038b9437e6f4bd23280c3541cb06c1cdf292d2a | [
"MIT"
] | null | null | null | pysid/identification/__init__.py | lima-84/pysid | 6038b9437e6f4bd23280c3541cb06c1cdf292d2a | [
"MIT"
] | 4 | 2019-09-08T17:49:23.000Z | 2022-01-10T11:44:50.000Z | #__init__.py for pysid
# Load all the functions by default
from .ivmethod import *
from .pemethod import *
from .tseries import *
from .accr import *
from .comcrit import *
| 19.333333 | 35 | 0.747126 | 25 | 174 | 5.04 | 0.68 | 0.31746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178161 | 174 | 8 | 36 | 21.75 | 0.881119 | 0.316092 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c85dcbeadf179b1120b03a3cb0b74bc8ad296b89 | 339 | py | Python | movie/start.py | fuey/spiders | 4a39b0a0c55b23968e515d75a6946f3e8dc0991c | [
"Apache-2.0"
] | null | null | null | movie/start.py | fuey/spiders | 4a39b0a0c55b23968e515d75a6946f3e8dc0991c | [
"Apache-2.0"
] | null | null | null | movie/start.py | fuey/spiders | 4a39b0a0c55b23968e515d75a6946f3e8dc0991c | [
"Apache-2.0"
] | null | null | null | from scrapy import cmdline
cmdline.execute("scrapy crawl douban".split())
# cmdline.execute("scrapy crawl imdb_movie_top250".split())
# cmdline.execute("scrapy crawl imdb_tv_top250 -s LOG_FILE=spider.log".split())
# cmdline.execute("scrapy crawl rotten_tomatoes_top100".split())
# cmdline.execute("scrapy crawl mtc_alltime_top".split())
| 42.375 | 79 | 0.781711 | 47 | 339 | 5.446809 | 0.468085 | 0.273438 | 0.390625 | 0.488281 | 0.5 | 0.265625 | 0 | 0 | 0 | 0 | 0 | 0.028754 | 0.076696 | 339 | 7 | 80 | 48.428571 | 0.789137 | 0.749263 | 0 | 0 | 0 | 0 | 0.2375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c0deee6815997981a4353d2570ca99f04eb912c3 | 83 | py | Python | tests/__init__.py | gauravmk/rq-dashboard | d760276d075cd5e7879127c0155cad874c55e6fb | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | tests/__init__.py | gauravmk/rq-dashboard | d760276d075cd5e7879127c0155cad874c55e6fb | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | tests/__init__.py | gauravmk/rq-dashboard | d760276d075cd5e7879127c0155cad874c55e6fb | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | from __future__ import absolute_import
from .basic import *
from .compat import *
| 16.6 | 38 | 0.795181 | 11 | 83 | 5.545455 | 0.545455 | 0.327869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156627 | 83 | 4 | 39 | 20.75 | 0.871429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c0e9fd5f20e04d63f2ad82e439285b5a6ab244f8 | 150 | py | Python | python/testData/refactoring/move/cleanupImportsAfterMove/before/src/main.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/refactoring/move/cleanupImportsAfterMove/before/src/main.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/refactoring/move/cleanupImportsAfterMove/before/src/main.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | from lib import B
from lib import A
from lib import D
from lib import C
class C1:
print(A)
class C2:
print(B)
class C3:
print(C, D)
| 8.823529 | 17 | 0.64 | 29 | 150 | 3.310345 | 0.413793 | 0.291667 | 0.541667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028302 | 0.293333 | 150 | 16 | 18 | 9.375 | 0.877358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.7 | 0.3 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8d06f9f3a3597bcd5b776daef5a8973d139fe355 | 916 | py | Python | arrays/numbers_even_number.py | wtlow003/leetcode-daily | e1d9c74b55e5b3106731a324d70a510e03b3b21f | [
"MIT"
] | null | null | null | arrays/numbers_even_number.py | wtlow003/leetcode-daily | e1d9c74b55e5b3106731a324d70a510e03b3b21f | [
"MIT"
] | null | null | null | arrays/numbers_even_number.py | wtlow003/leetcode-daily | e1d9c74b55e5b3106731a324d70a510e03b3b21f | [
"MIT"
] | 1 | 2022-01-05T17:52:41.000Z | 2022-01-05T17:52:41.000Z | """
1295. Find Numbers with Even Number of Digits
https://leetcode.com/problems/find-numbers-with-even-number-of-digits/
Given an array nums of integers, return how many of them contain an even number of digits.
Example:
Input: nums = [12,345,2,6,7896]
Output: 2
Explanation:
12 contains 2 digits (even number of digits).
345 contains 3 digits (odd number of digits).
2 contains 1 digit (odd number of digits).
6 contains 1 digit (odd number of digits).
7896 contains 4 digits (even number of digits).
Therefore only 12 and 7896 contain an even number of digits.
"""
# Runtime: 56ms
from typing import List

class Solution:
def findNumbers(self, nums: List[int]) -> int:
# Thought process:
# We need to find the length of each number within the array to check for even
# We can use boolean to help us count the number of True events -> len(i) == even
return sum([len(str(i)) % 2 == 0 for i in nums])
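
# A quick sanity check against the worked example in the docstring above. This is a
# self-contained restatement of the same one-liner (the class is repeated here only so
# the snippet runs on its own):
#
# ```python
# from typing import List
#
# class Solution:
#     def findNumbers(self, nums: List[int]) -> int:
#         # True counts as 1 in sum(), so this counts even-digit numbers
#         return sum(len(str(n)) % 2 == 0 for n in nums)
#
# print(Solution().findNumbers([12, 345, 2, 6, 7896]))  # -> 2
# ```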
| 31.586207 | 90 | 0.70524 | 154 | 916 | 4.194805 | 0.480519 | 0.123839 | 0.195046 | 0.167183 | 0.356037 | 0.281734 | 0.198142 | 0 | 0 | 0 | 0 | 0.057692 | 0.20524 | 916 | 28 | 91 | 32.714286 | 0.82967 | 0.816594 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
239818a07781a6623c9a9fbf09464b5e67a01a5e | 716 | py | Python | cebulany/sql_utils.py | hackerspace-silesia/cebulany-manager | 5965c39df15aca77a4a891a134762eb4230cbd51 | [
"MIT"
] | 4 | 2019-03-06T20:30:08.000Z | 2020-01-23T19:25:20.000Z | cebulany/sql_utils.py | hackerspace-silesia/cebulany-manager | 5965c39df15aca77a4a891a134762eb4230cbd51 | [
"MIT"
] | 9 | 2019-07-06T11:25:50.000Z | 2022-01-22T05:18:39.000Z | cebulany/sql_utils.py | hackerspace-silesia/cebulany-manager | 5965c39df15aca77a4a891a134762eb4230cbd51 | [
"MIT"
] | 3 | 2019-10-25T16:55:30.000Z | 2019-10-26T19:55:57.000Z | from sqlalchemy import func as sql_func, Date
from cebulany.models import db
def get_year_month_col(column: Date):
database_type = db.engine.dialect.name
if database_type == 'postgresql':
return sql_func.to_char(column, 'YYYY-MM')
if database_type == 'sqlite':
return sql_func.strftime('%Y-%m', column)
raise AttributeError(f'Unknown database type: {database_type}')
def get_year_col(column: Date):
database_type = db.engine.dialect.name
if database_type == 'postgresql':
return sql_func.to_char(column, 'YYYY')
if database_type == 'sqlite':
return sql_func.strftime('%Y', column)
raise AttributeError(f'Unknown database type: {database_type}')
| 29.833333 | 67 | 0.702514 | 98 | 716 | 4.928571 | 0.367347 | 0.248447 | 0.115942 | 0.086957 | 0.811594 | 0.811594 | 0.811594 | 0.811594 | 0.811594 | 0.401656 | 0 | 0 | 0.184358 | 716 | 23 | 68 | 31.130435 | 0.827055 | 0 | 0 | 0.5 | 0 | 0 | 0.175978 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
23d18e15e98aff62671326b817142b94c2e754be | 57,734 | py | Python | PNNL-Real-Time-Transactive-Energy/modificationScripts/supportFunctions/commercialLoads.py | GMLC-TDC/Use-Cases | 14d687fe04af731c1ee466e05acfd5813095660a | [
"BSD-3-Clause"
] | 1 | 2021-01-04T07:27:34.000Z | 2021-01-04T07:27:34.000Z | PNNL-Real-Time-Transactive-Energy/modificationScripts/supportFunctions/commercialLoads.py | GMLC-TDC/Use-Cases | 14d687fe04af731c1ee466e05acfd5813095660a | [
"BSD-3-Clause"
] | null | null | null | PNNL-Real-Time-Transactive-Energy/modificationScripts/supportFunctions/commercialLoads.py | GMLC-TDC/Use-Cases | 14d687fe04af731c1ee466e05acfd5813095660a | [
"BSD-3-Clause"
] | 2 | 2019-08-01T21:49:40.000Z | 2019-09-23T19:30:36.000Z | """
This file contains four functions to add commercial load types to a feeder based on the use flags and configuration defined
"""
##################################################################################################################
# Modified April 11, 2018 by Jacob Hansen (jacob.hansen@pnnl.gov)
# Created April 13, 2013 by Andy Fisher (andy.fisher@pnnl.gov)
# Copyright (c) 2013 Battelle Memorial Institute. The Government retains a paid-up nonexclusive, irrevocable
# worldwide license to reproduce, prepare derivative works, perform publicly and display publicly by or for the
# Government, including the right to distribute to other Government contractors.
##################################################################################################################
import math, random
def append_commercial(glmCaseDict, use_flags, commercial_dict, last_object_key, config_data):
"""
This function appends commercial houses to a feeder based on existing loads
Inputs
glmCaseDict - dictionary containing the full feeder
use_flags - dictionary that contains the use flags
commercial_dict - dictionary that contains information about commercial loads spots
last_object_key - Last object key
config_data - dictionary that contains the configurations of the feeder
Outputs
glmCaseDict - dictionary containing the full feeder
last_object_key - Last object key
"""
# Initialize pseudo-random seed
# random.seed(4)
# Phase ABC - convert to "commercial buildings"
# if number of "houses" > 15, then create a large office
# if number of "houses" < 15 but > 6, create a big box commercial
# else, create a residential strip mall
# If using Configuration.m and load classifications,
# building type is chosen according to classification
# regardless of number of "houses"
# Check if last_object_key exists in glmCaseDict
if last_object_key in glmCaseDict:
while last_object_key in glmCaseDict:
last_object_key += 1
if len(commercial_dict) > 0 and use_flags["use_commercial"] == 1:
# setup all of the line configurations we may need
glmCaseDict[last_object_key] = {"object": "triplex_line_conductor",
"name": "comm_line_cfg_cnd1",
"resistance": "0.48",
"geometric_mean_radius": "0.0158"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "triplex_line_conductor",
"name": "comm_line_cfg_cnd2",
"resistance": "0.48",
"geometric_mean_radius": "0.0158"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "triplex_line_conductor",
"name": "comm_line_cfg_cndN",
"resistance": "0.48",
"geometric_mean_radius": "0.0158"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "triplex_line_configuration",
"name": "commercial_line_config",
"conductor_1": "comm_line_cfg_cnd1",
"conductor_2": "comm_line_cfg_cnd2",
"conductor_N": "comm_line_cfg_cndN",
"insulation_thickness": "0.08",
"diameter": "0.522"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "line_spacing",
"name": "line_spacing_commABC",
"distance_AB": "53.19999999996 in",
"distance_BC": "53.19999999996 in",
"distance_AC": "53.19999999996 in",
"distance_AN": "69.80000000004 in",
"distance_BN": "69.80000000004 in",
"distance_CN": "69.80000000004 in"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "overhead_line_conductor",
"name": "overhead_line_conductor_comm",
"rating.summer.continuous": "443.0",
"geometric_mean_radius": "0.02270 ft",
"resistance": "0.05230"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "line_configuration",
"name": "line_configuration_commABC",
"conductor_A": "overhead_line_conductor_comm",
"conductor_B": "overhead_line_conductor_comm",
"conductor_C": "overhead_line_conductor_comm",
"conductor_N": "overhead_line_conductor_comm",
"spacing": "line_spacing_commABC"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "line_configuration",
"name": "line_configuration_commAB",
"conductor_A": "overhead_line_conductor_comm",
"conductor_B": "overhead_line_conductor_comm",
"conductor_N": "overhead_line_conductor_comm",
"spacing": "line_spacing_commABC"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "line_configuration",
"name": "line_configuration_commAC",
"conductor_A": "overhead_line_conductor_comm",
"conductor_C": "overhead_line_conductor_comm",
"conductor_N": "overhead_line_conductor_comm",
"spacing": "line_spacing_commABC"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "line_configuration",
"name": "line_configuration_commBC",
"conductor_B": "overhead_line_conductor_comm",
"conductor_C": "overhead_line_conductor_comm",
"conductor_N": "overhead_line_conductor_comm",
"spacing": "line_spacing_commABC"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "line_configuration",
"name": "line_configuration_commA",
"conductor_A": "overhead_line_conductor_comm",
"conductor_N": "overhead_line_conductor_comm",
"spacing": "line_spacing_commABC"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "line_configuration",
"name": "line_configuration_commB",
"conductor_B": "overhead_line_conductor_comm",
"conductor_N": "overhead_line_conductor_comm",
"spacing": "line_spacing_commABC"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "line_configuration",
"name": "line_configuration_commC",
"conductor_C": "overhead_line_conductor_comm",
"conductor_N": "overhead_line_conductor_comm",
"spacing": "line_spacing_commABC"}
last_object_key += 1
# initializations for the commercial "house" list
# print('iterating over commercial_dict')
for iii in commercial_dict:
total_comm_houses = commercial_dict[iii]['number_of_houses'][0] + commercial_dict[iii]['number_of_houses'][1] + commercial_dict[iii]['number_of_houses'][2]
my_phases = 'ABC'
# read through the phases and do some bit-wise math
has_phase_A = 0
has_phase_B = 0
has_phase_C = 0
ph = ''
if "A" in commercial_dict[iii]['phases']:
has_phase_A = 1
ph += 'A'
if "B" in commercial_dict[iii]['phases']:
has_phase_B = 1
ph += 'B'
if "C" in commercial_dict[iii]['phases']:
has_phase_C = 1
ph += 'C'
no_of_phases = has_phase_A + has_phase_B + has_phase_C
if no_of_phases == 0:
raise Exception('The phases in commercial buildings did not add up right.')
# name of original load object
if commercial_dict[iii]['parent'] != 'None':
my_name = commercial_dict[iii]['parent'] #+ '_' + commercial_dict[iii]['name']
my_parent = commercial_dict[iii]['parent']
else:
my_name = commercial_dict[iii]['name']
my_parent = commercial_dict[iii]['name']
nom_volt = int(float(commercial_dict[iii]['nom_volt']))
# Same for everyone
# air_heat_fraction = 0
# mass_solar_gain_fraction = 0.5
# mass_internal_gain_fraction = 0.5
fan_type = 'ONE_SPEED'
heat_type = 'GAS'
cool_type = 'ELECTRIC'
aux_type = 'NONE'
# cooling_design_temperature = 100
# heating_design_temperature = 1
# over_sizing_factor = 0.3
no_of_stories = 1
surface_heat_trans_coeff = 0.59
# Office building - must have all three phases and enough load for 15 zones
# *or* load is classified to be office buildings
if total_comm_houses > 15 and no_of_phases == 3:
no_of_offices = int(round(total_comm_houses / 15))
glmCaseDict[last_object_key] = {"object": "transformer_configuration",
"name": "CTTF_config_A_{:s}".format(my_name),
"connect_type": "SINGLE_PHASE_CENTER_TAPPED",
"install_type": "POLETOP",
"impedance": "0.00033+0.0022j",
"shunt_impedance": "10000+10000j",
"primary_voltage": "{:.3f}".format(nom_volt),
"secondary_voltage": "120",
"powerA_rating": "100 kVA"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "transformer_configuration",
"name": "CTTF_config_B_{:s}".format(my_name),
"connect_type": "SINGLE_PHASE_CENTER_TAPPED",
"install_type": "POLETOP",
"impedance": "0.00033+0.0022j",
"shunt_impedance": "10000+10000j",
"primary_voltage": "{:.3f}".format(nom_volt),
"secondary_voltage": "120",
"powerB_rating": "100 kVA"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "transformer_configuration",
"name": "CTTF_config_C_{:s}".format(my_name),
"connect_type": "SINGLE_PHASE_CENTER_TAPPED",
"install_type": "POLETOP",
"impedance": "0.00033+0.0022j",
"shunt_impedance": "10000+10000j",
"primary_voltage": "{:.3f}".format(nom_volt),
"secondary_voltage": "120",
"powerC_rating": "100 kVA"}
last_object_key += 1
# print('iterating over number of offices')
for jjj in range(no_of_offices):
floor_area_choose = 40000. * (0.5 * random.random() + 0.5) # up to -50% #config_data.floor_area
ceiling_height = 13.
airchange_per_hour = 0.69
Rroof = 19.
Rwall = 18.3
Rfloor = 46.
Rdoors = 3.
glazing_layers = 'TWO'
glass_type = 'GLASS'
glazing_treatment = 'LOW_S'
window_frame = 'NONE'
int_gains = 3.24 # W/sf
glmCaseDict[last_object_key] = {"object": "overhead_line",
"from": "{:s}".format(my_parent),
"to": "{:s}_office_meter{:.0f}".format(my_name, jjj),
"phases": "{:s}".format(commercial_dict[iii]['phases']),
"length": "50ft",
"configuration": "line_configuration_comm{:s}".format(ph)}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "meter",
"phases": "{:s}".format(commercial_dict[iii]['phases']),
"name": "{:s}_office_meter{:.0f}".format(my_name, jjj),
"groupid": "Commercial_Meter",
"nominal_voltage": "{:f}".format(nom_volt)}
last_object_key += 1
# for phind = 1:3 #for each of three floors (5 zones each)
# for phind = 1:no_of_phases #jlh
for phind in range(1,4):
glmCaseDict[last_object_key] = {"object": "transformer",
"name": "{:s}_CTTF_{:s}_{:.0f}".format(my_name, ph[phind-1], jjj),
"phases": "{:s}S".format(ph[phind-1]),
"from": "{:s}_office_meter{:.0f}".format(my_name, jjj),
"to": "{:s}_tm_{:s}_{:.0f}".format(my_name, ph[phind-1], jjj),
"groupid": "Distribution_Trans",
"configuration": "CTTF_config_{:s}_{:s}".format(ph[phind-1], my_name)}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "triplex_meter",
"name": "{:s}_tm_{:s}_{:.0f}".format(my_name, ph[phind-1], jjj),
"phases": "{:s}S".format(ph[phind-1]),
"nominal_voltage": "120"}
last_object_key += 1
# skew each office zone identically per floor
sk = round(2 * random.normalvariate(0, 1))
skew_value = config_data["commercial_skew_std"] * sk
if skew_value < -config_data["commercial_skew_max"]:
skew_value = -config_data["commercial_skew_max"]
elif skew_value > config_data["commercial_skew_max"]:
skew_value = config_data["commercial_skew_max"]
for zoneind in range(1, 6):
total_depth = math.sqrt(floor_area_choose / (3. * 1.5))
total_width = 1.5 * total_depth
if phind < 3:
exterior_ceiling_fraction = 0
else:
exterior_ceiling_fraction = 1
if zoneind == 5:
exterior_wall_fraction = 0
w = total_depth - 30.
d = total_width - 30.
floor_area = w * d
aspect_ratio = w / d
else:
window_wall_ratio = 0.33
if zoneind == 1 or zoneind == 3:
w = total_width - 15.
d = 15.
floor_area = w * d
exterior_wall_fraction = w / (2. * (w + d))
aspect_ratio = w / d
else:
w = total_depth - 15.
d = 15.
floor_area = w * d
exterior_wall_fraction = w / (2. * (w + d))
aspect_ratio = w / d
if phind > 1:
exterior_floor_fraction = 0
else:
exterior_floor_fraction = w / (2. * (w + d)) / (floor_area / (floor_area_choose / 3.))
thermal_mass_per_floor_area = 3.9 * (0.5 + 1. * random.random()) # +/- 50%
interior_exterior_wall_ratio = (floor_area * (2. - 1.) + 0. * 20.) / (no_of_stories * ceiling_height * 2. * (w + d)) - 1. + window_wall_ratio * exterior_wall_fraction
no_of_doors = 0.1 # will round to zero
init_temp = 68. + 4. * random.random()
os_rand = config_data["over_sizing_factor"] * (0.8 + 0.4 * random.random())
COP_A = config_data["cooling_COP"] * (0.8 + 0.4 * random.random())
glmCaseDict[last_object_key] = {"object": "house",
"name": "office{:s}_{:s}{:.0f}_zone{:.0f}".format(my_name, my_phases[phind-1], jjj, zoneind),
"parent": "{:s}_tm_{:s}_{:.0f}".format(my_name, my_phases[phind-1], jjj),
"groupid": "Commercial",
"motor_model" : "BASIC",
"schedule_skew": "{:.0f}".format(skew_value),
"floor_area": "{:.0f}".format(floor_area),
"design_internal_gains": "{:.0f}".format(int_gains * floor_area * 3.413),
"number_of_doors": "{:.0f}".format(no_of_doors),
"aspect_ratio": "{:.2f}".format(aspect_ratio),
"total_thermal_mass_per_floor_area": "{:1.2f}".format(thermal_mass_per_floor_area),
"interior_surface_heat_transfer_coeff": "{:1.2f}".format(surface_heat_trans_coeff),
"interior_exterior_wall_ratio": "{:2.1f}".format(interior_exterior_wall_ratio),
"exterior_floor_fraction": "{:.3f}".format(exterior_floor_fraction),
"exterior_ceiling_fraction": "{:.3f}".format(exterior_ceiling_fraction),
"Rwall": "{:2.1f}".format(Rwall),
"Rroof": "{:2.1f}".format(Rroof),
"Rfloor": "{:.2f}".format(Rfloor),
"Rdoors": "{:2.1f}".format(Rdoors),
"exterior_wall_fraction": "{:.2f}".format(exterior_wall_fraction),
"glazing_layers": "{:s}".format(glazing_layers),
"glass_type": "{:s}".format(glass_type),
"glazing_treatment": "{:s}".format(glazing_treatment),
"window_frame": "{:s}".format(window_frame),
"airchange_per_hour": "{:.2f}".format(airchange_per_hour),
"window_wall_ratio": "{:0.3f}".format(window_wall_ratio),
"heating_system_type": "{:s}".format(heat_type),
"auxiliary_system_type": "{:s}".format(aux_type),
"fan_type": "{:s}".format(fan_type),
"cooling_system_type": "{:s}".format(cool_type),
"air_temperature": "{:.2f}".format(init_temp),
"mass_temperature": "{:.2f}".format(init_temp),
"over_sizing_factor": "{:.1f}".format(os_rand),
"cooling_COP": "{:2.2f}".format(COP_A),
"cooling_setpoint" : "office_cooling",
"heating_setpoint" : "office_heating"}
parent_house = glmCaseDict[last_object_key]
# if we do not use schedules we will assume the initial temp is the setpoint
if use_flags['use_schedules'] == 0:
del glmCaseDict[last_object_key]['cooling_setpoint']
del glmCaseDict[last_object_key]['heating_setpoint']
last_object_key += 1
# Need all of the "appliances"
# Lights
adj_lights = (0.9 + 0.1 * random.random()) * floor_area / 1000. # randomize 10%, then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "lights_{:s}_{:s}_{:.0f}_zone{:.0f}".format(my_name, my_phases[phind-1], jjj, zoneind),
"parent": parent_house["name"],
"groupid": "Lights",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "{:.2f}".format(config_data["c_pfrac"]),
"impedance_fraction": "{:.2f}".format(config_data["c_zfrac"]),
"current_fraction": "{:.2f}".format(config_data["c_ifrac"]),
"power_pf": "{:.2f}".format(config_data["c_p_pf"]),
"current_pf": "{:.2f}".format(config_data["c_i_pf"]),
"impedance_pf": "{:.2f}".format(config_data["c_z_pf"]),
"base_power": "office_lights*{:.2f}".format(adj_lights)}
# if we do not use schedules we will assume adj_lights is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_lights)
last_object_key += 1
# Plugs
adj_plugs = (0.9 + 0.2 * random.random()) * floor_area / 1000. # randomize 20%, then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "plugs_{:s}_{:s}_{:.0f}_zone{:.0f}".format(my_name, my_phases[phind-1], jjj, zoneind),
"parent": parent_house["name"],
"groupid": "Plugs",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "{:.2f}".format(config_data["c_pfrac"]),
"impedance_fraction": "{:.2f}".format(config_data["c_zfrac"]),
"current_fraction": "{:.2f}".format(config_data["c_ifrac"]),
"power_pf": "{:.2f}".format(config_data["c_p_pf"]),
"current_pf": "{:.2f}".format(config_data["c_i_pf"]),
"impedance_pf": "{:.2f}".format(config_data["c_z_pf"]),
"base_power": "office_plugs*{:.2f}".format(adj_plugs)}
# if we do not use schedules we will assume adj_plugs is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_plugs)
last_object_key += 1
# Gas Waterheater
adj_gas = (0.9 + 0.2 * random.random()) * floor_area / 1000. # randomize 20%, then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "wh_{:s}_{:s}_{:.0f}_zone{:.0f}".format(my_name, my_phases[phind-1], jjj, zoneind),
"parent": parent_house["name"],
"groupid": "Gas_waterheater",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "0.0",
"impedance_fraction": "0.0",
"current_fraction": "0.0",
"power_pf": "1.0",
"base_power": "office_gas*{:.2f}".format(adj_gas)}
# if we do not use schedules we will assume adj_gas is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_gas)
last_object_key += 1
# Exterior Lighting
adj_ext = (0.9 + 0.1 * random.random()) * floor_area / 1000. # randomize 10%, then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "ext_{:s}_{:s}_{:.0f}_zone{:.0f}".format(my_name, my_phases[phind-1], jjj, zoneind),
"parent": parent_house["name"],
"groupid": "Exterior_lighting",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "0.0",
"power_fraction": "{:.2f}".format(config_data["c_pfrac"]),
"impedance_fraction": "{:.2f}".format(config_data["c_zfrac"]),
"current_fraction": "{:.2f}".format(config_data["c_ifrac"]),
"power_pf": "{:.2f}".format(config_data["c_p_pf"]),
"current_pf": "{:.2f}".format(config_data["c_i_pf"]),
"impedance_pf": "{:.2f}".format(config_data["c_z_pf"]),
"base_power": "office_exterior*{:.2f}".format(adj_ext)}
# if we do not use schedules we will assume adj_ext is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_ext)
last_object_key += 1
# Occupancy
adj_occ = (0.9 + 0.1 * random.random()) * floor_area / 1000. # randomize 10# then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "occ_{:s}_{:s}_{:.0f}_zone{:.0f}".format(my_name, my_phases[phind-1], jjj, zoneind),
"parent": parent_house["name"],
"groupid": "Occupancy",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "0.0",
"impedance_fraction": "0.0",
"current_fraction": "0.0",
"power_pf": "1.0",
"base_power": "office_occupancy*{:.2f}".format(adj_occ)}
# if we do not use schedules we will assume adj_occ is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_occ)
last_object_key += 1
# end of house object
# end # office zones (1-5)
# end #office floors (1-3)
# end # total offices needed
# print('finished iterating over number of offices')
# Big box - has at least 2 phases and enough load for 6 zones
# *or* load is classified to be big boxes
elif total_comm_houses > 6 and no_of_phases >= 2:
no_of_bigboxes = int(round(total_comm_houses / 6.))
if has_phase_A == 1:
glmCaseDict[last_object_key] = {"object": "transformer_configuration",
"name": "CTTF_config_A_{:s}".format(my_name),
"connect_type": "SINGLE_PHASE_CENTER_TAPPED",
"install_type": "POLETOP",
"impedance": "0.00033+0.0022j",
"shunt_impedance": "10000+10000j",
"primary_voltage": "{:.3f}".format(nom_volt),
"secondary_voltage": "120",
"powerA_rating": "100 kVA"}
last_object_key += 1
if has_phase_B == 1:
glmCaseDict[last_object_key] = {"object": "transformer_configuration",
"name": "CTTF_config_B_{:s}".format(my_name),
"connect_type": "SINGLE_PHASE_CENTER_TAPPED",
"install_type": "POLETOP",
"impedance": "0.00033+0.0022j",
"shunt_impedance": "10000+10000j",
"primary_voltage": "{:.3f}".format(nom_volt),
"secondary_voltage": "120",
"powerB_rating": "100 kVA"}
last_object_key += 1
if has_phase_C == 1:
glmCaseDict[last_object_key] = {"object": "transformer_configuration",
"name": "CTTF_config_C_{:s}".format(my_name),
"connect_type": "SINGLE_PHASE_CENTER_TAPPED",
"install_type": "POLETOP",
"impedance": "0.00033+0.0022j",
"shunt_impedance": "10000+10000j",
"primary_voltage": "{:.3f}".format(nom_volt),
"secondary_voltage": "120",
"powerC_rating": "100 kVA"}
last_object_key += 1
# print('iterating over number of big boxes')
for jjj in range(no_of_bigboxes):
floor_area_choose = 20000. * (0.5 + 1. * random.random()) # +/- 50#
ceiling_height = 14.
airchange_per_hour = 1.5
Rroof = 19.
Rwall = 18.3
Rfloor = 46.
Rdoors = 3.
glazing_layers = 'TWO'
glass_type = 'GLASS'
glazing_treatment = 'LOW_S'
window_frame = 'NONE'
int_gains = 3.6 # W/sf
glmCaseDict[last_object_key] = {"object": "overhead_line",
"from": "{:s}".format(my_parent),
"to": "{:s}_bigbox_meter{:.0f}".format(my_name, jjj),
"phases": "{:s}".format(commercial_dict[iii]["phases"]),
"length": "50ft",
"configuration": "line_configuration_comm{:s}".format(ph)}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "meter",
"phases": "{:s}".format(commercial_dict[iii]["phases"]),
"name": "{:s}_bigbox_meter{:.0f}".format(my_name, jjj),
"groupid": "Commercial_Meter",
"nominal_voltage": "{:f}".format(nom_volt)}
last_object_key += 1
# skew each big box zone identically
sk = round(2 * random.normalvariate(0, 1))
skew_value = config_data["commercial_skew_std"] * sk
if skew_value < -config_data["commercial_skew_max"]:
skew_value = -config_data["commercial_skew_max"]
elif skew_value > config_data["commercial_skew_max"]:
skew_value = config_data["commercial_skew_max"]
total_index = 0
for phind in range(no_of_phases):
glmCaseDict[last_object_key] = {"object": "transformer",
"name": "{:s}_CTTF_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"phases": "{:s}S".format(ph[phind]),
"from": "{:s}_bigbox_meter{:.0f}".format(my_name, jjj),
"to": "{:s}_tm_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"groupid": "Distribution_Trans",
"configuration": "CTTF_config_{:s}_{:s}".format(ph[phind],
my_name)}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "triplex_meter",
"name": "{:s}_tm_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"phases": "{:s}S".format(ph[phind]),
"nominal_voltage": "120"}
last_object_key += 1
zones_per_phase = int(6. / no_of_phases)
for zoneind in range(1,zones_per_phase+1):
total_index += 1
thermal_mass_per_floor_area = 3.9 * (0.8 + 0.4 * random.random()) # +/- 20#
floor_area = floor_area_choose / 6.
exterior_ceiling_fraction = 1.
aspect_ratio = 1.28301275561855
total_depth = math.sqrt(floor_area_choose / aspect_ratio)
total_width = aspect_ratio * total_depth
d = total_width / 3.
w = total_depth / 2.
if total_index == 2 or total_index == 5:
exterior_wall_fraction = d / (2. * (d + w))
exterior_floor_fraction = (0. + d) / (2. * (total_width + total_depth)) / (floor_area / floor_area_choose)
else:
exterior_wall_fraction = 0.5
exterior_floor_fraction = (w + d) / (2. * (total_width + total_depth)) / (floor_area / floor_area_choose)
if total_index == 2:
window_wall_ratio = 0.76
else:
window_wall_ratio = 0.
if total_index < 4:
no_of_doors = 0.1 # this will round to 0
elif total_index == 4 or total_index == 6:
no_of_doors = 1.
else:
no_of_doors = 24.
interior_exterior_wall_ratio = (floor_area * (2. - 1.) + no_of_doors * 20.) / (no_of_stories * ceiling_height * 2. * (w + d)) - 1. + window_wall_ratio * exterior_wall_fraction
if total_index > 6:
raise Exception('Something wrong in the indexing of the big box zones.')
init_temp = 68. + 4. * random.random()
os_rand = config_data["over_sizing_factor"] * (0.8 + 0.4 * random.random())
COP_A = config_data["cooling_COP"] * (0.8 + 0.4 * random.random())
glmCaseDict[last_object_key] = {"object": "house",
"name": "bigbox{:s}_{:s}{:.0f}_zone{:.0f}".format(my_name, ph[phind], jjj, zoneind),
"groupid": "Commercial",
"motor_model": "BASIC",
"schedule_skew": "{:.0f}".format(skew_value),
"parent": "{:s}_tm_{:s}_{:.0f}".format(my_name, ph[phind],jjj),
"floor_area": "{:.0f}".format(floor_area),
"design_internal_gains": "{:.0f}".format(int_gains * floor_area * 3.413),
"number_of_doors": "{:.0f}".format(no_of_doors),
"aspect_ratio": "{:.2f}".format(aspect_ratio),
"total_thermal_mass_per_floor_area": "{:1.2f}".format(thermal_mass_per_floor_area),
"interior_surface_heat_transfer_coeff": "{:1.2f}".format(surface_heat_trans_coeff),
"interior_exterior_wall_ratio": "{:2.1f}".format(interior_exterior_wall_ratio),
"exterior_floor_fraction": "{:.3f}".format(exterior_floor_fraction),
"exterior_ceiling_fraction": "{:.3f}".format(exterior_ceiling_fraction),
"Rwall": "{:2.1f}".format(Rwall),
"Rroof": "{:2.1f}".format(Rroof),
"Rfloor": "{:.2f}".format(Rfloor),
"Rdoors": "{:2.1f}".format(Rdoors),
"exterior_wall_fraction": "{:.2f}".format(exterior_wall_fraction),
"glazing_layers": "{:s}".format(glazing_layers),
"glass_type": "{:s}".format(glass_type),
"glazing_treatment": "{:s}".format(glazing_treatment),
"window_frame": "{:s}".format(window_frame),
"airchange_per_hour": "{:.2f}".format(airchange_per_hour),
"window_wall_ratio": "{:0.3f}".format(window_wall_ratio),
"heating_system_type": "{:s}".format(heat_type),
"auxiliary_system_type": "{:s}".format(aux_type),
"fan_type": "{:s}".format(fan_type),
"cooling_system_type": "{:s}".format(cool_type),
"air_temperature": "{:.2f}".format(init_temp),
"mass_temperature": "{:.2f}".format(init_temp),
"over_sizing_factor": "{:.1f}".format(os_rand),
"cooling_COP": "{:2.2f}".format(COP_A),
"cooling_setpoint": "bigbox_cooling",
"heating_setpoint": "bigbox_heating"}
parent_house = glmCaseDict[last_object_key] # cache this for a second...
# if we do not use schedules we will assume the initial temp is the setpoint
if use_flags['use_schedules'] == 0:
del glmCaseDict[last_object_key]['cooling_setpoint']
del glmCaseDict[last_object_key]['heating_setpoint']
last_object_key += 1
# Need all of the "appliances"
# Lights
adj_lights = 1.2 * (0.9 + 0.1 * random.random()) * floor_area / 1000. # randomize 10# then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "lights_{:s}_{:s}_{:.0f}_zone{:.0f}".format(my_name, ph[phind], jjj, zoneind),
"parent": parent_house["name"],
"groupid": "Lights",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "{:.2f}".format(config_data["c_pfrac"]),
"impedance_fraction": "{:.2f}".format(config_data["c_zfrac"]),
"current_fraction": "{:.2f}".format(config_data["c_ifrac"]),
"power_pf": "{:.2f}".format(config_data["c_p_pf"]),
"current_pf": "{:.2f}".format(config_data["c_i_pf"]),
"impedance_pf": "{:.2f}".format(config_data["c_z_pf"]),
"base_power": "bigbox_lights*{:.2f}".format(adj_lights)}
# if we do not use schedules we will assume adj_lights is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_lights)
last_object_key += 1
# Plugs
adj_plugs = (0.9 + 0.2 * random.random()) * floor_area / 1000. # randomize 20# then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "plugs_{:s}_{:s}_{:.0f}_zone{:.0f}".format(my_name, ph[phind], jjj, zoneind),
"parent": parent_house["name"],
"groupid": "Plugs",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "{:.2f}".format(config_data["c_pfrac"]),
"impedance_fraction": "{:.2f}".format(config_data["c_zfrac"]),
"current_fraction": "{:.2f}".format(config_data["c_ifrac"]),
"power_pf": "{:.2f}".format(config_data["c_p_pf"]),
"current_pf": "{:.2f}".format(config_data["c_i_pf"]),
"impedance_pf": "{:.2f}".format(config_data["c_z_pf"]),
"base_power": "bigbox_plugs*{:.2f}".format(adj_plugs)}
# if we do not use schedules we will assume adj_plugs is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_plugs)
last_object_key += 1
# Gas Waterheater
adj_gas = (0.9 + 0.2 * random.random()) * floor_area / 1000. # randomize 20# then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "wh_{:s}_{:s}_{:.0f}_zone{:.0f}".format(my_name, ph[phind], jjj, zoneind),
"parent": parent_house["name"],
"groupid": "Gas_waterheater",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "0.0",
"impedance_fraction": "0.0",
"current_fraction": "0.0",
"power_pf": "1.0",
"base_power": "bigbox_gas*{:.2f}".format(adj_gas)}
# if we do not use schedules we will assume adj_gas is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_gas)
last_object_key += 1
# Exterior Lighting
adj_ext = (0.9 + 0.1 * random.random()) * floor_area / 1000. # randomize 10# then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "ext_{:s}_{:s}_{:.0f}_zone{:.0f}".format(my_name, ph[phind], jjj, zoneind),
"parent": parent_house["name"],
"groupid": "Exterior_lighting",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "0.0",
"power_fraction": "{:.2f}".format(config_data["c_pfrac"]),
"impedance_fraction": "{:.2f}".format(config_data["c_zfrac"]),
"current_fraction": "{:.2f}".format(config_data["c_ifrac"]),
"power_pf": "{:.2f}".format(config_data["c_p_pf"]),
"current_pf": "{:.2f}".format(config_data["c_i_pf"]),
"impedance_pf": "{:.2f}".format(config_data["c_z_pf"]),
"base_power": "bigbox_exterior*{:.2f}".format(adj_ext)}
# if we do not use schedules we will assume adj_ext is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_ext)
last_object_key += 1
# Occupancy
adj_occ = (0.9 + 0.1 * random.random()) * floor_area / 1000. # randomize 10# then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "occ_{:s}_{:s}_{:.0f}_zone{:.0f}".format(my_name, ph[phind], jjj, zoneind),
"parent": parent_house["name"],
"groupid": "Occupancy",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "0.0",
"impedance_fraction": "0.0",
"current_fraction": "0.0",
"power_pf": "1.0",
"base_power": "bigbox_occupancy*{:.2f}".format(adj_occ)}
# if we do not use schedules we will assume adj_occ is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_occ)
last_object_key += 1
# end #zone index
# end #phase index
# end #number of big boxes
# print('finished iterating over number of big boxes')
# Strip mall
elif total_comm_houses > 0: # unlike for big boxes and offices, if total house number = 0, just don't populate anything.
no_of_strip = total_comm_houses
strip_per_phase = int(math.ceil(no_of_strip / no_of_phases))
if has_phase_A == 1:
glmCaseDict[last_object_key] = {"object": "transformer_configuration",
"name": "CTTF_config_A_{:s}".format(my_name),
"connect_type": "SINGLE_PHASE_CENTER_TAPPED",
"install_type": "POLETOP",
"impedance": "0.00033+0.0022j",
"shunt_impedance": "100000+100000j",
"primary_voltage": "{:.3f}".format(nom_volt),
"secondary_voltage": "120",
"powerA_rating": "{:.0f} kVA".format(100. * strip_per_phase)}
last_object_key += 1
if has_phase_B == 1:
glmCaseDict[last_object_key] = {"object": "transformer_configuration",
"name": "CTTF_config_B_{:s}".format(my_name),
"connect_type": "SINGLE_PHASE_CENTER_TAPPED",
"install_type": "POLETOP",
"impedance": "0.00033+0.0022j",
"shunt_impedance": "100000+100000j",
"primary_voltage": "{:.3f}".format(nom_volt),
"secondary_voltage": "120",
"powerB_rating": "{:.0f} kVA".format(100. * strip_per_phase)}
last_object_key += 1
if has_phase_C == 1:
glmCaseDict[last_object_key] = {"object": "transformer_configuration",
"name": "CTTF_config_C_{:s}".format(my_name),
"connect_type": "SINGLE_PHASE_CENTER_TAPPED",
"install_type": "POLETOP",
"impedance": "0.00033+0.0022j",
"shunt_impedance": "100000+100000j",
"primary_voltage": "{:.3f}".format(nom_volt),
"secondary_voltage": "120",
"powerC_rating": "{:.0f} kVA".format(100. * strip_per_phase)}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "overhead_line",
"from": "{:s}".format(my_parent),
"to": "{:s}_strip_node".format(my_name),
"phases": "{:s}".format(commercial_dict[iii]["phases"]),
"length": "50ft",
"configuration": "line_configuration_comm{:s}".format(ph)}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "node",
"phases": "{:s}".format(commercial_dict[iii]["phases"]),
"name": "{:s}_strip_node".format(my_name),
"nominal_voltage": "{:f}".format(nom_volt)}
last_object_key += 1
# print('iterating over number of stripmalls')
for phind in range(no_of_phases):
floor_area_choose = 2400. * (0.7 + 0.6 * random.random()) # +/- 30#
# ceiling_height = 12
airchange_per_hour = 1.76
Rroof = 19.
Rwall = 18.3
Rfloor = 40.
Rdoors = 3.
glazing_layers = 'TWO'
glass_type = 'GLASS'
glazing_treatment = 'LOW_S'
window_frame = 'NONE'
int_gains = 3.6 # W/sf
thermal_mass_per_floor_area = 3.9 * (0.5 + 1. * random.random()) # +/- 50#
exterior_ceiling_fraction = 1.
for jjj in range(1, strip_per_phase+1):
# skew each office zone identically per floor
sk = round(2 * random.normalvariate(0, 1))
skew_value = config_data["commercial_skew_std"] * sk
if skew_value < -config_data["commercial_skew_max"]:
skew_value = -config_data["commercial_skew_max"]
elif skew_value > config_data["commercial_skew_max"]:
skew_value = config_data["commercial_skew_max"]
if jjj == 1 or jjj == (math.floor(strip_per_phase / 2.) + 1.):
floor_area = floor_area_choose
aspect_ratio = 1.5
window_wall_ratio = 0.05
# if (j == jjj):
# exterior_wall_fraction = 0.7;
# exterior_floor_fraction = 1.4;
# else:
exterior_wall_fraction = 0.4
exterior_floor_fraction = 0.8
interior_exterior_wall_ratio = -0.05
else:
floor_area = floor_area_choose / 2.
aspect_ratio = 3.0
window_wall_ratio = 0.03
if jjj == strip_per_phase:
exterior_wall_fraction = 0.63
exterior_floor_fraction = 2.
else:
exterior_wall_fraction = 0.25
exterior_floor_fraction = 0.8
interior_exterior_wall_ratio = -0.40
no_of_doors = 1
glmCaseDict[last_object_key] = {"object": "transformer",
"name": "{:s}_CTTF_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"phases": "{:s}S".format(ph[phind]),
"from": "{:s}_strip_node".format(my_name),
"to": "{:s}_tn_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"groupid": "Distribution_Trans'",
"configuration": "CTTF_config_{:s}_{:s}".format(ph[phind], my_name)}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "triplex_node",
"name": "{:s}_tn_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"phases": "{:s}S".format(ph[phind]),
"nominal_voltage": "120"}
last_object_key += 1
glmCaseDict[last_object_key] = {"object": "triplex_meter",
"parent": "{:s}_tn_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"name": "{:s}_tm_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"phases": "{:s}S".format(ph[phind]),
"groupid": "Commercial_Meter",
# was 'real(my_var), imag(my_var)', but it's an int above
"nominal_voltage": "120"}
last_object_key += 1
init_temp = 68. + 4. * random.random()
os_rand = config_data["over_sizing_factor"] * (0.8 + 0.4 * random.random())
COP_A = config_data["cooling_COP"] * (0.8 + 0.4 * random.random())
glmCaseDict[last_object_key] = {"object": "house",
"name": "stripmall{:s}_{:s}{:.0f}".format(my_name, ph[phind], jjj),
"groupid": "Commercial",
"motor_model": "BASIC",
"schedule_skew": "{:.0f}".format(skew_value),
"parent": "{:s}_tm_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"floor_area": "{:.0f}".format(floor_area),
"design_internal_gains": "{:.0f}".format(int_gains * floor_area * 3.413),
"number_of_doors": "{:.0f}".format(no_of_doors),
"aspect_ratio": "{:.2f}".format(aspect_ratio),
"total_thermal_mass_per_floor_area": "{:1.2f}".format(thermal_mass_per_floor_area),
"interior_surface_heat_transfer_coeff": "{:1.2f}".format(surface_heat_trans_coeff),
"interior_exterior_wall_ratio": "{:2.1f}".format(interior_exterior_wall_ratio),
"exterior_floor_fraction": "{:.3f}".format(exterior_floor_fraction),
"exterior_ceiling_fraction": "{:.3f}".format(exterior_ceiling_fraction),
"Rwall": "{:2.1f}".format(Rwall),
"Rroof": "{:2.1f}".format(Rroof),
"Rfloor": "{:.2f}".format(Rfloor),
"Rdoors": "{:2.1f}".format(Rdoors),
"exterior_wall_fraction": "{:.2f}".format(exterior_wall_fraction),
"glazing_layers": "{:s}".format(glazing_layers),
"glass_type": "{:s}".format(glass_type),
"glazing_treatment": "{:s}".format(glazing_treatment),
"window_frame": "{:s}".format(window_frame),
"airchange_per_hour": "{:.2f}".format(airchange_per_hour),
"window_wall_ratio": "{:0.3f}".format(window_wall_ratio),
"heating_system_type": "{:s}".format(heat_type),
"auxiliary_system_type": "{:s}".format(aux_type),
"fan_type": "{:s}".format(fan_type),
"cooling_system_type": "{:s}".format(cool_type),
"air_temperature": "{:.2f}".format(init_temp),
"mass_temperature": "{:.2f}".format(init_temp),
"over_sizing_factor": "{:.1f}".format(os_rand),
"cooling_COP": "{:2.2f}".format(COP_A),
"cooling_setpoint": "stripmall_cooling",
"heating_setpoint": "stripmall_heating"}
parent_house = glmCaseDict[last_object_key]
# if we do not use schedules we will assume the initial temp is the setpoint
if use_flags['use_schedules'] == 0:
del glmCaseDict[last_object_key]['cooling_setpoint']
del glmCaseDict[last_object_key]['heating_setpoint']
last_object_key += 1
# Need all of the "appliances"
# Lights
adj_lights = (0.8 + 0.4 * random.random()) * floor_area / 1000. # randomize 40# then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "lights_{:s}_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"parent": parent_house["name"],
"groupid": "Lights",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "{:.2f}".format(config_data["c_pfrac"]),
"impedance_fraction": "{:.2f}".format(config_data["c_zfrac"]),
"current_fraction": "{:.2f}".format(config_data["c_ifrac"]),
"power_pf": "{:.2f}".format(config_data["c_p_pf"]),
"current_pf": "{:.2f}".format(config_data["c_i_pf"]),
"impedance_pf": "{:.2f}".format(config_data["c_z_pf"]),
"base_power": "stripmall_lights*{:.2f}".format(adj_lights)}
# if we do not use schedules we will assume adj_lights is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_lights)
last_object_key += 1
# Plugs
adj_plugs = (0.8 + 0.4 * random.random()) * floor_area / 1000. # randomize 40# then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "plugs_{:s}_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"parent": parent_house["name"],
"groupid": "Plugs",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "{:.2f}".format(config_data["c_pfrac"]),
"impedance_fraction": "{:.2f}".format(config_data["c_zfrac"]),
"current_fraction": "{:.2f}".format(config_data["c_ifrac"]),
"power_pf": "{:.2f}".format(config_data["c_p_pf"]),
"current_pf": "{:.2f}".format(config_data["c_i_pf"]),
"impedance_pf": "{:.2f}".format(config_data["c_z_pf"]),
"base_power": "stripmall_plugs*{:.2f}".format(adj_plugs)}
# if we do not use schedules we will assume adj_plugs is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_plugs)
last_object_key += 1
# Gas Waterheater
adj_gas = (0.8 + 0.4 * random.random()) * floor_area / 1000. # randomize 40# then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "wh_{:s}_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"parent": parent_house["name"],
"groupid": "Gas_waterheater",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "0.0",
"impedance_fraction": "0.0",
"current_fraction": "0.0",
"power_pf": "1.0",
"base_power": "stripmall_gas*{:.2f}".format(adj_gas)}
# if we do not use schedules we will assume adj_gas is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_gas)
last_object_key += 1
# Exterior Lighting
adj_ext = (0.8 + 0.4 * random.random()) * floor_area / 1000. # randomize 40# then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "ext_{:s}_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"parent": parent_house["name"],
"groupid": "Exterior_lighting",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "0.0",
"power_fraction": "{:.2f}".format(config_data["c_pfrac"]),
"impedance_fraction": "{:.2f}".format(config_data["c_zfrac"]),
"current_fraction": "{:.2f}".format(config_data["c_ifrac"]),
"power_pf": "{:.2f}".format(config_data["c_p_pf"]),
"current_pf": "{:.2f}".format(config_data["c_i_pf"]),
"impedance_pf": "{:.2f}".format(config_data["c_z_pf"]),
"base_power": "stripmall_exterior*{:.2f}".format(adj_ext)}
# if we do not use schedules we will assume adj_ext is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_ext)
last_object_key += 1
# Occupancy
adj_occ = (0.8 + 0.4 * random.random()) * floor_area / 1000. # randomize 40# then convert W/sf -> kW
glmCaseDict[last_object_key] = {"object": "ZIPload",
"name": "occ_{:s}_{:s}_{:.0f}".format(my_name, ph[phind], jjj),
"parent": parent_house["name"],
"groupid": "Occupancy",
# "groupid": "Commercial_zip",
"schedule_skew": "{:.0f}".format(skew_value),
"heatgain_fraction": "1.0",
"power_fraction": "0.0",
"impedance_fraction": "0.0",
"current_fraction": "0.0",
"power_pf": "1.0",
"base_power": "stripmall_occupancy*{:.2f}".format(adj_occ)}
# if we do not use schedules we will assume adj_occ is the fixed value
if use_flags['use_schedules'] == 0:
glmCaseDict[last_object_key]['base_power'] = "{:.2f}".format(adj_occ)
last_object_key += 1
# end
# end #number of strip zones
# end #phase index
# end #commercial selection
# print('finished iterating over number of stripmalls')
# add the "street light" loads
# parent them to the METER as opposed to the node, so we don't
# have any "grandchildren"
elif total_comm_houses == 0 and sum(commercial_dict[iii]['load']) > 0:
# print('writing street_light')
glmCaseDict[last_object_key] = {"object": "load",
"parent": "{:s}".format(my_parent),
"name": "str_light_{:s}{:s}".format(ph, commercial_dict[iii]['name']),
"nominal_voltage": "{:.2f}".format(nom_volt),
"phases": "{:s}".format(ph)
}
if has_phase_A == 1 and commercial_dict[iii]['load'][0] > 0:
if use_flags['use_schedules'] == 1:
glmCaseDict[last_object_key]["base_power_A"] = "street_lighting*{:f}".format(config_data["light_scalar_comm"] * commercial_dict[iii]['load'][0])
else:
glmCaseDict[last_object_key]["base_power_A"] = "{:f}".format(config_data["light_scalar_comm"] * commercial_dict[iii]['load'][0])
glmCaseDict[last_object_key]["power_pf_A"] = "{:f}".format(config_data["c_p_pf"])
glmCaseDict[last_object_key]["current_pf_A"] = "{:f}".format(config_data["c_i_pf"])
glmCaseDict[last_object_key]["impedance_pf_A"] = "{:f}".format(config_data["c_z_pf"])
glmCaseDict[last_object_key]["power_fraction_A"] = "{:f}".format(config_data["c_pfrac"])
glmCaseDict[last_object_key]["current_fraction_A"] = "{:f}".format(config_data["c_ifrac"])
glmCaseDict[last_object_key]["impedance_fraction_A"] = "{:f}".format(config_data["c_zfrac"])
if has_phase_B == 1 and commercial_dict[iii]['load'][1] > 0:
if use_flags['use_schedules'] == 1:
glmCaseDict[last_object_key]["base_power_B"] = "street_lighting*{:f}".format(config_data["light_scalar_comm"] * commercial_dict[iii]['load'][1])
else:
glmCaseDict[last_object_key]["base_power_B"] = "{:f}".format(config_data["light_scalar_comm"] * commercial_dict[iii]['load'][1])
glmCaseDict[last_object_key]["power_pf_B"] = "{:f}".format(config_data["c_p_pf"])
glmCaseDict[last_object_key]["current_pf_B"] = "{:f}".format(config_data["c_i_pf"])
glmCaseDict[last_object_key]["impedance_pf_B"] = "{:f}".format(config_data["c_z_pf"])
glmCaseDict[last_object_key]["power_fraction_B"] = "{:f}".format(config_data["c_pfrac"])
glmCaseDict[last_object_key]["current_fraction_B"] = "{:f}".format(config_data["c_ifrac"])
glmCaseDict[last_object_key]["impedance_fraction_B"] = "{:f}".format(config_data["c_zfrac"])
if has_phase_C == 1 and commercial_dict[iii]['load'][2] > 0:
if use_flags['use_schedules'] == 1:
glmCaseDict[last_object_key]["base_power_C"] = "street_lighting*{:f}".format(config_data["light_scalar_comm"] * commercial_dict[iii]['load'][2])
else:
glmCaseDict[last_object_key]["base_power_C"] = "{:f}".format(config_data["light_scalar_comm"] * commercial_dict[iii]['load'][2])
glmCaseDict[last_object_key]["power_pf_C"] = "{:f}".format(config_data["c_p_pf"])
glmCaseDict[last_object_key]["current_pf_C"] = "{:f}".format(config_data["c_i_pf"])
glmCaseDict[last_object_key]["impedance_pf_C"] = "{:f}".format(config_data["c_z_pf"])
glmCaseDict[last_object_key]["power_fraction_C"] = "{:f}".format(config_data["c_pfrac"])
glmCaseDict[last_object_key]["current_fraction_C"] = "{:f}".format(config_data["c_ifrac"])
glmCaseDict[last_object_key]["impedance_fraction_C"] = "{:f}".format(config_data["c_zfrac"])
last_object_key += 1
# end 'for each load'
return glmCaseDict, last_object_key
def add_normalized_commercial_ziploads(loadshape_dict, commercial_dict, config_data, last_key):
"""
This function appends commercial ZIP loads to a feeder based on the existing loads
Inputs
loadshape_dict - dictionary containing the full feeder
commercial_dict - dictionary that contains information about commercial load spots
config_data - dictionary that contains the configuration of the feeder
last_key - last object key
Outputs
loadshape_dict - dictionary containing the full feeder
last_key - last object key
"""
for x in list(commercial_dict.keys()):
load_name = commercial_dict[x]['name']
load_parent = commercial_dict[x].get('parent', 'None')
phases = commercial_dict[x]['phases']
#nom_volt = commercial_dict[x]['nom_volt']
nom_volt = '120.0'
bp_A = commercial_dict[x]['load'][0] * config_data['normalized_loadshape_scalar']
bp_B = commercial_dict[x]['load'][1] * config_data['normalized_loadshape_scalar']
bp_C = commercial_dict[x]['load'][2] * config_data['normalized_loadshape_scalar']
loadshape_dict[last_key] = {'object': 'load',
'name': '{:s}_loadshape'.format(load_name),
'phases': phases,
'nominal_voltage': nom_volt}
# only attach a parent when one was provided
if load_parent != 'None':
loadshape_dict[last_key]['parent'] = load_parent
if 'A' in phases and bp_A > 0.0:
loadshape_dict[last_key]['base_power_A'] = 'norm_feeder_loadshape.value*{:f}'.format(bp_A)
loadshape_dict[last_key]['power_pf_A'] = '{:f}'.format(config_data['c_p_pf'])
loadshape_dict[last_key]['current_pf_A'] = '{:f}'.format(config_data['c_i_pf'])
loadshape_dict[last_key]['impedance_pf_A'] = '{:f}'.format(config_data['c_z_pf'])
loadshape_dict[last_key]['power_fraction_A'] = '{:f}'.format(config_data['c_pfrac'])
loadshape_dict[last_key]['current_fraction_A'] = '{:f}'.format(config_data['c_ifrac'])
loadshape_dict[last_key]['impedance_fraction_A'] = '{:f}'.format(config_data['c_zfrac'])
if 'B' in phases and bp_B > 0.0:
loadshape_dict[last_key]['base_power_B'] = 'norm_feeder_loadshape.value*{:f}'.format(bp_B)
loadshape_dict[last_key]['power_pf_B'] = '{:f}'.format(config_data['c_p_pf'])
loadshape_dict[last_key]['current_pf_B'] = '{:f}'.format(config_data['c_i_pf'])
loadshape_dict[last_key]['impedance_pf_B'] = '{:f}'.format(config_data['c_z_pf'])
loadshape_dict[last_key]['power_fraction_B'] = '{:f}'.format(config_data['c_pfrac'])
loadshape_dict[last_key]['current_fraction_B'] = '{:f}'.format(config_data['c_ifrac'])
loadshape_dict[last_key]['impedance_fraction_B'] = '{:f}'.format(config_data['c_zfrac'])
if 'C' in phases and bp_C > 0.0:
loadshape_dict[last_key]['base_power_C'] = 'norm_feeder_loadshape.value*{:f}'.format(bp_C)
loadshape_dict[last_key]['power_pf_C'] = '{:f}'.format(config_data['c_p_pf'])
loadshape_dict[last_key]['current_pf_C'] = '{:f}'.format(config_data['c_i_pf'])
loadshape_dict[last_key]['impedance_pf_C'] = '{:f}'.format(config_data['c_z_pf'])
loadshape_dict[last_key]['power_fraction_C'] = '{:f}'.format(config_data['c_pfrac'])
loadshape_dict[last_key]['current_fraction_C'] = '{:f}'.format(config_data['c_ifrac'])
loadshape_dict[last_key]['impedance_fraction_C'] = '{:f}'.format(config_data['c_zfrac'])
last_key += 1
return loadshape_dict, last_key
| 11.5 | 22 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
23fb877a355a3c0c8b3523dc2d232a403e7cb2d5 | 145 | py | Python | env/models/robots/__init__.py | METU-KALFA/furniture | 1f81e8a3a2543ac33c06ca61448d784c625d3ca0 | [
"MIT"
] | null | null | null | env/models/robots/__init__.py | METU-KALFA/furniture | 1f81e8a3a2543ac33c06ca61448d784c625d3ca0 | [
"MIT"
] | null | null | null | env/models/robots/__init__.py | METU-KALFA/furniture | 1f81e8a3a2543ac33c06ca61448d784c625d3ca0 | [
"MIT"
] | null | null | null | from .robot import Robot
from .sawyer_robot import Sawyer
from .baxter_robot import Baxter
from .cursor import Cursor
from .ur5_robot import Ur5
| 24.166667 | 32 | 0.827586 | 23 | 145 | 5.086957 | 0.304348 | 0.376068 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016 | 0.137931 | 145 | 5 | 33 | 29 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9b1ca7bee9b8bed8320f7b949c141dce7be4a5d5 | 13,588 | py | Python | lexleader.py | wenting-zhao/lex-leader | 59be259aafb01f6c5b456d9d56f8c78ecaacc80f | [
"BSD-2-Clause-FreeBSD",
"MIT"
] | null | null | null | lexleader.py | wenting-zhao/lex-leader | 59be259aafb01f6c5b456d9d56f8c78ecaacc80f | [
"BSD-2-Clause-FreeBSD",
"MIT"
] | null | null | null | lexleader.py | wenting-zhao/lex-leader | 59be259aafb01f6c5b456d9d56f8c78ecaacc80f | [
"BSD-2-Clause-FreeBSD",
"MIT"
] | null | null | null | import sys
class LexLeader:
def __init__(self, columns, rows, option, columns_enabled=True, rows_enabled=True):
self.num_columns = columns
self.num_rows = rows
self.columns_enabled = columns_enabled
self.rows_enabled = rows_enabled
self.varmap = dict()
self.num_var = 0
self.parse_option(option)
for c in range(columns):
for r in range(rows):
self.num_var += 1
self.varmap[(c, r)] = self.num_var
def parse_option(self, option):
if option == "and":
self.which_lex = self._and_helper
elif option == "and-cse":
self.which_lex = self._and_subexpr_helper
elif option == "or":
self.which_lex = self._or_helper
elif option == "or-cse":
self.which_lex = self._or_subexpr_helper
elif option == "ror":
self.which_lex = self._ror_helper
elif option == "alpha":
self.which_lex = self._alpha_helper
elif option == "alpha-m":
self.which_lex = self._alpha_m_helper
elif option == "harvey":
self.which_lex = self._harvey_helper
        else:
            raise ValueError("unknown lex-leader encoding option: " + option)
def make_lexleader(self):
""" return the row and column lex-leader constraints of the full matrix
"""
full = []
if self.columns_enabled:
for c in range(self.num_columns-1, 0, -1):
column1 = [self.varmap[(c, r)] for r in range(self.num_rows)]
column2 = [self.varmap[(c-1, r)] for r in range(self.num_rows)]
full.append(self.which_lex(column1, column2))
if self.rows_enabled:
for r in range(self.num_rows-1, 0, -1):
row1 = [self.varmap[(c, r)] for c in range(self.num_columns)]
row2 = [self.varmap[(c, r-1)] for c in range(self.num_columns)]
full.append(self.which_lex(row1, row2))
return "\n& ".join(full)
def add_assumps(self, *variables):
assumps = []
for var in variables:
if var < 0:
assumps.append("!x{}".format(abs(var)))
else:
assumps.append("x{}".format(var))
return "\n& "+"\n& ".join(assumps)
def _and_helper(self, vector1, vector2):
""" creates the lex-leader constraints between two vectors of variables
via the plain AND decomposition encoding
inputs:
vector1, vector2: lists of integers, equivalent lengths,
each representing a vector of variables
returns:
string containing the full expression of the lex-leader constraint
"""
# setup vectors with 1-based indexing to match constraints in the source paper
A = [None] + vector1
B = [None] + vector2
res = []
res.append( "(!x{} | x{})".format(A[1], B[1]) )
assert len(vector1) == len(vector2)
for i in range(1, len(vector1)):
temp = []
for j in range(1, i+1):
temp.append( "(x{} = x{})".format(A[j], B[j]) )
temp = " & ".join(temp)
res.append( "({} -> (!x{} | x{}))".format(temp, A[i+1], B[i+1]) )
return "("+"\n& ".join(res)+")"
def _and_subexpr_helper(self, vector1, vector2):
""" creates the lex-leader constraints between two vectors of variables
via the AND decomposition encoding using common sub-expression elimination
inputs:
vector1, vector2: lists of integers, equivalent lengths,
each representing a vector of variables
returns:
string containing the full expression of the lex-leader constraint
"""
# setup vectors with 1-based indexing to match constraints in the source paper
A = [None] + vector1
B = [None] + vector2
# creating the extra variables
X = dict()
assert len(vector1) == len(vector2)
for i in range(1, len(vector1)):
self.num_var += 1
X[i] = self.num_var
res = []
# A[1] <= B[1] (thesis, 3.18)
res.append( "(!x{} | x{})".format(A[1], B[1]) )
# X[1] <=> (A[1] = B[1]) (thesis, 3.19)
res.append( "(x{} = (x{} = x{}))".format(X[1], A[1], B[1]) )
# 1 <= i <= n-2, X[i+1] <=> (X[i] & (A[i+1] = B[i+1])) (thesis, 3.20)
for i in range(1, len(vector1)-1):
res.append( "(x{} = (x{} & (x{} = x{})))".format(X[i+1], X[i], A[i+1], B[i+1]) )
        # 1 <= i <= n-1, X[i] -> (A[i+1] <= B[i+1]) (thesis, 3.21)
for i in range(1, len(vector1)):
res.append( "(x{} -> (!x{} | x{}))".format(X[i], A[i+1], B[i+1]) )
return "("+"\n& ".join(res)+")"
def _or_helper(self, vector1, vector2):
""" creates the lex-leader constraints between two vectors of variables
via the plain OR decomposition encoding
inputs:
vector1, vector2: lists of integers, equivalent lengths,
each representing a vector of variables
returns:
string containing the full expression of the lex-leader constraint
"""
# setup vectors with 1-based indexing to match constraints in the source paper
A = [None] + vector1
B = [None] + vector2
res = []
res.append( "(!x{} & x{})".format(A[1], B[1]) )
assert len(vector1) == len(vector2)
for i in range(1, len(vector1)):
temp = []
for j in range(1, i+1):
temp.append( "(x{} = x{})".format(A[j], B[j]) )
temp = " & ".join(temp)
res.append( "({} & (!x{} & x{}))".format(temp, A[i+1], B[i+1]) )
temp = []
for i in range(1, len(vector1)+1):
temp.append( "(x{} = x{})".format(A[i], B[i]) )
res.append(" & ".join(temp))
return "("+"\n| ".join(res)+")"
def _or_subexpr_helper(self, vector1, vector2):
""" creates the lex-leader constraints between two vectors of variables
via the OR decomposition encoding using common sub-expression elimination
inputs:
vector1, vector2: lists of integers, equivalent lengths,
each representing a vector of variables
returns:
string containing the full expression of the lex-leader constraint
"""
# setup vectors with 1-based indexing to match constraints in the source paper
A = [None] + vector1
B = [None] + vector2
# creating the extra variables
X = dict()
assert len(vector1) == len(vector2)
for i in range(1, len(vector1)+1):
self.num_var += 1
X[i] = self.num_var
res = [] # for ANDing each element...
temp = [] # for ORing each element...
n = len(vector1)
# A[1] < B[1]
temp.append( "(!x{} & x{})".format(A[1], B[1]) )
# 1 <= i <= n-1, X[i] & (A[i+1] < B[i+1]))
for i in range(1, n):
temp.append( "(x{} & (!x{} & x{}))".format(X[i], A[i+1], B[i+1]) )
# X[n]
temp.append( "x{}".format(X[n]) )
res.append( "("+" | ".join(temp)+")" )
# X[1] <=> A[1] = B[1] (thesis, 3.36)
res.append( "(x{} = (x{} = x{}))".format(X[1], A[1], B[1]) )
# 1 <= i <= n−1, X[i+1] <=> (X[i] & (A[i+1] = B[i+1])) (thesis, 3.37)
for i in range(1, n):
res.append( "(x{} = (x{} & (x{} = x{})))".format(X[i+1], X[i], A[i+1], B[i+1]) )
return "("+"\n& ".join(res)+")"
def _ror_helper(self, vector1, vector2):
""" creates the lex-leader constraints between two vectors of variables
via the recursive OR decomposition encoding using common sub-expression elimination
inputs:
vector1, vector2: lists of integers, equivalent lengths,
each representing a vector of variables
returns:
string containing the full expression of the lex-leader constraint
"""
# setup vectors with 1-based indexing to match constraints in the source paper
A = [None] + vector1
B = [None] + vector2
assert len(vector1) == len(vector2)
n = len(vector1)
# creating the extra variables
X = dict()
for i in range(1, len(vector1)+1):
self.num_var += 1
X[i] = self.num_var
res = []
# X[1] (thesis, 3.44)
res.append( "(x{})".format(X[1]) )
# X[n] <=> (A[n] <= B[n]) (thesis, 3.45)
res.append( "(x{} = (!x{} | x{}))".format(X[n], A[n], B[n]) )
# 1 <= i <= n−1, X[n−i] <=> (A[n−i]<B[n−i] | (A[n−i]=B[n−i] & X[n−i+1])) (thesis, 3.46)
for i in range(1, n):
res.append( "(x{0} = ((!x{1} & x{2}) | ((x{1}=x{2}) & x{3})))".format(X[n-i], A[n-i], B[n-i], X[n-i+1]) )
return "("+"\n& ".join(res)+")"
def _alpha_helper(self, vector1, vector2):
""" creates the lex-leader constraints between two vectors of variables
via the Alpha encoding using common sub-expression elimination
inputs:
vector1, vector2: lists of integers, equivalent lengths,
each representing a vector of variables
returns:
string containing the full expression of the lex-leader constraint
"""
# setup vectors with 1-based indexing to match constraints in the source paper
A = [None] + vector1
B = [None] + vector2
assert len(vector1) == len(vector2)
n = len(vector1)
# creating the extra variables
alpha = dict()
for i in range(len(vector1)+1):
self.num_var += 1
alpha[i] = self.num_var
res = []
# alpha[0] (thesis, 3.66)
res.append( "(x{})".format(alpha[0]) )
# 0 <= i <= n−1, -alpha[i] -> -a[i+1] (thesis, 3.67)
for i in range(n):
res.append( "(!x{} -> !x{})".format(alpha[i], alpha[i+1]) )
# 1 <= i <= n, alpha[i] -> (A[i] = B[i]) (thesis, 3.68)
for i in range(1, n+1):
res.append( "(x{} -> (x{} = x{}))".format(alpha[i], A[i], B[i]) )
# 0 <= i <= n−1, ((alpha[i]) & (!alpha[i+1])) -> (A[i+1] < B[i+1]) (thesis, 3.69)
for i in range(n):
res.append( "((x{} & !x{}) -> (!x{} & x{}))".format(alpha[i], alpha[i+1], A[i+1], B[i+1]) )
# 0 <= i <= n−1, alpha[i] -> (A[i+1] <= B[i+1]) (thesis, 3.70)
for i in range(n):
res.append( "(x{} -> (!x{} | x{}))".format(alpha[i], A[i+1], B[i+1]) )
return "("+"\n& ".join(res)+")"
def _alpha_m_helper(self, vector1, vector2):
""" creates the lex-leader constraints between two vectors of variables
via the Alpha M encoding using common sub-expression elimination
inputs:
vector1, vector2: lists of integers, equivalent lengths,
each representing a vector of variables
returns:
string containing the full expression of the lex-leader constraint
"""
# setup vectors with 1-based indexing to match constraints in the source paper
A = [None] + vector1
B = [None] + vector2
assert len(vector1) == len(vector2)
n = len(vector1)
# creating the extra variables
alpha = dict()
for i in range(1, len(vector1)+2):
self.num_var += 1
alpha[i] = self.num_var
res = []
# alpha[1] (thesis, 3.81)
res.append( "(x{})".format(alpha[1]) )
# 1 <= i <= n, alpha[i] <=> (((A[i] < B[i])|alpha[i+1]) & (A[i]<=B[i])) (thesis, 3.82)
for i in range(1, n+1):
res.append( "(x{0} = (((!x{1} & x{2})|x{3}) & (!x{1} | x{2})))".format(alpha[i], A[i], B[i], alpha[i+1]) )
return "("+"\n& ".join(res)+")"
def _harvey_helper(self, vector1, vector2):
""" creates the lex-leader constraints between two vectors of variables
via the Harvey encoding
inputs:
vector1, vector2: lists of integers, equivalent lengths,
each representing a vector of variables
returns:
string containing the full expression of the lex-leader constraint
"""
# setup vectors with 1-based indexing to match constraints in the source paper
A = [None] + vector1
B = [None] + vector2
assert len(vector1) == len(vector2)
n = len(vector1)
# creating the extra variables
X = dict()
for i in range(1, len(vector1)+1):
self.num_var += 1
X[i] = self.num_var
res = []
# X[1] (thesis, 3.54)
res.append( "(x{})".format(X[1]) )
# X[n] <=> (A[n] < (B[n]+1)) (thesis, 3.55)
        res.append( "(x{} = (x{} -> x{}))".format(X[n], A[n], B[n]) )
# 0 <= i <= n−2, X[n−i−1] <=> (A[n−i−1] < (B[n−i−1] + Bool2Int(X[n−i]))),
# the right-hand side becomes (B+X)(!A+B)(!A+X)
for i in range(0, len(vector1)-1):
res.append(
"(x{XX} = ((x{B} | x{X}) & (!x{A} | x{B}) & (!x{A} | x{X})))".format(
X=X[n-i], XX=X[n-i-1], A=A[n-i-1], B=B[n-i-1]
)
)
return "("+"\n& ".join(res)+")"
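As a standalone illustration of what `_and_helper` emits, the sketch below (function name `and_lex` is hypothetical, not part of `lexleader.py`) builds the plain AND decomposition of A <=_lex B: A[1] <= B[1], plus one implication per prefix saying "prefix equal implies next element <=".

```python
def and_lex(vec_a, vec_b):
    """Plain AND-decomposition lex-leader constraint between two
    variable vectors, returned as a single formula string."""
    assert len(vec_a) == len(vec_b)
    # base case: first element of A must be <= first element of B
    clauses = ["(!x{} | x{})".format(vec_a[0], vec_b[0])]
    for i in range(1, len(vec_a)):
        # equality of the length-i prefix ...
        prefix = " & ".join(
            "(x{} = x{})".format(a, b) for a, b in zip(vec_a[:i], vec_b[:i])
        )
        # ... implies the next pair is ordered
        clauses.append("({} -> (!x{} | x{}))".format(prefix, vec_a[i], vec_b[i]))
    return "(" + " & ".join(clauses) + ")"

constraint = and_lex([1, 2], [3, 4])
```

For two-element vectors this yields one base clause and one implication, mirroring the strings `_and_helper` produces for a column pair.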
| 40.927711 | 118 | 0.497792 | 1,860 | 13,588 | 3.6 | 0.074731 | 0.013441 | 0.034349 | 0.032855 | 0.815711 | 0.778973 | 0.764486 | 0.733423 | 0.70236 | 0.683542 | 0 | 0.034379 | 0.338534 | 13,588 | 331 | 119 | 41.05136 | 0.7085 | 0.346703 | 0 | 0.521505 | 0 | 0.016129 | 0.082965 | 0 | 0 | 0 | 0 | 0 | 0.043011 | 1 | 0.064516 | false | 0 | 0.005376 | 0 | 0.129032 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f1a6ddc83d96d40c68db915b61b446f708890502 | 156 | py | Python | __init__.py | AbstractMonkey/flask_test | 84a983c204234f471420a5041c28400c1193f762 | [
"MIT"
] | null | null | null | __init__.py | AbstractMonkey/flask_test | 84a983c204234f471420a5041c28400c1193f762 | [
"MIT"
] | null | null | null | __init__.py | AbstractMonkey/flask_test | 84a983c204234f471420a5041c28400c1193f762 | [
"MIT"
] | null | null | null | from flask import Flask
from flask_test import routes
from flask_test.config import Config
app = Flask(__name__)
# Load configuration attributes from the Config class
app.config.from_object(Config)
| 14.181818 | 30 | 0.801282 | 24 | 156 | 4.916667 | 0.416667 | 0.228814 | 0.237288 | 0.322034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147436 | 156 | 10 | 31 | 15.6 | 0.887218 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.6 | null | null | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f1e9262f2424f0e7346b55deb6ded28c5b3e17a6 | 16,337 | py | Python | studd/studd_batch.py | vcerqueira/studd | c23dd2c81bb05abc47ef8e929b5c8a708f4b7923 | [
"BSD-3-Clause"
] | 2 | 2021-05-06T16:02:09.000Z | 2021-05-26T02:38:02.000Z | studd/studd_batch.py | vcerqueira/studd | c23dd2c81bb05abc47ef8e929b5c8a708f4b7923 | [
"BSD-3-Clause"
] | null | null | null | studd/studd_batch.py | vcerqueira/studd | c23dd2c81bb05abc47ef8e929b5c8a708f4b7923 | [
"BSD-3-Clause"
] | 1 | 2022-03-25T03:48:14.000Z | 2022-03-25T03:48:14.000Z | from skmultiflow.data.data_stream import DataStream
from skmultiflow.drift_detection.page_hinkley import PageHinkley as PHT
from ht_detectors.tracker_output import HypothesisTestDetector
import copy
import numpy as np
class STUDD:
def __init__(self, X, y, n_train):
"""
:param X:
:param y:
:param n_train:
"""
D = DataStream(X, y)
D.prepare_for_use()
self.datastream = D
self.n_train = n_train
self.W = n_train
self.base_model = None
self.student_model = None
self.init_training_data = None
def initial_fit(self, model, std_model):
"""
:return:
"""
X_tr, y_tr = self.datastream.next_sample(self.n_train)
model.fit(X_tr, y_tr)
yhat_tr = model.predict(X_tr)
std_model.fit(X_tr, yhat_tr)
self.base_model = model
self.student_model = std_model
self.init_training_data = dict({"X": X_tr, "y": y_tr, "y_hat": yhat_tr})
DETECTOR = PHT
@staticmethod
def drift_detection_std(datastream_, model_,
std_model_, n_train_,
delta, n_samples,
upd_model=False,
upd_std_model=True,
detector=DETECTOR):
datastream = copy.deepcopy(datastream_)
base_model = copy.deepcopy(model_)
student_model = copy.deepcopy(std_model_)
n_train = copy.deepcopy(n_train_)
std_detector = detector(delta=delta)
std_alarms = []
iter = n_train
n_updates = 0
samples_used = 0
y_hat_hist = []
y_buffer, y_hist = [], []
X_buffer, X_hist = [], []
while datastream.has_more_samples():
# print("Iteration: " + str(iter))
Xi, yi = datastream.next_sample()
y_hist.append(yi[0])
y_buffer.append(yi[0])
X_hist.append(Xi[0])
X_buffer.append(Xi[0])
model_yhat = base_model.predict(Xi)
y_hat_hist.append(model_yhat[0])
std_model_yhat = student_model.predict(Xi)
std_err = int(model_yhat != std_model_yhat)
std_detector.add_element(std_err)
if std_detector.detected_change():
print("Found change std in iter: " + str(iter))
std_alarms.append(iter)
if upd_model:
X_buffer = np.array(X_buffer)
y_buffer = np.array(y_buffer)
samples_used_iter = len(y_buffer[-n_samples:])
print("Updating model with " + str(samples_used_iter), " Observations")
base_model.fit(X_buffer[-n_samples:],
y_buffer[-n_samples:])
yhat_buffer = base_model.predict(X_buffer)
if upd_std_model:
student_model.fit(X_buffer, yhat_buffer)
else:
student_model.fit(X_buffer[-n_samples:],
yhat_buffer[-n_samples:])
# y_buffer = []
# X_buffer = []
y_buffer = list(y_buffer)
X_buffer = list(X_buffer)
n_updates += 1
samples_used += samples_used_iter
print("Moving on")
iter += 1
preds = dict({"y": y_hist, "y_hat": y_hat_hist})
output = dict({"alarms": std_alarms,
"preds": preds,
"n_updates": n_updates,
"samples_used": samples_used})
return output
@staticmethod
def drift_detection_spv(datastream_, model_, n_train_,
delay_time, observation_ratio,
delta, n_samples,
upd_model=False,
detector=DETECTOR):
datastream = copy.deepcopy(datastream_)
model = copy.deepcopy(model_)
n_train = copy.deepcopy(n_train_)
driftmodel = detector(delta=delta)
alarms = []
iter = n_train
j, n_updates, samples_used = 0, 0, 0
yhat_hist = []
y_buffer, y_hist = [], []
X_buffer, X_hist = [], []
while datastream.has_more_samples():
# print("Iteration: " + str(iter))
Xi, yi = datastream.next_sample()
y_hist.append(yi[0])
y_buffer.append(yi[0])
X_hist.append(Xi[0])
X_buffer.append(Xi[0])
model_yhat = model.predict(Xi)
yhat_hist.append(model_yhat[0])
put_i_available = np.random.binomial(1, observation_ratio)
if put_i_available > 0:
if j >= delay_time:
err = int(y_hist[j - delay_time] != yhat_hist[j - delay_time])
driftmodel.add_element(err)
if driftmodel.detected_change():
print("Found change in iter: " + str(iter))
alarms.append(iter)
if upd_model:
X_buffer = np.array(X_buffer)
y_buffer = np.array(y_buffer)
samples_used_iter = len(y_buffer[-n_samples:])
print("Updating model with " + str(samples_used_iter), " Observations")
model.fit(X_buffer[-n_samples:],
y_buffer[-n_samples:])
y_buffer = list(y_buffer)
X_buffer = list(X_buffer)
n_updates += 1
samples_used += samples_used_iter
print("Moving on")
iter += 1
j += 1
preds = dict({"y": y_hist, "y_hat": yhat_hist})
output = dict({"alarms": alarms,
"preds": preds,
"n_updates": n_updates,
"samples_used": samples_used})
return output
@staticmethod
def BL2_retrain_after_w(datastream_, model_, n_train_, n_samples):
datastream = copy.deepcopy(datastream_)
model = copy.deepcopy(model_)
n_train = copy.deepcopy(n_train_)
iter = copy.deepcopy(n_train_)
j, n_updates, samples_used = 0, 0, 0
yhat_hist = []
y_buffer, y_hist = [], []
X_buffer, X_hist = [], []
while datastream.has_more_samples():
# print("Iteration: " + str(iter))
Xi, yi = datastream.next_sample()
y_hist.append(yi[0])
y_buffer.append(yi[0])
X_hist.append(Xi[0])
X_buffer.append(Xi[0])
model_yhat = model.predict(Xi)
yhat_hist.append(model_yhat[0])
if iter % n_train == 0 and iter > n_train + 1:
X_buffer = np.array(X_buffer)
y_buffer = np.array(y_buffer)
samples_used_iter = len(y_buffer[-n_samples:])
print("Updating model with " + str(samples_used_iter), " Observations")
model.fit(X_buffer[-n_samples:],
y_buffer[-n_samples:])
y_buffer = list(y_buffer)
X_buffer = list(X_buffer)
n_updates += 1
samples_used += samples_used_iter
print("Moving on")
iter += 1
j += 1
preds = dict({"y": y_hist, "y_hat": yhat_hist})
output = dict({"alarms": [],
"preds": preds,
"n_updates": n_updates,
"samples_used": samples_used})
return output
@staticmethod
def BL1_never_adapt(datastream_, model_):
datastream = copy.deepcopy(datastream_)
model = copy.deepcopy(model_)
yhat_hist, y_hist = [], []
while datastream.has_more_samples():
# print("Iteration: " + str(iter))
Xi, yi = datastream.next_sample()
y_hist.append(yi[0])
model_yhat = model.predict(Xi)
yhat_hist.append(model_yhat[0])
preds = dict({"y": y_hist, "y_hat": yhat_hist})
output = dict({"alarms": [],
"preds": preds,
"n_updates": 0,
"samples_used": 0})
return output
@staticmethod
def drift_detection_uspv(datastream_, model_, n_train_,
use_prob,
method,
pvalue,
window_size,
n_samples,
upd_model=False):
assert method in ["wrs", "tt", "ks"]
datastream = copy.deepcopy(datastream_)
model = copy.deepcopy(model_)
n_train = copy.deepcopy(n_train_)
driftmodel = HypothesisTestDetector(method=method,
window=window_size,
thr=pvalue)
alarms = []
y_buffer = []
y_hist = []
X_buffer = []
y_hat_hist = []
n_updates = 0
samples_used = 0
iter = n_train
while datastream.has_more_samples():
# print("Iteration: " + str(iter))
Xi, yi = datastream.next_sample()
y_buffer.append(yi[0])
y_hist.append(yi[0])
X_buffer.append(Xi[0])
y_hat_hist.append(model.predict(Xi)[0])
if use_prob:
yprob_all = model.predict_proba(Xi)
if len(yprob_all) < 2:
yhat = yprob_all[0]
elif len(yprob_all) == 2:
yhat = yprob_all[1]
else:
yhat = np.max(yprob_all)
else:
yhat = model.predict(Xi)[0]
driftmodel.add_element(yhat)
if driftmodel.detected_change():
print("Found change in iter: " + str(iter))
alarms.append(iter)
if upd_model:
X_buffer = np.array(X_buffer)
y_buffer = np.array(y_buffer)
samples_used_iter = len(y_buffer[-n_samples:])
print("Updating model with " + str(samples_used_iter), " Observations")
model.fit(X_buffer[-n_samples:],
y_buffer[-n_samples:])
# y_buffer = []
# X_buffer = []
y_buffer = list(y_buffer)
X_buffer = list(X_buffer)
n_updates += 1
samples_used += samples_used_iter
print("Moving on")
iter += 1
preds = dict({"y": y_hist, "y_hat": y_hat_hist})
output = dict({"alarms": alarms,
"preds": preds,
"n_updates": n_updates,
"samples_used": samples_used})
return output
@staticmethod
def drift_detection_uspv_f(datastream_, model_, n_train_,
use_prob,
method,
pvalue,
window_size,
n_samples,
upd_model=False):
from ht_detectors.tracker_output import FixedWindowDetector
assert method in ["wrs", "tt", "ks"]
datastream = copy.deepcopy(datastream_)
model = copy.deepcopy(model_)
n_train = copy.deepcopy(n_train_)
driftmodel = FixedWindowDetector(ref_window=[], thr=pvalue, window_size=window_size)
alarms = []
y_buffer = []
y_hist = []
X_buffer = []
y_hat_hist = []
n_updates = 0
samples_used = 0
iter = n_train
while datastream.has_more_samples():
# print("Iteration: " + str(iter))
Xi, yi = datastream.next_sample()
y_buffer.append(yi[0])
y_hist.append(yi[0])
X_buffer.append(Xi[0])
y_hat_hist.append(model.predict(Xi)[0])
if use_prob:
yprob_all = model.predict_proba(Xi)
if len(yprob_all) < 2:
yhat = yprob_all[0]
elif len(yprob_all) == 2:
yhat = yprob_all[1]
else:
yhat = np.max(yprob_all)
else:
yhat = model.predict(Xi)[0]
driftmodel.add_element(yhat)
if driftmodel.detected_change():
print("Found change in iter: " + str(iter))
alarms.append(iter)
if upd_model:
X_buffer = np.array(X_buffer)
y_buffer = np.array(y_buffer)
samples_used_iter = len(y_buffer[-n_samples:])
print("Updating model with " + str(samples_used_iter), " Observations")
model.fit(X_buffer[-n_samples:],
y_buffer[-n_samples:])
# y_buffer = []
# X_buffer = []
y_buffer = list(y_buffer)
X_buffer = list(X_buffer)
n_updates += 1
samples_used += samples_used_iter
print("Moving on")
iter += 1
preds = dict({"y": y_hist, "y_hat": y_hat_hist})
output = dict({"alarms": alarms,
"preds": preds,
"n_updates": n_updates,
"samples_used": samples_used})
return output
@staticmethod
def drift_detection_uspv_x(datastream_, model_, n_train_,
X,
pvalue,
window_size,
n_samples,
upd_model=False):
from ht_detectors.tracker_covariates import XCTracker
datastream = copy.deepcopy(datastream_)
model = copy.deepcopy(model_)
n_train = copy.deepcopy(n_train_)
driftmodel = XCTracker(X=X, thr=pvalue, W=window_size)
driftmodel.create_trackers()
alarms = []
y_buffer = []
y_hist = []
X_buffer = []
y_hat_hist = []
n_updates = 0
samples_used = 0
iter = n_train
while datastream.has_more_samples():
# print("Iteration: " + str(iter))
Xi, yi = datastream.next_sample()
y_buffer.append(yi[0])
y_hist.append(yi[0])
X_buffer.append(Xi[0])
y_hat_hist.append(model.predict(Xi)[0])
# yhat = model.predict(Xi)[0]
driftmodel.add_element(Xi)
if driftmodel.detected_change():
print("Found change in iter: " + str(iter))
alarms.append(iter)
if upd_model:
X_buffer = np.array(X_buffer)
y_buffer = np.array(y_buffer)
samples_used_iter = len(y_buffer[-n_samples:])
print("Updating model with " + str(samples_used_iter), " Observations")
model.fit(X_buffer[-n_samples:],
y_buffer[-n_samples:])
# y_buffer = []
# X_buffer = []
y_buffer = list(y_buffer)
X_buffer = list(X_buffer)
n_updates += 1
samples_used += samples_used_iter
print("Moving on")
iter += 1
preds = dict({"y": y_hist, "y_hat": y_hat_hist})
output = dict({"alarms": alarms,
"preds": preds,
"n_updates": n_updates,
"samples_used": samples_used})
return output
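The STUDD loops above only rely on the detector's `add_element` / `detected_change` protocol. The sketch below is a simplified, self-contained stand-in for a Page-Hinkley detector (class name `MiniPageHinkley` and its parameters are illustrative, not skmultiflow's actual API) fed with a teacher/student disagreement stream that switches from agreement to disagreement.

```python
class MiniPageHinkley:
    """Toy Page-Hinkley test for an upward mean shift."""
    def __init__(self, delta=0.005, threshold=10.0):
        self.delta = delta          # tolerated drift per step
        self.threshold = threshold  # alarm threshold (lambda)
        self.mean = 0.0
        self.n = 0
        self.cumulative = 0.0       # m_t = sum(x_i - mean_i - delta)
        self.minimum = 0.0          # running min of m_t
        self.change_detected = False

    def add_element(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n   # incremental running mean
        self.cumulative += x - self.mean - self.delta
        self.minimum = min(self.minimum, self.cumulative)
        self.change_detected = (self.cumulative - self.minimum) > self.threshold

    def detected_change(self):
        return self.change_detected

detector = MiniPageHinkley(threshold=5.0)
# student and teacher agree (error 0) for a while, then start disagreeing
alarms = []
for i, err in enumerate([0] * 50 + [1] * 50):
    detector.add_element(err)
    if detector.detected_change():
        alarms.append(i)
        break
```

With these toy settings the alarm fires a few samples after the disagreement rate jumps, which is exactly the signal `drift_detection_std` turns into a retraining trigger.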
| 29.812044 | 92 | 0.483381 | 1,716 | 16,337 | 4.293124 | 0.081002 | 0.04941 | 0.038007 | 0.024433 | 0.787159 | 0.773992 | 0.749559 | 0.739514 | 0.72689 | 0.72689 | 0 | 0.008038 | 0.421252 | 16,337 | 547 | 93 | 29.866545 | 0.771126 | 0.025464 | 0 | 0.806452 | 0 | 0 | 0.041222 | 0 | 0 | 0 | 0 | 0 | 0.005376 | 1 | 0.024194 | false | 0 | 0.048387 | 0 | 0.096774 | 0.045699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7b32fa1c33e00e06615c8ea7fda9d6cce271c330 | 68 | py | Python | v2/backend/security/admins/__init__.py | jonfairbanks/rtsp-nvr | c770c77e74a062c63fb5e2419bc00a17543da332 | [
"MIT"
] | 558 | 2017-10-04T14:33:18.000Z | 2022-03-24T21:25:08.000Z | v2/backend/security/admins/__init__.py | jonfairbanks/rtsp-nvr | c770c77e74a062c63fb5e2419bc00a17543da332 | [
"MIT"
] | 22 | 2018-04-29T04:25:49.000Z | 2021-08-02T17:26:02.000Z | v2/backend/security/admins/__init__.py | jonfairbanks/rtsp-nvr | c770c77e74a062c63fb5e2419bc00a17543da332 | [
"MIT"
] | 127 | 2017-11-14T19:47:27.000Z | 2022-03-24T21:25:12.000Z | from .role_admin import RoleAdmin
from .user_admin import UserAdmin
| 22.666667 | 33 | 0.852941 | 10 | 68 | 5.6 | 0.7 | 0.392857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 68 | 2 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9e3c9e493814a8d39e13b9c3d6d3e69357c107f8 | 56 | py | Python | hr_zk_attendance_integration/models/__init__.py | kelvzxu/odoo_hr_addons | b5e5af7b80c09697e857bc57eecd2126072501bc | [
"MIT"
] | null | null | null | hr_zk_attendance_integration/models/__init__.py | kelvzxu/odoo_hr_addons | b5e5af7b80c09697e857bc57eecd2126072501bc | [
"MIT"
] | null | null | null | hr_zk_attendance_integration/models/__init__.py | kelvzxu/odoo_hr_addons | b5e5af7b80c09697e857bc57eecd2126072501bc | [
"MIT"
] | null | null | null | from . import zk_machine
from . import machine_analysis
| 18.666667 | 30 | 0.821429 | 8 | 56 | 5.5 | 0.625 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 56 | 2 | 31 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9e4b06f8a78c786241694bba14664c815f7fd5ec | 195 | py | Python | countries/admin.py | Thuhaa/geo_knowledge | c27e7740bd5ffa1e6f91fe738ad2f183da13e8c9 | [
"MIT"
] | null | null | null | countries/admin.py | Thuhaa/geo_knowledge | c27e7740bd5ffa1e6f91fe738ad2f183da13e8c9 | [
"MIT"
] | null | null | null | countries/admin.py | Thuhaa/geo_knowledge | c27e7740bd5ffa1e6f91fe738ad2f183da13e8c9 | [
"MIT"
] | null | null | null | #from django.contrib import admin
from django.contrib.gis import admin
from .models import WorldBorders
admin.site.register(WorldBorders, admin.GeoModelAdmin)
# Register your models here.
| 27.857143 | 55 | 0.8 | 25 | 195 | 6.24 | 0.52 | 0.128205 | 0.217949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 195 | 6 | 56 | 32.5 | 0.923077 | 0.302564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7b45a77e384fe9c49f1834d5ec8afb42b3f1bccd | 34,943 | py | Python | vel/rl/buffers/tests/test_circular_vec_env_buffer_backend.py | galatolofederico/vel | 0473648cffb3f34fb784d12dbb25844ab58ffc3c | [
"MIT"
] | 273 | 2018-09-01T08:54:34.000Z | 2022-02-02T13:22:51.000Z | vel/rl/buffers/tests/test_circular_vec_env_buffer_backend.py | braincorp/vel | bdf9d9eb6ed66278330e8cbece307f6e63ce53c6 | [
"MIT"
] | 47 | 2018-08-17T11:27:08.000Z | 2022-03-11T23:26:55.000Z | vel/rl/buffers/tests/test_circular_vec_env_buffer_backend.py | braincorp/vel | bdf9d9eb6ed66278330e8cbece307f6e63ce53c6 | [
"MIT"
] | 37 | 2018-10-11T22:56:57.000Z | 2020-10-06T19:53:05.000Z | import gym
import gym.spaces
import numpy as np
import numpy.testing as nt
import pytest
from vel.exceptions import VelException
from vel.rl.buffers.circular_replay_buffer import CircularVecEnvBufferBackend
def get_half_filled_buffer(frame_history=1):
""" Return simple preinitialized buffer """
observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 2, 1), dtype=int)
action_space = gym.spaces.Discrete(4)
buffer = CircularVecEnvBufferBackend(
20, num_envs=2, observation_space=observation_space, action_space=action_space, frame_history=frame_history
)
v1 = np.ones(8).reshape((2, 2, 2, 1))
for i in range(10):
item = v1.copy()
item[0] *= (i+1)
item[1] *= 10 * (i+1)
buffer.store_transition(item, 0, float(i)/2, False)
return buffer
def get_filled_buffer(frame_history=1):
""" Return simple preinitialized buffer """
observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 2, 1), dtype=int)
action_space = gym.spaces.Discrete(4)
buffer = CircularVecEnvBufferBackend(
        20, num_envs=2, observation_space=observation_space, action_space=action_space, frame_history=frame_history
    )

    v1 = np.ones(8).reshape((2, 2, 2, 1))

    for i in range(30):
        item = v1.copy()
        item[0] *= (i+1)
        item[1] *= 10 * (i+1)
        buffer.store_transition(item, 0, float(i)/2, False)

    return buffer


def get_filled_buffer1x1(frame_history=1):
    """ Return simple preinitialized buffer """
    observation_space = gym.spaces.Box(low=0, high=255, shape=(2,), dtype=int)
    action_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=float)
    buffer = CircularVecEnvBufferBackend(
        20, num_envs=2, observation_space=observation_space, action_space=action_space, frame_history=frame_history
    )

    v1 = np.ones(4).reshape((2, 2))
    a1 = np.arange(4).reshape((2, 2))

    for i in range(30):
        item = v1.copy()
        item[:, 0] *= (i+1)
        item[:, 1] *= 10 * (i+1)
        buffer.store_transition(item, a1 * i, float(i)/2, False)

    return buffer


def get_filled_buffer2x2(frame_history=1):
    """ Return simple preinitialized buffer """
    observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 2), dtype=int)
    action_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(2, 2), dtype=float)
    buffer = CircularVecEnvBufferBackend(
        20, num_envs=2, observation_space=observation_space, action_space=action_space, frame_history=frame_history
    )

    v1 = np.ones(8).reshape((2, 2, 2))
    a1 = np.arange(8).reshape((2, 2, 2))

    for i in range(30):
        item = v1.copy()
        item[:, 0] *= (i+1)
        item[:, 1] *= 10 * (i+1)
        buffer.store_transition(item, a1 * i, float(i)/2, False)

    return buffer


def get_filled_buffer3x3(frame_history=1):
    """ Return simple preinitialized buffer """
    observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 2, 2), dtype=int)
    action_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(2, 2, 2), dtype=float)
    buffer = CircularVecEnvBufferBackend(
        20, num_envs=2, observation_space=observation_space, action_space=action_space, frame_history=frame_history
    )

    v1 = np.ones(16).reshape((2, 2, 2, 2))
    a1 = np.arange(16).reshape((2, 2, 2, 2))

    for i in range(30):
        item = v1.copy()
        item[:, 0] *= (i+1)
        item[:, 1] *= 10 * (i+1)
        buffer.store_transition(item, i * a1, float(i)/2, False)

    return buffer


def get_filled_buffer1x1_history(frame_history=1):
    """ Return simple preinitialized buffer """
    observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 1), dtype=int)
    action_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=float)
    buffer = CircularVecEnvBufferBackend(
        20, num_envs=2, observation_space=observation_space, action_space=action_space, frame_history=frame_history
    )

    v1 = np.ones(4).reshape((2, 2, 1))
    a1 = np.arange(4).reshape((2, 2))

    for i in range(30):
        item = v1.copy()
        item[:, 0] *= (i+1)
        item[:, 1] *= 10 * (i+1)
        buffer.store_transition(item, a1 * i, float(i)/2, False)

    return buffer


def get_filled_buffer2x2_history(frame_history=1):
    """ Return simple preinitialized buffer """
    observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 2, 1), dtype=int)
    action_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(2, 2), dtype=float)
    buffer = CircularVecEnvBufferBackend(
        20, num_envs=2, observation_space=observation_space, action_space=action_space, frame_history=frame_history
    )

    v1 = np.ones(8).reshape((2, 2, 2, 1))
    a1 = np.arange(8).reshape((2, 2, 2))

    for i in range(30):
        item = v1.copy()
        item[:, 0] *= (i+1)
        item[:, 1] *= 10 * (i+1)
        buffer.store_transition(item, a1 * i, float(i)/2, False)

    return buffer


def get_filled_buffer3x3_history(frame_history=1):
    """ Return simple preinitialized buffer """
    observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 2, 2, 1), dtype=int)
    action_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(2, 2, 2), dtype=float)
    buffer = CircularVecEnvBufferBackend(
        20, num_envs=2, observation_space=observation_space, action_space=action_space, frame_history=frame_history
    )

    v1 = np.ones(16).reshape((2, 2, 2, 2, 1))
    a1 = np.arange(16).reshape((2, 2, 2, 2))

    for i in range(30):
        item = v1.copy()
        item[:, 0] *= (i+1)
        item[:, 1] *= 10 * (i+1)
        buffer.store_transition(item, i * a1, float(i)/2, False)

    return buffer


def get_filled_buffer_extra_info(frame_history=1):
    """ Return simple preinitialized buffer """
    observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 2, 1), dtype=int)
    action_space = gym.spaces.Discrete(4)
    buffer = CircularVecEnvBufferBackend(
        20, num_envs=2, observation_space=observation_space, action_space=action_space, frame_history=frame_history
    )

    v1 = np.ones(8).reshape((2, 2, 2, 1))

    for i in range(30):
        item = v1.copy()
        item[0] *= (i+1)
        item[1] *= 10 * (i+1)
        buffer.store_transition(item, 0, float(i)/2, False, extra_info={
            'neglogp': np.array([i / 30.0, (i+1) / 30.0])
        })

    return buffer


def get_filled_buffer_with_dones(frame_history=1):
    """ Return simple preinitialized buffer with some dones in there """
    observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 2, 1), dtype=int)
    action_space = gym.spaces.Discrete(4)
    buffer = CircularVecEnvBufferBackend(
        20, num_envs=2, observation_space=observation_space, action_space=action_space, frame_history=frame_history
    )

    v1 = np.ones(8).reshape((2, 2, 2, 1))
    done_set = {2, 5, 10, 13, 18, 22, 28}

    for i in range(30):
        item = v1.copy()
        item[0] *= (i+1)
        item[1] *= 10 * (i+1)

        done_array = np.array([i in done_set, (i+1) in done_set], dtype=bool)

        buffer.store_transition(item, 0, float(i)/2, done_array)

    return buffer


def get_filled_buffer_frame_stack(frame_stack=4, frame_dim=1):
    """ Return a preinitialized buffer with frame stack implemented """
    observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 2, frame_dim * frame_stack), dtype=int)
    action_space = gym.spaces.Discrete(4)
    buffer = CircularVecEnvBufferBackend(
        buffer_capacity=20, num_envs=2, observation_space=observation_space, action_space=action_space,
        frame_stack_compensation=True, frame_history=frame_stack
    )

    v1 = np.ones(8 * frame_dim).reshape((2, 2, 2, frame_dim))
    done_set = {2, 5, 10, 13, 18, 22, 28}

    # Simple buffer of previous frames to simulate frame stack
    item_array = []

    for i in range(30):
        item = v1.copy()
        item[:, 0] *= (i+1)
        item[:, 1] *= 10 * (i+1)

        done_array = np.array([i in done_set, (i+1) in done_set], dtype=bool)

        item_array.append(item)

        if len(item_array) < frame_stack:
            item_concatenated = np.concatenate([item] * frame_stack, axis=-1)
        else:
            item_concatenated = np.concatenate(item_array[-frame_stack:], axis=-1)

        buffer.store_transition(item_concatenated, 0, float(i) / 2, done_array)

    return buffer
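The fixtures above write 30 transitions into a capacity-20 circular buffer, which is why later assertions see values like 21 and 30 at low indices. A minimal sketch of that wrap-around behavior (plain Python list, not the vel backend) looks like this:

```python
# Capacity-20 circular store receiving the same 30 writes as the fixtures,
# where write i stores the value (i + 1)
capacity = 20
slots = [None] * capacity

for i in range(30):
    slots[i % capacity] = i + 1

# Slots 0..9 were overwritten on the second pass, slots 10..19 still
# hold values from the first pass
assert slots[0] == 21
assert slots[9] == 30
assert slots[10] == 11
assert slots[19] == 20
```

This is why, e.g., frame index 0 in a filled buffer corresponds to the 21st stored observation rather than the first.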


def test_simple_get_frame():
    """ Check if get_frame returns frames from a buffer partially full """
    observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 2, 1), dtype=int)
    action_space = gym.spaces.Discrete(4)
    buffer = CircularVecEnvBufferBackend(
        20, num_envs=2, observation_space=observation_space, action_space=action_space, frame_history=4
    )

    v1 = np.ones(8).reshape((2, 2, 2, 1))
    v1[1] *= 2

    v2 = v1 * 2
    v3 = v1 * 3

    buffer.store_transition(v1, 0, 0, False)
    buffer.store_transition(v2, 0, 0, False)
    buffer.store_transition(v3, 0, 0, False)

    assert np.all(buffer.get_frame(0, 0).max(0).max(0) == np.array([0, 0, 0, 1]))
    assert np.all(buffer.get_frame(1, 0).max(0).max(0) == np.array([0, 0, 1, 2]))
    assert np.all(buffer.get_frame(2, 0).max(0).max(0) == np.array([0, 1, 2, 3]))

    assert np.all(buffer.get_frame(0, 1).max(0).max(0) == np.array([0, 0, 0, 2]))
    assert np.all(buffer.get_frame(1, 1).max(0).max(0) == np.array([0, 0, 2, 4]))
    assert np.all(buffer.get_frame(2, 1).max(0).max(0) == np.array([0, 2, 4, 6]))

    with pytest.raises(VelException):
        buffer.get_frame(3, 0)

    with pytest.raises(VelException):
        buffer.get_frame(4, 0)

    with pytest.raises(VelException):
        buffer.get_frame(3, 1)

    with pytest.raises(VelException):
        buffer.get_frame(4, 1)


def test_full_buffer_get_frame():
    """ Check if get_frame returns frames for full buffer """
    buffer = get_filled_buffer(frame_history=4)

    nt.assert_array_equal(buffer.get_frame(0, 0).max(0).max(0), np.array([18, 19, 20, 21]))
    nt.assert_array_equal(buffer.get_frame(1, 0).max(0).max(0), np.array([19, 20, 21, 22]))
    nt.assert_array_equal(buffer.get_frame(9, 0).max(0).max(0), np.array([27, 28, 29, 30]))

    nt.assert_array_equal(buffer.get_frame(0, 1).max(0).max(0), np.array([180, 190, 200, 210]))
    nt.assert_array_equal(buffer.get_frame(1, 1).max(0).max(0), np.array([190, 200, 210, 220]))
    nt.assert_array_equal(buffer.get_frame(9, 1).max(0).max(0), np.array([270, 280, 290, 300]))

    with pytest.raises(VelException):
        buffer.get_frame(10, 0)

    with pytest.raises(VelException):
        buffer.get_frame(11, 0)

    with pytest.raises(VelException):
        buffer.get_frame(12, 0)

    with pytest.raises(VelException):
        buffer.get_frame(10, 1)

    with pytest.raises(VelException):
        buffer.get_frame(11, 1)

    with pytest.raises(VelException):
        buffer.get_frame(12, 1)

    nt.assert_array_equal(buffer.get_frame(13, 0).max(0).max(0), np.array([11, 12, 13, 14]))
    nt.assert_array_equal(buffer.get_frame(19, 0).max(0).max(0), np.array([17, 18, 19, 20]))

    nt.assert_array_equal(buffer.get_frame(13, 1).max(0).max(0), np.array([110, 120, 130, 140]))
    nt.assert_array_equal(buffer.get_frame(19, 1).max(0).max(0), np.array([170, 180, 190, 200]))


def test_full_buffer_get_future_frame():
    """ Check if get_frame_with_future works with full buffer """
    buffer = get_filled_buffer(frame_history=4)

    nt.assert_array_equal(buffer.get_frame_with_future(0, 0)[1].max(0).max(0), np.array([19, 20, 21, 22]))
    nt.assert_array_equal(buffer.get_frame_with_future(1, 0)[1].max(0).max(0), np.array([20, 21, 22, 23]))

    nt.assert_array_equal(buffer.get_frame_with_future(0, 1)[1].max(0).max(0), np.array([190, 200, 210, 220]))
    nt.assert_array_equal(buffer.get_frame_with_future(1, 1)[1].max(0).max(0), np.array([200, 210, 220, 230]))

    with pytest.raises(VelException):
        buffer.get_frame_with_future(9, 0)

    with pytest.raises(VelException):
        buffer.get_frame_with_future(10, 0)

    with pytest.raises(VelException):
        buffer.get_frame_with_future(11, 0)

    with pytest.raises(VelException):
        buffer.get_frame_with_future(12, 0)

    with pytest.raises(VelException):
        buffer.get_frame_with_future(9, 1)

    with pytest.raises(VelException):
        buffer.get_frame_with_future(10, 1)

    with pytest.raises(VelException):
        buffer.get_frame_with_future(11, 1)

    with pytest.raises(VelException):
        buffer.get_frame_with_future(12, 1)

    nt.assert_array_equal(buffer.get_frame_with_future(13, 0)[1].max(0).max(0), np.array([12, 13, 14, 15]))
    nt.assert_array_equal(buffer.get_frame_with_future(19, 0)[1].max(0).max(0), np.array([18, 19, 20, 21]))

    nt.assert_array_equal(buffer.get_frame_with_future(13, 1)[1].max(0).max(0), np.array([120, 130, 140, 150]))
    nt.assert_array_equal(buffer.get_frame_with_future(19, 1)[1].max(0).max(0), np.array([180, 190, 200, 210]))


def test_buffer_filling_size():
    """ Check if buffer size is properly updated when we add items """
    observation_space = gym.spaces.Box(low=0, high=255, shape=(2, 2, 1), dtype=int)
    action_space = gym.spaces.Discrete(4)
    buffer = CircularVecEnvBufferBackend(20, num_envs=2, observation_space=observation_space, action_space=action_space)

    v1 = np.ones(8).reshape((2, 2, 2, 1))

    assert buffer.current_size == 0

    buffer.store_transition(v1, 0, 0, False)
    buffer.store_transition(v1, 0, 0, False)

    assert buffer.current_size == 2

    for i in range(30):
        buffer.store_transition(v1 * (i+1), 0, float(i)/2, False)

    assert buffer.current_size == buffer.buffer_capacity


def test_get_frame_with_dones():
    """ Check if get_frame works properly in case there are multiple sequences in buffer """
    buffer = get_filled_buffer_with_dones(frame_history=4)

    nt.assert_array_equal(buffer.get_frame(0, 0).max(0).max(0), np.array([0, 0, 20, 21]))
    nt.assert_array_equal(buffer.get_frame(1, 0).max(0).max(0), np.array([0, 20, 21, 22]))
    nt.assert_array_equal(buffer.get_frame(2, 0).max(0).max(0), np.array([20, 21, 22, 23]))
    nt.assert_array_equal(buffer.get_frame(3, 0).max(0).max(0), np.array([0, 0, 0, 24]))
    nt.assert_array_equal(buffer.get_frame(8, 0).max(0).max(0), np.array([26, 27, 28, 29]))
    nt.assert_array_equal(buffer.get_frame(9, 0).max(0).max(0), np.array([0, 0, 0, 30]))

    nt.assert_array_equal(buffer.get_frame(0, 1).max(0).max(0), np.array([0, 190, 200, 210]))
    nt.assert_array_equal(buffer.get_frame(1, 1).max(0).max(0), np.array([190, 200, 210, 220]))
    nt.assert_array_equal(buffer.get_frame(2, 1).max(0).max(0), np.array([0, 0, 0, 230]))
    nt.assert_array_equal(buffer.get_frame(3, 1).max(0).max(0), np.array([0, 0, 230, 240]))
    nt.assert_array_equal(buffer.get_frame(8, 1).max(0).max(0), np.array([0, 0, 0, 290]))
    nt.assert_array_equal(buffer.get_frame(9, 1).max(0).max(0), np.array([0, 0, 290, 300]))

    with pytest.raises(VelException):
        buffer.get_frame(10, 0)

    with pytest.raises(VelException):
        buffer.get_frame(10, 1)

    nt.assert_array_equal(buffer.get_frame(11, 0).max(0).max(0), np.array([0, 0, 0, 12]))
    nt.assert_array_equal(buffer.get_frame(12, 0).max(0).max(0), np.array([0, 0, 12, 13]))

    with pytest.raises(VelException):
        buffer.get_frame(11, 1)

    with pytest.raises(VelException):
        buffer.get_frame(12, 1)


def test_get_frame_future_with_dones():
    """ Check if get_frame_with_future works properly in case there are multiple sequences in buffer """
    buffer = get_filled_buffer_with_dones(frame_history=4)

    nt.assert_array_equal(buffer.get_frame_with_future(0, 0)[1].max(0).max(0), np.array([0, 20, 21, 22]))
    nt.assert_array_equal(buffer.get_frame_with_future(1, 0)[1].max(0).max(0), np.array([20, 21, 22, 23]))
    nt.assert_array_equal(buffer.get_frame_with_future(2, 0)[1].max(0).max(0), np.array([0, 0, 0, 0]))
    nt.assert_array_equal(buffer.get_frame_with_future(3, 0)[1].max(0).max(0), np.array([0, 0, 24, 25]))
    nt.assert_array_equal(buffer.get_frame_with_future(8, 0)[1].max(0).max(0), np.array([0, 0, 0, 0]))

    nt.assert_array_equal(buffer.get_frame_with_future(0, 1)[1].max(0).max(0), np.array([190, 200, 210, 220]))
    nt.assert_array_equal(buffer.get_frame_with_future(1, 1)[1].max(0).max(0), np.array([0, 0, 0, 0]))
    nt.assert_array_equal(buffer.get_frame_with_future(2, 1)[1].max(0).max(0), np.array([0, 0, 230, 240]))
    nt.assert_array_equal(buffer.get_frame_with_future(3, 1)[1].max(0).max(0), np.array([0, 230, 240, 250]))
    nt.assert_array_equal(buffer.get_frame_with_future(7, 1)[1].max(0).max(0), np.array([0, 0, 0, 0]))

    with pytest.raises(VelException):
        buffer.get_frame_with_future(9, 0)

    with pytest.raises(VelException):
        buffer.get_frame_with_future(10, 0)

    nt.assert_array_equal(buffer.get_frame_with_future(11, 0)[1].max(0).max(0), np.array([0, 0, 12, 13]))
    nt.assert_array_equal(buffer.get_frame_with_future(12, 0)[1].max(0).max(0), np.array([0, 12, 13, 14]))

    with pytest.raises(VelException):
        buffer.get_frame_with_future(9, 1)

    with pytest.raises(VelException):
        buffer.get_frame_with_future(10, 1)

    with pytest.raises(VelException):
        buffer.get_frame(11, 1)

    with pytest.raises(VelException):
        buffer.get_frame(12, 1)

    nt.assert_array_equal(buffer.get_frame_with_future(13, 1)[1].max(0).max(0), np.array([0, 0, 140, 150]))


def test_get_batch():
    """ Check if get_batch works properly for buffers """
    buffer = get_filled_buffer_with_dones(frame_history=4)

    batch = buffer.get_transitions(np.array([
        [0, 1, 2, 3, 4, 5, 6, 7],  # Frames for env=0
        [1, 2, 3, 4, 5, 6, 7, 8],  # Frames for env=1
    ]).T)

    obs = batch['observations']
    act = batch['actions']
    rew = batch['rewards']
    obs_tp1 = batch['observations_next']
    dones = batch['dones']

    nt.assert_array_equal(dones[:, 0], np.array([False, False, True, False, False, False, False, False]))
    nt.assert_array_equal(dones[:, 1], np.array([True, False, False, False, False, False, True, False]))

    nt.assert_array_equal(obs[:, 0].max(1).max(1), np.array([
        [0, 0, 20, 21],
        [0, 20, 21, 22],
        [20, 21, 22, 23],
        [0, 0, 0, 24],
        [0, 0, 24, 25],
        [0, 24, 25, 26],
        [24, 25, 26, 27],
        [25, 26, 27, 28],
    ]))

    nt.assert_array_equal(obs[:, 1].max(1).max(1), np.array([
        [190, 200, 210, 220],
        [0, 0, 0, 230],
        [0, 0, 230, 240],
        [0, 230, 240, 250],
        [230, 240, 250, 260],
        [240, 250, 260, 270],
        [250, 260, 270, 280],
        [0, 0, 0, 290],
    ]))

    nt.assert_array_equal(act[:, 0], np.array([0, 0, 0, 0, 0, 0, 0, 0]))
    nt.assert_array_equal(rew[:, 0], np.array([10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5]))

    nt.assert_array_equal(act[:, 1], np.array([0, 0, 0, 0, 0, 0, 0, 0]))
    nt.assert_array_equal(rew[:, 1], np.array([10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0]))

    nt.assert_array_equal(obs_tp1[:, 0].max(1).max(1), np.array([
        [0, 20, 21, 22],
        [20, 21, 22, 23],
        [0, 0, 0, 0],
        [0, 0, 24, 25],
        [0, 24, 25, 26],
        [24, 25, 26, 27],
        [25, 26, 27, 28],
        [26, 27, 28, 29]
    ]))

    nt.assert_array_equal(obs_tp1[:, 1].max(1).max(1), np.array([
        [0, 0, 0, 0],
        [0, 0, 230, 240],
        [0, 230, 240, 250],
        [230, 240, 250, 260],
        [240, 250, 260, 270],
        [250, 260, 270, 280],
        [0, 0, 0, 0],
        [0, 0, 290, 300],
    ]))

    with pytest.raises(VelException):
        buffer.get_transitions(np.array([
            [0, 1, 2, 3, 4, 5, 6, 7, 8],
            [1, 2, 3, 4, 5, 6, 7, 8, 9]
        ]).T)


def test_sample_and_get_batch():
    """ Check if batch sampling works properly """
    buffer = get_filled_buffer_with_dones(frame_history=4)

    for i in range(100):
        indexes = buffer.sample_batch_transitions(batch_size=5)
        batch = buffer.get_transitions(indexes)

        obs = batch['observations']
        act = batch['actions']
        rew = batch['rewards']
        obs_tp1 = batch['observations_next']
        dones = batch['dones']

        with pytest.raises(AssertionError):
            nt.assert_array_equal(indexes[:, 0], indexes[:, 1])

        assert obs.shape[0] == 5
        assert act.shape[0] == 5
        assert rew.shape[0] == 5
        assert obs_tp1.shape[0] == 5
        assert dones.shape[0] == 5


def test_storing_extra_info():
    """ Make sure additional information is stored and recovered properly """
    buffer = get_filled_buffer_extra_info(frame_history=4)

    indexes = np.array([
        [0, 1, 2, 17, 18, 19],
        [0, 1, 2, 17, 18, 19],
    ]).T

    batch = buffer.get_transitions(indexes)

    nt.assert_equal(batch['neglogp'][0, 0], 20.0/30)
    nt.assert_equal(batch['neglogp'][1, 0], 21.0/30)
    nt.assert_equal(batch['neglogp'][2, 0], 22.0/30)
    nt.assert_equal(batch['neglogp'][3, 0], 17.0/30)
    nt.assert_equal(batch['neglogp'][4, 0], 18.0/30)
    nt.assert_equal(batch['neglogp'][5, 0], 19.0/30)

    nt.assert_equal(batch['neglogp'][0, 1], 21.0/30)
    nt.assert_equal(batch['neglogp'][1, 1], 22.0/30)
    nt.assert_equal(batch['neglogp'][2, 1], 23.0/30)
    nt.assert_equal(batch['neglogp'][3, 1], 18.0/30)
    nt.assert_equal(batch['neglogp'][4, 1], 19.0/30)
    nt.assert_equal(batch['neglogp'][5, 1], 20.0/30)


def test_sample_rollout_half_filled():
    """ Test if sampling rollout is correct and returns proper results """
    buffer = get_half_filled_buffer(frame_history=4)

    indexes = []

    for i in range(1000):
        rollout_idx = buffer.sample_batch_trajectories(rollout_length=5)
        rollout = buffer.get_trajectories(indexes=rollout_idx, rollout_length=5)

        assert rollout['observations'].shape[0] == 5  # Rollout length
        assert rollout['observations'].shape[-1] == 4  # History length

        indexes.append(rollout_idx)

    assert np.min(indexes) == 4
    assert np.max(indexes) == 8

    with pytest.raises(VelException):
        buffer.sample_batch_trajectories(rollout_length=10)

    rollout_idx = buffer.sample_batch_trajectories(rollout_length=9)
    rollout = buffer.get_trajectories(indexes=rollout_idx, rollout_length=9)

    nt.assert_array_equal(rollout_idx, np.array([8, 8]))

    nt.assert_array_equal(rollout['rewards'], np.array([
        [0., 0.5, 1., 1.5, 2., 2.5, 3., 3.5, 4.],
        [0., 0.5, 1., 1.5, 2., 2.5, 3., 3.5, 4.],
    ]).T)


def test_sample_rollout_filled():
    """ Test if sampling rollout is correct and returns proper results """
    buffer = get_filled_buffer(frame_history=4)

    indexes = []

    for i in range(1000):
        rollout_idx = buffer.sample_batch_trajectories(rollout_length=5)
        rollout = buffer.get_trajectories(indexes=rollout_idx, rollout_length=5)

        assert rollout['observations'].shape[0] == 5  # Rollout length
        assert rollout['observations'].shape[-1] == 4  # History length

        indexes.append(rollout_idx)

    assert np.min(indexes) == 0
    assert np.max(indexes) == 19

    with pytest.raises(VelException):
        buffer.sample_batch_trajectories(rollout_length=17)

    max_rollout = buffer.sample_batch_trajectories(rollout_length=16)
    rollout = buffer.get_trajectories(max_rollout, rollout_length=16)

    nt.assert_array_equal(max_rollout, np.array([8, 8]))

    assert np.sum(rollout['rewards']) == pytest.approx(164.0 * 2, 1e-5)


def test_buffer_flexible_obs_action_sizes():
    """ Check that buffers handle various observation/action shapes """
    b1x1 = get_filled_buffer1x1(frame_history=1)
    b2x2 = get_filled_buffer2x2(frame_history=1)
    b3x3 = get_filled_buffer3x3(frame_history=1)

    nt.assert_array_almost_equal(b1x1.get_frame(0, 0), np.array([21, 210]))
    nt.assert_array_almost_equal(b2x2.get_frame(0, 0), np.array([[21, 21], [210, 210]]))
    nt.assert_array_almost_equal(b3x3.get_frame(0, 0), np.array([[[21, 21], [21, 21]], [[210, 210], [210, 210]]]))

    nt.assert_array_almost_equal(b1x1.get_transition(0, 0)['actions'], np.array([0, 20]))
    nt.assert_array_almost_equal(b2x2.get_transition(0, 0)['actions'], np.array([[0, 20], [40, 60]]))
    nt.assert_array_almost_equal(b3x3.get_transition(0, 0)['actions'], np.array(
        [
            [[0, 20], [40, 60]],
            [[80, 100], [120, 140]]
        ]
    ))


def test_buffer_flexible_obs_action_sizes_with_history():
    """ Check that buffers handle various observation/action shapes together with frame history """
    b1x1 = get_filled_buffer1x1_history(frame_history=2)
    b2x2 = get_filled_buffer2x2_history(frame_history=2)
    b3x3 = get_filled_buffer3x3_history(frame_history=2)

    nt.assert_array_almost_equal(b1x1.get_frame(0, 0), np.array([[20, 21], [200, 210]]))
    nt.assert_array_almost_equal(b2x2.get_frame(0, 0), np.array([[[20, 21], [20, 21]], [[200, 210], [200, 210]]]))
    nt.assert_array_almost_equal(b3x3.get_frame(0, 0), np.array(
        [[[[20, 21], [20, 21]], [[20, 21], [20, 21]]], [[[200, 210], [200, 210]], [[200, 210], [200, 210]]]]
    ))

    nt.assert_array_almost_equal(b1x1.get_transition(0, 0)['observations_next'], np.array([[21, 22], [210, 220]]))
    nt.assert_array_almost_equal(b2x2.get_transition(0, 0)['observations_next'], np.array(
        [[[21, 22], [21, 22]], [[210, 220], [210, 220]]]
    ))
    nt.assert_array_almost_equal(b3x3.get_transition(0, 0)['observations_next'], np.array(
        [[[[21, 22], [21, 22]], [[21, 22], [21, 22]]],
         [[[210, 220], [210, 220]], [[210, 220], [210, 220]]]]
    ))


def test_frame_stack_compensation_single_dim():
    """ Check frame stack compensation for single-channel frames """
    buffer = get_filled_buffer_frame_stack(frame_stack=4, frame_dim=1)

    observations_1 = buffer.get_transition(frame_idx=0, env_idx=0)['observations']
    observations_2 = buffer.get_transition(frame_idx=1, env_idx=0)['observations']
    observations_3 = buffer.get_transition(frame_idx=2, env_idx=0)['observations']

    nt.assert_array_almost_equal(
        observations_1, np.array([[[0, 0, 20, 21],
                                   [0, 0, 20, 21]],
                                  [[0, 0, 200, 210],
                                   [0, 0, 200, 210]]])
    )

    nt.assert_array_almost_equal(
        observations_2, np.array([[[0, 20, 21, 22],
                                   [0, 20, 21, 22]],
                                  [[0, 200, 210, 220],
                                   [0, 200, 210, 220]]])
    )

    nt.assert_array_almost_equal(
        observations_3, np.array([[[20, 21, 22, 23],
                                   [20, 21, 22, 23]],
                                  [[200, 210, 220, 230],
                                   [200, 210, 220, 230]]])
    )


def test_frame_stack_compensation_multi_dim():
    """ Check frame stack compensation for multi-channel frames """
    buffer = get_filled_buffer_frame_stack(frame_stack=4, frame_dim=2)

    observations_1 = buffer.get_transition(frame_idx=0, env_idx=0)['observations']
    observations_2 = buffer.get_transition(frame_idx=1, env_idx=0)['observations']
    observations_3 = buffer.get_transition(frame_idx=2, env_idx=0)['observations']

    nt.assert_array_almost_equal(
        observations_1, np.array([[[0, 0, 0, 0, 20, 20, 21, 21],
                                   [0, 0, 0, 0, 20, 20, 21, 21]],
                                  [[0, 0, 0, 0, 200, 200, 210, 210],
                                   [0, 0, 0, 0, 200, 200, 210, 210]]])
    )

    nt.assert_array_almost_equal(
        observations_2, np.array([[[0, 0, 20, 20, 21, 21, 22, 22],
                                   [0, 0, 20, 20, 21, 21, 22, 22]],
                                  [[0, 0, 200, 200, 210, 210, 220, 220],
                                   [0, 0, 200, 200, 210, 210, 220, 220]]])
    )

    nt.assert_array_almost_equal(
        observations_3, np.array([[[20, 20, 21, 21, 22, 22, 23, 23],
                                   [20, 20, 21, 21, 22, 22, 23, 23]],
                                  [[200, 200, 210, 210, 220, 220, 230, 230],
                                   [200, 200, 210, 210, 220, 220, 230, 230]]])
    )


def test_get_frame_with_future_forward_steps_exceptions():
    """
    Test function get_frame_with_future_forward_steps.
    Does it throw VelException properly if and only if it cannot provide enough future frames?
    """
    buffer = get_filled_buffer_frame_stack(frame_stack=4, frame_dim=2)

    buffer.get_frame_with_future_forward_steps(0, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(1, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(2, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(3, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(4, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(5, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(6, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(7, 0, forward_steps=2, discount_factor=0.9)

    with pytest.raises(VelException):
        # No future for the frame
        buffer.get_frame_with_future_forward_steps(8, 0, forward_steps=2, discount_factor=0.9)

    with pytest.raises(VelException):
        # No future for the frame
        buffer.get_frame_with_future_forward_steps(9, 0, forward_steps=2, discount_factor=0.9)

    with pytest.raises(VelException):
        # No history for the frame
        buffer.get_frame_with_future_forward_steps(10, 0, forward_steps=2, discount_factor=0.9)

    buffer.get_frame_with_future_forward_steps(11, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(12, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(13, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(14, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(15, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(16, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(17, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(18, 0, forward_steps=2, discount_factor=0.9)
    buffer.get_frame_with_future_forward_steps(19, 0, forward_steps=2, discount_factor=0.9)

    with pytest.raises(VelException):
        # Index beyond buffer size
        buffer.get_frame_with_future_forward_steps(20, 0, forward_steps=2, discount_factor=0.9)


def test_get_frame_with_future_forward_steps_with_dones():
    """
    Test function get_frame_with_future_forward_steps.
    Does it return an empty frame if there is a done in between?
    Does it return correct rewards if there is a done in between?
    Does it return correct rewards if there is no done in between?
    """
    buffer = get_filled_buffer_frame_stack(frame_stack=4, frame_dim=2)

    # Just a check to be sure
    nt.assert_array_equal(
        buffer.dones_buffer[:, 0],
        np.array([
            False, False, True, False, False, False, False, False, True, False,
            True, False, False, True, False, False, False, False, True, False
        ])
    )

    nt.assert_array_equal(
        buffer.reward_buffer[:, 0],
        np.array([
            10., 10.5, 11., 11.5, 12., 12.5, 13., 13.5, 14., 14.5, 5., 5.5, 6., 6.5, 7., 7.5, 8., 8.5, 9., 9.5
        ])
    )

    for i in [0, 3, 4, 5, 6, 11, 14, 15, 19]:
        result = buffer.get_frame_with_future_forward_steps(i, 0, forward_steps=2, discount_factor=0.9)
        next_frame = result[1]
        reward = result[2]
        done = result[3]

        assert next_frame.max() != 0
        assert done is False
        assert reward == buffer.reward_buffer[i, 0] + 0.9 * buffer.reward_buffer[(i+1) % 20, 0]

    for i in [1, 2, 7, 12, 13, 17, 18]:
        result = buffer.get_frame_with_future_forward_steps(i, 0, forward_steps=2, discount_factor=0.9)
        next_frame = result[1]
        done = result[3]

        assert next_frame.max() == 0
        assert done is True

    for i in [1, 7, 12, 17]:
        result = buffer.get_frame_with_future_forward_steps(i, 0, forward_steps=2, discount_factor=0.9)
        reward = result[2]

        assert reward == buffer.reward_buffer[i, 0] + 0.9 * buffer.reward_buffer[(i+1) % 20, 0]

    for i in [2, 13, 18]:
        result = buffer.get_frame_with_future_forward_steps(i, 0, forward_steps=2, discount_factor=0.9)
        reward = result[2]

        assert reward == buffer.reward_buffer[i, 0]


def test_get_frame_with_future_forward_steps_without_dones():
    """
    Test function get_frame_with_future_forward_steps.
    Does it return the correct frame if there is no done in between?
    """
    buffer = get_filled_buffer_frame_stack(frame_stack=4, frame_dim=2)

    result = buffer.get_frame_with_future_forward_steps(0, 0, forward_steps=2, discount_factor=0.9)
    frame = result[0]
    future_frame = result[1]

    nt.assert_array_equal(
        frame,
        np.array([[[0, 0, 0, 0, 20, 20, 21, 21],
                   [0, 0, 0, 0, 20, 20, 21, 21]],
                  [[0, 0, 0, 0, 200, 200, 210, 210],
                   [0, 0, 0, 0, 200, 200, 210, 210]]])
    )

    nt.assert_array_equal(
        future_frame,
        np.array([[[20, 20, 21, 21, 22, 22, 23, 23],
                   [20, 20, 21, 21, 22, 22, 23, 23]],
                  [[200, 200, 210, 210, 220, 220, 230, 230],
                   [200, 200, 210, 210, 220, 220, 230, 230]]])
    )

    result = buffer.get_frame_with_future_forward_steps(3, 0, forward_steps=4, discount_factor=0.9)
    frame = result[0]
    future_frame = result[1]

    nt.assert_array_equal(
        frame,
        np.array([[[0, 0, 0, 0, 0, 0, 24, 24],
                   [0, 0, 0, 0, 0, 0, 24, 24]],
                  [[0, 0, 0, 0, 0, 0, 240, 240],
                   [0, 0, 0, 0, 0, 0, 240, 240]]])
    )

    nt.assert_array_equal(
        future_frame,
        np.array([[[25, 25, 26, 26, 27, 27, 28, 28],
                   [25, 25, 26, 26, 27, 27, 28, 28]],
                  [[250, 250, 260, 260, 270, 270, 280, 280],
                   [250, 250, 260, 260, 270, 270, 280, 280]]])
    )

    result = buffer.get_frame_with_future_forward_steps(19, 0, forward_steps=2, discount_factor=0.9)
    frame = result[0]
    future_frame = result[1]

    nt.assert_array_equal(
        frame,
        np.array([[[0, 0, 0, 0, 0, 0, 20, 20],
                   [0, 0, 0, 0, 0, 0, 20, 20]],
                  [[0, 0, 0, 0, 0, 0, 200, 200],
                   [0, 0, 0, 0, 0, 0, 200, 200]]])
    )

    nt.assert_array_equal(
        future_frame,
        np.array([[[0, 0, 20, 20, 21, 21, 22, 22],
                   [0, 0, 20, 20, 21, 21, 22, 22]],
                  [[0, 0, 200, 200, 210, 210, 220, 220],
                   [0, 0, 200, 200, 210, 210, 220, 220]]])
    )
# File: CrashCourse/cOOP/hello.py (atabaksahraei/Python-for-Developer, MIT)


def welt():
    print("Hallo Welt")


def mars():
    print("Hallo Mars")
# File: products/migrations/0002_auto_20200315_1517.py (gwoods22/beer-store-api, MIT)

# Generated by Django 3.0.4 on 2020-03-15 15:17

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('products', '0001_initial'),
    ]

    operations = [
        migrations.AddField(
            model_name='product',
            name='price_per_100ml',
            field=models.DecimalField(blank=True, decimal_places=2, default=None, max_digits=4, null=True),
        ),
        migrations.AddField(
            model_name='product',
            name='price_per_abv',
            field=models.DecimalField(blank=True, decimal_places=2, default=None, max_digits=4, null=True),
        ),
        migrations.AlterField(
            model_name='product',
            name='attributes',
            field=models.CharField(default='N/A', max_length=255),
        ),
        migrations.AlterField(
            model_name='product',
            name='brewer',
            field=models.CharField(default='N/A', max_length=255),
        ),
        migrations.AlterField(
            model_name='product',
            name='category',
            field=models.CharField(default='N/A', max_length=255),
        ),
        migrations.AlterField(
            model_name='product',
            name='country',
            field=models.CharField(default='N/A', max_length=255),
        ),
        migrations.AlterField(
            model_name='product',
            name='style',
            field=models.CharField(default='N/A', max_length=255),
        ),
        migrations.AlterField(
            model_name='product',
            name='type',
            field=models.CharField(default='N/A', max_length=255),
        ),
    ]
# File: src/neural_nets.py (akensert/ddqn-isocratic-scouting-runs, MIT)

import tensorflow as tf
from tensorflow.keras.initializers import TruncatedNormal
RANDOM_SEED = 42
class QNetwork(tf.keras.Model):
def __init__(self,
hidden_sizes=[1024, 1024],
dropout_rates=[0.2, 0.2],
output_dims=11):
super(QNetwork, self).__init__()
self.dense_block = tf.keras.models.Sequential([
tf.keras.layers.Dense(
units=hidden_sizes[0],
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(dropout_rates[0], seed=RANDOM_SEED),
tf.keras.layers.Dense(
units=hidden_sizes[1],
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(dropout_rates[1], seed=RANDOM_SEED),
tf.keras.layers.Dense(
units=output_dims,
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED)),
])
    def call(self, inputs):
        # Log-transform non-negative inputs (clamped at 0.001 to avoid log(0));
        # negative entries are mapped to a constant of -10.
        inputs = tf.where(
            inputs >= 0, tf.math.log(tf.math.maximum(inputs, 0.001)), -10)
        return self.dense_block(inputs)
class DuelingNetwork(tf.keras.Model):
def __init__(self,
hidden_sizes=[1024, 1024],
dropout_rates=[0.2, 0.2],
output_dims=11):
super(DuelingNetwork, self).__init__()
self.feat_block = tf.keras.models.Sequential([
tf.keras.layers.Dense(
units=hidden_sizes[0],
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(dropout_rates[0], seed=RANDOM_SEED),
])
self.val_block = tf.keras.models.Sequential([
tf.keras.layers.Dense(
units=hidden_sizes[1],
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(dropout_rates[1], seed=RANDOM_SEED),
tf.keras.layers.Dense(
units=1,
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED)),
])
self.adv_block = tf.keras.models.Sequential([
tf.keras.layers.Dense(
units=hidden_sizes[1],
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(dropout_rates[1], seed=RANDOM_SEED),
tf.keras.layers.Dense(
units=output_dims,
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED)),
])
    def call(self, inputs):
        inputs = tf.where(
            inputs >= 0, tf.math.log(tf.math.maximum(inputs, 0.001)), -10)
        feat = self.feat_block(inputs)
        vals = self.val_block(feat)
        advs = self.adv_block(feat)
        # Dueling combination: subtracting the mean advantage keeps the
        # value/advantage decomposition identifiable.
        qvals = vals + (advs - tf.math.reduce_mean(advs))
        return qvals
class ActorCriticNetwork(tf.keras.Model):
def __init__(self,
hidden_units=[1024, 1024],
dropout_rates=[0.2, 0.2],
output_dims=[11, 1]):
super(ActorCriticNetwork, self).__init__()
self.actor = tf.keras.models.Sequential([
tf.keras.layers.Dense(
units=hidden_units[0],
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(dropout_rates[0], seed=RANDOM_SEED),
tf.keras.layers.Dense(
units=output_dims[0],
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED)),
tf.keras.layers.Activation('softmax'),
])
self.critic = tf.keras.models.Sequential([
tf.keras.layers.Dense(
units=hidden_units[0],
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(dropout_rates[0], seed=RANDOM_SEED),
tf.keras.layers.Dense(
units=output_dims[1],
kernel_initializer=TruncatedNormal(0.0, 0.05, seed=RANDOM_SEED))
])
def call(self, inputs):
inputs = tf.where(
inputs >= 0, tf.math.log(tf.math.maximum(inputs, 0.001)), -10)
policy_dist = self.actor(inputs)
value = self.critic(inputs)
return policy_dist, value
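The value/advantage combination at the end of `DuelingNetwork.call` can be illustrated without TensorFlow. A minimal pure-Python sketch (the helper name `dueling_q_values` is ours, not from the source):

```python
def dueling_q_values(value, advantages):
    # Combine a scalar state value with per-action advantages;
    # subtracting the mean advantage keeps the decomposition identifiable.
    mean_adv = sum(advantages) / len(advantages)
    return [value + (a - mean_adv) for a in advantages]

qvals = dueling_q_values(2.0, [1.0, -1.0, 0.0])
# mean advantage is 0.0, so qvals == [3.0, 1.0, 2.0]
```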
| 37.079365 | 81 | 0.579409 | 551 | 4,672 | 4.742287 | 0.117967 | 0.096441 | 0.134328 | 0.085725 | 0.819747 | 0.819747 | 0.819747 | 0.808649 | 0.808649 | 0.808649 | 0 | 0.043386 | 0.294521 | 4,672 | 125 | 82 | 37.376 | 0.749393 | 0 | 0 | 0.696078 | 0 | 0 | 0.007491 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.019608 | 0 | 0.137255 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a809f168ed42ee09b4353553a47ebb338b7f26d7 | 49 | py | Python | examples/example_resolve.py | juancarlospaco/thatlib | 37403983c228521b992ad592231957a1c7af01f2 | [
"MIT"
] | 31 | 2021-05-12T16:54:34.000Z | 2022-02-17T12:36:52.000Z | examples/example_resolve.py | juancarlospaco/thatlib | 37403983c228521b992ad592231957a1c7af01f2 | [
"MIT"
] | 1 | 2021-07-23T02:58:07.000Z | 2021-09-03T21:53:29.000Z | examples/example_resolve.py | juancarlospaco/thatlib | 37403983c228521b992ad592231957a1c7af01f2 | [
"MIT"
] | 1 | 2021-05-12T22:12:20.000Z | 2021-05-12T22:12:20.000Z | from thatlib import resolve
print(resolve("./"))
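`thatlib` mirrors the `pathlib` API; the same call can be sketched with the standard library, which resolves a relative path to an absolute one:

```python
from pathlib import Path

# Path.resolve() returns an absolute path with symlinks resolved,
# equivalent in spirit to thatlib's resolve("./").
p = Path("./").resolve()
print(p.is_absolute())  # → True
```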
| 16.333333 | 27 | 0.734694 | 6 | 49 | 6 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 49 | 2 | 28 | 24.5 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0.040816 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
b541aed235c3941ffbe34c1759517cd9f51a7549 | 18 | py | Python | src/__init__.py | AlexanderFengler/ssm_simulators | cf650641647b7c049e60c48dde365607c8d3c54a | [
"MIT"
] | 1 | 2021-10-31T15:08:11.000Z | 2021-10-31T15:08:11.000Z | src/__init__.py | AlexanderFengler/ssm_simulators | cf650641647b7c049e60c48dde365607c8d3c54a | [
"MIT"
] | 3 | 2021-07-30T15:57:56.000Z | 2022-02-25T02:47:09.000Z | src/__init__.py | AlexanderFengler/ssm_simulators | cf650641647b7c049e60c48dde365607c8d3c54a | [
"MIT"
] | null | null | null | from . import cssm | 18 | 18 | 0.777778 | 3 | 18 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 18 | 1 | 18 | 18 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b59e1b766deaf1e0fb27afacbceaea13cd456868 | 5,862 | py | Python | tests/s3/test_s3_collections.py | paulhutchings/beartype-boto3-example | d69298d9444d578799e2a17cb63de11474b2278a | [
"MIT"
] | 3 | 2021-11-16T06:21:11.000Z | 2021-11-22T08:59:11.000Z | tests/s3/test_s3_collections.py | paulhutchings/beartype-boto3-example | d69298d9444d578799e2a17cb63de11474b2278a | [
"MIT"
] | 9 | 2021-11-19T03:29:00.000Z | 2021-12-30T23:54:47.000Z | tests/s3/test_s3_collections.py | paulhutchings/beartype-boto3-example | d69298d9444d578799e2a17cb63de11474b2278a | [
"MIT"
] | null | null | null | import pytest
from bearboto3.s3 import (
ServiceResourceBucketsCollection,
BucketMultipartUploadsCollection,
BucketObjectVersionsCollection,
BucketObjectsCollection,
MultipartUploadPartsCollection,
)
from beartype import beartype
from beartype.roar import (
BeartypeCallHintPepParamException,
BeartypeCallHintPepReturnException,
BeartypeDecorHintPep484585Exception,
)
# ============================
# ServiceResourceBucketsCollection
# ============================
def test_buckets_arg_pass(gen_service_resource_buckets_collection):
@beartype
def func(param: ServiceResourceBucketsCollection):
pass
func(gen_service_resource_buckets_collection)
def test_buckets_arg_fail(gen_bucket_objects_collection):
with pytest.raises(BeartypeCallHintPepParamException):
@beartype
def func(param: ServiceResourceBucketsCollection):
pass
func(gen_bucket_objects_collection)
def test_buckets_return_pass(gen_service_resource_buckets_collection):
@beartype
def func() -> ServiceResourceBucketsCollection:
return gen_service_resource_buckets_collection
func()
def test_buckets_return_fail(gen_bucket_objects_collection):
with pytest.raises(
(BeartypeCallHintPepReturnException, BeartypeDecorHintPep484585Exception)
):
@beartype
def func() -> ServiceResourceBucketsCollection:
return gen_bucket_objects_collection
func()
# ============================
# BucketMultipartUploadsCollection
# ============================
def test_multipart_uploads_arg_pass(gen_bucket_multipart_uploads_collection):
@beartype
def func(param: BucketMultipartUploadsCollection):
pass
func(gen_bucket_multipart_uploads_collection)
def test_multipart_uploads_arg_fail(gen_bucket_object_versions_collection):
with pytest.raises(BeartypeCallHintPepParamException):
@beartype
def func(param: BucketMultipartUploadsCollection):
pass
func(gen_bucket_object_versions_collection)
def test_multipart_uploads_return_pass(gen_bucket_multipart_uploads_collection):
@beartype
def func() -> BucketMultipartUploadsCollection:
return gen_bucket_multipart_uploads_collection
func()
def test_multipart_uploads_return_fail(gen_bucket_object_versions_collection):
with pytest.raises(
(BeartypeCallHintPepReturnException, BeartypeDecorHintPep484585Exception)
):
@beartype
def func() -> BucketMultipartUploadsCollection:
return gen_bucket_object_versions_collection
func()
# ============================
# BucketObjectVersionsCollection
# ============================
def test_object_versions_arg_pass(gen_bucket_object_versions_collection):
@beartype
def func(param: BucketObjectVersionsCollection):
pass
func(gen_bucket_object_versions_collection)
def test_object_versions_arg_fail(gen_bucket_objects_collection):
with pytest.raises(BeartypeCallHintPepParamException):
@beartype
def func(param: BucketObjectVersionsCollection):
pass
func(gen_bucket_objects_collection)
def test_object_versions_return_pass(gen_bucket_object_versions_collection):
@beartype
def func() -> BucketObjectVersionsCollection:
return gen_bucket_object_versions_collection
func()
def test_object_versions_return_fail(gen_bucket_objects_collection):
with pytest.raises(
(BeartypeCallHintPepReturnException, BeartypeDecorHintPep484585Exception)
):
@beartype
def func() -> BucketObjectVersionsCollection:
return gen_bucket_objects_collection
func()
# ============================
# BucketObjectsCollection
# ============================
def test_objects_arg_pass(gen_bucket_objects_collection):
@beartype
def func(param: BucketObjectsCollection):
pass
func(gen_bucket_objects_collection)
def test_objects_arg_fail(gen_service_resource_buckets_collection):
with pytest.raises(BeartypeCallHintPepParamException):
@beartype
def func(param: BucketObjectsCollection):
pass
func(gen_service_resource_buckets_collection)
def test_objects_return_pass(gen_bucket_objects_collection):
@beartype
def func() -> BucketObjectsCollection:
return gen_bucket_objects_collection
func()
def test_objects_return_fail(gen_service_resource_buckets_collection):
with pytest.raises(
(BeartypeCallHintPepReturnException, BeartypeDecorHintPep484585Exception)
):
@beartype
def func() -> BucketObjectsCollection:
return gen_service_resource_buckets_collection
func()
# ============================
# MultipartUploadPartsCollection
# ============================
def test_parts_arg_pass(gen_multipart_upload_parts_collection):
@beartype
def func(param: MultipartUploadPartsCollection):
pass
func(gen_multipart_upload_parts_collection)
def test_parts_arg_fail(gen_bucket_object_versions_collection):
with pytest.raises(BeartypeCallHintPepParamException):
@beartype
def func(param: MultipartUploadPartsCollection):
pass
func(gen_bucket_object_versions_collection)
def test_parts_return_pass(gen_multipart_upload_parts_collection):
@beartype
def func() -> MultipartUploadPartsCollection:
return gen_multipart_upload_parts_collection
func()
def test_parts_return_fail(gen_bucket_object_versions_collection):
with pytest.raises(
(BeartypeCallHintPepReturnException, BeartypeDecorHintPep484585Exception)
):
@beartype
def func() -> MultipartUploadPartsCollection:
return gen_bucket_object_versions_collection
func()
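The pattern these tests exercise — raising on an annotation mismatch at call time — can be sketched in plain Python. This is a rough illustration of the idea, not beartype's actual mechanism (beartype compiles far more efficient checks and supports PEP 484/585 hints beyond plain classes):

```python
import inspect

def typecheck(func):
    # Validate arguments and return value against plain-class annotations.
    sig = inspect.signature(func)

    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = sig.parameters[name].annotation
            if ann is not inspect.Parameter.empty and not isinstance(value, ann):
                raise TypeError(f"{name} must be {ann.__name__}")
        result = func(*args, **kwargs)
        ret = sig.return_annotation
        if ret is not inspect.Signature.empty and not isinstance(result, ret):
            raise TypeError(f"return value must be {ret.__name__}")
        return result

    return wrapper

@typecheck
def double(x: int) -> int:
    return x * 2
```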
| 25.486957 | 81 | 0.724497 | 498 | 5,862 | 8.094378 | 0.076305 | 0.062516 | 0.074423 | 0.0774 | 0.839742 | 0.76904 | 0.701563 | 0.60903 | 0.511784 | 0.243116 | 0 | 0.007882 | 0.177584 | 5,862 | 229 | 82 | 25.598253 | 0.828251 | 0.07523 | 0 | 0.716418 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.298507 | false | 0.149254 | 0.029851 | 0.074627 | 0.402985 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
b5a2999fd0d0979a0647b76e28afd188fe758e39 | 230 | py | Python | CodeWars/2016/FindCapitals-7k.py | JLJTECH/TutorialTesting | f2dbbd49a86b3b086d0fc156ac3369fb74727f86 | [
"MIT"
] | null | null | null | CodeWars/2016/FindCapitals-7k.py | JLJTECH/TutorialTesting | f2dbbd49a86b3b086d0fc156ac3369fb74727f86 | [
"MIT"
] | null | null | null | CodeWars/2016/FindCapitals-7k.py | JLJTECH/TutorialTesting | f2dbbd49a86b3b086d0fc156ac3369fb74727f86 | [
"MIT"
] | null | null | null | # Return the indices of all capital letters in a string
def capitals(word):
return [l for l, c in enumerate(word) if c.isupper()]
#Alternate Solution
def capitals(word):
return [i for (i, c) in enumerate(word) if c.isupper()] | 32.857143 | 59 | 0.708696 | 39 | 230 | 4.179487 | 0.538462 | 0.134969 | 0.184049 | 0.257669 | 0.319018 | 0.319018 | 0.319018 | 0 | 0 | 0 | 0 | 0 | 0.178261 | 230 | 7 | 59 | 32.857143 | 0.862434 | 0.295652 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
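Usage of the `capitals` helper above, restated self-contained:

```python
def capitals(word):
    # Collect the index of every uppercase character.
    return [i for i, c in enumerate(word) if c.isupper()]

print(capitals("CodeWars"))  # → [0, 4]
```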
a912f357e6076933c0ebd7b06dfd11f21c274f62 | 10,427 | py | Python | addons/project/tests/test_access_rights.py | SHIVJITH/Odoo_Machine_Test | 310497a9872db7844b521e6dab5f7a9f61d365a4 | [
"Apache-2.0"
] | null | null | null | addons/project/tests/test_access_rights.py | SHIVJITH/Odoo_Machine_Test | 310497a9872db7844b521e6dab5f7a9f61d365a4 | [
"Apache-2.0"
] | null | null | null | addons/project/tests/test_access_rights.py | SHIVJITH/Odoo_Machine_Test | 310497a9872db7844b521e6dab5f7a9f61d365a4 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
from odoo.addons.mail.tests.common import mail_new_test_user
from odoo.addons.project.tests.test_project_base import TestProjectCommon
from odoo.exceptions import AccessError, ValidationError
from odoo.tests.common import users
class TestAccessRights(TestProjectCommon):
def setUp(self):
super().setUp()
self.task = self.create_task('Make the world a better place')
self.user = mail_new_test_user(self.env, 'Internal user', groups='base.group_user')
self.portal = mail_new_test_user(self.env, 'Portal user', groups='base.group_portal')
def create_task(self, name, *, with_user=None, **kwargs):
values = dict(name=name, project_id=self.project_pigs.id, **kwargs)
return self.env['project.task'].with_user(with_user or self.env.user).create(values)
class TestCRUDVisibilityFollowers(TestAccessRights):
def setUp(self):
super().setUp()
self.project_pigs.privacy_visibility = 'followers'
@users('Internal user', 'Portal user')
def test_project_no_write(self):
with self.assertRaises(AccessError, msg="%s should not be able to write on the project" % self.env.user.name):
self.project_pigs.with_user(self.env.user).name = "Take over the world"
self.project_pigs.allowed_user_ids = self.env.user
with self.assertRaises(AccessError, msg="%s should not be able to write on the project" % self.env.user.name):
self.project_pigs.with_user(self.env.user).name = "Take over the world"
@users('Internal user', 'Portal user')
def test_project_no_unlink(self):
self.project_pigs.task_ids.unlink()
with self.assertRaises(AccessError, msg="%s should not be able to unlink the project" % self.env.user.name):
self.project_pigs.with_user(self.env.user).unlink()
self.project_pigs.allowed_user_ids = self.env.user
self.project_pigs.task_ids.unlink()
with self.assertRaises(AccessError, msg="%s should not be able to unlink the project" % self.env.user.name):
self.project_pigs.with_user(self.env.user).unlink()
@users('Internal user', 'Portal user')
def test_project_no_read(self):
self.project_pigs.invalidate_cache()
with self.assertRaises(AccessError, msg="%s should not be able to read the project" % self.env.user.name):
self.project_pigs.with_user(self.env.user).name
@users('Portal user')
def test_project_allowed_portal_no_read(self):
self.project_pigs.allowed_user_ids = self.env.user
self.project_pigs.invalidate_cache()
with self.assertRaises(AccessError, msg="%s should not be able to read the project" % self.env.user.name):
self.project_pigs.with_user(self.env.user).name
@users('Internal user')
def test_project_allowed_internal_read(self):
self.project_pigs.allowed_user_ids = self.env.user
self.project_pigs.invalidate_cache()
self.project_pigs.with_user(self.env.user).name
@users('Internal user', 'Portal user')
def test_task_no_read(self):
self.task.invalidate_cache()
with self.assertRaises(AccessError, msg="%s should not be able to read the task" % self.env.user.name):
self.task.with_user(self.env.user).name
@users('Portal user')
def test_task_allowed_portal_no_read(self):
self.project_pigs.allowed_user_ids = self.env.user
self.task.invalidate_cache()
with self.assertRaises(AccessError, msg="%s should not be able to read the task" % self.env.user.name):
self.task.with_user(self.env.user).name
@users('Internal user')
def test_task_allowed_internal_read(self):
self.project_pigs.allowed_user_ids = self.env.user
self.task.invalidate_cache()
self.task.with_user(self.env.user).name
@users('Internal user', 'Portal user')
def test_task_no_write(self):
with self.assertRaises(AccessError, msg="%s should not be able to write on the task" % self.env.user.name):
self.task.with_user(self.env.user).name = "Paint the world in black & white"
self.project_pigs.allowed_user_ids = self.env.user
with self.assertRaises(AccessError, msg="%s should not be able to write on the task" % self.env.user.name):
self.task.with_user(self.env.user).name = "Paint the world in black & white"
@users('Internal user', 'Portal user')
def test_task_no_create(self):
with self.assertRaises(AccessError, msg="%s should not be able to create a task" % self.env.user.name):
self.create_task("Archive the world, it's not needed anymore")
self.project_pigs.allowed_user_ids = self.env.user
with self.assertRaises(AccessError, msg="%s should not be able to create a task" % self.env.user.name):
self.create_task("Archive the world, it's not needed anymore")
@users('Internal user', 'Portal user')
def test_task_no_unlink(self):
with self.assertRaises(AccessError, msg="%s should not be able to unlink the task" % self.env.user.name):
self.task.with_user(self.env.user).unlink()
self.project_pigs.allowed_user_ids = self.env.user
with self.assertRaises(AccessError, msg="%s should not be able to unlink the task" % self.env.user.name):
self.task.with_user(self.env.user).unlink()
class TestCRUDVisibilityPortal(TestAccessRights):
def setUp(self):
super().setUp()
self.project_pigs.privacy_visibility = 'portal'
@users('Portal user')
def test_task_portal_no_read(self):
self.task.invalidate_cache()
with self.assertRaises(AccessError, msg="%s should not be able to read the task" % self.env.user.name):
self.task.with_user(self.env.user).name
@users('Portal user')
def test_task_allowed_portal_read(self):
self.project_pigs.allowed_user_ids = self.env.user
self.task.invalidate_cache()
self.task.with_user(self.env.user).name
@users('Internal user')
def test_task_internal_read(self):
self.task.with_user(self.env.user).name
class TestCRUDVisibilityEmployees(TestAccessRights):
def setUp(self):
super().setUp()
self.project_pigs.privacy_visibility = 'employees'
@users('Portal user')
def test_task_portal_no_read(self):
self.task.invalidate_cache()
with self.assertRaises(AccessError, msg="%s should not be able to read the task" % self.env.user.name):
self.task.with_user(self.env.user).name
self.project_pigs.allowed_user_ids = self.env.user
self.task.invalidate_cache()
with self.assertRaises(AccessError, msg="%s should not be able to read the task" % self.env.user.name):
self.task.with_user(self.env.user).name
@users('Internal user')
def test_task_allowed_portal_read(self):
self.task.invalidate_cache()
self.task.with_user(self.env.user).name
class TestAllowedUsers(TestAccessRights):
def setUp(self):
super().setUp()
self.project_pigs.privacy_visibility = 'followers'
def test_project_permission_added(self):
self.project_pigs.allowed_user_ids = self.user
self.assertIn(self.user, self.task.allowed_user_ids)
def test_project_default_permission(self):
self.project_pigs.allowed_user_ids = self.user
task = self.create_task("Review the end of the world")
self.assertIn(self.user, task.allowed_user_ids)
def test_project_default_customer_permission(self):
self.project_pigs.privacy_visibility = 'portal'
self.project_pigs.partner_id = self.portal.partner_id
self.assertIn(self.portal, self.task.allowed_user_ids)
self.assertIn(self.portal, self.project_pigs.allowed_user_ids)
def test_project_permission_removed(self):
self.project_pigs.allowed_user_ids = self.user
self.project_pigs.allowed_user_ids -= self.user
self.assertNotIn(self.user, self.task.allowed_user_ids)
def test_project_specific_permission(self):
self.project_pigs.allowed_user_ids = self.user
john = mail_new_test_user(self.env, login='John')
self.task.allowed_user_ids |= john
self.project_pigs.allowed_user_ids -= self.user
self.assertIn(john, self.task.allowed_user_ids, "John should still be allowed to read the task")
    def test_project_specific_remove_multiple_tasks(self):
self.project_pigs.allowed_user_ids = self.user
john = mail_new_test_user(self.env, login='John')
task = self.create_task('task')
self.task.allowed_user_ids |= john
self.project_pigs.allowed_user_ids -= self.user
self.assertIn(john, self.task.allowed_user_ids)
self.assertNotIn(john, task.allowed_user_ids)
self.assertNotIn(self.user, task.allowed_user_ids)
self.assertNotIn(self.user, self.task.allowed_user_ids)
def test_no_portal_allowed(self):
with self.assertRaises(ValidationError, msg="It should not allow to add portal users"):
self.task.allowed_user_ids = self.portal
def test_visibility_changed(self):
self.project_pigs.privacy_visibility = 'portal'
self.task.allowed_user_ids |= self.portal
self.assertNotIn(self.user, self.task.allowed_user_ids, "Internal user should have been removed from allowed users")
self.project_pigs.privacy_visibility = 'employees'
self.assertNotIn(self.portal, self.task.allowed_user_ids, "Portal user should have been removed from allowed users")
def test_write_task(self):
self.user.groups_id |= self.env.ref('project.group_project_user')
self.assertNotIn(self.user, self.project_pigs.allowed_user_ids)
self.task.allowed_user_ids = self.user
self.project_pigs.invalidate_cache()
self.task.invalidate_cache()
self.task.with_user(self.user).name = "I can edit a task!"
def test_no_write_project(self):
self.user.groups_id |= self.env.ref('project.group_project_user')
self.assertNotIn(self.user, self.project_pigs.allowed_user_ids)
with self.assertRaises(AccessError, msg="User should not be able to edit project"):
self.project_pigs.with_user(self.user).name = "I can't edit a task!"
| 45.532751 | 124 | 0.702215 | 1,482 | 10,427 | 4.744265 | 0.082321 | 0.055753 | 0.076661 | 0.070403 | 0.831461 | 0.816527 | 0.785379 | 0.759351 | 0.714408 | 0.659366 | 0 | 0.000118 | 0.189892 | 10,427 | 228 | 125 | 45.732456 | 0.832248 | 0.009015 | 0 | 0.677966 | 0 | 0 | 0.161084 | 0.005034 | 0 | 0 | 0 | 0 | 0.186441 | 1 | 0.180791 | false | 0 | 0.022599 | 0 | 0.237288 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a97d7a9e568e6bff482fe88025d5c5252ad6273c | 95 | py | Python | julee/intents/news.py | riczfe/SEPM_GROUP6 | 9c1f44958121f36b09c20be53be28d4744322c58 | [
"MIT"
] | null | null | null | julee/intents/news.py | riczfe/SEPM_GROUP6 | 9c1f44958121f36b09c20be53be28d4744322c58 | [
"MIT"
] | null | null | null | julee/intents/news.py | riczfe/SEPM_GROUP6 | 9c1f44958121f36b09c20be53be28d4744322c58 | [
"MIT"
] | null | null | null | import webbrowser
def open_news():
webbrowser.open_new_tab("https://abcnews.go.com/") | 19 | 54 | 0.705263 | 13 | 95 | 4.923077 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147368 | 95 | 5 | 54 | 19 | 0.790123 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8d5456d91b24e383d2251d3243505d616011a834 | 27 | py | Python | randomstate/prng/xorshift1024/__init__.py | bashtage/ng-numpy-randomstate | b397db9cb8688b291fc40071ab043009dfa05a85 | [
"Apache-2.0",
"BSD-3-Clause"
] | 43 | 2016-02-11T03:38:16.000Z | 2022-02-03T10:00:15.000Z | randomstate/prng/xorshift1024/__init__.py | bashtage/pcg-python | b397db9cb8688b291fc40071ab043009dfa05a85 | [
"Apache-2.0",
"BSD-3-Clause"
] | 31 | 2015-12-26T19:47:36.000Z | 2018-12-10T15:55:46.000Z | randomstate/prng/xorshift1024/__init__.py | bashtage/ng-numpy-randomstate | b397db9cb8688b291fc40071ab043009dfa05a85 | [
"Apache-2.0",
"BSD-3-Clause"
] | 11 | 2016-04-28T02:00:38.000Z | 2020-08-07T10:33:10.000Z | from .xorshift1024 import * | 27 | 27 | 0.814815 | 3 | 27 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0.111111 | 27 | 1 | 27 | 27 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8d63629bc59b322e97e3adbda6a9e5d92002e264 | 35 | py | Python | MovieKit/__init__.py | muellermax/Movie-Diary | a5ff2f70d545d95ec708813fd4656c4d3ccd7c31 | [
"Unlicense"
] | 1 | 2020-05-24T17:15:21.000Z | 2020-05-24T17:15:21.000Z | MovieKit/__init__.py | muellermax/Movie-Diary | a5ff2f70d545d95ec708813fd4656c4d3ccd7c31 | [
"Unlicense"
] | null | null | null | MovieKit/__init__.py | muellermax/Movie-Diary | a5ff2f70d545d95ec708813fd4656c4d3ccd7c31 | [
"Unlicense"
] | null | null | null | from .MovieDiary import MovieDiary
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5750ec498f1879e7273ef4163f27245e5416c4f8 | 3,977 | py | Python | tests/test_autodiff/test_autodiff.py | VIVelev/nujo | 56c3058b14c4e0b7ae86d0f22dbe4c4dc81e8e71 | [
"MIT"
] | 5 | 2020-03-02T22:14:38.000Z | 2022-03-09T11:13:13.000Z | tests/test_autodiff/test_autodiff.py | VIVelev/nujo | 56c3058b14c4e0b7ae86d0f22dbe4c4dc81e8e71 | [
"MIT"
] | 30 | 2020-03-09T10:43:54.000Z | 2020-06-09T20:05:45.000Z | tests/test_autodiff/test_autodiff.py | VIVelev/nujo | 56c3058b14c4e0b7ae86d0f22dbe4c4dc81e8e71 | [
"MIT"
] | 3 | 2020-03-20T13:54:23.000Z | 2020-10-17T01:03:17.000Z | import pytest
import torch
from numpy import allclose, random
import nujo as nj
# ====================================================================================================
def test_scalar_diff(scalar_tensors):
(X_nj, y_nj, W1_nj, W2_nj, X_torch, y_torch, W1_torch,
W2_torch) = scalar_tensors
# Test Forward
loss_nj = nj.mean((X_nj * W1_nj * W2_nj - y_nj)**2)
loss_torch = torch.mean((X_torch * W1_torch * W2_torch - y_torch)**2)
assert allclose(loss_nj.value, loss_torch.detach().numpy())
# Test Backward
loss_nj.backward()
loss_torch.backward()
assert allclose(W1_nj.grad.value, W1_torch.grad.detach().numpy())
assert allclose(W2_nj.grad.value, W2_torch.grad.detach().numpy())
# ====================================================================================================
def test_matrix_diff(matrix_tensors):
(X_nj, y_nj, W1_nj, W2_nj, X_torch, y_torch, W1_torch,
W2_torch) = matrix_tensors
# Test Forward
loss_nj = nj.mean((X_nj @ W1_nj @ W2_nj - y_nj)**2)
loss_torch = torch.mean((X_torch @ W1_torch @ W2_torch - y_torch)**2)
assert allclose(loss_nj.value, loss_torch.detach().numpy())
# Test Backward
loss_nj.backward()
loss_torch.backward()
assert allclose(W1_nj.grad.value, W1_torch.grad.detach().numpy())
assert allclose(W2_nj.grad.value, W2_torch.grad.detach().numpy())
# ====================================================================================================
def test_prod_log(matrix_tensors):
(X_nj, y_nj, W1_nj, W2_nj, X_torch, y_torch, W1_torch,
W2_torch) = matrix_tensors
# Test Forward
loss_nj = nj.prod(nj.log(X_nj @ W1_nj @ W2_nj) + y_nj)
loss_torch = torch.prod(torch.log(X_torch @ W1_torch @ W2_torch) + y_torch)
assert allclose(loss_nj.value, loss_torch.detach().numpy())
# Test Backward
loss_nj.backward()
loss_torch.backward()
assert allclose(W1_nj.grad.value, W1_torch.grad.detach().numpy())
assert allclose(W2_nj.grad.value, W2_torch.grad.detach().numpy())
# ====================================================================================================
def test_aggregate_by_dim(matrix_tensors):
(X_nj, y_nj, W1_nj, _, X_torch, y_torch, W1_torch, _) = matrix_tensors
# Test Forward
loss_nj = nj.prod(nj.mean(X_nj @ W1_nj, dim=1, keepdim=True) + y_nj)
loss_torch = torch.prod(
torch.mean(X_torch @ W1_torch, axis=1, keepdim=True) + y_torch)
assert allclose(loss_nj.value, loss_torch.detach().numpy())
# Test Backward
loss_nj.backward()
loss_torch.backward()
assert allclose(W1_nj.grad.value, W1_torch.grad.detach().numpy())
# ====================================================================================================
# Unit Test fixtures - generate the same nujo and PyTorch tensors
@pytest.fixture
def scalar_tensors():
X = random.rand()
y = random.rand()
W1 = random.rand()
W2 = random.rand()
X_nj = nj.Tensor(X)
y_nj = nj.Tensor(y)
W1_nj = nj.Tensor(W1, diff=True)
W2_nj = nj.Tensor(W2, diff=True)
X_torch = torch.tensor(X)
y_torch = torch.tensor(y)
W1_torch = torch.tensor(W1, requires_grad=True)
W2_torch = torch.tensor(W2, requires_grad=True)
return X_nj, y_nj, W1_nj, W2_nj, X_torch, y_torch, W1_torch, W2_torch
@pytest.fixture
def matrix_tensors():
X = random.rand(3, 3)
y = random.rand(3, 1)
W1 = random.rand(3, 2)
W2 = random.rand(2, 1)
X_nj = nj.Tensor(X)
y_nj = nj.Tensor(y)
W1_nj = nj.Tensor(W1, diff=True)
W2_nj = nj.Tensor(W2, diff=True)
X_torch = torch.tensor(X)
y_torch = torch.tensor(y)
W1_torch = torch.tensor(W1, requires_grad=True)
W2_torch = torch.tensor(W2, requires_grad=True)
return X_nj, y_nj, W1_nj, W2_nj, X_torch, y_torch, W1_torch, W2_torch
# ====================================================================================================
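The scalar case exercised by `test_scalar_diff` can be sanity-checked by hand: for `loss = (x*w1*w2 - y)**2` the analytic gradient w.r.t. `w1` is `2*(x*w1*w2 - y)*x*w2`. A dependency-free finite-difference check of that identity (illustrative only; it uses neither nujo nor torch, and the sample values are arbitrary):

```python
def loss(x, w1, w2, y):
    return (x * w1 * w2 - y) ** 2

def grad_w1(x, w1, w2, y):
    # Chain rule: d/dw1 (x*w1*w2 - y)^2 = 2*(x*w1*w2 - y) * x * w2
    return 2 * (x * w1 * w2 - y) * x * w2

x, w1, w2, y = 0.5, 1.2, -0.7, 0.3
eps = 1e-6
numeric = (loss(x, w1 + eps, w2, y) - loss(x, w1 - eps, w2, y)) / (2 * eps)
assert abs(numeric - grad_w1(x, w1, w2, y)) < 1e-5
```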
| 29.029197 | 102 | 0.570279 | 557 | 3,977 | 3.793537 | 0.091562 | 0.030289 | 0.028396 | 0.030289 | 0.823474 | 0.823474 | 0.810222 | 0.780407 | 0.753431 | 0.753431 | 0 | 0.025339 | 0.166457 | 3,977 | 136 | 103 | 29.242647 | 0.612066 | 0.195373 | 0 | 0.60274 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150685 | 1 | 0.082192 | false | 0 | 0.054795 | 0 | 0.164384 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f5672d3d5ed0ca947fa69be1d52f58e3f2039412 | 71 | py | Python | CodeWars/8 Kyu/Merge two sorted arrays into one.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | CodeWars/8 Kyu/Merge two sorted arrays into one.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | CodeWars/8 Kyu/Merge two sorted arrays into one.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | def merge_arrays(arr1, arr2):
return sorted(list(set(arr1 + arr2))) | 35.5 | 41 | 0.704225 | 11 | 71 | 4.454545 | 0.818182 | 0.326531 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065574 | 0.140845 | 71 | 2 | 41 | 35.5 | 0.737705 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
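Usage of `merge_arrays`, restated self-contained; note the kata variant solved here drops duplicates via `set()` before sorting:

```python
def merge_arrays(arr1, arr2):
    # set() removes duplicates across both inputs; sorted() restores order.
    return sorted(set(arr1 + arr2))

print(merge_arrays([1, 3, 5], [2, 3, 6]))  # → [1, 2, 3, 5, 6]
```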
f593d25e862fd843feb96c153977c78f30f8bc50 | 1,870 | py | Python | tests_flstudio/stubs/collections.py | rjuang/rum-library | e7c61407c31832e46ddf1335f98f47c4b82652d0 | [
"MIT"
] | 3 | 2021-04-03T09:15:46.000Z | 2022-01-10T10:53:13.000Z | tests_flstudio/stubs/collections.py | rjuang/rum-library | e7c61407c31832e46ddf1335f98f47c4b82652d0 | [
"MIT"
] | 1 | 2022-01-30T04:06:24.000Z | 2022-01-30T04:06:24.000Z | tests_flstudio/stubs/collections.py | rjuang/rum | e7c61407c31832e46ddf1335f98f47c4b82652d0 | [
"MIT"
] | null | null | null | def channelCount(*args, **kwargs): pass
def channelNumber(*args, **kwargs): pass
def closeGraphEditor(*args, **kwargs): pass
def deselectAll(*args, **kwargs): pass
def focusEditor(*args, **kwargs): pass
def getChannelColor(*args, **kwargs): pass
def getChannelIndex(*args, **kwargs): pass
def getChannelMidiInPort(*args, **kwargs): pass
def getChannelName(*args, **kwargs): pass
def getChannelPan(*args, **kwargs): pass
def getChannelPitch(*args, **kwargs): pass
def getChannelVolume(*args, **kwargs): pass
def getCurrentStepParam(*args, **kwargs): pass
def getGridBit(*args, **kwargs): pass
def getGridBitWithLoop(*args, **kwargs): pass
def getRecEventId(*args, **kwargs): pass
def getStepParam(*args, **kwargs): pass
def getTargetFxTrack(*args, **kwargs): pass
def incEventValue(*args, **kwargs): pass
def isChannelMuted(*args, **kwargs): pass
def isChannelSelected(*args, **kwargs): pass
def isChannelSolo(*args, **kwargs): pass
def isGraphEditorVisible(*args, **kwargs): pass
def isGridBitAssigned(*args, **kwargs): pass
def isHighLighted(*args, **kwargs): pass
def midiNoteOn(*args, **kwargs): pass
def muteChannel(*args, **kwargs): pass
def processRECEvent(*args, **kwargs): pass
def selectAll(*args, **kwargs): pass
def selectChannel(*args, **kwargs): pass
def selectOneChannel(*args, **kwargs): pass
def selectedChannel(*args, **kwargs): pass
def setChannelColor(*args, **kwargs): pass
def setChannelName(*args, **kwargs): pass
def setChannelPan(*args, **kwargs): pass
def setChannelPitch(*args, **kwargs): pass
def setChannelVolume(*args, **kwargs): pass
def setGridBit(*args, **kwargs): pass
def setStepParameterByIndex(*args, **kwargs): pass
def showCSForm(*args, **kwargs): pass
def showEditor(*args, **kwargs): pass
def showGraphEditor(*args, **kwargs): pass
def soloChannel(*args, **kwargs): pass
def updateGraphEditor(*args, **kwargs): pass
version = 1.0
| 40.652174 | 50 | 0.738503 | 223 | 1,870 | 6.192825 | 0.2287 | 0.31861 | 0.446054 | 0.529327 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001183 | 0.095722 | 1,870 | 45 | 51 | 41.555556 | 0.815494 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.977778 | false | 0.977778 | 0 | 0 | 0.977778 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
194ff170922d2fe2cff8e77d4db31dac8b6737c5 | 190 | py | Python | Accounts/admin.py | Shreya549/AchieveVIT | 4623f80a4e38914f2d759fc0c3591bd642486a5b | [
"MIT"
] | 3 | 2020-08-29T20:23:27.000Z | 2021-05-20T05:44:01.000Z | Accounts/admin.py | Shreya549/AchieveVIT | 4623f80a4e38914f2d759fc0c3591bd642486a5b | [
"MIT"
] | 1 | 2020-09-29T16:28:24.000Z | 2020-09-29T16:28:24.000Z | Accounts/admin.py | Shreya549/AchieveVIT | 4623f80a4e38914f2d759fc0c3591bd642486a5b | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import User, Faculty, HR, OTPStore
admin.site.register(User)
admin.site.register(Faculty)
admin.site.register(HR)
admin.site.register(OTPStore) | 27.142857 | 47 | 0.810526 | 28 | 190 | 5.5 | 0.428571 | 0.233766 | 0.441558 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 190 | 7 | 48 | 27.142857 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |