hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
534a04c3322e1ecaccb87a17247c9e86ecb95e59 | 2,039 | py | Python | sim2net/speed/constant.py | harikuts/dsr_optimization | 796e58da578f7841a060233a8981eb69d92b798b | [
"MIT"
] | 12 | 2018-06-17T05:29:35.000Z | 2022-03-20T23:55:49.000Z | sim2net/speed/constant.py | harikuts/dsr_optimization | 796e58da578f7841a060233a8981eb69d92b798b | [
"MIT"
] | 2 | 2020-05-02T16:36:34.000Z | 2021-03-12T17:40:02.000Z | sim2net/speed/constant.py | harikuts/dsr_optimization | 796e58da578f7841a060233a8981eb69d92b798b | [
"MIT"
] | 6 | 2015-09-09T00:00:22.000Z | 2020-05-29T20:18:31.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# (c) 2012 Michal Kalewski <mkalewski at cs.put.poznan.pl>
#
# This file is a part of the Simple Network Simulator (sim2net) project.
# USE, MODIFICATION, COPYING AND DISTRIBUTION OF THIS SOFTWARE IS SUBJECT TO
# THE TERMS AND CONDITIONS OF THE MIT LICENSE. YOU SHOULD HAVE RECEIVED A COPY
# OF THE MIT LICENSE ALONG WITH THIS SOFTWARE; IF NOT, YOU CAN DOWNLOAD A COPY
# FROM HTTP://WWW.OPENSOURCE.ORG/.
#
# For bug reports, feature and support requests please visit
# <https://github.com/mkalewski/sim2net/issues>.
"""
Provides an implementation of a constant node speed. In this case the speed
of a node is fixed at a given value.
"""
from math import fabs
from sim2net.speed._speed import Speed
from sim2net.utility.validation import check_argument_type
__docformat__ = 'reStructuredText'
class Constant(Speed):
"""
This class implements a constant node speed fixed at a given value.
"""
def __init__(self, speed):
"""
*Parameters*:
- **speed** (`float`): a value of the node speed.
*Example*:
.. testsetup::
from sim2net.speed.constant import Constant
.. doctest::
>>> speed = Constant(5.0)
>>> speed.current
5.0
>>> speed.get_new()
5.0
>>> speed = Constant(-5.0)
>>> speed.current
5.0
>>> speed.get_new()
5.0
"""
super(Constant, self).__init__(Constant.__name__)
check_argument_type(Constant.__name__, 'speed', float, speed,
self.logger)
self.__current_speed = fabs(float(speed))
@property
def current(self):
"""
(*Property*) The absolute value of the current speed of type `float`.
"""
return self.__current_speed
def get_new(self):
"""
Returns the absolute value of the given node speed of type `float`.
"""
return self.current
| 26.828947 | 79 | 0.60667 | 254 | 2,039 | 4.73622 | 0.440945 | 0.024938 | 0.029094 | 0.024938 | 0.159601 | 0.124688 | 0.124688 | 0.069825 | 0.069825 | 0.069825 | 0 | 0.015183 | 0.289358 | 2,039 | 75 | 80 | 27.186667 | 0.815045 | 0.591957 | 0 | 0 | 0 | 0 | 0.034483 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
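The `Constant` class in the row above stores `fabs(speed)` once and returns it from both `current` and `get_new()`. A dependency-free sketch of that same contract (the `ConstantSpeed` name and the `TypeError` check stand in for sim2net's `check_argument_type` and are illustrative only):

```python
from math import fabs


class ConstantSpeed:
    """Dependency-free sketch of sim2net's Constant: the node speed is
    fixed at the absolute value passed to the constructor."""

    def __init__(self, speed):
        # Stands in for sim2net's check_argument_type validation.
        if not isinstance(speed, float):
            raise TypeError("speed must be a float")
        self.__current_speed = fabs(speed)

    @property
    def current(self):
        # The absolute value of the (constant) current speed.
        return self.__current_speed

    def get_new(self):
        # A constant speed never changes between simulation steps.
        return self.current


speed = ConstantSpeed(-5.0)
```

As in the doctest above, a negative argument yields a positive constant speed.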
534aebd1f9c4e46d72dc93169bc74d5b8daf04ea | 2,088 | py | Python | nexula/nexula_utility/utility_extract_func.py | haryoa/nexula | cc3b5a9b8dd8294bdc47150a1971cb49c4dde225 | [
"MIT"
] | 3 | 2020-05-06T08:53:22.000Z | 2020-09-24T07:45:38.000Z | nexula/nexula_utility/utility_extract_func.py | haryoa/nexula | cc3b5a9b8dd8294bdc47150a1971cb49c4dde225 | [
"MIT"
] | null | null | null | nexula/nexula_utility/utility_extract_func.py | haryoa/nexula | cc3b5a9b8dd8294bdc47150a1971cb49c4dde225 | [
"MIT"
] | null | null | null | from nexula.nexula_utility.utility_import_var import import_class
class NexusFunctionModuleExtractor():
"""
Used for constructing the data preprocessing and feature representer pipeline
"""
def __init__(self, module_class_list, args_dict, **kwargs):
"""
Instantiate each class in the pipeline
Parameters
----------
module_class_list
args_dict
kwargs
"""
# self.list_of_cls = self._search_module_function(module_class_list)
self.list_of_cls = module_class_list
if 'logger' in kwargs:
self.logger = kwargs['logger']
self.logger.debug(args_dict) if 'logger' in self.__dict__ else None
self.args_init = [arg['init'] for arg in args_dict]
self.args_call = [arg['call'] for arg in args_dict]
self._construct_object()
# Extract call
def _construct_object(self):
"""
Instantiate objects for all pipeline steps
"""
import logging
logger = logging.getLogger('nexula')
logger.debug(self.list_of_cls)
new_list_of_cls = []
for i, cls in enumerate(self.list_of_cls): # REFACTOR
logger.debug(cls)
new_list_of_cls.append(cls(**self.args_init[i]))
self.list_of_cls = new_list_of_cls
def _search_module_function(self, module_function_list):
"""
Search the module in the library
Parameters
----------
module_function_list
Returns
-------
"""
list_of_cls = []
for module, function in module_function_list:
# TODO Raise exception if empty
list_of_cls.append(import_class(function, module))
return list_of_cls
def __call__(self, x, y, *args, **kwargs):
"""
Call the object by evoking __call__ function
Returns
-------
"""
for i, cls in enumerate(self.list_of_cls):
current_args = self.args_call[i]
x, y = cls(x, y, **kwargs, **current_args)
return x, y
| 29.408451 | 77 | 0.594828 | 249 | 2,088 | 4.654618 | 0.253012 | 0.062123 | 0.093184 | 0.067299 | 0.194133 | 0.181191 | 0.096635 | 0.096635 | 0.053494 | 0 | 0 | 0 | 0.306034 | 2,088 | 70 | 78 | 29.828571 | 0.799862 | 0.230364 | 0 | 0.068966 | 0 | 0 | 0.02289 | 0 | 0 | 0 | 0 | 0.014286 | 0 | 1 | 0.137931 | false | 0 | 0.103448 | 0 | 0.344828 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
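The nexula row above implements a callable pipeline: each class is instantiated with its `'init'` kwargs, then the objects are chained by calling them with `(x, y)` plus their `'call'` kwargs. A minimal standalone sketch of that pattern (the `Upper`/`Suffix` steps and `PipelineExtractor` name are hypothetical, not part of nexula):

```python
class Upper:
    # Hypothetical pipeline step: instantiated once, then invoked
    # with (x, y) like nexula's pipeline classes.
    def __call__(self, x, y, **kwargs):
        return [s.upper() for s in x], y


class Suffix:
    # Hypothetical step that takes an 'init' argument.
    def __init__(self, suffix):
        self.suffix = suffix

    def __call__(self, x, y, **kwargs):
        return [s + self.suffix for s in x], y


class PipelineExtractor:
    """Mimics NexusFunctionModuleExtractor: instantiate each class with
    its 'init' kwargs, then chain calls with the matching 'call' kwargs."""

    def __init__(self, module_class_list, args_dict):
        self.args_call = [arg['call'] for arg in args_dict]
        self.steps = [cls(**arg['init'])
                      for cls, arg in zip(module_class_list, args_dict)]

    def __call__(self, x, y, **kwargs):
        # Each step transforms (x, y) and hands the result to the next.
        for step, call_args in zip(self.steps, self.args_call):
            x, y = step(x, y, **kwargs, **call_args)
        return x, y


pipeline = PipelineExtractor(
    [Upper, Suffix],
    [{'init': {}, 'call': {}}, {'init': {'suffix': '!'}, 'call': {}}])
x, y = pipeline(['a', 'b'], [0, 1])
```

Here `x` becomes `['A!', 'B!']`: `Upper` runs first, then `Suffix` appends its init-time argument.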
5350847a4e985147242bdddaf7eae8ed5d884139 | 4,101 | py | Python | rest_framework_mongoengine/fields.py | Careerleaf/django-rest-framework-mongoengine | fc28dbf7af760528f6f7247e567328df46458799 | [
"MIT"
] | null | null | null | rest_framework_mongoengine/fields.py | Careerleaf/django-rest-framework-mongoengine | fc28dbf7af760528f6f7247e567328df46458799 | [
"MIT"
] | null | null | null | rest_framework_mongoengine/fields.py | Careerleaf/django-rest-framework-mongoengine | fc28dbf7af760528f6f7247e567328df46458799 | [
"MIT"
] | null | null | null | from bson.errors import InvalidId
from django.core.exceptions import ValidationError
from django.utils.encoding import smart_str
from mongoengine import dereference
from mongoengine.base.document import BaseDocument
from mongoengine.document import Document
from rest_framework import serializers
from mongoengine.fields import ObjectId
import sys
if sys.version_info[0] >= 3:
def unicode(val):
return str(val)
class MongoDocumentField(serializers.WritableField):
MAX_RECURSION_DEPTH = 5 # default value of depth
def __init__(self, *args, **kwargs):
try:
self.model_field = kwargs.pop('model_field')
self.depth = kwargs.pop('depth', self.MAX_RECURSION_DEPTH)
except KeyError:
raise ValueError("%s requires 'model_field' kwarg" % self.type_label)
super(MongoDocumentField, self).__init__(*args, **kwargs)
def transform_document(self, document, depth):
data = {}
# serialize each required field
for field in document._fields:
if hasattr(document, smart_str(field)):
# finally check for an attribute 'field' on the instance
obj = getattr(document, field)
else:
continue
val = self.transform_object(obj, depth-1)
if val is not None:
data[field] = val
return data
def transform_dict(self, obj, depth):
return dict([(key, self.transform_object(val, depth-1))
for key, val in obj.items()])
def transform_object(self, obj, depth):
"""
Convert models to native Python values, recursing into (embedded) objects.
"""
if depth == 0:
# Return primary key if exists, else return default text
return str(getattr(obj, 'pk', "Max recursion depth exceeded"))
elif isinstance(obj, BaseDocument):
# Document, EmbeddedDocument
return self.transform_document(obj, depth-1)
elif isinstance(obj, dict):
# Dictionaries
return self.transform_dict(obj, depth-1)
elif isinstance(obj, list):
# List
return [self.transform_object(value, depth-1) for value in obj]
elif obj is None:
return None
else:
return unicode(obj) if isinstance(obj, ObjectId) else obj
class ReferenceField(MongoDocumentField):
type_label = 'ReferenceField'
def from_native(self, value):
try:
dbref = self.model_field.to_python(value)
except InvalidId:
raise ValidationError(self.error_messages['invalid'])
instance = dereference.DeReference().__call__([dbref])[0]
# Check if dereference was successful
if not isinstance(instance, Document):
msg = self.error_messages['invalid']
raise ValidationError(msg)
return instance
def to_native(self, obj):
return self.transform_object(obj, self.depth)
class ListField(MongoDocumentField):
type_label = 'ListField'
def from_native(self, value):
return self.model_field.to_python(value)
def to_native(self, obj):
return self.transform_object(obj, self.depth)
class EmbeddedDocumentField(MongoDocumentField):
type_label = 'EmbeddedDocumentField'
def __init__(self, *args, **kwargs):
try:
self.document_type = kwargs.pop('document_type')
except KeyError:
raise ValueError("EmbeddedDocumentField requires 'document_type' kwarg")
super(EmbeddedDocumentField, self).__init__(*args, **kwargs)
def get_default_value(self):
return self.to_native(self.default())
def to_native(self, obj):
if obj is None:
return None
else:
return self.model_field.to_mongo(obj)
def from_native(self, value):
return self.model_field.to_python(value)
class DynamicField(MongoDocumentField):
type_label = 'DynamicField'
def to_native(self, obj):
return self.model_field.to_python(obj) | 29.717391 | 84 | 0.641795 | 464 | 4,101 | 5.517241 | 0.256466 | 0.039063 | 0.032813 | 0.03125 | 0.221484 | 0.183203 | 0.145313 | 0.089844 | 0.089844 | 0.089844 | 0 | 0.003348 | 0.271641 | 4,101 | 138 | 85 | 29.717391 | 0.853699 | 0.071934 | 0 | 0.258427 | 0 | 0 | 0.056263 | 0.011147 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157303 | false | 0 | 0.101124 | 0.089888 | 0.573034 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
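The `MongoDocumentField` code above serializes nested documents with a depth-limited recursive walk: when the budget hits zero it falls back to the primary key or a placeholder string. A standalone sketch of that walk over plain dicts and lists (no mongoengine dependency; the exact depth bookkeeping here is illustrative, not a drop-in for `transform_object`):

```python
def transform(obj, depth):
    # Depth-limited recursive serialization, mirroring transform_object above.
    if depth == 0:
        # Budget exhausted: fall back to a primary key or a placeholder,
        # like MongoDocumentField does for documents.
        if isinstance(obj, dict):
            return str(obj.get('pk', 'Max recursion depth exceeded'))
        return str(obj)
    if isinstance(obj, dict):
        return {key: transform(val, depth - 1) for key, val in obj.items()}
    if isinstance(obj, list):
        return [transform(val, depth - 1) for val in obj]
    return obj


doc = {'name': 'a', 'child': {'pk': 42, 'grandchild': {'x': 1}}}
shallow = transform(doc, depth=2)
```

With `depth=2` the grandchild has no budget left, so it collapses to the placeholder, while the child still serializes its scalar fields.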
5351dc5962b2184cb179f5f6f4ba10be7538464e | 81,840 | py | Python | tests/go_cd_configurator_test.py | agsmorodin/gomatic | e6ae871ffc2d027823f6b7a5755e0ac65c724538 | [
"MIT"
] | null | null | null | tests/go_cd_configurator_test.py | agsmorodin/gomatic | e6ae871ffc2d027823f6b7a5755e0ac65c724538 | [
"MIT"
] | null | null | null | tests/go_cd_configurator_test.py | agsmorodin/gomatic | e6ae871ffc2d027823f6b7a5755e0ac65c724538 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import unittest
from xml.dom.minidom import parseString
import xml.etree.ElementTree as ET
from decimal import Decimal
from gomatic import GoCdConfigurator, FetchArtifactDir, RakeTask, ExecTask, ScriptExecutorTask, FetchArtifactTask, \
FetchArtifactFile, Tab, GitMaterial, PipelineMaterial, Pipeline
from gomatic.fake import FakeHostRestClient, empty_config_xml, config, empty_config
from gomatic.gocd.pipelines import DEFAULT_LABEL_TEMPLATE
from gomatic.gocd.artifacts import Artifact
from gomatic.xml_operations import prettify
def find_with_matching_name(things, name):
return [thing for thing in things if thing.name == name]
def standard_pipeline_group():
return GoCdConfigurator(config('config-with-typical-pipeline')).ensure_pipeline_group('P.Group')
def typical_pipeline():
return standard_pipeline_group().find_pipeline('typical')
def more_options_pipeline():
return GoCdConfigurator(config('config-with-more-options-pipeline')).ensure_pipeline_group('P.Group').find_pipeline('more-options')
def empty_pipeline():
return GoCdConfigurator(empty_config()).ensure_pipeline_group("pg").ensure_pipeline("pl").set_git_url("gurl")
def empty_stage():
return empty_pipeline().ensure_stage("deploy-to-dev")
class TestAgents(unittest.TestCase):
def _agents_from_config(self):
return GoCdConfigurator(config('config-with-just-agents')).agents
def test_could_have_no_agents(self):
agents = GoCdConfigurator(empty_config()).agents
self.assertEquals(0, len(agents))
def test_agents_have_resources(self):
agents = self._agents_from_config()
self.assertEquals(2, len(agents))
self.assertEquals({'a-resource', 'b-resource'}, agents[0].resources)
def test_agents_have_names(self):
agents = self._agents_from_config()
self.assertEquals('go-agent-1', agents[0].hostname)
self.assertEquals('go-agent-2', agents[1].hostname)
def test_agent_could_have_no_resources(self):
agents = self._agents_from_config()
self.assertEquals(0, len(agents[1].resources))
def test_can_add_resource_to_agent_with_no_resources(self):
agent = self._agents_from_config()[1]
agent.ensure_resource('a-resource-that-it-does-not-already-have')
self.assertEquals(1, len(agent.resources))
def test_can_add_resource_to_agent(self):
agent = self._agents_from_config()[0]
self.assertEquals(2, len(agent.resources))
agent.ensure_resource('a-resource-that-it-does-not-already-have')
self.assertEquals(3, len(agent.resources))
class TestJobs(unittest.TestCase):
def test_jobs_have_resources(self):
stages = typical_pipeline().stages
job = stages[0].jobs[0]
resources = job.resources
self.assertEquals(1, len(resources))
self.assertEquals({'a-resource'}, resources)
def test_job_has_nice_tostring(self):
job = typical_pipeline().stages[0].jobs[0]
self.assertEquals("Job('compile', [ExecTask(['make', 'options', 'source code'])])", str(job))
def test_jobs_can_have_timeout(self):
job = typical_pipeline().ensure_stage("deploy").ensure_job("upload")
self.assertEquals(True, job.has_timeout)
self.assertEquals('20', job.timeout)
def test_can_set_timeout(self):
job = empty_stage().ensure_job("j")
j = job.set_timeout("42")
self.assertEquals(j, job)
self.assertEquals(True, job.has_timeout)
self.assertEquals('42', job.timeout)
def test_jobs_do_not_have_to_have_timeout(self):
stages = typical_pipeline().stages
job = stages[0].jobs[0]
self.assertEquals(False, job.has_timeout)
try:
timeout = job.timeout
self.fail("should have thrown exception")
except RuntimeError:
pass
def test_jobs_can_run_on_all_agents(self):
job = more_options_pipeline().ensure_stage("earlyStage").ensure_job("earlyWorm")
self.assertEquals(True, job.runs_on_all_agents)
def test_jobs_do_not_have_to_run_on_all_agents(self):
job = typical_pipeline().ensure_stage("build").ensure_job("compile")
self.assertEquals(False, job.runs_on_all_agents)
def test_jobs_can_be_made_to_run_on_all_agents(self):
job = typical_pipeline().ensure_stage("build").ensure_job("compile")
j = job.set_runs_on_all_agents()
self.assertEquals(j, job)
self.assertEquals(True, job.runs_on_all_agents)
def test_jobs_can_be_made_to_not_run_on_all_agents(self):
job = typical_pipeline().ensure_stage("build").ensure_job("compile")
j = job.set_runs_on_all_agents(False)
self.assertEquals(j, job)
self.assertEquals(False, job.runs_on_all_agents)
def test_can_ensure_job_has_resource(self):
stages = typical_pipeline().stages
job = stages[0].jobs[0]
j = job.ensure_resource('moo')
self.assertEquals(j, job)
self.assertEquals(2, len(job.resources))
self.assertEquals({'a-resource', 'moo'}, job.resources)
def test_jobs_have_artifacts(self):
job = more_options_pipeline().ensure_stage("earlyStage").ensure_job("earlyWorm")
artifacts = job.artifacts
self.assertEquals({
Artifact.get_build_artifact("target/universal/myapp*.zip", "artifacts"),
Artifact.get_build_artifact("scripts/*", "files"),
Artifact.get_test_artifact("from", "to")},
artifacts)
def test_job_that_has_no_artifacts_has_no_artifacts_element_to_reduce_thrash(self):
go_cd_configurator = GoCdConfigurator(empty_config())
job = go_cd_configurator.ensure_pipeline_group("g").ensure_pipeline("p").ensure_stage("s").ensure_job("j")
job.ensure_artifacts(set())
self.assertEquals(set(), job.artifacts)
xml = parseString(go_cd_configurator.config)
self.assertEquals(0, len(xml.getElementsByTagName('artifacts')))
def test_artifacts_might_have_no_dest(self):
job = more_options_pipeline().ensure_stage("s1").ensure_job("rake-job")
artifacts = job.artifacts
self.assertEquals(1, len(artifacts))
self.assertEquals({Artifact.get_build_artifact("things/*")}, artifacts)
def test_can_add_build_artifacts_to_job(self):
job = more_options_pipeline().ensure_stage("earlyStage").ensure_job("earlyWorm")
job_with_artifacts = job.ensure_artifacts({
Artifact.get_build_artifact("a1", "artifacts"),
Artifact.get_build_artifact("a2", "others")})
self.assertEquals(job, job_with_artifacts)
artifacts = job.artifacts
self.assertEquals(5, len(artifacts))
self.assertTrue({Artifact.get_build_artifact("a1", "artifacts"), Artifact.get_build_artifact("a2", "others")}.issubset(artifacts))
def test_can_add_test_artifacts_to_job(self):
job = more_options_pipeline().ensure_stage("earlyStage").ensure_job("earlyWorm")
job_with_artifacts = job.ensure_artifacts({
Artifact.get_test_artifact("a1"),
Artifact.get_test_artifact("a2")})
self.assertEquals(job, job_with_artifacts)
artifacts = job.artifacts
self.assertEquals(5, len(artifacts))
self.assertTrue({Artifact.get_test_artifact("a1"), Artifact.get_test_artifact("a2")}.issubset(artifacts))
def test_can_ensure_artifacts(self):
job = more_options_pipeline().ensure_stage("earlyStage").ensure_job("earlyWorm")
job.ensure_artifacts({
Artifact.get_test_artifact("from", "to"),
Artifact.get_build_artifact("target/universal/myapp*.zip", "somewhereElse"),
Artifact.get_test_artifact("another", "with dest"),
Artifact.get_build_artifact("target/universal/myapp*.zip", "artifacts")})
self.assertEquals({
Artifact.get_build_artifact("target/universal/myapp*.zip", "artifacts"),
Artifact.get_build_artifact("scripts/*", "files"),
Artifact.get_test_artifact("from", "to"),
Artifact.get_build_artifact("target/universal/myapp*.zip", "somewhereElse"),
Artifact.get_test_artifact("another", "with dest")
},
job.artifacts)
def test_jobs_have_tasks(self):
job = more_options_pipeline().ensure_stage("s1").jobs[2]
tasks = job.tasks
self.assertEquals(4, len(tasks))
self.assertEquals('rake', tasks[0].type)
self.assertEquals('sometarget', tasks[0].target)
self.assertEquals('passed', tasks[0].runif)
self.assertEquals('fetchartifact', tasks[1].type)
self.assertEquals('more-options', tasks[1].pipeline)
self.assertEquals('earlyStage', tasks[1].stage)
self.assertEquals('earlyWorm', tasks[1].job)
self.assertEquals(FetchArtifactDir('sourceDir'), tasks[1].src)
self.assertEquals('destDir', tasks[1].dest)
self.assertEquals('passed', tasks[1].runif)
def test_runif_defaults_to_passed(self):
pipeline = typical_pipeline()
tasks = pipeline.ensure_stage("build").ensure_job("compile").tasks
self.assertEquals("passed", tasks[0].runif)
def test_jobs_can_have_rake_tasks(self):
job = more_options_pipeline().ensure_stage("s1").jobs[0]
tasks = job.tasks
self.assertEquals(1, len(tasks))
self.assertEquals('rake', tasks[0].type)
self.assertEquals("boo", tasks[0].target)
def test_can_ensure_rake_task(self):
job = more_options_pipeline().ensure_stage("s1").jobs[0]
job.ensure_task(RakeTask("boo"))
self.assertEquals(1, len(job.tasks))
def test_can_add_rake_task(self):
job = more_options_pipeline().ensure_stage("s1").jobs[0]
job.ensure_task(RakeTask("another"))
self.assertEquals(2, len(job.tasks))
self.assertEquals("another", job.tasks[1].target)
def test_script_executor_task(self):
script = '''
echo This is script
echo 'This is a string in single quotes'
echo "This is a string in double quotes"
'''
job = more_options_pipeline().ensure_stage("script-executor").\
ensure_job('test-script-executor')
job.ensure_task(ScriptExecutorTask(script, runif='any'))
self.assertEquals(1, len(job.tasks))
self.assertEquals('script', job.tasks[0].type)
self.assertEquals(script, job.tasks[0].script)
self.assertEquals('any', job.tasks[0].runif)
job.ensure_task(ScriptExecutorTask(script, runif='failed'))
self.assertEquals(2, len(job.tasks))
self.assertEquals('script', job.tasks[1].type)
self.assertEquals(script, job.tasks[1].script)
self.assertEquals('failed', job.tasks[1].runif)
job.ensure_task(ScriptExecutorTask(script))
self.assertEquals(3, len(job.tasks))
self.assertEquals('script', job.tasks[2].type)
self.assertEquals(script, job.tasks[2].script)
self.assertEquals('passed', job.tasks[2].runif)
def test_can_add_exec_task_with_runif(self):
stages = typical_pipeline().stages
job = stages[0].jobs[0]
added_task = job.add_task(ExecTask(['ls', '-la'], 'some/dir', "failed"))
self.assertEquals(2, len(job.tasks))
task = job.tasks[1]
self.assertEquals(task, added_task)
self.assertEquals(['ls', '-la'], task.command_and_args)
self.assertEquals('some/dir', task.working_dir)
self.assertEquals('failed', task.runif)
def test_can_add_exec_task(self):
stages = typical_pipeline().stages
job = stages[0].jobs[0]
added_task = job.add_task(ExecTask(['ls', '-la'], 'some/dir'))
self.assertEquals(2, len(job.tasks))
task = job.tasks[1]
self.assertEquals(task, added_task)
self.assertEquals(['ls', '-la'], task.command_and_args)
self.assertEquals('some/dir', task.working_dir)
def test_can_ensure_exec_task(self):
stages = typical_pipeline().stages
job = stages[0].jobs[0]
t1 = job.ensure_task(ExecTask(['ls', '-la'], 'some/dir'))
t2 = job.ensure_task(ExecTask(['make', 'options', 'source code']))
job.ensure_task(ExecTask(['ls', '-la'], 'some/otherdir'))
job.ensure_task(ExecTask(['ls', '-la'], 'some/dir'))
self.assertEquals(3, len(job.tasks))
self.assertEquals(t2, job.tasks[0])
self.assertEquals(['make', 'options', 'source code'], (job.tasks[0]).command_and_args)
self.assertEquals(t1, job.tasks[1])
self.assertEquals(['ls', '-la'], (job.tasks[1]).command_and_args)
self.assertEquals('some/dir', (job.tasks[1]).working_dir)
self.assertEquals(['ls', '-la'], (job.tasks[2]).command_and_args)
self.assertEquals('some/otherdir', (job.tasks[2]).working_dir)
def test_exec_task_args_are_unescaped_as_appropriate(self):
job = more_options_pipeline().ensure_stage("earlyStage").ensure_job("earlyWorm")
task = job.tasks[1]
self.assertEquals(["bash", "-c",
'curl "http://domain.com/service/check?target=one+two+three&key=2714_beta%40domain.com"'],
task.command_and_args)
def test_exec_task_args_are_escaped_as_appropriate(self):
job = empty_stage().ensure_job("j")
task = job.add_task(ExecTask(["bash", "-c",
'curl "http://domain.com/service/check?target=one+two+three&key=2714_beta%40domain.com"']))
self.assertEquals(["bash", "-c",
'curl "http://domain.com/service/check?target=one+two+three&key=2714_beta%40domain.com"'],
task.command_and_args)
def test_can_have_no_tasks(self):
self.assertEquals(0, len(empty_stage().ensure_job("empty_job").tasks))
def test_can_add_fetch_artifact_task_to_job(self):
stages = typical_pipeline().stages
job = stages[0].jobs[0]
added_task = job.add_task(FetchArtifactTask('p', 's', 'j', FetchArtifactDir('d'), runif="any"))
self.assertEquals(2, len(job.tasks))
task = job.tasks[1]
self.assertEquals(added_task, task)
self.assertEquals('p', task.pipeline)
self.assertEquals('s', task.stage)
self.assertEquals('j', task.job)
self.assertEquals(FetchArtifactDir('d'), task.src)
self.assertEquals('any', task.runif)
def test_fetch_artifact_task_can_have_src_file_rather_than_src_dir(self):
job = more_options_pipeline().ensure_stage("s1").ensure_job("variety-of-tasks")
tasks = job.tasks
self.assertEquals(4, len(tasks))
self.assertEquals('more-options', tasks[1].pipeline)
self.assertEquals('earlyStage', tasks[1].stage)
self.assertEquals('earlyWorm', tasks[1].job)
self.assertEquals(FetchArtifactFile('someFile'), tasks[2].src)
self.assertEquals('passed', tasks[1].runif)
self.assertEquals(['true'], tasks[3].command_and_args)
def test_fetch_artifact_task_can_have_dest(self):
pipeline = more_options_pipeline()
job = pipeline.ensure_stage("s1").ensure_job("variety-of-tasks")
tasks = job.tasks
self.assertEquals(FetchArtifactTask("more-options",
"earlyStage",
"earlyWorm",
FetchArtifactDir("sourceDir"),
dest="destDir"),
tasks[1])
def test_can_ensure_fetch_artifact_tasks(self):
job = more_options_pipeline().ensure_stage("s1").ensure_job("variety-of-tasks")
job.ensure_task(FetchArtifactTask("more-options", "middleStage", "middleJob", FetchArtifactFile("someFile")))
first_added_task = job.ensure_task(FetchArtifactTask('p', 's', 'j', FetchArtifactDir('dir')))
self.assertEquals(5, len(job.tasks))
self.assertEquals(first_added_task, job.tasks[4])
self.assertEquals('p', (job.tasks[4]).pipeline)
self.assertEquals('s', (job.tasks[4]).stage)
self.assertEquals('j', (job.tasks[4]).job)
self.assertEquals(FetchArtifactDir('dir'), (job.tasks[4]).src)
self.assertEquals('passed', (job.tasks[4]).runif)
job.ensure_task(FetchArtifactTask('p', 's', 'j', FetchArtifactFile('f')))
self.assertEquals(FetchArtifactFile('f'), (job.tasks[5]).src)
job.ensure_task(FetchArtifactTask('p', 's', 'j', FetchArtifactDir('dir'), dest="somedest"))
self.assertEquals("somedest", (job.tasks[6]).dest)
job.ensure_task(FetchArtifactTask('p', 's', 'j', FetchArtifactDir('dir'), runif="failed"))
self.assertEquals('failed', (job.tasks[7]).runif)
def test_tasks_run_if_defaults_to_passed(self):
job = empty_stage().ensure_job("j")
job.add_task(ExecTask(['ls', '-la'], 'some/dir'))
job.add_task(FetchArtifactTask('p', 's', 'j', FetchArtifactDir('dir')))
job.add_task(RakeTask('x'))
self.assertEquals('passed', (job.tasks[0]).runif)
self.assertEquals('passed', (job.tasks[1]).runif)
self.assertEquals('passed', (job.tasks[2]).runif)
def test_tasks_run_if_variants(self):
job = more_options_pipeline().ensure_stage("s1").ensure_job("run-if-variants")
tasks = job.tasks
self.assertEquals('t-passed', tasks[0].command_and_args[0])
self.assertEquals('passed', tasks[0].runif)
self.assertEquals('t-none', tasks[1].command_and_args[0])
self.assertEquals('passed', tasks[1].runif)
self.assertEquals('t-failed', tasks[2].command_and_args[0])
self.assertEquals('failed', tasks[2].runif)
self.assertEquals('t-any', tasks[3].command_and_args[0])
self.assertEquals('any', tasks[3].runif)
self.assertEquals('t-both', tasks[4].command_and_args[0])
self.assertEquals('any', tasks[4].runif)
def test_cannot_set_runif_to_random_things(self):
try:
ExecTask(['x'], runif='whatever')
self.fail("should have thrown exception")
except RuntimeError as e:
self.assertTrue(str(e).count("whatever") > 0)
def test_can_set_runif_to_particular_values(self):
self.assertEquals('passed', ExecTask(['x'], runif='passed').runif)
self.assertEquals('failed', ExecTask(['x'], runif='failed').runif)
self.assertEquals('any', ExecTask(['x'], runif='any').runif)
def test_tasks_dest_defaults_to_none(self): # TODO: maybe None could be avoided
job = empty_stage().ensure_job("j")
job.add_task(FetchArtifactTask('p', 's', 'j', FetchArtifactDir('dir')))
self.assertEquals(None, (job.tasks[0]).dest)
def test_can_add_exec_task_to_empty_job(self):
job = empty_stage().ensure_job("j")
added_task = job.add_task(ExecTask(['ls', '-la'], 'some/dir', "any"))
self.assertEquals(1, len(job.tasks))
task = job.tasks[0]
self.assertEquals(task, added_task)
self.assertEquals(['ls', '-la'], task.command_and_args)
self.assertEquals('some/dir', task.working_dir)
self.assertEquals('any', task.runif)
def test_can_remove_all_tasks(self):
stages = typical_pipeline().stages
job = stages[0].jobs[0]
self.assertEquals(1, len(job.tasks))
j = job.without_any_tasks()
self.assertEquals(j, job)
self.assertEquals(0, len(job.tasks))
def test_can_have_encrypted_environment_variables(self):
pipeline = GoCdConfigurator(config('config-with-encrypted-variable')).ensure_pipeline_group("defaultGroup").find_pipeline("example")
job = pipeline.ensure_stage('defaultStage').ensure_job('defaultJob')
self.assertEquals({"MY_JOB_PASSWORD": "yq5qqPrrD9/j=="}, job.encrypted_environment_variables)
def test_can_set_encrypted_environment_variables(self):
job = empty_stage().ensure_job("j")
job.ensure_encrypted_environment_variables({'one': 'blah=='})
self.assertEquals({"one": "blah=="}, job.encrypted_environment_variables)
def test_can_add_environment_variables(self):
job = typical_pipeline() \
.ensure_stage("build") \
.ensure_job("compile")
j = job.ensure_environment_variables({"new": "one"})
self.assertEquals(j, job)
self.assertEquals({"CF_COLOR": "false", "new": "one"}, job.environment_variables)
def test_environment_variables_get_added_in_sorted_order_to_reduce_config_thrash(self):
go_cd_configurator = GoCdConfigurator(empty_config())
job = go_cd_configurator\
.ensure_pipeline_group('P.Group')\
.ensure_pipeline('P.Name') \
.ensure_stage("build") \
.ensure_job("compile")
job.ensure_environment_variables({"ant": "a", "badger": "a", "zebra": "a"})
xml = parseString(go_cd_configurator.config)
names = [e.getAttribute('name') for e in xml.getElementsByTagName('variable')]
self.assertEquals([u'ant', u'badger', u'zebra'], names)
def test_can_remove_all_environment_variables(self):
job = typical_pipeline() \
.ensure_stage("build") \
.ensure_job("compile")
j = job.without_any_environment_variables()
self.assertEquals(j, job)
self.assertEquals({}, job.environment_variables)
    def test_job_can_have_tabs(self):
job = typical_pipeline() \
.ensure_stage("build") \
.ensure_job("compile")
self.assertEquals([Tab("Time_Taken", "artifacts/test-run-times.html")], job.tabs)
    def test_can_add_tab(self):
job = typical_pipeline() \
.ensure_stage("build") \
.ensure_job("compile")
j = job.ensure_tab(Tab("n", "p"))
self.assertEquals(j, job)
self.assertEquals([Tab("Time_Taken", "artifacts/test-run-times.html"), Tab("n", "p")], job.tabs)
def test_can_ensure_tab(self):
job = typical_pipeline() \
.ensure_stage("build") \
.ensure_job("compile")
job.ensure_tab(Tab("Time_Taken", "artifacts/test-run-times.html"))
self.assertEquals([Tab("Time_Taken", "artifacts/test-run-times.html")], job.tabs)
class TestStages(unittest.TestCase):
def test_pipelines_have_stages(self):
self.assertEquals(2, len(typical_pipeline().stages))
def test_stages_have_names(self):
stages = typical_pipeline().stages
self.assertEquals('build', stages[0].name)
self.assertEquals('deploy', stages[1].name)
def test_stages_can_have_manual_approval(self):
self.assertEquals(False, typical_pipeline().stages[0].has_manual_approval)
self.assertEquals(True, typical_pipeline().stages[1].has_manual_approval)
def test_can_set_manual_approval(self):
stage = typical_pipeline().stages[0]
s = stage.set_has_manual_approval()
self.assertEquals(s, stage)
self.assertEquals(True, stage.has_manual_approval)
def test_stages_have_fetch_materials_flag(self):
stage = typical_pipeline().ensure_stage("build")
self.assertEquals(True, stage.fetch_materials)
stage = more_options_pipeline().ensure_stage("s1")
self.assertEquals(False, stage.fetch_materials)
def test_can_set_fetch_materials_flag(self):
stage = typical_pipeline().ensure_stage("build")
s = stage.set_fetch_materials(False)
self.assertEquals(s, stage)
self.assertEquals(False, stage.fetch_materials)
stage = more_options_pipeline().ensure_stage("s1")
stage.set_fetch_materials(True)
self.assertEquals(True, stage.fetch_materials)
def test_stages_have_jobs(self):
stages = typical_pipeline().stages
jobs = stages[0].jobs
self.assertEquals(1, len(jobs))
self.assertEquals('compile', jobs[0].name)
def test_can_add_job(self):
stage = typical_pipeline().ensure_stage("deploy")
self.assertEquals(1, len(stage.jobs))
ensured_job = stage.ensure_job("new-job")
self.assertEquals(2, len(stage.jobs))
self.assertEquals(ensured_job, stage.jobs[1])
self.assertEquals("new-job", stage.jobs[1].name)
def test_can_add_job_to_empty_stage(self):
stage = empty_stage()
self.assertEquals(0, len(stage.jobs))
ensured_job = stage.ensure_job("new-job")
self.assertEquals(1, len(stage.jobs))
self.assertEquals(ensured_job, stage.jobs[0])
self.assertEquals("new-job", stage.jobs[0].name)
def test_can_ensure_job_exists(self):
stage = typical_pipeline().ensure_stage("deploy")
self.assertEquals(1, len(stage.jobs))
ensured_job = stage.ensure_job("upload")
self.assertEquals(1, len(stage.jobs))
self.assertEquals("upload", ensured_job.name)
def test_can_have_encrypted_environment_variables(self):
pipeline = GoCdConfigurator(config('config-with-encrypted-variable')).ensure_pipeline_group("defaultGroup").find_pipeline("example")
stage = pipeline.ensure_stage('defaultStage')
self.assertEquals({"MY_STAGE_PASSWORD": "yq5qqPrrD9/s=="}, stage.encrypted_environment_variables)
def test_can_set_encrypted_environment_variables(self):
stage = typical_pipeline().ensure_stage("deploy")
stage.ensure_encrypted_environment_variables({'one': 'blah=='})
self.assertEquals({"one": "blah=="}, stage.encrypted_environment_variables)
def test_can_set_environment_variables(self):
stage = typical_pipeline().ensure_stage("deploy")
s = stage.ensure_environment_variables({"new": "one"})
self.assertEquals(s, stage)
self.assertEquals({"BASE_URL": "http://myurl", "new": "one"}, stage.environment_variables)
def test_can_remove_all_environment_variables(self):
stage = typical_pipeline().ensure_stage("deploy")
s = stage.without_any_environment_variables()
self.assertEquals(s, stage)
self.assertEquals({}, stage.environment_variables)
class TestPipeline(unittest.TestCase):
def test_pipelines_have_names(self):
pipeline = typical_pipeline()
self.assertEquals('typical', pipeline.name)
def test_can_add_stage(self):
pipeline = empty_pipeline()
self.assertEquals(0, len(pipeline.stages))
new_stage = pipeline.ensure_stage("some_stage")
self.assertEquals(1, len(pipeline.stages))
self.assertEquals(new_stage, pipeline.stages[0])
self.assertEquals("some_stage", new_stage.name)
def test_can_ensure_stage(self):
pipeline = typical_pipeline()
self.assertEquals(2, len(pipeline.stages))
ensured_stage = pipeline.ensure_stage("deploy")
self.assertEquals(2, len(pipeline.stages))
self.assertEquals("deploy", ensured_stage.name)
def test_can_remove_stage(self):
pipeline = typical_pipeline()
self.assertEquals(2, len(pipeline.stages))
p = pipeline.ensure_removal_of_stage("deploy")
self.assertEquals(p, pipeline)
self.assertEquals(1, len(pipeline.stages))
self.assertEquals(0, len([s for s in pipeline.stages if s.name == "deploy"]))
def test_can_ensure_removal_of_stage(self):
pipeline = typical_pipeline()
self.assertEquals(2, len(pipeline.stages))
pipeline.ensure_removal_of_stage("stage-that-has-already-been-deleted")
self.assertEquals(2, len(pipeline.stages))
def test_can_ensure_initial_stage(self):
pipeline = typical_pipeline()
stage = pipeline.ensure_initial_stage("first")
self.assertEquals(stage, pipeline.stages[0])
self.assertEquals(3, len(pipeline.stages))
def test_can_ensure_initial_stage_if_already_exists_as_initial(self):
pipeline = typical_pipeline()
stage = pipeline.ensure_initial_stage("build")
self.assertEquals(stage, pipeline.stages[0])
self.assertEquals(2, len(pipeline.stages))
def test_can_ensure_initial_stage_if_already_exists(self):
pipeline = typical_pipeline()
stage = pipeline.ensure_initial_stage("deploy")
self.assertEquals(stage, pipeline.stages[0])
self.assertEquals("build", pipeline.stages[1].name)
self.assertEquals(2, len(pipeline.stages))
def test_can_set_stage_clean_policy(self):
pipeline = empty_pipeline()
stage1 = pipeline.ensure_stage("some_stage1").set_clean_working_dir()
stage2 = pipeline.ensure_stage("some_stage2")
self.assertEquals(True, pipeline.stages[0].clean_working_dir)
self.assertEquals(True, stage1.clean_working_dir)
self.assertEquals(False, pipeline.stages[1].clean_working_dir)
self.assertEquals(False, stage2.clean_working_dir)
def test_pipelines_can_have_git_urls(self):
pipeline = typical_pipeline()
self.assertEquals("git@bitbucket.org:springersbm/gomatic.git", pipeline.git_url)
def test_git_is_polled_by_default(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group("g").ensure_pipeline("p")
pipeline.set_git_url("some git url")
self.assertEquals(True, pipeline.git_material.polling)
def test_pipelines_can_have_git_material_with_material_name(self):
pipeline = more_options_pipeline()
self.assertEquals("git@bitbucket.org:springersbm/gomatic.git", pipeline.git_url)
self.assertEquals("some-material-name", pipeline.git_material.material_name)
def test_git_material_can_ignore_sources(self):
pipeline = GoCdConfigurator(config('config-with-source-exclusions')).ensure_pipeline_group("P.Group").find_pipeline("with-exclusions")
self.assertEquals({"excluded-folder", "another-excluded-folder"}, pipeline.git_material.ignore_patterns)
def test_can_set_pipeline_git_url(self):
pipeline = typical_pipeline()
p = pipeline.set_git_url("git@bitbucket.org:springersbm/changed.git")
self.assertEquals(p, pipeline)
self.assertEquals("git@bitbucket.org:springersbm/changed.git", pipeline.git_url)
self.assertEquals('master', pipeline.git_branch)
def test_can_set_pipeline_git_url_with_options(self):
pipeline = typical_pipeline()
p = pipeline.set_git_material(GitMaterial(
"git@bitbucket.org:springersbm/changed.git",
branch="branch",
destination_directory="foo",
material_name="material-name",
ignore_patterns={"ignoreMe", "ignoreThisToo"},
polling=False))
self.assertEquals(p, pipeline)
self.assertEquals("branch", pipeline.git_branch)
self.assertEquals("foo", pipeline.git_material.destination_directory)
self.assertEquals("material-name", pipeline.git_material.material_name)
self.assertEquals({"ignoreMe", "ignoreThisToo"}, pipeline.git_material.ignore_patterns)
self.assertFalse(pipeline.git_material.polling, "git polling")
def test_throws_exception_if_no_git_url(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group("g").ensure_pipeline("p")
self.assertEquals(False, pipeline.has_single_git_material)
try:
url = pipeline.git_url
self.fail("should have thrown exception")
except RuntimeError:
pass
def test_git_url_throws_exception_if_multiple_git_materials(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group("g").ensure_pipeline("p")
pipeline.ensure_material(GitMaterial("git@bitbucket.org:springersbm/one.git"))
pipeline.ensure_material(GitMaterial("git@bitbucket.org:springersbm/two.git"))
self.assertEquals(False, pipeline.has_single_git_material)
try:
url = pipeline.git_url
self.fail("should have thrown exception")
except RuntimeError:
pass
def test_set_git_url_throws_exception_if_multiple_git_materials(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group("g").ensure_pipeline("p")
pipeline.ensure_material(GitMaterial("git@bitbucket.org:springersbm/one.git"))
pipeline.ensure_material(GitMaterial("git@bitbucket.org:springersbm/two.git"))
try:
pipeline.set_git_url("git@bitbucket.org:springersbm/three.git")
self.fail("should have thrown exception")
except RuntimeError:
pass
def test_can_add_git_material(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group("g").ensure_pipeline("p")
p = pipeline.ensure_material(GitMaterial("git@bitbucket.org:springersbm/changed.git"))
self.assertEquals(p, pipeline)
self.assertEquals("git@bitbucket.org:springersbm/changed.git", pipeline.git_url)
def test_can_ensure_git_material(self):
pipeline = typical_pipeline()
pipeline.ensure_material(GitMaterial("git@bitbucket.org:springersbm/gomatic.git"))
self.assertEquals("git@bitbucket.org:springersbm/gomatic.git", pipeline.git_url)
self.assertEquals([GitMaterial("git@bitbucket.org:springersbm/gomatic.git")], pipeline.materials)
def test_can_have_multiple_git_materials(self):
pipeline = typical_pipeline()
pipeline.ensure_material(GitMaterial("git@bitbucket.org:springersbm/changed.git"))
self.assertEquals([GitMaterial("git@bitbucket.org:springersbm/gomatic.git"), GitMaterial("git@bitbucket.org:springersbm/changed.git")],
pipeline.materials)
def test_pipelines_can_have_pipeline_materials(self):
pipeline = more_options_pipeline()
self.assertEquals(2, len(pipeline.materials))
self.assertEquals(GitMaterial('git@bitbucket.org:springersbm/gomatic.git', branch="a-branch", material_name="some-material-name", polling=False),
pipeline.materials[0])
def test_pipelines_can_have_more_complicated_pipeline_materials(self):
pipeline = more_options_pipeline()
self.assertEquals(2, len(pipeline.materials))
self.assertEquals(True, pipeline.materials[0].is_git)
self.assertEquals(PipelineMaterial('pipeline2', 'build'), pipeline.materials[1])
def test_pipelines_can_have_no_materials(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group("g").ensure_pipeline("p")
self.assertEquals(0, len(pipeline.materials))
def test_can_add_pipeline_material(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group("g").ensure_pipeline("p")
p = pipeline.ensure_material(PipelineMaterial('deploy-qa', 'baseline-user-data'))
self.assertEquals(p, pipeline)
self.assertEquals(PipelineMaterial('deploy-qa', 'baseline-user-data'), pipeline.materials[0])
def test_can_add_more_complicated_pipeline_material(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group("g").ensure_pipeline("p")
p = pipeline.ensure_material(PipelineMaterial('p', 's', 'm'))
self.assertEquals(p, pipeline)
self.assertEquals(PipelineMaterial('p', 's', 'm'), pipeline.materials[0])
def test_can_ensure_pipeline_material(self):
pipeline = more_options_pipeline()
self.assertEquals(2, len(pipeline.materials))
pipeline.ensure_material(PipelineMaterial('pipeline2', 'build'))
self.assertEquals(2, len(pipeline.materials))
def test_can_remove_all_pipeline_materials(self):
pipeline = more_options_pipeline()
pipeline.remove_materials()
self.assertEquals(0, len(pipeline.materials))
def test_materials_are_sorted(self):
go_cd_configurator = GoCdConfigurator(empty_config())
pipeline = go_cd_configurator.ensure_pipeline_group("g").ensure_pipeline("p")
pipeline.ensure_material(PipelineMaterial('zeta', 'build'))
pipeline.ensure_material(GitMaterial('git@bitbucket.org:springersbm/zebra.git'))
pipeline.ensure_material(PipelineMaterial('alpha', 'build'))
pipeline.ensure_material(GitMaterial('git@bitbucket.org:springersbm/art.git'))
pipeline.ensure_material(PipelineMaterial('theta', 'build'))
pipeline.ensure_material(GitMaterial('git@bitbucket.org:springersbm/this.git'))
xml = parseString(go_cd_configurator.config)
materials = xml.getElementsByTagName('materials')[0].childNodes
self.assertEquals('git', materials[0].tagName)
self.assertEquals('git', materials[1].tagName)
self.assertEquals('git', materials[2].tagName)
self.assertEquals('pipeline', materials[3].tagName)
self.assertEquals('pipeline', materials[4].tagName)
self.assertEquals('pipeline', materials[5].tagName)
self.assertEquals('git@bitbucket.org:springersbm/art.git', materials[0].attributes['url'].value)
self.assertEquals('git@bitbucket.org:springersbm/this.git', materials[1].attributes['url'].value)
self.assertEquals('git@bitbucket.org:springersbm/zebra.git', materials[2].attributes['url'].value)
self.assertEquals('alpha', materials[3].attributes['pipelineName'].value)
self.assertEquals('theta', materials[4].attributes['pipelineName'].value)
self.assertEquals('zeta', materials[5].attributes['pipelineName'].value)
def test_can_set_pipeline_git_url_for_new_pipeline(self):
pipeline_group = standard_pipeline_group()
new_pipeline = pipeline_group.ensure_pipeline("some_name")
new_pipeline.set_git_url("git@bitbucket.org:springersbm/changed.git")
self.assertEquals("git@bitbucket.org:springersbm/changed.git", new_pipeline.git_url)
def test_pipelines_do_not_have_to_be_based_on_template(self):
pipeline = more_options_pipeline()
self.assertFalse(pipeline.is_based_on_template)
def test_pipelines_can_be_based_on_template(self):
pipeline = GoCdConfigurator(config('pipeline-based-on-template')).ensure_pipeline_group('defaultGroup').find_pipeline('siberian')
assert isinstance(pipeline, Pipeline)
self.assertTrue(pipeline.is_based_on_template)
template = GoCdConfigurator(config('pipeline-based-on-template')).templates[0]
self.assertEquals(template, pipeline.template)
def test_pipelines_can_be_created_based_on_template(self):
configurator = GoCdConfigurator(empty_config())
configurator.ensure_template('temple').ensure_stage('s').ensure_job('j')
pipeline = configurator.ensure_pipeline_group("g").ensure_pipeline('p').set_template_name('temple')
self.assertEquals('temple', pipeline.template.name)
def test_pipelines_have_environment_variables(self):
pipeline = typical_pipeline()
self.assertEquals({"JAVA_HOME": "/opt/java/jdk-1.8"}, pipeline.environment_variables)
def test_pipelines_have_encrypted_environment_variables(self):
pipeline = GoCdConfigurator(config('config-with-encrypted-variable')).ensure_pipeline_group("defaultGroup").find_pipeline("example")
self.assertEquals({"MY_SECURE_PASSWORD": "yq5qqPrrD9/htfwTWMYqGQ=="}, pipeline.encrypted_environment_variables)
def test_pipelines_have_unencrypted_secure_environment_variables(self):
pipeline = GoCdConfigurator(config('config-with-unencrypted-secure-variable')).ensure_pipeline_group("defaultGroup").find_pipeline("example")
self.assertEquals({"MY_SECURE_PASSWORD": "hunter2"}, pipeline.unencrypted_secure_environment_variables)
def test_can_add_environment_variables_to_pipeline(self):
pipeline = empty_pipeline()
p = pipeline.ensure_environment_variables({"new": "one", "again": "two"})
self.assertEquals(p, pipeline)
self.assertEquals({"new": "one", "again": "two"}, pipeline.environment_variables)
def test_can_add_encrypted_secure_environment_variables_to_pipeline(self):
pipeline = empty_pipeline()
pipeline.ensure_encrypted_environment_variables({"new": "one", "again": "two"})
self.assertEquals({"new": "one", "again": "two"}, pipeline.encrypted_environment_variables)
def test_can_add_unencrypted_secure_environment_variables_to_pipeline(self):
pipeline = empty_pipeline()
pipeline.ensure_unencrypted_secure_environment_variables({"new": "one", "again": "two"})
self.assertEquals({"new": "one", "again": "two"}, pipeline.unencrypted_secure_environment_variables)
def test_can_add_environment_variables_to_new_pipeline(self):
pipeline = typical_pipeline()
pipeline.ensure_environment_variables({"new": "one"})
self.assertEquals({"JAVA_HOME": "/opt/java/jdk-1.8", "new": "one"}, pipeline.environment_variables)
def test_can_modify_environment_variables_of_pipeline(self):
pipeline = typical_pipeline()
pipeline.ensure_environment_variables({"new": "one", "JAVA_HOME": "/opt/java/jdk-1.1"})
self.assertEquals({"JAVA_HOME": "/opt/java/jdk-1.1", "new": "one"}, pipeline.environment_variables)
def test_can_remove_all_environment_variables(self):
pipeline = typical_pipeline()
p = pipeline.without_any_environment_variables()
self.assertEquals(p, pipeline)
self.assertEquals({}, pipeline.environment_variables)
def test_can_remove_specific_environment_variable(self):
pipeline = empty_pipeline()
pipeline.ensure_encrypted_environment_variables({'a': 's'})
pipeline.ensure_environment_variables({'c': 'v', 'd': 'f'})
pipeline.remove_environment_variable('d')
p = pipeline.remove_environment_variable('unknown')
self.assertEquals(p, pipeline)
self.assertEquals({'a': 's'}, pipeline.encrypted_environment_variables)
self.assertEquals({'c': 'v'}, pipeline.environment_variables)
def test_environment_variables_get_added_in_sorted_order_to_reduce_config_thrash(self):
go_cd_configurator = GoCdConfigurator(empty_config())
pipeline = go_cd_configurator \
.ensure_pipeline_group('P.Group') \
.ensure_pipeline('P.Name')
pipeline.ensure_environment_variables({"badger": "a", "xray": "a"})
pipeline.ensure_environment_variables({"ant": "a2", "zebra": "a"})
xml = parseString(go_cd_configurator.config)
names = [e.getAttribute('name') for e in xml.getElementsByTagName('variable')]
self.assertEquals([u'ant', u'badger', u'xray', u'zebra'], names)
def test_encrypted_environment_variables_get_added_in_sorted_order_to_reduce_config_thrash(self):
go_cd_configurator = GoCdConfigurator(empty_config())
pipeline = go_cd_configurator \
.ensure_pipeline_group('P.Group') \
.ensure_pipeline('P.Name')
pipeline.ensure_encrypted_environment_variables({"badger": "a", "xray": "a"})
pipeline.ensure_encrypted_environment_variables({"ant": "a2", "zebra": "a"})
xml = parseString(go_cd_configurator.config)
names = [e.getAttribute('name') for e in xml.getElementsByTagName('variable')]
self.assertEquals([u'ant', u'badger', u'xray', u'zebra'], names)
def test_unencrypted_environment_variables_do_not_have_secure_attribute_in_order_to_reduce_config_thrash(self):
go_cd_configurator = GoCdConfigurator(empty_config())
pipeline = go_cd_configurator \
.ensure_pipeline_group('P.Group') \
.ensure_pipeline('P.Name')
pipeline.ensure_environment_variables({"ant": "a"})
xml = parseString(go_cd_configurator.config)
secure_attributes = [e.getAttribute('secure') for e in xml.getElementsByTagName('variable')]
# attributes that are missing are returned as empty
self.assertEquals([''], secure_attributes, "should not have any 'secure' attributes")
def test_cannot_have_environment_variable_which_is_both_secure_and_insecure(self):
go_cd_configurator = GoCdConfigurator(empty_config())
pipeline = go_cd_configurator \
.ensure_pipeline_group('P.Group') \
.ensure_pipeline('P.Name')
pipeline.ensure_unencrypted_secure_environment_variables({"ant": "a"})
pipeline.ensure_environment_variables({"ant": "b"}) # not secure
self.assertEquals({"ant": "b"}, pipeline.environment_variables)
self.assertEquals({}, pipeline.unencrypted_secure_environment_variables)
def test_can_change_environment_variable_from_secure_to_insecure(self):
go_cd_configurator = GoCdConfigurator(empty_config())
pipeline = go_cd_configurator \
.ensure_pipeline_group('P.Group') \
.ensure_pipeline('P.Name')
pipeline.ensure_unencrypted_secure_environment_variables({"ant": "a", "badger": "b"})
pipeline.ensure_environment_variables({"ant": "b"})
self.assertEquals({"ant": "b"}, pipeline.environment_variables)
self.assertEquals({"badger": "b"}, pipeline.unencrypted_secure_environment_variables)
def test_pipelines_have_parameters(self):
pipeline = more_options_pipeline()
self.assertEquals({"environment": "qa"}, pipeline.parameters)
def test_pipelines_have_no_parameters(self):
pipeline = typical_pipeline()
self.assertEquals({}, pipeline.parameters)
def test_can_add_params_to_pipeline(self):
pipeline = typical_pipeline()
p = pipeline.ensure_parameters({"new": "one", "again": "two"})
self.assertEquals(p, pipeline)
self.assertEquals({"new": "one", "again": "two"}, pipeline.parameters)
def test_can_modify_parameters_of_pipeline(self):
pipeline = more_options_pipeline()
pipeline.ensure_parameters({"new": "one", "environment": "qa55"})
self.assertEquals({"environment": "qa55", "new": "one"}, pipeline.parameters)
def test_can_remove_all_parameters(self):
pipeline = more_options_pipeline()
p = pipeline.without_any_parameters()
self.assertEquals(p, pipeline)
self.assertEquals({}, pipeline.parameters)
def test_can_have_timer(self):
pipeline = more_options_pipeline()
self.assertEquals(True, pipeline.has_timer)
self.assertEquals("0 15 22 * * ?", pipeline.timer)
self.assertEquals(False, pipeline.timer_triggers_only_on_changes)
def test_can_have_timer_with_onlyOnChanges_option(self):
pipeline = GoCdConfigurator(config('config-with-more-options-pipeline')).ensure_pipeline_group('P.Group').find_pipeline('pipeline2')
self.assertEquals(True, pipeline.has_timer)
self.assertEquals("0 0 22 ? * MON-FRI", pipeline.timer)
self.assertEquals(True, pipeline.timer_triggers_only_on_changes)
def test_need_not_have_timer(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group('Group').ensure_pipeline('Pipeline')
self.assertEquals(False, pipeline.has_timer)
try:
timer = pipeline.timer
self.fail('should have thrown an exception')
except RuntimeError:
pass
def test_can_set_timer(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group('Group').ensure_pipeline('Pipeline')
p = pipeline.set_timer("one two three")
self.assertEquals(p, pipeline)
self.assertEquals("one two three", pipeline.timer)
def test_can_set_timer_with_only_on_changes_flag_off(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group('Group').ensure_pipeline('Pipeline')
p = pipeline.set_timer("one two three", only_on_changes=False)
self.assertEquals(p, pipeline)
self.assertEquals("one two three", pipeline.timer)
self.assertEquals(False, pipeline.timer_triggers_only_on_changes)
def test_can_set_timer_with_only_on_changes_flag(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group('Group').ensure_pipeline('Pipeline')
p = pipeline.set_timer("one two three", only_on_changes=True)
self.assertEquals(p, pipeline)
self.assertEquals("one two three", pipeline.timer)
self.assertEquals(True, pipeline.timer_triggers_only_on_changes)
def test_can_remove_timer(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group('Group').ensure_pipeline('Pipeline')
pipeline.set_timer("one two three")
p = pipeline.remove_timer()
self.assertEquals(p, pipeline)
self.assertFalse(pipeline.has_timer)
def test_can_have_label_template(self):
pipeline = typical_pipeline()
self.assertEquals("something-${COUNT}", pipeline.label_template)
self.assertEquals(True, pipeline.has_label_template)
def test_might_not_have_label_template(self):
pipeline = more_options_pipeline() # TODO swap label with typical
self.assertEquals(False, pipeline.has_label_template)
try:
label_template = pipeline.label_template
self.fail('should have thrown an exception')
except RuntimeError:
pass
def test_can_set_label_template(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group('Group').ensure_pipeline('Pipeline')
p = pipeline.set_label_template("some label")
self.assertEquals(p, pipeline)
self.assertEquals("some label", pipeline.label_template)
def test_can_set_default_label_template(self):
pipeline = GoCdConfigurator(empty_config()).ensure_pipeline_group('Group').ensure_pipeline('Pipeline')
p = pipeline.set_default_label_template()
self.assertEquals(p, pipeline)
self.assertEquals(DEFAULT_LABEL_TEMPLATE, pipeline.label_template)
def test_can_set_automatic_pipeline_locking(self):
configurator = GoCdConfigurator(empty_config())
pipeline = configurator.ensure_pipeline_group("new_group").ensure_pipeline("some_name")
p = pipeline.set_automatic_pipeline_locking()
self.assertEquals(p, pipeline)
self.assertEquals(True, pipeline.has_automatic_pipeline_locking)
def test_pipelines_to_dict(self):
pipeline = typical_pipeline()
pp_dict = pipeline.to_dict("P.Group")
self.assertEquals('typical', pp_dict['name'])
self.assertEquals({'JAVA_HOME': '/opt/java/jdk-1.8'},
pp_dict['environment_variables'])
self.assertEquals({}, pp_dict['encrypted_environment_variables'])
self.assertEquals({}, pp_dict['parameters'])
self.assertEquals(2, len(pp_dict['stages']))
self.assertEquals(1, len(pp_dict['materials']))
self.assertFalse(pp_dict.has_key('template'))
self.assertTrue(pp_dict['cron_timer_spec'] is None)
self.assertFalse(pp_dict['automatic_pipeline_locking'])
class TestPipelineGroup(unittest.TestCase):
def _pipeline_group_from_config(self):
return GoCdConfigurator(config('config-with-two-pipelines')).ensure_pipeline_group('P.Group')
def test_pipeline_groups_have_names(self):
pipeline_group = standard_pipeline_group()
self.assertEquals("P.Group", pipeline_group.name)
def test_pipeline_groups_have_pipelines(self):
pipeline_group = self._pipeline_group_from_config()
self.assertEquals(2, len(pipeline_group.pipelines))
def test_can_add_pipeline(self):
configurator = GoCdConfigurator(empty_config())
pipeline_group = configurator.ensure_pipeline_group("new_group")
new_pipeline = pipeline_group.ensure_pipeline("some_name")
self.assertEquals(1, len(pipeline_group.pipelines))
self.assertEquals(new_pipeline, pipeline_group.pipelines[0])
self.assertEquals("some_name", new_pipeline.name)
self.assertEquals(False, new_pipeline.has_single_git_material)
self.assertEquals(False, new_pipeline.has_label_template)
self.assertEquals(False, new_pipeline.has_automatic_pipeline_locking)
def test_can_find_pipeline(self):
found_pipeline = self._pipeline_group_from_config().find_pipeline("pipeline2")
self.assertEquals("pipeline2", found_pipeline.name)
self.assertTrue(self._pipeline_group_from_config().has_pipeline("pipeline2"))
def test_does_not_find_missing_pipeline(self):
self.assertFalse(self._pipeline_group_from_config().has_pipeline("unknown-pipeline"))
try:
self._pipeline_group_from_config().find_pipeline("unknown-pipeline")
self.fail("should have thrown exception")
except RuntimeError as e:
            self.assertTrue(e.message.count("unknown-pipeline") > 0)
def test_can_remove_pipeline(self):
pipeline_group = self._pipeline_group_from_config()
pipeline_group.ensure_removal_of_pipeline("pipeline1")
self.assertEquals(1, len(pipeline_group.pipelines))
try:
pipeline_group.find_pipeline("pipeline1")
self.fail("should have thrown exception")
except RuntimeError:
pass
def test_ensuring_replacement_of_pipeline_leaves_it_empty_but_in_same_place(self):
pipeline_group = self._pipeline_group_from_config()
self.assertEquals("pipeline1", pipeline_group.pipelines[0].name)
pipeline = pipeline_group.find_pipeline("pipeline1")
pipeline.set_label_template("something")
self.assertEquals(True, pipeline.has_label_template)
p = pipeline_group.ensure_replacement_of_pipeline("pipeline1")
self.assertEquals(p, pipeline_group.pipelines[0])
self.assertEquals("pipeline1", p.name)
self.assertEquals(False, p.has_label_template)
def test_can_ensure_pipeline_removal(self):
pipeline_group = self._pipeline_group_from_config()
pg = pipeline_group.ensure_removal_of_pipeline("already-removed-pipeline")
self.assertEquals(pg, pipeline_group)
self.assertEquals(2, len(pipeline_group.pipelines))
try:
pipeline_group.find_pipeline("already-removed-pipeline")
self.fail("should have thrown exception")
except RuntimeError:
pass
class TestGoCdConfigurator(unittest.TestCase):
def test_can_tell_if_there_is_no_change_to_save(self):
configurator = GoCdConfigurator(config('config-with-two-pipeline-groups'))
p = configurator.ensure_pipeline_group('Second.Group').ensure_replacement_of_pipeline('smoke-tests')
p.set_git_url('git@bitbucket.org:springersbm/gomatic.git')
p.ensure_stage('build').ensure_job('compile').ensure_task(ExecTask(['make', 'source code']))
self.assertFalse(configurator.has_changes)
def test_can_tell_if_there_is_a_change_to_save(self):
configurator = GoCdConfigurator(config('config-with-two-pipeline-groups'))
p = configurator.ensure_pipeline_group('Second.Group').ensure_replacement_of_pipeline('smoke-tests')
p.set_git_url('git@bitbucket.org:springersbm/gomatic.git')
p.ensure_stage('moo').ensure_job('bar')
self.assertTrue(configurator.has_changes)
def test_keeps_schema_version(self):
        config_with_new_schema = FakeHostRestClient(empty_config_xml.replace('schemaVersion="72"', 'schemaVersion="73"'), "empty_config()")
        configurator = GoCdConfigurator(config_with_new_schema)
self.assertEquals(1, configurator.config.count('schemaVersion="73"'))
def test_can_find_out_server_settings(self):
configurator = GoCdConfigurator(config('config-with-server-settings'))
self.assertEquals("/some/dir", configurator.artifacts_dir)
self.assertEquals("http://10.20.30.40/", configurator.site_url)
self.assertEquals("my_ci_server", configurator.agent_auto_register_key)
self.assertEquals(Decimal("55.0"), configurator.purge_start)
self.assertEquals(Decimal("75.0"), configurator.purge_upto)
def test_can_find_out_server_settings_when_not_set(self):
configurator = GoCdConfigurator(config('config-with-no-server-settings'))
self.assertEquals(None, configurator.artifacts_dir)
self.assertEquals(None, configurator.site_url)
self.assertEquals(None, configurator.agent_auto_register_key)
self.assertEquals(None, configurator.purge_start)
self.assertEquals(None, configurator.purge_upto)
def test_can_set_server_settings(self):
configurator = GoCdConfigurator(config('config-with-no-server-settings'))
configurator.artifacts_dir = "/a/dir"
configurator.site_url = "http://1.2.3.4/"
configurator.agent_auto_register_key = "a_ci_server"
configurator.purge_start = Decimal("44.0")
configurator.purge_upto = Decimal("88.0")
self.assertEquals("/a/dir", configurator.artifacts_dir)
self.assertEquals("http://1.2.3.4/", configurator.site_url)
self.assertEquals("a_ci_server", configurator.agent_auto_register_key)
self.assertEquals(Decimal("44.0"), configurator.purge_start)
self.assertEquals(Decimal("88.0"), configurator.purge_upto)
def test_can_have_no_pipeline_groups(self):
self.assertEquals(0, len(GoCdConfigurator(empty_config()).pipeline_groups))
def test_gets_all_pipeline_groups(self):
self.assertEquals(2, len(GoCdConfigurator(config('config-with-two-pipeline-groups')).pipeline_groups))
def test_can_get_initial_config_md5(self):
configurator = GoCdConfigurator(empty_config())
self.assertEquals("42", configurator._initial_md5)
def test_config_is_updated_as_result_of_updating_part_of_it(self):
configurator = GoCdConfigurator(config('config-with-just-agents'))
agent = configurator.agents[0]
self.assertEquals(2, len(agent.resources))
agent.ensure_resource('a-resource-that-it-does-not-already-have')
configurator_based_on_new_config = GoCdConfigurator(FakeHostRestClient(configurator.config))
self.assertEquals(3, len(configurator_based_on_new_config.agents[0].resources))
def test_can_remove_agent(self):
configurator = GoCdConfigurator(config('config-with-just-agents'))
self.assertEquals(2, len(configurator.agents))
configurator.ensure_removal_of_agent('go-agent-1')
self.assertEquals(1, len(configurator.agents))
self.assertEquals('go-agent-2', configurator.agents[0].hostname)
def test_can_add_pipeline_group(self):
configurator = GoCdConfigurator(empty_config())
self.assertEquals(0, len(configurator.pipeline_groups))
new_pipeline_group = configurator.ensure_pipeline_group("a_new_group")
self.assertEquals(1, len(configurator.pipeline_groups))
self.assertEquals(new_pipeline_group, configurator.pipeline_groups[-1])
self.assertEquals("a_new_group", new_pipeline_group.name)
def test_can_ensure_pipeline_group_exists(self):
configurator = GoCdConfigurator(config('config-with-two-pipeline-groups'))
self.assertEquals(2, len(configurator.pipeline_groups))
pre_existing_pipeline_group = configurator.ensure_pipeline_group('Second.Group')
self.assertEquals(2, len(configurator.pipeline_groups))
self.assertEquals('Second.Group', pre_existing_pipeline_group.name)
def test_can_remove_all_pipeline_groups(self):
configurator = GoCdConfigurator(config('config-with-two-pipeline-groups'))
s = configurator.remove_all_pipeline_groups()
self.assertEquals(s, configurator)
self.assertEquals(0, len(configurator.pipeline_groups))
def test_can_remove_pipeline_group(self):
configurator = GoCdConfigurator(config('config-with-two-pipeline-groups'))
s = configurator.ensure_removal_of_pipeline_group('P.Group')
self.assertEquals(s, configurator)
self.assertEquals(1, len(configurator.pipeline_groups))
def test_can_ensure_removal_of_pipeline_group(self):
configurator = GoCdConfigurator(config('config-with-two-pipeline-groups'))
configurator.ensure_removal_of_pipeline_group('pipeline-group-that-has-already-been-removed')
self.assertEquals(2, len(configurator.pipeline_groups))
def test_can_have_templates(self):
templates = GoCdConfigurator(config('config-with-just-templates')).templates
self.assertEquals(2, len(templates))
self.assertEquals('api-component', templates[0].name)
self.assertEquals('deploy-stack', templates[1].name)
self.assertEquals('deploy-components', templates[1].stages[0].name)
def test_can_have_no_templates(self):
self.assertEquals(0, len(GoCdConfigurator(empty_config()).templates))
def test_can_add_template(self):
configurator = GoCdConfigurator(empty_config())
template = configurator.ensure_template('foo')
self.assertEquals(1, len(configurator.templates))
self.assertEquals(template, configurator.templates[0])
self.assertTrue(isinstance(configurator.templates[0], Pipeline), "so all methods used to configure a pipeline don't need to be tested separately for templates")
def test_can_ensure_template(self):
configurator = GoCdConfigurator(config('config-with-just-templates'))
template = configurator.ensure_template('deploy-stack')
self.assertEquals('deploy-components', template.stages[0].name)
def test_can_ensure_replacement_of_template(self):
configurator = GoCdConfigurator(config('config-with-just-templates'))
template = configurator.ensure_replacement_of_template('deploy-stack')
self.assertEquals(0, len(template.stages))
def test_can_remove_template(self):
configurator = GoCdConfigurator(config('config-with-just-templates'))
self.assertEquals(2, len(configurator.templates))
configurator.ensure_removal_of_template('deploy-stack')
self.assertEquals(1, len(configurator.templates))
def test_if_remove_all_templates_also_remove_templates_element(self):
configurator = GoCdConfigurator(config('config-with-just-templates'))
self.assertEquals(2, len(configurator.templates))
configurator.ensure_removal_of_template('api-component')
configurator.ensure_removal_of_template('deploy-stack')
self.assertEquals(0, len(configurator.templates))
xml = configurator.config
root = ET.fromstring(xml)
self.assertEqual(['server'], [element.tag for element in root])
def test_top_level_elements_get_reordered_to_please_go(self):
configurator = GoCdConfigurator(config('config-with-agents-and-templates-but-without-pipelines'))
configurator.ensure_pipeline_group("some_group").ensure_pipeline("some_pipeline")
xml = configurator.config
root = ET.fromstring(xml)
self.assertEquals("pipelines", root[0].tag)
self.assertEquals("templates", root[1].tag)
self.assertEquals("agents", root[2].tag)
def test_top_level_elements_with_environment_get_reordered_to_please_go(self):
configurator = GoCdConfigurator(config('config-with-pipelines-environments-and-agents'))
configurator.ensure_pipeline_group("P.Group").ensure_pipeline("some_pipeline")
xml = configurator.config
root = ET.fromstring(xml)
self.assertEqual(['server', 'pipelines', 'environments', 'agents'], [element.tag for element in root])
def test_top_level_elements_that_cannot_be_created_get_reordered_to_please_go(self):
configurator = GoCdConfigurator(config('config-with-many-of-the-top-level-elements-that-cannot-be-added'))
configurator.ensure_pipeline_group("P.Group").ensure_pipeline("some_pipeline")
xml = configurator.config
root = ET.fromstring(xml)
self.assertEqual(['server', 'repositories', 'scms', 'pipelines', 'environments', 'agents'],
[element.tag for element in root])
def test_elements_can_be_created_in_order_to_please_go(self):
configurator = GoCdConfigurator(empty_config())
pipeline = configurator.ensure_pipeline_group("some_group").ensure_pipeline("some_pipeline")
pipeline.ensure_parameters({'p': 'p'})
pipeline.set_timer("some timer")
pipeline.ensure_environment_variables({'pe': 'pe'})
pipeline.set_git_url("gurl")
stage = pipeline.ensure_stage("s")
stage.ensure_environment_variables({'s': 's'})
job = stage.ensure_job("j")
job.ensure_environment_variables({'j': 'j'})
job.ensure_task(ExecTask(['ls']))
job.ensure_tab(Tab("n", "p"))
job.ensure_resource("r")
job.ensure_artifacts({Artifact.get_build_artifact('s', 'd')})
xml = configurator.config
pipeline_root = ET.fromstring(xml).find('pipelines').find('pipeline')
self.assertEquals("params", pipeline_root[0].tag)
self.assertEquals("timer", pipeline_root[1].tag)
self.assertEquals("environmentvariables", pipeline_root[2].tag)
self.assertEquals("materials", pipeline_root[3].tag)
self.assertEquals("stage", pipeline_root[4].tag)
self.__check_stage(pipeline_root)
def test_elements_are_reordered_in_order_to_please_go(self):
configurator = GoCdConfigurator(empty_config())
pipeline = configurator.ensure_pipeline_group("some_group").ensure_pipeline("some_pipeline")
pipeline.set_git_url("gurl")
pipeline.ensure_environment_variables({'pe': 'pe'})
pipeline.set_timer("some timer")
pipeline.ensure_parameters({'p': 'p'})
self.__configure_stage(pipeline)
self.__configure_stage(configurator.ensure_template('templ'))
xml = configurator.config
pipeline_root = ET.fromstring(xml).find('pipelines').find('pipeline')
self.assertEquals("params", pipeline_root[0].tag)
self.assertEquals("timer", pipeline_root[1].tag)
self.assertEquals("environmentvariables", pipeline_root[2].tag)
self.assertEquals("materials", pipeline_root[3].tag)
self.assertEquals("stage", pipeline_root[4].tag)
self.__check_stage(pipeline_root)
template_root = ET.fromstring(xml).find('templates').find('pipeline')
self.assertEquals("stage", template_root[0].tag)
self.__check_stage(template_root)
def __check_stage(self, pipeline_root):
stage_root = pipeline_root.find('stage')
self.assertEquals("environmentvariables", stage_root[0].tag)
self.assertEquals("jobs", stage_root[1].tag)
job_root = stage_root.find('jobs').find('job')
self.assertEquals("environmentvariables", job_root[0].tag)
self.assertEquals("tasks", job_root[1].tag)
self.assertEquals("tabs", job_root[2].tag)
self.assertEquals("resources", job_root[3].tag)
self.assertEquals("artifacts", job_root[4].tag)
def __configure_stage(self, pipeline):
stage = pipeline.ensure_stage("s")
job = stage.ensure_job("j")
stage.ensure_environment_variables({'s': 's'})
job.ensure_tab(Tab("n", "p"))
job.ensure_artifacts({Artifact.get_build_artifact('s', 'd')})
job.ensure_task(ExecTask(['ls']))
job.ensure_resource("r")
job.ensure_environment_variables({'j': 'j'})
def simplified(s):
return s.strip().replace("\t", "").replace("\n", "").replace("\\", "").replace(" ", "")
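`simplified` above lets the round-trip tests compare reverse-engineered scripts while ignoring layout. A standalone sketch of the same normalisation, applied to two hypothetical snippets that differ only in whitespace and backslash line continuations:

```python
def simplified(s):
    # Strip outer whitespace, then drop tabs, newlines, backslash
    # line-continuations and spaces so only the tokens remain.
    return s.strip().replace("\t", "").replace("\n", "").replace("\\", "").replace(" ", "")

a = "pipeline = configurator \\\n\t.ensure_pipeline_group('g')"
b = "pipeline = configurator.ensure_pipeline_group('g')"
print(simplified(a) == simplified(b))  # True
```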
def sneakily_converted_to_xml(pipeline):
if pipeline.is_template:
return ET.tostring(pipeline.element)
else:
return ET.tostring(pipeline.parent.element)
class TestReverseEngineering(unittest.TestCase):
def check_round_trip_pipeline(self, configurator, before, show=False):
reverse_engineered_python = configurator.as_python(before, with_save=False)
if show:
print('r' * 88)
print(reverse_engineered_python)
pipeline = "evaluation failed"
template = "evaluation failed"
exec reverse_engineered_python
# exec reverse_engineered_python.replace("from gomatic import *", "from gomatic.go_cd_configurator import *")
xml_before = sneakily_converted_to_xml(before)
# noinspection PyTypeChecker
xml_after = sneakily_converted_to_xml(pipeline)
if show:
print('b' * 88)
print(prettify(xml_before))
print('a' * 88)
print(prettify(xml_after))
self.assertEquals(xml_before, xml_after)
if before.is_based_on_template:
# noinspection PyTypeChecker
self.assertEquals(sneakily_converted_to_xml(before.template), sneakily_converted_to_xml(template))
def test_can_round_trip_simplest_pipeline(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line")
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_standard_label(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_default_label_template()
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_non_standard_label(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_label_template("non standard")
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_automatic_pipeline_locking(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_automatic_pipeline_locking()
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_pipeline_material(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").ensure_material(PipelineMaterial("p", "s", "m"))
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_multiple_git_materials(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line")
before.ensure_material(GitMaterial("giturl1", "b", "m1"))
before.ensure_material(GitMaterial("giturl2"))
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_git_url(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_git_url("some git url")
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_git_extras(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_git_material(
GitMaterial("some git url",
branch="some branch",
material_name="some material name",
polling=False,
ignore_patterns={"excluded", "things"},
destination_directory='foo/bar'))
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_git_branch_only(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_git_material(GitMaterial("some git url", branch="some branch"))
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_git_material_only(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_git_material(GitMaterial("some git url", material_name="m name"))
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_git_polling_only(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_git_material(GitMaterial("some git url", polling=False))
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_git_ignore_patterns_only_ISSUE_4(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_git_material(GitMaterial("git url", ignore_patterns={"ex", "cluded"}))
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_git_destination_directory_only(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_git_material(GitMaterial("git url", destination_directory='foo/bar'))
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_pipeline_parameters(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").ensure_parameters({"p": "v"})
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_pipeline_environment_variables(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").ensure_environment_variables({"p": "v"})
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_pipeline_encrypted_environment_variables(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").ensure_encrypted_environment_variables({"p": "v"})
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_pipeline_unencrypted_secure_environment_variables(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").ensure_unencrypted_secure_environment_variables({"p": "v"})
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_timer(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_timer("some timer")
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_timer_only_on_changes(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_timer("some timer", only_on_changes=True)
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_stage_bits(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line")
before.ensure_stage("stage1").ensure_environment_variables({"k": "v"}).set_clean_working_dir().set_has_manual_approval().set_fetch_materials(False)
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_stages(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line")
before.ensure_stage("stage1")
before.ensure_stage("stage2")
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_job(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line")
before.ensure_stage("stage").ensure_job("job")
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_job_bits(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line")
before.ensure_stage("stage").ensure_job("job") \
.ensure_artifacts({Artifact.get_build_artifact("s", "d"), Artifact.get_test_artifact("sauce")}) \
.ensure_environment_variables({"k": "v"}) \
.ensure_resource("r") \
.ensure_tab(Tab("n", "p")) \
.set_timeout("23") \
.set_runs_on_all_agents()
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_jobs(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line")
stage = before.ensure_stage("stage")
stage.ensure_job("job1")
stage.ensure_job("job2")
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_tasks(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line")
job = before.ensure_stage("stage").ensure_job("job")
job.add_task(ExecTask(["one", "two"], working_dir="somewhere", runif="failed"))
job.add_task(ExecTask(["one", "two"], working_dir="somewhere", runif="failed"))
job.ensure_task(ExecTask(["one"], working_dir="somewhere else"))
job.ensure_task(ExecTask(["two"], runif="any"))
job.ensure_task(FetchArtifactTask('p', 's', 'j', FetchArtifactFile('f'), runif="any"))
job.ensure_task(FetchArtifactTask('p', 's', 'j', FetchArtifactDir('d')))
job.ensure_task(FetchArtifactTask('p', 's', 'j', FetchArtifactDir('d'), dest="somewhere-else"))
job.ensure_task(FetchArtifactTask('p', 's', 'j', FetchArtifactDir('d'), dest="somewhere-else", runif="any"))
job.ensure_task(RakeTask('t1', runif="any"))
job.ensure_task(RakeTask('t2'))
self.check_round_trip_pipeline(configurator, before)
def test_can_round_trip_pipeline_based_on_template(self):
configurator = GoCdConfigurator(empty_config())
before = configurator.ensure_pipeline_group("group").ensure_pipeline("line").set_template_name("temple")
configurator.ensure_template("temple").ensure_stage("stage").ensure_job("job")
self.check_round_trip_pipeline(configurator, before)
def test_can_reverse_engineer_pipeline(self):
configurator = GoCdConfigurator(config('config-with-more-options-pipeline'))
actual = configurator.as_python(more_options_pipeline(), with_save=False)
expected = """#!/usr/bin/env python
from gomatic import *
configurator = GoCdConfigurator(FakeConfig(whatever))
pipeline = configurator\
.ensure_pipeline_group("P.Group")\
.ensure_replacement_of_pipeline("more-options")\
.set_timer("0 15 22 * * ?")\
.set_git_material(GitMaterial("git@bitbucket.org:springersbm/gomatic.git", branch="a-branch", material_name="some-material-name", polling=False))\
.ensure_material(PipelineMaterial("pipeline2", "build")).ensure_environment_variables({'JAVA_HOME': '/opt/java/jdk-1.7'})\
.ensure_parameters({'environment': 'qa'})
stage = pipeline.ensure_stage("earlyStage")
job = stage.ensure_job("earlyWorm").ensure_artifacts(set([BuildArtifact("scripts/*", "files"), BuildArtifact("target/universal/myapp*.zip", "artifacts"), TestArtifact("from", "to")])).set_runs_on_all_agents()
job.add_task(ExecTask(['ls']))
job.add_task(ExecTask(['bash', '-c', 'curl "http://domain.com/service/check?target=one+two+three&key=2714_beta%40domain.com"']))
stage = pipeline.ensure_stage("middleStage")
job = stage.ensure_job("middleJob")
stage = pipeline.ensure_stage("s1").set_fetch_materials(False)
job = stage.ensure_job("rake-job").ensure_artifacts({BuildArtifact("things/*")})
job.add_task(RakeTask("boo", "passed"))
job = stage.ensure_job("run-if-variants")
job.add_task(ExecTask(['t-passed']))
job.add_task(ExecTask(['t-none']))
job.add_task(ExecTask(['t-failed'], runif="failed"))
job.add_task(ExecTask(['t-any'], runif="any"))
job.add_task(ExecTask(['t-both'], runif="any"))
job = stage.ensure_job("variety-of-tasks")
job.add_task(RakeTask("sometarget", "passed"))
job.add_task(FetchArtifactTask("more-options", "earlyStage", "earlyWorm", FetchArtifactDir("sourceDir"), dest="destDir"))
job.add_task(FetchArtifactTask("more-options", "middleStage", "middleJob", FetchArtifactFile("someFile")))
job.add_task(ExecTask(['true']))
"""
self.assertEquals(simplified(expected), simplified(actual))
class TestXmlFormatting(unittest.TestCase):
def test_can_format_simple_xml(self):
expected = '<?xml version="1.0" ?>\n<top>\n\t<middle>stuff</middle>\n</top>'
non_formatted = "<top><middle>stuff</middle></top>"
formatted = prettify(non_formatted)
self.assertEquals(expected, formatted)
def test_can_format_more_complicated_xml(self):
expected = '<?xml version="1.0" ?>\n<top>\n\t<middle>\n\t\t<innermost>stuff</innermost>\n\t</middle>\n</top>'
non_formatted = "<top><middle><innermost>stuff</innermost></middle></top>"
formatted = prettify(non_formatted)
self.assertEquals(expected, formatted)
def test_can_format_actual_config(self):
formatted = prettify(open("test-data/config-unformatted.xml").read())
expected = open("test-data/config-formatted.xml").read()
def head(s):
return "\n".join(s.split('\n')[:10])
self.assertEquals(expected, formatted, "expected=\n%s\n%s\nactual=\n%s" % (head(expected), "=" * 88, head(formatted)))
| 49.271523 | 208 | 0.697849 | 9,620 | 81,840 | 5.65343 | 0.055198 | 0.118854 | 0.022984 | 0.02679 | 0.777553 | 0.690398 | 0.594454 | 0.533648 | 0.476005 | 0.424539 | 0 | 0.006538 | 0.17577 | 81,840 | 1,660 | 209 | 49.301205 | 0.799718 | 0.003739 | 0 | 0.407054 | 0 | 0.008082 | 0.137514 | 0.054191 | 0 | 0 | 0 | 0.000602 | 0.311536 | 0 | null | null | 0.022043 | 0.007348 | null | null | 0.004409 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5355dedf12aa8e15115b8c77564d80f57eb0ec2a | 1,577 | py | Python | set-env.py | sajaldebnath/vrops-custom-group-creation | e3c821336832445e93706ad29afe216867660123 | [
"MIT"
] | 1 | 2017-08-14T07:51:42.000Z | 2017-08-14T07:51:42.000Z | set-env.py | sajaldebnath/vrops-custom-group-creation | e3c821336832445e93706ad29afe216867660123 | [
"MIT"
] | null | null | null | set-env.py | sajaldebnath/vrops-custom-group-creation | e3c821336832445e93706ad29afe216867660123 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""
#
# set-env - a small python program to set up the configuration environment for data-push.py
# data-push.py contains the python program to push attribute values to vROps
# Author Sajal Debnath <sdebnath@vmware.com>
#
"""
# Importing the required modules
import json
import base64
import os, sys
# Getting the absolute path from where the script is being run
def get_script_path():
return os.path.dirname(os.path.realpath(sys.argv[0]))
# Getting the inputs from user
def get_the_inputs():
adapterkind = raw_input("Please enter Adapter Kind: ")
resourceKind = raw_input("Please enter Resource Kind: ")
servername = raw_input("Please enter Server IP/FQDN: ")
serveruid = raw_input("Please enter user id: ")
serverpasswd = raw_input("Please enter vRops password: ")
encryptedvar = base64.b64encode(serverpasswd)
data = {}
data["adapterKind"] = adapterkind
data["resourceKind"] = resourceKind
serverdetails = {}
serverdetails["name"] = servername
serverdetails["userid"] = serveruid
serverdetails["password"] = encryptedvar
data["server"] = serverdetails
return data
# Getting the path where env.json file should be kept
path = get_script_path()
fullpath = path+"/"+"env.json"
# Getting the data for the env.json file
final_data = get_the_inputs()
# Saving the data to env.json file
with open(fullpath, 'w') as outfile:
json.dump(final_data, outfile, sort_keys = True, indent = 2, separators=(',', ':'), ensure_ascii=False) | 28.672727 | 107 | 0.689918 | 202 | 1,577 | 5.30198 | 0.480198 | 0.037348 | 0.052288 | 0.070962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006385 | 0.205453 | 1,577 | 55 | 107 | 28.672727 | 0.848364 | 0.304375 | 0 | 0 | 0 | 0 | 0.187379 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.115385 | 0.115385 | 0.038462 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
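The password above is only base64-obfuscated, not encrypted, so the companion data-push.py can reverse it with `base64.b64decode`. A hedged sketch of the reading side; the helper name and sample values are illustrative, only the env.json field names come from the script above:

```python
import base64
import json

def load_environment(path):
    # Read the env.json written by set-env.py and recover the password.
    # Note: base64 is reversible obfuscation, not encryption.
    with open(path) as f:
        data = json.load(f)
    data["server"]["password"] = base64.b64decode(data["server"]["password"]).decode()
    return data

# Round-trip demo against a file shaped like set-env.py's output.
sample = {
    "adapterKind": "VMWARE",
    "resourceKind": "VirtualMachine",
    "server": {"name": "vrops.example.com", "userid": "admin",
               "password": base64.b64encode(b"secret").decode()},
}
with open("env.json", "w") as out:
    json.dump(sample, out, sort_keys=True, indent=2)
print(load_environment("env.json")["server"]["password"])  # secret
```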
535b6a1790a4b33142e1922aac85ef30e05ce452 | 1,487 | gyp | Python | binding.gyp | terrorizer1980/fs-admin | e21216161c56def4ca76a3ef4e71844e2ba26074 | [
"MIT"
] | 25 | 2017-10-14T22:54:00.000Z | 2022-02-28T16:45:44.000Z | binding.gyp | icecream17/fs-admin | e21216161c56def4ca76a3ef4e71844e2ba26074 | [
"MIT"
] | 46 | 2019-02-22T15:17:32.000Z | 2022-03-15T16:04:38.000Z | binding.gyp | icecream17/fs-admin | e21216161c56def4ca76a3ef4e71844e2ba26074 | [
"MIT"
] | 19 | 2018-01-04T00:52:17.000Z | 2022-02-05T17:18:17.000Z | {
'target_defaults': {
'win_delay_load_hook': 'false',
'conditions': [
['OS=="win"', {
'msvs_disabled_warnings': [
4530, # C++ exception handler used, but unwind semantics are not enabled
4506, # no definition for inline function
],
}],
],
},
'targets': [
{
'target_name': 'fs_admin',
'defines': [
"NAPI_VERSION=<(napi_build_version)",
],
'cflags!': [ '-fno-exceptions' ],
'cflags_cc!': [ '-fno-exceptions' ],
'xcode_settings': { 'GCC_ENABLE_CPP_EXCEPTIONS': 'YES',
'CLANG_CXX_LIBRARY': 'libc++',
'MACOSX_DEPLOYMENT_TARGET': '10.7',
},
'msvs_settings': {
'VCCLCompilerTool': { 'ExceptionHandling': 1 },
},
'sources': [
'src/main.cc',
],
'include_dirs': [
'<!(node -p "require(\'node-addon-api\').include_dir")',
],
'conditions': [
['OS=="win"', {
'sources': [
'src/fs-admin-win.cc',
],
'libraries': [
'-lole32.lib',
'-lshell32.lib',
],
}],
['OS=="mac"', {
'sources': [
'src/fs-admin-darwin.cc',
],
'libraries': [
'$(SDKROOT)/System/Library/Frameworks/Security.framework',
],
}],
['OS=="linux"', {
'sources': [
'src/fs-admin-linux.cc',
],
}],
],
}
]
}
| 24.377049 | 83 | 0.438467 | 120 | 1,487 | 5.241667 | 0.675 | 0.044515 | 0.057234 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017076 | 0.369872 | 1,487 | 60 | 84 | 24.783333 | 0.654216 | 0.065905 | 0 | 0.416667 | 0 | 0 | 0.445887 | 0.146465 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
535e920c95d9b042b1a45ee54769faf051d34c56 | 1,013 | py | Python | app/domains/users/views.py | Geo-Gabriel/eccomerce_nestle_mongodb | 97bf5dbdc7bee20a9ca2f7cad98afc6e8f11bd3e | [
"MIT"
] | 3 | 2020-06-21T15:51:25.000Z | 2021-01-24T21:19:27.000Z | app/domains/users/views.py | Geo-Gabriel/eccomerce_nestle_mongodb | 97bf5dbdc7bee20a9ca2f7cad98afc6e8f11bd3e | [
"MIT"
] | null | null | null | app/domains/users/views.py | Geo-Gabriel/eccomerce_nestle_mongodb | 97bf5dbdc7bee20a9ca2f7cad98afc6e8f11bd3e | [
"MIT"
] | null | null | null | from flask import Blueprint, request, jsonify
from app.domains.users.actions import get_all_users, insert_user, get_user_by_id, update_user, delete_user
app_users = Blueprint('app.users', __name__)
@app_users.route('/users', methods=['GET'])
def get_users():
return jsonify([user.serialize() for user in get_all_users()]), 200
@app_users.route('/users/<id>', methods=["GET"])
def get_by_id(id: str):
user = get_user_by_id(id_user=id)
return jsonify(user.serialize()), 200
@app_users.route('/users', methods=["POST"])
def post_user():
payload = request.get_json()
user = insert_user(payload)
return jsonify(user.serialize()), 201
@app_users.route('/users/<id>', methods=["PUT"])
def update(id: str):
payload = request.get_json()
user = update_user(id_user=id, data=payload)
return jsonify(user.serialize()), 200
@app_users.route('/users/<id>', methods=["DELETE"])
def delete(id: str):
delete_user(id_user=id)
return jsonify({"message": "user deleted"}), 200
| 27.378378 | 106 | 0.698914 | 149 | 1,013 | 4.516779 | 0.241611 | 0.08321 | 0.096582 | 0.13373 | 0.506686 | 0.237741 | 0.197623 | 0.139673 | 0.139673 | 0 | 0 | 0.017084 | 0.133268 | 1,013 | 36 | 107 | 28.138889 | 0.749431 | 0 | 0 | 0.166667 | 0 | 0 | 0.090819 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208333 | false | 0 | 0.083333 | 0.041667 | 0.5 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
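The routes above assume each user object returned by the actions layer exposes a `serialize()` method whose result `jsonify` can consume. A minimal sketch of a model satisfying that contract; the field names are assumptions, not taken from the repository:

```python
class User:
    def __init__(self, id_user, name):
        self.id = id_user
        self.name = name

    def serialize(self):
        # Plain dict, so jsonify(user.serialize()) can render it as JSON.
        return {"id": self.id, "name": self.name}

print(User("1", "Ada").serialize())  # {'id': '1', 'name': 'Ada'}
```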
536501345147bcbb0b1035da0ccdac716533b14a | 2,557 | py | Python | wired_version/mcs_wired.py | Harri-Renney/Mind_Control_Synth | 5a892a81a3f37444ef154f29a62d44fa1476bfbd | [
"MIT"
] | 1 | 2020-12-20T09:53:20.000Z | 2020-12-20T09:53:20.000Z | wired_version/mcs_wired.py | Harri-Renney/Mind_Control_Synth | 5a892a81a3f37444ef154f29a62d44fa1476bfbd | [
"MIT"
] | null | null | null | wired_version/mcs_wired.py | Harri-Renney/Mind_Control_Synth | 5a892a81a3f37444ef154f29a62d44fa1476bfbd | [
"MIT"
] | null | null | null | import time
import mido
from pinaps.piNapsController import PiNapsController
from NeuroParser import NeuroParser
"""
Equations of motion (constant acceleration, time step of 2) used to modify vibrato.
"""
def positionStep(pos, vel, acc):
return pos + vel * 2 + 0.5 * acc * 4  # 0.5 rather than 1/2, which truncates to 0 under Python 2 integer division
def velocityStep(vel, acc):
return acc * 2 + vel
CTRL_LFO_PITCH = 26
CTRL_LFO_RATE = 29
MIDI_MESSAGE_PERIOD = 1
vibratoPos = 0
vibratoVel = 0
vibratoAcc = 4
def parserUpdateVibrato(packet):
global vibratoPos
global vibratoVel
global vibratoAcc
if(packet.code == NeuroParser.DataPacket.kPoorQuality):
print("Poor quality: " + str(packet.poorQuality))
if(packet.code == NeuroParser.DataPacket.kAttention):
print("Attention: " + str(packet.attention))
##Change in vibrato strength depending on attention values##
##@ToDo - Change to include more momentum build up etc##
if(packet.attention > 50):
vibratoPos = positionStep(vibratoPos, vibratoVel, vibratoAcc)
vibratoVel = velocityStep(vibratoVel, vibratoAcc)
vibratoPos = 100 if vibratoPos > 100 else vibratoPos
vibratoPos = 0 if vibratoPos < 0 else vibratoPos
else:
vibratoPos = positionStep(vibratoPos, vibratoVel, -vibratoAcc)
vibratoVel = velocityStep(vibratoVel, -vibratoAcc)
vibratoPos = 100 if vibratoPos > 100 else vibratoPos
vibratoPos = 0 if vibratoPos < 0 else vibratoPos
def main():
#Init USB:MIDI interface.
#print(mido.get_output_names()) #Used to originally find correct serial port.
port = mido.open_output('USB Midi:USB Midi MIDI 1 20:0')
msgModulate = mido.Message('control_change', control=CTRL_LFO_PITCH, value=100)
port.send(msgModulate)
#Init Pinaps.
pinapsController = PiNapsController()
pinapsController.defaultInitialise()
pinapsController.deactivateAllLEDs()
aParser = NeuroParser()
#Parse all available Pinaps EEG data. Calculate vibrato value and send as MIDI message.
while True:
data = pinapsController.readEEGSensor()
aParser.parse(data, parserUpdateVibrato)
print("Message vibrato strength: ", vibratoPos)
msgModulate = mido.Message('control_change', control=CTRL_LFO_RATE, value=vibratoPos)
port.send(msgModulate)
#Sleep for defined message period.
time.sleep(MIDI_MESSAGE_PERIOD)
if __name__ == '__main__':
main() | 35.027397 | 134 | 0.658193 | 268 | 2,557 | 6.186567 | 0.395522 | 0.033173 | 0.014475 | 0.027744 | 0.308806 | 0.268999 | 0.268999 | 0.268999 | 0.209891 | 0.209891 | 0 | 0.020116 | 0.261244 | 2,557 | 73 | 135 | 35.027397 | 0.857597 | 0.152522 | 0 | 0.122449 | 0 | 0 | 0.057115 | 0 | 0 | 0 | 0 | 0.013699 | 0 | 1 | 0.081633 | false | 0 | 0.081633 | 0.040816 | 0.204082 | 0.061224 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
536a166e562f305f44e421c35ddf14c30aa9d207 | 2,804 | py | Python | util/tools/split_train_val.py | JochenZoellner/tf_neiss-1 | c91019e5bce6d3c7512237eec5ea997fd95304ac | [
"Apache-2.0"
] | null | null | null | util/tools/split_train_val.py | JochenZoellner/tf_neiss-1 | c91019e5bce6d3c7512237eec5ea997fd95304ac | [
"Apache-2.0"
] | 1 | 2020-08-07T13:04:43.000Z | 2020-08-10T12:32:46.000Z | util/tools/split_train_val.py | JochenZoellner/tf_neiss-1 | c91019e5bce6d3c7512237eec5ea997fd95304ac | [
"Apache-2.0"
] | 1 | 2019-12-16T15:46:45.000Z | 2019-12-16T15:46:45.000Z | import glob
import logging
import os
import shutil
import sys
"""script to divide a folder with generated/training data into a train and val folder
- val folder contains 500 Samples if not changed in source code
- DOES NOT work if images structured in subfolders, see below
- if there is no dir in the given folder -> split this folder
- if there are dir/s in the folder -> perform split on each folder
- split on sorted list -> repeated runs should give the same result
"""
def main(args):
foldername = args[1]
print("CWD: {}".format(os.getcwd()))
print("foldername: {}".format(foldername))
    dirs = next(os.walk(foldername))[1]
dirs = [os.path.join(foldername, x) for x in dirs]
print(dirs)
if len(dirs) == 0:
print("no subdirs found -> run directly on {}".format(foldername))
dirs = [foldername]
    for dir_path in dirs:
        print("perform split on {}".format(dir_path))
# image_list = sorted(glob.glob1(os.path.join(foldername, dir_path), "*.jpg"))
image_list = sorted(glob.glob1(dir_path, "*.jpg"))
# image_list = sorted(glob.glob1(dir_path , "*.png"))
if len(image_list) == 0:
logging.error("Could not find any '*.jpg' in {}".format(dir_path))
exit(1)
else:
print(" found {} images".format(len(image_list)))
        # val_len = int(len(image_list) * 0.1)
        val_len = 500
val_list = image_list[:val_len]
train_list = image_list[val_len:]
# save first 10%/500 of list to val list
for subdir, part_list in zip(["val", "train"], [val_list, train_list]):
os.makedirs(os.path.join(dir_path, subdir))
print(" move files in {}...".format(subdir))
for image_file in part_list:
shutil.move(os.path.join(dir_path, image_file), os.path.join(dir_path, subdir, image_file))
try:
shutil.move(os.path.join(dir_path, image_file + ".txt"),
os.path.join(dir_path, subdir, image_file + ".txt"))
except IOError as ex:
print(ex)
try:
shutil.move(os.path.join(dir_path, image_file + ".info"),
os.path.join(dir_path, subdir, image_file + ".info"))
except IOError as ex:
pass
print(" write list: {}...".format(os.path.join(dir_path, "{}_{}.lst".format(dir_path, subdir))))
with open(os.path.join(foldername, "{}_{}.lst".format(os.path.basename(dir_path), subdir)), "w") as fobj:
fobj.writelines([os.path.join(dir_path, subdir, x) + "\n" for x in part_list])
if __name__ == '__main__':
main(sys.argv)
| 40.057143 | 117 | 0.579529 | 382 | 2,804 | 4.117801 | 0.308901 | 0.071202 | 0.076287 | 0.07438 | 0.260648 | 0.210426 | 0.181182 | 0.181182 | 0.120153 | 0.097266 | 0 | 0.010505 | 0.28709 | 2,804 | 69 | 118 | 40.637681 | 0.776388 | 0.07311 | 0 | 0.085106 | 1 | 0 | 0.10519 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021277 | false | 0.021277 | 0.106383 | 0 | 0.12766 | 0.191489 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
536b943feabc16b11630bbf2f1fc6f9c7d3d5261 | 557 | py | Python | make/platform/registry.py | tompis/casual | d838716c7052a906af8a19e945a496acdc7899a2 | [
"MIT"
] | null | null | null | make/platform/registry.py | tompis/casual | d838716c7052a906af8a19e945a496acdc7899a2 | [
"MIT"
] | null | null | null | make/platform/registry.py | tompis/casual | d838716c7052a906af8a19e945a496acdc7899a2 | [
"MIT"
] | null | null | null |
import os
registry = {}
class RegisterPlatform(object):
'''
classdocs
'''
def __init__(self, platform):
'''
Constructor
'''
self.platform = platform
    def __call__(self, clazz):
        registry[self.platform] = clazz
        # Return the class so the decorator does not replace it with None.
        return clazz
def platform():
    # Decide on which platform this runs
    platform = os.uname()[0].lower()
    if platform == "darwin":
        platform = "osx"
    if not registry:
        raise SyntaxError("No platforms are registered.")
    return registry[platform]()
| 16.878788 | 57 | 0.563734 | 53 | 557 | 5.773585 | 0.622642 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002653 | 0.32316 | 557 | 32 | 58 | 17.40625 | 0.809019 | 0.061041 | 0 | 0 | 0 | 0 | 0.080435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.071429 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
536bfa0db6a83d2b284796ec230b11252da51887 | 553 | py | Python | mailer/admin.py | everyvoter/everyvoter | 65d9b8bdf9b5c64057135c279f6e03b6c207e0fa | [
"MIT"
] | 5 | 2019-07-01T17:50:44.000Z | 2022-02-20T02:44:42.000Z | mailer/admin.py | everyvoter/everyvoter | 65d9b8bdf9b5c64057135c279f6e03b6c207e0fa | [
"MIT"
] | 3 | 2020-06-05T21:44:33.000Z | 2021-06-10T21:39:26.000Z | mailer/admin.py | everyvoter/everyvoter | 65d9b8bdf9b5c64057135c279f6e03b6c207e0fa | [
"MIT"
] | 1 | 2021-12-09T06:32:40.000Z | 2021-12-09T06:32:40.000Z | """Django Admin Panels for App"""
from django.contrib import admin
from mailer import models
@admin.register(models.SendingAddress)
class SendingAddressAdmin(admin.ModelAdmin):
"""Admin View for SendingAddress"""
list_display = ('address', 'organization')
list_filter = ('organization__name',)
actions = None
def has_delete_permission(self, request, obj=None):
"""The primary address can not be deleted via the django admin"""
if obj and obj.pk == 1:
return False
else:
return True
| 29.105263 | 73 | 0.672694 | 66 | 553 | 5.545455 | 0.69697 | 0.060109 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002347 | 0.229656 | 553 | 18 | 74 | 30.722222 | 0.856808 | 0.211573 | 0 | 0 | 0 | 0 | 0.088095 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.166667 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
7258cd5e14cfcac3370c20a51efc82ed53ffd2ed | 26,052 | py | Python | functest/tests/unit/odl/test_odl.py | hashnfv/hashnfv-functest | ff34df7ec7be6cd5fcf0f7557b393bd5d6266047 | [
"Apache-2.0"
] | null | null | null | functest/tests/unit/odl/test_odl.py | hashnfv/hashnfv-functest | ff34df7ec7be6cd5fcf0f7557b393bd5d6266047 | [
"Apache-2.0"
] | null | null | null | functest/tests/unit/odl/test_odl.py | hashnfv/hashnfv-functest | ff34df7ec7be6cd5fcf0f7557b393bd5d6266047 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# Copyright (c) 2016 Orange and others.
#
# All rights reserved. This program and the accompanying materials
# are made available under the terms of the Apache License, Version 2.0
# which accompanies this distribution, and is available at
# http://www.apache.org/licenses/LICENSE-2.0
"""Define the classes required to fully cover odl."""
import errno
import logging
import os
import unittest
from keystoneauth1.exceptions import auth_plugins
import mock
from robot.errors import DataError, RobotError
from robot.result import model
from robot.utils.robottime import timestamp_to_secs
import six
from six.moves import urllib
from functest.core import testcase
from functest.opnfv_tests.sdn.odl import odl
__author__ = "Cedric Ollivier <cedric.ollivier@orange.com>"
class ODLVisitorTesting(unittest.TestCase):
"""The class testing ODLResultVisitor."""
# pylint: disable=missing-docstring
def setUp(self):
self.visitor = odl.ODLResultVisitor()
def test_empty(self):
self.assertFalse(self.visitor.get_data())
def test_ok(self):
data = {'name': 'foo',
'parent': 'bar',
'status': 'PASS',
'starttime': "20161216 16:00:00.000",
'endtime': "20161216 16:00:01.000",
'elapsedtime': 1000,
'text': 'Hello, World!',
'critical': True}
test = model.TestCase(
name=data['name'], status=data['status'], message=data['text'],
starttime=data['starttime'], endtime=data['endtime'])
test.parent = mock.Mock()
config = {'name': data['parent'],
'criticality.test_is_critical.return_value': data[
'critical']}
test.parent.configure_mock(**config)
self.visitor.visit_test(test)
self.assertEqual(self.visitor.get_data(), [data])
class ODLTesting(unittest.TestCase):
"""The super class which testing classes could inherit."""
# pylint: disable=missing-docstring
logging.disable(logging.CRITICAL)
_keystone_ip = "127.0.0.1"
_neutron_url = "http://127.0.0.2:9696"
_sdn_controller_ip = "127.0.0.3"
_os_auth_url = "http://{}:5000/v3".format(_keystone_ip)
_os_projectname = "admin"
_os_username = "admin"
_os_password = "admin"
_odl_webport = "8080"
_odl_restconfport = "8181"
_odl_username = "admin"
_odl_password = "admin"
_os_userdomainname = 'Default'
_os_projectdomainname = 'Default'
def setUp(self):
for var in ("INSTALLER_TYPE", "SDN_CONTROLLER", "SDN_CONTROLLER_IP"):
if var in os.environ:
del os.environ[var]
os.environ["OS_AUTH_URL"] = self._os_auth_url
os.environ["OS_USERNAME"] = self._os_username
os.environ["OS_USER_DOMAIN_NAME"] = self._os_userdomainname
os.environ["OS_PASSWORD"] = self._os_password
os.environ["OS_PROJECT_NAME"] = self._os_projectname
os.environ["OS_PROJECT_DOMAIN_NAME"] = self._os_projectdomainname
os.environ["OS_PASSWORD"] = self._os_password
self.test = odl.ODLTests(case_name='odl', project_name='functest')
self.defaultargs = {'odlusername': self._odl_username,
'odlpassword': self._odl_password,
'neutronurl': "http://{}:9696".format(
self._keystone_ip),
'osauthurl': self._os_auth_url,
'osusername': self._os_username,
'osuserdomainname': self._os_userdomainname,
'osprojectname': self._os_projectname,
'osprojectdomainname': self._os_projectdomainname,
'ospassword': self._os_password,
'odlip': self._keystone_ip,
'odlwebport': self._odl_webport,
'odlrestconfport': self._odl_restconfport,
'pushtodb': False}
class ODLParseResultTesting(ODLTesting):
"""The class testing ODLTests.parse_results()."""
# pylint: disable=missing-docstring
_config = {'name': 'dummy', 'starttime': '20161216 16:00:00.000',
'endtime': '20161216 16:00:01.000'}
@mock.patch('robot.api.ExecutionResult', side_effect=DataError)
def test_raises_exc(self, mock_method):
with self.assertRaises(DataError):
self.test.parse_results()
mock_method.assert_called_once_with(
os.path.join(odl.ODLTests.res_dir, 'output.xml'))
def _test_result(self, config, result):
suite = mock.Mock()
suite.configure_mock(**config)
with mock.patch('robot.api.ExecutionResult',
return_value=mock.Mock(suite=suite)):
self.test.parse_results()
self.assertEqual(self.test.result, result)
self.assertEqual(self.test.start_time,
timestamp_to_secs(config['starttime']))
self.assertEqual(self.test.stop_time,
timestamp_to_secs(config['endtime']))
self.assertEqual(self.test.details,
{'description': config['name'], 'tests': []})
def test_null_passed(self):
self._config.update({'statistics.critical.passed': 0,
'statistics.critical.total': 20})
self._test_result(self._config, 0)
def test_no_test(self):
self._config.update({'statistics.critical.passed': 20,
'statistics.critical.total': 0})
self._test_result(self._config, 0)
def test_half_success(self):
self._config.update({'statistics.critical.passed': 10,
'statistics.critical.total': 20})
self._test_result(self._config, 50)
def test_success(self):
self._config.update({'statistics.critical.passed': 20,
'statistics.critical.total': 20})
self._test_result(self._config, 100)
class ODLRobotTesting(ODLTesting):
"""The class testing ODLTests.set_robotframework_vars()."""
# pylint: disable=missing-docstring
@mock.patch('fileinput.input', side_effect=Exception())
def test_set_vars_ko(self, mock_method):
self.assertFalse(self.test.set_robotframework_vars())
mock_method.assert_called_once_with(
os.path.join(odl.ODLTests.odl_test_repo,
'csit/variables/Variables.robot'), inplace=True)
@mock.patch('fileinput.input', return_value=[])
def test_set_vars_empty(self, mock_method):
self.assertTrue(self.test.set_robotframework_vars())
mock_method.assert_called_once_with(
os.path.join(odl.ODLTests.odl_test_repo,
'csit/variables/Variables.robot'), inplace=True)
@mock.patch('sys.stdout', new_callable=six.StringIO)
def _test_set_vars(self, msg1, msg2, *args):
line = mock.MagicMock()
line.__iter__.return_value = [msg1]
with mock.patch('fileinput.input', return_value=line) as mock_method:
self.assertTrue(self.test.set_robotframework_vars())
mock_method.assert_called_once_with(
os.path.join(odl.ODLTests.odl_test_repo,
'csit/variables/Variables.robot'), inplace=True)
self.assertEqual(args[0].getvalue(), "{}\n".format(msg2))
def test_set_vars_auth_default(self):
self._test_set_vars(
"@{AUTH} ",
"@{AUTH} admin admin")
def test_set_vars_auth1(self):
self._test_set_vars(
"@{AUTH1} foo bar",
"@{AUTH1} foo bar")
@mock.patch('sys.stdout', new_callable=six.StringIO)
def test_set_vars_auth_foo(self, *args):
line = mock.MagicMock()
line.__iter__.return_value = ["@{AUTH} "]
with mock.patch('fileinput.input', return_value=line) as mock_method:
self.assertTrue(self.test.set_robotframework_vars('foo', 'bar'))
mock_method.assert_called_once_with(
os.path.join(odl.ODLTests.odl_test_repo,
'csit/variables/Variables.robot'), inplace=True)
self.assertEqual(
args[0].getvalue(),
"@{AUTH} foo bar\n")
class ODLMainTesting(ODLTesting):
"""The class testing ODLTests.run_suites()."""
# pylint: disable=missing-docstring
def _get_run_suites_kwargs(self, key=None):
kwargs = {'odlusername': self._odl_username,
'odlpassword': self._odl_password,
'neutronurl': self._neutron_url,
'osauthurl': self._os_auth_url,
'osusername': self._os_username,
'osuserdomainname': self._os_userdomainname,
'osprojectname': self._os_projectname,
'osprojectdomainname': self._os_projectdomainname,
'ospassword': self._os_password,
'odlip': self._sdn_controller_ip,
'odlwebport': self._odl_webport,
'odlrestconfport': self._odl_restconfport}
if key:
del kwargs[key]
return kwargs
def _test_run_suites(self, status, *args):
kwargs = self._get_run_suites_kwargs()
self.assertEqual(self.test.run_suites(**kwargs), status)
if len(args) > 0:
args[0].assert_called_once_with(
odl.ODLTests.res_dir)
if len(args) > 1:
variable = [
'KEYSTONEURL:{}://{}'.format(
urllib.parse.urlparse(self._os_auth_url).scheme,
urllib.parse.urlparse(self._os_auth_url).netloc),
'NEUTRONURL:{}'.format(self._neutron_url),
'OS_AUTH_URL:"{}"'.format(self._os_auth_url),
'OSUSERNAME:"{}"'.format(self._os_username),
'OSUSERDOMAINNAME:"{}"'.format(self._os_userdomainname),
'OSTENANTNAME:"{}"'.format(self._os_projectname),
'OSPROJECTDOMAINNAME:"{}"'.format(self._os_projectdomainname),
'OSPASSWORD:"{}"'.format(self._os_password),
'ODL_SYSTEM_IP:{}'.format(self._sdn_controller_ip),
'PORT:{}'.format(self._odl_webport),
'RESTCONFPORT:{}'.format(self._odl_restconfport)]
args[1].assert_called_once_with(
odl.ODLTests.basic_suite_dir,
odl.ODLTests.neutron_suite_dir,
log='NONE',
output=os.path.join(odl.ODLTests.res_dir, 'output.xml'),
report='NONE',
stdout=mock.ANY,
variable=variable)
if len(args) > 2:
args[2].assert_called_with(
os.path.join(odl.ODLTests.res_dir, 'stdout.txt'))
def _test_no_keyword(self, key):
kwargs = self._get_run_suites_kwargs(key)
self.assertEqual(self.test.run_suites(**kwargs),
testcase.TestCase.EX_RUN_ERROR)
def test_no_odlusername(self):
self._test_no_keyword('odlusername')
def test_no_odlpassword(self):
self._test_no_keyword('odlpassword')
def test_no_neutronurl(self):
self._test_no_keyword('neutronurl')
def test_no_osauthurl(self):
self._test_no_keyword('osauthurl')
def test_no_osusername(self):
self._test_no_keyword('osusername')
def test_no_osprojectname(self):
self._test_no_keyword('osprojectname')
def test_no_ospassword(self):
self._test_no_keyword('ospassword')
def test_no_odlip(self):
self._test_no_keyword('odlip')
def test_no_odlwebport(self):
self._test_no_keyword('odlwebport')
def test_no_odlrestconfport(self):
self._test_no_keyword('odlrestconfport')
def test_set_vars_ko(self):
with mock.patch.object(self.test, 'set_robotframework_vars',
return_value=False) as mock_object:
self._test_run_suites(testcase.TestCase.EX_RUN_ERROR)
mock_object.assert_called_once_with(
self._odl_username, self._odl_password)
@mock.patch('os.makedirs', side_effect=Exception)
def test_makedirs_exc(self, mock_method):
with mock.patch.object(self.test, 'set_robotframework_vars',
return_value=True), \
self.assertRaises(Exception):
self._test_run_suites(testcase.TestCase.EX_RUN_ERROR,
mock_method)
@mock.patch('os.makedirs', side_effect=OSError)
def test_makedirs_oserror(self, mock_method):
with mock.patch.object(self.test, 'set_robotframework_vars',
return_value=True):
self._test_run_suites(testcase.TestCase.EX_RUN_ERROR,
mock_method)
@mock.patch('robot.run', side_effect=RobotError)
@mock.patch('os.makedirs')
def test_run_ko(self, *args):
with mock.patch.object(self.test, 'set_robotframework_vars',
return_value=True), \
self.assertRaises(RobotError):
self._test_run_suites(testcase.TestCase.EX_RUN_ERROR, *args)
@mock.patch('robot.run')
@mock.patch('os.makedirs')
def test_parse_results_ko(self, *args):
with mock.patch.object(self.test, 'set_robotframework_vars',
return_value=True), \
mock.patch.object(self.test, 'parse_results',
side_effect=RobotError):
self._test_run_suites(testcase.TestCase.EX_RUN_ERROR, *args)
@mock.patch('robot.run')
@mock.patch('os.makedirs')
def test_ok(self, *args):
with mock.patch.object(self.test, 'set_robotframework_vars',
return_value=True), \
mock.patch.object(self.test, 'parse_results'):
self._test_run_suites(testcase.TestCase.EX_OK, *args)
@mock.patch('robot.run')
@mock.patch('os.makedirs', side_effect=OSError(errno.EEXIST, ''))
def test_makedirs_oserror17(self, *args):
with mock.patch.object(self.test, 'set_robotframework_vars',
return_value=True), \
mock.patch.object(self.test, 'parse_results'):
self._test_run_suites(testcase.TestCase.EX_OK, *args)
@mock.patch('robot.run', return_value=1)
@mock.patch('os.makedirs')
def test_testcases_in_failure(self, *args):
with mock.patch.object(self.test, 'set_robotframework_vars',
return_value=True), \
mock.patch.object(self.test, 'parse_results'):
self._test_run_suites(testcase.TestCase.EX_OK, *args)
class ODLRunTesting(ODLTesting):
"""The class testing ODLTests.run()."""
# pylint: disable=missing-docstring
def _test_no_env_var(self, var):
with mock.patch('functest.utils.openstack_utils.get_endpoint',
return_value=ODLTesting._neutron_url):
del os.environ[var]
self.assertEqual(self.test.run(),
testcase.TestCase.EX_RUN_ERROR)
def _test_run(self, status=testcase.TestCase.EX_OK,
exception=None, **kwargs):
odlip = kwargs['odlip'] if 'odlip' in kwargs else '127.0.0.3'
odlwebport = kwargs['odlwebport'] if 'odlwebport' in kwargs else '8080'
odlrestconfport = (kwargs['odlrestconfport']
if 'odlrestconfport' in kwargs else '8181')
with mock.patch('functest.utils.openstack_utils.get_endpoint',
return_value=ODLTesting._neutron_url):
if exception:
self.test.run_suites = mock.Mock(side_effect=exception)
else:
self.test.run_suites = mock.Mock(return_value=status)
self.assertEqual(self.test.run(), status)
self.test.run_suites.assert_called_once_with(
odl.ODLTests.default_suites,
neutronurl=self._neutron_url,
odlip=odlip, odlpassword=self._odl_password,
odlrestconfport=odlrestconfport,
odlusername=self._odl_username, odlwebport=odlwebport,
osauthurl=self._os_auth_url,
ospassword=self._os_password,
osprojectname=self._os_projectname,
osusername=self._os_username,
osprojectdomainname=self._os_projectdomainname,
osuserdomainname=self._os_userdomainname)
def _test_multiple_suites(self, suites,
status=testcase.TestCase.EX_OK, **kwargs):
odlip = kwargs['odlip'] if 'odlip' in kwargs else '127.0.0.3'
odlwebport = kwargs['odlwebport'] if 'odlwebport' in kwargs else '8080'
odlrestconfport = (kwargs['odlrestconfport']
if 'odlrestconfport' in kwargs else '8181')
with mock.patch('functest.utils.openstack_utils.get_endpoint',
return_value=ODLTesting._neutron_url):
self.test.run_suites = mock.Mock(return_value=status)
self.assertEqual(self.test.run(suites=suites), status)
self.test.run_suites.assert_called_once_with(
suites,
neutronurl=self._neutron_url,
odlip=odlip, odlpassword=self._odl_password,
odlrestconfport=odlrestconfport,
odlusername=self._odl_username, odlwebport=odlwebport,
osauthurl=self._os_auth_url,
ospassword=self._os_password,
osprojectname=self._os_projectname,
osusername=self._os_username,
osprojectdomainname=self._os_projectdomainname,
osuserdomainname=self._os_userdomainname)
def test_exc(self):
with mock.patch('functest.utils.openstack_utils.get_endpoint',
side_effect=auth_plugins.MissingAuthPlugin()):
self.assertEqual(self.test.run(),
testcase.TestCase.EX_RUN_ERROR)
def test_no_os_auth_url(self):
self._test_no_env_var("OS_AUTH_URL")
def test_no_os_username(self):
self._test_no_env_var("OS_USERNAME")
def test_no_os_password(self):
self._test_no_env_var("OS_PASSWORD")
def test_no_os__name(self):
self._test_no_env_var("OS_PROJECT_NAME")
def test_run_suites_false(self):
os.environ["SDN_CONTROLLER_IP"] = self._sdn_controller_ip
self._test_run(testcase.TestCase.EX_RUN_ERROR,
odlip=self._sdn_controller_ip,
odlwebport=self._odl_webport)
def test_run_suites_exc(self):
with self.assertRaises(Exception):
os.environ["SDN_CONTROLLER_IP"] = self._sdn_controller_ip
self._test_run(status=testcase.TestCase.EX_RUN_ERROR,
exception=Exception(),
odlip=self._sdn_controller_ip,
odlwebport=self._odl_webport)
def test_no_sdn_controller_ip(self):
with mock.patch('functest.utils.openstack_utils.get_endpoint',
return_value=ODLTesting._neutron_url):
self.assertEqual(self.test.run(),
testcase.TestCase.EX_RUN_ERROR)
def test_without_installer_type(self):
os.environ["SDN_CONTROLLER_IP"] = self._sdn_controller_ip
self._test_run(testcase.TestCase.EX_OK,
odlip=self._sdn_controller_ip,
odlwebport=self._odl_webport)
def test_suites(self):
os.environ["SDN_CONTROLLER_IP"] = self._sdn_controller_ip
self._test_multiple_suites(
[odl.ODLTests.basic_suite_dir],
testcase.TestCase.EX_OK,
odlip=self._sdn_controller_ip,
odlwebport=self._odl_webport)
def test_fuel(self):
os.environ["INSTALLER_TYPE"] = "fuel"
self._test_run(testcase.TestCase.EX_OK,
odlip=urllib.parse.urlparse(self._neutron_url).hostname,
odlwebport='8181',
odlrestconfport='8282')
def test_apex_no_controller_ip(self):
with mock.patch('functest.utils.openstack_utils.get_endpoint',
return_value=ODLTesting._neutron_url):
os.environ["INSTALLER_TYPE"] = "apex"
self.assertEqual(self.test.run(),
testcase.TestCase.EX_RUN_ERROR)
def test_apex(self):
os.environ["SDN_CONTROLLER_IP"] = self._sdn_controller_ip
os.environ["INSTALLER_TYPE"] = "apex"
self._test_run(testcase.TestCase.EX_OK,
odlip=self._sdn_controller_ip, odlwebport='8081',
odlrestconfport='8081')
def test_netvirt_no_controller_ip(self):
with mock.patch('functest.utils.openstack_utils.get_endpoint',
return_value=ODLTesting._neutron_url):
os.environ["INSTALLER_TYPE"] = "netvirt"
self.assertEqual(self.test.run(),
testcase.TestCase.EX_RUN_ERROR)
def test_netvirt(self):
os.environ["SDN_CONTROLLER_IP"] = self._sdn_controller_ip
os.environ["INSTALLER_TYPE"] = "netvirt"
self._test_run(testcase.TestCase.EX_OK,
odlip=self._sdn_controller_ip, odlwebport='8081',
odlrestconfport='8081')
def test_joid_no_controller_ip(self):
with mock.patch('functest.utils.openstack_utils.get_endpoint',
return_value=ODLTesting._neutron_url):
os.environ["INSTALLER_TYPE"] = "joid"
self.assertEqual(self.test.run(),
testcase.TestCase.EX_RUN_ERROR)
def test_joid(self):
os.environ["SDN_CONTROLLER"] = self._sdn_controller_ip
os.environ["INSTALLER_TYPE"] = "joid"
self._test_run(testcase.TestCase.EX_OK,
odlip=self._sdn_controller_ip, odlwebport='8080')
def test_compass(self):
os.environ["INSTALLER_TYPE"] = "compass"
self._test_run(testcase.TestCase.EX_OK,
odlip=urllib.parse.urlparse(self._neutron_url).hostname,
odlrestconfport='8080')
def test_daisy_no_controller_ip(self):
with mock.patch('functest.utils.openstack_utils.get_endpoint',
return_value=ODLTesting._neutron_url):
os.environ["INSTALLER_TYPE"] = "daisy"
self.assertEqual(self.test.run(),
testcase.TestCase.EX_RUN_ERROR)
def test_daisy(self):
os.environ["SDN_CONTROLLER_IP"] = self._sdn_controller_ip
os.environ["INSTALLER_TYPE"] = "daisy"
self._test_run(testcase.TestCase.EX_OK,
odlip=self._sdn_controller_ip, odlwebport='8181',
odlrestconfport='8087')
class ODLArgParserTesting(ODLTesting):
"""The class testing ODLParser."""
# pylint: disable=missing-docstring
def setUp(self):
self.parser = odl.ODLParser()
super(ODLArgParserTesting, self).setUp()
def test_default(self):
self.assertEqual(self.parser.parse_args(), self.defaultargs)
def test_basic(self):
self.defaultargs['neutronurl'] = self._neutron_url
self.defaultargs['odlip'] = self._sdn_controller_ip
self.assertEqual(
self.parser.parse_args(
["--neutronurl={}".format(self._neutron_url),
"--odlip={}".format(self._sdn_controller_ip)]),
self.defaultargs)
@mock.patch('sys.stderr', new_callable=six.StringIO)
def test_fail(self, mock_method):
self.defaultargs['foo'] = 'bar'
with self.assertRaises(SystemExit):
self.parser.parse_args(["--foo=bar"])
self.assertTrue(mock_method.getvalue().startswith("usage:"))
def _test_arg(self, arg, value):
self.defaultargs[arg] = value
self.assertEqual(
self.parser.parse_args(["--{}={}".format(arg, value)]),
self.defaultargs)
def test_odlusername(self):
self._test_arg('odlusername', 'foo')
def test_odlpassword(self):
self._test_arg('odlpassword', 'foo')
def test_osauthurl(self):
self._test_arg('osauthurl', 'http://127.0.0.4:5000/v2')
def test_neutronurl(self):
self._test_arg('neutronurl', 'http://127.0.0.4:9696')
def test_osusername(self):
self._test_arg('osusername', 'foo')
def test_osuserdomainname(self):
self._test_arg('osuserdomainname', 'domain')
def test_osprojectname(self):
self._test_arg('osprojectname', 'foo')
def test_osprojectdomainname(self):
self._test_arg('osprojectdomainname', 'domain')
def test_ospassword(self):
self._test_arg('ospassword', 'foo')
def test_odlip(self):
self._test_arg('odlip', '127.0.0.4')
def test_odlwebport(self):
self._test_arg('odlwebport', '80')
def test_odlrestconfport(self):
self._test_arg('odlrestconfport', '80')
def test_pushtodb(self):
self.defaultargs['pushtodb'] = True
self.assertEqual(self.parser.parse_args(["--{}".format('pushtodb')]),
self.defaultargs)
def test_multiple_args(self):
self.defaultargs['neutronurl'] = self._neutron_url
self.defaultargs['odlip'] = self._sdn_controller_ip
self.assertEqual(
self.parser.parse_args(
["--neutronurl={}".format(self._neutron_url),
"--odlip={}".format(self._sdn_controller_ip)]),
self.defaultargs)
if __name__ == "__main__":
logging.disable(logging.CRITICAL)
unittest.main(verbosity=2)
| 40.642746 | 79 | 0.609857 | 2,858 | 26,052 | 5.252624 | 0.108467 | 0.047429 | 0.024181 | 0.027844 | 0.63769 | 0.575873 | 0.543832 | 0.513056 | 0.48741 | 0.451639 | 0 | 0.013743 | 0.276601 | 26,052 | 640 | 80 | 40.70625 | 0.782819 | 0.033395 | 0 | 0.411067 | 0 | 0 | 0.139924 | 0.04302 | 0 | 0 | 0 | 0 | 0.088933 | 1 | 0.156126 | false | 0.063241 | 0.025692 | 0 | 0.225296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
725ce8235488dcfac8f6ef5c1aeb63ee7251e649 | 571 | py | Python | apps/configuration/fields.py | sotkonstantinidis/testcircle | 448aa2148fbc2c969e60f0b33ce112d4740a8861 | [
"Apache-2.0"
] | 3 | 2019-02-24T14:24:43.000Z | 2019-10-24T18:51:32.000Z | apps/configuration/fields.py | sotkonstantinidis/testcircle | 448aa2148fbc2c969e60f0b33ce112d4740a8861 | [
"Apache-2.0"
] | 17 | 2017-03-14T10:55:56.000Z | 2022-03-11T23:20:19.000Z | apps/configuration/fields.py | sotkonstantinidis/testcircle | 448aa2148fbc2c969e60f0b33ce112d4740a8861 | [
"Apache-2.0"
] | 2 | 2016-02-01T06:32:40.000Z | 2019-09-06T04:33:50.000Z | import unicodedata
from django.forms import fields
class XMLCompatCharField(fields.CharField):
"""
Strip 'control characters', as XML 1.0 does not allow them and the API may
return data in XML.
"""
def to_python(self, value):
value = super().to_python(value=value)
return self.remove_control_characters(value)
    @staticmethod
    def remove_control_characters(value):
        valid_chars = ['\n', '\r']
        return "".join(ch for ch in value if
                       unicodedata.category(ch)[0] != "C" or ch in valid_chars)
| 27.190476 | 79 | 0.644483 | 75 | 571 | 4.8 | 0.626667 | 0.141667 | 0.127778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007042 | 0.25394 | 571 | 20 | 80 | 28.55 | 0.838028 | 0.164623 | 0 | 0 | 0 | 0 | 0.010941 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
7263c0e12b1f9385bffd20a482055a91cac00beb | 996 | py | Python | backend/server/server/wsgi.py | Stinger101/my_uno_ml_service | 47d19f6e5e19e73c465b7ddca889324c9bd5862f | [
"MIT"
] | null | null | null | backend/server/server/wsgi.py | Stinger101/my_uno_ml_service | 47d19f6e5e19e73c465b7ddca889324c9bd5862f | [
"MIT"
] | null | null | null | backend/server/server/wsgi.py | Stinger101/my_uno_ml_service | 47d19f6e5e19e73c465b7ddca889324c9bd5862f | [
"MIT"
] | null | null | null | """
WSGI config for server project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/3.1/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'server.settings')
application = get_wsgi_application()
import inspect
from apps.ml.registry import MLRegistry
from apps.ml.income_classifier.random_forest import RandomForestClassifier
try:
registry = MLRegistry()
rf = RandomForestClassifier()
    registry.add_algorithm(endpoint_name="income_classifier",
                           algorithm_object=rf,
                           algorithm_name="random forest",
                           algorithm_status="production",
                           algorithm_version="0.0.1",
                           owner="Piotr",
                           algorithm_description="Random forest with simple pre and post processing",
                           algorithm_code=inspect.getsource(RandomForestClassifier))
except Exception as e:
print ("Error while loading algorithm to the registry",str(e))
| 33.2 | 315 | 0.800201 | 131 | 996 | 5.954198 | 0.618321 | 0.046154 | 0.046154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005599 | 0.103414 | 996 | 29 | 316 | 34.344828 | 0.867861 | 0.212851 | 0 | 0 | 0 | 0 | 0.233247 | 0.028351 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.384615 | 0 | 0.384615 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
7265b89cf3b023b36a24bc0d387a352f1ee8492b | 1,881 | py | Python | models/toolscontext/errorhandler.py | vinirossa/password_generator_test | dd2f43540c6f58ff9217320c21b246c0be3fc55f | [
"MIT"
] | 2 | 2021-09-10T00:11:00.000Z | 2021-09-10T02:47:54.000Z | models/toolscontext/errorhandler.py | vinirossa/password_generator_test | dd2f43540c6f58ff9217320c21b246c0be3fc55f | [
"MIT"
] | null | null | null | models/toolscontext/errorhandler.py | vinirossa/password_generator_test | dd2f43540c6f58ff9217320c21b246c0be3fc55f | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
""" Module Name
Description...
"""
__author__ = "Vinícius Pereira"
__copyright__ = "Copyright 2021, Vinícius Pereira"
__credits__ = ["Vinícius Pereira","etc."]
__date__ = "2021/04/12"
__license__ = "GPL"
__version__ = "1.0.0"
__pythonversion__ = "3.9.1"
__maintainer__ = "Vinícius Pereira"
__contact__ = "viniciuspsb@gmail.com"
__status__ = "Development"
import sys, os
import logging
import inspect
import datetime
STD_LOG_FORMAT = ("%(asctime)s - %(levelname)s - %(name)s - %(filename)s - %(funcName)s() - ln.%(lineno)d"
                  " - %(message)s")


def file_logger(filename: str,
                level: int = logging.DEBUG,
                format: str = STD_LOG_FORMAT):
    logger = logging.getLogger(__name__)
    logger.setLevel(level)
    formatter = logging.Formatter(format)
    file_handler = logging.FileHandler(filename)
    file_handler.setLevel(level)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger
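# Example (hedged sketch): how a helper like file_logger above is typically
# exercised. The logger name and temp-file path are illustrative, not from
# this module.

```python
import logging
import os
import tempfile

FMT = "%(asctime)s - %(levelname)s - %(name)s - %(message)s"

def make_file_logger(filename, level=logging.DEBUG, fmt=FMT):
    # Mirrors file_logger(): one FileHandler carrying one Formatter.
    logger = logging.getLogger("file_logger_demo")
    logger.setLevel(level)
    handler = logging.FileHandler(filename)
    handler.setLevel(level)
    handler.setFormatter(logging.Formatter(fmt))
    logger.addHandler(handler)
    return logger

fd, log_path = tempfile.mkstemp(suffix=".log")
os.close(fd)
logger = make_file_logger(log_path)
logger.error("something broke")
with open(log_path) as fh:
    line = fh.read()
print(line.strip())
```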
def prompt_logger(error):
    caller = inspect.getframeinfo(inspect.stack()[1][0])
    error_log = {"error_type": error.__class__.__name__,
                 "error_info": error.__doc__,
                 "error_line": error.__traceback__.tb_lineno,
                 "error_file": os.path.basename(caller.filename),
                 "error_time": datetime.datetime.now(),
                 "error_details": str(error).capitalize()}
    print("----- ERROR -----")
    print("Type:", error_log["error_type"])
    print("Info:", error_log["error_info"])
    print("Line:", error_log["error_line"])
    print("File:", error_log["error_file"])
    print("Time:", error_log["error_time"])
    print("Details:", error_log["error_details"])
    return error_log
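# Example (hedged sketch): prompt_logger relies only on attributes every raised
# exception carries. The same introspection, reduced to the pure-Python fields:

```python
def inspect_error(error):
    # Same fields prompt_logger() assembles: class name, docstring, line, message.
    return {
        "error_type": type(error).__name__,
        "error_info": error.__doc__,
        "error_line": error.__traceback__.tb_lineno,
        "error_details": str(error).capitalize(),
    }

try:
    int("not a number")
except ValueError as exc:
    log = inspect_error(exc)

print(log["error_type"], "-", log["error_details"])
```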
def error_box():
    pass


def sql_logger():
    pass


if __name__ == "__main__":
    pass
# --- file: examples/voc2007_extract.py (repo: sis0truk/pretrained-models.pytorch, license: BSD-3-Clause) ---
import os
import argparse
from tqdm import tqdm
import torch
from torch.autograd import Variable
from torch.utils import model_zoo
# http://scikit-learn.org
from sklearn.metrics import accuracy_score
from sklearn.metrics import average_precision_score
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
import sys
sys.path.append('.')
import pretrainedmodels
import pretrainedmodels.utils
import pretrainedmodels.datasets
model_names = sorted(name for name in pretrainedmodels.__dict__
                     if not name.startswith("__")
                     and name.islower()
                     and callable(pretrainedmodels.__dict__[name]))
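# Example (hedged sketch): the model_names expression above is a general
# pattern — scan a module namespace for public, lowercase callables. The same
# filter applied to the stdlib math module:

```python
import math

names = sorted(name for name in vars(math)
               if not name.startswith("__")
               and name.islower()
               and callable(vars(math)[name]))
print(names[:3])
```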
def extract_features_targets(model, features_size, loader, path_data, cuda=False):
    if os.path.isfile(path_data):
        print('Load features from {}'.format(path_data))
        return torch.load(path_data)
    print('\nExtract features on {}set'.format(loader.dataset.set))
    features = torch.Tensor(len(loader.dataset), features_size)
    targets = torch.Tensor(len(loader.dataset), len(loader.dataset.classes))
    for batch_id, batch in enumerate(tqdm(loader)):
        img = batch[0]
        target = batch[2]
        current_bsize = img.size(0)
        from_ = int(batch_id * loader.batch_size)
        to_ = int(from_ + current_bsize)
        if cuda:
            img = img.cuda(non_blocking=True)  # was async=True; 'async' is reserved in Python 3.7+
        input = Variable(img, requires_grad=False)
        output = model(input)
        features[from_:to_] = output.data.cpu()
        targets[from_:to_] = target
    os.system('mkdir -p {}'.format(os.path.dirname(path_data)))
    print('save ' + path_data)
    torch.save((features, targets), path_data)
    print('')
    return features, targets
def train_multilabel(features, targets, classes, train_split, test_split, C=1.0,
                     ignore_hard_examples=True, after_ReLU=False, normalize_L2=False):
    print('\nHyperparameters:\n - C: {}\n - after_ReLU: {}\n - normL2: {}'.format(C, after_ReLU, normalize_L2))
    train_APs = []
    test_APs = []
    for class_id in range(len(classes)):
        # http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
        classifier = SVC(C=C, kernel='linear')
        if ignore_hard_examples:
            train_masks = (targets[train_split][:, class_id] != 0).view(-1, 1)
            train_features = torch.masked_select(features[train_split], train_masks.expand_as(features[train_split])).view(-1, features[train_split].size(1))
            train_targets = torch.masked_select(targets[train_split], train_masks.expand_as(targets[train_split])).view(-1, targets[train_split].size(1))
            test_masks = (targets[test_split][:, class_id] != 0).view(-1, 1)
            test_features = torch.masked_select(features[test_split], test_masks.expand_as(features[test_split])).view(-1, features[test_split].size(1))
            test_targets = torch.masked_select(targets[test_split], test_masks.expand_as(targets[test_split])).view(-1, targets[test_split].size(1))
        else:
            train_features = features[train_split]
            train_targets = targets[train_split]
            test_features = features[test_split]
            test_targets = targets[test_split]  # was features[test_split], a copy/paste bug
        if after_ReLU:
            train_features[train_features < 0] = 0
            test_features[test_features < 0] = 0
        if normalize_L2:
            train_norm = torch.norm(train_features, p=2, dim=1).unsqueeze(1)
            train_features = train_features.div(train_norm.expand_as(train_features))
            test_norm = torch.norm(test_features, p=2, dim=1).unsqueeze(1)
            test_features = test_features.div(test_norm.expand_as(test_features))
        train_X = train_features.numpy()
        train_y = (train_targets[:, class_id] != -1).numpy()  # uses hard examples if not ignored
        test_X = test_features.numpy()
        test_y = (test_targets[:, class_id] != -1).numpy()
        classifier.fit(train_X, train_y)  # train parameters of the classifier
        train_preds = classifier.predict(train_X)
        train_acc = accuracy_score(train_y, train_preds) * 100
        train_AP = average_precision_score(train_y, train_preds) * 100
        train_APs.append(train_AP)
        test_preds = classifier.predict(test_X)
        test_acc = accuracy_score(test_y, test_preds) * 100
        test_AP = average_precision_score(test_y, test_preds) * 100
        test_APs.append(test_AP)
        print('class "{}" ({}/{}):'.format(classes[class_id], test_y.sum(), test_y.shape[0]))
        print(' - {:8}: acc {:.2f}, AP {:.2f}'.format(train_split, train_acc, train_AP))
        print(' - {:8}: acc {:.2f}, AP {:.2f}'.format(test_split, test_acc, test_AP))
    print('all classes:')
    print(' - {:8}: mAP {:.4f}'.format(train_split, sum(train_APs) / len(classes)))
    print(' - {:8}: mAP {:.4f}'.format(test_split, sum(test_APs) / len(classes)))
##########################################################################
# main
##########################################################################
parser = argparse.ArgumentParser(
    description='Train/Evaluate models',
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--dir_outputs', default='/tmp/outputs', type=str, help='')
parser.add_argument('--dir_datasets', default='/tmp/datasets', type=str, help='')
parser.add_argument('--C', default=1, type=float, help='')
parser.add_argument('-b', '--batch_size', default=50, type=int, help='')  # was type=float; DataLoader needs an int
parser.add_argument('-a', '--arch', default='alexnet', choices=model_names,
                    help='model architecture: ' +
                         ' | '.join(model_names) +
                         ' (default: alexnet)')
parser.add_argument('--train_split', default='train', type=str, help='')
parser.add_argument('--test_split', default='val', type=str, help='')
parser.add_argument('--cuda', const=True, nargs='?', type=bool, help='')
def main():
    global args
    args = parser.parse_args()
    print('\nCUDA status: {}'.format(args.cuda))
    print('\nLoad pretrained model on Imagenet')
    model = pretrainedmodels.__dict__[args.arch](num_classes=1000, pretrained='imagenet')
    model.eval()
    if args.cuda:
        model.cuda()
    features_size = model.last_linear.in_features
    model.last_linear = pretrainedmodels.utils.Identity()  # Trick to get inputs (features) from last_linear
    print('\nLoad datasets')
    tf_img = pretrainedmodels.utils.TransformImage(model)
    train_set = pretrainedmodels.datasets.Voc2007Classification(args.dir_datasets, 'train', transform=tf_img)
    val_set = pretrainedmodels.datasets.Voc2007Classification(args.dir_datasets, 'val', transform=tf_img)
    test_set = pretrainedmodels.datasets.Voc2007Classification(args.dir_datasets, 'test', transform=tf_img)
    train_loader = torch.utils.data.DataLoader(train_set, batch_size=args.batch_size, shuffle=False, num_workers=2)
    val_loader = torch.utils.data.DataLoader(val_set, batch_size=args.batch_size, shuffle=False, num_workers=2)
    test_loader = torch.utils.data.DataLoader(test_set, batch_size=args.batch_size, shuffle=False, num_workers=2)
    print('\nLoad features')
    dir_features = os.path.join(args.dir_outputs, 'data/{}'.format(args.arch))
    path_train_data = '{}/{}set.pth'.format(dir_features, 'train')
    path_val_data = '{}/{}set.pth'.format(dir_features, 'val')
    path_test_data = '{}/{}set.pth'.format(dir_features, 'test')
    features = {}
    targets = {}
    features['train'], targets['train'] = extract_features_targets(model, features_size, train_loader, path_train_data, args.cuda)
    features['val'], targets['val'] = extract_features_targets(model, features_size, val_loader, path_val_data, args.cuda)
    features['test'], targets['test'] = extract_features_targets(model, features_size, test_loader, path_test_data, args.cuda)
    features['trainval'] = torch.cat([features['train'], features['val']], 0)
    targets['trainval'] = torch.cat([targets['train'], targets['val']], 0)
    print('\nTrain Support Vector Machines')
    if args.train_split == 'train' and args.test_split == 'val':
        print('\nHyperparameters search: train multilabel classifiers (one-versus-all) on train/val')
    elif args.train_split == 'trainval' and args.test_split == 'test':
        print('\nEvaluation: train a multilabel classifier on trainval/test')
    else:
        raise ValueError('Trying to train on {} and eval on {}'.format(args.train_split, args.test_split))
    train_multilabel(features, targets, train_set.classes, args.train_split, args.test_split, C=args.C)
if __name__ == '__main__':
    main()
# --- file: src/gui/tcltk/tcl/tests/langbench/proc.py (repo: gspu/bitkeeper, license: Apache-2.0) ---
#!/usr/bin/python
def a(val):
    return b(val)

def b(val):
    return c(val)

def c(val):
    return d(val)

def d(val):
    return e(val)

def e(val):
    return f(val)

def f(val):
    return g(val, 2)

def g(v1, v2):
    return h(v1, v2, 3)

def h(v1, v2, v3):
    return i(v1, v2, v3, 4)

def i(v1, v2, v3, v4):
    return j(v1, v2, v3, v4, 5)

def j(v1, v2, v3, v4, v5):
    return v1 + v2 + v3 + v4 + v5

n = 100000
while n > 0:
    x = a(n)
    n = n - 1
print("x=%d" % x)  # was a Python 2 print statement
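# Example (hedged sketch): the ten-function chain above is a call-overhead
# microbenchmark; the arithmetic it performs collapses to n + 14 (g..j add
# 2, 3, 4 and 5 along the way), which makes the expected value easy to check:

```python
def chain_sum(n):
    # Equivalent arithmetic to a(n) in proc.py, without the ten nested calls.
    return n + 2 + 3 + 4 + 5

print("x=%d" % chain_sum(1))
```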
# --- file: src/metrics.py (repo: dmitryrubtsov/Recommender-systems, license: MIT) ---
import pandas as pd
import numpy as np
import swifter
def money_precision_at_k(y_pred: pd.Series, y_true: pd.Series, item_price, k=5):
    y_pred = y_pred.swifter.progress_bar(False).apply(pd.Series)
    user_filter = ~(y_true.swifter.progress_bar(False).apply(len) < k)
    y_pred = y_pred.loc[user_filter]
    y_true = y_true.loc[user_filter]
    prices_recommended = y_pred.swifter.progress_bar(False).applymap(lambda item: item_price.price.get(item))
    flags = y_pred.loc[:, :k - 1].swifter.progress_bar(False) \
        .apply(lambda row: np.isin(np.array(row), y_true.get(row.name)), axis=1) \
        .swifter.progress_bar(False).apply(pd.Series)
    metric = (
        (flags * prices_recommended.loc[:, :k - 1]).sum(axis=1) / prices_recommended.loc[:, :k - 1].sum(axis=1)
    ).mean()
    return metric
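# Example (hedged sketch): stripped of the pandas/swifter machinery, the
# per-user computation above is price-weighted precision@k — the share of the
# top-k recommendation value the user actually bought. All names below are
# illustrative:

```python
def money_precision_at_k_single(recommended, bought, price, k=5):
    """Share of the top-k recommendation value that was actually bought."""
    top_k = recommended[:k]
    bought_set = set(bought)
    total = sum(price[item] for item in top_k)
    hit = sum(price[item] for item in top_k if item in bought_set)
    return hit / total

prices = {"a": 100, "b": 50, "c": 50}
print(money_precision_at_k_single(["a", "b", "c"], ["a", "c"], prices, k=3))
```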
# --- file: src/utils/pythonSrc/watchFaceParser/models/elements/battery/batteryGaugeElement.py (repo: chm-dev/amazfitGTSwatchfaceBundle, license: MIT) ---
import logging
from watchFaceParser.models.elements.common.imageSetElement import ImageSetElement
class BatteryGaugeElement(ImageSetElement):
    def __init__(self, parameter, parent, name=None):
        super(BatteryGaugeElement, self).__init__(parameter=parameter, parent=parent, name=name)

    def draw3(self, drawer, resources, state):
        assert type(resources) == list
        super(BatteryGaugeElement, self).draw3(drawer, resources, int(state.getBatteryLevel() * self.getImagesCount() / 100))
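# Example (hedged sketch): draw3 above maps a 0-100 battery level onto an
# image-set index with int(level * count / 100). Note that a full battery
# yields index == images_count, one past the last image, which the parent
# class presumably clamps:

```python
def gauge_index(battery_level, images_count):
    # Same arithmetic as draw3(): scale a 0-100 level into an image index.
    return int(battery_level * images_count / 100)

print([gauge_index(level, 10) for level in (0, 49, 99, 100)])
```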
# --- file: web/migrations/0007_auto_20180824_0925.py (repo: zinaukarenku/zkr-platform, license: Apache-2.0) ---
# Generated by Django 2.1 on 2018-08-24 09:25
from django.db import migrations, models
import web.models
class Migration(migrations.Migration):

    dependencies = [
        ('web', '0006_organizationmember_user'),
    ]

    operations = [
        migrations.AlterField(
            model_name='organizationpartner',
            name='logo',
            field=models.ImageField(upload_to=web.models.OrganizationPartner._organization_partner_logo_file),
        ),
    ]
# --- file: sqlmat/utils.py (repo: haobtc/sqlmat, license: MIT) ---
from typing import Tuple, List, Optional
import json
import sys
import os
import shlex
import asyncio
import argparse
import logging
import tempfile
from urllib.parse import urlparse
logger = logging.getLogger(__name__)
def find_sqlmat_json() -> Optional[dict]:
    json_path = os.getenv('SQLMAT_JSON_PATH')
    if json_path:
        with open(json_path) as f:
            cfg = json.load(f)
        return cfg
    # iterate through the current dir up to the root dir "/" to find a
    # .sqlmat.json
    workdir = os.path.abspath(os.getcwd())
    while workdir:
        json_path = os.path.join(workdir, '.sqlmat.json')
        if os.path.exists(json_path):
            with open(json_path) as f:
                cfg = json.load(f)
            return cfg
        parentdir = os.path.abspath(os.path.join(workdir, '..'))
        if parentdir == workdir:
            break
        workdir = parentdir
    logger.warning('fail to find .sqlmat.json')
    return None
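# Example (hedged sketch): the walk-up search above, demonstrated against a
# temporary directory tree; the helper name and fixture layout are illustrative.

```python
import json
import os
import tempfile

def find_json_upwards(start, name=".sqlmat.json"):
    # Walk from 'start' toward the filesystem root, returning the first match.
    workdir = os.path.abspath(start)
    while True:
        path = os.path.join(workdir, name)
        if os.path.exists(path):
            with open(path) as f:
                return json.load(f)
        parent = os.path.abspath(os.path.join(workdir, ".."))
        if parent == workdir:  # reached the filesystem root
            return None
        workdir = parent

root = tempfile.mkdtemp()
nested = os.path.join(root, "a", "b")
os.makedirs(nested)
with open(os.path.join(root, ".sqlmat.json"), "w") as f:
    json.dump({"databases": {"default": {"dsn": "postgres://u@h/db"}}}, f)

cfg = find_json_upwards(nested)  # found two levels up
print(cfg["databases"]["default"]["dsn"])
```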
def find_dsn(prog: str, desc: str) -> Tuple[str, List[str]]:
    parser = argparse.ArgumentParser(
        prog=prog,
        description=desc)
    parser.add_argument('-d', '--dsn',
                        type=str,
                        help='postgresql dsn')
    parser.add_argument('-g', '--db',
                        type=str,
                        default='default',
                        help='postgresql db instance defined in .sqlmat.json')
    parser.add_argument('callee_args',
                        type=str,
                        nargs='*',
                        help='command line arguments of callee programs')
    # from arguments
    args = parser.parse_args()
    if args.dsn:
        return args.dsn, args.callee_args
    # find dsn from ./.sqlmat.json
    cfg = find_sqlmat_json()
    if cfg:
        dsn = cfg['databases'][args.db]['dsn']
        assert isinstance(dsn, str)
        return dsn, args.callee_args
    # default dsn using username
    user = os.getenv('USER', '')
    default_dsn = f'postgres://{user}@127.0.0.1:5432/{args.db}'
    logger.warning('no postgres dsn specified, use %s instead', default_dsn)
    return default_dsn, args.callee_args
def joinargs(callee_args: List[str]) -> str:
    if hasattr(shlex, 'join'):
        return shlex.join(callee_args)
    else:
        return ' '.join(shlex.quote(a) for a in callee_args)
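# Example (hedged sketch): joinargs papers over shlex.join being new in
# Python 3.8; both branches produce the same shell-safe string. The helper
# name below is illustrative:

```python
import shlex

def quote_args(callee_args):
    # Same fallback as joinargs(): shlex.join only exists on Python >= 3.8.
    if hasattr(shlex, "join"):
        return shlex.join(callee_args)
    return " ".join(shlex.quote(a) for a in callee_args)

print(quote_args(["-c", "SELECT 1;", "my db"]))
```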
# run psql client
async def run_shell(dsn: str, callee_args: List[str]) -> None:
    p = urlparse(dsn)
    username = p.username or ''
    password = p.password or ''
    dbname = p.path[1:]
    hostname = p.hostname
    port = p.port or 5432
    temp_pgpass = tempfile.NamedTemporaryFile(mode='w')
    print(
        '{}:{}:{}:{}:{}'.format(hostname, port, dbname, username, password),
        file=temp_pgpass,
        flush=True)
    os.environ['PGPASSFILE'] = temp_pgpass.name
    command = 'psql -h{} -p{} -U{} {} {}'.format(hostname, port, username, joinargs(callee_args), dbname)
    proc = await asyncio.create_subprocess_shell(command)
    await proc.communicate()
def cl_run_shell() -> None:
    dsn, callee_args = find_dsn('sqlmat-shell', 'run psql client shell')
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run_shell(dsn, callee_args))
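# Example (hedged sketch): run_shell pulls the connection pieces out of the
# DSN with urllib.parse.urlparse; the split, on an illustrative DSN:

```python
from urllib.parse import urlparse

p = urlparse("postgres://alice:s3cret@db.example.com:6432/mydb")
parts = {
    "username": p.username or "",
    "password": p.password or "",
    "hostname": p.hostname,
    "port": p.port or 5432,       # fall back to the PostgreSQL default
    "dbname": p.path[1:],         # strip the leading '/'
}
print(parts)
```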
# run dbdump
async def run_dbdump(dsn: str, callee_args: List[str]) -> None:
    p = urlparse(dsn)
    username = p.username or ''
    password = p.password or ''
    dbname = p.path[1:]
    hostname = p.hostname
    port = p.port or 5432
    temp_pgpass = tempfile.NamedTemporaryFile(mode='w')
    print(
        '{}:{}:{}:{}:{}'.format(hostname, port, dbname, username, password),
        file=temp_pgpass,
        flush=True)
    os.environ['PGPASSFILE'] = temp_pgpass.name
    command = 'pg_dump -h{} -p{} -U{} {} {}'.format(hostname, port, username, joinargs(callee_args), dbname)
    proc = await asyncio.create_subprocess_shell(command)
    await proc.communicate()
def cl_run_dbdump() -> None:
    dsn, callee_args = find_dsn('sqlmat-dump', 'dump database')
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run_dbdump(dsn, callee_args))
# generate alembic migrations
def gen_migrate(dsn: str) -> None:
    init_data = ALEMBIC_INIT.replace('{{dsn}}', dsn)
    with open('alembic.ini', 'w') as f:
        f.write(init_data)


def cl_gen_migrate() -> None:
    dsn, callee_args = find_dsn('sqlmat-genmigrate', 'generate alembic migration')
    gen_migrate(dsn)
    print('Wrote alembic.ini')
ALEMBIC_INIT = '''\
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = migrations
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# timezone to use when rendering the date
# within the migration file as well as the filename.
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; this defaults
# to migrations/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
# version_locations = %(here)s/bar %(here)s/bat migrations/versions
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
#sqlalchemy.url = driver://user:pass@localhost/dbname
sqlalchemy.url = {{dsn}}
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
'''
| 28.541667 | 108 | 0.65515 | 801 | 6,165 | 4.928839 | 0.319601 | 0.040527 | 0.016464 | 0.017224 | 0.270517 | 0.270517 | 0.270517 | 0.24772 | 0.24772 | 0.24772 | 0 | 0.005225 | 0.223844 | 6,165 | 215 | 109 | 28.674419 | 0.819854 | 0.032928 | 0 | 0.254438 | 0 | 0 | 0.372081 | 0.020326 | 0 | 0 | 0 | 0 | 0.005917 | 1 | 0.04142 | false | 0.071006 | 0.059172 | 0 | 0.147929 | 0.017751 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
# --- file: eggs/ZConfig-3.0.4-py2.7.egg/ZConfig/tests/test_cookbook.py (repo: salayhin/talkofacta, license: MIT) ---
##############################################################################
#
# Copyright (c) 2004 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Tests of examples from the online cookbook, so we don't break them
down the road. Unless we really mean to.
The ZConfig Cookbook is available online at:
http://dev.zope.org/Zope3/ZConfig
"""
import ZConfig.tests.support
import unittest
def basic_key_mapping_password_to_passwd(key):
    # Lower-case the key since that's what basic-key does:
    key = key.lower()
    # Now map password to passwd:
    if key == "password":
        key = "passwd"
    return key
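# Example (hedged sketch): the keytype hook above normalizes every key before
# the datatype conversion runs; its behavior in isolation (trimmed copy):

```python
def basic_key_mapping_password_to_passwd(key):
    key = key.lower()        # basic-key lower-cases first
    if key == "password":    # then the rename applies
        key = "passwd"
    return key

print([basic_key_mapping_password_to_passwd(k) for k in ("USERID", "Password")])
```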
def user_info_conversion(section):
    return section
class CookbookTestCase(ZConfig.tests.support.TestHelper, unittest.TestCase):

    def test_rewriting_key_names(self):
        schema = self.load_schema_text("""
            <schema prefix='%s'>
              <sectiontype name='userinfo' datatype='.user_info_conversion'
                           keytype='.basic_key_mapping_password_to_passwd'>
                <key name='userid' datatype='integer'/>
                <key name='username' datatype='identifier'/>
                <key name='password'/>
              </sectiontype>
              <section type='userinfo' name='*' attribute='userinfo'/>
            </schema>
            """ % __name__)
        config = self.load_config_text(schema, """\
            <userinfo>
              USERID 42
              USERNAME foouser
              PASSWORD yeah-right
            </userinfo>
            """)
        self.assertEqual(config.userinfo.userid, 42)
        self.assertEqual(config.userinfo.username, "foouser")
        self.assertEqual(config.userinfo.passwd, "yeah-right")
        self.assertTrue(not hasattr(config.userinfo, "password"))
def test_suite():
    return unittest.makeSuite(CookbookTestCase)


if __name__ == "__main__":
    unittest.main(defaultTest="test_suite")
# --- file: src/bullet_point/migrations/0006_bulletpoint_sift_risk_score.py (repo: ResearchHub/ResearchHub-Backend-Open, license: MIT) ---
# Generated by Django 2.2 on 2020-11-07 01:03
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('bullet_point', '0005_bulletpoint_created_location'),
    ]

    operations = [
        migrations.AddField(
            model_name='bulletpoint',
            name='sift_risk_score',
            field=models.FloatField(blank=True, null=True),
        ),
    ]
# --- file: main.py (repo: ezhkovskii/instagrapi-rest, license: MIT) ---
import pkg_resources
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi
from starlette.responses import RedirectResponse, JSONResponse
from routers import auth, media, video, photo, user, igtv, clip, album, story, hashtag, direct
app = FastAPI()
app.include_router(auth.router)
app.include_router(media.router)
app.include_router(video.router)
app.include_router(photo.router)
app.include_router(user.router)
app.include_router(igtv.router)
app.include_router(clip.router)
app.include_router(album.router)
app.include_router(story.router)
app.include_router(hashtag.router)
app.include_router(direct.router)
@app.get("/", tags=["system"], summary="Redirect to /docs")
async def root():
    """Redirect to /docs"""
    return RedirectResponse(url="/docs")
@app.get("/version", tags=["system"], summary="Get dependency versions")
async def version():
    """Get dependency versions"""
    versions = {}
    for name in ('instagrapi', ):
        item = pkg_resources.require(name)
        if item:
            versions[name] = item[0].version
    return versions
@app.exception_handler(Exception)
async def handle_exception(request, exc: Exception):
    return JSONResponse({
        "detail": str(exc),
        "exc_type": str(type(exc).__name__)
    }, status_code=500)
def custom_openapi():
    if app.openapi_schema:
        return app.openapi_schema
    # for route in app.routes:
    #     body_field = getattr(route, 'body_field', None)
    #     if body_field:
    #         body_field.type_.__name__ = 'name'
    openapi_schema = get_openapi(
        title="instagrapi-rest",
        version="1.0.0",
        description="RESTful API Service for instagrapi",
        routes=app.routes,
    )
    app.openapi_schema = openapi_schema
    return app.openapi_schema
app.openapi = custom_openapi
# --- file: synbioinformatica.py (repo: nemami/synbioinformatica, license: BSD-2-Clause) ---
#!/usr/bin/python -tt
import sys, re, math
from decimal import *
# TODO: work on naming scheme
# TODO: add more ORIs
# TODO: assemblytree alignment
# TODO: Wobble, SOEing
# TODO: (digestion, ligation) redundant products
# TODO: for PCR and Sequencing, renormalize based on LCS
# TODO: tutorials
dna_alphabet = {'A':'A', 'C':'C', 'G':'G', 'T':'T',
'R':'AG', 'Y':'CT', 'W':'AT', 'S':'CG', 'M':'AC', 'K':'GT',
'H':'ACT', 'B':'CGT', 'V':'ACG', 'D':'AGT',
'N':'ACGT',
'a': 'a', 'c': 'c', 'g': 'g', 't': 't',
'r':'ag', 'y':'ct', 'w':'at', 's':'cg', 'm':'ac', 'k':'gt',
'h':'act', 'b':'cgt', 'v':'acg', 'd':'agt',
'n':'acgt'}
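The dna_alphabet table maps each IUPAC degenerate base to the set of literal bases it can stand for. As an illustration (degenerate_to_regex is a hypothetical helper, not part of this module), it can be used to turn a degenerate recognition site into a regular expression:

```python
# Hypothetical helper: expand IUPAC codes into a regex character class,
# using a small subset of the dna_alphabet table above.
dna_alphabet = {'A': 'A', 'C': 'C', 'G': 'G', 'T': 'T',
                'R': 'AG', 'Y': 'CT', 'N': 'ACGT'}

def degenerate_to_regex(site):
    # single-base codes stay literal; multi-base codes become [..] classes
    return ''.join('[%s]' % dna_alphabet[b] if len(dna_alphabet[b]) > 1 else b
                   for b in site)
```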
complement_alphabet = {'A':'T', 'T':'A', 'C':'G', 'G':'C','R':'Y', 'Y':'R',
'W':'W', 'S':'S', 'M':'K', 'K':'M', 'H':'D', 'D':'H',
'B':'V', 'V':'B', 'N':'N','a':'t', 'c':'g', 'g':'c',
't':'a', 'r':'y', 'y':'r', 'w':'w', 's':'s','m':'k',
'k':'m', 'h':'d', 'd':'h', 'b':'v', 'v':'b', 'n':'n'}
gencode = {
'ATA':'I', 'ATC':'I', 'ATT':'I', 'ATG':'M',
'ACA':'T', 'ACC':'T', 'ACG':'T', 'ACT':'T',
'AAC':'N', 'AAT':'N', 'AAA':'K', 'AAG':'K',
'AGC':'S', 'AGT':'S', 'AGA':'R', 'AGG':'R',
'CTA':'L', 'CTC':'L', 'CTG':'L', 'CTT':'L',
'CCA':'P', 'CCC':'P', 'CCG':'P', 'CCT':'P',
'CAC':'H', 'CAT':'H', 'CAA':'Q', 'CAG':'Q',
'CGA':'R', 'CGC':'R', 'CGG':'R', 'CGT':'R',
'GTA':'V', 'GTC':'V', 'GTG':'V', 'GTT':'V',
'GCA':'A', 'GCC':'A', 'GCG':'A', 'GCT':'A',
'GAC':'D', 'GAT':'D', 'GAA':'E', 'GAG':'E',
'GGA':'G', 'GGC':'G', 'GGG':'G', 'GGT':'G',
'TCA':'S', 'TCC':'S', 'TCG':'S', 'TCT':'S',
'TTC':'F', 'TTT':'F', 'TTA':'L', 'TTG':'L',
'TAC':'Y', 'TAT':'Y', 'TAA':'_', 'TAG':'_',
'TGC':'C', 'TGT':'C', 'TGA':'_', 'TGG':'W'}
# Description: converts DNA string to amino acid string
def translate( sequence ):
"""Return the translated protein from 'sequence' assuming +1 reading frame"""
return ''.join([gencode.get(sequence[3*i:3*i+3],'X') for i in range(len(sequence)//3)])
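A quick usage sketch of translate; the gencode dict here is a self-contained copy of just the codons the example needs, and unknown codons fall back to 'X' exactly as above:

```python
gencode = {'ATG': 'M', 'AAA': 'K', 'TAA': '_'}  # subset for the example

def translate(sequence):
    # walk the sequence codon by codon in the +1 frame; incomplete
    # trailing bases are ignored, unknown codons map to 'X'
    return ''.join([gencode.get(sequence[3*i:3*i+3], 'X')
                    for i in range(len(sequence)//3)])
```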
# Description: read in all enzymes from REase tsv into dict EnzymeDictionary
def EnzymeDictionary():
EnzymeDictionary = {}
fh = open('REases.tsv', 'rU')
for line in fh:
card = line.rstrip().split('\t')
card[0] = re.sub(r'\-','_',card[0])
EnzymeDictionary[card[0]] = restrictionEnzyme(*card)
return EnzymeDictionary
# Description: Suffix Tree implementation for the purpose of PCR Longest Common Substring identification
# Code adapted from: http://chipsndips.livejournal.com/2005/12/07/
# Define a class for a node in the suffix tree
class SuffixNode(dict):
def __init__(self):
self.suffixLink = None # Suffix link as defined by Ukkonen
class LCS:
def __init__(self,str1,str2):
# Hack for terminal 3' end matching
str = str1 + str2 + '#'
inf = len(str)
self.str = str #Keep a reference to str to ensure the string is not garbage collected
self.seed = SuffixNode() #Seed is a dummy node. Suffix link of root points to seed. For any char,there is a link from seed to root
self.root = SuffixNode() # Root of the suffix tree
self.root.suffixLink = self.seed
self.root.depth = 0
self.deepest = 0,0
# For each character of str[i], create suffixtree for str[0:i]
s = self.root; k=0
for i in range(len(str)):
self.seed[str[i]] = -2,-2,self.root
oldr = self.seed
t = str[i]
#Traverse the boundary path of the suffix tree for str[0:i-1]
while True:
# Descend the suffix tree until state s has a transition for the string str[k:i-1]
while i>k:
kk,pp,ss = s[str[k]]
if pp-kk < i-k:
k = k + pp-kk+1
s = ss
else:
break
# Exit this loop if s has a transition for the string str[k:i] (it means str[k:i] is repeated);
# Otherwise, split the state if necessary
if i>k:
tk = str[k]
kp,pp,sp = s[tk]
if t.lower() == str[kp+i-k].lower():
break
else: # Split the node
r = SuffixNode()
j = kp+i-k
tj = str[j]
r[tj] = j, pp, sp
s[str[kp]] = kp,j-1, r
r.depth = s.depth + (i-k)
sp.depth = r.depth + pp - j + 1
# Original statement was: if j<len(str1)<i and r.depth>self.deepest[0]:
# Adapted for PCR by restricting LCS matches to primer terminal 3' end
if len(str1)<i and r.depth>self.deepest[0] and j == len(str1) - 1:
self.deepest = r.depth, j-1
elif t in s:
break
else:
r = s
# Add a transition from r that starts with the letter str[i]
tmp = SuffixNode()
r[t] = i,inf,tmp
# Prepare for next iteration
oldr.suffixLink = r
oldr = r
s = s.suffixLink
# Last remaining endcase
oldr.suffixLink = s
def LongestCommonSubstring(self):
start, end = self.deepest[1]-self.deepest[0]+1, self.deepest[1]+1
return (self.str[start:end],start,end)
def LCSasRegex(self, currentPrimer, template, fwd):
annealingRegion = self.str[self.deepest[1] - self.deepest[0] + 1 : self.deepest[1] + 1]
if not fwd:
annealingRegion = reverseComplement(annealingRegion)
(AnnealingMatches, matchCount, MatchIndicesTuple) = ([], 0, ())
annealingRegex = re.compile(annealingRegion, re.IGNORECASE)
matchList = annealingRegex.finditer(template)
for match in matchList:
if primerTm(match.group()) > 45:
matchCount += 1
MatchIndicesTuple = (match.start(), match.end())
PrimerStub = currentPrimer[0:len(currentPrimer)-len(annealingRegion)-1]
return (matchCount, MatchIndicesTuple, PrimerStub)
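The LCS class above is specialized for primer matching (its deepest-node bookkeeping is restricted to the primer's 3' end). For reference, the plain longest-common-substring problem it generalizes can be sketched with a simple dynamic program; this DP is an independent, case-insensitive illustration, not the implementation used here:

```python
def lcs_dp(a, b):
    # prev[j] / curr[j] hold the length of the common suffix of a[:i], b[:j]
    best, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i-1].lower() == b[j-1].lower():
                curr[j] = prev[j-1] + 1
                if curr[j] > best:
                    best, best_end = curr[j], i
        prev = curr
    return a[best_end - best:best_end]
```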
# Description: identifies errors in primer design and raises exceptions based on errors and their context
def PCRErrorHandling(InputTuple):
(fwd,matchCount,matchedAlready,nextOrientation,currentPrimer,template) = InputTuple
if len(currentPrimer.sequence) > 7:
abbrev = currentPrimer.sequence[:3]+'...'+currentPrimer.sequence[-3:]
else:
abbrev = currentPrimer.sequence
if fwd:
if matchCount > 1: # if matches in forward direction more than once
if nextOrientation == 2: # ... but was supposed to match in reverse direction
raise Exception('*Primer error*: primers both anneal in forward (5\'->3\') orientation AND primer '+abbrev+' anneals to multiple sites in template.')
raise Exception('*Primer error*: primer '+abbrev+' anneals to multiple sites in template.')
elif matchCount == 1: # if matches in the forward direction exactly once
if nextOrientation == 2: # ... but was supposed to match in reverse direction
raise Exception('*Primer error*: primers both anneal in forward (5\'->3\') orientation.')
matchedAlready = 1
return matchedAlready
else:
if matchCount > 1: # if matches in reverse direction more than once
if matchedAlready == 1: # ... and already matched in forward direction
if nextOrientation == 1: # ... but was supposed to match in forward direction
raise Exception('*Primer error*: primers both anneal in reverse (3\'->5\') orientation AND primer '+abbrev+' anneals to multiple sites in template AND primer '+abbrev+' anneals in both orientations.')
raise Exception('*Primer error*: primer '+abbrev+' anneals to multiple sites in template AND primer '+abbrev+' anneals in both orientations.')
if nextOrientation == 1:
raise Exception('*Primer error*: primers both anneal in reverse (3\'->5\') orientation AND primer '+abbrev+' anneals to multiple sites in template.')
raise Exception('*Primer error*: primer '+abbrev+' anneals to multiple sites in template.')
elif matchCount == 1: # if matches in the reverse direction exactly once
if matchedAlready == 1: # ... and already matched in forward direction
if nextOrientation == 1: # ... but was supposed to match in forward direction
raise Exception('*Primer error*: both primers have same reverse (3\'->5\') orientation AND primer '+abbrev+' anneals in both orientations.')
raise Exception('*Primer error*: primer '+abbrev+' primes in both orientations.')
else:
matchedAlready = 2
if matchedAlready == 0: # if no matches
raise Exception('*Primer error*: primer '+abbrev+' does not anneal in either orientation.')
return matchedAlready
# Description: assigns relationships for PCR inputs and PCR product for assembly tree purposes
def pcrPostProcessing(inputTuple, parent, fwdTM, revTM):
(primer1DNA, primer2DNA, templateDNA) = inputTuple
for child in inputTuple:
child.addParent(parent)
parent.setChildren(inputTuple)
intVal = int(round(len(parent.sequence)/1000+0.5))
parent.setTimeStep(intVal)
parent.addMaterials(['Polymerase','dNTP mix','Polymerase buffer'])
thermoCycle = str(intVal)+'K'+str(int(round(max(fwdTM,revTM))))
parent.instructions = thermoCycle+' PCR template '+templateDNA.name+' with primers '+primer1DNA.name+', '+primer2DNA.name
return parent
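pcrPostProcessing encodes the thermocycler program as '<kb>K<annealing Tm>'. The rounding can be re-derived in isolation (thermocycle_label is a hypothetical stand-alone version of that arithmetic, using float division):

```python
def thermocycle_label(product_len, fwdTM, revTM):
    # extension time scales with product length (roughly one unit per kb)
    kb = int(round(product_len / 1000.0 + 0.5))
    # annealing temperature is set by the higher-melting primer
    return str(kb) + 'K' + str(int(round(max(fwdTM, revTM))))
```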
# Description: PCR() function constructs generalized suffix tree for template and a given primer to identify annealing region,
# and raises PrimerError exceptions for different cases of failed PCR as a result of primer design
# Note: PCR() product is not case preserving
def PCR(primer1DNA, primer2DNA, templateDNA):
for pcrInput in (primer1DNA, primer2DNA, templateDNA):
if not isinstance(pcrInput, DNA):
raise Exception('*PCR error*: PCR function was passed a non-DNA argument.')
# Suffix Tree string initialization, non-alphabet character concatenation
(template, primer_1, primer_2) = (templateDNA.sequence, primer1DNA, primer2DNA)
# Tuple of assemblyTree 'children', for the purpose of child/parent assignment
inputTuple = (primer1DNA, primer2DNA, templateDNA)
# Initialization of all parameters, where indices is the start / stop indices + direction of annealing primer sequences
(fwdTM, revTM, indices, counter, rightStub, leftStub, nextOrientation) = (0,0,[0,0,0,0,0,0],0,'','',0)
try:
# NOTE: no assumptions made about input primer directionality
for currentPrimer in (primer_1, primer_2):
currentSequence = currentPrimer.sequence + '$'
fwdMatch = LCS(currentSequence.upper(), template.upper())
(matchCount, forwardMatchIndicesTuple, forwardPrimerStub) = fwdMatch.LCSasRegex(currentSequence, template, 1)
(matchedAlready, start, stop) = (0,0,0) # Defaults
# Forward case error handling: delegated to PCRErrorHandling function
matchedAlready = PCRErrorHandling((1,matchCount,matchedAlready,nextOrientation,currentPrimer,template))
revMatch = LCS(currentSequence.upper(),reverseComplement(template).upper())
(matchCount, reverseMatchIndicesTuple, reversePrimerStub) = revMatch.LCSasRegex(currentSequence, template, 0)
# Reverse case error handling: delegated to PCRErrorHandling function
matchedAlready = PCRErrorHandling((0,matchCount,matchedAlready,nextOrientation,currentPrimer,template))
if matchedAlready == 1:
(indices[counter], indices[counter+1], indices[counter+2]) = (forwardMatchIndicesTuple[0], forwardMatchIndicesTuple[1], 'fwd')
(counter,nextOrientation,leftStub) = (counter+3, 2, forwardPrimerStub)
elif matchedAlready == 2:
(indices[counter], indices[counter+1], indices[counter+2]) = (reverseMatchIndicesTuple[0], reverseMatchIndicesTuple[1], 'rev')
(counter,nextOrientation,rightStub) = (counter+3, 1, reverseComplement(reversePrimerStub))
if indices[2] == 'fwd':
(fwdStart, fwdEnd, revStart, revEnd) = (indices[0], indices[1], indices[3], indices[4])
else:
(fwdStart, fwdEnd, revStart, revEnd) = (indices[3], indices[4], indices[0], indices[1])
(fwdTM, revTM) = (primerTm(template[fwdStart:fwdEnd]), primerTm(template[revStart:revEnd]))
if fwdStart < revStart and fwdEnd < revEnd:
parent = DNA('PCR product','PCR product of '+primer1DNA.name+', '+primer2DNA.name+' on '+templateDNA.name, leftStub+template[fwdStart:revEnd]+rightStub)
else:
# TODO remove
# circular template is exception to the fwdStart < revStart and fwdEnd < revEnd rule
if templateDNA.topology == 'circular':
parent = DNA('PCR product','PCR product of '+primer1DNA.name+', '+primer2DNA.name+' on '+templateDNA.name, leftStub+template[fwdStart:len(template)]+template[:revEnd]+rightStub)
else:
raise Exception('*PCR Error*: forward primer must anneal upstream of the reverse.')
return pcrPostProcessing(inputTuple, parent, fwdTM, revTM)
except:
raise
# Description: identifies errors in primer design and raises exceptions based on errors and their context
def SequenceErrorHandling(InputTuple):
(fwd,matchCount,matchedAlready,currentPrimer) = InputTuple
if len(currentPrimer.sequence) > 7:
abbrev = currentPrimer.sequence[:3]+'...'+currentPrimer.sequence[-3:]
else:
abbrev = currentPrimer.sequence
if fwd:
if matchCount > 1: # if matches in forward direction more than once
raise Exception('*Primer error*: primer '+abbrev+' anneals to multiple sites in template.')
elif matchCount == 1: # if matches in the forward direction exactly once
matchedAlready = 1
return matchedAlready
else:
if matchCount > 1: # if matches in reverse direction more than once
if matchedAlready == 1: # ... and already matched in forward direction
raise Exception('*Primer error*: primer '+abbrev+' anneals to multiple sites in template AND primer '+abbrev+' anneals in both orientations.')
raise Exception('*Primer error*: primer '+abbrev+' anneals to multiple sites in template.')
elif matchCount == 1: # if matches in the reverse direction exactly once
if matchedAlready == 1: # ... and already matched in forward direction
raise Exception('*Primer error*: primer '+abbrev+' primes in both orientations.')
else:
matchedAlready = 2
if matchedAlready == 0: # if no matches
raise Exception('*Primer error*: primer '+abbrev+' does not anneal in either orientation.')
return matchedAlready
def Sequence(InputDNA, inputPrimer):
for seqInput in (InputDNA, inputPrimer):
if not isinstance(seqInput, DNA):
raise Exception('*Sequencing error*: Sequence function was passed a non-DNA argument.')
# Suffix Tree string initialization, non-alphabet character concatenation
(template, primer) = (InputDNA.sequence, inputPrimer)
# Tuple of assemblyTree 'children', for the purpose of child/parent assignment
# Initialization of all parameters, where indices is the start / stop indices + direction of annealing primer sequences
(fwdTM, revTM, indices, counter, rightStub, leftStub, nextOrientation, fwd, rev, read) = (0,0,[0,0,0],0,'','',0,0,0,'')
try:
# NOTE: no assumptions made about input primer directionality
currentSequence = primer.sequence + '$'
fwdMatch = LCS(currentSequence.upper(), template.upper())
(matchCount, forwardMatchIndicesTuple, forwardPrimerStub) = fwdMatch.LCSasRegex(currentSequence, template, 1)
(matchedAlready, start, stop) = (0,0,0) # Defaults
# Forward case error handling: delegated to SequenceErrorHandling function
matchedAlready = SequenceErrorHandling((1,matchCount,matchedAlready,primer))
revMatch = LCS(currentSequence.upper(),reverseComplement(template).upper())
(matchCount, reverseMatchIndicesTuple, reversePrimerStub) = revMatch.LCSasRegex(currentSequence, template, 0)
# Reverse case error handling: delegated to SequenceErrorHandling function
matchedAlready = SequenceErrorHandling((0,matchCount,matchedAlready,primer))
if matchedAlready == 1:
(fwdStart, fwdEnd, fwd) = (forwardMatchIndicesTuple[0], forwardMatchIndicesTuple[1], 1)
elif matchedAlready == 2:
(revStart, revEnd, rev) = (reverseMatchIndicesTuple[0], reverseMatchIndicesTuple[1], 1)
if fwd:
bindingTM = primerTm(template[fwdStart:fwdEnd])
if InputDNA.DNAclass == 'plasmid':
if fwdEnd + 1001 > len(template):
read = template[fwdEnd+1:] + template[:fwdEnd+1001-len(template)]
else:
read = template[fwdEnd+1:fwdEnd+1001]
else:
read = template[fwdEnd+1:fwdEnd+1001]
else:
bindingTM = primerTm(template[revStart:revEnd])
if InputDNA.DNAclass == 'plasmid':
if revStart - 1001 < 0:
read = template[revStart-1001+len(template):] + template[:revStart]
else:
read = template[revStart-1001:revStart]
else:
read = template[revStart-1001:revStart]
if bindingTM >= 55:
return read
else:
return ''
except:
raise
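The plasmid branch of Sequence() wraps the 1 kb read around the origin when the primer lands near the end of the stored sequence. That index arithmetic can be checked in isolation (toy template and indices chosen to force the wrap):

```python
template = 'ACGT' * 300          # 1200 bp toy circular template
fwdEnd = 900                     # forward primer 3' end near the origin
if fwdEnd + 1001 > len(template):
    # wrap: tail of the sequence plus the needed prefix, as in Sequence()
    read = template[fwdEnd+1:] + template[:fwdEnd+1001-len(template)]
else:
    read = template[fwdEnd+1:fwdEnd+1001]
```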
# Description: case preserving reverse complementation of nucleotide sequences
def reverseComplement(sequence):
return "".join([complement_alphabet.get(nucleotide, '') for nucleotide in sequence[::-1]])
# Description: case preserving string reversal
def reverse(sequence):
return sequence[::-1]
# Description: case preserving complementation of nucleotide sequences
def Complement(sequence):
return "".join([complement_alphabet.get(nucleotide, '') for nucleotide in sequence[0:]])
# Primer TM function suite: primerTm(), primerTmsimple(), get_55_primer(), nearestNeighborTmNonDegen(), getTerminalCorrectionsDsHash(),
# getTerminalCorrectionsDhHash(), getDsHash(), getDhHash()
# Implemented by Tim Hsaiu in JavaScript, adapted to Python by Nima Emami
# Based on SantaLucia et al. papers
def primerTm(sequence):
if sequence == '':
return 0
milliMolarSalt = 50
milliMolarMagnesium = 1.5
nanoMolarPrimerTotal = 200
molarSalt = milliMolarSalt / 1000.0 # float division; under Python 2, 50/1000 would truncate to 0
molarMagnesium = milliMolarMagnesium / 1000.0
molarPrimerTotal = Decimal(nanoMolarPrimerTotal)/Decimal(1000000000)
sequence = re.sub(r'\s', '', sequence) # re.sub returns a new string, so assign the result
return nearestNeighborTmNonDegen(sequence, molarSalt, molarPrimerTotal, molarMagnesium)
def primerTmsimple(sequence):
return 64.9+41*(GCcontent(sequence)*len(sequence) - 16.4)/len(sequence)
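The simple Tm estimate above depends only on GC fraction and length; plugging a 20-mer at 50% GC into the same formula gives a quick sanity check. GCcontent is defined elsewhere in the file, so this demo (a hypothetical variant, not the module's function) takes the fraction directly:

```python
def primerTmsimple_demo(gc_fraction, length):
    # same formula as primerTmsimple, with the GC fraction passed in directly
    return 64.9 + 41 * (gc_fraction * length - 16.4) / float(length)

tm = primerTmsimple_demo(0.5, 20)   # 64.9 + 41*(10 - 16.4)/20
```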
# phusion notes on Tm
# https://www.finnzymes.fi/optimizing_tm_and_annealing.html
# get substring from the beginning of input that is 55C Tm
def get_55_primer(sequence):
lastChar = 17
myPrimer = sequence[0:lastChar]
# extend until the Tm reaches ~55C, capping the primer length at 60 bases
while primerTmsimple(myPrimer) < 54.5 and lastChar < 60:
lastChar = lastChar + 1
myPrimer = sequence[0:lastChar]
return myPrimer
def nearestNeighborTmNonDegen (sequence, molarSalt, molarPrimerTotal, molarMagnesium):
# The most sophisticated Tm calculations take into account the exact sequence and base stacking parameters, not just the base composition.
# m = ((1000* dh)/(ds+(R * Math.log(primer concentration))))-273.15;
# Borer P.N. et al. (1974) J. Mol. Biol. 86, 843.
# SantaLucia, J. (1998) Proc. Nat. Acad. Sci. USA 95, 1460.
# Allawi, H.T. and SantaLucia, J. Jr. (1997) Biochemistry 36, 10581.
# von Ahsen N. et al. (1999) Clin. Chem. 45, 2094.
sequence = sequence.lower()
R = 1.987 # universal gas constant in Cal/degrees C * mol
ds = 0 # cal/Kelvin/mol
dh = 0 # kcal/mol
# perform salt correction
correctedSalt = molarSalt + molarMagnesium * 140 # adjust for greater stabilizing effects of Mg compared to Na or K. See von Ahsen et al 1999
ds = ds + 0.368 * (len(sequence) - 1) * math.log(correctedSalt) # from von Ahsen et al 1999
# perform terminal corrections
termDsCorr = getTerminalCorrectionsDsHash()
ds = ds + termDsCorr[sequence[0]]
ds = ds + termDsCorr[sequence[len(sequence) - 1]]
termDhCorr = getTerminalCorrectionsDhHash()
dh = dh + termDhCorr[sequence[0]]
dh = dh + termDhCorr[sequence[len(sequence) - 1]]
dsValues = getDsHash()
dhValues = getDhHash()
for i in range(len(sequence)-1):
ds = ds + dsValues[sequence[i] + sequence[i + 1]]
dh = dh + dhValues[sequence[i] + sequence[i + 1]]
return (((1000 * dh) / (ds + (R * math.log(molarPrimerTotal / 2)))) - 273.15)
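The final line above converts the summed enthalpy and entropy into a melting temperature. With assumed illustrative totals for a ~20-mer (these dh/ds numbers are made up for the demo, not taken from the tables in this file), the arithmetic looks like:

```python
import math

dh = -157.6            # kcal/mol, assumed total enthalpy
ds = -427.4            # cal/K/mol, assumed salt-corrected total entropy
R = 1.987              # universal gas constant, cal/(K*mol)
molarPrimerTotal = 2.5e-7

# dh is in kcal, so multiply by 1000 to match ds; subtract 273.15 for Celsius
tm = (1000 * dh) / (ds + R * math.log(molarPrimerTotal / 2)) - 273.15
```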
def getTerminalCorrectionsDsHash():
# SantaLucia, J. (1998) Proc. Nat. Acad. Sci. USA 95, 1460.
dictionary = {'g' : -2.8,'a': 4.1,'t' : 4.1,'c' : -2.8}
return dictionary
def getTerminalCorrectionsDhHash():
# SantaLucia, J. (1998) Proc. Nat. Acad. Sci. USA 95, 1460.
dictionary = {'g':0.1,'a' : 2.3,'t' : 2.3,'c' : 0.1}
return dictionary
def getDsHash():
# SantaLucia, J. (1998) Proc. Nat. Acad. Sci. USA 95, 1460.
dictionary = {
'gg' : -19.9,
'ga' : -22.2,
'gt' : -22.4,
'gc' : -27.2,
'ag' : -21.0,
'aa' : -22.2,
'at' : -20.4,
'ac' : -22.4,
'tg' : -22.7,
'ta' : -21.3,
'tt' : -22.2,
'tc' : -22.2,
'cg' : -27.2,
'ca' : -22.7,
'ct' : -21.0,
'cc' : -19.9}
return dictionary
def getDhHash():
# SantaLucia, J. (1998) Proc. Nat. Acad. Sci. USA 95, 1460.
dictionary = {'gg': -8.0,
'ga' : -8.2,
'gt' : -8.4,
'gc' : -10.6,
'ag' : -7.8,
'aa' : -7.9,
'at' : -7.2,
'ac' : -8.4,
'tg' : -8.5,
'ta' : -7.2,
'tt' : -7.9,
'tc' : -8.2,
'cg' : -10.6,
'ca' : -8.5,
'ct' : -7.8,
'cc' : -8.0}
return dictionary
# Description: initialize Digest function parameters and checks for acceptable input format
def initDigest(InputDNA, Enzymes):
(indices, frags, sites, totalLength, enzNames, incubationTemp, nameList, filtered) = ([], [], "", len(InputDNA.sequence), '', 0, [], []) # Initialization
for enzyme in Enzymes:
nameList.append(enzyme.name)
enzNames = enzNames+enzyme.name+', '
incubationTemp = max(incubationTemp,enzyme.incubate_temp)
enzNames = enzNames[:-2]
if len(Enzymes) > 2:
raise Exception('*Digest error*: only double or single digests allowed (provided enzymes were '+enzNames+')')
if InputDNA.topology == "linear":
# Initialize indices array with start and end indices of the linear fragment
# Add dummy REase to avoid null pointers
dummy = restrictionEnzyme("dummy", "", "", "", "", "", 0, 0, "(0/0)","")
indices = [(0,0,'',dummy), (totalLength,0,'',dummy)]
return (indices, frags, sites, totalLength, enzNames, incubationTemp, nameList, filtered)
# Description: finds restriction sites for given Enzymes in given InputDNA molecule
def restrictionSearch(Enzymes, InputDNA, indices, totalLength):
for enzyme in Enzymes:
sites = enzyme.find_sites(InputDNA)
for site in sites:
# WARNING: end proximity for linear fragments exception
if InputDNA.topology == 'linear' and (int(site[0]) - int(enzyme.endDistance) < 0 or int(site[1]) + int(enzyme.endDistance) > totalLength):
print '\n*Digest Warning*: end proximity for '+enzyme.name+' restriction site at indices '+str(site[0]%totalLength)+','+str(site[1]%totalLength)+' for input '+InputDNA.name+' (length '+str(totalLength)+')\n'
if InputDNA.topology == 'linear' and site[2] == 'antisense' and site[1] - max(enzyme.bottom_strand_offset,enzyme.top_strand_offset) < 0:
print '\n*Digest Warning*: restriction cut site for '+enzyme.name+' with recognition indices '+str(site[0]%totalLength)+','+str(site[1]%totalLength)+' out of bounds for input '+InputDNA.name+' (length '+str(totalLength)+')\n'
else:
pass
# WARNING: restriction index out of bounds exception
elif InputDNA.topology == 'linear' and site[2] == 'antisense' and site[1] - max(enzyme.bottom_strand_offset,enzyme.top_strand_offset) < 0:
print '\n*Digest Warning*: restriction cut site for '+enzyme.name+' with recognition indices '+str(site[0]%totalLength)+','+str(site[1]%totalLength)+' out of bounds for input '+InputDNA.name+' (length '+str(totalLength)+')\n'
else:
site = site + (enzyme, )
indices.append(site)
indices.sort()
return indices
# Description: if you have overlapping restriction sites, choose the first one and discard the second
# TODO: revise this?
def filterSites(filtered, indices):
siteCounter = 0
while siteCounter < len(indices):
try:
(currentTuple, nextTuple) = (indices[siteCounter], indices[siteCounter+1])
(currentStart, nextStart, currentEnzyme, nextEnzyme) = (currentTuple[0], nextTuple[0], currentTuple[3], nextTuple[3])
filtered.append(indices[siteCounter])
if currentStart + len(currentEnzyme.alpha_only_site) >= nextStart:
# Overlapping sites: tolerate the dummy end-of-molecule entry, otherwise fail
if nextEnzyme.name == 'dummy':
pass
else:
raise Exception('*Digest Error*: overlapping restriction sites '+currentEnzyme.name+' (indices '+str(currentTuple[0])+','+str(currentTuple[1])+') and '+nextEnzyme.name+' (indices '+str(nextTuple[0])+','+str(nextTuple[1])+')')
siteCounter += 1
siteCounter += 1
except IndexError: # got to end of list
filtered.append(indices[siteCounter])
siteCounter += 1
return filtered
# Description: determines digest start and stop indices, as well as overhang indices for left and right restriction
def digestIndices(direction, nextDirection, currentEnzyme, nextEnzyme, currentStart, nextStart, totalLength):
# CT(B)O = current top (bottom) overhang, AL(R)L = add left (right) length, NT(B)O = next top (bottom) overhang
(ALL, ARL) = (0,0)
# If it's on the sense strand, then overhang is positive
if direction == "sense":
(CTO, CBO) = (currentEnzyme.top_strand_offset, currentEnzyme.bottom_strand_offset)
# If it's on the antisense strand, then you have to go back towards the 5' to generate the overhang (so multiply by -1)
else:
(CTO, CBO) = (-1 * currentEnzyme.top_strand_offset, -1 * currentEnzyme.bottom_strand_offset)
ALL = max(CTO,CBO)
if nextDirection == "sense":
(NTO, NBO) = (nextEnzyme.top_strand_offset, nextEnzyme.bottom_strand_offset)
ARL = min(NTO,NBO)
else:
(NTO, NBO) = (-1 * nextEnzyme.top_strand_offset + 1, -1 * nextEnzyme.bottom_strand_offset + 1)
ARL = min(NTO,NBO)-1
(currentStart, digEnd) = ((currentStart+ALL) % totalLength, nextStart + ARL)
if currentEnzyme.reach and direction == "sense":
currentStart = currentStart + len(currentEnzyme.alpha_only_site)
if nextEnzyme.reach and nextDirection == "sense":
digEnd = digEnd + len(nextEnzyme.alpha_only_site)
return (currentStart, digEnd, CTO, CBO, NTO, NBO)
# Description: instantiates Overhang object as the TLO or BLO field of a digested DNA molecule object
def setLeftOverhang(digested, CTO, CBO, direction, currentStart, currentEnzyme, InputDNA):
if direction == "sense":
(TO, BO) = (CTO, CBO)
else:
(TO, BO) = (CBO, CTO)
difference = abs(abs(BO) - abs(TO))
# Generate TLO and BLO fragment overhangs
if abs(TO) < abs(BO) and direction == "sense" or abs(TO) > abs(BO) and direction == "antisense":
if currentStart - len(currentEnzyme.alpha_only_site) < 0:
digested.topLeftOverhang = Overhang(InputDNA.sequence[currentStart-difference:]+InputDNA.sequence[:currentStart])
else:
digested.topLeftOverhang = Overhang(InputDNA.sequence[currentStart-difference:currentStart])
digested.bottomLeftOverhang = Overhang('')
else:
digested.topLeftOverhang = Overhang('')
# Edge case statement
if currentStart - len(currentEnzyme.alpha_only_site) < 0:
digested.bottomLeftOverhang = Overhang(Complement(InputDNA.sequence[currentStart-difference:]+InputDNA.sequence[:currentStart]))
else:
digested.bottomLeftOverhang = Overhang(Complement(InputDNA.sequence[currentStart-difference:currentStart]))
return digested
# Description: instantiates Overhang object as the TRO or BRO field of a digested DNA molecule object
def setRightOverhang(digested, NTO, NBO, direction, digEnd, nextEnzyme, InputDNA, totalLength):
if direction == "sense":
(TO, BO) = (NTO, NBO)
else:
(TO, BO) = (NBO, NTO)
difference = abs(abs(BO) - abs(TO))
# Apply ( mod length ) operator to end index value digDiff to deal with edge cases
digDiff = digEnd + difference
digDiff = digDiff % totalLength
# Generate TRO and BRO fragment overhangs
if abs(TO) < abs(BO) and direction == "sense" or abs(TO) > abs(BO) and direction == "antisense":
digested.topRightOverhang = Overhang('')
# Edge case statement
if digDiff - len(nextEnzyme.alpha_only_site) < 0:
digested.bottomRightOverhang = Overhang(Complement(InputDNA.sequence[digEnd:]+InputDNA.sequence[:digDiff]))
else:
digested.bottomRightOverhang = Overhang(Complement(InputDNA.sequence[digEnd:digDiff]))
else:
# Edge case statement
if digDiff - len(nextEnzyme.alpha_only_site) < 0:
digested.topRightOverhang = Overhang(InputDNA.sequence[digEnd:]+InputDNA.sequence[:digDiff])
else:
digested.topRightOverhang = Overhang(InputDNA.sequence[digEnd:digDiff])
digested.bottomRightOverhang = Overhang('')
return digested
# Description: take digest fragments before they're output, and sets assemblytree relationships and fields,
# as well as digest buffer
def digestPostProcessing(frag, InputDNA, nameList, enzNames, incubationTemp):
frag.setChildren((InputDNA, ))
InputDNA.addParent(frag)
if len(nameList) == 2:
bufferChoices = DigestBuffer(nameList[0],nameList[1])
else:
bufferChoices = DigestBuffer(nameList[0])
bestBuffer = int(bufferChoices[0])
if bestBuffer < 5:
bestBuffer = 'NEB'+str(bestBuffer)
else:
bestBuffer = 'Buffer EcoRI'
frag.setTimeStep(1)
frag.addMaterials([bestBuffer,'ddH20'])
frag.instructions = 'Digest ('+InputDNA.name+') with '+enzNames+' at '+str(incubationTemp)+'C in '+bestBuffer+' for 1 hour.'
return frag
# Description: takes in InputDNA molecule and list of EnzymeDictionary elements, outputting a list of digest products
def Digest(InputDNA, Enzymes):
# Initialization
if not isinstance(InputDNA, DNA):
raise Exception('*Digest Error*: Digest function was passed a non-DNA argument.')
(indices, frags, sites, totalLength, enzNames, incubationTemp, nameList, filtered) = initDigest(InputDNA, Enzymes)
# Identify restriction sites, fill in indices array
indices = restrictionSearch(Enzymes, InputDNA, indices, totalLength)
# If you have overlapping restriction sites, choose the first one and discard the second
indices = filterSites(filtered, indices)
# If it's linear, only act on the first n - 1 fragments until you hit the blunt ending
# If it's circular, then the 'last' segment is adjacent to the 'first' one, so you
# need to consider the adjacency relationships among the full n fragments
if InputDNA.topology == "linear":
lastIt = len(indices) - 1
else:
lastIt = len(indices)
# Consider enzyme for the current restriction site as well as the next restriction
# site, so that you can generate overhangs for both sides of the current fragment
for n in range(lastIt):
currentTuple = indices[n]
if n+1 > len(indices) - 1:
n = -1
nextTuple = indices[n+1]
(currentStart, currentEnd, direction, currentEnzyme) = currentTuple
(nextStart, nextEnd, nextDirection, nextEnzyme) = nextTuple
# Update start value currentStart and apply ( mod length ) to deal with edge cases
# Also, update end value digEnd for fragment indices
(currentStart, digEnd, CTO, CBO, NTO, NBO) = digestIndices(direction, nextDirection, currentEnzyme, nextEnzyme, currentStart, nextStart, totalLength)
# Loop around fragment case for circular InputDNA's
if digEnd > 0 and currentStart > 0 and digEnd < currentStart and InputDNA.topology == 'circular':
if n == -1:
digested = DNA('digest','Digest of '+InputDNA.name+' with '+enzNames,InputDNA.sequence[currentStart:]+InputDNA.sequence[:digEnd])
else:
raise Exception('Digest Error*: restriction sites for '+currentTuple[3].name+' ('+str(currentTuple[0])+','+str(currentTuple[1])+') and '+nextTuple[3].name+' ('+str(nextTuple[0])+','+str(nextTuple[1])+') contain mutually interfering overhangs -- fragment discarded.')
continue
else:
digested = DNA('digest','Digest of '+InputDNA.name+' with '+enzNames,InputDNA.sequence[currentStart:digEnd])
# Discard small fragments
if len(digested.sequence) < 4:
pass
else:
# Adjust top and bottom overhang values based on the orientation of the restriction site
digested = setLeftOverhang(digested, CTO, CBO, direction, currentStart, currentEnzyme, InputDNA)
digested = setRightOverhang(digested, NTO, NBO, direction, digEnd, nextEnzyme, InputDNA, totalLength)
frags.append(digested)
for frag in frags:
frag = digestPostProcessing(frag, InputDNA, nameList, enzNames, incubationTemp)
return frags
class Overhang(object):
def __init__(self, seq=""):
self.sequence = seq
class DNA(object):
#for linear DNAs, this string should include the entire sequence (5' and 3' overhangs included)
def __init__(self, DNAclass="", name="", seq=""):
self.sequence = seq
self.length = len(seq)
notDNA = re.compile('([^gatcrymkswhbvdn])')
isnotDNA = False
exceptionText = ""
for m in notDNA.finditer(self.sequence.lower()):
exceptionText += m.group() + " at position "+ str( m.start()) + " is not valid IUPAC DNA. "
isnotDNA = True
if(isnotDNA):
raise Exception(exceptionText)
self.name = name #would be pbca1256 for vectors or pbca1256-Bth8199 for plasmids
# self.description = "SpecR pUC" #this is for humans to read
self.dam_methylated = True
self.topLeftOverhang = Overhang('')
self.bottomLeftOverhang = Overhang('')
self.topRightOverhang = Overhang('')
self.bottomRightOverhang = Overhang('')
self.pnkTreated = False
#PCR product, miniprep, genomic DNA
self.DNAclass = DNAclass
self.provenance = ""
self.parents = []
self.children = ()
self.instructions = ""
self.materials = []
self.timeStep = 0
#Here is the linked list references for building up action-chains
# an action chain would be something like do PCR on day 1, do transformation on day 2, etc
self.head = None
self.tail = None
if DNAclass == "primer" or DNAclass == "genomic" or DNAclass == "PCR product" or DNAclass == "digest":
self.topology = "linear"
elif DNAclass == 'plasmid':
self.topology = "circular" #circular or linear, genomic should be considered linear
else:
raise Exception("Invalid molecule class. Acceptable classes are 'digest', genomic', 'PCR product', 'plasmid' and 'primer'.")
def reversecomp(self):
return reverseComplement(self.sequence) #reverses string
#code to handle the overhangs & other object attributes
def addParent(self, DNA):
self.parents.append(DNA)
def addMaterials(self, materialsList):
self.materials += materialsList
def phosphorylate(self):
self.pnkTreated = True
def setTimeStep(self, timeStep):
self.timeStep = timeStep
def setChildren(self, inputDNAs):
self.children = inputDNAs
def find(self, string):
return 0
def isEqual(self, other):
# TODO: implement plasmid rotation to allow circular alignment
if self.DNAclass == 'plasmid' and other.DNAclass == 'plasmid':
if self.sequence.lower() == other.sequence.lower():
return True
else:
if self.sequence.lower() == other.sequence.lower() and self.overhangsEqual(other):
return True
return False
def overhangsEqual(self, other):
if self.bottomLeftOverhang.sequence.lower() == other.bottomLeftOverhang.sequence.lower() and \
self.topLeftOverhang.sequence.lower() == other.topLeftOverhang.sequence.lower() and \
self.bottomRightOverhang.sequence.lower() == other.bottomRightOverhang.sequence.lower() and \
self.topRightOverhang.sequence.lower() == other.topRightOverhang.sequence.lower():
return True
return False
def clone(self):
clone = DNA(self.DNAclass, self.name, self.sequence)
clone.topLeftOverhang = Overhang(self.topLeftOverhang.sequence)
clone.topRightOverhang = Overhang(self.topRightOverhang.sequence)
clone.bottomLeftOverhang = Overhang(self.bottomLeftOverhang.sequence)
clone.bottomRightOverhang = Overhang(self.bottomRightOverhang.sequence)
return clone
def prettyPrint(self):
#prints out top and bottom strands, truncates middle so length is ~100bp
#example:
# TTATCG...[1034bp]...GGAA
# |||| ||||
# TAGC..............CCTTAA
if self.DNAclass == 'digest':
(TL,TR,BL,BR) = SetFlags(self)
if len(self.sequence) > 8:
trExtra = ''
brExtra = ''
if TR:
trExtra = self.topRightOverhang.sequence
if BR:
brExtra = self.bottomRightOverhang.sequence
print "\t"+self.topLeftOverhang.sequence+' '*len(self.bottomLeftOverhang.sequence)+self.sequence[:4]+'.'*3+'['+str(len(self.sequence)-8)+'bp]'+'.'*3+self.sequence[len(self.sequence)-4:]+trExtra
print "\t"+' '*len(self.topLeftOverhang.sequence)+'|'*4+' '*(10+len(str(len(self.sequence)-8)))+'|'*4
print "\t"+' '*len(self.topLeftOverhang.sequence)+self.bottomLeftOverhang.sequence+Complement(self.sequence[:4])+'.'*(10+len(str(len(self.sequence)-8)))+Complement(self.sequence[len(self.sequence)-4:])+brExtra
else:
trExtra = ''
brExtra = ''
if TR:
trExtra = self.topRightOverhang.sequence
if BR:
brExtra = self.bottomRightOverhang.sequence
print "\t"+self.topLeftOverhang.sequence+' '*len(self.bottomLeftOverhang.sequence)+self.sequence+trExtra
print "\t"+' '*len(self.topLeftOverhang.sequence)+'|'*len(self.sequence)
print "\t"+' '*len(self.topLeftOverhang.sequence)+self.bottomLeftOverhang.sequence+Complement(self.sequence)+brExtra
else:
if len(self.sequence) > 8:
print "\t"+self.sequence[:4]+'.'*3+'['+str(len(self.sequence)-8)+'bp]'+'.'*3+self.sequence[len(self.sequence)-4:]
print "\t"+'|'*4+' '*(10+len(str(len(self.sequence)-8)))+'|'*4
print "\t"+Complement(self.sequence[:4])+'.'*(10+len(str(len(self.sequence)-8)))+Complement(self.sequence[len(self.sequence)-4:])
else:
print "\t"+self.sequence
print "\t"+'|'*len(self.sequence)
print "\t"+Complement(self.sequence)
return 0
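The truncation rule used by prettyPrint (first and last 4 bases, with the elided length in brackets) can be sketched on its own; `truncated_display` is a hypothetical helper name, not part of the class above:

```python
def truncated_display(seq):
    # Show the first and last 4 bases with the elided length in the middle
    if len(seq) <= 8:
        return seq
    return seq[:4] + '...' + '[' + str(len(seq) - 8) + 'bp]' + '...' + seq[-4:]
```

For example, a 100 bp poly-A sequence renders as `AAAA...[92bp]...AAAA`, while anything 8 bp or shorter is shown whole.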
# Description: BaseExpand() for regex generation, adapted from BioPython
def BaseExpand(base):
"""BaseExpand(base) -> string.
Given a degenerate base, return its expansion in the IUPAC alphabet.
e.g.:
base = 'A' -> 'A'
base = 'N' -> 'ACGT'"""
base = base.upper()
return dna_alphabet[base]
# Description: regex() function to convert recog site into regex, from Biopython
def regex(site):
"""regex(site) -> string.
Construct a regular expression from a DNA sequence.
i.e.:
site = 'ABCGN' -> 'A[CGT]CG.'"""
reg_ex = site
for base in site:
if base in ('A', 'T', 'C', 'G', 'a', 't', 'c', 'g'):
continue
if base in ('N', 'n'):
reg_ex = '.'.join(reg_ex.split('N'))
reg_ex = '.'.join(reg_ex.split('n'))
if base in ('R', 'Y', 'W', 'M', 'S', 'K', 'H', 'D', 'B', 'V'):
expand = '[' + str(BaseExpand(base)) + ']'
reg_ex = expand.join(reg_ex.split(base))
return reg_ex
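`BaseExpand` and `regex` rely on a `dna_alphabet` table defined elsewhere in the file. A minimal standalone sketch of the same degenerate-site-to-regex idea, with an inlined IUPAC table (assumed here for illustration only):

```python
import re

# Minimal IUPAC degeneracy table, inlined for this sketch
IUPAC = {'A': 'A', 'C': 'C', 'G': 'G', 'T': 'T',
         'R': 'AG', 'Y': 'CT', 'W': 'AT', 'S': 'CG',
         'M': 'AC', 'K': 'GT', 'B': 'CGT', 'D': 'AGT',
         'H': 'ACT', 'V': 'ACG', 'N': 'ACGT'}

def site_to_regex(site):
    # Unambiguous bases pass through; N becomes '.';
    # other degenerate bases become character classes
    out = []
    for base in site.upper():
        expansion = IUPAC[base]
        if len(expansion) == 1:
            out.append(expansion)
        elif base == 'N':
            out.append('.')
        else:
            out.append('[' + expansion + ']')
    return ''.join(out)
```

This reproduces the docstring example: `site_to_regex('ABCGN')` yields `'A[CGT]CG.'`.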
# Description: ToRegex() function to convert recog site into regex, from Biopython
def ToRegex(site, name):
sense = ''.join(['(?P<', name, '>', regex(site.upper()), ')'])
antisense = ''.join(['(?P<', name, '_as>', regex( reverseComplement( site.upper() )), ')'])
rg = sense + '|' + antisense
return rg
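`ToRegex` compiles both strands into a single alternation of named groups, so one `finditer` pass can report which strand matched. A self-contained sketch of the pattern (plain literal sites, no degeneracy handling; BsaI's asymmetric site chosen for illustration):

```python
import re

def to_regex(site, name):
    # Sense strand in group <name>, antisense in group <name>_as
    comp = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}
    antisense = ''.join(comp[b] for b in reversed(site.upper()))
    return '(?P<%s>%s)|(?P<%s_as>%s)' % (name, site.upper(), name, antisense)

# m.lastgroup tells you which alternation branch (strand) each hit came from
pattern = re.compile(to_regex('GGTCTC', 'BsaI'))
hits = [(m.lastgroup, m.start()) for m in pattern.finditer('AAGGTCTCAAGAGACCTT')]
```

Here `hits` contains one sense hit at index 2 and one antisense hit (`'BsaI_as'`) at index 10.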
# Description: restrictionEnzyme class encapsulates information about buffers, overhangs, incubation / inactivation, end distance, etc.
class restrictionEnzyme(object):
def __init__(self,name="", buffer1="", buffer2="", buffer3="", buffer4="", bufferecori="", heatinact="", incubatetemp="", recognitionsite="",distance=""):
self.name = name
self.buffer_activity =[buffer1, buffer2, buffer3, buffer4, bufferecori]
self.inactivate_temp = heatinact
self.incubate_temp = incubatetemp
#human-readable recognition site
self.recognition_site = recognitionsite
self.endDistance = distance
#function to convert recog site into regex
alpha_only_site = re.sub('[^a-zA-Z]+', '', recognitionsite)
self.alpha_only_site = alpha_only_site
# print ToRegex(alpha_only_site, name)
self.compsite = ToRegex(alpha_only_site, name)
self.reach = False
#convert information about where the restriction happens to an offset on the top and bottom strand
#for example, BamHI -> 1/5 with respect to the start of the site match
hasNum = re.compile('(-?\d+/-?\d+)')
not_completed = 1
for m in hasNum.finditer(recognitionsite):
(top, bottom) = m.group().split('/')
self.top_strand_offset = int(top)
self.bottom_strand_offset = int(bottom)
self.reach = True
not_completed = 0
p = re.compile("/")
for m in p.finditer(recognitionsite):
if not_completed:
self.top_strand_offset = int(m.start())
self.bottom_strand_offset = len(recognitionsite) - 1 - self.top_strand_offset
def prettyPrint(self):
print "Name: ", self.name, "Recognition Site: ", self.recognition_site
def find_sites(self, DNA):
seq = DNA.sequence
(fwd, rev) = self.compsite.split('|')
fwd_rease_re = re.compile(fwd)
rev_rease_re = re.compile(rev)
indices = []
seen = {}
if DNA.topology == "circular":
searchSequence = seq.upper() + seq[0:len(self.recognition_site)-2]
else:
searchSequence = seq.upper()
for m in fwd_rease_re.finditer(searchSequence):
span = m.span()
span = (span[0] % len(seq), span[1] % len(seq))
seen[span[0]] = 1
span = span + ('sense',)
indices.append(span)
for m in rev_rease_re.finditer(searchSequence):
span = m.span()
if span[0] not in seen:
span = span + ('antisense',)
indices.append(span)
return indices
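`find_sites` handles circular templates by appending the first few bases of the sequence to the end and mapping match coordinates back with a modulo. The core trick, sketched independently of the class with exact-match sites only:

```python
import re

def find_circular_sites(seq, site):
    # Extend the sequence by len(site)-1 bases so a site spanning the
    # origin is found, then map match starts back with a modulo
    seq, site = seq.upper(), site.upper()
    extended = seq + seq[:len(site) - 1]
    return sorted(m.start() % len(seq) for m in re.finditer(site, extended))
```

For the circular sequence `'TCCGGA'`, the site `'GAT'` wraps across the origin (positions 4, 5, 0) and is reported at index 4.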
# Description: phosphorylates 5' end of DNA molecule, allowing blunt end ligation
# see http://openwetware.org/wiki/PNK_Treatment_of_DNA_Ends
def TreatPNK(inputDNAs):
for inputDNA in inputDNAs:
inputDNA.phosphorylate()
return inputDNAs
# Description: DigestBuffer() function finds the optimal digest buffer
# If the buffer 2 activity total exceeds 75 per enzyme, return buffer 2 and the list of activity values;
# otherwise, return the single best-scoring buffer (ignoring the EcoRI buffer)
# return format is a list: [rec_buff, [buff1_act, buff2_act, ..., buff4_act]]
def DigestBuffer(*str_or_list):
best_buff = ""
best_buff_score = [0,0,0,0,0]
enzdic = EnzymeDictionary()
num_enz = 0
for e in str_or_list:
enz = enzdic[e]
best_buff_score = list(x + int(y) for x, y in zip(best_buff_score, enz.buffer_activity))
num_enz = num_enz + 1
ret = []
if best_buff_score[1] > (75 * num_enz):
ret.append(2)
ret.append(best_buff_score)
else:
m = max(best_buff_score)
p = best_buff_score.index(m)
ret.append(p)
ret.append(best_buff_score)
return ret
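`DigestBuffer` sums the per-buffer activity vectors of the chosen enzymes with `zip`, then prefers buffer 2 whenever its total clears the 75-per-enzyme bar. The scoring step in isolation (the activity numbers below are made up for illustration, not real enzyme data):

```python
# Hypothetical per-buffer activity percentages for two enzymes
# (order: buffer 1, buffer 2, buffer 3, buffer 4, EcoRI buffer)
ecori_activity = [25, 100, 50, 50, 100]
bamhi_activity = [75, 100, 100, 50, 25]

totals = [sum(pair) for pair in zip(ecori_activity, bamhi_activity)]
num_enz = 2
if totals[1] > 75 * num_enz:   # buffer 2 clears the average-activity bar
    choice = 2
else:                          # otherwise fall back to the best-scoring buffer
    choice = totals.index(max(totals))
```

With these numbers, `totals` is `[100, 200, 150, 100, 125]` and buffer 2 wins.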
#accepts two primers and a list of input template DNAs
#todo: implement this with PCR!
def SOERoundTwo(primer1, primer2, templates):
return 0
def SOE(list_of_primers, templates):
#assume primers are in the right order outer inner_rev inner_fwd outer
#call two pcrs with list[0], [1] and list[2], [3]
return 0
def Primers(product, template):
return rPrimers(product, template, 0)
def rPrimers(product, template, baseCase):
# Annealing region design criteria:
# TODO: incorporate these somehow
# In general, the 3' base of your oligos should be a G or C
# The overall G/C content of your annealing region should be between 50 and 65%
# The overall base composition of the sequences should be balanced (no missing bases, no excesses of one particular base)
# The length of your annealing region should be between 18 and 25 bp
# The sequence should appear random. There shouldn't be long stretches of a single base, or large regions of G/C rich sequence and all A/T in other regions
# There should be little secondary structure. Ideally the Tm for the oligo should be under 40 degrees.
try:
# Die after 2 rounds of recursion
if baseCase == 2:
return ()
# Compute "forward" and "backwards" LCS (i.e. on both sides of a mutation)
fwdMatch = LCS(template.sequence.upper()+'$', product.sequence.upper())
(fwdMatchCount, forwardMatchIndicesTuple, forwardPrimerStub) = fwdMatch.LCSasRegex(template.sequence.upper()+'$', product.sequence.upper(), 1)
revMatch = LCS(reverse(template.sequence.upper())+'$', reverse(product.sequence.upper()))
(revMatchCount, reverseMatchIndicesTuple, revPrimerStub) = revMatch.LCSasRegex(reverse(template.sequence.upper())+'$', reverse(product.sequence.upper()), 1)
fFlag = False
if not len(forwardMatchIndicesTuple):
fMI = (len(product.sequence), len(product.sequence))
fFlag = True
else:
fMI = forwardMatchIndicesTuple
if not len(reverseMatchIndicesTuple):
if fFlag:
# neither side matches
raise Exception('For primer design, no detectable homology on terminal ends of product and template sequences.')
rMI = (0, 0)
else:
rMI = (0 , len(product.sequence) - reverseMatchIndicesTuple[0])
# wrap around mutation case
if fMI[0] <= rMI[1]:
diffLen = fMI[0] + len(product.sequence) - rMI[1]
insert = product.sequence[rMI[1]:] + product.sequence[:fMI[0]]
else:
diffLen = fMI[0] - rMI[1]
insert = product.sequence[rMI[1]:fMI[0]]
if 60 < diffLen <= 100:
primers, enz = DesignWobble(product, insert, (rMI[1], fMI[0]))
elif 1 <= diffLen <= 60:
primers, enz = DesignEIPCR(product, insert, (rMI[1], fMI[0]), template)
if primers[0] == 0:
print '*Primer Warning*: EIPCR primers could not be designed for given template and product. Try removing BsaI, BseRI, and/or BsmBI sites from template plasmid. Returning null data.'
return [], ''
# test the PCR --> will return an exception if they don't anneal
# TODO: FIX THIS / ERR HANDLING
amplifies = PCR(primers[0], primers[1], template)
# if it amplifies up ok, then return the primers
return primers, enz
# may be misaligned ==> realign and recurse
except:
baseCase += 1
# If you had an LCS on the fwd direction, re-align using that one
if fwdMatchCount:
myLCS = product.sequence[forwardMatchIndicesTuple[0]:forwardMatchIndicesTuple[1]]
newProduct = DNA('plasmid', product.name, product.sequence[forwardMatchIndicesTuple[0]:] + product.sequence[:forwardMatchIndicesTuple[0]])
match = re.search(myLCS.upper(), template.sequence.upper())
if match:
startSite = match.start()
newTemplate = DNA('plasmid', template.name, template.sequence[startSite:]+template.sequence[:startSite])
else:
return ()
# If you had an LCS in the rev direction, re-align using that one
elif revMatchCount:
myLCS = reverse(reverse(product.sequence)[reverseMatchIndicesTuple[0]:reverseMatchIndicesTuple[1]])
myMatch = re.search(myLCS.upper(), product.sequence.upper())
startIndex = myMatch.start()
newProduct = DNA('plasmid', product.name, product.sequence[startIndex:] + product.sequence[:startIndex])
match = re.search(myLCS.upper(), template.sequence.upper())
if match:
startSite = match.start()
newTemplate = DNA('plasmid', template.name, template.sequence[startSite:]+template.sequence[:startSite])
else:
return ()
else:
return ()
return rPrimers(newProduct, newTemplate, baseCase)
def getAnnealingRegion(template, fwd):
if len(template) <= 10:
return ''
if not fwd:
template = reverseComplement(template)
for i in range(1, len(template) + 1):
currentRegion = template[:i]
if primerTm(currentRegion) >= 60:
break
return currentRegion
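`getAnnealingRegion` grows a prefix of the template one base at a time until `primerTm` (defined elsewhere in the file) reports at least 60 degrees. A standalone sketch of the same loop, substituting the simple Wallace rule as a stand-in Tm estimate (the real `primerTm` may use a different model):

```python
def wallace_tm(seq):
    # Wallace rule: rough melting temperature for short oligos,
    # 2 degrees per A/T and 4 degrees per G/C
    seq = seq.upper()
    return 2 * (seq.count('A') + seq.count('T')) + 4 * (seq.count('G') + seq.count('C'))

def annealing_region(template, target_tm=60):
    # Grow the 5' prefix one base at a time until the Tm target is met
    for i in range(1, len(template) + 1):
        region = template[:i]
        if wallace_tm(region) >= target_tm:
            return region
    return template  # entire template is still below the target
```

Under the Wallace rule a pure-GC template reaches 60 degrees at 15 bases (15 x 4).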
def chooseReachover(plasmid):
EnzDict = EnzymeDictionary()
bsaI = EnzDict['BsaI']; bsaMatch = bsaI.find_sites(plasmid); bsaFlag = len(bsaMatch) > 0
bsmBI = EnzDict['BsmBI']; bsmMatch = bsmBI.find_sites(plasmid); bsmFlag = len(bsmMatch) > 0
bseRI = EnzDict['BseRI']; bseMatch = bseRI.find_sites(plasmid); bseFlag = len(bseMatch) > 0
if not bsaFlag:
# use BsaI
tail = "taaattGGTCTCA"
return bsaI, tail, 2
if not bsmFlag:
# use BsmBI
tail = 'taaattCGTCTCA'
return bsmBI, tail, 2
if not bseFlag:
# use BseRI
tail = 'taaattGAGGAGattcccta'
return bseRI, tail, 1
return 0, 0, 0
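`chooseReachover` falls through a fixed preference order (BsaI, then BsmBI, then BseRI) and returns the first Type IIS enzyme absent from the plasmid, together with its primer tail and half-overhang size. The selection pattern, sketched with a plain one-strand substring test in place of the full `find_sites` machinery:

```python
# Preference-ordered candidates: (name, recognition site, primer tail, half-overhang size).
# Tail strings mirror the ones used above; sites are the standard recognition sequences.
CANDIDATES = [
    ('BsaI',  'GGTCTC', 'taaattGGTCTCA', 2),
    ('BsmBI', 'CGTCTC', 'taaattCGTCTCA', 2),
    ('BseRI', 'GAGGAG', 'taaattGAGGAGattcccta', 1),
]

def choose_reachover(plasmid_seq):
    seq = plasmid_seq.upper()
    for name, site, tail, half in CANDIDATES:
        # Simplification: substring test on one strand only; the real code
        # uses find_sites(), which also scans the antisense strand
        if site not in seq:
            return name, tail, half
    return None  # no usable enzyme
```

A plasmid containing a BsaI site but no BsmBI site falls through to BsmBI, matching the ordering above.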
#given a parent plasmid and a desired product plasmid, design the eipcr primers
#use difflib to figure out where the differences are
#if there is a convenient restriction site in or near the modification, use that
# otherwise, check if there exists bseRI or bsaI sites, and design primers using those
# print/return warning if can't do this via eipcr (insert span too long)
def DesignEIPCR(product, insert, diffTuple, template):
# use 60 bp to right of mutation as domain for annealing region design
(fwdStart, fwdEnd) = (diffTuple[1], diffTuple[1]+60)
enz, tail, halfSiteSize = chooseReachover(template)
if enz == 0:
return 0, 0
# accounting for the wrap around case
if fwdEnd > len(product.sequence):
fwdEnd = fwdEnd % len(product.sequence)
fwdAnneal = getAnnealingRegion(product.sequence[fwdStart:] + product.sequence[:fwdEnd], 1)
else:
fwdAnneal = getAnnealingRegion(product.sequence[fwdStart:fwdEnd], 1)
# same with the 60 bp to the left of the mutation
(revStart, revEnd) = (diffTuple[0]-60, diffTuple[0])
if revStart < 0:
revAnneal = getAnnealingRegion(product.sequence[revStart:] + product.sequence[:revEnd], 0)
else:
revAnneal = getAnnealingRegion(product.sequence[revStart:revEnd], 0)
# use BsaI 'taaGGTCTCx1234' to do reachover digest and ligation
# wrap around case
if not diffTuple[1] > diffTuple[0]:
half = ((diffTuple[1] + len(product.sequence) - diffTuple[0]) / 2) + diffTuple[0]
else:
half = ((diffTuple[1] - diffTuple[0]) / 2) + diffTuple[0]
# the 4 bp in the overhang must not contain any N's --> otherwise, ligation won't work
overhang = product.sequence[half - halfSiteSize : half + halfSiteSize]
while 'N' in overhang.upper():
half = half + 1
overhang = product.sequence[half - halfSiteSize : half + halfSiteSize]
# Accounting for the == 0 case, which would otherwise send the mutagenic region to ''
if diffTuple[1] == 0:
fwdPrimer = DNA('primer','fwd EIPCR primer for '+product.name, tail + product.sequence[half - halfSiteSize :] + fwdAnneal)
else:
# Originally: product.sequence[half - 2 : diffTuple[1] + 1]
fwdPrimer = DNA('primer','fwd EIPCR primer for '+product.name, tail + product.sequence[half - halfSiteSize : diffTuple[1]] + fwdAnneal)
# print 'AFTER TAIL', product.sequence[half - halfSiteSize : diffTuple[1] + 1]
if half + halfSiteSize == 0:
revPrimer = DNA('primer','rev EIPCR primer for '+product.name, tail + reverseComplement(product.sequence[ diffTuple[0] :]) + revAnneal)
else:
revPrimer = DNA('primer','rev EIPCR primer for '+product.name, tail + reverseComplement(product.sequence[ diffTuple[0] : half + halfSiteSize]) + revAnneal)
# print 'REV AFTER TAIL', reverseComplement(product.sequence[ diffTuple[0] : half + halfSiteSize])
return (fwdPrimer, revPrimer), enz
# TODO: Implement this, along with restriction site checking?
# Called as DesignWobble(product, insert, diffTuple); this stub returns null data
# in the same (primers, enzyme) shape as DesignEIPCR so callers can unpack it.
def DesignWobble(product, insert, diffTuple):
return (0, 0), 0
def Distinguish2DNABands(a, b):
#case of 2
#for a standard 1-2% agarose gel,
#we can distinguish a and b if
#do the following in wolframalpha: LogLogPlot[|a - b| > (0.208*a+42), {a, 0, 9000}, {b, 0, 9000}]
return (abs(a.length - b.length) > (0.208 * a.length + 42)) and (min(a.length, b.length) > 250)
#only returns True if can distinguish between all of the DNA bands
def DistinguishDNABands(list_of_dnas):
ret_val = True
for i in range(len(list_of_dnas)-1):
ret_val = ret_val & Distinguish2DNABands(list_of_dnas[i], list_of_dnas[i+1])
return ret_val
def FindDistinguishingEnzyme(list_of_dnas):
#find the REase that can distinguish between the input DNAs
#DistinguishDNABands(a, b) returns true if we can
# tell apart bands a, b on a gel and a and b are both > 300bp, < 7kb
#Let n be the number of DNAs in the list. Let E be the enzyme under question
# Then we construct a n-dimensional matrix
# where the dimensions have max value defined by the number of fragments generated by E
# E can be used to distinguish between the DNAs if there is a complete row or column
# that is distinguishable (all True by DistinguishDNABands)
#ASSUMPTION, for now, only consider n=3
#iterate over all enzymes (enzyme list should be prioritized by availability and "goodness")
#execute find good enz
#iterate over all combinations of 2 enzymes
#execute find good enz
##find good enz
#for each enzyme/combo in the list
#calculate fragments for each input DNA
#skip if any DNA has # fragments > 6
#n-length list, each character represents the DNA fragment currently under investigation
#iterate to fill in the hypermatrix values
#find if the hypermatrix has a column/row that has all True
#returns top 5 list of enzymes/combos that work
return 0
def FindDistEnz(list_of_dnas):
return FindDistinguishingEnzyme(list_of_dnas)
# Description: SetFlags() returns overhang information about a DNA() digest object
def SetFlags(frag):
(TL,TR,BL,BR) = (0,0,0,0)
if frag.topLeftOverhang.sequence != '':
TL = 1
if frag.topRightOverhang.sequence != '':
TR = 1
if frag.bottomLeftOverhang.sequence != '':
BL = 1
if frag.bottomRightOverhang.sequence != '':
BR = 1
return (TL,TR,BL,BR)
def ligatePostProcessing(ligated, childrenTuple, message):
ligated.setChildren(childrenTuple)
for child in childrenTuple:
child.addParent(ligated)
ligated.setTimeStep(0.5)
ligated.addMaterials(['DNA Ligase','DNA Ligase Buffer','ddH20'])
ligated.instructions = message
return ligated
def isComplementary(seq1, seq2):
if seq1 == '' or seq2 == '':
return False
elif seq1 == Complement(seq2):
return True
return False
def isReverseComplementary(seq1, seq2):
if seq1 == '' or seq2 == '':
return False
elif seq1 == reverseComplement(seq2):
return True
return False
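Both predicates above delegate to `Complement` / `reverseComplement` helpers defined earlier in the file. A minimal self-contained version of the underlying string operations (names lowercased to mark them as a sketch, not the file's own helpers):

```python
_COMP = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G',
         'a': 't', 't': 'a', 'g': 'c', 'c': 'g'}

def complement(seq):
    # Base-wise complement, preserving orientation (no reversal)
    return ''.join(_COMP.get(base, 'N') for base in seq)

def reverse_complement(seq):
    return complement(seq)[::-1]

def is_complementary(seq1, seq2):
    # Empty overhangs never anneal
    return bool(seq1) and bool(seq2) and seq1 == complement(seq2)
```

Note that `'GATC'` is its own reverse complement, which is why palindromic restriction sites read the same on both strands.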
# Description: Ligate() function accepts a list of DNA() digest objects and outputs a list of ligation product DNA objects
def Ligate(inputDNAs):
products = []
# self ligation
for fragment in inputDNAs:
if not isinstance(fragment, DNA):
print '\n*Ligate Error*: Ligate function was passed a non-DNA argument. Argument discarded.\n'
continue
(TL,TR,BL,BR) = SetFlags(fragment)
if fragment.DNAclass == 'plasmid':
print '\n*Ligate Warning*: for ligation reaction, invalid input molecule removed -- ligation input DNA objects must be of class \'digest\' or be PNK treated linear molecules.\n'
elif TL+TR+BL+BR == 1:
pass
elif TL+TR+BL+BR == 0:
# blunt end self ligation case --> need to identify that both sides were digested (i.e. both ecoRV blunt ends)
# and then return circular product of same sequence.
pass
elif fragment.topLeftOverhang.sequence != '':
if isComplementary(fragment.topLeftOverhang.sequence.lower(), fragment.bottomRightOverhang.sequence.lower()):
ligated = DNA('plasmid',fragment.name+' self-ligation',fragment.topLeftOverhang.sequence+fragment.sequence)
products.append(ligatePostProcessing(ligated, (fragment, ), 'Self-ligate ('+fragment.name+') with DNA ligase for 30 minutes at room-temperature.'))
elif fragment.bottomLeftOverhang.sequence != '':
if isComplementary(fragment.bottomLeftOverhang.sequence.lower(), fragment.topRightOverhang.sequence.lower()):
ligated = DNA('plasmid',fragment.name+' self-ligation',fragment.sequence+fragment.topRightOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragment, ), 'Self-ligate ('+fragment.name+') with DNA ligase for 30 minutes at room-temperature.'))
if len(products) > 0 or len(inputDNAs) == 1:
return products
i = 0
while i < len(inputDNAs):
fragOne = inputDNAs[i]
if not isinstance(fragOne, DNA):
print '\n*Ligate Warning*: Ligate function was passed a non-DNA argument. Argument discarded.\n'
i += 1
continue
elif fragOne.DNAclass == 'plasmid':
i += 1
continue
j = i + 1
while j < len(inputDNAs):
fragTwo = inputDNAs[j]
if not isinstance(fragOne, DNA) or not isinstance(fragTwo, DNA):
j += 1
continue
elif fragTwo.DNAclass == 'plasmid':
j += 1
continue
(LTL,LTR,LBL,LBR) = SetFlags(fragOne)
(RTL,RTR,RBL,RBR) = SetFlags(fragTwo)
# first3 is the number of 3' overhangs for the left fragment, and so on for the other three classifiers
(first3, first5, second3, second5) = (LTR + LBL, LBR + LTL, RTR + RBL, RBR + RTL)
# blunt end ligation:
firstFlag = first3 + first5
secondFlag = second3 + second5
if fragOne.pnkTreated and fragTwo.pnkTreated and firstFlag <= 1 and secondFlag <= 1:
if (not firstFlag and secondFlag) or (firstFlag and not secondFlag):
# exactly one of the two fragments is blunt: the ends are incompatible
pass
elif not firstFlag and not secondFlag:
ligated = DNA('plasmid', fragOne.name+', '+fragTwo.name+' ligation product', fragOne.sequence + fragTwo.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
elif firstFlag and secondFlag:
if first3 and second3:
if isComplementary(fragOne.topRightOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragOne.topRightOverhang.sequence+fragTwo.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
if isComplementary(fragOne.bottomLeftOverhang.sequence.upper(), fragTwo.topRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragTwo.sequence+fragTwo.topRightOverhang.sequence+fragOne.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
if isReverseComplementary(fragOne.topRightOverhang.sequence.upper(), fragTwo.topRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragOne.topRightOverhang.sequence+reverseComplement(fragTwo.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
if isReverseComplementary(fragOne.bottomLeftOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',reverseComplement(fragTwo.sequence)+reverse(fragTwo.bottomLeftOverhang.sequence)+fragOne.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
if isComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragTwo.sequence+fragOne.topLeftOverhang.sequence+fragOne.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
if isComplementary(fragOne.bottomRightOverhang.sequence.upper(), fragTwo.topLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragTwo.topLeftOverhang.sequence+fragTwo.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
if isReverseComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.topLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',reverseComplement(fragTwo.sequence)+fragOne.topLeftOverhang.sequence+fragOne.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
if isReverseComplementary(fragOne.bottomRightOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+Complement(fragOne.bottomRightOverhang.sequence)+reverseComplement(fragTwo.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
# non-blunt ligation:
else:
if first3 == 2:
if isComplementary(fragOne.topRightOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
if isComplementary(fragOne.bottomLeftOverhang.sequence.upper(), fragTwo.topRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragOne.topRightOverhang.sequence+fragTwo.sequence+fragTwo.topRightOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
if isReverseComplementary(fragOne.topRightOverhang.sequence.upper(), fragTwo.topRightOverhang.sequence.upper()):
if isReverseComplementary(fragOne.bottomLeftOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragOne.topRightOverhang.sequence+reverseComplement(fragTwo.sequence)+reverse(fragTwo.bottomLeftOverhang.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
elif first3 == 1:
if LTR:
# then you know it must have LTL
if RTR:
# then, if it is to ligate, it must have compatible RTL
if isReverseComplementary(fragOne.topRightOverhang.sequence.upper(), fragTwo.topRightOverhang.sequence.upper()):
if isReverseComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.topLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.topLeftOverhang.sequence+fragOne.sequence+fragOne.topRightOverhang.sequence+reverseComplement(fragTwo.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
# to ligate, it must have RBL and RBR
if isComplementary(fragOne.topRightOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
if isComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.topLeftOverhang.sequence+fragOne.sequence+fragOne.topRightOverhang.sequence+fragTwo.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
# you know it has LBL as its 3 and LBR as its 5
if RTR:
# then, if it is to ligate, it must have compatible RTL
if isComplementary(fragTwo.topRightOverhang.sequence.upper(), fragOne.bottomLeftOverhang.sequence.upper()):
if isComplementary(fragTwo.topLeftOverhang.sequence.upper(), fragOne.bottomRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragTwo.topLeftOverhang.sequence+fragTwo.sequence+fragTwo.topRightOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
# to ligate, it must have RBL and RBR
if isReverseComplementary(fragOne.bottomRightOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
if isReverseComplementary(fragOne.bottomLeftOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',Complement(fragOne.bottomLeftOverhang.sequence)+fragOne.sequence+Complement(fragOne.bottomRightOverhang.sequence)+reverseComplement(fragTwo.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
if isComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
if isComplementary(fragOne.bottomRightOverhang.sequence.upper(), fragTwo.topLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.topLeftOverhang.sequence+fragOne.sequence+fragTwo.topLeftOverhang.sequence+fragTwo.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
if isReverseComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.topLeftOverhang.sequence.upper()):
if isReverseComplementary(fragOne.bottomRightOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.topLeftOverhang.sequence+fragOne.sequence+reverse(fragTwo.bottomRightOverhang.sequence)+reverseComplement(fragTwo.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
j += 1
i += 1
if len(products) == 0:
raise Exception('*Ligate Error*: ligation resulted in zero products.')
return products
# Description: fragment processing function for zymo, short fragment and gel cleanups
def cleanupPostProcessing(band, source):
parentBand = band.clone()
parentBand.setChildren((band,))
band.addParent(parentBand)
timeStep = 0.5
cleanupMaterials = ['Zymo Column','Buffer PE','ddH20']
if source == 'short fragment':
cleanupMaterials.append('Ethanol / Isopropanol')
elif source == 'gel extraction and short fragment':
cleanupMaterials += ['Buffer ADB', 'Ethanol / Isopropanol']
timeStep = 1
elif source == 'gel extraction and zymo':
cleanupMaterials.append('Buffer ADB')
timeStep = 1
parentBand.setTimeStep(timeStep)
parentBand.addMaterials(cleanupMaterials)
parentBand.instructions = 'Perform '+source+' cleanup on ('+band.name+').'
return parentBand
# Description: ZymoPurify() function takes a list of DNA objects and filters out < 300 bp DNA's
def ZymoPurify(inputDNAs):
filtered = []
for zymoInput in inputDNAs:
if not isinstance(zymoInput, DNA):
print '\n*Zymo Warning*: Zymo purification function was passed a non-DNA argument. Argument discarded.\n'
else:
filtered.append(zymoInput)
inputDNAs = filtered
if len(inputDNAs) == 0:
raise Exception('*Zymo Error*: Zymo purification function passed empty input list.')
(outputBands, sizeTuples) = ([], [])
for dna in inputDNAs:
sizeTuples.append((len(dna.sequence), dna))
sizeTuples.sort(reverse=True)
currentTuple = sizeTuples[0]
currentSize = currentTuple[0]
while currentSize > 300:
band = currentTuple[1]
outputBands.append(cleanupPostProcessing(band, 'standard zymo'))
sizeTuples.pop(0)
if len(sizeTuples) > 0:
currentTuple = sizeTuples[0]
currentSize = currentTuple[0]
else:
break
return outputBands
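`ZymoPurify` models the column's loss of small species by keeping only fragments above roughly 300 bp; stripped of the bookkeeping, the size selection is just a sorted filter. A sketch using (length, label) tuples in place of DNA objects:

```python
def size_select(fragments, cutoff=300):
    # fragments: list of (length_bp, label) tuples.
    # Returns the labels above the cutoff, largest first, modeling the
    # loss of short species on a spin column.
    return [label for length, label in sorted(fragments, reverse=True) if length > cutoff]
```

A 100 bp fragment is dropped while 500 bp and 2 kb fragments are retained in size order.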
# Description: ShortFragmentCleanup() function takes a list of DNA objects and filters out < 50 bp DNA's
def ShortFragmentCleanup(inputDNAs):
if len(inputDNAs) == 0:
raise Exception('*Short Fragment Cleanup Error*: short fragment cleanup function passed empty input list.')
outputBands = []
sizeTuples = []
for dna in inputDNAs:
fragSize = len(dna.sequence)
sizeTuples.append((fragSize, dna))
sizeTuples.sort(reverse=True)
currentTuple = sizeTuples[0]
currentSize = currentTuple[0]
while currentSize > 50 and len(sizeTuples) > 1:
band = currentTuple[1]
outputBands.append(cleanupPostProcessing(band,'short fragment'))
sizeTuples.pop(0)
currentTuple = sizeTuples[0]
currentSize = currentTuple[0]
if currentSize > 50:
band = currentTuple[1]
outputBands.append(cleanupPostProcessing(band,'short fragment'))
return outputBands
# Description: GelAndZymoPurify() employs a user-specified purification strategy to cut out a range of band sizes,
# then filters out DNAs < 300 bp. If fragments between 50 bp and 300 bp are detected, it switches to short fragment cleanup mode.
def GelAndZymoPurify(inputDNAs, strategy):
# sort based on size
if len(inputDNAs) == 0:
raise Exception("*Gel Purification Error*: gel purification with strategy '" + str(strategy) + "' passed empty input list.")
elif len(inputDNAs) == 1:
return inputDNAs
(shortFlag, lostFlag, interBands, outputBands, sizeTuples) = (False, False, [], [], [])
for dna in inputDNAs:
sizeTuples.append((len(dna.sequence), dna))
if isinstance( strategy, str):
if strategy == 'L':
sizeTuples.sort(reverse=True)
n = 0
currentTuple = sizeTuples[n]
largestSize = currentTuple[0]
currentSize = largestSize
while currentSize > largestSize * 5/6 and n < len(sizeTuples) - 1:
interBands.append(currentTuple[1])
n += 1
currentTuple = sizeTuples[n]
currentSize = currentTuple[0]
if currentSize > largestSize * 5/6:
if currentSize < 50:
lostFlag = True
elif currentSize < 300:
shortFlag = True
interBands.append(currentTuple[1])
if len(interBands) > 1:
print '\n*Gel Purification Warning*: large fragment purification resulted in purification of multiple, possibly unintended distinct DNAs.\n'
elif strategy == 'S':
sizeTuples.sort()
n = 0
currentTuple = sizeTuples[n]
smallestSize = currentTuple[0]
currentSize = smallestSize
while currentSize < smallestSize * 6/5 and n < len(sizeTuples) - 1:
interBands.append(currentTuple[1])
n = n + 1
currentTuple = sizeTuples[n]
currentSize = currentTuple[0]
if currentSize < smallestSize * 6/5:
if currentSize < 50:
lostFlag = True
elif currentSize < 300:
shortFlag = True
interBands.append(currentTuple[1])
if len(interBands) > 1:
print '\n*Gel Purification Warning*: small fragment purification resulted in purification of multiple, possibly unintended distinct DNAs.\n'
elif isinstance( strategy, ( int, long ) ):
sizeTuples.sort(reverse=True)
currentTuple = sizeTuples[0]
currentSize = currentTuple[0]
while currentSize > strategy * 6/5 and len(sizeTuples) > 1:
sizeTuples.pop(0)
currentTuple = sizeTuples[0]
currentSize = currentTuple[0]
while currentSize > strategy * 5/6 and len(sizeTuples) > 1:
band = sizeTuples.pop(0)
interBands.append(band[1])
currentTuple = sizeTuples[0]
currentSize = currentTuple[0]
if currentSize > strategy * 5/6:
if currentSize < 50:
lostFlag = True
elif currentSize < 300:
shortFlag = True
interBands.append(currentTuple[1])
if len(interBands) == 0:
raise Exception('*Gel Purification Error*: for gel purification with strategy \''+str(strategy)+'\', no digest bands present in the given range; purification yielded zero DNA products.')
elif len(interBands) > 1:
print '\n*Gel Purification Warning*: fragment purification in range of band size \''+str(strategy)+'\' resulted in purification of multiple, possibly unintended distinct DNAs.\n'
else:
raise Exception('*Gel Purification Error*: invalid cleanup strategy argument. Valid arguments are \'L\', \'S\', or integer size of band.')
if len(interBands) == 0:
if lostFlag:
print '\n*Gel Purification Warning*: purification with given strategy \''+str(strategy)+'\' returned only short fragments (< 50 bp) that were lost. Returning empty products list.\n'
raise Exception('*Gel Purification Error*: purification with given strategy \''+str(strategy)+'\' yielded zero products.')
else:
if lostFlag:
print '\n*Gel Purification Warning*: purification with given strategy "'+str(strategy)+'" returned at least one short fragment (< 50 bp) that was lost. Returning remaining products.\n'
for band in interBands:
outputBands.append(cleanupPostProcessing(band,'gel extraction and zymo'))
elif shortFlag:
print '\n*Gel Purification Warning*: purification with given strategy "'+str(strategy)+'" yielded short fragments (< 300 bp). Returning short fragment cleanup products.\n'
for band in interBands:
outputBands.append(cleanupPostProcessing(band,'gel extraction and short fragment'))
else:
for band in interBands:
outputBands.append(cleanupPostProcessing(band,'gel extraction and zymo'))
return outputBands
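# For an integer strategy, the gel cut above keeps bands whose size falls between
# 5/6 and 6/5 of the requested size. A standalone sketch of that tolerance window
# (function name hypothetical):

```python
def bands_in_window(sizes, target):
    # keep band sizes within the ~20% gel-cut window around target,
    # i.e. strictly between 5/6 and 6/5 of the requested size
    lo = target * 5 / 6.0
    hi = target * 6 / 5.0
    return sorted(s for s in sizes if lo < s < hi)

bands_in_window([100, 950, 1000, 1180, 1300], 1000)
# -> [950, 1000, 1180]
```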
# Description: Ligate() function that allows linear ligation products
# Note: also disallows blunt end ligation
def linLigate(inputDNAs):
products = []
# self ligation
for fragment in inputDNAs:
if not isinstance(fragment, DNA):
print '\n*Ligate Warning*: Ligate function was passed a non-DNA argument. Argument discarded.\n'
continue
(TL,TR,BL,BR) = SetFlags(fragment)
if fragment.DNAclass != 'digest':
print '\n*Ligate Warning*: for ligation reaction, invalid input molecule removed -- ligation input DNA objects must be of class \'digest\'.\n'
elif TL+TR+BL+BR == 1:
pass
elif TL+TR+BL+BR == 0:
# blunt end self ligation case --> need to identify that both sides were digested (i.e. both ecoRV blunt ends)
# and then return circular product of same sequence.
pass
elif fragment.topLeftOverhang.sequence != '':
if isComplementary(fragment.topLeftOverhang.sequence.lower(), fragment.bottomRightOverhang.sequence.lower()):
ligated = DNA('plasmid',fragment.name+' self-ligation',fragment.topLeftOverhang.sequence+fragment.sequence)
products.append(ligatePostProcessing(ligated, (fragment, ), 'Self-ligate ('+fragment.name+') with DNA ligase for 30 minutes at room-temperature.'))
elif fragment.bottomLeftOverhang.sequence != '':
if isComplementary(fragment.bottomLeftOverhang.sequence.lower(), fragment.topRightOverhang.sequence.lower()):
ligated = DNA('plasmid',fragment.name+' self-ligation',fragment.sequence+fragment.topRightOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragment, ), 'Self-ligate ('+fragment.name+') with DNA ligase for 30 minutes at room-temperature.'))
if len(products) > 0 or len(inputDNAs) == 1:
return products
i = 0
while i < len(inputDNAs):
fragOne = inputDNAs[i]
if not isinstance(fragOne, DNA):
print '\n*Ligate Warning*: Ligate function was passed a non-DNA argument. Argument discarded.\n'
i += 1
continue
j = i + 1
while j < len(inputDNAs):
fragTwo = inputDNAs[j]
if not isinstance(fragOne, DNA) or not isinstance(fragTwo, DNA):
print '\n*Ligate Warning*: Ligate function was passed a non-DNA argument. Argument discarded.\n'
j += 1
continue
elif fragOne.DNAclass != 'digest' or fragTwo.DNAclass != 'digest':
j += 1
continue
(LTL,LTR,LBL,LBR) = SetFlags(fragOne)
(RTL,RTR,RBL,RBR) = SetFlags(fragTwo)
# first3 is the number of 3' overhangs for the left fragment, and so on for the other three classifiers
(first3, first5, second3, second5) = (LTR + LBL, LBR + LTL, RTR + RBL, RBR + RTL)
firstFlag = first3 + first5
secondFlag = second3 + second5
# non-blunt end ligation:
if first3 == 2:
# Here, you know that it has LTR and LBL
# But you don't know about its RXX fields
if isComplementary(fragOne.topRightOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
if isComplementary(fragOne.bottomLeftOverhang.sequence.upper(), fragTwo.topRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragOne.topRightOverhang.sequence+fragTwo.sequence+fragTwo.topRightOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragOne.topRightOverhang.sequence+fragTwo.sequence)
ligated.bottomLeftOverhang = Overhang(fragOne.bottomLeftOverhang.sequence)
# you don't know whether it is RTR or RBR
if RTR:
ligated.topRightOverhang = Overhang(fragTwo.topRightOverhang.sequence)
elif RBR:
ligated.bottomRightOverhang = Overhang(fragTwo.bottomRightOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
# you know it's not going to circularize, but you also know it has a LBL
elif isComplementary(fragOne.bottomLeftOverhang.sequence.upper(), fragTwo.topRightOverhang.sequence.upper()):
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragTwo.sequence+fragTwo.topRightOverhang.sequence+fragOne.sequence)
ligated.topRightOverhang = Overhang(fragOne.topRightOverhang.sequence)
# you don't know whether it is RTL or RBL
if RTL:
ligated.topLeftOverhang = Overhang(fragTwo.topLeftOverhang.sequence)
elif RBL:
ligated.bottomLeftOverhang = Overhang(fragTwo.bottomLeftOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
if isReverseComplementary(fragOne.topRightOverhang.sequence.upper(), fragTwo.topRightOverhang.sequence.upper()):
if isReverseComplementary(fragOne.bottomLeftOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragOne.topRightOverhang.sequence+reverseComplement(fragTwo.sequence)+reverse(fragTwo.bottomLeftOverhang.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragOne.topRightOverhang.sequence+reverseComplement(fragTwo.sequence))
ligated.bottomLeftOverhang = Overhang(fragOne.bottomLeftOverhang.sequence)
# you don't know whether it is RBL or RTL
if RTL:
ligated.bottomRightOverhang = Overhang(reverse(fragTwo.topLeftOverhang.sequence))
elif RBL:
ligated.topRightOverhang = Overhang(reverse(fragTwo.bottomLeftOverhang.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
# you know it's not going to circularize, but you also know it has a LBL
elif isReverseComplementary(fragOne.bottomLeftOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',reverseComplement(fragTwo.sequence)+reverse(fragTwo.bottomLeftOverhang.sequence)+fragOne.sequence)
ligated.topRightOverhang = Overhang(fragOne.topRightOverhang.sequence)
# you don't know whether it is RTR or RBR
if RTR:
ligated.bottomLeftOverhang = Overhang(reverse(fragTwo.topRightOverhang.sequence))
elif RBR:
ligated.topLeftOverhang = Overhang(reverse(fragTwo.bottomRightOverhang.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
elif first3 == 1:
if LTR:
# then you know it must have LTL
if RTR:
# then, if it is to ligate, it must have compatible RTL
if isReverseComplementary(fragOne.topRightOverhang.sequence.upper(), fragTwo.topRightOverhang.sequence.upper()):
if isReverseComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.topLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.topLeftOverhang.sequence+fragOne.sequence+fragOne.topRightOverhang.sequence+reverseComplement(fragTwo.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragOne.topRightOverhang.sequence+reverseComplement(fragTwo.sequence))
ligated.topLeftOverhang = Overhang(fragOne.topLeftOverhang.sequence)
# you don't know whether it is RTL or RBL
if RTL:
ligated.bottomRightOverhang = Overhang(reverse(fragTwo.topLeftOverhang.sequence))
elif RBL:
ligated.bottomLeftOverhang = Overhang(reverse(fragTwo.bottomLeftOverhang.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
# now, you know it's not going to circularize, but you know it has LTL
elif isReverseComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.topLeftOverhang.sequence.upper()):
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',reverseComplement(fragTwo.sequence)+fragOne.topLeftOverhang.sequence+fragOne.sequence)
ligated.topRightOverhang = Overhang(fragOne.topRightOverhang.sequence)
# you have RTR here (RBR is ruled out by this branch), so the new bottom-left overhang comes from fragTwo's topRightOverhang
ligated.bottomLeftOverhang = Overhang(reverse(fragTwo.topRightOverhang.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
# you know here that you have LTR and LTL, and that you do not have RTR
else:
# to ligate, it must have RBL and RBR
if isComplementary(fragOne.topRightOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
if isComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.topLeftOverhang.sequence+fragOne.sequence+fragOne.topRightOverhang.sequence+fragTwo.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragOne.topRightOverhang.sequence+fragTwo.sequence)
ligated.topLeftOverhang = Overhang(fragOne.topLeftOverhang.sequence)
ligated.bottomRightOverhang = Overhang(fragTwo.bottomRightOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
elif isComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
# here, you know you have LTR and LTL, has a complementary RBR and does not have a RTR
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragTwo.sequence+fragOne.topLeftOverhang.sequence+fragOne.sequence)
ligated.topRightOverhang = Overhang(fragOne.topRightOverhang.sequence)
if RTL:
ligated.topLeftOverhang= Overhang(fragTwo.topLeftOverhang.sequence)
elif RBL:
ligated.bottomLeftOverhang = Overhang(fragTwo.bottomLeftOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
# you know it has LBL as its 3 and LBR as its 5
if RTR:
# then, if it is to ligate, it must have compatible RTL
if isComplementary(fragTwo.topRightOverhang.sequence.upper(), fragOne.bottomLeftOverhang.sequence.upper()):
if isComplementary(fragTwo.topLeftOverhang.sequence.upper(), fragOne.bottomRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragTwo.topLeftOverhang.sequence+fragTwo.sequence+fragTwo.topRightOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragTwo.topLeftOverhang.sequence+fragTwo.sequence)
ligated.bottomRightOverhang = Overhang(fragOne.bottomRightOverhang.sequence)
# you don't know whether it is a RBL or RTL
if RTL:
ligated.topLeftOverhang = Overhang(fragTwo.topLeftOverhang.sequence)
elif RBL:
ligated.bottomLeftOverhang = Overhang(fragTwo.bottomLeftOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
# you know it's not going to circularize, but you know it has LBR
elif isComplementary(fragTwo.topLeftOverhang.sequence.upper(), fragOne.bottomRightOverhang.sequence.upper()):
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragTwo.topLeftOverhang.sequence+fragTwo.sequence)
ligated.bottomLeftOverhang = Overhang(fragOne.bottomLeftOverhang.sequence)
if RTR:
ligated.topRightOverhang = Overhang(fragTwo.topRightOverhang.sequence)
elif RBR:
ligated.bottomRightOverhang = Overhang(fragTwo.bottomRightOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
# up to here is good
else:
# you know it has LBL, LBR, and not RTR
# to ligate, it must have RBL and RBR
if isReverseComplementary(fragOne.bottomRightOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
if isReverseComplementary(fragOne.bottomLeftOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',Complement(fragOne.bottomLeftOverhang.sequence)+fragOne.sequence+Complement(fragOne.bottomRightOverhang.sequence)+reverseComplement(fragTwo.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+Complement(fragOne.bottomRightOverhang.sequence)+reverseComplement(fragTwo.sequence))
ligated.bottomLeftOverhang = Overhang(fragOne.bottomLeftOverhang.sequence)
if RTL:
ligated.bottomRightOverhang = Overhang(reverse(fragTwo.topLeftOverhang.sequence))
elif RBL:
ligated.topRightOverhang = Overhang(reverse(fragTwo.bottomLeftOverhang.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
# you know it's not going to circularize, but you know it has LBL
elif isReverseComplementary(fragOne.bottomLeftOverhang.sequence.upper(), fragTwo.bottomLeftOverhang.sequence.upper()):
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',reverseComplement(fragTwo.sequence)+Complement(fragOne.bottomLeftOverhang.sequence)+fragOne.sequence)
ligated.bottomRightOverhang = Overhang(fragOne.bottomRightOverhang.sequence)
ligated.topLeftOverhang = Overhang(reverse(fragTwo.bottomRightOverhang.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
# here first3 == 0, so you know it has LTL and LBR
else:
if isComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
if isComplementary(fragOne.bottomRightOverhang.sequence.upper(), fragTwo.topLeftOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.topLeftOverhang.sequence+fragOne.sequence+fragTwo.topLeftOverhang.sequence+fragTwo.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragTwo.sequence+fragOne.topLeftOverhang.sequence+fragOne.sequence)
ligated.bottomRightOverhang = Overhang(fragOne.bottomRightOverhang.sequence)
if RTL:
ligated.topLeftOverhang = Overhang(fragTwo.topLeftOverhang.sequence)
elif RBL:
ligated.bottomLeftOverhang = Overhang(fragTwo.bottomLeftOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
elif isComplementary(fragOne.bottomRightOverhang.sequence.upper(), fragTwo.topLeftOverhang.sequence.upper()):
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+fragTwo.topLeftOverhang.sequence+fragTwo.sequence)
ligated.topLeftOverhang = Overhang(fragOne.topLeftOverhang.sequence)
if RTR:
ligated.topRightOverhang = Overhang(fragTwo.topRightOverhang.sequence)
elif RBR:
ligated.bottomRightOverhang = Overhang(fragTwo.bottomRightOverhang.sequence)
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
# up to here is good
# here first3 == 0, so you know it has LTL and LBR
if isReverseComplementary(fragOne.topLeftOverhang.sequence.upper(), fragTwo.topLeftOverhang.sequence.upper()):
if isReverseComplementary(fragOne.bottomRightOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
ligated = DNA('plasmid',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.topLeftOverhang.sequence+fragOne.sequence+reverse(fragTwo.bottomRightOverhang.sequence)+reverseComplement(fragTwo.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
else:
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',reverseComplement(fragTwo.sequence)+fragOne.topLeftOverhang.sequence+fragOne.sequence)
ligated.bottomRightOverhang = Overhang(fragOne.bottomRightOverhang.sequence)
if RTR:
ligated.bottomLeftOverhang = Overhang(reverse(fragTwo.topRightOverhang.sequence))
elif RBR:
ligated.topLeftOverhang = Overhang(reverse(fragTwo.bottomRightOverhang.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
elif isReverseComplementary(fragOne.bottomRightOverhang.sequence.upper(), fragTwo.bottomRightOverhang.sequence.upper()):
ligated = DNA('digest',fragOne.name+', '+fragTwo.name+' ligation product',fragOne.sequence+Complement(fragOne.bottomRightOverhang.sequence)+reverseComplement(fragTwo.sequence))
ligated.topLeftOverhang = Overhang(fragOne.topLeftOverhang.sequence)
ligated.bottomRightOverhang = Overhang(reverse(fragTwo.topLeftOverhang.sequence))
products.append(ligatePostProcessing(ligated, (fragOne, fragTwo), 'Ligate ('+fragOne.name+', '+fragTwo.name+') with DNA ligase for 30 minutes at room-temperature.'))
j += 1
i += 1
return products
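# Sticky-end compatibility drives every branch above. One plausible reading of the
# isComplementary test, sketched with hypothetical standalone helpers:

```python
def reverse_complement(seq):
    # Watson-Crick complement, read 3'->5' (toy helper; real code has its own)
    comp = {'a': 't', 't': 'a', 'g': 'c', 'c': 'g'}
    return ''.join(comp[b] for b in reversed(seq.lower()))

def sticky_ends_match(overhang_a, overhang_b):
    # two single-stranded overhangs can anneal when one reads as the
    # reverse complement of the other (both non-empty, i.e. not blunt)
    a, b = overhang_a.lower(), overhang_b.lower()
    return a != '' and reverse_complement(a) == b

sticky_ends_match('aatt', 'aatt')   # EcoRI ends anneal with themselves
sticky_ends_match('aacc', 'aacc')   # incompatible ends
```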
# Note: going to stick with the convention where they actually pass a list of restriction enzymes
# As in: GoldenGate(vector_DNA, list_of_DNAs, EnzymeDictionary['BsaI'], ['AmpR', 'KanR'])
def GoldenGate(VectorPlasmid, InputDNAs, reASE, resistanceList):
# ggEnzyme = EnzymeDictionary()[reASE]
ggDNAs, outputDNAs, resistanceList, vector = [], [], map(str.lower, resistanceList), None
vecDigest = Digest(VectorPlasmid, (reASE, ))
for frag in vecDigest:
if len(HasReplicon(frag.sequence)):
vector = frag
ggDNAs.append(vector)
break
if vector is None:
raise Exception('For GoldenGate function, no viable vector input provided (must contain origin of replication).')
for ggDNA in InputDNAs:
if ggDNA.DNAclass != 'plasmid':
print '\n*GoldenGate Warning*: linear inputs disallowed.\n'
continue
try:
ggDigest = Digest(ggDNA, (reASE, ))
ggDNAs += ggDigest
except Exception:
# digest failed (e.g. no recognition sites for the enzyme); skip this input
pass
ggLigation = rGoldenGate(vector, [0, ], ggDNAs)
# for a ligation product to be part of the gg output, it must fulfill three criteria:
# 1) It must be circular (handled by Ligate() function)
# 2) It must have at least one replicon
# 3) It must have all of the above specified resistance markers
for product in ggLigation:
if product is None:
continue
if len(HasReplicon(product.sequence)) > 0:
resistanceFlag, resistanceMarkers = 1, map(str.lower, HasResistance(product.sequence))
for resistance in resistanceList:
if resistance not in resistanceMarkers:
resistanceFlag = 0
if resistanceFlag:
if not DNAlistContains(outputDNAs, product):
outputDNAs.append(product)
return outputDNAs
def DNAlistContains(DNAlist, candidateDNA):
for listDNA in DNAlist:
if candidateDNA.isEqual(listDNA):
return True
return False
def rGoldenGate(currentLink, linkList, allDNAs):
products = []
if currentLink.DNAclass == 'plasmid':
return (currentLink, )
else:
counter = 0
for myDNA in allDNAs:
newLink = linLigate([currentLink, myDNA])
if len(newLink) == 0:
counter += 1
continue
else:
for link in newLink:
if counter == 0:
return (None, )
elif counter in linkList:
return (None, )
else:
nextList = list(linkList)
nextList.append(counter)
nextLink = link
futureProducts = rGoldenGate(nextLink, nextList, allDNAs)
for futureProduct in futureProducts:
if isinstance(futureProduct, DNA):
if futureProduct.DNAclass == 'plasmid':
products.append(futureProduct)
counter += 1
return products
# Description: HasFeature() function checks for presence of regex-encoded feature in seq
def HasFeature(regex, seq):
#Regex must be lower case!
return bool( re.search(regex, seq.lower()) ) | bool( re.search(regex, reverseComplement(seq.lower()) ) )
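# HasFeature() searches both strands. A self-contained sketch of the same idea,
# with a toy reverse complement (helper names hypothetical):

```python
import re

def reverse_complement(seq):
    comp = {'a': 't', 't': 'a', 'g': 'c', 'c': 'g'}
    return ''.join(comp[b] for b in reversed(seq.lower()))

def has_feature(regex, seq):
    # pattern must be lower case; a hit on either strand counts
    s = seq.lower()
    return bool(re.search(regex, s)) or bool(re.search(regex, reverse_complement(s)))

has_feature('gaattc', 'ttGAATTCat')    # forward-strand hit
has_feature('aaacccggg', 'cccgggttt')  # reverse-strand-only hit
```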
#####Origins Suite: Checks for presence of certain origins of replication#####
def HasColE2(seq):
#has ColE2 origin, data from PMID 16428404
regexp = '....tga[gt]ac[ct]agataagcc[tgc]tatcagataacagcgcccttttggcgtctttttgagcacc'
return HasFeature(regexp, seq)
#necessary and sufficient element for ColE2 replication, however a longer sequence is needed for stable replication
# 'AGCGCCTCAGCGCGCCGTAGCGTCGATAAAAATTACGGGCTGGGGCGAAACTACCATCTGTTCGAAAAGGTCCGTAAATGGGCCTACAGAGCGATTCGTCAGGGCTGGCCTGTATTCTCACAATGGCTTGATGCCGTTATCCAGCGTGTCGAAATGTACAACGCTTCGCTTCCCGTTCCGCTTTCTCCGGCTGAATGTCGGGCTATTGGCAAGAGCATTGCGAAATATACACACAGGAAATTCTCACCAGAGGGATTTTCCGCTGTACAGGCCGCTCGCGGTCGCAAGGGCGGAACTAAATCTAAGCGCGCAGCAGTTCCTACATCAGCACGTTCGCTGAAACCGTGGGAGGCATTAGGCATCAGTCGAGCGACGTACTACCGAAAATTAAAATGTGACCCAGACCTCGCnnnntga'
#longer element shown in the Anderson lab that stably replicates
def HasColE1(seq):
regexp = 'tcatgaccaaaatcccttaacgtgagttttcgttccactgagcgtcagaccccgtagaaaagatcaaaggatcttcttgagatcctttttttctgcgcgtaatctgctgcttgcaaacaaaaaaaccaccgctaccagcggtggtttgtttgccggatcaagagcta[cagt]caactctttttccgaaggtaactggcttcagcagagcgcagataccaaatactgt[cagt]cttctagtgtagccgtagttaggccaccacttcaagaactctgtagcaccgcctacatacctcgctctgctaatcctgttaccagtggctgctgccagtggcgataagtcgtgtcttaccgggttggactcaagacgatagttaccggataaggcgcagcggtcgggctgaacggggggttcgtgcacacagcccagcttggagcgaacgacctacaccgaactgagatacctacagcgtgagc[cagt][cagt]tgagaaagcgccacgcttcccgaagggagaaaggcggacaggtatccggtaagcggcagggtcggaacaggagagcgcacgagggagcttccaggggg[acgt]aacgcctggtatctttatagtcctgtcgggtttcgccacctctgacttgagcgtcgatttttgtgatgctcgtcaggggggc[acgt]gagcct[ga]tggaaaaacgccagcaacgcggcc'
return HasFeature(regexp, seq)
def HasR6K(seq):
#has R6k, data from Anderson lab observations
regexp = 'gcagttcaacctgttgatagtacgtactaagctctcatgtttcacgtactaagctctcatgtttaacgtactaagctctcatgtttaacgaactaaaccctcatggctaacgtactaagctctcatggctaacgtactaagctctcatgtttcacgtactaagctctcatgtttgaacaataaaattaatataaatcagcaacttaaatagcctctaaggttttaagttttataagaaaaaaaagaatatataaggcttttaaagcttttaaggtttaacggttgtggacaacaagccagggatgtaacgcactgagaagcccttagagcctctcaaagcaattttgagtgacacaggaacacttaacggctgacatggg'.lower()
return HasFeature(regexp, seq)
def HasP15A(seq):
regex = 'aatattttatctgattaataagatgatcttcttgagatcgttttggtctgcgcgtaatctcttgctctgaaaacgaaaaaaccgccttgcagggcggtttttcgaaggttctctgagctaccaactctttgaaccgaggtaactggcttggaggagcgcagtcaccaaaacttgtcctttcagtttagccttaaccggcgcatgacttcaagactaactcctctaaatcaattaccagtggctgctgccagtggtgcttttgcatgtctttccgggttggactcaagacgatagttaccggataaggcgcagcggtcggactgaacggggggttcgtgcatacagtccagcttggagcgaactgcctacccggaactgagtgtcaggcgtggaatgagacaaacgcggccataacagcggaatgacaccggtaaaccgaaaggcaggaacaggagagcgcacgagggagccgccagggggaaacgcctggtatctttatagtcctgtcgggtttcgccaccactgatttgagcgtcagatttcgtgatgcttgtcaggggggcggagcctatggaaaaacggctttgccgcggccctctcacttccctgttaagtatcttcctggcatcttccaggaaatctccgccccgttcgtaagccatttccgctcgccgcagtcgaacgaccgagcgtagcgagtcagtgagcgaggaagcggaatatatcctgtatcacatattctgctgacgcaccggtgcagccttttttctcctgccacatgaagcacttcactgacaccctcatcagtgccaacatagtaag'
return HasFeature(regex, seq)
def HaspUC(seq):
regex = 'cccgtagaaaagatcaaaggatcttcttgagatcctttttttctgcgcgtaatctgctgcttgcaaacaaaaaaaccaccgctaccagcggtggtttgtttgccggatcaagagctaccaactctttttccgaaggtaactggcttcagcagagcgcagataccaaatactgtccttctagtgtagccgtagttaggccaccacttcaagaactctgtagcaccgcctacatacctcgctctgctaatcctgttaccagtggctgctgccagtggcgataagtcgtgtcttaccgggttggactcaagacgatagttaccggataaggcgcagcggtcgggctgaacggggggttcgtgcacacagcccagcttggagcgaacgacctacaccgaactgagatacctacagcgtgagcattgagaaagcgccacgcttcccgaagggagaaaggcggacaggtatccggtaagcggcagggtcggaacaggagagcgcacgagggagcttccagggggaaacgcctggtatctttatagtcctgtcgggtttcgccacctctgacttgagcgtcgatttttgtgatgctcgtcaggggggcggagcctatggaaaaacgccagcaacgcggcctttttacggttcctggccttttgctggccttttgctcacat'
return HasFeature(regex, seq)
#####Resistance Suite: Checks for presence of certain antibiotic resistance markers#####
def HasAAFeature(regex, DNAseq):
#regex must be upper case; checks all six reading frames (fwd and rev, 3 frames each)
seq = DNAseq
retval = bool( re.search(regex, translate(seq.upper() )) ) | bool( re.search(regex,translate(seq[1:].upper() ) ) ) | bool( re.search(regex,translate(seq[2:].upper() ) ) )
seq = reverseComplement(seq)
retval = retval | bool( re.search(regex, translate(seq.upper() )) ) | bool( re.search(regex,translate(seq[1:].upper() ) ) ) | bool( re.search(regex,translate(seq[2:].upper() ) ) )
return retval
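# HasAAFeature() translates all six reading frames before its protein-regex search.
# A sketch of just the frame enumeration (translation itself omitted; name hypothetical):

```python
def six_frames(seq):
    # yield the three forward offsets and the three offsets on the
    # reverse complement -- the frames HasAAFeature translates
    comp = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}
    rev = ''.join(comp[b] for b in reversed(seq.upper()))
    for strand in (seq.upper(), rev):
        for offset in range(3):
            yield strand[offset:]

list(six_frames('ATGC'))
# -> ['ATGC', 'TGC', 'GC', 'GCAT', 'CAT', 'AT']
```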
def HasSpecR(seq):
regex='MRSRNWSRTLTERSGGNGAVAVFMACYDCFFGVQSMPRASKQQARYAVGRCLMLWSSNDVTQQGSRPKTKLNIMREAVIAEVSTQLSEVVGVIERHLEPTLLAVHLYGSAVDGGLKPHSDIDLLVTVTVRLDETTRRALINDLLETSASPGESEILRAVEVTIVVHDDIIPWRYPAKRELQFGEWQRNDILAGIFEPATIDIDLAILLTKAREHSVALVGPAAEELFDPVPEQDLFEALNETLTLWNSPPDWAGDERNVVLTLSRIWYSAVTGKIAPKDVAADWAMERLPAQYQPVILEARQAYLGQEEDRLASRADQLEEFVHYVKGEITKVVGK'
return HasAAFeature(regex, seq)
def HasAmpR(seq):
# was: regex='MSIQHFRVALIPFFAAFCLPVFAHPETLVKVKDAEDQLGARVGYIELDLNSGKILESFRPEERFPMMSTFKVLLCGAVLSRIDAGQEQLGRRIHYSQNDLVEYSPVTEKHLTDGMTVRELCSAAITMSDNTAANLLLTTIGGPKELTAFLHNMGDHVTRLDRWEPELNEAIPNDERDTTMPVAMATTLRKLLTGELLTLASRQQLIDWMEADKVAGPLLRSALPAGWFIADKSGAGERGSRGIIAALGPDGKPSRIVVIYTTGSQATMDERNRQIAEIGASLIKHW'
# compared with: 'MSIQHFRVALIPFFAAFCLPVFAHPETLVKVKDAEDQLGARVGYIELDLNSGKILESFRPEERFPMMSTFKVLLCGAVLSRIDAGQEQLGRRIHYSQNDLVEYSPVTEKHLTDGMTVRELCSAAITMSDNTAANLLLTTIGGPKELTAFLHNMGDHVTRLDRWEPELNEAIPNDERDTTMPVAMATTLRKLLTGELLTLASRQQLIDWMEADKVAGPLLRSALPAGWFIADKSGAGERGSRGIIAALGPDGKPSRIVVIYTTGSQATMDERNRQIAEIGASLIKHW'
# result: aligned with clustal, got following output:
regex = 'MSTFKVLLCGAVLSR[VI]DAGQEQLGRRIHYSQNDLVEYSPVTEKHLTDGMTVRELCSAAITMSDNTAANLLLTTIGGPKELTAFLHNMGDHVTRLDRWEPELNEAIPNDERDTTMP[VA]AMATTLRKLLTGELLTLASRQQLIDWMEADKVAGPLLRSALPAGWFIADKSGAGERGSRGIIAALGPDGKPSRIVVIYTTGSQATMDERNRQIAEIGASLIKHW'
return HasAAFeature(regex, seq)
def HasKanR(seq):
regex='MSHIQRETSCSRPRLNSNMDADLYGYKWARDNVGQSGATIYRLYGKPDAPELFLKHGKGSVANDVTDEMVRLNWLTEFMPLPTIKHFIRTPDDAWLLTTAIPGKTAFQVLEEYPDSGENIVDALAVFLRRLHSIPVCNCPFNSDRVFRLAQAQSRMNNGLVDASDFDDERNGWPVEQVWKEMHKLLPFSPDSVVTHGDFSLDNLIFDEGKLIGCIDVGRVGIADRYQDLAILWNCLGEFSPSLQKRLFQKYGIDNPDMNKLQFHLMLDEFF'
return HasAAFeature(regex, seq)
def HasCmR(seq):
regex='MEKKITGYTTVDISQWHRKEHFEAFQSVAQCTYNQTVQLDITAFLKTVKKNKHKFYPAFIHILARLMNAHPEFRMAMKDGELVIWDSVHPCYTVFHEQTETFSSLWSEYHDDFRQFLHIYSQDVACYGENLAYFPKGFIENMFFVSANPWVSFTSFDLNVANMDNFFAPVFTMGKYYTQGDKVLMPLAIQVHHAVCDGFHVGRMLNELQQYCDEWQGGA'
return HasAAFeature(regex, seq)
def HasResistance(seq):
retval = []
if HasCmR(seq):
retval.append( 'CmR' )
if HasKanR(seq):
retval.append('KanR')
if HasAmpR(seq):
retval.append('AmpR')
if HasSpecR(seq):
retval.append('SpecR')
return retval
def HasReplicon(seq):
retval = []
if HasColE1(seq):
retval.append('ColE1')
if HasColE2(seq):
retval.append('ColE2')
if HasR6K(seq):
retval.append('R6K')
if HasP15A(seq):
retval.append('P15A')
if HaspUC(seq):
retval.append('pUC')
return retval
class Strain(object):
def __init__(self, name="", replication="", resistance="", plasmid=""):
#pass everything in as a comma separated list
self.name = name
delimit = re.compile(r'\s*,\s*')
self.replication = delimit.split(replication)
self.resistance = delimit.split(resistance) #should include the plasmid resistance!
if(plasmid != ""):
self.plasmids = [plasmid, ] #DNA object
else:
self.plasmids = []
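# The constructor's delimiter regex tolerates whitespace on either side of the commas:

```python
import re

delimit = re.compile(r'\s*,\s*')
markers = delimit.split('ColE1, R6K ,ColE2')
# markers == ['ColE1', 'R6K', 'ColE2']
```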
# Description: TransformPlateMiniprep() accepts a list of DNAs and a strain, and outputs the list of DNAs that survive the transformation.
# It replicates the full transform-plate-miniprep cycle, returning all the DNAs present in the resulting cell.
def TransformPlateMiniprep(DNAs, strain):
#strain is an object
transformed = strain.plasmids
selectionList = []
for dna in DNAs:
#check if circular, confers new resistance on strain, and doesn't compete with existing plasmid in strain
if dna.topology == 'circular':
newR = False
replicon_ok = False
no_existing_plasmid = False
err_msg = ""
success_msg = ""
resistances = HasResistance(dna.sequence)
replicons = HasReplicon(dna.sequence)
#just need one resistance not already in strain
for resistance in resistances:
if not(resistance in strain.resistance):
newR = True
if not resistance in selectionList:
selectionList.append(resistance)
success_msg += "\nTransformation of "+dna.name+" into "+strain.name+" successful -- use "+resistance+" antibiotic selection.\n"
for replicon in replicons:
#has the pir/repA necessary for ColE2/R6K?
if replicon in strain.replication:
replicon_ok = True
for replicon in replicons:
#check if existing plasmid would compete
existing_plasmids = []
for p in strain.plasmids:
existing_plasmids.append( HasReplicon(p.sequence) )
if not(replicon in existing_plasmids ):
no_existing_plasmid = True
if(newR & replicon_ok & no_existing_plasmid):
parent = dna.clone()
parent.setChildren((dna, ))
dna.addParent(parent)
parent.instructions = 'Transform '+dna.name+' into '+strain.name+', selecting for '+resistance+' resistance.'
parent.setTimeStep(24)
parent.addMaterials(['Buffers P1,P2,N3,PB,PE','Miniprep column',resistance[:-1]+' LB agar plates','LB '+resistance[:-1]+' media'])
transformed.append(dna)
print success_msg
else:
if not(newR):
raise Exception('*Transformation Error*: for transformation of '+dna.name+' into '+strain.name+', plasmid either doesn\'t have an antibiotic resistance or doesn\'t confer a new one on this strain')
if not(replicon_ok):
raise Exception('*Transformation Error*: for transformation of "'+dna.name+'" into "'+strain.name+'", plasmid replicon won\'t function in this strain')
if not(no_existing_plasmid):
raise Exception('*Transformation Error*: for transformation of "'+dna.name+'" into "'+strain.name+'", transformed plasmid replicon competes with existing plasmid in strain')
if len(transformed)<1:
raise Exception("*Transformation Error*: For transformation of "+dna.name+" into "+strain.name+", no DNAs successfully transformed. DNAs may be linear.")
return transformed | 62.040519 | 836 | 0.636509 | 12,316 | 119,428 | 6.15492 | 0.112049 | 0.01835 | 0.019234 | 0.023508 | 0.536608 | 0.501583 | 0.482178 | 0.469289 | 0.452113 | 0.430558 | 0 | 0.013458 | 0.257134 | 119,428 | 1,925 | 837 | 62.040519 | 0.84097 | 0.161503 | 0 | 0.422107 | 0 | 0.005171 | 0.151387 | 0.037381 | 0 | 0 | 0 | 0.001039 | 0 | 0 | null | null | 0.014221 | 0.001293 | null | null | 0.021332 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
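The three gating checks inside `TransformPlateMiniprep` (new resistance, supported replicon, no competing replicon) can be summarized as a small standalone helper. This is an illustrative sketch with hypothetical names, not part of the original module; it works on plain lists of marker names rather than DNA objects.

```python
def transformation_outcome(plasmid_resistances, plasmid_replicons,
                           strain_resistances, strain_replication,
                           existing_replicons):
    """Return (ok, reason) for the three checks TransformPlateMiniprep makes."""
    # 1. the plasmid must confer at least one resistance the strain lacks
    if not any(r not in strain_resistances for r in plasmid_resistances):
        return False, 'no new antibiotic resistance'
    # 2. at least one replicon must function in this strain (e.g. pir for R6K)
    if not any(r in strain_replication for r in plasmid_replicons):
        return False, 'replicon cannot function in this strain'
    # 3. at least one replicon must not clash with a resident plasmid
    if not any(r not in existing_replicons for r in plasmid_replicons):
        return False, 'replicon competes with an existing plasmid'
    return True, 'ok'
```

For example, a ColE1/AmpR plasmid transforms into a KanR strain that supports ColE1 and carries no plasmid, but fails if the strain already carries a ColE1 plasmid.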


# === ex039.py (vinisantos7/PythonExercicios, MIT) ===
print("@"*30)
print("Enlistment - Military Service")
print("@"*30)
from datetime import date
ano_nasc = int(input("Enter your year of birth: "))
ano_atual = date.today().year
idade = ano_atual - ano_nasc
print(f"Anyone born in {ano_nasc} turns {idade} in {ano_atual}")
if idade == 18:
    print("It is time to enlist for military service, IMMEDIATELY!")
elif idade < 18:
    saldo = 18 - idade
    print(f"There are still {saldo} year(s) until your enlistment!")
    ano = ano_atual + saldo
    print(f"Your enlistment will be in {ano}")
else:  # idade > 18
    saldo = idade - 18
    print(f"You are {saldo} year(s) past the time for your enlistment!")
    ano = ano_atual - saldo
    print(f"Your enlistment was in {ano}")
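The branching above boils down to comparing a year-based age against 18. Factored into a pure function (a hypothetical helper, not part of the exercise), the same rule becomes directly testable; note that, like the script, it ignores whether the birthday has already passed this year.

```python
def enlistment_status(ano_nasc, ano_atual):
    """Classify a birth year against the enlistment age of 18 (year granularity)."""
    idade = ano_atual - ano_nasc
    if idade == 18:
        return 'enlist now'
    if idade < 18:
        # enlistment year is always birth year + 18
        return f'{18 - idade} year(s) to go (enlist in {ano_nasc + 18})'
    return f'{idade - 18} year(s) late (enlistment was in {ano_nasc + 18})'
```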


# === test/python/spl/tk17/opt/.__splpy/packages/streamsx/topology/tester.py (Jaimie-Jin1/streamsx.topology, Apache-2.0) ===
# coding=utf-8
# Licensed Materials - Property of IBM
# Copyright IBM Corp. 2017
"""Testing support for streaming applications.

Allows testing of a streaming application by creating conditions
on streams that are expected to become valid during the processing.
`Tester` is designed to be used with Python's `unittest` module.

A complete application may be tested, or fragments of it: for example,
a sub-graph that takes input data and scores it using a model can be
tested in isolation.

Supports execution of the application on
:py:const:`~streamsx.topology.context.ContextTypes.STREAMING_ANALYTICS_SERVICE`,
:py:const:`~streamsx.topology.context.ContextTypes.DISTRIBUTED`
or :py:const:`~streamsx.topology.context.ContextTypes.STANDALONE`.

A :py:class:`Tester` instance is created and associated with the :py:class:`Topology` to be tested.
Conditions are then created against streams, such as a stream must receive 10 tuples using
:py:meth:`~Tester.tuple_count`.

Here is a simple example that tests that a filter correctly passes only tuples with values greater than 5::

    import unittest
    from streamsx.topology.topology import Topology
    from streamsx.topology.tester import Tester

    class TestSimpleFilter(unittest.TestCase):

        def setUp(self):
            # Sets self.test_ctxtype and self.test_config
            Tester.setup_streaming_analytics(self)

        def test_filter(self):
            # Declare the application to be tested
            topology = Topology()
            s = topology.source([5, 7, 2, 4, 9, 3, 8])
            s = s.filter(lambda x: x > 5)

            # Create tester and assign conditions
            tester = Tester(topology)
            tester.contents(s, [7, 9, 8])

            # Submit the application for test
            # If it fails an AssertionError will be raised.
            tester.test(self.test_ctxtype, self.test_config)

A stream may have any number of conditions and any number of streams may be tested.

A :py:meth:`~Tester.local_check` is supported where a method of the
unittest class is executed once the job becomes healthy. This performs
checks from the context of the Python unittest class, such as
checking external effects of the application or using the REST api to
monitor the application.

.. warning::
    Python 3.5 and Streaming Analytics service or IBM Streams 4.2 or later is required when using `Tester`.
"""

import streamsx.ec as ec
import streamsx.topology.context as stc
import os
import unittest
import logging
import collections
import threading
from streamsx.rest import StreamsConnection
from streamsx.rest import StreamingAnalyticsConnection
from streamsx.topology.context import ConfigParams
import time

import streamsx.topology.tester_runtime as sttrt

_logger = logging.getLogger('streamsx.topology.test')


class Tester(object):
    """Testing support for a Topology.

    Allows testing of a Topology by creating conditions against the contents
    of its streams.

    Conditions may be added to a topology at any time before submission.
    If a topology is submitted directly to a context then the graph
    is not modified. This allows testing code to be inserted while
    the topology is being built, but not acted upon unless the topology
    is submitted in test mode.

    If a topology is submitted through the test method then the topology
    may be modified to include operations to ensure the conditions are met.

    .. warning::
        For future compatibility applications under test should not include intended failures that cause
        a processing element to stop or restart. Thus, currently testing is against expected application behavior.

    Args:
        topology: Topology to be tested.
    """
    def __init__(self, topology):
        self.topology = topology
        topology.tester = self
        self._conditions = {}
        self.local_check = None
    @staticmethod
    def setup_standalone(test):
        """
        Set up a unittest.TestCase to run tests using IBM Streams standalone mode.

        Requires a local IBM Streams install defined by the STREAMS_INSTALL
        environment variable. If STREAMS_INSTALL is not set, then the
        test is skipped.

        Two attributes are set in the test case:

        * test_ctxtype - Context type the test will be run in.
        * test_config - Test configuration.

        Args:
            test(unittest.TestCase): Test case to be set up to run tests using Tester

        Returns: None
        """
        if not 'STREAMS_INSTALL' in os.environ:
            raise unittest.SkipTest("Skipped due to no local IBM Streams install")
        test.test_ctxtype = stc.ContextTypes.STANDALONE
        test.test_config = {}

    @staticmethod
    def setup_distributed(test):
        """
        Set up a unittest.TestCase to run tests using IBM Streams distributed mode.

        Requires a local IBM Streams install defined by the STREAMS_INSTALL
        environment variable. If STREAMS_INSTALL is not set, then the
        test is skipped.

        The Streams instance to use is defined by the environment variables:

        * STREAMS_ZKCONNECT - Zookeeper connection string
        * STREAMS_DOMAIN_ID - Domain identifier
        * STREAMS_INSTANCE_ID - Instance identifier

        Two attributes are set in the test case:

        * test_ctxtype - Context type the test will be run in.
        * test_config - Test configuration.

        Args:
            test(unittest.TestCase): Test case to be set up to run tests using Tester

        Returns: None
        """
        if not 'STREAMS_INSTALL' in os.environ:
            raise unittest.SkipTest("Skipped due to no local IBM Streams install")
        if not 'STREAMS_INSTANCE_ID' in os.environ:
            raise unittest.SkipTest("Skipped due to STREAMS_INSTANCE_ID environment variable not set")
        if not 'STREAMS_DOMAIN_ID' in os.environ:
            raise unittest.SkipTest("Skipped due to STREAMS_DOMAIN_ID environment variable not set")

        test.username = os.getenv("STREAMS_USERNAME", "streamsadmin")
        test.password = os.getenv("STREAMS_PASSWORD", "passw0rd")

        test.test_ctxtype = stc.ContextTypes.DISTRIBUTED
        test.test_config = {}

    @staticmethod
    def setup_streaming_analytics(test, service_name=None, force_remote_build=False):
        """
        Set up a unittest.TestCase to run tests using the Streaming Analytics service on the IBM Bluemix cloud platform.

        The service to use is defined by:

        * the VCAP_SERVICES environment variable containing `streaming_analytics` entries.
        * service_name, which defaults to the value of the STREAMING_ANALYTICS_SERVICE_NAME environment variable.

        If VCAP_SERVICES is not set or a service name is not defined, then the test is skipped.

        Two attributes are set in the test case:

        * test_ctxtype - Context type the test will be run in.
        * test_config - Test configuration.

        Args:
            test(unittest.TestCase): Test case to be set up to run tests using Tester
            service_name(str): Name of the Streaming Analytics service to use. Must exist as an
                entry in the VCAP services. Defaults to the value of the STREAMING_ANALYTICS_SERVICE_NAME environment variable.

        Returns: None
        """
        if not 'VCAP_SERVICES' in os.environ:
            raise unittest.SkipTest("Skipped due to VCAP_SERVICES environment variable not set")

        test.test_ctxtype = stc.ContextTypes.STREAMING_ANALYTICS_SERVICE
        if service_name is None:
            service_name = os.environ.get('STREAMING_ANALYTICS_SERVICE_NAME', None)
        if service_name is None:
            raise unittest.SkipTest("Skipped due to no service name supplied")
        test.test_config = {'topology.service.name': service_name}
        if force_remote_build:
            test.test_config['topology.forceRemoteBuild'] = True
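All three `setup_*` helpers above follow the same pattern: check environment variables and raise `unittest.SkipTest` when they are missing. That pattern can be factored into a reusable decorator; this is a hypothetical standalone sketch, not part of the streamsx package.

```python
import os
import unittest


def require_env(*names):
    """Skip a test unless every named environment variable is set
    (illustrative stand-in for the Tester.setup_* environment checks)."""
    missing = [n for n in names if n not in os.environ]
    return unittest.skipIf(
        missing,
        "Skipped due to unset environment variables: " + ", ".join(missing))
```

Applied as `@require_env('STREAMS_INSTALL', 'STREAMS_DOMAIN_ID')` on a test method, it marks the test as skipped instead of failing when the environment is incomplete.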
    def add_condition(self, stream, condition):
        """Add a condition to a stream.

        Conditions are normally added through :py:meth:`tuple_count`, :py:meth:`contents` or :py:meth:`tuple_check`.

        This allows additional conditions that are implementations of :py:class:`Condition`.

        Args:
            stream(Stream): Stream to be tested.
            condition(Condition): Arbitrary condition.

        Returns:
            Stream: stream
        """
        self._conditions[condition.name] = (stream, condition)
        return stream

    def tuple_count(self, stream, count, exact=True):
        """Test that a stream contains a number of tuples.

        If `exact` is `True`, then the condition becomes valid when `count`
        tuples are seen on `stream` during the test. Subsequently, if additional
        tuples are seen on `stream` then the condition fails and can never
        become valid.

        If `exact` is `False`, then the condition becomes valid once `count`
        tuples are seen on `stream` and remains valid regardless of
        any additional tuples.

        Args:
            stream(Stream): Stream to be tested.
            count(int): Number of tuples expected.
            exact(bool): `True` if the stream must contain exactly `count`
                tuples, `False` if the stream must contain at least `count` tuples.

        Returns:
            Stream: stream
        """
        _logger.debug("Adding tuple count (%d) condition to stream %s.", count, stream)
        if exact:
            name = "ExactCount" + str(len(self._conditions))
            cond = sttrt._TupleExactCount(count, name)
            cond._desc = "'{0}' stream expects tuple count equal to {1}.".format(stream.name, count)
        else:
            name = "AtLeastCount" + str(len(self._conditions))
            cond = sttrt._TupleAtLeastCount(count, name)
            cond._desc = "'{0}' stream expects tuple count of at least {1}.".format(stream.name, count)
        return self.add_condition(stream, cond)
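The exact-count semantics described above (valid at exactly `count` tuples, permanently failed once exceeded) can be illustrated with a tiny standalone stand-in. This is a toy model for explanation only, not the real `sttrt._TupleExactCount`, which runs inside the Streams job and reports its state through metrics.

```python
class ExactCountCondition:
    """Toy stand-in for an exact tuple-count condition (illustrative only)."""

    def __init__(self, expected):
        self.expected = expected
        self.seen = 0
        self.failed = False

    def __call__(self, tuple_):
        # Invoked once per tuple, like a for_each sink.
        self.seen += 1
        if self.seen > self.expected:
            self.failed = True  # too many tuples: can never become valid again

    @property
    def valid(self):
        return (not self.failed) and self.seen == self.expected
```

An at-least variant would simply drop the `failed` flag and test `self.seen >= self.expected`.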
    def contents(self, stream, expected, ordered=True):
        """Test that a stream contains the expected tuples.

        Args:
            stream(Stream): Stream to be tested.
            expected(list): Sequence of expected tuples.
            ordered(bool): True if the ordering of received tuples must match expected.

        Returns:
            Stream: stream
        """
        name = "StreamContents" + str(len(self._conditions))
        if ordered:
            cond = sttrt._StreamContents(expected, name)
            cond._desc = "'{0}' stream expects tuple ordered contents: {1}.".format(stream.name, expected)
        else:
            cond = sttrt._UnorderedStreamContents(expected, name)
            cond._desc = "'{0}' stream expects tuple unordered contents: {1}.".format(stream.name, expected)
        return self.add_condition(stream, cond)

    def tuple_check(self, stream, checker):
        """Check each tuple on a stream.

        For each tuple ``t`` on `stream` ``checker(t)`` is called.

        If the return evaluates to `False` then the condition fails.
        Once the condition fails it can never become valid.
        Otherwise the condition becomes or remains valid. The first
        tuple on the stream makes the condition valid if the checker
        callable evaluates to `True`.

        The condition can be combined with :py:meth:`tuple_count` with
        ``exact=False`` to test a stream map or filter with random input data.

        An example of combining `tuple_count` and `tuple_check` to test that a filter followed
        by a map is working correctly across a random set of values::

            def rands():
                r = random.Random()
                while True:
                    yield r.random()

            class TestFilterMap(unittest.TestCase):
                # Set up omitted

                def test_filter(self):
                    # Declare the application to be tested
                    topology = Topology()
                    r = topology.source(rands())
                    r = r.filter(lambda x: x > 0.7)
                    r = r.map(lambda x: x + 0.2)

                    # Create tester and assign conditions
                    tester = Tester(topology)
                    # Ensure at least 1000 tuples pass through the filter.
                    tester.tuple_count(r, 1000, exact=False)
                    tester.tuple_check(r, lambda x: x > 0.9)

                    # Submit the application for test
                    # If it fails an AssertionError will be raised.
                    tester.test(self.test_ctxtype, self.test_config)

        Args:
            stream(Stream): Stream to be tested.
            checker(callable): Callable that must evaluate to True for each tuple.
        """
        name = "TupleCheck" + str(len(self._conditions))
        cond = sttrt._TupleCheck(checker, name)
        return self.add_condition(stream, cond)
    def local_check(self, callable):
        """Perform a local check while the application is being tested.

        A call to `callable` is made after the application under test is submitted and becomes healthy.
        The check is in the context of the Python runtime executing the unittest case;
        typically the callable is a method of the test case.

        The application remains running until all the conditions are met
        and `callable` returns. If `callable` raises an error, typically
        through an assertion method from `unittest`, then the test will fail.

        Used for testing side effects of the application, typically with `STREAMING_ANALYTICS_SERVICE`
        or `DISTRIBUTED`. The callable may also use the REST api, for context types that support
        it, to dynamically monitor the running application.

        The callable can use the `submission_result` and `streams_connection` attributes of the :py:class:`Tester` instance
        to interact with the job or the running Streams instance.

        Simple example of checking the job is healthy::

            import unittest
            from streamsx.topology.topology import Topology
            from streamsx.topology.tester import Tester

            class TestLocalCheckExample(unittest.TestCase):
                def setUp(self):
                    Tester.setup_distributed(self)

                def test_job_is_healthy(self):
                    topology = Topology()
                    s = topology.source(['Hello', 'World'])

                    self.tester = Tester(topology)
                    self.tester.tuple_count(s, 2)

                    # Add the local check
                    self.tester.local_check = self.local_checks

                    # Run the test
                    self.tester.test(self.test_ctxtype, self.test_config)

                def local_checks(self):
                    job = self.tester.submission_result.job
                    self.assertEqual('healthy', job.health)

        .. warning::
            A local check must not cancel the job (application under test).

        Args:
            callable: Callable object.
        """
        self.local_check = callable
    def test(self, ctxtype, config=None, assert_on_fail=True, username=None, password=None):
        """Test the topology.

        Submits the topology for testing and verifies that the test conditions are met and the job remained healthy through its execution.

        The submitted application (job) is monitored for the test conditions and
        will be canceled when all the conditions are valid or at least one failed.
        In addition, if a local check was specified using :py:meth:`local_check` then
        that callable must complete before the job is canceled.

        The test passes if all conditions became valid and the local check callable (if present) completed without
        raising an error.

        The test fails if the job is unhealthy, any condition fails or the local check callable (if present) raised an exception.

        Args:
            ctxtype(str): Context type for submission.
            config: Configuration for submission.
            assert_on_fail(bool): True to raise an assertion if the test fails, False to return the passed status.
            username(str): username for distributed tests
            password(str): password for distributed tests

        Attributes:
            submission_result: Result of the application submission from :py:func:`~streamsx.topology.context.submit`.
            streams_connection(StreamsConnection): Connection object that can be used to interact with the REST API of
                the Streaming Analytics service or instance.

        Returns:
            bool: `True` if the test passed, `False` if the test failed (only returned when `assert_on_fail` is `False`).
        """
        # Add the conditions into the graph as sink operators
        _logger.debug("Adding conditions to topology %s.", self.topology.name)
        for ct in self._conditions.values():
            condition = ct[1]
            stream = ct[0]
            stream.for_each(condition, name=condition.name)

        if config is None:
            config = {}

        _logger.debug("Starting test topology %s context %s.", self.topology.name, ctxtype)

        if stc.ContextTypes.STANDALONE == ctxtype:
            passed = self._standalone_test(config)
        elif stc.ContextTypes.DISTRIBUTED == ctxtype:
            passed = self._distributed_test(config, username, password)
        elif stc.ContextTypes.STREAMING_ANALYTICS_SERVICE == ctxtype or stc.ContextTypes.ANALYTICS_SERVICE == ctxtype:
            passed = self._streaming_analytics_test(ctxtype, config)
        else:
            raise NotImplementedError("Tester context type not implemented:", ctxtype)

        if 'conditions' in self.result:
            for cn, cnr in self.result['conditions'].items():
                c = self._conditions[cn][1]
                cdesc = cn
                if hasattr(c, '_desc'):
                    cdesc = c._desc
                if 'Fail' == cnr:
                    _logger.error("Condition: %s : %s", cnr, cdesc)
                elif 'NotValid' == cnr:
                    _logger.warning("Condition: %s : %s", cnr, cdesc)
                elif 'Valid' == cnr:
                    _logger.info("Condition: %s : %s", cnr, cdesc)

        if assert_on_fail:
            assert passed, "Test failed for topology: " + self.topology.name
        if passed:
            _logger.info("Test topology %s passed for context:%s", self.topology.name, ctxtype)
        else:
            _logger.error("Test topology %s failed for context:%s", self.topology.name, ctxtype)

        return passed
    def _standalone_test(self, config):
        """ Test using STANDALONE.

        Success is solely indicated by the process completing and returning zero.
        """
        sr = stc.submit(stc.ContextTypes.STANDALONE, self.topology, config)
        self.submission_result = sr
        self.result = {'passed': sr['return_code'], 'submission_result': sr}
        return sr['return_code'] == 0

    def _distributed_test(self, config, username, password):
        self.streams_connection = config.get(ConfigParams.STREAMS_CONNECTION)
        if self.streams_connection is None:
            # Supply a default StreamsConnection object with SSL verification disabled, because the default
            # streams server is not shipped with a valid SSL certificate
            self.streams_connection = StreamsConnection(username, password)
            self.streams_connection.session.verify = False
            config[ConfigParams.STREAMS_CONNECTION] = self.streams_connection
        sjr = stc.submit(stc.ContextTypes.DISTRIBUTED, self.topology, config)
        self.submission_result = sjr
        if sjr['return_code'] != 0:
            _logger.error("Failed to submit job to distributed instance.")
            return False
        return self._distributed_wait_for_result()

    def _streaming_analytics_test(self, ctxtype, config):
        sjr = stc.submit(ctxtype, self.topology, config)
        self.submission_result = sjr
        self.streams_connection = config.get(ConfigParams.STREAMS_CONNECTION)
        if self.streams_connection is None:
            vcap_services = config.get(ConfigParams.VCAP_SERVICES)
            service_name = config.get(ConfigParams.SERVICE_NAME)
            self.streams_connection = StreamingAnalyticsConnection(vcap_services, service_name)
        if sjr['return_code'] != 0:
            _logger.error("Failed to submit job to Streaming Analytics instance")
            return False
        return self._distributed_wait_for_result()

    def _distributed_wait_for_result(self):
        cc = _ConditionChecker(self, self.streams_connection, self.submission_result)
        # Wait for the job to be healthy before calling the local check.
        if cc._wait_for_healthy():
            self._start_local_check()
            self.result = cc._complete()
            if self.local_check is not None:
                self._local_thread.join()
        else:
            self.result = cc._end(False, _ConditionChecker._UNHEALTHY)
        self.result['submission_result'] = self.submission_result
        cc._canceljob(self.result)
        if self.local_check_exception is not None:
            raise self.local_check_exception
        return self.result['passed']

    def _start_local_check(self):
        self.local_check_exception = None
        if self.local_check is None:
            return
        self._local_thread = threading.Thread(target=self._call_local_check)
        self._local_thread.start()

    def _call_local_check(self):
        try:
            self.local_check_value = self.local_check()
        except Exception as e:
            self.local_check_value = None
            self.local_check_exception = e


#######################################
# Internal functions
#######################################

def _result_to_dict(passed, t):
    result = {}
    result['passed'] = passed
    result['valid'] = t[0]
    result['fail'] = t[1]
    result['progress'] = t[2]
    result['conditions'] = t[3]
    return result


class _ConditionChecker(object):
    _UNHEALTHY = (False, False, False, None)

    def __init__(self, tester, sc, sjr):
        self.tester = tester
        self._sc = sc
        self._sjr = sjr
        self._instance_id = sjr['instanceId']
        self._job_id = sjr['jobId']
        self._sequences = {}
        for cn in tester._conditions:
            self._sequences[cn] = -1
        self.delay = 0.5
        self.timeout = 10.0
        self.waits = 0
        self.additional_checks = 2

        self.job = self._find_job()
    # Wait for the job to be healthy. Returns True
    # if the job became healthy, False if not.
    def _wait_for_healthy(self):
        while (self.waits * self.delay) < self.timeout:
            if self.__check_job_health():
                self.waits = 0
                return True
            time.sleep(self.delay)
            self.waits += 1
        return False

    def _complete(self):
        while (self.waits * self.delay) < self.timeout:
            check = self.__check_once()
            if check[1]:
                return self._end(False, check)
            if check[0]:
                if self.additional_checks == 0:
                    return self._end(True, check)
                self.additional_checks -= 1
                continue
            if check[2]:
                self.waits = 0
            else:
                self.waits += 1
            time.sleep(self.delay)
        return self._end(False, check)
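Both `_wait_for_healthy` and `_complete` follow the same delay/timeout polling pattern: check a predicate, sleep a fixed delay, and give up once the accumulated waits exceed the timeout. A generic standalone version of that pattern (a hypothetical helper, not part of this module) could look like this; the injectable `sleep` makes it testable without real delays.

```python
import time


def poll_until(predicate, timeout=10.0, delay=0.5,
               clock=time.monotonic, sleep=time.sleep):
    """Poll `predicate` until it returns truthy or `timeout` seconds elapse.

    Returns True if the predicate succeeded, False on timeout.
    `clock` and `sleep` are injectable for testing.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if predicate():
            return True
        sleep(delay)
    return False
```

Note that `_complete` above is slightly richer: progress on any condition resets the wait counter, so the timeout applies to inactivity rather than total runtime.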
    def _end(self, passed, check):
        result = _result_to_dict(passed, check)
        return result

    def _canceljob(self, result):
        if self.job is not None:
            self.job.cancel(force=not result['passed'])

    def __check_once(self):
        if not self.__check_job_health():
            return _ConditionChecker._UNHEALTHY
        cms = self._get_job_metrics()
        valid = True
        progress = True
        fail = False
        condition_states = {}
        for cn in self._sequences:
            condition_states[cn] = 'NotValid'
            seq_mn = sttrt.Condition._mn('seq', cn)
            # If the metrics are missing then the operator
            # is probably still starting up, cannot be valid.
            if not seq_mn in cms:
                valid = False
                continue
            seq_m = cms[seq_mn]
            if seq_m.value == self._sequences[cn]:
                progress = False
            else:
                self._sequences[cn] = seq_m.value

            fail_mn = sttrt.Condition._mn('fail', cn)
            if not fail_mn in cms:
                valid = False
                continue
            fail_m = cms[fail_mn]
            if fail_m.value != 0:
                fail = True
                condition_states[cn] = 'Fail'
                continue

            valid_mn = sttrt.Condition._mn('valid', cn)
            if not valid_mn in cms:
                valid = False
                continue
            valid_m = cms[valid_mn]
            if valid_m.value == 0:
                valid = False
            else:
                condition_states[cn] = 'Valid'

        return (valid, fail, progress, condition_states)

    def __check_job_health(self):
        self.job.refresh()
        return self.job.health == 'healthy'

    def _find_job(self):
        instance = self._sc.get_instance(id=self._instance_id)
        return instance.get_job(id=self._job_id)

    def _get_job_metrics(self):
        """Fetch all the condition metrics for a job.

        We refetch the metrics each time to ensure that we don't miss
        any being added, e.g. if an operator is slow to start.
        """
        cms = {}
        for op in self.job.get_operators():
            metrics = op.get_metrics(name=sttrt.Condition._METRIC_PREFIX + '*')
            for m in metrics:
                cms[m.name] = m
        return cms


# === Beheer/tests.py (RamonvdW/nhb-apps, BSD-3-Clause-Clear) ===
# -*- coding: utf-8 -*-

#  Copyright (c) 2020-2021 Ramon van der Winkel.
#  All rights reserved.
#  Licensed under BSD-3-Clause-Clear. See LICENSE file for details.

from django.conf import settings
from django.test import TestCase
from django.urls import reverse
from TestHelpers.e2ehelpers import E2EHelpers


# regenerate this list with the following command:
# for x in `./manage.py show_urls --settings=nhbapps.settings_dev | rev | cut -d'/' -f2- | rev | grep '/beheer/'`; do echo "'$x/',"; done | grep -vE ':object_id>/|/add/|/autocomplete/'

BEHEER_PAGINAS = (
    '/beheer/Account/account/',
    '/beheer/Account/accountemail/',
    '/beheer/BasisTypen/boogtype/',
    '/beheer/BasisTypen/indivwedstrijdklasse/',
    '/beheer/BasisTypen/kalenderwedstrijdklasse/',
    '/beheer/BasisTypen/leeftijdsklasse/',
    '/beheer/BasisTypen/teamtype/',
    '/beheer/BasisTypen/teamwedstrijdklasse/',
    '/beheer/Competitie/competitie/',
    '/beheer/Competitie/competitieklasse/',
    '/beheer/Competitie/competitiemutatie/',
    '/beheer/Competitie/deelcompetitie/',
    '/beheer/Competitie/deelcompetitieklasselimiet/',
    '/beheer/Competitie/deelcompetitieronde/',
    '/beheer/Competitie/kampioenschapschutterboog/',
    '/beheer/Competitie/regiocompetitierondeteam/',
    '/beheer/Competitie/regiocompetitieschutterboog/',
    '/beheer/Competitie/regiocompetitieteam/',
    '/beheer/Competitie/regiocompetitieteampoule/',
    '/beheer/Functie/functie/',
    '/beheer/Functie/verklaringhanterenpersoonsgegevens/',
    '/beheer/HistComp/histcompetitie/',
    '/beheer/HistComp/histcompetitieindividueel/',
    '/beheer/HistComp/histcompetitieteam/',
    '/beheer/Kalender/kalenderwedstrijd/',
    '/beheer/Kalender/kalenderwedstrijddeeluitslag/',
    '/beheer/Kalender/kalenderwedstrijdsessie/',
    '/beheer/Logboek/logboekregel/',
    '/beheer/Mailer/mailqueue/',
    '/beheer/NhbStructuur/nhbcluster/',
    '/beheer/NhbStructuur/nhbrayon/',
    '/beheer/NhbStructuur/nhbregio/',
    '/beheer/NhbStructuur/nhbvereniging/',
    '/beheer/NhbStructuur/speelsterkte/',
    '/beheer/Overig/sitefeedback/',
    '/beheer/Overig/sitetijdelijkeurl/',
    '/beheer/Records/besteindivrecords/',
    '/beheer/Records/indivrecord/',
    '/beheer/Score/score/',
    '/beheer/Score/scorehist/',
    '/beheer/Sporter/sporter/',
    '/beheer/Sporter/sporterboog/',
    '/beheer/Sporter/sportervoorkeuren/',
    '/beheer/Taken/taak/',
    '/beheer/Wedstrijden/competitiewedstrijd/',
    '/beheer/Wedstrijden/competitiewedstrijdenplan/',
    '/beheer/Wedstrijden/competitiewedstrijduitslag/',
    '/beheer/Wedstrijden/wedstrijdlocatie/',
    '/beheer/auth/group/',
    '/beheer/jsi18n/',
    '/beheer/login/',
    '/beheer/logout/',
    '/beheer/password_change/',
)


class TestBeheer(E2EHelpers, TestCase):

    """ unit tests for the Beheer (admin) application """

    def setUp(self):
        """ test case initialisation """
        self.account_admin = self.e2e_create_account_admin()

    def test_login(self):
        # verify that the admin login has been replaced by a redirect to our own login page
        url = reverse('admin:login')        # internal url
        self.assertEqual(url, '/beheer/login/')

        self.e2e_logout()
        with self.assert_max_queries(20):
            resp = self.client.get('/beheer/login/', follow=True)
        self.assertEqual(resp.redirect_chain[-1], ('/account/login/', 302))

        with self.assert_max_queries(20):
            resp = self.client.get('/beheer/login/?next=/records/', follow=True)
        self.assertEqual(resp.redirect_chain[-1], ('/account/login/?next=/records/', 302))

        self.e2e_assert_other_http_commands_not_supported('/beheer/login/')

    def test_index(self):
        # before 2FA verification has been done
        self.e2e_login(self.account_admin)

        # redirect to the switch-role page
        with self.assert_max_queries(20):
            resp = self.client.get('/beheer/', follow=True)
        self.assertEqual(resp.redirect_chain[-1], ('/functie/otp-controle/?next=/beheer/', 302))

        self.e2e_assert_other_http_commands_not_supported('/beheer/')

        # after 2FA verification
        self.e2e_login_and_pass_otp(self.account_admin)
        with self.assert_max_queries(20):
            resp = self.client.get('/beheer/', follow=True)
        self.assertTrue(len(resp.redirect_chain) == 0)
        self.assertEqual(resp.status_code, 200)     # 200 = OK
        self.assertContains(resp, '<title>Websitebeheer | Django-websitebeheer</title>')

        # unnecessarily going through the admin login with a post-authentication page
        with self.assert_max_queries(20):
            resp = self.client.get('/beheer/login/?next=/records/', follow=True)
        self.assertEqual(resp.redirect_chain[-1], ('/records/', 302))

        # unnecessarily going through the admin login without a post-authentication page
        with self.assert_max_queries(20):
            resp = self.client.get('/beheer/login/', follow=True)
        self.assertEqual(resp.redirect_chain[-1], ('/plein/', 302))

    def test_logout(self):
        # verify that the admin logout has been replaced by a redirect to our own logout page
        url = reverse('admin:logout')       # internal url
        self.assertEqual(url, '/beheer/logout/')

        self.e2e_login_and_pass_otp(self.account_admin)
        with self.assert_max_queries(20):
            resp = self.client.get('/beheer/logout/', follow=True)
        self.assertEqual(resp.redirect_chain[-1], ('/account/logout/', 302))

    def test_pw_change(self):
        url = reverse('admin:password_change')
        self.assertEqual(url, '/beheer/password_change/')

        self.e2e_login_and_pass_otp(self.account_admin)
        with self.assert_max_queries(20):
            resp = self.client.get(url, follow=True)
        self.assertEqual(resp.status_code, 200)     # 200 = OK
        self.assertContains(resp, 'Nieuw wachtwoord')
        self.assertEqual(resp.redirect_chain[-1], ('/account/nieuw-wachtwoord/', 302))

    def test_queries(self):
        # verify that all admin pages render properly
        settings.DEBUG = True
        self.e2e_login_and_pass_otp(self.account_admin)

        for url in BEHEER_PAGINAS:
            with self.assert_max_queries(20):
                self.client.get(url)

            with self.assert_max_queries(20):
                self.client.get(url + 'add/')

            with self.assert_max_queries(20):
                self.client.get(url + '1/change/')
        # for

        settings.DEBUG = False

# end of file
729bee0345192e65862237553dfdacf7015f8ae1 | 5,578 | py | Python | isi_sdk_8_0/isi_sdk_8_0/models/auth_access_access_item_file.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 24 | 2018-06-22T14:13:23.000Z | 2022-03-23T01:21:26.000Z | isi_sdk_8_0/isi_sdk_8_0/models/auth_access_access_item_file.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 46 | 2018-04-30T13:28:22.000Z | 2022-03-21T21:11:07.000Z | isi_sdk_8_0/isi_sdk_8_0/models/auth_access_access_item_file.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 29 | 2018-06-19T00:14:04.000Z | 2022-02-08T17:51:19.000Z | # coding: utf-8
"""
Isilon SDK
Isilon SDK - Language bindings for the OneFS API # noqa: E501
OpenAPI spec version: 3
Contact: sdk@isilon.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class AuthAccessAccessItemFile(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'group': 'str',
'mode': 'str',
'owner': 'str',
'relevant_mode': 'str'
}
attribute_map = {
'group': 'group',
'mode': 'mode',
'owner': 'owner',
'relevant_mode': 'relevant_mode'
}
def __init__(self, group=None, mode=None, owner=None, relevant_mode=None): # noqa: E501
"""AuthAccessAccessItemFile - a model defined in Swagger""" # noqa: E501
self._group = None
self._mode = None
self._owner = None
self._relevant_mode = None
self.discriminator = None
if group is not None:
self.group = group
if mode is not None:
self.mode = mode
if owner is not None:
self.owner = owner
if relevant_mode is not None:
self.relevant_mode = relevant_mode
@property
def group(self):
"""Gets the group of this AuthAccessAccessItemFile. # noqa: E501
Specifies the group name or ID for the file. # noqa: E501
:return: The group of this AuthAccessAccessItemFile. # noqa: E501
:rtype: str
"""
return self._group
@group.setter
def group(self, group):
"""Sets the group of this AuthAccessAccessItemFile.
Specifies the group name or ID for the file. # noqa: E501
:param group: The group of this AuthAccessAccessItemFile. # noqa: E501
:type: str
"""
self._group = group
@property
def mode(self):
"""Gets the mode of this AuthAccessAccessItemFile. # noqa: E501
Specifies the mode bits on the file. # noqa: E501
:return: The mode of this AuthAccessAccessItemFile. # noqa: E501
:rtype: str
"""
return self._mode
@mode.setter
def mode(self, mode):
"""Sets the mode of this AuthAccessAccessItemFile.
Specifies the mode bits on the file. # noqa: E501
:param mode: The mode of this AuthAccessAccessItemFile. # noqa: E501
:type: str
"""
self._mode = mode
@property
def owner(self):
"""Gets the owner of this AuthAccessAccessItemFile. # noqa: E501
Specifies the name or ID of the file owner. # noqa: E501
:return: The owner of this AuthAccessAccessItemFile. # noqa: E501
:rtype: str
"""
return self._owner
@owner.setter
def owner(self, owner):
"""Sets the owner of this AuthAccessAccessItemFile.
Specifies the name or ID of the file owner. # noqa: E501
:param owner: The owner of this AuthAccessAccessItemFile. # noqa: E501
:type: str
"""
self._owner = owner
@property
def relevant_mode(self):
"""Gets the relevant_mode of this AuthAccessAccessItemFile. # noqa: E501
Specifies the mode bits that are related to the user. # noqa: E501
:return: The relevant_mode of this AuthAccessAccessItemFile. # noqa: E501
:rtype: str
"""
return self._relevant_mode
@relevant_mode.setter
def relevant_mode(self, relevant_mode):
"""Sets the relevant_mode of this AuthAccessAccessItemFile.
Specifies the mode bits that are related to the user. # noqa: E501
:param relevant_mode: The relevant_mode of this AuthAccessAccessItemFile. # noqa: E501
:type: str
"""
self._relevant_mode = relevant_mode
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, AuthAccessAccessItemFile):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| 28.030151 | 95 | 0.580495 | 645 | 5,578 | 4.908527 | 0.189147 | 0.058118 | 0.151611 | 0.128869 | 0.472521 | 0.401453 | 0.394188 | 0.352179 | 0.266582 | 0.18067 | 0 | 0.020599 | 0.329867 | 5,578 | 198 | 96 | 28.171717 | 0.826378 | 0.39602 | 0 | 0.071429 | 1 | 0 | 0.041835 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.035714 | 0 | 0.357143 | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
729e65cca9b0a6039288577ac59eedc5273e9d20 | 5,040 | py | Python | flow/visualize/plot_custom_callables.py | AHammoudeh/Flow_AH | 16c5641be3e9e85511756f75efd002478edaee9b | [
"MIT"
] | null | null | null | flow/visualize/plot_custom_callables.py | AHammoudeh/Flow_AH | 16c5641be3e9e85511756f75efd002478edaee9b | [
"MIT"
] | null | null | null | flow/visualize/plot_custom_callables.py | AHammoudeh/Flow_AH | 16c5641be3e9e85511756f75efd002478edaee9b | [
"MIT"
] | null | null | null | """Generate charts from .npy files containing custom callables through replay."""
import argparse
from datetime import datetime
import errno
import numpy as np
import matplotlib.pyplot as plt
import os
import pytz
import sys


def make_bar_plot(vals, title):
    print(len(vals))
    fig = plt.figure()
    plt.hist(vals, 10, facecolor='blue', alpha=0.5)
    plt.title(title)
    plt.xlim(1000, 3000)
    return fig


def plot_trip_distribution(all_trip_energy_distribution):
    non_av_vals = []
    figures = []
    figure_names = []
    for key in all_trip_energy_distribution:
        if key != 'av':
            non_av_vals.extend(all_trip_energy_distribution[key])
            figures.append(make_bar_plot(all_trip_energy_distribution[key], key))
            figure_names.append(key)

    figure_names.append('All Non-AV')
    figures.append(make_bar_plot(non_av_vals, 'All Non-AV'))
    figure_names.append('All')
    figures.append(make_bar_plot(non_av_vals + all_trip_energy_distribution['av'], 'All'))

    return figure_names, figures


def parse_flags(args):
    """Parse training options user can specify in command line.

    Returns
    -------
    argparse.Namespace
        the output parser object
    """
    parser = argparse.ArgumentParser(
        formatter_class=argparse.RawDescriptionHelpFormatter,
        description="Parse argument used when running a Flow simulation.",
        epilog="python train.py EXP_CONFIG")
    parser.add_argument("target_folder", type=str,
                        help='Folder containing results')
    parser.add_argument("--output_folder", type=str, required=False, default=None,
                        help='Folder to save charts to.')
    parser.add_argument("--show_images", action='store_true',
                        help='Whether to display charts.')
    parser.add_argument("--heatmap", type=str, required=False,
                        help='Make a heatmap of the supplied variable.')
    return parser.parse_args(args)


if __name__ == "__main__":
    flags = parse_flags(sys.argv[1:])

    date = datetime.now(tz=pytz.utc)
    date = date.astimezone(pytz.timezone('US/Pacific')).strftime("%m-%d-%Y")

    if flags.output_folder:
        if not os.path.exists(flags.output_folder):
            try:
                os.makedirs(flags.output_folder)
            except OSError as exc:
                if exc.errno != errno.EEXIST:
                    raise

    info_dicts = []
    custom_callable_names = set()
    exp_names = []
    for (dirpath, dir_names, file_names) in os.walk(flags.target_folder):
        for file_name in file_names:
            if file_name[-8:] == "info.npy":
                exp_name = os.path.basename(dirpath)
                info_dict = np.load(os.path.join(dirpath, file_name), allow_pickle=True).item()

                info_dicts.append(info_dict)
                print(info_dict.keys())
                exp_names.append(exp_name)
                custom_callable_names.update(info_dict.keys())

    idxs = np.argsort(exp_names)
    exp_names = [exp_names[i] for i in idxs]
    info_dicts = [info_dicts[i] for i in idxs]

    if flags.heatmap is not None:
        heatmap = np.zeros((4, 6))
        pr_spacing = np.around(np.linspace(0, 0.3, 4), decimals=2)
        apr_spacing = np.around(np.linspace(0, 0.5, 6), decimals=2)
        for exp_name, info_dict in zip(exp_names, info_dicts):
            apr_bucket = int(np.around(float(exp_name.split('_')[1][3:]) / 0.1))
            pr_bucket = int(np.around(float(exp_name.split('_')[0][2:]) / 0.1))

            if flags.heatmap not in info_dict:
                print(exp_name)
                continue
            else:
                val = np.mean(info_dict[flags.heatmap])
                print(exp_name, pr_bucket, pr_spacing[pr_bucket], apr_bucket, apr_spacing[apr_bucket], val)
                heatmap[pr_bucket, apr_bucket] = val

        fig = plt.figure()
        plt.imshow(heatmap, interpolation='nearest', cmap='seismic', aspect='equal', vmin=1500, vmax=3000)
        plt.title(flags.heatmap)
        plt.yticks(ticks=np.arange(len(pr_spacing)), labels=pr_spacing)
        plt.ylabel("AV Penetration")
        plt.xticks(ticks=np.arange(len(apr_spacing)), labels=apr_spacing)
        plt.xlabel("Aggressive Driver Penetration")
        plt.colorbar()
        plt.show()
        plt.close(fig)
    else:
        for name in custom_callable_names:
            y_vals = [np.mean(info_dict[name]) for info_dict in info_dicts]
            y_stds = [np.std(info_dict[name]) for info_dict in info_dicts]
            x_pos = np.arange(len(exp_names))

            plt.bar(x_pos, y_vals, align='center', alpha=0.5)
            plt.xticks(x_pos, [exp_name for exp_name in exp_names], rotation=60)
            plt.xlabel('Experiment')

            plt.title('I210 Replay Result: {}'.format(name))
            plt.tight_layout()
            if flags.output_folder:
                plt.savefig(os.path.join(flags.output_folder, '{}-plot.png'.format(name)))

            plt.show()
| 36.788321 | 107 | 0.624405 | 672 | 5,040 | 4.477679 | 0.331845 | 0.029246 | 0.021602 | 0.041542 | 0.12097 | 0.087072 | 0.087072 | 0.069126 | 0.046527 | 0 | 0 | 0.013081 | 0.256746 | 5,040 | 136 | 108 | 37.058824 | 0.790176 | 0.040079 | 0 | 0.075472 | 1 | 0 | 0.090304 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028302 | false | 0 | 0.075472 | 0 | 0.132075 | 0.037736 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72a50676744b1429bd199408d3b9fb6111481c1b | 322 | py | Python | desktop/core/ext-py/PyYAML-3.12/tests/lib3/test_all.py | kokosing/hue | 2307f5379a35aae9be871e836432e6f45138b3d9 | [
"Apache-2.0"
] | 5,079 | 2015-01-01T03:39:46.000Z | 2022-03-31T07:38:22.000Z | desktop/core/ext-py/PyYAML-3.12/tests/lib3/test_all.py | zks888/hue | 93a8c370713e70b216c428caa2f75185ef809deb | [
"Apache-2.0"
] | 1,623 | 2015-01-01T08:06:24.000Z | 2022-03-30T19:48:52.000Z | desktop/core/ext-py/PyYAML-3.12/tests/lib3/test_all.py | zks888/hue | 93a8c370713e70b216c428caa2f75185ef809deb | [
"Apache-2.0"
] | 2,033 | 2015-01-04T07:18:02.000Z | 2022-03-28T19:55:47.000Z |
import sys, yaml, test_appliance
def main(args=None):
    collections = []
    import test_yaml
    collections.append(test_yaml)
    if yaml.__with_libyaml__:
        import test_yaml_ext
        collections.append(test_yaml_ext)
    return test_appliance.run(collections, args)


if __name__ == '__main__':
    main()
| 20.125 | 48 | 0.698758 | 40 | 322 | 5.1 | 0.45 | 0.156863 | 0.137255 | 0.245098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 322 | 15 | 49 | 21.466667 | 0.806324 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72b29fc480b17250901de0de9a0fdd2643e31525 | 369 | py | Python | auth0_client/menu/datafiles/scripts/get_active_user_count.py | rubelw/auth0_client | 51e68239babcf7c40e40491d1aaa3f8547a67f63 | [
"MIT"
] | 2 | 2020-10-08T21:42:56.000Z | 2021-03-21T08:17:52.000Z | auth0_client/menu/datafiles/scripts/get_active_user_count.py | rubelw/auth0_client | 51e68239babcf7c40e40491d1aaa3f8547a67f63 | [
"MIT"
] | null | null | null | auth0_client/menu/datafiles/scripts/get_active_user_count.py | rubelw/auth0_client | 51e68239babcf7c40e40491d1aaa3f8547a67f63 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import json
import sys  # explicit import; sys.exit() is used below

from auth0_client.Auth0Client import Auth0Client
from auth0_client.menu.menu_helper.common import *
from auth0_client.menu.menu_helper.pretty import *

try:
    users = {}
    client = Auth0Client(auth_config())
    results = client.active_users()
    print(pretty(results))
except (KeyboardInterrupt, SystemExit):
    sys.exit()
| 19.421053 | 50 | 0.742547 | 46 | 369 | 5.804348 | 0.565217 | 0.101124 | 0.168539 | 0.142322 | 0.217228 | 0.217228 | 0 | 0 | 0 | 0 | 0 | 0.019231 | 0.154472 | 369 | 18 | 51 | 20.5 | 0.836538 | 0.054201 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.363636 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
72b41799b9ea2ac1e3eee3e55900103983d55cfc | 852 | py | Python | encryptfinance/transactions/admin.py | dark-codr/encryptfinance | 573a8179c3a7c4b0f68d71bc9d461246f6fdba29 | [
"Apache-2.0"
] | null | null | null | encryptfinance/transactions/admin.py | dark-codr/encryptfinance | 573a8179c3a7c4b0f68d71bc9d461246f6fdba29 | [
"Apache-2.0"
] | 1 | 2022-03-31T03:16:16.000Z | 2022-03-31T03:16:16.000Z | encryptfinance/transactions/admin.py | dark-codr/encryptfinance | 573a8179c3a7c4b0f68d71bc9d461246f6fdba29 | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
from django.contrib import admin
from .models import Deposit, Withdrawal, Support
from .forms import DepositForm, WithdrawalForm
# Register your models here.
@admin.register(Deposit)
class DepositAdmin(admin.ModelAdmin):
# form = DepositForm
list_display = ["__str__", "amount", "approval", "deposited", "created"]
list_filter = ["approval", "created"]
list_editable = ["approval", "amount", "deposited"]
class Meta:
model = Deposit
@admin.register(Withdrawal)
class WithdrawalAdmin(admin.ModelAdmin):
form = WithdrawalForm
list_display = ["__str__", "amount", "wallet_id", "approval", "withdrawn", "created"]
list_filter = ["approval", "created"]
list_editable = ["approval", "withdrawn"]
class Meta:
model = Withdrawal
admin.site.register(Support)
| 28.4 | 89 | 0.706573 | 88 | 852 | 6.613636 | 0.420455 | 0.075601 | 0.065292 | 0.068729 | 0.178694 | 0.178694 | 0.178694 | 0.178694 | 0 | 0 | 0 | 0 | 0.16784 | 852 | 29 | 90 | 29.37931 | 0.820874 | 0.052817 | 0 | 0.2 | 0 | 0 | 0.190299 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
72bbe0388e18a5be8321d3239864b86aeb1e09f0 | 16,783 | py | Python | Experiment/ltpFR3_MTurk/ListGen/ltpFR3_listgen.py | jpazdera/PazdKaha22 | 9b3157cbcc68aafc829dbd38f3271f884caf541d | [
"CC-BY-4.0"
] | null | null | null | Experiment/ltpFR3_MTurk/ListGen/ltpFR3_listgen.py | jpazdera/PazdKaha22 | 9b3157cbcc68aafc829dbd38f3271f884caf541d | [
"CC-BY-4.0"
] | null | null | null | Experiment/ltpFR3_MTurk/ListGen/ltpFR3_listgen.py | jpazdera/PazdKaha22 | 9b3157cbcc68aafc829dbd38f3271f884caf541d | [
"CC-BY-4.0"
] | null | null | null | #!/usr/bin/env python2
import random
import itertools
import numpy
import sys
import json
import copy


def make_bins_ltpFR3(semArray):
    """
    Creates four equal-width bins of WAS scores, identical to those used in ltpFR2. Then combine the middle two to give
    three bins: low similarity, medium similarity, and high similarity.

    A coordinate in semRows[i][j] and semCols[i][j] is the index of the jth word pair in semArray that falls in the ith
    similarity bin.
    """
    semArray_nondiag = semArray[numpy.where(semArray != 1)]

    # Find lowest and highest similarity
    min_sim = semArray_nondiag.min()
    max_sim = semArray_nondiag.max()

    # Split up the semantic space into four equal segments
    semBins = list(numpy.linspace(min_sim, max_sim, 4))

    # Combine the two middle bins by removing the bin boundary between them
    # semBins = semBins[:2] + semBins[3:]

    # Create bounds for the bins
    semBins = zip(*[semBins[i:] + semBins[-1:i] for i in range(2)])

    # For word pairs within the bounds of each bin, append the indices to semRows and semCols
    semRows = []
    semCols = []
    for bin in semBins:
        (i, j) = ((semArray > bin[0]) & (semArray < bin[1])).nonzero()
        semRows.append(i)
        semCols.append(j)

    return semRows, semCols
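# A minimal, self-contained sketch (not part of the original file) of the equal-width
# binning idea used by make_bins_ltpFR3, run on a hypothetical 4x4 similarity matrix.
# The bin edges come from numpy.linspace between the off-diagonal min and max; the
# simpler zip(edges[:-1], edges[1:]) pairing stands in for the original bounds trick.

```python
import numpy

# Toy symmetric similarity matrix (diagonal = 1 marks self-pairs; values are made up).
sim = numpy.array([[1.0, 0.1, 0.5, 0.9],
                   [0.1, 1.0, 0.4, 0.2],
                   [0.5, 0.4, 1.0, 0.7],
                   [0.9, 0.2, 0.7, 1.0]])

# Ignore the diagonal when finding the similarity range.
off_diag = sim[numpy.where(sim != 1)]
edges = numpy.linspace(off_diag.min(), off_diag.max(), 4)  # 4 edges -> 3 equal-width bins

rows, cols = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    # Strict inequalities, matching the original (semArray > bin[0]) & (semArray < bin[1]).
    i, j = ((sim > lo) & (sim < hi)).nonzero()
    rows.append(i)
    cols.append(j)

print([len(r) for r in rows])  # → [2, 4, 2] (each word pair appears twice, symmetric matrix)
```

# Note that the boundary values themselves (the min and max similarities) fall into
# no bin under strict inequalities, same as in the original code.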
def randomize_conditions_ltpFR3(config):
    """
    Randomize the conditions for all sessions.

    :param config: The imported configuration file, containing all parameters for the experiment
    :return: A list of lists, where sublist n contains the ordering of list conditions for the nth session. cond[x][y][0]
    defines the length of session x, list y; cond[x][y][1] defines the presentation rate of session x, list y;
    cond[x][y][2] defines whether session x, list y uses visual or auditory presentation; cond[x][y][3] defines the
    duration of the pre-list distractor task for session x, list y.
    """
    options = [c for c in itertools.product(config.listLength, config.presRate, config.modality, config.distDur)]

    cond = []
    for i in range(config.nSessions):
        sess = []
        for j in range(config.reps):
            random.shuffle(options)
            sess += options[:]
        cond.append(sess)

    return cond
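# A standalone sketch (not part of the original file) of the condition-crossing pattern
# used by randomize_conditions_ltpFR3: itertools.product fully crosses the condition
# factors, and each session gets an independently shuffled copy of every combination.
# The factor values below are hypothetical; the real ones live in config.py.

```python
import itertools
import random

# Hypothetical condition factors (stand-ins for config.listLength, etc.).
list_lengths = [12, 24]
pres_rates = [800, 1600]      # ms per word
modalities = ['a', 'v']       # auditory / visual presentation
dist_durs = [12000, 24000]    # ms of pre-list distractor

options = list(itertools.product(list_lengths, pres_rates, modalities, dist_durs))

n_sessions, reps = 2, 1       # stand-ins for config.nSessions / config.reps
cond = []
for _ in range(n_sessions):
    sess = []
    for _ in range(reps):
        random.shuffle(options)   # in-place shuffle; copy the result with [:]
        sess += options[:]
    cond.append(sess)

# Each session now contains every condition combination exactly `reps` times,
# in an independent random order.
```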
def choose_pairs_ltpFR3(wp_tot, cond, config, semRows, semCols):
    """
    Selects word pairs to use in each list of each session.

    :param wp_tot: A list containing all the words of the word pool. The order of the words is expected to correspond to
    the indices used by semRows and semCols.
    :param cond: A list of lists, where sublist n contains the ordering of list conditions for the nth session.
    :param config: The imported configuration file, containing all parameters for the experiment.
    :param semRows: See make_bins_ltpFR3()
    :param semCols: See make_bins_ltpFR3()
    :return: pairs - pairs[x][y][z] is the zth word pair in session x, list y
    :return: pair_dicts - a list of dictionaries, where each dictionary contains all word pairs from a given session
    :return: practice_lists - A list containing two practice lists, each with 18 words
    """
    # pairs[x][y][z] will be the zth pair of words in the yth list on session x
    pairs = []
    # points to the other word in the pair for a given session
    pair_dicts = []
    # Deep copy the full word pool into full_wp_allowed, so it can be shuffled for each session without altering wp_tot
    full_wp = wp_tot[:]

    # Make word pairs for each session
    session_num = 0
    while session_num < config.nSessions:
        # print 'Making session', session_num, ':',
        # sys.stdout.flush()

        # Shuffle the order of the word pool; I believe this is technically only necessary for the first session, in
        # order to randomize which words are selected for the practice lists. All other lists have their items randomly
        # chosen anyway
        '''
        IMPORTANT NOTE!!!:
        Lists containing more than 2080 elements should not be randomized with shuffle, as explained here:
        http://stackoverflow.com/questions/3062741/maximal-length-of-list-to-shuffle-with-python-random-shuffle
        The full word pool contains 1638 words, so this is only a concern if the word pool is ever expanded.
        '''
        random.shuffle(full_wp)

        # The first session has two 18-word practice lists
        if session_num == 0:
            practice_lists = [full_wp[:18], full_wp[18:36]]
            sess_wp_allowed = full_wp[36:]
        else:
            sess_wp_allowed = full_wp[:]

        # sess_pairs[x][y] will be the yth pair in the xth list on the current session
        sess_pairs = []

        # Track number of attempts to create the lists for the current session
        sess_tries = 0

        # Track whether the session completed successfully
        goodSess = True

        # Make word pairs for each list in the current session
        list_num = 0
        while list_num < len(cond[session_num]):
            # print list_num,
            # sys.stdout.flush()

            # list_pairs[x] will be the xth pair in the current list on the current session
            list_pairs = []

            # Track number of attempts to create the current list
            list_tries = 0

            # Track whether the list completed successfully
            goodList = True

            # Retrieve the list length condition for the current list by looking in cond
            listLength = cond[session_num][list_num][0]

            # Length 12 lists have 2 pairs per bin, length 24 lists have 4 pairs per bin
            pairs_per_bin = 2 if listLength == 12 else 4

            # Select two or four word pairs from each bin (based on list length)
            for sem_i in range(len(semRows)):
                # The pair for each semantic bin gets placed twice
                pair_i = 0
                while pair_i < pairs_per_bin:
                    # Get the indices (within the full word pool) of the words chosen for the current session
                    available_indices = [wp_tot.index(word) for word in sess_wp_allowed]

                    # Randomly choose indices/words from those in the current session until one is found that has one
                    # or more pairs in the current bin
                    index_word1 = random.choice(available_indices)
                    while index_word1 not in semRows[sem_i]:
                        index_word1 = random.choice(available_indices)

                    # Get the indices of all words whose pairing with the chosen word falls into the correct bin
                    good_second_indices = semCols[sem_i][semRows[sem_i] == index_word1]

                    # Eliminate the words that are not available in the session
                    good_second_indices = [i for i in good_second_indices if wp_tot[i] in sess_wp_allowed]

                    # Ensure that a word cannot be accidentally paired with itself
                    if index_word1 in good_second_indices:
                        del good_second_indices[good_second_indices.index(index_word1)]

                    # If there are no good words to choose from, restart
                    if len(good_second_indices) == 0:
                        list_tries += 1
                        if list_tries > 10:
                            goodList = False
                            break
                        else:
                            continue

                    # Choose the second word randomly
                    index_word2 = random.choice(good_second_indices)

                    # Add the pairs to list_pairs, delete them from the pool of allowed words
                    list_pairs.append([wp_tot[index_word1], wp_tot[index_word2]])
                    del sess_wp_allowed[sess_wp_allowed.index(wp_tot[index_word1])]
                    del sess_wp_allowed[sess_wp_allowed.index(wp_tot[index_word2])]

                    pair_i += 1

                # If the list is bad, add the words back to the pool of allowed words
                if not goodList:
                    sess_wp_allowed.extend([x[0] for x in list_pairs] + [x[1] for x in list_pairs])
                    break

            # If the list is good, add the list_pairs to sess_pairs,
            if goodList:
                sess_pairs.append(list_pairs)
                list_num += 1
            else:
                # Otherwise, try the session again (up to 50 times), then restart
                list_pairs = []
                sess_tries += 1
                if sess_tries > 50:
                    goodSess = False
                    break

        # If the whole session went successfully
        if goodSess:
            # Get the pairs from the lists, add them backwards and forwards to sess_pair_dict
            sess_pair_dict = dict(itertools.chain(*sess_pairs))
            sess_pair_dict.update(dict(zip(sess_pair_dict.values(), sess_pair_dict.keys())))
            pair_dicts.append(sess_pair_dict)
            pairs.append(sess_pairs)
            session_num += 1
        else:  # If the session did not go well, try again.
            sess_pairs = []

    print ''
    return pairs, pair_dicts, practice_lists
def place_pairs_ltpFR3(pairs, cond):
    """
    :param pairs:
    :param cond:
    :param config:
    :return:
    """
    # Load all valid list compositions for 12-item lists (small lists are too restrictive to use trial and error)
    with open('valid12.json', 'r') as f:
        valid12 = json.load(f)['3bin-valid12']

    # Loop through sessions
    subj_wo = []
    for (n, sess_pairs) in enumerate(pairs):
        sess_wo = []
        # print '\nPlacing session', n, ':',
        # sys.stdout.flush()

        # Loop through lists within each session
        for (m, list_pairs) in enumerate(sess_pairs):
            # print m,
            # sys.stdout.flush()

            # Create pairs of word pairs from the same bin -- one pair will have adjacent presentation, one distant
            grouped_pairs = [list(group) for group in
                             zip([list_pairs[i] for i in range(len(list_pairs)) if i % 2 == 0],
                                 [list_pairs[i] for i in range(len(list_pairs)) if i % 2 == 1])]

            # Retrieve list length for the current list
            list_length = cond[n][m][0]

            # For 12-item lists, select a random solution template and assign word pairs to the variables in the
            # template, such that one pair from each bin has adjacent presentation and one pair from each bin has
            # distant presentation
            if list_length == 12:
                # Randomize the ordering of the grouped pairs, as well as the orderings within each group and each pair
                adjacents = ['a', 'b', 'c']
                distants = ['d', 'e', 'f']
                random.shuffle(adjacents)
                random.shuffle(distants)
                key = {}
                for group in grouped_pairs:
                    random.shuffle(group)
                    random.shuffle(group[0])
                    random.shuffle(group[1])
                    key[adjacents.pop(0)] = group[0]
                    key[distants.pop(0)] = group[1]

                # Choose a random valid solution
                list_wo = copy.deepcopy(random.choice(valid12))

                # Each entry in the solution list is a string containing a letter followed by 0 or 1
                # The letter corresponds to the word pair and the number corresponds to the item in the pair.
                # Letters a, b, and c are adjacent presentation pairs; d, e, and f are distant presentation pairs.
                for i in range(len(list_wo)):
                    w = list_wo[i]
                    list_wo[i] = key[w[0]][int(w[1])]

            # For 24-item lists, create two 12-item lists based on random solution templates and concatenate them.
            elif list_length == 24:
                # Randomize the ordering of the grouped pairs, as well as the orderings within each group and each pair
                adjacents1 = ['a', 'b', 'c']
                distants1 = ['d', 'e', 'f']
                adjacents2 = ['a', 'b', 'c']
                distants2 = ['d', 'e', 'f']
                random.shuffle(adjacents1)
                random.shuffle(distants1)
                random.shuffle(adjacents2)
                random.shuffle(distants2)
                key1 = {}
                key2 = {}
                for group_num, group in enumerate(grouped_pairs):
                    random.shuffle(group)
                    random.shuffle(group[0])
                    random.shuffle(group[1])
                    if group_num % 2 == 0:
                        key1[adjacents1.pop(0)] = group[0]
                        key1[distants1.pop(0)] = group[1]
                    else:
                        key2[adjacents2.pop(0)] = group[0]
                        key2[distants2.pop(0)] = group[1]

                # Choose a random valid solution
                list_wo1 = copy.deepcopy(random.choice(valid12))
                list_wo2 = copy.deepcopy(random.choice(valid12))

                # Each entry in the solution list is a string containing a letter followed by 0 or 1
                # The letter corresponds to the word pair and the number corresponds to the item in the pair.
                # Letters a, b, and c are adjacent presentation pairs; d, e, and f are distant presentation pairs.
                for i in range(len(list_wo1)):
                    w = list_wo1[i]
                    list_wo1[i] = key1[w[0]][int(w[1])]
                    w = list_wo2[i]
                    list_wo2[i] = key2[w[0]][int(w[1])]

                list_wo = list_wo1 + list_wo2
            else:
                raise ValueError('Function place_pairs_ltpFR3() can only handle word lists of length 12 or 24!')

            # Add finalized list to the session
            sess_wo.append(list_wo)
        subj_wo.append(sess_wo)

    return subj_wo
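# A tiny standalone sketch (not part of the original file) of the letter+digit template
# substitution that place_pairs_ltpFR3 applies: each template entry is a letter naming a
# word pair plus a digit indexing the item within that pair. The template and words below
# are made up for illustration; the real templates come from valid12.json.

```python
# Hypothetical 6-item template: letters name word pairs, digits index within a pair.
template = ['a0', 'd0', 'a1', 'b0', 'd1', 'b1']

key = {'a': ['cat', 'dog'],      # adjacent-presentation pair ('a0' then 'a1')
       'b': ['sun', 'moon'],     # adjacent-presentation pair
       'd': ['ship', 'sail']}    # distant-presentation pair (items separated)

# Same substitution as the original: key[w[0]][int(w[1])] for each template entry.
list_wo = [key[w[0]][int(w[1])] for w in template]
print(list_wo)  # → ['cat', 'ship', 'dog', 'sun', 'sail', 'moon']
```

# The 'a' items end up adjacent and the 'd' items end up separated, which is exactly
# the adjacent-versus-distant presentation constraint the templates encode.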
def listgen_ltpFR3(n):
    """
    Generate all lists for a participant, including the conditions, word pairs
    and word ordering. This function saves the results to a json file labelled
    with the participant's number.
    """
    import config

    # Read in the semantic association matrix
    semMat = []
    with open(config.w2vfile) as w2vfile:
        for word in w2vfile:
            wordVals = []
            wordValsString = word.split()
            for val in wordValsString:
                thisVal = float(val)
                wordVals.append(thisVal)
            semMat.append(wordVals)

    semArray = numpy.array(semMat)

    # Create three semantic similarity bins and sort word pairs by bin
    semRows, semCols = make_bins_ltpFR3(semArray)

    # Read in the word pool
    with open(config.wpfile) as wpfile:
        wp_tot = [x.strip() for x in wpfile.readlines()]

    counts = numpy.zeros(len(wp_tot))

    for i in range(n):
        print '\nSubject ' + str(i) + '\n'
        # Randomize list conditions (list length, presentation rate, modality, distractor duration)
        condi = randomize_conditions_ltpFR3(config)

        # Choose all of the pairs to be used in the experiment
        pairs, pair_dicts, practice_lists = choose_pairs_ltpFR3(wp_tot, condi, config, semRows, semCols)

        # Create all lists by placing the word pairs in appropriate positions
        subj_wo = place_pairs_ltpFR3(pairs, condi)

        # Add practice lists
        subj_wo[0] = practice_lists + subj_wo[0]
        practice_condi = [[18, 1200, 'a', 18000], [18, 1200, 'v', 18000]]
        random.shuffle(practice_condi)
        condi[0] = practice_condi + condi[0]

        d = {'word_order': subj_wo, 'pairs': pair_dicts, 'conditions': condi}

        for sess_dict in pair_dicts:
            counts[numpy.array([wp_tot.index(w) for w in sess_dict])] += 1
        counts[numpy.array([wp_tot.index(w) for w in practice_lists[0]])] += 1
        counts[numpy.array([wp_tot.index(w) for w in practice_lists[1]])] += 1

        with open('/Users/jessepazdera/AtomProjects/ltpFR3_MTurk/static/pools/lists/%d.js' % i, 'w') as f:
            s = 'var sess_info = ' + json.dumps(d) + ';'
            f.write(s)

    with open('/Users/jessepazdera/AtomProjects/ltpFR3_MTurk/static/pools/lists/counts.json', 'w') as f:
        f.write(str([c for c in counts]))

    print max(counts), min(counts), len([wp_tot[i] for i in range(len(counts)) if counts[i] == 0])
    return counts


if __name__ == "__main__":
    nsess = input('How many sessions would you like to generate? ')
    counts = listgen_ltpFR3(nsess)
    print counts.mean()
    print counts.std()
    print counts.max()
    print counts.min()
| 42.923274 | 121 | 0.599476 | 2,248 | 16,783 | 4.372331 | 0.185943 | 0.008648 | 0.005494 | 0.008953 | 0.243565 | 0.196358 | 0.177943 | 0.177943 | 0.166141 | 0.166141 | 0 | 0.019654 | 0.32092 | 16,783 | 390 | 122 | 43.033333 | 0.842766 | 0.268903 | 0 | 0.104478 | 0 | 0 | 0.03845 | 0.01489 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.034826 | null | null | 0.034826 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72bd4a8996d5c4753f1f31aee9a880c97885b93a | 254 | py | Python | examples/single_message.py | Inrixia/pyais | b50fd4d75c687d71b3c70ee939ac9112cfec991e | [
"MIT"
] | 51 | 2019-10-07T11:26:56.000Z | 2022-03-16T10:45:15.000Z | examples/single_message.py | KingKongOne/pyais | ddee5cc4eb8f01f494c82f7b14bdd55aa393af47 | [
"MIT"
] | 57 | 2019-10-14T07:50:00.000Z | 2022-03-28T06:52:27.000Z | examples/single_message.py | KingKongOne/pyais | ddee5cc4eb8f01f494c82f7b14bdd55aa393af47 | [
"MIT"
] | 31 | 2019-10-13T17:17:56.000Z | 2022-03-26T16:46:54.000Z | from pyais.messages import NMEAMessage
message = NMEAMessage(b"!AIVDM,1,1,,B,15M67FC000G?ufbE`FepT@3n00Sa,0*5C")
print(message.decode())
# or, equivalently, build the message from a str instead of bytes:
message = NMEAMessage.from_string("!AIVDM,1,1,,B,15M67FC000G?ufbE`FepT@3n00Sa,0*5C")
print(message.decode())
| 25.4 | 84 | 0.755906 | 39 | 254 | 4.897436 | 0.487179 | 0.188482 | 0.073298 | 0.08377 | 0.565445 | 0.565445 | 0.565445 | 0.565445 | 0.565445 | 0.565445 | 0 | 0.118143 | 0.066929 | 254 | 9 | 85 | 28.222222 | 0.687764 | 0.007874 | 0 | 0.4 | 0 | 0 | 0.376 | 0.376 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72c0ae451f7caa49a39c48148ab8e7fb5585d0b8 | 800 | py | Python | 003_joint_probabilities.py | svetlanama/snowball | a41865a866dae124b4a22134f091a7d09bd0896e | [
"MIT"
] | null | null | null | 003_joint_probabilities.py | svetlanama/snowball | a41865a866dae124b4a22134f091a7d09bd0896e | [
"MIT"
] | null | null | null | 003_joint_probabilities.py | svetlanama/snowball | a41865a866dae124b4a22134f091a7d09bd0896e | [
"MIT"
] | null | null | null | import sys
sys.path.insert(0, '..')
import numpy
import time
import configparser
import topicmodel
def main():
    # read configuration file
    config = configparser.ConfigParser()
    config.read_file(open('config.ini'))
dataDir = config.get('main', 'dataDir')
io = topicmodel.io(dataDir)
model = topicmodel.model(dataDir)
wordDictionary = io.load_csv_as_dict('out-word-dictionary-rare-words-excluded.csv')
model.set_word_dictionary(wordDictionary)
# print wordDictionary
# return
    wwcovar = model.coccurences('tmp-all-paper-tokens.csv', '+', '.')
numpy.save(dataDir + '/tmp-joint-probabilities.npy', wwcovar)
return
if __name__ == "__main__":
t0 = time.time()
main()
t1 = time.time()
    print("finished")
    print("time=", t1 - t0)
| 22.222222 | 87 | 0.665 | 94 | 800 | 5.521277 | 0.542553 | 0.05395 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007788 | 0.1975 | 800 | 35 | 88 | 22.857143 | 0.800623 | 0.06375 | 0 | 0 | 0 | 0 | 0.189262 | 0.127517 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.217391 | null | null | 0.086957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72c1a420d34dd573dce6d90546ddf3cb21473656 | 2,660 | py | Python | tests/bugs/core_4318_test.py | FirebirdSQL/firebird-qa | 96af2def7f905a06f178e2a80a2c8be4a4b44782 | [
"MIT"
] | 1 | 2022-02-05T11:37:13.000Z | 2022-02-05T11:37:13.000Z | tests/bugs/core_4318_test.py | FirebirdSQL/firebird-qa | 96af2def7f905a06f178e2a80a2c8be4a4b44782 | [
"MIT"
] | 1 | 2021-09-03T11:47:00.000Z | 2021-09-03T12:42:10.000Z | tests/bugs/core_4318_test.py | FirebirdSQL/firebird-qa | 96af2def7f905a06f178e2a80a2c8be4a4b44782 | [
"MIT"
] | 1 | 2021-06-30T14:14:16.000Z | 2021-06-30T14:14:16.000Z | #coding:utf-8
#
# id: bugs.core_4318
# title: Regression: Predicates involving PSQL variables/parameters are not pushed inside the aggregation
# description:
# tracker_id: CORE-4318
# min_versions: ['3.0']
# versions: 3.0
# qmid: None
import pytest
from firebird.qa import db_factory, isql_act, Action
# version: 3.0
# resources: None
substitutions_1 = []
init_script_1 = """
recreate table t2 (
id integer not null,
t1_id integer
);
commit;
recreate table t1 (
id integer not null
);
commit;
set term ^;
execute block
as
declare variable i integer = 0;
begin
while (i < 1000) do begin
i = i + 1;
insert into t2(id, t1_id) values(:i, mod(:i, 10));
merge into t1 using (
select mod(:i, 10) as f from rdb$database
) src on t1.id = src.f
when not matched then
insert (id) values(src.f);
end -- while (i < 1000) do begin
end^
set term ;^
commit;
alter table t1 add constraint pk_t1 primary key (id);
alter table t2 add constraint pk_t2 primary key (id);
alter table t2 add constraint fk_t2_ref_t1 foreign key (t1_id) references t1(id);
commit;
"""
db_1 = db_factory(page_size=4096, sql_dialect=3, init=init_script_1)
test_script_1 = """
set explain on;
set planonly;
set term ^;
execute block
returns (
s integer
)
as
declare variable v integer = 1;
begin
with t as (
select t1_id as t1_id, sum(id) as s
from t2
group by 1
)
select s
from t
where t1_id = :v
into :s;
suspend;
end
^
set term ;^
-- In 3.0.0.30837 plan was:
-- Select Expression
-- -> Singularity Check
-- -> Filter
-- -> Aggregate
-- -> Table "T T2" Access By ID
-- -> Index "FK_T2_REF_T1" Scan
-- (i.e. there was NO "Filter" between "Aggregate" and "Table "T T2" Access By ID")
"""
act_1 = isql_act('db_1', test_script_1, substitutions=substitutions_1)
expected_stdout_1 = """
Select Expression
-> Singularity Check
-> Filter
-> Aggregate
-> Filter
-> Table "T2" as "T T2" Access By ID
-> Index "FK_T2_REF_T1" Range Scan (full match)
"""
@pytest.mark.version('>=3.0')
def test_1(act_1: Action):
act_1.expected_stdout = expected_stdout_1
act_1.execute()
assert act_1.clean_stdout == act_1.clean_expected_stdout
| 22.931034 | 112 | 0.557895 | 357 | 2,660 | 4.008403 | 0.37535 | 0.025157 | 0.014675 | 0.018868 | 0.194969 | 0.171209 | 0.089448 | 0.089448 | 0.037736 | 0.037736 | 0 | 0.052359 | 0.346617 | 2,660 | 115 | 113 | 23.130435 | 0.771001 | 0.104887 | 0 | 0.25 | 0 | 0.011905 | 0.790629 | 0 | 0 | 0 | 0 | 0 | 0.011905 | 1 | 0.011905 | false | 0 | 0.02381 | 0 | 0.035714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72c23dc2d109c0b3025a3d48b3833415e7515ab1 | 1,686 | py | Python | GHOST.py | RadicalAjay/Ghost_data | b151b0b92d27c3b8454e28d4f037eafb587d7b23 | [
"MIT"
] | 1 | 2020-06-13T11:29:17.000Z | 2020-06-13T11:29:17.000Z | GHOST.py | RadicalAjay/Ghost_data | b151b0b92d27c3b8454e28d4f037eafb587d7b23 | [
"MIT"
] | null | null | null | GHOST.py | RadicalAjay/Ghost_data | b151b0b92d27c3b8454e28d4f037eafb587d7b23 | [
"MIT"
] | null | null | null | #! /usr/bin/python3
# Description: Data_Ghost, concealing data into spaces and tabs, making it imperceptible to human eyes.
# Author: Ajay Dyavathi
# Github: Radical Ajay
class Ghost():
def __init__(self, file_name, output_format='txt'):
''' Converts ascii text to spaces and tabs '''
self.file_name = file_name
self.output_format = output_format
def ascii2bin(self, asc):
        ''' Converting ascii to binary '''
return ''.join('{:08b}'.format(ord(i)) for i in asc)
def bin2ascii(self, bid):
''' Converting binary to ascii '''
return ''.join(chr(int(bid[i:i + 8], 2)) for i in range(0, len(bid), 8))
def ghost(self, filename):
''' Ghosting data converting it to spaces and tabs '''
with open(filename, 'w') as out_f:
with open(self.file_name, 'r') as in_f:
for in_data in in_f.readlines():
bin_data = self.ascii2bin(in_data)
out_data = bin_data.replace('1', '\t')
out_data = out_data.replace('0', ' ')
out_f.write(out_data)
def unghost(self, in_filename, out_filename):
''' Unghosting data converting back from spaces and tabs to human-readable text '''
with open(out_filename, 'w') as out_f:
with open(in_filename, 'r') as in_f:
for line in in_f.readlines():
line = line.replace('\t', '1')
line = line.replace(' ', '0')
out_f.write(self.bin2ascii(line))
# USAGE:
# ghoster = Ghost('data.txt')
# ghoster.ghost('ghosted.txt')
# ghoster.unghost('ghosted.txt', 'unghosted.txt')
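The whitespace-steganography round trip above can be sketched without any file I/O (standalone re-implementations of the two helpers):

```python
def ascii2bin(asc):
    # each character becomes its 8-bit binary representation
    return ''.join('{:08b}'.format(ord(i)) for i in asc)

def bin2ascii(bid):
    # consume the bit string 8 bits at a time
    return ''.join(chr(int(bid[i:i + 8], 2)) for i in range(0, len(bid), 8))

secret = "hello"
ghosted = ascii2bin(secret).replace('1', '\t').replace('0', ' ')
recovered = bin2ascii(ghosted.replace('\t', '1').replace(' ', '0'))
print(recovered)  # → hello
```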
| 33.72 | 102 | 0.577699 | 224 | 1,686 | 4.205357 | 0.352679 | 0.038217 | 0.055202 | 0.031847 | 0.104034 | 0.048832 | 0.048832 | 0 | 0 | 0 | 0 | 0.012542 | 0.290629 | 1,686 | 49 | 103 | 34.408163 | 0.775084 | 0.293594 | 0 | 0 | 0 | 0 | 0.019948 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.217391 | false | 0 | 0 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72c45f6e75be10ab1eab557e6f4a81a72ff78154 | 600 | py | Python | heroquest/migrations/0002_auto_20160819_1747.py | DeividVM/heroquest | c693d664717a849de645908ae78d62ec2a5837a5 | [
"MIT"
] | null | null | null | heroquest/migrations/0002_auto_20160819_1747.py | DeividVM/heroquest | c693d664717a849de645908ae78d62ec2a5837a5 | [
"MIT"
] | null | null | null | heroquest/migrations/0002_auto_20160819_1747.py | DeividVM/heroquest | c693d664717a849de645908ae78d62ec2a5837a5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.9 on 2016-08-19 17:47
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('heroquest', '0001_initial'),
]
operations = [
migrations.RemoveField(
model_name='player',
name='armor',
),
migrations.AlterField(
model_name='player',
name='spell',
field=models.ManyToManyField(related_name='spells', to='armery.Spell', verbose_name='Hechizos'),
),
]
| 24 | 108 | 0.596667 | 61 | 600 | 5.704918 | 0.721311 | 0.051724 | 0.086207 | 0.109195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046083 | 0.276667 | 600 | 24 | 109 | 25 | 0.75576 | 0.111667 | 0 | 0.235294 | 1 | 0 | 0.130189 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72c4b917975b4d4f647b8a13d6702b4b9e1e961c | 1,727 | py | Python | dask/array/utils.py | epervago/dask | 958732ce6c51ef6af39db4727d948bfa66a0a8d6 | [
"BSD-3-Clause"
] | null | null | null | dask/array/utils.py | epervago/dask | 958732ce6c51ef6af39db4727d948bfa66a0a8d6 | [
"BSD-3-Clause"
] | null | null | null | dask/array/utils.py | epervago/dask | 958732ce6c51ef6af39db4727d948bfa66a0a8d6 | [
"BSD-3-Clause"
] | null | null | null | from distutils.version import LooseVersion
import difflib
import os
import numpy as np
from .core import Array
from ..async import get_sync
if LooseVersion(np.__version__) >= '1.10.0':
allclose = np.allclose
else:
def allclose(a, b, **kwargs):
if kwargs.pop('equal_nan', False):
a_nans = np.isnan(a)
b_nans = np.isnan(b)
if not (a_nans == b_nans).all():
return False
a = a[~a_nans]
b = b[~b_nans]
return np.allclose(a, b, **kwargs)
def _not_empty(x):
return x.shape and 0 not in x.shape
def _maybe_check_dtype(a, dtype=None):
# Only check dtype matches for non-empty
if _not_empty(a):
assert a.dtype == dtype
def assert_eq(a, b, **kwargs):
if isinstance(a, Array):
adt = a.dtype
a = a.compute(get=get_sync)
_maybe_check_dtype(a, adt)
else:
adt = getattr(a, 'dtype', None)
if isinstance(b, Array):
bdt = b.dtype
assert bdt is not None
b = b.compute(get=get_sync)
_maybe_check_dtype(b, bdt)
else:
bdt = getattr(b, 'dtype', None)
if str(adt) != str(bdt):
diff = difflib.ndiff(str(adt).splitlines(), str(bdt).splitlines())
raise AssertionError('string repr are different' + os.linesep +
os.linesep.join(diff))
try:
if _not_empty(a) and _not_empty(b):
# Treat all empty arrays as equivalent
assert a.shape == b.shape
assert allclose(a, b, **kwargs)
return
except TypeError:
pass
c = a == b
if isinstance(c, np.ndarray):
assert c.all()
else:
assert c
return True
| 24.671429 | 74 | 0.568037 | 242 | 1,727 | 3.921488 | 0.318182 | 0.012645 | 0.03372 | 0.05058 | 0.067439 | 0.067439 | 0.067439 | 0 | 0 | 0 | 0 | 0.004255 | 0.319629 | 1,727 | 69 | 75 | 25.028986 | 0.803404 | 0.043428 | 0 | 0.074074 | 0 | 0 | 0.030321 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 0 | null | null | 0.018519 | 0.111111 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72c98748f08c6f90f0d9a63c5a27d1f4d96b3af8 | 1,685 | py | Python | tests/_site/myauth/models.py | ahmetdaglarbas/e-commerce | ff190244ccd422b4e08d7672f50709edcbb6ebba | [
"BSD-3-Clause"
] | 2 | 2015-12-11T00:19:15.000Z | 2021-11-14T19:44:42.000Z | tests/_site/myauth/models.py | ahmetdaglarbas/e-commerce | ff190244ccd422b4e08d7672f50709edcbb6ebba | [
"BSD-3-Clause"
] | null | null | null | tests/_site/myauth/models.py | ahmetdaglarbas/e-commerce | ff190244ccd422b4e08d7672f50709edcbb6ebba | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Code will only work with Django >= 1.5. See tests/config.py
import re
from django.utils.translation import ugettext_lazy as _
from django.db import models
from django.core import validators
from django.contrib.auth.models import BaseUserManager
from oscar.apps.customer.abstract_models import AbstractUser
class CustomUserManager(BaseUserManager):
def create_user(self, username, email, password):
"""
Creates and saves a User with the given email and password.
"""
if not email:
raise ValueError('Users must have an email address')
user = self.model(
email=CustomUserManager.normalize_email(email),
username=username,
is_active=True,
)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, username, email, password):
u = self.create_user(username, email, password=password)
u.is_admin = True
u.is_staff = True
u.save(using=self._db)
return u
class User(AbstractUser):
"""
Custom user based on Oscar's AbstractUser
"""
username = models.CharField(_('username'), max_length=30, unique=True,
help_text=_('Required. 30 characters or fewer. Letters, numbers and '
'@/./+/-/_ characters'),
validators=[
            validators.RegexValidator(re.compile(r'^[\w.@+-]+$'), _('Enter a valid username.'), 'invalid')
])
extra_field = models.CharField(
_('Nobody needs me'), max_length=5, blank=True)
objects = CustomUserManager()
class Meta:
app_label = 'myauth'
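The `RegexValidator` above restricts usernames to word characters plus `@ . + -`; the pattern can be exercised on its own (the sample inputs are mine):

```python
import re

USERNAME_RE = re.compile(r'^[\w.@+-]+$')

print(bool(USERNAME_RE.match('alice.smith+test')))  # → True
print(bool(USERNAME_RE.match('bad name!')))         # → False
```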
| 29.051724 | 105 | 0.636795 | 196 | 1,685 | 5.357143 | 0.540816 | 0.038095 | 0.06 | 0.047619 | 0.04 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00638 | 0.255786 | 1,685 | 57 | 106 | 29.561404 | 0.830941 | 0.109199 | 0 | 0 | 0 | 0 | 0.12115 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0.111111 | 0.166667 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
72cdbe89dce7053232a88a1aed13e52d7045db37 | 641 | py | Python | company/migrations/0021_auto_20161208_1113.py | uktrade/directory-api | 45a9024a7ecc2842895201cbb51420ba9e57a168 | [
"MIT"
] | 2 | 2017-06-02T09:09:08.000Z | 2021-01-18T10:26:53.000Z | company/migrations/0021_auto_20161208_1113.py | konradko/directory-api | e9cd05b1deaf575e94352c46ddbd1857d8119fda | [
"MIT"
] | 629 | 2016-10-10T09:35:52.000Z | 2022-03-25T15:04:04.000Z | company/migrations/0021_auto_20161208_1113.py | konradko/directory-api | e9cd05b1deaf575e94352c46ddbd1857d8119fda | [
"MIT"
] | 5 | 2017-06-22T10:02:22.000Z | 2022-03-14T17:55:21.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.9.10 on 2016-12-08 11:13
from __future__ import unicode_literals
from django.db import migrations
from company import helpers
def ensure_verification_code(apps, schema_editor):
Company = apps.get_model("company", "Company")
for company in Company.objects.filter(verification_code=''):
company.verification_code = helpers.generate_verification_code()
company.save()
class Migration(migrations.Migration):
dependencies = [
('company', '0020_auto_20161208_1056'),
]
operations = [
migrations.RunPython(ensure_verification_code),
]
| 23.740741 | 72 | 0.711388 | 75 | 641 | 5.84 | 0.64 | 0.182648 | 0.100457 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063098 | 0.184087 | 641 | 26 | 73 | 24.653846 | 0.774379 | 0.106084 | 0 | 0 | 1 | 0 | 0.077193 | 0.040351 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.2 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72cf53ccf7f23461f4563c9f0a973dec0115aebc | 2,235 | py | Python | libhustpass/login.py | naivekun/libhustpass | d8d487e3af996898e4a7b21b924fbf0fc4fbe419 | [
"WTFPL"
] | 26 | 2020-02-18T14:30:30.000Z | 2021-11-30T02:50:37.000Z | libhustpass/login.py | ingdex/libhustpass | d8d487e3af996898e4a7b21b924fbf0fc4fbe419 | [
"WTFPL"
] | 3 | 2020-05-01T20:26:42.000Z | 2020-12-30T07:03:10.000Z | libhustpass/login.py | ingdex/libhustpass | d8d487e3af996898e4a7b21b924fbf0fc4fbe419 | [
"WTFPL"
] | 6 | 2020-02-18T14:33:39.000Z | 2022-01-28T11:09:25.000Z | import libhustpass.sbDes as sbDes
import libhustpass.captcha as captcha
import requests
import re
import random
def toWideChar(data):
data_bytes = bytes(data, encoding="utf-8")
ret = []
for i in data_bytes:
ret.extend([0, i])
while len(ret) % 8 != 0:
ret.append(0)
return ret
def Enc(data, first_key, second_key, third_key):
data_bytes = toWideChar(data)
key1_bytes = toWideChar(first_key)
key2_bytes = toWideChar(second_key)
key3_bytes = toWideChar(third_key)
ret_ = []
i = 0
while i < len(data_bytes):
tmp = data_bytes[i : i + 8]
x = 0
y = 0
z = 0
while x < len(key1_bytes):
enc1_ = sbDes.des(key1_bytes[x : x + 8], sbDes.ECB)
tmp = list(enc1_.encrypt(tmp))
x += 8
while y < len(key2_bytes):
enc2_ = sbDes.des(key2_bytes[y : y + 8], sbDes.ECB)
tmp = list(enc2_.encrypt(tmp))
y += 8
while z < len(key3_bytes):
enc3_ = sbDes.des(key3_bytes[z : z + 8], sbDes.ECB)
tmp = list(enc3_.encrypt(tmp))
z += 8
ret_.extend(tmp)
i += 8
ret = ""
for i in ret_:
ret += "%02X" % i
return ret
def login(username, password, url):
r = requests.session()
login_html = r.get(url)
captcha_content = r.get("https://pass.hust.edu.cn/cas/code?"+str(random.random()), stream=True)
captcha_content.raw.decode_content = True
nonce = re.search(
'<input type="hidden" id="lt" name="lt" value="(.*)" />', login_html.text
).group(1)
action = re.search(
'<form id="loginForm" action="(.*)" method="post">', login_html.text
).group(1)
post_params = {
"code": captcha.deCaptcha(captcha_content.raw),
"rsa": Enc(username + password + nonce, "1", "2", "3"),
"ul": len(username),
"pl": len(password),
"lt": nonce,
"execution": "e1s1",
"_eventId": "submit",
}
redirect_html = r.post(
"https://pass.hust.edu.cn" + action, data=post_params, allow_redirects=False
)
try:
return redirect_html.headers["Location"]
except:
raise Exception("login failed")
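`toWideChar` above widens each UTF-8 byte with a leading zero byte and zero-pads the result to an 8-byte (DES block) boundary; a quick standalone check:

```python
def toWideChar(data):
    data_bytes = bytes(data, encoding="utf-8")
    ret = []
    for i in data_bytes:
        ret.extend([0, i])  # interleave a zero byte before each byte
    while len(ret) % 8 != 0:
        ret.append(0)       # pad to the 8-byte DES block size
    return ret

print(toWideChar("ab"))  # → [0, 97, 0, 98, 0, 0, 0, 0]
```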
| 28.653846 | 99 | 0.561521 | 295 | 2,235 | 4.111864 | 0.355932 | 0.037098 | 0.022259 | 0.029678 | 0.117065 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025933 | 0.292617 | 2,235 | 77 | 100 | 29.025974 | 0.741303 | 0 | 0 | 0.057143 | 0 | 0 | 0.104251 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042857 | false | 0.1 | 0.071429 | 0 | 0.157143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
72de8b8aee3f4620163c8f97b222277abcb82e15 | 1,061 | py | Python | projects/detr/scripts/dd.py | zzzzzz0407/detectron2 | 021fc5b1502bbba54e4714735736898803835ab0 | [
"Apache-2.0"
] | 1 | 2020-07-03T07:17:17.000Z | 2020-07-03T07:17:17.000Z | projects/detr/scripts/dd.py | zzzzzz0407/detectron2 | 021fc5b1502bbba54e4714735736898803835ab0 | [
"Apache-2.0"
] | null | null | null | projects/detr/scripts/dd.py | zzzzzz0407/detectron2 | 021fc5b1502bbba54e4714735736898803835ab0 | [
"Apache-2.0"
] | null | null | null | import json
if __name__ == '__main__':
jsonFile = '/data00/home/zhangrufeng1/projects/detectron2/projects/detr/datasets/mot/mot17/annotations/mot17_train_half.json'
with open(jsonFile, 'r') as f:
infos = json.load(f)
count_dict = dict()
for info in infos["images"]:
if info["file_name"] in ["MOT17-02-FRCNN/img1/000091.jpg"]:
for ann in infos['annotations']:
if ann["image_id"] not in count_dict.keys() and ann["iscrowd"] == 0 and ann["bbox"][2] >= 1e-5 and ann["bbox"][3] >= 1e-5:
count_dict[ann["image_id"]] = 1
elif ann["image_id"] in count_dict.keys() and ann["iscrowd"] == 0:
count_dict[ann["image_id"]] += 1
max_count = 0
min_count = 999
num_freq = 0
for key, value in count_dict.items():
max_count = max(max_count, value)
min_count = min(min_count, value)
if value > 100:
num_freq += 1
print("max_count: {}".format(max_count))
print("min_count: {}".format(min_count))
print("num_freq: {}".format(num_freq))
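The counting loop above can be condensed with `collections.Counter`; note that this sketch (the sample annotations are mine) applies the bbox size filter to every annotation, whereas the original only checks it the first time an `image_id` is seen:

```python
from collections import Counter

annotations = [
    {"image_id": 1, "iscrowd": 0, "bbox": [0, 0, 5.0, 5.0]},
    {"image_id": 1, "iscrowd": 0, "bbox": [0, 0, 3.0, 3.0]},
    {"image_id": 2, "iscrowd": 1, "bbox": [0, 0, 4.0, 4.0]},
]
counts = Counter(
    a["image_id"]
    for a in annotations
    if a["iscrowd"] == 0 and a["bbox"][2] >= 1e-5 and a["bbox"][3] >= 1e-5
)
print(counts.most_common())  # → [(1, 2)]
```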
| 33.15625 | 130 | 0.609802 | 156 | 1,061 | 3.923077 | 0.403846 | 0.088235 | 0.065359 | 0.04902 | 0.160131 | 0.160131 | 0.094771 | 0.094771 | 0 | 0 | 0 | 0.046569 | 0.230914 | 1,061 | 31 | 131 | 34.225806 | 0.703431 | 0 | 0 | 0 | 0 | 0.041667 | 0.253534 | 0.133836 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.041667 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72e1010bc4f2ebd173a6efd489e56ee4ea6793c8 | 1,228 | py | Python | problems/p009.py | davisschenk/project-euler-python | 1375412e6c8199ab02250bd67223c758d4df1725 | [
"MIT"
] | null | null | null | problems/p009.py | davisschenk/project-euler-python | 1375412e6c8199ab02250bd67223c758d4df1725 | [
"MIT"
] | null | null | null | problems/p009.py | davisschenk/project-euler-python | 1375412e6c8199ab02250bd67223c758d4df1725 | [
"MIT"
] | 2 | 2020-10-08T23:35:03.000Z | 2020-10-09T00:28:36.000Z | from math import ceil, sqrt
from problem import Problem
from utils.math import gcd
class PythagoreanTriplet(Problem, name="Special Pythagorean triplet", expected=31875000):
@Problem.solution()
def brute_force(self, ts=1000):
for a in range(3, round((ts - 3) / 2)):
for b in range(a + 1, round((ts - 1 - a) / 2)):
c = ts - a - b
if c * c == a * a + b * b:
return a * b * c
@Problem.solution()
def parametrisation(self, ts=1000):
s2 = ts / 2
mlimit = ceil(sqrt(s2)) - 1
for m in range(2, mlimit):
if s2 % m == 0:
sm = s2 / m
while sm % 2 == 0:
sm /= 2
if m % 2 == 1:
k = m + 2
else:
k = m + 1
while k < 2 * m and k <= sm:
if sm % k == 0 and gcd(k, m) == 1:
d = s2 / (k * m)
n = k - m
a = d * (m * m - n * n)
b = 2 * d * m * n
c = d * (m * m + n * n)
return a * b * c
k += 2
| 29.238095 | 89 | 0.35342 | 157 | 1,228 | 2.757962 | 0.299363 | 0.023095 | 0.083141 | 0.04157 | 0.023095 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075044 | 0.533388 | 1,228 | 41 | 90 | 29.95122 | 0.680628 | 0 | 0 | 0.121212 | 0 | 0 | 0.021987 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.090909 | 0 | 0.242424 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72ea2c27713d0d21a3c0d65d78528e65b46ecc6c | 61,742 | py | Python | baseCli.py | eym55/mango-client-python | 2cb1ce77d785343c24ecba913eaa9693c3db1181 | [
"MIT"
] | null | null | null | baseCli.py | eym55/mango-client-python | 2cb1ce77d785343c24ecba913eaa9693c3db1181 | [
"MIT"
] | null | null | null | baseCli.py | eym55/mango-client-python | 2cb1ce77d785343c24ecba913eaa9693c3db1181 | [
"MIT"
] | null | null | null | import abc
import datetime
import enum
import logging
import time
import typing
import asyncio
import Layout as layouts
from decimal import Decimal
from pyserum.market import Market
from pyserum.open_orders_account import OpenOrdersAccount
from solana.account import Account
from solana.publickey import PublicKey
from solana.rpc.commitment import Single
from solana.rpc.types import MemcmpOpts, TokenAccountOpts, RPCMethod, RPCResponse
from spl.token.client import Token as SplToken
from spl.token.constants import TOKEN_PROGRAM_ID
from Constants import NUM_MARKETS, NUM_TOKENS, SOL_DECIMALS, SYSTEM_PROGRAM_ADDRESS, MAX_RATE,OPTIMAL_RATE,OPTIMAL_UTIL
from Context import Context
from Decoder import decode_binary, encode_binary, encode_key
class Version(enum.Enum):
UNSPECIFIED = 0
V1 = 1
V2 = 2
V3 = 3
V4 = 4
V5 = 5
class InstructionType(enum.IntEnum):
InitMangoGroup = 0
InitMarginAccount = 1
Deposit = 2
Withdraw = 3
Borrow = 4
SettleBorrow = 5
Liquidate = 6
DepositSrm = 7
WithdrawSrm = 8
PlaceOrder = 9
SettleFunds = 10
CancelOrder = 11
CancelOrderByClientId = 12
ChangeBorrowLimit = 13
PlaceAndSettle = 14
ForceCancelOrders = 15
PartialLiquidate = 16
def __str__(self):
return self.name
class AccountInfo:
def __init__(self, address: PublicKey, executable: bool, lamports: Decimal, owner: PublicKey, rent_epoch: Decimal, data: bytes):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.address: PublicKey = address
self.executable: bool = executable
self.lamports: Decimal = lamports
self.owner: PublicKey = owner
self.rent_epoch: Decimal = rent_epoch
self.data: bytes = data
def encoded_data(self) -> typing.List:
return encode_binary(self.data)
def __str__(self) -> str:
return f"""« AccountInfo [{self.address}]:
Owner: {self.owner}
Executable: {self.executable}
Lamports: {self.lamports}
Rent Epoch: {self.rent_epoch}
»"""
def __repr__(self) -> str:
return f"{self}"
@staticmethod
async def load(context: Context, address: PublicKey) -> typing.Optional["AccountInfo"]:
response: RPCResponse = context.client.get_account_info(address)
result = context.unwrap_or_raise_exception(response)
if result["value"] is None:
return None
return AccountInfo._from_response_values(result["value"], address)
@staticmethod
async def load_multiple(context: Context, addresses: typing.List[PublicKey]) -> typing.List["AccountInfo"]:
address_strings = list(map(PublicKey.__str__, addresses))
response = await context.client._provider.make_request(RPCMethod("getMultipleAccounts"), address_strings)
response_value_list = zip(response["result"]["value"], addresses)
return list(map(lambda pair: AccountInfo._from_response_values(pair[0], pair[1]), response_value_list))
@staticmethod
def _from_response_values(response_values: typing.Dict[str, typing.Any], address: PublicKey) -> "AccountInfo":
executable = bool(response_values["executable"])
lamports = Decimal(response_values["lamports"])
owner = PublicKey(response_values["owner"])
rent_epoch = Decimal(response_values["rentEpoch"])
data = decode_binary(response_values["data"])
return AccountInfo(address, executable, lamports, owner, rent_epoch, data)
@staticmethod
def from_response(response: RPCResponse, address: PublicKey) -> "AccountInfo":
return AccountInfo._from_response_values(response["result"]["value"], address)
class AddressableAccount(metaclass=abc.ABCMeta):
def __init__(self, account_info: AccountInfo):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.account_info = account_info
@property
def address(self) -> PublicKey:
return self.account_info.address
def __repr__(self) -> str:
return f"{self}"
class SerumAccountFlags:
def __init__(self, version: Version, initialized: bool, market: bool, open_orders: bool,
request_queue: bool, event_queue: bool, bids: bool, asks: bool, disabled: bool):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.version: Version = version
self.initialized = initialized
self.market = market
self.open_orders = open_orders
self.request_queue = request_queue
self.event_queue = event_queue
self.bids = bids
self.asks = asks
self.disabled = disabled
@staticmethod
def from_layout(layout: layouts.SERUM_ACCOUNT_FLAGS) -> "SerumAccountFlags":
return SerumAccountFlags(Version.UNSPECIFIED, layout.initialized, layout.market,
layout.open_orders, layout.request_queue, layout.event_queue,
layout.bids, layout.asks, layout.disabled)
def __str__(self) -> str:
flags: typing.List[typing.Optional[str]] = []
flags += ["initialized" if self.initialized else None]
flags += ["market" if self.market else None]
flags += ["open_orders" if self.open_orders else None]
flags += ["request_queue" if self.request_queue else None]
flags += ["event_queue" if self.event_queue else None]
flags += ["bids" if self.bids else None]
flags += ["asks" if self.asks else None]
flags += ["disabled" if self.disabled else None]
flag_text = " | ".join(flag for flag in flags if flag is not None) or "None"
return f"« SerumAccountFlags: {flag_text} »"
def __repr__(self) -> str:
return f"{self}"
class MangoAccountFlags:
def __init__(self, version: Version, initialized: bool, group: bool, margin_account: bool, srm_account: bool):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.version: Version = version
self.initialized = initialized
self.group = group
self.margin_account = margin_account
self.srm_account = srm_account
@staticmethod
def from_layout(layout: layouts.MANGO_ACCOUNT_FLAGS) -> "MangoAccountFlags":
return MangoAccountFlags(Version.UNSPECIFIED, layout.initialized, layout.group, layout.margin_account,
layout.srm_account)
def __str__(self) -> str:
flags: typing.List[typing.Optional[str]] = []
flags += ["initialized" if self.initialized else None]
flags += ["group" if self.group else None]
flags += ["margin_account" if self.margin_account else None]
flags += ["srm_account" if self.srm_account else None]
flag_text = " | ".join(flag for flag in flags if flag is not None) or "None"
return f"« MangoAccountFlags: {flag_text} »"
def __repr__(self) -> str:
return f"{self}"
class Index:
def __init__(self, version: Version, last_update: datetime.datetime, borrow: Decimal, deposit: Decimal):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.version: Version = version
self.last_update: datetime.datetime = last_update
self.borrow: Decimal = borrow
self.deposit: Decimal = deposit
@staticmethod
def from_layout(layout: layouts.INDEX, decimals: Decimal) -> "Index":
borrow = layout.borrow / Decimal(10 ** decimals)
deposit = layout.deposit / Decimal(10 ** decimals)
return Index(Version.UNSPECIFIED, layout.last_update, borrow, deposit)
def __str__(self) -> str:
return f"« Index: Borrow: {self.borrow:,.8f}, Deposit: {self.deposit:,.8f} [last update: {self.last_update}] »"
def __repr__(self) -> str:
return f"{self}"
class AggregatorConfig:
def __init__(self, version: Version, description: str, decimals: Decimal, restart_delay: Decimal,
max_submissions: Decimal, min_submissions: Decimal, reward_amount: Decimal,
reward_token_account: PublicKey):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.version: Version = version
self.description: str = description
self.decimals: Decimal = decimals
self.restart_delay: Decimal = restart_delay
self.max_submissions: Decimal = max_submissions
self.min_submissions: Decimal = min_submissions
self.reward_amount: Decimal = reward_amount
self.reward_token_account: PublicKey = reward_token_account
@staticmethod
def from_layout(layout: layouts.AGGREGATOR_CONFIG) -> "AggregatorConfig":
return AggregatorConfig(Version.UNSPECIFIED, layout.description, layout.decimals,
layout.restart_delay, layout.max_submissions, layout.min_submissions,
layout.reward_amount, layout.reward_token_account)
def __str__(self) -> str:
return f"« AggregatorConfig: '{self.description}', Decimals: {self.decimals} [restart delay: {self.restart_delay}], Max: {self.max_submissions}, Min: {self.min_submissions}, Reward: {self.reward_amount}, Reward Account: {self.reward_token_account} »"
def __repr__(self) -> str:
return f"{self}"
class Round:
def __init__(self, version: Version, id: Decimal, created_at: datetime.datetime, updated_at: datetime.datetime):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.version: Version = version
self.id: Decimal = id
self.created_at: datetime.datetime = created_at
self.updated_at: datetime.datetime = updated_at
@staticmethod
def from_layout(layout: layouts.ROUND) -> "Round":
return Round(Version.UNSPECIFIED, layout.id, layout.created_at, layout.updated_at)
def __str__(self) -> str:
return f"« Round[{self.id}], Created: {self.updated_at}, Updated: {self.updated_at} »"
def __repr__(self) -> str:
return f"{self}"
class Answer:
def __init__(self, version: Version, round_id: Decimal, median: Decimal, created_at: datetime.datetime, updated_at: datetime.datetime):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.version: Version = version
self.round_id: Decimal = round_id
self.median: Decimal = median
self.created_at: datetime.datetime = created_at
self.updated_at: datetime.datetime = updated_at
@staticmethod
def from_layout(layout: layouts.ANSWER) -> "Answer":
return Answer(Version.UNSPECIFIED, layout.round_id, layout.median, layout.created_at, layout.updated_at)
def __str__(self) -> str:
return f"« Answer: Round[{self.round_id}], Median: {self.median:,.8f}, Created: {self.updated_at}, Updated: {self.updated_at} »"
def __repr__(self) -> str:
return f"{self}"
class Aggregator(AddressableAccount):
def __init__(self, account_info: AccountInfo, version: Version, config: AggregatorConfig,
initialized: bool, name: str, owner: PublicKey, round_: Round,
round_submissions: PublicKey, answer: Answer, answer_submissions: PublicKey):
super().__init__(account_info)
self.version: Version = version
self.config: AggregatorConfig = config
self.initialized: bool = initialized
self.name: str = name
self.owner: PublicKey = owner
self.round: Round = round_
self.round_submissions: PublicKey = round_submissions
self.answer: Answer = answer
self.answer_submissions: PublicKey = answer_submissions
@property
def price(self) -> Decimal:
return self.answer.median / (10 ** self.config.decimals)
@staticmethod
def from_layout(layout: layouts.AGGREGATOR, account_info: AccountInfo, name: str) -> "Aggregator":
config = AggregatorConfig.from_layout(layout.config)
initialized = bool(layout.initialized)
round_ = Round.from_layout(layout.round)
answer = Answer.from_layout(layout.answer)
return Aggregator(account_info, Version.UNSPECIFIED, config, initialized, name, layout.owner,
round_, layout.round_submissions, answer, layout.answer_submissions)
@staticmethod
def parse(context: Context, account_info: AccountInfo) -> "Aggregator":
data = account_info.data
if len(data) != layouts.AGGREGATOR.sizeof():
raise Exception(f"Data length ({len(data)}) does not match expected size ({layouts.AGGREGATOR.sizeof()})")
name = context.lookup_oracle_name(account_info.address)
layout = layouts.AGGREGATOR.parse(data)
return Aggregator.from_layout(layout, account_info, name)
@staticmethod
def load(context: Context, account_address: PublicKey):
account_info = AccountInfo.load(context, account_address)
if account_info is None:
raise Exception(f"Aggregator account not found at address '{account_address}'")
return Aggregator.parse(context, account_info)
def __str__(self) -> str:
return f"""
« Aggregator '{self.name}' [{self.version}]:
Config: {self.config}
Initialized: {self.initialized}
Owner: {self.owner}
Round: {self.round}
Round Submissions: {self.round_submissions}
Answer: {self.answer}
Answer Submissions: {self.answer_submissions}
»
"""
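The `price` property above scales the raw oracle answer by the aggregator's configured decimals. A minimal standalone sketch of that scaling (the function name here is illustrative, not part of this module):

```python
from decimal import Decimal

def scale_median(median: Decimal, decimals: Decimal) -> Decimal:
    # The oracle stores its answer as a scaled integer; dividing by
    # 10 ** decimals recovers the human-readable price.
    return median / (10 ** int(decimals))
```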
class Token:
def __init__(self, name: str, mint: PublicKey, decimals: Decimal):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.name: str = name.upper()
self.mint: PublicKey = mint
self.decimals: Decimal = decimals
def round(self, value: Decimal) -> Decimal:
rounded = round(value, int(self.decimals))
return Decimal(rounded)
def name_matches(self, name: str) -> bool:
return self.name.upper() == name.upper()
@staticmethod
def find_by_name(values: typing.List["Token"], name: str) -> "Token":
found = [value for value in values if value.name_matches(name)]
if len(found) == 0:
raise Exception(f"Token '{name}' not found in token values: {values}")
if len(found) > 1:
raise Exception(f"Token '{name}' matched multiple tokens in values: {values}")
return found[0]
@staticmethod
def find_by_mint(values: typing.List["Token"], mint: PublicKey) -> "Token":
found = [value for value in values if value.mint == mint]
if len(found) == 0:
raise Exception(f"Token '{mint}' not found in token values: {values}")
if len(found) > 1:
raise Exception(f"Token '{mint}' matched multiple tokens in values: {values}")
return found[0]
    # Tokens are equal if they have the same mint address.
def __eq__(self, other):
if hasattr(other, 'mint'):
return self.mint == other.mint
return False
def __str__(self) -> str:
return f"« Token '{self.name}' [{self.mint} ({self.decimals} decimals)] »"
def __repr__(self) -> str:
return f"{self}"
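`find_by_name` and `find_by_mint` above both enforce an exactly-one-match rule: zero matches and multiple matches are errors. A self-contained sketch of that lookup pattern, using a stand-in dataclass rather than the real `Token`:

```python
from dataclasses import dataclass

@dataclass
class FakeToken:
    # Stand-in for Token; only the name matters for this lookup sketch.
    name: str

def find_unique_by_name(values, name):
    # Mirrors the find_by_name pattern above: zero matches and multiple
    # matches both raise; exactly one match is returned.
    found = [value for value in values if value.name.upper() == name.upper()]
    if len(found) == 0:
        raise Exception(f"'{name}' not found")
    if len(found) > 1:
        raise Exception(f"'{name}' matched multiple values")
    return found[0]
```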
SolToken = Token("SOL", SYSTEM_PROGRAM_ADDRESS, SOL_DECIMALS)
class TokenLookup:
@staticmethod
def find_by_name(context: Context, name: str) -> Token:
if SolToken.name_matches(name):
return SolToken
mint = context.lookup_token_address(name)
if mint is None:
raise Exception(f"Could not find token with name '{name}'.")
return Token(name, mint, Decimal(6))
@staticmethod
def find_by_mint(context: Context, mint: PublicKey) -> Token:
if SolToken.mint == mint:
return SolToken
name = context.lookup_token_name(mint)
if name is None:
raise Exception(f"Could not find token with mint '{mint}'.")
return Token(name, mint, Decimal(6))
class BasketToken:
def __init__(self, token: Token, vault: PublicKey, index: Index):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.token: Token = token
self.vault: PublicKey = vault
self.index: Index = index
@staticmethod
def find_by_name(values: typing.List["BasketToken"], name: str) -> "BasketToken":
found = [value for value in values if value.token.name_matches(name)]
if len(found) == 0:
raise Exception(f"Token '{name}' not found in token values: {values}")
if len(found) > 1:
raise Exception(f"Token '{name}' matched multiple tokens in values: {values}")
return found[0]
@staticmethod
def find_by_mint(values: typing.List["BasketToken"], mint: PublicKey) -> "BasketToken":
found = [value for value in values if value.token.mint == mint]
if len(found) == 0:
raise Exception(f"Token '{mint}' not found in token values: {values}")
if len(found) > 1:
raise Exception(f"Token '{mint}' matched multiple tokens in values: {values}")
return found[0]
@staticmethod
def find_by_token(values: typing.List["BasketToken"], token: Token) -> "BasketToken":
return BasketToken.find_by_mint(values, token.mint)
# BasketTokens are equal if they have the same underlying token.
def __eq__(self, other):
if hasattr(other, 'token'):
return self.token == other.token
return False
def __str__(self) -> str:
return f"""« BasketToken [{self.token}]:
Vault: {self.vault}
Index: {self.index}
»"""
def __repr__(self) -> str:
return f"{self}"
class TokenValue:
def __init__(self, token: Token, value: Decimal):
self.token = token
self.value = value
@staticmethod
    def fetch_total_value_or_none(context: Context, account_public_key: PublicKey, token: Token) -> typing.Optional["TokenValue"]:
        opts = TokenAccountOpts(mint=token.mint)
        token_accounts_response = context.client.get_token_accounts_by_owner(account_public_key, opts, commitment=context.commitment)
token_accounts = token_accounts_response["result"]["value"]
if len(token_accounts) == 0:
return None
total_value = Decimal(0)
for token_account in token_accounts:
            result = context.client.get_token_account_balance(token_account["pubkey"], commitment=context.commitment)
value = Decimal(result["result"]["value"]["amount"])
decimal_places = result["result"]["value"]["decimals"]
divisor = Decimal(10 ** decimal_places)
total_value += value / divisor
return TokenValue(token, total_value)
@staticmethod
def fetch_total_value(context: Context, account_public_key: PublicKey, token: Token) -> "TokenValue":
value = TokenValue.fetch_total_value_or_none(context, account_public_key, token)
if value is None:
return TokenValue(token, Decimal(0))
return value
@staticmethod
def report(reporter: typing.Callable[[str], None], values: typing.List["TokenValue"]) -> None:
for value in values:
reporter(f"{value.value:>18,.8f} {value.token.name}")
@staticmethod
def find_by_name(values: typing.List["TokenValue"], name: str) -> "TokenValue":
found = [value for value in values if value.token.name_matches(name)]
if len(found) == 0:
raise Exception(f"Token '{name}' not found in token values: {values}")
if len(found) > 1:
raise Exception(f"Token '{name}' matched multiple tokens in values: {values}")
return found[0]
@staticmethod
def find_by_mint(values: typing.List["TokenValue"], mint: PublicKey) -> "TokenValue":
found = [value for value in values if value.token.mint == mint]
if len(found) == 0:
raise Exception(f"Token '{mint}' not found in token values: {values}")
if len(found) > 1:
raise Exception(f"Token '{mint}' matched multiple tokens in values: {values}")
return found[0]
@staticmethod
def find_by_token(values: typing.List["TokenValue"], token: Token) -> "TokenValue":
return TokenValue.find_by_mint(values, token.mint)
@staticmethod
def changes(before: typing.List["TokenValue"], after: typing.List["TokenValue"]) -> typing.List["TokenValue"]:
changes: typing.List[TokenValue] = []
for before_balance in before:
after_balance = TokenValue.find_by_token(after, before_balance.token)
result = TokenValue(before_balance.token, after_balance.value - before_balance.value)
changes += [result]
return changes
def __str__(self) -> str:
return f"« TokenValue: {self.value:>18,.8f} {self.token.name} »"
def __repr__(self) -> str:
return f"{self}"
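`TokenValue.changes` computes per-token deltas by pairing each `before` balance with its `after` counterpart via `find_by_token`. A dictionary-based sketch of the same idea, with types simplified:

```python
from decimal import Decimal

def balance_changes(before: dict, after: dict) -> dict:
    # For each token present before, subtract its old balance from the
    # matching new balance -- the same pairing TokenValue.changes performs.
    return {name: after[name] - value for name, value in before.items()}
```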
class OwnedTokenValue:
def __init__(self, owner: PublicKey, token_value: TokenValue):
self.owner = owner
self.token_value = token_value
@staticmethod
def find_by_owner(values: typing.List["OwnedTokenValue"], owner: PublicKey) -> "OwnedTokenValue":
found = [value for value in values if value.owner == owner]
if len(found) == 0:
raise Exception(f"Owner '{owner}' not found in: {values}")
if len(found) > 1:
raise Exception(f"Owner '{owner}' matched multiple tokens in: {values}")
return found[0]
@staticmethod
def changes(before: typing.List["OwnedTokenValue"], after: typing.List["OwnedTokenValue"]) -> typing.List["OwnedTokenValue"]:
changes: typing.List[OwnedTokenValue] = []
for before_value in before:
after_value = OwnedTokenValue.find_by_owner(after, before_value.owner)
token_value = TokenValue(before_value.token_value.token, after_value.token_value.value - before_value.token_value.value)
result = OwnedTokenValue(before_value.owner, token_value)
changes += [result]
return changes
def __str__(self) -> str:
return f"[{self.owner}]: {self.token_value}"
def __repr__(self) -> str:
return f"{self}"
class MarketMetadata:
def __init__(self, name: str, address: PublicKey, base: BasketToken, quote: BasketToken,
spot: PublicKey, oracle: PublicKey, decimals: Decimal):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.name: str = name
self.address: PublicKey = address
self.base: BasketToken = base
self.quote: BasketToken = quote
self.spot: PublicKey = spot
self.oracle: PublicKey = oracle
self.decimals: Decimal = decimals
self._market = None
    def fetch_market(self, context: Context) -> Market:
        if self._market is None:
            self._market = Market.load(context.client, self.spot)
return self._market
def __str__(self) -> str:
return f"""« Market '{self.name}' [{self.spot}]:
Base: {self.base}
Quote: {self.quote}
Oracle: {self.oracle} ({self.decimals} decimals)
»"""
def __repr__(self) -> str:
return f"{self}"
class Group(AddressableAccount):
def __init__(self, account_info: AccountInfo, version: Version, context: Context,
account_flags: MangoAccountFlags, basket_tokens: typing.List[BasketToken],
markets: typing.List[MarketMetadata],
signer_nonce: Decimal, signer_key: PublicKey, dex_program_id: PublicKey,
total_deposits: typing.List[Decimal], total_borrows: typing.List[Decimal],
maint_coll_ratio: Decimal, init_coll_ratio: Decimal, srm_vault: PublicKey,
admin: PublicKey, borrow_limits: typing.List[Decimal]):
super().__init__(account_info)
self.version: Version = version
self.context: Context = context
self.account_flags: MangoAccountFlags = account_flags
self.basket_tokens: typing.List[BasketToken] = basket_tokens
self.markets: typing.List[MarketMetadata] = markets
self.signer_nonce: Decimal = signer_nonce
self.signer_key: PublicKey = signer_key
self.dex_program_id: PublicKey = dex_program_id
self.total_deposits: typing.List[Decimal] = total_deposits
self.total_borrows: typing.List[Decimal] = total_borrows
self.maint_coll_ratio: Decimal = maint_coll_ratio
self.init_coll_ratio: Decimal = init_coll_ratio
self.srm_vault: PublicKey = srm_vault
self.admin: PublicKey = admin
self.borrow_limits: typing.List[Decimal] = borrow_limits
        self.mint_decimals: typing.List[Decimal] = [basket_token.token.decimals for basket_token in basket_tokens]
@property
def shared_quote_token(self) -> BasketToken:
return self.basket_tokens[-1]
@staticmethod
def from_layout(layout: layouts.GROUP, context: Context, account_info: AccountInfo) -> "Group":
account_flags = MangoAccountFlags.from_layout(layout.account_flags)
indexes = list(map(lambda pair: Index.from_layout(pair[0], pair[1]), zip(layout.indexes, layout.mint_decimals)))
basket_tokens: typing.List[BasketToken] = []
for index in range(NUM_TOKENS):
token_address = layout.tokens[index]
token_name = context.lookup_token_name(token_address)
if token_name is None:
raise Exception(f"Could not find token with mint '{token_address}' in Group.")
token = Token(token_name, token_address, layout.mint_decimals[index])
basket_token = BasketToken(token, layout.vaults[index], indexes[index])
basket_tokens += [basket_token]
markets: typing.List[MarketMetadata] = []
for index in range(NUM_MARKETS):
market_address = layout.spot_markets[index]
market_name = context.lookup_market_name(market_address)
base_name, quote_name = market_name.split("/")
base_token = BasketToken.find_by_name(basket_tokens, base_name)
quote_token = BasketToken.find_by_name(basket_tokens, quote_name)
market = MarketMetadata(market_name, market_address, base_token, quote_token,
layout.spot_markets[index],
layout.oracles[index],
layout.oracle_decimals[index])
markets += [market]
maint_coll_ratio = layout.maint_coll_ratio.quantize(Decimal('.01'))
init_coll_ratio = layout.init_coll_ratio.quantize(Decimal('.01'))
return Group(account_info, Version.UNSPECIFIED, context, account_flags, basket_tokens, markets,
layout.signer_nonce, layout.signer_key, layout.dex_program_id, layout.total_deposits,
layout.total_borrows, maint_coll_ratio, init_coll_ratio, layout.srm_vault,
layout.admin, layout.borrow_limits)
@staticmethod
def parse(context: Context, account_info: AccountInfo) -> "Group":
data = account_info.data
if len(data) != layouts.GROUP.sizeof():
raise Exception(f"Data length ({len(data)}) does not match expected size ({layouts.GROUP.sizeof()})")
layout = layouts.GROUP.parse(data)
return Group.from_layout(layout, context, account_info)
@staticmethod
def load(context: Context):
account_info = AccountInfo.load(context, context.group_id)
if account_info is None:
raise Exception(f"Group account not found at address '{context.group_id}'")
return Group.parse(context, account_info)
    # TODO: Test this method (depends on get_ui_total_borrow and get_ui_total_deposit).
    def get_deposit_rate(self, token_index: int) -> Decimal:
        borrow_rate = self.get_borrow_rate(token_index)
        total_borrows = self.get_ui_total_borrow(token_index)
        total_deposits = self.get_ui_total_deposit(token_index)
        if total_deposits == 0 and total_borrows == 0:
            return Decimal(0)
        elif total_deposits == 0:
            return MAX_RATE
        utilization = total_borrows / total_deposits
        return utilization * borrow_rate
    # TODO: Test this method (depends on get_ui_total_borrow and get_ui_total_deposit).
    def get_borrow_rate(self, token_index: int) -> Decimal:
        total_borrows = self.get_ui_total_borrow(token_index)
        total_deposits = self.get_ui_total_deposit(token_index)
        if total_deposits == 0 and total_borrows == 0:
            return Decimal(0)
        if total_deposits <= total_borrows:
            return MAX_RATE
        utilization = total_borrows / total_deposits
        if utilization > OPTIMAL_UTIL:
            extra_util = utilization - OPTIMAL_UTIL
            slope = (MAX_RATE - OPTIMAL_RATE) / (1 - OPTIMAL_UTIL)
            return OPTIMAL_RATE + slope * extra_util
        else:
            slope = OPTIMAL_RATE / OPTIMAL_UTIL
            return slope * utilization
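The borrow-rate logic above is a standard kinked interest-rate model: rates rise linearly with utilization up to `OPTIMAL_UTIL`, then climb on a steeper slope toward `MAX_RATE` at full utilization. A self-contained sketch with illustrative constants (the real `OPTIMAL_UTIL`, `OPTIMAL_RATE`, and `MAX_RATE` are defined elsewhere in this module and will differ):

```python
from decimal import Decimal

# Illustrative constants only -- not the module's real values.
OPTIMAL_UTIL = Decimal("0.5")
OPTIMAL_RATE = Decimal("0.1")
MAX_RATE = Decimal("0.9")

def borrow_rate(total_deposits: Decimal, total_borrows: Decimal) -> Decimal:
    if total_deposits == 0 and total_borrows == 0:
        return Decimal(0)
    if total_deposits <= total_borrows:
        return MAX_RATE
    utilization = total_borrows / total_deposits
    if utilization > OPTIMAL_UTIL:
        # Steeper slope between the kink and 100% utilization.
        slope = (MAX_RATE - OPTIMAL_RATE) / (1 - OPTIMAL_UTIL)
        return OPTIMAL_RATE + slope * (utilization - OPTIMAL_UTIL)
    # Below the kink: linear from 0 up to OPTIMAL_RATE at OPTIMAL_UTIL.
    return (OPTIMAL_RATE / OPTIMAL_UTIL) * utilization
```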
def get_token_index(self, token: Token) -> int:
for index, existing in enumerate(self.basket_tokens):
if existing.token == token:
return index
return -1
def get_prices(self) -> typing.List[TokenValue]:
started_at = time.time()
# Note: we can just load the oracle data in a simpler way, with:
# oracles = map(lambda market: Aggregator.load(self.context, market.oracle), self.markets)
# but that makes a network request for every oracle. We can reduce that to just one request
# if we use AccountInfo.load_multiple() and parse the data ourselves.
#
# This seems to halve the time this function takes.
oracle_addresses = list([market.oracle for market in self.markets])
oracle_account_infos = AccountInfo.load_multiple(self.context, oracle_addresses)
oracles = map(lambda oracle_account_info: Aggregator.parse(self.context, oracle_account_info),
oracle_account_infos)
prices = list(map(lambda oracle: oracle.price, oracles)) + [Decimal(1)]
token_prices = []
for index, price in enumerate(prices):
token_prices += [TokenValue(self.basket_tokens[index].token, price)]
time_taken = time.time() - started_at
self.logger.info(f"Faster fetching prices complete. Time taken: {time_taken:.2f} seconds.")
return token_prices
def fetch_balances(self, root_address: PublicKey) -> typing.List[TokenValue]:
balances: typing.List[TokenValue] = []
sol_balance = self.context.fetch_sol_balance(root_address)
balances += [TokenValue(SolToken, sol_balance)]
for basket_token in self.basket_tokens:
balance = TokenValue.fetch_total_value(self.context, root_address, basket_token.token)
balances += [balance]
return balances
    def native_to_ui(self, amount, decimals) -> Decimal:
        return amount / (10 ** decimals)
    def ui_to_native(self, amount, decimals) -> Decimal:
        return amount * (10 ** decimals)
    def get_ui_total_deposit(self, token_index: int) -> Decimal:
        return self.native_to_ui(self.total_deposits[token_index] * self.basket_tokens[token_index].index.deposit, self.basket_tokens[token_index].token.decimals)
    def get_ui_total_borrow(self, token_index: int) -> Decimal:
        return self.native_to_ui(self.total_borrows[token_index] * self.basket_tokens[token_index].index.borrow, self.basket_tokens[token_index].token.decimals)
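`native_to_ui` and `ui_to_native` convert between on-chain integer amounts and human-readable display values. A standalone sketch of the conversion (free functions here, mirroring the methods above):

```python
from decimal import Decimal

def native_to_ui(amount: Decimal, decimals: int) -> Decimal:
    # On-chain amounts are integers; dividing by 10 ** decimals yields the
    # familiar display value (e.g. 1_500_000 with 6 decimals is 1.5).
    return amount / (10 ** decimals)

def ui_to_native(amount: Decimal, decimals: int) -> Decimal:
    # Inverse direction: display value back to the on-chain integer scale.
    return amount * (10 ** decimals)
```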
def __str__(self) -> str:
total_deposits = "\n ".join(map(str, self.total_deposits))
total_borrows = "\n ".join(map(str, self.total_borrows))
borrow_limits = "\n ".join(map(str, self.borrow_limits))
return f"""
« Group [{self.version}] {self.address}:
Flags: {self.account_flags}
Tokens:
{self.basket_tokens}
Markets:
{self.markets}
DEX Program ID: « {self.dex_program_id} »
SRM Vault: « {self.srm_vault} »
Admin: « {self.admin} »
Signer Nonce: {self.signer_nonce}
Signer Key: « {self.signer_key} »
Initial Collateral Ratio: {self.init_coll_ratio}
Maintenance Collateral Ratio: {self.maint_coll_ratio}
Total Deposits:
{total_deposits}
Total Borrows:
{total_borrows}
Borrow Limits:
{borrow_limits}
»
"""
class TokenAccount(AddressableAccount):
def __init__(self, account_info: AccountInfo, version: Version, mint: PublicKey, owner: PublicKey, amount: Decimal):
super().__init__(account_info)
self.version: Version = version
self.mint: PublicKey = mint
self.owner: PublicKey = owner
self.amount: Decimal = amount
@staticmethod
def create(context: Context, account: Account, token: Token):
        spl_token = SplToken(context.client, token.mint, TOKEN_PROGRAM_ID, account)
owner = account.public_key()
new_account_address = spl_token.create_account(owner)
return TokenAccount.load(context, new_account_address)
@staticmethod
def fetch_all_for_owner_and_token(context: Context, owner_public_key: PublicKey, token: Token) -> typing.List["TokenAccount"]:
opts = TokenAccountOpts(mint=token.mint)
        token_accounts_response = context.client.get_token_accounts_by_owner(owner_public_key, opts, commitment=context.commitment)
all_accounts: typing.List[TokenAccount] = []
for token_account_response in token_accounts_response["result"]["value"]:
account_info = AccountInfo._from_response_values(token_account_response["account"], PublicKey(token_account_response["pubkey"]))
token_account = TokenAccount.parse(account_info)
all_accounts += [token_account]
return all_accounts
@staticmethod
def fetch_largest_for_owner_and_token(context: Context, owner_public_key: PublicKey, token: Token) -> typing.Optional["TokenAccount"]:
all_accounts = TokenAccount.fetch_all_for_owner_and_token(context, owner_public_key, token)
largest_account: typing.Optional[TokenAccount] = None
for token_account in all_accounts:
if largest_account is None or token_account.amount > largest_account.amount:
largest_account = token_account
return largest_account
@staticmethod
def fetch_or_create_largest_for_owner_and_token(context: Context, account: Account, token: Token) -> "TokenAccount":
all_accounts = TokenAccount.fetch_all_for_owner_and_token(context, account.public_key(), token)
largest_account: typing.Optional[TokenAccount] = None
for token_account in all_accounts:
if largest_account is None or token_account.amount > largest_account.amount:
largest_account = token_account
if largest_account is None:
return TokenAccount.create(context, account, token)
return largest_account
@staticmethod
def from_layout(layout: layouts.TOKEN_ACCOUNT, account_info: AccountInfo) -> "TokenAccount":
return TokenAccount(account_info, Version.UNSPECIFIED, layout.mint, layout.owner, layout.amount)
@staticmethod
def parse(account_info: AccountInfo) -> "TokenAccount":
data = account_info.data
if len(data) != layouts.TOKEN_ACCOUNT.sizeof():
raise Exception(f"Data length ({len(data)}) does not match expected size ({layouts.TOKEN_ACCOUNT.sizeof()})")
layout = layouts.TOKEN_ACCOUNT.parse(data)
return TokenAccount.from_layout(layout, account_info)
@staticmethod
def load(context: Context, address: PublicKey) -> typing.Optional["TokenAccount"]:
account_info = AccountInfo.load(context, address)
if account_info is None or (len(account_info.data) != layouts.TOKEN_ACCOUNT.sizeof()):
return None
return TokenAccount.parse(account_info)
def __str__(self) -> str:
        return f"« TokenAccount: Mint: {self.mint}, Owner: {self.owner}, Amount: {self.amount} »"
class OpenOrders(AddressableAccount):
def __init__(self, account_info: AccountInfo, version: Version, program_id: PublicKey,
account_flags: SerumAccountFlags, market: PublicKey, owner: PublicKey,
base_token_free: Decimal, base_token_total: Decimal, quote_token_free: Decimal,
quote_token_total: Decimal, free_slot_bits: Decimal, is_bid_bits: Decimal,
orders: typing.List[Decimal], client_ids: typing.List[Decimal],
referrer_rebate_accrued: Decimal):
super().__init__(account_info)
self.version: Version = version
self.program_id: PublicKey = program_id
self.account_flags: SerumAccountFlags = account_flags
self.market: PublicKey = market
self.owner: PublicKey = owner
self.base_token_free: Decimal = base_token_free
self.base_token_total: Decimal = base_token_total
self.quote_token_free: Decimal = quote_token_free
self.quote_token_total: Decimal = quote_token_total
self.free_slot_bits: Decimal = free_slot_bits
self.is_bid_bits: Decimal = is_bid_bits
self.orders: typing.List[Decimal] = orders
self.client_ids: typing.List[Decimal] = client_ids
self.referrer_rebate_accrued: Decimal = referrer_rebate_accrued
# Sometimes pyserum wants to take its own OpenOrdersAccount as a parameter (e.g. in settle_funds())
def to_pyserum(self) -> OpenOrdersAccount:
return OpenOrdersAccount.from_bytes(self.address, self.account_info.data)
@staticmethod
def from_layout(layout: layouts.OPEN_ORDERS, account_info: AccountInfo,
base_decimals: Decimal, quote_decimals: Decimal) -> "OpenOrders":
account_flags = SerumAccountFlags.from_layout(layout.account_flags)
program_id = account_info.owner
base_divisor = 10 ** base_decimals
quote_divisor = 10 ** quote_decimals
base_token_free: Decimal = layout.base_token_free / base_divisor
base_token_total: Decimal = layout.base_token_total / base_divisor
quote_token_free: Decimal = layout.quote_token_free / quote_divisor
quote_token_total: Decimal = layout.quote_token_total / quote_divisor
        nonzero_orders: typing.List[Decimal] = [order for order in layout.orders if order != 0]
        nonzero_client_ids: typing.List[Decimal] = [client_id for client_id in layout.client_ids if client_id != 0]
return OpenOrders(account_info, Version.UNSPECIFIED, program_id, account_flags, layout.market,
layout.owner, base_token_free, base_token_total, quote_token_free, quote_token_total,
layout.free_slot_bits, layout.is_bid_bits, nonzero_orders, nonzero_client_ids,
layout.referrer_rebate_accrued)
@staticmethod
def parse(account_info: AccountInfo, base_decimals: Decimal, quote_decimals: Decimal) -> "OpenOrders":
data = account_info.data
if len(data) != layouts.OPEN_ORDERS.sizeof():
raise Exception(f"Data length ({len(data)}) does not match expected size ({layouts.OPEN_ORDERS.sizeof()})")
layout = layouts.OPEN_ORDERS.parse(data)
return OpenOrders.from_layout(layout, account_info, base_decimals, quote_decimals)
@staticmethod
    def load_raw_open_orders_account_infos(context: Context, group: Group) -> typing.Dict[str, AccountInfo]:
filters = [
MemcmpOpts(
offset=layouts.SERUM_ACCOUNT_FLAGS.sizeof() + 37,
bytes=encode_key(group.signer_key)
)
]
        response = context.client.get_program_accounts(group.dex_program_id, data_size=layouts.OPEN_ORDERS.sizeof(), memcmp_opts=filters, commitment=Single, encoding="base64")
account_infos = list(map(lambda pair: AccountInfo._from_response_values(pair[0], pair[1]), [(result["account"], PublicKey(result["pubkey"])) for result in response["result"]]))
account_infos_by_address = {key: value for key, value in [(str(account_info.address), account_info) for account_info in account_infos]}
return account_infos_by_address
@staticmethod
def load(context: Context, address: PublicKey, base_decimals: Decimal, quote_decimals: Decimal) -> "OpenOrders":
open_orders_account = AccountInfo.load(context, address)
if open_orders_account is None:
raise Exception(f"OpenOrders account not found at address '{address}'")
return OpenOrders.parse(open_orders_account, base_decimals, quote_decimals)
@staticmethod
    def load_for_market_and_owner(context: Context, market: PublicKey, owner: PublicKey, program_id: PublicKey, base_decimals: Decimal, quote_decimals: Decimal):
filters = [
MemcmpOpts(
offset=layouts.SERUM_ACCOUNT_FLAGS.sizeof() + 5,
bytes=encode_key(market)
),
MemcmpOpts(
offset=layouts.SERUM_ACCOUNT_FLAGS.sizeof() + 37,
bytes=encode_key(owner)
)
]
        response = context.client.get_program_accounts(context.dex_program_id, data_size=layouts.OPEN_ORDERS.sizeof(), memcmp_opts=filters, commitment=Single, encoding="base64")
accounts = list(map(lambda pair: AccountInfo._from_response_values(pair[0], pair[1]), [(result["account"], PublicKey(result["pubkey"])) for result in response["result"]]))
return list(map(lambda acc: OpenOrders.parse(acc, base_decimals, quote_decimals), accounts))
def __str__(self) -> str:
orders = ", ".join(map(str, self.orders)) or "None"
client_ids = ", ".join(map(str, self.client_ids)) or "None"
return f"""« OpenOrders:
Flags: {self.account_flags}
Program ID: {self.program_id}
Address: {self.address}
Market: {self.market}
Owner: {self.owner}
Base Token: {self.base_token_free:,.8f} of {self.base_token_total:,.8f}
Quote Token: {self.quote_token_free:,.8f} of {self.quote_token_total:,.8f}
Referrer Rebate Accrued: {self.referrer_rebate_accrued}
Orders:
{orders}
Client IDs:
{client_ids}
»"""
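The memcmp filters above locate the market and owner fields inside a Serum OpenOrders account by byte offset. Assuming the standard Serum layout (a 5-byte padding prefix followed by an 8-byte account-flags bitfield, which is what the `+ 5` and `+ 37` adjustments imply), the offset arithmetic works out as follows:

```python
PADDING = 5        # Serum accounts begin with a 5-byte padding prefix
ACCOUNT_FLAGS = 8  # u64 bitfield; assumed value of SERUM_ACCOUNT_FLAGS.sizeof()
PUBLIC_KEY = 32    # Solana public keys are 32 bytes

# market is the first field after padding + flags; owner follows the 32-byte
# market key -- matching sizeof() + 5 and sizeof() + 37 in the filters above.
market_offset = PADDING + ACCOUNT_FLAGS
owner_offset = PADDING + ACCOUNT_FLAGS + PUBLIC_KEY
```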
class BalanceSheet:
def __init__(self, token: Token, liabilities: Decimal, settled_assets: Decimal, unsettled_assets: Decimal):
self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
self.token: Token = token
self.liabilities: Decimal = liabilities
self.settled_assets: Decimal = settled_assets
self.unsettled_assets: Decimal = unsettled_assets
@property
def assets(self) -> Decimal:
return self.settled_assets + self.unsettled_assets
@property
def value(self) -> Decimal:
return self.assets - self.liabilities
@property
def collateral_ratio(self) -> Decimal:
if self.liabilities == Decimal(0):
return Decimal(0)
return self.assets / self.liabilities
def __str__(self) -> str:
name = "«Unspecified»"
if self.token is not None:
name = self.token.name
return f"""« BalanceSheet [{name}]:
Assets : {self.assets:>18,.8f}
Settled Assets : {self.settled_assets:>18,.8f}
Unsettled Assets : {self.unsettled_assets:>18,.8f}
Liabilities : {self.liabilities:>18,.8f}
Value : {self.value:>18,.8f}
Collateral Ratio : {self.collateral_ratio:>18,.2%}
»
"""
def __repr__(self) -> str:
return f"{self}"
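`BalanceSheet`'s derived properties combine settled and unsettled assets, and guard the collateral ratio against zero liabilities. A minimal standalone version of that calculation:

```python
from decimal import Decimal

def collateral_ratio(settled_assets: Decimal, unsettled_assets: Decimal, liabilities: Decimal) -> Decimal:
    # Mirrors BalanceSheet.collateral_ratio: assets / liabilities, reporting
    # zero liabilities as a ratio of 0 instead of dividing by zero.
    assets = settled_assets + unsettled_assets
    if liabilities == 0:
        return Decimal(0)
    return assets / liabilities
```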
class MarginAccount(AddressableAccount):
def __init__(self, account_info: AccountInfo, version: Version, account_flags: MangoAccountFlags,
mango_group: PublicKey, owner: PublicKey, deposits: typing.List[Decimal],
borrows: typing.List[Decimal], open_orders: typing.List[PublicKey]):
super().__init__(account_info)
self.version: Version = version
self.account_flags: MangoAccountFlags = account_flags
self.mango_group: PublicKey = mango_group
self.owner: PublicKey = owner
self.deposits: typing.List[Decimal] = deposits
self.borrows: typing.List[Decimal] = borrows
self.open_orders: typing.List[PublicKey] = open_orders
self.open_orders_accounts: typing.List[typing.Optional[OpenOrders]] = [None] * NUM_MARKETS
@staticmethod
def from_layout(layout: layouts.MARGIN_ACCOUNT, account_info: AccountInfo) -> "MarginAccount":
account_flags: MangoAccountFlags = MangoAccountFlags.from_layout(layout.account_flags)
deposits: typing.List[Decimal] = []
for index, deposit in enumerate(layout.deposits):
deposits += [deposit]
borrows: typing.List[Decimal] = []
for index, borrow in enumerate(layout.borrows):
borrows += [borrow]
return MarginAccount(account_info, Version.UNSPECIFIED, account_flags, layout.mango_group,
layout.owner, deposits, borrows, list(layout.open_orders))
@staticmethod
def parse(account_info: AccountInfo) -> "MarginAccount":
data = account_info.data
if len(data) != layouts.MARGIN_ACCOUNT.sizeof():
raise Exception(f"Data length ({len(data)}) does not match expected size ({layouts.MARGIN_ACCOUNT.sizeof()})")
layout = layouts.MARGIN_ACCOUNT.parse(data)
return MarginAccount.from_layout(layout, account_info)
@staticmethod
def load(context: Context, margin_account_address: PublicKey, group: typing.Optional[Group] = None) -> "MarginAccount":
account_info = AccountInfo.load(context, margin_account_address)
if account_info is None:
raise Exception(f"MarginAccount account not found at address '{margin_account_address}'")
margin_account = MarginAccount.parse(account_info)
if group is None:
group = Group.load(context)
margin_account.load_open_orders_accounts(context, group)
return margin_account
@staticmethod
def load_all_for_group(context: Context, program_id: PublicKey, group: Group) -> typing.List["MarginAccount"]:
filters = [
MemcmpOpts(
offset=layouts.MANGO_ACCOUNT_FLAGS.sizeof(), # mango_group is just after the MangoAccountFlags, which is the first entry
bytes=encode_key(group.address)
)
]
response = context.client.get_program_accounts(program_id, data_size=layouts.MARGIN_ACCOUNT.sizeof(), memcmp_opts=filters, commitment=Single, encoding="base64")
margin_accounts = []
for margin_account_data in response["result"]:
address = PublicKey(margin_account_data["pubkey"])
account = AccountInfo._from_response_values(margin_account_data["account"], address)
margin_account = MarginAccount.parse(account)
margin_accounts += [margin_account]
return margin_accounts
@staticmethod
def load_all_for_group_with_open_orders(context: Context, program_id: PublicKey, group: Group) -> typing.List["MarginAccount"]:
        margin_accounts = MarginAccount.load_all_for_group(context, program_id, group)
open_orders = OpenOrders.load_raw_open_orders_account_infos(context, group)
for margin_account in margin_accounts:
margin_account.install_open_orders_accounts(group, open_orders)
return margin_accounts
@staticmethod
def load_all_for_owner(context: Context, owner: PublicKey, group: typing.Optional[Group] = None) -> typing.List["MarginAccount"]:
if group is None:
group = Group.load(context)
mango_group_offset = layouts.MANGO_ACCOUNT_FLAGS.sizeof() # mango_group is just after the MangoAccountFlags, which is the first entry.
owner_offset = mango_group_offset + 32 # owner is just after mango_group in the layout, and it's a PublicKey which is 32 bytes.
filters = [
MemcmpOpts(
offset=mango_group_offset,
bytes=encode_key(group.address)
),
MemcmpOpts(
offset=owner_offset,
bytes=encode_key(owner)
)
]
response = context.client.get_program_accounts(context.program_id, data_size=layouts.MARGIN_ACCOUNT.sizeof(), memcmp_opts=filters, commitment=Single, encoding="base64")
margin_accounts = []
for margin_account_data in response["result"]:
address = PublicKey(margin_account_data["pubkey"])
account = AccountInfo._from_response_values(margin_account_data["account"], address)
margin_account = MarginAccount.parse(account)
margin_account.load_open_orders_accounts(context, group)
margin_accounts += [margin_account]
return margin_accounts
    @classmethod
    def load_all_ripe(cls, context: Context) -> typing.List["MarginAccount"]:
        logger: logging.Logger = logging.getLogger(cls.__name__)
        started_at = time.time()

        group = Group.load(context)
        margin_accounts = MarginAccount.load_all_for_group_with_open_orders(context, context.program_id, group)
        logger.info(f"Fetched {len(margin_accounts)} margin accounts to process.")

        prices = group.get_prices()
        nonzero: typing.List[MarginAccountMetadata] = []
        for margin_account in margin_accounts:
            balance_sheet = margin_account.get_balance_sheet_totals(group, prices)
            if balance_sheet.collateral_ratio > 0:
                balances = margin_account.get_intrinsic_balances(group)
                nonzero += [MarginAccountMetadata(margin_account, balance_sheet, balances)]
        logger.info(f"Of those {len(margin_accounts)}, {len(nonzero)} have a nonzero collateral ratio.")

        ripe_metadata = filter(lambda mam: mam.balance_sheet.collateral_ratio <= group.init_coll_ratio, nonzero)
        ripe_accounts = list(map(lambda mam: mam.margin_account, ripe_metadata))
        logger.info(f"Of those {len(nonzero)}, {len(ripe_accounts)} are ripe 🥭.")

        time_taken = time.time() - started_at
        logger.info(f"Loading ripe 🥭 accounts complete. Time taken: {time_taken:.2f} seconds.")
        return ripe_accounts

    def load_open_orders_accounts(self, context: Context, group: Group) -> None:
        for index, oo in enumerate(self.open_orders):
            key = oo
            if key != SYSTEM_PROGRAM_ADDRESS:
                self.open_orders_accounts[index] = OpenOrders.load(context, key, group.basket_tokens[index].token.decimals, group.shared_quote_token.token.decimals)

    def install_open_orders_accounts(self, group: Group, all_open_orders_by_address: typing.Dict[str, AccountInfo]) -> None:
        for index, oo in enumerate(self.open_orders):
            key = str(oo)
            if key in all_open_orders_by_address:
                open_orders_account_info = all_open_orders_by_address[key]
                open_orders = OpenOrders.parse(open_orders_account_info,
                                               group.basket_tokens[index].token.decimals,
                                               group.shared_quote_token.token.decimals)
                self.open_orders_accounts[index] = open_orders

    def get_intrinsic_balance_sheets(self, group: Group) -> typing.List[BalanceSheet]:
        settled_assets: typing.List[Decimal] = [Decimal(0)] * NUM_TOKENS
        liabilities: typing.List[Decimal] = [Decimal(0)] * NUM_TOKENS
        for index in range(NUM_TOKENS):
            settled_assets[index] = group.basket_tokens[index].index.deposit * self.deposits[index]
            liabilities[index] = group.basket_tokens[index].index.borrow * self.borrows[index]

        unsettled_assets: typing.List[Decimal] = [Decimal(0)] * NUM_TOKENS
        for index in range(NUM_MARKETS):
            open_orders_account = self.open_orders_accounts[index]
            if open_orders_account is not None:
                unsettled_assets[index] += open_orders_account.base_token_total
                unsettled_assets[NUM_TOKENS - 1] += open_orders_account.quote_token_total

        balance_sheets: typing.List[BalanceSheet] = []
        for index in range(NUM_TOKENS):
            balance_sheets += [BalanceSheet(group.basket_tokens[index].token, liabilities[index],
                                            settled_assets[index], unsettled_assets[index])]
        return balance_sheets

    def get_priced_balance_sheets(self, group: Group, prices: typing.List[TokenValue]) -> typing.List[BalanceSheet]:
        priced: typing.List[BalanceSheet] = []
        balance_sheets = self.get_intrinsic_balance_sheets(group)
        for balance_sheet in balance_sheets:
            price = TokenValue.find_by_token(prices, balance_sheet.token)
            liabilities = balance_sheet.liabilities * price.value
            settled_assets = balance_sheet.settled_assets * price.value
            unsettled_assets = balance_sheet.unsettled_assets * price.value
            priced += [BalanceSheet(
                price.token,
                price.token.round(liabilities),
                price.token.round(settled_assets),
                price.token.round(unsettled_assets)
            )]
        return priced

    def get_balance_sheet_totals(self, group: Group, prices: typing.List[TokenValue]) -> BalanceSheet:
        liabilities = Decimal(0)
        settled_assets = Decimal(0)
        unsettled_assets = Decimal(0)

        balance_sheets = self.get_priced_balance_sheets(group, prices)
        for balance_sheet in balance_sheets:
            if balance_sheet is not None:
                liabilities += balance_sheet.liabilities
                settled_assets += balance_sheet.settled_assets
                unsettled_assets += balance_sheet.unsettled_assets

        # A BalanceSheet must have a token - it's a pain to make it a typing.Optional[Token].
        # So in this one case, we produce a 'fake' token whose symbol is a summary of all token
        # symbols that went into it.
        #
        # If this becomes more painful than typing.Optional[Token], we can go with making
        # Token optional.
        summary_name = "-".join([bal.token.name for bal in balance_sheets])
        summary_token = Token(summary_name, SYSTEM_PROGRAM_ADDRESS, Decimal(0))
        return BalanceSheet(summary_token, liabilities, settled_assets, unsettled_assets)

    def get_intrinsic_balances(self, group: Group) -> typing.List[TokenValue]:
        balance_sheets = self.get_intrinsic_balance_sheets(group)
        balances: typing.List[TokenValue] = []
        for index, balance_sheet in enumerate(balance_sheets):
            if balance_sheet.token is None:
                raise Exception(f"Intrinsic balance sheet with index [{index}] has no token.")
            balances += [TokenValue(balance_sheet.token, balance_sheet.value)]
        return balances

    def __str__(self) -> str:
        deposits = ", ".join([f"{item:,.8f}" for item in self.deposits])
        borrows = ", ".join([f"{item:,.8f}" for item in self.borrows])
        if all(oo is None for oo in self.open_orders_accounts):
            open_orders = f"{self.open_orders}"
        else:
            open_orders_unindented = f"{self.open_orders_accounts}"
            open_orders = open_orders_unindented.replace("\n", "\n    ")
        return f"""« MarginAccount: {self.address}
    Flags: {self.account_flags}
    Owner: {self.owner}
    Mango Group: {self.mango_group}
    Deposits: [{deposits}]
    Borrows: [{borrows}]
    Mango Open Orders: {open_orders}
»"""

class MarginAccountMetadata:
    def __init__(self, margin_account: MarginAccount, balance_sheet: BalanceSheet, balances: typing.List[TokenValue]):
        self.logger: logging.Logger = logging.getLogger(self.__class__.__name__)
        self.margin_account = margin_account
        self.balance_sheet = balance_sheet
        self.balances = balances

    @property
    def assets(self):
        return self.balance_sheet.assets

    @property
    def liabilities(self):
        return self.balance_sheet.liabilities

    @property
    def collateral_ratio(self):
        return self.balance_sheet.collateral_ratio

class LiquidationEvent:
    def __init__(self, timestamp: datetime.datetime, signature: str, wallet_address: PublicKey, margin_account_address: PublicKey, balances_before: typing.List[TokenValue], balances_after: typing.List[TokenValue]):
        self.timestamp = timestamp
        self.signature = signature
        self.wallet_address = wallet_address
        self.margin_account_address = margin_account_address
        self.balances_before = balances_before
        self.balances_after = balances_after

    def __str__(self) -> str:
        changes = TokenValue.changes(self.balances_before, self.balances_after)
        changes_text = "\n        ".join([f"{change.value:>15,.8f} {change.token.name}" for change in changes])
        return f"""« 🥭 Liquidation Event 💧 at {self.timestamp}
    📇 Signature: {self.signature}
    👛 Wallet: {self.wallet_address}
    💳 Margin Account: {self.margin_account_address}
    💸 Changes:
        {changes_text}
»"""

    def __repr__(self) -> str:
        return f"{self}"


def _notebook_tests():
    log_level = logging.getLogger().level
    try:
        logging.getLogger().setLevel(logging.CRITICAL)

        from Constants import SYSTEM_PROGRAM_ADDRESS
        from Context import default_context

        balances_before = [
            TokenValue(TokenLookup.find_by_name(default_context, "ETH"), Decimal(1)),
            TokenValue(TokenLookup.find_by_name(default_context, "BTC"), Decimal("0.1")),
            TokenValue(TokenLookup.find_by_name(default_context, "USDT"), Decimal(1000))
        ]
        balances_after = [
            TokenValue(TokenLookup.find_by_name(default_context, "ETH"), Decimal(1)),
            TokenValue(TokenLookup.find_by_name(default_context, "BTC"), Decimal("0.05")),
            TokenValue(TokenLookup.find_by_name(default_context, "USDT"), Decimal(2000))
        ]
        timestamp = datetime.datetime(2021, 5, 17, 12, 20, 56)
        event = LiquidationEvent(timestamp, "signature", SYSTEM_PROGRAM_ADDRESS, SYSTEM_PROGRAM_ADDRESS,
                                 balances_before, balances_after)
        assert(str(event) == """« 🥭 Liquidation Event 💧 at 2021-05-17 12:20:56
    📇 Signature: signature
    👛 Wallet: 11111111111111111111111111111111
    💳 Margin Account: 11111111111111111111111111111111
    💸 Changes:
             0.00000000 ETH
            -0.05000000 BTC
         1,000.00000000 USDT
»""")
    finally:
        logging.getLogger().setLevel(log_level)


_notebook_tests()
del _notebook_tests

if __name__ == "__main__":
    logging.getLogger().setLevel(logging.INFO)

    import base64

    from Constants import SYSTEM_PROGRAM_ADDRESS
    from Context import default_context

    # Just use any public key here
    fake_public_key = SYSTEM_PROGRAM_ADDRESS
    encoded = "AwAAAAAAAACCaOmpoURMK6XHelGTaFawcuQ/78/15LAemWI8jrt3SRKLy2R9i60eclDjuDS8+p/ZhvTUd9G7uQVOYCsR6+BhmqGCiO6EPYP2PQkf/VRTvw7JjXvIjPFJy06QR1Cq1WfTonHl0OjCkyEf60SD07+MFJu5pVWNFGGEO/8AiAYfduaKdnFTaZEHPcK5Eq72WWHeHg2yIbBF09kyeOhlCJwOoG8O5SgpPV8QOA64ZNV4aKroFfADg6kEy/wWCdp3fv0O4GJgAAAAAPH6Ud6jtjwAAQAAAAAAAADiDkkCi9UOAAEAAAAAAAAADuBiYAAAAACNS5bSy7soAAEAAAAAAAAACMvgO+2jCwABAAAAAAAAAA7gYmAAAAAAZFeDUBNVhwABAAAAAAAAABtRNytozC8AAQAAAAAAAABIBGiCcyaEZdNhrTyeqUY692vOzzPdHaxAxguht3JQGlkzjtd05dX9LENHkl2z1XvUbTNKZlweypNRetmH0lmQ9VYQAHqylxZVK65gEg85g27YuSyvOBZAjJyRmYU9KdCO1D+4ehdPu9dQB1yI1uh75wShdAaFn2o4qrMYwq3SQQEAAAAAAAAAAiH1PPJKAuh6oGiE35aGhUQhFi/bxgKOudpFv8HEHNCFDy1uAqR6+CTQmradxC1wyyjL+iSft+5XudJWwSdi7wvphsxb96x7Obj/AgAAAAAKlV4LL5ow6r9LMhIAAAAADvsOtqcVFmChDPzPnwAAAE33lx1h8hPFD04AAAAAAAA8YRV3Oa309B2wGwAAAAAA+yPBZRlZz7b605n+AQAAAACgmZmZmZkZAQAAAAAAAAAAMDMzMzMzMwEAAAAAAAAA25D1XcAtRzSuuyx3U+X7aE9vM1EJySU9KprgL0LMJ/vat9+SEEUZuga7O5tTUrcMDYWDg+LYaAWhSQiN2fYk7aCGAQAAAAAAgIQeAAAAAAAA8gUqAQAAAAYGBgICAAAA"
    decoded = base64.b64decode(encoded)
    group_account_info = AccountInfo(fake_public_key, False, Decimal(0), fake_public_key, Decimal(0), decoded)
    group = Group.parse(default_context, group_account_info)
    print("\n\nThis is hard-coded, not live information!")
    print(group)

    print(TokenLookup.find_by_name(default_context, "ETH"))
    print(TokenLookup.find_by_name(default_context, "BTC"))
    # USDT
    print(TokenLookup.find_by_mint(default_context, PublicKey("Es9vMFrzaCERmJfrF4H2FYD4KCoNkY11McCe8BenwNYB")))

    single_account_info = AccountInfo.load(default_context, default_context.dex_program_id)
    print("DEX account info", single_account_info)

    multiple_account_info = AccountInfo.load_multiple(default_context, [default_context.program_id, default_context.dex_program_id])
    print("Mango program and DEX account info", multiple_account_info)

    balances_before = [
        TokenValue(TokenLookup.find_by_name(default_context, "ETH"), Decimal(1)),
        TokenValue(TokenLookup.find_by_name(default_context, "BTC"), Decimal("0.1")),
        TokenValue(TokenLookup.find_by_name(default_context, "USDT"), Decimal(1000))
    ]
    balances_after = [
        TokenValue(TokenLookup.find_by_name(default_context, "ETH"), Decimal(1)),
        TokenValue(TokenLookup.find_by_name(default_context, "BTC"), Decimal("0.05")),
        TokenValue(TokenLookup.find_by_name(default_context, "USDT"), Decimal(2000))
    ]
    timestamp = datetime.datetime(2021, 5, 17, 12, 20, 56)
    event = LiquidationEvent(timestamp, "signature", SYSTEM_PROGRAM_ADDRESS, SYSTEM_PROGRAM_ADDRESS,
                             balances_before, balances_after)
    print(event) | 45.498895 | 1,008 | 0.678064 | 7,149 | 61,742 | 5.620506 | 0.068961 | 0.018666 | 0.008735 | 0.009407 | 0.457878 | 0.378935 | 0.318758 | 0.283418 | 0.243623 | 0.223439 | 0 | 0.00993 | 0.223624 | 61,742 | 1,357 | 1,009 | 45.498895 | 0.827016 | 0.020942 | 0 | 0.323689 | 0 | 0.005425 | 0.134013 | 0.033164 | 0 | 0 | 0 | 0.000737 | 0.000904 | 1 | 0.120253 | false | 0 | 0.022604 | 0.047016 | 0.299277 | 0.007233 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72ea8bc7ed52d0a04a2d089038bfcf2f1a671d9a | 373 | py | Python | agendamentos/migrations/0011_alter_agendamentosfuncionarios_table.py | afnmachado/univesp_pi_1 | e6f2b545faaf53d14d17f751d2fb32e6618885b7 | [
"MIT"
] | null | null | null | agendamentos/migrations/0011_alter_agendamentosfuncionarios_table.py | afnmachado/univesp_pi_1 | e6f2b545faaf53d14d17f751d2fb32e6618885b7 | [
"MIT"
] | null | null | null | agendamentos/migrations/0011_alter_agendamentosfuncionarios_table.py | afnmachado/univesp_pi_1 | e6f2b545faaf53d14d17f751d2fb32e6618885b7 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.8 on 2021-11-29 05:47
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('agendamentos', '0010_agendamentosfuncionarios'),
    ]

    operations = [
        migrations.AlterModelTable(
            name='agendamentosfuncionarios',
            table='agendamento_funcionario',
        ),
    ]
| 20.722222 | 58 | 0.640751 | 33 | 373 | 7.181818 | 0.848485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068841 | 0.260054 | 373 | 17 | 59 | 21.941176 | 0.789855 | 0.120643 | 0 | 0 | 1 | 0 | 0.269939 | 0.233129 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72ee1cbe6083bf017bca4e5b6925555840bc1de4 | 1,288 | py | Python | openstack/tests/unit/metric/v1/test_capabilities.py | teresa-ho/stx-openstacksdk | 7d723da3ffe9861e6e9abcaeadc1991689f782c5 | [
"Apache-2.0"
] | 43 | 2018-12-19T08:39:15.000Z | 2021-07-21T02:45:43.000Z | openstack/tests/unit/metric/v1/test_capabilities.py | teresa-ho/stx-openstacksdk | 7d723da3ffe9861e6e9abcaeadc1991689f782c5 | [
"Apache-2.0"
] | 11 | 2019-03-17T13:28:56.000Z | 2020-09-23T23:57:50.000Z | openstack/tests/unit/metric/v1/test_capabilities.py | teresa-ho/stx-openstacksdk | 7d723da3ffe9861e6e9abcaeadc1991689f782c5 | [
"Apache-2.0"
] | 47 | 2018-12-19T05:14:25.000Z | 2022-03-19T15:28:30.000Z | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools
from openstack.metric.v1 import capabilities
BODY = {
    'aggregation_methods': ['mean', 'max', 'avg'],
}
class TestCapabilites(testtools.TestCase):
    def test_basic(self):
        sot = capabilities.Capabilities()
        self.assertEqual('/capabilities', sot.base_path)
        self.assertEqual('metric', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertFalse(sot.allow_list)

    def test_make_it(self):
        sot = capabilities.Capabilities(**BODY)
        self.assertEqual(BODY['aggregation_methods'],
                         sot.aggregation_methods)
| 34.810811 | 75 | 0.714286 | 166 | 1,288 | 5.463855 | 0.572289 | 0.066152 | 0.079383 | 0.101433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004812 | 0.193323 | 1,288 | 36 | 76 | 35.777778 | 0.868142 | 0.401398 | 0 | 0 | 0 | 0 | 0.088274 | 0 | 0 | 0 | 0 | 0 | 0.421053 | 1 | 0.105263 | false | 0 | 0.105263 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72ef3701a3a8ef52c1a792f4ce8c00616bb47526 | 351 | py | Python | scripts/get-table-schemas.py | numankh/GRE-Vocab-Helper | c2858f3200f6d6673b1f316879e5ac482a6b7a83 | [
"MIT"
] | null | null | null | scripts/get-table-schemas.py | numankh/GRE-Vocab-Helper | c2858f3200f6d6673b1f316879e5ac482a6b7a83 | [
"MIT"
] | null | null | null | scripts/get-table-schemas.py | numankh/GRE-Vocab-Helper | c2858f3200f6d6673b1f316879e5ac482a6b7a83 | [
"MIT"
] | null | null | null | import psycopg2
from decouple import config
import pandas as pd
import dbconnect
cursor, connection = dbconnect.connect_to_db()
sql = """
SELECT "table_name","column_name", "data_type", "table_schema"
FROM INFORMATION_SCHEMA.COLUMNS
WHERE "table_schema" = 'public'
ORDER BY table_name
"""
df = pd.read_sql(sql, con=connection)
print(df.to_string()) | 25.071429 | 62 | 0.77208 | 51 | 351 | 5.098039 | 0.647059 | 0.069231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003215 | 0.11396 | 351 | 14 | 63 | 25.071429 | 0.832797 | 0 | 0 | 0 | 0 | 0 | 0.426136 | 0.150568 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.307692 | 0 | 0.307692 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
72ef4fcc94a467e2eb56273b32fbc169f181effc | 7,880 | py | Python | tests/test_table/test_pivot.py | andriyor/agate | 9b12d4bcc75bf3788e0774e23188f4409c3e7519 | [
"MIT"
] | 663 | 2016-02-16T13:43:00.000Z | 2022-03-13T17:21:19.000Z | tests/test_table/test_pivot.py | andriyor/agate | 9b12d4bcc75bf3788e0774e23188f4409c3e7519 | [
"MIT"
] | 347 | 2015-08-28T13:46:37.000Z | 2016-02-16T01:53:06.000Z | tests/test_table/test_pivot.py | andriyor/agate | 9b12d4bcc75bf3788e0774e23188f4409c3e7519 | [
"MIT"
] | 122 | 2016-02-23T02:43:24.000Z | 2022-03-04T17:21:14.000Z | #!/usr/bin/env python
# -*- coding: utf8 -*-
import sys
try:
    from cdecimal import Decimal
except ImportError:  # pragma: no cover
    from decimal import Decimal
from agate import Table
from agate.aggregations import Sum
from agate.computations import Percent
from agate.data_types import Number, Text
from agate.testcase import AgateTestCase
class TestPivot(AgateTestCase):
    def setUp(self):
        self.rows = (
            ('joe', 'white', 'male', 20, 'blue'),
            ('jane', 'white', 'female', 20, 'blue'),
            ('josh', 'black', 'male', 20, 'blue'),
            ('jim', 'latino', 'male', 25, 'blue'),
            ('julia', 'white', 'female', 25, 'green'),
            ('joan', 'asian', 'female', 25, 'green')
        )

        self.number_type = Number()
        self.text_type = Text()

        self.column_names = ['name', 'race', 'gender', 'age', 'color']
        self.column_types = [self.text_type, self.text_type, self.text_type, self.number_type, self.text_type]

    def test_pivot(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot('race', 'gender')

        pivot_rows = (
            ('white', 1, 2),
            ('black', 1, 0),
            ('latino', 1, 0),
            ('asian', 0, 1)
        )

        self.assertColumnNames(pivot_table, ['race', 'male', 'female'])
        self.assertRowNames(pivot_table, ['white', 'black', 'latino', 'asian'])
        self.assertColumnTypes(pivot_table, [Text, Number, Number])
        self.assertRows(pivot_table, pivot_rows)

    def test_pivot_by_lambda(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot(lambda r: r['gender'])

        pivot_rows = (
            ('male', 3),
            ('female', 3)
        )

        self.assertColumnNames(pivot_table, ['group', 'Count'])
        self.assertRowNames(pivot_table, ['male', 'female'])
        self.assertColumnTypes(pivot_table, [Text, Number])
        self.assertRows(pivot_table, pivot_rows)

    def test_pivot_by_lambda_group_name(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot(lambda r: r['gender'], key_name='gender')

        pivot_rows = (
            ('male', 3),
            ('female', 3)
        )

        self.assertColumnNames(pivot_table, ['gender', 'Count'])
        self.assertRowNames(pivot_table, ['male', 'female'])
        self.assertColumnTypes(pivot_table, [Text, Number])
        self.assertRows(pivot_table, pivot_rows)

    def test_pivot_by_lambda_group_name_sequence_invalid(self):
        table = Table(self.rows, self.column_names, self.column_types)

        with self.assertRaises(ValueError):
            table.pivot(['race', 'gender'], key_name='foo')

    def test_pivot_no_key(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot(pivot='gender')

        pivot_rows = (
            (3, 3),
        )

        self.assertColumnNames(pivot_table, ['male', 'female'])
        self.assertColumnTypes(pivot_table, [Number, Number])
        self.assertRows(pivot_table, pivot_rows)

    def test_pivot_no_pivot(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot('race')

        pivot_rows = (
            ('white', 3),
            ('black', 1),
            ('latino', 1),
            ('asian', 1)
        )

        self.assertColumnNames(pivot_table, ['race', 'Count'])
        self.assertColumnTypes(pivot_table, [Text, Number])
        self.assertRows(pivot_table, pivot_rows)

    def test_pivot_sum(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot('race', 'gender', Sum('age'))

        pivot_rows = (
            ('white', 20, 45),
            ('black', 20, 0),
            ('latino', 25, 0),
            ('asian', 0, 25)
        )

        self.assertColumnNames(pivot_table, ['race', 'male', 'female'])
        self.assertColumnTypes(pivot_table, [Text, Number, Number])
        self.assertRows(pivot_table, pivot_rows)

    def test_pivot_multiple_keys(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot(['race', 'gender'], 'age')

        pivot_rows = (
            ('white', 'male', 1, 0),
            ('white', 'female', 1, 1),
            ('black', 'male', 1, 0),
            ('latino', 'male', 0, 1),
            ('asian', 'female', 0, 1),
        )

        self.assertRows(pivot_table, pivot_rows)
        self.assertColumnNames(pivot_table, ['race', 'gender', '20', '25'])
        self.assertRowNames(pivot_table, [
            ('white', 'male'),
            ('white', 'female'),
            ('black', 'male'),
            ('latino', 'male'),
            ('asian', 'female'),
        ])
        self.assertColumnTypes(pivot_table, [Text, Text, Number, Number])

    def test_pivot_multiple_keys_no_pivot(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot(['race', 'gender'])

        pivot_rows = (
            ('white', 'male', 1),
            ('white', 'female', 2),
            ('black', 'male', 1),
            ('latino', 'male', 1),
            ('asian', 'female', 1),
        )

        self.assertRows(pivot_table, pivot_rows)
        self.assertColumnNames(pivot_table, ['race', 'gender', 'Count'])
        self.assertColumnTypes(pivot_table, [Text, Text, Number])

    def test_pivot_default_value(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot('race', 'gender', default_value=None)

        pivot_rows = (
            ('white', 1, 2),
            ('black', 1, None),
            ('latino', 1, None),
            ('asian', None, 1)
        )

        self.assertColumnNames(pivot_table, ['race', 'male', 'female'])
        self.assertColumnTypes(pivot_table, [Text, Number, Number])
        self.assertRows(pivot_table, pivot_rows)

    def test_pivot_compute(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot('gender', computation=Percent('Count'))

        pivot_table.print_table(output=sys.stdout)

        pivot_rows = (
            ('male', Decimal(50)),
            ('female', Decimal(50)),
        )

        self.assertColumnNames(pivot_table, ['gender', 'Percent'])
        self.assertColumnTypes(pivot_table, [Text, Number])
        self.assertRows(pivot_table, pivot_rows)

    def test_pivot_compute_pivots(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot('gender', 'color', computation=Percent('Count'))

        pivot_table.print_table(output=sys.stdout)

        pivot_rows = (
            ('male', Decimal(50), 0),
            ('female', Decimal(1) / Decimal(6) * Decimal(100), Decimal(1) / Decimal(3) * Decimal(100)),
        )

        self.assertColumnNames(pivot_table, ['gender', 'blue', 'green'])
        self.assertColumnTypes(pivot_table, [Text, Number, Number])
        self.assertRows(pivot_table, pivot_rows)

    def test_pivot_compute_kwargs(self):
        table = Table(self.rows, self.column_names, self.column_types)

        pivot_table = table.pivot('gender', 'color', computation=Percent('Count', total=8))

        pivot_table.print_table(output=sys.stdout)

        pivot_rows = (
            ('male', Decimal(3) / Decimal(8) * Decimal(100), 0),
            ('female', Decimal(1) / Decimal(8) * Decimal(100), Decimal(2) / Decimal(8) * Decimal(100)),
        )

        self.assertColumnNames(pivot_table, ['gender', 'blue', 'green'])
        self.assertColumnTypes(pivot_table, [Text, Number, Number])
        self.assertRows(pivot_table, pivot_rows)
| 33.248945 | 110 | 0.583629 | 881 | 7,880 | 5.034052 | 0.112372 | 0.124014 | 0.047351 | 0.052762 | 0.747689 | 0.695603 | 0.683878 | 0.647802 | 0.636302 | 0.624803 | 0 | 0.017583 | 0.263832 | 7,880 | 236 | 111 | 33.389831 | 0.74694 | 0.00736 | 0 | 0.356322 | 0 | 0 | 0.096048 | 0 | 0 | 0 | 0 | 0 | 0.235632 | 1 | 0.08046 | false | 0 | 0.051724 | 0 | 0.137931 | 0.017241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72f29f7ed6f48568758a4eb5e3565edf5506bbba | 1,332 | py | Python | test_impartial.py | georg-wolflein/impartial | a53819cefcb74a57e3c1148a6b8fa88aed9264d4 | [
"Apache-2.0"
] | null | null | null | test_impartial.py | georg-wolflein/impartial | a53819cefcb74a57e3c1148a6b8fa88aed9264d4 | [
"Apache-2.0"
] | null | null | null | test_impartial.py | georg-wolflein/impartial | a53819cefcb74a57e3c1148a6b8fa88aed9264d4 | [
"Apache-2.0"
] | null | null | null | from functools import partial
from impartial import impartial
def f(x: int, y: int, z: int = 0) -> int:
    return x + 2*y + z


def test_simple_call_args():
    assert impartial(f, 1)(2) == f(1, 2)


def test_simple_call_kwargs():
    assert impartial(f, y=2)(x=1) == f(1, 2)


def test_simple_call_empty():
    assert impartial(f, 1, y=2)() == f(1, 2)


def test_decorator():
    @impartial
    def f(x, y):
        return x + 2*y
    assert f.with_y(2)(1) == 5


def test_func():
    assert impartial(f, 1).func is f


def test_with_kwargs():
    assert impartial(f, 1).with_z(3)(2) == f(1, 2, 3)


def test_multiple_with_kwargs():
    assert impartial(f, 1).with_z(3).with_y(2)() == f(1, 2, 3)


def test_with_kwargs_override():
    assert impartial(f, 1, 2).with_z(3).with_z(4)() == f(1, 2, 4)


def test_nested_impartial():
    imp = impartial(f, x=1, y=2)
    imp = impartial(imp, x=2)
    imp = impartial(imp, x=3)
    assert imp() == f(3, 2)
    assert not isinstance(imp.func, impartial)
    assert imp.func is f


def test_nested_partial():
    imp = partial(f, x=1, y=2)
    imp = partial(imp, x=2)
    imp = impartial(imp, x=3)
    assert imp() == f(3, 2)
    assert not isinstance(imp.func, partial)
    assert imp.func is f


def test_configure():
    assert impartial(f, 1, z=2).configure(2, z=3)() == f(1, 2, 3) | 20.492308 | 65 | 0.61036 | 240 | 1,332 | 3.2625 | 0.15 | 0.03576 | 0.034483 | 0.15198 | 0.490421 | 0.413793 | 0.378033 | 0.237548 | 0.237548 | 0.153257 | 0 | 0.054966 | 0.221471 | 1,332 | 64 | 66 | 20.8125 | 0.700096 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.384615 | 1 | 0.333333 | false | 0 | 0.051282 | 0.051282 | 0.435897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72f43506a3e179e12b61e504fc43770a91f14bf0 | 5,076 | py | Python | manager.py | smilechaser/screeps-script-caddy | 11b6e809675dfd0a5a4ff917a492adc4a5a08bca | [
"MIT"
] | 2 | 2016-02-23T09:50:15.000Z | 2016-02-28T22:08:03.000Z | manager.py | smilechaser/screeps-script-caddy | 11b6e809675dfd0a5a4ff917a492adc4a5a08bca | [
"MIT"
] | null | null | null | manager.py | smilechaser/screeps-script-caddy | 11b6e809675dfd0a5a4ff917a492adc4a5a08bca | [
"MIT"
] | null | null | null | '''
Python script for uploading/downloading scripts for use with the game Screeps.
http://support.screeps.com/hc/en-us/articles/203022612-Commiting-scripts-using-direct-API-access
Usage:
#
# general help/usage
#
python3 manager.py --help
#
# retrieve all scripts from the game and store them
# in the folder "some_folder"
#
python3 manager.py from_game some_folder
#
# send all *.js files to the game
#
python3 manager.py to_game some_folder
WARNING: Use at your own risk! Make backups of all your game content!
'''
import sys
import os
import argparse
import json
import requests
from requests.auth import HTTPBasicAuth
SCREEPS_ENDPOINT = 'https://screeps.com/api/user/code'
USER_ENV = 'SCREEPS_USER'
PASSWORD_ENV = 'SCREEPS_PASSWORD'
TO_SCREEPS = 'to_game'
FROM_SCREEPS = 'from_game'
def get_user_from_env():
    user = os.environ.get('SCREEPS_USER')
    if not user:
        print('You must provide a username, i.e. export '
              '{}=<your email address>'.
              format(USER_ENV))
        sys.exit()
    return user

def get_password_from_env():
    password = os.environ.get('SCREEPS_PASSWORD')
    if not password:
        print('You must provide a password, i.e. export {}=<your password>'.
              format(PASSWORD_ENV))
        sys.exit()
    return password

def get_data(user, password):
    print('Retrieving data...')
    response = requests.get(SCREEPS_ENDPOINT,
                            auth=HTTPBasicAuth(user, password))
    response.raise_for_status()
    data = response.json()
    if data['ok'] != 1:
        raise Exception()
    return data

def send_data(user, password, modules):
    auth = HTTPBasicAuth(user, password)
    headers = {'Content-Type': 'application/json; charset=utf-8'}
    data = {'modules': modules}
    resp = requests.post(SCREEPS_ENDPOINT,
                         data=json.dumps(data),
                         headers=headers,
                         auth=auth)
    resp.raise_for_status()

def check_for_collisions(target_folder, modules):
    for module in modules:
        target = os.path.join(target_folder, '{}.js'.format(module))
        if os.path.exists(target):
            print('File {} exists.'.format(target))
            print('Specify --force to overwrite. Aborting...')
            sys.exit()

def main():
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('operation',
                        choices=(TO_SCREEPS, FROM_SCREEPS),
                        help='')
    parser.add_argument('destination', help='')
    parser.add_argument('--user', help='')
    parser.add_argument('--password', help='')
    parser.add_argument('--force', action='store_const', const=True,
                        help='force overwrite of files in an existing folder')
    parser.add_argument('--merge', action='store_const', const=True,
                        help='merge scripts into a single main.js module')

    args = parser.parse_args()

    user = args.user if args.user else get_user_from_env()
    password = args.password if args.password else get_password_from_env()

    target_folder = os.path.abspath(args.destination)

    if args.operation == FROM_SCREEPS:
        data = get_data(user, password)

        # does the folder exist?
        if not os.path.isdir(target_folder):
            # no - create it
            print('Creating new folder "{}"...'.format(target_folder))
            os.makedirs(target_folder)
        else:
            # yes - check for collisions (unless --force was specified)
            if not args.force:
                print('Checking for collisions...')
                check_for_collisions(target_folder, data['modules'])
                print('Ok, no collisions.')

        # for each module, create a corresponding filename and put it in
        # the target folder
        for module in data['modules']:
            target = os.path.join(target_folder, '{}.js'.format(module))
            with open(target, 'w') as fout:
                fout.write(data['modules'][module])
    else:
        modules = {}
        for root, folders, files in os.walk(target_folder):
            folders[:] = []
            for target_file in files:
                name, ext = os.path.splitext(target_file)
                if ext != '.js':
                    continue
                with open(os.path.join(root, target_file), 'r') as fin:
                    modules[name] = fin.read()

        if args.merge:
            merge_modules(modules)

        # upload modules
        send_data(user, password, modules)

def generate_header(filename):
    return '''
// {border}
// {name}
// {border}
'''.format(border='-' * 25, name=filename)

def merge_modules(modules):
    keys = [x for x in modules.keys()]
    keys.sort()
    merged = ''
    for key in keys:
        merged = merged + generate_header(key) + modules[key]
        del(modules[key])
    modules['main.js'] = merged

if __name__ == '__main__':
    main()
| 22.460177 | 96 | 0.597715 | 602 | 5,076 | 4.906977 | 0.299003 | 0.040623 | 0.034529 | 0.028436 | 0.104942 | 0.05281 | 0.033175 | 0.033175 | 0.033175 | 0.033175 | 0 | 0.00441 | 0.285264 | 5,076 | 225 | 97 | 22.56 | 0.809813 | 0.149921 | 0 | 0.081818 | 0 | 0 | 0.156243 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072727 | false | 0.136364 | 0.054545 | 0.009091 | 0.163636 | 0.072727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
72f4405126d83aa638993123007b34b00b84222c | 289 | py | Python | contact.py | Nemfeto/python_training | 4d04f07700da4b0d5b50736ba197ad85fd2ee549 | [
"Apache-2.0"
] | null | null | null | contact.py | Nemfeto/python_training | 4d04f07700da4b0d5b50736ba197ad85fd2ee549 | [
"Apache-2.0"
] | null | null | null | contact.py | Nemfeto/python_training | 4d04f07700da4b0d5b50736ba197ad85fd2ee549 | [
"Apache-2.0"
] | null | null | null | class Contact:
    def __init__(self, first_name, last_name, nickname, address, mobile, email):
        self.first_name = first_name
        self.last_name = last_name
        self.nickname = nickname
        self.address = address
        self.mobile = mobile
        self.email = email
| 28.9 | 80 | 0.643599 | 35 | 289 | 5.028571 | 0.342857 | 0.153409 | 0.147727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.280277 | 289 | 9 | 81 | 32.111111 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72f452ac4f4dcb2cc71e6a4cb7d5b81c957513cc | 1,158 | py | Python | integreat_cms/cms/views/dashboard/admin_dashboard_view.py | Integreat/cms-v2 | c79a54fd5abb792696420aa6427a5e5a356fa79c | [
"Apache-2.0"
] | 21 | 2018-10-26T20:10:45.000Z | 2020-10-22T09:41:46.000Z | integreat_cms/cms/views/dashboard/admin_dashboard_view.py | Integreat/cms-v2 | c79a54fd5abb792696420aa6427a5e5a356fa79c | [
"Apache-2.0"
] | 392 | 2018-10-25T08:34:07.000Z | 2020-11-19T08:20:30.000Z | integreat_cms/cms/views/dashboard/admin_dashboard_view.py | Integreat/cms-v2 | c79a54fd5abb792696420aa6427a5e5a356fa79c | [
"Apache-2.0"
] | 23 | 2019-03-06T17:11:35.000Z | 2020-10-16T04:36:41.000Z | import logging
from django.views.generic import TemplateView
from ...models import Feedback
from ..chat.chat_context_mixin import ChatContextMixin
logger = logging.getLogger(__name__)
class AdminDashboardView(TemplateView, ChatContextMixin):
"""
View for the admin dashboard
"""
#: The template to render (see :class:`~django.views.generic.base.TemplateResponseMixin`)
template_name = "dashboard/admin_dashboard.html"
#: The context dict passed to the template (see :class:`~django.views.generic.base.ContextMixin`)
extra_context = {"current_menu_item": "admin_dashboard"}
def get_context_data(self, **kwargs):
r"""
Returns a dictionary representing the template context
(see :meth:`~django.views.generic.base.ContextMixin.get_context_data`).
:param \**kwargs: The given keyword arguments
:type \**kwargs: dict
:return: The template context
:rtype: dict
"""
context = super().get_context_data(**kwargs)
context["admin_feedback"] = Feedback.objects.filter(
is_technical=True, read_by=None
)[:5]
return context
| 31.297297 | 101 | 0.686528 | 130 | 1,158 | 5.953846 | 0.5 | 0.056848 | 0.093023 | 0.085271 | 0.136951 | 0.077519 | 0 | 0 | 0 | 0 | 0 | 0.001089 | 0.207254 | 1,158 | 36 | 102 | 32.166667 | 0.842048 | 0.391192 | 0 | 0 | 0 | 0 | 0.1216 | 0.048 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.266667 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
72f81be16085865d9021b61f8214e479cfae5efb | 4,719 | py | Python | cdp/headless_experimental.py | HyperionGray/python-chrome-devtools-protocol | 5463a5f3d20100255c932961b944e4b37dbb7e61 | [
"MIT"
] | 42 | 2019-10-07T17:50:00.000Z | 2022-03-28T17:56:27.000Z | cdp/headless_experimental.py | HyperionGray/python-chrome-devtools-protocol | 5463a5f3d20100255c932961b944e4b37dbb7e61 | [
"MIT"
] | 23 | 2019-06-09T19:56:25.000Z | 2022-03-02T01:53:13.000Z | cdp/headless_experimental.py | HyperionGray/python-chrome-devtools-protocol | 5463a5f3d20100255c932961b944e4b37dbb7e61 | [
"MIT"
] | 15 | 2019-11-25T10:20:32.000Z | 2022-03-01T21:14:56.000Z | # DO NOT EDIT THIS FILE!
#
# This file is generated from the CDP specification. If you need to make
# changes, edit the generator and regenerate all of the modules.
#
# CDP domain: HeadlessExperimental (experimental)
from __future__ import annotations
from cdp.util import event_class, T_JSON_DICT
from dataclasses import dataclass
import enum
import typing
@dataclass
class ScreenshotParams:
'''
Encoding options for a screenshot.
'''
#: Image compression format (defaults to png).
format_: typing.Optional[str] = None
#: Compression quality from range [0..100] (jpeg only).
quality: typing.Optional[int] = None
def to_json(self) -> T_JSON_DICT:
json: T_JSON_DICT = dict()
if self.format_ is not None:
json['format'] = self.format_
if self.quality is not None:
json['quality'] = self.quality
return json
@classmethod
def from_json(cls, json: T_JSON_DICT) -> ScreenshotParams:
return cls(
format_=str(json['format']) if 'format' in json else None,
quality=int(json['quality']) if 'quality' in json else None,
)
def begin_frame(
frame_time_ticks: typing.Optional[float] = None,
interval: typing.Optional[float] = None,
no_display_updates: typing.Optional[bool] = None,
screenshot: typing.Optional[ScreenshotParams] = None
) -> typing.Generator[T_JSON_DICT,T_JSON_DICT,typing.Tuple[bool, typing.Optional[str]]]:
'''
Sends a BeginFrame to the target and returns when the frame was completed. Optionally captures a
screenshot from the resulting frame. Requires that the target was created with enabled
BeginFrameControl. Designed for use with --run-all-compositor-stages-before-draw, see also
https://goo.gl/3zHXhB for more background.
:param frame_time_ticks: *(Optional)* Timestamp of this BeginFrame in Renderer TimeTicks (milliseconds of uptime). If not set, the current time will be used.
:param interval: *(Optional)* The interval between BeginFrames that is reported to the compositor, in milliseconds. Defaults to a 60 frames/second interval, i.e. about 16.666 milliseconds.
:param no_display_updates: *(Optional)* Whether updates should not be committed and drawn onto the display. False by default. If true, only side effects of the BeginFrame will be run, such as layout and animations, but any visual updates may not be visible on the display or in screenshots.
:param screenshot: *(Optional)* If set, a screenshot of the frame will be captured and returned in the response. Otherwise, no screenshot will be captured. Note that capturing a screenshot can fail, for example, during renderer initialization. In such a case, no screenshot data will be returned.
:returns: A tuple with the following items:
0. **hasDamage** - Whether the BeginFrame resulted in damage and, thus, a new frame was committed to the display. Reported for diagnostic uses, may be removed in the future.
1. **screenshotData** - *(Optional)* Base64-encoded image data of the screenshot, if one was requested and successfully taken.
'''
params: T_JSON_DICT = dict()
if frame_time_ticks is not None:
params['frameTimeTicks'] = frame_time_ticks
if interval is not None:
params['interval'] = interval
if no_display_updates is not None:
params['noDisplayUpdates'] = no_display_updates
if screenshot is not None:
params['screenshot'] = screenshot.to_json()
cmd_dict: T_JSON_DICT = {
'method': 'HeadlessExperimental.beginFrame',
'params': params,
}
json = yield cmd_dict
return (
bool(json['hasDamage']),
str(json['screenshotData']) if 'screenshotData' in json else None
)
def disable() -> typing.Generator[T_JSON_DICT,T_JSON_DICT,None]:
'''
Disables headless events for the target.
'''
cmd_dict: T_JSON_DICT = {
'method': 'HeadlessExperimental.disable',
}
json = yield cmd_dict
def enable() -> typing.Generator[T_JSON_DICT,T_JSON_DICT,None]:
'''
Enables headless events for the target.
'''
cmd_dict: T_JSON_DICT = {
'method': 'HeadlessExperimental.enable',
}
json = yield cmd_dict
@event_class('HeadlessExperimental.needsBeginFramesChanged')
@dataclass
class NeedsBeginFramesChanged:
'''
Issued when the target starts or stops needing BeginFrames.
'''
#: True if BeginFrames are needed, false otherwise.
needs_begin_frames: bool
@classmethod
def from_json(cls, json: T_JSON_DICT) -> NeedsBeginFramesChanged:
return cls(
needs_begin_frames=bool(json['needsBeginFrames'])
)
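The `to_json`/`from_json` pair above round-trips through CDP's wire names (`format`, `quality`), dropping keys whose value is `None`. A simplified standalone sketch of the same pattern, without importing the `cdp` package (a plain `dict` stands in for `T_JSON_DICT`):

```python
from __future__ import annotations
from dataclasses import dataclass
import typing

@dataclass
class ScreenshotParams:
    format_: typing.Optional[str] = None
    quality: typing.Optional[int] = None

    def to_json(self) -> dict:
        json: dict = {}
        if self.format_ is not None:
            json['format'] = self.format_  # trailing underscore dropped on the wire
        if self.quality is not None:
            json['quality'] = self.quality
        return json

    @classmethod
    def from_json(cls, json: dict) -> ScreenshotParams:
        return cls(
            format_=str(json['format']) if 'format' in json else None,
            quality=int(json['quality']) if 'quality' in json else None,
        )

p = ScreenshotParams(format_='jpeg', quality=80)
print(p.to_json())  # → {'format': 'jpeg', 'quality': 80}
assert ScreenshotParams.from_json(p.to_json()) == p
```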
| 40.333333 | 300 | 0.700148 | 617 | 4,719 | 5.23987 | 0.333874 | 0.023198 | 0.041757 | 0.024126 | 0.131457 | 0.111661 | 0.111661 | 0.09867 | 0.088463 | 0.042066 | 0 | 0.004315 | 0.21424 | 4,719 | 116 | 301 | 40.681034 | 0.867584 | 0.453062 | 0 | 0.184615 | 1 | 0 | 0.119658 | 0.05291 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092308 | false | 0 | 0.076923 | 0.030769 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72f84712a4005f1ecc74d20ce01f90b1d0a8a90c | 237 | py | Python | tests/test_geometry.py | resurtm/wvflib | 106f426cc2c63c8d21f3e0ec1b90b06450dfc547 | [
"MIT"
] | 1 | 2020-08-14T20:59:54.000Z | 2020-08-14T20:59:54.000Z | tests/test_geometry.py | resurtm/wvflib | 106f426cc2c63c8d21f3e0ec1b90b06450dfc547 | [
"MIT"
] | 3 | 2020-03-31T11:16:01.000Z | 2022-03-01T01:40:38.000Z | tests/test_geometry.py | resurtm/wvflib | 106f426cc2c63c8d21f3e0ec1b90b06450dfc547 | [
"MIT"
] | 3 | 2020-01-24T11:10:46.000Z | 2020-03-31T11:24:34.000Z | import unittest
from wvflib.geometry import Face
class TestGeometry(unittest.TestCase):
def test_constructor(self):
f = Face()
self.assertTrue(len(f.vertices) == 0)
if __name__ == '__main__':
unittest.main()
| 16.928571 | 45 | 0.675105 | 28 | 237 | 5.392857 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005348 | 0.21097 | 237 | 13 | 46 | 18.230769 | 0.802139 | 0 | 0 | 0 | 0 | 0 | 0.033755 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
72fb29b0b3b127d1a4779c19adfdd5ba81413ede | 2,057 | py | Python | stix2/__init__.py | khdesai/cti-python-stix2 | 20a9bb316c43b7d9faaab686db8d51e5c89416da | [
"BSD-3-Clause"
] | null | null | null | stix2/__init__.py | khdesai/cti-python-stix2 | 20a9bb316c43b7d9faaab686db8d51e5c89416da | [
"BSD-3-Clause"
] | null | null | null | stix2/__init__.py | khdesai/cti-python-stix2 | 20a9bb316c43b7d9faaab686db8d51e5c89416da | [
"BSD-3-Clause"
] | null | null | null | """Python APIs for STIX 2.
.. autosummary::
:toctree: api
confidence
datastore
environment
equivalence
exceptions
markings
parsing
pattern_visitor
patterns
properties
serialization
utils
v20
v21
versioning
workbench
"""
# flake8: noqa
DEFAULT_VERSION = '2.1' # Default version will always be the latest STIX 2.X version
from .confidence import scales
from .datastore import CompositeDataSource
from .datastore.filesystem import (
FileSystemSink, FileSystemSource, FileSystemStore,
)
from .datastore.filters import Filter
from .datastore.memory import MemorySink, MemorySource, MemoryStore
from .datastore.taxii import (
TAXIICollectionSink, TAXIICollectionSource, TAXIICollectionStore,
)
from .environment import Environment, ObjectFactory
from .markings import (
add_markings, clear_markings, get_markings, is_marked, remove_markings,
set_markings,
)
from .parsing import _collect_stix2_mappings, parse, parse_observable
from .patterns import (
AndBooleanExpression, AndObservationExpression, BasicObjectPathComponent,
BinaryConstant, BooleanConstant, EqualityComparisonExpression,
FloatConstant, FollowedByObservationExpression,
GreaterThanComparisonExpression, GreaterThanEqualComparisonExpression,
HashConstant, HexConstant, InComparisonExpression, IntegerConstant,
IsSubsetComparisonExpression, IsSupersetComparisonExpression,
LessThanComparisonExpression, LessThanEqualComparisonExpression,
LikeComparisonExpression, ListConstant, ListObjectPathComponent,
MatchesComparisonExpression, ObjectPath, ObservationExpression,
OrBooleanExpression, OrObservationExpression, ParentheticalExpression,
QualifiedObservationExpression, ReferenceObjectPathComponent,
RepeatQualifier, StartStopQualifier, StringConstant, TimestampConstant,
WithinQualifier,
)
from .v21 import * # This import will always be the latest STIX 2.X version
from .version import __version__
from .versioning import new_version, revoke
_collect_stix2_mappings()
| 31.646154 | 85 | 0.808459 | 169 | 2,057 | 9.721893 | 0.615385 | 0.039562 | 0.014607 | 0.018259 | 0.046257 | 0.046257 | 0.046257 | 0.046257 | 0.046257 | 0.046257 | 0 | 0.007928 | 0.141468 | 2,057 | 64 | 86 | 32.140625 | 0.922424 | 0.191055 | 0 | 0 | 0 | 0 | 0.001814 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.361111 | 0 | 0.361111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
72fd07154cf4859fb20d5f7aa637f41a882f2a27 | 584 | py | Python | UMSLHackRestAPI/api/urls.py | trujivan/climate-impact-changes | 609b8197b0ede1c1fdac3aa82b34e73e6f4526e3 | [
"MIT"
] | 1 | 2020-03-29T17:52:26.000Z | 2020-03-29T17:52:26.000Z | UMSLHackRestAPI/api/urls.py | trujivan/climate-impact-changes | 609b8197b0ede1c1fdac3aa82b34e73e6f4526e3 | [
"MIT"
] | 6 | 2021-03-19T00:01:21.000Z | 2021-09-22T18:37:17.000Z | UMSLHackRestAPI/api/urls.py | trujivan/climate-impact-changes | 609b8197b0ede1c1fdac3aa82b34e73e6f4526e3 | [
"MIT"
] | null | null | null | from django.urls import path, include
from .views import main_view, PredictionView
#router = routers.DefaultRouter(trailing_slash=False)
#router.register('years', YearView, basename='years')
#router.register('predict', PredictionView, basename='predict')
urlpatterns = [
#path('api/', get_dummy_data),
#path('pollution/predict', get_prediction, name='test_predict'),
#path('myform/', api_form_view, name='year_form'),
#path('api/', include(router.urls)),
path(r'', main_view, name="main"),
path(r'api/v1/predict', PredictionView.as_view(), name='predict')
] | 36.5 | 69 | 0.714041 | 73 | 584 | 5.561644 | 0.493151 | 0.059113 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001927 | 0.111301 | 584 | 16 | 70 | 36.5 | 0.780347 | 0.585616 | 0 | 0 | 0 | 0 | 0.105932 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f4030f6d52f16b8e41c89e74609c247cf9d493ab | 864 | py | Python | cattr/__init__.py | bluetech/cattrs | be438d5566bd308b584359a9b0011a7bd0006b06 | [
"MIT"
] | 1 | 2021-07-07T12:24:58.000Z | 2021-07-07T12:24:58.000Z | cattr/__init__.py | bluetech/cattrs | be438d5566bd308b584359a9b0011a7bd0006b06 | [
"MIT"
] | null | null | null | cattr/__init__.py | bluetech/cattrs | be438d5566bd308b584359a9b0011a7bd0006b06 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from .converters import Converter, UnstructureStrategy
__all__ = ('global_converter', 'unstructure', 'structure',
'structure_attrs_fromtuple', 'structure_attrs_fromdict',
'UnstructureStrategy')
__author__ = 'Tin Tvrtković'
__email__ = 'tinchester@gmail.com'
global_converter = Converter()
unstructure = global_converter.unstructure
structure = global_converter.structure
structure_attrs_fromtuple = global_converter.structure_attrs_fromtuple
structure_attrs_fromdict = global_converter.structure_attrs_fromdict
register_structure_hook = global_converter.register_structure_hook
register_structure_hook_func = global_converter.register_structure_hook_func
register_unstructure_hook = global_converter.register_unstructure_hook
register_unstructure_hook_func = \
global_converter.register_unstructure_hook_func
| 37.565217 | 76 | 0.834491 | 90 | 864 | 7.411111 | 0.277778 | 0.224888 | 0.125937 | 0.104948 | 0.38081 | 0.134933 | 0 | 0 | 0 | 0 | 0 | 0.001282 | 0.097222 | 864 | 22 | 77 | 39.272727 | 0.853846 | 0.024306 | 0 | 0 | 0 | 0 | 0.162901 | 0.058264 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f404724df75288d1e7ccb8f451caae2678af6f55 | 534 | py | Python | 1. Algorithmic Toolbox/week2_algorithmic_warmup/4_lcm.py | vishweshwartyagi/Data-Structures-and-Algorithms-UCSD | de942b3a0eb2bf56f949f47c297fad713aa81489 | [
"MIT"
] | null | null | null | 1. Algorithmic Toolbox/week2_algorithmic_warmup/4_lcm.py | vishweshwartyagi/Data-Structures-and-Algorithms-UCSD | de942b3a0eb2bf56f949f47c297fad713aa81489 | [
"MIT"
] | null | null | null | 1. Algorithmic Toolbox/week2_algorithmic_warmup/4_lcm.py | vishweshwartyagi/Data-Structures-and-Algorithms-UCSD | de942b3a0eb2bf56f949f47c297fad713aa81489 | [
"MIT"
] | null | null | null | # Uses python3
import sys
def lcm_naive(a, b):
for l in range(1, a*b + 1):
if l % a == 0 and l % b == 0:
return l
return a*b
def gcd(a, b):
if a%b == 0:
return b
elif b%a == 0:
return a
if a > b:
return gcd(a%b, b)
else:
return gcd(b%a, a)
def lcm(a, b):
    # floor division: int((a*b) / gcd) loses precision once a*b exceeds 2**53
    return a * b // gcd(a, b)
if __name__ == '__main__':
# input = sys.stdin.read()
a, b = map(int, input().split())
# print(lcm_naive(a, b))
print(lcm(a, b))
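The stub above can be cross-checked against an iterative Euclid implementation (a sketch, not part of the original file). Note that for inputs near 2·10⁹ the product `a*b` exceeds float precision, which is why floor division rather than `int(x / y)` matters here:

```python
def gcd(a, b):
    # Iterative Euclid: gcd(a, b) == gcd(b, a % b)
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    # Floor division keeps the result an exact int even when a*b
    # no longer fits in a float's 53-bit mantissa.
    return a * b // gcd(a, b)

print(lcm(21, 6))  # → 42
```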
| 14.833333 | 37 | 0.456929 | 93 | 534 | 2.516129 | 0.322581 | 0.111111 | 0.064103 | 0.08547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021148 | 0.38015 | 534 | 35 | 38 | 15.257143 | 0.685801 | 0.11236 | 0 | 0 | 0 | 0 | 0.017058 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0.05 | 0.05 | 0.55 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f40992ff6f047f5e4c5a436cd251bdd645155f4b | 424 | py | Python | sample_project/sample_content/serializers.py | zentrumnawi/solid-backend | 0a6ac51608d4c713903856bb9b0cbf0068aa472c | [
"MIT"
] | 1 | 2021-01-24T11:54:01.000Z | 2021-01-24T11:54:01.000Z | sample_project/sample_content/serializers.py | zentrumnawi/solid-backend | 0a6ac51608d4c713903856bb9b0cbf0068aa472c | [
"MIT"
] | 112 | 2020-04-22T10:07:03.000Z | 2022-03-29T15:25:26.000Z | sample_project/sample_content/serializers.py | zentrumnawi/solid-backend | 0a6ac51608d4c713903856bb9b0cbf0068aa472c | [
"MIT"
] | null | null | null | from rest_framework import serializers
from solid_backend.photograph.serializers import PhotographSerializer
from solid_backend.media_object.serializers import MediaObjectSerializer
from .models import SampleProfile
class SampleProfileSerializer(serializers.ModelSerializer):
media_objects = MediaObjectSerializer(many=True)
class Meta:
model = SampleProfile
fields = "__all__"
depth = 1
| 28.266667 | 72 | 0.794811 | 41 | 424 | 8 | 0.634146 | 0.054878 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002809 | 0.160377 | 424 | 14 | 73 | 30.285714 | 0.918539 | 0 | 0 | 0 | 0 | 0 | 0.016509 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f40b92b97d4fadcb832680913b2744036577bcf3 | 782 | py | Python | tests/reshape_4/generate_pb.py | wchsieh/utensor_cgen | 1774f0dfc0eb98b274271e7a67457dc3593b2593 | [
"Apache-2.0"
] | null | null | null | tests/reshape_4/generate_pb.py | wchsieh/utensor_cgen | 1774f0dfc0eb98b274271e7a67457dc3593b2593 | [
"Apache-2.0"
] | null | null | null | tests/reshape_4/generate_pb.py | wchsieh/utensor_cgen | 1774f0dfc0eb98b274271e7a67457dc3593b2593 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf8 -*-
import os
from utensor_cgen.utils import save_consts, save_graph, save_idx
import numpy as np
import tensorflow as tf
def generate():
test_dir = os.path.dirname(__file__)
graph = tf.Graph()
with graph.as_default():
x = tf.constant(np.random.randn(10),
dtype=tf.float32,
name='x')
output_x = tf.reshape(x, [5, 2], name="output_x")
with tf.Session(graph=graph) as sess:
save_consts(sess, test_dir)
save_graph(graph, 'test_reshape_4', test_dir)
np_output = output_x.eval()
save_idx(np_output, os.path.join(test_dir, 'output_x.idx'))
# test_reshape_4.pb is the same as test_quant_reshape_4.pb
# hack, since we do not have QuantizedReshape yet
if __name__ == "__main__":
generate()
| 28.962963 | 64 | 0.673913 | 122 | 782 | 4.016393 | 0.47541 | 0.057143 | 0.04898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016077 | 0.204604 | 782 | 26 | 65 | 30.076923 | 0.771704 | 0.159847 | 0 | 0 | 1 | 0 | 0.06585 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.210526 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f40f0e5d0f6c305a62e87232ab24691dc4b36cbe | 4,053 | py | Python | DEMs/denmark/download_dk_dem.py | PeterFogh/digital_elevation_model_use_cases | 0e72cc6238ca5217a73d06dc3e8c3229024112c3 | [
"MIT"
] | null | null | null | DEMs/denmark/download_dk_dem.py | PeterFogh/digital_elevation_model_use_cases | 0e72cc6238ca5217a73d06dc3e8c3229024112c3 | [
"MIT"
] | null | null | null | DEMs/denmark/download_dk_dem.py | PeterFogh/digital_elevation_model_use_cases | 0e72cc6238ca5217a73d06dc3e8c3229024112c3 | [
"MIT"
] | null | null | null | """
Fetch all files from Kortforsyningen FTP server folder.
Copyright (c) 2021 Peter Fogh
See also command line alternative in `download_dk_dem.sh`
"""
from ftplib import FTP, error_perm
import os
from pathlib import Path
import time
import operator
import functools
import shutil
# TODO: use logging to std instead of print(time.ctime())
from environs import Env
# Functions
def download_FTP_tree(ftp, remote_dir, local_dir):
"""
Download FTP directory and all content to local directory.
Inspired by https://stackoverflow.com/a/55127679/7796217.
Parameters:
ftp : ftplib.FTP
Established FTP connection after login.
remote_dir : pathlib.Path
FTP directory to download.
local_dir : pathlib.Path
Local directory to store downloaded content.
"""
# Set up empty local dir and FTP current work dir before tree traversal.
shutil.rmtree(local_dir)
ftp.cwd(remote_dir.parent.as_posix())
local_dir.mkdir(parents=True, exist_ok=True)
return _recursive_download_FTP_tree(ftp, remote_dir, local_dir)
def _is_ftp_dir(ftp, name):
"""
Check if FTP entry is a directory.
Modified from here https://www.daniweb.com/programming/software-development/threads/243712/ftplib-isdir-or-isfile
to accommodate not necessarily being in the top-level directory.
Parameters:
ftp : ftplib.FTP
Established FTP connection after login.
name: str
Name of FTP file system entry to check if directory or not.
"""
try:
current_dir = ftp.pwd()
ftp.cwd(name)
#print(f'File system entry "{name=}" is a directory.')
ftp.cwd(current_dir)
return True
except error_perm as e:
#print(f'File system entry "{name=}" is a file.')
return False
def _recursive_download_FTP_tree(ftp, remote_dir, local_dir):
"""
Download FTP directory and all content to local directory.
Inspired by https://stackoverflow.com/a/55127679/7796217.
Parameters:
ftp : ftplib.FTP
Established FTP connection after login.
remote_dir : pathlib.Path
FTP directory to download.
local_dir : pathlib.Path
Local directory to store downloaded content.
"""
print(f'{remote_dir=}')
print(f'{local_dir=}')
ftp.cwd(remote_dir.name)
local_dir.mkdir(exist_ok=True)
print(f'{time.ctime()}: Fetching file & directory names within "{remote_dir}".')
dir_entries = ftp.nlst()
print(f'{time.ctime()}: Fetched file & directory names within "{remote_dir}".')
dirs = []
for filename in sorted(dir_entries)[-5:]: # TODO: remove restriction on downloaded of entries
if _is_ftp_dir(ftp, filename):
dirs.append(filename)
else:
local_file = local_dir/filename
print(f'{time.ctime()}: Downloading "{local_file}".')
ftp.retrbinary(
                cmd=f'RETR {filename}',
callback=local_file.open('wb').write)
print(f'{time.ctime()}: Downloaded "{local_file}".')
print(f'Traverse dir tree to "{dirs=}"')
map_download_FTP_tree = map(lambda dir: _recursive_download_FTP_tree(
ftp, remote_dir/dir, local_dir/dir), dirs)
return functools.reduce(operator.iand, map_download_FTP_tree, True)
if __name__ == '__main__':
# Load environment variables from local `.env` file.
env = Env()
env.read_env()
# Set up server and source/destination paths.
ftp_host = 'ftp.kortforsyningen.dk'
dem_ftp_dir = Path('dhm_danmarks_hoejdemodel/DTM')
local_ftp_dir = env.path('LOCAL_FTP_DIR', './')
local_dem_ftp_dir = local_ftp_dir/'kortforsyningen'/dem_ftp_dir
# Perform FTP download.
print(f'{time.ctime()}: Connect to {ftp_host}')
ftp = FTP(ftp_host)
ftp.login(env('KORTFORSYNING_USERNAME'), env('KORTFORSYNING_PASSWORD'))
download_FTP_tree(ftp, dem_ftp_dir, local_dem_ftp_dir)
ftp.close()
print(f'{time.ctime()}: Finished')
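The `_is_ftp_dir` probe above relies on `cwd` raising `error_perm` for plain files. That behaviour can be exercised without a live server by duck-typing a minimal FTP stand-in (all names below are hypothetical test fixtures):

```python
from ftplib import error_perm

def is_ftp_dir(ftp, name):
    """Same probe as _is_ftp_dir: try to cwd into the entry and back."""
    try:
        current_dir = ftp.pwd()
        ftp.cwd(name)
        ftp.cwd(current_dir)
        return True
    except error_perm:
        return False

class FakeFTP:
    """Minimal stand-in: directories accept cwd, anything else raises error_perm."""
    def __init__(self, dirs):
        self._dirs = set(dirs) | {'/'}
        self._cwd = '/'
    def pwd(self):
        return self._cwd
    def cwd(self, name):
        if name not in self._dirs:
            raise error_perm('550 Not a directory')
        self._cwd = name

ftp = FakeFTP(dirs=['DTM'])
print(is_ftp_dir(ftp, 'DTM'), is_ftp_dir(ftp, 'readme.txt'))  # → True False
```

Because the probe restores the working directory on success and fails before changing it otherwise, the connection is left where it started either way.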
| 32.166667 | 117 | 0.66642 | 540 | 4,053 | 4.818519 | 0.314815 | 0.036895 | 0.040354 | 0.034589 | 0.355496 | 0.355496 | 0.297079 | 0.283244 | 0.261722 | 0.219831 | 0 | 0.013162 | 0.231434 | 4,053 | 125 | 118 | 32.424 | 0.822151 | 0.400691 | 0 | 0 | 0 | 0 | 0.215134 | 0.041355 | 0 | 0 | 0 | 0.016 | 0 | 1 | 0.052632 | false | 0.017544 | 0.140351 | 0 | 0.263158 | 0.157895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f4166388f315b81cfe6df485234fcfe561b8ac22 | 251 | py | Python | src/ychaos/utils/types.py | vanderh0ff/ychaos | 5148c889912b744ee73907e4dd30c9ddb851aeb3 | [
"Apache-2.0"
] | 8 | 2021-07-21T15:37:48.000Z | 2022-03-03T14:43:09.000Z | src/ychaos/utils/types.py | vanderh0ff/ychaos | 5148c889912b744ee73907e4dd30c9ddb851aeb3 | [
"Apache-2.0"
] | 102 | 2021-07-20T16:08:29.000Z | 2022-03-25T07:28:37.000Z | src/ychaos/utils/types.py | vanderh0ff/ychaos | 5148c889912b744ee73907e4dd30c9ddb851aeb3 | [
"Apache-2.0"
] | 8 | 2021-07-20T13:37:46.000Z | 2022-02-18T01:44:52.000Z | from typing import Dict, List, TypeVar, Union
JsonTypeVar = TypeVar("JsonTypeVar")
JsonPrimitive = Union[str, float, int, bool, None]
JsonDict = Dict[str, JsonTypeVar]
JsonArray = List[JsonTypeVar]
Json = Union[JsonPrimitive, JsonDict, JsonArray]
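These aliases compose into a JSON-shaped union (one level deep, via `JsonTypeVar`). A short sketch of how such an alias reads at a call site (standalone copy of the definitions above, with a made-up `describe` helper):

```python
from typing import Dict, List, TypeVar, Union

JsonTypeVar = TypeVar("JsonTypeVar")

JsonPrimitive = Union[str, float, int, bool, None]
JsonDict = Dict[str, JsonTypeVar]
JsonArray = List[JsonTypeVar]

Json = Union[JsonPrimitive, JsonDict, JsonArray]

def describe(value: Json) -> str:
    # Dispatch on the runtime shapes the alias admits.
    if isinstance(value, dict):
        return "object with %d keys" % len(value)
    if isinstance(value, list):
        return "array of %d items" % len(value)
    return "primitive: %r" % (value,)

print(describe({"a": 1, "b": [True, None]}))  # → object with 2 keys
```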
| 22.818182 | 50 | 0.760956 | 29 | 251 | 6.586207 | 0.586207 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131474 | 251 | 10 | 51 | 25.1 | 0.876147 | 0 | 0 | 0 | 0 | 0 | 0.043825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f41a1ce9bbfb9a3f65c33e9986100ab487ba7015 | 537 | py | Python | app/hint/models.py | vigov5/oshougatsu2015 | 38cbf325675ee2c08a6965b8689fad8308eb84eb | [
"MIT"
] | null | null | null | app/hint/models.py | vigov5/oshougatsu2015 | 38cbf325675ee2c08a6965b8689fad8308eb84eb | [
"MIT"
] | null | null | null | app/hint/models.py | vigov5/oshougatsu2015 | 38cbf325675ee2c08a6965b8689fad8308eb84eb | [
"MIT"
] | null | null | null | import os
import datetime
from app import app, db
class Hint(db.Model):
__tablename__ = 'hints'
id = db.Column(db.Integer, primary_key=True)
description = db.Column(db.Text)
is_open = db.Column(db.Boolean)
problem_id = db.Column(db.Integer, db.ForeignKey('problems.id'))
def __repr__(self):
return '<Hint %r>' % (self.description)
def __init__(self, description='', is_open=False, problem=None):
self.description = description
self.is_open = is_open
self.problem = problem | 26.85 | 68 | 0.666667 | 72 | 537 | 4.722222 | 0.458333 | 0.094118 | 0.117647 | 0.070588 | 0.111765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210428 | 537 | 20 | 69 | 26.85 | 0.801887 | 0 | 0 | 0 | 0 | 0 | 0.046468 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.2 | 0.066667 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f420912bbaeaef68549b8a153f2087a527d8302c | 475 | py | Python | example/example/urls.py | pmaccamp/django-tastypie-swagger | d51ef3ea8e33791617edba8ed55a1be1f16e4ccc | [
"Apache-2.0"
] | 2 | 2020-04-13T13:26:42.000Z | 2021-10-30T17:56:15.000Z | example/example/urls.py | pmaccamp/django-tastypie-swagger | d51ef3ea8e33791617edba8ed55a1be1f16e4ccc | [
"Apache-2.0"
] | null | null | null | example/example/urls.py | pmaccamp/django-tastypie-swagger | d51ef3ea8e33791617edba8ed55a1be1f16e4ccc | [
"Apache-2.0"
] | 5 | 2020-04-15T07:05:13.000Z | 2021-11-01T20:36:10.000Z | from django.conf.urls import include, url
from django.contrib import admin
from demo.apis import api
urlpatterns = [
url(r'^api/', include(api.urls)),
url(r'^api/doc/', include(('tastypie_swagger.urls', 'tastypie_swagger'),
namespace='demo_api_swagger'),
kwargs={
"tastypie_api_module":"demo.apis.api",
"namespace":"demo_api_swagger",
"version": "0.1"}
),
url(r'^admin/', admin.site.urls),
]
| 29.6875 | 76 | 0.6 | 58 | 475 | 4.775862 | 0.413793 | 0.043321 | 0.050542 | 0.166065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00554 | 0.24 | 475 | 15 | 77 | 31.666667 | 0.761773 | 0 | 0 | 0 | 0 | 0 | 0.296842 | 0.044211 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.214286 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f4217689eb43722ace5f25924ae5b537893153d9 | 668 | py | Python | 2.5.9/test_splash/test_splash/spiders/with_splash.py | feel-easy/myspider | dcc65032015d7dbd8bea78f846fd3cac7638c332 | [
"Apache-2.0"
] | 1 | 2019-02-28T10:16:00.000Z | 2019-02-28T10:16:00.000Z | 2.5.9/test_splash/test_splash/spiders/with_splash.py | wasalen/myspider | dcc65032015d7dbd8bea78f846fd3cac7638c332 | [
"Apache-2.0"
] | null | null | null | 2.5.9/test_splash/test_splash/spiders/with_splash.py | wasalen/myspider | dcc65032015d7dbd8bea78f846fd3cac7638c332 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import scrapy
from scrapy_splash import SplashRequest  # use the Request subclass provided by the scrapy_splash package
class WithSplashSpider(scrapy.Spider):
name = 'with_splash'
allowed_domains = ['baidu.com']
start_urls = ['https://www.baidu.com/s?wd=13161933309']
def start_requests(self):
yield SplashRequest(self.start_urls[0],
callback=self.parse_splash,
                            args={'wait': 10},  # maximum time to wait for the render, in seconds
                            endpoint='render.html')  # fixed endpoint of the Splash rendering service
def parse_splash(self, response):
with open('with_splash.html', 'w') as f:
f.write(response.body.decode())
| 35.157895 | 70 | 0.606287 | 74 | 668 | 5.337838 | 0.689189 | 0.050633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030738 | 0.269461 | 668 | 18 | 71 | 37.111111 | 0.778689 | 0.116766 | 0 | 0 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f423a60c36497f5bf95253c92fffc3d805f3c461 | 11,128 | py | Python | src/genui/models/models.py | Tontolda/genui | c5b7da7c5a99fc16d34878e2170145ac7c8e31c4 | [
"0BSD"
] | 15 | 2021-05-31T13:39:17.000Z | 2022-03-30T12:04:14.000Z | src/genui/models/models.py | martin-sicho/genui | ea7f1272030a13e8e253a7a9b6479ac6a78552d3 | [
"MIT"
] | 3 | 2021-04-08T22:02:22.000Z | 2022-03-16T09:10:20.000Z | src/genui/models/models.py | Tontolda/genui | c5b7da7c5a99fc16d34878e2170145ac7c8e31c4 | [
"0BSD"
] | 5 | 2021-03-04T11:00:54.000Z | 2021-12-18T22:59:22.000Z | import os
from django.db import models
import uuid
# Create your models here.
from djcelery_model.models import TaskMixin
from polymorphic.models import PolymorphicModel
from genui.utils.models import NON_POLYMORPHIC_CASCADE, OverwriteStorage
from genui.utils.extensions.tasks.models import TaskShortcutsMixIn, PolymorphicTaskManager
from genui.projects.models import DataSet
class AlgorithmMode(models.Model):
    name = models.CharField(unique=True, blank=False, max_length=32)

    def __str__(self):
        return '%s object (%s)' % (self.__class__.__name__, self.name)


class ModelFileFormat(models.Model):
    fileExtension = models.CharField(max_length=32, blank=False, unique=True)
    description = models.TextField(max_length=10000, blank=True)


class ImportableModelComponent(models.Model):
    corePackage = models.CharField(blank=False, null=False, default='genui.models.genuimodels', max_length=1024)

    class Meta:
        abstract = True


class Algorithm(ImportableModelComponent):
    name = models.CharField(blank=False, max_length=128, unique=True)
    fileFormats = models.ManyToManyField(ModelFileFormat)
    validModes = models.ManyToManyField(AlgorithmMode)

    def __str__(self):
        return '%s object (%s)' % (self.__class__.__name__, self.name)


class ModelParameter(models.Model):
    STRING = 'string'
    BOOL = 'bool'
    INTEGER = 'integer'
    FLOAT = 'float'
    CONTENT_TYPES = [
        (STRING, 'String'),
        (BOOL, 'Logical'),
        (INTEGER, 'Integer'),
        (FLOAT, 'Float'),
    ]

    name = models.CharField(max_length=128, blank=False)
    algorithm = models.ForeignKey(Algorithm, on_delete=models.CASCADE, null=False, related_name='parameters')
    contentType = models.CharField(max_length=32, choices=CONTENT_TYPES, default=STRING)
    defaultValue = models.ForeignKey("ModelParameterValue", on_delete=models.SET_NULL, null=True)

    class Meta:
        unique_together = ('name', 'algorithm')

    def __str__(self):
        return '%s object (%s)' % (self.__class__.__name__, self.name)


class ModelBuilder(ImportableModelComponent):
    name = models.CharField(max_length=128, blank=False, unique=True)

    def __str__(self):
        return '%s object (%s)' % (self.__class__.__name__, self.name)
class ModelFile(models.Model):
    MAIN = "main"
    AUXILIARY = "aux"
    KINDS = [
        (MAIN, 'Main'),
        (AUXILIARY, 'Auxiliary'),
    ]

    class Rejected(Exception):
        def __init__(self, msg):
            super().__init__(msg)

    class InvalidFileFormatError(Exception):
        def __init__(self, msg):
            super().__init__(msg)

    modelInstance = models.ForeignKey("Model", null=False, related_name="files", on_delete=models.CASCADE)
    kind = models.CharField(max_length=32, choices=KINDS, null=False, default=AUXILIARY)
    note = models.CharField(max_length=128, blank=True)
    format = models.ForeignKey(ModelFileFormat, null=True, on_delete=models.CASCADE)
    file = models.FileField(null=True, upload_to='models/', storage=OverwriteStorage())  # TODO: add custom logic to save in a directory specific to the project where the model is

    @property
    def path(self):
        return self.file.path

    @staticmethod
    def generateMainFileName(model, fileFormat):
        return f"{model.trainingStrategy.algorithm.name}{model.id}_project{model.project.id}_{uuid.uuid4().hex}_main{fileFormat.fileExtension}"

    @staticmethod
    def generateAuxFileName(model, fileFormat):
        return f"{model.trainingStrategy.algorithm.name}{model.id}_project{model.project.id}_{uuid.uuid4().hex}_aux{fileFormat.fileExtension}"

    @staticmethod
    def create(model, name, file_, kind=AUXILIARY, note=None):
        if not note:
            note = ''
        algorithm = model.trainingStrategy.algorithm
        if kind == ModelFile.MAIN and model.modelFile:
            file_format = None
            for format_ in algorithm.fileFormats.all():
                if name.endswith(format_.fileExtension):
                    file_format = format_
                    break
            if not file_format:
                raise ModelFile.InvalidFileFormatError(f"The extension of the submitted file '{name}' did not match any of the known formats for algorithm: ({algorithm.name}).")
            if model.modelFile.format.fileExtension == file_format.fileExtension:
                model.modelFile.file.save(os.path.basename(model.modelFile.path), file_)
            else:
                model.modelFile.delete()
                ModelFile.objects.create(
                    modelInstance=model,  # note: the FK field is 'modelInstance'; 'model=' would raise a TypeError
                    kind=ModelFile.MAIN,
                    format=file_format,
                    note=note,
                    file=file_
                )
            return model.modelFile
        else:
            file_format = None
            for format_ in ModelFileFormat.objects.all():
                if name.endswith(format_.fileExtension):
                    file_format = format_
                    break
            if kind == ModelFile.MAIN:
                if not file_format:
                    raise ModelFile.InvalidFileFormatError(f"The extension of the submitted file '{name}' did not match any of the known formats for algorithm: ({algorithm.name}).")
                ret = ModelFile.objects.create(
                    modelInstance=model,
                    kind=ModelFile.MAIN,
                    format=file_format,
                    note=note
                )
                ret.file.save(ret.generateMainFileName(model, file_format), file_)
            else:
                ret = ModelFile.objects.create(
                    modelInstance=model,
                    kind=kind,
                    format=file_format if file_format else ModelFileFormat.objects.get_or_create(
                        fileExtension='.' + name.split('.')[-1]
                    )[0],
                    note=note
                )
                ret.file.save(ret.generateAuxFileName(model, ret.format), file_)
            return ret
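# ModelFile.create resolves the file format by suffix matching against known
# extensions. The core of that lookup, reduced to a standalone helper for
# illustration (the helper name and signature are mine, not part of genui):

```python
def match_extension(filename, known_extensions):
    # return the first known extension that the filename ends with, else None
    for ext in known_extensions:
        if filename.endswith(ext):
            return ext
    return None

assert match_extension("model_main.pkl", [".pkl", ".h5"]) == ".pkl"
assert match_extension("weights.h5", [".pkl", ".h5"]) == ".h5"
assert match_extension("notes.txt", [".pkl", ".h5"]) is None
```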
class Model(TaskShortcutsMixIn, TaskMixin, DataSet):
    objects = PolymorphicTaskManager()
    builder = models.ForeignKey(ModelBuilder, on_delete=models.CASCADE, null=False)

    def __str__(self):
        return '%s object (%s)' % (self.__class__.__name__, self.name)

    @property
    def modelFile(self):
        # TODO: raise an exception when more than one main file is found
        main = self.files.filter(kind=ModelFile.MAIN)
        if main:
            return main.get()
        else:
            return None

    def onFileSave(self, saved: ModelFile):
        """
        Called when a file is saved to this model instance.
        Raise ModelFile.Rejected if the file is invalid.

        :param saved: the ModelFile instance being saved
        :return: None
        """
        pass

    # @modelFile.setter
    # def modelFile(self, val):
    #     main = self.files.filter(kind=ModelFile.MAIN)
    #     if main:
    #         main.delete()
    #     val.kind = ModelFile.MAIN
    #     val.save()
    #     self.files.add(val)
    #     self.save()

    @property
    def trainingStrategy(self):
        count = self.trainingStrategies.count()
        if count == 1:
            return self.trainingStrategies.get()
        elif count == 0:
            return None
        else:
            raise Exception("Training strategy returned more than one value. This indicates an integrity error in the database!")

    @property
    def validationStrategy(self):
        count = self.validationStrategies.count()
        if count == 1:
            return self.validationStrategies.get()
        elif count == 0:
            return None
        else:
            raise Exception("Validation strategy returned more than one value. This indicates an integrity error in the database!")
class TrainingStrategy(PolymorphicModel):
    algorithm = models.ForeignKey(Algorithm, on_delete=models.CASCADE, null=False)
    mode = models.ForeignKey(AlgorithmMode, on_delete=models.CASCADE, null=False)
    modelInstance = models.ForeignKey(Model, null=False, on_delete=models.CASCADE, related_name="trainingStrategies")


class ModelParameterValue(PolymorphicModel):
    parameter = models.ForeignKey(ModelParameter, on_delete=models.CASCADE, null=False)
    strategy = models.ForeignKey(TrainingStrategy, on_delete=NON_POLYMORPHIC_CASCADE, null=True, related_name='parameters')

    @staticmethod
    def parseValue(val):
        return str(val)


class ModelParameterStr(ModelParameterValue):
    value = models.CharField(max_length=1024)


class ModelParameterBool(ModelParameterValue):
    value = models.BooleanField(null=False)

    @staticmethod
    def parseValue(val):
        return bool(val)


class ModelParameterInt(ModelParameterValue):
    value = models.IntegerField(null=False)

    @staticmethod
    def parseValue(val):
        return int(val)


class ModelParameterFloat(ModelParameterValue):
    value = models.FloatField(null=False)

    @staticmethod
    def parseValue(val):
        return float(val)


PARAM_VALUE_CTYPE_TO_MODEL_MAP = {
    ModelParameter.STRING: ModelParameterStr,
    ModelParameter.INTEGER: ModelParameterInt,
    ModelParameter.FLOAT: ModelParameterFloat,
    ModelParameter.BOOL: ModelParameterBool,
}
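# PARAM_VALUE_CTYPE_TO_MODEL_MAP lets callers pick the right value model (and
# its parseValue coercion) from a parameter's contentType string. A plain-Python
# sketch of that dispatch, using stand-in classes so it runs without Django:

```python
# stand-ins for the Django value models above; only parseValue matters here
class _Str:
    @staticmethod
    def parseValue(val):
        return str(val)

class _Int:
    @staticmethod
    def parseValue(val):
        return int(val)

class _Float:
    @staticmethod
    def parseValue(val):
        return float(val)

# mirrors the shape of PARAM_VALUE_CTYPE_TO_MODEL_MAP with string keys
_CTYPE_MAP = {'string': _Str, 'integer': _Int, 'float': _Float}

def coerce(content_type, raw):
    # look up the value class for the declared content type and coerce the raw value
    return _CTYPE_MAP[content_type].parseValue(raw)

assert coerce('integer', '42') == 42
assert coerce('float', '0.5') == 0.5
assert coerce('string', 42) == '42'
```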
class ModelPerformanceMetric(ImportableModelComponent):
    name = models.CharField(unique=True, blank=False, max_length=128)
    validModes = models.ManyToManyField(AlgorithmMode, related_name='metrics')
    validAlgorithms = models.ManyToManyField(Algorithm, related_name='metrics')
    description = models.TextField(max_length=10000, blank=True)

    def __str__(self):
        return '%s object (%s)' % (self.__class__.__name__, self.name)


class ValidationStrategy(PolymorphicModel):
    metrics = models.ManyToManyField(ModelPerformanceMetric)
    modelInstance = models.ForeignKey(Model, null=False, on_delete=models.CASCADE, related_name='validationStrategies')


class CV(ValidationStrategy):
    cvFolds = models.IntegerField(blank=False)

    class Meta:
        abstract = True


class ValidationSet(ValidationStrategy):
    validSetSize = models.FloatField(blank=False)

    class Meta:
        abstract = True


class BasicValidationStrategy(ValidationSet, CV):
    pass


class ModelPerformance(PolymorphicModel):
    metric = models.ForeignKey(ModelPerformanceMetric, null=False, on_delete=models.CASCADE)
    value = models.FloatField(blank=False)
    model = models.ForeignKey(Model, null=False, on_delete=NON_POLYMORPHIC_CASCADE, related_name="performance")


class ModelPerformanceCV(ModelPerformance):
    fold = models.IntegerField(blank=False)


class ModelPerfomanceNN(ModelPerformance):
    epoch = models.IntegerField(null=False, blank=False)
    step = models.IntegerField(null=False, blank=False)


class ROCCurvePoint(ModelPerformance):
    fpr = models.FloatField(blank=False)
    auc = models.ForeignKey(ModelPerformance, null=False, on_delete=NON_POLYMORPHIC_CASCADE, related_name="points")

    @property
    def tpr(self):
        return self.value

# File: rpython/jit/backend/llsupport/test/test_rewrite.py (repo: jptomo/pypy-lang-scheme, license: MIT)
] | null | null | null | from rpython.jit.backend.llsupport.descr import get_size_descr,\
get_field_descr, get_array_descr, ArrayDescr, FieldDescr,\
SizeDescr, get_interiorfield_descr
from rpython.jit.backend.llsupport.gc import GcLLDescr_boehm,\
GcLLDescr_framework
from rpython.jit.backend.llsupport import jitframe
from rpython.jit.metainterp.gc import get_description
from rpython.jit.tool.oparser import parse
from rpython.jit.metainterp.optimizeopt.util import equaloplists
from rpython.jit.metainterp.history import JitCellToken, FLOAT
from rpython.jit.metainterp.history import AbstractFailDescr
from rpython.rtyper.lltypesystem import lltype, rffi
from rpython.rtyper import rclass
from rpython.jit.backend.x86.arch import WORD
class Evaluator(object):
def __init__(self, scope):
self.scope = scope
def __getitem__(self, key):
return eval(key, self.scope)
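# The Evaluator above lets %-interpolation evaluate arbitrary expressions
# against a namespace, which is how the expected traces below splice in
# computed descriptor sizes like %(sdescr.size)d. A standalone illustration,
# assuming nothing beyond the class as defined:

```python
class Evaluator(object):
    # maps %(key)s lookups to eval(key, scope), so format strings can
    # contain arbitrary Python expressions over the test namespace
    def __init__(self, scope):
        self.scope = scope
    def __getitem__(self, key):
        return eval(key, self.scope)

# "%(a + 2 * b)d" is evaluated against the given scope during interpolation
print("%(a + 2 * b)d" % Evaluator({'a': 3, 'b': 5}))  # -> 13
```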
class FakeLoopToken(object):
pass
o_vtable = lltype.malloc(rclass.OBJECT_VTABLE, immortal=True)
class RewriteTests(object):
def check_rewrite(self, frm_operations, to_operations, **namespace):
S = lltype.GcStruct('S', ('x', lltype.Signed),
('y', lltype.Signed))
sdescr = get_size_descr(self.gc_ll_descr, S)
sdescr.tid = 1234
#
T = lltype.GcStruct('T', ('y', lltype.Signed),
('z', lltype.Ptr(S)),
('t', lltype.Signed))
tdescr = get_size_descr(self.gc_ll_descr, T)
tdescr.tid = 5678
tzdescr = get_field_descr(self.gc_ll_descr, T, 'z')
#
A = lltype.GcArray(lltype.Signed)
adescr = get_array_descr(self.gc_ll_descr, A)
adescr.tid = 4321
alendescr = adescr.lendescr
#
B = lltype.GcArray(lltype.Char)
bdescr = get_array_descr(self.gc_ll_descr, B)
bdescr.tid = 8765
blendescr = bdescr.lendescr
#
C = lltype.GcArray(lltype.Ptr(S))
cdescr = get_array_descr(self.gc_ll_descr, C)
cdescr.tid = 8111
clendescr = cdescr.lendescr
#
E = lltype.GcStruct('Empty')
edescr = get_size_descr(self.gc_ll_descr, E)
edescr.tid = 9000
#
vtable_descr = self.gc_ll_descr.fielddescr_vtable
O = lltype.GcStruct('O', ('parent', rclass.OBJECT),
('x', lltype.Signed))
o_descr = self.cpu.sizeof(O, True)
o_vtable = globals()['o_vtable']
#
tiddescr = self.gc_ll_descr.fielddescr_tid
wbdescr = self.gc_ll_descr.write_barrier_descr
WORD = globals()['WORD']
#
strdescr = self.gc_ll_descr.str_descr
unicodedescr = self.gc_ll_descr.unicode_descr
strlendescr = strdescr.lendescr
unicodelendescr = unicodedescr.lendescr
strhashdescr = self.gc_ll_descr.str_hash_descr
unicodehashdescr = self.gc_ll_descr.unicode_hash_descr
casmdescr = JitCellToken()
clt = FakeLoopToken()
clt._ll_initial_locs = [0, 8]
frame_info = lltype.malloc(jitframe.JITFRAMEINFO, flavor='raw')
clt.frame_info = frame_info
frame_info.jfi_frame_depth = 13
frame_info.jfi_frame_size = 255
framedescrs = self.gc_ll_descr.getframedescrs(self.cpu)
framelendescr = framedescrs.arraydescr.lendescr
jfi_frame_depth = framedescrs.jfi_frame_depth
jfi_frame_size = framedescrs.jfi_frame_size
jf_frame_info = framedescrs.jf_frame_info
jf_savedata = framedescrs.jf_savedata
jf_force_descr = framedescrs.jf_force_descr
jf_descr = framedescrs.jf_descr
jf_guard_exc = framedescrs.jf_guard_exc
jf_forward = framedescrs.jf_forward
jf_extra_stack_depth = framedescrs.jf_extra_stack_depth
signedframedescr = self.cpu.signedframedescr
floatframedescr = self.cpu.floatframedescr
casmdescr.compiled_loop_token = clt
#
guarddescr = AbstractFailDescr()
#
namespace.update(locals())
#
for funcname in self.gc_ll_descr._generated_functions:
namespace[funcname] = self.gc_ll_descr.get_malloc_fn(funcname)
namespace[funcname + '_descr'] = getattr(self.gc_ll_descr,
'%s_descr' % funcname)
#
ops = parse(frm_operations, namespace=namespace)
expected = parse(to_operations % Evaluator(namespace),
namespace=namespace)
operations = self.gc_ll_descr.rewrite_assembler(self.cpu,
ops.operations,
[])
remap = {}
for a, b in zip(ops.inputargs, expected.inputargs):
remap[b] = a
equaloplists(operations, expected.operations, remap=remap)
lltype.free(frame_info, flavor='raw')
class FakeTracker(object):
pass
class BaseFakeCPU(object):
JITFRAME_FIXED_SIZE = 0
def __init__(self):
self.tracker = FakeTracker()
self._cache = {}
self.signedframedescr = ArrayDescr(3, 8, FieldDescr('len', 0, 0, 0), 0)
self.floatframedescr = ArrayDescr(5, 8, FieldDescr('len', 0, 0, 0), 0)
def getarraydescr_for_frame(self, tp):
if tp == FLOAT:
return self.floatframedescr
return self.signedframedescr
def unpack_arraydescr_size(self, d):
return 0, d.itemsize, 0
def unpack_fielddescr(self, d):
return d.offset
def arraydescrof(self, ARRAY):
try:
return self._cache[ARRAY]
except KeyError:
r = ArrayDescr(1, 2, FieldDescr('len', 0, 0, 0), 0)
self._cache[ARRAY] = r
return r
def fielddescrof(self, STRUCT, fname):
key = (STRUCT, fname)
try:
return self._cache[key]
except KeyError:
r = FieldDescr(fname, 1, 1, 1)
self._cache[key] = r
return r
class TestBoehm(RewriteTests):
def setup_method(self, meth):
class FakeCPU(BaseFakeCPU):
def sizeof(self, STRUCT, is_object):
assert is_object
return SizeDescr(102, gc_fielddescrs=[],
vtable=o_vtable)
self.cpu = FakeCPU()
self.gc_ll_descr = GcLLDescr_boehm(None, None, None)
def test_new(self):
self.check_rewrite("""
[]
p0 = new(descr=sdescr)
jump()
""", """
[p1]
p0 = call_malloc_gc(ConstClass(malloc_fixedsize), %(sdescr.size)d,\
descr=malloc_fixedsize_descr)
jump()
""")
def test_no_collapsing(self):
self.check_rewrite("""
[]
p0 = new(descr=sdescr)
p1 = new(descr=sdescr)
jump()
""", """
[]
p0 = call_malloc_gc(ConstClass(malloc_fixedsize), %(sdescr.size)d,\
descr=malloc_fixedsize_descr)
p1 = call_malloc_gc(ConstClass(malloc_fixedsize), %(sdescr.size)d,\
descr=malloc_fixedsize_descr)
jump()
""")
def test_new_array_fixed(self):
self.check_rewrite("""
[]
p0 = new_array(10, descr=adescr)
jump()
""", """
[]
p0 = call_malloc_gc(ConstClass(malloc_array), \
%(adescr.basesize)d, \
10, \
%(adescr.itemsize)d, \
%(adescr.lendescr.offset)d, \
descr=malloc_array_descr)
jump()
""")
## should ideally be:
## p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \
## %(adescr.basesize + 10 * adescr.itemsize)d, \
## descr=malloc_fixedsize_descr)
## setfield_gc(p0, 10, descr=alendescr)
def test_new_array_variable(self):
self.check_rewrite("""
[i1]
p0 = new_array(i1, descr=adescr)
jump()
""", """
[i1]
p0 = call_malloc_gc(ConstClass(malloc_array), \
%(adescr.basesize)d, \
i1, \
%(adescr.itemsize)d, \
%(adescr.lendescr.offset)d, \
descr=malloc_array_descr)
jump()
""")
def test_new_with_vtable(self):
self.check_rewrite("""
[]
p0 = new_with_vtable(descr=o_descr)
jump()
""", """
[p1]
p0 = call_malloc_gc(ConstClass(malloc_fixedsize), 102, \
descr=malloc_fixedsize_descr)
setfield_gc(p0, ConstClass(o_vtable), descr=vtable_descr)
jump()
""")
def test_newstr(self):
self.check_rewrite("""
[i1]
p0 = newstr(i1)
jump()
""", """
[i1]
p0 = call_malloc_gc(ConstClass(malloc_array), \
%(strdescr.basesize)d, \
i1, \
%(strdescr.itemsize)d, \
%(strlendescr.offset)d, \
descr=malloc_array_descr)
jump()
""")
def test_newunicode(self):
self.check_rewrite("""
[i1]
p0 = newunicode(10)
jump()
""", """
[i1]
p0 = call_malloc_gc(ConstClass(malloc_array), \
%(unicodedescr.basesize)d, \
10, \
%(unicodedescr.itemsize)d, \
%(unicodelendescr.offset)d, \
descr=malloc_array_descr)
jump()
""")
## should ideally be:
## p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \
## %(unicodedescr.basesize + \
## 10 * unicodedescr.itemsize)d, \
## descr=malloc_fixedsize_descr)
## setfield_gc(p0, 10, descr=unicodelendescr)
class TestFramework(RewriteTests):
def setup_method(self, meth):
class config_(object):
class translation(object):
gc = 'minimark'
gcrootfinder = 'asmgcc'
gctransformer = 'framework'
gcremovetypeptr = False
gcdescr = get_description(config_)
self.gc_ll_descr = GcLLDescr_framework(gcdescr, None, None, None,
really_not_translated=True)
self.gc_ll_descr.write_barrier_descr.has_write_barrier_from_array = (
lambda cpu: True)
self.gc_ll_descr.malloc_zero_filled = False
#
class FakeCPU(BaseFakeCPU):
def sizeof(self, STRUCT, is_object):
descr = SizeDescr(104, gc_fielddescrs=[])
descr.tid = 9315
return descr
self.cpu = FakeCPU()
def test_rewrite_assembler_new_to_malloc(self):
self.check_rewrite("""
[p1]
p0 = new(descr=sdescr)
jump()
""", """
[p1]
p0 = call_malloc_nursery(%(sdescr.size)d)
setfield_gc(p0, 1234, descr=tiddescr)
jump()
""")
def test_rewrite_assembler_new3_to_malloc(self):
self.check_rewrite("""
[]
p0 = new(descr=sdescr)
p1 = new(descr=tdescr)
p2 = new(descr=sdescr)
jump()
""", """
[]
p0 = call_malloc_nursery( \
%(sdescr.size + tdescr.size + sdescr.size)d)
setfield_gc(p0, 1234, descr=tiddescr)
p1 = nursery_ptr_increment(p0, %(sdescr.size)d)
setfield_gc(p1, 5678, descr=tiddescr)
p2 = nursery_ptr_increment(p1, %(tdescr.size)d)
setfield_gc(p2, 1234, descr=tiddescr)
zero_ptr_field(p1, %(tdescr.gc_fielddescrs[0].offset)s)
jump()
""")
def test_rewrite_assembler_new_array_fixed_to_malloc(self):
self.check_rewrite("""
[]
p0 = new_array(10, descr=adescr)
jump()
""", """
[]
p0 = call_malloc_nursery( \
%(adescr.basesize + 10 * adescr.itemsize)d)
setfield_gc(p0, 4321, descr=tiddescr)
setfield_gc(p0, 10, descr=alendescr)
jump()
""")
def test_rewrite_assembler_new_and_new_array_fixed_to_malloc(self):
self.check_rewrite("""
[]
p0 = new(descr=sdescr)
p1 = new_array(10, descr=adescr)
jump()
""", """
[]
p0 = call_malloc_nursery( \
%(sdescr.size + \
adescr.basesize + 10 * adescr.itemsize)d)
setfield_gc(p0, 1234, descr=tiddescr)
p1 = nursery_ptr_increment(p0, %(sdescr.size)d)
setfield_gc(p1, 4321, descr=tiddescr)
setfield_gc(p1, 10, descr=alendescr)
jump()
""")
def test_rewrite_assembler_round_up(self):
self.check_rewrite("""
[]
p0 = new_array(6, descr=bdescr)
jump()
""", """
[]
p0 = call_malloc_nursery(%(bdescr.basesize + 8)d)
setfield_gc(p0, 8765, descr=tiddescr)
setfield_gc(p0, 6, descr=blendescr)
jump()
""")
def test_rewrite_assembler_round_up_always(self):
self.check_rewrite("""
[]
p0 = new_array(5, descr=bdescr)
p1 = new_array(5, descr=bdescr)
p2 = new_array(5, descr=bdescr)
p3 = new_array(5, descr=bdescr)
jump()
""", """
[]
p0 = call_malloc_nursery(%(4 * (bdescr.basesize + 8))d)
setfield_gc(p0, 8765, descr=tiddescr)
setfield_gc(p0, 5, descr=blendescr)
p1 = nursery_ptr_increment(p0, %(bdescr.basesize + 8)d)
setfield_gc(p1, 8765, descr=tiddescr)
setfield_gc(p1, 5, descr=blendescr)
p2 = nursery_ptr_increment(p1, %(bdescr.basesize + 8)d)
setfield_gc(p2, 8765, descr=tiddescr)
setfield_gc(p2, 5, descr=blendescr)
p3 = nursery_ptr_increment(p2, %(bdescr.basesize + 8)d)
setfield_gc(p3, 8765, descr=tiddescr)
setfield_gc(p3, 5, descr=blendescr)
jump()
""")
def test_rewrite_assembler_minimal_size(self):
self.check_rewrite("""
[]
p0 = new(descr=edescr)
p1 = new(descr=edescr)
jump()
""", """
[]
p0 = call_malloc_nursery(%(4*WORD)d)
setfield_gc(p0, 9000, descr=tiddescr)
p1 = nursery_ptr_increment(p0, %(2*WORD)d)
setfield_gc(p1, 9000, descr=tiddescr)
jump()
""")
def test_rewrite_assembler_variable_size(self):
self.check_rewrite("""
[i0]
p0 = new_array(i0, descr=bdescr)
jump(i0)
""", """
[i0]
p0 = call_malloc_nursery_varsize(0, 1, i0, descr=bdescr)
setfield_gc(p0, i0, descr=blendescr)
jump(i0)
""")
def test_rewrite_new_string(self):
self.check_rewrite("""
[i0]
p0 = newstr(i0)
jump(i0)
""", """
[i0]
p0 = call_malloc_nursery_varsize(1, 1, i0, descr=strdescr)
setfield_gc(p0, i0, descr=strlendescr)
setfield_gc(p0, 0, descr=strhashdescr)
jump(i0)
""")
def test_rewrite_assembler_nonstandard_array(self):
# a non-standard array is a bit hard to get; e.g. GcArray(Float)
# is like that on Win32, but not on Linux. Build one manually...
NONSTD = lltype.GcArray(lltype.Float)
nonstd_descr = get_array_descr(self.gc_ll_descr, NONSTD)
nonstd_descr.tid = 6464
nonstd_descr.basesize = 64 # <= hacked
nonstd_descr.itemsize = 8
nonstd_descr_gcref = 123
self.check_rewrite("""
[i0, p1]
p0 = new_array(i0, descr=nonstd_descr)
setarrayitem_gc(p0, i0, p1)
jump(i0)
""", """
[i0, p1]
p0 = call_malloc_gc(ConstClass(malloc_array_nonstandard), \
64, 8, \
%(nonstd_descr.lendescr.offset)d, \
6464, i0, \
descr=malloc_array_nonstandard_descr)
cond_call_gc_wb_array(p0, i0, descr=wbdescr)
setarrayitem_gc(p0, i0, p1)
jump(i0)
""", nonstd_descr=nonstd_descr)
def test_rewrite_assembler_maximal_size_1(self):
self.gc_ll_descr.max_size_of_young_obj = 100
self.check_rewrite("""
[]
p0 = new_array(103, descr=bdescr)
jump()
""", """
[]
p0 = call_malloc_gc(ConstClass(malloc_array), 1, \
%(bdescr.tid)d, 103, \
descr=malloc_array_descr)
jump()
""")
def test_rewrite_assembler_maximal_size_2(self):
self.gc_ll_descr.max_size_of_young_obj = 300
self.check_rewrite("""
[]
p0 = new_array(101, descr=bdescr)
p1 = new_array(102, descr=bdescr) # two new_arrays can be combined
p2 = new_array(103, descr=bdescr) # but not all three
jump()
""", """
[]
p0 = call_malloc_nursery( \
%(2 * (bdescr.basesize + 104))d)
setfield_gc(p0, 8765, descr=tiddescr)
setfield_gc(p0, 101, descr=blendescr)
p1 = nursery_ptr_increment(p0, %(bdescr.basesize + 104)d)
setfield_gc(p1, 8765, descr=tiddescr)
setfield_gc(p1, 102, descr=blendescr)
p2 = call_malloc_nursery( \
%(bdescr.basesize + 104)d)
setfield_gc(p2, 8765, descr=tiddescr)
setfield_gc(p2, 103, descr=blendescr)
jump()
""")
def test_rewrite_assembler_huge_size(self):
# "huge" is defined as "larger than 0xffffff bytes, or 16MB"
self.check_rewrite("""
[]
p0 = new_array(20000000, descr=bdescr)
jump()
""", """
[]
p0 = call_malloc_gc(ConstClass(malloc_array), 1, \
%(bdescr.tid)d, 20000000, \
descr=malloc_array_descr)
jump()
""")
def test_new_with_vtable(self):
self.check_rewrite("""
[]
p0 = new_with_vtable(descr=o_descr)
jump()
""", """
[p1]
p0 = call_malloc_nursery(104) # rounded up
setfield_gc(p0, 9315, descr=tiddescr)
setfield_gc(p0, 0, descr=vtable_descr)
jump()
""")
def test_new_with_vtable_too_big(self):
self.gc_ll_descr.max_size_of_young_obj = 100
self.check_rewrite("""
[]
p0 = new_with_vtable(descr=o_descr)
jump()
""", """
[p1]
p0 = call_malloc_gc(ConstClass(malloc_big_fixedsize), 104, 9315, \
descr=malloc_big_fixedsize_descr)
setfield_gc(p0, 0, descr=vtable_descr)
jump()
""")
def test_rewrite_assembler_newstr_newunicode(self):
self.check_rewrite("""
[i2]
p0 = newstr(14)
p1 = newunicode(10)
p2 = newunicode(i2)
p3 = newstr(i2)
jump()
""", """
[i2]
p0 = call_malloc_nursery( \
%(strdescr.basesize + 16 * strdescr.itemsize + \
unicodedescr.basesize + 10 * unicodedescr.itemsize)d)
setfield_gc(p0, %(strdescr.tid)d, descr=tiddescr)
setfield_gc(p0, 14, descr=strlendescr)
setfield_gc(p0, 0, descr=strhashdescr)
p1 = nursery_ptr_increment(p0, %(strdescr.basesize + 16 * strdescr.itemsize)d)
setfield_gc(p1, %(unicodedescr.tid)d, descr=tiddescr)
setfield_gc(p1, 10, descr=unicodelendescr)
setfield_gc(p1, 0, descr=unicodehashdescr)
p2 = call_malloc_nursery_varsize(2, %(unicodedescr.itemsize)d, i2,\
descr=unicodedescr)
setfield_gc(p2, i2, descr=unicodelendescr)
setfield_gc(p2, 0, descr=unicodehashdescr)
p3 = call_malloc_nursery_varsize(1, 1, i2, \
descr=strdescr)
setfield_gc(p3, i2, descr=strlendescr)
setfield_gc(p3, 0, descr=strhashdescr)
jump()
""")
def test_write_barrier_before_setfield_gc(self):
self.check_rewrite("""
[p1, p2]
setfield_gc(p1, p2, descr=tzdescr)
jump()
""", """
[p1, p2]
cond_call_gc_wb(p1, descr=wbdescr)
setfield_gc(p1, p2, descr=tzdescr)
jump()
""")
def test_write_barrier_before_array_without_from_array(self):
self.gc_ll_descr.write_barrier_descr.has_write_barrier_from_array = (
lambda cpu: False)
self.check_rewrite("""
[p1, i2, p3]
setarrayitem_gc(p1, i2, p3, descr=cdescr)
jump()
""", """
[p1, i2, p3]
cond_call_gc_wb(p1, descr=wbdescr)
setarrayitem_gc(p1, i2, p3, descr=cdescr)
jump()
""")
def test_write_barrier_before_short_array(self):
self.gc_ll_descr.max_size_of_young_obj = 2000
self.check_rewrite("""
[i2, p3]
p1 = new_array_clear(129, descr=cdescr)
call_n(123456)
setarrayitem_gc(p1, i2, p3, descr=cdescr)
jump()
""", """
[i2, p3]
p1 = call_malloc_nursery( \
%(cdescr.basesize + 129 * cdescr.itemsize)d)
setfield_gc(p1, 8111, descr=tiddescr)
setfield_gc(p1, 129, descr=clendescr)
zero_array(p1, 0, 129, descr=cdescr)
call_n(123456)
cond_call_gc_wb(p1, descr=wbdescr)
setarrayitem_gc(p1, i2, p3, descr=cdescr)
jump()
""")
def test_write_barrier_before_long_array(self):
# the limit of "being too long" is fixed, arbitrarily, at 130
self.gc_ll_descr.max_size_of_young_obj = 2000
self.check_rewrite("""
[i2, p3]
p1 = new_array_clear(130, descr=cdescr)
call_n(123456)
setarrayitem_gc(p1, i2, p3, descr=cdescr)
jump()
""", """
[i2, p3]
p1 = call_malloc_nursery( \
%(cdescr.basesize + 130 * cdescr.itemsize)d)
setfield_gc(p1, 8111, descr=tiddescr)
setfield_gc(p1, 130, descr=clendescr)
zero_array(p1, 0, 130, descr=cdescr)
call_n(123456)
cond_call_gc_wb_array(p1, i2, descr=wbdescr)
setarrayitem_gc(p1, i2, p3, descr=cdescr)
jump()
""")
def test_write_barrier_before_unknown_array(self):
self.check_rewrite("""
[p1, i2, p3]
setarrayitem_gc(p1, i2, p3, descr=cdescr)
jump()
""", """
[p1, i2, p3]
cond_call_gc_wb_array(p1, i2, descr=wbdescr)
setarrayitem_gc(p1, i2, p3, descr=cdescr)
jump()
""")
def test_label_makes_size_unknown(self):
self.check_rewrite("""
[i2, p3]
p1 = new_array_clear(5, descr=cdescr)
label(p1, i2, p3)
setarrayitem_gc(p1, i2, p3, descr=cdescr)
jump()
""", """
[i2, p3]
p1 = call_malloc_nursery( \
%(cdescr.basesize + 5 * cdescr.itemsize)d)
setfield_gc(p1, 8111, descr=tiddescr)
setfield_gc(p1, 5, descr=clendescr)
zero_array(p1, 0, 5, descr=cdescr)
label(p1, i2, p3)
cond_call_gc_wb_array(p1, i2, descr=wbdescr)
setarrayitem_gc(p1, i2, p3, descr=cdescr)
jump()
""")
def test_write_barrier_before_setinteriorfield_gc(self):
S1 = lltype.GcStruct('S1')
INTERIOR = lltype.GcArray(('z', lltype.Ptr(S1)))
interiordescr = get_array_descr(self.gc_ll_descr, INTERIOR)
interiordescr.tid = 1291
interiorlendescr = interiordescr.lendescr
interiorzdescr = get_interiorfield_descr(self.gc_ll_descr,
INTERIOR, 'z')
self.check_rewrite("""
[p1, p2]
setinteriorfield_gc(p1, 0, p2, descr=interiorzdescr)
jump(p1, p2)
""", """
[p1, p2]
cond_call_gc_wb_array(p1, 0, descr=wbdescr)
setinteriorfield_gc(p1, 0, p2, descr=interiorzdescr)
jump(p1, p2)
""", interiorzdescr=interiorzdescr)
def test_initialization_store(self):
self.check_rewrite("""
[p1]
p0 = new(descr=tdescr)
setfield_gc(p0, p1, descr=tzdescr)
jump()
""", """
[p1]
p0 = call_malloc_nursery(%(tdescr.size)d)
setfield_gc(p0, 5678, descr=tiddescr)
setfield_gc(p0, p1, descr=tzdescr)
jump()
""")
def test_initialization_store_2(self):
self.check_rewrite("""
[]
p0 = new(descr=tdescr)
p1 = new(descr=sdescr)
setfield_gc(p0, p1, descr=tzdescr)
jump()
""", """
[]
p0 = call_malloc_nursery(%(tdescr.size + sdescr.size)d)
setfield_gc(p0, 5678, descr=tiddescr)
p1 = nursery_ptr_increment(p0, %(tdescr.size)d)
setfield_gc(p1, 1234, descr=tiddescr)
# <<<no cond_call_gc_wb here>>>
setfield_gc(p0, p1, descr=tzdescr)
jump()
""")
def test_initialization_store_array(self):
self.check_rewrite("""
[p1, i2]
p0 = new_array_clear(5, descr=cdescr)
setarrayitem_gc(p0, i2, p1, descr=cdescr)
jump()
""", """
[p1, i2]
p0 = call_malloc_nursery( \
%(cdescr.basesize + 5 * cdescr.itemsize)d)
setfield_gc(p0, 8111, descr=tiddescr)
setfield_gc(p0, 5, descr=clendescr)
zero_array(p0, 0, 5, descr=cdescr)
setarrayitem_gc(p0, i2, p1, descr=cdescr)
jump()
""")
def test_zero_array_reduced_left(self):
self.check_rewrite("""
[p1, p2]
p0 = new_array_clear(5, descr=cdescr)
setarrayitem_gc(p0, 1, p1, descr=cdescr)
setarrayitem_gc(p0, 0, p2, descr=cdescr)
jump()
""", """
[p1, p2]
p0 = call_malloc_nursery( \
%(cdescr.basesize + 5 * cdescr.itemsize)d)
setfield_gc(p0, 8111, descr=tiddescr)
setfield_gc(p0, 5, descr=clendescr)
zero_array(p0, 2, 3, descr=cdescr)
setarrayitem_gc(p0, 1, p1, descr=cdescr)
setarrayitem_gc(p0, 0, p2, descr=cdescr)
jump()
""")
def test_zero_array_reduced_right(self):
self.check_rewrite("""
[p1, p2]
p0 = new_array_clear(5, descr=cdescr)
setarrayitem_gc(p0, 3, p1, descr=cdescr)
setarrayitem_gc(p0, 4, p2, descr=cdescr)
jump()
""", """
[p1, p2]
p0 = call_malloc_nursery( \
%(cdescr.basesize + 5 * cdescr.itemsize)d)
setfield_gc(p0, 8111, descr=tiddescr)
setfield_gc(p0, 5, descr=clendescr)
zero_array(p0, 0, 3, descr=cdescr)
setarrayitem_gc(p0, 3, p1, descr=cdescr)
setarrayitem_gc(p0, 4, p2, descr=cdescr)
jump()
""")
def test_zero_array_not_reduced_at_all(self):
self.check_rewrite("""
[p1, p2]
p0 = new_array_clear(5, descr=cdescr)
setarrayitem_gc(p0, 3, p1, descr=cdescr)
setarrayitem_gc(p0, 2, p2, descr=cdescr)
setarrayitem_gc(p0, 1, p2, descr=cdescr)
jump()
""", """
[p1, p2]
p0 = call_malloc_nursery( \
%(cdescr.basesize + 5 * cdescr.itemsize)d)
setfield_gc(p0, 8111, descr=tiddescr)
setfield_gc(p0, 5, descr=clendescr)
zero_array(p0, 0, 5, descr=cdescr)
setarrayitem_gc(p0, 3, p1, descr=cdescr)
setarrayitem_gc(p0, 2, p2, descr=cdescr)
setarrayitem_gc(p0, 1, p2, descr=cdescr)
jump()
""")
def test_zero_array_reduced_completely(self):
self.check_rewrite("""
[p1, p2]
p0 = new_array_clear(5, descr=cdescr)
setarrayitem_gc(p0, 3, p1, descr=cdescr)
setarrayitem_gc(p0, 4, p2, descr=cdescr)
setarrayitem_gc(p0, 0, p1, descr=cdescr)
setarrayitem_gc(p0, 2, p2, descr=cdescr)
setarrayitem_gc(p0, 1, p2, descr=cdescr)
jump()
""", """
[p1, p2]
p0 = call_malloc_nursery( \
%(cdescr.basesize + 5 * cdescr.itemsize)d)
setfield_gc(p0, 8111, descr=tiddescr)
setfield_gc(p0, 5, descr=clendescr)
zero_array(p0, 5, 0, descr=cdescr)
setarrayitem_gc(p0, 3, p1, descr=cdescr)
setarrayitem_gc(p0, 4, p2, descr=cdescr)
setarrayitem_gc(p0, 0, p1, descr=cdescr)
setarrayitem_gc(p0, 2, p2, descr=cdescr)
setarrayitem_gc(p0, 1, p2, descr=cdescr)
jump()
""")
def test_zero_array_reduced_left_with_call(self):
self.check_rewrite("""
[p1, p2]
p0 = new_array_clear(5, descr=cdescr)
setarrayitem_gc(p0, 0, p1, descr=cdescr)
call_n(321321)
setarrayitem_gc(p0, 1, p2, descr=cdescr)
jump()
""", """
[p1, p2]
p0 = call_malloc_nursery( \
%(cdescr.basesize + 5 * cdescr.itemsize)d)
setfield_gc(p0, 8111, descr=tiddescr)
setfield_gc(p0, 5, descr=clendescr)
zero_array(p0, 1, 4, descr=cdescr)
setarrayitem_gc(p0, 0, p1, descr=cdescr)
call_n(321321)
cond_call_gc_wb(p0, descr=wbdescr)
setarrayitem_gc(p0, 1, p2, descr=cdescr)
jump()
""")
def test_zero_array_reduced_left_with_label(self):
self.check_rewrite("""
[p1, p2]
p0 = new_array_clear(5, descr=cdescr)
setarrayitem_gc(p0, 0, p1, descr=cdescr)
label(p0, p2)
setarrayitem_gc(p0, 1, p2, descr=cdescr)
jump()
""", """
[p1, p2]
p0 = call_malloc_nursery( \
%(cdescr.basesize + 5 * cdescr.itemsize)d)
setfield_gc(p0, 8111, descr=tiddescr)
setfield_gc(p0, 5, descr=clendescr)
zero_array(p0, 1, 4, descr=cdescr)
setarrayitem_gc(p0, 0, p1, descr=cdescr)
label(p0, p2)
cond_call_gc_wb_array(p0, 1, descr=wbdescr)
setarrayitem_gc(p0, 1, p2, descr=cdescr)
jump()
""")
def test_zero_array_varsize(self):
self.check_rewrite("""
[p1, p2, i3]
p0 = new_array_clear(i3, descr=bdescr)
jump()
""", """
[p1, p2, i3]
p0 = call_malloc_nursery_varsize(0, 1, i3, descr=bdescr)
setfield_gc(p0, i3, descr=blendescr)
zero_array(p0, 0, i3, descr=bdescr)
jump()
""")
def test_zero_array_varsize_cannot_reduce(self):
self.check_rewrite("""
[p1, p2, i3]
p0 = new_array_clear(i3, descr=bdescr)
setarrayitem_gc(p0, 0, p1, descr=bdescr)
jump()
""", """
[p1, p2, i3]
p0 = call_malloc_nursery_varsize(0, 1, i3, descr=bdescr)
setfield_gc(p0, i3, descr=blendescr)
zero_array(p0, 0, i3, descr=bdescr)
cond_call_gc_wb_array(p0, 0, descr=wbdescr)
setarrayitem_gc(p0, 0, p1, descr=bdescr)
jump()
""")
def test_initialization_store_potentially_large_array(self):
# the write barrier cannot be omitted, because we might get
# an array with cards and the GC assumes that the write
# barrier is always called, even on young (but large) arrays
self.check_rewrite("""
[i0, p1, i2]
p0 = new_array(i0, descr=bdescr)
setarrayitem_gc(p0, i2, p1, descr=bdescr)
jump()
""", """
[i0, p1, i2]
p0 = call_malloc_nursery_varsize(0, 1, i0, descr=bdescr)
setfield_gc(p0, i0, descr=blendescr)
cond_call_gc_wb_array(p0, i2, descr=wbdescr)
setarrayitem_gc(p0, i2, p1, descr=bdescr)
jump()
""")
def test_non_initialization_store(self):
self.check_rewrite("""
[i0]
p0 = new(descr=tdescr)
p1 = newstr(i0)
setfield_gc(p0, p1, descr=tzdescr)
jump()
""", """
[i0]
p0 = call_malloc_nursery(%(tdescr.size)d)
setfield_gc(p0, 5678, descr=tiddescr)
zero_ptr_field(p0, %(tdescr.gc_fielddescrs[0].offset)s)
p1 = call_malloc_nursery_varsize(1, 1, i0, \
descr=strdescr)
setfield_gc(p1, i0, descr=strlendescr)
setfield_gc(p1, 0, descr=strhashdescr)
cond_call_gc_wb(p0, descr=wbdescr)
setfield_gc(p0, p1, descr=tzdescr)
jump()
""")
def test_non_initialization_store_label(self):
self.check_rewrite("""
[p1]
p0 = new(descr=tdescr)
label(p0, p1)
setfield_gc(p0, p1, descr=tzdescr)
jump()
""", """
[p1]
p0 = call_malloc_nursery(%(tdescr.size)d)
setfield_gc(p0, 5678, descr=tiddescr)
zero_ptr_field(p0, %(tdescr.gc_fielddescrs[0].offset)s)
label(p0, p1)
cond_call_gc_wb(p0, descr=wbdescr)
setfield_gc(p0, p1, descr=tzdescr)
jump()
""")
def test_multiple_writes(self):
self.check_rewrite("""
[p0, p1, p2]
setfield_gc(p0, p1, descr=tzdescr)
setfield_gc(p0, p2, descr=tzdescr)
jump(p1, p2, p0)
""", """
[p0, p1, p2]
cond_call_gc_wb(p0, descr=wbdescr)
setfield_gc(p0, p1, descr=tzdescr)
setfield_gc(p0, p2, descr=tzdescr)
jump(p1, p2, p0)
""")
def test_rewrite_call_assembler(self):
self.check_rewrite("""
[i0, f0]
i2 = call_assembler_i(i0, f0, descr=casmdescr)
""", """
[i0, f0]
i1 = getfield_raw_i(ConstClass(frame_info), descr=jfi_frame_size)
p1 = call_malloc_nursery_varsize_frame(i1)
setfield_gc(p1, 0, descr=tiddescr)
i2 = getfield_raw_i(ConstClass(frame_info), descr=jfi_frame_depth)
setfield_gc(p1, 0, descr=jf_extra_stack_depth)
setfield_gc(p1, NULL, descr=jf_savedata)
setfield_gc(p1, NULL, descr=jf_force_descr)
setfield_gc(p1, NULL, descr=jf_descr)
setfield_gc(p1, NULL, descr=jf_guard_exc)
setfield_gc(p1, NULL, descr=jf_forward)
setfield_gc(p1, i2, descr=framelendescr)
setfield_gc(p1, ConstClass(frame_info), descr=jf_frame_info)
setarrayitem_gc(p1, 0, i0, descr=signedframedescr)
setarrayitem_gc(p1, 1, f0, descr=floatframedescr)
i3 = call_assembler_i(p1, descr=casmdescr)
""")
def test_int_add_ovf(self):
self.check_rewrite("""
[i0]
p0 = new(descr=tdescr)
i1 = int_add_ovf(i0, 123)
guard_overflow(descr=guarddescr) []
jump()
""", """
[i0]
p0 = call_malloc_nursery(%(tdescr.size)d)
setfield_gc(p0, 5678, descr=tiddescr)
zero_ptr_field(p0, %(tdescr.gc_fielddescrs[0].offset)s)
i1 = int_add_ovf(i0, 123)
guard_overflow(descr=guarddescr) []
jump()
""")
def test_int_gt(self):
self.check_rewrite("""
[i0]
p0 = new(descr=tdescr)
i1 = int_gt(i0, 123)
guard_false(i1, descr=guarddescr) []
jump()
""", """
[i0]
p0 = call_malloc_nursery(%(tdescr.size)d)
setfield_gc(p0, 5678, descr=tiddescr)
zero_ptr_field(p0, %(tdescr.gc_fielddescrs[0].offset)s)
i1 = int_gt(i0, 123)
guard_false(i1, descr=guarddescr) []
jump()
""")
def test_zero_ptr_field_before_getfield(self):
# This case may need to be fixed in the metainterp/optimizeopt
# already so that it no longer occurs for rewrite.py. But anyway
# it's a good idea to make sure rewrite.py is correct on its own.
self.check_rewrite("""
[]
p0 = new(descr=tdescr)
p1 = getfield_gc_r(p0, descr=tdescr)
jump(p1)
""", """
[]
p0 = call_malloc_nursery(%(tdescr.size)d)
setfield_gc(p0, 5678, descr=tiddescr)
zero_ptr_field(p0, %(tdescr.gc_fielddescrs[0].offset)s)
p1 = getfield_gc_r(p0, descr=tdescr)
jump(p1)
""")
| 36.811909 | 90 | 0.518885 | 4,203 | 38,947 | 4.564835 | 0.091601 | 0.053685 | 0.037527 | 0.039612 | 0.687324 | 0.624257 | 0.557073 | 0.483425 | 0.457208 | 0.410091 | 0 | 0.048662 | 0.371582 | 38,947 | 1,057 | 91 | 36.846736 | 0.73524 | 0.031248 | 0 | 0.636364 | 0 | 0 | 0.658979 | 0.082488 | 0 | 0 | 0 | 0 | 0.001045 | 1 | 0.064786 | false | 0.00209 | 0.011494 | 0.003135 | 0.100313 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f428973b7e9156b1b01843493a65c906c5b5ba52 | 996 | py | Python | judge/migrations/0024_auto_20200705_0246.py | TheAvidDev/pnoj-site | 63299e873b1fb654667545222ce2b3157e78acd9 | [
"MIT"
] | 2 | 2020-04-02T19:50:03.000Z | 2020-08-06T18:30:25.000Z | judge/migrations/0024_auto_20200705_0246.py | TheAvidDev/pnoj-site | 63299e873b1fb654667545222ce2b3157e78acd9 | [
"MIT"
] | 28 | 2020-03-19T16:29:58.000Z | 2021-09-22T18:47:30.000Z | judge/migrations/0024_auto_20200705_0246.py | TheAvidDev/pnoj-site | 63299e873b1fb654667545222ce2b3157e78acd9 | [
"MIT"
] | 2 | 2020-08-09T06:23:12.000Z | 2020-10-13T00:13:25.000Z | # Generated by Django 3.0.8 on 2020-07-05 02:46
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('judge', '0023_auto_20200704_2318'),
]
operations = [
migrations.AlterField(
model_name='submission',
name='language',
field=models.CharField(choices=[('python3', 'Python 3'), ('java8', 'Java 8'), ('cpp17', 'C++17'), ('haskell', 'Haskell'), ('brainfuck', 'Brainfuck'), ('c18', 'C18'), ('java11', 'Java 11'), ('scratch', 'Scratch'), ('text', 'Text')], max_length=10, null=True),
),
migrations.AlterField(
model_name='user',
name='main_language',
field=models.CharField(choices=[('python3', 'Python 3'), ('java8', 'Java 8'), ('cpp17', 'C++17'), ('haskell', 'Haskell'), ('brainfuck', 'Brainfuck'), ('c18', 'C18'), ('java11', 'Java 11'), ('scratch', 'Scratch'), ('text', 'Text')], default='python3', max_length=10),
),
]
| 41.5 | 278 | 0.564257 | 106 | 996 | 5.226415 | 0.528302 | 0.072202 | 0.090253 | 0.104693 | 0.501805 | 0.501805 | 0.501805 | 0.501805 | 0.501805 | 0.501805 | 0 | 0.086735 | 0.212851 | 996 | 23 | 279 | 43.304348 | 0.619898 | 0.045181 | 0 | 0.235294 | 1 | 0 | 0.303477 | 0.024236 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f429e1e71ee50e3cb36b3bd6d0606c845af7b2a3 | 3,010 | py | Python | saleor/product/migrations/0141_update_descritpion_fields.py | fairhopeweb/saleor | 9ac6c22652d46ba65a5b894da5f1ba5bec48c019 | [
"CC-BY-4.0"
] | 15,337 | 2015-01-12T02:11:52.000Z | 2021-10-05T19:19:29.000Z | saleor/product/migrations/0141_update_descritpion_fields.py | fairhopeweb/saleor | 9ac6c22652d46ba65a5b894da5f1ba5bec48c019 | [
"CC-BY-4.0"
] | 7,486 | 2015-02-11T10:52:13.000Z | 2021-10-06T09:37:15.000Z | saleor/product/migrations/0141_update_descritpion_fields.py | aminziadna/saleor | 2e78fb5bcf8b83a6278af02551a104cfa555a1fb | [
"CC-BY-4.0"
] | 5,864 | 2015-01-16T14:52:54.000Z | 2021-10-05T23:01:15.000Z | # Generated by Django 3.1.5 on 2021-02-17 11:04
from django.db import migrations
import saleor.core.db.fields
import saleor.core.utils.editorjs
def update_empty_description_field(apps, schema_editor):
Category = apps.get_model("product", "Category")
CategoryTranslation = apps.get_model("product", "CategoryTranslation")
Collection = apps.get_model("product", "Collection")
CollectionTranslation = apps.get_model("product", "CollectionTranslation")
Product = apps.get_model("product", "Product")
ProductTranslation = apps.get_model("product", "ProductTranslation")
models = [
Category,
CategoryTranslation,
Collection,
CollectionTranslation,
Product,
ProductTranslation,
]
for model in models:
model.objects.filter(description={}).update(description=None)
class Migration(migrations.Migration):
dependencies = [
("product", "0140_auto_20210125_0905"),
]
operations = [
migrations.AlterField(
model_name="category",
name="description",
field=saleor.core.db.fields.SanitizedJSONField(
blank=True,
null=True,
sanitizer=saleor.core.utils.editorjs.clean_editor_js,
),
),
migrations.AlterField(
model_name="categorytranslation",
name="description",
field=saleor.core.db.fields.SanitizedJSONField(
blank=True,
null=True,
sanitizer=saleor.core.utils.editorjs.clean_editor_js,
),
),
migrations.AlterField(
model_name="collection",
name="description",
field=saleor.core.db.fields.SanitizedJSONField(
blank=True,
null=True,
sanitizer=saleor.core.utils.editorjs.clean_editor_js,
),
),
migrations.AlterField(
model_name="collectiontranslation",
name="description",
field=saleor.core.db.fields.SanitizedJSONField(
blank=True,
null=True,
sanitizer=saleor.core.utils.editorjs.clean_editor_js,
),
),
migrations.AlterField(
model_name="product",
name="description",
field=saleor.core.db.fields.SanitizedJSONField(
blank=True,
null=True,
sanitizer=saleor.core.utils.editorjs.clean_editor_js,
),
),
migrations.AlterField(
model_name="producttranslation",
name="description",
field=saleor.core.db.fields.SanitizedJSONField(
blank=True,
null=True,
sanitizer=saleor.core.utils.editorjs.clean_editor_js,
),
),
migrations.RunPython(
update_empty_description_field,
migrations.RunPython.noop,
),
]
| 31.354167 | 78 | 0.580066 | 259 | 3,010 | 6.610039 | 0.239382 | 0.081776 | 0.049065 | 0.073598 | 0.504089 | 0.504089 | 0.504089 | 0.504089 | 0.504089 | 0.504089 | 0 | 0.015166 | 0.32093 | 3,010 | 95 | 79 | 31.684211 | 0.822407 | 0.01495 | 0 | 0.583333 | 1 | 0 | 0.102599 | 0.021937 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011905 | false | 0 | 0.035714 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f42c731576acb55056eef2a6a2b894f6ff9cf5c6 | 656 | py | Python | torch/_VF.py | Hacky-DH/pytorch | 80dc4be615854570aa39a7e36495897d8a040ecc | [
"Intel"
] | 60,067 | 2017-01-18T17:21:31.000Z | 2022-03-31T21:37:45.000Z | torch/_VF.py | Hacky-DH/pytorch | 80dc4be615854570aa39a7e36495897d8a040ecc | [
"Intel"
] | 66,955 | 2017-01-18T17:21:38.000Z | 2022-03-31T23:56:11.000Z | torch/_VF.py | Hacky-DH/pytorch | 80dc4be615854570aa39a7e36495897d8a040ecc | [
"Intel"
] | 19,210 | 2017-01-18T17:45:04.000Z | 2022-03-31T23:51:56.000Z | """
This makes the functions in torch._C._VariableFunctions available as
torch._VF.<funcname>
without mypy being able to find them.
A subset of those functions are mapped to ATen functions in
torch/jit/_builtins.py
See https://github.com/pytorch/pytorch/issues/21478 for the reason for
introducing torch._VF
"""
import torch
import sys
import types
class VFModule(types.ModuleType):
vf: types.ModuleType
def __init__(self, name):
super(VFModule, self).__init__(name)
self.vf = torch._C._VariableFunctions
def __getattr__(self, attr):
return getattr(self.vf, attr)
sys.modules[__name__] = VFModule(__name__)
| 21.866667 | 70 | 0.73628 | 91 | 656 | 5.010989 | 0.593407 | 0.048246 | 0.070175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009225 | 0.17378 | 656 | 29 | 71 | 22.62069 | 0.832103 | 0.471037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0.090909 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f42d1600d0b6bc46f53578838228c289c55fcb61 | 342 | py | Python | src/catalog/migrations/0003_remove_productattributevalue_name.py | earth-emoji/dennea | fbabd7d9ecc95898411aba238bbcca8b5e942c31 | [
"BSD-3-Clause"
] | null | null | null | src/catalog/migrations/0003_remove_productattributevalue_name.py | earth-emoji/dennea | fbabd7d9ecc95898411aba238bbcca8b5e942c31 | [
"BSD-3-Clause"
] | 13 | 2019-12-09T02:38:36.000Z | 2022-03-12T00:33:57.000Z | src/catalog/migrations/0003_remove_productattributevalue_name.py | earth-emoji/dennea | fbabd7d9ecc95898411aba238bbcca8b5e942c31 | [
"BSD-3-Clause"
] | null | null | null | # Generated by Django 2.2.12 on 2020-06-10 01:11
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('catalog', '0002_auto_20200610_0019'),
]
operations = [
migrations.RemoveField(
model_name='productattributevalue',
name='name',
),
]
| 19 | 48 | 0.608187 | 35 | 342 | 5.828571 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130612 | 0.283626 | 342 | 17 | 49 | 20.117647 | 0.702041 | 0.134503 | 0 | 0 | 1 | 0 | 0.187075 | 0.14966 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f436146fcd68e0fffec8d89af9ff63a4a2a79aad | 7,980 | py | Python | src/models/train_search_multi_deep.py | smadha/MlTrio | a7269fc4c6d77b2f71432ab9d2ab8fe4e28234d5 | [
"Apache-2.0"
] | null | null | null | src/models/train_search_multi_deep.py | smadha/MlTrio | a7269fc4c6d77b2f71432ab9d2ab8fe4e28234d5 | [
"Apache-2.0"
] | null | null | null | src/models/train_search_multi_deep.py | smadha/MlTrio | a7269fc4c6d77b2f71432ab9d2ab8fe4e28234d5 | [
"Apache-2.0"
] | null | null | null | '''
Uses flattened features in the feature directory and grid-searches a deep neural network on them
'''
from keras.layers import Dense
from keras.models import Sequential
import keras.regularizers as Reg
from keras.optimizers import SGD, RMSprop
from keras.callbacks import EarlyStopping
import cPickle as pickle
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
import theano
from models.down_sampling import balanced_subsample
theano.config.openmp = True
import os
os.environ['OMP_NUM_THREADS'] = '16'  # OpenMP reads the environment variable (ideally set before importing theano); a bare Python assignment has no effect
users_va_te_dict = dict([ (v,idx) for (idx,v) in enumerate(pickle.load(open("../../bytecup2016data/users_va_te.p"))) ])
print "users_va_te_dict created ", len(users_va_te_dict)
def normalize(X_tr):
''' Normalize training and test data features
Args:
X_tr: Unnormalized training features
Output:
X_tr: Normalized training features
'''
X_mu = np.mean(X_tr, axis=0)
X_tr = X_tr - X_mu
X_sig = np.std(X_tr, axis=0)
X_tr = X_tr/X_sig
return X_tr, X_mu, X_sig
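`normalize` is plain per-column z-scoring. A dependency-free sketch of the property it guarantees (zero mean, unit variance per feature column); the helper name is illustrative:

```python
def zscore(column):
    """Z-score one feature column: subtract the mean, divide by the std."""
    mu = sum(column) / float(len(column))
    var = sum((x - mu) ** 2 for x in column) / float(len(column))
    sig = var ** 0.5
    return [(x - mu) / sig for x in column], mu, sig

normed, mu, sig = zscore([1.0, 3.0, 5.0])
print(mu)                        # 3.0
print(abs(sum(normed)) < 1e-9)   # True: the normalized column has mean 0
```

The returned `X_mu` and `X_sig` must be saved (as the training script does below via `train_config`) so that test-time features can be shifted and scaled with the training statistics rather than their own.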
def genmodel(num_units, actfn='relu', reg_coeff=0.0, last_act='softmax'):
''' Generate a neural network model of approporiate architecture
Args:
num_units: architecture of network in the format [n1, n2, ... , nL]
actfn: activation function for hidden layers ('relu'/'sigmoid'/'linear'/'softmax')
reg_coeff: L2-regularization coefficient
last_act: activation function for final layer ('relu'/'sigmoid'/'linear'/'softmax')
Output:
model: Keras sequential model with appropriate fully-connected architecture
'''
model = Sequential()
for i in range(1, len(num_units)):
if i == 1 and i < len(num_units) - 1:
model.add(Dense(input_dim=num_units[0], output_dim=num_units[i], activation=actfn,
W_regularizer=Reg.l2(l=reg_coeff), init='glorot_normal'))
elif i == 1 and i == len(num_units) - 1:
model.add(Dense(input_dim=num_units[0], output_dim=num_units[i], activation=last_act,
W_regularizer=Reg.l2(l=reg_coeff), init='glorot_normal'))
elif i < len(num_units) - 1:
model.add(Dense(output_dim=num_units[i], activation=actfn,
W_regularizer=Reg.l2(l=reg_coeff), init='glorot_normal'))
elif i == len(num_units) - 1:
model.add(Dense(output_dim=num_units[i], activation=last_act,
W_regularizer=Reg.l2(l=reg_coeff), init='glorot_normal'))
return model
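`genmodel` stacks fully-connected layers from an architecture list `[n1, ..., nL]`. A quick way to sanity-check the size of such an architecture (weights plus biases per Dense layer) without constructing the Keras model; the helper is a sketch, not part of the original script:

```python
def dense_param_count(num_units):
    """Trainable parameters of a fully-connected net: n_prev*n + n per layer."""
    return sum(num_units[i - 1] * num_units[i] + num_units[i]
               for i in range(1, len(num_units)))

print(dense_param_count([4, 8, 2]))  # 4*8+8 + 8*2+2 = 58
```

This is useful before launching the grid search below, since the widest architectures (input -> 1024 -> 1024 -> output) dominate memory and training time.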
def transform_label(labels):
labels_new_arr = []
for idx,label in enumerate(labels):
label_new = [0] * len(users_va_te_dict) * 2
        # two interleaved slots per user: 2*idx for label "0", 2*idx + 1 for label "1"
        if label[1] == '0' :
            label_new[ 2 * users_va_te_dict[label[0]] ] = 1
        else :
            label_new[ 2 * users_va_te_dict[label[0]] + 1 ] = 1
labels_new_arr.append(label_new)
# if (idx+1) % 1000 == 0:
# break
print "labels_new_arr created" , len(labels_new_arr)
return labels_new_arr
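The two-slot-per-user one-hot layout that `transform_label` targets can be sketched independently of the pickled user dictionary (the toy user index and names below are illustrative only):

```python
def encode_labels(pairs, user_index):
    """Encode (user_id, label) pairs as one-hot vectors, two slots per user.

    Slot 2*i means "user i answered 0"; slot 2*i + 1 means "user i answered 1".
    """
    encoded = []
    for user_id, label in pairs:
        vec = [0] * (2 * len(user_index))
        base = 2 * user_index[user_id]
        vec[base if label == '0' else base + 1] = 1
        encoded.append(vec)
    return encoded

users = {'u1': 0, 'u2': 1}
print(encode_labels([('u1', '0'), ('u2', '1')], users))
# [[1, 0, 0, 0], [0, 0, 0, 1]]
```

Interleaving the two slots keeps each user's pair of outputs adjacent, which is what `original_label` relies on when it maps a one-hot row back to a single class index with `l.index(1)`.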
def original_label(label):
return [ l.index(1) for l in label]
def get_transform_label():
'''
Returns list of labels as list of [0/1 , 1/0]
if label = 1 [0, 1]
if label = 0 [1, 0]
'''
count = 0
users_order = []
##features to be deleted
del_rows = []
with open("../../bytecup2016data/invited_info_train_PROC.txt","r") as f:
training_data = f.readline().strip().split("\t")
while training_data and len(training_data) >= 2 :
user_id = training_data[1]
label = training_data[2]
if user_id in users_va_te_dict:
users_order.append((user_id,label) )
else:
del_rows.append(count)
count += 1
training_data = f.readline().strip().split("\t")
f.close()
print "users_order created ", len(users_order), len(del_rows)
return transform_label(users_order), del_rows
features = pickle.load( open("../feature_engg/feature/all_features.p", "rb") )
labels, del_rows = get_transform_label()
# features = np.random.normal(size=(26796,3))
# labels, del_rows = get_transform_label()
print len(features),len(features[0])
print len(labels),len(labels[0])
features = np.array(features)
features = np.delete(features, del_rows, axis=0)
col_deleted = np.nonzero((features==0).sum(axis=0) > (len(features)-1000))
# col_deleted = col_deleted[0].tolist() + range(6,22) + range(28,44)
print col_deleted
features = np.delete(features, col_deleted, axis=1)
print len(features),len(features[0])
print len(labels),len(labels[0])
features, X_mu, X_sig = normalize(features)
save_res = {"col_deleted":col_deleted,"X_mu":X_mu,"X_sig":X_sig}
with open("model/train_config", 'wb') as pickle_file:
pickle.dump(save_res, pickle_file, protocol=2)
print "Dumped config"
momentum = 0.99
eStop = True
sgd_Nesterov = True
sgd_lr = 1e-5
batch_size=5000
nb_epoch=100
verbose=True
features_tr, features_te, labels_tr, labels_te = train_test_split(features, labels, train_size=0.85)
features, labels = [], []  # free the full copies only after the split; clearing them first would split empty lists
print "Using separate test data", len(features_tr), len(features_te)
def run_NN(arch, reg_coeff, sgd_decay, subsample_size=0, save=False):
# features_tr, labels_tr = balanced_subsample(features_tr, original_label(labels_tr), subsample_size = subsample_size)
# labels_tr = transform_label(labels_tr)
# print "Training data balanced-", features_tr.shape, len(labels_tr)
call_ES = EarlyStopping(monitor='val_acc', patience=3, verbose=1, mode='auto')
# Generate Model
model = genmodel(num_units=arch, reg_coeff=reg_coeff )
# Compile Model
sgd = SGD(lr=sgd_lr, decay=sgd_decay, momentum=momentum,
nesterov=sgd_Nesterov)
# sgd = RMSprop(lr=sgd_lr, rho=0.9, epsilon=1e-08, decay=sgd_decay)
model.compile(loss='MSE', optimizer=sgd,
metrics=['accuracy'])
# Train Model
if eStop:
model.fit(features_tr, labels_tr, nb_epoch=nb_epoch, batch_size=batch_size,
verbose=verbose, callbacks=[call_ES], validation_split=0.1,
validation_data=None, shuffle=True)
else:
model.fit(features_tr, labels_tr, nb_epoch=nb_epoch, batch_size=batch_size,
verbose=verbose)
labels_pred = model.predict_classes(features_te)
print len(labels_te[0]), labels_pred[0]
y_true, y_pred = original_label(labels_te), labels_pred
print y_true[0], y_pred[0]
print "arch, reg_coeff, sgd_decay, subsample_size", arch, reg_coeff, sgd_decay, subsample_size
macro_rep = f1_score(y_true, y_pred, average = 'macro')
print "macro", macro_rep
weighted_report = f1_score(y_true, y_pred, average = 'weighted')
print "weighted", weighted_report
with open("results_search_multi_deep.txt", "a") as f:
f.write("macro_rep- "+str(macro_rep))
f.write("\n")
f.write("weighted_report- "+str(weighted_report))
f.write("\n")
f.write(" ".join([str(s) for s in ["arch, reg_coeff, sgd_decay, subsample_size", arch, reg_coeff, sgd_decay, subsample_size]]))
f.write("\n")
if save:
# Save model
model.save("model/model_deep.h5")
print("Saved model to disk")
arch_range = [[len(features_tr[0]),1024,len(labels_tr[0])], [len(features_tr[0]),1024,512,len(labels_tr[0])], [len(features_tr[0]),1024,1024,len(labels_tr[0])],[len(features_tr[0]),1024,512,256,len(labels_tr[0])]]
reg_coeffs_range = [1e-6, 5e-6, 1e-5, 5e-5, 5e-4 ]
sgd_decays_range = [1e-6, 1e-5, 5e-5, 1e-4, 5e-4 ]
class_weight_0_range = [1]
# subsample_size_range = [2,2.5,3]
#GRID SEARCH ON BEST PARAM
for arch in arch_range:
for reg_coeff in reg_coeffs_range:
for sgd_decay in sgd_decays_range:
# for subsample_size in subsample_size_range:
run_NN(arch, reg_coeff, sgd_decay)
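The nested loops above enumerate the Cartesian product of the hyperparameter ranges; `itertools.product` expresses the same sweep more compactly (toy values below, not the ranges used in the script):

```python
import itertools

archs = ['a1', 'a2']
reg_coeffs = [1e-6, 1e-5]
decays = [1e-6, 1e-4]

combos = list(itertools.product(archs, reg_coeffs, decays))
print(len(combos))  # 2 * 2 * 2 = 8
print(combos[0])    # ('a1', 1e-06, 1e-06)
```

With the real ranges (4 architectures x 5 regularization coefficients x 5 decays) the sweep trains 100 models, so the early-stopping callback in `run_NN` matters for keeping the search tractable.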
# arch = [len(features[0]),1024,512,2]
# reg_coeff = 1e-05
# sgd_decay = 1e-05
# class_weight_0 = 0.5
| 34.847162 | 213 | 0.663033 | 1,197 | 7,980 | 4.183793 | 0.223058 | 0.025559 | 0.014377 | 0.018171 | 0.276957 | 0.259784 | 0.244609 | 0.209665 | 0.209665 | 0.186502 | 0 | 0.032058 | 0.210401 | 7,980 | 228 | 214 | 35 | 0.762736 | 0.097118 | 0 | 0.133333 | 0 | 0 | 0.090192 | 0.02381 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.081481 | null | null | 0.118519 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f4418c7fe5090cc1ad72d42e956421d4fcbc0d8c | 5,253 | py | Python | transformers/tests/tokenization_xlnet_test.py | deepbluesea/transformers | 11a2317986aad6e9a72f542e31344cfb7c94cbab | [
"Apache-2.0"
] | 270 | 2020-04-26T17:54:36.000Z | 2022-03-24T20:47:11.000Z | transformers/tests/tokenization_xlnet_test.py | deepbluesea/transformers | 11a2317986aad6e9a72f542e31344cfb7c94cbab | [
"Apache-2.0"
] | 27 | 2020-06-03T17:34:41.000Z | 2022-03-31T01:17:34.000Z | transformers/tests/tokenization_xlnet_test.py | deepbluesea/transformers | 11a2317986aad6e9a72f542e31344cfb7c94cbab | [
"Apache-2.0"
] | 61 | 2020-04-25T21:48:11.000Z | 2022-03-23T02:39:10.000Z | # coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import unittest
from transformers.tokenization_xlnet import (XLNetTokenizer, SPIECE_UNDERLINE)
from .tokenization_tests_commons import CommonTestCases
SAMPLE_VOCAB = os.path.join(os.path.dirname(os.path.abspath(__file__)),
'fixtures/test_sentencepiece.model')
class XLNetTokenizationTest(CommonTestCases.CommonTokenizerTester):
tokenizer_class = XLNetTokenizer
def setUp(self):
super(XLNetTokenizationTest, self).setUp()
# We have a SentencePiece fixture for testing
tokenizer = XLNetTokenizer(SAMPLE_VOCAB, keep_accents=True)
tokenizer.save_pretrained(self.tmpdirname)
def get_tokenizer(self, **kwargs):
return XLNetTokenizer.from_pretrained(self.tmpdirname, **kwargs)
def get_input_output_texts(self):
input_text = u"This is a test"
output_text = u"This is a test"
return input_text, output_text
def test_full_tokenizer(self):
tokenizer = XLNetTokenizer(SAMPLE_VOCAB, keep_accents=True)
tokens = tokenizer.tokenize(u'This is a test')
self.assertListEqual(tokens, [u'▁This', u'▁is', u'▁a', u'▁t', u'est'])
self.assertListEqual(
tokenizer.convert_tokens_to_ids(tokens), [285, 46, 10, 170, 382])
tokens = tokenizer.tokenize(u"I was born in 92000, and this is falsé.")
self.assertListEqual(tokens, [SPIECE_UNDERLINE + u'I', SPIECE_UNDERLINE + u'was', SPIECE_UNDERLINE + u'b',
u'or', u'n', SPIECE_UNDERLINE + u'in', SPIECE_UNDERLINE + u'',
u'9', u'2', u'0', u'0', u'0', u',', SPIECE_UNDERLINE + u'and', SPIECE_UNDERLINE + u'this',
SPIECE_UNDERLINE + u'is', SPIECE_UNDERLINE + u'f', u'al', u's', u'é', u'.'])
ids = tokenizer.convert_tokens_to_ids(tokens)
self.assertListEqual(
ids, [8, 21, 84, 55, 24, 19, 7, 0,
602, 347, 347, 347, 3, 12, 66,
46, 72, 80, 6, 0, 4])
back_tokens = tokenizer.convert_ids_to_tokens(ids)
self.assertListEqual(back_tokens, [SPIECE_UNDERLINE + u'I', SPIECE_UNDERLINE + u'was', SPIECE_UNDERLINE + u'b',
u'or', u'n', SPIECE_UNDERLINE + u'in',
SPIECE_UNDERLINE + u'', u'<unk>', u'2', u'0', u'0', u'0', u',',
SPIECE_UNDERLINE + u'and', SPIECE_UNDERLINE + u'this',
SPIECE_UNDERLINE + u'is', SPIECE_UNDERLINE + u'f', u'al', u's',
u'<unk>', u'.'])
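The `▁` (U+2581) marker asserted throughout this test is SentencePiece's word-boundary symbol: every token that starts a new word carries it, so detokenization is just concatenation plus replacing the marker with spaces. A minimal sketch of that convention (the helper is illustrative, not part of the `transformers` API):

```python
SPIECE_UNDERLINE = u'\u2581'  # the "lower one eighth block" character

def detokenize(pieces):
    """Join SentencePiece tokens back into text: the marker opens a word."""
    return ''.join(pieces).replace(SPIECE_UNDERLINE, ' ').strip()

print(detokenize([u'\u2581This', u'\u2581is', u'\u2581a', u'\u2581t', u'est']))
# This is a test
```

This also explains why `u'est'` above has no marker: it continues the word started by `u'▁t'`.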
def test_tokenizer_lower(self):
tokenizer = XLNetTokenizer(SAMPLE_VOCAB, do_lower_case=True)
tokens = tokenizer.tokenize(u"I was born in 92000, and this is falsé.")
self.assertListEqual(tokens, [SPIECE_UNDERLINE + u'', u'i', SPIECE_UNDERLINE + u'was', SPIECE_UNDERLINE + u'b',
u'or', u'n', SPIECE_UNDERLINE + u'in', SPIECE_UNDERLINE + u'',
u'9', u'2', u'0', u'0', u'0', u',', SPIECE_UNDERLINE + u'and', SPIECE_UNDERLINE + u'this',
SPIECE_UNDERLINE + u'is', SPIECE_UNDERLINE + u'f', u'al', u'se', u'.'])
self.assertListEqual(tokenizer.tokenize(u"H\u00E9llo"), [u"▁he", u"ll", u"o"])
def test_tokenizer_no_lower(self):
tokenizer = XLNetTokenizer(SAMPLE_VOCAB, do_lower_case=False)
tokens = tokenizer.tokenize(u"I was born in 92000, and this is falsé.")
self.assertListEqual(tokens, [SPIECE_UNDERLINE + u'I', SPIECE_UNDERLINE + u'was', SPIECE_UNDERLINE + u'b', u'or',
u'n', SPIECE_UNDERLINE + u'in', SPIECE_UNDERLINE + u'',
u'9', u'2', u'0', u'0', u'0', u',', SPIECE_UNDERLINE + u'and', SPIECE_UNDERLINE + u'this',
SPIECE_UNDERLINE + u'is', SPIECE_UNDERLINE + u'f', u'al', u'se', u'.'])
def test_sequence_builders(self):
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
text = tokenizer.encode("sequence builders")
text_2 = tokenizer.encode("multi-sequence build")
encoded_sentence = tokenizer.add_special_tokens_single_sequence(text)
encoded_pair = tokenizer.add_special_tokens_sequence_pair(text, text_2)
assert encoded_sentence == text + [4, 3]
assert encoded_pair == text + [4] + text_2 + [4, 3]
if __name__ == '__main__':
unittest.main()
| 49.093458 | 128 | 0.61146 | 682 | 5,253 | 4.543988 | 0.285924 | 0.17909 | 0.185866 | 0.010326 | 0.426267 | 0.412391 | 0.380768 | 0.349145 | 0.349145 | 0.314295 | 0 | 0.027561 | 0.267847 | 5,253 | 106 | 129 | 49.556604 | 0.776911 | 0.119931 | 0 | 0.208955 | 0 | 0 | 0.090297 | 0.007163 | 0 | 0 | 0 | 0 | 0.149254 | 1 | 0.104478 | false | 0 | 0.074627 | 0.014925 | 0.238806 | 0.014925 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f4430a5ed7a70794aa650554ee2233f1a76e4ce7 | 1,362 | py | Python | Bot/db_aps.py | FaHoLo/Fish_shop | b08018223705bca169dab9f39ec5a55f62822f0b | [
"MIT"
] | null | null | null | Bot/db_aps.py | FaHoLo/Fish_shop | b08018223705bca169dab9f39ec5a55f62822f0b | [
"MIT"
] | null | null | null | Bot/db_aps.py | FaHoLo/Fish_shop | b08018223705bca169dab9f39ec5a55f62822f0b | [
"MIT"
] | null | null | null | import logging
import os
import redis
import moltin_aps
_database = None
db_logger = logging.getLogger('db_logger')
async def get_database_connection():
global _database
if _database is None:
database_password = os.getenv('DB_PASSWORD')
database_host = os.getenv('DB_HOST')
database_port = os.getenv('DB_PORT')
_database = redis.Redis(host=database_host, port=database_port, password=database_password)
db_logger.debug('Got new db connection')
return _database
async def get_moltin_customer_id(customer_key):
db = await get_database_connection()
customer_id = db.get(customer_key)
if customer_id:
customer_id = customer_id.decode('utf-8')
db_logger.debug(f'Got moltin customer id «{customer_id}» from db')
return customer_id
async def update_customer_info(customer_key, customer_info):
db = await get_database_connection()
customer_id = db.get(customer_key).decode('utf-8')
moltin_aps.update_customer_info(customer_id, customer_info)
db_logger.debug(f'Customer «{customer_id}» info was updated')
async def create_customer(customer_key, customer_info):
db = await get_database_connection()
customer_id = moltin_aps.create_customer(customer_info)['data']['id']
db.set(customer_key, customer_id)
db_logger.debug(f'New customer «{customer_key}» was created')
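redis-py returns raw bytes from `get`, which is why the lookups above call `.decode('utf-8')` before using the value. The pattern without a live Redis server (`FakeRedis` here is a hypothetical minimal stub, not part of redis-py):

```python
class FakeRedis:
    """Minimal stand-in mimicking redis-py's bytes-in/bytes-out behaviour."""
    def __init__(self):
        self._store = {}

    def set(self, key, value):
        # redis stores values as byte strings
        self._store[key] = str(value).encode('utf-8')

    def get(self, key):
        # missing keys come back as None, present keys as bytes
        return self._store.get(key)

db = FakeRedis()
db.set('customer:42', 'abc-123')
raw = db.get('customer:42')
print(raw)                  # b'abc-123'
print(raw.decode('utf-8'))  # abc-123
```

The `None`-for-missing-keys behaviour is also why `get_moltin_customer_id` guards the decode with `if customer_id:` before returning.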
| 29.608696 | 99 | 0.737885 | 195 | 1,362 | 4.866667 | 0.230769 | 0.136986 | 0.088514 | 0.056902 | 0.202318 | 0.202318 | 0.202318 | 0.202318 | 0.202318 | 0.202318 | 0 | 0.001759 | 0.165198 | 1,362 | 45 | 100 | 30.266667 | 0.827617 | 0 | 0 | 0.09375 | 0 | 0 | 0.146109 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0.0625 | 0.125 | 0 | 0.28125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f4496e9806f5e5781ad656efc22821170a6cd22c | 3,702 | py | Python | tests/unit/discovery/test_py_spec.py | xavfernandez/virtualenv | dd37c7d2af8a21026f4d4b7f43142e4e1e0faf86 | [
"MIT"
] | 1 | 2020-02-25T15:08:59.000Z | 2020-02-25T15:08:59.000Z | tests/unit/discovery/test_py_spec.py | xavfernandez/virtualenv | dd37c7d2af8a21026f4d4b7f43142e4e1e0faf86 | [
"MIT"
] | null | null | null | tests/unit/discovery/test_py_spec.py | xavfernandez/virtualenv | dd37c7d2af8a21026f4d4b7f43142e4e1e0faf86 | [
"MIT"
] | null | null | null | from __future__ import absolute_import, unicode_literals
import itertools
import os
import sys
from copy import copy
import pytest
from virtualenv.discovery.py_spec import PythonSpec
def test_bad_py_spec():
text = "python2.3.4.5"
spec = PythonSpec.from_string_spec(text)
assert text in repr(spec)
assert spec.str_spec == text
assert spec.path == os.path.abspath(text)
content = vars(spec)
del content[str("str_spec")]
del content[str("path")]
assert all(v is None for v in content.values())
def test_py_spec_first_digit_only_major():
spec = PythonSpec.from_string_spec("278")
assert spec.major == 2
assert spec.minor == 78
def test_spec_satisfies_path_ok():
spec = PythonSpec.from_string_spec(sys.executable)
assert spec.satisfies(spec) is True
def test_spec_satisfies_path_nok(tmp_path):
spec = PythonSpec.from_string_spec(sys.executable)
of = PythonSpec.from_string_spec(str(tmp_path))
assert spec.satisfies(of) is False
def test_spec_satisfies_arch():
spec_1 = PythonSpec.from_string_spec("python-32")
spec_2 = PythonSpec.from_string_spec("python-64")
assert spec_1.satisfies(spec_1) is True
assert spec_2.satisfies(spec_1) is False
@pytest.mark.parametrize(
"req, spec",
list(itertools.combinations(["py", "CPython", "python"], 2)) + [("jython", "jython")] + [("CPython", "cpython")],
)
def test_spec_satisfies_implementation_ok(req, spec):
spec_1 = PythonSpec.from_string_spec(req)
spec_2 = PythonSpec.from_string_spec(spec)
assert spec_1.satisfies(spec_1) is True
assert spec_2.satisfies(spec_1) is True
def test_spec_satisfies_implementation_nok():
spec_1 = PythonSpec.from_string_spec("python")
spec_2 = PythonSpec.from_string_spec("jython")
assert spec_2.satisfies(spec_1) is False
assert spec_1.satisfies(spec_2) is False
def _version_satisfies_pairs():
target = set()
version = tuple(str(i) for i in sys.version_info[0:3])
for i in range(len(version) + 1):
req = ".".join(version[0:i])
for j in range(i + 1):
sat = ".".join(version[0:j])
# can be satisfied in both directions
target.add((req, sat))
target.add((sat, req))
return sorted(target)
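The pairs generated above encode prefix semantics: a spec like `python3.7` is satisfied by any interpreter whose version tuple starts with `(3, 7)`, in either direction of specificity. The core check can be sketched as (helper name is illustrative, not virtualenv's API):

```python
def version_satisfies(req, have):
    """True if the full version tuple `have` matches the `req` prefix."""
    return have[:len(req)] == tuple(req)

print(version_satisfies((3,), (3, 7, 9)))    # True
print(version_satisfies((3, 8), (3, 7, 9)))  # False
```

`_version_not_satisfies_pairs` below builds its negative cases by perturbing exactly one component of such a prefix by +/-1, which guarantees the prefix comparison fails.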
@pytest.mark.parametrize("req, spec", _version_satisfies_pairs())
def test_version_satisfies_ok(req, spec):
req_spec = PythonSpec.from_string_spec("python{}".format(req))
sat_spec = PythonSpec.from_string_spec("python{}".format(spec))
assert sat_spec.satisfies(req_spec) is True
def _version_not_satisfies_pairs():
target = set()
version = tuple(str(i) for i in sys.version_info[0:3])
for i in range(len(version)):
req = ".".join(version[0 : i + 1])
for j in range(i + 1):
sat_ver = list(sys.version_info[0 : j + 1])
for l in range(j + 1):
for o in [1, -1]:
temp = copy(sat_ver)
temp[l] += o
sat = ".".join(str(i) for i in temp)
target.add((req, sat))
return sorted(target)
@pytest.mark.parametrize("req, spec", _version_not_satisfies_pairs())
def test_version_satisfies_nok(req, spec):
req_spec = PythonSpec.from_string_spec("python{}".format(req))
sat_spec = PythonSpec.from_string_spec("python{}".format(spec))
assert sat_spec.satisfies(req_spec) is False
def test_relative_spec(tmp_path, monkeypatch):
monkeypatch.chdir(tmp_path)
a_relative_path = str((tmp_path / "a" / "b").relative_to(tmp_path))
spec = PythonSpec.from_string_spec(a_relative_path)
assert spec.path == os.path.abspath(str(tmp_path / a_relative_path))
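The pair generators above encode a prefix rule: two version specs mutually satisfy each other when every component that both sides specify matches. A standalone distillation of that rule (this `satisfies` is a simplified stand-in for illustration, not the real `PythonSpec.satisfies`):

```python
def satisfies(spec, req):
    """Simplified prefix rule: compare only the components both sides specify."""
    return all(s == r for s, r in zip(spec, req))

print(satisfies((3, 7, 9), (3, 7)))   # True: "3.7.9" satisfies "3.7"
print(satisfies((3, 7), (3, 7, 9)))   # True: satisfiable in both directions
print(satisfies((3, 8), (3, 7, 9)))   # False: a specified component differs
```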
f450452cbcef41209866e35540c53f785f67820d | 1,183 | py | Python | Scripts/simulation/careers/detective/detective_crime_scene.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | ["Apache-2.0"] | null | null | null | Scripts/simulation/careers/detective/detective_crime_scene.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | ["Apache-2.0"] | null | null | null | Scripts/simulation/careers/detective/detective_crime_scene.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | ["Apache-2.0"] | null | null | null |
# uncompyle6 version 3.7.4
# Python bytecode 3.7 (3394)
# Decompiled from: Python 3.7.9 (tags/v3.7.9:13c94747c7, Aug 17 2020, 18:58:18) [MSC v.1900 64 bit (AMD64)]
# Embedded file name: T:\InGame\Gameplay\Scripts\Server\careers\detective\detective_crime_scene.py
# Compiled at: 2015-02-08 03:00:54
# Size of source mod 2**32: 1608 bytes
from careers.career_event_zone_director import CareerEventZoneDirector
import sims4.log
logger = sims4.log.Logger('Crime Scene', default_owner='bhill')
class CrimeSceneZoneDirector(CareerEventZoneDirector):
def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
self._should_load_sims = False
def _load_custom_zone_director(self, zone_director_proto, reader):
self._should_load_sims = True
super()._load_custom_zone_director(zone_director_proto, reader)
def _on_maintain_zone_saved_sim(self, sim_info):
if self._should_load_sims:
super()._on_maintain_zone_saved_sim(sim_info)
else:
logger.info('Discarding saved sim: {}', sim_info)
def _process_injected_sim(self, sim_info):
logger.info('Discarding injected sim: {}', sim_info) | 42.25 | 107 | 0.72612 | 168 | 1,183 | 4.797619 | 0.547619 | 0.074442 | 0.052109 | 0.066998 | 0.054591 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067677 | 0.163145 | 1,183 | 28 | 108 | 42.25 | 0.746465 | 0.27388 | 0 | 0 | 0 | 0 | 0.078546 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.117647 | 0 | 0.411765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f4564217958b77537d1072c7c3fc29f0c202d7e9 | 3,509 | py | Python | pycspr/types/cl.py | momipsl/pycspr | 82c1ca003525a3d205d2aa3b7da5d1ecd275e9b5 | ["Apache-2.0"] | 2 | 2021-04-14T13:49:20.000Z | 2021-07-06T22:07:02.000Z | pycspr/types/cl.py | momipsl/pycspr | 82c1ca003525a3d205d2aa3b7da5d1ecd275e9b5 | ["Apache-2.0"] | null | null | null | pycspr/types/cl.py | momipsl/pycspr | 82c1ca003525a3d205d2aa3b7da5d1ecd275e9b5 | ["Apache-2.0"] | 1 | 2021-04-15T12:52:42.000Z | 2021-04-15T12:52:42.000Z |
import dataclasses
import enum
class CLType(enum.Enum):
"""Enumeration over set of CL types.
"""
BOOL = 0
I32 = 1
I64 = 2
U8 = 3
U32 = 4
U64 = 5
U128 = 6
U256 = 7
U512 = 8
UNIT = 9
STRING = 10
KEY = 11
UREF = 12
OPTION = 13
LIST = 14
BYTE_ARRAY = 15
RESULT = 16
MAP = 17
TUPLE_1 = 18
TUPLE_2 = 19
TUPLE_3 = 20
ANY = 21
PUBLIC_KEY = 22
# Set of types considered to be simple.
CL_TYPES_SIMPLE = {
CLType.BOOL,
CLType.I32,
CLType.I64,
CLType.KEY,
CLType.PUBLIC_KEY,
CLType.STRING,
CLType.U8,
CLType.U32,
CLType.U64,
CLType.U128,
CLType.U256,
CLType.U512,
CLType.UNIT,
CLType.UREF,
}
@dataclasses.dataclass
class CLTypeInfo():
"""Encapsulates CL type information associated with a value.
"""
# Associated type within CSPR type system.
typeof: CLType
@property
def type_tag(self) -> int:
"""Returns a tag used when encoding/decoding."""
return self.typeof.value
@dataclasses.dataclass
class CLTypeInfoForByteArray(CLTypeInfo):
"""Encapsulates CL type information associated with a byte array value.
"""
# Size of associated byte array value.
size: int
@dataclasses.dataclass
class CLTypeInfoForList(CLTypeInfo):
"""Encapsulates CL type information associated with a list value.
"""
# Inner type within CSPR type system.
inner_type_info: CLTypeInfo
@dataclasses.dataclass
class CLTypeInfoForMap(CLTypeInfo):
    """Encapsulates CL type information associated with a map value.
"""
# Type info of map's key.
    key_type_info: CLTypeInfo
# Type info of map's value.
value_type_info: CLTypeInfo
@dataclasses.dataclass
class CLTypeInfoForOption(CLTypeInfo):
"""Encapsulates CL type information associated with an optional value.
"""
# Inner type within CSPR type system.
inner_type_info: CLTypeInfo
@dataclasses.dataclass
class CLTypeInfoForSimple(CLTypeInfo):
"""Encapsulates CL type information associated with a simple value.
"""
pass
@dataclasses.dataclass
class CLTypeInfoForTuple1(CLTypeInfo):
    """Encapsulates CL type information associated with a 1-ary tuple value.
"""
# Type of first value within 1-ary tuple value.
t0_type_info: CLTypeInfo
@dataclasses.dataclass
class CLTypeInfoForTuple2(CLTypeInfo):
    """Encapsulates CL type information associated with a 2-ary tuple value.
    """
    # Type of first value within 2-ary tuple value.
    t0_type_info: CLTypeInfo
    # Type of second value within 2-ary tuple value.
    t1_type_info: CLTypeInfo
@dataclasses.dataclass
class CLTypeInfoForTuple3(CLTypeInfo):
    """Encapsulates CL type information associated with a 3-ary tuple value.
    """
    # Type of first value within 3-ary tuple value.
    t0_type_info: CLTypeInfo
    # Type of second value within 3-ary tuple value.
    t1_type_info: CLTypeInfo
    # Type of third value within 3-ary tuple value.
    t2_type_info: CLTypeInfo
@dataclasses.dataclass
class CLValue():
"""A CL value mapped from python type system.
"""
# Byte array representation of underlying data.
bytes: bytes
# Parsed pythonic representation of underlying data (for human convenience only).
parsed: object
# Type information used by a deserializer.
type_info: CLTypeInfo
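The nested `CLTypeInfo` dataclasses above describe compound CL types by composition. A minimal, self-contained sketch (duplicating just enough of the definitions above to run standalone) of how a value of type `Option<List<U8>>` would be described:

```python
import dataclasses
import enum

class CLType(enum.Enum):
    # Subset of the full enum above, same tag values.
    U8 = 3
    OPTION = 13
    LIST = 14

@dataclasses.dataclass
class CLTypeInfo:
    typeof: CLType

    @property
    def type_tag(self) -> int:
        return self.typeof.value

@dataclasses.dataclass
class CLTypeInfoForList(CLTypeInfo):
    inner_type_info: CLTypeInfo

@dataclasses.dataclass
class CLTypeInfoForOption(CLTypeInfo):
    inner_type_info: CLTypeInfo

# Option<List<U8>> is expressed by nesting the info objects.
u8 = CLTypeInfo(typeof=CLType.U8)
list_of_u8 = CLTypeInfoForList(typeof=CLType.LIST, inner_type_info=u8)
opt = CLTypeInfoForOption(typeof=CLType.OPTION, inner_type_info=list_of_u8)
print(opt.type_tag, opt.inner_type_info.type_tag,
      opt.inner_type_info.inner_type_info.type_tag)  # prints: 13 14 3
```

An encoder can then walk the nesting, emitting each `type_tag` in turn.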
f456c15808160c57f2b68cffa03b0cdb9fe05135 | 1,668 | py | Python | google/cloud/aiplatform_v1/types/env_var.py | nachocano/python-aiplatform | 1c6b998d9145309d79712f494a2b00b50a9a9bf4 | ["Apache-2.0"] | null | null | null | google/cloud/aiplatform_v1/types/env_var.py | nachocano/python-aiplatform | 1c6b998d9145309d79712f494a2b00b50a9a9bf4 | ["Apache-2.0"] | 1 | 2021-02-12T23:56:38.000Z | 2021-02-12T23:56:38.000Z | google/cloud/aiplatform_v1/types/env_var.py | nachocano/python-aiplatform | 1c6b998d9145309d79712f494a2b00b50a9a9bf4 | ["Apache-2.0"] | null | null | null |
# -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import proto # type: ignore
__protobuf__ = proto.module(package="google.cloud.aiplatform.v1", manifest={"EnvVar",},)
class EnvVar(proto.Message):
r"""Represents an environment variable present in a Container or
Python Module.
Attributes:
name (str):
Required. Name of the environment variable.
Must be a valid C identifier.
value (str):
Required. Variables that reference a $(VAR_NAME) are
expanded using the previous defined environment variables in
the container and any service environment variables. If a
variable cannot be resolved, the reference in the input
string will be unchanged. The $(VAR_NAME) syntax can be
escaped with a double $$, ie: $$(VAR_NAME). Escaped
references will never be expanded, regardless of whether the
variable exists or not.
"""
name = proto.Field(proto.STRING, number=1)
value = proto.Field(proto.STRING, number=2)
__all__ = tuple(sorted(__protobuf__.manifest))
f45776dc27791b0b8c76dabaff8a799c99fa956b | 3,119 | py | Python | tools/borplay/packlib.py | MrCoolSpan/openbor | 846cfeb924906849c8a11e76c442e47286b707ea | ["BSD-3-Clause"] | 25 | 2015-03-10T06:14:12.000Z | 2021-04-28T03:42:32.000Z | tools/borplay/packlib.py | MrCoolSpan/openbor | 846cfeb924906849c8a11e76c442e47286b707ea | ["BSD-3-Clause"] | 2 | 2019-09-29T11:35:30.000Z | 2021-02-08T11:10:32.000Z | tools/borplay/packlib.py | MrCoolSpan/openbor | 846cfeb924906849c8a11e76c442e47286b707ea | ["BSD-3-Clause"] | 18 | 2015-03-14T02:43:26.000Z | 2020-07-24T02:08:58.000Z |
# Copyright (c) 2009 Bryan Cain ("Plombo")
# Class and functions to read .PAK files.
import struct
from cStringIO import StringIO
class PackFileReader(object):
''' Represents a BOR packfile. '''
    def __init__(self, fp):
        '''fp is a file path (string) or file-like object (file, StringIO,
        etc.) in binary read mode'''
        # instance attributes, so separate readers don't share one index
        self.files = {}       # the index holding the location of each file
        self.packfile = None  # the file object
        if isinstance(fp, str):
            self.packfile = open(fp, 'rb')
        else:
            self.packfile = fp
        self.read_index()
# reads the packfile's index into self.files
def read_index(self):
f = self.packfile
# read through file
tmp = True # placeholder that doesn't evaluate to false
while tmp: tmp = f.read(8192)
# read index start postition and seek there
f.seek(-4, 1)
endpos = f.tell()
f.seek(struct.unpack('<I', f.read(4))[0])
while f.tell() < endpos:
ssize, fpos, fsize = struct.unpack('<III', f.read(12))
name = f.read(ssize-12).strip('\x00').replace('\\', '/').lower()
self.files[name] = fpos, fsize
# reads a file with its full path.
def read_file(self, filename):
'''Returns a file-like object for the file or None if the file isn't
contained in this packfile.
This method takes the full path starting with "data/" as a parameter.'''
key = filename.replace('\\', '/').lower().strip('\x00').strip()
if key not in self.files.keys(): return None
start, size = self.files[key]
self.packfile.seek(start)
f = StringIO()
bytesrem = size
while bytesrem >= 8192:
f.write(self.packfile.read(8192))
bytesrem -= 8192
if bytesrem: f.write(self.packfile.read(bytesrem))
f.seek(0)
return f
def find_file(self, filename):
'''Returns a file-like object for the file or None if the file isn't
contained in this packfile.
This method searches for the file by its filename.'''
filename = filename.lower().strip()
start, size = None, None
for key in self.files.keys():
if key.endswith(filename):
return self.read_file(key)
return None # file not found if it gets to this point
def list_music_files(self):
'''Lists the BOR files in the packfile.'''
borfiles = []
for key in self.files.keys():
if key.endswith('.bor'): borfiles.append(key)
borfiles.sort()
for key in borfiles: print key
def get_file(pak, borfile):
'''Prevents a need to directly use PackFileReader when you only want to get
one file, like in borplay and bor2wav. Returns a file-like object.'''
rdr = PackFileReader(pak)
if ('/' not in borfile) and ('\\' not in borfile): # only the filename is given; search for the file
return rdr.find_file(borfile)
else: # full path given
return rdr.read_file(borfile)
# For testing
if __name__ == '__main__':
rdr = PackFileReader('K:/BOR/OpenBOR/Paks/BOR.PAK')
#keys = rdr.files.keys(); keys.sort()
#print '\n'.join(keys)
#print rdr.read_file('data/chars/yamazaki/yamazaki.txt').read()
#print rdr.find_file('yamazaki.txt').read()
rdr.list_music_files()
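The on-disk layout that `read_index` expects — entry records of `ssize, fpos, fsize, name`, with the file's final four bytes pointing at where the index starts — can be exercised with a small Python 3 sketch that builds a toy pack in memory and parses it back the same way. The file names and contents here are made up:

```python
import struct

def build_pak(files):
    """Build a minimal in-memory .PAK: file data, then the index, then a
    4-byte little-endian pointer to the index start (as read_index expects)."""
    out = bytearray()
    entries = []
    for name, data in files.items():
        entries.append((name, len(out), len(data)))
        out += data
    index_start = len(out)
    for name, fpos, fsize in entries:
        encoded = name.encode('ascii') + b'\x00'
        # ssize counts the 12-byte header plus the nul-terminated name
        out += struct.pack('<III', 12 + len(encoded), fpos, fsize)
        out += encoded
    out += struct.pack('<I', index_start)
    return bytes(out)

def read_index(data):
    """Parse the index with the same logic as PackFileReader.read_index."""
    endpos = len(data) - 4
    pos = struct.unpack('<I', data[endpos:])[0]
    files = {}
    while pos < endpos:
        ssize, fpos, fsize = struct.unpack_from('<III', data, pos)
        name = (data[pos + 12:pos + ssize].decode('ascii')
                .rstrip('\x00').replace('\\', '/').lower())
        files[name] = (fpos, fsize)
        pos += ssize
    return files

pak = build_pak({'data/music/stage1.bor': b'BORSONG!',
                 'data/chars/hero.txt': b'name hero'})
index = read_index(pak)
print(index['data/music/stage1.bor'])  # (0, 8)
```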
f458c5e01d9e2170ec0f7c2f7180c5b33bb75bc9 | 16,446 | py | Python | spc/backend_utils.py | adamnew123456/spc | 8809d1817f66cf8266f145aa0c2474b32dc1087a | ["MIT"] | 1 | 2017-10-15T19:55:48.000Z | 2017-10-15T19:55:48.000Z | spc/backend_utils.py | adamnew123456/spc | 8809d1817f66cf8266f145aa0c2474b32dc1087a | ["MIT"] | null | null | null | spc/backend_utils.py | adamnew123456/spc | 8809d1817f66cf8266f145aa0c2474b32dc1087a | ["MIT"] | null | null | null |
"""
Utility functions and classes shared by multiple backends
"""
from collections import namedtuple
import logging
from . import symbols
from . import types
LOGGER = logging.getLogger('spc.backend_utils')
# NameContexts encapsulate both the function stack (which holds values) and
# the symbol table context (which binds them)
NameContext = namedtuple('NameContext', ['symbol_ctx', 'func_stack'])
# While loops are identified by two labels - the start label, for re-running
# the condition, and the end label, for exiting when the condition is false
WhileLabels = namedtuple('WhileLabels', ['cond', 'exit'])
# If conditions are identified by two labels - the else label, for when
# the condition is false (to skip the then block) and the end label, for
# when the condition is true (to skip the else block)
IfLabels = namedtuple('IfLabels', ['else_body', 'end'])
# Switch conditionals are handled sort of like if conditionals:
#
# (switch |
# (case T1 B1) | jump-if-not T1, l1prime; ...; jump l4; l1prime:
# (case T2 B2) | jump-if-not T2, l2prime; ...; jump l4; l2prime:
# (else B3)) | ...
# | l4:
class SwitchLabels:
"""
Switch labels are similar to conditionals:
(switch |
(case T1 B1) | jump-if-not T1, case_lbl_1; ...; jump end; case_lbl_1:
(case T2 B2) | jump-if-not T2, case_lbl_2; ...; jump end; case_lbl_2:
(else B3) | ...; end_lbl:
Since each case is processed in order, only the current case end label and
    the end switch label are available at any given time.
"""
def __init__(self, end_label):
self.end_label = end_label
self.case_end_label = None
class CoercionContext:
"""
This is used to wrap up all the information needed to coerce values from
one type to another.
"""
def __init__(self, backend, temp_context, code_templates):
self.backend = backend
self.temp_context = temp_context
self.templates = code_templates
def copy_with_context(self, new_context):
"""
Creates a copy of this object, but within a new temporary context.
"""
return CoercionContext(self.backend, new_context, self.templates)
def coerce(self, input_offset, input_type, output_type):
"""
Coerces a value, located on the stack, from the given input type to the
given output type. Returns the stack offset of the converted
variable and the output type.
Raises a TypeError if this is not possible.
"""
if input_type == output_type:
return input_offset, output_type
elif (input_type, output_type) == (types.Integer, types.Byte):
return self._coerce_int_to_byte(input_offset), output_type
elif (input_type, output_type) == (types.Byte, types.Integer):
return self._coerce_byte_to_int(input_offset), output_type
else:
raise TypeError('Cannot coerce {} -> {}'.format(input_type, output_type))
def _coerce_int_to_byte(self, input_offset):
"""
Coerces an integer to a byte, returning the stack offset of the
resulting byte.
"""
byte_size = self.backend._type_size(types.Byte)
byte_align = self.backend._type_alignment(types.Byte)
dest_offset = self.temp_context.add_temp(byte_size, byte_align)
tmp_reg = self.templates.tmp_regs[0]
self.backend._write_comment('Coercing int@{} to byte@{}',
input_offset, dest_offset)
self.templates.emit_load_stack_word(tmp_reg, input_offset)
self.templates.emit_int_to_byte(tmp_reg)
self.templates.emit_save_stack_byte(tmp_reg, dest_offset)
return dest_offset
def _coerce_byte_to_int(self, input_offset):
"""
Coerces a byte to an integer, returning the stack offset of the
resulting integer.
"""
int_size = self.backend._type_size(types.Integer)
int_align = self.backend._type_alignment(types.Integer)
dest_offset = self.temp_context.add_temp(int_size, int_align)
tmp_reg = self.templates.tmp_regs[0]
self.backend._write_comment('Coercing byte@{} to int@{}',
input_offset, dest_offset)
self.templates.emit_load_stack_byte(tmp_reg, input_offset)
self.templates.emit_byte_to_int(tmp_reg)
self.templates.emit_save_stack_word(tmp_reg, dest_offset)
return dest_offset
class FunctionStack:
"""
Tracks where variables are on the function's stack.
Note that this makes a number of assumptions about how things are stored:
- All arguments are stored on the stack, in reverse order. This goes
against the calling conventions for register rich architectures, like
MIPS, but there are enough corner cases (like copying structs by value)
that ignoring the calling convention is worthwhile for a non-optimizing
compiler like this.
- Locals and temporaries are stored on the stack, in order of creation.
"""
def __init__(self, backend):
self.backend = backend
self.local_offset = self._starting_locals_offset()
self.param_offset = self._starting_param_offset()
self.vars = {}
def _starting_locals_offset(self):
"""
Returns the starting offset of the local variables on the stack.
"""
raise NotImplementedError
def _starting_param_offset(self):
"""
Returns the starting offset of the parameter on the stack.
"""
raise NotImplementedError
def _expand_stack(self, size):
"""
Emits code to expand the stack frame by the given size.
"""
raise NotImplementedError
def _shrink_stack(self, size):
"""
Emits code to reduce the stack frame by the given size.
"""
raise NotImplementedError
def pad_param(self, space):
"""
Adds blank space before the next parameter.
"""
self.param_offset += space
def add_param(self, name, size, alignment):
"""
Adds a new parameter to the stack.
"""
self.param_offset = types.align_address(self.param_offset, alignment)
self.vars[name] = self.param_offset
self.param_offset += size
self.backend._write_comment('Binding param "{}" to offset {}', name, self.vars[name])
def add_local(self, name, size, alignment):
"""
Adds a local variable to the stack.
"""
self.local_offset = (
types.align_address(self.local_offset - size, alignment,
types.Alignment.Down))
self.vars[name] = self.local_offset
self.backend._write_comment('Binding local "{}" to offset {}', name, self.vars[name])
def get_temp_context(self, backend):
"""
Returns a context which can be used for putting temporary values on
the stack. When the context exits, the space used by the temporary
variables is cleaned up.
"""
root = self
class TemporaryContext:
def __init__(self, start_offset):
self.tmp_offset = start_offset
self.total_tmp_size = 0
def __enter__(self):
pass
def __exit__(self, *exc_info):
root._shrink_stack(self.total_tmp_size)
def add_temp(self, size, alignment):
"""
Makes space for a new temporary, returning the $fp offset at
which to write it.
"""
old_tmp_offset = self.tmp_offset
self.tmp_offset = (
types.align_address(self.tmp_offset - size, alignment,
types.Alignment.Down))
size_used = old_tmp_offset - self.tmp_offset
self.total_tmp_size += size_used
root._expand_stack(size_used)
return self.tmp_offset
def get_temp_context(self):
"""
Creates a temporary context, which starts at this temporary context.
"""
return TemporaryContext(self.tmp_offset)
return TemporaryContext(self.local_offset)
def expand_locals(self):
"""
Makes enough space for the local variables on the stack.
"""
self._expand_stack(self.locals_size())
def cleanup_locals(self):
"""
Cleans up the space used by the local variables on the stack.
"""
self._shrink_stack(self.locals_size())
def locals_size(self):
"""
Gets the size used by all the locals.
"""
return abs(self.local_offset) - abs(self._starting_locals_offset())
def __getitem__(self, name):
"""
Gets the offset to the variable on the stack, or a Register (if the
name was bound to one of the first four parameters)
"""
return self.vars[name]
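The stack bookkeeping above leans on `types.align_address`, which is not shown in this file. A plausible self-contained sketch of the rounding it performs, plus a replay of the `add_local` walk (the variable names are invented for illustration):

```python
def align_up(address, alignment):
    """Round up to the next multiple of alignment (parameter offsets)."""
    return address + (-address) % alignment

def align_down(address, alignment):
    """Round down to the previous multiple (local offsets, possibly negative)."""
    return address - address % alignment

# Replaying add_local: each local is placed below the previous one, then
# aligned down, as FunctionStack.add_local does via types.align_address.
local_offset = 0
bindings = {}
for name, size, alignment in [('flag', 1, 1), ('count', 4, 4)]:
    local_offset = align_down(local_offset - size, alignment)
    bindings[name] = local_offset
print(bindings)  # {'flag': -1, 'count': -8}
```

Note the 1-byte `flag` at -1 forces `count` down to -8 rather than -5, matching the padding behavior the class's comments describe.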
class VerificationContext:
"""
Used to record all values and types defined all at once (i.e. inside the
same declaration block), so that they can be verified all at once.
"Verification" here means that their types are checked to be valid, which
means different things for different types.
"""
def __init__(self):
self.types = []
self.values = []
def add_value(self, name):
"""
Registers a new value to be verified.
"""
self.values.append(name)
def add_type(self, name):
"""
Registers a new type to be defined.
"""
        self.types.append(name)
def verify(self, backend):
"""
Verifies all the definitions against the backend.
"""
backend._check_valid_types(backend.ctx_types[name] for name in self.types)
backend._check_valid_types(backend.ctx_values[name] for name in self.values)
class ContextMixin:
"""
    Manages the symbol table contexts for this backend (as well as its function stack).
Depends upon the user of this mixin to inherit from BaseBackend in
addition to this one.
"""
def __init__(self):
self.parent_contexts = []
self.current_context = NameContext(symbols.Context(), None)
self.verify_context = VerificationContext()
def _register_file_ns(self, namespace):
"""
Replaces the current context, with one where the symbol context is
expanded to contain the file's namespace.
"""
file_context = self.current_context.symbol_ctx.register(namespace)
self.current_context = self.current_context._replace(symbol_ctx=file_context)
@property
def ctx_namespace(self):
"""
Gets the current namespace
"""
return self.current_context.symbol_ctx.search_path[0]
@property
def ctx_values(self):
"""
Returns the current context's value symbols.
"""
return self.current_context.symbol_ctx.values
@property
def ctx_types(self):
"""
Returns the current context's type symbols.
"""
return self.current_context.symbol_ctx.types
@property
def ctx_stack(self):
"""
Returns the current context's stack information.
"""
return self.current_context.func_stack
def _value_is_defined(self, name):
"""
Returns True if the given variable is defined in the current scope, or
False otherwise.
This is for the static expression processor function, var-def?
"""
return (name in self.ctx_values and
self.ctx_values.is_visible(name))
def _type_is_defined(self, name):
"""
Returns True if the given type is defined in the current scope, or
False otherwise.
        This is for the static expression processor function, type-def?
"""
return (name in self.ctx_types and
self.ctx_types.is_visible(name))
def _make_func_stack(self):
raise NotImplementedError
def _push_context(self):
"""
Pushes a new binding context.
"""
old_context = self.current_context
self.parent_contexts.append(old_context)
self.current_context = NameContext(
self.current_context.symbol_ctx.enter(),
self._make_func_stack())
def _pop_context(self):
"""
Loads the previous binding context.
"""
self.current_context = self.parent_contexts.pop()
def _resolve_if_type_name(self, name):
"""
Resolves a type name into a concrete type.
"""
try:
return types.resolve_name(name, self.ctx_types)
except PermissionError as exn:
self.error(self.line, self.col,
'Cannot resolve hidden type "{}"', str(exn))
except RecursionError:
self.error(self.line, self.col,
'Type aliases too deep, when resolving "{}"', name)
except KeyError as exn:
self.error(self.line, self.col,
'Invalid type "{}"', str(exn))
def _verify_types(self):
"""
Verifies all the types across all this current context's symbols.
"""
self.verify_context.verify(self)
self.verify_context = VerificationContext()
class ThirtyTwoMixin:
"""
Defines some information about type sizes and alignment which 32-bit
platforms have in common.
Depends upon the user of this mixin to inherit from ContextMixin.
"""
def _type_alignment(self, type_obj):
"""
Returns alignment of the given type (1 for byte, 4 for word, etc.)
"""
type_obj = self._resolve_if_type_name(type_obj)
if type_obj is types.Integer:
return 4
elif type_obj is types.Byte:
return 1
elif isinstance(type_obj, (types.PointerTo, types.FunctionPointer)):
return 4
elif isinstance(type_obj, types.ArrayOf):
return self._type_alignment(type_obj.type)
elif isinstance(type_obj, types.Struct):
# The alignment only concerns the first element of the struct -
# the struct's internal alignment doesn't come into play
#
            # Also, an OrderedDict's fields are not iterable, for whatever reason
struct_types = list(type_obj.fields.values())
return self._type_alignment(struct_types[0])
else:
raise TypeError('Not a compiler type: {}'.format(type_obj))
def _type_size(self, type_obj, depth=0):
"""
Returns the size of a type object in bytes.
"""
MAX_DEPTH = 100
if depth >= MAX_DEPTH:
self.error(self.line, self.col,
"Type nested too deeply - potential self-referential type")
type_obj = self._resolve_if_type_name(type_obj)
if type_obj is types.Integer:
return 4
elif type_obj is types.Byte:
return 1
elif isinstance(type_obj, (types.PointerTo, types.FunctionPointer)):
return 4
elif isinstance(type_obj, types.ArrayOf):
# To avoid wasting space on the last element, this pads all the
# elements but the last
base_size = self._type_size(type_obj.type, depth + 1)
return self._array_offset(type_obj, type_obj.count - 1) + base_size
elif isinstance(type_obj, types.Struct):
last_field = list(type_obj.fields)[-1]
last_field_type = type_obj.fields[last_field]
last_field_offset = self._field_offset(type_obj, last_field)
return last_field_offset + self._type_size(last_field_type, depth + 1)
else:
raise TypeError('Not a compiler type: {}'.format(type_obj))
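`_type_size` for structs relies on `_field_offset`, which is defined elsewhere. A stand-in sketch of the usual layout rule consistent with the alignments above — each field lands at the next offset aligned to its own alignment — with made-up field names:

```python
def field_offsets(fields):
    """fields: list of (name, size, alignment) tuples; returns {name: offset}.
    Each field is placed at the next offset that is a multiple of its alignment."""
    offsets, offset = {}, 0
    for name, size, alignment in fields:
        offset = (offset + alignment - 1) // alignment * alignment
        offsets[name] = offset
        offset += size
    return offsets

# A struct of (byte, int, byte) under the 32-bit rules above:
print(field_offsets([('tag', 1, 1), ('count', 4, 4), ('flag', 1, 1)]))
# {'tag': 0, 'count': 4, 'flag': 8}
```

The struct's size is then the last field's offset plus that field's size, exactly as the `types.Struct` branch of `_type_size` computes it.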
class comment_after:
"""
Wraps a method - after the method executes, something is written to
the log.
"""
def __init__(self, fmt, *args, **kwargs):
self.fmt = fmt
self.args = args
self.kwargs = kwargs
def __call__(self, func):
def wrapper(parent, *args, **kwargs):
x = func(parent, *args, **kwargs)
parent._write_comment(self.fmt, *self.args, **self.kwargs)
return x
return wrapper
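A quick self-contained demonstration of `comment_after` wrapping a backend method; `DemoBackend` and its `_write_comment` are stand-ins invented for this sketch:

```python
class comment_after:
    """After the wrapped method runs, a formatted comment is recorded."""
    def __init__(self, fmt, *args, **kwargs):
        self.fmt = fmt
        self.args = args
        self.kwargs = kwargs

    def __call__(self, func):
        def wrapper(parent, *args, **kwargs):
            x = func(parent, *args, **kwargs)
            parent._write_comment(self.fmt, *self.args, **self.kwargs)
            return x
        return wrapper

class DemoBackend:
    def __init__(self):
        self.comments = []

    def _write_comment(self, fmt, *args, **kwargs):
        self.comments.append(fmt.format(*args, **kwargs))

    @comment_after('end of {}', 'prologue')
    def emit_prologue(self):
        return 'prologue-code'

backend = DemoBackend()
result = backend.emit_prologue()
print(result, backend.comments)  # prologue-code ['end of prologue']
```

The comment is emitted through the instance the method was called on, which is why `wrapper` threads `parent` through explicitly.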
f45d781494a8e177d3301348e5cd3f98b7503c8a | 1,925 | py | Python | 8/8_9.py | kopsh/python_cookbook | 298c092cd20404a0755e2170776c44a04e8648ad | ["CNRI-Python"] | null | null | null | 8/8_9.py | kopsh/python_cookbook | 298c092cd20404a0755e2170776c44a04e8648ad | ["CNRI-Python"] | null | null | null | 8/8_9.py | kopsh/python_cookbook | 298c092cd20404a0755e2170776c44a04e8648ad | ["CNRI-Python"] | null | null | null |
class CheckType:
r"""
    8.9 Creating a new kind of class or instance attribute
    Uses descriptors to implement parameter type checking
>>> @ParamAssert(a=int, b=list)
... class A:
... def __init__(self, a, b):
... self.a = a
... self.b = b
>>> a = A(1, [])
"""
def __init__(self, name, expected_type):
self.name = name
self.expected_type = expected_type
def __get__(self, instance, owner):
if instance is None:
return self
else:
return instance.__dict__[self.name]
def __set__(self, instance, value):
if not isinstance(value, self.expected_type):
            raise TypeError("{} cannot be assigned by {!r}, its type must be {!r}".format(
                self.name, value, self.expected_type))
instance.__dict__[self.name] = value
class ParamAssert:
def __init__(self, **kwargs):
self.kwargs = kwargs
def __call__(self, cls):
for name, expected_type in self.kwargs.items():
setattr(cls, name, CheckType(name, expected_type))
return cls
class Integer:
def __init__(self, name):
self.name = name
def __get__(self, instance, cls):
if instance is None:
return self
else:
return instance.__dict__.get(self.name, None)
def __set__(self, instance, value):
if not isinstance(value, int):
raise TypeError("{} cannot be assigned by {!r}".format(self.name, value))
instance.__dict__[self.name] = value
class Point:
"""
>>> p = Point(0, 0)
>>> print(p.x)
0
>>> p.y = "1"
Traceback (most recent call last):
...
TypeError: y cannot be assigned by '1'
"""
x = Integer('x')
y = Integer('y')
def __init__(self, x, y):
self.x = x
self.y = y
if __name__ == '__main__':
import doctest
doctest.testmod() | 25.666667 | 106 | 0.535584 | 229 | 1,925 | 4.19214 | 0.279476 | 0.083333 | 0.057292 | 0.0625 | 0.361458 | 0.320833 | 0.258333 | 0.189583 | 0.189583 | 0.1 | 0 | 0.006284 | 0.338701 | 1,925 | 75 | 107 | 25.666667 | 0.74784 | 0.156364 | 0 | 0.27907 | 0 | 0 | 0.05642 | 0 | 0 | 0 | 0 | 0 | 0.023256 | 1 | 0.209302 | false | 0 | 0.023256 | 0 | 0.488372 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f45ec536c2f2748641c051d8785db2394218cb3f | 4,264 | py | Python | samples/RiskManagement/Verification/customer-match-denied-parties-list.py | snavinch/cybersource-rest-samples-python | adb7a6b4b55dff6ac833295192d6677b53003c16 | ["MIT"] | 21 | 2019-01-22T17:48:32.000Z | 2022-02-07T17:40:58.000Z | samples/RiskManagement/Verification/customer-match-denied-parties-list.py | broadpay/cybersource-rest-samples-python | f7af6f58c70ea3bf725d34929b40ee4b5fd4d77c | ["MIT"] | 10 | 2018-12-03T22:45:17.000Z | 2021-04-19T20:40:14.000Z | samples/RiskManagement/Verification/customer-match-denied-parties-list.py | broadpay/cybersource-rest-samples-python | f7af6f58c70ea3bf725d34929b40ee4b5fd4d77c | ["MIT"] | 29 | 2018-11-09T11:44:53.000Z | 2022-03-18T08:56:46.000Z |
from CyberSource import *
import os
import json
from importlib.machinery import SourceFileLoader
config_file = os.path.join(os.getcwd(), "data", "Configuration.py")
configuration = SourceFileLoader("module.name", config_file).load_module()
# To delete None values in Input Request Json body
def del_none(d):
for key, value in list(d.items()):
if value is None:
del d[key]
elif isinstance(value, dict):
del_none(value)
return d
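Restating the helper above in isolation: `del_none` strips `None`-valued keys recursively, so optional request fields never reach the serialized JSON body. A hypothetical round-trip with made-up field values:

```python
import json

def del_none(d):
    # Same logic as the helper above: drop None values, recurse into dicts.
    for key, value in list(d.items()):
        if value is None:
            del d[key]
        elif isinstance(value, dict):
            del_none(value)
    return d

body = {'code': 'verification example', 'comments': None,
        'partner': {'developer_id': '7891234', 'solution_id': None}}
print(json.dumps(del_none(body)))
# {"code": "verification example", "partner": {"developer_id": "7891234"}}
```

Iterating over `list(d.items())` rather than `d.items()` is what makes deleting keys safe during the loop.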
def customer_match_denied_parties_list():
clientReferenceInformationCode = "verification example"
clientReferenceInformationComments = "Export-basic"
clientReferenceInformationPartnerDeveloperId = "7891234"
clientReferenceInformationPartnerSolutionId = "89012345"
clientReferenceInformationPartner = Riskv1decisionsClientReferenceInformationPartner(
developer_id = clientReferenceInformationPartnerDeveloperId,
solution_id = clientReferenceInformationPartnerSolutionId
)
clientReferenceInformation = Riskv1decisionsClientReferenceInformation(
code = clientReferenceInformationCode,
comments = clientReferenceInformationComments,
partner = clientReferenceInformationPartner.__dict__
)
orderInformationBillToAddress1 = "901 Metro Centre Blvd"
orderInformationBillToAdministrativeArea = "CA"
orderInformationBillToCountry = "US"
orderInformationBillToLocality = "Foster City"
orderInformationBillToPostalCode = "94404"
orderInformationBillToCompanyName = "A & C International Trade, Inc"
orderInformationBillToCompany = Riskv1exportcomplianceinquiriesOrderInformationBillToCompany(
name = orderInformationBillToCompanyName
)
orderInformationBillToFirstName = "ANDREE"
orderInformationBillToLastName = "AGNESE"
orderInformationBillToEmail = "test@domain.com"
orderInformationBillTo = Riskv1exportcomplianceinquiriesOrderInformationBillTo(
address1 = orderInformationBillToAddress1,
administrative_area = orderInformationBillToAdministrativeArea,
country = orderInformationBillToCountry,
locality = orderInformationBillToLocality,
postal_code = orderInformationBillToPostalCode,
company = orderInformationBillToCompany.__dict__,
first_name = orderInformationBillToFirstName,
last_name = orderInformationBillToLastName,
email = orderInformationBillToEmail
)
orderInformationShipToCountry = "IN"
orderInformationShipToFirstName = "DumbelDore"
orderInformationShipToLastName = "Albus"
orderInformationShipTo = Riskv1exportcomplianceinquiriesOrderInformationShipTo(
country = orderInformationShipToCountry,
first_name = orderInformationShipToFirstName,
last_name = orderInformationShipToLastName
)
orderInformationLineItems = []
orderInformationLineItems1 = Riskv1exportcomplianceinquiriesOrderInformationLineItems(
unit_price = "120.50",
quantity = 3,
product_sku = "123456",
product_name = "Qwe",
product_code = "physical_software"
)
orderInformationLineItems.append(orderInformationLineItems1.__dict__)
orderInformation = Riskv1exportcomplianceinquiriesOrderInformation(
bill_to = orderInformationBillTo.__dict__,
ship_to = orderInformationShipTo.__dict__,
line_items = orderInformationLineItems
)
requestObj = ValidateExportComplianceRequest(
client_reference_information = clientReferenceInformation.__dict__,
order_information = orderInformation.__dict__
)
requestObj = del_none(requestObj.__dict__)
requestObj = json.dumps(requestObj)
try:
config_obj = configuration.Configuration()
client_config = config_obj.get_configuration()
api_instance = VerificationApi(client_config)
return_data, status, body = api_instance.validate_export_compliance(requestObj)
print("\nAPI RESPONSE CODE : ", status)
print("\nAPI RESPONSE BODY : ", body)
return return_data
except Exception as e:
print("\nException when calling VerificationApi->validate_export_compliance: %s\n" % e)
if __name__ == "__main__":
customer_match_denied_parties_list()
| 38.414414 | 97 | 0.754221 | 290 | 4,264 | 10.793103 | 0.565517 | 0.006709 | 0.012141 | 0.016613 | 0.019169 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013537 | 0.185741 | 4,264 | 110 | 98 | 38.763636 | 0.887961 | 0.011257 | 0 | 0 | 0 | 0 | 0.083294 | 0.010441 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022222 | false | 0 | 0.044444 | 0 | 0.088889 | 0.033333 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
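The `del_none` helper in the CyberSource sample above removes `None`-valued keys recursively before the request body is serialized. A standalone sketch of the same logic (the example payload is made up, not a real CyberSource request):

```python
# Recursively drop keys whose value is None, descending into nested dicts.
# Mirrors the del_none helper used in the CyberSource sample above.
def del_none(d):
    for key, value in list(d.items()):
        if value is None:
            del d[key]
        elif isinstance(value, dict):
            del_none(value)
    return d

# Hypothetical request fragment, not a real CyberSource payload.
payload = {"code": "example", "comments": None,
           "partner": {"developer_id": "1", "solution_id": None}}
print(del_none(payload))  # {'code': 'example', 'partner': {'developer_id': '1'}}
```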
f45ec6261b1911d698e1ee71b90cc7668913450f | 936 | py | Python | SimulatePi.py | Lucchese-Anthony/MonteCarloSimulation | 45a625b88dab6658b43b472d49d82aaeb1e847bd | [
"CC0-1.0"
] | null | null | null | SimulatePi.py | Lucchese-Anthony/MonteCarloSimulation | 45a625b88dab6658b43b472d49d82aaeb1e847bd | [
"CC0-1.0"
] | null | null | null | SimulatePi.py | Lucchese-Anthony/MonteCarloSimulation | 45a625b88dab6658b43b472d49d82aaeb1e847bd | [
"CC0-1.0"
] | null | null | null | import numpy as np
import random
import math
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib import style
angle = np.linspace( 0 , 2 * np.pi , 150)
radius = 1
x = radius * np.cos(angle)
y = radius * np.sin(angle)
# plot the unit circle outline
style.use('fivethirtyeight')
fig = plt.figure()
axes = fig.add_subplot(1,1,1)
axes.plot( x, y, color="red")
inside = []
outside = []
def inCircle(x, y):
return math.sqrt((x**2) + (y**2)) <= 1
def animate(i):
x = random.uniform(1,-1)
y = random.uniform(1,-1)
if (inCircle(x, y)):
point = axes.scatter(x, y, color="blue")
inside.append(point)
else:
point = axes.scatter(x, y, color="red")
outside.append(point)
try:
# Estimate pi from the fraction of points that land inside the unit circle.
pi_estimate = 4 * len(inside) / (len(inside) + len(outside))
print(pi_estimate)
except ZeroDivisionError:
print(0)
ani = animation.FuncAnimation(fig, animate, interval=5)
plt.show()
| 21.272727 | 55 | 0.628205 | 137 | 936 | 4.284672 | 0.459854 | 0.017036 | 0.035775 | 0.034072 | 0.078365 | 0.078365 | 0 | 0 | 0 | 0 | 0 | 0.024828 | 0.225427 | 936 | 43 | 56 | 21.767442 | 0.784828 | 0.018162 | 0 | 0 | 0 | 0 | 0.027233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.176471 | null | null | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
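The animated version above converges slowly and mixes the estimate with matplotlib state; a minimal, non-animated sketch of the same Monte Carlo idea — sample uniform points in the square [-1, 1] x [-1, 1] and use the fraction inside the unit circle, so pi is approximated by 4 * inside / total (the function name and seed parameter are mine):

```python
import random

# Monte Carlo estimate of pi: the unit circle covers pi/4 of the
# [-1, 1] x [-1, 1] square, so pi ~= 4 * (points inside) / (total points).
def estimate_pi(samples, seed=0):
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(100_000))  # close to 3.14 for large sample counts
```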
f45faefa310c1d7891d6abffc0a5f0a804569172 | 219 | py | Python | run.py | aarvanitii/adminWebsite | cf9a07c287571ebbc9954326806b578f6d19a11b | [
"MIT"
] | null | null | null | run.py | aarvanitii/adminWebsite | cf9a07c287571ebbc9954326806b578f6d19a11b | [
"MIT"
] | null | null | null | run.py | aarvanitii/adminWebsite | cf9a07c287571ebbc9954326806b578f6d19a11b | [
"MIT"
] | null | null | null | """
This is where the web application starts running
"""
from app.index import create_app
app = create_app()
if __name__ == "__main__":
app.secret_key = 'mysecret'
app.run(port=8080, host="0.0.0.0", debug=True) | 24.333333 | 50 | 0.694064 | 35 | 219 | 4.028571 | 0.742857 | 0.042553 | 0.042553 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.159817 | 219 | 9 | 50 | 24.333333 | 0.722826 | 0.219178 | 0 | 0 | 0 | 0 | 0.140244 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f465aa8f0880334955fcdd358466dab059344d4b | 355 | py | Python | generate_joke.py | audreymychan/djsmile | 8dc5d6337f1b32db8bf3dfbf13315ec25049ebb5 | [
"MIT"
] | 5 | 2019-05-30T20:15:34.000Z | 2020-04-16T08:21:16.000Z | generate_joke.py | audreymychan/djsmile | 8dc5d6337f1b32db8bf3dfbf13315ec25049ebb5 | [
"MIT"
] | 5 | 2021-08-25T14:43:34.000Z | 2022-02-10T00:14:09.000Z | generate_joke.py | audreymychan/djsmile | 8dc5d6337f1b32db8bf3dfbf13315ec25049ebb5 | [
"MIT"
] | null | null | null | # This script contains the get_joke() function to generate a new dad joke
import requests
def get_joke():
"""Return new joke string from icanhazdadjoke.com."""
url = "https://icanhazdadjoke.com/"
response = requests.get(url, headers={'Accept': 'application/json'})
raw_joke = response.json()
joke = raw_joke['joke']
return joke
| 27.307692 | 73 | 0.687324 | 47 | 355 | 5.106383 | 0.595745 | 0.058333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185915 | 355 | 12 | 74 | 29.583333 | 0.83045 | 0.338028 | 0 | 0 | 1 | 0 | 0.231441 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f46ac6dc3031a12623e226f71b58aeded4ff617c | 440 | py | Python | config/api_urls.py | elcolie/battleship | 71b0a963c5b24ae243a193749813fec321d5f4d8 | [
"MIT"
] | null | null | null | config/api_urls.py | elcolie/battleship | 71b0a963c5b24ae243a193749813fec321d5f4d8 | [
"MIT"
] | 3 | 2018-04-22T04:40:25.000Z | 2020-06-05T19:10:08.000Z | config/api_urls.py | elcolie/battleship | 71b0a963c5b24ae243a193749813fec321d5f4d8 | [
"MIT"
] | null | null | null | from rest_framework import routers
from boards.api.viewsets import BoardViewSet
from fleets.api.viewsets import FleetViewSet
from missiles.api.viewsets import MissileViewSet
app_name = 'api'
router = routers.DefaultRouter()
router.register(r'boards', BoardViewSet, base_name='board')
router.register(r'fleets', FleetViewSet, base_name='fleet')
router.register(r'missiles', MissileViewSet, base_name='missile')
urlpatterns = router.urls
| 29.333333 | 65 | 0.811364 | 56 | 440 | 6.285714 | 0.446429 | 0.09375 | 0.144886 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086364 | 440 | 14 | 66 | 31.428571 | 0.875622 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f47301fb50cbf2affb241d7c61d027660a0014ae | 24,598 | py | Python | messenger/client/messenger.py | marik348/python-messenger | 6c1916b0df439cd997cb6e9376221fe587c3f1c1 | [
"MIT"
] | 2 | 2021-05-24T08:44:51.000Z | 2022-03-17T10:41:48.000Z | messenger/client/messenger.py | marik348/python-messenger | 6c1916b0df439cd997cb6e9376221fe587c3f1c1 | [
"MIT"
] | 1 | 2020-11-28T12:08:25.000Z | 2020-11-28T12:08:25.000Z | messenger/client/messenger.py | marik348/python-messegner | 6c1916b0df439cd997cb6e9376221fe587c3f1c1 | [
"MIT"
] | 1 | 2021-05-24T08:50:42.000Z | 2021-05-24T08:50:42.000Z | from requests import get, post, exceptions
from datetime import datetime
from PyQt5 import QtWidgets, QtCore
from PyQt5.QtWidgets import QMessageBox
from PyQt5.QtGui import QFont
from qtwidgets import PasswordEdit
from client_commands import (help_client, online, status, myself, reg, role, ban, unban)
from client_content import (get_warning_messages, get_client_commands, get_message_box_text, get_message_style)
from click_label import clickable
from client_ui import Ui_Messenger
from preferences import Preferences
from style_sheet import load_stylesheet
class Messenger(QtWidgets.QMainWindow, Ui_Messenger):
"""
The messenger object acts as the main object and is managed by client.
Shows UI and is responsible for UX.
UI is separated on 3 main parts, which have their indexes: 0 - Login form, 1 - Registration form, 2 - Chat.
Every 5 seconds requests server status.
Every second shows new messages, if user logged in.
Below the main label "Python Messenger" there is a server status indicator, which displays whether the server
is working; if so, you can hover over it to see the full server status.
On disconnection from the server it shows a server-off message and navigates to the login form.
It's possible to change server IP address in preferences menu.
:param translate: properly shows all content
:param password_line1: input line with icons to show/hide password entries on login form
:param password_line2: input line with icons to show/hide password entries on registration form
:param username: user nickname string
:param password: user password string
:param last_message_time: last time of getting messages, defaults to 0
:param max_text_len: maximum text message length to send in chat, defaults to 250
:param server_IP: server IPv4 string
:param message_style: style for messages defined in :func:`get_message_style`
:param warning_messages: dict of warning messages defined in :func:`get_warning_messages`
:param message_box_text: dict of content for message box defined in :func:`get_message_box_text`
:param client_commands: list of dicts with client-side commands defined in :func:`get_client_commands`
:param run_client_command: dict, where key is the name of client command and value is the function of this command
:param server_commands: list of dicts with server-side commands defined in :func:`get_server_commands`
:param run_server_command: dict, where key is the name of server command and value is the function of this command
:param timer_get_messages: timer, which every second runs :func:`get_messages`
:param timer_get_status: timer, which every 5 seconds runs :func:`get_status`
"""
def __init__(self, parent=None):
"""Initialize messenger object."""
super().__init__(parent)
self.setupUi(self)
self.translate = QtCore.QCoreApplication.translate
self.password_line1 = PasswordEdit(True, self.login_page)
self.password_line2 = PasswordEdit(True, self.registration_page)
self.modify_password_lines()
# Connect buttons to the methods.
self.send_button.pressed.connect(self.send)
self.sign_up_button.pressed.connect(self.sign_up_user)
self.login_button.pressed.connect(self.login_user)
# Connect actions to the methods.
self.action_shortcuts.triggered.connect(self.show_shortcuts_box)
self.action_commands.triggered.connect(self.show_commands_box)
self.action_about.triggered.connect(self.show_about_box)
self.action_contacts.triggered.connect(self.show_contacts_box)
self.action_preferences.triggered.connect(self.open_preferences_window)
self.action_logout.triggered.connect(self.logout)
self.action_close.triggered.connect(self.close)
# Filter shortcuts and text overflow.
self.plain_text_edit.installEventFilter(self)
self.username = None
self.password = None
self.last_message_time = 0
self.max_text_len = 250
self.server_IP = '0.0.0.0:9000'
# Load client content.
self.message_style = get_message_style()
self.warning_messages = get_warning_messages()
self.message_box_text = get_message_box_text()
# Load commands.
self.client_commands = get_client_commands()
self.run_client_command = {'close': self.close,
'logout': self.logout,
'reload': self.reload}
self.server_commands = []
self.run_server_command = {}
self.timer_get_messages = QtCore.QTimer()
self.timer_get_messages.timeout.connect(self.get_messages)
self.timer_get_messages.start(1000)
self.timer_get_status = QtCore.QTimer()
self.timer_get_status.timeout.connect(self.get_status)
self.timer_get_status.start(5000)
clickable(self.go_to_sign_up).connect(self.go_to_registration_form)
clickable(self.go_to_login).connect(self.go_to_login_form)
self.get_status()
def eventFilter(self, obj, event):
"""
Filters Enter key press and message text length.
If the Enter key is pressed, sends the user's message.
If the message length exceeds the maximum, the extra input is discarded.
"""
if event.type() == QtCore.QEvent.KeyPress and obj is self.plain_text_edit:
text = self.plain_text_edit.toPlainText()
if event.key() == QtCore.Qt.Key_Return and self.plain_text_edit.hasFocus():
self.send()
return True
elif len(text) > self.max_text_len:
text = text[:self.max_text_len]
self.plain_text_edit.setPlainText(text)
cursor = self.plain_text_edit.textCursor()
cursor.setPosition(self.max_text_len)
self.plain_text_edit.setTextCursor(cursor)
return True
return super().eventFilter(obj, event)
def closeEvent(self, event):
"""
Shows a question message box asking whether to close the messenger.
Asks the user whether they really want to close the messenger; if yes,
marks the user as logged out and closes the messenger.
Otherwise, ignores the close event.
:param event: event to close the messenger
"""
reply = QMessageBox.question(self, 'Quit', self.message_box_text["close"],
QMessageBox.Yes | QMessageBox.No, QMessageBox.Yes)
# User closes the messenger and is logged in.
if reply == QMessageBox.Yes and self.stacked_widget.currentIndex() == 2:
try:
post(
f'http://{self.server_IP}/logout',
json={"username": self.username}, verify=False
)
except exceptions.RequestException as e:
raise SystemExit
event.accept()
# User closes the messenger and is logged out.
elif reply == QMessageBox.Yes:
event.accept()
else:
event.ignore()
def logout(self):
"""
Shows a question message box asking whether to log out of the account.
Asks the user whether they really want to log out; if yes,
marks the logout and navigates to the login form.
Otherwise, ignores the logout event.
"""
reply = QMessageBox.question(self, 'Logout', self.message_box_text["logout"],
QMessageBox.Yes | QMessageBox.No, QMessageBox.Yes)
if reply == QMessageBox.Yes:
try:
post(
f'http://{self.server_IP}/logout',
json={"username": self.username}, verify=False
)
except exceptions.RequestException as e:
self.show_server_off_box()
self.clear_user_data()
return
self.go_to_login_form()
self.clear_user_data()
self.action_logout.setEnabled(False)
self.action_commands.setEnabled(False)
self.action_preferences.setEnabled(True)
else:
return
def modify_password_lines(self):
"""Modifies and appears password lines."""
geometry = QtCore.QRect(60, 200, 291, 41)
font = QFont()
font.setPointSize(14)
self.password_line1.setGeometry(geometry)
self.password_line1.setFont(font)
self.password_line1.setEchoMode(QtWidgets.QLineEdit.Password)
self.password_line1.setObjectName("password_line1")
self.password_line1.setPlaceholderText(self.translate("Messenger", "Password"))
self.password_line2.setGeometry(geometry)
self.password_line2.setFont(font)
self.password_line2.setEchoMode(QtWidgets.QLineEdit.Password)
self.password_line2.setObjectName("password_line2")
self.password_line2.setPlaceholderText(self.translate("Messenger", "Enter Your Password"))
def open_preferences_window(self):
"""Opens settings window."""
settings = Preferences(self)
if settings.exec():
self.server_IP = settings.server_IP.text()
def clear_user_data(self):
"""Clears user data after logout."""
self.username = None
self.plain_text_edit.clear()
self.text_browser.clear()
self.last_message_time = 0
def reload(self):
"""Reloads all messages and deletes commands output."""
self.text_browser.clear()
self.last_message_time = 0
def go_to_registration_form(self):
"""Navigates to registration menu."""
self.stacked_widget.setCurrentIndex(1)
def go_to_login_form(self):
"""Navigates to login menu."""
self.stacked_widget.setCurrentIndex(0)
def go_to_chat(self):
"""Navigates to chat."""
self.get_server_commands()
self.stacked_widget.setCurrentIndex(2)
self.action_logout.setEnabled(True)
self.action_commands.setEnabled(True)
self.action_preferences.setEnabled(False)
self.plain_text_edit.setFocus()
self.clear_credentials()
def clear_credentials(self):
"""Clears login and password lines after log in or sign up."""
self.password_line1.clear()
self.login_line1.clear()
self.password_line2.clear()
self.login_line2.clear()
self.password = None
def show_about_box(self):
"""Shows message box with content about messenger."""
QMessageBox.information(self, 'About', self.message_box_text["about"])
def show_contacts_box(self):
"""Shows message box with contacts information."""
QMessageBox.information(self, 'Contacts', self.message_box_text["contacts"])
def show_server_off_box(self):
"""Shows message box about server off information."""
QMessageBox.critical(self, 'Opsss...', self.message_box_text["server_is_off"])
self.go_to_login_form()
def show_shortcuts_box(self):
"""Shows message box with shortcuts."""
QMessageBox.information(self, 'Shortcuts', self.message_box_text["shortcuts"])
def show_commands_box(self):
"""Shows message box with available commands."""
output = help_client(self.client_commands, self.server_commands, [])
output = output.replace('=', '')
QMessageBox.information(self, 'Commands', output)
def sign_up_user(self):
"""
Registers user.
Verifies correctness of login and password input.
Sends request to sign up user.
"""
# Clear registration form.
self.login_error2.setText(self.translate("Messenger", self.warning_messages['empty_str']))
self.password_error2.setText(self.translate("Messenger", self.warning_messages['empty_str']))
self.login_line2.setStyleSheet("border: 1px solid #B8B5B2")
self.password_line2.setStyleSheet("border: 1px solid #B8B5B2")
self.username = self.login_line2.text()
self.password = self.password_line2.text()
# Check that form isn't empty.
if not self.username:
if not self.password:
self.login_error2.setText(self.translate("Messenger", self.warning_messages['login_required']))
self.password_error2.setText(self.translate("Messenger", self.warning_messages['password_required']))
self.login_line2.setStyleSheet("border: 1px solid red")
self.password_line2.setStyleSheet("border: 1px solid red")
return
else:
self.login_error2.setText(self.translate("Messenger", self.warning_messages['login_required']))
self.login_line2.setStyleSheet("border: 1px solid red")
return
else:
if not self.password:
self.password_error2.setText(self.translate("Messenger", self.warning_messages['password_required']))
self.password_line2.setStyleSheet("border: 1px solid red")
return
if not self.username.isalnum():
self.login_error2.setText(self.translate("Messenger", self.warning_messages['not_alphanumeric']))
self.login_error2.adjustSize()
self.login_line2.setStyleSheet("border: 1px solid red")
return
try:
response = post(
f'http://{self.server_IP}/sign_up',
auth=(self.username, self.password),
verify=False
)
except exceptions.RequestException as e:
self.show_server_off_box()
self.clear_credentials()
return
# Process bad request.
if response.json()['login_out_of_range']:
self.login_error2.setText(self.translate("Messenger", self.warning_messages['login_out_of_range']))
self.login_error2.adjustSize()
self.login_line2.setStyleSheet("border: 1px solid red")
return
elif response.json()['password_out_of_range']:
self.password_error2.setText(self.translate("Messenger", self.warning_messages['password_out_of_range']))
self.password_error2.adjustSize()
self.password_line2.setStyleSheet("border: 1px solid red")
return
elif not response.json()['ok']:
self.login_error2.setText(self.translate("Messenger", self.warning_messages['registered']))
self.login_error2.adjustSize()
self.login_line2.setStyleSheet("border: 1px solid red")
return
self.go_to_chat()
def login_user(self):
"""
Allows user to log in.
Verifies correctness of login and password input.
Sends request to authenticate user.
"""
# Clear login form.
self.login_error1.setText(self.translate("Messenger", self.warning_messages['empty_str']))
self.password_error1.setText(self.translate("Messenger", self.warning_messages['empty_str']))
self.login_line1.setStyleSheet("border: 1px solid #B8B5B2")
self.password_line1.setStyleSheet("border: 1px solid #B8B5B2")
self.username = self.login_line1.text()
self.password = self.password_line1.text()
# Check that form isn't empty.
if not self.username:
if not self.password:
self.login_error1.setText(self.translate("Messenger", self.warning_messages['login_required']))
self.password_error1.setText(self.translate("Messenger", self.warning_messages['password_required']))
self.login_line1.setStyleSheet("border: 1px solid red")
self.password_line1.setStyleSheet("border: 1px solid red")
return
else:
self.login_error1.setText(self.translate("Messenger", self.warning_messages['login_required']))
self.login_line1.setStyleSheet("border: 1px solid red")
return
else:
if not self.password:
self.password_error1.setText(self.translate("Messenger", self.warning_messages['password_required']))
self.password_line1.setStyleSheet("border: 1px solid red")
return
try:
response = post(
f'http://{self.server_IP}/auth',
auth=(self.username, self.password),
verify=False
)
except exceptions.RequestException as e:
self.show_server_off_box()
self.clear_credentials()
return
# Process bad request.
if not response.json()['exist']:
self.login_error1.setText(self.translate("Messenger", self.warning_messages['invalid_login']))
self.login_line1.setStyleSheet("border: 1px solid red")
return
if not response.json()['match']:
self.password_error1.setText(self.translate("Messenger", self.warning_messages['invalid_password']))
self.password_line1.setStyleSheet("border: 1px solid red")
return
if response.json()['banned']:
self.login_error1.setText(self.translate("Messenger", self.warning_messages['banned']))
self.login_line1.setStyleSheet("border: 1px solid red")
return
self.go_to_chat()
def get_server_commands(self):
"""Sends request to get available server-side commands for user."""
try:
response = post(
f'http://{self.server_IP}/command',
json={"username": self.username, "command": 'help'}, verify=False
)
except exceptions.RequestException as e:
self.clear_user_data()
self.show_server_off_box()
return
if not response.json()['ok']:
self.show_text(response.json()['output'] + "<br>")
self.plain_text_edit.clear()
return
self.server_commands = response.json()['output']
# Connect command name with function.
for cmd in self.server_commands:
if cmd['name'] != 'help': self.run_server_command[f"{cmd['name']}"] = globals()[cmd['name']]
def send(self):
"""Separates and directs messages & commands to relevant function."""
self.plain_text_edit.setFocus()
text = self.plain_text_edit.toPlainText()
text = text.strip()
# Validate text don't execute HTML.
text = text.replace('</', '')
text = text.replace('<', '')
text = text.replace('>', '')
if len(text) > self.max_text_len:
text = text[:self.max_text_len]
if not text:
return
elif text.startswith('/'):
self.send_command(text[1:])
else:
self.send_message(text)
def send_message(self, text):
"""
Stores message on the server.
:param text: text of message
"""
try:
post(
f'http://{self.server_IP}/send_message',
json={"username": self.username, "text": text},
verify=False
)
except exceptions.RequestException as e:
self.clear_user_data()
self.show_server_off_box()
return
self.plain_text_edit.clear()
self.plain_text_edit.repaint()
def send_command(self, cmd_string):
"""
Executes command.
If it's client-side command, executes directly from client.
If it's server-side command, sends command to execute
on the server and processes the output.
:param cmd_string: command with parameters to execute
"""
command = cmd_string.split()[0]
args = cmd_string.split()[1:] if len(cmd_string) > 1 else None
# Run client-side command.
if command in [cmd['name'] for cmd in self.client_commands]:
self.run_client_command.get(command)()
self.plain_text_edit.clear()
return
# Invalid command name.
elif command not in [cmd['name'] for cmd in self.server_commands]:
self.show_text(f"<b>Error:</b> Command '/{command}' not found.<br>"
f"Try '/help' to list all available commands :)<br>")
self.plain_text_edit.clear()
return
# Process 'help' command.
elif command == 'help':
output = help_client(self.client_commands, self.server_commands, args)
self.show_text(output)
self.plain_text_edit.clear()
return
try:
response = post(
f'http://{self.server_IP}/command',
json={"username": self.username, "command": cmd_string}, verify=False
)
except exceptions.RequestException as e:
self.clear_user_data()
self.show_server_off_box()
return
if not response.json()['ok']:
self.show_text("<b>Error:</b> " + response.json()['output'] + "<br>")
self.plain_text_edit.clear()
return
# Assign command function & run it with output from server.
run_command = self.run_server_command.get(command)
output = run_command(response.json()['output'], args)
self.show_text(output)
self.plain_text_edit.clear()
self.plain_text_edit.repaint()
def get_messages(self):
"""Sends request to get new messages and appears them in style."""
if not self.stacked_widget.currentIndex() == 2:
return
try:
response = get(
f'http://{self.server_IP}/get_messages',
params={'after': self.last_message_time},
verify=False
)
data = response.json()
except exceptions.RequestException as e:
self.clear_user_data()
self.show_server_off_box()
return
# Generate message.
for message in data['messages']:
# float -> datetime.
beauty_time = datetime.fromtimestamp(message['time'])
beauty_time = beauty_time.strftime('%d/%m %H:%M:%S')
# User will see his messages from the right side.
if message['username'] == self.username:
self.show_text(self.message_style['begin'] + beauty_time + ' ' + message['username']
+ self.message_style['middle'] + message['text'] + self.message_style['end'])
self.last_message_time = message['time']
else:
self.show_text(message['username'] + ' ' + beauty_time)
self.show_text(message['text'] + "<br>")
self.last_message_time = message['time']
def get_status(self):
"""Sends request to get server status."""
try:
response = get(
f'http://{self.server_IP}/status',
verify=False
)
status = response.json()
# Server is off.
except exceptions.RequestException as e:
self.server_status.setText(self.translate("Messenger", '<p style="font-size:12px">'
'<img src="images/server-is-off.png"> Offline</p>'))
tool_tip = f"Can't connect to the server<br>" \
f"Maybe the server isn't running or you've entered an invalid IP address in Preferences"
self.server_status.setToolTip(tool_tip)
return
# Server is on.
self.server_status.setText(self.translate("Messenger", '<p style="font-size:12px">'
'<img src="images/server-is-on.png"> Online</p>'))
tool_tip = f"Server is working<br>" \
f"Users online: {status['users_online']}<br>" \
f"Date and time: {status['time']}<br>" \
f"Registered users: {status['users_count']}<br>" \
f"Written messages: {status['messages_count']}"
self.server_status.setToolTip(tool_tip)
def show_text(self, text):
"""Shows given text in messenger chat."""
self.text_browser.append(text)
self.text_browser.repaint()
app = QtWidgets.QApplication([])
window = Messenger()
app.setStyleSheet(load_stylesheet())
window.show()
app.exec_()
| 38.982567 | 120 | 0.622693 | 2,858 | 24,598 | 5.187544 | 0.138209 | 0.034804 | 0.034129 | 0.041076 | 0.462701 | 0.415756 | 0.361122 | 0.334345 | 0.32153 | 0.271887 | 0 | 0.008887 | 0.277218 | 24,598 | 630 | 121 | 39.044444 | 0.825018 | 0.186275 | 0 | 0.429293 | 0 | 0 | 0.112184 | 0.011809 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065657 | false | 0.113636 | 0.030303 | 0 | 0.179293 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
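`Messenger.send()` above normalizes a message before dispatch: strip surrounding whitespace, drop angle brackets so chat text cannot inject HTML into the QTextBrowser, and truncate to the 250-character limit. The same clean-up as a pure function (the function name is mine):

```python
MAX_TEXT_LEN = 250  # same limit as Messenger.max_text_len

# Strip whitespace, remove HTML-significant brackets, and truncate,
# mirroring the validation performed in Messenger.send().
def clean_message(text, max_len=MAX_TEXT_LEN):
    text = text.strip()
    for token in ('</', '<', '>'):
        text = text.replace(token, '')
    return text[:max_len]

print(clean_message("  <b>hello</b>  "))  # bhellob
```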
f48259ce6371a22b92ea0a936d7be4886d4013dc | 4,030 | py | Python | agro_site/orders/migrations/0001_initial.py | LukoninDmitryPy/agro_site-2 | eab7694d42104774e5ce6db05a79f11215db6ae3 | [
"MIT"
] | null | null | null | agro_site/orders/migrations/0001_initial.py | LukoninDmitryPy/agro_site-2 | eab7694d42104774e5ce6db05a79f11215db6ae3 | [
"MIT"
] | null | null | null | agro_site/orders/migrations/0001_initial.py | LukoninDmitryPy/agro_site-2 | eab7694d42104774e5ce6db05a79f11215db6ae3 | [
"MIT"
] | 1 | 2022-03-13T11:32:48.000Z | 2022-03-13T11:32:48.000Z | # Generated by Django 2.2.16 on 2022-04-12 13:28
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.db.models.expressions
import django.utils.timezone
class Migration(migrations.Migration):
initial = True
dependencies = [
('sales_backend', '0001_initial'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Chat',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('type', models.CharField(choices=[('D', 'Dialog'), ('C', 'Chat')], default='D', max_length=1, verbose_name='Тип')),
('members', models.ManyToManyField(to=settings.AUTH_USER_MODEL, verbose_name='Участник')),
],
),
migrations.CreateModel(
name='Order',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('address', models.CharField(max_length=250)),
('postal_code', models.CharField(max_length=20)),
('city', models.CharField(max_length=100)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('paid', models.BooleanField(default=False)),
('status_order', models.CharField(choices=[('В обработке', 'В обработке'), ('Заказ собран', 'Заказ собран'), ('Заказ отправлен', 'Заказ отправлен')], default='В обработке', max_length=20)),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='user', to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name_plural': 'Заказы',
'ordering': ('-created',),
},
),
migrations.CreateModel(
name='OrderItem',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('price', models.DecimalField(decimal_places=2, max_digits=10)),
('quantity', models.PositiveIntegerField(default=1)),
('order', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='items', to='orders.Order')),
('product', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='order_items', to='sales_backend.Product')),
('seller', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='seller', to=settings.AUTH_USER_MODEL)),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='order_users', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('message', models.TextField(verbose_name='Сообщение')),
('pub_date', models.DateTimeField(default=django.utils.timezone.now, verbose_name='Дата сообщения')),
('is_readed', models.BooleanField(default=False, verbose_name='Прочитано')),
('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='Пользователь')),
('chat', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='orders.Chat', verbose_name='Чат')),
],
options={
'ordering': ['pub_date'],
},
),
migrations.AddConstraint(
model_name='orderitem',
constraint=models.CheckConstraint(check=models.Q(_negated=True, user=django.db.models.expressions.F('seller')), name='dont_buy_yourself'),
),
]
# File: ais3-pre-exam-2022-writeup/Misc/JeetQode/chall/problems/astmath.py (Jimmy01240397/balsn-2021-writeup, MIT)
from problem import Problem
from typing import Any, Tuple
from random import randint
import ast
import json
def gen_num():
return str(randint(1, 9))
def gen_op():
return "+-*/"[randint(0, 3)]
def gen_expr(depth):
if randint(0, depth) == 0:
l = gen_expr(depth + 1)
r = gen_expr(depth + 1)
op = gen_op()
return f"({l}{op}{r})"
return f"({gen_num()})"
class ASTMath(Problem):
@property
def name(self) -> str:
return "AST Math"
@property
def desciption(self) -> str:
return """
Input: An AST of Python's arithmetic expression (only +,-,*,/)
Output: Result number
Examples:
Input: {"body": {"left": {"value": 1, "kind": null, "lineno": 1, "col_offset": 0, "end_lineno": 1, "end_col_offset": 1}, "op": "<_ast.Add object at 0x7f0387ccde20>", "right": {"value": 2, "kind": null, "lineno": 1, "col_offset": 2, "end_lineno": 1, "end_col_offset": 3}, "lineno": 1, "col_offset": 0, "end_lineno": 1, "end_col_offset": 3}}
Output: 3
Input: {"body": {"left": {"left": {"value": 8, "kind": null, "lineno": 1, "col_offset": 1, "end_lineno": 1, "end_col_offset": 2}, "op": "<_ast.Mult object at 0x7f20eb76aee0>", "right": {"value": 7, "kind": null, "lineno": 1, "col_offset": 3, "end_lineno": 1, "end_col_offset": 4}, "lineno": 1, "col_offset": 1, "end_lineno": 1, "end_col_offset": 4}, "op": "<_ast.Sub object at 0x7f20eb76ae80>", "right": {"left": {"value": 6, "kind": null, "lineno": 1, "col_offset": 7, "end_lineno": 1, "end_col_offset": 8}, "op": "<_ast.Mult object at 0x7f20eb76aee0>", "right": {"value": 3, "kind": null, "lineno": 1, "col_offset": 9, "end_lineno": 1, "end_col_offset": 10}, "lineno": 1, "col_offset": 7, "end_lineno": 1, "end_col_offset": 10}, "lineno": 1, "col_offset": 0, "end_lineno": 1, "end_col_offset": 11}}
Output: 38
"""
@property
def rounds(self) -> int:
return 10
def dumps(self, x):
return json.dumps(
x, default=lambda x: x.__dict__ if len(x.__dict__) else str(x)
)
def generate_testcase(self) -> Tuple[bool, Any]:
l = gen_expr(1)
r = gen_expr(1)
op = gen_op()
expr = f"{l}{op}{r}"
try:
result = eval(expr)
except ZeroDivisionError:
return self.generate_testcase()
return ast.parse(expr, mode="eval"), result
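The `default=` hook in `dumps` above is what produces the nested JSON in the example payloads, including the stringified operator nodes (`<_ast.Add object at ...>`). A standalone sketch of that serialization trick, independent of the `Problem` base class:

```python
import ast
import json

def dumps(x):
    # Objects with attributes serialize via __dict__; attribute-less ones
    # (operator singletons like ast.Add) fall back to their repr string.
    return json.dumps(x, default=lambda o: o.__dict__ if len(o.__dict__) else str(o))

tree = ast.parse("1+2", mode="eval")
data = json.loads(dumps(tree))
print(data["body"]["left"]["value"], data["body"]["right"]["value"])  # 1 2
```

The challenge's solver has to recover the operator from that repr string, since the round trip is lossy by design.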
# File: python/cac_tripplanner/destinations/migrations/0021_event.py (maurizi/cac-tripplanner, Apache-2.0)
# -*- coding: utf-8 -*-
# Generated by Django 1.11.7 on 2017-11-28 17:32
from __future__ import unicode_literals
import ckeditor.fields
import destinations.models
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('destinations', '0020_auto_20170203_1251'),
]
operations = [
migrations.CreateModel(
name='Event',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('website_url', models.URLField(blank=True, null=True)),
('description', ckeditor.fields.RichTextField()),
('start_date', models.DateTimeField()),
('end_date', models.DateTimeField()),
('image', models.ImageField(help_text=b'The small image. Will be displayed at 310x155.', null=True, upload_to=destinations.models.generate_filename)),
('wide_image', models.ImageField(help_text=b'The large image. Will be displayed at 680x400.', null=True, upload_to=destinations.models.generate_filename)),
('published', models.BooleanField(default=False)),
('priority', models.IntegerField(default=9999)),
('destination', models.ForeignKey(null=True, blank=True, on_delete=django.db.models.deletion.SET_NULL, to='destinations.Destination')),
],
options={
'ordering': ['priority', '-start_date'],
},
),
]
#!/usr/bin/env python
# File: prog_vae/prog_encoder/prog_encoder.py (Hanjun-Dai/sdvae, MIT)
from __future__ import print_function
import os
import sys
import csv
import numpy as np
import math
import random
from collections import defaultdict
import torch
from torch.autograd import Variable
from torch.nn.parameter import Parameter
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
sys.path.append( '%s/../prog_common' % os.path.dirname(os.path.realpath(__file__)) )
from prog_util import DECISION_DIM
from cmd_args import cmd_args
from pytorch_initializer import weights_init
sys.path.append( '%s/../cfg_parser' % os.path.dirname(os.path.realpath(__file__)) )
import cfg_parser as parser
class CNNEncoder(nn.Module):
def __init__(self, max_len, latent_dim):
super(CNNEncoder, self).__init__()
self.latent_dim = latent_dim
self.max_len = max_len
self.conv1 = nn.Conv1d(DECISION_DIM, cmd_args.c1, cmd_args.c1)
self.conv2 = nn.Conv1d(cmd_args.c1, cmd_args.c2, cmd_args.c2)
self.conv3 = nn.Conv1d(cmd_args.c2, cmd_args.c3, cmd_args.c3)
self.last_conv_size = max_len - cmd_args.c1 + 1 - cmd_args.c2 + 1 - cmd_args.c3 + 1
self.w1 = nn.Linear(self.last_conv_size * cmd_args.c3, cmd_args.dense)
self.mean_w = nn.Linear(cmd_args.dense, latent_dim)
self.log_var_w = nn.Linear(cmd_args.dense, latent_dim)
weights_init(self)
def forward(self, x_cpu):
if cmd_args.mode == 'cpu':
batch_input = Variable(torch.from_numpy(x_cpu))
else:
batch_input = Variable(torch.from_numpy(x_cpu).cuda())
h1 = self.conv1(batch_input)
h1 = F.relu(h1)
h2 = self.conv2(h1)
h2 = F.relu(h2)
h3 = self.conv3(h2)
h3 = F.relu(h3)
# h3 = torch.transpose(h3, 1, 2).contiguous()
flatten = h3.view(x_cpu.shape[0], -1)
h = self.w1(flatten)
h = F.relu(h)
z_mean = self.mean_w(h)
z_log_var = self.log_var_w(h)
return (z_mean, z_log_var)
if __name__ == '__main__':
pass
# File: neutron_lbaas/drivers/driver_mixins.py (containers-kraken/neutron-lbaas, Apache-2.0)
# Copyright 2014 A10 Networks
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
from neutron.plugins.common import constants
from oslo_log import log as logging
import six
from neutron_lbaas.db.loadbalancer import models
from neutron_lbaas.services.loadbalancer import constants as lb_const
from neutron_lbaas.services.loadbalancer import data_models
LOG = logging.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
class BaseManagerMixin(object):
def __init__(self, driver):
self.driver = driver
@abc.abstractproperty
def db_delete_method(self):
pass
@abc.abstractmethod
def create(self, context, obj):
pass
@abc.abstractmethod
def update(self, context, obj_old, obj):
pass
@abc.abstractmethod
def delete(self, context, obj):
pass
def successful_completion(self, context, obj, delete=False,
lb_create=False):
"""
Sets the provisioning_status of the load balancer and obj to
ACTIVE. Should be called last in the implementor's BaseManagerMixin
methods for successful runs.
:param context: neutron context
:param obj: instance of a
neutron_lbaas.services.loadbalancer.data_model
:param delete: set True if being called from a delete method. Will
most likely result in the obj being deleted from the db.
:param lb_create: set True if this is being called after a successful
load balancer create.
"""
LOG.debug("Starting successful_completion method after a successful "
"driver action.")
obj_sa_cls = data_models.DATA_MODEL_TO_SA_MODEL_MAP[obj.__class__]
if delete:
# Check if driver is responsible for vip allocation. If the driver
# is responsible, then it is also responsible for cleaning it up.
# At this point, the VIP should already be cleaned up, so we are
# just doing neutron lbaas db cleanup.
if (obj == obj.root_loadbalancer and
self.driver.load_balancer.allocates_vip):
# NOTE(blogan): this is quite dumb to do but it is necessary
# so that a false negative pep8 error does not get thrown. An
# "unexpected-keyword-argument" pep8 error occurs bc
# self.db_delete_method is a @property method that returns a
# method.
kwargs = {'delete_vip_port': False}
self.db_delete_method(context, obj.id, **kwargs)
else:
self.db_delete_method(context, obj.id)
if obj == obj.root_loadbalancer and delete:
# Load balancer was deleted and no longer exists
return
lb_op_status = None
lb_p_status = constants.ACTIVE
if obj == obj.root_loadbalancer:
# only set the status to online if this an operation on the
# load balancer
lb_op_status = lb_const.ONLINE
# Update the load balancer's vip address and vip port id if the driver
# was responsible for allocating the vip.
if (self.driver.load_balancer.allocates_vip and lb_create and
isinstance(obj, data_models.LoadBalancer)):
self.driver.plugin.db.update_loadbalancer(
context, obj.id, {'vip_address': obj.vip_address,
'vip_port_id': obj.vip_port_id})
self.driver.plugin.db.update_status(
context, models.LoadBalancer, obj.root_loadbalancer.id,
provisioning_status=lb_p_status,
operating_status=lb_op_status)
if obj == obj.root_loadbalancer or delete:
# Do not want to update the status of the load balancer again
# Or the obj was deleted from the db so no need to update the
# statuses
return
obj_op_status = lb_const.ONLINE
if isinstance(obj, data_models.HealthMonitor):
# Health Monitor does not have an operating status
obj_op_status = None
LOG.debug("Updating object of type {0} with id of {1} to "
"provisioning_status = {2}, operating_status = {3}".format(
obj.__class__, obj.id, constants.ACTIVE, obj_op_status))
self.driver.plugin.db.update_status(
context, obj_sa_cls, obj.id,
provisioning_status=constants.ACTIVE,
operating_status=obj_op_status)
def failed_completion(self, context, obj):
"""
Sets the provisioning status of the obj to ERROR. If obj is a
loadbalancer it will be set to ERROR, otherwise set to ACTIVE. Should
be called whenever something goes wrong (raised exception) in an
implementor's BaseManagerMixin methods.
:param context: neutron context
:param obj: instance of a
neutron_lbaas.services.loadbalancer.data_model
"""
LOG.debug("Starting failed_completion method after a failed driver "
"action.")
if isinstance(obj, data_models.LoadBalancer):
LOG.debug("Updating load balancer {0} to provisioning_status = "
"{1}, operating_status = {2}.".format(
obj.root_loadbalancer.id, constants.ERROR,
lb_const.OFFLINE))
self.driver.plugin.db.update_status(
context, models.LoadBalancer, obj.root_loadbalancer.id,
provisioning_status=constants.ERROR,
operating_status=lb_const.OFFLINE)
return
obj_sa_cls = data_models.DATA_MODEL_TO_SA_MODEL_MAP[obj.__class__]
LOG.debug("Updating object of type {0} with id of {1} to "
"provisioning_status = {2}, operating_status = {3}".format(
obj.__class__, obj.id, constants.ERROR,
lb_const.OFFLINE))
self.driver.plugin.db.update_status(
context, obj_sa_cls, obj.id,
provisioning_status=constants.ERROR,
operating_status=lb_const.OFFLINE)
LOG.debug("Updating load balancer {0} to "
"provisioning_status = {1}".format(obj.root_loadbalancer.id,
constants.ACTIVE))
self.driver.plugin.db.update_status(
context, models.LoadBalancer, obj.root_loadbalancer.id,
provisioning_status=constants.ACTIVE)
def update_vip(self, context, loadbalancer_id, vip_address,
vip_port_id=None):
lb_update = {'vip_address': vip_address}
if vip_port_id:
lb_update['vip_port_id'] = vip_port_id
self.driver.plugin.db.update_loadbalancer(context, loadbalancer_id,
lb_update)
@six.add_metaclass(abc.ABCMeta)
class BaseRefreshMixin(object):
@abc.abstractmethod
def refresh(self, context, obj):
pass
@six.add_metaclass(abc.ABCMeta)
class BaseStatsMixin(object):
@abc.abstractmethod
def stats(self, context, obj):
pass
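A minimal, framework-free sketch of the manager pattern above — plain `abc` instead of `six.add_metaclass`, with a hypothetical `InMemoryManager` standing in for a real plugin — illustrating why `db_delete_method` can be a `@property` that returns a method (the wrinkle noted in the `successful_completion` comments):

```python
import abc

class BaseManager(abc.ABC):
    @property
    @abc.abstractmethod
    def db_delete_method(self):
        ...

    @abc.abstractmethod
    def create(self, context, obj):
        ...

class InMemoryManager(BaseManager):
    """Toy subclass: stores objects in a dict instead of a database."""
    def __init__(self):
        self.store = {}

    @property
    def db_delete_method(self):
        return self.store.pop  # property returning a bound method

    def create(self, context, obj):
        self.store[obj["id"]] = obj
        return obj

m = InMemoryManager()
m.create(None, {"id": 1})
m.db_delete_method(1)   # resolves the property, then calls dict.pop(1)
print(m.store)  # {}
```

Because the property resolves first, `self.db_delete_method(context, obj.id, **kwargs)` in the real class is an ordinary call on the returned method, which is what triggers the false-positive "unexpected-keyword-argument" lint the comment works around.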
# File: build/generate_confirmed_cases_by_counties.py (jtagcat/koroonakaart, CC0-1.0/MIT)
from build.chart_data_functions import get_confirmed_cases_by_county
from build.chart_data_functions import get_county_by_day
from build.constants import CONFIRMED_CASES_BY_COUNTIES_PATH
from build.constants import COUNTY_MAPPING
from build.constants import COUNTY_POPULATION
from build.constants import DATE_SETTINGS
from build.constants import TEST_RESULTS_PATH
from build.constants import TODAY_DMYHM
from build.constants import YESTERDAY_YMD
from build.utils import analyze_memory
from build.utils import analyze_time
from build.utils import logger
from build.utils import read_json_from_file
from build.utils import save_as_json
import pandas as pd
@analyze_time
@analyze_memory
def main():
# Log status
logger.info("Loading local data files")
test_results = read_json_from_file(TEST_RESULTS_PATH)
# Log status
logger.info("Calculating main statistics")
# Create date ranges for charts
case_dates = pd.date_range(start=DATE_SETTINGS["firstCaseDate"], end=YESTERDAY_YMD)
# Get data for each chart
logger.info("Calculating data for charts")
county_by_day = get_county_by_day(
test_results, case_dates, COUNTY_MAPPING, COUNTY_POPULATION
)
confirmed_cases_by_county = get_confirmed_cases_by_county(
test_results, COUNTY_MAPPING
)
del county_by_day["mapPlayback"]
del county_by_day["mapPlayback10k"]
# Create dictionary for final JSON
logger.info("Compiling final JSON")
final_json = {
"updatedOn": TODAY_DMYHM,
"dataConfirmedCasesByCounties": confirmed_cases_by_county,
"countyByDay": county_by_day,
}
# Dump JSON output
save_as_json(CONFIRMED_CASES_BY_COUNTIES_PATH, final_json)
# Log finish time
logger.info("Finished update process")
if __name__ == "__main__":
main()
# File: users/models.py (makutas/CocktailWebsite, MIT)
from django.db import models
from django.contrib.auth.models import User
class UserProfile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
user_description = models.CharField(max_length=200, null=True)
user_avatar = models.ImageField(null=True, blank=True)
user_uploaded_recipes = models.IntegerField() # Increment by 1 on upload
def __str__(self):
return f"{self.user.username}"
# File: gen_cnn_dataset.py (NPCai/graphene-py, MIT)
import wrapper as w
from multiprocessing import Process, Queue  # multiprocessing.Queue so items are visible across processes
from queue import Empty
import concurrent.futures
import atexit
import time
''' 8 Processes, 24 threads per process = 192 threads '''
NUM_PROCESSES = 8
workerList = [] # Worker processes
class Worker(Process): # Need multiple processes or else it takes forever
	def __init__(self, queue): # queue holds the sentences this worker will extract from
		super().__init__()
		self.queue = queue
		self.outQueue = Queue()
	def run(self):
		with concurrent.futures.ThreadPoolExecutor(max_workers=24) as executor:
			for _ in range(24):
				executor.submit(self.loadUrl) # submit the callable itself, not its result
	def loadUrl(self):
		while True:
			try:
				sentence = self.queue.get_nowait()
			except Empty: # queue drained
				break
			ex = w.GrapheneExtract(sentence)
			self.outQueue.put(sentence.strip() + "\t" + str(ex.json) + "\n")
queues = [] # Use separate queues to avoid waiting for locks
with open("data/all_news.txt", "r") as news:
	lines = news.readlines()
for i in range(NUM_PROCESSES):
	queue = Queue()
	for line in lines[i::NUM_PROCESSES]: # deal lines round-robin across processes
		queue.put(line.strip())
	queues.append(queue)
print("Queues populated")
for i in range(NUM_PROCESSES):
	worker = Worker(queues[i])
	worker.daemon = True
	worker.start()
	workerList.append(worker)
def close_running_threads():
for thread in workerList:
thread.join()
atexit.register(close_running_threads)
print("All threads registered and working.")
while True:
	print(sum(q.qsize() for q in queues), "sentences remaining to be requested")
time.sleep(2) # Print every two seconds | 26.72549 | 74 | 0.726339 | 194 | 1,363 | 5.015464 | 0.541237 | 0.036999 | 0.026721 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008688 | 0.155539 | 1,363 | 51 | 75 | 26.72549 | 0.836664 | 0.131328 | 0 | 0 | 0 | 0 | 0.096257 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.128205 | null | null | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
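The queue fan-out this script relies on can be exercised in isolation with a thread pool and an in-process queue (a toy `upper()` call standing in for the Graphene extraction request):

```python
import concurrent.futures
from queue import Queue, Empty

def drain(in_q, out_q):
    # Pull items until the queue is exhausted; get_nowait + Empty is
    # safe even with several consumers sharing one queue.
    while True:
        try:
            item = in_q.get_nowait()
        except Empty:
            break
        out_q.put(item.upper())

in_q, out_q = Queue(), Queue()
for s in ["alpha", "beta", "gamma", "delta"]:
    in_q.put(s)

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    for _ in range(2):
        pool.submit(drain, in_q, out_q)  # submit the callable, not drain(...)

results = sorted(out_q.queue)
print(results)  # ['ALPHA', 'BETA', 'DELTA', 'GAMMA']
```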
be47030ab919977e3706aa43ef448dd537100bbd | 2,702 | py | Python | torch/_prims/context.py | EikanWang/pytorch | 823ddb6e87e8111c9b5a99523503172e5bf62c49 | [
"Intel"
] | null | null | null | torch/_prims/context.py | EikanWang/pytorch | 823ddb6e87e8111c9b5a99523503172e5bf62c49 | [
"Intel"
] | 1 | 2022-01-10T18:39:28.000Z | 2022-01-10T19:15:57.000Z | torch/_prims/context.py | HaoZeke/pytorch | 4075972c2675ef34fd85efd60c9bad75ad06d386 | [
"Intel"
] | null | null | null | from typing import Callable, Sequence, Any, Dict
import functools
import torch
import torch.overrides
from torch._prims.utils import torch_function_passthrough
import torch._refs as refs
import torch._refs
import torch._refs.nn
import torch._refs.nn.functional
import torch._refs.special
import torch._prims
# TODO: automap torch operations to references
# (need to throw a good assertion if the mapping doesn't exist)
_torch_to_reference_map = {
torch.add: refs.add,
# torch.div: refs.div,
torch.mul: refs.mul,
torch.ge: refs.ge,
torch.gt: refs.gt,
torch.le: refs.le,
torch.lt: refs.lt,
}
@functools.lru_cache(None)
def torch_to_refs_map():
"""
Mapping of torch API functions to torch._refs functions.
E.g. torch_to_refs_map()[torch.add] == torch._refs.add
"""
modules = [
(torch, torch._refs),
(torch.nn, torch._refs.nn),
(torch.nn.functional, torch._refs.nn.functional),
(torch.special, torch._refs.special),
]
r = {}
for mod_torch, mod_refs in modules:
for s in mod_refs.__all__: # type: ignore[attr-defined]
r[mod_torch.__dict__.get(s)] = mod_refs.__dict__.get(s)
return r
@functools.lru_cache(None)
def all_prims():
"""
Set of all prim functions, e.g., torch._prims.add in all_prims()
"""
return {torch._prims.__dict__.get(s) for s in torch._prims.__all__}
class TorchRefsMode(torch.overrides.TorchFunctionMode):
"""
Switches the interpretation of torch.* functions and Tensor methods to
use PrimTorch refs in torch._refs. (Direct calls to _refs are unaffected.)
>>> with TorchRefsMode.push():
... torch.add(x, y) # calls torch._refs.add(x, y)
By default, this context manager will fall back on the torch.* if the
ref does not exist; set strict=True to error if this occurs.
"""
def __init__(self, strict=False):
self.strict = strict
def __torch_function__(
self,
orig_func: Callable,
types: Sequence,
args: Sequence[Any] = (),
kwargs: Dict = None,
):
if kwargs is None:
kwargs = {}
# For primitive operations, run them as is without interception
if orig_func in torch_function_passthrough or orig_func in all_prims():
return orig_func(*args, **kwargs)
mapping = torch_to_refs_map()
func = mapping.get(orig_func, None)
if func is not None:
return func(*args, **kwargs)
if self.strict:
raise RuntimeError(
f"no _refs support for {torch.overrides.resolve_name(orig_func)}"
)
return orig_func(*args, **kwargs)
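The lookup-and-fallback dispatch that `__torch_function__` performs above can be sketched without torch (a hypothetical `slow_add`/`fast_add` pair stands in for a torch op and its `_refs` counterpart):

```python
import functools

def slow_add(a, b):          # stands in for the original torch.add
    return a + b

calls = {"ref": 0}

def fast_add(a, b):          # stands in for the torch._refs reference
    calls["ref"] += 1
    return a + b

@functools.lru_cache(None)
def to_refs_map():
    # Built once and memoized, like torch_to_refs_map() above.
    return {slow_add: fast_add}

def dispatch(func, *args, **kwargs):
    # Use the registered reference if one exists, else fall back.
    ref = to_refs_map().get(func)
    return (ref if ref is not None else func)(*args, **kwargs)

print(dispatch(slow_add, 1, 2))  # 3
```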
# File: gaussian_blur/gaussian_blur.py (Soft-illusion/ComputerVision, MIT)
import cv2 as cv
import sys
import numpy as np
import random as r
import os
from PIL import Image as im
def noisy(noise_typ,image):
if noise_typ == "gauss":
# Generate Gaussian noise
gauss = np.random.normal(0,1,image.size)
print(gauss)
gauss = gauss.reshape(image.shape[0],image.shape[1],image.shape[2]).astype('uint8')
# Add the Gaussian noise to the image
img_gauss = cv.add(image,gauss)
cv.imwrite("Noise.png", gauss)
return img_gauss
elif noise_typ == "s&p":
row,col,ch = image.shape
s_vs_p = 0.5
amount = 0.004
out = np.copy(image)
# Salt mode
        num_salt = np.ceil(amount * image.size * s_vs_p)
        coords = tuple(np.random.randint(0, i - 1, int(num_salt))
                       for i in image.shape)
        out[coords] = 1  # index with a tuple of arrays; list indexing is rejected by modern NumPy
        # Pepper mode
        num_pepper = np.ceil(amount * image.size * (1. - s_vs_p))
        coords = tuple(np.random.randint(0, i - 1, int(num_pepper))
                       for i in image.shape)
        out[coords] = 0
return out
elif noise_typ == "poisson":
vals = len(np.unique(image))
vals = 2 ** np.ceil(np.log2(vals))
noisy = np.random.poisson(image * vals) / float(vals)
return noisy
elif noise_typ =="speckle":
row,col,ch = image.shape
gauss = np.random.randn(row,col,ch)
gauss = gauss.reshape(row,col,ch)
noisy = image + image * gauss
return noisy
img = cv.imread(cv.samples.findFile("3.png"))
if img is None:
sys.exit("Could not read the image.")
else:
    height, width, depth = img.shape  # shape is (rows, cols, channels)
img_noisy = noisy("gauss",img)
for kernal_size in range (1,71,2):
print(kernal_size)
dst = cv.GaussianBlur(img_noisy,(kernal_size,kernal_size),0)
# print( cv.getGaussianKernel(kernal_size,0))
file_name = "gaussian_blur" + str(kernal_size) + ".png"
cv.imwrite(file_name, dst)
# dst = img_noisy
# for kernal_no in range (0,200):
# print(kernal_no)
# dst = cv.GaussianBlur(dst,(3,3),1)
# # print( cv.getGaussianKernel(kernal_size,3))
# file_name = "gaussian_blur" + str(kernal_no) + ".png"
# cv.imwrite(file_name, dst)
for kernal_size in range (1,71,2):
print(kernal_size)
dst = cv.bilateralFilter(img_noisy,kernal_size,300,300)
# print( cv.getGaussianKernel(kernal_size,0))
file_name = "bilateral_blur" + str(kernal_size) + ".png"
cv.imwrite(file_name, dst)
# File: tests/functional/controllers/test_group_controller_superuser.py (roscisz/TensorHive, Apache-2.0)
from tensorhive.models.Group import Group
from fixtures.controllers import API_URI as BASE_URI, HEADERS
from http import HTTPStatus
from importlib import reload
import json
import auth_patcher
ENDPOINT = BASE_URI + '/groups'
def setup_module(_):
auth_patches = auth_patcher.get_patches(superuser=True)
for auth_patch in auth_patches:
auth_patch.start()
for module in auth_patcher.CONTROLLER_MODULES:
reload(module)
for auth_patch in auth_patches:
auth_patch.stop()
# POST /groups
def test_create_group(tables, client):
group_name = 'TestGroup'
data = {'name': group_name}
resp = client.post(ENDPOINT, headers=HEADERS, data=json.dumps(data))
resp_json = json.loads(resp.data.decode('utf-8'))
assert resp.status_code == HTTPStatus.CREATED
assert resp_json['group']['id'] is not None
assert resp_json['group']['name'] == group_name
assert Group.get(int(resp_json['group']['id'])) is not None
# PUT /groups/{id}
def test_update_group(tables, client, new_group):
new_group.save()
new_group_name = new_group.name + '111'
resp = client.put(ENDPOINT + '/' + str(new_group.id), headers=HEADERS, data=json.dumps({'name': new_group_name}))
resp_json = json.loads(resp.data.decode('utf-8'))
assert resp.status_code == HTTPStatus.OK
assert resp_json['group']['name'] == new_group_name
assert Group.get(new_group.id).name == new_group_name


# PUT /groups/{id} - nonexistent id
def test_update_group_that_doesnt_exist(tables, client):
    non_existent_id = '777'

    resp = client.put(ENDPOINT + '/' + non_existent_id, headers=HEADERS,
                      data=json.dumps({'name': 'test'}))

    assert resp.status_code == HTTPStatus.NOT_FOUND


# DELETE /groups/{id}
def test_delete_group(tables, client, new_group):
    new_group.save()

    resp = client.delete(ENDPOINT + '/' + str(new_group.id), headers=HEADERS)

    assert resp.status_code == HTTPStatus.OK

    # Fetch all groups to verify the deletion
    resp = client.get(ENDPOINT, headers=HEADERS)
    resp_json = json.loads(resp.data.decode('utf-8'))
    assert len(resp_json) == 0


# DELETE /groups/{id} - nonexistent id
def test_delete_group_that_doesnt_exist(tables, client):
    non_existent_id = '777'

    resp = client.delete(ENDPOINT + '/' + non_existent_id, headers=HEADERS)

    assert resp.status_code == HTTPStatus.NOT_FOUND


# PUT /groups/{id}/users/{id}
def test_add_user_to_a_group(tables, client, new_group, new_user):
    new_group.save()
    new_user.save()

    resp = client.put(ENDPOINT + '/{}/users/{}'.format(new_group.id, new_user.id), headers=HEADERS)

    assert resp.status_code == HTTPStatus.OK
    assert new_group in new_user.groups
    assert new_user in new_group.users


# DELETE /groups/{id}/users/{id}
def test_remove_user_from_a_group(tables, client, new_group_with_member):
    new_group_with_member.save()
    user = new_group_with_member.users[0]

    resp = client.delete(ENDPOINT + '/{}/users/{}'.format(new_group_with_member.id, user.id), headers=HEADERS)

    assert resp.status_code == HTTPStatus.OK
    assert new_group_with_member not in user.groups
    assert user not in new_group_with_member.users


# PUT /groups/{id}/users/{id} - nonexistent user id
def test_add_nonexistent_user_to_a_group(tables, client, new_group):
    new_group.save()
    nonexistent_user_id = '777'

    resp = client.put(ENDPOINT + '/{}/users/{}'.format(new_group.id, nonexistent_user_id), headers=HEADERS)

    assert resp.status_code == HTTPStatus.NOT_FOUND


# PUT /groups/{id}/users/{id} - nonexistent group id
def test_add_user_to_nonexistent_group(tables, client, new_user):
    new_user.save()
    nonexistent_group_id = '777'

    resp = client.put(ENDPOINT + '/{}/users/{}'.format(nonexistent_group_id, new_user.id), headers=HEADERS)

    assert resp.status_code == HTTPStatus.NOT_FOUND


# DELETE /groups/{id}/users/{id} - nonexistent user id
def test_remove_nonexistent_user_from_a_group(tables, client, new_group):
    new_group.save()
    nonexistent_user_id = '777'

    resp = client.delete(ENDPOINT + '/{}/users/{}'.format(new_group.id, nonexistent_user_id), headers=HEADERS)

    assert resp.status_code == HTTPStatus.NOT_FOUND


# DELETE /groups/{id}/users/{id} - nonexistent group id
def test_remove_user_from_a_nonexistent_group(tables, client, new_user):
    new_user.save()
    nonexistent_group_id = '777'

    resp = client.delete(ENDPOINT + '/{}/users/{}'.format(nonexistent_group_id, new_user.id), headers=HEADERS)

    assert resp.status_code == HTTPStatus.NOT_FOUND


# PUT /groups/{id}
def test_set_group_as_a_default(tables, client, new_group):
    new_group.save()

    resp = client.put(ENDPOINT + '/{}'.format(new_group.id), headers=HEADERS,
                      data=json.dumps({'isDefault': True}))

    assert resp.status_code == HTTPStatus.OK
    assert Group.get(new_group.id).is_default


# PUT /groups/{id}
def test_mark_default_group_as_non_default(tables, client, new_group):
    new_group.is_default = True
    new_group.save()

    resp = client.put(ENDPOINT + '/{}'.format(new_group.id), headers=HEADERS,
                      data=json.dumps({'isDefault': False}))

    assert resp.status_code == HTTPStatus.OK
    assert Group.get(new_group.id).is_default is False
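

# The success/404 semantics the tests above exercise can be sketched with a
# tiny in-memory model. FakeGroupStore and its methods are hypothetical
# illustrations, NOT part of TensorHive's API.

```python
from http import HTTPStatus


class FakeGroupStore:
    """Hypothetical in-memory stand-in for the /groups resource."""

    def __init__(self):
        self._groups = {}
        self._next_id = 1

    def create(self, name):
        # POST /groups -> 201 with the created group
        group = {'id': self._next_id, 'name': name, 'isDefault': False}
        self._groups[self._next_id] = group
        self._next_id += 1
        return HTTPStatus.CREATED, group

    def update(self, group_id, **changes):
        # PUT /groups/{id} -> 200 on success, 404 for an unknown id
        group = self._groups.get(group_id)
        if group is None:
            return HTTPStatus.NOT_FOUND, None
        group.update(changes)
        return HTTPStatus.OK, group


store = FakeGroupStore()
status, group = store.create('TestGroup')
assert status == HTTPStatus.CREATED

# Mirrors test_set_group_as_a_default
status, group = store.update(group['id'], isDefault=True)
assert status == HTTPStatus.OK and group['isDefault']

# Mirrors test_update_group_that_doesnt_exist
status, _ = store.update(777)
assert status == HTTPStatus.NOT_FOUND
```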