# Seguranca/Ping/pingmultiplo.py (repo: Luis12368/python, license: MIT)
import os
import time
with open('hosts.txt') as file:
dump = file.read()
dump = dump.splitlines()
for ip in dump:
print("Verificando o IP", ip)
print("-" * 60)
os.system('ping -n 2 {}'.format(ip))
print("-" * 60)
time.sleep(5)
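Note that the `-n` count flag used above is Windows-only; Unix `ping` uses `-c`. A portable sketch of the same loop (the `build_ping_cmd` helper name is mine, not from the original script):

```python
import platform
import subprocess

def build_ping_cmd(ip, count=2, system=None):
    """Build a ping command list for the current (or given) platform."""
    system = system or platform.system()
    # Windows spells the packet-count flag -n; Unix-likes spell it -c.
    flag = '-n' if system == 'Windows' else '-c'
    return ['ping', flag, str(count), ip]

def ping_hosts(path='hosts.txt'):
    """Ping every host listed (one per line) in the given file."""
    with open(path) as f:
        for ip in f.read().splitlines():
            print('Verificando o IP', ip)
            subprocess.call(build_ping_cmd(ip))
```

Using a list argument with `subprocess.call` also avoids the shell-injection risk of interpolating the IP into an `os.system` string.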
# -*- coding: utf-8 -*-
# salt/utils/schema.py (repo: johnskopis/salt, license: Apache-2.0)
'''
:codeauthor: Pedro Algarvio (pedro@algarvio.me)
:codeauthor: Alexandru Bleotu (alexandru.bleotu@morganstanley.com)
salt.utils.schema
~~~~~~~~~~~~~~~~~
Object Oriented Configuration - JSON Schema compatible generator
This code was inspired by `jsl`__, "A Python DSL for describing JSON
schemas".
.. __: https://jsl.readthedocs.io/
A configuration document or configuration document section is defined using
the py:class:`Schema`, the configuration items are defined by any of the
subclasses of py:class:`BaseSchemaItem` as attributes of a subclass of
py:class:`Schema` class.
A more complex configuration document (containing a definitions section)
is defined using the py:class:`DefinitionsSchema`. This type of
schema supports having complex configuration items as attributes (defined
extending the py:class:`ComplexSchemaItem`). These items have other
configuration items (complex or not) as attributes, allowing to verify
more complex JSON data structures
As an example:
.. code-block:: python
class HostConfig(Schema):
title = 'Host Configuration'
description = 'This is the host configuration'
host = StringItem(
'Host',
'The looong host description',
default=None,
minimum=1
)
port = NumberItem(
description='The port number',
default=80,
required=False,
minimum=0,
inclusiveMinimum=False,
maximum=65535
)
The serialized version of the above configuration definition is:
.. code-block:: python
>>> print(HostConfig.serialize())
OrderedDict([
('$schema', 'http://json-schema.org/draft-04/schema#'),
('title', 'Host Configuration'),
('description', 'This is the host configuration'),
('type', 'object'),
('properties', OrderedDict([
('host', {'minimum': 1,
'type': 'string',
'description': 'The looong host description',
'title': 'Host'}),
('port', {'description': 'The port number',
'default': 80,
'inclusiveMinimum': False,
'maximum': 65535,
'minimum': 0,
'type': 'number'})
])),
('required', ['host']),
('x-ordering', ['host', 'port']),
('additionalProperties', False)]
)
>>> print(salt.utils.json.dumps(HostConfig.serialize(), indent=2))
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Host Configuration",
"description": "This is the host configuration",
"type": "object",
"properties": {
"host": {
"minimum": 1,
"type": "string",
"description": "The looong host description",
"title": "Host"
},
"port": {
"description": "The port number",
"default": 80,
"inclusiveMinimum": false,
"maximum": 65535,
"minimum": 0,
"type": "number"
}
},
"required": [
"host"
],
"x-ordering": [
"host",
"port"
],
"additionalProperties": false
}
The serialized version of the configuration block can be used to validate a
configuration dictionary using the `python jsonschema library`__.
.. __: https://pypi.python.org/pypi/jsonschema
.. code-block:: python
>>> import jsonschema
>>> jsonschema.validate({'host': 'localhost', 'port': 80}, HostConfig.serialize())
>>> jsonschema.validate({'host': 'localhost', 'port': -1}, HostConfig.serialize())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 478, in validate
cls(schema, *args, **kwargs).validate(instance)
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 123, in validate
raise error
jsonschema.exceptions.ValidationError: -1 is less than the minimum of 0
Failed validating 'minimum' in schema['properties']['port']:
{'default': 80,
'description': 'The port number',
'inclusiveMinimum': False,
'maximum': 65535,
'minimum': 0,
'type': 'number'}
On instance['port']:
-1
>>>
A configuration document can even be split into configuration sections. Let's reuse the above
``HostConfig`` class and include it in a configuration block:
.. code-block:: python
class LoggingConfig(Schema):
title = 'Logging Configuration'
description = 'This is the logging configuration'
log_level = StringItem(
'Logging Level',
'The logging level',
default='debug',
minimum=1
)
class MyConfig(Schema):
title = 'My Config'
description = 'This my configuration'
hostconfig = HostConfig()
logconfig = LoggingConfig()
The JSON Schema string version of the above is:
.. code-block:: python
>>> print(salt.utils.json.dumps(MyConfig.serialize(), indent=4))
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "My Config",
"description": "This my configuration",
"type": "object",
"properties": {
"hostconfig": {
"id": "https://non-existing.saltstack.com/schemas/hostconfig.json#",
"title": "Host Configuration",
"description": "This is the host configuration",
"type": "object",
"properties": {
"host": {
"minimum": 1,
"type": "string",
"description": "The looong host description",
"title": "Host"
},
"port": {
"description": "The port number",
"default": 80,
"inclusiveMinimum": false,
"maximum": 65535,
"minimum": 0,
"type": "number"
}
},
"required": [
"host"
],
"x-ordering": [
"host",
"port"
],
"additionalProperties": false
},
"logconfig": {
"id": "https://non-existing.saltstack.com/schemas/logconfig.json#",
"title": "Logging Configuration",
"description": "This is the logging configuration",
"type": "object",
"properties": {
"log_level": {
"default": "debug",
"minimum": 1,
"type": "string",
"description": "The logging level",
"title": "Logging Level"
}
},
"required": [
"log_level"
],
"x-ordering": [
"log_level"
],
"additionalProperties": false
}
},
"additionalProperties": false
}
>>> import jsonschema
>>> jsonschema.validate(
{'hostconfig': {'host': 'localhost', 'port': 80},
'logconfig': {'log_level': 'debug'}},
MyConfig.serialize())
>>> jsonschema.validate(
{'hostconfig': {'host': 'localhost', 'port': -1},
'logconfig': {'log_level': 'debug'}},
MyConfig.serialize())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 478, in validate
cls(schema, *args, **kwargs).validate(instance)
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 123, in validate
raise error
jsonschema.exceptions.ValidationError: -1 is less than the minimum of 0
Failed validating 'minimum' in schema['properties']['hostconfig']['properties']['port']:
{'default': 80,
'description': 'The port number',
'inclusiveMinimum': False,
'maximum': 65535,
'minimum': 0,
'type': 'number'}
On instance['hostconfig']['port']:
-1
>>>
If however, you just want to use the configuration blocks for readability
and do not desire the nested dictionaries serialization, you can pass
``flatten=True`` when defining a configuration section as a configuration
subclass attribute:
.. code-block:: python
class MyConfig(Schema):
title = 'My Config'
description = 'This my configuration'
hostconfig = HostConfig(flatten=True)
logconfig = LoggingConfig(flatten=True)
The JSON Schema string version of the above is:
.. code-block:: python
>>> print(salt.utils.json.dumps(MyConfig.serialize(), indent=4))
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "My Config",
"description": "This my configuration",
"type": "object",
"properties": {
"host": {
"minimum": 1,
"type": "string",
"description": "The looong host description",
"title": "Host"
},
"port": {
"description": "The port number",
"default": 80,
"inclusiveMinimum": false,
"maximum": 65535,
"minimum": 0,
"type": "number"
},
"log_level": {
"default": "debug",
"minimum": 1,
"type": "string",
"description": "The logging level",
"title": "Logging Level"
}
},
"x-ordering": [
"host",
"port",
"log_level"
],
"additionalProperties": false
}
'''
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals
import sys
import inspect
import textwrap
import functools
# Import salt libs
import salt.utils.args
#import salt.utils.yaml
from salt.utils.odict import OrderedDict
# Import 3rd-party libs
from salt.ext import six
BASE_SCHEMA_URL = 'https://non-existing.saltstack.com/schemas'
RENDER_COMMENT_YAML_MAX_LINE_LENGTH = 80
class Prepareable(type):
'''
Preserve attributes order for python 2.x
'''
# This code was taken from
# https://github.com/aromanovich/jsl/blob/master/jsl/_compat/prepareable.py
# which in turn was taken from https://gist.github.com/DasIch/5562625 with minor fixes
if not six.PY3:
def __new__(mcs, name, bases, attributes):
try:
constructor = attributes["__new__"]
except KeyError:
return type.__new__(mcs, name, bases, attributes)
def preparing_constructor(mcs, name, bases, attributes):
try:
mcs.__prepare__
except AttributeError:
return constructor(mcs, name, bases, attributes)
namespace = mcs.__prepare__(name, bases)
defining_frame = sys._getframe(1)
for constant in reversed(defining_frame.f_code.co_consts):
if inspect.iscode(constant) and constant.co_name == name:
def get_index(attribute_name, _names=constant.co_names): # pylint: disable=cell-var-from-loop
try:
return _names.index(attribute_name)
except ValueError:
return 0
break
else:
return constructor(mcs, name, bases, attributes)
by_appearance = sorted(
attributes.items(), key=lambda item: get_index(item[0])
)
for key, value in by_appearance:
namespace[key] = value
return constructor(mcs, name, bases, namespace)
attributes["__new__"] = functools.wraps(constructor)(preparing_constructor)
return type.__new__(mcs, name, bases, attributes)
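On Python 3 the frame inspection above is unnecessary: `__prepare__` can return an ordered mapping and the class body is executed into it in definition order. A minimal standalone sketch of that mechanism (`OrderedMeta` and `Example` are illustrative names, not part of this module):

```python
from collections import OrderedDict

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcs, name, bases):
        # The class body is executed into this mapping, so attribute
        # definition order is preserved as insertion order.
        return OrderedDict()

    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, dict(namespace))
        # Record the order in which non-dunder attributes appeared.
        cls._order = [k for k in namespace if not k.startswith('__')]
        return cls

class Example(metaclass=OrderedMeta):
    first = 1
    second = 2
    third = 3
```

This is the same ordering guarantee `SchemaMeta` and `BaseSchemaItemMeta` rely on below when they build `_order`.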
class NullSentinel(object):
'''
A class which instance represents a null value.
Allows specifying fields with a default value of null.
'''
def __bool__(self):
return False
__nonzero__ = __bool__
Null = NullSentinel()
'''
A special value that can be used to set the default value
of a field to null.
'''
# make sure nobody creates another Null value
def _failing_new(*args, **kwargs):
raise TypeError('Can\'t create another NullSentinel instance')
NullSentinel.__new__ = staticmethod(_failing_new)
del _failing_new
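The point of a dedicated sentinel is to distinguish "no default was given" (`None`) from "the default is explicitly null" (`Null`). A reduced sketch of that distinction (`NULL` and `serialize_default` are illustrative stand-ins, not this module's API):

```python
NULL = object()  # stand-in for the module's Null sentinel singleton

def serialize_default(default):
    """Map a configured default onto its serialized form.

    None means "no default configured" and is skipped entirely;
    NULL means "default is explicitly null" and serializes to None.
    """
    if default is None:
        return 'no-default'   # caller would omit the key in this case
    if default is NULL:
        return None           # serialized as JSON null
    return default
```

`BaseSchemaItem.serialize()` below applies exactly this rule when it converts `Null` attribute values to `None` before emitting them.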
class SchemaMeta(six.with_metaclass(Prepareable, type)):
@classmethod
def __prepare__(mcs, name, bases):
return OrderedDict()
def __new__(mcs, name, bases, attrs):
# Mark the instance as a configuration document/section
attrs['__config__'] = True
attrs['__flatten__'] = False
attrs['__config_name__'] = None
# Let's record the configuration items/sections
items = {}
sections = {}
order = []
# items from parent classes
for base in reversed(bases):
if hasattr(base, '_items'):
items.update(base._items)
if hasattr(base, '_sections'):
sections.update(base._sections)
if hasattr(base, '_order'):
order.extend(base._order)
# Iterate through attrs to discover items/config sections
for key, value in six.iteritems(attrs):
entry_name = None
if not hasattr(value, '__item__') and not hasattr(value, '__config__'):
continue
if hasattr(value, '__item__'):
# the value is an item instance
if hasattr(value, 'title') and value.title is None:
# It's an item instance without a title, make the title
# its name
value.title = key
entry_name = value.__item_name__ or key
items[entry_name] = value
if hasattr(value, '__config__'):
entry_name = value.__config_name__ or key
sections[entry_name] = value
order.append(entry_name)
attrs['_order'] = order
attrs['_items'] = items
attrs['_sections'] = sections
return type.__new__(mcs, name, bases, attrs)
def __call__(cls, flatten=False, allow_additional_items=False, **kwargs):
instance = object.__new__(cls)
instance.__config_name__ = kwargs.pop('name', None)
if flatten is True:
# This configuration block is to be treated as a part of the
# configuration for which it was defined as an attribute, not as
# its own sub configuration
instance.__flatten__ = True
if allow_additional_items is True:
# The configuration block will also accept items which are not
# defined on the class; jsonschema validation will not fail on
# additional properties
instance.__allow_additional_items__ = True
instance.__init__(**kwargs)
return instance
class BaseSchemaItemMeta(six.with_metaclass(Prepareable, type)):
'''
Config item metaclass to "tag" the class as a configuration item
'''
@classmethod
def __prepare__(mcs, name, bases):
return OrderedDict()
def __new__(mcs, name, bases, attrs):
# Register the class as an item class
attrs['__item__'] = True
attrs['__item_name__'] = None
# Instantiate an empty list to store the config item attribute names
attributes = []
for base in reversed(bases):
try:
base_attributes = getattr(base, '_attributes', [])
if base_attributes:
attributes.extend(base_attributes)
# Extend the attributes with the base argspec argument names
# but skip "self"
for argname in salt.utils.args.get_function_argspec(base.__init__).args:
if argname == 'self' or argname in attributes:
continue
if argname == 'name':
continue
attributes.append(argname)
except TypeError:
# On the base object type, __init__ is just a wrapper which
# triggers a TypeError when we're trying to find out its
# argspec
continue
attrs['_attributes'] = attributes
return type.__new__(mcs, name, bases, attrs)
def __call__(cls, *args, **kwargs):
# Create the instance class
instance = object.__new__(cls)
if args:
raise RuntimeError(
'Please pass all arguments as named arguments. Un-named '
'arguments are not supported'
)
for key in kwargs.copy():
# Store the kwarg keys as the instance attributes for the
# serialization step
if key == 'name':
# This is the item name to override the class attribute name
instance.__item_name__ = kwargs.pop(key)
continue
if key not in instance._attributes:
instance._attributes.append(key)
# Init the class
instance.__init__(*args, **kwargs)
# Validate the instance after initialization
for base in reversed(inspect.getmro(cls)):
validate_attributes = getattr(base, '__validate_attributes__', None)
if validate_attributes:
if instance.__validate_attributes__.__func__.__code__ is not validate_attributes.__code__:
# The method was overridden, run base.__validate_attributes__ function
base.__validate_attributes__(instance)
# Finally, run the instance __validate_attributes__ function
instance.__validate_attributes__()
# Return the initialized class
return instance
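The pattern in `__call__` above, running a validation hook exactly once after `__init__`, can be sketched standalone (`ValidatedMeta` and `Item` are illustrative names, not this module's classes):

```python
class ValidatedMeta(type):
    def __call__(cls, *args, **kwargs):
        # Create and initialize the instance as usual...
        instance = super().__call__(*args, **kwargs)
        # ...then run the validation hook once, after __init__ finished.
        instance.__validate_attributes__()
        return instance

class Item(metaclass=ValidatedMeta):
    def __init__(self, required=False):
        self.required = required

    def __validate_attributes__(self):
        if self.required not in (True, False):
            raise RuntimeError("'required' can only be True/False")
```

Putting the hook in the metaclass means subclasses cannot forget to validate: every instantiation of any `Item` subclass passes through it.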
class Schema(six.with_metaclass(SchemaMeta, object)):
'''
Configuration definition class
'''
# Define some class level attributes to make PyLint happier
title = None
description = None
_items = _sections = _order = None
__flatten__ = False
__allow_additional_items__ = False
@classmethod
def serialize(cls, id_=None):
# The order matters
serialized = OrderedDict()
if id_ is not None:
# This is meant as a configuration section, sub json schema
serialized['id'] = '{0}/{1}.json#'.format(BASE_SCHEMA_URL, id_)
else:
# Main configuration block, json schema
serialized['$schema'] = 'http://json-schema.org/draft-04/schema#'
if cls.title is not None:
serialized['title'] = cls.title
if cls.description is not None:
if cls.description == cls.__doc__:
serialized['description'] = textwrap.dedent(cls.description).strip()
else:
serialized['description'] = cls.description
required = []
ordering = []
serialized['type'] = 'object'
properties = OrderedDict()
cls.after_items_update = []
for name in cls._order: # pylint: disable=E1133
skip_order = False
item_name = None
if name in cls._sections: # pylint: disable=E1135
section = cls._sections[name]
serialized_section = section.serialize(None if section.__flatten__ is True else name)
if section.__flatten__ is True:
# Flatten the configuration section into the parent
# configuration
properties.update(serialized_section['properties'])
if 'x-ordering' in serialized_section:
ordering.extend(serialized_section['x-ordering'])
if 'required' in serialized_section:
required.extend(serialized_section['required'])
if hasattr(section, 'after_items_update'):
cls.after_items_update.extend(section.after_items_update)
skip_order = True
else:
# Store it as a configuration section
properties[name] = serialized_section
if name in cls._items: # pylint: disable=E1135
config = cls._items[name]
item_name = config.__item_name__ or name
# Handle the configuration items defined in the class instance
if config.__flatten__ is True:
serialized_config = config.serialize()
cls.after_items_update.append(serialized_config)
skip_order = True
else:
properties[item_name] = config.serialize()
if config.required:
# If it's a required item, add it to the required list
required.append(item_name)
if skip_order is False:
# Store the order of the item
if item_name is not None:
if item_name not in ordering:
ordering.append(item_name)
else:
if name not in ordering:
ordering.append(name)
if properties:
serialized['properties'] = properties
# Update the serialized object with any items to include after properties.
# Do not overwrite properties already existing in the serialized dict.
if cls.after_items_update:
after_items_update = {}
for entry in cls.after_items_update:
for name, data in six.iteritems(entry):
if name in after_items_update:
if isinstance(after_items_update[name], list):
after_items_update[name].extend(data)
else:
after_items_update[name] = data
if after_items_update:
after_items_update.update(serialized)
serialized = after_items_update
if required:
# Only include required if not empty
serialized['required'] = required
if ordering:
# Only include ordering if not empty
serialized['x-ordering'] = ordering
serialized['additionalProperties'] = cls.__allow_additional_items__
return serialized
@classmethod
def defaults(cls):
serialized = cls.serialize()
defaults = {}
for name, details in serialized['properties'].items():
if 'default' in details:
defaults[name] = details['default']
continue
if 'properties' in details:
for sname, sdetails in details['properties'].items():
if 'default' in sdetails:
defaults.setdefault(name, {})[sname] = sdetails['default']
continue
return defaults
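The two-level walk in `defaults()` operates on any serialized schema dict, so the same logic can be exercised as a standalone function (`extract_defaults` and the sample input are illustrative, not this module's API):

```python
def extract_defaults(serialized):
    """Collect top-level and one-level-nested 'default' values."""
    defaults = {}
    for name, details in serialized.get('properties', {}).items():
        if 'default' in details:
            defaults[name] = details['default']
            continue
        # Nested sections contribute a dict of their own defaults.
        for sname, sdetails in details.get('properties', {}).items():
            if 'default' in sdetails:
                defaults.setdefault(name, {})[sname] = sdetails['default']
    return defaults
```

Note the walk is intentionally shallow: defaults more than one section deep are not collected, matching the method above.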
@classmethod
def as_requirements_item(cls):
serialized_schema = cls.serialize()
required = serialized_schema.get('required', [])
for name in serialized_schema['properties']:
if name not in required:
required.append(name)
return RequirementsItem(requirements=required)
#@classmethod
#def render_as_rst(cls):
# '''
# Render the configuration block as a restructured text string
# '''
# # TODO: Implement RST rendering
# raise NotImplementedError
#@classmethod
#def render_as_yaml(cls):
# '''
# Render the configuration block as a parseable YAML string including comments
# '''
# # TODO: Implement YAML rendering
# raise NotImplementedError
class SchemaItem(six.with_metaclass(BaseSchemaItemMeta, object)):
'''
Base configuration items class.
All configurations must subclass it
'''
# Define some class level attributes to make PyLint happier
__type__ = None
__format__ = None
_attributes = None
__flatten__ = False
__serialize_attr_aliases__ = None
required = False
def __init__(self, required=None, **extra):
'''
:param required: If the configuration item is required. Defaults to ``False``.
'''
if required is not None:
self.required = required
self.extra = extra
def __validate_attributes__(self):
'''
Run any validation checks you need against the instance attributes.
ATTENTION:
Don't call the parent class when overriding this
method because it will just duplicate the executions. This class's
metaclass will take care of that.
'''
if self.required not in (True, False):
raise RuntimeError(
'\'required\' can only be True/False'
)
def _get_argname_value(self, argname):
'''
Return the argname value looking up on all possible attributes
'''
# Let's see if there's a private function to get the value
argvalue = getattr(self, '__get_{0}__'.format(argname), None)
if argvalue is not None and callable(argvalue):
argvalue = argvalue()
if argvalue is None:
# Let's see if the value is defined as a public class variable
argvalue = getattr(self, argname, None)
if argvalue is None:
# Let's see if it's defined as a private class variable
argvalue = getattr(self, '__{0}__'.format(argname), None)
if argvalue is None:
# Let's look for it in the extra dictionary
argvalue = self.extra.get(argname, None)
return argvalue
def serialize(self):
'''
Return a serializable form of the config instance
'''
raise NotImplementedError
class BaseSchemaItem(SchemaItem):
'''
Base configuration items class.
All configurations must subclass it
'''
# Let's define description as a class attribute, this will allow a custom configuration
# item to do something like:
# class MyCustomConfig(StringItem):
# '''
# This is my custom config, blah, blah, blah
# '''
# description = __doc__
#
description = None
# The same for all other base arguments
title = None
default = None
enum = None
enumNames = None
def __init__(self, title=None, description=None, default=None, enum=None, enumNames=None, **kwargs):
'''
:param required:
If the configuration item is required. Defaults to ``False``.
:param title:
A short explanation about the purpose of the data described by this item.
:param description:
A detailed explanation about the purpose of the data described by this item.
:param default:
The default value for this configuration item. May be :data:`.Null` (a special value
to set the default value to null).
:param enum:
A list(list, tuple, set) of valid choices.
'''
if title is not None:
self.title = title
if description is not None:
self.description = description
if default is not None:
self.default = default
if enum is not None:
self.enum = enum
if enumNames is not None:
self.enumNames = enumNames
super(BaseSchemaItem, self).__init__(**kwargs)
def __validate_attributes__(self):
if self.enum is not None:
if not isinstance(self.enum, (list, tuple, set)):
raise RuntimeError(
'Only the \'list\', \'tuple\' and \'set\' python types can be used '
'to define \'enum\''
)
if not isinstance(self.enum, list):
self.enum = list(self.enum)
if self.enumNames is not None:
if not isinstance(self.enumNames, (list, tuple, set)):
raise RuntimeError(
'Only the \'list\', \'tuple\' and \'set\' python types can be used '
'to define \'enumNames\''
)
if len(self.enum) != len(self.enumNames):
raise RuntimeError(
'The size of \'enumNames\' must match the size of \'enum\''
)
if not isinstance(self.enumNames, list):
self.enumNames = list(self.enumNames)
def serialize(self):
'''
Return a serializable form of the config instance
'''
serialized = {'type': self.__type__}
for argname in self._attributes:
if argname == 'required':
# This is handled elsewhere
continue
argvalue = self._get_argname_value(argname)
if argvalue is not None:
if argvalue is Null:
argvalue = None
# None values are not meant to be included in the
# serialization, since this is not None...
if self.__serialize_attr_aliases__ and argname in self.__serialize_attr_aliases__:
argname = self.__serialize_attr_aliases__[argname]
serialized[argname] = argvalue
return serialized
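The alias step in `serialize()` above maps Python-style attribute names onto their JSON Schema spellings on the way out. A standalone sketch of that renaming (`serialize_attrs` and the alias table are illustrative, not this module's API):

```python
ALIASES = {'min_length': 'minLength', 'max_length': 'maxLength'}

def serialize_attrs(type_name, attrs, aliases=ALIASES):
    """Build a JSON-Schema-style dict, renaming aliased attributes."""
    serialized = {'type': type_name}
    for name, value in attrs.items():
        if value is None:
            continue  # unset attributes are omitted from the output
        serialized[aliases.get(name, name)] = value
    return serialized
```

This is why item classes such as `StringItem` below declare `__serialize_attr_aliases__`: Python identifiers cannot contain the camelCase keywords JSON Schema expects.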
def __get_description__(self):
if self.description is not None:
if self.description == self.__doc__:
return textwrap.dedent(self.description).strip()
return self.description
#def render_as_rst(self, name):
# '''
# Render the configuration item as a restructured text string
# '''
# # TODO: Implement YAML rendering
# raise NotImplementedError
#def render_as_yaml(self, name):
# '''
# Render the configuration item as a parseable YAML string including comments
# '''
# # TODO: Include the item rules in the output, minimum, maximum, etc...
# output = '# ----- '
# output += self.title
# output += ' '
# output += '-' * (RENDER_COMMENT_YAML_MAX_LINE_LENGTH - 7 - len(self.title) - 2)
# output += '>\n'
# if self.description:
# output += '\n'.join(textwrap.wrap(self.description,
# width=RENDER_COMMENT_YAML_MAX_LINE_LENGTH,
# initial_indent='# '))
# output += '\n'
# yamled_default_value = salt.utils.yaml.safe_dump(self.default, default_flow_style=False).split('\n...', 1)[0]
# output += '# Default: {0}\n'.format(yamled_default_value)
# output += '#{0}: {1}\n'.format(name, yamled_default_value)
# output += '# <---- '
# output += self.title
# output += ' '
# output += '-' * (RENDER_COMMENT_YAML_MAX_LINE_LENGTH - 7 - len(self.title) - 1)
# return output + '\n'
class NullItem(BaseSchemaItem):
__type__ = 'null'
class BooleanItem(BaseSchemaItem):
__type__ = 'boolean'
class StringItem(BaseSchemaItem):
'''
A string configuration field
'''
__type__ = 'string'
__serialize_attr_aliases__ = {
'min_length': 'minLength',
'max_length': 'maxLength'
}
format = None
pattern = None
min_length = None
max_length = None
def __init__(self,
format=None, # pylint: disable=redefined-builtin
pattern=None,
min_length=None,
max_length=None,
**kwargs):
'''
:param required:
If the configuration item is required. Defaults to ``False``.
:param title:
A short explanation about the purpose of the data described by this item.
:param description:
A detailed explanation about the purpose of the data described by this item.
:param default:
The default value for this configuration item. May be :data:`.Null` (a special value
to set the default value to null).
:param enum:
A list(list, tuple, set) of valid choices.
:param format:
A semantic format of the string (for example, ``"date-time"``, ``"email"``, or ``"uri"``).
:param pattern:
A regular expression (ECMA 262) that a string value must match.
:param min_length:
The minimum length
:param max_length:
The maximum length
'''
if format is not None: # pylint: disable=redefined-builtin
self.format = format
if pattern is not None:
self.pattern = pattern
if min_length is not None:
self.min_length = min_length
if max_length is not None:
self.max_length = max_length
super(StringItem, self).__init__(**kwargs)
def __validate_attributes__(self):
if self.format is None and self.__format__ is not None:
self.format = self.__format__
class EMailItem(StringItem):
'''
An internet email address, see `RFC 5322, section 3.4.1`__.
.. __: http://tools.ietf.org/html/rfc5322
'''
__format__ = 'email'
class IPv4Item(StringItem):
'''
An IPv4 address configuration field, according to dotted-quad ABNF syntax as defined in
`RFC 2673, section 3.2`__.
.. __: http://tools.ietf.org/html/rfc2673
'''
__format__ = 'ipv4'
class IPv6Item(StringItem):
'''
An IPv6 address configuration field, as defined in `RFC 2373, section 2.2`__.
.. __: http://tools.ietf.org/html/rfc2373
'''
__format__ = 'ipv6'
class HostnameItem(StringItem):
'''
An Internet host name configuration field, see `RFC 1034, section 3.1`__.
.. __: http://tools.ietf.org/html/rfc1034
'''
__format__ = 'hostname'
class DateTimeItem(StringItem):
'''
An ISO 8601 formatted date-time configuration field, as defined by `RFC 3339, section 5.6`__.
.. __: http://tools.ietf.org/html/rfc3339
'''
__format__ = 'date-time'
class UriItem(StringItem):
'''
A universal resource identifier (URI) configuration field, according to `RFC3986`__.
.. __: http://tools.ietf.org/html/rfc3986
'''
__format__ = 'uri'
class SecretItem(StringItem):
'''
A string configuration field containing a secret, for example, passwords, API keys, etc
'''
__format__ = 'secret'
class NumberItem(BaseSchemaItem):
__type__ = 'number'
__serialize_attr_aliases__ = {
'multiple_of': 'multipleOf',
'exclusive_minimum': 'exclusiveMinimum',
'exclusive_maximum': 'exclusiveMaximum',
}
multiple_of = None
minimum = None
exclusive_minimum = None
maximum = None
exclusive_maximum = None
def __init__(self,
multiple_of=None,
minimum=None,
exclusive_minimum=None,
maximum=None,
exclusive_maximum=None,
**kwargs):
'''
:param required:
If the configuration item is required. Defaults to ``False``.
:param title:
A short explanation about the purpose of the data described by this item.
:param description:
A detailed explanation about the purpose of the data described by this item.
:param default:
The default value for this configuration item. May be :data:`.Null` (a special value
to set the default value to null).
:param enum:
A list(list, tuple, set) of valid choices.
:param multiple_of:
A value must be a multiple of this factor.
:param minimum:
The minimum allowed value
:param exclusive_minimum:
Whether a value is allowed to be exactly equal to the minimum
:param maximum:
The maximum allowed value
:param exclusive_maximum:
Whether a value is allowed to be exactly equal to the maximum
'''
if multiple_of is not None:
self.multiple_of = multiple_of
if minimum is not None:
self.minimum = minimum
if exclusive_minimum is not None:
self.exclusive_minimum = exclusive_minimum
if maximum is not None:
self.maximum = maximum
if exclusive_maximum is not None:
self.exclusive_maximum = exclusive_maximum
super(NumberItem, self).__init__(**kwargs)
class IntegerItem(NumberItem):
__type__ = 'integer'
class ArrayItem(BaseSchemaItem):
__type__ = 'array'
__serialize_attr_aliases__ = {
'min_items': 'minItems',
'max_items': 'maxItems',
'unique_items': 'uniqueItems',
'additional_items': 'additionalItems'
}
items = None
min_items = None
max_items = None
unique_items = None
additional_items = None
def __init__(self,
items=None,
min_items=None,
max_items=None,
unique_items=None,
additional_items=None,
**kwargs):
'''
:param required:
If the configuration item is required. Defaults to ``False``.
:param title:
A short explanation about the purpose of the data described by this item.
:param description:
A detailed explanation about the purpose of the data described by this item.
:param default:
The default value for this configuration item. May be :data:`.Null` (a special value
to set the default value to null).
:param enum:
A list(list, tuple, set) of valid choices.
:param items:
Either of the following:
* :class:`BaseSchemaItem` -- all items of the array must match the field schema;
* a list or a tuple of :class:`fields <.BaseSchemaItem>` -- all items of the array must be
valid according to the field schema at the corresponding index (tuple typing);
:param min_items:
Minimum length of the array
:param max_items:
Maximum length of the array
:param unique_items:
Whether all the values in the array must be distinct.
:param additional_items:
If the value of ``items`` is a list or a tuple, and the array length is larger than
the number of fields in ``items``, then the additional items are described
by the :class:`.BaseField` passed using this argument.
:type additional_items: bool or :class:`.BaseSchemaItem`
'''
if items is not None:
self.items = items
if min_items is not None:
self.min_items = min_items
if max_items is not None:
self.max_items = max_items
if unique_items is not None:
self.unique_items = unique_items
if additional_items is not None:
self.additional_items = additional_items
super(ArrayItem, self).__init__(**kwargs)
def __validate_attributes__(self):
if not self.items and not self.additional_items:
raise RuntimeError(
'One of items or additional_items must be passed.'
)
if self.items is not None:
if isinstance(self.items, (list, tuple)):
for item in self.items:
if not isinstance(item, (Schema, SchemaItem)):
raise RuntimeError(
'All items passed in the item argument tuple/list must be '
'a subclass of Schema, SchemaItem or BaseSchemaItem, '
'not {0}'.format(type(item))
)
elif not isinstance(self.items, (Schema, SchemaItem)):
raise RuntimeError(
'The items argument passed must be a subclass of '
'Schema, SchemaItem or BaseSchemaItem, not '
'{0}'.format(type(self.items))
)
def __get_items__(self):
if isinstance(self.items, (Schema, SchemaItem)):
        # This is either a Schema or a SchemaItem, return it in its
        # serialized form
return self.items.serialize()
if isinstance(self.items, (tuple, list)):
items = []
for item in self.items:
items.append(item.serialize())
return items
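The two `items` forms the `ArrayItem` docstring describes can be sketched with plain dicts (the checker below is a hypothetical illustration, not jsonschema's validator): a single schema applies to every element, while a list of schemas applies positionally (tuple typing), optionally rejecting extra elements via `additionalItems`:

```python
# Illustrative "items" forms for an array schema
list_form = {'type': 'array', 'items': {'type': 'integer'}}
tuple_form = {
    'type': 'array',
    'items': [{'type': 'string'}, {'type': 'integer'}],
    'additionalItems': False,  # reject elements beyond the tuple schemas
}

def _matches(value, schema):
    return isinstance(value, {'integer': int, 'string': str}[schema['type']])

def valid_array(arr, schema):
    items = schema['items']
    if isinstance(items, dict):  # single schema: applies to all elements
        return all(_matches(v, items) for v in arr)
    if schema.get('additionalItems') is False and len(arr) > len(items):
        return False             # tuple typing with no extra items allowed
    return all(_matches(v, s) for v, s in zip(arr, items))

print(valid_array([1, 2, 3], list_form))     # True
print(valid_array(['a', 1], tuple_form))     # True
print(valid_array(['a', 1, 2], tuple_form))  # False
```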
class DictItem(BaseSchemaItem):
__type__ = 'object'
__serialize_attr_aliases__ = {
'min_properties': 'minProperties',
'max_properties': 'maxProperties',
'pattern_properties': 'patternProperties',
'additional_properties': 'additionalProperties'
}
properties = None
pattern_properties = None
additional_properties = None
min_properties = None
max_properties = None
def __init__(self,
properties=None,
pattern_properties=None,
additional_properties=None,
min_properties=None,
max_properties=None,
**kwargs):
'''
:param required:
If the configuration item is required. Defaults to ``False``.
:type required:
boolean
:param title:
A short explanation about the purpose of the data described by this item.
:type title:
str
:param description:
A detailed explanation about the purpose of the data described by this item.
:param default:
The default value for this configuration item. May be :data:`.Null` (a special value
to set the default value to null).
:param enum:
A list(list, tuple, set) of valid choices.
:param properties:
A dictionary containing fields
:param pattern_properties:
A dictionary whose keys are regular expressions (ECMA 262).
Properties match against these regular expressions, and for any that match,
the property is described by the corresponding field schema.
:type pattern_properties: dict[str -> :class:`.Schema` or
:class:`.SchemaItem` or :class:`.BaseSchemaItem`]
:param additional_properties:
Describes properties that are not described by the ``properties`` or ``pattern_properties``.
:type additional_properties: bool or :class:`.Schema` or :class:`.SchemaItem`
or :class:`.BaseSchemaItem`
:param min_properties:
A minimum number of properties.
:type min_properties: int
:param max_properties:
A maximum number of properties
:type max_properties: int
'''
if properties is not None:
self.properties = properties
if pattern_properties is not None:
self.pattern_properties = pattern_properties
if additional_properties is not None:
self.additional_properties = additional_properties
if min_properties is not None:
self.min_properties = min_properties
if max_properties is not None:
self.max_properties = max_properties
super(DictItem, self).__init__(**kwargs)
def __validate_attributes__(self):
if not self.properties and not self.pattern_properties and not self.additional_properties:
raise RuntimeError(
'One of properties, pattern_properties or additional_properties must be passed'
)
if self.properties is not None:
if not isinstance(self.properties, (Schema, dict)):
raise RuntimeError(
                'The passed properties must be passed as a dict or '
                'a Schema, not \'{0}\''.format(type(self.properties))
)
if not isinstance(self.properties, Schema):
for key, prop in self.properties.items():
if not isinstance(prop, (Schema, SchemaItem)):
raise RuntimeError(
                        'The passed property whose key is \'{0}\' must be of type '
'Schema, SchemaItem or BaseSchemaItem, not '
'\'{1}\''.format(key, type(prop))
)
if self.pattern_properties is not None:
if not isinstance(self.pattern_properties, dict):
raise RuntimeError(
'The passed pattern_properties must be passed as a dict '
'not \'{0}\''.format(type(self.pattern_properties))
)
for key, prop in self.pattern_properties.items():
if not isinstance(prop, (Schema, SchemaItem)):
raise RuntimeError(
                        'The passed pattern_property whose key is \'{0}\' must '
'be of type Schema, SchemaItem or BaseSchemaItem, '
'not \'{1}\''.format(key, type(prop))
)
if self.additional_properties is not None:
if not isinstance(self.additional_properties, (bool, Schema, SchemaItem)):
raise RuntimeError(
'The passed additional_properties must be of type bool, '
'Schema, SchemaItem or BaseSchemaItem, not \'{0}\''.format(
                        type(self.additional_properties)
)
)
def __get_properties__(self):
if self.properties is None:
return
if isinstance(self.properties, Schema):
return self.properties.serialize()['properties']
properties = OrderedDict()
for key, prop in self.properties.items():
properties[key] = prop.serialize()
return properties
def __get_pattern_properties__(self):
if self.pattern_properties is None:
return
pattern_properties = OrderedDict()
for key, prop in self.pattern_properties.items():
pattern_properties[key] = prop.serialize()
return pattern_properties
def __get_additional_properties__(self):
if self.additional_properties is None:
return
if isinstance(self.additional_properties, bool):
return self.additional_properties
return self.additional_properties.serialize()
def __call__(self, flatten=False):
self.__flatten__ = flatten
return self
def serialize(self):
result = super(DictItem, self).serialize()
required = []
if self.properties is not None:
if isinstance(self.properties, Schema):
serialized = self.properties.serialize()
if 'required' in serialized:
required.extend(serialized['required'])
else:
for key, prop in self.properties.items():
if prop.required:
required.append(key)
if required:
result['required'] = required
return result
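The `required` collection performed by `DictItem.serialize()` above can be restated with plain dicts; the `_required` marker below is purely illustrative (salt keeps the flag on the item object, not in the serialized schema):

```python
# Properties with an illustrative per-property required flag
properties = {
    'host': {'type': 'string', '_required': True},
    'port': {'type': 'integer', '_required': False},
}
# Serialize: strip the marker, then collect required property names
serialized = {
    'type': 'object',
    'properties': {
        name: {k: v for k, v in prop.items() if k != '_required'}
        for name, prop in properties.items()
    },
}
required = [name for name, prop in properties.items() if prop['_required']]
if required:
    serialized['required'] = required
print(serialized['required'])  # ['host']
```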
class RequirementsItem(SchemaItem):
__type__ = 'object'
requirements = None
def __init__(self, requirements=None):
if requirements is not None:
self.requirements = requirements
super(RequirementsItem, self).__init__()
def __validate_attributes__(self):
if self.requirements is None:
raise RuntimeError(
'The passed requirements must not be empty'
)
if not isinstance(self.requirements, (SchemaItem, list, tuple, set)):
raise RuntimeError(
                'The passed requirements must be passed as a list, tuple, '
                'set, SchemaItem or BaseSchemaItem, not \'{0}\''.format(type(self.requirements))
)
if not isinstance(self.requirements, SchemaItem):
if not isinstance(self.requirements, list):
self.requirements = list(self.requirements)
for idx, item in enumerate(self.requirements):
if not isinstance(item, (six.string_types, SchemaItem)):
raise RuntimeError(
'The passed requirement at the {0} index must be of type '
'str or SchemaItem, not \'{1}\''.format(idx, type(item))
)
def serialize(self):
if isinstance(self.requirements, SchemaItem):
requirements = self.requirements.serialize()
else:
requirements = []
for requirement in self.requirements:
if isinstance(requirement, SchemaItem):
requirements.append(requirement.serialize())
continue
requirements.append(requirement)
return {'required': requirements}
class OneOfItem(SchemaItem):
__type__ = 'oneOf'
items = None
def __init__(self, items=None, required=None):
if items is not None:
self.items = items
super(OneOfItem, self).__init__(required=required)
def __validate_attributes__(self):
if not self.items:
raise RuntimeError(
'The passed items must not be empty'
)
if not isinstance(self.items, (list, tuple)):
raise RuntimeError(
'The passed items must be passed as a list/tuple not '
'\'{0}\''.format(type(self.items))
)
for idx, item in enumerate(self.items):
if not isinstance(item, (Schema, SchemaItem)):
raise RuntimeError(
'The passed item at the {0} index must be of type '
'Schema, SchemaItem or BaseSchemaItem, not '
'\'{1}\''.format(idx, type(item))
)
if not isinstance(self.items, list):
self.items = list(self.items)
def __call__(self, flatten=False):
self.__flatten__ = flatten
return self
def serialize(self):
return {self.__type__: [i.serialize() for i in self.items]}
class AnyOfItem(OneOfItem):
__type__ = 'anyOf'
class AllOfItem(OneOfItem):
__type__ = 'allOf'
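The three combinators these classes serialize to differ only in how many subschemas must match: `oneOf` requires exactly one, `anyOf` at least one, `allOf` every one. A hedged sketch with plain dicts (not jsonschema's validator):

```python
def count_matches(value, subschemas):
    '''Count how many (type-only, illustrative) subschemas accept value.'''
    type_map = {'integer': int, 'string': str}
    return sum(isinstance(value, type_map[s['type']]) for s in subschemas)

subs = [{'type': 'integer'}, {'type': 'string'}]
one_of = count_matches(5, subs) == 1          # exactly one match
any_of = count_matches(5, subs) >= 1          # at least one match
all_of = count_matches(5, subs) == len(subs)  # every subschema matches
print(one_of, any_of, all_of)  # True True False
```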
class NotItem(SchemaItem):
__type__ = 'not'
item = None
def __init__(self, item=None):
if item is not None:
self.item = item
super(NotItem, self).__init__()
def __validate_attributes__(self):
if not self.item:
raise RuntimeError(
'An item must be passed'
)
if not isinstance(self.item, (Schema, SchemaItem)):
raise RuntimeError(
                'The passed item must be of type Schema, SchemaItem or '
                'BaseSchemaItem, not \'{0}\''.format(type(self.item))
)
def serialize(self):
return {self.__type__: self.item.serialize()}
# ----- Custom Preconfigured Configs -------------------------------------------------------------------------------->
class PortItem(IntegerItem):
minimum = 0 # yes, 0 is a valid port number
maximum = 65535
# <---- Custom Preconfigured Configs ---------------------------------------------------------------------------------
class ComplexSchemaItem(BaseSchemaItem):
'''
.. versionadded:: 2016.11.0
Complex Schema Item
'''
# This attribute is populated by the metaclass, but pylint fails to see it
# and assumes it's not an iterable
_attributes = []
_definition_name = None
def __init__(self, definition_name=None, required=None):
super(ComplexSchemaItem, self).__init__(required=required)
self.__type__ = 'object'
self._definition_name = definition_name if definition_name else \
self.__class__.__name__
        # Schema attributes might have been added as class attributes, so
        # they must be added to the _attributes list as well
self._add_missing_schema_attributes()
def _add_missing_schema_attributes(self):
'''
Adds any missed schema attributes to the _attributes list
The attributes can be class attributes and they won't be
included in the _attributes list automatically
'''
for attr in [attr for attr in dir(self) if not attr.startswith('__')]:
attr_val = getattr(self, attr)
if isinstance(getattr(self, attr), SchemaItem) and \
attr not in self._attributes:
self._attributes.append(attr)
@property
def definition_name(self):
return self._definition_name
def serialize(self):
'''
The serialization of the complex item is a pointer to the item
definition
'''
return {'$ref': '#/definitions/{0}'.format(self.definition_name)}
def get_definition(self):
'''Returns the definition of the complex item'''
serialized = super(ComplexSchemaItem, self).serialize()
# Adjust entries in the serialization
del serialized['definition_name']
serialized['title'] = self.definition_name
properties = {}
required_attr_names = []
for attr_name in self._attributes:
attr = getattr(self, attr_name)
if attr and isinstance(attr, BaseSchemaItem):
# Remove the attribute entry added by the base serialization
del serialized[attr_name]
properties[attr_name] = attr.serialize()
properties[attr_name]['type'] = attr.__type__
if attr.required:
required_attr_names.append(attr_name)
if serialized.get('properties') is None:
serialized['properties'] = {}
serialized['properties'].update(properties)
# Assign the required array
if required_attr_names:
serialized['required'] = required_attr_names
return serialized
def get_complex_attrs(self):
'''Returns a dictionary of the complex attributes'''
return [getattr(self, attr_name) for attr_name in self._attributes if
isinstance(getattr(self, attr_name), ComplexSchemaItem)]
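The pointer-plus-definition pattern used by `ComplexSchemaItem` (serialize a `$ref`, keep the full object schema under `definitions`) can be sketched with hypothetical names; resolving the pointer by hand shows what consumers of the schema do:

```python
definition_name = 'HostConfig'  # hypothetical example name
ref = {'$ref': '#/definitions/{0}'.format(definition_name)}
schema = {
    'definitions': {
        definition_name: {
            'title': definition_name,
            'type': 'object',
            'properties': {'host': {'type': 'string'}},
        }
    },
    'properties': {'server': ref},  # only the pointer appears inline
}
# Resolve the JSON Pointer fragment manually
target = schema['definitions'][ref['$ref'].split('/')[-1]]
print(target['title'])  # HostConfig
```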
class DefinitionsSchema(Schema):
'''
.. versionadded:: 2016.11.0
JSON schema class that supports ComplexSchemaItem objects by adding
a definitions section to the JSON schema, containing the item definitions.
All references to ComplexSchemaItems are built using schema inline
dereferencing.
'''
@classmethod
def serialize(cls, id_=None):
# Get the initial serialization
serialized = super(DefinitionsSchema, cls).serialize(id_)
complex_items = []
# Augment the serializations with the definitions of all complex items
aux_items = cls._items.values()
# Convert dict_view object to a list on Python 3
if six.PY3:
aux_items = list(aux_items)
while aux_items:
item = aux_items.pop(0)
# Add complex attributes
if isinstance(item, ComplexSchemaItem):
complex_items.append(item)
aux_items.extend(item.get_complex_attrs())
# Handle container items
if isinstance(item, OneOfItem):
aux_items.extend(item.items)
elif isinstance(item, ArrayItem):
aux_items.append(item.items)
elif isinstance(item, DictItem):
if item.properties:
aux_items.extend(item.properties.values())
if item.additional_properties and \
isinstance(item.additional_properties, SchemaItem):
aux_items.append(item.additional_properties)
definitions = OrderedDict()
for config in complex_items:
if isinstance(config, ComplexSchemaItem):
definitions[config.definition_name] = \
config.get_definition()
serialized['definitions'] = definitions
return serialized
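The traversal in `DefinitionsSchema.serialize()` is a worklist walk: items are popped from the front, complex ones are recorded, and any nested items they contain are pushed back for later inspection. Restated with plain data and hypothetical field names:

```python
# Illustrative item tree: 'complex' items carry nested children
items = [
    {'name': 'a', 'complex': True,
     'children': [{'name': 'b', 'complex': True, 'children': []}]},
    {'name': 'c', 'complex': False, 'children': []},
]
found = []
queue = list(items)          # the worklist
while queue:
    item = queue.pop(0)
    if item['complex']:
        found.append(item['name'])
        queue.extend(item['children'])  # visit nested items later
print(found)  # ['a', 'b']
```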
| 36.466373 | 122 | 0.561688 |
from __future__ import absolute_import, print_function, unicode_literals
import sys
import inspect
import textwrap
import functools
import salt.utils.args
from salt.utils.odict import OrderedDict
from salt.ext import six
BASE_SCHEMA_URL = 'https://non-existing.saltstack.com/schemas'
RENDER_COMMENT_YAML_MAX_LINE_LENGTH = 80
class Prepareable(type):
if not six.PY3:
def __new__(mcs, name, bases, attributes):
try:
constructor = attributes["__new__"]
except KeyError:
return type.__new__(mcs, name, bases, attributes)
def preparing_constructor(mcs, name, bases, attributes):
try:
mcs.__prepare__
except AttributeError:
return constructor(mcs, name, bases, attributes)
namespace = mcs.__prepare__(name, bases)
defining_frame = sys._getframe(1)
for constant in reversed(defining_frame.f_code.co_consts):
if inspect.iscode(constant) and constant.co_name == name:
def get_index(attribute_name, _names=constant.co_names):
try:
return _names.index(attribute_name)
except ValueError:
return 0
break
else:
return constructor(mcs, name, bases, attributes)
by_appearance = sorted(
attributes.items(), key=lambda item: get_index(item[0])
)
for key, value in by_appearance:
namespace[key] = value
return constructor(mcs, name, bases, namespace)
attributes["__new__"] = functools.wraps(constructor)(preparing_constructor)
return type.__new__(mcs, name, bases, attributes)
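On Python 3, the machinery above is unnecessary: `__prepare__` alone lets a metaclass observe the order in which class attributes were defined. A minimal standalone sketch (hypothetical names, Python 3 syntax):

```python
from collections import OrderedDict

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcs, name, bases):
        # The returned mapping collects the class body in definition order
        return OrderedDict()
    def __new__(mcs, name, bases, namespace):
        cls = type.__new__(mcs, name, bases, dict(namespace))
        cls._order = [k for k in namespace if not k.startswith('__')]
        return cls

class Config(metaclass=OrderedMeta):
    beta = 1
    alpha = 2

print(Config._order)  # ['beta', 'alpha']
```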
class NullSentinel(object):
def __bool__(self):
return False
__nonzero__ = __bool__
Null = NullSentinel()
def _failing_new(*args, **kwargs):
raise TypeError('Can\'t create another NullSentinel instance')
NullSentinel.__new__ = staticmethod(_failing_new)
del _failing_new
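The sentinel idiom above gives a value that is falsy yet distinct from `None`, so `default=Null` can mean "serialize an explicit null" rather than "no default was given". A self-contained restatement:

```python
class _Sentinel(object):
    def __bool__(self):
        return False
    __nonzero__ = __bool__  # Python 2 spelling of __bool__

NULL = _Sentinel()
# Falsy, but distinguishable from None via an identity check
print(bool(NULL), NULL is None)  # False False
```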
class SchemaMeta(six.with_metaclass(Prepareable, type)):
@classmethod
def __prepare__(mcs, name, bases):
return OrderedDict()
def __new__(mcs, name, bases, attrs):
# Mark the instance as a configuration document/section
attrs['__config__'] = True
attrs['__flatten__'] = False
attrs['__config_name__'] = None
# Let's record the configuration items/sections
items = {}
sections = {}
order = []
for base in reversed(bases):
if hasattr(base, '_items'):
items.update(base._items)
if hasattr(base, '_sections'):
sections.update(base._sections)
if hasattr(base, '_order'):
order.extend(base._order)
for key, value in six.iteritems(attrs):
entry_name = None
if not hasattr(value, '__item__') and not hasattr(value, '__config__'):
continue
if hasattr(value, '__item__'):
if hasattr(value, 'title') and value.title is None:
                    # The item has no explicit title; default it to its
                    # attribute name
value.title = key
entry_name = value.__item_name__ or key
items[entry_name] = value
if hasattr(value, '__config__'):
entry_name = value.__config_name__ or key
sections[entry_name] = value
order.append(entry_name)
attrs['_order'] = order
attrs['_items'] = items
attrs['_sections'] = sections
return type.__new__(mcs, name, bases, attrs)
def __call__(cls, flatten=False, allow_additional_items=False, **kwargs):
instance = object.__new__(cls)
instance.__config_name__ = kwargs.pop('name', None)
if flatten is True:
instance.__flatten__ = True
if allow_additional_items is True:
# The configuration block only accepts the configuration items
# which are defined on the class. On additional items, validation
# with jsonschema will fail
instance.__allow_additional_items__ = True
instance.__init__(**kwargs)
return instance
class BaseSchemaItemMeta(six.with_metaclass(Prepareable, type)):
@classmethod
def __prepare__(mcs, name, bases):
return OrderedDict()
def __new__(mcs, name, bases, attrs):
# Register the class as an item class
attrs['__item__'] = True
attrs['__item_name__'] = None
# Instantiate an empty list to store the config item attribute names
attributes = []
for base in reversed(bases):
try:
base_attributes = getattr(base, '_attributes', [])
if base_attributes:
attributes.extend(base_attributes)
# Extend the attributes with the base argspec argument names
# but skip "self"
for argname in salt.utils.args.get_function_argspec(base.__init__).args:
if argname == 'self' or argname in attributes:
continue
if argname == 'name':
continue
attributes.append(argname)
except TypeError:
                # On the base object type, __init__ is just a wrapper which
                # triggers a TypeError when we're trying to find out its
                # argspec
continue
attrs['_attributes'] = attributes
return type.__new__(mcs, name, bases, attrs)
def __call__(cls, *args, **kwargs):
# Create the instance class
instance = object.__new__(cls)
if args:
raise RuntimeError(
'Please pass all arguments as named arguments. Un-named '
'arguments are not supported'
)
for key in kwargs.copy():
# Store the kwarg keys as the instance attributes for the
# serialization step
if key == 'name':
# This is the item name to override the class attribute name
instance.__item_name__ = kwargs.pop(key)
continue
if key not in instance._attributes:
instance._attributes.append(key)
# Init the class
instance.__init__(*args, **kwargs)
# Validate the instance after initialization
for base in reversed(inspect.getmro(cls)):
validate_attributes = getattr(base, '__validate_attributes__', None)
if validate_attributes:
if instance.__validate_attributes__.__func__.__code__ is not validate_attributes.__code__:
# The method was overridden, run base.__validate_attributes__ function
base.__validate_attributes__(instance)
# Finally, run the instance __validate_attributes__ function
instance.__validate_attributes__()
# Return the initialized class
return instance
class Schema(six.with_metaclass(SchemaMeta, object)):
# Define some class level attributes to make PyLint happier
title = None
description = None
_items = _sections = _order = None
__flatten__ = False
__allow_additional_items__ = False
@classmethod
def serialize(cls, id_=None):
# The order matters
serialized = OrderedDict()
if id_ is not None:
# This is meant as a configuration section, sub json schema
            serialized['id'] = '{0}/{1}.json'.format(BASE_SCHEMA_URL, id_)
else:
# Main configuration block, json schema
            serialized['$schema'] = 'http://json-schema.org/draft-04/schema#'
if cls.title is not None:
serialized['title'] = cls.title
if cls.description is not None:
if cls.description == cls.__doc__:
serialized['description'] = textwrap.dedent(cls.description).strip()
else:
serialized['description'] = cls.description
required = []
ordering = []
serialized['type'] = 'object'
properties = OrderedDict()
cls.after_items_update = []
for name in cls._order: # pylint: disable=E1133
skip_order = False
item_name = None
if name in cls._sections: # pylint: disable=E1135
section = cls._sections[name]
serialized_section = section.serialize(None if section.__flatten__ is True else name)
if section.__flatten__ is True:
# Flatten the configuration section into the parent
# configuration
properties.update(serialized_section['properties'])
if 'x-ordering' in serialized_section:
ordering.extend(serialized_section['x-ordering'])
if 'required' in serialized_section:
required.extend(serialized_section['required'])
if hasattr(section, 'after_items_update'):
cls.after_items_update.extend(section.after_items_update)
skip_order = True
else:
# Store it as a configuration section
properties[name] = serialized_section
if name in cls._items: # pylint: disable=E1135
config = cls._items[name]
item_name = config.__item_name__ or name
# Handle the configuration items defined in the class instance
if config.__flatten__ is True:
serialized_config = config.serialize()
cls.after_items_update.append(serialized_config)
skip_order = True
else:
properties[item_name] = config.serialize()
if config.required:
# If it's a required item, add it to the required list
required.append(item_name)
if skip_order is False:
if item_name is not None:
if item_name not in ordering:
ordering.append(item_name)
else:
if name not in ordering:
ordering.append(name)
if properties:
serialized['properties'] = properties
if cls.after_items_update:
after_items_update = {}
for entry in cls.after_items_update:
for name, data in six.iteritems(entry):
if name in after_items_update:
if isinstance(after_items_update[name], list):
after_items_update[name].extend(data)
else:
after_items_update[name] = data
if after_items_update:
after_items_update.update(serialized)
serialized = after_items_update
if required:
serialized['required'] = required
if ordering:
serialized['x-ordering'] = ordering
serialized['additionalProperties'] = cls.__allow_additional_items__
return serialized
@classmethod
def defaults(cls):
serialized = cls.serialize()
defaults = {}
for name, details in serialized['properties'].items():
if 'default' in details:
defaults[name] = details['default']
continue
if 'properties' in details:
for sname, sdetails in details['properties'].items():
if 'default' in sdetails:
defaults.setdefault(name, {})[sname] = sdetails['default']
continue
return defaults
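What `defaults()` computes from a serialized schema can be sketched with a hand-written schema dict: top-level `default` values are collected directly, and one level of nested section defaults is gathered under the section name:

```python
# Hypothetical serialized schema with a top-level default and a nested one
serialized = {
    'properties': {
        'timeout': {'type': 'number', 'default': 30},
        'logging': {'properties': {
            'level': {'type': 'string', 'default': 'info'},
        }},
    }
}
defaults = {}
for name, details in serialized['properties'].items():
    if 'default' in details:
        defaults[name] = details['default']
    elif 'properties' in details:  # a nested configuration section
        for sname, sdetails in details['properties'].items():
            if 'default' in sdetails:
                defaults.setdefault(name, {})[sname] = sdetails['default']
print(defaults)  # {'timeout': 30, 'logging': {'level': 'info'}}
```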
@classmethod
def as_requirements_item(cls):
serialized_schema = cls.serialize()
required = serialized_schema.get('required', [])
for name in serialized_schema['properties']:
if name not in required:
required.append(name)
return RequirementsItem(requirements=required)
    #@classmethod
    #def render_as_rst(cls):
    #    '''
    #    Render the configuration block as a restructured text string
    #    '''
    #    # TODO: Implement RST rendering
    #    raise NotImplementedError
    #@classmethod
    #def render_as_yaml(cls):
    #    '''
    #    Render the configuration block as a parseable YAML string including comments
    #    '''
    #    # TODO: Implement YAML rendering
    #    raise NotImplementedError
class SchemaItem(six.with_metaclass(BaseSchemaItemMeta, object)):
__type__ = None
__format__ = None
_attributes = None
__flatten__ = False
__serialize_attr_aliases__ = None
required = False
def __init__(self, required=None, **extra):
if required is not None:
self.required = required
self.extra = extra
def __validate_attributes__(self):
if self.required not in (True, False):
raise RuntimeError(
'\'required\' can only be True/False'
)
def _get_argname_value(self, argname):
argvalue = getattr(self, '__get_{0}__'.format(argname), None)
if argvalue is not None and callable(argvalue):
argvalue = argvalue()
if argvalue is None:
argvalue = getattr(self, argname, None)
if argvalue is None:
# Let's see if it's defined as a private class variable
argvalue = getattr(self, '__{0}__'.format(argname), None)
if argvalue is None:
# Let's look for it in the extra dictionary
argvalue = self.extra.get(argname, None)
return argvalue
def serialize(self):
raise NotImplementedError
class BaseSchemaItem(SchemaItem):
    # Define description as a class attribute, which allows a custom configuration
    # item to do something like:
    #   class MyCustomConfig(StringItem):
    #       '''
    #       This is my custom config, blah, blah, blah
    #       '''
    #       description = __doc__
    #
description = None
# The same for all other base arguments
title = None
default = None
enum = None
enumNames = None
def __init__(self, title=None, description=None, default=None, enum=None, enumNames=None, **kwargs):
if title is not None:
self.title = title
if description is not None:
self.description = description
if default is not None:
self.default = default
if enum is not None:
self.enum = enum
if enumNames is not None:
self.enumNames = enumNames
super(BaseSchemaItem, self).__init__(**kwargs)
def __validate_attributes__(self):
if self.enum is not None:
if not isinstance(self.enum, (list, tuple, set)):
raise RuntimeError(
'Only the \'list\', \'tuple\' and \'set\' python types can be used '
'to define \'enum\''
)
if not isinstance(self.enum, list):
self.enum = list(self.enum)
if self.enumNames is not None:
if not isinstance(self.enumNames, (list, tuple, set)):
raise RuntimeError(
'Only the \'list\', \'tuple\' and \'set\' python types can be used '
'to define \'enumNames\''
)
if len(self.enum) != len(self.enumNames):
raise RuntimeError(
'The size of \'enumNames\' must match the size of \'enum\''
)
if not isinstance(self.enumNames, list):
self.enumNames = list(self.enumNames)
def serialize(self):
serialized = {'type': self.__type__}
for argname in self._attributes:
if argname == 'required':
# This is handled elsewhere
continue
argvalue = self._get_argname_value(argname)
if argvalue is not None:
if argvalue is Null:
argvalue = None
# None values are not meant to be included in the
# serialization, since this is not None...
if self.__serialize_attr_aliases__ and argname in self.__serialize_attr_aliases__:
argname = self.__serialize_attr_aliases__[argname]
serialized[argname] = argvalue
return serialized
def __get_description__(self):
if self.description is not None:
if self.description == self.__doc__:
return textwrap.dedent(self.description).strip()
return self.description
#def render_as_rst(self, name):
# '''
# Render the configuration item as a restructured text string
# '''
# # TODO: Implement YAML rendering
# raise NotImplementedError
#def render_as_yaml(self, name):
# '''
# Render the configuration item as a parseable YAML string including comments
# '''
# # TODO: Include the item rules in the output, minimum, maximum, etc...
    #    output = '# ----- '
    #    output += self.title
    #    output += ' '
    #    output += '-' * (RENDER_COMMENT_YAML_MAX_LINE_LENGTH - 7 - len(self.title) - 2)
    #    output += '>\n'
    #    if self.description:
    #        output += '\n'.join(textwrap.wrap(self.description,
    #                                          width=RENDER_COMMENT_YAML_MAX_LINE_LENGTH,
    #                                          initial_indent='# '))
    #    output += '\n'
    #    yamled_default_value = salt.utils.yaml.safe_dump(self.default, default_flow_style=False).split('\n...', 1)[0]
    #    output += '# Default: {0}\n'.format(yamled_default_value)
    #    output += '#{0}: {1}\n'.format(name, yamled_default_value)
    #    output += '# <---- '
    #    output += self.title
    #    output += ' '
    #    output += '-' * (RENDER_COMMENT_YAML_MAX_LINE_LENGTH - 7 - len(self.title) - 1)
    #    return output + '\n'
class NullItem(BaseSchemaItem):
__type__ = 'null'
class BooleanItem(BaseSchemaItem):
__type__ = 'boolean'
class StringItem(BaseSchemaItem):
__type__ = 'string'
__serialize_attr_aliases__ = {
'min_length': 'minLength',
'max_length': 'maxLength'
}
format = None
pattern = None
min_length = None
max_length = None
def __init__(self,
format=None, # pylint: disable=redefined-builtin
pattern=None,
min_length=None,
max_length=None,
**kwargs):
if format is not None: # pylint: disable=redefined-builtin
self.format = format
if pattern is not None:
self.pattern = pattern
if min_length is not None:
self.min_length = min_length
if max_length is not None:
self.max_length = max_length
super(StringItem, self).__init__(**kwargs)
def __validate_attributes__(self):
if self.format is None and self.__format__ is not None:
self.format = self.__format__
class EMailItem(StringItem):
__format__ = 'email'
class IPv4Item(StringItem):
__format__ = 'ipv4'
class IPv6Item(StringItem):
__format__ = 'ipv6'
class HostnameItem(StringItem):
__format__ = 'hostname'
class DateTimeItem(StringItem):
__format__ = 'date-time'
class UriItem(StringItem):
__format__ = 'uri'
class SecretItem(StringItem):
__format__ = 'secret'
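The fallback performed by `StringItem.__validate_attributes__` (an explicit `format` argument wins, otherwise the class-level `__format__` is used) can be sketched independently of salt with hypothetical names:

```python
class StringSketch(object):
    fmt_default = None  # stand-in for the class-level __format__
    def __init__(self, fmt=None):
        # Explicit argument wins; otherwise fall back to the class default
        self.fmt = fmt if fmt is not None else self.fmt_default

class IPv4Sketch(StringSketch):
    fmt_default = 'ipv4'

print(IPv4Sketch().fmt)                # ipv4
print(IPv4Sketch(fmt='hostname').fmt)  # hostname
```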
class NumberItem(BaseSchemaItem):
__type__ = 'number'
__serialize_attr_aliases__ = {
'multiple_of': 'multipleOf',
'exclusive_minimum': 'exclusiveMinimum',
'exclusive_maximum': 'exclusiveMaximum',
}
multiple_of = None
minimum = None
exclusive_minimum = None
maximum = None
exclusive_maximum = None
def __init__(self,
multiple_of=None,
minimum=None,
exclusive_minimum=None,
maximum=None,
exclusive_maximum=None,
**kwargs):
if multiple_of is not None:
self.multiple_of = multiple_of
if minimum is not None:
self.minimum = minimum
if exclusive_minimum is not None:
self.exclusive_minimum = exclusive_minimum
if maximum is not None:
self.maximum = maximum
if exclusive_maximum is not None:
self.exclusive_maximum = exclusive_maximum
super(NumberItem, self).__init__(**kwargs)
class IntegerItem(NumberItem):
__type__ = 'integer'
class ArrayItem(BaseSchemaItem):
__type__ = 'array'
__serialize_attr_aliases__ = {
'min_items': 'minItems',
'max_items': 'maxItems',
'unique_items': 'uniqueItems',
'additional_items': 'additionalItems'
}
items = None
min_items = None
max_items = None
unique_items = None
additional_items = None
def __init__(self,
items=None,
min_items=None,
max_items=None,
unique_items=None,
additional_items=None,
**kwargs):
if items is not None:
self.items = items
if min_items is not None:
self.min_items = min_items
if max_items is not None:
self.max_items = max_items
if unique_items is not None:
self.unique_items = unique_items
if additional_items is not None:
self.additional_items = additional_items
super(ArrayItem, self).__init__(**kwargs)
def __validate_attributes__(self):
if not self.items and not self.additional_items:
raise RuntimeError(
'One of items or additional_items must be passed.'
)
if self.items is not None:
if isinstance(self.items, (list, tuple)):
for item in self.items:
if not isinstance(item, (Schema, SchemaItem)):
raise RuntimeError(
'All items passed in the item argument tuple/list must be '
'a subclass of Schema, SchemaItem or BaseSchemaItem, '
'not {0}'.format(type(item))
)
elif not isinstance(self.items, (Schema, SchemaItem)):
raise RuntimeError(
'The items argument passed must be a subclass of '
'Schema, SchemaItem or BaseSchemaItem, not '
'{0}'.format(type(self.items))
)
def __get_items__(self):
if isinstance(self.items, (Schema, SchemaItem)):
# This is either a Schema or a Basetem, return it in it's
return self.items.serialize()
if isinstance(self.items, (tuple, list)):
items = []
for item in self.items:
items.append(item.serialize())
return items
class DictItem(BaseSchemaItem):
__type__ = 'object'
__serialize_attr_aliases__ = {
'min_properties': 'minProperties',
'max_properties': 'maxProperties',
'pattern_properties': 'patternProperties',
'additional_properties': 'additionalProperties'
}
properties = None
pattern_properties = None
additional_properties = None
min_properties = None
max_properties = None
def __init__(self,
properties=None,
pattern_properties=None,
additional_properties=None,
min_properties=None,
max_properties=None,
**kwargs):
if properties is not None:
self.properties = properties
if pattern_properties is not None:
self.pattern_properties = pattern_properties
if additional_properties is not None:
self.additional_properties = additional_properties
if min_properties is not None:
self.min_properties = min_properties
if max_properties is not None:
self.max_properties = max_properties
super(DictItem, self).__init__(**kwargs)
def __validate_attributes__(self):
if not self.properties and not self.pattern_properties and not self.additional_properties:
raise RuntimeError(
'One of properties, pattern_properties or additional_properties must be passed'
)
if self.properties is not None:
if not isinstance(self.properties, (Schema, dict)):
raise RuntimeError(
                    'The passed properties must be passed as a dict or '
                    'a Schema, not \'{0}\''.format(type(self.properties))
)
if not isinstance(self.properties, Schema):
for key, prop in self.properties.items():
if not isinstance(prop, (Schema, SchemaItem)):
raise RuntimeError(
'The passed property who\'s key is \'{0}\' must be of type '
'Schema, SchemaItem or BaseSchemaItem, not '
'\'{1}\''.format(key, type(prop))
)
if self.pattern_properties is not None:
if not isinstance(self.pattern_properties, dict):
raise RuntimeError(
'The passed pattern_properties must be passed as a dict '
'not \'{0}\''.format(type(self.pattern_properties))
)
for key, prop in self.pattern_properties.items():
if not isinstance(prop, (Schema, SchemaItem)):
raise RuntimeError(
'The passed pattern_property who\'s key is \'{0}\' must '
'be of type Schema, SchemaItem or BaseSchemaItem, '
'not \'{1}\''.format(key, type(prop))
)
if self.additional_properties is not None:
if not isinstance(self.additional_properties, (bool, Schema, SchemaItem)):
raise RuntimeError(
'The passed additional_properties must be of type bool, '
'Schema, SchemaItem or BaseSchemaItem, not \'{0}\''.format(
                        type(self.additional_properties)
)
)
def __get_properties__(self):
if self.properties is None:
return
if isinstance(self.properties, Schema):
return self.properties.serialize()['properties']
properties = OrderedDict()
for key, prop in self.properties.items():
properties[key] = prop.serialize()
return properties
def __get_pattern_properties__(self):
if self.pattern_properties is None:
return
pattern_properties = OrderedDict()
for key, prop in self.pattern_properties.items():
pattern_properties[key] = prop.serialize()
return pattern_properties
def __get_additional_properties__(self):
if self.additional_properties is None:
return
if isinstance(self.additional_properties, bool):
return self.additional_properties
return self.additional_properties.serialize()
def __call__(self, flatten=False):
self.__flatten__ = flatten
return self
def serialize(self):
result = super(DictItem, self).serialize()
required = []
if self.properties is not None:
if isinstance(self.properties, Schema):
serialized = self.properties.serialize()
if 'required' in serialized:
required.extend(serialized['required'])
else:
for key, prop in self.properties.items():
if prop.required:
required.append(key)
if required:
result['required'] = required
return result
class RequirementsItem(SchemaItem):
__type__ = 'object'
requirements = None
def __init__(self, requirements=None):
if requirements is not None:
self.requirements = requirements
super(RequirementsItem, self).__init__()
def __validate_attributes__(self):
if self.requirements is None:
raise RuntimeError(
'The passed requirements must not be empty'
)
if not isinstance(self.requirements, (SchemaItem, list, tuple, set)):
raise RuntimeError(
'The passed requirements must be passed as a list, tuple, '
                'set, SchemaItem or BaseSchemaItem, not \'{0}\''.format(self.requirements)
)
if not isinstance(self.requirements, SchemaItem):
if not isinstance(self.requirements, list):
self.requirements = list(self.requirements)
for idx, item in enumerate(self.requirements):
if not isinstance(item, (six.string_types, SchemaItem)):
raise RuntimeError(
'The passed requirement at the {0} index must be of type '
'str or SchemaItem, not \'{1}\''.format(idx, type(item))
)
def serialize(self):
if isinstance(self.requirements, SchemaItem):
requirements = self.requirements.serialize()
else:
requirements = []
for requirement in self.requirements:
if isinstance(requirement, SchemaItem):
requirements.append(requirement.serialize())
continue
requirements.append(requirement)
return {'required': requirements}
class OneOfItem(SchemaItem):
__type__ = 'oneOf'
items = None
def __init__(self, items=None, required=None):
if items is not None:
self.items = items
super(OneOfItem, self).__init__(required=required)
def __validate_attributes__(self):
if not self.items:
raise RuntimeError(
'The passed items must not be empty'
)
if not isinstance(self.items, (list, tuple)):
raise RuntimeError(
'The passed items must be passed as a list/tuple not '
'\'{0}\''.format(type(self.items))
)
for idx, item in enumerate(self.items):
if not isinstance(item, (Schema, SchemaItem)):
raise RuntimeError(
'The passed item at the {0} index must be of type '
'Schema, SchemaItem or BaseSchemaItem, not '
'\'{1}\''.format(idx, type(item))
)
if not isinstance(self.items, list):
self.items = list(self.items)
def __call__(self, flatten=False):
self.__flatten__ = flatten
return self
def serialize(self):
return {self.__type__: [i.serialize() for i in self.items]}
class AnyOfItem(OneOfItem):
__type__ = 'anyOf'
class AllOfItem(OneOfItem):
__type__ = 'allOf'
class NotItem(SchemaItem):
__type__ = 'not'
item = None
def __init__(self, item=None):
if item is not None:
self.item = item
super(NotItem, self).__init__()
def __validate_attributes__(self):
if not self.item:
raise RuntimeError(
'An item must be passed'
)
if not isinstance(self.item, (Schema, SchemaItem)):
raise RuntimeError(
                'The passed item must be of type Schema, SchemaItem or '
                'BaseSchemaItem, not \'{0}\''.format(type(self.item))
)
def serialize(self):
return {self.__type__: self.item.serialize()}
class PortItem(IntegerItem):
minimum = 0
maximum = 65535
class ComplexSchemaItem(BaseSchemaItem):
_attributes = []
_definition_name = None
def __init__(self, definition_name=None, required=None):
super(ComplexSchemaItem, self).__init__(required=required)
self.__type__ = 'object'
self._definition_name = definition_name if definition_name else \
self.__class__.__name__
        # Schema attributes might have been added as class attributes, so
        # they must be added to the _attributes list as well
self._add_missing_schema_attributes()
def _add_missing_schema_attributes(self):
for attr in [attr for attr in dir(self) if not attr.startswith('__')]:
            attr_val = getattr(self, attr)
            if isinstance(attr_val, SchemaItem) and \
                    attr not in self._attributes:
self._attributes.append(attr)
@property
def definition_name(self):
return self._definition_name
def serialize(self):
        return {'$ref': '#/definitions/{0}'.format(self.definition_name)}
def get_definition(self):
serialized = super(ComplexSchemaItem, self).serialize()
# Adjust entries in the serialization
del serialized['definition_name']
serialized['title'] = self.definition_name
properties = {}
required_attr_names = []
for attr_name in self._attributes:
attr = getattr(self, attr_name)
if attr and isinstance(attr, BaseSchemaItem):
# Remove the attribute entry added by the base serialization
del serialized[attr_name]
properties[attr_name] = attr.serialize()
properties[attr_name]['type'] = attr.__type__
if attr.required:
required_attr_names.append(attr_name)
if serialized.get('properties') is None:
serialized['properties'] = {}
serialized['properties'].update(properties)
# Assign the required array
if required_attr_names:
serialized['required'] = required_attr_names
return serialized
def get_complex_attrs(self):
return [getattr(self, attr_name) for attr_name in self._attributes if
isinstance(getattr(self, attr_name), ComplexSchemaItem)]
class DefinitionsSchema(Schema):
@classmethod
def serialize(cls, id_=None):
# Get the initial serialization
serialized = super(DefinitionsSchema, cls).serialize(id_)
complex_items = []
# Augment the serializations with the definitions of all complex items
aux_items = cls._items.values()
# Convert dict_view object to a list on Python 3
if six.PY3:
aux_items = list(aux_items)
while aux_items:
item = aux_items.pop(0)
# Add complex attributes
if isinstance(item, ComplexSchemaItem):
complex_items.append(item)
aux_items.extend(item.get_complex_attrs())
# Handle container items
if isinstance(item, OneOfItem):
aux_items.extend(item.items)
elif isinstance(item, ArrayItem):
aux_items.append(item.items)
elif isinstance(item, DictItem):
if item.properties:
aux_items.extend(item.properties.values())
if item.additional_properties and \
isinstance(item.additional_properties, SchemaItem):
aux_items.append(item.additional_properties)
definitions = OrderedDict()
for config in complex_items:
if isinstance(config, ComplexSchemaItem):
definitions[config.definition_name] = \
config.get_definition()
serialized['definitions'] = definitions
return serialized
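The serialize() implementations above compose recursively: container items (ArrayItem, DictItem, OneOfItem, …) call serialize() on each child and splice the results into a plain JSON-Schema dict. A minimal standalone sketch of that pattern — toy classes for illustration, not the actual schema API:

```python
# Toy items (illustrative only) mirroring the recursive serialize() pattern.
class StringItem:
    def serialize(self):
        return {'type': 'string'}


class IntItem:
    def serialize(self):
        return {'type': 'integer'}


class OneOf:
    def __init__(self, *items):
        self.items = items

    def serialize(self):
        # Same shape as OneOfItem.serialize() above: a list of child schemas.
        return {'oneOf': [item.serialize() for item in self.items]}


schema = OneOf(StringItem(), IntItem()).serialize()
```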
f720868bb7566cf137aaa7665b5dbd671fef24fb | 2,738 | py | Python | skfem/element/element_tet/element_tet_p2.py | carlosal1015/scikit-fem | 1e73a417e9b43fe0a36e29807792c41fa289b77d | [
"BSD-3-Clause"
] | null | null | null | skfem/element/element_tet/element_tet_p2.py | carlosal1015/scikit-fem | 1e73a417e9b43fe0a36e29807792c41fa289b77d | [
"BSD-3-Clause"
] | null | null | null | skfem/element/element_tet/element_tet_p2.py | carlosal1015/scikit-fem | 1e73a417e9b43fe0a36e29807792c41fa289b77d | [
"BSD-3-Clause"
] | null | null | null
import numpy as np
from ..element_h1 import ElementH1
class ElementTetP2(ElementH1):
nodal_dofs = 1
edge_dofs = 1
dim = 3
maxdeg = 2
dofnames = ['u', 'u']
doflocs = np.array([[0., 0., 0.],
[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.],
[.5, 0., 0.],
[.5, .5, 0.],
[0., .5, 0.],
[0., .0, .5],
[.5, .0, .5],
[.0, .5, .5]])
def lbasis(self, X, i):
x, y, z = X
if i == 0: # at (0,0,0)
phi = (1. - 3.*x + 2.*x**2 - 3.*y + 4.*x*y +
2.*y**2 - 3.*z + 4.*x*z + 4.*y*z + 2.*z**2)
dphi = np.array([
-3. + 4.*x + 4.*y + 4.*z,
-3. + 4.*x + 4.*y + 4.*z,
-3. + 4.*x + 4.*y + 4.*z,
])
elif i == 1: # at (1,0,0)
phi = - 1.*x + 2.*x**2
dphi = np.array([
-1 + 4*x,
0*x,
0*x,
])
elif i == 2: # at (0,1,0)
phi = - 1.*y + 2.*y**2
dphi = np.array([
0*x,
-1. + 4.*y,
0*x,
])
elif i == 3: # at (0,0,1)
phi = - 1.*z + 2.*z**2
dphi = np.array([
0*x,
0*x,
-1. + 4.*z,
])
elif i == 4: # between (0,1)
phi = 4.*x - 4.*x**2 - 4.*x*y - 4*x*z
dphi = np.array([
4. - 8.*x - 4.*y - 4.*z,
-4.*x,
-4.*x,
])
elif i == 5: # between (1,2)
phi = 4.*x*y
dphi = np.array([
4.*y,
4.*x,
0*x,
])
elif i == 6: # between (0,2)
phi = 0. + 4.*y - 4.*x*y - 4.*y**2 - 4.*y*z
dphi = np.array([
-4.*y,
4. - 4.*x - 8.*y - 4.*z,
-4.*y,
])
elif i == 7: # between (0,3)
phi = 0. + 4.*z - 4.*x*z - 4.*y*z - 4.*z**2
dphi = np.array([
-4.*z,
-4.*z,
4. - 4.*x - 4.*y - 8.*z,
])
elif i == 8:
phi = 0. + 4.*x*z
dphi = np.array([
4.*z,
0*x,
4*x,
])
elif i == 9:
phi = 0. + 4.*y*z
dphi = np.array([
0*x,
4*z,
4*y,
])
else:
raise Exception("!")
return phi, dphi
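The ten shape functions implemented in lbasis are the standard quadratic barycentric polynomials on the reference tetrahedron: vertex functions λ(2λ−1) and edge functions 4λaλb. A quick pure-Python sanity check (independent of the class) that they form a partition of unity at an arbitrary sample point:

```python
# Partition-of-unity check for the P2 tetrahedral basis at one sample point.
x, y, z = 0.2, 0.3, 0.1
lam = [1.0 - x - y - z, x, y, z]                 # barycentric coordinates
vertex = [l * (2.0 * l - 1.0) for l in lam]      # 4 vertex functions (i = 0..3)
edge_pairs = [(0, 1), (1, 2), (0, 2), (0, 3), (1, 3), (2, 3)]
edge = [4.0 * lam[a] * lam[b] for a, b in edge_pairs]  # 6 edge functions (i = 4..9)
total = sum(vertex) + sum(edge)                  # equals 1 everywhere in the element
```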
f720884f949f265d08ef7c4e59ff5312172239dc | 5,043 | py | Python | back/Hera/utils.py | pingPoltergeist/Hera | 519336cebbcf14ff3da6299e946407788121a0b7 | [
"MIT"
] | 1 | 2021-12-09T11:37:20.000Z | 2021-12-09T11:37:20.000Z | back/Hera/utils.py | pingPoltergeist/Hera | 519336cebbcf14ff3da6299e946407788121a0b7 | [
"MIT"
] | 1 | 2021-11-05T09:14:50.000Z | 2021-11-05T09:14:50.000Z | back/Hera/utils.py | pingPoltergeist/Hera | 519336cebbcf14ff3da6299e946407788121a0b7 | [
"MIT"
] | 2 | 2022-01-13T15:12:36.000Z | 2022-03-10T01:35:25.000Z
import traceback
from pathlib import Path
import hashlib
import yaml
def get_media_dirs(media_dir_stream):
result = dict()
movie_dir_map = dict()
for media_location in media_dir_stream[0].replace('\n', '').replace('\r', '').split(','):
movie_dir_map[hashlib.md5(media_location.encode('utf-8')).hexdigest()] = Path(media_location)
tv_dir_map = dict()
for tv_location in media_dir_stream[1].replace('\n', '').replace('\r', '').split(','):
tv_dir_map[hashlib.md5(tv_location.encode('utf-8')).hexdigest()] = Path(tv_location)
result['movie_dir_map'] = movie_dir_map
result['tv_dir_map'] = tv_dir_map
return result
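get_media_dirs keys each configured directory by the MD5 hex digest of its path string. The same mapping in isolation — the paths below are made up for the example; in the app they come from the config stream:

```python
import hashlib
from pathlib import Path

# Hypothetical media locations (illustrative only).
media_dir_stream = "/media/movies,/mnt/nas/films"
movie_dir_map = {
    hashlib.md5(loc.encode('utf-8')).hexdigest(): Path(loc)
    for loc in media_dir_stream.split(',')
}
```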
class Config:
__filepath = None
__config = dict()
__movie_dirs_map = dict()
__tv_dirs_map = dict()
def __init__(self, config_filepath=None):
blank_config = {
'movie_dir': list(),
'tv_dir': list()
}
self.__filepath = config_filepath
try:
with open(self.__filepath) as f:
self.__config = yaml.load(f, Loader=yaml.FullLoader)
if (
(not self.__config) or
(type(self.__config) != dict) or
(type(self.__config.get('movie_dir')) != list) or
(type(self.__config.get('tv_dir')) != list)
):
self.__config = blank_config
try:
with open(self.__filepath, 'w') as f:
                        yaml.dump(self.__config, f)
except Exception as ex:
print('Config :: update -> ', ex)
traceback.print_exc()
except Exception as ex:
self.__config = blank_config
try:
with open(self.__filepath, 'w') as f:
                    yaml.dump(self.__config, f)
except Exception as ex:
print('Config :: update -> ', ex)
traceback.print_exc()
print('Config::init: -> Creating a fresh config.yaml file')
finally:
if type(self.__config.get('movie_dir')) == list:
for media_location in self.__config.get('movie_dir'):
self.__movie_dirs_map[hashlib.md5(media_location.encode('utf-8')).hexdigest()] = Path(
media_location)
if type(self.__config.get('tv_dir')) == list:
for tv_location in self.__config.get('tv_dir'):
self.__tv_dirs_map[hashlib.md5(tv_location.encode('utf-8')).hexdigest()] = Path(tv_location)
def get(self):
return self.__config
def get_movie_dirs_map(self):
return self.__movie_dirs_map
def get_tv_dirs_map(self):
return self.__tv_dirs_map
def add_to_tv_dirs(self, new_tv_dir):
if Path(new_tv_dir).exists() and (new_tv_dir not in self.__config['tv_dir']):
self.__config['tv_dir'].append(new_tv_dir)
self.__tv_dirs_map[hashlib.md5(new_tv_dir.encode('utf-8')).hexdigest()] = Path(new_tv_dir)
def add_to_movie_dirs(self, new_movie_dir):
if Path(new_movie_dir).exists() and (new_movie_dir not in self.__config['movie_dir']):
self.__config['movie_dir'].append(new_movie_dir)
self.__movie_dirs_map[hashlib.md5(new_movie_dir.encode('utf-8')).hexdigest()] = Path(new_movie_dir)
def remove_from_movie_dirs(self, movie_dir):
if self.__config['movie_dir'] and movie_dir in self.__config['movie_dir']:
self.__config['movie_dir'].remove(movie_dir)
del self.__movie_dirs_map[hashlib.md5(movie_dir.encode('utf-8')).hexdigest()]
def remove_from_tv_dirs(self, tv_dir):
if self.__config['tv_dir'] and tv_dir in self.__config['tv_dir']:
self.__config['tv_dir'].remove(tv_dir)
del self.__tv_dirs_map[hashlib.md5(tv_dir.encode('utf-8')).hexdigest()]
def refresh(self):
try:
with open(self.__filepath) as f:
self.__config = yaml.load(f, Loader=yaml.FullLoader)
if type(self.__config.get('movie_dir')) == list:
for media_location in self.__config.get('movie_dir'):
self.__movie_dirs_map[hashlib.md5(media_location.encode('utf-8')).hexdigest()] = Path(
media_location)
if type(self.__config.get('tv_dir')) == list:
for tv_location in self.__config.get('tv_dir'):
self.__tv_dirs_map[hashlib.md5(tv_location.encode('utf-8')).hexdigest()] = Path(tv_location)
except Exception as ex:
            print('Config :: refresh -> ', ex)
traceback.print_exc()
def update(self, updated_config=None):
if updated_config:
self.__config = updated_config
try:
with open(self.__filepath, 'w') as f:
                yaml.dump(self.__config, f)
except Exception as ex:
print('Config :: update -> ', ex)
traceback.print_exc()
f7208878a3eceaf4d90f7cc71177db7ce94487d3 | 6,366 | py | Python | son.py | nathanmartins/Son-Of-Anton | d45eec2b9263dbd981f468219c9d0fb049bd481d | [
"MIT"
] | null | null | null | son.py | nathanmartins/Son-Of-Anton | d45eec2b9263dbd981f468219c9d0fb049bd481d | [
"MIT"
] | null | null | null | son.py | nathanmartins/Son-Of-Anton | d45eec2b9263dbd981f468219c9d0fb049bd481d | [
"MIT"
] | null | null | null
import logging
import math
import os
import pickle
import re
import PIL.Image
import numpy as np
from mtcnn import MTCNN
from numpy import expand_dims
from sklearn import preprocessing, neighbors
from tensorflow.keras.models import load_model
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
DATASET_DIR = os.path.join(BASE_DIR, "dataset")
TRAIN_DIR = os.path.join(DATASET_DIR, "train")
TEST_DIR = os.path.join(DATASET_DIR, "test")
DEBUG = True
# DEBUG = False
model = load_model('facenet_keras.h5')
logging.basicConfig(level=logging.DEBUG if DEBUG else logging.INFO)
def extract_faces(img_path: str):
faces_arr = list()
# Open file and convert to numpy
image_array = np.array(PIL.Image.open(img_path).convert("RGB"), "uint8")
detector = MTCNN()
faces = detector.detect_faces(image_array)
if len(faces) == 0:
# If there are no people in a training image, skip the image.
        logging.warning(f"Image {img_path} not suitable for training: {len(faces)} faces found")
return None, None
for face in faces:
# logging.debug(f"Image {img_path} is suitable for training!")
x1, y1, width, height = face['box']
# bug fix
x1, y1 = abs(x1), abs(y1)
x2, y2 = x1 + width, y1 + height
# extract the face
face = image_array[y1:y2, x1:x2]
# resize pixels to the model size
image = PIL.Image.fromarray(face)
image = image.resize((160, 160))
faces_arr.append(np.asarray(image))
return faces_arr, faces
def get_embedding(face_pixels):
# scale pixel values
face_pixels = face_pixels.astype('float32')
# standardize pixel values across channels (global)
mean, std = face_pixels.mean(), face_pixels.std()
face_pixels = (face_pixels - mean) / std
# transform face into one sample
samples = expand_dims(face_pixels, axis=0)
# make prediction to get embedding
return model.predict(samples)
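get_embedding standardizes the pixel block to zero mean and unit variance before calling the model. The arithmetic on its own, using population statistics (which is what ndarray.mean()/.std() compute by default):

```python
from statistics import mean, pstdev

pixels = [float(v) for v in range(16)]          # stand-in for flattened face pixels
m, s = mean(pixels), pstdev(pixels)             # population mean / std
standardized = [(p - m) / s for p in pixels]    # zero mean, unit variance
```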
def prepare():
x_train = list()
y_labels = list()
# Loop through each person in the training set
for label in os.listdir(TRAIN_DIR):
path = os.path.join(TRAIN_DIR, label)
# This will ignore anything that is not jpg|jpeg|png *USE WITH CAUTION*
allowed_files = [os.path.join(path, f) for f in os.listdir(path) if
re.match(r'.*\.(jpg|jpeg|png)', f, flags=re.I)]
for img_path in allowed_files:
logging.debug(f"File: {img_path}, Label: {label}")
faces, _ = extract_faces(img_path)
if faces is not None:
for face in faces:
x_train.append(np.asarray(face))
y_labels.append(label)
# Converting string labels into numbers.
le = preprocessing.LabelEncoder()
labels_encoded = le.fit_transform(y_labels)
with open("x_train.pickle", 'wb') as f:
pickle.dump(x_train, f)
with open("y_labels.pickle", 'wb') as f:
pickle.dump(y_labels, f)
with open("labels_encoded.pickle", 'wb') as f:
pickle.dump(labels_encoded, f)
def train():
with open("x_train.pickle", 'rb') as f:
x_train = pickle.load(f)
# x_train = np.array(x_train)
# x_train = np.reshape(x_train, (-1, 2))
with open("labels_encoded.pickle", 'rb') as f:
y_labels = pickle.load(f)
# convert each face in the train set to an embedding
encoded_x_train = list()
for face_pixels in x_train:
embedding = get_embedding(face_pixels)[0]
encoded_x_train.append(embedding)
encoded_x_train = np.asarray(encoded_x_train)
# Determine how many neighbors to use for weighting in the KNN classifier.
n_neighbors = int(round(math.sqrt(len(x_train))))
logging.info(f"n_neighbors: {n_neighbors}")
# Create and train the KNN classifier.
knn_clf = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors, algorithm="ball_tree", weights='distance')
knn_clf.fit(encoded_x_train, y_labels)
# perdicted = list()
# metrics.accuracy(y_labels, perdicted)
# Save the trained KNN classifier
with open("model.clf", 'wb') as f:
pickle.dump(knn_clf, f)
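train() picks the number of neighbors for the KNN classifier with the common square-root-of-N rule of thumb. The rule in isolation:

```python
import math

def sqrt_neighbors(n_samples):
    # k = round(sqrt(N)), the heuristic used above when fitting the classifier.
    return int(round(math.sqrt(n_samples)))
```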
def predict():
with open("model.clf", 'rb') as f:
knn_clf = pickle.load(f)
with open("labels_encoded.pickle", 'rb') as f:
y_labels = pickle.load(f)
le = preprocessing.LabelEncoder()
le.fit_transform(y_labels)
# breakpoint()
for img in os.listdir(TEST_DIR):
# logging.info(f"Testing image: {img}")
full_path = os.path.join(TEST_DIR, img)
faces, raws = extract_faces(full_path)
if faces is None:
logging.info(f"WARNING: COULD NOT FIND A FACE IN {full_path}")
continue
c = 0
for face in faces:
faces_encodings = get_embedding(face)
# A list of tuples of found face locations in css (top, right, bottom, left) order
x_face_locations = tuple(raws[c]["box"])
c += 1
# Use the KNN model to find the best matches for the test face
closest_distances = knn_clf.kneighbors(faces_encodings, n_neighbors=1)
are_matches = list()
for i in range(len(x_face_locations)):
try:
dis = closest_distances[0][i][0]
# logging.debug(f"Closest distance is {dis} - {dis < 7}")
if dis < 7:
# logging.debug(f"Adding a Dis {dis}")
are_matches.append(dis)
except IndexError:
pass
# logging.debug(f"Dis is {are_matches}")
pred = knn_clf.predict(faces_encodings)
if len(are_matches) > 0:
for pred, loc, rec in zip(pred, x_face_locations, are_matches):
if rec:
if pred == 1:
a = "unknown"
else:
a = "nsm"
logging.info(f"Found: {a} - {img}")
else:
logging.warning(f"WARNING: COULD NOT IDENTIFY A FACE IN {full_path}")
else:
a = "unknown"
logging.info(f"Found: {a} - {img}")
if __name__ == '__main__':
# prepare()
# train()
predict()
f72088a05bdf713d0b6f36ecd12b25eb950976d1 | 12,785 | py | Python | raiden/exceptions.py | luehrsFred/raiden | a1b118ebe14badb1acd0744b2d7f2b39f8ba5313 | [
"MIT"
] | null | null | null | raiden/exceptions.py | luehrsFred/raiden | a1b118ebe14badb1acd0744b2d7f2b39f8ba5313 | [
"MIT"
] | 69 | 2020-07-21T05:49:21.000Z | 2022-03-08T18:09:44.000Z | raiden/exceptions.py | luehrsFred/raiden | a1b118ebe14badb1acd0744b2d7f2b39f8ba5313 | [
"MIT"
] | null | null | null
"""
What do you want from this file?
1. I need to look up when to raise what.
Then read on the docstrings.
2. I have to add a new exception.
Make sure you catch it somewhere. Sometimes you'll realize you cannot catch it.
Especially, if your new exception indicates bug in the Raiden codebase,
you are not supposed to catch the exception. Instead, use one of the
existing uncaught exceptions: RaidenUnrecoverableError or BrokenPreconditionError.
"""
import enum
from typing import Any, Dict, List
@enum.unique
class PFSError(enum.IntEnum):
""" Error codes as returned by the PFS.
Defined in the pathfinding_service.exceptions module in
https://github.com/raiden-network/raiden-services
"""
# TODO: link to PFS spec as soon as the error codes are added there.
# 20xx - General
INVALID_REQUEST = 2000
INVALID_SIGNATURE = 2001
REQUEST_OUTDATED = 2002
# 21xx - IOU errors
BAD_IOU = 2100
MISSING_IOU = 2101
WRONG_IOU_RECIPIENT = 2102
IOU_EXPIRED_TOO_EARLY = 2103
INSUFFICIENT_SERVICE_PAYMENT = 2104
IOU_ALREADY_CLAIMED = 2105
USE_THIS_IOU = 2106
DEPOSIT_TOO_LOW = 2107
# 22xx - Routing
NO_ROUTE_FOUND = 2201
@staticmethod
def is_iou_rejected(error_code: int) -> bool:
return error_code >= 2100 and error_code < 2200
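is_iou_rejected simply range-checks the 21xx block. An abbreviated standalone copy — an illustrative subset of the codes, not the full enum — exercising that predicate:

```python
import enum

@enum.unique
class MiniPFSError(enum.IntEnum):
    # Illustrative subset of the PFS error codes.
    INVALID_REQUEST = 2000
    MISSING_IOU = 2101
    NO_ROUTE_FOUND = 2201

    @staticmethod
    def is_iou_rejected(error_code: int) -> bool:
        # IOU errors occupy the 2100..2199 range.
        return 2100 <= error_code < 2200
```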
class RaidenError(Exception):
"""Raiden base exception.
This exception exists for user code to catch all Raiden related exceptions.
This should be used with care, because `RaidenUnrecoverableError` is a
`RaidenError`, and when one of such exceptions is raised the state of the
client node is undetermined.
"""
class RaidenRecoverableError(RaidenError):
"""Exception for recoverable errors.
    This base exception exists for code written in an EAFP style. It should be
    inherited when exceptions are expected to happen and handling them will not
    leave the node in an undefined state.
    Usage examples:
    - Operations that failed because of race conditions, e.g. opening a
      channel fails because both participants try at the same time.
    - Transient connectivity problems.
- Timeouts.
Note:
Some errors are undesirable, but they are still possible and should be
    expected. For example, a secret registration that finishes after the timeout
window.
"""
class RaidenUnrecoverableError(RaidenError):
"""Base exception for unexpected errors that should crash the client.
This exception is used when something unrecoverable happened:
- Corrupted database.
- Running out of disk space.
"""
class RaidenValidationError(RaidenRecoverableError):
"""Exception raised when an input value is invalid.
This exception must be raised on the edges of the system, to inform the
caller one of the provided values is invalid.
Actually, this exception can also be used in the proxies for insane values
that are not valid regardless of the chain state.
If a value is not acceptable because of the chain state, BrokenPreconditionError
must be used instead.
Also, if a value indicates a bug in our codebase, RaidenValidationError
is not the right error because RaidenValidationError is considered as a
recoverable error.
We prefer this exception over ValueError because libraries (e.g. web3.py)
raise ValueError sometimes, and we want to differentiate our own exceptions
from those.
"""
class PaymentConflict(RaidenRecoverableError):
""" Raised when there is another payment with the same identifier but the
attributes of the payment don't match.
"""
class InsufficientFunds(RaidenError):
    """ Raised when the provided account doesn't have token funds to complete the
requested deposit.
Used when a *user* tries to deposit a given amount of token in a channel,
but his account doesn't have enough funds to pay for the deposit.
"""
class DepositOverLimit(RaidenError):
""" Raised when the requested deposit is over the limit
Used when a *user* tries to deposit a given amount of token in a channel,
but the amount is over the testing limit.
"""
class DepositMismatch(RaidenRecoverableError):
""" Raised when the requested deposit is lower than actual channel deposit
Used when a *user* tries to deposit a given amount of tokens in a channel,
but the on-chain amount is already higher.
"""
class InvalidChannelID(RaidenError):
""" Raised when the user provided value is not a channel id. """
class WithdrawMismatch(RaidenRecoverableError):
""" Raised when the requested withdraw is larger than actual channel balance. """
class InvalidChecksummedAddress(RaidenError):
"""Raised when the user provided address is not a str or the value is not
properly checksummed.
    Exception used to enforce checksummed addresses for external APIs. The address
    provided by a user must be checksummed to avoid errors; the checksummed
    address must be validated at the edges before calling internal functions.
"""
class InvalidBinaryAddress(RaidenValidationError):
"""Raised when the address is not binary or it is not 20 bytes long.
Exception used to enforce the sandwich encoding for python APIs. The
internal address representation used by Raiden is binary, the binary
address must be validated at the edges before calling internal functions.
"""
class InvalidSecret(RaidenError):
""" Raised when the user provided value is not a valid secret. """
class InvalidSecretHash(RaidenError):
""" Raised when the user provided value is not a valid secrethash. """
class InvalidAmount(RaidenError):
""" Raised when the user provided value is not a positive integer and
cannot be used to define a transfer value.
"""
class InvalidSettleTimeout(RaidenError):
""" Raised when the user provided timeout value is less than the minimum
settle timeout"""
class InvalidRevealTimeout(RaidenError):
""" Raised when the channel's settle timeout is less than
double the user provided reveal timeout value.
condition: settle_timeout < reveal_timeout * 2
"""
class InvalidSignature(RaidenError):
"""Raised on invalid signature recover/verify"""
class InvalidPaymentIdentifier(RaidenError):
"""Raised on invalid payment identifier"""
class SamePeerAddress(RaidenError):
""" Raised when a user tries to perform an action that requires two different partners
"""
class UnknownTokenAddress(RaidenError):
    """ Raised when the token address is unknown. """
class TokenNotRegistered(RaidenError):
""" Raised if there is no token network for token used when opening a channel """
class AlreadyRegisteredTokenAddress(RaidenError):
    """ Raised when the token address is already registered with the given network. """
class InvalidToken(RaidenError):
""" Raised if the token does not follow the ERC20 standard. """
class MaxTokenNetworkNumberReached(RaidenError):
""" Raised if the maximum amount of token networks has been registered. """
class InvalidTokenAddress(RaidenError):
""" Raised if the token address is invalid. """
class InvalidTokenNetworkDepositLimit(RaidenError):
""" Raised when an invalid token network deposit
limit is passed to the token network registry proxy.
"""
class InvalidChannelParticipantDepositLimit(RaidenError):
""" Raised when an invalid channel participant
deposit limit is passed to the token network registry proxy.
"""
class EthNodeInterfaceError(RaidenError):
    """ Raised when the underlying ETH node does not support an RPC interface"""
class AddressWithoutCode(RaidenError):
"""Raised on attempt to execute contract on address without a code."""
class DuplicatedChannelError(RaidenRecoverableError):
"""Raised if someone tries to create a channel that already exists."""
class UnexpectedChannelState(RaidenRecoverableError):
"""Raised if an operation is attempted on a channel while it is in an unexpected state."""
class ContractCodeMismatch(RaidenError):
"""Raised if the onchain code of the contract differs."""
class APIServerPortInUseError(RaidenError):
"""Raised when API server port is already in use"""
class InvalidDBData(RaidenUnrecoverableError):
"""Raised when the data of the WAL are in an unexpected format"""
class InvalidBlockNumberInput(RaidenError):
"""Raised when the user provided a block number that is < 0 or > UINT64_MAX"""
class NoStateForBlockIdentifier(RaidenError):
"""
Raised when we attempt to provide a block identifier older
than STATE_PRUNING_AFTER_BLOCKS blocks
"""
class InvalidNumberInput(RaidenError):
"""Raised when the user provided an invalid number"""
class TransportError(RaidenError):
""" Raised when a transport encounters an unexpected error """
class ReplacementTransactionUnderpriced(RaidenError):
"""Raised when a replacement transaction is rejected by the blockchain"""
class EthereumNonceTooLow(RaidenUnrecoverableError):
"""Raised when a new transaction is sent with a nonce that has been used already."""
class ChannelOutdatedError(RaidenError):
""" Raised when an action is invoked on a channel whose
identifier has been replaced with a new channel identifier
    due to a close/re-open of the current channel.
"""
class InsufficientGasReserve(RaidenError):
""" Raised when an action cannot be done because the available balance
is not sufficient for the lifecycles of all active channels.
"""
class InsufficientEth(RaidenError):
""" Raised when an on-chain action failed because we could not pay for
the gas. (The case we try to avoid with `InsufficientGasReserve`
exceptions.)
"""
class BrokenPreconditionError(RaidenError):
""" Raised when the chain doesn't satisfy transaction preconditions
that proxies check at the specified block.
This exception should be used, when the proxy already sees that,
on the specified block, due to the blockchain state, an assert
or a revert in the smart contract would be hit for the triggering block.
This exception should not be used for errors independent of the
chain state. For example, when an argument needs to be always non-zero,
violation of this condition is not a BrokenPreconditionError, but
RaidenValidationError.
This exception can also be used when preconditions are not satisfied
on another Raiden node.
"""
class ServiceRequestFailed(RaidenError):
""" Raised when a request to one of the raiden services fails. """
class PFSReturnedError(ServiceRequestFailed):
""" The PFS responded with a json message containing an error """
def __init__(self, message: str, error_code: int, error_details: Dict[str, Any]) -> None:
args: List[Any] = [f"{message} (PFS error code: {error_code})"]
if error_details:
args.append(error_details)
super().__init__(*args)
        self.message = message
self.error_code = error_code
self.error_details = error_details
@classmethod
def from_response(cls, response_json: Dict[str, Any]) -> "PFSReturnedError":
# TODO: Use marshmallow to deserialize the message fields. Otherwise we
# can't guarantee that the variables have the right type, causing bad
# error handling.
error_params = dict(
message=response_json.get("errors", ""),
error_code=response_json.get("error_code", 0),
error_details=response_json.get("error_details"),
)
if PFSError.is_iou_rejected(error_params["error_code"]):
return ServiceRequestIOURejected(**error_params)
return cls(**error_params)
class ServiceRequestIOURejected(PFSReturnedError):
""" Raised when a service request fails due to a problem with the iou. """
class UndefinedMediationFee(RaidenError):
    """The fee schedule is not applicable, resulting in undefined fees
Either the raiden node is not capable of mediating this payment, or the
FeeSchedule is outdated/inconsistent."""
class TokenNetworkDeprecated(RaidenError):
""" Raised when the token network proxy safety switch
    is turned on (i.e. deprecated).
"""
class MintFailed(RaidenError):
""" Raised when an attempt to mint a testnet token failed. """
class SerializationError(RaidenError):
""" Invalid data are to be (de-)serialized. """
class MatrixSyncMaxTimeoutReached(RaidenRecoverableError):
""" Raised if processing the matrix response takes longer than the poll timeout. """
class ConfigurationError(RaidenError):
""" Raised when there is something wrong with the provided Raiden Configuration/arguments """
| 31.882793 | 97 | 0.732264 | import enum
from typing import Any, Dict, List
@enum.unique
class PFSError(enum.IntEnum):
INVALID_REQUEST = 2000
INVALID_SIGNATURE = 2001
REQUEST_OUTDATED = 2002
BAD_IOU = 2100
MISSING_IOU = 2101
WRONG_IOU_RECIPIENT = 2102
IOU_EXPIRED_TOO_EARLY = 2103
INSUFFICIENT_SERVICE_PAYMENT = 2104
IOU_ALREADY_CLAIMED = 2105
USE_THIS_IOU = 2106
DEPOSIT_TOO_LOW = 2107
NO_ROUTE_FOUND = 2201
@staticmethod
def is_iou_rejected(error_code: int) -> bool:
return error_code >= 2100 and error_code < 2200
class RaidenError(Exception):
class RaidenRecoverableError(RaidenError):
class RaidenUnrecoverableError(RaidenError):
class RaidenValidationError(RaidenRecoverableError):
class PaymentConflict(RaidenRecoverableError):
class InsufficientFunds(RaidenError):
class DepositOverLimit(RaidenError):
class DepositMismatch(RaidenRecoverableError):
class InvalidChannelID(RaidenError):
class WithdrawMismatch(RaidenRecoverableError):
class InvalidChecksummedAddress(RaidenError):
class InvalidBinaryAddress(RaidenValidationError):
class InvalidSecret(RaidenError):
class InvalidSecretHash(RaidenError):
class InvalidAmount(RaidenError):
class InvalidSettleTimeout(RaidenError):
class InvalidRevealTimeout(RaidenError):
class InvalidSignature(RaidenError):
class InvalidPaymentIdentifier(RaidenError):
class SamePeerAddress(RaidenError):
class UnknownTokenAddress(RaidenError):
class TokenNotRegistered(RaidenError):
class AlreadyRegisteredTokenAddress(RaidenError):
class InvalidToken(RaidenError):
class MaxTokenNetworkNumberReached(RaidenError):
class InvalidTokenAddress(RaidenError):
class InvalidTokenNetworkDepositLimit(RaidenError):
class InvalidChannelParticipantDepositLimit(RaidenError):
class EthNodeInterfaceError(RaidenError):
class AddressWithoutCode(RaidenError):
class DuplicatedChannelError(RaidenRecoverableError):
class UnexpectedChannelState(RaidenRecoverableError):
class ContractCodeMismatch(RaidenError):
class APIServerPortInUseError(RaidenError):
class InvalidDBData(RaidenUnrecoverableError):
class InvalidBlockNumberInput(RaidenError):
class NoStateForBlockIdentifier(RaidenError):
class InvalidNumberInput(RaidenError):
class TransportError(RaidenError):
class ReplacementTransactionUnderpriced(RaidenError):
class EthereumNonceTooLow(RaidenUnrecoverableError):
class ChannelOutdatedError(RaidenError):
class InsufficientGasReserve(RaidenError):
class InsufficientEth(RaidenError):
class BrokenPreconditionError(RaidenError):
class ServiceRequestFailed(RaidenError):
class PFSReturnedError(ServiceRequestFailed):
def __init__(self, message: str, error_code: int, error_details: Dict[str, Any]) -> None:
args: List[Any] = [f"{message} (PFS error code: {error_code})"]
if error_details:
args.append(error_details)
super().__init__(*args)
        self.message = message
self.error_code = error_code
self.error_details = error_details
@classmethod
def from_response(cls, response_json: Dict[str, Any]) -> "PFSReturnedError":
# error handling.
error_params = dict(
message=response_json.get("errors", ""),
error_code=response_json.get("error_code", 0),
error_details=response_json.get("error_details"),
)
if PFSError.is_iou_rejected(error_params["error_code"]):
return ServiceRequestIOURejected(**error_params)
return cls(**error_params)
class ServiceRequestIOURejected(PFSReturnedError):
class UndefinedMediationFee(RaidenError):
class TokenNetworkDeprecated(RaidenError):
class MintFailed(RaidenError):
class SerializationError(RaidenError):
class MatrixSyncMaxTimeoutReached(RaidenRecoverableError):
class ConfigurationError(RaidenError):
| true | true |
f7208908dd0f530934afcd736f133ab23168b854 | 1,172 | py | Python | setup.py | njdanielsen/aws-data-wrangler | 5cdb316224370e952dfb3a701825e1b1ab331105 | [
"Apache-2.0"
] | null | null | null | setup.py | njdanielsen/aws-data-wrangler | 5cdb316224370e952dfb3a701825e1b1ab331105 | [
"Apache-2.0"
] | null | null | null | setup.py | njdanielsen/aws-data-wrangler | 5cdb316224370e952dfb3a701825e1b1ab331105 | [
"Apache-2.0"
] | null | null | null | import os
from io import open
from typing import Dict
from setuptools import find_packages, setup
here = os.path.abspath(os.path.dirname(__file__))
about: Dict[str, str] = {}
path = os.path.join(here, "awswrangler", "__metadata__.py")
with open(file=path, mode="r", encoding="utf-8") as f:
exec(f.read(), about)
with open("README.md", "r") as fh:
long_description = fh.read()
setup(
author="Igor Tavares",
url="https://github.com/awslabs/aws-data-wrangler",
name=about["__title__"],
version=about["__version__"],
description=about["__description__"],
long_description=long_description,
long_description_content_type="text/markdown",
license=about["__license__"],
packages=find_packages(exclude=["tests"]),
include_package_data=True,
python_requires=">=3.6, <3.10",
install_requires=open("requirements.txt").read().strip().split("\n"),
classifiers=[
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
],
extras_require={"sqlserver": ["pyodbc~=4.0.30"]},
)
| 31.675676 | 73 | 0.664676 | import os
from io import open
from typing import Dict
from setuptools import find_packages, setup
here = os.path.abspath(os.path.dirname(__file__))
about: Dict[str, str] = {}
path = os.path.join(here, "awswrangler", "__metadata__.py")
with open(file=path, mode="r", encoding="utf-8") as f:
exec(f.read(), about)
with open("README.md", "r") as fh:
long_description = fh.read()
setup(
author="Igor Tavares",
url="https://github.com/awslabs/aws-data-wrangler",
name=about["__title__"],
version=about["__version__"],
description=about["__description__"],
long_description=long_description,
long_description_content_type="text/markdown",
license=about["__license__"],
packages=find_packages(exclude=["tests"]),
include_package_data=True,
python_requires=">=3.6, <3.10",
install_requires=open("requirements.txt").read().strip().split("\n"),
classifiers=[
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
],
extras_require={"sqlserver": ["pyodbc~=4.0.30"]},
)
| true | true |
f72089bdffb1bfe66db8bea55840dc0bef158c5f | 18,291 | py | Python | sdk/network/azure-mgmt-network/azure/mgmt/network/v2019_12_01/operations/_web_application_firewall_policies_operations.py | iscai-msft/azure-sdk-for-python | 83715b95c41e519d5be7f1180195e2fba136fc0f | [
"MIT"
] | 1 | 2020-05-12T23:29:15.000Z | 2020-05-12T23:29:15.000Z | sdk/network/azure-mgmt-network/azure/mgmt/network/v2019_12_01/operations/_web_application_firewall_policies_operations.py | iscai-msft/azure-sdk-for-python | 83715b95c41e519d5be7f1180195e2fba136fc0f | [
"MIT"
] | 226 | 2019-07-24T07:57:21.000Z | 2019-10-15T01:07:24.000Z | sdk/network/azure-mgmt-network/azure/mgmt/network/v2019_12_01/operations/_web_application_firewall_policies_operations.py | iscai-msft/azure-sdk-for-python | 83715b95c41e519d5be7f1180195e2fba136fc0f | [
"MIT"
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
import uuid
from msrest.pipeline import ClientRawResponse
from msrestazure.azure_exceptions import CloudError
from msrest.polling import LROPoller, NoPolling
from msrestazure.polling.arm_polling import ARMPolling
from .. import models
class WebApplicationFirewallPoliciesOperations(object):
"""WebApplicationFirewallPoliciesOperations operations.
You should not instantiate directly this class, but create a Client instance that will create it for you and attach it as attribute.
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
:ivar api_version: Client API version. Constant value: "2019-12-01".
"""
models = models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self.api_version = "2019-12-01"
self.config = config
def list(
self, resource_group_name, custom_headers=None, raw=False, **operation_config):
"""Lists all of the protection policies within a resource group.
:param resource_group_name: The name of the resource group.
:type resource_group_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: An iterator like instance of WebApplicationFirewallPolicy
:rtype:
~azure.mgmt.network.v2019_12_01.models.WebApplicationFirewallPolicyPaged[~azure.mgmt.network.v2019_12_01.models.WebApplicationFirewallPolicy]
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
def prepare_request(next_link=None):
if not next_link:
# Construct URL
url = self.list.metadata['url']
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
return request
def internal_paging(next_link=None):
request = prepare_request(next_link)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
# Deserialize response
header_dict = None
if raw:
header_dict = {}
deserialized = models.WebApplicationFirewallPolicyPaged(internal_paging, self._deserialize.dependencies, header_dict)
return deserialized
list.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies'}
def list_all(
self, custom_headers=None, raw=False, **operation_config):
"""Gets all the WAF policies in a subscription.
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: An iterator like instance of WebApplicationFirewallPolicy
:rtype:
~azure.mgmt.network.v2019_12_01.models.WebApplicationFirewallPolicyPaged[~azure.mgmt.network.v2019_12_01.models.WebApplicationFirewallPolicy]
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
def prepare_request(next_link=None):
if not next_link:
# Construct URL
url = self.list_all.metadata['url']
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
return request
def internal_paging(next_link=None):
request = prepare_request(next_link)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
# Deserialize response
header_dict = None
if raw:
header_dict = {}
deserialized = models.WebApplicationFirewallPolicyPaged(internal_paging, self._deserialize.dependencies, header_dict)
return deserialized
list_all.metadata = {'url': '/subscriptions/{subscriptionId}/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies'}
def get(
self, resource_group_name, policy_name, custom_headers=None, raw=False, **operation_config):
"""Retrieve protection policy with specified name within a resource group.
:param resource_group_name: The name of the resource group.
:type resource_group_name: str
:param policy_name: The name of the policy.
:type policy_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: WebApplicationFirewallPolicy or ClientRawResponse if raw=true
:rtype:
~azure.mgmt.network.v2019_12_01.models.WebApplicationFirewallPolicy or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.get.metadata['url']
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'policyName': self._serialize.url("policy_name", policy_name, 'str', max_length=128),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('WebApplicationFirewallPolicy', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/{policyName}'}
def create_or_update(
self, resource_group_name, policy_name, parameters, custom_headers=None, raw=False, **operation_config):
"""Creates or update policy with specified rule set name within a resource
group.
:param resource_group_name: The name of the resource group.
:type resource_group_name: str
:param policy_name: The name of the policy.
:type policy_name: str
:param parameters: Policy to be created.
:type parameters:
~azure.mgmt.network.v2019_12_01.models.WebApplicationFirewallPolicy
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: WebApplicationFirewallPolicy or ClientRawResponse if raw=true
:rtype:
~azure.mgmt.network.v2019_12_01.models.WebApplicationFirewallPolicy or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.create_or_update.metadata['url']
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'policyName': self._serialize.url("policy_name", policy_name, 'str', max_length=128),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(parameters, 'WebApplicationFirewallPolicy')
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters, body_content)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200, 201]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('WebApplicationFirewallPolicy', response)
if response.status_code == 201:
deserialized = self._deserialize('WebApplicationFirewallPolicy', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
create_or_update.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/{policyName}'}
def _delete_initial(
self, resource_group_name, policy_name, custom_headers=None, raw=False, **operation_config):
# Construct URL
url = self.delete.metadata['url']
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'policyName': self._serialize.url("policy_name", policy_name, 'str', max_length=128),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.delete(url, query_parameters, header_parameters)
import uuid
from msrest.pipeline import ClientRawResponse
from msrestazure.azure_exceptions import CloudError
from msrest.polling import LROPoller, NoPolling
from msrestazure.polling.arm_polling import ARMPolling
from .. import models
class WebApplicationFirewallPoliciesOperations(object):
models = models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self.api_version = "2019-12-01"
self.config = config
def list(
self, resource_group_name, custom_headers=None, raw=False, **operation_config):
def prepare_request(next_link=None):
if not next_link:
url = self.list.metadata['url']
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
request = self._client.get(url, query_parameters, header_parameters)
return request
def internal_paging(next_link=None):
request = prepare_request(next_link)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
header_dict = None
if raw:
header_dict = {}
deserialized = models.WebApplicationFirewallPolicyPaged(internal_paging, self._deserialize.dependencies, header_dict)
return deserialized
list.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies'}
def list_all(
self, custom_headers=None, raw=False, **operation_config):
def prepare_request(next_link=None):
if not next_link:
url = self.list_all.metadata['url']
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
request = self._client.get(url, query_parameters, header_parameters)
return request
def internal_paging(next_link=None):
request = prepare_request(next_link)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
header_dict = None
if raw:
header_dict = {}
deserialized = models.WebApplicationFirewallPolicyPaged(internal_paging, self._deserialize.dependencies, header_dict)
return deserialized
list_all.metadata = {'url': '/subscriptions/{subscriptionId}/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies'}
def get(
self, resource_group_name, policy_name, custom_headers=None, raw=False, **operation_config):
url = self.get.metadata['url']
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'policyName': self._serialize.url("policy_name", policy_name, 'str', max_length=128),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
request = self._client.get(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('WebApplicationFirewallPolicy', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/{policyName}'}
def create_or_update(
self, resource_group_name, policy_name, parameters, custom_headers=None, raw=False, **operation_config):
url = self.create_or_update.metadata['url']
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'policyName': self._serialize.url("policy_name", policy_name, 'str', max_length=128),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
header_parameters = {}
header_parameters['Accept'] = 'application/json'
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
body_content = self._serialize.body(parameters, 'WebApplicationFirewallPolicy')
request = self._client.put(url, query_parameters, header_parameters, body_content)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200, 201]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('WebApplicationFirewallPolicy', response)
if response.status_code == 201:
deserialized = self._deserialize('WebApplicationFirewallPolicy', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
create_or_update.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/{policyName}'}
def _delete_initial(
self, resource_group_name, policy_name, custom_headers=None, raw=False, **operation_config):
url = self.delete.metadata['url']
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'policyName': self._serialize.url("policy_name", policy_name, 'str', max_length=128),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
header_parameters = {}
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200, 202, 204]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
    def delete(
            self, resource_group_name, policy_name, custom_headers=None, raw=False, polling=True, **operation_config):
        """Deletes Policy.
        :param resource_group_name: The name of the resource group.
        :type resource_group_name: str
        :param policy_name: The name of the policy.
        :type policy_name: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: The poller return type is ClientRawResponse, the
         direct response alongside the deserialized response
        :param polling: True for ARMPolling, False for no polling, or a
         polling object for personal polling strategy
        :return: An instance of LROPoller that returns None or
         ClientRawResponse<None> if raw==True
        :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or
         ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]
        :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
        """
raw_result = self._delete_initial(
resource_group_name=resource_group_name,
policy_name=policy_name,
custom_headers=custom_headers,
raw=True,
**operation_config
)
def get_long_running_output(response):
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
lro_delay = operation_config.get(
'long_running_operation_timeout',
self.config.long_running_operation_timeout)
if polling is True: polling_method = ARMPolling(lro_delay, **operation_config)
elif polling is False: polling_method = NoPolling()
else: polling_method = polling
return LROPoller(self._client, raw_result, get_long_running_output, polling_method)
delete.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/{policyName}'}
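The `delete()` wrapper above dispatches on its `polling` argument: `True` selects `ARMPolling`, `False` selects a no-op poller, and any other value is used as a caller-supplied polling strategy. A minimal, self-contained sketch of that dispatch — the poller classes here are stand-ins for msrest's real types, not the SDK implementation:

```python
class ARMPolling:
    """Stand-in for msrest's ARM long-running-operation poller."""
    def __init__(self, delay):
        self.delay = delay

class NoPolling:
    """Stand-in poller that returns the initial response unchanged."""

def choose_polling_method(polling, lro_delay):
    # Mirrors the dispatch in delete(): True -> ARMPolling, False -> NoPolling,
    # anything else is assumed to be a caller-supplied polling strategy.
    if polling is True:
        return ARMPolling(lro_delay)
    if polling is False:
        return NoPolling()
    return polling

# The default (polling=True) selects ARMPolling with the configured delay.
method = choose_polling_method(True, 30)
print(type(method).__name__)  # -> ARMPolling
```

Passing a custom poller object through unchanged is what lets callers plug in their own retry/backoff strategy.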
f7208a0bce6ed5a17f46fdcb513416605ec0135f | 3,249 | py | Python | cfgov/v1/models/browse_page.py | atuggle/cfgov-refresh | 5a9cfd92b460b9be7befb39f5845abf56857aeac | [
"CC0-1.0"] | stars: null | issues: 1 (2016-09-14) | forks: null
from django.db import models
from wagtail.wagtailadmin.edit_handlers import (
FieldPanel, ObjectList, StreamFieldPanel, TabbedInterface
)
from wagtail.wagtailcore import blocks
from wagtail.wagtailcore.fields import StreamField
from wagtail.wagtailcore.models import PageManager
from data_research.blocks import (
ConferenceRegistrationForm, MortgageDataDownloads
)
from jobmanager.models import JobListingTable
from v1 import blocks as v1_blocks
from v1.atomic_elements import molecules, organisms
from v1.models.base import CFGOVPage
from v1.util.util import get_secondary_nav_items
class BrowsePage(CFGOVPage):
header = StreamField([
('text_introduction', molecules.TextIntroduction()),
('featured_content', molecules.FeaturedContent()),
], blank=True)
content = StreamField([
('bureau_structure', organisms.BureauStructure()),
('info_unit_group', organisms.InfoUnitGroup()),
('well', organisms.Well()),
('full_width_text', organisms.FullWidthText()),
('expandable', organisms.Expandable()),
('expandable_group', organisms.ExpandableGroup()),
('table_block', organisms.AtomicTableBlock(
table_options={'renderer': 'html'})),
('job_listing_table', JobListingTable()),
('feedback', v1_blocks.Feedback()),
('conference_registration_form', ConferenceRegistrationForm()),
('raw_html_block', blocks.RawHTMLBlock(
label='Raw HTML block')),
('html_block', organisms.HTMLBlock()),
('chart_block', organisms.ChartBlock()),
('mortgage_chart_block', organisms.MortgageChartBlock()),
('mortgage_map_block', organisms.MortgageMapBlock()),
('mortgage_downloads_block', MortgageDataDownloads()),
('snippet_list', organisms.SnippetList()),
('data_snapshot', organisms.DataSnapshot()),
('image_text_25_75_group', organisms.ImageText2575Group()),
('image_text_50_50_group', organisms.ImageText5050Group()),
('half_width_link_blob_group', organisms.HalfWidthLinkBlobGroup()),
('third_width_link_blob_group', organisms.ThirdWidthLinkBlobGroup()),
], blank=True)
secondary_nav_exclude_sibling_pages = models.BooleanField(default=False)
# General content tab
content_panels = CFGOVPage.content_panels + [
StreamFieldPanel('header'),
StreamFieldPanel('content'),
]
sidefoot_panels = CFGOVPage.sidefoot_panels + [
FieldPanel('secondary_nav_exclude_sibling_pages'),
]
# Tab handler interface
edit_handler = TabbedInterface([
ObjectList(content_panels, heading='General Content'),
ObjectList(sidefoot_panels, heading='Sidebar'),
ObjectList(CFGOVPage.settings_panels, heading='Configuration'),
])
template = 'browse-basic/index.html'
objects = PageManager()
@property
def page_js(self):
return (
super(BrowsePage, self).page_js + ['secondary-navigation.js']
)
def get_context(self, request, *args, **kwargs):
context = super(BrowsePage, self).get_context(request, *args, **kwargs)
context.update({'get_secondary_nav_items': get_secondary_nav_items})
return context
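`BrowsePage.get_context` follows the usual Wagtail pattern: call `super()` for the base context, then layer page-specific entries on top. The same chaining works in plain Python — the `Page`/`BrowsePage` classes below are hypothetical stand-ins so the sketch runs without Django or Wagtail:

```python
class Page:
    def get_context(self, request):
        # Base context, as CFGOVPage would provide it.
        return {"page": self, "request": request}

class BrowsePage(Page):
    def get_context(self, request):
        # Start from the parent context, then add page-specific helpers,
        # mirroring the CFGOVPage/BrowsePage chain above.
        context = super().get_context(request)
        context.update({"get_secondary_nav_items": lambda: []})
        return context

ctx = BrowsePage().get_context(request="fake-request")
```

Updating the parent's dict rather than building a fresh one keeps everything the base class exposes available to the template.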
f7208bc906110bb2c8cae40a156edcf4c7547c8c | 7,727 | py | Python | plugins/opencv/src/opencv/__init__.py | IGx89/scrypted | 577b00a090393f31aaa81de67f5fd4555995921a | [
"MIT"] | stars: null | issues: null | forks: null
from __future__ import annotations
from time import sleep
from detect import DetectionSession, DetectPlugin
from typing import Any, List
import numpy as np
import cv2
import imutils
from gi.repository import GLib, Gst
from scrypted_sdk.types import ObjectDetectionModel, ObjectDetectionResult, ObjectsDetected
class OpenCVDetectionSession(DetectionSession):
cap: cv2.VideoCapture
previous_frame: Any
def __init__(self) -> None:
super().__init__()
self.previous_frame = None
self.cap = None
defaultThreshold = 25
defaultArea = 2000
defaultInterval = 250
class OpenCVPlugin(DetectPlugin):
def __init__(self, nativeId: str | None = None):
super().__init__(nativeId=nativeId)
self.color2Gray = None
self.pixelFormat = "I420"
self.pixelFormatChannelCount = 1
if True:
self.retainAspectRatio = False
self.color2Gray = None
self.pixelFormat = "I420"
self.pixelFormatChannelCount = 1
else:
self.retainAspectRatio = True
self.color2Gray = cv2.COLOR_BGRA2GRAY
self.pixelFormat = "BGRA"
self.pixelFormatChannelCount = 4
async def getDetectionModel(self) -> ObjectDetectionModel:
d: ObjectDetectionModel = {
'name': '@scrypted/opencv',
'classes': ['motion'],
}
settings = [
{
'title': "Motion Area",
'description': "The area size required to trigger motion. Higher values (larger areas) are less sensitive. Setting this to 0 will output all matches into the console.",
'value': defaultArea,
'key': 'area',
'placeholder': defaultArea,
'type': 'number',
},
{
'title': "Motion Threshold",
'description': "The threshold required to consider a pixel changed. Higher values (larger changes) are less sensitive.",
'value': defaultThreshold,
'key': 'threshold',
'placeholder': defaultThreshold,
'type': 'number',
},
{
'title': "Frame Analysis Interval",
'description': "The number of milliseconds to wait between motion analysis.",
'value': defaultInterval,
'key': 'interval',
'placeholder': defaultInterval,
'type': 'number',
},
]
d['settings'] = settings
return d
def get_pixel_format(self):
return self.pixelFormat
def parse_settings(self, settings: Any):
area = defaultArea
threshold = defaultThreshold
interval = defaultInterval
if settings:
area = float(settings.get('area', area))
threshold = int(settings.get('threshold', threshold))
interval = float(settings.get('interval', interval))
return area, threshold, interval
def detect(self, detection_session: OpenCVDetectionSession, frame, settings: Any, src_size, convert_to_src_size) -> ObjectsDetected:
area, threshold, interval = self.parse_settings(settings)
# see get_detection_input_size on undocumented size requirements for GRAY8
if self.color2Gray != None:
gray = cv2.cvtColor(frame, self.color2Gray)
else:
gray = frame
curFrame = cv2.GaussianBlur(gray, (21,21), 0)
if detection_session.previous_frame is None:
detection_session.previous_frame = curFrame
return
frameDelta = cv2.absdiff(detection_session.previous_frame, curFrame)
detection_session.previous_frame = curFrame
_, thresh = cv2.threshold(frameDelta, threshold, 255, cv2.THRESH_BINARY)
dilated = cv2.dilate(thresh, None, iterations=2)
fcontours = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = imutils.grab_contours(fcontours)
detections: List[ObjectDetectionResult] = []
detection_result: ObjectsDetected = {}
detection_result['detections'] = detections
detection_result['inputDimensions'] = src_size
for c in contours:
x, y, w, h = cv2.boundingRect(c)
# if w * h != contour_area:
# print("mismatch w/h", contour_area - w * h)
x2, y2 = convert_to_src_size((x + w, y + h))
x, y = convert_to_src_size((x, y))
w = x2 - x + 1
h = y2 - y + 1
contour_area = w * h
if not area or contour_area > area:
detection: ObjectDetectionResult = {}
detection['boundingBox'] = (x, y, w, h)
detection['className'] = 'motion'
detection['score'] = 1 if area else contour_area
detections.append(detection)
return detection_result
def run_detection_jpeg(self, detection_session: DetectionSession, image_bytes: bytes, min_score: float) -> ObjectsDetected:
raise Exception('can not run motion detection on jpeg')
def get_detection_input_size(self, src_size):
# The initial implementation of this plugin used BGRA
# because it seemed impossible to pull the Y frame out of I420 without corruption.
# This is because while 318x174 is aspect ratio correct,
# it seems to cause strange issues with stride and the image is skewed.
# By using 300x300, this seems to avoid some undocumented minimum size
        # requirement in gst-videoscale or opencv. Unclear which.
# This is the same input size as tensorflow-lite. Allows for better pipelining.
if not self.retainAspectRatio:
return (300, 300)
width, height = src_size
if (width > height):
if (width > 318):
height = height / width * 318
width = 318
else:
if (height > 318):
width = width / height * 318
height = 318
width = int(np.floor(width / 6) * 6)
height = int(np.floor(height / 6) * 6)
return width, height
def end_session(self, detection_session: OpenCVDetectionSession):
if detection_session and detection_session.cap:
detection_session.cap.release()
detection_session.cap = None
return super().end_session(detection_session)
def run_detection_gstsample(self, detection_session: OpenCVDetectionSession, gst_sample, settings: Any, src_size, convert_to_src_size)-> ObjectsDetected:
buf = gst_sample.get_buffer()
caps = gst_sample.get_caps()
# can't trust the width value, compute the stride
height = caps.get_structure(0).get_value('height')
width = caps.get_structure(0).get_value('width')
result, info = buf.map(Gst.MapFlags.READ)
if not result:
return
try:
mat = np.ndarray(
(height,
width,
self.pixelFormatChannelCount),
buffer=info.data,
dtype= np.uint8)
return self.detect(detection_session, mat, settings, src_size, convert_to_src_size)
finally:
buf.unmap(info)
def create_detection_session(self):
return OpenCVDetectionSession()
def detection_event_notified(self, settings: Any):
area, threshold, interval = self.parse_settings(settings)
# it is safe to block here because gstreamer creates a queue thread
sleep(interval / 1000)
return super().detection_event_notified(settings)
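The `detect()` method above boils down to: blur, absolute frame difference, binary threshold, then keep contours whose bounding-box area exceeds the configured minimum. A dependency-free sketch of that core, using plain nested lists instead of numpy/cv2 frames — blurring and contour extraction are omitted, and changed pixels are counted directly as an approximation of the contour-area test:

```python
def detect_motion(prev_frame, cur_frame, threshold=25, area=4):
    """Return True when enough pixels changed between two grayscale frames.

    prev_frame/cur_frame are equal-sized 2-D lists of 0-255 ints.
    threshold mirrors the per-pixel setting above; area is the minimum
    number of changed pixels (a stand-in for the contour-area check).
    """
    changed = 0
    for row_prev, row_cur in zip(prev_frame, cur_frame):
        for p, c in zip(row_prev, row_cur):
            if abs(p - c) > threshold:   # like cv2.absdiff + THRESH_BINARY
                changed += 1
    return changed > area

still = [[10] * 8 for _ in range(8)]
moved = [row[:] for row in still]
for y in range(3, 6):                    # simulate a bright moving blob
    for x in range(3, 6):
        moved[y][x] = 200
print(detect_motion(still, moved))       # -> True
print(detect_motion(still, still))       # -> False
```

The two thresholds play the same roles as the plugin's settings: `threshold` controls per-pixel sensitivity, `area` how much of the frame must change before motion is reported.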
f7208be42fc87aa1346f3bb00981d1c67e5429aa | 1,457 | py | Python | manager.py | Hugh-wong/hydra | 5f2c4770e655d41d3c535f6e3c29ec4848d5d60e | [
"MIT"] | stars: 3 (2017-02-03 to 2019-02-27) | issues: null | forks: 1 (2021-07-12)
# coding=utf-8
import sys
import signal
import time
from multiprocessing import Process
from allocator import Allocator, Event
class Manager(object):
    """A manager manages multiple allocators; when told to stop, it tells each allocator to stop."""
def __init__(self, cfg_list):
self.allocator_list = []
self.event_list = []
for cfg in cfg_list:
event = Event()
cfg.update({'poison': event})
self.allocator_list.append(Allocator(**cfg))
self.event_list.append(event)
def start_all(self):
"""start all the allocators"""
self.process_list = []
for allocator in self.allocator_list:
process = Process(target=allocator.start)
process.start()
self.process_list.append(process)
def stop_all(self, signal, frame):
"""stop all the allocators"""
for event in self.event_list:
event.set()
for process in self.process_list:
process.join()
sys.exit()
@classmethod
def trigger(cls, cfg_list):
"""outer interface"""
manager = cls(cfg_list)
manager.start_all()
signal.signal(signal.SIGINT, manager.stop_all)
signal.signal(signal.SIGTERM, manager.stop_all)
        while True:  # An infinite loop here can cause problems; a finite loop would be safer.
time.sleep(2)
manager.stop_all(None, None)
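`Manager`'s shutdown relies on a poison-pill event per allocator: `stop_all()` sets each `Event`, and each worker is expected to poll it and exit. The same handshake, sketched with `threading` instead of `multiprocessing` so it runs anywhere — the real `Allocator` class from the `allocator` module is not available here:

```python
import threading

def worker(poison, results):
    # Stand-in for Allocator.start(): loop until the poison event is set.
    while not poison.wait(timeout=0.01):
        pass
    results.append("stopped")

events = [threading.Event() for _ in range(3)]
results = []
threads = [threading.Thread(target=worker, args=(e, results)) for e in events]
for t in threads:
    t.start()
for e in events:      # the stop_all() step: set every poison pill...
    e.set()
for t in threads:     # ...then join every worker
    t.join()
print(results)        # -> ['stopped', 'stopped', 'stopped']
```

Setting the event before joining matters: joining first would deadlock, since each worker only exits once its pill is set.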
f7208c7f6012bde16b47b8b7a1531f00d2196076 | 1,084 | py | Python | tests/test_creational/test_prototype.py | smartlegionlab/python-patterns | be898272e4358fa2e60ed9f61ce5ed10aa367e77 | [
"BSD-3-Clause"] | stars: 2 (2021-11-17 to 2022-02-09) | issues: null | forks: null
# -*- coding: utf-8 -*-
# --------------------------------------------------------
# Licensed under the terms of the BSD 3-Clause License
# (see LICENSE for details).
# Copyright © 2018-2021, A.A Suvorov
# All rights reserved.
# --------------------------------------------------------
"""Tests prototype.py"""
from patterns.creational.prototype import Bird
class TestPrototype:
def test_register(self, prototype, bird):
prototype.register('Bird', bird)
assert 'Bird' in prototype._objects
def test_unregister(self, prototype, bird):
prototype.register('Bird', bird)
prototype.unregister('Bird')
assert 'Bird' not in prototype._objects
def test_clone(self, prototype, bird):
prototype.register('Bird', bird)
duck = prototype.clone('Bird', {'name': 'Duck'})
assert isinstance(duck, Bird)
def test_get_attr(self, prototype, bird):
prototype.register('Bird', bird)
duck = prototype.clone('Bird', {'name': 'Duck'})
assert getattr(duck, 'name')
assert duck.name == 'Duck'
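These tests assume `prototype` and `bird` fixtures defined elsewhere in the repo; a minimal sketch of the registry they exercise (hypothetical, not the repo's actual implementation) could look like:

```python
import copy

class Prototype:
    """Minimal prototype registry: store template objects and clone
    them on demand, optionally overriding attributes."""
    def __init__(self):
        self._objects = {}

    def register(self, name, obj):
        self._objects[name] = obj

    def unregister(self, name):
        del self._objects[name]

    def clone(self, name, attrs):
        obj = copy.deepcopy(self._objects[name])
        obj.__dict__.update(attrs)
        return obj

class Bird:
    def __init__(self, name='Bird'):
        self.name = name

prototype = Prototype()
prototype.register('Bird', Bird())
duck = prototype.clone('Bird', {'name': 'Duck'})
print(duck.name)  # Duck
```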
Demo/log/example_log_error_file.py | quecpython/EC100Y-SDK | MIT

import log
log.basicConfig(level=log.ERROR)  # set the log output level
# Get a logger object. If no name is given, the root logger is returned;
# repeated getLogger calls with the same name return the same logger object.
log = log.getLogger("error")
log.error("Test error message!!")
Python3_Data_Structure/32_Python_Amortized_Analysis/00_Python_PAA.py | jmmedel/Python3-Data-Structure-References | MIT
"""
Python - Amortized Analysis
Amortized analysis involves estimating the running time for a sequence of operations in a program without taking the distribution of the input values into consideration. A simple example: finding a value in a sorted list is quicker than in an unsorted list, and if the list is already sorted it does not matter how the data is distributed. The length of the list still matters, of course, since it decides how many steps the algorithm must take to reach the final result.
So if the initial cost of the single step of obtaining a sorted list is high, the cost of every subsequent lookup becomes considerably low. Amortized analysis thus helps us find a bound on the worst-case running time for a whole sequence of operations. There are three approaches to amortized analysis.
Accounting Method − Assign a cost to each operation performed. If the actual operation finishes quicker than the assigned time, positive credit is accumulated in the analysis; in the reverse scenario, negative credit. The accumulated credits are tracked with a stack or tree data structure. Operations carried out early (like sorting the list) have a high amortized cost, while later operations in the sequence have a lower amortized cost because the accumulated credit is spent. The amortized cost is therefore an upper bound on the actual cost.
Potential Method − The saved credit is expressed as a mathematical function (the potential) of the state of the data structure, and the amortized cost of an operation is its actual cost plus the change in potential. When the actual cost exceeds the amortized cost the potential decreases, and that stored potential pays for expensive future operations.
Aggregate analysis − Estimate an upper bound T(n) on the total cost of a sequence of n operations; the amortized cost per operation is then simply T(n)/n.
"""
loadgen/generate_load.py | hythloda/ecommerce-demo | Apache-2.0

import barnum, random, time, json, requests, math, os
from mysql.connector import connect, Error
from kafka import KafkaProducer
# CONFIG
userSeedCount = 10000
itemSeedCount = 1000
purchaseGenCount = 500000
purchaseGenEveryMS = 100
pageviewMultiplier = 75 # Translates to 75x purchases, currently 750/sec or 65M/day
itemInventoryMin = 1000
itemInventoryMax = 5000
itemPriceMin = 5
itemPriceMax = 500
mysqlHost = 'mysql'
mysqlPort = '3306'
mysqlUser = 'root'
mysqlPass = 'debezium'
kafkaHostPort = os.getenv('KAFKA_ADDR', 'kafka:9092')
kafkaTopic = 'pageviews'
debeziumHostPort = 'debezium:8083'
channels = ['organic search', 'paid search', 'referral', 'social', 'display']
categories = ['widgets', 'gadgets', 'doodads', 'clearance']
# INSERT TEMPLATES
item_insert = "INSERT INTO shop.items (name, category, price, inventory) VALUES ( %s, %s, %s, %s )"
user_insert = "INSERT INTO shop.users (email, is_vip) VALUES ( %s, %s )"
purchase_insert = "INSERT INTO shop.purchases (user_id, item_id, quantity, purchase_price) VALUES ( %s, %s, %s, %s )"
#Initialize Debezium (Kafka Connect Component)
requests.post(('http://%s/connectors' % debeziumHostPort),
json={
"name": "mysql-connector",
"config": {
"connector.class": "io.debezium.connector.mysql.MySqlConnector",
"database.hostname": mysqlHost,
"database.port": mysqlPort,
"database.user": mysqlUser,
"database.password": mysqlPass,
"database.server.name": mysqlHost,
"database.server.id": '1234',
"database.history.kafka.bootstrap.servers": kafkaHostPort,
"database.history.kafka.topic": "mysql-history",
"time.precision.mode": "connect"
}
}
)
#Initialize Kafka
producer = KafkaProducer(bootstrap_servers=[kafkaHostPort],
value_serializer=lambda x:
json.dumps(x).encode('utf-8'))
def generatePageview(viewer_id, target_id, page_type):
return {
"user_id": viewer_id,
"url": f'/{page_type}/{target_id}',
"channel": random.choice(channels),
"received_at": int(time.time())
}
try:
with connect(
host=mysqlHost,
user=mysqlUser,
password=mysqlPass,
) as connection:
with connection.cursor() as cursor:
print("Initializing shop database...")
cursor.execute('CREATE DATABASE IF NOT EXISTS shop;')
cursor.execute(
"""CREATE TABLE IF NOT EXISTS shop.users
(
id SERIAL PRIMARY KEY,
email VARCHAR(255),
is_vip BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);"""
)
cursor.execute(
"""CREATE TABLE IF NOT EXISTS shop.items
(
id SERIAL PRIMARY KEY,
name VARCHAR(100),
category VARCHAR(100),
price DECIMAL(7,2),
inventory INT,
inventory_updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);"""
)
cursor.execute(
"""CREATE TABLE IF NOT EXISTS shop.purchases
(
id SERIAL PRIMARY KEY,
user_id BIGINT UNSIGNED REFERENCES user(id),
item_id BIGINT UNSIGNED REFERENCES item(id),
status TINYINT UNSIGNED DEFAULT 1,
quantity INT UNSIGNED DEFAULT 1,
purchase_price DECIMAL(12,2),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);"""
)
connection.commit()
print("Seeding data...")
cursor.executemany(
item_insert,
[
(
barnum.create_nouns(),
random.choice(categories),
random.randint(itemPriceMin*100,itemPriceMax*100)/100,
random.randint(itemInventoryMin,itemInventoryMax)
) for i in range(itemSeedCount)
]
)
cursor.executemany(
user_insert,
[
(
barnum.create_email(),
(random.randint(0,10) > 8)
) for i in range(userSeedCount)
]
)
connection.commit()
print("Getting item ID and PRICEs...")
cursor.execute("SELECT id, price FROM shop.items")
item_prices = [(row[0], row[1]) for row in cursor]
print("Preparing to loop + seed kafka pageviews and purchases")
for i in range(purchaseGenCount):
# Get a user and item to purchase
purchase_item = random.choice(item_prices)
purchase_user = random.randint(0,userSeedCount-1)
purchase_quantity = random.randint(1,5)
# Write purchaser pageview
producer.send(kafkaTopic, key=str(purchase_user).encode('ascii'), value=generatePageview(purchase_user, purchase_item[0], 'products'))
# Write random pageviews to products or profiles
pageviewOscillator = int(pageviewMultiplier + (math.sin(time.time()/1000)*50))
                for _ in range(pageviewOscillator):  # '_' avoids shadowing the outer loop variable i
rand_user = random.randint(0,userSeedCount)
rand_page_type = random.choice(['products', 'profiles'])
target_id_max_range = itemSeedCount if rand_page_type == 'products' else userSeedCount
producer.send(kafkaTopic, key=str(rand_user).encode('ascii'), value=generatePageview(rand_user, random.randint(0,target_id_max_range), rand_page_type))
# Write purchase row
cursor.execute(
purchase_insert,
(
purchase_user,
purchase_item[0],
purchase_quantity,
purchase_item[1] * purchase_quantity
)
)
connection.commit()
#Pause
time.sleep(purchaseGenEveryMS/1000)
connection.close()
except Error as e:
print(e)
gcn/lp.py | liqimai/GraphConvForSSL | MIT

import numpy as np
from gcn.graphconv import ap_approximate
def Model17(adj, alpha, y_train, y_test):
k = int(np.ceil(4 * alpha))
prediction, time = ap_approximate(adj, y_train, alpha, k)
predicted_labels = np.argmax(prediction, axis=1)
prediction = np.zeros(prediction.shape)
prediction[np.arange(prediction.shape[0]), predicted_labels] = 1
test_acc = np.sum(prediction * y_test) / np.sum(y_test)
test_acc_of_class = np.sum(prediction * y_test, axis=0) / np.sum(y_test, axis=0)
return test_acc, test_acc_of_class
learning_greek/signals.py | lucafavatella/learning-greek | MIT

import django.dispatch
adoption_level_change = django.dispatch.Signal(providing_args=["level", "request"])
blurb_read = django.dispatch.Signal(providing_args=["request"])
test/test_sync_reports_rotate.py | Atomicology/isilon_sdk_python | MIT

# coding: utf-8
"""
Copyright 2016 SmartBear Software
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
ref: https://github.com/swagger-api/swagger-codegen
"""
from __future__ import absolute_import
import os
import sys
import unittest
import swagger_client
from swagger_client.rest import ApiException
from swagger_client.models.sync_reports_rotate import SyncReportsRotate
class TestSyncReportsRotate(unittest.TestCase):
""" SyncReportsRotate unit test stubs """
def setUp(self):
pass
def tearDown(self):
pass
def testSyncReportsRotate(self):
"""
Test SyncReportsRotate
"""
model = swagger_client.models.sync_reports_rotate.SyncReportsRotate()
if __name__ == '__main__':
    unittest.main()
BioSTEAM 2.x.x/biorefineries/TAL/system_TAL_glucose.py | yoelcortes/Bioindustrial-Complex | MIT

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Bioindustrial-Park: BioSTEAM's Premier Biorefinery Models and Results
# Copyright (C) 2022-2023, Sarang Bhagwat <sarangb2@illinois.edu> (this biorefinery)
#
# This module is under the UIUC open-source license. See
# github.com/BioSTEAMDevelopmentGroup/biosteam/blob/master/LICENSE.txt
# for license details.
"""
@author: sarangbhagwat
Created on Sun Aug 23 12:11:15 2020
This module is a modified implementation of modules from the following:
[1] Bhagwat et al., Sustainable Production of Acrylic Acid via 3-Hydroxypropionic Acid from Lignocellulosic Biomass. ACS Sustainable Chem. Eng. 2021, 9 (49), 16659–16669. https://doi.org/10.1021/acssuschemeng.1c05441
[2] Li et al., Sustainable Lactic Acid Production from Lignocellulosic Biomass. ACS Sustainable Chem. Eng. 2021, 9 (3), 1341–1351. https://doi.org/10.1021/acssuschemeng.0c08055
[3] Cortes-Peña et al., BioSTEAM: A Fast and Flexible Platform for the Design, Simulation, and Techno-Economic Analysis of Biorefineries under Uncertainty. ACS Sustainable Chem. Eng. 2020, 8 (8), 3302–3310. https://doi.org/10.1021/acssuschemeng.9b07040
All units are explicitly defined here for transparency and easy reference.
Naming conventions:
D = Distillation column
AC = Adsorption column
F = Flash tank or multiple-effect evaporator
H = Heat exchange
M = Mixer
P = Pump (including conveying belt)
R = Reactor
S = Splitter (including solid/liquid separator)
T = Tank or bin for storage
U = Other units
Processes:
100: Feedstock preprocessing
200: Pretreatment
300: Conversion
400: Separation
500: Wastewater treatment
600: Storage
700: Co-heat and power
800: Cooling utility generation
900: Miscellaneous facilities
1000: Heat exchanger network
"""
# %% Setup
import biosteam as bst
import thermosteam as tmo
import flexsolve as flx
import numpy as np
from math import exp as math_exp
# from biosteam import main_flowsheet as F
# from copy import deepcopy
# from biosteam import System
from thermosteam import Stream
# from biorefineries.cornstover import CellulosicEthanolTEA
from biorefineries.TAL import units, facilities
from biorefineries.TAL._process_specification import ProcessSpecification
from biorefineries.TAL.process_settings import price, CFs
from biorefineries.TAL.utils import find_split, splits_df, baseline_feedflow
from biorefineries.TAL.chemicals_data import TAL_chemicals, chemical_groups, \
soluble_organics, combustibles
# from biorefineries.TAL.tea import TALTEA
from biorefineries.cornstover import CellulosicEthanolTEA as TALTEA
from biosteam import SystemFactory
from warnings import filterwarnings
filterwarnings('ignore')
Rxn = tmo.reaction.Reaction
ParallelRxn = tmo.reaction.ParallelReaction
# from lactic.hx_network import HX_Network
# # Do this to be able to show more streams in a diagram
# bst.units.Mixer._graphics.edge_in *= 2
bst.speed_up()
flowsheet = bst.Flowsheet('TAL')
bst.main_flowsheet.set_flowsheet(flowsheet)
# Speeds up ShortcutDistillation
bst.units.ShortcutColumn.minimum_guess_distillate_recovery = 0
# Baseline cost year is 2016
bst.CE = 541.7
# _labor_2007to2016 = 22.71 / 19.55
# Set default thermo object for the system
tmo.settings.set_thermo(TAL_chemicals)
# %% Utils
R = 8.314
TAL_Hm = 30883.66976 # by Dannenfelser-Yalkowsky method
TAL_Tm = TAL_chemicals['TAL'].Tm # 458.15 K
TAL_c = 6056.69421768496 # fitted parameter
TAL_c_by_R = TAL_c/R
TAL_Hm_by_R = TAL_Hm/R
def get_TAL_solubility_in_water(T): # mol TAL : mol (TAL+water)
return math_exp(-(TAL_Hm_by_R) * (1/T - 1/TAL_Tm))/math_exp(TAL_c_by_R/T)
def get_mol_TAL_dissolved(T, mol_water):
TAL_x = get_TAL_solubility_in_water(T)
return mol_water*TAL_x/(1-TAL_x)
def get_TAL_solubility_in_water_gpL(T):
return get_mol_TAL_dissolved(T, 1000./18.)*TAL_chemicals['TAL'].MW
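The van't Hoff-style solubility correlation above can be exercised standalone; the sketch below copies this module's constants and assumes a TAL (C6H6O3) molecular weight of 126.11 g/mol in place of the `TAL_chemicals` lookup:

```python
from math import exp

R = 8.314                  # gas constant, J/(mol*K)
TAL_Hm = 30883.66976       # enthalpy of fusion, J/mol (Dannenfelser-Yalkowsky)
TAL_Tm = 458.15            # melting point, K
TAL_c = 6056.69421768496   # fitted activity parameter
TAL_MW = 126.11            # g/mol for C6H6O3 (assumed, replaces TAL_chemicals lookup)

def TAL_mole_fraction_solubility(T):
    # x_TAL = exp(-(Hm/R)*(1/T - 1/Tm)) / exp((c/R)/T)
    return exp(-(TAL_Hm/R) * (1/T - 1/TAL_Tm)) / exp((TAL_c/R)/T)

def TAL_solubility_gpL(T):
    x = TAL_mole_fraction_solubility(T)
    mol_water_per_L = 1000./18.
    return mol_water_per_L * x/(1. - x) * TAL_MW

print(TAL_solubility_gpL(303.15))  # roughly 10 g/L near 30 degC with these constants
```

As expected from the exponential form, the predicted solubility rises steeply with temperature, which is why the flowsheet heats the broth (H401) before crystallizing TAL by cooling.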
def get_K(chem_ID, stream, phase_1, phase_2):
return (stream[phase_1].imol[chem_ID]/stream[phase_1].F_mol)/max(1e-6, (stream[phase_2].imol[chem_ID]/stream[phase_2].F_mol))
def get_TAL_solublity_in_solvent_very_rough(T, solvent_ID='Hexanol', units='g/L'):
temp_stream =\
tmo.Stream('temp_stream_get_TAL_solublity_in_solvent_very_rough')
mol_water = mol_solvent = 1000
mol_TAL = get_mol_TAL_dissolved(T, mol_water)
temp_stream.imol['Water'] = mol_water
temp_stream.imol[solvent_ID] = mol_solvent
temp_stream.imol['TAL'] = mol_TAL
temp_stream.lle(T=T, P=temp_stream.P)
# temp_stream.show(N=100)
phase_1 = 'l' if temp_stream.imol['l', solvent_ID] > temp_stream.imol['L', solvent_ID] else 'L'
phase_2 = 'L' if phase_1=='l' else 'l'
K_TAL_in_extract = get_K('TAL', temp_stream, phase_1, phase_2)
# print(K_TAL_in_extract)
if units=='g/L':
temp_stream_2 = tmo.Stream('temp_stream_2_get_TAL_solublity_in_solvent_very_rough')
temp_stream_2.imol['TAL'] = K_TAL_in_extract*mol_TAL
temp_stream_2.imol[solvent_ID] = mol_solvent
return temp_stream_2.imass['TAL']/temp_stream_2.F_vol
elif units=='mol/mol':
return K_TAL_in_extract*mol_TAL/(mol_TAL+mol_solvent) #
def get_TAL_solubility_in_hexanol():
return 2.*0.0222/(2.*0.0222+0.951) # mol/mol; 2 * Marco's initial experimental solubility of 2.8 wt% at 21 C
def get_TAL_solubility_in_ethanol_ww():
return 0.167682 # solubility of 157.425 g-TAL per L-ethanol
# %%
@SystemFactory(ID = 'TAL_sys')
def create_TAL_sys(ins, outs):
# %%
# =============================================================================
# Feedstock
# =============================================================================
# feedstock = Stream('feedstock',
# baseline_feedflow.copy(),
# units='kg/hr',
# price=price['Feedstock'])
feedstock = Stream('feedstock')
feedstock.imass['Glucose'] = 29000.
feedstock.imass['H2O'] = 500.
feedstock.price = price['Glucose']*feedstock.imass['Glucose']/feedstock.F_mass
feedstock.F_mass = 25802.9 # at the baseline, the amount of TAL produced would exactly satisfy the US demand for sorbic acid with a hypothetical 100% TAL->sorbic acid conversion.
U101 = units.FeedstockPreprocessing('U101', ins=feedstock)
# Handling costs/utilities included in feedstock cost thus not considered here
U101.cost_items['System'].cost = 0
U101.cost_items['System'].kW = 0
# %%
# =============================================================================
# Conversion streams
# =============================================================================
# Flow and price will be updated in EnzymeHydrolysateMixer
enzyme = Stream('enzyme', units='kg/hr', price=price['Enzyme'])
# Used to adjust enzymatic hydrolysis solid loading, will be updated in EnzymeHydrolysateMixer
enzyme_water = Stream('enzyme_water', units='kg/hr')
# Corn steep liquor as nitrogen nutrient for microbes, flow updated in R301
CSL = Stream('CSL', units='kg/hr')
# Lime for neutralization of produced acid
# fermentation_lime = Stream('fermentation_lime', units='kg/hr')
# For diluting concentrated, inhibitor-reduced hydrolysate
dilution_water = Stream('dilution_water', units='kg/hr')
# =============================================================================
# Conversion units
# =============================================================================
# Cool hydrolysate down to fermentation temperature at 50°C
H301 = bst.units.HXutility('H301', ins=U101-0, T=50+273.15)
M304 = bst.units.Mixer('M304', ins=(H301-0, dilution_water))
M304.water_to_sugar_mol_ratio = 5.
@M304.add_specification()
def adjust_M304_water():
M304_ins_1 = M304.ins[1]
M304_ins_1.imol['Water'] = M304.water_to_sugar_mol_ratio * M304.ins[0].imol['Glucose', 'Xylose'].sum()
M304._run()
# M304.specification = adjust_M304_water()
M304_H = bst.units.HXutility('M304_H', ins=M304-0, T=30+273.15, rigorous=True)
# Mix pretreatment hydrolysate/enzyme mixture with fermentation seed
S302 = bst.Splitter('S302', ins=M304_H-0,
outs = ('to_seedtrain', 'to_cofermentation'),
split = 0.07) # split = inoculum ratio
# Cofermentation
R302 = units.CoFermentation('R302',
ins=(S302-1, '', CSL),
outs=('fermentation_effluent', 'CO2_fermentation'))
def include_seed_CSL_in_cofermentation(): # note: effluent always has 0 CSL
R302._run()
R302.ins[2].F_mass*=1./(1-S302.split[0])
R302.specification = include_seed_CSL_in_cofermentation
# ferm_ratio is the ratio of conversion relative to the fermenter
R303 = units.SeedTrain('R303', ins=S302-0, outs=('seed', 'CO2_seedtrain'), ferm_ratio=0.95)
T301 = units.SeedHoldTank('T301', ins=R303-0, outs=1-R302)
# %%
# =============================================================================
# Separation streams
# =============================================================================
# This flow will be automatically updated in CellMassFilter
# separation_sulfuric_acid = Stream('separation_sulfuric_acid', units='kg/hr')
# # # To be mixed with sulfuric acid, will be updated in SulfuricAdditionTank
# # separation_acid_water = Stream('separation_acid_water', units='kg/hr')
# separation_DPHP = Stream('DPHP', DPHP =feedstock_dry_mass*22.1/1000*0.93,
# H2O=feedstock_dry_mass*22.1/1000*0.07, units='kg/hr')
# # Ethanol for esterification reaction, will be updated in the EsterificationReactor
# separation_ethanol = Stream('separation_ethanol', Ethanol=feedstock_dry_mass*22.1/1000*0.93,
# H2O=feedstock_dry_mass*22.1/1000*0.07, units='kg/hr')
# For ester hydrolysis
# separation_hydrolysis_water = Stream('separation_hydrolysis_water', units='kg/hr')
Hexanol_minimal = Stream('Hexanol_minimal', units = 'kg/hr')
# Heptane = Stream('Heptane', units = 'kg/hr')
# Toluene = Stream('Toluene', units = 'kg/hr')
# Hexanol_s = Stream('Hexanol_s', units = 'kg/hr')
Heptane_s = Stream('Heptane_s', units = 'kg/hr')
Toluene_s = Stream('Toluene_s', units = 'kg/hr')
Hydrogen = Stream('Hydrogen', units = 'kg/hr')
KOH = Stream('KOH', units = 'kg/hr')
HCl = Stream('HCl', units = 'kg/hr')
# =============================================================================
# Separation units
# =============================================================================
# Fake unit to enable solid-liquid equilibrium for fermentation broth
U401 = bst.Unit('U401', ins=R302-0, outs=('fermentation_broth_first_sle'))
def U401_spec():
U401_ins_0 = U401.ins[0]
tot_TAL = U401_ins_0.imol['TAL']
U401_outs_0 = U401.outs[0]
U401_outs_0.copy_like(U401_ins_0)
mol_TAL_dissolved = get_mol_TAL_dissolved(U401_outs_0.T, U401_outs_0.imol['Water'])
# U401_outs_0.sle('TAL', U401_outs_0.T) #!!! TODO: use computationally cheaper way of changing from Stream to MultiStream
U401_outs_0.phases=['s', 'l']
U401_outs_0.imol['l', 'TAL'] = min(mol_TAL_dissolved, tot_TAL)
U401_outs_0.imol['s', 'TAL'] = tot_TAL - min(mol_TAL_dissolved, tot_TAL)
U401.specification = U401_spec
# Change broth temperature to adjust TAL solubility
H401 = bst.HXutility('H401', ins=U401-0, outs=('H401_0'), T=273.15+70.)
def H401_spec():
H401_ins_0 = H401.ins[0]
H401_ins_0_water=H401_ins_0.imol['Water']
tot_TAL = H401_ins_0.imol['TAL']
# H401_spec_obj_fn = lambda T: get_TAL_solubility_in_water_gpL(T) -\
# H401_ins_0.imass['TAL']/H401_ins_0.F_vol
H401_spec_obj_fn = lambda T: get_mol_TAL_dissolved(T, H401_ins_0_water) - tot_TAL
H401.T = flx.IQ_interpolation(H401_spec_obj_fn, H401.ins[0].T, 99.+273.15)
H401._run()
H401_outs_0 = H401.outs[0]
mol_TAL_dissolved = get_mol_TAL_dissolved(H401_outs_0.T, H401_outs_0.imol['Water'])
# H401_outs_0.sle('TAL', H401_outs_0.T) #!!! TODO: use computationally cheaper way of changing from Stream to MultiStream
H401_outs_0.phases = ('l', 's')
H401_outs_0.imol['l', 'TAL'] = min(mol_TAL_dissolved, tot_TAL)
H401_outs_0.imol['s', 'TAL'] = max(0., round(tot_TAL - min(mol_TAL_dissolved, tot_TAL), 5))
# H401_outs_0.imol['s', 'TAL'] = max(0.0001, tot_TAL - min(mol_TAL_dissolved, tot_TAL))
# if H401_outs_0.imol['s', 'TAL'] == 0.0001:
# H401_ins_0.imol['s', 'TAL'] += 0.0001
H401.specification = H401_spec
U402 = bst.FakeSplitter('U402', ins=H401-0, outs = ('thermally_decarboxylated_broth','vented_CO2'))
U402.decarboxylation_rxns = ParallelRxn([
Rxn('TAL + H2O -> PD + CO2', 'TAL', 0.25),
])
def get_TAL_decarboxylation_conversion(T=273.15+80.):
        return (0.2*(T-273.15) + 8.)/100. # temporary
def U402_spec():
U402_outs_0 = U402.outs[0]
U402_outs_0.copy_like(U402.ins[0])
U402_outs_0.phases = ('l', 's')
U402.decarboxylation_rxns[0].X = get_TAL_decarboxylation_conversion(T=U402_outs_0.T)
U402.decarboxylation_rxns[0](U402_outs_0['l'])
U402.outs[1].imol['CO2'] = U402_outs_0.imol['l', 'CO2']
U402.outs[1].phase = 'g'
U402_outs_0.imol['l', 'CO2'] = 0.
U402.specification = U402_spec
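U402's decarboxylation extent is set by the temperature-dependent placeholder correlation above, X = (0.2·(T − 273.15) + 8)/100. Written out as a standalone function:

```python
def tal_decarboxylation_conversion(T):
    """Placeholder linear fit used above: X = (0.2*(T - 273.15) + 8)/100.

    T in kelvin; returns a fractional conversion.
    """
    return (0.2 * (T - 273.15) + 8.) / 100.

# At the default 80 C the placeholder gives 24% conversion.
assert abs(tal_decarboxylation_conversion(273.15 + 80.) - 0.24) < 1e-12
```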
# H401_design_og = H401._design
# def H401_design_modified():
# H401.ins[0].copy_like(U401.ins[0])
# H401.outs[0].copy_like(U401.ins[0])
# H401_design_og()
# Remove solids from fermentation broth, modified from the pressure filter in Humbird et al.
S401_index = [splits_df.index[0]] + splits_df.index[2:].to_list()
S401_cell_mass_split = [splits_df['stream_571'][0]] + splits_df['stream_571'][2:].to_list()
S401_filtrate_split = [splits_df['stream_535'][0]] + splits_df['stream_535'][2:].to_list()
S401 = bst.units.SolidsCentrifuge('S401', ins=U402-0, outs=('S401_solid_fraction', 'S401_liquid_fraction'),
# moisture_content=0.50,
split=find_split(S401_index,
S401_cell_mass_split,
S401_filtrate_split,
chemical_groups),
solids =\
['Xylan', 'Glucan', 'Lignin', 'FermMicrobe',\
'Ash', 'Arabinan', 'Galactan', 'Mannan'])
# def S401_TAL_split_spec():
# S401._run()
# S401_ins_0 = S401.ins[0]
# TOT_TAL = S401_ins_0.imol['TAL']
# dissolved_TAL = get_mol_TAL_dissolved(S401_ins_0.T, S401_ins_0.imol['Water'])
# S401.outs[0].imol['TAL'] = TOT_TAL - dissolved_TAL # crystallized TAL
# S401.outs[1].imol['TAL'] = dissolved_TAL
def S401_TAL_split_spec():
S401._run()
S401_ins_0 = S401.ins[0]
S401.outs[0].imol['TAL'] = S401_ins_0.imol['s', 'TAL']
S401.outs[1].imol['TAL'] = S401_ins_0.imol['l', 'TAL']
S401.specification = S401_TAL_split_spec
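S401's bulk split comes from find_split (defined in the biorefinery's utils), which, as used here, turns two reference stream compositions from Humbird et al. into per-component split fractions. A plausible, self-contained reading of that arithmetic, split_i = top_i/(top_i + bottom_i), with illustrative names:

```python
def split_fractions(top_flows, bottom_flows):
    """Per-component split fraction to the top outlet.

    top_flows/bottom_flows: dicts of component -> reference flow.
    Components with zero total flow get a split of 0.
    """
    splits = {}
    for k in top_flows:
        tot = top_flows[k] + bottom_flows.get(k, 0.)
        splits[k] = top_flows[k] / tot if tot else 0.
    return splits

s = split_fractions({'Lignin': 9.0, 'Water': 1.0}, {'Lignin': 1.0, 'Water': 9.0})
assert abs(s['Lignin'] - 0.9) < 1e-12
assert abs(s['Water'] - 0.1) < 1e-12
```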
H402 = bst.HXutility('H402', ins=S401-1, outs=('H402_0'), T=273.15+1.)
# def HXcrystalize(stream, T=None, H=None, P=None, V=None):
# tot_TAL = stream.imol['TAL']
# mol_TAL_dissolved = get_mol_TAL_dissolved(stream.T, stream.imol['Water'])
# stream.phases = ('s', 'l')
# stream.T = H402.T
# tal_dissolved = min(mol_TAL_dissolved, tot_TAL)
# stream.imol['l', 'TAL'] =
# stream.imol['s', 'TAL'] = max(0.0001, tot_TAL - min(mol_TAL_dissolved, tot_TAL))
def H402_spec():
H402._run()
H402_ins_0 = H402.ins[0]
tot_TAL = H402_ins_0.imol['TAL']
H402_outs_0 = H402.outs[0]
TAL_solubility = get_mol_TAL_dissolved(H402_outs_0.T, H402_outs_0.imol['Water'])
H402_outs_0.phases = ('s', 'l')
H402_outs_0.T = H402.T
TAL_dissolved = min(TAL_solubility, tot_TAL)
H402_outs_0.imol['l', 'TAL'] = TAL_dissolved
H402_outs_0.imol['s', 'TAL'] = max(0, tot_TAL - TAL_dissolved)
# H402_outs_0.imol['s', 'TAL'] = max(0.0001, tot_TAL - TAL_dissolved)
# if H402_outs_0.imol['s', 'TAL'] == 0.0001:
# H402_ins_0.imol['s', 'TAL'] += 0.0001
H402.specification = H402_spec
S402 = bst.units.SolidsCentrifuge('S402', ins=H402-0, outs=('S402_solid_fraction', 'S402_liquid_fraction'),
# moisture_content=0.50,
split=find_split(S401_index,
S401_cell_mass_split,
S401_filtrate_split,
chemical_groups), solids =\
['Xylan', 'Glucan', 'Lignin', 'FermMicrobe',\
'Ash', 'Arabinan', 'Galactan', 'Mannan'])
def S402_TAL_split_spec():
# S402._run()
# S402_ins_0 = S402.ins[0]
# S402_outs_0 = S402.outs[0]
# S402_outs_0.imol['TAL'] = 1.
# S402_outs_0.sle('TAL', S402_outs_0.T) #!!! TODO: use computationally cheaper way of changing from Stream to MultiStream
# S402_outs_0.imol['s', 'TAL'] = S402_ins_0.imol['s', 'TAL']
# S402_outs_0.imol['l', 'TAL'] = 0.
# S402.outs[1].imol['TAL'] = S402_ins_0.imol['l', 'TAL']
S402_ins_0 = S402.ins[0]
solid_TAL = float(S402_ins_0.imol['s', 'TAL'])
S402_ins_0.imol['s', 'TAL'] = 0.
S402._run()
S402.outs[0].imol['TAL'] = solid_TAL
S402.outs[1].imol['TAL'] = S402_ins_0.imol['l', 'TAL']
S402_ins_0.imol['s', 'TAL'] = solid_TAL
S402.specification = S402_TAL_split_spec
H403 = bst.HXutility('H403', ins=S402-0, outs=('heated_TAL'), T=273.15+40.)
F401 = bst.Flash('F401', ins=H403-0, outs = ('volatiles', 'pure_TAL_product'), V = 0.99, P=101325.)
def F401_spec():
F401_ins_0 = F401.ins[0]
F401.V = sum(F401_ins_0.imol['H2O',
'AceticAcid',
'Furfural',
'HMF',]) / F401_ins_0.F_mol
F401._run()
# F401.outs[1].imol['TAL'] = F401.ins[0].imol['TAL']
# F401.outs[0].imol['TAL'] = 0.
F401.specification = F401_spec
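F401_spec sets the flash vapor fraction to the combined mole fraction of the listed volatile components, so that nominally all of them go overhead. As a standalone sketch (names illustrative):

```python
def volatile_vapor_fraction(flows, volatiles):
    """Molar vapor fraction that sends the listed volatiles overhead.

    flows: dict of component -> mol/hr; volatiles: iterable of keys.
    """
    total = sum(flows.values())
    return sum(flows[k] for k in volatiles if k in flows) / total

flows = {'H2O': 70.0, 'AceticAcid': 10.0, 'TAL': 20.0}
assert abs(volatile_vapor_fraction(flows, ('H2O', 'AceticAcid')) - 0.8) < 1e-12
```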
# %%
# =============================================================================
# Wastewater treatment streams
# =============================================================================
# For aerobic digestion, flow will be updated in AerobicDigestion
air_lagoon = Stream('air_lagoon', phase='g', units='kg/hr')
# To neutralize nitric acid formed by nitrification in aerobic digestion
# flow will be updated in AerobicDigestion
# The active chemical is modeled as NaOH, but it is priced lower than NaOH
aerobic_caustic = Stream('aerobic_caustic', units='kg/hr', T=20+273.15, P=2*101325,
price=price['Caustics'])
# =============================================================================
# Wastewater treatment units
# =============================================================================
# Mix waste liquids for treatment
M501 = bst.units.Mixer('M501', ins=(
# F301-1,
S402-1,
F401-0,
# r_S402_s-1, r_S403_s-1, r_S404_s-1,
# X401-1, S408-0,
))
# This represents the total cost of the wastewater treatment system
WWT_cost = units.WastewaterSystemCost('WWTcost501', ins=M501-0)
R501 = units.AnaerobicDigestion('R501', ins=WWT_cost-0,
outs=('biogas', 'anaerobic_treated_water',
'anaerobic_sludge'),
reactants=soluble_organics,
split=find_split(splits_df.index,
splits_df['stream_611'],
splits_df['stream_612'],
chemical_groups),
T=35+273.15)
get_flow_tpd = lambda: (feedstock.F_mass-feedstock.imass['H2O'])*24/907.185
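get_flow_tpd converts the wet feedstock flow (kg/hr) to dry U.S. short tons per day: subtract the water, multiply by 24 hr/day, and divide by 907.185 kg per short ton. Equivalently:

```python
def dry_short_tons_per_day(total_kg_per_hr, water_kg_per_hr):
    """Dry feedstock rate in U.S. (short) tons per day.

    24 hr/day; 907.185 kg per short ton.
    """
    return (total_kg_per_hr - water_kg_per_hr) * 24. / 907.185

# 907.185 kg/hr of bone-dry feed is exactly 24 short tons/day.
assert abs(dry_short_tons_per_day(907.185, 0.) - 24.) < 1e-9
```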
# Mix recycled stream and wastewater after R501
M502 = bst.units.Mixer('M502', ins=(R501-1, ''))
R502 = units.AerobicDigestion('R502', ins=(M502-0, air_lagoon, aerobic_caustic),
outs=('aerobic_vent', 'aerobic_treated_water'),
reactants=soluble_organics,
ratio=get_flow_tpd()/2205)
# Membrane bioreactor to split treated wastewater from R502
S501 = bst.units.Splitter('S501', ins=R502-1, outs=('membrane_treated_water',
'membrane_sludge'),
split=find_split(splits_df.index,
splits_df['stream_624'],
splits_df['stream_625'],
chemical_groups))
S501.line = 'Membrane bioreactor'
# Recycled sludge stream of the membrane bioreactor; the majority of it (96%)
# goes to aerobic digestion and the rest to the sludge holding tank, then to BT
S502 = bst.units.Splitter('S502', ins=S501-1, outs=('to_aerobic_digestion',
'to_boiler_turbogenerator'),
split=0.96)
M503 = bst.units.Mixer('M503', ins=(S502-0, 'centrate'), outs=1-M502)
# Mix anaerobic and 4% of membrane bioreactor sludge
M504 = bst.units.Mixer('M504', ins=(R501-2, S502-1))
# Sludge centrifuge to separate water (centrate) from sludge
S503 = bst.units.Splitter('S503', ins=M504-0, outs=(1-M503, 'sludge'),
split=find_split(splits_df.index,
splits_df['stream_616'],
splits_df['stream_623'],
chemical_groups))
S503.line = 'Sludge centrifuge'
# Reverse osmosis to treat membrane separated water
S504 = bst.units.Splitter('S504', ins=S501-0, outs=('discharged_water', 'waste_brine'),
split=find_split(splits_df.index,
splits_df['stream_626'],
splits_df['stream_627'],
chemical_groups))
S504.line = 'Reverse osmosis'
# Mix solid wastes to boiler turbogenerator
M505 = bst.units.Mixer('M505', ins=(S503-1,
# S301-0,
S401-0,
# F401-0, D401-0,
),
outs='wastes_to_boiler_turbogenerator')
# %%
# =============================================================================
# Facilities streams
# =============================================================================
sulfuric_acid_fresh = Stream('sulfuric_acid_fresh', price=price['Sulfuric acid'])
# TCP_fresh = Stream('TCP_fresh', price=price['TCP'])
ammonia_fresh = Stream('ammonia_fresh', price=price['AmmoniumHydroxide'])
CSL_fresh = Stream('CSL_fresh', price=price['CSL'])
# lime_fresh = Stream('lime_fresh', price=price['Lime'])
HCl_fresh = Stream('HCl_fresh', price=price['HCl'])
hexanol_fresh = Stream('hexanol_fresh', price=price['Hexanol'])
# heptane_fresh = Stream('heptane_fresh', price=price['Heptane'])
# toluene_fresh = Stream('toluene_fresh', price=price['Toluene'])
# hexanol_fresh_s = Stream('hexanol_fresh_s', price=price['Hexanol'])
heptane_fresh_s = Stream('heptane_fresh_s', price=price['Heptane'])
toluene_fresh_s = Stream('toluene_fresh_s', price=price['Toluene'])
hydrogen_fresh = Stream('hydrogen_fresh', price=price['Hydrogen'])
KOH_fresh = Stream('KOH_fresh', price=price['KOH'])
# S401_out1_F_mass = S401.outs[1].F_mass
# if not (S401_out1_F_mass == 0):
# ethanol_fresh = Stream('ethanol_fresh', Ethanol = 0.24 * S401_out1_F_mass, units='kg/hr', price=price['Ethanol']) - M401.ins[3].imass['Ethanol']
# DPHP_fresh = Stream('DPHP_fresh', DPHP = 0.25 * S401_out1_F_mass, units='kg/hr', price=price['DPHP']) - M401.ins[3].imass['Dipotassium hydrogen phosphate']
# else:
# ethanol_fresh = Stream('ethanol_fresh', Ethanol = get_feedstock_dry_mass()*48*22.1/1000*0.93, units='kg/hr', price=price['Ethanol'])
# DPHP_fresh = Stream('DPHP_fresh', DPHP = get_feedstock_dry_mass()*50*22.1/1000*0.93, units='kg/hr', price=price['DPHP'])
# Water used to keep system water usage balanced
system_makeup_water = Stream('system_makeup_water', price=price['Makeup water'])
# TAL stream
# TAL = Stream('TAL', units='kg/hr', price=price['TAL'])
# SA product
SA = Stream('SA', units='kg/hr', price=price['SA'])
# Acetoin product
# Acetoin = Stream('Acetoin', units='kg/hr', price=price['Acetoin'])
# # Isobutyraldehyde product
# IBA = Stream('IBA', units='kg/hr', price=price['IBA'])
# Chemicals used/generated in BT
# FGD_lime = Stream('FGD_lime')
ash = Stream('ash', price=price['Ash disposal'])
# boiler_chems = Stream('boiler_chems', price=price['Boiler chems'])
# baghouse_bag = Stream('baghouse_bag', price=price['Baghouse bag'])
# Supplementary natural gas for BT if produced steam not enough for regenerating
# all steam streams required by the system
# natural_gas = Stream('natural_gas', price=price['Natural gas'])
# Cooling tower chemicals
cooling_tower_chems = Stream('cooling_tower_chems', price=price['Cooling tower chems'])
# 145 based on equipment M-910 (clean-in-place system) in Humbird et al.
CIP_chems_in = Stream('CIP_chems_in', Water=145*get_flow_tpd()/2205, units='kg/hr')
# 1372608 based on stream 950 in Humbird et al.
# Air needed for multiple processes (including enzyme production, which is not included here);
# not rigorously modeled, only scaled with plant size
plant_air_in = Stream('plant_air_in', phase='g', units='kg/hr',
N2=0.79*1372608*get_flow_tpd()/2205,
O2=0.21*1372608*get_flow_tpd()/2205)
# 8021 based on stream 713 in Humbird et al.
fire_water_in = Stream('fire_water_in',
Water=8021*get_flow_tpd()/2205, units='kg/hr')
# =============================================================================
# Facilities units
# =============================================================================
# T601 = units.SulfuricAcidStorageTank('T601', ins=sulfuric_acid_fresh,
# outs=sulfuric_acid_T201)
# T601.line = 'Sulfuric acid storage tank'
# S601 = bst.units.ReversedSplitter('S601', ins=T601-0,
# outs=(pretreatment_sulfuric_acid,
# ''))
# T608 = units.TCPStorageTank('T608', ins=TCP_fresh,
# outs='TCP_catalyst')
# T608-0-3-R401
# T608.line = 'Tricalcium diphosphate storage tank'
#
# T602 = units.AmmoniaStorageTank('T602', ins=ammonia_fresh, outs=ammonia_M205)
# T602.line = 'Ammonia storage tank'
T603 = units.CSLstorageTank('T603', ins=CSL_fresh, outs=CSL)
T603.line = 'CSL storage tank'
# DPHP storage
#!!! Yalin suggests using BioSTEAM's storage tank, and maybe we don't need the ConveyingBelt
# (Yalin removed that from the lactic acid biorefinery)
T604 = units.DPHPStorageTank('T604', ins=hexanol_fresh)
T604.line = 'Hexanol storage tank'
T604_P = units.TALPump('T604_P', ins=T604-0, outs = Hexanol_minimal)
# T604_P = bst.units.ConveyingBelt('T604_P', ins=T604-0, outs = Hexanol)
# # 7-day storage time, similar to ethanol's in Humbird et al.
# T605 = units.DPHPStorageTank('T605', ins=heptane_fresh)
# T605.line = 'Heptane storage tank'
# T605_P = units.TALPump('T605_P', ins=T605-0, outs = Heptane)
# T606 = units.DPHPStorageTank('T606', ins=toluene_fresh)
# T606.line = 'Toluene storage tank'
# T606_P = units.TALPump('T606_P', ins=T606-0, outs = Toluene)
T607 = units.DPHPStorageTank('T607', ins=hydrogen_fresh, outs = Hydrogen)
T607.line = 'Hydrogen storage tank'
T608 = units.DPHPStorageTank('T608', ins=HCl_fresh, outs = HCl,
vessel_material = 'Stainless steel')
T608.line = 'HCl storage tank'
T609 = units.DPHPStorageTank('T609', ins=KOH_fresh, outs = KOH,
vessel_material = 'Stainless steel')
T609.line = 'KOH storage tank'
# T604_s = units.DPHPStorageTank('T604_s', ins=hexanol_fresh_s)
# T604_s.line = 'Hexanol storage tank s'
# T604_s_P = units.TALPump('T604_s_P', ins=T604_s-0, outs = Hexanol_s)
# 7-day storage time, similar to ethanol's in Humbird et al.
T605_s = units.DPHPStorageTank('T605_s', ins=heptane_fresh_s)
T605_s.line = 'Heptane storage tank s'
T605_s_P = units.TALPump('T605_s_P', ins=T605_s-0, outs = Heptane_s)
T606_s = units.DPHPStorageTank('T606_s', ins=toluene_fresh_s)
T606_s.line = 'Toluene storage tank s'
T606_s_P = units.TALPump('T606_s_P', ins=T606_s-0, outs = Toluene_s)
# T607_P = units.TALPump('T607_P', ins=T607-0, outs = Hydrogen)
# Connections to ATPE Mixer
# T604_P-0-1-M401
# T605_P-0-2-M401
# 7-day storage time, similar to ethanol's in Humbird et al.
T620 = units.TALStorageTank('T620', ins=F401-1, tau=7*24, V_wf=0.9,
vessel_type='Floating roof',
vessel_material='Stainless steel')
T620.line = 'SA storage tank'
T620_P = units.TALPump('T620_P', ins=T620-0, outs=SA)
# # 7-day storage time, similar to ethanol's in Humbird et al.
# T607 = units.TALStorageTank('T607', ins=D402_H-0, tau=7*24, V_wf=0.9,
# vessel_type='Floating roof',
# vessel_material='Stainless steel')
# T607.line = 'AcetoinStorageTank'
# T607_P = units.TALPump('T607_P', ins=T607-0, outs=Acetoin)
# # 7-day storage time, similar to ethanol's in Humbird et al.
# T608 = units.TALStorageTank('T608', ins=D403_H-0, tau=7*24, V_wf=0.9,
# vessel_type='Floating roof',
# vessel_material='Stainless steel')
# T608.line = 'IBAStorageTank'
# T608_P = units.TALPump('T608_P', ins=T608-0, outs=IBA)
CIP = facilities.CIP('CIP901', ins=CIP_chems_in, outs='CIP_chems_out')
ADP = facilities.ADP('ADP902', ins=plant_air_in, outs='plant_air_out',
ratio=get_flow_tpd()/2205)
FWT = units.FireWaterTank('FWT903', ins=fire_water_in, outs='fire_water_out')
CWP = facilities.CWP('CWP802', ins='return_chilled_water',
outs='process_chilled_water')
# M505-0 is the liquid/solid mixture, R501-0 is the biogas, blowdown is discharged
# BT = facilities.BT('BT', ins=(M505-0, R501-0,
# FGD_lime, boiler_chems,
# baghouse_bag, natural_gas,
# 'BT_makeup_water'),
# B_eff=0.8, TG_eff=0.85,
# combustibles=combustibles,
# side_streams_to_heat=(water_M201, water_M202, steam_M203),
# outs=('gas_emission', ash, 'boiler_blowdown_water'))
BT = bst.facilities.BoilerTurbogenerator('BT701',
ins=(M505-0,
R501-0,
'boiler_makeup_water',
'natural_gas',
'lime',
'boilerchems'),
outs=('gas_emission', 'boiler_blowdown_water', ash,),
turbogenerator_efficiency=0.85)
# BT = bst.BDunits.BoilerTurbogenerator('BT',
# ins=(M505-0, R501-0, 'boiler_makeup_water', 'natural_gas', FGD_lime, boiler_chems),
# boiler_efficiency=0.80,
# turbogenerator_efficiency=0.85)
# Blowdown is discharged
CT = facilities.CT('CT801', ins=('return_cooling_water', cooling_tower_chems,
'CT_makeup_water'),
outs=('process_cooling_water', 'cooling_tower_blowdown'))
# All water used in the system; only water usage is considered here.
# If heating is needed, the required heating duty is accounted for in BT.
process_water_streams = (enzyme_water,
aerobic_caustic,
CIP.ins[-1], BT.ins[-1], CT.ins[-1])
PWC = facilities.PWC('PWC904', ins=(system_makeup_water, S504-0),
process_water_streams=process_water_streams,
recycled_blowdown_streams=None,
outs=('process_water', 'discharged_water'))
# Heat exchanger network
from hxn._heat_exchanger_network import HeatExchangerNetwork
# from biosteam import HeatExchangerNetwork
HXN = HeatExchangerNetwork('HXN1001',
# ignored=[H401, H402],
)
def HXN_no_run_cost():
HXN.heat_utilities = tuple()
HXN._installed_cost = 0.
# To simulate without HXN, uncomment the following 3 lines:
# HXN._cost = HXN_no_run_cost
# HXN.energy_balance_percent_error = 0.
# HXN.new_HXs = HXN.new_HX_utils = []
# HXN = HX_Network('HXN')
# %%
# =============================================================================
# Complete system
# =============================================================================
TAL_sys = create_TAL_sys()
f = bst.main_flowsheet
u = f.unit
s = f.stream
feedstock = s.feedstock
SA = s.SA
get_flow_tpd = lambda: (feedstock.F_mass-feedstock.imass['H2O'])*24/907.185
TEA_feeds = set(i for i in TAL_sys.feeds if i.price)
TEA_products = set([i for i in TAL_sys.products if i.price] + [SA])
for ui in u:
globals().update({ui.ID: ui})
# %%
# =============================================================================
# TEA
# =============================================================================
# TAL_tea = CellulosicEthanolTEA(system=TAL_sys, IRR=0.10, duration=(2016, 2046),
# depreciation='MACRS7', income_tax=0.21, operating_days=0.9*365,
# lang_factor=None, construction_schedule=(0.08, 0.60, 0.32),
# startup_months=3, startup_FOCfrac=1, startup_salesfrac=0.5,
# startup_VOCfrac=0.75, WC_over_FCI=0.05,
# finance_interest=0.08, finance_years=10, finance_fraction=0.4,
# # biosteam Splitters and Mixers have no cost,
# # cost of all wastewater treatment units are included in WWT_cost,
# # BT is not included in this TEA
# OSBL_units=(u.U101, u.WWT_cost,
# u.T601, u.T602, u.T603, u.T606, u.T606_P,
# u.CWP, u.CT, u.PWC, u.CIP, u.ADP, u.FWT, u.BT),
# warehouse=0.04, site_development=0.09, additional_piping=0.045,
# proratable_costs=0.10, field_expenses=0.10, construction=0.20,
# contingency=0.10, other_indirect_costs=0.10,
# labor_cost=3212962*get_flow_tpd()/2205,
# labor_burden=0.90, property_insurance=0.007, maintenance=0.03,
# steam_power_depreciation='MACRS20', boiler_turbogenerator=u.BT)
# TAL_no_BT_tea = TAL_tea
TAL_tea = TALTEA(system=TAL_sys, IRR=0.10, duration=(2016, 2046),
depreciation='MACRS7', income_tax=0.21, operating_days=0.9*365,
lang_factor=None, construction_schedule=(0.08, 0.60, 0.32),
startup_months=3, startup_FOCfrac=1, startup_salesfrac=0.5,
startup_VOCfrac=0.75, WC_over_FCI=0.05,
finance_interest=0.08, finance_years=10, finance_fraction=0.4,
# biosteam Splitters and Mixers have no cost,
# cost of all wastewater treatment units are included in WWT_cost,
# BT is not included in this TEA
OSBL_units=(u.U101, u.WWTcost501,
# u.T601, u.T602,
u.T603, u.T604, u.T620,
# u.T606, u.T606_P,
u.CWP802, u.CT801, u.PWC904, u.CIP901, u.ADP902, u.FWT903, u.BT701),
warehouse=0.04, site_development=0.09, additional_piping=0.045,
proratable_costs=0.10, field_expenses=0.10, construction=0.20,
contingency=0.10, other_indirect_costs=0.10,
labor_cost=3212962*get_flow_tpd()/2205,
labor_burden=0.90, property_insurance=0.007, maintenance=0.03,
steam_power_depreciation='MACRS20', boiler_turbogenerator=u.BT701)
TAL_no_BT_tea = TAL_tea
# # Removed because there is no double-counting anyway.
# # Removes feeds/products of BT_sys from TAL_sys to avoid double-counting
# for i in BT_sys.feeds:
# TAL_sys.feeds.remove(i)
# for i in BT_sys.products:
# TAL_sys.products.remove(i)
# Boiler turbogenerator potentially has different depreciation schedule
# BT_tea = bst.TEA.like(BT_sys, TAL_no_BT_tea)
# BT_tea.labor_cost = 0
# Changed to MACRS 20 to be consistent with Humbird
# BT_tea.depreciation = 'MACRS20'
# BT_tea.OSBL_units = (BT,)
# %%
# =============================================================================
# Simulate system and get results
# =============================================================================
# def get_TAL_MPSP():
# TAL_sys.simulate()
# for i in range(3):
# TAL.price = TAL_tea.solve_price(TAL, TAL_no_BT_tea)
# return TAL.price
def get_SA_MPSP():
    for _ in range(3):
        TAL_sys.simulate()
    for _ in range(3):
        SA.price = TAL_tea.solve_price(SA)
    return SA.price*SA.F_mass/SA.imass['TAL'] # price per kg of TAL contained in the SA stream
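get_SA_MPSP rescales the solved stream price ($/kg of SA stream) to a price per kg of TAL contained in that stream. The conversion in isolation (names illustrative):

```python
def price_per_kg_active(stream_price, total_mass, active_mass):
    """Convert a per-kg-of-stream price to a per-kg-of-active-ingredient price.

    stream_price : $/kg of the whole stream
    total_mass   : stream mass flow, kg/hr
    active_mass  : mass flow of the active component, kg/hr
    """
    return stream_price * total_mass / active_mass

# $1.50/kg on a 100 kg/hr stream carrying 80 kg/hr TAL -> $1.875/kg TAL.
assert abs(price_per_kg_active(1.50, 100.0, 80.0) - 1.875) < 1e-12
```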
def get_titer():
return R302.outs[0].imass['TAL']/R302.outs[0].F_vol
def set_titer(titer):
M304.water_multiplier *= get_titer()/titer
get_SA_MPSP()
return get_titer()
# get_SA_MPSP()
# R301 = F('R301') # Fermentor
# yearly_production = 125000 # ton/yr
spec = ProcessSpecification(
evaporator = None,
pump = None,
mixer = u.M304,
heat_exchanger = u.M304_H,
seed_train_system = [],
reactor= u.R302,
reaction_name='fermentation_reaction',
substrates=('Xylose', 'Glucose'),
products=('TAL',),
spec_1=0.19,
spec_2=28.,
spec_3=0.19,
xylose_utilization_fraction = 0.80,
feedstock = feedstock,
dehydration_reactor = None,
byproduct_streams = [],
HXN = u.HXN1001,
maximum_inhibitor_concentration = 1.,
# pre_conversion_units = process_groups_dict['feedstock_group'].units + process_groups_dict['pretreatment_group'].units + [u.H301], # if the line below does not work (depends on BioSTEAM version)
pre_conversion_units = TAL_sys.split(u.M304.ins[0])[0],
# set baseline fermentation performance here
baseline_yield = 0.19,
baseline_titer = 28.,
baseline_productivity = 0.19,
# baseline_yield = 0.30,
# baseline_titer = 25.,
# baseline_productivity = 0.19,
feedstock_mass = feedstock.F_mass,
pretreatment_reactor = None)
spec.load_spec_1 = spec.load_yield
# spec.load_spec_2 = spec.load_titer
spec.load_spec_3 = spec.load_productivity
def M304_titer_obj_fn(water_to_sugar_mol_ratio):
M304, R302 = u.M304, u.R302
M304.water_to_sugar_mol_ratio = water_to_sugar_mol_ratio
M304.specification[0][0]()
u.M304_H._run()
u.S302._run()
u.R303._run()
u.T301._run()
R302.specification[0][0]()
# broth = R302.outs[0]
# return broth.imass['TAL']/broth.F_vol - R302.titer_to_load
return R302.effluent_titer - R302.titer_to_load
def load_titer_with_glucose(titer_to_load):
spec.spec_2 = titer_to_load
u.R302.titer_to_load = titer_to_load
flx.IQ_interpolation(M304_titer_obj_fn, 1e-3, 20000.)
# u.AC401.regeneration_velocity = min(14.4, 3.1158 + ((14.4-3.1158)/(30.-3.))*(titer_to_load-3.)) # heuristic to obtain regeneration velocity at which MPSP is minimum fitted to results from simulations at target_recovery=0.99
# u.AC401.regeneration_velocity = 14.4
spec.load_spec_2 = load_titer_with_glucose
# path = (F301, R302)
# @np.vectorize
# def calculate_titer(V):
# F301.V = V
# for i in path: i._run()
# return spec._calculate_titer()
# @np.vectorize
# def calculate_MPSP(V):
# F301.V = V
# TAL_sys.simulate()
# MPSP = SA.price = TAL_tea.solve_price(SA, TAL_no_BT_tea)
# return MPSP
# vapor_fractions = np.linspace(0.20, 0.80)
# titers = calculate_titer(vapor_fractions)
# MPSPs = calculate_MPSP(vapor_fractions)
# import matplotlib.pyplot as plt
# plt.plot(vapor_fractions, titers)
# plt.show()
# plt.plot(titers, MPSPs)
# plt.show()
# %%
# =============================================================================
# Life cycle analysis (LCA), waste disposal emission not included
# =============================================================================
# 100-year global warming potential (GWP) from material flows
LCA_streams = TEA_feeds.copy()
LCA_stream = Stream('LCA_stream', units='kg/hr')
def get_material_GWP():
LCA_stream.mass = sum(i.mass for i in LCA_streams)
chemical_GWP = LCA_stream.mass*CFs['GWP_CF_stream'].mass
# feedstock_GWP = feedstock.F_mass*CFs['GWP_CFs']['Corn stover']
return chemical_GWP.sum()/SA.F_mass
# GWP from combustion of non-biogenic carbons
get_non_bio_GWP = lambda: (natural_gas.get_atomic_flow('C'))* TAL_chemicals.CO2.MW / SA.F_mass
# +ethanol_fresh.get_atomic_flow('C')) \
# GWP from electricity
get_electricity_use = lambda: sum(i.power_utility.rate for i in TAL_sys.units)
get_electricity_GWP = lambda: get_electricity_use()*CFs['GWP_CFs']['Electricity'] \
/ SA.F_mass
# CO2 fixed in the SA product
get_fixed_GWP = lambda: \
SA.get_atomic_flow('C')*TAL_chemicals.CO2.MW/SA.F_mass
# carbon_content_of_feedstock = 0
get_GWP = lambda: get_material_GWP()+get_non_bio_GWP()+get_electricity_GWP()
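get_material_GWP multiplies each input's mass flow by its characterization factor and normalizes by the product flow. A dictionary-based sketch of the same arithmetic (names and numbers illustrative):

```python
def material_gwp(feed_flows, gwp_cfs, product_kg_per_hr):
    """kg CO2-eq per kg product from material inputs.

    feed_flows : component -> kg/hr
    gwp_cfs    : component -> kg CO2-eq per kg (missing entries count as 0)
    """
    total = sum(m * gwp_cfs.get(k, 0.) for k, m in feed_flows.items())
    return total / product_kg_per_hr

# 10 kg/hr NaOH at 2 kg CO2-eq/kg over 50 kg/hr product -> 0.4 kg CO2-eq/kg.
assert abs(material_gwp({'NaOH': 10.0, 'Water': 100.0}, {'NaOH': 2.0}, 50.0) - 0.4) < 1e-12
```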
# Fossil energy consumption (FEC) from materials
def get_material_FEC():
LCA_stream.mass = sum(i.mass for i in LCA_streams)
chemical_FEC = LCA_stream.mass*CFs['FEC_CF_stream'].mass
# feedstock_FEC = feedstock.F_mass*CFs['FEC_CFs']['Corn stover']
return chemical_FEC.sum()/SA.F_mass
# FEC from electricity
get_electricity_FEC = lambda: \
get_electricity_use()*CFs['FEC_CFs']['Electricity']/SA.F_mass
# Total FEC
get_FEC = lambda: get_material_FEC()+get_electricity_FEC()
# get_SPED = lambda: BT.system_heating_demand*0.001/SA.F_mass
SA_LHV = 31.45 # MJ/kg SA
# %% Full analysis
def simulate_and_print():
get_SA_MPSP()
print('\n---------- Simulation Results ----------')
print(f'MPSP is ${get_SA_MPSP():.3f}/kg')
# print(f'GWP is {get_GWP():.3f} kg CO2-eq/kg SA')
# print(f'FEC is {get_FEC():.2f} MJ/kg SA or {get_FEC()/SA_LHV:.2f} MJ/MJ SA')
# print(f'SPED is {get_SPED():.2f} MJ/kg SA or {get_SPED()/SA_LHV:.2f} MJ/MJ SA')
# print('--------------------\n')
# simulate_and_print()
# TAL_sys.simulate()
get_SA_MPSP()
spec.load_specifications(0.203, 35.9, 0.21)
simulate_and_print()
# %%
# =============================================================================
# For Monte Carlo and analyses
# =============================================================================
TAL_sub_sys = {
# 'feedstock_sys': (U101,),
# 'pretreatment_sys': (T201, M201, M202, M203,
# R201, R201_H, T202, T203,
# F201, F201_H,
# M204, T204, T204_P,
# M205, M205_P),
# 'conversion_sys': (H301, M301, M302, R301, R302, T301),
# 'separation_sys': (S401, M401, M401_P,
# S402,
# # F401, F401_H, X401,
# D401, D401_H, D401_P, S403,
# M402_P, S403,
# D403, D403_H, D403_P,
# M501,
# T606, T606_P, T607, T607_P)
# F402, F402_H, F402_P,
# D405, D405_H1, D405_H2, D405_P,
# M401, M401_P)
# 'wastewater_sys': (M501, WWT_cost, R501,
# M502, R502, S501, S502, M503,
# M504, S503, S504, M505),
# 'HXN': (HXN,),
# 'BT': (BT,),
# 'CT': (CT,),
# 'other_facilities': (T601, S601,
# T602, T603,
# T604, T604_P,
# T605, T605_P,
# T606, T606_P,
# PWC, CIP, ADP, FWT)
}
# for unit in sum(TAL_sub_sys.values(), ()):
# if not unit in TAL_sys.units:
# print(f'{unit.ID} not in TAL_sys.units')
# for unit in TAL_sys.units:
# if not unit in sum(TAL_sub_sys.values(), ()):
# print(f'{unit.ID} not in TAL_sub_sys') | 42.560579 | 252 | 0.574312 |
U401 = bst.Unit('U401', ins=R302-0, outs=('fermentation_broth_first_sle'))
def U401_spec():
U401_ins_0 = U401.ins[0]
tot_TAL = U401_ins_0.imol['TAL']
U401_outs_0 = U401.outs[0]
U401_outs_0.copy_like(U401_ins_0)
mol_TAL_dissolved = get_mol_TAL_dissolved(U401_outs_0.T, U401_outs_0.imol['Water'])
U401_outs_0.phases = ('l', 's')
U401_outs_0.imol['l', 'TAL'] = min(mol_TAL_dissolved, tot_TAL)
U401_outs_0.imol['s', 'TAL'] = tot_TAL - min(mol_TAL_dissolved, tot_TAL)
U401.specification = U401_spec
H401 = bst.HXutility('H401', ins=U401-0, outs=('H401_0'), T=273.15+70.)
def H401_spec():
H401_ins_0 = H401.ins[0]
H401_ins_0_water=H401_ins_0.imol['Water']
tot_TAL = H401_ins_0.imol['TAL']
H401_spec_obj_fn = lambda T: get_mol_TAL_dissolved(T, H401_ins_0_water) - tot_TAL
H401.T = flx.IQ_interpolation(H401_spec_obj_fn, H401.ins[0].T, 99.+273.15)
H401._run()
H401_outs_0 = H401.outs[0]
mol_TAL_dissolved = get_mol_TAL_dissolved(H401_outs_0.T, H401_outs_0.imol['Water'])
H401_outs_0.imol['l', 'TAL'] = min(mol_TAL_dissolved, tot_TAL)
H401_outs_0.imol['s', 'TAL'] = max(0., round(tot_TAL - min(mol_TAL_dissolved, tot_TAL), 5))
H401.specification = H401_spec
U402 = bst.FakeSplitter('U402', ins=H401-0, outs = ('thermally_decarboxylated_broth','vented_CO2'))
U402.decarboxylation_rxns = ParallelRxn([
Rxn('TAL + H2O -> PD + CO2', 'TAL', 0.25),
])
def get_TAL_decarboxylation_conversion(T=273.15+80.):
return (0.2*(T-273.15) + 8.)/100.
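The decarboxylation conversion above is a linear fit in temperature. A standalone sanity check of that correlation (the helper name is illustrative; the numbers simply restate the formula):

```python
def tal_decarboxylation_conversion(T=273.15 + 80.0):
    # Same linear correlation as get_TAL_decarboxylation_conversion above:
    # 0.2 percentage points of conversion per degree C, plus an 8% offset.
    return (0.2 * (T - 273.15) + 8.0) / 100.0

print(tal_decarboxylation_conversion(273.15 + 80.0))  # ~0.24 at 80 C
```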
def U402_spec():
U402_outs_0 = U402.outs[0]
U402_outs_0.copy_like(U402.ins[0])
U402_outs_0.phases = ('l', 's')
U402.decarboxylation_rxns[0].X = get_TAL_decarboxylation_conversion(T=U402_outs_0.T)
U402.decarboxylation_rxns[0](U402_outs_0['l'])
U402.outs[1].imol['CO2'] = U402_outs_0.imol['l', 'CO2']
U402.outs[1].phase = 'g'
U402_outs_0.imol['l', 'CO2'] = 0.
U402.specification = U402_spec
S401_index = [splits_df.index[0]] + splits_df.index[2:].to_list()
S401_cell_mass_split = [splits_df['stream_571'][0]] + splits_df['stream_571'][2:].to_list()
S401_filtrate_split = [splits_df['stream_535'][0]] + splits_df['stream_535'][2:].to_list()
S401 = bst.units.SolidsCentrifuge('S401', ins=U402-0, outs=('S401_solid_fraction', 'S401_liquid_fraction'),
split=find_split(S401_index,
S401_cell_mass_split,
S401_filtrate_split,
chemical_groups),
solids =\
['Xylan', 'Glucan', 'Lignin', 'FermMicrobe',\
'Ash', 'Arabinan', 'Galactan', 'Mannan'])
def S401_TAL_split_spec():
S401._run()
S401_ins_0 = S401.ins[0]
S401.outs[0].imol['TAL'] = S401_ins_0.imol['s', 'TAL']
S401.outs[1].imol['TAL'] = S401_ins_0.imol['l', 'TAL']
S401.specification = S401_TAL_split_spec
H402 = bst.HXutility('H402', ins=S401-1, outs=('H402_0'), T=273.15+1.)
def H402_spec():
H402._run()
H402_ins_0 = H402.ins[0]
tot_TAL = H402_ins_0.imol['TAL']
H402_outs_0 = H402.outs[0]
TAL_solubility = get_mol_TAL_dissolved(H402_outs_0.T, H402_outs_0.imol['Water'])
H402_outs_0.phases = ('s', 'l')
H402_outs_0.T = H402.T
TAL_dissolved = min(TAL_solubility, tot_TAL)
H402_outs_0.imol['l', 'TAL'] = TAL_dissolved
H402_outs_0.imol['s', 'TAL'] = max(0, tot_TAL - TAL_dissolved)
H402.specification = H402_spec
S402 = bst.units.SolidsCentrifuge('S402', ins=H402-0, outs=('S402_solid_fraction', 'S402_liquid_fraction'),
split=find_split(S401_index,
S401_cell_mass_split,
S401_filtrate_split,
chemical_groups), solids =\
['Xylan', 'Glucan', 'Lignin', 'FermMicrobe',\
'Ash', 'Arabinan', 'Galactan', 'Mannan'])
def S402_TAL_split_spec():
S402_ins_0 = S402.ins[0]
solid_TAL = float(S402_ins_0.imol['s', 'TAL'])
S402_ins_0.imol['s', 'TAL'] = 0.
S402._run()
S402.outs[0].imol['TAL'] = solid_TAL
S402.outs[1].imol['TAL'] = S402_ins_0.imol['l', 'TAL']
S402_ins_0.imol['s', 'TAL'] = solid_TAL
S402.specification = S402_TAL_split_spec
H403 = bst.HXutility('H403', ins=S402-0, outs=('heated_TAL'), T=273.15+40.)
F401 = bst.Flash('F401', ins=H403-0, outs = ('volatiles', 'pure_TAL_product'), V = 0.99, P=101325.)
def F401_spec():
F401_ins_0 = F401.ins[0]
F401.V = sum(F401_ins_0.imol['H2O',
'AceticAcid',
'Furfural',
'HMF',]) / F401_ins_0.F_mol
F401._run()
F401.specification = F401_spec
air_lagoon = Stream('air_lagoon', phase='g', units='kg/hr')
aerobic_caustic = Stream('aerobic_caustic', units='kg/hr', T=20+273.15, P=2*101325,
price=price['Caustics'])
M501 = bst.units.Mixer('M501', ins=(
S402-1,
F401-0,
))
WWT_cost = units.WastewaterSystemCost('WWTcost501', ins=M501-0)
R501 = units.AnaerobicDigestion('R501', ins=WWT_cost-0,
outs=('biogas', 'anaerobic_treated_water',
'anaerobic_sludge'),
reactants=soluble_organics,
split=find_split(splits_df.index,
splits_df['stream_611'],
splits_df['stream_612'],
chemical_groups),
T=35+273.15)
get_flow_tpd = lambda: (feedstock.F_mass-feedstock.imass['H2O'])*24/907.185
M502 = bst.units.Mixer('M502', ins=(R501-1, ''))
R502 = units.AerobicDigestion('R502', ins=(M502-0, air_lagoon, aerobic_caustic),
outs=('aerobic_vent', 'aerobic_treated_water'),
reactants=soluble_organics,
ratio=get_flow_tpd()/2205)
S501 = bst.units.Splitter('S501', ins=R502-1, outs=('membrane_treated_water',
'membrane_sludge'),
split=find_split(splits_df.index,
splits_df['stream_624'],
splits_df['stream_625'],
chemical_groups))
S501.line = 'Membrane bioreactor'
S502 = bst.units.Splitter('S502', ins=S501-1, outs=('to_aerobic_digestion',
'to_boiler_turbogenerator'),
split=0.96)
M503 = bst.units.Mixer('M503', ins=(S502-0, 'centrate'), outs=1-M502)
M504 = bst.units.Mixer('M504', ins=(R501-2, S502-1))
S503 = bst.units.Splitter('S503', ins=M504-0, outs=(1-M503, 'sludge'),
split=find_split(splits_df.index,
splits_df['stream_616'],
splits_df['stream_623'],
chemical_groups))
S503.line = 'Sludge centrifuge'
S504 = bst.units.Splitter('S504', ins=S501-0, outs=('discharged_water', 'waste_brine'),
split=find_split(splits_df.index,
splits_df['stream_626'],
splits_df['stream_627'],
chemical_groups))
S504.line = 'Reverse osmosis'
M505 = bst.units.Mixer('M505', ins=(S503-1,
S401-0,
),
outs='wastes_to_boiler_turbogenerator')
sulfuric_acid_fresh = Stream('sulfuric_acid_fresh', price=price['Sulfuric acid'])
ammonia_fresh = Stream('ammonia_fresh', price=price['AmmoniumHydroxide'])
CSL_fresh = Stream('CSL_fresh', price=price['CSL'])
HCl_fresh = Stream('HCl_fresh', price=price['HCl'])
hexanol_fresh = Stream('hexanol_fresh', price=price['Hexanol'])
heptane_fresh_s = Stream('heptane_fresh_s', price=price['Heptane'])
toluene_fresh_s = Stream('toluene_fresh_s', price=price['Toluene'])
hydrogen_fresh = Stream('hydrogen_fresh', price=price['Hydrogen'])
KOH_fresh = Stream('KOH_fresh', price=price['KOH'])
system_makeup_water = Stream('system_makeup_water', price=price['Makeup water'])
SA = Stream('SA', units='kg/hr', price=price['SA'])
ash = Stream('ash', price=price['Ash disposal'])
cooling_tower_chems = Stream('cooling_tower_chems', price=price['Cooling tower chems'])
CIP_chems_in = Stream('CIP_chems_in', Water=145*get_flow_tpd()/2205, units='kg/hr')
plant_air_in = Stream('plant_air_in', phase='g', units='kg/hr',
N2=0.79*1372608*get_flow_tpd()/2205,
O2=0.21*1372608*get_flow_tpd()/2205)
fire_water_in = Stream('fire_water_in',
Water=8021*get_flow_tpd()/2205, units='kg/hr')
T603 = units.CSLstorageTank('T603', ins=CSL_fresh, outs=CSL)
T603.line = 'CSL storage tank'
T604 = units.DPHPStorageTank('T604', ins=hexanol_fresh)
T604.line = 'Hexanol storage tank'
T604_P = units.TALPump('T604_P', ins=T604-0, outs = Hexanol_minimal)
# T605 = units.DPHPStorageTank('T605', ins=heptane_fresh)
# T605.line = 'Heptane storage tank'
# T605_P = units.TALPump('T605_P', ins=T605-0, outs = Heptane)
# T606 = units.DPHPStorageTank('T606', ins=toluene_fresh)
# T606.line = 'Toluene storage tank'
# T606_P = units.TALPump('T606_P', ins=T606-0, outs = Toluene)
T607 = units.DPHPStorageTank('T607', ins=hydrogen_fresh, outs = Hydrogen)
T607.line = 'Hydrogen storage tank'
T608 = units.DPHPStorageTank('T608', ins=HCl_fresh, outs = HCl,
vessel_material = 'Stainless steel')
T608.line = 'HCl storage tank'
T609 = units.DPHPStorageTank('T609', ins=KOH_fresh, outs = KOH,
vessel_material = 'Stainless steel')
T609.line = 'KOH storage tank'
# T604_s = units.DPHPStorageTank('T604_s', ins=hexanol_fresh_s)
# T604_s.line = 'Hexanol storage tank s'
# T604_s_P = units.TALPump('T604_s_P', ins=T604_s-0, outs = Hexanol_s)
# 7-day storage time, similar to ethanol's in Humbird et al.
T605_s = units.DPHPStorageTank('T605_s', ins=heptane_fresh_s)
T605_s.line = 'Heptane storage tank s'
T605_s_P = units.TALPump('T605_s_P', ins=T605_s-0, outs = Heptane_s)
T606_s = units.DPHPStorageTank('T606_s', ins=toluene_fresh_s)
T606_s.line = 'Toluene storage tank s'
T606_s_P = units.TALPump('T606_s_P', ins=T606_s-0, outs = Toluene_s)
T620 = units.TALStorageTank('T620', ins=F401-1, tau=7*24, V_wf=0.9,
vessel_type='Floating roof',
vessel_material='Stainless steel')
T620.line = 'SAStorageTank'
T620_P = units.TALPump('T620_P', ins=T620-0, outs=SA)
# # 7-day storage time, similar to ethanol's in Humbird et al.
# T608 = units.TALStorageTank('T608', tau=7*24, V_wf=0.9,
# vessel_type='Floating roof',
# vessel_material='Stainless steel')
# T608.line = 'IBAStorageTank'
# T608_P = units.TALPump('T608_P', ins=T608-0, outs=IBA)
CIP = facilities.CIP('CIP901', ins=CIP_chems_in, outs='CIP_chems_out')
ADP = facilities.ADP('ADP902', ins=plant_air_in, outs='plant_air_out',
ratio=get_flow_tpd()/2205)
FWT = units.FireWaterTank('FWT903', ins=fire_water_in, outs='fire_water_out')
CWP = facilities.CWP('CWP802', ins='return_chilled_water',
outs='process_chilled_water')
# M505-0 is the liquid/solid mixture, R501-0 is the biogas, blowdown is discharged
# BT = facilities.BT('BT', ins=(M505-0, R501-0,
# FGD_lime, boiler_chems,
# baghouse_bag, natural_gas,
# 'BT_makeup_water'),
# B_eff=0.8, TG_eff=0.85,
# combustibles=combustibles,
# side_streams_to_heat=(water_M201, water_M202, steam_M203),
# outs=('gas_emission', ash, 'boiler_blowdown_water'))
BT = bst.facilities.BoilerTurbogenerator('BT701',
ins=(M505-0,
R501-0,
'boiler_makeup_water',
'natural_gas',
'lime',
'boilerchems'),
outs=('gas_emission', 'boiler_blowdown_water', ash,),
turbogenerator_efficiency=0.85)
# BT = bst.BDunits.BoilerTurbogenerator('BT',
# ins=(M505-0, R501-0, 'boiler_makeup_water', 'natural_gas', FGD_lime, boiler_chems),
# boiler_efficiency=0.80,
# turbogenerator_efficiency=0.85)
# Blowdown is discharged
CT = facilities.CT('CT801', ins=('return_cooling_water', cooling_tower_chems,
'CT_makeup_water'),
outs=('process_cooling_water', 'cooling_tower_blowdown'))
# All water used in the system; here only water usage is considered.
# If heating is needed, the heating duty required is accounted for in BT.
process_water_streams = (enzyme_water,
aerobic_caustic,
CIP.ins[-1], BT.ins[-1], CT.ins[-1])
PWC = facilities.PWC('PWC904', ins=(system_makeup_water, S504-0),
process_water_streams=process_water_streams,
recycled_blowdown_streams=None,
outs=('process_water', 'discharged_water'))
# Heat exchanger network
from hxn._heat_exchanger_network import HeatExchangerNetwork
# from biosteam import HeatExchangerNetwork
HXN = HeatExchangerNetwork('HXN1001',
# ignored=[H401, H402],
)
def HXN_no_run_cost():
HXN.heat_utilities = tuple()
HXN._installed_cost = 0.
# To simulate without HXN, uncomment the following 3 lines:
# HXN._cost = HXN_no_run_cost
# HXN.energy_balance_percent_error = 0.
# HXN.new_HXs = HXN.new_HX_utils = []
# HXN = HX_Network('HXN')
# %%
# =============================================================================
# Complete system
# =============================================================================
TAL_sys = create_TAL_sys()
f = bst.main_flowsheet
u = f.unit
s = f.stream
feedstock = s.feedstock
SA = s.SA
get_flow_tpd = lambda: (feedstock.F_mass-feedstock.imass['H2O'])*24/907.185
TEA_feeds = set([i for i in TAL_sys.feeds if i.price])
TEA_products = set([i for i in TAL_sys.products if i.price] + [SA])
for ui in u:
globals().update({ui.ID: ui})
# %%
# =============================================================================
# TEA
# =============================================================================
# TAL_tea = CellulosicEthanolTEA(system=TAL_sys, IRR=0.10, duration=(2016, 2046),
# depreciation='MACRS7', income_tax=0.21, operating_days=0.9*365,
# lang_factor=None, construction_schedule=(0.08, 0.60, 0.32),
# startup_months=3, startup_FOCfrac=1, startup_salesfrac=0.5,
# startup_VOCfrac=0.75, WC_over_FCI=0.05,
# finance_interest=0.08, finance_years=10, finance_fraction=0.4,
# # biosteam Splitters and Mixers have no cost,
# # cost of all wastewater treatment units are included in WWT_cost,
# # BT is not included in this TEA
# OSBL_units=(u.U101, u.WWT_cost,
# u.T601, u.T602, u.T603, u.T606, u.T606_P,
# u.CWP, u.CT, u.PWC, u.CIP, u.ADP, u.FWT, u.BT),
# warehouse=0.04, site_development=0.09, additional_piping=0.045,
# proratable_costs=0.10, field_expenses=0.10, construction=0.20,
# contingency=0.10, other_indirect_costs=0.10,
# labor_cost=3212962*get_flow_tpd()/2205,
# labor_burden=0.90, property_insurance=0.007, maintenance=0.03,
# steam_power_depreciation='MACRS20', boiler_turbogenerator=u.BT)
# TAL_no_BT_tea = TAL_tea
TAL_tea = TALTEA(system=TAL_sys, IRR=0.10, duration=(2016, 2046),
depreciation='MACRS7', income_tax=0.21, operating_days=0.9*365,
lang_factor=None, construction_schedule=(0.08, 0.60, 0.32),
startup_months=3, startup_FOCfrac=1, startup_salesfrac=0.5,
startup_VOCfrac=0.75, WC_over_FCI=0.05,
finance_interest=0.08, finance_years=10, finance_fraction=0.4,
# biosteam Splitters and Mixers have no cost,
# cost of all wastewater treatment units are included in WWT_cost,
# BT is not included in this TEA
OSBL_units=(u.U101, u.WWTcost501,
# u.T601, u.T602,
u.T603, u.T604, u.T620,
# u.T606, u.T606_P,
u.CWP802, u.CT801, u.PWC904, u.CIP901, u.ADP902, u.FWT903, u.BT701),
warehouse=0.04, site_development=0.09, additional_piping=0.045,
proratable_costs=0.10, field_expenses=0.10, construction=0.20,
contingency=0.10, other_indirect_costs=0.10,
labor_cost=3212962*get_flow_tpd()/2205,
labor_burden=0.90, property_insurance=0.007, maintenance=0.03,
steam_power_depreciation='MACRS20', boiler_turbogenerator=u.BT701)
TAL_no_BT_tea = TAL_tea
# # Removed because there is not double counting anyways.
# # Removes feeds/products of BT_sys from TAL_sys to avoid double-counting
# for i in BT_sys.feeds:
# TAL_sys.feeds.remove(i)
# for i in BT_sys.products:
# TAL_sys.products.remove(i)
# Boiler turbogenerator potentially has different depreciation schedule
# BT_tea = bst.TEA.like(BT_sys, TAL_no_BT_tea)
# BT_tea.labor_cost = 0
# Changed to MACRS 20 to be consistent with Humbird
# BT_tea.depreciation = 'MACRS20'
# BT_tea.OSBL_units = (BT,)
# %%
# =============================================================================
# Simulate system and get results
# =============================================================================
# def get_TAL_MPSP():
# TAL_sys.simulate()
# for i in range(3):
# TAL.price = TAL_tea.solve_price(TAL, TAL_no_BT_tea)
# return TAL.price
def get_SA_MPSP():
for i in range(3):
TAL_sys.simulate()
for i in range(3):
SA.price = TAL_tea.solve_price(SA)
return SA.price*SA.F_mass/SA.imass['TAL']
def get_titer():
return R302.outs[0].imass['TAL']/R302.outs[0].F_vol
def set_titer(titer):
M304.water_multiplier *= get_titer()/titer
get_SA_MPSP()
return get_titer()
# get_SA_MPSP()
# R301 = F('R301') # Fermentor
# yearly_production = 125000 # ton/yr
spec = ProcessSpecification(
evaporator = None,
pump = None,
mixer = u.M304,
heat_exchanger = u.M304_H,
seed_train_system = [],
reactor= u.R302,
reaction_name='fermentation_reaction',
substrates=('Xylose', 'Glucose'),
products=('TAL',),
spec_1=0.19,
spec_2=28.,
spec_3=0.19,
xylose_utilization_fraction = 0.80,
feedstock = feedstock,
dehydration_reactor = None,
byproduct_streams = [],
HXN = u.HXN1001,
maximum_inhibitor_concentration = 1.,
# pre_conversion_units = process_groups_dict['feedstock_group'].units + process_groups_dict['pretreatment_group'].units + [u.H301], # if the line below does not work (depends on BioSTEAM version)
pre_conversion_units = TAL_sys.split(u.M304.ins[0])[0],
# set baseline fermentation performance here
baseline_yield = 0.19,
baseline_titer = 28.,
baseline_productivity = 0.19,
# baseline_yield = 0.30,
# baseline_titer = 25.,
# baseline_productivity = 0.19,
feedstock_mass = feedstock.F_mass,
pretreatment_reactor = None)
spec.load_spec_1 = spec.load_yield
# spec.load_spec_2 = spec.load_titer
spec.load_spec_3 = spec.load_productivity
def M304_titer_obj_fn(water_to_sugar_mol_ratio):
M304, R302 = u.M304, u.R302
M304.water_to_sugar_mol_ratio = water_to_sugar_mol_ratio
M304.specification[0][0]()
u.M304_H._run()
u.S302._run()
u.R303._run()
u.T301._run()
R302.specification[0][0]()
# broth = R302.outs[0]
# return broth.imass['TAL']/broth.F_vol - R302.titer_to_load
return R302.effluent_titer - R302.titer_to_load
def load_titer_with_glucose(titer_to_load):
spec.spec_2 = titer_to_load
u.R302.titer_to_load = titer_to_load
flx.IQ_interpolation(M304_titer_obj_fn, 1e-3, 20000.)
# u.AC401.regeneration_velocity = min(14.4, 3.1158 + ((14.4-3.1158)/(30.-3.))*(titer_to_load-3.)) # heuristic to obtain regeneration velocity at which MPSP is minimum fitted to results from simulations at target_recovery=0.99
# u.AC401.regeneration_velocity = 14.4
spec.load_spec_2 = load_titer_with_glucose
# path = (F301, R302)
# @np.vectorize
# def calculate_titer(V):
# F301.V = V
# for i in path: i._run()
# return spec._calculate_titer()
# @np.vectorize
# def calculate_MPSP(V):
# F301.V = V
# TAL_sys.simulate()
# MPSP = SA.price = TAL_tea.solve_price(SA, TAL_no_BT_tea)
# return MPSP
# vapor_fractions = np.linspace(0.20, 0.80)
# titers = calculate_titer(vapor_fractions)
# MPSPs = calculate_MPSP(vapor_fractions)
# import matplotlib.pyplot as plt
# plt.plot(vapor_fractions, titers)
# plt.show()
# plt.plot(titers, MPSPs)
# plt.show()
# %%
# =============================================================================
# Life cycle analysis (LCA), waste disposal emission not included
# =============================================================================
# 100-year global warming potential (GWP) from material flows
LCA_streams = TEA_feeds.copy()
LCA_stream = Stream('LCA_stream', units='kg/hr')
def get_material_GWP():
LCA_stream.mass = sum(i.mass for i in LCA_streams)
chemical_GWP = LCA_stream.mass*CFs['GWP_CF_stream'].mass
# feedstock_GWP = feedstock.F_mass*CFs['GWP_CFs']['Corn stover']
return chemical_GWP.sum()/SA.F_mass
# GWP from combustion of non-biogenic carbons
get_non_bio_GWP = lambda: (natural_gas.get_atomic_flow('C'))* TAL_chemicals.CO2.MW / SA.F_mass
# +ethanol_fresh.get_atomic_flow('C')) \
# GWP from electricity
get_electricity_use = lambda: sum(i.power_utility.rate for i in TAL_sys.units)
get_electricity_GWP = lambda: get_electricity_use()*CFs['GWP_CFs']['Electricity'] \
/ SA.F_mass
# CO2 fixed in lactic acid product
get_fixed_GWP = lambda: \
SA.get_atomic_flow('C')*TAL_chemicals.CO2.MW/SA.F_mass
# carbon_content_of_feedstock = 0
get_GWP = lambda: get_material_GWP()+get_non_bio_GWP()+get_electricity_GWP()
# Fossil energy consumption (FEC) from materials
def get_material_FEC():
LCA_stream.mass = sum(i.mass for i in LCA_streams)
chemical_FEC = LCA_stream.mass*CFs['FEC_CF_stream'].mass
# feedstock_FEC = feedstock.F_mass*CFs['FEC_CFs']['Corn stover']
return chemical_FEC.sum()/SA.F_mass
# FEC from electricity
get_electricity_FEC = lambda: \
get_electricity_use()*CFs['FEC_CFs']['Electricity']/SA.F_mass
# Total FEC
get_FEC = lambda: get_material_FEC()+get_electricity_FEC()
# get_SPED = lambda: BT.system_heating_demand*0.001/SA.F_mass
SA_LHV = 31.45 # MJ/kg SA
# %% Full analysis
def simulate_and_print():
get_SA_MPSP()
print('\n---------- Simulation Results ----------')
print(f'MPSP is ${get_SA_MPSP():.3f}/kg')
# print(f'GWP is {get_GWP():.3f} kg CO2-eq/kg SA')
# print(f'FEC is {get_FEC():.2f} MJ/kg SA or {get_FEC()/SA_LHV:.2f} MJ/MJ SA')
# print(f'SPED is {get_SPED():.2f} MJ/kg SA or {get_SPED()/SA_LHV:.2f} MJ/MJ SA')
# print('--------------------\n')
# simulate_and_print()
# TAL_sys.simulate()
get_SA_MPSP()
spec.load_specifications(0.203, 35.9, 0.21)
simulate_and_print()
# %%
# =============================================================================
# For Monte Carlo and analyses
# =============================================================================
TAL_sub_sys = {
# 'feedstock_sys': (U101,),
# 'pretreatment_sys': (T201, M201, M202, M203,
# R201, R201_H, T202, T203,
# F201, F201_H,
# M204, T204, T204_P,
# M205, M205_P),
# 'conversion_sys': (H301, M301, M302, R301, R302, T301),
# 'separation_sys': (S401, M401, M401_P,
# S402,
# # F401, F401_H, X401,
# D401, D401_H, D401_P, S403,
# M402_P, S403,
# D403, D403_H, D403_P,
# M501,
# T606, T606_P, T607, T607_P)
# F402, F402_H, F402_P,
# D405, D405_H1, D405_H2, D405_P,
# M401, M401_P)
# 'wastewater_sys': (M501, WWT_cost, R501,
# M502, R502, S501, S502, M503,
# M504, S503, S504, M505),
# 'HXN': (HXN,),
# 'BT': (BT,),
# 'CT': (CT,),
# 'other_facilities': (T601, S601,
# T602, T603,
# T604, T604_P,
# T605, T605_P,
# T606, T606_P,
# PWC, CIP, ADP, FWT)
}
# for unit in sum(TAL_sub_sys.values(), ()):
# if not unit in TAL_sys.units:
# print(f'{unit.ID} not in TAL_sys.units')
# for unit in TAL_sys.units:
# if not unit in sum(TAL_sub_sys.values(), ()):
# print(f'{unit.ID} not in TAL_sub_sys')
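`load_titer_with_glucose` above hits a target titer by root-finding `M304_titer_obj_fn` over the dilution-water ratio with `flx.IQ_interpolation`. A self-contained sketch of that pattern — a toy monotonic titer model with plain bisection standing in for flexsolve; all numbers and names here are illustrative, not from the TAL model:

```python
def effluent_titer(ratio, sugar=100.0, product_yield=0.3, v0=0.5, k=0.02):
    # Toy model [g/L]: more dilution water (higher ratio) -> lower titer.
    return product_yield * sugar / (v0 + k * ratio)

def solve_water_ratio(target_titer, lo=1e-3, hi=2e4, tol=1e-9):
    # Stand-in for flx.IQ_interpolation: bracketed bisection on the same
    # objective shape as M304_titer_obj_fn, i.e. titer(x) - target.
    f = lambda r: effluent_titer(r) - target_titer
    assert f(lo) > 0 > f(hi), "target titer must lie inside the bracket"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

ratio = solve_water_ratio(25.0)  # -> 35.0 for this toy model
```

The real specification additionally re-runs the downstream units (`M304_H`, `S302`, `R303`, `T301`) inside the objective so the fermentor sees the updated flows on every iteration.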
f720915b2ad6e84b10d393f7c627605ab0f69c7f | 2,018 | py | Python | tools/Polygraphy/polygraphy/config.py | spradius/TensorRT | eb5de99b523c76c2f3ae997855ad86d3a1e86a31 | ["Apache-2.0"] | 1 | 2021-08-23T01:15:16.000Z | 2021-08-23T01:15:16.000Z
#
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import sys
INTERNAL_CORRECTNESS_CHECKS = bool(os.environ.get("POLYGRAPHY_INTERNAL_CORRECTNESS_CHECKS", "0") != "0")
"""
Whether internal correctness checks are enabled.
This can be configured by setting the 'POLYGRAPHY_INTERNAL_CORRECTNESS_CHECKS' environment variable.
"""
AUTOINSTALL_DEPS = bool(os.environ.get("POLYGRAPHY_AUTOINSTALL_DEPS", "0") != "0")
"""
Whether Polygraphy will automatically install required Python packages at runtime.
This can be configured by setting the 'POLYGRAPHY_AUTOINSTALL_DEPS' environment variable.
"""
INSTALL_CMD = os.environ.get("POLYGRAPHY_INSTALL_CMD", "{:} -m pip install".format(sys.executable)).split()
"""
The command to use to automatically install dependencies. Only relevant when AUTOINSTALL_DEPS
is enabled. Defaults to ``["python3", "-m", "pip", "install"]``.
This can be configured by setting the 'POLYGRAPHY_INSTALL_CMD' environment variable to a
string containing the command; for example: ``python3 -m pip install``.
"""
ARRAY_SWAP_THRESHOLD_MB = int(os.environ.get("POLYGRAPHY_ARRAY_SWAP_THRESHOLD_MB", "-1"))
"""
The threshold, in megabytes, above which Polygraphy will evict a NumPy array from memory and swap it to disk.
A negative value disables swapping and a value of 0 causes all arrays to be saved to disk.
Disabled by default.
This can be configured by setting the 'POLYGRAPHY_ARRAY_SWAP_THRESHOLD_MB' environment variable.
"""
f720930247112fff9273c3101072dae9279f6fa7 | 4,606 | py | Python | tasks/trace_agent.py | charyveer75/datadog-agent | fdf4d2028b0dccd485eb280f4fefda84931927bc | ["Apache-2.0"] | 2 | 2020-02-11T16:05:23.000Z | 2022-03-30T19:50:28.000Z
import os
import sys
import shutil
import invoke
from invoke import task
from .utils import bin_name, get_build_flags, get_version_numeric_only, load_release_versions
from .utils import REPO_PATH
from .build_tags import get_build_tags, get_default_build_tags, LINUX_ONLY_TAGS, REDHAT_AND_DEBIAN_ONLY_TAGS, REDHAT_AND_DEBIAN_DIST
from .go import deps
BIN_PATH = os.path.join(".", "bin", "trace-agent")
DEFAULT_BUILD_TAGS = [
"netcgo",
"secrets",
"docker",
"kubeapiserver",
"kubelet",
]
@task
def build(ctx, rebuild=False, race=False, precompile_only=False, build_include=None,
build_exclude=None, major_version='7', python_runtimes='3', arch="x64"):
"""
Build the trace agent.
"""
# get env prior to windows sources so we only have to set the target architecture once
ldflags, gcflags, env = get_build_flags(ctx, arch=arch, major_version=major_version, python_runtimes=python_runtimes)
# generate windows resources
if sys.platform == 'win32':
windres_target = "pe-x86-64"
if arch == "x86":
env["GOARCH"] = "386"
windres_target = "pe-i386"
ver = get_version_numeric_only(ctx, env, major_version=major_version)
maj_ver, min_ver, patch_ver = ver.split(".")
ctx.run("windmc --target {target_arch} -r cmd/trace-agent/windows_resources cmd/trace-agent/windows_resources/trace-agent-msg.mc".format(target_arch=windres_target))
ctx.run("windres --define MAJ_VER={maj_ver} --define MIN_VER={min_ver} --define PATCH_VER={patch_ver} -i cmd/trace-agent/windows_resources/trace-agent.rc --target {target_arch} -O coff -o cmd/trace-agent/rsrc.syso".format(
maj_ver=maj_ver,
min_ver=min_ver,
patch_ver=patch_ver,
target_arch=windres_target
))
build_include = DEFAULT_BUILD_TAGS if build_include is None else build_include.split(",")
build_exclude = [] if build_exclude is None else build_exclude.split(",")
if not sys.platform.startswith('linux'):
for ex in LINUX_ONLY_TAGS:
if ex not in build_exclude:
build_exclude.append(ex)
build_tags = get_build_tags(build_include, build_exclude)
cmd = "go build {race_opt} {build_type} -tags \"{go_build_tags}\" "
cmd += "-o {agent_bin} -gcflags=\"{gcflags}\" -ldflags=\"{ldflags}\" {REPO_PATH}/cmd/trace-agent"
args = {
"race_opt": "-race" if race else "",
"build_type": "-a" if rebuild else ("-i" if precompile_only else ""),
"go_build_tags": " ".join(build_tags),
"agent_bin": os.path.join(BIN_PATH, bin_name("trace-agent", android=False)),
"gcflags": gcflags,
"ldflags": ldflags,
"REPO_PATH": REPO_PATH,
}
ctx.run("go generate {REPO_PATH}/pkg/trace/info".format(**args), env=env)
ctx.run(cmd.format(**args), env=env)
@task
def integration_tests(ctx, install_deps=False, race=False, remote_docker=False):
"""
Run integration tests for trace agent
"""
if install_deps:
deps(ctx)
test_args = {
"go_build_tags": " ".join(get_default_build_tags()),
"race_opt": "-race" if race else "",
"exec_opts": "",
}
if remote_docker:
test_args["exec_opts"] = "-exec \"inv docker.dockerize-test\""
go_cmd = 'INTEGRATION=yes go test {race_opt} -v'.format(**test_args)
prefixes = [
"./pkg/trace/test/testsuite/...",
]
for prefix in prefixes:
ctx.run("{} {}".format(go_cmd, prefix))
@task
def cross_compile(ctx, tag=""):
"""
Cross-compiles the trace-agent binaries. Use the "--tag=X" argument to specify build tag.
"""
if not tag:
print("Argument --tag=<version> is required.")
return
print("Building tag %s..." % tag)
env = {
"TRACE_AGENT_VERSION": tag,
"V": tag,
}
ctx.run("git checkout $V", env=env)
ctx.run("mkdir -p ./bin/trace-agent/$V", env=env)
ctx.run("go generate ./pkg/trace/info", env=env)
ctx.run("go get -u github.com/karalabe/xgo")
ctx.run("xgo -dest=bin/trace-agent/$V -go=1.11 -out=trace-agent-$V -targets=windows-6.1/amd64,linux/amd64,darwin-10.11/amd64 ./cmd/trace-agent", env=env)
ctx.run("mv ./bin/trace-agent/$V/trace-agent-$V-windows-6.1-amd64.exe ./bin/trace-agent/$V/trace-agent-$V-windows-amd64.exe", env=env)
ctx.run("mv ./bin/trace-agent/$V/trace-agent-$V-darwin-10.11-amd64 ./bin/trace-agent/$V/trace-agent-$V-darwin-amd64 ", env=env)
ctx.run("git checkout -")
print("Done! Binaries are located in ./bin/trace-agent/%s" % tag)
| 35.430769 | 230 | 0.650673 | import os
import sys
import shutil
import invoke
from invoke import task
from .utils import bin_name, get_build_flags, get_version_numeric_only, load_release_versions
from .utils import REPO_PATH
from .build_tags import get_build_tags, get_default_build_tags, LINUX_ONLY_TAGS, REDHAT_AND_DEBIAN_ONLY_TAGS, REDHAT_AND_DEBIAN_DIST
from .go import deps
BIN_PATH = os.path.join(".", "bin", "trace-agent")
DEFAULT_BUILD_TAGS = [
"netcgo",
"secrets",
"docker",
"kubeapiserver",
"kubelet",
]
@task
def build(ctx, rebuild=False, race=False, precompile_only=False, build_include=None,
build_exclude=None, major_version='7', python_runtimes='3', arch="x64"):
ldflags, gcflags, env = get_build_flags(ctx, arch=arch, major_version=major_version, python_runtimes=python_runtimes)
if sys.platform == 'win32':
windres_target = "pe-x86-64"
if arch == "x86":
env["GOARCH"] = "386"
windres_target = "pe-i386"
ver = get_version_numeric_only(ctx, env, major_version=major_version)
maj_ver, min_ver, patch_ver = ver.split(".")
ctx.run("windmc --target {target_arch} -r cmd/trace-agent/windows_resources cmd/trace-agent/windows_resources/trace-agent-msg.mc".format(target_arch=windres_target))
ctx.run("windres --define MAJ_VER={maj_ver} --define MIN_VER={min_ver} --define PATCH_VER={patch_ver} -i cmd/trace-agent/windows_resources/trace-agent.rc --target {target_arch} -O coff -o cmd/trace-agent/rsrc.syso".format(
maj_ver=maj_ver,
min_ver=min_ver,
patch_ver=patch_ver,
target_arch=windres_target
))
build_include = DEFAULT_BUILD_TAGS if build_include is None else build_include.split(",")
build_exclude = [] if build_exclude is None else build_exclude.split(",")
if not sys.platform.startswith('linux'):
for ex in LINUX_ONLY_TAGS:
if ex not in build_exclude:
build_exclude.append(ex)
build_tags = get_build_tags(build_include, build_exclude)
cmd = "go build {race_opt} {build_type} -tags \"{go_build_tags}\" "
cmd += "-o {agent_bin} -gcflags=\"{gcflags}\" -ldflags=\"{ldflags}\" {REPO_PATH}/cmd/trace-agent"
args = {
"race_opt": "-race" if race else "",
"build_type": "-a" if rebuild else ("-i" if precompile_only else ""),
"go_build_tags": " ".join(build_tags),
"agent_bin": os.path.join(BIN_PATH, bin_name("trace-agent", android=False)),
"gcflags": gcflags,
"ldflags": ldflags,
"REPO_PATH": REPO_PATH,
}
ctx.run("go generate {REPO_PATH}/pkg/trace/info".format(**args), env=env)
ctx.run(cmd.format(**args), env=env)
@task
def integration_tests(ctx, install_deps=False, race=False, remote_docker=False):
if install_deps:
deps(ctx)
test_args = {
"go_build_tags": " ".join(get_default_build_tags()),
"race_opt": "-race" if race else "",
"exec_opts": "",
}
if remote_docker:
test_args["exec_opts"] = "-exec \"inv docker.dockerize-test\""
go_cmd = 'INTEGRATION=yes go test {race_opt} -v'.format(**test_args)
prefixes = [
"./pkg/trace/test/testsuite/...",
]
for prefix in prefixes:
ctx.run("{} {}".format(go_cmd, prefix))
@task
def cross_compile(ctx, tag=""):
if not tag:
print("Argument --tag=<version> is required.")
return
print("Building tag %s..." % tag)
env = {
"TRACE_AGENT_VERSION": tag,
"V": tag,
}
ctx.run("git checkout $V", env=env)
ctx.run("mkdir -p ./bin/trace-agent/$V", env=env)
ctx.run("go generate ./pkg/trace/info", env=env)
ctx.run("go get -u github.com/karalabe/xgo")
ctx.run("xgo -dest=bin/trace-agent/$V -go=1.11 -out=trace-agent-$V -targets=windows-6.1/amd64,linux/amd64,darwin-10.11/amd64 ./cmd/trace-agent", env=env)
ctx.run("mv ./bin/trace-agent/$V/trace-agent-$V-windows-6.1-amd64.exe ./bin/trace-agent/$V/trace-agent-$V-windows-amd64.exe", env=env)
ctx.run("mv ./bin/trace-agent/$V/trace-agent-$V-darwin-10.11-amd64 ./bin/trace-agent/$V/trace-agent-$V-darwin-amd64 ", env=env)
ctx.run("git checkout -")
print("Done! Binaries are located in ./bin/trace-agent/%s" % tag)
| true | true |
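The trace-agent task above composes its `go build` invocation by filling a format-string template from a dict of option fragments. A minimal, self-contained sketch of that pattern — the function name and defaults here are illustrative stand-ins, not part of the real task file:

```python
import os


def build_command(race=False, rebuild=False, build_tags=(),
                  repo_path="github.com/example/repo"):
    """Compose a go-build command the way the task above does: a template
    with named placeholders filled from an args dict via str.format."""
    cmd = "go build {race_opt} {build_type} -tags \"{go_build_tags}\" "
    cmd += "-o {agent_bin} {repo_path}/cmd/trace-agent"
    args = {
        "race_opt": "-race" if race else "",
        "build_type": "-a" if rebuild else "",
        "go_build_tags": " ".join(build_tags),
        "agent_bin": os.path.join("bin", "trace-agent"),
        "repo_path": repo_path,
    }
    return cmd.format(**args)


print(build_command(race=True, build_tags=["linux", "docker"]))
```

Keeping the option logic in a dict keeps the template readable and lets the same `args` dict drive several commands, as the task does with its `go generate` call.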
f7209497e72c208305cb4e1cce93a790ea4e4114 | 328 | py | Python | emoji_chengyu/main.py | alingse/emoji-chengyu | 2d4436212c1d2899dfc12a1c965ea2ddce9a4aab | [
"MIT"
] | 3 | 2020-04-28T03:25:36.000Z | 2022-01-24T04:52:01.000Z | emoji_chengyu/main.py | alingse/emoji-chengyu | 2d4436212c1d2899dfc12a1c965ea2ddce9a4aab | [
"MIT"
] | null | null | null | emoji_chengyu/main.py | alingse/emoji-chengyu | 2d4436212c1d2899dfc12a1c965ea2ddce9a4aab | [
"MIT"
] | 1 | 2020-04-28T03:25:49.000Z | 2020-04-28T03:25:49.000Z | import itertools
from emoji_chengyu.puzzle import gen_puzzle
def emoji_chengyu():
N = 100
pg = gen_puzzle()
puzzles = list(itertools.islice(pg, N))
puzzles.sort(key=lambda p: sum(p.mask), reverse=True)
M = 20
for puzzle in puzzles[:M]:
print(''.join(puzzle.puzzle), puzzle.chengyu_item.word)
| 21.866667 | 63 | 0.670732 | true | true
f72094b30996c32bb9a24f3c3252221bebecd3fa | 2,378 | py | Python | tests/technical_ratio_test.py | bopo/mooquant | 244a87d4cd8b4d918eec4f16905e0921c3b39f50 | [
"Apache-2.0"
] | 21 | 2017-09-07T16:08:21.000Z | 2020-10-15T13:42:21.000Z | tests/technical_ratio_test.py | bopo/MooQuant | 244a87d4cd8b4d918eec4f16905e0921c3b39f50 | [
"Apache-2.0"
] | 209 | 2018-10-09T11:57:39.000Z | 2021-03-25T21:40:30.000Z | tests/technical_ratio_test.py | bopo/MooQuant | 244a87d4cd8b4d918eec4f16905e0921c3b39f50 | [
"Apache-2.0"
] | 15 | 2018-11-17T20:14:37.000Z | 2022-02-04T23:55:29.000Z | # -*- coding: utf-8 -*-
# MooQuant
#
# Copyright 2011-2015 Gabriel Martin Becedillas Ruiz
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
.. moduleauthor:: Gabriel Martin Becedillas Ruiz <gabriel.becedillas@gmail.com>
"""
from mooquant import dataseries
from mooquant.technical import ratio
from . import common
class TestCase(common.TestCase):
def __buildRatio(self, values, ratioMaxLen=None):
seqDS = dataseries.SequenceDataSeries()
ret = ratio.Ratio(seqDS, ratioMaxLen)
for value in values:
seqDS.append(value)
return ret
def testSimple(self):
ratio = self.__buildRatio([1, 2, 1])
self.assertEqual(ratio[0], None)
self.assertEqual(ratio[1], 1)
self.assertEqual(ratio[2], -0.5)
self.assertEqual(ratio[-1], -0.5)
with self.assertRaises(IndexError):
ratio[3]
self.assertEqual(ratio[-2], ratio[1])
self.assertEqual(ratio[-1], ratio[2])
self.assertEqual(len(ratio.getDateTimes()), 3)
for i in range(len(ratio)):
self.assertEqual(ratio.getDateTimes()[i], None)
def testNegativeValues(self):
ratio = self.__buildRatio([-1, -2, -1])
self.assertEqual(ratio[0], None)
self.assertEqual(ratio[1], -1)
self.assertEqual(ratio[2], 0.5)
self.assertEqual(ratio[-1], 0.5)
with self.assertRaises(IndexError):
ratio[3]
self.assertEqual(ratio[-2], ratio[1])
self.assertEqual(ratio[-1], ratio[2])
self.assertEqual(len(ratio.getDateTimes()), 3)
for i in range(len(ratio)):
self.assertEqual(ratio.getDateTimes()[i], None)
def testBounded(self):
ratio = self.__buildRatio([-1, -2, -1], 2)
self.assertEqual(ratio[0], -1)
self.assertEqual(ratio[1], 0.5)
self.assertEqual(len(ratio), 2)
| 30.883117 | 79 | 0.647603 | true | true
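The expectations in the test above pin down what MooQuant's `Ratio` dataseries computes: for each step, the change relative to the magnitude of the previous value, with `None` for the first element. A pure-Python sketch of that rule (not the library implementation itself):

```python
def ratios(values):
    """(current - previous) / abs(previous); None for the first element,
    matching the expectations in technical_ratio_test above."""
    out = [None]
    for prev, cur in zip(values, values[1:]):
        out.append((cur - prev) / abs(prev))
    return out


print(ratios([1, 2, 1]))     # → [None, 1.0, -0.5]
print(ratios([-1, -2, -1]))  # → [None, -1.0, 0.5]
```

Dividing by `abs(prev)` rather than `prev` is what makes the sign of the result track the direction of the move even for negative series, as the `testNegativeValues` case checks.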
f72094fcc0eabe24153127ed8f8cfab3259c6ceb | 1,247 | py | Python | IRIS_data_download/IRIS_download_support/obspy/io/segy/util.py | earthinversion/Fnet_IRIS_data_automated_download | 09a6e0c992662feac95744935e038d1c68539fa1 | [
"MIT"
] | 2 | 2020-03-05T01:03:01.000Z | 2020-12-17T05:04:07.000Z | IRIS_data_download/IRIS_download_support/obspy/io/segy/util.py | earthinversion/Fnet_IRIS_data_automated_download | 09a6e0c992662feac95744935e038d1c68539fa1 | [
"MIT"
] | 4 | 2021-03-31T19:25:55.000Z | 2021-12-13T20:32:46.000Z | IRIS_data_download/IRIS_download_support/obspy/io/segy/util.py | earthinversion/Fnet_IRIS_data_automated_download | 09a6e0c992662feac95744935e038d1c68539fa1 | [
"MIT"
] | 2 | 2020-09-08T19:33:40.000Z | 2021-04-05T09:47:50.000Z | # -*- coding: utf-8 -*-
from __future__ import (absolute_import, division, print_function,
unicode_literals)
from future.builtins import * # NOQA
from struct import unpack
from obspy.core.util.libnames import _load_cdll
# Import shared libsegy
clibsegy = _load_cdll("segy")
def unpack_header_value(endian, packed_value, length, special_format):
"""
Unpacks a single value.
"""
# Use special format if necessary.
if special_format:
fmt = ('%s%s' % (endian, special_format)).encode('ascii', 'strict')
return unpack(fmt, packed_value)[0]
# Unpack according to different lengths.
elif length == 2:
format = ('%sh' % endian).encode('ascii', 'strict')
return unpack(format, packed_value)[0]
# Update: Seems to be correct. Two's complement integers seem to be
# the common way to store integer values.
elif length == 4:
format = ('%si' % endian).encode('ascii', 'strict')
return unpack(format, packed_value)[0]
# The unassigned field. Since it is unclear how this field is
# encoded it will just be stored as a string.
elif length == 8:
return packed_value
# Should not happen
else:
raise Exception
| 31.974359 | 75 | 0.652767 | true | true
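`unpack_header_value` above builds a `struct` format string from an endian prefix (`<` or `>`) and a code chosen by the field length (`h` for 2 bytes, `i` for 4). A short stdlib-only demonstration of that mechanism — a simplified sketch, not the full SEG-Y reader:

```python
from struct import unpack


def unpack_int(endian, packed_value):
    """Pick the struct code from the byte length, as the SEG-Y helper does:
    2 bytes -> signed short ('h'), 4 bytes -> signed int ('i')."""
    length = len(packed_value)
    if length == 2:
        fmt = ('%sh' % endian).encode('ascii')
    elif length == 4:
        fmt = ('%si' % endian).encode('ascii')
    else:
        raise ValueError("unsupported length %d" % length)
    return unpack(fmt, packed_value)[0]


print(unpack_int('>', b'\x00\x01'))          # big-endian short → 1
print(unpack_int('<', b'\x01\x00\x00\x00'))  # little-endian int → 1
```

Because `h` and `i` are signed codes, two's-complement values come back negative, which is what the "Two's complement integers" comment in the original relies on.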
f72095d4a18d4936ed6471f607ee84100d63dad6 | 2,845 | py | Python | MMLanScan/Data/build_port_services_list.py | cyb3rc/MMLanScan | 60cf1cb9476bad8ee522780ce4df5513a139f47d | [
"MIT"
] | null | null | null | MMLanScan/Data/build_port_services_list.py | cyb3rc/MMLanScan | 60cf1cb9476bad8ee522780ce4df5513a139f47d | [
"MIT"
] | null | null | null | MMLanScan/Data/build_port_services_list.py | cyb3rc/MMLanScan | 60cf1cb9476bad8ee522780ce4df5513a139f47d | [
"MIT"
] | null | null | null | #!/usr/bin/python
import urllib2
import xml.etree.ElementTree as ElementTree
import re
def refine_table(table):
result = table
result = re.sub(r"<td.*?>", "<td>", result)
result = re.sub(r"<tr.*?>", "<tr>", result)
result = re.sub(r"<a.*?>(.*?)</a>", "\\1", result)
result = re.sub(r"<span.*?>(.*?)</span>", "\\1", result)
result = re.sub(r"<b.*?>(.*?)</b>", "\\1", result)
result = re.sub(r"<br\s?/>", "", result)
result = re.sub(r"<sup.*?/sup>", "", result)
result = re.sub(r"<sub.*?/sub>", "", result)
result = re.sub(r"<caption.*?/caption>", "", result)
result = re.sub(r"</abbr>", "", result)
result = re.sub(r"\n", "", result)
    result = re.sub(r"<div.*?>.*?<li>(.*?)</li>.*?</div>", "\\1", result, flags=re.M | re.S)
return result
def main():
response = urllib2.urlopen("https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers")
content = response.read()
tables = re.findall(r"<table class=\"wikitable sortable collapsible\">(.*?)</table>", content, re.M | re.S)
table_well_known = refine_table(tables[0])
table_registered_known = refine_table(tables[1])
whole_table = "<table>" + table_well_known + table_registered_known + "</table>"
tree = ElementTree.fromstring(whole_table)
port_info = {}
for child in tree:
port = child[0].text
tcp = child[1].text
udp = child[2].text
desc = (child[3][0].text if len(child[3]) > 0 else child[3].text).replace(",", "").replace(".", "")
# skip invalid entries
if not port:
continue
if ("Reserved" in [tcp, udp]) or ("N/A" in [tcp, udp]):
continue
# defaulting to TCP
if (not tcp and not udp):
tcp = "TCP"
elif tcp and tcp.lower() in ["yes", "assigned", "?"]:
tcp = "TCP"
        if udp and udp.lower() in ["yes", "assigned", "?"]:
            udp = "UDP"
        # skip entries whose port field is not a single port or a numeric range
        try:
            port_range = [int(port)] if port.isdigit() else map(int, port.split("-"))
        except ValueError:
            continue
for p in port_range:
if p not in port_info:
port_info[p] = [set(),[]]
if tcp == "TCP":
port_info[p][0].add("tcp")
if udp == "UDP":
port_info[p][0].add("udp")
port_info[p][1].append(desc)
with open("services.list", "w") as fsvcs:
for port, info in sorted(port_info.items()):
for proto in sorted(info[0]):
svc = (" | ".join(info[1])).replace(u"\u00e9", "e").replace(u"\u2013", "-").replace(u"\u2014", "-")
fsvcs.write(("%s,%s,%s" % (proto, port, svc)))
fsvcs.write("\n")
if __name__ == "__main__":
main()
| 31.611111 | 115 | 0.518102 | true | true
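The scraper above accepts either a single port or a `low-high` field and rejects anything non-numeric. That normalization step in isolation — written for Python 3 here, and mirroring the original's handling, where a range contributes its two endpoints rather than every port in between:

```python
def expand_ports(port_field):
    """Mirror the scraper's port handling: a lone port becomes [port];
    a 'low-high' field becomes [low, high] (the endpoints, as in the
    original loop); non-numeric fields yield None."""
    if port_field.isdigit():
        return [int(port_field)]
    try:
        return [int(p) for p in port_field.split("-")]
    except ValueError:
        return None


print(expand_ports("80"))         # → [80]
print(expand_ports("6881-6889"))  # → [6881, 6889]
print(expand_ports("Reserved"))   # → None
```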
f72095f560d4de6eaf24da310f2f00ce19300c51 | 4,718 | py | Python | API/content/views.py | kasimbozdag/SWE_573 | 4bce24f98fe6980b1f2c83196b8454b56118186b | [
"MIT"
] | null | null | null | API/content/views.py | kasimbozdag/SWE_573 | 4bce24f98fe6980b1f2c83196b8454b56118186b | [
"MIT"
] | 52 | 2019-02-19T10:43:11.000Z | 2022-02-10T10:36:37.000Z | API/content/views.py | kasimbozdag/SWE_573 | 4bce24f98fe6980b1f2c83196b8454b56118186b | [
"MIT"
] | null | null | null | import datetime
from django.shortcuts import render, get_object_or_404
# Create your views here.
from rest_framework.generics import ListAPIView
from rest_framework.response import Response
from rest_framework.status import HTTP_200_OK
from rest_framework.views import APIView
from content.models import Content
from content.serializers import ContentSerializer, ContentsSerializer, ContentsDetailSerializer
from lesson.models import Lesson, Contents
class ContentCreateAPIView(APIView):
def post(self, request, *args, **kwargs):
user = request.user
data = request.data
lesson = get_object_or_404(Lesson, pk=kwargs.get("pk"))
content = {'text': data['text'], "sub_title": data['title']}
if "file" in request.FILES:
file = request.FILES['file']
content['file'] = file
serializer = ContentSerializer(data=content)
serializer.is_valid(raise_exception=True)
serializer.save()
place = Contents.objects.filter(lesson=lesson.id).count() + 1
contents = {
"lesson": kwargs.get("pk"),
"content": serializer.instance.pk,
"owner": user.pk,
"place": place,
}
serializer = ContentsSerializer(data=contents)
serializer.is_valid(raise_exception=True)
serializer.save()
return Response(serializer.data, status=HTTP_200_OK)
class ContentsListAPIView(ListAPIView):
serializer_class = ContentsDetailSerializer
queryset = Contents.objects.all()
def get_queryset(self):
return Contents.objects.filter(lesson=self.kwargs.get("pk"), is_active=True).order_by("place")
class ContentsAPIView(APIView):
def put(self, request, *args, **kwargs):
user = request.user
data = request.data
pk = kwargs['pk']
contents = get_object_or_404(Contents, pk=pk)
if not contents.is_authorized(request.user):
return Response(status=401)
content = contents.content
content_data = {'text': data['text'], "last_edited_at": datetime.datetime.now(), "sub_title": data['title']}
if "file" in request.FILES:
file = request.FILES['file']
content_data['file'] = file
serializer = ContentSerializer(content, data=content_data)
serializer.is_valid(raise_exception=True)
serializer.save()
contents_data = {
"lesson": contents.lesson_id,
"content": content.pk,
"owner": contents.owner.pk,
"last_edited_at": datetime.datetime.now(),
}
serializer = ContentsSerializer(contents, data=contents_data)
serializer.is_valid(raise_exception=True)
serializer.save()
return Response(serializer.data, status=HTTP_200_OK)
def get(self, request, *args, **kwargs):
pk = kwargs['pk']
contents = get_object_or_404(Contents, pk=pk)
serializer = ContentsDetailSerializer(contents)
return Response(serializer.data, status=HTTP_200_OK)
class ContentsInactivateAPIView(APIView):
def put(self, request, *args, **kwargs):
pk = kwargs['pk']
content = get_object_or_404(Contents, pk=pk)
if not content.is_authorized(request.user):
return Response(status=401)
content.is_active = False
content.save()
return Response(status=HTTP_200_OK)
class ContentsActivateAPIView(APIView):
def put(self, request, *args, **kwargs):
pk = kwargs['pk']
content = get_object_or_404(Contents, pk=pk)
if not content.is_authorized(request.user):
return Response(status=401)
content.is_active = True
content.save()
return Response(status=HTTP_200_OK)
class TeacherContentsListAPIView(ListAPIView):
serializer_class = ContentsDetailSerializer
queryset = Contents.objects.all()
def get_queryset(self):
return Contents.objects.filter(lesson=self.kwargs.get("pk")).order_by("place")
class ChangePlaceAPIView(APIView):
def put(self, request, *args, **kwargs):
pk = kwargs['pk']
content = get_object_or_404(Contents, pk=pk)
if not content.is_authorized(request.user):
return Response(status=401)
        place = request.data['place']
        # Shift every row between the old and the new position by one slot,
        # in the direction opposite to the move, then drop the item in place.
        lte = content.place
        gte = place
        change = 1
        if lte < gte:
            lte = place
            gte = content.place
            change = -1
        contents = Contents.objects.filter(place__gte=gte).filter(place__lte=lte)
        for l in contents:
            l.place = l.place + change
            l.save()
        content.place = place
        content.save()
        return Response(status=HTTP_200_OK) | 35.473684 | 116 | 0.652183 | true | true
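`ChangePlaceAPIView` above reorders a lesson's contents by shifting every item between the old and new positions one slot toward the vacated place. The same algorithm on a plain dict of positions, independent of Django (names are illustrative):

```python
def move_item(places, item_id, new_place):
    """places maps item id -> 1-based position. Move item_id to new_place
    and shift everything in the affected window by one slot, mirroring
    the view's gte/lte/change logic."""
    old_place = places[item_id]
    lo, hi = min(old_place, new_place), max(old_place, new_place)
    shift = 1 if new_place < old_place else -1
    for other, pos in places.items():
        if lo <= pos <= hi:
            places[other] = pos + shift
    # The moved item was shifted too; overwrite it with its target slot.
    places[item_id] = new_place
    return places


order = {"a": 1, "b": 2, "c": 3, "d": 4}
print(move_item(order, "d", 2))  # → {'a': 1, 'b': 3, 'c': 4, 'd': 2}
```

Moving an item earlier pushes the displaced rows down (+1); moving it later pulls them up (−1), so positions stay contiguous after every move.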
f72095f7242e0943bddfcfc8d4c0d806cf6d4b17 | 1,274 | py | Python | build/lib/elang/plot/utils/embedding.py | onlyphantom/elangdev | bdb80e10e98f98ef6510c313cda55daf9464d5c4 | [
"CC0-1.0"
] | 5 | 2020-02-26T15:05:47.000Z | 2022-01-25T01:15:27.000Z | build/lib/elang/plot/utils/embedding.py | onlyphantom/elangdev | bdb80e10e98f98ef6510c313cda55daf9464d5c4 | [
"CC0-1.0"
] | null | null | null | build/lib/elang/plot/utils/embedding.py | onlyphantom/elangdev | bdb80e10e98f98ef6510c313cda55daf9464d5c4 | [
"CC0-1.0"
] | 1 | 2020-02-13T08:14:11.000Z | 2020-02-13T08:14:11.000Z | import sys, os.path
import gensim
from gensim.models import Word2Vec
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
def plot2d_demo(model, words=None):
assert (
model.vector_size >= 2
), "This function expects a model of size 2 (2-dimension word vectors) or higher."
if words is None:
words = [words for words in model.wv.vocab]
word_vec = np.array([model.wv[word] for word in words])
if model.vector_size > 2:
pca = PCA(2)
word_vec = pca.fit_transform(word_vec)
with plt.style.context("seaborn-pastel"):
plt.figure(figsize=(7, 5), dpi=180)
plt.scatter(word_vec[:, 0], word_vec[:, 1], s=5, edgecolors="k", c="c")
for word, (x, y) in zip(words, word_vec):
plt.text(x - 0.02, y + 0.02, word, fontsize=5)
plt.show()
if __name__ == "__main__":
MODEL_PATH = (
os.path.abspath(os.path.join(os.path.dirname(__file__), "../../"))
+ "/word2vec/model/demo2d.model"
# + "/word2vec/model/demo500d.model"
)
model = Word2Vec.load(MODEL_PATH)
print("Loaded from Path:", MODEL_PATH, "\n", model)
# plot2d_demo(model, words=["bca", "mandiri", "uob", "algoritma", "airbnb"])
plot2d_demo(model)
| 28.311111 | 86 | 0.625589 | true | true
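`plot2d_demo` above projects word vectors of any dimensionality down to 2-D with PCA before scattering them. The projection step alone, without gensim or matplotlib — using an SVD-based PCA as a stand-in for scikit-learn's `PCA(2)`, which the script itself calls:

```python
import numpy as np


def to_2d(word_vec):
    """PCA to two components via SVD: center the rows, then project onto
    the top two right singular vectors. Input that is already 2-D (or
    lower) is returned unchanged, as plot2d_demo does."""
    x = np.asarray(word_vec, dtype=float)
    if x.shape[1] <= 2:
        return x
    centered = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T


vecs = np.random.RandomState(0).rand(5, 50)  # five fake 50-d "word vectors"
print(to_2d(vecs).shape)  # → (5, 2)
```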
f720966df99b4facf9ee616398c70c34a8e38598 | 49,071 | py | Python | tensorflow/python/data/experimental/kernel_tests/snapshot_test.py | weikhor/tensorflow | ce047fc05c7b5ff54868ba53d724d9c171c4adbb | [
"Apache-2.0"
] | 10 | 2021-05-25T17:43:04.000Z | 2022-03-08T10:46:09.000Z | tensorflow/python/data/experimental/kernel_tests/snapshot_test.py | weikhor/tensorflow | ce047fc05c7b5ff54868ba53d724d9c171c4adbb | [
"Apache-2.0"
] | 1,056 | 2019-12-15T01:20:31.000Z | 2022-02-10T02:06:28.000Z | tensorflow/python/data/experimental/kernel_tests/snapshot_test.py | weikhor/tensorflow | ce047fc05c7b5ff54868ba53d724d9c171c4adbb | [
"Apache-2.0"
] | 6 | 2016-09-07T04:00:15.000Z | 2022-01-12T01:47:38.000Z | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for the `SnapshotDataset` transformation."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import multiprocessing
import os
import shutil
import time
from absl.testing import parameterized
import numpy as np
from tensorflow.python.data.experimental.ops import snapshot
from tensorflow.python.data.kernel_tests import checkpoint_test_base
from tensorflow.python.data.kernel_tests import test_base
from tensorflow.python.data.kernel_tests import tf_record_test_base
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.data.ops import readers as core_readers
from tensorflow.python.framework import combinations
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_array_ops
from tensorflow.python.ops import string_ops
from tensorflow.python.platform import test
def is_graphdef_file(filename):
return filename.endswith("-graph.pbtxt")
def is_temp_file(filename):
return "-tmp-" in filename
def listdir_and_filter(dirname, filter_fn):
return [path for path in sorted(os.listdir(dirname)) if filter_fn(path)]
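The two predicates above let `assertSnapshotDirectoryContains` ignore debug graphdefs and half-written temp files when listing a snapshot directory. A quick stdlib check of how they compose with `listdir_and_filter` — the three helpers are restated verbatim so the snippet runs on its own:

```python
import os
import tempfile


def is_graphdef_file(filename):
    return filename.endswith("-graph.pbtxt")


def is_temp_file(filename):
    return "-tmp-" in filename


def listdir_and_filter(dirname, filter_fn):
    return [path for path in sorted(os.listdir(dirname)) if filter_fn(path)]


tmp = tempfile.mkdtemp()
for name in ["00000000.shard", "run-graph.pbtxt", "part-tmp-1234"]:
    open(os.path.join(tmp, name), "w").close()

# Keep only entries that are neither graphdef dumps nor temp files.
kept = listdir_and_filter(tmp, lambda p: not (is_graphdef_file(p) or is_temp_file(p)))
print(kept)  # → ['00000000.shard']
```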
class SnapshotTest(tf_record_test_base.TFRecordTestBase,
parameterized.TestCase):
def setUp(self):
super(SnapshotTest, self).setUp()
tmpdir = self.get_temp_dir()
tmpdir = os.path.join(tmpdir, "snapshot")
os.mkdir(tmpdir)
self._snapshot_dir = tmpdir
def tearDown(self):
super(SnapshotTest, self).tearDown()
shutil.rmtree(self._snapshot_dir)
def createTFRecords(self, num_files=10, num_records=100):
self._num_files = num_files
self._num_records = num_records
self._filenames = self._createFiles()
def removeTFRecords(self):
for filename in self._filenames:
os.remove(filename)
self._filenames = []
self._num_files = None
self._num_records = None
def assertDatasetProducesSet(self, dataset, expected):
actual = []
next_fn = self.getNext(dataset)
for _ in range(len(expected)):
elem = self.evaluate(next_fn())
actual.append(elem)
self.assertCountEqual(actual, expected)
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(next_fn())
def assertSnapshotDirectoryContains(self, directory, num_fingerprints,
num_runs_per_fingerprint,
num_snapshot_shards_per_run):
# Ignore the graphdef pbtxts we write for debugging purposes and temporary
# files that are an artifact of how TF writes files.
dirlist = listdir_and_filter(
directory,
lambda p: not (is_graphdef_file(p) or is_temp_file(p)))
self.assertLen(dirlist, num_fingerprints)
for i in range(num_fingerprints):
fingerprint_dir = os.path.join(directory, dirlist[i])
fingerprint_dir_list = listdir_and_filter(
fingerprint_dir, lambda p: not is_temp_file(p))
self.assertLen(fingerprint_dir_list, num_runs_per_fingerprint + 1)
self.assertEqual(fingerprint_dir_list[num_runs_per_fingerprint],
"snapshot.metadata")
for j in range(num_runs_per_fingerprint):
run_dir = os.path.join(fingerprint_dir, fingerprint_dir_list[j])
run_dirlist = sorted(os.listdir(run_dir))
self.assertLen(run_dirlist, num_snapshot_shards_per_run)
file_counter = 0
for filename in run_dirlist:
self.assertEqual(filename, "%08d.shard" % file_counter)
file_counter += 1
@combinations.generate(test_base.default_test_combinations())
def testCreateSnapshotDataset(self):
dataset = dataset_ops.Dataset.from_tensors([1, 2, 3])
dataset.apply(snapshot.snapshot(self._snapshot_dir))
@combinations.generate(test_base.default_test_combinations())
def testReadSnapshotDatasetDefault(self):
self.createTFRecords()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 100)
]
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset, expected)
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset2, expected)
@combinations.generate(test_base.default_test_combinations())
def testReadSnapshotDatasetAutoWriteSnappyRead(self):
self.createTFRecords()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 100)
]
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.snapshot(self._snapshot_dir, compression="AUTO"))
self.assertDatasetProduces(dataset, expected)
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.snapshot(self._snapshot_dir, compression="SNAPPY"))
self.assertDatasetProduces(dataset2, expected)
@combinations.generate(test_base.default_test_combinations())
def testReadSnapshotDatasetCustomShardFn(self):
self.createTFRecords()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 100)
]
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.snapshot(self._snapshot_dir, shard_func=lambda _: np.int64(0)))
self.assertDatasetProduces(dataset, expected)
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=1)
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.snapshot(self._snapshot_dir, shard_func=lambda _: 0))
self.assertDatasetProduces(dataset2, expected)
@combinations.generate(test_base.default_test_combinations())
def testReadSnapshotDatasetCustomReaderFn(self):
self.createTFRecords()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 100)
]
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.snapshot(
self._snapshot_dir,
reader_func=(
lambda ds: ds.interleave( # pylint:disable=g-long-lambda
lambda x: x,
cycle_length=4,
num_parallel_calls=4))))
self.assertDatasetProduces(dataset, expected)
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.snapshot(
self._snapshot_dir,
reader_func=(
lambda ds: ds.interleave( # pylint:disable=g-long-lambda
lambda x: x,
cycle_length=4,
num_parallel_calls=4))))
self.assertDatasetProducesSet(dataset2, expected)
@combinations.generate(test_base.default_test_combinations())
def testSnapshotDatasetInvalidShardFn(self):
dataset = dataset_ops.Dataset.range(1000)
with self.assertRaises(TypeError):
dataset = dataset.apply(
snapshot.snapshot(
self._snapshot_dir, shard_func=lambda _: "invalid_fn"))
next_fn = self.getNext(dataset)
self.evaluate(next_fn())
@combinations.generate(test_base.default_test_combinations())
def testSnapshotDatasetInvalidReaderFn(self):
dataset = dataset_ops.Dataset.range(1000)
with self.assertRaises(TypeError):
dataset = dataset.apply(
snapshot.snapshot(self._snapshot_dir, reader_func=lambda x: x + 1))
next_fn = self.getNext(dataset)
self.evaluate(next_fn())
@combinations.generate(test_base.default_test_combinations())
def testRoundtripEmptySnapshot(self):
dataset = dataset_ops.Dataset.range(0)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset, [])
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=0)
dataset2 = dataset_ops.Dataset.range(0)
    dataset2 = dataset2.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset2, [])
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotDatasetSimple(self):
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset, list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotDatasetMultipleFingerprints(self):
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset1, list(range(1000)))
dataset2 = dataset_ops.Dataset.range(2000)
dataset2 = dataset2.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset2, list(range(2000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=2,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotDatasetSameFingerprintMultipleCompleteRuns(self):
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset1, list(range(1000)))
dataset2 = dataset_ops.Dataset.range(1000)
dataset2 = dataset2.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset2, list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotDatasetSameFingerprintIncompleteRunRestart(self):
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(snapshot.snapshot(self._snapshot_dir))
next1 = self.getNext(dataset1)
for i in range(500):
self.assertEqual(i, self.evaluate(next1()))
dataset2 = dataset_ops.Dataset.range(1000)
dataset2 = dataset2.apply(snapshot.snapshot(self._snapshot_dir))
next2 = self.getNext(dataset2)
for i in range(500):
self.assertEqual(i, self.evaluate(next2()))
for i in range(500, 1000):
self.assertEqual(i, self.evaluate(next1()))
self.assertEqual(i, self.evaluate(next2()))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=2,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotCustomShardFunction(self):
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.enumerate()
dataset = dataset.apply(
snapshot.snapshot(self._snapshot_dir, shard_func=lambda i, _: i % 2))
dataset = dataset.map(lambda _, elem: elem)
self.assertDatasetProduces(dataset, list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=2)
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotDatasetWithTuples(self):
dataset1 = dataset_ops.Dataset.range(0, 1000)
dataset2 = dataset_ops.Dataset.range(1000, 2000)
dataset3 = dataset_ops.Dataset.range(2000, 3000)
dataset4 = dataset_ops.Dataset.range(3000, 4000)
dataset = dataset_ops.Dataset.zip((dataset1, dataset2, dataset3, dataset4))
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
expected = list(
zip(
range(0, 1000), range(1000, 2000), range(2000, 3000),
range(3000, 4000)))
self.assertDatasetProduces(dataset, expected)
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotShuffleSameFingerprint(self):
def make_dataset():
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.shuffle(1000)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
return dataset
dataset1 = make_dataset()
self.assertDatasetProducesSet(dataset1, list(range(1000)))
dataset2 = make_dataset()
self.assertDatasetProducesSet(dataset2, list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testReadUsingFlatMap(self):
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset, list(range(1000)))
flat_map = dataset_ops.Dataset.from_tensors(dataset).flat_map(lambda x: x)
self.assertDatasetProduces(flat_map, list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testReadOptimizableUsingFlatMap(self):
dataset = dataset_ops.Dataset.range(1000)
# Will be optimized into ShuffleAndRepeat.
dataset = dataset.shuffle(10)
dataset = dataset.repeat(2)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProducesSet(dataset, 2 * list(range(1000)))
flat_map = dataset_ops.Dataset.from_tensors(dataset).flat_map(lambda x: x)
self.assertDatasetProducesSet(flat_map, 2 * list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testRepeatAndPrefetch(self):
"""This test reproduces github.com/tensorflow/tensorflow/issues/48903."""
dataset = dataset_ops.Dataset.from_tensor_slices(np.random.rand(16, 32))
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
dataset = dataset.shuffle(buffer_size=16)
dataset = dataset.batch(16)
dataset = dataset.repeat()
dataset = dataset.prefetch(1)
next_element = self.getNext(dataset)
for _ in range(30):
self.evaluate(next_element())
class LegacySnapshotTest(tf_record_test_base.TFRecordTestBase,
parameterized.TestCase):
def setUp(self):
super(LegacySnapshotTest, self).setUp()
self.removeTFRecords()
tmpdir = self.get_temp_dir()
tmpdir = os.path.join(tmpdir, "snapshot")
os.mkdir(tmpdir)
self.snapshot_dir = tmpdir
def tearDown(self):
super(LegacySnapshotTest, self).tearDown()
shutil.rmtree(self.snapshot_dir)
def removeTFRecords(self):
for filename in self._filenames:
os.remove(filename)
self._filenames = []
def setUpTFRecord(self, num_files=10, num_records=10):
self._num_files = num_files
self._num_records = num_records
self._filenames = self._createFiles()
def makeSnapshotDirectory(self):
return self.snapshot_dir
def assertSnapshotDirectoryContains(self, directory, num_fingerprints,
num_runs_per_fp, num_snapshot_files):
# Ignore the graphdef pbtxts we write for debugging purposes and temporary
# files that are an artifact of how TF writes files.
dirlist = listdir_and_filter(
directory,
lambda p: not (is_graphdef_file(p) or is_temp_file(p)))
self.assertLen(dirlist, num_fingerprints)
for i in range(num_fingerprints):
fingerprint_dir = os.path.join(directory, dirlist[i])
fingerprint_dir_list = listdir_and_filter(
fingerprint_dir, lambda p: not is_temp_file(p))
self.assertLen(fingerprint_dir_list, num_runs_per_fp + 1)
self.assertEqual(fingerprint_dir_list[num_runs_per_fp],
"snapshot.metadata")
for j in range(num_runs_per_fp):
run_dir = os.path.join(fingerprint_dir, fingerprint_dir_list[j])
run_dirlist = sorted(os.listdir(run_dir))
self.assertLen(run_dirlist, num_snapshot_files)
file_counter = 0
for filename in run_dirlist:
self.assertEqual(filename, "%08d.snapshot" % file_counter)
file_counter += 1
@combinations.generate(test_base.default_test_combinations())
def testWriteDifferentPipelinesInOneDirectory(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset, list(range(1000)))
dataset = dataset_ops.Dataset.range(1001)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset, list(range(1001)))
self.assertSnapshotDirectoryContains(tmpdir, 2, 1, 1)
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotMultipleSimultaneous(self):
tmpdir = self.snapshot_dir
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(snapshot.legacy_snapshot(tmpdir))
next1 = self.getNext(dataset1)
dataset2 = dataset_ops.Dataset.range(1000)
dataset2 = dataset2.apply(snapshot.legacy_snapshot(tmpdir))
next2 = self.getNext(dataset2)
for i in range(0, 1000):
self.assertEqual(i, self.evaluate(next1()))
self.assertEqual(i, self.evaluate(next2()))
    # We check that only one copy of the metadata has been written, and that
    # the writer that lost the race falls back to passthrough mode.
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(test_base.default_test_combinations())
def testGetNextCreatesDir(self):
tmpdir = self.snapshot_dir
# We create two iterators but call getNext on only one.
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(snapshot.legacy_snapshot(tmpdir))
next1 = self.getNext(dataset1)
dataset2 = dataset_ops.Dataset.range(1001)
dataset2 = dataset2.apply(snapshot.legacy_snapshot(tmpdir))
_ = self.getNext(dataset2)
for _ in range(1000):
self.evaluate(next1())
# We check that only one directory is created.
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testWriteSnapshotSimpleSuccessful(self, compression):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
self.assertDatasetProduces(dataset, list(range(1000)))
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testWriteSnapshotRepeatAfterwards(self, compression):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
dataset = dataset.repeat(10)
self.assertDatasetProduces(dataset, list(range(10)) * 10)
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testWriteSnapshotMixTypes(self, compression):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
def map_fn(x):
return (x, string_ops.as_string(x), string_ops.as_string(2 * x), 2 * x)
dataset = dataset.map(map_fn)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
dataset = dataset.repeat(10)
expected = []
for i in range(10):
expected.append((i, str(i), str(2 * i), 2 * i))
self.assertDatasetProduces(dataset, expected * 10)
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(test_base.default_test_combinations())
def testSpecifySnapshotNameWriteAndRead(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, snapshot_name="my_custom_snapshot"))
dataset = dataset.repeat(10)
self.assertDatasetProduces(dataset, list(range(10)) * 10)
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
self.assertTrue(
os.path.exists(os.path.join(tmpdir, "custom-my_custom_snapshot")))
self.assertTrue(
os.path.exists(
os.path.join(tmpdir, "custom-my_custom_snapshot", "custom")))
@combinations.generate(test_base.default_test_combinations())
def testForcePassthroughMode(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, mode="passthrough"))
dataset = dataset.repeat(10)
self.assertDatasetProduces(dataset, list(range(10)) * 10)
self.assertSnapshotDirectoryContains(tmpdir, 0, 0, 0)
@combinations.generate(test_base.default_test_combinations())
def testForceWriteMode(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir, mode="write"))
dataset = dataset.repeat(10)
self.assertDatasetProduces(dataset, list(range(10)) * 10)
# We will end up writing 10 different runs.
self.assertSnapshotDirectoryContains(tmpdir, 1, 10, 1)
@combinations.generate(test_base.default_test_combinations())
def testForceReadMode(self):
tmpdir = self.snapshot_dir
# We write a copy of the snapshot first.
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir, mode="write", snapshot_name="my_custom_snapshot"))
self.assertDatasetProduces(dataset, list(range(10)))
# We move the run to a new name.
shutil.move(
os.path.join(tmpdir, "custom-my_custom_snapshot"),
os.path.join(tmpdir, "custom-my_custom_snapshot_2"))
    # Even though snapshot.metadata still points to the old run, which no
    # longer exists after the move, we force the dataset to read from the run
    # we specify.
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir, mode="read", snapshot_name="my_custom_snapshot_2"))
self.assertDatasetProduces(dataset, list(range(10)))
# We should still have one snapshot and one run.
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(test_base.default_test_combinations())
def testForceReadNonexistentSnapshot(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
with self.assertRaises(errors.NotFoundError):
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir, mode="read"))
get_next = self.getNext(dataset)
self.evaluate(get_next())
@combinations.generate(test_base.default_test_combinations())
def testForceReadNonexistentNamedSnapshot(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
with self.assertRaises(errors.NotFoundError):
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir, mode="read", snapshot_name="my_nonexistent_snapshot"))
get_next = self.getNext(dataset)
self.evaluate(get_next())
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testReadSnapshotBackAfterWrite(self, compression):
self.setUpTFRecord()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 10)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
self.assertDatasetProduces(dataset, expected)
    # Remove the original files and try to read the data back only from
    # snapshot.
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
self.assertDatasetProduces(dataset2, expected)
@combinations.generate(test_base.default_test_combinations())
def testReadShuffledSnapshotAfterWrite(self):
self.setUpTFRecord(num_files=10, num_records=50)
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 50)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, shard_size_bytes=100))
self.assertDatasetProduces(dataset, expected)
    # Remove the original files and try to read the data back only from
    # snapshot.
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(
tmpdir, shard_size_bytes=100, shuffle_on_read=True))
next2 = self.getNext(dataset2)
res1 = self.evaluate(next2())
res2 = self.evaluate(next2())
res3 = self.evaluate(next2())
res4 = self.evaluate(next2())
res5 = self.evaluate(next2())
    # Make sure that we don't read the file back in the same order.
self.assertNotEqual([res1, res2, res3, res4, res5], expected[0:5])
    # Make sure all the elements are still there.
dataset3 = core_readers._TFRecordDataset(filenames)
dataset3 = dataset3.apply(
snapshot.legacy_snapshot(
tmpdir, shard_size_bytes=100, shuffle_on_read=True))
self.assertDatasetProduces(dataset3, expected, assert_items_equal=True)
@combinations.generate(test_base.default_test_combinations())
def testReadShuffledSnapshotWithSeedAfterWrite(self):
self.setUpTFRecord(num_files=10, num_records=50)
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 50)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, shard_size_bytes=10))
self.assertDatasetProduces(dataset, expected)
    # Remove the original files and try to read the data back only from
    # snapshot.
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(
tmpdir,
shard_size_bytes=10,
shuffle_on_read=True,
shuffle_seed=123456))
next2 = self.getNext(dataset2)
dataset3 = core_readers._TFRecordDataset(filenames)
dataset3 = dataset3.apply(
snapshot.legacy_snapshot(
tmpdir,
shard_size_bytes=10,
shuffle_on_read=True,
shuffle_seed=123456))
next3 = self.getNext(dataset3)
    # Make sure the items are read back in the same order for both datasets.
for _ in range(500):
res2 = self.evaluate(next2())
res3 = self.evaluate(next3())
self.assertEqual(res2, res3)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testReadSnapshotParallelAfterWrite(self, compression):
self.setUpTFRecord(5, 500)
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 5)
for r in range(0, 500)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir,
shard_size_bytes=1024 * 1024,
num_reader_threads=2,
reader_buffer_size=10,
compression=compression))
self.assertDatasetProduces(dataset, expected, assert_items_equal=True)
    # Remove the original files and try to read the data back only from
    # snapshot.
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(
tmpdir,
shard_size_bytes=1024 * 1024,
num_reader_threads=2,
reader_buffer_size=10,
compression=compression))
self.assertDatasetProduces(dataset2, expected, assert_items_equal=True)
# Not testing Snappy here because Snappy reads currently require a lot of
# memory.
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.times(
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP
]),
combinations.combine(threads=2, size=[1, 2]) +
combinations.combine(threads=8, size=[1, 4, 8]))))
def testReadSnapshotBackAfterMultiThreadedWrite(self, compression, threads,
size):
self.setUpTFRecord()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 10)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir,
compression=compression,
num_writer_threads=threads,
writer_buffer_size=size))
self.assertDatasetProduces(dataset, expected)
    # Remove the original files and try to read the data back only from
    # snapshot.
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
self.assertDatasetProduces(dataset2, expected, assert_items_equal=True)
@combinations.generate(test_base.default_test_combinations())
def testSameFingerprintWithDifferentInitializationOrder(self):
tmpdir = self.snapshot_dir
dataset1 = dataset_ops.Dataset.range(0, 100)
dataset2 = dataset_ops.Dataset.range(100, 200)
dataset3 = dataset_ops.Dataset.range(200, 300)
dataset = dataset1.concatenate(dataset2).concatenate(dataset3)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset, list(range(300)))
dataset4 = dataset_ops.Dataset.range(200, 300)
dataset5 = dataset_ops.Dataset.range(100, 200)
dataset6 = dataset_ops.Dataset.range(0, 100)
dataset = dataset6.concatenate(dataset5).concatenate(dataset4)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset, list(range(300)))
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(test_base.default_test_combinations())
def testExpiredSnapshotRewrite(self):
tmpdir = self.snapshot_dir
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(
snapshot.legacy_snapshot(tmpdir, pending_snapshot_expiry_seconds=1))
next1 = self.getNext(dataset1)
    # Don't finish reading dataset1, so it is never finalized.
for _ in range(500):
self.evaluate(next1())
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
time.sleep(2)
    # We create dataset2 only after running through dataset1 because, in eager
    # mode, the snapshot state is determined immediately upon dataset creation.
    # We want dataset2's snapshot state to be determined only after the first
    # snapshot has expired.
dataset2 = dataset_ops.Dataset.range(1000)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(tmpdir, pending_snapshot_expiry_seconds=1))
next2 = self.getNext(dataset2)
for _ in range(500):
self.evaluate(next2())
self.assertSnapshotDirectoryContains(tmpdir, 1, 2, 1)
@combinations.generate(test_base.default_test_combinations())
def testSnapshotArgsCreateNewSnapshot(self):
tmpdir = self.snapshot_dir
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(
snapshot.legacy_snapshot(tmpdir, shard_size_bytes=10000))
next1 = self.getNext(dataset1)
for _ in range(1000):
self.evaluate(next1())
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
# Create second snapshot with a different shard_size_bytes
dataset2 = dataset_ops.Dataset.range(1000)
    dataset2 = dataset2.apply(
        snapshot.legacy_snapshot(tmpdir, shard_size_bytes=20000))
next2 = self.getNext(dataset2)
for _ in range(1000):
self.evaluate(next2())
self.assertSnapshotDirectoryContains(tmpdir, 2, 1, 1)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testSpecifyShardSize(self, compression):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.from_tensor_slices([1.0])
dataset = dataset.map(lambda x: gen_array_ops.broadcast_to(x, [1024, 1024]))
dataset = dataset.repeat(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir, shard_size_bytes=10 * 1024 * 1024, compression=compression))
next_fn = self.getNext(dataset)
for _ in range(10):
self.evaluate(next_fn())
num_files = 1
if compression == snapshot.COMPRESSION_NONE:
num_files = 3
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, num_files)
@combinations.generate(test_base.default_test_combinations())
def testAdditionalOperationsAfterReadBack(self):
self.setUpTFRecord()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 10)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset, expected)
    # Remove the original files and try to read the data back only from
    # snapshot.
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset2, expected)
expected_after = [
b"cord %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 10)
]
dataset3 = core_readers._TFRecordDataset(filenames)
dataset3 = dataset3.apply(snapshot.legacy_snapshot(tmpdir))
dataset3 = dataset3.map(lambda x: string_ops.substr_v2(x, 2, 1000))
self.assertDatasetProduces(dataset3, expected_after)
class SnapshotCheckpointTest(checkpoint_test_base.CheckpointTestBase,
parameterized.TestCase):
def _build_snapshot_dataset(self, repeat=False):
def ds_fn():
self._snapshot_dir = os.path.join(self.get_temp_dir(), "snapshot")
if not os.path.exists(self._snapshot_dir):
os.mkdir(self._snapshot_dir)
dataset = dataset_ops.Dataset.range(100)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
if repeat:
dataset = dataset.repeat(2)
return dataset
return ds_fn
@combinations.generate(test_base.default_test_combinations())
def testCheckpointBeforeEpochEndNoRepeat(self):
ds_fn = self._build_snapshot_dataset(repeat=False)
outputs = self.gen_outputs(ds_fn, [], 50, verify_exhausted=False)
self.assertSequenceEqual(outputs, range(50))
outputs.extend(
self.gen_outputs(ds_fn, [], 50, ckpt_saved=True, verify_exhausted=True))
self.assertSequenceEqual(outputs, range(100))
@combinations.generate(test_base.default_test_combinations())
def testCheckpointBeforeOneEpochWithReading(self):
ds_fn = self._build_snapshot_dataset(repeat=True)
# Generate 50 entries from iterator and save checkpoint.
outputs = self.gen_outputs(ds_fn, [], 50, verify_exhausted=False)
self.assertSequenceEqual(outputs, list(range(50)))
# Restore from checkpoint and produce the rest of the elements from the
# iterator.
t = self.gen_outputs(ds_fn, [], 150, ckpt_saved=True, verify_exhausted=True)
outputs.extend(t)
self.assertSequenceEqual(
outputs,
list(range(50)) + list(range(50, 100)) + list(range(100)))
@combinations.generate(test_base.default_test_combinations())
def testCheckpointBeforeOneEpochThenRunAFewSteps(self):
ds_fn = self._build_snapshot_dataset(repeat=False)
outputs = self.gen_outputs(
ds_fn, [10], 20, verify_exhausted=False, save_checkpoint_at_end=False)
self.assertSequenceEqual(outputs, range(20))
outputs = outputs[:10]
outputs.extend(
self.gen_outputs(ds_fn, [], 90, ckpt_saved=True, verify_exhausted=True))
self.assertSequenceEqual(outputs, range(100))
@combinations.generate(test_base.default_test_combinations())
def testCheckpointAfterOneEpoch(self):
ds_fn = self._build_snapshot_dataset(repeat=True)
# Generate 110 entries from iterator and save checkpoint.
outputs = self.gen_outputs(ds_fn, [], 110, verify_exhausted=False)
self.assertSequenceEqual(outputs, list(range(100)) + list(range(10)))
# Restore from checkpoint and produce the rest of the elements from the
# iterator.
t = self.gen_outputs(ds_fn, [], 90, ckpt_saved=True, verify_exhausted=True)
outputs.extend(t)
self.assertSequenceEqual(
outputs,
list(range(100)) + list(range(10)) + list(range(10, 100)))
@combinations.generate(test_base.default_test_combinations())
def testCheckpointAfterOneEpochRunFewSteps(self):
ds_fn = self._build_snapshot_dataset(repeat=True)
# Generate 120 entries from iterator and save checkpoint at 110.
outputs = self.gen_outputs(
ds_fn, [110], 120, verify_exhausted=False, save_checkpoint_at_end=False)
self.assertSequenceEqual(outputs, list(range(100)) + list(range(20)))
# Restore from checkpoint and produce the rest of the elements from the
# iterator.
outputs = outputs[:110]
t = self.gen_outputs(ds_fn, [], 90, ckpt_saved=True, verify_exhausted=True)
outputs.extend(t)
self.assertSequenceEqual(
outputs,
list(range(100)) + list(range(10)) + list(range(10, 100)))
class LegacySnapshotCheckpointTest(
checkpoint_test_base.CheckpointTestBase, parameterized.TestCase):
def _build_snapshot_dataset(self,
num_threads=1,
repeat=False,
pending_snapshot_expiry_seconds=-1,
shard_size_bytes=None):
def ds_fn():
self.snapshot_dir = os.path.join(self.get_temp_dir(), "snapshot")
if not os.path.exists(self.snapshot_dir):
os.mkdir(self.snapshot_dir)
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.apply(
snapshot.legacy_snapshot(
self.snapshot_dir,
num_writer_threads=num_threads,
writer_buffer_size=2 * num_threads,
num_reader_threads=num_threads,
reader_buffer_size=2 * num_threads,
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds,
shard_size_bytes=shard_size_bytes))
if repeat:
dataset = dataset.repeat(2)
return dataset
return ds_fn
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testSnapshotBeforeEpochEnd(self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds)
outputs = self.gen_outputs(ds_fn, [], 100, verify_exhausted=False)
self.assertSequenceEqual(outputs, range(100))
outputs.extend(
self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False))
self.assertSequenceEqual(outputs, range(1000))
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testCheckpointBeforeOneEpochThenRunFewStepsSmallShardMultiThread(
self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds,
shard_size_bytes=100)
outputs = []
with ops.Graph().as_default() as g:
init_op, get_next_op, saver = self._build_graph(ds_fn)
with self.session(graph=g) as sess:
self._initialize(init_op, sess)
start = 0
end = 100
num_iters = end - start
for _ in range(num_iters):
outputs.append(sess.run(get_next_op))
self._save(sess, saver)
start = 100
end = 400
num_iters = end - start
for _ in range(num_iters):
outputs.append(sess.run(get_next_op))
self.assertSequenceEqual(outputs, range(400))
outputs = outputs[:100]
outputs.extend(
self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False))
self.assertSequenceEqual(outputs, range(1000))
fp_dir_list = os.listdir(self.snapshot_dir)
    self.assertLen(fp_dir_list, 2)
for d in fp_dir_list:
if not d.endswith("-graph.pbtxt"):
fp_dir = os.path.join(self.snapshot_dir, d)
run_dir_list = os.listdir(fp_dir)
self.assertLen(list(run_dir_list), 2)
for e in run_dir_list:
if e != "snapshot.metadata":
run_dir = os.path.join(fp_dir, e)
self.assertLen(list(os.listdir(run_dir)), 258)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testCheckpointBeforeOneEpochThenRunFewSteps(
self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds)
# Generate 200 entries from iterator but save checkpoint after producing
# 100.
outputs = self.gen_outputs(
ds_fn, [100], 200, verify_exhausted=False, save_checkpoint_at_end=False)
self.assertSequenceEqual(outputs, range(200))
outputs = outputs[:100]
outputs.extend(
self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False))
self.assertSequenceEqual(outputs, range(1000))
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testCheckpointBeforeOneEpochThenRunFewStepsMultipleThreads(
self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
num_threads=2,
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds)
# Generate 200 entries from iterator but save checkpoint after producing
# 100.
outputs = self.gen_outputs(
ds_fn, [100], 200, verify_exhausted=False, save_checkpoint_at_end=False)
self.assertSequenceEqual(outputs, range(200))
outputs = outputs[:100]
outputs.extend(
self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False))
self.assertSequenceEqual(outputs, range(1000))
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testCheckpointAfterOneEpoch(self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
repeat=True,
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds)
# Generate 1100 entries from iterator and save checkpoint.
outputs = self.gen_outputs(ds_fn, [], 1100, verify_exhausted=False)
self.assertSequenceEqual(outputs, list(range(1000)) + list(range(100)))
# Restore from checkpoint and produce the rest of the elements from the
# iterator.
t = self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False)
outputs.extend(t)
self.assertSequenceEqual(
outputs,
list(range(1000)) + list(range(100)) + list(range(900)))
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testCheckpointAfterOneEpochThenRunFewSteps(
self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
repeat=True,
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds)
# Generate 1200 entries from iterator but save checkpoint after producing
# 1100.
outputs = self.gen_outputs(
ds_fn, [1100],
1200,
verify_exhausted=False,
save_checkpoint_at_end=False)
self.assertSequenceEqual(
outputs,
list(range(1000)) + list(range(100)) + list(range(100)))
outputs = outputs[:1100]
t = self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False)
outputs.extend(t)
self.assertSequenceEqual(
outputs, (list(range(1000)) + list(range(100)) + list(range(900))))
if __name__ == "__main__":
test.main()
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import multiprocessing
import os
import shutil
import time
from absl.testing import parameterized
import numpy as np
from tensorflow.python.data.experimental.ops import snapshot
from tensorflow.python.data.kernel_tests import checkpoint_test_base
from tensorflow.python.data.kernel_tests import test_base
from tensorflow.python.data.kernel_tests import tf_record_test_base
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.data.ops import readers as core_readers
from tensorflow.python.framework import combinations
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_array_ops
from tensorflow.python.ops import string_ops
from tensorflow.python.platform import test
def is_graphdef_file(filename):
return filename.endswith("-graph.pbtxt")
def is_temp_file(filename):
return "-tmp-" in filename
def listdir_and_filter(dirname, filter_fn):
return [path for path in sorted(os.listdir(dirname)) if filter_fn(path)]
class SnapshotTest(tf_record_test_base.TFRecordTestBase,
parameterized.TestCase):
def setUp(self):
super(SnapshotTest, self).setUp()
tmpdir = self.get_temp_dir()
tmpdir = os.path.join(tmpdir, "snapshot")
os.mkdir(tmpdir)
self._snapshot_dir = tmpdir
def tearDown(self):
super(SnapshotTest, self).tearDown()
shutil.rmtree(self._snapshot_dir)
def createTFRecords(self, num_files=10, num_records=100):
self._num_files = num_files
self._num_records = num_records
self._filenames = self._createFiles()
def removeTFRecords(self):
for filename in self._filenames:
os.remove(filename)
self._filenames = []
self._num_files = None
self._num_records = None
def assertDatasetProducesSet(self, dataset, expected):
actual = []
next_fn = self.getNext(dataset)
for _ in range(len(expected)):
elem = self.evaluate(next_fn())
actual.append(elem)
self.assertCountEqual(actual, expected)
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(next_fn())
def assertSnapshotDirectoryContains(self, directory, num_fingerprints,
num_runs_per_fingerprint,
num_snapshot_shards_per_run):
dirlist = listdir_and_filter(
directory,
lambda p: not (is_graphdef_file(p) or is_temp_file(p)))
self.assertLen(dirlist, num_fingerprints)
for i in range(num_fingerprints):
fingerprint_dir = os.path.join(directory, dirlist[i])
fingerprint_dir_list = listdir_and_filter(
fingerprint_dir, lambda p: not is_temp_file(p))
self.assertLen(fingerprint_dir_list, num_runs_per_fingerprint + 1)
self.assertEqual(fingerprint_dir_list[num_runs_per_fingerprint],
"snapshot.metadata")
for j in range(num_runs_per_fingerprint):
run_dir = os.path.join(fingerprint_dir, fingerprint_dir_list[j])
run_dirlist = sorted(os.listdir(run_dir))
self.assertLen(run_dirlist, num_snapshot_shards_per_run)
file_counter = 0
for filename in run_dirlist:
self.assertEqual(filename, "%08d.shard" % file_counter)
file_counter += 1
@combinations.generate(test_base.default_test_combinations())
def testCreateSnapshotDataset(self):
dataset = dataset_ops.Dataset.from_tensors([1, 2, 3])
dataset.apply(snapshot.snapshot(self._snapshot_dir))
@combinations.generate(test_base.default_test_combinations())
def testReadSnapshotDatasetDefault(self):
self.createTFRecords()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f)
for f in range(0, 10)
for r in range(0, 100)
]
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset, expected)
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset2, expected)
@combinations.generate(test_base.default_test_combinations())
def testReadSnapshotDatasetAutoWriteSnappyRead(self):
self.createTFRecords()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f)
for f in range(0, 10)
for r in range(0, 100)
]
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.snapshot(self._snapshot_dir, compression="AUTO"))
self.assertDatasetProduces(dataset, expected)
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.snapshot(self._snapshot_dir, compression="SNAPPY"))
self.assertDatasetProduces(dataset2, expected)
@combinations.generate(test_base.default_test_combinations())
def testReadSnapshotDatasetCustomShardFn(self):
self.createTFRecords()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f)
for f in range(0, 10)
for r in range(0, 100)
]
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.snapshot(self._snapshot_dir, shard_func=lambda _: np.int64(0)))
self.assertDatasetProduces(dataset, expected)
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=1)
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.snapshot(self._snapshot_dir, shard_func=lambda _: 0))
self.assertDatasetProduces(dataset2, expected)
@combinations.generate(test_base.default_test_combinations())
def testReadSnapshotDatasetCustomReaderFn(self):
self.createTFRecords()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f)
for f in range(0, 10)
for r in range(0, 100)
]
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.snapshot(
self._snapshot_dir,
reader_func=(
lambda ds: ds.interleave(
lambda x: x,
cycle_length=4,
num_parallel_calls=4))))
self.assertDatasetProduces(dataset, expected)
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.snapshot(
self._snapshot_dir,
reader_func=(
lambda ds: ds.interleave(
lambda x: x,
cycle_length=4,
num_parallel_calls=4))))
self.assertDatasetProducesSet(dataset2, expected)
@combinations.generate(test_base.default_test_combinations())
def testSnapshotDatasetInvalidShardFn(self):
dataset = dataset_ops.Dataset.range(1000)
with self.assertRaises(TypeError):
dataset = dataset.apply(
snapshot.snapshot(
self._snapshot_dir, shard_func=lambda _: "invalid_fn"))
next_fn = self.getNext(dataset)
self.evaluate(next_fn())
@combinations.generate(test_base.default_test_combinations())
def testSnapshotDatasetInvalidReaderFn(self):
dataset = dataset_ops.Dataset.range(1000)
with self.assertRaises(TypeError):
dataset = dataset.apply(
snapshot.snapshot(self._snapshot_dir, reader_func=lambda x: x + 1))
next_fn = self.getNext(dataset)
self.evaluate(next_fn())
@combinations.generate(test_base.default_test_combinations())
def testRoundtripEmptySnapshot(self):
dataset = dataset_ops.Dataset.range(0)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset, [])
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=0)
dataset2 = dataset_ops.Dataset.range(0)
dataset2 = dataset2.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset2, [])
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotDatasetSimple(self):
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset, list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotDatasetMultipleFingerprints(self):
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset1, list(range(1000)))
dataset2 = dataset_ops.Dataset.range(2000)
dataset2 = dataset2.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset2, list(range(2000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=2,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotDatasetSameFingerprintMultipleCompleteRuns(self):
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset1, list(range(1000)))
dataset2 = dataset_ops.Dataset.range(1000)
dataset2 = dataset2.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset2, list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotDatasetSameFingerprintIncompleteRunRestart(self):
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(snapshot.snapshot(self._snapshot_dir))
next1 = self.getNext(dataset1)
for i in range(500):
self.assertEqual(i, self.evaluate(next1()))
dataset2 = dataset_ops.Dataset.range(1000)
dataset2 = dataset2.apply(snapshot.snapshot(self._snapshot_dir))
next2 = self.getNext(dataset2)
for i in range(500):
self.assertEqual(i, self.evaluate(next2()))
for i in range(500, 1000):
self.assertEqual(i, self.evaluate(next1()))
self.assertEqual(i, self.evaluate(next2()))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=2,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotCustomShardFunction(self):
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.enumerate()
dataset = dataset.apply(
snapshot.snapshot(self._snapshot_dir, shard_func=lambda i, _: i % 2))
dataset = dataset.map(lambda _, elem: elem)
self.assertDatasetProduces(dataset, list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=2)
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotDatasetWithTuples(self):
dataset1 = dataset_ops.Dataset.range(0, 1000)
dataset2 = dataset_ops.Dataset.range(1000, 2000)
dataset3 = dataset_ops.Dataset.range(2000, 3000)
dataset4 = dataset_ops.Dataset.range(3000, 4000)
dataset = dataset_ops.Dataset.zip((dataset1, dataset2, dataset3, dataset4))
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
expected = list(
zip(
range(0, 1000), range(1000, 2000), range(2000, 3000),
range(3000, 4000)))
self.assertDatasetProduces(dataset, expected)
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotShuffleSameFingerprint(self):
def make_dataset():
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.shuffle(1000)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
return dataset
dataset1 = make_dataset()
self.assertDatasetProducesSet(dataset1, list(range(1000)))
dataset2 = make_dataset()
self.assertDatasetProducesSet(dataset2, list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testReadUsingFlatMap(self):
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProduces(dataset, list(range(1000)))
flat_map = dataset_ops.Dataset.from_tensors(dataset).flat_map(lambda x: x)
self.assertDatasetProduces(flat_map, list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testReadOptimizableUsingFlatMap(self):
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.shuffle(10)
dataset = dataset.repeat(2)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
self.assertDatasetProducesSet(dataset, 2 * list(range(1000)))
flat_map = dataset_ops.Dataset.from_tensors(dataset).flat_map(lambda x: x)
self.assertDatasetProducesSet(flat_map, 2 * list(range(1000)))
self.assertSnapshotDirectoryContains(
self._snapshot_dir,
num_fingerprints=1,
num_runs_per_fingerprint=1,
num_snapshot_shards_per_run=multiprocessing.cpu_count())
@combinations.generate(test_base.default_test_combinations())
def testRepeatAndPrefetch(self):
dataset = dataset_ops.Dataset.from_tensor_slices(np.random.rand(16, 32))
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
dataset = dataset.shuffle(buffer_size=16)
dataset = dataset.batch(16)
dataset = dataset.repeat()
dataset = dataset.prefetch(1)
next_element = self.getNext(dataset)
for _ in range(30):
self.evaluate(next_element())
class LegacySnapshotTest(tf_record_test_base.TFRecordTestBase,
parameterized.TestCase):
def setUp(self):
super(LegacySnapshotTest, self).setUp()
self.removeTFRecords()
tmpdir = self.get_temp_dir()
tmpdir = os.path.join(tmpdir, "snapshot")
os.mkdir(tmpdir)
self.snapshot_dir = tmpdir
def tearDown(self):
super(LegacySnapshotTest, self).tearDown()
shutil.rmtree(self.snapshot_dir)
def removeTFRecords(self):
for filename in self._filenames:
os.remove(filename)
self._filenames = []
def setUpTFRecord(self, num_files=10, num_records=10):
self._num_files = num_files
self._num_records = num_records
self._filenames = self._createFiles()
def makeSnapshotDirectory(self):
return self.snapshot_dir
def assertSnapshotDirectoryContains(self, directory, num_fingerprints,
num_runs_per_fp, num_snapshot_files):
dirlist = listdir_and_filter(
directory,
lambda p: not (is_graphdef_file(p) or is_temp_file(p)))
self.assertLen(dirlist, num_fingerprints)
for i in range(num_fingerprints):
fingerprint_dir = os.path.join(directory, dirlist[i])
fingerprint_dir_list = listdir_and_filter(
fingerprint_dir, lambda p: not is_temp_file(p))
self.assertLen(fingerprint_dir_list, num_runs_per_fp + 1)
self.assertEqual(fingerprint_dir_list[num_runs_per_fp],
"snapshot.metadata")
for j in range(num_runs_per_fp):
run_dir = os.path.join(fingerprint_dir, fingerprint_dir_list[j])
run_dirlist = sorted(os.listdir(run_dir))
self.assertLen(run_dirlist, num_snapshot_files)
file_counter = 0
for filename in run_dirlist:
self.assertEqual(filename, "%08d.snapshot" % file_counter)
file_counter += 1
@combinations.generate(test_base.default_test_combinations())
def testWriteDifferentPipelinesInOneDirectory(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset, list(range(1000)))
dataset = dataset_ops.Dataset.range(1001)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset, list(range(1001)))
self.assertSnapshotDirectoryContains(tmpdir, 2, 1, 1)
@combinations.generate(test_base.default_test_combinations())
def testWriteSnapshotMultipleSimultaneous(self):
tmpdir = self.snapshot_dir
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(snapshot.legacy_snapshot(tmpdir))
next1 = self.getNext(dataset1)
dataset2 = dataset_ops.Dataset.range(1000)
dataset2 = dataset2.apply(snapshot.legacy_snapshot(tmpdir))
next2 = self.getNext(dataset2)
for i in range(0, 1000):
self.assertEqual(i, self.evaluate(next1()))
self.assertEqual(i, self.evaluate(next2()))
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(test_base.default_test_combinations())
def testGetNextCreatesDir(self):
tmpdir = self.snapshot_dir
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(snapshot.legacy_snapshot(tmpdir))
next1 = self.getNext(dataset1)
dataset2 = dataset_ops.Dataset.range(1001)
dataset2 = dataset2.apply(snapshot.legacy_snapshot(tmpdir))
_ = self.getNext(dataset2)
for _ in range(1000):
self.evaluate(next1())
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testWriteSnapshotSimpleSuccessful(self, compression):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
self.assertDatasetProduces(dataset, list(range(1000)))
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testWriteSnapshotRepeatAfterwards(self, compression):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
dataset = dataset.repeat(10)
self.assertDatasetProduces(dataset, list(range(10)) * 10)
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testWriteSnapshotMixTypes(self, compression):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
def map_fn(x):
return (x, string_ops.as_string(x), string_ops.as_string(2 * x), 2 * x)
dataset = dataset.map(map_fn)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
dataset = dataset.repeat(10)
expected = []
for i in range(10):
expected.append((i, str(i), str(2 * i), 2 * i))
self.assertDatasetProduces(dataset, expected * 10)
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(test_base.default_test_combinations())
def testSpecifySnapshotNameWriteAndRead(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, snapshot_name="my_custom_snapshot"))
dataset = dataset.repeat(10)
self.assertDatasetProduces(dataset, list(range(10)) * 10)
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
self.assertTrue(
os.path.exists(os.path.join(tmpdir, "custom-my_custom_snapshot")))
self.assertTrue(
os.path.exists(
os.path.join(tmpdir, "custom-my_custom_snapshot", "custom")))
@combinations.generate(test_base.default_test_combinations())
def testForcePassthroughMode(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, mode="passthrough"))
dataset = dataset.repeat(10)
self.assertDatasetProduces(dataset, list(range(10)) * 10)
self.assertSnapshotDirectoryContains(tmpdir, 0, 0, 0)
@combinations.generate(test_base.default_test_combinations())
def testForceWriteMode(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir, mode="write"))
dataset = dataset.repeat(10)
self.assertDatasetProduces(dataset, list(range(10)) * 10)
self.assertSnapshotDirectoryContains(tmpdir, 1, 10, 1)
@combinations.generate(test_base.default_test_combinations())
def testForceReadMode(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir, mode="write", snapshot_name="my_custom_snapshot"))
self.assertDatasetProduces(dataset, list(range(10)))
shutil.move(
os.path.join(tmpdir, "custom-my_custom_snapshot"),
os.path.join(tmpdir, "custom-my_custom_snapshot_2"))
dataset = dataset_ops.Dataset.range(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir, mode="read", snapshot_name="my_custom_snapshot_2"))
self.assertDatasetProduces(dataset, list(range(10)))
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(test_base.default_test_combinations())
def testForceReadNonexistentSnapshot(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
with self.assertRaises(errors.NotFoundError):
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir, mode="read"))
get_next = self.getNext(dataset)
self.evaluate(get_next())
@combinations.generate(test_base.default_test_combinations())
def testForceReadNonexistentNamedSnapshot(self):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.range(10)
with self.assertRaises(errors.NotFoundError):
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir, mode="read", snapshot_name="my_nonexistent_snapshot"))
get_next = self.getNext(dataset)
self.evaluate(get_next())
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testReadSnapshotBackAfterWrite(self, compression):
self.setUpTFRecord()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f)
for f in range(0, 10)
for r in range(0, 10)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
self.assertDatasetProduces(dataset, expected)
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
self.assertDatasetProduces(dataset2, expected)
@combinations.generate(test_base.default_test_combinations())
def testReadShuffledSnapshotAfterWrite(self):
self.setUpTFRecord(num_files=10, num_records=50)
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f)
for f in range(0, 10)
for r in range(0, 50)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, shard_size_bytes=100))
self.assertDatasetProduces(dataset, expected)
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(
tmpdir, shard_size_bytes=100, shuffle_on_read=True))
next2 = self.getNext(dataset2)
res1 = self.evaluate(next2())
res2 = self.evaluate(next2())
res3 = self.evaluate(next2())
res4 = self.evaluate(next2())
res5 = self.evaluate(next2())
self.assertNotEqual([res1, res2, res3, res4, res5], expected[0:5])
# make sure all the elements are still there
dataset3 = core_readers._TFRecordDataset(filenames)
dataset3 = dataset3.apply(
snapshot.legacy_snapshot(
tmpdir, shard_size_bytes=100, shuffle_on_read=True))
self.assertDatasetProduces(dataset3, expected, assert_items_equal=True)
@combinations.generate(test_base.default_test_combinations())
def testReadShuffledSnapshotWithSeedAfterWrite(self):
self.setUpTFRecord(num_files=10, num_records=50)
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 50)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.legacy_snapshot(tmpdir, shard_size_bytes=10))
self.assertDatasetProduces(dataset, expected)
# remove the original files and try to read the data back only from snapshot
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(
tmpdir,
shard_size_bytes=10,
shuffle_on_read=True,
shuffle_seed=123456))
next2 = self.getNext(dataset2)
dataset3 = core_readers._TFRecordDataset(filenames)
dataset3 = dataset3.apply(
snapshot.legacy_snapshot(
tmpdir,
shard_size_bytes=10,
shuffle_on_read=True,
shuffle_seed=123456))
next3 = self.getNext(dataset3)
# make sure that the items are read back in the same order for both datasets
for _ in range(500):
res2 = self.evaluate(next2())
res3 = self.evaluate(next3())
self.assertEqual(res2, res3)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testReadSnapshotParallelAfterWrite(self, compression):
self.setUpTFRecord(5, 500)
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 5)
for r in range(0, 500)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir,
shard_size_bytes=1024 * 1024,
num_reader_threads=2,
reader_buffer_size=10,
compression=compression))
self.assertDatasetProduces(dataset, expected, assert_items_equal=True)
# remove the original files and try to read the data back only from
# snapshot.
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(
tmpdir,
shard_size_bytes=1024 * 1024,
num_reader_threads=2,
reader_buffer_size=10,
compression=compression))
self.assertDatasetProduces(dataset2, expected, assert_items_equal=True)
# Not testing Snappy here because Snappy reads currently require a lot of
# memory.
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.times(
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP
]),
combinations.combine(threads=2, size=[1, 2]) +
combinations.combine(threads=8, size=[1, 4, 8]))))
def testReadSnapshotBackAfterMultiThreadedWrite(self, compression, threads,
size):
self.setUpTFRecord()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f) # pylint:disable=g-complex-comprehension
for f in range(0, 10)
for r in range(0, 10)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir,
compression=compression,
num_writer_threads=threads,
writer_buffer_size=size))
self.assertDatasetProduces(dataset, expected)
# remove the original files and try to read the data back only from
# snapshot
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(tmpdir, compression=compression))
self.assertDatasetProduces(dataset2, expected, assert_items_equal=True)
@combinations.generate(test_base.default_test_combinations())
def testSameFingerprintWithDifferentInitializationOrder(self):
tmpdir = self.snapshot_dir
dataset1 = dataset_ops.Dataset.range(0, 100)
dataset2 = dataset_ops.Dataset.range(100, 200)
dataset3 = dataset_ops.Dataset.range(200, 300)
dataset = dataset1.concatenate(dataset2).concatenate(dataset3)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset, list(range(300)))
dataset4 = dataset_ops.Dataset.range(200, 300)
dataset5 = dataset_ops.Dataset.range(100, 200)
dataset6 = dataset_ops.Dataset.range(0, 100)
dataset = dataset6.concatenate(dataset5).concatenate(dataset4)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset, list(range(300)))
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
@combinations.generate(test_base.default_test_combinations())
def testExpiredSnapshotRewrite(self):
tmpdir = self.snapshot_dir
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(
snapshot.legacy_snapshot(tmpdir, pending_snapshot_expiry_seconds=1))
next1 = self.getNext(dataset1)
# Don't finish reading dataset1, so it is never finalized
for _ in range(500):
self.evaluate(next1())
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
time.sleep(2)
dataset2 = dataset_ops.Dataset.range(1000)
dataset2 = dataset2.apply(
snapshot.legacy_snapshot(tmpdir, pending_snapshot_expiry_seconds=1))
next2 = self.getNext(dataset2)
for _ in range(500):
self.evaluate(next2())
self.assertSnapshotDirectoryContains(tmpdir, 1, 2, 1)
@combinations.generate(test_base.default_test_combinations())
def testSnapshotArgsCreateNewSnapshot(self):
tmpdir = self.snapshot_dir
dataset1 = dataset_ops.Dataset.range(1000)
dataset1 = dataset1.apply(
snapshot.legacy_snapshot(tmpdir, shard_size_bytes=10000))
next1 = self.getNext(dataset1)
for _ in range(1000):
self.evaluate(next1())
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, 1)
dataset2 = dataset_ops.Dataset.range(1000)
    dataset2 = dataset2.apply(
snapshot.legacy_snapshot(tmpdir, shard_size_bytes=20000))
next2 = self.getNext(dataset2)
for _ in range(1000):
self.evaluate(next2())
self.assertSnapshotDirectoryContains(tmpdir, 2, 1, 1)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(compression=[
snapshot.COMPRESSION_NONE, snapshot.COMPRESSION_GZIP,
snapshot.COMPRESSION_SNAPPY
])))
def testSpecifyShardSize(self, compression):
tmpdir = self.snapshot_dir
dataset = dataset_ops.Dataset.from_tensor_slices([1.0])
dataset = dataset.map(lambda x: gen_array_ops.broadcast_to(x, [1024, 1024]))
dataset = dataset.repeat(10)
dataset = dataset.apply(
snapshot.legacy_snapshot(
tmpdir, shard_size_bytes=10 * 1024 * 1024, compression=compression))
next_fn = self.getNext(dataset)
for _ in range(10):
self.evaluate(next_fn())
num_files = 1
if compression == snapshot.COMPRESSION_NONE:
num_files = 3
self.assertSnapshotDirectoryContains(tmpdir, 1, 1, num_files)
@combinations.generate(test_base.default_test_combinations())
def testAdditionalOperationsAfterReadBack(self):
self.setUpTFRecord()
filenames = self._filenames
expected = [
b"Record %d of file %d" % (r, f)
for f in range(0, 10)
for r in range(0, 10)
]
tmpdir = self.snapshot_dir
dataset = core_readers._TFRecordDataset(filenames)
dataset = dataset.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset, expected)
self.removeTFRecords()
dataset2 = core_readers._TFRecordDataset(filenames)
dataset2 = dataset2.apply(snapshot.legacy_snapshot(tmpdir))
self.assertDatasetProduces(dataset2, expected)
expected_after = [
b"cord %d of file %d" % (r, f)
for f in range(0, 10)
for r in range(0, 10)
]
dataset3 = core_readers._TFRecordDataset(filenames)
dataset3 = dataset3.apply(snapshot.legacy_snapshot(tmpdir))
dataset3 = dataset3.map(lambda x: string_ops.substr_v2(x, 2, 1000))
self.assertDatasetProduces(dataset3, expected_after)
class SnapshotCheckpointTest(checkpoint_test_base.CheckpointTestBase,
parameterized.TestCase):
def _build_snapshot_dataset(self, repeat=False):
def ds_fn():
self._snapshot_dir = os.path.join(self.get_temp_dir(), "snapshot")
if not os.path.exists(self._snapshot_dir):
os.mkdir(self._snapshot_dir)
dataset = dataset_ops.Dataset.range(100)
dataset = dataset.apply(snapshot.snapshot(self._snapshot_dir))
if repeat:
dataset = dataset.repeat(2)
return dataset
return ds_fn
@combinations.generate(test_base.default_test_combinations())
def testCheckpointBeforeEpochEndNoRepeat(self):
ds_fn = self._build_snapshot_dataset(repeat=False)
outputs = self.gen_outputs(ds_fn, [], 50, verify_exhausted=False)
self.assertSequenceEqual(outputs, range(50))
outputs.extend(
self.gen_outputs(ds_fn, [], 50, ckpt_saved=True, verify_exhausted=True))
self.assertSequenceEqual(outputs, range(100))
@combinations.generate(test_base.default_test_combinations())
def testCheckpointBeforeOneEpochWithReading(self):
ds_fn = self._build_snapshot_dataset(repeat=True)
outputs = self.gen_outputs(ds_fn, [], 50, verify_exhausted=False)
self.assertSequenceEqual(outputs, list(range(50)))
t = self.gen_outputs(ds_fn, [], 150, ckpt_saved=True, verify_exhausted=True)
outputs.extend(t)
self.assertSequenceEqual(
outputs,
list(range(50)) + list(range(50, 100)) + list(range(100)))
@combinations.generate(test_base.default_test_combinations())
def testCheckpointBeforeOneEpochThenRunAFewSteps(self):
ds_fn = self._build_snapshot_dataset(repeat=False)
outputs = self.gen_outputs(
ds_fn, [10], 20, verify_exhausted=False, save_checkpoint_at_end=False)
self.assertSequenceEqual(outputs, range(20))
outputs = outputs[:10]
outputs.extend(
self.gen_outputs(ds_fn, [], 90, ckpt_saved=True, verify_exhausted=True))
self.assertSequenceEqual(outputs, range(100))
@combinations.generate(test_base.default_test_combinations())
def testCheckpointAfterOneEpoch(self):
ds_fn = self._build_snapshot_dataset(repeat=True)
outputs = self.gen_outputs(ds_fn, [], 110, verify_exhausted=False)
self.assertSequenceEqual(outputs, list(range(100)) + list(range(10)))
t = self.gen_outputs(ds_fn, [], 90, ckpt_saved=True, verify_exhausted=True)
outputs.extend(t)
self.assertSequenceEqual(
outputs,
list(range(100)) + list(range(10)) + list(range(10, 100)))
@combinations.generate(test_base.default_test_combinations())
def testCheckpointAfterOneEpochRunFewSteps(self):
ds_fn = self._build_snapshot_dataset(repeat=True)
outputs = self.gen_outputs(
ds_fn, [110], 120, verify_exhausted=False, save_checkpoint_at_end=False)
self.assertSequenceEqual(outputs, list(range(100)) + list(range(20)))
outputs = outputs[:110]
t = self.gen_outputs(ds_fn, [], 90, ckpt_saved=True, verify_exhausted=True)
outputs.extend(t)
self.assertSequenceEqual(
outputs,
list(range(100)) + list(range(10)) + list(range(10, 100)))
class LegacySnapshotCheckpointTest(
checkpoint_test_base.CheckpointTestBase, parameterized.TestCase):
def _build_snapshot_dataset(self,
num_threads=1,
repeat=False,
pending_snapshot_expiry_seconds=-1,
shard_size_bytes=None):
def ds_fn():
self.snapshot_dir = os.path.join(self.get_temp_dir(), "snapshot")
if not os.path.exists(self.snapshot_dir):
os.mkdir(self.snapshot_dir)
dataset = dataset_ops.Dataset.range(1000)
dataset = dataset.apply(
snapshot.legacy_snapshot(
self.snapshot_dir,
num_writer_threads=num_threads,
writer_buffer_size=2 * num_threads,
num_reader_threads=num_threads,
reader_buffer_size=2 * num_threads,
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds,
shard_size_bytes=shard_size_bytes))
if repeat:
dataset = dataset.repeat(2)
return dataset
return ds_fn
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testSnapshotBeforeEpochEnd(self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds)
outputs = self.gen_outputs(ds_fn, [], 100, verify_exhausted=False)
self.assertSequenceEqual(outputs, range(100))
outputs.extend(
self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False))
self.assertSequenceEqual(outputs, range(1000))
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testCheckpointBeforeOneEpochThenRunFewStepsSmallShardMultiThread(
self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds,
shard_size_bytes=100)
outputs = []
with ops.Graph().as_default() as g:
init_op, get_next_op, saver = self._build_graph(ds_fn)
with self.session(graph=g) as sess:
self._initialize(init_op, sess)
start = 0
end = 100
num_iters = end - start
for _ in range(num_iters):
outputs.append(sess.run(get_next_op))
self._save(sess, saver)
start = 100
end = 400
num_iters = end - start
for _ in range(num_iters):
outputs.append(sess.run(get_next_op))
self.assertSequenceEqual(outputs, range(400))
outputs = outputs[:100]
outputs.extend(
self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False))
self.assertSequenceEqual(outputs, range(1000))
fp_dir_list = os.listdir(self.snapshot_dir)
self.assertLen(list(fp_dir_list), 2)
for d in fp_dir_list:
if not d.endswith("-graph.pbtxt"):
fp_dir = os.path.join(self.snapshot_dir, d)
run_dir_list = os.listdir(fp_dir)
self.assertLen(list(run_dir_list), 2)
for e in run_dir_list:
if e != "snapshot.metadata":
run_dir = os.path.join(fp_dir, e)
self.assertLen(list(os.listdir(run_dir)), 258)
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testCheckpointBeforeOneEpochThenRunFewSteps(
self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds)
outputs = self.gen_outputs(
ds_fn, [100], 200, verify_exhausted=False, save_checkpoint_at_end=False)
self.assertSequenceEqual(outputs, range(200))
outputs = outputs[:100]
outputs.extend(
self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False))
self.assertSequenceEqual(outputs, range(1000))
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testCheckpointBeforeOneEpochThenRunFewStepsMultipleThreads(
self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
num_threads=2,
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds)
outputs = self.gen_outputs(
ds_fn, [100], 200, verify_exhausted=False, save_checkpoint_at_end=False)
self.assertSequenceEqual(outputs, range(200))
outputs = outputs[:100]
outputs.extend(
self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False))
self.assertSequenceEqual(outputs, range(1000))
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testCheckpointAfterOneEpoch(self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
repeat=True,
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds)
outputs = self.gen_outputs(ds_fn, [], 1100, verify_exhausted=False)
self.assertSequenceEqual(outputs, list(range(1000)) + list(range(100)))
t = self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False)
outputs.extend(t)
self.assertSequenceEqual(
outputs,
list(range(1000)) + list(range(100)) + list(range(900)))
@combinations.generate(
combinations.times(
test_base.default_test_combinations(),
combinations.combine(pending_snapshot_expiry_seconds=[None, 1])))
def testCheckpointAfterOneEpochThenRunFewSteps(
self, pending_snapshot_expiry_seconds):
ds_fn = self._build_snapshot_dataset(
repeat=True,
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds)
outputs = self.gen_outputs(
ds_fn, [1100],
1200,
verify_exhausted=False,
save_checkpoint_at_end=False)
self.assertSequenceEqual(
outputs,
list(range(1000)) + list(range(100)) + list(range(100)))
outputs = outputs[:1100]
t = self.gen_outputs(
ds_fn, [], 900, ckpt_saved=True, verify_exhausted=False)
outputs.extend(t)
self.assertSequenceEqual(
outputs, (list(range(1000)) + list(range(100)) + list(range(900))))
if __name__ == "__main__":
test.main()
f720977f63fc56cdf8a97938eefdecd9ebe62107 | 651 | py | Python | src/prereq/exercise8.py | kradical/cluster-analysis-udemy | e2101bdb08ae3b9ed0ed8c4c1c488e3a75a1b7c5 | ["MIT"] | null | null | null | src/prereq/exercise8.py | kradical/cluster-analysis-udemy | e2101bdb08ae3b9ed0ed8c4c1c488e3a75a1b7c5 | ["MIT"] | null | null | null | src/prereq/exercise8.py | kradical/cluster-analysis-udemy | e2101bdb08ae3b9ed0ed8c4c1c488e3a75a1b7c5 | ["MIT"] | null | null | null |
import numpy as np
import matplotlib.pyplot as plt
# Plot a spiral dataset
def generateArm(rotation, step):
theta = np.random.rand(500) * step
r = np.exp(theta) - 1
x = r * np.cos(theta) + (np.random.rand(500) - 0.5) / 7
y = r * np.sin(theta) + (np.random.rand(500) - 0.5) / 7
x, y = x * np.cos(rotation) - y * np.sin(rotation), x * np.sin(rotation) + y * np.cos(rotation)
return (x, y)
def main():
arms = 6
step = 2 * np.pi / arms
for i in range(arms):
rotation = i * step
x, y = generateArm(rotation, step)
plt.scatter(x, y)
plt.show()
if __name__ == '__main__':
main()
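The arm rotation in `generateArm` above is a plain 2-D rotation matrix applied to the point cloud. A minimal standalone check of that step (the `rotate` helper is illustrative, not part of the exercise file):

```python
import numpy as np

def rotate(x, y, angle):
    # Same 2-D rotation used in generateArm above:
    # (x, y) -> (x*cos(a) - y*sin(a), x*sin(a) + y*cos(a))
    return (x * np.cos(angle) - y * np.sin(angle),
            x * np.sin(angle) + y * np.cos(angle))

# Rotating the unit vector (1, 0) by 90 degrees should give (0, 1).
rx, ry = rotate(1.0, 0.0, np.pi / 2)
print(round(float(rx), 6), round(float(ry), 6))
```

Running the check confirms each arm is the same noisy spiral rotated by `i * step` radians.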
f7209acf0f181cb1058d455de2841e729f3c8cd5 | 3,798 | py | Python | mlops/parallelm/mlops/stats/health/categorical_hist_stat.py | lisapm/mlpiper | 74ad5ae343d364682cc2f8aaa007f2e8a1d84929 | ["Apache-2.0"] | 7 | 2019-04-08T02:31:55.000Z | 2021-11-15T14:40:49.000Z | mlops/parallelm/mlops/stats/health/categorical_hist_stat.py | lisapm/mlpiper | 74ad5ae343d364682cc2f8aaa007f2e8a1d84929 | ["Apache-2.0"] | 31 | 2019-02-22T22:23:26.000Z | 2021-08-02T17:17:06.000Z | mlops/parallelm/mlops/stats/health/categorical_hist_stat.py | lisapm/mlpiper | 74ad5ae343d364682cc2f8aaa007f2e8a1d84929 | ["Apache-2.0"] | 8 | 2019-03-15T23:46:08.000Z | 2020-02-06T09:16:02.000Z |
"""
This module contains functions to calculate univariate statistics for categorical features, given a dataset.
"""
import numpy as np
from parallelm.mlops.stats.health.histogram_data_objects import CategoricalHistogramDataObject
class CategoricalHistogram(object):
"""
Class is responsible for providing fit and get_feature_histogram_rep functionality of categorical training dataset.
"""
def __init__(self):
"""
Class constructor to init variables.
:rtype: object
"""
        # Holds one (edges, bins) tuple per feature; order follows the features list.
self._prob_dist_categorical = []
self._features = []
def fit(self, training_feature_values, training_feature_names, num_bins, pred_bins):
"""
Function is responsible for fitting training data and fill up _prob_dist_categorical containing tuples of bin and edges.
:rtype: self
"""
if isinstance(training_feature_values, np.ndarray):
prob_dist = self._cal_hist_params(training_feature_values, num_bins=num_bins, pred_bins=pred_bins)
self._prob_dist_categorical = prob_dist
self._features = training_feature_names
else:
raise Exception("categorical histograms are generated on numpy array only!")
return self
def get_feature_histogram_rep(self):
"""
Function is responsible for creating formatted representation of categorical histogram. It will be used in forwarding stat through MLOps.
:rtype: list containing CategoricalHistogramDataObject
"""
feature_histogram_rep = []
for index in range(len(self._features)):
edges = self._prob_dist_categorical[index][0]
            # Since the edges can be strings, the bins array stored in the Python tuple
            # may also hold strings, so convert the values to float on the fly to be safe.
bins_str = self._prob_dist_categorical[index][1]
# Converting to float!
bins = [float(i) for i in bins_str]
normalized_bins = bins / np.sum(bins)
edges_rep = []
for each_edge_index in range(0, len(edges)):
edges_rep.append(str(edges[each_edge_index]))
categorical_histogram_data_object = CategoricalHistogramDataObject(feature_name=self._features[index],
edges=edges_rep,
bins=normalized_bins)
feature_histogram_rep.append(categorical_histogram_data_object)
return feature_histogram_rep
@staticmethod
def _cal_hist_params(sample, num_bins, pred_bins=None):
"""
        Calculate the probability of each category in each column, assuming a multinomial distribution.
        :param sample: A dataset that is a 2D numpy array
        :param num_bins: Number of bins to create. Not used right now; accepted to keep the signature extensible.
        :param pred_bins: Pre-defined bins. Not used right now; accepted to keep the signature extensible.
        :rtype: prob_dist: A list containing the probability distribution of categories in each column of the sample.
                Order of arrays in the list is the same as the order of columns in the sample data
"""
        # Convert the whole ndarray to string so np.unique works uniformly across dtypes.
sample = sample.astype(str)
prob_dist = []
for a in range(0, sample.shape[1]):
# Determine the frequency of unique values
unique, counts = np.unique(sample[:, a], return_counts=True)
prob_dist.append(np.asarray((unique, counts * 1.0)))
return prob_dist
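The frequency computation in `_cal_hist_params` above reduces to a per-column `np.unique(..., return_counts=True)`. A numpy-only standalone sketch of that step, with the count-to-probability normalization folded in (variable names here are illustrative, not from the module):

```python
import numpy as np

# Toy 2-column categorical sample, cast to str as the method above does.
sample = np.array([["a", "x"], ["b", "x"], ["a", "y"]]).astype(str)

prob_dist = []
for col in range(sample.shape[1]):
    # Unique categories and how often each occurs in this column.
    unique, counts = np.unique(sample[:, col], return_counts=True)
    # Normalize counts into multinomial probabilities.
    prob_dist.append((unique, counts / counts.sum()))

print(prob_dist[0])  # column 0: categories ('a', 'b') with probabilities 2/3 and 1/3
```

The class above defers this normalization to `get_feature_histogram_rep`; the sketch just shows both steps in one place.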
f7209b17100218a42c80c8e984c08597d630b188 | 47 | py | Python | odoo-13.0/myaddons/SCM/__init__.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | ["MIT"] | 181 | 2016-11-11T04:39:43.000Z | 2022-03-14T21:17:19.000Z | odoo-13.0/myaddons/SCM/__init__.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | ["MIT"] | 899 | 2016-11-14T02:42:56.000Z | 2022-03-29T20:47:39.000Z | odoo-13.0/myaddons/SCM/__init__.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | ["MIT"] | 227 | 2016-11-10T17:16:59.000Z | 2022-03-26T16:46:38.000Z |
from . import models
from . import controllers
f7209b3cfc5d5bd59dca31198344266661bee81a | 94 | py | Python | vaccine_card/api/apps.py | Unanimad/lais_046_2020_etapa_2 | 630efc6b25a580be44b6cd50be6744a01221a2c4 | ["Apache-2.0"] | null | null | null | vaccine_card/api/apps.py | Unanimad/lais_046_2020_etapa_2 | 630efc6b25a580be44b6cd50be6744a01221a2c4 | ["Apache-2.0"] | null | null | null | vaccine_card/api/apps.py | Unanimad/lais_046_2020_etapa_2 | 630efc6b25a580be44b6cd50be6744a01221a2c4 | ["Apache-2.0"] | null | null | null |
from django.apps import AppConfig
class ApiConfig(AppConfig):
name = 'vaccine_card.api'
f7209bc8e13eb725ec7ffbe1187a20a64ae43a57 | 3,850 | py | Python | EWR/ab6_v2/Sort.py | Koopakiller/Edu | 575c43dae24a4432e8c8fb2eda96e948cc33ec32 | ["MIT"] | null | null | null | EWR/ab6_v2/Sort.py | Koopakiller/Edu | 575c43dae24a4432e8c8fb2eda96e948cc33ec32 | ["MIT"] | null | null | null | EWR/ab6_v2/Sort.py | Koopakiller/Edu | 575c43dae24a4432e8c8fb2eda96e948cc33ec32 | ["MIT"] | null | null | null |
# coding=utf-8
# Author: Tom Lambert
# Content: Implementation of the Sort class for ab6 (exercise sheet 6).
class Sort(object):
"""Implementiert Sortier-Algorithmen mit der Möglichkeit einer statistischen Auswertung"""
def __init__(self):
        self.counter_swap = 0  # corresponds to roughly 2 element reads and 2 element assignments
self.counter_list_item_assignment = 0
self.counter_item_compare = 0
self.counter_get_item_from_list = 0
self.counter_add_item_to_result_list = 0
self.counter_recursive_call = 0
self.counter_split_list = 0
self.counter_copy_list = 0
self.counter_sort_call = 0
def quick_sort(self, lst):
"""
Sortiert die lst-Liste mit dem Quick-Sort-Algorithmus und gibt die sortierte Liste zurück.
Bestimmte Operationen werden in den counter_-Variablen gezählt.
"""
self.counter_sort_call += 1
if len(lst) > 1:
self.counter_get_item_from_list += 1
pivot = lst[0]
ltp = [] # less than pivot item
gtp = [] # greater than pivot item
ep = [] # equals pivot item
for item in lst:
self.counter_get_item_from_list += 1
self.counter_item_compare += 1
if item < pivot:
self.counter_add_item_to_result_list += 1
ltp.append(item)
elif item > pivot:
self.counter_add_item_to_result_list += 1
gtp.append(item)
else:
self.counter_add_item_to_result_list += 1
ep.append(item)
self.counter_split_list += 1
self.counter_recursive_call += 1
ltp = self.quick_sort(ltp)
self.counter_recursive_call += 1
gtp = self.quick_sort(gtp)
result = ltp
self.counter_add_item_to_result_list += len(ep)
result.extend(ep)
self.counter_add_item_to_result_list += len(gtp)
result.extend(gtp)
return result
else:
return lst
def gnome_sort(self, lst):
"""
Sortiert die lst-Liste mit dem Gnome-Sort-Algorithmus und gibt die sortierte Liste zurück.
Bestimmte Operationen werden in den counter_-Variablen gezählt.
"""
self.counter_sort_call += 1
self.counter_copy_list += 1
lst = list(lst) # copy the list, because lists are mutable and passed by reference
pos = 0
while pos < len(lst):
self.counter_get_item_from_list += 2
self.counter_item_compare += 1
if pos == 0 or lst[pos] >= lst[pos - 1]:
pos += 1
else:
self.counter_swap += 1
lst[pos], lst[pos - 1] = lst[pos - 1], lst[pos]
pos -= 1
return lst
def insertion_sort(self, lst):
"""
Sortiert die lst-Liste mit dem Insertion-Sort-Algorithmus und gibt die sortierte Liste zurück.
Bestimmte Operationen werden in den counter_-Variablen gezählt.
"""
self.counter_sort_call += 1
self.counter_copy_list += 1
lst = list(lst) # copy the list, because lists are mutable and passed by reference
for i in range(1, len(lst)):
self.counter_get_item_from_list += 1
val = lst[i]
j = i
while j > 0 and lst[j - 1] > val:
self.counter_item_compare += 1
self.counter_get_item_from_list += 1
self.counter_list_item_assignment += 1
lst[j] = lst[j - 1]
                j = j - 1  # move one position left; the while loop ends once j reaches 0
lst[j] = val
self.counter_list_item_assignment += 1
return lst
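A standalone sketch of how operation counters like the ones above support an empirical comparison of algorithms: a gnome sort that reports its swap count, mirroring `gnome_sort` (the function name and return shape are illustrative, not part of the class):

```python
def gnome_sort_with_stats(lst):
    """Return (sorted_list, swap_count) using gnome sort, as in the class above."""
    lst = list(lst)  # copy: lists are mutable and passed by reference
    swaps = 0
    pos = 0
    while pos < len(lst):
        if pos == 0 or lst[pos] >= lst[pos - 1]:
            pos += 1
        else:
            lst[pos], lst[pos - 1] = lst[pos - 1], lst[pos]
            swaps += 1
            pos -= 1
    return lst, swaps

result, swaps = gnome_sort_with_stats([3, 2, 1])
print(result, swaps)  # -> [1, 2, 3] 3
```

For gnome sort the swap counter equals the number of inversions in the input, which is why a reversed 3-element list needs exactly 3 swaps.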
f7209bcc24d3c78f312379c3537a4b94dc7b13db | 436 | py | Python | packages/python/plotly/plotly/validators/icicle/marker/colorbar/_tickangle.py | mastermind88/plotly.py | efa70710df1af22958e1be080e105130042f1839 | [
"MIT"
] | null | null | null | packages/python/plotly/plotly/validators/icicle/marker/colorbar/_tickangle.py | mastermind88/plotly.py | efa70710df1af22958e1be080e105130042f1839 | [
"MIT"
] | null | null | null | packages/python/plotly/plotly/validators/icicle/marker/colorbar/_tickangle.py | mastermind88/plotly.py | efa70710df1af22958e1be080e105130042f1839 | [
"MIT"
] | null | null | null | import _plotly_utils.basevalidators
class TickangleValidator(_plotly_utils.basevalidators.AngleValidator):
def __init__(
self, plotly_name="tickangle", parent_name="icicle.marker.colorbar", **kwargs
):
super(TickangleValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
**kwargs,
)
| 31.142857 | 85 | 0.665138 | import _plotly_utils.basevalidators
class TickangleValidator(_plotly_utils.basevalidators.AngleValidator):
def __init__(
self, plotly_name="tickangle", parent_name="icicle.marker.colorbar", **kwargs
):
super(TickangleValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
**kwargs,
)
| true | true |
f7209c409ce293886a6af24bf96c863400e8931e | 4,791 | py | Python | luminoth/utils/bbox_transform_tf.py | jsdussanc/luminoth | 7637c52cc01d2826a231fef43746aa10951f99f0 | [
"BSD-3-Clause"
] | 2,584 | 2017-08-16T20:31:52.000Z | 2022-03-16T07:53:54.000Z | luminoth/utils/bbox_transform_tf.py | dun933/Tabulo | dc1c1203a40e1ecf2aaca9647f3008ab72b41438 | [
"BSD-3-Clause"
] | 197 | 2017-08-17T14:49:18.000Z | 2022-02-10T01:50:50.000Z | luminoth/utils/bbox_transform_tf.py | dun933/Tabulo | dc1c1203a40e1ecf2aaca9647f3008ab72b41438 | [
"BSD-3-Clause"
] | 462 | 2017-08-16T22:00:23.000Z | 2022-03-08T19:14:00.000Z | import tensorflow as tf
def get_width_upright(bboxes):
with tf.name_scope('BoundingBoxTransform/get_width_upright'):
bboxes = tf.cast(bboxes, tf.float32)
x1, y1, x2, y2 = tf.split(bboxes, 4, axis=1)
width = x2 - x1 + 1.
height = y2 - y1 + 1.
# Calculate up right point of bbox (urx = up right x)
urx = x1 + .5 * width
ury = y1 + .5 * height
return width, height, urx, ury
def encode(bboxes, gt_boxes, variances=None):
with tf.name_scope('BoundingBoxTransform/encode'):
(bboxes_width, bboxes_height,
bboxes_urx, bboxes_ury) = get_width_upright(bboxes)
(gt_boxes_width, gt_boxes_height,
gt_boxes_urx, gt_boxes_ury) = get_width_upright(gt_boxes)
if variances is None:
variances = [1., 1.]
targets_dx = (gt_boxes_urx - bboxes_urx)/(bboxes_width * variances[0])
targets_dy = (gt_boxes_ury - bboxes_ury)/(bboxes_height * variances[0])
targets_dw = tf.log(gt_boxes_width / bboxes_width) / variances[1]
targets_dh = tf.log(gt_boxes_height / bboxes_height) / variances[1]
targets = tf.concat(
[targets_dx, targets_dy, targets_dw, targets_dh], axis=1)
return targets
def decode(roi, deltas, variances=None):
with tf.name_scope('BoundingBoxTransform/decode'):
(roi_width, roi_height,
roi_urx, roi_ury) = get_width_upright(roi)
dx, dy, dw, dh = tf.split(deltas, 4, axis=1)
if variances is None:
variances = [1., 1.]
pred_ur_x = dx * roi_width * variances[0] + roi_urx
pred_ur_y = dy * roi_height * variances[0] + roi_ury
pred_w = tf.exp(dw * variances[1]) * roi_width
pred_h = tf.exp(dh * variances[1]) * roi_height
bbox_x1 = pred_ur_x - 0.5 * pred_w
bbox_y1 = pred_ur_y - 0.5 * pred_h
# This -1. extra is different from reference implementation.
bbox_x2 = pred_ur_x + 0.5 * pred_w - 1.
bbox_y2 = pred_ur_y + 0.5 * pred_h - 1.
bboxes = tf.concat(
[bbox_x1, bbox_y1, bbox_x2, bbox_y2], axis=1)
return bboxes
def clip_boxes(bboxes, imshape):
"""
Clips bounding boxes to image boundaries based on image shape.
Args:
bboxes: Tensor with shape (num_bboxes, 4)
where point order is x1, y1, x2, y2.
imshape: Tensor with shape (2, )
where the first value is height and the next is width.
Returns
Tensor with same shape as bboxes but making sure that none
of the bboxes are outside the image.
"""
with tf.name_scope('BoundingBoxTransform/clip_bboxes'):
bboxes = tf.cast(bboxes, dtype=tf.float32)
imshape = tf.cast(imshape, dtype=tf.float32)
x1, y1, x2, y2 = tf.split(bboxes, 4, axis=1)
width = imshape[1]
height = imshape[0]
x1 = tf.maximum(tf.minimum(x1, width - 1.0), 0.0)
x2 = tf.maximum(tf.minimum(x2, width - 1.0), 0.0)
y1 = tf.maximum(tf.minimum(y1, height - 1.0), 0.0)
y2 = tf.maximum(tf.minimum(y2, height - 1.0), 0.0)
bboxes = tf.concat([x1, y1, x2, y2], axis=1)
return bboxes
def change_order(bboxes):
"""Change bounding box encoding order.
TensorFlow works with the (y_min, x_min, y_max, x_max) order while we work
with the (x_min, y_min, x_max, y_min).
While both encoding options have its advantages and disadvantages we
decided to use the (x_min, y_min, x_max, y_min), forcing use to switch to
TensorFlow's every time we want to use a std function that handles bounding
boxes.
Args:
bboxes: A Tensor of shape (total_bboxes, 4)
Returns:
bboxes: A Tensor of shape (total_bboxes, 4) with the order swaped.
"""
with tf.name_scope('BoundingBoxTransform/change_order'):
first_min, second_min, first_max, second_max = tf.unstack(
bboxes, axis=1
)
bboxes = tf.stack(
[second_min, first_min, second_max, first_max], axis=1
)
return bboxes
if __name__ == '__main__':
import numpy as np
bboxes = tf.placeholder(tf.float32)
bboxes_val = [[10, 10, 20, 22]]
gt_boxes = tf.placeholder(tf.float32)
gt_boxes_val = [[11, 13, 34, 31]]
imshape = tf.placeholder(tf.int32)
imshape_val = (100, 100)
deltas = encode(bboxes, gt_boxes)
decoded_bboxes = decode(bboxes, deltas)
final_decoded_bboxes = clip_boxes(decoded_bboxes, imshape)
with tf.Session() as sess:
final_decoded_bboxes = sess.run(final_decoded_bboxes, feed_dict={
bboxes: bboxes_val,
gt_boxes: gt_boxes_val,
imshape: imshape_val,
})
assert np.all(gt_boxes_val == final_decoded_bboxes)
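The encode/decode pair above is invertible by construction: encoding a ground-truth box against an anchor and decoding the deltas back reproduces the ground-truth box. A numpy-only sketch of the same box math (the helper names are assumptions for illustration, not Luminoth API):

```python
import numpy as np

def encode_np(box, gt):
    # Width/height and center of anchor box and ground-truth box (x1, y1, x2, y2).
    w, h = box[2] - box[0] + 1., box[3] - box[1] + 1.
    cx, cy = box[0] + .5 * w, box[1] + .5 * h
    gw, gh = gt[2] - gt[0] + 1., gt[3] - gt[1] + 1.
    gcx, gcy = gt[0] + .5 * gw, gt[1] + .5 * gh
    # Center offsets normalized by anchor size; log-scale size ratios.
    return np.array([(gcx - cx) / w, (gcy - cy) / h,
                     np.log(gw / w), np.log(gh / h)])

def decode_np(box, d):
    w, h = box[2] - box[0] + 1., box[3] - box[1] + 1.
    cx, cy = box[0] + .5 * w, box[1] + .5 * h
    pcx, pcy = d[0] * w + cx, d[1] * h + cy
    pw, ph = np.exp(d[2]) * w, np.exp(d[3]) * h
    # Note the same -1. as in decode() above when going back to corner coords.
    return np.array([pcx - .5 * pw, pcy - .5 * ph,
                     pcx + .5 * pw - 1., pcy + .5 * ph - 1.])

anchor = np.array([10., 10., 20., 22.])
gt = np.array([11., 13., 34., 31.])
roundtrip = decode_np(anchor, encode_np(anchor, gt))
print(np.allclose(roundtrip, gt))  # -> True
```

This mirrors the `__main__` check above without building a TensorFlow graph.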
| 31.313725 | 79 | 0.622626 | import tensorflow as tf
def get_width_upright(bboxes):
with tf.name_scope('BoundingBoxTransform/get_width_upright'):
bboxes = tf.cast(bboxes, tf.float32)
x1, y1, x2, y2 = tf.split(bboxes, 4, axis=1)
width = x2 - x1 + 1.
height = y2 - y1 + 1.
urx = x1 + .5 * width
ury = y1 + .5 * height
return width, height, urx, ury
def encode(bboxes, gt_boxes, variances=None):
with tf.name_scope('BoundingBoxTransform/encode'):
(bboxes_width, bboxes_height,
bboxes_urx, bboxes_ury) = get_width_upright(bboxes)
(gt_boxes_width, gt_boxes_height,
gt_boxes_urx, gt_boxes_ury) = get_width_upright(gt_boxes)
if variances is None:
variances = [1., 1.]
targets_dx = (gt_boxes_urx - bboxes_urx)/(bboxes_width * variances[0])
targets_dy = (gt_boxes_ury - bboxes_ury)/(bboxes_height * variances[0])
targets_dw = tf.log(gt_boxes_width / bboxes_width) / variances[1]
targets_dh = tf.log(gt_boxes_height / bboxes_height) / variances[1]
targets = tf.concat(
[targets_dx, targets_dy, targets_dw, targets_dh], axis=1)
return targets
def decode(roi, deltas, variances=None):
with tf.name_scope('BoundingBoxTransform/decode'):
(roi_width, roi_height,
roi_urx, roi_ury) = get_width_upright(roi)
dx, dy, dw, dh = tf.split(deltas, 4, axis=1)
if variances is None:
variances = [1., 1.]
pred_ur_x = dx * roi_width * variances[0] + roi_urx
pred_ur_y = dy * roi_height * variances[0] + roi_ury
pred_w = tf.exp(dw * variances[1]) * roi_width
pred_h = tf.exp(dh * variances[1]) * roi_height
bbox_x1 = pred_ur_x - 0.5 * pred_w
bbox_y1 = pred_ur_y - 0.5 * pred_h
bbox_x2 = pred_ur_x + 0.5 * pred_w - 1.
bbox_y2 = pred_ur_y + 0.5 * pred_h - 1.
bboxes = tf.concat(
[bbox_x1, bbox_y1, bbox_x2, bbox_y2], axis=1)
return bboxes
def clip_boxes(bboxes, imshape):
with tf.name_scope('BoundingBoxTransform/clip_bboxes'):
bboxes = tf.cast(bboxes, dtype=tf.float32)
imshape = tf.cast(imshape, dtype=tf.float32)
x1, y1, x2, y2 = tf.split(bboxes, 4, axis=1)
width = imshape[1]
height = imshape[0]
x1 = tf.maximum(tf.minimum(x1, width - 1.0), 0.0)
x2 = tf.maximum(tf.minimum(x2, width - 1.0), 0.0)
y1 = tf.maximum(tf.minimum(y1, height - 1.0), 0.0)
y2 = tf.maximum(tf.minimum(y2, height - 1.0), 0.0)
bboxes = tf.concat([x1, y1, x2, y2], axis=1)
return bboxes
def change_order(bboxes):
with tf.name_scope('BoundingBoxTransform/change_order'):
first_min, second_min, first_max, second_max = tf.unstack(
bboxes, axis=1
)
bboxes = tf.stack(
[second_min, first_min, second_max, first_max], axis=1
)
return bboxes
if __name__ == '__main__':
import numpy as np
bboxes = tf.placeholder(tf.float32)
bboxes_val = [[10, 10, 20, 22]]
gt_boxes = tf.placeholder(tf.float32)
gt_boxes_val = [[11, 13, 34, 31]]
imshape = tf.placeholder(tf.int32)
imshape_val = (100, 100)
deltas = encode(bboxes, gt_boxes)
decoded_bboxes = decode(bboxes, deltas)
final_decoded_bboxes = clip_boxes(decoded_bboxes, imshape)
with tf.Session() as sess:
final_decoded_bboxes = sess.run(final_decoded_bboxes, feed_dict={
bboxes: bboxes_val,
gt_boxes: gt_boxes_val,
imshape: imshape_val,
})
assert np.all(gt_boxes_val == final_decoded_bboxes)
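For a quick sanity check outside TensorFlow, the same width/centre encode–decode arithmetic can be sketched with plain Python floats. This mirrors the round-trip check in the file's `__main__` block, not the original TF graph:

```python
import math

def width_upright(box):
    # Same convention as get_width_upright: inclusive pixel coordinates.
    x1, y1, x2, y2 = box
    w, h = x2 - x1 + 1.0, y2 - y1 + 1.0
    return w, h, x1 + 0.5 * w, y1 + 0.5 * h

def encode(box, gt):
    bw, bh, bx, by = width_upright(box)
    gw, gh, gx, gy = width_upright(gt)
    return [(gx - bx) / bw, (gy - by) / bh,
            math.log(gw / bw), math.log(gh / bh)]

def decode(box, d):
    bw, bh, bx, by = width_upright(box)
    cx, cy = d[0] * bw + bx, d[1] * bh + by
    w, h = math.exp(d[2]) * bw, math.exp(d[3]) * bh
    return [cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w - 1.0, cy + 0.5 * h - 1.0]

roi, gt = [10, 10, 20, 22], [11, 13, 34, 31]
decoded = decode(roi, encode(roi, gt))
print([round(v, 6) for v in decoded])  # -> [11.0, 13.0, 34.0, 31.0]
```

Decoding the deltas produced by encoding recovers the ground-truth box exactly (up to float rounding), which is the invariant the `assert` in `__main__` exercises.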
f7209df8e01166c38a64b49337433ccac2442afc | 2,808 | py | Python | 5-MaksimovKA/code/submit.py | remtav/SpaceNet7_Multi-Temporal_Solutions | ee535c61fc22bffa45331519239c6d1b044b1514 | ["Apache-2.0"] | 38 | 2021-02-18T07:04:54.000Z | 2022-03-22T15:31:06.000Z | 5-MaksimovKA/code/submit.py | remtav/SpaceNet7_Multi-Temporal_Solutions | ee535c61fc22bffa45331519239c6d1b044b1514 | ["Apache-2.0"] | 2 | 2021-02-22T18:53:19.000Z | 2021-06-22T20:28:06.000Z | 5-MaksimovKA/code/submit.py | remtav/SpaceNet7_Multi-Temporal_Solutions | ee535c61fc22bffa45331519239c6d1b044b1514 | ["Apache-2.0"] | 15 | 2021-02-25T17:25:40.000Z | 2022-01-31T16:59:32.000Z
import os
import tqdm
import glob
import fiona
import geopandas as gpd
from fire import Fire
def sn7_convert_geojsons_to_csv(json_dirs, output_csv_path, population='proposal'):
'''
Convert jsons to csv
Population is either "ground" or "proposal"
'''
first_file = True # switch that will be turned off once we process the first file
for json_dir in tqdm.tqdm(json_dirs[:]):
json_files = sorted(glob.glob(os.path.join(json_dir, '*.geojson')))
for json_file in tqdm.tqdm(json_files):
try:
df = gpd.read_file(json_file)
except (fiona.errors.DriverError):
message = '! Invalid dataframe for %s' % json_file
print(message)
continue
# raise Exception(message)
if population == 'ground':
file_name_col = df.image_fname.apply(lambda x: os.path.splitext(x)[0])
elif population == 'proposal':
file_name_col = os.path.splitext(os.path.basename(json_file))[0]
else:
raise Exception('! Invalid population')
if len(df) == 0:
message = '! Empty dataframe for %s' % json_file
print(message)
# raise Exception(message)
df = gpd.GeoDataFrame({
'filename': file_name_col,
'id': 0,
'geometry': "POLYGON EMPTY",
})
else:
try:
df = gpd.GeoDataFrame({
'filename': file_name_col,
'id': df.Id.astype(int),
'geometry': df.geometry,
})
                except Exception:
                    print(df)
if first_file:
net_df = df
first_file = False
else:
net_df = net_df.append(df)
net_df.to_csv(output_csv_path, index=False)
return net_df
def make_submit(out_file='/wdata/solution.csv'):
pred_top_dir = '/wdata/'
# out_dir_csv = os.path.join(pred_top_dir, 'csvs')
# os.makedirs(out_dir_csv, exist_ok=True)
# prop_file = os.path.join(out_dir_csv, 'solution.csv')
prop_file = out_file
aoi_dirs = sorted([os.path.join(pred_top_dir, 'pred_jsons_match', aoi) \
for aoi in os.listdir(os.path.join(pred_top_dir, 'pred_jsons_match')) \
if os.path.isdir(os.path.join(pred_top_dir, 'pred_jsons_match', aoi))])
print("aoi_dirs:", aoi_dirs)
# Execute
if os.path.exists(prop_file):
os.remove(prop_file)
net_df = sn7_convert_geojsons_to_csv(aoi_dirs, prop_file, 'proposal')
print("prop_file:", prop_file)
if __name__ == '__main__':
    Fire(make_submit)
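In the `'proposal'` branch above, each CSV row is named after the geojson file itself; that derivation is plain stdlib path handling. A minimal sketch (the path below is invented for illustration):

```python
import os

def proposal_filename(json_path):
    # basename strips the AOI directory, splitext drops '.geojson' --
    # the same expression used in sn7_convert_geojsons_to_csv above.
    return os.path.splitext(os.path.basename(json_path))[0]

name = proposal_filename('/wdata/pred_jsons_match/aoi_1/mosaic_2020_01.geojson')
print(name)  # -> mosaic_2020_01
```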
f7209e86a2976584564066d1289e7d7d2268a733 | 2,401 | py | Python | tests/commands/test_modifyoncondition.py | lpd-patrick/oaxmlapi | 1e73881c290ca3181c2d33a7b5fa74fb5f86e62c | ["MIT"] | 25 | 2015-05-20T01:23:39.000Z | 2021-03-01T17:13:59.000Z | tests/commands/test_modifyoncondition.py | lpd-patrick/oaxmlapi | 1e73881c290ca3181c2d33a7b5fa74fb5f86e62c | ["MIT"] | 16 | 2015-03-03T00:59:29.000Z | 2021-11-30T16:45:15.000Z | tests/commands/test_modifyoncondition.py | lpd-patrick/oaxmlapi | 1e73881c290ca3181c2d33a7b5fa74fb5f86e62c | ["MIT"] | 12 | 2016-01-04T20:06:44.000Z | 2020-09-27T20:15:27.000Z
# -*- coding: utf-8
from __future__ import absolute_import
import unittest
from oaxmlapi import commands, datatypes
try:
import xml.etree.cElementTree as ET
except ImportError:
import xml.etree.ElementTree as ET
class TestModifyOnConditionClass(unittest.TestCase):
def test_str(self):
slip = datatypes.Datatype(
'Slip',
{'id': '1234'}
)
date = datatypes.Datatype(
'Date',
{'year': '2012'}
)
self.assertEqual(
str(commands.ModifyOnCondition('Slip', slip, date)),
'<ModifyOnCondition type=Slip>'
)
def test_modifyoncondition(self):
slip = datatypes.Datatype(
'Slip',
{'id': '1234'}
)
date = datatypes.Datatype(
'Date',
{'year': '2012'}
)
self.assertIsInstance(
commands.ModifyOnCondition('Slip', slip, date).modify(),
ET.Element
)
def test_tostring(self):
slip = datatypes.Datatype(
'Slip',
{'id': '1234'}
)
date = datatypes.Datatype(
'Date',
{'year': '2012'}
)
self.assertEqual(
commands.ModifyOnCondition('Slip', slip, date).tostring(),
(
b'<ModifyOnCondition condition="if-not-updated" type="Slip">'
b'<Slip><id>1234</id></Slip><Date><year>2012</year></Date>'
b'</ModifyOnCondition>'
)
)
def test_prettify(self):
slip = datatypes.Datatype(
'Slip',
{'id': '1234'}
)
date = datatypes.Datatype(
'Date',
{'year': '2012'}
)
self.assertEqual(
commands.ModifyOnCondition('Slip', slip, date).prettify(),
(
b'<?xml version="1.0" encoding="utf-8"?>\n'
b'<ModifyOnCondition condition="if-not-updated" type="Slip">\n'
b' <Slip>\n'
b' <id>1234</id>\n'
b' </Slip>\n'
b' <Date>\n'
b' <year>2012</year>\n'
b' </Date>\n'
b'</ModifyOnCondition>\n'
)
)
suite = unittest.TestLoader().loadTestsFromTestCase(TestModifyOnConditionClass)
unittest.TextTestRunner(verbosity=2).run(suite)
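The compact and pretty-printed strings asserted above can be reproduced with the standard library alone. A rough sketch of what oaxmlapi is presumably building under the hood (independent of the package itself):

```python
import xml.etree.ElementTree as ET
from xml.dom import minidom

root = ET.Element('ModifyOnCondition',
                  {'condition': 'if-not-updated', 'type': 'Slip'})
slip = ET.SubElement(root, 'Slip')
ET.SubElement(slip, 'id').text = '1234'
date = ET.SubElement(root, 'Date')
ET.SubElement(date, 'year').text = '2012'

compact = ET.tostring(root)  # single-line form, like tostring()
pretty = minidom.parseString(compact).toprettyxml(indent='  ')  # like prettify()
print(pretty)
```

On Python 3.8+ `ET.tostring` emits attributes in insertion order, which is why the `condition="if-not-updated" type="Slip"` ordering in the expected strings is stable.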
f7209ed2b871197ee55089411c3d1ab6b323fbba | 3,956 | py | Python | docs/user_guides/tools/elastic_ctr/elastic_ctr/dist_train_demo.py | shiyutang/docs | b05612213a08daf9f225abce08fc42f924ef51ad | ["Apache-2.0"] | 104 | 2018-09-04T08:16:05.000Z | 2021-05-06T20:45:26.000Z | docs/user_guides/tools/elastic_ctr/elastic_ctr/dist_train_demo.py | shiyutang/docs | b05612213a08daf9f225abce08fc42f924ef51ad | ["Apache-2.0"] | 1,582 | 2018-06-25T06:14:11.000Z | 2021-05-14T16:00:43.000Z | docs/user_guides/tools/elastic_ctr/elastic_ctr/dist_train_demo.py | shiyutang/docs | b05612213a08daf9f225abce08fc42f924ef51ad | ["Apache-2.0"] | 387 | 2018-06-20T07:42:32.000Z | 2021-05-14T08:35:28.000Z
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import paddle.fluid.core as core
import math
import os
import sys
import numpy
import paddle
import paddle.fluid as fluid
BATCH_SIZE = 64
PASS_NUM = 1
def loss_net(hidden, label):
prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
loss = fluid.layers.cross_entropy(input=prediction, label=label)
avg_loss = fluid.layers.mean(loss)
acc = fluid.layers.accuracy(input=prediction, label=label)
return prediction, avg_loss, acc
def conv_net(img, label):
conv_pool_1 = fluid.nets.simple_img_conv_pool(
input=img,
filter_size=5,
num_filters=20,
pool_size=2,
pool_stride=2,
act="relu")
conv_pool_1 = fluid.layers.batch_norm(conv_pool_1)
conv_pool_2 = fluid.nets.simple_img_conv_pool(
input=conv_pool_1,
filter_size=5,
num_filters=50,
pool_size=2,
pool_stride=2,
act="relu")
return loss_net(conv_pool_2, label)
def train(use_cuda, role, endpoints, current_endpoint, trainer_id, trainers):
if use_cuda and not fluid.core.is_compiled_with_cuda():
return
img = fluid.layers.data(name='img', shape=[1, 28, 28], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
prediction, avg_loss, acc = conv_net(img, label)
test_program = fluid.default_main_program().clone(for_test=True)
optimizer = fluid.optimizer.Adam(learning_rate=0.001)
optimizer.minimize(avg_loss)
t = fluid.DistributeTranspiler()
t.transpile(trainer_id, pservers=endpoints, trainers=trainers)
if role == "pserver":
prog = t.get_pserver_program(current_endpoint)
startup = t.get_startup_program(current_endpoint, pserver_program=prog)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup)
exe.run(prog)
elif role == "trainer":
prog = t.get_trainer_program()
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
exe = fluid.Executor(place)
train_reader = paddle.batch(
paddle.reader.shuffle(paddle.dataset.mnist.train(), buf_size=500),
batch_size=BATCH_SIZE)
test_reader = paddle.batch(
paddle.dataset.mnist.test(), batch_size=BATCH_SIZE)
feeder = fluid.DataFeeder(feed_list=[img, label], place=place)
exe.run(fluid.default_startup_program())
for pass_id in range(PASS_NUM):
for batch_id, data in enumerate(train_reader()):
acc_np, avg_loss_np = exe.run(
prog, feed=feeder.feed(data), fetch_list=[acc, avg_loss])
if (batch_id + 1) % 10 == 0:
print(
'PassID {0:1}, BatchID {1:04}, Loss {2:2.2}, Acc {3:2.2}'.
format(pass_id, batch_id + 1,
float(avg_loss_np.mean()), float(
acc_np.mean())))
if __name__ == '__main__':
if len(sys.argv) != 6:
print(
"Usage: python %s role endpoints current_endpoint trainer_id trainers"
% sys.argv[0])
exit(0)
role, endpoints, current_endpoint, trainer_id, trainers = \
sys.argv[1:]
train(True, role, endpoints, current_endpoint,
int(trainer_id), int(trainers))
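For reference, the feature-map sizes implied by `conv_net` can be checked by hand, assuming fluid's `simple_img_conv_pool` defaults of stride-1, unpadded convolutions (an assumption about the API defaults, not stated in the file itself):

```python
def out_size(size, kernel, stride=1, pad=0):
    # Standard convolution/pooling output-size formula.
    return (size + 2 * pad - kernel) // stride + 1

s = 28                 # MNIST input is 28x28
s = out_size(s, 5)     # 5x5 conv -> 24
s = out_size(s, 2, 2)  # 2x2 max-pool, stride 2 -> 12
s = out_size(s, 5)     # 5x5 conv -> 8
s = out_size(s, 2, 2)  # 2x2 max-pool, stride 2 -> 4
print(s, 50 * s * s)   # -> 4 800 (features entering the softmax fc layer)
```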
f7209f8d535077c8af62477c05e769c4ef76e2d5 | 9,895 | py | Python | mazda3_joystick.py | moralrecordings/elm327_joystick | edebae95f24913f7c6caed98751fb54477743169 | [
"BSD-3-Clause"
] | 7 | 2017-12-18T13:11:18.000Z | 2022-01-08T21:13:26.000Z | mazda3_joystick.py | moralrecordings/elm327_joystick | edebae95f24913f7c6caed98751fb54477743169 | [
"BSD-3-Clause"
] | null | null | null | mazda3_joystick.py | moralrecordings/elm327_joystick | edebae95f24913f7c6caed98751fb54477743169 | [
"BSD-3-Clause"
] | 3 | 2018-10-27T21:37:01.000Z | 2022-01-08T21:13:24.000Z | #!/usr/bin/env python3
import uinput
from elm327 import ELM327, PROTOCOLS
from mrcrowbar import models as mrc
import math
import time
from optparse import OptionParser
class OptParser( OptionParser ):
def format_epilog( self, formatter ):
return '\n{}\n'.format( '\n'.join( [formatter._format_text( x ) for x in self.epilog.split( '\n' )] ) )
class Steering( mrc.Block ):
RANGE = 0x00D2
axis_raw = mrc.UInt16_BE( 0x00 )
@property
def axis( self ):
return min( max( (255*(self.axis_raw - 0x8000)//self.RANGE), -255 ), 255 )
class Accelerator( mrc.Block ):
RANGE = 0xC8
axis_raw = mrc.UInt8( 0x06 )
@property
def axis( self ):
return min( max( (255*(self.axis_raw)//self.RANGE), 0 ), 255 )
class Brake( mrc.Block ):
button = mrc.Bits( 0x02, 0b01000000 )
class Cruise( mrc.Block ):
button = mrc.Bits( 0x00, 0b10000000 )
class Controls( mrc.Block ):
driver_door = mrc.Bits( 0x00, 0b10000000 )
high_beams = mrc.Bits( 0x03, 0b01000000 )
class Mazda3:
LATCH_TIME = 0.1
PRESS_THRESHOLD = 32
STEER_THRESHOLD = 64
SHOVE_THRESHOLD = 128
def __init__( self, name, mapping ):
print( 'Creating uinput device "{}"...'.format( name ) )
self.device = uinput.Device( mapping, name )
self.steering = 0
self.accelerator = 0
self.brake = 0
self.high_beams = 0
self.cruise_t = self.driver_door_t = time.time() + self.LATCH_TIME
self.cruise = 0
self.driver_door = 0
self.cruise_prev = 0
self.driver_door_prev = 0
def update( self, msg_id, msg_b ):
t = time.time()
self.cruise_prev = self.cruise
self.driver_door_prev = self.driver_door
if msg_id == 0x4da:
self.steering = Steering( msg_b ).axis
elif msg_id == 0x201:
self.accelerator = Accelerator( msg_b ).axis
elif msg_id == 0x205:
self.brake = Brake( msg_b ).button
elif msg_id == 0x4ec:
self.cruise = Cruise( msg_b ).button
elif msg_id == 0x433:
obj = Controls( msg_b )
self.high_beams = obj.high_beams
self.driver_door = obj.driver_door
else:
return
if self.cruise != self.cruise_prev:
self.cruise_t = t
if self.driver_door != self.driver_door_prev:
self.driver_door_t = t
self.set_controls()
return
def set_controls( self ):
pass
class Mazda3Joystick( Mazda3 ):
NAME = 'Mazda 3 Joystick'
DEVICE = [
uinput.ABS_WHEEL + (-255, 255, 0, 0),
uinput.ABS_GAS + (0, 255, 0, 0),
uinput.BTN_0,
uinput.BTN_1,
uinput.BTN_2,
uinput.BTN_3
]
def __init__( self ):
super().__init__( name=self.NAME, mapping=self.DEVICE )
def set_controls( self ):
t = time.time()
self.device.emit( uinput.ABS_WHEEL, self.steering )
self.device.emit( uinput.ABS_GAS, self.accelerator )
self.device.emit( uinput.BTN_0, self.brake )
self.device.emit( uinput.BTN_1, self.high_beams )
self.device.emit( uinput.BTN_2, 1 if t < (self.cruise_t + self.LATCH_TIME) else 0 )
self.device.emit( uinput.BTN_3, 1 if t < (self.driver_door_t + self.LATCH_TIME) else 0 )
return
class Mazda3Doom( Mazda3Joystick ):
NAME = 'Mazda 3 Doom'
DEVICE = [
uinput.ABS_WHEEL + (-255, 255, 0, 0),
uinput.ABS_GAS + (-255, 255, 0, 0),
uinput.BTN_0,
uinput.BTN_1,
uinput.BTN_2,
uinput.BTN_3
]
class Mazda3DOS( Mazda3Joystick ):
NAME = 'Mazda 3 DOS'
DEVICE = [
uinput.ABS_WHEEL + (-255, 255, 0, 0),
uinput.ABS_GAS + (-255, 255, 0, 0),
uinput.BTN_0,
uinput.BTN_1,
uinput.BTN_2,
uinput.BTN_3
]
def set_controls( self ):
t = time.time()
self.device.emit( uinput.ABS_WHEEL, self.steering )
self.device.emit( uinput.ABS_GAS, self.accelerator*2-255 )
self.device.emit( uinput.BTN_0, self.brake )
self.device.emit( uinput.BTN_1, self.high_beams )
self.device.emit( uinput.BTN_2, 1 if t < (self.cruise_t + self.LATCH_TIME) else 0 )
self.device.emit( uinput.BTN_3, 1 if t < (self.driver_door_t + self.LATCH_TIME) else 0 )
return
class Mazda3Descent( Mazda3 ):
NAME = 'Mazda 3 Descent'
DEVICE = [
uinput.ABS_WHEEL + (-255, 255, 0, 0),
uinput.ABS_GAS + (-255, 255, 0, 0),
uinput.BTN_0,
uinput.BTN_1,
uinput.BTN_2,
uinput.BTN_3,
uinput.KEY_UP,
uinput.KEY_DOWN
]
DOUBLE_TAP = 0.5
def __init__( self ):
super().__init__( name=self.NAME, mapping=self.DEVICE )
self.high_beams_prev = 0
self.high_beams_t = time.time()
self.high_beams_key = uinput.KEY_DOWN
def update( self, msg_id, msg_b ):
t = time.time()
self.high_beams_prev = self.high_beams
super().update( msg_id, msg_b )
if self.high_beams != self.high_beams_prev:
if self.high_beams:
self.high_beams_key = uinput.KEY_UP if (t - self.high_beams_t < self.DOUBLE_TAP) else uinput.KEY_DOWN
self.device.emit( self.high_beams_key, 1 )
self.high_beams_t = t
else:
self.device.emit( self.high_beams_key, 0 )
def set_controls( self ):
t = time.time()
self.device.emit( uinput.ABS_WHEEL, self.steering )
self.device.emit( uinput.ABS_GAS, self.accelerator )
self.device.emit( uinput.BTN_0, self.brake )
self.device.emit( uinput.BTN_2, 1 if t < (self.cruise_t + self.LATCH_TIME) else 0 )
self.device.emit( uinput.BTN_3, 1 if t < (self.driver_door_t + self.LATCH_TIME) else 0 )
return
class Mazda3Grim( Mazda3 ):
NAME = 'Mazda 3 Grim Fandango'
DEVICE = [
uinput.KEY_LEFT,
uinput.KEY_UP,
uinput.KEY_RIGHT,
uinput.KEY_U,
uinput.KEY_LEFTSHIFT,
uinput.KEY_E,
uinput.KEY_P,
uinput.KEY_I
]
def __init__( self ):
super().__init__( name=self.NAME, mapping=self.DEVICE )
def set_controls( self ):
t = time.time()
self.device.emit( uinput.KEY_LEFT, 1 if self.steering < -self.STEER_THRESHOLD else 0 )
self.device.emit( uinput.KEY_RIGHT, 1 if self.steering > self.STEER_THRESHOLD else 0 )
self.device.emit( uinput.KEY_UP, 1 if self.accelerator > self.PRESS_THRESHOLD else 0 )
self.device.emit( uinput.KEY_LEFTSHIFT, 1 if self.accelerator > self.SHOVE_THRESHOLD else 0 )
self.device.emit( uinput.KEY_U, self.brake )
self.device.emit( uinput.KEY_E, self.high_beams )
self.device.emit( uinput.KEY_P, 1 if t < self.cruise_t + self.LATCH_TIME else 0 )
self.device.emit( uinput.KEY_I, 1 if t < self.driver_door_t + self.LATCH_TIME else 0 )
return
class Mazda3Sonic( Mazda3 ):
NAME = 'Mazda 3 Sonic'
DEVICE = [
uinput.KEY_LEFT,
uinput.KEY_UP,
uinput.KEY_RIGHT,
uinput.KEY_DOWN,
uinput.KEY_Z,
uinput.KEY_ENTER
]
def __init__( self ):
super().__init__( name=self.NAME, mapping=self.DEVICE )
def set_controls( self ):
t = time.time()
self.device.emit( uinput.KEY_LEFT, 1 if self.steering < -self.STEER_THRESHOLD else 0 )
self.device.emit( uinput.KEY_RIGHT, 1 if self.steering > self.STEER_THRESHOLD else 0 )
self.device.emit( uinput.KEY_Z, 1 if self.accelerator > self.PRESS_THRESHOLD else 0 )
self.device.emit( uinput.KEY_DOWN, self.brake )
self.device.emit( uinput.KEY_UP, self.high_beams )
self.device.emit( uinput.KEY_ENTER, 1 if t < self.cruise_t + self.LATCH_TIME else 0 )
return
CONTROLLERS = {
'joystick': Mazda3Joystick,
'grim': Mazda3Grim,
'descent': Mazda3Descent,
'doom': Mazda3Doom,
'dos': Mazda3DOS,
'sonic': Mazda3Sonic,
}
if __name__ == '__main__':
usage = 'Usage: %prog [options]'
parser = OptParser( epilog='Protocols supported by the ELM327:\n{}'.format( PROTOCOLS ) )
parser.add_option( '-g', '--game', dest='game', help='Game configuration to use (choices: {})'.format( ' '.join( CONTROLLERS.keys() ) ) )
parser.add_option( '-d', '--device', dest='device', help='Path to ELM327 serial device' )
parser.add_option( '-b', '--baudrate', dest='baud_rate', help='Baud rate' )
parser.add_option( '-p', '--protocol', dest='protocol', help='ELM327 message protocol to use' )
(options, argv) = parser.parse_args()
args = {}
controller_type = 'joystick'
if options.game and options.game in CONTROLLERS:
controller_type = options.game
elif len( argv ) >= 1 and argv[0] in CONTROLLERS:
controller_type = argv[0]
controller = CONTROLLERS[controller_type]()
if options.device:
args['device'] = options.device
elif len( argv ) >= 2:
args['device'] = argv[1]
if options.baud_rate:
args['baud_rate'] = options.baud_rate
elif len( argv ) >= 3:
args['baud_rate'] = argv[2]
if options.protocol:
args['protocol'] = options.protocol
elif len( argv ) >= 4:
args['protocol'] = argv[3]
elm = ELM327( **args )
elm.reset()
elm.set_can_whitelist( [0x4da, 0x201, 0x205, 0x4ec, 0x433] )
elm.start_can()
try:
while True:
msg_id, msg_b = elm.recv_can()
if msg_id >= 0:
controller.update( msg_id, msg_b )
else:
print('-- Miss: {}'.format( msg_b ))
except EOFError:
print('-- Hit the end')
except KeyboardInterrupt:
pass
elm.get_prompt()
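The mrcrowbar blocks in this file only pull fixed offsets and bit masks out of each CAN payload. The `Steering` scaling, for example, reduces to the following plain-Python arithmetic (payload bytes invented for illustration):

```python
STEER_RANGE = 0x00D2

def steering_axis(payload):
    # Big-endian 16-bit value at offset 0, centred on 0x8000,
    # clamped and scaled to -255..255 as in Steering.axis.
    raw = (payload[0] << 8) | payload[1]
    return min(max(255 * (raw - 0x8000) // STEER_RANGE, -255), 255)

centre = steering_axis(bytes([0x80, 0x00, 0, 0, 0, 0, 0, 0]))  # wheel centred
right = steering_axis(bytes([0x80, 0xD2, 0, 0, 0, 0, 0, 0]))   # full lock one way
left = steering_axis(bytes([0x7F, 0x2E, 0, 0, 0, 0, 0, 0]))    # full lock the other
print(centre, right, left)  # -> 0 255 -255
```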
f720a09f334fcd690a947282fa271733c27fe7d4 | 218 | py | Python | WriteaFunction.py | jibinmathew691993/PythonHackerrank | 14ab5b620435a006d5ccff17536bc01acd7c22dc | [
"MIT"
] | null | null | null | WriteaFunction.py | jibinmathew691993/PythonHackerrank | 14ab5b620435a006d5ccff17536bc01acd7c22dc | [
"MIT"
] | null | null | null | WriteaFunction.py | jibinmathew691993/PythonHackerrank | 14ab5b620435a006d5ccff17536bc01acd7c22dc | [
"MIT"
] | null | null | null | def is_leap(year):
leap = False
if year>=1900:
if year%4 == 0:
leap = True
if year%100 == 0 and year%400 != 0:
leap = False
return leap
year = int(input()) | 18.166667 | 47 | 0.463303 | def is_leap(year):
leap = False
if year>=1900:
if year%4 == 0:
leap = True
if year%100 == 0 and year%400 != 0:
leap = False
return leap
year = int(input()) | true | true |
f720a191c0da487593b1eee10de1b8b4af27c2df | 2,276 | py | Python | pyecharts/charts/basic_charts/scatter.py | myqf555/pyecharts | 050309ee3d2016142df3e2265a091e27aa58a027 | [
"MIT"
] | 1 | 2020-02-13T14:48:20.000Z | 2020-02-13T14:48:20.000Z | pyecharts/charts/basic_charts/scatter.py | eclipse2007/pyecharts | 651731a1a5220420a9a03808d2f5eb38ffe18e09 | [
"MIT"
] | null | null | null | pyecharts/charts/basic_charts/scatter.py | eclipse2007/pyecharts | 651731a1a5220420a9a03808d2f5eb38ffe18e09 | [
"MIT"
] | 1 | 2020-09-12T05:55:48.000Z | 2020-09-12T05:55:48.000Z | import itertools
from ... import options as opts
from ... import types
from ...charts.chart import RectChart
from ...globals import ChartType
class Scatter(RectChart):
"""
<<< Scatter >>>
The scatter diagram on the rectangular coordinate system can be used to
show the relationship between x and y of the data. If the data item has
multiple dimensions, it can be represented by color, and the
visualmap component can be used.
"""
def add_yaxis(
self,
series_name: str,
y_axis: types.Sequence,
*,
is_selected: bool = True,
xaxis_index: types.Optional[types.Numeric] = None,
yaxis_index: types.Optional[types.Numeric] = None,
color: types.Optional[str] = None,
symbol: types.Optional[str] = None,
symbol_size: types.Union[types.Numeric, types.Sequence] = 10,
symbol_rotate: types.Optional[types.Numeric] = None,
label_opts: types.Label = opts.LabelOpts(position="right"),
markpoint_opts: types.MarkPoint = None,
markline_opts: types.MarkLine = None,
tooltip_opts: types.Tooltip = None,
itemstyle_opts: types.ItemStyle = None,
):
self._append_color(color)
self._append_legend(series_name, is_selected)
if len(y_axis) > 0 and isinstance(y_axis[0], types.Sequence):
data = [
list(itertools.chain(list([x]), y))
for x, y in zip(self._xaxis_data, y_axis)
]
else:
data = [list(z) for z in zip(self._xaxis_data, y_axis)]
self.options.get("series").append(
{
"type": ChartType.SCATTER,
"name": series_name,
"xAxisIndex": xaxis_index,
"yAxisIndex": yaxis_index,
"symbol": symbol,
"symbolSize": symbol_size,
"symbolRotate": symbol_rotate,
"data": data,
"label": label_opts,
"markPoint": markpoint_opts,
"markLine": markline_opts,
"tooltip": tooltip_opts,
"itemStyle": itemstyle_opts,
}
)
return self
| 35.5625 | 76 | 0.560193 | import itertools
from ... import options as opts
from ... import types
from ...charts.chart import RectChart
from ...globals import ChartType
class Scatter(RectChart):
def add_yaxis(
self,
series_name: str,
y_axis: types.Sequence,
*,
is_selected: bool = True,
xaxis_index: types.Optional[types.Numeric] = None,
yaxis_index: types.Optional[types.Numeric] = None,
color: types.Optional[str] = None,
symbol: types.Optional[str] = None,
symbol_size: types.Union[types.Numeric, types.Sequence] = 10,
symbol_rotate: types.Optional[types.Numeric] = None,
label_opts: types.Label = opts.LabelOpts(position="right"),
markpoint_opts: types.MarkPoint = None,
markline_opts: types.MarkLine = None,
tooltip_opts: types.Tooltip = None,
itemstyle_opts: types.ItemStyle = None,
):
self._append_color(color)
self._append_legend(series_name, is_selected)
if len(y_axis) > 0 and isinstance(y_axis[0], types.Sequence):
data = [
list(itertools.chain(list([x]), y))
for x, y in zip(self._xaxis_data, y_axis)
]
else:
data = [list(z) for z in zip(self._xaxis_data, y_axis)]
self.options.get("series").append(
{
"type": ChartType.SCATTER,
"name": series_name,
"xAxisIndex": xaxis_index,
"yAxisIndex": yaxis_index,
"symbol": symbol,
"symbolSize": symbol_size,
"symbolRotate": symbol_rotate,
"data": data,
"label": label_opts,
"markPoint": markpoint_opts,
"markLine": markline_opts,
"tooltip": tooltip_opts,
"itemStyle": itemstyle_opts,
}
)
return self
| true | true |
f720a2030d77959bb04a88ba16fda69fd042f960 | 8,410 | py | Python | LichessBotMain.py | nfeddersen/lichess-bot | 6457b5f66104b59b91316ba3a944b55710ca64e5 | [
"MIT"
] | null | null | null | LichessBotMain.py | nfeddersen/lichess-bot | 6457b5f66104b59b91316ba3a944b55710ca64e5 | [
"MIT"
] | null | null | null | LichessBotMain.py | nfeddersen/lichess-bot | 6457b5f66104b59b91316ba3a944b55710ca64e5 | [
"MIT"
] | 1 | 2021-07-12T14:11:04.000Z | 2021-07-12T14:11:04.000Z | import requests
import json
from stockfish import Stockfish
stockfish = Stockfish('stockfish_20090216_x64_bmi2.exe', parameters={"Threads": 8, "Minimum Thinking Time": 300})
stockfish.set_depth(15)
stockfish.set_skill_level(25)
api_key = 'REPLACE_WITH_API_KEY'
headers = {'Authorization': f'Bearer {api_key}'}
game_state_url = 'https://lichess.org/api/stream/event'
game_id = 'placeholder'
is_checkmate = False
bot_challenges = False
while True:
state_session = requests.Session()
request = state_session.get(game_state_url, headers=headers, stream=True)
for line in request.iter_lines():
if len(line) == 0:
print('Request response is empty.')
if len(line) != 0:
challenge_state_json = json.loads(line)
if challenge_state_json['type'] == 'challenge':
print('BOT_NAME has been challenged.')
challenge_id = challenge_state_json['challenge']['id']
challenger = challenge_state_json['challenge']['challenger']['id']
print('Challenge ID is: ' + challenge_id + '. Challenger is: ' + challenger)
if challenge_state_json['challenge']['variant']['key'] != 'standard':
requests.post('https://lichess.org/api/challenge/' + challenge_id + '/decline', params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
print('Challenge has been declined for improper variant.')
continue
else:
requests.post('https://lichess.org/api/challenge/' + challenge_id + '/accept', params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
current_move = 'placeholder'
best_move = 'placeholder'
position = ['placeholder', 'placeholder']
black_position = ['placeholder', 'placeholder']
white = True
second_session = requests.Session()
request = second_session.get(game_state_url, headers=headers, stream=True)
for line in request.iter_lines():
game_start_json = json.loads(line)
print(game_start_json)
game_id = game_start_json['game']['id']
print('Game ID is: ' + game_id)
break
game_stream_url = 'https://lichess.org/api/bot/game/stream/' + game_id
bot_move_url = 'https://lichess.org/api/bot/game/' + game_id + '/move/'
s = requests.Session()
r = s.get(game_stream_url, headers=headers, stream=True)
i = 0
move_count = 0
for line in r.iter_lines():
if line:
i = i + 1
move_count = move_count + .5
move_count = float(move_count)
if move_count.is_integer():
move_count = int(move_count)
move_count = str(move_count)
print('It is move ' + move_count + '.')
move_count = float(move_count)
start_json = json.loads(line)
if i == 1:
if start_json["white"]["id"] == 'REPLACE_WITH_BOT_USERNAME':
white = True
print('It is white to move. I am white.')
else:
white = False
print('It is white to move. I am black.')
if start_json['speed'] == 'bullet' and i == 1:
stockfish.set_depth(15)
stockfish.set_skill_level(20)
elif start_json['speed'] == 'blitz' and i == 1:
stockfish.set_depth(15)
stockfish.set_skill_level(25)
elif start_json['speed'] == 'rapid' and i == 1:
stockfish.set_depth(19)
stockfish.set_skill_level(30)
elif start_json['speed'] == 'classical' and i == 1:
stockfish.set_depth(20)
stockfish.set_skill_level(30)
elif start_json['speed'] == 'correspondence' and i == 1:
stockfish.set_depth(20)
stockfish.set_skill_level(30)
if white and i == 1:
position.clear()
stockfish.set_position()
best_move = stockfish.get_best_move()
requests.post(bot_move_url + best_move, params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
best_move = str(best_move)
position.append(best_move)
stockfish.set_position(position)
if not white and i == 1:
print('I am waiting to move.')
if not white and i == 2:
black_position.clear()
current_move = start_json["moves"]
current_move = str(current_move)
current_move = current_move.split()
current_move = current_move[-1]
black_position.append(current_move)
stockfish.set_position(black_position)
best_move = stockfish.get_best_move()
black_position.append(best_move)
requests.post(bot_move_url + best_move, params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
if start_json['type'] == 'gameState':
# If bot is white and first move has already been played
if not i % 2 == 0 and white and i > 1:
current_move = start_json["moves"]
current_move = str(current_move)
current_move = current_move.split()
current_move = current_move[-1]
position.append(current_move)
stockfish.set_position(position)
best_move = stockfish.get_best_move()
position.append(best_move)
requests.post(bot_move_url + best_move, params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
# If bot is black and first move has been played
if not white and i % 2 == 0 and i > 2:
current_move = start_json["moves"]
current_move = str(current_move)
current_move = current_move.split()
current_move = current_move[-1]
black_position.append(current_move)
stockfish.set_position(black_position)
best_move = stockfish.get_best_move()
black_position.append(best_move)
requests.post(bot_move_url + best_move, params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
else:
print('I am waiting to move.')
continue
| 46.464088 | 114 | 0.437574 | import requests
import json
from stockfish import Stockfish
stockfish = Stockfish('stockfish_20090216_x64_bmi2.exe', parameters={"Threads": 8, "Minimum Thinking Time": 300})
stockfish.set_depth(15)
stockfish.set_skill_level(25)
api_key = 'REPLACE_WITH_API_KEY'
headers = {'Authorization': f'Bearer {api_key}'}
game_state_url = 'https://lichess.org/api/stream/event'
game_id = 'placeholder'
is_checkmate = False
bot_challenges = False
while True:
state_session = requests.Session()
request = state_session.get(game_state_url, headers=headers, stream=True)
for line in request.iter_lines():
if len(line) == 0:
print('Request response is empty.')
if len(line) != 0:
challenge_state_json = json.loads(line)
if challenge_state_json['type'] == 'challenge':
print('BOT_NAME has been challenged.')
challenge_id = challenge_state_json['challenge']['id']
challenger = challenge_state_json['challenge']['challenger']['id']
print('Challenge ID is: ' + challenge_id + '. Challenger is: ' + challenger)
if challenge_state_json['challenge']['variant']['key'] != 'standard':
requests.post('https://lichess.org/api/challenge/' + challenge_id + '/decline', params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
print('Challenge has been declined for improper variant.')
continue
else:
requests.post('https://lichess.org/api/challenge/' + challenge_id + '/accept', params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
current_move = 'placeholder'
best_move = 'placeholder'
position = ['placeholder', 'placeholder']
black_position = ['placeholder', 'placeholder']
white = True
second_session = requests.Session()
request = second_session.get(game_state_url, headers=headers, stream=True)
for line in request.iter_lines():
game_start_json = json.loads(line)
print(game_start_json)
game_id = game_start_json['game']['id']
print('Game ID is: ' + game_id)
break
game_stream_url = 'https://lichess.org/api/bot/game/stream/' + game_id
bot_move_url = 'https://lichess.org/api/bot/game/' + game_id + '/move/'
s = requests.Session()
r = s.get(game_stream_url, headers=headers, stream=True)
i = 0
move_count = 0
for line in r.iter_lines():
if line:
i = i + 1
move_count = move_count + .5
move_count = float(move_count)
if move_count.is_integer():
move_count = int(move_count)
move_count = str(move_count)
print('It is move ' + move_count + '.')
move_count = float(move_count)
start_json = json.loads(line)
if i == 1:
if start_json["white"]["id"] == 'REPLACE_WITH_BOT_USERNAME':
white = True
print('It is white to move. I am white.')
else:
white = False
print('It is white to move. I am black.')
if start_json['speed'] == 'bullet' and i == 1:
stockfish.set_depth(15)
stockfish.set_skill_level(20)
elif start_json['speed'] == 'blitz' and i == 1:
stockfish.set_depth(15)
stockfish.set_skill_level(25)
elif start_json['speed'] == 'rapid' and i == 1:
stockfish.set_depth(19)
stockfish.set_skill_level(30)
elif start_json['speed'] == 'classical' and i == 1:
stockfish.set_depth(20)
stockfish.set_skill_level(30)
elif start_json['speed'] == 'correspondence' and i == 1:
stockfish.set_depth(20)
stockfish.set_skill_level(30)
if white and i == 1:
position.clear()
stockfish.set_position()
best_move = stockfish.get_best_move()
requests.post(bot_move_url + best_move, params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
best_move = str(best_move)
position.append(best_move)
stockfish.set_position(position)
if not white and i == 1:
print('I am waiting to move.')
if not white and i == 2:
black_position.clear()
current_move = start_json["moves"]
current_move = str(current_move)
current_move = current_move.split()
current_move = current_move[-1]
black_position.append(current_move)
stockfish.set_position(black_position)
best_move = stockfish.get_best_move()
black_position.append(best_move)
requests.post(bot_move_url + best_move, params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
if start_json['type'] == 'gameState':
if not i % 2 == 0 and white and i > 1:
current_move = start_json["moves"]
current_move = str(current_move)
current_move = current_move.split()
current_move = current_move[-1]
position.append(current_move)
stockfish.set_position(position)
best_move = stockfish.get_best_move()
position.append(best_move)
requests.post(bot_move_url + best_move, params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
if not white and i % 2 == 0 and i > 2:
current_move = start_json["moves"]
current_move = str(current_move)
current_move = current_move.split()
current_move = current_move[-1]
black_position.append(current_move)
stockfish.set_position(black_position)
best_move = stockfish.get_best_move()
black_position.append(best_move)
requests.post(bot_move_url + best_move, params={
},
headers={
'Authorization': f'Bearer {api_key}'
})
else:
print('I am waiting to move.')
continue
| true | true |
f720a21b297857bd9d0f8b0c056695ecb84a54fe | 1,510 | py | Python | pyArango/index.py | jarvisav/pyArango | dc054e2258c9fccbc54443afc244b74ad0abb8b0 | [
"Apache-2.0"
] | null | null | null | pyArango/index.py | jarvisav/pyArango | dc054e2258c9fccbc54443afc244b74ad0abb8b0 | [
"Apache-2.0"
] | null | null | null | pyArango/index.py | jarvisav/pyArango | dc054e2258c9fccbc54443afc244b74ad0abb8b0 | [
"Apache-2.0"
] | null | null | null | import json
from .theExceptions import (CreationError, DeletionError, UpdateError)
class Index(object) :
"""An index on a collection's fields. Indexes are meant to de created by ensureXXX functions of Collections.
Indexes have a .infos dictionary that stores all the infos about the index"""
def __init__(self, collection, infos = None, creationData = None) :
self.collection = collection
self.connection = self.collection.database.connection
self.indexesURL = "%s/index" % self.collection.database.URL
self.infos = None
if infos :
self.infos = infos
elif creationData :
self._create(creationData)
if self.infos :
self.URL = "%s/%s" % (self.indexesURL, self.infos["id"])
def _create(self, postData) :
"""Creates an index of any type according to postData"""
if self.infos is None :
r = self.connection.session.post(self.indexesURL, params = {"collection" : self.collection.name}, data = json.dumps(postData, default=str))
data = r.json()
if (r.status_code >= 400) or data['error'] :
raise CreationError(data['errorMessage'], data)
self.infos = data
def delete(self) :
"""Delete the index"""
r = self.connection.session.delete(self.URL)
data = r.json()
if (r.status_code != 200 and r.status_code != 202) or data['error'] :
raise DeletionError(data['errorMessage'], data)
| 40.810811 | 151 | 0.625828 | import json
from .theExceptions import (CreationError, DeletionError, UpdateError)
class Index(object) :
def __init__(self, collection, infos = None, creationData = None) :
self.collection = collection
self.connection = self.collection.database.connection
self.indexesURL = "%s/index" % self.collection.database.URL
self.infos = None
if infos :
self.infos = infos
elif creationData :
self._create(creationData)
if self.infos :
self.URL = "%s/%s" % (self.indexesURL, self.infos["id"])
def _create(self, postData) :
if self.infos is None :
r = self.connection.session.post(self.indexesURL, params = {"collection" : self.collection.name}, data = json.dumps(postData, default=str))
data = r.json()
if (r.status_code >= 400) or data['error'] :
raise CreationError(data['errorMessage'], data)
self.infos = data
def delete(self) :
r = self.connection.session.delete(self.URL)
data = r.json()
if (r.status_code != 200 and r.status_code != 202) or data['error'] :
raise DeletionError(data['errorMessage'], data)
| true | true |
f720a26b353f6b3ddd1548566f5b0a972c16828a | 584 | py | Python | soleka.py | jshenaop/eko | bb8e96ee9e460ed10505c8046a444a24fdcfcd06 | [
"Apache-2.0"
] | null | null | null | soleka.py | jshenaop/eko | bb8e96ee9e460ed10505c8046a444a24fdcfcd06 | [
"Apache-2.0"
] | null | null | null | soleka.py | jshenaop/eko | bb8e96ee9e460ed10505c8046a444a24fdcfcd06 | [
"Apache-2.0"
] | null | null | null | # coding=utf8
from flask import Flask, render_template
from flask_restful.utils import cors
from flask_cors import CORS, cross_origin
import config
import models
from resources_v1.predictions import predictions_api_v1
from templates.templates import home
app = Flask(__name__)
CORS(app)
app.register_blueprint(predictions_api_v1, url_prefix='/api/v1')
@app.route('/')
def index():
return render_template("main.html")
if __name__ == '__main__':
models.initilize()
#app.run(host=config.HOST)
app.run(debug=config.DEBUG, host=config.HOST)
| 22.461538 | 65 | 0.738014 |
from flask import Flask, render_template
from flask_restful.utils import cors
from flask_cors import CORS, cross_origin
import config
import models
from resources_v1.predictions import predictions_api_v1
from templates.templates import home
app = Flask(__name__)
CORS(app)
app.register_blueprint(predictions_api_v1, url_prefix='/api/v1')
@app.route('/')
def index():
return render_template("main.html")
if __name__ == '__main__':
models.initilize()
app.run(debug=config.DEBUG, host=config.HOST)
| true | true |
f720a3476873e92b2ed94d5e14b117589ae55b83 | 23,418 | py | Python | index.py | 201723050210/17wanxiaoCheckin-Actions | 666a2c8473f876a607011021a8be1ee876ce6c34 | [
"MIT"
] | null | null | null | index.py | 201723050210/17wanxiaoCheckin-Actions | 666a2c8473f876a607011021a8be1ee876ce6c34 | [
"MIT"
] | null | null | null | index.py | 201723050210/17wanxiaoCheckin-Actions | 666a2c8473f876a607011021a8be1ee876ce6c34 | [
"MIT"
] | null | null | null | import time
import os
import datetime
import json
import logging
import requests
from utils.server_chan import server_push
from utils.qq_email import qq_email_push
from utils.qmsg import qmsg_push
from login import CampusLogin
def initLogging():
logging.getLogger().setLevel(logging.INFO)
logging.basicConfig(format="[%(levelname)s]; %(message)s")
def get_token(username, password, device_id):
"""
    Obtain the user token via simulated login: https://github.com/zhongbr/wanmei_campus
    :param device_id: device ID
    :param username: account (phone number)
    :param password: password
:return:
"""
for _ in range(3):
try:
campus_login = CampusLogin(phone_num=username, device_id=device_id)
except Exception as e:
logging.warning(e)
continue
login_dict = campus_login.pwd_login(password)
if login_dict["status"]:
logging.info(f"{username[:4]},{login_dict['msg']}")
return login_dict["token"]
        elif login_dict['errmsg'] == "该手机号未注册完美校园":  # server: "this phone number is not registered with 17wanxiao"
logging.warning(f"{username[:4]},{login_dict['errmsg']}")
return None
        elif login_dict['errmsg'].startswith("密码错误"):  # server: "wrong password"
logging.warning(f"{username[:4]},{login_dict['errmsg']}")
logging.warning("代码是死的,密码错误了就是错误了,赶紧去查看一下是不是输错了!")
return None
else:
logging.info(f"{username[:4]},{login_dict['errmsg']}")
            logging.warning('Trying to log in again......')
time.sleep(5)
return None
def get_school_name(token):
post_data = {"token": token, "method": "WX_BASE_INFO", "param": "%7B%7D"}
headers = {"Content-Type": "application/x-www-form-urlencoded"}
try:
res = requests.post(
"https://server.59wanmei.com/YKT_Interface/xyk",
data=post_data,
headers=headers,
)
return res.json()["data"]["customerName"]
except:
return "泪目,没获取到学校名字"
def get_user_info(token):
"""
    Fetch user info, including custom_id (roughly a check-in template id)
    :param token: user token
    :return: user info dict, or None on failure
"""
data = {"appClassify": "DK", "token": token}
for _ in range(3):
try:
res = requests.post(
"https://reportedh5.17wanxiao.com/api/clock/school/getUserInfo", data=data
)
user_info = res.json()["userInfo"]
            logging.info('Fetched user info successfully')
return user_info
except:
            logging.warning('Failed to fetch user info, retrying......')
time.sleep(1)
return None
def get_post_json(post_json):
"""
    Fetch the check-in data
    :param post_json: JSON payload used to fetch the check-in data
:return:
"""
for _ in range(3):
try:
res = requests.post(
url="https://reportedh5.17wanxiao.com/sass/api/epmpics",
json=post_json,
timeout=10,
).json()
except:
logging.warning("获取完美校园打卡post参数失败,正在重试...")
time.sleep(1)
continue
if res["code"] != "10000":
logging.warning(res)
data = json.loads(res["data"])
# print(data)
post_dict = {
"areaStr": data['areaStr'],
"deptStr": data['deptStr'],
"deptid": data['deptStr']['deptid'] if data['deptStr'] else None,
"customerid": data['customerid'],
"userid": data['userid'],
"username": data['username'],
"stuNo": data['stuNo'],
"phonenum": data["phonenum"],
"templateid": data["templateid"],
"updatainfo": [
{"propertyname": i["propertyname"], "value": i["value"]}
for i in data["cusTemplateRelations"]
],
"updatainfo_detail": [
{
"propertyname": i["propertyname"],
"checkValues": i["checkValues"],
"description": i["decription"],
"value": i["value"],
}
for i in data["cusTemplateRelations"]
],
"checkbox": [
{"description": i["decription"], "value": i["value"], "propertyname": i["propertyname"]}
for i in data["cusTemplateRelations"]
],
}
# print(json.dumps(post_dict, sort_keys=True, indent=4, ensure_ascii=False))
logging.info("获取完美校园打卡post参数成功")
return post_dict
return None
def healthy_check_in(token, username, post_dict):
"""
    Health check-in (type 1)
    :param username: phone number
    :param token: user token
    :param post_dict: check-in data
:return:
"""
check_json = {
"businessType": "epmpics",
"method": "submitUpInfo",
"jsonData": {
"deptStr": post_dict["deptStr"],
"areaStr": post_dict["areaStr"],
"reportdate": round(time.time() * 1000),
"customerid": post_dict["customerid"],
"deptid": post_dict["deptid"],
"source": "app",
"templateid": post_dict["templateid"],
"stuNo": post_dict["stuNo"],
"username": post_dict["username"],
"phonenum": username,
"userid": post_dict["userid"],
"updatainfo": post_dict["updatainfo"],
"gpsType": 1,
"token": token,
},
}
for _ in range(3):
try:
res = requests.post(
"https://reportedh5.17wanxiao.com/sass/api/epmpics", json=check_json
).json()
if res['code'] == '10000':
logging.info(res)
return {
"status": 1,
"res": res,
"post_dict": post_dict,
"check_json": check_json,
"type": "healthy",
}
elif "频繁" in res['data']:
logging.info(res)
return {
"status": 1,
"res": res,
"post_dict": post_dict,
"check_json": check_json,
"type": "healthy",
}
else:
logging.warning(res)
return {"status": 0, "errmsg": f"{post_dict['username']}: {res}"}
except:
errmsg = f"```打卡请求出错```"
logging.warning("健康打卡请求出错")
return {"status": 0, "errmsg": errmsg}
return {"status": 0, "errmsg": "健康打卡请求出错"}
def get_recall_data(token):
"""
    Fetch the check-in data for the type-2 health check-in
    :param token: user token
    :return: dict of check-in data, or None on failure
"""
for _ in range(3):
try:
res = requests.post(
url="https://reportedh5.17wanxiao.com/api/reported/recall",
data={"token": token},
timeout=10,
).json()
except:
logging.warning("获取完美校园打卡post参数失败,正在重试...")
time.sleep(1)
continue
if res["code"] == 0:
logging.info("获取完美校园打卡post参数成功")
return res["data"]
else:
logging.warning(res)
return None
def receive_check_in(token, custom_id, post_dict):
"""
    Health check-in (type 2)
    :param token: user token
    :param custom_id: health check-in id
    :param post_dict: health check-in data
:return:
"""
check_json = {
"userId": post_dict["userId"],
"name": post_dict["name"],
"stuNo": post_dict["stuNo"],
"whereabouts": post_dict["whereabouts"],
"familyWhereabouts": "",
"beenToWuhan": post_dict["beenToWuhan"],
"contactWithPatients": post_dict["contactWithPatients"],
"symptom": post_dict["symptom"],
"fever": post_dict["fever"],
"cough": post_dict["cough"],
"soreThroat": post_dict["soreThroat"],
"debilitation": post_dict["debilitation"],
"diarrhea": post_dict["diarrhea"],
"cold": post_dict["cold"],
"staySchool": post_dict["staySchool"],
"contacts": post_dict["contacts"],
"emergencyPhone": post_dict["emergencyPhone"],
"address": post_dict["address"],
"familyForAddress": "",
"collegeId": post_dict["collegeId"],
"majorId": post_dict["majorId"],
"classId": post_dict["classId"],
"classDescribe": post_dict["classDescribe"],
"temperature": post_dict["temperature"],
"confirmed": post_dict["confirmed"],
"isolated": post_dict["isolated"],
"passingWuhan": post_dict["passingWuhan"],
"passingHubei": post_dict["passingHubei"],
"patientSide": post_dict["patientSide"],
"patientContact": post_dict["patientContact"],
"mentalHealth": post_dict["mentalHealth"],
"wayToSchool": post_dict["wayToSchool"],
"backToSchool": post_dict["backToSchool"],
"haveBroadband": post_dict["haveBroadband"],
"emergencyContactName": post_dict["emergencyContactName"],
"helpInfo": "",
"passingCity": "",
"longitude": "", # 请在此处填写需要打卡位置的longitude
"latitude": "", # 请在此处填写需要打卡位置的latitude
"token": token,
}
headers = {
"referer": f"https://reportedh5.17wanxiao.com/nCovReport/index.html?token={token}&customerId={custom_id}",
"content-type": "application/x-www-form-urlencoded;charset=UTF-8",
}
try:
res = requests.post(
"https://reportedh5.17wanxiao.com/api/reported/receive",
headers=headers,
data=check_json,
).json()
        # pretty-print the JSON response for debugging
# print(res)
if res["code"] == 0:
logging.info(res)
return dict(
status=1,
res=res,
post_dict=post_dict,
check_json=check_json,
type="healthy",
)
else:
logging.warning(res)
return dict(
status=1,
res=res,
post_dict=post_dict,
check_json=check_json,
type="healthy",
)
except:
errmsg = f"```打卡请求出错```"
logging.warning("打卡请求出错,网络不稳定")
return dict(status=0, errmsg=errmsg)
def get_ap():
"""
    Get the current time period, used for on-campus check-in
    :return: list of booleans: [am, pm, ev]
"""
now_time = datetime.datetime.utcnow() + datetime.timedelta(hours=8)
am = 0 <= now_time.hour < 12
pm = 12 <= now_time.hour < 17
ev = 17 <= now_time.hour <= 23
return [am, pm, ev]
def get_id_list(token, custom_id):
"""
    Use the on-campus template id to get the id of each on-campus check-in time slot
    :param token: user token
    :param custom_id: on-campus check-in template id
    :return: list of on-campus check-in ids
"""
post_data = {
"customerAppTypeId": custom_id,
"longitude": "",
"latitude": "",
"token": token,
}
try:
res = requests.post(
"https://reportedh5.17wanxiao.com/api/clock/school/rules", data=post_data
)
# print(res.text)
return res.json()["customerAppTypeDto"]["ruleList"]
except:
return None
def get_id_list_v1(token):
"""
    Use the on-campus template id to get the id of each on-campus check-in time slot (initial version, kept for reference)
    :param token: user token
    :return: list of on-campus check-in ids
"""
post_data = {"appClassify": "DK", "token": token}
try:
res = requests.post(
"https://reportedh5.17wanxiao.com/api/clock/school/childApps",
data=post_data,
)
if res.json()["appList"]:
id_list = sorted(
res.json()["appList"][-1]["customerAppTypeRuleList"],
key=lambda x: x["id"],
)
res_dict = [
{"id": j["id"], "templateid": f"clockSign{i + 1}"}
for i, j in enumerate(id_list)
]
return res_dict
return None
except:
return None
def campus_check_in(username, token, post_dict, id):
"""
    On-campus check-in
    :param username: phone number
    :param token: user token
    :param post_dict: on-campus check-in data
    :param id: on-campus check-in id
:return:
"""
check_json = {
"businessType": "epmpics",
"method": "submitUpInfoSchool",
"jsonData": {
"deptStr": post_dict["deptStr"],
"areaStr": post_dict["areaStr"],
"reportdate": round(time.time() * 1000),
"customerid": post_dict["customerid"],
"deptid": post_dict["deptid"],
"source": "app",
"templateid": post_dict["templateid"],
"stuNo": post_dict["stuNo"],
"username": post_dict["username"],
"phonenum": username,
"userid": post_dict["userid"],
"updatainfo": post_dict["updatainfo"],
"customerAppTypeRuleId": id,
"clockState": 0,
"token": token,
},
"token": token,
}
# print(check_json)
try:
res = requests.post(
"https://reportedh5.17wanxiao.com/sass/api/epmpics", json=check_json
).json()
        # pretty-print the JSON response for debugging
if res["code"] != "10000":
logging.warning(res)
return dict(
status=1,
res=res,
post_dict=post_dict,
check_json=check_json,
type=post_dict["templateid"],
)
else:
logging.info(res)
return dict(
status=1,
res=res,
post_dict=post_dict,
check_json=check_json,
type=post_dict["templateid"],
)
except BaseException:
errmsg = f"```校内打卡请求出错```"
logging.warning("校内打卡请求出错")
return dict(status=0, errmsg=errmsg)
def check_in(username, password, device_id):
check_dict_list = []
    # Log in and get the token used for check-in
token = get_token(username, password, device_id)
if not token:
errmsg = f"{username[:4]},获取token失败,打卡失败"
logging.warning(errmsg)
check_dict_list.append({"status": 0, "errmsg": errmsg})
return check_dict_list
# print(token)
    # Determine whether it is currently morning, afternoon, or evening
    # ape_list = get_ap()
    # Get the check-in template id used by the school
user_info = get_user_info(token)
if not user_info:
errmsg = f"{username[:4]},获取user_info失败,打卡失败"
logging.warning(errmsg)
check_dict_list.append({"status": 0, "errmsg": errmsg})
return check_dict_list
    # Get parameters for the type-1 health check-in
json1 = {
"businessType": "epmpics",
"jsonData": {"templateid": "pneumonia", "token": token},
"method": "userComeApp",
}
post_dict = get_post_json(json1)
if post_dict:
        # Type-1 health check-in
        # print(post_dict)
        # Adjust temperature and other fields
        # for j in post_dict['updatainfo']:  # the check-in fields in the JSON payload (also pushed via WeChat)
        # if j['propertyname'] == 'temperature':  # find the field whose propertyname is temperature
        # j['value'] = '36.2'  # originally null; set 36.2 directly (match your school's check-in options)
# if j['propertyname'] == 'xinqing':
# j['value'] = '健康'
# if j['propertyname'] == 'outdoor':
# j['value'] = '否'
# if j['propertyname'] == 'symptom':
# j['value'] = '无症状'
# if j['propertyname'] == 'ownbodyzk':
# j['value'] = '身体健康,无异常'
        # Modify the address; look it up in your own 17wanxiao app
# post_dict['areaStr'] = '{"streetNumber":"89号","street":"建设东路","district":"","city":"新乡市","province":"河南省",' \
# '"town":"","pois":"河南师范大学(东区)","lng":113.91572178314209,' \
# '"lat":35.327695868943984,"address":"牧野区建设东路89号河南师范大学(东区)","text":"河南省-新乡市",' \
# '"code":""} '
healthy_check_dict = healthy_check_in(token, username, post_dict)
check_dict_list.append(healthy_check_dict)
else:
        # Fetch the parameters for the second type of health check-in
        post_dict = get_recall_data(token)
        # Second type of health check-in
healthy_check_dict = receive_check_in(token, user_info["customerId"], post_dict)
check_dict_list.append(healthy_check_dict)
    # # Fetch the on-campus check-in ids
    # id_list = get_id_list(token, user_info.get('customerAppTypeId'))
    # # print(id_list)
    # if not id_list:
    #     return check_dict_list
    #
    # # On-campus check-ins
# for index, i in enumerate(id_list):
# if ape_list[index]:
# # print(i)
# logging.info(f"-------------------------------{i['templateid']}-------------------------------")
# json2 = {"businessType": "epmpics",
# "jsonData": {"templateid": i['templateid'], "customerAppTypeRuleId": i['id'],
# "stuNo": post_dict['stuNo'],
# "token": token}, "method": "userComeAppSchool",
# "token": token}
# campus_dict = get_post_json(json2)
# campus_dict['areaStr'] = post_dict['areaStr']
# for j in campus_dict['updatainfo']:
# if j['propertyname'] == 'temperature':
# j['value'] = '36.4'
# if j['propertyname'] == 'symptom':
# j['value'] = '无症状'
# campus_check_dict = campus_check_in(username, token, campus_dict, i['id'])
# check_dict_list.append(campus_check_dict)
# logging.info("--------------------------------------------------------------")
return check_dict_list
def wanxiao_server_push(sckey, check_info_list):
    """Format the check-in results as Markdown and push them through ServerChan."""
utc8_time = datetime.datetime.utcnow() + datetime.timedelta(hours=8)
push_list = [f"""
------
#### 现在时间:
```
{utc8_time.strftime("%Y-%m-%d %H:%M:%S %p")}
```"""]
for check_info in check_info_list:
if check_info["status"]:
if check_info["post_dict"].get("checkbox"):
post_msg = "\n".join(
[
f"| {i['description']} | {j['value']} |"
for i in check_info["post_dict"].get("checkbox")
for j in check_info["post_dict"].get("updatainfo")
if i["propertyname"] == j["propertyname"]
]
)
else:
post_msg = "暂无详情"
name = check_info["post_dict"].get("username")
if not name:
name = check_info["post_dict"]["name"]
push_list.append(
f"""#### {name}{check_info['type']}打卡信息:
```
{json.dumps(check_info['check_json'], sort_keys=True, indent=4, ensure_ascii=False)}
```
------
| Text | Message |
| :----------------------------------- | :--- |
{post_msg}
------
```
{check_info['res']}
```"""
)
else:
push_list.append(
f"""------
#### {check_info['errmsg']}
------
"""
)
push_list.append(
f"""
>
> [17wanxiaoCheckin-Actions](https://github.com/ReaJason/17wanxiaoCheckin-Actions)
>
>微信消息测试!
"""
)
return server_push(sckey, "健康打卡", "\n".join(push_list))
def wanxiao_qq_mail_push(send_email, send_pwd, receive_email, check_info_list):
    """Format the check-in results as an HTML report and send it via QQ mail."""
bj_time = datetime.datetime.utcnow() + datetime.timedelta(hours=8)
bj_time.strftime("%Y-%m-%d %H:%M:%S %p")
mail_msg_list = [f"""
<h2><center> >>>> <a href="https://github.com/ReaJason/17wanxiaoCheckin-Actions">17wanxiaoCheckin-Actions</a>
<<<<</center></h2>
<h2><center>微信消息提醒!</center></h2>
<h3><center>打卡时间:{bj_time}</center></h3>
"""
]
for check in check_info_list:
if check["status"]:
name = check['post_dict'].get('username')
if not name:
name = check['post_dict']['name']
mail_msg_list.append(f"""<hr>
<details>
<summary style="font-family: 'Microsoft YaHei UI',serif; color: deepskyblue;">{name}:{check["type"]} 打卡结果:{check['res']}</summary>
<pre><code>
{json.dumps(check['check_json'], sort_keys=True, indent=4, ensure_ascii=False)}
</code></pre>
</details>
<details>
<summary style="font-family: 'Microsoft YaHei UI',serif; color: black;" >>>>填写数据抓包详情(便于代码的编写)<<<</summary>
<pre><code>
{json.dumps(check['post_dict']['updatainfo_detail'], sort_keys=True, indent=4, ensure_ascii=False)}
</code></pre>
</details>
<details>
<summary style="font-family: 'Microsoft YaHei UI',serif; color: lightskyblue;" >>>>打卡信息数据表格<<<</summary>
<table id="customers">
<tr>
<th>Text</th>
<th>Value</th>
</tr>
"""
)
for index, box in enumerate(check["post_dict"]["checkbox"]):
if index % 2:
mail_msg_list.append(
f"""<tr>
<td>{box['description']}</td>
<td>{box['value']}</td>
</tr>"""
)
else:
mail_msg_list.append(f"""<tr class="alt">
<td>{box['description']}</td>
<td>{box['value']}</td>
</tr>"""
)
mail_msg_list.append(
f"""
</table></details>"""
)
else:
mail_msg_list.append(
f"""<hr>
<b style="color: red">{check['errmsg']}</b>"""
)
css = """<style type="text/css">
#customers
{
font-family:"Trebuchet MS", Arial, Helvetica, sans-serif;
width:100%;
border-collapse:collapse;
}
#customers td, #customers th
{
font-size:1em;
border:1px solid #98bf21;
padding:3px 7px 2px 7px;
}
#customers th
{
font-size:1.1em;
text-align:left;
padding-top:5px;
padding-bottom:4px;
background-color:#A7C942;
color:#ffffff;
}
#customers tr.alt td
{
color:#000000;
background-color:#EAF2D3;
}
</style>"""
mail_msg_list.append(css)
return qq_email_push(send_email, send_pwd, receive_email,
title="完美校园健康打卡", text="".join(mail_msg_list))
def wanxiao_qmsg_push(key, qq_num, check_info_list, send_type):
    """Push a short summary of the check-in results through Qmsg."""
utc8_time = datetime.datetime.utcnow() + datetime.timedelta(hours=8)
push_list = [f'@face=74@ {utc8_time.strftime("%Y-%m-%d %H:%M:%S")} @face=74@ ']
for check_info in check_info_list:
if check_info["status"]:
name = check_info["post_dict"].get("username")
if not name:
name = check_info["post_dict"]["name"]
push_list.append(f"""\
@face=54@ {name}{check_info['type']} @face=54@
@face=211@
{check_info['res']}
@face=211@""")
else:
push_list.append(check_info['errmsg'])
return qmsg_push(key, qq_num, "\n".join(push_list), send_type)
def main_handler(*args, **kwargs):
    """Entry point: read accounts from environment variables, run check-ins, and push the results."""
initLogging()
raw_info = []
username_list = os.environ['USERNAME'].split(',')
password_list = os.environ['PASSWORD'].split(',')
device_id_list = os.environ['DEVICEID'].split(',')
sckey = os.environ.get('SCKEY')
key = os.environ.get('KEY')
qq_num = os.environ.get('QQ_NUM')
send_email = os.environ.get('SEND_EMAIL')
send_pwd = os.environ.get('SEND_PWD')
receive_email = os.environ.get('RECEIVE_EMAIL')
for username, password, device_id in zip(
[i.strip() for i in username_list if i != ''],
[i.strip() for i in password_list if i != ''],
[i.strip() for i in device_id_list if i != '']):
check_dict = check_in(username, password, device_id)
raw_info.extend(check_dict)
if sckey:
logging.info(wanxiao_server_push(sckey, raw_info))
if send_email and send_pwd and receive_email:
logging.info(wanxiao_qq_mail_push(send_email, send_pwd, receive_email, raw_info))
if key:
logging.info(wanxiao_qmsg_push(key, qq_num, raw_info, send_type="send"))
if __name__ == "__main__":
main_handler()
f720a3b2937b6b9bd9e76101573c229ffea21101 | 913 | py | Python | tabu/package_info.py | wbernoudy/dwave-tabu | 793e76405ba60d2da87bc15634adeda821d82564 | ["Apache-2.0"] | null | null | null | tabu/package_info.py | wbernoudy/dwave-tabu | 793e76405ba60d2da87bc15634adeda821d82564 | ["Apache-2.0"] | null | null | null | tabu/package_info.py | wbernoudy/dwave-tabu | 793e76405ba60d2da87bc15634adeda821d82564 | ["Apache-2.0"] | null | null | null |
# Copyright 2018 D-Wave Systems Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__packagename__ = 'dwave-tabu'
__title__ = 'D-Wave Tabu'
__version__ = '0.1.3'
__author__ = 'D-Wave Systems Inc.'
__authoremail__ = 'tools@dwavesys.com'
__description__ = 'Optimized Tabu solver for QUBOs'
__url__ = 'https://github.com/dwavesystems/dwave-tabu'
__license__ = 'Apache 2.0'
__copyright__ = '2018, D-Wave Systems Inc.'
f720a3be847c19fa4b9c7affc33b5d31a4209713 | 276 | py | Python | EyeTracker/display.py | PoweredByME/SSVEP_FYP | 6839be6a4aeddfa1b29c587b23d64c95a90810f8 | ["MIT"] | null | null | null | EyeTracker/display.py | PoweredByME/SSVEP_FYP | 6839be6a4aeddfa1b29c587b23d64c95a90810f8 | ["MIT"] | null | null | null | EyeTracker/display.py | PoweredByME/SSVEP_FYP | 6839be6a4aeddfa1b29c587b23d64c95a90810f8 | ["MIT"] | null | null | null |
import cv2;
class Display(object):
def __init__(self):
pass;
def showFrame(self, frame, windowName = "frame"):
cv2.namedWindow(windowName, cv2.WINDOW_NORMAL);
cv2.imshow(windowName, frame);
def end(self):
        cv2.destroyAllWindows();
f720a3c8713f607158450310ae4112fcb026294a | 6,344 | py | Python | simple_history/manager.py | felixschloesser/django-simple-history | 28abacb8a776fbaffcf0a42432a6a88be3561a86 | ["BSD-3-Clause"] | 2 | 2021-03-26T09:20:05.000Z | 2021-05-26T13:46:48.000Z | simple_history/manager.py | felixschloesser/django-simple-history | 28abacb8a776fbaffcf0a42432a6a88be3561a86 | ["BSD-3-Clause"] | 42 | 2021-03-30T11:12:23.000Z | 2022-03-28T09:20:13.000Z | simple_history/manager.py | hramezani/django-simple-history | 32645206749a1cc68539d9ad6499f1a938b2c9f4 | ["BSD-3-Clause"] | 1 | 2021-10-05T10:25:35.000Z | 2021-10-05T10:25:35.000Z |
from django.db import connection, models
from django.db.models import OuterRef, Subquery
from django.utils import timezone
from simple_history.utils import get_change_reason_from_object
class HistoryDescriptor:
def __init__(self, model):
self.model = model
def __get__(self, instance, owner):
if instance is None:
return HistoryManager(self.model)
return HistoryManager(self.model, instance)
class HistoryManager(models.Manager):
def __init__(self, model, instance=None):
super(HistoryManager, self).__init__()
self.model = model
self.instance = instance
def get_super_queryset(self):
return super(HistoryManager, self).get_queryset()
def get_queryset(self):
qs = self.get_super_queryset()
if self.instance is None:
return qs
if isinstance(self.instance._meta.pk, models.ForeignKey):
key_name = self.instance._meta.pk.name + "_id"
else:
key_name = self.instance._meta.pk.name
return self.get_super_queryset().filter(**{key_name: self.instance.pk})
def most_recent(self):
"""
Returns the most recent copy of the instance available in the history.
"""
if not self.instance:
raise TypeError(
"Can't use most_recent() without a {} instance.".format(
self.model._meta.object_name
)
)
tmp = []
excluded_fields = getattr(self.model, "_history_excluded_fields", [])
for field in self.instance._meta.fields:
if field.name in excluded_fields:
continue
if isinstance(field, models.ForeignKey):
tmp.append(field.name + "_id")
else:
tmp.append(field.name)
fields = tuple(tmp)
try:
values = self.get_queryset().values(*fields)[0]
except IndexError:
raise self.instance.DoesNotExist(
"%s has no historical record." % self.instance._meta.object_name
)
return self.instance.__class__(**values)
def as_of(self, date):
"""Get a snapshot as of a specific date.
Returns an instance, or an iterable of the instances, of the
original model with all the attributes set according to what
was present on the object on the date provided.
"""
if not self.instance:
return self._as_of_set(date)
queryset = self.get_queryset().filter(history_date__lte=date)
try:
history_obj = queryset[0]
except IndexError:
raise self.instance.DoesNotExist(
"%s had not yet been created." % self.instance._meta.object_name
)
if history_obj.history_type == "-":
raise self.instance.DoesNotExist(
"%s had already been deleted." % self.instance._meta.object_name
)
return history_obj.instance
def _as_of_set(self, date):
model = type(self.model().instance) # a bit of a hack to get the model
pk_attr = model._meta.pk.name
queryset = self.get_queryset().filter(history_date__lte=date)
# If using MySQL, need to get a list of IDs in memory and then use them for the
# second query.
# Does mean two loops through the DB to get the full set, but still a speed
# improvement.
backend = connection.vendor
if backend == "mysql":
history_ids = {}
for item in queryset.order_by("-history_date", "-pk"):
if getattr(item, pk_attr) not in history_ids:
history_ids[getattr(item, pk_attr)] = item.pk
latest_historics = queryset.filter(history_id__in=history_ids.values())
elif backend == "postgresql":
latest_pk_attr_historic_ids = (
queryset.order_by(pk_attr, "-history_date", "-pk")
.distinct(pk_attr)
.values_list("pk", flat=True)
)
latest_historics = queryset.filter(
history_id__in=latest_pk_attr_historic_ids
)
else:
latest_pk_attr_historic_ids = (
queryset.filter(**{pk_attr: OuterRef(pk_attr)})
.order_by("-history_date", "-pk")
.values("pk")[:1]
)
latest_historics = queryset.filter(
history_id__in=Subquery(latest_pk_attr_historic_ids)
)
adjusted = latest_historics.exclude(history_type="-").order_by(pk_attr)
for historic_item in adjusted:
yield historic_item.instance
def bulk_history_create(
self,
objs,
batch_size=None,
update=False,
default_user=None,
default_change_reason="",
default_date=None,
):
"""
Bulk create the history for the objects specified by objs.
If called by bulk_update_with_history, use the update boolean and
save the history_type accordingly.
"""
history_type = "+"
if update:
history_type = "~"
historical_instances = []
for instance in objs:
history_user = getattr(
instance,
"_history_user",
default_user or self.model.get_default_history_user(instance),
)
row = self.model(
history_date=getattr(
instance, "_history_date", default_date or timezone.now()
),
history_user=history_user,
history_change_reason=get_change_reason_from_object(instance)
or default_change_reason,
history_type=history_type,
**{
field.attname: getattr(instance, field.attname)
for field in instance._meta.fields
if field.name not in self.model._history_excluded_fields
},
)
if hasattr(self.model, "history_relation"):
row.history_relation_id = instance.pk
historical_instances.append(row)
return self.model.objects.bulk_create(
historical_instances, batch_size=batch_size
)
| 36.67052 | 87 | 0.585908 | from django.db import connection, models
from django.db.models import OuterRef, Subquery
from django.utils import timezone
from simple_history.utils import get_change_reason_from_object
class HistoryDescriptor:
def __init__(self, model):
self.model = model
def __get__(self, instance, owner):
if instance is None:
return HistoryManager(self.model)
return HistoryManager(self.model, instance)
class HistoryManager(models.Manager):
def __init__(self, model, instance=None):
super(HistoryManager, self).__init__()
self.model = model
self.instance = instance
def get_super_queryset(self):
return super(HistoryManager, self).get_queryset()
def get_queryset(self):
qs = self.get_super_queryset()
if self.instance is None:
return qs
if isinstance(self.instance._meta.pk, models.ForeignKey):
key_name = self.instance._meta.pk.name + "_id"
else:
key_name = self.instance._meta.pk.name
return self.get_super_queryset().filter(**{key_name: self.instance.pk})
def most_recent(self):
if not self.instance:
raise TypeError(
"Can't use most_recent() without a {} instance.".format(
self.model._meta.object_name
)
)
tmp = []
excluded_fields = getattr(self.model, "_history_excluded_fields", [])
for field in self.instance._meta.fields:
if field.name in excluded_fields:
continue
if isinstance(field, models.ForeignKey):
tmp.append(field.name + "_id")
else:
tmp.append(field.name)
fields = tuple(tmp)
try:
values = self.get_queryset().values(*fields)[0]
except IndexError:
raise self.instance.DoesNotExist(
"%s has no historical record." % self.instance._meta.object_name
)
return self.instance.__class__(**values)
def as_of(self, date):
if not self.instance:
return self._as_of_set(date)
queryset = self.get_queryset().filter(history_date__lte=date)
try:
history_obj = queryset[0]
except IndexError:
raise self.instance.DoesNotExist(
"%s had not yet been created." % self.instance._meta.object_name
)
if history_obj.history_type == "-":
raise self.instance.DoesNotExist(
"%s had already been deleted." % self.instance._meta.object_name
)
return history_obj.instance
def _as_of_set(self, date):
model = type(self.model().instance) # a bit of a hack to get the model
pk_attr = model._meta.pk.name
queryset = self.get_queryset().filter(history_date__lte=date)
# If using MySQL, need to get a list of IDs in memory and then use them for the
# second query.
# Does mean two loops through the DB to get the full set, but still a speed
# improvement.
backend = connection.vendor
if backend == "mysql":
history_ids = {}
for item in queryset.order_by("-history_date", "-pk"):
if getattr(item, pk_attr) not in history_ids:
history_ids[getattr(item, pk_attr)] = item.pk
latest_historics = queryset.filter(history_id__in=history_ids.values())
elif backend == "postgresql":
latest_pk_attr_historic_ids = (
queryset.order_by(pk_attr, "-history_date", "-pk")
.distinct(pk_attr)
.values_list("pk", flat=True)
)
latest_historics = queryset.filter(
history_id__in=latest_pk_attr_historic_ids
)
else:
latest_pk_attr_historic_ids = (
queryset.filter(**{pk_attr: OuterRef(pk_attr)})
.order_by("-history_date", "-pk")
.values("pk")[:1]
)
latest_historics = queryset.filter(
history_id__in=Subquery(latest_pk_attr_historic_ids)
)
adjusted = latest_historics.exclude(history_type="-").order_by(pk_attr)
for historic_item in adjusted:
yield historic_item.instance
def bulk_history_create(
self,
objs,
batch_size=None,
update=False,
default_user=None,
default_change_reason="",
default_date=None,
):
history_type = "+"
if update:
history_type = "~"
historical_instances = []
for instance in objs:
history_user = getattr(
instance,
"_history_user",
default_user or self.model.get_default_history_user(instance),
)
row = self.model(
history_date=getattr(
instance, "_history_date", default_date or timezone.now()
),
history_user=history_user,
history_change_reason=get_change_reason_from_object(instance)
or default_change_reason,
history_type=history_type,
**{
field.attname: getattr(instance, field.attname)
for field in instance._meta.fields
if field.name not in self.model._history_excluded_fields
},
)
if hasattr(self.model, "history_relation"):
row.history_relation_id = instance.pk
historical_instances.append(row)
return self.model.objects.bulk_create(
historical_instances, batch_size=batch_size
)
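Illustrative sketch (not part of the dataset file above): the MySQL branch of `_as_of_set` walks the history rows newest-first and keeps the first row it sees for each primary key. The same "latest history row per pk" reduction in plain Python, with an invented helper name:

```python
def latest_history_ids(rows):
    """rows: (pk, history_id) pairs already sorted newest-first,
    i.e. by (-history_date, -pk). Returns {pk: history_id of the
    newest row for that pk}, mirroring the dict trick above."""
    seen = {}
    for pk, history_id in rows:
        if pk not in seen:  # first hit per pk is the newest row
            seen[pk] = history_id
    return seen

# pk 1 has history ids 12 then 10 (newest first); pk 2 has only 11.
print(latest_history_ids([(1, 12), (2, 11), (1, 10)]))  # {1: 12, 2: 11}
```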
f720a3de70f0386fe46cbec6ae3539699c8d83d8 | 5,653 | py | Python | sdk/python/pulumi_azure_native/botservice/v20171201/get_bot_connection.py | sebtelko/pulumi-azure-native | 711ec021b5c73da05611c56c8a35adb0ce3244e4 | ["Apache-2.0"] | null | null | null | sdk/python/pulumi_azure_native/botservice/v20171201/get_bot_connection.py | sebtelko/pulumi-azure-native | 711ec021b5c73da05611c56c8a35adb0ce3244e4 | ["Apache-2.0"] | null | null | null | sdk/python/pulumi_azure_native/botservice/v20171201/get_bot_connection.py | sebtelko/pulumi-azure-native | 711ec021b5c73da05611c56c8a35adb0ce3244e4 | ["Apache-2.0"] | null | null | null
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from . import outputs
__all__ = [
'GetBotConnectionResult',
'AwaitableGetBotConnectionResult',
'get_bot_connection',
]
@pulumi.output_type
class GetBotConnectionResult:
"""
Bot channel resource definition
"""
def __init__(__self__, etag=None, id=None, kind=None, location=None, name=None, properties=None, sku=None, tags=None, type=None):
if etag and not isinstance(etag, str):
raise TypeError("Expected argument 'etag' to be a str")
pulumi.set(__self__, "etag", etag)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if kind and not isinstance(kind, str):
raise TypeError("Expected argument 'kind' to be a str")
pulumi.set(__self__, "kind", kind)
if location and not isinstance(location, str):
raise TypeError("Expected argument 'location' to be a str")
pulumi.set(__self__, "location", location)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if properties and not isinstance(properties, dict):
raise TypeError("Expected argument 'properties' to be a dict")
pulumi.set(__self__, "properties", properties)
if sku and not isinstance(sku, dict):
raise TypeError("Expected argument 'sku' to be a dict")
pulumi.set(__self__, "sku", sku)
if tags and not isinstance(tags, dict):
raise TypeError("Expected argument 'tags' to be a dict")
pulumi.set(__self__, "tags", tags)
if type and not isinstance(type, str):
raise TypeError("Expected argument 'type' to be a str")
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def etag(self) -> Optional[str]:
"""
Entity Tag
"""
return pulumi.get(self, "etag")
@property
@pulumi.getter
def id(self) -> str:
"""
Specifies the resource ID.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def kind(self) -> Optional[str]:
"""
Required. Gets or sets the Kind of the resource.
"""
return pulumi.get(self, "kind")
@property
@pulumi.getter
def location(self) -> Optional[str]:
"""
Specifies the location of the resource.
"""
return pulumi.get(self, "location")
@property
@pulumi.getter
def name(self) -> str:
"""
Specifies the name of the resource.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def properties(self) -> 'outputs.ConnectionSettingPropertiesResponse':
"""
The set of properties specific to bot channel resource
"""
return pulumi.get(self, "properties")
@property
@pulumi.getter
def sku(self) -> Optional['outputs.SkuResponse']:
"""
Gets or sets the SKU of the resource.
"""
return pulumi.get(self, "sku")
@property
@pulumi.getter
def tags(self) -> Optional[Mapping[str, str]]:
"""
Contains resource tags defined as key/value pairs.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter
def type(self) -> str:
"""
Specifies the type of the resource.
"""
return pulumi.get(self, "type")
class AwaitableGetBotConnectionResult(GetBotConnectionResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetBotConnectionResult(
etag=self.etag,
id=self.id,
kind=self.kind,
location=self.location,
name=self.name,
properties=self.properties,
sku=self.sku,
tags=self.tags,
type=self.type)
def get_bot_connection(connection_name: Optional[str] = None,
resource_group_name: Optional[str] = None,
resource_name: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetBotConnectionResult:
"""
Bot channel resource definition
:param str connection_name: The name of the Bot Service Connection Setting resource
:param str resource_group_name: The name of the Bot resource group in the user subscription.
:param str resource_name: The name of the Bot resource.
"""
__args__ = dict()
__args__['connectionName'] = connection_name
__args__['resourceGroupName'] = resource_group_name
__args__['resourceName'] = resource_name
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('azure-native:botservice/v20171201:getBotConnection', __args__, opts=opts, typ=GetBotConnectionResult).value
return AwaitableGetBotConnectionResult(
etag=__ret__.etag,
id=__ret__.id,
kind=__ret__.kind,
location=__ret__.location,
name=__ret__.name,
properties=__ret__.properties,
sku=__ret__.sku,
tags=__ret__.tags,
type=__ret__.type)
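`AwaitableGetBotConnectionResult.__await__` above uses the `if False: yield` idiom so that an already-resolved result can still be consumed with `await`. A minimal stand-alone illustration of that idiom (the class and names here are invented for the sketch, not part of the Pulumi SDK):

```python
import asyncio

class AwaitableValue:
    """A resolved value that still supports `await`, mirroring the
    `if False: yield self` trick in the generated code above."""
    def __init__(self, value):
        self.value = value

    def __await__(self):
        if False:
            yield self  # never runs; it only makes __await__ a generator
        return self.value  # the generator's return value is the await result

async def main():
    return await AwaitableValue(42)

print(asyncio.run(main()))  # 42
```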
f720a41fe1294d6a6a4f0a5fcf613dac72c9f8da | 9,425 | py | Python | pulsar-functions/instance/src/main/python/function_stats.py | bruth/pulsar | fe2c8ee4d37e2a45dfb528592915746827416e18 | ["Apache-2.0"] | null | null | null | pulsar-functions/instance/src/main/python/function_stats.py | bruth/pulsar | fe2c8ee4d37e2a45dfb528592915746827416e18 | ["Apache-2.0"] | null | null | null | pulsar-functions/instance/src/main/python/function_stats.py | bruth/pulsar | fe2c8ee4d37e2a45dfb528592915746827416e18 | ["Apache-2.0"] | 1 | 2019-03-15T09:34:50.000Z | 2019-03-15T09:34:50.000Z
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
import traceback
import time
import util
from prometheus_client import Counter, Summary, Gauge
# We keep track of the following metrics
class Stats(object):
metrics_label_names = ['tenant', 'namespace', 'function', 'instance_id', 'cluster']
PULSAR_FUNCTION_METRICS_PREFIX = "pulsar_function_"
    USER_METRIC_PREFIX = "user_metric_"
TOTAL_PROCESSED = 'processed_total'
TOTAL_SUCCESSFULLY_PROCESSED = 'processed_successfully_total'
TOTAL_SYSTEM_EXCEPTIONS = 'system_exceptions_total'
TOTAL_USER_EXCEPTIONS = 'user_exceptions_total'
PROCESS_LATENCY_MS = 'process_latency_ms'
LAST_INVOCATION = 'last_invocation'
TOTAL_RECEIVED = 'received_total'
TOTAL_PROCESSED_1min = 'processed_total_1min'
TOTAL_SUCCESSFULLY_PROCESSED_1min = 'processed_successfully_total_1min'
TOTAL_SYSTEM_EXCEPTIONS_1min = 'system_exceptions_total_1min'
TOTAL_USER_EXCEPTIONS_1min = 'user_exceptions_total_1min'
PROCESS_LATENCY_MS_1min = 'process_latency_ms_1min'
TOTAL_RECEIVED_1min = 'received_total_1min'
    # Declare the Prometheus metric objects
stat_total_processed = Counter(PULSAR_FUNCTION_METRICS_PREFIX + TOTAL_PROCESSED, 'Total number of messages processed.', metrics_label_names)
stat_total_processed_successfully = Counter(PULSAR_FUNCTION_METRICS_PREFIX + TOTAL_SUCCESSFULLY_PROCESSED,
'Total number of messages processed successfully.', metrics_label_names)
stat_total_sys_exceptions = Counter(PULSAR_FUNCTION_METRICS_PREFIX+ TOTAL_SYSTEM_EXCEPTIONS, 'Total number of system exceptions.',
metrics_label_names)
stat_total_user_exceptions = Counter(PULSAR_FUNCTION_METRICS_PREFIX + TOTAL_USER_EXCEPTIONS, 'Total number of user exceptions.',
metrics_label_names)
stat_process_latency_ms = Summary(PULSAR_FUNCTION_METRICS_PREFIX + PROCESS_LATENCY_MS, 'Process latency in milliseconds.', metrics_label_names)
stat_last_invocation = Gauge(PULSAR_FUNCTION_METRICS_PREFIX + LAST_INVOCATION, 'The timestamp of the last invocation of the function.', metrics_label_names)
stat_total_received = Counter(PULSAR_FUNCTION_METRICS_PREFIX + TOTAL_RECEIVED, 'Total number of messages received from source.', metrics_label_names)
# 1min windowed metrics
stat_total_processed_1min = Counter(PULSAR_FUNCTION_METRICS_PREFIX + TOTAL_PROCESSED_1min,
'Total number of messages processed in the last 1 minute.', metrics_label_names)
stat_total_processed_successfully_1min = Counter(PULSAR_FUNCTION_METRICS_PREFIX + TOTAL_SUCCESSFULLY_PROCESSED_1min,
'Total number of messages processed successfully in the last 1 minute.', metrics_label_names)
stat_total_sys_exceptions_1min = Counter(PULSAR_FUNCTION_METRICS_PREFIX + TOTAL_SYSTEM_EXCEPTIONS_1min,
'Total number of system exceptions in the last 1 minute.',
metrics_label_names)
stat_total_user_exceptions_1min = Counter(PULSAR_FUNCTION_METRICS_PREFIX + TOTAL_USER_EXCEPTIONS_1min,
'Total number of user exceptions in the last 1 minute.',
metrics_label_names)
stat_process_latency_ms_1min = Summary(PULSAR_FUNCTION_METRICS_PREFIX + PROCESS_LATENCY_MS_1min,
'Process latency in milliseconds in the last 1 minute.', metrics_label_names)
stat_total_received_1min = Counter(PULSAR_FUNCTION_METRICS_PREFIX + TOTAL_RECEIVED_1min,
'Total number of messages received from source in the last 1 minute.', metrics_label_names)
latest_user_exception = []
latest_sys_exception = []
def __init__(self, metrics_labels):
        self.metrics_labels = metrics_labels
self.process_start_time = None
        # reset the 1-minute windowed metrics every 60 seconds
util.FixedTimer(60, self.reset).start()
    def get_total_received(self):
        return self.stat_total_received.labels(*self.metrics_labels)._value.get()
    def get_total_processed(self):
        return self.stat_total_processed.labels(*self.metrics_labels)._value.get()
    def get_total_processed_successfully(self):
        return self.stat_total_processed_successfully.labels(*self.metrics_labels)._value.get()
    def get_total_sys_exceptions(self):
        return self.stat_total_sys_exceptions.labels(*self.metrics_labels)._value.get()
    def get_total_user_exceptions(self):
        return self.stat_total_user_exceptions.labels(*self.metrics_labels)._value.get()
    def get_avg_process_latency(self):
        process_latency_ms_count = self.stat_process_latency_ms.labels(*self.metrics_labels)._count.get()
        process_latency_ms_sum = self.stat_process_latency_ms.labels(*self.metrics_labels)._sum.get()
        return 0.0 \
            if process_latency_ms_count <= 0.0 \
            else process_latency_ms_sum / process_latency_ms_count
    def get_total_received_1min(self):
        return self.stat_total_received_1min.labels(*self.metrics_labels)._value.get()
    def get_total_processed_1min(self):
        return self.stat_total_processed_1min.labels(*self.metrics_labels)._value.get()
    def get_total_processed_successfully_1min(self):
        return self.stat_total_processed_successfully_1min.labels(*self.metrics_labels)._value.get()
    def get_total_sys_exceptions_1min(self):
        return self.stat_total_sys_exceptions_1min.labels(*self.metrics_labels)._value.get()
    def get_total_user_exceptions_1min(self):
        return self.stat_total_user_exceptions_1min.labels(*self.metrics_labels)._value.get()
    def get_avg_process_latency_1min(self):
        process_latency_ms_count = self.stat_process_latency_ms_1min.labels(*self.metrics_labels)._count.get()
        process_latency_ms_sum = self.stat_process_latency_ms_1min.labels(*self.metrics_labels)._sum.get()
        return 0.0 \
            if process_latency_ms_count <= 0.0 \
            else process_latency_ms_sum / process_latency_ms_count
def get_last_invocation(self):
return self.stat_last_invocation.labels(*self.metrics_labels)._value.get()
def incr_total_processed(self):
self.stat_total_processed.labels(*self.metrics_labels).inc()
self.stat_total_processed_1min.labels(*self.metrics_labels).inc()
def incr_total_processed_successfully(self):
self.stat_total_processed_successfully.labels(*self.metrics_labels).inc()
self.stat_total_processed_successfully_1min.labels(*self.metrics_labels).inc()
def incr_total_sys_exceptions(self):
self.stat_total_sys_exceptions.labels(*self.metrics_labels).inc()
self.stat_total_sys_exceptions_1min.labels(*self.metrics_labels).inc()
self.add_sys_exception()
def incr_total_user_exceptions(self):
self.stat_total_user_exceptions.labels(*self.metrics_labels).inc()
self.stat_total_user_exceptions_1min.labels(*self.metrics_labels).inc()
self.add_user_exception()
def incr_total_received(self):
self.stat_total_received.labels(*self.metrics_labels).inc()
self.stat_total_received_1min.labels(*self.metrics_labels).inc()
def process_time_start(self):
        self.process_start_time = time.time()
def process_time_end(self):
if self.process_start_time:
duration = (time.time() - self.process_start_time) * 1000.0
self.stat_process_latency_ms.labels(*self.metrics_labels).observe(duration)
self.stat_process_latency_ms_1min.labels(*self.metrics_labels).observe(duration)
def set_last_invocation(self, time):
self.stat_last_invocation.labels(*self.metrics_labels).set(time * 1000.0)
    def add_user_exception(self):
        # record user exceptions in the user list (the original appended to
        # the system-exception list by mistake)
        self.latest_user_exception.append((traceback.format_exc(), int(time.time() * 1000)))
        if len(self.latest_user_exception) > 10:
            self.latest_user_exception.pop(0)
def add_sys_exception(self):
self.latest_sys_exception.append((traceback.format_exc(), int(time.time() * 1000)))
if len(self.latest_sys_exception) > 10:
self.latest_sys_exception.pop(0)
def reset(self):
self.latest_user_exception = []
self.latest_sys_exception = []
self.stat_total_processed_1min.labels(*self.metrics_labels)._value.set(0.0)
self.stat_total_processed_successfully_1min.labels(*self.metrics_labels)._value.set(0.0)
self.stat_total_user_exceptions_1min.labels(*self.metrics_labels)._value.set(0.0)
self.stat_total_sys_exceptions_1min.labels(*self.metrics_labels)._value.set(0.0)
self.stat_process_latency_ms_1min.labels(*self.metrics_labels)._sum.set(0.0)
self.stat_process_latency_ms_1min.labels(*self.metrics_labels)._count.set(0.0)
        self.stat_total_received_1min.labels(*self.metrics_labels)._value.set(0.0)
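The `*_1min` counters above are ordinary counters that `Stats.reset()` zeroes once a minute (it is scheduled via `util.FixedTimer(60, ...)`). A minimal stand-in showing the lifetime/windowed pairing — the class name is invented for this sketch, and the real code uses `prometheus_client` metrics rather than plain integers:

```python
class WindowedCounter:
    """Lifetime total plus a 1-minute window: every increment bumps
    both values; a periodic timer clears only the window."""
    def __init__(self):
        self.total = 0
        self.total_1min = 0

    def incr(self, n=1):
        self.total += n
        self.total_1min += n

    def reset_window(self):
        # corresponds to Stats.reset(), fired every 60 s by the timer
        self.total_1min = 0

c = WindowedCounter()
for _ in range(3):
    c.incr()
c.reset_window()
c.incr()
print(c.total, c.total_1min)  # 4 1
```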
f720a49b1ccb9afbd2e211fb293d03f55301e147 | 2,226 | py | Python | Numpy/code.py | JayeshSukhija/ga-learner-dsmp-repo | 4c05d980462dde423b6be41cca1218d6d98e8e48 | ["MIT"] | null | null | null | Numpy/code.py | JayeshSukhija/ga-learner-dsmp-repo | 4c05d980462dde423b6be41cca1218d6d98e8e48 | ["MIT"] | null | null | null | Numpy/code.py | JayeshSukhija/ga-learner-dsmp-repo | 4c05d980462dde423b6be41cca1218d6d98e8e48 | ["MIT"] | null | null | null
# --------------
# Importing header files
import numpy as np
#New record
new_record=[[50, 9, 4, 1, 0, 0, 40, 0]]
#Code starts here
#Loading data file and saving it into a new numpy array
#NOTE: `path` is not defined in this script; it is injected by the hosting platform
data = np.genfromtxt(path, delimiter=",", skip_header=1)
print(data.shape)
#Concatenating the new record to the existing numpy array
census=np.concatenate((data, new_record),axis = 0)
print(census.shape)
#Code ends here
# --------------
#Code starts here
import numpy as np
age=census[:,0]
print (age)
print ('='*50)
max_age=np.max(age)
print (max_age)
print ('='*50)
min_age=np.min(age)
print (min_age)
print ('='*50)
age_mean=np.mean(age)
print (age_mean)
print ('='*50)
age_std=np.std(age)
print (age_std)
print('='*50)
# --------------
#Code starts here
#Creating new subsets based on 'Race' (column 2)
race_0=census[census[:,2]==0]
race_1=census[census[:,2]==1]
race_2=census[census[:,2]==2]
race_3=census[census[:,2]==3]
race_4=census[census[:,2]==4]
#Finding the length of the above created subsets
len_0=len(race_0)
len_1=len(race_1)
len_2=len(race_2)
len_3=len(race_3)
len_4=len(race_4)
#Printing the length of the above created subsets
print('Race_0: ', len_0)
print('Race_1: ', len_1)
print('Race_2: ', len_2)
print('Race_3: ', len_3)
print('Race_4: ', len_4)
#Storing the different race lengths with appropriate indexes
race_list=[len_0, len_1,len_2, len_3, len_4]
#Storing the race with minimum length into a variable
minority_race=race_list.index(min(race_list))
print ('minority_race:',minority_race)
#Code ends here
# --------------
#Code starts here
import numpy as np
senior_citizens=census[census[:,0]>60]
working_hours_sum=senior_citizens.sum(axis=0)[6]
senior_citizens_len=len(senior_citizens)
avg_working_hours=(working_hours_sum/senior_citizens_len)
print (avg_working_hours)
# --------------
#Code starts here
import numpy as np
high=census[census[:,1]>10]
low=census[census[:,1]<=10]
avg_pay_high=high.mean(axis=0)[7]
avg_pay_low=low.mean(axis=0)[7]
if (avg_pay_high>avg_pay_low):
print ("Better Education leads to better pay")
else:
    print ("Better Education does not lead to better pay")
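The race subsets and the senior-citizen filter above are boolean-mask row selections followed by a column aggregate. The same two steps on a tiny made-up array (the toy data is invented for this sketch):

```python
import numpy as np

# toy census: columns are [age, education_num, working_hours]
toy = np.array([
    [25, 12, 40],
    [70,  9, 20],
    [65, 14, 30],
])

seniors = toy[toy[:, 0] > 60]                      # boolean-mask row selection
avg_hours = seniors.sum(axis=0)[2] / len(seniors)  # column aggregate
print(len(seniors), avg_hours)  # 2 25.0
```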
f720a7182cc9382555831934ef834b1a0aab840c | 473 | py | Python | enaml/qt/__init__.py | dandycheung/enaml | 1a7d9c95717a359bb2a8435c597eda36c9235fab | ["BSD-3-Clause-Clear"] | null | null | null | enaml/qt/__init__.py | dandycheung/enaml | 1a7d9c95717a359bb2a8435c597eda36c9235fab | ["BSD-3-Clause-Clear"] | null | null | null | enaml/qt/__init__.py | dandycheung/enaml | 1a7d9c95717a359bb2a8435c597eda36c9235fab | ["BSD-3-Clause-Clear"] | null | null | null
# ------------------------------------------------------------------------------
# Copyright (c) 2013-2022, Nucleic Development Team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file LICENSE, distributed with this software.
# ------------------------------------------------------------------------------
from qtpy import (
API as QT_API,
PYQT5_API,
PYSIDE2_API,
PYQT6_API,
PYSIDE6_API,
QT_VERSION,
)
| 29.5625 | 80 | 0.46723 |
from qtpy import (
API as QT_API,
PYQT5_API,
PYSIDE2_API,
PYQT6_API,
PYSIDE6_API,
QT_VERSION,
)
| true | true |
f720a75e584185882c002770a51c3e80de659a39 | 1,945 | py | Python | src/slamcore_ros2_examples/python/slamcore_ros2_examples/xacro_file_contents.py | slamcore/slamcore-ros2-examples | f101a277d7bbf07e081b89ca8efb77110abc2110 | [
"BSD-3-Clause"
] | 1 | 2022-01-31T16:00:39.000Z | 2022-01-31T16:00:39.000Z | src/slamcore_ros2_examples/python/slamcore_ros2_examples/xacro_file_contents.py | slamcore/slamcore-ros2-examples | f101a277d7bbf07e081b89ca8efb77110abc2110 | [
"BSD-3-Clause"
] | null | null | null | src/slamcore_ros2_examples/python/slamcore_ros2_examples/xacro_file_contents.py | slamcore/slamcore-ros2-examples | f101a277d7bbf07e081b89ca8efb77110abc2110 | [
"BSD-3-Clause"
] | null | null | null | """Module for the XacroFile substitution class."""
from pathlib import Path
from typing import Text, cast
import xacro
from launch.launch_context import LaunchContext
from launch.some_substitutions_type import SomeSubstitutionsType
from launch.substitution import Substitution
from launch.substitutions import SubstitutionFailure
from launch.utilities import normalize_to_list_of_substitutions
class XacroFileContents(Substitution):
"""
    Reads the xacro file provided and returns its contents during evaluation.
"""
name = "XacroFileContents"
def __init__(self, substitution: SomeSubstitutionsType) -> None:
"""Create a class instance."""
self.__substitution = normalize_to_list_of_substitutions((substitution,))[0] # type: ignore
@property
def substitution(self) -> Substitution:
"""Getter."""
return self.__substitution
def describe(self) -> Text:
"""Return a description of this substitution as a string."""
return f"{self.name}({self.substitution.describe()})"
@classmethod
def read_xacro(cls, path: Path) -> str:
"""Read the xacro contents and return the corresponding string."""
doc = xacro.process_file(path)
xacro_contents = doc.toprettyxml(indent=" ") # type: ignore
return cast(str, xacro_contents)
def perform(self, context: LaunchContext) -> Text:
"""Perform the substitution - return the contents of the given xacro file."""
path = Path(self.substitution.perform(context))
if not path.is_file():
raise SubstitutionFailure(f"Not a file: {path.absolute()}")
xacro_contents = self.read_xacro(path)
# I have to escape double quotes, then double quote the whole string so that the YAML
# parser is happy
xacro_contents = xacro_contents.replace('"', '\\"')
xacro_contents = f'"{xacro_contents}"'
return xacro_contents
| 36.018519 | 100 | 0.69563 |
from pathlib import Path
from typing import Text, cast
import xacro
from launch.launch_context import LaunchContext
from launch.some_substitutions_type import SomeSubstitutionsType
from launch.substitution import Substitution
from launch.substitutions import SubstitutionFailure
from launch.utilities import normalize_to_list_of_substitutions
class XacroFileContents(Substitution):
name = "XacroFileContents"
def __init__(self, substitution: SomeSubstitutionsType) -> None:
self.__substitution = normalize_to_list_of_substitutions((substitution,))[0]
@property
def substitution(self) -> Substitution:
return self.__substitution
def describe(self) -> Text:
return f"{self.name}({self.substitution.describe()})"
@classmethod
def read_xacro(cls, path: Path) -> str:
doc = xacro.process_file(path)
xacro_contents = doc.toprettyxml(indent=" ")
return cast(str, xacro_contents)
def perform(self, context: LaunchContext) -> Text:
path = Path(self.substitution.perform(context))
if not path.is_file():
raise SubstitutionFailure(f"Not a file: {path.absolute()}")
xacro_contents = self.read_xacro(path)
xacro_contents = xacro_contents.replace('"', '\\"')
xacro_contents = f'"{xacro_contents}"'
return xacro_contents
| true | true |
f720a8cc460fe602c10d1ad4a160e2c5c625a3d0 | 9,766 | py | Python | cloud_provider/aws/aws_bid_advisor_test.py | mridhul/minion-manager | 7301ac6360a82dfdd27e682d070c34e90f080149 | [
"Apache-2.0"
] | 54 | 2018-07-06T18:06:54.000Z | 2019-06-03T15:21:01.000Z | cloud_provider/aws/aws_bid_advisor_test.py | mridhul/minion-manager | 7301ac6360a82dfdd27e682d070c34e90f080149 | [
"Apache-2.0"
] | 28 | 2018-07-05T23:32:22.000Z | 2019-07-19T17:19:26.000Z | cloud_provider/aws/aws_bid_advisor_test.py | mridhul/minion-manager | 7301ac6360a82dfdd27e682d070c34e90f080149 | [
"Apache-2.0"
] | 15 | 2018-07-28T04:51:01.000Z | 2019-07-30T14:50:25.000Z | """The file has unit tests for the AWSBidAdvisor."""
import unittest
from mock import patch, MagicMock
import datetime
from dateutil.tz import tzutc
from cloud_provider.aws.aws_bid_advisor import AWSBidAdvisor
REFRESH_INTERVAL = 10
REGION = 'us-west-2'
MOCK_SPOT_PRICE={'NextToken': '', 'SpotPriceHistory': [{'AvailabilityZone': 'us-west-2b', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.300000', 'Timestamp': datetime.datetime(2019, 7, 13, 20, 30, 22, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2c', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.291400', 'Timestamp': datetime.datetime(2019, 7, 13, 20, 13, 34, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2a', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.320100', 'Timestamp': datetime.datetime(2019, 7, 13, 18, 33, 30, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2c', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 13, 17, 7, 9, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2b', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 13, 17, 7, 9, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2a', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 13, 17, 7, 9, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2b', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.300400', 'Timestamp': datetime.datetime(2019, 7, 13, 15, 46, 1, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2c', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.291500', 'Timestamp': datetime.datetime(2019, 7, 13, 14, 47, 14, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2a', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.321600', 'Timestamp': datetime.datetime(2019, 7, 13, 13, 40, 47, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2d', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.270400', 'Timestamp': 
datetime.datetime(2019, 7, 13, 6, 23, 5, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2c', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 12, 17, 7, 5, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2b', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 12, 17, 7, 5, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2a', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 12, 17, 7, 5, tzinfo=tzutc())}], 'ResponseMetadata': {'RequestId': 'f428bcba-016f-476f-b9ed-755f71af2d36', 'HTTPStatusCode': 200, 'HTTPHeaders': {'content-type': 'text/xml;charset=UTF-8', 'content-length': '4341', 'vary': 'accept-encoding', 'date': 'Sun, 14 Jul 2019 00:45:52 GMT', 'server': 'AmazonEC2'}, 'RetryAttempts': 0}}
class AWSBidAdvisorTest(unittest.TestCase):
"""
Tests for AWSBidAdvisor.
"""
@patch.object(AWSBidAdvisor.SpotInstancePriceUpdater, 'ec2_get_spot_price_history', MagicMock(return_value=MOCK_SPOT_PRICE))
def test_ba_lifecycle(self):
"""
Tests that the AWSBidVisor starts threads and stops them correctly.
"""
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
assert len(bidadv.all_bid_advisor_threads) == 0
bidadv.run()
assert len(bidadv.all_bid_advisor_threads) == 2
bidadv.shutdown()
assert len(bidadv.all_bid_advisor_threads) == 0
def test_ba_on_demand_pricing(self):
"""
Tests that the AWSBidVisor correctly gets the on-demand pricing.
"""
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
assert len(bidadv.on_demand_price_dict) == 0
updater = bidadv.OnDemandUpdater(bidadv)
updater.get_on_demand_pricing()
assert len(bidadv.on_demand_price_dict) > 0
@patch.object(AWSBidAdvisor.SpotInstancePriceUpdater, 'ec2_get_spot_price_history', MagicMock(return_value=MOCK_SPOT_PRICE))
def test_ba_spot_pricing(self):
"""
Tests that the AWSBidVisor correctly gets the spot instance pricing.
"""
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
assert len(bidadv.spot_price_list) == 0
updater = bidadv.SpotInstancePriceUpdater(bidadv)
updater.get_spot_price_info()
assert len(bidadv.spot_price_list) > 0
@patch.object(AWSBidAdvisor.SpotInstancePriceUpdater, 'ec2_get_spot_price_history', MagicMock(return_value=MOCK_SPOT_PRICE))
def test_ba_price_update(self):
"""
        Tests that the AWSBidVisor actually updates the pricing info.
"""
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
od_updater = bidadv.OnDemandUpdater(bidadv)
od_updater.get_on_demand_pricing()
sp_updater = bidadv.SpotInstancePriceUpdater(bidadv)
sp_updater.get_spot_price_info()
# Verify that the pricing info was populated.
assert len(bidadv.on_demand_price_dict) > 0
assert len(bidadv.spot_price_list) > 0
# Make the price dicts empty to check if they get updated.
bidadv.on_demand_price_dict = {}
bidadv.spot_price_list = {}
od_updater.get_on_demand_pricing()
sp_updater.get_spot_price_info()
# Verify that the pricing info is populated again.
assert len(bidadv.on_demand_price_dict) > 0
assert len(bidadv.spot_price_list) > 0
def test_ba_get_bid(self):
"""
Tests that the bid_advisor's get_new_bid() method returns correct
bid information.
"""
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
instance_type = "m3.large"
zones = ["us-west-2b"]
# Manually populate the prices so that spot-instance prices are chosen.
bidadv.on_demand_price_dict["m3.large"] = "100"
bidadv.spot_price_list = [{'InstanceType': instance_type,
'SpotPrice': '80',
'AvailabilityZone': "us-west-2b"}]
bid_info = bidadv.get_new_bid(zones, instance_type)
assert bid_info is not None, "BidAdvisor didn't return any " + \
            "new bid information."
assert bid_info["type"] == "spot"
assert isinstance(bid_info["price"], str)
# Manually populate the prices so that on-demand instances are chosen.
bidadv.spot_price_list = [{'InstanceType': instance_type,
'SpotPrice': '85',
'AvailabilityZone': "us-west-2b"}]
bid_info = bidadv.get_new_bid(zones, instance_type)
        assert bid_info is not None, "BidAdvisor didn't return any new " + \
"bid information."
assert bid_info["type"] == "on-demand"
def test_ba_get_bid_no_data(self):
"""
Tests that the BidAdvisor returns the default if the pricing
        information hasn't been obtained yet.
"""
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
bid_info = bidadv.get_new_bid(['us-west-2a'], 'm3.large')
assert bid_info["type"] == "on-demand"
@patch.object(AWSBidAdvisor.SpotInstancePriceUpdater, 'ec2_get_spot_price_history', MagicMock(return_value=MOCK_SPOT_PRICE))
def test_ba_get_current_price(self):
"""
Tests that the BidAdvisor returns the most recent price information.
"""
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
od_updater = bidadv.OnDemandUpdater(bidadv)
od_updater.get_on_demand_pricing()
sp_updater = bidadv.SpotInstancePriceUpdater(bidadv)
sp_updater.get_spot_price_info()
# Verify that the pricing info was populated.
assert len(bidadv.on_demand_price_dict) > 0
assert len(bidadv.spot_price_list) > 0
price_info_map = bidadv.get_current_price()
assert price_info_map["spot"] is not None
assert price_info_map["on-demand"] is not None
def test_ba_parse_row(self):
"""
Tests that the BidAdvisor parses the rows in on-demand price information.
"""
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
od_updater = bidadv.OnDemandUpdater(bidadv)
row = {}
row['RateCode'] = "JRTCKXETXF.6YS6EN2CT7"
row["TermType"] = "OnDemand"
row["PriceDescription"] = "On Demand Linux"
row["Location"] = "US West (Oregon)"
row["Operating System"] = "Linux"
row["Pre Installed S/W"] = "NA"
row["Tenancy"] = "Shared"
row["PricePerUnit"] = "0.453"
row["Instance Type"] = "m5.4xlarge"
od_updater.parse_price_row(row)
assert od_updater.bid_advisor.on_demand_price_dict['m5.4xlarge'] == "0.453"
od_updater.parse_price_row(row)
assert od_updater.bid_advisor.on_demand_price_dict['m5.4xlarge'] == "0.453"
row["PricePerUnit"] = "0.658"
od_updater.parse_price_row(row)
assert od_updater.bid_advisor.on_demand_price_dict['m5.4xlarge'] == "0.658"
row["PricePerUnit"] = "0.00"
od_updater.parse_price_row(row)
assert od_updater.bid_advisor.on_demand_price_dict['m5.4xlarge'] == "0.658"
row['RateCode'] = "Some Random RateCode"
od_updater.parse_price_row(row)
| 57.111111 | 2,928 | 0.668646 |
import unittest
from mock import patch, MagicMock
import datetime
from dateutil.tz import tzutc
from cloud_provider.aws.aws_bid_advisor import AWSBidAdvisor
REFRESH_INTERVAL = 10
REGION = 'us-west-2'
MOCK_SPOT_PRICE={'NextToken': '', 'SpotPriceHistory': [{'AvailabilityZone': 'us-west-2b', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.300000', 'Timestamp': datetime.datetime(2019, 7, 13, 20, 30, 22, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2c', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.291400', 'Timestamp': datetime.datetime(2019, 7, 13, 20, 13, 34, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2a', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.320100', 'Timestamp': datetime.datetime(2019, 7, 13, 18, 33, 30, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2c', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 13, 17, 7, 9, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2b', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 13, 17, 7, 9, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2a', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 13, 17, 7, 9, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2b', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.300400', 'Timestamp': datetime.datetime(2019, 7, 13, 15, 46, 1, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2c', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.291500', 'Timestamp': datetime.datetime(2019, 7, 13, 14, 47, 14, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2a', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.321600', 'Timestamp': datetime.datetime(2019, 7, 13, 13, 40, 47, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2d', 'InstanceType': 'm5.4xlarge', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.270400', 'Timestamp': 
datetime.datetime(2019, 7, 13, 6, 23, 5, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2c', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 12, 17, 7, 5, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2b', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 12, 17, 7, 5, tzinfo=tzutc())}, {'AvailabilityZone': 'us-west-2a', 'InstanceType': 'm3.medium', 'ProductDescription': 'Linux/UNIX', 'SpotPrice': '0.006700', 'Timestamp': datetime.datetime(2019, 7, 12, 17, 7, 5, tzinfo=tzutc())}], 'ResponseMetadata': {'RequestId': 'f428bcba-016f-476f-b9ed-755f71af2d36', 'HTTPStatusCode': 200, 'HTTPHeaders': {'content-type': 'text/xml;charset=UTF-8', 'content-length': '4341', 'vary': 'accept-encoding', 'date': 'Sun, 14 Jul 2019 00:45:52 GMT', 'server': 'AmazonEC2'}, 'RetryAttempts': 0}}
class AWSBidAdvisorTest(unittest.TestCase):
@patch.object(AWSBidAdvisor.SpotInstancePriceUpdater, 'ec2_get_spot_price_history', MagicMock(return_value=MOCK_SPOT_PRICE))
def test_ba_lifecycle(self):
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
assert len(bidadv.all_bid_advisor_threads) == 0
bidadv.run()
assert len(bidadv.all_bid_advisor_threads) == 2
bidadv.shutdown()
assert len(bidadv.all_bid_advisor_threads) == 0
def test_ba_on_demand_pricing(self):
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
assert len(bidadv.on_demand_price_dict) == 0
updater = bidadv.OnDemandUpdater(bidadv)
updater.get_on_demand_pricing()
assert len(bidadv.on_demand_price_dict) > 0
@patch.object(AWSBidAdvisor.SpotInstancePriceUpdater, 'ec2_get_spot_price_history', MagicMock(return_value=MOCK_SPOT_PRICE))
def test_ba_spot_pricing(self):
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
assert len(bidadv.spot_price_list) == 0
updater = bidadv.SpotInstancePriceUpdater(bidadv)
updater.get_spot_price_info()
assert len(bidadv.spot_price_list) > 0
@patch.object(AWSBidAdvisor.SpotInstancePriceUpdater, 'ec2_get_spot_price_history', MagicMock(return_value=MOCK_SPOT_PRICE))
def test_ba_price_update(self):
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
od_updater = bidadv.OnDemandUpdater(bidadv)
od_updater.get_on_demand_pricing()
sp_updater = bidadv.SpotInstancePriceUpdater(bidadv)
sp_updater.get_spot_price_info()
assert len(bidadv.on_demand_price_dict) > 0
assert len(bidadv.spot_price_list) > 0
bidadv.on_demand_price_dict = {}
bidadv.spot_price_list = {}
od_updater.get_on_demand_pricing()
sp_updater.get_spot_price_info()
assert len(bidadv.on_demand_price_dict) > 0
assert len(bidadv.spot_price_list) > 0
def test_ba_get_bid(self):
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
instance_type = "m3.large"
zones = ["us-west-2b"]
bidadv.on_demand_price_dict["m3.large"] = "100"
bidadv.spot_price_list = [{'InstanceType': instance_type,
'SpotPrice': '80',
'AvailabilityZone': "us-west-2b"}]
bid_info = bidadv.get_new_bid(zones, instance_type)
assert bid_info is not None, "BidAdvisor didn't return any " + \
            "new bid information."
assert bid_info["type"] == "spot"
assert isinstance(bid_info["price"], str)
# Manually populate the prices so that on-demand instances are chosen.
bidadv.spot_price_list = [{'InstanceType': instance_type,
'SpotPrice': '85',
'AvailabilityZone': "us-west-2b"}]
bid_info = bidadv.get_new_bid(zones, instance_type)
        assert bid_info is not None, "BidAdvisor didn't return any new " + \
"bid information."
assert bid_info["type"] == "on-demand"
def test_ba_get_bid_no_data(self):
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
bid_info = bidadv.get_new_bid(['us-west-2a'], 'm3.large')
assert bid_info["type"] == "on-demand"
@patch.object(AWSBidAdvisor.SpotInstancePriceUpdater, 'ec2_get_spot_price_history', MagicMock(return_value=MOCK_SPOT_PRICE))
def test_ba_get_current_price(self):
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
od_updater = bidadv.OnDemandUpdater(bidadv)
od_updater.get_on_demand_pricing()
sp_updater = bidadv.SpotInstancePriceUpdater(bidadv)
sp_updater.get_spot_price_info()
assert len(bidadv.on_demand_price_dict) > 0
assert len(bidadv.spot_price_list) > 0
price_info_map = bidadv.get_current_price()
assert price_info_map["spot"] is not None
assert price_info_map["on-demand"] is not None
def test_ba_parse_row(self):
bidadv = AWSBidAdvisor(REFRESH_INTERVAL, REFRESH_INTERVAL, REGION)
od_updater = bidadv.OnDemandUpdater(bidadv)
row = {}
row['RateCode'] = "JRTCKXETXF.6YS6EN2CT7"
row["TermType"] = "OnDemand"
row["PriceDescription"] = "On Demand Linux"
row["Location"] = "US West (Oregon)"
row["Operating System"] = "Linux"
row["Pre Installed S/W"] = "NA"
row["Tenancy"] = "Shared"
row["PricePerUnit"] = "0.453"
row["Instance Type"] = "m5.4xlarge"
od_updater.parse_price_row(row)
assert od_updater.bid_advisor.on_demand_price_dict['m5.4xlarge'] == "0.453"
od_updater.parse_price_row(row)
assert od_updater.bid_advisor.on_demand_price_dict['m5.4xlarge'] == "0.453"
row["PricePerUnit"] = "0.658"
od_updater.parse_price_row(row)
assert od_updater.bid_advisor.on_demand_price_dict['m5.4xlarge'] == "0.658"
row["PricePerUnit"] = "0.00"
od_updater.parse_price_row(row)
assert od_updater.bid_advisor.on_demand_price_dict['m5.4xlarge'] == "0.658"
row['RateCode'] = "Some Random RateCode"
od_updater.parse_price_row(row)
| true | true |
f720aa2b27c1b9f527f04697350eace8a44cc17c | 214 | py | Python | diypy3/tests/arr_stk.py | anqurvanillapy/diypy | 56ced55011e95a19b7238992c2fc612b196ff17d | [
"CC0-1.0"
] | 1 | 2015-12-08T10:35:21.000Z | 2015-12-08T10:35:21.000Z | diypy3/tests/arr_stk.py | anqurvanillapy/diypy3 | 56ced55011e95a19b7238992c2fc612b196ff17d | [
"CC0-1.0"
] | null | null | null | diypy3/tests/arr_stk.py | anqurvanillapy/diypy3 | 56ced55011e95a19b7238992c2fc612b196ff17d | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""\
This script creates a stack array
"""
import diypy3
d = diypy3.Diypy3()
arr_stk = (1, 2, 3, 4, 5)
max_size = 100
inc = 10
d.array_stack(max_size, inc, arr_stk)
| 14.266667 | 37 | 0.635514 |
import diypy3
d = diypy3.Diypy3()
arr_stk = (1, 2, 3, 4, 5)
max_size = 100
inc = 10
d.array_stack(max_size, inc, arr_stk)
| true | true |
f720aae2cd32fb0ceac540aa226171b36ea197e1 | 9,383 | py | Python | yandex/cloud/mdb/mysql/v1alpha/backup_service_pb2.py | kbespalov/python-sdk | e86563ee850e46a35b4c84053ecd4affdf66a963 | [
"MIT"
] | null | null | null | yandex/cloud/mdb/mysql/v1alpha/backup_service_pb2.py | kbespalov/python-sdk | e86563ee850e46a35b4c84053ecd4affdf66a963 | [
"MIT"
] | null | null | null | yandex/cloud/mdb/mysql/v1alpha/backup_service_pb2.py | kbespalov/python-sdk | e86563ee850e46a35b4c84053ecd4affdf66a963 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: yandex/cloud/mdb/mysql/v1alpha/backup_service.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2
from yandex.cloud import validation_pb2 as yandex_dot_cloud_dot_validation__pb2
from yandex.cloud.mdb.mysql.v1alpha import backup_pb2 as yandex_dot_cloud_dot_mdb_dot_mysql_dot_v1alpha_dot_backup__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='yandex/cloud/mdb/mysql/v1alpha/backup_service.proto',
package='yandex.cloud.mdb.mysql.v1alpha',
syntax='proto3',
serialized_options=_b('\n\"yandex.cloud.api.mdb.mysql.v1alphaZHgithub.com/yandex-cloud/go-genproto/yandex/cloud/mdb/mysql/v1alpha;mysql'),
serialized_pb=_b('\n3yandex/cloud/mdb/mysql/v1alpha/backup_service.proto\x12\x1eyandex.cloud.mdb.mysql.v1alpha\x1a\x1cgoogle/api/annotations.proto\x1a\x1dyandex/cloud/validation.proto\x1a+yandex/cloud/mdb/mysql/v1alpha/backup.proto\"+\n\x10GetBackupRequest\x12\x17\n\tbackup_id\x18\x01 \x01(\tB\x04\xe8\xc7\x31\x01\"s\n\x12ListBackupsRequest\x12\x1f\n\tfolder_id\x18\x01 \x01(\tB\x0c\xe8\xc7\x31\x01\x8a\xc8\x31\x04<=50\x12\x1d\n\tpage_size\x18\x02 \x01(\x03\x42\n\xfa\xc7\x31\x06<=1000\x12\x1d\n\npage_token\x18\x03 \x01(\tB\t\x8a\xc8\x31\x05<=100\"r\n\x13ListBackupsResponse\x12\x37\n\x07\x62\x61\x63kups\x18\x01 \x03(\x0b\x32&.yandex.cloud.mdb.mysql.v1alpha.Backup\x12\"\n\x0fnext_page_token\x18\x02 \x01(\tB\t\x8a\xc8\x31\x05<=1002\xbf\x02\n\rBackupService\x12\x93\x01\n\x03Get\x12\x30.yandex.cloud.mdb.mysql.v1alpha.GetBackupRequest\x1a&.yandex.cloud.mdb.mysql.v1alpha.Backup\"2\x82\xd3\xe4\x93\x02,\x12*/managed-mysql/v1alpha/backups/{backup_id}\x12\x97\x01\n\x04List\x12\x32.yandex.cloud.mdb.mysql.v1alpha.ListBackupsRequest\x1a\x33.yandex.cloud.mdb.mysql.v1alpha.ListBackupsResponse\"&\x82\xd3\xe4\x93\x02 \x12\x1e/managed-mysql/v1alpha/backupsBn\n\"yandex.cloud.api.mdb.mysql.v1alphaZHgithub.com/yandex-cloud/go-genproto/yandex/cloud/mdb/mysql/v1alpha;mysqlb\x06proto3')
,
dependencies=[google_dot_api_dot_annotations__pb2.DESCRIPTOR,yandex_dot_cloud_dot_validation__pb2.DESCRIPTOR,yandex_dot_cloud_dot_mdb_dot_mysql_dot_v1alpha_dot_backup__pb2.DESCRIPTOR,])
_GETBACKUPREQUEST = _descriptor.Descriptor(
name='GetBackupRequest',
full_name='yandex.cloud.mdb.mysql.v1alpha.GetBackupRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='backup_id', full_name='yandex.cloud.mdb.mysql.v1alpha.GetBackupRequest.backup_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\350\3071\001'), file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=193,
serialized_end=236,
)
_LISTBACKUPSREQUEST = _descriptor.Descriptor(
name='ListBackupsRequest',
full_name='yandex.cloud.mdb.mysql.v1alpha.ListBackupsRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='folder_id', full_name='yandex.cloud.mdb.mysql.v1alpha.ListBackupsRequest.folder_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\350\3071\001\212\3101\004<=50'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='page_size', full_name='yandex.cloud.mdb.mysql.v1alpha.ListBackupsRequest.page_size', index=1,
number=2, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\372\3071\006<=1000'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='page_token', full_name='yandex.cloud.mdb.mysql.v1alpha.ListBackupsRequest.page_token', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\212\3101\005<=100'), file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=238,
serialized_end=353,
)
_LISTBACKUPSRESPONSE = _descriptor.Descriptor(
name='ListBackupsResponse',
full_name='yandex.cloud.mdb.mysql.v1alpha.ListBackupsResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='backups', full_name='yandex.cloud.mdb.mysql.v1alpha.ListBackupsResponse.backups', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='next_page_token', full_name='yandex.cloud.mdb.mysql.v1alpha.ListBackupsResponse.next_page_token', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\212\3101\005<=100'), file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=355,
serialized_end=469,
)
_LISTBACKUPSRESPONSE.fields_by_name['backups'].message_type = yandex_dot_cloud_dot_mdb_dot_mysql_dot_v1alpha_dot_backup__pb2._BACKUP
DESCRIPTOR.message_types_by_name['GetBackupRequest'] = _GETBACKUPREQUEST
DESCRIPTOR.message_types_by_name['ListBackupsRequest'] = _LISTBACKUPSREQUEST
DESCRIPTOR.message_types_by_name['ListBackupsResponse'] = _LISTBACKUPSRESPONSE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
GetBackupRequest = _reflection.GeneratedProtocolMessageType('GetBackupRequest', (_message.Message,), {
'DESCRIPTOR' : _GETBACKUPREQUEST,
'__module__' : 'yandex.cloud.mdb.mysql.v1alpha.backup_service_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.mdb.mysql.v1alpha.GetBackupRequest)
})
_sym_db.RegisterMessage(GetBackupRequest)
ListBackupsRequest = _reflection.GeneratedProtocolMessageType('ListBackupsRequest', (_message.Message,), {
'DESCRIPTOR' : _LISTBACKUPSREQUEST,
'__module__' : 'yandex.cloud.mdb.mysql.v1alpha.backup_service_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.mdb.mysql.v1alpha.ListBackupsRequest)
})
_sym_db.RegisterMessage(ListBackupsRequest)
ListBackupsResponse = _reflection.GeneratedProtocolMessageType('ListBackupsResponse', (_message.Message,), {
'DESCRIPTOR' : _LISTBACKUPSRESPONSE,
'__module__' : 'yandex.cloud.mdb.mysql.v1alpha.backup_service_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.mdb.mysql.v1alpha.ListBackupsResponse)
})
_sym_db.RegisterMessage(ListBackupsResponse)
DESCRIPTOR._options = None
_GETBACKUPREQUEST.fields_by_name['backup_id']._options = None
_LISTBACKUPSREQUEST.fields_by_name['folder_id']._options = None
_LISTBACKUPSREQUEST.fields_by_name['page_size']._options = None
_LISTBACKUPSREQUEST.fields_by_name['page_token']._options = None
_LISTBACKUPSRESPONSE.fields_by_name['next_page_token']._options = None
_BACKUPSERVICE = _descriptor.ServiceDescriptor(
name='BackupService',
full_name='yandex.cloud.mdb.mysql.v1alpha.BackupService',
file=DESCRIPTOR,
index=0,
serialized_options=None,
serialized_start=472,
serialized_end=791,
methods=[
_descriptor.MethodDescriptor(
name='Get',
full_name='yandex.cloud.mdb.mysql.v1alpha.BackupService.Get',
index=0,
containing_service=None,
input_type=_GETBACKUPREQUEST,
output_type=yandex_dot_cloud_dot_mdb_dot_mysql_dot_v1alpha_dot_backup__pb2._BACKUP,
serialized_options=_b('\202\323\344\223\002,\022*/managed-mysql/v1alpha/backups/{backup_id}'),
),
_descriptor.MethodDescriptor(
name='List',
full_name='yandex.cloud.mdb.mysql.v1alpha.BackupService.List',
index=1,
containing_service=None,
input_type=_LISTBACKUPSREQUEST,
output_type=_LISTBACKUPSRESPONSE,
serialized_options=_b('\202\323\344\223\002 \022\036/managed-mysql/v1alpha/backups'),
),
])
_sym_db.RegisterServiceDescriptor(_BACKUPSERVICE)
DESCRIPTOR.services_by_name['BackupService'] = _BACKUPSERVICE
# @@protoc_insertion_point(module_scope)
| 43.845794 | 1,281 | 0.775338 |
| true | true |
f720ac11c40ba6b8dd0d3806ca655474e9e8841f | 344 | py | Python | convert/_3D/to/_1D.py | flew-software/Dem | 20b7eb9bc7c11f1baf23acfe7bfbab359ddd97fb | [
"MIT"
] | 1 | 2021-02-17T08:30:05.000Z | 2021-02-17T08:30:05.000Z | convert/_3D/to/_1D.py | flew-software/Dem | 20b7eb9bc7c11f1baf23acfe7bfbab359ddd97fb | [
"MIT"
] | null | null | null | convert/_3D/to/_1D.py | flew-software/Dem | 20b7eb9bc7c11f1baf23acfe7bfbab359ddd97fb | [
"MIT"
] | null | null | null | def row_major(l: list) -> tuple[list, int]:
""" converts a 2d list to a 1d list using row major algorithm and returns a 1d list and row count """
out = []
i = 0
while i < len(l):
ii = 0
a = l[i]
while ii < len(a):
out.append(a[ii])
ii += 1
i += 1
return out, len(l)
| 22.933333 | 105 | 0.479651 |
| true | true |
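A minimal usage sketch of the `row_major` flattener above (the function is re-stated in an equivalent, slightly more idiomatic form so the snippet runs standalone; Python 3.9+ is assumed for the `tuple[list, int]` annotation):

```python
def row_major(grid: list) -> tuple[list, int]:
    """Flatten a 2D list in row-major order; return (flat_list, row_count)."""
    flat = []
    for row in grid:        # walk rows first, then columns: row-major order
        flat.extend(row)
    return flat, len(grid)

matrix = [[1, 2, 3],
          [4, 5, 6]]
flat, rows = row_major(matrix)
print(flat, rows)  # [1, 2, 3, 4, 5, 6] 2
```

The behavior matches the original nested-while version: elements of each row appear consecutively, and the second return value is the number of rows.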
f720ae0b4dc5919f8c14b48866a4d15a378b186e | 2,887 | py | Python | EDSR/common.py | NateLol/BAM_A_lightweight_but_efficient_Balanced_attention_mechanism_for_super_resolution | f23c043c6cd5c064e58b6b11bd7100fc55224702 | [
"MIT"
] | 33 | 2021-04-30T02:40:05.000Z | 2022-03-09T09:35:49.000Z | EDSR/common.py | chisyliu/BAM_A_lightweight_but_efficient_Balanced_attention_mechanism_for_super_resolution | 4c977ea1586e7836248acb5cbd648e124b43aca3 | [
"MIT"
] | 6 | 2021-05-10T23:19:35.000Z | 2021-12-13T02:13:16.000Z | EDSR/common.py | chisyliu/BAM_A_lightweight_but_efficient_Balanced_attention_mechanism_for_super_resolution | 4c977ea1586e7836248acb5cbd648e124b43aca3 | [
"MIT"
] | 13 | 2021-05-18T12:21:48.000Z | 2022-01-21T07:17:19.000Z | import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
def default_conv(in_channels, out_channels, kernel_size, bias=True):
return nn.Conv2d(
in_channels, out_channels, kernel_size,
padding=(kernel_size//2), bias=bias)
class MeanShift(nn.Conv2d):
def __init__(self, rgb_range, rgb_mean=(0.4488, 0.4371, 0.4040), rgb_std=(1.0, 1.0, 1.0), sign=-1):
super(MeanShift, self).__init__(3, 3, kernel_size=1)
std = torch.Tensor(rgb_std)
self.weight.data = torch.eye(3).view(3, 3, 1, 1)
self.weight.data.div_(std.view(3, 1, 1, 1))
self.bias.data = sign * rgb_range * torch.Tensor(rgb_mean)
self.bias.data.div_(std)
self.requires_grad = False
class BasicBlock(nn.Sequential):
def __init__(
self, in_channels, out_channels, kernel_size, stride=1, bias=False,
bn=True, act=nn.ReLU(True)):
m = [nn.Conv2d(
in_channels, out_channels, kernel_size,
padding=(kernel_size//2), stride=stride, bias=bias)
]
if bn: m.append(nn.BatchNorm2d(out_channels))
if act is not None: m.append(act)
super(BasicBlock, self).__init__(*m)
class ResBlock(nn.Module):
def __init__(
self, conv, n_feats, kernel_size,
bias=True, bn=False, act=nn.ReLU(True), res_scale=1):
super(ResBlock, self).__init__()
m = []
for i in range(2):
m.append(conv(n_feats, n_feats, kernel_size, bias=bias))
if bn: m.append(nn.BatchNorm2d(n_feats))
if i == 0: m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
res = self.body(x).mul(self.res_scale)
res += x
return res
class Upsampler(nn.Sequential):
def __init__(self, conv, scale, n_feats, bn=False, act=False, bias=True):
m = []
if (scale & (scale - 1)) == 0: # Is scale = 2^n?
for _ in range(int(math.log(scale, 2))):
m.append(conv(n_feats, 4 * n_feats, 3, bias))
m.append(nn.PixelShuffle(2))
if bn: m.append(nn.BatchNorm2d(n_feats))
if act == 'relu':
m.append(nn.ReLU(True))
elif act == 'prelu':
m.append(nn.PReLU(n_feats))
elif scale == 3:
m.append(conv(n_feats, 9 * n_feats, 3, bias))
m.append(nn.PixelShuffle(3))
if bn: m.append(nn.BatchNorm2d(n_feats))
if act == 'relu':
m.append(nn.ReLU(True))
elif act == 'prelu':
m.append(nn.PReLU(n_feats))
else:
raise NotImplementedError
        super(Upsampler, self).__init__(*m) | 33.569767 | 104 | 0.55594 |
| true | true |
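The `Upsampler` above picks its stages from the scale factor: a power-of-two scale becomes repeated x2 `PixelShuffle` stages, while scale 3 becomes a single x3 stage. A torch-free sketch of just that dispatch logic (the function name here is illustrative, not part of the original API):

```python
import math

def upsample_stages(scale: int) -> list:
    """Return the per-stage upscale factors Upsampler would chain together."""
    if (scale & (scale - 1)) == 0:          # power of two: log2(scale) x2 stages
        return [2] * int(math.log(scale, 2))
    if scale == 3:                          # single x3 pixel-shuffle stage
        return [3]
    raise NotImplementedError(f"unsupported scale {scale}")

print(upsample_stages(4))  # [2, 2]
print(upsample_stages(3))  # [3]
```

Each x2 stage first expands channels by 4 (`conv(n_feats, 4 * n_feats, 3)`) so `nn.PixelShuffle(2)` can rearrange them into a 2x larger spatial grid; the x3 stage expands channels by 9 for the same reason.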
f720ae8b8a42176cee5c72888875e04ea9be0096 | 749 | py | Python | tests/hooks.py | j-mechacorta/atoolbox | 900ad665f463d16911982dfadab7015cb95aa5ca | [
"MIT"
] | null | null | null | tests/hooks.py | j-mechacorta/atoolbox | 900ad665f463d16911982dfadab7015cb95aa5ca | [
"MIT"
] | null | null | null | tests/hooks.py | j-mechacorta/atoolbox | 900ad665f463d16911982dfadab7015cb95aa5ca | [
"MIT"
] | null | null | null | import os
from os.path import dirname as _dir
import logging
def get_logger(name):
return logging.getLogger('conftest.%s' % name)
def pytest_sessionstart(session):
BASE_FORMAT = "[%(name)s][%(levelname)-6s] %(message)s"
FILE_FORMAT = "[%(asctime)s]" + BASE_FORMAT
root_logger = logging.getLogger('conftest')
dir_path = os.path.dirname(os.path.realpath(__file__))
top_level = _dir(_dir(dir_path))
log_file = os.path.join(top_level, 'pytest-functional-tests.log')
print(log_file)
root_logger.setLevel(logging.INFO)
# File Logger
fh = logging.FileHandler(log_file)
fh.setLevel(logging.DEBUG)
fh.setFormatter(logging.Formatter(FILE_FORMAT, "%Y-%m-%d %H:%M:%S"))
root_logger.addHandler(fh)
| 26.75 | 72 | 0.698264 |
| true | true |
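`get_logger` above works because `conftest.<name>` child loggers propagate their records up to the `conftest` logger configured in `pytest_sessionstart`. A self-contained sketch of that propagation, writing to an in-memory stream instead of the log file (logger names here are stand-ins):

```python
import io
import logging

root = logging.getLogger("conftest_demo")          # stand-in for 'conftest'
root.setLevel(logging.INFO)
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("[%(name)s][%(levelname)-6s] %(message)s"))
root.addHandler(handler)

child = logging.getLogger("conftest_demo.suite")   # what get_logger would return
child.info("session started")                      # propagates up to root's handler

print(buf.getvalue().strip())
# [conftest_demo.suite][INFO  ] session started
```

The child logger needs no handler of its own; records bubble up the dotted-name hierarchy until a configured ancestor emits them.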
f720ae996883f8afdc19851c7b8222b960cb4d67 | 389 | py | Python | python-ds-practice/10_frequency/frequency.py | MostFunGuy/SpringboardProjectsPublic | bbda3ba26ecf8a09e62df81583122cae83acc1e6 | [
"MIT"
] | null | null | null | python-ds-practice/10_frequency/frequency.py | MostFunGuy/SpringboardProjectsPublic | bbda3ba26ecf8a09e62df81583122cae83acc1e6 | [
"MIT"
] | null | null | null | python-ds-practice/10_frequency/frequency.py | MostFunGuy/SpringboardProjectsPublic | bbda3ba26ecf8a09e62df81583122cae83acc1e6 | [
"MIT"
] | null | null | null | def frequency(lst, search_term):
"""Return frequency of term in lst.
>>> frequency([1, 4, 3, 4, 4], 4)
3
>>> frequency([1, 4, 3], 7)
0
"""
return lst.count(search_term)
print(F"frequency.py: frequency([1, 4, 3, 4, 4], 4) = `3` = {frequency([1, 4, 3, 4, 4], 4)}")
print(F"frequency.py: frequency([1, 4, 3], 7) = `0` = {frequency([1, 4, 3], 7)}") | 35.363636 | 93 | 0.511568 | def frequency(lst, search_term):
return lst.count(search_term)
print(F"frequency.py: frequency([1, 4, 3, 4, 4], 4) = `3` = {frequency([1, 4, 3, 4, 4], 4)}")
print(F"frequency.py: frequency([1, 4, 3], 7) = `0` = {frequency([1, 4, 3], 7)}") | true | true |
f720aeded9d52c0f3f6082dcb150a7020df5a4fb | 107 | py | Python | models/__init__.py | Abdulah-Fawaz/Benchmarking-Surface-DL | 9693379f26d57f9aabf28b973f40a9f6f627d26f | [
"MIT"
] | 2 | 2021-12-04T07:04:56.000Z | 2021-12-13T16:28:50.000Z | models/__init__.py | Abdulah-Fawaz/Benchmarking-Surface-DL | 9693379f26d57f9aabf28b973f40a9f6f627d26f | [
"MIT"
] | 1 | 2021-12-21T09:36:11.000Z | 2022-01-25T10:26:43.000Z | models/__init__.py | Abdulah-Fawaz/Benchmarking-Surface-DL | 9693379f26d57f9aabf28b973f40a9f6f627d26f | [
"MIT"
] | 1 | 2022-02-27T17:38:19.000Z | 2022-02-27T17:38:19.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun Nov 29 10:29:06 2020
@author: fa19
"""
| 11.888889 | 35 | 0.588785 | true | true | |
f720af0f5626dfcd589a2242cb387ffca059e40a | 7,100 | py | Python | querybook/server/models/admin.py | czgu/querybook | fb3120245cd9693b7aa67bf0f08d427fd2dde74b | [
"Apache-2.0"
] | 1,144 | 2021-03-30T05:06:16.000Z | 2022-03-31T10:40:31.000Z | querybook/server/models/admin.py | czgu/querybook | fb3120245cd9693b7aa67bf0f08d427fd2dde74b | [
"Apache-2.0"
] | 100 | 2021-03-30T19:43:45.000Z | 2022-03-25T17:29:32.000Z | querybook/server/models/admin.py | czgu/querybook | fb3120245cd9693b7aa67bf0f08d427fd2dde74b | [
"Apache-2.0"
] | 113 | 2021-03-30T00:07:20.000Z | 2022-03-31T07:18:43.000Z | import sqlalchemy as sql
from sqlalchemy.orm import relationship, backref
from app import db
from const.admin import AdminOperation
from const.db import (
name_length,
now,
description_length,
# mediumtext_length,
# text_length
)
from lib.sqlalchemy import CRUDMixin
Base = db.Base
class Announcement(CRUDMixin, Base):
__tablename__ = "announcements"
__table_args__ = {"mysql_engine": "InnoDB", "mysql_charset": "utf8mb4"}
id = sql.Column(sql.Integer, primary_key=True)
created_at = sql.Column(sql.DateTime, default=now)
updated_at = sql.Column(sql.DateTime, default=now)
uid = sql.Column(sql.Integer, sql.ForeignKey("user.id", ondelete="CASCADE"))
message = sql.Column(sql.String(length=description_length))
url_regex = sql.Column(sql.String(length=name_length))
can_dismiss = sql.Column(sql.Boolean, default=True)
active_from = sql.Column(sql.Date)
active_till = sql.Column(sql.Date)
def to_dict(self):
return {
"id": self.id,
"message": self.message,
"url_regex": self.url_regex,
"can_dismiss": self.can_dismiss,
}
def to_dict_admin(self):
return {
"id": self.id,
"created_at": self.created_at,
"updated_at": self.updated_at,
"message": self.message,
"uid": self.uid,
"url_regex": self.url_regex,
"can_dismiss": self.can_dismiss,
"active_from": self.active_from,
"active_till": self.active_till,
}
class QueryEngineEnvironment(CRUDMixin, Base):
__tablename__ = "query_engine_environment"
__table_args__ = (
sql.UniqueConstraint(
"query_engine_id", "environment_id", name="unique_query_engine_environment"
),
)
id = sql.Column(sql.Integer, primary_key=True, autoincrement=True)
query_engine_id = sql.Column(
sql.Integer,
sql.ForeignKey("query_engine.id", ondelete="CASCADE"),
nullable=False,
)
environment_id = sql.Column(
sql.Integer,
sql.ForeignKey("environment.id", ondelete="CASCADE"),
nullable=False,
)
engine_order = sql.Column(sql.Integer, nullable=False)
class QueryEngine(CRUDMixin, Base):
__tablename__ = "query_engine"
id = sql.Column(sql.Integer, primary_key=True)
created_at = sql.Column(sql.DateTime, default=now)
updated_at = sql.Column(sql.DateTime, default=now)
deleted_at = sql.Column(sql.DateTime)
name = sql.Column(sql.String(length=name_length), unique=True, nullable=False)
description = sql.Column(sql.String(length=name_length))
language = sql.Column(sql.String(length=name_length), nullable=False)
executor = sql.Column(sql.String(length=name_length), nullable=False)
status_checker = sql.Column(sql.String(length=name_length))
# JSON field
executor_params = sql.Column(sql.JSON)
control_params = sql.Column(sql.JSON, default={}, nullable=False)
metastore_id = sql.Column(
sql.Integer, sql.ForeignKey("query_metastore.id", ondelete="SET NULL")
)
metastore = relationship("QueryMetastore", backref="query_engine")
environments = relationship(
"Environment",
secondary="query_engine_environment",
backref=backref(
"query_engines", order_by="QueryEngineEnvironment.engine_order"
),
)
def to_dict(self):
# IMPORTANT: do not expose executor params unless it is for admin
return {
"id": self.id,
"name": self.name,
"language": self.language,
"description": self.description,
"metastore_id": self.metastore_id,
"executor": self.executor,
}
def to_dict_admin(self):
# THIS API IS FOR ADMIN USAGE
return {
"id": self.id,
"created_at": self.created_at,
"updated_at": self.updated_at,
"deleted_at": self.deleted_at,
"name": self.name,
"language": self.language,
"description": self.description,
"metastore_id": self.metastore_id,
"executor": self.executor,
"executor_params": self.get_engine_params(),
"control_params": self.control_params,
"status_checker": self.status_checker,
"environments": self.environments,
}
def get_engine_params(self):
return self.executor_params
class QueryMetastore(CRUDMixin, Base):
__tablename__ = "query_metastore"
id = sql.Column(sql.Integer, primary_key=True)
created_at = sql.Column(sql.DateTime, default=now)
updated_at = sql.Column(sql.DateTime, default=now)
deleted_at = sql.Column(sql.DateTime)
name = sql.Column(sql.String(length=name_length), unique=True, nullable=False)
# Comma separated hive metastore urls
loader = sql.Column(sql.String(length=128), nullable=False)
metastore_params = sql.Column(sql.JSON)
acl_control = sql.Column(sql.JSON, default={}, nullable=False)
def to_dict(self):
return {"id": self.id, "name": self.name}
def to_dict_admin(self):
return {
"id": self.id,
"created_at": self.created_at,
"updated_at": self.updated_at,
"deleted_at": self.deleted_at,
"name": self.name,
"loader": self.loader,
"metastore_params": self.metastore_params,
"acl_control": self.acl_control,
}
class APIAccessToken(CRUDMixin, Base):
__tablename__ = "api_access_token"
id = sql.Column(sql.Integer, primary_key=True)
token = sql.Column(sql.String(length=128), unique=True, nullable=False)
description = sql.Column(sql.String(length=description_length))
enabled = sql.Column(sql.Boolean, default=True)
created_at = sql.Column(sql.DateTime, default=now)
creator_uid = sql.Column(sql.Integer, sql.ForeignKey("user.id", ondelete="CASCADE"))
updated_at = sql.Column(sql.DateTime, default=now)
updater_uid = sql.Column(sql.Integer, sql.ForeignKey("user.id", ondelete="CASCADE"))
def to_dict(self):
return {
"id": self.id,
"description": self.description,
"enabled": self.enabled,
"created_at": self.created_at,
"creator_uid": self.creator_uid,
"updated_at": self.updated_at,
"updater_uid": self.updater_uid,
}
class AdminAuditLog(CRUDMixin, Base):
__tablename__ = "admin_audit_log"
id = sql.Column(sql.Integer, primary_key=True)
created_at = sql.Column(sql.DateTime, default=now, nullable=False)
uid = sql.Column(sql.Integer, sql.ForeignKey("user.id", ondelete="CASCADE"))
item_type = sql.Column(sql.String(length=name_length), nullable=False, index=True)
item_id = sql.Column(sql.Integer, nullable=False, index=True)
op = sql.Column(sql.Enum(AdminOperation), nullable=False)
log = sql.Column(sql.String(length=description_length))
user = relationship("User", uselist=False)
| 33.809524 | 88 | 0.647183 |
| true | true |
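Several models above expose two serializers so that secrets such as `executor_params` never reach non-admin responses (`QueryEngine.to_dict` omits them; `to_dict_admin` includes them). A framework-free sketch of that pattern (field names are borrowed from `QueryEngine`, but the class itself is illustrative):

```python
class EngineView:
    """Minimal stand-in showing the public/admin serializer split."""

    def __init__(self, id, name, executor, executor_params):
        self.id = id
        self.name = name
        self.executor = executor
        self.executor_params = executor_params  # sensitive: admin-only

    def to_dict(self):
        # Public view: no credentials or connection parameters.
        return {"id": self.id, "name": self.name, "executor": self.executor}

    def to_dict_admin(self):
        # Admin view: strict superset of the public fields.
        return {**self.to_dict(), "executor_params": self.executor_params}

engine = EngineView(1, "presto_prod", "sqlalchemy", {"connection_string": "..."})
print(sorted(engine.to_dict()))        # ['executor', 'id', 'name']
print("executor_params" in engine.to_dict_admin())  # True
```

Keeping the admin view a superset of the public one (built via `to_dict()`) avoids the two serializers drifting apart as fields are added.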
f720af70da7be7958d444ad2af3d0b7e0b2ef072 | 601 | py | Python | basicsortings/SelectionSort.py | ankushdecoded123/basicalgorithms | f8d42a57d7619ddb29fd6eae9e5f2db27ee5712c | [
"Apache-2.0"
] | null | null | null | basicsortings/SelectionSort.py | ankushdecoded123/basicalgorithms | f8d42a57d7619ddb29fd6eae9e5f2db27ee5712c | [
"Apache-2.0"
] | null | null | null | basicsortings/SelectionSort.py | ankushdecoded123/basicalgorithms | f8d42a57d7619ddb29fd6eae9e5f2db27ee5712c | [
"Apache-2.0"
] | null | null | null | # selectionsort() method
def selectionSort(arr):
arraySize = len(arr)
for i in range(arraySize):
min = i
for j in range(i+1, arraySize):
if arr[j] < arr[min]:
min = j
#swap values
arr[i], arr[min] = arr[min], arr[i]
# method to print an array
def printList(arr):
for i in range(len(arr)):
print(arr[i],end=" ")
print("\n")
# driver method
if __name__ == '__main__':
arr = [3,4,1,7,6,2,8]
print ("Given array: ", end="\n")
printList(arr)
selectionSort(arr)
print("Sorted array: ", end="\n")
    printList(arr) | 20.033333 | 39 | 0.55574 |
| true | true |
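A quick correctness check for the selection sort above: after pass i, positions 0..i hold the i+1 smallest elements, so the final array must equal the built-in `sorted` result. The function is restated here (with `min_index` instead of shadowing `min`) so the sketch runs standalone:

```python
def selection_sort(arr):
    """In-place selection sort; returns arr for convenience."""
    for i in range(len(arr)):
        min_index = i
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[min_index]:
                min_index = j
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

data = [3, 4, 1, 7, 6, 2, 8]
print(selection_sort(data[:]))  # [1, 2, 3, 4, 6, 7, 8]
```

Sorting a copy (`data[:]`) keeps the original list intact, which matters because the algorithm mutates its argument.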
f720b19536007d90852dfc1229d07fda01236456 | 2,189 | py | Python | functions/sample/python/main.py | aneeshmraj/agfzb-CloudAppDevelopment_Capstone | ed9b1a675a0c4325e56bf77ed4497a36d1755484 | [
"Apache-2.0"
] | null | null | null | functions/sample/python/main.py | aneeshmraj/agfzb-CloudAppDevelopment_Capstone | ed9b1a675a0c4325e56bf77ed4497a36d1755484 | [
"Apache-2.0"
] | null | null | null | functions/sample/python/main.py | aneeshmraj/agfzb-CloudAppDevelopment_Capstone | ed9b1a675a0c4325e56bf77ed4497a36d1755484 | [
"Apache-2.0"
] | null | null | null | #
#
# main() will be run when you invoke this action
#
# @param Cloud Functions actions accept a single parameter, which must be a JSON object.
#
# @return The output of this action, which must be a JSON object.
#
#
from cloudant.client import Cloudant
from cloudant.error import CloudantException
from cloudant.query import Query
from requests import ConnectionError, ReadTimeout, RequestException
import requests
import sys
def main(dict):
print(dict)
service = Cloudant.iam(None, dict["IAM_API_KEY"], url=dict["COUCH_URL"], connect=True)
db = service['reviews']
try:
selector = {'dealership': {'$eq':int(dict["dealerId"])}}
docs = db.get_query_result(selector)
reviews = []
for doc in docs:
reviews.append(doc)
return {"docs":reviews}
except CloudantException as ce:
print("Method failed")
print(" - status code: " + str(ce.code))
print(" - error message: " + ce.message)
except ConnectionError as cerr:
print("Connection error occurred:")
print(cerr)
except ReadTimeout as rt:
# The server did not send any data in the allotted amount of time.
print("Read timed out:")
print(rt)
except RequestException as re:
# Handle other request failures
print("Request Exception:")
print(re)
#add review
def main1(dict):
print(dict)
service = Cloudant.iam(None, dict["IAM_API_KEY"], url=dict["COUCH_URL"], connect=True)
db = service['reviews']
try:
# Create a document using the Database API
my_document = db.create_document(dict["review"])
# Check that the document exists in the database
if my_document.exists():
return {"text": "Review successfully added."}
except ConnectionError as cerr:
print("Connection error occurred:")
print(cerr)
except ReadTimeout as rt:
# The server did not send any data in the allotted amount of time.
print("Read timed out:")
print(rt)
except RequestException as re:
# Handle other request failures
print("Request Exception:")
print(re)
| 31.724638 | 90 | 0.643216 |
from cloudant.client import Cloudant
from cloudant.error import CloudantException
from cloudant.query import Query
from requests import ConnectionError, ReadTimeout, RequestException
import requests
import sys
def main(dict):
print(dict)
service = Cloudant.iam(None, dict["IAM_API_KEY"], url=dict["COUCH_URL"], connect=True)
db = service['reviews']
try:
selector = {'dealership': {'$eq':int(dict["dealerId"])}}
docs = db.get_query_result(selector)
reviews = []
for doc in docs:
reviews.append(doc)
return {"docs":reviews}
except CloudantException as ce:
print("Method failed")
print(" - status code: " + str(ce.code))
print(" - error message: " + ce.message)
except ConnectionError as cerr:
print("Connection error occurred:")
print(cerr)
except ReadTimeout as rt:
print("Read timed out:")
print(rt)
except RequestException as re:
print("Request Exception:")
print(re)
def main1(dict):
print(dict)
service = Cloudant.iam(None, dict["IAM_API_KEY"], url=dict["COUCH_URL"], connect=True)
db = service['reviews']
try:
my_document = db.create_document(dict["review"])
if my_document.exists():
return {"text": "Review successfully added."}
except ConnectionError as cerr:
print("Connection error occurred:")
print(cerr)
except ReadTimeout as rt:
print("Read timed out:")
print(rt)
except RequestException as re:
print("Request Exception:")
print(re)
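The `selector = {'dealership': {'$eq': int(dict["dealerId"])}}` dictionary in `main()` above is a Cloudant Query (Mango) selector; the filtering happens server-side. To illustrate what that selector shape matches, here is a pure-Python emulation — the `matches` helper and the sample `reviews` documents are illustrative only, not part of the `cloudant` library:

```python
def matches(selector, doc):
    # Emulates only the {'field': {'$eq': value}} selector shape used in main().
    return all(doc.get(field) == cond["$eq"] for field, cond in selector.items())

reviews = [
    {"_id": "r1", "dealership": 15, "review": "Great service"},
    {"_id": "r2", "dealership": 7,  "review": "Slow follow-up"},
    {"_id": "r3", "dealership": 15, "review": "Fair price"},
]
selector = {"dealership": {"$eq": 15}}
print([d["_id"] for d in reviews if matches(selector, d)])  # → ['r1', 'r3']
```

Against a real Cloudant instance, `db.get_query_result(selector)` returns the same subset lazily, page by page, which is why `main()` accumulates the documents into a list before returning them.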
| true | true |
f720b26349b04b5e0459f3b75168c72fe5c3ff77 | 5,299 | py | Python | src/OFS/tests/test_Uninstalled.py | rbanffy/Zope | ecf6770219052e7c7f8c9634ddf187a1e6280742 | [
"ZPL-2.1"
] | null | null | null | src/OFS/tests/test_Uninstalled.py | rbanffy/Zope | ecf6770219052e7c7f8c9634ddf187a1e6280742 | [
"ZPL-2.1"
] | 1 | 2020-11-11T07:11:31.000Z | 2020-11-11T07:11:31.000Z | src/OFS/tests/test_Uninstalled.py | rbanffy/Zope | ecf6770219052e7c7f8c9634ddf187a1e6280742 | [
"ZPL-2.1"
] | null | null | null | ##############################################################################
#
# Copyright (c) 2006 Zope Foundation and Contributors.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
import unittest
from OFS.SimpleItem import SimpleItem
from Testing.ZopeTestCase import base
class ToBreak(SimpleItem):
pass
class TestsOfBroken(unittest.TestCase):
"""Tests for the factory for "broken" classes.
"""
def setUp(self):
from OFS.Uninstalled import broken_klasses
from OFS.Uninstalled import broken_klasses_lock
self.broken_klasses_OLD = {}
broken_klasses_lock.acquire()
try:
self.broken_klasses_OLD.update(broken_klasses)
broken_klasses.clear()
finally:
broken_klasses_lock.release()
def tearDown(self):
from OFS.Uninstalled import broken_klasses
from OFS.Uninstalled import broken_klasses_lock
broken_klasses_lock.acquire()
try:
broken_klasses.clear()
broken_klasses.update(self.broken_klasses_OLD)
finally:
broken_klasses_lock.release()
def test_Broken_non_product_no_oid_yields_class_derived_from_Broken(self):
from OFS.Uninstalled import Broken
from OFS.Uninstalled import BrokenClass
klass = Broken(self, None, ('some.python.module', 'MyClass'))
self.assertTrue(issubclass(klass, BrokenClass))
self.assertEqual(klass.__name__, 'MyClass')
self.assertEqual(klass.__module__, 'some.python.module')
self.assertEqual(klass.product_name, 'unknown')
def test_Broken_product_no_oid_yields_class_derived_from_Broken(self):
from OFS.Uninstalled import Broken
from OFS.Uninstalled import BrokenClass
klass = Broken(self, None, ('Products.MyProduct.MyClass', 'MyClass'))
self.assertTrue(issubclass(klass, BrokenClass))
self.assertEqual(klass.__name__, 'MyClass')
self.assertEqual(klass.__module__, 'Products.MyProduct.MyClass')
self.assertEqual(klass.product_name, 'MyProduct')
def test_Broken_product_with_oid_yields_instance_derived_from_Broken(self):
from OFS.Uninstalled import Broken
from OFS.Uninstalled import BrokenClass
OID = '\x01' * 8
inst = Broken(self, OID, ('Products.MyProduct.MyClass', 'MyClass'))
self.assertIsInstance(inst, BrokenClass)
self.assertTrue(inst._p_jar is self)
self.assertEqual(inst._p_oid, OID)
klass = inst.__class__
self.assertEqual(klass.__name__, 'MyClass')
self.assertEqual(klass.__module__, 'Products.MyProduct.MyClass')
self.assertEqual(klass.product_name, 'MyProduct')
def test_Broken_instance___getattr___allows_persistence_attrs(self):
from OFS.Uninstalled import Broken
OID = '\x01' * 8
PERSISTENCE_ATTRS = ["_p_changed",
"_p_jar",
"_p_mtime",
"_p_oid",
"_p_serial",
"_p_state"]
PERSISTENCE_METHODS = ["_p_deactivate",
"_p_activate",
"_p_invalidate",
"_p_getattr",
"_p_setattr",
"_p_delattr"]
inst = Broken(self, OID, ('Products.MyProduct.MyClass', 'MyClass'))
for attr_name in PERSISTENCE_ATTRS:
getattr(inst, attr_name) # doesn't raise
for meth_name in PERSISTENCE_METHODS:
getattr(inst, meth_name) # doesn't raise
class TestsIntegratedBroken(base.TestCase):
def test_Broken_instance___getstate___gives_access_to_its_state(self):
from Acquisition import aq_base
from OFS.Uninstalled import BrokenClass
from OFS.tests import test_Uninstalled
import transaction
# store an instance
tr = ToBreak()
tr.id = 'tr'
self.app._setObject('tr', tr)
# commit to allow access in another connection
transaction.commit()
# remove class from namespace to ensure broken object
del test_Uninstalled.ToBreak
# get new connection that will give access to broken object
app = base.app()
inst = aq_base(app.tr)
self.assertIsInstance(inst, BrokenClass)
state = inst.__getstate__()
self.assertEqual(state, {'id': 'tr'})
# cleanup
app.manage_delObjects('tr')
transaction.commit()
# check that object is not left over
app = base.app()
self.assertFalse('tr' in app.objectIds())
def test_suite():
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(TestsOfBroken))
suite.addTest(unittest.makeSuite(TestsIntegratedBroken))
return suite
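The tests above exercise a factory that manufactures placeholder classes for persistent objects whose original class can no longer be imported. The core mechanism is dynamic class creation with `type()` plus a per-(module, name) cache; a minimal, framework-free sketch (the `make_broken` helper is illustrative, not Zope's actual `OFS.Uninstalled.Broken` implementation):

```python
class BrokenBase:
    """Placeholder base for instances whose real class is unavailable."""
    def __repr__(self):
        return "<broken %s.%s>" % (self.__class__.__module__,
                                   self.__class__.__name__)

_cache = {}

def make_broken(module, name):
    # Cache one synthetic class per (module, name) pair, as the real factory does.
    key = (module, name)
    if key not in _cache:
        klass = type(name, (BrokenBase,), {"__module__": module})
        # Mimic product_name extraction: 'Products.Foo.Bar' -> 'Foo'.
        parts = module.split(".")
        klass.product_name = (parts[1] if parts[0] == "Products" and len(parts) > 1
                              else "unknown")
        _cache[key] = klass
    return _cache[key]

MyClass = make_broken("Products.MyProduct.MyClass", "MyClass")
print(MyClass.__name__, MyClass.product_name)                       # MyClass MyProduct
print(make_broken("Products.MyProduct.MyClass", "MyClass") is MyClass)  # True
```

Caching is what lets two broken instances of the same missing class share a class object, mirroring the `broken_klasses` dictionary the test's `setUp`/`tearDown` save and restore.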
| 35.804054 | 79 | 0.627854 | true | true | |
f720b26c972d7bdb64501599e3be9253fa24774d | 18,722 | py | Python | python/ray/ml/tests/test_preprocessors.py | siddgoel/ray | 7f3031f451de410b71a5fcb18e04452bfa7351d6 | [
"Apache-2.0"
] | 22 | 2018-05-08T05:52:34.000Z | 2020-04-01T10:09:55.000Z | python/ray/ml/tests/test_preprocessors.py | siddgoel/ray | 7f3031f451de410b71a5fcb18e04452bfa7351d6 | [
"Apache-2.0"
] | 51 | 2018-05-17T05:55:28.000Z | 2020-03-18T06:49:49.000Z | python/ray/ml/tests/test_preprocessors.py | siddgoel/ray | 7f3031f451de410b71a5fcb18e04452bfa7351d6 | [
"Apache-2.0"
] | 10 | 2018-04-27T10:50:59.000Z | 2020-02-24T02:41:43.000Z | import warnings
from unittest.mock import patch
import numpy as np
import pandas as pd
import pytest
import ray
from ray.ml.preprocessor import PreprocessorNotFittedException
from ray.ml.preprocessors import (
BatchMapper,
StandardScaler,
MinMaxScaler,
OrdinalEncoder,
OneHotEncoder,
LabelEncoder,
SimpleImputer,
Chain,
)
def test_standard_scaler():
"""Tests basic StandardScaler functionality."""
col_a = [-1, 0, 1, 2]
col_b = [1, 1, 5, 5]
col_c = [1, 1, 1, None]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
scaler = StandardScaler(["B", "C"])
# Transform with unfitted preprocessor.
with pytest.raises(PreprocessorNotFittedException):
scaler.transform(ds)
# Fit data.
scaler.fit(ds)
assert scaler.stats_ == {
"mean(B)": 3.0,
"mean(C)": 1.0,
"std(B)": 2.0,
"std(C)": 0.0,
}
# Transform data.
transformed = scaler.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = col_a
processed_col_b = [-1.0, -1.0, 1.0, 1.0]
processed_col_c = [0.0, 0.0, 0.0, None]
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
# Transform batch.
pred_col_a = [1, 2, 3]
pred_col_b = [3, 5, 7]
pred_col_c = [0, 1, 2]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = scaler.transform_batch(pred_in_df)
pred_processed_col_a = pred_col_a
pred_processed_col_b = [0.0, 1.0, 2.0]
pred_processed_col_c = [-1.0, 0.0, 1.0]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
@patch.object(warnings, "warn")
def test_fit_twice(mocked_warn):
"""Tests that a warning msg should be printed."""
col_a = [-1, 0, 1]
col_b = [1, 3, 5]
col_c = [1, 1, None]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
scaler = MinMaxScaler(["B", "C"])
# Fit data.
scaler.fit(ds)
assert scaler.stats_ == {"min(B)": 1, "max(B)": 5, "min(C)": 1, "max(C)": 1}
ds = ds.map_batches(lambda x: x * 2)
# Fit again
scaler.fit(ds)
# Assert that the fitted state is corresponding to the second ds.
assert scaler.stats_ == {"min(B)": 2, "max(B)": 10, "min(C)": 2, "max(C)": 2}
msg = (
"`fit` has already been called on the preprocessor (or at least one "
"contained preprocessors if this is a chain). "
"All previously fitted state will be overwritten!"
)
mocked_warn.assert_called_once_with(msg)
def test_min_max_scaler():
"""Tests basic MinMaxScaler functionality."""
col_a = [-1, 0, 1]
col_b = [1, 3, 5]
col_c = [1, 1, None]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
scaler = MinMaxScaler(["B", "C"])
# Transform with unfitted preprocessor.
with pytest.raises(PreprocessorNotFittedException):
scaler.transform(ds)
# Fit data.
scaler.fit(ds)
assert scaler.stats_ == {"min(B)": 1, "max(B)": 5, "min(C)": 1, "max(C)": 1}
transformed = scaler.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = col_a
processed_col_b = [0.0, 0.5, 1.0]
processed_col_c = [0.0, 0.0, None]
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
# Transform batch.
pred_col_a = [1, 2, 3]
pred_col_b = [3, 5, 7]
pred_col_c = [0, 1, 2]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = scaler.transform_batch(pred_in_df)
pred_processed_col_a = pred_col_a
pred_processed_col_b = [0.5, 1.0, 1.5]
pred_processed_col_c = [-1.0, 0.0, 1.0]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
def test_ordinal_encoder():
"""Tests basic OrdinalEncoder functionality."""
col_a = ["red", "green", "blue", "red"]
col_b = ["warm", "cold", "hot", "cold"]
col_c = [1, 10, 5, 10]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
encoder = OrdinalEncoder(["B", "C"])
# Transform with unfitted preprocessor.
with pytest.raises(PreprocessorNotFittedException):
encoder.transform(ds)
# Fit data.
encoder.fit(ds)
assert encoder.stats_ == {
"unique_values(B)": {"cold": 0, "hot": 1, "warm": 2},
"unique_values(C)": {1: 0, 5: 1, 10: 2},
}
# Transform data.
transformed = encoder.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = col_a
processed_col_b = [2, 0, 1, 0]
processed_col_c = [0, 2, 1, 2]
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
# Transform batch.
pred_col_a = ["blue", "yellow", None]
pred_col_b = ["cold", "warm", "other"]
pred_col_c = [10, 1, 20]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = encoder.transform_batch(pred_in_df)
pred_processed_col_a = pred_col_a
pred_processed_col_b = [0, 2, None]
pred_processed_col_c = [2, 0, None]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
# Test null behavior.
null_col = [1, None]
nonnull_col = [1, 1]
null_df = pd.DataFrame.from_dict({"A": null_col})
null_ds = ray.data.from_pandas(null_df)
nonnull_df = pd.DataFrame.from_dict({"A": nonnull_col})
nonnull_ds = ray.data.from_pandas(nonnull_df)
null_encoder = OrdinalEncoder(["A"])
# Verify fit fails for null values.
with pytest.raises(ValueError):
null_encoder.fit(null_ds)
null_encoder.fit(nonnull_ds)
# Verify transform fails for null values.
with pytest.raises(ValueError):
null_encoder.transform(null_ds)
null_encoder.transform(nonnull_ds)
# Verify transform_batch fails for null values.
with pytest.raises(ValueError):
null_encoder.transform_batch(null_df)
null_encoder.transform_batch(nonnull_df)
def test_one_hot_encoder():
"""Tests basic OneHotEncoder functionality."""
col_a = ["red", "green", "blue", "red"]
col_b = ["warm", "cold", "hot", "cold"]
col_c = [1, 10, 5, 10]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
encoder = OneHotEncoder(["B", "C"])
# Transform with unfitted preprocessor.
with pytest.raises(PreprocessorNotFittedException):
encoder.transform(ds)
# Fit data.
encoder.fit(ds)
assert encoder.stats_ == {
"unique_values(B)": {"cold": 0, "hot": 1, "warm": 2},
"unique_values(C)": {1: 0, 5: 1, 10: 2},
}
# Transform data.
transformed = encoder.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = col_a
processed_col_b_cold = [0, 1, 0, 1]
processed_col_b_hot = [0, 0, 1, 0]
processed_col_b_warm = [1, 0, 0, 0]
processed_col_c_1 = [1, 0, 0, 0]
processed_col_c_5 = [0, 0, 1, 0]
processed_col_c_10 = [0, 1, 0, 1]
expected_df = pd.DataFrame.from_dict(
{
"A": processed_col_a,
"B_cold": processed_col_b_cold,
"B_hot": processed_col_b_hot,
"B_warm": processed_col_b_warm,
"C_1": processed_col_c_1,
"C_5": processed_col_c_5,
"C_10": processed_col_c_10,
}
)
assert out_df.equals(expected_df)
# Transform batch.
pred_col_a = ["blue", "yellow", None]
pred_col_b = ["cold", "warm", "other"]
pred_col_c = [10, 1, 20]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = encoder.transform_batch(pred_in_df)
pred_processed_col_a = ["blue", "yellow", None]
pred_processed_col_b_cold = [1, 0, 0]
pred_processed_col_b_hot = [0, 0, 0]
pred_processed_col_b_warm = [0, 1, 0]
pred_processed_col_c_1 = [0, 1, 0]
pred_processed_col_c_5 = [0, 0, 0]
pred_processed_col_c_10 = [1, 0, 0]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B_cold": pred_processed_col_b_cold,
"B_hot": pred_processed_col_b_hot,
"B_warm": pred_processed_col_b_warm,
"C_1": pred_processed_col_c_1,
"C_5": pred_processed_col_c_5,
"C_10": pred_processed_col_c_10,
}
)
assert pred_out_df.equals(pred_expected_df)
# Test null behavior.
null_col = [1, None]
nonnull_col = [1, 1]
null_df = pd.DataFrame.from_dict({"A": null_col})
null_ds = ray.data.from_pandas(null_df)
nonnull_df = pd.DataFrame.from_dict({"A": nonnull_col})
nonnull_ds = ray.data.from_pandas(nonnull_df)
null_encoder = OneHotEncoder(["A"])
# Verify fit fails for null values.
with pytest.raises(ValueError):
null_encoder.fit(null_ds)
null_encoder.fit(nonnull_ds)
# Verify transform fails for null values.
with pytest.raises(ValueError):
null_encoder.transform(null_ds)
null_encoder.transform(nonnull_ds)
# Verify transform_batch fails for null values.
with pytest.raises(ValueError):
null_encoder.transform_batch(null_df)
null_encoder.transform_batch(nonnull_df)
def test_label_encoder():
"""Tests basic LabelEncoder functionality."""
col_a = ["red", "green", "blue", "red"]
col_b = ["warm", "cold", "cold", "hot"]
col_c = [1, 2, 3, 4]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
encoder = LabelEncoder("A")
# Transform with unfitted preprocessor.
with pytest.raises(PreprocessorNotFittedException):
encoder.transform(ds)
# Fit data.
encoder.fit(ds)
assert encoder.stats_ == {"unique_values(A)": {"blue": 0, "green": 1, "red": 2}}
# Transform data.
transformed = encoder.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = [2, 1, 0, 2]
processed_col_b = col_b
processed_col_c = col_c
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
# Transform batch.
pred_col_a = ["blue", "red", "yellow"]
pred_col_b = ["cold", "unknown", None]
pred_col_c = [10, 20, None]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = encoder.transform_batch(pred_in_df)
pred_processed_col_a = [0, 2, None]
pred_processed_col_b = pred_col_b
pred_processed_col_c = pred_col_c
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
# Test null behavior.
null_col = [1, None]
nonnull_col = [1, 1]
null_df = pd.DataFrame.from_dict({"A": null_col})
null_ds = ray.data.from_pandas(null_df)
nonnull_df = pd.DataFrame.from_dict({"A": nonnull_col})
nonnull_ds = ray.data.from_pandas(nonnull_df)
null_encoder = LabelEncoder("A")
# Verify fit fails for null values.
with pytest.raises(ValueError):
null_encoder.fit(null_ds)
null_encoder.fit(nonnull_ds)
# Verify transform fails for null values.
with pytest.raises(ValueError):
null_encoder.transform(null_ds)
null_encoder.transform(nonnull_ds)
# Verify transform_batch fails for null values.
with pytest.raises(ValueError):
null_encoder.transform_batch(null_df)
null_encoder.transform_batch(nonnull_df)
def test_simple_imputer():
col_a = [1, 1, 1, np.nan]
col_b = [1, 3, None, np.nan]
col_c = [1, 1, 1, 1]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
imputer = SimpleImputer(["B", "C"])
# Transform with unfitted preprocessor.
with pytest.raises(PreprocessorNotFittedException):
imputer.transform(ds)
# Fit data.
imputer.fit(ds)
assert imputer.stats_ == {"mean(B)": 2.0, "mean(C)": 1.0}
# Transform data.
transformed = imputer.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = col_a
processed_col_b = [1.0, 3.0, 2.0, 2.0]
processed_col_c = [1, 1, 1, 1]
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
# Transform batch.
pred_col_a = [1, 2, np.nan]
pred_col_b = [1, 2, np.nan]
pred_col_c = [None, None, None]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = imputer.transform_batch(pred_in_df)
pred_processed_col_a = pred_col_a
pred_processed_col_b = [1.0, 2.0, 2.0]
pred_processed_col_c = [1.0, 1.0, 1.0]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
# Test "most_frequent" strategy.
most_frequent_col_a = [1, 2, 2, None, None, None]
most_frequent_col_b = [None, "c", "c", "b", "b", "a"]
most_frequent_df = pd.DataFrame.from_dict(
{"A": most_frequent_col_a, "B": most_frequent_col_b}
)
most_frequent_ds = ray.data.from_pandas(most_frequent_df).repartition(3)
most_frequent_imputer = SimpleImputer(["A", "B"], strategy="most_frequent")
most_frequent_imputer.fit(most_frequent_ds)
assert most_frequent_imputer.stats_ == {
"most_frequent(A)": 2.0,
"most_frequent(B)": "c",
}
most_frequent_transformed = most_frequent_imputer.transform(most_frequent_ds)
most_frequent_out_df = most_frequent_transformed.to_pandas()
most_frequent_processed_col_a = [1.0, 2.0, 2.0, 2.0, 2.0, 2.0]
most_frequent_processed_col_b = ["c", "c", "c", "b", "b", "a"]
most_frequent_expected_df = pd.DataFrame.from_dict(
{"A": most_frequent_processed_col_a, "B": most_frequent_processed_col_b}
)
assert most_frequent_out_df.equals(most_frequent_expected_df)
# Test "constant" strategy.
constant_col_a = ["apple", None]
constant_df = pd.DataFrame.from_dict({"A": constant_col_a})
constant_ds = ray.data.from_pandas(constant_df)
with pytest.raises(ValueError):
SimpleImputer(["A"], strategy="constant")
constant_imputer = SimpleImputer(
["A", "B"], strategy="constant", fill_value="missing"
)
constant_transformed = constant_imputer.transform(constant_ds)
constant_out_df = constant_transformed.to_pandas()
constant_processed_col_a = ["apple", "missing"]
constant_expected_df = pd.DataFrame.from_dict({"A": constant_processed_col_a})
assert constant_out_df.equals(constant_expected_df)
def test_chain():
"""Tests basic Chain functionality."""
col_a = [-1, -1, 1, 1]
col_b = [1, 1, 1, None]
col_c = ["sunday", "monday", "tuesday", "tuesday"]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
def udf(df):
df["A"] *= 2
return df
batch_mapper = BatchMapper(fn=udf)
imputer = SimpleImputer(["B"])
scaler = StandardScaler(["A", "B"])
encoder = LabelEncoder("C")
chain = Chain(scaler, imputer, encoder, batch_mapper)
# Fit data.
chain.fit(ds)
assert imputer.stats_ == {
"mean(B)": 0.0,
}
assert scaler.stats_ == {
"mean(A)": 0.0,
"mean(B)": 1.0,
"std(A)": 1.0,
"std(B)": 0.0,
}
assert encoder.stats_ == {
"unique_values(C)": {"monday": 0, "sunday": 1, "tuesday": 2}
}
# Transform data.
transformed = chain.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = [-2.0, -2.0, 2.0, 2.0]
processed_col_b = [0.0, 0.0, 0.0, 0.0]
processed_col_c = [1, 0, 2, 2]
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
# Transform batch.
pred_col_a = [1, 2, None]
pred_col_b = [0, None, 2]
pred_col_c = ["monday", "tuesday", "wednesday"]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = chain.transform_batch(pred_in_df)
pred_processed_col_a = [2, 4, None]
pred_processed_col_b = [-1.0, 0.0, 1.0]
pred_processed_col_c = [0, 2, None]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
def test_batch_mapper():
"""Tests batch mapper functionality."""
old_column = [1, 2, 3, 4]
to_be_modified = [1, -1, 1, -1]
in_df = pd.DataFrame.from_dict(
{"old_column": old_column, "to_be_modified": to_be_modified}
)
ds = ray.data.from_pandas(in_df)
def add_and_modify_udf(df: "pd.DataFrame"):
df["new_col"] = df["old_column"] + 1
df["to_be_modified"] *= 2
return df
batch_mapper = BatchMapper(fn=add_and_modify_udf)
batch_mapper.fit(ds)
transformed = batch_mapper.transform(ds)
out_df = transformed.to_pandas()
expected_df = pd.DataFrame.from_dict(
{
"old_column": old_column,
"to_be_modified": [2, -2, 2, -2],
"new_col": [2, 3, 4, 5],
}
)
assert out_df.equals(expected_df)
if __name__ == "__main__":
import sys
sys.exit(pytest.main(["-sv", __file__]))
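The stats asserted in `test_standard_scaler` above imply population (not sample) statistics, null values skipped during fitting, and zero-variance columns mapped to 0.0 instead of dividing by zero. A dependency-free sketch of that contract (the helper names are illustrative, not part of Ray's API):

```python
import math

def fit_standard_scaler(column):
    # Population mean/std over non-null values, matching the asserted stats_.
    vals = [v for v in column if v is not None]
    mean = sum(vals) / len(vals)
    std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    return mean, std

def transform(column, mean, std):
    # Zero-variance columns map every value to 0.0; nulls pass through.
    return [None if v is None else ((v - mean) / std if std else 0.0)
            for v in column]

B = [1, 1, 5, 5]
C = [1, 1, 1, None]
mB, sB = fit_standard_scaler(B)   # (3.0, 2.0)
mC, sC = fit_standard_scaler(C)   # (1.0, 0.0)
print(transform(B, mB, sB))       # [-1.0, -1.0, 1.0, 1.0]
print(transform(C, mC, sC))       # [0.0, 0.0, 0.0, None]
```

These outputs match `processed_col_b` and `processed_col_c` in the test, which is a useful cross-check when porting the preprocessor's behavior to another stack.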
| 29.670365 | 84 | 0.626429 | import warnings
from unittest.mock import patch
import numpy as np
import pandas as pd
import pytest
import ray
from ray.ml.preprocessor import PreprocessorNotFittedException
from ray.ml.preprocessors import (
BatchMapper,
StandardScaler,
MinMaxScaler,
OrdinalEncoder,
OneHotEncoder,
LabelEncoder,
SimpleImputer,
Chain,
)
def test_standard_scaler():
col_a = [-1, 0, 1, 2]
col_b = [1, 1, 5, 5]
col_c = [1, 1, 1, None]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
scaler = StandardScaler(["B", "C"])
with pytest.raises(PreprocessorNotFittedException):
scaler.transform(ds)
scaler.fit(ds)
assert scaler.stats_ == {
"mean(B)": 3.0,
"mean(C)": 1.0,
"std(B)": 2.0,
"std(C)": 0.0,
}
transformed = scaler.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = col_a
processed_col_b = [-1.0, -1.0, 1.0, 1.0]
processed_col_c = [0.0, 0.0, 0.0, None]
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
pred_col_a = [1, 2, 3]
pred_col_b = [3, 5, 7]
pred_col_c = [0, 1, 2]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = scaler.transform_batch(pred_in_df)
pred_processed_col_a = pred_col_a
pred_processed_col_b = [0.0, 1.0, 2.0]
pred_processed_col_c = [-1.0, 0.0, 1.0]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
@patch.object(warnings, "warn")
def test_fit_twice(mocked_warn):
col_a = [-1, 0, 1]
col_b = [1, 3, 5]
col_c = [1, 1, None]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
scaler = MinMaxScaler(["B", "C"])
scaler.fit(ds)
assert scaler.stats_ == {"min(B)": 1, "max(B)": 5, "min(C)": 1, "max(C)": 1}
ds = ds.map_batches(lambda x: x * 2)
scaler.fit(ds)
assert scaler.stats_ == {"min(B)": 2, "max(B)": 10, "min(C)": 2, "max(C)": 2}
msg = (
"`fit` has already been called on the preprocessor (or at least one "
"contained preprocessors if this is a chain). "
"All previously fitted state will be overwritten!"
)
mocked_warn.assert_called_once_with(msg)
def test_min_max_scaler():
col_a = [-1, 0, 1]
col_b = [1, 3, 5]
col_c = [1, 1, None]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
scaler = MinMaxScaler(["B", "C"])
with pytest.raises(PreprocessorNotFittedException):
scaler.transform(ds)
scaler.fit(ds)
assert scaler.stats_ == {"min(B)": 1, "max(B)": 5, "min(C)": 1, "max(C)": 1}
transformed = scaler.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = col_a
processed_col_b = [0.0, 0.5, 1.0]
processed_col_c = [0.0, 0.0, None]
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
pred_col_a = [1, 2, 3]
pred_col_b = [3, 5, 7]
pred_col_c = [0, 1, 2]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = scaler.transform_batch(pred_in_df)
pred_processed_col_a = pred_col_a
pred_processed_col_b = [0.5, 1.0, 1.5]
pred_processed_col_c = [-1.0, 0.0, 1.0]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
def test_ordinal_encoder():
col_a = ["red", "green", "blue", "red"]
col_b = ["warm", "cold", "hot", "cold"]
col_c = [1, 10, 5, 10]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
encoder = OrdinalEncoder(["B", "C"])
with pytest.raises(PreprocessorNotFittedException):
encoder.transform(ds)
encoder.fit(ds)
assert encoder.stats_ == {
"unique_values(B)": {"cold": 0, "hot": 1, "warm": 2},
"unique_values(C)": {1: 0, 5: 1, 10: 2},
}
transformed = encoder.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = col_a
processed_col_b = [2, 0, 1, 0]
processed_col_c = [0, 2, 1, 2]
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
pred_col_a = ["blue", "yellow", None]
pred_col_b = ["cold", "warm", "other"]
pred_col_c = [10, 1, 20]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = encoder.transform_batch(pred_in_df)
pred_processed_col_a = pred_col_a
pred_processed_col_b = [0, 2, None]
pred_processed_col_c = [2, 0, None]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
null_col = [1, None]
nonnull_col = [1, 1]
null_df = pd.DataFrame.from_dict({"A": null_col})
null_ds = ray.data.from_pandas(null_df)
nonnull_df = pd.DataFrame.from_dict({"A": nonnull_col})
nonnull_ds = ray.data.from_pandas(nonnull_df)
null_encoder = OrdinalEncoder(["A"])
with pytest.raises(ValueError):
null_encoder.fit(null_ds)
null_encoder.fit(nonnull_ds)
with pytest.raises(ValueError):
null_encoder.transform(null_ds)
null_encoder.transform(nonnull_ds)
with pytest.raises(ValueError):
null_encoder.transform_batch(null_df)
null_encoder.transform_batch(nonnull_df)
def test_one_hot_encoder():
col_a = ["red", "green", "blue", "red"]
col_b = ["warm", "cold", "hot", "cold"]
col_c = [1, 10, 5, 10]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
encoder = OneHotEncoder(["B", "C"])
with pytest.raises(PreprocessorNotFittedException):
encoder.transform(ds)
encoder.fit(ds)
assert encoder.stats_ == {
"unique_values(B)": {"cold": 0, "hot": 1, "warm": 2},
"unique_values(C)": {1: 0, 5: 1, 10: 2},
}
transformed = encoder.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = col_a
processed_col_b_cold = [0, 1, 0, 1]
processed_col_b_hot = [0, 0, 1, 0]
processed_col_b_warm = [1, 0, 0, 0]
processed_col_c_1 = [1, 0, 0, 0]
processed_col_c_5 = [0, 0, 1, 0]
processed_col_c_10 = [0, 1, 0, 1]
expected_df = pd.DataFrame.from_dict(
{
"A": processed_col_a,
"B_cold": processed_col_b_cold,
"B_hot": processed_col_b_hot,
"B_warm": processed_col_b_warm,
"C_1": processed_col_c_1,
"C_5": processed_col_c_5,
"C_10": processed_col_c_10,
}
)
assert out_df.equals(expected_df)
pred_col_a = ["blue", "yellow", None]
pred_col_b = ["cold", "warm", "other"]
pred_col_c = [10, 1, 20]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = encoder.transform_batch(pred_in_df)
pred_processed_col_a = ["blue", "yellow", None]
pred_processed_col_b_cold = [1, 0, 0]
pred_processed_col_b_hot = [0, 0, 0]
pred_processed_col_b_warm = [0, 1, 0]
pred_processed_col_c_1 = [0, 1, 0]
pred_processed_col_c_5 = [0, 0, 0]
pred_processed_col_c_10 = [1, 0, 0]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B_cold": pred_processed_col_b_cold,
"B_hot": pred_processed_col_b_hot,
"B_warm": pred_processed_col_b_warm,
"C_1": pred_processed_col_c_1,
"C_5": pred_processed_col_c_5,
"C_10": pred_processed_col_c_10,
}
)
assert pred_out_df.equals(pred_expected_df)
null_col = [1, None]
nonnull_col = [1, 1]
null_df = pd.DataFrame.from_dict({"A": null_col})
null_ds = ray.data.from_pandas(null_df)
nonnull_df = pd.DataFrame.from_dict({"A": nonnull_col})
nonnull_ds = ray.data.from_pandas(nonnull_df)
null_encoder = OneHotEncoder(["A"])
with pytest.raises(ValueError):
null_encoder.fit(null_ds)
null_encoder.fit(nonnull_ds)
with pytest.raises(ValueError):
null_encoder.transform(null_ds)
null_encoder.transform(nonnull_ds)
with pytest.raises(ValueError):
null_encoder.transform_batch(null_df)
null_encoder.transform_batch(nonnull_df)
def test_label_encoder():
col_a = ["red", "green", "blue", "red"]
col_b = ["warm", "cold", "cold", "hot"]
col_c = [1, 2, 3, 4]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
encoder = LabelEncoder("A")
with pytest.raises(PreprocessorNotFittedException):
encoder.transform(ds)
encoder.fit(ds)
assert encoder.stats_ == {"unique_values(A)": {"blue": 0, "green": 1, "red": 2}}
transformed = encoder.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = [2, 1, 0, 2]
processed_col_b = col_b
processed_col_c = col_c
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
pred_col_a = ["blue", "red", "yellow"]
pred_col_b = ["cold", "unknown", None]
pred_col_c = [10, 20, None]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = encoder.transform_batch(pred_in_df)
pred_processed_col_a = [0, 2, None]
pred_processed_col_b = pred_col_b
pred_processed_col_c = pred_col_c
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
null_col = [1, None]
nonnull_col = [1, 1]
null_df = pd.DataFrame.from_dict({"A": null_col})
null_ds = ray.data.from_pandas(null_df)
nonnull_df = pd.DataFrame.from_dict({"A": nonnull_col})
nonnull_ds = ray.data.from_pandas(nonnull_df)
null_encoder = LabelEncoder("A")
with pytest.raises(ValueError):
null_encoder.fit(null_ds)
null_encoder.fit(nonnull_ds)
with pytest.raises(ValueError):
null_encoder.transform(null_ds)
null_encoder.transform(nonnull_ds)
with pytest.raises(ValueError):
null_encoder.transform_batch(null_df)
null_encoder.transform_batch(nonnull_df)
def test_simple_imputer():
col_a = [1, 1, 1, np.nan]
col_b = [1, 3, None, np.nan]
col_c = [1, 1, 1, 1]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
imputer = SimpleImputer(["B", "C"])
with pytest.raises(PreprocessorNotFittedException):
imputer.transform(ds)
imputer.fit(ds)
assert imputer.stats_ == {"mean(B)": 2.0, "mean(C)": 1.0}
transformed = imputer.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = col_a
processed_col_b = [1.0, 3.0, 2.0, 2.0]
processed_col_c = [1, 1, 1, 1]
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
pred_col_a = [1, 2, np.nan]
pred_col_b = [1, 2, np.nan]
pred_col_c = [None, None, None]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = imputer.transform_batch(pred_in_df)
pred_processed_col_a = pred_col_a
pred_processed_col_b = [1.0, 2.0, 2.0]
pred_processed_col_c = [1.0, 1.0, 1.0]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
most_frequent_col_a = [1, 2, 2, None, None, None]
most_frequent_col_b = [None, "c", "c", "b", "b", "a"]
most_frequent_df = pd.DataFrame.from_dict(
{"A": most_frequent_col_a, "B": most_frequent_col_b}
)
most_frequent_ds = ray.data.from_pandas(most_frequent_df).repartition(3)
most_frequent_imputer = SimpleImputer(["A", "B"], strategy="most_frequent")
most_frequent_imputer.fit(most_frequent_ds)
assert most_frequent_imputer.stats_ == {
"most_frequent(A)": 2.0,
"most_frequent(B)": "c",
}
most_frequent_transformed = most_frequent_imputer.transform(most_frequent_ds)
most_frequent_out_df = most_frequent_transformed.to_pandas()
most_frequent_processed_col_a = [1.0, 2.0, 2.0, 2.0, 2.0, 2.0]
most_frequent_processed_col_b = ["c", "c", "c", "b", "b", "a"]
most_frequent_expected_df = pd.DataFrame.from_dict(
{"A": most_frequent_processed_col_a, "B": most_frequent_processed_col_b}
)
assert most_frequent_out_df.equals(most_frequent_expected_df)
constant_col_a = ["apple", None]
constant_df = pd.DataFrame.from_dict({"A": constant_col_a})
constant_ds = ray.data.from_pandas(constant_df)
with pytest.raises(ValueError):
SimpleImputer(["A"], strategy="constant")
constant_imputer = SimpleImputer(
["A", "B"], strategy="constant", fill_value="missing"
)
constant_transformed = constant_imputer.transform(constant_ds)
constant_out_df = constant_transformed.to_pandas()
constant_processed_col_a = ["apple", "missing"]
constant_expected_df = pd.DataFrame.from_dict({"A": constant_processed_col_a})
assert constant_out_df.equals(constant_expected_df)
def test_chain():
col_a = [-1, -1, 1, 1]
col_b = [1, 1, 1, None]
col_c = ["sunday", "monday", "tuesday", "tuesday"]
in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
ds = ray.data.from_pandas(in_df)
def udf(df):
df["A"] *= 2
return df
batch_mapper = BatchMapper(fn=udf)
imputer = SimpleImputer(["B"])
scaler = StandardScaler(["A", "B"])
encoder = LabelEncoder("C")
chain = Chain(scaler, imputer, encoder, batch_mapper)
chain.fit(ds)
assert imputer.stats_ == {
"mean(B)": 0.0,
}
assert scaler.stats_ == {
"mean(A)": 0.0,
"mean(B)": 1.0,
"std(A)": 1.0,
"std(B)": 0.0,
}
assert encoder.stats_ == {
"unique_values(C)": {"monday": 0, "sunday": 1, "tuesday": 2}
}
transformed = chain.transform(ds)
out_df = transformed.to_pandas()
processed_col_a = [-2.0, -2.0, 2.0, 2.0]
processed_col_b = [0.0, 0.0, 0.0, 0.0]
processed_col_c = [1, 0, 2, 2]
expected_df = pd.DataFrame.from_dict(
{"A": processed_col_a, "B": processed_col_b, "C": processed_col_c}
)
assert out_df.equals(expected_df)
pred_col_a = [1, 2, None]
pred_col_b = [0, None, 2]
pred_col_c = ["monday", "tuesday", "wednesday"]
pred_in_df = pd.DataFrame.from_dict(
{"A": pred_col_a, "B": pred_col_b, "C": pred_col_c}
)
pred_out_df = chain.transform_batch(pred_in_df)
pred_processed_col_a = [2, 4, None]
pred_processed_col_b = [-1.0, 0.0, 1.0]
pred_processed_col_c = [0, 2, None]
pred_expected_df = pd.DataFrame.from_dict(
{
"A": pred_processed_col_a,
"B": pred_processed_col_b,
"C": pred_processed_col_c,
}
)
assert pred_out_df.equals(pred_expected_df)
def test_batch_mapper():
old_column = [1, 2, 3, 4]
to_be_modified = [1, -1, 1, -1]
in_df = pd.DataFrame.from_dict(
{"old_column": old_column, "to_be_modified": to_be_modified}
)
ds = ray.data.from_pandas(in_df)
def add_and_modify_udf(df: "pd.DataFrame"):
df["new_col"] = df["old_column"] + 1
df["to_be_modified"] *= 2
return df
batch_mapper = BatchMapper(fn=add_and_modify_udf)
batch_mapper.fit(ds)
transformed = batch_mapper.transform(ds)
out_df = transformed.to_pandas()
expected_df = pd.DataFrame.from_dict(
{
"old_column": old_column,
"to_be_modified": [2, -2, 2, -2],
"new_col": [2, 3, 4, 5],
}
)
assert out_df.equals(expected_df)
if __name__ == "__main__":
import sys
sys.exit(pytest.main(["-sv", __file__]))
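The `stats_` dictionaries asserted in the encoder tests above are plain sorted-unique-value-to-index maps. A dependency-free sketch of that bookkeeping (this helper is illustrative, not Ray's actual implementation):

```python
def unique_value_stats(column_name, values):
    # Mirror the "unique_values(<col>)" stats asserted in the tests:
    # sorted distinct values, each mapped to its sort position.
    distinct = sorted(set(values))
    mapping = {value: index for index, value in enumerate(distinct)}
    return {"unique_values({})".format(column_name): mapping}

print(unique_value_stats("B", ["warm", "cold", "hot", "cold"]))
# {'unique_values(B)': {'cold': 0, 'hot': 1, 'warm': 2}}
```

This reproduces the mappings the tests expect, e.g. `{"cold": 0, "hot": 1, "warm": 2}` for column B and `{1: 0, 5: 1, 10: 2}` for column C.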
f720b26f06cbae99f40eb0f83633ea9c408ef321 | 5,737 | py | Python | astro/plugins/_core.py | Lightyagami788/Astro-UB | cb2d8c76064c474ffd507e38421509f51918520f | ["Apache-2.0"] | null | null | null | astro/plugins/_core.py | Lightyagami788/Astro-UB | cb2d8c76064c474ffd507e38421509f51918520f | ["Apache-2.0"] | null | null | null | astro/plugins/_core.py | Lightyagami788/Astro-UB | cb2d8c76064c474ffd507e38421509f51918520f | ["Apache-2.0"] | 1 | 2021-11-16T06:20:41.000Z | 2021-11-16T06:20:41.000Z |
import asyncio
import os
from datetime import datetime
from pathlib import Path
from telethon.tl.types import InputMessagesFilterDocument
from astro.config import Config
from astro import CMD_HELP
from astro.utils import admin_cmd, load_module, remove_plugin
NAME = Config.NAME
DELETE_TIMEOUT = 5
thumb_image_path = "./resources/astro.jpeg"
DEFAULTUSER = str(NAME) if NAME else "ASTRO USER"
@astro.on(admin_cmd(pattern=r"send (?P<shortname>\w+)", outgoing=True))
@astro.on(sudo_cmd(pattern=r"send (?P<shortname>\w+)", allow_sudo=True))
async def send(event):
ok = await eor(event, "Sending...")
if event.fwd_from:
return
hmm = bot.uid
message_id = event.message.id
thumb = thumb_image_path
input_str = event.pattern_match.group(1)
the_plugin_file = "./astro/plugins/{}.py".format(input_str)
if os.path.exists(the_plugin_file):
await ok.delete()
start = datetime.now()
pro = await event.client.send_file(
event.chat_id,
the_plugin_file,
force_document=True,
allow_cache=False,
thumb=thumb,
reply_to=message_id,
)
end = datetime.now()
        time_taken_in_secs = (end - start).seconds
await pro.edit(
f"**► Plugin Name:** `{input_str}`\n**► Uploaded by:** [{DEFAULTUSER}](tg://user?id={hmm})\n\n© @Astro_HelpChat"
)
await asyncio.sleep(DELETE_TIMEOUT)
else:
await ok.edit("**404**: `No Such Plugin!`")
@astro.on(admin_cmd(pattern="install"))
async def install(event):
if event.fwd_from:
return
if event.reply_to_msg_id:
try:
downloaded_file_name = (
await event.client.download_media( # pylint:disable=E0602
await event.get_reply_message(),
"astro/plugins/", # pylint:disable=E0602
)
)
if "(" not in downloaded_file_name:
path1 = Path(downloaded_file_name)
shortname = path1.stem
load_module(shortname.replace(".py", ""))
await event.edit(
                    "astro Successfully Installed The Plugin `{}`".format(
os.path.basename(downloaded_file_name)
)
)
else:
os.remove(downloaded_file_name)
await event.edit(
"**Error!**\nPlugin cannot be installed!\nMight have been pre-installed."
)
except Exception as e: # pylint:disable=C0103,W0703
await event.edit(str(e))
os.remove(downloaded_file_name)
await asyncio.sleep(DELETE_TIMEOUT)
await event.delete()
@astro.on(admin_cmd(pattern=r"unload (?P<shortname>\w+)$"))
async def unload(event):
if event.fwd_from:
return
shortname = event.pattern_match["shortname"]
try:
remove_plugin(shortname)
await event.edit(f"astro has successfully unloaded {shortname}")
except Exception as e:
        await event.edit(
            f"astro could not unload {shortname} because of the following error.\n{str(e)}"
        )
@astro.on(admin_cmd(pattern=r"load (?P<shortname>\w+)$"))
async def load(event):
if event.fwd_from:
return
shortname = event.pattern_match["shortname"]
try:
try:
remove_plugin(shortname)
except BaseException:
pass
load_module(shortname)
await event.edit(f"astro has successfully loaded {shortname}")
except Exception as e:
await event.edit(
f"astro could not load {shortname} because of the following error.\n{str(e)}"
)
@astro.on(admin_cmd(pattern=r"installall$"))
async def installall(event):
if event.fwd_from:
return
documentss = await event.client.get_messages(
event.chat_id, None, search=".py", filter=InputMessagesFilterDocument
)
total = int(documentss.total)
total_doxx = range(0, total)
b = await event.client.send_message(
event.chat_id,
f"**Installing {total} plugins...**\n`This msg will be deleted after the installation gets completed`",
)
text = "**Installing Plugins...**\n\n"
a = await event.client.send_message(event.chat_id, text)
if total == 0:
await a.edit("**No plugins to install.**")
await event.delete()
return
for ixo in total_doxx:
mxo = documentss[ixo].id
downloaded_file_name = await event.client.download_media(
await event.client.get_messages(event.chat_id, ids=mxo), "astro/plugins/"
)
if "(" not in downloaded_file_name:
path1 = Path(downloaded_file_name)
shortname = path1.stem
try:
load_module(shortname.replace(".py", ""))
text += f"**• Installed** `{(os.path.basename(downloaded_file_name))}` **successfully.**\n"
except BaseException:
text += f"**• Error installing** `{(os.path.basename(downloaded_file_name))}`\n"
else:
text += f"**• Plugin** `{(os.path.basename(downloaded_file_name))}` **already installed.**\n"
await a.edit(f"{text}\n**Installed every plugin.**")
await event.delete()
await b.delete()
CMD_HELP.update(
{
"core": ".load <plugin name>\nUse - Load the plugin.\
\n\n.unload <plugin name>\nUse - Unload the plugin.\
\n\n.install <reply to plugin file (.py)>\nUse - Install the plugin.\
\n\n.installall\nUse - Install all the plugins in the group/channel where it is used in.\
\n\n.send <plugin name>\nUse - Send the plugin."
}
)
f720b37866bd3fbd5203ac0faed5ee3a58cc01bc | 317 | py | Python | raspi/deskTimer.py | Itera/ariot2018 | e83adc8ac4e788df09fe412dd57ce3aca966b99a | ["MIT"] | null | null | null | raspi/deskTimer.py | Itera/ariot2018 | e83adc8ac4e788df09fe412dd57ce3aca966b99a | ["MIT"] | 1 | 2018-03-15T15:04:10.000Z | 2018-03-15T16:02:28.000Z | raspi/deskTimer.py | Itera/ariot2018 | e83adc8ac4e788df09fe412dd57ce3aca966b99a | ["MIT"] | null | null | null |
from threading import Timer
class DeskTimer(object):
current_timer = None
def start(self, time, callback, *args):
self.current_timer = Timer(time, callback, args)
self.current_timer.start()
def stop(self):
        if self.current_timer is not None:
self.current_timer.cancel()
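A self-contained usage sketch of the DeskTimer wrapper above; the 0.05-second delay and the list-append callback are illustrative placeholders:

```python
from threading import Timer

# Copy of the DeskTimer wrapper, kept self-contained for the example.
class DeskTimer(object):
    current_timer = None

    def start(self, time, callback, *args):
        self.current_timer = Timer(time, callback, args)
        self.current_timer.start()

    def stop(self):
        if self.current_timer is not None:
            self.current_timer.cancel()

fired = []
timer = DeskTimer()
timer.start(0.05, fired.append, "done")
timer.current_timer.join()  # Timer subclasses Thread, so join() waits.
print(fired)  # ['done']

# Calling stop() before the delay elapses cancels the pending callback.
canceled = DeskTimer()
canceled.start(10, fired.append, "late")
canceled.stop()
```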
f720b3c07515379b83fea8c011c643547f776843 | 19,885 | py | Python | perfkitbenchmarker/providers/rackspace/rackspace_virtual_machine.py | msidana/PerfKitBenchmarker | 2784642d3e6b20b3f474c4e27edb1ef163804f66 | ["Apache-2.0"] | 1 | 2018-08-28T19:33:21.000Z | 2018-08-28T19:33:21.000Z | perfkitbenchmarker/providers/rackspace/rackspace_virtual_machine.py | msidana/PerfKitBenchmarker | 2784642d3e6b20b3f474c4e27edb1ef163804f66 | ["Apache-2.0"] | null | null | null | perfkitbenchmarker/providers/rackspace/rackspace_virtual_machine.py | msidana/PerfKitBenchmarker | 2784642d3e6b20b3f474c4e27edb1ef163804f66 | ["Apache-2.0"] | null | null | null |
# Copyright 2015 PerfKitBenchmarker Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Class to represent a Rackspace Virtual Machine object.
Zones:
DFW (Dallas-Fort Worth)
IAD (Northern Virginia)
ORD (Chicago)
LON (London)
SYD (Sydney)
HKG (Hong Kong)
Machine Types:
run 'rack servers flavor list'
Images:
run 'rack servers image list'
All VM specifics are self-contained and the class provides methods to
operate on the VM: boot, shutdown, etc.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from collections import OrderedDict
import json
import logging
import re
import tempfile
from perfkitbenchmarker import errors
from perfkitbenchmarker import flags
from perfkitbenchmarker import linux_virtual_machine
from perfkitbenchmarker import virtual_machine
from perfkitbenchmarker import vm_util
from perfkitbenchmarker import providers
from perfkitbenchmarker.configs import option_decoders
from perfkitbenchmarker.providers.rackspace import rackspace_disk
from perfkitbenchmarker.providers.rackspace import rackspace_network
from perfkitbenchmarker.providers.rackspace import util
import six
from six.moves import range
from six.moves import zip
FLAGS = flags.FLAGS
CLOUD_CONFIG_TEMPLATE = '''#cloud-config
users:
- name: {0}
ssh-authorized-keys:
- {1}
sudo: ['ALL=(ALL) NOPASSWD:ALL']
groups: sudo
shell: /bin/bash
'''
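A quick sketch of how the cloud-config template above becomes user-data; the username and SSH key string below are placeholders, and the YAML indentation follows standard cloud-config layout:

```python
# Self-contained copy of the template for illustration only.
CLOUD_CONFIG_TEMPLATE = '''#cloud-config
users:
  - name: {0}
    ssh-authorized-keys:
      - {1}
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash
'''

# The VM code formats in the real user name and the contents of the
# SSH public-key file; these values are placeholders.
user_data = CLOUD_CONFIG_TEMPLATE.format('perfkit', 'ssh-rsa AAAAexamplekey')
print(user_data)
```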
BLOCK_DEVICE_TEMPLATE = '''
source-type=image,
source-id={0},
dest=volume,
size={1},
shutdown=remove,
bootindex=0
'''
LSBLK_REGEX = (r'NAME="(.*)"\s+MODEL="(.*)"\s+SIZE="(.*)"'
r'\s+TYPE="(.*)"\s+MOUNTPOINT="(.*)"\s+LABEL="(.*)"')
LSBLK_PATTERN = re.compile(LSBLK_REGEX)
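To illustrate the `lsblk` line format the pattern above expects, here is a sketch with a fabricated sample line (the device name and size are made up):

```python
import re

LSBLK_REGEX = (r'NAME="(.*)"\s+MODEL="(.*)"\s+SIZE="(.*)"'
               r'\s+TYPE="(.*)"\s+MOUNTPOINT="(.*)"\s+LABEL="(.*)"')

# Fabricated sample of `lsblk --pairs`-style output for illustration.
sample = 'NAME="xvdb" MODEL="" SIZE="50G" TYPE="disk" MOUNTPOINT="" LABEL=""'
match = re.match(LSBLK_REGEX, sample)
print(match.groups())  # ('xvdb', '', '50G', 'disk', '', '')
```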
UBUNTU_IMAGE = '09de0a66-3156-48b4-90a5-1cf25a905207'
RHEL_IMAGE = '92f8a8b8-6019-4c27-949b-cf9910b84ffb'
INSTANCE_EXISTS_STATUSES = frozenset(
['BUILD', 'ACTIVE', 'PAUSED', 'SHUTOFF', 'ERROR'])
INSTANCE_DELETED_STATUSES = frozenset(
['DELETED'])
INSTANCE_KNOWN_STATUSES = INSTANCE_EXISTS_STATUSES | INSTANCE_DELETED_STATUSES
REMOTE_BOOT_DISK_SIZE_GB = 50
def RenderBlockDeviceTemplate(image, volume_size):
"""Renders template used for the block-device flag in RackCLI.
Args:
image: string. Image ID of the source image.
volume_size: string. Size in GB of desired volume size.
Returns:
string value for block-device parameter used when creating a VM.
"""
blk_params = BLOCK_DEVICE_TEMPLATE.replace('\n', '').format(
image, str(volume_size))
return blk_params
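The rendering above just strips newlines from the template and substitutes the two fields. A self-contained sketch (the image ID below is a placeholder, not a real Rackspace image):

```python
# Copy of the template and renderer, kept self-contained for the example.
BLOCK_DEVICE_TEMPLATE = '''
source-type=image,
source-id={0},
dest=volume,
size={1},
shutdown=remove,
bootindex=0
'''

def RenderBlockDeviceTemplate(image, volume_size):
    # Collapse the multi-line template into the single comma-separated
    # value that the --block-device flag expects.
    return BLOCK_DEVICE_TEMPLATE.replace('\n', '').format(image, str(volume_size))

print(RenderBlockDeviceTemplate('my-image-id', 50))
# source-type=image,source-id=my-image-id,dest=volume,size=50,shutdown=remove,bootindex=0
```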
class RackspaceVmSpec(virtual_machine.BaseVmSpec):
"""Object containing the information needed to create a
RackspaceVirtualMachine.
Attributes:
project: None or string. Project ID, also known as Tenant ID
rackspace_region: None or string. Rackspace region to build VM resources.
rack_profile: None or string. Rack CLI profile configuration.
"""
CLOUD = providers.RACKSPACE
@classmethod
def _ApplyFlags(cls, config_values, flag_values):
"""Modifies config options based on runtime flag values.
Args:
config_values: dict mapping config option names to provided values. May
be modified by this function.
flag_values: flags.FlagValues. Runtime flags that may override the
provided config values.
"""
super(RackspaceVmSpec, cls)._ApplyFlags(config_values, flag_values)
if flag_values['project'].present:
config_values['project'] = flag_values.project
if flag_values['rackspace_region'].present:
config_values['rackspace_region'] = flag_values.rackspace_region
if flag_values['rack_profile'].present:
config_values['rack_profile'] = flag_values.rack_profile
@classmethod
def _GetOptionDecoderConstructions(cls):
"""Gets decoder classes and constructor args for each configurable option.
Returns:
dict. Maps option name string to a (ConfigOptionDecoder class, dict) pair.
The pair specifies a decoder class and its __init__() keyword
arguments to construct in order to decode the named option.
"""
result = super(RackspaceVmSpec, cls)._GetOptionDecoderConstructions()
result.update({
'project': (option_decoders.StringDecoder, {'default': None}),
'rackspace_region': (option_decoders.StringDecoder, {'default': 'IAD'}),
'rack_profile': (option_decoders.StringDecoder, {'default': None})})
return result
class RackspaceVirtualMachine(virtual_machine.BaseVirtualMachine):
"""Object representing a Rackspace Public Cloud Virtual Machine."""
CLOUD = providers.RACKSPACE
DEFAULT_IMAGE = None
def __init__(self, vm_spec):
"""Initialize a Rackspace Virtual Machine
Args:
vm_spec: virtual_machine.BaseVirtualMachineSpec object of the VM.
"""
super(RackspaceVirtualMachine, self).__init__(vm_spec)
self.boot_metadata = {}
self.boot_device = None
self.boot_disk_allocated = False
self.allocated_disks = set()
self.id = None
self.image = self.image or self.DEFAULT_IMAGE
self.region = vm_spec.rackspace_region
self.project = vm_spec.project
self.profile = vm_spec.rack_profile
# Isolated tenant networks are regional, not globally available.
# Security groups (firewalls) apply to a network, hence they are regional.
# TODO(meteorfox) Create tenant network if it doesn't exist in the region.
self.firewall = rackspace_network.RackspaceFirewall.GetFirewall()
def _CreateDependencies(self):
"""Create dependencies prior creating the VM."""
# TODO(meteorfox) Create security group (if applies)
self._UploadSSHPublicKey()
def _Create(self):
"""Creates a Rackspace VM instance and waits until it's ACTIVE."""
self._CreateInstance()
self._WaitForInstanceUntilActive()
@vm_util.Retry()
def _PostCreate(self):
"""Gets the VM's information."""
get_cmd = util.RackCLICommand(self, 'servers', 'instance', 'get')
get_cmd.flags['id'] = self.id
stdout, _, _ = get_cmd.Issue()
resp = json.loads(stdout)
self.internal_ip = resp['PrivateIPv4']
self.ip_address = resp['PublicIPv4']
self.AddMetadata(**self.vm_metadata)
def _Exists(self):
"""Returns true if the VM exists otherwise returns false."""
if self.id is None:
return False
get_cmd = util.RackCLICommand(self, 'servers', 'instance', 'get')
get_cmd.flags['id'] = self.id
stdout, _, _ = get_cmd.Issue(suppress_warning=True)
try:
resp = json.loads(stdout)
except ValueError:
return False
status = resp['Status']
return status in INSTANCE_EXISTS_STATUSES
def _Delete(self):
"""Deletes a Rackspace VM instance and waits until API returns 404."""
if self.id is None:
return
self._DeleteInstance()
self._WaitForInstanceUntilDeleted()
def _DeleteDependencies(self):
"""Deletes dependencies that were need for the VM after the VM has been
deleted."""
# TODO(meteorfox) Delete security group (if applies)
self._DeleteSSHPublicKey()
def _UploadSSHPublicKey(self):
"""Uploads SSH public key to the VM's region. 1 key per VM per Region."""
cmd = util.RackCLICommand(self, 'servers', 'keypair', 'upload')
cmd.flags = OrderedDict([
('name', self.name), ('file', self.ssh_public_key)])
cmd.Issue()
def _DeleteSSHPublicKey(self):
"""Deletes SSH public key used for a VM."""
cmd = util.RackCLICommand(self, 'servers', 'keypair', 'delete')
cmd.flags['name'] = self.name
cmd.Issue()
def _CreateInstance(self):
"""Generates and execute command for creating a Rackspace VM."""
with tempfile.NamedTemporaryFile(dir=vm_util.GetTempDir(),
prefix='user-data') as tf:
with open(self.ssh_public_key) as f:
public_key = f.read().rstrip('\n')
tf.write(CLOUD_CONFIG_TEMPLATE.format(self.user_name, public_key))
tf.flush()
create_cmd = self._GetCreateCommand(tf)
stdout, stderr, _ = create_cmd.Issue()
if stderr:
resp = json.loads(stderr)
raise errors.Error(''.join(
('Non-recoverable error has occurred: %s\n' % str(resp),
'Following command caused the error: %s' % repr(create_cmd),)))
resp = json.loads(stdout)
self.id = resp['ID']
def _GetCreateCommand(self, tf):
"""Generates RackCLI command for creating a Rackspace VM.
Args:
tf: file object containing cloud-config script.
Returns:
RackCLICommand containing RackCLI arguments to build a Rackspace VM.
"""
create_cmd = util.RackCLICommand(self, 'servers', 'instance', 'create')
create_cmd.flags['name'] = self.name
create_cmd.flags['keypair'] = self.name
create_cmd.flags['flavor-id'] = self.machine_type
if FLAGS.rackspace_boot_from_cbs_volume:
blk_flag = RenderBlockDeviceTemplate(self.image, REMOTE_BOOT_DISK_SIZE_GB)
create_cmd.flags['block-device'] = blk_flag
else:
create_cmd.flags['image-id'] = self.image
if FLAGS.rackspace_network_id is not None:
create_cmd.flags['networks'] = ','.join([
rackspace_network.PUBLIC_NET_ID, rackspace_network.SERVICE_NET_ID,
FLAGS.rackspace_network_id])
create_cmd.flags['user-data'] = tf.name
metadata = ['owner=%s' % FLAGS.owner]
for key, value in six.iteritems(self.boot_metadata):
metadata.append('%s=%s' % (key, value))
create_cmd.flags['metadata'] = ','.join(metadata)
return create_cmd
@vm_util.Retry(poll_interval=5, max_retries=720, log_errors=False,
retryable_exceptions=(errors.Resource.RetryableCreationError,))
def _WaitForInstanceUntilActive(self):
"""Waits until instance achieves non-transient state."""
get_cmd = util.RackCLICommand(self, 'servers', 'instance', 'get')
get_cmd.flags['id'] = self.id
stdout, stderr, _ = get_cmd.Issue()
if stdout:
instance = json.loads(stdout)
if instance['Status'] == 'ACTIVE':
logging.info('VM: %s is up and running.' % self.name)
return
elif instance['Status'] == 'ERROR':
logging.error('VM: %s failed to boot.' % self.name)
raise errors.VirtualMachine.VmStateError()
raise errors.Resource.RetryableCreationError(
'VM: %s is not running. Retrying to check status.' % self.name)
def _DeleteInstance(self):
"""Executes delete command for removing a Rackspace VM."""
cmd = util.RackCLICommand(self, 'servers', 'instance', 'delete')
cmd.flags['id'] = self.id
stdout, _, _ = cmd.Issue(suppress_warning=True)
resp = json.loads(stdout)
if 'result' not in resp or 'Deleting' not in resp['result']:
raise errors.Resource.RetryableDeletionError()
@vm_util.Retry(poll_interval=5, max_retries=-1, timeout=300,
log_errors=False,
retryable_exceptions=(errors.Resource.RetryableDeletionError,))
def _WaitForInstanceUntilDeleted(self):
"""Waits until instance has been fully removed, or deleted."""
get_cmd = util.RackCLICommand(self, 'servers', 'instance', 'get')
get_cmd.flags['id'] = self.id
stdout, stderr, _ = get_cmd.Issue()
if stderr:
resp = json.loads(stderr)
if 'error' in resp and "couldn't find" in resp['error']:
logging.info('VM: %s has been successfully deleted.' % self.name)
return
instance = json.loads(stdout)
if instance['Status'] == 'ERROR':
logging.error('VM: %s failed to delete.' % self.name)
raise errors.VirtualMachine.VmStateError()
if instance['Status'] == 'DELETED':
logging.info('VM: %s has been successfully deleted.' % self.name)
else:
raise errors.Resource.RetryableDeletionError(
'VM: %s has not been deleted. Retrying to check status.' % self.name)
def AddMetadata(self, **kwargs):
"""Adds metadata to the VM via RackCLI update-metadata command."""
if not kwargs:
return
cmd = util.RackCLICommand(self, 'servers', 'instance', 'update-metadata')
cmd.flags['id'] = self.id
cmd.flags['metadata'] = ','.join('{0}={1}'.format(key, value)
for key, value in six.iteritems(kwargs))
cmd.Issue()
def OnStartup(self):
"""Executes commands on the VM immediately after it has booted."""
super(RackspaceVirtualMachine, self).OnStartup()
self.boot_device = self._GetBootDevice()
def CreateScratchDisk(self, disk_spec):
"""Creates a VM's scratch disk that will be used for a benchmark.
Given a data_disk_type it will either create a corresponding Disk object,
or raise an error that such data disk type is not supported.
Args:
disk_spec: virtual_machine.BaseDiskSpec object of the disk.
Raises:
errors.Error indicating that the requested 'data_disk_type' is
not supported.
"""
if disk_spec.disk_type == rackspace_disk.BOOT: # Ignore num_striped_disks
self._AllocateBootDisk(disk_spec)
elif disk_spec.disk_type == rackspace_disk.LOCAL:
self._AllocateLocalDisks(disk_spec)
elif disk_spec.disk_type in rackspace_disk.REMOTE_TYPES:
self._AllocateRemoteDisks(disk_spec)
else:
raise errors.Error('Unsupported data disk type: %s' % disk_spec.disk_type)
def _AllocateBootDisk(self, disk_spec):
"""Allocate the VM's boot, or system, disk as the scratch disk.
Boot disk can only be allocated once. If multiple data disks are required
it will raise an error.
Args:
disk_spec: virtual_machine.BaseDiskSpec object of the disk.
Raises:
errors.Error when boot disk has already been allocated as a data disk.
"""
if self.boot_disk_allocated:
raise errors.Error('Only one boot disk can be created per VM')
device_path = '/dev/%s' % self.boot_device['name']
scratch_disk = rackspace_disk.RackspaceBootDisk(
disk_spec, self.zone, self.project, device_path, self.image)
self.boot_disk_allocated = True
self.scratch_disks.append(scratch_disk)
scratch_disk.Create()
path = disk_spec.mount_point
mk_cmd = 'sudo mkdir -p {0}; sudo chown -R $USER:$USER {0};'.format(path)
self.RemoteCommand(mk_cmd)
def _AllocateLocalDisks(self, disk_spec):
"""Allocate the VM's local disks (included with the VM), as a data disk(s).
A local disk can only be allocated once per data disk.
Args:
disk_spec: virtual_machine.BaseDiskSpec object of the disk.
"""
block_devices = self._GetBlockDevices()
free_blk_devices = self._GetFreeBlockDevices(block_devices, disk_spec)
disks = []
for i in range(disk_spec.num_striped_disks):
local_device = free_blk_devices[i]
disk_name = '%s-local-disk-%d' % (self.name, i)
device_path = '/dev/%s' % local_device['name']
local_disk = rackspace_disk.RackspaceLocalDisk(
disk_spec, disk_name, self.zone, self.project, device_path)
self.allocated_disks.add(local_disk)
disks.append(local_disk)
self._CreateScratchDiskFromDisks(disk_spec, disks)
def _AllocateRemoteDisks(self, disk_spec):
    """Creates and allocates Rackspace Cloud Block Storage volumes
    as data disks.
Args:
disk_spec: virtual_machine.BaseDiskSpec object of the disk.
"""
scratch_disks = []
for disk_num in range(disk_spec.num_striped_disks):
volume_name = '%s-volume-%d' % (self.name, disk_num)
scratch_disk = rackspace_disk.RackspaceRemoteDisk(
disk_spec, volume_name, self.zone, self.project,
media=disk_spec.disk_type)
scratch_disks.append(scratch_disk)
self._CreateScratchDiskFromDisks(disk_spec, scratch_disks)
def _GetFreeBlockDevices(self, block_devices, disk_spec):
    """Returns available block devices that are not in use as a data disk or
    as a boot disk.
Args:
block_devices: list of dict containing information about all block devices
in the VM.
disk_spec: virtual_machine.BaseDiskSpec of the disk.
Returns:
list of dicts of only block devices that are not being used.
Raises:
errors.Error Whenever there are no available block devices.
"""
free_blk_devices = []
for dev in block_devices:
if self._IsDiskAvailable(dev):
free_blk_devices.append(dev)
if not free_blk_devices:
raise errors.Error(
''.join(('Machine type %s does not include' % self.machine_type,
' local disks. Please use a different disk_type,',
' or a machine_type that provides local disks.')))
elif len(free_blk_devices) < disk_spec.num_striped_disks:
raise errors.Error('Not enough local data disks. '
'Requesting %d disk(s) but only %d available.'
% (disk_spec.num_striped_disks, len(free_blk_devices)))
return free_blk_devices
def _GetBlockDevices(self):
    """Executes a command on the VM to gather all of its block devices.
Returns:
list of dicts block devices in the VM.
"""
stdout, _ = self.RemoteCommand(
'sudo lsblk -o NAME,MODEL,SIZE,TYPE,MOUNTPOINT,LABEL -n -b -P')
lines = stdout.splitlines()
groups = [LSBLK_PATTERN.match(line) for line in lines]
tuples = [g.groups() for g in groups if g]
colnames = ('name', 'model', 'size_bytes', 'type', 'mountpoint', 'label',)
blk_devices = [dict(list(zip(colnames, t))) for t in tuples]
for d in blk_devices:
d['model'] = d['model'].rstrip()
d['label'] = d['label'].rstrip()
d['size_bytes'] = int(d['size_bytes'])
return blk_devices
def _GetBootDevice(self):
"""Returns backing block device where '/' is mounted on.
Returns:
dict blk device data
Raises:
      errors.Error indicating that the block device mounted on '/' could not be found.
"""
blk_devices = self._GetBlockDevices()
boot_blk_device = None
for dev in blk_devices:
if dev['mountpoint'] == '/':
boot_blk_device = dev
break
if boot_blk_device is None: # Unlikely
raise errors.Error('Could not find disk with "/" root mount point.')
if boot_blk_device['type'] != 'part':
return boot_blk_device
return self._FindBootBlockDevice(blk_devices, boot_blk_device)
def _FindBootBlockDevice(self, blk_devices, boot_blk_device):
"""Helper method to search for backing block device of a partition."""
blk_device_name = boot_blk_device['name'].rstrip('0123456789')
for dev in blk_devices:
if dev['type'] == 'disk' and dev['name'] == blk_device_name:
boot_blk_device = dev
return boot_blk_device
def _IsDiskAvailable(self, blk_device):
"""Returns True if a block device is available.
    An available disk is one that has not previously been allocated as
    a data disk and is not being used as the boot disk.
"""
return (blk_device['type'] != 'part' and
blk_device['name'] != self.boot_device['name'] and
'config' not in blk_device['label'] and
blk_device['name'] not in self.allocated_disks)
class DebianBasedRackspaceVirtualMachine(RackspaceVirtualMachine,
linux_virtual_machine.DebianMixin):
DEFAULT_IMAGE = UBUNTU_IMAGE
class RhelBasedRackspaceVirtualMachine(RackspaceVirtualMachine,
linux_virtual_machine.RhelMixin):
DEFAULT_IMAGE = RHEL_IMAGE
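`_GetBlockDevices` above parses the `KEY="value"` pairs that `lsblk --pairs` emits. A self-contained sketch of that parsing step (the sample output below is invented for illustration, not captured from a real machine):

```python
import re

# Mirrors the pattern used with `lsblk -o NAME,MODEL,SIZE,TYPE,MOUNTPOINT,LABEL -n -b -P`
LSBLK_PATTERN = re.compile(
    r'NAME="(.*)"\s+MODEL="(.*)"\s+SIZE="(.*)"'
    r'\s+TYPE="(.*)"\s+MOUNTPOINT="(.*)"\s+LABEL="(.*)"')

def parse_lsblk(stdout):
    colnames = ('name', 'model', 'size_bytes', 'type', 'mountpoint', 'label')
    devices = []
    for line in stdout.splitlines():
        match = LSBLK_PATTERN.match(line)
        if not match:
            continue
        dev = dict(zip(colnames, match.groups()))
        dev['model'] = dev['model'].rstrip()
        dev['size_bytes'] = int(dev['size_bytes'])
        devices.append(dev)
    return devices

# Invented sample: one backing disk plus the root partition
sample = ('NAME="xvda" MODEL="QEMU HARDDISK   " SIZE="42949672960" TYPE="disk" MOUNTPOINT="" LABEL=""\n'
          'NAME="xvda1" MODEL="" SIZE="42948624384" TYPE="part" MOUNTPOINT="/" LABEL="cloudimg-rootfs"')
devices = parse_lsblk(sample)
```

With output of this shape, the boot-device walk above would map the `/` partition `xvda1` back to its backing disk `xvda` by stripping trailing digits from the name.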
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from collections import OrderedDict
import json
import logging
import re
import tempfile
from perfkitbenchmarker import errors
from perfkitbenchmarker import flags
from perfkitbenchmarker import linux_virtual_machine
from perfkitbenchmarker import virtual_machine
from perfkitbenchmarker import vm_util
from perfkitbenchmarker import providers
from perfkitbenchmarker.configs import option_decoders
from perfkitbenchmarker.providers.rackspace import rackspace_disk
from perfkitbenchmarker.providers.rackspace import rackspace_network
from perfkitbenchmarker.providers.rackspace import util
import six
from six.moves import range
from six.moves import zip
FLAGS = flags.FLAGS
CLOUD_CONFIG_TEMPLATE = '''#cloud-config
users:
- name: {0}
ssh-authorized-keys:
- {1}
sudo: ['ALL=(ALL) NOPASSWD:ALL']
groups: sudo
shell: /bin/bash
'''
BLOCK_DEVICE_TEMPLATE = '''
source-type=image,
source-id={0},
dest=volume,
size={1},
shutdown=remove,
bootindex=0
'''
LSBLK_REGEX = (r'NAME="(.*)"\s+MODEL="(.*)"\s+SIZE="(.*)"'
r'\s+TYPE="(.*)"\s+MOUNTPOINT="(.*)"\s+LABEL="(.*)"')
LSBLK_PATTERN = re.compile(LSBLK_REGEX)
UBUNTU_IMAGE = '09de0a66-3156-48b4-90a5-1cf25a905207'
RHEL_IMAGE = '92f8a8b8-6019-4c27-949b-cf9910b84ffb'
INSTANCE_EXISTS_STATUSES = frozenset(
['BUILD', 'ACTIVE', 'PAUSED', 'SHUTOFF', 'ERROR'])
INSTANCE_DELETED_STATUSES = frozenset(
['DELETED'])
INSTANCE_KNOWN_STATUSES = INSTANCE_EXISTS_STATUSES | INSTANCE_DELETED_STATUSES
REMOTE_BOOT_DISK_SIZE_GB = 50
def RenderBlockDeviceTemplate(image, volume_size):
blk_params = BLOCK_DEVICE_TEMPLATE.replace('\n', '').format(
image, str(volume_size))
return blk_params
class RackspaceVmSpec(virtual_machine.BaseVmSpec):
CLOUD = providers.RACKSPACE
@classmethod
def _ApplyFlags(cls, config_values, flag_values):
super(RackspaceVmSpec, cls)._ApplyFlags(config_values, flag_values)
if flag_values['project'].present:
config_values['project'] = flag_values.project
if flag_values['rackspace_region'].present:
config_values['rackspace_region'] = flag_values.rackspace_region
if flag_values['rack_profile'].present:
config_values['rack_profile'] = flag_values.rack_profile
@classmethod
def _GetOptionDecoderConstructions(cls):
result = super(RackspaceVmSpec, cls)._GetOptionDecoderConstructions()
result.update({
'project': (option_decoders.StringDecoder, {'default': None}),
'rackspace_region': (option_decoders.StringDecoder, {'default': 'IAD'}),
'rack_profile': (option_decoders.StringDecoder, {'default': None})})
return result
class RackspaceVirtualMachine(virtual_machine.BaseVirtualMachine):
CLOUD = providers.RACKSPACE
DEFAULT_IMAGE = None
def __init__(self, vm_spec):
super(RackspaceVirtualMachine, self).__init__(vm_spec)
self.boot_metadata = {}
self.boot_device = None
self.boot_disk_allocated = False
self.allocated_disks = set()
self.id = None
self.image = self.image or self.DEFAULT_IMAGE
self.region = vm_spec.rackspace_region
self.project = vm_spec.project
self.profile = vm_spec.rack_profile
self.firewall = rackspace_network.RackspaceFirewall.GetFirewall()
def _CreateDependencies(self):
# TODO(meteorfox) Create security group (if applies)
self._UploadSSHPublicKey()
def _Create(self):
self._CreateInstance()
self._WaitForInstanceUntilActive()
@vm_util.Retry()
def _PostCreate(self):
get_cmd = util.RackCLICommand(self, 'servers', 'instance', 'get')
get_cmd.flags['id'] = self.id
stdout, _, _ = get_cmd.Issue()
resp = json.loads(stdout)
self.internal_ip = resp['PrivateIPv4']
self.ip_address = resp['PublicIPv4']
self.AddMetadata(**self.vm_metadata)
def _Exists(self):
if self.id is None:
return False
get_cmd = util.RackCLICommand(self, 'servers', 'instance', 'get')
get_cmd.flags['id'] = self.id
stdout, _, _ = get_cmd.Issue(suppress_warning=True)
try:
resp = json.loads(stdout)
except ValueError:
return False
status = resp['Status']
return status in INSTANCE_EXISTS_STATUSES
def _Delete(self):
if self.id is None:
return
self._DeleteInstance()
self._WaitForInstanceUntilDeleted()
def _DeleteDependencies(self):
# TODO(meteorfox) Delete security group (if applies)
self._DeleteSSHPublicKey()
def _UploadSSHPublicKey(self):
cmd = util.RackCLICommand(self, 'servers', 'keypair', 'upload')
cmd.flags = OrderedDict([
('name', self.name), ('file', self.ssh_public_key)])
cmd.Issue()
def _DeleteSSHPublicKey(self):
cmd = util.RackCLICommand(self, 'servers', 'keypair', 'delete')
cmd.flags['name'] = self.name
cmd.Issue()
def _CreateInstance(self):
with tempfile.NamedTemporaryFile(dir=vm_util.GetTempDir(),
prefix='user-data') as tf:
with open(self.ssh_public_key) as f:
public_key = f.read().rstrip('\n')
tf.write(CLOUD_CONFIG_TEMPLATE.format(self.user_name, public_key))
tf.flush()
create_cmd = self._GetCreateCommand(tf)
stdout, stderr, _ = create_cmd.Issue()
if stderr:
resp = json.loads(stderr)
raise errors.Error(''.join(
('Non-recoverable error has occurred: %s\n' % str(resp),
'Following command caused the error: %s' % repr(create_cmd),)))
resp = json.loads(stdout)
self.id = resp['ID']
def _GetCreateCommand(self, tf):
create_cmd = util.RackCLICommand(self, 'servers', 'instance', 'create')
create_cmd.flags['name'] = self.name
create_cmd.flags['keypair'] = self.name
create_cmd.flags['flavor-id'] = self.machine_type
if FLAGS.rackspace_boot_from_cbs_volume:
blk_flag = RenderBlockDeviceTemplate(self.image, REMOTE_BOOT_DISK_SIZE_GB)
create_cmd.flags['block-device'] = blk_flag
else:
create_cmd.flags['image-id'] = self.image
if FLAGS.rackspace_network_id is not None:
create_cmd.flags['networks'] = ','.join([
rackspace_network.PUBLIC_NET_ID, rackspace_network.SERVICE_NET_ID,
FLAGS.rackspace_network_id])
create_cmd.flags['user-data'] = tf.name
metadata = ['owner=%s' % FLAGS.owner]
for key, value in six.iteritems(self.boot_metadata):
metadata.append('%s=%s' % (key, value))
create_cmd.flags['metadata'] = ','.join(metadata)
return create_cmd
@vm_util.Retry(poll_interval=5, max_retries=720, log_errors=False,
retryable_exceptions=(errors.Resource.RetryableCreationError,))
def _WaitForInstanceUntilActive(self):
get_cmd = util.RackCLICommand(self, 'servers', 'instance', 'get')
get_cmd.flags['id'] = self.id
stdout, stderr, _ = get_cmd.Issue()
if stdout:
instance = json.loads(stdout)
if instance['Status'] == 'ACTIVE':
logging.info('VM: %s is up and running.' % self.name)
return
elif instance['Status'] == 'ERROR':
logging.error('VM: %s failed to boot.' % self.name)
raise errors.VirtualMachine.VmStateError()
raise errors.Resource.RetryableCreationError(
'VM: %s is not running. Retrying to check status.' % self.name)
def _DeleteInstance(self):
cmd = util.RackCLICommand(self, 'servers', 'instance', 'delete')
cmd.flags['id'] = self.id
stdout, _, _ = cmd.Issue(suppress_warning=True)
resp = json.loads(stdout)
if 'result' not in resp or 'Deleting' not in resp['result']:
raise errors.Resource.RetryableDeletionError()
@vm_util.Retry(poll_interval=5, max_retries=-1, timeout=300,
log_errors=False,
retryable_exceptions=(errors.Resource.RetryableDeletionError,))
def _WaitForInstanceUntilDeleted(self):
get_cmd = util.RackCLICommand(self, 'servers', 'instance', 'get')
get_cmd.flags['id'] = self.id
stdout, stderr, _ = get_cmd.Issue()
if stderr:
resp = json.loads(stderr)
if 'error' in resp and "couldn't find" in resp['error']:
logging.info('VM: %s has been successfully deleted.' % self.name)
return
instance = json.loads(stdout)
if instance['Status'] == 'ERROR':
logging.error('VM: %s failed to delete.' % self.name)
raise errors.VirtualMachine.VmStateError()
if instance['Status'] == 'DELETED':
logging.info('VM: %s has been successfully deleted.' % self.name)
else:
raise errors.Resource.RetryableDeletionError(
'VM: %s has not been deleted. Retrying to check status.' % self.name)
def AddMetadata(self, **kwargs):
if not kwargs:
return
cmd = util.RackCLICommand(self, 'servers', 'instance', 'update-metadata')
cmd.flags['id'] = self.id
cmd.flags['metadata'] = ','.join('{0}={1}'.format(key, value)
for key, value in six.iteritems(kwargs))
cmd.Issue()
f720b4c65b03ffc9b7a16a20c28258f9373c712e | 1,316 | py | Python | app/modules/ssh.py | danielpodwysocki/zoltan | 52536c41e95ca7b641d4e2b740f68c9e00170aee | ["MIT"] | null | null | null
import re
import paramiko
class Handler:
'''
A slash command for checking ssh connectivity and rebooting machines.
'''
id = 2
def __init__(self, regexp):
'''
        Takes a regexp as an argument; it is then used to check whether a hostname is in the correct format.
'''
self.prog = re.compile(regexp)
def command(self, message):
response = "Something went wrong :("
if not message:
response = "Run `/ssh [machine's name]` to see if the machine is reachable"
elif bool(self.prog.match(message)):
response = "Checking `%s`" % message
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.WarningPolicy())
            # Try connecting to the machine specified by the message
try:
ssh.connect(message, key_filename="/ssh/zoltan", username='zoltan')
response = "The machine is reachable."
ssh.close()
except Exception as e:
print(e)
response = "The machine is not reachable."
else:
response = "The machine's name is not in the correct format. Run `/help ssh` for command examples"
return response
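The control flow above — empty message → usage text, regexp mismatch → format error, match → SSH attempt — can be exercised without paramiko or a network. A hedged sketch (the hostname pattern below is an assumption; the real regexp is supplied by the caller when constructing `Handler`):

```python
import re

class HostnameGate:
    """Mirrors Handler's validation branches, minus the paramiko connection."""
    def __init__(self, regexp):
        self.prog = re.compile(regexp)

    def check(self, message):
        if not message:
            return 'usage'          # Handler replies with the /ssh usage hint
        if self.prog.match(message):
            return 'connect'        # Handler would attempt ssh.connect() here
        return 'bad-format'         # Handler points the user at /help ssh

# Hypothetical pattern: lowercase dotted host names such as web01.example.internal
gate = HostnameGate(r'^[a-z][a-z0-9-]*(\.[a-z0-9-]+)*$')
results = [gate.check(''), gate.check('web01.example.internal'), gate.check('web01;rm -rf /')]
```

Anchoring the pattern with `^...$` matters here: without `$`, a hostname followed by shell metacharacters would still pass the gate.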
f720b5b8dd20b02542407ce32d85af6fe11ca20b | 29,285 | py | Python | pyqstrat/account.py | alexanu/pyqstrat | ec62a1a7b048df05e8d1058a37bfe2cf113d2815 | ["BSD-3-Clause"] | null | null | null
from collections import defaultdict
from sortedcontainers import SortedDict
import math
import pandas as pd
import numpy as np
from pyqstrat.pq_types import ContractGroup, Trade, Contract
from types import SimpleNamespace
from typing import Sequence, Any, Tuple, Callable, Union, MutableSet, MutableSequence, MutableMapping, List
def calc_trade_pnl(open_qtys: np.ndarray,
open_prices: np.ndarray,
new_qtys: np.ndarray,
new_prices: np.ndarray,
multiplier: float) -> Tuple[np.ndarray, np.ndarray, float, float, float]:
    '''Nets new trades against open positions FIFO; returns open lots, net open position and realized pnl.

    >>> print(calc_trade_pnl(
    ... open_qtys = np.array([], dtype = float), open_prices = np.array([], dtype = float),
    ... new_qtys = np.array([-8, 9, -4]), new_prices = np.array([10, 11, 6]), multiplier = 100))
    (array([-3.]), array([6.]), -3.0, 6.0, -1300.0)
    >>> print(calc_trade_pnl(open_qtys = np.array([], dtype = float), open_prices = np.array([], dtype = float), new_qtys = np.array([3, 10, -5]),
... new_prices = np.array([51, 50, 45]), multiplier = 100))
(array([8.]), array([50.]), 8.0, 50.0, -2800.0)
>>> print(calc_trade_pnl(open_qtys = np.array([]), open_prices = np.array([]),
... new_qtys = np.array([-58, -5, -5, 6, -8, 5, 5, -5, 19, 7, 5, -5, 39]),
... new_prices = np.array([2080, 2075.25, 2070.75, 2076, 2066.75, 2069.25, 2074.75, 2069.75, 2087.25, 2097.25, 2106, 2088.25, 2085.25]),
... multiplier = 50))
(array([], dtype=float64), array([], dtype=float64), 0.0, 0, -33762.5) '''
# TODO: Cythonize this
realized = 0.
new_qtys = new_qtys.copy()
new_prices = new_prices.copy()
    _open_prices = np.zeros(len(open_prices) + len(new_prices), dtype=np.float64)
    _open_prices[:len(open_prices)] = open_prices
    _open_qtys = np.zeros(len(open_qtys) + len(new_qtys), dtype=np.float64)
    _open_qtys[:len(open_qtys)] = open_qtys
    new_qty_indices = np.nonzero(new_qtys)[0]
    open_qty_indices = np.zeros(len(_open_qtys), dtype=np.int64)
nonzero_indices = np.nonzero(_open_qtys)[0]
open_qty_indices[:len(nonzero_indices)] = nonzero_indices
i = 0 # index into new_qty_indices to get idx of the new qty we are currently netting
o = len(nonzero_indices) # virtual length of open_qty_indices
j = 0 # index into open_qty_indices to get idx of the open qty we are currently netting
k = len(open_qtys) # virtual length of _open_qtys
# Try to net all new trades against existing non-netted trades.
# Append any remaining non-netted new trades to end of existing trades
while i < len(new_qty_indices):
# Always try to net first non-zero new trade against first non-zero existing trade
        # FIFO accounting
new_idx = new_qty_indices[i]
new_qty, new_price = new_qtys[new_idx], new_prices[new_idx]
# print(f'i: {i} j: {j} k: {k} o: {o} oq: {_open_qtys} oqi: {open_qty_indices} op: {_open_prices} nq: {new_qtys} np: {new_prices}')
if j < o: # while we still have open positions to net against
open_idx = open_qty_indices[j]
open_qty, open_price = _open_qtys[open_idx], _open_prices[open_idx]
if math.copysign(1, open_qty) == math.copysign(1, new_qty):
# Nothing to net against so add this trade to the array and wait for the next offsetting trade
_open_qtys[k] = new_qty
_open_prices[k] = new_price
open_qty_indices[o] = k
k += 1
o += 1
new_qtys[new_idx] = 0
i += 1
elif abs(new_qty) > abs(open_qty):
# New trade has more qty than offsetting trade so:
# a. net against offsetting trade
# b. remove the offsetting trade
# c. reduce qty of new trade
open_qty, open_price = _open_qtys[open_idx], _open_prices[open_idx]
realized += open_qty * (new_price - open_price)
# print(f'open_qty: {open_qty} open_price: {open_price} open_idx: {open_idx} i: {i}
# j: {j} k: {k} l: {l} oq: {_open_qtys} oqi: {open_qty_indices} op: {_open_prices} nq: {new_qtys} np: {new_prices}')
_open_qtys[open_idx] = 0
j += 1
new_qtys[new_idx] += open_qty
else:
# New trade has less qty than offsetting trade so:
# a. net against offsetting trade
# b. remove new trade
# c. reduce qty of offsetting trade
realized += new_qty * (open_price - new_price)
new_qtys[new_idx] = 0
i += 1
_open_qtys[open_idx] += new_qty
else:
# Nothing to net against so add this trade to the open trades array and wait for the next offsetting trade
_open_qtys[k] = new_qty
_open_prices[k] = new_price
open_qty_indices[o] = k
k += 1
o += 1
new_qtys[new_idx] = 0
i += 1
mask = _open_qtys != 0
_open_qtys = _open_qtys[mask]
_open_prices = _open_prices[mask]
open_qty = np.sum(_open_qtys)
if math.isclose(open_qty, 0):
weighted_avg_price = 0
else:
weighted_avg_price = np.sum(_open_qtys * _open_prices) / open_qty
return _open_qtys, _open_prices, open_qty, weighted_avg_price, realized * multiplier
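The netting loop above is an array-based FIFO match: each new trade is offset against the oldest open lots of opposite sign, realizing (price − lot price) per matched unit signed by the lot's direction, and any unmatched remainder becomes a new open lot. A pure-Python sketch of just that accounting (illustrative only — it reproduces the doctest numbers above but drops the preallocated-array bookkeeping of the production version):

```python
import math
from collections import deque

def fifo_net(trades, multiplier):
    """Nets signed (qty, price) trades FIFO; returns (open_lots, realized_pnl)."""
    lots = deque()   # open (qty, price) lots, oldest first
    realized = 0.0
    for qty, price in trades:
        # Offset against the oldest opposite-signed lots first (FIFO)
        while qty != 0 and lots and math.copysign(1, lots[0][0]) != math.copysign(1, qty):
            lot_qty, lot_price = lots.popleft()
            lot_sign = math.copysign(1, lot_qty)
            matched = min(abs(lot_qty), abs(qty))
            realized += matched * lot_sign * (price - lot_price)
            lot_qty -= lot_sign * matched
            qty += lot_sign * matched
            if lot_qty != 0:
                lots.appendleft((lot_qty, lot_price))  # partially consumed lot stays open
        if qty != 0:
            lots.append((qty, price))  # unmatched remainder opens a new lot
    return list(lots), realized * multiplier

# These two calls mirror the first two doctests of calc_trade_pnl
lots1, pnl1 = fifo_net([(-8, 10), (9, 11), (-4, 6)], 100)
lots2, pnl2 = fifo_net([(3, 51), (10, 50), (-5, 45)], 100)
```

The first call ends short 3 lots at a price of 6 with realized pnl of -1300; the second ends long 8 at 50 with -2800, matching the doctests.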
def leading_nan_to_zero(df: pd.DataFrame, columns: Sequence[str]) -> pd.DataFrame:
    '''Replaces NaNs that appear before the first non-NaN value in each column with 0.'''
for column in columns:
vals = df[column].values
first_non_nan_index = np.ravel(np.nonzero(~np.isnan(vals)))
if len(first_non_nan_index):
first_non_nan_index = first_non_nan_index[0]
else:
first_non_nan_index = -1
if first_non_nan_index > 0 and first_non_nan_index < len(vals):
vals[:first_non_nan_index] = np.nan_to_num(vals[:first_non_nan_index])
df[column] = vals
return df
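`leading_nan_to_zero` zeroes only the NaNs that come before a column's first real value — e.g. so a contract shows 0 pnl before its first trade rather than NaN — while interior NaNs are preserved for forward-filling later. The same rule on a plain list, as a stdlib sketch:

```python
import math

def leading_nan_to_zero_list(values):
    """Zeroes the NaNs before the first non-NaN value; later NaNs stay NaN."""
    out = list(values)
    for i, v in enumerate(out):
        if not math.isnan(v):
            out[:i] = [0.0] * i   # zero only the leading run
            return out
    return out  # an all-NaN column is left untouched, as in the dataframe version

nan = float('nan')
cleaned = leading_nan_to_zero_list([nan, nan, 3.5, nan, 7.0])
```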
def find_last_non_nan_index(array: np.ndarray) -> int:
i = np.nonzero(np.isfinite(array))[0]
if len(i): return i[-1]
return 0
def find_index_before(sorted_dict: SortedDict, key: Any) -> int:
'''
    Finds the index of the last (rightmost) key in a sorted dict that is less than or equal to the key passed in.
    If the key is less than the first key in the dict, returns -1.
'''
size = len(sorted_dict)
if not size: return -1
i = sorted_dict.bisect_left(key)
if i == size: return size - 1
if sorted_dict.keys()[i] != key:
return i - 1
return i
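`find_index_before` is the classic "rightmost key ≤ target" search. On a plain sorted list the same answer falls out of `bisect_right(keys, key) - 1`; a stdlib sketch of the equivalence:

```python
from bisect import bisect_right

def index_before(sorted_keys, key):
    """Index of the rightmost key <= `key`, or -1 if every key is greater."""
    return bisect_right(sorted_keys, key) - 1

keys = [2, 5, 9]
results = [index_before(keys, k) for k in (1, 2, 5, 6, 10)]
```

For example, the key 6 maps to index 1 (the entry 5), and a key below the smallest entry maps to -1, matching the SortedDict version's sentinel.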
class ContractPNL:
'''Computes pnl for a single contract over time given trades and market data'''
def __init__(self,
contract: Contract,
account_timestamps: np.ndarray,
price_function: Callable[[Contract, np.ndarray, int, SimpleNamespace], float],
strategy_context: SimpleNamespace) -> None:
self.contract = contract
self._price_function = price_function
self.strategy_context = strategy_context
self._account_timestamps = account_timestamps
self._trade_pnl = SortedDict()
self._net_pnl = SortedDict()
# Store trades that are not offset so when new trades come in we can offset against these to calc pnl
        self.open_qtys = np.empty(0, dtype=np.int64)
        self.open_prices = np.empty(0, dtype=np.float64)
self.first_trade_timestamp = None
self.final_pnl = np.nan
def _add_trades(self, trades: Sequence[Trade]) -> None:
'''
Args:
trades: Must be sorted by timestamp
'''
if not len(trades): return
timestamps = [trade.timestamp for trade in trades]
if len(self._trade_pnl):
k, v = self._trade_pnl.peekitem(0)
if timestamps[0] <= k:
                raise Exception(f'Can only add trades newer than the last added trade. current: {timestamps[0]} prev max timestamp: {k}')
if self.first_trade_timestamp is None: self.first_trade_timestamp = timestamps[0]
for i, timestamp in enumerate(timestamps):
t_trades = [trade for trade in trades if trade.timestamp == timestamp]
open_qtys, open_prices, open_qty, weighted_avg_price, realized_chg = calc_trade_pnl(
self.open_qtys, self.open_prices,
np.array([trade.qty for trade in t_trades]),
np.array([trade.price for trade in t_trades]),
self.contract.multiplier)
self.open_qtys = open_qtys
self.open_prices = open_prices
position_chg = sum([trade.qty for trade in t_trades])
commission_chg = sum([trade.commission for trade in t_trades])
fee_chg = sum([trade.fee for trade in t_trades])
index = find_index_before(self._trade_pnl, timestamp)
if index == -1:
self._trade_pnl[timestamp] = (position_chg, realized_chg, fee_chg, commission_chg, open_qty, weighted_avg_price)
else:
prev_timestamp, (prev_position, prev_realized, prev_fee, prev_commission, _, _) = self._trade_pnl.peekitem(index)
self._trade_pnl[timestamp] = (prev_position + position_chg, prev_realized + realized_chg,
prev_fee + fee_chg, prev_commission + commission_chg, open_qty, weighted_avg_price)
self.calc_net_pnl(timestamp)
def calc_net_pnl(self, timestamp: np.datetime64) -> None:
if timestamp in self._net_pnl: return
if timestamp < self.first_trade_timestamp: return
# TODO: Option expiry should be a special case. If option expires at 3:00 pm, we put in an expiry order at 3 pm and the
# trade comes in at 3:01 pm. In this case, the final pnl is recorded at 3:01 but should be at 3 pm.
if self.contract.expiry is not None and timestamp > self.contract.expiry and not math.isnan(self.final_pnl): return
i = np.searchsorted(self._account_timestamps, timestamp)
assert(self._account_timestamps[i] == timestamp)
# Find the index before or equal to current timestamp. If not found, set to 0's
trade_pnl_index = find_index_before(self._trade_pnl, timestamp)
if trade_pnl_index == -1:
            realized, fee, commission, open_qty, weighted_avg_price = 0, 0, 0, 0, 0
else:
_, (_, realized, fee, commission, open_qty, weighted_avg_price) = self._trade_pnl.peekitem(trade_pnl_index)
price = np.nan
if math.isclose(open_qty, 0):
unrealized = 0
else:
price = self._price_function(self.contract, self._account_timestamps, i, self.strategy_context)
assert np.isreal(price), \
f'Unexpected price type: {price} {type(price)} for contract: {self.contract} timestamp: {self._account_timestamps[i]}'
if math.isnan(price):
index = find_index_before(self._net_pnl, timestamp) # Last index we computed net pnl for
if index == -1:
prev_unrealized = 0
else:
_, (_, prev_unrealized, _) = self._net_pnl.peekitem(index)
unrealized = prev_unrealized
else:
unrealized = open_qty * (price - weighted_avg_price) * self.contract.multiplier
net_pnl = realized + unrealized - commission - fee
self._net_pnl[timestamp] = (price, unrealized, net_pnl)
if self.contract.expiry is not None and timestamp > self.contract.expiry:
self.final_pnl = net_pnl
def position(self, timestamp: np.datetime64) -> float:
index = find_index_before(self._trade_pnl, timestamp)
if index == -1: return 0.
_, (position, _, _, _, _, _) = self._trade_pnl.peekitem(index) # Less than or equal to timestamp
return position
def net_pnl(self, timestamp: np.datetime64) -> float:
if self.contract.expiry is not None and timestamp > self.contract.expiry and not math.isnan(self.final_pnl):
return self.final_pnl
index = find_index_before(self._net_pnl, timestamp)
if index == -1: return 0.
_, (_, _, net_pnl) = self._net_pnl.peekitem(index) # Less than or equal to timestamp
return net_pnl
def pnl(self, timestamp: np.datetime64) -> Tuple[float, float, float, float, float, float, float]:
index = find_index_before(self._trade_pnl, timestamp)
position, realized, fee, commission, price, unrealized, net_pnl = 0, 0, 0, 0, 0, 0, 0
if index != -1:
_, (position, realized, fee, commission, _, _) = self._trade_pnl.peekitem(index) # Less than or equal to timestamp
index = find_index_before(self._net_pnl, timestamp)
if index != -1:
_, (price, unrealized, net_pnl) = self._net_pnl.peekitem(index) # Less than or equal to timestamp
return position, price, realized, unrealized, fee, commission, net_pnl
def df(self) -> pd.DataFrame:
'''Returns a pandas dataframe with pnl data'''
df_trade_pnl = pd.DataFrame.from_records([
(k, v[0], v[1], v[2], v[3]) for k, v in self._trade_pnl.items()],
columns=['timestamp', 'position', 'realized', 'fee', 'commission'])
df_net_pnl = pd.DataFrame.from_records([
(k, v[0], v[1], v[2]) for k, v in self._net_pnl.items()],
columns=['timestamp', 'price', 'unrealized', 'net_pnl'])
all_timestamps = np.unique(np.concatenate((df_trade_pnl.timestamp.values, df_net_pnl.timestamp.values)))
df_trade_pnl = df_trade_pnl.set_index('timestamp').reindex(all_timestamps, method='ffill').reset_index()
df_trade_pnl = leading_nan_to_zero(df_trade_pnl, ['position', 'realized', 'fee', 'commission'])
df_net_pnl = df_net_pnl.set_index('timestamp').reindex(all_timestamps, method='ffill').reset_index()
del df_net_pnl['timestamp']
df = pd.concat([df_trade_pnl, df_net_pnl], axis=1)
df['symbol'] = self.contract.symbol
df = df[['symbol', 'timestamp', 'position', 'price', 'unrealized', 'realized', 'commission', 'fee', 'net_pnl']]
return df
def _get_calc_timestamps(timestamps: np.ndarray, pnl_calc_time: int) -> np.ndarray:
time_delta = np.timedelta64(pnl_calc_time, 'm')
calc_timestamps = np.unique(timestamps.astype('M8[D]')) + time_delta
calc_indices = np.searchsorted(timestamps, calc_timestamps, side='left') - 1
if calc_indices[0] == -1: calc_indices[0] = 0
return np.unique(timestamps[calc_indices])
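`_get_calc_timestamps` picks, for each calendar day, the bar used for end-of-day pnl marking (by default the one nearest 3 pm). A simplified stdlib sketch of the same per-day selection using `datetime` (the bar times are invented for illustration; edge cases such as a day whose first bar is already past the cutoff are handled slightly differently by the numpy `searchsorted` version above):

```python
from datetime import datetime, timedelta

def daily_calc_timestamps(timestamps, pnl_calc_minutes=15 * 60):
    """For each calendar day, keep the last timestamp at or before the cutoff."""
    cutoff = timedelta(minutes=pnl_calc_minutes)
    chosen = {}
    for ts in sorted(timestamps):
        midnight = datetime.combine(ts.date(), datetime.min.time())
        if ts - midnight <= cutoff:
            chosen[ts.date()] = ts  # later bars before the cutoff overwrite earlier ones
    return sorted(chosen.values())

bars = [datetime(2023, 5, 1, 9, 30), datetime(2023, 5, 1, 14, 59),
        datetime(2023, 5, 1, 15, 30), datetime(2023, 5, 2, 10, 0)]
marks = daily_calc_timestamps(bars)
```

Here May 1 marks at 14:59 (the 15:30 bar is past the 3 pm cutoff) and May 2 marks at its only bar, 10:00.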
class Account:
'''An Account calculates pnl for a set of contracts'''
def __init__(self,
contract_groups: Sequence[ContractGroup],
timestamps: np.ndarray,
price_function: Callable[[Contract, np.ndarray, int, SimpleNamespace], float],
strategy_context: SimpleNamespace,
starting_equity: float = 1.0e6,
pnl_calc_time: int = 15 * 60) -> None:
'''
Args:
contract_groups: Contract groups that we want to compute PNL for
timestamps: Timestamps that we might compute PNL at
price_function: Function that returns contract prices used to compute pnl
strategy_context: This is passed into the price function so we can use current state of strategy to compute prices
starting_equity: Starting equity in account currency. Default 1.e6
pnl_calc_time: Number of minutes past midnight that we should calculate PNL at. Default 15 * 60, i.e. 3 pm
'''
self.starting_equity = starting_equity
self._price_function = price_function
self.strategy_context = strategy_context
self.timestamps = timestamps
self.calc_timestamps = _get_calc_timestamps(timestamps, pnl_calc_time)
self.contracts: MutableSet[Contract] = set()
self._trades: MutableSequence[Trade] = []
self._pnl = SortedDict()
self.symbol_pnls_by_contract_group: MutableMapping[str, MutableSequence[ContractPNL]] = defaultdict(list)
self.symbol_pnls: MutableMapping[str, ContractPNL] = {}
def symbols(self) -> MutableSequence[str]:
return [contract.symbol for contract in self.contracts]
def _add_contract(self, contract: Contract, timestamp: np.datetime64) -> None:
if contract.symbol in self.symbol_pnls:
raise Exception(f'Already have contract with symbol: {contract.symbol} {contract}')
contract_pnl = ContractPNL(contract, self.timestamps, self._price_function, self.strategy_context)
self.symbol_pnls[contract.symbol] = contract_pnl
# For fast lookup in position function
self.symbol_pnls_by_contract_group[contract.contract_group.name].append(contract_pnl)
self.contracts.add(contract)
def add_trades(self, trades: Sequence[Trade]) -> None:
trades = sorted(trades, key=lambda x: getattr(x, 'timestamp'))
# Break up trades by contract so we can add them in a batch
trades_by_contract: MutableMapping[Contract, List[Trade]] = defaultdict(list)
for trade in trades:
contract = trade.contract
if contract not in self.contracts: self._add_contract(contract, trade.timestamp)
trades_by_contract[contract].append(trade)
for contract, contract_trades in trades_by_contract.items():
contract_trades.sort(key=lambda x: x.timestamp)
self.symbol_pnls[contract.symbol]._add_trades(contract_trades)
self._trades += trades
def calc(self, timestamp: np.datetime64) -> None:
'''
Computes P&L and stores it internally for all contracts.
Args:
timestamp: timestamp to compute P&L at. The Account remembers the last timestamp it computed P&L for and computes P&L
from that point up to and including this timestamp. If more than one day lies between the last computed timestamp and
this one, pnl is also computed at the configured pnl_calc_time for each intervening date.
'''
if timestamp in self._pnl: return
prev_idx = find_index_before(self._pnl, timestamp)
prev_timestamp = None if prev_idx == -1 else self.timestamps[prev_idx]
# Find the last timestamp per day that is between the previous index we computed and the current index,
# so we can compute daily pnl in addition to the current index pnl
calc_timestamps = self.calc_timestamps
intermediate_calc_timestamps = calc_timestamps[calc_timestamps <= timestamp]
if prev_timestamp is not None:
intermediate_calc_timestamps = intermediate_calc_timestamps[intermediate_calc_timestamps > prev_timestamp]
if not len(intermediate_calc_timestamps) or intermediate_calc_timestamps[-1] != timestamp:
intermediate_calc_timestamps = np.append(intermediate_calc_timestamps, timestamp)
for ts in intermediate_calc_timestamps:
net_pnl = 0.
for symbol_pnl in self.symbol_pnls.values():
symbol_pnl.calc_net_pnl(ts)
net_pnl += symbol_pnl.net_pnl(ts)
self._pnl[ts] = net_pnl
def position(self, contract_group: ContractGroup, timestamp: np.datetime64) -> float:
'''Returns netted position for a contract_group at a given date in number of contracts or shares.'''
position = 0.
for symbol_pnl in self.symbol_pnls_by_contract_group[contract_group.name]:
position += symbol_pnl.position(timestamp)
return position
def positions(self, contract_group: ContractGroup, timestamp: np.datetime64) -> MutableSequence[Tuple[Contract, float]]:
'''
Returns all non-zero positions in a contract group
'''
positions = []
for contract in contract_group.contracts:
symbol = contract.symbol
if symbol not in self.symbol_pnls: continue
position = self.symbol_pnls[symbol].position(timestamp)
if not math.isclose(position, 0): positions.append((contract, position))
return positions
def equity(self, timestamp: np.datetime64) -> float:
'''Returns equity in this account in Account currency. Will cause calculation if Account has not previously
calculated up to this date'''
pnl = self._pnl.get(timestamp)
if pnl is None:
self.calc(timestamp)
pnl = self._pnl[timestamp]
return self.starting_equity + pnl
def trades(self,
contract_group: ContractGroup = None,
start_date: np.datetime64 = None,
end_date: np.datetime64 = None) -> MutableSequence[Trade]:
'''Returns a list of trades for the given contract group with trade timestamp between (and including) start_date
and end_date, where specified. If contract_group is None, trades for all contract groups are returned'''
# start_date, end_date = str2date(start_date), str2date(end_date)
return [trade for trade in self._trades if (start_date is None or trade.timestamp >= start_date) and (
end_date is None or trade.timestamp <= end_date) and (
contract_group is None or trade.contract.contract_group == contract_group)]
def df_pnl(self, contract_groups: Union[ContractGroup, Sequence[ContractGroup]] = None) -> pd.DataFrame:
'''
Returns a dataframe with P&L columns broken down by contract group and symbol
Args:
contract_groups: Return PNL for these contract groups. If None (default), include all contract groups
'''
if contract_groups is None:
contract_groups = list(set([contract.contract_group for contract in self.contracts]))
if isinstance(contract_groups, ContractGroup): contract_groups = [contract_groups]
dfs = []
for contract_group in contract_groups:
for contract in contract_group.contracts:
symbol = contract.symbol
if symbol not in self.symbol_pnls: continue
df = self.symbol_pnls[symbol].df()
if len(df) > 1:
net_pnl_diff = np.diff(df.net_pnl.values) # np.diff returns a vector one shorter than the original
last_index = np.nonzero(net_pnl_diff)
if len(last_index[0]):
last_index = last_index[0][-1] + 1
df = df.iloc[:last_index + 1]
df['contract_group'] = contract_group.name
dfs.append(df)
ret_df = pd.concat(dfs)
ret_df = ret_df.sort_values(by=['timestamp', 'contract_group', 'symbol'])
ret_df = ret_df[['timestamp', 'contract_group', 'symbol', 'position', 'price', 'unrealized', 'realized',
'commission', 'fee', 'net_pnl']]
return ret_df
def df_account_pnl(self, contract_group: ContractGroup = None) -> pd.DataFrame:
'''
Returns PNL at the account level.
Args:
contract_group: If set, we only return pnl for this contract_group. Otherwise we return pnl for all contract groups
'''
if contract_group is not None:
symbols = [contract.symbol for contract in contract_group.contracts if contract.symbol in self.symbol_pnls]
symbol_pnls = [self.symbol_pnls[symbol] for symbol in symbols]
else:
symbol_pnls = list(self.symbol_pnls.values())
timestamps = self.calc_timestamps
position = np.full(len(timestamps), 0., dtype=float)
realized = np.full(len(timestamps), 0., dtype=float)
unrealized = np.full(len(timestamps), 0., dtype=float)
fee = np.full(len(timestamps), 0., dtype=float)
commission = np.full(len(timestamps), 0., dtype=float)
net_pnl = np.full(len(timestamps), 0., dtype=float)
for i, timestamp in enumerate(timestamps):
for symbol_pnl in symbol_pnls:
_position, _price, _realized, _unrealized, _fee, _commission, _net_pnl = symbol_pnl.pnl(timestamp)
if math.isfinite(_position): position[i] += _position
if math.isfinite(_realized): realized[i] += _realized
if math.isfinite(_unrealized): unrealized[i] += _unrealized
if math.isfinite(_fee): fee[i] += _fee
if math.isfinite(_commission): commission[i] += _commission
if math.isfinite(_net_pnl): net_pnl[i] += _net_pnl
df = pd.DataFrame.from_records(zip(timestamps, position, unrealized, realized, commission, fee, net_pnl),
columns=['timestamp', 'position', 'unrealized', 'realized', 'commission', 'fee', 'net_pnl'])
df['equity'] = self.starting_equity + df.net_pnl
return df[['timestamp', 'position', 'unrealized', 'realized', 'commission', 'fee', 'net_pnl', 'equity']]
def df_trades(self,
contract_group: ContractGroup = None,
start_date: np.datetime64 = None,
end_date: np.datetime64 = None) -> pd.DataFrame:
'''
Returns a dataframe of trades
Args:
contract_group: Return trades for this contract group. If None (default), include all contract groups
start_date: Include trades with date greater than or equal to this timestamp.
end_date: Include trades with date less than or equal to this timestamp.
'''
# start_date, end_date = str2date(start_date), str2date(end_date)
trades = self.trades(contract_group, start_date, end_date)
df = pd.DataFrame.from_records([(
trade.contract.symbol,
trade.timestamp,
trade.qty,
trade.price,
trade.fee,
trade.commission,
trade.order.timestamp,
trade.order.qty,
trade.order.reason_code,
(str(trade.order.properties.__dict__) if trade.order.properties.__dict__ else ''),
(str(trade.contract.properties.__dict__) if trade.contract.properties.__dict__ else '')) for trade in trades],
columns=['symbol', 'timestamp', 'qty', 'price', 'fee', 'commission', 'order_date', 'order_qty',
'reason_code', 'order_props', 'contract_props'])
df = df.sort_values(by=['timestamp', 'symbol'])
return df
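`df_pnl` above trims trailing rows once `net_pnl` has stopped changing, using `np.diff` and `np.nonzero`. A standalone sketch of that trimming step on a made-up series:

```python
import numpy as np

# net_pnl series that goes flat after the position is closed
net_pnl = np.array([0.0, 5.0, 7.5, 7.5, 7.5])
diff = np.diff(net_pnl)              # one element shorter than net_pnl
nz = np.nonzero(diff)[0]
last = nz[-1] + 1 if len(nz) else 0  # index of the last change
trimmed = net_pnl[:last + 1]         # drop the flat tail
```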
def test_account():
from pyqstrat.pq_types import MarketOrder
def get_close_price(contract, timestamps, idx, strategy_context):
if contract.symbol == "IBM":
price = idx + 10.1
elif contract.symbol == "MSFT":
price = idx + 15.3
else:
raise Exception(f'unknown contract: {contract}')
return price
ContractGroup.clear()
Contract.clear()
ibm_cg = ContractGroup.create('IBM')
msft_cg = ContractGroup.create('MSFT')
ibm_contract = Contract.create('IBM', contract_group=ibm_cg)
msft_contract = Contract.create('MSFT', contract_group=msft_cg)
timestamps = np.array(['2018-01-01 09:00', '2018-01-02 08:00', '2018-01-02 09:00', '2018-01-05 13:35'], dtype='M8[m]')
account = Account([ibm_cg, msft_cg], timestamps, get_close_price, None)
# account = Account([Contract(symbol)], timestamps, get_close_price)
trade_1 = Trade(ibm_contract, MarketOrder(ibm_contract, np.datetime64('2018-01-01 09:00'), 10),
np.datetime64('2018-01-02 08:00'), 10, 10.1, commission=0.01)
trade_2 = Trade(ibm_contract, MarketOrder(ibm_contract, np.datetime64('2018-01-01 09:00'), -20),
np.datetime64('2018-01-02 09:00'), -20, 15.1, commission=0.02)
trade_3 = Trade(msft_contract, MarketOrder(msft_contract, timestamps[1], 15), timestamps[1], 20, 13.2, commission=0.04)
trade_4 = Trade(msft_contract, MarketOrder(msft_contract, timestamps[2], 20), timestamps[2], 20, 16.2, commission=0.05)
account.add_trades([trade_1, trade_2, trade_3, trade_4])
account.calc(np.datetime64('2018-01-05 13:35'))
assert(len(account.df_trades()) == 4)
assert(len(account.df_pnl()) == 6)
assert(np.allclose(np.array([9.99, 61.96, 79.97, 103.91, 69.97, 143.91]), account.df_pnl().net_pnl.values, rtol=0))
assert(np.allclose(np.array([10, 20, -10, 40, -10, 40]), account.df_pnl().position.values, rtol=0))
assert(np.allclose(np.array([1000000., 1000183.88, 1000213.88]), account.df_account_pnl().equity.values, rtol=0))
if __name__ == "__main__":
test_account()
import doctest
doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)
from collections import defaultdict
from sortedcontainers import SortedDict
import math
import pandas as pd
import numpy as np
from pyqstrat.pq_types import ContractGroup, Trade, Contract
from types import SimpleNamespace
from typing import Sequence, Any, Tuple, Callable, Union, MutableSet, MutableSequence, MutableMapping, List
def calc_trade_pnl(open_qtys: np.ndarray,
open_prices: np.ndarray,
new_qtys: np.ndarray,
new_prices: np.ndarray,
multiplier: float) -> Tuple[np.ndarray, np.ndarray, float, float, float]:
realized = 0.
new_qtys = new_qtys.copy()
new_prices = new_prices.copy()
_open_prices = np.zeros(len(open_prices) + len(new_prices), dtype=float)
_open_prices[:len(open_prices)] = open_prices
_open_qtys = np.zeros(len(open_qtys) + len(new_qtys), dtype=float)
_open_qtys[:len(open_qtys)] = open_qtys
new_qty_indices = np.nonzero(new_qtys)[0]
open_qty_indices = np.zeros(len(_open_qtys), dtype=int)
nonzero_indices = np.nonzero(_open_qtys)[0]
open_qty_indices[:len(nonzero_indices)] = nonzero_indices
i = 0
o = len(nonzero_indices)
j = 0
k = len(open_qtys)
while i < len(new_qty_indices):
new_idx = new_qty_indices[i]
new_qty, new_price = new_qtys[new_idx], new_prices[new_idx]
if j < o:
open_idx = open_qty_indices[j]
open_qty, open_price = _open_qtys[open_idx], _open_prices[open_idx]
if math.copysign(1, open_qty) == math.copysign(1, new_qty):
_open_qtys[k] = new_qty
_open_prices[k] = new_price
open_qty_indices[o] = k
k += 1
o += 1
new_qtys[new_idx] = 0
i += 1
elif abs(new_qty) > abs(open_qty):
open_qty, open_price = _open_qtys[open_idx], _open_prices[open_idx]
realized += open_qty * (new_price - open_price)
# fully close this open lot against the new trade and carry the remaining quantity
_open_qtys[open_idx] = 0
j += 1
new_qtys[new_idx] += open_qty
else:
realized += new_qty * (open_price - new_price)
new_qtys[new_idx] = 0
i += 1
_open_qtys[open_idx] += new_qty
else:
_open_qtys[k] = new_qty
_open_prices[k] = new_price
open_qty_indices[o] = k
k += 1
o += 1
new_qtys[new_idx] = 0
i += 1
mask = _open_qtys != 0
_open_qtys = _open_qtys[mask]
_open_prices = _open_prices[mask]
open_qty = np.sum(_open_qtys)
if math.isclose(open_qty, 0):
weighted_avg_price = 0
else:
weighted_avg_price = np.sum(_open_qtys * _open_prices) / open_qty
return _open_qtys, _open_prices, open_qty, weighted_avg_price, realized * multiplier
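`calc_trade_pnl` nets new fills against open lots in order, realizing PNL wherever the signs differ. A simplified pure-Python FIFO sketch of the same netting idea (illustrative only; the function above works on NumPy arrays and additionally returns the weighted average open price):

```python
from collections import deque

def fifo_realized_pnl(fills, multiplier=1.0):
    """FIFO netting of (qty, price) fills; returns (open_lots, realized)."""
    open_lots = deque()   # lots all share the sign of the net position
    realized = 0.0
    for qty, price in fills:
        while qty and open_lots and (open_lots[0][0] > 0) != (qty > 0):
            oq, op = open_lots[0]
            matched = min(abs(qty), abs(oq))
            # closing a long earns (price - op); closing a short earns (op - price)
            realized += matched * (price - op) * (1 if oq > 0 else -1)
            if abs(qty) >= abs(oq):
                open_lots.popleft()
                qty += oq     # signs differ, so this reduces |qty|
            else:
                open_lots[0] = (oq + qty, op)
                qty = 0
        if qty:
            open_lots.append((qty, price))
    return list(open_lots), realized * multiplier
```

For example, buying 10 at 10.0 then selling 20 at 15.0 realizes 50.0 and leaves a short lot of 10 at 15.0, which mirrors the behavior tested in `test_account` below.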
def leading_nan_to_zero(df: pd.DataFrame, columns: Sequence[str]) -> pd.DataFrame:
for column in columns:
vals = df[column].values
first_non_nan_index = np.ravel(np.nonzero(~np.isnan(vals)))
if len(first_non_nan_index):
first_non_nan_index = first_non_nan_index[0]
else:
first_non_nan_index = -1
if first_non_nan_index > 0 and first_non_nan_index < len(vals):
vals[:first_non_nan_index] = np.nan_to_num(vals[:first_non_nan_index])
df[column] = vals
return df
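`leading_nan_to_zero` zeroes only the NaNs that appear before the first real value, leaving later NaNs intact. A minimal demonstration of that slicing pattern:

```python
import numpy as np

vals = np.array([np.nan, np.nan, 3.0, np.nan, 5.0])
nz = np.ravel(np.nonzero(~np.isnan(vals)))
first = nz[0] if len(nz) else -1
if first > 0:
    # only the leading run of NaNs is replaced with zeros
    vals[:first] = np.nan_to_num(vals[:first])
```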
def find_last_non_nan_index(array: np.ndarray) -> int:
i = np.nonzero(np.isfinite(array))[0]
if len(i): return i[-1]
return 0
def find_index_before(sorted_dict: SortedDict, key: Any) -> int:
size = len(sorted_dict)
if not size: return -1
i = sorted_dict.bisect_left(key)
if i == size: return size - 1
if sorted_dict.keys()[i] != key:
return i - 1
return i
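`find_index_before` implements "index of the last key less than or equal to `key`" over a SortedDict. The same logic can be expressed with the stdlib `bisect` module over a plain sorted list (a sketch of the idea, not a drop-in replacement):

```python
import bisect

def index_at_or_before(keys, key):
    """Index of the last element <= key in sorted `keys`, or -1 if none."""
    i = bisect.bisect_right(keys, key)
    return i - 1
```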
class ContractPNL:
def __init__(self,
contract: Contract,
account_timestamps: np.ndarray,
price_function: Callable[[Contract, np.ndarray, int, SimpleNamespace], float],
strategy_context: SimpleNamespace) -> None:
self.contract = contract
self._price_function = price_function
self.strategy_context = strategy_context
self._account_timestamps = account_timestamps
self._trade_pnl = SortedDict()
self._net_pnl = SortedDict()
self.open_qtys = np.empty(0, dtype=int)
self.open_prices = np.empty(0, dtype=float)
self.first_trade_timestamp = None
self.final_pnl = np.nan
def _add_trades(self, trades: Sequence[Trade]) -> None:
if not len(trades): return
timestamps = [trade.timestamp for trade in trades]
if len(self._trade_pnl):
k, v = self._trade_pnl.peekitem(-1)
if timestamps[0] <= k:
raise Exception(f'Can only add a trade that is newer than last added current: {timestamps[0]} prev max timestamp: {k}')
if self.first_trade_timestamp is None: self.first_trade_timestamp = timestamps[0]
for i, timestamp in enumerate(timestamps):
t_trades = [trade for trade in trades if trade.timestamp == timestamp]
open_qtys, open_prices, open_qty, weighted_avg_price, realized_chg = calc_trade_pnl(
self.open_qtys, self.open_prices,
np.array([trade.qty for trade in t_trades]),
np.array([trade.price for trade in t_trades]),
self.contract.multiplier)
self.open_qtys = open_qtys
self.open_prices = open_prices
position_chg = sum([trade.qty for trade in t_trades])
commission_chg = sum([trade.commission for trade in t_trades])
fee_chg = sum([trade.fee for trade in t_trades])
index = find_index_before(self._trade_pnl, timestamp)
if index == -1:
self._trade_pnl[timestamp] = (position_chg, realized_chg, fee_chg, commission_chg, open_qty, weighted_avg_price)
else:
prev_timestamp, (prev_position, prev_realized, prev_fee, prev_commission, _, _) = self._trade_pnl.peekitem(index)
self._trade_pnl[timestamp] = (prev_position + position_chg, prev_realized + realized_chg,
prev_fee + fee_chg, prev_commission + commission_chg, open_qty, weighted_avg_price)
self.calc_net_pnl(timestamp)
def calc_net_pnl(self, timestamp: np.datetime64) -> None:
if timestamp in self._net_pnl: return
if timestamp < self.first_trade_timestamp: return
if self.contract.expiry is not None and timestamp > self.contract.expiry and not math.isnan(self.final_pnl): return
i = np.searchsorted(self._account_timestamps, timestamp)
assert(self._account_timestamps[i] == timestamp)
trade_pnl_index = find_index_before(self._trade_pnl, timestamp)
if trade_pnl_index == -1:
realized, fee, commission, open_qty, weighted_avg_price = 0, 0, 0, 0, 0
else:
_, (_, realized, fee, commission, open_qty, weighted_avg_price) = self._trade_pnl.peekitem(trade_pnl_index)
price = np.nan
if math.isclose(open_qty, 0):
unrealized = 0
else:
price = self._price_function(self.contract, self._account_timestamps, i, self.strategy_context)
assert np.isreal(price), \
f'Unexpected price type: {price} {type(price)} for contract: {self.contract} timestamp: {self._account_timestamps[i]}'
if math.isnan(price):
index = find_index_before(self._net_pnl, timestamp) # Last index we computed net pnl for
if index == -1:
prev_unrealized = 0
else:
_, (_, prev_unrealized, _) = self._net_pnl.peekitem(index)
unrealized = prev_unrealized
else:
unrealized = open_qty * (price - weighted_avg_price) * self.contract.multiplier
net_pnl = realized + unrealized - commission - fee
self._net_pnl[timestamp] = (price, unrealized, net_pnl)
if self.contract.expiry is not None and timestamp > self.contract.expiry:
self.final_pnl = net_pnl
def position(self, timestamp: np.datetime64) -> float:
index = find_index_before(self._trade_pnl, timestamp)
if index == -1: return 0.
_, (position, _, _, _, _, _) = self._trade_pnl.peekitem(index) # Less than or equal to timestamp
return position
def net_pnl(self, timestamp: np.datetime64) -> float:
if self.contract.expiry is not None and timestamp > self.contract.expiry and not math.isnan(self.final_pnl):
return self.final_pnl
index = find_index_before(self._net_pnl, timestamp)
if index == -1: return 0.
_, (_, _, net_pnl) = self._net_pnl.peekitem(index) # Less than or equal to timestamp
return net_pnl
def pnl(self, timestamp: np.datetime64) -> Tuple[float, float, float, float, float, float, float]:
index = find_index_before(self._trade_pnl, timestamp)
position, realized, fee, commission, price, unrealized, net_pnl = 0, 0, 0, 0, 0, 0, 0
if index != -1:
_, (position, realized, fee, commission, _, _) = self._trade_pnl.peekitem(index) # Less than or equal to timestamp
index = find_index_before(self._net_pnl, timestamp)
if index != -1:
_, (price, unrealized, net_pnl) = self._net_pnl.peekitem(index) # Less than or equal to timestamp
return position, price, realized, unrealized, fee, commission, net_pnl
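`calc_net_pnl` above marks the open position to market as `open_qty * (price - weighted_avg_price) * multiplier` and nets costs off the realized leg. A sketch of that arithmetic with made-up numbers (a short 10 lots opened at an average of 15.1, marked at 13.0):

```python
open_qty = -10.0
weighted_avg_price = 15.1
price = 13.0
multiplier = 1.0

# mark-to-market of the open position
unrealized = open_qty * (price - weighted_avg_price) * multiplier

# hypothetical realized pnl and costs for the same book
realized, commission, fee = 50.0, 0.03, 0.0
net_pnl = realized + unrealized - commission - fee
```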
f720b5f3be28e969cd5ce5fed492f2e66b5c370c | 881 | py | Python | setup.py | bayjan/openrisknet_magkoufopoulou | b1ed6dea48d67243c9ac81eec59e5d7830ca68de | ["MIT"]
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Setup file for openrisknet_magkoufopoulou.
This file was generated with PyScaffold 3.0.3.
PyScaffold helps you to put up the scaffold of your new Python project.
Learn more under: http://pyscaffold.org/
"""
import sys
from setuptools import setup
# Add here console scripts and other entry points in ini-style format
entry_points = """
[console_scripts]
# script_name = openrisknet_magkoufopoulou.module:function
# For example:
# fibonacci = openrisknet_magkoufopoulou.skeleton:run
"""
def setup_package():
needs_sphinx = {'build_sphinx', 'upload_docs'}.intersection(sys.argv)
sphinx = ['sphinx'] if needs_sphinx else []
setup(setup_requires=['pyscaffold>=3.0a0,<3.1a0'] + sphinx,
entry_points=entry_points,
use_pyscaffold=True)
if __name__ == "__main__":
setup_package()
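The `entry_points` string above uses setuptools' ini-style format: each `console_scripts` entry maps a command name to a `module:function` target. A small sketch of how such an entry parses, using the `fibonacci` example from the comment in the file (the mapping itself is hypothetical):

```python
import configparser

entry_points = """
[console_scripts]
fibonacci = openrisknet_magkoufopoulou.skeleton:run
"""

cp = configparser.ConfigParser()
cp.read_string(entry_points)
target = cp['console_scripts']['fibonacci']
module, func = target.split(':')  # importable module path and callable name
```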
| 26.69697 | 75 | 0.713961 |
import sys
from setuptools import setup
entry_points = """
[console_scripts]
# script_name = openrisknet_magkoufopoulou.module:function
# For example:
# fibonacci = openrisknet_magkoufopoulou.skeleton:run
"""
def setup_package():
needs_sphinx = {'build_sphinx', 'upload_docs'}.intersection(sys.argv)
sphinx = ['sphinx'] if needs_sphinx else []
setup(setup_requires=['pyscaffold>=3.0a0,<3.1a0'] + sphinx,
entry_points=entry_points,
use_pyscaffold=True)
if __name__ == "__main__":
setup_package()
| true | true |
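The `entry_points` value in the setup.py above is an ini-style string; a sketch that fills in the `fibonacci` example from its own comments and parses it with `configparser` to show the shape setuptools expects:

```python
import configparser

# Hypothetical filled-in console_scripts section, in the same
# ini-style format that setuptools accepts.
entry_points = """
[console_scripts]
fibonacci = openrisknet_magkoufopoulou.skeleton:run
"""

cp = configparser.ConfigParser()
cp.read_string(entry_points)
assert cp["console_scripts"]["fibonacci"] == "openrisknet_magkoufopoulou.skeleton:run"
```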
f720b71a384cd705368c6959d78e6566a4530fc2 | 349 | py | Python | materials/sp20/hw/hw01/tests/q9.py | ds-modules/Deepnote-demo | 548c12ced6cae774ecd0036aa1e8bb833af6472c | [
"BSD-3-Clause"
] | null | null | null | materials/sp20/hw/hw01/tests/q9.py | ds-modules/Deepnote-demo | 548c12ced6cae774ecd0036aa1e8bb833af6472c | [
"BSD-3-Clause"
] | null | null | null | materials/sp20/hw/hw01/tests/q9.py | ds-modules/Deepnote-demo | 548c12ced6cae774ecd0036aa1e8bb833af6472c | [
"BSD-3-Clause"
] | null | null | null | test = {
'name': 'q9',
'points': 1,
'suites': [
{
'cases': [
{
'code': r"""
>>> survey == "2020 vision"
True
""",
'hidden': False,
'locked': False
}
],
'scored': True,
'setup': '',
'teardown': '',
'type': 'doctest'
}
]
}
| 15.173913 | 37 | 0.312321 | test = {
'name': 'q9',
'points': 1,
'suites': [
{
'cases': [
{
'code': r"""
>>> survey == "2020 vision"
True
""",
'hidden': False,
'locked': False
}
],
'scored': True,
'setup': '',
'teardown': '',
'type': 'doctest'
}
]
}
| true | true |
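The `code` field in the test spec above holds a doctest snippet. A sketch of executing such a snippet with the stdlib `doctest` machinery (the grading framework's real runner is assumed, not shown):

```python
import doctest

# Build and run a doctest in the same style as the 'code' field.
globs = {"survey": "2020 vision"}
test = doctest.DocTestParser().get_doctest(
    '>>> survey == "2020 vision"\nTrue\n', globs, "q9", None, 0)
runner = doctest.DocTestRunner(verbose=False)
runner.run(test)
assert runner.failures == 0   # the example passed
```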
f720b7d1ae3ebf2758b3f637eac569a944ecce67 | 291 | py | Python | utils/test_clear_data.py | M1d0r1/py_mantis | 8d2b05601b9240e76e2e07b50770e39df5bcade9 | [
"Apache-2.0"
] | null | null | null | utils/test_clear_data.py | M1d0r1/py_mantis | 8d2b05601b9240e76e2e07b50770e39df5bcade9 | [
"Apache-2.0"
] | null | null | null | utils/test_clear_data.py | M1d0r1/py_mantis | 8d2b05601b9240e76e2e07b50770e39df5bcade9 | [
"Apache-2.0"
] | null | null | null | import random
def test_clear_projects_helper(app):
while app.project.count()>0:
app.project.navigate_to_manage_projects_page()
old_projects = app.project.get_project_list()
project = random.choice(old_projects)
app.project.delete_by_name(project.name)
| 26.454545 | 54 | 0.725086 | import random
def test_clear_projects_helper(app):
    while app.project.count() > 0:
app.project.navigate_to_manage_projects_page()
old_projects = app.project.get_project_list()
project = random.choice(old_projects)
app.project.delete_by_name(project.name)
| true | true |
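The helper above deletes a randomly chosen project until none remain; the `app` fixture it uses comes from the surrounding test framework. The same loop shape on a plain list:

```python
import random

projects = ["alpha", "beta", "gamma"]    # stand-in for the project list
while len(projects) > 0:
    project = random.choice(projects)    # pick one at random
    projects.remove(project)             # analogous to delete_by_name

assert projects == []                    # everything cleared
```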
f720b849c7ccaf49918d5f8db7e3b19f11f3203f | 5,729 | py | Python | tempest/api/compute/admin/test_servers_negative.py | rishabh20111990/tempest | df15531cd4231000b0da016f5cd8641523ce984e | [
"Apache-2.0"
] | 2 | 2015-08-13T00:07:49.000Z | 2020-08-07T06:38:44.000Z | tempest/api/compute/admin/test_servers_negative.py | rishabh20111990/tempest | df15531cd4231000b0da016f5cd8641523ce984e | [
"Apache-2.0"
] | 1 | 2019-08-08T10:36:44.000Z | 2019-08-09T05:58:23.000Z | tempest/api/compute/admin/test_servers_negative.py | rishabh20111990/tempest | df15531cd4231000b0da016f5cd8641523ce984e | [
"Apache-2.0"
] | 5 | 2016-06-24T20:03:52.000Z | 2020-02-05T10:14:54.000Z | # Copyright 2013 Huawei Technologies Co.,LTD.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools
from tempest.api.compute import base
from tempest.common import tempest_fixtures as fixtures
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
CONF = config.CONF
class ServersAdminNegativeTestJSON(base.BaseV2ComputeAdminTest):
"""Tests Servers API using admin privileges"""
@classmethod
def setup_clients(cls):
super(ServersAdminNegativeTestJSON, cls).setup_clients()
cls.client = cls.os_admin.servers_client
cls.quotas_client = cls.os_admin.quotas_client
@classmethod
def resource_setup(cls):
super(ServersAdminNegativeTestJSON, cls).resource_setup()
cls.tenant_id = cls.client.tenant_id
server = cls.create_test_server(wait_until='ACTIVE')
cls.s1_id = server['id']
@decorators.idempotent_id('28dcec23-f807-49da-822c-56a92ea3c687')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize not available.')
@decorators.attr(type=['negative'])
def test_resize_server_using_overlimit_ram(self):
# NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
self.useFixture(fixtures.LockFixture('compute_quotas'))
quota_set = self.quotas_client.show_quota_set(
self.tenant_id)['quota_set']
ram = quota_set['ram']
if ram == -1:
raise self.skipException("ram quota set is -1,"
" cannot test overlimit")
ram += 1
vcpus = 1
disk = 5
flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
self.client.resize_server,
self.s1_id,
flavor_ref['id'])
@decorators.idempotent_id('7368a427-2f26-4ad9-9ba9-911a0ec2b0db')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize not available.')
@decorators.attr(type=['negative'])
def test_resize_server_using_overlimit_vcpus(self):
# NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
self.useFixture(fixtures.LockFixture('compute_quotas'))
quota_set = self.quotas_client.show_quota_set(
self.tenant_id)['quota_set']
vcpus = quota_set['cores']
if vcpus == -1:
raise self.skipException("cores quota set is -1,"
" cannot test overlimit")
vcpus += 1
ram = 512
disk = 5
flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
self.client.resize_server,
self.s1_id,
flavor_ref['id'])
@decorators.attr(type=['negative'])
@decorators.idempotent_id('b0b4d8af-1256-41ef-9ee7-25f1c19dde80')
def test_reset_state_server_invalid_state(self):
self.assertRaises(lib_exc.BadRequest,
self.client.reset_state, self.s1_id,
state='invalid')
@decorators.attr(type=['negative'])
@decorators.idempotent_id('4cdcc984-fab0-4577-9a9d-6d558527ee9d')
def test_reset_state_server_invalid_type(self):
self.assertRaises(lib_exc.BadRequest,
self.client.reset_state, self.s1_id,
state=1)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('e741298b-8df2-46f0-81cb-8f814ff2504c')
def test_reset_state_server_nonexistent_server(self):
self.assertRaises(lib_exc.NotFound,
self.client.reset_state, '999', state='error')
@decorators.attr(type=['negative'])
@decorators.idempotent_id('46a4e1ca-87ae-4d28-987a-1b6b136a0221')
def test_migrate_non_existent_server(self):
# migrate a non existent server
self.assertRaises(lib_exc.NotFound,
self.client.migrate_server,
data_utils.rand_uuid())
@decorators.idempotent_id('b0b17f83-d14e-4fc4-8f31-bcc9f3cfa629')
@testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
'Cold migration not available.')
@testtools.skipUnless(CONF.compute_feature_enabled.suspend,
'Suspend is not available.')
@decorators.attr(type=['negative'])
def test_migrate_server_invalid_state(self):
# create server.
server = self.create_test_server(wait_until='ACTIVE')
server_id = server['id']
# suspend the server.
self.client.suspend_server(server_id)
waiters.wait_for_server_status(self.client,
server_id, 'SUSPENDED')
# migrate a suspended server should fail
self.assertRaises(lib_exc.Conflict,
self.client.migrate_server,
server_id)
| 42.437037 | 78 | 0.64479 |
import testtools
from tempest.api.compute import base
from tempest.common import tempest_fixtures as fixtures
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
CONF = config.CONF
class ServersAdminNegativeTestJSON(base.BaseV2ComputeAdminTest):
@classmethod
def setup_clients(cls):
super(ServersAdminNegativeTestJSON, cls).setup_clients()
cls.client = cls.os_admin.servers_client
cls.quotas_client = cls.os_admin.quotas_client
@classmethod
def resource_setup(cls):
super(ServersAdminNegativeTestJSON, cls).resource_setup()
cls.tenant_id = cls.client.tenant_id
server = cls.create_test_server(wait_until='ACTIVE')
cls.s1_id = server['id']
@decorators.idempotent_id('28dcec23-f807-49da-822c-56a92ea3c687')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize not available.')
@decorators.attr(type=['negative'])
def test_resize_server_using_overlimit_ram(self):
self.useFixture(fixtures.LockFixture('compute_quotas'))
quota_set = self.quotas_client.show_quota_set(
self.tenant_id)['quota_set']
ram = quota_set['ram']
if ram == -1:
raise self.skipException("ram quota set is -1,"
" cannot test overlimit")
ram += 1
vcpus = 1
disk = 5
flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
self.client.resize_server,
self.s1_id,
flavor_ref['id'])
@decorators.idempotent_id('7368a427-2f26-4ad9-9ba9-911a0ec2b0db')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize not available.')
@decorators.attr(type=['negative'])
def test_resize_server_using_overlimit_vcpus(self):
self.useFixture(fixtures.LockFixture('compute_quotas'))
quota_set = self.quotas_client.show_quota_set(
self.tenant_id)['quota_set']
vcpus = quota_set['cores']
if vcpus == -1:
raise self.skipException("cores quota set is -1,"
" cannot test overlimit")
vcpus += 1
ram = 512
disk = 5
flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
self.client.resize_server,
self.s1_id,
flavor_ref['id'])
@decorators.attr(type=['negative'])
@decorators.idempotent_id('b0b4d8af-1256-41ef-9ee7-25f1c19dde80')
def test_reset_state_server_invalid_state(self):
self.assertRaises(lib_exc.BadRequest,
self.client.reset_state, self.s1_id,
state='invalid')
@decorators.attr(type=['negative'])
@decorators.idempotent_id('4cdcc984-fab0-4577-9a9d-6d558527ee9d')
def test_reset_state_server_invalid_type(self):
self.assertRaises(lib_exc.BadRequest,
self.client.reset_state, self.s1_id,
state=1)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('e741298b-8df2-46f0-81cb-8f814ff2504c')
def test_reset_state_server_nonexistent_server(self):
self.assertRaises(lib_exc.NotFound,
self.client.reset_state, '999', state='error')
@decorators.attr(type=['negative'])
@decorators.idempotent_id('46a4e1ca-87ae-4d28-987a-1b6b136a0221')
def test_migrate_non_existent_server(self):
self.assertRaises(lib_exc.NotFound,
self.client.migrate_server,
data_utils.rand_uuid())
@decorators.idempotent_id('b0b17f83-d14e-4fc4-8f31-bcc9f3cfa629')
@testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
'Cold migration not available.')
@testtools.skipUnless(CONF.compute_feature_enabled.suspend,
'Suspend is not available.')
@decorators.attr(type=['negative'])
def test_migrate_server_invalid_state(self):
server = self.create_test_server(wait_until='ACTIVE')
server_id = server['id']
self.client.suspend_server(server_id)
waiters.wait_for_server_status(self.client,
server_id, 'SUSPENDED')
self.assertRaises(lib_exc.Conflict,
self.client.migrate_server,
server_id)
| true | true |
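The negative tests above call `assertRaises` with the expected exception, a callable, and its arguments, rather than a `with` block. A standalone sketch of that idiom:

```python
import unittest

class DemoNegativeTest(unittest.TestCase):
    def test_invalid_input_rejected(self):
        # Same call shape as assertRaises(ExcClass, func, *args) above.
        self.assertRaises(ValueError, int, "not-a-number")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DemoNegativeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```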
f720b868a2e8693f457acc29e9d2ffcfcf7e2f08 | 2,174 | py | Python | debug/ssd/test_ssd300.py | jjjkkkjjj/pytorch.dl | d82aa1191c14f328c62de85e391ac6fa1b4c7ee3 | [
"MIT"
] | 2 | 2021-02-06T22:40:13.000Z | 2021-03-26T09:15:34.000Z | debug/ssd/test_ssd300.py | jjjkkkjjj/pytorch.dl | d82aa1191c14f328c62de85e391ac6fa1b4c7ee3 | [
"MIT"
] | 8 | 2020-07-11T07:10:51.000Z | 2022-03-12T00:39:03.000Z | debug/ssd/test_ssd300.py | jjjkkkjjj/pytorch.dl | d82aa1191c14f328c62de85e391ac6fa1b4c7ee3 | [
"MIT"
] | 2 | 2021-03-26T09:19:42.000Z | 2021-07-27T02:38:09.000Z | from dl.data.objdetn import datasets, utils, target_transforms
from dl.data import transforms
from dl.models.ssd.ssd300 import SSD300
from dl.data.utils.converter import toVisualizeRectLabelRGBimg
from torch.utils.data import DataLoader
import cv2
if __name__ == '__main__':
augmentation = None
transform = transforms.Compose(
[transforms.Resize((300, 300)),
transforms.ToTensor(),
transforms.Normalize(rgb_means=(0.485, 0.456, 0.406), rgb_stds=(0.229, 0.224, 0.225))]
)
target_transform = target_transforms.Compose(
[target_transforms.Corners2Centroids(),
target_transforms.OneHot(class_nums=datasets.VOC_class_nums, add_background=True),
target_transforms.ToTensor()]
)
test_dataset = datasets.VOC2007_TestDataset(transform=transform, target_transform=target_transform, augmentation=augmentation)
test_loader = DataLoader(test_dataset,
batch_size=32,
shuffle=True,
collate_fn=utils.batch_ind_fn,
num_workers=4,
pin_memory=False)
model = SSD300(class_labels=datasets.VOC_class_labels, batch_norm=False)
model.load_weights('../../weights/ssd300-voc2007+12+coco/ssd300-voc2007+2012+coco_i-0025000_checkpoints20200611.pth')
#model.load_for_finetune('./weights/ssd300-voc2007+12+coco/ssd300-voc2007+2012+coco_i-30000.pth')
model.eval()
print(model)
#evaluator = VOC2007Evaluator(test_loader, iteration_interval=5000)
#ap = evaluator(model)
#print(ap)
image = cv2.cvtColor(cv2.imread('../../scripts/ssd/assets/coco_testimg.jpg'), cv2.COLOR_BGR2RGB)
infers, imgs, orig_imgs = model.infer(image, visualize=True, toNorm=True)
for i, img in enumerate(imgs):
cv2.imshow('result', cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
cv2.waitKey()
images = [test_dataset[i][0] for i in range(20)]
inf, ret_imgs, orig_imgs = model.infer(images, visualize=True, toNorm=False)
for img in ret_imgs:
cv2.imshow('result', cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
cv2.waitKey() | 43.48 | 130 | 0.677553 | from dl.data.objdetn import datasets, utils, target_transforms
from dl.data import transforms
from dl.models.ssd.ssd300 import SSD300
from dl.data.utils.converter import toVisualizeRectLabelRGBimg
from torch.utils.data import DataLoader
import cv2
if __name__ == '__main__':
augmentation = None
transform = transforms.Compose(
[transforms.Resize((300, 300)),
transforms.ToTensor(),
transforms.Normalize(rgb_means=(0.485, 0.456, 0.406), rgb_stds=(0.229, 0.224, 0.225))]
)
target_transform = target_transforms.Compose(
[target_transforms.Corners2Centroids(),
target_transforms.OneHot(class_nums=datasets.VOC_class_nums, add_background=True),
target_transforms.ToTensor()]
)
test_dataset = datasets.VOC2007_TestDataset(transform=transform, target_transform=target_transform, augmentation=augmentation)
test_loader = DataLoader(test_dataset,
batch_size=32,
shuffle=True,
collate_fn=utils.batch_ind_fn,
num_workers=4,
pin_memory=False)
model = SSD300(class_labels=datasets.VOC_class_labels, batch_norm=False)
model.load_weights('../../weights/ssd300-voc2007+12+coco/ssd300-voc2007+2012+coco_i-0025000_checkpoints20200611.pth')
model.eval()
print(model)
image = cv2.cvtColor(cv2.imread('../../scripts/ssd/assets/coco_testimg.jpg'), cv2.COLOR_BGR2RGB)
infers, imgs, orig_imgs = model.infer(image, visualize=True, toNorm=True)
for i, img in enumerate(imgs):
cv2.imshow('result', cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
cv2.waitKey()
images = [test_dataset[i][0] for i in range(20)]
inf, ret_imgs, orig_imgs = model.infer(images, visualize=True, toNorm=False)
for img in ret_imgs:
cv2.imshow('result', cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
cv2.waitKey() | true | true |
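The `Normalize` transform in the pipeline above applies per-channel `(x - mean) / std` with the ImageNet statistics it is given. A numpy sketch of that step on a dummy image:

```python
import numpy as np

rgb_means = np.array([0.485, 0.456, 0.406])
rgb_stds = np.array([0.229, 0.224, 0.225])

# Dummy 300x300 RGB image already scaled to [0, 1].
img = np.full((300, 300, 3), 0.5)
normalized = (img - rgb_means) / rgb_stds   # broadcasts over channels

assert normalized.shape == (300, 300, 3)
assert np.allclose(normalized[0, 0], (0.5 - rgb_means) / rgb_stds)
```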
f720b9d8103adc5ec7d583ef9b481eed71f4b5ce | 4,118 | py | Python | cinder/api/v3/backups.py | hashsos/hashcloudos-cinder | 6d8b648399e2160b419e3f9535eb520c7de9120e | [
"Apache-2.0"
] | null | null | null | cinder/api/v3/backups.py | hashsos/hashcloudos-cinder | 6d8b648399e2160b419e3f9535eb520c7de9120e | [
"Apache-2.0"
] | null | null | null | cinder/api/v3/backups.py | hashsos/hashcloudos-cinder | 6d8b648399e2160b419e3f9535eb520c7de9120e | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2016 Intel, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""The backups V3 API."""
from oslo_log import log as logging
from webob import exc
from cinder.api.contrib import backups as backups_v2
from cinder.api import microversions as mv
from cinder.api.openstack import wsgi
from cinder.api.v3.views import backups as backup_views
from cinder import exception
from cinder.i18n import _
from cinder.policies import backups as policy
LOG = logging.getLogger(__name__)
class BackupsController(backups_v2.BackupsController):
"""The backups API controller for the OpenStack API V3."""
_view_builder_class = backup_views.ViewBuilder
@wsgi.Controller.api_version(mv.BACKUP_UPDATE)
def update(self, req, id, body):
"""Update a backup."""
context = req.environ['cinder.context']
self.assert_valid_body(body, 'backup')
req_version = req.api_version_request
backup_update = body['backup']
self.validate_name_and_description(backup_update)
update_dict = {}
if 'name' in backup_update:
update_dict['display_name'] = backup_update.pop('name')
if 'description' in backup_update:
update_dict['display_description'] = (
backup_update.pop('description'))
if (req_version.matches(
mv.BACKUP_METADATA) and 'metadata' in backup_update):
update_dict['metadata'] = backup_update.pop('metadata')
# Check no unsupported fields.
if backup_update:
msg = _("Unsupported fields %s.") % (", ".join(backup_update))
raise exc.HTTPBadRequest(explanation=msg)
new_backup = self.backup_api.update(context, id, update_dict)
return self._view_builder.summary(req, new_backup)
def _add_backup_project_attribute(self, req, backup):
db_backup = req.get_db_backup(backup['id'])
key = "os-backup-project-attr:project_id"
backup[key] = db_backup['project_id']
def show(self, req, id):
"""Return data about the given backup."""
LOG.debug('Show backup with id %s.', id)
context = req.environ['cinder.context']
req_version = req.api_version_request
# Not found exception will be handled at the wsgi level
backup = self.backup_api.get(context, backup_id=id)
req.cache_db_backup(backup)
resp_backup = self._view_builder.detail(req, backup)
if req_version.matches(mv.BACKUP_PROJECT):
try:
context.authorize(policy.BACKUP_ATTRIBUTES_POLICY)
self._add_backup_project_attribute(req, resp_backup['backup'])
except exception.PolicyNotAuthorized:
pass
return resp_backup
def detail(self, req):
resp_backup = super(BackupsController, self).detail(req)
context = req.environ['cinder.context']
req_version = req.api_version_request
if req_version.matches(mv.BACKUP_PROJECT):
try:
context.authorize(policy.BACKUP_ATTRIBUTES_POLICY)
for bak in resp_backup['backups']:
self._add_backup_project_attribute(req, bak)
except exception.PolicyNotAuthorized:
pass
return resp_backup
def _convert_sort_name(self, req_version, sort_keys):
if req_version.matches(mv.BACKUP_SORT_NAME) and 'name' in sort_keys:
sort_keys[sort_keys.index('name')] = 'display_name'
def create_resource():
return wsgi.Resource(BackupsController())
| 37.099099 | 78 | 0.674114 |
from oslo_log import log as logging
from webob import exc
from cinder.api.contrib import backups as backups_v2
from cinder.api import microversions as mv
from cinder.api.openstack import wsgi
from cinder.api.v3.views import backups as backup_views
from cinder import exception
from cinder.i18n import _
from cinder.policies import backups as policy
LOG = logging.getLogger(__name__)
class BackupsController(backups_v2.BackupsController):
_view_builder_class = backup_views.ViewBuilder
@wsgi.Controller.api_version(mv.BACKUP_UPDATE)
def update(self, req, id, body):
context = req.environ['cinder.context']
self.assert_valid_body(body, 'backup')
req_version = req.api_version_request
backup_update = body['backup']
self.validate_name_and_description(backup_update)
update_dict = {}
if 'name' in backup_update:
update_dict['display_name'] = backup_update.pop('name')
if 'description' in backup_update:
update_dict['display_description'] = (
backup_update.pop('description'))
if (req_version.matches(
mv.BACKUP_METADATA) and 'metadata' in backup_update):
update_dict['metadata'] = backup_update.pop('metadata')
if backup_update:
msg = _("Unsupported fields %s.") % (", ".join(backup_update))
raise exc.HTTPBadRequest(explanation=msg)
new_backup = self.backup_api.update(context, id, update_dict)
return self._view_builder.summary(req, new_backup)
def _add_backup_project_attribute(self, req, backup):
db_backup = req.get_db_backup(backup['id'])
key = "os-backup-project-attr:project_id"
backup[key] = db_backup['project_id']
def show(self, req, id):
LOG.debug('Show backup with id %s.', id)
context = req.environ['cinder.context']
req_version = req.api_version_request
backup = self.backup_api.get(context, backup_id=id)
req.cache_db_backup(backup)
resp_backup = self._view_builder.detail(req, backup)
if req_version.matches(mv.BACKUP_PROJECT):
try:
context.authorize(policy.BACKUP_ATTRIBUTES_POLICY)
self._add_backup_project_attribute(req, resp_backup['backup'])
except exception.PolicyNotAuthorized:
pass
return resp_backup
def detail(self, req):
resp_backup = super(BackupsController, self).detail(req)
context = req.environ['cinder.context']
req_version = req.api_version_request
if req_version.matches(mv.BACKUP_PROJECT):
try:
context.authorize(policy.BACKUP_ATTRIBUTES_POLICY)
for bak in resp_backup['backups']:
self._add_backup_project_attribute(req, bak)
except exception.PolicyNotAuthorized:
pass
return resp_backup
def _convert_sort_name(self, req_version, sort_keys):
if req_version.matches(mv.BACKUP_SORT_NAME) and 'name' in sort_keys:
sort_keys[sort_keys.index('name')] = 'display_name'
def create_resource():
return wsgi.Resource(BackupsController())
| true | true |
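The `update` method above pops each known field out of the request body and rejects anything left over. The same pop-and-validate pattern in isolation (field values here are illustrative):

```python
backup_update = {"name": "nightly", "description": "full backup"}
update_dict = {}

if "name" in backup_update:
    update_dict["display_name"] = backup_update.pop("name")
if "description" in backup_update:
    update_dict["display_description"] = backup_update.pop("description")

# Anything still left in backup_update would be an unsupported field.
assert not backup_update
assert update_dict == {"display_name": "nightly",
                       "display_description": "full backup"}
```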
f720ba2b3e7006741b82f2fe08ab0e27de7bf237 | 8,422 | py | Python | discordware/_vendors/hype/parser.py | znqi/discordware | e456bf7b0314ef8f29fabb9fa69f8c979f34d655 | [
"MIT"
] | 13 | 2021-07-31T12:07:06.000Z | 2022-03-24T15:00:50.000Z | discordware/_vendors/hype/parser.py | znqi/discordware | e456bf7b0314ef8f29fabb9fa69f8c979f34d655 | [
"MIT"
] | 2 | 2021-08-02T14:04:58.000Z | 2021-09-06T09:35:20.000Z | discordware/_vendors/hype/parser.py | znqi/discordware | e456bf7b0314ef8f29fabb9fa69f8c979f34d655 | [
"MIT"
] | 3 | 2021-08-07T13:23:54.000Z | 2022-01-24T13:23:08.000Z |
# Copyright (c) 2021, Serum Studio
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from typing import List
from hype.command import HypeCommand
import optparse
from optparse import HelpFormatter
import sys
import textwrap
class HypeParser(optparse.OptionParser):
"""
    A command parser for the Hype CLI, built on top of
    `optparse.OptionParser`. This parser is very similar to
    OptionParser; the only difference is the commands.
Parameters:
commands (list):
A list of all HypeCommands
**options (dict):
A dictionary of kwargs
Example:
>>> greet = HypeCommand('greet', help="%prog [OPTIONS]")
>>> greet.add_option('--name', type=str)
>>> ...
>>> goodbye = HypeCommand('goodbye', help="%prog [OPTIONS]")
>>> goodbye.add_option('--name', type=str)
>>> ...
>>> parser = HypeParser( commands=(greet, goodbye) )
>>> options, commands, command_opt, args = parser.parse_args()
"""
    _HelpCommand = HypeCommand('help', help="All details about the commands", aliases=('?',))
def __init__(
self,
commands: List[HypeCommand] = [],
*args,
**options
):
self.commands = commands
self.options = options
if 'usage' not in self.options:
self.options['usage'] = "%prog COMMAND [ARGS..]\n%prog help COMMAND"
super(HypeParser, self).__init__(*args, **options)
for command in self.commands:
command.parser.prog = "%s %s" % (self.get_prog_name(), command.name)
self.disable_interspersed_args()
def add_command(self, cmd: HypeCommand):
"""
Add a command.
Parameters
---
cmd (HypeCommand):
            The command to be added.
Example:
>>> goodbye = HypeCommand(..)
            >>> parser = HypeParser(...)
>>> ...
>>> parser.add_command(goodbye)
"""
if not isinstance(cmd, HypeCommand):
            raise TypeError('{} is not an instance of HypeCommand'.format(cmd))
self.commands.append(cmd)
def remove_command(self, name: str):
"""
        Remove the command with the given name from the list of registered commands.
Parameters
---
name (str):
The name of the command to be removed.
Example:
>>> goodbye = HypeCommand(..)
            >>> parser = HypeParser(...)
>>> ...
>>> parser.add_command(goodbye)
>>> parser.remove_command('goodbye')
"""
for command in self.commands:
if command.name == name:
self.commands.remove(command)
def format_help(self, formatter=None) -> str:
out = optparse.OptionParser.format_help(self, formatter)
        if formatter is None:
formatter = self.formatter
#: HEADER for the Help command
result = ['\n']
result.append(formatter.format_heading('Commands'))
formatter.indent()
display_names = []
help_position = 0
for command in self.commands:
name = command.name
if command.aliases:
#: Add aliases of the command
name += ' (%s)' % (', '.join(command.aliases))
display_names.append(name)
#: Set the help position based on the max width.
proposed_help_position = len(name) + formatter.current_indent + 2
if proposed_help_position <= formatter.max_help_position:
help_position = max(help_position, proposed_help_position)
#: Add the command to the output
for command, name in zip(self.commands, display_names):
#: From optparse.py
name_width = help_position - formatter.current_indent - 2
if len(name) > name_width:
name = "%*s%s\n" % (formatter.current_indent, "", name)
indent_first = help_position
else:
name = "%*s%-*s " % (formatter.current_indent, "",
name_width, name)
indent_first = 0
result.append(name)
help_width = formatter.width - help_position
help_lines = textwrap.wrap(command.help, help_width)
result.append("%*s%s\n" % (indent_first, "", help_lines[0]))
result.extend(["%*s%s\n" % (help_position, "", line)
for line in help_lines[1:]])
result += ['\n']
formatter.dedent()
# Concatenate the original help message with the command list.
return out + "".join(result)
def __command_for_name(self, name):
"""
Return the command in self.commands matching the
given name. The name may either be the name of a subcommand or
an alias. If no subcommand matches, returns None.
Parameters:
name (str):
The name of the command to be matched.
"""
_command = None
for command in self.commands:
try:
if name == command.name or name in command.aliases:
_command = command
except TypeError:
pass
return _command
def parse_args(self, _args=None, _value=None):
"""
        Just like `parse_args` from OptionParser, but returns some additional values.
        Added values:
---
        - options: the options passed to the root parser
        - command: the command object that was invoked
        - command_opt: the options parsed by the command parser
        - command_args: the positional arguments passed to the subcommand
Parameters:
---
_args (any):
inherited from `optparse.OptionParser.parse_args`
_value (any):
inherited from `optparse.OptionParser.parse_args`
Example:
---
>>> parser = HypeParser(...)
>>> parser.add_option(...)
>>> ...
>>> options, command, \
... command_opt, command_args = parser.parse_args()
"""
self.commands.insert(len(self.commands), self._HelpCommand)
options, args = optparse.OptionParser.parse_args(self, _args, _value)
if not args:
# No command given, show the help message
self.print_help()
self.exit()
else:
command_name = args.pop(0)
command = self.__command_for_name(command_name)
if not command:
self.error('Unknown Command: {}'.format(command_name))
command_opt, command_args = command.parser.parse_args(args)
if command is self._HelpCommand:
if command_args:
command_name = command_args[0]
#: Check for the help command on the command arguments.
helpcommand = self.__command_for_name(command_name)
helpcommand.parser.print_help()
self.exit()
else:
self.print_help()
self.exit()
return options, command, command_opt, command_args
| 31.425373 | 92 | 0.572786 |
from typing import List
from hype.command import HypeCommand
import optparse
from optparse import HelpFormatter
import sys
import textwrap
class HypeParser(optparse.OptionParser):
    _HelpCommand = HypeCommand('help', help="All details about the commands", aliases=('?',))
def __init__(
self,
commands: List[HypeCommand] = [],
*args,
**options
):
self.commands = commands
self.options = options
if 'usage' not in self.options:
self.options['usage'] = "%prog COMMAND [ARGS..]\n%prog help COMMAND"
super(HypeParser, self).__init__(*args, **options)
for command in self.commands:
command.parser.prog = "%s %s" % (self.get_prog_name(), command.name)
self.disable_interspersed_args()
def add_command(self, cmd: HypeCommand):
if not isinstance(cmd, HypeCommand):
            raise TypeError('{} is not an instance of HypeCommand'.format(cmd))
self.commands.append(cmd)
def remove_command(self, name: str):
for command in self.commands:
if command.name == name:
self.commands.remove(command)
def format_help(self, formatter=None) -> str:
out = optparse.OptionParser.format_help(self, formatter)
        if formatter is None:
formatter = self.formatter
result = ['\n']
result.append(formatter.format_heading('Commands'))
formatter.indent()
display_names = []
help_position = 0
for command in self.commands:
name = command.name
if command.aliases:
name += ' (%s)' % (', '.join(command.aliases))
display_names.append(name)
proposed_help_position = len(name) + formatter.current_indent + 2
if proposed_help_position <= formatter.max_help_position:
help_position = max(help_position, proposed_help_position)
for command, name in zip(self.commands, display_names):
name_width = help_position - formatter.current_indent - 2
if len(name) > name_width:
name = "%*s%s\n" % (formatter.current_indent, "", name)
indent_first = help_position
else:
name = "%*s%-*s " % (formatter.current_indent, "",
name_width, name)
indent_first = 0
result.append(name)
help_width = formatter.width - help_position
help_lines = textwrap.wrap(command.help, help_width)
result.append("%*s%s\n" % (indent_first, "", help_lines[0]))
result.extend(["%*s%s\n" % (help_position, "", line)
for line in help_lines[1:]])
result += ['\n']
formatter.dedent()
return out + "".join(result)
def __command_for_name(self, name):
_command = None
for command in self.commands:
try:
if name == command.name or name in command.aliases:
_command = command
except TypeError:
pass
return _command
def parse_args(self, _args=None, _value=None):
self.commands.insert(len(self.commands), self._HelpCommand)
options, args = optparse.OptionParser.parse_args(self, _args, _value)
if not args:
self.print_help()
self.exit()
else:
command_name = args.pop(0)
command = self.__command_for_name(command_name)
if not command:
self.error('Unknown Command: {}'.format(command_name))
command_opt, command_args = command.parser.parse_args(args)
if command is self._HelpCommand:
if command_args:
command_name = command_args[0]
helpcommand = self.__command_for_name(command_name)
helpcommand.parser.print_help()
self.exit()
else:
self.print_help()
self.exit()
return options, command, command_opt, command_args
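The alias lookup in `__command_for_name` above matches a command by its primary name first and then by any registered alias, tolerating commands whose `aliases` is `None` by catching `TypeError`. A minimal stand-alone sketch of the same lookup (the `Command` class and function name here are illustrative, not from the original):

```python
class Command:
    def __init__(self, name, aliases=()):
        self.name = name
        self.aliases = aliases

def command_for_name(commands, name):
    # Mirror of __command_for_name: match the primary name first,
    # then fall back to any registered alias; None means "not found".
    for command in commands:
        if name == command.name or name in (command.aliases or ()):
            return command
    return None

cmds = [Command("list", ("ls",)), Command("remove", ("rm", "del"))]
found = command_for_name(cmds, "rm")
```

Returning `None` for unknown names is what lets `parse_args` above distinguish a bad command and call `self.error(...)`.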
f720ba72a3c86311008ec04f4371a49d7784b17c | 433 | py | Python | xlwt/__init__.py | drmelectronic/MIT | e28a82cd02dcc52ac233b89b43f29ede00993d11 | MIT
# -*- coding: windows-1252 -*-
__VERSION__ = '0.7.4'
import sys
if sys.version_info[:2] < (2, 3):
print >> sys.stderr, "Sorry, xlwt requires Python 2.3 or later"
sys.exit(1)
from Workbook import Workbook
from Worksheet import Worksheet
from Row import Row
from Column import Column
from Formatting import Font, Alignment, Borders, Pattern, Protection
from Style import XFStyle, easyxf, easyfont
from ExcelFormula import *
f720bb34265c748c8a67a6a2025eb32ff567dad4 | 773 | py | Python | src/tools/reshape.py | Lin-Lei/CenterNet | 0778dfcf4fb8e5b013dda7ab8c680f232ca851b1 | MIT
# -*- coding: utf-8 -*-
"""
Created on Thu Aug 23 16:06:35 2018
@author: libo
"""
from PIL import Image
import os
def image_resize(image_path, new_path): # 统一图片尺寸
print('============>>修改图片尺寸')
for img_name in os.listdir(image_path):
img_path = image_path + "/" + img_name # 获取该图片全称
image = Image.open(img_path) # 打开特定一张图片
image = image.resize((512, 512)) # 设置需要转换的图片大小
# process the 1 channel image
image.save(new_path + '/' + img_name)
print("end the processing!")
if __name__ == '__main__':
print("ready for :::::::: ")
ori_path = r"Z:\pycharm_projects\ssd\VOC2007\JPEGImages" # 输入图片的文件夹路径
new_path = 'Z:/pycharm_projects/ssd/VOC2007/reshape' # resize之后的文件夹路径
image_resize(ori_path, new_path) | 30.92 | 74 | 0.635188 |
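The script above walks a source folder and writes each image back at 512×512. A self-contained sketch of the same behaviour that can run without the hard-coded `Z:` paths — it inlines an equivalent helper (`resize_all` is an illustrative name, and the target size is exposed as a parameter) and exercises it against a throwaway image in a temp directory, assuming Pillow is installed:

```python
import os
import tempfile
from PIL import Image  # Pillow

def resize_all(image_path, new_path, size=(512, 512)):
    # Same behaviour as image_resize above: resize every file in
    # image_path and save the result under the same name in new_path.
    for img_name in os.listdir(image_path):
        image = Image.open(os.path.join(image_path, img_name))
        image = image.resize(size)
        image.save(os.path.join(new_path, img_name))

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
Image.new("RGB", (10, 20)).save(os.path.join(src, "tiny.png"))
resize_all(src, dst)
result_size = Image.open(os.path.join(dst, "tiny.png")).size
```

Note that `Image.resize` ignores the original aspect ratio, so a 10×20 input comes out stretched to the square target.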
f720bbb8b5465f2391f8ff3bd20b2b9312393ba6 | 5,112 | py | Python | BlindTest/display.py | smart-fun/Raspberry | e2ac8caff2732786bc51a7c5ab64507e7a9a8fac | Apache-2.0
import pygame as pg
import pygame_widgets as pw
from math import sin, cos
SCREEN_WIDTH = 640
SCREEN_HEIGHT = 480
WHITE = (255,255,255)
YELLOW = (220,220,0)
RED = (220,0,0)
GREY = (180,180,180)
BLACK = (0,0,0)
GREEN = (0,200,0)
BUTTON_COLOR = (0,0,220)
BUTTON_HOVER_COLOR = GREEN
BUTTON_PRESS_COLOR = (0,100,0)
def createScreen():
screen = pg.display.set_mode((SCREEN_WIDTH, SCREEN_HEIGHT))
screen.fill(GREY)
return screen
def displayCircle(screen, message, yellow, red):
x = SCREEN_WIDTH / 2
y = SCREEN_HEIGHT / 2
radius = SCREEN_HEIGHT / 4
if (yellow and red):
pg.draw.circle(screen, RED, [x, y], radius, 0, draw_top_right=True, draw_bottom_right=True)
pg.draw.circle(screen, YELLOW, [x, y], radius, 0, draw_top_left=True , draw_bottom_left=True)
elif yellow:
pg.draw.circle(screen, YELLOW, [x, y], radius, 0)
elif red:
pg.draw.circle(screen, RED, [x, y], radius, 0)
font = pg.font.SysFont(None, 40)
text = font.render(message, True, BLACK)
textRect = text.get_rect()
textRect.centerx = screen.get_rect().centerx
textRect.centery = screen.get_rect().centery
screen.blit(text,textRect)
def simulateNeoPixel(screen, neopixel):
size = 10
radius = 100
angle = 0
for color in neopixel.pixels:
x = int((SCREEN_WIDTH / 2) + radius*cos(angle))
y = int((SCREEN_HEIGHT / 2) - radius*sin(angle))
pg.draw.circle(screen, color, [x, y], size, 0)
angle += 3.14159 / 12
def displayStartButton(screen, callback):
width = 200
height = 50
x = (SCREEN_WIDTH - width) / 2
y = SCREEN_HEIGHT * 0.8
button = pw.Button(
screen, x, y, width, height, text='START',
fontSize=50,
textColour=(255,255,255),
inactiveColour=BUTTON_COLOR,
hoverColour=BUTTON_HOVER_COLOR,
pressedColour=BUTTON_PRESS_COLOR,
radius=10,
onClick=callback
)
return button
def displayYesButton(screen, callback):
width = 200
height = 50
x = (SCREEN_WIDTH * 0.45) - width
y = SCREEN_HEIGHT * 0.8
button = pw.Button(
screen, x, y, width, height, text='YES',
fontSize=50,
textColour=(255,255,255),
inactiveColour=BUTTON_COLOR,
hoverColour=BUTTON_HOVER_COLOR,
pressedColour=BUTTON_PRESS_COLOR,
radius=10,
onClick=callback
)
return button
def displayNoButton(screen, callback):
width = 200
height = 50
x = (SCREEN_WIDTH * 0.55)
y = SCREEN_HEIGHT * 0.8
button = pw.Button(
screen, x, y, width, height, text='NO',
fontSize=50,
textColour=(255,255,255),
inactiveColour=BUTTON_COLOR,
hoverColour=BUTTON_HOVER_COLOR,
pressedColour=BUTTON_PRESS_COLOR,
radius=10,
onClick=callback
)
return button
def createRoundButton(screen, callback, x, y, text, color):
width = 40
height = 40
button = pw.Button(
screen, x, y, width, height, text=text,
fontSize=60,
textColour=(255,255,255),
inactiveColour=color,
hoverColour=color,
pressedColour=color,
radius=20,
onClick=callback
)
return button
def displayIncYellowButton(screen, callback):
x = 20
y = SCREEN_HEIGHT * 0.4
return createRoundButton(screen, callback, x, y, "+", YELLOW)
def displayDecYellowButton(screen, callback):
x = 20
y = SCREEN_HEIGHT * 0.5
return createRoundButton(screen, callback, x, y, "-", YELLOW)
def displayIncRedButton(screen, callback):
x = SCREEN_WIDTH - 40 - 20
y = SCREEN_HEIGHT * 0.4
return createRoundButton(screen, callback, x, y, "+", RED)
def displayDecRedButton(screen, callback):
x = SCREEN_WIDTH - 40 - 20
y = SCREEN_HEIGHT * 0.5
return createRoundButton(screen, callback, x, y, "-", RED)
def createSkipButton(screen, callback):
width = 100
height = 40
x = (SCREEN_WIDTH - width) * 0.5
y = SCREEN_HEIGHT - 50 - 10
button = pw.Button(
screen, x, y, width, height, text="SKIP",
fontSize=30,
textColour=(255,255,255),
inactiveColour=BUTTON_COLOR,
hoverColour=BUTTON_HOVER_COLOR,
pressedColour=BUTTON_PRESS_COLOR,
radius=20,
onClick=callback
)
return button
def displayScore(screen, yellow, red):
font = pg.font.SysFont(None, 100)
text = font.render(str(yellow), True, YELLOW)
textRect = text.get_rect()
textRect.centerx = SCREEN_WIDTH * 0.17
textRect.centery = screen.get_rect().centery
screen.blit(text,textRect)
text = font.render(str(red), True, RED)
textRect = text.get_rect()
textRect.centerx = SCREEN_WIDTH * (1 - 0.17)
textRect.centery = screen.get_rect().centery
screen.blit(text,textRect)
def displayMusicTitle(screen, title):
font = pg.font.SysFont(None, 30)
text = font.render(str(title), True, BLACK)
textRect = text.get_rect()
textRect.centerx = int(SCREEN_WIDTH * 0.5)
textRect.centery = int(SCREEN_HEIGHT * 0.1)
screen.blit(text,textRect)
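`simulateNeoPixel` above places each simulated LED on a circle around the screen centre, stepping the angle by a fixed π/12 per pixel and subtracting the sine term because screen y grows downward. The placement math in isolation (the function name is illustrative):

```python
from math import sin, cos, hypot

def ring_positions(n, cx, cy, radius, step=3.14159 / 12):
    # One point per pixel, stepping around the circle exactly as
    # simulateNeoPixel does (y is subtracted: screen y grows downward).
    return [(cx + radius * cos(i * step), cy - radius * sin(i * step))
            for i in range(n)]

pts = ring_positions(24, 320, 240, 100)
```

Every point lands at the chosen radius from the centre; 24 pixels at a π/12 step cover the full circle once.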
f720bbc6a4f8599443bb6753b941ccb39af1e390 | 647 | py | Python | merchant/migrations/0001_initial.py | Pesenin-Team/pesenin | 6b3dcc84e6e48768ce231ffedc43c56981fc6606 | MIT
# Generated by Django 2.2.6 on 2019-10-17 10:38
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Merchant',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('nama_merchant', models.CharField(max_length=100)),
('desc', models.CharField(max_length=200)),
('link_gambar', models.CharField(max_length=200)),
],
),
]
f720bbea22a5dbf7c0ffdeeda3c286344fc9500b | 12,806 | py | Python | protonvpn-applet.py | seadanda/protonvpn-applet | f32978192f523ed8ee661d200c508b221e0ffccd | MIT
#!/usr/bin/env python3
import sys
import subprocess
import functools
from enum import Enum
import gi
gi.require_version('Notify', '0.7')
from gi.repository import Notify
from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget, QSystemTrayIcon, QMenu, QAction, qApp, QMessageBox
from PyQt5.QtCore import QSize, QThread, pyqtSignal
from PyQt5.QtGui import QIcon
from protonvpn_cli import utils, country_codes
from protonvpn_cli.utils import is_connected
PROTONVPN_APPLET_VERSION = "0.1.7"
class VPNStatusException(Exception):
"""General exception to throw when anything goes wrong
"""
class VPNCommand(Enum):
"""Commands to run the CLI
"""
status = 'protonvpn s'
connect_fastest = 'protonvpn c -f'
disconnect = 'protonvpn d'
version = 'protonvpn -v'
connect_random = 'protonvpn c -r'
connect_fastest_cc = 'protonvpn c --cc'
connect_fastest_p2p = 'protonvpn c --p2p'
connect_fastest_sc = 'protonvpn c --sc'
connect_fastest_tor = 'protonvpn c --tor'
reconnect = 'protonvpn r'
def check_single_instance():
"""Use pgrep to check if protonvpn-applet is already running
"""
pid = None
try:
pid = subprocess.run('pgrep protonvpn-applet'.split(), check=True, capture_output=True)
except subprocess.CalledProcessError:
try:
pid = subprocess.run('pgrep protonvpn-applet.py'.split(), check=True, capture_output=True)
except subprocess.CalledProcessError:
pass
if pid is not None:
print('There is an instance already running.')
sys.exit(1)
class Status(Enum):
"""Enum to keep track of the previous connection state
"""
connected = 'Connected'
disconnected = 'Disconnected'
class Polling(QThread):
"""Thread to check the VPN state every second and notifies on disconnection
"""
def __init__(self, applet):
QThread.__init__(self)
self.applet = applet
def __del__(self):
self.wait()
def run(self):
while self.applet.is_polling():
if is_connected():
self.applet.tray_icon.setIcon(QIcon('icons/16x16/protonvpn-connected.png'))
self.applet.previous_status = Status.connected
else:
# notify on disconnection
if self.applet.show_notifications() and self.applet.previous_status == Status.connected:
CheckStatus(self).start()
self.applet.tray_icon.setIcon(QIcon('icons/16x16/protonvpn-disconnected.png'))
self.applet.previous_status = Status.disconnected
self.sleep(1)
class ConnectVPN(QThread):
"""Thread to connect using the specified profile
"""
def __init__(self, applet, command):
QThread.__init__(self)
self.applet = applet
self.command = command
print(self.command)
def __del__(self):
self.wait()
def run(self):
subprocess.run([self.applet.auth] + self.command.split(), check=False)
self.applet.status_vpn()
class DisconnectVPN(QThread):
"""Thread to disconnect the VPN
"""
def __init__(self, applet):
QThread.__init__(self)
self.applet = applet
def __del__(self):
self.wait()
def run(self):
subprocess.run([self.applet.auth] + VPNCommand.disconnect.value.split(), check=False)
self.applet.status_vpn()
class ReconnectVPN(QThread):
"""Thread to connect using previously used profile
"""
def __init__(self, applet):
QThread.__init__(self)
self.applet = applet
def __del__(self):
self.wait()
def run(self):
subprocess.run([self.applet.auth] + VPNCommand.reconnect.value.split(), check=False)
self.applet.status_vpn()
class CheckStatus(QThread):
"""Thread to report ProtonVPN status
"""
def __init__(self, applet):
QThread.__init__(self)
self.applet = applet
def __del__(self):
self.wait()
def run(self):
result = subprocess.run(VPNCommand.status.value.split(), check=False, capture_output=True)
Notify.Notification.new(result.stdout.decode()).show()
class CheckProtonVPNVersion(QThread):
"""Thread to check version
"""
protonvpn_version_ready = pyqtSignal(str)
def __init__(self, parent=None):
super().__init__(parent=parent)
self.parent = parent
self.version = 'None'
def __del__(self):
self.wait()
def run(self):
self.version = subprocess.check_output(VPNCommand.version.value.split()).decode(sys.stdout.encoding)
self.protonvpn_version_ready.emit(self.version)
class PVPNApplet(QMainWindow):
"""Main applet body
"""
tray_icon = None
polling = True
previous_status = None
#auth = 'pkexec'
auth = 'sudo'
# Override the class constructor
def __init__(self):
super(PVPNApplet, self).__init__()
self.country_codes = country_codes # Keep a list of country codes
# Init QSystemTrayIcon
self.tray_icon = QSystemTrayIcon(self)
self.tray_icon.setIcon(QIcon('icons/16x16/protonvpn-disconnected.png'))
# Init libnotify
Notify.init('ProtonVPN')
# Refresh server list, store the resulting servers so we can populate the menu
self.servers = self.update_available_servers()
# Menu actions
connect_fastest_action = QAction('Connect fastest', self)
reconnect_action = QAction('Reconnect', self)
disconnect_action = QAction('Disconnect', self)
status_action = QAction('Status', self)
connect_fastest_sc_action = QAction('Secure Core', self)
connect_fastest_p2p_action = QAction('P2P', self)
connect_fastest_tor_action = QAction('Tor', self)
connect_random_action = QAction('Random', self)
show_protonvpn_applet_version_action = QAction('About ProtonVPN-Applet', self)
show_protonvpn_version_action = QAction('About ProtonVPN', self)
quit_action = QAction('Exit', self)
self.show_notifications_action = QAction('Show Notifications')
self.show_notifications_action.setCheckable(True)
self.show_notifications_action.setChecked(False)
# Triggers
quit_action.triggered.connect(qApp.quit)
connect_fastest_action.triggered.connect(self.connect_fastest)
disconnect_action.triggered.connect(self.disconnect_vpn)
status_action.triggered.connect(self.status_vpn)
show_protonvpn_applet_version_action.triggered.connect(self.show_protonvpn_applet_version)
show_protonvpn_version_action.triggered.connect(self.get_protonvpn_version)
connect_fastest_sc_action.triggered.connect(self.connect_fastest_sc)
connect_fastest_p2p_action.triggered.connect(self.connect_fastest_p2p)
connect_fastest_tor_action.triggered.connect(self.connect_fastest_tor)
connect_random_action.triggered.connect(self.connect_random)
reconnect_action.triggered.connect(self.reconnect_vpn)
# Generate connection menu for specific countries
connect_country_actions = []
for country_name in self.get_available_countries(self.servers):
# Get the ISO-3166 Alpha-2 country code
country_name_to_code = {v: k for k, v in country_codes.country_codes.items()}
country_code = country_name_to_code[country_name]
# Dynamically create functions for connecting to each country; each function just passes its respective
# country code to `self.connect_fastest_cc()`
setattr(self, f'connect_fastest_{country_code}', functools.partial(self.connect_fastest_cc, country_code))
# Generate an action for each country; set up the trigger; append to actions list
country_action = QAction(f'{country_name}', self)
country_action.triggered.connect(getattr(self, f'connect_fastest_{country_code}'))
connect_country_actions.append(country_action)
# Create a scrollable country connection menu
connect_country_menu = QMenu("Country...", self)
connect_country_menu.setStyleSheet('QMenu { menu-scrollable: 1; }')
connect_country_menu.addActions(connect_country_actions)
# Generate connection menu
connection_menu = QMenu("Other connections...", self)
connection_menu.addMenu(connect_country_menu)
connection_menu.addAction(connect_fastest_sc_action)
connection_menu.addAction(connect_fastest_p2p_action)
connection_menu.addAction(connect_fastest_tor_action)
connection_menu.addAction(connect_random_action)
# Draw menu
tray_menu = QMenu()
tray_menu.addAction(connect_fastest_action)
tray_menu.addAction(reconnect_action)
tray_menu.addMenu(connection_menu)
tray_menu.addAction(disconnect_action)
tray_menu.addAction(status_action)
tray_menu.addSeparator()
tray_menu.addAction(self.show_notifications_action)
tray_menu.addAction(show_protonvpn_applet_version_action)
tray_menu.addAction(show_protonvpn_version_action)
tray_menu.addAction(quit_action)
self.tray_icon.setContextMenu(tray_menu)
self.tray_icon.show()
# Polling thread
self.start_polling()
def is_polling(self):
return self.polling
def kill_polling(self):
self.polling = False
def start_polling(self):
self.polling = True
self.polling_thread = Polling(self)
self.polling_thread.start()
def _connect_vpn(self, command):
self.kill_polling()
connect_thread = ConnectVPN(self, command)
connect_thread.finished.connect(self.start_polling)
connect_thread.start()
def connect_fastest(self):
self._connect_vpn(VPNCommand.connect_fastest.value)
def connect_fastest_p2p(self):
self._connect_vpn(VPNCommand.connect_fastest_p2p.value)
def connect_fastest_sc(self):
self._connect_vpn(VPNCommand.connect_fastest_sc.value)
def connect_fastest_cc(self, cc):
command = VPNCommand.connect_fastest_cc.value + f' {cc}'
self._connect_vpn(command)
def connect_fastest_tor(self):
self._connect_vpn(VPNCommand.connect_fastest_tor.value)
def connect_random(self):
self._connect_vpn(VPNCommand.connect_random.value)
def disconnect_vpn(self):
disconnect_thread = DisconnectVPN(self)
disconnect_thread.start()
def status_vpn(self):
status_thread = CheckStatus(self)
status_thread.start()
def reconnect_vpn(self):
reconnect_thread = ReconnectVPN(self)
reconnect_thread.start()
# Override closeEvent to intercept the window closing event
def closeEvent(self, event):
event.ignore()
self.hide()
def show_notifications(self):
return self.show_notifications_action.isChecked()
def show_protonvpn_applet_version(self):
"""Show the protonvpn-applet version.
"""
name = '© 2020 Dónal Murray'
email = 'dmurray654@gmail.com'
github = 'https://github.com/seadanda/protonvpn-applet'
info = [f'<center>Version: {PROTONVPN_APPLET_VERSION}',
f'{name}',
f"<a href='{email}'>{email}</a>",
f"<a href='{github}'>{github}</a></center>"]
centered_text = f'<center>{"<br>".join(info)}</center>'
QMessageBox.information(self, 'protonvpn-applet', centered_text)
def get_protonvpn_version(self):
"""Start the CheckProtonVPNVersion thread; when it gets the version, it will call `self.show_protonvpn_version`
"""
print('called get_protonvpn_version')
check_protonvpn_version_thread = CheckProtonVPNVersion(self)
check_protonvpn_version_thread.protonvpn_version_ready.connect(self.show_protonvpn_version)
check_protonvpn_version_thread.start()
def show_protonvpn_version(self, version):
"""
Show the ProtonVPN version in a QMessageBox.
Parameters
----------
version : str
Version number to be shown.
"""
print('called show_protonvpn_version')
QMessageBox.information(self, 'ProtonVPN Version', f'Version: {version}')
def update_available_servers(self):
utils.pull_server_data()
return utils.get_servers()
@staticmethod
def get_available_countries(servers):
return sorted(list({utils.get_country_name(server['ExitCountry']) for server in servers}))
if __name__ == '__main__':
check_single_instance()
app = QApplication(sys.argv)
mw = PVPNApplet()
sys.exit(app.exec())
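`PVPNApplet.__init__` above builds one menu action per country by binding each country code into a fresh callable with `functools.partial` before wiring it to the action's trigger. The core of that pattern, with a stand-in connect function (nothing here invokes the real CLI):

```python
import functools

def connect_fastest_cc(cc):
    # Stand-in for the applet method that shells out to
    # `protonvpn c --cc <code>`; it just returns the command string.
    return f"protonvpn c --cc {cc}"

# One callable per country, each with its country code baked in --
# the same trick PVPNApplet uses for its per-country menu actions.
connectors = {cc: functools.partial(connect_fastest_cc, cc)
              for cc in ("us", "de", "jp")}
result = connectors["de"]()
```

Using `functools.partial` (rather than a lambda closing over the loop variable) freezes the argument at bind time, so each action keeps its own country code.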
| 33.878307 | 119 | 0.679291 |
import sys
import subprocess
import functools
from enum import Enum
import gi
gi.require_version('Notify', '0.7')
from gi.repository import Notify
from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget, QSystemTrayIcon, QMenu, QAction, qApp, QMessageBox
from PyQt5.QtCore import QSize, QThread, pyqtSignal
from PyQt5.QtGui import QIcon
from protonvpn_cli import utils, country_codes
from protonvpn_cli.utils import is_connected
PROTONVPN_APPLET_VERSION = "0.1.7"
class VPNStatusException(Exception):
class VPNCommand(Enum):
status = 'protonvpn s'
connect_fastest = 'protonvpn c -f'
disconnect = 'protonvpn d'
version = 'protonvpn -v'
connect_random = 'protonvpn c -r'
connect_fastest_cc = 'protonvpn c --cc'
connect_fastest_p2p = 'protonvpn c --p2p'
connect_fastest_sc = 'protonvpn c --sc'
connect_fastest_tor = 'protonvpn c --tor'
reconnect = 'protonvpn r'
def check_single_instance():
pid = None
try:
pid = subprocess.run('pgrep protonvpn-applet'.split(), check=True, capture_output=True)
except subprocess.CalledProcessError:
try:
pid = subprocess.run('pgrep protonvpn-applet.py'.split(), check=True, capture_output=True)
except subprocess.CalledProcessError:
pass
if pid is not None:
print('There is an instance already running.')
sys.exit(1)
class Status(Enum):
connected = 'Connected'
disconnected = 'Disconnected'
class Polling(QThread):
def __init__(self, applet):
QThread.__init__(self)
self.applet = applet
def __del__(self):
self.wait()
def run(self):
while self.applet.is_polling():
if is_connected():
self.applet.tray_icon.setIcon(QIcon('icons/16x16/protonvpn-connected.png'))
self.applet.previous_status = Status.connected
else:
if self.applet.show_notifications() and self.applet.previous_status == Status.connected:
CheckStatus(self).start()
self.applet.tray_icon.setIcon(QIcon('icons/16x16/protonvpn-disconnected.png'))
self.applet.previous_status = Status.disconnected
self.sleep(1)
class ConnectVPN(QThread):
def __init__(self, applet, command):
QThread.__init__(self)
self.applet = applet
self.command = command
print(self.command)
def __del__(self):
self.wait()
def run(self):
subprocess.run([self.applet.auth] + self.command.split(), check=False)
self.applet.status_vpn()
class DisconnectVPN(QThread):
def __init__(self, applet):
QThread.__init__(self)
self.applet = applet
def __del__(self):
self.wait()
def run(self):
subprocess.run([self.applet.auth] + VPNCommand.disconnect.value.split(), check=False)
self.applet.status_vpn()
class ReconnectVPN(QThread):
def __init__(self, applet):
QThread.__init__(self)
self.applet = applet
def __del__(self):
self.wait()
def run(self):
subprocess.run([self.applet.auth] + VPNCommand.reconnect.value.split(), check=False)
self.applet.status_vpn()
class CheckStatus(QThread):
def __init__(self, applet):
QThread.__init__(self)
self.applet = applet
def __del__(self):
self.wait()
def run(self):
result = subprocess.run(VPNCommand.status.value.split(), check=False, capture_output=True)
Notify.Notification.new(result.stdout.decode()).show()
class CheckProtonVPNVersion(QThread):
protonvpn_version_ready = pyqtSignal(str)
def __init__(self, parent=None):
super().__init__(parent=parent)
self.parent = parent
self.version = 'None'
def __del__(self):
self.wait()
def run(self):
self.version = subprocess.check_output(VPNCommand.version.value.split()).decode(sys.stdout.encoding)
self.protonvpn_version_ready.emit(self.version)
class PVPNApplet(QMainWindow):
tray_icon = None
polling = True
previous_status = None
auth = 'sudo'
def __init__(self):
super(PVPNApplet, self).__init__()
self.country_codes = country_codes
self.tray_icon = QSystemTrayIcon(self)
self.tray_icon.setIcon(QIcon('icons/16x16/protonvpn-disconnected.png'))
Notify.init('ProtonVPN')
self.servers = self.update_available_servers()
connect_fastest_action = QAction('Connect fastest', self)
reconnect_action = QAction('Reconnect', self)
disconnect_action = QAction('Disconnect', self)
status_action = QAction('Status', self)
connect_fastest_sc_action = QAction('Secure Core', self)
connect_fastest_p2p_action = QAction('P2P', self)
connect_fastest_tor_action = QAction('Tor', self)
connect_random_action = QAction('Random', self)
show_protonvpn_applet_version_action = QAction('About ProtonVPN-Applet', self)
show_protonvpn_version_action = QAction('About ProtonVPN', self)
quit_action = QAction('Exit', self)
self.show_notifications_action = QAction('Show Notifications')
self.show_notifications_action.setCheckable(True)
self.show_notifications_action.setChecked(False)
quit_action.triggered.connect(qApp.quit)
connect_fastest_action.triggered.connect(self.connect_fastest)
disconnect_action.triggered.connect(self.disconnect_vpn)
status_action.triggered.connect(self.status_vpn)
show_protonvpn_applet_version_action.triggered.connect(self.show_protonvpn_applet_version)
show_protonvpn_version_action.triggered.connect(self.get_protonvpn_version)
connect_fastest_sc_action.triggered.connect(self.connect_fastest_sc)
connect_fastest_p2p_action.triggered.connect(self.connect_fastest_p2p)
connect_fastest_tor_action.triggered.connect(self.connect_fastest_tor)
connect_random_action.triggered.connect(self.connect_random)
reconnect_action.triggered.connect(self.reconnect_vpn)
connect_country_actions = []
for country_name in self.get_available_countries(self.servers):
country_name_to_code = {v: k for k, v in country_codes.country_codes.items()}
country_code = country_name_to_code[country_name]
setattr(self, f'connect_fastest_{country_code}', functools.partial(self.connect_fastest_cc, country_code))
country_action = QAction(f'{country_name}', self)
country_action.triggered.connect(getattr(self, f'connect_fastest_{country_code}'))
connect_country_actions.append(country_action)
connect_country_menu = QMenu("Country...", self)
connect_country_menu.setStyleSheet('QMenu { menu-scrollable: 1; }')
connect_country_menu.addActions(connect_country_actions)
connection_menu = QMenu("Other connections...", self)
connection_menu.addMenu(connect_country_menu)
connection_menu.addAction(connect_fastest_sc_action)
connection_menu.addAction(connect_fastest_p2p_action)
connection_menu.addAction(connect_fastest_tor_action)
connection_menu.addAction(connect_random_action)
tray_menu = QMenu()
tray_menu.addAction(connect_fastest_action)
tray_menu.addAction(reconnect_action)
tray_menu.addMenu(connection_menu)
tray_menu.addAction(disconnect_action)
tray_menu.addAction(status_action)
tray_menu.addSeparator()
tray_menu.addAction(self.show_notifications_action)
tray_menu.addAction(show_protonvpn_applet_version_action)
tray_menu.addAction(show_protonvpn_version_action)
tray_menu.addAction(quit_action)
self.tray_icon.setContextMenu(tray_menu)
self.tray_icon.show()
self.start_polling()
def is_polling(self):
return self.polling
def kill_polling(self):
self.polling = False
def start_polling(self):
self.polling = True
self.polling_thread = Polling(self)
self.polling_thread.start()
def _connect_vpn(self, command):
self.kill_polling()
connect_thread = ConnectVPN(self, command)
connect_thread.finished.connect(self.start_polling)
connect_thread.start()
def connect_fastest(self):
self._connect_vpn(VPNCommand.connect_fastest.value)
def connect_fastest_p2p(self):
self._connect_vpn(VPNCommand.connect_fastest_p2p.value)
def connect_fastest_sc(self):
self._connect_vpn(VPNCommand.connect_fastest_sc.value)
def connect_fastest_cc(self, cc):
command = VPNCommand.connect_fastest_cc.value + f' {cc}'
self._connect_vpn(command)
def connect_fastest_tor(self):
self._connect_vpn(VPNCommand.connect_fastest_tor.value)
def connect_random(self):
self._connect_vpn(VPNCommand.connect_random.value)
def disconnect_vpn(self):
disconnect_thread = DisconnectVPN(self)
disconnect_thread.start()
def status_vpn(self):
status_thread = CheckStatus(self)
status_thread.start()
def reconnect_vpn(self):
reconnect_thread = ReconnectVPN(self)
reconnect_thread.start()
def closeEvent(self, event):
event.ignore()
self.hide()
def show_notifications(self):
return self.show_notifications_action.isChecked()
def show_protonvpn_applet_version(self):
name = '© 2020 Dónal Murray'
email = 'dmurray654@gmail.com'
github = 'https://github.com/seadanda/protonvpn-applet'
info = [f'<center>Version: {PROTONVPN_APPLET_VERSION}',
f'{name}',
f"<a href='mailto:{email}'>{email}</a>",
f"<a href='{github}'>{github}</a></center>"]
centered_text = f'<center>{"<br>".join(info)}</center>'
QMessageBox.information(self, 'protonvpn-applet', centered_text)
def get_protonvpn_version(self):
print('called get_protonvpn_version')
check_protonvpn_version_thread = CheckProtonVPNVersion(self)
check_protonvpn_version_thread.protonvpn_version_ready.connect(self.show_protonvpn_version)
check_protonvpn_version_thread.start()
def show_protonvpn_version(self, version):
print('called show_protonvpn_version')
QMessageBox.information(self, 'ProtonVPN Version', f'Version: {version}')
def update_available_servers(self):
utils.pull_server_data()
return utils.get_servers()
@staticmethod
def get_available_countries(servers):
return sorted(list({utils.get_country_name(server['ExitCountry']) for server in servers}))
if __name__ == '__main__':
check_single_instance()
app = QApplication(sys.argv)
mw = PVPNApplet()
sys.exit(app.exec())
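The per-country menu actions above are wired with `functools.partial` plus `setattr`, so each QAction ends up with a zero-argument slot already bound to its country code. The same binding pattern in isolation (a sketch with hypothetical names, no Qt required):

```python
import functools

class Demo:
    """Minimal stand-in for the applet (hypothetical class; no Qt needed)."""
    def __init__(self):
        self.calls = []
        for code in ("us", "de", "jp"):
            # bind the country code now; the stored attribute is a
            # zero-argument callable, just like the QAction slots above
            setattr(self, f"connect_fastest_{code}",
                    functools.partial(self.connect_fastest_cc, code))

    def connect_fastest_cc(self, cc):
        self.calls.append(cc)

d = Demo()
d.connect_fastest_de()  # invokes connect_fastest_cc("de")
```

Each partial freezes its own `code` value, which avoids the classic late-binding pitfall of defining a lambda inside the loop.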
# pygraphblas/demo/dnn.py (repo: szarnyasg/pygraphblas, license: Apache-2.0)
import os
from functools import wraps, partial
from time import time
from statistics import mean
from pathlib import Path
from pygraphblas import *
from multiprocessing.pool import ThreadPool
from multiprocessing import cpu_count
NFEATURES = 60000
BIAS = {1024: -0.3, 4096: -0.35, 16384: -0.4, 65536: -0.45}
def timing(f):
@wraps(f)
def wrap(*args, **kw):
ts = time()
result = f(*args, **kw)
te = time()
print('func:%r took: %2.4f' % (f.__name__, te-ts))
return result
return wrap
@timing
def dnn(W, B, Y):
for w, b in zip(W, B):
Y = Y @ w
with plus_plus:
Y = Y @ b
Y = Y.select('>0')
M = Y.select('>', 32)
if len(M):
Y[M] = 32
return Y
@timing
def dnn2(W, B, Y):
for w, b in zip(W, B):
Y = Y.mxm(w, out=Y)
with plus_plus:
Y = Y.mxm(b, out=Y)
Y.select('>0', out=Y)
M = Y.select('>', 32)
if len(M):
Y[M] = 32
return Y
@timing
def load_images(neurons, dest):
fname = '{}/sparse-images-{}.{}'
binfile = fname.format(dest, neurons, 'ssb')
if Path(binfile).exists():
return Matrix.from_binfile(binfile.encode('ascii'))
images = Path(fname.format(dest, neurons, 'tsv'))
with images.open() as i:
m = Matrix.from_tsv(i, FP32, NFEATURES, neurons)
m.to_binfile(binfile.encode('ascii'))
return m
def load_categories(neurons, nlayers, dest):
fname = '{}/neuron{}-l{}-categories.tsv'
cats = Path(fname.format(dest, neurons, nlayers))
result = Vector.from_type(BOOL, NFEATURES)
with cats.open() as i:
for line in i.readlines():
result[int(line.strip())-1] = True
return result
def load_layer(i, neurons, dest):
fname = '{}/neuron{}/n{}-l{}.{}'
binfile = fname.format(dest, neurons, neurons, str(i+1), 'ssb')
if Path(binfile).exists():
return Matrix.from_binfile(binfile.encode('ascii'))
l = Path(fname.format(dest, neurons, neurons, str(i+1), 'tsv'))
with l.open() as f:
m = Matrix.from_tsv(f, FP32, neurons, neurons)
m.to_binfile(binfile.encode('ascii'))
return m
@timing
def generate_layers(neurons, nlayers, dest):
# pass neurons explicitly instead of relying on a module-level global
with ThreadPool(cpu_count()) as pool:
return pool.map(partial(load_layer, neurons=neurons, dest=dest), range(nlayers))
@timing
def generate_bias(neurons, nlayers):
result = []
for _ in range(nlayers):
bias = Matrix.from_type(FP32, neurons, neurons)
for i in range(neurons):
bias[i,i] = BIAS[neurons]
bias.nvals # causes async completion
result.append(bias)
return result
@timing
def run(neurons, images, layers, bias, dest):
result = dnn2(layers,
bias,
images)
r = result.reduce_vector()
cats = r.apply(lib.GxB_ONE_BOOL, out=Vector.from_type(BOOL, r.size))
truecats = load_categories(neurons, nlayers, dest)
assert cats == truecats
num_neurons = [1024, 4096, 16384, 65536]
num_layers = [120, 480, 1920]
if __name__ == '__main__':
dest = os.getenv('DEST')
neurons = os.getenv('NEURONS')
nlayers = os.getenv('NLAYERS')
if neurons and nlayers:
neurons = int(neurons)
nlayers = int(nlayers)
images = load_images(neurons, dest)
layers = generate_layers(neurons, nlayers, dest)
bias = generate_bias(neurons, nlayers)
run(neurons, images, layers, bias, dest)
else:
for neurons in num_neurons:
print('Building layers for %s neurons' % neurons)
layers = generate_layers(neurons, 1920, dest)
bias = generate_bias(neurons, 1920)
images = load_images(neurons, dest)
for nlayers in num_layers:
print('Benching %s neurons %s layers' % (neurons, nlayers))
run(neurons, images, layers[:nlayers], bias[:nlayers], dest)
# backend/server.py (repo: ryzbaka/Niyuddha, license: MIT)
from flask import Flask,jsonify,request
import os
from subprocess import PIPE,Popen
app = Flask(__name__)
@app.route("/",methods=["GET"])
def home():
return "Working"
@app.route("/sendcode",methods=["POST"])
def sendCode():
print(request.json)
owd = os.getcwd() # chdir into this once done executing.
username = request.json['username']
code = request.json['code']
os.chdir('users')
userFolders=os.listdir()
if username not in userFolders:
os.mkdir(username)
os.chdir(username)
with open(f"{username}.py","w") as f:
f.write(code)
os.system(f'docker run -it --name {username}container --detach --rm python:3')
os.system(f'docker cp {username}.py {username}container:/{username}.py')
# result = os.popen(f'docker exec {username}container python {username}.py').read()
p = Popen(f"docker exec {username}container python {username}.py",shell=True,stdout=PIPE,stderr=PIPE)
stdout,stderr = p.communicate()
os.system(f'docker kill {username}container')
os.chdir(owd)  # switch back to the original working directory
print(os.path.abspath(os.curdir))
return jsonify({"message":stdout.decode(),"error":stderr.decode()})
if __name__=='__main__':
app.run(port=5555,debug=True)
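The stdout/stderr capture in `sendCode()` relies on `Popen` with pipes followed by `communicate()`. The same capture pattern can be checked in isolation (a sketch independent of Docker, using a trivial shell command):

```python
from subprocess import PIPE, Popen

# run a trivial shell command and capture both streams, as sendCode() does
p = Popen("echo hello", shell=True, stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate()
message, error = stdout.decode(), stderr.decode()
```

`communicate()` waits for the process to exit and returns both streams as bytes, which is why the endpoint decodes them before building its JSON response.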
# pyrasterframes/src/main/python/pyrasterframes/rf_context.py (repo: mjohns-databricks/rasterframes, license: Apache-2.0)
#
# This software is licensed under the Apache 2 license, quoted below.
#
# Copyright 2019 Astraea, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
#
# [http://www.apache.org/licenses/LICENSE-2.0]
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
#
# SPDX-License-Identifier: Apache-2.0
#
"""
This module contains access to the jvm SparkContext with RasterFrameLayer support.
"""
from pyspark import SparkContext
from pyspark.sql import SparkSession
from typing import Any, List
from py4j.java_gateway import JavaMember
from py4j.java_collections import JavaList, JavaMap
from typing import Tuple
__all__ = ['RFContext']
class RFContext(object):
"""
Entrypoint to RasterFrames services
"""
def __init__(self, spark_session: SparkSession):
self._spark_session = spark_session
self._gateway = spark_session.sparkContext._gateway
self._jvm = self._gateway.jvm
jsess = self._spark_session._jsparkSession
self._jrfctx = self._jvm.org.locationtech.rasterframes.py.PyRFContext(jsess)
def list_to_seq(self, py_list: List[Any]) -> JavaList:
conv = self.lookup('_listToSeq')
return conv(py_list)
def lookup(self, function_name: str) -> JavaMember:
return getattr(self._jrfctx, function_name)
def build_info(self) -> JavaMap:
return self._jrfctx.buildInfo()
def companion_of(self, classname: str):
if not classname.endswith("$"):
classname = classname + "$"
companion_module = getattr(self._jvm, classname)
singleton = getattr(companion_module, "MODULE$")
return singleton
# NB: Tightly coupled to `org.locationtech.rasterframes.py.PyRFContext._resolveRasterRef`
def _resolve_raster_ref(self, ref_struct):
f = self.lookup("_resolveRasterRef")
return f(
ref_struct.source.raster_source_kryo,
ref_struct.bandIndex,
ref_struct.subextent.xmin,
ref_struct.subextent.ymin,
ref_struct.subextent.xmax,
ref_struct.subextent.ymax,
)
@staticmethod
def active():
"""
Get the active Python RFContext and throw an error if it is not enabled for RasterFrames.
"""
sc = SparkContext._active_spark_context
if not hasattr(sc, '_rf_context'):
raise AttributeError(
"RasterFrames have not been enabled for the active session. Call 'SparkSession.withRasterFrames()'.")
return sc._rf_context
@staticmethod
def call(name, *args):
f = RFContext.active().lookup(name)
return f(*args)
@staticmethod
def jvm():
"""
Get the active Scala PyRFContext and throw an error if it is not enabled for RasterFrames.
"""
return RFContext.active()._jvm
# tests/scrubber/test_scrubber.py (repo: scottkleinman/lexos, license: MIT)
"""test_scrubber.py."""
# Import a minimal text loader class, the functions for scrubber pipelines,
# and the scrubber function registry
from lexos.io.basic import Loader
from lexos.scrubber.pipeline import make_pipeline
from lexos.scrubber.registry import scrubber_components
from lexos.scrubber.scrubber import Scrubber
# Load a text
data = "tests/test_data/Austen_Pride.txt"
loader = Loader()
loader.load(data)
text = loader.texts[0]
lower_case = scrubber_components.get("lower_case")
scrub = make_pipeline(lower_case)
pipeline = (lower_case,)  # parentheses alone do not make a tuple; the trailing comma does
s = Scrubber()
s.add_pipeline(pipeline)
show_pipeline = s.get_pipeline()
texts = s.scrub(text)
for text in texts:
print(text[0:50])
# resources/ai/swagger/__init__.py (repo: GMKrieger/ai_api, license: MIT)
"""
swagger module -
A package defining the swagger features. This module creates the swagger
structure and defines the data to show when the swagger is activated.
It does not contain the html and css files used to create the page,
only the underlying structure. The html and css can be found at the static module.
""" | 41.875 | 86 | 0.746269 | true | true | |
# tools/mo/openvino/tools/mo/front/caffe/grn_ext.py (repo: pazamelin/openvino, license: Apache-2.0)
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from openvino.tools.mo.ops.grn import GRNOp
from openvino.tools.mo.front.caffe.collect_attributes import merge_attrs
from openvino.tools.mo.front.extractor import FrontExtractorOp
class GRNFrontExtractor(FrontExtractorOp):
op = 'GRN'
enabled = True
@classmethod
def extract(cls, node):
proto_layer = node.pb
param = proto_layer.grn_param
update_attrs = {
'bias': param.bias,
}
mapping_rule = merge_attrs(param, update_attrs)
# update the attributes of the node
GRNOp.update_node_stat(node, mapping_rule)
return cls.enabled
# code/learn-AI/matplotlib/graph/sigmoid_function.py (repo: lsieun/learn-AI, license: Apache-2.0)
import numpy as np
import matplotlib.pyplot as plt
def func(x):
return 1 / (1 + np.exp(-x))
# Return evenly spaced numbers over a specified interval.
xdata = np.linspace(-8, 8, 960,endpoint=True)
ydata = func(xdata)
plt.plot(xdata,ydata)
plt.show()
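A quick numeric sanity check of the sigmoid plotted above (the function is restated so the snippet is self-contained and does not need matplotlib):

```python
import numpy as np

def func(x):
    # logistic sigmoid, identical to the plotted function above
    return 1 / (1 + np.exp(-x))

# midpoint value and point symmetry about (0, 0.5)
assert func(0.0) == 0.5
assert np.allclose(func(-3.0), 1 - func(3.0))
```

The symmetry sigma(-x) = 1 - sigma(x) is why the plotted curve pivots around (0, 0.5).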
# DownData/Link_down.py (repo: Max-astro/A2Project, license: MIT)
import requests
import sys
import h5py
import numpy as np
import os
def get(path, params=None, savedir=None):
# make HTTP GET request to path
headers = {"api-key":"27d44ba55cd115b10f2dd9153589aff0"}
r = requests.get(path, params=params, headers=headers)
# raise exception if response code is not HTTP SUCCESS (200)
r.raise_for_status()
if r.headers['content-type'] == 'application/json':
return r.json() # parse json responses automatically
if 'content-disposition' in r.headers:
filename = r.headers['content-disposition'].split("filename=")[1]
if savedir != None:
filename = savedir + filename
with open(filename, 'wb') as f:
f.write(r.content)
return filename # return the filename string
return r
def HaloProgenitors(haloID):
'''
haloID is the subhalo's ID in snap_099
return a dict = {'SnapNum' : SubfindID}
'''
url = "http://www.tng-project.org/api/TNG100-1/snapshots/99/subhalos/%haloID/sublink/simple.json"%haloID
try:
sublink = get(url, savedir='/home/sublink/')
except:
print(sys.exc_info()[0])
return -1
f = sublink
#Find halo's Subfind ID with redshift(ie:SnapNum), and save the dict in '/Raid0/zhouzb/diskHalo_Sublink/'
snap_num = np.array(f['SnapNum'])
subfind_ID = np.array(f['SubfindID'])
Progenitors_dict = {}
for i in range(len(snap_num)):
Progenitors_dict['%d'%snap_num[i]] = subfind_ID[i]
if hasattr(f, 'close'): f.close()  # a parsed JSON response is a plain dict and has no close()
return Progenitors_dict
'''
snap_91 z=0.1
snap_84 z=0.2
snap_78 z=0.3
snap_72 z=0.4
snap_67 z=0.5
snap_59 z=0.7
snap_50 z=1.0
snap_40 z=1.5
snap_33 z=2.0
'''
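The snapshot-to-redshift pairs listed in the comment above can be kept as a small lookup table, which makes it easy to label downloads by redshift rather than snapshot number (a convenience sketch using only the values from the comment; the helper name is illustrative):

```python
# TNG snapshot number -> redshift, taken from the table above
SNAP_TO_Z = {91: 0.1, 84: 0.2, 78: 0.3, 72: 0.4,
             67: 0.5, 59: 0.7, 50: 1.0, 40: 1.5, 33: 2.0}

def redshift_of(snap):
    """Redshift of a selected snapshot, or None if it is not in the table."""
    return SNAP_TO_Z.get(snap)
```

For example, `redshift_of(50)` gives `1.0`, which could be used when naming output directories instead of the raw snapshot number.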
barred = np.load('F:/Linux/data/099fig/barredID.npy')
snap = [99, 91, 84, 78, 72, 67, 59, 50, 40, 33]
errorHalo = []
for haloID in barred:
Prog_dict = HaloProgenitors(haloID)
if Prog_dict == -1:
print('halo: %d Network ERROR, Try next'%haloID)
errorHalo.append(haloID)
continue
else:
#Download stellar particles' information in all selected snapshot z
for z in snap:
print('Now download halo %d in snap_%d'%(haloID, z))
try:
subID = Prog_dict['%d'%z]
cutoff_url = 'http://www.tng-project.org/api/TNG100-1/snapshots/%d/subhalos/%d/cutout.hdf5?stars=Masses,Coordinates,Velocities,GFM_StellarFormationTime'%(z, subID)
if os.path.isfile('F:/Linux/data/TNG/cutoff/disk_%d/cutout_%d.hdf5'%(z, subID)) == False:
get(cutoff_url, savedir='F:/Linux/data/TNG/cutoff/disk_%d/'%z)
except:
print("halo %d in snap_%d Fail:"%(haloID, z), sys.exc_info()[0])
print("You need to reload this halo.")
errorHalo.append(haloID)
break
else:
print('halo %d in snap_%d downloaded'%(haloID, z))
print('halo %d in all snapshot download Completed'%haloID)
if len(errorHalo) == 0:
print('All done.')
else:
print('%d halo download faild'%len(errorHalo))
print("Error halo's ID were saved in '/Raid0/zhouzb/downError.log.npy'.")
np.save('F:/Linux/data/TNG/errorID.npy', errorHalo)
#!/usr/bin/env python
# aphla/gui/qrangeslider.py (repo: NSLS-II/aphla, license: BSD-3-Clause)
# ------------------------------------------------------------------------------
# Copyright (c) 2011-2012, Ryan Galloway (ryan@rsgalloway.com)
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# - Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# - Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# - Neither the name of the software nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# ------------------------------------------------------------------------------
# docs and latest version available for download at
# http://rsgalloway.github.com/qrangeslider
# ------------------------------------------------------------------------------
__author__ = "Ryan Galloway <ryan@rsgalloway.com>"
__version__ = "0.1"
# ------------------------------------------------------------------------------
# SUMMARY
# ------------------------------------------------------------------------------
"""The QRangeSlider class implements a horizontal range slider widget.
"""
# ------------------------------------------------------------------------------
# TODO
# ------------------------------------------------------------------------------
"""
- smoother mouse move event handler
- support splits and joins
- vertical sliders
- ticks
"""
# ------------------------------------------------------------------------------
# IMPORTS
# ------------------------------------------------------------------------------
import os
import sys
from PyQt4 import QtCore
from PyQt4 import QtGui
from PyQt4 import uic
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
_fromUtf8 = lambda s: s
__all__ = ['QRangeSlider']
DEFAULT_CSS = """
QRangeSlider * {
border: 0px;
padding: 0px;
}
QRangeSlider #Head {
background: #fff;
}
QRangeSlider #Span {
background: #393;
}
QRangeSlider #Span:active {
background: #282;
}
QRangeSlider #Tail {
background: #fff;
}
QRangeSlider > QSplitter::handle {
background: #393;
}
QRangeSlider > QSplitter::handle:vertical {
height: 4px;
}
QRangeSlider > QSplitter::handle:pressed {
background: #ca5;
}
"""
class Ui_Form(object):
"""default range slider form"""
def setupUi(self, Form):
Form.setObjectName(_fromUtf8("QRangeSlider"))
Form.resize(300, 30)
Form.setStyleSheet(_fromUtf8(DEFAULT_CSS))
self.gridLayout = QtGui.QGridLayout(Form)
self.gridLayout.setMargin(0)
self.gridLayout.setSpacing(0)
self.gridLayout.setObjectName(_fromUtf8("gridLayout"))
self._splitter = QtGui.QSplitter(Form)
self._splitter.setMinimumSize(QtCore.QSize(0, 0))
self._splitter.setMaximumSize(QtCore.QSize(16777215, 16777215))
self._splitter.setOrientation(QtCore.Qt.Horizontal)
self._splitter.setObjectName(_fromUtf8("splitter"))
self._head = QtGui.QGroupBox(self._splitter)
self._head.setTitle(_fromUtf8(""))
self._head.setObjectName(_fromUtf8("Head"))
self._handle = QtGui.QGroupBox(self._splitter)
self._handle.setTitle(_fromUtf8(""))
self._handle.setObjectName(_fromUtf8("Span"))
self._tail = QtGui.QGroupBox(self._splitter)
self._tail.setTitle(_fromUtf8(""))
self._tail.setObjectName(_fromUtf8("Tail"))
self.gridLayout.addWidget(self._splitter, 0, 0, 1, 1)
self.retranslateUi(Form)
QtCore.QMetaObject.connectSlotsByName(Form)
def retranslateUi(self, Form):
encoding = QtGui.QApplication.UnicodeUTF8
Form.setWindowTitle(QtGui.QApplication.translate("QRangeSlider",
"QRangeSlider",
None, encoding))
class Element(QtGui.QGroupBox):
def __init__(self, parent, main):
super(Element, self).__init__(parent)
self.main = main
def setStyleSheet(self, style):
"""redirect style to parent groupbox"""
self.parent().setStyleSheet(style)
def textColor(self):
"""text paint color"""
return getattr(self, '__textColor', QtGui.QColor(125, 125, 125))
def setTextColor(self, color):
"""set the text paint color"""
if type(color) == tuple and len(color) == 3:
color = QtGui.QColor(color[0], color[1], color[2])
elif type(color) == int:
color = QtGui.QColor(color, color, color)
setattr(self, '__textColor', color)
def paintEvent(self, event):
"""overrides paint event to handle text"""
qp = QtGui.QPainter()
qp.begin(self)
if self.main.drawValues():
self.drawText(event, qp)
qp.end()
class Head(Element):
"""area before the handle"""
def __init__(self, parent, main):
super(Head, self).__init__(parent, main)
def drawText(self, event, qp):
qp.setPen(self.textColor())
qp.setFont(QtGui.QFont('Arial', 10))
qp.drawText(event.rect(), QtCore.Qt.AlignLeft, str(self.main.min()))
class Tail(Element):
"""area after the handle"""
def __init__(self, parent, main):
super(Tail, self).__init__(parent, main)
def drawText(self, event, qp):
qp.setPen(self.textColor())
qp.setFont(QtGui.QFont('Arial', 10))
qp.drawText(event.rect(), QtCore.Qt.AlignRight, str(self.main.max()))
class Handle(Element):
"""handle area"""
def __init__(self, parent, main):
super(Handle, self).__init__(parent, main)
def drawText(self, event, qp):
qp.setPen(self.textColor())
qp.setFont(QtGui.QFont('Arial', 10))
qp.drawText(event.rect(), QtCore.Qt.AlignLeft, str(self.main.start()))
qp.drawText(event.rect(), QtCore.Qt.AlignRight, str(self.main.end()))
def mouseMoveEvent(self, event):
event.accept()
mx = event.globalX()
_mx = getattr(self, '__mx', None)
if _mx is None:  # globalX() can legitimately be 0, so test for None explicitly
setattr(self, '__mx', mx)
dx = 0
else:
dx = mx - _mx
setattr(self, '__mx', mx)
if dx == 0:
event.ignore()
return
elif dx > 0:
dx = 1
elif dx < 0:
dx = -1
s = self.main.start() + dx
e = self.main.end() + dx
if s >= self.main.min() and e <= self.main.max():
self.main.setRange(s, e)
class QRangeSlider(QtGui.QWidget, Ui_Form):
"""
The QRangeSlider class implements a horizontal range slider widget.
Inherits QWidget.
Methods
* __init__ (self, QWidget parent = None)
* bool drawValues (self)
* int end (self)
* (int, int) getRange (self)
* int max (self)
* int min (self)
* int start (self)
* setBackgroundStyle (self, QString styleSheet)
* setDrawValues (self, bool draw)
* setEnd (self, int end)
* setStart (self, int start)
* setRange (self, int start, int end)
* setSpanStyle (self, QString styleSheet)
Signals
* endValueChanged (int)
* maxValueChanged (int)
* minValueChanged (int)
* startValueChanged (int)
Customizing QRangeSlider
You can style the range slider as below:
::
QRangeSlider * {
border: 0px;
padding: 0px;
}
QRangeSlider #Head {
background: #222;
}
QRangeSlider #Span {
background: #393;
}
QRangeSlider #Span:active {
background: #282;
}
QRangeSlider #Tail {
background: #222;
}
Styling the range slider handles follows QSplitter options:
::
QRangeSlider > QSplitter::handle {
background: #393;
}
QRangeSlider > QSplitter::handle:vertical {
height: 4px;
}
QRangeSlider > QSplitter::handle:pressed {
background: #ca5;
}
"""
endValueChanged = QtCore.pyqtSignal(int)
maxValueChanged = QtCore.pyqtSignal(int)
minValueChanged = QtCore.pyqtSignal(int)
startValueChanged = QtCore.pyqtSignal(int)
# define splitter indices
_SPLIT_START = 1
_SPLIT_END = 2
def __init__(self, parent=None):
"""Create a new QRangeSlider instance.
:param parent: QWidget parent
:return: New QRangeSlider instance.
"""
super(QRangeSlider, self).__init__(parent)
self.setupUi(self)
self.setMouseTracking(False)
#self._splitter.setChildrenCollapsible(False)
self._splitter.splitterMoved.connect(self._handleMoveSplitter)
# head layout
self._head_layout = QtGui.QHBoxLayout()
self._head_layout.setSpacing(0)
self._head_layout.setMargin(0)
self._head.setLayout(self._head_layout)
self.head = Head(self._head, main=self)
self._head_layout.addWidget(self.head)
# handle layout
self._handle_layout = QtGui.QHBoxLayout()
self._handle_layout.setSpacing(0)
self._handle_layout.setMargin(0)
self._handle.setLayout(self._handle_layout)
self.handle = Handle(self._handle, main=self)
self.handle.setTextColor((150, 255, 150))
self._handle_layout.addWidget(self.handle)
# tail layout
self._tail_layout = QtGui.QHBoxLayout()
self._tail_layout.setSpacing(0)
self._tail_layout.setMargin(0)
self._tail.setLayout(self._tail_layout)
self.tail = Tail(self._tail, main=self)
self._tail_layout.addWidget(self.tail)
# defaults
self.setMin(0)
self.setMax(99)
self.setStart(0)
self.setEnd(99)
self.setDrawValues(True)
def min(self):
""":return: minimum value"""
return getattr(self, '__min', None)
def max(self):
""":return: maximum value"""
return getattr(self, '__max', None)
def setMin(self, value):
"""sets minimum value"""
assert type(value) is int
setattr(self, '__min', value)
self.minValueChanged.emit(value)
def setMax(self, value):
"""sets maximum value"""
assert type(value) is int
setattr(self, '__max', value)
self.maxValueChanged.emit(value)
def start(self):
""":return: range slider start value"""
return getattr(self, '__start', None)
def end(self):
""":return: range slider end value"""
return getattr(self, '__end', None)
def _setStart(self, value):
"""stores the start value only"""
setattr(self, '__start', value)
self.startValueChanged.emit(value)
def setStart(self, value):
"""sets the range slider start value"""
assert type(value) is int
v = self._valueToPos(value)
self._splitter.moveSplitter(v, self._SPLIT_START)
self._setStart(value)
def _setEnd(self, value):
"""stores the end value only"""
setattr(self, '__end', value)
self.endValueChanged.emit(value)
def setEnd(self, value):
"""set the range slider end value"""
assert type(value) is int
v = self._valueToPos(value)
self._splitter.moveSplitter(v, self._SPLIT_END)
self._setEnd(value)
def drawValues(self):
""":return: True if slider values will be drawn"""
return getattr(self, '__drawValues', None)
def setDrawValues(self, draw):
"""sets draw values boolean to draw slider values"""
assert type(draw) is bool
setattr(self, '__drawValues', draw)
def getRange(self):
""":return: the start and end values as a tuple"""
return (self.start(), self.end())
def setRange(self, start, end):
"""set the start and end values"""
self.setStart(start)
self.setEnd(end)
def keyPressEvent(self, event):
"""overrides key press event to move range left and right"""
key = event.key()
if key == QtCore.Qt.Key_Left:
s = self.start()-1
e = self.end()-1
elif key == QtCore.Qt.Key_Right:
s = self.start()+1
e = self.end()+1
else:
event.ignore()
return
event.accept()
if s >= self.min() and e <= self.max():
self.setRange(s, e)
def setBackgroundStyle(self, style):
"""sets background style"""
self._tail.setStyleSheet(style)
self._head.setStyleSheet(style)
def setSpanStyle(self, style):
"""sets range span handle style"""
self._handle.setStyleSheet(style)
def _valueToPos(self, value):
"""converts slider value to local pixel x coord"""
return int(self.width() * (float(value) / self.max()))
def _posToValue(self, xpos):
"""converts local pixel x coord to slider value"""
return int(((xpos + self._splitter.handleWidth()) / float(self.width())) * self.max())
def _handleMoveSplitter(self, xpos, index):
"""private method for handling moving splitter handles"""
hw = self._splitter.handleWidth()
def _lockWidth(widget):
width = widget.size().width()
widget.setMinimumWidth(width)
widget.setMaximumWidth(width)
def _unlockWidth(widget):
widget.setMinimumWidth(0)
widget.setMaximumWidth(16777215)
v = self._posToValue(xpos)
if index == self._SPLIT_START:
_lockWidth(self._tail)
if v >= self.end():
return
offset = -20
w = xpos + offset
self._setStart(v)
elif index == self._SPLIT_END:
_lockWidth(self._head)
if v <= self.start():
return
offset = -40
w = self.width() - xpos + offset
self._setEnd(v)
_unlockWidth(self._tail)
_unlockWidth(self._head)
_unlockWidth(self._handle)
#-------------------------------------------------------------------------------
# MAIN
#-------------------------------------------------------------------------------
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
rs = QRangeSlider()
rs.show()
rs.setRange(15, 35)
rs.setBackgroundStyle('background: qlineargradient(x1:0, y1:0, x2:0, y2:1, stop:0 #222, stop:1 #333);')
rs.handle.setStyleSheet('background: qlineargradient(x1:0, y1:0, x2:0, y2:1, stop:0 #282, stop:1 #393);')
app.exec_()
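The pixel-to-value mapping that `_valueToPos` and `_posToValue` perform above can be sketched as standalone functions, independent of Qt (a simplified sketch: `width`, `max_value`, and `handle_width` stand in for the widget measurements; the slider's own methods remain authoritative):

```python
def value_to_pos(value, width, max_value):
    """Convert a slider value to a local pixel x coordinate."""
    return int(width * (float(value) / max_value))


def pos_to_value(xpos, width, max_value, handle_width):
    """Convert a local pixel x coordinate back to a slider value."""
    return int(((xpos + handle_width) / float(width)) * max_value)


# Round-tripping is lossy only by the truncation that int() introduces.
print(value_to_pos(50, 300, 99))                   # 151
print(pos_to_value(151, 300, 99, handle_width=0))  # 49
```

This also makes the widget's edge case visible: both helpers divide or multiply by `max_value`, so a slider configured with `max == 0` would raise `ZeroDivisionError` in `_valueToPos`.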
f720c25b3abb18927b7fd60019577787312ad4c2 | 3,406 | py | Python
max_stars: backend/remap/predictors.py | hugocalcad/remap_rev | fa435784f897b7f4186b8ff703b3e08f48160b9f | ["Apache-2.0"] | 17 | 2018-08-30T22:46:47.000Z | 2021-12-23T08:19:50.000Z
max_issues: backend/remap/predictors.py | red-list-ecosystem/REMAP | e1e60c56dad76dc1927af5f24a30cb28144a91c8 | ["Apache-2.0"] | 3 | 2019-11-01T13:58:19.000Z | 2021-03-11T10:21:51.000Z
max_forks: backend/remap/predictors.py | hugocalcad/remap_rev | fa435784f897b7f4186b8ff703b3e08f48160b9f | ["Apache-2.0"] | 2 | 2017-11-29T02:40:03.000Z | 2017-12-20T22:00:37.000Z
predictors = [
{
"description": "todo",
"long_name": "Normalised Difference Vegetation index",
"short_name": "NDVI",
"type": "Index",
"ee_import": 'LANDSAT/LC8_SR',
"checked": True,
"vis": True,
"ramp": '000000, 00FF00'
},
{
"description": "todo",
"long_name": "Normalised Difference Water index",
"short_name": "NDWI",
"type": "Index",
"ee_import": 'LANDSAT/LC8_SR',
"checked": True,
"vis": True,
"ramp": '070467, 17ffed'
},
{
"description": "todo",
"long_name": "Water Band Index",
"type": "Index",
"ee_import": 'LANDSAT/LC8_SR',
"short_name": "WBI",
"vis": False
},
{
"description": "todo",
"long_name": "Blue band minus Red band",
"type": "Index",
"ee_import": 'LANDSAT/LC8_SR',
"short_name": "BR",
"vis": False
},
{
"description": "todo",
"long_name": "Normalised Difference Blue Green",
"short_name": "BG",
"type": "Index",
"ee_import": 'LANDSAT/LC8_SR',
"checked": True,
"vis": False
},
{
"description": "todo",
"long_name": "Blue band",
"short_name": "Blue",
"type": "Band Value",
"ee_import": 'LANDSAT/LC8_SR',
"checked": True,
"vis": False
},
{
"description": "todo",
"long_name": "Green band",
"short_name": "Green",
"type": "Band Value",
"ee_import": 'LANDSAT/LC8_SR',
"checked": True,
"vis": False
},
{
"description": "todo",
"long_name": "Red band",
"short_name": "Red",
"type": "Band Value",
"ee_import": 'LANDSAT/LC8_SR',
"checked": True,
"vis": False
},
{
"description": "todo",
"long_name": "Near Infrared band",
"short_name": "NIR",
"type": "Band Value",
"ee_import": 'LANDSAT/LC8_SR',
"checked": True,
"vis": True,
"ramp": '000000,ffffff',
},
{
"description": "todo",
"type": "Elevation",
"long_name": "SRTM Digital Elevation Data 30m",
"short_name": "Elevation",
"ee_import": 'USGS/SRTMGL1_003',
"checked": True,
"vis": True,
"ramp": "00a0b0,edc951,ed6841,cc2a36,4f372d"
},
{
"description": "todo",
"type": "Elevation",
"long_name": "SRTM Slope",
"short_name": "Slope",
"ee_import": 'USGS/SRTMGL1_003',
"checked": True,
"vis": True,
"ramp": "edc951,ed6841,cc2a36,4f372d,00a0b0"
},
{
"description": "todo",
"type": "BIOCLIM",
"long_name": "Mean Annual Temperature",
"ee_import": 'WORLDCLIM/V1/BIO',
"short_name": "Mean Annual Temperature",
"vis": True,
"ramp": "39018a,0090fe,98ff77,ffff0b,fa0100,590000"
},
{
"description": "todo",
"long_name": "Annual Precipitation",
"type": "BIOCLIM",
"ee_import": 'WORLDCLIM/V1/BIO',
"short_name": "Annual Precipitation",
"vis": True,
"ramp": 'ffffff,c7d6f7,00057a'
}
]
predictor_dict = {}
# build a dict for vis lookup later
for p in predictors:
predictor_dict[p['short_name']] = p
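The loop above keys each predictor entry by its `short_name` so later visualisation code can fetch a predictor's settings in constant time. A minimal sketch of the same pattern with a hypothetical two-entry list (entries shortened for illustration):

```python
predictors = [
    {"short_name": "NDVI", "vis": True, "ramp": "000000, 00FF00"},
    {"short_name": "WBI", "vis": False},
]

# Equivalent to the for-loop build, as a dict comprehension.
predictor_dict = {p["short_name"]: p for p in predictors}

print(predictor_dict["NDVI"]["ramp"])     # 000000, 00FF00
print(predictor_dict["WBI"].get("ramp"))  # None -> entries without "vis" ramps
```

Using `.get("ramp")` matters here because, as in the real list, not every predictor carries a `ramp` key.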
f720c4b0ba8a3112b5e4c2e356fdfa9e370b254c | 11,793 | py | Python
max_stars: test/unit/mongo_class/server_connect.py | mjpernot/mongo-lib | be8aa4f0cbf7fdf475bf67c07df813ffc560c3ef | ["MIT"] | null | null | null
max_issues: test/unit/mongo_class/server_connect.py | mjpernot/mongo-lib | be8aa4f0cbf7fdf475bf67c07df813ffc560c3ef | ["MIT"] | null | null | null
max_forks: test/unit/mongo_class/server_connect.py | mjpernot/mongo-lib | be8aa4f0cbf7fdf475bf67c07df813ffc560c3ef | ["MIT"] | null | null | null
#!/usr/bin/python
# Classification (U)
"""Program: server_connect.py
Description: Unit testing of Server.connect in mongo_class.py.
Usage:
test/unit/mongo_class/server_connect.py
Arguments:
"""
# Libraries and Global Variables
# Standard
import sys
import os
if sys.version_info < (2, 7):
import unittest2 as unittest
else:
import unittest
# Third-party
import mock
# Local
sys.path.append(os.getcwd())
import mongo_class
import version
__version__ = version.__version__
class UnitTest(unittest.TestCase):
"""Class: UnitTest
Description: Class which is a representation of a unit testing.
Methods:
setUp
test_auth_mech3
test_auth_mech2
test_auth_mech
test_conn_false2
test_conn_false
test_conn_true2
test_conn_true
test_fail_get_srv_attr2
test_fail_get_srv_attr
test_auth_arg4
test_auth_arg3
test_auth_arg2
test_auth_arg
test_no_auth2
test_no_auth
"""
def setUp(self):
"""Function: setUp
Description: Initialization for unit testing.
Arguments:
"""
self.name = "Mongo_Server"
self.user = "mongo_user"
self.japd = "mongo_pd"
self.host = "host_server"
self.port = 27017
self.dbs = "test"
self.coll = None
self.db_auth = None
self.conf_file = "Conf_File"
self.errmsg = "Error Message"
self.auth_mech = "SCRAM-SHA-1"
self.auth_mech2 = "MONGODB-CR"
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_mech3(self, mock_cmd, mock_client):
"""Function: test_auth_mech3
Description: Test with auth_mech set to SCRAM-SHA-1.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True, auth_mech=self.auth_mech)
mongo.conn = False
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port,
mongo.auth_mech),
(self.name, self.user, self.japd, self.host, self.port,
self.auth_mech))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_mech2(self, mock_cmd, mock_client):
"""Function: test_auth_mech2
Description: Test with auth_mech set to MONGODB-CR.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True, auth_mech=self.auth_mech2)
mongo.conn = False
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port,
mongo.auth_mech),
(self.name, self.user, self.japd, self.host, self.port,
self.auth_mech2))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_mech(self, mock_cmd, mock_client):
"""Function: test_auth_mech
Description: Test with auth_mech default.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True)
mongo.conn = False
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port,
mongo.auth_mech),
(self.name, self.user, self.japd, self.host, self.port,
self.auth_mech))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_conn_false2(self, mock_cmd, mock_client):
"""Function: test_conn_false2
Description: Test with conn set to False.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True)
mongo.conn = False
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_conn_false(self, mock_cmd, mock_client):
"""Function: test_conn_false
Description: Test with conn set to False.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True)
mongo.conn = False
self.assertEqual(mongo.connect(), (True, None))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_conn_true2(self, mock_cmd, mock_client):
"""Function: test_conn_true2
Description: Test with conn set to True.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True)
mongo.conn = True
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_conn_true(self, mock_cmd, mock_client):
"""Function: test_conn_true
Description: Test with conn set to True.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True)
mongo.conn = True
self.assertEqual(mongo.connect(), (True, None))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_fail_get_srv_attr2(self, mock_cmd, mock_client):
"""Function: test_fail_get_srv_attr2
Description: Test with failed get_srv_attr call.
Arguments:
"""
mock_cmd.return_value = (False, self.errmsg)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port)
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_fail_get_srv_attr(self, mock_cmd, mock_client):
"""Function: test_fail_get_srv_attr
Description: Test with failed get_srv_attr call.
Arguments:
"""
mock_cmd.return_value = (False, self.errmsg)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port)
self.assertEqual(mongo.connect(), (False, self.errmsg))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_arg4(self, mock_cmd, mock_client):
"""Function: test_auth_arg4
Description: Test with arg present and no auth.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=False)
self.assertEqual(mongo.connect(), (True, None))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_arg3(self, mock_cmd, mock_client):
"""Function: test_auth_arg3
Description: Test with arg present and no auth.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=False)
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_arg2(self, mock_cmd, mock_client):
"""Function: test_auth_arg2
Description: Test with auth and arg present.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port)
self.assertEqual(mongo.connect(), (True, None))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_arg(self, mock_cmd, mock_client):
"""Function: test_auth_arg
Description: Test with auth and arg present.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port)
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_no_auth2(self, mock_cmd, mock_client):
"""Function: test_no_auth2
Description: Test with no auth present.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(self.name, self.user, self.japd,
host=self.host, port=self.port, auth=False)
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_no_auth(self, mock_cmd, mock_client):
"""Function: test_no_auth
Description: Test with no auth present.
Arguments:
"""
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(self.name, self.user, self.japd,
host=self.host, port=self.port, auth=False)
self.assertEqual(mongo.connect(), (True, None))
if __name__ == "__main__":
unittest.main()
| 27.425581 | 78 | 0.620114 |
import sys
import os
if sys.version_info < (2, 7):
import unittest2 as unittest
else:
import unittest
import mock
sys.path.append(os.getcwd())
import mongo_class
import version
__version__ = version.__version__
class UnitTest(unittest.TestCase):
def setUp(self):
self.name = "Mongo_Server"
self.user = "mongo_user"
self.japd = "mongo_pd"
self.host = "host_server"
self.port = 27017
self.dbs = "test"
self.coll = None
self.db_auth = None
self.conf_file = "Conf_File"
self.errmsg = "Error Message"
self.auth_mech = "SCRAM-SHA-1"
self.auth_mech2 = "MONGODB-CR"
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_mech3(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True, auth_mech=self.auth_mech)
mongo.conn = False
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port,
mongo.auth_mech),
(self.name, self.user, self.japd, self.host, self.port,
self.auth_mech))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_mech2(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True, auth_mech=self.auth_mech2)
mongo.conn = False
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port,
mongo.auth_mech),
(self.name, self.user, self.japd, self.host, self.port,
self.auth_mech2))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_mech(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True)
mongo.conn = False
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port,
mongo.auth_mech),
(self.name, self.user, self.japd, self.host, self.port,
self.auth_mech))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_conn_false2(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True)
mongo.conn = False
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_conn_false(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True)
mongo.conn = False
self.assertEqual(mongo.connect(), (True, None))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_conn_true2(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True)
mongo.conn = True
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_conn_true(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=True, use_arg=True)
mongo.conn = True
self.assertEqual(mongo.connect(), (True, None))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_fail_get_srv_attr2(self, mock_cmd, mock_client):
mock_cmd.return_value = (False, self.errmsg)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port)
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_fail_get_srv_attr(self, mock_cmd, mock_client):
mock_cmd.return_value = (False, self.errmsg)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port)
self.assertEqual(mongo.connect(), (False, self.errmsg))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_arg4(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=False)
self.assertEqual(mongo.connect(), (True, None))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_arg3(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port,
auth=False)
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_arg2(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port)
self.assertEqual(mongo.connect(), (True, None))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_auth_arg(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(
self.name, self.user, self.japd, host=self.host, port=self.port)
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_no_auth2(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(self.name, self.user, self.japd,
host=self.host, port=self.port, auth=False)
mongo.connect()
self.assertEqual(
(mongo.name, mongo.user, mongo.japd, mongo.host, mongo.port),
(self.name, self.user, self.japd, self.host, self.port))
@mock.patch("mongo_class.pymongo.MongoClient")
@mock.patch("mongo_class.Server.get_srv_attr")
def test_no_auth(self, mock_cmd, mock_client):
mock_cmd.return_value = (True, None)
mock_client.return_value = True
mongo = mongo_class.Server(self.name, self.user, self.japd,
host=self.host, port=self.port, auth=False)
self.assertEqual(mongo.connect(), (True, None))
if __name__ == "__main__":
unittest.main()
| true | true |
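The mongo tests above stack two `mock.patch` decorators and rely on unittest.mock's ordering rule: decorators apply bottom-up, so the patch written closest to the function supplies the first mock argument (`mock_cmd` for `get_srv_attr`, then `mock_client` for `MongoClient`). A minimal, self-contained sketch of that pattern — the `Greeter` class and its methods are invented for illustration, not part of the file above:

```python
import unittest
from unittest import mock


class Greeter:
    """Hypothetical stand-in for the patched collaborators."""

    def fetch(self):
        return "real fetch"

    def connect(self):
        return "real connect"


class GreeterTest(unittest.TestCase):

    # Decorators apply bottom-up: the patch closest to the function
    # becomes the first mock argument, mirroring mock_cmd/mock_client above.
    @mock.patch.object(Greeter, "fetch")
    @mock.patch.object(Greeter, "connect")
    def test_patch_order(self, mock_connect, mock_fetch):
        mock_connect.return_value = (True, None)
        mock_fetch.return_value = "mocked"
        greeter = Greeter()
        self.assertEqual(greeter.connect(), (True, None))
        self.assertEqual(greeter.fetch(), "mocked")
```

Running `unittest.main()` on such a module executes the test; the same ordering rule is why `mock_cmd` in the file above always receives the `get_srv_attr` patch.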
f720c55c567a173b520fcbc4127c246b39b6746f | 8,696 | py | Python | tests/python/contrib/test_ethosn/test_networks.py | BaldLee/tvm | b53472c7b6afa34260afeffc5f088591352c58c3 | [
"Apache-2.0"
] | 10 | 2019-03-09T07:51:56.000Z | 2021-09-14T03:06:20.000Z | tests/python/contrib/test_ethosn/test_networks.py | BaldLee/tvm | b53472c7b6afa34260afeffc5f088591352c58c3 | [
"Apache-2.0"
] | 9 | 2021-10-20T13:48:52.000Z | 2021-12-09T07:14:24.000Z | tests/python/contrib/test_ethosn/test_networks.py | BaldLee/tvm | b53472c7b6afa34260afeffc5f088591352c58c3 | [
"Apache-2.0"
] | 5 | 2020-11-13T19:26:25.000Z | 2022-01-25T07:55:16.000Z | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Ethos-N integration end-to-end network tests"""
import pytest
pytest.importorskip("tflite")
pytest.importorskip("tensorflow")
from tvm import relay
from tvm.testing import requires_ethosn
from tvm.contrib import download
import tvm.relay.testing.tf as tf_testing
import tflite.Model
from . import infrastructure as tei
def _get_tflite_model(tflite_model_path, inputs_dict, dtype):
with open(tflite_model_path, "rb") as f:
tflite_model_buffer = f.read()
try:
tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buffer, 0)
except AttributeError:
tflite_model = tflite.Model.GetRootAsModel(tflite_model_buffer, 0)
shape_dict = {}
dtype_dict = {}
    for input_name in inputs_dict:
        input_shape = inputs_dict[input_name]
        shape_dict[input_name] = input_shape
        dtype_dict[input_name] = dtype
return relay.frontend.from_tflite(
tflite_model,
shape_dict=shape_dict,
dtype_dict=dtype_dict,
)
def _test_image_network(
model_url,
model_sub_path,
input_dict,
compile_hash,
output_count,
host_ops=0,
npu_partitions=1,
run=False,
):
"""Test an image network.
Parameters
----------
model_url : str
The URL to the model.
model_sub_path : str
The name of the model file.
input_dict : dict
The input dict.
compile_hash : str, set
The compile hash(es) to check the compilation output against.
output_count : int
The expected number of outputs.
host_ops : int
The expected number of host operators.
npu_partitions : int
The expected number of Ethos-N partitions.
run : bool
Whether or not to try running the network. If hardware isn't
available, the run will still take place but with a mocked
inference function, so the results will be incorrect. This is
        therefore just to test that the runtime flow works rather than
        to check correctness/accuracy.
"""
def get_model():
if model_url[-3:] in ("tgz", "zip"):
model_path = tf_testing.get_workload_official(
model_url,
model_sub_path,
)
else:
model_path = download.download_testdata(
model_url,
model_sub_path,
)
return _get_tflite_model(model_path, input_dict, "uint8")
inputs = {}
for input_name in input_dict:
input_shape = input_dict[input_name]
inputs[input_name] = tei.get_real_image(input_shape[1], input_shape[2])
mod, params = get_model()
m = tei.build(mod, params, npu=True, expected_host_ops=host_ops, npu_partitions=npu_partitions)
tei.assert_lib_hash(m.get_lib(), compile_hash)
if run:
tei.run(m, inputs, output_count, npu=True)
@requires_ethosn
def test_mobilenet_v1():
# If this test is failing due to a hash mismatch, please notify @mbaret and
# @Leo-arm. The hash is there to catch any changes in the behaviour of the
# codegen, which could come about from either a change in Support Library
# version or a change in the Ethos-N codegen. To update this requires running
# on hardware that isn't available in CI.
_compile_hash = {"1fd4ef29a1ea9f3a015cab87c0b8014a"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"b879dfbff1f907eaf6129dfd41b44ece"}
if tei.get_ethosn_api_version() == 2011:
_compile_hash = {"9c9f63b30824f5b223cdb27d2f22c857"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"cd13279061df2319124a7aac81581d81"}
_test_image_network(
model_url="https://storage.googleapis.com/download.tensorflow.org/"
"models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz",
model_sub_path="mobilenet_v1_1.0_224_quant.tflite",
input_dict={"input": (1, 224, 224, 3)},
compile_hash=_compile_hash,
output_count=1,
host_ops=3,
npu_partitions=1,
run=True,
)
@requires_ethosn
def test_inception_v3():
# If this test is failing due to a hash mismatch, please notify @mbaret and
# @Leo-arm. The hash is there to catch any changes in the behaviour of the
# codegen, which could come about from either a change in Support Library
# version or a change in the Ethos-N codegen. To update this requires running
# on hardware that isn't available in CI.
_compile_hash = {"b90ed315639c6a0e97584c2dbc42a55c"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"5693569055695e581a8739194d0301aa"}
if tei.get_ethosn_api_version() == 2011:
_compile_hash = {"46ccafc840633633aca441645e41b444"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"4a33f397ac3e15c0f9869f7b8286fc2f"}
_test_image_network(
model_url="https://storage.googleapis.com/download.tensorflow.org/"
"models/tflite_11_05_08/inception_v3_quant.tgz",
model_sub_path="inception_v3_quant.tflite",
input_dict={"input": (1, 299, 299, 3)},
compile_hash=_compile_hash,
output_count=1,
host_ops=0,
npu_partitions=1,
)
@requires_ethosn
def test_inception_v4():
# If this test is failing due to a hash mismatch, please notify @mbaret and
# @Leo-arm. The hash is there to catch any changes in the behaviour of the
# codegen, which could come about from either a change in Support Library
# version or a change in the Ethos-N codegen. To update this requires running
# on hardware that isn't available in CI.
_compile_hash = {"b36877d2386d9f9c37a11772e3c4072c"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"b5046a6f56d78af0b4f51960bf2deeda"}
if tei.get_ethosn_api_version() == 2011:
_compile_hash = {"4a1a56393078367dd27915a188d6a6af"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"905caf389dd6b868aeff6acbca1fecef"}
_test_image_network(
model_url="https://storage.googleapis.com/download.tensorflow.org/"
"models/inception_v4_299_quant_20181026.tgz",
model_sub_path="inception_v4_299_quant.tflite",
input_dict={"input": (1, 299, 299, 3)},
compile_hash=_compile_hash,
output_count=1,
host_ops=3,
npu_partitions=1,
)
@requires_ethosn
def test_ssd_mobilenet_v1():
# If this test is failing due to a hash mismatch, please notify @mbaret and
# @Leo-arm. The hash is there to catch any changes in the behaviour of the
# codegen, which could come about from either a change in Support Library
# version or a change in the Ethos-N codegen. To update this requires running
# on hardware that isn't available in CI.
_compile_hash = {"956caf9e7fe5cfd5c042bd17857f7407", "4313033d14328e2aa022b1bd71b27b1c"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"dc60cc687d892cd2877873094e9dfc0b", "6b3deeec16c24c0dcef23df0db5fb162"}
if tei.get_ethosn_api_version() == 2011:
_compile_hash = {"10826406ae724e52f360a06c35ced09d", "9a484d5ecec7acb18c9d6bc6058be031"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"425b38830f34b6eb448fa77dbfe9ac96", "de49128643cbf1c659a9a63aad1cba62"}
_test_image_network(
model_url="https://storage.googleapis.com/download.tensorflow.org/"
"models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip",
model_sub_path="detect.tflite",
input_dict={"normalized_input_image_tensor": (1, 300, 300, 3)},
compile_hash=_compile_hash,
output_count=4,
host_ops=28,
npu_partitions=2,
)
| 39.171171 | 100 | 0.700092 |
import pytest
pytest.importorskip("tflite")
pytest.importorskip("tensorflow")
from tvm import relay
from tvm.testing import requires_ethosn
from tvm.contrib import download
import tvm.relay.testing.tf as tf_testing
import tflite.Model
from . import infrastructure as tei
def _get_tflite_model(tflite_model_path, inputs_dict, dtype):
with open(tflite_model_path, "rb") as f:
tflite_model_buffer = f.read()
try:
tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buffer, 0)
except AttributeError:
tflite_model = tflite.Model.GetRootAsModel(tflite_model_buffer, 0)
shape_dict = {}
dtype_dict = {}
    for input_name in inputs_dict:
        input_shape = inputs_dict[input_name]
        shape_dict[input_name] = input_shape
        dtype_dict[input_name] = dtype
return relay.frontend.from_tflite(
tflite_model,
shape_dict=shape_dict,
dtype_dict=dtype_dict,
)
def _test_image_network(
model_url,
model_sub_path,
input_dict,
compile_hash,
output_count,
host_ops=0,
npu_partitions=1,
run=False,
):
def get_model():
if model_url[-3:] in ("tgz", "zip"):
model_path = tf_testing.get_workload_official(
model_url,
model_sub_path,
)
else:
model_path = download.download_testdata(
model_url,
model_sub_path,
)
return _get_tflite_model(model_path, input_dict, "uint8")
inputs = {}
for input_name in input_dict:
input_shape = input_dict[input_name]
inputs[input_name] = tei.get_real_image(input_shape[1], input_shape[2])
mod, params = get_model()
m = tei.build(mod, params, npu=True, expected_host_ops=host_ops, npu_partitions=npu_partitions)
tei.assert_lib_hash(m.get_lib(), compile_hash)
if run:
tei.run(m, inputs, output_count, npu=True)
@requires_ethosn
def test_mobilenet_v1():
_compile_hash = {"1fd4ef29a1ea9f3a015cab87c0b8014a"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"b879dfbff1f907eaf6129dfd41b44ece"}
if tei.get_ethosn_api_version() == 2011:
_compile_hash = {"9c9f63b30824f5b223cdb27d2f22c857"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"cd13279061df2319124a7aac81581d81"}
_test_image_network(
model_url="https://storage.googleapis.com/download.tensorflow.org/"
"models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz",
model_sub_path="mobilenet_v1_1.0_224_quant.tflite",
input_dict={"input": (1, 224, 224, 3)},
compile_hash=_compile_hash,
output_count=1,
host_ops=3,
npu_partitions=1,
run=True,
)
@requires_ethosn
def test_inception_v3():
# If this test is failing due to a hash mismatch, please notify @mbaret and
# @Leo-arm. The hash is there to catch any changes in the behaviour of the
# codegen, which could come about from either a change in Support Library
# version or a change in the Ethos-N codegen. To update this requires running
# on hardware that isn't available in CI.
_compile_hash = {"b90ed315639c6a0e97584c2dbc42a55c"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"5693569055695e581a8739194d0301aa"}
if tei.get_ethosn_api_version() == 2011:
_compile_hash = {"46ccafc840633633aca441645e41b444"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"4a33f397ac3e15c0f9869f7b8286fc2f"}
_test_image_network(
model_url="https://storage.googleapis.com/download.tensorflow.org/"
"models/tflite_11_05_08/inception_v3_quant.tgz",
model_sub_path="inception_v3_quant.tflite",
input_dict={"input": (1, 299, 299, 3)},
compile_hash=_compile_hash,
output_count=1,
host_ops=0,
npu_partitions=1,
)
@requires_ethosn
def test_inception_v4():
_compile_hash = {"b36877d2386d9f9c37a11772e3c4072c"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"b5046a6f56d78af0b4f51960bf2deeda"}
if tei.get_ethosn_api_version() == 2011:
_compile_hash = {"4a1a56393078367dd27915a188d6a6af"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"905caf389dd6b868aeff6acbca1fecef"}
_test_image_network(
model_url="https://storage.googleapis.com/download.tensorflow.org/"
"models/inception_v4_299_quant_20181026.tgz",
model_sub_path="inception_v4_299_quant.tflite",
input_dict={"input": (1, 299, 299, 3)},
compile_hash=_compile_hash,
output_count=1,
host_ops=3,
npu_partitions=1,
)
@requires_ethosn
def test_ssd_mobilenet_v1():
# If this test is failing due to a hash mismatch, please notify @mbaret and
# @Leo-arm. The hash is there to catch any changes in the behaviour of the
# codegen, which could come about from either a change in Support Library
# version or a change in the Ethos-N codegen. To update this requires running
# on hardware that isn't available in CI.
_compile_hash = {"956caf9e7fe5cfd5c042bd17857f7407", "4313033d14328e2aa022b1bd71b27b1c"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"dc60cc687d892cd2877873094e9dfc0b", "6b3deeec16c24c0dcef23df0db5fb162"}
if tei.get_ethosn_api_version() == 2011:
_compile_hash = {"10826406ae724e52f360a06c35ced09d", "9a484d5ecec7acb18c9d6bc6058be031"}
if tei.get_ethosn_variant() == "Ethos-N78_1TOPS_2PLE_RATIO":
_compile_hash = {"425b38830f34b6eb448fa77dbfe9ac96", "de49128643cbf1c659a9a63aad1cba62"}
_test_image_network(
model_url="https://storage.googleapis.com/download.tensorflow.org/"
"models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip",
model_sub_path="detect.tflite",
input_dict={"normalized_input_image_tensor": (1, 300, 300, 3)},
compile_hash=_compile_hash,
output_count=4,
host_ops=28,
npu_partitions=2,
)
| true | true |
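The four network tests above pin codegen behaviour by checking a compiled library's digest against a set of known-good hashes (`tei.assert_lib_hash`), with several accepted digests per toolchain variant. A stdlib-only sketch of that guard pattern — the payload and digests are made up, and the choice of MD5 (suggested by the 32-hex-character hashes above) is an assumption here, not confirmed by the file:

```python
import hashlib


def artifact_hash(artifact_bytes):
    """Hex digest of a compiled artifact (MD5 assumed for illustration)."""
    return hashlib.md5(artifact_bytes).hexdigest()


def assert_known_hash(artifact_bytes, known_hashes):
    """Raise if the artifact's digest is not one of the accepted ones.

    Accepting a *set* of digests mirrors the tests above, where several
    Support Library / variant combinations each produce a different but
    valid compiled output.
    """
    digest = artifact_hash(artifact_bytes)
    if digest not in known_hashes:
        raise AssertionError(
            "unexpected artifact hash %s (known: %s)" % (digest, sorted(known_hashes))
        )
    return digest


# Example: freeze the digest of a fake "compiled" payload, then verify it.
payload = b"fake compiled network"
known = {artifact_hash(payload)}
assert_known_hash(payload, known)
```

As in the tests above, a digest mismatch fails loudly, which is the point: any change in codegen output surfaces immediately and the stored set must be updated deliberately.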
f720c5e752d2911c7077e24ef935f054b6818fb0 | 6,799 | py | Python | source/Mlos.Python/mlos/Optimizers/RegressionModels/SklearnRidgeRegressionModelConfig.py | kkanellis/MLOS | 791d670a4c44467b2b4c9633f8aa1bebab50771f | [
"MIT"
] | 81 | 2020-08-25T17:08:05.000Z | 2022-03-19T08:58:56.000Z | source/Mlos.Python/mlos/Optimizers/RegressionModels/SklearnRidgeRegressionModelConfig.py | grlap/MLOS | f828cf2b46ed63d7c9b3bd6cef73b2027a7ad12a | [
"MIT"
] | 173 | 2020-08-25T17:38:04.000Z | 2021-11-02T19:34:00.000Z | source/Mlos.Python/mlos/Optimizers/RegressionModels/SklearnRidgeRegressionModelConfig.py | grlap/MLOS | f828cf2b46ed63d7c9b3bd6cef73b2027a7ad12a | [
"MIT"
] | 38 | 2020-08-25T20:49:14.000Z | 2022-03-16T16:30:27.000Z | #
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
#
from enum import Enum
from mlos.Spaces import SimpleHypergrid, ContinuousDimension, DiscreteDimension, CategoricalDimension, Point
from mlos.Spaces.Configs.DefaultConfigMeta import DefaultConfigMeta
class SklearnRidgeRegressionModelConfig(metaclass=DefaultConfigMeta):
class Solver(Enum):
"""
From https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html:
Solver to use in the computational routines:
* ‘auto’ chooses the solver automatically based on the type of data.
* ‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. More stable for
singular matrices than ‘cholesky’.
* ‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution.
* ‘sparse_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg.
As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for
large-scale data (possibility to set tol and max_iter).
* ‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr.
It is the fastest and uses an iterative procedure.
* ‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its improved,
unbiased version named SAGA. Both methods also use an iterative procedure, and are
often faster than other solvers when both n_samples and n_features are large.
Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with
approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing.
All last five solvers support both dense and sparse data. However, only ‘sag’ and ‘sparse_cg’ supports
sparse input when fit_intercept is True.
"""
AUTO = 'auto' # default
SVD = 'svd'
CHOLESKY = 'cholesky'
LSQR = 'lsqr'
SPARSE_CG = 'sparse_cg'
SAG = 'sag'
SAGA = 'saga'
CONFIG_SPACE = SimpleHypergrid(
name="sklearn_ridge_regression_model_config",
dimensions=[
ContinuousDimension(name="alpha", min=0, max=2 ** 16),
CategoricalDimension(name="fit_intercept", values=[False, True]),
CategoricalDimension(name="normalize", values=[False, True]),
CategoricalDimension(name="copy_x", values=[False, True]),
DiscreteDimension(name="max_iter", min=0, max=10 ** 5),
ContinuousDimension(name="tol", min=0, max=2 ** 10),
CategoricalDimension(name="solver", values=[solver.value for solver in Solver]),
]
)
_DEFAULT = Point(
alpha=1.0,
fit_intercept=False,
normalize=False,
copy_x=True,
max_iter=1000,
tol=10 ** -4,
solver=Solver.AUTO.value
)
@classmethod
def contains(cls, config):
return Point(
alpha=config.alpha,
fit_intercept=config.fit_intercept,
normalize=config.normalize,
copy_x=config.copy_x,
max_iter=config.max_iter,
tol=config.tol,
random_state=config.random_state,
solver=config.solver
) in cls.CONFIG_SPACE
@classmethod
def create_from_config_point(cls, config_point):
assert cls.contains(config_point)
config_key_value_pairs = {param_name: value for param_name, value in config_point}
return cls(**config_key_value_pairs)
def __init__(
self,
alpha=_DEFAULT.alpha,
fit_intercept=_DEFAULT.fit_intercept,
normalize=_DEFAULT.normalize,
copy_x=_DEFAULT.copy_x,
max_iter=_DEFAULT.max_iter,
tol=_DEFAULT.tol,
random_state=None,
solver=_DEFAULT.solver
):
"""
Ridge parameters:
        :param alpha: Regularization strength; must be a positive float. Defaults to 1.0.
:param fit_intercept: Whether to calculate the intercept for this model.
:param normalize: This parameter is ignored when ``fit_intercept`` is set to False.
If True, the regressors X will be normalized before regression by
subtracting the mean and dividing by the l2-norm.
:param copy_x: If ``True``, X will be copied; else, it may be overwritten.
:param max_iter: The maximum number of iterations
:param tol: The tolerance for the optimization: if the updates are
smaller than ``tol``, the optimization code checks the
dual gap for optimality and continues until it is smaller
than ``tol``.
:param solver: Solver to use in the computational routines:
- 'auto' chooses the solver automatically based on the type of data.
- 'svd' uses a Singular Value Decomposition of X to compute the Ridge
coefficients. More stable for singular matrices than 'cholesky'.
- 'cholesky' uses the standard scipy.linalg.solve function to
obtain a closed-form solution.
- 'sparse_cg' uses the conjugate gradient solver as found in
scipy.sparse.linalg.cg. As an iterative algorithm, this solver is
more appropriate than 'cholesky' for large-scale data
(possibility to set `tol` and `max_iter`).
- 'lsqr' uses the dedicated regularized least-squares routine
scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative
procedure.
- 'sag' uses a Stochastic Average Gradient descent, and 'saga' uses
its improved, unbiased version named SAGA. Both methods also use an
iterative procedure, and are often faster than other solvers when
both n_samples and n_features are large. Note that 'sag' and
'saga' fast convergence is only guaranteed on features with
approximately the same scale. You can preprocess the data with a
scaler from sklearn.preprocessing.
:param random_state: The seed of the pseudo random number generator that selects a random
feature to update. Used when ``selection`` == 'random'.
"""
self.alpha = alpha
self.fit_intercept = fit_intercept
self.normalize = normalize
self.copy_x = copy_x
self.max_iter = max_iter
self.tol = tol
self.random_state = random_state
self.solver = solver
| 49.268116 | 116 | 0.631563 |
from enum import Enum
from mlos.Spaces import SimpleHypergrid, ContinuousDimension, DiscreteDimension, CategoricalDimension, Point
from mlos.Spaces.Configs.DefaultConfigMeta import DefaultConfigMeta
class SklearnRidgeRegressionModelConfig(metaclass=DefaultConfigMeta):
class Solver(Enum):
AUTO = 'auto'
SVD = 'svd'
CHOLESKY = 'cholesky'
LSQR = 'lsqr'
SPARSE_CG = 'sparse_cg'
SAG = 'sag'
SAGA = 'saga'
CONFIG_SPACE = SimpleHypergrid(
name="sklearn_ridge_regression_model_config",
dimensions=[
ContinuousDimension(name="alpha", min=0, max=2 ** 16),
CategoricalDimension(name="fit_intercept", values=[False, True]),
CategoricalDimension(name="normalize", values=[False, True]),
CategoricalDimension(name="copy_x", values=[False, True]),
DiscreteDimension(name="max_iter", min=0, max=10 ** 5),
ContinuousDimension(name="tol", min=0, max=2 ** 10),
CategoricalDimension(name="solver", values=[solver.value for solver in Solver]),
]
)
_DEFAULT = Point(
alpha=1.0,
fit_intercept=False,
normalize=False,
copy_x=True,
max_iter=1000,
tol=10 ** -4,
solver=Solver.AUTO.value
)
@classmethod
def contains(cls, config):
return Point(
alpha=config.alpha,
fit_intercept=config.fit_intercept,
normalize=config.normalize,
copy_x=config.copy_x,
max_iter=config.max_iter,
tol=config.tol,
random_state=config.random_state,
solver=config.solver
) in cls.CONFIG_SPACE
@classmethod
def create_from_config_point(cls, config_point):
assert cls.contains(config_point)
config_key_value_pairs = {param_name: value for param_name, value in config_point}
return cls(**config_key_value_pairs)
def __init__(
self,
alpha=_DEFAULT.alpha,
fit_intercept=_DEFAULT.fit_intercept,
normalize=_DEFAULT.normalize,
copy_x=_DEFAULT.copy_x,
max_iter=_DEFAULT.max_iter,
tol=_DEFAULT.tol,
random_state=None,
solver=_DEFAULT.solver
):
self.alpha = alpha
self.fit_intercept = fit_intercept
self.normalize = normalize
self.copy_x = copy_x
self.max_iter = max_iter
self.tol = tol
self.random_state = random_state
self.solver = solver
| true | true |
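The config class above couples a hypergrid search space with a `_DEFAULT` point, a `contains` membership test, and a `create_from_config_point` constructor. A stripped-down stdlib sketch of that pattern — the bounds mirror the dimensions above, but `SimpleRidgeConfig` and its method names are invented for illustration; the real MLOS `Point`/`SimpleHypergrid` machinery is richer:

```python
from dataclasses import dataclass


@dataclass
class SimpleRidgeConfig:
    """Hypothetical flat stand-in for SklearnRidgeRegressionModelConfig."""
    alpha: float = 1.0
    fit_intercept: bool = False
    max_iter: int = 1000
    tol: float = 1e-4
    solver: str = "auto"

    # Plain class attribute (no annotation), so not a dataclass field.
    _SOLVERS = ("auto", "svd", "cholesky", "lsqr", "sparse_cg", "sag", "saga")

    def validate(self):
        # Mirrors CONFIG_SPACE membership: each field must lie in its dimension.
        return (
            0 <= self.alpha <= 2 ** 16
            and 0 <= self.max_iter <= 10 ** 5
            and 0 <= self.tol <= 2 ** 10
            and self.solver in self._SOLVERS
        )

    @classmethod
    def create_from_point(cls, point):
        """Build a config from a dict of values, like create_from_config_point."""
        config = cls(**point)
        assert config.validate()
        return config


default = SimpleRidgeConfig()
assert default.validate()
custom = SimpleRidgeConfig.create_from_point({"alpha": 0.5, "solver": "sag"})
```

Keeping defaults in one place and validating at construction time is the same design choice the class above makes: an optimizer can sample points freely, and invalid points are rejected before they ever reach the sklearn model.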
f720c7aa06d180672d2e8ae9ac3670dabcc51952 | 10,404 | py | Python | tensorflow_probability/substrates/meta/rewrite.py | varomodt/probability | d68de79e67c06ab46509744574a044ccb966c4d5 | [
"Apache-2.0"
] | 1 | 2020-01-16T02:19:34.000Z | 2020-01-16T02:19:34.000Z | tensorflow_probability/substrates/meta/rewrite.py | varomodt/probability | d68de79e67c06ab46509744574a044ccb966c4d5 | [
"Apache-2.0"
] | null | null | null | tensorflow_probability/substrates/meta/rewrite.py | varomodt/probability | d68de79e67c06ab46509744574a044ccb966c4d5 | [
"Apache-2.0"
] | 1 | 2020-10-19T11:24:40.000Z | 2020-10-19T11:24:40.000Z | # Copyright 2019 The TensorFlow Probability Authors.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Rewrite script for TF->JAX."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
# Dependency imports
from absl import app
from absl import flags
flags.DEFINE_boolean('numpy_to_jax', False,
'Whether or not to rewrite numpy imports to jax.numpy')
flags.DEFINE_list('omit_deps', [], 'List of build deps being omitted.')
FLAGS = flags.FLAGS
TF_REPLACEMENTS = {
'import tensorflow ':
'from tensorflow_probability.python.internal.backend import numpy ',
'import tensorflow.compat.v1':
'from tensorflow_probability.python.internal.backend.numpy.compat '
'import v1',
'import tensorflow.compat.v2':
'from tensorflow_probability.python.internal.backend.numpy.compat '
'import v2',
'import tensorflow_probability as tfp':
'import tensorflow_probability as tfp; '
'tfp = tfp.substrates.numpy',
'from tensorflow.python.framework import tensor_shape':
('from tensorflow_probability.python.internal.backend.numpy.gen '
'import tensor_shape'),
'from tensorflow.python.framework import ops':
('from tensorflow_probability.python.internal.backend.numpy '
'import ops'),
'from tensorflow.python.framework import tensor_util':
('from tensorflow_probability.python.internal.backend.numpy '
'import ops'),
'from tensorflow.python.util import':
'from tensorflow_probability.python.internal.backend.numpy import',
'from tensorflow.python.util.all_util':
'from tensorflow_probability.python.internal.backend.numpy.private',
'from tensorflow.python.ops.linalg':
'from tensorflow_probability.python.internal.backend.numpy.gen',
'from tensorflow.python.ops import parallel_for':
'from tensorflow_probability.python.internal.backend.numpy '
'import functional_ops as parallel_for',
'from tensorflow.python.ops import control_flow_ops':
'from tensorflow_probability.python.internal.backend.numpy '
'import control_flow as control_flow_ops',
'from tensorflow.python.eager import context':
'from tensorflow_probability.python.internal.backend.numpy '
'import private',
('from tensorflow.python.client '
'import pywrap_tf_session as c_api'):
'pass',
('from tensorflow.python '
'import pywrap_tensorflow as c_api'):
'pass'
}
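The table above is consumed later in `main` by a plain `str.replace` loop, so each entry acts as a simple substring substitution. A minimal standalone sketch of how one entry rewrites a TF import (the sample source line is invented for illustration):

```python
# Sketch: TF_REPLACEMENTS entries are applied with str.replace, so an
# import line is rewritten by plain substring substitution.
replacements = {
    'import tensorflow.compat.v2':
        'from tensorflow_probability.python.internal.backend.numpy.compat '
        'import v2',
}
contents = 'import tensorflow.compat.v2 as tf\n'
for find, replace in replacements.items():
    contents = contents.replace(find, replace)
print(contents)
```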
DISABLED_BY_PKG = {
'experimental':
('auto_batching', 'composite_tensor', 'linalg',
'marginalize', 'nn', 'sequential', 'substrates', 'vi'),
}
LIBS = ('bijectors', 'distributions', 'experimental', 'math', 'mcmc',
'optimizer', 'random', 'stats', 'util')
INTERNALS = ('assert_util', 'batched_rejection_sampler', 'broadcast_util',
'cache_util', 'callable_util',
'custom_gradient', 'distribution_util', 'dtype_util',
'hypothesis_testlib', 'implementation_selection', 'monte_carlo',
'name_util', 'nest_util', 'numerics_testing',
'parameter_properties', 'prefer_static', 'samplers',
'special_math', 'structural_tuple', 'tensor_util',
'tensorshape_util', 'test_combinations', 'test_util', 'unnest',
'variadic_reduce', 'vectorization_util')
OPTIMIZERS = ('linesearch',)
LINESEARCH = ('internal',)
SAMPLERS = ('categorical', 'normal', 'poisson', 'uniform', 'shuffle')
PRIVATE_TF_PKGS = ('array_ops', 'control_flow_util', 'gradient_checker_v2',
'numpy_text', 'random_ops')
def main(argv):
disabled_by_pkg = dict(DISABLED_BY_PKG)
for dep in FLAGS.omit_deps:
pkg = dep.split('/python/')[1].split(':')[0].replace('/', '.')
lib = dep.split(':')[1]
if pkg.endswith('.{}'.format(lib)):
pkg = pkg.replace('.{}'.format(lib), '')
disabled_by_pkg.setdefault(pkg, ())
disabled_by_pkg[pkg] += (lib,)
else:
disabled_by_pkg.setdefault(pkg, ())
disabled_by_pkg[pkg] += (lib,)
replacements = collections.OrderedDict(TF_REPLACEMENTS)
for pkg, disabled in disabled_by_pkg.items():
replacements.update({
'from tensorflow_probability.python.{}.{} '.format(pkg, item):
'# from tensorflow_probability.python.{}.{} '.format(pkg, item)
for item in disabled
})
replacements.update({
'from tensorflow_probability.python.{} import {}'.format(pkg, item):
'# from tensorflow_probability.python.{} import {}'.format(pkg, item)
for item in disabled
})
replacements.update({
'tensorflow_probability.python.{}'.format(lib):
'tensorflow_probability.substrates.numpy.{}'.format(lib)
for lib in LIBS
})
replacements.update({
'tensorflow_probability.python import {} as'.format(lib):
'tensorflow_probability.substrates.numpy import {} as'.format(lib)
for lib in LIBS
})
replacements.update({
'tensorflow_probability.python import {}'.format(lib):
'tensorflow_probability.substrates.numpy import {}'.format(lib)
for lib in LIBS
})
replacements.update({
# Permits distributions.internal, psd_kernels.internal.
# 'as psd_kernels as': 'as',
})
replacements.update({
'tensorflow_probability.python.internal.{}'.format(internal):
'tensorflow_probability.substrates.numpy.internal.{}'.format(internal)
for internal in INTERNALS
})
# pylint: disable=g-complex-comprehension
replacements.update({
'tensorflow_probability.python.internal import {}'.format(internal):
'tensorflow_probability.substrates.numpy.internal import {}'.format(
internal)
for internal in INTERNALS
})
replacements.update({
'tensorflow.python.ops import {}'.format(private):
'tensorflow_probability.python.internal.backend.numpy import private'
' as {}'.format(private)
for private in PRIVATE_TF_PKGS
})
replacements.update({
'tensorflow.python.framework.ops import {}'.format(
private):
'tensorflow_probability.python.internal.backend.numpy import private'
' as {}'.format(private)
for private in PRIVATE_TF_PKGS
})
# pylint: enable=g-complex-comprehension
# TODO(bjp): Delete this block after TFP uses stateless samplers.
replacements.update({
'tf.random.{}'.format(sampler): 'tf.random.stateless_{}'.format(sampler)
for sampler in SAMPLERS
})
replacements.update({
'self._maybe_assert_dtype': '# self._maybe_assert_dtype',
'SKIP_DTYPE_CHECKS = False': 'SKIP_DTYPE_CHECKS = True',
'@test_util.test_all_tf_execution_regimes':
'# @test_util.test_all_tf_execution_regimes',
'@test_util.test_graph_and_eager_modes':
'# @test_util.test_graph_and_eager_modes',
'@test_util.test_graph_mode_only':
'# @test_util.test_graph_mode_only',
'TestCombinationsTest(test_util.TestCase)':
'TestCombinationsDoNotTest(object)',
'@six.add_metaclass(TensorMetaClass)':
'# @six.add_metaclass(TensorMetaClass)',
})
filename = argv[1]
contents = open(filename, encoding='utf-8').read()
if '__init__.py' in filename:
# Comment out items from __all__.
for pkg, disabled in disabled_by_pkg.items():
for item in disabled:
def disable_all(name):
replacements.update({
'"{}"'.format(name): '# "{}"'.format(name),
'\'{}\''.format(name): '# \'{}\''.format(name),
})
if 'from tensorflow_probability.python.{} import {}'.format(
pkg, item) in contents:
disable_all(item)
for segment in contents.split(
'from tensorflow_probability.python.{}.{} import '.format(
pkg, item)):
disable_all(segment.split('\n')[0])
for find, replace in replacements.items():
contents = contents.replace(find, replace)
disabler = 'JAX_DISABLE' if FLAGS.numpy_to_jax else 'NUMPY_DISABLE'
lines = contents.split('\n')
for i, l in enumerate(lines):
if disabler in l:
lines[i] = '# {}'.format(l)
contents = '\n'.join(lines)
if not FLAGS.numpy_to_jax:
contents = contents.replace('NUMPY_MODE = False', 'NUMPY_MODE = True')
if FLAGS.numpy_to_jax:
contents = contents.replace('tfp.substrates.numpy', 'tfp.substrates.jax')
contents = contents.replace('substrates.numpy', 'substrates.jax')
contents = contents.replace('backend.numpy', 'backend.jax')
contents = contents.replace('def _call_jax', 'def __call__')
contents = contents.replace('JAX_MODE = False', 'JAX_MODE = True')
contents = contents.replace('SKIP_DTYPE_CHECKS = True',
'SKIP_DTYPE_CHECKS = False')
is_test = lambda x: x.endswith('_test.py') or x.endswith('_test_util.py')
if is_test(argv[1]): # Test-only rewrites.
contents = contents.replace(
'tf.test.main()',
'from jax.config import config; '
'config.update("jax_enable_x64", True); '
'config.enable_omnistaging(); '
'tf.test.main()')
print('# ' + '@' * 78)
print('# This file is auto-generated by substrates/meta/rewrite.py')
print('# It will be surfaced by the build system as a symlink at:')
substrate = 'jax' if FLAGS.numpy_to_jax else 'numpy'
print('# `tensorflow_probability/substrates/{substrate}/{path}`'.format(
substrate=substrate, path=filename.split('/python/')[1]))
print('# For more info, see substrate_runfiles_symlinks in build_defs.bzl')
print('# ' + '@' * 78)
print('\n# (This notice adds 10 to line numbering.)\n\n')
print(contents, file=open(1, 'w', encoding='utf-8', closefd=False))
if __name__ == '__main__':
app.run(main)
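The per-line disabler pass in `main` comments out any line carrying the `NUMPY_DISABLE` (or `JAX_DISABLE`) marker. A small sketch of that pass in isolation (the sample `contents` string and the `gpu_only_op` name are invented for illustration):

```python
# Sketch of the disabler pass: any line containing the disabler marker
# is commented out wholesale in the generated substrate file.
disabler = 'NUMPY_DISABLE'
contents = 'x = 1\ny = gpu_only_op()  # NUMPY_DISABLE\nz = 3'
lines = contents.split('\n')
for i, l in enumerate(lines):
    if disabler in l:
        lines[i] = '# {}'.format(l)
contents = '\n'.join(lines)
print(contents)
```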
# File: Pilot1/Combo/combo_dose.py
# Repo: j-woz/Benchmarks (license: MIT)
#! /usr/bin/env python
from __future__ import division, print_function
import argparse
import collections
import logging
import os
import random
import threading
import numpy as np
import pandas as pd
from itertools import cycle, islice
import keras
from keras import backend as K
from keras import optimizers
from keras.models import Model
from keras.layers import Input, Dense, Dropout
from keras.callbacks import Callback, ModelCheckpoint, ReduceLROnPlateau, LearningRateScheduler, TensorBoard
from keras.utils import get_custom_objects
from keras.utils.vis_utils import plot_model
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from sklearn.model_selection import KFold, StratifiedKFold, GroupKFold
from scipy.stats.stats import pearsonr
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
import combo
import candle
import NCI60
logger = logging.getLogger(__name__)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
def set_seed(seed):
os.environ['PYTHONHASHSEED'] = '0'
np.random.seed(seed)
random.seed(seed)
if K.backend() == 'tensorflow':
import tensorflow as tf
tf.set_random_seed(seed)
# session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
# sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
# K.set_session(sess)
        # Uncomment when running on an optimized tensorflow where NUM_INTER_THREADS and
# NUM_INTRA_THREADS env vars are set.
# session_conf = tf.ConfigProto(inter_op_parallelism_threads=int(os.environ['NUM_INTER_THREADS']),
# intra_op_parallelism_threads=int(os.environ['NUM_INTRA_THREADS']))
# sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
# K.set_session(sess)
def verify_path(path):
folder = os.path.dirname(path)
if folder and not os.path.exists(folder):
os.makedirs(folder)
def set_up_logger(logfile, verbose):
verify_path(logfile)
fh = logging.FileHandler(logfile)
fh.setFormatter(logging.Formatter("[%(asctime)s %(process)d] %(message)s", datefmt="%Y-%m-%d %H:%M:%S"))
fh.setLevel(logging.DEBUG)
sh = logging.StreamHandler()
sh.setFormatter(logging.Formatter(''))
sh.setLevel(logging.DEBUG if verbose else logging.INFO)
logger.setLevel(logging.DEBUG)
logger.addHandler(fh)
logger.addHandler(sh)
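`set_up_logger` splits verbosity between a DEBUG-level file handler and a quieter console handler. A minimal standalone sketch of that split (the logger name and file path here are arbitrary, not from the benchmark):

```python
import logging
import os
import tempfile

# Hypothetical logger/file names; mirrors the file-vs-console split above.
logger = logging.getLogger('combo_demo')
logger.setLevel(logging.DEBUG)
path = os.path.join(tempfile.mkdtemp(), 'run.log')
fh = logging.FileHandler(path)
fh.setLevel(logging.DEBUG)      # file captures everything
sh = logging.StreamHandler()
sh.setLevel(logging.INFO)       # console stays quieter than the file
logger.addHandler(fh)
logger.addHandler(sh)
logger.debug('goes to file only')
logger.info('goes to file and console')
fh.flush()
text = open(path).read()
```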
def extension_from_parameters(args):
"""Construct string for saving model with annotation of parameters"""
ext = ''
ext += '.A={}'.format(args.activation)
ext += '.B={}'.format(args.batch_size)
ext += '.E={}'.format(args.epochs)
ext += '.O={}'.format(args.optimizer)
# ext += '.LEN={}'.format(args.maxlen)
ext += '.LR={}'.format(args.learning_rate)
ext += '.CF={}'.format(''.join([x[0] for x in sorted(args.cell_features)]))
ext += '.DF={}'.format(''.join([x[0] for x in sorted(args.drug_features)]))
if args.feature_subsample > 0:
ext += '.FS={}'.format(args.feature_subsample)
if args.dropout > 0:
ext += '.DR={}'.format(args.dropout)
if args.warmup_lr:
ext += '.wu_lr'
if args.reduce_lr:
ext += '.re_lr'
if args.residual:
ext += '.res'
if args.use_landmark_genes:
ext += '.L1000'
if args.gen:
ext += '.gen'
if args.use_combo_score:
ext += '.scr'
for i, n in enumerate(args.dense):
if n > 0:
ext += '.D{}={}'.format(i+1, n)
    if args.dense_feature_layers != args.dense:
        for i, n in enumerate(args.dense_feature_layers):
            if n > 0:
                ext += '.FD{}={}'.format(i+1, n)
return ext
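As a quick illustration of the suffix convention used above, here is the dense-layer portion in isolation (the layer sizes are hypothetical):

```python
# Hypothetical dense layer sizes; mirrors the '.D{i}={n}' suffix
# convention used by extension_from_parameters above.
dense = [1000, 1000, 500]
ext = ''
for i, n in enumerate(dense):
    if n > 0:
        ext += '.D{}={}'.format(i + 1, n)
print(ext)  # .D1=1000.D2=1000.D3=500
```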
def discretize(y, bins=5):
percentiles = [100 / bins * (i + 1) for i in range(bins - 1)]
thresholds = [np.percentile(y, x) for x in percentiles]
classes = np.digitize(y, thresholds)
return classes
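`discretize` turns a continuous target into roughly equal-sized quantile classes, which is what lets `StratifiedKFold` stratify a regression target further below. A self-contained sketch (the function is repeated here so the example runs on its own):

```python
import numpy as np

def discretize(y, bins=5):
    # Same quantile binning as above: thresholds at the (100/bins)-percentile
    # boundaries, then np.digitize assigns each value a class index.
    percentiles = [100 / bins * (i + 1) for i in range(bins - 1)]
    thresholds = [np.percentile(y, x) for x in percentiles]
    return np.digitize(y, thresholds)

y = np.arange(10)
classes = discretize(y, bins=5)
print(classes.tolist())  # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
```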
class ComboDataLoader(object):
"""Load merged drug response, drug descriptors and cell line essay data
"""
def __init__(self, seed, val_split=0.2, shuffle=True,
cell_features=['expression'], drug_features=['descriptors'],
response_url=None, use_landmark_genes=False, use_combo_score=False,
preprocess_rnaseq=None, exclude_cells=[], exclude_drugs=[],
feature_subsample=None, scaling='std', scramble=False,
cv_partition='overlapping', cv=0):
"""Initialize data merging drug response, drug descriptors and cell line essay.
Shuffle and split training and validation set
Parameters
----------
seed: integer
seed for random generation
val_split : float, optional (default 0.2)
fraction of data to use in validation
        cell_features: list of strings from 'expression', 'expression_u133p2', 'expression_5platform', 'rnaseq', 'mirna', 'proteome', 'all', 'categorical' (default ['expression'])
use one or more cell line feature sets: gene expression, microRNA, proteome
use 'all' for ['expression', 'mirna', 'proteome']
use 'categorical' for one-hot encoded cell lines
drug_features: list of strings from 'descriptors', 'latent', 'all', 'categorical', 'noise' (default ['descriptors'])
use dragon7 descriptors, latent representations from Aspuru-Guzik's SMILES autoencoder
trained on NSC drugs, or both; use random features if set to noise
use 'categorical' for one-hot encoded drugs
shuffle : True or False, optional (default True)
if True shuffles the merged data before splitting training and validation sets
scramble: True or False, optional (default False)
if True randomly shuffle dose response data as a control
feature_subsample: None or integer (default None)
number of feature columns to use from cellline expressions and drug descriptors
use_landmark_genes: True or False
only use LINCS1000 landmark genes
use_combo_score: bool (default False)
use combination score in place of percent growth (stored in 'GROWTH' column)
scaling: None, 'std', 'minmax' or 'maxabs' (default 'std')
            type of feature scaling: 'minmax' to [0, 1], 'maxabs' to [-1, 1], 'std' for standard normalization
"""
self.cv_partition = cv_partition
np.random.seed(seed)
df = NCI60.load_combo_dose_response(response_url=response_url, use_combo_score=use_combo_score, fraction=True, exclude_cells=exclude_cells, exclude_drugs=exclude_drugs)
logger.info('Loaded {} unique (CL, D1, D2) response sets.'.format(df.shape[0]))
if 'all' in cell_features:
self.cell_features = ['expression', 'mirna', 'proteome']
else:
self.cell_features = cell_features
if 'all' in drug_features:
self.drug_features = ['descriptors', 'latent']
else:
self.drug_features = drug_features
for fea in self.cell_features:
if fea == 'expression' or fea == 'rnaseq':
self.df_cell_expr = NCI60.load_cell_expression_rnaseq(ncols=feature_subsample, scaling=scaling, use_landmark_genes=use_landmark_genes, preprocess_rnaseq=preprocess_rnaseq)
df = df.merge(self.df_cell_expr[['CELLNAME']], on='CELLNAME')
elif fea == 'expression_u133p2':
self.df_cell_expr = NCI60.load_cell_expression_u133p2(ncols=feature_subsample, scaling=scaling, use_landmark_genes=use_landmark_genes)
df = df.merge(self.df_cell_expr[['CELLNAME']], on='CELLNAME')
elif fea == 'expression_5platform':
self.df_cell_expr = NCI60.load_cell_expression_5platform(ncols=feature_subsample, scaling=scaling, use_landmark_genes=use_landmark_genes)
df = df.merge(self.df_cell_expr[['CELLNAME']], on='CELLNAME')
elif fea == 'mirna':
self.df_cell_mirna = NCI60.load_cell_mirna(ncols=feature_subsample, scaling=scaling)
df = df.merge(self.df_cell_mirna[['CELLNAME']], on='CELLNAME')
elif fea == 'proteome':
self.df_cell_prot = NCI60.load_cell_proteome(ncols=feature_subsample, scaling=scaling)
df = df.merge(self.df_cell_prot[['CELLNAME']], on='CELLNAME')
elif fea == 'categorical':
df_cell_ids = df[['CELLNAME']].drop_duplicates()
cell_ids = df_cell_ids['CELLNAME'].map(lambda x: x.replace(':', '.'))
df_cell_cat = pd.get_dummies(cell_ids)
df_cell_cat.index = df_cell_ids['CELLNAME']
self.df_cell_cat = df_cell_cat.reset_index()
for fea in self.drug_features:
if fea == 'descriptors':
self.df_drug_desc = NCI60.load_drug_descriptors(ncols=feature_subsample, scaling=scaling)
df = df[df['NSC1'].isin(self.df_drug_desc['NSC']) & df['NSC2'].isin(self.df_drug_desc['NSC'])]
elif fea == 'latent':
self.df_drug_auen = NCI60.load_drug_autoencoded_AG(ncols=feature_subsample, scaling=scaling)
df = df[df['NSC1'].isin(self.df_drug_auen['NSC']) & df['NSC2'].isin(self.df_drug_auen['NSC'])]
elif fea == 'categorical':
df_drug_ids = df[['NSC1']].drop_duplicates()
df_drug_ids.columns = ['NSC']
drug_ids = df_drug_ids['NSC']
df_drug_cat = pd.get_dummies(drug_ids)
df_drug_cat.index = df_drug_ids['NSC']
self.df_drug_cat = df_drug_cat.reset_index()
elif fea == 'noise':
ids1 = df[['NSC1']].drop_duplicates().rename(columns={'NSC1':'NSC'})
ids2 = df[['NSC2']].drop_duplicates().rename(columns={'NSC2':'NSC'})
df_drug_ids = pd.concat([ids1, ids2]).drop_duplicates()
noise = np.random.normal(size=(df_drug_ids.shape[0], 500))
df_rand = pd.DataFrame(noise, index=df_drug_ids['NSC'],
columns=['RAND-{:03d}'.format(x) for x in range(500)])
self.df_drug_rand = df_rand.reset_index()
logger.info('Filtered down to {} rows with matching information.'.format(df.shape[0]))
ids1 = df[['NSC1']].drop_duplicates().rename(columns={'NSC1':'NSC'})
ids2 = df[['NSC2']].drop_duplicates().rename(columns={'NSC2':'NSC'})
df_drug_ids = pd.concat([ids1, ids2]).drop_duplicates().reset_index(drop=True)
n_drugs = df_drug_ids.shape[0]
n_val_drugs = int(n_drugs * val_split)
n_train_drugs = n_drugs - n_val_drugs
logger.info('Unique cell lines: {}'.format(df['CELLNAME'].nunique()))
logger.info('Unique drugs: {}'.format(n_drugs))
# df.to_csv('filtered.growth.min.tsv', sep='\t', index=False, float_format='%.4g')
# df.to_csv('filtered.score.max.tsv', sep='\t', index=False, float_format='%.4g')
if shuffle:
df = df.sample(frac=1.0, random_state=seed).reset_index(drop=True)
df_drug_ids = df_drug_ids.sample(frac=1.0, random_state=seed).reset_index(drop=True)
self.df_response = df
self.df_drug_ids = df_drug_ids
self.train_drug_ids = df_drug_ids['NSC'][:n_train_drugs]
self.val_drug_ids = df_drug_ids['NSC'][-n_val_drugs:]
if scramble:
growth = df[['GROWTH']]
random_growth = growth.iloc[np.random.permutation(np.arange(growth.shape[0]))].reset_index()
self.df_response[['GROWTH']] = random_growth['GROWTH']
            logger.warning('Randomly shuffled dose response growth values.')
logger.info('Distribution of dose response:')
logger.info(self.df_response[['GROWTH']].describe())
self.total = df.shape[0]
self.n_val = int(self.total * val_split)
self.n_train = self.total - self.n_val
logger.info('Rows in train: {}, val: {}'.format(self.n_train, self.n_val))
self.cell_df_dict = {'expression': 'df_cell_expr',
'expression_5platform': 'df_cell_expr',
'expression_u133p2': 'df_cell_expr',
'rnaseq': 'df_cell_expr',
'mirna': 'df_cell_mirna',
'proteome': 'df_cell_prot',
'categorical': 'df_cell_cat'}
self.drug_df_dict = {'descriptors': 'df_drug_desc',
'latent': 'df_drug_auen',
'categorical': 'df_drug_cat',
'noise': 'df_drug_rand'}
self.input_features = collections.OrderedDict()
self.feature_shapes = {}
for fea in self.cell_features:
feature_type = 'cell.' + fea
feature_name = 'cell.' + fea
df_cell = getattr(self, self.cell_df_dict[fea])
self.input_features[feature_name] = feature_type
self.feature_shapes[feature_type] = (df_cell.shape[1] - 1,)
for drug in ['drug1', 'drug2']:
for fea in self.drug_features:
feature_type = 'drug.' + fea
feature_name = drug + '.' + fea
df_drug = getattr(self, self.drug_df_dict[fea])
self.input_features[feature_name] = feature_type
self.feature_shapes[feature_type] = (df_drug.shape[1] - 1,)
self.feature_shapes['dose'] = (1,)
for dose in ['dose1', 'dose2']:
self.input_features[dose] = 'dose'
logger.info('Input features shapes:')
for k, v in self.input_features.items():
logger.info(' {}: {}'.format(k, self.feature_shapes[v]))
self.input_dim = sum([np.prod(self.feature_shapes[x]) for x in self.input_features.values()])
logger.info('Total input dimensions: {}'.format(self.input_dim))
if cv > 1:
if cv_partition == 'disjoint':
pass
elif cv_partition == 'disjoint_cells':
y = self.df_response['GROWTH'].values
groups = self.df_response['CELLNAME'].values
gkf = GroupKFold(n_splits=cv)
splits = gkf.split(y, groups=groups)
self.cv_train_indexes = []
self.cv_val_indexes = []
for index, (train_index, val_index) in enumerate(splits):
print(index, train_index)
self.cv_train_indexes.append(train_index)
self.cv_val_indexes.append(val_index)
else:
y = self.df_response['GROWTH'].values
# kf = KFold(n_splits=cv)
# splits = kf.split(y)
                skf = StratifiedKFold(n_splits=cv, shuffle=True, random_state=seed)
splits = skf.split(y, discretize(y, bins=cv))
self.cv_train_indexes = []
self.cv_val_indexes = []
for index, (train_index, val_index) in enumerate(splits):
print(index, train_index)
self.cv_train_indexes.append(train_index)
self.cv_val_indexes.append(val_index)
def load_data_all(self, switch_drugs=False):
df_all = self.df_response
y_all = df_all['GROWTH'].values
x_all_list = []
for fea in self.cell_features:
df_cell = getattr(self, self.cell_df_dict[fea])
df_x_all = pd.merge(df_all[['CELLNAME']], df_cell, on='CELLNAME', how='left')
x_all_list.append(df_x_all.drop(['CELLNAME'], axis=1).values)
# for fea in loader.cell_features:
# df_cell = getattr(loader, loader.cell_df_dict[fea])
# df_x_all = pd.merge(df_all[['CELLNAME']], df_cell, on='CELLNAME', how='left')
# df_x_all[:1000].to_csv('df.{}.1k.csv'.format(fea), index=False, float_format="%g")
drugs = ['NSC1', 'NSC2']
doses = ['pCONC1', 'pCONC2']
if switch_drugs:
drugs = ['NSC2', 'NSC1']
doses = ['pCONC2', 'pCONC1']
for drug in drugs:
for fea in self.drug_features:
df_drug = getattr(self, self.drug_df_dict[fea])
df_x_all = pd.merge(df_all[[drug]], df_drug, left_on=drug, right_on='NSC', how='left')
x_all_list.append(df_x_all.drop([drug, 'NSC'], axis=1).values)
for dose in doses:
x_all_list.append(df_all[dose].values)
# for drug in drugs:
# for fea in loader.drug_features:
# df_drug = getattr(loader, loader.drug_df_dict[fea])
# df_x_all = pd.merge(df_all[[drug]], df_drug, left_on=drug, right_on='NSC', how='left')
# print(df_x_all.shape)
# df_x_all[:1000].drop([drug], axis=1).to_csv('df.{}.{}.1k.csv'.format(drug, fea), index=False, float_format="%g")
# df_all[:1000].to_csv('df.growth.1k.csv', index=False, float_format="%g")
return x_all_list, y_all, df_all
def load_data_by_index(self, train_index, val_index):
x_all_list, y_all, df_all = self.load_data_all()
x_train_list = [x[train_index] for x in x_all_list]
x_val_list = [x[val_index] for x in x_all_list]
y_train = y_all[train_index]
y_val = y_all[val_index]
df_train = df_all.iloc[train_index, :]
df_val = df_all.iloc[val_index, :]
if self.cv_partition == 'disjoint':
logger.info('Training drugs: {}'.format(set(df_train['NSC1'])))
logger.info('Validation drugs: {}'.format(set(df_val['NSC1'])))
elif self.cv_partition == 'disjoint_cells':
logger.info('Training cells: {}'.format(set(df_train['CELLNAME'])))
logger.info('Validation cells: {}'.format(set(df_val['CELLNAME'])))
return x_train_list, y_train, x_val_list, y_val, df_train, df_val
def load_data_cv(self, fold):
train_index = self.cv_train_indexes[fold]
val_index = self.cv_val_indexes[fold]
# print('fold', fold)
# print(train_index[:5])
return self.load_data_by_index(train_index, val_index)
def load_data(self):
if self.cv_partition == 'disjoint':
train_index = self.df_response[(self.df_response['NSC1'].isin(self.train_drug_ids)) & (self.df_response['NSC2'].isin(self.train_drug_ids))].index
val_index = self.df_response[(self.df_response['NSC1'].isin(self.val_drug_ids)) & (self.df_response['NSC2'].isin(self.val_drug_ids))].index
else:
train_index = range(self.n_train)
val_index = range(self.n_train, self.total)
return self.load_data_by_index(train_index, val_index)
def load_data_old(self):
        # bad performance (~4x slower), possibly due to non-contiguous data
df_train = self.df_response.iloc[:self.n_train, :]
df_val = self.df_response.iloc[self.n_train:, :]
y_train = df_train['GROWTH'].values
y_val = df_val['GROWTH'].values
x_train_list = []
x_val_list = []
for fea in self.cell_features:
df_cell = getattr(self, self.cell_df_dict[fea])
df_x_train = pd.merge(df_train[['CELLNAME']], df_cell, on='CELLNAME', how='left')
df_x_val = pd.merge(df_val[['CELLNAME']], df_cell, on='CELLNAME', how='left')
x_train_list.append(df_x_train.drop(['CELLNAME'], axis=1).values)
x_val_list.append(df_x_val.drop(['CELLNAME'], axis=1).values)
for drug in ['NSC1', 'NSC2']:
for fea in self.drug_features:
df_drug = getattr(self, self.drug_df_dict[fea])
df_x_train = pd.merge(df_train[[drug]], df_drug, left_on=drug, right_on='NSC', how='left')
df_x_val = pd.merge(df_val[[drug]], df_drug, left_on=drug, right_on='NSC', how='left')
x_train_list.append(df_x_train.drop([drug, 'NSC'], axis=1).values)
x_val_list.append(df_x_val.drop([drug, 'NSC'], axis=1).values)
return x_train_list, y_train, x_val_list, y_val, df_train, df_val
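The per-feature assembly above is a left join on the id column followed by dropping the key columns, which keeps the feature rows aligned with the response rows. A minimal standalone sketch (frame and column names here are toy stand-ins, not the real NCI60 tables):

```python
import pandas as pd

# toy frames; the real code joins df_response against a drug-descriptor table on 'NSC'
df_resp = pd.DataFrame({'NSC1': ['d1', 'd2', 'd1']})
df_drug = pd.DataFrame({'NSC': ['d1', 'd2'], 'F1': [0.1, 0.2]})
df_x = pd.merge(df_resp[['NSC1']], df_drug, left_on='NSC1', right_on='NSC', how='left')
x = df_x.drop(['NSC1', 'NSC'], axis=1).values  # rows stay aligned with df_resp
```

A left merge against a deduplicated feature table is many-to-one, so pandas preserves the left frame's row order and duplicated drugs simply repeat their feature row.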
class ComboDataGenerator(object):
"""Generate training, validation or testing batches from loaded data
"""
def __init__(self, data, partition='train', batch_size=32):
self.lock = threading.Lock()
self.data = data
self.partition = partition
self.batch_size = batch_size
if partition == 'train':
self.cycle = cycle(range(data.n_train))
self.num_data = data.n_train
elif partition == 'val':
self.cycle = cycle(range(data.total)[-data.n_val:])
self.num_data = data.n_val
else:
            raise ValueError('Data partition "{}" not recognized.'.format(partition))
def flow(self):
"""Keep generating data batches
"""
        while True:
            # serialize index draws so concurrent generator workers don't interleave batches
            with self.lock:
                indices = list(islice(self.cycle, self.batch_size))
df = self.data.df_response.iloc[indices, :]
y = df['GROWTH'].values
x_list = []
for fea in self.data.cell_features:
df_cell = getattr(self.data, self.data.cell_df_dict[fea])
df_x = pd.merge(df[['CELLNAME']], df_cell, on='CELLNAME', how='left')
x_list.append(df_x.drop(['CELLNAME'], axis=1).values)
for drug in ['NSC1', 'NSC2']:
for fea in self.data.drug_features:
df_drug = getattr(self.data, self.data.drug_df_dict[fea])
df_x = pd.merge(df[[drug]], df_drug, left_on=drug, right_on='NSC', how='left')
x_list.append(df_x.drop([drug, 'NSC'], axis=1).values)
yield x_list, y
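The locked `islice`-over-`cycle` pattern in `flow()` can be isolated from the pandas lookups. The class below is an illustrative stand-in (not part of this file) showing just the thread-safe index drawing:

```python
import threading
from itertools import cycle, islice

class BatchIndexer(object):
    """Thread-safe cycling index drawer (illustrative stand-in for ComboDataGenerator's index logic)."""
    def __init__(self, n, batch_size):
        self.lock = threading.Lock()
        self.cycle = cycle(range(n))
        self.batch_size = batch_size

    def next_batch(self):
        # same pattern as flow(): hold the lock only while consuming from the shared iterator
        with self.lock:
            return list(islice(self.cycle, self.batch_size))

ix = BatchIndexer(5, 3)
b1 = ix.next_batch()  # first three indices
b2 = ix.next_batch()  # wraps past the end of the epoch
```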
def test_generator(loader):
gen = ComboDataGenerator(loader).flow()
x_list, y = next(gen)
for x in x_list:
print(x.shape)
print(y.shape)
def test_loader(loader):
    x_train_list, y_train, x_val_list, y_val, df_train, df_val = loader.load_data()
print('x_train shapes:')
for x in x_train_list:
print(x.shape)
print('y_train shape:', y_train.shape)
print('x_val shapes:')
for x in x_val_list:
print(x.shape)
print('y_val shape:', y_val.shape)
def r2(y_true, y_pred):
SS_res = K.sum(K.square(y_true - y_pred))
SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
return (1 - SS_res/(SS_tot + K.epsilon()))
def mae(y_true, y_pred):
return keras.metrics.mean_absolute_error(y_true, y_pred)
def evaluate_prediction(y_true, y_pred):
mse = mean_squared_error(y_true, y_pred)
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
corr, _ = pearsonr(y_true, y_pred)
return {'mse': mse, 'mae': mae, 'r2': r2, 'corr': corr}
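The metrics in `evaluate_prediction` can be sanity-checked against plain NumPy equivalents; the helper names below are illustrative, not part of this file:

```python
import numpy as np

def r2_numpy(y_true, y_pred):
    # same definition as the Keras-backend r2() above, without the epsilon guard
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def pearson_numpy(y_true, y_pred):
    return np.corrcoef(y_true, y_pred)[0, 1]

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
r2_val = r2_numpy(y_true, y_pred)
corr_val = pearson_numpy(y_true, y_pred)
```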
def log_evaluation(metric_outputs, description='Comparing y_true and y_pred:'):
logger.info(description)
for metric, value in metric_outputs.items():
logger.info(' {}: {:.4f}'.format(metric, value))
def plot_history(out, history, metric='loss', title=None):
title = title or 'model {}'.format(metric)
val_metric = 'val_{}'.format(metric)
plt.figure(figsize=(8, 6))
plt.plot(history.history[metric], marker='o')
plt.plot(history.history[val_metric], marker='d')
plt.title(title)
plt.ylabel(metric)
plt.xlabel('epoch')
plt.legend(['train_{}'.format(metric), 'val_{}'.format(metric)], loc='upper center')
png = '{}.plot.{}.png'.format(out, metric)
plt.savefig(png, bbox_inches='tight')
class LoggingCallback(Callback):
def __init__(self, print_fcn=print):
Callback.__init__(self)
self.print_fcn = print_fcn
def on_epoch_end(self, epoch, logs={}):
msg = "[Epoch: %i] %s" % (epoch, ", ".join("%s: %f" % (k, v) for k, v in sorted(logs.items())))
self.print_fcn(msg)
class PermanentDropout(Dropout):
def __init__(self, rate, **kwargs):
super(PermanentDropout, self).__init__(rate, **kwargs)
self.uses_learning_phase = False
def call(self, x, mask=None):
if 0. < self.rate < 1.:
noise_shape = self._get_noise_shape(x)
x = K.dropout(x, self.rate, noise_shape)
return x
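`PermanentDropout` keeps dropout active at inference time (Monte Carlo dropout). The NumPy sketch below mirrors that idea outside Keras; the function name is illustrative:

```python
import numpy as np

def mc_dropout(x, rate, rng):
    # dropout that stays on at inference, as PermanentDropout does:
    # keep each unit with probability 1-rate, rescale to preserve the expectation
    mask = rng.binomial(1, 1.0 - rate, size=x.shape)
    return x * mask / (1.0 - rate)

rng = np.random.RandomState(0)
x = np.ones((1000,))
out = mc_dropout(x, 0.5, rng)  # surviving units become 2.0, dropped ones 0.0
```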
class ModelRecorder(Callback):
def __init__(self, save_all_models=False):
Callback.__init__(self)
self.save_all_models = save_all_models
get_custom_objects()['PermanentDropout'] = PermanentDropout
def on_train_begin(self, logs={}):
self.val_losses = []
self.best_val_loss = np.Inf
self.best_model = None
def on_epoch_end(self, epoch, logs={}):
val_loss = logs.get('val_loss')
self.val_losses.append(val_loss)
if val_loss < self.best_val_loss:
            self.best_model = keras.models.clone_model(self.model)
            # clone_model copies only the architecture; carry over the trained weights too
            self.best_model.set_weights(self.model.get_weights())
            self.best_val_loss = val_loss
def build_feature_model(input_shape, name='', dense_layers=[1000, 1000],
activation='relu', residual=False,
dropout_rate=0, permanent_dropout=True):
x_input = Input(shape=input_shape)
h = x_input
for i, layer in enumerate(dense_layers):
x = h
h = Dense(layer, activation=activation)(h)
if dropout_rate > 0:
if permanent_dropout:
h = PermanentDropout(dropout_rate)(h)
else:
h = Dropout(dropout_rate)(h)
if residual:
try:
h = keras.layers.add([h, x])
except ValueError:
pass
model = Model(x_input, h, name=name)
return model
def build_model(loader, args, verbose=False):
input_models = {}
dropout_rate = args.dropout
permanent_dropout = True
for fea_type, shape in loader.feature_shapes.items():
box = build_feature_model(input_shape=shape, name=fea_type,
dense_layers=args.dense_feature_layers,
dropout_rate=dropout_rate, permanent_dropout=permanent_dropout)
if verbose:
box.summary()
input_models[fea_type] = box
inputs = []
encoded_inputs = []
for fea_name, fea_type in loader.input_features.items():
shape = loader.feature_shapes[fea_type]
fea_input = Input(shape, name='input.'+fea_name)
inputs.append(fea_input)
input_model = input_models[fea_type]
encoded = input_model(fea_input)
encoded_inputs.append(encoded)
merged = keras.layers.concatenate(encoded_inputs)
h = merged
for i, layer in enumerate(args.dense):
x = h
h = Dense(layer, activation=args.activation)(h)
if dropout_rate > 0:
if permanent_dropout:
h = PermanentDropout(dropout_rate)(h)
else:
h = Dropout(dropout_rate)(h)
if args.residual:
try:
h = keras.layers.add([h, x])
except ValueError:
pass
output = Dense(1)(h)
return Model(inputs, output)
def get_combo_parser():
description = 'Build neural network based models to predict tumor response to drug pairs.'
parser = argparse.ArgumentParser(prog='combo_baseline', formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description=description)
return combo.common_parser(parser)
def initialize_parameters():
# Build benchmark object
comboBmk = combo.BenchmarkCombo(combo.file_path, 'combo_default_model.txt', 'keras',
prog='combo_baseline',
desc = 'Build neural network based models to predict tumor response to drug pairs.')
# Initialize parameters
gParameters = candle.finalize_parameters(comboBmk)
#combo.logger.info('Params: {}'.format(gParameters))
return gParameters
class Struct:
def __init__(self, **entries):
self.__dict__.update(entries)
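`Struct` turns the CANDLE parameter dict into attribute access so `run()` can write `args.batch_size` instead of `params['batch_size']`:

```python
class Struct:
    def __init__(self, **entries):
        self.__dict__.update(entries)

# keys of the dict become attributes of the instance
args = Struct(**{'batch_size': 32, 'cv': 1})
```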
def run(params):
args = Struct(**params)
set_seed(args.rng_seed)
ext = extension_from_parameters(args)
prefix = args.save + ext
logfile = args.logfile if args.logfile else prefix+'.log'
set_up_logger(logfile, args.verbose)
logger.info('Params: {}'.format(params))
loader = ComboDataLoader(seed=args.rng_seed,
val_split=args.validation_split,
cell_features=args.cell_features,
drug_features=args.drug_features,
response_url=args.response_url,
use_landmark_genes=args.use_landmark_genes,
preprocess_rnaseq=args.preprocess_rnaseq,
exclude_cells=args.exclude_cells,
exclude_drugs=args.exclude_drugs,
use_combo_score=args.use_combo_score,
cv_partition=args.cv_partition, cv=args.cv)
# test_loader(loader)
# test_generator(loader)
train_gen = ComboDataGenerator(loader, batch_size=args.batch_size).flow()
val_gen = ComboDataGenerator(loader, partition='val', batch_size=args.batch_size).flow()
train_steps = int(loader.n_train / args.batch_size)
val_steps = int(loader.n_val / args.batch_size)
model = build_model(loader, args, verbose=True)
model.summary()
# plot_model(model, to_file=prefix+'.model.png', show_shapes=True)
if args.cp:
model_json = model.to_json()
with open(prefix+'.model.json', 'w') as f:
print(model_json, file=f)
def warmup_scheduler(epoch):
lr = args.learning_rate or base_lr * args.batch_size/100
if epoch <= 5:
K.set_value(model.optimizer.lr, (base_lr * (5-epoch) + lr * epoch) / 5)
logger.debug('Epoch {}: lr={}'.format(epoch, K.get_value(model.optimizer.lr)))
return K.get_value(model.optimizer.lr)
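The scheduler above linearly interpolates from `base_lr` at epoch 0 to the target rate at epoch 5. The same arithmetic without the Keras backend (function name is illustrative):

```python
def linear_warmup(epoch, base_lr, target_lr, warmup_epochs=5):
    # same interpolation as warmup_scheduler above:
    # start at base_lr, reach target_lr after warmup_epochs, then hold
    if epoch <= warmup_epochs:
        return (base_lr * (warmup_epochs - epoch) + target_lr * epoch) / warmup_epochs
    return target_lr
```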
df_pred_list = []
cv_ext = ''
cv = args.cv if args.cv > 1 else 1
fold = 0
while fold < cv:
if args.cv > 1:
logger.info('Cross validation fold {}/{}:'.format(fold+1, cv))
cv_ext = '.cv{}'.format(fold+1)
model = build_model(loader, args)
optimizer = optimizers.deserialize({'class_name': args.optimizer, 'config': {}})
base_lr = args.base_lr or K.get_value(optimizer.lr)
if args.learning_rate:
K.set_value(optimizer.lr, args.learning_rate)
model.compile(loss=args.loss, optimizer=optimizer, metrics=[mae, r2])
# calculate trainable and non-trainable params
# params.update(compute_trainable_params(model))
# candle_monitor = CandleRemoteMonitor(params=params)
# timeout_monitor = TerminateOnTimeOut(params['timeout'])
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=0.00001)
warmup_lr = LearningRateScheduler(warmup_scheduler)
checkpointer = ModelCheckpoint(prefix+cv_ext+'.weights.h5', save_best_only=True, save_weights_only=True)
tensorboard = TensorBoard(log_dir="tb/tb{}{}".format(ext, cv_ext))
history_logger = LoggingCallback(logger.debug)
model_recorder = ModelRecorder()
callbacks = [history_logger, model_recorder]
# callbacks = [candle_monitor, timeout_monitor, history_logger, model_recorder]
if args.reduce_lr:
callbacks.append(reduce_lr)
if args.warmup_lr:
callbacks.append(warmup_lr)
if args.cp:
callbacks.append(checkpointer)
if args.tb:
callbacks.append(tensorboard)
if args.gen:
history = model.fit_generator(train_gen, train_steps,
epochs=args.epochs,
callbacks=callbacks,
validation_data=val_gen, validation_steps=val_steps)
else:
if args.cv > 1:
x_train_list, y_train, x_val_list, y_val, df_train, df_val = loader.load_data_cv(fold)
else:
x_train_list, y_train, x_val_list, y_val, df_train, df_val = loader.load_data()
y_shuf = np.random.permutation(y_val)
log_evaluation(evaluate_prediction(y_val, y_shuf),
description='Between random pairs in y_val:')
history = model.fit(x_train_list, y_train,
batch_size=args.batch_size,
shuffle=args.shuffle,
epochs=args.epochs,
callbacks=callbacks,
validation_data=(x_val_list, y_val))
if args.cp:
model.load_weights(prefix+cv_ext+'.weights.h5')
if not args.gen:
y_val_pred = model.predict(x_val_list, batch_size=args.batch_size).flatten()
scores = evaluate_prediction(y_val, y_val_pred)
if args.cv > 1 and scores[args.loss] > args.max_val_loss:
                logger.warning('Best val_loss {} is greater than {}; retrain the model...'.format(scores[args.loss], args.max_val_loss))
continue
else:
fold += 1
log_evaluation(scores)
            df_val = df_val.copy()  # work on a copy instead of a slice of df_response (avoids SettingWithCopyWarning)
df_val['GROWTH_PRED'] = y_val_pred
df_val['GROWTH_ERROR'] = y_val_pred - y_val
df_pred_list.append(df_val)
if args.cp:
# model.save(prefix+'.model.h5')
model_recorder.best_model.save(prefix+'.model.h5')
                # test reloaded model prediction
new_model = keras.models.load_model(prefix+'.model.h5')
new_model.load_weights(prefix+cv_ext+'.weights.h5')
new_pred = new_model.predict(x_val_list, batch_size=args.batch_size).flatten()
# print('y_val:', y_val[:10])
# print('old_pred:', y_val_pred[:10])
# print('new_pred:', new_pred[:10])
plot_history(prefix, history, 'loss')
plot_history(prefix, history, 'r2')
if K.backend() == 'tensorflow':
K.clear_session()
pred_fname = prefix + '.predicted.growth.tsv'
if args.use_combo_score:
pred_fname = prefix + '.predicted.score.tsv'
df_pred = pd.concat(df_pred_list)
df_pred.to_csv(pred_fname, sep='\t', index=False, float_format='%.4g')
logger.handlers = []
return history
def main():
params = initialize_parameters()
run(params)
if __name__ == '__main__':
main()
if K.backend() == 'tensorflow':
K.clear_session()
from __future__ import division, print_function
import argparse
import collections
import logging
import os
import random
import threading
import numpy as np
import pandas as pd
from itertools import cycle, islice
import keras
from keras import backend as K
from keras import optimizers
from keras.models import Model
from keras.layers import Input, Dense, Dropout
from keras.callbacks import Callback, ModelCheckpoint, ReduceLROnPlateau, LearningRateScheduler, TensorBoard
from keras.utils import get_custom_objects
from keras.utils.vis_utils import plot_model
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from sklearn.model_selection import KFold, StratifiedKFold, GroupKFold
from scipy.stats.stats import pearsonr
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
import combo
import candle
import NCI60
logger = logging.getLogger(__name__)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
def set_seed(seed):
os.environ['PYTHONHASHSEED'] = '0'
np.random.seed(seed)
random.seed(seed)
if K.backend() == 'tensorflow':
import tensorflow as tf
tf.set_random_seed(seed)
def verify_path(path):
folder = os.path.dirname(path)
if folder and not os.path.exists(folder):
os.makedirs(folder)
def set_up_logger(logfile, verbose):
verify_path(logfile)
fh = logging.FileHandler(logfile)
fh.setFormatter(logging.Formatter("[%(asctime)s %(process)d] %(message)s", datefmt="%Y-%m-%d %H:%M:%S"))
fh.setLevel(logging.DEBUG)
sh = logging.StreamHandler()
sh.setFormatter(logging.Formatter(''))
sh.setLevel(logging.DEBUG if verbose else logging.INFO)
logger.setLevel(logging.DEBUG)
logger.addHandler(fh)
logger.addHandler(sh)
def extension_from_parameters(args):
ext = ''
ext += '.A={}'.format(args.activation)
ext += '.B={}'.format(args.batch_size)
ext += '.E={}'.format(args.epochs)
ext += '.O={}'.format(args.optimizer)
ext += '.LR={}'.format(args.learning_rate)
ext += '.CF={}'.format(''.join([x[0] for x in sorted(args.cell_features)]))
ext += '.DF={}'.format(''.join([x[0] for x in sorted(args.drug_features)]))
if args.feature_subsample > 0:
ext += '.FS={}'.format(args.feature_subsample)
if args.dropout > 0:
ext += '.DR={}'.format(args.dropout)
if args.warmup_lr:
ext += '.wu_lr'
if args.reduce_lr:
ext += '.re_lr'
if args.residual:
ext += '.res'
if args.use_landmark_genes:
ext += '.L1000'
if args.gen:
ext += '.gen'
if args.use_combo_score:
ext += '.scr'
for i, n in enumerate(args.dense):
if n > 0:
ext += '.D{}={}'.format(i+1, n)
if args.dense_feature_layers != args.dense:
        for i, n in enumerate(args.dense_feature_layers):
if n > 0:
ext += '.FD{}={}'.format(i+1, n)
return ext
def discretize(y, bins=5):
percentiles = [100 / bins * (i + 1) for i in range(bins - 1)]
thresholds = [np.percentile(y, x) for x in percentiles]
classes = np.digitize(y, thresholds)
return classes
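`discretize` buckets the continuous growth target into quantile classes so `StratifiedKFold` can stratify a regression target. A standalone check (the function is re-stated from above so the snippet runs on its own):

```python
import numpy as np

def discretize(y, bins=5):
    # thresholds at the 20th/40th/60th/80th percentiles for bins=5
    percentiles = [100 / bins * (i + 1) for i in range(bins - 1)]
    thresholds = [np.percentile(y, x) for x in percentiles]
    return np.digitize(y, thresholds)

y = np.arange(100, dtype=float)
classes = discretize(y, bins=5)
counts = np.bincount(classes)  # each quantile class gets an equal share of samples
```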
class ComboDataLoader(object):
def __init__(self, seed, val_split=0.2, shuffle=True,
cell_features=['expression'], drug_features=['descriptors'],
response_url=None, use_landmark_genes=False, use_combo_score=False,
preprocess_rnaseq=None, exclude_cells=[], exclude_drugs=[],
feature_subsample=None, scaling='std', scramble=False,
cv_partition='overlapping', cv=0):
self.cv_partition = cv_partition
np.random.seed(seed)
df = NCI60.load_combo_dose_response(response_url=response_url, use_combo_score=use_combo_score, fraction=True, exclude_cells=exclude_cells, exclude_drugs=exclude_drugs)
logger.info('Loaded {} unique (CL, D1, D2) response sets.'.format(df.shape[0]))
if 'all' in cell_features:
self.cell_features = ['expression', 'mirna', 'proteome']
else:
self.cell_features = cell_features
if 'all' in drug_features:
self.drug_features = ['descriptors', 'latent']
else:
self.drug_features = drug_features
for fea in self.cell_features:
if fea == 'expression' or fea == 'rnaseq':
self.df_cell_expr = NCI60.load_cell_expression_rnaseq(ncols=feature_subsample, scaling=scaling, use_landmark_genes=use_landmark_genes, preprocess_rnaseq=preprocess_rnaseq)
df = df.merge(self.df_cell_expr[['CELLNAME']], on='CELLNAME')
elif fea == 'expression_u133p2':
self.df_cell_expr = NCI60.load_cell_expression_u133p2(ncols=feature_subsample, scaling=scaling, use_landmark_genes=use_landmark_genes)
df = df.merge(self.df_cell_expr[['CELLNAME']], on='CELLNAME')
elif fea == 'expression_5platform':
self.df_cell_expr = NCI60.load_cell_expression_5platform(ncols=feature_subsample, scaling=scaling, use_landmark_genes=use_landmark_genes)
df = df.merge(self.df_cell_expr[['CELLNAME']], on='CELLNAME')
elif fea == 'mirna':
self.df_cell_mirna = NCI60.load_cell_mirna(ncols=feature_subsample, scaling=scaling)
df = df.merge(self.df_cell_mirna[['CELLNAME']], on='CELLNAME')
elif fea == 'proteome':
self.df_cell_prot = NCI60.load_cell_proteome(ncols=feature_subsample, scaling=scaling)
df = df.merge(self.df_cell_prot[['CELLNAME']], on='CELLNAME')
elif fea == 'categorical':
df_cell_ids = df[['CELLNAME']].drop_duplicates()
cell_ids = df_cell_ids['CELLNAME'].map(lambda x: x.replace(':', '.'))
df_cell_cat = pd.get_dummies(cell_ids)
df_cell_cat.index = df_cell_ids['CELLNAME']
self.df_cell_cat = df_cell_cat.reset_index()
for fea in self.drug_features:
if fea == 'descriptors':
self.df_drug_desc = NCI60.load_drug_descriptors(ncols=feature_subsample, scaling=scaling)
df = df[df['NSC1'].isin(self.df_drug_desc['NSC']) & df['NSC2'].isin(self.df_drug_desc['NSC'])]
elif fea == 'latent':
self.df_drug_auen = NCI60.load_drug_autoencoded_AG(ncols=feature_subsample, scaling=scaling)
df = df[df['NSC1'].isin(self.df_drug_auen['NSC']) & df['NSC2'].isin(self.df_drug_auen['NSC'])]
elif fea == 'categorical':
df_drug_ids = df[['NSC1']].drop_duplicates()
df_drug_ids.columns = ['NSC']
drug_ids = df_drug_ids['NSC']
df_drug_cat = pd.get_dummies(drug_ids)
df_drug_cat.index = df_drug_ids['NSC']
self.df_drug_cat = df_drug_cat.reset_index()
elif fea == 'noise':
ids1 = df[['NSC1']].drop_duplicates().rename(columns={'NSC1':'NSC'})
ids2 = df[['NSC2']].drop_duplicates().rename(columns={'NSC2':'NSC'})
df_drug_ids = pd.concat([ids1, ids2]).drop_duplicates()
noise = np.random.normal(size=(df_drug_ids.shape[0], 500))
df_rand = pd.DataFrame(noise, index=df_drug_ids['NSC'],
columns=['RAND-{:03d}'.format(x) for x in range(500)])
self.df_drug_rand = df_rand.reset_index()
logger.info('Filtered down to {} rows with matching information.'.format(df.shape[0]))
ids1 = df[['NSC1']].drop_duplicates().rename(columns={'NSC1':'NSC'})
ids2 = df[['NSC2']].drop_duplicates().rename(columns={'NSC2':'NSC'})
df_drug_ids = pd.concat([ids1, ids2]).drop_duplicates().reset_index(drop=True)
n_drugs = df_drug_ids.shape[0]
n_val_drugs = int(n_drugs * val_split)
n_train_drugs = n_drugs - n_val_drugs
logger.info('Unique cell lines: {}'.format(df['CELLNAME'].nunique()))
logger.info('Unique drugs: {}'.format(n_drugs))
if shuffle:
df = df.sample(frac=1.0, random_state=seed).reset_index(drop=True)
df_drug_ids = df_drug_ids.sample(frac=1.0, random_state=seed).reset_index(drop=True)
self.df_response = df
self.df_drug_ids = df_drug_ids
self.train_drug_ids = df_drug_ids['NSC'][:n_train_drugs]
        self.val_drug_ids = df_drug_ids['NSC'][n_train_drugs:]  # avoid [-0:] returning the full series when n_val_drugs == 0
if scramble:
growth = df[['GROWTH']]
random_growth = growth.iloc[np.random.permutation(np.arange(growth.shape[0]))].reset_index()
self.df_response[['GROWTH']] = random_growth['GROWTH']
            logger.warning('Randomly shuffled dose response growth values.')
logger.info('Distribution of dose response:')
logger.info(self.df_response[['GROWTH']].describe())
self.total = df.shape[0]
self.n_val = int(self.total * val_split)
self.n_train = self.total - self.n_val
logger.info('Rows in train: {}, val: {}'.format(self.n_train, self.n_val))
self.cell_df_dict = {'expression': 'df_cell_expr',
'expression_5platform': 'df_cell_expr',
'expression_u133p2': 'df_cell_expr',
'rnaseq': 'df_cell_expr',
'mirna': 'df_cell_mirna',
'proteome': 'df_cell_prot',
'categorical': 'df_cell_cat'}
self.drug_df_dict = {'descriptors': 'df_drug_desc',
'latent': 'df_drug_auen',
'categorical': 'df_drug_cat',
'noise': 'df_drug_rand'}
self.input_features = collections.OrderedDict()
self.feature_shapes = {}
for fea in self.cell_features:
feature_type = 'cell.' + fea
feature_name = 'cell.' + fea
df_cell = getattr(self, self.cell_df_dict[fea])
self.input_features[feature_name] = feature_type
self.feature_shapes[feature_type] = (df_cell.shape[1] - 1,)
for drug in ['drug1', 'drug2']:
for fea in self.drug_features:
feature_type = 'drug.' + fea
feature_name = drug + '.' + fea
df_drug = getattr(self, self.drug_df_dict[fea])
self.input_features[feature_name] = feature_type
self.feature_shapes[feature_type] = (df_drug.shape[1] - 1,)
self.feature_shapes['dose'] = (1,)
for dose in ['dose1', 'dose2']:
self.input_features[dose] = 'dose'
logger.info('Input features shapes:')
for k, v in self.input_features.items():
logger.info(' {}: {}'.format(k, self.feature_shapes[v]))
self.input_dim = sum([np.prod(self.feature_shapes[x]) for x in self.input_features.values()])
logger.info('Total input dimensions: {}'.format(self.input_dim))
if cv > 1:
if cv_partition == 'disjoint':
pass
elif cv_partition == 'disjoint_cells':
y = self.df_response['GROWTH'].values
groups = self.df_response['CELLNAME'].values
gkf = GroupKFold(n_splits=cv)
splits = gkf.split(y, groups=groups)
self.cv_train_indexes = []
self.cv_val_indexes = []
for index, (train_index, val_index) in enumerate(splits):
print(index, train_index)
self.cv_train_indexes.append(train_index)
self.cv_val_indexes.append(val_index)
else:
y = self.df_response['GROWTH'].values
                skf = StratifiedKFold(n_splits=cv, shuffle=True, random_state=seed)  # random_state only takes effect with shuffle=True
splits = skf.split(y, discretize(y, bins=cv))
self.cv_train_indexes = []
self.cv_val_indexes = []
for index, (train_index, val_index) in enumerate(splits):
print(index, train_index)
self.cv_train_indexes.append(train_index)
self.cv_val_indexes.append(val_index)
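For the `disjoint_cells` partition, `GroupKFold` guarantees that no cell line contributes samples to both sides of a split. A small check with made-up cell names:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# toy labels and group ids; the real code groups by df_response['CELLNAME']
y = np.arange(8)
groups = np.array(['c1', 'c1', 'c2', 'c2', 'c3', 'c3', 'c4', 'c4'])
splits = list(GroupKFold(n_splits=2).split(y, groups=groups))
```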
def load_data_all(self, switch_drugs=False):
df_all = self.df_response
y_all = df_all['GROWTH'].values
x_all_list = []
for fea in self.cell_features:
df_cell = getattr(self, self.cell_df_dict[fea])
df_x_all = pd.merge(df_all[['CELLNAME']], df_cell, on='CELLNAME', how='left')
x_all_list.append(df_x_all.drop(['CELLNAME'], axis=1).values)
drugs = ['NSC1', 'NSC2']
doses = ['pCONC1', 'pCONC2']
if switch_drugs:
drugs = ['NSC2', 'NSC1']
doses = ['pCONC2', 'pCONC1']
for drug in drugs:
for fea in self.drug_features:
df_drug = getattr(self, self.drug_df_dict[fea])
df_x_all = pd.merge(df_all[[drug]], df_drug, left_on=drug, right_on='NSC', how='left')
x_all_list.append(df_x_all.drop([drug, 'NSC'], axis=1).values)
for dose in doses:
x_all_list.append(df_all[dose].values)
return x_all_list, y_all, df_all
def load_data_by_index(self, train_index, val_index):
x_all_list, y_all, df_all = self.load_data_all()
x_train_list = [x[train_index] for x in x_all_list]
x_val_list = [x[val_index] for x in x_all_list]
y_train = y_all[train_index]
y_val = y_all[val_index]
df_train = df_all.iloc[train_index, :]
df_val = df_all.iloc[val_index, :]
if self.cv_partition == 'disjoint':
logger.info('Training drugs: {}'.format(set(df_train['NSC1'])))
logger.info('Validation drugs: {}'.format(set(df_val['NSC1'])))
elif self.cv_partition == 'disjoint_cells':
logger.info('Training cells: {}'.format(set(df_train['CELLNAME'])))
logger.info('Validation cells: {}'.format(set(df_val['CELLNAME'])))
return x_train_list, y_train, x_val_list, y_val, df_train, df_val
def load_data_cv(self, fold):
train_index = self.cv_train_indexes[fold]
val_index = self.cv_val_indexes[fold]
return self.load_data_by_index(train_index, val_index)
def load_data(self):
if self.cv_partition == 'disjoint':
train_index = self.df_response[(self.df_response['NSC1'].isin(self.train_drug_ids)) & (self.df_response['NSC2'].isin(self.train_drug_ids))].index
val_index = self.df_response[(self.df_response['NSC1'].isin(self.val_drug_ids)) & (self.df_response['NSC2'].isin(self.val_drug_ids))].index
else:
train_index = range(self.n_train)
val_index = range(self.n_train, self.total)
return self.load_data_by_index(train_index, val_index)
def load_data_old(self):
df_train = self.df_response.iloc[:self.n_train, :]
df_val = self.df_response.iloc[self.n_train:, :]
y_train = df_train['GROWTH'].values
y_val = df_val['GROWTH'].values
x_train_list = []
x_val_list = []
for fea in self.cell_features:
df_cell = getattr(self, self.cell_df_dict[fea])
df_x_train = pd.merge(df_train[['CELLNAME']], df_cell, on='CELLNAME', how='left')
df_x_val = pd.merge(df_val[['CELLNAME']], df_cell, on='CELLNAME', how='left')
x_train_list.append(df_x_train.drop(['CELLNAME'], axis=1).values)
x_val_list.append(df_x_val.drop(['CELLNAME'], axis=1).values)
for drug in ['NSC1', 'NSC2']:
for fea in self.drug_features:
df_drug = getattr(self, self.drug_df_dict[fea])
df_x_train = pd.merge(df_train[[drug]], df_drug, left_on=drug, right_on='NSC', how='left')
df_x_val = pd.merge(df_val[[drug]], df_drug, left_on=drug, right_on='NSC', how='left')
x_train_list.append(df_x_train.drop([drug, 'NSC'], axis=1).values)
x_val_list.append(df_x_val.drop([drug, 'NSC'], axis=1).values)
return x_train_list, y_train, x_val_list, y_val, df_train, df_val
class ComboDataGenerator(object):
def __init__(self, data, partition='train', batch_size=32):
self.lock = threading.Lock()
self.data = data
self.partition = partition
self.batch_size = batch_size
if partition == 'train':
self.cycle = cycle(range(data.n_train))
self.num_data = data.n_train
elif partition == 'val':
self.cycle = cycle(range(data.total)[-data.n_val:])
self.num_data = data.n_val
else:
raise Exception('Data partition "{}" not recognized.'.format(partition))
def flow(self):
while 1:
self.lock.acquire()
indices = list(islice(self.cycle, self.batch_size))
self.lock.release()
df = self.data.df_response.iloc[indices, :]
y = df['GROWTH'].values
x_list = []
for fea in self.data.cell_features:
df_cell = getattr(self.data, self.data.cell_df_dict[fea])
df_x = pd.merge(df[['CELLNAME']], df_cell, on='CELLNAME', how='left')
x_list.append(df_x.drop(['CELLNAME'], axis=1).values)
for drug in ['NSC1', 'NSC2']:
for fea in self.data.drug_features:
df_drug = getattr(self.data, self.data.drug_df_dict[fea])
df_x = pd.merge(df[[drug]], df_drug, left_on=drug, right_on='NSC', how='left')
x_list.append(df_x.drop([drug, 'NSC'], axis=1).values)
yield x_list, y
def test_generator(loader):
gen = ComboDataGenerator(loader).flow()
x_list, y = next(gen)
for x in x_list:
print(x.shape)
print(y.shape)
def test_loader(loader):
x_train_list, y_train, x_val_list, y_val = loader.load_data()
print('x_train shapes:')
for x in x_train_list:
print(x.shape)
print('y_train shape:', y_train.shape)
print('x_val shapes:')
for x in x_val_list:
print(x.shape)
print('y_val shape:', y_val.shape)
def r2(y_true, y_pred):
SS_res = K.sum(K.square(y_true - y_pred))
SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
return (1 - SS_res/(SS_tot + K.epsilon()))
def mae(y_true, y_pred):
return keras.metrics.mean_absolute_error(y_true, y_pred)
def evaluate_prediction(y_true, y_pred):
mse = mean_squared_error(y_true, y_pred)
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
corr, _ = pearsonr(y_true, y_pred)
return {'mse': mse, 'mae': mae, 'r2': r2, 'corr': corr}
def log_evaluation(metric_outputs, description='Comparing y_true and y_pred:'):
logger.info(description)
for metric, value in metric_outputs.items():
logger.info(' {}: {:.4f}'.format(metric, value))
def plot_history(out, history, metric='loss', title=None):
title = title or 'model {}'.format(metric)
val_metric = 'val_{}'.format(metric)
plt.figure(figsize=(8, 6))
plt.plot(history.history[metric], marker='o')
plt.plot(history.history[val_metric], marker='d')
plt.title(title)
plt.ylabel(metric)
plt.xlabel('epoch')
plt.legend(['train_{}'.format(metric), 'val_{}'.format(metric)], loc='upper center')
png = '{}.plot.{}.png'.format(out, metric)
plt.savefig(png, bbox_inches='tight')
class LoggingCallback(Callback):
def __init__(self, print_fcn=print):
Callback.__init__(self)
self.print_fcn = print_fcn
def on_epoch_end(self, epoch, logs={}):
msg = "[Epoch: %i] %s" % (epoch, ", ".join("%s: %f" % (k, v) for k, v in sorted(logs.items())))
self.print_fcn(msg)
class PermanentDropout(Dropout):
def __init__(self, rate, **kwargs):
super(PermanentDropout, self).__init__(rate, **kwargs)
self.uses_learning_phase = False
def call(self, x, mask=None):
if 0. < self.rate < 1.:
noise_shape = self._get_noise_shape(x)
x = K.dropout(x, self.rate, noise_shape)
return x
class ModelRecorder(Callback):
def __init__(self, save_all_models=False):
Callback.__init__(self)
self.save_all_models = save_all_models
get_custom_objects()['PermanentDropout'] = PermanentDropout
def on_train_begin(self, logs={}):
self.val_losses = []
self.best_val_loss = np.Inf
self.best_model = None
def on_epoch_end(self, epoch, logs={}):
val_loss = logs.get('val_loss')
self.val_losses.append(val_loss)
if val_loss < self.best_val_loss:
self.best_model = keras.models.clone_model(self.model)
self.best_val_loss = val_loss
def build_feature_model(input_shape, name='', dense_layers=[1000, 1000],
activation='relu', residual=False,
dropout_rate=0, permanent_dropout=True):
x_input = Input(shape=input_shape)
h = x_input
for i, layer in enumerate(dense_layers):
x = h
h = Dense(layer, activation=activation)(h)
if dropout_rate > 0:
if permanent_dropout:
h = PermanentDropout(dropout_rate)(h)
else:
h = Dropout(dropout_rate)(h)
if residual:
try:
h = keras.layers.add([h, x])
except ValueError:
pass
model = Model(x_input, h, name=name)
return model
def build_model(loader, args, verbose=False):
input_models = {}
dropout_rate = args.dropout
permanent_dropout = True
for fea_type, shape in loader.feature_shapes.items():
box = build_feature_model(input_shape=shape, name=fea_type,
dense_layers=args.dense_feature_layers,
dropout_rate=dropout_rate, permanent_dropout=permanent_dropout)
if verbose:
box.summary()
input_models[fea_type] = box
inputs = []
encoded_inputs = []
for fea_name, fea_type in loader.input_features.items():
shape = loader.feature_shapes[fea_type]
fea_input = Input(shape, name='input.'+fea_name)
inputs.append(fea_input)
input_model = input_models[fea_type]
encoded = input_model(fea_input)
encoded_inputs.append(encoded)
merged = keras.layers.concatenate(encoded_inputs)
h = merged
for i, layer in enumerate(args.dense):
x = h
h = Dense(layer, activation=args.activation)(h)
if dropout_rate > 0:
if permanent_dropout:
h = PermanentDropout(dropout_rate)(h)
else:
h = Dropout(dropout_rate)(h)
if args.residual:
try:
h = keras.layers.add([h, x])
except ValueError:
pass
output = Dense(1)(h)
return Model(inputs, output)
def get_combo_parser():
description = 'Build neural network based models to predict tumor response to drug pairs.'
parser = argparse.ArgumentParser(prog='combo_baseline', formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description=description)
return combo.common_parser(parser)
def initialize_parameters():
    # Build the CANDLE benchmark object that supplies default hyperparameters.
    # The BenchmarkCombo arguments follow the usual CANDLE pattern; the default
    # model file name is assumed here.
    comboBmk = combo.BenchmarkCombo(combo.file_path, 'combo_default_model.txt', 'keras',
                                    prog='combo_baseline',
                                    desc='Build neural network based models to predict tumor response to drug pairs.')
    gParameters = candle.finalize_parameters(comboBmk)
    return gParameters
class Struct:
def __init__(self, **entries):
self.__dict__.update(entries)
def run(params):
args = Struct(**params)
set_seed(args.rng_seed)
ext = extension_from_parameters(args)
prefix = args.save + ext
logfile = args.logfile if args.logfile else prefix+'.log'
set_up_logger(logfile, args.verbose)
logger.info('Params: {}'.format(params))
loader = ComboDataLoader(seed=args.rng_seed,
val_split=args.validation_split,
cell_features=args.cell_features,
drug_features=args.drug_features,
response_url=args.response_url,
use_landmark_genes=args.use_landmark_genes,
preprocess_rnaseq=args.preprocess_rnaseq,
exclude_cells=args.exclude_cells,
exclude_drugs=args.exclude_drugs,
use_combo_score=args.use_combo_score,
cv_partition=args.cv_partition, cv=args.cv)
train_gen = ComboDataGenerator(loader, batch_size=args.batch_size).flow()
val_gen = ComboDataGenerator(loader, partition='val', batch_size=args.batch_size).flow()
train_steps = int(loader.n_train / args.batch_size)
val_steps = int(loader.n_val / args.batch_size)
model = build_model(loader, args, verbose=True)
model.summary()
if args.cp:
model_json = model.to_json()
with open(prefix+'.model.json', 'w') as f:
print(model_json, file=f)
def warmup_scheduler(epoch):
lr = args.learning_rate or base_lr * args.batch_size/100
if epoch <= 5:
K.set_value(model.optimizer.lr, (base_lr * (5-epoch) + lr * epoch) / 5)
logger.debug('Epoch {}: lr={}'.format(epoch, K.get_value(model.optimizer.lr)))
return K.get_value(model.optimizer.lr)
df_pred_list = []
cv_ext = ''
cv = args.cv if args.cv > 1 else 1
fold = 0
while fold < cv:
if args.cv > 1:
logger.info('Cross validation fold {}/{}:'.format(fold+1, cv))
cv_ext = '.cv{}'.format(fold+1)
model = build_model(loader, args)
optimizer = optimizers.deserialize({'class_name': args.optimizer, 'config': {}})
base_lr = args.base_lr or K.get_value(optimizer.lr)
if args.learning_rate:
K.set_value(optimizer.lr, args.learning_rate)
model.compile(loss=args.loss, optimizer=optimizer, metrics=[mae, r2])
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=0.00001)
warmup_lr = LearningRateScheduler(warmup_scheduler)
checkpointer = ModelCheckpoint(prefix+cv_ext+'.weights.h5', save_best_only=True, save_weights_only=True)
tensorboard = TensorBoard(log_dir="tb/tb{}{}".format(ext, cv_ext))
history_logger = LoggingCallback(logger.debug)
model_recorder = ModelRecorder()
callbacks = [history_logger, model_recorder]
if args.reduce_lr:
callbacks.append(reduce_lr)
if args.warmup_lr:
callbacks.append(warmup_lr)
if args.cp:
callbacks.append(checkpointer)
if args.tb:
callbacks.append(tensorboard)
if args.gen:
history = model.fit_generator(train_gen, train_steps,
epochs=args.epochs,
callbacks=callbacks,
validation_data=val_gen, validation_steps=val_steps)
else:
if args.cv > 1:
x_train_list, y_train, x_val_list, y_val, df_train, df_val = loader.load_data_cv(fold)
else:
x_train_list, y_train, x_val_list, y_val, df_train, df_val = loader.load_data()
y_shuf = np.random.permutation(y_val)
log_evaluation(evaluate_prediction(y_val, y_shuf),
description='Between random pairs in y_val:')
history = model.fit(x_train_list, y_train,
batch_size=args.batch_size,
shuffle=args.shuffle,
epochs=args.epochs,
callbacks=callbacks,
validation_data=(x_val_list, y_val))
if args.cp:
model.load_weights(prefix+cv_ext+'.weights.h5')
if not args.gen:
y_val_pred = model.predict(x_val_list, batch_size=args.batch_size).flatten()
scores = evaluate_prediction(y_val, y_val_pred)
if args.cv > 1 and scores[args.loss] > args.max_val_loss:
logger.warn('Best val_loss {} is greater than {}; retrain the model...'.format(scores[args.loss], args.max_val_loss))
continue
else:
fold += 1
log_evaluation(scores)
df_val.is_copy = False
df_val['GROWTH_PRED'] = y_val_pred
df_val['GROWTH_ERROR'] = y_val_pred - y_val
df_pred_list.append(df_val)
if args.cp:
model_recorder.best_model.save(prefix+'.model.h5')
new_model = keras.models.load_model(prefix+'.model.h5')
new_model.load_weights(prefix+cv_ext+'.weights.h5')
new_pred = new_model.predict(x_val_list, batch_size=args.batch_size).flatten()
plot_history(prefix, history, 'loss')
plot_history(prefix, history, 'r2')
if K.backend() == 'tensorflow':
K.clear_session()
pred_fname = prefix + '.predicted.growth.tsv'
if args.use_combo_score:
pred_fname = prefix + '.predicted.score.tsv'
df_pred = pd.concat(df_pred_list)
df_pred.to_csv(pred_fname, sep='\t', index=False, float_format='%.4g')
logger.handlers = []
return history
def main():
params = initialize_parameters()
run(params)
if __name__ == '__main__':
main()
if K.backend() == 'tensorflow':
K.clear_session()
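The `warmup_scheduler` closure above ramps the learning rate linearly from the optimizer's `base_lr` to the target `lr` over the first five epochs. A minimal, framework-free sketch of that interpolation (the function name and rates below are illustrative, not part of the original script):

```python
def warmup_lr(epoch, base_lr, target_lr, warmup_epochs=5):
    """Linearly interpolate from base_lr at epoch 0 to target_lr at warmup_epochs."""
    if epoch >= warmup_epochs:
        return target_lr
    return (base_lr * (warmup_epochs - epoch) + target_lr * epoch) / warmup_epochs

# Ramp from 1e-3 to 1e-2 over five epochs, then hold
schedule = [warmup_lr(e, 1e-3, 1e-2) for e in range(7)]
print(schedule[0])   # ~0.001: starts at the optimizer's base rate
print(schedule[6])   # 0.01: holds the target rate after warmup
```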
# File: nets/mobilenet/mobilenet_v2.py | repo: Popcorn-sugar/Deep_v2 @ 23c25f74e36016658558e690890499bc7fd2aeb2 | license: MIT
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Implementation of Mobilenet V2.
Architecture: https://arxiv.org/abs/1801.04381
The base model gives 72.2% accuracy on ImageNet, with 300M multiply-adds
and 3.4M parameters.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import functools
import tensorflow as tf
from nets.mobilenet import conv_blocks as ops
from nets.mobilenet import mobilenet as lib
import tf_slim as slim
op = lib.op
expand_input = ops.expand_input_by_factor
# pyformat: disable
# Architecture: https://arxiv.org/abs/1801.04381
V2_DEF = dict(
defaults={
# Note: these parameters of batch norm affect the architecture
# that's why they are here and not in training_scope.
(slim.batch_norm,): {'center': True, 'scale': True},
(slim.conv2d, slim.fully_connected, slim.separable_conv2d): {
'normalizer_fn': slim.batch_norm, 'activation_fn': tf.nn.relu6
},
(ops.expanded_conv,): {
'expansion_size': expand_input(6),
'split_expansion': 1,
'normalizer_fn': slim.batch_norm,
'residual': True
},
(slim.conv2d, slim.separable_conv2d): {'padding': 'SAME'}
},
spec=[
op(slim.conv2d, stride=2, num_outputs=32, kernel_size=[3, 3]),
op(ops.expanded_conv,
expansion_size=expand_input(1, divisible_by=1),
num_outputs=16),
op(ops.expanded_conv, stride=2, num_outputs=24),
op(ops.expanded_conv, stride=1, num_outputs=24),
op(ops.expanded_conv, stride=2, num_outputs=32),
op(ops.expanded_conv, stride=1, num_outputs=32),
op(ops.expanded_conv, stride=1, num_outputs=32),
op(ops.expanded_conv, stride=2, num_outputs=64),
op(ops.expanded_conv, stride=1, num_outputs=64),
op(ops.expanded_conv, stride=1, num_outputs=64),
op(ops.expanded_conv, stride=1, num_outputs=64),
op(ops.expanded_conv, stride=1, num_outputs=96),
op(ops.expanded_conv, stride=1, num_outputs=96),
op(ops.expanded_conv, stride=1, num_outputs=96),
op(ops.expanded_conv, stride=2, num_outputs=160),
op(ops.expanded_conv, stride=1, num_outputs=160),
op(ops.expanded_conv, stride=1, num_outputs=160),
op(ops.expanded_conv, stride=1, num_outputs=320),
op(slim.conv2d, stride=1, kernel_size=[1, 1], num_outputs=1280)
],
)
# pyformat: enable
@slim.add_arg_scope
def mobilenet(input_tensor,
num_classes=1001,
depth_multiplier=1.0,
scope='MobilenetV2',
conv_defs=None,
finegrain_classification_mode=False,
min_depth=None,
divisible_by=None,
activation_fn=None,
**kwargs):
"""Creates mobilenet V2 network.
Inference mode is created by default. To create training use training_scope
below.
with tf.contrib.slim.arg_scope(mobilenet_v2.training_scope()):
logits, endpoints = mobilenet_v2.mobilenet(input_tensor)
Args:
input_tensor: The input tensor
num_classes: number of classes
depth_multiplier: The multiplier applied to scale number of
channels in each layer. Note: this is called depth multiplier in the
paper but the name is kept for consistency with slim's model builder.
scope: Scope of the operator
conv_defs: Allows to override default conv def.
finegrain_classification_mode: When set to True, the model
will keep the last layer large even for small multipliers. Following
https://arxiv.org/abs/1801.04381
suggests that it improves performance for ImageNet-type of problems.
*Note* ignored if final_endpoint makes the builder exit earlier.
min_depth: If provided, will ensure that all layers will have that
many channels after application of depth multiplier.
divisible_by: If provided will ensure that all layers # channels
will be divisible by this number.
activation_fn: Activation function to use, defaults to tf.nn.relu6 if not
specified.
**kwargs: passed directly to mobilenet.mobilenet:
prediction_fn- what prediction function to use.
      reuse- whether to reuse variables (if reuse is set to true, scope
      must be given).
Returns:
logits/endpoints pair
Raises:
ValueError: On invalid arguments
"""
if conv_defs is None:
conv_defs = V2_DEF
if 'multiplier' in kwargs:
raise ValueError('mobilenetv2 doesn\'t support generic '
'multiplier parameter use "depth_multiplier" instead.')
if finegrain_classification_mode:
conv_defs = copy.deepcopy(conv_defs)
if depth_multiplier < 1:
conv_defs['spec'][-1].params['num_outputs'] /= depth_multiplier
if activation_fn:
conv_defs = copy.deepcopy(conv_defs)
defaults = conv_defs['defaults']
conv_defaults = (
defaults[(slim.conv2d, slim.fully_connected, slim.separable_conv2d)])
conv_defaults['activation_fn'] = activation_fn
depth_args = {}
# NB: do not set depth_args unless they are provided to avoid overriding
# whatever default depth_multiplier might have thanks to arg_scope.
if min_depth is not None:
depth_args['min_depth'] = min_depth
if divisible_by is not None:
depth_args['divisible_by'] = divisible_by
with slim.arg_scope((lib.depth_multiplier,), **depth_args):
return lib.mobilenet(
input_tensor,
num_classes=num_classes,
conv_defs=conv_defs,
scope=scope,
multiplier=depth_multiplier,
**kwargs)
mobilenet.default_image_size = 224
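`min_depth` and `divisible_by` are applied inside `lib.depth_multiplier`. The rounding rule commonly used in the public MobileNet code (sketched here from memory, so treat it as an approximation of what `lib` does) keeps each channel count a multiple of the divisor while never shrinking a layer by more than about 10%:

```python
def make_divisible(v, divisor=8, min_value=None):
    """Round a (possibly fractional) channel count to a multiple of divisor."""
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # Do not let rounding shrink the layer by more than ~10%
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

print(make_divisible(32 * 0.35))   # 16 -- first conv of the 0.35x model
print(make_divisible(1280 * 0.5))  # 640 -- last conv when finegrain mode is off
```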
def wrapped_partial(func, *args, **kwargs):
partial_func = functools.partial(func, *args, **kwargs)
functools.update_wrapper(partial_func, func)
return partial_func
# Wrappers for mobilenet v2 with depth multipliers. Note that
# 'finegrain_classification_mode' is set to True, which means the embedding
# layer will not be shrunk when given a depth multiplier < 1.0.
mobilenet_v2_140 = wrapped_partial(mobilenet, depth_multiplier=1.4)
mobilenet_v2_050 = wrapped_partial(mobilenet, depth_multiplier=0.50,
finegrain_classification_mode=True)
mobilenet_v2_035 = wrapped_partial(mobilenet, depth_multiplier=0.35,
finegrain_classification_mode=True)
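`wrapped_partial` exists because a bare `functools.partial` carries no `__name__` or docstring, which breaks code (logging, scope naming) that expects function metadata; `functools.update_wrapper` copies it over. A self-contained illustration (the `scale` function is made up for the demo):

```python
import functools

def wrapped_partial(func, *args, **kwargs):
    partial_func = functools.partial(func, *args, **kwargs)
    functools.update_wrapper(partial_func, func)  # copy __name__, __doc__, etc.
    return partial_func

def scale(x, factor=1.0):
    """Scale x by factor."""
    return x * factor

half = wrapped_partial(scale, factor=0.5)
print(half(10))       # 5.0
print(half.__name__)  # scale -- a bare functools.partial has no __name__
```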
@slim.add_arg_scope
def mobilenet_base(input_tensor, depth_multiplier=1.0, **kwargs):
"""Creates base of the mobilenet (no pooling and no logits) ."""
return mobilenet(input_tensor,
depth_multiplier=depth_multiplier,
base_only=True, **kwargs)
def training_scope(**kwargs):
"""Defines MobilenetV2 training scope.
Usage:
with tf.contrib.slim.arg_scope(mobilenet_v2.training_scope()):
logits, endpoints = mobilenet_v2.mobilenet(input_tensor)
Args:
**kwargs: Passed to mobilenet.training_scope. The following parameters
are supported:
weight_decay- The weight decay to use for regularizing the model.
stddev- Standard deviation for initialization, if negative uses xavier.
dropout_keep_prob- dropout keep probability
bn_decay- decay for the batch norm moving averages.
Returns:
An `arg_scope` to use for the mobilenet v2 model.
"""
return lib.training_scope(**kwargs)
__all__ = ['training_scope', 'mobilenet_base', 'mobilenet', 'V2_DEF']
| 37.225806 | 80 | 0.694974 |
# File: mt/preprocess/1_process_raw.py | repo: salvacarrion/nmt-continual-learning @ 302147ac9c270f3341a68a72c803c457f05ff37b | license: MIT | stars: 1
import os
import pandas as pd
from pathlib import Path
import numpy as np
from mt import RAW_PATH
from mt import utils
SUFFLE = True
CONSTRAINED = True
TR_DATA_PATH = "/home/salva/Documents/Programming/Datasets/scielo/originals/scielo-gma/scielo-gma"
TR_RAW_FILES = ["es-en-gma-biological.csv", "es-en-gma-health.csv", "fr-en-gma-health.csv",
"pt-en-gma-biological.csv", "pt-en-gma-health.csv"]
TS_DATA_PATH = "/home/salva/Documents/Programming/Datasets/scielo/originals/testset-gma/testset_gma"
TS_RAW_FILES = ["test-gma-en2es-biological.csv", "test-gma-en2es-health.csv", "test-gma-en2fr-health.csv",
"test-gma-en2pt-biological.csv", "test-gma-en2pt-health.csv", "test-gma-es2en-biological.csv",
"test-gma-es2en-health.csv", "test-gma-fr2en-health.csv", "test-gma-pt2en-biological.csv",
"test-gma-pt2en-health.csv"]
# Create the path if it doesn't exist
path = Path(RAW_PATH)
path.mkdir(parents=True, exist_ok=True)
# Process splits train/test files
for split in ["train", "test"]:
# Select split to process
if split == "train":
print("Processing training files...")
DATA_PATH = TR_DATA_PATH
RAW_FILES = TR_RAW_FILES
istrain = True
elif split == "test":
print("Processing test files...")
DATA_PATH = TS_DATA_PATH
RAW_FILES = TS_RAW_FILES
istrain = False
else:
raise ValueError("Invalid split name")
# Process raw files
for fname in RAW_FILES:
# Read file
print(f"Reading file... ({fname})")
filename = os.path.join(DATA_PATH, fname)
df = pd.read_csv(filename)
# Limit dataset
domain = utils.get_domain(fname)
SRC_LANG, TRG_LANG = utils.get_langs(fname, istrain=istrain)
# Clean dataset (basic)
total_old = len(df)
df = utils.preprocess_dataset(df, src_col=SRC_LANG, trg_col=TRG_LANG)
# Shuffle dataset
if SUFFLE:
np.random.seed(123)
np.random.shuffle(df.values)
if CONSTRAINED and istrain:
if domain == "health" and "es" in {SRC_LANG, TRG_LANG}:
max_size = 123597 # Biological rows
print(f"Limiting size to {max_size}")
df = df[:max_size]
elif domain == "health" and "pt" in {SRC_LANG, TRG_LANG}:
max_size = 120301 # Biological rows
print(f"Limiting size to {max_size}")
df = df[:max_size]
# Stats
total_doctypes = df['doctype'].value_counts()
removed = total_old - len(df)
print(f"Stats for: {fname} **************************")
print(f"\t- Documents: {len(set(df['docid']))}")
print(f"\t- Sentences: {len(df)}")
print("\t\t- Removed: {} ({:.2f}%)".format(removed, removed / total_old * 100))
print("\t- Titles/Abstracts: {}/{} ({:.2f}%)".format(total_doctypes['title'], total_doctypes['text'],
total_doctypes['title'] / total_doctypes['text'] * 100))
# Save data
df.to_csv(os.path.join(RAW_PATH, fname), index=False)
print("File saved!")
print("")
print("Done!")
| 35.282609 | 117 | 0.594886 | import os